Technological Transformation: A New Role For Human, Machines And Management: TT-2020 3030644294, 9783030644291

This proceedings book contains the 21 articles that aroused the greatest interest among experts from academia, industry and science.


English Pages 254 [262] Year 2020


Table of contents:
Preface
Contents
Mathematical Methods for Implementing Homeostatic Control in Digital Production Systems
Abstract
1 Introduction
2 Materials and Methods
2.1 Fractal Methods for Security Assessment of Digital Manufacturing
3 Results
4 Discussion
Acknowledgment
References
Bioinspired Intrusion Detection in ITC Infrastructures
Abstract
1 Introduction
2 Methods
2.1 A Generalized Model for Intrusion Detection
2.2 Global and Local Alignments
2.3 Smith-Waterman Algorithm
2.4 Needleman-Wunsch Algorithm
3 Results
3.1 Detection of Polymorphic Intrusions
3.2 Anomaly Detection
3.3 The Experimental Study
4 Discussion
5 Conclusion
Acknowledgments
References
Algorithm for Optimizing Urban Routes in Traffic Congestion
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
References
The Impact of Digitalization on Production Structures and Management in Industrial Enterprises and Complexes
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
References
Uncertainty Decision Making Model: The Evolution of Artificial Intelligence and Staff Reduction
Abstract
1 Introduction
2 Milestones of AI Development and Its Integration into Economic Processes
3 Decision-Making Model in Conditions of Uncertainty When Replacing Employees with Intelligent Systems
4 Conclusion
Acknowledgements
References
Smart Containers Technology Evaluation in an Enterprise Architecture Context (Business Case for Container Liner Shipping Industry)
Abstract
1 Introduction
2 Methods
3 Results
4 Discussion and Conclusions
Acknowledgement
References
The Application of Machine Learning to One-Dimensional Problems of Mechanics of a Solid Deformable Body
Abstract
1 Introduction
1.1 The Choice of Types of Algorithms
1.2 The Choice of Algorithms
1.3 Input Data Generation
2 Results Validation
2.1 Restoration of Bending Moment
2.2 Restoration of Deflections
3 Conclusions
Acknowledgment
References
Evaluation Algorithm of Probabilistic Transformation of a Random Variable
Abstract
1 Introduction
2 Algorithm
3 Application Area
4 Conclusion
Acknowledgements
References
Digital Twin of Continuously Variable Transmission for Predictive Modeling of Dynamics and Performance Optimization
Abstract
1 Introduction
2 Materials and Methods
2.1 The Design of Chain CVT
2.2 Mathematical Modeling of Continuously Variable Transmission
2.3 Numerical Methods for Solving the Problem of Continuously Variable Transmission Dynamics
2.4 Software Package for Predictive Modeling of CVT Dynamics
3 Results and Discussion
4 Conclusions
Acknowledgments
References
Experience-Driven, Method-Agnostic Algorithm for Controlling Numerical Integration of ODE Systems
Abstract
1 Introduction
2 Materials and Methods
2.1 Controlling Step Size
2.2 Controlling Discrete Parameter of Method
3 Results and Discussion
4 Conclusions
Acknowledgments
References
Functional Visualization of the Results of Predictive Modeling in the Tasks of Aerodynamics
Abstract
1 Introduction
2 Materials and Methods
2.1 Requirements to Format
2.2 Format Description
2.3 Procedure of Resampling Original Data Files
2.4 Auxiliary Techniques
2.5 Octree Structure Formation
2.6 Completion Defining Fields at Octree Leaves
2.7 General Approach of Fields Evaluation in Non-leaves Blocks
3 Results
4 Discussion
5 Conclusions
Acknowledgments
References
Digitalization in Logistics for Organizing an Automated Delivery Zone. Russian Post Case
Abstract
1 Introduction
2 Methods and Literature Review
3 Results
3.1 Issues and Challenges of the Current Situation of FSUE Russian Post
3.2 Algorithm for the Automated Issuing Technology Implementation
4 Conclusion
Acknowledgments
References
Algorithm for Evaluating the Promotions Effectiveness Based on Time Series Analysis
Abstract
1 Introduction
2 Mathematical Models Promotions
3 Systematic Component Modeling of Time Series
4 Promotion Efficiency Algorithm
5 Practical Approach of Application
6 Conclusion
Acknowledgements
References
Analysis of Technological Innovations in Supply Chain Management and Their Application in Modern Companies
Abstract
1 Introduction
2 Methodology
3 Results
4 Conclusion
Acknowledgement
References
Digital Platforms for the Logistics Sector of the Russian Federation
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgement
References
Digital Logistics Transformation: Implementing the Internet of Things (IoT)
Abstract
1 Introduction
2 Existing Literature
3 Materials and Methods
4 Results
5 Conclusion
References
The Challenges of the Logistics Industry in the Era of Digital Transformation
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Conclusion
Acknowledgment
References
Optimal Production Manufacturing Based on Intelligent Control System
Abstract
1 Introduction
2 Problem Statement
3 Method Notation
3.1 Kripke Structure
3.2 Process Identification
3.3 Neural Network Regression
3.4 Pareto Front
4 Experimental Results
4.1 Kripke Structure
4.2 Process Identification
4.3 Neural Network Regression
4.4 Pareto Front
5 Conclusion
References
Intelligent Cyber Physical Systems for Industrial Oil Refinery
Abstract
1 Introduction
1.1 Composite of Genetic Algorithm and BP Neural Network
1.2 The Main Content of the Algorithm
1.3 Composite of Genetic Algorithm and BP Neural Network
1.4 Practical Application
2 Conclusion
References
Modal Logic of Digital Transformation: Relentless Pace to “Exo-Intellectual” Platform
Abstract
1 Introduction
2 Materials and Method
3 Results
4 Discussion
5 Conclusions
Acknowledgment
References
Technology Predictions for Arctic Hydrocarbon Development: Digitalization Potential
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
Expression of Gratitude
References
Author Index


Lecture Notes in Networks and Systems 157

Hanno Schaumburg, Vadim Korablev, Ungvari Laszlo (Editors)

Technological Transformation: A New Role For Human, Machines And Management TT-2020

Lecture Notes in Networks and Systems Volume 157

Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Advisory Editors Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas— UNICAMP, São Paulo, Brazil Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Turkey Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus Imre J. Rudas, Óbuda University, Budapest, Hungary Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

More information about this series at http://www.springer.com/series/15179

Hanno Schaumburg · Vadim Korablev · Ungvari Laszlo

Editors

Technological Transformation: A New Role For Human, Machines And Management TT-2020

Editors Hanno Schaumburg Hamburg University of Technology Hamburg, Hamburg, Germany

Vadim Korablev Peter the Great Saint-Petersburg Polytechnic University St. Petersburg, Russia

Ungvari Laszlo Deutsch Kasachische Universitat Almaty, Kazakhstan

ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-3-030-64429-1 ISBN 978-3-030-64430-7 (eBook) https://doi.org/10.1007/978-3-030-64430-7 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The V Scientific International Conference "Technological Transformation: A New Role for Human, Machines and Management" (TT-2020) was held in St. Petersburg at the Peter the Great St. Petersburg Polytechnic University. The conference aimed to discuss the results of system studies on the key drivers and consequences of wide digitalization in various sectors of the economy and industry, as well as in the service sector. Ten thematic topics were presented at the conference:

1. New industrial base
2. Virtual engineering
3. Diffusion of technology (industrial robots, artificial intelligence) in industrial systems and services sectors
4. Digital infrastructure and new trends in development of ICT (Big Data, cybersecurity)
5. Supercomputers and their use in applied research
6. Cyberphysical interface and informatics of cognitive processes
7. Convergence, harmonization and integration of artificial and natural intelligence
8. Changing social and economic landscape and new management systems (electronic platforms and parallel labor market, economic inequality, economic structure)
9. Digital technologies in logistics
10. Cyberphysical systems and artificial intelligence

The proceedings contain 21 articles that aroused the greatest interest among the conference participants and represent current trends in the structural transformation of industrial and economic systems on a new technological base. The conference was highly appreciated by its participants. We are deeply grateful to the program committee of the conference and to all participants who gave interesting and informative presentations.


Contents

Mathematical Methods for Implementing Homeostatic Control in Digital Production Systems . . . 1
Evgeny Pavlenko and Maria Poltavtseva

Bioinspired Intrusion Detection in ITC Infrastructures . . . 10
Sangwon Lim, Maxim Kalinin, and Peter Zegzhda

Algorithm for Optimizing Urban Routes in Traffic Congestion . . . 23
Anton Ignatov, Vladimir Baskov, Timur Ablyazov, Andrei Aleksandrov, and Natal'ya Zhilkina

The Impact of Digitalization on Production Structures and Management in Industrial Enterprises and Complexes . . . 39
Viktor Dubolazov, Zoia Simakova, Olga Leicht, and Andrey Shchelkonogov

Uncertainty Decision Making Model: The Evolution of Artificial Intelligence and Staff Reduction . . . 48
Roman Zhak, Dmitrii Kolesov, Joao Leitão, and Bakytbek Akaev

Smart Containers Technology Evaluation in an Enterprise Architecture Context (Business Case for Container Liner Shipping Industry) . . . 57
Igor Ilin, Svetlana Maydanova, Anastasia Levina, Carlos Jahn, Jürgen Weigell, and Morten Brix Jensen

The Application of Machine Learning to One-Dimensional Problems of Mechanics of a Solid Deformable Body . . . 67
Viacheslav Reshetnikov and Andrea Tick

Evaluation Algorithm of Probabilistic Transformation of a Random Variable . . . 77
Askar Akaev, Tessaleno Devezas, Laszlo Ungvari, and Alexander Petryakov


Digital Twin of Continuously Variable Transmission for Predictive Modeling of Dynamics and Performance Optimization . . . 89
Stepan Orlov and Lidia Burkovski

Experience-Driven, Method-Agnostic Algorithm for Controlling Numerical Integration of ODE Systems . . . 108
Stepan Orlov and Lidia Burkovski

Functional Visualization of the Results of Predictive Modeling in the Tasks of Aerodynamics . . . 122
Alexey Kuzin, Alexey Zhuravlev, Zoltan Zeman, and József Tick

Digitalization in Logistics for Organizing an Automated Delivery Zone. Russian Post Case . . . 143
Temirgaliev Egor, Dubolazov Victor, Borremans Alexandra, and Overes Ed

Algorithm for Evaluating the Promotions Effectiveness Based on Time Series Analysis . . . 157
Vadim Abbakumov, Alena Kuryleva, Aleksander Mugayskikh, Reiff-Stephan Jorg, and Zoltan Zeman

Analysis of Technological Innovations in Supply Chain Management and Their Application in Modern Companies . . . 168
Alissa Dubgorn, Irina Zaychenko, Aleksandr Alekseev, Klara Paardenkooper, and Manfred Esser

Digital Platforms for the Logistics Sector of the Russian Federation . . . 179
Igor Ilin, Svetlana Maydanova, Aleksandr Lepekhin, Carlos Jahn, Jürgen Weigell, and Vadim Korablev

Digital Logistics Transformation: Implementing the Internet of Things (IoT) . . . 189
Irina Zaychenko, Anna Smirnova, Yevheniia Shytova, Botagoz Mutalieva, and Nikita Pimenov

The Challenges of the Logistics Industry in the Era of Digital Transformation . . . 201
Dmitry Egorov, Anastasia Levina, Sofia Kalyazina, Peter Schuur, and Berry Gerrits

Optimal Production Manufacturing Based on Intelligent Control System . . . 210
Hanafi Mohamed Yassine and Viacheslav P. Shkodyrev

Intelligent Cyber Physical Systems for Industrial Oil Refinery . . . 221
Wenjia Ma and Viacheslav Shkodyrev


Modal Logic of Digital Transformation: Relentless Pace to "Exo-Intellectual" Platform . . . 231
Vladimir Zaborovskij and Vladimir Polyanskiy

Technology Predictions for Arctic Hydrocarbon Development: Digitalization Potential . . . 241
Nikita Tretyakov, Alexey Cherepovitsyn, and Nadejda Komendantova

Author Index . . . 253

Mathematical Methods for Implementing Homeostatic Control in Digital Production Systems

Evgeny Pavlenko and Maria Poltavtseva

Peter the Great St. Petersburg Polytechnic University, St. Petersburg, Russia
[email protected]

Abstract. Digitalization, the development of sensor and cloud technologies, and the growing popularity of the Internet of Things concept have transformed the entire technological infrastructure. Modern industrial systems include a large number of intelligent devices that implement processes autonomously from humans. Changes in the control of manufacturing systems and in the character and type of attacks have led to new requirements for ensuring information security in manufacturing. Since it is not possible to describe the full range of attacks on digital manufacturing systems, the paper suggests a new approach to modeling and assessing the security of such systems that is invariant to the type of attack. The authors consider an approach to analyzing the state of modern manufacturing systems and external influences on the basis of self-similarity, as well as mathematical estimates of their security. Cyber resilience and homeostatic control of digital manufacturing systems are proposed as the main approaches to ensuring information security.

Keywords: Digital production systems · Multifractal spectrum · Homeostatic control · Mathematical methods · Self-similarity · Cyber resilience

1 Introduction

Digitalization of technology industries, driven by the development of the Internet of Things and of sensor and cloud technologies, has led to great changes in the entire technological infrastructure. Many production and business processes are now implemented by intelligent systems which are not merely information systems but cyber-physical systems (CPS), implementing physical processes through information processes [1–3]. The participants in CPS information processes are smart devices that are able to communicate with each other and with the environment, as well as to change their state in accordance with the environmental parameters [4]. The ample opportunities for automation of technological processes have become a trigger for the development of digital manufacturing, while also opening up opportunities for cyberattacks [5, 6]. Statistics show that in 2017 the majority of cyberattacks on CPS were directed at critical infrastructure sectors, such as energy, water supply


systems, and transport facilities. At the same time, the source [7] shows an increasing number of security incidents in the second half of 2017 compared to the first half of the year. The range of attacks on CPS is extremely large, and it is not possible to describe all of them, due to the extremely large number of possible entry points for an attacker and to zero-day vulnerabilities. This complicates the challenge of ensuring the security of digital production systems. In addition, the use of security methods traditional for information systems and client-server networks will not be effective for CPS, as was shown earlier by the authors in [8]. This research expands the authors' scientific groundwork dedicated to the development of a new approach to securing CPS. The CPS security assessment is based on self-similarity assessment and control: since the CPS processes of digital manufacturing are periodic and practically unaffected by human influence, a violation of the self-similarity of their functioning indicates an impact on their operation. The authors propose to call the preservation of the self-similarity of the processes implemented by a CPS the resilience of CPS control under targeted impacts. The property of a system to maintain its operation in a given range of input and output characteristics under targeted external information impact is called cyber resilience [9]. In [10], the authors proposed a homeostatic technology for CPS security control; it allows multi-level control of digital production by combining distributed and centralized hierarchical management, expanding the number of control circuits and the range of control factors. This approach was inspired by [11–13]. To assess the state of CPS security, this approach involves the use of fractal indicators that take into account both information and functional components and is aimed at controlling the self-similarity of the system. The self-similarity of the system allows a balance to be maintained in the compensation of external factors, which is the essence of homeostatic control. At the same time, the proposed self-similarity estimates take into account both long-term data dependences, manifested as periodicity over large intervals, and short-term dependences observed at a smaller scale.

2 Materials and Methods

2.1 Fractal Methods for Security Assessment of Digital Manufacturing

Technological processes of CPS in digital manufacturing can be considered stationary, that is, processes whose statistical properties do not change over time [4]. The invariance of the characteristics suggests that the process under study has the property of fractality, or self-similarity. The importance of evaluating the self-similarity of digital production is that any violation of the correctness of even one process will be reflected in the data flows, since the functioning of digital production is controlled by the exchange of information between its components. Therefore, it is proposed to detect cyber threats by analyzing the self-similarity of time series generated by the parameters of CPS components.


Self-similarity Assessment Based on the Calculation of the Hurst Exponent. The Hurst exponent H determines the degree of self-similarity of the process. The closer this parameter is to one, the more clearly the fractal properties are manifested [14], while the equality H = 0.5 indicates the absence of self-similarity. According to [15], the statistics of the normalized range (R/S statistics) can be used to calculate the value of the Hurst exponent. To do this, we need to calculate the range R of the series, defined as the difference between the maximum and the minimum of the accumulated deviations from the mean, and the standard deviation S of the series:

R = \max_{1 \le u \le N} \sum_{i=1}^{u} (x_i - \bar{X}) - \min_{1 \le u \le N} \sum_{i=1}^{u} (x_i - \bar{X}),   (1)

S = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{X})^2},   (2)

\bar{X} = \frac{1}{N} \sum_{i=1}^{N} x_i,   (3)

where \bar{X} is the arithmetic mean of a series of observations over N periods. Then the Hurst exponent H is calculated as follows:

H = \frac{\log(R/S)}{\log(aN)},   (4)

where a is a specified constant, a > 0.

Assessment of Self-similarity on the Basis of Multifractal Indicators Calculation. In [16–18], fractal methods are proposed for security assessment and for monitoring the stability of CPS functioning, since the technological processes running in a CPS possess the property of self-similarity, whose violation can indicate deviations and anomalies in the system. The following characteristics of the multifractal spectrum depicted in Fig. 1 were chosen for detecting anomalies in CPS functioning [19, 20]:

• the width of the spectrum, calculated as width = \alpha_{max} - \alpha_{min};
• the value of the Hölder exponent at the maximum of the multifractal spectrum, \alpha_0;
• the width of the right "branch", calculated as width_{right} = \alpha_{max} - \alpha_0;
• the width of the left "branch", calculated as width_{left} = \alpha_0 - \alpha_{min};
• the height of the left "branch", calculated as high_{left} = f(\alpha_0) - f(\alpha_{min});
• the height of the right "branch", calculated as high_{right} = f(\alpha_{max}) - f(\alpha_0).


Fig. 1. Multifractal Legendre spectrum
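To make Eqs. (1)–(4) concrete, here is a minimal Python sketch of the R/S estimate of the Hurst exponent. The choice a = 1 and the use of a single window are simplifying assumptions, since the paper does not fix these parameters.

    import numpy as np

    def hurst_rs(x, a=1.0):
        # R/S estimate of the Hurst exponent following Eqs. (1)-(4);
        # a is the constant from Eq. (4), set to 1 here as an assumption.
        x = np.asarray(x, dtype=float)
        n = len(x)
        dev = np.cumsum(x - x.mean())          # partial sums of deviations, Eqs. (1), (3)
        r = dev.max() - dev.min()              # range R, Eq. (1)
        s = x.std()                            # standard deviation S, Eq. (2)
        return np.log(r / s) / np.log(a * n)   # Hurst exponent H, Eq. (4)

    # White noise has no self-similarity, so the estimate should be near 0.5.
    rng = np.random.default_rng(0)
    print(hurst_rs(rng.normal(size=4096)))

For a strongly self-similar series the estimate approaches one; a value near 0.5 signals the absence of self-similarity, as stated above.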

3 Results

To study the effectiveness of applying the security and resilience indicators to assess the CPS state under targeted external destructive impact, a pilot plant organized at the cyber security research centre of the Singapore University of Technology and Design was used [21]. The test bench implements a wastewater treatment process, which can be divided into six stages: collection and preparation of incoming wastewater; pre-treatment, during which water quality is assessed; ultrafiltration and backwash; dechlorination; reverse osmosis; and collection of treated water, backwash and treatment (Fig. 2).

Fig. 2. The architecture of CPS


Each subprocess is associated with a given set of devices. The architecture of the cyber-physical system includes:

• sensors (flow meters, pressure meters, level transmitters, analyzers of water chemical properties, etc.);
• actuators and other devices (motorized valves, pumps, dechlorinators, etc.);
• programmable logic controllers, which are responsible for controlling the actuators;
• network devices;
• PCs and workstations intended for processing and storing data, monitoring and visualizing the system state.

Attacking impacts can be directed both at separate components of one subprocess and at components of several subprocesses. The intensity of external influences is determined by the number and arrangement of elements whose compromise leads to the successful implementation of the attack, and can be ranked as follows [21]:

• impact on a single component in a single processing stage (Single Stage Single Point, SSSP);
• impact on multiple components in a single processing stage (Single Stage Multi Point, SSMP);
• an attack aimed at several stages, each of which involves the compromise of one component (Multi Stage Single Point, MSSP);
• an attack aimed at several stages, each of which involves the compromise of several components (Multi Stage Multi Point, MSMP).

The analyzed data for each system process is a multivariate time series formed by the readings of the sensors involved in the current process. To detect cyberthreats, the Hurst exponent can be used: the dynamics of its change make it possible to monitor violations of the process self-similarity (Fig. 3).
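A minimal sketch of such monitoring, reusing the hurst_rs estimator from Sect. 2.1; the window length and deviation threshold below are illustrative values, not the paper's actual parameters.

    def hurst_anomalies(series, window=256, threshold=0.15):
        # Slide a window over one sensor's time series and flag windows
        # whose Hurst estimate deviates from the series-wide baseline.
        baseline = hurst_rs(series)
        flags = []
        for start in range(0, len(series) - window + 1, window):
            h = hurst_rs(series[start:start + window])
            if abs(h - baseline) > threshold:
                flags.append((start, h))   # candidate attack interval
        return flags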

Fig. 3. Attack detection on the FIT-401 by analyzing the dynamics of Hurst exponent


Analysis of the multifractal spectrum allows detecting changes in sensor indicators. Figure 4 shows the changes of the Legendre multifractal spectrum width, and of the height and width of the spectrum's left "branch", for the flow transmitter FIT-401 located in the dechlorination block. Figure 5 shows the changes of other multifractal spectrum characteristics (the Hölder exponent at the spectrum maximum, and the height and width of the right "branch") for the level transmitter LIT-301. Outliers from the median values indicate attacks on a particular component of the cyberphysical system. In particular, the changes of the FIT-401 values led to the shutdown of the pump that directs dechlorinated water to reverse osmosis; falsification of the LIT-301 values led to the emptying of the water tank and to tank damage during the first attack, and to water tank overflow during the second attack.
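Given a spectrum sampled as arrays alpha and f_alpha, the six characteristics from Sect. 2.1 reduce to simple arithmetic; a small sketch (the argument and key names are illustrative):

    import numpy as np

    def spectrum_features(alpha, f_alpha):
        # Characteristics of the multifractal Legendre spectrum, Sect. 2.1.
        alpha, f_alpha = np.asarray(alpha), np.asarray(f_alpha)
        i0 = int(np.argmax(f_alpha))              # index of the spectrum maximum
        a0, amin, amax = alpha[i0], alpha.min(), alpha.max()
        return {
            "width": amax - amin,
            "alpha0": a0,
            "width_right": amax - a0,
            "width_left": a0 - amin,
            "high_left": f_alpha[i0] - f_alpha[np.argmin(alpha)],
            "high_right": f_alpha[np.argmax(alpha)] - f_alpha[i0],
        }

Outliers of these features from their median values, as in Figs. 4 and 5, then serve as attack indicators.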

Fig. 4. Attack detection on the FIT-401 using multifractal spectrum characteristics


Fig. 5. Attack detection on the LIT-301 using multifractal spectrum characteristics

4 Discussion

In this paper, the authors propose a mathematical toolset for estimating the security of digital production systems. Analysis of the features and of the invariance of the technological processes that occur in digital production systems allows fractal methods to be used for detecting cyberthreats and destabilization of a cyber-physical system. The Hurst exponent and the characteristics of the Legendre multifractal spectrum were chosen as indicators for estimating cyber-physical system stability. The experimental results demonstrate the effectiveness of the proposed methods. Using the self-similarity property of technological processes for detecting attacks is a new approach that allows cyberthreats to be detected in complex digital production systems.

Acknowledgment. The work was performed as part of the State assignment for basic research (topic code 0784-2020-0026).

References

1. Zegzhda, P.D., Poltavtseva, M.A., Lavrova, D.S.: Cyber-physic system systematization and security evaluation. Probl. Inf. Secur. Comput. Syst. (Problemy informatsionnoy bezopasnosti. Kompyuternye systemy) (2017)
2. Sadiku, M.N.O., Wang, Y., Cui, S., Musa, S.M.: Cyber-physical systems: a literature review. Eur. Sci. J. ESJ (2017). https://doi.org/10.19044/esj.2017.v13n36p52
3. Khaitan, S.K., McCalley, J.D.: Design techniques and applications of cyberphysical systems: a survey. IEEE Syst. J. (2015). https://doi.org/10.1109/JSYST.2014.2322503


4. Lavrova, D.S.: An approach to developing the SIEM system for the internet of things. Autom. Control Comput. Sci. 50, 673–681 (2016). https://doi.org/10.3103/S0146411616080125
5. Lavrova, D., Zegzhda, D., Yarmak, A.: Using GRU neural network for cyber-attack detection in automated process control systems. In: 2019 IEEE International Black Sea Conference on Communications and Networking, BlackSeaCom 2019 (2019). https://doi.org/10.1109/BlackSeaCom.2019.8812818
6. Kalinin, M.O., Lavrova, D.S., Yarmak, A.V.: Detection of threats in cyberphysical systems based on deep learning methods using multidimensional time series. Autom. Control Comput. Sci. 52, 912–917 (2018). https://doi.org/10.3103/S0146411618080151
7. Kaspersky Lab ICS CERT: Threat landscape for industrial automation systems in H2 2017. https://ics-cert.kaspersky.ru/reports/2018/03/26/threat-landscape-for-industrial-automation-systems-in-h2-2017. Accessed 05 Mar 2020
8. Zegzhda, D.P., Pavlenko, E.Y.: Cyber-physical system homeostatic security management. Autom. Control Comput. Sci. 51, 805–816 (2017). https://doi.org/10.3103/S0146411617080260
9. Zegzhda, D.P.: Sustainability as a criterion for information security in cyber-physical systems. Autom. Control Comput. Sci. 50, 813–819 (2016). https://doi.org/10.3103/S0146411616080253
10. Zegzhda, D.P., Pavlenko, E.Y.: Homeostatic strategy of security of cyberphysical systems. Probl. Inf. Secur. Comput. Syst. (Problemy informatsionnoy bezopasnosti. Kompyuternye systemy) (2017)
11. Gerostathopoulos, I., Skoda, D., Plasil, F., Bures, T., Knauss, A.: Architectural homeostasis in self-adaptive software-intensive cyber-physical systems. In: Lecture Notes in Computer Science (2016). https://doi.org/10.1007/978-3-319-48992-6_8
12. Gerostathopoulos, I., Bures, T., Hnetynka, P., Keznikl, J., Kit, M., Plasil, F., Plouzeau, N.: Self-adaptation in software-intensive cyber-physical systems: from system goals to architecture configurations. J. Syst. Softw. (2016). https://doi.org/10.1016/j.jss.2016.02.028
13. Tyrrell, A.M., Timmis, J., Greensted, A.J., Owens, N.D.: Evolvable hardware, a fundamental technology for homeostasis. In: Proceedings of the 2007 IEEE Workshop on Evolvable and Adaptive Hardware, WEAH 2007 (2007). https://doi.org/10.1109/WEAH.2007.361711
14. Trenogin, N.G., Sokolov, D.E.: Fractal properties of network traffic in the client-server information system. Bull. Res. Inst. Siber. State Univ. Telecommun. Inf. (Vestnik NII Sibirskogo gosudarstvennogo universiteta telekommunikatsiy i informatiki) (2003)
15. Petrov, V.V., Platov, V.V.: The study of the self-similar structure of the wireless network. Radio Eng. Noteb. (Radiotekhnicheskiye tetradi) (2004)
16. Zegzhda, D.P., Pavlenko, E.Y.: Security indicators for digital manufacturing. Probl. Inf. Secur. Comput. Syst. (Problemy informatsionnoy bezopasnosti. Kompyuternye systemy) (2018)
17. Zegzhda, D.P., Pavlenko, E.Y.: Digital manufacturing security indicators. Autom. Control Comput. Sci. 52, 1150–1159 (2018). https://doi.org/10.3103/S0146411618080333
18. Lavrova, D., Poltavtseva, M., Shtyrkina, A.: Security analysis of cyber-physical systems network infrastructure. In: Proceedings - 2018 IEEE Industrial Cyber-Physical Systems, ICPS 2018 (2018). https://doi.org/10.1109/ICPHYS.2018.8390812
19. Zegzhda, P.D., Lavrova, D.S., Shtyrkina, A.A.: Multifractal analysis of internet backbone traffic for detecting denial of service attacks. Autom. Control Comput. Sci. 52, 936–944 (2018). https://doi.org/10.3103/S014641161808028X


20. Zegzhda, D., Lavrova, D., Poltavtseva, M.: Multifractal security analysis of cyberphysical systems. Nonlinear Phenom. Complex Syst. (2019)
21. Goh, J., Adepu, S., Junejo, K.N., Mathur, A.: A dataset to support research in the design of secure water treatment systems. In: Lecture Notes in Computer Science (2017). https://doi.org/10.1007/978-3-319-71368-7_8

Bioinspired Intrusion Detection in ITC Infrastructures

Sangwon Lim (1), Maxim Kalinin (2), and Peter Zegzhda (2)

(1) LG Electronics Inc., Seoul, Korea, [email protected]
(2) Peter the Great St. Petersburg Polytechnic University, St. Petersburg, Russia, [email protected]

Abstract. The paper discusses an application of bioinspired sequence alignment algorithms to the detection of security intrusions. The Needleman-Wunsch and Smith-Waterman algorithms are reviewed and applied to detect regions of similarity in operational chains; our work proposes their use to protect new digital infrastructures. With the first algorithm it is possible to detect polymorphic intrusions; the second one is applicable to anomaly detection. An experimental study of the Smith-Waterman algorithm applied to CVE vulnerability detection shows higher accuracy than a traditional technique.

Keywords: Anomaly detection · Bioinspired security · Intrusion detection · Cybersecurity · Needleman-Wunsch · Sequence alignment · Smith-Waterman · ITC

1 Introduction

The extraordinary progress of infotelecommunication (ITC) technologies enriches the intruder's potential: he or she can now actively adapt attack behavior to the flexible algorithms of intrusion detection and other protection tools [1]. The key obstacle is that the number of attacks grows exponentially, which threatens the functional stability and adaptability of modern cyberphysical systems in the currently observed digital world. These facts oblige us to improve the methods and algorithms applied in intrusion detection systems (IDS), the software or hardware tools aimed at detecting vulnerability exploitation, malicious activities or security policy violations such as unauthorized access, integrity breaches, or denial of service [2]. An IDS usually solves the task of matching two sequences, the chain of monitored acts and a pattern, to determine their equality; to detect a mismatch, a sequence of system calls or a sequence of network packets is compared to the attack signature [3]. In bioinformatics, the same task has already been solved by nature, which provides gene sequence alignment: two chains of protein codes are compared with respect to their similarity, rather than exact equality, which increases the effectiveness of matching [4–6]. The resemblance of the two tasks makes it urgent to research the application of bioinspired algorithms to cyber security.


Common IDSs are divided into two classes: signature-based and anomaly-based [7]. Both have advantages and disadvantages. One of the problems of signature-based IDSs is attack polymorphism [8, 9]: if an intrusion is slightly changed in its chain of acts, it can evade the IDS, and a new signature must be composed to detect it. One of the approaches for anomaly-based IDSs is to create a profile of the normal behavior of the protected object and then compare the observed behavior with this profile [10]. The pattern of normal behavior can be built from sequences of system calls, network protocols, or user activities. Sequence alignment algorithms can improve this method by reducing the volume of the normal behavior database. This paper shows how sequence alignment algorithms can be used to respond to this challenge. Section 2 reviews a generalized model for intrusion detection and suggests local and global alignment of sequences as a novel approach to comparing a chain of system acts with an intrusion pattern. Section 3 describes possible applications of the Needleman-Wunsch and Smith-Waterman algorithms, and the experimental results obtained for the Smith-Waterman algorithm applied to CVE vulnerability detection. Related works are listed and analyzed in Sect. 4. Section 5 concludes the work and outlines further research.

2 Methods

2.1 A Generalized Model for Intrusion Detection

Let us define any given ITC system System as a set of entities E that interact with each other. Whether an interaction is permitted or denied depends on the security attributes SA, and therefore the overall attributive model of the system is System = \langle E, SA \rangle. Any interaction in the system is a process of acting of two or more system entities with the purpose of information exchange. In any information interaction there are entities that initiate it, e.g. processes that initiate the reading of files; these are the subjects S of access operations. The entities that cannot initiate interaction are called the objects O. Thus, the set of system entities is E = S \cup O. In the given system, interaction of the entities is implemented by executing a command set C. A command set can be presented as a chain: (Condition_1) \to (Inter_1, Inter_2, \ldots, Inter_n); (Condition_2) \to (Inter_m); \ldots, where Inter_1, Inter_2, \ldots, Inter_n, Inter_m are elements of the possible interactions Inter, and Condition_1, Condition_2, \ldots are the conditions for interaction (e.g., "if User_1 has permission to read the file Document…"). Let AC denote an access control function AC: (S, O, Inter) \to \{0, 1\}. It checks whether the subject S can perform an interaction Inter with the object O. For example, in discretionary access control, the value of AC is 1 if and only if Inter is present in the access matrix cell corresponding to S and O [11].


The condition in a command is either unity or a conjunction of access control functions:

Condition = 1, or Condition = AC(S_1, O_1, Inter_1) \land AC(S_2, O_2, Inter_2) \land \ldots \land AC(S_n, O_n, Inter_n).   (1)
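A minimal Python sketch of this part of the model; the names Interaction and condition below are illustrative, not part of the paper.

    from typing import Callable, NamedTuple, List

    class Interaction(NamedTuple):
        subject: str   # S
        obj: str       # O
        inter: str     # Inter, e.g. "read" or "write"

    AC = Callable[[str, str, str], bool]   # (S, O, Inter) -> {0, 1}

    def condition(ac: AC, checks: List[Interaction]) -> bool:
        # Eq. (1): the empty conjunction corresponds to the unity case.
        return all(ac(c.subject, c.obj, c.inter) for c in checks)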

State denotes a system state function that returns the tuple \langle E_t, SA_t \rangle, where E_t and SA_t are the set of system entities and the set of security attributes fixed at time t: State: T \to \langle E, SA \rangle, where T is a set of time moments with a given discrete frequency. IsSecure denotes the system security function, IsSecure: \langle E, SA \rangle \to \{0, 1\}, which equals unity if and only if the system is secure. The system state changes as a result of command execution. Knowing that during the period [t_1, t_n] the commands C_1, C_2, \ldots, C_{n-1} have been executed in the following sequence, together with the initial state at time t_1, defines all states of the system for this time interval as a sequence:

State(t_1) \xrightarrow{C_1} State(t_2) \xrightarrow{C_2} State(t_3) \xrightarrow{C_3} \ldots \xrightarrow{C_{n-1}} State(t_n).   (2)

An unsecure sequence of commands is defined as a sequence of commands C_1, C_2, \ldots, C_{n-1} executed on the time interval [t_1, t_n] such that the following condition is true:

\left( \bigwedge_{i=1}^{n-1} IsSecure(State(t_i)) = 1 \right) \land \left( IsSecure(State(t_n)) = 0 \right).   (3)

An intrusion on the system is a destructive influence that can be represented as a sequence of commands bringing the system to a state in which the IsSecure function equals 0. At the same time, removal of the first command of the attack sequence keeps the IsSecure function at unity. So, the following statement is true: an intrusion is always a member of an unsecure sequence of commands.

Indeed, any cyberattack fits the definition of an unsecure command sequence; the opposite statement is false.

Global and Local Alignments

In bioinformatics, a sequences alignment is a way of arranging the sequences of DNA, RNA, or protein code chains to identify regions of similarity that may be a consequence of functional, structural, or evolutionary relationships between the compared sequences [4]. Therefore, the task of matching the actual command set and the intrusion pattern is very close to the task of sequence alignment in bioinformatics. And this idea can be adopted to utilize the sequence alignment algorithms to intrusion detection.

Bioinspired Intrusion Detection in ITC Infrastructures

13

There are local [12] and global [4] alignments and their results are different. Let’s explain their work on visual representation. There are two sequences of commands open, read, open, read, write, execute, connect, execute, execute, write, close, write, close

and open, read, write, execute, execute, execute, write, write, close.

By the application of local and global alignments, there are the following results (‘–’signs a gap): For local alignment: open read open read write execute connect execute execute write close write close open read write execute execute execute write write close -

and for global alignment: open read open read write execute connect execute execute write close write close open read write execute execute execute write write close.

Global alignment traditionally applies the Needleman-Wunsch algorithm [4], stretches the smaller sequence along the bigger one. Local alignment utilizes the SmithWaterman algorithm [12] and localizes the smaller chain on specified region of bigger one. Both algorithms can be used on sequences of any length, but the NeedlemanWunsch algorithm is usually applied if the sequences have approximately equal length, and the Smith-Waterman algorithm is used if one sequence is considerably bigger than the second one. 2.3

Smith-Waterman Algorithm

Smith-Waterman algorithm [12–14] input contains two sequences a ¼ Ca1 ; Ca2 . . .Can and b ¼ Cb1 ; Cb2 . . .Cbm , and a similarity function x : ðCa [ ; Cb [ ; . . .Þ ! Z, where Ca and Cb are the sets of commands that correspond to the sequences a and b, respectively. The aim of this function is to define a degree of similarity of two commands if they are on the same positions in different sequences. For example, there is an attack to edit the configuration file. The violation sequence is to gain an access right to open this file, to open and write or delete information in it. For any unimportant commands, function x returns zero. On the opposite side, x for important commands has to be positive in case of the similar arguments and negative in opposite case. The important commands can be classified by the rank of danger consequences for the

14

S. Lim et al.

system. For example, for command that deletes all entities in the system, x can be set to 10 and for command that change any entity, x can be set to 2. Similarity of two sequences is determined by R: Rða; bÞ ¼

n1 X

xðCai ; Cbi Þ:

ð4Þ

i¼0

The first stage of this algorithm is filling the similarity matrix H of size m þ 1 on n þ 1, where m and n are lengths of corresponding sequences. The matrix H is built in the following way: Hði; 0Þ ¼ 0; 08  i\m; Hð0; jÞ ¼ 0; 0  j\n; 0 > > < Hði  1; j  1Þ þ xðCai ; Cbi Þ ; Hði; jÞ ¼ max Hði  1; jÞ þ xðCai ; Þ > > : Hði; j  1Þ þ xð; Cbi Þ 1  i  m; 1  j  m:

ð5Þ

As the matrix H has been built, the second stage of the algorithm is performed. To obtain the optimal local alignment, the search starts with the largest value in the matrix cell ði; jÞ. This cell is marked as a current cell. The next current cell is the maximal value among ði  1; jÞ; ði; j  1Þ; ði  1; j  1Þ: In case of equity of the cells, the priority is given to hi;j : The process continues until it reaches the cell with zero value, or the value at position (0,0). After that the alignment is constructed as follows: starting with the last value, the process reaches ði; jÞ using the previously calculated path. A diagonal jump implies there is an alignment (either a match or a mismatch). A topdown jump implies there is a deletion. A left-right jump implies there is an insertion. For example, in UNIX operating system, the next sequence of system calls causes addition of superuser account with password known to intruder: setreuid(0,0); open("/etc/passwd", O_APPEND| O_WRONLY); write(fd, "newuser:password:0:0::/: /bin/sh", 31); close(fd); exit(0);

The next sequence has the similar result, but it differs from the above sequence (there is a polymorphism in sequence of operations, so the intrusion pattern built be the first sequence cannot be applied to exactly identify this new chain of intrusion): setreuid(0,0); brk(); brk(); brk(); setreuid(); open("/etc/passwd", O_APPEND|O_WRONLY); time(); brk(); write(fd, "newuser:password:0:0::/:/bin/sh", 31); time(); fork(); close(fd); time(); brk(); exit(0);

Bioinspired Intrusion Detection in ITC Infrastructures

15

Let b and a denote the first sequence and second sequences, correspondently. b will be defined as an attack pattern (IDS signature). x values are set as follows: 8 3; Ca ¼ Cb < xðCa ; Cb Þ ¼ 2; ðCa 6¼ Cb Þ ^ :ðCa ; Cb 62 fsetreuid; open; write; close; exitgÞ : : 1; ðCa ¼ Þ _ ðCb ¼ Þ ð6Þ H contains the values listed in Fig. 1. Cells of H matrix which were marked as the current cells are colored with gray. The output of this algorithm are two aligned chains: a ' : setreuid open time brk write time fork close time brk exit b ' : setreuid open write close exit

R = 9.

Fig. 1. The values of H matrix.

2.4

Needleman-Wunsch Algorithm

Needleman-Wunsch algorithm [4, 15] has a few differences from local alignment discussed previously. As in Smith-Waterman algorithm [12], there are two input sequences: a ¼ Ca1 ; Ca2 . . .Can and b ¼ Cb1 ; Cb2 . . .Cbm , and a similarity function x : ðCa ; Cb ; . . .Þ ! Z. The difference from Smith-Waterman algorithm is a penalty d [4]. The similarity function R is defined in a following manner: Rða; bÞ ¼

n1 X i¼0

 fi ðai ; bi Þ; where fi ða; bÞ ¼

d; ðai ¼ Þ _ ðbi ¼ Þ : xðai ; bi Þ; ðai ¼ 6 Þ ^ ðbi 6¼ Þ

ð8Þ

This algorithm is also consists of two stages: filling the similarity matrix S of size m þ 1  n þ 1, where m and n are the lengths of the corresponding sequences.

16

S. Lim et al.

Elements ði; 0Þ and ð0; jÞ are filled with values i  d и j  d, correspondently. Other elements of S are calculated in the following way: 8 0 > > > < Fði  1; j  1Þ þ xðC ; C Þ ai bi ; Fði; jÞ ¼ max > Fði  1; jÞ þ d ð9Þ > > : Fði; j  1Þ þ d 1  i  m;

1jm

Then the second stage is performed. The current element is marked at the bottom right. The next current element is chosen with the following conditions: si1;j1 ; si;j ¼ si1;j1 þ xðCai ; Cbi Þ si1;j ; si;j ¼ si1;j þ d si;j1 ; si;j ¼ si;j1 þ d

ð10Þ

In case of fulfillment of two or more conditions, the priority is given to the most top. The process continues until it reaches the value in position (0,0). The sample of algorithms calculations is definitely similar to the mentioned above.

3 Results 3.1

Detection of Polymorphic Intrusions

Let take a look on a signature-based IDS and presume that the intruder’s target is the performing attack in way of IDS evasion. Common ways of it are described in [16]. Trace of system is defined by SystemTrace ¼ C1 ; C2 ; C3 . . .CN : 0 MaliciousTrace ¼ C10 ; C20 ; C30 . . .CM denotes a trace corresponding to the intrusion. The problem of polymorphic attack detection is to discover in SystemTrace the traces corresponding to the attack mutation equal by a result to the attack described by the MaliciousTrace sequence. The set Seq ¼ fSeqi ; 0  i  P; P  Ng is built in the following way: Seq1 ¼ C1 ; C2 . . .CP ; Seq2 ¼ C2 ; C3 . . .CP ; . . .; SeqNP þ 1 ¼ CNP þ 1 ; CNP þ 2 . . .CN :

ð11Þ

The elements of Seq are to be compared to MaliciousTrace. Considering that M\\P, the best algorithm for similarity calculating is the Smith-Waterman algorithm. The algorithm output is a value R: In case if it exceeds the value of a threshold, it will be considered that SystemTrace contains a mutation of attack implemented by MaliciousTrace. The largest number of commands in the polymorphic intrusion that this method can detect is P. But growth of P causes the increase of worktime. Each alignment is

Bioinspired Intrusion Detection in ITC Infrastructures

17

performed with OðP  MÞ. The quantity of alignment is N  P: Therefore, the complexity of method is estimated like OðP  M  ðN  PÞÞ ¼ OðP  M  N  P2  MÞÞ: P must be too big enough to detect a long chain and too small enough to satisfy the performance requirements. For the purpose of normalization, it is suggested to use the next function instead of R: R0 ða; bÞ ¼

R2 ðb0 ; b0 Þ LengthðbÞ  ; R2 ðb; bÞ Lengthðb0 Þ

ð12Þ

where b is MaliciousTrace, a is the elements of Seq set, b0 is a MaliciousTrace sequence after alignment. In case of Rða1 ; a1 Þ  Lengthða1 Þ for all sequences a1 , the definition range of this function matches the interval [0,1]. The nearness to zero increases a probability of fact that this trace contains a mutation of attack implemented by MaliciousTrace. Therefore, the described algorithm can be applied for detection of polymorphic intrusions. 3.2

Anomaly Detection

The method for system calls sequence analysis was described in [17] and after that it has a lot of extensions and improvements now in a number of works (e.g. [18–23]). Let SystemTrace denotes a system trace corresponding to a normal behavior of the system. The set Seq ¼ fSeqi ; 0  i  P; P  Ng is built in a following way: Seq1 ¼ C1 ; C2 . . .CP ; Seq2 ¼ C2 ; C3 . . .CP ; . . .; SeqNP þ 1 ¼ CNP þ 1 ; CNP þ 2 . . .CN :

ð13Þ

NormalProfile is a set of elements Seq. This set represents a database of sequences that corresponds to the normal behavior profile. Let Compare is a comparison function of two sequences: Compare : Seq  Seq ! f0; 1g:

ð14Þ

A condition for adding the sequence Seqk to NormalProfile is: Seqk 2 NormalProfile , 8Seqi 2 NormalProfile ^ Seqk 6¼ Seqi : CompareðSeqk ; Seqi Þ ¼ 0:

ð15Þ

The sequence is added to database if it is not equal to any sequence already being stored in the database (in terms of the defined comparison function).

18

S. Lim et al.

The Needleman-Wunsch algorithm and R0 function as a comparison function are suggested to use. The sequence is added to database if, for any sequence from a database, R0 is less than a threshold. The size of the database will be thus reduced. 3.3

The Experimental Study

The effectiveness evaluation for the alignment algorithms has been examined with the vulnerabilities database from CVE registry [24]. For instance, CVE-2018-4878 intrusion is an exploitation of the use-after-free vulnerability of Adobe Flash Player, which allows remote code to be executed through a damaged Flash object. The vulnerability is used to download malicious shell code from the compromised web server to a network node containing the remote administration tool ROKRAT [25] (the YARN rule for identifying the CVE-2018-4878 vulnerability is presented in Fig. 2).

Fig. 2. The YARN rule for CVE-2018-4878 detection.

The sample of the network dump containing malicious traffic that satisfies the YARN rule of the CVE-2018-4878 vulnerability is shown in Fig. 3.

Fig. 3. A malicious traffic dump for CVE-2018-4878 detection.

Network traffic dump has been collected on Ubuntu 18.04 and used as input data for the alignment algorithms. The data set contains half of the anomalous samples, and the training and testing samples are marked in 70/30 proportion (Table 2).

Bioinspired Intrusion Detection in ITC Infrastructures

19

Table 1. Dataset characteristics. Traffic Number of training data samples Number of testing data samples Normal 49000 21000 CVE-2018-4878 24500 10500

The method was implemented using the following software tools: Python, dpkt library which provides quick analysis of TCP/IP protocol packets, biopython library that provides the ability to perform actions with biological sequences, genomes, phylogenetic trees, scikit-learn library that implements basic machine learning algorithms, effectiveness calculations, and plotting. Based on traffic analysis, abnormal and normal traffic patterns were created. To evaluate the proposed intrusion detection method, it is necessary to determine the boundary of the distance indicator between the compared sequences – R: To do this, during the training phase, the data of the training sample are marked, and then, based on the results, they formulate the criteria by which the detector is triggered (e.g., for CVE-2018-4878, the criterion for determining the attack is R [ 0; 71: To analyze the effectiveness of the algorithms, the Smith-Waterman algorithm and the Suricata IDS have been tested, analyzed and compared. For a comparative assessment, the traditional effectiveness metrics of accuracy, precision, completeness (recall) have been calculated (Table 2). Table 2. Comparison of intrusion detection paradigms (the presented values have been obtained for the test case of CVE-2018-4878 detection). Effectiveness metric Smith-Waterman Accuracy, % 93.6 Precision, % 92.7 Recall, % 94.7

Suricata IDS 74.9 66.6 99

The obtained results indicate that the bioinformatic methods can be effectively applied in real IDS systems to solve protection tasks for ITC infrastructures. And the most important merit of the studied technology is that the bioinformatic approach is ready to work with the intrusion polymorphism.

4 Discussion There are a few works related to the application of sequences alignment algorithms to malicious activity detection. In [27, 28] the sequences alignment algorithms are reviewed for the pattern matching. The approach was to detect a masquerade of normal user behavior by the intruder. The authors got some positive results in comparison to other algorithms

20

S. Lim et al.

Hybrid Markov or IPAM. In [29] the sequences alignment is used to generate an attack signatures for the purpose of detecting polymorphic attacks. The generation is focused on the string mode so it is considerably different from the method suggested in this work. In [16] the method of attack mutation detection is simpler than suggested in this work. They defined a set of no-ops calls. And their approach was consisted in searching any sequence that is equal to attack signature after deletion no-ops. And also, there was a work on system calls arguments. The approach suggested in this work is more flexible and effective. For example, if there is a system with two different sets of commands that implement the similar operation, our method can detect all kinds of attack mutations which obtained by command replacing. And also, the suggested method has a command ranking by its danger degree.

5 Conclusion Based on the hypothesis that biological sequences are similar to sequences of operations in ITC system because of their variable and sequential character, a new intrusion detection technique based on application of bioinformation algorithms for sequence alignment has been proposed. The paper reviews two applications of the sequence alignment algorithms: intrusion detection and anomaly detection. The results obtained for the suggested method showed that some parameter that is a criterion for cyberattack detection is considerably different between normal traces and intrusion-relevant traces. It means that this method can be used for polymorphic attack detection. By turn, it means that in IDS, one signature can be applied for detection of multiple variations of single attack. And it is important that this attack is likely to be unknown. This method can also help us to reduce the size of the database of the stored patterns. It is very important to reduce it, because according to cybersecurity reports the number of attacks is growing exponentially. Created prototype of system based on bioinformatic algorithms have demonstrated high accuracy of intrusion detection. The quality indicators of the algorithms obtained during testing of the constructed system have confirmed that bioinformatic methods can be successfully applied to intrusion detection tasks in flexible ITC infrastructures. These methods can also be applied to anomaly-based intrusion detection that uses sequences of some acts to build a behavior profile. The further work has the goal to investigate other types and samples of sequence alignment algorithms (e.g. multiple alignment [30, 31]) and compare them on effectiveness to detect the polymorphic and unknown intrusions. Acknowledgments. The reported study was funded by RFBR according to the research project №18-29-03102. Project results are achieved using the resources of supercomputer center of Peter the Great St. Petersburg Polytechnic University – SCC “Polytechnichesky” (www.spbstu.ru).


References

1. Sung, A.H., Mukkamala, S.: The feature selection and intrusion detection problems. In: Maher, M.J. (ed.) ASIAN 2004. LNCS, vol. 3321, pp. 468–482. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-30502-6_34
2. McHugh, J.: Intrusion and intrusion detection. Int. J. Inf. Secur. 1(1), 14–35 (2001). https://doi.org/10.1007/s102070100001
3. Kumar, V.: Signature based intrusion detection system using SNORT. Int. J. Comput. Appl. Inf. Technol. – IJCAIT (2012)
4. Needleman, S.B., Wunsch, C.D.: A general method applicable to the search for similarities in the amino acid sequence of two proteins. J. Mol. Biol. (1970). https://doi.org/10.1016/0022-2836(70)90057-4
5. Mott, R.: Smith-Waterman algorithm. In: Encyclopedia of Life Sciences (2005). https://doi.org/10.1038/npg.els.0005263
6. Tan, T.W., Lee, E.: Sequence alignment. In: Beginners Guide to Bioinformatics for High Throughput Sequencing (2018). https://doi.org/10.1142/9789813230521_0004
7. Lunt, T.F.: A survey of intrusion detection techniques. Comput. Secur. (1993). https://doi.org/10.1016/0167-4048(93)90029-5
8. Payer, U., Teufl, P., Lamberger, M.: Hybrid engine for polymorphic shellcode detection. In: Julisch, K., Kruegel, C. (eds.) DIMVA 2005. LNCS, vol. 3548, pp. 19–31. Springer, Heidelberg (2005). https://doi.org/10.1007/11506881_2
9. Leghris, C., Elaeraj, O., Renault, E.: Improved security intrusion detection using intelligent techniques. In: Proceedings – 2019 International Conference on Wireless Networks and Mobile Communications, WINCOM 2019 (2019). https://doi.org/10.1109/wincom47513.2019.8942553
10. Hsiao, S.W., Sun, Y.S., Chen, M.C., Zhang, H.: Behavior profiling for robust anomaly detection. In: Proceedings – 2010 IEEE International Conference on Wireless Communications, Networking and Information Security, WCNIS 2010 (2010). https://doi.org/10.1109/WCINS.2010.5541822
11. Harrison, M.A., Ruzzo, W.L., Ullman, J.D.: Protection in operating systems. Commun. ACM (1976). https://doi.org/10.1145/360303.360333
12. Smith, T.F., Waterman, M.S.: Identification of common molecular subsequences. J. Mol. Biol. (1981). https://doi.org/10.1016/0022-2836(81)90087-5
13. Waterman, M.S., Eggert, M.: A new algorithm for best subsequence alignments with application to tRNA-rRNA comparisons. J. Mol. Biol. (1987). https://doi.org/10.1016/0022-2836(87)90478-5
14. Arslan, A.N., Eǧecioǧlu, Ö., Pevzner, P.A.: A new approach to sequence comparison: normalized sequence alignment. Bioinformatics (2001). https://doi.org/10.1093/bioinformatics/17.4.327
15. Nalbantoǧlu, Ö.U.: Dynamic programming. Methods Mol. Biol. (2014). https://doi.org/10.1007/978-1-62703-646-7_1
16. Wagner, D., Soto, P.: Mimicry attacks on host-based intrusion detection systems. In: Proceedings of the ACM Conference on Computer and Communications Security (2002). https://doi.org/10.1145/586143.586145
17. Forrest, S., Hofmeyr, S.A., Somayaji, A., Longstaff, T.A.: A sense of self for Unix processes. In: Proceedings of the IEEE Computer Society Symposium on Research in Security and Privacy (1996). https://doi.org/10.1109/secpri.1996.502675
18. Liu, Y., Chen, K., Liao, X., Zhang, W.: A genetic clustering method for intrusion detection. Pattern Recognit. (2004). https://doi.org/10.1016/j.patcog.2003.09.011
19. Sazzadul Hoque, M.: An implementation of intrusion detection system using genetic algorithm. Int. J. Netw. Secur. Its Appl. (2012). https://doi.org/10.5121/ijnsa.2012.4208
20. Lavrova, D., Pechenkin, A.: Applying correlation and regression analysis to detect security incidents in the internet of things. Int. J. Commun. Netw. Inf. Secur. (2015)
21. Lavrova, D., Poltavtseva, M., Shtyrkina, A., Zegzhda, P.: Detection of cyber threats to network infrastructure of digital production based on the methods of Big Data and multifractal analysis of traffic. SHS Web Conf. (2018). https://doi.org/10.1051/shsconf/20184400051
22. Poltavtseva, M.A., Zegzhda, D.P., Pavlenko, E.Y.: High-performance NIDS architecture for enterprise networking. In: 2019 IEEE International Black Sea Conference on Communications and Networking, BlackSeaCom 2019 (2019). https://doi.org/10.1109/blackseacom.2019.8812808
23. Demidov, R., Pechenkin, A., Zegzhda, P.: Integer overflow vulnerabilities detection in software binary code. In: ACM International Conference Proceeding Series (2017). https://doi.org/10.1145/3136825.3136872
24. CVE Repository. https://cve.mitre.org. Accessed 10 Mar 2020
25. CVE-2018-4878 vulnerability specification. https://www.securityfocus.com/bid/102893. Accessed 10 Mar 2020
26. Open Information Security Foundation (OISF): Suricata Open Source IDS/IPS/NSM engine. https://suricata-ids.org. Accessed 10 Mar 2020
27. Coull, S., Branch, J., Szymanski, B., Breimer, E.: Intrusion detection: a bioinformatics approach. In: Proceedings – Annual Computer Security Applications Conference, ACSAC (2003). https://doi.org/10.1109/csac.2003.1254307
28. Song, D., Heywood, M.I., Zincir-Heywood, A.N.: A linear genetic programming approach to intrusion detection. In: Cantú-Paz, E., et al. (eds.) GECCO 2003. LNCS, vol. 2724, pp. 2325–2336. Springer, Heidelberg (2003). https://doi.org/10.1007/3-540-45110-2_125
29. Li, N., Xia, C., Yang, Y., Wang, H.: An algorithm for generation of attack signatures based on sequences alignment. In: Proceedings – International Conference on Computer Science and Software Engineering, CSSE 2008 (2008). https://doi.org/10.1109/csse.2008.555
30. Corpet, F.: Multiple sequence alignment with hierarchical clustering. Nucleic Acids Res. (1988). https://doi.org/10.1093/nar/16.22.10881
31. Darling, A.C.E., Mau, B., Blattner, F.R., Perna, N.T.: Mauve: multiple alignment of conserved genomic sequence with rearrangements. Genome Res. (2004). https://doi.org/10.1101/gr.2289704

Algorithm for Optimizing Urban Routes in Traffic Congestion

Anton Ignatov1(✉), Vladimir Baskov1, Timur Ablyazov2, Andrei Aleksandrov3, and Natal’ya Zhilkina4

1 Department of Organization of Transportations, Traffic Safety and Service of Cars, Saratov State Technical University named after Gagarin Y.A., 77 Politechnicheskaya Street, 410054 Saratov, Russia
[email protected]
2 Department of Construction Economics and Housing and Utility Infrastructure, Saint Petersburg State University of Architecture and Civil Engineering, 2-ya Krasnoarmeiskaya ul., 190005 Saint Petersburg, Russia
[email protected]
3 Faculty of Building Services, South-Eastern Finland University of Applied Sciences, Patteristonkatu 3, 50100 Mikkeli, Finland
[email protected]
4 Management Department, Kyrgyz-Russian Slavic University, Kievskaya Street, 44, 720065 Bishkek, Kyrgyz Republic
[email protected]

Abstract. The problem of traffic congestion is currently relevant both abroad and in Russia, since the global transition to “smart city” technology requires a comprehensive improvement of the transport sector, leading to the promotion of road safety and the reduction of the negative impact of vehicles on the environment. The article considers the indicator of traffic intensity as one of the main factors affecting the risk of traffic congestion. As a result of the study, a digital model for predicting the risk of traffic congestion at signalized intersections is proposed which takes into account, in real time, a set of factors affecting the road traffic situation. The usage of this digital model will make it possible to justify the implementation of organizational measures in the busiest sections of the street-road network. It will also become possible to optimize a route according to the least risk of traffic congestion emergence; such a measure is most relevant for the vehicle fleet of emergency services. The developed digital model can be implemented as part of intelligent transport systems; the article suggests a scheme of operation of such a system based on the introduction of a digital model for predicting the risk of traffic congestion.

Keywords: Traffic flow · Traffic congestion · Risk · Street-road network · Route · Digital models · Intelligent transport system



This article was prepared as part of the grant by the President of the Russian Federation МК462.2020.6.


1 Introduction

Traffic congestion is one of the global problems: the growth in the number of cars, while the existing street-road network (SRN) and accepted methods of regulating traffic flows are maintained, leads not only to an increase in travel time and in the consumption of energy resources, which are usually non-renewable, but also damages the environment and increases the number of traffic accidents. It was previously believed that expanding the SRN and building new roads would reduce the likelihood of traffic jams, but practice shows that this approach leads to a vicious circle: traffic congestion, building new roads, a temporary reduction of congestion intensity, increased traffic flow on the new SRN, an increased number of congestions, another even larger-scale round of road construction, and so on [1].

Transport delays, as their duration increases, turn into traffic congestion, which is determined by the influence of various factors: the geometric parameters of the road, the mode of operation of traffic lights, vehicles parked at the edge of the carriageway, the geographic features of the existing SRN, road accidents, the psychophysiological characteristics of drivers and the degree of their professional skill, etc. The combination of these factors significantly affects the traffic capacity of roads. One of the key factors with the most significant effect on the risk of traffic congestion is traffic intensity. This factor can be uneven both in time and in space. The annual intensity of the traffic flow is a variable factor: it depends on the level of motorization of specific areas, the category of roads, the season, the time of day, and other factors [2–5].

At present, the increase in traffic flow in Russia is associated with growing motorization of the population while the previously formed street-road network is maintained. In rural areas, the level of traffic intensity is usually low and does not exceed 1,000 cars/day. In most of the urban street-road network (with the exception of the main highways) the average traffic intensity is about 4,000 cars/day. In the busiest sections of the street-road network the traffic intensity is high, about 10,000 cars/day and more [6]. The intensity of traffic varies with the seasons of the year: the highest level falls on the summer and autumn seasons [7, 8]. Traffic intensity is also significantly affected by the day of the week. As a rule, the greatest load falls on Friday, when the average traffic intensity is 125% of the average daily rate; the lowest traffic intensity, 72%, is observed on Sunday [9]. During the day, the indicator of traffic intensity is also subject to fluctuations and has, for the most part, one or two maxima on weekdays [10].

Transport congestion has a negative impact on the economy of municipalities in particular and of the state in general, characterized mainly by the untimely delivery of employees to their jobs, which reduces labor productivity. In addition, increased fuel consumption, untimely delivery of goods and the consequences of road accidents, the risk of which rises in dense traffic, can be attributed to the economic losses resulting from congestion. Beyond economic losses, congestion has a negative impact on the environment: an increase in the mass of exhaust emissions per square kilometer from a dense traffic flow; noise pollution; soil and water pollution


inflicted by leaking operating fluids coming out of worn-out vehicle systems. Not least important is the negative impact of traffic congestion on the driver's psychophysiological state, characterized by increased and long-term stress of the nervous system, which in turn leads to a number of diseases, primarily of the nervous, cardiovascular and digestive systems. The stress of the driver while driving also increases the likelihood of an erroneous or belated decision, which can lead to road accidents caused by driver error. Additionally, a prolonged stressful condition lengthens the driver's reaction time. Consequently, the problem of traffic congestion requires the simultaneous consideration of many factors affecting traffic intensity, which cannot be achieved without modern digital models for congestion forecasting. In world practice, digital technologies are being organically introduced into the transport sector of cities, but in Russia the transition to the widespread use of digital models and algorithms is just beginning, so developing a model to solve the problem of traffic congestion is an urgent research task.

2 Materials and Methods

The problem of traffic congestion is typical for different regions of the world, but countries with a significant degree of urbanization are affected most. Each country is trying to find the best ways to solve this problem, and Japan stands out by the highest degree of digitalization of the applied solutions: an intelligent transport system (ITS) has been created whose elements are vehicles, road sensors, control centers, information system providers, and various digital maps and navigation systems [11]. This concept is called “Smartway”. Modern video surveillance systems, infrared sensors, “smart” traffic lights and ubiquitous contactless fare cards can almost completely eliminate traffic jams in the country's capital, Tokyo. City services receive vital information about the traffic situation in real time and can optimally plan routes for emergency services.

The experience of the UK (in particular London) and of Singapore is more focused on economic measures to reduce traffic congestion. Thus, in central London, entry fees for most categories of vehicles on weekdays during peak hours have been in force since 2003. As a result, traffic congestion decreased by 16%, the duration of the remaining congestion decreased by 20–30%, the speed of traffic increased by 37%, and environmental pollution also decreased [1]. Nevertheless, such measures are tied to the socio-economic policy of the country, and they do not always find public support. In our opinion, the first step in solving the problem of traffic congestion is to improve digital decision-making models for conditions of heavy urban traffic, since such models affect public spending far less.

Singapore's experience also relates to charging tolls on especially busy sections of roads. Back in 1975, the country began to introduce digital technologies to control entry into certain territories, as a result of which the average flow speed increased by 30%, the share of public transport increased from 33% to 69%, and the number of cars entering the zone fell to 41,500 from 74,000 before the implementation of the innovations [1]. Gradually, control technologies improved, fines for non-compliance with


traffic rules became stricter, and the cost of owning a car also increased significantly, which together led to a reduction in traffic congestion; Singapore is currently one of the most densely populated countries with almost no traffic congestion.

In Russia, the solution of the traffic congestion problem is only beginning: the network of toll highways is expanding, mechanisms for charging freight transport tolls for the damage it causes are being improved [12], and a transition to smart city technologies inextricably connected with the transport sector has been announced (expansion of public transport, improvement of road safety, reduction of exhaust emissions by switching to alternative energy sources, and reduction of the number of cars in cities) [13].

Our study is aimed at optimizing traffic flows in order to eliminate congestion on the basis of a digital model for congestion prediction and the plotting of optimal vehicle routes, which is especially important for the activities of emergency services. This digital model is based on an extensive mathematical toolset.

The dependence of the risk of traffic congestion on traffic intensity is [14–16]:

$$ r_{\text{тз}}(N) = 0.5 - \Phi\left(\frac{d_{\text{кр}} - \left(\dfrac{A}{1 - EN} + \dfrac{B}{1/N - K}\right)}{\sqrt{\sigma_{d_{\text{кр}}}^{2} + \sigma_{d_{\text{ф}}}^{2}}}\right), \qquad (1) $$

where
r_тз(N) – the risk of traffic congestion resulting from traffic intensity;
Ф – the Laplace function;
d_ф – the average calculated or actual value of the transport delay at the signalized intersection at rush hour, s;
d_кр – the average limiting (critical) value of the transport delay at the signalized intersection at rush hour, at which the probability of congestion occurrence is 50%, s;
A = 0.9C(1 − k)²/2 – coefficient depending on the duration of the control cycle and the effective share of the green signal, s/car;
k = g/C – effective share of the green signal;
B = 0.9K²/2 – coefficient depending on the coefficient K defined below, s²/car²;
K = C/(M_H g) – coefficient determining the ratio of the cycle duration to the maximum number of vehicles that manage to pass the intersection in the j-th direction during the effective time of the i-th signaling phase, s/car;
E = kK – coefficient depending on the coefficient K and on the effective share of the green signal, s/car;
C – signaling cycle length, s;
g – effective duration of the green signal, s;
M_H – saturation flow, car/day;
N – intensity of traffic arrival rate, car/day.

To determine the relationship between the speed and density of the traffic flow and the risk of traffic congestion, we use the following dependences [17] of the traffic flow intensity on speed and density, based on the classical Tanaka traffic flow model [18, 19].

The dependence of the traffic flow intensity on speed is [17]:

$$ N(V) = \frac{V}{m_2 V^{2} + m_1 V + m_0}, \qquad (2) $$

where V is the speed of the traffic flow, m/s; m₂ – proportionality coefficient of the braking distance, s²/m; m₁ – time characterizing the reaction of the driver, s; m₀ – average vehicle length, m.

Thus, taking into account Formula (2), we obtain the dependence of the risk of traffic congestion on speed:

$$ r(V) = 0.5 - \Phi\left(\frac{d_{\text{кр}} - \left(\dfrac{A}{1 - EN(V)} + \dfrac{B}{1/N(V) - K}\right)}{\sqrt{\sigma_{d_{\text{кр}}}^{2} + \sigma_{d_{\text{ф}}}^{2}}}\right), \qquad (3) $$

where N(V) is the intensity of the arrival of the traffic flow depending on speed, determined by Formula (2), car/s.

The dependence of the risk of traffic congestion on density is:

$$ r(q) = 0.5 - \Phi\left(\frac{d_{\text{кр}} - \left(\dfrac{A}{1 - EN(q)} + \dfrac{B}{1/N(q) - K}\right)}{\sqrt{\sigma_{d_{\text{кр}}}^{2} + \sigma_{d_{\text{ф}}}^{2}}}\right), \qquad (4) $$

where N(q) is the intensity of arrival of the traffic flow depending on traffic density, determined by Formula (5), car/s.

The dependence of traffic flow intensity on density is [6]:

$$ N(q) = \left[1 - \left(1 - \frac{q}{q_{max}}\right)^{(a m_2 + b)^{c}}\right] \frac{3.6\left(-m_1 + \sqrt{m_1^{2} + 4 m_2\left(\frac{1000}{q} - m_0\right)}\right)}{2 m_2}\, q, \qquad (5) $$

where
q – traffic density, car/m;
q_max = 1/m₀ – maximum traffic density, car/m;
m₂ – proportionality coefficient of the stopping distance, s²/m;
m₁ – time characterizing the reaction of the driver, s;
m₀ – average vehicle length, m;
a – regression coefficient, found by V.I. Kolesov, of the linear relationship between the coefficient generally depending on m₂ and m₂ itself, equal to 61.5427 [4];
b – regression coefficient, found by V.I. Kolesov, of the same linear relationship, equal to 0.8308 [4];
c – constant equal to 0.6.

As a result of the calculation algorithm given above and on the basis of empirical data, the road traffic situation is assessed from the point of view of the risk of congestion occurrence; the final result is the development of a digital decision-making model for selecting the optimal route.
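To show how the model can be evaluated numerically, the sketch below implements Formulas (1) and (2) as reconstructed above, taking Ф as the Laplace function. Every numeric parameter (cycle length, green time, saturation flow, critical delay, standard deviations, m-coefficients) is an assumed example value, not a measurement from this study, and for convenience intensities are expressed in car/s.

```python
# Minimal numeric sketch of Formulas (1)-(2); all parameter values are assumed examples.
from math import erf, sqrt

def laplace(x):                      # Laplace function, ranging from -0.5 to 0.5
    return 0.5 * erf(x / sqrt(2.0))

C, g = 90.0, 40.0                    # signaling cycle and effective green time, s
M_H = 0.5                            # saturation flow, car/s
k = g / C                            # effective share of the green signal
K = C / (M_H * g)                    # s/car
A = 0.9 * C * (1 - k) ** 2 / 2
B = 0.9 * K ** 2 / 2
E = k * K

def mean_delay(N):                   # average transport delay at the intersection, s
    return A / (1 - E * N) + B / (1.0 / N - K)

def risk(N, d_cr=60.0, s_cr=10.0, s_f=8.0):   # Formula (1)
    return 0.5 - laplace((d_cr - mean_delay(N)) / sqrt(s_cr ** 2 + s_f ** 2))

def intensity_from_speed(V, m2=0.05, m1=1.2, m0=6.5):  # Formula (2), V in m/s
    return V / (m2 * V ** 2 + m1 * V + m0)

for N in (0.10, 0.18, 0.21):         # arrival intensities, car/s
    print(f"N = {N:.2f} car/s -> delay {mean_delay(N):5.1f} s, risk {risk(N):.2f}")
```

With these example values the risk stays near zero at low intensity and climbs steeply as N approaches the intersection capacity 1/K, matching the behavior shown in Fig. 1 below.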

3 Results

As a result of the study of traffic flow parameters in the central part of Saratov, carried out in order to identify the influence of the speed indicators of the traffic flow on the risk of traffic congestion, and taking into account Model (1), the characteristic curve of the influence of traffic intensity on the risk coefficient was calculated (Fig. 1).


Fig. 1. Experimental (calculated) values of the risk of traffic congestion rтз as a function of the traffic flow intensity N on the studied section of the street-road network on weekdays (the opposite direction on Rakhova St. at the intersection of Moscovskaya St. and Rakhova St.; the effective width of the carriageway is 6.5 m)

After analyzing the results presented in Fig. 1 and correlating them with the fundamental traffic flow diagram, we can conclude that the risk of traffic congestion tends to 1 as the traffic flow intensity increases, the maximum value of which is limited by the road traffic capacity. This process is also characterized by an increase in the density of the traffic flow with a simultaneous decrease in its speed; thus, the physical meaning does not contradict the fundamental traffic flow diagram. The study of actual transport delays was conducted at a signalized intersection in the central part of the city of Saratov. Based on the approximated dependences rтз = ƒ(V) (the study was carried out under the same conditions), 7 categories of traffic conditions were established that characterize the value of the risk of traffic congestion. The step of each category corresponds to a speed interval of 5 km/h; risk limit values are indicated by dots (Fig. 2). Thus, in accordance with traffic conditions at signalized intersections, 7 categories are determined depending on the speed of the traffic flow (Table 1).


Fig. 2. Limit values for the risk of traffic congestion for each category of traffic conditions taking into account speed intervals.

Table 1. Categories of traffic conditions taking into account the risk of traffic congestion

Category | Speed, km/h | Risk of traffic congestion
I | 0–5 | 0.85–1
II | 5–10 | 0.48–0.85
III | 10–15 | 0.34–0.48
IV | 15–20 | 0.27–0.34
V | 20–25 | 0.23–0.27
VI | 25–30 | 0.19–0.23
VII | >30 | <0.19

[…] F: X → Y, that is, a decision function that will approximate y on the whole set X:

$$ F_t = \sum_{i=1}^{n} l\left(y_i,\ \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \varphi(f_t) \qquad (1) $$

In formula (1) we introduce the function optimized by gradient boosting, where l is the cost function, y_i and ŷ_i^(t−1) are the true value and the value predicted by the algorithm after step t − 1, x_i is the set of features of the i-th element of the training set, and f_t is the function that must be trained at step t (decision trees trained with the CART algorithm are most often chosen for boosting, although any other weak classifiers or regression decision rules can be used; "weak" means high sensitivity to changes in the source data [18]). φ(f_t) is a regularization function whose effectiveness was empirically proven by the XGBoost developers; it was thanks to this technique that this implementation turned out to be extremely successful, since the algorithm included penalties against overfitting from the start, on the depth of the tree and on the splitting in the leaves [19]. We write this down mathematically:

$$ \varphi(f_t) = \gamma T + \frac{1}{2} \rho \lVert w \rVert^{2}, \qquad (2) $$

where T is the number of vertices in the tree, w – the values in the leaves, and γ and ρ – regularization parameters.

Next, expanding F_t in a Taylor series up to the second-order term, we write the optimized function as follows (formula (3)):

$$ F_t = \sum_{i=1}^{n} \left[ l\left(y_i, \hat{y}_i^{(t-1)}\right) + g_i f_t(x_i) + \frac{1}{2} h_i f_t^{2}(x_i) \right] + \varphi(f_t). \qquad (3) $$

After the decomposition, two new functions g and h appear, which have the following form:

$$ g_i = \frac{\partial\, l\left(y_i, \hat{y}_i^{(t-1)}\right)}{\partial\, \hat{y}_i^{(t-1)}}, \qquad (4) $$

$$ h_i = \frac{\partial^{2} l\left(y_i, \hat{y}_i^{(t-1)}\right)}{\partial \left(\hat{y}_i^{(t-1)}\right)^{2}}. \qquad (5) $$

The task is to minimize the model error on the training set; therefore we want to find the minimum of F_t for every t, which is achieved when the function f_t takes, at each point, the value

$$ f_t = -\frac{g_i}{h_i}. \qquad (6) $$
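For a concrete sense of one boosting step, consider the squared loss l(y, ŷ) = ½(y − ŷ)², for which formulas (4) and (5) give g_i = ŷ_i − y_i and h_i = 1. The sketch below applies the update of formula (6) to a single leaf; the sample values and the XGBoost-style regularization constant lam are assumptions made for this illustration.

```python
import numpy as np

y = np.array([3.0, 5.0, 8.0, 9.0])   # true targets
y_hat = np.zeros_like(y)             # prediction after step t-1

g = y_hat - y                        # first derivatives of the loss, formula (4)
h = np.ones_like(y)                  # second derivatives of the loss, formula (5)

# Treat the whole sample as one leaf: per formula (6) the optimal leaf value is
# -sum(g)/sum(h); lam is an assumed regularization term added to the denominator.
lam = 1.0
leaf = -g.sum() / (h.sum() + lam)
print(leaf, y_hat + leaf)            # 5.0 and the updated predictions
```

The leaf value comes out close to the sample mean, shrunk slightly by the regularization, which is exactly the effect of the penalty term in formula (2).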

Note that each tree in this ensemble is trained by one of the standard algorithms: ID3, C4.5, CART, CHAID or MARS [20]. Tree-construction algorithms are greedy: at each stage they choose the feature that best partitions the set of elements. Differences between the algorithms are often expressed in the metrics they use to partition the set. Here are two of the most common:

$$ I_g(p) = \sum_{i=1}^{J} p_i (1 - p_i) = 1 - \sum_{i=1}^{J} p_i^{2}. \qquad (7) $$

This metric is called the Gini impurity, where J is the number of classes, i ∈ {1, 2, …, J}, and p_i is the proportion of elements labeled with class i in the data set. This metric is typical for the CART algorithm.

$$ H(T) = I_E(p_1, p_2, \ldots, p_J) = -\sum_{i=1}^{J} p_i \log_2 p_i \qquad (8) $$

$$ IG(T, a) = H(T) - H(T \mid a) \qquad (9) $$

$$ = H(T) - \sum_{a} p(a) \sum_{i=1}^{J} \left( -\Pr(i \mid a) \log_2 \Pr(i \mid a) \right) \qquad (10) $$

The second metric, used by the ID3, C4.5 and C5.0 algorithms, is called the information gain; it is in fact the difference between the entropy and the weighted conditional entropy, and we use it to decide which attribute to split on at each step of the tree construction.

Thus, having formalized the task of determining which jobs will be preserved after the introduction of AI, it is possible to identify roles that could be attractive for the outgoing professions. Office staff, after retraining, could participate in servicing AI: taking part in model verification, data quality and markup analysis, model testing, and data visualization and presentation. An important role in the analytical cycle of model development is played by observing the model and correcting errors during operation. An introduced model often makes mistakes in situations that are non-standard for it, and specialists need to catch such errors, record them and send them to the developers for further modification of the models. For these purposes there is no need for in-depth knowledge of machine learning algorithms, mathematical statistics or probability theory; specialists with basic skills in certain software can cope with this task and facilitate the work of data analysts.

We also note the importance of the subject area in AI tasks, where narrowly focused specialists can bring value and can successfully be retrained. Linguists are one example: natural language processing is now actively developing, and there it is extremely important to take into account linguistic features, grammar, morphology and vocabulary. Since NLP specialists aim at extracting meaning from text, a detailed study of all its aspects is essential, and linguists of various kinds can be invaluable in this. Besides NLP, one can note the industrial security sector for company employees, which can be supported by video analytics specialists. At the same time, these specialists cannot act on their own: they need prepared video series with marked-up data indicating the necessary precedents, and here too there is great scope for retraining specialists from the industrial safety sector.

Thus, it can be seen that each direction of AI creates an urgent need to train new employees; therefore mass layoffs can in a sense be considered a myth that will soon dissipate. AI requires a lot of attention and servicing. It enables business to work more efficiently, but at the same time it creates jobs.


4 Conclusion

The algorithm proposed in the article for optimizing the number of employees is recommended for companies that are planning or actively implementing various optimization solutions based on artificial intelligence. This approach makes it possible not to lose employees who are necessary and valuable for the company, but to radically reconsider their roles and potentially transfer them to new ones. The article proposes a specific mathematical model in the form of a modified gradient boosting over decision trees; attention is also paid to the feature space for these models and the criteria for its selection. In addition, the article discusses the evolution of the introduction of robotics and machine learning, and the resulting displacement of employees, in various sectors of the economy, from banking to the state sector. Beyond the developed decision-making model, the article traces the evolutionary mechanism of the penetration of artificial intelligence methods into various branches of human activity: from the first mention of the concept of AI to advanced algorithms for processing both structured and unstructured data.

Acknowledgements. This paper was prepared under financial support of the Russian Science Foundation (Grant No. 18-18-00099).

References

1. Applications of Artificial Intelligence in Top 10 Areas – Learnitude Technologies. https://learntechx.com/blog/applications-of-artificial-intelligence-in-top-10-areas. Accessed 06 Apr 2020
2. School of data analysis. https://yandexdataschool.com/. Accessed 06 Apr 2020
3. Bernard, A.: How AI is impacting the workplace. https://www.techrepublic.com/article/how-ai-is-impacting-the-workplace/. Accessed 06 Apr 2020
4. Orlov, S.: Sberbank replaces 70% of mid-level managers with robots – Computerra (2018)
5. Averkin, A.N., Haase-Rapoport, M.G., Pospelov, D.A.: Explanatory dictionary of artificial intelligence. Radio Commun. (1992)
6. Artificial Intelligence Techniques – Top 4 Techniques of Artificial Intelligence. https://www.educba.com/artificial-intelligence-techniques/. Accessed 06 Apr 2020
7. Reisinger, D.: A.I. Expert Kai Fu Lee: 40% of Jobs Will Be Lost to AI, Robots – Fortune (2019). https://fortune.com/2019/01/10/automation-replace-jobs/
8. AI: brave new worlds – (LU) Federated Hermes. https://www.hermes-investment.com/lu/insight/outcomes/ai-brave-new-worlds/. Accessed 06 Apr 2020
9. Garvey, C.: Broken promises and empty threats: the evolution of AI in the USA, 1956–1996. Technology's Stories (2018). https://doi.org/10.15763/jou.ts.2018.03.16.02
10. Goodfellow, B.: The Back-Propagation Algorithm. MIT Press (2016)
11. Friedman, J.H.: Greedy function approximation: a gradient boosting machine (Reitz Lecture). Ann. Stat. 29(5), 1189–1232 (2001)
12. How the future of computing can make or break the AI revolution – World Economic Forum. https://www.weforum.org/agenda/2019/06/how-the-future-of-computing-can-make-or-break-the-ai-revolution/. Accessed 06 Apr 2020
13. Press, G.: Is AI going to be a jobs killer? New reports about the future of work. https://www.forbes.com/sites/gilpress/2019/07/15/is-ai-going-to-be-a-jobs-killer-new-reports-about-the-future-of-work/#75cb5e6bafb2. Accessed 06 Apr 2020
14. Friedman, J.H.: Greedy Function Approximation: A Gradient Boosting Machine. Stanford University, Stanford (1999)
15. Primary ML software used by top-5 teams on Kaggle: Keras, LightGBM, XGBoost, PyTorch: r/learnmachinelearning. https://www.reddit.com/r/learnmachinelearning/comments/b91f8w/primary_ml_software_used_by_top5_teams_on_kaggle/. Accessed 06 Apr 2020
16. Shcherbina, E.: Yandex's new machine learning method can work with categories. TASS
17. How to make employee data your company's most powerful tool – Best Money Moves. https://bestmoneymoves.com/blog/2018/02/26/how-to-make-employee-data-your-companys-most-powerful-tool/. Accessed 06 Apr 2020
18. Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and Regression Trees. Wadsworth & Brooks/Cole Advanced Books & Software, Monterey (1984)
19. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: KDD 2016: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794 (2016). https://doi.org/10.1145/2939672.2939785
20. Chauhan, N.S.: Decision tree algorithm. https://towardsdatascience.com/decision-tree-algorithm-explained-83beb6e78ef4. Accessed 22 Jan 2020

Smart Containers Technology Evaluation in an Enterprise Architecture Context (Business Case for Container Liner Shipping Industry)

Igor Ilin1, Svetlana Maydanova2(✉), Anastasia Levina1, Carlos Jahn3, Jürgen Weigell3, and Morten Brix Jensen4

1 Peter the Great Saint-Petersburg Polytechnic University, St. Petersburg, Russia
2 Unifeeder A/S, St. Petersburg, Russia
[email protected]
3 Institute of Maritime Logistics, Hamburg University of Technology, Hamburg, Germany
4 Unifeeder A/S, Aarhus, Denmark

Abstract. The current state of the economy and of technology development requires cross-functional, horizontal management of shared information. Organizations deploy technologies that enable the value-added exchange of data. The supply chain becomes a cooperation of partners who agree on common goals and who bring specific strengths to the overall value creation and value delivery system. Companies such as the container liner shipping industry participants implement new technologies, including e-platforms, the Internet of Things and blockchain, to comply with this new paradigm. Smart container technology could become one of the tools for the mutual benefit of supply chain participants. Nevertheless, to achieve this aim companies need to take precise and well-considered decisions. The current research proposes a method of new technology evaluation based on Enterprise Architecture approaches, namely The Open Group Architecture Framework (TOGAF) methodology and the Capability Driven Approach (CDA), in combination with the Information Economics method as a multi-criteria method of evaluating proposed IT investments.

Keywords: Container liner shipping · Enterprise architecture · Capability-driven approach · Information economics · Smart containers technology

1 Introduction

Currently the nature of the business enterprise is changing. Today's business is increasingly “boundaryless”: internal functional barriers are being eroded in favor of horizontal process management, and externally the separation between vendors, distributors, customers and the firm is gradually lessening. This is the idea of the extended enterprise, which is transforming our thinking on how organizations compete and how value chains might be reformulated [1].


The use of shared information that enables cross-functional, horizontal management has become a reality. Even more important is information sharing between partners in the supply chain, which makes the flow of products from one end of the pipeline to the other possible. What has now come to be termed the virtual enterprise or supply chain is in effect a series of relationships between partners based upon the value-added exchange of information. The notion that partnership arrangements and a mentality of cooperation are more effective than the traditional arm's-length and often adversarial basis of relationships is now gaining ground. Thus, the supply chain is becoming an association of organizations that agree on common goals and bring specific strengths to the overall value creation and value delivery system [1].

To support the value-added exchange of information, companies participating in the container liner shipping industry implement new technologies such as e-platforms, the Internet of Things and blockchain. In compliance with this paradigm, industry participants also consider Smart container technologies and their advantages for all stakeholders [2–11]. Smart containers are among the few types of equipment that offer visibility into transport execution and cargo conditions from door to door. They generate valuable real-time physical tracking and monitoring data; for example, smart containers can generate data about events such as a door opening or closing, arrival at or departure from a geofenced area, or temperature, humidity and shock events occurring during the journey. This raw data is collected and processed according to the parameters of a specific use case. Attributes such as provenance, volume, timing, content, correct labeling and others are critical for accurate supply chain analysis [12]. Data collected from smart containers is enriched with other data sources and then processed using powerful Artificial Intelligence (AI) techniques. The resulting information enables value-added predictive services and context-based alerts. Smart container services deliver accurate, meaningful information to supply chain stakeholders for decision making. Unlike traditional business intelligence (BI) tools, which cannot control data collection, require skilled users and take extended periods of time to deliver meaningful information, AI-based analysis provides insights that matter in real time [12].

Smart container technology is used together with other new technologies such as blockchain, e-platforms and autonomous vehicles. However, the real value of Smart container technology for container liner shipping industry participants still remains unclear, as there is no sufficient confidence in the readiness of companies to pay for the new level of cooperation or in the maturity of Smart container technology. The purpose of this study is to propose a method of new technology evaluation that could help container liner shipping companies align their IT investments with their strategic goals and take precise, well-considered decisions. Beyond the present study, which considers the evaluation of Smart container technologies, this method could be extended to other new “disruptive” technologies evolving nowadays in all industries.
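As a toy illustration of the event data and context-based alerts described in this section, the sketch below models a container event record and a simple threshold rule. The field names, container number and temperature range are hypothetical; real smart container platforms define their own schemas.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class ContainerEvent:
    container_id: str
    timestamp: datetime
    event_type: str                      # e.g. "door_open", "geofence_enter", "telemetry"
    temperature_c: Optional[float] = None
    humidity_pct: Optional[float] = None

def alerts(event: ContainerEvent, temp_range=(2.0, 8.0)) -> List[str]:
    found = []
    if event.event_type == "door_open":
        found.append(f"{event.container_id}: door opened at {event.timestamp}")
    if event.temperature_c is not None and not (temp_range[0] <= event.temperature_c <= temp_range[1]):
        found.append(f"{event.container_id}: temperature {event.temperature_c} C out of range")
    return found

event = ContainerEvent("MSKU1234567", datetime(2020, 3, 1, 14, 5), "telemetry", temperature_c=10.4)
print(alerts(event))
```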


2 Methods

The methodological framework of this study is Enterprise Architecture (EA), a concept of enterprise management. This study uses The Open Group Architecture Framework (TOGAF) methodology and the Capability Driven Approach (CDA), a modern approach to information systems development [13–24]. In the perception of The Open Group, a business capability is a special ability or power that a business can possess or exchange in order to achieve a specific goal or result. A business capability can be defined by a description of what needs to be done, with some details added. The name of the capability is therefore proposed as a compound noun, for example “resource planning”, followed by a brief description of the capability, for example “the capability to plan organizational resources in order to develop and support business tasks” [25].

TOGAF has the following main components:

• the Architecture Capability Framework, which addresses the organization, processes, skills, roles and responsibilities required to establish and operate an architecture function within an enterprise;
• the Architecture Development Method (ADM), which provides a “way of working” for architects. The ADM is considered the core of TOGAF and consists of a stepwise cyclic approach to the development of the overall enterprise architecture;
• the Architecture Content Framework, which considers an overall enterprise architecture as composed of four closely interrelated architectures: Business Architecture, Data Architecture, Application Architecture, and Technology (IT) Architecture;
• the Enterprise Continuum, which comprises various reference models, such as the Technical Reference Model, The Open Group's Standards Information Base (SIB) and the Building Blocks Information Base (BBIB). The idea behind the Enterprise Continuum is to illustrate how architectures are developed across a continuum ranging from foundational architectures, through common systems architectures and industry-specific architectures, to an enterprise's own individual architecture [17, 18].

The TOGAF methodology is supported by the enterprise architecture modeling language ArchiMate and the modeling tool Archi, which were applied in the current study, as well as by the international frameworks COBIT and ITIL [18, 25, 26].

Furthermore, this study applies the Information Economics method, a multi-criteria method for evaluating proposed IT investments. Parker et al. [27, 28] introduced this multi-criteria approach and defined the criteria as follows.

Financial evaluation of the IT investments: enhanced return on investment (ROI). The enhanced ROI looks not only at cash flows arising from cost reduction and cost avoidance, but also provides some additional techniques to estimate incoming cash flows:

• Value linking: additional cash flows that accrue to other departments;
• Value acceleration: additional cash flows arising from the earlier availability of benefits due to reduced time scales;
• Value restructuring: additional cash flows through restructuring work and improved job productivity;


• Innovation valuation: additional cash flows arising from the innovative aspects of the investment (competitive advantage).

Two different domains are also prescribed: the business domain and the technology domain. The business domain evaluation criteria are strategic match, competitive advantage, competitive response, management information and organizational risk. The technology domain criteria are defined as strategic IT architecture compliance, definitional uncertainty, technical uncertainty and IT infrastructure risk. The total evaluation of an IT investment proposal takes place in three steps, covering financial, business and technology criteria, both positive and negative [29]. The TOGAF methodology combined with the Information Economics method provides substantial support for the evaluation of investments in new disruptive technologies.
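To sketch how the three-step total of the Information Economics method can be computed, the snippet below combines an enhanced-ROI score with the business and technology domain criteria as a weighted sum; risk criteria enter with negative weights. All weights and 0–5 scores are invented for illustration and would in practice be set by the evaluating organization [27–29].

```python
# Hypothetical Information Economics scorecard for one IT investment proposal.
weights = {
    "enhanced_roi": 3, "strategic_match": 2, "competitive_advantage": 2,
    "competitive_response": 1, "management_information": 1,
    "strategic_it_architecture": 1,
    # risk and uncertainty criteria reduce the total
    "organizational_risk": -1, "definitional_uncertainty": -1,
    "technical_uncertainty": -1, "it_infrastructure_risk": -1,
}
scores = {  # 0-5 assessments of the proposal against each criterion (assumed)
    "enhanced_roi": 3, "strategic_match": 5, "competitive_advantage": 4,
    "competitive_response": 4, "management_information": 4,
    "strategic_it_architecture": 4,
    "organizational_risk": 2, "definitional_uncertainty": 3,
    "technical_uncertainty": 3, "it_infrastructure_risk": 2,
}
total = sum(weights[c] * scores[c] for c in weights)
print("Information Economics total:", total)
```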

3 Results

As the authors have described earlier [30–34], the main task of managing a logistics system is to manage informational, service, material and financial flows. Figure 1 represents the supply chain of container transportation by sea using TOGAF methodology definitions; it identifies the business actors who manage the logistics system flows, together with their key capabilities and resources. The main business actors in the container liner shipping supply chain are the Shipper, the Carrier, the Forwarder, the Digital Forwarder, the Insurance Company, the Bank or Factoring Company, and the Consignee. Since the evaluation of IT investments in accordance with ITIL and the Information Economics method relates directly to a business case, this study considers the evaluation of investments in Smart container technology by Carriers. The term “Carrier” in the current study denotes the container shipping line, a global or local company that provides container transportation services. The Carrier's key capabilities are container shipping in accordance with a linear schedule and a vessel call network, door-to-door container delivery in the specified direction, and flexibility and transparency of the transportation process. These capabilities are supported by such key resources as containerships, containers, technologies and data (cargo, shipper, consignee, terminals). Smart container technology could ensure better transparency and flexibility of the transportation process, which are becoming strategic capabilities for carriers. Furthermore, Smart container technology could provide better support for risk management in financing and insurance activity. The business case for Smart container technology investments should be considered against the Information Economics evaluation criteria.

Enhanced ROI:

1. The Carrier could manage its containers in real time, reducing costs for extra shifts, moving and storage.

Fig. 1. Supply chain of sea container transportation, represented with TOGAF methodology definitions



2. Value linking: data about empty containers in real time will be available not only to the Carrier but to its partners as well, providing the capability to offer containers for client bookings faster. Tracking of laden containers will provide information for analysis and capacity forecasting.
3. Value acceleration: container utilization should improve because of information availability, so revenue growth can be expected.
4. Value restructuring: customers (Shipper, Forwarder, Consignee) will save customs clearance time, reduce risks and gain additional ability to manage their cargo flows, so the Carrier has the possibility to sell them an additional service surcharge.
5. Innovation valuation: using Smart container technology together with other new technologies, such as digital platforms or blockchain, could generate additional cash flows for other possible services.

Business domain criteria:

1. Strategic match: Smart container technology will reinforce the strategic capabilities of transportation process transparency and flexibility and will provide additional risk management capabilities.
2. Competitive advantage: should be evaluated from the viewpoint of value proposition and partnership:
• Smart container technology could support such value propositions as shipping container monitoring, supply chain monitoring and analysis, working capital solutions (potential for optimizing working capital), carbon footprint reporting, and multimodal delivery tracking;
• Smart container technology enables a partnership that supports cross-functional, horizontal management in the supply chain.

3. Competitive response: Smart container technology will become commonplace in the future, so using it is a competitive response to the industry challenges.
4. Management information: Smart container technology will provide real-time data and analytics for management decisions.
5. Organizational risk: an evaluation of service reliability and service coherence in accordance with standards and best practices is needed.

Thus, the TOGAF methodology and the Capability-Driven Approach support the evaluation of IT investments covering financial and business domain criteria, but they provide the most considerable assistance for the technology domain criteria assessment.

Technology domain criteria:

1. Strategic IT architecture compliance. The TOGAF methodology, with the help of the ArchiMate modelling language, could support the creation of “as is” and “to be” IT architecture reference models to define the target IT architecture, the gaps, and the course of action needed to reach the target architecture. The target IT architecture of the global container shipping line was


represented by the authors before [25]. On the basis of such reference models, the Carrier could evaluate the compliance of the infrastructure supporting Smart container technology with the company's strategic IT architecture and analyze the available options. The Capability-Driven Approach helps with the estimation of the strategic capacity and the necessary partnerships: strategic capacity could be obtained by the Carrier itself or via a partnership with another supply chain participant, such as the Digital Forwarder. Furthermore, at this stage a set of technologies can be defined that provide synergy when exploited together with Smart container technology.
2. Definitional uncertainty. Smart container technology is a set of technologies, such as machine learning, natural language processing, voice/audio recognition, image recognition, search, routing, autonomous transport, sensors and others; with the help of TOGAF main components such as the Architecture Content Framework and the Enterprise Continuum reference models, the Carrier could eliminate uncertainty in terms and definitions.
3. Technical uncertainty. The TOGAF methodology and the international frameworks COBIT and ITIL will support IT architecture governance and the technical background documents for Smart container technology implementation.
4. IT infrastructure risk. The Enterprise Architecture approach, together with COBIT and ITIL best practices, will provide substantial support for eliminating IT infrastructure risk.

Therefore, Smart container technology could provide substantial financial and non-financial benefits to the Carrier (the container shipping line) and enhance its partnership with other supply chain participants. Technology implementation could be evaluated with the help of the Enterprise Architecture approach, including the TOGAF methodology and the Capability-Driven Approach, in combination with the Information Economics method. EA and CDA provide substantial support in defining the business and technology domain criteria of the Information Economics method.

4 Discussion and Conclusions

Smart container technology ensures better transparency and flexibility of transportation processes, which are becoming strategic capabilities for carriers. Furthermore, Smart container technology could provide better support for risk management in financing and insurance activity. However, this technology has not reached maturity yet, which is why it is still not widely used: customers are ready to pay for it only if the container cargo is expensive enough and they can receive additional benefits from its tracking. Smart container technology will become more in demand after it reaches the productivity plateau [35] and after synergies of its deployment with other new disruptive technologies are realized. Smart container technology could provide substantial financial and non-financial benefits, which can be evaluated with the help of the Enterprise Architecture


approach, including the TOGAF methodology and the Capability-Driven Approach in combination with the Information Economics method. The method presented in the current research provides effective support for evaluating new technology deployment.

Acknowledgement. The reported study was funded by RSCF according to the research project № 19-18-00452.

References

1. Christopher, M.: Logistics and Supply Chain Management. Creating Value-Added Networks. Pearson Education Limited, Great Britain (2005)
2. Feibert, D.C., Hansen, M.S., Jacobsen, P.: An integrated process and digitalization perspective on the shipping supply chain – a literature review. In: IEEE International Conference on Industrial Engineering and Engineering Management (IEEM) 2017, pp. 1352–1356 (2017)
3. Foster, M.: The cognitive enterprise: reinventing your company with AI. Seven keys to success. IBM Institute for Business Value. https://www.ibm.com/thought-leadership/institute-business-value/report/cognitive-enterprise. Accessed 08 Mar 2020
4. Fruth, M., Teuteberg, F.: Digitization in maritime logistics—what is there and what is missing? Cogent Bus. Manag. 4, 1411066 (2017)
5. Gallay, O., Korpela, K., Tapio, N., Nurminen, J.K.: A peer-to-peer platform for decentralized logistics. In: Kersten, W., Blecker, T., Ringle, C.M. (eds.) Digitalization in Supply Chain Management and Logistics, pp. 19–34. epubli GmbH, Berlin (2017)
6. Lam, J.S.L., Zhang, X.: Innovative solutions for enhancing customer value in liner shipping. Transp. Policy 82, 88–95 (2019)
7. Muñuzuri, J., Onieva, L., Cortés, P., Guadix, J.: Using IoT data and applications to improve port-based intermodal supply chains. Comput. Ind. Eng. 139, 105668 (2019)
8. Orlova, V., Ilin, I., Shirokova, S.: Management of port industrial complex development: environmental and project dimensions. In: International Scientific Conference Environmental Science for Construction Industry – ESCI 2018, MATEC Web of Conferences, vol. 193, no. 1, p. 05055 (2018)
9. Pervez, H., Haq, I.U.: Blockchain and IoT based disruption in logistics. In: 2019 2nd International Conference on Communication, Computing and Digital Systems (C-CODE 2019), 6–7 March 2019, pp. 276–281, Islamabad (2019)
10. Saxe, S., Jahn, C., Brümmerstedt, K., Fiedler, R., Flitsch, V., Roreger, H., Sarpong, B., Scharfenberg, B.: Digitalization of Seaports. Visions of the Future. Fraunhofer Verlag, Stuttgart (2017)
11. Twenhoven, T., Petersen, M.: Impact and beneficiaries of blockchain in logistics. In: Kersten, W., Blecker, T., Ringle, C.M. (eds.) Digitalization in Supply Chain Management and Logistics, pp. 444–468. epubli GmbH, Hamburg (2019)
12. Unlocking hidden supply chain value with AI. Traxens – connecting the dots. https://www.traxens.technology.com. Accessed 01 Mar 2020
13. Ilin, I., Levina, A., Abran, A., Iliashenko, O.: Measurement of Enterprise Architecture (EA) from an IT perspective: research gaps and measurement avenues. In: Proceedings of the 27th International Workshop on Software Measurement and 12th International Conference on Software Process and Product Measurement (IWSM Mensura 2017), pp. 232–243. Association for Computing Machinery, New York (2017)
14. Ilin, I., Levina, A., Iliashenko, O.: Enterprise architecture approach to mining companies engineering. In: International Science Conference SPbWOSCE-2016 SMART City, MATEC Web of Conferences, vol. 106, p. 08066 (2017)
15. Ilin, I.V., Iliashenko, O.Y., Borremans, A.D.: Analysis of cloud-based tools adaptation possibility within the software development projects. In: The 30th International Business Information Management Association Conference, IBIMA 2017 Vision 2020: Sustainable Economic Development, Innovation Management, and Global Growth, January 2017, pp. 2729–2739 (2017)
16. Ilin, I., Levina, A., Lepekhin, A., Kalyazina, S.: Business requirements to the IT architecture: a case of a healthcare organization. In: 2019 Advances in Intelligent Systems and Computing, vol. 983, pp. 287–294 (2019)
17. Jonkers, H., Proper, E., Turner, M.: TOGAF™ and ArchiMate®: A Future Together. White Paper W192 (2009)
18. Josey, A.: TOGAF® Version 9.1 – A Pocket Guide. Van Haren (2016)
19. Josey, A., Lankhorst, M., Band, I., Jonkers, H., Quartel, D.: An introduction to the ArchiMate® 3.0 specification. White Paper from The Open Group (2016)
20. Lankhorst, M.: Enterprise Architecture at Work. Modelling, Communication and Analysis. Springer, Berlin (2017)
21. Levina, A.I., Borremans, A.D., Burmistrov, A.N.: Features of enterprise architecture designing of infrastructure-intensive companies. In: Proceedings of the 31st International Business Information Management Association Conference, IBIMA 2018: Innovation Management and Education Excellence through Vision 2020, pp. 4643–4651, Spain (2018)
22. Sandkuhl, K., Stirna, J.: Capability Management in Digital Enterprises. Springer, Berlin (2018)
23. Ulrich, W., Rosen, M.: The business capability map: building a foundation for business/IT alignment. Cutter Consortium for Business and Enterprise Architecture. http://www.cutter.com/content-and-analysis/resource-centers/enterprise-architecture/sample-our-research/ea110504.html. Accessed 10 Mar 2020
24. Kochovski, P., Sakellariou, R., Bajec, M., Drobintsev, P., Stankovski, V.: An architecture and stochastic method for database container placement in the edge-fog-cloud continuum. In: Proceedings – 2019 IEEE 33rd International Parallel and Distributed Processing Symposium, IPDPS 2019, no. 8821021, pp. 396–405 (2019)
25. A Business Framework for the Governance and Management of Enterprise IT. COBIT, an ISACA framework. ISACA, USA (2012)
26. The Official Introduction to the ITIL Service Lifecycle. TSO, United Kingdom (2007)
27. Parker, M.M., Benson, R.J., Trainor, H.E.: Information Economics: Linking Business Performance to Information Technology. Prentice-Hall, New Jersey (1988)
28. Parker, M.M., Benson, R.J., Trainor, H.E.: Information Strategy and Economics. Prentice-Hall, New Jersey (1989)
29. Renkema, T.J.W., Berghout, E.W.: Methodologies for information systems investment evaluation and the proposal stage: a comparative review. Inf. Softw. Technol. 39, 1–13 (1997)
30. Maydanova, S.A., Ilin, I.V.: Blockchain as a tool for shipping industry efficiency increase. In: Fundamental and Applied Research in Management, Economy and Trade Conference, 2018, pp. 50–58, Russia (2018)
31. Maydanova, S., Ilin, I.: Problems of the preliminary customs informing system and the introduction of the Single Window at the sea check points of the Russian Federation. In: Siberian Transport Forum – TransSiberia 2018, MATEC Web of Conferences, vol. 239, p. 04004, Russia (2018)
32. Maydanova, S., Ilin, I., Jahn, C., Lange, A.K., Korablev, V.: Balanced scorecard for the digital transformation of global container shipping lines. In: International Conference on Digital Transformation in Logistics and Infrastructure (ICDTLI 2019), Russia (2019)
33. Maydanova, S., Ilin, I., Jahn, C., Weigell, J.: Global container shipping line digital transformation and enterprise architecture modelling. In: International Scientific Conference ‘Digital Transformation on Manufacturing, Infrastructure and Service’ (DTMIS 2019), Russia (2019)
34. Maydanova, S., Ilin, I., Lepekhin, A.: Capabilities evaluation in an enterprise architecture context for digital transformation of seaports network. In: Proceedings of the 33rd International Business Information Management Association Conference, IBIMA 2019: Education Excellence and Innovation Management through Vision 2020, pp. 5103–5111, Spain (2019)
35. Top Trends from Gartner Hype Cycle for Digital Government Technology (2019). https://www.gartner.com/smarterwithgartner/top-trends-from-gartner-hype-cycle-for-digital-government-technology-2019. Accessed 10 Mar 2020

The Application of Machine Learning to One-Dimensional Problems of Mechanics of a Solid Deformable Body

Viacheslav Reshetnikov1 and Andrea Tick2

1 Peter the Great Polytechnic University of St. Petersburg, St. Petersburg, Russia
[email protected]
2 Óbuda University, Budapest, Hungary

Abstract. This article is devoted to the study of the applicability of machine learning methods to one-dimensional problems in the mechanics of a deformable solid. The authors analyzed the research works available at the time of writing. As part of the work, the tasks of restoring the values of the bending moment and the deflection of a cantilever beam were considered. The restoration of the bending moment function using machine learning algorithms yielded positive results. The distribution function of the displacements in the beam was not restored with the necessary accuracy. The obtained results show the possibility of using machine learning in the mechanics of deformable solid bodies and also indicate the need to study this topic on more complex problems.

Keywords: Machine learning · Mechanics of deformable solid bodies · Regression · Quality metric · Cross-validation · Kernel trick



1 Introduction

The research topic is relevant: machine learning and neural networks are now developing rapidly. Researchers are trying to unleash their potential in all fields of science and activity, whether it be stock exchange trading or genetics. This widespread usage is explained by the range of tasks that can be solved: classification, regression, association, clustering, pattern recognition and anomaly detection. Unfortunately, in the field of the mechanics of deformable solid bodies there are few such works. The authors found several articles on similar research topics:

1. A review article [2] on the applicability of machine learning to mechanical problems. The authors of this article focus on the monitoring of mechanical systems and the prediction of failure of mechanisms based on the readings of various diagnostic devices.

2. An article [3] focused on the use of various frequency parameters, such as natural frequency, natural mode shape and damping coefficient, to detect cracks in beams. That work considers parameters such as beam deflection, bending moment, and stress distribution. Modeling is performed using the ANSYS finite element package to find the relationship between the change in natural frequencies and natural mode shapes for beams with and without cracks. This is then confirmed


by the results obtained from the ANN (Artificial Neural Network) controller and the genetic algorithm. The ANN is used to determine the depth and location of a crack, taking the propagation directions, the natural frequencies and the relative difference in natural mode shapes as input parameters for calculating the change in vibration parameters. The output of the ANN controller is the relative crack depth and relative crack location. The results of the numerical analysis are compared with experimental results and agree well with the results predicted by the ANN controller.

3. The article [1], in which the authors applied an artificial neural network to the problem of vibration analysis of a cantilever beam with a surface crack using the Finite Element Modeling (FEM) method. The results of this study show that the neural network helps to reduce the effort associated with modeling while providing solutions of reasonable accuracy.

4. The article [4], which describes the creation and operation of a neural network for predicting the deflections of reinforced concrete beams at various load levels. To obtain the database of results necessary for training and testing the neural network, a study measuring deflections in reinforced concrete beams was carried out by its authors in a certified laboratory of the Wroclaw University of Science and Technology. The use of neural networks is an alternative to traditional methods of calculating the deflection of reinforced concrete elements. The research results show the effectiveness of using a neural network to predict the deflection of a reinforced concrete beam in comparison with the results of analytical calculations.

Such a small amount of research on the use of machine learning and neural networks for problems of the mechanics of a deformable solid was the motivation for this work. The most interesting tasks for mechanics are regression and association tasks. Regression tasks are interesting because:

• there are tasks that are compositions of simple problems (each having a solution) but do not have an analytical solution themselves;
• by combining simple problems using machine learning algorithms, one can predict the solution to a more complex problem.

Solving the association problem leads to the identification of complex dependencies in experimental data. The goal of this paper is to study the possibility of using machine learning algorithms in one-dimensional problems of the mechanics of a deformable solid by the example of cantilever beam bending. The goal predetermined the solution of the following tasks:

• choice of the type of algorithms;
• choice of algorithms: the algorithms used are linear regression, the support vector method, the nearest neighbors method, gradient methods, decision trees and random forests;
• selection of the feature space: as parameters of the sampling objects, the load value, the distance to the load application point, the type of fastening, the number of supports, the beam length, Young's modulus, and the moment of inertia were chosen; further, these data were converted in accordance with their type


(categorical or numerical) according to the data processing techniques of machine learning (one-hot encoding, binary encoding, etc.);

• selection of appropriate metrics for assessing the quality of machine learning models;
• the choice of technology: all actions were performed in Python using frameworks such as sklearn and XGBoost.

The study is based on theoretical knowledge in the field of the mechanics of a deformable solid and on the theory and practice of machine learning. The scientific novelty of the study lies in the following points:

• the authors' vision of the use of machine learning;
• qualitative characteristics of the applicability of machine learning algorithms to one-dimensional problems in the mechanics of a deformable solid;
• recommendations on data processing for training machine learning models in solving one-dimensional problems in the mechanics of a deformable solid;
• recommendations for choosing regression algorithms for solving one-dimensional problems in the mechanics of a deformable solid.

The authors obtained new scientific results and methodological justifications for the use of machine learning for one-dimensional problems in the mechanics of a deformable solid.

1.1 The Choice of Types of Algorithms

There are several methods in machine learning (Russian Machine Learning community, 2019):

• supervised learning;
• unsupervised learning;
• semi-supervised learning.

Supervised learning is where you have input variables (X) and an output variable (Y), and you use an algorithm to learn the mapping function from input to output. The goal is to approximate the mapping function so well that, for new input data (X), you can predict the output variables (Y). This is called supervised learning because the process of an algorithm learning from a training data set can be thought of as a teacher supervising the learning process. We know the correct answers; the algorithm iteratively makes predictions on the training data and is corrected by the teacher. Learning stops when the algorithm reaches an acceptable level of performance. Supervised learning can be applied to regression and classification problems.

Classification: the classification problem is that the output variable is a category, such as "red" or "blue", or "disease" and "no disease".

Regression: the regression problem is that the output variable is a real value, such as "dollars" or "weight".


Some common types of problems built on top of classification and regression include recommendations and time series forecasts, respectively. Some popular examples of supervised machine learning algorithms are:

• linear regression for regression tasks;
• random forest for classification and regression problems;
• support vector machine for classification tasks.

Unsupervised learning is where you only have input data (X) and no corresponding output variables. The goal of unsupervised learning is to model the underlying structure or distribution of the data in order to learn more about it. This is called unsupervised learning because, unlike supervised learning, there are no correct answers and no teacher. Algorithms are left to their own devices to discover and present the interesting structure in the data. Unsupervised learning can be applied to clustering and association problems.

Clustering: a clustering problem is where you want to discover the inherent groupings in the data, such as grouping customers by purchasing behavior.

Association: an association rule learning problem is where you want to find rules that describe large portions of your data, for example, people who buy X also tend to buy Y.

Some popular examples of unsupervised learning algorithms are:

• k-means for clustering tasks;
• the Apriori algorithm for learning association rules.

Problems in which you have a large amount of input data (X) and only some of the data is labeled (Y) are called semi-supervised learning problems. These problems sit between supervised and unsupervised learning. A good example is a photo archive where only some of the images (for example, a dog, a cat, a person) are labeled, and most are unlabeled. Many real-world machine learning problems fall into this area, because labeling data can be expensive or time-consuming, as it may require access to domain experts, while unlabeled data is cheap and easy to collect and store. You can use unsupervised learning methods to discover and study structure in the input variables. You can also use supervised learning methods to make best-guess predictions for the unlabeled data, feed that data back into the supervised learning algorithm as training data, and use the model to make predictions on new, unseen data.

Based on the above, a regression type of algorithms was chosen to solve the problem of restoring the distribution functions of the bending moment and the deflection of the beam.

1.2 The Choice of Algorithms

There are many regression algorithms. When solving the problem, the following algorithms were considered [5]:

1. linear regression;
2. k-nearest neighbors;
3. support vector regressor;
4. decision tree regressor;
5. random forest regressor;
6. gradient boosting.
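For illustration, this is how the six regressors can be instantiated in scikit-learn and XGBoost; the hyperparameters shown are illustrative defaults, not the authors' settings:

```python
# The six regressors considered, instantiated with illustrative defaults.
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

models = {
    "linear regression": LinearRegression(),
    "k-nearest neighbors": KNeighborsRegressor(n_neighbors=5),
    "support vector regressor": SVR(kernel="rbf"),
    "decision tree": DecisionTreeRegressor(),
    "random forest": RandomForestRegressor(n_estimators=100),
    "gradient boosting": XGBRegressor(n_estimators=100),
}
```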

Linear regression is the simplest and fastest regression algorithm and perhaps the only one that has a simple analytical solution. Therefore, the linear regression solution is usually taken as the baseline against which the other methods are compared. The methods above are listed in increasing order of their algorithmic complexity.

1.3 Input Data Generation

The data for training was generated using the Python language [6–8]. According to machine learning best practices, it was divided into two parts: training and validation samples [10, 12, 13]. To make sure that the model was not overfitted, the parameters for data generation were selected from non-overlapping ranges. The generation was performed by appending to the array objects consisting of the task parameters and the analytical result obtained for them. Further, for use with the sklearn [11] framework, the data were converted into a pandas DataFrame [14] object (Table 1).

Table 1. Pandas DataFrame with data for training.

Sample ID   F    L    l    Target
1280        15   210  63   −99225
5141        58   110  33   −105270
1648        19   310  62   −182590
3782        43   10   3    −645
2959        33   810  648  −8660520
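The listing below is a minimal sketch (not the authors' original script) of how such a table could be generated; the target formula −F·L·l/2 is inferred from the rows of Table 1, and the parameter ranges are hypothetical:

```python
# Minimal sketch of generating the training table; the target -F*L*l/2 is
# inferred from the rows of Table 1, parameter ranges are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def make_samples(n, low=1, high=100):
    F = rng.integers(low, high, n)                    # load value
    L = rng.integers(10, 1000, n)                     # beam length
    l = np.maximum((L * rng.uniform(0.05, 0.9, n)).astype(int), 1)  # distance
    return pd.DataFrame({"F": F, "L": L, "l": l, "Target": -F * L * l / 2})

train = make_samples(500_000)
validation = make_samples(100_000, low=100, high=200)  # non-overlapping range,
                                                       # as the paper prescribes
```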

2 Results Validation

First, one needs to choose a metric to assess the quality of the model. For regression analysis, the recommended metrics are MSE (mean squared error), RMSE (root mean squared error), R2 and MAE (mean absolute error). In this work, the RMSE and R2 metrics were used to evaluate not only the accuracy but also the degree of unexplained variance of the model. However, a metric alone is not enough, because it shows the quality of the model on only one data set. For a better assessment of the model, cross-validation was used. Cross-validation is a model validation technique for checking how well the statistical analysis used in a model works on an independent data set. Typically, cross-validation is used in situations where the goal is prediction, and one would like to evaluate how accurately a predictive model will work in practice. One cross-validation cycle involves splitting the data set into parts, then building the model on one part (called the training set) and validating the model on the other part (called the test set). To reduce the scatter of the results, several cross-validation cycles are conducted on different partitions, and the validation results are averaged over all cycles (Fig. 1).

Fig. 1. Scheme of the cross-validation algorithm.
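A minimal sketch of this validation scheme, assuming the `train` table from the previous listing (the model choice and fold count are assumptions):

```python
# k-fold cross-validation with the RMSE and R^2 metrics used in the paper.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_validate

X, y = train[["F", "L", "l"]], train["Target"]
scores = cross_validate(
    RandomForestRegressor(n_estimators=20, random_state=0),
    X, y,
    cv=5,                                         # five different partitions
    scoring=("neg_root_mean_squared_error", "r2"),
)
# Validation results are averaged over all cycles, as in Fig. 1.
print("RMSE:", -scores["test_neg_root_mean_squared_error"].mean())
print("R2:  ", scores["test_r2"].mean())
```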

2.1 Restoration of Bending Moment

Let us start with the simplest task: we will try to restore the dependence of the bending moment on the lever arm of the applied load in the cantilever beam problem. This task is divided into two subproblems:

1. with the force applied at the end (Fig. 2a);
2. with a distributed load applied (Fig. 2b).

Fig. 2. Scheme of loading the cantilever beam (a – force at the end; b – distributed load).


Subproblem 1 was solved in two iterations: in the first, the load and the length of the beam were taken as inputs; in the second, a column was added with the product of the two parameters with a given coefficient. According to the results of the first iteration, it became clear that the algorithms could not accurately derive the dependence between the parameters and the objective function: the RMSE and R2 metrics were 42259 and −95 on the validation data. Accordingly, for the second iteration it was decided to transfer the data into a rectifying space, i.e. to expand the attribute space by adding various combinations of the parameters. When applied successfully, this technique linearizes the objective function and exposes a complex dependence hidden in one or more parameters. In the literature, this technique is known as the kernel trick [9] (Fig. 3).

Fig. 3. The result of the transition to a rectifying space in the classification problem (2 classes became easily separable).
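A possible sketch of the two iterations described above, using the column names of the hypothetical listings earlier; the product column plays the role of the rectifying feature:

```python
# Iteration 2 adds the product column F*L*l, after which plain linear
# regression recovers the moment almost exactly, as in Table 2.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

for df in (train, validation):
    df["FLl"] = df["F"] * df["L"] * df["l"]           # combined feature

for features in (["F", "L", "l"], ["F", "L", "l", "FLl"]):
    model = LinearRegression().fit(train[features], train["Target"])
    pred = model.predict(validation[features])
    rmse = mean_squared_error(validation["Target"], pred) ** 0.5
    print(features, "RMSE =", rmse, "R2 =", r2_score(validation["Target"], pred))
```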

After adding the column with the multiplied parameters, the machine learning algorithms successfully restored the dependence. For example, for one of the sample instances the predicted value was −1072 N·m, while the value obtained by the analytical method was −1111 N·m. Comparative results of the iterations are shown in Table 2. A visualization of the accuracy of the model is shown in Fig. 4; it shows how closely the predicted and analytically obtained values coincide. The results for the distributed load are given in Table 3.

Table 2. Comparison of the metrics obtained when solving the problem in 2 iterations (validation sample).

            RMSE   R2
Iteration 1 42259  −95
Iteration 2 0      1


Fig. 4. Visualization of the accuracy of model predictions on the validation sample (horizontal axis – target values in N·m, vertical axis – predicted values in N·m).

Table 3. Comparison of the metrics obtained in solving the problem with distributed load in 2 iterations (validation sample).

            RMSE   R2
Iteration 1 68500  0.79
Iteration 2 0.12   0.98

2.2 Restoration of Deflections

This section discusses the problem of finding the deflection of the cantilever beam under the action of a force at its end. The objective function is of a higher order than the moment distribution functions from the previous sections. The feature space consists of the force, the length of the beam, the moment of inertia, Young's modulus, and the distance from the beginning of the beam to the point of interest. To derive the dependence between the attributes more accurately, the number of samples in the training set was increased from 500,000 to 2,000,000. This also made it possible to expand the attribute space without adverse consequences for the quality of the algorithm. The expansion of the attribute space was performed using PolynomialFeatures of degree 4. A graph of the accuracy of the responses is shown in Fig. 5.
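A hedged sketch of this experiment: the standard cantilever end-load deflection formula v(x) = Fx²(3L − x)/(6EI) serves as the analytical target, and the attribute space is expanded with scikit-learn's PolynomialFeatures of degree 4; the parameter ranges and the Ridge regularizer are assumptions:

```python
# Deflection experiment sketch with a degree-4 polynomial feature expansion.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
n = 200_000
F = rng.uniform(1, 100, n)            # force at the free end, N
L = rng.uniform(1, 10, n)             # beam length, m
E = rng.uniform(100e9, 210e9, n)      # Young's modulus, Pa
I = rng.uniform(1e-6, 1e-4, n)        # moment of inertia, m^4
x = rng.uniform(0, 1, n) * L          # distance from the clamped end, m

y = F * x ** 2 * (3 * L - x) / (6 * E * I)   # analytical deflection

model = make_pipeline(PolynomialFeatures(degree=4, include_bias=False),
                      Ridge(alpha=1.0))
model.fit(np.column_stack([F, L, E, I, x]), y)
```

Note that the target depends on the reciprocals of E and I, which no polynomial feature expansion reproduces exactly; this is consistent with the accuracy limits reported below.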


Fig. 5. Visualization of the accuracy of model predictions for the beam deflection problem (iteration 1) on the validation sample (horizontal axis – target values in mm, vertical axis – predicted values in mm).

Figure 5 shows that the required accuracy has not been achieved. An even greater expansion of the attribute space would, with high probability, not increase the accuracy of the results, while significantly increasing the machine time required for training the model. In this regard, the need arises to apply more advanced methods of data aggregation and filtering, and more advanced model architectures (for example, model cascades).

3 Conclusions

The obtained results argue in favor of using machine learning algorithms for simple problems of the mechanics of a deformable solid. Even now, trained models can be used in applications that facilitate routine work. Applicability to more complex tasks requires substantial further research. Currently, the authors are actively conducting research in the field of composite machine learning models and data preprocessing.

Acknowledgment. The authors thank the Russian Science Foundation for support, grant No. 18-11-00245.

References

1. Akbani, I., Varma, I.: Dynamic analysis of a structural beam with surface crack using artificial neural network. Int. J. Sci. Eng. Res. 6(5) (2015)
2. Huang, Q.: Application of artificial intelligence in mechanical engineering. In: Advances in Computer Science Research, vol. 74 (2017)
3. Mhaske, M., Shelke, S.: Detection of depth and location of crack in a beam by vibration measurement and its comparative validation in ANN and GA. Int. Eng. Res. J. (IERJ), Special Issue 2
4. Kaczmarek, M., Szymanska, A.: Application of artificial neural networks to predict the deflections of reinforced concrete beams. Studia Geotechnica et Mechanica 38(2) (2016)
5. Flach, P.: Machine Learning. The Science and Art of Building Algorithms that Extract Knowledge from Data. DMK Press (2015)
6. Dawson, M.: Programming in Python. Peter, St. Petersburg (2014)
7. Lutz, M.: Programming in Python, vol. I, 4th edn. Symbol-Plus, St. Petersburg (2011)
8. Lutz, M.: Programming in Python, vol. II, 4th edn. Symbol-Plus, St. Petersburg (2011)
9. Shawe-Taylor, J., Cristianini, N.: Kernel Methods for Pattern Analysis. Cambridge University Press (2004)
10. Kaggle machine learning courses. https://www.kaggle.com/learn/intro-to-machine-learning
11. scikit-learn data mining library. https://scikit-learn.org
12. Russian Machine Learning community. http://www.machinelearning.ru
13. Stanford machine learning course. http://cs229.stanford.edu/
14. pandas library. https://pandas.pydata.org/

Evaluation Algorithm of Probabilistic Transformation of a Random Variable

Askar Akaev1, Tessaleno Devezas2, Laszlo Ungvari3, and Alexander Petryakov4

1 Institute of Complex Systems Mathematical Research, Lomonosov Moscow State University, Lenin Hills 1, Moscow, Russia
2 Atlântica School of Management Sciences, Health IT & Engineering, Barcarena, Lisbon, Portugal
3 Technical University of Applied Sciences Wildau, Hochschulring 1, 15745 Wildau, Germany
4 St. Petersburg State University of Economics, 21, Sadovaya Street, St. Petersburg, Russia
[email protected]

Abstract. The technical and socio-economic systems that surround us are complex and non-linear, so it is impossible to study them without models that describe random processes. To expand the approaches to studying random processes, we propose an evaluation algorithm for the case where the set of values of an absolutely continuous random variable is divided into intervals, in each of which the random variable undergoes a functional transformation described by a given probability function. The proposed algorithm consists of three steps: limiting the data, performing the calculations, and obtaining the transformation estimates. First, the algorithm makes it possible to evaluate the change in the mathematical expectation. Second, the influence of the probabilistic transformation of the random variable on the expected value can be evaluated for every interval. The algorithm is basic but, if necessary, may be extended in order to obtain estimates of changes in the k-th central moments. In addition, it can become part of another algorithm used for identifying the distribution law of the random variable that results from the probabilistic transformation. As an example of the implementation of the proposed algorithm, the process of technological replacement of jobs in the digital transformation of the economy is considered.

Keywords: Random process · Basic algorithm · Digital transformation · Technological substitution of jobs

1 Introduction

The relationship between the frequency of an event and its probability is organic. Since these two concepts are inseparable, a numerical assessment of the degree of possibility of an event through probability makes practical sense precisely because more probable events occur, on average, more often than less probable ones. The technical and socio-economic systems that surround us are complex and non-linear, so it is impossible to study them without models that describe random processes. Random data management and analysis procedures applicable to a wide range of areas, from the aerospace and automotive industries to oceanographic and biomedical research, are described in detail in (Bendat and Piersol 2010). Simulation modeling and data analysis as tools for studying complex systems are discussed in detail in (Law and Kelton 1991). Algorithms for positive integer-valued random variables with various probabilistic functions have been considered in (Devroye 1991; Shmerling 2015). A study (Saucier 2000) is devoted to the computer generation of statistical distributions for Monte Carlo simulation. Identification methods for nonlinear dynamical systems that include several nonlinearities are described in (Pavlov et al. 2015). Another study (Efimov and Lapitskaya 2015) is devoted to evaluating the effectiveness of the implementation and functioning of information protection systems on the basis of probabilistic methods. (Kulikov 2014) examined in detail the restoration of the probability density function of the main indicators in immunology. (Nartikoev and Peresetskiy 2019) substantiated the use of the generalized four-parameter beta distribution of the second kind for modeling the dynamics of income distribution in Russia. To expand the approaches to studying random processes, we propose a basic algorithm, with the help of which the transformation of the mathematical expectation can be evaluated.

2 Algorithm

Let us consider the following problem. Let there be an absolutely continuous random variable X distributed with density f_X(x) and characterized by a finite and definite mathematical expectation E[X]. The set of values of the random variable is divided by points x_k, k = 1,…,n, into n + 1 intervals, in each of which X undergoes the functional transformation φ_s(x) = (1 − p_s(x))·x, where p_s(x) ∈ [0, 1] is the probability function of the transformation of X in interval s, s = 1,…,n+1. Moreover, the set φ of functional transformations φ_s(x) over all intervals is such that φ: ℝ → ℝ is a Borel measurable function; therefore, Y = φ(X) is a random variable. It is necessary to determine an algorithm for assessing the transformation of the distribution characteristics of X under the probability transformation φ. In the framework of this article, we focus on the first moment, the mathematical expectation. The underlying idea is to obtain a basic algorithm with which one can not only evaluate the transformation of the mathematical expectation but also, if necessary, extend the algorithm to obtain estimates of changes in the k-th central moments. To solve the problem, we propose the use of a function of the following form:

$$V_s^X = \int_{x_{s-1}}^{x_s} x\,f_X(x)\,dx,\qquad s = 1,\dots,n+1 \tag{1}$$


Assuming that the set of values of the random variable X is the entire number line, for s = 1 we set x_0 = −∞, and for s = n + 1, x_{n+1} = +∞. Naturally, substituting x_{s−1} = −∞ and x_s = +∞ into expression (1), we obtain the formula for the mathematical expectation of X. Thus, the function V_s^X in our algorithm characterizes the contribution of interval s to the formation of the value of E[X]. For the expectation of the random variable Y, we use the following formula:

$$E[Y] = E[\varphi(X)] = \int_{-\infty}^{\infty} \varphi(x)\,f_X(x)\,dx = \int_{-\infty}^{\infty} [1 - p_s(x)]\,x\,f_X(x)\,dx \tag{2}$$

Then, by analogy with (1), for E[Y] the following formula can be obtained:

$$V_s^Y = \int_{x_{s-1}}^{x_s} [1 - p_s(x)]\,x\,f_X(x)\,dx \tag{3}$$

The function V_s^Y, similarly to V_s^X, is used to determine the effect of the distribution intervals of the random variable Y on its mathematical expectation. Moreover, as follows from formulas (2) and (3), it is not necessary to know the distribution law of Y: the density f_X(x) is sufficient, and it is given by the condition of the problem. Since for s = 1 and s = n + 1 the corresponding sets of values of X lie on the infinite intervals (−∞; x_1) and (x_n; +∞), for the numerical implementation of the proposed algorithm the first step is to determine a number A (small enough) and a number B (large enough) such that the following conditions are satisfied:

$$\text{a)}\quad \int_{-\infty}^{A} x\,f_X(x)\,dx \le \delta,\qquad \int_{B}^{\infty} x\,f_X(x)\,dx \le \delta;$$
$$\text{b)}\quad \int_{-\infty}^{A} [1-p_s(x)]\,x\,f_X(x)\,dx \le \delta,\qquad \int_{B}^{\infty} [1-p_s(x)]\,x\,f_X(x)\,dx \le \delta \tag{4}$$

where δ is the given accuracy. Thus, for the mathematical expectations of the random variables X and Y we have:

$$E[X] \approx \bar{x} = \int_{A}^{B} x\,f_X(x)\,dx,\qquad E[Y] \approx \bar{y} = \int_{A}^{B} [1-p_s(x)]\,x\,f_X(x)\,dx \tag{5}$$

where x̄ and ȳ are the average values on the interval (A; B).


The second step of the algorithm is to calculate the values of V_s^X and V_s^Y for each s using numerical methods, taking into account the replacement of (−∞; x_1) and (x_n; +∞) by the intervals (A; x_1) and (x_n; B), respectively. At this step, the values x̄ and ȳ can also be obtained:

$$\bar{x} = \sum_{s=1}^{n+1} V_s^X,\qquad \bar{y} = \sum_{s=1}^{n+1} V_s^Y \tag{6}$$

Further, for each s, the indices k_s^X = V_s^X/x̄ and k_s^Y = V_s^Y/ȳ are calculated; they reflect the share of the s-th interval in generating the average value of the random variable. The third, final stage of the algorithm contains the evaluation part. Here we offer the basic estimates:

1. the ratio K = ȳ/x̄, which characterizes the degree of change in the mathematical expectation of the random variable X as a result of the probability transformation φ (obviously, this ratio makes sense for x̄ ≠ 0);
2. the differences k_s^X − k_s^Y, which determine the change in the degree of influence of the s-th interval on the mathematical expectation.

Let us give an example of the implementation of the proposed algorithm for the normal distribution of a random variable X with given parameters:

$$f(x) = \frac{1}{\sigma_x\sqrt{2\pi}}\,\exp\!\left[-\frac{(x-\mu_x)^2}{2\sigma_x^2}\right],\qquad \mu_x = 4,\ \sigma_x = 1.5 \tag{7}$$

According to some alternative feature, the point x = 3 divides the set of values of the random variable X into two intervals, for each of which a logistic probability function of the transformation is given:

$$p(x) = \begin{cases} \dfrac{0.5}{1+\exp[-2.1\,(x+1)]}, & x < 3,\\[6pt] \dfrac{0.5}{1+\exp[-3\,(x-5)]} + 0.5, & x > 3 \end{cases} \tag{8}$$

Functions (7) and (8) are shown graphically in Figs. 1 and 2.


Fig. 1. Probability density function of the normal distribution (μ_x = 4, σ_x = 1.5).

Fig. 2. Random variable transformation probability curve.

According to the first step of the algorithm, it is necessary to determine the numbers A and B that limit the domain of integration without significant loss of accuracy. For a normal distribution, this step can be simplified by applying the 3-sigma rule, since μ_x and σ_x are known. Then the interval (A = μ_x − 3σ_x; B = μ_x + 3σ_x) contains 99.73% of the values assumed by the random variable X. In our example, A = −0.5, B = 8.5. Next, we move on to the second step of the algorithm and calculate V_s^X and V_s^Y using any of the numerical methods for evaluating integrals, for example, the Simpson method. For our case, formulas (1) and (3) take the form:

$$V_1^X = \int_{-0.5}^{3} \frac{x}{1.5\sqrt{2\pi}}\,\exp\!\left[-\frac{(x-4)^2}{4.5}\right] dx,\qquad V_2^X = \int_{3}^{8.5} \frac{x}{1.5\sqrt{2\pi}}\,\exp\!\left[-\frac{(x-4)^2}{4.5}\right] dx,$$
$$V_1^Y = \int_{-0.5}^{3} \left[1 - \frac{0.5}{1+\exp[-2.1\,(x+1)]}\right] \frac{x}{1.5\sqrt{2\pi}}\,\exp\!\left[-\frac{(x-4)^2}{4.5}\right] dx,$$
$$V_2^Y = \int_{3}^{8.5} \left[0.5 - \frac{0.5}{1+\exp[-3\,(x-5)]}\right] \frac{x}{1.5\sqrt{2\pi}}\,\exp\!\left[-\frac{(x-4)^2}{4.5}\right] dx \tag{9}$$

For convenience, the numerical results are presented in Table 1.

Table 1. Algorithm's second step calculation results.

s   V_s^X   k_s^X    V_s^Y   k_s^Y
1   0.50    12.6%    0.25    20.3%
2   3.49    87.4%    0.99    79.7%
    x̄ = 3.99         ȳ = 1.24
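To make the second and third steps concrete, a minimal sketch for the worked example is given below, using scipy quadrature in place of the Simpson method and the sign conventions assumed in the reconstruction of (8):

```python
# Steps 2-3 of the algorithm for mu_x = 4, sigma_x = 1.5, split point x = 3.
import numpy as np
from scipy.integrate import quad

mu, sigma = 4.0, 1.5
A, B = mu - 3 * sigma, mu + 3 * sigma        # -0.5 and 8.5 by the 3-sigma rule

def f(x):                                    # normal density (7)
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def p(x):                                    # transformation probability (8)
    if x < 3:
        return 0.5 / (1 + np.exp(-2.1 * (x + 1)))
    return 0.5 / (1 + np.exp(-3 * (x - 5))) + 0.5

V1X, _ = quad(lambda x: x * f(x), A, 3)
V2X, _ = quad(lambda x: x * f(x), 3, B)
V1Y, _ = quad(lambda x: (1 - p(x)) * x * f(x), A, 3)
V2Y, _ = quad(lambda x: (1 - p(x)) * x * f(x), 3, B)

x_bar, y_bar = V1X + V2X, V1Y + V2Y          # expected: 3.99 and 1.24 (Table 1)
K = y_bar / x_bar                            # expected: about 0.31
k1X, k2X = V1X / x_bar, V2X / x_bar          # interval shares before ...
k1Y, k2Y = V1Y / y_bar, V2Y / y_bar          # ... and after the transformation
```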

The group of formulas (9) can also have a graphical representation (Figs. 3 and 4)

Fig. 3. Function value curve x·f_X(x).


Fig. 4. Function value curve φ(x)·f_X(x).

In Figs. 3 and 4, the areas of the regions marked 1 and 2 represent, in geometric form, the values V_s^X and V_s^Y. The third step of the algorithm is to calculate the proposed estimates. So, K = 1.24/3.99 ≈ 0.31 and k_1^Y − k_1^X = k_2^X − k_2^Y = 7.7%, from which it follows that, as a result of the probabilistic transformation, the mathematical expectation of the random variable X decreases slightly more than 3 times, with a parallel decrease in the influence on its value of the category corresponding to x > 3. Summarizing the above, we briefly outline the steps of the algorithm for assessing the transformation of the distribution characteristics of a random variable as a result of a probabilistic transformation:

1. determination of the parameters A and B, limiting the set of values of the random variable while maintaining the specified accuracy of the mathematical expectation;
2. calculation of the values V_s and k_s for the initial random variable and for the one resulting from the transformation;
3. obtaining the transformation estimates themselves (K, k_s^X − k_s^Y).

As noted above, the proposed algorithm is basic and can be extended by adding to it estimates of changes in other characteristics of a random variable, for example, the variance. In addition, in our opinion, of research interest is the problem of identifying the distribution law of the random variable resulting from a probabilistic transformation, the solution of which can also complement the transformation estimation algorithm.


3 Application Area

As an implementation example of the proposed algorithm, we consider the process of technological replacement of jobs under the digital transformation of the economy. The problem of the interaction between the diffusion mechanisms of new technologies and employment is one of the key issues during the transformation of an economic system (Autor et al. 2003; Bogliacino and Pianta 2010; Graetz and Michaels 2015; Pellegrino et al. 2015; Arntz et al. 2016; Autor and Salomons 2018; Bessen 2019). In general, the distribution of the relative labor force l(h) with skill level h is close to normal, and the distribution curve of the effective labor force by skill level, l_ef(h), is described by the equation l_ef(h) = h·l(h). On the interval (h_m; h_M) found by the 3-sigma rule, the boundaries of low (h_m ≤ h ≤ h_1), medium (h_1 ≤ h ≤ h_2) and high (h_2 ≤ h ≤ h_M) qualification can be determined. For example, with distribution characteristics μ_h = 4.72, σ_h = 1.15, the values are h_m = 1.27, h_1 = 3.6, h_2 = 6.05, h_M = 8.17. The probability of substitution of the low-skilled labor force engaged in non-routine labor activity can be taken equal to zero, i.e.

$$p_1(h) = 0,\qquad h_m \le h \le h_1 \tag{10}$$

As the cost of intelligent machines (IM) decreases, the replacement of human labor will begin in this segment as well. Therefore, we can also consider an increase in the probability of substitution according to a quadratic or linear law within h_m ≤ h ≤ h_1:

$$\text{a)}\quad p_3(h) = 0.166\,(h-h_m)^2;\qquad \text{b)}\quad p_4(h) = -0.495 + 0.39\,h \tag{11}$$

IM will begin to actively replace middle-skilled workers engaged in routine cognitive work and, gradually, as their intellectual level rises, will move on to replacing highly skilled workers. Moreover, while the probability of IM substituting workers in the lower segment of medium qualification (h > h_1 = 3.6) is already close to 100% today (p(h_1) ≈ 1), the probability of replacement of researchers (h ≈ h_M = 8.17) even ten years from now, in 2030, will be close to zero (p(h_M) ≈ 0), since the onset of the singularity, when artificial intelligence (AI) will surpass human intelligence, is expected only in the 2040s. Since substitution processes, as a rule, follow the logistic law, the probability distribution curve connecting the points h_1 and h_M can be represented by a logistic curve:

$$p_2(h) = \frac{1}{1+\exp\!\left[\vartheta_h\left(h - \frac{h_1+h_M}{2}\right)\right]} \tag{12}$$

The parameter ϑ_h is found from the standard conditions p(h_1) = 0.9, p(h_M) = 0.1. Solving Eq. (12) either for h = h_1 or for h = h_M, we obtain ϑ_h ≈ 0.963. For further analysis, on the low-skilled sector (h_m ≤ h ≤ h_1) we take option (10), with


p(h) = 0. As a result, we obtain the following distribution curve describing the probabilities of technological replacement of labor in the current decade:

$$p(h) = \begin{cases} 0, & h_m \le h \le h_1,\\[6pt] \dfrac{1}{1+\exp\!\left[\vartheta_h\left(h-\frac{h_1+h_M}{2}\right)\right]}, & h_1 \le h \le h_M \end{cases} \tag{13}$$

Options (10) and (11) are presented graphically in Fig. 5 under the numbers 1, 3 and 4, respectively; option (12) is shown under the number 2.

Fig. 5. Workforce technological replacement probability curve.

Now we can obtain the distribution of the effective labor force employed in the economy after the completion of the digital transformation in the 2030s. Let us write it in the form:

$$l_{efe}(h) = [1 - p(h)]\,h\,l(h) \tag{14}$$

This distribution is presented in graphical form in Fig. 6. As can be seen from this figure, as a result of the digital transformation of the economy, labor is polarized into high- and low-skilled jobs, with a sharp reduction in middle-skill jobs.


Fig. 6. Distribution of effective workforce employed in the digital economy.

Having narrowed the lower and upper limits of integration to the interval (h_m ≤ h ≤ h_M), which contains practically all possible values (99.7%) of the quantity l(h), we write down the equations for the effective labor force before and after the digital transformation:

$$l_{ef}^{(a)} = \int_{1.27}^{8.17} h\,l(h)\,dh,$$
$$l_{efe}^{(a)} = \int_{1.27}^{3.6} h\,l(h)\,dh + \int_{3.6}^{8.17} \frac{\exp[0.963\,(h-5.88)]\,h\,l(h)}{1+\exp[0.963\,(h-5.88)]}\,dh \tag{15}$$

where l(h) is the normal distribution with μ_h = 4.72, σ_h = 1.15. Having performed the calculations, we obtain:

$$l_{ef}^{(a)} = 4.71,\qquad l_{efe}^{(a)} = 2.03 \tag{16}$$

The ratio of l_ef^(a) to l_efe^(a) shows how much the number of people employed in the traditional economy will decrease as a result of the digital transformation:

$$\frac{l_{ef}^{(a)}}{l_{efe}^{(a)}} = 2.33 \tag{17}$$

As can be seen, job cuts due to technological substitution will amount to 57%. The shares of effective workers of low (k_efe^(l)), medium (k_efe^(m)) and high (k_efe^(h)) qualification in the digital economy are calculated by the formulas:

$$\text{a)}\quad k_{efe}^{(l)} = \frac{1}{l_{efe}^{(a)}}\int_{h_m}^{h_1} [1-p(h)]\,h\,l(h)\,dh = \frac{1}{2.03}\int_{1.27}^{3.6} h\,l(h)\,dh;$$
$$\text{b)}\quad k_{efe}^{(m)} = \frac{1}{l_{efe}^{(a)}}\int_{h_1}^{h_2} [1-p(h)]\,h\,l(h)\,dh = \frac{1}{2.03}\int_{3.6}^{6.05} \frac{\exp[0.963\,(h-5.88)]\,h\,l(h)}{1+\exp[0.963\,(h-5.88)]}\,dh;$$
$$\text{c)}\quad k_{efe}^{(h)} = \frac{1}{l_{efe}^{(a)}}\int_{h_2}^{h_M} [1-p(h)]\,h\,l(h)\,dh = \frac{1}{2.03}\int_{6.05}^{8.17} \frac{\exp[0.963\,(h-5.88)]\,h\,l(h)}{1+\exp[0.963\,(h-5.88)]}\,dh \tag{18}$$

4 Conclusion With the development of the technological base, especially information and communication technologies, not only the technical systems themselves are greatly transformed, but the involvement of the person himself as a user of the benefits produced by such systems is becoming increasingly widespread. On the other hand, the rapid evolution of the technical systems themselves contributes to the increasingly crowding out of man from those sectors of the economy where his work is replaced by intelligent machines. These two trends make future development trends increasingly difficult to predict. In this regard, the use of basic algorithms that allow not only to evaluate the transformation of the average value of a random variable, but also, if necessary, complicate the algorithm to obtain estimates of changes in the numerical characteristics of the distribution, will more accurately describe the nature of the processes occurring, for example, in socio-economic systems. In our case, the possibilities of such a basic algorithm are shown by the example of assessing the structure of employees by skill levels in the digital economy. Acknowledgements. This article was prepared as part of the RFBR grant No. 20-010-00279 “An integrated system for assessing and forecasting the labor market at the stage of transition to a digital economy in developed and developing countries.”

88

A. Akaev et al.

References Arntz, M., Gregory, T., Zierahn, U.: The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis, OECD Social, Employment and Migration Working Papers, No. 189. OECD Publishing, Paris (2016). http://dx.doi.org/10.1787/5jlz9h56dvq7-en Autor, D.H., Levy, F., Murnane, R.: The skill content of recent technological change: an empirical exploration. Q. J. Econ. 118(4), 1279–1333 (2003) Autor, D.H., Salomons, A.: Is automation labor share-displacing? Productivity growth, employment, and the labor share. Brookings Papers Econ. Activity 2018(1), 1–87 (2018). https://www. researchgate.net/publication/328877616_Is_Automation_Labor_Share-Displacing_Productivi ty_Growth_Employment_and_the_Labor_Share Bendat, S., Piersol, A.G.: Random Data: Analysis and Measurement Procedures, 4th edn. John Wiley, USA (2010) Bessen J.E.: Automation and Jobs: When Technology Boosts Employment. Boston Univ. School of Law, Law and Economics Research Paper No. 17–09 (2019). https://ssrn.com/abstract= 2935003, http://dx.doi.org/10.2139/ssrn.2935003 Bogliacino, F., Pianta, M.: Innovation and Employment: a reinvestigation using revised pavitt classes. Res. Pol. 39(6), 799–809 (2010) Devroye, L.: Algorithms for generating discrete random variables with a given generating function or a given moment sequence. J. Sci. Stat. Comput. 12(1), 107–126 (1991) Graetz, A., Michaels, G.: Robots at Work. CEP Discussion Paper No 1335. Centre for Economic Performance, London School of Economics and Political Science (2015). http://cep.lse.ac.uk/ pubs/download/dp1335.pdf Low, A.M., Kelton, W.D.: Simulation Mogeling and Analysis, 2nd edn. McGRAW-HILL Int. Edition, Julius (1991) Pellegrino, G., Piva, M., Vivarelli, M.: How do new entrepreneurs innovate? J. Ind. Bus. Econ. 42(3), 323–342 (2015). https://www.researchgate.net/publication/277965950_Innovation_ and_employment Saucier, R.: Computer generation of statistical distributions. Army Research Laboratory, Stroming Media (2000). https://doi.org/10.21236/ada374109 Shmerling, E.: Algorithms for generating random variables with a rational probability-generating function. Int. J. Comput. Math. 92(9), 2001–2010 (2015) Efimov, E.N., Lapitskaya, G.M.: Evaluation of the effectiveness of information security measures in the face of uncertainty [Ocenka jeffektivnosti meroprijatij informacionnoj bezopasnosti v uslovijah neopredelennosti]. Bus. Inform. 1(31), 51–57 (2015) Kulikov, V.B.: Recovery of polymodal probability densities from experimental data in structures with stochastic properties [Vosstanovlenie polimodal’nyh plotnostej verojatnosti po jeksperimental’nym dannym v strukturah so stohasticheskimi svojstvami]. Bull. Nizhny Novgorod Univ. N.I. Lobachevsky 1(1), 248–256 (2014) Nartikoev, A.P., Peresetskiy, A.A.: Modeling the dynamics of income distribution in Russia [Modelirovanie dinamiki raspredelenija dohodov v Rossii]. Appl. Econometrics 54, 105–125 (2019) Pavlov, Y.N., Nedashkovskiy, V.M., Tihomirova, E.A.: Identification of nonlinear dynamical systems incorporating several nonlinearities [Identifikacija nelinejnyh dinamicheskih sistem, imejushhih v svojom sostave neskol’ko nelinejnostej]. Sci. Educ. MSTU n. a. N.E. Bauman 7, 17–234 (2015)

Digital Twin of Continuously Variable Transmission for Predictive Modeling of Dynamics and Performance Optimization Stepan Orlov1(&)

and Lidia Burkovski2

1

Peter the Great St. Petersburg Polytechnic University, Polytechnicheskaya, 29, 195251 St. Petersburg, Russian Federation [email protected] 2 Schaeffler Automotive Buehl GmbH & Co. KG, Industriestraße 3, 77815 Bühl, Germany

Abstract. The paper presents a digital twin of continuously variable transmission (CVT), developed by the authors, and intended for detailed predictive modeling of CVT dynamics. The paper addresses the problems of mathematical modeling of the device, the choice of a suitable numerical method for dynamics simulations, and the architecture of problem-oriented software implementing the functionality of the digital twin. Mathematical models proposed consider flexibility of almost all bodies the device consists of. Equations of motion yielding from the models are obtained in the framework of Lagrangian mechanics. Spectral properties of the equation of motion are analyzed in the context of applying numerical integration methods for dynamics problems. The initial value problem for ordinary differential equations of motion is solved using a variety of single step numerical integration methods; the most promising class of methods is found. Specialized software package is developed for practically solving problems of CVT dynamics. Most important principles of software design are discussed. Keywords: Digital twin  Mechanical system  Multibody contact dynamics Numerical integration  Problem-oriented software



1 Introduction Digital twin technology has been widely developed in recent years. The presentation of digital twin in the form of software modules for the modeling of processes using appropriate data models, algorithms, and knowledge, are necessary for understanding, forecasting and increasing productivity to achieve the best performance indicators of products. The pioneer in digital twin technology is the General Electric company, which first introduced the concept of digital twin products. To date, digital twin technologies have been developed in the work of Siemens, SAP, ANSYS, PTC and others. In this work, digital twin technologies are used for the predictive modeling of dynamics and the optimization of parameters of an automobile transmission with continuously changing gear ratio (continuously variable transmission, CVT). © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 H. Schaumburg et al. (Eds.): TT 2020, LNNS 157, pp. 89–107, 2021. https://doi.org/10.1007/978-3-030-64430-7_9

90

S. Orlov and L. Burkovski

In recent years, continuously variable transmissions have been widely used in the automotive industry due to a number of their advantages over traditional transmissions with a fixed set of gear ratios. In the designs of automotive CVTs, a chain or a metal belt is used, supported on the conical or toroidal contact surfaces of the pulleys of the drive and driven shafts. The transmission of torque in such CVTs occurs due to the friction forces between the bodies that make up the chain or metal belt and the pulleys. Automotive companies are constantly improving the design of CVTs, increasing transmitted torque and achieving better reliability. Therefore, predictive modeling of the variator dynamics is of great interest. It can significantly reduce the development time of a serial product and optimize its characteristics without resorting to a large number of experimental tests. The quality of the digital twin of the variator is determined by the quality of models and algorithms describing the process of variator dynamics. Therefore, the creation of variator models is a responsible task of creating such a twin. A distinctive feature of the CVT twin is the failure to use commercial packages to simulate CVT dynamics. The main problem blocking the use of commercial packages has been the complexity of modeling nonlinear and time-dependent processes of extensive contact interactions. Therefore, to create a digital twin of the variator, three main stages of development are required. The stage of developing adequate mechanical models, the stage of developing efficient methods for solving the governing equations that describe the behavior of the models, and most importantly, the stage of creating specialized problem-oriented software code that has all the attributes of an industrial software complex (pre- and postprocessor, report generator for the study, online help, etc.). The created models undergo the verification stage involving tests using various operating regimes of the variator at experimental stands. The problem-oriented software package allows one to update the set of variator models along with the development of CVT design. The goal of creating a problem-oriented program code is to provide an engineer with a convenient and easy-to-use tool when performing predictive modeling of complex variator dynamics processes. The created digital twin of the variator allows not only simulating emergency situations during operation of the variator on a real production car, but also changing the design of the variator in order to optimize the entire variator operation process.

2 Materials and Methods 2.1

The Design of Chain CVT

The continuously variable transmission design considered in this paper is shown in Fig. 1 on the left. It consists of a drive and driven shafts, each of which is a pair of pulleys, forming a pulley set. One of the pulleys on the shaft can move in the axial direction. Torque is transmitted by the chain covering both shafts and being in contact with the inner conical or toroidal surfaces of the pulleys. Axial forces act on the pulleys, providing compression of the chain in the transverse direction and tension in the longitudinal direction. The change in the ratio of axial forces leads to a change in

Digital Twin of Continuously Variable Transmission

91

the distances between the pulleys and, as a consequence, the contact radii of the chain with the pulleys, and hence the gear ratio. The chain (Fig. 1, right) consists of double connecting axes (rocker pins), the halves of which are able to roll over each other, and plates covering the pins.

Fig. 1. General view of the continuously variable transmission (left); transmission chain (right).

Friction forces between the end surfaces of the pins and the pulleys transmit torque. In predictive modeling, the following aspects of CVT behavior are of interest: the dynamics of the gear ratio change under the influence of external (including control) factors (the global dynamics); the stress-strain state of individual transmission elements that determines their life; the dependence of the transmission efficiency on its operating mode and design parameters; acoustic noise produced by the transmission in the range up to 5 kHz. To obtain detailed data on the behavior of this system during its motion, it is necessary to develop a non-trivial mathematical model that describes the dynamics taking into account deformations of chain pins and plates, shafts, bearings, and numerically study it. The data obtained in a numerical experiment are often used to optimize design parameters; there is no need for the manufacture of numerous prototypes of the product. 2.2

Mathematical Modeling of Continuously Variable Transmission

General Approach to the Description of Dynamics. Despite the significant number of works on modeling the dynamics of continuously variable transmissions [1–15], the approaches used to create mathematical models in most cases are limited directly to Newtonian mechanics, which is disappointing, since this is not enough to model the dynamics of deformable bodies—the use of the formalism of analytical mechanics is necessary, allowing to describe the dynamics of constrained mechanical systems, including those containing deformable bodies. All mathematical models proposed in this paper correspond to holonomic systems. To obtain equations of motion, we use the classical formalism of Lagrangian mechanics [16]. In this case, elastic forces are determined by the potential energy, dissipative forces—by the Rayleigh dissipative function. Friction forces are considered

92

S. Orlov and L. Burkovski

separately; the corresponding generalized forces are computed using the expression of elementary work on virtual displacements. The system of Lagrange equations of the second kind is solved with respect to generalized accelerations and is written in normal form. If necessary, first-order equations are added to the obtained system of ordinary differential equations (ODEs); those describe the evolution of the internal state of stabilization systems for the angular speeds of the shafts. The influence of these systems on the mechanical system is determined by some additional terms in generalized forces. In the systems under consideration, there may be discrete state /, on which the generalized forces depend and which can switch upon the occurrence of certain events. An event occurs when the corresponding event indicator function ek crosses zero— Fig. 2. The change in the discrete state can be associated, for example, with the beginning or end of pin-pulley contact. Details of the description of discrete state systems are discussed in [17]. The final ODE system has the form x_ ¼ F ðt; x; /Þ;

2 3 u x  4 v 5; z

2

u  q;

_ v  q;

3 v F ðt; x; /Þ ¼ 4 Fv 5 Fz

ð1Þ

where q is the column of generalized coordinates; the dot above a symbol denotes time derivative; Fv(t, u, v, z, /) is the generalized acceleration vector; Fz(t, u, v, z, /) is the right-hand side of additional first-order ODEs, / is the discrete state vector.

Fig. 2. Time history of state variables in an ODE system with discrete variables.

Mathematical Models of CVT Chain. Over the past twenty years, the authors have created many mathematical models of a continuously variable transmission and, in particular, its main element — the chain. These models have gradually evolved from simple to complex ones, and the direction of development has been determined by the experience of CVT engineers who used previously created models in practice. The limited size of the article does not allow us to describe this evolution in detail; therefore, we restrict ourselves to the most important modeling elements that are reflected in the latest chain models.

Digital Twin of Continuously Variable Transmission

93

All elements of the chain are considered to be deformable bodies—this is dictated by the need to predict their stress-strain state (for example, the tension of individual chain plates). To describe the configuration of the chain, a set of generalized coordinates is used that determine the position of the axes of the pin halves, as well as one generalized coordinate for each pin, which determines the axial position of the plate block on the pin. Let n 2 [−1, 1] be the parameter identifying a point at pin axis, p—the pin number, ±—the identifier of pin half, and rp± (n) the vector of position of a point at pin half axis. The following approximation is taken: rp ¼ up ðnÞ þ vp ðnÞk; up ¼

Xn

u u ðnÞ; vp ¼ k¼1 p;k k

X2

v w ðnÞ: k¼1 p;k k

Here k is the unit vector of the axial direction in the reference state, up± is the projection of the radius-vector rp± onto the plane Ox1x2 with basis vectors e1 ; e2 (the triple e1, e2, k forms an orthonormal Cartesian basis); vp± is the projection of the radiusvector onto the axis Ox3 with unit vector k; the vectors upk ¼ up;k;a ea are combinations of generalized coordinates up;k;a ða ¼ 1; 2Þ; vp;k are also generalized coordinates. For vp ðnÞ, a linear approximation is sufficient with coordinate functions w1 ðnÞ ¼

1n ; 2

w 2 ð nÞ ¼

1þn ; 2

for up±, linear approximation is usually not enough, and the systems of coordinate functions /k ðnÞ can be different—cubic polynomials, natural modes of free beam oscillations, finite element basis functions with finite support. Their number n determines the level of detail of the model when approximating the bending of pins. Note that modeling the extension and bending of the pins seems to be fundamentally important, while torsion can be abandoned. The angular orientation of the sections of the halves of the pins can be determined, for example, based on the relative position of the two halves of the same link, calculating the vector Dup;p þ 1 ; sp;p þ 1 ¼  Dup;p þ 1 

Dup;p þ 1  up  up þ 1 þ

shown in Fig. 3. For the vectors up , for example, we can take the average value 1 2

Z

1 1

up ðnÞ dn;

but with a specific choice of coordinate functions uk , another choice may be preferable.

94

S. Orlov and L. Burkovski

Fig. 3. On determining the orientation of pin half cross-section.

Thus, the configuration of the pins of the chain is fully defined. To determine the position of the plates, another coordinate zp is introduced for each pin—this is enough if we assume that the entire block of plates can move along the pin only as a rigid whole (Fig. 4). In the dynamic model, the plates are considered as elastic rods working for tension, bending, and torsion. Their inertia when moving in the plane of the chain Ox1x2 is added to the inertia of the pins, and when moving along the axis Ox3, it is added to the inertia associated with the generalized coordinates zp , which determine the axial positions of the plate blocks on the pins. Thus, individual plates of the chain become inertialess, which means that their configuration is determined from equilibrium conditions, the specific form of which depends on the details of modeling the interaction of the plates with pins. In the simplest case, it is assumed that the plates are rigidly connected to the pins (except for the possibility of translational movement of the block of plates along the pin); however, in the case of refined modeling of chain shear stiffness, it is necessary to take into account that the rigid pin-plate connection with respect to bending takes place only under certain conditions.

Fig. 4. Generalized coordinate determining the axial position of the plate block on the pin.

The next step after determining the kinematics of all chain elements is to calculate its kinetic and potential energy, as well as the Rayleigh dissipative function, which are part of the Lagrange equations. The kinetic energy of the chain is

$$ T = \sum_p T_p^{\mathrm{pin}} + \sum_p T_p^{\mathrm{pl}}, $$

$$ T_p^{\mathrm{pin}} = \frac{1}{2} \int_0^l \left( \rho_u\, \dot{u}_p \cdot \dot{u}_p + \rho_v\, \dot{v}_p^2 \right) dl = \frac{1}{2} \sum_{k,s=1}^{n} a_{k,s}^{u}\, \dot{u}_{p,k} \cdot \dot{u}_{p,s} + \frac{1}{2} \sum_{k,s=1}^{2} a_{k,s}^{v}\, \dot{v}_{p,k}\, \dot{v}_{p,s}, $$

$$ a_{k,s}^{u} \equiv \frac{m_u}{2} \int_{-1}^{1} \varphi_k \varphi_s\, d\eta, \qquad a_{k,s}^{v} \equiv \frac{m_v}{2} \int_{-1}^{1} \psi_k \psi_s\, d\eta, \qquad \rho_u = \frac{m_u}{l}, \qquad \rho_v = \frac{m_v}{l}, $$

$$ T_p^{\mathrm{pl}} = \frac{1}{2}\, m^{\mathrm{pl}} \dot{z}_p^2. $$

Here $m_v$ is the mass of the pin half, $m^{\mathrm{pl}}$ is half the mass of the plates of the two links spanning the pin, and $m_u = m_v + m^{\mathrm{pl}}/2$ (to shorten the notation, we consider the case where all pins and plates are the same, although this is actually not the case, since the links must have different lengths to obtain acceptable acoustic characteristics). The potential energy of the chain is determined by the elastic strain energy of the pin halves, $\Pi_p$, and of the plates, $\Pi_{p,p+1,i}$ (the index $i$ identifies the plate within a link), as well as the elastic energy of the pin–plate interaction and of the interaction between the pin halves, $\Pi_p^{\mathrm{other}}$ (the latter is not considered in this paper):

$$ \Pi = \sum_p \Pi_p + \sum_p \sum_i \Pi_{p,p+1,i} + \sum_p \Pi_p^{\mathrm{other}}. $$

The potential energy of the pin half is the energy of bending and tensile deformation (when modeling bending, it is assumed that there is no shear deformation, as in the Bernoulli–Euler beam):

$$ \Pi_p = \frac{1}{2} \int_0^l \left( u_p'' \cdot \hat{a} \cdot u_p'' + c\, v_p'^2 \right) dl = \frac{1}{2} \sum_{k,s=1}^{n} c_{k,s}^{u}\; u_{p,k} \cdot \hat{a} \cdot u_{p,s} + \frac{1}{2} \sum_{k,s=1}^{2} c_{k,s}^{v}\, v_{p,k} v_{p,s}, $$

$$ c_{k,s}^{u} \equiv \left( \frac{2}{l} \right)^{\!3} \int_{-1}^{1} \varphi_k'' \varphi_s''\, d\eta, \qquad c_{k,s}^{v} \equiv \frac{2c}{l} \int_{-1}^{1} \psi_k' \psi_s'\, d\eta, $$

$$ \hat{a} = k \times a \times k, \qquad a = a_1\, e_{p\pm,1} e_{p\pm,1} + a_2\, e_{p\pm,2} e_{p\pm,2}, $$

$$ e_{p-,1} = P(\beta k) \cdot s_{p,p+1}, \qquad e_{p+,1} = P(-\beta k) \cdot s_{p-1,p}, \qquad e_{\pm,2} = k \times e_{\pm,1}, $$

where $a$ is the bending stiffness tensor of the pin half, $e_{p\pm,\alpha}$ are its principal axes that rotate together with the corresponding link, $P(\pm\beta k)$ is the tensor of rotation by a fixed angle $\pm\beta$ about the unit vector $k$, and $c$ is the tensile stiffness of the pin half. The potential energy of elastic deformation of the plate is determined by its extension, bending, and torsion:

$$ \Pi_{p,p+1,i} = \Pi_{p,p+1,i}^{\mathrm{ext}} + \Pi_{p,p+1,i}^{\mathrm{bend}} + \Pi_{p,p+1,i}^{\mathrm{tors}}. $$


These terms are defined as quadratic forms of some deformations. In particular, the potential energy of plate extension depends on its elongation $\Delta_{p,p+1,i}$:

$$ \Pi_{p,p+1,i}^{\mathrm{ext}} = \frac{1}{2}\, c^{\mathrm{ext}} \Delta_{p,p+1,i}^2, \qquad \Delta_{p,p+1,i} = \left| \Delta r_{p,p+1,i} \right| - L, $$

$$ \Delta r_{p,p+1,i} \equiv r_{(p+1)+}\!\left( \eta_{p,p+1,i,+} \right) - r_{p-}\!\left( \eta_{p,p+1,i,-} \right), $$

where $c^{\mathrm{ext}}$ is the tensile stiffness of the plate, $L$ is its length in the undeformed state, $\eta_{p,p+1,i,-}$ is the coordinate of the plate position on the left half of the $p$-th pin, and $\eta_{p,p+1,i,+}$ is its position on the right half of the $(p+1)$-th pin. It is essential that the tensile deformation of the plate depends, in particular, on the bending deformation of the pin. Assuming that the plate bends in the plane of its least stiffness, we can determine the bending shape by two small angles of inclination of its axis tangent to the segment connecting the ends—$\zeta_{p,p+1,i,-}$ at one end and $\zeta_{p,p+1,i,+}$ at the other. These angles can be calculated as follows (Fig. 5, left):

$$ \zeta_{p,p+1,i,-} = s_{p,p+1,i} \times t_{p-}, \qquad \zeta_{p,p+1,i,+} = s_{p,p+1,i} \times t_{(p+1)+}, \qquad s_{p,p+1,i} \equiv \frac{\Delta r_{p,p+1,i}}{\left| \Delta r_{p,p+1,i} \right|}. $$

Fig. 5. Bending and torsion of chain plate.

Models of Pin–Pulley Contact Interaction. In all cases, it was assumed that the contact interaction is localized at a point on the end surface of the pin. In the simplest models, this point lies at the pin axis; in more complex ones, its position is determined by the relative position of the two-dimensional contact surfaces approximated by paraboloids. The simplest model also assumes the presence of a unilateral constraint: when the contact takes place, a point at the end of the pin axis belongs to the contact surface of the pulley. For a number of reasons, more advanced elastic contact models based on the Hertz contact theory [18, 19] were created. In particular, they make it possible to avoid unilateral constraints. In these models, the normal force $N$ is calculated according to the Hertz formula $N = c\, \Delta^{3/2}$, where $c$ is the contact stiffness (it depends on the elastic moduli of the materials and on the geometry and relative position of the surfaces, but in our case it can be assumed constant) and $\Delta$ is the contact deformation. It can be defined as the depth of mutual penetration of the contact surfaces of the pin and the pulley, which


remain rigid within the framework of the proposed models. The slip velocity, on which the tangential friction force $R$ depends, is the projection of the relative velocity onto the tangent plane:

$$ R = f(v_r)\, N s_\perp, \qquad v_r = \left| v_{r\perp} \right|, \qquad s_\perp = v_{r\perp} / v_r, \qquad v_{r\perp} = (E - nn) \cdot \left( v^{\mathrm{pin}} - v^{\mathrm{pul}} \right) $$

($v^{\mathrm{pin}}$ and $v^{\mathrm{pul}}$ are the velocities of the pin and the pulley at the contact point, respectively; $n$ is the unit vector normal to the pulley surface at the contact point). A "regularized" friction law is adopted, with the friction coefficient of the form $f = f_0 \min\{ v_r / v_0,\, 1 \}$. The question of the location of the contact point and the magnitude of the contact deformation has been solved differently in different models. Assuming that there is no pin contact surface, we can obtain the formula $\Delta = n \cdot \left( R^{\mathrm{surf}} - R^{\mathrm{pin}} \right)$, where the radius vector $R^{\mathrm{surf}}$ can be taken as the projection of the pin axis end position $R^{\mathrm{pin}}$ onto the pulley surface along the unit vector $k$ parallel to the axes of the shafts. For the contact point, one can take $R^{\mathrm{pin}}$. The main disadvantage of this model is that it incorrectly describes the eccentric compression of the pin and therefore does not allow obtaining the correct tension forces in the chain plates. To calculate the position of the contact point more accurately, it is necessary to take into account the geometry of both contact surfaces, $\Gamma^{\mathrm{pin}}$ (pin) and $\Gamma^{\mathrm{pul}}$ (pulley). Taking their quadratic approximations in the vicinity of the point at the pin axis end (Fig. 6),

$$ z^{\mathrm{pul}} = a_0 + a \cdot \hat{x} + \frac{1}{2}\, \hat{x} \cdot A \cdot \hat{x} + O\!\left( |\hat{x}|^3 \right), \qquad z^{\mathrm{pin}} = b_0 + b \cdot \hat{x} + \frac{1}{2}\, \hat{x} \cdot B \cdot \hat{x} + O\!\left( |\hat{x}|^3 \right), $$

$$ \hat{x} \equiv x - x_0^{\mathrm{pin}}, \qquad x_0^{\mathrm{pin}} \equiv (E - kk) \cdot R^{\mathrm{pin}}, $$

$$ a = a_\alpha e_\alpha, \qquad A = a_{\alpha\beta}\, e_\alpha e_\beta, \qquad b = b_\alpha e_\alpha, \qquad B = b_{\alpha\beta}\, e_\alpha e_\beta, $$

and defining the contact deformation as the maximum depth of their mutual penetration,

Fig. 6. Contact surfaces of the pin and the pulley.

$$ \min_{\substack{R^{\mathrm{pin}} \in \Gamma^{\mathrm{pin}},\; R^{\mathrm{pul}} \in \Gamma^{\mathrm{pul}},\; n^{\mathrm{pin}} = -n^{\mathrm{pul}}}} \left| R^{\mathrm{pin}} - R^{\mathrm{pul}} \right| $$

(here $n^{\mathrm{pin}}$ and $n^{\mathrm{pul}}$ are the unit vectors normal to the surfaces of the pin and the pulley, respectively), we obtain the problem of determining the contact point in the form of the following system of nonlinear algebraic equations with unknowns $\hat{x}^{\mathrm{pin}}_{*}$, $\hat{x}^{\mathrm{pul}}_{*}$:

$$ \nabla_\perp z^{\mathrm{pin}} = b + B \cdot \hat{x}^{\mathrm{pin}}_{*}, \qquad \nabla_\perp z^{\mathrm{pul}} = a + A \cdot \hat{x}^{\mathrm{pul}}_{*}, $$

$$ \hat{x}^{\mathrm{pin}}_{*} - \hat{x}^{\mathrm{pul}}_{*} = \nu_3 \nabla_\perp z^{\mathrm{pin}} = \nu_3 \nabla_\perp z^{\mathrm{pul}}, \qquad \nu_3 \equiv z^{\mathrm{pin}}\!\left( \hat{x}^{\mathrm{pin}}_{*} \right) - z^{\mathrm{pul}}\!\left( \hat{x}^{\mathrm{pul}}_{*} \right). $$

Its exact analytical solution is difficult; however, due to the small inclination of the normal relative to the unit vector $k$, it is easy to find a satisfactory approximate solution:

$$ \hat{x}^{\mathrm{pin}}_{*} = \hat{x}^{\mathrm{pul}}_{*} = \hat{x}_{*}, \qquad \nabla_\perp z^{\mathrm{pin}} = \nabla_\perp z^{\mathrm{pul}} \;\Longrightarrow\; \hat{x}_{*} = (A - B)^{-1} \cdot (b - a). $$
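As an illustration, the approximate solution amounts to a single 2×2 linear solve in the tangent plane. The sketch below (hypothetical code, not part of the described software package) computes $\hat{x}_{*} = (A - B)^{-1}(b - a)$ by Cramer's rule:

    #include <array>

    using Vec2 = std::array<double, 2>;
    using Mat2 = std::array<std::array<double, 2>, 2>;

    // Approximate contact point x* = (A - B)^{-1} (b - a); a, b are the
    // gradients and A, B the curvature matrices of the two quadratic
    // surface approximations. The matrix A - B is assumed nonsingular.
    Vec2 approximateContactPoint(const Vec2& a, const Mat2& A,
                                 const Vec2& b, const Mat2& B)
    {
        Mat2 M{{{A[0][0] - B[0][0], A[0][1] - B[0][1]},
                {A[1][0] - B[1][0], A[1][1] - B[1][1]}}};
        Vec2 r{b[0] - a[0], b[1] - a[1]};
        double det = M[0][0] * M[1][1] - M[0][1] * M[1][0];
        return { (M[1][1] * r[0] - M[0][1] * r[1]) / det,
                 (M[0][0] * r[1] - M[1][0] * r[0]) / det };
    }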

2.3 Numerical Methods for Solving the Problem of Continuously Variable Transmission Dynamics

When implementing numerical methods for solving the initial value problem for the ODE system of continuously variable transmission dynamics (1), it is necessary to take into account the presence of a discrete state $\phi$ that changes when events with indicators $e_k$ occur (Fig. 2). In our case, the events correspond to the beginning and end of pin–pulley contact; the components of the vector $\phi$ are logical values that determine the state of each contact pair. The event indicators can be the distances between the contact surfaces (negative if they interpenetrate). The ODE system must also be supplemented by a state machine that defines the rules for changing the discrete state upon event occurrence. To select the most suitable numerical integration method, various single-step ODE integration methods were considered. All of them were supplemented by a procedure responsible for finding the time instants $t_*$ corresponding to the events, interpolating the vector $x$ at these instants, and calling the state change procedure when an event occurs. It turns out that linear interpolation of the indicators at each integration step is sufficient, which allows a somewhat shorter calculation time compared to the more traditional approach [20], in which the dichotomy method and/or Newton's method is used to solve the equations $e_k = 0$. The procedure mentioned above is as follows. Let $x = x_n$, $t = t_n$ at the beginning of an integration step. At first, the integration step is performed without taking events into account, and at the end of the step $x = x_{n+1}$, $t = t_{n+1} = t_n + h$. If it turns out that $e_k(t_n)$ and $e_k(t_{n+1})$ have different signs, $t_{*,k} = t_n + h / \left( 1 - e_k(t_{n+1}) / e_k(t_n) \right)$ is calculated. The smallest among these is accepted as $t_*$; then linear interpolation is performed: $x(t_*) = x_n (1 - s_*) + x_{n+1} s_*$, where $s_* \equiv (t_* - t_n)/h$. Next, a state change occurs, and integration continues from the time instant $t_*$. Linear interpolation of the state variables is sufficient for step size values of practical interest.
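A minimal sketch of this event localization procedure is given below (hypothetical code with illustrative names; the actual implementation in the package is not shown in the paper):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Finds the earliest event time within [tn, tn + h] from the sign
    // changes of the indicators e_k, using linear interpolation:
    // t_{*,k} = tn + h / (1 - e_k(t_{n+1}) / e_k(t_n)).
    // Returns tn + h if no indicator changes sign.
    double earliestEventTime(double tn, double h,
                             const std::vector<double>& ePrev,   // e_k(t_n)
                             const std::vector<double>& eNext)   // e_k(t_{n+1})
    {
        double tStar = tn + h;
        for (std::size_t k = 0; k < ePrev.size(); ++k)
            if (ePrev[k] * eNext[k] < 0)
                tStar = std::min(tStar, tn + h / (1.0 - eNext[k] / ePrev[k]));
        return tStar;
    }

    // Linear interpolation of the state vector at t*:
    // x(t*) = xn (1 - s*) + x_{n+1} s*,  s* = (t* - tn) / h.
    std::vector<double> interpolateState(const std::vector<double>& xn,
                                         const std::vector<double>& xn1,
                                         double tn, double h, double tStar)
    {
        const double s = (tStar - tn) / h;
        std::vector<double> x(xn.size());
        for (std::size_t i = 0; i < xn.size(); ++i)
            x[i] = xn[i] * (1.0 - s) + xn1[i] * s;
        return x;
    }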

The eigenvalue analysis of the Jacobi matrix of the ODE system right-hand side, $J \equiv \partial F / \partial x$, has shown that the largest eigenvalues in absolute value are real negative ones and reach $10^8\ \mathrm{s}^{-1}$; they are associated with friction during contact interaction, while the highest-frequency vibrations correspond to almost purely imaginary eigenvalues, with imaginary parts of the order of $10^6\ \mathrm{s}^{-1}$. Considering that the time step $H$, which determines the required detail of the numerical solution, has values of $10^{-5}$–$10^{-3}\ \mathrm{s}$, we can classify the ODE system as mildly stiff [21]. A special technique has been developed for testing numerical methods in the problem of CVT dynamics and evaluating the quality of numerical solutions; it focuses on the dependencies of the local step error and of the global error over a given time interval on the integration step size. As part of the testing, the classical explicit Runge–Kutta methods were considered: the explicit Euler method (for comparison with other methods); the 4th-order classical method (RK4); the embedded schemes with automatic step size control DOPRI45, DOPRI56, and DOPRI78; the Gragg–Bulirsch–Stoer method (GBS) at various extrapolation orders, with and without smoothing; and Richardson extrapolation of various orders with the explicit Euler method as the reference. These methods are described in [22]. In all methods, step size control was turned off, and the extrapolation order was fixed, since numerical solutions with a constant step and extrapolation order were of interest. None of the classical explicit methods gave a significant gain in speed compared to the well-known RK4 method. This is to be expected, given the form of the stability domains of classical explicit methods and the specifics of the problem. In the search for faster numerical methods for the CVT dynamics problem, linear implicit methods of the Rosenbrock type were considered, namely the W-methods. The scheme of the first-order method, W1, has the form

$$ x_1 = x_0 + h k_1, \qquad W k_1 = F(t_0, x_0), \qquad W \equiv E - h d A. $$

Here $x_0$ and $x_1$ are the states at the beginning and at the end of the step, respectively, $h$ is the step size, $A$ is a matrix approximating $J(t_0)$ (formally, any matrix of suitable size), and $d \in (0, 1]$ is a parameter. The scheme was used as a reference for Richardson extrapolation (a similar idea can be found in [23, Sect. 6.4.2]). The second-order method SW2-4 [24] was also considered. Its scheme (without the error estimation used for controlling step size) has the form

$$ x_1 = x_0 + \frac{h}{4} \left( k_1 + 3 k_2 \right), \qquad W k_1 = F(t_0, x_0), \qquad W \equiv E - h d A, $$

$$ W k_2 = F\!\left( t_0 + \tfrac{2}{3} h,\; x_0 + \tfrac{2}{3} h k_1 \right) - \tfrac{4}{3} h d A k_1. $$

Contrary to expectations, the considered W-methods (the schemes W1, SW2-4, and extrapolated W1) turned out to be inapplicable to the test problem of CVT dynamics. In particular, their local error at a step is so large (even with sixfold extrapolation of the W1 scheme) that an acceptable numerical solution is obtained only at steps much smaller than the RK4 method requires—about $10^{-8}\ \mathrm{s}$.
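For illustration, in the scalar case the W1 scheme reduces to a single division; the sketch below (hypothetical code for a scalar ODE $x' = F(t, x)$, with $A$ approximating $\partial F / \partial x$) performs one W1 step:

    // One step of the W1 scheme for a scalar ODE x' = F(t, x):
    // W k1 = F(t0, x0), x1 = x0 + h k1, where W = 1 - h*d*A in the
    // scalar case (A approximates the Jacobian dF/dx at t0).
    double w1Step(double (*F)(double, double),
                  double t0, double x0, double h, double d, double A)
    {
        const double W  = 1.0 - h * d * A;
        const double k1 = F(t0, x0) / W;   // solve W k1 = F(t0, x0)
        return x0 + h * k1;
    }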


Fig. 7. Butcher’s tableau for the trapezoidal method.

The implicit second-order trapezoid method [22] (its Butcher tableau is shown in Fig. 7) provides good results in terms of the suitability of numerical solutions at large integration steps. However, since this method is implicit and the ODE system is nonlinear, it is necessary to numerically solve a system of nonlinear algebraic equations at each integration step. The system has the form

$$ f(x) = 0. \tag{2} $$

For solving it, various modifications of the Newton method were used. In fact, the maximum size of the integration step is determined by the convergence of the Newton iterations, and the performance of the method is determined by the average duration of one iteration and their total number. Newton-type numerical methods define the iterative process [25, 26]

$$ x_{k+1} = x_k + \alpha_k d_k, \qquad B_k d_k = -f(x_k), $$

where $x_0$ is the initial approximation, $x_k$ is the approximation obtained at the $k$-th iteration, $B_k$ is some approximation of the Jacobi matrix $J$, $d_k$ is the search direction, and $\alpha_k \in (0, 1]$ is a scalar whose value is determined using the search algorithm along the direction $d_k$. The number of unknowns in (2) is about 1800. With a system of this size, it is significant (in particular, when solving the linear system for $d_k$) that the matrix $J$ is sparse. Explicit formulas for $J$ are not available, and its calculation using finite differences is a time-consuming procedure, even with sparsity taken into account. On the other hand, the calculation of $J$ is easy to parallelize. Numerous modifications of the Newton method are associated with the choice of the matrices $B_k$ and scalars $\alpha_k$; they aim to avoid frequent calculation of $J$ and at the same time to ensure fairly fast convergence. Among them, the Newton method itself was considered (denoted exact below); a modification based on the Broyden formula [27] for rank-one updates of $B_k$, but with the rejection of elements equal to zero in $J$ (hereinafter fake-broyden); and some others, including modifications with $B_k$ matrices that are kept constant for as long as possible (when the threshold value of the number of iterations is exceeded, $B_k = J(x_k)$ is accepted; hereinafter const). In addition, in the versions fake-broyden and const, $B_0$ is initialized with the value from the previous time step of the trapezoid method. Note that in the modification const, the number of operations reduces not only due to the smaller number of $f(x)$ evaluations, but also due to the ability to perform the LU factorization of $B_k$ only when $B_k$ changes, thus solving the linear system at each iteration much faster.
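The essence of the const modification can be sketched as follows (hypothetical code; the linear algebra is hidden behind callbacks, and the line search is omitted, i.e. $\alpha_k = 1$):

    #include <cmath>
    #include <cstddef>
    #include <functional>
    #include <vector>

    using Vec = std::vector<double>;

    // Newton-type iterations with a lazily updated matrix B_k:
    // B_k d_k = -f(x_k), x_{k+1} = x_k + d_k. refreshB(x) recomputes
    // B = J(x) and its LU factorization (the expensive part); solveB(r)
    // reuses the existing factorization to solve B d = -r.
    bool newtonConst(Vec& x,
                     const std::function<Vec(const Vec&)>& f,
                     const std::function<Vec(const Vec&)>& solveB,
                     const std::function<void(const Vec&)>& refreshB,
                     int maxIter, int refreshThreshold, double tol)
    {
        for (int k = 0; k < maxIter; ++k) {
            const Vec r = f(x);
            double norm2 = 0;
            for (double ri : r) norm2 += ri * ri;
            if (std::sqrt(norm2) < tol)
                return true;                 // converged
            if (k == refreshThreshold)
                refreshB(x);                 // too many iterations: B = J(x)
            const Vec d = solveB(r);
            for (std::size_t i = 0; i < x.size(); ++i)
                x[i] += d[i];
        }
        return false;                        // no convergence
    }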

According to the results of applying the trapezoid method, we can conclude that it allows the use of significantly (100 times) larger integration step sizes than RK4; however, none of the modifications of Newton's method allowed a big gain in speed: exact works almost 8 times slower than RK4, with an average of 2.9 iterations per step; fake-broyden is 1.2 times slower than RK4, with 13 iterations per step; const is 1.5 times faster than RK4, with 24 iterations per step. At the same time, the modification exact can be significantly (up to 100 times) accelerated by parallel calculation of $J$ and parallel solution of the linear equation for $d_k$. The fastest of the considered numerical methods was the stabilized explicit method DUMKA3. The method implements a set of numerical schemes with different numbers of stages; as the scheme number increases, the degree of its stability polynomial increases too, and the stability region extends further and further into the region of negative real values on the complex plane. The method implements automatic control of the step size and of the degree of the stability polynomial, but that feature was disabled during the testing of the method. The study showed that several schemes of the DUMKA3 method allow obtaining numerical solutions of the CVT dynamics problem at quite large steps (up to $4 \times 10^{-6}\ \mathrm{s}$ versus $5 \times 10^{-8}\ \mathrm{s}$ for RK4), and it is significantly (more than 5 times) faster (we emphasize that we are not talking about parallelization). A further increase in speed using stabilized explicit methods is possible by constructing numerical schemes that take into account the boundaries of the spectrum of $J$ not only in the negative direction of the real axis, but also in the direction of the imaginary axis [28, 29].

2.4 Software Package for Predictive Modeling of CVT Dynamics

To have a functioning digital twin, a software package for predictive modeling is as necessary as adequate mathematical models and numerical methods. Such a package has been developed for predictive modeling of the dynamics of heterogeneous systems, and, in particular, CVT. The package consists of approximately 80 separate modules (dynamic libraries) and two executable programs that provide a graphical user interface and a command line interface. A simplified view of package contents is shown in Fig. 8. Solid arrows show the direction of controlling commands, and the dashed arrows indicate data streams. This division is to some extent arbitrary; the scheme is not complete due to its simplification. For example, a part with the common infrastructure and auxiliary libraries is not connected to other parts—it would be necessary to connect it to all other parts with arrows of both types in both directions, but this would clutter up the picture.


Fig. 8. The contents of the software package (simplified view).

The target platforms for building the package are MS Windows and Linux. The source code is written in C++. The main dependencies are the third-party libraries V8 (implements JavaScript, used as a scripting language), Qt (graphical user interface), and FFMPEG (video synthesis); OpenGL 3.0 (via Qt interfaces) is also used for visualization. It is possible to build an application without dependencies on third-party libraries, designed for batch calculations. The foundation of the package is a set of common infrastructure components (modules providing the compound object model, object properties and methods, a property serialization mechanism, a synchronous messages facility, and other functionality). The common infrastructure provides the ability to create large software systems and solves a number of general tasks. The software also includes solvers—all the numerical methods used to solve the Cauchy problem; dynamic models of the continuously variable transmission and their specific graphical user interface and visualization counterparts; a module providing the assembly of an ODE system, simulation control, and processing of the numerical solution (such as time history and other plots for all available postprocessor values; FFT, moving average, moving minimum and maximum); a scripting language; a module for two-dimensional visualization and composition of scenes; a three-dimensional visualization module; modules that implement the graphical user interface; and modules supporting multivariate calculations and automated report generation. The developed software package has extensive user documentation. Its creation and support are carried out using an introspective documentation generator that provides


the creation of a documentation tree based on the current package configuration; the filling of parts of documentation pages with content based on introspective analysis of objects; and the addition of documentation created by the developer manually. Without dwelling on the details of the software architecture, we note its most important part—the compound object model [30]. A compound object contains one or more components that make up a tree. We call its root the primary object, and all other components tear-off objects. A component of a compound object is an instance of a C++ class; the class must publicly inherit one of the two base classes that determine its type (primary or tear-off); a numeric component identifier, unique for each component in the system, must be declared in the class; in addition, each component typically implements interfaces.

Fig. 9. A compound object and its interfaces.

An interface is a C++ class that contains declarations of pure virtual or inline methods; instance (non-static) fields are also allowed. Non-inline methods and static fields are not allowed in interfaces, since we refuse to export symbols from modules in favor of automatically registering components when modules are loaded. The design of a compound object also requires the interface to virtually and publicly inherit some base class (common to all interfaces) and to declare a numeric interface identifier, unique within the system. There are interfaces that relate to the entire compound object (object interfaces, or simply interfaces), and interfaces that relate to a component (component interfaces). The object model module provides functionality for converting any interface pointer of a compound object to a pointer to the desired interface. It is possible to override the implementation of an interface in a compound object—for this, one should make the tear-off object implementing the interface a child of the component whose interface implementation is to be overridden (this feature is used extremely rarely). Figure 9 shows an example of a compound object; note that the Interface5 interface has been overridden by the Tear-off 3 component.


An interface is considered to be supported by a component if the component's class publicly inherits the interface class. The object model module provides functionality to cast interface pointers within a component. Importantly, the components of a compound object can be instances of classes implemented in different modules. The compound object model provides ample opportunity for code reuse. To achieve it, the code is encapsulated in tear-off components, which are then added to the configurations of the compound objects that use the code from the tear-offs. Instances of compound objects are created by the factory provided by the object model. To create an instance, users provide the known primary object identifier. Before creating instances, a configuration file has to be loaded, containing information about the location of components in the modules and the configuration of compound objects. Creating a configuration file for the entire package is automated using a specialized command line interface snap-in. The compound object model simplifies the linking of libraries implementing functionality: each functional module only has to be linked against the OM module, which implements the compound object model, in order to access functionality from any other modules (Fig. 10).
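To convey the idea (not the package's actual API—names and identifiers below are invented for illustration), a drastically simplified version of such an object model might look like this:

    #include <memory>
    #include <unordered_map>
    #include <vector>

    using InterfaceId = int;

    struct IInterface {                       // common base of all interfaces
        virtual ~IInterface() = default;
    };

    struct ISimulation : virtual IInterface { // an example object interface
        static constexpr InterfaceId Id = 42; // unique within the system
        virtual void run() = 0;
    };

    struct Component {                        // base of primary/tear-off classes
        virtual ~Component() = default;
        // interface id -> implementation within this component
        std::unordered_map<InterfaceId, IInterface*> interfaces;
    };

    class CompoundObject {                    // a tree of components
    public:
        void addComponent(std::unique_ptr<Component> c) {
            components_.push_back(std::move(c));
        }
        // Query an interface: search all components of the object.
        template <class I>
        I* queryInterface() {
            for (auto& c : components_) {
                auto it = c->interfaces.find(I::Id);
                if (it != c->interfaces.end())
                    return dynamic_cast<I*>(it->second);
            }
            return nullptr;
        }
    private:
        std::vector<std::unique_ptr<Component>> components_;
    };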

Fig. 10. Dependencies between the modules in the case of the traditional approach relying on symbol export (left) and in the case of automatic component registration (right).

3 Results and Discussion

The above sections give a brief overview of the work done within the project of development of a digital twin of CVT. This multidisciplinary research covers the areas of mathematical modeling in mechanics, numerical methods, and software development. The main results of the research in its current state are listed below. A number of physical and mathematical models of chain CVT have been developed, differing in complexity and degree of detail, with the number of degrees of freedom ranging from one to thousands. These models make it possible to judge the stability of the equilibrium position and to predict global dynamics, the stress–strain state of chain elements, efficiency, and acoustic noise. Small-sized variator models [31] for predictive modeling of global dynamics do not require large computational costs and can be used for calculations in real time. The most detailed CVT model takes into account the discrete structure of the chain; bending of the rocker pins; bending and torsion of the chain plates;


limited bending moment in the plates; and the presence of two halves of the rocker pin rolling over each other. Based on the Hertz contact theory, a model of local contact interaction of bodies with biconvex surfaces in the presence of friction close to Coulomb's has been developed, which implements a detailed description of contact kinematics. The possibility of applying various methods of numerical integration to the problems of CVT dynamics has been investigated. Classes of methods whose use seems promising have been identified: these are stabilized explicit methods of Lebedev type [32]. An infrastructure of software components has been designed and implemented, serving as the foundation for the creation of scalable problem-oriented software systems. It consists of many modules and facilitates the solution of both general problems and those specific to predictive modeling. Based on the above-mentioned infrastructure of components, a software package has been created—a full-featured software product that implements the developed models of transmission elements and methods of numerical integration, as well as all the tools necessary for an engineer's comfortable work on the numerical simulation of the variator, such as preparation of initial data for single and multivariate simulations, running simulations, analysis of a numerical solution in a single calculation, and preparation of summary reports based on the results of multivariate simulations.

4 Conclusions

The paper presents the results of many years of work on the creation of a digital twin of a continuously variable transmission with a plate chain. The presence of such a twin makes it possible to carry out detailed predictive modeling of transmission dynamics in order to optimize its parameters and improve performance. The developed software package is currently used at an industrial enterprise that designs and manufactures continuously variable transmissions with a plate chain, and is one of the standard tools used by engineers.

Acknowledgments. The authors thank the Russian Science Foundation for their support of research under grant No. 18-11-00245.

References

1. Srnik, J., Pfeiffer, F.: Dynamics of CVT chain drives. Int. J. Veh. Des. 22(1/2), 54–72 (1999)
2. Neumann, L., Ulbrich, H., Pfeiffer, F.: New model of a CVT rocker pin chain with exact joint kinematics. ASME J. Comput. Non-linear Mech. 1(2), 143–149 (2006)
3. Geier, T., Foerg, M., Zander, R., Ulbrich, H., Pfeiffer, F., Brandsma, A., van der Velde, A.: Simulation of a push belt CVT considering uni- and bilateral constraints. ZAMM – J. Appl. Math. Mech. 86(10), 795–806 (2006)
4. Neumann, L., Ulbrich, H., Pfeiffer, F.: Optimisation of the joint geometry of a rocker pin chain. Mach. Dyn. Probl. 29(4), 97–108 (2005)


5. Bullinger, M., Pfeiffer, F., Ulbrich, H.: Elastic modelling of bodies and contacts in continuous variable transmissions. Multibody Syst. Dyn. 13(2), 175–194 (2005)
6. Schindler, T., Friedrich, M., Ulbrich, H.: Computing time reduction possibilities in multibody dynamics. In: Arczewski, K., et al. (eds.) Computational Methods and Applications, pp. 239–259. Springer, Dordrecht (2011)
7. Lebrecht, W., Pfeiffer, F., Ulbrich, H.: Analysis of self-induced vibrations in a pushing V-belt CVT. In: International Continuously Variable and Hybrid Transmission Congress, paper no. 04CVT-32 (2004)
8. Bullinger, M., Funk, K., Pfeiffer, F.: An elastic simulation model of a metal pushing V-belt CVT. In: Ambrósio, J.A. (ed.) Advances in Computational Multibody Systems, pp. 269–293. Springer, Dordrecht (2006)
9. Srivastava, N., Haque, I.: Clearance and friction-induced dynamics of chain CVT drives. Multibody Syst. Dyn. 19(3), 255–280 (2008)
10. Sedlmayr, M., Pfeiffer, F.: Spatial contact mechanics of CVT chain drives. In: Proceedings of the ASME Design Engineering Technical Conference, vol. 6, pp. 1789–1795 (2001)
11. Sedlmayr, M., Bullinger, M., Pfeiffer, F.: Spatial dynamics of CVT chain drives. In: VDI-Berichte Nr. 1709: CVT 2002 Congress, pp. 511–527. VDI-Verlag GmbH, Düsseldorf (2002)
12. Schindler, T.: Spatial dynamics of pushbelt CVTs. Fortschritt-Berichte VDI, Reihe 12, Nr. 730. VDI-Verlag, Düsseldorf (2010)
13. Carbone, G., Mangialardi, L., Mantriota, G.: The influence of pulley deformations on the shifting mechanism of metal belt CVT. J. Mech. Des. 127(1), 103–113 (2005)
14. Bradley, T.H.: Simulation of continuously variable transmission chain drives with involute inter-element contact surfaces. Ph.D. thesis, University of California, Sacramento, 154 p. (2003)
15. Kamenskov, V.Yu.: Improving the operational properties of an automotive friction variator with a metal chain. Ph.D. thesis, Bauman Moscow State Technical University, Moscow, 136 p. (2009) (in Russian)
16. Gantmacher, F.R.: Lectures on Analytical Mechanics, 2nd edn. Nauka, Moscow (1966) (in Russian)
17. Blochwitz, T., Otter, M., Arnold, M., Bausch, C., Clauß, C., Elmqvist, H., Junghanns, A., Mauss, J., Monteiro, M., Neidhold, T., Neumerkel, D., Olsson, H., Peetz, J.V., Wolf, S.: The functional mockup interface for tool independent exchange of simulation models. In: Proceedings of the 8th International Modelica Conference (2011)
18. Johnson, K.: Contact Mechanics. Cambridge University Press, Cambridge (1985)
19. Birger, I.A., Panovko, Ya.G.: Strength, Stability, Vibrations. Handbook in three volumes, vol. 2. Mashinostroenie, Moscow (1968) (in Russian)
20. Hindmarsh, A.C., Brown, P.N., Grant, K.E., Lee, S.L., Serban, R., Shumaker, D.E., Woodward, C.S.: SUNDIALS: suite of nonlinear and differential/algebraic equation solvers. ACM Trans. Math. Softw. 31(3), 363–396 (2005)
21. Hairer, E., Wanner, G.: Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems. Springer Series in Computational Mathematics. Springer, Heidelberg (2013)
22. Hairer, E., Nørsett, S., Wanner, G.: Solving Ordinary Differential Equations I: Nonstiff Problems. Springer, Berlin (2000)
23. Deuflhard, P., Bornemann, F.: Scientific Computing with Ordinary Differential Equations. Springer, Secaucus (2002)
24. Steihaug, T., Wolfbrandt, A.: An attempt to avoid exact Jacobian and nonlinear equations in the numerical solution of stiff differential equations. Math. Comput. 33(146), 521–534 (1979)


25. Knoll, D., Keyes, D.: Jacobian-free Newton–Krylov methods: a survey of approaches and applications. J. Comput. Phys. 193(2), 357–397 (2004)
26. Brown, J., Brune, P.: Low-rank quasi-Newton updates for robust Jacobian lagging in Newton methods. In: International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, pp. 2554–2565 (2013)
27. Broyden, C.G.: A class of methods for solving nonlinear simultaneous equations. Math. Comput. 19(92), 577–593 (1965)
28. Martin-Vaquero, J., Janssen, B.: Second-order stabilized explicit Runge–Kutta methods for stiff problems. Comput. Phys. Commun. 180(10), 1802–1810 (2009)
29. Torrilhon, M., Jeltsch, R.: Essentially optimal explicit Runge–Kutta methods with application to hyperbolic–parabolic equations. Numer. Math. 106(2), 303–334 (2007)
30. Orlov, S., Melnikova, N.: Compound object model for scalable system development in C++. Procedia Comput. Sci. 66, 651–660 (2015)
31. Orlov, S.G.: Low dimension models of continuously variable transmission dynamics. Doklady Math. 97(2), 152–156 (2018). https://doi.org/10.1134/S106456241802014X
32. Lebedev, V.I.: Explicit difference schemes for solving stiff problems with a complex or separable spectrum. Comput. Math. Math. Phys. 40, 1729–1740 (2000)

Experience-Driven, Method-Agnostic Algorithm for Controlling Numerical Integration of ODE Systems

Stepan Orlov¹ and Lidia Burkovski²

¹ Peter the Great St. Petersburg Polytechnic University, Polytechnicheskaya, 29, 195251 St. Petersburg, Russian Federation, [email protected]
² Schaeffler Automotive Buehl GmbH & Co. KG, Industriestraße 3, 77815 Bühl, Germany

Abstract. An algorithm is proposed to control the step size and an optional discrete parameter of a single-step numerical integration method for ordinary differential equation (ODE) systems capable of estimating the step local error norm. The proposed algorithm outperforms traditional ones in the case of ODE systems with a non-smooth right-hand side. Instead of relying on the dependency of the local step error on the step size, as traditional algorithms do, our algorithm collects discrete statistical data during the numerical solution and makes decisions about step size changes based on the analysis of that data. In the end, a real-life application example is shown.

Keywords: Numerical integration · ODE system · Automatic step size control

1 Introduction

Let us consider the initial value problem for a system of ordinary differential equations with the independent variable $x$ and the state vector $y$:

$$ y' = f(x, y), \qquad y|_{x = x_*} = y_*, \qquad (\ldots)' \equiv \frac{d(\ldots)}{dx}. \tag{1} $$

Consider now that the problem (1) is being solved using a single-step numerical integration method that is capable of estimating the step local error norm and is controlled by one or two parameters. The first, continuous parameter is the step size. There may also be a second, discrete parameter. It could, for example, identify a specific numerical integration scheme among the family of schemes implemented by the method. In our use case, the numerical method is DUMKA3 [1], a stabilized explicit method implementing a family of schemes that differ in the number of stages and the length of the stability region in the real-negative direction. Below we present an algorithm to control the two parameters of the numerical integration method during the numerical solution, with the aim of achieving the best performance of the method while keeping the step local error below a specified tolerance.


Let us further consider a single step of the numerical method, starting at $x = x_0$, $y = y_0$ and ending at $x = x_1$, $y = y_1$ (here $y_1$ is the numerical solution at the end of the step); denote the step size by $h$, such that $x_1 = x_0 + h$. The step local error $e$ is, by definition, the difference between the numerical and exact solutions at the end of the step,

$$ e \equiv y_1 - y(x_1), \tag{2} $$

where $y(x_1)$ is the value of the exact solution of the initial value problem with $x_* = x_0$, $y_* = y_0$ at the point $x = x_1$. Traditional step size controlling algorithms [2, II.4] typically rely on the known dependency of the step local error on the step size:

$$ e = C h^{p+1} + O\!\left( h^{p+2} \right), \tag{3} $$

where $p$ is the order of accuracy of the numerical scheme, and $C$ is a value depending on the current state and the ODE right-hand side. Numerical methods such as embedded Runge–Kutta schemes provide built-in mechanisms to estimate the local error $e$ based on the relation (3). For the purposes of this article, the difference between $e$ and its estimation by a numerical method does not matter, so we further assume that $e$ is provided directly by the numerical method (although that is not true in the general case). Once an integration step is done, the estimation of the local error norm, $\|e\|$, is compared against the absolute tolerance $\varepsilon$. The quality of the numerical solution is acceptable when the condition

$$ \|e\| \le \varepsilon \tag{4} $$

holds. In this case, the step is accepted; otherwise, the step is rejected, and a new step is done with a reduced step size; the process continues until condition (4) holds. Once the integration step is accepted, the size of the next step, $h_{\mathrm{next}}$, is estimated by a traditional algorithm as

$$ h_{\mathrm{next}} = h \left( \frac{\varepsilon}{\|e\|} \right)^{\frac{1}{p+1}}. \tag{5} $$

Formula (5) is further refined to provide better properties of the numerical method. For example, it is considered undesirable if the step size changes too fast. The requirement

$$ r_{\min} \le \frac{h_{\mathrm{next}}}{h} \le r_{\max} $$

is then added. Besides, a safety factor $r < 1$ is often introduced. As a result, formula (5) is replaced by the following one:


$$ h_{\mathrm{next}} = h \max\left\{ r_{\min},\; \min\left\{ r_{\max},\; r \left( \frac{\varepsilon}{\|e\|} \right)^{\frac{1}{p+1}} \right\} \right\}. \tag{6} $$
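For reference, formula (6) translates into just a few lines of code; the sketch below is a hypothetical implementation, with the default parameter values chosen purely as an example:

    #include <algorithm>
    #include <cmath>

    // Traditional step size controller, formula (6).
    // h: current step size, errNorm: ||e||, eps: tolerance,
    // p: order of accuracy of the scheme.
    double nextStepSize(double h, double errNorm, double eps, int p,
                        double rMin = 0.5, double rMax = 2.0, double r = 0.9)
    {
        const double factor = r * std::pow(eps / errNorm, 1.0 / (p + 1));
        return h * std::clamp(factor, rMin, rMax);
    }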

Further, some numerical methods may impose additional restrictions on the step size controller: it might be computationally expensive to change the step size—this is the case, for example, for Rosenbrock-type W-methods [3]. These restrictions can be handled by introducing additional parameters and replacing formula (6) with a more complex one. Apart from step size control, there is a need to control other parameters of the numerical integration method, if any. For example, extrapolation methods [2, II.9] may implement automatic control of the order of extrapolation in addition to the step size. In the case of the DUMKA3 solver, the second, discrete parameter is an integer index identifying one of several numerical schemes implemented by the method. Each scheme is characterized by a certain number of stages, equal to the degree of its stability polynomial; the higher the degree, the longer the stability region of the scheme in the real-negative direction. To pick the appropriate scheme, the DUMKA3 solver estimates the largest eigenvalue of the ODE right-hand side Jacobian matrix, $J = \partial f / \partial y$. Traditional algorithms controlling the step size and the choice of numerical scheme work well for smooth ODE systems. However, numerical experiments with some non-smooth systems have shown their poor performance. The present work emerged as soon as we started experimenting with the application of the DUMKA3 solver to the problem of dynamics of a complex model of a continuously variable transmission (CVT); the model is briefly described in [4] and involves about 3000 state variables (1500 generalized coordinates and the same number of generalized speeds). It has turned out that the solver outperforms many other numerical methods at a fixed step size and numerical scheme, but the attempt to use the traditional algorithms for controlling the step size and the degree of the stability polynomial that are built into the solver led to disappointingly poor performance—several times greater CPU time to obtain a numerical solution of quality similar to that of the solution obtained at a fixed step size. An investigation has shown that the source of the poor performance is the discontinuity of the ODE right-hand side, which may lead, in particular, to infinite eigenvalues of $J$ at points of discontinuity, and to frequent oscillations of the estimated step local error. To overcome the difficulties arising with the use of traditional controlling algorithms, it was decided to develop a completely different algorithm, which is "blind" in the sense that the properties of the method and of the ODE right-hand side are not used in any way other than to guess initial parameter values. The data feeding our controlling algorithm are (1) the step local error norm, $\|e\|$, and (2) a value proportional to the number of operations per second of model time for each of the schemes identified by the second parameter (when applicable; for example, the number of stages multiplied by the number of step attempts, divided by the step size). Driven by that input data, the algorithm makes decisions about parameter changes during the numerical solution. To do so, it needs certain


statistical information that it collects as the numerical solution proceeds. Specifically, the step size is controlled based on the percentage of failed step attempts (with step local error above the tolerance) within short-term and long-term retrospectives; the second parameter is controlled based on the workload per second. Importantly, "curiosity" has to be introduced into the second-parameter adjustment algorithm in order to avoid getting stuck in local minima of the performance criterion.

2 Materials and Methods

2.1 Controlling Step Size

The automatic step size selection for numerical integration of ODE systems has been addressed in a number of papers—see [5–10] and references therein. Some approaches [8, 9] treat step size control as feedback control, viewed from the standpoint of control theory. Others [5] are more heuristic, proposing methods relying on the history of past time steps. The presented step size controlling algorithm can be classified as a heuristic one, although it probably can also be viewed from the standpoint of nonlinear control theory. The algorithm also relies on the past history of integration steps; in contrast to other algorithms, the only input information for our algorithm is the fact that a step is accepted or rejected by the solver.

General Protocol of Interaction with the Solver. The step size controller (further referred to as the h-controller) has been designed as a black-box object for use by implementations of single-step numerical integration methods (further referred to as solvers). The object has an internal state and implements a simple interface. The interface implies the use of the following protocol.

1. At the very beginning of the numerical simulation, the parameters of the h-controller are set using the corresponding interface methods.
2. At each integration step, the solver calls the recommendStepSize() method of the object at least once. The method returns the value of the step size to try at this step. Once the solver obtains the recommended value of the step size, the integration step is done, and the error norm, $\|e\|$, is estimated.
3. If condition (4) holds, the solver informs the h-controller that the step is accepted by calling the commitSuccessfulStep() method. The solver then accepts the integration step just taken and moves to the next step, so we go to point 2 of the protocol, unless $x$ has reached its final value.
4. If condition (4) does not hold, the step has to be rejected as not satisfying the accuracy requirement. Therefore, the solver rejects the step and starts a new one, so we go to point 2 of the protocol. The solver will now ask the h-controller again for a step size recommendation and get a smaller step size value. That will repeat until (4) holds, and we arrive at point 3 of the protocol. The controller will be aware


about all unsuccessful step attempts because the solver will call recommendStepSize() multiple times without reporting success.
5. Sometimes the solver may call the restart() method of the h-controller. This may happen, for example, when the solver decides to change the numerical scheme. Calling the method instructs the h-controller to discard any statistical information collected so far and switch to the initial state.

Internal State of the Step Size Controller. Apart from its parameters, the h-controller object has an internal state that evolves as the solver calls its methods. The internal state contains the following variables.

• Discrete state, one of undefined, ready, and step. The undefined state means that the recommendStepSize() and restart() methods have not yet been called; calling recommendStepSize() will implicitly call restart(). Further on, we can safely assume that the undefined state does not exist at all. The ready state means that either the restart() or the commitSuccessfulStep() method has been called, and recommendStepSize() has not been called after that; the h-controller is ready to assist the choice of step size for a new integration step. The step state means that recommendStepSize() has been called one or more times, and commitSuccessfulStep() or reset() has not been called after that; the controller has started recommending step sizes for an integration step. Transitions of the discrete state are shown in Fig. 1.
• Normal step size, $H$. This variable is initialized in reset() with a user-specified value $h_0$; it may further change between steps, in the implementation of the commitSuccessfulStep() method.
• Current step size, $h_{\mathrm{cur}}$. A call to recommendStepSize() initializes this variable with $H$ if the state before the call is ready (i.e., in the case of the first step size recommendation at the current integration step); otherwise, in the step state, the implementation of recommendStepSize() multiplies $h_{\mathrm{cur}}$ by the parameter $k < 1$. Finally, the recommendStepSize() method returns $h_{\mathrm{cur}}$ as the step size recommendation.
• Number of bad step size recommendations at the current integration step, $n_{\mathrm{rejected}}$. A call to recommendStepSize() initializes this variable with zero if the state before the call is ready, and increments it by one if the state before the call is step.
• Buffer $R$ of bits describing each of at most $M$ (currently $M = 100$) recent successful integration steps. Each bit characterizes the step as "good" or "bad". An integration step is considered "good" if $n_{\mathrm{rejected}} = 0$; otherwise, if $n_{\mathrm{rejected}} > 0$, the step is considered "bad", meaning there were rejected step attempts. The buffer represents the long-term retrospective of step information used to evolve $H$. Once an integration step is done, the buffer is updated; sometimes it is cleared (details are described in the section "The Evolution of the Normal Step Size Recommendation" below).
• A similar buffer $r$ of bits for at most $m$ (currently $m = 10$) recent successful integration steps, representing the short-term retrospective of step information, also used to evolve $H$.


Fig. 1. Discrete state transitions in the h-controller (states undefined, ready, step; transitions caused by reset(), recommendStepSize(), and commitSuccessfulStep()).
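In C++, the protocol above corresponds to an interface of roughly the following shape (a hypothetical sketch; the paper does not show the actual class):

    // Black-box step size controller used by single-step ODE solvers.
    class StepSizeController {
    public:
        virtual ~StepSizeController() = default;
        // First call per step returns H; repeated calls within the same
        // step return progressively smaller values (see formula (7)).
        virtual double recommendStepSize() = 0;
        // Reports that the last recommended step satisfied ||e|| <= eps.
        virtual void commitSuccessfulStep() = 0;
        // Discards collected statistics and switches to the initial state.
        virtual void restart() = 0;
    };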

Controlling Step Size Within One Step. As already mentioned above, the first recommended step size value returned by the recommendStepSize() method equals $H$. If that value leads to an unacceptable step local error norm, the solver calls recommendStepSize() again, getting new recommended step values. At the $n$-th call the method returns

$$ h_{\mathrm{cur}} = k^{n-1} H, \tag{7} $$

where the value of the parameter $k$ has been set to 0.7 as a reasonable choice. If the user specifies a minimum possible step size $h_{\min}$, and the number of unsuccessful attempts is such that $h_{\mathrm{cur}} < h_{\min}$, the h-controller raises an exception, which leads to the abnormal termination of the simulation.

The Evolution of the Normal Step Size Recommendation. The value $H$, recommended as the first step size to try at each step, evolves relatively slowly in time. Its value may be changed by the implementation of the commitSuccessfulStep() method, which switches the h-controller from the step state to the ready state. Decisions about changes of the normal step size $H$ are taken based on the statistical data collected in the short-term and long-term retrospective buffers described above. Once those buffers are full (i.e., $M$ or $m$ steps after cleanup for the long-term and short-term retrospective buffers, respectively), one can calculate the normalized amount of "bad" steps, $B$ and $b$, in each of the buffers:

$$ B = \frac{M_{\mathrm{bad}}}{M}, \qquad b = \frac{m_{\mathrm{bad}}}{m}, \tag{8} $$

where $M_{\mathrm{bad}}$ and $m_{\mathrm{bad}}$ are the numbers of "bad" steps in the long-term and short-term retrospective buffers, respectively. When commitSuccessfulStep() is called, it first adds the information about the current step to both retrospective buffers. Then, if the long-term retrospective buffer is full, the algorithm may decide to change the normal step size recommendation $H$. This is done based on two threshold values, $B_{\min}$ and $B_{\max}$, as follows. If the amount of bad steps $B$ does not exceed the $B_{\min}$ threshold, then $H$ gets multiplied by the parameter $K > 1$, so the normal step size increases. Otherwise, if $B$ exceeds the $B_{\max}$ threshold, then $H$ gets divided by $K$, so the normal step size decreases. Otherwise, in the case $B_{\min} < B \le B_{\max}$, the normal step size remains unchanged.


If the long-term retrospective buffer $R$ is not full, but the short-term retrospective buffer is full, the algorithm may also change the normal step size $H$. Namely, in the case $b > b_{\max}$, where $b_{\max} > B_{\max}$ is another threshold parameter, the normal step size gets divided by $K$. The second buffer has been introduced in order to avoid long sequences of "bad" steps, because "bad" steps are much more computationally expensive than "good" ones. Also, when the normal step size changes due to any of the conditions $B \le B_{\min}$, $B > B_{\max}$, or $b > b_{\max}$, both the long-term and short-term retrospective buffers $R$ and $r$ are cleared. This is done in order not to try changing $H$ within the next $M$ (for the $R$ buffer) or $m$ (for the $r$ buffer) steps. The algorithm described above can be written as follows:

    Update buffers R, r
    if R buffer is full then
        if B ≤ Bmin then
            H ← H · K
            Clear buffers R, r
        else if B > Bmax then
            H ← H / K
            Clear buffers R, r
        end if
    else if r buffer is full then
        if b > bmax then
            H ← H / K
            Clear buffers R, r
        end if
    end if
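A compact C++ sketch of this logic, assuming sliding-window buffers and the parameter values of Table 1 (illustrative code, not the authors' implementation), is shown below:

    #include <cstddef>
    #include <deque>

    class NormalStepSize {
    public:
        explicit NormalStepSize(double h0) : H_(h0) {}

        // Called once per successful integration step;
        // 'bad' means the step had rejected attempts.
        void commitStep(bool bad) {
            push(R_, M, bad);
            push(r_, m, bad);
            if (R_.size() == M) {
                const double B = fractionBad(R_);
                if (B <= Bmin)      { H_ *= K; clearBuffers(); }
                else if (B > Bmax)  { H_ /= K; clearBuffers(); }
            } else if (r_.size() == m) {
                if (fractionBad(r_) > bmax) { H_ /= K; clearBuffers(); }
            }
        }
        double H() const { return H_; }

    private:
        static void push(std::deque<bool>& buf, std::size_t cap, bool v) {
            buf.push_back(v);
            if (buf.size() > cap) buf.pop_front();
        }
        static double fractionBad(const std::deque<bool>& buf) {
            std::size_t nbad = 0;
            for (bool b : buf) if (b) ++nbad;
            return double(nbad) / double(buf.size());
        }
        void clearBuffers() { R_.clear(); r_.clear(); }

        static constexpr std::size_t M = 100, m = 10;   // Table 1
        static constexpr double Bmin = 0.01, Bmax = 0.1,
                                bmax = 0.3, K = 1.02;   // Table 1
        std::deque<bool> R_, r_;
        double H_;
    };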

Table 1. Possible values of h-controller parameters responsible for H evolution.

    Parameter:  M    m   Bmin  Bmax  bmax  K
    Value:      100  10  0.01  0.1   0.3   1.02

Possible values of the parameters of the algorithm part responsible for the evolution of $H$ are presented in Table 1. Notice that the $M$ and $K$ parameters should be chosen according to the characteristic speeds of processes in the ODE system that could affect the reasonable value of $H$, and the characteristic simulation time. For example, it follows from the algorithm that the normal step size $H$ may increase $q$ times in at least $N(q)$ steps and decrease $q$ times in at least $n(q)$ steps, where

$$ N(q) = M \frac{\ln q}{\ln K}, \qquad n(q) = m \frac{\ln q}{\ln K}. \tag{9} $$


Applying the above formula and taking parameter values from Table 1, we obtain $N(2) = 3500$, $N(10) = 1.16 \times 10^4$, $N(100) = 2.33 \times 10^4$, and $n(q) = 0.1\, N(q)$. This is acceptable when the total number of integration steps is about a million or greater, which is the case for our example ODE problem. An important point about the step size control algorithm is that when the acceptable step size at a certain integration step happens to be much less than the step size at the previous integration step, this does not immediately lead to the use of much shorter step sizes at the next steps. Instead, the step is marked as "bad", which allows the normal step size $H$ to be further slightly corrected. In the case of a non-smooth ODE right-hand side, this approach proves to result in better performance.

2.2 Controlling Discrete Parameter of Method

In our case of the DUMKA3 solver, the discrete parameter of the numerical integration method identifies a certain numerical integration scheme, having a certain number of stages and a certain stability polynomial. The algorithm controlling this parameter, however, abstracts from those details. It just treats the discrete parameter as an integer number $p \in [p_{\min}, p_{\max}]$ that may affect numerical performance. The object implementing the discrete parameter control algorithm is further referred to as the p-controller.

General Protocol of P-controller Usage

1. At the very beginning of the numerical simulation, the solver calls the recommendInitialIndex() method. The method returns the recommended value of $p$; this requires some knowledge about the meaning of the $p$ parameter and hence is not part of the p-controller core. In our case, the recommendation is based on the known stability region and number of stages for each $p$, as well as an estimation of the spectrum of the ODE right-hand side Jacobian matrix, $J$.
2. At the beginning of each integration step, the solver asks for a new recommendation of $p$ by calling the recommendIndex() method of the p-controller.
3. After each step attempt, the solver passes a value proportional to the corresponding workload (in our case, the number of stages in the $p$-th numerical integration scheme) by calling the addStepWorkload() method of the p-controller.
4. At the end of each integration step, the solver calls the endStep() method of the p-controller, passing the actual step size to it, so that the controller can update the complexity value $c_p$ (10).

Notice that whenever the value of $p$ changes, the solver also calls the restart() method of the h-controller and sets an estimated value of the step size as the initial value of $H$.

Internal State of P-controller. A block of data is associated with each valid value of $p \in [p_{\min}, p_{\max}]$. The block contains the following elements.

• Complexity metric $c_p$ of the numerical method at the corresponding value of $p$. The complexity is available after at least one integration step has been done, and is computed according to the formula


$$ c_p = \frac{N_p}{n_p} \sum_{i=1}^{n_p} \frac{n_{\mathrm{bad},i} + 1}{h_i}. \tag{10} $$

The formula considers the averaged complexity of the last $n_p$ steps that the solver has done with the value $p$ of the discrete parameter; $n_p$ is limited to some constant, currently 500, in order not to include the complexity of steps done far in the past. Further, $N_p$ in (10) is a value proportional to the number of operations per step attempt when the discrete parameter equals $p$ (in our case, it is the number of stages); $(n_{\mathrm{bad},i} + 1)$ is the total number of step attempts at the $i$-th integration step with the value $p$ of the discrete parameter; $h_i$ is the size of the $i$-th integration step. By construction, the complexity $c_p$ is proportional to the number of operations required to advance the independent variable $x$ by one. It is used by the p-controller to judge the efficiency of the solver at the corresponding value of $p$.

• Boolean flag $\pi_p$ indicating priority. If the flag is set, the corresponding value of $p$ will be preferred by the algorithm over any values that do not have the flag. Initially, the flag is set, so all values of $p$ "have priority".

The array of $(p_{\max} - p_{\min} + 1)$ such blocks is part of the internal state of the p-controller. Other parts of the state are the following variables.

• The current value of the discrete parameter, further denoted $p_{\mathrm{cur}}$.
• The "favorite" value of the discrete parameter, further denoted $p_*$. The meaning of $p_*$ is discussed in the next section. Initially, $p_*$ has a nonsense value of $p_{\min} - 1$.
• The expiration counter for $p_*$, denoted $n_*$ (also discussed in the next section).
• The normal step size $H$ used last time for the "favorite" value of the discrete parameter, $H_*$. Initially zero.

The Algorithm of the P-controller. When the simulation starts, it calls the recommendInitialIndex() method of the p-controller according to the protocol. At that time $c_p$, the actual complexity of each value of $p$, is not known, so the implementation of the method estimates the complexity of each $p$ by using the knowledge about the meaning of the discrete parameter. In our case, the algorithm is as follows. First, the spectrum of the ODE right-hand side Jacobian matrix, $J$, is estimated, resulting in a set of $a$ complex eigenvalues $\lambda_1, \ldots, \lambda_a$ that are expected to represent the spectrum boundaries. Then, for each value of $p$, the step size $h_p^{\mathrm{est}}$ is estimated as

$$ h_p^{\mathrm{est}} = \max\left\{ h > 0 : h \lambda_i \in R_p,\; \forall i = 1, \ldots, a \right\}, $$

where $R_p$ is the known stability region of the $p$-th scheme. Then the complexity is estimated as

$$ c_p^{\mathrm{est}} = \frac{N_p}{h_p^{\mathrm{est}}}, $$


where $N_p$ is the number of stages of the $p$-th scheme. Once the estimated complexity is known for each value of the discrete parameter, the initial value is chosen as

$$ p^{\mathrm{est}} = \arg\min_{p \in [p_{\min},\, p_{\max}]} \left\{ c_p^{\mathrm{est}} \right\}. $$
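A sketch of the corresponding bookkeeping—formula (10) with the 500-step window, plus the arg-min selection used here and in formula (11) below—might look as follows (hypothetical code with illustrative names):

    #include <cstddef>
    #include <deque>
    #include <limits>
    #include <vector>

    struct SchemeStats {
        double Np = 1;                  // operations per step attempt (stages)
        std::deque<double> perStep;     // values (n_bad,i + 1) / h_i
        static constexpr std::size_t window = 500;   // limit on n_p

        void endStep(int nBad, double h) {
            perStep.push_back((nBad + 1) / h);
            if (perStep.size() > window) perStep.pop_front();
        }
        double complexity() const {     // c_p, formula (10)
            if (perStep.empty())
                return std::numeric_limits<double>::infinity();
            double sum = 0;
            for (double v : perStep) sum += v;
            return Np * sum / double(perStep.size());
        }
    };

    // arg min_p c_p; index into the per-scheme stats array.
    std::size_t leastComplexScheme(const std::vector<SchemeStats>& stats) {
        std::size_t best = 0;
        for (std::size_t p = 1; p < stats.size(); ++p)
            if (stats[p].complexity() < stats[best].complexity()) best = p;
        return best;
    }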

While the above estimation may give a reasonable value of $p_{\mathrm{cur}}$ that can be used directly, experience shows that one of the values $p^{\mathrm{est}} \pm 1$ or even $p^{\mathrm{est}} \pm 2$ might provide better performance than $p^{\mathrm{est}}$. Therefore, further "on-the-fly" adjustment of the discrete parameter is implemented in the recommendIndex() method of the p-controller. The recommendIndex() method is lazy. This means that, although it gets called at each integration step, it only tries to change the current recommendation $p_{\mathrm{cur}}$ when the number of the integration step is a multiple of the laziness value, $n_{\mathrm{lazy}}$. According to common sense, the laziness value must be several times greater than the $M$ parameter of the h-controller introduced in the section "Internal State of the Step Size Controller" (otherwise, there is a chance that the actual complexity $c_p$ is overestimated due to a wrong step size). Currently, $n_{\mathrm{lazy}} = 200$ if the flag $\pi_p$ is set, and $n_{\mathrm{lazy}} = 1000$ if $\pi_p$ is not set. A new recommendation of $p$, denoted here as $p_{\mathrm{new}}$, is picked as follows. First, the priority flag $\pi_{p_{\mathrm{cur}}}$ is cleared. Then, if there are values of $p$ with the $\pi_p$ flag set, $p_{\mathrm{new}}$ is set to the minimal one among those. In this case, we always have $p_{\mathrm{new}} \ne p_{\mathrm{cur}}$—the new recommendation differs from the current one. When the priority flag $\pi_p$ is cleared for all $p$, this means that all values of $p$ have already been tried (because initially all $\pi_p$ are set, and a flag is only cleared after trying the corresponding $p$) and, as a consequence, their actual complexity has been evaluated. The algorithm then picks a new recommendation based on the available complexity data, favoring the least complexity:

$$ p_{\mathrm{new}} = \arg\min_p\, c_p. \tag{11} $$

At this point, there are two possibilities: pnew ≠ pcur and pnew = pcur. The former case needs no special treatment: we just use pnew as the next value of pcur and set p* to a nonsense value, such as pmin − 1, to indicate that there is no "favorite" p. The case pnew = pcur must be handled properly when the spectrum of the ODE right-hand side Jacobian matrix changes (e.g., for nonlinear ODEs). Without such handling, the p-controller would likely stick to the same pcur, which would let the actual complexities cp of all other values of p become outdated; ultimately, the recommendation pnew based on (11) would become inadequate. Therefore, the algorithm must ensure that different values of p are tried from time to time, which we call curiosity. The curiosity is implemented in the following way. When the case pnew = pcur occurs, the "favorite" parameter value p* is compared with pnew. If p* ≠ pnew, the value pnew becomes "favorite" by setting p* equal to pnew. Besides, the expiration counter n* is set to the value of the parameter n*,1 (currently n*,1 = 5), and finally pcur is set to pnew. In the other case, p* = pnew, the expiration counter n* is analyzed. If it is positive, the counter is decremented by one, and the favorite value pnew is recommended. When the counter reaches zero, the two valid values of p nearest to pnew are given priority by setting their flag pp; that causes those parameter values to be recommended in further calls to recommendIndex(). Besides, in that case the counter n* is set to −1. Finally, the case n* = −1 on entry means that the "favorite" value p* has turned out to be better (less complex) than the two other values of p that were given priority before and, as a result, tried as recommendations. In this case, the expiration counter n* is set to the value of the parameter n*,2 (currently n*,2 = 15). Notice that n*,2 > n*,1 because the "favorite" parameter value has just been verified to be better than its two neighboring values, so new attempts to use those values are postponed for a longer time than in the case when pnew has become "favorite" for the first time. Also, in the case pnew = pcur the value of the normal step size H of the h-controller is stored in the H* variable. Once the new recommendation pnew of the discrete parameter is chosen, the h-controller has to be restarted with a new initial recommendation of the normal step size, h0, which is computed as follows:

$$h_0 = \begin{cases} H^* L_{p_{new}} / L_{p^*} & \text{if } p_{min} \le p^* \le p_{max}, \\ H L_{p_{new}} / L_{p_{cur}} & \text{otherwise.} \end{cases} \qquad (12)$$

In our case, Lp is the length of the p-th scheme's stability region in the real direction; in general, when that information is not available, one could set Lp equal to the average H provided by the h-controller for the specific p. Notice that the case pmin ≤ p* ≤ pmax in (12) allows using the "favorite" p* with the last used H* after checking nearby p values within the curiosity action, thus preventing the algorithm from using wrong step sizes. The entire algorithm is presented in Fig. 2.
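As an illustration of the control flow described above, the following Python sketch renders the recommendIndex() decision logic; the class layout, the names, and the way complexities are tracked are assumptions of this sketch, not the authors' code.

```python
class PController:
    """Sketch of the recommendIndex() decision flow (names are illustrative)."""

    N_STAR_1, N_STAR_2 = 5, 15           # expiration counter presets

    def __init__(self, p_min, p_max):
        self.p_min, self.p_max = p_min, p_max
        self.priority = {p: True for p in range(p_min, p_max + 1)}
        self.complexity = {}             # p -> averaged c_p, updated elsewhere
        self.p_cur = p_min
        self.p_star = p_min - 1          # nonsense value: no "favorite" yet
        self.n_star = 0

    def recommend_index(self):
        self.priority[self.p_cur] = False
        flagged = [p for p, f in self.priority.items() if f]
        if flagged:                      # untried (or re-prioritized) values exist
            self.p_cur = min(flagged)
            return self.p_cur
        p_new = min(self.complexity, key=self.complexity.get)   # eq. (11)
        if p_new != self.p_cur:
            self.p_cur, self.p_star = p_new, self.p_min - 1
            return p_new
        # p_new == p_cur: the "curiosity" branch
        if self.p_star != p_new:
            self.p_star, self.n_star = p_new, self.N_STAR_1
        elif self.n_star > 0:
            self.n_star -= 1
        elif self.n_star == 0:           # expired: give priority to neighbors
            for q in (p_new - 1, p_new + 1):
                if self.p_min <= q <= self.p_max:
                    self.priority[q] = True
            self.n_star = -1
        else:                            # n_star == -1: favorite re-verified best
            self.n_star = self.N_STAR_2
        self.p_cur = p_new
        return p_new
```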

3 Results and Discussion

The algorithms for controlling the step size h and the discrete parameter p have been implemented in software. In order to illustrate and test the presented algorithm and to compare it against traditional step size control, a brief overview of numerical experiments with an ODE system modeling the continuously variable transmission [4] is given below. The ODE system contains about 3000 variables and can be classified as mildly stiff; more details can be found in [11]. Two load cases are considered; in both of them, stationary regimes of CVT operation are reached from a synthesized initial state during 0.25 s. The primary parameters are: gear ratio 2.788, input shaft rotation speed 4000 rpm. The input torque is 50 Nm in regime 1 and 250 Nm in regime 2. Notice that for this system, the maximum real-negative eigenvalues of J are known to be proportional to the applied torque. Each regime has been simulated in the following modifications: a) both the p-controller and the h-controller are used; b) fixed p (1, 2, 3, 4, 5), step size control using the proposed h-controller; c) fixed p, traditional step size control according to (6) with several values of rmin (1/2, 2/3, 5/6), rmax = 1/rmin, and r (0.85, 0.9, 0.95, 0.99).


Modification (b) makes it possible to identify the least complex value of p for a simulation. The comparison of modifications (a) and (b) allows estimating the overhead introduced by the p-controller algorithm that finds and keeps using the least complex p. The comparison of (b) and (c) allows judging the benefits of using the proposed h-controller. Notice that in modification (c), only one value of p was used in each regime: the one that is least complex according to the results of (b).

Fig. 2. Algorithm for updating the discrete parameter recommendation.


Table 2. CPU time [s] consumed by simulations (a), (b), and (c) at r = 0.9; rmin = 2/3 (regime 1), rmin = 5/6 (regime 2); rmax = 1/rmin.

Torque, Nm   (a)     (b) at different p                       (c)
                     p=1     p=2     p=3     p=4     p=5
50           2086    2052    1950    3045    3982    5254     2164
250          5012    8300    5442    4744    6177    8347     5329

Table 2 shows the CPU time, in seconds, spent for simulations of regimes 1 and 2 in modifications (a), (b), and (c). As follows from columns 2–6, the least complex p equals 2 for regime 1 (torque 50 Nm) and 3 for regime 2 (torque 250 Nm). Further, comparing column 2 with columns 3–6, we can see that the overhead introduced by the p-controller is 7% for regime 1 and 5.6% for regime 2.

Table 3. CPU time [s] consumed by simulations (c) at different r, rmin.

             Torque 50 Nm                      Torque 250 Nm
rmin         r=0.85  r=0.9   r=0.95  r=0.99    r=0.85  r=0.9   r=0.95  r=0.99
1/2          –       2296    –       –         –       6246    –       –
2/3          2246    2164    2422    2551      5860    5585    5932    6184
5/6          –       2260    –       –         –       5329    –       –

To show that the proposed h-controller outperforms the traditional one, a number of simulations (c) with the traditional step size control have been done with different parameters. The CPU time spent for each of those simulations is shown in Table 3. It follows from the table that the optimal parameter values are near r = 0.9, with rmin = 2/3 for regime 1 and rmin = 5/6 for regime 2 (all other tested combinations resulted in worse performance). Therefore, the last column in Table 2 presents the best results that we could achieve with the traditional step size control algorithm. Comparing with modification (b), we can conclude that the proposed h-controller outperforms the traditional one by 11% in regime 1 and by 12% in regime 2; when both the p-controller and the h-controller are used, simulations are still faster than those using the known fixed least complex p together with the traditional step size control.

4 Conclusions

A new algorithm for controlling two parameters of a numerical integration method for ODE systems has been proposed. One of the two parameters is the integration step size, and the other one is an integer discrete parameter. The only information used by the step size controller during simulation is the fact that a step has been accepted or rejected, therefore the controller can be used with virtually any method equipped with a local step error estimator. At the same time, additional information about the characteristic speeds of change of the ODE right-hand side Jacobian eigenvalues is expected to be known in advance and encoded in fixed parameters of the controller. The discrete parameter p is controlled using a heuristic algorithm that is capable of evaluating and updating the complexity associated with each value of p and of choosing the parameter with the least complexity. The overhead introduced by the p-controller in a numerical example is 5–7%. The presented algorithm has been used in a real-life application dedicated to the modeling of CVT dynamics and has proven to be reliable and robust.

Acknowledgments. The authors are thankful to the Russian Science Foundation for their support of research under grant No. 18-11-00245.

References

1. Medovikov, A.A.: High order explicit methods for parabolic equations. BIT Numer. Math. 38(2), 372–390 (1998)
2. Hairer, E., Nørsett, S.P., Wanner, G.: Solving Ordinary Differential Equations I: Nonstiff Problems, 2nd revised edn. Springer, Heidelberg (1993)
3. Steihaug, T., Wolfbrandt, A.: An attempt to avoid exact Jacobian and nonlinear equations in the numerical solution of stiff differential equations. Math. Comput. 33(146), 521–534 (1979)
4. Shabrov, N., Ispolov, Yu., Orlov, S.: Simulations of continuously variable transmission dynamics. ZAMM – J. Appl. Math. Mech. / Zeitschrift für Angewandte Mathematik und Mechanik 94(11), 917–922 (2014)
5. Krogh, F.T.: Stepsize selection for ordinary differential equations. ACM Trans. Math. Softw. 37, 15:1–15:11 (2010)
6. Shampine, L., Witt, A.: A simple step size selection algorithm for ODE codes. J. Comput. Appl. Math. 58(3), 345–354 (1995)
7. Gladwell, I., Shampine, L., Brankin, R.: Automatic selection of the initial step size for an ODE solver. J. Comput. Appl. Math. 18(2), 175–192 (1987)
8. Gustafsson, K.: Stepsize selection in implicit Runge-Kutta methods viewed as a control problem. IFAC Proc. Vol. 26(2), Part 1, 495–498 (1993)
9. Söderlind, G.: Automatic control and adaptive time-stepping. Numer. Algorithms 31, 281–310 (2002)
10. Hulbert, G.M., Jang, I.: Automatic time step control algorithms for structural dynamics. Comput. Methods Appl. Mech. Eng. 126(1), 155–178 (1995)
11. Orlov, S.: Application of numerical integration methods to continuously variable transmission dynamics models. In: SHS Web of Conferences, vol. 44, p. 00065 (2018)

Functional Visualization of the Results of Predictive Modeling in the Tasks of Aerodynamics

Alexey Kuzin1(✉), Alexey Zhuravlev1, Zoltan Zeman2, and József Tick3

1 Peter the Great St. Petersburg Polytechnic University, St. Petersburg, Russian Federation
[email protected], [email protected]
2 Szent Istvan University, Godollo, Hungary
3 Óbuda University, Budapest, Hungary

Abstract. The paper considers the important problem of 3D visualization of the results of CFD simulations on large-scale meshes, as well as the preprocessing of the original data into the developed format. The amount of data resulting from the modeling is often so large that interactive visualization of simulation results is impossible without employing specialized means for data representation and storage. The problems coming to the fore are converting the original result files into the proposed format, filtering data, providing direct access to required blocks, and minimizing data read operations and network traffic. A technology is proposed for the 3D visualization of scalar fields on arbitrary meshes containing about 10^9 nodes. The technology includes a data storage format and tools for conversion of the original mesh, a client-server architecture of the visualization application, and the protocol of interaction between client and server. The technology makes it possible to bound the data bandwidth of file read operations and to minimize the number of read operations and the amount of data transferred over the network for the visualization of each scene frame.

Keywords: Visualization · Uniform grid · Data format · Rendering

1 Introduction

The dimension of the models in modern problems of mathematical modeling grows permanently, which brings to the foreground the problem of creating tools for interactive analysis and visualization of the results of such simulations. Typical tasks in the field of fluid dynamics can involve models that consist of 10^9 nodes and even more. It is obvious that in this case even direct reading of the files with modeling results takes a lot of time, which is a problem for interactive analysis. In this article, visualization of the results of simulation means the creation of a 3D representation of the model with the scalar field distributions defined in the nodes of the mesh. Interactive visualization must provide the ability to freely pan, zoom and rotate the model, change the scaling of the model's deformed state, interactively change the level of an isosurface, and switch to another time frame when unsteady processes are visualized.


Two tasks can be marked out in the problem of large dataset visualization, both of which should be solved at each frame. First, it is necessary to obtain a dataset suitable for rendering and, second, the rendering of that dataset must be performed. Obviously, the original dataset cannot be used directly as the dataset for rendering because of its large size (the files with the simulation results of only one frame can reach several TB). That is why the original data files should be converted to a suitable data format. Here one should mention the approach related to the application of Big Data analysis tools, for instance, such frameworks as Apache Hadoop and Hive [1]. In this case, the whole original dataset is distributed between the nodes of a cluster and is read in parallel; the reduce phase follows the reading, and at this stage the required small dataset is formed. It is worth noticing that parallelism significantly reduces the time of processing the whole dataset. One can think of another approach, when data is organized in a special manner so that only a rather small portion of the original dataset has to be read, with the size of that portion comparable to the size of the dataset for rendering. This approach requires the development of both a special format for data storage and an appropriate data structure. In this case, it is usual to generate volumetric data defined in the nodes of a uniform 3D rectangular grid. There are data structures developed to handle such data, for example VDB, described in [2] and implemented in the NVIDIA GVDB Voxels rendering framework. In the next section, the data storage format developed by the authors and the procedure of data conversion to it are described. The format is intended to hold hierarchical sparse volumetric data obtained by resampling scalar fields from the original unstructured mesh of the model to the nodes of a uniform 3D grid. Concerning the second task, direct dataset rendering, one can notice that parallel volume rendering is actively used to render sparse volumetric data, which is illustrated in the works [3–5].

2 Materials and Methods

2.1 Requirements to Format

As the problem of visualization of simulation results on extra-large meshes is considered, the requirements to the specialized data structure come to the foreground. First, it is a block-wise data structure that provides direct access to the requested block. The blocks are indexed by some key, for example, by their spatial coordinates. In the ideal case, each access to a block should be processed in a fixed number of seek operations. Second, visualization of the scene requires getting only a small number of blocks, so data should be effectively filtered during the query without the necessity of directly reading all of it. In other words, each query to the storage should return a limited amount of data in fixed time without the need to read the whole storage; in the ideal case, only the requested blocks are read. In general, the original results of simulation are initially assigned to an unstructured spatial mesh. But since visualization frameworks, such as NVIDIA IndeX, work most effectively with uniform hexahedral grids, one has to perform resampling. In addition, the usage of volumetric data assigned to a uniform grid allows building a hierarchical


format to retrieve data with different levels of detail. Since the results of simulating an unsteady problem are considered, a sequence of time frames is presented, and each frame is generally described by its own mesh and nodal values of the fields. Hereinafter, it is supposed that the frames differ by nodal values only and the mesh is the same for all frames. This assumption corresponds to the spatial description commonly accepted in problems of fluid mechanics. The case of geometry transforming from frame to frame is also possible, but it is not considered in this work. The format offered here is intended to hold nodal values of scalar fields on a hierarchical sequence of uniform hexahedral grids. It is intended to fit the following requirements:

1. No necessity to read the whole content of the file when a spatially bounded block of data is requested.
2. Obtaining a requested spatial block of data should be performed using as few seek operations as possible.

2.2 Format Description

It is supposed that the volumetric data volume is a parallelepiped and the values of the fields are stored as a sequence of hexahedral grids, where the grid with level number k + 1 is obtained from the grid of level k by dichotomy of cells in the three spatial directions. The grid of level 0 consists of one cell coinciding with the model box, the grid of level 1 consists of 8 cells (2 × 2 × 2), and so on; respectively, the grid of level k consists of 2^k × 2^k × 2^k cells. There is no need to store the coordinates of the nodes: it is enough to know the model box sizes, the grid level number and the node multiindex to compute them. Therefore, only field values are stored in the nodes. The data structure at its core resembles a MIP pyramid of 3D textures, where each level of the hierarchy corresponds to its own level of detail and each next level is obtained from the previous one by dichotomy. The grid of each level except the 0-th one is not obliged to cover the whole model box. It can contain cavities obtained by exclusion of cells, and the corresponding nodes do not have field values assigned. This situation is shown in Fig. 1, where two-dimensional depictions of the grids of levels 1, 2 and 3 are presented. The cells that are really present in the grid are filled with grey color. The grid of level 1 covers the whole model box, while grid 2, obtained from it by half division, contains a "hole". Further, the grid of level 3 is defined only on a subset of the cells of the grid of level 2. Such division reflects the irregularity of the original unstructured mesh of the model. The sequence of levels of uniform grids should be generated down to the depth where the cell size is approximately equal to the sizes of the tetrahedrons of the original mesh at the corresponding point of the domain. But the original mesh can contain both concentrations, which require a big depth of hierarchy, and domains with a relatively rough mesh, where the level depth can be decreased. Therefore, the cells of level k + 1 are generated from the cells of level k only if the underlying tetrahedrons have sizes approximately equal to the size of a cell of level k + 1. In this sense, the described approach is similar to Adaptive Mesh Refinement (AMR) methods [6, 7].


Fig. 1. Volumetric data grids of levels 1, 2 and 3. Only the filled cells are actually present in the grid.

These principles of multilevel grid construction relate, in the three-dimensional case, to the building of a so-called octree structure [8]. The formation of such a structure is the main purpose of the resampling.

Fig. 2. Grid divided into 4 blocks. Only the filled cells are actually present in the grid.

The following format is proposed for storing the data structure described above. Let the problem consist of M time frames, let each node contain values of N scalar fields, and let the data structure consist of K levels of uniform grids with numbers from 0 to K − 1. In this case, nodal values are stored in M × N data files of the same structure: one file contains the nodal values of one scalar field of one frame on the grids of all levels. As mentioned above, the grid of level k consists of 2^k × 2^k × 2^k cells. In order to store the data of the grid in the file, the grid is split into parallelepipedal blocks containing a given number of nodes. The choice of the number of nodes in a block is a separate problem, which is solved at the stage of file generation. Data is serialized into the file block-wise, and the start position of each block is stored in the metadata (described below), which provides direct access to each block during reading. It is worth noticing that the structure of all data files is the same; this means that the same block resides at the same position in all data files. In general, a block with index sizes [n1; n2; n3] contains fewer nodes than n1 × n2 × n3 because of the grid sparsity pointed out above. This is illustrated in Fig. 2 for the two-dimensional case. In the figure, one can see a grid of level 3 with nodes from 0 to 8 in each direction. The grid is split into 4 blocks of size 4 × 4. The grid is sparse: it contains only the filled cells. One can see that only the block [1; 1] is


dense and contains the maximal number of nodes, while the blocks [0; 0] and [1; 0] consist of rectangles of smaller sizes. The block [0; 1] does not contain data at all, and it is not written to the file. The output data file contains the nodal values of three blocks, and their start positions are stored in the metadata in order to provide direct access. The data of a block in the data file has a rather simple structure: it is an array of parallelepipeds of nodal values. One more file is generated for storing metadata. It contains the data structure shown in Fig. 3: an array of levels consisting of K elements, where each element corresponds to one level of the grid hierarchy and contains an array of structures with information about blocks. Each structure with information about a block contains the following fields:

1. idx: multiindex of the block's start;
2. size: index sizes of the block;
3. pos: start position of the block in the data file.

Thus, the metadata helps to find the position of the block with given spatial coordinates (idx and size are evaluated from these spatial coordinates) in the grid of the given level in fixed time. After that, the data of the block can be read from the data file at position pos with only one seek operation involved. An important advantage of the suggested data storage scheme, along with its effectiveness, is the simplicity of its program implementation.

Fig. 3. Fundamental structure of metadata.
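As a sketch, the metadata structure of Fig. 3 and the block lookup it enables might look as follows in Python; the names and the linear search over blocks are illustrative assumptions (a real implementation could index the blocks spatially).

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class BlockInfo:
    idx: Tuple[int, int, int]    # multiindex of the block's start node
    size: Tuple[int, int, int]   # index sizes [n1, n2, n3] of the block
    pos: int                     # start offset of the block in the data file

@dataclass
class LevelInfo:
    blocks: List[BlockInfo]      # blocks actually present at this level

def find_block(levels: List[LevelInfo], k: int,
               node: Tuple[int, int, int]) -> Optional[BlockInfo]:
    """Locate the block of level k containing the node with multiindex `node`."""
    for b in levels[k].blocks:
        if all(b.idx[d] <= node[d] < b.idx[d] + b.size[d] for d in range(3)):
            return b             # data is then read at b.pos with a single seek
    return None                  # the node falls into a cavity of the sparse grid
```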

2.3 Procedure of Resampling Original Data Files

The most complex problem is the effective generation of the described hierarchical structure of sparse uniform grids from the given unstructured mesh of the original model. The initial mesh is supposed to be arbitrary: it can consist of either hexahedral or tetrahedral finite elements. As mentioned above, the formation of the octree is considered in the three-dimensional case. An octree block is a prototype of the data block that must be stored in the files of the developed format. Each block has either 8 child blocks or none. Hereinafter, the following terms are introduced for convenience:

• blocks that are obtained by division of one block into 8 ones are called children, and the divided block is called the parent;
• blocks that include no children are called leaves;
• the block related to the 0-th level is called the root.


The corners of a block are indexed in the manner shown in Fig. 4. The following steps are proposed to provide resampling to the developed data format:

1. Reading of the model.
2. Conversion of the initial mesh into a tetrahedral one. This is done in order to unify the mesh processed at the following steps.
3. Initial handling of the tetrahedral elements, during which the hierarchical structure is formed. Nodal values of the fields at the leaves are defined after its completion. Octree blocks lying on the bounds between tetrahedrons of different sizes may still include nodes with unspecified fields after this processing.
4. Second processing of the tetrahedrons, which is supposed to complete the definition of the leaf fields.
5. Averaging of the fields of the parental blocks through the fields of the children blocks. An important point is that after averaging two adjacent blocks, the fields in the shared nodes are generally determined by different values. Therefore, it is necessary to perform so-called node averaging.
6. Recording the data blocks into files of the developed format.

Fig. 4. Indexation of the corners of the octree block.

The stages dedicated to reading the initial model, transforming its mesh into a mesh of tetrahedrons and recording files into the created format are not considered in the current paper; only the procedure of resampling is considered. The steps of resampling are described in the following sections, as well as the techniques used.

2.4 Auxiliary Techniques

Field Interpolation. Most of the nodes of the leaves are supposed to be inside tetrahedrons. The field values at them are found by interpolation over the tetrahedral finite elements. Two ways of interpolation are used.


Consider the tetrahedron A1A2A3A4 and a point O located inside it (see Fig. 5). The problem is the interpolation of the field f_O at the point O. Let the Cartesian coordinates of O be (x_O, y_O, z_O). According to [9], it is necessary to solve a system of linear equations to calculate the coefficients of the linear interpolation function:

$$f_O = a_0 + a_1 x + a_2 y + a_3 z,$$

$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 1 & x_1 & y_1 & z_1 \\ 1 & x_2 & y_2 & z_2 \\ 1 & x_3 & y_3 & z_3 \\ 1 & x_4 & y_4 & z_4 \end{bmatrix}^{-1} \begin{bmatrix} f_1 \\ f_2 \\ f_3 \\ f_4 \end{bmatrix},$$

where f_p is the field value at the vertex A_p of the tetrahedron, which has coordinates x_p, y_p and z_p. This way is convenient in the case when the field must be calculated at many points of the tetrahedron. Another approach is based on the four coordinates L1, L2, L3 and L4 [10]. The value L_p is evaluated as the ratio of the volume of the tetrahedron constructed through the point O and the face located opposite to the vertex A_p to the volume of the whole tetrahedron. For instance, L2 is evaluated in the following way:

$$L_2 = \frac{V_{OA_1A_3A_4}}{V_{A_1A_2A_3A_4}},$$

and the interpolated value is then f_O = Σ_p L_p f_p.

This approach is suitable for situations when the field needs to be interpolated at only one point. It is efficient due to the absence of the matrix inversion used in the first method.
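A minimal Python sketch of the second, volume-ratio way of interpolation is shown below. Signed volumes are used, so a consistent orientation of the tetrahedron vertices is assumed; the function names are illustrative.

```python
import numpy as np

def tet_volume(a, b, c, d):
    """Signed volume of the tetrahedron abcd."""
    return np.dot(np.cross(b - a, c - a), d - a) / 6.0

def interpolate_in_tet(verts, f, point):
    """Interpolate field values f given at the 4 vertices to an interior point,
    using the volume-ratio coordinates L_1..L_4."""
    v_total = tet_volume(*verts)
    # L_p: replace vertex p by the query point and take the volume ratio
    L = np.array([
        tet_volume(*[point if i == p else verts[i] for i in range(4)])
        for p in range(4)
    ]) / v_total
    return float(np.dot(L, f))
```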

Fig. 5. The considered tetrahedral element and a point lying inside it.

Determination of the Status of Point Penetration into a Tetrahedron. During the conversion it is often necessary to know whether a given node is located inside a definite tetrahedron. Therefore, a procedure providing such a status is developed. Consider the tetrahedron A1A2A3A4 (see Fig. 5). The determination of the status of point penetration into the tetrahedron is related to the normals, which are orthogonal to


the corresponding faces and denoted as n_p (p = 1, 2, 3, 4). It is accepted that all the normals point inside the tetrahedron domain. The normal n_p corresponds to the face located opposite to the vertex A_p. Because of that, it is important to observe the rule of tetrahedron vertex traversal at the stage of converting the original mesh to the mesh of tetrahedral elements. A point r is supposed to be placed inside the tetrahedron if it meets the following requirement:

$$(\vec{r} - \vec{r}_p) \cdot \vec{n}_p \ge 0, \quad p = 1, 2, 3, 4, \qquad (1)$$

where r_p is the radius vector of an arbitrary vertex forming the face with the normal n_p. If the point with radius vector r satisfies inequality (1) for each p from 1 to 4, the point is located inside the tetrahedron domain. This inequality can be reduced to the following form:

$$\vec{r} \cdot \vec{n}_p \ge b_p, \quad b_p = \vec{r}_p \cdot \vec{n}_p, \quad p = 1, 2, 3, 4. \qquad (2)$$
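The test (2) can be sketched in Python as follows. The inward orientation of the normals is enforced explicitly here by checking against the opposite vertex, and the tolerance eps is an illustrative assumption.

```python
import numpy as np

def inward_normals(verts):
    """Normals n_p of the faces opposite each vertex A_p, oriented inward."""
    normals = []
    for p in range(4):
        face = [verts[i] for i in range(4) if i != p]
        n = np.cross(face[1] - face[0], face[2] - face[0])
        if np.dot(n, verts[p] - face[0]) < 0:   # flip to point into the domain
            n = -n
        normals.append(n)
    return normals

def point_in_tet(verts, r, eps=1e-12):
    """Test (2): r . n_p >= b_p for all four faces."""
    for p, n in enumerate(inward_normals(verts)):
        r_p = [verts[i] for i in range(4) if i != p][0]  # any vertex of the face
        if np.dot(r, n) < np.dot(r_p, n) - eps:          # b_p = r_p . n_p
            return False
    return True
```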

The Procedure of Searching for Octree Block Corners of a Given Level Inside a Tetrahedron. The search for block corners inside a tetrahedron is essential within the octree structure formation procedure; it forms the basis of the proposed algorithm.

Fig. 6. Algorithm of evaluating the required distance between the given point and the point obtained by intersection of the ray cast along the x axis with a tetrahedron face.


Let there be a point K. Consider the ray cast from K along the x axis. It is required to find the point C at the intersection of the ray with a tetrahedron face. The position of C is evaluated as follows:

$$\vec{r}_C = \vec{r}_K + \alpha \vec{i}, \qquad (3)$$

where i is the unit vector of the x axis and α is the distance between C and K. Substituting (3) into (2) gives

$$(\vec{r}_K + \alpha \vec{i}) \cdot \vec{n}_p \ge b_p.$$

It follows that α must satisfy the requirement

$$\alpha \ge g(p) = \frac{b_p - \vec{r}_K \cdot \vec{n}_p}{\vec{i} \cdot \vec{n}_p}$$

for each p from 1 to 4. The algorithm of this block corner search procedure is presented in Fig. 6. The procedure works well except in particular cases: when the described ray intersects none of the tetrahedron's faces, the procedure assigns 0 to α, which is incorrect (see Fig. 7). Therefore, after the procedure has been performed, its result must be verified with the point-in-tetrahedron test described above.

Fig. 7. Illustration of the incorrect work of the algorithm intended to return the distance between the given point and the point obtained by intersection of the ray with a tetrahedron face.

2.5 Octree Structure Formation

The outcomes of the step that follows the transformation of the original mesh into a tetrahedral one are the formation of the octree structure and the partial definition of the nodal values at the octree leaves. All tetrahedrons are processed during this part of the workflow. Considering a tetrahedron assumes handling the cloud of points located inside it. The algorithms for searching for these points and for inserting the blocks adjacent to them are described below.


Fig. 8. Example of defining indices t, p, q of the block corner inside a bounding box.

Consider the algorithm of processing a single tetrahedron. Manipulations begin with determining the mean size m of the tetrahedron. It is defined by a prescribed criterion depending on the geometry of the tetrahedral element; for example, the size may be specified as the diameter of the inscribed sphere. The vector directed from one corner diagonally across the whole block unambiguously defines the block size e. The level of the block, and hence its size, is determined according to the following inequality:

$$\max(e_1, e_2, e_3) \le m.$$

The bounding box of the tetrahedron is known. It includes the grid of possible blocks whose corners may be internal to the tetrahedron. The vertex of index 0 of the octree block with the lowest components of the multiindex is marked as K (see Fig. 8). This point lies on the surface A, which is perpendicular to the x axis. Consider the set of points

$$\vec{r}_{t,p,q} = \vec{r}_K + e_1 t \vec{i} + e_2 p \vec{j} + e_3 q \vec{k},$$

where i, j and k are the unit vectors of the Cartesian coordinate system. The values t, p and q must be such that the point r_{t,p,q} lies inside the bounding box; therefore, t, p and q are greater than or equal to 0. To obtain a block corner possibly lying inside the tetrahedron, the algorithm described in the previous sub-subsection is used. The point r_{0,p,q} is passed as initial data, and the index t is obtained during the processing. The point r_{t,p,q} is handled if it is located inside the tetrahedron; otherwise, another pair of indices p and q is processed. The flowchart of the proposed algorithm of processing a single pair (p, q) is shown in Fig. 9. Consider the insertion of the blocks adjacent to the point r_{t,p,q} into the hierarchical structure of the octree. It is implied by default that r_{t,p,q} is located inside the tetrahedron. The procedure of insertion entirely consists of a loop over the possible blocks of the given level. First, it is


Fig. 9. The flowchart of processing a single pair of indices p and q.

attempted to insert the block into the octree. It may happen that the potential block is located beyond the bounding box of the whole octree; in this case, the workflow proceeds with another adjacent block. After the insertion, if the block has children, the nodal values of the fields at all children are considered, and for the nodes inside the tetrahedron the field values are interpolated. If the block does not have children, the nodal values of the fields at the point r_{t,p,q} are calculated. The workflow of processing the point r_{t,p,q} is presented in Fig. 10.

Fig. 10. The flowchart of the procedure of inserting the blocks adjacent to the point r_{t,p,q} into the octree structure.


The described algorithms build the required hierarchical structure.

2.6 Completion of Defining Fields at Octree Leaves

After the building of the octree is completed, part of the nodes remains unfilled with fields. Moreover, some of these nodes lie inside certain tetrahedrons. Therefore, the tetrahedral elements must be iterated over one more time. A single tetrahedron is processed in two steps. First, a search for leaves is performed. The search is based on the statement that every block located above the leaf block in the hierarchical structure must overlap the tetrahedron's bounding box. Handling the leaves includes iteration over all their nodes. If a node lies inside the tetrahedron and contains unspecified fields, it is added to an accumulation array, which is the outcome of the first step. At the second stage, the nodes from the array get field values by interpolation over the tetrahedron.

2.7 General Approach of Fields Evaluation in Non-leaf Blocks

After the nodal values at the octree leaves are defined, the parents must be filled with fields. This is done by two basic procedures: so-called direct averaging and node averaging. Direct averaging is intended to calculate the fields at a parent block through the fields of its children. After the execution of such a procedure over adjacent blocks, the field values at the common nodes located on the corners may be specified ambiguously. In order to make the fields continuous, the procedure of node averaging is involved. In this work, direct averaging of a block and block averaging are supposed to be different terms: direct averaging is the procedure used during the block averaging.

Fig. 11. Algorithm of averaging the octree structure.


Consider the algorithm of block averaging. It is important that leaves are not averaged; therefore, octree blocks are first examined for having children, and if a block has no children, the averaging of the considered block is not executed. If a child block has already been averaged by that moment, the averaging of the fields at the nodes is called; otherwise, a recursive call over the child's children is made first. After all children have been handled, direct averaging of the considered parent block is invoked. The workflow of the algorithm of block averaging is presented in Fig. 11.

Direct Averaging. Hereinafter, the case when an octree block contains 8 nodes coinciding with its corners (see Fig. 4) is considered; however, the approach can be extended to the general case of arbitrary index sizes of a block. Every parent contains 27 nodes of children (3 nodes in each direction). First, consider the case when the fields at each node are specified. A local coordinate system with the origin coinciding with corner 0 of the parental block is introduced. Then the nodes have the following coordinates:

$$x_p = p \frac{l_1}{2}, \quad y_q = q \frac{l_2}{2}, \quad z_r = r \frac{l_3}{2}.$$

Here p, q and r are indexed from 0 to 2; l_1, l_2, l_3 are the dimensions of the parental block in the corresponding directions. Let there be a linear interpolation polynomial P(x, y, z). It can be defined with the usage of the shape functions N_i(x, y, z), as in FEM [10]. The index i corresponds to the index of the node to which the shape function relates: at the corners of the block, N_i equals 1 if i is the same as the index of the corner, and it equals 0 at the other corners. According to this approach, the interpolation polynomial is presented in the following way:

$$P(x, y, z) = \sum_{i=0}^{7} u_i N_i(x, y, z). \qquad (4)$$

Here u_i is the value of a single field at the node i of the parent. The approach is similar to the least squares method. The functional L is introduced in the following form:

$$L = \sum_{p,q,r} \left[ P(x_p, y_q, z_r) - \tilde{u}_{p,q,r} \right]^2 \to \min. \qquad (5)$$

The value ũ_{p,q,r} is the field value corresponding to the node of a child block with the coordinates x_p, y_q, z_r.


After substituting (4) into (5), the expression (6) is obtained:

$$L(u_0, \dots, u_7) = \sum_{p,q,r} \left[ \sum_{i=0}^{7} u_i N_i(x_p, y_q, z_r) - \tilde{u}_{p,q,r} \right]^2 \to \min. \qquad (6)$$

Eight values u_i must be calculated. The requirement of minimizing L yields the equality to zero of each partial derivative of L over u_i:

$$\frac{\partial L}{\partial u_i} = 2 \sum_{p,q,r} \left[ \left( \sum_{j=0}^{7} u_j N_j(x_p, y_q, z_r) - \tilde{u}_{p,q,r} \right) N_i(x_p, y_q, z_r) \right] = 0, \quad i = 0, 1, \dots, 7.$$

This yields a system of linear algebraic equations

$$A U = B, \qquad (7)$$

whose elements are calculated as follows:

$$A_{ij} = \sum_{p,q,r} N_i(x_p, y_q, z_r) N_j(x_p, y_q, z_r), \quad B_i = \sum_{p,q,r} \tilde{u}_{p,q,r} N_i(x_p, y_q, z_r), \quad U = (u_0, \dots, u_7)^T.$$

In this particular case, when all the field values at the nodes of the children are defined, the required U can be evaluated in the following way:

$$U = M V. \qquad (8)$$

Here V is the vector of the fields defined at the nodes of the children blocks; it has dimensions 27 × 1, and therefore the dimensions of M are 8 × 27. The elements of the matrix M can be calculated beforehand and used repeatedly.

Fig. 12. Illustration of the case when the fields at the parent block cannot be directly averaged because the matrix A′ is degenerate. The highlighted nodes contain determined field values.


Consider the case when some nodes of the children may have uninitialized fields, for example, when such nodes are located beyond all existing tetrahedrons. The nodes containing undefined fields form the set S. The system of equations is transformed into the following form:

$$A' U = B'. \qquad (9)$$

The triples of indices (p, q, r) that correspond to such nodes compose the set ID:

$$ID = \{ (p, q, r) \mid (x_p, y_q, z_r) \in S \}.$$

The elements of the matrix A′ and the vector B′ are calculated in the following manner:

$$A'_{ij} = A_{ij} - \sum_{(p,q,r) \in ID} N_i(x_p, y_q, z_r) N_j(x_p, y_q, z_r), \quad B'_i = \sum_{p,q,r} \tilde{u}'_{p,q,r} N_i(x_p, y_q, z_r),$$

$$\tilde{u}'_{p,q,r} = \begin{cases} \tilde{u}_{p,q,r}, & \text{if the field is defined}, \\ 0, & \text{otherwise}. \end{cases}$$

Therefore, the required vector U is obtained as follows:

$$U = (A')^{-1} B'.$$

It may happen that the fields of the parent block cannot be averaged; this is possible when the matrix A′ is degenerate. An example of such a situation is shown in Fig. 12, where only the nodal points lying on the three faces coinciding with the bounds of the parent block contribute to the matrix A′. Thus, in order to find the fields of a parent block, one must carry out the procedure of direct averaging through the children blocks located inside the parent. Depending on whether there are nodes with undefined fields, one of two ways takes place. If all nodal values are specified, the matrix-vector multiplication (8) is performed; the matrix can be calculated in advance, while the vector is composed of the field values of the children. Otherwise, the procedure is based on the solution of the system of linear equations (9), where the square matrix A′ must be invertible; if it is not, the field values of the parent block cannot be calculated.
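A compact Python sketch of this direct averaging is given below. It assumes the standard trilinear shape functions and a binary corner numbering consistent with Fig. 4; the node ordering inside the shape-function vector and the degeneracy threshold are assumptions of this sketch.

```python
import numpy as np

def shape_functions(x, y, z, l1, l2, l3):
    """Trilinear shape functions N_0..N_7 of the parent block at a point (x, y, z).
    Binary corner numbering is assumed: bit 0 -> x, bit 1 -> y, bit 2 -> z."""
    u, v, w = x / l1, y / l2, z / l3   # normalized coordinates in [0, 1]
    return np.array([(1-u)*(1-v)*(1-w), u*(1-v)*(1-w), (1-u)*v*(1-w), u*v*(1-w),
                     (1-u)*(1-v)*w,     u*(1-v)*w,     (1-u)*v*w,     u*v*w])

def direct_average(child_values, defined, l1, l2, l3):
    """Solve A'U = B' over the 27 child nodes; child_values and defined are
    3x3x3 arrays of nodal values and of 'field is defined' flags."""
    A = np.zeros((8, 8))
    B = np.zeros(8)
    for p in range(3):
        for q in range(3):
            for r in range(3):
                if not defined[p, q, r]:
                    continue               # undefined nodes are simply excluded
                N = shape_functions(p * l1 / 2, q * l2 / 2, r * l3 / 2, l1, l2, l3)
                A += np.outer(N, N)        # A'_ij = sum N_i N_j over defined nodes
                B += child_values[p, q, r] * N
    if abs(np.linalg.det(A)) < 1e-12:
        return None                        # degenerate case: cannot average
    return np.linalg.solve(A, B)           # parent corner values u_0..u_7
```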


Fig. 13. General algorithm of calculating the fields at the nodes of a parent through the field values of the children.

A general algorithm of the direct averaging is presented in Fig. 13.

Node Averaging. After the direct averaging of adjacent blocks of the same level, the fields of these blocks at the common nodes may have different values. Therefore, a procedure of averaging nodal fields is proposed.

Fig. 14. The algorithm of the node averaging.

The procedure handles the nodes that have not been averaged yet. A loop over the adjacent blocks is invoked; if a block has not been averaged before, it is averaged. During the loop, it is also checked whether any block has an initialized field at the corresponding node. If there are no such blocks, the nodal values stay undefined. Finally, the nodal values are averaged. For instance, the field f, defined at n adjacent blocks at the corresponding point with the values f_i (i = 1, 2, …, n), is calculated as follows:

$$f = \frac{1}{n} \sum_{i=1}^{n} f_i.$$

The algorithm of node averaging is presented in Fig. 14.


3 Results

Because the data storage and the renderer can either be installed on a single cluster or be placed remotely and connected by a high-speed network, the most natural interaction is a client-server architecture. The server is the computer where the data in the described format is directly placed. The server solves the problem of reading the metadata and the requested blocks and transmits them across the network. The client forms the queries for retrieving metadata and blocks; in the second case, it passes the position of the block pos as an identifier. The renderer here is understood in a broad sense, as a program that performs not only rasterization but also preprocessing and filtering of geometry data. For example, the cluster version of Kitware ParaView can play the role of the renderer. This software complex provides rendering in distributed mode, playing the role of a render server for the client instance of ParaView installed on the destination computer of the user. Developers have the ability to extend ParaView's functionality with plugins, so one can develop a plugin in order to organize a client that interacts with the data server; as far as ParaView is concerned, such a plugin can be, for example, a regular reader. The interaction with the data server consists of sending queries of two types: for retrieving metadata and for retrieving blocks. The client needs to receive the metadata at least once; it is unable to query for blocks without it, because the blocks are identified by their position pos in the data file. When making a query for block retrieval, the client sends the position of the block pos and an index bounding box that additionally narrows the range of admissible indices. The server answers with data arrays only for the nodes that lie in the intersection of the given index bounding box and the block boundaries. Such an approach helps to minimize, first, the number of seek operations in the data file (in the ideal case, only one seek to the given pos) and, second, the amount of data transferred across the network as an answer to the query.
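A schematic client-side request for one block might look as follows; the wire format shown here (length-prefixed JSON) is purely an assumption for illustration, as the paper does not fix a concrete serialization.

```python
import json
import socket
import struct

def request_block(sock: socket.socket, pos: int, ibox) -> bytes:
    """Ask the data server for one block, identified by its file position pos,
    restricted to the index bounding box ibox (wire format is illustrative)."""
    payload = json.dumps({"type": "get_block", "pos": pos, "ibox": ibox}).encode()
    sock.sendall(struct.pack("!I", len(payload)) + payload)
    size, = struct.unpack("!I", sock.recv(4))
    data = b""
    while len(data) < size:                  # receive the full answer
        data += sock.recv(size - len(data))
    return data  # nodal values for nodes in (ibox intersected with block bounds)
```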

4 Discussion

Effective visualization of the simulation results of problems of extra-large dimension is probably impossible without parallel rendering. The proposed data format is suitable for use in parallel rendering. A possible variant of the rendering algorithm can be classified as sort-first rendering according to the classification of the work [11]. In this approach, the viewport is divided into rectangular cells, the rasterization of each cell is performed independently in parallel, and the geometry data is sorted at the stage of input geometry preprocessing. Accordingly, the full frustum is divided into frustums for the individual threads of execution; therefore, further on, when it comes to the frustum, it is implied that the frustum corresponds to one thread of execution. Data is received from the server in the form of the requests described in the previous section. To form a query, one needs to know the level of detail of the grid and the bounding box of the requested data. The algorithm of choosing the level of detail of the grid to be used in rendering is similar to the procedure of MIP-mapping and is illustrated in Fig. 15. The resolution in pixels of the front side of the frustum is known; therefore, the size of the pixel in physical units of the model is also


Fig. 15. Split of the frustum into the domains where blocks of grids of a specific level should be loaded: level k, k − 1, etc.

known; let it be equal to d. This value unambiguously determines the level of the grid that has cells of size d; let this level have index k. One should select a parallelepiped around the frustum that starts at the frustum's front side and has its back side at the depth where the pixel size in the frustum equals 2d. Inside this parallelepiped, one should consider blocks of the grid of level k, in order to have the characteristic cell size not bigger than the physical size of a pixel. In the rest of the frustum, one selects a parallelepiped adjacent to the previous one, with front and back sides lying at the depths where the pixel size equals 2d and 4d, respectively. The cell size in this parallelepiped is twice as big as in the previous one, and one should consider blocks of the grid of level k − 1 inside it. The process of creating parallelepipeds is continued until the back side of the frustum, so that the frustum is covered by a sequence of parallelepipeds with coarser and coarser grids inside. The parallelepipeds defined in this way determine the domains in 3D space from which the grids of the specified levels should be extracted. However, it is necessary to take into account the fact that the grids are oriented relative to the parallelepipeds in an arbitrary way. Therefore, grid blocks are requested in accordance with the rule shown in Fig. 16. It shows a parallelepiped bounding the domain of the grid of level k; around it, a bounding box with sides parallel to the sides of the grid is displayed. Namely, this is the bounding box that can be mapped into an index bounding box, which is used in the requests for receiving blocks from the data server. And it is this bounding box, indicated in Fig. 16 as (k), that is the domain from which the blocks of grid level k are extracted. An analogous bounding box, with sides parallel to the sides of the grid, is built around the next box; it is indicated in the figure as (k − 1).
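The choice of the level of detail and the construction of the depth slabs can be sketched as follows. The assumption that the pixel footprint grows linearly with depth (so that it doubles when the depth doubles) holds for a perspective frustum; the function names are illustrative.

```python
import math

def grid_level_for_pixel(model_box_size, d, k_max):
    """Level k whose cell size (model_box_size / 2**k) best matches pixel size d."""
    k = int(math.floor(math.log2(model_box_size / d)))
    return max(0, min(k, k_max))

def frustum_slabs(depth_near, depth_far, k_front):
    """Depth slabs [near, far) with the grid level to request in each of them.
    Pixel size is assumed to double when the depth doubles (perspective case)."""
    slabs, near, k = [], depth_near, k_front
    while near < depth_far and k >= 0:
        far = min(depth_far, 2.0 * near)   # depth where the pixel size reaches 2d
        slabs.append((near, far, k))       # request blocks of level k here
        near, k = far, k - 1
    return slabs
```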


Fig. 16. Coverage of the frustum with blocks of uniform grids. k and k − 1 are domains where blocks of grid levels k and k − 1 respectively should be extracted.

The bounding boxes (k) and (k − 1) intersect; therefore, the domain for which the blocks of level k − 1 are extracted is the difference of the bounding boxes, (k − 1) \ (k). It is important that this domain is formed by a finite number of parallelepipeds with sides parallel to the sides of (k) and (k − 1). The next bounding boxes (k − 2), (k − 3) and so on are defined in a similar way until the entire volume of the frustum is covered. Thus, the entire volume of the frustum is covered with parallelepipeds, which are the bounding boxes for the grid sections of the appropriate level of detail. It is important to note that although the linear sizes of the next bounding box (k − 1) are almost twice those of the previous one (k), the grids they contain consist of approximately the same number of cells, since the cell sizes at the levels k and k − 1 differ by a factor of two. Additional optimization of network traffic can be achieved by more accurate accounting of block boundaries. Figure 16 also shows the splitting of the bounding box into blocks. In this case, only the filled blocks are further used in rendering, because they touch the frustum. Additional block filtering can be performed on the client side before sending a request to the server in order to reduce the amount of data transferred. In this case, instead of sending a single request to the server for the blocks within the entire bounding box (k), one can send several requests with smaller bounding boxes, excluding from them the blocks that certainly do not fall into the frustum. The corresponding client-side calculations based on bounding boxes do not take much time but reduce the total amount of data transmitted over the network. Due to the sparsity of the grids, especially at large level numbers, the blocks received from the server may not provide full coverage of the frustum with the grids of the corresponding level. In this case, the client side performs the procedure of calculating the values at the corresponding nodes. This calculation consists of two steps: first, obtaining blocks of a coarser grid covering the required domain, and second, direct recalculation of the nodal values to the grid of the next level.


5 Conclusions

The paper describes a data structure and format for storing hierarchical sparse volumetric data for displaying scalar fields on extra-large meshes. The data structure resembles a MIP pyramid, i.e. it contains a sequence of spatially uniform grids with volumetric data of various levels of detail, split into blocks. Due to this, the amount of data needed to render the scene remains limited at any requested level of detail. In addition, the format allows direct access to the requested block and minimizes the number of seek operations. The sparsity of the data is due to the irregularity of the original unstructured grid of the model from which the volumetric data was obtained. The grid of each level of detail stores nodal values only in those domains where the original mesh has sufficient concentration. This protects the format from exponential data growth as the depth of the levels of detail increases. The proposed process of converting the initial mesh to the hierarchical structure consists of several steps. First, the reading of the initial data and its conversion into a unified mesh of tetrahedrons are performed. Then the results on such a mesh are transformed into the octree structure in three substeps: on the first one, the hierarchical structure is formed with a partial definition of the fields at the blocks of the lowest levels of the hierarchy; the next substep completes the definition of the fields at the blocks of the lowest levels; then the fields at the blocks of higher levels are specified through the averaging procedure. Finally, the metadata file and the data in the developed format are recorded. The interaction between the data storage and the renderer has a client-server architecture, which allows distributing the data storage and the renderer over separate clusters. The queries to the data server are of two types: requests for metadata and requests for data blocks. The metadata should be received at least once and is used to form the requests for data blocks. The described data structure can be used in parallel rendering in the sort-first mode, when each working thread renders its own screen cell. A possible disadvantage of such an approach is common to the sort-first rendering methods and consists of the need to duplicate the requests for part of the data in different threads of execution.

Acknowledgments. The authors thank the Russian Science Foundation for support under grant No. 18-11-00245.

References

1. Artigues, A., Cucchietti, F.M., Montes, C.T., Vicente, D., Calmet, H., Marin, G., Houzeaux, G., Vazquez, M.: Scientific big data visualization: a coupled tools approach. Supercomput. Front. Innov. 1(3), 4–18 (2014)
2. Museth, K.: VDB: high-resolution sparse volumes with dynamic topology. ACM Trans. Graph. 32(3), 1–22 (2013)
3. Hassan, A.H., Fluke, C.J., Barnes, D.G., Kilborn, V.A.: Tera-scale astronomical data analysis and visualization. Mon. Not. R. Astron. Soc. 434(3), 2442–2455 (2013)


4. Fogal, T., Childs, H., Shankar, S., Krüger, J., Bergeron, R.D., Hatcher, P.: Large data visualization on distributed memory multi-GPU clusters. In: Proceedings of the Conference on High Performance Graphics, HPG 2010, Eurographics Association, Aire-la-Ville, Switzerland, pp. 57–66 (2010)
5. Biedert, T., Werner, K., Hentschel, B., Garth, C.: A task-based parallel rendering component for large-scale visualization applications. In: Proceedings of the 17th Eurographics Symposium on Parallel Graphics and Visualization, pp. 63–71 (2017)
6. Weber, G.H., Beckner, V.E., Childs, H., Ligocki, T.J., Miller, M., van Straalen, B., Bethel, E.W.: Visualization tools for adaptive mesh refinement data. In: Proceedings of the 4th High End Visualization Workshop, pp. 12–25 (2007)
7. Kaehler, R., Abel, T.: Single-pass GPU-raycasting for structured adaptive mesh refinement data. In: Proceedings of the SPIE International Society for Optics and Engineering, vol. 8654 (2013)
8. Luebke, D., Reddy, M., Cohen, J.D., Varshney, A., Watson, B., Huebner, R.: Level of Detail for 3D Graphics. Morgan Kaufmann, Amsterdam (2003)
9. Nayak, S., Chakraverty, S.: Interval Finite Element Method with MATLAB, 1st edn. Academic Press (2018)
10. Rao, S.S.: The Finite Element Method in Engineering, 5th edn. Butterworth-Heinemann (2011)
11. Molnar, S., Cox, M., Ellsworth, D., Fuchs, H.: A sorting classification of parallel rendering. IEEE Comput. Graph. Appl. 14(4), 23–32 (1994)

Digitalization in Logistics for Organizing an Automated Delivery Zone. Russian Post Case

Temirgaliev Egor1, Dubolazov Victor1, Borremans Alexandra1(✉), and Overes Ed2

1 Peter the Great St. Petersburg Polytechnic University, Polytechnicheskaya 29, 195351 St. Petersburg, Russia
[email protected]
2 Hogeschool Zuyd, Nieuw Eyckholt 300, 6419 DJ Heerlen, The Netherlands

Abstract. More and more industries in different countries are affected by digitalization. One of the first areas to take this path was logistics. In this study, the issue of digitalizing the delivery of mail and parcels is examined on the case of the development of FSUE Russian Post. A comparative analysis of the development of this field of activity in Russia and a number of European countries was made, customer service algorithms were examined, and the information systems supporting post office activities were analyzed. The result of this work is the proposal of organizing an automated delivery zone and a simplified algorithm for processing parcels based on the current IT infrastructure of the postal service. Moreover, a number of recommendations for improving staff work and customer loyalty were formulated. Such automation and digitalization of the enterprise is extremely necessary, since Russian Post is a strategic object of the state and affects the entire population of the country.

Keywords: Logistics · Digitalization · Automated delivery zone · Delivery service · E-commerce · Automated post station

1 Introduction

Any system that unites people is undergoing evolution, and mail is no exception. Automation, digitalization and, further, robotization play a significant part in this process [1]. A multiple increase in the volume of the e-commerce market and a year-on-year increase in international mail and orders on the domestic market push Russian Post to spend billions of rubles on the automation of its branches. The Unified National Postal Service, being a state institution, unites the territory of the whole country and connects the most remote corners with the largest centers. The Russian Post company covers 10 macro-regions and includes 82 regional branches, 759 post offices, about 42 thousand post offices and 100 thousand postmen. Every year, the Russian Post receives about 2.5 billion letters and accounts (1 billion of which are from government agencies) and processes about 297 million parcels. Russian Post serves about 20 million subscribers in Russia and delivers about 1 billion copies of print media


per year. The annual volume of transactions that go through the Russian Post is more than 3.3 trillion rubles (pensions, payments and transfers) [2]. FSUE Russian Post is the only operator available to the majority of the population, providing a wide range of financial and commercial services, the list of which is constantly expanding [3]. Among the services are:

– provision of all types of postal services, including international postal services;
– provision of services for the storage of postal items and goods;
– implementation of promotional activities;
– provision of transport and forwarding services to individuals and legal entities;
– provision of financial services;
– implementation of state signs of postage, postage stamps, blocks, etc.;
– organization of exhibition activities, training and seminars;
– trade in products for industrial purposes;
– provision of customs clearance services.

Interaction with the client occurs through postal service providers. They act as consultants on issues raised by clients and also provide the necessary information that contributes to the quality of service. About 65% of customers are individuals. Logistics centers, in turn, perform both operational functions and management. The centers are subordinate to several departments that carry out activities for the production of logistics services, such as:

– transit sorting nodes management service;
– production organization service;
– transport management service;
– service of planning, dispatching and interaction with corporate clients.

Among the main functions of logistics centers at the macro-region (MR) level are:

– organization of mail processing in the logistics network of the MR and post offices;
– organization of transportation of mail, empty containers, materials, etc. inside the MR and on the routes passing through it;
– interaction with regional customs, management of post-processing equipment;
– control of transportation costs at the MR level, conclusion of regional transportation contracts;
– planning the mail volume for sorting nodes, control over compliance with regulatory deadlines;
– sales management of logistics services in the MR.

With the increase in demand for mail services, the enterprise is modernizing its logistics operations. Strategic development includes:

– creation of a production and logistics network that meets the growing requirements of the Russian Post;
– construction of logistics post centers with automated sorting in cities with the highest concentration of mail items;


– introduction of new production technologies and a significant reduction in manual labor;
– opening branches for the consolidation of postal items abroad;
– updating and modernization of the vehicle fleet;
– development of a charter/leasing flight program;
– development and implementation of rolling stock with a 50% increase in capacity;
– optimization of schedules and the route network;
– creation of a modern integrated IT infrastructure (TMS, GLONASS systems, the “SAD Logistics” mail flow analysis, mail flow simulation).

This research examines the algorithm for processing parcels at the Russian Post, as well as options for improving and digitalizing the process, as this is the main development strategy of the company.

2 Methods and Literature Review

The methodology of this study is based on the induction method. Based on an analysis of the literature and interviews, the requirements for the operation of mail services will be formed for the subsequent IT development of the logistics industry as a whole.

Since 2013, the level of automation and technical equipment of the Russian Post enterprise has been increasing, which allows long-standing problems to be solved. According to the results of the second quarter of 2016, the share of goods ordered in online stores and delivered by Russian Post was 63% [4]. In the summer of 2015, a three-year project was completed on the largest implementation of the 1C automated system in Russia in terms of the number of workstations. This project allowed the company to switch to a single method of operational, accounting and tax accounting in all divisions throughout the country. It also made it possible to form a common structure in the branches, centralize the maintenance of the necessary reference information and create clear rules for sales and services in the branches. Moreover, this implementation made it possible to effectively regulate inventories and provide operational control and analysis of indicators of entrepreneurial activity in the market. One of the main advantages was that the financial director could monitor online (thanks to a single centralized accounting system) all ongoing financial transactions, as well as the balances on special accounts.

In 2013, work began on the implementation of the “Unified Automated Post Office System” (UAPOS), which covers all the functional tasks of the post office. The investment costs for this project amounted to 288 million rubles [5]. The final stage of connection to the UAPOS was completed in mid-2017. The system provides a modern self-service web portal containing a personal account, a built-in knowledge base, a query navigator, a news feed and other convenient tools for users of IT services. The portal is in demand among users and today is the main channel for sending requests to the support service: 77% of the total number of incoming requests is received through it. Mail staff easily and quickly issue requests in their personal accounts using the query navigator. Over 15 months, 3201 accounts were activated in 398 post offices. According to 2017 data, about 600 thousand


people visit the Russian Post website daily. The introduction of these systems is one of the stages of the large-scale digitalization of the entire mail network.

Russian Post is one of the companies that have introduced a modern distribution and logistics information system. The system is based on one of the world's most advanced IBP (Integrated Business Planning) products, developed by River Logic, Inc. The new logistics system operates on the Russian Post's own server facilities. All 82 regional branches of the Russian Post, about 1000 sorting nodes, post offices and places of international mail exchange, as well as large logistics facilities of the Post (logistics post centers and automated sorting centers), are integrated into a single system [6]. Through the system, more than 1 billion different routes are analyzed daily, from which the most optimal ones are selected. The system also makes it possible to reduce logistics costs by 10–15%, which in turn reduces overall production costs [7].

In major cities of Russia, new post offices of the “Mail of the Future” format are opening. The technologies used in these offices significantly improve quality and reduce customer service time. Offices of this format have self-service terminals, with which customers can independently weigh, package and pay for parcels and packages. The client room provides an electronic queue system that allows client flows to be redirected and the load redistributed. It is in these offices that the 24/7 zone is equipped. This area contains payment terminals, ATMs with a cash withdrawal function and an automated post station (APS) for self-receipt of mail.

An APS is an automated station for sending and receiving small-sized items: orders from online stores and e-commerce, packages of documents and corporate shipments, including parcels. APS are an alternative to courier delivery and post offices. The station is a terminal with cells (the number of cells varies from 36 to 76) of various sizes: S (small), M (medium), L (large), where the customer's order is stored until it is collected, with automatic door opening. In addition to the cells, the terminal has a touch screen that controls operations with the APS and with the payment module. A barcode scanner has also been installed, designed for workers to place items in the APS cells. The payment system includes a bill acceptor (cash payment module) and a card acceptor that accepts any type of plastic card. The receipt printer issues a check to customers after payment has been made. For security and anti-vandal purposes, the APS is equipped with a camera that records all user actions at the terminal. For maximum convenience, APS are located in places crowded with customers and potential customers: shopping centers, business centers, train stations, etc. The client can independently choose a convenient time for receiving the order rather than waiting for a courier to arrive.

The first successful experience in implementing APS occurred in Germany in 2001. The development was carried out by the German courier company DHL, part of the Deutsche Post DHL holding (DP DHL), in cooperation with the Austrian automation company KEBA. Today there are about 2500 automated postal machines in Germany. This experience has been adopted by other countries of Europe and the world [8]. In Russia, the first APS prototype can be considered the postal machine. The device was created in the USSR and facilitated the work of the mail in the reception and


processing of mail correspondence and the sale of postage signs. An analogue of the European automated terminal appeared much later, in late 2010. Unlike European APS, in Russia the APS only issue items, but in the near future the technology is expected to be adapted to accept them as well. Companies that provide postal delivery services post instructions on how to use the service on their websites; visitors can also watch video instructions or contact the online operator on the site. To receive an order through the post office, the appropriate delivery method must be specified at checkout. Another distinctive feature compared with European countries is the installation location of the device: in Russia, APS are located in enclosed spaces to ensure greater security and to avoid damage to items due to climatic conditions. As already mentioned, the APS of Russian Post are located in the “Mail of the Future” offices and operate 24/7. For the client to be able to receive a shipment through the APS, after delivery to the destination, if there is an APS, the operator of the Russian Post contacts the customer and, with his/her consent, the package is placed in the APS for receipt. Information about the possibility of receiving a simple small packet through a cell of an automated post station is provided by the operator of the destination post office by calling the contact phone number indicated on the wrapper of the packet. After receiving the recipient's consent, the simple small packet is registered in the UAPOS for placement in the APS, forwarded and placed in the APS. After 48 h, uncollected small packets must be removed and stored in the office in accordance with the established timelines. The recipient who has expressed a desire to receive a simple small packet in the APS receives an SMS notification that contains the following information:

– barcode identifier of the simple small packet;
– date and time of placement of the simple small packet in the APS;
– shelf life of the simple small packet in the APS;
– APS cell access code;
– the amount payable;
– phone number for inquiries (call center phone number).

The addressee arrives at the APS, enters the barcode and access code, and pays for the additional service in cash or by card. The addressee then removes the parcel from the opened APS cell and, after closing the cell door, receives a payment receipt printed by the APS. To expand the APS network, a separate development direction was organized within the Russian Post. In the department of mail-order business and express delivery, a new subordinate department was established to manage the centers for mail and APS processing and transportation. This department is engaged in the development of the APS network and has employees for whom the chief specialist for APS development is responsible; it supervises the work of the Center for issuing and receiving parcels and APS in each of the ten MR centers. The chief specialist of the department and the leading specialist who oversees the APS implementation project also work in this department. Moreover, a working group consisting of the chief engineer, the IT


infrastructure and operations service, and the economic information security department was organized to implement the project [9]. The lower level consists of offices where the delivery of items through the APS is provided.

In Europe, parcel terminals have long been popular among residents [10]. Today, Deutsche Post AG, Germany's state-owned postal service operating under the brand name Deutsche Post DHL, is the world's largest logistics group. An innovator in the field, it introduced its first B2C delivery network, Packstation DHL, in 2001. To implement the project, KEBA AG was involved, which is still a leading manufacturer in the automation technology industry. An early entry into this market brought results: by 2012 there were over 4 million registered customers picking up packages addressed to them at one of the Packstations, compared with 2500 users at the end of 2001. The company has established partnerships with B2C retail representatives such as Quelle, QVC, Amazon, Tchibo, etc. [11] Customers appreciate the 24/7 availability and ease of use of the machines, and the company immediately informs them about the delivery of a shipment: as soon as the goods are placed in a Packstation, the client receives an SMS message or email. In Germany, 70% of shipments are collected within 24 h. To date, Packstation DHL has expanded its network to 3,400 terminals (340 thousand cells) in 1600 cities and municipalities. Due to network density, 90% of the German population can now reach the nearest Packstation DHL in 10 min or less. By the end of 2017, a service for printing postage stamps directly at a Packstation DHL at any time was introduced, using DHL online payment or the DHL application. An order confirmation email is sent to the client, containing a link to download a PDF and a QR code, which allows a package stamp to be printed at the Packstation [12].

Using Packstations is free for both private and business customers, but preregistration is carried out through the Paket.de web portal. Each client receives a magnetic stripe card (Goldcard) and a PIN code that can be used for identification at Packstations and post offices. Previously, a Packstation could be used by specifying a customer number and PIN; this was changed in 2011 to increase security, and, according to DHL, using a Goldcard is now mandatory. On October 29, 2012, the mandatory mTAN was introduced, which is transmitted exclusively by SMS notification. Each mTAN is valid for only one opening procedure (which may give the right to open several cells if the recipient picks up several shipments), thereby replacing the PIN code. The customer has seven business days to pick up the order. When all cells are filled or the shipment is too large, the package is sent to the nearest Packstation or the nearest post office. Customers without Packstation cards receive packages at their residential address; if the client is not at home, the items are forwarded to the nearest Packstation. In these cases, the courier delivers a notification to the recipient with the address of the post office and a barcode. Customers do not pay extra for the service: Packstation is fully funded by DHL through savings in logistics. The same service delivery principle is used for the Gopost terminals of the United States Postal Service (USPS).

Another example of successful implementation is the Estonian logistics company SmartPOST, which specializes in the delivery of goods and mail through self-service branches called Delivery Point Solution (DPS). The company,


founded in 2008, has since entered the Finnish market through a stake deal with the Finnish postal service (Posti). SmartPOST does not require pre-registration; to receive an order through an APS, it is necessary to indicate SmartPOST as the delivery method on the website of the online store or catalog. The APS work both for sending and for receiving. When the goods are delivered, the customer receives an SMS with the code required to collect them. A particular feature of APS use in Finland and Estonia is the delivery of food from restaurants and the return of books to libraries. Unlike other similar systems (such as the Packstation), SmartPOST has placed its APS in shopping malls and hypermarkets, which makes them convenient even in bad weather [13]. In the K-food supermarket chain (800 stores in Finland), 3 million items were issued in 2015. In 2018, the Finnish Post began to expand its APS network and install terminals directly in residential buildings. To implement this project, the post office concluded agreements with major housing developers, such as YIT, Skanska and Bonava, on the placement of APS in new buildings. Maintenance of an APS in a house costs residential cooperatives 100 euros per month, which is cheaper than renting a place in a supermarket [14]. In Germany, the APS service occupies 30% of the delivery market; in Poland, 18%; in the Baltic countries, 22%. The average shelf life of an order is less than a day in Germany, 1.7 days in Russia, 1.3 days in Poland and 1.6 days in the Baltic States [15].

3 Results

3.1 Issues and Challenges of the Current Situation of FSUE Russian Post

In 2018, almost all Internet users had online shopping experience. According to the Association of Internet Commerce Companies, the volume of the e-commerce market in Russia in 2017 increased by 13% compared to 2016 and amounted to 1040 billion rubles; at the end of 2018, a market volume of 1250 billion rubles was forecast. The number of incoming international shipments containing goods also continues to grow rapidly. The market volume is constantly growing (see Fig. 1) and is projected to reach 2.78 trillion rubles by 2024 [16]. According to experts, online trade in goods and services accounts for 36% of the digital economy in Russia [17]. According to the results of 2018, the share of FSUE Russian Post in delivery on the Internet trading market amounted to 69% (see Fig. 2). Thus, the company generates about 18% of the entire digital economy of Russia.

All over the world, postal operators strive to find direct and low-cost routes to their customers. The costliest stage in delivery is the “last mile” (the logistics stage at which the goods are transferred directly to the customer). For most companies, the profitability of this stage is insufficient. As one way to reduce the cost of the “last mile”, postal operators and logistics companies are considering the active implementation of APS. In Russia, the delivery of online orders can now be provided through this new network, which saves money and reduces the time for receiving a parcel. It is the development of such a powerful market as e-commerce that drives the development and expansion of the APS network.


Fig. 1. The volume of the Internet trading market in Russia, billion rubles.

Fig. 2. The delivery methods for online trading.

The main obvious advantage of the APS is its convenience. The client does not have to adapt to the post office schedule and can pick up the shipment at a convenient time; with wide geographic coverage of the APS network, the client can collect a parcel or correspondence at the most convenient place, without queues. When using the APS, there is no need to present identity documents or fill out receipts, and if the client cannot pick the item up personally, the code from the SMS message can be given to an authorized third person who will collect the shipment.


For many users of online stores, a clear advantage of the APS over classic points of delivery of orders and mail is anonymity and the absence of operators: not every client is comfortable advertising the purchase of certain groups of goods. At the same time, the presence of an employee provides advice and an opportunity to solve problems at the point of issue. In Russia, not all APS handle returned shipments, so the absence of an operator is a big disadvantage for the client. In addition, the disadvantages of the APS include restrictions on the size and weight of the cells. In Russia, the development of the APS network remains weak; at the moment, this technology cannot satisfy the demand for product delivery within the country. Due to the lack of distribution and of many years of practice, customers still distrust the APS and avoid receiving expensive orders through the terminal for security and safety reasons. Moreover, in Russia there is no clear algorithm for the recipient's actions when damaged goods are received; the client does not know how to behave in such a situation. In general, the implementation of the APS is not aimed at replacing the existing delivery system but is an additional service to improve quality and client loyalty. The APS can satisfy the needs of a certain group of customers but does not cover all users of logistics services [18]. In Russia, there are a number of difficulties in implementing the APS network. In most European countries, APS are on the streets, which provides round-the-clock delivery of items. In Russia, due to climatic conditions, outdoor installation may result in damage to the contents of the package (cosmetics, liquids, etc.). In turn, APS owners prefer locations in large shopping centers to avoid vandalism, which excludes the possibility of 24/7 access. The correct operation of the APS system requires a complex IT structure integrated with payment systems, the information systems of delivery services, online stores and other interested companies, as well as mobile operators. In addition to integration, the APS requires regular maintenance. Unlike in Europe, in Russia the APS lets the client choose a payment method, since cash is still widely used in Russian realities; in this case, a cash collection contract is required. Although automated terminals are already firmly established in the field of delivery, in Russia it is still impossible to send parcels between individuals and there is no optimal way to identify customers, although the technical capabilities for such operations exist.

3.2 Algorithm for the Automated Issuing Technology Implementation

The development strategy of FSUE Russian Post provides for the organization and development of an APS network. The project envisages the installation of about 2,500 terminals by 2021. In addition to 24/7 operation, a hallmark of the APS compared with courier companies will be installation in areas (the Leningrad and Moscow regions) where there are no competitors. The method of implementation and operation of the Russian Post APS has a complex algorithm and a number of disadvantages. The project development strategy provides for placing in the APS only small international shipments meeting numerous parameters such as size, packaging quality and accompanying information.


This package format corresponds to most packages that come to recipients from sites such as Alibaba, eBay and Amazon; it is therefore impossible to receive a shipment sent from within Russia through the APS. In turn, the procedure for placing, receiving and servicing is a sequence of numerous steps that must be performed by the operator. The current principle of APS operation adds to the standard duties of mail employees and does not deliver comfort to customers: the client finds out about the possibility of receiving a shipment via the APS only after a phone call from the operator, when the parcel is already waiting for its consumer. This form of working with a client is more reminiscent of imposing an additional paid service for extra profit than of caring about the client's comfort and time. Consequently, customer service affects the quality of delivery and the impression of the enterprise. The degree of customer satisfaction depends directly on the moment of transfer of the shipment, the “last mile”. In the case of receipt through the office windows, the personal communication between the Russian Post operator and the individual should be controlled, which is rare for most post offices. With full network integration, there is no need for personal communication: the successful European experience in creating APS networks eliminates personal contact between the client and the operator through automatic messaging.

Items are placed in the APS for only 2 days, during which the parcel must be picked up. After that, it goes to the warehouse, where it waits for the client for 30 days. For control, postal service operators must fill out a call log, which records the number of filled cells in the APS, denials of service and reasons for refusal. The information is sent to the post office as part of the statistical data for further analysis. The log does not have a specific form and in most cases is a paper medium with operator notes; to date, employees often do not fulfill their reporting obligations when there is no automatic form.

The specified project implementation strategy allows the company to avoid the cost of integrating terminals with a single FSUE Russian Post network, but it will not generate positive feedback from clients, as there are no obvious advantages for an individual in using the service. For this project to recoup the investment costs associated with its implementation, it is necessary to ensure a minimum load of 75% for the APS put into operation [19]. This requires making the operation of the APS convenient first of all for the client. Accordingly, there should be an information system that supports all operations related to the storage of shipments, their issuance and the acceptance of payment: information about moving orders, their location and their payment is stored in a single information system. The technical capabilities and IT infrastructure of the Russian Post allow this process to be improved. In 2018, the implementation of the Unified Automated System was completed, with 38,848 post offices connected. The system replaced more than 15 different software products used by the company earlier. The system is based on Microsoft software, with Microsoft SQL Server as the database management system. The system covers all four vertical levels of control and is designed in three configurations that differ in purpose, depending on their use at the hierarchical levels of the automation object. Moreover, all configurations are interconnected and constantly exchange information and data [20].


In addition to the unified automated system, to meet deadlines the Russian Post uses barcode scanners at each processing step; thanks to this, the client can also monitor the movement of his/her shipment. In 2014, for the convenience of customers, FSUE Russian Post launched its own mobile application, which today has more than 5 million users; 1.5 million users visit the application daily. The application allows a personal account to be created, automatically adds all shipments (if the client's phone number is specified), uses push notifications about changes in shipment status, reminds the user of delivered shipments near the post office and reads barcodes. There is also integration with the tracking service of the AliExpress trading platform, from which the bulk of small packages comes: when the seller sends the order, the tracking number automatically appears in the list of parcels tracked in the application. The Russian Post's own server facilities can support a fully automated APS network. To reduce the time the operator spends placing items in the APS, it is necessary to reduce the number of steps in this algorithm; and to increase customer loyalty, the service should be convenient and enjoyable [21].

At the Pulkovo logistics center in St. Petersburg, around 180,000 international shipments are processed daily. Of this stream, 20% are small packets that meet the standard for placement in the APS. There are 523 branches in St. Petersburg and the Leningrad Region; the busiest branches are in the central district, where one branch serves a large territory. The “Branches of the Future” in the central part of St. Petersburg receive about 96 small packages daily. In addition to placing items in the APS, the operator must call this number of customers during the day to inform them about the service. To improve the quality of the service, we consider an option in which the entire system is fully automated and the operator's contact with the client is excluded in one of the “Branches of the Future”. At the Pulkovo logistics center, all shipments go through barcode reading, which is necessary for employees to monitor them and which automatically changes the tracking status for the client. The postal identifier contains information about both the shipment and the recipient: his/her personal data and phone number. Through the single system, this information is sent to the server for automatic sending of SMS messages. A text message is sent to the client with a proposal to place the package in the APS; it states the barcode, the address of the post office to which the shipment will be sent, the dates of placement/removal of the international package, the cost of the service (50 rubles) and instructions for responding to the offer. The client can consent to or decline the service in a reply SMS message: “1” for consent, “2” for refusal. If the client consents, a message is sent with the following information:

– code;
– date of placement in the APS;
– shelf life in the APS (2 days);
– date of removal from the APS;
– APS cell access code;
– the amount payable;
– phone number for inquiries (call center phone number).


In case of refusal, a message with the text “Refusal accepted” is sent. After the items are delivered to the offices, Russian Post operators scan them with a barcode scanner; when scanning, the scanner displays which items are to be placed in the APS. The parcel placement algorithm should also be simplified by reducing steps and writing an additional batch processing script for the APS. By excluding the operator's actions of scanning a barcode and entering a password, it is possible to proceed immediately to the parcel scanning step. Information about which cell a shipment is in, together with the corresponding code, is stored on the database server. As soon as the operator scans a new package whose barcode has not yet been registered at the post office, the system treats the action as an “upload” and displays on the touch screen an interface with a list of free cells for the operator to select the necessary one. Automation of the process at the level of the information system avoids errors in the work of operators. In many branches the system operates in test mode, which is why automatic system updates coming from the central office and covering the entire network lead to operational errors. A failure does not allow the operator to register a small package in the system manually, since the system does not give the employee access to an autonomous database at the level of a specific branch. Errors associated with the operation of equipment and systems in a branch are resolved by an on-site team that eliminates problems on the spot; since 2018, the security services of the Russian Federation have banned remote maintenance.
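To make the proposed simplified flow concrete, the following is a minimal illustrative sketch in Python. All names here (ApsTerminal, send_sms, the shelf-life and fee constants, the sample barcode) are hypothetical stand-ins for the UAPOS integration described above, not actual Russian Post software:

import random
import string
from datetime import datetime, timedelta

SHELF_LIFE_DAYS = 2    # storage period in the APS cell (see text above)
SERVICE_FEE_RUB = 50   # proposed cost of the APS service

def send_sms(phone: str, text: str) -> None:
    """Placeholder for the SMS gateway of the unified system."""
    print(f"SMS to {phone}: {text}")

class ApsTerminal:
    """Hypothetical model of a single automated post station (APS)."""

    def __init__(self, cells):
        self.free_cells = set(cells)   # e.g. {"S1", "S2", "M1", "L1"}
        self.contents = {}             # barcode -> (cell, access code)

    def scan(self, barcode: str, phone: str):
        """An unknown barcode is treated as an 'upload': the system offers a
        free cell at once, with no separate login or extra scanning step."""
        if barcode in self.contents:
            return self.contents[barcode]        # already placed: show cell
        if not self.free_cells:
            raise RuntimeError("no free cells")  # parcel stays at the office
        cell = self.free_cells.pop()             # operator confirms the cell
        code = "".join(random.choices(string.digits, k=6))
        self.contents[barcode] = (cell, code)
        deadline = datetime.now() + timedelta(days=SHELF_LIFE_DAYS)
        send_sms(phone, f"Parcel {barcode}: cell {cell}, access code {code}, "
                        f"fee {SERVICE_FEE_RUB} RUB, pick up by {deadline:%d.%m.%Y}")
        return cell, code

terminal = ApsTerminal({"S1", "S2", "M1", "L1"})
terminal.scan("RA123456789RU", "+7 900 000-00-00")

The key design point, mirroring the text, is that scanning an unknown barcode itself starts the upload, so the operator performs no separate login or mode-selection steps.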

4 Conclusion

Based on the current state of the IT infrastructure of the Russian postal service and the work of European representatives in this field, it should be concluded that the presence of a large number of branches throughout the Russian Federation forces the Russian Post to organize its work around a centralized customer database and unified formats of interaction with customers. Further development and universalization of employees is the next step in the implementation of automated systems. This circumstance is likely to cause costs not only for retraining personnel but also for forming new staff. The effect of implementing these tools should provide the Russian Post with qualitative growth in servicing customers (both individuals and legal entities) and also form the foundation for business development and sales growth in all divisions of the enterprise [22]. Every year, FSUE Russian Post handles more and more items containing goods, due to the growth of e-commerce; according to analysts, the online shopping market is growing at 25–35% annually [23]. APS is a new market in Russia with very promising development in the coming years, as the pace of life in cities is constantly increasing and customers value their personal time and comfort. In general, it can be noted that by using an automated issuance zone an enterprise can increase the added value of the service being sold. The project is being considered as part of the enterprise development strategy until 2021 and is a small component of investment activity in addressing issues


of more effective organization of the “last mile”. This issue is among the enterprise's priorities in the medium term. As a national network, FSUE Russian Post is highly influenced by the state. The company operates almost throughout the entire country and forwards mail to 150 million addresses of individuals and legal entities (including those located outside the Russian Federation). This indicates the significant importance of mail as a strategic object of the state, and in the near future the systemic importance of the enterprise for the economy of the Russian Federation will not decrease.

Acknowledgments. The reported study was funded by RSCF according to the research project № 19-18-00452.

References

1. Maydanova, S., Ilin, I.V.: Strategic approach to global company digital transformation. In: Proceedings of the 33rd International Business Information Management Association Conference, Granada, Spain, pp. 8818–8833 (2019)
2. Russian Post. https://www.pochta.ru/. Accessed 21 Apr 2020
3. Kuratova, L.: Forecasting the scope of services of communications industry organizations. Int. J. Econ. Law 5, 62–70 (2015)
4. Timofeeva, A.: E-commerce market research and strategy recommendations. Case study: Russian Post North-West macro-region business unit in Saint-Petersburg (2017)
5. Report on the financial and economic activities of the Federal State Unitary Enterprise “Russian Post” for 2015. /ru/documents/5215/. Accessed 21 Apr 2020
6. Kapustina, I., Bakharev, V., Kovalenko, E., Pasternak, K.: Digitalization of logistics hubs as a competitive advantage of logistics networks. In: E3S Web of Conferences, vol. 157, p. 05009 (2020). https://doi.org/10.1051/e3sconf/202015705009
7. ID-EXPERT: Market and Technology News. https://idexpert.ru/news/13050. Accessed 21 Apr 2020
8. Faugere, L., Montreuil, B.: Hyperconnected city logistics: smart lockers terminals & last mile delivery networks. In: Proceedings of the 3rd International Physical Internet Conference, Atlanta, GA, USA (2016)
9. Ilin, I., Levina, A., Lepekhin, A., Kalyazina, S.: Business requirements to the IT architecture: a case of a healthcare organization. In: Energy Management of Municipal Transportation Facilities and Transport, pp. 287–294. Springer (2018)
10. Choubassi, C., Seedah, D.P., Jiang, N., Walton, C.M.: Economic analysis of cargo cycles for urban mail delivery. Transp. Res. Rec. 2547, 102–110 (2016)
11. McCarthy, D., Fader, P.: Valuing Non-Contractual Firms Using Common Customer Metrics. Social Science Research Network, Rochester, NY (2017)
12. Prange, C., Bruyaka, O., Marmenout, K.: Investigating the transformation and transition processes between dynamic capabilities: evidence from DHL. Organ. Stud. 39, 1547–1573 (2018). https://doi.org/10.1177/0170840617727775
13. Gorbachev, A., Vinogradov, V.: Customer insights for the new product development: Posti palvelut Oy, p. 69
14. Valkiainen, T.: Exploring Stakeholder Approach to Business Management, p. 115 (2018)


15. Leung, K.H., Choy, K.L., Siu, P.K.Y., Ho, G.T.S., Lam, H.Y., Lee, C.K.M.: A B2C e-commerce intelligent system for re-engineering the e-order fulfilment process. Expert Syst. Appl. 91, 386–401 (2018). https://doi.org/10.1016/j.eswa.2017.09.026
16. Dubravitskaya, O.: The Russian market of online commerce by 2024 will reach 2.78 trillion rubles. https://www.rbc.ru/business/13/03/2019/5c88f46a9a79479761da827d. Accessed 21 Apr 2020
17. Dubgorn, A.S., Abdelwahab, M.N., Borremans, A.D., Zaychenko, I.M.: Analysis of digital business transformation tools. In: Proceedings of the 33rd International Business Information Management Association Conference, Granada, Spain, pp. 9677–9682 (2019)
18. Naumova, E., Buniak, V., Golubnichaya, G., Volkova, L., Vilken, V.: Digital transformation in regional transportation and social infrastructure. In: E3S Web of Conferences, vol. 157, p. 05002 (2020). https://doi.org/10.1051/e3sconf/202015705002
19. Ilin, I.V., Koposov, V.I., Levina, A.I.: Model of asset portfolio improvement in structured investment products. Life Sci. J. 11, 265–269 (2014)
20. TAdviser: 21 thousand branches were transferred to the Unified Automated System “Russian Post”. http://www.tadviser.ru/index.php/Пpoeкт:Пoчтa_Poccии_(coздaниe_EAC_OПC_нa_бaзe_Microsoft_Windows_Server_и_SQL_Server. Accessed 21 Apr 2020
21. Ilin, I.V., Klimin, A.I., Overes, E., Sataev, P.: The essence and features of the introduction of the omni-channel approach to interaction with consumers. In: Proceedings of the 33rd International Business Information Management Association Conference, Granada, Spain (2019)
22. Pushkarev, M.A., Shabalin, D.V., Overes, E.: How to Run International Business in Russia, pp. 431–434 (2016)
23. Vaitkevicius, S., Mazeikiene, E., Bilan, S., Navickas, V., Sananeviciene, A.: Economic demand formation motives in online-shopping. EE 30, 631–640 (2019). https://doi.org/10.5755/j01.ee.30.5.23755

Algorithm for Evaluating the Promotions Effectiveness Based on Time Series Analysis

Vadim Abbakumov1, Alena Kuryleva1(&), Aleksander Mugayskikh2, Jörg Reiff-Stephan3, and Zoltan Zeman4

1 PJSC Gazpromneft, 3-5 Pochtamskaya St., Saint Petersburg, Russia
[email protected]
2 Saint Petersburg State University, 7/9 Universitetskaya Emb., Saint Petersburg, Russia
3 Technical University of Applied Sciences Wildau, Hochschulring 1, 15745 Wildau, Germany
4 Szent Istvan University, Godollo, Hungary

Abstract. The article considers the problem of evaluating the effect of promotions held in a company. This problem remains relevant today, as evidenced by various studies on the topic. The authors provide an algorithm for evaluating the promotions effectiveness based on time series analysis of sales. Promotions are considered as interventions that cause deviations from the main trend of the time series, which can be included in the model as a separate dummy regressor. The paper also considers mathematical models for representing the intervention and the main characteristics of interventions. To evaluate the systematic component of the time series, it is proposed to use the Prophet forecasting package, since it has a number of advantages that allow one to more accurately isolate the effect of the ongoing campaign. The evaluation algorithm was tested on historical data for a specific product from the product line of stores at gas stations, and the results of evaluating the promotions effectiveness for this product are presented.

Keywords: Promotion evaluation algorithms · Modeling · Forecasting the effect of promotions · Intervention · Time series

1 Introduction

The consequences of making management decisions in business are usually associated with the company receiving additional revenue or incurring losses requiring an economic assessment. Timely information about such consequences has a serious impact on the nature of business processes, so the task of obtaining quantitative assessments of the consequences of managerial decisions is very important, especially in a constantly changing business environment. In this paper, we propose an algorithm that allows us to evaluate the contribution of managerial decisions regarding ongoing promotions to the dynamics of company performance and to measure their impact on the cost performance of the enterprise.



Enterprises must plan how they will generate and satisfy the demand for their products. Promotions are a powerful tool with which companies increase the demand for and popularity of the offered products and services [1, 2]. Carrying out promotions involves certain costs that must be justified by the resulting effect. In this regard, companies often ask themselves whether a past promotion was successful, whether the current promotion will be profitable and how the demand for products will change in the future. Despite significant investments in promotions, for many retailers it remains unclear in each case whether the net effect of a promotion is positive [3]. Various studies assess the effect in practice; according to some data [4], about 75% of promotions can generate additional sales. Therefore, the question of assessing the real effectiveness of ongoing campaigns remains relevant to this day.

2 Mathematical Models of Promotions

To measure the effect of a promotion on a specific product unit, it is proposed to analyze daily sales. It is convenient to consider the sales series of any retail indicator using time series models. Time series analysis assumes that the data contain a systematic component (usually including several components: trend, seasonality) and random noise (error), which makes it difficult to detect the systematic components [5–7].

In the literature on time series analysis [8, 9], an intervention is an input series that indicates the presence or absence of an event. An intervention causes the trend of the time series to deviate from its expected pattern. It is assumed that an intervention occurs at a specific time, has a known duration, and has a specific type. The intervention time is the period when the event begins to cause a deviation from the main behavior pattern of the indicator being studied. The duration of the intervention is how long the event or condition causes the deviation of the indicator. The type of intervention is how the impact of the event changes over time. The intervention effect is the deviation that the intervention causes.

Promotions can be considered as interventions, since they usually cause the deviations mentioned above. In such cases the analyst, as a result of analysis and forecasting, knows the time, duration and type of the deviation that occurred in the past or will occur in the future. Mathematically, interventions can be represented as dummy regressors (explicitly defined by time, duration and type), which are introduced into the time series model through a transfer function filter. The transfer function filter explains how the current and previous (lagged) intervention values cause deviations in the underlying time series process. In general, a typical transfer function filter model has the following form (1):

$$\begin{cases} v(B) = \omega_0 \dfrac{\omega(B)\,B^{b}}{\delta(B)} \\ \omega(B) = 1 - \omega_1 B - \omega_2 B^2 - \dots - \omega_s B^s \\ \delta(B) = 1 - \delta_1 B - \delta_2 B^2 - \dots - \delta_r B^r \end{cases} \quad (1)$$


where $v(B)$ is the transfer function filter (of finite or infinite order), $\omega_0$ is the scaling factor, $\omega(B)$ is the $s$th-order numerator polynomial, $\delta(B)$ is the $r$th-order denominator polynomial, and $b$ accounts for lagged effects. If $r \neq 0$, then $v(B)$ has an infinite order. If each intervention value affects both the current and subsequent values, then the intervention is a dynamic regression variable. In the case where the input series is an intervention, the transfer function filter $v(B)$ is also called the response to the intervention. The overall effect of the intervention on the base time series is subsequently called the intervention effect, $v(B)\xi_t$, which describes how the effect evolves over time. If each intervention value only affects the current value of the base time series, i.e., $\omega(B) = 1$ and $\delta(B) = 1$, then the intervention is a regression variable with parameter $v(B) = \omega_0$. In this case the intervention can be described by an indicator variable, the response to the intervention is constant over time, and the intervention effect $\omega_0 \xi_t$ is a scaled version of the intervention, as shown in Figs. 1, 2 and 3 [10].
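Before turning to the specific intervention types, a small numerical sketch may help: scipy.signal.lfilter applies exactly such a rational polynomial filter in the backshift operator $B$, so the intervention effect $v(B)\xi_t$ of filter (1) can be computed directly. The coefficient values below are made up for illustration:

import numpy as np
from scipy.signal import lfilter

def intervention_effect(xi, omega0, omegas, deltas, b):
    """v(B)*xi_t for v(B) = omega0 * omega(B) * B^b / delta(B), where
    omega(B) = 1 - omega_1*B - ... and delta(B) = 1 - delta_1*B - ..."""
    num = omega0 * np.r_[np.zeros(b), 1.0, -np.asarray(omegas, float)]
    den = np.r_[1.0, -np.asarray(deltas, float)]
    return lfilter(num, den, xi)   # rational filter in the backshift operator

xi = np.zeros(12)
xi[4] = 1.0   # a unit pulse intervention at t = 4
print(intervention_effect(xi, omega0=2.0, omegas=[0.3], deltas=[0.5], b=1))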

Fig. 1. Point intervention.

A promotion, being an intervention, is included in the model as an independent variable, a separate regressor. The analyst selects the values of this variable and determines them by the nature of the promotion. According to the selected characteristics of the intervention, the types described below can be distinguished. A point (pulsed) intervention (Fig. 1) is a dummy regressor that takes the value 1 at the moment of the intervention, while the rest of its values are 0. The duration of a point intervention is one point in time:

$$\xi_t = \begin{cases} 1, & t = T \\ 0, & t \neq T \end{cases} \quad (2)$$

where $\xi_t$ is the intervention and $T$ is the unit time period in which the intervention occurred. Point interventions are useful for evaluating deviations that occur over a given period and whose effect on the time series disappears afterwards. In this study, point interventions were used only to exclude time series outliers; evaluating the effect of such interventions by the proposed method is excluded, because practical studies show that at such points the model overfits.


Fig. 2. Stepwise intervention.

Fig. 3. Linear intervention.

A stepwise intervention (Fig. 2) is a dummy regressor whose values before the intervention are 0 and whose subsequent values are 1. The duration of a stepwise intervention is the number of periods from time $T$ to the end of the time series:

$$\xi_t = \begin{cases} 0, & t < T \\ 1, & t \ge T \end{cases} \quad (3)$$

where $\xi_t$ is the intervention and $T$ is the unit time period in which the intervention occurred. A stepwise intervention is useful for evaluating a change that is known to occur during and after a certain period of time and which permanently affects the time series thereafter.

A linear intervention (Fig. 3) is a dummy regressor whose values are zero before the intervention, with subsequent values increasing linearly. The duration of the intervention is the number of periods from time $T$ to the end of the time series:

$$\xi_t = \begin{cases} 0, & t < T \\ t - T, & t \ge T \end{cases} \quad (4)$$

where $\xi_t$ is the intervention and $T$ is the unit time period in which the intervention occurred. Linear interventions are useful for evaluating changes that are known to occur during and after a certain period and whose impact on the time series increases thereafter.
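A compact sketch of the three basic regressors (2)–(4), assuming a zero-based discrete time index with the intervention at period T:

import numpy as np

n, T = 10, 4                            # series length and intervention time
t = np.arange(n)
point = (t == T).astype(float)          # formula (2): pulse at T only
step = (t >= T).astype(float)           # formula (3): 0 before T, 1 from T on
linear = np.where(t >= T, t - T, 0.0)   # formula (4): ramp starting at T
print(point, step, linear, sep="\n")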


In practice, when analyzing sales time series, a more flexible type of intervention was used (Fig. 4):

Fig. 4. Intervention described by formula 5.

In this case, the variable has the following form:

$$\xi_t = \begin{cases} 0, & t < t_1 \\ (t - t_1)/(t_2 - t_1), & t_1 \le t < t_2 \\ 1, & t_2 \le t \le t_3 \\ (t_4 - t)/(t_4 - t_3), & t_3 < t \le t_4 \\ 0, & t > t_4 \end{cases} \quad (5)$$

where $\xi_t$ is the intervention, $t_1$ is the point in time at which sales growth began, $t_2$ is the point in time at which the promotion was launched, $t_3$ is the point in time when the promotion was completed, and $t_4$ is the point in time by which the sales growth has decayed. Sales growth begins at time $t_1$, when a preliminary announcement of the promotion is made, but most often $t_1 = t_2$. Increased sales may persist after the end of the promotion, fading until time $t_4$, but quite often $t_3 = t_4$. The promotion start moment $t_2$ and the promotion end moment $t_3$ are known to the analyst. The moments $t_1$ and $t_4$ are estimated from historical data, for example using grid search. More complicated cases are possible, for example when sales fall after the end of a promotion. Such cases did not occur in the current work, were not tested by the authors, and are excluded from the discussion in this article.
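A sketch of the flexible regressor (5) as a Python function; the vectorized form below is an illustrative implementation, not code from the paper:

import numpy as np

def promo_intervention(t, t1, t2, t3, t4):
    """Dummy regressor xi_t of formula (5) for an array of time indices t."""
    t = np.asarray(t, dtype=float)
    xi = np.zeros_like(t)
    rise = (t1 <= t) & (t < t2)              # empty when t1 == t2
    xi[rise] = (t[rise] - t1) / (t2 - t1)
    xi[(t2 <= t) & (t <= t3)] = 1.0          # full effect during the promo
    fall = (t3 < t) & (t <= t4)              # empty when t3 == t4
    xi[fall] = (t4 - t[fall]) / (t4 - t3)
    return xi

print(promo_intervention(np.arange(10), t1=2, t2=4, t3=6, t4=8))

With t1 = t2 and t3 = t4 (the common case noted above), the ramp segments are empty and the regressor reduces to a stepwise pulse over the promotion window.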


Examples of interventions on test data are shown below (Fig. 5):

[Plot: revenue (with promo action), revenue (without promo action) and the discount, in conventional currency, by date.]

Fig. 5. The allocation of the intervention on the time series of sales of related products from the soft drinks category at a store at an automatic gas station.

The simulation result with the use of interventions is the analyst's assessment of how past promotions influenced sales in retrospect and will affect them in the future. Thus, it becomes possible to determine the effect of specific promotions. Considering that the time series underlying the studied indicator deviated from the expected pattern as a result of the intervention, it is proposed to combine the time series model with the intervention model. The resulting combined model may take the following form:

$$y_t = \mu_t + \varepsilon_t + v(B)\xi_t \quad (6)$$

where $\mu_t$ is the systematic component (trend and seasonality), $\varepsilon_t$ are the errors, and $v(B)\xi_t$ is the intervention effect.
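A minimal sketch of the combined model (6) using the Prophet package, for the simple case $v(B) = \omega_0$ in which the promotion enters as a single extra regressor. The synthetic data and parameter values are illustrative; promo_intervention is the function from the sketch above, and depending on the installed version the import may be fbprophet rather than prophet:

import numpy as np
import pandas as pd
from prophet import Prophet   # on older installs: from fbprophet import Prophet
# regressor_coefficients is available in recent versions of the package
from prophet.utilities import regressor_coefficients

n = 120
t = np.arange(n)
promo = promo_intervention(t, t1=60, t2=60, t3=74, t4=74)  # sketch above
rng = np.random.default_rng(0)
sales = 100 + 0.1 * t + 20 * promo + rng.normal(0, 2, n)   # synthetic series

df = pd.DataFrame({"ds": pd.date_range("2019-01-01", periods=n, freq="D"),
                   "y": sales,
                   "promo": promo})

m = Prophet(weekly_seasonality=True, yearly_seasonality=False)
m.add_regressor("promo")   # the intervention enters as a dummy regressor
m.fit(df)

# The fitted coefficient of "promo" plays the role of omega_0: the estimated
# promotion effect on top of trend and seasonality.
print(regressor_coefficients(m))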

3 Systematic Component Modeling of Time Series

The next step after a mathematical description of the nature of the ongoing promotions and the selection of an appropriate intervention model is the selection of a model for directly assessing the effect of the intervention. In practice, traditional models turn out to be difficult to use for evaluating the intervention effect, since a number of difficulties arise.


When using a linear regression model, the effect of the intervention equals the coefficient of the regressor that describes it. A model can be built with any combination of seasonal adjustments, which is a significant plus, because in practice sales time series are subject to seasonal variation of different kinds, from weekly to annual fluctuations. However, the trend description in this case suffers from specification error, which makes the results of the effectiveness assessment unreliable. Models based on ARIMA [11] become too complex to compute and to interpret transparently if the model includes two or more seasonal components, since the parameter space grows greatly. Neural network models have several advantages in the study of time series [12]; however, long daily time series pose practical problems associated with a significant increase in the training time of the neural network, as well as the need for large computing power.

Therefore, this work uses the Prophet package, developed by Facebook specialists in 2017 for forecasting time series. The package allows analysts to build high-accuracy predictive models automatically. The methodology of the package is described in detail in the original article [13]. Its essence lies in fitting additive regression models with the following four main components:

1. a trend that is modeled using piecewise-defined functions: piecewise-linear regression or a piecewise-logistic growth curve;
2. annual seasonality, which is modeled on the basis of Fourier series;
3. weekly seasonality, which is represented by indicator variables;
4. events and holidays, which are also represented by indicator variables.

Model parameters are estimated using Bayesian statistics, either by the maximum a posteriori probability (MAP) estimate or by full Bayesian inference. Parameters are evaluated using the Stan probabilistic programming platform; the Prophet package is thus a convenient interface for working with Stan from the R and Python environments. To solve the problem of a negative promotion effect under a downward trend of the indicator in question, it is necessary to specify a number $S$ that determines the number of trend changepoints at time points $s_j$ of the trend line (7) at which the effect of the promotion will be non-negative:

$$g(t) = \left(k + \mathbf{a}(t)^{T}\boldsymbol{\delta}\right)t + \left(m + \mathbf{a}(t)^{T}\boldsymbol{\gamma}\right) \quad (7)$$

where $g(t)$ is the trend line, $k$ is the growth rate, $m$ is the offset parameter, $\boldsymbol{\gamma}$ is the vector of parameters set to make the function continuous, and $\boldsymbol{\delta} \in \mathbb{R}^{S}$ is the vector of trend


growth rate adjustments (where $\delta_j$ is the change in rate that occurs at time $s_j$). The growth rate at time $t$ is then the base rate plus all adjustments up to that moment, $k + \sum_{j: t \ge s_j} \delta_j$, which can be represented with a vector $\mathbf{a}(t) \in \{0, 1\}^{S}$:

$$a_j(t) = \begin{cases} 1, & t \ge s_j \\ 0, & t < s_j \end{cases}$$

Then the total rate at time $t$ is $k + \mathbf{a}(t)^{T}\boldsymbol{\delta}$.
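A numerical sketch of the piecewise-linear trend (7); the changepoints $s_j$, base rate $k$, offset $m$ and adjustment vector $\boldsymbol{\delta}$ below are made-up values, with $\gamma_j = -s_j \delta_j$ chosen so that $g(t)$ stays continuous at each changepoint, as in the Prophet formulation:

import numpy as np

def trend(t, k, m, s, delta):
    """Piecewise-linear trend g(t) of formula (7)."""
    t = np.asarray(t, dtype=float)
    s = np.asarray(s, dtype=float)
    delta = np.asarray(delta, dtype=float)
    a = (t[:, None] >= s[None, :]).astype(float)  # a_j(t) = 1 if t >= s_j
    gamma = -s * delta                            # keeps g continuous at s_j
    return (k + a @ delta) * t + (m + a @ gamma)

print(trend(np.arange(10), k=1.0, m=0.0, s=[4, 7], delta=[0.5, -2.0]))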

4 Promotion Efficiency Algorithm

Summarizing the described approach, the following algorithm for evaluating the effectiveness of a promotion can be distinguished. The first step is data collection. The approach is based on time series analysis; therefore an obligatory stage is the collection, preparation and necessary transformation of the sales data of the evaluated indicator over a period that includes the promotion. The next stage is the assessment of the mathematical essence of the promotion, that is, the choice of the intervention model. To evaluate a promotion, the authors propose the model described by formula (5), since it is more flexible and often encountered in practice when analyzing sales. After choosing an intervention model, it is necessary to select a model to describe the systematic component of the time series; the authors propose the Prophet package, which provides two main advantages for the future model. Firstly, the trend is modeled by a piecewise-linear function, a flexible and robust approach. Secondly, several seasonal components can be included in the model, which is often necessary in practice. Using this package also does not exclude the possibility of adding further regressors to the model that characterize the influence of external factors of any nature. With successful identification of a time series model, its training in retrospect and verification of its adequacy by means of the Prophet package, the intervention multiplier becomes known, which reflects nothing other than the effect of the promotion. Thus, if the mechanism of the promotion is known, determining the effect of its implementation, which is the ultimate goal of the analysis, becomes possible using a practical approach to assessing the trend of the time series in which the effect itself is included in the form of a dummy variable.
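Since formula (5) leaves $t_1$ and $t_4$ to be estimated from history (Sect. 2 mentions grid search), a hedged sketch of that step is given below; fit_and_score is an assumed stand-in for fitting the chosen model with the candidate regressor and returning an error measure such as in-sample RMSE, and promo_intervention is reused from the earlier sketch:

import itertools

def grid_search_t1_t4(t, y, t2, t3, fit_and_score, max_lead=7, max_tail=7):
    """Try lead-in/decay lengths around the known promo window [t2, t3];
    fit_and_score(t, y, xi) must fit the model with regressor xi and
    return a fit error such as in-sample RMSE."""
    best = None
    for lead, tail in itertools.product(range(max_lead + 1),
                                        range(max_tail + 1)):
        t1, t4 = t2 - lead, t3 + tail
        xi = promo_intervention(t, t1, t2, t3, t4)  # candidate regressor
        err = fit_and_score(t, y, xi)
        if best is None or err < best[0]:
            best = (err, t1, t4)
    return best  # (error, t1, t4) with the smallest error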

5 Practical Application of the Approach

Let us consider an example of using the proposed approach on the sales dynamics of PJSC Gazpromneft. The methodology was developed to evaluate discount promotions for related products and services in stores at automatic gas stations. The result is measured in units of goods sold. These are basic indicators, on the basis of which revenue and other indicators are further calculated that may be of interest to the manager making decisions both on the timing of the promotion and on the amount of the discount.


In the practical construction of the model, it was assumed that each intervention value only affects the current value of the base time series. An example of evaluating the effectiveness of ongoing promotions is shown in the figures below; the data are depersonalized (Figs. 6 and 7).

[Two panels, (a) and (b): conventional currency plotted by year.]

Fig. 6. (a) Highlighting the systematic component of the time series (trend). (b) The allocation of the systematic components of the time series (trend + seasonality).

In practice, the described method has helped to achieve high accuracy in describing time series and evaluating the effectiveness of ongoing promotions.


[Bar chart, in %: promo actions 1–13, classified as high, medium or low impact.]

Fig. 7. Results of evaluating the effectiveness of a promotion as a percentage of total sales.

6 Conclusion

Many companies use promotions to increase the demand for or popularity of the products they sell. Structural analysis of historical data can help in determining the effect of an ongoing campaign and provide a basis for predicting the effect of a promotion in the future. The decomposition of historical data into two parts (the systematic component and the intervention effect) is one approach that may be useful for analysts; using the Prophet time series forecasting package avoids specification errors, makes it possible to take several seasonal components into account and keeps the model setup process simple and straightforward for analysts.

Acknowledgements. This article was prepared under financial support of the Russian Science Foundation (Grant No. 18-18-00099).


Analysis of Technological Innovations in Supply Chain Management and Their Application in Modern Companies

Alissa Dubgorn1, Irina Zaychenko1, Aleksandr Alekseev1, Klara Paardenkooper2, and Manfred Esser3

1 Peter the Great St. Petersburg Polytechnic University, 29 Polytehnicheskaya st., 195251 St. Petersburg, Russian Federation
[email protected]
2 Erasmus University Rotterdam, 's-Gravendijkwal 230, 3015 CE Rotterdam, The Netherlands
3 GET Information Technology, Rudolf-Diesel-Strasse 14, 41516 Grevenbroich, Germany

Abstract. This paper discusses the main trends in improving supply chain management in enterprises with the help of modern technological innovations. An analysis of existing supply chain management trends in various enterprises is given. Such innovations as the Internet of Things, blockchain, artificial intelligence and cloud computing are considered as drivers of supply chain management development. The digital technologies are studied from the point of view of relevance, and, based on industry research, a set of technological innovations with possible application within 5 years is proposed. The applicability of technological innovations for enterprises is studied on the basis of the maturity of their IT architecture. The paper focuses on supply chain management processes within today's organizations of the Russian Federation. Chances of and barriers to improving existing supply chains in enterprises of the Russian Federation are highlighted.

Keywords: Digital transformation · Technological innovations · Supply chain management · Logistics

1 Introduction

Supply chain management and logistics play an important role both in the development of a country as a whole and in the development of any particular enterprise. At the level of an entire country (or region), the effectiveness of logistics and supply chain management is formed by the quality of the transport infrastructure and the effectiveness of the customs and logistics business, and is expressed in the competitiveness of the country (region) and, accordingly, of all companies in it. Russia is a leader in logistics costs, which are too high in comparison to other countries: according to economists, logistics and related areas account for up to 20% of GDP in the Russian Federation, while the corresponding indicator in China is 15% and in Europe 7–8%. Reducing transport and logistics costs to the world average level (which Armstrong & Associates Inc. estimates at 11.6%) would release about $180 billion in cash, according to a joint report by The Boston Consulting Group (BCG) and the Logistics Committee of the Russian CCI [1].

One of the main tools for reducing logistics costs is technological innovation designed to optimize business processes throughout the supply chain. The goal of this paper is to find possibilities for implementing existing digital technologies in SCM for its effective functioning.

The founder of innovation theory, J. Schumpeter [2], classified technological changes according to the following principle: the use of new types of raw materials; the introduction of a product with qualitatively improved properties; new ways of organizing production and production support; new markets; and the application of new technological processes or equipment. Schumpeter claimed that innovation is the use of improved, new technological, technical or organizational means in production, distribution and procurement. An innovation remains a novelty or an idea until it is introduced into the enterprise's field of activity and brings some benefit. Consequently [3], innovation is characterized by applicability and feasibility in the production of goods with economic benefit, and by novelty from a scientific and technical point of view.

In 1988, Sanjiv Sidhu and Ken Sharma [4], the founders of i2, introduced a new concept in enterprise logistics management: supply chain management (SCM). An SCM system is designed to manage and automate business processes at the enterprise across the entire production and sales cycle. Using corporate information systems, SCM [5] makes it possible to control product distribution at the enterprise, reduce inventory, optimize the use of resources in the entire chain, and increase customer loyalty through a high level of service in deliveries. The American supply chain management scientists Lambert and Stock [6] define SCM as the integration of eight key business processes: customer service; coordination of consumer relationships; demand coordination; coordination of order execution; ensuring production processes; supply control and coordination; product development control and coordination; and coordination and management of returns.

In the paper “Digitalization - a global trend in supply chain development” [7] and other works [8, 9], the problem of using technological innovations for key business processes of the supply chain in modern companies is raised. With the help of innovative products, it is possible to build an optimal production plan, form effective customer relationships, and optimize all stages of the product life cycle. Due to the introduction of SCM, companies significantly reduce the time it takes to process orders (20–40% faster), reduce purchase costs by 5–15%, and increase profit by 5–20% across all parameters combined. The customer has become the center of the decision-making process on purchase management.

2 Methodology

For the successful use of the opportunities provided by technological innovations, it is necessary to understand their areas of use in logistics. Considering current trends in the use of technological innovation in logistics, the following 14 innovations are used in modern supply chain management:

1. Big data.
2. Artificial intelligence.
3. Blockchain.
4. Robotics.
5. Cloud computing.
6. The Internet of Things.
7. Unmanned cars.
8. Unmanned aerial vehicles.
9. 3D printing.
10. Digital identification.
11. Sensing technologies.
12. Augmented reality.
13. Self-learning systems.
14. Bionic performance.

The data are based on research by DHL [10], a leader in the logistics services market thanks to its implementation of the above digital transformation tools, and on the analysis of technological innovation trends presented in the “Logistics Trend Radar 2018/19” [11]. DHL builds its results on its own experience and on interviews with scientists, researchers and businesspeople using these technologies. The declared technological innovations therefore serve as a benchmark for global logistics market strategies.

An important factor in determining the suitability of the technologies is their time to relevance. This paper considers innovations whose time to relevance is less than 5 years, since this implies widespread use of the technology in the logistics market. A horizon of more than 5 years means the technology is unsuitable in the near future, and such technologies are not analyzed in detail; they include self-learning systems, bionic performance, unmanned aerial vehicles, unmanned vehicles and 3D printing. Among the main barriers to using these technologies are technical imperfection (3D printing, unmanned aerial vehicles), the regulatory framework (bionic technology), ethical issues (bionic technology and self-learning systems) and cost (all technologies). Technical imperfections apply to almost all of the above items, and the regulatory framework in this area is far from perfect.

The following innovations are selected for further analysis: cloud technologies, the Internet of Things, big data, low-cost sensor technologies, augmented reality, and robotics. These innovations make it possible to improve logistics processes through the business solutions they provide, which are presented in Table 1.


Table 1. Technological innovations with related business solutions.

Internet of Things: automation of warehouse logistics (accounting, planning, current status online); providing consumers with services such as “smart home”; collection of information on movements and downtime of rolling stock; automatic maintenance planning.

Low-cost sensor technologies: determination of cargo dimensions; use by consumers (simplification when taking cargo parameters); tracking employee health through smartwatch technology.

Robotics (automation): packing, assembly and sorting of goods (Rethink Robotics); loading trailers (robotic arms with pickup and hold sensors); calculation of simplified unloading of goods from a truck.

Cloud technologies: accumulation of orders, billing; collection of information on an integrated site; reduction of transaction costs.

Big data: real-time transport route optimization; demand forecasting (reducing labor costs and material resources); pre-delivery of goods before the order; risk prediction.

Augmented reality: simplification of warehouse operations (collection, packaging, reading barcodes, navigation using smart glasses); safe driving (windshield display, trip support); virtual visualization of the insides of a machine (product category).

This table lists the technological innovations that, according to DHL experts [10], will prevail in the near future. The use of low-cost sensor technologies in logistics and supply chain management will make it possible to replace expensive scanner systems with mobile devices, tablets and other equipment carrying budget sensors. New business solutions in the form of 3D printers will make it possible to provide services that are new in nature, such as pricing based on scanned dimensions (a short sketch of such pricing follows this paragraph). The use of low-cost sensor technologies in the form of mobile devices and tablets also opens up remote access to transport and ERP systems, creating new opportunities.

The presentation below links each technological innovation, with its existing business solutions, to the logistics functions it can affect; the list of technologies and related logistics operations is presented in Table 2.
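To make the scanned-dimension pricing idea concrete, the sketch below computes a chargeable weight from scanned parcel dimensions; the 167 kg/m³ volumetric divisor is a common air-freight convention used here purely as an illustrative assumption, not a figure from the paper.

```python
def chargeable_weight_kg(length_m: float, width_m: float, height_m: float,
                         actual_kg: float, kg_per_m3: float = 167.0) -> float:
    """Billable weight: the larger of the actual and the volumetric weight."""
    volumetric_kg = length_m * width_m * height_m * kg_per_m3
    return max(actual_kg, volumetric_kg)

# A bulky but light parcel scanned by a budget sensor is billed by volume:
print(chargeable_weight_kg(1.2, 0.8, 0.8, actual_kg=40.0))  # -> 128.256
```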

Table 2. Innovative technologies and related logistics operations.

Internet of Things: transportation process; customer service; inventory management; cargo handling.
Big data: transportation process; stock management; procurement management.
Low-cost sensor technologies: transportation process; cargo handling; predicted demand pricing.
Cloud technologies: customer service; information support; inventory management; procurement management.
Robotics (automation): cargo handling; physical distribution of goods.
Augmented reality: transportation process; inventory management; physical distribution of goods.

Brief descriptions of the technological innovations listed in Tables 1 and 2 follow.

1. The Internet of Things (IoT) [12, 13] is a technological innovation that connects devices to a computer network and allows them to collect, analyze, process and transmit data to other objects using software, applications or technical devices. IoT systems consist of two components: smart devices and the cloud storage to which they are connected via cellular, satellite, wireless (Wi-Fi) or Bluetooth links. In logistics, IoT can help significantly reduce costs and improve the quality and speed of service. For example, in some companies employees receive instructions and tasks through special wearable devices (trackers, smart watches), with the help of which the work is tracked and the result obtained is analyzed. IoT in transport relies on the vehicle carrying a navigation and security system and surveillance cameras. Data are transmitted to the cloud platform, and any interested party can access them for analysis and control.

2. Augmented reality devices [10] record the state of the environment and provide additional virtual information in real time. An example is the use of virtual glasses in a DHL warehouse: the glasses register real objects (barcodes), and information on the location of these items in the warehouse appears on the lenses. Their use promises a significant reduction in the time needed to find the necessary item and fewer errors compared to traditional methods. Besides efficient use in a warehouse, augmented reality devices can be useful when picking goods.


Pre-developed software can output calculation results to a portable device on request. The system holds data on the vehicle, the volumetric characteristics of the packages, the types of cargo and the characteristics of each specific transportation. The software integrates with the database of corporate information systems, but this does not rule out manual data entry. The result may be, for example, a volumetric model of the load placement. The algorithm for using augmented reality devices is as follows: 1) loading the cargo transportation data that significantly affect loading, unloading and transportation; 2) determining the route of movement; 3) creating a 3D model and a loading plan and transferring these data to the specialist responsible for transportation; 4) projecting reports onto a portable device; 5) performing the operations. Using virtual glasses, the specialist receives information through a portable device. The main condition is the presence of a camera, which must be pointed at a physical object (in this case, a truck), after which a picture appears with the cargo in its place. The report shown on the display also carries all the information about the load. It is worth noting that, due to the high cost of the innovation and the difficulty of implementation in existing projects, the technology is currently preferred only by large companies.

3. Robots are currently widely used in the manufacturing industry; in logistics, however, they are used to a limited extent [14, 15]. A more suitable term here is automation, which relates to warehouses (the use of mechanized means for the assembly, transportation and picking of goods).

4. Sensor technology [16] has found widespread use in container transport, where, among other things, temperature, humidity and vibration levels in containers are transmitted in real time by sensors. It is used to control the operating conditions of equipment, to adjust them, and to stop equipment automatically at critical moments. For the development of supply chains it plays a small role, since low-cost sensor technologies by themselves neither accelerate nor reduce the cost of logistics operations: they lack analytics and the ability to retrieve the necessary information from the database; information can only be loaded into it.

5. Cloud logistics. Its essence lies in the search for counterparties, in communication between suppliers or customers and carriers, in monitoring the delivery process and in evaluating the effectiveness of counterparties. The necessary condition for use is access to an information platform where the interested parties interact [16, 17]. The algorithm of cloud logistics is simple (a sketch follows the list):

1) the company places a request for shipping;
2) carriers find the request and leave a “response”;
3) the company selects a carrier from the alternatives;
4) the company and the carrier track the route and movement of goods online;
5) upon completion of delivery, all participants receive notifications;
6) the transaction is promptly closed with the documents necessary for accounting.
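A minimal sketch of steps 1–3 of this workflow follows; the entities and statuses are invented for illustration, and a real platform would add tracking, notifications, payments and document exchange (steps 4–6).

```python
from dataclasses import dataclass, field

@dataclass
class ShippingRequest:
    cargo: str
    origin: str
    destination: str
    responses: list = field(default_factory=list)  # carriers' offers as (name, price)
    status: str = "OPEN"

def respond(request: ShippingRequest, carrier: str, price: float) -> None:
    """Step 2: a carrier finds the request and leaves a response."""
    request.responses.append((carrier, price))

def select_carrier(request: ShippingRequest) -> str:
    """Step 3: the company selects a carrier from the alternatives."""
    carrier, _ = min(request.responses, key=lambda offer: offer[1])
    request.status = f"ASSIGNED:{carrier}"
    return carrier

req = ShippingRequest("pallets", "St. Petersburg", "Rotterdam")  # step 1
respond(req, "CarrierA", 1800.0)
respond(req, "CarrierB", 1650.0)
print(select_carrier(req))  # -> CarrierB
```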

6. Big data technologies [19, 20]. Managing supply chains with the subsequent introduction of technological innovation requires a database. A database can be created using simple sources, such as sensor technology. Another method is localization technology, which includes GPS and GLONASS. Subsequent data analytics will streamline the logistics stages in procurement and in the transportation of finished products. In working with customers, it becomes possible to track the order, which increases the transparency of delivery to customers [21]. Database analytics makes it possible, on the basis of statistical data, to identify customer behavioral characteristics, major market trends, and peak sales. A decrease in working capital due to the reduction of stocks is one more advantage of a database: if the volume of demand can be predicted, supply volumes can be adjusted for particular days, weeks or months, and the distribution of the budget becomes more rational and economically feasible.

The analysis above shows that the most functional technology is cloud storage. However, over time the functionality of the above technological innovations may change: gradual improvement of the technical side of a technological innovation entails an expansion of its scope. Based on Table 2, we form a summary Table 3 of the logistics operations targeted by the majority of technological innovations. Cargo handling, inventory management and the transportation process received the largest number of related technologies. These results indicate that these functions are the most challenging in the logistics sphere, since most business solutions of technological innovations are directed at them.

Table 3. A summary table of logistics operations and innovations.

Transportation process: Internet of Things; big data; low-cost sensor technology; augmented reality.
Pricing: low-cost sensor technology.
Cargo handling: low-cost sensor technology; robots, automation; Internet of Things.
Physical distribution of goods: robots, automation; augmented reality.
Inventory management: big data; cloud logistics; Internet of Things; augmented reality.
Procurement management: big data; cloud logistics.
Customer service: cloud logistics; Internet of Things.


Cargo handling is a set of logistics processes in a warehouse. The main cargo handling operations are verification of documents and unloading, acceptance of goods, preparation of receipt documents, placement of goods, and the set of actions needed to ship goods to the client. The application of technological innovations in this area helps achieve several goals, including improved operational actions (structured data), better staff working conditions (safety, ergonomics, mechanization of dangerous actions), and quick response to changes and high processing speed.

The next step in studying the implementation of digital transformation tools for effective SCM is the selection of those technologies whose effect can be determined in the short term (up to 5 years). Understanding the functioning of each technology is integral to correctly identifying the capabilities and prospects of introducing technological innovations into SCM at each individual enterprise. The analysis of business solutions revealed the main directions of application of the innovations, and relating them to the logistic functions helped identify the most likely areas of technology application, where they will be most effective.

3 Results

Each organization has its own infrastructure: a complex of interconnected service structures or objects that make up and provide the basis for the functioning of the system. In a competitive market environment, a company always needs to improve, update and optimize its infrastructure in order to gain the benefits of rapidly developing, dynamic systems. There is a systematic approach to assessing the maturity of IT infrastructure for the subsequent selection of an infrastructure improvement project. This approach distinguishes four levels of IT infrastructure maturity:

1. Basic level: scattered, manually managed infrastructure.
2. Standardized level: managed, partially automated IT infrastructure.
3. Rationalized level: managed, consolidated IT infrastructure with maximum process automation.
4. Dynamic level: full automation of management, dynamic use of resources, service level agreements tied to business requirements.

Based on a company's level of IT maturity, decisions can be made on the implementation of particular technology products. Each level has its own list of technological innovations that can be implemented in the management of the company's supply chain (an illustrative sketch is given below). For the basic level, for example, almost all technological products will be of limited use, and the introduction of any innovations in individual SCM sectors will not bring sufficient results.

In the future, open source software solutions will be preferable to local solutions, as this is the only way to avoid dependence on individual IT service providers. It will be necessary to provide access from a large number of devices, as this is the only way to ensure the rapid adoption of new means of collecting data, such as portable scanners or smart glasses. One of the possibilities here is to move customer information to a web user interface: the data become available on many devices, and the supplier can make them available to all users simultaneously and in a timely manner.
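The idea of tying candidate innovations to infrastructure maturity can be expressed as a simple lookup; the assignments below are purely illustrative assumptions, not a mapping prescribed by the study.

```python
# Hypothetical mapping of IT maturity level to innovations worth piloting.
CANDIDATES_BY_MATURITY = {
    1: [],                                            # basic: consolidate infrastructure first
    2: ["cloud technologies", "low-cost sensor technologies"],
    3: ["internet of things", "big data analytics"],
    4: ["augmented reality", "robotics/automation"],
}

def recommend(maturity_level: int) -> list:
    """Return every innovation feasible at or below the given maturity level."""
    return [tech
            for level, techs in sorted(CANDIDATES_BY_MATURITY.items())
            if level <= maturity_level
            for tech in techs]

print(recommend(3))  # cloud, sensors, IoT, big data analytics
```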


By expanding the web interface with software interfaces, applications can be created for employees and customers. Ultimately, the creation of interfaces and applications will lead to the transfer of data services for customers and business partners to the Internet environment.

In the context of competition in the digital environment, the degree of business success depends on the development and active implementation of technologies for data collection, data analysis and information exchange. When upgrading supply chain management systems, experts recommend focusing on data analysis technology, as it allows a significant increase in the adaptability of business processes to dynamically changing environmental conditions. With a more developed resource base, large companies are introducing support system technologies, autonomous systems and IT services, which are costly and constitute second-tier technologies.

Given the abundance of modern technological innovations in logistics and supply chain management and their numerous advantages, it is necessary to understand the mechanisms and principles of functioning of each technological innovation, its capabilities, scope and barriers to use. Many of the technological innovations discussed in the logistics and supply chain management industry are not yet relevant to the global logistics industry and not mature enough to apply and implement. Many of today's technological innovations in logistics and supply chain management face such restrictive barriers to use as:

1. Legislative restrictions and the lack of a regulatory framework governing the application and use of technological innovations.
2. The high cost of introducing and maintaining technological innovation.
3. Ethical problems associated with public concerns about safety and use and, as in the case of robotics, about maintaining the number of jobs and the level of employment.
4. Technical imperfections that prevent a technological innovation from being used to the full extent of its potential.

For technological innovations to be fully used in logistics and supply chain management, the above barriers must be eliminated or reduced, which takes time, since it is a matter of amending legislation, advancing scientific and technological progress and changing public attitudes. Therefore, it is important to understand that, of the many technological innovations currently known, only a certain part is ready for use and implementation.

4 Conclusion

The purpose of this paper was to analyze the possibilities of applying technological innovations in logistics and supply chain management. It should be stressed once again that logistics is one of the key industries most influenced by technological innovations; consequently, it is extremely important to understand the mechanisms of functioning of technological innovations, the business solutions they provide, their possibilities, and the barriers to their application. An individual approach to the selection of technological innovations should also be applied in order to make the best use of the new capabilities of the logistics industry and supply chain management. Based on all the prospects considered, it becomes obvious that the existence of some modern trends is insecure: innovations are not static, startups may fail, and inconspicuous projects can become breakthroughs under current conditions. Each company must decide for itself how it will develop, and build its forecasts for the future on its own experience.

Acknowledgement. The reported study was funded by RSCF according to the research project № 19-18-00452.

References

1. Logistics in Russia: new ways of capacity building. https://image-src.bcg.com/Images/Logistics-in-Russia_tcm27-166353.pdf
2. J. Schumpeter's theory of innovation and subsequent theories. https://habr.com/ru/post/57528
3. Rudskaia, I., Rodionov, D.: The concept of total innovation management as a mechanism to enhance the competitiveness of the national innovation system. In: ACM International Conference Proceeding Series, pp. 246–251 (2018)
4. Supply Chain Management – SCM. http://www.tadviser.ru/index.php
5. Ghosh, D.: Big data in logistics and supply chain management - a rethinking step. In: International Symposium on Advanced Computing and Communication (ISACC), pp. 168–173 (2015)
6. Stock, J., Lambert, D.: Strategic Logistics Management, 4th edn. McGraw-Hill/Irwin, New York (2000)
7. Digitalization - a global trend in supply chain development. http://www.inprojects.ru/cifrovizaciya-kak-globalnyj-trend
8. Innovation in action. https://www.logistics.dhl.ru/ru-ru/home/insights-and-innovation/innovation/innovation-in-action.html
9. Silva, V., Rezende, R.: Additive manufacturing and its future impact in logistics. IFAC Proc. 46(24), 277–282 (2013)
10. Logistics Trend Radar 2018/19. https://www.logistics.dhl/cn-en/home/insights-and-innovation/insights/logistics-trend-radar.html
11. DHL uses augmented reality in warehouse operations. In: AR/VR/MR Conference. https://ar-conf.ru/ru/news/dhl-ispolzuet-dopolnennuyu-realnost-v-rabote-skladov
12. The Internet of Things in Logistics: DHL and Cisco Joint Report 2015. http://json.tv/tech_trend_find/internet-veschey-v-logistike-sovmestnyy-otchet-dhl-i-cisco-20160511113055
13. Xu, R., Yang, L., Yang, S.-H.: Architecture design of internet of things in logistics management for emergency response. In: IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, pp. 395–402 (2013)
14. Trifonov, R., Seryshev, P.: Transformation of supply chain management in the fourth industrial revolution. Strategic Decisions and Risk Management, no. 3(108). https://cyberleninka.ru/article/n/transformatsiya-upravleniya-tsepyami-postavok-v-usloviyah-chetvertoy-promyshlennoy-revolyutsii/viewer
15. Mikušová, N., Čujan, Z., Tomková, E.: Robotization of logistics processes. In: MATEC Web of Conferences, vol. 134, p. 00038 (2017)
16. Six technologies that will change logistics by 2030. https://psm7.com/news/6-texnologij-kotorye-izmenyat-logistiku-k-2030-godu-dhl.html
17. Zhang, Y., Liu, S., Liu, Y., Li, R.: Smart box-enabled product–service system for cloud logistics. Int. J. Prod. Res. 54(22), 6693–6706 (2016)
18. Ilin, I.V., Iliashenko, O.Yu., Borremans, A.D.: Analysis of cloud-based tools adaptation possibility within the software development projects. In: Proceedings of the 30th International Business Information Management Association Conference, IBIMA 2017 Vision 2020: Sustainable Economic Development, Innovation Management, and Global Growth, January 2017, pp. 2729–2739 (2017)
19. Ilin, I.V., Iliashenko, O.Y., Klimin, A.I., Makov, K.M.: Big data processing in Russian transport industry. In: Proceedings of the 31st International Business Information Management Association Conference, IBIMA 2018: Innovation Management and Education Excellence through Vision 2020, pp. 1967–1971 (2018)
20. Shang, Y., Dunson, D., Song, J.-S.: Exploiting big data in logistics risk assessment via Bayesian nonparametrics. Oper. Res. 65(6), 1574–1588 (2017)
21. Bulatova, N., Dugina, E., Dorzhieva, E., Siniavina, M.: Technology for determining strategic directions for the development of a regional transport and logistics system under digitalization. In: ACM International Conference Proceeding Series, no. 3373353 (2019). https://doi.org/10.1145/3372177.3373353

Digital Platforms for the Logistics Sector of the Russian Federation

Igor Ilin1, Svetlana Maydanova2, Aleksandr Lepekhin1, Carlos Jahn3, Jürgen Weigell3, and Vadim Korablev1

1 Peter the Great Saint-Petersburg Polytechnic University, St. Petersburg, Russia
2 Unifeeder A/S, St. Petersburg, Russia
[email protected]
3 Institute of Maritime Logistics, Hamburg University of Technology, Hamburg, Germany

Abstract. This article explores digitalization initiatives for the transportation industry in the Russian Federation. The purpose of this study is to consider in detail an innovative digital platform development project aimed at creating a unified multimodal transport and logistics environment in the Russian Federation, in the context of a platform approach to digitalization. The article analyzes trends in digitalization as a platform approach and the advantages and disadvantages of applying this approach in practice. Several examples from other countries' experience in creating digital platforms are studied, with special attention paid to digitalization projects in the field of transport and logistics. The digital platforms of the transport sector of the Russian Federation (RF) are categorized on the basis of the Enterprise Architecture approach. The prerequisites for creating an integrated platform designed for the digital transformation of the transport sector of the Russian Federation are identified, and the requirements for its architecture and functionality are formulated. In addition, the expected positive effects of and possible threats to the implementation of such a project are identified, and some recommendations are made that will minimize the number of possible problematic situations during the operation of a single digital platform of the transport sector.

Keywords: Digital platform · Capability driven approach · Integrated digital environment · Transport sector of the RF

1 Introduction

Currently, digitalization of various spheres and industries is a widespread trend, and the transport industry is no exception. Moreover, in the Russian Federation the transport sector, considered one of the main drivers of the development of the national economy, is given special attention, including in digitalization, as confirmed by the launch of a large-scale project to create an integrated digital platform for the transport sector.

Since such an initiative is new for the Russian Federation, it may become the basis for similar projects in other industries in the future; it therefore seems advisable to study this initiative in more detail, especially in the context of a platform approach to digitalization. To achieve this goal, several tasks must be performed: determine what is meant by the terms “digital platform” and “platform approach”, consider examples of operating platforms, study the main aspects related to the development of the integrated digital platform of the transport sector, and analyze the expected effects and threats.

The aim of this study is to review examples of successful digital platform implementation in the transport industry of other countries, to classify the existing digital platforms in the Russian transport sector, and to identify the requirements on modern digital platforms that can secure the capability of the business ecosystem.

2 Materials and Methods

Recently, the platform approach has become a steady global trend in the field of digitalization. There are many different definitions of digital platforms, but most authors define a digital platform as a system of formal and informal rules and algorithms for network user interaction, based on various architectural standards of software and hardware, which are used for storing, analyzing and transmitting data about the participants of the interaction [1]. Platform thinking was predetermined by research into markets with bilateral and multilateral network effects [2]. This type of thinking occupies a leading position not only in telecommunications and other high-tech industries, but also in consumer markets such as taxis, the purchase of goods and products, car sharing and rental real estate, and other industries categorized as elements of a joint consumption economy. The changes caused by such a revolutionary business model affect the principles of the economy [3].

The main advantage of the platform is the economy of scale in exchange between user groups, covering both consumers and manufacturers [3]. Among other things, the operation of the platform provides control and establishes a process for evaluating results, and platform technologies can also be used to resolve disputes between suppliers and customers [4]. Platforms also create transparent systems for monetizing services that do not cause additional difficulties for users. The functional component of the platform is implemented through the development of a complex architecture of digital solutions, which involves the reorganization of established organizational and regulatory orders. The most successful platforms are those that have simplified the basic procedures of exchange and interaction for participants as much as possible while significantly reducing their costs, exploiting a positive network effect and simultaneously growing the numbers of suppliers and consumers of goods and services, in which both sides can change roles [1]. It is important to note that the more users on the platform, the stronger the positive network effect, which means lower costs for all participants in the interaction [3].

The scale of use of digital platforms can be very diverse, from the micro level to the global level: within individual companies, within the value chain, and even across entire industries, which makes it possible to create separate industry ecosystems [5]. External platforms are more capable of competing in such conditions, owing to the possibility of using the network effect as well as a greater predisposition to innovation [6–9].

To classify different types of digital platforms, this study uses Enterprise Architecture (EA), a concept of enterprise management; The Open Group Architecture Framework (TOGAF) methodology; and the Capability Driven Approach (CDA), a modern approach to information systems development [10–19]. In the perception of The Open Group, a business capability is a special ability or power that a business can possess or exchange in order to achieve a specific goal or result. A further, detailed definition of a capability requires an understanding of how it can be achieved by combining such supporting components as roles, processes, information, and tools [19]. The current study classifies the existing platforms of the transport sector of the RF, as well as projects under review, on the basis of business capability and its components. Such a classification helps to identify the necessary requirements on modern digital platforms for the transport industry of the RF.

3 Results

Currently, digital technologies are increasingly penetrating various spheres of business, and of course the transport industry, as one of the key sectors of the economy, is also undergoing digitalization. Today the transport sector needs advanced digital technologies to maintain competitiveness in the world market and to meet the growing needs for the transportation of passengers and goods, whose volume increases every year. In addition, consumers expect transportation services to be available while meeting all quality and safety requirements.

As regards maritime logistics, in the future local and regional platforms will be integrated completely vertically by big companies into global service packages, and thus a complete full-scale logistics chain from the manufacturer to the end customer (B2C) will emerge [20]. Innovations like this will incorporate the knowledge of the participating companies through fully transparent data provision. This will facilitate real-time service-level agreements through the possibility of instant quoting and the technical innovation provided by blockchain technology, so that contracts can immediately be agreed on and settled [20].

In the transport industry, digital platforms are already very common. Here are some examples. DHL Freight introduced the CILLOX digital freight platform, which is positioned as a virtual market for enterprises that use transport services and is designed to optimize the loading of rolling stock in three main modes: full truck load (FTL), part truck load (PTL) and less than truck load (LTL), as well as to search for a supplier of transport and logistics services matching the needs of the cargo [21]. In the Netherlands, the Saloodo digital freight platform has been launched, combining shippers into a single digital freight market both for domestic transport within the Netherlands and for international cargo delivery between the Netherlands and other European Union countries. This platform currently unites more than 10,000 shippers, over 6,000 forwarders and about 250,000 units of rolling stock [22].


In Germany, examples of digital platforms in the logistics sector include Cargonexx, Drive4Schenker, Flexport, FreightHub and Instafreight [23]. Another example of the widespread adoption of digital platforms is XPO Logistics [24, 25]. An equally important example is Dubai World Central (DWC), a globally integrated digital transport and logistics platform that connects the markets of African and European countries as well as the countries of the Far East and Southeast Asia. The operation of this platform extends to a whole integration zone covering the seaport of Jebel Ali, the largest container terminal between Singapore and Rotterdam, as well as Al-Maktoum International Airport. These facilities, and many others, operate within the same platform with access to the road network of the United Arab Emirates and to data on air and sea corridors. The free economic zone in which the DWC operates makes it possible to provide users with affordable transport and logistics services of the highest level, characterized by high speed and extreme efficiency [26].

Thus, the expected positive effects of platform implementation, as well as the experience of other countries in the creation and practical application of digital platforms in transport and logistics activities, confirm the feasibility of applying a platform approach to the digitalization of the transport sector of the Russian Federation. The need to digitalize the transport sector of the Russian Federation has been repeatedly emphasized at the state level [27, 28], and perhaps one of the most important and large-scale initiatives is the creation of the integrated digital platform for the transport sector of the Russian Federation.

Currently, there are several types of digital platforms in the transport industry of the RF. Figure 1 shows, ordered by the business capability they provide, the digital platforms of the transport sector of the RF together with the supporting components: roles, processes, information, and tools.

There are digital platforms that provide functional support of logistic operations, such as Platon and ERA-GLONASS [22]. Both of them support technological processing of information, serve only one business process, and provide B2G information exchange at the local or regional level. These digital platforms use such tools as satellite navigation, vehicle units, GSM/GPRS, GLONASS/GPS, and cloud computing. Platon and ERA-GLONASS are used widely in the Russian Federation, but they cannot support all business needs of the transport industry.

Another type of digital platform enables the submission of standardized information and documents to state bodies; examples are KPS Portal Seaport [29] and the Unified State Information System for Transport Security [22]. These digital platforms support only one business process and provide B2G information exchange at the local, regional or national level. They support the business roles of state control and monitoring, and use such tools as cloud computing, EDI, EDM and electronic signature. KPS Portal Seaport is not able to use EDI exchange, although it is the approved format for information exchange in the worldwide maritime industry. The above-mentioned digital platforms of the Russian transport industry enable only the informational exchange needed by the various state bodies to support their interaction with companies.
Platon; ERA-GLONASS. Capability: functional support of logistic operations. Roles: technological processing of information. Business processes: payment for road usage (Platon); navigation of vehicles (ERA-GLONASS). Information: B2G exchange at the local/regional level. Tools: satellite navigation system, vehicle units, GSM/GPRS, GLONASS/GPS, cloud computing.

KPS Portal Seaport; Unified State Information System for Transport Security. Capability: submission of standardized information and documents. Roles: state control and monitoring. Business processes: submission of preliminary information to customs authorities; transport security support. Information: B2G exchange at the local/regional level (KPS Portal Seaport) or local/regional/national level. Tools: cloud computing, EDM, electronic signature (plus EDI for the Unified State Information System).

Project of the RZD digital platform. Capability: supply chain transparency and agility / submission of information to fulfil regulatory requirements. Roles: business ecosystem. Business processes: e-commerce; cargo monitoring; preliminary information for customs authorities; value co-creation. Information: B2B, B2G exchange at the local/regional level. Tools: cloud computing, EDI, big data, sensors, GPS, electronic signature.

Project of the digital platform of the transport complex (adopted by the Ministry of Transport of the RF). Same capability and roles. Business processes: e-commerce; cargo monitoring; Single Window; transport process modelling. Information: B2B, B2G, G2G exchange at the local/regional/national level. Tools: cloud computing, EDI, EDM, big data, sensors, GPS, IoT, RFID, cyber-physical systems, artificial intelligence.

Project of the EAEU digital platform. Same capability and roles. Business processes: e-commerce; cargo monitoring; Single Window; value co-creation. Information: B2B, B2G, G2G exchange at the local/national/international level. Tools: as for the Ministry of Transport project.

Fig. 1. Digital platforms of the transport sector of the RF in order of provided business capability.

However, a modern digital platform of the transport sector needs to be able to support a business digital ecosystem. All participants of the Russian transport industry, as well as the Russian government, understand the value of such an integrated digital platform, and some steps in this direction have already been made.

First of all, the RZD digital platform project should be mentioned [21, 22]. JSC “Rossiyskiye Zhelesnye Dorogy” is the biggest transportation company in the RF and has the strategic capabilities and resources to create a digital platform that supports the activity of a business ecosystem. RZD is working on the launch of the digital platform “Gruzovye Perevozki”, which supports the carriage of cargo by rail and provides e-commerce and online booking, cargo monitoring, and the submission of preliminary information to customs authorities.


In the future, this project will be extended to support additional services. This digital platform is a significant breakthrough, as it supports not only B2G but also B2B information exchange and provides opportunities for all business ecosystem participants. Another advantage is the future possibility of value co-creation. Cooperation between business ecosystem participants is a key opportunity and requirement of modern society, and the transport industry is no exception. The RZD digital platform is supposed to use such tools and technologies as cloud computing, EDI, big data, sensors, GPS, and electronic signature. The project is now under development, and its key result will be the creation of the Digital Railway: flexible systems of communication with customers based on their specific preferences, and the implementation of an integrated supply chain program.

Another project is the implementation of a digital platform for the transport sector of the RF [30]. In accordance with the Strategy of Development for 2017–2030, the Program of Digital Economy of the RF was adopted by the Russian Government. This program applies a set of strategic measures for the prioritized development of the most important sectors of the Russian economy, including the transport industry. In this regard, the project of a digital platform of the transport sector was approved by the Ministry of Transport of the RF. Digital platforms of the transport sector of the RF shall provide B2B, B2G and G2G information exchange at the local, regional and national levels and sustain the processes of business ecosystem participants such as companies and state bodies. In addition to e-commerce and online booking, cargo monitoring and transport process modelling, the digital platforms should provide the capability for a Single Window mechanism. The Single Window is defined as a facility that allows parties involved in trade and transportation to lodge standardized information and documents at a single entry point to comply with all import, export, and transit-related regulatory requirements [29]. The digital platform of the transport sector of the RF is supposed to use such tools as cloud computing, EDI, EDM, big data, sensors, GPS, IoT, RFID, cyber-physical systems, and artificial intelligence. The platform's digital services may include a stage-by-stage transfer of the existing information systems of the industry participants to uniform industry standards.

The project of the EAEU digital platform is the Eurasian Economic Union initiative for international trade facilitation and the implementation of digital transport corridors [31]. It was proposed by the EAEU Commission to support such business processes as e-commerce, cargo monitoring and value co-creation, as well as the Single Window mechanism. The EAEU digital platform shall provide B2B, B2G and G2G information exchange at the local, national and international levels, using the same toolset: cloud computing, EDI, EDM, big data, sensors, GPS, IoT, RFID, cyber-physical systems, and artificial intelligence. Digital integration of the EAEU countries' infrastructure involves not only the introduction of uniform standards, but also mutual management of infrastructure and the formation of high-grade digital transport corridors.
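A toy sketch of the Single Window mechanism follows; the agency registry and declaration fields are invented for illustration. One standardized submission at a single entry point is routed to every state body whose regulatory scope covers it.

```python
from dataclasses import dataclass

@dataclass
class Declaration:
    shipper: str
    cargo: str
    regime: str  # "import" | "export" | "transit"

# Hypothetical registry: which state bodies handle which regimes.
AGENCIES = {
    "customs": {"import", "export", "transit"},
    "transport_safety": {"import", "export"},
    "border_control": {"transit"},
}

def single_window_submit(declaration: Declaration) -> list:
    """Route one standardized declaration to all interested agencies."""
    return [agency for agency, regimes in AGENCIES.items()
            if declaration.regime in regimes]

print(single_window_submit(Declaration("Unifeeder", "containers", "transit")))
# -> ['customs', 'border_control']
```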
The main purpose of the platform will be to ensure efficient and uninterrupted transmission of information and interaction between the various transport subsystems in the integrated digitalized transport system. The integrated digital platform of the transport sector should be considered a kind of basic platform through which primary data and information are exchanged between the subsystems existing in the single digitalized transport system.


The important point is that all system requirements, specifications and protocols should be the same for all and should be registered and fixed at the state level [32]. In general, the structure of the system should be open, while access to information modules must be limited. Such principles of building large information systems are quite typical for the modern IT industry.

At the same time, within each subject subsystem associated with the corresponding transport subsystem, a unique toolkit of management methods should be formed. This need has two reasons: first, a significant change in the requirements imposed on the efficiency of transport systems in the modern world; second, a noticeable leap in the development of information technologies, which allows management solutions to be brought to a qualitatively new level. However, the toolkit itself and the methodological approaches to its formation will most likely not be identical, which means that unifying them will be quite problematic [33–35].

Among other things, all subsystems will be connected at the physical level by the cargo flows passing between them, which, in turn, will be reflected in the single central platform as information flows, although most of them will be localized within private subsystems. Based on these flows, several aggregated parameters will be formed, and individual system components will exchange them through special system interfaces. Thus, it is possible to formulate a system-wide task: to design the architecture of the integrated digital transport system and to develop a number of standards governing the rules for the internal representation of data in it (a sketch of such an interface is given below).

Since the development of such a digital platform is an extremely large-scale project, it seems advisable to divide it into components and stages of implementation. This will make it possible to launch pilot projects in trial operation, obtaining results of practical significance for the further implementation of the project, as well as checking the viability of some alternative solutions. In addition, it is necessary to develop a special methodological approach on the basis of which all activities related to the creation and commissioning of the platform will be managed. Ultimately, it is planned that the created digital platform of the transport sector will become an integrated multimodal environment of transport and logistics services in the Russian Federation.
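One way to read this requirement, that subsystems expose only aggregated parameters through a fixed interface while detailed data stay local, is sketched below; the message fields and the interface itself are assumptions for illustration, not the platform's published specification.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass(frozen=True)
class AggregatedFlow:
    """Aggregated parameters a subsystem publishes to the central platform;
    detailed cargo records remain local to the subsystem."""
    subsystem: str
    period: str      # e.g. "2020-06"
    tonnage: float   # total cargo moved in the period
    shipments: int   # number of shipments in the period

class TransportSubsystem(ABC):
    @abstractmethod
    def report(self, period: str) -> AggregatedFlow:
        """Uniform interface fixed at the platform (state) level."""

class RailSubsystem(TransportSubsystem):
    def report(self, period: str) -> AggregatedFlow:
        # Aggregation over private, subsystem-local data would happen here.
        return AggregatedFlow("rail", period, tonnage=1.2e6, shipments=840)

print(RailSubsystem().report("2020-06"))
```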

4 Discussion

The creation of the integrated digital platform of the transport sector is associated with the expectation of certain benefits and advantages both for the direct participants in transport processes and for the state as a whole. These advantages are [30, 36]:

– combining all participants of the transport services market in one information environment;
– improving the safety, quality and accessibility of transportation;
– transition to a paperless workflow;
– comprehensive services, quality and reliability;
– speed and reliability of information processing;
– increased transparency of processes and a reduced number of illegal carriers;
– reduction of cargo transshipment costs and of losses due to the non-synchronous operation of different modalities;
– neutrality to business, as well as to the form and objectives of state control;
– reduction of various types of costs and expenses in the supply chain;
– increased transport sustainability;
– ensuring maximum load of the infrastructure;
– reduction of the share of «gray» transportation in the total cargo turnover;
– expanding the country's export and transit capabilities;
– increasing the level of controllability of the transport sector as a whole;
– new opportunities for the growth and development of the country's transport industry.

However, despite the impressive list of expected benefits, the threats and risks associated with introducing such a digital platform should also be assessed. First of all, a potential threat is the leakage or loss of a huge amount of data, including confidential data, as this can lead to functional failures, unfair competition, blackmail or manipulation by some industry participants. In addition, a number of specific problems may arise:

– the threat of cyberattacks that could disable part of the transport infrastructure or disrupt established transport and logistics processes;
– transaction security concerns;
– the lack of the necessary legislative regulation of platform interaction among transport sector participants.

Thus, the digital platform of the transport sector is seen as a very promising project that can give an impetus to the development of the transport industry of the Russian Federation, but it is important to remember that before putting it into operation it is necessary to prepare an appropriate legislative framework, develop standards and protocols for the interaction of industry entities, and ensure the proper level of information security.

5 Conclusion

Currently, the development of the transport sector of the Russian Federation is one of the priority tasks for the state, and the development of this industry is directly associated with its digitalization. The creation of the integrated digital platform of the transport sector is a large-scale initiative whose implementation will bring many positive effects both for participants in transport and logistics activities and for the country’s economy. However, it is important to remember that such platforms have some specific features; therefore, in order to avoid problems and conflict situations during operation, it is necessary to carefully develop the legislative framework that will regulate activities on the platform, as well as to work out issues related to ensuring cyber security.

Acknowledgement. The reported study was funded by RSCF according to the research project № 19-18-00452.

References

1. Styrin, E.M., Dmitrieva, N.E., Sinyatullina, L.H.: Gosudarstvennye tsifrovye platformy: ot kontsepta k realizatsii. Public Adm. Issues 4, 31–60 (2019)
2. Rochet, J.C., Tirole, J.: Platform competition in two-sided markets. J. Eur. Econ. Assoc. 1(4), 990–1029 (2003)
3. Moazed, A., Dzhonson, N.: Platforma. Prakticheskoe Primenenie Revolyutsionnoi Biznes-Modeli. Alpina Publisher, Moscow (2019)
4. Ochneva, Y.S., Poklonsky, A.Y.: Ispolzovanie cifrovyh tehnologij kak instrument povysheniya kachestva transportnyh sistem. Synergy Sci. 31, 1016–1026 (2019)
5. Gawer, A.: Bridging differing perspectives on technological platforms: toward an integrative framework. Res. Policy 43(7), 1239–1249 (2014)
6. Gelishanov, I.Z., Yudina, T.N., Babkin, A.V.: Cifrovye platformy v ekonomike: sushnost, modeli, tendencii razvitiya. Sci. Tech. Statements SPbSPU Econ. 11(6), 22–36 (2018)
7. Korneev, M.V., Leonteva, V.A.: Sozdanie globalnoj ploshadki transportnyh uslug na baze cifrovyh tehnologij. Coll. Sci. Papers DONIZHT 51, 53–58 (2018)
8. Diakonova, M.D.: Retrospektivnoe issledovanie razvitiya cifrovizacii transporta v Rossii. Transp. Bus. Russia 5, 122–124 (2019)
9. Barykin, S., Gazul, S., Kiyaev, V., Kalinina, O., Yadykin, V.: Forming ontologies and dynamically configurable infrastructures at the stage of transition to digital economy based on logistics advances. In: Intelligent Systems and Computing. AISC, vol. 1116, pp. 844–852 (2020)
10. Ilin, I., Levina, A., Abran, A., Iliashenko, O.: Measurement of enterprise architecture (EA) from an IT perspective: research gaps and measurement avenues. In: Proceedings of the 27th International Workshop on Software Measurement and 12th International Conference on Software Process and Product Measurement (IWSM Mensura 2017), pp. 232–243. Association for Computing Machinery, New York (2017)
11. Ilin, I., Levina, A., Iliashenko, O.: Enterprise architecture approach to mining companies engineering. In: International Science Conference SPbWOSCE-2016 “SMART City”, MATEC Web of Conferences, vol. 106, art. 08066 (2017)
12. Ilin, I.V., Iliashenko, O.Y., Borremans, A.D.: Analysis of cloud-based tools adaptation possibility within the software development projects. In: Proceedings of the 30th International Business Information Management Association Conference, IBIMA 2017 Vision 2020: Sustainable Economic Development, Innovation Management, and Global Growth, pp. 2729–2739 (2017)
13. Ilin, I., Levina, A., Lepekhin, A., Kalyazina, S.: Business requirements to the IT architecture: a case of a healthcare organization. In: Advances in Intelligent Systems and Computing, vol. 983, pp. 287–294 (2019)
14. Jonkers, H., Proper, E., Turner, M.: TOGAF™ and ArchiMate®: a future together. White Paper W192 (2009)
15. Josey, A.: TOGAF® Version 9.1 - A Pocket Guide. Van Haren (2016)
16. Josey, A., Lankhorst, M., Band, I., Jonkers, H., Quartel, D.: An introduction to the ArchiMate® 3.0 specification. White Paper from The Open Group (2016)
17. Lankhorst, M.: Enterprise Architecture at Work. Modelling, Communication and Analysis. Springer, Berlin (2017)
18. Levina, A.I., Borremans, A.D., Burmistrov, A.N.: Features of enterprise architecture designing of infrastructure-intensive companies. In: Proceedings of the 31st International Business Information Management Association Conference, IBIMA 2018: Innovation Management and Education Excellence through Vision 2020, pp. 4643–4651 (2018)
19. Sandkuhl, K., Stirna, J.: Capability Management in Digital Enterprises. Springer, Berlin (2018)
20. Saxe, S., Jahn, C., Brümmerstedt, K., Fiedler, R.: Digitalization of Seaports - Visions of the Future. Fraunhofer Verlag, Stuttgart (2017)
21. Marusin, A.V., Ablyazov, T.H.: Perspektivy cifrovoj transformacii logistiki. Bull. Altai Acad. Econ. Law 4-2, 240–244 (2019)
22. Dmitriev, A.V., Plastunyak, I.A.: Integrated digital platforms for development of transport and logistics services. In: International Conference on Digital Technologies in Logistics and Infrastructure (ICDTLI 2019) (2019)
23. Borisova, V.V., Kudryashova, P.A.: Virtualnye logisticheskie operatory: zarubezhnyj opyt i rossijskaya praktika. News St. Petersburg State Univ. Econ. 2(116), 83–89 (2019)
24. Elbert, R., Gleser, M.: Digital forwarders. In: Bierwirth, C., Kirschstein, T., Sackmann, D. (eds.) Logistics Management. Lecture Notes in Logistics. Springer, Cham (2019)
25. Kabanov, A.S., Azarov, V.N., Mayboroda, V.P.: An analysis of the use and difficulties in introducing information technology and information systems in transport and the transport infrastructure. In: Proceedings of the 2019 IEEE International Conference “Quality Management, Transport and Information Security, Information Technologies” (IT&QM&IS) (2019)
26. Dunaev, O.N., Nesterova, D.V.: Morskie porty v eksportnoj cepi postavok rossijskih kompanij. Transp. Russ. Federation 2(69), 17–21 (2017)
27. The annual Presidential Address to the Federal Assembly. http://www.kremlin.ru/events/president/news/59863. Accessed 02 Apr 2020
28. The Decree of the President of the Russian Federation of 07.05.2018 No. 204 “On national goals and strategic objectives of the development of the Russian Federation for the period up to 2024”
29. Maydanova, S., Ilin, I.: Problems of the preliminary customs informing system and the introduction of the Single Window at the sea check points of the Russian Federation. In: Siberian Transport Forum - TransSiberia 2018, MATEC Web of Conferences, vol. 239, art. 04004 (2018)
30. Zubakov, G.V., Protsenko, O.D.: Tsifrovaya platforma transportnogo kompleksa Rossiyskoy Federatsii. Nekotorye aspekty realizatsii. Kreativnaya Ekon. 13(3), 407–420 (2019)
31. Dyatlov, S.A.: Tsifrovaya transformatsiya ekonomik stran EAEU: prioritety i instituty razvitiya. Evraziyskaya Econ. Perspect.: Probl. Resheniya 6, 18–21 (2018)
32. Marusin, A.V., Ablyazov, T.H.: Osobennosti cifrovoj transformacii transportno-logisticheskoj sfery. Econ.: Yesterday Today Tomorrow 9(3-1), 71–78 (2019)
33. Kuznecov, A.L., Kirichenko, A.V., Sherbakova-Slyusarenko, V.N.: Zadachi cifrovizacii transportnoj sistemy Rossii. Transp. Russ. Federation 5(78), 27–31 (2018)
34. Rubcova, M.V.: Sozdanie edinoj cifrovoj platformy transportnogo kompleksa kak odin iz sposobov obespecheniya effektivnosti, bezopasnosti i nadezhnosti transportnyh uslug v Rossii i stranah EAEU. In: Collection of Scientific Articles of Participants of the 2nd International Scientific and Practical Conference, Moscow (2019)
35. Sinitsyna, A.S.: Cifrovaya transformaciya transportnogo kompleksa. In: Materials of the XIV International Scientific and Practical Conference, Krasnoyarsk (2019)
36. Fedotova, S.N.: Cifrovizaciya transportno-logisticheskih uslug. Econ. Bus.: Theory Pract. 11-5(57), 124–127 (2019)

Digital Logistics Transformation: Implementing the Internet of Things (IoT)

Irina Zaychenko1, Anna Smirnova1(&), Yevheniia Shytova1, Botagoz Mutalieva2, and Nikita Pimenov3

1 Institute of Industrial Economics, Management and Trade, Peter the Great St. Petersburg Polytechnic University, 195251 St. Petersburg, Russian Federation
[email protected], [email protected]
2 M. Auezov South Kazakhstan State University, Shymkent, Kazakhstan
3 LAB University, Lappeenranta, Finland

Abstract. In the era of the “Industry 4.0” concept and the development of 5PL logistics, the digitalization of logistics and the introduction of modern information and computer technologies become extremely relevant. This paper discusses technological trends in the development of logistics, in particular the introduction of the Internet of Things into logistics processes. Multiple studies show that the Internet of Things (IoT) is the most promising area for the digitalization of industry, logistics, retail, etc. The article examines the prerequisites for the wide spread of IoT, the barriers to the introduction of the technology and the current issues with the Internet of Things. In addition, the economic effects of IoT implementation and the practical application of the technology in logistics are described. Based on the information studied, the authors developed a matrix of scenarios for improving existing IoT systems using internal resources and attracting external experts. This matrix takes into account the qualifications of the company’s specialists and the complexity of the existing IoT system, which makes it possible to determine the need for external experts.

Keywords: Internet of Things · Digitalization in logistics · Digital transformation in logistics · Industry 4.0 · 5PL model · IoT system · Matrix of scenarios for improving the IoT system

1 Introduction

Nowadays, digitalization processes affect all important areas of human economic activity, and logistics is not an exception. With the development of trade and economic relations between countries and the growth in the production of goods and the supply of raw materials, logistics is reaching a new level, which requires new approaches to its implementation. The aim of this work is to study trends in the digital transformation of logistics and to develop a matrix of scenarios for improving IoT systems using internal resources and attracting external experts.


Topicality. Companies from various industries require logistics services to ensure the movement of their material flows. The digital transformation of logistics makes it possible to optimize business processes, quickly identify existing problems and make economically justified management decisions based on relevant data. Digitalization of supply chains is a priority for companies in sectors such as manufacturing, consumer goods and retail, helping them save money, increase revenues and support new business models and customer focus [1].

2 Existing Literature

The features of digital transformation in the transport and logistics industry are studied by domestic and foreign representatives of science and business. In particular, Yu and Novitskaya studied the modeling of logistics flow processes [2]; Dmitriev considered digital technologies for cargo tracking [3]; other publications study global trends in logistics and the best practices of Russian and foreign companies. Separately, we can highlight the works of Ilin and Maydanova [4–6].

The processes of digital transformation of industry and logistics are implicit in the concept of “Industry 4.0”, the fourth industrial revolution. This term has various interpretations, but all of them are united by a single idea: it is an industry concept that provides for “end-to-end digitalization of all physical assets and integration into digital ecosystems with value chain partners” [7, 8]. At the moment there are several technologies that contribute to the transition of manufacturing companies to the logistics of the Industry 4.0 concept: cloud storage, big data, artificial intelligence (AI), robotics, blockchain, the Internet of Things (IoT), etc. According to the Logistics Trend Radar 4.0 [9], most of these technologies will contribute to the creation of brand new ways of doing business in the next 5 years. The results of the PwC study [10, 11] show that, according to business representatives, the greatest changes will be caused by IoT and artificial intelligence (AI) technologies: 65% of respondents from Russia invest in the Internet of Things, while 35% invest in AI technology.

3 Materials and Methods

In preparing this paper, we used general scientific research methods (content and comparative analysis, the method of analogies), as well as special research methods, in particular the 5PL model proposed by Morgan Stanley in 2002. The development of the matrix of scenarios for improving IoT technologies was based on this model; its fifth level best reflects the use of digital technologies in logistics, as well as process optimization taking into account the development of IT. Scientific materials on the use of breakthrough technologies in the digital transformation of logistics processes were studied, in particular analytical reports from major companies such as DHL, PwC, E&Y and Microsoft. The studies of these companies provide an opportunity to comprehensively examine the practical application of digital technologies in logistics. This information is significantly supported by the theoretical works of Russian and foreign scientists.

4 Results

Digital transformation processes are also embedded in the concept of 5PL logistics (also known as “virtual logistics” and “e-logistics”). This concept was developed by Morgan Stanley, an American financial conglomerate, and includes five levels (Fig. 1).

Fig. 1. The 5PL model of Morgan Stanley [12]

1PL logistics is inherent in small companies that operate in a limited area and perform logistics functions independently, without involving logistics providers. 2PL logistics involves expanding the geography of the company, which requires attracting second-level logistics providers to ensure the movement of material flows. The third level of logistics (3PL) is characterized by the fact that 2PL providers move to a higher level by adding new logistics services to the basic ones; thus, 3PL providers offer comprehensive logistics services. 4PL providers take over outsourcing and supply chain management within client companies. The 4PL provider is a single provider of logistics services, so in order to carry out its work it can attract providers from the lower levels. 5PL logistics differs significantly from the previous levels: the activity of the 5PL provider is to manage information and optimize all client business processes using information technologies, while the provider itself takes on the role of an information mediator [12].

The Internet of Things is one of the information technologies that can be used in 5PL logistics for the collection and analysis of large amounts of data. In general, with the help of IoT technologies, most of the work assigned to the 5PL provider is carried out, such as the prompt provision of the information necessary for making management decisions.

Although the Internet of Things concept was formulated at the Massachusetts Institute of Technology in 1999, for a long time it was not widespread. It is said that the Internet of Things as we understand it now originated in 2008–2009, when the number of devices connected to the Internet exceeded the number of people on the planet. Among the factors influencing the spread of IoT implementation, the following can be highlighted [13]:
• reduction of the average cost of IoT sensors from 2004 to 2019 by more than 70%;
• increase in the number of transistors from 2000 to 2018 by 3.75 times;
• reduction of the average cost of basic IaaS (Infrastructure as a Service) from 2014 to 2018 by more than 25%;
• decrease in the cost of transferring 1 GB of data from 2014 to 2019 by a factor of 10.

These and many other factors have contributed to the wider adoption of IoT technologies in business, including by logistics companies. It is worth noting that companies providing logistics services are major consumers of IoT products and solutions. The source [14] presents a “Digital Funnel” in which industries are ranked according to their expected IoT activity. The funnel consists of four groups: “Debutants”, “Followers”, “Innovators” and “Leaders”. The rank of each industry is determined by the complexity and scope of its projects, the current results and the prospects for the implementation of the Internet of Things. According to this funnel, the “Transport and Logistics” industry is in the fourth round, “Leaders”, which means that the introduction of the Internet of Things in this industry is economically feasible and promising. The study also presents the main areas of IoT application: highway toll collection systems, goods storage and accounting, monitoring of delivery schedules, warehouse logistics and a system for monitoring the technical condition of the vehicle fleet.

The results of the study [15] show that the main reasons for implementing IoT solutions in manufacturing companies are industrial automation (48%), quality and compliance (45%), production planning and scheduling (43%), and supply chain and logistics (43%). It is worth noting that 64% of respondents representing retail and wholesale trade implement IoT to optimize their supply chains, and 56% of transport company respondents use IoT to manage their vehicle fleets. These data indicate that the Internet of Things is widely used to optimize logistics processes in companies of various profiles.

The research shows that the implementation of IoT solutions leads to increased efficiency: it improved overall efficiency (55%), allowed teams to be more productive (42%), gave teams the opportunity to save time for other tasks (35%), helped staff to be better informed and make better decisions (33%), and allowed the use of new business models (26%). The use of IoT technologies also led to an increase in profitability, as it increased production capacity (43%), ensured cost savings (39%), increased income (36%), and reduced business costs (35%). At the same time, the implementation of IoT solutions made it possible to reduce the likelihood of human error (45%), increase customer satisfaction (44%) and increase the competitiveness of the enterprise (41%). The respondents [15] noted three main advantages of IoT:
1. Increase in efficiency (91%).
2. Increase in profitability (91%).
3. Improvement in quality (85%).

On deeper study of this topic, one can conclude that IoT technologies make it possible to virtually connect physical objects using three main technological components (Fig. 2), allowing them to receive, store and transmit information that can improve the decision-making process.

Fig. 2. IoT connection with three main technological components

Sensors are electronic devices that generate useful data by reading the required information from physical or mechanical objects: they receive physical quantities at the input and convert them at the output into signals suitable for processing. At the moment there are many varieties of sensors, such as acceleration, force and flow sensors, sound sensors, vibration sensors, humidity sensors, etc. Sensors collect data from systems and send them to the central cloud using short-range wireless technologies (WPAN, WAN); Wi-Fi is usually used to connect the gateway and the cloud. In addition, IoT connections can be based on mobile technology, using SIM cards to connect to a mobile network [16]. The networking technologies used in IoT systems are also widely varied. Figure 3 presents some technologies of mobile communications and wireless wide area networks with a range of up to 10 km.

Fig. 3. Technologies of mobile communication and wireless wide area networks (up to 10 km)
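To make the sensor-to-gateway-to-cloud path described above more tangible, the following minimal Python sketch simulates one hop of it. The sensor driver, the container identifiers and the uplink function are illustrative assumptions rather than part of any cited IoT product; the actual transport (Wi-Fi, a WPAN or a mobile network, carrying a protocol such as MQTT or HTTPS) is deliberately stubbed out.

```python
import json
import random
import time
from dataclasses import dataclass, asdict

@dataclass
class Reading:
    sensor_id: str
    kind: str        # e.g. "temperature" for a refrigerated container
    value: float
    timestamp: float

def read_sensor(sensor_id: str, kind: str) -> Reading:
    # Stand-in for a real driver: a physical quantity is sampled and
    # converted into a signal suitable for processing.
    return Reading(sensor_id, kind, round(random.uniform(2.0, 8.0), 2), time.time())

def uplink(batch):
    # Hypothetical gateway-to-cloud hop: in a deployment this payload would
    # travel over a WPAN/Wi-Fi link to the gateway and on to a cloud broker.
    payload = json.dumps([asdict(r) for r in batch])
    print(f"uplink: {len(batch)} readings, {len(payload)} bytes")

if __name__ == "__main__":
    batch = [read_sensor(f"reefer-{i:02d}", "temperature") for i in range(3)]
    uplink(batch)
```

The design choice of batching several readings per uplink reflects a common constraint of the low-bandwidth networks listed above, where per-message overhead is significant.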


In addition to the technologies above, IoT systems also use [16]:
• wireless local area networks (up to 0.5 km): Wi-Fi;
• wireless personal networks/short-range device networks (up to 100 m): Bluetooth, Wi-SUN (IEEE 802.15.4g), Z-Wave, Zigbee/XBee, Thread and others;
• wired connections: Powerline, Local Area Network (LAN)/Ethernet, cable modem, modem, Digital Subscriber Line (DSL), Synchronous Optical Networking (SONET);
• short-range communication technologies (several cm): Radio Frequency Identification (RFID), Near Field Communication (NFC);
• All-IP or next-generation networks.

The variety of network technologies is not the main problem when creating IoT systems. When deciding to implement IoT technologies, logistics companies face a number of difficulties that impede or slow down the transformation processes in logistics. Among the common problems are the following:
1. Security and data confidentiality: there are currently no comprehensive data and network protection protocols, which makes every connected device of an IoT system vulnerable to cyber-attacks. An insufficient level of protection can lead to data theft and its unauthorized use.
2. High implementation cost: large logistics companies wishing to implement an IoT system face the need to update their infrastructure and ensure the compatibility of its components, which slows down the implementation of IoT technologies and makes them more expensive.
3. Adaptability and compatibility: since international standards for IoT products and solutions are still under development, companies face functional incompatibility, which makes it necessary to use third-party platforms, replace individual components or even completely upgrade the infrastructure.
4. Durability: the existence of various wireless network technologies leads to competition among them for the right to be accepted as the industry standard. In turn, the standardization of any of these technologies will create compatibility problems for the solutions and products already existing in IoT enterprises [17].
5. The absence of a digital strategy: the implementation of an IoT system should ensure vertical and horizontal data integration, as well as access to the data for all participants in the supply chain and the value chain. An ill-conceived digital strategy, or its absence, leads to a number of errors whose elimination requires additional resources and investments [18].

The above problematic issues create obstacles to the implementation of IoT systems. The lack of generally accepted standards, the high cost of implementation and the insecurity of data hinder the spread of IoT products and solutions among large companies, which leads to an increase in opportunity costs. However, this does not diminish the need for and feasibility of using IoT technologies. According to the PwC report [19], the estimated economic effect of the introduction of IoT in logistics in the Russian Federation through 2025 will reach 542 billion rubles (Table 1).


Table 1. Evaluation of the economic effect of the implementation of IoT by Russian logistics companies until 2025

Direction | Economic effect, billion rubles | Calculated share of the economic effect, %
Asset Monitoring (Power Products) | 242.0 | 44.6
Fleet management (service) | 44.0 | 8.1
Fleet management (“Uberization”) | 41.0 | 7.6
Connected transport (“ERA-Glonass”) | 66.0 | 12.2
Connected transport (railways) | 39.0 | 7.2
Smart infrastructure (oil and gas pipelines) | 7.0 | 1.3
Asset Tracking (inventory reduction) | 63.0 | 11.6
Asset Tracking (Insurance Premium Savings) | 40.0 | 7.4
Total | 542.0 | 100.0

Source: [19]

Thus, the greatest economic effect from the implementation of Internet of Things technologies is expected in the areas of “Asset Monitoring (Power Products)” (44.6%), “Connected transport (“ERA-Glonass”)” (12.2%) and “Asset Tracking (inventory reduction)” (11.6%).

The introduction of IoT in logistics has an undeniable economic effect, as it makes it possible to [20]:
• monitor vehicles, transshipment equipment, goods and people in real time;
• manage logistics processes, detect deviations and violations in a timely manner, and promptly take appropriate corrective measures;
• compare planned targets with current ones (see the sketch after Fig. 4);
• analyze all information and indicators in order to identify new business opportunities;
• automate business processes, replacing human labor in order to improve quality and reduce costs;
• optimize the system, as well as the coordination and integration of its components.

The opportunities that open up for enterprises with the introduction of an IoT system form an analytically reliable database and promptly signal all kinds of violations and deviations, which allows management personnel to respond faster and more efficiently. This makes it possible to reduce costs and gain a competitive advantage over less technologically advanced competitors. In general, speaking about the development of Internet of Things technologies in logistics, three main areas can be distinguished, as presented in Fig. 4 [19]. They are based on the use of various sensors and IoT products and solutions that make it possible to collect and analyze large amounts of information, helping decision making in business process optimization.

Fig. 4. Directions of IoT development in logistics [21]
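The bullet above on comparing planned targets with current ones is easy to make concrete. Below is a minimal Python sketch of such plan-versus-actual monitoring; the shipment identifiers, the two-hour tolerance and the exact alerting rule are illustrative assumptions, not part of any system cited in this paper.

```python
from dataclasses import dataclass

@dataclass
class Shipment:
    shipment_id: str
    planned_eta_h: float  # planned arrival, hours from dispatch
    current_eta_h: float  # latest estimate derived from IoT telemetry

def deviation_alerts(shipments, tolerance_h: float = 2.0):
    """Flag shipments whose current ETA deviates from the plan.

    Both early and late arrivals are flagged, since either can
    disrupt slot planning at the receiving terminal.
    """
    alerts = []
    for s in shipments:
        dev = s.current_eta_h - s.planned_eta_h
        if abs(dev) > tolerance_h:
            alerts.append((s.shipment_id, "late" if dev > 0 else "early", dev))
    return alerts

if __name__ == "__main__":
    fleet = [
        Shipment("TRK-7781", planned_eta_h=36.0, current_eta_h=41.5),  # flagged: late
        Shipment("TRK-7782", planned_eta_h=24.0, current_eta_h=23.0),  # within tolerance
    ]
    for sid, status, dev in deviation_alerts(fleet):
        print(f"{sid}: {status} by {abs(dev):.1f} h")
```

Treating early arrivals as alerts too reflects a point made in the logistics literature: a truck arriving ahead of its unloading slot can be as disruptive as a delayed one.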

In order for a company to obtain an economic effect from the implementation of IoT technologies, it is necessary to go through several stages of creating an IoT system. According to [22], the process of implementing IoT solutions consists of five phases (Fig. 5).

Fig. 5. Phases of implementation of IoT solutions [22]


At the initial stage, while the business case is being developed, representatives of all areas of the company should participate in order to avoid potential errors and ensure cross-functional interaction within the enterprise. In the second phase, it is necessary to decide how the implementation of IoT technologies will take place: in-house or with the involvement of third-party expertise. The decision depends on many factors, including the availability of qualified personnel, the financial capabilities of the company, the level of complexity of the desired IoT system, etc. The proof-of-concept phase is necessary to check the most important business functions and solutions that become available with the introduction of IoT. With positive experimental results, the company can move on and work out the system in more detail. At the “Initial Pilot Deployment” phase, it is possible to begin developing scenarios and integrating IoT solutions into the organization. At this stage, it is necessary to take care of staff training and preparation for the organizational changes brought about by the introduction of Internet of Things technologies. The final step is “Commercial Deployment”: with the deployment of IoT across a large number of devices, it is necessary to ensure the manageability and scalability of all systems. In general, the company needs to determine why it needs IoT technologies, how it will implement them and how it will adapt users (personnel, agents, etc.) to the innovations.

The introduction of IoT technologies largely depends on the financial capabilities of the company, as well as on the skill level of the specialists working with the developed IoT system. In the process of deciding on the improvement of existing systems, managers have to choose between introducing changes on their own and attracting outside experts. Figure 6 presents several scenarios for improving IoT systems depending on the skill level of specialists (the internal resource). This matrix is a guideline for making management decisions and makes it possible to determine whether external experts should be involved to implement innovations.

Fig. 6. Matrix of scenarios for the improvement of IoT technologies


The number 1 represents the situation when the qualification level of personnel and agents is low, as is the complexity of the IoT system itself. In this case, the company must decide whether the existing system meets the requirements for optimizing business processes. If the existing infrastructure is adequate, the company can leave everything unchanged; otherwise, if the IoT system needs to be improved, external experts must be attracted to develop a new system and to train staff and agents.

In situation #2, when the existing IoT system is not complicated and the qualifications of specialists are high, there is an opportunity to improve the system independently. First, it is necessary to determine which business tasks should be carried out with the help of the innovations, evaluate their economic feasibility and develop a system design taking the new additions into account. If the desired components are compatible with the existing infrastructure, implementation can proceed.

In case #3, where there is a complex IoT system and a high level of specialists, it is necessary to work, using internal resources together with external experts, on simplifying the system without losing its functionality. This is needed so that internal and external consumers working with the system can navigate it more easily and can obtain the necessary information and carry out tasks without outside help.

In situation #4, when there is a complex system and a low level of qualification, it is necessary to train personnel and agents, as well as to involve third-party experts to improve the IoT system, in order to simplify the use of IoT technologies by end consumers.

Thus, the matrix of scenarios for the improvement of IoT presented in Fig. 6 covers all possible strategies for improving IoT systems depending on the qualification level of specialists and answers the question of whether external experts should be involved.
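One way to see that the matrix is exhaustive is to encode its two axes directly. The sketch below is a hypothetical Python rendering of Fig. 6, not part of the cited material; the scenario wording paraphrases the four cases described above.

```python
def improvement_scenario(staff_qualification: str, system_complexity: str) -> str:
    """Map the two matrix axes (each 'low' or 'high') onto the four scenarios."""
    matrix = {
        ("low", "low"):   "scenario 1: keep as-is if adequate, otherwise bring in "
                          "external experts to redesign the system and train staff",
        ("high", "low"):  "scenario 2: improve the system using internal resources",
        ("high", "high"): "scenario 3: simplify the system jointly with external experts",
        ("low", "high"):  "scenario 4: train staff and involve external experts "
                          "to simplify use of the system",
    }
    return matrix[(staff_qualification, system_complexity)]

# Example: highly qualified staff, simple system -> no external experts needed.
print(improvement_scenario("high", "low"))
```

Because the two inputs each take only two values, the four dictionary keys cover every combination, which mirrors the claim that the matrix covers all possible improvement strategies.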

5 Conclusion

The digitalization of the economy has affected logistics companies. One of the most promising digital technologies for logistics is the Internet of Things, which makes it possible to optimize business processes by making management decisions based on operational data. Using IoT technologies, real-time information collection and the analysis of large amounts of data are carried out. Internet of Things technology allows companies to save money and increase profits by minimizing costs. However, given that the technology has become widespread only recently, there are a number of problems faced by companies wishing to implement an IoT system. Among the main problematic issues are the following: the high cost of implementation, data security and confidentiality, the lack of a single standard, durability, adaptability and compatibility. These problems significantly slow down transformation processes in logistics, which also affects the overall digital maturity of countries’ infrastructure. Despite the existing problems, the Internet of Things remains a priority for investments in digital business transformation. There are a number of publications that describe the process of implementing an IoT system from scratch.


For instance, a five-phase process was described in this paper. However, technologies are developing rapidly and new products and solutions appear, so the question of changing an existing IoT system is highly important. We have developed a matrix for improving the IoT system depending on its level of complexity and the skill level of specialists. This matrix makes it possible to find out whether it is necessary to attract external experts to improve the IoT system or whether this can be done using internal resources.

References

1. The Digital Supply Chain’s Missing Link: FOCUS. Capgemini (2018). https://www.capgemini.com/wp-content/uploads/2018/12/Report-%E2%80%93-The-Digital-Supply-Chain%E2%80%99s-Missing-Link-Focus.pdf
2. Yu, S.G., Novitskaya, V.D.: Modeling of logistics flow processes in the R&D system. In: Innovative Activity. Yuri Gagarin Saratov State Technical University, Saratov, pp. 68–76 (2019)
3. Dmitriev, A.V.: Digital technology to monitor the movement of goods in the transport and logistics systems. CPPM (2019). https://cyberleninka.ru/article/n/tsifrovye-tehnologii-proslezhivaemosti-gruzov-v-transportno-logisticheskih-sistema
4. Maydanova, S., Ilin, I., Lepekhin, A.: Capabilities evaluation in an enterprise architecture context for digital transformation of seaports network. In: Proceedings of the 33rd International Business Information Management Association Conference, IBIMA 2019: Education Excellence and Innovation Management through Vision 2020, pp. 5103–5111 (2019)
5. Maydanova, S., Ilin, I.: Problems of the preliminary customs informing system and the introduction of the Single Window at the sea check points of the Russian Federation. In: MATEC Web of Conferences, vol. 239, p. 04004 (2018). https://doi.org/10.1051/matecconf/201823904004
6. Maydanova, S., Ilin, I.: Strategic approach to global company digital transformation. In: Proceedings of the 33rd International Business Information Management Association Conference, IBIMA 2019: Education Excellence and Innovation Management through Vision 2020, pp. 8818–8833 (2019)
7. Industry 4.0: Creating a Digital Enterprise. PwC (2016). https://www.pwc.ru/ru/technology/assets/global_industry-2016_rus.pdf
8. Poltavtseva, M.A.: A consistent approach to building secure big data processing and storage systems. Autom. Control Comput. Sci. 53(8), 914–921 (2019). https://doi.org/10.3103/S0146411619080273
9. Kückelhaus, M.: Digitalization & the future of supply chains. DHL Trend Research (2019). https://www.espo.be/media/23-05%201430%20Markus%20Kuckelhaus.pdf
10. The digital decade: Keeping up with the times. PwC (2017). https://www.pwc.ru/ru/publications/global-digital-iq-survey-rus.pdf
11. Vitkova, L., Saenko, I., Tushkanova, O.: An approach to creating an intelligent system for detecting and countering inappropriate information on the internet. In: Studies in Computational Intelligence, vol. 868, pp. 244–254 (2020). https://doi.org/10.1007/978-3-030-32258-8_29
12. Karkhova, S.A.: From 5PL-providers to zero-level logistics. State Advisor (2019). https://cyberleninka.ru/article/n/ot-5pl-provayderov-k-logistike-nulevogo-urovnya
13. Future of IoT. Ernst & Young Associates LLP (2019). http://ficci.in/spdocument/23092/Future-of-IoT.pdf
14. “Digital funnel” of consumption: features and prospects of the Russian IoT market. TsSP Platforma (2019). http://pltf.ru/wp-content/uploads/2019/02/internet_veschey_v_rossii_10_02_2019.pdf
15. IoT Signals. Microsoft (2019). https://azure.microsoft.com/mediahandler/files/resourcefiles/iot-signals/IoT-Signals-Microsoft-072019.pdf
16. Internet of Things: The New Government to Business Platform. The World Bank Group (2017). http://documents.worldbank.org/curated/en/610081509689089303/pdf/120876-REVISED-WP-PUBLIC-Internet-of-Things-Report.pdf
17. Market Pulse Report: Internet of Things (IoT). GrowthEnabler (2017). https://growthenabler.com/flipbook/pdf/IOT%20Report.pdf
18. Ilin, V., Simić, D., Saulić, N.: Logistics industry 4.0: challenges and opportunities. In: LOGIC: 4th Logistics International Conference (2019). http://logic.sf.bg.ac.rs/wp-content/uploads/Papers/LOGIC2019/ID-33.pdf
19. The Internet of Things (IoT) in Russia: The technology of the future. PwC (2017). https://www.pwc.ru/ru/publications/iot/iot-in-russia-research-rus.pdf
20. Radivojevic, G., Bjelic, N., Popovic, D.: Internet of things in logistics. In: 3rd Logistics International Conference (2017). http://logic.sf.bg.ac.rs/wp-content/uploads/Papers/LOGIC2017/ID-31.pdf
21. Goryainov, A.N.: Internet of things, “Uberization” of cargo transportation and transport diagnostics. In: Proceedings of the International Scientific and Practical Conference “Promising Directions of Development of Regional Transport and Logistics Systems”, 22–23 May 2018. https://www.researchgate.net/publication/333532137_Internet_Vesej_uberizacia_gruzoperevozok_i_transportnaa_diagnostika
22. Scully, P., Lueth, K.L.: Guide to IoT solution development. IoT Analytics (2016). https://iot-analytics.com/wp/wp-content/uploads/2016/09/White-paper-Guide-to-IoT-Solution-Development-September-2016-vf.pdf

The Challenges of the Logistics Industry in the Era of Digital Transformation

Dmitry Egorov1, Anastasia Levina2(&), Sofia Kalyazina2, Peter Schuur3, and Berry Gerrits3

1 Orimi Trade, Saint-Petersburg, Russia
2 Peter the Great St. Petersburg Polytechnic University, Saint-Petersburg, Russia
[email protected]
3 University of Twente, Enschede, The Netherlands

Abstract. Digital transformation has a significant impact on the development of such an important sector of the economy as logistics. All major economic trends can be traced in logistics, and basic digital technologies are already being used. At the same time, there are certain obstacles to the effective development of digital logistics; for example, there is a lack of informational integration of supply chains. This article systematizes and analyzes information about the state of digital transformation in logistics, its main trends and the applied technologies; it models the digital logistics ecosystem and identifies the stakeholders, the drivers of digital logistics transformation and the requirements for the development of the industry. Based on this, the main lines of action to improve the efficiency of logistics activities in the context of digital transformation are formulated. The implementation of the proposed actions requires the integrated use of technology and a specialized mathematical apparatus. The integrated actions of participants in the digital logistics ecosystem will ensure the required role of logistics in integrated economic development.

Keywords: Digital transformation · Logistics challenges · Digital technologies · Digital supply chains

1 Introduction

Doing business now in almost all sectors of the economy requires taking into account the trends of digital transformation. Digital transformation, through digital technology, affects the ways and principles of creating value for various stakeholders in an environment of rapidly changing circumstances. Logistics is also closely related to the processes of digital transformation, and the industry’s development directly affects economic development in general. At present, the industry is more inclined towards the development of tools for optimizing existing processes rather than towards structural transformation. The times require that logistics become part of a company’s value proposition [1]. The emergence of mechanisms for processing large volumes of (including personalized) data and the creation of a technological base became prerequisites for the digital transformation of supply chains. Supply chains traditionally rest on creating value for the end
consumer and are limited by demand [2]. Moreover, traditional marketing approaches are based on statistical forecasts of demand, but the real, individual consumer in practice is always different from the hypothetical average. Now that marketing has the opportunity to determine the demand of specific consumers based on their preferences, digital marketing has moved to the micro-segmentation paradigm. Important trends in the development of supply chain management are: (i) the transition to unique products manufactured specifically for a particular client (Batch Size One [3]); (ii) integrating the sales channels to converge into a single channel of orchestrated product flow (omnichannelism); (iii) the replacement of a single product with integrated services based on this product (servitization). Digital logistics should be able to support this new digital marketing ideology, i.e. deliver the right product in the right quantity at the right time. Moreover, since goods are largely similar, competition is shifting precisely towards the effectiveness of logistics. The cost of missed orders is high: the client will go to those who deliver faster and will return only if the current supplier fails to fulfill the conditions.

Obstacles to the effective development of digital logistics in collaboration with digital marketing are currently associated with the fact that supply chains are still often not integrated. That is, each element of the chain tries to reach its local optimum (to sell its goods to the next link as soon as possible), not caring about the whole chain or about the cost of moving goods between links. Ideally, it should be the case that [4] until the final consumer receives the goods, no one in the chain receives the money. To achieve this, it is necessary to build integrated supply chains from the first producer in the chain to the final consumer, but in such a way that the links remain relatively independent (so that the collapse of one link does not lead to the collapse of others). Integration is supposed to be primarily informational, based on an integrated single information space (including information on stocks, geolocation of vehicles, etc.). Information integration of supply chains will allow all links to create overall efficiency and at the same time earn money. Classical supply chains will turn into digital “supply chains” of the matrix type, in which each link at any given time affects the entire network as a whole and changes it [5].

2 Materials and Methods

The logistics industry is influenced by all the main trends shaping business today. One example is the combination of globalization and glocalization: leading global companies are increasing the share of revenue received outside their home region, which also allows them to get closer to the needs of the end user. It is also important to consider the growth of e-commerce. Another influential trend is urbanization combined with population aging. Growing urbanization leads to the formation of the smart city concept; the Smart City, in turn, requires Smart Logistics that meets the requirements of efficiency, safety, reduced negative environmental impact, etc. An aging population reduces labor productivity, which calls for the expanded use of modern digital technologies capable of offsetting this process. A number of trends are associated with the increasing use of particular digital technologies. For the logistics sector, these are mainly cloud services, artificial intelligence (AI), the Internet of Things (IoT) and the Internet of Everything (IoE), big data, robotics, and 3D printing. The use of these technologies can significantly change the functioning of the entire supply chain. The technologies mentioned are characteristic of the “Industry 4.0” paradigm and allow switching to distributed production, direct access from the producer to the consumer, the sharing economy, etc.

Cloud services, such as the cloud computing (CC) concept, can be used as an integrated platform for cloud logistics [6] to realize universal interconnectedness, exchange logistics information and optimize logistics tasks [7]. The transition to cloud services makes it possible to reduce operating costs for computing power, provide a single platform for the sender, carrier and customer, unify document flow and business processes, and reduce the risks associated with IT infrastructure support and with the security and reliability of information storage. Cloud service providers also offer automated systems to improve transportation efficiency, which can reduce costs, increase storage capacity and reduce cargo handling time.

AI is indispensable in the processing of large volumes of data and in the development of optimal solutions based on this analysis. AI also enables the personalization of offers by structuring and analyzing data, customer support, digitalization of workflow, and forecasting of the volume and characteristics of the supply market. AI is also involved in developing the self-learning ability of autonomous (unmanned) vehicles. The magnitude of data flows in these self-learning systems can be mitigated by using decentralized control in the form of a Multi-Agent System (MAS). In a joint report, DHL and IBM conclude [8] that AI will be able to transform logistics services into a predictable, automated, personalized and proactive industry.

The IoT is a provider of data for the work of AI and collects heterogeneous data sets in huge volumes from various devices and objects involved in the supply chain. RFID tags are used on vehicles, containers, unmanned vehicles and warehouse vehicles to simplify transportation, control traffic, determine location and prevent losses. Smart sensors and the IoT are also used for remote control, telepresence, geolocation services, remote object management, security monitoring and the operation of automated control, accounting and document management systems. At the same time, labels, sensors, measuring instruments and control devices are integrated into the ecosystem [9]. The key technologies for the IoT are sensors, smart chips, wireless transmission networks, machine-to-machine communication (M2M) and, most importantly, broadband communication channels, computing power and data storage capacity. The main areas of application of the IoT in logistics are cargo traceability, warehouse and fleet management and, in addition, predictive asset servicing and route optimization [10]. Other opportunities are manifold: (i) smart containers can maintain their load’s prescribed temperature, (ii) heterogeneous smart loads can be combined in one container, thus maximizing capacity usage, and (iii) as for road transport, trucks (the ‘things’ in this case) can be equipped with software that enables truck platooning, thus saving fuel [11].

The IoE is seen as a technology that can provide the interoperable, reliable operation of applications such as Smarter Cities, Human Dynamics, Cyber-Physical Systems, Smart Grid and Intelligent Transport Systems, i.e. it is about dynamic ecosystems. For example, cloud-assisted remote sensing (CARS) enables the collection and exchange of data
from sensors, remote and real-time access to data, flexible provisioning of resources and scaling, and pricing models using the IoE [12].

Big Data technology makes it possible to collect and analyze a significant amount of information on the processing of applications, schedule management, cost accounting and the planning of future expenses. Data are collected on orders, transactions, traffic, incidents, resources, external providers, geolocation, etc. [13]. The received information makes it possible to optimize routing, planning and forecasting, and risk management, to improve marketing, to use crowdsourcing more actively and, as a result, to switch to data monetization [14]. Big data processing technology, in combination with the IoT, helps to assess and predict the transport risk associated with the deviation of the actual arrival time of the cargo from the planned one in a situation where premature arrival and late arrival are equally undesirable. Such an analysis is possible taking into account all demand variables (route, time, weight and volume of cargo, characteristics of the order, participants in the supply chain, etc.) [15]. Big Data business analytics (BDBA) and supply chain analytics (SCA) are used for demand planning, for procurement in the aspects of supply risk management and supplier performance management (for example, quality, guarantees, on-time delivery, etc.), and in the routing of goods, vehicles and work force [16].

Robotization is used in warehouses to complete shipments. In addition, the use of automated unmanned vehicles is being developed, and many manufacturers are actively testing this technology. Its development will ensure productivity growth, improving the quality of operations while reducing costs, including personnel costs [17].

3D printing can now be considered a technology that forms the basis for mass customization in modern production [18]. Its application reduces the need for storage facilities, reduces delivery costs, and changes the composition of suppliers and the nature of the transported cargo. An order can come directly from the consumer to the production site. At the same time, a new large sector of the logistics industry will appear, associated with the storage and movement of raw materials for supplying 3D printers [19, 20]. The use of 3D printing can significantly change the principle of functioning of supply chains, as it allows production to be brought closer to the end user and significantly reduces the cost of transporting raw materials and finished products. 3D printing also increases the customization of goods, making it possible to produce an object based on individual consumer needs.

One of the industry trends is logistics outsourcing involving logistics intermediaries (providers). Currently there are five levels of logistics service (PL - Party Logistics). At the 5PL level, the latest developments in combining intelligent software of different levels and localization are used, in conjunction with the development of strategic partnerships among all participants in the logistics chains. A 5PL provider offers a full range of services through the use of the global information technology space. Its use makes it possible to implement a “division of labor” in order to optimize costs and improve efficiency by reducing operating costs and material resources. The outsourcer automates and optimizes the search for logistics solutions. At the same time, the application of various IT technologies is expanding: for example, automation of route
selection, online tracking, RFID tags, client blocks and other tools are used. In the case of outsourcing of the logistics engineering function, it is possible to move to Supergrid Logistics, which accumulates analytics and data management, logistics expertise and interaction with the customer, taking into account his changing needs.

An effective solution is also the creation of transport and logistics clusters combining freight forwarding, terminal-warehouse complexes and several types of transport. Many wholesale distribution companies (Procter & Gamble, Mars, etc.) are migrating from procurement and sales activities to the category of transport and logistics clusters. In such a cluster, effective interaction, planning, optimal servicing of goods flows and the exchange of information between participants based on a single standard are possible. As a result, logistics super networks are formed using multichannel logistics. It is possible to create an ecosystem of digital transport corridors, which, among other things, requires special regulation. In such an ecosystem, it is necessary to control the quality of functioning of all related subsystems [21, 22]. Another interesting trend is logistics uberization, i.e. logistics based on the principles of the sharing economy. In the field of trucking this has long been a reality, although until recently there were no digital platforms and intermediary companies played the role of integrators.

The present study consisted in the analysis of the existing scientific literature on the selected topics. The goal was to systematize information about the state of digital transformation in logistics, the main trends, the applied technologies and the prospects for increasing the efficiency of digital transformation. Conclusions are drawn on the basis of this analytical review.

3 Results

Thus, it is possible to formulate three main lines of action to improve the efficiency of logistics activities in the context of digital transformation:
1. digitalization of data flows on actual consumption and movement;
2. automation of data volumes for planning purchases and stocks (data on current stocks, rhythm of supplies, etc.);
3. the introduction of a mechanism that ensures transparent relationships between participants in supply chains based on digital technologies, such as smart contracts (a minimal sketch of this settlement rule is given below).

To implement these lines of action, it is necessary to comprehensively use technologies and a specialized mathematical apparatus: electronic platforms, blockchain, smart contracts, paperless workflow, adaptive self-organizing systems based on multi-agent technology, AI, telecommunications and parallel computing.

The main drivers of digital logistics transformation are micro-segmentation, maximum possible customization, increased flexibility of logistics systems without increasing costs, the need to improve efficiency and reduce costs, customer retention opportunities, an increasing value proposition, new technologies and industry digital platforms, urbanization, and population aging. The main requirements for the development of the industry are speed, efficiency, security and transparency.
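The smart-contract mechanism in item 3 can be illustrated with a deliberately simplified, off-chain Python sketch of the settlement rule discussed in the introduction: no link in the chain is paid until the final consumer confirms delivery. The class name, the link names and the amounts are hypothetical and stand in for what would, in practice, be enforced by a blockchain-based contract.

```python
class DeliveryEscrow:
    """Funds owed to every link are locked until the end consumer
    confirms receipt, so no link is paid before the goods arrive."""

    def __init__(self, payouts: dict):
        self.payouts = dict(payouts)  # link name -> amount owed
        self.delivered = False

    def confirm_final_delivery(self) -> None:
        # In a real smart contract this event would come from a signed,
        # on-chain confirmation by the end consumer.
        self.delivered = True

    def settle(self) -> dict:
        if not self.delivered:
            raise RuntimeError("final delivery not confirmed; funds stay locked")
        return dict(self.payouts)

escrow = DeliveryEscrow({"producer": 700.0, "carrier": 200.0, "retail hub": 100.0})
escrow.confirm_final_delivery()
print(escrow.settle())  # every link is paid only at this point
```

Tying all payouts to a single delivery event is what aligns the local incentives of each link with the performance of the whole chain, which is the point of the integration argument above.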


Drivers directly affect the expansion of requirements. For example, customization, which is a trend in customer satisfaction, imposes additional requirements on the manufacturer, namely agility and velocity. Logistics companies acting as intermediaries between the manufacturer and the end user receive a new requirement: to improve the quality of economic interactions and fulfill obligations on time and in full. In the new era, Logistic Service Providers (LSPs) are partners of the supply chain parties rather than some distant third party. They manage all or part of the seller’s logistics chain, e.g. the management of warehouse inventory, order processing, fulfillment, delivery and after-sales service. E-commerce value chains require that huge amounts of cargo be delivered within tight time windows; in fact, this is a crucial factor in creating customer value in e-commerce, and it gives LSPs a key position in the new era. LSPs provide many value-added services in addition to cargo delivery in the value chain; e.g., they may deliver an item, install it and take back the previous one.

The main stakeholders of the digital transformation of logistics are manufacturers, logistics companies and the end consumer, who are part of the value chain within the supply chain. It is also possible to single out a stakeholder of a higher level: the state, which is distinguished by increased requirements for economic interactions in terms of security and transparency. In general, the digital logistics ecosystem can be represented as a model, as shown in Fig. 1.

Fig. 1. Digital ecosystem of logistics

This figure describes the requirements of the stakeholders (government, business environment, manufacturers, logistics companies, consumers), represented by objects marked with a parallelogram. Requirements model the properties of these elements that are necessary to achieve the goals, which are modeled by goal elements. Items marked with a helm symbol are drivers; a driver encourages the organization to define its goals
and implement the changes necessary to achieve them. Elements marked with an exclamation mark represent principles; the principles determine the properties of the system, motivated by some goal or driver. A large number of participants are involved in the digital ecosystem of logistics (Fig. 2). All of them must act in an integrated manner, in cooperation, using uniform standards and ensuring the growth of value for the consumer.

Fig. 2. Participants of the digital ecosystem of logistics

Industry-specific digital platforms that are currently widely used include, for example, the Yard Management System (YMS), which is responsible for managing the warehouse territory and placing vehicles on it; the Transportation Management System (TMS), which controls the movement of goods from the point of shipment to the point of unloading; the Warehouse Management System (WMS), which regulates the location and movement of goods and material values directly at the warehouse; and a complex of DSS-class systems engaged in inventory and production planning (including the functionality of BI, MES, APS, etc.). An important and integral element of such an industry platform should be secure payment systems. In addition, platforms for electronic document management, electronic queues for border crossing and others are used. At the same time, in the Russian Federation, for example, a key task has been set in the field of digital transformation of transport and logistics: interfacing industry digital platforms with each other and with other state systems. This should increase the productivity, safety and quality of transport systems and the effectiveness of national industrial projects, given that the global transport complex is the largest consumer of digital technologies and solutions [23].

4 Conclusion

Currently, the logistics industry, like many others, is technologically ready for a quantum leap in logistics management: making it as personalized as possible while keeping it economical and increasing the profitability of the capital involved. To realize this, it is necessary to overcome not only technological barriers but also mental and cultural ones: people must be willing to move to a new level of transparency, to share information online, and to go from local optimization of individual links to optimization of the whole chain. Today, very few truly global chains are actually implemented in the world, and any shock leads to a break. The obstacle to improving supply chain performance is not technology but culture and beliefs, habits and skills. Successful implementation of the main directions for increasing the efficiency of digital transformation in the logistics industry makes it possible to increase the security of systems, expand multimodality, combine the interests of the many parties involved in the logistics process, and increase the value proposition, taking into account the changing needs of the end user, without increasing investments in infrastructure. Digitalization is changing the channels of goods movement, delivery formats and management processes. Unified digital platforms and integrated actions of participants in the digital logistics ecosystem can increase overall efficiency and establish logistics as one of the drivers of digitalization.

Acknowledgment. The reported study was funded by RSCF according to the research project № 19-18-00452.

References

1. Vilken, V., Kalinina, O., Barykin, S., Zotova, E.: Logistic methodology of development of the regional digital economy. In: IOP Conference Series: Materials Science and Engineering, vol. 497, no. 1, p. 012037 (2019)
2. Wei, F., Alias, C., Noche, B.: Applications of digital technologies in sustainable logistics and supply chain management. In: Melkonyan, A., Krumme, K. (eds.) Innovative Logistics Services and Sustainable Lifestyles, pp. 235–263. Springer, Cham (2019)
3. DHL: ‘Batch Size One’. https://www.dhl.com/cn-en/home/insights-and-innovation/thoughtleadership/trend-reports/batch-size-one.html
4. Goldratt, E.M.: The Choice. North River, Great Barrington (2008)
5. Laaper, S., Yauch, G., Wellener, P., Robinson, R.: Embracing a digital future. Deloitte Insights (2018)
6. Li, W., Zhong, Y., Wang, X., Cao, Y.: Resource virtualization and service selection in cloud logistics. J. Netw. Comput. Appl. 36(6), 1696–1704 (2013)
7. Zhang, Y., Liu, S., Liu, Y., Li, R.: Smart box-enabled product–service system for cloud logistics. Int. J. Prod. Res. 54(22), 6693–6706 (2016). https://doi.org/10.1080/00207543.2015.1134840
8. DHL, IBM: Artificial intelligence in logistics. https://www.dhl.com/content/dam/dhl/global/core/documents/pdf/glo-core-trend-report-artificial-intelligence.pdf
9. Xu, R., Yang, L., Yang, S.-H.: Architecture design of internet of things in logistics management for emergency response. In: 2013 IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, pp. 395–402 (2013)
10. Tadejko, P.: Application of internet of things in logistics – current challenges. Ekon. Zarządzanie Econ. Manag. 7(4), 54–64 (2015). https://doi.org/10.12846/j.em.2015.04.07
11. Naumova, E., Buniak, V., Golubnichaya, G., Volkova, L., Vilken, V.: Digital transformation in regional transportation and social infrastructure. In: E3S Web of Conferences, vol. 157, p. 05002 (2020)
12. Jara, A.J., Ladid, L., Gómez-Skarmeta, A.F.: The internet of everything through IPv6: an analysis of challenges, solutions and opportunities. JoWUA 4(3), 97–118 (2013)
13. Ghosh, D.: Big data in logistics and supply chain management – a rethinking step. In: 2015 International Symposium on Advanced Computing and Communication (ISACC), pp. 168–173 (2015)
14. Zhong, R.Y., Huang, G.Q., Lan, S., Dai, Q., Chen, X., Zhang, T.: A big data approach for logistics trajectory discovery from RFID-enabled production data. Int. J. Prod. Econ. 165, 260–272 (2015)
15. Shang, Y., Dunson, D., Song, J.-S.: Exploiting big data in logistics risk assessment via Bayesian nonparametrics. Oper. Res. 65(6), 1574–1588 (2017). https://doi.org/10.1287/opre.2017.1612
16. Wang, G., Gunasekaran, A., Ngai, E.W., Papadopoulos, T.: Big data analytics in logistics and supply chain management: certain investigations for research and applications. Int. J. Prod. Econ. 176, 98–110 (2016)
17. Mikušová, N., Čujan, Z., Tomková, E.: Robotization of logistics processes. In: MATEC Web of Conferences, vol. 134, p. 00038 (2017). https://doi.org/10.1051/matecconf/201713400038
18. Manyika, J., et al.: Manufacturing the future: the next era of global growth and innovation. McKinsey Global Institute, London (2012)
19. Silva, J.V., Rezende, R.A.: Additive manufacturing and its future impact in logistics. In: IFAC Proceedings Volumes, vol. 46, no. 24, pp. 277–282 (2013)
20. Manners-Bell, J., Lyon, K.: The implications of 3D printing for the global logistics industry. Transp. Intell. 1–5 (2012)
21. Maydanova, S., Ilin, I., Lepekhin, A.: Capabilities evaluation in an enterprise architecture context for digital transformation of seaports network. In: 33rd International Business Information Management Association Conference, IBIMA 2019, pp. 5103–5111 (2019)
22. Maydanova, S., Ilin, I.: Strategic approach to global company digital transformation. In: 33rd International Business Information Management Association Conference, IBIMA 2019, pp. 8818–8833 (2019)
23. Ministry of Transport of the Russian Federation. https://mintrans.ru/

Optimal Production Manufacturing Based on Intelligent Control System

Hanafi Mohamed Yassine and Viacheslav P. Shkodyrev

Peter the Great St. Petersburg Polytechnic University, St. Petersburg, Russia
[email protected]

Abstract. In this article, we propose an intelligent approach to optimal production based on analysis of the manufacturing process. By examining the manufacturing process, we obtain an overall picture of how the plant performs its production and determine the control factors that govern it; using this information, we build a control system based on the factory’s real control system. To reduce the complexity of the manufacturing process, we divide it into sub-processes according to the production line and the number of processes, and create a logical model for each sub-process. By accumulating all the logical models, we obtain a large hierarchical logical model of the manufacturing process. Each of these sub-system logical models is examined using an artificial neural network built on the identified control system. The goal of the neural network is to determine how the control factors affect production. From the results obtained by the neural network, and using the Pareto front, we determine a set of optimal configurations for the control system. The conclusions drawn in this article can be extended to the processing industry worldwide.

Keywords: Control system · Multi-objective optimization · Pareto front · Neural network · Oil manufacturing

1 Introduction

The industrial problem is a multi-objective optimization problem. Engineers use optimization to find solutions that they could not obtain through expertise alone; they are therefore interested in the set of optimal trade-offs between the different criteria, also called the Pareto front. The Pareto front thus corresponds to a set of innovative solutions. Optimization is at the heart of any problem related to decision making, whether in engineering or economics. The goal of all these decisions is to minimize the effort required and/or maximize the desired benefit. Multi-objective optimization has been studied for about three decades, and its application to real-world problems is increasing. Its goal is to optimize several components of the objective function simultaneously. The solution of the problem is not a single vector but a set of solutions known as the set of Pareto-optimal solutions. There is no single method that effectively solves all optimization problems. Several optimization algorithms have been proposed, examined, and analysed in recent decades. However, optimization in engineering remains an active field of research, since many real optimization problems remain very complex and difficult to solve with existing algorithms. The existing literature presents intensive research efforts to overcome some of these difficulties, for which only partial answers have been obtained. In most practical optimization problems, several criteria must be taken into consideration to obtain a satisfactory solution. As its name suggests, multi-objective optimization aims to optimize several objectives simultaneously. These objectives are in general in conflict: improving one objective causes the deterioration of another. Consequently, the final result of optimization is no longer a single solution but a set of solutions, each of which represents a compromise between the different objectives to be optimized.

2 Problem Statement

Every manufacturing operation is based on a sequence of processes. Each process has its inputs and outputs; depending on its structure, a process can have one or more outputs, and those outputs are the inputs of the next process or the final products. Each process can be controlled by a set of factors that influence the outputs. Each process can itself be composed of sub-processes, so the factory ultimately has a hierarchical structure, and the base of this structure is the key to optimal control of the manufacturing system. We describe objective control of a complex technical system as a network of interests in a manufacturing subsystem. To achieve the best possible objective, we define the objective condition of the method as a multi-objective state function, where every goal G in the process is managed by multiple variables u. Figure 1 demonstrates the concept of a control system in which the system is part of a hierarchical structure and every unit in the process has its own control system, all of which are related to achieving the optimality of the system. A system consists of a technological process that is regulated by a variety of technical factors, which vary from one process to another depending on the desired goal. The production varies with the role of the technological process and the technical elements. Figure 2 demonstrates the role of the technological process; from Fig. 2 we can formulate the equation

y = f(x, u) + q,   (1)

where x is the input of the system (crude oil in our case), u is the vector of technological factors, and q is the contribution of the technological process.
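To make Eq. (1) concrete, the following minimal Python sketch evaluates a plant model of the same shape. The particular form of f and all numeric values are illustrative assumptions, not the refinery’s actual model.

import numpy as np

def technological_process(x, u, q=0.0):
    # Sketch of Eq. (1), y = f(x, u) + q: x is the system input
    # (e.g. crude oil feed rate), u a vector of technological factors
    # (temperature, pressure), q the additive contribution of the
    # technological process. The form of f is a placeholder assumption.
    f = 0.8 * x + 0.1 * u[0] - 2.0 * u[1] ** 2
    return f + q

y = technological_process(x=100.0, u=np.array([350.0, 0.16]), q=0.5)
print(y)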


Fig. 1. Control system model

Fig. 2. Technological system function

Here G is the objective function of the process. For the hierarchical method and multi-criteria analysis to be effective, we decompose the determination into the following steps:

• Identify the problem and evaluate the necessary knowledge.
• Structure the hierarchy from the top-level objective, through the intermediate level (criteria), down to the lowest level (goals), from the chosen viewpoint.
• Identify a set of factors/objectives; the selection made at the upper level is used to evaluate the array at the level directly below.
• Use the priorities extracted from the comparisons to assess the goals at the next lower level. Do this for every set; for every set at the level below, apply its derived weights to obtain its total (global) priority, and continue this cycle of weighting and summing down to the final goals (the alternatives) at the lowest level.
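The top-down roll-up of priorities described in the last step can be sketched in a few lines; the hierarchy, weights and goal names below are hypothetical.

# Each goal's global priority is its local weight multiplied by the
# priority of its parent criterion, summed over all parents.
hierarchy = {
    "criterion_A": {"weight": 0.6, "goals": {"g1": 0.7, "g2": 0.3}},
    "criterion_B": {"weight": 0.4, "goals": {"g1": 0.2, "g3": 0.8}},
}

global_priority = {}
for criterion in hierarchy.values():
    for goal, local_weight in criterion["goals"].items():
        global_priority[goal] = (global_priority.get(goal, 0.0)
                                 + criterion["weight"] * local_weight)

print(global_priority)  # ≈ {'g1': 0.5, 'g2': 0.18, 'g3': 0.32}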


To achieve the optimal global goal G, every goal on the lowest level of the system should be optimal, since the global goal can be formulated as

G(u) = (G1(u), G2(u), …, Gm(u))^T.   (2)

A multi-objective optimization problem has several objective functions to be minimized or maximized and several constraints to fulfil [3, 7, 14]. The general structure of a multi-criteria optimization problem is

maximize/minimize  f_m(x),                    m = 1, 2, …, M;
subject to         g_j(x) ≥ 0,                j = 1, 2, …, J;
                   h_k(x) = 0,                k = 1, 2, …, K;
                   x_i^(L) ≤ x_i ≤ x_i^(U),   i = 1, 2, …, n.   (3)

The vector x = (x_1, x_2, …, x_n)^T is a vector of n decision variables; x_i^(L) and x_i^(U) are the lower and upper bounds of variable x_i, respectively. These variables define the decision space (search space) D; an element of the search space is called a feasible or potential solution. The terms g_j(x) and h_k(x) are the constraint functions. Inequality constraints are written as “greater than or equal” constraints, since “less than or equal” constraints can be handled by duality. A solution x that does not satisfy all (J + K) constraints is said to be infeasible; the set of feasible solutions constitutes the feasible region. The vector f(x) = (f_1(x), f_2(x), …, f_M(x))^T is the objective vector. Each of the M objective functions is either maximized or minimized depending on the problem being addressed. Using the principle of duality, a maximization problem can be reduced to a minimization problem by multiplying the objective function by −1 [18, 19, 22].
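A short sketch of the two notions just used, Pareto dominance and the duality reduction of maximization to minimization, is given below; the sample objective values are illustrative only.

import numpy as np

def dominates(fa, fb):
    # Pareto dominance for minimization: fa dominates fb if it is no
    # worse in every objective and strictly better in at least one.
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def to_minimization(f_values, maximize_mask):
    # Duality: objectives flagged for maximization are negated, so a
    # mixed max/min problem of form (3) becomes pure minimization.
    f = np.array(f_values, dtype=float)
    f[:, maximize_mask] = -f[:, maximize_mask]
    return f

F = to_minimization([[7250.0, 96.1], [7013.0, 101.9]],
                    maximize_mask=np.array([True, True]))
print(dominates(F[0], F[1]))  # False: the two points are incomparable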

3 Method Notation

The manufacturing process is complex and hierarchical, and its optimality cannot be solved directly; to address this we propose several methods.

3.1 Kripke Structure

We simplify the manufacturing process into a logical technological model in order to understand the relationships between processes and to identify the inputs and outputs of each process. For this we use the Kripke structure, a formalism used in model checking to represent the behaviour of a system [11] (Fig. 3).
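As a hedged illustration of this formalism, a Kripke structure M = (S, I, R, L) can be encoded directly; the states and labels below are hypothetical fragments of a production line, not the paper’s full model.

from dataclasses import dataclass, field

@dataclass
class KripkeStructure:
    # States, initial states, a transition relation (pairs (s, s')),
    # and a labelling of states with sets of atomic propositions.
    states: set = field(default_factory=set)
    initial: set = field(default_factory=set)
    transitions: set = field(default_factory=set)
    labels: dict = field(default_factory=dict)

    def successors(self, state):
        return {t for (s, t) in self.transitions if s == state}

m = KripkeStructure(
    states={"S0", "S3"},
    initial={"S0"},
    transitions={("S0", "S3")},
    labels={"S0": {"desalted_crude"}, "S3": {"gas_fraction"}},
)
print(m.successors("S0"))  # {'S3'}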


Fig. 3. Kripke structure for a manufacturing process

3.2 Process Identification

We identify the control factors (control keys) and the objectives (goals) of each process, which allow us to control the process with respect to its objectives [6, 15].

3.3 Neural Network Regression

To understand the relationship between the objectives and the control factors of each process, we use an approximation neural network with regression; by plotting those relationships we can clearly see how they behave with respect to each other [9, 12, 17].
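A minimal sketch of this step, assuming synthetic stand-ins for the plant’s historical data, could use a small multi-layer perceptron regressor; the factor ranges and the target relation are invented for illustration.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical control factors: temperature (°C) and pressure (MPa)
X = rng.uniform([150.0, 0.10], [160.0, 0.20], size=(500, 2))
# Synthetic objective (e.g. quality) standing in for plant records
y = 0.5 * X[:, 0] - 30.0 * X[:, 1] + rng.normal(0.0, 0.1, 500)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                     random_state=0).fit(X, y)

# The fitted network approximates the factor-to-objective mapping and
# can be evaluated on a grid to plot its behaviour, as in Fig. 7.
print(model.predict([[155.0, 0.15]]))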

3.4 Pareto Front

Using the knowledge obtained in the three previous steps, we can identify the Pareto front of each process. Figure 4 presents the Pareto front for two goals: if we suppose that ‘Temp’ is the first objective and ‘Results’ is the second, the blue dots are our feasible search space and the red line is the Pareto front for those objectives. The shape of the Pareto front changes according to four objective-function cases:


Fig. 4. Pareto front to optimize two objectives

a) maximize both the 1st and the 2nd objective;
b) maximize the 1st objective and minimize the 2nd objective;
c) minimize the 1st objective and maximize the 2nd objective;
d) minimize both the 1st and the 2nd objective.

The final result of these steps is a set of solutions for each process, defined as an optimal (Pareto) set of solutions [3–5, 13, 16]. The final configuration to be used can only be set on the basis of the global goal [8, 10, 20, 21].
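A simple O(n²) sketch of extracting the Pareto front from a cloud of evaluated configurations is shown below; the maximize flags cover the four cases listed above by negating any objective to be maximized, and the sample points are invented.

import numpy as np

def pareto_front(points, maximize=(True, True)):
    # Return indices of non-dominated points for two (or more) objectives.
    pts = np.array(points, dtype=float)
    for j, mx in enumerate(maximize):
        if mx:
            pts[:, j] = -pts[:, j]          # reduce every case to minimization
    front = []
    for i, p in enumerate(pts):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for k, q in enumerate(pts) if k != i)
        if not dominated:
            front.append(i)
    return front

# e.g. a trade-off between quality (maximize) and cost (minimize)
print(pareto_front([[7.0, 2.0], [6.5, 1.0], [6.0, 3.0]],
                   maximize=(True, False)))  # [0, 1]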

4 Experimental Results

To obtain these results, we applied our method to a real oil manufacturing production. Oil manufacturing is a complex process with a hierarchical structure and many complex systems. Crude oil distillation units are the first units that process petroleum in any refinery; their objective is to separate the mixture into several fractions. As oil is fed into the crude oil distillation unit, the crude is first heated to a temperature between 100 and 137 °C. This allows salts, which can be harmful to some equipment, to be removed at the desalter. The desalted crude continues through the system into the heater, where it is further heated to over 350 °C. Next, it is fed into the atmospheric column, where the vapors and liquids separate. Residues are stripped out at the bottom of the column; the products are taken from the side of the column and moved through the refinery for further processing [1, 2].

4.1 Kripke Structure

The first step of our method is to simplify these processes using the Kripke structure so that we can understand the relationships between them. Each Kripke structure is composed of states and processes, so we can identify the changes in the inputs, while the outputs are the process results. Table 1 identifies the inputs and outputs of each process. We repeat this method down to the last level; as a result, Fig. 5 shows the last level of the distillation process in oil manufacturing.

Fig. 5. Kripke structure for the Distillation process

Table 1 shows the states and the technological processes (R1–R4) of the distillation process.

4.2 Process Identification

The next step is to identify the factors that control each technological process; to determine these factors, we analysed the technological processes at the last level of the hierarchical structure of oil manufacturing. Figure 6 describes how changes in the control factors (temperature and pressure) affect the targets of the system (quality and productivity) over time. The interaction of these variables contributes to various objectives; to find an optimal arrangement, we examined the interactions independently so that we could determine how best to adjust them.


Table 1. The states and the processes of the distillation process

States:
S0 – Desalted crude oil from the desalination process
S1 – A mixture of corrosion inhibitor and gasoline
S2 – Solution neutralizer and crude oil
S3 – Gas
S4 – Fraction 140–240
S5 – Fraction 240–300
S6 – Fraction 300–350
S7 – Mazot
S8 – Unstable gasoline
S9 – Fraction 140–240 after filtering
S10 – Fraction 240–300 after filtering

Processes:
R1 – C-101
R2 – V-102
R3 – C-102/1
R4 – C-102/2

Fig. 6. Best configuration for one objective of the subsystem

The control factors of each system have maximum and minimum borders that cannot be exceeded. In Fig. 6, the horizontal red lines mark those borders; the blue vertical line for each factor (pressure and temperature) marks the best configuration for the highest productivity, while the green one marks the best configuration for the highest quality.


Fig. 7. An approximation neural network to study the behaviour of the subsystem

4.3 Neural Network Regression

By using an approximation neural network to study the relationship between the control factors and the subsystem objectives, we determine exactly how they behave. The red line in Fig. 7 shows the relationship between the control factors and the subsystem objectives.

4.4 Pareto Front

The red line in Fig. 8 presents the Pareto front of this subsystem, and the blue dots present possible configurations; every dot on the Pareto front is a possible optimal solution. These configurations are listed in Table 2. The Pareto front gives us optimal configurations for the desired objectives (there can be more than two), and every sub-system has its own configurations; by summarizing all the configurations of the sub-systems, we can decide the optimal configuration for the whole system.

Table 2. The Pareto front set for optimal configurations

Pareto  Temperature (°C)    Pressure (MPa)
1       760.446594238281    −122.043884277344
2       759.477335611979    −128.239791870117
3       760.135518391927    −133.403991699219
4       761.015319824219    −129.302337646484
5       760.051513671875    −135.233795166016
6       761.396830240885    −126.657180786133


Fig. 8. Pareto front of the subsystem

5 Conclusion

The intelligent control system for optimal production is a complex process whose complexity increases or decreases with the type of manufacturing, the number of processes, and the number of levels in the hierarchical structure. As a result, intelligent control for optimal manufacturing is carried out in two main parts: an analysis part and a resolution part. The purpose of the first part is to identify the structure of the manufacturing control system with all its control factors at each level of the hierarchy, which gives us a vision of how the processes behave as those factors change. The second part applies our method to determine the optimal solution (set of solutions) as a Pareto front. Although this article addressed the optimality of oil manufacturing, its conclusions can be extended to the processing industry worldwide.

References

1. Bagajewicz, M., Ji, S.: Rigorous targeting procedure for the design of crude fractionation units with pre-flashing or pre-fractionation. Ind. Eng. Chem. Res. 41(12), 3003–3011 (2002)
2. Bagajewicz, M.J.: Energy savings horizons for the retrofit of chemical processes. Application to crude fractionation units. Comput. Chem. Eng. 23(1), 1–9 (1998)
3. Bansal, S., Darbari, M.: Multi-objective intelligent manufacturing system for multi machine scheduling. Int. J. Adv. Comput. Sci. Appl. 3(3), 102 (2012)
4. Benki, A.: Méthodes efficaces de capture de front de Pareto en conception mécanique multicritère: applications industrielles, p. 153 (2014)
5. Cheikh, M., Jarboui, B., Loukil, T., Siarry, P.: A method for selecting Pareto optimal solutions in multiobjective optimization, p. 12 (2010)
6. Contreras-Leiva, M.P., Rivas, F., Rojas, J.D., Arrieta, O., Vilanova, R., Barbu, M.: Multi-objective optimal tuning of two degrees of freedom PID controllers using the ENNC method. In: 2016 20th International Conference on System Theory, Control and Computing (ICSTCC), pp. 67–72. IEEE, Sinaia (2016)
7. Dipama, J.: Optimisation multi-objectif des systèmes énergétiques, p. 205 (2010)
8. Dong, J.D., Cheng, A.C., Juan, D.C., Wei, W., Sun, M.: PPP-Net: platform-aware progressive search for Pareto optimal neural architectures, p. 4 (2018)
9. Fieldsend, J.E., Singh, S.: Pareto evolutionary neural networks. IEEE Trans. Neural Netw. 16(2), 338–354 (2005)
10. Zhao, H., Lee, T.-T.: Research on multi-objective optimization control for nonlinear unknown systems. In: The 12th IEEE International Conference on Fuzzy Systems, FUZZ 2003, pp. 402–407. IEEE, St. Louis (2003)
11. Kripke, S.A.: Semantical analysis of modal logic I: normal modal propositional calculi. Math. Logic Q. 9(5–6), 67–96 (1963)
12. Nguyen, T.T.: A multi-objective deep reinforcement learning framework, p. 17 (2018)
13. Oujebbour, F.Z.: Méthodes et applications industrielles en optimisation multi-critère de paramètres de processus et de forme en emboutissage, p. 183 (2014)
14. Pham, N.K., Kumar, A., Aung, K.M.M.: Machine learning approach to generate Pareto front for list-scheduling algorithms. In: Proceedings of the 19th International Workshop on Software and Compilers for Embedded Systems – SCOPES 2016, pp. 127–134. ACM Press, Sankt Goar (2016)
15. Meza, G.R., Ferragud, X.B., Saez, J.S., Durá, J.M.H.: Background on multiobjective optimization for controller tuning. In: Controller Tuning with Evolutionary Multiobjective Optimization, vol. 85, pp. 23–58. Springer, Cham (2017)
16. Rivals, I., Personnaz, L., Dreyfus, G., Ploix, J.L.: Modélisation, classification et commande par réseaux de neurones: principes fondamentaux, méthodologie de conception et illustrations industrielles, p. 42 (1995)
17. Roijers, D.M., Whiteson, S., Vamplew, P., Dazeley, R.: Why multi-objective reinforcement learning? p. 2 (2015)
18. Saad, I., Benrejeb, M.: Optimisation multicritère par Pareto-optimalité de problèmes d’ordonnancement en tenant compte du coût de la production, p. 8 (2006)
19. Schweidtmann, A.M., Clayton, A.D., Holmes, N., Bradford, E., Bourne, R.A., Lapkin, A.A.: Machine learning meets continuous flow chemistry: automated optimization towards the Pareto front of multiple objectives. Chem. Eng. J. 352, 277–282 (2018)
20. Shir, O.M., Chen, S., Amid, D., Boaz, D., Anaby-Tavor, A., Moor, D.: Pareto optimization and tradeoff analysis applied to meta-learning of multiple simulation criteria. In: 2013 Winter Simulations Conference (WSC), pp. 89–100. IEEE, Washington (2013)
21. Zhang, T., Owodunni, O., Gao, J.: Scenarios in multi-objective optimisation of process parameters for sustainable machining. Procedia CIRP 26, 373–378 (2015)
22. Zilouchian, A., Jamshidi, M. (eds.): Intelligent Control Systems Using Soft Computing Methodologies. CRC Press, Boca Raton (2001)

Intelligent Cyber Physical Systems for Industrial Oil Refinery

Wenjia Ma and Viacheslav Shkodyrev

Peter the Great St. Petersburg Polytechnic University, St. Petersburg, Russia
[email protected]

Abstract. In our work we develop the theory and applications of cyber-physical systems (CPS) as a new integrated technological platform for a hybrid information management environment, focused on solving a wide class of applied problems. A new method of multi-objective optimization based on Pareto optimality, using genetic algorithms and BP neural network models, is proposed. As an example, we consider the task of multi-objective optimization of the control of the technological production of oil distillation at an oil refinery.

Keywords: Cyber-physical systems · Evolutionary algorithm · BP neural network · Pareto optimal · Multi-objective optimization

1 Introduction

Cyber-physical systems (CPS) are a new technological paradigm that integrates different functionalities: information-telecommunication, computational and physical capabilities that can interact through many new modalities. “Deep (synergetic) integration” of the basic mechanisms of intellectualization, networking and cognitive self-organization of knowledge bases becomes a key enabler for effective technology development. The main goal is to develop the concept and principles of creating CPS as an integrated technological platform for a new hybrid information and control environment, which is the basis for creating a wide class of industrial automation and control systems (a new class of hybrid network systems and control technologies) focused on solving control problems in the face of uncertainty.

In the optimal design of machinery there are often multiple objectives and design schemes. Ideally, all objectives or decision schemes would simultaneously reach their optima within the constraints. However, due to the complexity of system problems, multiple targets often cannot reach their optimal values at the same time; some contradict each other, or even conflict. An evolutionary algorithm is an optimization method not based on a mathematical model; it is fast, simple, and able to find Pareto-optimal solutions. Among all such algorithms, the multi-objective evolutionary algorithm (MOEA) based on Pareto optimality uses the Pareto-optimality concept to search the entire solution space and obtain the Pareto-optimal solution set; it therefore occupies the most important position in the design and application of MOEAs. However, the existing multi-objective evolutionary algorithms are all proposed for situations with a definite analytical formula for the objective function. In engineering practice, researchers often have difficulty obtaining such a formula. The multi-layer feed-forward neural network with error back-propagation learning has a rigorous structure, a stable working state, and strong operability. Moreover, due to the introduction of a hidden layer, a three-layer nonlinear network can approximate any continuous function with arbitrary precision, so it has been widely used in many fields such as pattern recognition, nonlinear mapping, and prediction. It can be inferred that if a neural network model is used to replace the corresponding sub-objective function, the above problems may be better solved. Therefore, this paper proposes a multi-objective optimization algorithm based on a neural network and Pareto optimality.

1.1 Composite of Genetic Algorithm and BP Neural Network

1) Modeling. Before using the multi-objective evolutionary algorithm, we first need to abstract the actual problem into a generalized multi-objective optimization problem, including determining the objective function, constraints and decision variables, to establish a standardized multi-objective model. In particular, when the relationship between the decision variables and the objective function is very complicated, it is difficult to describe it accurately with a mathematical analytical formula. In this work, a suitable multi-layer BP neural network is used to analyze the historical data, and this neural network model can accurately reflect the relationship between them.

2) Model solution. When using the Pareto-based multi-objective evolutionary algorithm to solve specific problems, the algorithm must be designed according to the actual situation. The main tasks include: individual coding form and initialization, genetic operator design (including crossover and mutation operators), selection strategy, fitness assignment strategy, and population diversity maintenance strategy.

3) Strategic analysis. The Pareto-optimal solution set obtained by the model is an objective trade-off for the actual multi-objective problem, and it cannot be used directly as a basis for decision making. It is necessary to evaluate the obtained optimal solutions using the expert knowledge and experience of the decision makers, select the solution that best meets objective reality, and design the decision variables accordingly.

In summary, Fig. 1 shows the process of the proposed multi-objective optimization algorithm based on a neural network and Pareto optimality.

Fig. 1. Flow chart of composite of Genetic Algorithm and BP neural network

1.2 The Main Content of the Algorithm

1) Coding form and initialization of each individual. Individual coding uses a real-number coding scheme. For real-valued optimization problems, floating-point representations perform better than binary because they have better consistency and accuracy, resulting in faster execution. Decision variables are encoded with real numbers; each individual in the population is represented as

x = (x_1, x_2, …, x_n).   (1)

2) Fitness assignment. In the Pareto-based multi-objective genetic algorithm, the most common approach is rank-based fitness assignment. The operation steps are as follows:
a. Calculate the value of each sub-goal of the target vector.
b. Rank all individuals in the population based on the value of rank(x_u, n).
c. Use linear or non-linear interpolation between the lowest rank (non-inferior optimal individuals) and the highest rank.
d. For individuals with the same rank, apply the fitness sharing operator: a new fitness value is obtained by dividing by the number of individuals with the same rank.

3) Selection strategy. The design of a multi-objective evolutionary algorithm has two basic goals:
a. make the evolutionary process search towards the Pareto set;
b. maintain the diversity of the non-inferior solution set.
These two objectives must be respected when formulating a selection strategy. In this work we use a mixed strategy of a tournament selection operator and a crowding comparison operator. Tournament selection is a method of selecting an individual from a population in a genetic algorithm: several “tournaments” are run among a few individuals (or “chromosomes”) chosen at random from the population, and the winner of each tournament (the one with the best fitness) is selected for crossover.

4) Genetic operator. Simulated binary crossover (SBX) is a real-parameter recombination operator commonly used in the evolutionary algorithm (EA) literature. The operator involves a parameter that dictates the spread of offspring solutions relative to that of the parent solutions.
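The two operators just described can be sketched as follows; the population size, tournament size k and distribution index eta are illustrative, and the SBX form shown is the common textbook variant.

import numpy as np

rng = np.random.default_rng(0)

def tournament_select(pop, fitness, k=2):
    # Pick k random individuals; return the fittest (higher is better).
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[idx[np.argmax(fitness[idx])]]

def sbx_crossover(p1, p2, eta=15.0):
    # Simulated binary crossover; eta controls the spread of the
    # offspring around the parents.
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

pop = rng.uniform(0.0, 1.0, size=(10, 2))   # real-coded individuals
fitness = rng.random(10)
parent_a = tournament_select(pop, fitness, k=3)
parent_b = tournament_select(pop, fitness, k=3)
child1, child2 = sbx_crossover(parent_a, parent_b)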

1.3 Composite of Genetic Algorithm and BP Neural Network

The specific steps of the GA-BP algorithm are as follows:

1) Initialize the BP network model: determine the numbers of inputs and outputs X, Y, the number of layers L, the number of training epochs N, etc.; set the initial learning accuracy ε > 0; set the adjustment parameters r1, r2, the population size P, the maximum number of iterations T, the crossover probability Pc, the mutation probability Pm, and the tournament size K; set the cumulative iteration number t = 0.
2) Use the three-layer BP neural network model to train on historical data and obtain the objective function relationship models M = {M1, M2, …, Mn}, where n is the number of objective functions.
3) Choose the corresponding model from M and calculate the target vector values of all individuals in P(t). Sort the non-inferior solutions according to the fast non-dominated sorting algorithm proposed by Deb.
4) Calculate the crowding distance of all individuals.
5) Using a mixed strategy of the tournament selection operator and the crowding comparison operator, select popsize/2 individuals from P(t) into the mating pool P'(t).
6) Perform crossover and mutation operations on the individuals in P'(t) to obtain the offspring population Q(t).
7) Mix the individuals in P(t) and Q(t) together and reorder them to get the new parent population

P(t); t = t + 1.   (2)

8) If t ≥ gen, end the learning process; otherwise, return to step 3.
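A greatly simplified, runnable sketch of this loop is given below: a synthetic function stands in for the trained BP models M, and SBX, tournament selection and crowding distance are collapsed into random perturbation plus dominance-based truncation, so the sketch shows the surrogate-in-the-loop structure rather than the full machinery.

import numpy as np

rng = np.random.default_rng(1)

def nondominated_mask(F):
    # True for individuals not dominated by any other (minimization).
    n = len(F)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if j != i and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                mask[i] = False
                break
    return mask

def surrogate(P):
    # Stand-in for the trained models M mapping (pressure, temperature)
    # to two conflicting objectives to minimize; the analytic form is a
    # placeholder for the BP network.
    f1 = (P[:, 1] - 158.0) ** 2 + 10.0 * (P[:, 0] - 0.16) ** 2
    f2 = (P[:, 1] - 154.0) ** 2 + 10.0 * (P[:, 0] - 0.16) ** 2
    return np.column_stack([f1, f2])

def ga_bp(pop_size=40, generations=50):
    lo, hi = np.array([0.1, 150.0]), np.array([0.2, 160.0])
    P = rng.uniform(lo, hi, size=(pop_size, 2))                # step 1
    for _ in range(generations):                               # steps 3-8
        Q = np.clip(P + rng.normal(0.0, 0.01, P.shape) * (hi - lo), lo, hi)
        R = np.vstack([P, Q])                                  # step 7: mix P(t), Q(t)
        F = surrogate(R)                                       # steps 2-3: model evaluation
        P = R[np.argsort(~nondominated_mask(F))][:pop_size]    # non-dominated first
    return P

final_pop = ga_bp()
print(nondominated_mask(surrogate(final_pop)).sum(), "non-dominated solutions")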

1.4 Practical Application

Growing market competition, along with toughening environmental regulations, has been making oil refiners look for new ways of saving costs and improving margins. However, due to the lack of a computationally efficient technique for automatic scheduling, commercial software tools for short-term scheduling are scarce and strongly dependent on simulation; in oil refineries, scheduling is mainly done manually via trial and error. In our task, the optimization model describes the characteristics of the oil refinery process with two decision variables and two objective functions, productivity and quality. The main task of the refinery is to efficiently produce qualified fuel oil, gasoline and other products; that is, under the given raw fuel and economic conditions, to achieve high output and qualified quality for maximum economic benefit. The optimization model of the refining process should therefore fully reflect these requirements. Let x1 be pressure, x2 temperature, y1 quality and y2 productivity. Figure 2 shows scatter plots of all variables (pressure, temperature, quality and productivity).


Fig. 2. Scatter plot and 3-D plots: (a) scatter plot matrix of each variable; (b) 3-D plot of productivity; (c) 3-D plot of quality

First, we try to use regression to obtain the relationship between the decision variables and the objectives. Figure 3 shows the results of linear and non-linear regression models. The regression model may provide some conclusions for the decision maker, but the figure also shows that it cannot fully describe the relationship between the variables and the goals. For such a complex model, applying a neural network can provide much more predictive power than traditional regression.


Fig. 3. Regression models: (a, b) linear regression; (c, d) non-linear regression

Using a BP network, we train and build the network model as follows:

y1 = f1(x1, x2),
y2 = f2(x1, x2).   (3)
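One possible concrete reading of Eq. (3) is a small feed-forward (BP) network with two inputs and two outputs; the sketch below uses Keras, and the synthetic data and layer sizes are assumptions standing in for the refinery’s historical records and the structure shown in Fig. 4.

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(42)
X = rng.uniform([0.1, 150.0], [0.2, 160.0], size=(1000, 2)).astype("float32")
Y = np.column_stack([
    7000.0 + 1500.0 * X[:, 0],    # y1: quality (placeholder relation)
    110.0 - 50.0 * X[:, 0],       # y2: productivity (placeholder relation)
]).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")
# A tf.keras.callbacks.TensorBoard(log_dir="logs") callback can be
# added to fit() to log the run for visualization in TensorBoard.
model.fit(X, Y, epochs=50, batch_size=32, verbose=0)
print(model.predict(np.array([[0.16, 156.0]], dtype="float32")))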

Generally, our optimization model is defined as

max y = f(x) = (f1(x), f2(x)),  x = (x1, x2).   (4)

Fig. 4. BP neural network structure using TensorBoard

Figure 4 shows the network structure we use to train the data set.


Fig. 5. Accuracy and loss in training and test

Figure 5 shows the accuracy and loss over the iterations. With this model we obtain an accuracy of about 96% on the test set. We then set the maximum number of iterations T = 1000, the population size P = 200, the crossover probability Pc = 0.5 and the mutation probability Pm = 0.5, and apply our algorithm to the training set. The variable combinations and optimal results are shown below.

Fig. 6. Scatter plot and Pareto front

We use several indices to evaluate the model, such as hypervolume (HV) and inverted generational distance (IGD). From Fig. 7 we can see that the value of HV reached 0.55 after 1000 generations, which means the convergence and diversity of the final Pareto-optimal set perform well. The smaller the IGD value, the better the diversity and convergence of the Pareto-optimal set; the figure shows that after 200 generations the performance has already reached a high level.
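IGD can be computed in a few lines, as sketched below; the reference front here is a toy example, since the true front of the refinery problem is unknown.

import numpy as np

def igd(reference, obtained):
    # Inverted Generational Distance: the average distance from each
    # point of a reference front to its nearest obtained solution
    # (lower is better).
    ref = np.asarray(reference, dtype=float)
    obt = np.asarray(obtained, dtype=float)
    dists = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)
    return dists.min(axis=1).mean()

ref = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
obt = [[0.1, 1.0], [0.6, 0.5]]
print(round(igd(ref, obt), 3))  # ≈ 0.28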

Fig. 7. (a) Curve of HV by number of generations; (b) curve of IGD by number of generations

Table 1. Set of Pareto-optimal solutions for process C-104

No.  Quality  Pressure       Temperature     Productivity
1    7013     0.1628866941   156.4090576172  101.9345322
2    7096     0.1622514576   156.6374511719  101.7902374
3    7202     0.1633187532   157.9031372070  101.2913132
4    7207     0.1639747471   157.4445495605  100.1697388
5    7208     0.1637909561   156.6460571289  98.07221985
6    7244     0.1645954549   156.8072052002  97.49977112
7    7250     0.1639635861   156.8843078613  96.07769775

Seven Pareto-optimal schedules are shown in Table 1; for each schedule, the objectives (quality and productivity) and decision variables (pressure and temperature) are given. The corresponding Pareto-optimal front is presented in Fig. 6; schedules that are not Pareto-optimal are shown in blue. Among these seven Pareto-optimal sets we cannot say which one is the best; in practice, the decision-maker selects a compromise under the specific conditions. Table 2 shows the result of single-objective optimization with quality or productivity as the sole target. Comparing the multi-objective results with the single-objective results, we find that the multi-objective solution is a compromise between the solutions of the individual single-objective problems: when one objective function reaches its maximum, the value of the other tends to be smaller. If we pursue only the best quality, productivity can be really low; if the highest productivity is the only goal, quality will be poor. Therefore, only by considering quality and productivity together can we obtain more possible results with better technical indicators.

Table 2. Result of single-objective optimization

                           Quality  Pressure     Temperature    Productivity
Quality optimization       7250     0.163963586  156.884307861  96.07769775
Productivity optimization  7013     0.162886694  156.409057617  101.9345322

Secondly, the results obtained in this work are Pareto solution sets, which give the operator a variety of system control options. For example, if productivity needs to stay below the value of 101, the operator can read from the Pareto set the range of pressure and temperature settings that keeps quality optimal at the same time.

2 Conclusion

In this work we define the problem of managing complex technical systems and technological complexes of an oil refinery under significant uncertainty about the state of the control object. In particular, we focus on the class of industrial automation systems aimed at highly efficient production process control via multi-objective optimization of complex industrial systems. Because the relationship between the decision variables and the objectives is too complicated, it is difficult to establish an explicit objective function, so the multi-objective evolutionary algorithm cannot be applied directly to some practical problems. This paper proposes a new algorithm that uses a multi-layer BP neural network: the neural network model trained on historical data can be used as the objective function, without an explicit analytical formula. The proposed algorithm was successfully applied to the crude oil processing problem, and satisfactory results were achieved.


Modal Logic of Digital Transformation: Relentless Pace to “Exo-Intellectual” Platform

Vladimir Zaborovskij¹ and Vladimir Polyanskiy²

¹ Peter the Great St. Petersburg Polytechnic University, Saint-Petersburg, Russia
[email protected]
² Institute for Problems in Mechanical Engineering, Russian Academy of Sciences, Saint-Petersburg, Russia

Abstract. It is generally accepted that the digital transformation of the economic and production infrastructure of today’s society is a natural result of the evolution of technologies in which information processing plays the leading role, building “smart” machines capable of autonomously performing tasks that typically require human intelligence. Machine mind, consciousness and artificial intelligence (AI) are now priority interdisciplinary branches of natural and computer science, with multiple approaches and advancements in virtually every sector of the production and entertainment industry. The popularity of the idea of artificial intelligence is due to several factors: computer technology has become prevalent, and digital transformation has touched nearly every corner of the human world, science and technology. But there is a threat that the chaotic, relentless pace of human intellectual development may push technological evolution toward the unstable attractor known as the “flickering mind” [1]. The article analyses the role that logic and ethics can play in the process of technology development, especially with regard to fundamental research and the adaptation of the education system to new challenges associated with the symbiosis of human intelligence and the capabilities of computer modeling to predict possible paths of evolution. This symbiosis forms a new technological reality, whose essence can be expressed by the word exo-intelligence, meaning computer technologies that can simulate cognitive functions: a new class of mathematical objects whose phenomenological features are discussed below.

Keywords: Technology development · Artificial intelligence · Machine learning · Exo-intelligence · Cognitive functions

1 Introduction

The global trend towards digitalization of the human environment, combined with the concept of an “open society”, has had a significant impact on the priorities of modern science, technology, the education system and society as a whole. As a result, along with such new constructive concepts as “machine learning”, “digital twins” or “digital conversion”, some negative assessments of the results of digitalization have become widespread, namely “computer moronity”, “digital dependence” and, finally, “digital slavery”. There is no doubt that any process of technology renovation, the essence of which is to complement human capabilities to interact with society and the environment, manifests various positive and negative consequences; the development process itself, following the laws of risk preservation in complexly organized systems, is in a state of “stable nonequilibrium”. Such asymptotically unstable processes can easily be influenced by various factors and development scenarios, which leads to the formation of strange evolutionary attractors.

A key aspect of the modern stage of evolution can be defined through the concept of “artificial intelligence” (AI), considered as a set of technological solutions that make it possible to simulate various cognitive functions while obtaining solutions comparable to the results of human intellectual activity. In fact, the problem boils down to a description of the operational and associative aspects of intelligence, but the recursive definition of AI given above inherently allows various interpretations. In order to give this concept a constructive aspect, we conduct a retrospective analysis of this problem while considering modern approaches and methods for modeling the functions that characterize human intellectual abilities. First of all, we note that the understanding of “intelligence”, and of the possibilities of emulating it with various mechanisms, has changed significantly in the course of the development of scientific knowledge. Until almost the middle of the 19th century, simple counting tasks were considered quite intellectual: if a person was able to calculate quickly and without errors using well-known mathematical formulas and, on this basis, manage, for example, commercial activities, then his intellectual abilities were rated very highly. However, thanks to the formalization of calculation algorithms, mathematical calculation problems were among the first to be automated. In 1822, C. Babbage created the so-called Difference Engine, which automated the calculation process by approximating various mathematical functions with polynomials using a strictly ordered sequence of operations. Obviously, the operation of such a mechanical automaton can hardly be considered intellectual, but compiling the sequence of operations that leads to the correct solution is, of course, an intellectual task in the most literal sense of the word. In fact, such an automaton, considered together with a person who defines goals and selects calculation algorithms, forms a natural “intellectual” system, to which one can apply the Greek verb (kybernáo, “to steer”) that the philosopher Plato used to express a person’s ability to control various material objects in order to realize his plan. In the 20th century, after the work of A. Turing, N. Wiener and J. von Neumann, it became clear that individual computational operations are mechanical manipulations of chains of symbols according to strictly defined rules; therefore, only a complex of technological solutions can be recognized as really “intelligent”, in which a person, explicitly through text or implicitly through cognitive functions, determines the meaning of the operations and evaluates the achievability of the results. Therefore, subsequent efforts at the intellectualization of technologies were aimed at developing solutions that map the associative capabilities of human intelligence onto the operational capabilities of high-performance computers and software.

Experience with the implementation of such intelligent algorithms has shown that their effectiveness depends not only on the syntactic but also on the semantic features of texts in natural languages, on the structure of the knowledge bases, and on the effectiveness of machine learning algorithms. However, attempts to generalize these efforts to a wide range of applied problems did not lead to disruptive results, because they could not formalize the ability of intelligent algorithms to self-organize, to develop evolutionarily, or to form solutions based on a “similarity” criterion in the description. The developers of such systems proceeded from the false promise that the processes of “intellectualization” can be formalized and represented by a finite set of algorithms and programs characterizing a person’s creative activity. At the same time, strangely enough, the greatest practical successes were achieved using methods that are, in principle, not characteristic of human intelligence but are based on “crude computing power”: in other words, on the possibility of quickly enumerating feasible solutions to the problem under consideration. Due to the natural limitation of computational speed and the exponential growth of the combinatorial complexity of enumerating options, the problem of algorithmic solvability was formulated in theoretical computer science, and a measure of the complexity of problems was determined [2–4]. The algorithmic unsolvability of a particular class of tasks does not mean the impossibility of solving a specific problem from this class; it means only the impossibility of solving all the problems of this class with one algorithm. Naturally, the measure of the complexity of tasks is characterized by the amount of computation required to obtain solutions. It is generally accepted that there are problems of two complexity classes: 1) problems polynomial with respect to the size of the input data, or class P, whose complexity is estimated as O(n^m), where n is the dimension of the input data and m is a constant independent of n; 2) non-deterministic polynomial problems, or class NP, whose complexity is estimated as O(2^n). The latter include the so-called inverse (ill-posed) problems, for whose solution the idea of “regularization”, the addition of extra restrictions, is used. Taking the constraints into account allows one to choose, from the set of “possible” solutions, the one that approximates with guaranteed accuracy a solution that meets the “physical” meaning of the problem being solved. It is the search for “similar” approximate rather than exact solutions to the original problem that characterizes the essence of the problem of intellectualization, in terms that make up the essence of the paradigm of computer science: computo ergo sum, in other words, “what can be calculated exists”. In accordance with the above, the computational complexity of solving “intellectual” problems (such as pattern recognition or object classification), along with the speed of light in a vacuum, Planck’s constant, or the mathematical constants, can be considered one of the fundamental characteristics of Nature. Obviously, for any “intellectualization” problem formulated as an inverse problem there are, from the point of view of computer science, many different solution algorithms. Therefore, such “intellectualization” can be realized as an inductive or a deductive procedure: in the first case, using neuromorphic computational structures and machine learning methods, and in the second, by “assembling” target algorithms from previously validated fragments. In the process of such an assembly, various aspects of parametric, resource, or situational uncertainty are “regularized” through the exchange of declarative knowledge, which is presented in the form of an ontology of classes and hierarchical relations between them. In this context, A. Turing’s fundamental thesis can be rephrased as follows: digital automata can imitate the operation of digital automata; therefore, to represent processes in AI systems, all of them must first be decomposed into 1) algorithms for solving the problems of “generating” algorithms and 2) algorithms for calculating the target subset of the processed “big data”. As a result, the power of the set of processed data becomes comparable with the power of the set of processing algorithms.


2 Materials and Methods

The fundamental scientific and technological problem of the 21st century is the ability to simulate human intelligence using digital computer systems. Obviously, the intellectual abilities of a person are realized by means of so-called cognitive functions, which generate different processes in the brain: perception, processing, memorization and exchange of information. The formal difference between human intelligence and modern AI systems is usually associated with the absence in the latter of the cognitive functions of awareness and emotion. It is the cognitive functions of the brain that determine the actions by which a person a) cognizes and b) interacts with the outside world. We define the classes of actions that, in principle, can be implemented using computational operations, namely:

• forecasting results;
• speech or texts for the exchange of information;
• attention or processing of sensory data;
• digital or eidetic memory;
• gnosis (orientation in space and time);
• praxis (target activity).

In principle, the phenomenological component of the problem of imitating cognitive functions can be reduced to the search for common structural features in the organization of the brain and of the computers that are used to solve the "classical" problems of computer science (see Fig. 1):
• the possibility of an algorithmic description of computing functions;
• making the decision to stop the calculations, reflecting praxis or target actions;
• evidence of the "solvability" of the problem under consideration using operations that implement the algorithm for computing the characteristic function of the target set.
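As a minimal sketch of the last item, a characteristic function of a target set can be composed from decidable restrictions; the three constraints below are purely hypothetical and serve only to illustrate the construction, they are not taken from the paper:

from typing import Callable, Iterable

Constraint = Callable[[float], bool]

# Hypothetical, algorithmically defined restrictions (purely illustrative).
constraints: list[Constraint] = [
    lambda x: x >= 0.0,            # assumed resource restriction
    lambda x: x <= 100.0,          # assumed situational restriction
    lambda x: int(x) % 2 == 0,     # assumed structural restriction
]

def characteristic_function(x: float, restrictions: Iterable[Constraint]) -> int:
    """Return 1 if x belongs to the target set (every restriction holds), else 0."""
    return int(all(r(x) for r in restrictions))

print([characteristic_function(v, constraints) for v in (-1.0, 4.0, 7.0, 42.0)])  # [0, 1, 0, 1]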

Fig. 1. The cognitive function of "understanding": data describing physical reality ("space-time and material objects") are assigned "coordinates" - attributes that lead to the "coordinatization" of physical reality within a space of concepts, computable objects, and algorithms.


The solution of the above "classical" problems is produced by people either explicitly - on the basis of informal methods, i.e. methods not reducible to algorithms - or implicitly, on the basis of the algorithm of a previously solved formal problem to which the applied problem can be reduced, using for this a combination of partially recursive functions physically implemented on a digital computer. In this case, the computer acts as a component of the decision-making infrastructure. This infrastructure makes it possible to formulate a solution to the problem using the capabilities of human intelligence, represented by a combination of various cognitive functions, together with the hardware and software resources of computers that implement computational algorithms. In fact, such an infrastructure can be called an exo-intellectual platform, in which part of the cognitive functions can be algorithmized using existing methods for solving inverse problems, thereby increasing its degree of automation. Ultimately, this allows the problem of the functional adaptation of the "exo-intellectual" platform to be considered from the perspective of forming a limited set of algorithms for solving "inverse problems", for which the input is data available for computer processing, obtained by sensors or receptors, and the result is the target algorithm of actions, represented by the logical structure of the characteristic function of the target set under consideration, whose elements correspond to algorithmically defined restrictions. Given that inverse problems in the general case do not have a unique solution, and that some of the solutions cannot guarantee the stability of the closed system as a whole, the real capabilities of the exo-intellectual platform will depend significantly not only on functional or spatial-situational restrictions, but also on ethical restrictions, the carrier of which is the person himself. Attempts to generalize these efforts to a wide range of applied problems, although they had some success, did not lead to breakthrough technological results: they could not include in the description of the properties of intelligent systems their ability to self-organize, to develop evolutionarily, or to find solutions based on compromises, and they proceeded from the idea that the processes of "intellectualization of tasks" should be algorithmically formalizable and should, at their basis, imitate similar human activities.
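To make the regularization idea tangible, the following NumPy sketch (ours, not the authors' method; the matrix, noise level and penalty weight alpha are arbitrary assumptions) shows how a Tikhonov penalty selects, from the many near-solutions of an ill-posed system Ax ≈ b, one with guaranteed stability:

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
A[:, 1] = A[:, 0] + 1e-8 * rng.normal(size=20)   # nearly dependent columns: ill-posed
x_true = rng.normal(size=10)
b = A @ x_true + 1e-3 * rng.normal(size=20)      # noisy observations

alpha = 1e-2  # regularization weight (an assumed, problem-dependent constant)
# Tikhonov-regularized normal equations: (A^T A + alpha I) x = A^T b
x_reg = np.linalg.solve(A.T @ A + alpha * np.eye(10), A.T @ b)
x_naive, *_ = np.linalg.lstsq(A, b, rcond=None)  # unregularized least squares

print("||x_reg||   =", np.linalg.norm(x_reg))    # stays close to ||x_true||
print("||x_naive|| =", np.linalg.norm(x_naive))  # typically blows up along the ill-conditioned direction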

3 Results

Currently, the greatest practical successes in solving intellectual problems have been achieved using methods that, in principle, are not characteristic of people, but are based on "brute computing force" or, in other words, on the ability to quickly sort through possible solutions, as, for example, in playing chess. Obviously, for any "intellectualization" problem formulated as the solution of an inverse problem, there are many different options for representing solutions in the form of a finite sequence of machine operations. In practice, such a transformation can be implemented in both an inductive and a deductive manner. In the first case, artificial neuromorphic computational structures are used, which are "programmed" by machine learning methods on the basis of previously classified datasets; in the second case, digital platforms use target software libraries assembled from previously validated fragments. To concretize the task of building exo-intelligent platforms, we will rely on data aggregation and classification methods based on similarity criteria for objects or processes, grounded in functional or structural associations, and on the search for solutions based on a heterogeneous symbiosis of biological and artificial intelligence resources (Fig. 2).



Fig. 2.

The basis of the technology for combining the computing resources of computer systems with the capabilities of human intelligence to solve P- or NP-complex tasks is the concept of the mathematics of "big data". The "computational field" of this new mathematics consists of: 1) a tuple of "data-algorithms"; 2) their meta-characterization; and 3) the resources of distributed heterogeneous reconfigurable computer platforms, whose nodes are connected into a consistent system by "smart" data paths. Exploiting the heterogeneity of calculations, i.e. the use of processor elements with different architectures and operation sets, namely CPU, GPU, TPU and FPGA, makes it possible to effectively scale, both "horizontally" and "vertically", various algorithms for "extracting knowledge" from the processed data, "adjusting" them to the situational context; this applies, in particular, to the algorithms used to assess the potential threats and risks associated with the implementation of "calculated" solutions and with the accuracy of the computer models used. It should be noted that, owing to heterogeneity and functional scaling, in exo-intelligent platforms data processing algorithms oriented to the processor-centric approach can be effectively combined with memory-centric solutions. This allows simulating cognitive functions that can be characterized as "computational insight", based on mechanisms that speed up search algorithms by indexing the target data warehouse, which potentially contains the whole spectrum of responses to correctly formulated queries. A similar mathematical processing technology is now widely used in the modern search engines Yandex and Google, successfully modeling the functions of intellectual activity that are usually associated with the concept of intuition. However, the technical implementation of this technology remains "flat": the adaptation of platform elements occurs only at the software level, whereas the hardware components are assembled from standard industrial systems. This leads to a decrease in integrated energy-computational efficiency, because all modern processor elements at the macro level are built on the basis of AND-NOT (NAND) or OR-NOT (NOR) logical gates, so any computational operation reduces the informational entropy of the processed data, releasing thermal energy of no less than Q = k·T·ln 2 joules per bit, where k is the Boltzmann constant and T is the temperature of the system.


This energy is itself small: Q at T = 300 K is about 0.018 eV per bit. However, multiplied by the number of logic elements (LE) in a modern microprocessor (MP), which is 2–5 × 10^10, the total energy at an LE switching frequency of 5 GHz grows to values of the order of 1 J for each second of MP operation. Therefore, if modern microprocessors were combined into a supercomputer cluster in an attempt to simulate the work of the human brain, which includes about 1.5 × 10^17 LE, the energy costs would exceed the level of practical expediency of using computer technologies, and such an attempt can therefore be of purely scientific interest only. Although reconfigurable FPGA-based computers have a higher specific energy efficiency of digital elements than MIMD microprocessors and SIMD graphics accelerators, FPGAs operate at much lower frequencies; this, however, expands the synthesis capabilities, allowing one either to reduce unit energy costs while achieving comparable performance or, at the same energy costs, to obtain greater data processing throughput through the use of special architectural solutions [6, 7, 9]. Achieving the technical and economic efficiency of exo-intelligent platforms requires the search for solutions balanced in the aspect of "standardization-specialization", which, along with the effective implementation of standard computing procedures, have the resources necessary for using machine learning technologies and for reconfiguring hardware accelerators, taking into account the structural features of the implemented algorithms. In view of the foregoing, the transition to technologies of hyperconverged clustering of heterogeneous processors, memory-class storage devices and "smart" data channels endowed with intelligent processing functions, using specialized processors optimized for processing packet traffic, requires, for the creation of exo-intelligent platforms, the development of universal heterogeneous computing modules that allow "vertical" and "horizontal" functional integration with the allocation of mutually agreed levels of "processing", "aggregation" and "explanation" of calculation results. An exo-intelligent solution based on a hyperconverged processor/storage platform differs significantly from well-known approaches implemented within the "one program - many data" (SPMD) model, from the phenomenological Amdahl-Ware law for programs with an invariable proportion of serial and parallel computing, and from the Gustafson-Barsis law for programs whose complexity may grow with the volume of processed data, because it is based on hardware reconfiguration methods supported by machine learning algorithms. The use of computational acceleration nodes as a basic component of the heterogeneous software-hardware reconfigurable platform extends its functionality by quickly adapting the hardware components to the features of the solution method, the algorithms, and the corresponding source code being executed at a given time (Fig. 3).
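The energy estimate above is easy to verify; the short Python check below (our addition) uses CODATA constants together with the logic-element counts assumed in the text:

import math

k_B = 1.380649e-23            # Boltzmann constant, J/K (CODATA)
eV = 1.602176634e-19          # joule value of one electronvolt

T = 300.0                     # system temperature assumed in the text, K
q_bit = k_B * T * math.log(2) # Landauer bound per processed bit
print(f"Q per bit: {q_bit:.3e} J = {q_bit / eV:.4f} eV")            # ~0.0179 eV

n_le = 3e10                   # logic elements per microprocessor (text: 2-5 * 10^10)
f_sw = 5e9                    # switching frequency, Hz
print(f"Per-MP dissipation floor: {n_le * f_sw * q_bit:.2f} J/s")   # ~0.4 J/s, order of 1 J

n_brain = 1.5e17              # 'logic elements' of the human brain (text's figure)
print(f"Brain-scale floor: {n_brain * f_sw * q_bit:.2e} W")         # ~2 MW

The brain-scale floor comes out at the megawatt level, which supports the conclusion about the practical expediency limit drawn in the text.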


Fig. 3.

4 Discussion

The relentless digital transformation of modern technologies, examples of which are supercomputer methods of predictive modeling, approximate optimization based on random search, and genetic algorithms based on meta-heuristics borrowed from nature, forms, strictly speaking, the components of a new exo-intellectual infrastructure for solving complex fundamental and applied tasks. This infrastructure is harmoniously supplemented by reinforcement machine learning methods, whose formal prototypes are the Robbins-Monro and Kiefer-Wolfowitz stochastic approximation methods, as well as other well-known numerical optimization methods based on random search. Although all these methods were actively developed long before the advent of AI tasks, their implementation on modern hyperconverged computing platforms opens up new possibilities for integrating the resources of the natural intelligence of people and the artificial intelligence of "smart" machines [10–14]. The main capabilities of hyperconverged high-performance computing platforms depend significantly on the balanced loading of all hardware components and on the correspondence of their architecture to the specific features of the application programs. Therefore, the proposed solution logic is the wide use of "machine learning" methods to reconfigure the hardware as well as the software components of the computing nodes and of the platform as a whole, in order to increase their real performance per unit of energy consumption.
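For reference, the Robbins-Monro scheme mentioned above fits in a few lines; this is a generic textbook sketch with a toy target function, not the platform's actual implementation:

import random

def robbins_monro(noisy_f, theta0, n_steps=10_000, a=1.0):
    """Stochastic root finding: theta_{n+1} = theta_n - a_n * F(theta_n, xi_n)."""
    theta = theta0
    for n in range(1, n_steps + 1):
        a_n = a / n              # step sizes satisfy sum(a_n) = inf, sum(a_n^2) < inf
        theta -= a_n * noisy_f(theta)
    return theta

random.seed(42)
# Toy regression function: E[F(theta, xi)] = theta - 2, observed with unit noise.
noisy = lambda t: (t - 2.0) + random.gauss(0.0, 1.0)
print(robbins_monro(noisy, theta0=10.0))  # approaches the root theta* = 2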

5 Conclusions

The idea of clarifying the essence of artificial intelligence in terms of computable cognitive functions and corresponding reconfigurable accelerators, whose architecture is chosen by "machine learning" methods taking into account the specific features of the implemented algorithms, is very attractive: it can significantly simplify the application of computer technologies through their harmonious integration with the associative capabilities of natural intelligence.


The discussed decision is associated with the design of an "exo-intellectual" platform that can adapt its computational resources to the specific features of the chosen algorithmic strategies by using a reconfigurable FPGA-based accelerator on a modular edge node. For the effective use of such a platform, deep interdisciplinary training of specialists is of particular importance; this requires the careful development of new-generation training programs that cover both the fundamental and the applied aspects of computer sciences and technologies, including the use of supercomputer methods for predictive modeling, the mathematics of big data, artificial intelligence and machine learning.

Acknowledgment. The authors are grateful to the Supercomputer Center 'Polytechnic' [15] for help in gaining access to the resources of the supercomputer. The reported study was funded by RFBR according to the research project №18-2903250 mk "Robust methods of synthesis of intelligent transport systems of cyberphysical objects coalition based on the Bayesian concept of probability and the modal logic".

References

1. Oppenheimer, T.: The Flickering Mind: Saving Education from the False Promise of Technology, p. 528. Random House Publishing Group, New York City (2007)
2. Antonov, A., Zaborovskij, V., Kalyaev, I.: The architecture of a reconfigurable heterogeneous distributed supercomputer system for solving the problems of intelligent data processing in the era of digital transformation of the economy. Cybersecurity Issues 33(5), 2–11 (2019). https://doi.org/10.21681/2311-3456-2019-5-02-11
3. Antonov, A., Zaborovskij, V., Kisilev, I.: Specialized reconfigurable computers in network-centric supercomputer systems. High Availab. Syst. 14(3), 57–62 (2018). https://doi.org/10.18127/j20729472-201803-09
4. Dongarra, J., Gottlieb, S., Kramer, W.: Race to exascale. Comput. Sci. Eng. 21(1), 4–5 (2019). https://doi.org/10.1109/MCSE.2018.2882574
5. Usman Ashraf, M., Alburaei Eassa, F., Ahmad Albeshri, A., Algarni, A.: Performance and power efficient massive parallel computational model for HPC heterogeneous exascale systems. IEEE Access 6, 23095–23107 (2018). https://doi.org/10.1109/ACCESS.2018.2823299
6. NVIDIA Tesla V100. https://www.nvidia.com/en-us/data-center/tesla-v100/. Accessed 19 Apr 2020
7. Xilinx FPGA. https://www.xilinx.com/. Accessed 01 Feb 2020
8. Intel FPGA. https://www.intel.com/content/www/us/en/products/programmable.html. Accessed 19 Apr 2020
9. IDE Vivado HLS. https://www.xilinx.com/video/hardware/vivado-hls-tool-overview.html. Accessed 19 Apr 2020
10. UltraScale and UltraScale+ FPGA product table (2019). https://www.xilinx.com/products/silicon-devices/fpga/virtex-ultrascale.html#productTable. Accessed 19 Apr 2020
11. Sorting Methods. https://www.mathworks.com/matlabcentral/fileexchange/45125-sortingmethods?focused=3805900&tab=function. Accessed 19 Apr 2020
12. Vitis_Libraries. https://github.com/Xilinx/Vitis_Libraries. Accessed 19 Apr 2020


13. Antonov, A., Besedin, D., Filippov, A.: Research of the efficiency of a high-level synthesis tool for FPGA-based hardware implementation of some basic algorithms for big data analysis and management tasks. In: Proceedings of the FRUCT '26, pp. 23–29, April 2020
14. Merge sort. http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort. Accessed 19 Apr 2020
15. Supercomputer Center 'Polytechnic'. https://www.top500.org/system/178469. Accessed 19 Apr 2020

Technology Predictions for Arctic Hydrocarbon Development: Digitalization Potential

Nikita Tretyakov1(✉), Alexey Cherepovitsyn1, and Nadejda Komendantova2

1 Saint-Petersburg Mining University, Saint-Petersburg, Russia
[email protected]
2 International Institute for Applied Systems Analysis, Laxenburg, Austria

Abstract. A key factor in the development of the Arctic is projects aimed at developing oil- and gas-bearing offshore and onshore fields, whose reserves are estimated at 13% of the world's proven reserves and 43% of its undiscovered reserves. Thanks to this huge mineral resource base, Arctic oil production and oil and gas engineering projects will in the coming years become drivers of oil and gas industry development worldwide. Efficient development of new fields under conditions of low temperatures, a harsh climate and a lack of necessary infrastructure is impossible without innovative technological solutions capable of ensuring hydrocarbon recovery and competitive energy prices on the market. The oil and gas industry is gradually becoming more progressive in terms of digital transformation, which makes it possible to state that a large customer is emerging in the market of innovative technological solutions, capable of becoming in the near future a locomotive in the field of digital technologies. The program of technological transformation requires a forecast of the application of digital technologies in oil and gas projects in the Arctic. Technological forecasts for the development of oil production and oil and gas mechanical engineering for the development of Arctic hydrocarbons have revealed the need to prioritize digital transformation with an emphasis on the development of remote drilling and production control systems, technologies for monitoring production facilities with the help of drones and machine vision, the creation of corporate data processing and analysis centers, and the use of artificial intelligence in the construction of effective logistics routes and the identification of hydrocarbon accumulation sites.

Keywords: Digital transformation · Arctic · Oil&gas · Innovation · Offshore · Organizational mechanism

1 Introduction

The development of Arctic oil and gas resources is carried out by the northern countries with access to shelf and marine areas, which are strategic reserves of mineral resources. According to the U.S. Geological Survey, approximate oil reserves are 83 billion barrels, and the volume of natural gas reaches 1,550 trillion cubic feet.


At the same time, the total value of recoverable mineral resources is estimated at USD 30 trillion, which shows the huge potential of oil production development in this region. The most significant reserves are located in territories owned by the Russian Federation or in regions claimed by the Russian Federation, with a total value of USD 20 trillion. The Russian Ministry of Natural Resources estimates that Russia's Arctic zone contains 7.3 billion tons of oil and 55 trillion cubic meters of natural gas. Russia's huge hydrocarbon reserves in the Arctic zone can meet the growing energy needs of the world. The hydrocarbon reserves are distributed as follows: the Barents Sea - 49%, the Kara Sea - 35%, the Sea of Okhotsk - 15%, the Baltic Sea - less than 1% [17, 18]. The size of the mineral resource base of the Arctic zone determines its special importance for the strategic development of the country and its energy security, which creates favorable ground for the development of hydrocarbons in the shelf zone and for capital-intensive projects with the participation of the oil and gas sector. Many other projects for the development of the Arctic mineral resource base are at the stage of approval and design, and some oil and gas companies have been forced to freeze the development of fields on the shelf and in the continental Arctic due to the low profitability of hard-to-recover oil production under volatile market prices for energy resources and an unstable geopolitical situation. Exploration and production under conditions of low temperatures and a severe climate require new technological solutions that will reduce the cost of oil and gas production, increase the efficiency of fields in the Arctic sea and continental zones, and provide the autonomy of domestic companies in the use of advanced technologies. The growing demand for new technological innovations requires forecasting studies in the field of oil and gas projects to assess the potential of digitalization processes. The following tasks have been set to achieve the research objectives:

– analysis of existing oil and gas projects in the offshore and continental Arctic zones of Russia and other Arctic countries,
– research of oil and gas industry projects in the Arctic from the perspective of digital technology application,
– research of the largest planned oil and gas projects in the Arctic zone and assessment of the required production capacity,
– forecast of the use of digital technologies in promising projects for the development of Arctic resources.

2 Materials and Methods

Based on the results of the literature review, it can be concluded that most publications on the development of oil production and oil and gas engineering in the Arctic zone, and on its transformation under the conditions of digitalization, are written in a journalistic style and represent the results of analytical research. In world practice, various forecasts concerning the application of innovative technological solutions in the development of the Arctic shelf and mainland are published on a regular basis by such international organizations as the International Energy Agency, the World Energy Council, the Organization of the Petroleum Exporting Countries (OPEC) and the World Bank, among many others; the results of industry reports are also published on the Internet resources of international consulting companies ("PricewaterhouseCoopers", "Ernst&Young", "Euro Petroleum Consultants") and of large energy companies.


Analytical research is also conducted by leaders in digital and technological solutions (Siemens) and by Russian energy organizations (Arctic and Antarctic Council). Their forecasts are mainly focused on oil production volumes in the Arctic, changes in production methods and the development of new fields on the shelf, as well as on the identification of economic, geopolitical and market factors affecting the development of oil and gas projects in the Arctic. As for academic publications addressing the potential for digitalization of the oil and gas industry as part of hydrocarbon development in the Arctic, various authors discuss such issues as environmental risks, the need for large-scale investments, the creation of transport infrastructure, modern platforms and complex oil and gas equipment. A number of Russian scientists are also studying the potential for digitalization of Arctic projects and hydrocarbon production in the Arctic (N.A. Eremin and A.N. Dmitrievsky 2016, V.G. Martynov 2017, A.G. Kazanin 2020 and others) [1–4]. Most scientists agree that the oil and gas sector is becoming a driving force for digital solutions, including in the promising Arctic and shelf areas, but some point to the high risks of large-scale offshore projects, especially in the absence of the necessary infrastructure (Grigoriev 2015) [5]. In the course of this work, general scientific research methods such as empirical description, analysis and synthesis, and classification were applied, as well as specific methods: the graphic method and the statistical method.

3 Results

According to the distribution of rights to develop Arctic fields, only two Russian companies have the licenses required to produce oil and natural gas in this strategically important territory of the Russian Federation. Under the Subsoil Law, only companies with more than 50% of their shares owned by state structures and with a minimum of five years' experience on the shelf can qualify for offshore field development. At the moment, Rosneft and Gazprom Neft meet these criteria; they jointly hold 90% of the licenses in the Arctic region, while 10% remain undistributed, and the Russian government expects to grant these to the private sector [8]. State companies are actively investing in Arctic projects: from 2011 to 2016, Rosneft's investment exceeded 100 billion rubles, and for 2017 to 2021 the planned budget for financing oil and gas projects in the Arctic is 250 billion rubles. Gazprom Neft has already invested more than 400 billion rubles in Arctic projects, most of which provided for the implementation of the large-scale projects "Novy Port" and "Prirazlomnoye" (see Fig. 1) [15, 16]. The implementation of the largest oil and gas projects in the Arctic environment requires a new approach and technological solutions for effective offshore hydrocarbon development. Companies that have the necessary licenses to develop offshore fields strive to follow digital industry trends and are actively introducing innovative technologies as part of large-scale investment projects such as "Novy Port" and "Prirazlomnoye" ("Gazprom"), "The Northern Tip of the Chaivo Field" and "Sakhalin-1" ("Rosneft") (see Fig. 2).


Fig. 1. Development of the oil and gas potential of the shelf and continental zones of the Russian Arctic. Compiled by the author based on source [8]

According to Rosneft's Strategy 2022, the company prioritizes the development of its digitalization potential and breakthrough technological solutions. As part of oil and gas projects to develop offshore and mainland Russia, the "Digital Field" concept, the industrial Internet, Big Data technologies and remote drilling and production management are being actively developed [6]. Key aspects of Rosneft's Strategy 2022:

– launch of a corporate data center with an industrial Internet platform and an integrated digital twin of a field,
– testing of technology for monitoring production facilities with drones and machine vision, and application of artificial intelligence in field development,
– tests of an ice rig monitoring system for offshore drilling,
– introduction of systems of predictive analytics and of indicators of dynamic equipment status.

"Digital Transformation" is Gazprom Neft's plan of technological transformation until 2030. Gazprom Neft's digital strategy includes key areas that integrate the entire value creation chain and business process management. Priority is given to such aspects as creating a single digital business management platform and data management systems. All production assets can be managed from a single center, which will significantly improve the efficiency of business processes and establish communications between company units, increasing the speed of technological and organizational decision-making [7]. Key areas of Gazprom Neft's technological transformation strategy:

– cognitive geological exploration (application of artificial intelligence in searching for industrial oil reserves and evaluating the probability of project success),
– cognitive engineering (formation of a field development scheme using machine learning),


– corporate drilling management center (remote control of production facilities),
– production control center (combining all production processes in a single integrated environment),
– digital twins (creating prototypes of real objects in a virtual environment to find optimal solutions).

Fig. 2. Technological transformations of companies engaged in oil and gas projects in the shelf and mainland Arctic zones of Russia. Compiled by the author based on sources [6, 7]

As noted earlier, the mineral base of the Arctic zone has huge potential, so in the coming years it is planned to launch a number of large oil and gas clusters and fields in the Arctic shelf and continental zone. The promising "Vostok Oil" project with the participation of Rosneft implies the creation of an oil and gas province in the Arctic, which may become a flagship of the oil and gas industry not only in Russia but in the whole world. The proven resource base of the fields is 5 billion tons of oil, which corresponds to 37 billion barrels. The total amount of investment in the "Vostok Oil" project is estimated at 10 trillion rubles. Implementation of this large-scale project will create more than 100,000 jobs and increase GDP by 2% annually. After implementation of the first stage of the project in 2025, production may reach 50 million tons of oil per year; implementation of the second stage in 2030 will increase this figure to 100 million tons per year. The "Vostok Oil" project involves the development of the Taimyr Peninsula, the creation of modern transport infrastructure and the development of the Payakhskoye field, as well as the fields of the Vankor cluster. There are also a number of major projects that have been suspended or postponed due to the sanctions policy of Western countries, high production costs in difficult climatic conditions combined with low market prices, and the lack of the necessary infrastructure and of new technological solutions to ensure project efficiency. The successful development of the Arctic mineral resource base and the implementation of planned projects require the updating of technological solutions and the introduction of digital technologies necessary to optimize business processes along the entire value chain. Building technological forecasts of the development of oil production and industry engineering for Arctic hydrocarbons requires an assessment of the digital potential of oil and gas projects. The Arctic has a powerful production potential of more than 150 million tons of oil per year, which is almost equivalent to 30% of the oil produced in 2018 in the Russian Federation.


Table 1 shows the major oil and gas projects of the state companies Rosneft and Gazprom that are planned to be implemented in 2020–2035.

Table 1. Assessment of the potential of oil production and oil and gas engineering during the implementation of projects aimed at the development of the mineral resource base of the Arctic [19, 20]

1. Vostok Oil (East Oil), Rosneft - recoverable reserves: 5 billion tons of oil; maximum production: 100 million tons of oil per year.
2. Leningradskoye field and Tambey cluster, Gazprom - recoverable reserves: 3 and 5.5 trillion cubic meters of natural gas, respectively; maximum production: 50 billion cubic meters of natural gas per year.
3. Dolginskoye oil field, Gazprom Neft - recoverable reserves: 236 million tons of oil; maximum production: 5 million tons of oil per year.
4. Victory Project in Kara Province, Rosneft - recoverable reserves: 130 million tons of oil and 500 billion cubic meters of natural gas; maximum production: 8 million tons of oil per year.

Despite the strategic potential of the Arctic region for the Russian Federation, the rate of development of the mineral resource base of the shelf and continental zones currently lags behind the planned values. However, according to the Minister of Energy of the Russian Federation, 400–600 billion USD will be invested in Arctic projects over the next 20 years, which is equivalent to 28–42 trillion rubles. Investments for the development of projects in the Arctic zone will be raised mainly through extra-budgetary funds from Russian energy companies and from foreign partners in China, India, South Korea and Vietnam. The implementation of oil and gas projects is impossible without updating the technological concept and adapting existing approaches to resource extraction under difficult climatic conditions. Creating a favorable environment for the successful implementation of projects to develop the Arctic mineral resource base is a task that can be solved only with the use of digital technologies capable of increasing the efficiency of oil and gas projects. The problem of technological forecasts for the application of innovations in such projects is urgent, so it is necessary to assess which digital technologies companies should pay attention to when forming a digital transformation strategy, and what amounts of investment are necessary for the successful development of Arctic assets. The authors have aggregated information about existing projects in the Arctic shelf and mainland areas and identified the trends in the digital transformation of the largest oil and gas companies and the new technological solutions used at production facilities. In order to create technological forecasts for the development of oil production and oil and gas engineering, the potential for the use of digital technologies in planned projects for the development of hydrocarbons in the Arctic was calculated, followed by an assessment of the forecast values of investments in digital technologies necessary for the effective implementation of the above projects.


The potential for digitalization of Arctic oil and gas projects includes the development of corporate Big Data processing centers, artificial intelligence systems for exploration and production of raw materials, remote monitoring of production facilities and additive production, the total level of investment in which is more than 180 billion rubles (see Table 2).

Table 2. Technological forecasts for the use of digital technologies in the framework of oil and gas projects on the Arctic shelf and the mainland of the Arctic [6, 7, 21, 22]

1. Corporate data centers based on industrial Internet platforms and Big Data. Effect: management of all production facilities of Arctic-zone projects as a single asset in a single information space, which makes this process much more effective; reduction of up to 15% in operating expenses. Essential digital infrastructure and necessary investment: 1) creation of data processing centers (more than 5 centers) - 10 billion rubles; 2) acquisition of cloud technologies for data storage (more than 100 units) - 7.5 billion rubles; 3) installation of more than 800 thousand sensors at all production facilities - 80 billion rubles.

2. Artificial intelligence systems for cognitive exploration and well drilling. Effect: the self-learning system processes the initial information in the shortest possible time, delivers the final result about the geological situation and forms the optimal field development scheme; saves up to 75%. Essential digital infrastructure and necessary investment: 1) computing clusters for creating digital twins of deposits (more than 5 clusters) - 50 billion rubles; 2) supercomputers with speeds above 100 Gbit per second (more than 10 units) - 14 billion rubles; 3) software - 5 billion rubles.

3. Remote monitoring of production facilities. Effect: autonomous monitoring of objects using drones and machine vision ensures the timely detection and prevention of crisis situations and the reduction of human injuries. Essential digital infrastructure and necessary investment: 1) industrial drones (more than 10,000 units) - 3 billion rubles; 2) organization of mobile monitoring groups (more than 40 platforms) - 400 million rubles.

4. Automation of production through additive technologies. Effect: on-site production of parts using 3D printing ensures the uninterrupted operation of remote platforms and reduces downtime losses. Essential digital infrastructure and necessary investment: 1) precision 3D printing equipment (over 100 units) - 4 billion rubles; 2) powder materials for printing (over 8 tons) - 1.2 billion rubles.

The results of the technological forecasting of the development of oil production and oil and gas mechanical engineering for Arctic hydrocarbon development, obtained using the methods of analogy and comparison, demonstrate the urgent need to invest significant resources in the digital transformation programs of the companies involved in the development of Arctic fields.


4 Discussion

Digital technology is already tightly integrated into a wide range of industries. In the oil and gas sector, digital technologies are mainly aimed at reducing operating costs, which is an extremely important factor given the volatility of hydrocarbon prices: assets operating in difficult conditions, including offshore fields, are particularly sensitive to production costs. A number of current trends and growth points can be identified in the development of digital solutions for offshore field development. The oil and gas industry is extremely capital-intensive in principle, and offshore projects are characterized by a truly huge amount of required investment: this makes it imperative to extend the life of fixed assets and to model all possible scenarios in order to prevent accidents that could cause damage both to the asset-holding company and to the ecology of the region. Digital twin technology, which allows the entire production chain to be virtualized and the technological performance of individual industrial facilities as well as of the entire system to be predicted and analyzed, successfully copes with this task. A big role in the future of the industry is assigned to Big Data. It is the successful integration of data across all parts of the production chain, carried out according to established standards and protocols, that will significantly reduce the cost of various operations: for example, according to estimates, the cost of offshore drilling can be reduced by 10–13% through the tracking and interpretation of consolidated drilling data [13]. In addition to the integration and aggregation of data, data analysis plays a major role. Predictive analytics is one of the main growth points in well drilling, the costs of which make up a significant part of CAPEX when developing offshore fields. Consulting agencies estimate that the reduction of expenses at the stages of field exploration and development can reach USD 30 billion per year on the market average [12]. Certainly, the possibilities for optimizing production lie not only in the field of working with information: the automation of equipment and the exclusion of humans from a number of industrial processes, such as pipe handling and the use of fluid delivery systems, will not only achieve greater efficiency and productivity, but will also considerably raise the safety of technological processes. Robotization is now spoken of as the next stage in the automation of production: through the creation of autonomous automatic systems and the robotization of basic business processes, it is possible to achieve a considerable decrease in non-productive time and to streamline supply chains. Besides, the use of new technologies, for example 3D printing, can also favorably reduce downtime, since the fastest-wearing machine parts can be manufactured directly on a platform, leveling the dependence on deliveries from the continental zone [11]. Changes in technological processes must precede changes in the management of the company. The organization of interdisciplinary groups of experts, united under the auspices of multifunctional production processes, will help to form and maintain an up-to-date understanding of the status and efficiency of the production system (the entire technological complex of the platform).


Speaking of the industry as a whole, the use of digital technologies is promising. The complex of modern developments makes it possible to achieve greater efficiency in all areas of the oil and gas complex. For example, it is already possible to achieve a significant increase in the confirmation coefficient of commercially recoverable reserves. According to specialists' estimates, the growth of category A and B reserves due to the digitalization of oil and gas fields in Russia may reach 7 billion tons of oil, 45% of which are hard-to-recover reserves [9]. The impact of new technologies on already developed assets will be manifested through the reduction of operating costs and non-productive time. By increasing production profitability, a decrease in the rate of production decline in already developed fields will be achieved, as well as the involvement of areas whose development was previously considered unprofitable. Exploration is also an active field for the application of digital solutions, as it is at this very stage that representatives of the oil and gas sector encounter the main risks and uncertainties. There are already successful results of the application of new technologies with a high level of efficiency: neural networks used for the interpretation of geological exploration data were able to forecast the occurrence of productive formations with an accuracy of 30 cm. Given the high cost of offshore well construction, this technology appears promising and will significantly reduce the risk of constructing dry wells [10]. In aggregate, all digital technologies applied in an integrated manner at various assets will significantly reduce production costs, which will increase the number of assets with hard-to-recover reserves whose development was previously unprofitable. To maintain their competitiveness and ensure the profitability of their business, Russian companies need not only the ability to use and implement ready-made foreign digital solutions, but also to build their own base of innovative technologies and the structures capable of servicing and producing them. Given the accumulated evidence of the unreliability - and, in the case of Arctic exploration and the development of oil shale deposits, the impossibility - of foreign technology supplies, the creation of domestic production of digital products is a matter not of technological superiority but of the economic stability of the industry and the state. Taking into account the prospect of a drop in hydrocarbon prices due to a decrease in production costs triggered by the development of digital technologies, companies that have not managed to carry out the digital transformation of their business may find themselves in an unviable market position [14]. At the same time, in pursuing the idea of protectionism, it is impossible to completely abandon foreign products: this threatens to reduce the inflow of new technologies, as well as to reduce competition in the local market and, consequently, the quality of domestic products. According to the Union of Software Developers and Automated Control Systems, import dependence in digital technologies is in the range of 80–98%, from which one can conclude how important it is to conduct a competent import substitution policy focused on creating a favorable ecosystem for the development of Russian digital solutions.


5 Conclusion

The effective development of offshore and onshore oil and gas fields in the Arctic is a strategic task within the framework of ensuring the economic security of the Russian Federation and the overall development of the global oil and gas industry. However, the presence of many contradictory factors in the implementation of large-scale projects in the Arctic zone impedes the development of offshore oil production, and one of the most effective ways of solving this problem is to develop and apply digital technologies. This paper analyzed information on the current situation on the Arctic shelf and on existing oil and gas projects, as well as the use of digital solutions aimed at improving the performance of oil and gas companies. Based on the methods of analogy and comparison, a technological forecast of the development of oil and gas production and oil and gas mechanical engineering was made, proceeding from the current needs and challenges facing the industry in connection with the development of the Arctic shelf, and the amount of investment needed to realize the potential of digital technologies in future oil and gas projects on the Arctic shelf was estimated. It was determined that the development of corporate centers for processing Big Data using the industrial Internet of Things, artificial intelligence systems for cognitive geological survey, drilling and production of oil and gas, remote monitoring of production facilities and additive production will have a multiplier effect on the country's economy and a significant impact on the global oil and gas industry.

Expression of Gratitude. The study was carried out with the financial support of the Russian Foundation for Basic Research, project No. 18-010-00734 «Evolution of methodology of technological forecasting of development of the interconnected industrial and social and economic systems at hydrocarbon resources development in the Arctic».

References

1. Cherepovitsyn, A.: Investigation of the oil and gas company innovative potential at different stages of the fields exploitation. Notes Mining Inst. 222, 892–902 (2016)
2. Dmitrievskiy, A., Eremin, N.: Digitalization and intellectualization of oil and gas fields. Autom. IT Oil Gas Area 24(2), 13–19 (2017)
3. Martynov, V., Dmitrievskiy, A., Eremin, N.: Well-triggered sensor systems. Oil Gas Innov. 2, 50–55 (2016)
4. Kazanin, A.: Trends and prospects of the oil and gas sector development under the digitalization conditions. Econ. Manage. 26(1), 35–45 (2020)
5. Grigoriev, G.: Arctic hydrocarbon prospects of Russia: technological and geological-economic problems of development. Business journal "Neftegaz.RU": Extraction and processing. https://clck.ru/MRvaw. Accessed 08 Mar 2020
6. PJSC Oil Company "Rosneft". Annual Report 2018: Technologies shaping the future. https://clck.ru/FrSdB. Accessed 10 Mar 2020
7. PJSC Gazprom Neft. Digital transformation. The Fourth Industrial Revolution. https://clck.ru/MRvyY. Accessed 10 Mar 2020


8. Sklyarova, Z., Tkach, V.: License distribution dynamics on the Russian Federation shelf. Vesti Gazeta 4(24), 166–176 (2015)
9. Abukova, L., Dmitrievskiy, A., Eremin, N.: Digital modernization of the oil and gas complex of Russia. Petrol. Bus. 10, 2–6 (2018)
10. Baraboshkin, E., Ivchenko, A., Ismailova, L., Orlov, D., Baraboshkin, E., Koroteev, D.: Core photos lithological interpretation using neural networks. In: 20th International Sedimentological Congress, Book of Abstracts, Quebec City (2018)
11. Gupta, S., Saputelli, L., Nikolaou, M.: Applying big data analytics to detect, diagnose, and prevent impending failures in electric submersible pumps. In: SPE Annual Technical Conference and Exhibition, UAE (2016)
12. Hegde, C., Gray, K.: Use of machine learning and data analytics to increase drilling efficiency for nearby wells. J. Nat. Gas Sci. Eng. 40, 327–335 (2017)
13. PJSC Oil Company Rosneft. "Rosneft" will double investments into Arctic shelf development. Rossiyskaya gazeta. https://clck.ru/MSgtN. Accessed 11 Mar 2020
14. PJSC Gazprom Neft. Gazprom Neft relies on the Arctic and technologies for efficient production of 'difficult oil'. https://clck.ru/MSgzU. Accessed 11 Mar 2020
15. Ignatieva, A.: Oil reserves of the Russian Arctic zone are estimated at 7.3 billion tons. Business Journal "Neftegaz.RU": oil and gas reserves. https://clck.ru/MSyhG. Accessed 12 Mar 2020
16. Gagarsky, E.A.: Wealth of shelf. Marine News Russ. 1, 3–7 (2016)
17. PJSC Rosneft. Exploration and production. Shelf projects. https://www.rosneft.ru/business/Upstream/offshore/. Accessed 10 Mar 2020
18. Gazprom increased gas reserves in the Leningrad field by 850 bcm. Neftegaz.RU Business Journal: Exploration. https://clck.ru/MT2Mx. Accessed 12 Mar 2020
19. Digital cluster "Sibintek" - internal IT-integrator of "Rosneft". Portal TAdviser: Projects based on Internet of Things (IoT) technologies. https://clck.ru/MT2up. Accessed 13 Mar 2020
20. PJSC Gazprom Neft. Gazprom Neft has developed a supercomputer to create digital models of Siberian and Arctic fields. https://clck.ru/MT36r. Accessed 13 Mar 2020

Author Index

A
Abbakumov, Vadim, 157; Ablyazov, Timur, 23; Akaev, Askar, 77; Akaev, Bakytbek, 48; Aleksandrov, Andrei, 23; Alekseev, Aleksandr, 168; Alexandra, Borremans, 143

B
Baskov, Vladimir, 23; Burkovski, Lidia, 89, 108

C
Cherepovitsyn, Alexey, 241

D
Devezas, Tessaleno, 77; Dubgorn, Alissa, 168; Dubolazov, Viktor, 39

E
Ed, Overes, 143; Egor, Temirgaliev, 143; Egorov, Dmitry, 201; Esser, Manfred, 168

G
Gerrits, Berry, 201

I
Ignatov, Anton, 23; Ilin, Igor, 57, 179

J
Jahn, Carlos, 57, 179; Jensen, Morten Brix, 57; Jorg, Reiff-Stephan, 157

K
Kalinin, Maxim, 10; Kalyazina, Sofia, 201; Kolesov, Dmitrii, 48; Komendantova, Nadejda, 241; Korablev, Vadim, 179; Kuryleva, Alena, 157; Kuzin, Alexey, 122

L
Leicht, Olga, 39; Leitão, Joao, 48; Lepekhin, Aleksandr, 179; Levina, Anastasia, 57, 201; Lim, Sangwon, 10

M
Ma, Wenjia, 221; Maydanova, Svetlana, 57, 179; Mugayskikh, Aleksander, 157; Mutalieva, Botagoz, 189

O
Orlov, Stepan, 89, 108

P
Paardenkooper, Klara, 168; Pavlenko, Evgeny, 1; Petryakov, Alexander, 77; Pimenov, Nikita, 189; Poltavtseva, Maria, 1; Polyanskiy, Vladimir, 231

R
Reshetnikov, Viacheslav, 67

S
Schuur, Peter, 201; Shchelkonogov, Andrey, 39; Shkodyrev, Viacheslav, 221; Shkodyrev, Viacheslav P., 210; Shytova, Yevheniia, 189; Simakova, Zoia, 39; Smirnova, Anna, 189

T
Tick, Andrea, 67; Tick, József, 122; Tretyakov, Nikita, 241

U
Ungvari, Laszlo, 77

V
Victor, Dubolazov, 143

W
Weigell, Jürgen, 57, 179

Y
Yassine, Hanafi Mohamed, 210

Z
Zaborovskij, Vladimir, 231; Zaychenko, Irina, 168, 189; Zegzhda, Peter, 10; Zeman, Zoltan, 122, 157; Zhak, Roman, 48; Zhilkina, Natal'ya, 23; Zhuravlev, Alexey, 122