Recent Research in Control Engineering and Decision Making [1st ed.] 978-3-030-12071-9, 978-3-030-12072-6


English; Pages: XXV, 771 [792]; Year: 2019


Table of Contents:
Front Matter ....Pages i-xxv
Part I: Models, Methods and Approaches in Decision Making Systems
Front Matter ....Pages 1-1
Data Protection During Remote Monitoring of Person’s State (Tatyana Buldakova, Darina Krivosheeva)....Pages 3-14
Principles of Managing the Process of Innovative Ideas Genesis (Tatiana V. Moiseeva, Sergey V. Smirnov)....Pages 15-25
Software Package for Modeling the Process of Fire Spread and People Evacuation in Premises (Andrey Samartsev, Alexander Rezchikov, Vadim Kushnikov, Vladimir Ivaschenko, Leonid Filimonyuk, Dmitry Fominykh, Olga Dolinina)....Pages 26-36
Nonlinear Information Processing Algorithm for Navigation Complex with Increased Degree of Parametric Identifiability (Konstantin Neusypin, Maria Selezneva, Andrey Proletarsky)....Pages 37-49
The Task of Reducing the Cost of Production During Welding by Robotic Technological Complexes (Dmitry Fominykh, Alexander Rezchikov, Vadim Kushnikov, Vladimir Ivaschenko, Tatyana Shulga, Andrey Samartsev)....Pages 50-60
String Matching in Case of Periodicity in the Pattern (Armen Kostanyan, Ani Karapetyan)....Pages 61-66
High Generalization Capability Artificial Neural Network Architecture Based on RBF-Network (Mikhail Abrosimov, Alexander Brovko)....Pages 67-78
Dynamic System Model for Predicting Changes in University Indicators in the World University Ranking U-Multirank (Olga Glukhova, Alexander Rezchikov, Vadim Kushnikov, Oleg Kushnikov, Irina Sytnik)....Pages 79-90
Optimization of the Hardware Costs of Interpolation Converters for Calculations in the Logarithmic Number System (Ilya Osinin)....Pages 91-102
The Multi-agent Method for Real Time Production Resource-Scheduling Problem (Alexander Lada, Sergey Smirnov)....Pages 103-111
Knowledge Base Engineering for Industrial Safety Expertise: A Model-Driven Development Approach Specialization (Aleksandr Yurin, Aleksandr Berman, Olga Nikolaychuk, Nikita Dorodnykh)....Pages 112-124
Investigation of Hydroelasticity Coaxial Geometrically Irregular and Regular Shells Under Vibration (Anna Kalinina, Dmitry Kondratov, Yulia Kondratova, Lev Mogilevich, Victor Popov)....Pages 125-137
Design Automation of Digital In-Process Models of Parts of Aircraft Structures (Kate Tairova, Vadim Shiskin, Leonid Kamalov)....Pages 138-148
Using Convolutional Neural Networks in the Problem of Cell Nuclei Segmentation on Histological Images (Vladimir Khryashchev, Anton Lebedev, Olga Stepanova, Anastasiya Srednyakova)....Pages 149-161
Numerical Study of Eigenmodes Propagation Through Rectangular Waveguide with Quarter-Wave Chokes on the Walls (Alexander Brovko, Guido Link)....Pages 162-172
Extraction and Forecasting Time Series of Production Processes (Anton Romanov, Aleksey Filippov, Nadezhda Yarushkina)....Pages 173-184
Computer Analysis of Geometrical Parameters of the Retina Epiretinal Membrane (Stanislav Daurov, Sergey Potemkin, Svetlana Kumova, Tatiana Kamenskikh, Igor Kolbenev, Elena Chernyshkova)....Pages 185-198
Synthesis of the Information Channel with Codec Based on Code Signal Feature (Dmitry Klenov, Michael Svetlov, Alexey L’vov, Marina Svetlova, Dmitry Mishchenko)....Pages 199-214
Using of Linguistic Analysis of Search Query for Improving the Quality of Information Retrieval (Nadezhda Yarushkina, Aleksey Filippov, Maria Grigoricheva)....Pages 215-226
Improved Quality Video Transmission by Optical Channel from Underwater Mobile Robots (Sergey Kirillov, Vladimir Dmitriev, Leonid Aronov, Petr Skonnikov, Andrew Baukov)....Pages 227-239
Sketch Design of Information System for Personnel Management of Large State Corporation in the Field of Control Engineering (Vadim Zhmud, Alexander Liapidevskiy, Galina Frantsuzova)....Pages 240-255
Models and Methods for Determining Damage from Atmospheric Emissions of Industrial Enterprises (Elena Kushnikova, Ekaterina Kulakova, Sergei Alipchenko, Alexander Rezchikov, Vadim Kushnikov, Vladimir Ivaschenko)....Pages 256-267
Computer Analysis of the Equilibrium in Painting (Alexander Voloshinov, Olga Dolinina)....Pages 268-288
Using System Dynamics for the Software Quality Management of the Decision Making Software Systems Development (Olga Dolinina, Vadim Kushnikov)....Pages 289-297
Bernstein’s Theory of Levels and Its Application for Assessing the Human Operator State (Sergey Suyatinov)....Pages 298-312
Semantic Marking Method for Non-text Documents of Website Based on Their Context in Hypertext Clustering (Sergey Papshev, Alexander Sytnik, Nina Melnikova, Alexey Bogomolov)....Pages 313-323
Optimal Control Problems of Compressor Facilities Processes at Industrial Enterprise (Ekaterina Kulakova, Sergei Alipchenko, Alexander Rezchikov, Vadim Kushnikov, Elena Kushnikova, Olga Glukhova)....Pages 324-337
BEM Based Numerical Approach for the Study of the Dispersed Systems Rheological Properties (Yulia A. Pityuk, Olga A. Abramova, Nazgul B. Fatkullina, Aiguzel Z. Bulatova)....Pages 338-352
Formalization of Requirements for Locked-Loop Control Systems for Their Numerical Optimization (Vadim Zhmud, Galina Frantsuzova, Lubomir Dimitrov, Jaroslav Nosek)....Pages 353-365
Accented Visualization in Digital Industry Applications (Anton Ivaschenko, Pavel Sitnikov, Georgiy Katirkin)....Pages 366-378
Dynamic Capabilities Indicators Estimation of Information Technology Usage in Technological Systems (Alexander Geyda)....Pages 379-395
Modeling of Struggle Processes in the Computer-Related Crime Field (Aleksey Bogomolov, Alexander Rezchikov, Vadim Kushnikov, Vladimir Tverdokhlebov, Oksana Soldatkina, Tatyana Shulga)....Pages 396-405
Towards Fuzzy Partial Global Fault Diagnosis (Sofia Kouah, Ilham Kitouni)....Pages 406-420
Development of a Software for the Semantic Analysis of Social Media Content (Aleksey Filippov, Vadim Moshkin, Nadezhda Yarushkina)....Pages 421-432
An Analysis of Road Traffic Flow Characteristics Using Wavelet Transform (Oleg Golovnin, Anastasia Stolbova, Nikita Ostroglazov)....Pages 433-445
An Approach to Estimating of Criticality of Social Engineering Attacks Traces (Anastasiia Khlobystova, Maxim Abramov, Alexander Tulupyev)....Pages 446-456
Ontologies of the Fire Safety Domain (Yuliya Nikulina, Tatyana Shulga, Alexander Sytnik, Natalya Frolova, Olga Toropova)....Pages 457-467
Wavelet-Based Arrhythmia Detection in Medical Diagnostics Sensor Networks (Anastasya Stolbova, Sergey Prokhorov, Andrey Kuzmin, Anton Ivaschenko)....Pages 468-479
On Parallel Addition and Multiplication via Symmetric Ternary Numeral System (Iurii V. Stroganov, Liliya Volkova, Igor V. Rudakov)....Pages 480-487
Simulation of Power Assets Management Process (Oleg Protalinsky, Anna Khanova, Ivan Shcherbatov)....Pages 488-501
Examination of the Process of Automated Closure of Containers with Screw Caps (Slav Dimitrov, Lubomir Dimitrov, Reneta Dimitrova, Stelian Nikolov)....Pages 502-514
About the Concept of Information Support System for Innovative Economy in the Republic of Kazakhstan (Irbulat Utepbergenov, Leonid Bobrov, Irina Medyankina, Zinaida Rodionova, Shara Toibaeva)....Pages 515-526
Possibilities of Typical Controllers for Low Order Non-linear Non-stationary Plants (Galina Frantsuzova, Vadim Zhmud, Anatoly Vostrikov)....Pages 527-539
Mathematical Models and Algorithms for the Management of Liquidation Process of Floods Consequences (Maria Khamutova, Alexander Rezchikov, Vadim Kushnikov, Vladimir Ivaschenko, Elena Kushnikova, Andrey Samartsev)....Pages 540-551
Analysis of Three-Dimensional Scene Visual Characteristics Based on Virtual Modeling and Parameters of Surveillance Sensors (Vitaly Pechenkin, Mikhail Korolev, Kseniya Kuznetsova, Dmitriy Piminov)....Pages 552-562
Search of Optimum Conditions of Plating Using a Fuzzy Rule-Based Knowledge Model (Denis Solovjev, Alexander Arzamastsev, Inna Solovjeva, Yuri Litovka, Alexey L’vov, Nina Melnikova)....Pages 563-574
Part II: Mathematical Modelling for Industry and Research
Front Matter ....Pages 575-575
Mathematical Model of Adaptive Control in Fuel Supply Logistic System (Ekaterina Kasatkina, Denis Nefedov, Ekaterina Saburova)....Pages 577-593
Mathematical Model for Prediction of the Main Characteristics of Emissions of Chemically Hazardous Substances into the Atmosphere (Ekaterina Kusheleva, Alexander Rezchikov, Vadim Kushnikov, Vladimir Ivaschenko, Elena Kushnikova, Andrey Samartsev)....Pages 594-607
Increasing the Safety of Flights with the Use of Mathematical Model Based on Status Functions (Irina Veshneva, Aleksander Bolshakov, Aleksei Kulik)....Pages 608-621
Mathematical Modeling of Electronic Records Management and Office Work in the Executive Bodies of State Administration (Olga Perepelkina, Dmitry Kondratov)....Pages 622-633
Mathematical Modeling of the Process of Engineering Structure Curvature Determination for Remote Quality Control of Plaster Works (Nadezhda Ivannikova, Pavel Sadchikov, Alexandr Zholobov)....Pages 634-645
Mathematical Models, Algorithms and Software Package for the National Security State of Russia (Natalya Yandybaeva, Alexander Rezchikov, Vadim Kushnikov, Vladimir Ivaschenko, Oleg Kushnikov, Anatoly Tsvirkun)....Pages 646-659
Mathematical Modeling of Waves in a Non-linear Shell with Viscous Liquid Inside It, Taking into Account Its Movement Inertia (Lev Mogilevich, Yury Blinkov, Dmitry Kondratov, Sergey Ivanov)....Pages 660-670
Mathematical Modeling of Hydroelastic Interaction Between Stamp and Three-Layered Beam Resting on Winkler Foundation (Aleksandr Chernenko, Dmitry Kondratov, Lev Mogilevich, Victor Popov, Elizaveta Popova)....Pages 671-681
The Mathematical Model for Describing the Principles of Enterprise Management “Just in Time, Design to Cost, Risks Management” (Igor Lutoshkin, Svetlana Lipatova, Yuriy Polyanskov, Nailya Yamaltdinova, Margarita Yardaeva)....Pages 682-695
Algebraic Bayesian Networks: The Use of Parallel Computing While Maintaining Various Degrees of Consistency (Nikita A. Kharitonov, Anatoly G. Maximov, Alexander L. Tulupyev)....Pages 696-704
Mathematical Modeling and Calibration Procedure of Combined Multiport Correlator (Nickita Semezhev, Alexey L’vov, Adel Askarova, Sergey Ivzhenko, Natalia Vagarina, Elena Umnova)....Pages 705-719
Mathematical Models for the Analysis of Destabilization Processes of the Socio-Political Situation in the Country Using the Methods of Non-violent Resistance (Aleksey Bogomolov, Alexander Rezchikov, Vadim Kushnikov, Vladimir Ivaschenko, Elena Kushnikova, Vladimir Tverdokhlebov)....Pages 720-728
Part III: Smart City Technologies
Front Matter ....Pages 729-729
Mobile Platform for Decision Support System During Mutual Continuous Investment in Technology for Smart City (Bakhytzhan Akhmetov, Lyazzat Balgabayeva, Valerii Lakhno, Vladimir Malyukov, Raya Alenova, Anara Tashimova)....Pages 731-742
Automatic Traffic Control System for SOHO Computer Networks (Evgeny Basinya, Aleksander Rudkovskiy)....Pages 743-754
Combined Intellectual and Petri Net with Priorities Approach to the Waste Disposal in the Smart City (Olga Dolinina, Vitaly Pechenkin, Nikolay Gubin)....Pages 755-767
Back Matter ....Pages 769-771


Studies in Systems, Decision and Control 199

Olga Dolinina, Alexander Brovko, Vitaly Pechenkin, Alexey Lvov, Vadim Zhmud, Vladik Kreinovich (Editors)

Recent Research in Control Engineering and Decision Making

Studies in Systems, Decision and Control Volume 199

Series editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland e-mail: [email protected]

The series “Studies in Systems, Decision and Control” (SSDC) covers both new developments and advances, as well as the state of the art, in the various areas of broadly perceived systems, decision making and control, quickly, up to date and with high quality. The intent is to cover the theory, applications, and perspectives on the state of the art and future developments relevant to systems, decision making, control, complex processes and related areas, as embedded in the fields of engineering, computer science, physics, economics, social and life sciences, as well as the paradigms and methodologies behind them.

The series contains monographs, textbooks, lecture notes and edited volumes in systems, decision making and control spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and others. Of particular value to both the contributors and the readership are the short publication timeframe and the worldwide distribution and exposure, which enable both wide and rapid dissemination of research output.

** Indexing: The books of this series are submitted to ISI, SCOPUS, DBLP, Ulrichs, MathSciNet, Current Mathematical Publications, Mathematical Reviews, Zentralblatt Math: MetaPress and Springerlink.

More information about this series at http://www.springer.com/series/13304


Editors

Olga Dolinina
Institute of Applied Information Technologies and Communication, Yuri Gagarin State Technical University of Saratov, Saratov, Russia

Alexander Brovko
Institute of Applied Information Technologies and Communication, Yuri Gagarin State Technical University of Saratov, Saratov, Russia

Vitaly Pechenkin
Institute of Applied Information Technologies and Communication, Yuri Gagarin State Technical University of Saratov, Saratov, Russia

Alexey Lvov
Institute of Applied Information Technologies and Communication, Yuri Gagarin State Technical University of Saratov, Saratov, Russia

Vadim Zhmud
Department of Automation, Novosibirsk State Technical University, Novosibirsk, Russia

Vladik Kreinovich
Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA

ISSN 2198-4182    ISSN 2198-4190 (electronic)
Studies in Systems, Decision and Control
ISBN 978-3-030-12071-9    ISBN 978-3-030-12072-6 (eBook)
https://doi.org/10.1007/978-3-030-12072-6

Library of Congress Control Number: 2018968103

© Springer Nature Switzerland AG 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

This book comprises the full papers and short monographs developed on the basis of the refereed proceedings of the International Conference on Information Technologies: Information and Communication Technologies for Research and Industry (ICIT 2019), held in Saratov, Russia, in February 2019.

The book Recent Research in Control Engineering and Decision Making presents accepted papers offering new approaches and methods for solving problems in control engineering and decision making across various fields of study: industry and research, energy efficiency and sustainability, ontology-based data simulation, smart city technologies, theory and use of digital signal processing, distributed calculations and cloud computing, cognitive systems, robotics, cybernetics, automation control theory, image and sound processing, image recognition technologies and computer vision. Particular emphasis is laid on modern trends, new approaches, algorithms and methods in the selected fields of interest.

The presented papers were accepted after careful review by at least three independent reviewers in a double-blind process. The acceptance rate was about 60%. The chapters are organized thematically in several areas within the following tracks:

• Models, methods and approaches in decision-making systems
• Mathematical modelling for industry and research
• Smart city technologies

The conference focused on the development and globalization of Information and Communication Technologies (ICT), methods of control engineering and decision making along with innovations and networking, ICT for sustainable development and technological change, and global challenges. Moreover, ICIT 2019 served as a discussion forum for the above-mentioned topics.


The editors believe that the readers will find the proceedings interesting and useful for their own research work.

February 2019

Olga Dolinina (Saratov, Russia)
Alexander Brovko (Saratov, Russia)
Vitaly Pechenkin (Saratov, Russia)
Alexey Lvov (Saratov, Russia)
Vadim Zhmud (Novosibirsk, Russia)
Vladik Kreinovich (El Paso, USA)

Programme Committee

Programme Committee Chair

Alexander Rezchikov, Doctor of Engineering, Professor, Institute of Precision Mechanics and Control, Russian Academy of Sciences, Corresponding Member of Russian Academy of Sciences (Russia)

Programme Committee Members

Alexander Sytnik, Doctor of Engineering, Professor, Corresponding Member of Russian Academy of Education, Yuri Gagarin State Technical University of Saratov (Russia)
Vadim Kushnikov, Doctor of Engineering, Professor, Institute of Precision Mechanics and Control, Russian Academy of Sciences (Russia)
Vadim Zhmud, Doctor of Engineering, Professor, Novosibirsk State Technical University (Russia)
Leonid Bobrov, Doctor of Engineering, Professor, Novosibirsk State University of Economics and Management (Russia)
Olga Dolinina, Doctor of Engineering, Professor, Yuri Gagarin State Technical University of Saratov (Russia)
Sergey Borovik, Doctor of Engineering, Professor, Institute for the Control of Complex Systems, Russian Academy of Sciences (Russia)
Vladimir Kulagin, Doctor of Engineering, Professor, Moscow Institute of Electronics and Mathematics of Higher School of Economics (Russia)
Boris Pozdneev, Doctor of Engineering, Professor, Moscow State University of Technology “STANKIN” (Russia)
Nadezhda Yarushkina, Doctor of Engineering, Professor, Ulyanovsk State Technical University (Russia)
Sebastien Vaucher, Ph.D., Swiss Federal Laboratories for Materials Science and Technology, EMPA, Thun (Switzerland)


Lubomir Dimitrov, Doctor of Engineering, Professor, Technical University of Sofia (Bulgaria)
Ekaterina Pechenkina, Ph.D., Swinburne University of Technology (Australia)
Uranchimeg Tudevdagva, Doctor of Science, Professor, Mongolian University of Science and Technology (Mongolia)
Armen Kostanyan, Ph.D., Associate Professor, Yerevan State University (Armenia)
Valery Kirilovich, Doctor of Engineering, Professor, Zhitomir State Technological University (Ukraine)

Organizing Committee Chair

Olga Dolinina, Doctor of Engineering, Professor, Yuri Gagarin State Technical University of Saratov (Russia)

Organizing Committee Members

Alexander Sytnik, Doctor of Engineering, Professor, Yuri Gagarin State Technical University of Saratov (Russia), Corresponding Member of Russian Academy of Education (Russia)
Vadim Kushnikov, Doctor of Engineering, Professor, Institute of Precision Mechanics and Control, Russian Academy of Sciences (Russia)
Olga Toropova, Ph.D., Associate Professor, Yuri Gagarin State Technical University of Saratov (Russia)
Alexander Brovko, Doctor of Science, Professor, Yuri Gagarin State Technical University of Saratov (Russia)
Svetlana Kumova, Ph.D., Associate Professor, Yuri Gagarin State Technical University of Saratov (Russia)
Elena Kushnikova, Ph.D., Associate Professor, Yuri Gagarin State Technical University of Saratov (Russia)
Daria Cherchimtseva, Yuri Gagarin State Technical University of Saratov (Russia)

Conference Organizer

Yuri Gagarin State Technical University of Saratov, website: www.sstu.ru, email: sstu_offi[email protected]


Co-organizers

Russian Academy of Education
Institute of Precision Mechanics and Control, Russian Academy of Sciences (Saratov, Russia)
Institute for the Control of Complex Systems, Russian Academy of Sciences (Samara, Russia)

Conference Website, Call for Papers

http://icit2019.sstu.ru


Contributors

Maxim Abramov Laboratory of Theoretical and Interdisciplinary Problems of Informatics, St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, St. Petersburg, Russia; Mathematics and Mechanics Faculty, St. Petersburg State University, St. Petersburg, Russia
Olga A. Abramova Center for Micro and Nanoscale Dynamics of Dispersed Systems, Bashkir State University, Ufa, Russia
Mikhail Abrosimov Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Bakhytzhan Akhmetov Department of Computer and Software Engineering, Turan University, Almaty, Kazakhstan
Raya Alenova International University of Information Technologies, Almaty, Kazakhstan
Sergei Alipchenko Department of Applied Information Technologies, Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Leonid Aronov Ryazan State Radio Engineering University, Ryazan, Russia
Alexander Arzamastsev Tambov State University Named After G. R. Derzhavin, Tambov, Russian Federation
Adel Askarova Yuri Gagarin State Technical University of Saratov, Saratov, Russian Federation
Lyazzat Balgabayeva Department of Computer and Software Engineering, Turan University, Almaty, Kazakhstan
Andrew Baukov Ryazan State Radio Engineering University, Ryazan, Russia


Evgeny Basinya Novosibirsk State Technical University, Novosibirsk, Russian Federation; Institute of Information and Communication Technologies, Novosibirsk, Russian Federation
Aleksandr Berman Matrosov Institute for System Dynamics and Control Theory, Siberian Branch of the Russian Academy of Sciences, Irkutsk, Russia
Yury Blinkov Saratov State University, Saratov, Russia
Leonid Bobrov Novosibirsk State University of Economics and Management, Novosibirsk, Russia
Alexey Bogomolov Institute of Precision Mechanics and Control, Russian Academy of Sciences, Saratov, Russia; Saratov State University, Saratov, Russia
Aleksander Bolshakov Peter the Great St. Petersburg Polytechnic University, Saint Petersburg, Russian Federation
Alexander Brovko Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Aiguzel Z. Bulatova Center for Micro and Nanoscale Dynamics of Dispersed Systems, Bashkir State University, Ufa, Russia
Tatyana Buldakova Bauman Moscow State Technical University, Moscow, Russia
Aleksandr Chernenko Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Elena Chernyshkova Department of Foreign Languages, Saratov State Medical University Named After V. I. Razumovsky, Saratov, Russia
Stanislav Daurov Department of Applied Information Technology, Institute of Applied Information and Communications Technologies, Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Lubomir Dimitrov Faculty of Mechanical Engineering, Technical University of Sofia, Sofia, Bulgaria
Slav Dimitrov Technical University of Sofia, Sofia, Bulgaria
Reneta Dimitrova Technical University of Sofia, Sofia, Bulgaria
Vladimir Dmitriev Ryazan State Radio Engineering University, Ryazan, Russia
Olga Dolinina Department of Information Systems and Technology, Yuri Gagarin State Technical University of Saratov, SSTU, Saratov, Russia
Nikita Dorodnykh Matrosov Institute for System Dynamics and Control Theory, Siberian Branch of the Russian Academy of Sciences, Irkutsk, Russia


Nazgul B. Fatkullina Center for Micro and Nanoscale Dynamics of Dispersed Systems, Bashkir State University, Ufa, Russia
Leonid Filimonyuk Institute of Precision Mechanics and Control of RAS, Saratov, Russia
Aleksey Filippov Ulyanovsk State Technical University, Ulyanovsk, Russia
Dmitry Fominykh Institute of Precision Mechanics and Control, Russian Academy of Sciences, Saratov, Russia
Galina Frantsuzova Novosibirsk State Technical University, Novosibirsk, Russia
Natalya Frolova Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Alexander Geyda St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, St. Petersburg, Russia
Olga Glukhova Department of Applied Information Technologies, Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Oleg Golovnin Samara University, Samara, Russia
Maria Grigoricheva Ulyanovsk State Technical University, Ulyanovsk, Russian Federation
Nikolay Gubin Department of Information Systems and Technology, Yuri Gagarin State Technical University of Saratov, SSTU, Saratov, Russia
Nadezhda Ivannikova Astrakhan State University Architectural and Civil Engineering, Astrakhan, Astrakhan Region, Southern Federal District, Russia
Sergey Ivanov Saratov State University, Saratov, Russia
Anton Ivaschenko Samara State Technical University, Samara, Russia
Vladimir Ivaschenko Institute of Precision Mechanics and Control, Russian Academy of Sciences, Saratov, Russia
Sergey Ivzhenko Yuri Gagarin State Technical University of Saratov, Saratov, Russian Federation
Anna Kalinina Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Leonid Kamalov ASCON JSC, Ulyanovsk, Russian Federation
Tatiana Kamenskikh Department of Eye Diseases, Saratov State Medical University Named After V. I. Razumovsky, Saratov, Russia
Ani Karapetyan IT Educational and Research Center, Yerevan State University, Yerevan, Armenia


Ekaterina Kasatkina Kalashnikov Izhevsk State Technical University, Izhevsk, Russia
Georgiy Katirkin SEC “Open Code”, Samara, Russia
Maria Khamutova Saratov State University, Saratov, Russia
Anna Khanova Astrakhan State Engineering Institute, Astrakhan, Russia
Nikita A. Kharitonov St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS), St. Petersburg, Russia
Anastasiia Khlobystova Laboratory of Theoretical and Interdisciplinary Problems of Informatics, St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, St. Petersburg, Russia; Mathematics and Mechanics Faculty, St. Petersburg State University, St. Petersburg, Russia
Vladimir Khryashchev P.G. Demidov Yaroslavl State University, Yaroslavl, Russia
Sergey Kirillov Ryazan State Radio Engineering University, Ryazan, Russia
Ilham Kitouni MISC Laboratory, University of Abdelhamid Mehri—Constantine 2, Ali Mendjeli, Algeria
Dmitry Klenov Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Igor Kolbenev Department of Eye Diseases, Saratov State Medical University Named After V. I. Razumovsky, Saratov, Russia
Dmitry Kondratov Russian Presidential Academy of National Economy and Public Administration, Saratov, Russia; Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Yulia Kondratova Saratov State University, Saratov, Russia
Mikhail Korolev Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Armen Kostanyan IT Educational and Research Center, Yerevan State University, Yerevan, Armenia
Sofia Kouah RELA(CS)2 Laboratory, University of Larbi Ben M’Hidi, Oum El Bouaghi, Algeria
Darina Krivosheeva Bauman Moscow State Technical University, Moscow, Russia
Ekaterina Kulakova Department of Applied Information Technologies, Yuri Gagarin State Technical University of Saratov, Saratov, Russia


Aleksei Kulik Yuri Gagarin State Technical University of Saratov, Saratov, Russian Federation
Svetlana Kumova Department of Applied Information Technology, Institute of Applied Information and Communications Technologies, Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Ekaterina Kusheleva Saratov State University, Saratov, Russia
Oleg Kushnikov Yuri Gagarin State Technical University, Saratov, Russia; Institute of Precision Mechanics and Control of the Russian Academy of Science, Saratov, Russia
Vadim Kushnikov Institute of Precision Mechanics and Control of Russian Academy of Sciences, Saratov, Russia; Yuri Gagarin State Technical University, Saratov, Russia; Saratov State University, Saratov, Russia
Elena Kushnikova Department of Applied Information Technologies, Yuri Gagarin State Technical University of Saratov, Saratov, Russia; Saratov State Technical University, Saratov, Russia; Institute of Precision Mechanics and Control, Russian Academy of Sciences, Saratov, Russia
Andrey Kuzmin Penza State University, Penza, Russia
Kseniya Kuznetsova Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Alexander Lada Institute for the Control of Complex Systems of Russian Academy of Sciences, Samara, Russia; SEC Smart Transport Systems, Samara, Russia
Valerii Lakhno Department of Computer Systems and Networks, National University of Life and Environmental Sciences of Ukraine, Kiev, Ukraine
Anton Lebedev P. G. Demidov Yaroslavl State University, Yaroslavl, Russia
Alexander Liapidevskiy Novosibirsk Institute of Program Systems, Novosibirsk, Russia
Guido Link Karlsruhe Institute of Technology, Karlsruhe, Germany
Svetlana Lipatova Ulyanovsk State University, Ulyanovsk, Russia
Yuri Litovka Tambov State Technical University, Tambov, Russian Federation
Igor Lutoshkin Ulyanovsk State University, Ulyanovsk, Russia
Alexey L’vov Yuri Gagarin State Technical University of Saratov, Saratov, Russian Federation


Vladimir Malyukov Department of Information Systems and Mathematical Disciplines, European University, Kiev, Ukraine
Anatoly G. Maximov St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS), St. Petersburg, Russia
Irina Medyankina Novosibirsk State University of Economics and Management, Novosibirsk, Russia
Nina Melnikova Yuri Gagarin State Technical University of Saratov, Saratov, Russian Federation
Dmitry Mishchenko Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Lev Mogilevich Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Tatiana V. Moiseeva Institute for the Control of Complex Systems of Russian Academy of Sciences, Samara, Russia
Vadim Moshkin Ulyanovsk State Technical University, Ulyanovsk, Russia
Denis Nefedov Kalashnikov Izhevsk State Technical University, Izhevsk, Russia
Konstantin Neusypin Bauman Moscow State Technical University, Moscow, Russia
Olga Nikolaychuk Matrosov Institute for System Dynamics and Control Theory, Siberian Branch of the Russian Academy of Sciences, Irkutsk, Russia
Stelian Nikolov Technical University of Sofia, Sofia, Bulgaria
Yuliya Nikulina Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Jaroslav Nosek Technical University of Liberec, Liberec, Czech Republic
Ilya Osinin Scientific Production Association Real-Time Software Complexes, Moscow, Russia
Nikita Ostroglazov Samara University, Samara, Russia
Sergey Papshev Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Vitaly Pechenkin Department of Information Systems and Technology, Yuri Gagarin State Technical University of Saratov, SSTU, Saratov, Russia
Olga Perepelkina Russian Presidential Academy of National Economy and Public Administration, Saratov, Russia
Dmitriy Piminov Yuri Gagarin State Technical University of Saratov, Saratov, Russia


Yulia A. Pityuk Center for Micro and Nanoscale Dynamics of Dispersed Systems, Bashkir State University, Ufa, Russia
Yuriy Polyanskov Ulyanovsk State University, Ulyanovsk, Russia
Victor Popov Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Elizaveta Popova Saratov State University, Saratov, Russia
Sergey Potemkin Department of Applied Information Technology, Institute of Applied Information and Communications Technologies, Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Sergey Prokhorov Samara National Research University, Samara, Russia
Andrey Proletarsky Bauman Moscow State Technical University, Moscow, Russia
Oleg Protalinsky Moscow Energy Institute, Moscow, Russia
Alexander Rezchikov Institute of Precision Mechanics and Control, Russian Academy of Sciences, Saratov, Russia; Saratov State Technical University, Saratov, Russia
Zinaida Rodionova Novosibirsk State University of Economics and Management, Novosibirsk, Russia
Anton Romanov Ulyanovsk State Technical University, Ulyanovsk, Russia
Igor V. Rudakov BMSTU, Moscow, Russia
Aleksander Rudkovskiy Novosibirsk State Technical University, Novosibirsk, Russian Federation
Ekaterina Saburova Kalashnikov Izhevsk State Technical University, Izhevsk, Russia
Pavel Sadchikov Astrakhan State University Architectural and Civil Engineering, Astrakhan, Astrakhan Region, Southern Federal District, Russia
Andrey Samartsev Institute of Precision Mechanics and Control, Russian Academy of Sciences, Saratov, Russia
Maria Selezneva Bauman Moscow State Technical University, Moscow, Russia
Nickita Semezhev Yuri Gagarin State Technical University of Saratov, Saratov, Russian Federation
Ivan Shcherbatov Moscow Energy Institute, Moscow, Russia
Vadim Shiskin Institute of Aviation Technology and Design and Management of Ulyanovsk State Technical University, Ulyanovsk, Russian Federation
Tatyana Shulga Yuri Gagarin State Technical University of Saratov, Saratov, Russia


Pavel Sitnikov ITMO University, Saint Petersburg, Russia
Petr Skonnikov Ryazan State Radio Engineering University, Ryazan, Russia
Sergey Smirnov Institute for the Control of Complex Systems of Russian Academy of Sciences, Samara, Russia
Sergey V. Smirnov Institute for the Control of Complex Systems of Russian Academy of Sciences, Samara, Russia
Oksana Soldatkina Saratov Branch of the Institute of State and Law, Russian Academy of Sciences, Saratov, Russia
Denis Solovjev Tambov State University Named After G. R. Derzhavin, Tambov, Russian Federation
Inna Solovjeva Tambov State Technical University, Tambov, Russian Federation
Anastasiya Srednyakova P. G. Demidov Yaroslavl State University, Yaroslavl, Russia
Olga Stepanova P. G. Demidov Yaroslavl State University, Yaroslavl, Russia
Anastasia Stolbova Samara National Research University, Samara, Russia
Iurii V. Stroganov BMSTU, Moscow, Russia
Sergey Suyatinov Bauman Moscow State Technical University, Moscow, Russia
Michael Svetlov Institute of Precision Mechanics and Control of RAS, Saratov, Russia
Marina Svetlova Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Alexander Sytnik Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Irina Sytnik Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Kate Tairova Institute of Aviation Technology and Design and Management of Ulyanovsk State Technical University, Ulyanovsk, Russian Federation
Anara Tashimova Department of Computer Science and Information Technology, Aktobe Regional State University Named After K. Zhubanov, Aktobe, Kazakhstan
Shara Toibaeva Institute of Information and Computational Technologies, Almaty, Republic of Kazakhstan
Olga Toropova Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Anatoly Tsvirkun Institute of Control Problems, Russian Academy of Science, Moscow, Russia


Alexander Tulupyev Laboratory of Theoretical and Interdisciplinary Problems of Informatics, St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, St. Petersburg, Russia; Mathematics and Mechanics Faculty, St. Petersburg State University, St. Petersburg, Russia
Alexander L. Tulupyev St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS), St. Petersburg, Russia; Laboratory of Theoretical and Interdisciplinary Problems of Informatics, St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, St. Petersburg, Russia; Mathematics and Mechanics Faculty, St. Petersburg State University, St. Petersburg, Russia
Vladimir Tverdokhlebov Institute of Precision Mechanics and Control, Russian Academy of Sciences, Saratov, Russia
Elena Umnova Yuri Gagarin State Technical University of Saratov, Saratov, Russian Federation
Irbulat Utepbergenov Institute of Information and Computational Technologies, Almaty, Republic of Kazakhstan
Natalia Vagarina Yuri Gagarin State Technical University of Saratov, Saratov, Russian Federation
Irina Veshneva Saratov State University, Saratov, Russian Federation
Liliya Volkova BMSTU, Moscow, Russia
Alexander Voloshinov Yuri Gagarin State Technical University of Saratov, Saratov, Russia
Anatoly Vostrikov Novosibirsk State Technical University, Novosibirsk, Russia
Nailya Yamaltdinova Ulyanovsk State University, Ulyanovsk, Russia
Natalya Yandybaeva Balakovo Branch, Russian Presidential Academy of National Economy and Public Administration, Balakovo, Russia
Margarita Yardaeva Ulyanovsk State University, Ulyanovsk, Russia
Nadezhda Yarushkina Ulyanovsk State Technical University, Ulyanovsk, Russian Federation
Aleksandr Yurin Matrosov Institute for System Dynamics and Control Theory, Siberian Branch of the Russian Academy of Sciences, Irkutsk, Russia; Irkutsk National Research Technical University, Irkutsk, Russia
Vadim Zhmud Novosibirsk State Technical University, Novosibirsk, Russia
Alexandr Zholobov Don State Technical University, Rostov-on-Don, Russia

Part I

Models, Methods and Approaches in Decision Making Systems

Data Protection During Remote Monitoring of Person’s State

Tatyana Buldakova and Darina Krivosheeva

Bauman Moscow State Technical University, 2-Ya Baumanskaya, 5, 105005 Moscow, Russia [email protected], [email protected]

Abstract. The problem of protecting a person’s personal data in telemedicine systems is considered. A model of possible threats is developed for a mobile measuring system that provides continuous monitoring of a person’s state from recorded biosignals. The problem of ensuring the confidentiality and integrity of personal data transferred from the sensor to the cloud is identified. Possible ways of protecting the transmitted information in systems for remote monitoring of a person’s state are systematized, and an original method of personal data protection is presented. It is shown that the information needed to construct cryptographic keys can be obtained by appropriate processing of biosignals. It is proposed to use the biosignals registered by the sensors to construct symmetric cryptographic keys, which reflect the physiological characteristics of the patient and can be used to conceal information. The processing of biosignals is based on the reconstruction of a mathematical model that generates time series diagnostically equivalent to the original biosignals. Examples of reconstruction from biosignals for obtaining a physiological signature of the person are given.

Keywords: Telemedicine · Mobile measuring system · Data protection · Biosignals · Reconstruction of system

1 Introduction

Remote monitoring systems are actively used in various areas where remote control of an object’s state is required (for example, in industrial production, power engineering, agriculture, education, etc.). In the remote monitoring process, remote collection, transmission, storage and processing of data about the state of the observed object are performed; these data are necessary for forming and making decisions on the control of this object [1–3]. Thus, such systems make it possible to control various processes in an automated mode and to assess the state of complex objects remotely [4, 5].

Monitoring systems work with important information of limited access, which is subject to both accidental and deliberate influences from the external environment. Even violations of the information processes occurring in the system that seem insignificant at first glance lead to risks of distortion, disclosure or destruction of information. In turn, this can lead to serious consequences: loss of confidentiality, integrity and/or availability of information, compromise of the organization and undermining of its credibility, and serious damage.

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 3–14, 2019. https://doi.org/10.1007/978-3-030-12072-6_1


Therefore, the main problem in the functioning of monitoring systems is counteracting the random and deliberate environmental influences that reduce the security of the transmission channel and the reliability of the transmitted data [6, 7]. Information security issues are of particular importance in systems for remote monitoring of a person’s state [8]. In such systems, the decisive role belongs to ensuring the security of personal data. Violation of the integrity and confidentiality of information and theft of personal medical data lead not only to financial losses but also to undesirable social consequences, and they cause moral damage to the patient.

In this work, possible ways of protecting the transmitted information in systems for remote monitoring of a person’s state are systematized, and an original method of personal data protection is presented. This method is based on reconstructing the model equations of systems from recorded biosignals.

2 Features of Remote Monitoring Systems of Patient’s State

The modernization of the healthcare system is accompanied by the active introduction of information and communication technologies and the creation of telemedicine systems, which remotely provide highly skilled care and consultations by medical center doctors to patients in remote areas [9, 10]. It is assumed that, first of all, this approach will develop in telemedicine systems for dynamic monitoring of patients suffering from chronic diseases (cardiovascular, renal, etc.) and of elderly people. In addition, mobile telemedicine complexes are currently being developed for work at accident sites.

In general, remote monitoring systems allow not only carrying out medical and preventive measures and controlling the person’s state, but also implementing research activities based on modern technologies of data integration and processing (Fig. 1). The principles of creating such systems are similar; the differences lie in the methods of information processing and in how the results of assessing the person’s state are used.

Fig. 1. Directions of activity of telemedicine systems


In telemedicine systems, the assessment of a person’s state can be performed in different ways: (1) under the supervision of the attending physician; (2) based on the analysis of controlled parameters (norm vs. pathology); (3) automated, according to a computational model of a person (called “virtual physiology”) describing the physiological activity of human subsystems [11–13]. Based on the results of information processing, such systems are also able to provide decision support in emergency situations and to develop recommendations on the organization of work for people managing complex equipment.

Regardless of the assessment method, various sensors that record a person’s biosignals are used as sources of objective information on the person’s functional state. A system of built-in sensors allows controlling various physiological parameters, including heart rhythm, breathing rhythm, electrocardiogram (ECG), body temperature, degree of blood oxygen saturation, etc. Heart rate monitors, ECG sensors and respiratory frequency sensors are the most widespread biosignal registrators.

The achieved technical level of “sensor-on-a-chip” and “laboratory-on-a-chip” technologies allows the creation of compact mobile terminals that realize the functions of primary physiological analysis and data transmission to medical centers for more in-depth analysis [14, 15]. The modern telemedicine complex combines a powerful computer, easily interfaced with a variety of medical equipment, short-range and long-range wireless communications, video conferencing facilities and IP-broadcasting facilities. A promising direction for the development of remote monitoring of a patient’s state is the integration of sensors into clothing, various accessories and mobile phones [16, 17]. The inclusion of such mobile measuring systems in a common information space will allow continuous monitoring of a person’s state regardless of location (see Fig. 2).

Fig. 2. Remote monitoring based on common information space


The storage, computation and visualization of the huge amount of data collected by the monitoring system require significant computing resources, provided by a virtual infrastructure using cloud technologies. Biosignal registrators can send data to the cloud directly or through intermediate base stations (for example, a smartphone). Health professionals and other users of the system can view the collected medical information directly from the cloud using a smartphone or via the Internet in real time and make decisions according to the current functional state of the person. In this case, the problem of ensuring the integrity, confidentiality and availability of the transferred physiological data, on the basis of which decisions about the patient’s state are made, becomes particularly important. Considering that various specialists may access the information (including unauthorized access), methods and technologies for protecting patients’ personal data are required.
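As a concrete illustration of the sensor-to-cloud data flow described above, the following minimal sketch shows what one biosignal message from a registrator might look like before encryption. The field names (`patient_id`, `signal_type`, `sample_rate_hz`, `samples`, `timestamp`) are assumptions introduced here for illustration; the paper does not define a message format.

```python
import json
import time
from dataclasses import asdict, dataclass, field
from typing import List

@dataclass
class BiosignalReading:
    """One batch of samples sent from a wearable sensor toward the cloud.

    All field names are illustrative assumptions, not a protocol from the paper.
    """
    patient_id: str        # pseudonymous identifier, never the real name
    signal_type: str       # e.g. "ECG", "PPG", "respiration"
    sample_rate_hz: int    # sampling frequency of this batch
    samples: List[float]   # raw signal values
    timestamp: float = field(default_factory=time.time)

    def to_json(self) -> str:
        # The serialized form is what would be encrypted before transmission.
        return json.dumps(asdict(self))

reading = BiosignalReading("patient-042", "ECG", 250, [0.01, 0.02, 0.35, 0.02])
restored = json.loads(reading.to_json())
print(restored["signal_type"])  # ECG
```

In a real deployment this payload would be encrypted and authenticated on the sensor side before leaving the device, which is exactly the problem the following sections address.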

3 Model of Possible Threats for Remote Monitoring

The lack of real security methods can lead not only to a breach of confidentiality, but also potentially allows hackers to harm the patient by changing the actual physiological data, which would lead to an incorrect diagnosis and treatment [18]. Taking into account the international requirements of the Health Insurance Portability and Accountability Act (HIPAA), protection of personal medical data is absolutely necessary (http://www.hhs.gov/ocr/hipaa/). In order to choose the means of protection, it is first necessary to identify the possible information security threats applicable to all components of the monitoring system (Table 1).

Table 1. Possible threats of information security.

1. Sensor. Threat: access of a malefactor to the sensor. Recommendation: use reliable sensors that limit access.
2. Communication links. Threat: malefactors can eavesdrop on all kinds of conversations and also distort signals. Recommendation: communication in the system is unreliable, so signals must be encrypted.
3. Smartphone. Threat: a malefactor could affect the operation of the smartphone. Recommendation: protect the applications on the smartphone.
4. Data warehouse in the cloud. Threat: possible access to data in the cloud. Recommendation: only an authorized physician can access the patient information.
5. Medical staff. Threat: transferring information to a malefactor. Assumption: medical staff will not open access to information under the influence of malefactors.
6. Patient. Threat: transferring information to a malefactor. Assumption: the patient will not open access to information under the influence of malefactors.
7. Patient’s body. Threat: a malefactor can shake the patient’s hand, so the biosignals can be distorted. Recommendation: reliable sensors do not allow the malefactor to distort signals.


Analysis of the threat model showed that there is a problem of securing the patient data transmitted from the sensor to the storage. At the same time, it is crucial to protect personal medical data while it is transmitted through the communication channel from the sensors to the cloud-based medical database. In this regard, to protect the transferred personal information, it is necessary to choose how to distribute the cryptographic keys between the sensor and the cloud so as to ensure the encryption and integrity of the data.

4 Data Protection in Telemedicine Systems Despite the growing flow of researches in the field of information security, very few studies are aimed at studying the risks of information security in the health sector, which is substantially regulated and uses business models that are rather different from those of other industries [8, 19, 20]. Analysis of different approaches to the distribution of cryptographic keys is made in [21]. The protection method E2E (end-to-end), which is often used in such cases, works by defining and the subsequent distribution of cryptographic keys between sensors and a cloud. This method provides the secrecy and integrity of data. Further, the key can also be used for mutual authentication of the message. The main problem here is the possibility of a confidential distribution (delivery) of the keys to their users. For the patient, this procedure should be clear and not burdensome. In the most favorable case, the patient doesn’t worry about the key. All traditional approaches to health systems security are based on asymmetric cryptosystems. Asymmetric encryption uses two different keys: one for encryption (also called public) and another for decryption (called a private or secret). Such approach is rather reliable for ensuring confidentiality and integrity of transmitted data, but due to the large key lengths the asymmetric cryptosystems are expensive to regular data exchange in real-time system as demand large expenses of resources and time. Besides, asymmetric cryptography poorly opposes to some types of attacks, and for its use the additional mechanisms of authentication are necessary. Therefore it is inexpedient to use asymmetric enciphering in remote monitoring systems where the data is processed in real time. An alternative approach to the protection of the transmitted data is the method of creating paired symmetric keys for the sensor and receiver. 
In a symmetric cryptosystem the same cryptographic key is used for encryption and decryption, and this key must be kept secret by both parties. Secret-key algorithms run about three orders of magnitude faster than public-key algorithms, which is very important for real-time telemedicine systems. A drawback of symmetric ciphers, however, is that they cannot be used to confirm authorship, since the key is known to both parties. For this purpose a number of works propose using the biosignals recorded by the sensors, which reflect the physiological characteristics of the patient and can be used to conceal information [22, 23]. For example, [24] highlights morphological features of the electrocardiogram (ECG) and photoplethysmogram (PPG) that are unique to each person and change little over time (see Fig. 3).
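To illustrate how a shared symmetric key can provide both integrity and message authentication, here is a minimal sketch using Python's standard `hmac` module. The key-establishment step is assumed to have already happened (for example, by a physiological key-agreement scheme such as PEES); the message fields are hypothetical.

```python
import hmac
import hashlib
import json
import secrets

# Shared symmetric key, assumed to have been established between the
# sensor and the cloud by some key-agreement scheme (not shown here).
key = secrets.token_bytes(32)

def protect(reading: dict, key: bytes) -> dict:
    """Attach an HMAC tag so the receiver can verify integrity/authenticity."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify(message: dict, key: bytes) -> bool:
    """Recompute the tag on the received payload and compare in constant time."""
    expected = hmac.new(key, message["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = protect({"hr": 72, "t": 1700000000}, key)
assert verify(msg, key)

# Any tampering with the transmitted payload invalidates the tag.
forged = dict(msg, payload=msg["payload"].replace("72", "95"))
assert not verify(forged, key)
```

Note that an HMAC gives integrity and origin authentication but not confidentiality; in a real system it would be combined with symmetric encryption of the payload.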


T. Buldakova and D. Krivosheeva

Fig. 3. Morphological PQRST-parameters of the ECG signal of a healthy person

It is proposed to use these morphological features to construct cryptographic keys and also to create a model of the "physiological" signature of the individual. It is also noted that physiological signals can be generated artificially by a model signal generator, provided the model is correctly constructed from information about the human state. To realize this approach, a method for building the model that generates artificial physiological signals must be chosen. An implementation of this approach to protecting the communication channel from the sensor to the cloud is presented in the PEES system [25], where the ECG and PPG are used for data security. The generator of an artificial ECG is described by the expression [26]

$$\frac{d\,ECG(t)}{dt} = \sum_{i \in \{P,Q,R,S,T\}} a_i \left(2\pi\, hr_{mean}\, t - \theta_i\right) e^{-\frac{(2\pi\, hr_{mean}\, t - \theta_i)^2}{2 b_i^2}} \qquad (1)$$

where $hr_{mean}$ is the mean heart rate of the person. Temporal variability parameters include the mean heart rate, the standard deviation of the heart rate, and the LF/HF ratio. To obtain morphological parameters, each of the P, Q, R, S, and T waves of the ECG is represented by a Gaussian curve. Each curve has three parameters, so there are a total of 15 morphological parameters ($a_P, a_Q, a_R, a_S, a_T, b_P, b_Q, b_R, b_S, b_T, \theta_P, \theta_Q, \theta_R, \theta_S, \theta_T$). The PPG model characterizes the shape of a PPG pulse using differential equations and is based on a Windkessel model of the human vascular system [27]. The signal is split into two parts, systole and diastole. The diastole is modeled by the equation

$$PPG_{dias}(t) = a_1 + a_2 e^{-a_3 t} + \frac{1}{a_4 + e^{-(a_5 t - a_6)}} \cos(a_7 t + a_8) \qquad (2)$$

Data Protection During Remote Monitoring of Person’s State


For the systole, the analytically driven left-ventricular pulse waveform is modeled with a single logistic function:

$$PPG_{sys}(t) = \frac{1}{a_9 + e^{-(a_{10} t - a_{11})}} \qquad (3)$$

The coefficients $a_1, a_2, \dots, a_{11}$ in Eqs. (2) and (3) are the morphological parameters. The temporal parameters include the mean heart rate, the standard deviation of the heart rate, and the LF/HF ratio. It is assumed that the original biosignals are safely transmitted to the cloud for parameterization of the model. After initialization, based on the morphological properties of the biosignals, any future distribution of E2E keys occurs transparently to the patient. Once initialization is completed, the security key may be updated, if necessary, by executing the PEES protocol. This approach does not require a priori distribution of keys: to create a secure E2E connection it is enough to place the sensors on the patient's body. Inside the data warehouse in the cloud there is a diagnostic equivalent of the physiological signals in the form of time series. This artificial model is created by the generator, which is configured according to the patient's physiological data. Note that the model in the cloud is not static, since the patient's state may change; the model must therefore be updated regularly in a safe way.

The main disadvantage of the above example is the large number of morphological parameters. In addition, the functional dependencies must be selected according to the type of recorded biosignal, which is not very effective. In this example, essentially, functional dependencies were reconstructed from time series (biosignal records). Below, an approach based on the reconstruction of systems is proposed. The reconstructed models are represented as differential equations whose solutions are the desired functional dependencies.
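The generators of Eqs. (1)–(3) can be sketched numerically. Below is a minimal, illustrative Python version; all parameter values are placeholders chosen for plausibility, not the fitted values from [26] or [27].

```python
import math

# Illustrative (a_i, b_i, theta_i) triples for the five ECG waves.
ecg_waves = {
    "P": (0.8, 0.25, -math.pi / 3),
    "Q": (-1.0, 0.10, -math.pi / 12),
    "R": (15.0, 0.10, 0.0),
    "S": (-2.5, 0.10, math.pi / 12),
    "T": (0.7, 0.40, math.pi / 2),
}
hr_mean = 1.0  # mean heart rate, Hz (60 bpm)

def decg_dt(t: float) -> float:
    """Right-hand side of Eq. (1): a sum of Gaussian-shaped wave terms."""
    phase = 2 * math.pi * hr_mean * t
    return sum(a * (phase - th) * math.exp(-((phase - th) ** 2) / (2 * b * b))
               for a, b, th in ecg_waves.values())

# Euler integration of Eq. (1) over one cardiac cycle yields the ECG itself.
dt = 0.001
ecg, x = [], 0.0
for k in range(1000):
    x += decg_dt(k * dt) * dt
    ecg.append(x)

a = [None, 0.2, 0.8, 3.0, 1.0, 8.0, 2.0, 6.0, 0.0, 1.0, 10.0, 3.0]  # a1..a11

def ppg_dias(t: float) -> float:
    """Eq. (2): diastolic part of the PPG pulse."""
    return (a[1] + a[2] * math.exp(-a[3] * t)
            + math.cos(a[7] * t + a[8]) / (a[4] + math.exp(-(a[5] * t - a[6]))))

def ppg_sys(t: float) -> float:
    """Eq. (3): systolic part, a single logistic function."""
    return 1.0 / (a[9] + math.exp(-(a[10] * t - a[11])))

assert all(math.isfinite(v) for v in ecg)
assert all(0.0 < ppg_sys(k * 0.01) < 1.0 for k in range(100))
```

In the PEES setting these generators would be parameterized per patient in the cloud and re-run to produce the "diagnostic equivalent" time series mentioned above.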

5 Proposed Approach to Physiological Data Protection

System model reconstruction from observed time series is successfully applied to a variety of problems. Under incomplete data, the reconstruction method makes it possible to evaluate the state of the system, identify features of its behavior, and predict its development [28–30]. These problems are very important for the remote monitoring of complex systems. The model-based approach to analyzing systems, founded on reconstruction, is well established in the processing of human biosignals. In contrast to the classical methods of reconstructing complex systems, here a different approach is proposed: to reconstruct models whose structure takes into account the biophysical features of the origin (generation) of biosignals. The proposed approach rests on the concept of basic models: the model structure is determined by the type (morphology) of the biosignal, and its parameters by the features of a specific realization.


The problem of determining morphological features is thus reduced to the problem of model system reconstruction; the morphological features are then both the structure of the model and its parameters. The key generation process is performed in accordance with the PEES protocol and is discussed in detail in [25]. We consider the application of this approach using the example of the sphygmogram, which records the oscillations of the blood vessel wall caused by the ejection of the stroke volume of blood into the arterial channel. Using the principle of basic models of oscillatory systems and taking into account the biomechanics of the vessel, the dynamic properties of the vascular wall can be described by the autonomous Van der Pol–Rayleigh equation:

$$\ddot{x} + \left[\varepsilon_1 \left(x^2 - r_0^2\right) + \varepsilon_2 \left(\dot{x}^2 - \omega_0^2 r_0^2\right)\right] \dot{x} + \alpha x = 0 \qquad (4)$$

Here $x$ is the displacement of the blood vessel wall registered by the sensor. The equation parameters, reflecting such vessel properties as compliance and dissipation, are common to all vessels and at the same time unique to each individual. Since the vessel walls are moved by the pressure of the blood flow, accounting for the influence of the cardiac subsystem leads to the equation

$$\ddot{x} + \left[\varepsilon_1 \left(x^2 - r^2\right) + \varepsilon_2 \left(\dot{x}^2 - \omega_0^2 r^2\right)\right] \dot{x} + \alpha x = P(\omega_0 t) \qquad (5)$$

A time series of vessel wall pulsations (a pulsogram) is the initial information for determining the unknown control parameters. The time series is preprocessed for trend removal, frequency stabilization, and noise reduction. Since the system functions in limit-cycle mode, the unknown parameters $\omega_0$ and $r$ of Eq. (5) are determined from the experimental data. After that, using the measured values $x_{meas}(t)$ and the calculated values $\dot{x}_{calc}(t)$ and $\ddot{x}_{calc}(t)$, the values $p_i$, $\alpha$, $\varepsilon_1$, and $\varepsilon_2$ are determined by the method of least squares. Here the $p_i$, $i = 1, \dots, N$, are the expansion coefficients of the function $P$ in a Fourier series. As an example, the following model parameter values were obtained in one of the experiments: $\varepsilon_1 = -0.3$; $r = 0.065$; $\varepsilon_2 = -3.37$; $\omega_0 = 6.152$; $\alpha = 36.15$. To verify the adequacy of the model, phase portraits of the original and model systems were constructed, shown in Fig. 4; they demonstrate good agreement. A characteristic vector composed of the model parameters $\alpha$, $\varepsilon_1$, and $\varepsilon_2$ serves as the diagnostic characteristic. The diagnostic capabilities of the proposed model were investigated on the example of identifying two functional states of a human operator: relaxed and stressed.
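The least-squares step can be sketched as follows. Once $x$, $\dot{x}$, and $\ddot{x}$ are available on a grid, Eq. (4) is linear in $(\varepsilon_1, \varepsilon_2, \alpha)$, so the parameters can be recovered by ordinary least squares. This sketch omits the forcing term $P(\omega_0 t)$ of Eq. (5) for brevity and produces its own "measured" trajectory by simulating Eq. (4) with the parameter values reported in the text; it is an illustration of the regression, not the authors' full procedure.

```python
import numpy as np

# Parameter values reported in the text; r0 and omega0 are taken as known.
eps1, eps2, alpha = -0.3, -3.37, 36.15
omega0, r0 = 6.152, 0.065

def rhs(x, v):
    """x'' from Eq. (4), solved for the second derivative."""
    return -(eps1 * (x**2 - r0**2) + eps2 * (v**2 - omega0**2 * r0**2)) * v - alpha * x

# Simulate the oscillator with RK4 to obtain a pulse-like time series.
dt, n = 1e-3, 5000
xs, vs = np.empty(n), np.empty(n)
x, v = 0.05, 0.0
for k in range(n):
    xs[k], vs[k] = x, v
    k1x, k1v = v, rhs(x, v)
    k2x, k2v = v + 0.5 * dt * k1v, rhs(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = v + 0.5 * dt * k2v, rhs(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = v + dt * k3v, rhs(x + dt * k3x, v + dt * k3v)
    x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6

# Regression: x'' = eps1*(-(x^2-r0^2)*x') + eps2*(-(x'^2-w0^2 r0^2)*x') + alpha*(-x)
acc = rhs(xs, vs)
A = np.column_stack([-(xs**2 - r0**2) * vs,
                     -(vs**2 - omega0**2 * r0**2) * vs,
                     -xs])
est, *_ = np.linalg.lstsq(A, acc, rcond=None)
assert np.allclose(est, [eps1, eps2, alpha], rtol=1e-4)
```

With real data, $\ddot{x}_{calc}$ would come from numerical differentiation of the preprocessed pulsogram rather than from the model's right-hand side, so the fit would be approximate rather than exact.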


Fig. 4. Phase portraits of the original system (on left) and model system (on right)

Figures 5 and 6 show the signals of pulse activity x and of the electrical activity of the heart e at rest (solid line) and under psychoemotional stress (dashed line). It can be seen that, in addition to an increase in frequency, the shape of the curves changes to a certain extent.

Fig. 5. Pulse signals at rest and at stressed state

Fig. 6. Signals of electrical activity of the heart at rest and at stressed state

It is noted that the load changes the values of the parameters $\alpha$, $\varepsilon_1$, and $\varepsilon_2$. In this experiment the following values were calculated for the rest state: $\varepsilon_1 = -0.3$; $\varepsilon_2 = -3.37$; $\alpha = 36.15$. In the stressed state they take the values $\varepsilon_1 = -1.07$; $\varepsilon_2 = -8.31$; $\alpha = 95.5$.
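As a minimal sketch of how the characteristic vector $(\alpha, \varepsilon_1, \varepsilon_2)$ could separate the two states, here is a nearest-centroid rule over the values reported above. The paper does not specify a particular classifier, so this is only one plausible choice.

```python
import math

# Characteristic vectors (alpha, eps1, eps2) reported above for the two states.
centroids = {
    "rest":   (36.15, -0.30, -3.37),
    "stress": (95.50, -1.07, -8.31),
}

def classify(vec, centroids):
    """Assign a (alpha, eps1, eps2) vector to the nearest state centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda state: dist(vec, centroids[state]))

# Hypothetical vectors from two new measurements.
assert classify((40.0, -0.4, -4.0), centroids) == "rest"
assert classify((90.0, -1.0, -8.0), centroids) == "stress"
```

Because $\alpha$ dominates the Euclidean distance here, a production version would normalize each parameter before comparison.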


The proposed identification method made it possible to distinguish between these two states. However, the conducted research showed that this model does not always correspond to the registered biosignal. Therefore, depending on the state of the vessels, patients were divided into several groups, each group was assigned a certain structure of the model equation, and in this way a databank of models was created. For example, for elderly patients the Van der Pol–Duffing model was used instead of Eq. (5). Thus the structure and parameters of the model can be used not only to identify the state of a particular patient but also to protect the patient's personal data: the transmitted data is effectively encrypted during the reconstruction process. Further research should use jointly recorded biosignals; this will not only improve the adequacy of the assessment of the patient's state but also make it possible to develop a more adequate mathematical model of the "physiological" signature of the person.

6 Conclusion

In this paper the features of mobile measurement systems are considered, and possible ways of protecting the transmitted data in systems for remote monitoring of a patient's state are analyzed. It is shown that creating a technology to protect registered data transmitted through an open communication channel from sensors to cloud storage (a medical database) remains an urgent problem and requires the development of new mathematical methods and models that provide encryption and decryption of messages using jointly recorded biosignals of the patient. An approach is proposed in which a reconstructed mathematical model of a biosignal generator is used to construct cryptographic keys. The data protection method based on this approach is demonstrated on the example of the "heart-vessels" biosystem. The morphological features forming the "physiological" signature include the structure of the model used to assess the patient's state and its physiologically significant parameters.

References

1. Idhate, S., Bilapatre, A., Rathod, A., Kalbande, H.: Dam monitoring system using wireless sensor networks. Int. Res. J. Eng. Technol. 4(4), 1767–1769 (2017)
2. Xu, M., Sun, M., Wang, G., Huang, S.: Intelligent remote wireless streetlight monitoring system based on GPRS. In: Xiao, T., Zhang, L., Fei, M. (eds.) Communications in Computer and Information Science, vol. 324, pp. 228–237. Springer, Heidelberg (2012)
3. Ibrahim, A., Muhammad, R., Alshitawi, M., Alharbi, A., Almarshoud, A.: Intelligent green house application based remote monitoring for precision agricultural strategies: a survey. J. Appl. Sci. 15(7), 947–952 (2015)
4. Suyatinov, S.I.: The use of active learning in biotechnical engineering education. In: Smirnova, E.V., Clark, R.P. (eds.) Handbook of Research on Engineering Education in a Global Context, pp. 233–242. IGI Global, Hershey (2019). https://doi.org/10.4018/978-1-5225-3395-5.ch021
5. Buldakov, N.S., Buldakova, T.I., Suyatinov, S.I.: Etalon-photometric method for estimation of tissues density at X-ray images. In: Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 9917, paper 99171Y (2016). https://doi.org/10.1117/12.2229539
6. Buldakova, T.I., Dzhalolov, A.: Analysis of data processes and choices of data-processing and security technologies in situation centers. Sci. Tech. Inf. Process. 39(2), 127–132 (2012). https://doi.org/10.3103/S0147688212020116
7. Buldakova, T.I., Mikov, D.A.: Comprehensive approach to information security risk management. In: CEUR Workshop Proceedings, vol. 2081, paper 05, pp. 21–26 (2017)
8. Appari, A., Johnson, M.E.: Information security and privacy in healthcare: current state of research. Int. J. Internet Enterp. Manag. 6(4), 279–314 (2010)
9. Lantsberg, A.V., Treusch, K., Buldakova, T.I.: Development of the electronic service system of a municipal clinic (based on the analysis of foreign web resources). Autom. Doc. Math. Linguist. 45(2), 74–80 (2011)
10. Bashi, N., Karunanithi, M., Fatehi, F., Ding, H., Walters, D.: Remote monitoring of patients with heart failure: an overview of systematic reviews. J. Med. Internet Res. 19(1), e18 (2017)
11. Nakamura, N., Koga, T., Iseki, H.: A meta-analysis of remote patient monitoring for chronic heart failure patients. J. Telemed. Telecare 20(1), 11–17 (2014). https://doi.org/10.1177/1357633X13517352
12. Schmidt, S., Schuchert, A., Krieg, T., Oeff, M.: Home telemonitoring in patients with chronic heart failure. Deutsches Ärzteblatt International 107(8), 131–138 (2010). https://doi.org/10.3238/arztebl.2010.0131
13. Prado, M., Roa, L., Reina-Tosina, J.: Virtual center for renal support: technological approach to patient physiological image. IEEE Trans. Biomed. Eng. 49(12), 1420–1430 (2002)
14. Lia, B.N., Fua, B.B., Dong, M.C.: Development of a mobile pulsewaveform analyzer for cardiovascular health. Comput. Biol. Med. 38(2), 438–445 (2008)
15. Mundt, C.W., Montgomery, K.N., Udoh, U.E., Barker, V.N.: A multiparameter wearable physiologic monitoring system for space and terrestrial applications. IEEE Trans. Inf. Technol. Biomed. 9(3), 382–391 (2005)
16. Paradiso, R., Loriga, G., Taccini, N.: A wearable health care system based on knitted integrated sensors. IEEE Trans. Inf. Technol. Biomed. 9(3), 337–344 (2005)
17. Winters, J., Wang, Y.: Wearable sensors and telerehabilitation. IEEE Eng. Med. Biol. Mag. 3, 56–65 (2003)
18. Venkatasubramanian, K.K., Banerjee, A., Gupta, S.K.S.: PSKA: usable and secure key agreement scheme for body area networks. IEEE Trans. Inf. Technol. Biomed. 14(1), 60–68 (2010)
19. Malhotra, K., Gardner, S., Patz, R.: Implementation of elliptic-curve cryptography on mobile healthcare devices. In: IEEE International Conference on Networking, Sensing and Control, pp. 239–244 (2007)
20. Liu, A., Ning, P.: TinyECC: a configurable library for elliptic curve cryptography in wireless sensor networks. In: Information Processing in Sensor Networks, pp. 245–256 (2008)
21. Buldakova, T.I., Suyatinov, S.I.: Reconstruction method for data protection in telemedicine systems. In: Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 9448, paper 94481U (2014). https://doi.org/10.1117/12.2180644
22. Cherukuri, S., Venkatasubramanian, K., Gupta, S.K.S.: BioSec: a biometric based approach for securing communication in wireless networks of biosensors implanted in the human body. In: Proceedings of the Workshop on Wireless Security and Privacy, pp. 432–439 (2003)
23. Poon, C.C.Y., Zhang, Y.-T., Bao, S.-D.: A novel biometrics method to secure wireless body area sensor networks for telemedicine and m-health. IEEE Commun. Mag. 44(4), 73–81 (2006)
24. Nabar, S., Banerjee, A., Gupta, S.K.S., Poovendran, R.: GeM-REM: generative model-driven resource efficient ECG monitoring in body sensor networks. In: International Conference on Body Sensor Networks (BSN), pp. 1–6 (2011)
25. Banerjee, A., Gupta, S.K.S., Venkatasubramanian, K.K.: PEES: physiology-based end-to-end security for mHealth. In: Proceedings of the 4th Conference on Wireless Health, article 2 (2013)
26. McSharry, P.E., Clifford, G.D., Tarassenko, L., Smith, L.A.: A dynamical model for generating synthetic electrocardiogram signals. IEEE Trans. Biomed. Eng. 50(3), 289–294 (2003)
27. Nabar, S., Banerjee, A., Gupta, S.K.S., Poovendran, R.: Resource-efficient and reliable long term wireless monitoring of the photoplethysmographic signal. In: Proceedings of the 2nd Conference on Wireless Health, article 9, pp. 1–9 (2011)
28. Bezruchko, B.P., Smirnov, D.A.: Extracting Knowledge from Time Series: An Introduction to Nonlinear Empirical Modeling. Springer Series in Synergetics. Springer, Heidelberg (2010)
29. Xu, P.C.: Differential phase space reconstructed for chaotic time series. Appl. Math. Model. 33(2), 999–1013 (2009). https://doi.org/10.1016/j.apm.2007.12.021
30. Basarab, M.A., Konnova, N.S., Basarab, D.A., Matsievskiy, D.D.: Digital signal processing of the Doppler blood flow meter using the methods of nonlinear dynamics. In: Progress in Electromagnetics Research Symposium, pp. 1715–1720 (2017). https://doi.org/10.1109/piers.2017.8262026

Principles of Managing the Process of Innovative Ideas Genesis

Tatiana V. Moiseeva

and Sergey V. Smirnov

Institute for the Control of Complex Systems of Russian Academy of Sciences, Samara, Russia [email protected], [email protected]

Abstract. Innovative development has universally valid social value and is very important today. The corresponding theoretical base is developing, but there is no clear understanding of how innovations are generated and what their genesis is. The description of the life cycle of the creation, development, and implementation of an innovation usually begins with the words "an innovation is given", but who gives it, and where and how the innovative idea is born, is not currently a subject of research and discussion. It therefore seems important, first of all, to understand where and how innovations are born, in order to design appropriate information technology for representing the meaning of a problem situation in the processes of collective decision-making, and then to find ways of managing this process. This article shows that the source of the innovative idea is the problem situation in which the actors find themselves. It is proposed to use the theory of intersubjective management to solve problem situations and find ways out of them. The development of a technological platform for implementing the provisions of the intersubjective management theory is based on the construction of formal ontologies. Ontological engineering makes it possible to build a communicative semantic model that integrates all the actors' views of the problem situation, which is necessary for creating their collective model of the innovative idea at the stage of the innovation's origin, preceding its implementation.

Keywords: Innovative idea · Problem situation · Intersubjective management · Innovation genesis · Ontological model · Formal ontology · Ontological data analysis · Formal concept analysis

1 Introduction

Modern innovations have universally valid social value, not limited to the technical and economic fields of activity but extended to the spheres of social norms, images, and ideals. The universal nature of innovation requires the creation of adequate models of innovation processes and an interdisciplinary, holistic understanding of innovative development as a factor of society's development. One of the key issues of society's evolution is the improvement of the mechanisms by which innovations are born and disseminated. Today much attention is paid to the study of these issues [1–3], but we do not yet have a clear understanding of how and where innovations are born and how to manage this process.

© Springer Nature Switzerland AG 2019. O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 15–25, 2019. https://doi.org/10.1007/978-3-030-12072-6_2


One known approach makes it possible to organize the genesis of innovative ideas competently and channel it into the company's development strategy; it includes three stages [4–6]. At the first stage, the problem areas of the enterprise's work are formulated; the second stage determines the range of problems, identified at the first stage, that lead to the company's failures. Solving these problems should be the goal of future innovations. Having determined the range of problems that require an innovative solution, the company concentrates on idea generation. The third stage is the evaluation of the proposed ideas by experts who, using a special matrix, analyze the ideas according to criteria corresponding to the previously formulated problems and to the company's capabilities. The ideas that gain the maximum number of points are recognized by the experts as optimal for implementation (see Fig. 1).

Fig. 1. Traditional approach (an innovative idea arises in the organization)
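The third, evaluative stage described above can be sketched as a simple weighted-sum scoring matrix. All idea names, criteria, and weights below are illustrative, not taken from [4–6].

```python
# Expert weights for criteria derived from the previously formulated problems.
criteria_weights = {"fixes_failure": 3, "feasibility": 2, "cost": 1}

# Expert scores (e.g. 1-5) for each proposed idea against each criterion.
idea_scores = {
    "idea A": {"fixes_failure": 4, "feasibility": 3, "cost": 2},
    "idea B": {"fixes_failure": 5, "feasibility": 4, "cost": 1},
    "idea C": {"fixes_failure": 2, "feasibility": 5, "cost": 5},
}

def total(scores: dict) -> int:
    """Weighted sum of an idea's scores over all criteria."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

# The idea with the maximum total is recognized as optimal for implementation.
best = max(idea_scores, key=lambda i: total(idea_scores[i]))
assert best == "idea B"  # 3*5 + 2*4 + 1*1 = 24, the highest total
```

Real evaluation matrices typically also normalize scores and aggregate over several experts; this sketch shows only the selection rule.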

We agree with the statement that the discussion of the problem should engage "the widest possible range of actors involved in the target process of the problem" [4]. Indeed, the wider the circle of actors, the more opportunity there is for a comprehensive view. This point of view is supported by the Japanese specialists Nonaka and Takeuchi, authors of the well-known book "The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation", who advocate redundancy of information [7]. Acquaintance with the approach proposed by Baumgartner in [4] immediately raises the question: why must innovations grow from within the company? Perhaps this process begins earlier, and the circle of problems should be outlined not by the company head but by other interested persons, perhaps by potential consumers who have not found the means to satisfy some of their needs. It should be noted that the process of the origin of the innovative idea is practically not covered in the literature, although questions related to the economic aspects of developing an innovative product are fully and carefully worked out in some works.


While agreeing with the importance and necessity of managing the origin of innovations, we present new principles (in comparison with [4]) for managing the process of an innovative idea's birth, based on the theory of intersubjective management [8].

2 Situation Understanding by Actors as the First Step to the Innovative Idea

After the actors have realized that they are involved in the situation and have formed an intersubjective community, comprehension of the problem situation comes, targeted by the awareness related to finding a way out of it. The innovative idea is often identified with a scientific idea; the difference between them is the following. A scientific idea is the result of cognition, while the innovative one is the result of awareness, which can be viewed as comprehension of the situation's meaning; in this sense "awareness" and "comprehension" are synonymous. "Most likely the meaning is something that we project into the things around us, which are themselves neutral" [9], and the awareness associated with finding a way to solve the problem is focused on understanding the situation. The first thing that actors must do when faced with the uncertainty of a life situation is to understand the meaning of this situation, which can be interpreted in different ways. From the pragmatic point of view (the position of the actor as the subject of the activity), the meaning becomes a value and a characteristic of the utility of the object for this person, applied to the particular situation, which is estimated as uncertain because of the presence of a multitude of competing opportunities. It is the search for meaning, as an inborn motivational tendency (according to Frankl [10]), that is the main driver of the behavior and development of the individual. Perception of meaning is "awareness of possibility against the background of reality, or, more simply, awareness of what can be done in relation to this situation" [10].
Since "meaning is always a realistic way, appropriate to the circumstances… The meaning in existential understanding is a function of two variables: each time the conditions (the possibilities of quite definite actual circumstances) change, and the qualities, abilities, and talents of a person in these circumstances also change… Something that is significant for us has the meaning…" [10], it is obvious that each actor will put his own meaning into any problem situation. Realizing that one person is not able to constitute the whole variety of meanings of the situation, actors perceive from others what they lack in their own experience. Having found themselves bound by a common problem situation, autonomous heterogeneous actors, differing in subjective features, intellectual abilities, and notions of values, realize it in different ways, while recognizing the need for coordinated actions to manage the situation. Without waiting for anyone's orders, the actors (becoming "social theorists" [11]) act by organizing themselves and forming intersubjective communities to resolve the problem situation by joint efforts. People's meaning-generating activity influences the entire intersubjective community in which a person exists, and as a result a common semantic space is formed. The actors searching for the innovative idea jointly work out intersubjective


knowledge of the current situation, on the basis of which they themselves, without external organizing influences, decide what tasks must be done to resolve it [12].

3 Intersubjective Management of Innovative Ideas Genesis

Joint understanding of the problem situation by different actors leads to the birth of an innovative idea. The innovative idea is the result of the actors' understanding of the problem situation; it formulates the idea of some innovation that helps the situation to be settled and that creates additional value for the actors (see Fig. 2).

Fig. 2. New approach to innovative idea genesis based on the theory of intersubjective management.

We can say that the innovative idea is, in a broad sense, a phenomenon of culture, which means that it can appear not only in the economic sphere but also in the social or political ones. The phenomenon of innovation has long been used actively in the economic field, but it has recently broken the traditional boundaries of economic theory, gained a general social character, and become closely related to the development of other spheres of social life, social relations, traditions, culture, and creativity. Turning to the history of the matter, we find, as noted in [13], that until the beginning of the twentieth century "innovations encompassed the most value-neutral spheres which lay on the periphery of ideological, political and socio-cultural management, and the idea of technology was associated exclusively with production activities, which gave opportunity to analysts (D. Bell, A. Touraine) to speak of a bourgeois society of the last century as a dualistic culture, which is innovative in the production sector and traditional in the non-production sector." This dualism is overcome in our era, when innovations, penetrating all spheres of life, become possible not only in the production sphere but also in the social, political, organizational, and managerial ones. Taking into account the evolution of the status of innovation actors, from "borderline individuals and groups, representatives of foreign diasporas engaged in trade and


management… distinguished by a detached attitude to local norms and traditions" in the 15th–18th centuries, to innovation practitioners, who became not only the dominant socio-cultural attitude but also a special profession in the 20th century, we note that today "innovation activities related to rational organization by the subject-object principle should be corrected by the self-organization principle" [13]; relying on the theory of intersubjective management, innovation must therefore be the prerogative of heterogeneous actors. Since the emergence of an innovative idea is the result of a joint search by actors for a way out of a problem situation that they understand as a matter of their common concern, requiring joint decision-making for its resolution, an intersubjective approach is proposed to manage this process [8]. The main figures in the theory of intersubjective management are actors who, unlike managers, use non-violent methods of management based not on coercion but on achieving mutual understanding and consensus with other actors. Dissatisfied with the existing situation, heterogeneous actors, initially not linked to each other, can find themselves in similar problem situations. However, each autonomous actor will be "concerned" (in Heidegger's sense) in his own way, being at the center of an "objective world" that includes everything related to this concern. Each actor creates his own description of the objective world, i.e. a subjective (personal) ontology, understood as "the description and organization of a set of things that exist and which determines how these things are interrelated" [13], and, as a result, his own understanding of the meaning of the situation.
Usually many actors are involved in one problem situation, and they form an intersubjective self-organized community (self-organization refers to the spontaneous, unplanned emergence of order from random (chaotic) local interactions without external managing influences [14]). Although the actors each see their own meaning of the problem situation, they can realize it as a matter of common concern requiring joint decision-making on its settlement. The lack of resources (time, financing, materials, etc.) forces a person to seek like-minded people in order to work out a concerted solution for managing the situation that takes their common interests into account. It should be specially emphasized that, unlike a "system" with its fixed, rigid connections between elements, in such a community of heterogeneous actors the relations are more "blurred". Members of the community do not think about who is the boss or who is subordinate; they are bound only by the common goal they want to achieve in order to find a way out of the problem situation, in contrast to the participants in the organizational structures described in [3, 14], which represent systems with clear "chief – subordinate" connections. Despite their different views of the problem situation, community members understand that the way out can be found only together, which means they must negotiate with each other. Heterogeneous actors acquire, accumulate, and apply subjective, intersubjective, and objective knowledge in the process of collegial decision-making to resolve a problem situation. Trying to find a solution, communicatively rational actors discuss the situation. During the multilateral dialogue of heterogeneous actors, an ontological model of the situation is developed, which is a coordinated description of the situation in the form of


concepts and relationships. Since each actor is not only personal, i.e. has individuality, but is at the same time social, the means of finding a solution that suits all members of the community is seen in discourse. The argumentative discourse underlying the interaction of actors is, according to Habermas, "a special ideal type of communication… aimed at critical discussion and substantiation of the views and actions of communication participants", a peculiar criterion for determining whether the reached agreement is true or false [15]. In the narrow sense, discourse means "the language that we interpret according to the specific context of use, connecting it with the speaker" [16]. Since the term "discourse" denotes the discussion of certain problems, it is suggested to apply the logic of discourse in the intersubjective management of the birth of the innovative idea. Interacting actors perform communicative actions aimed at achieving mutual understanding between acting individuals and resolving the contradiction between the private (mine) and the general (ours). People who live in society and find themselves in a critical problem situation must learn to negotiate in order to reach agreement. Thus, unlike the manager, for whom "vertical" coercion is the main management tool, the actor uses "horizontal" interaction to reach agreement, relying on the solidarity of the actors. To reach an agreement, members of the organized community must conduct complex multilateral negotiations, using mutual persuasion to find consensus. As Haken noted in "Self-Organizing Society", "what we will do in the future will be determined not so much by the high level of technological development as by sociological constructs, in particular by the finding of consensus in the social plan" [17].
Consensus, found as a result of the discussion of the problem by all the heterogeneous actors, leads to the birth of an innovative idea, when the whole community decides what to do to resolve the problem situation.

4 Formalized Information Technology for Representing the Meaning of a Problem Situation in the Processes of Collective Decision-Making

The foregoing indicates a special need for a technological platform implementing the provisions of the emerging theory of intersubjective management in systems whose main feature is the communication of people (actors in problem situations which require settlement) aimed at achieving mutual understanding. It has already been noted that it is advisable to use ontological models of situations, i.e. formal ontologies [18, 19], as semantic models. It is obvious that these models can be derived from the cognitive structure of perception of the problem situation by each individual actor, hence the formal representation of this structure becomes the initial task. The solution of this problem rests on the statement that two cognitive abilities of human consciousness play a fundamental role in the actor’s understanding of the sense of a problem situation: distinguishing different (discrete) objects in similar situations and finding connections between these objects. The links determine the relationships of objects in a problem situation: unary ones are interpreted as

Principles of Managing the Process of Innovative Ideas Genesis

21

properties aggregated by objects, while relations of higher arity describe types of object associations (for the semantic modeling of any associations it suffices to restrict attention to binary relations). The subjectivity of the actor’s perception of a problem situation is manifested not only in his peculiar vision of the object composition of the problem situation. An even more important aspect of this subjectivism is that each actor “tries on” the problem situation according to his own system of values and goals, an elementary model of which is a set of indicators and criteria. This set is a kind of prism through which the actor “sees” the situation, interacting in some way with the detected objects (exposing these objects to certain test procedures and measuring them), and therefore this set should be interpreted as the set of properties of the objects of the problem situation which the actor is at all able to take into consideration. Thus, the simplest formal representation of the cognitive structure of the actor’s perception of the problem situation is the well-known and generally accepted relational form of empirical data representation, the “object-properties” table (OPT), or multivalued formal context (FC) [20]:

$(G^*, M, V, I),$   (1)

where:
$G^* = \{g_i\}_{i=1,\dots,r}$, $r = |G^*| \ge 1$, is the set of objects relevant to the subject of management (i.e. taken into account by him): $G^* \subseteq G$, where $G$ is the whole hypothetically conceivable set of objects of the problem situation;
$M = \{m_j\}_{j=1,\dots,s}$, $s = |M| \ge 1$, is the set of measurable object properties;
$V$ is the set of property values;
$I$ is the ternary relation between $G^*$, $M$ and $V$ ($I \subseteq G^* \times M \times V$), defined for all pairs from $G^* \times M$.

In order to identify the actor’s meaning of the problem situation (in the form of a formal ontology), it is advisable to apply the method of ontological data analysis, which is based on formal concept analysis [21, 22]. Ontological data analysis rests on the assumption that any measurement of an object’s property can give the special result “None”, which indicates either a semantic discrepancy between the investigated object and the measurement procedure applied to it, or that the value of the measured property lies beyond the threshold of sensitivity and outside the dynamic range of the measurement instrument used. Such results can also be inspired by the subject himself, who in the process of perceiving reality usually applies a fundamental mental cognitive procedure called conceptual scaling [23, 24]. Conceptual scaling of a property is the subjective design of a “coverage” of the dynamic range (domain of delivered values) of the corresponding measurement procedure, with the creation of new measurable properties of the problem situation objects. Such a coverage is called a conceptual scale. After scaling, the newly introduced properties are actually measured in the binary scale of names {X, None}, where X is a linguistic constant which collectively denotes any symbol of the scale of the dynamic range of the measuring procedure.
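As an illustration only (the class and method names below are ours, not part of any published implementation), the multivalued formal context (1) with a default “None” measurement result can be sketched as a simple data structure:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the multivalued formal context (G*, M, V, I):
// objects G*, measurable properties M, and a value for every (object, property) pair.
public class FormalContext {
    private final List<String> objects = new ArrayList<>();     // G*
    private final List<String> properties = new ArrayList<>();  // M
    private final Map<String, Map<String, String>> values = new HashMap<>(); // relation I

    public void addObject(String g) { objects.add(g); values.put(g, new HashMap<>()); }
    public void addProperty(String m) { properties.add(m); }

    // I is defined for all pairs from G* x M; unset pairs default to "None".
    public void set(String g, String m, String v) { values.get(g).put(m, v); }
    public String get(String g, String m) { return values.get(g).getOrDefault(m, "None"); }

    public static void main(String[] args) {
        FormalContext fc = new FormalContext();
        fc.addObject("g1");
        fc.addProperty("m1");
        fc.addProperty("m2");
        fc.set("g1", "m1", "X");
        System.out.println(fc.get("g1", "m1")); // X
        System.out.println(fc.get("g1", "m2")); // None
    }
}
```

The defaulting of unmeasured pairs to “None” mirrors the conceptual-scaling convention described above, where X stands for any symbol within the dynamic range of the measuring procedure.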


The constants X and None abstract and specify the composition of the set V in (1). Two more linguistic constants, describing the realities of the accumulation of empirical information by the subject, complete its generalized representation:
• Failure captures the subject’s doubts about a definite – X or None – result of measurement. Such doubts can arise if the involved expert refrains from expressing a definite opinion, due to a failure of the measuring tool, because of somebody’s decision to abstain in a vote, etc. Thus, Failure describes a “result” of the measurement procedure, often observed in practice, which can be collectively qualified as “refusal to perform the measurement”;
• NM (not measured) indicates that the property was not measured in reality (the introduction of this formal result is, among other things, very useful for structuring data about the problem situation).
Finally, the format of the object-attribute model is extended to represent the realities of the initial information about the problem situation in the ontological analysis of data. Thus, in order to take into account the iterative interaction of the actor with the problem situation, the results of multiple independent estimations of the properties of objects are fixed; the use by the actor of several different evaluation procedures to clarify the same characteristics of the situation is reflected; the actor’s different levels of confidence in various information sources are accentuated, etc. Having summarized these additional circumstances and the realities of the subject’s accumulation of factual information about the problem situation, it is proposed to replace the elementary model of the subject’s cognitive perceptual structure (1) by the generalized OPT, which is described by the tuple:

$(G^*, M, Se, Pr, A),$   (2)

where:
$Se = \bigcup_{i=1}^{r} Se(i)$ is the set of all measurement series made during the investigation of the domain of interest, $|Se| = \sum_{i=1}^{r} |Se(i)| = m$, and $Se(i) = \{se(i)_k\}_{k=1,\dots,q(i)}$, $q(i) \ge 1$, $i = 1,\dots,r$, is the set of measurement series to which $g_i \in G^*$ is subjected, while each series $se(i)_k$ is characterized by a degree of confidence in its results $st(i)_k$;
$Pr = \bigcup_{j=1}^{s} Pr(j)$ is the arsenal of all measurement procedures used in the study of the subject area, $|Pr| = \sum_{j=1}^{s} |Pr(j)| = n$, and $Pr(j) = \{pr(j)_k\}_{k=1,\dots,p(j)}$, $p(j) \ge 1$, $j = 1,\dots,s$, is the set of congruent procedures for measuring the property $m_j \in M$, while each procedure $pr(j)_k$ is characterized by a degree of confidence in its results $pt(j)_k$;
$A = [a_{ij}]_{i=1,\dots,m;\, j=1,\dots,n}$ is the matrix of the results of the series of measurements $Se$ of the properties $M$ of the objects from the sample $G^*$, performed using the measurement procedures $Pr$, with $a_{ij} \in \{X, \text{None}, \text{Failure}, \text{NM}\}$.

The incompleteness and inconsistency of the available data are the typical conditions in which the actor has to construct a semantic picture of the problem situation. The modeling of these conditions in the application of ontological analysis is connected, first of all, with the choice of an adequate method of evaluating the truth of the basic semantic judgments about the problem situation $b_{ij}$ = “the object x has the property y” extracted from the object-attribute model during the analysis. In order to model the


“human approach” to such estimates, ontological analysis relies on the conceptual and analytical apparatus of multivalued logics. Specifically, we use vector VTF-logic, which generalizes ordinary fuzzy logic in the simplest way [25]:

$\|b_{ij}\| = \langle b_{ij}^{+}, b_{ij}^{-} \rangle, \quad b_{ij}^{+}, b_{ij}^{-} \in [0, 1],$   (3)

where the component, or aspect of truth, $b_{ij}^{+}$ ($b_{ij}^{+}$ – True) is formed by evidence confirming the basic semantic judgment (expressing the personal experience and knowledge of the subject, coming to him from experts, found by him in the literature, acquired by him in specially designed experiments, etc.), and the component (aspect) $b_{ij}^{-}$ ($b_{ij}^{-}$ – False) is formed by evidence denying the basic semantic judgment. The expectation that such fuzzy evaluations of the truth of the basic semantic judgment are more adequate to the actual process of understanding the problem situation can be explained by the fact that the traditional constants True and False, by which the subject intuitively seeks to evaluate the truth of the basic semantic judgment, are often determined by independent sets of evidence, in such a way that False cannot be produced from the absence (deficiency) of True, and True cannot be produced from the absence (shortage) of False.

Thus, using the outlined approach and appropriate methods and means, each actor is able to formalize his own understanding of the problem situation, based on his personal system of values, in an ontological model. This result is in itself very significant to the actor, allowing him to “understand himself,” to see the limits of his perception of reality, to publish his position in a generally accepted language and, if necessary, to correct it a priori (before cooperating with other actors in the problem situation) in accordance with the features of the situation. Finally, we must be able to formally unite the subjective semantic models in the process of collective decision-making, for the sake of mutual understanding and interaction of the actors in the problem situation. Ontological engineering offers appropriate methods and tools [26, 27], which eventually allow us to construct a communicative semantic model that integrates the views of all actors on the problem situation [28].
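For illustration, a VTF truth value can be held as an independent pair of aspects accumulated from confirming and denying evidence separately; the noisy-OR accumulation rule below is our own simplification, not the evaluation method of [25]:

```java
// Sketch of a VTF-logic truth value: independent True and False aspects in [0, 1].
// The noisy-OR accumulation rule is an illustrative assumption of ours.
public class VtfTruth {
    private double plus = 0.0;  // b+ : formed by confirming evidence
    private double minus = 0.0; // b- : formed by denying evidence

    // Each piece of evidence raises only its own aspect, so a shortage of
    // confirming evidence does not by itself produce falsity, and vice versa.
    public void confirm(double weight) { plus = 1 - (1 - plus) * (1 - weight); }
    public void deny(double weight)   { minus = 1 - (1 - minus) * (1 - weight); }

    public double truePart()  { return plus; }
    public double falsePart() { return minus; }

    public static void main(String[] args) {
        VtfTruth b = new VtfTruth();
        b.confirm(0.5);
        b.confirm(0.5); // two independent confirmations: b+ = 0.75
        b.deny(0.2);    // one weak denial:               b- = 0.20
        System.out.printf("<%.2f, %.2f>%n", b.truePart(), b.falsePart());
    }
}
```

Note that the two aspects are not forced to sum to one, which is exactly the property the text emphasizes: False is not derived from a deficit of True.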

Table 1. Differences between the traditional approach to innovative ideas genesis and the intersubjective one.

Characteristic           | Traditional approach                       | Approach based on the intersubjective management theory
The origin of innovation | Innovation is born inside the organization | Innovation is born by potential consumers
Main character           | Decision maker                             | Actor
Processes                | Cognition                                  | Awareness
Place                    | Outside the problem situation              | Inside the problem situation
Structure                | System                                     | Intersubjective community
Elements links           | Fixed                                      | Fuzzy
Management tool          | “Vertical” enforcement                     | “Horizontal” interaction
The way of management    | Based on coercion                          | Based on mutual understanding and consensus


5 Conclusion

The proposed approach differs radically from current management practice in many respects. Table 1 summarizes the main differences as the authors see them. The main dissimilarity lies in the answer to the question: “Who must decide what to do?” It has turned out to be very difficult for ordinary people to understand and accept that they could make decisions themselves. Our task as scientists is to help them work out schemes of collaborative work, beginning with the management of the birth of innovative ideas, and to equip actors with information technologies. Understanding where and how innovations are born gives us the opportunity to design an appropriate information technology for representing the meaning of a problem situation in the processes of collective decision-making, based on the construction of formal ontologies. Such an approach makes it possible to build a communicative semantic model that integrates all the actors’ views on the problem situation, which is necessary for creating their collective model of the innovative idea at the stage of the innovation’s origin, preceding its implementation. The authors hope that this outline of the problems associated with managing the birth, development and implementation of innovative ideas will provoke a fruitful discussion and define new horizons and approaches in the vital sphere of management concerning innovative ideas genesis.

References
1. Hwang, V.W., Horowitt, G.: The Rainforest: The Secret to Building the Next Silicon Valley. Regenwald, California (2012)
2. Sengupta, J.: Innovation models. In: Theory of Innovation. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-02183-6_2
3. Meissner, D., Polt, W., Vonortas, N.S.: Towards a broad understanding of innovation and its importance for innovation policy. J. Technol. Transf. 42, 1184–1211 (2017). https://doi.org/10.1007/s10961-016-9485-4
4. Baumgartner, J.: The Way of the Innovation Master. JPB, Erps-Kwerps (2010)
5. Rudenko, G.: Innovations: success formula. Effective Crisis Manag. 3 (2014). http://info.e-cm.ru/magazine/84/eau_84_289.htm. Accessed 10 Mar 2018. (in Russian)
6. Kotsemir, M., Meissner, D.: Conceptualizing the innovation process – trends and outlook. J. Innov. Entrepreneurship 5(14), 1–18 (2016). https://doi.org/10.1186/s13731-016-0042-z
7. Nonaka, I., Takeuchi, H.: The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press, Oxford (1995)
8. Vittikh, V.A.: Introduction to the theory of intersubjective management. Group Decis. Negot. 24(1), 67–95 (2015). https://doi.org/10.1007/s10726-014-9380-z
9. Längle, A.: Eine praktische Anleitung der Logotherapie. Überarbeitung und Neugestaltung als Werkbuch. Dorothee Bürgi, Residenz, St. Pölten (2007). (in German)
10. Frankl, V.E.: Man’s Search for Meaning: An Introduction to Logotherapy. Perseus Book Publishing, New York (2000)
11. Giddens, A.: The Constitution of Society: Elements of the Theory of Structuration. Polity Press, Cambridge (1984)


12. Moiseeva, T., Polyaeva, N.: Problem situation modeling in the intersubjective management theory. Herald Dagestan State Tech. Univ. Tech. Sci. 45(1), 160–171 (2018). https://doi.org/10.21822/2073-6185-2018-45-1-160-171. (in Russian)
13. Styopin, V.S. (ed.): New Philosophical Encyclopedia. Mysl’, Moscow (2010)
14. Küppers, G.: Self-organization – the emergence of order. From local interactions to global structures. http://www.uni-bielefeld.de/iwt/sein/paper2. Accessed 20 May 2015
15. Habermas, J.: Moral Consciousness and Communicative Action. The MIT Press, Cambridge (1983)
16. Kuznecov, V.G. (ed.): Dictionary of Philosophical Terms. INFRA-M, Moscow (2013). (in Russian)
17. Haken, H.: Erfolgsgeheimnisse der Natur. Synergetik: Die Lehre vom Zusammenwirken. Deutsche Verlags-Anstalt GmbH (1995). (in German)
18. Guarino, N.: Formal ontology, conceptual analysis and knowledge representation. Int. J. Hum.-Comput. Stud. 43(5/6), 625–640 (1995)
19. Smirnov, S.V.: Ontologies as semantic models. Ontol. Designing 2, 12–19 (2013). (in Russian)
20. Barsegyan, A.A., Holod, I.I., Tess, M.D., Elizarov, S.I.: Data and Process Analysis, 3rd edn. BHV-Peterburg, Saint Petersburg (2009). (in Russian)
21. Smirnov, S.V.: Ontological analysis of domains modeling. News Samara Sci. Centre RAS 3(1), 62–70 (2001). (in Russian)
22. Ganter, B., Wille, R.: Formal Concept Analysis: Mathematical Foundations. Springer, Berlin (1999)
23. Ganter, B., Wille, R.: Conceptual scaling. In: Roberts, F. (ed.) Applications of Combinatorics and Graph Theory to the Biological and Social Sciences, pp. 139–167. Springer, New York (1989). https://doi.org/10.1007/978-1-4684-6381-1_6
24. Wolff, K.E.: Concepts in fuzzy scaling theory: order and granularity. Fuzzy Sets Syst. 132, 63–75 (2002)
25. Arshinskii, V.: Substantial and formal deductions in logics with vector semantics. Autom. Remote Control 68(1), 139–148 (2007)
26. Vinogradov, I.D., Smirnov, S.V.: Algorithm for conceptual schemes combining based on the reconstruction of their formal context. In: Fedosov, E.A., Kuznetcov, N.A., Vittikh, V.A. (eds.) III International Conference Complex Systems: Control and Modeling Problems, pp. 213–220. SamNC RAN, Samara (2001). (in Russian)
27. Stumme, G., Maedche, A.: FCA-Merge: bottom-up merging of ontologies. In: Proceedings of the 17th International Joint Conference on Artificial Intelligence, IJCAI 2001, Seattle, WA, USA, pp. 225–230 (2001)
28. Smirnov, S.V.: Formal approach to presenting the meaning of the problem situation in collective decision-making processes. In: XII All-Russian Management Problems Conference, pp. 6261–6270. IPU RAN, Moscow (2014). (in Russian)

Software Package for Modeling the Process of Fire Spread and People Evacuation in Premises

Andrey Samartsev1, Alexander Rezchikov1, Vadim Kushnikov1, Vladimir Ivaschenko1, Leonid Filimonyuk1, Dmitry Fominykh1, and Olga Dolinina2

1 Institute of Precision Mechanics and Control of RAS, Rabochaya st. 24, 410028 Saratov, Russia
[email protected], [email protected]
2 Yuri Gagarin State Technical University, Politechnicheskaya st. 77, 410054 Saratov, Russia
[email protected]

Abstract. The article proposes a software package containing an effective implementation of models of the dynamics of dangerous fire factors spread and of people evacuation in premises. The dangerous fire factors spread model included in the package consists of fire and heat spread models: the first is based on a two-dimensional four-state probabilistic cellular automaton, the second on simple graph theory apparatus and the basics of thermodynamics. The people evacuation model implemented in the package is based on a multi-agent systems approach, simple graph theory and the basics of mechanics, including kinematics, conservation laws and collision theory. The implementation of the package in the Java programming language is described and the advantages of the chosen language are listed. The software package’s distinctive features and functional requirements are listed and explained, its inner module structure is described and illustrated, and the types of user interaction are shown. The package allows simulating and analyzing critical situations that arise during a fire, calculating the dynamics of dangerous fire factors development and simulating people evacuation from the premises. The implemented models and the algorithm for their interconnection provide high model performance and accuracy [1].

Keywords: Software package · Fire simulation in premises · People evacuation from premises · Java programming language

1 Introduction

Despite the successes achieved in the fight against fires in the Russian Federation, their number remains significant, and the greatest number of them occurs in buildings [2]. Currently, there are many articles devoted to the construction of specialized software products that allow modeling the dynamics of the spread of fire hazards (fire, heat,

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 26–36, 2019. https://doi.org/10.1007/978-3-030-12072-6_3

Software Package for Modeling

27

smoke, etc.) in the premises [2–6] and the evacuation of people from them [2, 7–10]. As a rule, the processes of fire spread and evacuation are considered independently. To increase the adequacy of modeling results, a joint consideration of the evacuation and fire development processes is necessary. Currently, there are only a few examples of such software products in Russia and abroad [9–12]. They usually use a field model for fire modeling and individual-flow models for evacuation; both models have limited functionality and lack an effective interaction algorithm. This article proposes a software package containing an effective algorithm for the interaction of fire and evacuation models, built on a simple mathematical apparatus: cellular automata, a multi-agent systems approach, simple graph theory, and the basics of mechanics and thermodynamics [1]. This software package allows the joint simulation of the fire spreading process in premises and of people evacuation with acceptable accuracy and quite high computational performance.

2 Brief Models Description

2.1 Fire, Heat and Smoke Spread Model

The spread of fire is modeled by a two-dimensional probabilistic cellular automaton A = {c_ij} in which every cell has one of four states:
• NC is the state of cells that correspond to non-combustible materials;
• I is the state of cells that correspond to combustible materials in which combustion has not yet been observed;
• B is the state of cells that correspond to combustible materials in which the fire is observed at a given time;
• F is the state of cells that correspond to combustible materials in which all combustible material has been expended.
At the initial moment of model time, several neighboring cells corresponding to the place of origin of the fire are transferred to state B. The fire then begins to spread to neighboring cells, which are transferred from state I to state B with a certain probability $P_{ij}^t$, determined by the number and mutual arrangement of neighboring burning cells in state B and by the speed of fire spread for the given material. Some time after the transition to state B, the reserves of combustible material inside a cell run out and the cell transitions to state F; the probability of such a transition at time t is defined as $H_{ij}^t$. A cell emits heat and smoke during combustion. The amount of heat released can be estimated using the specific heat of combustion of the combustible load and the heat capacity of air and floors. It is more difficult to simulate the process of heat spread, which is caused by the phenomena of heat conduction, convection and radiation. For this, the model adopts a method whose central place is occupied by a graph G, constructed in a special way, which controls the process of heat distribution. Each cell c_ij into which the room is divided corresponds to one vertex of the graph G. If two vertices of the graph G are connected by an edge, then their corresponding cells exchange heat. Thus, the modeling of the heat distribution process can be

28

A. Samartsev et al.

corrected, among other things, by the choice of the graph G. A similar method is used to simulate the smoke spread process, which is controlled by another graph. As a result, in the model each cell exchanges heat with neighboring cells and with some number of remote cells. This approach allows us to accelerate the modeling of heat distribution in comparison with classical field models, in which heat exchange occurs only between neighboring cells. The cost of this approach is a slight decrease in modeling accuracy, which, as a rule, is not critical. In addition, the process of fire and heat spreading is very complicated, unstable with respect to external conditions and can develop quite unpredictably, so absolute accuracy cannot be achieved in the modeling of this process [1].
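A minimal sketch of the four-state automaton described above might look as follows. The per-neighbor ignition probability and the fixed burn time are simplifying assumptions of ours; in the actual model $P_{ij}^t$ and $H_{ij}^t$ depend on material properties and the arrangement of burning neighbors:

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of the four-state probabilistic cellular automaton for fire spread.
public class FireAutomaton {
    enum State { NC, I, B, F } // non-combustible, intact, burning, burnt out

    final State[][] grid;
    final int[][] age;                 // steps a cell has spent in state B
    final int burnTime;                // illustrative stand-in for H^t_ij
    final double ignitionPerNeighbor;  // illustrative stand-in for P^t_ij
    final Random rnd = new Random(42);

    FireAutomaton(int n, int burnTime, double p) {
        grid = new State[n][n];
        age = new int[n][n];
        this.burnTime = burnTime;
        this.ignitionPerNeighbor = p;
        for (State[] row : grid) Arrays.fill(row, State.I);
    }

    void step() {
        State[][] next = new State[grid.length][grid.length];
        for (int i = 0; i < grid.length; i++)
            for (int j = 0; j < grid.length; j++) {
                next[i][j] = grid[i][j];
                if (grid[i][j] == State.B) {
                    if (++age[i][j] >= burnTime) next[i][j] = State.F; // fuel expended
                } else if (grid[i][j] == State.I) {
                    // each burning neighbor independently tries to ignite the cell
                    for (int[] d : new int[][]{{1, 0}, {-1, 0}, {0, 1}, {0, -1}})
                        if (at(i + d[0], j + d[1]) == State.B
                                && rnd.nextDouble() < ignitionPerNeighbor)
                            next[i][j] = State.B;
                }
            }
        for (int i = 0; i < grid.length; i++) grid[i] = next[i];
    }

    State at(int i, int j) {
        return (i < 0 || j < 0 || i >= grid.length || j >= grid.length)
                ? State.NC : grid[i][j];
    }

    public static void main(String[] args) {
        FireAutomaton fa = new FireAutomaton(20, 5, 0.3);
        fa.grid[10][10] = State.B; // ignition point
        for (int t = 0; t < 30; t++) fa.step();
        int burning = 0;
        for (State[] row : fa.grid) for (State s : row) if (s == State.B) burning++;
        System.out.println("burning cells after 30 steps: " + burning);
    }
}
```

Out-of-bounds cells are treated as NC, which also illustrates how walls block fire spread in such a grid.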

2.2 Evacuation Model

In the model, evacuated people are represented as agents. Each agent has its own set of parameters that do not change with time: maximum speed, maximum acceleration, radius of its projection on the floor, and mass. In addition, at a given point in time an agent is characterized by its coordinates in space, an instantaneous velocity vector, an instantaneous acceleration, and an optimal velocity vector. The optimal velocity is the velocity the agent would like to have in the current situation, based on its position, environment and the location of the exits. The direction of acceleration is always determined by the difference between the desired and the current velocity. The velocity of the agent, in turn, is determined by its acceleration, and its coordinates by its velocity. This approach makes it possible to take into account the inertia of agents, which is explained both by the physical properties of people and by some natural delay in decision making. When an agent collides with other agents or with walls, its velocity is recalculated according to the laws of conservation of energy and momentum; more precisely, a model of partially elastic collision is used. The direction of the optimal velocity is largely determined by the direction to the nearest exit from the room. To determine this direction, the room is divided into cells and a graph G2 is constructed, whose vertices correspond to the cells and whose edges connect neighboring cells not occupied by walls. The length of an edge is equal to the distance between the corresponding cells. By running a shortest-path algorithm from the vertices corresponding to the exits, the distance from any point inside the room to the nearest exit can be estimated. The direction to the nearest exit coincides with the direction in which the distance to the exit decreases fastest.
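With unit edge lengths, the distance-to-nearest-exit field over such a cell graph reduces to a multi-source breadth-first search seeded at the exit cells. This is a standard sketch, not the package's actual code:

```java
import java.util.ArrayDeque;
import java.util.Arrays;

// Sketch: multi-source BFS over the room grid, seeded at exit cells.
// dist[i][j] approximates the distance to the nearest exit; walls stay at -1.
public class ExitField {
    static int[][] distances(boolean[][] wall, int[][] exits) {
        int n = wall.length, m = wall[0].length;
        int[][] dist = new int[n][m];
        for (int[] row : dist) Arrays.fill(row, -1);
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        for (int[] e : exits) { dist[e[0]][e[1]] = 0; queue.add(e); }
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] c = queue.poll();
            for (int[] d : dirs) {
                int i = c[0] + d[0], j = c[1] + d[1];
                if (i >= 0 && j >= 0 && i < n && j < m && !wall[i][j] && dist[i][j] < 0) {
                    dist[i][j] = dist[c[0]][c[1]] + 1;
                    queue.add(new int[]{i, j});
                }
            }
        }
        return dist; // an agent heads toward the neighbor with the smallest distance
    }

    public static void main(String[] args) {
        boolean[][] wall = new boolean[5][5];
        wall[2][1] = wall[2][2] = wall[2][3] = true;         // a wall segment
        int[][] dist = distances(wall, new int[][]{{0, 0}}); // exit at a corner
        System.out.println(dist[4][4]); // 8
    }
}
```

For non-unit edge lengths (e.g. diagonal moves) the same field would be computed with Dijkstra's algorithm instead of plain BFS.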
The location of neighboring agents also influences the agent’s choice of the optimal velocity vector; the agent can choose paths for overtaking other agents on its way. The proposed method of choosing the optimal velocity allows us to generalize the model to the case of premises with maps of complex shape.
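One integration step of the agent kinematics described above might look like the following sketch. Using the raw velocity difference as the acceleration demand (capped by the maximum acceleration) is our simplification of the rule that acceleration is directed along the difference between the desired and current velocity:

```java
// Sketch of one time step of an agent: acceleration points along the difference
// between the desired and current velocity, capped by maxAccel; speed is capped
// by maxSpeed. The caps model the agent's inertia.
public class Agent {
    double x, y, vx, vy;
    final double maxSpeed, maxAccel;

    Agent(double maxSpeed, double maxAccel) {
        this.maxSpeed = maxSpeed;
        this.maxAccel = maxAccel;
    }

    void step(double desiredVx, double desiredVy, double dt) {
        double ax = desiredVx - vx, ay = desiredVy - vy;
        double a = Math.hypot(ax, ay);
        if (a > maxAccel) { ax *= maxAccel / a; ay *= maxAccel / a; } // bounded acceleration
        vx += ax * dt;
        vy += ay * dt;
        double v = Math.hypot(vx, vy);
        if (v > maxSpeed) { vx *= maxSpeed / v; vy *= maxSpeed / v; } // bounded speed
        x += vx * dt;
        y += vy * dt;
    }

    public static void main(String[] args) {
        Agent agent = new Agent(1.5, 2.0);
        // head for an exit to the right; velocity relaxes toward the desired one
        for (int i = 0; i < 100; i++) agent.step(1.5, 0.0, 0.1);
        System.out.printf("x=%.2f v=%.2f%n", agent.x, Math.hypot(agent.vx, agent.vy));
    }
}
```

The collision response (partially elastic, conserving momentum) would be applied on top of this step whenever two agents or an agent and a wall overlap.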

2.3 Models Integration

Models integration is a separate complex task. The fire influences the agent’s choice of exit; more precisely, the agent chooses the exit closest to it among those that can be reached without approaching the source of fire. In addition, the spreading smoke affects the behavior of people and their visibility, which ultimately affects


the speed of movement of agents: with poor visibility, agents do not risk moving at maximum speed, and when moving by touch, speed has to be reduced to a minimum.
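The visibility-to-speed coupling can be sketched as a simple cap on the agent's maximum speed. The thresholds and factors below are illustrative assumptions of ours, not the values used in the package:

```java
// Sketch: cap on agent speed as a function of visibility in smoke.
// Thresholds and factors are illustrative assumptions.
public class VisibilitySpeed {
    static double speedFactor(double visibilityMeters) {
        if (visibilityMeters >= 10.0) return 1.0; // clear air: full speed
        if (visibilityMeters >= 1.0)  return 0.5; // poor visibility: cautious pace
        return 0.1;                               // moving by touch: minimum speed
    }

    public static void main(String[] args) {
        System.out.println(speedFactor(20.0)); // 1.0
        System.out.println(speedFactor(0.3));  // 0.1
    }
}
```

An agent's effective maximum speed at a given step would then be its nominal maximum multiplied by this factor for the smoke density at its location.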

3 Software Package Functional Requirements

The software package must satisfy the following requirements:
1. Contain a software implementation of the fire and evacuation models proposed in [1]. Allow running the fire and evacuation models together, separately, and in turn.
2. Interact with the user through graphical and console interfaces.
3. Visually display the positions of agents and obstacles, the fire front, and the distribution of heat and smoke in the room.
4. Be universal: it must be able to work with a wide class of premises, whose map the user can specify in a certain format.
5. Allow the user to conduct computational experiments for given premises with specified initial conditions, for example, the locations of the sources of fire and of people, and to plot graphs of the various dependencies obtained in the course of the computational experiments.
6. Be able to conduct multiple computational experiments, repeated with different initial conditions such as the locations of agents or fire sources in the premises. The purpose of a multiple experiment is to eliminate random influence from the results and to obtain averaged dependencies convenient for analysis.
7. Support multi-threaded task organization. This is particularly effective for multiple experiments, which have considerable computational complexity and at the same time are easily parallelizable. In addition, multithreading support is required for efficient management of the software package via the graphical and console interfaces without “hangs” and long user waits.

The created software package, named “CrowdFireSim”, meets the requirements listed above and is implemented in the object-oriented Java programming language, whose advantages include clear syntax, simple memory management, efficient language-level concurrency support, cross-platform execution, quite high performance and a large number of freely distributed libraries.

4 Software Package Inner Structure

Figure 1 shows the scheme of interaction between the software package modules and the user. The following modules can be distinguished in the software package.

1. Evacuation model module
This module contains two classes, EvacuationModel and EvacuationNet. The EvacuationNet class implements a grid that divides the room into cells [3, 10] and contains the logic for initializing the set of distances from each cell to the nearest exit from the room and the set of vectors indicating the direction to the nearest


Fig. 1. Software package inner structure and its interaction with user

exit for each cell. Each cell contains a reference to the agent whose center is in the given cell (or null if no such agent exists). The class also contains the logic for determining the desired velocity of each agent.
The EvacuationModel class contains the logic for initializing the evacuation model: the room size; the positions and sizes of the walls, exit zones and evacuation zones; and the initial coordinates of people. Each instance of the EvacuationModel class contains an instance of the EvacuationNet class, which is used to calculate the desired agent velocity. The EvacuationModel class manages model time; starting from the value of the desired velocity, the instantaneous acceleration, instantaneous velocity and coordinates of each agent are computed sequentially at each step of model time, the conditions of their collision with each other or with walls are checked, and the velocities of the agents are recalculated in the case of a collision. In addition, this class collects statistics using the supporting class EvacuationStat and implements the logic of multiple experiments.

2. Fire model module
This module contains two classes, FireModel and FireNet. The FireNet class is an implementation of the cellular automaton responsible for fire spread, supplemented by heat and smoke propagation algorithms [1]. The FireModel class contains the logic for initializing the fire model: the size of the room, the sizes and positions of the walls, and the statistics collection area. Each instance of the FireModel class contains an instance of the FireNet class, which is used to calculate the fire front, the heat distribution and the smoke density. The FireModel class manages model time and also collects statistics using the FireStat helper class.


3. Settings module
This module contains two classes, ModelSettings and ModelSettingsSerializer. An instance of the ModelSettings class stores the model settings that are retrieved and applied to the fire and evacuation models when they are initialized. These settings include: the size of the room, the sizes and positions of the walls, the exit areas, the evacuation zones and the statistics collection area. The ModelSettingsSerializer class includes methods that allow saving the contents of a ModelSettings instance as a JSON file, or loading the settings from a correctly structured file of the same format. The implementation uses the Jackson library [13]. JSON files have a convenient, human-readable format that makes it possible to create and modify such files manually. The JSON file structure is as follows:

{
  "x" : 26.0,
  "y" : 16.0,
  "walls" : [ { "x" : 3.0, "y" : 3.0, "widthX" : 0.2, "widthY" : 10.0 }, ... ],
  "doors" : [ { "x" : 0.0, "y" : 0.0, "widthX" : 0.2, "widthY" : 16.0 }, ... ],
  "ezs"   : [ { "x" : 3.0, "y" : 3.0, "widthX" : 20.0, "widthY" : 10.0 }, ... ],
  "fzs"   : [ { "x" : 0.0, "y" : 0.0, "widthX" : 3.3, "widthY" : 10.0 }, ... ]
}
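A JSON file of this structure maps naturally onto a pair of plain Java classes that a data binder such as Jackson populates by field name. The sketch below only shows the shape of such classes (the field names follow the JSON keys); it is not the package's actual source:

```java
import java.util.List;

// Plain classes mirroring the settings JSON; a JSON library (the package uses
// Jackson) binds the file to these fields by name.
public class ModelSettings {
    public double x, y;                   // room size
    public List<Rect> walls, doors, ezs, fzs;

    public static class Rect {
        public double x, y;               // lower left corner
        public double widthX, widthY;     // extent along the axes
    }

    public static void main(String[] args) {
        ModelSettings s = new ModelSettings();
        s.x = 26.0;
        s.y = 16.0;
        Rect wall = new Rect();
        wall.x = 3.0; wall.y = 3.0; wall.widthX = 0.2; wall.widthY = 10.0;
        s.walls = List.of(wall);
        System.out.println("walls: " + s.walls.size()); // walls: 1
    }
}
```

With Jackson, deserializing such a file comes down to a single `ObjectMapper.readValue(file, ModelSettings.class)` call.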

The size of the room is defined by the two fields x and y. The sizes and positions of the walls are specified by the array “walls” of objects, each of which specifies a wall using 4 parameters: the coordinates of the lower left corner (x, y) and the width of the wall along the axes (widthX, widthY). The exit areas (“doors”), evacuation zones (“ezs”) and statistics collection areas (“fzs”) are defined by similar arrays. The sign … implies that the rest of the objects are listed at that place in the JSON file. The software package parses the file and loads the map into a model for research. It is worth noting that the software package also contains some “standard” maps of premises; working with these maps is possible without preparing a JSON settings file.

4. Graphic module of model objects
This module includes classes whose instances specify the model objects displayable on the screen. These objects include walls (Wall class), agents (Human class), the floor (Floor class), exit zones (Door class), the coordinate grid (Cartesian class) and some others. All classes of this module implement the Drawable interface, which contains the draw() method, in which the object drawing logic must be implemented.

5. Swing graphical component module
The classes of this module either inherit or encapsulate Swing library classes [14]. The MainJComponent class contains the logic for displaying objects implementing the Drawable interface in the application window. Other classes of the module draw the menu bar and the context menu, allowing the user

to control the display and the simulation progress. There are also handlers for the mouse buttons and scroll wheel, allowing the user to set the visible part of the model space displayed on the screen and to change the scale. 6. Console command module. The classes of this module implement the processing of terminal commands and use the "Command" pattern [15]. Every console command corresponds to a class from this module that performs its processing. The software package contains commands that allow the user to start and stop the simulation, change the temperature and smoke display ranges in the premises, or change some simulation parameters. Of greatest interest here are the commands that implement multiple experiments. We list these commands:

exp1 conducts an experiment that calculates the function $\bar h_n^m(t)$, the result of averaging, over $m$ realizations obtained during the simulation, the dependence of the number of people being evacuated on time, provided that at the initial moment there are $n$ people in the room.

exp2 conducts an experiment that calculates the average evacuation time $t_m(n)$ as a function of the number of people $n$ in the premises, $t_m(n) = \frac{1}{n}\int_0^{\infty} \bar h_n^m(t)\,dt$.

exp3 conducts an experiment that calculates the functions $\bar h_{n,0}^m$, $\bar h_{n,1}^m$, $\bar h_{n,\infty}^m$, where $\bar h_{n,0}^m$ is the result of averaging, over $m$ realizations obtained in the simulation process, the dependence of the number of people not yet evacuated on time, provided that at the initial moment there are $n$ people in the premises, with zero visibility; $\bar h_{n,1}^m$ is the same but with 1 m visibility; $\bar h_{n,\infty}^m$ is the same but without smoke. The calculation of each of these functions is performed by a separate instance of the EvacuationModel class in a separate thread.

exp4 conducts an experiment that calculates the functions $\bar v_{n,0}^m$, $\bar v_{n,1}^m$, $\bar v_{n,\infty}^m$, where $\bar v_{n,0}^m$ is the dependence of the maximum speed of an agent on its arrival number at the exit, provided that at the initial moment there are $n$ people in the room, averaged over $m$ realizations obtained in the simulation process, with zero visibility; $\bar v_{n,1}^m$ is the same but with 1 m visibility; $\bar v_{n,\infty}^m$ is the same but without smoke.


Here too, the calculation of each function is organized in a separate thread. In addition to the above commands, the software package provides other commands that start and stop the simulation, change the temperature and smoke display ranges in the room, and change some simulation parameters; for example, the set command allows the user to change the characteristics of the generated agents, and the show command displays the current settings of the generated agents in the terminal window in textual form. 7. Statistics module. The module consists of the classes EvacuationStat, FireStat and some auxiliary classes. The classes of this module collect statistics during an experiment and store and transform them. Statistics here means the information required to calculate the functions of any multiple or single experiment. For the evacuation model it can be the number of agents in the room at a given time, the order in which agents were evacuated and the characteristics of these agents; for the fire model it can be the area of open fire, the average temperature and the average smoke density in the rooms. 8. Graph display module. It consists of the StatisticGraph class and uses the freely distributed JFreeChart library [16]. The library can display graphs and diagrams of various types in a separate application window, has a wide range of settings for the appearance of graphs and their captions, and can represent different sections of a graph at an arbitrary scale. 9. Agent generation module. The module consists of the HumanFactory class. It contains the logic for generating agents with characteristics (mass, maximum speed and radius) that satisfy the given distributions and with positions inside one of the evacuation zones. If there is a conflict, i.e. the projection of the generated agent intersects other agents or walls, the agent's coordinates are generated anew until there is no conflict.
In case of a conflict it is important to re-generate only the agent's coordinates, not the rest of its characteristics: generating new characteristics as well would risk distorting their distribution, because the probability of a conflict is proportional to the size of the projection area. The distortion is especially pronounced at high agent density inside the evacuation zones.
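A minimal sketch of this rejection scheme (class and method names here are illustrative, not taken from the package): the radius is drawn once, and only the position is re-drawn on conflict.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class AgentStub {
    final double x, y, radius;
    AgentStub(double x, double y, double radius) {
        this.x = x; this.y = y; this.radius = radius;
    }
}

class PlacementSketch {
    // Places an agent of a pre-drawn radius inside a zoneW x zoneH evacuation zone.
    // Only the coordinates are re-generated on overlap; re-drawing the radius too
    // would bias the realized radius distribution toward small agents, since the
    // conflict probability grows with the projection area.
    static AgentStub place(double zoneW, double zoneH, double radius,
                           List<AgentStub> placed, Random rnd) {
        while (true) {
            double x = radius + rnd.nextDouble() * (zoneW - 2 * radius);
            double y = radius + rnd.nextDouble() * (zoneH - 2 * radius);
            boolean conflict = false;
            for (AgentStub a : placed) {
                double dx = x - a.x, dy = y - a.y, rr = radius + a.radius;
                if (dx * dx + dy * dy < rr * rr) { conflict = true; break; }
            }
            if (!conflict) return new AgentStub(x, y, radius);
        }
    }
}
```

In the real HumanFactory the conflict test would also include walls, but the structure of the loop is the same.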

5 Task Distribution Between Threads The distribution of tasks between the threads in the single-experiment mode is shown in Fig. 2.


Fig. 2. Distribution of tasks between threads

Thread 1 processes user commands from the console command module and, when multiple experiments are executed, starts additional threads for model calculations. Thread 2 performs the calculations of the fire spread model, thread 3 performs the calculations of the evacuation model, and thread 4 shows the state of the models in the application window and processes GUI commands. The main computational load during a single experiment falls on threads 2 and 3. It is worth noting that in the single experiment the model time coincides with real time; this is done for the convenience of visually observing the models. To obtain this artificial limitation, the scheduleAtFixedRate() method of a ScheduledExecutorService implementation produced by the factory Executors.newSingleThreadScheduledExecutor() from the standard java.util.concurrent package is used. In the multiple-experiment mode the simulation speed is not artificially limited; it is limited only by computer power and the simulation parameters, primarily the number of agents, the size of the premises and the complexity of its geometry.
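The real-time pacing described above can be sketched with the same standard-library call (the step body below is a stand-in for one model update, and the period is illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class RealTimePacing {
    // Runs `steps` model updates at a fixed wall-clock period, so that model time
    // advances in step with real time; returns the elapsed real time in ms.
    static long runPaced(int steps, long periodMillis) throws InterruptedException {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(steps);
        long start = System.nanoTime();
        // Each scheduled tick stands for one model step of `periodMillis` of model time.
        exec.scheduleAtFixedRate(done::countDown, periodMillis, periodMillis,
                                 TimeUnit.MILLISECONDS);
        done.await();
        exec.shutdownNow();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

In the multiple-experiment mode this scheduler is simply bypassed and the model loop runs as fast as the hardware allows.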

6 Results and Discussion A series of experiments was conducted with the presented software; we give the results of one of them as an example. Figure 3 shows a graph of the number of people in the premises versus time. Time t = 0 corresponds to the beginning of the evacuation. Graphs are presented for the situation without smoke, with a visibility range of 1 m, and with zero visibility (functions $\bar h_{n,\infty}^m$, $\bar h_{n,1}^m$, $\bar h_{n,0}^m$ respectively).


Fig. 3. Graphs of the number of people in a room versus time: 1 – $\bar h_{n,\infty}^m$, smoke absent; 2 – $\bar h_{n,1}^m$, visibility range 1 m; 3 – $\bar h_{n,0}^m$, zero visibility

It follows from the graph that smoke is one of the leading factors during evacuation from premises: it seriously affects the speed of evacuation. This can be seen even in the above example for a small premises. For instance, when smoke is absent, more than 90% of the people manage to evacuate within 25 s, while in zero-visibility conditions considerably more than half of the people cannot leave the premises within 25 s. A software package containing effective algorithms for the interacting models of the spread of dangerous fire factors in premises and of people evacuation has been proposed. The models used in the software package are based on mathematical apparatus that is easy to implement and convenient for interaction: cellular automata, the multi-agent systems approach, elementary graph theory, and the basics of mechanics and thermodynamics. The software package allows the user to change the number of agents during the simulation, add sources of fire, observe the spread of fire, heat and smoke in the premises, analyze critical situations triggered by a fire, calculate the dynamics of the spread of dangerous fire factors and simulate the evacuation of people from the premises. The complex can be used to identify dangerous sections of evacuation routes in order to ensure safe evacuation, as well as to train the personnel responsible for fire safety in organizations. In conclusion, it should be noted that the proposed software complex has wide development potential. Additional settings could be added, and it makes sense to consider a wider use of multithreading, which would make it possible to simulate fire and evacuation inside large premises faster, including with the use of programming technologies related to computing on video cards [17].


References 1. Samartsev, A.A., Rezchikov, A.F., Kushnikov, V.A., Ivashchenko, V.A., et al.: Fire and heat spreading model based on cellular automata theory. J. Phys: Conf. Ser. 1015, 032120 (2018) 2. Svirin, I.S.: Overview of models of fire spread in buildings. Probl. Saf. Emerg. Situat. 6, 114–129 (2013). (in Russian) 3. Rudnitsky, V.N., Melnikova, E.A., Pustovit, M.A.: Parallelization and optimization of the calculation of the fire spread process on the basis of three-dimensional cellular automata. Vector Sci. TSU 1, 22–26 (2014). (in Russian) 4. Apiecionek, L., Zarzycki, H., Czerniak, J.M., et al.: The cellular automata theory with fuzzy numbers in simulation of real fires in buildings. Advances in Intelligent Systems and Computing, vol. 559, pp. 169–182 (2018) 5. Technical guide SITIS VIM 4.10, Constructing information technologies and systems OOO Sitis (2017). (in Russian) 6. Fedosov, S.V., Ibragimov, A.M., Soloviev, R.A., et al.: Mathematical model of fire spread in the premises system. Vestn. MGSU 4, 121–128 (2013). (in Russian) 7. Aptukov, A.M., Brazun, D.A., Lyushnin, A.V.: Modeling the behavior of a panicking crowd in a multi-level ramified premises. Comput. Stud. Model. 5, 491–508 (2013). (in Russian) 8. Moussaida, M., Helbing, D., Theraulaza, G.: How simple rules determine pedestrian behavior and crowd disasters. PNAS 108(17), 6884–6892 (2011) 9. Hanea, D.M.: Human risk of fire: building a decision support tool using Bayesian networks. Wöhrmann Print Service, 227 p (2009) 10. Korhonen, T.: Fire Dynamics Simulator with Evacuation: FDS + Evac Technical Reference and User’s Guide (FDS 6.5.2, Evac 2.5.2, DRAFT). VTT Technical Research Centre of Finland (2016) 11. Litvintsev, K.Y., Dekterev, A.A., Kirik, E.S., et al.: Possibilities of joint simulation of the spread of fires and evacuation in buildings. In: Kasimova, D.P. (ed.) Conjugated Tasks of Mechanics of Reactive Media, Computer Science and Ecology: Materials XX All-Russia. 
Scientific Conference with International Participation 2016, pp. 26–29 (2016). (in Russian) 12. Litvintsev, K.Y., Kirik, E.S., Dekterev, A.A., et al.: Analytical complex "Sigma PB" for simulation of fire spread and evacuation. In: Fire Safety, pp. 51–59 (2016). (in Russian) 13. Jackson project home. https://github.com/FasterXML/jackson. Last accessed 23 Sept 2018 14. Package javax.swing. https://docs.oracle.com/javase/7/docs/api/javax/swing/package-summary.html. Last accessed 21 Sept 2018 15. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Boston (1994) 16. JFreeChart: a 2D chart library for Java applications (JavaFX, Swing or server-side). https://github.com/jfree/jfreechart. Last accessed 17 Sept 2018 17. NVIDIA accelerated computing, CUDA zone. https://developer.nvidia.com/cuda-zone. Last accessed 30 Sept 2018

Nonlinear Information Processing Algorithm for Navigation Complex with Increased Degree of Parametric Identifiability

Konstantin Neusypin, Maria Selezneva, and Andrey Proletarsky

Bauman Moscow State Technical University, Moscow, Russia
[email protected]

Abstract. The aircraft navigation system with an error compensation algorithm for the basic inertial navigation system is considered. A nonlinear correction algorithm has been developed using an SDC representation of the matrix of the navigation system's error model. To improve the accuracy of the model, a method is proposed for increasing the degree of identifiability of the parameters in the model matrix. The problem of identification of nonlinear systems is investigated. A numerical criterion for the degree of identifiability of the parameters of a nonlinear model of one class, based on the SDC representation of the nonlinear model, has been developed. Keywords: Navigation complex · Navigation system errors · Correction algorithm · Nonlinear model · SDC representation · Identifiability criterion · Identifiability quality

1 Introduction The control of modern aircraft is carried out on the basis of information from the navigation complex (NC). Usually an NC consists of an inertial navigation system (INS), a global navigation satellite system (GNSS) and algorithmic support, which is used to correct the navigation information. The INS and GNSS have errors, which must be compensated algorithmically [1, 2]. The INS consists of accelerometers mounted on a gyro-stabilized platform (GSP). When the aircraft operates over long time intervals, in order to prevent the growth of INS errors, a correction is applied in the structure of the INS using a linear estimation algorithm and a controller [3, 4] or a linear adaptive control algorithm [1, 5, 6]. The algorithmic support of the NC uses linear models of the INS errors, which have low accuracy. To further improve the accuracy of the studied NC, it is advisable to use nonlinear models in the algorithmic support. The nonlinear control algorithm uses a model obtained with the SDC representation [7, 8]. In this article it is proposed to improve the accuracy of the NC further by using nonlinear models with improved properties. Since the structure of the model is set a priori, the parameters of this model are identified. The quality of the parametric identification may vary.

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 37–49, 2019. https://doi.org/10.1007/978-3-030-12072-6_4


The qualitative characteristics of the identification process are the time interval over which a parameter can be identified with a given accuracy and the achievable accuracy of the parameter determination. For linear stationary and non-stationary systems, numerical criteria for the degree of identifiability are known [9, 10]. To determine the quality of identification of the parameters of nonlinear systems, a numerical criterion for the degree of parametric identifiability has been developed. Thus, nonlinear models with well-identifiable parameters are selected for the control algorithm. The use of models with improved parametric identifiability properties improves the accuracy of the navigation solutions of the NC.

2 Algorithmic Correction of the Navigation Complex Compensation of the errors of the navigation systems of the NC is carried out by algorithmic means. The NC with correction of the INS errors in the structure is presented in Fig. 1.

Fig. 1. Scheme of INS with estimation algorithm and the regulator

Here we introduce the following notation: INS is the inertial navigation system; GNSS is the global navigation satellite system; EA is the estimation algorithm; H is the true navigation information; n is the GNSS error vector; x is the INS error vector; z is the measurement vector; $\hat x$ is the estimate of the INS error vector; u is the control vector. The INS and GNSS measurements are compared, resulting in a measurement vector that is a mixture of the navigation system errors. The Kalman filter is often used as the estimation algorithm: it uses the INS error model and produces an estimate of the INS errors at its output. The regulator also uses the INS error model and generates the control vector, which is fed to the INS input. With the help of the regulator, the INS errors in speed and in the angles of deviation of the GSP from the horizon plane are compensated. The controller of a serial NC uses a linear mathematical model of the INS errors, which has low accuracy. In order to improve the accuracy of the INS correction, adaptive control algorithms are used. When the divergence criterion is fulfilled in the Kalman filter [11, 12] and the controller, the matrix $\bar A_{k,k-1}$ is used, which is distinguished by a large sampling period. In this case, the estimation of the INS errors and the correction of the INS are


less frequent, and the accuracy of estimation, and accordingly the accuracy of regulation, is increased by increasing the degree of observability [13, 14]. When the system operation mode changes, the quality assessment is repeated. The adaptive controller takes the form

$$u_k = \begin{cases} A_{k+1,k}\,\hat x_k, & \text{if } \nu_k^T \nu_k \le c\,\mathrm{sp}\!\left(H_k P_{k,k-1} H_k^T + R_k\right), \\ \bar A_{k+1,k}\,\hat x_k, & \text{if } \nu_k^T \nu_k > c\,\mathrm{sp}\!\left(H_k P_{k,k-1} H_k^T + R_k\right). \end{cases}$$

It should be noted that the increase in estimation accuracy due to an increase in the sampling period is known and only confirms the correctness of the proposed principle of constructing an adaptive controller. Increasing the degree of observability of the INS errors by changing some other parameter also leads to an increase in the accuracy of estimation and, therefore, of regulation. As an example, consider the problem of INS correction by means of an adaptive controller for one horizontal channel of the system. The simplest mathematical model of the INS errors was used, which takes the form

$$x_k = A x_{k-1} + \omega_{k-1} - K_{k-1}^{i}\,\hat x_{k-1}, \qquad x_k = \begin{bmatrix} \delta V_k \\ \varphi_k \end{bmatrix}, \qquad A = \begin{bmatrix} 1 & -gT \\ T/R & 1 \end{bmatrix}, \qquad \omega_{k-1} = \begin{bmatrix} 0 \\ \varepsilon_{k-1} \end{bmatrix},$$

$$K_{k-1}^{1} = A, \qquad K_{k-1}^{2} = \begin{bmatrix} 1 & -3gT \\ 3T/R & 1 \end{bmatrix},$$

where $x_k$ is the state vector; $A$ is the model matrix; $\omega_{k-1}$ is the noise vector; $\delta V_k$ is the error in determining the speed; $\varphi_k$ is the deviation angle of the GSP; $\varepsilon_{k-1}$ is the drift velocity of the GSP, which is a stationary random process with an exponential correlation function; $R$ is the Earth radius; $g$ is the gravity acceleration; $T$ is the sampling period; $K_{k-1}^{i}$ is the regulator matrix, $i = 1, 2$; $K_{k-1}^{1}$ is the optimal regulator matrix and $K_{k-1}^{2}$ is the adaptive control matrix. With an increase in the level of measurement noise that was not accounted for in the measurement-noise covariance matrix of the Kalman filter, the estimation error increases, resulting in an increase in the deflection angle of the GSP of the INS with the optimal regulator. In the adaptive controller, the matrix $K^2$ is used instead of the matrix $K^1$. The adaptive selection of the regulator matrix allows one to significantly reduce the angles of deviation of the GSP from the horizon plane.
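The relay selection in the adaptive controller reduces to comparing the innovation energy with a threshold proportional to the trace of its predicted covariance. A schematic sketch (the threshold c and all numbers in the test are illustrative; sp(·) denotes the trace):

```java
class RelaySelection {
    // Returns 1 for the optimal matrix K1 and 2 for the adaptive matrix K2,
    // switching when nu^T nu exceeds c * sp(H P H^T + R).
    static int selectRegulator(double[] innovation, double[][] hpht,
                               double[][] r, double c) {
        double nu2 = 0;
        for (double v : innovation) nu2 += v * v;        // nu^T nu
        double trace = 0;                                // sp(H P H^T + R)
        for (int i = 0; i < hpht.length; i++) trace += hpht[i][i] + r[i][i];
        return nu2 <= c * trace ? 1 : 2;
    }
}
```

In the NC this decision is taken at every filter step, so the controller falls back to the nominal matrix as soon as the innovation returns within its predicted bounds.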


Thus, an adaptive INS regulator with relay selection of the control matrix has been presented. The nonlinear character of the changes in some parameters of the INS error model is not taken into account in this adaptive controller. Linear models of INS errors have low accuracy, since only the dominant components of the error processes are taken into account. Therefore, to obtain higher precision of the NC, it is advisable to use nonlinear models of the INS errors.

3 Development of a Nonlinear Control Algorithm We carry out the synthesis of the control algorithm for the nonlinear model of the INS errors in continuous form. The nonlinear error model of the INS has the form

$$\frac{d}{dt}x(t) = f(t,x) + g_1(t,x)\,w(t) + g_2(t,x)\,u(t), \quad x(t_0) = x_0, \qquad y(t) = h(t,x), \quad (1)$$

where $f(t,x)$, $g_1(t,x)$, $g_2(t,x)$ are nonlinear matrices. We represent (1) in an equivalent form in which the model has the structure of linear differential equations with state-dependent parameters (State Dependent Coefficient, SDC) [7, 8]. Equation (1), transformed with the help of the SDC-representation method, becomes

$$\frac{d}{dt}x(t) = A(t,x)\,x(t) + D(t,x)\,w(t) + B(t,x)\,u(t), \quad x(t_0) = x_0, \qquad y(t) = H(t,x)\,x(t). \quad (2)$$

The nonlinear system (2) is controllable if

$$\mathrm{rank}\left[D(t,x),\; A(t,x)D(t,x),\; A^2(t,x)D(t,x),\; \ldots,\; A^{n-1}(t,x)D(t,x)\right] = n,$$
$$\mathrm{rank}\left[B(t,x),\; A(t,x)B(t,x),\; A^2(t,x)B(t,x),\; \ldots,\; A^{n-1}(t,x)B(t,x)\right] = n, \quad (3)$$

where $n$ is the order of system (1). The controllability Gramians $P_w(t,x)$ and $P_u(t,x)$ exist and are the solutions of the Lyapunov equations

$$A(t,x)P_w(t,x) + P_w(t,x)A^T(t,x) + D(t,x)D^T(t,x) = 0,$$
$$A(t,x)P_u(t,x) + P_u(t,x)A^T(t,x) + B(t,x)B^T(t,x) = 0. \quad (4)$$

Accordingly, the nonlinear system (2) is observable if the condition

$$\mathrm{rank}\left[H^T(t,x),\; A^T(t,x)H^T(t,x),\; \ldots,\; \left(A^T(t,x)\right)^{n-1}H^T(t,x)\right] = n \quad (5)$$

is met.

Nonlinear Information Processing Algorithm

41

The observability Gramian $P_o(t,x)$ exists and is a solution of the Lyapunov equation

$$A^T(t,x)P_o(t,x) + P_o(t,x)A(t,x) + H^T(t,x)H(t,x) = 0. \quad (6)$$

If criteria (3) and (5) are fulfilled, system (2) is controllable and observable. The task of synthesizing the control algorithm is formulated within the framework of the theory of differential games. The quality functional of the differential game is then

$$J(x,u,w) = \frac{1}{2}\,y^T(t_1)F\,y(t_1) + \frac{1}{2}\int_{t_0}^{t_1}\left[y^T(t)Q\,y(t) + u^T(t)R\,u(t) - w^T(t)P\,w(t)\right]dt. \quad (7)$$

The symmetric matrices F and Q are positive semidefinite, and R and P are positive definite matrices. The optimal control actions that minimize functional (7) have the form

$$w(t) = P^{-1}D^T(x)\left[\hat S(x)x(t) + \hat q(x)\right], \qquad u(t) = -R^{-1}B^T(x)\left[\hat S(x)x(t) + \hat q(x)\right]. \quad (8)$$

To find the matrices $\hat S(x)$ and $\hat q(x)$ in (8), we use the backward sweep method. The estimates $\hat S(x)$ and $\hat q(x)$ are determined by solving the equations

$$\frac{d\hat S(x)}{dt} + A^T(x)\hat S(x) + \hat S(x)A(x) - \hat S(x)\tilde P(x)\hat S(x) + H^T Q H = 0, \quad \hat S(x_0) = S_0,$$
$$\frac{d\hat q(x)}{dt} + \left[A^T(x) - \hat S(x)\tilde P(x)\right]\hat q(x) = 0, \quad \hat q(x_0) = q_0, \quad (9)$$

where $\tilde P(x) = B(x)R^{-1}B^T(x) - D(x)P^{-1}D^T(x)$. Model (1) with control (8) takes the form

$$\frac{d}{dt}x(t) = f(t,x) - \tilde P(x)\left[\hat S(x)x(t) + \hat q(x)\right], \quad x(t_0) = x_0, \qquad y(t) = h(t,x). \quad (10)$$

When implementing the found control, the state vector must be replaced by its estimate, obtained using a nonlinear Kalman filter. By the equivalence theorem, if the state vector is replaced by its estimate, the structure of the control algorithm does not change. In practice, according to formula (8), the control vector takes the simplified form

$$u(t) = -R^{-1}B^T(\hat x)S_0\,\hat x(t), \quad (11)$$

where $S_0$ is a positive definite matrix determined by solving the equation

$$S_0 A_0 + A_0^T S_0 - S_0 B_0 R^{-1} B_0^T S_0 + H^T Q H = 0. \quad (12)$$

The resulting controls, using a linear model and a quadratic quality criterion, ensure the stability of this model under any initial conditions. It should be noted that in the general formulation the problem of the global asymptotic stability of a nonlinear system with a control synthesized by the SDC method is not solved; therefore, when using such a control for a nonlinear system, additional research is needed. When high-dimensional models are used, problems arise with determining the matrix S(x). One solution is to split the entire control interval into separate subintervals, with X0, X1, …, Xn the corresponding values of the system state on each subinterval. The matrix S(x) can then be uniquely determined for each state of the system.

Fig. 2. Determination of matrix S(x)

Another solution is to create a database of matrices S(x): for each state of the system, S(x) is found in advance, and during the flight the matrix S(x) corresponding to the current state of the system is taken from the existing database (Fig. 2).
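The database idea can be sketched as a lookup keyed by a quantized system state. The quantization step and the way S(x) is precomputed are assumptions of the sketch; the stored matrices stand in for precomputed Riccati solutions:

```java
import java.util.HashMap;
import java.util.Map;

class GainDatabase {
    private final double cell;                        // quantization step of the state
    private final Map<Long, double[][]> gains = new HashMap<>();

    GainDatabase(double cell) { this.cell = cell; }

    // Two quantized state components packed into one map key.
    private long key(double x1, double x2) {
        return (Math.round(x1 / cell) << 32) ^ (Math.round(x2 / cell) & 0xffffffffL);
    }

    // Offline: precompute and store S(x) for a representative state X_i.
    void put(double x1, double x2, double[][] s) { gains.put(key(x1, x2), s); }

    // In flight: fetch the matrix precomputed for the cell containing the state
    // (null if no representative state was stored for this cell).
    double[][] lookup(double x1, double x2) { return gains.get(key(x1, x2)); }
}
```

States falling into the same quantization cell reuse the same precomputed matrix, which trades a small control error for avoiding an online Riccati solve.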

4 Development of a Nonlinear Correction Algorithm for INS Errors The INS error equations are the equations of the orientation errors and the equations of the horizontal accelerometers. They have the form

$$\delta\dot V = -g\psi + B, \qquad \dot\psi = \frac{\delta V}{R} + \frac{\delta V}{R}\psi + \varepsilon, \qquad \dot\varepsilon = -\mu\varepsilon + \eta, \quad (13)$$

where $\delta V$ is the error in determining the speed; $\psi$ is the deviation angle of the GSP; $B$ and $\eta$ are Markov random processes; $R$ is the Earth radius; $g$ is the gravity acceleration; $\mu$ is the average frequency of random changes in the drift.


Equation (13) in matrix form is

$$\dot x(t) = f(t, x(t)) + w(t), \quad (14)$$

where

$$x(t) = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} \delta V \\ \psi \\ \varepsilon \end{bmatrix}, \qquad f(t,x) = \begin{bmatrix} -g x_2 \\ \dfrac{x_1}{R} + \dfrac{x_1 x_2}{R} + x_3 \\ -\mu x_3 \end{bmatrix}, \qquad w(t) = \begin{bmatrix} B \\ 0 \\ \eta \end{bmatrix}.$$

We obtain the SDC representation of Eq. (14):

$$\dot x(t) = A(t,x)\,x(t) + w(t), \quad (15)$$

where

$$A(t,x)\,x(t) = \begin{bmatrix} 0 & -g & 0 \\ \dfrac{1}{R} & \dfrac{x_1}{R} & 1 \\ 0 & 0 & -\mu \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.$$

In discrete form, the SDC representation of the nonlinear system (15) is

$$x_k = F x_{k-1} + w_{k-1}, \quad (16)$$

where

$$x_k = \begin{bmatrix} \delta V_k \\ \psi_k \\ \varepsilon_k \end{bmatrix}, \qquad w_k = \begin{bmatrix} B_k \\ 0 \\ \eta_k \end{bmatrix}, \qquad F = \begin{bmatrix} 1 & -T g_k & 0 \\ \dfrac{T}{R_k} & 1 + \dfrac{T\,\delta V_k}{R_k} & T \\ 0 & 0 & 1 - T\mu \end{bmatrix},$$

and $T$ is the sampling period. We represent the state vector as the sum of the vectors $z_k$ and $y_k$, collecting in $z_k$ only the components we intend to control and in $y_k$ all the remaining components of the state vector. Then the equation of the object takes the form

$$x_k = F z_{k-1} + G y_{k-1} + w_{k-1} + u_{k-1}. \quad (17)$$

Denote

$$w_{k-1} + G y_{k-1} = f_{k-1}. \quad (18)$$

Let $\hat z_{k-1}$ and $\hat f_{k-1}$ be the estimates of $z_{k-1}$ and $f_{k-1}$. The control then takes the form

$$u_{k-1} = -\left(K_{k-1}\hat z_{k-1} + \hat f_{k-1}\right). \quad (19)$$

The use of the state vector estimate in the regulator presupposes its preliminary estimation by the estimation algorithm. At the output of the estimation algorithm we have a signal of the form

$$\hat x_k = x_k - \tilde x_k, \quad (20)$$

where $\tilde x_k$ is the error of the state vector estimation.


Substituting expression (19) into Eq. (17) and taking into account expression (20), we obtain

$$x_k = (F - K_{k-1})\,z_{k-1} + K_{k-1}\tilde z_{k-1} + \tilde f_{k-1}. \quad (21)$$

The optimal control is determined by finding the regulator matrix for which

$$J = M\left[x_k^T x_k\right] \quad (22)$$

takes its minimum value. We write the covariance matrix of the state vector:

$$M\left[x_k x_k^T\right] = M\left\{\left[(F - K_{k-1})z_{k-1} + K_{k-1}\tilde z_{k-1} + \tilde f_{k-1}\right]\left[(F - K_{k-1})z_{k-1} + K_{k-1}\tilde z_{k-1} + \tilde f_{k-1}\right]^T\right\}. \quad (23)$$

Taking into account the orthogonality principle, expression (23) takes the form

$$\begin{aligned} M\left[x_k x_k^T\right] ={}& (F - K_{k-1})\,M\left[z_{k-1} z_{k-1}^T\right](F - K_{k-1})^T + (F - K_{k-1})\,M\left[\tilde z_{k-1}\tilde z_{k-1}^T\right]K_{k-1}^T \\ &+ K_{k-1}\,M\left[\tilde z_{k-1}\tilde z_{k-1}^T\right](F - K_{k-1})^T + K_{k-1}\,M\left[\tilde z_{k-1}\tilde z_{k-1}^T\right]K_{k-1}^T \\ &+ (F - K_{k-1})\,M\left[\tilde z_{k-1}\tilde f_{k-1}^T\right] + M\left[\tilde f_{k-1}\tilde z_{k-1}^T\right](F - K_{k-1})^T \\ &+ K_{k-1}\,M\left[\tilde z_{k-1}\tilde f_{k-1}^T\right] + M\left[\tilde f_{k-1}\tilde z_{k-1}^T\right]K_{k-1}^T + M\left[\tilde f_{k-1}\tilde f_{k-1}^T\right]. \end{aligned}$$

The sum of the variances of the state vector is

$$J = \mathrm{sp}\,M\left[x_k x_k^T\right] = M\left[x_k^T x_k\right].$$

We find the optimal value of the controller matrix from the condition that the gradient vanishes: $\partial J / \partial K_{k-1} = 0$. Using the rules of matrix differentiation, we obtain the optimality condition that delivers the minimum of the functional:

$$K_{k-1} = F.$$

5 Criteria for the Degree of Identifiability of Nonlinear Systems In practice, for convenience of information processing, the discrete form of the system is often used, in which the SDC representation of the nonlinear system (1) is

$$x_{k+1} = \Phi(t_k, x_k)\,x_k + G(t_k, x_k)\,w_k, \qquad y_{k+1} = H(t_{k+1}, x_{k+1})\,x_{k+1} + v_{k+1}. \quad (24)$$

It is assumed that $w_k$ and $v_{k+1}$ are white Gaussian uncorrelated noises and that for any $j$ and $k$ the noises $v_j$ and $w_k$ are uncorrelated with each other, i.e. $M\left[v_j w_k^T\right] = 0$. Let the equation of the object in the SDC representation and the measurement equation have the form (24). Then the state vector $x_{k+n}$ can be expressed through its value at the initial moment of time $x_k$ as

$$x_{k+n} = \Phi(t_{k+n-1}, x_{k+n-1})\cdots\Phi(t_k, x_k)\,x_k + \Phi(t_{k+n-1}, x_{k+n-1})\cdots\Phi(t_{k+1}, x_{k+1})\,G(t_k, x_k)\,w_k + \cdots + G(t_{k+n-1}, x_{k+n-1})\,w_{k+n-1}. \quad (25)$$

Substituting the expression for $x_{k+n}$ into the measurement equation for $y_{k+n}$, we get

$$y_{k+n} = H_{k+n}\Phi(t_{k+n-1}, x_{k+n-1})\cdots\Phi(t_k, x_k)\,x_k + H_{k+n}\Phi(t_{k+n-1}, x_{k+n-1})\cdots\Phi(t_{k+1}, x_{k+1})\,G(t_k, x_k)\,w_k + \cdots + H_{k+n}G(t_{k+n-1}, x_{k+n-1})\,w_{k+n-1} + v_{k+n}. \quad (26)$$

Substituting into this equation the expression for $x_k$, we have

$$y_{k+n} = H_{k+n}\Phi(t_{k+n-1}, x_{k+n-1})\cdots\Phi(t_k, x_k)\,O_k^{+}y_k - H_{k+n}\Phi(t_{k+n-1}, x_{k+n-1})\cdots\Phi(t_k, x_k)\,O_k^{+}v_k + H_{k+n}\Phi(t_{k+n-1}, x_{k+n-1})\cdots\Phi(t_{k+1}, x_{k+1})\,G(t_k, x_k)\,w_k + \cdots + H_{k+n}G(t_{k+n-1}, x_{k+n-1})\,w_{k+n-1} + v_{k+n}, \quad (27)$$

where $O_k^{+} = \left(O_k^T O_k\right)^{-1}O_k^T$ is the pseudo-inverse of $O_k$. Introduce the notation

$$\left[\lambda_{1,k}\;\;\lambda_{2,k}\;\;\cdots\;\;\lambda_{n,k}\right] = H_{k+n}\Phi(t_{k+n-1}, x_{k+n-1})\cdots\Phi(t_k, x_k)\,O_k^{+}, \quad (28)$$

$$v_k' = \gamma_{1,k}w_k + \gamma_{2,k}w_{k+1} + \cdots + \gamma_{n,k}w_{k+n-1} - \lambda_{1,k}v_k - \lambda_{2,k}v_{k+1} - \cdots - \lambda_{n,k}v_{k+n-1} + v_{k+n}. \quad (29)$$

Then the problem is reduced to determining the unknown nonstationary elements of the column vector $\left[\lambda_{1,k}\;\lambda_{2,k}\;\ldots\;\lambda_{n,k}\right]$ from the newly formed measurements [7], i.e.

$$\lambda_{1,k} = f_{1,k}(y_k, \ldots, y_{k+2n-1}) + v_k'', \quad \lambda_{2,k} = f_{2,k}(y_k, \ldots, y_{k+2n-1}) + v_{k+1}'', \quad \ldots, \quad \lambda_{n,k} = f_{n,k}(y_k, \ldots, y_{k+2n-1}) + v_{k+n-1}'', \quad (30)$$

where

$$\begin{bmatrix} f_{1,k}(y_k, \ldots, y_{k+2n-1}) \\ f_{2,k}(y_k, \ldots, y_{k+2n-1}) \\ \vdots \\ f_{n,k}(y_k, \ldots, y_{k+2n-1}) \end{bmatrix} = \begin{bmatrix} y_k & y_{k+1} & \cdots & y_{k+n-1} \\ y_{k+1} & y_{k+2} & \cdots & y_{k+n} \\ \vdots & \vdots & & \vdots \\ y_{k+n-1} & y_{k+n} & \cdots & y_{k+2n-2} \end{bmatrix}^{-1} \begin{bmatrix} y_{k+n} \\ y_{k+n+1} \\ \vdots \\ y_{k+2n-1} \end{bmatrix},$$

$$\begin{bmatrix} v_k'' \\ v_{k+1}'' \\ \vdots \\ v_{k+n-1}'' \end{bmatrix} = \begin{bmatrix} y_k & y_{k+1} & \cdots & y_{k+n-1} \\ y_{k+1} & y_{k+2} & \cdots & y_{k+n} \\ \vdots & \vdots & & \vdots \\ y_{k+n-1} & y_{k+n} & \cdots & y_{k+2n-2} \end{bmatrix}^{-1} \begin{bmatrix} v_k' \\ v_{k+1}' \\ \vdots \\ v_{k+n-1}' \end{bmatrix}.$$

Therefore, the criterion of the degree of parametric identifiability of the model of a dynamic non-stationary system is

$$\Delta IN_k^i = \frac{E\left[\left(\lambda_{i,k}\right)^2\right]R^0}{E\left[\left(y_{i,k}\right)^2\right]\hat R_k^i}, \quad (31)$$

where $E\left[\left(\lambda_{i,k}\right)^2\right]$ is the variance of an arbitrary $i$-th component of the parameter vector; $E\left[\left(y_{i,k}\right)^2\right]$ is the variance of the directly measured state vector; $R^0$ is the variance of the original measuring noise; $\hat R_k^i$ is the variance of the reduced measuring noise. Thus, the formalized dependence (31) is used to determine the degree of parametric identifiability of the matrix $\Phi(t_k, x_k)$. The variance of the original measuring noise is determined from practical considerations in accordance with the operating mode of the measuring system or is taken from the data sheet of the measuring device. Certain difficulties arise when calculating the reduced measurement noise; however, when an adaptive estimation algorithm is used, the variance of the reduced measurement noise is calculated at each step of the algorithm. The quality, or effectiveness, of identification is determined by the maximum attainable identification accuracy and the time needed to achieve a given identification accuracy. The developed numerical criterion for the degree of identifiability has a clear physical meaning, is simple, and makes it possible to express the quality of parameter identification as a scalar.
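A schematic computation of criterion (31) from sample records. The records and noise variances in the test are synthetic; in the real system the parameter samples come from (30) and the reduced noise variance from the adaptive estimator at each step:

```java
class IdentifiabilityCriterion {
    // Sample variance about the mean (population form, for illustration).
    static double variance(double[] samples) {
        double mean = 0;
        for (double s : samples) mean += s;
        mean /= samples.length;
        double v = 0;
        for (double s : samples) v += (s - mean) * (s - mean);
        return v / samples.length;
    }

    // Degree of identifiability of the i-th parameter, eq. (31):
    // larger values mean the parameter is better identifiable from the measurements.
    static double degree(double[] lambdaSamples, double[] ySamples,
                         double r0, double rHat) {
        return (variance(lambdaSamples) * r0) / (variance(ySamples) * rHat);
    }
}
```

A model whose parameters give larger values of this scalar would be preferred for the control algorithm, as proposed in the text.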


6 Simulation Results The results of mathematical modeling of the NC operation using test models of the INS errors are presented in Figs. 3 and 4.

Fig. 3. NC with an improved model with a correction in the structure of the INS (errors in measurement of the speed)

Fig. 4. NC with an improved model with correction in the structure of the INS (errors in measurement of the angles of deviation of the GSP)

In Figs. 3 and 4 the notation is: 1 is the error of the corrected INS; 2 is the error of the corrected INS with the improved model. According to the results of the mathematical modeling, the accuracy of determining the speed using the developed NC with correction in the structure of the INS by means of the control algorithm with the improved model increases by 7% on average, and the accuracy of determining the deviation angles of the GSP increases by 10%. The simulation results demonstrated the operability of the nonlinear control algorithm based on the SDC representation of the nonlinear INS error model, and also demonstrated the advantage of the NC with an improved model with enhanced parametric


identifiability properties. Using the developed control algorithm, it is possible to significantly improve the accuracy of the navigation solutions of the aircraft.

7 Conclusion An NC functioning for a long time without correction from ground stations has been investigated. To compensate for the errors of the NC, a scheme including estimation and control algorithms was used. Serial NCs use algorithms with linear models of the navigation system errors. It is proposed to increase the accuracy of the NC by using nonlinear models of the INS errors in the algorithmic support. A nonlinear control algorithm has been developed based on the SDC representation of the nonlinear model. It is proposed to use models with improved parametric identifiability properties in the NC. A numerical criterion for the degree of identifiability of the parameters of nonlinear models has been developed; the criterion is applicable to nonlinear systems that can be represented by the SDC method. Thus, the criterion makes it possible to determine the degree of identifiability of the parameters of the model matrix for this class of nonlinear systems and to use models with improved parametric identifiability in the NC. Acknowledgments. This work was supported by the Russian Foundation for Basic Research (Project 16-8-00522), the State Mission of the Ministry of Education and Science of the Russian Federation (Project No. 2.7486.2017) and the Program of Introducing Talents of Discipline to Universities in China (Program 111, No. B 16025).

References

1. Groves, P.D.: Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems. Artech House, Boston (2013)
2. Neusypin, K.A.: Sovremennye sistemy i metody navedeniya, navigatsii i upravleniya letatelnymi apparatami [Modern Systems and Methods of Guidance, Navigation and Aircraft Control]. MGOU (2009). (in Russian)
3. Selezneva, M.S., Neusypin, K.A.: Development of a measurement complex with intelligent component. Meas. Tech. 59(9), 916–922 (2016)
4. Shen, K., Selezneva, M.S., Neusypin, K.A., Proletarsky, A.V.: Novel variable structure measurement system with intelligent components for flight vehicles. Metrol. Meas. Syst. 24(2), 347–356 (2017)
5. Proletarsky, A.V., Neusypin, K.A., Shen, K., Selezneva, M.S., Grout, V.: Development and analysis of the numerical criterion for the degree of observability of state variables in nonlinear systems. In: Proceedings of the 7th International Conference, vol. 7, pp. 150–154 (2017)
6. Noureldin, A., Karamat, T.B., Georgy, J.: Fundamentals of Inertial Navigation, Satellite-Based Positioning and Their Integration. Springer, Heidelberg (2013)
7. Van Trees, H.L., Bell, K.L.: Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking. Wiley-IEEE Press, New York (2007)
8. James, R.C., Christopher, N.D., Curtis, P.M.: Nonlinear regulation and nonlinear H∞ control via the state-dependent Riccati equation technique: part 1, theory. In: Proceedings of the First International Conference on Nonlinear Problems in Aviation and Aerospace, Daytona Beach, FL, USA, pp. 117–141 (1996)
9. Ham, F.M., Brown, R.G.: Observability, eigenvalues, and Kalman filtering. IEEE Trans. Aerosp. Electron. Syst. AES-19(2), 269–273 (1983)
10. Kalman, R.E., Ho, Y.C., Narendra, K.S.: Controllability of linear dynamical systems. In: Contributions to the Theory of Differential Equations, vol. I, no. 2, pp. 189–213 (1963)
11. Van Trees, H.L., Bell, K.L.: Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking, p. 951. Wiley-IEEE Press (2007)
12. Stepanov, O.A.: Application of the theory of nonlinear filtering in tasks of processing navigation information. Central Scientific Research Institute "Electrical Device" (2003)
13. Ivakhnenko, A.G.: Polynomial theory of complex systems. IEEE Trans. Syst. Man Cybern. SMC-1(4), 364–378 (1971)
14. Shen, K., et al.: Technology of error compensation in navigation systems based on nonlinear Kalman filter. J. Natl. Univ. Defense Technol. 2, 84–90 (2017)

The Task of Reducing the Cost of Production During Welding by Robotic Technological Complexes

Dmitry Fominykh¹, Alexander Rezchikov¹, Vadim Kushnikov², Vladimir Ivaschenko¹, Tatyana Shulga² and Andrey Samartsev¹

¹ Institute of Precision Mechanics and Control, Russian Academy of Sciences, 24 Rabochaya Street, Saratov 410028, Russia
[email protected]
² Yuri Gagarin State Technical University, 77 Politechnicheskaya Street, Saratov 410054, Russia

Abstract. The article deals with the control of the welding process in robotic technological complexes via the criterion of production cost. The statement of the problem is given, and models and algorithms for its solution are considered. The solution is built on minimizing the probability of failure of a developed plan of activities for reducing the production cost. For this purpose, a graph of the plan of activities is constructed and its minimal sections are determined, for each of which a state graph is formed. Based on the state graph, a system of Kolmogorov-Chapman differential equations is compiled. By solving this system of equations, it is possible to calculate the probability of failure to implement the plan for a specific section. The introduction of the models and algorithms considered in the article will make it possible to reduce the cost of the products manufactured and to increase production efficiency with the use of robotic technological complexes.

Keywords: Robotic technological complex · Mathematical model · Algorithm · Production cost · Combination of events · Plan of activities · Technological process

1 Introduction

With the rapid development of technology and increasing competition, one of the main problems of industrial enterprises is reducing production cost. The technological process of welding via robotic technological complexes (RTC) is no exception. Currently, various systems for RTC control have been developed and practically tested [1–3]. They are mainly aimed at providing welding modes and positioning accuracy of the manipulator and do not allow for a reduction in production cost that takes into account all stages of the technological process. These circumstances determine the relevance and practical significance of this article, which contains the development of models and algorithms for controlling the welding process in the RTC by a criterion that allows minimizing the cost of production.

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 50–60, 2019. https://doi.org/10.1007/978-3-030-12072-6_5


2 Statement of the Problem

Develop an algorithm for searching the vector v(t) ∈ {V} of control actions on the robotic complex which, for any valid values of the state vector of the environment x(t) ∈ {X}, minimizes the criterion on the time interval [t0, t1]:

$$\int_{t_0}^{t_1} D(t, x, v)\,dt \to \min$$

under the limitations

$$F_i(t, x, v) \ge 0,\ i = 1, 2, \ldots, n; \qquad F_i(t, x, v) < 0,\ i = n + 1, \ldots, m$$

and the initial conditions

$$F_i(t_0, x, v) = 0,\ i = m + 1, \ldots, k,$$

where D is the production-cost function, {X}, {V} are the sets of permissible values of the vectors x(t) and v(t), respectively, n, m, k are known constants, and t is time. In other words, it is necessary to determine such values of the vector of control actions that the function D(t, x, v) falls into the range of its minimum values. The existence of such a region for this problem is confirmed by an analysis of the causal relationships of the functioning of the robotic complex and is consistent with the experience of operational dispatch staff.

3 Mathematical Models and Algorithms

The solution to this problem is difficult because of the need to develop a complex dynamic model that takes into account the numerous quantitative and qualitative characteristics of the technological process, and also because of the uncertainty of the model parameters on the time interval. In this regard, a heuristic method for solving the problem has been developed, which is presented below.

The algorithm for reducing the cost of production is based on a statement, confirmed by practice, according to which it is enough to develop and implement a plan of activities acting as control actions. Thus, solving the problem comes down to minimizing the failure probability of this plan. Based on an analysis of the technological process, an action plan was developed to minimize the cost of production. This plan is presented in the form of a tree, in which the vertices are the actions of the plan, and the arcs determine the sequence of their implementation and their interconnection (see Fig. 1).
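The minimal sections used later in the paper can be derived mechanically from such an AND/OR plan tree. The following sketch is our own illustration (not the authors' software), using only a tiny fragment of Fig. 1: for a conjunctive node, the failure of any child fails the node, while for a disjunctive node all children must fail, which yields the classical recursive computation of minimal cut sets.

```python
# Illustrative sketch: minimal sections (failure cut sets) of an AND/OR
# activity tree. The tree fragment below is a reduction of Fig. 1, not the
# paper's full plan: U0 needs U1..U3 and U4, and U4 is achieved by U13 or U14.

def cut_sets(node):
    """Return the minimal sets of leaf activities whose failure fails `node`."""
    kind = node[0]
    if kind == 'leaf':
        return [frozenset([node[1]])]
    children = node[1]
    if kind == 'and':   # all children required: any child's cut set suffices
        sets = [s for ch in children for s in cut_sets(ch)]
    else:               # 'or': every child must fail -> cross product
        sets = [frozenset()]
        for ch in children:
            sets = [a | b for a in sets for b in cut_sets(ch)]
    # keep only minimal (inclusion-wise) sets
    return [s for s in sets if not any(t < s for t in sets)]

leaf = lambda name: ('leaf', name)
plan = ('and', [leaf('U1'), leaf('U2'), leaf('U3'),
                ('or', [leaf('U13'), leaf('U14')])])

print(sorted(sorted(s) for s in cut_sets(plan)))
# -> [['U1'], ['U13', 'U14'], ['U2'], ['U3']]
```

For this fragment the sketch reproduces the one-element sections S1–S3 and the two-element section (U13, U14), i.e. S10 from the paper's list.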


Fig. 1. The plan of activities to minimize the cost of production when welding via RTC: U0 - reduction of the cost of the production to be welded; U1 - reduction of material consumption; U2 - reducing labor intensity; U3 - reduction of energy consumption; U4 - reducing the cost of defective products; U5 - reduction of metal consumption per item; U6 - reducing the consumption of welding wire; U7 - reduction of labor costs for operating staff; U8 - reduced maintenance costs of robotic systems; U9 - reducing the cost of auxiliary equipment; U10 - reducing energy costs; U11 - reducing the cost of shield gas; U12 - reducing the cost of compressed air; U13 - reduction of defective products; U14 - increase in the rate of correction of defects in welds; U15 - reduction of metal waste when cutting blanks; U16 - ensuring quality blanks; U17 - reduction of the amount of waste wire when testing a broaching mechanism; U18 - welding control; U19 - optimization of the number of operating staff; U20 - optimization of the working time of operating staff; U21 - scheduled maintenance according to the schedule; U22 - optimization of the number of staff; U23 - optimization of staff time; U24 - reduced cost of lifting equipment; U25 - reduced cost of semi-automatic welding; U26 - reduced costs for the grinders; U27 - equipment shutdown monitoring during downtime; U28, U33 - control of the electrical circuit; U29 - fine adjustment of the flow of shield gas and gas cutoff; U30, U34 - control of gas supply equipment; U31, U36 - ensuring the correct operation of the torch cleaning program; U32, U35 - control of the pneumatic circuit; U37 - maintenance of the shop gates in the closed state during welding; U38 - ensuring the quality of the blanks; U39 - ensuring the correct operation of welding programs; U42 - elimination of coolant leaks; U43 - timely control of the quality of the welded product; U44 - provision of consumables for the correction of defects; U45 - reduction in the number of operating staff; U47 - reduction of the working time of operating staff; U48 - control over the effective use of the working time of operating staff; U49 - reduction in the number of staff; U51 - reduction of working time for staff; U52 - monitoring the efficient use of staff time; U53 - reduction of the energy consumption of lifting equipment when idle; U54 - elimination of intensive operation modes of lifting equipment; U55 - compliance with welding modes on semi-automatic machines; U56 - powering off semi-automatic machines during idle time; U57 - adherence to technology when working with grinders; U58 - shutdown of the grinders when idle; U59 - improving the qualifications of the staff of the billet workshop; U60 - control by the quality control department of the product before welding in robotic complexes; U61 - periodic inspection of tooling; U62 - check and adjustment of the welding torch geometry; U63 - periodic check of the guides; U64 - check of the flow of shield gas; U65 - check of the operation of the gas shutoff; U66 - check of the operation of the feed rollers; U67 - inlet shielding gas quality control; U68 - control of the shield gas pressure at the inlet of the robotic complex; U69 - the use of heaters on gas transmissions; U70 - quality control of the wire arriving for robotic complexes; U71 - control over the compliance of lubricants coming in for robotic systems with the requirements of documentation; U72 - quality control of the tips for welding torches; U73 - control of the quality of coolants; U74 - control of the serviceability of the cooling circuit of the torch of the robotic complex; K - the conjunction symbol; V - the disjunction symbol

Minimal sections of the action plan are defined to evaluate the probability of non-fulfillment of the plan. Minimal sections are classified by the number of elements they are composed of: one-, two-, three-element, etc. These sections correspond to the combinations of events U1, U2, …, U74 whose failure leads to the failure of the entire plan. The minimal sections for the action plan are presented below.

One-element sections:

S1 = (U1); S2 = (U2); S3 = (U3); S4 = (U4).

Two-element sections:

S5 = (U5, U6); S6 = (U15, U17); S7 = (U15, U18); S8 = (U16, U17); S9 = (U16, U18); S10 = (U13, U14); S11 = (U35, U43); S12 = (U38, U43); S13 = (U39, U43); S14 = (U40, U43); S15 = (U41, U43); S16 = (U42, U43); S17 = (U37, U43); S18 = (U36, U43); S19 = (U34, U43); S20 = (U33, U43); S21 = (U59, U43); S22 = (U60, U43); S23 = (U61, U43); S24 = (U67, U43); S25 = (U68, U43); S26 = (U69, U43); S27 = (U35, U44); S28 = (U38, U44); S29 = (U39, U44); S30 = (U40, U44); S31 = (U41, U44); S32 = (U42, U44); S33 = (U37, U44); S34 = (U36, U44); S35 = (U34, U44); S36 = (U33, U44); S37 = (U59, U44); S38 = (U60, U44); S39 = (U61, U44); S40 = (U67, U44); S41 = (U68, U44); S42 = (U69, U44).

Three-element sections:

S43 = (U7, U8, U9); S44 = (U43, U73, U74); S45 = (U44, U73, U74); S46 = (U10, U11, U12).

Four-element sections:

S47 = (U8, U9, U19, U20); S48 = (U10, U11, U31, U32); S49 = (U10, U12, U29, U30); S50 = (U11, U12, U27, U28); S51 = (U43, U70, U71, U72); S52 = (U8, U9, U45, U47); S53 = (U8, U9, U46, U47); S54 = (U8, U9, U45, U48); S55 = (U8, U9, U46, U48); S51 = (U44, U70, U71, U72).

Five-element sections:

S52 = (U7, U9, U21, U22, U23); S53 = (U7, U8, U24, U25, U26); S54 = (U7, U9, U21, U49, U51); S55 = (U62, U63, U64, U65, U66).

Six-element sections:

S56 = (U27, U28, U29, U30, U31, U32); S57 = (U7, U8, U25, U26, U53, U54); S58 = (U7, U8, U24, U26, U55, U56); S59 = (U7, U8, U24, U25, U57, U58).

For each section, a state graph is constructed. Figure 2 illustrates an example of a state graph for the three-element section S44 = (U43, U73, U74).

Fig. 2. A state graph for the three-element section S44 = (U43, U73, U74). «1» means the implementation of the relevant activity, «0» means its failure; λi is the frequency of the disturbances that impede the implementation of the activity; μj is the intensity of the actions to overcome them


We make the assumption that simultaneous repair of all failed equipment is provided and that the random processes of failures and recoveries have the Markov property. Thus, the task of minimizing the cost is reduced to finding overcoming actions with such intensities that the probability of failure to fulfill the plan is minimized.

In accordance with the state graph, a system of Kolmogorov-Chapman differential equations is compiled. The probability of non-fulfillment of the plan to minimize the cost of production is determined from the solution of this system. For the section S44 = (U43, U73, U74), the problem is reduced to choosing the intensities of the error recovery actions μi*(t), i = 1, 2, 3, at which, on the given time interval [t0, t1]:

$$\int_{t_0}^{t_1} P_{(0,0,0)}(\lambda_1, \lambda_2, \lambda_3, \mu_1, \mu_2, \mu_3, t)\,dt \to \min.$$

The limitations are formed by the system of Kolmogorov-Chapman differential equations:

$$
\begin{cases}
\dfrac{dP_{(1,1,1)}(t)}{dt} = -(\lambda_1+\lambda_2+\lambda_3)P_{(1,1,1)}(t) + \mu_1 P_{(0,1,1)}(t) + \mu_2 P_{(1,0,1)}(t) + \mu_3 P_{(1,1,0)}(t) \\
\dfrac{dP_{(0,1,1)}(t)}{dt} = \lambda_1 P_{(1,1,1)}(t) - (\mu_1+\lambda_2+\lambda_3)P_{(0,1,1)}(t) + \mu_2 P_{(0,0,1)}(t) + \mu_3 P_{(0,1,0)}(t) \\
\dfrac{dP_{(1,0,1)}(t)}{dt} = \lambda_2 P_{(1,1,1)}(t) - (\mu_2+\lambda_1+\lambda_3)P_{(1,0,1)}(t) + \mu_1 P_{(0,0,1)}(t) + \mu_3 P_{(1,0,0)}(t) \\
\dfrac{dP_{(1,1,0)}(t)}{dt} = \lambda_3 P_{(1,1,1)}(t) - (\mu_3+\lambda_1+\lambda_2)P_{(1,1,0)}(t) + \mu_1 P_{(0,1,0)}(t) + \mu_2 P_{(1,0,0)}(t) \\
\dfrac{dP_{(0,0,1)}(t)}{dt} = \lambda_2 P_{(0,1,1)}(t) + \lambda_1 P_{(1,0,1)}(t) - (\mu_1+\mu_2+\lambda_3)P_{(0,0,1)}(t) + \mu_3 P_{(0,0,0)}(t) \\
\dfrac{dP_{(0,1,0)}(t)}{dt} = \lambda_3 P_{(0,1,1)}(t) + \lambda_1 P_{(1,1,0)}(t) - (\mu_1+\mu_3+\lambda_2)P_{(0,1,0)}(t) + \mu_2 P_{(0,0,0)}(t) \\
\dfrac{dP_{(1,0,0)}(t)}{dt} = \lambda_3 P_{(1,0,1)}(t) + \lambda_2 P_{(1,1,0)}(t) - (\mu_2+\mu_3+\lambda_1)P_{(1,0,0)}(t) + \mu_1 P_{(0,0,0)}(t) \\
\dfrac{dP_{(0,0,0)}(t)}{dt} = \lambda_3 P_{(0,0,1)}(t) + \lambda_2 P_{(0,1,0)}(t) + \lambda_1 P_{(1,0,0)}(t) - (\mu_1+\mu_2+\mu_3)P_{(0,0,0)}(t)
\end{cases}
$$

where P(i,j,k) is the probability of the state in which activities U43, U73, U74 are implemented or not implemented, i, j, k ∈ {0, 1}; 1 means the activity is implemented, 0 means it is not. The initial conditions are:

P(1,1,1)(0) = 1; P(0,1,1)(0) = P(1,0,1)(0) = P(1,1,0)(0) = P(0,0,1)(0) = P(0,1,0)(0) = P(1,0,0)(0) = P(0,0,0)(0) = 0.

In view of the large dimension of the obtained system of differential equations, it is difficult to obtain an analytical solution, so the system is solved numerically. Using the exact values of the numerical coefficients λi, μj, i, j = 1, 2, 3 on a preset time interval, we can calculate the probability of failure to fulfill the plan due to the combination of events corresponding to the section S44 = (U43, U73, U74). Thus, by solving this system of differential equations for all minimal sections at different points in time, we can estimate the probability of a combination of critical events that would not allow minimizing the cost.
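The Kolmogorov-Chapman system above can be integrated numerically in a few lines. The sketch below is our illustration, not the authors' implementation: it uses plain forward Euler steps and the rate values quoted later in Tables 1–2 for the action combination A1-B-C1; the paper does not specify its numerical scheme or time horizon.

```python
# Sketch: forward-Euler integration of the Kolmogorov-Chapman system for S44.
# Assumed rates, taken from Tables 1-2 (combination A1-B-C1 of recovery actions).

lam = (0.056, 0.021, 0.008)   # lambda_1..lambda_3: disturbance frequencies, 1/h
mu = (1.02, 2.92, 0.87)       # mu_1..mu_3: recovery intensities, 1/h

# States are triples (i, j, k): 1 = activity implemented, 0 = failed.
states = [(i, j, k) for i in (0, 1) for j in (0, 1) for k in (0, 1)]

def rhs(p):
    """Right-hand side of the Kolmogorov-Chapman equations."""
    dp = dict.fromkeys(states, 0.0)
    for s in states:
        for n in range(3):                         # activity n can flip its bit
            t = list(s); t[n] ^= 1; t = tuple(t)
            rate = lam[n] if s[n] == 1 else mu[n]  # 1->0 failure, 0->1 recovery
            dp[s] -= rate * p[s]                   # outflow from state s
            dp[t] += rate * p[s]                   # inflow into the flipped state
    return dp

def integrate(t_end=100.0, h=0.01):
    p = dict.fromkeys(states, 0.0)
    p[(1, 1, 1)] = 1.0                             # initial condition P(1,1,1)(0) = 1
    for _ in range(int(t_end / h)):
        dp = rhs(p)
        for s in states:
            p[s] += h * dp[s]
    return p

p = integrate()
print(p[(0, 0, 0)])       # probability that the whole section has failed
print(sum(p.values()))    # total probability, should remain ~1
```

The stationary value obtained this way is only illustrative of the mechanics; the paper integrates P(0,0,0) over a finite interval [t0, t1], and the values reported in Table 3 depend on that setup.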


4 An Example of the Application of the Model

Let us determine the values of the numerical coefficients λi, μj, i, j = 1, 2, 3 used in calculating the probability P(0,0,0). Based on RTC operating experience with Kawasaki FA-10L manipulators, C40 controllers and the related Fronius welding equipment, the disturbances that hinder the implementation of the cost minimization plan were identified. These disturbances are listed in Table 1.

Table 1. The disturbances and their frequencies (hours⁻¹)

- Excessive current in the VR-1500 wire feed unit due to wire feed difficulty: λ1 = 0.056
- Tracking error of the programmed torch path due to exceeding the maximum deviation of the arc current from the threshold value: λ2 = 0.021
- Unevenness of the weld due to variable wire feed speed: λ3 = 0.008

Actions to address these disturbances and the corresponding intensities are presented in Table 2.

Table 2. The disturbance recovery actions for the implementation of the cost minimization plan (intensities in hours⁻¹)

- A1: Align the cable assemblies, check the guide channel for bending or fouling, check the pressure of the rollers of the wire feed unit: μ1(1) = 1.02
- A2: Replace the feed unit motor and the pressure rollers: μ1(2) = 3.06
- B: Check the torch geometry, check the correct location of the workpart; if necessary, adjust the current threshold: μ2 = 2.92
- C1: Replace the wire feed unit VR-1500: μ3(1) = 0.87
- C2: Release the brake, replace the contact tip, check the guide channel for bending and contamination: μ3(2) = 2.71

From Table 2 it can be seen that the disturbances can be eliminated in different ways with different recovery intensities. We then estimate the probability of failure of the plan under various combinations of recovery actions and choose the combination that ensures the minimum value of the probability of failure of the plan. The results of the calculations are given in Table 3.

Table 3. Choosing the sequence of recovery actions

- A1BC1 (μ1(1) = 1.02, μ2 = 2.92, μ3(1) = 0.87): P(0,0,0) = 0.343
- A1BC2 (μ1(1) = 1.02, μ2 = 2.92, μ3(2) = 2.71): P(0,0,0) = 0.234
- A2BC1 (μ1(2) = 3.06, μ2 = 2.92, μ3(1) = 0.87): P(0,0,0) = 0.31
- A2BC2 (μ1(2) = 3.06, μ2 = 2.92, μ3(2) = 2.71): P(0,0,0) = 0.35


As follows from Table 3, the minimum probability of a critical combination leading to a failure of the activities plan is achieved by performing the following sequence of actions:

1. Align the cable assemblies, check the guide channel for bending or fouling, check the pressure of the rollers of the wire feed unit.
2. Check the torch geometry, check the correct location of the workpart. If necessary, adjust the current threshold.
3. Release the brake, replace the contact tip, check the guide channel for bending and contamination.

Using the values of the coefficients λi, μj, we can calculate the probability of failure to fulfill the cost minimization plan for the minimal section S44 = (U43, U73, U74). Similarly, the probability of failure of the plan was calculated for all other sections. The results of the calculations are shown in Fig. 3. As shown in Fig. 3, at the moment t1 the highest probability (0.57) corresponds to the minimal section S18 (non-fulfillment of events U36 «Ensuring the correct operation of the torch cleaning program» and U43 «Timely control of the quality of the welded product»), and at the moment t2 to the minimal section S29 (non-fulfillment of events U39 «Ensuring the correct operation of welding programs» and U44 «Reduction in the number of operating staff»).

Fig. 3. Changes in the probability of failure of the cost minimization plan due to the emergence of critical combinations of events over time (the numbers correspond to the numbers of the minimal sections)

5 Software and Hardware

The developed mathematical models and algorithms are intended to be implemented as part of the complex of technical means for controlling the Kawasaki robotic welding complex. A typical arc welding RTC consists of synchronously working manipulators equipped with welding equipment (power source, wire feed unit, cooling unit, welding torch). The complex is equipped with safety devices (fencing, emergency stop buttons, photocell barriers). Management and operational control of the complex are carried out by the operator through a portable remote control connected to the controller. The structural scheme of the interaction of the developed mathematical software with the complex of technical means for Kawasaki RTC control is shown in Fig. 4.

Fig. 4. The introduction of the developed software into the complex of technical means of the industrial enterprise: TPS5000 is the power source Fronius TransPulseSynergic 5000; C40 is the Kawasaki C40 series controller; FA-10L is the Kawasaki FA-10L robot manipulator; 1GA is the central control unit of the controller; 1HP is the control unit of the servo drives; FC40 is the multifunctional operator console; 1GB is the control unit of the motors of the axes of the manipulator; Rob4000 is the interface for communication with the welding equipment; the workstations belong to the CEO, COO, CTO, chief technologist, chief mechanical engineer, head of the quality department and the operator; RCView is the integrated software interface Kawasaki RCView; DB is the database; IdentEm is the emergency identification module; ActionsGen is the module for generating control actions; EstimateP is the module for estimating the probability of failure of the plan; CostCalc is the module for calculating the cost of production


The procedure for realizing the described algorithm at different time intervals is explained in Table 4.

Table 4. The procedure of solving the problem of minimizing the production cost

1. Every hour: Analysis of the parameters of the RTC. Obtaining information on failures and deviations in welding. In case of a risk of a critical combination of events, the information is brought to the attention of the operator and is entered into the database and the shift log.

2. Every day: Analysis of the implementation of the plan of activities to reduce production cost. Calculation and recording into the database of the values of the coefficients λi, μj. Information about the causes of disturbances and how to eliminate them is issued by the dispatching staff and is entered into the database. If necessary, the activities of the cost reduction plan are corrected, recommendations are given to the operating staff, and control actions are implemented and entered into the database.

3. Every week: Receiving information about all disturbances for the week and the implemented activities of the plan. Construction of the minimal sections of the graph of the plan of activities. Estimation of the probability of fulfillment of the plan by solving the system of Kolmogorov-Chapman differential equations. If necessary, correction of the mathematical model parameters is carried out.

4. Every month: Based on the analysis of the accumulated information about the control actions implemented during the month, an expert estimation of the economic effect of solving the problem is carried out. If the expected level of the economic effect is not achieved, the plan of activities aimed at minimizing the production cost, as well as the parameters of the used mathematical model, is corrected. An action plan to minimize the cost of production for the next month is developed.

The reduction in the cost of one unit of production through the introduction of the developed software can reach 15%. The calculated data are shown in Fig. 5. The calculations were based on statistical data from industrial enterprises operating the RTC.


Fig. 5. Estimated economic effect from the introduction of the developed software

6 Conclusions

The article considered the solution of the problem of controlling the welding process in robotic technological complexes according to the criterion of production cost. The suggested models and algorithms will make it possible to significantly reduce the production cost and to improve the efficiency of welding via RTC. Implementation of the developed mathematical foundation is planned to be carried out at the structural divisions of JSC "Transmash" (Engels, Russia) using the methods of [4].

References

1. Bartenev, V., Jakun, S., Al'-Ezzi, A.: News of the Samara Scientific Center of the Russian Academy of Sciences 13(4), 288–293 (2011)
2. Dille, M., Grocholsky, B., Singh, S.: Field and Service Robotics. Springer Tracts in Advanced Robotics, vol. 62, pp. 183–193 (2010)
3. Filaretov, V., Zuev, A., Gubankov, A., Procenko, A., Yukhimets, D.: In: Proceedings of the 2016 International Conference on Computer, Control, Informatics and its Applications (IC3INA), pp. 158–162 (2016)
4. Fominykh, D., Rezchikov, A., Kushnikov, V., Ivashchenko, V., Bogomolov, A., Filimonyuk, L., Dolinina, O., Kushnikov, O., Shulga, T., Tverdokhlebov, V.: J. Phys. Conf. Ser. 1015, 032169 (2018)

String Matching in Case of Periodicity in the Pattern

Armen Kostanyan and Ani Karapetyan

IT Educational and Research Center, Yerevan State University, Yerevan, Armenia
[email protected], [email protected]

Abstract. The string matching problem is an extensively studied computational problem with applications in many areas such as text editors, compilers, data compression, plagiarism detection, pattern recognition, etc. Dozens of algorithms have been designed to solve it. The finite automata method and the Knuth-Morris-Pratt and Boyer-Moore algorithms are of particular importance among them. The string matching in these algorithms is preceded by pattern preprocessing, during which some structure supporting the string matching process is created, such as the finite automaton in the finite automata method and the prefix function table in the Knuth-Morris-Pratt algorithm. Periodicity in the pattern implies regularity in the supporting structure, which can be used to reduce the preprocessing time. In this paper, the problem of making the preprocessing phase more efficient in the case of a periodic pattern is investigated for the finite automata method and the Knuth-Morris-Pratt algorithm. It is proved that the construction of the supporting structure for the entire pattern in these cases can be reduced to its construction merely for the period of that pattern without affecting the processing of the text.

Keywords: String matching algorithm · Finite automata method · Knuth-Morris-Pratt · Periodic pattern

1 Introduction

The string matching problem (the problem of finding all positions of a specific pattern in a string) is a classical computational problem extensively studied since the 1960s. Two main approaches can be distinguished among the great number of algorithms that have been designed to solve this problem, namely the "window shifting" ([1, 2]) and the bit-parallel processing ([3, 4]). One can find a detailed summary and comparative analysis of these algorithms in [5]. The periodicity in patterns has been used to design better matching algorithms ([6, 7]). The use of periodic patterns is especially relevant in such advanced areas as video summarization, pattern mining in a sequence of events, molecular biology, etc. ([8, 9]).

In this paper, we consider two classical string matching algorithms, namely the finite automata (FA) method and the Knuth-Morris-Pratt (KMP) algorithm. Both of them use a pattern-preprocessing phase at which a structure supporting the string processing is constructed. In the FA method, the supporting structure is represented by means of the string matching automaton, while in the KMP algorithm it is represented by the prefix function array.

The periodic patterns we consider are the ones that can be represented as a repetition of a substring (called a period) one or more times, followed by a prefix of the period. This notion of a periodic pattern was introduced in [6] to create a linear-time modification of the Boyer-Moore algorithm.

Our investigation focuses on improving the preprocessing phase of both the FA method and the KMP algorithm for periodic patterns. We show that periodicity in a pattern makes it sufficient to perform the preprocessing only for the period of the pattern, with its subsequent use in text processing by means of appropriate references to the preprocessing results. As a result, a preprocessing phase that is linear in the length of the period is obtained.

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 61–66, 2019. https://doi.org/10.1007/978-3-030-12072-6_6

2 The String Matching Problem

The string matching problem is formulated as follows ([10, 11]). We are given a text T[1..n] of length n and a pattern P[1..m] of length m (n > m). It is assumed that the elements of P and T are characters drawn from a finite alphabet Σ.

We say that the pattern P occurs with shift s in the text T if 0 ≤ s ≤ n − m and T[s + 1..s + m] = P[1..m] (i.e., T[s + j] = P[j] for 1 ≤ j ≤ m). If P occurs with shift s in T, then s is said to be a valid shift; otherwise it is said to be an invalid shift. The string matching problem is the problem of finding all valid shifts with which P occurs in T. An overview of solutions to this problem according to the FA method and the KMP algorithm is presented below.
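As a baseline for the algorithms that follow, the definition of a valid shift can be transcribed directly; this quadratic-time check is our illustration, not part of the paper (note that it uses 0-based indexing, whereas the paper is 1-based).

```python
# Direct transcription of the definition: s is a valid shift iff the substring
# of T starting at s and of length m equals P (0-based indexing here).

def valid_shifts(T, P):
    n, m = len(T), len(P)
    return [s for s in range(n - m + 1) if T[s:s + m] == P]

print(valid_shifts("abcabcab", "abc"))   # -> [0, 3]
```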

3 The Finite Automata Method Let us give a brief description of the FA method according to [12, 13]. The FA method is based on the suffix function, which is defined as follows. Given a pattern P[1..m] over an alphabet R, the suffix function for P is defined as a mapping r : R ! f0; 1; . . .; mg such that rð xÞ ¼ maxfkj0  k  m; Pk is a suffix of xg; where Pk denotes P[1..k]. P The pattern P[1..m] implies a finite automaton MP ¼ ðQ; q0 ; F; ; dÞ such that • • • •

Q ¼ f0; 1; . . .; mg is the set of states; q0 ¼ 0 is the initial state; F ¼ fmg is the one-element set of final states;   P d :Q  ! Q is the transition function such that dðq; aÞ ¼ r Pq a for all q 2 Q and a 2 R.

Claim: Pattern P[1..m] occurs with shift s in text T[1..n] ⟺ MP accepts the string T[1..s + m].


With the use of the prefix function (see below), MP can be constructed in O(|Σ|·m) time, which results in O(n + |Σ|·m) total time for string matching with a finite automaton. Simon noticed that the main disadvantage of this method is the large size of the matching automaton. He suggested an improvement of the automaton construction algorithm by showing that there can be at most 2m transitions leading to a non-zero state ([14]).
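The automaton of Section 3 can also be built straight from the suffix-function definition. The sketch below is our illustration of that naive construction (roughly O(m³·|Σ|) time, i.e. neither the prefix-function-based construction nor Simon's improvement); it is useful for seeing how δ(q, a) = σ(Pq a) drives the matching.

```python
# Naive sketch of the string matching automaton: delta(q, a) = sigma(P_q a),
# with sigma computed directly from its definition.

def build_automaton(P, alphabet):
    m = len(P)
    def sigma(x):
        # sigma(x) = largest k such that P_k = P[:k] is a suffix of x
        k = min(m, len(x))
        while k > 0 and not x.endswith(P[:k]):
            k -= 1
        return k
    # delta[q][a] for every state q in 0..m and character a
    return [{a: sigma(P[:q] + a) for a in alphabet} for q in range(m + 1)]

def fa_match(T, P, alphabet):
    delta, q, shifts = build_automaton(P, alphabet), 0, []
    for i, c in enumerate(T):
        q = delta[q][c]
        if q == len(P):                  # final state: an occurrence ends at T[i]
            shifts.append(i - len(P) + 1)
    return shifts

print(fa_match("abcabcab", "abc", "abc"))   # -> [0, 3]
```

Text processing itself is one table lookup per character, so it runs in O(n) time once the automaton is built.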

4 The Knuth-Morris-Pratt Algorithm

Given a string x, define a border of x to be a proper prefix of x which is also a suffix of x. Let LB(x) be the longest border of x. Unlike the FA method, the KMP algorithm [2] is based on the prefix function, defined as a mapping $\pi : \{1, \ldots, m\} \to \{0, 1, \ldots, m - 1\}$ such that

$\pi(q) = |LB(P_q)|,$

where $P_k$ denotes P[1..k]. That is, $\pi(q)$ is the length of the longest proper prefix of $P_q$ that is also a suffix of it. It is assumed in the KMP algorithm that if $P_q$ is the longest prefix of the pattern P currently matched, then the reasonable shift of P to continue the search is $q - \pi[q]$. The following correlation exists between the prefix function in the KMP algorithm and the transition function in the FA method (see [10], p. 1002):

$$\delta(q, c) = \begin{cases} 0, & \text{if } q = 0 \text{ and } P[1] \ne c \\ q + 1, & \text{if } q \ne m \text{ and } P[q + 1] = c \\ \delta(\pi[q], c), & \text{otherwise} \end{cases} \qquad (1)$$

The preprocessing in the KMP algorithm is reduced to the construction of a one-dimensional array of length m representing the prefix function, which takes $\Theta(m)$ time. The processing using the prefix array takes $\Theta(n)$ time.
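The two-phase scheme above can be sketched as follows (a standard 0-based implementation of the prefix array and the KMP scan it supports; the paper's notation is 1-based):

```python
def prefix_function(pattern):
    """pi[i] = length of the longest proper prefix of pattern[:i+1] that is also its suffix."""
    m = len(pattern)
    pi = [0] * m
    k = 0
    for i in range(1, m):
        while k > 0 and pattern[k] != pattern[i]:
            k = pi[k - 1]          # fall back to the next-shorter border
        if pattern[k] == pattern[i]:
            k += 1
        pi[i] = k
    return pi

def kmp_search(text, pattern):
    """Return all shifts where pattern occurs in text, in Theta(n + m) total time."""
    pi = prefix_function(pattern)
    shifts, q = [], 0
    for i, c in enumerate(text):
        while q > 0 and pattern[q] != c:
            q = pi[q - 1]
        if pattern[q] == c:
            q += 1
        if q == len(pattern):
            shifts.append(i - len(pattern) + 1)
            q = pi[q - 1]          # continue searching for overlapping occurrences
    return shifts
```

The amortized argument for the Θ(m) and Θ(n) bounds is the usual one: each fall-back in the while loop decreases k (or q), which only ever grows by one per iteration.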

5 String Matching and Periodicity

In defining the notion of a periodic pattern, we adopt the definition given in [6, 7]. Given a string w, we define a period of w to be a non-empty string u such that $w = u^r u'$ $(r \ge 1)$, where $u'$ is a prefix of u. We say that w is k-periodic if u is the shortest period of w, |u| = k and $r \ge 2$. For example, the string $w = ab^2ab^2ab$ is a 3-periodic string that also has a period of size 6.

Periodicity lemma ([11]): If w has periods of sizes p and q such that $p + q \le |w|$, then it also has a period of size gcd(p, q).

Using the KMP prefix array, one can find the shortest period of a string in linear time. A much better O(log log n)-time parallel algorithm for the same problem was suggested by Apostolico et al. in [15].
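The linear-time shortest-period computation via the prefix array can be sketched as follows: with the period definition adopted above (w = uʳu′, u′ a prefix of u), the shortest period length is |w| minus the length of the longest border of w.

```python
def shortest_period(w):
    """Length of the shortest period of w (period in the sense w = u^r u', u' a prefix of u)."""
    m = len(w)
    pi = [0] * m          # standard KMP prefix array, 0-based
    k = 0
    for i in range(1, m):
        while k > 0 and w[k] != w[i]:
            k = pi[k - 1]
        if w[k] == w[i]:
            k += 1
        pi[i] = k
    # the longest border has length pi[m-1]; the shortest period is what remains
    return m - pi[m - 1]
```

For the example string above, w = "abbabbab" (that is, ab²ab²ab), this returns 3, matching its 3-periodicity; a string with no shorter period, such as "abcd", returns its own length.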


A. Kostanyan and A. Karapetyan

Some important results in string matching using periodic patterns are listed below. Galil suggested an improvement of the Boyer-Moore algorithm that applies a better shift rule for periodic patterns; the suggested algorithm is linear in the worst case [6]. The Galil-Seiferas algorithm modifies Galil's algorithm by decomposing the pattern into three parts so as to obtain a periodic middle part [7]. Galil's algorithm is used in the modified algorithm to match the periodic middle part, whereas the naive method is applied to match the remaining parts.

6 FA Method and KMP Algorithm Applied to Periodic Patterns

Lemma 1: If P is a periodic pattern with shortest period a such that $P = a^q a'$ $(q \ge 1)$, then $LB(P) = a^{q-1} a'$.

The proof directly follows from Lemma 3 in [2] (p. 336).

Lemma 2: If P is a periodic pattern with shortest period a such that $P = a^q a'$ $(q > 1)$ and $P_1 = a^t a_1$ $(1 < t \le q)$ is a prefix of P, then a is the shortest period of $P_1$.

Proof: Clearly a is a period of $P_1$. Suppose $P_1$ has a period shorter than a. According to the Periodicity lemma and since t > 1, $P_1$ should also have a period c such that |c| is a proper divisor of |a|. Therefore, c would be a period of P shorter than a, in contradiction to our assumption. ☐

Note that the condition t > 1 is necessary. Suppose, for example, $P = a^2$, $P_1 = a a_1$, where $a = a^2ba^3b$ and $a_1 = a^2$. We see that a is the shortest period of P but is not the shortest period of $P_1$, as the latter has a shorter period $a^2ba$.

Based on Lemmas 1–2, one can reduce the preprocessing time for a periodic pattern by processing only its shortest period and extending the obtained results to the entire pattern. Theorems 1 and 2 below clarify this approach as applied to the FA method and the KMP algorithm.

Theorem 1: Given a k-periodic pattern P, the components of the prefix array in the KMP algorithm for all $i \in [2k, m]$ can be computed as $\pi[i] = i - k$.

Corollary: The KMP algorithm applied to a k-periodic pattern can perform the preprocessing phase in O(k) time by computing and storing the values of only the first 2k − 1 components of the prefix array.

Theorem 2: Given a k-periodic pattern P, the transitions of the matching automaton in the FA method for all $q \in [2k, m]$ can be computed as

$$\delta(q, c) = \begin{cases} q + 1, & \text{if } q \ne m \text{ and } P[i + 1] = c \\ m - k + 1, & \text{if } q = m \text{ and } P[i + 1] = c \\ \delta(i + k, c), & \text{if } P[i + 1] \ne c \end{cases} \qquad (2)$$

where i = q mod k.



Proof: The first statement directly follows from the definition of the matching automaton. Then, we see from Lemma 1 and the correlation (1) between the prefix and transition functions that $\delta(m, c) = \delta(|LB(P)|, c) = \delta(m - k, c) = m - k + 1$. Suppose $P_q = a^t a_1$, $|a| = k$. It follows from Lemmas 1–2 that $P[i + 1] \ne c$ implies

$\delta(q, c) = \delta(|LB(P_q)|, c) = \delta(|a^{t-1} a_1|, c) = \delta(|a^{t-2} a_1|, c) = \ldots = \delta(|a a_1|, c) = \delta(k + q \bmod k, c).$ ☐

Note that the third statement cannot be further simplified to $\delta(q, c) = \delta(i, c)$. Indeed, if $P = a^3 a'$ is a 7-periodic string with $a = a^2ba^3b$ and $a' = a^2b$, then δ(16, 'a') = δ(9, 'a') = 6, whereas δ(2, 'a') = 2.

Corollary: The FA method applied to a k-periodic pattern can perform the preprocessing phase in $O(k \cdot |\Sigma|)$ time by computing and storing the values of the transition function only for the first 2k states.
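Theorem 1 and its corollary can be sketched as follows: compute the prefix array explicitly only for the first 2k − 1 positions, then read the remaining components off the formula π[i] = i − k (shown here with 0-based arrays, so the rule becomes pi[i] = i + 1 − k once the 1-based index i + 1 reaches 2k):

```python
def periodic_prefix_array(pattern, k):
    """Prefix array of a k-periodic pattern with only O(k) explicit computation."""
    m = len(pattern)
    pi = [0] * m
    j = 0
    for i in range(1, 2 * k - 1):      # explicit KMP preprocessing for the first 2k - 1 entries
        while j > 0 and pattern[j] != pattern[i]:
            j = pi[j - 1]
        if pattern[j] == pattern[i]:
            j += 1
        pi[i] = j
    for i in range(2 * k - 1, m):      # Theorem 1: pi at 1-based index i + 1 equals (i + 1) - k
        pi[i] = i + 1 - k
    return pi
```

For the 3-periodic string "abbabbab" this agrees entry-by-entry with the full Θ(m) prefix-array computation, while doing explicit work only on the first 2k − 1 = 5 positions.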

7 Conclusion

In this paper, the optimization of the preprocessing phase for periodic patterns has been investigated for the finite automata method and the Knuth-Morris-Pratt algorithm. It is shown for both cases that it suffices to construct only an initial part of the supporting structure (that is, of the matching automaton in the finite automata method and of the prefix array in the Knuth-Morris-Pratt algorithm) and to refer to that part to determine the missing components.

Acknowledgements. This work was supported by the Ministry of Education and Science of the Republic of Armenia, project N 18T-1B341.

References

1. Boyer, R.S., Moore, J.S.: A fast string searching algorithm. Commun. ACM 20(10), 762–772 (1977)
2. Knuth, D.E., Morris, J.H., Pratt, V.R.: Fast pattern matching in strings. SIAM J. Comput. 6(2), 323–350 (1977)
3. Baeza-Yates, R., Gonnet, G.: A new approach to text searching. In: ACM SIGIR Forum, vol. 23, no. SI, pp. 168–175. ACM, New York (1992)
4. Navarro, G., Raffinot, M.: A bit-parallel approach to suffix automata: fast extended string matching. In: 9th Annual Symposium on Combinatorial Pattern Matching, pp. 14–33. Springer, Heidelberg (1998)
5. Al-Khamaiseh, K., Al-Shagarin, S.: A survey of string matching algorithms. Int. J. Eng. Res. Appl. 4(7), 144–156 (2014)
6. Galil, Z.: On improving the worst case running time of the Boyer-Moore string matching algorithm. In: 5th International Colloquium on Automata, Languages and Programming, pp. 241–250. ACM, Italy (1978)



7. Galil, Z., Seiferas, J.: Time-space optimal string matching. J. Comput. Syst. Sci. 26(3), 280–294 (1981)
8. Huang, K., Chang, C.: Mining periodic patterns in sequence data. In: Data Warehousing and Knowledge Discovery, 6th International Conference, LNCS, vol. 3181, pp. 401–410. Springer, Spain (2004)
9. Amir, A., Benson, G.: Detecting multiple periods and periodic patterns in event time sequences. In: Conference on Information and Knowledge Management, pp. 617–626. ACM, Singapore (2017)
10. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms, 3rd edn. The MIT Press, USA (2009)
11. Smyth, W.F.: Computing Patterns in Strings, 1st edn. Addison-Wesley, UK (2003)
12. Aho, A., Hopcroft, J., Ullman, J.: The Design and Analysis of Computer Algorithms, 1st edn. Addison-Wesley, UK (1974)
13. Kostanyan, A.: Fuzzy string matching with finite automata. In: Proceedings of the 2017 IEEE Conference CSIT-2017, Yerevan, Armenia, pp. 25–29. IEEE Press, USA (2018)
14. Simon, I.: String matching algorithms and automata. In: Results and Trends in Theoretical Computer Science, pp. 386–395 (2005)
15. Apostolico, A., Breslauer, D., Galil, Z.: Optimal parallel algorithms for periods, palindromes and squares. In: 19th International Colloquium on Automata, Languages and Programming, pp. 296–307. Springer, Berlin (1992)

High Generalization Capability Artificial Neural Network Architecture Based on RBF-Network

Mikhail Abrosimov and Alexander Brovko

Yuri Gagarin State Technical University of Saratov, Saratov, Russia
{destinywatcher,brovkoav}@gmail.com

Abstract. This paper addresses the issue of error-level fluctuations caused by training set shrinking in RBF-networks. An architecture of an artificial neural network (ANN) based on the RBF-network is presented together with a learning algorithm to train it. The presented architecture is multi-layer, unlike the original RBF-network, and thus has potential in deep learning. Numeric results lead to the conclusion that error-level fluctuations are significantly lower for the presented architecture compared to the RBF-network in the case of training set shrinking. This demonstrates a greater generalization ability of the presented architecture. The paper contains an application of the ANN to the task of restoring the dielectric parameters of an object placed in a waveguide.

Keywords: Artificial neural network · RBF neural network · Neural network learning algorithm

1 Introduction

ANNs appear in a variety of architectures suited for specific task groups. For example, RBF-networks are suited for function approximation, while convolutional ANNs are suited for image recognition. Both convolutional and RBF architectures are inherently ANNs, and both have common and specific features in their structures and training algorithms.

1.1 Feature Maps

Two architectures are described and compared below: the convolutional neural network and the RBF-network. The convolutional neural network is an ANN architecture based on Y. LeCun's work [1, 2] related to the research of the visual cortex, which describes shared synaptic weights as a model of signal processing similar to a matrix convolution operation (Fig. 1). This processing method's output consists of multiple feature maps that are smaller than the input matrix. The convolution step might exceed 1 position. After convolution of the input signals, the output feature maps go through a pooling operation. The resulting matrices shrink multiple times in size compared to the input feature

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 67–78, 2019. https://doi.org/10.1007/978-3-030-12072-6_7



Fig. 1. Convolutional neural network.

maps. Each element of the pooling matrix is usually calculated as either the average or the maximum (in absolute value) of the neighboring elements in the input. Pooling results might themselves be processed by another pair of convolution and pooling layers, and so on. The main advantages of this processing method are the following:

• image topology influences the feature maps;
• data volume reduction, which leads to a reduction in the number of synaptic weights and simplified calculations.

Each distinct kernel in a convolution layer represents a certain image feature and provides the possibility to build a map for that feature. Each element of a convolution layer's output represents the similarity level of an input region to the feature pattern kernel that was used. A convolutional ANN is displayed in Fig. 1. In this example it consists of two convolutional layers, two pooling layers, one hidden fully connected nonlinear perceptron layer, and one nonlinear output perceptron layer. Convolutional layers, pooling layers and hidden nonlinear perceptron layers can exceed the amount shown in Fig. 1.

Radial Basis Function Neural Networks (RBF-networks) are applied to a broad variety of tasks, including the determination of dielectric sample parameters in a closed electromagnetic system [3, 4]. There are issues with this RBF-network application; for example, the training set is required to have significant size in order to be consistent [5]. RBF-network error-level fluctuations are observed in the case of shrinking of the training set.

RBF-networks are distinct from most ANN architectures in the pre-processing procedure that runs on input vectors. The input vector is compared to every vector in the RBF-basis vector set (Fig. 2). Further processing runs on the vector of distances between the input vector and each of the RBF-basis vectors, which is based on Powell's function approximation theory [5, 6]. Each distance defines the similarity between the input vector and a specific RBF-basis vector, which can be compared to feature extraction.
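The convolution and pooling operations described above can be illustrated with a minimal numpy sketch (plain max pooling is used here rather than max-in-absolute-value, for brevity; this is an illustration, not the paper's implementation):

```python
import numpy as np

def conv2d(image, kernel, step=1):
    """Valid cross-correlation producing one feature map; step may exceed 1, as noted above."""
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // step + 1
    out_w = (image.shape[1] - kw) // step + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # each output element measures the match of one input region against the kernel
            out[i, j] = np.sum(image[i*step:i*step+kh, j*step:j*step+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Each pooling element is the maximum over a size x size neighborhood."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))
```

Both operations shrink the data: the feature map is smaller than the input, and pooling reduces it again, which is exactly the weight-count reduction listed among the advantages above.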



Fig. 2. RBF-network.

Cluster centroids, which can be calculated with the k-means algorithm (or any other clustering algorithm) on the training set, are often used as RBF bases. In that case the RBF-function parameters are defined by the cluster parameters. The RBF-layer output vector is calculated according to the following formula:

$y^1_i = f(d(x, c_i), \mu_i), \quad y^1_0 = 1, \quad i = 1, \ldots, k \qquad (1)$

where x is the input vector, k is the RBF-layer neuron count, $c_i$ is the RBF basis for RBF-layer neuron i, $\mu_i$ is the function parameter for neuron i, d is the function comparing the input vector to the RBF bases, and f is the activation function.

The output layer of the RBF-network usually calculates linear combinations of the RBF-layer's output according to the following formula:

$y^2 = g(y^1 W) \qquad (2)$

where W is the weight coefficient matrix and g is a linear transformation function.

The RBF-network training process includes the determination of the RBF bases and the RBF-function parameters for each neuron of the RBF-layer. During input vector processing with the RBF-layer, information loss is possible: reverse input vector calculation is not always possible for an n-dimensional vector given only the distances to k n-dimensional RBF bases as the available data [7].



Taking into consideration the Gaussian function, which is usually used in RBF-layers, formula (1) can be represented as:

$y^1_i = e^{-\frac{d(x, c_i)}{\mu_i^2}}, \quad y^1_0 = 1, \quad i = 1, \ldots, k \qquad (3)$
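Formulas (1)–(3) can be sketched in numpy as follows. This is an illustrative assumption-laden sketch: the Euclidean distance is used for d, the Gaussian form of formula (3) for the activation, and the identity for the linear output function g.

```python
import numpy as np

def rbf_layer(x, centers, mu):
    """Formula (3): y_i = exp(-d(x, c_i) / mu_i^2), with the constant output y_0 = 1 prepended."""
    d = np.linalg.norm(centers - x, axis=1)   # d(x, c_i): Euclidean distance to each RBF basis
    y = np.exp(-d / mu**2)
    return np.concatenate(([1.0], y))

def rbf_network(x, centers, mu, W):
    """Formula (2): linear output layer y2 = g(y1 W), with g taken as the identity here."""
    return rbf_layer(x, centers, mu) @ W
```

Note how the RBF-layer output is a vector of similarity scores against the bases, which is the "feature mapping" interpretation used in the following paragraph.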

If the RBF-network calculation process is generalized, an input vector is compared to the array of RBF bases, and then the comparison result is processed by a nonlinear function with specific parameters for each RBF basis.

Feedforward ANNs in general take an input vector with a pre-determined dimension count. In the case of a significant dimension count of the input vector, both the training of a fully connected ANN and the computation of its output grow in calculation difficulty. The convolutional neural network architecture is suited for massively multidimensional input vector processing, which makes it capable of image processing. This is possible due to the non-fully-connected convolutional layers that produce feature maps from the input. Feature maps have significantly fewer dimensions than the input vector. According to formula (3), the RBF-layer estimates the similarity between the input vector and the RBF bases, which can be described as feature mapping. This conclusion leads to the possibility of building an architecture similar to the convolutional neural network based on RBF-layers.

1.2 Existing Solutions and Basis

Linear optimization methods are proven to be useful in RBF-network training, as is gradient descent, which was described in [8]. This means that learning algorithms for RBF-networks are technically compatible with the algorithms for learning convolutional neural networks and the multi-layer perceptron.

An implementation of a multi-layer RBF-network as a sequence of RBF-networks, analogous to the relation of the multi-layer perceptron (MLP) to the single-layer perceptron, is described in [9]. This type of architecture requires training of multiple RBF-layers, since every MRBF layer contains a distinct RBF-layer. The advantages of this MRBF-network architecture compared to the RBF-network are higher accuracy, close to that of an MLP ANN, with higher suitability for function approximation compared to the MLP [9].

An MRBF-network architecture with multiple sequentially connected RBF-layers shows considerable accuracy growth compared to the RBF-network because of its higher nonlinearity. This architecture has training issues because of the multiple connected RBF-layers. The use of nonlinear RBF-functions only creates limitations on the architecture's training algorithms [10].

The solutions noted above display significant metrics growth for multi-layer ANNs based on the RBF-network compared to the RBF-network. The possibility of adapting the back-propagation learning algorithm to the RBF-network and to multi-layer networks based on RBF-networks allows using gradient descent in the structure of the learning algorithm for the ANN architecture proposed in this paper.



2 Method and Implementation

2.1 Architecture Description

The ANN architecture proposed in this paper is a multi-layer feedforward ANN with an RBF-layer used as the first layer. The first layer's outputs are defined by the distances from the input vector to the RBF bases, the nonlinear RBF-function and the RBF-basis-related function parameters. The layers from the second to the output layer are all nonlinear perceptron layers connected as in an MLP ANN (Fig. 3).

Fig. 3. Proposed ANN architecture.

In this architecture the first layer is used for feature mapping, which is then processed similarly to convolutional ANNs. The proposed ANN architecture is significantly different from the MRBF architectures presented in [9, 10]. Multiple functional RBF-networks stacked together in sequence have both multiple RBF-layers and linear layers, unlike the proposed architecture. Multiple RBF-layers in sequence before a linear layer introduce a training issue to resolve, unlike the proposed single RBF-layer with multiple nonlinear perceptron layers. Both MRBFs in [9, 10] deal with the multiple-RBF-layer training issue.

2.2 Learning Algorithm

The learning algorithm for the ANN architecture presented in this paper consists of two steps: training of the first RBF-layer and training of the other nonlinear layers. RBF-layer training with the algorithm of the minimal full RBF-basis set is described in [7]. The minimal RBF-basis set finding algorithm for the Euclidean distance is based on a criterion derived from the following equation system:

$(x_1 - w_{11})^2 + (x_2 - w_{12})^2 + \ldots + (x_n - w_{1n})^2 = d_1^2$
$(x_1 - w_{21})^2 + (x_2 - w_{22})^2 + \ldots + (x_n - w_{2n})^2 = d_2^2$
$(x_1 - w_{31})^2 + (x_2 - w_{32})^2 + \ldots + (x_n - w_{3n})^2 = d_3^2$
$\ldots$
$(x_1 - w_{k1})^2 + (x_2 - w_{k2})^2 + \ldots + (x_n - w_{kn})^2 = d_k^2 \qquad (4)$

where x is the n-dimensional input vector, w is the matrix of RBF bases, with each row being an n-dimensional RBF-basis vector, and $d_j^2$ is the square of the Euclidean distance between the input vector x and RBF basis j. After opening the brackets and sequentially subtracting each equation from equation j + 1, with definitions added, we obtain:

$(w_{21} - w_{11})x_1 + (w_{22} - w_{12})x_2 + \ldots + (w_{2n} - w_{1n})x_n = c_1$
$(w_{31} - w_{21})x_1 + (w_{32} - w_{22})x_2 + \ldots + (w_{3n} - w_{2n})x_n = c_2$
$\ldots$
$(w_{k1} - w_{k-1,1})x_1 + (w_{k2} - w_{k-1,2})x_2 + \ldots + (w_{kn} - w_{k-1,n})x_n = c_{k-1} \qquad (5)$

where $c_j$ is the sum of the free terms for equation j. If system (5) has a solution for x, there is no information loss after comparison to the RBF bases. The RBF-basis search algorithm is based on the following criterion:

$n = \mathrm{rank}(V) \qquad (6)$

where V is the extended matrix of system (5).
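A sketch of the rank test behind criterion (6): build the coefficient matrix of system (5) from a candidate RBF-basis matrix w (rows $w_{j+1} - w_j$) and check whether its rank equals n, i.e. whether an input vector would be uniquely recoverable from its distances to the bases. (Only the coefficient part is checked here; the free terms $c_j$ depend on the measured distances, so this is an illustrative simplification of the criterion, not the algorithm from [7].)

```python
import numpy as np

def is_full_basis(w):
    """True if the rows w[j+1] - w[j] of system (5) span the whole n-dimensional space."""
    n = w.shape[1]
    A = np.diff(w, axis=0)                 # (k-1) x n coefficient matrix of system (5)
    return np.linalg.matrix_rank(A) == n
```

For example, three bases forming a non-degenerate triangle in the plane pass the test, while three collinear bases fail it: distances to collinear centers cannot distinguish mirror-image inputs.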

3 Numeric Results

3.1 Description of the Test Problem

Numerical evaluation of the proposed technique with the new ANN architecture will be illustrated on the problem of microwave imaging. The problem of determining the spatial distribution (profile) of the complex dielectric permittivity of a material inside a dielectric sample is of great interest due to the functional advantages of this non-destructive testing technology in potential



applications, such as medical diagnostics (detection of heterogeneous areas in human tissues), detection of defects and cracks in structural materials and composite panels, etc. Particular interest in this technology arises in connection with significant difficulties in creating the technology of controlled microwave sintering of composite materials from powders (granules). Microwave imaging is a prospective tool for monitoring the sample state during the course of microwave sintering. The technology of microwave pattern recognition (microwave imaging) belongs to the class of inverse problems, the numerical solution of which is an important aspect of applied non-destructive testing technologies. The currently existing technologies for determining one- or two-dimensional profiles of complex dielectric permittivity are associated with the use of sophisticated experimental equipment and are characterized by low resolution and accuracy. An alternative approach to solving this class of problems may be the use of an ANN. In this paper, the ANN technology is used to determine the three-dimensional dielectric permittivity profiles of the sample material from the results of relatively simple electromagnetic measurements. The S-parameters of the waveguide system inside which the sample is located are used as the initial measured data to determine the dielectric permittivity profile of the material. The dielectric permittivity profile is approximated by a function containing a limited number of control coefficients, while at the same time being flexible enough to represent various spatial distributions of material parameters. The coefficients of the function are determined using a pre-trained ANN, to the input of which the S-parameters of the electromagnetic measuring system are fed. For ANN training, the results of numerical simulation of the propagation of electromagnetic waves in the measuring system using the finite difference method are used.
As a measuring system, we will use a six-port (four ports in rectangular waveguides and two ports corresponding to two orthogonal modes in a round waveguide) turnstile waveguide junction (Fig. 4), containing a dielectric sample lying on the bottom wall of the connection. The measured values in such a system are the complex reflection and transmission coefficients in all six ports of the junction (full scattering matrix). The specified type of measuring system was chosen in order to provide a sufficient amount of information to the ANN input for determining the three-dimensional dielectric permittivity profile of the sample material. In addition, the specified type of measuring system permits to avoid sample rotation (measurements at several positions). The turnstile junction of waveguides is not the only possible solution that satisfies the above requirements. In the more general case, as a measuring system, any multiport waveguide connection can be used, the parameters of the scattering matrix of which are sensitive to variations of the material parameters of the sample in all three coordinate directions.



Fig. 4. Turnstile junction of waveguides with a dielectric sample inside and six measuring ports.

3.2 Computational Technique

Numerical processing of the measurement results is performed using the ANN. In order to provide greater flexibility in working with different spatial distributions of dielectric constant, this section uses a combination of an ANN with a regression mathematical model. At the first step of the method, the three-dimensional dielectric permittivity profile inside the sample is mathematically described using a model that provides a one-to-one correspondence between the spatial coordinates inside the sample and the values of the dielectric constant at the specified points. As such a model, we will use a quadratic polynomial function. This function is determined by a relatively small number of coefficients, and each set of coefficients corresponds to a certain dielectric constant profile inside the sample. In the next step of the method, an ANN with global cubic radial basis functions is used as a numerical inverter. The numerical inverter establishes the correspondence between the measured values (S-parameters of the electromagnetic system) and the coefficients of the function selected in the first step of the method. The ANN, after appropriate training, is used to determine the dielectric constant profiles from the measured S-parameters of the system. The method proposed in this paper is intended to determine the dielectric permittivity profiles of real experimental samples, which can have an arbitrary distribution of material parameters at points inside the sample. At the same time, the method involves application of a model function, namely a quadratic polynomial in this paper, for the most accurate description of the dielectric constant profile. In other words, a



regression model with three independent variables (spatial coordinates x, y, z) is used to approximate the real experimental profile by a quadratic polynomial function. In accordance with experiment planning theory, to build the regression model we will use an orthogonal rotatable design using the function values at fifteen base points inside and outside the sample. For definiteness, we will assume that the sample has the shape of a rectangular parallelepiped. In this case, eight base points are located at the corners of the sample, six star points are located outside the sample (opposite the centers of symmetry of the sides of the parallelepiped) and one point is located at the center of symmetry of the sample (Fig. 5).

Fig. 5. The location of the base points of a quadratic regression model around the dielectric sample.

The distance from the center of symmetry of the sample to the star points is calculated by the formula

$d = \alpha \cdot ds / 2 \qquad (7)$

$\alpha = \sqrt{\frac{\sqrt{N \cdot 2^k}}{2} - 2^{k-1}} \qquad (8)$

where ds is the size of the block side, k is the number of independent variables (in this case k = 3), and $N = 2^k + 2k + 1$ is the number of necessary base points (in this case N = 15).
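Formulas (7)–(8) translate directly to code; for the values given in the text (k = 3, hence N = 15) the star-point factor works out to roughly α ≈ 1.215. (Formula (8) is reconstructed here from the garbled source; it matches the orthogonal central composite design, which is an assumption of this sketch.)

```python
from math import sqrt

def star_point_distance(ds, k=3):
    """Formulas (7)-(8): distance from the sample's center of symmetry to a star point."""
    N = 2**k + 2 * k + 1                             # number of base points; 15 for k = 3
    alpha = sqrt(sqrt(N * 2**k) / 2 - 2**(k - 1))    # formula (8)
    return alpha * ds / 2                            # formula (7)
```

For a block side ds = 2, the distance equals α itself, so the star points lie just outside the sample faces.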



The values of the dielectric constant at the base points are used to construct the quadratic polynomial function. In turn, the values themselves at the base points are determined using the ANN with the architecture presented in the previous section.

3.3 Numerical Tests

In order to evaluate the performance of the proposed ANN architecture, a comparison to RBF ANN results is used in this section. Training set size, average ANN error for input vectors not included in the training set, and ANN training time are considered as learning efficiency metrics. Dielectric parameter calculation is the application task used during the comparison. The results are compared to the results of the systems described in [1, 7]. Table 1 contains the results of RBF-network training on the task of dielectric parameter calculation. The training set was reduced at each step in order to obtain the numeric results. It can be noted that significant error fluctuation is observed with the training set size reduction.

Table 1. RBF-network results

Training time, s | Training set size | Absolute error value | Relative error value
17.58 | 2000 | 0.0852 | 0.0085
7.87  | 1500 | 0.0504 | 0.005
6.33  | 1250 | 0.0504 | 0.005
1.81  | 900  | 1.4652 | 0.14652
1.22  | 750  | 0.0569 | 0.0057

The results provided by the ANN architecture presented in this paper, with comparable training time, are presented in Table 2. Table 2 contains the results of training the presented ANN architecture on dielectric parameter calculation. At each step the training set is reduced. Training runs for 500 epochs in order to give relatively similar time results. Most of the calculations during training are still related to the nonlinear perceptron layers, since the RBF-layer is trained with the fast algorithm described in [7]. The error value fluctuations that occur in the RBF-network during training set reduction do not occur in the presented ANN architecture.

Table 2. Learning results for the presented ANN architecture

Training time, s | Training set size | Absolute error value | Relative error value | Training epochs
7.33 | 2000 | 0.0682 | 0.0068 | 500
7.48 | 1500 | 0.0691 | 0.0069 | 500
7.48 | 1250 | 0.0684 | 0.0068 | 500
7.70 | 900  | 0.071  | 0.007  | 500
7.78 | 750  | 0.0692 | 0.0069 | 500



In Fig. 6 the comparison of error levels is displayed as a diagram. The error fluctuations in the RBF-network are significant compared to the difference between the RBF-network's error and the proposed ANN architecture's error in the other cases. This demonstrates the RBF-network's sensitivity to training set reduction.

Fig. 6. Comparison of relative error values with training set reduction.

4 Conclusion

A new RBF-network-based ANN architecture was presented in this paper, and a training algorithm for the proposed architecture was described. Numeric results show higher training stability in approximately the same time, so it becomes possible to obtain results with errors of the same order using training sets of much smaller size in comparison to the RBF ANN. Therefore, the lowered dependency of error fluctuations on training set size changes demonstrates a higher generalization ability of the presented ANN architecture compared to the RBF-network.

References

1. Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., LeCun, Y.: OverFeat: integrated recognition, localization and detection using convolutional networks. In: International Conference on Learning Representations (ICLR 2014), CBLS (arXiv:1312.6229) (2014)
2. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)



3. Brovko, A.V., Murphy, E.K., Yakovlev, V.V.: Waveguide microwave imaging: neural network reconstruction of functional 2-D permittivity profiles. IEEE Trans. Microw. Theory Tech. 57(2), 406–414 (2009). https://doi.org/10.1109/TMTT.2008.2011203
4. Yakovlev, V.V., Murphy, E.K., Eves, E.E.: Neural networks for FDTD-backed permittivity reconstruction. COMPEL: Int. J. Comput. Math. Electr. Electron. Eng. 24(1), 291–304 (2005)
5. Haykin, S.: Neural Networks: A Comprehensive Foundation, 2nd edn. Prentice Hall, Englewood Cliffs (1999)
6. Powell, M.J.D.: Approximation Theory and Methods. Cambridge University Press, New York (1981). https://doi.org/10.1017/CBO9781139171502
7. Abrosimov, M.A., Brovko, A.V.: Criteria for minimal RBF artificial neural network's center vectors set definition. Neirokomputery: razrabotka, primenenie (3), 50–54 (2018). (in Russian)
8. Bianchini, M., Frasconi, P., Gori, M.: Learning without local minima in radial basis function networks. IEEE Trans. Neural Netw. 6(3), 749–756 (1995). https://doi.org/10.1109/72.377979
9. Craddock, R.J., Warwick, K.: Multi-layer radial basis function networks. An extension to the radial basis function. In: IEEE Proceedings of International Conference on Neural Networks (ICNN 1996), vol. 2, pp. 700–705 (1996). https://doi.org/10.1109/ICNN.1996.548981
10. Chao, J., Hoshino, M., Kitamura, T., Masuda, T.: A multilayer RBF network and its supervised learning. In: Proceedings of IEEE International Joint Conference on Neural Networks, IJCNN 2001 (Cat. No. 01CH37222), vol. 3, pp. 1995–2000 (2001). https://doi.org/10.1109/IJCNN.2001.938470

Dynamic System Model for Predicting Changes in University Indicators in the World University Ranking U-Multirank

Olga Glukhova¹, Alexander Rezchikov², Vadim Kushnikov¹,², Oleg Kushnikov² and Irina Sytnik¹

¹ Yuri Gagarin State Technical University of Saratov, 77, Politekhnicheskaya ave., Saratov 410054, Russia
[email protected]
² Institute of Precision Mechanics and Control of the Russian Academy of Science, 24, Rabochaya ave., Saratov 410028, Russia
[email protected]

Abstract. The quality of higher education is a current problem all over the world. The paper presents a system for predicting the accreditation indicators of technical universities based on J. Forrester's mechanism of system dynamics. A mathematical model based on nonlinear differential equations was developed to predict the efficiency indicators of educational activities.

Keywords: Higher education · Quality of educational process · Mathematical model · System dynamics · World universities ranking · U-Multirank

1 Introduction

All Russian higher education institutions monitor the efficiency of their activity each year using the following indicators: educational and research activity, financial and economic activity, international activity, the contingent and employment of students, and more. The evaluation of the effectiveness of university management becomes an urgent issue in the conditions of developing market relations and reduction of government funding, with the appearance of new requirements from the market of educational services and the labour market. For a long time in the Russian Federation the main methods of assessing the quality of the educational process were licensing, certification and accreditation procedures. Public accreditation of universities compiled by various magazines, newspapers, agencies and scientists gradually acquires more significance, especially within university rankings. An analytical tool to predict the dynamics of changes in the main university indicators becomes a necessity for university management. This tool helps in the promotion of the university and its inclusion in academic ratings. Without the use of system dynamics, it is difficult to conduct such studies. The information and advisory system developed is based on system dynamics and allows the decision maker to determine the change in the institution's characteristics at different time intervals.

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 79–90, 2019. https://doi.org/10.1007/978-3-030-12072-6_8

80

O. Glukhova et al.

U-Multirank is a multidimensional, user-driven approach to international ranking of higher education institutions [1]. It compares the performances of higher education institutions—in short: universities—in the five dimensions of university activity: (1) teaching and learning, (2) research, (3) knowledge transfer, (4) international orientation and (5) regional engagement. The U-Multirank web tool enables comparisons at the level of the university as a whole and at the level of specific study programs. Based on empirical data, U-Multirank compares institutions with similar institutional profiles (“like-with-like”) and allows users to develop their own personalized rankings by selecting indicators in terms of their own preferences. The first U-Multirank ranking was the 2014 edition, covering more than 850 higher education institutions from more than 70 countries. It provided a ranking at the institutional level as a whole as well as at the level of specific fields of study. For the latter, the fields of electrical engineering, mechanical engineering, business studies and physics were covered in the 2014 edition. Thereafter, the coverage of institutions and subject areas was expanded each year. U-Multirank is an independent ranking prepared with seed funding from the European Commission’s Erasmus+ program. The work of the U-Multirank consortium is overseen by an Advisory Board.

2 Methods The mathematical model is developed for modelling and predicting the efficiency indicators of educational activity in universities. The developed model is based on Forrester's and Meadows's models and allows formalizing the complex cause-effect relationships between system variables [2]. The modelled variables that characterize the educational process are the system levels of the basic dynamic system model. The derivatives of the levels with respect to time, the flows, move the contents of one state to another. The relations between flows and model levels can be represented as a system of differential equations:

dXi/dt = Fi(X1, …, Xn), i = 1, …, n   (1)

We represent the functions Fi in the form of expansions in a series in powers of Xk. We restrict ourselves to the first, linear terms of the expansion, and choose the coefficients by conducting experiments on measuring the corresponding levels at different points in time. With only the linear terms of the expansion retained, the system takes the form:

dXi/dt = ai,0 + ai,1·X1 + … + ai,n·Xn, i = 1, …, n   (2)


The products ai,j·Xj, j = 1, …, n are the rates of the i-th flow, which depend multiplicatively on the levels through multipliers or functional dependencies xi,j,k, k = 1, …, n:

ai,j(X1, …, Xn) = ai,j·xi,j,1(X1)·…·xi,j,n(Xn)   (3)

Thus, the system consists of levels, i.e. a set of characteristics that at each instant of time completely determines the state of the system, and flows, i.e. the rates of change of the levels per unit time, which are the sums of the rates multiplied by the levels:

dXi/dt = ai,0 + Σ_{j=1}^{n} ai,j · Π_{k=0}^{n} xi,j,k(Xk) · Xj, i = 1, …, n   (4)

where X1, …, Xn are levels or stocks: the set of characteristics that completely determines the state of the system at any given time; dXi/dt are flows: the rates of change of the levels per unit time, composed of the rates multiplied by the levels; ai,j·Xj, j = 1, …, n are the rates of the flows, which change a level and include all factors causing its growth or decrease; xi,j,k, k = 1, …, n are functional dependencies between levels. The indicators of the effectiveness of educational activity in universities are represented in the model as simulated variables. The indicators of the World University Ranking U-Multirank were chosen as indicators for the proposed model. The model is a system of nonlinear differential equations; the modelled characteristics of the educational process are determined by solving this system. A graph model is used to illustrate the causal relationships between the system levels of the mathematical model. The oriented graph (see Fig. 1) was built according to an analysis of the cause-and-effect relationships between the selected system variables (accreditation indicators of the university). An algorithm is proposed to determine the indicators of the effectiveness of educational activity in the university by solving the system of nonlinear differential equations. The proposed approach is aimed at solving complex problems of managing the educational process in universities. The structure of the proposed model repeats the structure of cause-effect relationships in the system. The model makes it possible to quickly and relevantly assess the performance of the system, which can be used by the person responsible for quality control [3–9]. The system of nonlinear differential equations has the form (5):

Fig. 1. Graph of causal relationships used in the construction of a mathematical model

dX1(t)/dt = X1(t)·(BPn/X1(t)·f1(X3)f2(X19)f6(X27) − BPk/X1(t)·f3(X21)f4(X22)f5(X23)f7(X29))
dX2(t)/dt = X2(t)·(MPn/X2(t)·f8(X4)f9(X20)f12(X28) − MPk/X2(t)·f10(X22)f11(X23)f13(X29))
dX3(t)/dt = X3(t)·(BV/BZ − BC/BZ·f14(X19))
dX4(t)/dt = X4(t)·(MV/MZ − MC/MZ·f15(X20))
dX5(t)/dt = X5(t)·(Cn − Ck)/P
dX6(t)/dt = X6(t)·(PT·f21(X12) + PA·f17(X8) + PJ·f19(X10) + PD·f22(X17) + PS)
dX7(t)/dt = X7(t)·(G + IG + NG)/F · f20(X11)f23(X25)f24(X30)f25(X14)
dX8(t)/dt = X8(t)·(An − Ak)/P
dX9(t)/dt = X9(t)·(Vn − Vk)/P · f16(X5)f18(X6)
dX10(t)/dt = X10(t)·(IDPn − IDPk)/P
dX11(t)/dt = X11(t)·(DPn/X11(t) − DPk/X11(t))
dX12(t)/dt = X12(t)·(JIn − JIk)/P
dX13(t)/dt = X13(t)·(IB + IN + IL)/F
dX14(t)/dt = X14(t)·(DS·f28(X15) + DV·f29(X16) + DP)
dX15(t)/dt = X15(t)·(JDn − JDk)/P
dX16(t)/dt = X16(t)·(INn − INk)/NPP
dX17(t)/dt = X17(t)·(PPn − PPk)/P
dX18(t)/dt = X18(t)·(IRn − IRk)/F · f31(X26)
dX19(t)/dt = X19(t)·(FBn − FBk)/FB
dX20(t)/dt = X20(t)·(FMn − FMk)/FM
dX21(t)/dt = X21(t)·(FSn/X21(t) − FSk/X21(t))
dX22(t)/dt = X22(t)·(OPn/X22(t) − OPk/X22(t))
dX23(t)/dt = X23(t)·(JSn/X23(t) − JSk/X23(t))
dX24(t)/dt = X24(t)·(ISPn/X24(t) − ISPk/X24(t)·f33(X26))
dX25(t)/dt = X25(t)·(JPn − JPk)/P
dX26(t)/dt = X26(t)·(FDPn/X26(t)·f32(X21) − FDPk/X26(t))
dX27(t)/dt = X27(t)·(BRPn/X27(t) − BRPk/X27(t))
dX28(t)/dt = X28(t)·(MRPn/X28(t) − MRPk/X28(t))
dX29(t)/dt = X29(t)·(SRPn/X29(t) − SRPk/X29(t))
dX30(t)/dt = X30(t)·(RPn − RPk)/P
dX31(t)/dt = X31(t)·(IR + NR)/F · f34(X30)   (5)

where X1, X2, …, X31 are predictable variables, and the other parameters are constants that are determined experimentally at the model adaptation stage. The following notation is used in the system of equations (5):
X1(t)—bachelor graduation rate; BPn, BPk—% of graduated bachelors at the beginning/end of the billing period, BPn = Bn/BS, BPk = Bk/BS; Bn, Bk—number of bachelor graduates at the beginning/end of the billing period; BS—total number of bachelor graduates;
X2(t)—masters graduation rate; MPn, MPk—% of graduated masters at the beginning/end of the billing period, MPn = Mn/MS, MPk = Mk/MS; Mn, Mk—number of


masters graduates at the beginning/end of the billing period; MS—total number of masters graduates;
X3(t)—graduating on time (bachelors); BV—number graduating on time (bachelors); BC—number of graduate bachelors continuing education; BZ—average annual number of undergraduates, BZ = f1T+(X16)·f2T(X19)·f3T+(X21)·f4T+(X22)·f5T+(X23)·f6T+(X27)·f7T+(X29);
X4(t)—graduating on time (masters); MV—number graduating on time (masters); MC—number of graduate masters continuing education; MZ—average number of masters enrolled, MZ = f1T+(X16)·f2T(X20)·f3T+(X21)·f4T+(X22)·f5T+(X23)·f6T+(X28)·f7T+(X29);
X5(t)—citation rate; Cn, Ck—total number of citations at the beginning/end of the billing period;
X6(t)—research publications (absolute numbers); PT—number of publications in technical disciplines; PA—number of publications in the field of arts; PJ—number of interdisciplinary studies; PD—number of patent publications; PS—other publications;
X7(t)—external research income; G—research grants from national and international financial institutions; IG—grants from research councils and research funds; NG—grants from charities and other non-profit organizations;
X8(t)—art-related output; An, Ak—number of publications in the field of arts at the beginning/end of the billing period;
X9(t)—top cited publications; Vn, Vk—number of top cited publications at the beginning/end of the billing period;
X10(t)—interdisciplinary publications; IDPn, IDPk—number of interdisciplinary publications at the beginning/end of the billing period;
X11(t)—post-doc positions; DPn, DPk—post-doc positions at the beginning/end of the billing period, DPn = Dn/NPP, DPk = Dk/NPP; Dn, Dk—number of post-docs at the beginning/end of the billing period; NPP—academic staff;
X12(t)—co-publications with industrial partners; JIn, JIk—co-publications with industrial partners at the beginning/end of the billing period;
X13(t)—income from private sources; IB—income from research for industry and business; IN—income from research for private foundations, charities and other non-profit organizations; IL—income from licensing;
X14(t)—patents awarded (absolute numbers); DS—number of patents shared with other organizations; DV—number of patents registered by the university; DP—other patents;
X15(t)—industry co-patents; JDn, JDk—industry co-patents at the beginning/end of the billing period;
X16(t)—spin-offs (i.e. firms established on the basis of a formal knowledge transfer arrangement between the institution and the firm) recently created by the institution; INn, INk—number of spin-offs at the beginning/end of the billing period;


X17(t)—publications cited in patents; PPn, PPk—publications cited in patents at the beginning/end of the billing period;
X18(t)—income from continuous professional development; IRn, IRk—income from continuous professional development at the beginning/end of the billing period;
X19(t)—foreign language bachelor programs; FBn, FBk—foreign language bachelor programs at the beginning/end of the billing period; FB—total number of foreign language bachelor programs;
X20(t)—foreign language master programs; FMn, FMk—foreign language master programs at the beginning/end of the billing period; FM—total number of foreign language master programs;
X21(t)—% incoming exchange students; FSn, FSk—% incoming exchange students at the beginning/end of the billing period, FSn = Fn/S, FSk = Fk/S; Fn, Fk—number of incoming students at the beginning/end of the billing period;
X22(t)—% exchange students sent out; OPn, OPk—% exchange students sent out at the beginning/end of the billing period, OPn = On/S, OPk = Ok/S; On, Ok—number of exchange students sent out at the beginning/end of the billing period;
X23(t)—% of students in international joint degree programs; JSn, JSk—% of students in international joint degree programs at the beginning/end of the billing period, JSn = JSSn/S, JSk = JSSk/S; JSSn, JSSk—number of students in international joint degree programs at the beginning/end of the billing period;
X24(t)—% of international academic staff; ISPn, ISPk—% of international academic staff at the beginning/end of the billing period, ISPn = ISn/NPP, ISPk = ISk/NPP; ISn, ISk—number of international academic staff at the beginning/end of the billing period;
X25(t)—% of international joint publications; JPn, JPk—number of publications with one or more foreign collaborators at the beginning/end of the billing period;
X26(t)—% of international doctorate degrees; FDPn, FDPk—% of international doctorate degrees at the beginning/end of the billing period, FDPn = FDn/DS, FDPk = FDk/DS; FDn, FDk—number of international doctorate degrees at the beginning/end of the billing period; DS—total number of doctorate degrees;
X27(t)—% of bachelor graduates working in the region; BRPn, BRPk—% of bachelor graduates working in the region at the beginning/end of the billing period, BRPn = BRn/BS, BRPk = BRk/BS; BRn, BRk—number of bachelor graduates working in the region at the beginning/end of the billing period;
X28(t)—% of master graduates working in the region; MRPn, MRPk—% of master graduates working in the region at the beginning/end of the billing period, MRPn = MRn/MS, MRPk = MRk/MS; MRn, MRk—number of master graduates working in the region at the beginning/end of the billing period;
X29(t)—% of student internships in the region; SRPn, SRPk—% of student internships in the region at the beginning/end of the billing period, SRPn = SRn/S, SRPk = SRk/S; SRn, SRk—number of student internships in the region at the beginning/end of the billing period;
X30(t)—regional joint publications; RPn, RPk—number of publications with one or more co-authors geographically located in the same region at the beginning/end of the billing period;


X31(t)—income from regional sources; IR—research income from industry and business in the region; NR—research income from regional private sources.
The functional dependencies f1(X1), f2(X1), …, f34(X30) between variables are determined on the basis of an analysis of statistical data and are usually approximated by polynomials of low degree.
Consider the process of building the model using the example of the differential equation for the variable X1, which describes the dynamics of the average annual number of bachelor graduates. The differential equation for the simulated variable X1 is (6):

dX1(t)/dt = X1(t)·(BPn(t)/X1(t) − BPk(t)/X1(t))   (6)

where X1(t)—bachelor graduation rate; BPn(t), BPk(t)—% of graduated bachelors at the beginning/end of the billing period; Bn(t), Bk(t)—number of graduated bachelors at the beginning/end of the billing period; BS(t)—total number of graduated bachelors.
The rate of change of the variable X1 is influenced by such external factors as the average annual amount of financial resources of the university F, the average annual rating of the university R, the relevance of university graduates W and the level of development of the region UR. In addition, the rate of change of the variable X1 depends on other variables: graduating on time (bachelors) (X3), foreign language bachelor programs (X19), % incoming exchange students (X21), % exchange students sent out (X22), % of students in international joint degree programs (X23), % of bachelor graduates working in the region (X27), and % of student internships in the region (X29) (see Fig. 2).

Fig. 2. Change of university accreditation indicators at different time intervals
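The level-and-flow construction above can be sketched numerically. The following is a minimal forward-Euler integration of a hypothetical two-level instance of the linearized form (2); all coefficients are invented for illustration and are not the paper's fitted values:

```python
import numpy as np

def simulate(x0, flow, t_end, dt=0.01):
    """Forward-Euler integration of dX/dt = flow(X): the simplest way to
    solve a Forrester-type level/flow system numerically."""
    X = np.array(x0, dtype=float)
    traj = [X.copy()]
    for _ in range(int(round(t_end / dt))):
        X = X + dt * np.asarray(flow(X))  # levels advance by flow * dt
        traj.append(X.copy())
    return np.array(traj)

# Hypothetical two-level model: dX_i/dt = a_{i,0} + a_{i,1} X1 + a_{i,2} X2.
A = np.array([[0.02, -0.03, 0.01],
              [0.01, 0.02, -0.04]])

def flow(X):
    return A[:, 0] + A[:, 1] * X[0] + A[:, 2] * X[1]

traj = simulate([1.0, 1.0], flow, t_end=10.0)
```

A real implementation of system (5) would replace the linear `flow` with the full right-hand sides, including the fitted multipliers f_i.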


The final equation for the variable X1, taking into account all functional dependencies on other system levels, takes the form (7):

dX1(t)/dt = X1(t)·(BPn/X1(t)·f1(X3)f2(X19)f6(X27) − BPk/X1(t)·f3(X21)f4(X22)f5(X23)f7(X29))   (7)

The positive and negative rates of change of the variable X1 are calculated as:

X1T+ = f1T+(X3)·f2T+(X19)·f3T+(X21)·f4T+(X22)·f5T+(X23)·f6T+(X27)·f7T+(X29)
X1T− = f1T−(X3)·f2T−(X19)·f3T−(X21)·f4T−(X22)·f5T−(X23)·f6T−(X27)·f7T−(X29)

Differential equations and rates for the other simulated variables are written in the same way. The structure of the model allows taking into account external and internal disturbing influences, and also provides the person responsible for quality control with the ability to respond to them quickly and adequately. The proposed algorithm for calculating these indicators is based on system dynamics and regression models. The mathematical model is constructed on the basis of the system dynamics model, which is then tested for compliance with real data using a regression model. The regression model is built on the available statistical data collected during the period of the university's work. Software development on the basis of the presented mathematical model will be the next stage of the research.
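Once the multipliers f_i have been fitted, an equation of the form (7) can be evaluated directly. In the sketch below, `f_poly`, `dX1_dt` and all numbers are invented purely to show the mechanics; the real multipliers and constants are fitted to university statistics:

```python
def f_poly(coeffs):
    """Build a low-degree polynomial multiplier f(x) = c0 + c1*x + ...,
    the form in which the functional dependencies are approximated."""
    def f(x):
        y = 0.0
        for c in reversed(coeffs):
            y = y * x + c  # Horner's scheme
        return y
    return f

# Invented multipliers and rate constants (illustration only).
f1 = f2 = f6 = f_poly([0.9, 0.1])
f3 = f4 = f5 = f7 = f_poly([0.95, 0.05])
BPn, BPk = 0.40, 0.35

def dX1_dt(X3, X19, X21, X22, X23, X27, X29):
    # Eq. (7): X1(t) * (BPn/X1 * f1 f2 f6 - BPk/X1 * f3 f4 f5 f7);
    # the X1(t) factors cancel, leaving two rate products.
    positive = BPn * f1(X3) * f2(X19) * f6(X27)
    negative = BPk * f3(X21) * f4(X22) * f5(X23) * f7(X29)
    return positive - negative

rate = dX1_dt(1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0)
```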

3 Results The results of forecasting the characteristics of university accreditation in the time interval from 2010 to 2015 are presented below. They allow identifying the main trends in the change of these characteristics, which is necessary when making managerial decisions to ensure quality control and the functioning of the university. In the computational experiment we used the statistical data of the Data Books of the National Research University Higher School of Economics [10]. Figure 3 shows the graphs of the solutions of the equation system (5) for the most significant accreditation indicators. The adequacy of the developed mathematical model (5) is checked using regression equations based on statistical data for the period 2005–2015, as well as using retrospective data.


Fig. 3. Change of university accreditation indicators at different time intervals

Figure 4 compares the values of the simulated variable X1(t) calculated using the developed mathematical model (Xds) and the regression model (Xr) with historical data (Xex). The mathematical model reproduces the statistical data accurately enough on most time intervals. A small absolute error (up to 10%) indicates the adequacy of the developed mathematical model (5). Using the resulting model (5), we model the average annual number of bachelor graduates, check the data obtained from the system and compare them with official statistics. From the solution of the obtained system of equations (Fig. 5) it can be seen that the increase in demand for university graduates allows an increase in the number of bachelor graduates over most time intervals.

Fig. 4. The dynamics of the values of the variable X1(t) for 2005–2015, calculated by different models (X1ex—historical data, X1r—regression model, X1ds—system dynamics model)
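The adequacy check reduces to computing a maximum relative error between the model output and the historical series. The values below are invented stand-ins (the actual X1(t) series is only available from Fig. 4); the 10% threshold is the paper's stated adequacy criterion:

```python
def max_relative_error(model, observed):
    """Largest |model - observed| / |observed| over the compared period."""
    assert len(model) == len(observed)
    return max(abs(m - o) / abs(o) for m, o in zip(model, observed))

# Illustrative series only, not the paper's data.
X1_ex = [0.36, 0.35, 0.34, 0.35, 0.37]  # historical data
X1_ds = [0.35, 0.35, 0.35, 0.36, 0.36]  # system-dynamics model output
err = max_relative_error(X1_ds, X1_ex)
adequate = err <= 0.10  # 10% adequacy threshold
```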


Simulation of the main indicators of higher education institutions revealed trends in the predicted characteristics, which can be useful when making management decisions to improve the manageability of the educational process. The developed system was also used to model the behavior of the system under changes in various variables and system factors. The developed mathematical model can be used in specialized information systems for evaluating the change in accreditation indicators of Russian universities at different time intervals.

Fig. 5. Simulated number of bachelor graduates for 2005–2015 with an increase in demand for university graduates (X1ex—historical data, X1ds—system dynamics model)

4 Discussion A system of indicators characterizing the educational process of a university is proposed and justified for assessing the effectiveness of its functioning with the help of the rating system U-Multirank. A mathematical model of the process of change of the university's functioning indicators is developed on the basis of system dynamics and regression analysis. The structure of the proposed model visually illustrates the structure of cause-effect relationships in the system and allows taking into account external and internal changes; the person responsible for quality control is given the opportunity to react to them promptly and adequately. However, the developed mathematical model has difficulties with an accurate analysis of some interdependencies and with tracking feedback, and therefore its adequacy has to be checked with the help of the regression model.


Thus, the developed mathematical model makes it possible to take into account the features of the educational process as a complex system and to monitor the quality of the process at any discrete point in time.

References
1. The World University Ranking U-Multirank. http://u-multirank.eu/. Accessed 21 Nov 2018
2. Forrester, J.: Some Basic Concepts in System Dynamics. World Dynamics. Pegasus Communications, Waltham (1973). 144 pp
3. Tikhonova, O., Kushnikov, V., Fominykh, D., Rezchikov, A., Ivashchenko, V., Bogomolov, A., Filimonyuk, L., Dolinina, O., Kushnikov, O., Shulga, T., Tverdokhlebov, V.: Mathematical model for prediction of efficiency indicators of educational activity in high school. IOP Conf. Ser. J. Phys. Conf. Series 1015, 032143 (2018)
4. Rezchikov, A., Kushnikov, V., Ivashchenko, V., Bogomolov, A., Shulga, T., Gulevich, N., Frolova, N., Pchelintseva, E., Kushnikova, E., Kachur, K., Kulakova, E.: Models of minimizing the damage from atmospheric emissions of industrial enterprises. Adv. Intell. Syst. Comput. 575, 255–262 (2017)
5. Filimonyuk, L.: The problem of critical events' combinations in air transportation systems. Adv. Intell. Syst. Comput. 573, 384–392 (2017)
6. Tverdokhlebov, V.: Phase pictures of properties of complex objects of technical diagnostics. EWDTS 10, 527–530 (2010)
7. Khvorostukhina, E., L'Vov, A., Ivzhenko, S.: Performance improvements of a Kohonen self-organizing training algorithm. ElConRus 2017, 456–459 (2017)
8. Semezhev, N., L'Vov, A., Nina, M., Meschanov, V.: Mathematical modeling of the combined multi-port correlator. ElConRus 2018, 1154–1159 (2018)
9. Pechenkin, V., Korolev, M.: Optimal placement of surveillance devices in a three-dimensional environment for blind zone minimization. Comput. Opt. 41(2), 245–253 (2017)
10. Higher School of Economics Data Books. http://www.hse.ru/. Accessed 15 Oct 2018

Optimization of the Hardware Costs of Interpolation Converters for Calculations in the Logarithmic Number System

Ilya Osinin

Scientific Production Association Real-Time Software Complexes, Krasnoproletarskaya Street, 16, 127473 Moscow, Russia
[email protected]

Abstract. The logarithmic number system (LNS) is an advanced alternative to the floating-point representation of numbers widely used in computer technology. It provides greater accuracy and speed of computation with a comparable range of number representation. However, the widespread use of LNS is prevented by the need to apply interpolation to convert numbers from the traditional format and back, and to perform addition/subtraction operations. Known solutions are oriented to interpolation by a first-order polynomial, which does not allow the use of double or quadruple precision due to the exponential growth of hardware costs. The work is devoted to minimizing hardware costs by optimizing the order of the interpolation polynomial and the interpolation step for computations over numbers of different width. The results of the work can be used to develop arithmetic devices that operate with numbers in the LNS and are optimized for the level of hardware costs.
Keywords: Logarithmic number system  LNS  Interpolation  Polynomial  Accuracy

1 Introduction The logarithmic number system (LNS) is nowadays of great interest in the area of high-performance computing (HPC) as an alternative to the floating-point representation of numbers. While ensuring the same range of number representation in a given number of bits, LNS performs the operations of multiplication, division, raising to a power and taking the root of a number as fast as with a fixed point. This leads to a significant increase in computational speed on those tasks where the above operations prevail over the operations of addition and subtraction. This becomes possible due to the properties of logarithms.

The work was supported by the Russian Science Foundation, grant N 17-71-10043. © Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 91–102, 2019. https://doi.org/10.1007/978-3-030-12072-6_9


log2(x · y) = i + j,   (1)

log2(x / y) = i − j,   (2)

log2(x^y) = i · y,   (3)

log2(x^(1/y)) = i / y,   (4)

log2(x + y) = i + log2(1 + 2^(j−i)),   (5)

log2(x − y) = i + log2(1 − 2^(j−i)),   (6)

where x, y are the initial floating-point numbers and i = log2 x, j = log2 y are the representations of x and y in the logarithmic number system, respectively. Besides the high processing speed of (1)–(4) in hardware, LNS contributes to better accuracy of computations due to the absence of rounding when performing them. This increases the accuracy of calculations in the following classes of problems:
• ill-conditioned;
• multi-scale exponents;
• sensitive to equivalent transformations.
On the other hand, the operations of addition and subtraction in LNS (5)–(6) are expressed through multiplication, which negatively affects their speed. Wide adoption of LNS is also prevented by the need to convert numbers from the traditional floating-point format into the logarithmic format. This conversion consists in finding the logarithm to base 2 of the initial floating-point number. The backward transformation consists in finding the anti-logarithm, i.e. raising 2 to the power that corresponds to the logarithmic representation of the number. Both functions are non-linear; their computation over the mantissa representation range of a floating-point number is possible, for example, with the help of substitution tables calculated beforehand. Swartzlander et al. [1] were the first to apply this possibility to LNS. However, the size of the tables grows exponentially with the width of the data being processed: for a 23-bit mantissa a table of 23 Mbyte is needed. Interpolation of the logarithm and anti-logarithm functions makes it possible to reduce the size of the tables considerably. Taylor was the first to offer straight-line interpolation for LNS in 1983 [2]. The solution needed one multiplier, one adder and two look-up tables (LUT) with stored interpolation factors. In this case the size of both tables is not more than 12 kbyte.
Further development of the linear interpolator was offered in 2000 by Coleman et al. [3]. It consisted in correcting the error that occurred on each interpolation interval. The scheme had four LUT, two multipliers and three adders. And though the interpolation time and hardware costs almost doubled compared to the Taylor solution, the precision of interpolation improved, which made it possible to reduce the total size of the LUT down to 2.8 kbyte.


For faster addition and subtraction it was proposed to interpolate the second summand in formulas (5)–(6) as a whole, which resulted in special techniques of co-transformation [4–6] and a non-uniform interpolation step [3, 7–9], due to the need to broaden the interpolation range by a factor equal to the number of mantissa bits of the initial number. Ultimately, technical realizations of these approaches are quite slow and hardware-cost consuming, so further we shall proceed as follows. The computation of (5)–(6) takes place by finding the anti-logarithm of the number, adding 1 to it and calculating the logarithm of the produced sum. This solution keeps the calculations in the range of representation of the mantissa M ∈ [1, 2) and uses the same hardware that is used for code conversion [10]. In this regard, it is important to reduce the hardware costs of the code converter while maintaining a high level of performance, since the device will also be used when performing the arithmetic operations of addition and subtraction. The considered solutions [1–9] have in common the implementation of a linear type of interpolation, i.e. a first-order polynomial. The accuracy of calculation of the logarithm and anti-logarithm that it provides limits the range of representation to numbers analogous to single precision of the IEEE 754 format (32 bits). Thus the European Logarithmic Microprocessor [3], for example, built in the year 2000 on the basis of LNS, can operate only with 32-bit numbers. Applying these solutions to numbers of double precision (64 bits) and more would bring exponential growth of the hardware costs, which makes such solutions inapplicable in real arithmetic devices.
The goal of the article is to analyze and optimize the hardware costs of interpolation devices that use polynomials of different degrees and compute logarithms and anti-logarithms for numbers of single, double and quadruple precision.

2 Interpolation Polynomial for Codes Transformation In the introduction it was stated that interpolation is needed to reduce the hardware costs of the converters of codes into LNS and back, which realize the functions of logarithm and anti-logarithm calculation. In the general case, interpolation is a way to find intermediate values of a quantity from an available discrete set of known values by calculating a polynomial

y = Σ_{i=0}^{n} ci · x^i   (7)

where ci is an interpolation factor, x is the interpolation variable, and n is the power of the polynomial. A floating-point number in general form is given by the formula

94

I. Osinin

x = (−1)^sign · 2^E · (1 + M/2^f)   (8)

where sign is the sign of the number, E is the exponent, M is the mantissa in normalized form, and f is the capacity (bit width) of the mantissa. The conversion into LNS consists in finding the base-2 logarithm:

log2 x = log2 2^E + log2(1 + M/2^f) = E + log2(1 + M/2^f)   (9)

where E is the exponent, M is the mantissa in normalized form, f is the capacity of the mantissa, and

1 + M/2^f ∈ [1, 2).   (10)

So, to convert a number into LNS we need to interpolate the logarithm function on the interval from 1 to 2; for the reverse conversion we need to interpolate the anti-logarithm function on the interval from 0 to 1. The maximum error of interpolation of the conversion functions from floating point to LNS and back is estimated by the formula

e_max = max(I_calc − I_exact)   (11)

where I_calc is the result calculated with the help of interpolation and I_exact is the exact result. Then the maximum relative error is

e_relmax = e_max / 2^(−f)   (12)

where e_max is the maximum interpolation error and f is the capacity of the mantissa of the number being converted. The accuracy of the conversion is reasonable when e_relmax < 0.5: in this case the interpolation error after rounding does not change the meaningful bits of the result. Let us take the interpolation step as uniform and equal to 2^−k, where 2^k is the number of interpolation intervals. In this case the high-order k mantissa bits determine the number of the interval, which is more convenient for the subsequent technical realization of the converters. The maximum error value calculated for the function y = log2 x at x ∈ [1, 2) was practically the same as the analogous value for the function y = 2^x at x ∈ [0, 1), so the further consideration applies identically to the logarithm and anti-logarithm functions. Figure 1 shows the plotted values of the interpolation error e_max as a function of the number of intervals 2^k for the following types of interpolation polynomials: linear (n = 1), quadratic (n = 2) and cubic (n = 3), where n is the order of the polynomial. The single, double and quadruple lines in Fig. 1 show the maximum permissible interpolation error for single, double and quadruple precision, respectively.
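A software sketch of the kind of first-order LUT interpolation discussed here: build per-interval coefficients for log2 on [1, 2) with 2^k uniform intervals, pick the interval from the high-order k bits, and measure the empirical e_max. For n = 1 and k = 8, formula (13) predicts roughly 0.1796/2^16 ≈ 2.7·10^−6. The coefficient scheme (chords exact at both endpoints) is an assumption for illustration, not a specific hardware design from the paper:

```python
import math

k = 8
N = 1 << k            # 2**k uniform interpolation intervals on [1, 2)
h = 1.0 / N           # interpolation step 2**-k

# First-order coefficients per interval: the chord through both endpoints.
c0 = [math.log2(1.0 + m * h) for m in range(N)]
c1 = [(math.log2(1.0 + (m + 1) * h) - math.log2(1.0 + m * h)) / h
      for m in range(N)]

def log2_lut(x):
    """Piecewise-linear log2 on [1, 2); the interval number m corresponds
    to the high-order k bits of the mantissa fraction."""
    m = min(int((x - 1.0) / h), N - 1)
    return c0[m] + c1[m] * (x - (1.0 + m * h))

# Empirical maximum interpolation error over a dense grid of [1, 2)
e_max = max(abs(log2_lut(1.0 + t / 100000.0) - math.log2(1.0 + t / 100000.0))
            for t in range(100000))
```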

Optimization of the Hardware Costs of Interpolation Converters

95

Implementation of the polynomial of the n-th power allows reducing the value of the maximum error by 2^(n+1) times with each redoubling of the number of intervals:

e_max ≈ r / 2^(k(n+1))    (13)

where n is the power of the polynomial, 2^k is the number of interpolation intervals, and r is a constant produced in the course of a numerical experiment: r = 0.1796348 at n = 1, r = 0.021299 at n = 2, r = 0.0249023 at n = 3. However, implementation of polynomial interpolation in its pure form is not reasonable because of the exponential growth of the number of intervals, and consequently of the hardware costs, for accuracy higher than single. For example, a polynomial of the second order needs 2^23 interpolation intervals for double accuracy (8-byte numbers), which is equivalent to three LUTs with the total capacity of 3·2^23·8 byte = 192 Mbyte. Similarly, a polynomial of the third order will need four LUTs with the total capacity of 4·2^27·16 byte = 8 Gbyte to provide quadruple accuracy (16-byte numbers).

Fig. 1. The error as a function of the number of intervals (interpolation error e_max vs. the order of the number of intervals k = 6…26 for linear, quadratic and cubic polynomials; horizontal lines mark the maximum permissible error for single, double and quadruple precision)

96

I. Osinin

Now let us consider how the volume of the LUTs can be reduced to acceptable values.

3 Correction of the Error Resulting from the Interpolation

It was shown in the previous section that polynomial interpolation in the general case is not enough for the conversion of numbers into the LNS and back because of the large hardware costs of the LUTs. Figure 2 shows the relative interpolation error over the range of each interval for the following types of interpolation polynomials: linear (n = 1), quadratic (n = 2) and cubic (n = 3), where n is the degree of the polynomial. The error distribution of each i-th interpolation interval has an analogous form and differs only by the size of the proportion

P_i = e_i / e_max    (14)

where e_i is the error of the i-th interpolation interval and e_max is the maximum interpolation error.

Fig. 2. The plot of the error function within an interval (relative interpolation error vs. the portion of each interpolation interval, for linear, quadratic and cubic polynomials)


This makes it possible to use interpolation of the second level for the correction of the error inside each interval of interpolation of the first level if we expand formula (7):

y = Σ_{i=0..n} c_i·x^i + P_i · Σ_{j=0..m} d_j·x^j    (15)

where x is the variable of interpolation, c_i and d_j are the interpolation factors of the first and second levels, respectively, and n and m are the degrees of the polynomials of the first and second levels, respectively. Implementation of a polynomial of the m-th degree for error correction allows reducing the total value of the maximum error of conversion by 2^(m+1) times with each doubling of the number of intervals of the second level:

e_max ≈ r / (2^(k(n+1)) · 2^(l(m+1)))    (16)

where n and m are the degrees of the polynomials of the first and second levels, respectively, 2^k and 2^l are the numbers of intervals of interpolation of the first and second levels, respectively, and r is a constant produced in the course of a numerical experiment: r ≈ 0.1796348 at n = 1, r ≈ 0.021299 at n = 2, r ≈ 0.0249023 at n = 3. From the point of view of circuit requirements it is reasonable to choose the same number of intervals for the first and second level interpolation, to provide equal access time to the LUTs. To have equal time of calculating the polynomials of the first and second levels, the case of equal orders fits best. So, Fig. 3 shows the maximum interpolation error as a function of the (equal) number of intervals of the polynomials of two levels with identical degrees n = m, where n ∈ [1, 3]. In this particular case, the minimum order of the number of interpolation intervals that keeps the conversion error below the tolerable one is

k ≥ log2(r / e_max) / (2(n + 1))    (17)

where n is the degree of the polynomial of the first and the second level, respectively, e_max = 2^(−f−0.5) is the value of the maximum tolerable error of conversion, f is the mantissa capacity of the number being converted, and r is the constant produced in the course of the numerical experiment: r = 0.1796348 at n = 1, r = 0.021299 at n = 2, r = 0.0249023 at n = 3.


Fig. 3. Error as a function of the number of intervals of polynomials (interpolation error e_max vs. the order of the number of intervals k = 5…14 for linear, quadratic and cubic polynomials; horizontal lines mark the maximum permissible error for single, double and quadruple precision)

Comparing Figs. 1 and 3, we can see that the implementation of error correction makes it possible to reduce the order of the number of interpolation intervals by more than two times. For example, a quadratic interpolator with quadratic error correction requires seven LUTs (three for each polynomial of the second degree and one for the proportion of the correction value) with the total capacity of 7·2^8·8 byte = 14 kbyte, instead of 192 Mbyte for the quadratic interpolator without correction. Similarly, a polynomial of the third order will need nine LUTs to provide quadruple precision, with the total capacity of 9·2^14·16 byte = 2.25 Mbyte instead of 8 Gbyte for the cubic interpolator without correction. In order to make a conclusion about the optimal solution from the point of view of the hardware costs of the interpolation converter, we should estimate the number of logic elements required for its realization.

4 Hardware Realization of the Codes Converter

It was shown in the previous section that, due to the implementation of the error correction, it becomes possible to use two-level interpolation for the work with numbers of single, double and quadruple precision in the LNS. However, the degrees of the polynomials can differ, which results in different hardware costs of the interpolators. This section describes the circuit realization of a pipelined codes converter from the floating-point format into the LNS.


The converter comprises:
• an interpolator of the n-th order of the first level, the block scheme of which is given in Fig. 4 for n ∈ [1, 3];
• an interpolator of the m-th order of the second level, the block scheme of which is identical to the first-level interpolator of the respective order; its computation result is supplied to the “Correction” input of the first-level interpolator.
Besides integer multipliers and adders, the interpolator contains the LUTs c0…c3 of interpolation factors and the LUT of the error-correction proportion; their addresses are the k most significant bits of the mantissa of the number being converted x, which comes to the data input of the device, where k is the order of the number of interpolation intervals. With the computational pipeline implemented, the result of the conversion is available at the data output in 3 + n machine cycles, where n is the order of the interpolation polynomial. After the pipeline is filled, the conversion time makes one cycle. The reverse converter has an identical realization and differs only in that not the mantissa of the number but the fraction of the number in the logarithmic representation is provided to the data input. The technical realization of the codes converter from the floating point into the LNS was done on the field-programmable gate array (FPGA) Altera Cyclone V. This FPGA is a part of the Cyclone V SoC Development Board, with the help of which we carried out the verification of the device using the example of converting numbers of double precision.

Fig. 4. Block-scheme of the interpolator of the first level (inputs: the mantissa and the “Correction” value; LUTs c0…c3 of interpolation factors and the proportion P; configurations for linear (n = 1), quadratic (n = 2) and cubic (n = 3) polynomials; output: the result)


The analysis of the hardware implementation of the interpolators allowed establishing that their hardware costs, expressed in the number of logic elements of the FPGA, make

S_n = 305·4^n·Σ_{i=0..n} i + 2^(n+4)·(n+1) + 2·2^(n+4)·2^k·(n+1)    (18)

where n is the degree of the polynomial and k is the order of the number of interpolation intervals. The total hardware costs of the codes converter that consists of two levels of interpolation and the LUT of correction make

S = 2·S_n + 2·2^(n+4)·2^k    (19)

where n is the degree of the polynomial, S_n is the hardware cost of the interpolator of the n-th order, and k is the order of the number of the interpolation intervals. The hardware costs of the interpolator, measured in the number of logic elements of the FPGA and required at the minimum to provide single, double and quadruple precision, are given in Fig. 5. With increasing accuracy of the processed numbers, the hardware costs grow exponentially. At the same time, the number of logic elements necessary for the implementation of the coefficient tables grows significantly faster than that of the interpolator itself. As a result, the total hardware costs of an interpolator of the third degree are significantly less than those of an interpolator of the first degree, starting with double accuracy.
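Taking formulas (18) and (19) in one plausible reading of the typeset originals (an assumption, since the printed formulas are ambiguous: S_n = 305·4^n·Σi + 2^(n+4)·(n+1) + 2·2^(n+4)·2^k·(n+1) and S = 2·S_n + 2·2^(n+4)·2^k), the total cost can be tabulated for the minimum interval orders that appear in Table 1:

```python
# Hardware-cost estimate under the assumed reading of (18)-(19)
def S_n(n, k):
    """Logic elements of one interpolator of degree n with 2**k intervals."""
    return 305 * 4**n * sum(range(n + 1)) + 2**(n + 4) * (n + 1) \
        + 2 * 2**(n + 4) * 2**k * (n + 1)

def S(n, k):
    """Total cost: two interpolation levels plus the correction LUT."""
    return 2 * S_n(n, k) + 2 * 2**(n + 4) * 2**k

# minimum interval orders: k=7 (single), k=8 (double), k=14 (quadruple)
for n, k in [(1, 7), (2, 8), (3, 14)]:
    print(n, k, S(n, k))
```

As in Fig. 5, the cost grows steeply with the provided precision.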

Fig. 5. Hardware costs as a function of the provided precision (logic elements for single, double and quadruple precision; linear, quadratic and cubic interpolators)


The solution that minimizes the hardware costs of the interpolation converter is the implementation of the polynomials of the first and second levels:
• of the first order for single precision;
• of the second order for double precision;
• of the third order for quadruple precision.
Table 1 shows the required memory of the interpolation factors for these orders of polynomials.

Table 1. The information on the required memory of the interpolation factors

Precision  | Capacity of the factors, bit | Number of factors in each LUT, units | Total memory of the factors, kbyte
Single     | 32                           | 128 (k = 7)                          | 2.5
Double     | 64                           | 256 (k = 8)                          | 14
Quadruple  | 128                          | 16384 (k = 14)                       | 2304
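The memory figures in Table 1 can be cross-checked: with 2·(n+1)+1 LUTs (n+1 factor tables per level plus one proportion table, per Sect. 3), 2^k factors per LUT, and factor widths of 4, 8 and 16 bytes, the totals reproduce the last column. A quick check (illustration only):

```python
cases = {  # precision: (polynomial degree n, interval order k, width in bytes)
    "single":    (1, 7, 4),
    "double":    (2, 8, 8),
    "quadruple": (3, 14, 16),
}
totals = {}
for name, (n, k, width) in cases.items():
    luts = 2 * (n + 1) + 1            # n+1 factor LUTs per level + 1 proportion LUT
    totals[name] = luts * 2**k * width / 1024   # kbyte
print(totals)
```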

5 Conclusion

This article is devoted to solving the main problem of the LNS: the complexity of performing the addition and subtraction operations. This is achieved by minimizing the hardware costs through the optimization of the ratio between the order of the interpolation polynomial and the interpolation step for single, double and quadruple calculations. Here, calculations mean both the conversion operations into the LNS and back, performed relatively rarely, and the basic arithmetic operations, addition and subtraction, occurring everywhere. It was found that polynomial interpolation in the general case is not enough for the conversion of numbers into the LNS and back because of the exponentially growing hardware costs of the tables of interpolation factors (LUTs); for example, 8 Gbyte of memory would be necessary to ensure quadruple precision. It is possible to reduce the factor memory capacity by implementing the correction of the error present at the first level of the interpolation. An additional polynomial is used for this purpose at the second level, which improves the accuracy of the conversion many-fold. In this case, each doubling of the digit capacity of the floating-point number requires increasing the order of the polynomials of the first and second levels by one, starting with the first order for single precision. The suggested circuit realization of the pipelined converter of the codes from the floating-point format into the LNS and back has made it possible to reduce the hardware costs considerably; for example, only 2.25 Mbyte of the factor memory is required to provide quadruple precision.


The results of the work can be used in the development of arithmetic devices that operate on numbers in the LNS and are optimized by the level of hardware costs. This allows using double and quadruple accuracy of the representation of numbers without causing a sharp increase in hardware complexity or a drop in performance of the basic operations, addition and subtraction, that occur most frequently in the course of calculations.


The Multi-agent Method for Real Time Production Resource-Scheduling Problem

Alexander Lada1,2 and Sergey Smirnov1

1 Institute for the Control of Complex Systems of Russian Academy of Sciences, Samara, Russia
[email protected], [email protected]
2 SEC Smart Transport Systems, Samara, Russia

Abstract. An operational scheduling method of production resources for enterprises is analyzed and proposed. In order to assemble a client's order, each detail has to be produced by performing a number of technological operations on the appropriate production resources. For scheduling and managing the production process, it is necessary to define the whole structure of the final assembly with a technology map. This representation is given by a special ontological definition, and an example is provided for an enterprise producing electrical products. The scheduling process has a high level of complexity due to the variety of types of resources used and the dependence of production processes on many factors and conditions. Real-time events are also considered: each time information arrives about a new fact of processing of a detail on a resource, the current production plan has to be rescheduled. Traditional methods for solving the problem cannot be used in real-time scheduling, which is why a multi-agent approach is proposed for this task. The system developed on the basis of the proposed method is used by a real enterprise producing electrical products in Samara city, where, as a result, the number of delays in the execution of production orders was reduced by 10%.

Keywords: Multi-agent methods · Production resource management · Ontology of the production enterprise · Real-time scheduling



1 Introduction

The problem of optimization of organization and production scheduling was first described in 1939 by Leonid Kantorovich in his work Mathematical Methods of Organization and Production Planning [1]. Since then, it has become one of the most significant tasks of optimization theory. At the current moment, there are many practical cases of such problems that do not have exact solution methods, because in each specific case different criteria of optimality, restrictions and preferences are used: the mutual dependencies of different plans, the interchangeability of various machines

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 103–111, 2019. https://doi.org/10.1007/978-3-030-12072-6_10

104

A. Lada and S. Smirnov

during a particular technological operation, the order of transferring parts to production, the dependence on the terms of delivery of the final product to customers, the current loading of one's own resources and the possibility to cooperate with other production enterprises, the terms of payment and receipt of materials and components, etc. Additionally, the task becomes more complicated by the need to solve it in real time, when the flow of external factual data changes the incoming conditions and requires reconstructing the solution in an acceptable time. In the early 2000s, Gorodetsky, Wooldridge and Jennings demonstrated the possibility of using multi-agent technology for solving such problems [2, 4, 6]. In 2006, Chapman gave an overview of the main approaches applied in production scheduling [3]. With regard to the development of the theory, in 2010 Easley and Kleinberg, based on the ideas of Sandholm, proved the theorem establishing the equivalence of the assignment problem and iterative auctions in the agents' virtual market. As a result, the important advantages of this approach were confirmed: intuitive understandability for users, ease of including new business requirements, the possibility of parallel processing, etc. [5]. In the works of Vittikh, Rzevski and Skobelev, a method of adaptive scheduling was proposed, and the first prototypes and industrial control systems for production resources were created [7–9]. In 2014, Granichin, in his work on NP-hard planning problems of computing on grid networks, proved the possibility of obtaining a quasi-optimal solution in polynomial time [10]. In 2010–2017, Skobelev and Mayorov proposed a situational approach to resource management and developed a multi-agent platform for the creation of intelligent systems, preserving scenes in the context of the situation to improve the quality and effectiveness of planning in the course of changing the situation by events [11–14].
At the same time, the definition of the problem based on an ontology, and the development of methods for obtaining and processing actual data from production resources for solving the problem in real time, were not considered in the described theory. Therefore, it becomes urgent to develop new methods and tools for solving the problem in real time, for creating new management systems that would allow solving a wider range of tasks using an ontological description, building an initial schedule, and adaptively rescheduling resources based on actual events coming from participants in the production process.

2 Setting the Task of Initial Planning of Production Resources

Consider a set of production orders On, n = 1, …, s, and a set of resources (machines and other equipment) Rj, j = 1, …, m. Each order is characterized by a technological map with a description of all the details Dik, k = 1, …, p, which can also consist of other details (there is multilevel nesting). Every detail Dik is described by the materials Mik z, z = 1, …, q, of which it is manufactured, and an ordered set of technological operations TOik j, j = 1, …, m, which


are required to be performed on the material or another part by the resource Rj for a known time TDik j. The technological map is described by the following ontological structure:

[On]
  {[Dn1] [Mn,11] … [Mn,1q]
    {[Dn,11] [Mn,1,11] … [Mn,1,1q]
      {[Dn,1,1,…,p]}
      [TOn,1,11] [R1] [TDn,1,11] … [TOn,1,1m] [Rm] [TDn,1,1m]}
    …
    {[Dn,1p]}
    [TOn,11] [R1] [TDn,11] … [TOn,1m] [Rm] [TDn,1m]}
  …
  {[Dnp]}

For each resource Rj, a daily time window [TRsj; TRfj] is known. It defines the availability of this resource for work (the working shift of the machine), taking into account the workers' work and rest time. The same detail cannot be processed on several resources simultaneously; while one resource processes one detail, a second resource can be used to process another detail. It is required to make a shift-daily work schedule for each resource Rj for the production of all orders On, according to their technological maps, with minimum downtime of the resources Rj.

2.1 Initial Scheduling Method

To solve the problem of initial scheduling, it is proposed to use a “greedy” iterative method, where the details of all orders are distributed among the production resources by the following algorithm. Orders On are processed sequentially from 1 to s. Of all the details Dik of the i-th order, first those that lie at the lowest level of the technological map are selected, then the level above, and so on up to the highest level. Details lying on the same level are processed sequentially, according to their ordinal number in the level, taking into account the successive execution of the technological operations TOik j on the resources Rj. When planning the operations of each detail, the method checks the availability of the materials Mik z required for the production of the detail; if there is not enough material, the detail is skipped. Gaps in the schedule of the required resource are analyzed taking into account the availability window of its current shift [TRsj; TRfj], and first of all the empty spaces created by the previous scheduling operations are filled in, starting with the earliest gap. If the size of a gap is not sufficient to perform the operation, then the next gap is analyzed. In the worst case, if it is not possible to use any available gap, the operation is set at the very end of the schedule. If, in view of this setting, the finish time of processing is later than


the right border of the current resource shift window [TRsj; TRfj], it goes to the next available shift. All the details of subsequent orders are processed in a similar way. The algorithm repeats until all operations of the details of all orders are scheduled on the available resources.

2.2 An Example of Using the Initial Scheduling Method

Let us consider an example of planning two orders, O1 and O2, for the production of a transformer locking unit and a switching lever for a transformer substation. The technological map for these orders has the form:

[O1] Transformer locking unit
  {[D11] Channel [M1,11] Metal sheet 4 mm 1.5 kg
    {[D1,11] Channel-01 [M1,1,11] Metal sheet 4 mm 1 kg
      [TO1,1,11] Sawing of metal [R1] Laser complex [TD1,1,11] 60 sec
      [TO1,1,12] Bending [R2] Hydraulic press [TD1,1,12] 60 sec}
    [TO1,11] Sawing of metal [R1] Laser complex [TD1,11] 80 sec
    [TO1,12] Bending [R2] Hydraulic press [TD1,12] 60 sec}
  {[D12] Channel-02 [M1,21] Metal sheet 3 mm 2.5 kg
    [TO1,23] Cutting [R3] Band saw machine [TD1,23] 120 sec}

[O2] Lever for turning on transformer
  {[D21] Flange [M2,11] Metal sheet 6 mm 0.2 kg
    [TO2,11] Sawing of metal [R1] Laser complex [TD2,11] 50 sec
    [TO2,12] Bending [R2] Hydraulic press [TD2,12] 70 sec}
  {[D22] Sleeve [M2,21] Pipe 30x6 mm 0.2 kg
    [TO2,23] Cutting [R3] Band saw machine [TD2,23] 160 sec}

For simplicity, let us assume that all production resources (in our example there are three of them) are available for work all the time (the time windows [TRsj; TRfj] are not limited), and all the production materials required in the technological maps are available. The distribution of details among resources begins with the deepest level of the technological map; in our example this is the detail Channel-01: it is cut on the laser complex in 60 s, then it goes to the hydraulic press, where it is bent for 60 s. Next, the method goes to the level above and plans the parent detail Channel, which is also cut and bent, but only after the previous child detail. After that, the last detail, Channel-02, is planned. It is processed on another, separate resource and does not depend on the previous two details, so it is put first. The resulting schedule is represented by the diagram (Fig. 1).
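The technological map above lends itself to a straightforward nested data structure. A minimal sketch (class and field names are illustrative, not from the paper), shown here for order O1:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Operation:
    name: str       # technological operation TO
    resource: str   # resource R_j that performs it
    seconds: int    # processing time TD

@dataclass
class Detail:
    name: str
    materials: List[str] = field(default_factory=list)
    children: List["Detail"] = field(default_factory=list)  # nested details
    operations: List[Operation] = field(default_factory=list)

# Order O1 "Transformer locking unit" from the technological map above
channel_01 = Detail("Channel-01", ["Metal sheet 4 mm 1 kg"], [], [
    Operation("Sawing of metal", "R1", 60),
    Operation("Bending", "R2", 60)])
channel = Detail("Channel", ["Metal sheet 4 mm 1.5 kg"], [channel_01], [
    Operation("Sawing of metal", "R1", 80),
    Operation("Bending", "R2", 60)])
channel_02 = Detail("Channel-02", ["Metal sheet 3 mm 2.5 kg"], [], [
    Operation("Cutting", "R3", 120)])
order_O1 = [channel, channel_02]
```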


Fig. 1. The diagram of the initial operations on resources allocation

Proceeding to the planning of order O2, which was received simultaneously with the first, the empty spaces in the already existing schedule are used as much as possible. Order O2 consists of two details on the same level, so the order of their processing is not important. Let us start with the detail Flange: it is cut on the laser complex in 50 s. The method looks for the first empty space on R1; there is one of 60 s duration after the processing of the detail Channel-01, and since there is enough time, the detail is put there. Next, Flange needs to be processed on R2 within 70 s, so the method looks for the first available space on R2 after the end of processing on R1. The empty space of 60 s length at the beginning does not fit the needed time, so the method goes further, finds a place of 80 s length after the detail Channel-01, and puts the Flange there. Let us consider the last detail, Sleeve: it is processed only on R3 for 160 s; the method finds the nearest free space of the required length after Channel-02 and puts it there. The new schedule is shown in Fig. 2.

Fig. 2. The diagram of scheduling order O2
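The gap-filling placement walked through above can be sketched as follows (a simplified illustration that ignores shift windows and material checks; times are in seconds from 0):

```python
def place(schedule, duration, ready):
    """Place an operation of the given duration on a resource schedule
    (a sorted list of (start, end) busy slots), filling the earliest gap
    that starts no earlier than `ready`; append at the end otherwise."""
    t = ready
    for i, (s, e) in enumerate(schedule):
        if t + duration <= s:                 # the gap before slot i fits
            schedule.insert(i, (t, t + duration))
            return t
        t = max(t, e)
    schedule.append((t, t + duration))
    return t

# Orders O1 then O2 as in Figs. 1-2; a detail's next operation may start
# only after its previous one (and its child detail) has finished.
R = {"R1": [], "R2": [], "R3": []}
t = place(R["R1"], 60, 0)            # Channel-01: sawing
t = place(R["R2"], 60, t + 60)       # Channel-01: bending
t = place(R["R1"], 80, t + 60)       # Channel: sawing (after its child)
t = place(R["R2"], 60, t + 80)       # Channel: bending
place(R["R3"], 120, 0)               # Channel-02: cutting
t = place(R["R1"], 50, 0)            # O2 Flange: sawing -> gap after Channel-01
t = place(R["R2"], 70, t + 50)       # O2 Flange: bending -> the 80 s gap
place(R["R3"], 160, 0)               # O2 Sleeve: cutting, after Channel-02
print(R)
```

The resulting slots match the diagram in Fig. 2: Flange fills the 60 s and 80 s gaps on R1 and R2, and Sleeve follows Channel-02 on R3.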

3 The Dynamic Rescheduling by Actual Events

The task of constructing a dynamic schedule based on actual events in real time is more complex than the initial scheduling problem. When the schedule changes dynamically over time, new orders can be added, already known orders can be canceled

108

A. Lada and S. Smirnov

or partially changed, and resources can become unavailable, but most often there are delays during the schedule execution. Our approach is based on associating orders and resources with software agents that interact with each other and, based on the agents' interests, are capable of responding to changes in the composition of orders and resources, identifying conflicts in the schedule, making decisions and interacting with each other to resolve conflicts, and seeking compromise through negotiations (mutual concessions) [12–14]. This allows us to find coordinated solutions and maintain a balance of interests of the agents and the entire system, which in general is a multicriteria objective function. Each resource Rj is associated with a resource agent; each detail Dik is associated with a detail agent. Agents can send and receive messages and make decisions according to their logic and the current situation, which is determined by the state of each agent. The current agent states change when orders arrive and external events are committed. When a new order comes into the system, the detail agents appear according to the technological map of this order. They send out requests for their placement to the resource agents, which in turn analyze their current status and the availability of empty spaces, thus evaluating their schedule and offering options to place the details. The detail agent seeks the option to place itself as early as possible. The resource agent Rj, in turn, tends to be constantly busy and minimizes the idle time within its work shift window [TRsj; TRfj], which is calculated by the formula

Dtimej = TRfj − TRsj − Σ_{k=1..p} TDkj    (1)

where k is the index of the placed details of orders on the resource Rj and TDkj is the processing time of these details. The global target function F is defined through the total idle time of all resources:

F = {P → max; Σ_{j=1..m} Dtimej → min}    (2)

where P is the total number of scheduled details on all resources. When the global function F improves, the current version of the resource allocation becomes the current version of the schedule, after which the agents of unplaced and poorly placed details try to improve their position via negotiations with other details. If, as a result of these negotiations, the global function has improved, the new version of the resource allocation is accepted as the current version of the schedule, and the process is repeated until new actual events arrive or no further agents' negotiations lead to a global improvement.

3.1 The Example of Dynamic Rescheduling by Actual Events

Let us consider the initial schedule described in Fig. 2. It is supposed that it is now somehow possible to record actual events about the completion of processing of each detail on each resource (for example, by using the workers' terminal), after which the schedule needs to be adaptively reconstructed. For simplicity's sake, let us count the time from point 0.


Suppose that at the moment of time T70 there was an event of completion of processing of the detail Channel-01 on the resource R1 (it was expected to end at T60). As a result of the reaction to this event, the whole schedule on the resource R1 is shifted by 10 s to the right. Since the detail Channel-01 is also processed on the resource R2 after R1, the schedule for R2 also shifts to the right by 10 s. With all the changes, the schedule takes the form shown in Fig. 3.

Fig. 3. The diagram of scheduling system reaction for the completion event of T70
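The reaction to the T70 event can be sketched as a simple shift of everything scheduled after the planned completion time (an illustration of the idea, not of the agents' negotiation logic):

```python
def apply_completion(schedule, planned_end, actual_end):
    """React to a completion event: shift every operation scheduled at or
    after the planned end time by the delay (or earlier, on early finish)."""
    delta = actual_end - planned_end
    return [(s + delta, e + delta) if s >= planned_end else (s, e)
            for (s, e) in schedule]

# The T70 event from Fig. 3: Channel-01 on R1 ends at 70 instead of 60,
# so the rest of the R1 schedule moves 10 s to the right.
r1 = [(0, 70), (60, 110), (120, 200)]   # first slot already updated to the fact
r1 = [r1[0]] + apply_completion(r1[1:], 60, 70)
print(r1)   # [(0, 70), (70, 120), (130, 210)]
```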

Suppose that at the moment of time T100, the completion events of processing the detail Flange on the resource R1 and Channel-01 on the resource R2 occurred. As a result of the earlier completion of these operations, it is possible to start the following operations earlier. The new version of the plan is shown in Fig. 4.

Fig. 4. The diagram of the schedule after the rescheduling event of T100


Suppose that at the moment of time T180 there is an event of completion of processing of the detail Channel on the resource R1 and of the detail Flange on the resource R2, but the processing of the detail Channel-02 on the resource R3 is not yet complete. As a result, the schedule on all three resources is reconstructed. The new version of the plan is presented in Fig. 5.

Fig. 5. The diagram of the stabilized schedule after considering all events

When comparing the versions of the schedule after the initial phase (Fig. 2) and after the final actual event (Fig. 5), it can be concluded that the operations are distributed, ensuring minimum downtime of resources according to real-time events.
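The downtime criterion of formulas (1)-(2) can be evaluated for the final schedule as follows (an illustration: the 300-second shift windows are an assumed value, not from the paper):

```python
def idle_time(window, durations):
    """Dtime_j from formula (1): shift window length minus busy time."""
    start, finish = window
    return (finish - start) - sum(durations)

def objective(schedules, windows):
    """Global target F from formula (2): maximize the number of placed
    details, minimize the total idle time over all resources."""
    placed = sum(len(d) for d in schedules.values())
    idle = sum(idle_time(windows[j], schedules[j]) for j in schedules)
    return placed, idle

# Operation durations placed on each resource in the worked example
schedules = {"R1": [60, 50, 80], "R2": [60, 70, 60], "R3": [120, 160]}
windows = {j: (0, 300) for j in schedules}   # assumed shift windows
placed, idle = objective(schedules, windows)
print(placed, idle)
```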

4 The Results

A multi-agent method for real-time production scheduling was developed, based on an ontological description of the performed operations and the specified optimization criterion: minimization of the resources' idle time. This method allows building schedules for processing-related operations on the specified resources according to real-time events. The system developed on the basis of the proposed method is used in the real Samara enterprise LLC “PC” Electrum”, which produces electrical transformers. As a result of using the system, the number of delays during the production process has been reduced by 10%. The developed method is not limited to the scope of the described subject area (electrical production); it is also applicable to other industries with similar production tasks. The scheduling system developed on the basis of the proposed method can work autonomously or can be integrated with other enterprise systems: warehousing, accounting, etc.

The Multi-agent Method for Real Time Production

111

Acknowledgments. The paper has been prepared based on the materials of scientific research within the subsidized state theme of the Institute for Control of Complex Systems of the Russian Academy of Sciences for research and development on the topic: «Research and development of methods and means of analytical design, computer-based knowledge representation, computational algorithms and multi-agent technology in problems of optimizing management processes in complex systems».


Knowledge Base Engineering for Industrial Safety Expertise: A Model-Driven Development Approach Specialization

Aleksandr Yurin1,2, Aleksandr Berman1, Olga Nikolaychuk1, and Nikita Dorodnykh1

1 Matrosov Institute for System Dynamics and Control Theory, Siberian Branch of the Russian Academy of Sciences, 134, Lermontov St., Irkutsk 664033, Russia
[email protected]
2 Irkutsk National Research Technical University, 83, Lermontov St., Irkutsk 664074, Russia

Abstract. Degradation of equipment in many industries is outpacing the rate of its modernization and replacement. As a result, there is a need for rapid inspection and definition of possible hazards and of appropriate measures to avoid catastrophic failures. The effectiveness of some tasks connected with this inspection (an industrial safety expertise) can be improved by rule-based expert systems. This paper presents an end-user-oriented approach for knowledge base engineering. The proposed approach is based on a specialization of the MDA/MDD approach and includes the application of ontologies and conceptual models to represent computation-independent models, a domain-specific notation to improve the design of logical rules, and CLIPS (C Language Integrated Production System) as a programming language for knowledge bases. The research software (Personal Knowledge Base Designer) implements the presented models and algorithms. The proposed approach has been tested at the Irkutsk Research and Design Institute of Chemical and Petroleum Engineering (IrkutskNIIhimmash) to create prototypes of knowledge bases for industrial safety expertise tasks.

Keywords: Industrial safety expertise · Knowledge base · Prototyping · MDD · Transformations
© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 112–124, 2019. https://doi.org/10.1007/978-3-030-12072-6_11

1 Introduction

Degradation of equipment in many industries is outpacing the rate of its modernization and replacement, which makes the problem of improving its safety relevant. This is especially true for oil refining, petrochemical and chemical equipment. In this case, only a thorough inspection of equipment provides a definition of possible hazards and of appropriate measures to avoid catastrophic failures. In this connection, it is important to minimize the costs of maintenance and repair, including the problems of monitoring, diagnosing and forecasting technical conditions, which make up the process of the industrial safety expertise (ISE). Development and application of methods and means


of artificial intelligence, in particular expert systems (ES), can significantly improve the effectiveness of the ISE. The ISE procedure consists of the following main stages:

1. Planning works for the ISE.
2. Analysis of technical documentation.
3. Forming a map of initial data.
4. Development of an ISE program.
5. Technical diagnostics.
6. Analysis (including interpretation) of diagnostics results.
7. Calculation of durability and residual resource.
8. Making decisions on the repair.
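As a simple illustration, the stages and the subset later proposed for ES support (stages 4, 6 and 8) can be encoded as a lookup; the code structure is my own, only the stage names come from the list above.

```python
# Illustrative encoding of the ISE stages; names follow the paper's list.
ISE_STAGES = {
    1: "Planning works for the ISE",
    2: "Analysis of technical documentation",
    3: "Forming a map of initial data",
    4: "Development of an ISE program",
    5: "Technical diagnostics",
    6: "Analysis (including interpretation) of diagnostics results",
    7: "Calculation of durability and residual resource",
    8: "Making decisions on the repair",
}
# Stages the paper proposes to automate with rule-based expert systems.
AUTOMATED_WITH_ES = {4, 6, 8}

def stages_to_automate():
    return [ISE_STAGES[i] for i in sorted(AUTOMATED_WITH_ES)]
```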

Analysis of the ISE stages considered shows that the implementation of stages 4, 6 and 8 requires the processing of a large volume of poorly formalized information. The efficiency of this processing can be improved by expert systems, which allow:
• to interpret the operating conditions and parameters,
• to justify the technical diagnostic program,
• to interpret diagnostic parameters.
In this case, it is proposed to automate these stages using rule-based ESs. A review of software and approaches for maintenance (including monitoring, diagnosis and detection of failures) of heterogeneous technical equipment [1–5] has shown limited use of ESs for the ISE and the existence of ad hoc solutions for separate types of equipment.
The main element of an ES is a knowledge base (KB) that includes a set of systematized knowledge describing the regularities of a subject domain. The problem of improving KB and ES engineering remains critical and can be addressed in different ways: improvement of the existing approaches, creation of new approaches, or development of specialized software and integrated frameworks supporting all phases of the ES life cycle [6, 7]. One of the trends in this area is the use of the principles of cognitive (visual) modeling and design, as well as approaches based on generative programming [8], in particular model-driven development (MDD) and its modifications, for example, model-driven architecture (MDA) [9–12]. MDA/MDD is an approach in software engineering that considers the creation of software on the basis of transformations and interpretations of information models. There are already examples of the application of MDA/MDD for the development of database applications (e.g., Bold for Delphi), agent-oriented monitoring applications [13], embedded systems and other uses [14–17].
Our purpose in this paper is to describe a specialization and application of the MDA/MDD approach for rule-based KB prototyping in the tasks connected with the selected ISE stages. The proposed specialization is based on the main principles of MDA/MDD (e.g., model types and creation stages), but these principles are implemented in the context of intelligent systems engineering.


We suggest the following:
• using ontologies and conceptual models (in the form of UML class diagrams or mind maps) to represent a computation-independent model (typically, this model is not used in MDA/MDD projects);
• using a domain-specific notation, namely RVML (Rule Visual Modeling Language) [18], to improve the design of platform-independent models (RVML is designed for modeling logical rules);
• using CLIPS (C Language Integrated Production System) as a target platform for KB implementation.
The proposed approach is implemented by means of research software that is used in the Irkutsk Research and Design Institute of Chemical and Petroleum Engineering (IrkutskNIIhimmash) for prototyping KBs in the ISE tasks [19].

2 The MDA/MDD Specialization

2.1 The MDA/MDD Models and Software Development Process

In accordance with MDA/MDD principles [9–12], the developed software is represented in the form of the following information models, which define its architecture, functions and implementation features:
• a computation-independent model (CIM): this model hides all implementation details of a particular software platform and merely describes the user requirements;
• a platform-independent model (PIM): this model hides some implementation details and contains platform-independent elements;
• a platform-specific model (PSM): this model takes into account the implementation details that depend on a specific software platform.
Therefore, the process of software development is a step-by-step transition from abstract models to specific ones, with the sequential transformation of the models and generation of the source code and specifications of KBs: CIM → PIM → PSM → CODE. Model transformation is one of the main principles of the MDA/MDD approach and can be considered from different points of view. In particular, there are three types of transformation: Model-to-Model (M2M), Model-to-Text (M2T) and Text-to-Model (T2M). Two types of transformations are identified [20] in accordance with the modeling languages used to describe the source and target models:
• an endogenous transformation is a transformation between models expressed in the same modeling language;
• an exogenous transformation is a transformation between models expressed in different modeling languages.
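The CIM → PIM → PSM → CODE chain can be sketched as a composition of transformation functions. The model encodings below are invented for illustration; only the staging of the pipeline follows the text.

```python
def cim_to_pim(cim):
    # M2M: map domain classes to fact templates and
    # cause-and-effect relationships to rules (illustrative encoding)
    return {
        "templates": [{"name": c} for c in cim["classes"]],
        "rules": [{"if": a, "then": b} for (a, b) in cim["relationships"]],
    }

def pim_to_psm(pim):
    # M2M: enrich with platform-specific details, e.g. rule priorities
    for rule in pim["rules"]:
        rule.setdefault("salience", 0)
    return pim

def psm_to_code(psm):
    # M2T: emit target-language (CLIPS-like) source text
    lines = [f"(deftemplate {t['name']})" for t in psm["templates"]]
    lines += [f"(defrule r{i} ({r['if']}) => (assert ({r['then']})))"
              for i, r in enumerate(psm["rules"])]
    return "\n".join(lines)

def develop(cim):
    # the full CIM -> PIM -> PSM -> CODE chain
    return psm_to_code(pim_to_psm(cim_to_pim(cim)))
```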


The model transformations can also be classified according to the direction of a transformation [20]:
• a vertical transformation is a transformation where the source and target models reside at different abstraction levels;
• a horizontal transformation is a transformation where the source and target models reside at the same abstraction level.
Thus, it is necessary to implement the following sequence of exogenous horizontal transformations: an M2M transformation for CIM → PIM; an M2M transformation for PIM → PSM; an M2T transformation for PSM → CODE. In turn, the MDA/MDD software development process includes steps related to the creation of the specific models (CIM, PIM and PSM). Currently, there are several approaches to the implementation of model transformations:
– using graph grammars (graph rewriting), e.g., Graph REwriting And Transformation (GReAT) [21];
– using model transformation languages, e.g., Query/View/Transformation (QVT), ATLAS Transformation Language (ATL), TMRL [22];
– using declarative and procedural programming languages [23];
– using languages for transforming XML documents, e.g., eXtensible Stylesheet Language Transformations (XSLT).
In this work, an ad hoc solution is adopted, and a direct-manipulation approach [24] is used for the description of transformations.

2.2 The Methodology Proposed

The specialization of the MDA/MDD development process for KBs is presented as the following sequence of steps:

Step 1: Building a model of a subject domain that contains the main concepts and relationships. At this step the end-user creates a CIM. This model can be implemented in the form of an ontology or a UML class diagram. The efficiency of this step can be improved by reusing existing conceptual models created in CASE tools (e.g., Protégé, IHMC CmapTools, and IBM Rational Rose Enterprise) [25]. Most of the software that supports the MDA/MDD approach (e.g., Bold for Delphi) does not implement this step. In the case of intelligent systems engineering this step corresponds to the stage of knowledge conceptualization.

Step 2: Building a platform-independent model (PIM) that describes logical rules. The PIM is obtained as a result of a CIM transformation (CIM → PIM). In the process of the CIM transformation the concepts are mapped into fact templates and rule elements (such as conditions and actions); in addition, the cause-and-effect relationships are transformed into logical rules. Visual modeling is one of the main aspects of the MDA/MDD approach, which traditionally uses UML. It should be noted that in the case of specific software the developers have to use different UML extensions [26] which, in turn, provide the ability to take into consideration features of a subject domain, architectures,


programming languages and formalisms. In this work a domain-specific extension of UML, namely the Rule Visual Modeling Language (RVML), is used to represent logical rules.

Step 3: Building a platform-specific model (PSM) that takes into account the features of a certain knowledge representation language (e.g., CLIPS), such as priorities of rules and default values of slots. The PSM results from the PIM (PIM → PSM).

Step 4: Generating KB code or specifications. At this step the interpretation of RVML diagrams is performed (PSM → CODE). The main results of the interpretation are the source code and the specifications for the interpreter.

Step 5: Testing the obtained program code with special software.

It should be noted that the end-user (an expert or a systems analyst) designs the CIM, the PIM and a part of the PSM. The described sequence of steps almost coincides with the standard MDA/MDD methodology, but the content of the steps is redefined for rule-based KB engineering.

2.3 Models

Let us describe the models used in the specialization of MDA/MDD for rule-based expert systems. The CIM includes a description of a subject domain ontology and an ontology of rule-based expert systems, and can be presented in the form of a MOF (Meta Object Facility) metamodel (Fig. 1).

Fig. 1. A CIM metamodel (a fragment).


The PIM includes two elements intended for representing and modeling an ES architecture and the logical rules of KBs (Fig. 2).

Fig. 2. A PIM metamodel (a fragment).

RVML is used for the creation of the PIM and PSM for KBs. This notation provides a mechanism for describing cause-and-effect relationships at a rather abstract level (for the PIM). In addition, the specification of certain elements of the notation (such as the priority or importance, and the certainty factor) provides the means to create a PSM, especially for CLIPS.

2.4 Model Transformations

Model transformations are based on the comparison of elements of CIM, PIM, PSM and software platform metamodels. In our case the correspondence of elements can be represented in the tabular form (Table 1) [27].

Table 1. A correspondence of elements for CIM, PIM, PSM and CLIPS (a fragment).

Ontology (CIM)              | Knowledge base (PIM, PSM)          | CLIPS code (platform)
Project (name, description) | Knowledge base (name, description) | –
Class (name, description)   | Template (name, description)       | deftemplate
Object (name, description)  | Fact (name, description)           | deffacts
Method                      | –                                  | –
Property                    | Slot (description, value)          | slot
Property value              | Slot value                         | default
Property type               | Slot type                          | type
Relationship                | Rule (nodal element)               | defrule
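The correspondence in Table 1 can be read as a lookup that the transformations consult for each model element; the following encoding (helper names are mine) illustrates it.

```python
# Table 1 as a mapping: CIM element -> (PIM/PSM element, CLIPS construct).
# None marks the "–" cells (no counterpart on that level).
CIM_TO_PLATFORM = {
    "Project":        ("Knowledge base", None),
    "Class":          ("Template", "deftemplate"),
    "Object":         ("Fact", "deffacts"),
    "Method":         (None, None),
    "Property":       ("Slot", "slot"),
    "Property value": ("Slot value", "default"),
    "Property type":  ("Slot type", "type"),
    "Relationship":   ("Rule", "defrule"),
}

def map_element(cim_element):
    """Return the PIM/PSM element and CLIPS construct for a CIM element."""
    return CIM_TO_PLATFORM[cim_element]
```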


Transformations of the models are implemented via a general-purpose programming language (Object Pascal) in the research software Personal Knowledge Base Designer [28].

3 Knowledge Base Engineering for Industrial Safety Expertise

One of the ISE tasks that requires KB engineering is the analysis of diagnostics results and prediction of possible aging (degradation). It is proposed to use a cause-and-effect model of technical state dynamics (TSDM) [29] as a theoretical basis for this task. The TSDM reflects the factors of the design, manufacturing and operation stages of a technical equipment life cycle which result in degradation. The values of these factors (e.g., operating conditions, personnel errors and manufacturing imperfections) define the possible technical states and their structure (including state transitions). Let us consider KB engineering for the task of degradation process prediction.

Step 1: Building a CIM using a TSDM for petrochemical equipment (polyethylene synthesis) by the experts. This model includes classes of states (initial defectiveness, damage, destruction, failure) which are described by a set of parameters with values.
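For illustration, the state classes named above could be encoded as a small transition structure. The transition topology used here (strictly forward degradation) is my assumption, not taken from the paper.

```python
# State classes of the TSDM, per the paper; transitions are assumed
# to progress monotonically from defect to failure (illustrative only).
TRANSITIONS = {
    "initial defectiveness": {"damage"},
    "damage": {"destruction"},
    "destruction": {"failure"},
    "failure": set(),
}

def reachable(state):
    """All state classes reachable from `state` by degradation."""
    seen, frontier = set(), {state}
    while frontier:
        nxt = {t for s in frontier for t in TRANSITIONS[s]} - seen
        seen |= nxt
        frontier = nxt
    return seen
```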

Fig. 3. A CIM for industrial safety expertise (a fragment).


The main result of the step is a subject domain model in the form of an ontology (or a concept map) that contains concepts and relationships, for example [18] (Fig. 3): a mechanism of the degradation process (exist-meh), a construction material (material), a technological environment, etc.

Step 2: Building the PIM. In particular, the following main fact templates were obtained: incident-object, object-properties, technological-heredity, heat-exchange-technological-environment, mechanical-stress-const, technological-environment, material, making-defects, exist-event, exist-meh, exist-kin, exist-dam, exist-des, etc. The main rule templates obtained are:
• IF mechanical-stress-const AND technological-environment AND material AND making-defects THEN exist-event AND exist-meh
• IF exist-event THEN exist-kin
• IF exist-kin AND exist-event THEN exist-event
• IF exist-dam AND exist-event THEN exist-des AND exist-event
• IF exist-event AND exist-des THEN exist-des AND exist-event
• and others

The user defines specific rules based on the rule templates, for example:

IF mechanical-stress-const (cycle-frequency = HIGH) AND technological-environment (pH = ACTIVE; properties-alternation = YES) AND material (type = STEEL; chemical-prop-alloying = LOW-ALLOY) AND making-defects (technological-heredity ?id-th-m1) THEN exist-event (caption = MECHANISM OF CORROSION FATIGUE) AND exist-meh (caption-meh = CORROSION FATIGUE)

Examples of RVML models which correspond to the developed specific rules are presented in [27]. The following degradation mechanisms were considered: corrosion cracking, mechanical fatigue, hydrogen embrittlement and corrosion fatigue. The KB for corrosion cracking contains 14 fact templates, 12 rule templates, 4 initial facts and 20 specific rules.

Step 3: Building the PSM includes the PIM specialization with regard to the features of a certain programming language, in particular CLIPS.
In our case the default values for properties (slots) in fact templates and the certainty factors for specific rules are defined by means of RVML. For example, the specified rule in the PSM is the following (added elements are underlined):

IF mechanical-stress-const (cycle-frequency = HIGH) … THEN exist-event (caption = MECHANISM OF CORROSION FATIGUE; cf = 0.9) AND exist-meh (caption-meh = CORROSION FATIGUE; meh-cf = 0.9)

Step 4: Generating code and specifications, including:
– CLIPS code;
– specifications of the ES for the interpreter, which provides the generation of the user interface for the creation, reading, updating, and deleting (CRUD) of the KB elements.

Step 5: Testing the KB is carried out by the expert by means of logical inferences in PKBD (Fig. 4).
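As an illustration of the code-generation step, an M2T generator might render a rule structure as a CLIPS defrule along the following lines. The helper and its rule encoding are hypothetical; only the template and slot names follow the paper's examples.

```python
def emit_clips_rule(name, conditions, actions):
    """Render a rule structure as CLIPS defrule text (illustrative M2T step).

    `conditions` and `actions` are lists of (template, {slot: value}) pairs.
    """
    def fact(tmpl, slots):
        body = " ".join(f"({k} {v})" for k, v in slots.items())
        return f"({tmpl} {body})" if body else f"({tmpl})"

    cond = "\n  ".join(fact(t, s) for t, s in conditions)
    act = "\n  ".join(f"(assert {fact(t, s)})" for t, s in actions)
    return f"(defrule {name}\n  {cond}\n  =>\n  {act})"

rule = emit_clips_rule(
    "corrosion-fatigue",
    conditions=[("mechanical-stress-const", {"cycle-frequency": "HIGH"}),
                ("material", {"type": "STEEL"})],
    actions=[("exist-meh", {"caption-meh": "CORROSION-FATIGUE", "meh-cf": 0.9})],
)
```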


After successful testing, the KB is used in the decision support system for the tasks of industrial safety expertise (Fig. 5) [19].

4 Discussion

The applicability and effectiveness of the proposed approach were assessed by indirect and direct methods. The time used and semantic adequacy were selected as the main criteria. Semantic adequacy was determined on the basis of expert assessments by specialists of the Laboratory of Information and Telecommunication Technologies for Investigation of Technogenic Safety, ISDCT SB RAS (an academic institute), and the Irkutsk Research and Design Institute of Chemical and Petroleum Engineering (an industrial institute). The complexity and high cost of model experiments evaluating the real time of KB engineering necessitate the application of the indirect method for assessing the time used. The indirect method provides theoretical estimates of the time for KB engineering in accordance with [6]. For this purpose, the conceptualization stage was decomposed into three stages: problem identification, knowledge retrieval and structuring (Table 2).

Fig. 4. Personal knowledge base designer: a GUI example.


Fig. 5. A GUI example of software for industrial safety expertise.

Table 2. Assessment of time used by the indirect method.

Stages of KB engineering         | Standard method (weeks) | MDA/MDD specialization (weeks)
1. Problem identification        | 1–2                     | 0.75–1.5
2. Knowledge retrieval           | 4–12                    | 3–8
3. Knowledge structuring         | 2–4                     | 1.5–3
4. Knowledge formalization       | 4–8                     | 3–6
5. Implementation (codification) | 0.5                     | –
6. Testing                       | 1–2                     | 1–2
Total:                           | 12.5–27.5               | 9.25–20.5

The approach based on the MDA/MDD specialization showed some time reduction due to the exclusion of the implementation (codification) stage (automatic code generation is used) and the exclusion (or reduction of the participation time) of the knowledge engineer at stages 1–4. The direct method allowed us to obtain the time of performing educational tasks [27] and showed a significant reduction in time (up to 60%), especially when creating simple rules.


5 Conclusions

The paper describes the specialization and application of the MDA/MDD approach to rule-based KB engineering for the tasks of the ISE. The specialization includes the use of an ontology as a CIM and the use of RVML to create the PIM and PSM. CLIPS is selected as the target knowledge base programming language. The proposed methodology, models, transformations and a case study are considered.
The proposed approach is designed for non-programmers: experts and systems analysts who are able to develop only two information models: a CIM (ontology) and PIMs (models of a rule-based KB). In this case, it is possible to automate the PIM creation with automated analysis of conceptual models (UML class diagrams and mind maps) [25]. According to the MDA/MDD approach, other models and their transformations are either integrated into the software that implements the approach or created automatically up to the testing step.
The benefits of the proposed approach in comparison with the standard method [6, 7, 17] are as follows:
• a significant reduction of the time for the implementation stage and the elimination of programming errors through automatic code generation;
• a reduction of the time for the identification, conceptualization, and formalization stages due to the use of ontology and cognitive graphics.
Algorithms and software (PKBD) are used by domain specialists of the Irkutsk Research and Design Institute of Chemical and Petroleum Engineering to support the determination of the rate and reasons of petrochemical equipment degradation [19].
The reported study was partially supported by RFBR projects 18-07-01164, 18-08-00560 and 18-37-00006.

References

1. Lee, J.: Modern computer-aided maintenance of manufacturing equipment and systems: review and perspective. Comput. Ind. Eng. 28(4), 793–811 (1995)
2. Nikolaychuk, O.A., Yurin, A.Y.: Computer-aided identification of mechanical system's technical state with the aid of case-based reasoning. Expert Syst. Appl. 34(1), 635–642 (2008)
3. Ruiz, D., Nougues, J.M., Puigjaner, L.: Fault diagnosis support system for complex chemical plants. Comput. Chem. Eng. 25(1), 151–160 (2001)
4. Venkatasubramanian, V., Rengaswamy, R., Yin, K., Kavuri, S.N.: A review of process fault detection and diagnosis: part I: quantitative model-based methods. Comput. Chem. Eng. 27(3), 293–311 (2003)
5. Wang, H.C., Wang, H.S.: A hybrid expert system for equipment failure analysis. Expert Syst. Appl. 28(4), 615–622 (2005)
6. Jackson, P.: Introduction to Expert Systems, 3rd edn. Addison-Wesley, Harlow (1999)
7. Giarratano, J.C., Riley, G.: Expert Systems: Principles and Programming, 4th edn. Course Technology, Boston (2005)


8. Czarnecki, K., Eisenecker, U.: Generative Programming: Methods, Tools, and Applications, 1st edn. Addison-Wesley, Harlow (2000)
9. Frankel, D.: Model Driven Architecture: Applying MDA to Enterprise Computing, 1st edn. Wiley, New York (2003)
10. Kleppe, A., Warmer, J., Bast, W.: MDA Explained: The Model Driven Architecture: Practice and Promise, 1st edn. Addison-Wesley, Harlow (2003)
11. Sami, B., Book, M., Gruhn, V.: Model-Driven Software Development. Springer, Berlin (2005)
12. Schmidt, D.C.: Model-driven engineering. Computer 39(2), 25–31 (2006)
13. Gascueña, J.M., Navarro, E., Fernández-Caballero, A., Martínez-Tomás, R.: Model-to-model and model-to-text: looking for the automation of VigilAgent. Expert Syst. 31(3), 199–212 (2004)
14. Canadas, J., Palma, J., Tunez, S.: InSCo-Gen: a MDD tool for web rule-based applications. In: Web Engineering. ICWE 2009. Lecture Notes in Computer Science, vol. 5648, pp. 523–526 (2009)
15. Distante, D., Pedone, P., Rossi, G., Canfora, G.: Model-driven development of web applications with UWA, MVC and JavaServer faces. In: Web Engineering. ICWE 2007. Lecture Notes in Computer Science, vol. 4607, pp. 457–472 (2007)
16. Dunstan, N.: Generating domain-specific web-based expert systems. Expert Syst. Appl. 35(3), 686–690 (2008)
17. Nofal, M., Fouad, K.M.: Developing web-based semantic expert systems. Int. J. Comput. Sci. 11(1), 103–110 (2014)
18. Yurin, A.Y., Dorodnykh, N.O., Nikolaychuk, O.A., Grishenko, M.A.: Designing rule-based expert systems with the aid of the model-driven development approach. Expert Syst. 35(5), e12291 (2018)
19. Berman, A.F., Nikolaichuk, O.A., Yurin, A.Y., Kuznetsov, K.A.: Support of decision-making based on a production approach in the performance of an industrial safety review. Chem. Pet. Eng. 50(1–2), 730–738 (2015)
20. Mens, T., Gorp, P.V.: A taxonomy of model transformations. Electron. Notes Theor. Comput. Sci. 152, 125–142 (2006)
21. Balasubramanian, D., Narayanan, A., Buskirk, C., Karsai, G.: The graph rewriting and transformation language: GReAT. Electron. Commun. EASST 1, 1–8 (2007)
22. Dorodnykh, N.O., Yurin, A.Y.: A domain-specific language for transformation models. In: CEUR Workshop Proceedings. Information Technologies: Algorithms, Models, Systems (ITAMS 2018), vol. 2221, pp. 70–75 (2018)
23. Berman, A.F., Nikolaychuk, O.A., Yurin, A.Y.: Intelligent planner for control of failures analysis of unique mechanical systems. Expert Syst. Appl. 37(10), 7101–7107 (2010)
24. Czarnecki, K., Helsen, S.: Feature-based survey of model transformation approaches. IBM Syst. J. 45(3), 621–645 (2006)
25. Dorodnykh, N.O., Yurin, A.Y.: Using UML class diagrams for design of knowledge bases of rule-based expert systems. Softw. Eng. 4, 3–9 (2015). (in Russian)
26. Miguel, M., Jourdan, J., Salicki, S.: Practical experiences in the application of MDA. In: UML 2002—The Unified Modeling Language. Lecture Notes in Computer Science, vol. 2460, pp. 128–139 (2002)
27. Yurin, A.Y., Dorodnykh, N.O., Nikolaychuk, O.A., Grishenko, M.A.: Prototyping rule-based expert systems with the aid of model transformations. J. Comput. Sci. 14(5), 680–698 (2018)


28. Yurin, A.Y., Berman, A.F., Nikolaychuk, O.A., Dorodnykh, N.O., Grishenko, M.A.: The domain-specific editor for rule-based knowledge bases. In: 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 1130–1135. IEEE, Opatija (2018)
29. Berman, A.F., Nikolaichuk, O.A.: Technical state space of unique mechanical systems. J. Mach. Manuf. Reliab. 36(1), 10–16 (2007)

Investigation of Hydroelasticity Coaxial Geometrically Irregular and Regular Shells Under Vibration

Anna Kalinina1, Dmitry Kondratov2, Yulia Kondratova3, Lev Mogilevich1, and Victor Popov1

1 Yuri Gagarin State Technical University of Saratov, Saratov, Russia
[email protected], [email protected], [email protected]
2 Russian Presidential Academy of National Economy and Public Administration, 164, Moskovskaya St., Saratov, Russia
[email protected]
3 Saratov State University, Saratov, Russia
[email protected]
Abstract. One of the main problems of modern technology is the reduction of the overall weight of a construction while maintaining its vibration stability. The application of thin-walled construction elements together with a viscous incompressible liquid presents one possible solution to this problem. An urgent task of scientific and practical interest in studying the strength and reliability of mechanical systems used in the aviation and space industry is constructing and studying mathematical models that describe the dynamics of interaction between geometrically regular and ribbed cylindrical shells and a viscous incompressible fluid under various vibration loads. The model of hydroelasticity of coaxial geometrically irregular and regular shells during vibration is investigated. The outer geometrically irregular shell has ribs of finite width. The elastic outer and inner shells are freely supported at the ends. A viscous incompressible fluid completely fills the space between the shells. The amplitude frequency characteristics of the inner and outer shells are found. The influence of the width of the liquid layer and of the viscosity of the liquid on the amplitude frequency characteristics of the shells is shown. The graphs of the amplitude frequency characteristics are given. The research was made under the financial support of RFBR Grants № 18-01-00127-a, 19-01-00014-a and President of the Russian Federation Grant MD-756.2018.8.

Keywords: Coaxial cylindrical shells · Mathematical modeling · Hydroelasticity · Viscous liquid · Geometrically irregular shell · Amplitude frequency characteristics · Vibration

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 125–137, 2019. https://doi.org/10.1007/978-3-030-12072-6_12


1 Introduction

An urgent task of scientific and practical interest in studying the strength and reliability of mechanical systems used in the aviation and space industries is constructing and studying mathematical models that describe the dynamics of interaction between geometrically regular and ribbed cylindrical shells and a viscous incompressible fluid under various vibration loads. Studies on hydroelasticity are fairly widespread; we present a review of some of them. In papers [1–3] the nonlinear dynamics and stability of thin circular cylindrical shells clamped at both ends and subjected to axial fluid flow were analyzed. The study of a mechanical model in the form of a ring-shaped tube formed by the surfaces of two coaxial cylindrical shells interacting with a viscous incompressible fluid was carried out in [4]; in [5] the influence of the type of fixing and of the properties of the fluid on the resonant frequencies and amplitude frequency characteristics of the shells was studied. Amabili [6] investigated the nonlinear dynamics and stability of circular cylindrical shells containing inviscid incompressible liquids, using the nonlinear Donnell shell theory and taking into account the effect of viscous structural damping. In [7] he studied the effect of steady viscous forces on oscillations of shells with internal and annular flow, considering the Navier-Stokes equations. Paidoussis and Misra [8, 9] investigated the dynamic and stability characteristics of coaxial cylindrical shells containing a viscous incompressible liquid flow, the shell movements being described by the Flügge thin shell equations; the fluctuating fluid forces associated with shell vibrations are formulated using generalized Fourier transform methods. Chung, Turula and Mulcahy presented analytical and experimental methods for evaluating vibration characteristics of cylindrical shells, such as the thermal liner of a fast-flow reaction vessel [10].
The problems of interaction between a viscous incompressible liquid and elastic cylindrical shells of finite length in the presence of vibration were considered in [11–15]. In particular, [14] studies the amplitude-frequency characteristics (AFC) of two embedded elastic cylindrical shells containing a layer of viscous incompressible liquid with free outflow under vibration of the mechanical system. In [16], the problems of dynamic interaction of multilayer coaxial elastic cylindrical shells, freely supported at the ends and interacting with a viscous incompressible fluid between them, were considered under vibration conditions. However, the hydroelastic interaction of elastic shells and a viscous incompressible liquid that simultaneously takes into account the inertia of the viscous liquid motion, the effects of mechanical vibration, the elasticity of the outer geometrically irregular cylindrical shell of finite length, and the free support of the shells at the ends of the mechanical system has not been investigated up to now. Therefore, the present paper focuses on studying the amplitude-frequency characteristics of the elastic shells.

Investigation of Hydroelasticity Coaxial Geometrically

127

Summing up, it can be concluded that studying the interaction dynamics of thin-walled cylindrical shells and a viscous incompressible liquid with regard to vibration is an important issue. Particular attention should be paid to the methods and theories used for solving the hydrodynamic problems.

2 Statement of the Problem

The mechanical system (Fig. 1) is formed by two coaxial cylindrical shells of finite length, freely supported at the edges, which interact with a layer of viscous incompressible fluid. The outer elastic geometrically irregular cylindrical shell 1 with internal radius $R_1$ is freely supported at the ends. The outer surface of the outer shell of the pipe is a geometrically irregular shell with $n$ stiffening ribs; thus, the thickness of the outer shell varies in steps. The inner shell 2 with outer radius $R_2$ is an absolutely rigid cylinder. The cylindrical gap between the outer and inner shells is filled with viscous incompressible fluid 3. The surface of the outer shell and the surface of the inner cylinder form a cylinder in a cylinder of length $l$. The radial clearance of the cylindrical slot is $\delta = R_1 - R_2 \ll R_2$. There is no movement of the inner shell relative to the outer one at the ends. The free support of the shells is ensured by the grips 4. We assume that the temperature is constant, i.e. temperature effects in the fluid and the shells are neglected [17, 18]. The system is under harmonic vibration along the axes $O_1x_1$, $O_1z_1$. The entire mechanical system is mounted on base 5.

Fig. 1. Mechanical system


3 The Theory and Solution

Let us present the mathematical model of the mechanical system considered above in dimensionless variables. The coordinates, the time and the fluid velocity components are scaled as
$$\xi=\frac{r-R_2}{\delta},\quad \zeta=\frac{2y}{l},\quad \tau=\omega t,\quad V_r=w_m^{(1)}\omega\,u_\xi,\quad V_\theta=w_m^{(1)}\omega\,u_\theta,\quad V_y=w_m^{(1)}\omega\,\frac{l}{2\delta}\,u_\zeta,$$
the shell displacements as $u_1^{(i)}=u_m^{(i)}U_1^{(i)}$, $u_2^{(i)}=v_m^{(i)}U_2^{(i)}$, $u_3^{(i)}=w_m^{(i)}U_3^{(i)}$, and the small parameters and the Reynolds number are
$$\psi=\frac{\delta}{R_2}\ll 1,\qquad \lambda^{(i)}=\frac{w_m^{(i)}}{\delta}\ll 1,\qquad \mathrm{Re}=\frac{\delta^2\omega}{\nu},\qquad i=1,2;\eqno(1)$$
the pressure is represented as the sum of a constant level $p_0$, the dimensionless pressure $P$ and vibration-induced terms proportional to $f_{x0}''(\tau)\sin\theta$ and $f_{z0}''(\tau)\cos\theta$.

The mathematical model in the dimensionless variables (1), taking into account the smallness of the parameters $\psi$ (the relative width of the fluid layer) and $\lambda^{(1)}$, $\lambda^{(2)}$ (characterizing the relative deflections of the external geometrically irregular and the internal geometrically regular cylindrical shells), takes the following form.

Hydrodynamic equations:
$$\frac{\partial P_0}{\partial\xi}=0,\qquad
\mathrm{Re}\,\frac{\partial u_{\zeta 0}}{\partial\tau}+\frac{\partial P_0}{\partial\zeta}-\frac{\partial^2 u_{\zeta 0}}{\partial\xi^2}=0,\qquad
\mathrm{Re}\,\frac{\partial u_{\theta 0}}{\partial\tau}+\frac{1}{r}\frac{\partial P_0}{\partial\theta}-\frac{\partial^2 u_{\theta 0}}{\partial\xi^2}=0,$$
$$\frac{\partial u_\xi}{\partial\xi}+\frac{\partial u_\theta}{\partial\theta}+\frac{\partial u_\zeta}{\partial\zeta}=0.\eqno(2)$$

Equations of dynamics of the external geometrically irregular shell:

the three equations are written in the longitudinal, circumferential and normal displacements $U_1^{(1)}$, $U_2^{(1)}$, $U_3^{(1)}$; the step functions $k_1(\zeta)$, $k_2(\zeta)$, $k_3(\zeta)$ describe the stepwise change of the shell thickness at the stiffening ribs, and the right-hand sides contain the inertial terms with the factor $R_2^2\omega^2/(c^{(1)})^2$ together with, in the normal-direction equation, the fluid pressure $P_0$ and the vibration accelerations $f_{x0}''(\tau)\sin\theta$, $f_{z0}''(\tau)\cos\theta$. $\;(3)$

The inner geometrically regular shell dynamics equations:

the three equations for the internal geometrically regular shell of constant thickness $h_0^{(2)}$ are written in the displacements $U_1^{(2)}$, $U_2^{(2)}$, $U_3^{(2)}$; their right-hand sides contain the vibration accelerations $(W_{1x_1}/\omega^2)\sin\theta$, $(W_{1z_1}/\omega^2)\cos\theta$ and, in the normal-direction equation, the fluid pressure evaluated at $\xi=\xi^{(i)}$. $\;(4)$

Here $\xi^{(1)}=1+\lambda^{(1)}U_3^{(1)}$, $\xi^{(2)}=\lambda^{(2)}U_3^{(2)}$.

The boundary conditions for the hydrodynamic equations:
$$u_\xi=\frac{\partial U_3^{(1)}}{\partial\tau},\quad u_\theta=0,\quad u_\zeta=0\ \text{at}\ \xi=1;\qquad
u_\xi=\frac{w_m^{(2)}}{w_m^{(1)}}\frac{\partial U_3^{(2)}}{\partial\tau},\quad u_\theta=0,\quad u_\zeta=0\ \text{at}\ \xi=0.$$

The boundary conditions for the elastic shells:
$$U_3^{(i)}=0,\quad U_2^{(i)}=0,\quad \frac{\partial^2 U_3^{(i)}}{\partial\zeta^2}=0,\quad \frac{\partial U_1^{(i)}}{\partial\zeta}=0\ \text{at}\ \zeta=\pm 1,\ i=1,2.\eqno(5)$$

By solving the hydrodynamics Eq. (2) under the assumption of time-harmonic vibration, we find the components of the liquid velocity and the pressure.

The velocity components $u_{\xi 0}$, $u_{\theta 0}$, $u_{\zeta 0}$ and the pressure $P_0$ are obtained as quadratures over $\zeta$ of the difference of the dimensionless shell deflection velocities $\partial u_{30}^{(1)}/\partial\tau-\partial u_{30}^{(2)}/\partial\tau$ and accelerations $\partial^2 u_{30}^{(1)}/\partial\tau^2-\partial^2 u_{30}^{(2)}/\partial\tau^2$; the integration kernels are built from the functions $L_k(\xi)$, $k=1,\dots,4$, defined below, weighted by the factors $12\gamma/\varepsilon^2$ and $2\alpha$, and from the hyperbolic functions $\operatorname{sh}$, $\operatorname{ch}$ of the arguments $\sigma(\zeta-\varrho)$ and $\sigma\zeta$,

where
$$L_1(\xi)=\frac{1}{2A}\big\{[1-F_1(\varepsilon\xi)]A+F_2(\varepsilon\xi)B-4F_4(\varepsilon\xi)C\big\},\qquad
L_2(\xi)=\frac{1}{A}\big\{F_3(\varepsilon\xi)A-F_4(\varepsilon\xi)B-4F_2(\varepsilon\xi)C\big\},$$
$$L_3(\xi)=\frac{1}{2A}\big\{F_1(\varepsilon\xi)A+F_2(\varepsilon\xi)[B+F_2(\varepsilon)]-4F_4(\varepsilon\xi)[C-F_4(\varepsilon)]\big\},$$
$$L_4(\xi)=\frac{1}{A}\big\{F_3(\varepsilon\xi)A-F_4(\varepsilon\xi)[B+F_2(\varepsilon)]-F_2(\varepsilon\xi)[C-F_4(\varepsilon)]\big\},\qquad
F_k(\varepsilon)=F_k(\varepsilon\xi)\big|_{\xi=1},\ k=1,\dots,4,$$
$$A=F_2^2(\varepsilon)+4F_4^2(\varepsilon),\qquad
B=4F_3(\varepsilon)F_4(\varepsilon)+F_1(\varepsilon)F_2(\varepsilon)-F_2(\varepsilon),\qquad
C=F_2(\varepsilon)F_3(\varepsilon)-F_1(\varepsilon)F_4(\varepsilon)+F_4(\varepsilon),$$
$$F_1(\varepsilon\xi)=\operatorname{ch}\varepsilon\xi\cdot\cos\varepsilon\xi,\qquad
F_2(\varepsilon\xi)=\tfrac12(\operatorname{ch}\varepsilon\xi\sin\varepsilon\xi+\operatorname{sh}\varepsilon\xi\cos\varepsilon\xi),$$
$$F_3(\varepsilon\xi)=\tfrac12\operatorname{sh}\varepsilon\xi\cdot\sin\varepsilon\xi,\qquad
F_4(\varepsilon\xi)=\tfrac14(\operatorname{ch}\varepsilon\xi\sin\varepsilon\xi-\operatorname{sh}\varepsilon\xi\cos\varepsilon\xi),\qquad
\varepsilon=\delta\sqrt{\frac{\omega}{2\nu}}.$$

By applying the Bubnov–Galerkin method, we choose the form of the solution of the dynamics equations of the outer geometrically irregular and the inner geometrically regular shells as
$$u_{10}^{(i)}=u_m^{(i)}U_{10}^{(i)}=u_m^{(i)}\sum_{k=1}^{\infty}\sin\frac{(2k-1)\pi\zeta}{2}\Big\{\big(a_{10Ck}^{(i)}\cos\theta+a_{10Sk}^{(i)}\sin\theta\big)\sin\big(\tau+\varphi_{u1k}^{(i)}\big)+a_{10Ok}^{(i)}\Big\},$$
$$u_{20}^{(i)}=v_m^{(i)}U_{20}^{(i)}=v_m^{(i)}\sum_{k=1}^{\infty}\cos\frac{(2k-1)\pi\zeta}{2}\Big\{\big(a_{20Sk}^{(i)}\cos\theta+a_{20Ck}^{(i)}\sin\theta+a_{20Ok}^{(i)}\big)\sin\big(\tau+\varphi_{u2k}^{(i)}\big)\Big\},$$
$$u_{30}^{(i)}=w_m^{(i)}U_{30}^{(i)}=w_m^{(i)}\sum_{k=1}^{\infty}\cos\frac{(2k-1)\pi\zeta}{2}\Big\{\big(a_{30Ck}^{(i)}\cos\theta+a_{30Sk}^{(i)}\sin\theta\big)\sin\big(\tau+\varphi_{u3k}^{(i)}\big)+a_{30Ok}^{(i)}\Big\}.$$

As a result of solving the shell dynamics equations, we obtain the elastic displacements of the outer geometrically irregular and the inner geometrically regular elastic cylindrical shells, as well as their amplitude-frequency characteristics:
$$A^{(i)}(\omega)=\frac{\sqrt{\big(A_n^{(i)}\big)^2+\big(B_n^{(i)}\big)^2}}{D_n^{(i)}},\qquad
\Psi^{(i)}=\operatorname{arctg}\frac{A_n^{(i)}}{B_n^{(i)}},\qquad i=1,2,\eqno(6)$$


where
$$A_n^{(i)}=\frac{\big(R^{(i)}\big)^2}{c^{(i)}\rho_0^{(i)}h_0^{(i)}}A_{11}^{(i)},\qquad
B_n^{(i)}=\frac{\big(R^{(i)}\big)^2}{c^{(i)}\rho_0^{(i)}h_0^{(i)}}A_{13}^{(i)},$$
and the coefficients $A_{11}^{(i)}$, $A_{13}^{(i)}$, $A_{33}^{(i)}$, $B_0^{(i)}$ and the determinant $D_n^{(i)}$ are algebraic combinations of the auxiliary quantities $At_{11},\dots,At_{41}$ arising in the course of the Bubnov–Galerkin procedure.
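As a numerical illustration, formula (6) reduces each frequency point to elementary operations once the coefficients $A_n^{(i)}$, $B_n^{(i)}$, $D_n^{(i)}$ have been assembled. A minimal sketch in Python (the coefficient values below are placeholders for illustration, not results from the paper):

```python
import math

def afc_point(a_n: float, b_n: float, d_n: float):
    """Amplitude A(omega) and phase Psi for one shell at one frequency,
    following the structure of formula (6):
    A = sqrt(An^2 + Bn^2) / Dn,  Psi = arctg(An / Bn)."""
    amplitude = math.sqrt(a_n**2 + b_n**2) / d_n
    phase = math.atan2(a_n, b_n)  # arctg(An/Bn), quadrant-aware
    return amplitude, phase

# Placeholder coefficients (NOT values from the paper):
amplitude, phase = afc_point(a_n=3.0, b_n=4.0, d_n=10.0)
print(amplitude)  # 0.5
print(phase)
```

Sweeping such a computation over a frequency grid yields the AFC curves of the type shown in Figs. 2–4.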

4 Calculation Results

The numerical solution for the amplitude-frequency characteristics of the mathematical model with an outer ribbed elastic shell and an inner elastic regular shell was carried out with the following parameters: $R_2 = 2\cdot10^{-1}$ m, $l = 20\cdot10^{-1}$ m, $\delta = 2\cdot10^{-2}$ m, $\rho = 10^3$ kg/m³, $\nu = 10^{-4}$ m²/s, $h_0^{(1)} = 10^{-2}$ m, $h_0^{(2)} = 10^{-2}$ m, $\mu_0^{(1)} = 0.25$, $\mu_0^{(2)} = 0.25$, $\rho_0^{(1)} = 7.4\cdot10^3$ kg/m³, $\rho_0^{(2)} = 7.4\cdot10^3$ kg/m³, $E^{(1)} = 1.6\cdot10^{11}$ Pa, $E^{(2)} = 1.6\cdot10^{11}$ Pa, $e_j = 0.02$ m, $h_{pj} = 2.2h_0$. Significant resonant frequencies with large values of the dynamic coefficient were considered; the frequency range of the calculations is from 0 to 20,000 Hz. As a result of the studies, it was obtained that the values of the resonant frequencies for the amplitude-frequency characteristics $A^{(1)}(\omega)$ and $A^{(2)}(\omega)$ coincide (Fig. 2), i.e. the amplitude-frequency characteristics of the deflections of the inner and outer elastic shells are almost the same.
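For this parameter set, the small parameter and the vibration Reynolds number are easy to evaluate; a short sketch, assuming the standard scalings $\mathrm{Re}=\delta^2\omega/\nu$ and $\varepsilon=\delta\sqrt{\omega/(2\nu)}$ used in this line of work (the trial frequency is an arbitrary point inside the stated range, not a resonance from the paper):

```python
import math

# Geometry and fluid data from Sect. 4
R2 = 2e-1      # inner cylinder radius, m
delta = 2e-2   # radial gap width, m
nu = 1e-4      # kinematic viscosity, m^2/s

f = 1000.0               # trial frequency, Hz (arbitrary, for illustration)
omega = 2 * math.pi * f  # angular frequency, rad/s

psi = delta / R2                           # relative gap width, must be << 1
Re = delta**2 * omega / nu                 # vibration Reynolds number (assumed form)
eps = delta * math.sqrt(omega / (2 * nu))  # argument scale of the F_k functions

print(psi)  # 0.1
print(Re)
print(eps)
```

The check confirms that the thin-layer assumption $\psi \ll 1$ holds for the chosen geometry throughout the stated frequency range.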

Fig. 2. Graphs of $A^{(1)}$ and $A^{(2)}$ for the external and internal elastic shells


Thus, the resulting new oscillatory system with an external ribbed elastic shell and an internal elastic regular shell interacting with a layer of viscous incompressible liquid begins to function as a whole. This takes place because the adopted model of an incompressible fluid transmits disturbances instantaneously. Each shell contributes its resonant frequencies to the overall picture, and these differ from the resonant frequencies of the individual shells. The calculations showed that reducing the width of the cylindrical gap decreases the AFC amplitudes, which is due to the change of the damping properties of the liquid; the resonant frequencies are shifted to the low-frequency region (Fig. 3).

Fig. 3. Graphs of $A^{(2)}$ for the standard gap $\delta$ (1) and for the gap $\delta$ changed 2-fold (2), for the internal and external elastic shells

A sharp decrease in liquid viscosity leads to a sharp increase of the AFC amplitudes due to the change of the liquid damping properties (Fig. 4).

Fig. 4. Graphs of $A^{(2)}$ with the standard liquid viscosity (1) and with the liquid viscosity reduced 100-fold (2)


5 Summary and Conclusion

The approaches developed and tested in the paper can be used for constructing and studying models of the dynamic interaction among elastic structural elements, absolutely rigid bodies and a viscous incompressible liquid. In particular, the results presented in the paper can be used for constructing and studying models of dynamic processes in mechanical systems consisting of geometrically regular and geometrically irregular elastic cylindrical shells, absolutely rigid bodies and a viscous incompressible liquid. The developed mathematical model, based on the known parameters of the mechanical system and the specified requirements of strength and durability, makes it possible to choose optimal system parameters already at the design stage.

Acknowledgment. The research was made under the financial support of RFBR Grants № 18-01-00127-a and 19-01-00014-a and the President of the Russian Federation Grant MD-756.2018.8.

References

1. Karagiozis, K.N., Païdoussis, M.P., Misra, A.K., Grinevich, E.: An experimental study of the nonlinear dynamics of cylindrical shells with clamped ends subjected to axial flow. J. Fluids Struct. 20, 801–816 (2005)
2. Karagiozis, K.N., Païdoussis, M.P., Amabili, M., Misra, A.K.: Nonlinear stability of cylindrical shells subjected to axial flow: theory and experiments. J. Sound Vib. 309, 637–676 (2008)
3. Misra, A.K., Wong, S.S.T., Païdoussis, M.P.: Dynamics and stability of pinned-clamped and clamped-pinned cylindrical shells conveying fluid. J. Fluids Struct. 15, 1153–1166 (2001)
4. Plaksina, I.V., Kondratov, D.V., Kondratova, Y.N., Popov, V.S.: Problems of hydroelasticity for a pipe of annular cross-section with an elastic, geometrically irregular outer shell under pressure. Izvestija Saratov University. New Series. Mathematics. Mechanics. Informatics 13(3), 70–76 (2013)
5. Kondratov, D.V., Kondratova, Y.N., Mogilevich, L.I., Plaksina, I.V.: Hydroelasticity of an elastic cylindrical pipe of annular cross-section with its various fastenings. Vestn. Saratov State Tech. Univ. 41(59), 29–37 (2011)
6. Amabili, M., Pellicano, F., Païdoussis, M.P.: Non-linear dynamics and stability of circular cylindrical shells containing flowing fluid. Part II: large-amplitude vibrations without flow. J. Sound Vib. 228, 1103–1124 (1999)
7. Amabili, M., Garziera, R.: Vibrations of circular cylindrical shells with nonuniform constraints, elastic bed and added mass. Part III: steady viscous effects on shells conveying fluid. J. Fluids Struct. 16(6), 795–809 (2002)
8. Païdoussis, M.P., Misra, A.K., Chan, S.P.: Dynamics and stability of coaxial cylindrical shells conveying viscous fluid. J. Appl. Mech. 52, 389–396 (1985)
9. Païdoussis, M.P., Misra, A.K., Nguyen, V.B.: Internal- and annular-flow-induced instabilities of a clamped-clamped or cantilevered cylindrical shell in a coaxial conduit: the effects of system parameters. J. Sound Vib. 159(2), 193–205 (1992)


10. Chung, H., Turula, P., Mulcahy, T.M., Jendrzejczyk, J.A.: Analysis of a cylindrical shell vibrating in a cylindrical fluid region. Nucl. Eng. Des. 63, 109–120 (1981)
11. Kondratov, D.V., Kondratova, J.N., Mogilevich, L.I., Rabinsky, L.N., Kuznetsova, E.L.: Mathematical model of elastic ribbed shell dynamics interaction with viscous liquid pulsating layer. Appl. Math. Sci. 9(69–72), 3525–3531 (2015)
12. Kondratov, D.V., Kondratova, J.N., Mogilevich, L.I.: Oscillating laminar fluid flow in a cylindrical elastic pipe of annular cross-section. Fluid Dyn. 44(4), 528–539 (2009)
13. Antsiferov, S.A., Kondratov, D.V., Mogilevich, L.I.: Perturbing moments in a floating gyroscope with elastic device housing on a vibrating base in the case of a nonsymmetric end outflow. Mech. Solids 44(3), 352–360 (2009)
14. Kondratov, D.V., Kondratova, J.N., Mogilevich, L.I.: Studies of the amplitude frequency characteristics of oscillations of the tube elastic walls of a circular profile during pulsed motion of a viscous fluid under the conditions of rigid jamming on the butt-ends. J. Mach. Manuf. Reliab. 38(3), 229–234 (2009)
15. Mogilevich, L.I., Popov, V.S.: Investigation of the interaction between a viscous incompressible fluid layer and walls of a channel formed by coaxial vibrating discs. Fluid Dyn. 46(3), 375–388 (2011)
16. Kondratov, D.V., Elistratova, O.V., Mogilevich, L.I., Kondratova, Yu.N.: Hydroelasticity of three elastic coaxial shells interacting with viscous incompressible fluids between them under vibration. Vibroeng. Procedia 18, 157–163 (2018)
17. Kondratov, D.V., Kalinina, A.V.: Examination of processes of a hydroelasticity of a ridge pipe ring a lateral view at action of vibration. Trudy MAI 78, 4 (2014)
18. Kalinina, A.V., Kondratov, D.V., Kondratova, J.N., Plaksina, I.V., Kuznetsova, E.L.: Mathematical modelling of hydroelasticity processes in a ribbed pipe of ring-like profile at pressure pulsating. Izvestija Tulskogo gosudarstvennogo universiteta, Technicheskie nauki 7, Part 1, 40–55 (2015)

Design Automation of Digital In-Process Models of Parts of Aircraft Structures

Kate Tairova¹, Vadim Shiskin¹, and Leonid Kamalov²

¹ Institute of Aviation Technology and Design and Management of Ulyanovsk State Technical University, Sozidateley ave., 13a, 432072 Ulyanovsk, Russian Federation
[email protected], [email protected]
² ASCON JSC, Sozidateley ave., 11a, 432072 Ulyanovsk, Russian Federation
[email protected]

Abstract. The paper considers the problems of the development of digital in-process models (DIPM) of parts of aircraft structures. The aircraft preproduction process is overviewed and the place of the design models and DIPMs is described. DIPM development is based upon the design models, yet the design model is given without the design history, in the form of an associated copy or an exchange file. A design methodology for the models is proposed, and its software implementation in the form of an application for a CAD platform is discussed.

Keywords: Aircraft structure · In-process models · Digital mock-up · Lifecycle management

1 Introduction

1.1 The Digital Models and Mock-Ups in Aircraft Construction

The number of aircraft structure parts forming a supporting structure exceeds 100,000 pcs. The digital mock-up (DMU) of a product is developed at each lifecycle stage [1]. DMU development is a rather labour-consuming process, as the geometrical form is defined by the master-geometry of the aircraft, which is designed according to aerodynamic requirements. The master-geometry is represented by a combination of second-degree surfaces. It is also necessary to loft parts to each other to provide assemblability, which requires high design accuracy. All of this imposes high requirements on the design tools and on the qualification of the designer, his professional experience and his skills in a CAD system.

This study was supported by the Ministry of Education and Science of Russia in the framework of project № 2.4760.2017/8.9. © Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 138–148, 2019. https://doi.org/10.1007/978-3-030-12072-6_13


Currently, the DMU is used to define the aircraft structure. Each part has to have a digital representation and has to be lofted with the aircraft geometry. It is also necessary to provide the assemblability of the product and to check its kinematics. Further, the DMU becomes the basis of the digital models used at later lifecycle stages. The DMU is designed according to a certain methodology and design rules. It is transferred to the production site without the design history. Firstly, this makes the resulting DMU lighter. Secondly, it protects the DMU from unauthorized modification and provides the integrity of the assembly. However, this approach does not help to make a rapid transformation of the DMU into the digital in-process model (DIPM). After the DMU has been approved, the DIPMs have to be built from scratch using the geometrical data without the design history. All of this makes the DIPM design activity more complex and demands the highest engineering qualification.

At the technological preproduction stage, the DIPM is developed. It represents the digital twin of the part at each production stage and is distinguished from the DMU by its geometry. In addition, the DIPM contains the data which the manufacturing shop requires to produce the part. To hold the DIPM data in the PLM software TeamCenter, which is used by the IAZ affiliation of PAO "Irkut Corporation", a special object is created, named the technological version of a part [2]. The usage of the TeMP software (Avisatar-SP, PAO "Ilushin") implies a complex of electronic product and process models: the design electronic model and the process electronic model. The TeMP software is used by a number of aircraft-building companies, including the affiliation of PAO "Sukhoi Company" "KnAAZ of U. A. Gagarin", AO "GSS", and AO "GNKC of M.F. Khrunichev" [3]. The necessity of using DIPMs to implement the loftless aircraft production technology is widely discussed in a number of study books.

The mentioned factor of the process is typical for worldwide aircraft-building companies, which use information models and systems supporting the product at each production stage. The TeamCenter Manufacturing software has the Consumed Item and DIPM terms. The Consumed Item is an object of the product structure tree which arrives at the final assembly operation as a part. The DIPM describes the status of the object of the product structure at various stages of its production [4]. TeamCenter Engineering has an additional data set UGMASTER called the Alternative representation, or Altrep, to define geometry different from the part's. This dataset cannot be an assembly, but its quantity is not limited in the scope of one revision.

Every design bureau applies its own methodology of DMU creation, while a methodology of DIPM design is not presented. This brings up the situation where every designer uses his own approach to build the models. Different ways of model building may lead to the same result from the geometrical point of view, but the modification of the model may be troublesome: the design history may be excessive, the operations used may be inadequate, and building errors and extra edges may appear. Such DIPMs may cause problems in production, as the CAM software computes the tool trajectory for each piece of the surface separately. As a result, the product cannot be assembled, the production cycle is halted, and the part is defective.


The creation of a methodology that provides uniform DIPM building with the shortest design history would allow a unified approach to each model and avoid surface deviations. To make sure that the methodology is applied correctly, it is necessary to develop the tools of its automated implementation in the form of a software step-by-step DIPM creation master. The authors propose a methodology based upon extensive experience of DIPM creation and upon modern methods of modelling and lifecycle management. The authors intend to raise the efficiency of the design procedure for DIPMs of parts of aircraft structures, using resultative design strategies in the CAD environment. To realize the objective, it is necessary to analyse the building processes of the DMU and the DIPM and the approaches to lifecycle management. In addition, it is necessary to develop the methodology of DIPM development. Finally, the software implementation must be carried out in the CAD environment using the resultative design strategies.

1.2 The Research Methods

The research methods include modern approaches in design automation, branch-wise reviews and analytics, the methods of mathematical modeling and logic, the theory of sets, the business-process analysis methods, and experimental research.

2 The Lifecycle Management Processes Analysis

The lifecycle management is based upon the business process model of DMU and DIPM development. Currently, the DMU is the design specification, created according to ISO 17599:2015. The requirements of consistency should conform to ISO 10303-1, ISO 10303-11, ISO 10303-42 and ISO 10303-201. A DMU example is shown in Fig. 1. The DIPM is an alternative digital model of a product, based on the DMU, that contains the data required to perform a production process, taking into account the specifics of the producing plant. The DIPM is described in the GOST standard "Digital Mock-Up of a product", which is currently under development. DIPM building is an important step of the aircraft preproduction stage at the point of model transfer to production. The DIPM is developed subject to the requirements of the manufacturing shop. The DIPM is used in manufacturing:

• as the mathematical model for a CNC production center;
• to model and visualize the manufacturing process for planning and optimization;
• to calculate the labour input and the materials input in the manufacturing expenses;
• to determine the personnel, equipment and materials requirements.

For a DIPM example see Fig. 2. The part is presented in a developed view, the bending lines are shown, and the machining stock is added.


Fig. 1. The example of DMU.

Fig. 2. The example of a DIPM.



The DMU and DIPM development are significant tasks of the preproduction process. The speed and quality of these models define the duration of the design process and the result of the manufacturing process. If the models are performed properly, the manufacturing process is executed without halts, the probability of defectives is low, and the maturation at the assembling stage is minimal. The DMU is passed to the later stages without the design history, as an associated copy. The necessity of DIPM development occurs because the raw part travels through several stages during the manufacturing process. The evolution of the part is shown in Fig. 3.

Fig. 3. The raw profile, the DIPM and the DMU (from left to right).

The DMU is built according to the product design: the cutouts are made and the bending is performed. To produce such a part, the formation of design elements on the raw material is required first, such as pockets and cutouts, and the guiding and assembling holes. While the production process moves from left to right, the DIPM design process moves in the opposite direction.

3 The Proposed Methodology of DIPM Building

To maintain the DIPM design automation, the authors propose a methodology. It includes individual procedures for the parts produced from sheets, from the pressed profile and from the plate.

3.1 The Design Methodology for the Pressed Profile Parts DIPM

1. Determine the dimensions of the profile.
2. Determine the base surface. It is rational to pick the surface which is perpendicular to the flat face and is changed least of all with respect to the raw material.


3. Remove the chamfers and the blendings from the flat faces. Calculate the proper length of the part.
4. Pick the base surface. It is convenient to pick a base surface perpendicular to one of the flat faces. Determine the sections normal to the flat faces. The sections' places are chosen to show the design elements: bending radius, cuttings, pockets, etc. The higher the curvature parameter, the closer the sections must be placed.
5. Determine the centre of inertia for each section by the means of a CAD system. This is a common feature of most CAD software.
6. Calculate the deviation $t_i$ of the inertia centre from the base surface for each section.
7. Calculate the average deviation of the inertia centre from the base surface by the formula:
$$t_{cp}=\frac{\sum_{i=1}^{n}t_i}{n}\eqno(1)$$
where $n$ is the number of sections and $t_i$ is the deviation of the inertia centre from the base surface.
8. Create the equidistant surface with the CAD tools, using $t_{cp}$ as the distance to the base surface.
9. Simplify the model. The design elements such as cuttings, pockets and cutouts are removed; the thinnings and the bendings must remain.
10. Calculate the length of the profile by measuring the length of the profile rib.
11. Build the profile by the section taken from the reference book.
12. Maintain the design elements by building the additional surfaces, following the grains of the DMU model. The building of these additional surfaces is performed at the distance $t_{cp}$ from the base surface.
13. Measure the ribs; build the design elements on the DIPM.
14. Include the cutting stocks on the flat faces of the part; reveal the technological holes for the bending.

3.2 The Design Methodology for the Sheet Parts DIPM

1. Determine the thickness of the sheet. 2. Remove the chamfers and the blendings from the flat faces. Calculate the proper length of the part. 3. Pick the base surface. It is important to pick base surface perpendicular to one of the flat faces. It must reside from the inner side of the part.

144

K. Tairova et al.

4. Determine the surface of the neutral layer. For this purpose, the neutral line must be built: the curve that keeps a constant length during bending. From the inner surface of the part an equidistant line is drawn at the distance R_n defined by the formula:

$$R_n = r + x s \qquad (2)$$

where r is the bending radius, s is the sheet thickness, and x is the reference coefficient, which differs by material (see Fig. 4).

Fig. 4. The neutral line determination

5. Calculate the sweep length L_p by the formula:

$$L_p = (l_1 + l_2 + \dots + l_n) + \frac{\pi}{180}\,(\varphi_1 R_{n1} + \varphi_2 R_{n2} + \dots + \varphi_n R_{nn}) \qquad (3)$$

where l_1, l_2, …, l_n are the lengths of the straight sections; φ_1, φ_2, …, φ_n are the bending angles in degrees; R_{n1}, R_{n2}, …, R_{nn} are the neutral-line radii determined by formula (2). The dimensions are shown in Fig. 5.
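As a cross-check of formulas (1)–(3), the section-deviation average, the neutral-line radius and the sweep length can be sketched in a few lines of Python (a minimal illustration under our own function and variable names; the numeric values below are made up, not taken from the paper):

```python
import math

def average_deviation(t):
    """Formula (1): mean deviation t_cp of the inertia centres
    from the base surface over the n sections."""
    return sum(t) / len(t)

def neutral_radius(r, s, x):
    """Formula (2): neutral-line radius R_n = r + x*s, where r is the
    bending radius, s the sheet thickness and x the material-dependent
    reference coefficient."""
    return r + x * s

def sweep_length(straight, bends):
    """Formula (3): L_p = sum(l_i) + (pi/180) * sum(phi_i * R_ni).
    `straight` lists the straight-section lengths l_i; `bends` lists
    (phi_deg, r, s, x) tuples, one per bend."""
    arcs = sum(phi * neutral_radius(r, s, x) for phi, r, s, x in bends)
    return sum(straight) + math.pi / 180.0 * arcs

# Illustrative numbers only: two straight runs and one 90-degree bend.
lp = sweep_length([40.0, 25.0], [(90.0, 2.0, 1.5, 0.42)])
```

The reference coefficient x would in practice be looked up per material, as Fig. 4 suggests.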

Fig. 5. The sweep length calculation.


6. Build the flattened model. Using the sweep length L_p, the flattened model is designed with the sheet thickness taken from the PMI.
7. Maintain the design elements by projecting them from the DMU model.
8. Include the cutting stocks on the flat faces of the part; reveal the technological holes for the bending and assembling.

3.3 The Design Methodology for the Plate Parts DIPM

1. Determine the thickness of the sheet.
2. Remove the chamfers and the blendings from the flat faces.
3. Pick the base surface.
4. Determine the sections normal to the flat faces.
5. Determine the centre of inertia of each section by means of CAD software.
6. Calculate the deviation t_i of the inertia centre from the base surface for each section.
7. Calculate the average deviation of the inertia centre from the base surface by formula (1).
8. Create the equidistant surface with the CAD tools, using t_cp as the distance to the base surface.
9. Simplify the model: remove design elements such as cuttings, pockets and cutouts. The thinnings and the bendings must remain.
10. Calculate the length of the panel by measuring the length of the rib.
11. Maintain the design elements by building the additional surfaces, following the grains of the DMU model. These additional surfaces are built at the distance t_cp from the base surface. Include the cutting stocks on the flat faces of the part; reveal the technological holes for the bending.

3.4 The Formal Description of the Methodology

Each methodology point is implemented in the CAD environment by some design steps, which make up the stages of DMU building. A formal representation of the methodology may be described as a union of design-step sets:

$$M = S_1 \cup S_2 \cup \dots \cup S_m \qquad (4)$$

where S_i stands for a design step, i = 1…m, and m is the number of design steps needed to implement the methodology. In its turn, each design step is implemented in the CAD environment by design operations: the commands that the user executes while working with the application. The designer chooses the sequence of design operations according to his design experience, the project goal, the initial design data and his CAD skills.


For the purpose of this research, the proper choice of the design-step sequence is determined by the shortest-path rule. Given the principles of modern CAD systems, only one variant of a design-step implementation may be chosen, which excludes the others. This exclusive choice is formally represented by the formula:

$$S_i = \{PO^i_{11}, PO^i_{12}, \dots, PO^i_{1j}\} \vee \{PO^i_{21}, PO^i_{22}, \dots, PO^i_{2j}\} \vee \dots \vee \{PO^i_{k1}, PO^i_{k2}, \dots, PO^i_{kj}\} \qquad (5)$$

where PO^i_{kj} stands for a project operation, k = 1…n, n is the number of design-step variants, j = 1…l, l is the total number of design history lines, and ∨ stands for the XOR operator. The folded form of (5) may be written as (6):

$$S_i = \bigvee_{k=1}^{n} \{PO^i_{k1}, PO^i_{k2}, \dots, PO^i_{kj}\} \qquad (6)$$

Taking into account (5) and (6), the methodology (4) may be described as the following implementation:

$$M = \bigcup_{i=1}^{m} S_i = \bigcup_{i=1}^{m} \left[ \bigvee_{k=1}^{n} \{PO^i_{k1}, PO^i_{k2}, \dots, PO^i_{kj}\} \right] \qquad (7)$$

where ∪ stands for the union operator and m is the number of methodology design steps. In order to support the design automation of the methodology, it is necessary to define the following sets of entities:

1. The set of design methodology points.
2. The set of design steps.
3. The set of design-step implementation variants.
4. The set of design operations, defined on the set of CAD application commands.
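Under our reading of (4)–(7), a design step is a set of mutually exclusive (XOR) variants, each variant being a sequence of design operations, and the shortest-path rule keeps exactly one variant per step. A minimal sketch of these entities (the operation names and data are hypothetical, not from the paper):

```python
# Each design step S_i is a list of alternative implementation variants;
# each variant is a sequence of design operations PO_ikj (here just strings).
step_variants = [
    [["sketch_profile", "extrude"], ["import_section", "sweep"]],
    [["offset_surface"], ["copy_surface", "translate"]],
]

def choose_variant(variants):
    """Shortest-path rule: of the mutually exclusive (XOR) variants of a
    design step, keep the one with the fewest design operations."""
    return min(variants, key=len)

# The methodology M is then the union of the chosen steps, as in (7).
methodology = [choose_variant(v) for v in step_variants]
```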

4 Software Implementation

The sets of entities described above are the basis for the software implementation of the methodology in the form of a CAD-software extension. The application guides the user's actions in the CAD environment and helps him follow the design methodology. The components diagram is shown in Fig. 6.


Fig. 6. The components diagram of the CAD-extension.

The use-case diagram is shown in Fig. 7. To build the DIPM, the user runs the application. First, it is necessary to select whether the part is produced from profile, sheet or plate. Second, the guidance algorithm is run. For each step, a choice of design-step variants, differing in their design operations, is proposed. The user executes the design operations according to the chosen variant of the design step by calling the commands of the CAD software and selecting the elements of the model under construction. The application remembers the user's design-step choices. The result of the building is a design strategy with concrete parameters. It may be modified like the design history, which gives the opportunity to view the result in different variants and choose the best design strategy with the shortest building path.


Fig. 7. The use-case diagram.

5 The Conclusion

The main results of the research are the following:

1. The methodology of DIPM design for the construction parts of the aircraft.
2. The DIPM design strategy for the construction parts of the aircraft, which provides the means of saving and re-designing the DIPM.
3. The software implementation of the methodology in the form of a CAD extension.

The application of the results in practice helps to achieve the following effects:

1. Decreased labor of DIPM building by means of the ready methodology and design-step variants.
2. Unified approaches to DIPM building, which makes the models easier to modify.
3. The shortest design history of the DIPM, which lowers the consumption of computer resources, increases the engineer's productivity, and lowers the probability of errors at the later model application stages.


Using Convolutional Neural Networks in the Problem of Cell Nuclei Segmentation on Histological Images

Vladimir Khryashchev(&), Anton Lebedev, Olga Stepanova and Anastasiya Srednyakova

P.G. Demidov Yaroslavl State University, Sovetskaya 14/2, 150014 Yaroslavl, Russia
[email protected]

Abstract. Computer-aided diagnostics of cancer pathologies based on histological image segmentation is a promising area in the field of computer vision and machine learning. To date, the successes of neural networks in image segmentation in a number of tasks are comparable to human results and can even exceed them. The paper presents a fast algorithm of histological image segmentation based on the convolutional neural network U-Net. This approach allows better results to be obtained in the tasks of medical image segmentation. The developed algorithm based on the neural network AlexNet was used for the creation of the automatic markup of the histological image database. The neural network algorithms were trained and tested on the NVIDIA DGX-1 supercomputer using histological images. The results of the research show that the fast algorithm based on the neural network U-Net can be successfully used for histological image segmentation in real medical practice, which is confirmed by the high level of similarity of the obtained markup with the expert one.

Keywords: Convolutional neural network · Histological image segmentation · Cell nuclei segmentation

1 Introduction

Oncological diseases have become one of the greatest threats to mankind in the past decades. Cancer is the second leading cause of death in Russia after diseases of the cardiovascular system [1]. With the emergence and spread of digital micropreparation scanners, there is a need for fast and high-quality processing of digitized histological images using computer vision and machine learning algorithms [2]. Nowadays the development of algorithms for automatic cell nuclei segmentation in histological images is one of the urgent tasks in this area. The information obtained at the output of such an algorithm plays an important role in the diagnosis of pathologies. Today, thanks to the latest advances in machine learning, a number of applications have been developed for cancer detection and classification in histological images of the brain, cervix, lung and prostate [3].

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 149–161, 2019. https://doi.org/10.1007/978-3-030-12072-6_14

150

V. Khryashchev et al.

One of the promising directions in the analysis of histological images is the use of deep neural networks. For example, a convolutional neural network architecture is used to create a breast cancer screening system in [4]. The accuracy of this approach on public image databases has reached 96–98%, while the area under the ROC curve (AUC) is in the range of 0.98–0.99. The authors of [5] propose a convolutional neural network providing an accuracy of 92% for osteosarcoma diagnosis. The paper [6] presents a deep convolutional neural network for gastric carcinoma diagnosis. The accuracy of the proposed approach is 69% in the task of cancer detection and 81% in the task of mucosal necrosis detection, which exceeds the detection level of traditional machine learning algorithms. The purpose of this study is to develop algorithms for automatic cell nuclei segmentation in histological images of breast tissue based on convolutional neural networks, providing a segmentation level comparable to the expert one [7–9]. Such algorithms can be used in decision support systems for early diagnosis of breast cancer by pathologists, as well as a means of training or control for beginners in the field of breast cancer diagnosis [10–13].

2 The Histological Image Database

The digitized histological material of a patient is represented as a raster graphic image in the TIFF (Tagged Image File Format) format. This format involves storing images with high color depth. Histological images are characterized by high resolution, which allows a specialist to work at different scales (×10, ×20, ×30, ×40) without significant loss of quality. In this study we used data from the Andrew Janowczyk image database, which is freely available. This database contains 143 digitized histological images of patients with ER-positive breast cancer (cancer sensitive to estrogen receptors) stained with hematoxylin and eosin. The image size is 2,000 × 2,000 pixels. The images in the database contain partial markup: the creators highlighted about 12,000 cell nuclei manually. It is worth noting that this is only a small part of the total number of cell nuclei contained in these histological images.

Two images belonging to different patients were selected from the database to form a test sample. Each image of the test sample with a resolution of 2,000 × 2,000 pixels was divided into 4 fragments. As a result, 8 test images of 572 × 572 pixels were formed. After that the test images were divided into simple (group 1: images 1-1, 1-2, 1-3, 1-4) and complex (group 2: images 2-1, 2-2, 2-3, 2-4) ones depending on the number of selected elements. Examples of a simple and a complex image from the test dataset are presented in Fig. 1. The choice of a small number of test images is due to the difficulty of creating the expert markup: manual marking of histological images is a nontrivial task because of the high labor intensity and significant time costs. Expert markup was created in the form of binary masks for the test sample images. The cell nuclei were determined in this case according to the recommendations received from specialists in this field. Thus, the desired objects have the following features: oval shape, small size compared to the glands (round white objects), dark purple/dark pink color (compared to the color of the background), and often uneven edges.
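Note that 572 × 572 is the input tile size of the U-Net used later in the paper. Exactly how the four fragments are cut from each 2,000 × 2,000 image is not specified; one plausible scheme (our assumption, for illustration only) is four non-overlapping corner crops:

```python
import numpy as np

def corner_crops(image, size=572):
    """Cut four non-overlapping corner fragments of `size` x `size` pixels
    from a square image. This cropping scheme is an assumption; the paper
    only states that each image was divided into 4 fragments."""
    h, w = image.shape[:2]
    return [
        image[:size, :size],           # top-left
        image[:size, w - size:],       # top-right
        image[h - size:, :size],       # bottom-left
        image[h - size:, w - size:],   # bottom-right
    ]

# A dummy 2,000 x 2,000 RGB image stands in for a real TIFF scan here.
fragments = corner_crops(np.zeros((2000, 2000, 3), dtype=np.uint8))
```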

Using Convolutional Neural Networks

151

Fig. 1. Examples of images from the test sample: (a) with simple structure; (b) with complex structure.

3 The Development of Automatic Histological Image Segmentation Algorithms

3.1 Histological Image Segmentation Algorithm Based on the AlexNet Neural Network

To solve the problem of automatic segmentation, an algorithm based on the AlexNet [14] convolutional neural network architecture (hereinafter Algorithm 1) was developed. This architecture consists of five convolutional layers and three fully connected ones. The activation function used in this architecture is ReLU (see Fig. 2). The aim of this algorithm is to create a segmentation binary mask database for histological images. The scheme of training and testing of Algorithm 1 is shown in Fig. 3.

Half of the histological image database was used for training Algorithm 1. From these images, fragments of 32 × 32 pixels containing the markup were cut out and then multiplied by geometric transformations. The size of the resulting dataset was 816,500 image fragments. The training and testing of the algorithm were carried out on the NVIDIA DGX-1 supercomputer for deep learning. Then Algorithm 1 was tested on images with expert markup (see Fig. 4). The result of automatic image segmentation by the developed algorithm is presented in Fig. 4c. It is noticeable that Algorithm 1 copes quite successfully with the task of cell nuclei segmentation in histological images, as the fragment of the image at the output of the network has a significant similarity with the result of manual segmentation (Fig. 4b). Visually, several "missing elements" can be noted, as well as the discrepancy


between the boundaries of the same elements. Nevertheless, a conclusion about the correctness of the obtained results can be made only through an objective evaluation using specialized metrics.

Fig. 2. The architecture of the convolutional neural network AlexNet.

Fig. 3. The scheme of training and testing of Algorithm 1.

Fig. 4. The comparison of expert markup with the automatic segmentation result: (a) original image; (b) reference markup; (c) markup at the output of Algorithm 1.


Despite the high quality of segmentation, it is worth noting that the processing of one histological image by Algorithm 1 takes about 3 h. Such time costs impose restrictions on the usage of such a network for the direct segmentation of histological images in real medical practice and necessitate the search for faster approaches to solve this problem. In this regard, it was decided to use Algorithm 1 as an auxiliary tool for creating automatic markup of the histological image database. As a result, 141 images of the database were re-marked. The creation of automatic markup significantly reduced the time required to mark up the data manually, and also made it possible to evaluate the feasibility of automatic markup using convolutional neural networks. In addition, this stage provides the possibility of training neural networks on images obtained from the output of other networks ("unsupervised learning").

3.2 Fast Histological Image Segmentation Algorithm Based on the U-Net Neural Network

To solve the problem of cell nuclei segmentation in real time, an algorithm based on the convolutional neural network U-Net [15] has been developed (hereinafter Algorithm 2). The choice of this architecture was made for several reasons. First, it allows a large number of training images to be obtained automatically. This aspect is important because, when medical image analysis algorithms are being developed, the amount of input data is very limited, which does not contribute to effective training of the networks. Second, the U-Net architecture achieves clearer separation in the case when objects of the same class in the image are in contact. The third crucial quality in favor of the chosen architecture is its speed.

A schematic representation of the network architecture is presented in Fig. 5. The network consists of a contracting path (left side) and an expanding path (right side). The contracting path represents a typical convolutional neural network architecture and consists of several blocks (4 blocks in the original version). Each such block consists of the repeated application of two 3 × 3 convolutions, each of which is followed by a "rectified linear unit" (ReLU) activation function, and a pooling operation with a 2 × 2 filter and stride 2 for downsampling. The number of channels is doubled at each downsampling step. The expanding path includes the same number of blocks as the contracting path. Each of its blocks consists of an upsampling of the feature map that halves the number of channels, followed by a 2 × 2 convolution ("deconvolution"), a concatenation with the correspondingly cropped feature map from the contracting path, and two 3 × 3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels at each convolution. At the last layer, a 1 × 1 convolution is used to map each 64-component feature vector to the desired number of classes (producing a flat image).
On the whole, the network has 23 convolutional layers.
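Since the 3 × 3 convolutions in the original U-Net are unpadded (hence the border-pixel loss mentioned above, each convolution trimming one pixel per side), the tile sizes can be traced arithmetically: a 572 × 572 input, the size of the test fragments above, yields a 388 × 388 output, as in the original U-Net paper. A small sketch of this bookkeeping (the function is ours, for illustration):

```python
def unet_output_size(n, depth=4):
    """Trace a square tile of side n through a U-Net with unpadded 3x3
    convolutions: each block applies two convolutions (-4 pixels total),
    the contracting path pools by 2, the expanding path upsamples by 2."""
    for _ in range(depth):   # contracting path
        n -= 4               # two 3x3 valid convolutions
        n //= 2              # 2x2 max pooling with stride 2
    n -= 4                   # bottleneck convolutions
    for _ in range(depth):   # expanding path
        n = n * 2 - 4        # upsample, then two 3x3 valid convolutions
    return n

out = unet_output_size(572)  # 572x572 input tile -> 388x388 output map
```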


Fig. 5. The architecture of the convolutional neural network U-Net.

The training and testing scheme of Algorithm 2 is shown in Fig. 6. Fragments of images with automatic markup of 572 × 572 pixels were used for training, and these were multiplied using geometric transformations. The total size of the training sample was 1,920,000 images. The training took place in parallel on four video cards of the NVIDIA DGX-1 supercomputer, with a batch size of 128 images. Learning was stopped after about 390,000 iterations.
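The geometric transformations used to multiply the training fragments are not enumerated in the paper; a common choice (our assumption) is the eight flip/rotation variants of a square patch, which can be generated as:

```python
import numpy as np

def dihedral_variants(patch):
    """Return the 8 flip/rotation variants of a square patch
    (4 rotations, with and without a horizontal flip). This is a typical
    geometric augmentation; the paper does not list its exact transforms."""
    variants = []
    for flipped in (patch, np.fliplr(patch)):
        for k in range(4):
            variants.append(np.rot90(flipped, k))
    return variants

# An asymmetric dummy patch yields 8 distinct augmented variants.
augmented = dihedral_variants(np.arange(16).reshape(4, 4))
```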

Fig. 6. Training and testing of Algorithm 2.

The results of processing the test images using this network are shown in Fig. 7. A visual subjective assessment shows that the algorithm successfully recognizes objects corresponding to the cell nuclei description. It also shows that the result of the segmentation contains "false positives" and misses ("false negatives").


Fig. 7. The comparison of expert markup with the automatic segmentation result: (a) original image; (b) reference markup; (c) markup at the output of Algorithm 2.

4 Results of Cell Nuclei Segmentation in Histological Images

To assess the quality of segmentation, the images at the output of Algorithm 1 and Algorithm 2 were compared with the expert markup. The following metrics were used for the analysis: Simple Match Coefficient (M1), Tversky index (M2), Sørensen–Dice coefficient (M3), and Hausdorff distance (M4) [16]. The calculated values of the metrics for eight images of varying complexity from the test dataset are presented in Figs. 8, 9, 10 and 11, as well as in Table 1. Let us consider in more detail the data obtained using each of the metrics. We introduce the necessary notation: N00 is the total number of pixels where the reference and the segmentation result of the algorithm both have a value of "0" (background); N11 is the total number of pixels where the reference and the segmentation result of the algorithm both have a value of "1" (presence of the object); N10 is the total number of pixels for which the reference value is "1" and the algorithm segmentation result is "0" ("false negative"); N01 is the total number of pixels for which the reference value is "0" and the algorithm markup is "1" ("false positive"). The results of calculating the Simple Match Coefficient, computed by formula (1), for the segmentation masks at the output of the algorithms based on the U-Net and AlexNet networks are presented in Fig. 8.

$$M_1 = \frac{N_{00} + N_{11}}{N_{00} + N_{01} + N_{10} + N_{11}} \times 100\% \qquad (1)$$


According to M1, the quality of cell nuclei segmentation performed by Algorithm 2 outperforms the results obtained at the output of Algorithm 1. In addition, the M1 evaluation shows that the segmentation of simple images with fewer elements is performed more successfully.

Fig. 8. The results of M1 metric calculation for the developed algorithms.

Fig. 9. The results of M2 metric calculation for the developed algorithms.


The Tversky index (metric M2, see Fig. 9) allows adjusting the coefficients α and β, which control the magnitude of the penalties for "false positives" and "false negatives", respectively:

$$M_2 = \frac{N_{11}}{N_{11} + \alpha N_{01} + \beta N_{10}} \times 100\% \qquad (2)$$

In this paper, the following parameter values were chosen for calculating the Tversky index: α = 0.4 and β = 0.6. The results of calculating the M2 metric are presented in Fig. 9. According to the M2 metric, the similarity of the segmentation performed by Algorithm 2 with the reference markup is from 58.1% to 69.6%. The M2 values for Algorithm 1 are slightly higher and fall in the range of 68.54–80.14%, which is an acceptable result for using this algorithm to create automatic markup.

The M3 metric, the Sørensen–Dice coefficient, is a variant of the Tversky index obtained with the coefficients α = β = 0.5. In this case, the penalties for "false positives" and "false negatives" are the same. The results characterizing the values of the M3 metric are presented in Fig. 10. From this figure, we can see that the similarity of the segmentation performed by Algorithm 2 to the reference markup is in the range from 53.7% to 66.0%, while the result for Algorithm 1 is 73.6% on average.

The Hausdorff distance (metric M4) is a distance measure of the similarity of two images which considers the entire image as a whole, rather than individual segments:

$$M_4(A, B) = \max(h(A, B), h(B, A)) \qquad (3)$$

$$h(A, B) = \max_{a \in A} \min_{b \in B} \lVert a - b \rVert \qquad (4)$$

where A is the reference markup, B is the result of the algorithm segmentation, a and b are pixels, a ∈ A, b ∈ B, and ||a − b|| is the Euclidean distance.

The results of calculating the M4 metric are presented in Fig. 11. As can be seen from this figure, the developed algorithms demonstrate a similar level of segmentation quality. It is also seen that an increase in the value of the M4 metric, and therefore a deterioration of the segmentation quality, occurs with an increase in the complexity of the images.

One of the central issues in the field of medical image analysis is the trade-off between quality and computational complexity of algorithms. Modern approaches to diagnostics, including screening studies, require the creation of algorithms with reasonable processing time for a single clinical case. Thus, the speed of the algorithm, which determines the number of histological images that can be processed per unit of time, plays an important role. The results of the performance evaluation of the developed algorithms obtained on the NVIDIA DGX-1 supercomputer are presented in Table 2.
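For reference, all four metrics can be computed directly from a pair of binary masks. A numpy sketch under our own naming (the Hausdorff distance is the exact brute-force form of (3)-(4), adequate for small masks; for large ones a spatial-index implementation would be preferable):

```python
import numpy as np

def match_counts(ref, seg):
    """Pixel counts N00, N01, N10, N11 for reference/segmentation masks."""
    ref, seg = ref.astype(bool), seg.astype(bool)
    n11 = int(np.sum(ref & seg))    # object in both
    n10 = int(np.sum(ref & ~seg))   # "false negative"
    n01 = int(np.sum(~ref & seg))   # "false positive"
    n00 = int(np.sum(~ref & ~seg))  # background in both
    return n00, n01, n10, n11

def simple_match(ref, seg):                  # M1, formula (1)
    n00, n01, n10, n11 = match_counts(ref, seg)
    return 100.0 * (n00 + n11) / (n00 + n01 + n10 + n11)

def tversky(ref, seg, alpha=0.4, beta=0.6):  # M2, formula (2)
    _, n01, n10, n11 = match_counts(ref, seg)
    return 100.0 * n11 / (n11 + alpha * n01 + beta * n10)

def dice(ref, seg):                          # M3: Tversky with alpha = beta = 0.5
    return tversky(ref, seg, alpha=0.5, beta=0.5)

def hausdorff(ref, seg):                     # M4, formulas (3)-(4)
    a = np.argwhere(ref)                     # object pixel coordinates
    b = np.argwhere(seg)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```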


Fig. 10. The results of M3 metric calculation for the developed algorithms.

Fig. 11. The results of M4 metric calculation for the developed algorithms.


Table 1. M1, M2, M3, M4 metric values for the test dataset consisting of 8 images of different complexity.

Algorithm    Complexity   M1, %   M2, %   M3, %   M4
Algorithm 1  Simple       84.22   77.89   76.41    9.33
                          89.13   69.20   67.99    9.38
                          81.45   72.25   72.42   11.49
                          87.98   68.54   68.62   10.20
             Complex      69.65   80.14   79.32   11.62
                          67.01   77.43   76.99   14.25
                          71.30   72.57   71.55   14.46
                          63.71   77.04   76.29   14.18
Algorithm 2  Simple       85.9    66.9    62.3     9.61
                          90.8    58.1    53.7     9.20
                          83.4    67.8    64.2    11.16
                          89.8    64.0    60.4     9.26
             Complex      71.1    68.6    64.3    10.96
                          68.0    69.6    66.0    13
                          72.8    63      59.1    13.81
                          64.2    68.3    64.5    13.38

Table 2. Computational complexity of the developed algorithms.

Algorithm   Time per one processed image, seconds
AlexNet     10,800
U-Net       4

According to the results of the metric calculation, as well as visual analysis, Algorithm 1 shows a high quality of histological image segmentation. However, the use of this algorithm in practice is associated with significant time costs. Thus, Algorithm 1 can be recommended as an auxiliary tool for creating automatic markup of medical images. Algorithm 2, with an almost similar level of segmentation, provides significantly better performance, which allows us to recommend it for use in real medical practice.

5 Conclusion

The results of the study show that Algorithm 2, developed on the basis of the U-Net network, can be successfully used to implement the segmentation of histological images based on automatically obtained markup in real medical practice, as evidenced by the high level of similarity of the resulting markup to the reference one. In addition, Algorithm 2 allows histological images to be processed 2,700 times faster than Algorithm 1.


Algorithm 1, based on the neural network AlexNet, can be used to automatically create the markup of the training database; however, if there are a large number of objects in the image (a complex image), it is better to mark up the image manually (according to the results of calculating metrics such as the Simple Match Coefficient and the Hausdorff distance). In addition, despite the high quality of histological image segmentation, this approach cannot be used for direct analysis of medical images in real time due to significant time costs.

References

1. World Health Organization: Cancer. http://www.who.int/cancer/en (2018). Accessed 11 January 2018
2. Gurcan, M.N., Boucheron, L.E., Can, A., Madabhushi, A., Rajpoot, N.M., Yener, B.: Histopathological image analysis: a review. IEEE Rev. Biomed. Eng. 2, 147–171 (2009)
3. Irshad, H., Veillard, A., Roux, L., Racoceanu, D.: Methods for nuclei detection, segmentation, and classification in digital histopathology: a review-current status and future potential. IEEE Rev. Biomed. Eng. 7, 97–114 (2014)
4. Chougrada, H., Zouakia, H., Alheyane, O.: Deep convolutional neural networks for breast cancer screening. Comput. Methods Programs Biomed. 157, 19–30 (2018)
5. Mishra, R., Daescu, O., Leavey, P., Rakheja, D., Sengupta, A.: Convolutional neural network for histopathological analysis of osteosarcoma. J. Comput. Biol. 25(3), 313–325 (2018)
6. Sharma, H., Zerbe, N., Klempert, I., Hellwich, O., Hufnagl, P.: Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology. Comput. Med. Imaging Graph. 61, 2–13 (2017)
7. Khryashchev, V., Apalkov, I., Zvonarev, P.: Neural network adaptive switching median filter for image denoising. In: Proceedings of the International Conference on Computer as a Tool (EUROCON 2005), Belgrade, Serbia and Montenegro, pp. 959–962 (2005)
8. Khryashchev, V., Ganin, A., Stepanova, O., Lebedev, A.: Age estimation from face images: challenging problem for audience measurement systems. In: Conference of Open Innovation Association, FRUCT, pp. 31–37 (2014)
9. Khryashchev, V., Shmaglit, L., Shemyakov, A.: The application of machine learning techniques to real time audience analysis system. In: Computer Vision in Control Systems-2, Intelligent Systems Reference Library, vol. 75, pp. 49–69. Springer International Publishing, Switzerland (2015)
10. Taneja, A., Ranjan, P., Ujlayan, A.: Multi-cell nuclei segmentation in cervical cancer images by integrated feature vectors. Multimed. Tools Appl. 77, 9271–9290 (2018)
11. Song, Y., Cai, W., et al.: Region-based progressive localization of cell nuclei in microscopic images with data adaptive modeling. BMC Bioinform. 14(1), 173 (2013)
12. Chen, C., Wang, W., Ozolek, J.A., Lages, N., Altschuler, S.J., Wu, L.F.: A template matching approach for segmenting microscopy images. In: IEEE International Symposium on Biomedical Imaging, pp. 768–771 (2012)
13. Janowczyk, A., Madabhushi, A.: Deep learning for digital pathology image analysis: a comprehensive tutorial with selected use cases. J. Pathol. Inform. 7, 29 (2016)
14. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. In: Bartlett, P. (ed.) Advances in Neural Information Processing Systems 25 (NIPS 2012), vol. 1, pp. 1097–1105. NIPS, USA (2012)


15. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J. (eds.) Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015). LNCS, vol. 9351, pp. 234–241. Springer, Munich (2015)
16. Minervini, M., Rusu, C., Tsaftaris, S.A.: Learning computationally efficient approximations of complex image segmentation metrics. In: Ramponi, G., Carini, A. (eds.) 8th International Symposium on Image and Signal Processing and Analysis (ISPA 2013), vol. 1, pp. 60–65. University of Zagreb, University of Trieste, Trieste (2013)

Numerical Study of Eigenmodes Propagation Through Rectangular Waveguide with Quarter-Wave Chokes on the Walls

Alexander Brovko1(&) and Guido Link2

1 Yuri Gagarin State Technical University of Saratov, Saratov, Russia
[email protected]
2 Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
[email protected]

Abstract. The problem of improving the construction of the input/output ports of a conveyor-belt microwave heating system is considered. The desired construction of the ports should not allow electromagnetic energy to escape from the microwave processing chamber, i.e. it must operate as a stop-band filter for a number of eigenmodes. This paper contains numerical results illustrating the performance of quarter-wave chokes as a band-stop filter for a large cross-section waveguide. The results were obtained by direct FDTD modeling of the system. The presented results show the possibility of effective eigenmode filtering for waveguides with cross-section 780 × 100 mm. Limits of the approach may come from the requirements of small length of the filter …

… (100 mm) correspond to a smaller frequency of the minimum point. Therefore, the numerical study shows that a single pair of chokes is able to prevent propagation of a distinct mode in a waveguide with cross-section up to 780 × 100 mm. However, filtering of the different eigenmodes requires different sizes of the chokes.

3.2 Study of Performance for a Set of Chokes

The results presented in the previous section lead to the following idea: in order to provide a filtering effect for a number of modes around the desired frequency of 2.45 GHz, a set of chokes with different length L can be used, as depicted in Fig. 8.
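The choke lengths swept below (27–36 mm) cluster around a quarter of the free-space wavelength at the working frequency: at 2.45 GHz, c/f ≈ 122.4 mm, so λ/4 ≈ 30.6 mm. A quick sketch of this estimate, together with the cutoff frequencies of the TEm0 modes in the 780 mm wide guide (standard waveguide formulas, our code, not from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def quarter_wave_mm(f_hz):
    """Quarter of the free-space wavelength at frequency f, in mm."""
    return C / f_hz / 4.0 * 1e3

def te_m0_cutoff_ghz(m, a_mm):
    """Cutoff frequency of the TEm0 mode in a rectangular waveguide
    of width a (in mm): f_c = m * c / (2 a)."""
    return m * C / (2.0 * a_mm * 1e-3) / 1e9

# The modes of Fig. 9 (TE10, TE30, TE50, TE70) all propagate at 2.45 GHz:
cutoffs = [te_m0_cutoff_ghz(m, 780.0) for m in (1, 3, 5, 7)]
L_quarter = quarter_wave_mm(2.45e9)  # ~30.6 mm, inside the 27-36 mm sweep
```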

Fig. 8. Structure with 20 chokes (10 chokes on top wall and 10 chokes on the bottom wall) with different size L.

168

A. Brovko and G. Link

The lengths of neighboring chokes may be close to each other, but the lengths of chokes at the opposite ends of the waveguide may differ significantly, in order to cover a large spectrum of propagating eigenmodes in the waveguide. Figure 9 contains the results for the model with the following parameters:

• Waveguide cross-section 780 × 100 mm.
• Sizes of chokes: L = 27, 28, 29, 30, 31, 32, 33, 34, 35, 36 mm, D = 10 mm, S = 14 mm, T = 2 mm.
• Distance between chokes: 100 mm.

Fig. 9. Transmission coefficient |S21| for modes TE10 (red line), TE30 (yellow line), TE50 (blue line), TE70 (green line). (Color figure online)

The picture shows the transmission coefficient |S21| versus frequency for the propagating modes TE10, TE30, TE50, and TE70. We can see that the set of chokes provides a stop band for each of the modes, but for higher modes the stop band is shifted to higher frequencies.

The results depicted in Fig. 9 are obtained for a fixed distance between neighboring chokes equal to 100 mm. This enforces a rather large length of the whole structure. An interesting question is how small the distances between the chokes could be while providing the same attenuation effect. Figure 10 contains results for the structure with the following parameters:

• Waveguide cross-section 780 × 100 mm.
• Sizes of chokes: L = 27, 28, 29, 30, 31, 32, 33, 34, 35, 36 mm, D = 10 mm, S = 14 mm, T = 2 mm.
• Distance between chokes: 100, 90, 80, 70, 60 mm.

Numerical Study of Eigenmodes Propagation


Fig. 10. Transmission coefficient |S21| for mode TE10 for different distance between chokes: 100 mm (red line), 90 mm (yellow line), 80 mm (blue line), 70 mm (green line), 60 mm (brown line). (Color figure online)

The results depicted in Fig. 10 show that a flat stop band is provided only for distances between chokes greater than 80 mm. For smaller distances the stop band contains peaks.

The next question which needs to be studied is how large the cross-section of the waveguide can be while the attenuation effect is still observed. More specifically, what is the maximum value of the waveguide height which preserves the effect of eigenmode attenuation? Figure 11 contains results for three different values of the height of the waveguide. Parameters of the model:

• Waveguide cross-section 780 × B mm, where B = 100, 120, 150 mm.
• Sizes of chokes: L = 27, 28, 29, 30, 31, 32, 33, 34, 35, 36 mm, D = 10 mm, S = 14 mm, T = 2 mm.
• Distance between chokes: 100 mm.

The results of modeling show that for a height of 120 mm we obtain a narrower stop band, shifted to a lower frequency, and for 150 mm an even narrower band with a destroyed structure is observed.


Fig. 11. Transmission coefficient |S21| for mode TE10 for different height of waveguide: 100 mm (red line), 120 mm (yellow line), 150 mm (blue line). (Color figure online)

The next question which needs to be studied is the influence of the difference between the lengths of neighboring chokes on the results. Is it possible to make the difference larger in order to cover a wider spectrum of attenuated eigenmodes? Figure 12 contains the results for a structure in which the lengths of neighboring chokes differ by 2 mm.

Fig. 12. Transmission coefficient |S21| for modes TE10 (red line), TE30 (yellow line), TE50 (blue line), TE70 (green line). (Color figure online)


Parameters of the problem:

• Waveguide cross-section 780 × 100 mm.
• Sizes of chokes: L = 23, 25, 27, 29, 31, 33, 35, 37, 39, 41 mm, D = 10 mm, S = 14 mm, T = 2 mm.
• Distance between chokes: 100 mm.

The results of modeling show that in this structure the stop band is wider, but it is not as flat as in the previous model (it contains some peaks inside the band).

4 Conclusions

The numerical results presented in the paper support the following conclusions:

1. A set of chokes can provide a stop band for a number of eigenmodes of a rectangular waveguide with large cross-section.
2. The chokes must be placed symmetrically on the top and bottom walls of the waveguide.
3. Under the condition of unlimited waveguide length, it is potentially possible to filter all modes propagating at the desired frequency. In case of limited waveguide length (up to 1 m), however, only a limited number of the modes can be attenuated.
4. The attenuation works well for a waveguide height of 100 mm, but for larger heights the stop band is destroyed.
5. If the distance between neighboring chokes is about 80…100 mm, the structure provides a flat stop band. Smaller distances between the chokes lead to destruction of the stop band.
6. If the lengths of neighboring chokes differ by 1 mm, the structure provides a flat stop band. For differences in length of 2 mm and more, propagation peaks are observed inside the stop band.

Therefore, the application of waveguide quarter-wave chokes as a stop-band filter may be considered as an option for the construction of input/output ports of a conveyor-belt microwave heating system; however, its effectiveness may depend on the sizes of the port waveguides and on the constructive limits on the port length.

References

1. Balanis, C.A.: Advanced Engineering Electromagnetics. Wiley, New York (2012)
2. Vale, C.A.W., Meyer, P., Palmer, K.D.: A design procedure for bandstop filters in waveguides supporting multiple propagating modes. IEEE Trans. Microw. Theory Tech. 48(12), 2496–2503 (2000)
3. Numan, A.B., Sharawi, M.S.: Extraction of material parameters for metamaterials using a full-wave simulator. IEEE Antennas Propag. Mag. 55(5), 202–211 (2013)
4. Holloway, C.L., Kuester, E.F., Baker-Jarvis, J., Kabos, P.: A double negative (DNG) composite medium composed of magnetodielectric spherical particles embedded in a matrix. IEEE Trans. Antennas Propag. 51(10), 2596–2603 (2003)


5. Yang, L., Bowler, N.: Rational design of double-negative metamaterials consisting of 3D arrays of two different non-metallic spheres arranged on a simple tetragonal lattice. In: IEEE International Symposium on Antennas and Propagation (APSURSI), Spokane, WA, 3–8 July 2011, vol. 10, pp. 1494–1497 (2011)
6. Lagarkov, A.N., Semenenko, V.N., Kisel, V.N., Chistyaev, V.A.: Development and simulation of microwave artificial magnetic composites utilizing nonmagnetic inclusions. J. Magn. Magn. Mater. 258–259, 161–166 (2003)
7. Ziolkowski, R.W.: Design, fabrication, and testing of double negative metamaterials. IEEE Trans. Antennas Propag. 51(7), 1516–1529 (2003)
8. Ruvio, G., Leone, G.: State-of-the-art of metamaterials: characterization, realization and applications. Stud. Eng. Technol. 1(2), 38–47 (2014)
9. Mizuno, Y., Sakakibara, K., Kikuma, N.: Loss reduction of microstrip-to-waveguide transition suppressing leakage from gap between substrate and waveguide by choke structure. In: 2016 International Symposium on Antennas and Propagation (ISAP), pp. 374–375 (2016)
10. Burrill, A., Ben-Zvi, I., Cole, M., Rathke, J., Kneisel, P., Manus, R., Rimmer, R.: Multipacting analysis of a quarter wave choke joint used for insertion of a demountable cathode into a SRF photoinjector. In: 2007 IEEE Particle Accelerator Conference (PAC), pp. 2544–2546 (2007)
11. QuickWave-3D™, QWED Sp. z o.o., ul. Nowowiejska 28, lok. 32, 02-010 Warsaw, Poland. http://www.qwed.com.pl/. Accessed 22 Oct 2018

Extraction and Forecasting Time Series of Production Processes

Anton Romanov, Aleksey Filippov, and Nadezhda Yarushkina

Ulyanovsk State Technical University, Ulyanovsk, Russia
[email protected]

Abstract. The manufacturing processes of an aircraft factory are analyzed to improve the quality of management decisions. Production process models based on time series models are proposed. The application of fuzzy smoothing of time series is considered. A new technique for extracting fuzzy trends for forecasting time series is proposed. The use of type-2 fuzzy sets for building new time series models, with the aim of improving the quality of the forecast, is considered. An information system is being built to calculate production capacity using these models. The system implements algorithms for the calculation of production capacity based on a methodology approved in the industry. The information extracted from the production processes is supposed to be used as a component of the models. An experiment checking the quality of smoothing of time series is described. The experiment shows the possibility and advantages of modeling time series using type-2 fuzzy sets.

Keywords: Time series · Type-2 fuzzy sets · Production capacity · Aircraft factory

1 Introduction

The technological preparation of complex production at a large enterprise requires the analysis of production capacities. The aim is to increase the efficiency of the use of material, technical and human resources [1]. The calculation of production capacity based on a methodology approved in the industry has many disadvantages, such as insufficient precision due to averaging and difficulties in adapting it to a specific factory. The proposed new models and algorithms allow adapting the methodology and increasing the efficiency of management through a more precise forecast of production processes. The goal requires solving the following tasks:

• input data definition;
• creation of models reflecting the state of production processes;
• development of algorithms for calculation of production capacity.

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 173–184, 2019. https://doi.org/10.1007/978-3-030-12072-6_16


A. Romanov et al.

The solution of these tasks allows building a unified information environment for technological support of production.

The task is to balance the production capacity of an aircraft factory. The current approach to management is based on using a common methodology, approved in the industry, for several factories. The methodology contains algorithms and coefficients accumulated from production statistics. The main disadvantage of this approach is a strong discrepancy between the real production indicators and the statistical data collected at a specific factory [2]. Limitations of the methodology application:

• the long extraction time of statistical coefficients from production indicators;
• the impossibility of dynamic adaptation of calculations to separate periods shorter than the forecast horizon;
• the methodology does not provide for adaptation to a specific production.

By analyzing this methodology it was found that the coefficients (staff time, staff performance, equipment performance and depreciation of equipment) are aggregated and averaged information from the indicators of production processes. These processes are easily represented by discrete time series. Using a fuzzy approach allows creating models with more options, improving quality by applying knowledge about the time series [5, 6, 11, 15]. Also, by analyzing production processes, it was found that the discrete interval is one month: the minimum forecast horizon and the time interval during which the indicators remain unchanged.

2 Types of Extracted Time Series of the Factory

The task is to track changes in the values of production process indicators. Time series models are used for tracking these changes. The methodology for calculating production capacity uses the coefficients defined above, but these coefficients need not always be given by an expert or a method: each of them can be extracted at the factory. As an example, staff time can be tracked for each factory unit; depreciation of equipment can be calculated based on summarizing the volumes of completed works. By analyzing factory data, the following types of time series were extracted:

• staff work time fund (fluctuating time series);
• tool work time fund (fluctuating time series);
• performance ratio (growing time series);
• area usage (growing time series);
• depreciation of equipment (growing time series).

These types of time series may differ for different factory units. For all types of processes, monthly indicator values can be identified. It is very important to find the following characteristics of the time series: seasonality, and local and global tendencies. The proposition is to use several models for smoothing, extracting and forecasting tendencies and values of the time series of production processes.


3 Using F-Transform for Smoothing of Time Series

Using the F-transform for smoothing of time series has advantages over other smoothing methods, such as exponential smoothing [10], because knowledge about the time series can be included. A smoothed time series gives a better tendency forecast.

Generally, the F-transform of a function f : P → R is a vector whose components can be considered as weighted local mean values of f. In this paper R is the set of real numbers, [a, b] ⊆ R, and P = {p_1, …, p_l}, n < l, is a finite set of points such that P ⊆ [a, b]. A function f : P → R defined on the set P is called discrete. Below are basic facts about the F-transform as they were presented in [3].

The first step in the definition of the F-transform of f is a selection of a fuzzy partition of the interval [a, b] by a finite number n ≥ 3 of fuzzy sets A_1, …, A_n. According to the original definition, there are five axioms which characterize a fuzzy partition: normality, locality, continuity, unimodality and orthogonality (the Ruspini condition) [3]. A fuzzy partition is called uniform if the fuzzy sets A_2, …, A_{n−1} are shifted copies of the symmetrized A_1. The membership functions A_1, …, A_n in the fuzzy partition are called basic functions. The basic function A_k covers a point p_j if A_k(p_j) > 0.

Figure 1 shows a uniform fuzzy partition of an interval [a, b] by fuzzy sets A_1, …, A_n, n ≥ 3, with triangular membership functions. The formal expressions of these functions are given below, where h = (b − a)/(n − 1):

A_1(x) = 1 − (x − a)/h for x ∈ [a, x_2], and 0 otherwise;
A_k(x) = 1 − |x − x_k|/h for x ∈ [x_{k−1}, x_{k+1}], and 0 otherwise, k = 2, …, n − 1;
A_n(x) = 1 − (x_n − x)/h for x ∈ [x_{n−1}, b], and 0 otherwise.

Fig. 1. An example of a uniform fuzzy partition by triangular membership functions.


In the subsequent text we fix the interval [a, b], a finite set of points P ⊆ [a, b] and a relaxed fuzzy partition A_1, …, A_n of [a, b]. Denote a_{kj} = A_k(p_j) and consider the n × l matrix A with elements a_{kj}; A is a partition matrix of P. Below, a matrix of a special uniform partition is presented. Assume that the points p_1, …, p_l ∈ [a, b] are equidistant, so that a = p_1, b = p_l, p_{i+1} = p_i + h, i = 1, …, l − 1, where h > 0 is a real number. Let A_1, …, A_n be a uniform partition of [a, b] such that each basic function A_k has a triangular shape and covers a fixed number of points, say N. Moreover, let the nodes x_0, x_1, …, x_n, x_{n+1} be among the points p_1, …, p_l, so that x_0 = p_1, x_{n+1} = p_l. If N is an odd number, say N = 2r − 1, then l = (n + 1)r − 1. In this particular case, the basic function A_k covers the points p_{(k−1)r+1}, …, p_{(k+1)r−1}, so that

A_k(p_{(k−1)r+1}) = 1/r, …, A_k(p_{kr−1}) = (r − 1)/r, A_k(p_{kr}) = 1,
A_k(p_{kr+1}) = (r − 1)/r, …, A_k(p_{(k+1)r−1}) = 1/r.

Thus, the partition matrix A has a fixed structure; it depends on one parameter r and does not require computation of A_k(p_j) at each point p_j.

4 Discrete F-Transform

Once the basic functions A_1, …, A_n are selected, define (see [4]) the (direct) F-transform of a discrete function f : P → R as a vector (F_1, …, F_n) where the k-th component F_k is equal to

F_k = Σ_{j=1}^{l} f(p_j) A_k(p_j) / Σ_{j=1}^{l} A_k(p_j),  k = 1, …, n.   (1)

In order to stress that the F-transform components F_1, …, F_n depend on A_1, …, A_n, we say that the F-transform is taken with respect to A_1, …, A_n.

Let us identify the function f : P → R with the column vector f = (f_1, …, f_l)^T of its values on P, so that f_j = f(p_j), j = 1, …, l. Moreover, let the partition A_1, …, A_n be represented by the matrix A. The vector (F_1, …, F_n) is the F-transform of f determined by A if

(F_1, …, F_n) = ((Af)_1 / a_1, …, (Af)_n / a_n),   (2)

where (Af)_k is the k-th component of the product Af and a_k = Σ_{j=1}^{l} a_{kj}, k = 1, …, n.

Expression (2) is a matrix form of the F-transform of f. It will be denoted by F_n(f). Obviously, the computation on the basis of (2) is less complex than the one based on (1). The reason is the unified representation of the partition matrix A, which does not require a computation of each A_k at every point p_j.
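The matrix form (2) can be sketched in a few lines (our own illustration, not the authors' code; function and variable names are ours, and NumPy is assumed):

```python
import numpy as np

def triangular_partition(a: float, b: float, n: int, points: np.ndarray) -> np.ndarray:
    """Partition matrix A (n x l) with A[k, j] = A_k(p_j) for a uniform
    triangular fuzzy partition of [a, b] by n >= 3 basic functions."""
    h = (b - a) / (n - 1)
    nodes = a + h * np.arange(n)          # nodes x_1, ..., x_n
    # hat functions: A_k(x) = max(0, 1 - |x - x_k| / h)
    return np.maximum(0.0, 1.0 - np.abs(points[None, :] - nodes[:, None]) / h)

def f_transform(f_vals: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Direct F-transform in matrix form (2): F_k = (A f)_k / a_k."""
    return (A @ f_vals) / A.sum(axis=1)

# example: compress a noisy discrete function on [0, 1] to 7 components
p = np.linspace(0.0, 1.0, 61)
f = np.sin(2.0 * np.pi * p) + 0.1 * np.cos(40.0 * p)
F = f_transform(f, triangular_partition(0.0, 1.0, 7, p))  # weighted local means
```

Each component of `F` is a weighted local mean of `f` over the support of one basic function, which is exactly the smoothing effect used in the experiments below.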

5 Forecasting Time Series Based on Fuzzy Trends

The fuzzy elementary trend modeling method [7, 8, 14] is used to predict numerical values and fuzzy trends of the state of process indicators. The forecast uses hypothesis testing:

• Hypothesis 1. The hypothesis of conservation of the trend. The forecast is constructed on the basis of the previous period. The formula for the predicted value is

s_{t+1} = s_t + s_p,

where s_{t+1} is the forecast for the next period of time, s_t is the real value at time t, and s_p is the value of the trend over the previous period of time.

• Hypothesis 2. The hypothesis of stability of the trend. A moving average is used for the prediction:

s_{t+1} = s_t + G_{s_p},

where G_{s_p} is the importance of a dominant fuzzy trend. The trends of the previously selected period are considered, the predominant cluster of trends is selected, and the forecast is calculated by the formula above. An optimistic forecast for some number of occurrences of trends is used; the highest average trend is selected.

• Hypothesis 3. Forecasting for a given period on the basis of fuzzy elementary trends. Stages of the prediction algorithm: the expert sets the number of trends considered for the previous period (for example, for half a year), which gives a set of trends A,

{s_{t−n−m}, …, s_{t−n−1}, s_{t−n}},

or sets a pattern set of trends for which the following trend is known. The set A is then searched for in all other previous periods:

{s′_{t−n−l−k}, …, s′_{t−n−l−(k−1)}, s′_{t−n−l}}.

If such a set B is found, after which a trend C is located, then the trend C is taken into account, and a forecast equal to the trend C is constructed:


s_{t+1} = s_t + s′_{t−n−l+1}.

If a set B coinciding with the set A is not found, the search is repeated, but complete coincidence is no longer required: a new pattern A, shorter by one trend, is selected. This is repeated until a suitable set of trends B is found [10]. To select the best hypothesis, the entropy of the time series is additionally introduced.
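A compact sketch of Hypotheses 1 and 3 (our reading of the description above; representing elementary trends as numeric increments, and all names, are illustrative):

```python
def forecast_h1(values, trends):
    """Hypothesis 1: the last trend is preserved, s_{t+1} = s_t + s_p."""
    return values[-1] + trends[-1]

def forecast_h3(values, trends, m):
    """Hypothesis 3: look for the last m trends (pattern A) earlier in the
    history; reuse the trend that followed the match. If no match is found,
    shorten the pattern by one trend and repeat."""
    pattern = trends[-m:]
    while pattern:
        # scan earlier history, most recent occurrence first
        for i in range(len(trends) - len(pattern) - 1, -1, -1):
            if trends[i:i + len(pattern)] == pattern:
                return values[-1] + trends[i + len(pattern)]
        pattern = pattern[1:]
    return values[-1]  # no history to reuse: flat forecast
```

For example, with trends [1, 2, 3, 1, 2] and last value 9, Hypothesis 1 predicts 9 + 2 = 11, while Hypothesis 3 with m = 2 finds the earlier occurrence of [1, 2], which was followed by 3, and predicts 12.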

6 Definition of Type-2 Fuzzy Sets for Use in Time Series Models

The tasks of time series modeling are solved by a large number of methods. These methods have different mathematical bases and are divided according to their applicability (that is, they may have particular applicability conditions depending on the type of problem being solved and the nature of the time series); they may also require constant or occasional involvement of an analyst directly during the modeling process. An important condition for the application of the methods is the focus on obtaining short-term forecasts, which follows from the specific features of the processes to which time series models are applied.

The nature of fuzzy time series is due to the use of expert estimates, whose inherent uncertainty belongs to the class of fuzziness. Unlike stochastic uncertainty, fuzziness hinders or even excludes the use of statistical methods and models, but can be used to make subject-oriented decisions based on approximate human reasoning. The formalization of intellectual operations that simulate human fuzzy statements about the state and behavior of complex phenomena today forms an independent area of applied research called "fuzzy modeling" [3]. This direction includes a complex of problems whose solution methodology is based on the theory of fuzzy sets, fuzzy logic, fuzzy models (systems) and granular computing.

In 1975, Lotfi Zadeh introduced fuzzy sets of the second order (type-2) and fuzzy sets of higher orders to eliminate the disadvantages of type-1 fuzzy sets. These disadvantages include the fact that membership degrees are mapped to exact real numbers. This is not a serious problem for many applications, but it becomes one in cases where the systems themselves are known to be uncertain. The solution to this problem can be the use of type-2 fuzzy sets, in which the boundaries of the membership areas are themselves fuzzy [9].
It can be concluded that such a membership function represents a type-2 fuzzy set, which is three-dimensional; the third dimension itself adds a new degree of freedom for handling uncertainties. In [9] Mendel defines and differentiates two types of uncertainty, random and linguistic. The first type is characteristic, for example, of statistical signal processing, while linguistic uncertainty is contained in systems with inaccuracies based on data determined, for example, through expert statements. To illustrate, note the main differences between type-1 and type-2 fuzzy sets. Let us turn to Fig. 2, which illustrates a simple triangular membership function.


Fig. 2. Examples of membership functions of type-1 (a) and type-2 (b) fuzzy sets.

Figure 2(a) shows a crisp assignment of the degree of membership: to any value of x there corresponds only one point value of the membership function. If a fuzzy membership function of the second type is used, its graphical representation becomes an area called the footprint of uncertainty (FOU). In contrast to a membership function with crisp boundaries, the values of a type-2 membership function are themselves fuzzy functions. This approach has the advantage of bringing the fuzzy model closer to a verbal one. People can have different estimates of the same uncertainty, especially in estimative expressions. Therefore, it became necessary to exclude a unique assignment of the obtained degree of membership. Thus, when an expert assigns membership degrees, the risk of error accumulation is reduced because points located near the boundaries of the function and under doubt are not included.

7 Time Series Model Based on Type-2 Fuzzy Sets

Time series modeling based on type-2 fuzzy sets allows building a model that reflects the uncertainty of the choice of coefficient values or indicator values determined by an expert. We choose an interval time series as the type of time series for the object of modeling. For our subject area, the previously selected time series of indicators are easily represented by the proposed type of time series: most of them change values rarely, so stability of intervals can be noted. For interval time series, an algorithm for constructing a model is described in [12]. The formal model of the time series is

TS = {ts_i}, i ∈ N,

where ts_i = [t_i, B_{t_i}] is an element of the time series at the moment of time t_i with a value in the form of a type-2 fuzzy set B_{t_i}. For the entire time series, the universe of type-2 fuzzy sets is defined as

U = (B_1, …, B_l), B_i ∈ U, l ∈ N,

where l is the number of fuzzy sets in the universe. A set B_{t_i} is a type-2 fuzzy set; therefore, a type-1 fuzzy set is assigned to it as a value. For interval time series, a prerequisite for creating type-1 sets is a part


separated from the source series and limited, for example, by a time interval of 1 day, 1 month or 1 year. For the selected interval, a universe of type-1 fuzzy sets is defined. The algorithm for constructing the model is the same as described in [12], except for the choice of intervals: they are determined based not on the time characteristic but on the boundaries of the initially formed type-2 sets. A triangular form of the fuzzy sets is proposed because of its small computational complexity in the experiments.

8 Algorithms for Calculation of Production Capacity in the Information System

The developed information system implements the following functions:

• performs calculation of production capacities;
• reveals a deficit and forms recommendations for balancing capacities by determining the possibility of redistribution of the volumes of the same type of work;
• identifies the need to introduce additional production areas and equipment;
• identifies the need for recruitment and redeployment of staff.

The basic input data is the production program. A list of products is given, together with the scope of work for their creation, distributed by period. The amount of work can be redistributed between time periods based on the current indicators of production processes and their dynamics at the factory. There are the following types of resources: human, material and production area.

The calculation of production capacity requires the following steps:

• Determine the units for the calculation of production capacity.
• For each unit, calculate the current capacity for each of the three types of resources.
• For each unit, define the free capacity for each of the three types of resources.

The next steps depend on the resource type. For human resources, the following possibilities for increasing production capacity exist: transfer between units and hiring new workers. Limiting factors are the skills of specific employees in the case of transfer and the delayed start of work in the case of hiring. The calculation algorithm is extended by the following steps:

• If there are free human resources and a transfer of workers between factory units is possible, then fulfill it.
• Otherwise, hire new workers.

These steps reflect the priority used at the factory. Material resources, such as equipment and machines, are difficult to transfer between departments. If there are no available resources, then the only option is to purchase new equipment.


The current implementation of the information system is based on average values of indicators throughout the year. We propose to use the new models to analyze the time series of indicators at more frequent intervals. For this, an important role is played by the information accumulated in the enterprise information systems.
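The transfer-before-hiring priority for human resources can be sketched as follows (our illustration; unit names, numbers and the `transferable` predicate are hypothetical, and real skill constraints would be richer):

```python
def balance_human_resources(deficit, free_by_unit, transferable):
    """Cover a unit's staffing deficit: first transfer free workers from
    other factory units (where allowed), then hire the remainder."""
    transfers = {}
    for unit, free in free_by_unit.items():
        if deficit <= 0:
            break
        if not transferable(unit):
            continue  # e.g. the required skills are missing in this unit
        moved = min(free, deficit)
        if moved > 0:
            transfers[unit] = moved
            deficit -= moved
    hired = max(deficit, 0)  # remaining positions are filled by hiring
    return transfers, hired

# example: a deficit of 5 workers and two donor units with spare staff
plan = balance_human_resources(5, {"shop A": 2, "shop B": 1}, lambda unit: True)
```

The same skeleton degenerates to "purchase only" for material resources, since the transfer step is usually unavailable for equipment.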

9 Experiment

The experiment plan implies the construction of time series models and the assessment of their quality. For the experiments, time series were generated.

Fig. 3. Smoothing the time series of the coefficient

The forecasting process at this stage will not be carried out; therefore, an internal measure of the quality of the model will be assessed using the SMAPE criterion [13]:

SMAPE = (100% / n) · Σ_{t=1}^{n} |F_t − A_t| / ((|A_t| + |F_t|) / 2)
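The criterion can be computed directly from its definition (our sketch; the function name is ours, and the formula assumes |A_t| + |F_t| > 0 for every point):

```python
def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    n = len(actual)
    return (100.0 / n) * sum(
        abs(f - a) / ((abs(a) + abs(f)) / 2.0)
        for a, f in zip(actual, forecast)
    )

# example: a forecast within a few percent of the actual values
err = smape([100.0, 200.0], [110.0, 190.0])  # roughly 7.3%
```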

Consider the process of smoothing the coefficient. The original time series has 60 points. For comparison, the graph in Fig. 3 shows the smoothing of the time series by the F-transform method [4]. For smoothing, a set of 15 type-2 fuzzy sets and 5 sets of type-1 was selected. As can be seen from Fig. 3, 5 points of a smoothed series were obtained. SMAPE scores for both types of smoothing:

• for the F-transform: 2.01%;
• for type-2 fuzzy sets: 0.65%.


Fig. 4. Smoothing the time series of employee count

Next, the employee count time series is smoothed (Fig. 4). For smoothing, a set of 15 type-2 fuzzy sets and 5 sets of type-1 was chosen. For this time series, 5 points of a smoothed series were also obtained. SMAPE scores for both types of smoothing:

• for the F-transform: 47.54%;
• for type-2 fuzzy sets: 13.23%.

The internal quality measures by SMAPE were also compared with simple exponential smoothing. The estimates showed a smoothing quality better by 0.1% for the proposed method using type-2 fuzzy sets.

10 Conclusion

The analysis of existing algorithms, data and information systems has shown a strong accumulation of errors in calculations of the production capacity of an enterprise, and the great impact of operational monitoring of indicators was shown. These principles allow improving the quality of technological preparation of complex production. The proposed methods of time series prediction improve the quality of management decisions through modeling the processes in the information system.

An approach based on type-2 fuzzy sets was successfully applied to create a model of a time series of production processes. It should be noted that the approach based on modeling interval time series gives a positive result. This is confirmed by the result of the smoothing procedure, where the number of selected points and their values are as close as possible to the stabilization intervals. The smoothing model based on type-2 fuzzy sets shows better internal quality by the SMAPE criterion.

The integration of soft computing techniques, i.e. the F-transform and fuzzy trend and time series modeling, was applied to analyze and forecast time series. This contribution describes a new software system that was elaborated using the proposed theory. Aside from the F-transform, the technology platform includes an analysis of time series and their trends, which are characterized in terms of natural language.


Further research areas are:

• extraction of a rule base from time series models;
• creation of a time series prediction mechanism based on type-2 fuzzy sets;
• development of a modeling system based on fuzzy time series models for the calculation of production capacity in the technological preparation of production.

Acknowledgements. The authors acknowledge that the work was supported within the framework of the state task of the Ministry of Education and Science of the Russian Federation No. 2.1182.2017/4.6 "Development of methods and means for automating the production and technological preparation of aggregate-assembly aircraft production in the conditions of a multiproduct production program". The reported study was funded by RFBR and the government of Ulyanovsk region according to the research projects No. 18-47-730022 and No. 18-47-732016.

References

1. Yarushkina, N.G., Negoda, V.N., Egorov, Yu.P., Moshkin, V.S., Shishkin, V.V., Romanov, A.A., Egov, E.N.: Modeling the process of technological preparation of production based on ontological engineering. Autom. Manag. Process. 4, 4–100 (2017). (in Russian)
2. Yarushkina, N.G., Afanasyeva, T.V., Negoda, V.N., Samokhvalov, M.K., Viceroy, A.M., Guskov, G.Yu., Romanov, A.A.: Integration of design diagrams and ontologies in the objective of the balancing of the capacity of the aviation-building enterprise. Autom. Manag. Process. 4, 85–93 (2017). (in Russian)
3. Perfilieva, I., Yarushkina, N., Afanasieva, T., Romanov, A.: Time series analysis using soft computing methods. Int. J. Gen. Syst. 42(6), 687–705 (2013)
4. Perfilieva, I.: Fuzzy transforms: theory and applications. Fuzzy Sets Syst. 157, 993–1023 (2006)
5. Sarkar, M.: Ruggedness measures of medical time series using fuzzy-rough sets and fractals. Pattern Recogn. Lett. Arch. 27, 447–454 (2006)
6. Hwang, J.R., Chen, S.M., Lee, C.H.: Handling forecasting problems using fuzzy time series. Fuzzy Sets Syst. 100, 217–228 (1998)
7. Herbst, G., Bocklish, S.F.: Online recognition of fuzzy time series patterns. In: 2009 International Fuzzy Systems Association World Congress and 2009 European Society for Fuzzy Logic and Technology Conference (2009)
8. Kacprzyk, J., Wilbik, A.: Using fuzzy linguistic summaries for the comparison of time series. In: 2009 International Fuzzy Systems Association World Congress and 2009 European Society for Fuzzy Logic and Technology Conference (2009)
9. Mendel, J.M., John, R.I.B.: Type-2 fuzzy sets made simple. IEEE Trans. Fuzzy Syst. 10(2), 117–127 (2002)
10. Gardner Jr., E.S.: Exponential smoothing: the state of the art. J. Forecast. 4, 1–38 (1989)
11. Novak, V., Stepnicka, M., Dvorak, A., Perfilieva, I., Pavliska, V.: Analysis of seasonal time series using fuzzy approach. Int. J. Gen. Syst. 39, 305–328 (2010)
12. Bajestani, N.S., Zare, A.: Forecasting TAIEX using improved type 2 fuzzy time series. Expert Syst. Appl. 38(5), 5816–5821 (2011)


13. SMAPE criterion by Computational Intelligence in Forecasting (CIF). http://irafm.osu.cz/cif/main.php
14. Pedrycz, W., Chen, S.M.: Time series analysis, modeling and applications: a computational intelligence perspective. Intell. Syst. Ref. Libr. 47, 404 p. (2013)
15. Novak, V.: Mining information from time series in the form of sentences of natural language. Int. J. Approximate Reasoning 78, 1119–1125 (2016)

Computer Analysis of Geometrical Parameters of the Retina Epiretinal Membrane

Stanislav Daurov1, Sergey Potemkin1, Svetlana Kumova1, Tatiana Kamenskikh2, Igor Kolbenev2, and Elena Chernyshkova3

1 Institute of Applied Information and Communications Technologies, Department of Applied Information Technology, Yuri Gagarin State Technical University of Saratov, Saratov, Russia
{daurovsk,skumova}@mail.ru, [email protected]
2 Department of Eye Diseases, Saratov State Medical University n.a. V.I. Razumovsky, Saratov, Russia
[email protected], [email protected]
3 Department of Foreign Languages, Saratov State Medical University n.a. V.I. Razumovsky, Saratov, Russia
[email protected]

Abstract. Objective: to develop algorithms for processing video images of optical slices of the retina of the eye in order to quantify the degree of folding of the epiretinal membrane and the parameters of the central fossa. Material and methods: The objects of the study were video images of the retina obtained by optical coherence tomography. To develop methods for determining the degree of epiretinal membrane folding, a mathematical model of the profile was formed, consisting of a basic profile (low-frequency component) and folding (high-frequency component). Results: Two alternative methods for estimating the folding of the retinal epiretinal membrane were developed: an averaging method and a method using the wavelet transform. An algorithm for determining the geometrical parameters of the central fossa (height, width and line shape) was also developed. These algorithms are implemented in a software system. Conclusion: Practical application of the developed system showed its adequacy and introduced quantitative estimates of some parameters of the state of the retina into medical practice. Keywords: Optical coherence tomography · Video · Profile of epiretinal membranes · Folding of membranes · Geometry of the central fossa · Relative depth · Assessment of hole symmetry · Deviation of the shape from the norm

1 Introduction

According to the statistics of the Federal Agency for Public Health and Social Development, in 2015 every second Russian citizen suffered from one or another visual impairment. Up to 500 thousand visually impaired people are registered in Russia every year. According to the results of epidemiological monitoring, "indicators of eye diseases in Russia are steadily increasing and in most regions they exceed the average European indicators by 1.5-2 times".
Currently, optical coherence tomography (OCT) is one of the methods of eye examination in ophthalmology [1]. This method is non-contact and allows different structures of the eye to be visualized with a resolution higher than that of ultrasound examination. The results of optical coherence tomography are presented as a set of images of slices (see Fig. 1).

© Springer Nature Switzerland AG 2019. O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 185–198, 2019. https://doi.org/10.1007/978-3-030-12072-6_17

Fig. 1. Tomogram of the retina obtained with OCT

The analysis of the received images is made by doctors visually and demands a high level of professional training. When writing an impression, a doctor has to rely only on his or her own experience, and there are not always enough highly qualified specialists in this field of medicine to meet the demand in many large and small health-care institutions. All this precludes mass, high-quality diagnosis of eye diseases. Visual analysis has another drawback: the lack of a quantitative estimation of the current state of the retina, which is felt especially in long-term monitoring of a disease or of the treatment process.
The presence of an epiretinal membrane (ERM) is one of the important causes of decreased central visual acuity. An ERM commonly develops at an elderly age; its etiology is associated with inflammation, proliferative processes and age-related changes in the vitreous body. Migration of glial cells through defects of the inner boundary membrane into the vitreous body promotes the development of an ERM on the retina's surface. Cells of the retinal pigment epithelium, hyalocytes, fibroblasts and astrocytes take part in the formation of membranes; immune inflammation with cytokines and growth factors is also important for membrane formation. Tangential and vertical tractions of the ERM cause changes in the retina (layer thickening, surface folding, fibrillation of the nerve fiber layer). Surgical treatment improves the quality of patients' vision.
Therefore, computer analysis of OCT tomograms of the retina is in demand, as is confirmed by an extensive bibliography on this problem. Let us consider some of these publications.
The first group of publications is devoted to studies of the statistical characteristics of various types of fundus diseases. In [2], an automatic method was proposed for


determining the presence or absence of an epiretinal membrane in OCT images. Macular edema caused by diabetic retinopathy is examined in [3] by measuring the central thickness of the retina. The variety of values and deviations of foveolar thickness allows various diseases of the retina to be detected, especially age-related macular degeneration [4].
The second group of publications is devoted to computer-aided detection of signs of diseases and a quantitative assessment of their severity. In [5], a machine-learning method was proposed for the identification of the normal macula and three types of its pathologies, namely age-related degeneration, macular edema and macular hole. The retina contains ten intraretinal layers; from flat OCT image data, three-dimensional textural information about them is automatically generated [6] and, in combination with an anatomical atlas of normal retinas, is used for clinically important applications. An automatic method for analyzing OCT images has also been proposed to assess the severity of glaucoma [7].
Since it is impossible to comment on all available publications that in one way or another address the development of methods for computer analysis of the retina, we focus on one of them [8], which solves a problem similar to that of the present work. Its main provisions are: development of a reference model of the retina, selection of a set of characteristics to evaluate the current state of the retina, a method of comparing the analyzed retinal profile with the reference model, and a decision-making procedure. The reference model of the retina is developed on the basis of analysis of a large number of normal (pathology-free) profiles and includes two boundary membranes: the inner boundary membrane (IBM, the upper boundary) and the outer boundary membrane, the retinal pigment epithelium (RPE, the lower boundary); the distance between them determines the thickness of the retina.
There are 5 regions on the reference model and, therefore, on the analyzed profile: a central one that includes the central fossa (fovea), two regions (parafovea) to the left and to the right of the central one, and two further regions (perifovea) to the left and to the right. In macular lesions, the boundary membranes acquire various morphological changes in continuity and smoothness. Therefore, the following characteristics were used as criteria for assessing the state of the retina: retinal thickness, and continuity and smoothness of the membranes that bound it. All characteristics were determined within the regions. For each region, the average thickness was determined, and then the thickness ratios were defined (the ratio of the average thickness of a region to the thickness of the central one). To determine continuity, standard deviation coefficients and correlations between the analyzed boundary and the reference model were used. To express smoothness of the boundary, gradients and curvature measures were used. In all, each region of the retina was analyzed by 10 parameters and, taking the number of regions into account, 50 parameters were calculated while processing each OCT tomogram. The decision-making procedure consisted of a coherent analysis of the whole set of parameters for their compliance with permissible limits established by ophthalmologists.
Characterizing the results of [8], its scale should be mentioned: all components of the eye retina were investigated (the inner and outer limiting membranes and


space between them); the reference model of the retina with 5 established regions was developed; 10 characteristics were calculated for each region when comparing the analyzed retina with the reference model, using a rather complex mathematical apparatus. The decision-making procedure was also impressive. All of the above suggests excessive complexity and redundancy; we therefore offer our own method of solving this problem, using a different approach.

2 Materials and Methods

The proposed work is devoted to computer analysis of morphological changes in the macular region of the retina, in particular the epiretinal membrane. The main goal was to solve the problem of quantitative description of the current state of the retina; for this, algorithms for assessing the folding of the retinal epiretinal membrane and the parameters of the central fossa have been developed.
The degree of epiretinal membrane folding is one of the parameters of the retina. It indicates the degree of pathology development (and is, moreover, difficult to determine by visual analysis at the initial stage), which may ultimately lead to a retinal break. This makes it necessary to estimate the degree of membrane folding.

Fig. 2. Profiles of the retina with the presence of folding.

In terms of folding, the normal profile of the retina (see Fig. 2a) is characterized by either smoothness or a very small degree of folding. The profile of a retina with an average degree of folding is shown in Fig. 3.

Fig. 3. Profiles of the retina with the absence of folding.


Analyzing the images of profiles with a visually noticeable folding, one can conclude that the profile is a combination of two components: a basic profile that determines the rough configuration of the analyzed profile, and a function that contains small profile changes. Translating the above argument into mathematical terms, one can represent the analyzed profile as a one-dimensional function consisting of a low-frequency (basic profile) and a high-frequency (small changes) part. To test and select the most effective algorithm for determining the folding of the retinal membrane, a mathematical model of the profile is proposed: a single period of a cosine curve (the low-frequency component, a basic profile) with several periods of a sine curve superimposed (the high-frequency component, membrane folds):

Y_m(x) = A·cos(2πx/N) + a·sin(2πfx/N),  x = 1, 2, 3, …, N − 1,  (1)

where A is the amplitude of the low-frequency model component; a and f are the amplitude and frequency of the high-frequency model component; N is the number of model points.
Thus, the main task of the membrane folding determination algorithm is to extract, from the function describing a real profile of the membrane, its high-frequency component and to evaluate the component's parameters (amplitude and frequency) [9]. At first sight, this problem seems simple to solve using the Fourier transform. Testing this method on the mathematical model gives an ideal result, which is not surprising, since the model itself is built from the basis functions of the Fourier transform. However, when the Fourier transform is applied to the function of a real membrane profile, one obtains a wide spectrum of frequencies in which the frequency zones forming the basic profile and the membrane folds intersect, and this creates great difficulties in developing a formal method of their separation.
In view of the foregoing, it is necessary to look for methods which would be (1) easy to implement and (2) aimed at the detection or reconstruction of a basic profile. To solve this problem, one can propose (in order of increasing complexity) the following methods for determining a basic profile: the moving average and the wavelet transform.
The goal of the averaged (basic) profile finding method is to find a profile most similar to the original (i.e. undeformed) one. For this, one can use the well-known moving average method: a linear mask m containing an odd number of units slides over the analyzed profile Y_i. The counts of the analyzed profile within the mask are summed, the resulting sum is divided by the mask size, the calculated average is stored in a new array (the basic profile), then the mask is moved to the next count of the analyzed profile, and so on. In general, to implement this method, one has to perform a number of steps.
To eliminate edge effects, it is necessary to extend the profile function on the left and on the right by half of the mask size. To form the left (2) and the right (3) functions, the mirroring principle is used:


L_k = Y_i,  i = 1, 2, …, (m − 1)/2;  k = (m − 1)/2 − i + 1,  (2)

R_j = Y_{N−i},  i = 1, 2, …, (m − 1)/2;  j = i,  (3)

The augmented profile function is obtained by combining the three functions:

F_i = {L | Y | R},  i = 1, 2, …, N + (m − 1),  (4)

where | is the concatenation (cascading) sign. The augmented profile function is processed in accordance with the formula

S_i = (F_k + Σ_{j=1}^{(m−1)/2} (F_{k−j} + F_{k+j})) / m,  m = 3, 5, 7, …;  i = 1, 2, …, N;  k = i + (m − 1)/2,  (5)

where F and S are the augmented and the smoothed profile functions; m is the mask size; i and j are the indices of the current points of the smoothed function and of the mask; k is the index of the augmented function. This somewhat complicated indexing system immediately yields a smoothed function (basic profile) of size N. The function of minor changes of the original profile is defined by subtracting the smoothed profile function from the original one:

B_i = Y_i − S_i,  i = 1, 2, …, N.  (6)
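The whole moving-average pipeline of Eqs. (1)-(6) can be sketched in a few lines of Python (a minimal illustration, not the authors' program; the helper names `model_profile`, `smooth` and `folds` and all parameter values are invented for the demonstration):

```python
import math

def model_profile(N=256, A=40.0, a=3.0, f=16):
    # Eq. (1): one cosine period (basic profile) plus f sine periods (folds)
    return [A * math.cos(2 * math.pi * x / N) + a * math.sin(2 * math.pi * f * x / N)
            for x in range(N)]

def smooth(Y, m):
    # m is an odd mask size; mirror-pad by (m - 1) / 2 samples on each
    # side (Eqs. (2)-(4)), then take the moving average of Eq. (5)
    h = (m - 1) // 2
    F = Y[1:h + 1][::-1] + Y + Y[-h - 1:-1][::-1]
    return [sum(F[i:i + m]) / m for i in range(len(Y))]

def folds(Y, m):
    # Eq. (6): high-frequency residual = original minus basic profile
    return [y - s for y, s in zip(Y, smooth(Y, m))]

Y = model_profile()
B = folds(Y, m=17)  # mask size close to the fold period N / f = 16
print(max(abs(b) for b in B))  # residual amplitude is on the order of a
```

With a mask close to one fold period, the residual B carries essentially only the sine folds, while the smoothed array reproduces the cosine basic profile.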

The smoothing efficiency largely depends on the mask size. Therefore, to conduct a computational experiment, it is necessary to develop criteria for completing the averaging process:
• the maximum value of the integral difference I_j^B between the smoothed and the original profile:

I_j^B = Σ_{i=1}^{N} |B_{ji}|,  j = 1, 2, 3, …;  K1 = max(I_j^B),  (7)

where K1 is the first criterion for completing the averaging stage and j is the index of the averaging stage;
• the amount of change of the integral difference between adjacent averaging steps:

K2: |I_{j+1}^B − I_j^B| ≤ Δ,  (8)

where Δ is a predefined constant.
It is theoretically obvious that the most accurate selection of a basic profile occurs when the mask size is equal to the period, m_T = T, of the high-frequency model component.

Fig. 4. The results of the moving average method

However, during the computational experiment, neither of the proposed criteria (formulae (7) and (8)) ensured completion of the averaging process. When using the K1 criterion, completion occurred at a greater value of the mask, m_max > m_T. The following experiment was carried out using the K2 criterion, but with a changed condition:

K2 = max{ |I_{j+1}^B − I_j^B| },  j = 1, 2, 3, …, m_max.  (9)

As a result, the mask value equal to m_min is found. Thus, the required mask value m_x, which provides the most accurate selection of a basic profile, lies in the range from m_min to m_max. Multiple experiments on processing the model with different high-frequency component frequencies revealed the following pattern:

m_x = (m_max + m_min) / 2 ≈ m_T.  (10)
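Under the same model assumptions, the K2 stopping rule of Eq. (8) can be sketched as follows; `integral_difference` and `stop_mask` are illustrative helper names, and Δ = 10 is an arbitrary demonstration threshold, not a value from the paper:

```python
import math

def integral_difference(Y, m):
    # I^B(m) = sum_i |B_i| (Eq. (7)) for an odd mask size m,
    # using mirror padding and the moving average of Eqs. (2)-(5)
    h = (m - 1) // 2
    F = Y[1:h + 1][::-1] + Y + Y[-h - 1:-1][::-1]
    S = [sum(F[i:i + m]) / m for i in range(len(Y))]
    return sum(abs(y - s) for y, s in zip(Y, S))

def stop_mask(Y, delta, m_limit=61):
    # K2 criterion, Eq. (8): stop when I^B changes by no more than delta
    # between adjacent odd mask sizes
    prev = integral_difference(Y, 3)
    for m in range(5, m_limit + 1, 2):
        cur = integral_difference(Y, m)
        if abs(cur - prev) <= delta:
            return m
        prev = cur
    return m_limit

N, A, a, f = 256, 40.0, 3.0, 16
Y = [A * math.cos(2 * math.pi * x / N) + a * math.sin(2 * math.pi * f * x / N)
     for x in range(N)]
print(stop_mask(Y, delta=10.0))
```

As the paper observes, the returned mask depends strongly on the threshold: I^B grows quickly while the mask is smaller than the fold period and then changes only slowly, so the stopping point brackets rather than pinpoints m_T.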

It should be noted that this result is obtained for a model in which a periodic function is used as the high-frequency component, which is unlikely in real profiles. Nevertheless, this principle should be checked when analyzing real profiles.
The second method for determining the degree of folding is the discrete wavelet transform (DWT) [10]. The essence of the wavelet transform is that the signal is decomposed over basis functions in the form of short waves with zero average value. Such functions are called wavelets (literally, "splashes"). The discrete wavelet transform procedure involves two functions: the wavelet function ψ itself (the mother wavelet) and the scaling function φ, which creates a system of basis functions:

φ_{j,k}(x) = 2^{j/2} · φ(2^j · x − k),  (11)

where the index k determines the position of the basis function φ_{j,k}(x) on the x axis; the index j determines the width of φ_{j,k}(x) along the x axis; the factor 2^{j/2} adjusts the height (amplitude) of the function. The direct discrete wavelet transform consists of a pair of transforms:


A(j0, k) = (1/√M) · Σ_x f(x) · φ_{j0,k}(x),  (12)

D(j, k) = (1/√M) · Σ_x f(x) · ψ_{j,k}(x),  (13)

where x = 0, 1, 2, …, M − 1 is a discrete variable; j0 = 0; the number M is chosen to be a power of 2, i.e. M = 2^J; the indices are j = 0, 1, 2, …, J − 1 and k = 0, 1, 2, …, J − 1. As a result, the approximating coefficients A(j0, k) of the function (a rough copy of the analyzed function, the low-frequency part) and the detailing coefficients D(j, k) (small changes, the high-frequency part) are calculated. The inverse DWT is calculated using the formula

f(x) = (1/√M) · Σ_k A(j0, k) · φ_{j0,k}(x) + (1/√M) · Σ_{j=j0}^{J−1} Σ_k D(j, k) · ψ_{j,k}(x).  (14)

The wavelet transform is widely used to filter signals and images: the detailing coefficients D(j, k) obtained by the direct DWT are processed (for example, limited in magnitude according to a given threshold), and then the inverse DWT is performed. In our case it is necessary to determine only the approximating component of the analyzed profile, i.e., in our terms, to find a basic profile. To do this, the detailing coefficients in formula (14) should be set to zero (excluded) and the inverse DWT performed. Since there are many types of wavelet functions, choosing the most adequate one for a particular problem is always quite difficult, so the choice is often made heuristically. In our case, we choose the simplest, the Haar wavelet (requiring minimal computation), and the most common, the Daubechies wavelet (Fig. 5).

Fig. 5. Graphs of the Haar and Daubechies wavelets
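To illustrate the filtering idea of Eqs. (12)-(14), here is a self-contained sketch using the simplest, Haar, wavelet: the detail coefficients D(j, k) are discarded, so the synthesis returns only the low-frequency part, which for Haar is exactly the block-wise means. The function name and parameter values are invented for the illustration.

```python
import math

def haar_basic_profile(Y, levels):
    # Analysis step (Eq. (12), Haar case): pairwise sums / sqrt(2);
    # the detail coefficients of Eq. (13) are zeroed, so the synthesis
    # of Eq. (14) reduces to spreading each approximation sample back
    # over a pair of points, scaled by 1 / sqrt(2).
    if levels == 0:
        return list(Y)
    approx = [(Y[2 * i] + Y[2 * i + 1]) / math.sqrt(2)
              for i in range(len(Y) // 2)]
    coarse = haar_basic_profile(approx, levels - 1)
    out = []
    for v in coarse:
        out += [v / math.sqrt(2)] * 2
    return out

# model profile: one cosine period plus 16 fold periods (cf. Eq. (1))
N = 256
Y = [40.0 * math.cos(2 * math.pi * x / N)
     + 3.0 * math.sin(2 * math.pi * 16 * x / N) for x in range(N)]
S = haar_basic_profile(Y, levels=4)  # piecewise constant over blocks of 16
```

Four levels turn each block of 16 samples into its mean, which removes the 16-sample folds but leaves visible steps in the cosine; this stepping is exactly why the smoother Daubechies wavelet gives a better basic profile on real data.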


During the initial experiments on processing the mathematical models with the Haar and Daubechies wavelets, it was found that the Haar wavelet is not suitable for this task (Fig. 6a), since the approximating part (basic profile) comes out noticeably stepped and, therefore, the detailing part of the profile is determined with a large error. The Daubechies wavelet showed a good result (Fig. 6b); here the criterion K2, formula (8), should be used as the termination criterion for the transform process.

a) Haar wavelet, pass 6; b) Daubechies wavelet, pass 7

Fig. 6. Processing the model by the discrete wavelet transform


Comparing the results of the moving average method (Fig. 4) and of the discrete wavelet transform with the Daubechies wavelet function (Fig. 6b) for determining a basic profile of the mathematical model, it is seen that both methods show excellent results, providing an almost perfect selection of the basic profile. This does not allow a decision in favor of one or the other method, and it is therefore necessary to continue checking their capabilities on real profiles.
Comparison of the results of processing real retinal profiles by the proposed methods (Fig. 7) also demonstrates approximately equal capabilities. However, analysis of the profile shape obtained by the discrete wavelet transform method (Fig. 7b) shows that the basic profile is smoothed out without sharp transitions; this indicates the absence of small changes within it and therefore suggests, at least intuitively, that a higher accuracy of its determination is provided. Along with this, the DWT method has a more clear-cut criterion for completing the basic profile search.
The high-frequency part extracted by the processing of a real profile is analyzed to determine the degree of folding: the maximum positive and negative amplitudes and the average frequency.
The procedure for calculating this set of characteristics is carried out by a specially designed program, which performs the following operations:
• formation of a vector of points (positions) at which the function of small changes equals 0 or changes sign;
• formation of a vector of positive amplitudes and a vector of negative amplitudes; for this, the function of small changes is analyzed between adjacent positions of the vector of points to find the extreme amplitude values;
• the average frequency F is defined as half the dimension of the vector of points (every 2 points correspond to one period); the maximum positive amplitude A⁺Max and the maximum negative amplitude A⁻Max are found in the corresponding vectors.
Determination of the parameters of the retinal central fovea includes the following estimations: relative depth, symmetry and deviation of the central fovea shape from the norm [9]. The procedure for determining the relative depth includes the following operations:
• a threshold transform of the halftone profile image to a binary one is performed, resulting in two lines corresponding to the inner boundary membrane at the top (blue in Fig. 8) and the outer boundary membrane at the bottom (green);
• since profiles (slices) are most often inclined during retinal scanning, it is necessary to determine the angle of the profile slope before determining the central fovea depth and to rotate the binary image by this angle so that the profile becomes horizontal; if the angle is less than 3°, the rotation need not be performed;
• on the top line of the profile the lowest point is found, which corresponds to the bottom of the central fovea; two maximum points are determined to the right and to the left of this point and connected by a segment (yellow), whose length determines the fovea width;
• a vertical segment is built through the point corresponding to the bottom of the fovea (Fig. 8); this segment connects the two lines (yellow and green), resulting in two


a) moving average method, M = 43; b) DWT method, Daubechies wavelet 6, K = 6

Fig. 7. Real profile processing by the moving average method (a) and the DWT method (b).

values: h1, the absolute depth of the central fovea, and h2, the distance from the lowest fovea point to the outer boundary membrane (green line). The relative depth is calculated by the formula g = h1/(h1 + h2) and is a capacious parameter: g = 0 means that the central fovea is absent, g = 1 means a retinal rupture, and g = 0.4–0.7 means that the central fovea corresponds to the norm (Fig. 8).

Fig. 8. Determination of the central fovea relative depth.
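The folding estimations and fovea measures described above can be sketched as follows (a schematic reimplementation with invented names, not the authors' program; B stands for an already-extracted high-frequency part, and h1, h2 for the measured segment lengths):

```python
import math

def folding_stats(B):
    # positions where the function of small changes is 0 or changes sign
    points = [i for i in range(1, len(B)) if B[i] == 0 or B[i - 1] * B[i] < 0]
    a_plus = max(B)          # maximum positive amplitude A+Max
    a_minus = -min(B)        # maximum negative amplitude A-Max (magnitude)
    freq = len(points) // 2  # every two points correspond to one period
    return a_plus, a_minus, freq

def relative_depth(h1, h2):
    # g = h1 / (h1 + h2): 0 = fovea absent, 1 = retinal rupture,
    # 0.4-0.7 = the central fovea corresponds to the norm
    return h1 / (h1 + h2)

def mirror_symmetry(profile):
    # correlation of the fovea profile with its mirror image; 1 = symmetric
    n = len(profile)
    mean = sum(profile) / n
    dev = [p - mean for p in profile]
    return sum(a * b for a, b in zip(dev, dev[::-1])) / sum(a * a for a in dev)

B = [3.0 * math.sin(2 * math.pi * 5 * x / 100) for x in range(100)]
print(folding_stats(B))                  # amplitudes close to 3
print(relative_depth(60, 40))            # 0.6, within the norm
print(mirror_symmetry([0, 2, 5, 2, 0]))  # 1.0 for a symmetric profile
```

The same correlation machinery used in `mirror_symmetry` also serves for comparing the fovea profile with a parabola of matching depth and width.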

The algorithm for matching the central fovea shape to the norm is formulated as follows: a parabola is constructed with parameters corresponding to those of the central fovea (absolute depth and width) and is compared with the fovea profile graph. The comparison is made by calculating the correlation coefficient between the parabola and the profile.
The algorithm for estimating the central fovea symmetry is similar: the correlation coefficient between the initial profile of the central fovea and its mirror reflection relative to the vertical axis is calculated. The degree of profile asymmetry is characterized by correlation coefficient values less than 1.

Table 1. The degree estimations of the retina folding

№   Retina slice image   A⁺Max   A⁻Max   F
1   (image)              3.3     1.3     31
2   (image)              4.4     10      14
3   (image)              17.9    8       19

The program gives the following folding estimations: A⁺Max, the maximum positive amplitude; A⁻Max, the maximum negative amplitude; and F, the spatial frequency. The amplitudes are measured in pixels, and the frequency is determined by the number of oscillations with maximum parameters that can be placed on the profile.
55 patients aged 56–73 years with various degrees of ERM severity (from slight traction to lamellar rupture) and with corrected visual acuity from 0.03 to 0.6 took part in the study. Along with general clinical studies, the patients underwent optical coherence tomography of the macula by means of a Spectralis OCT apparatus (Heidelberg Engineering, Germany) before and after surgical treatment. These patients


underwent three-port subtotal vitrectomy with 25G instruments. The purpose of the operation was to remove the epiretinal tissue causing the traction action. Membrane peeling was performed using tangential traction to reduce the risk of retinal rupture. Peeling of the inner boundary membrane (IBM) was carried out with tweezers to reduce the risk of ERM recurrence; the IBM was stained with trypan blue. The periphery was examined by means of sclerocompression to search for silent breaks and perform the necessary laser coagulation. A gas tamponade was used to provide drainage of subretinal fluid.
The results of ERM analysis before and after surgical treatment of patients with various degrees of ERM severity were evaluated according to the Gass classification, by which all the patients were divided into three groups:
Group I. Stage 0: the membrane is translucent and is not accompanied by any retinal deformation; such membranes are also known as cellophane maculopathy due to the cellophane-like reflection of the retina's inner surface on ophthalmoscopy. This group included 7 patients (7 eyes) with visual acuity from 0.3 to 0.6.
Group II. Stage 1: irregular wrinkling of the retina's inner surface forms; the appearance of wrinkled cellophane is acquired due to traction of the inner layers of the retina, which form folds as a result of the wrinkling of the overlying membrane; minute superficial radial folds diverge outwards from the edges of the shriveled membrane; the wrinkling may be quite enough to deform paramacular vessels and pull them up to the foveola. This group included 36 patients (36 eyes) with visual acuity from 0.08 to 0.2.
Group III. Stage 2: thick matte membranes; total wrinkling of the macula throughout its whole thickness may occur simultaneously with retinal edema, small hemorrhages, cotton-like exudates and, occasionally, small macular retinal detachment; such membranes are called macular puckers (wrinkled macula) or membranes of the 2nd stage.
This group consisted of 12 patients (12 eyes) with visual acuity from 0.1 to 0.03.
A clinical example of treating patient Ch., 73 years old (group II). The patient came to the clinic of eye diseases with complaints of decreased visual acuity OS. The complaints had appeared six months before. Visual acuity at the time of treatment was 0.2. OCT revealed an ERM slightly deforming the retinal profile. Minimally invasive vitrectomy with ERM removal was carried out. Visual acuity at the time of the patient's discharge from the hospital was 0.7. OCT data: the ERM was completely removed.
A clinical example of treating patient S., 68 years old (group III). The patient came to the clinic of eye diseases with complaints of reduced visual acuity, distortion of objects, and a stain before OD. The complaints had appeared three years before and increased gradually. Visual acuity at the time of treatment was 0.05. OCT revealed an ERM deforming the retinal profile, a non-through gap and cysts. Minimally invasive vitrectomy with ERM removal was performed. Visual acuity at the moment of the patient's discharge from the hospital was 0.3. OCT data: the ERM was removed; isolated cysts remained.
A clinical example of treating patient O., 65 years old (group III). The patient came to the clinic of eye diseases with complaints of reduced visual acuity, distortion of objects, and a stain before OD. The complaints had appeared three or four years before and increased gradually. Visual acuity at the moment of treatment was 0.03. OCT revealed an ERM deforming the retinal profile, a through gap and cysts. Minimally invasive vitrectomy with ERM removal was performed. Visual acuity at the moment of the patient's discharge from the hospital was 0.1. OCT data: the ERM was removed and the gap was closed.


Conclusion. A mathematical model of a retinal membrane profile has been developed. Two algorithms, based on smoothing of the initial profile and on the wavelet transform, were developed to determine the epiretinal membrane folding. The developed algorithms were tested on the mathematical model and on real profiles and proved their efficiency. An algorithm for estimating the central fovea parameters was developed, in particular the relative depth, the symmetry and the conformance of the central fovea shape to the norm. The results of testing the program complex showed the adequacy of the developed algorithms, but research in this direction should be continued.
In comparison with the article [8], the present work examines two elements of the eye retina: the epiretinal membrane, whose geometrical parameters (folding) are determined, and the central fossa, whose characteristics are determined. The calculated parameters are direct, i.e. physically interpretable. The mathematical apparatus used in the calculations is rather simple and does not require large computational power. The practical adequacy of the developed algorithms has been confirmed.

References
1. Lambrozo, B., Rispoli, M.: OCT of the Retina. Method of Analysis and Interpretation, Moscow, 83 p. (2012). (in Russian)
2. Baamonde, S., de Moura, J., Novo, J., Ortega, M.: Automatic detection of epiretinal membrane in OCT images by means of local luminosity patterns. Adv. Comput. Intell. 10305, 222–235 (2017)
3. Virgili, G., Menchini, F., Murro, V., Peluso, E., Rosa, F., Casazza, G.: Optical coherence tomography (OCT) for detection of macular oedema in patients with diabetic retinopathy. Cochrane Database Syst. Rev. 7, CD008081 (2011)
4. Roh, Y.R., Park, K.H., Woo, S.J.: Foveal thickness between Stratus and Spectralis optical coherence tomography in retinal diseases. Korean J. Ophthalmol. 27(4), 268–275 (2013)
5. Liu, Y.Y., Chen, M., Ishikawa, H., Wollstein, G., Schuman, J.S., Rehg, J.M.: Automated macular pathology diagnosis in retinal OCT images using multi-scale spatial pyramid and local binary patterns in texture and shape encoding. Med. Image Anal. 15(5), 748–759 (2011)
6. Quellec, G., Lee, K., Dolejsi, M., Garvin, M.K., Abramoff, M.D., Sonka, M.: Three-dimensional analysis of retinal layer texture: identification of fluid-filled regions in SD-OCT of the macula. IEEE Trans. Med. Imaging 29(6), 1321–1330 (2010)
7. Koprowski, R., Rzendkowski, M., Wrobel, Z.: Automatic method of analysis of OCT images in assessing the severity degree of glaucoma and the visual field loss. BioMed. Eng. OnLine 13, 16 (2014)
8. Fu, D., Tong, H., Zheng, S., Luo, L., Gao, F., Minar, J.: Retinal status analysis method based on feature extraction and quantitative grading in OCT images. BioMed. Eng. OnLine 15, 87 (2016). https://doi.org/10.1186/s12938-016-0206-x
9. Daurov, S.K., Dolinina, O.N., Kamenskikh, T.G., Batischeva, Yu.S., Kolbenev, I.O., Andreychenko, O.A., Potemkin, S.A., Proskudin, R.A.: Computer analysis of epiretinal membrane parameters. Saratov J. Med. Sci. Res. 13(2), 350–358 (2017). (in Russian)
10. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Pearson (2008). ISBN 978-0-13-168728-8

Synthesis of the Information Channel with Codec Based on Code Signal Feature

Dmitry Klenov1, Michael Svetlov2, Alexey L'vov1, Marina Svetlova1, and Dmitry Mishchenko1

1 Yuri Gagarin State Technical University of Saratov, Saratov, Russia
[email protected], [email protected], [email protected], [email protected]
2 Institute of Precision Mechanics and Control of RAS, Saratov, Russia
[email protected]

Abstract. The work considers an information channel (IC), which consists of an encoding device, a decoding device and a communication channel (CC). Two IC types are analysed: the IC with transformations and the generic IC, where both transformations and erasures are possible. To provide a high level of IC noise immunity, it is suggested to use cascade coding with an error-correction code on the first stage and a code based on the code signal feature (CSF) on the second stage of the encoding. The paper gives an overview of the CSF-based code, describes its properties, explains the encoding principles and provides the structural schemas of the encoding and decoding devices. A mathematical model for each IC type is created. Both models assume the influence of random additive pulse noise with a Poisson distribution of impulses and a Gaussian distribution of impulse amplitudes. The noise influence analysis is performed. As a first step, the formulas to calculate the CC statistics are deduced. As a second step, the possible IC reception outcomes are identified based on several proved lemmas. Finally, the IC reception outcome probability formulas are obtained. The main idea behind the probability formula synthesis is a decomposition of the complex outcome events into a number of patterns, which are similar for all IC reception outcomes. The decomposition unifies the probability calculation approach and simplifies the resulting formulas.

Keywords: Coding · Decoding · Noise immunity · Code signal feature · Information channel · Mathematical model

1 Introduction

The main structural unit of digital data transmission systems (DDTS) with a serial synchronous interface is the information channel (IC), which consists of an encoding device (ED), a decoding device (DD) and a communication channel (CC) [1,2]. Providing a high level of information reliability is the main problem of IC synthesis. The IC information reliability is characterized by its noise immunity

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 199–214, 2019. https://doi.org/10.1007/978-3-030-12072-6_18


and is quantitatively measured by the probability of the correct reception (pcr), the probability of the false reception (pfr) and the probability of the protective failure (ppf).

The transmitted information is encoded and sent over the CC in the form of code words. Under the influence of random additive pulse noise, especially pulse noise of high intensity, when ipn = fpn/fc ≥ 3 (ipn is the pulse noise intensity; fpn and fc are the frequencies of the pulse noise and the code, respectively), the transmitted code words may be distorted. There are two types of possible code word modifications: transformations and erasures. A transformation translates the initial code word into any other code word. An erasure translates the initial code word into a word that is invalid for the used code. Depending on the possible modifications, ICs are classified as ICs with transformations, ICs with erasures and generic ICs, where both transformations and erasures are possible [3–6].

The solution of the noise immunity improvement problem implies the reduction of the false reception probability, which is usually achieved by the application of error-correcting codes [1,2]. The efficiency of these codes increases when erasures are present in the IC. There is a formula which ties the minimal code distance of the code to the number of detected/corrected errors of different types:

dmin ≥ r + s + e + 1, r ≥ s,  (1)

where r and s are the numbers of detected and corrected transformation errors, and e is the number of corrected erasure errors. If a DD operates in a mode where all detected transformation errors are corrected, (1) can be rewritten as

dmin ≥ 2s + e + 1.  (2)

It can be concluded from (2) that, for the same minimal code distance dmin, the correcting code fixes twice as many erasure errors as transformation errors, which positively influences the IC noise immunity.

Another way of improving the IC information reliability is cascade encoding. Cascade codes usually provide high correcting ability, whereas DDTS based on such codes have low implementation complexity [1,4,7]. Cascade coding was invented by Forney [8]. In practice, DDTS typically utilize two-cascade codes with convolutional or block binary codes in the internal cascade and various Bose-Chaudhuri-Hocquenghem (BCH) codes in the external cascade [9,10]. However, the correcting ability of such codes may become insufficient to provide the proper level of information reliability under the influence of high-intensity noise. So it is suggested to use the code based on the code signal feature (CSF) on the last stage of the encoding in addition to the existing code cascades.

Without loss of generality, here and further on it is assumed that a two-stage coding structure is used, with some error-correcting primary code (PC) on the first stage and a secondary code (SC) with CSF on the second stage of the encoding. It is also presumed that, in the general case, a K-ary primary alphabet is used in the PC. It means that all PC code symbols can be enumerated from 1 to K and their values fully cover the range [0, K − 1]. Further on in this paper, the phrase "symbol i" means the PC symbol with number i and value i − 1.
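Inequality (2) above can be illustrated with a short sketch (a hypothetical helper, not part of the paper) that enumerates the combinations of corrected transformation errors s and corrected erasure errors e allowed by a given minimal code distance:

```python
# Sketch of inequality (2): all (s, e) pairs a code with minimal distance d_min
# can correct when every detected transformation error is corrected (r = s).
def correctable_pairs(d_min):
    return [(s, e) for s in range(d_min) for e in range(d_min)
            if 2 * s + e + 1 <= d_min]

# For d_min = 4 the code corrects up to 3 erasures but only 1 transformation,
# which illustrates the "twice as many erasures" observation above.
print(correctable_pairs(4))
```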


The IC with transformations with a CSF-based code was already studied in the previous works of the authors. For this IC, the mathematical model was introduced and investigated, the outcome probability formulas were deduced [11], analytical encoding and decoding algorithms were suggested, and the SC synchronization questions were evaluated [12]. The goal of this work is to provide a unified mathematical model of the IC with a CSF-based code suitable for all IC types. The approach used in the work allows obtaining the outcome probability formulas both for the IC with transformations and for the generic IC.

2 Code Signal Feature

As is known, the signal feature of a code characterizes the informational parameter of the transmitted signal, describes its modulation and generally defines its properties and features. The coding theory describes six different signal features. Five of them (the amplitude, time, frequency, phase and polar features) are referred to as primary signal features, whereas the sixth, the CSF, is considered secondary (protective) [7].

The code based on CSF is a binary combinatorial nonlinear inseparable code with a constant weight. Its code words are binary sequences of length n ≥ 5. The SC code word set A = {Ai}, i ∈ [1, K], contains K code words (by the number of symbols in the PC alphabet). Each sequence has the same weight m1 (3 ≤ m1 ≤ n − 2); in other words, each contains m1 unary symbols. The positions of the unary symbols in the code word Ai can be defined as μiu, where 1 ≤ u ≤ m1, 1 ≤ μiu ≤ n. The first and the last symbols in each code sequence are always equal to one, i.e. μi1 = 1, μim1 = n.

Definition 1. The unary symbols on the positions 1 and n of any SC code word are called boundary. The other unary symbols are called internal.

The CSF-based code processes the PC symbols one by one. Each PC symbol i with duration Top is replaced by the SC code word Ai, which is transmitted over the CC as a series of m1 working pulses with a fundamentally small duration τ ≪ Top and amplitude As. The pulses are located on the time slots that correspond by their numbers to the unit bits of the SC code word Ai. The first working impulse is generated with a fixed initial delay Δt0 measured after each PC symbol incoming to the SC encoder. All subsequent working pulses are generated at time intervals that are multiples of some delay interval Δt. The working pulse time positions Tiu can be calculated by the formula

Tiu = Δt0 + (μiu − 1)Δt, 1 ≤ u ≤ m1.  (3)

The entire SC code sequence construction is completed within the time TA = Δt0 + (n − 1)Δt < Top. As it was shown in [7], to provide a high level of noise immunity, the delay values Δt0 and Δt must be rigidly fixed and be the same for the SC ED and DD. Besides that, the condition Δt0 = kΔt, where k ∈ N, must hold true.
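Formula (3) can be sketched numerically; the code word length, weight, delays and unary-bit positions below are illustrative assumptions, not values from the paper:

```python
# Sketch of formula (3): time positions of the SC working pulses.
# All numeric values below are illustrative assumptions.
n, m1 = 9, 4                      # SC code word length and weight
dt = 2e-6                         # delay interval Delta-t (2 us, assumed)
dt0 = 3 * dt                      # initial delay Delta-t0 = k * Delta-t, k = 3
mu = [1, 4, 7, 9]                 # unary-symbol positions; boundary bits at 1 and n

# T_iu = Delta-t0 + (mu_iu - 1) * Delta-t, 1 <= u <= m1
T = [dt0 + (pos - 1) * dt for pos in mu]
TA = dt0 + (n - 1) * dt           # construction completed within TA < Top
print(T)
```

The last working pulse lands exactly at TA, since the boundary bit occupies position n.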


Figure 1 shows the SC encoding of an arbitrary PC code word Q = (q1, q2, ..., qnK) and its transmission over the CC. Besides the requirements listed above, additional limitations may be enforced on the entire SC code word set. More precisely, the SC code word set A can be generated in a way that satisfies the interval precondition defined below.

Fig. 1. The PC symbols representation with the SC code combinations and their transmission with sequences of short duration pulses

Definition 2. Each code word Ai ∈ A forms a collection of intervals as pairwise differences between the unary symbol position numbers: Si = (μiv − μiu), 1 ≤ u < v ≤ m1.

The deduction of the statistics rij^(k)(h) for k > 0 is shown on the example of r11^(k)(h). Let's define ξk to be the stochastic variable which represents the amplitude of the k-th noise impulse. By definition, it is normally distributed with the parameters (μ, σ²). The appearance of one noise impulse does not affect the other noise impulses, so the variables ξ1, ξ2, ..., ξk are independent. It is known that the sum of independent normally distributed variables with the parameters (μ1, σ1²), (μ2, σ2²), ..., (μk, σk²) is also a normally distributed variable with the parameters (μ1 + μ2 + ... + μk, σ1² + σ2² + ... + σk²). In our case μ1 = μ2 = ... = μk = μ and σ1 = σ2 = ... = σk = σ. Hence, the sum of the amplitudes of k noise impulses ξ^(k) = ξ1 + ξ2 + ... + ξk is a Gaussian variable with the parameters (kμ, kσ²). Given that, the amplitude of the signal received at the DD is also stochastic and equal to A = As + ξ^(k). To recognise the unary working impulse at the DMC-1, the inequality A ≥ h must hold true, which means As + ξ^(k) ≥ h, or ξ^(k) ≥ h − As. Therefore,

r11^(k)(h) = p(As + ξ^(k) ≥ h) = (1/(σ√(2πk))) ∫_{h−As}^{+∞} e^(−(ξ−kμ)²/(2kσ²)) dξ.  (7)

The remaining three formulas are obtained in a similar way:

r00^(k)(h) = p(ξ^(k) < h) = (1/(σ√(2πk))) ∫_{−∞}^{h} e^(−(ξ−kμ)²/(2kσ²)) dξ,  (8)

r01^(k)(h) = p(ξ^(k) ≥ h) = (1/(σ√(2πk))) ∫_{h}^{+∞} e^(−(ξ−kμ)²/(2kσ²)) dξ,  (9)

r10^(k)(h) = p(As + ξ^(k) < h) = (1/(σ√(2πk))) ∫_{−∞}^{h−As} e^(−(ξ−kμ)²/(2kσ²)) dξ.  (10)

The final result for rij(h) can be obtained by summing up (7)–(10) for all k ≥ 0 with account for the noise impulse appearance probability:

rij(h) = Σ_{k=0}^{∞} r(k) rij^(k)(h) = Σ_{k=0}^{∞} ((ipn Top/τ)^k / k!) e^(−ipn Top/τ) rij^(k)(h).  (11)

It can be shown (see [11]) that both r00(h) + r01(h) and r10(h) + r11(h) form full sets of events and sum up to 1.
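The statistics (7)–(11) can be sketched numerically with the Gaussian tail function and the Poisson weighting of the impulse count; every channel parameter below is an assumed illustrative value, not taken from the paper:

```python
from math import erf, exp, factorial, sqrt

As, h = 1.0, 0.6          # working-pulse amplitude and recognition threshold (assumed)
mu, sigma = 0.0, 0.3      # noise impulse amplitudes ~ N(mu, sigma^2) (assumed)
lam = 2.0                 # Poisson mean i_pn * Top / tau of the impulse count (assumed)

def tail(x, m, s):        # P(X >= x) for X ~ N(m, s^2)
    return 0.5 * (1.0 - erf((x - m) / (s * sqrt(2.0))))

def poisson(k):           # r(k): probability of exactly k noise impulses
    return lam**k / factorial(k) * exp(-lam)

def r11(h, kmax=80):      # (11) applied to (7); k = 0: the pulse arrives undisturbed
    total = poisson(0) * (1.0 if As >= h else 0.0)
    for k in range(1, kmax):
        total += poisson(k) * tail(h - As, k * mu, sqrt(k) * sigma)
    return total

def r01(h, kmax=80):      # (11) applied to (9); k = 0: no signal, amplitude 0 < h
    return sum(poisson(k) * tail(h, k * mu, sqrt(k) * sigma) for k in range(1, kmax))

# r10 = 1 - r11 and r00 = 1 - r01 complete the full sets of events
print(r11(h), r01(h))
```

Raising the threshold h lowers both the probability of keeping a working pulse and the probability of a noise-induced false pulse, which is the trade-off the two-threshold DD structure exploits.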

5 Mathematical Model of the IC

Definition 4. The PC symbol i is recognized at the time moment tr on some DC if a unary signal is received at time tr at all outputs of the DC MDL which


correspond to the unary bits of the SC code word Ai. Having A(t) as the signal amplitude at the DC input at some arbitrary time moment t, the precise definition of the recognition of the PC symbol i at the DC can be written as

∀ u: 1 ≤ u ≤ m1, A(tr − Tiu) ≥ h2 for DC-I;
∀ u: 1 ≤ u ≤ m1, A(tr − Tiu) ≥ h1 for DC-II.  (12)

If the recognized PC symbol i is the only recognized symbol on a DC at the time moment tr, then the PC symbol i is decoded on this DC. If multiple PC symbols are recognized at the time tr on a DC, then decoding is not possible due to the noise influence and the DC majority element generates the protective failure signal.

The PC codec usually operates in the cyclic synchronization mode, which means that the PC DD uses time strobing, so the PC symbol decoding time moments are fixed. Hence, any PC symbols recognized at the SC DD outside of these moments do not influence the reception result and can be ignored.

Let's define several supplementary events, each of which depends on a boundary value h:

– S(h): the SC boundary working pulses are received correctly;
– A(h): all internal unary bits of the correct SC code word are received correctly;
– Bk(h), 0 ≤ k ≤ K − 1: all internal unary bits are received for exactly k out of the K − 1 possible incorrect SC code words.

It is possible to calculate the probabilities of the mentioned events:

s(h) = p(S(h)) = (1 − r10(h))²,  (13)

a(h) = p(A(h)) = (1 − r10(h))^(m1−2),  (14)

bk(h) = p(Bk(h)) = C_{K−1}^{k} (r01(h)^(m1−2))^k (1 − r01(h)^(m1−2))^(K−1−k).  (15)

For the convenience of the further analysis, four SC decoding outcomes are considered: correct reception (CR), false reception (FR), and two protective failure (PF) outcomes: protective failure due to the inability to decode any PC symbol (PF0) and protective failure when two or more PC symbols are decoded simultaneously (PF2+).

Let's calculate the outcome probabilities for a single DC. Correct reception happens when both the boundary and the internal unary symbols of the correct SC code word are received, and at the same time the internal unary bits of all incorrect SC code words are not received:

CR(h) = S(h) ∩ A(h) ∩ B0(h).  (16)

The probability of the correct reception is

p(CR(h)) = p(S(h)) p(A(h)) p(B0(h)) = s(h)a(h)b0(h).  (17)


False reception happens when the boundary unary bits are received, not all unary bits of the correct code word are received, and the internal unary bits of exactly one incorrect code word are received (here ¬X denotes the complement of the event X):

FR(h) = S(h) ∩ ¬A(h) ∩ B1(h),  (18)

p(FR(h)) = p(S(h)) (1 − p(A(h))) p(B1(h)) = s(h)(1 − a(h))b1(h).  (19)

The protective failure PF0 happens either when the boundary unary bits are not received, or when the boundary unary bits are received but the internal unary bits are received neither for the correct SC code word nor for any of the incorrect code words:

PF0(h) = ¬S(h) ∪ (S(h) ∩ ¬A(h) ∩ B0(h)),  (20)

p(PF0(h)) = (1 − s(h)) + s(h)(1 − a(h))b0(h).  (21)

Finally, the protective failure PF2+ happens when either the correct code word is recognized together with at least one incorrect code word, or the correct code word is not received but at least two incorrect code words are recognized. In the formula below, the statements "at least Y" are represented by the complementary events, which are better described as "any number, but not zero, not one, ..., not Y − 1". They have the same meaning, but the mathematical representation of the latter approach is more straightforward:

PF2+(h) = S(h) ∩ [(A(h) ∩ ¬B0(h)) ∪ (¬A(h) ∩ ¬(B0(h) ∪ B1(h)))],  (22)

p(PF2+(h)) = s(h)[a(h)(1 − b0(h)) + (1 − a(h))(1 − b0(h) − b1(h))].  (23)
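The single-DC outcome probabilities (17), (19), (21) and (23) can be checked numerically: with any valid channel statistics they must sum to 1. The rij values, the alphabet size K and the weight m1 below are assumed illustrative numbers:

```python
from math import comb

r10, r01 = 0.08, 0.03   # single-pulse reception statistics (assumed values)
K, m1 = 8, 5            # PC alphabet size and SC code word weight (assumed)

s = (1 - r10) ** 2                        # (13): both boundary pulses survive
a = (1 - r10) ** (m1 - 2)                 # (14): internal pulses of the correct word
p_int = r01 ** (m1 - 2)                   # an incorrect word fully appears from noise
b = [comb(K - 1, k) * p_int**k * (1 - p_int) ** (K - 1 - k) for k in range(K)]  # (15)

p_cr  = s * a * b[0]                                         # (17)
p_fr  = s * (1 - a) * b[1]                                   # (19)
p_pf0 = (1 - s) + s * (1 - a) * b[0]                         # (21)
p_pf2 = s * (a * (1 - b[0]) + (1 - a) * (1 - b[0] - b[1]))   # (23)
print(p_cr + p_fr + p_pf0 + p_pf2)  # the four outcomes form a full event set
```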

The resulting protective failure probability is the sum of (21) and (23):

p(PF(h)) = p(PF0(h)) + p(PF2+(h)) = (1 − s(h)) + s(h)[a(h)(1 − b0(h)) + (1 − a(h))(1 − b1(h))].  (24)

It can be shown [11] that p(CR(h)) + p(FR(h)) + p(PF(h)) = 1.

As the IC with transformations has only one DC in the DD structure (Fig. 4), it is possible to use formulas (17), (19) and (24) with the boundary value h1 to calculate the outcome probabilities:

pcr = s(h1)a(h1)b0(h1),  (25)

pfr = s(h1)(1 − a(h1))b1(h1),  (26)

ppf = (1 − s(h1)) + s(h1)[a(h1)(1 − b0(h1)) + (1 − a(h1))(1 − b1(h1))].  (27)

Before the outcome analysis of the generic IC, let's prove several lemmas.

Lemma 1 (About superiority). It follows from the symbol recognition definition and the inequality h1 < h2 that the recognition of a PC symbol i on DC-I at a time moment tr is superior to the recognition of the same symbol at the time moment tr on DC-II. In other words, if a PC symbol i is recognized on DC-I


at the time moment tr, then the PC symbol i is also recognized on DC-II at the time moment tr (possibly together with other PC symbols). Vice versa, if the PC symbol i is not recognized on DC-II at the time moment tr, it is not recognized on DC-I at the time moment tr.

Proof. Let's prove the first statement. If a PC symbol i is recognized on DC-I at the time moment tr, then by the recognition definition

∀ u: 1 ≤ u ≤ m1, A(tr − Tiu) ≥ h2,

hence, the following is also true:

∀ u: 1 ≤ u ≤ m1, A(tr − Tiu) ≥ h1,

which means the recognition of the PC symbol i on DC-II at the time moment tr. The proof of the second statement is similar. If a PC symbol i was not recognized on DC-II at the time moment tr, then

∃ u: 1 ≤ u ≤ m1, A(tr − Tiu) < h1 < h2,

which means the absence of the recognition of the PC symbol i on DC-I at the time moment tr. ∎

Lemma 2 (About the impossibility of different PC symbol decoding). The simultaneous decoding of a PC symbol i2 on DC-II and a PC symbol i1 ≠ i2 on DC-I is not possible.

Proof. If a PC symbol i1 was decoded on DC-I at a time moment tr, it means that i1 is the only recognized symbol on DC-I at this time moment. Using the lemma about superiority, it can be stated that the PC symbol i1 is also recognized on DC-II at the time moment tr (possibly with some other PC symbol i2). If two different symbols are recognized at the same time on a DC, then the majority element in the DC structure (Fig. 3) generates the protective failure signal. ∎

Corollary 1. The correct PC symbol decoding on one DC and an incorrect PC symbol decoding on the other DC are not possible at the same time. Similarly, the simultaneous decoding of different incorrect PC symbols on different DCs is also not possible.

The analysis of the generic IC SC DD structure and the proved lemmas allow creating the table of possible outcomes (Table 1). It can be noted that in the first three lines of Table 1 the final SC DD outcome coincides with the DC-II outcome and does not depend on the DC-I result. Hence, it is possible to use formulas (17), (19) and (21) with the boundary value h1 for the probability calculation.

Table 1. Possible reception outcome combinations

DC-II reception outcome   DC-I reception outcome   SC DD general outcome
CR                        CR, PF0                  CR
FR                        FR, PF0                  FR
PF0                       PF0                      PF
PF2+                      CR                       CR
PF2+                      FR                       FR
PF2+                      PF0, PF2+                PF
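Table 1 can be read as a small decision function. The sketch below is an illustration, not the authors' implementation; it assumes the inputs are restricted to the outcome combinations the lemmas permit:

```python
# Sketch of Table 1: combining the DC-II and DC-I reception outcomes into the
# final SC DD result. Inputs are assumed to be combinations the lemmas allow.
def sc_dd_outcome(dc2, dc1):
    if dc2 in ("CR", "FR"):
        return dc2                       # DC-I can only confirm it or give PF0
    if dc2 == "PF0":
        return "PF"                      # nothing was decoded at the lower threshold
    # dc2 == "PF2+": DC-I resolves the ambiguity if it decodes a single symbol
    return {"CR": "CR", "FR": "FR"}.get(dc1, "PF")

print(sc_dd_outcome("PF2+", "CR"), sc_dd_outcome("PF2+", "PF0"))
```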

For the cases from the last three lines of Table 1, a more profound analysis is required. First of all, let's modify (22), explicitly representing the complementary events via the union of their subevents:

PF2+(h) = S(h) ∩ [(A(h) ∩ ⋃_{k=1}^{K−1} Bk(h)) ∪ (¬A(h) ∩ ⋃_{k=2}^{K−1} Bk(h))].  (28)

As the SC code word set A fulfils the interval precondition, the positions of the internal unary symbols in all code words are unique. Hence, the noise impulses influence them independently. This allows rewriting (28) as

PF2+(h) = S(h) ∩ [⋃_{k=1}^{K−1} (A(h) ∩ Bk(h)) ∪ ⋃_{k=2}^{K−1} (¬A(h) ∩ Bk(h))].  (29)

Formulas (13), (14), (15) and (29) allow calculating the probability of PF2+ with the explicit subevent representation:

p(PF2+(h)) = s(h)[a(h) Σ_{k=1}^{K−1} bk(h) + (1 − a(h)) Σ_{k=2}^{K−1} bk(h)].  (30)

Probability calculation of the remaining outcomes requires probability formulas for the SC working impulse conditional appearance. Lemma 3 (About the probability of the SC working impulse conditional appearance). The probability g1 of the SC unary working impulse preservation, having at least this impulse erasure, can be counted by the formula: g1 = (1 − r10 (h2 ))/(1 − r10 (h1 )).

(31)

The probability g0 of the SC zero to unary working impulse transformation, having at least this symbol erasure, can be calculated as: g0 = r01 (h2 )/r01 (h1 ).

(32)


Proof. Let’s take the conditional probability formula: p(A|B) =

p(A ∩ B) . p(B)

(33)

In our case the event A is the unary SC symbol reception: preservation of the unary impulse in the first case and zero-to-unary impulse transformation in the second case. By other words, event A is the event of the reception of the SC impulse with the amplitude not less than h2 . Event B is the event of the reception of the SC impulse with the amplitude not less than h1 . Due to inequality h1 < h2 p(A ∩ B) = p(A) is true. Substituting this expression into (33), the statements of the lemma can be obtained.  Let’s finally calculate the probabilities of the events listed in the last three lines of the Table 1. As usual, they will be represented via supplementary events and then their probability will be obtained using conditional probability formula (33), lemma about SC impulse conditional appearance and the formula (29). K−1 

A(h1 ) ∩ Bk (h1 ) ∪ P F2+ (h1 ) ∩ CR(h2 ) = S(h1 ) ∩ ∪

K−1

k=1

 A(h1 ) ∩ Bk (h1 )

k=2

= (S(h1 ) ∩ S(h2 )) ∩

K−1

∩ S(h2 ) ∩ A(h2 ) ∩ B0 (h2 ) =  A(h1 ) ∩ Bk (h1 ) ∩ A(h2 ) ∩ B0 (h2 ) ∪

k=1



K−1

 A(h1 ) ∩ Bk (h1 ) ∩ A(h2 ) ∩ B0 (h2 )

.

k=2

The second group of events contains events intersection A(h1 )∩A(h2 ), which are incompatible by the lemma about superiority. Hence, this intersection is empty. Also it can be noted, that S(h1 ) ∩ S(h2 ) = S(h2 ) and A(h1 ) ∩ A(h2 ) = A(h2 ). Consequently, the final formula for P F2+ (h1 ) ∩ CR(h2 ) looks like: K−1 

P F2+ (h1 ) ∩ CR(h2 ) = S(h2 ) ∩ A(h2 ) ∩ Bk (h1 ) ∩ B0 (h2 ) . (34) k=1

To find the probability of this event, the formula (33) should be used, as events Bk (h1 ) and B0 (h2 ) depend on each other. p (P F2+ (h1 ) ∩ CR(h2 )) = = p (S(h2 )) p (A(h2 ))

K−1 

p (Bk (h1 )) p (B0 (h2 )|Bk (h1 )) =

k=1

= s(h2 )a(h2 )

K−1  k=1

k  bk (h1 ) 1 − g0m1 −2 .

(35)


The same technique is used to find the probabilities of the remaining events:

p(PF2+(h1) ∩ FR(h2)) = s(h2)a(h1)(1 − g1^(m1−2)) Σ_{k=1}^{K−1} bk(h1) k g0^(m1−2) (1 − g0^(m1−2))^(k−1) + s(h2)(1 − a(h1)) Σ_{k=2}^{K−1} bk(h1) k g0^(m1−2) (1 − g0^(m1−2))^(k−1).  (36)

p(PF2+(h1) ∩ PF0(h2)) = s(h1)(1 − g1²)[a(h1) Σ_{k=1}^{K−1} bk(h1) + (1 − a(h1)) Σ_{k=2}^{K−1} bk(h1)] + s(h2)[a(h1)(1 − g1^(m1−2)) Σ_{k=1}^{K−1} bk(h1)(1 − g0^(m1−2))^k + (1 − a(h1)) Σ_{k=2}^{K−1} bk(h1)(1 − g0^(m1−2))^k].  (37)

The probability calculation for the event PF2+(h1) ∩ PF2+(h2) is the most complicated, as different numbers of recognized SC code words both on DC-I and DC-II have to be taken into account. The recognized SC code words can be both correct and incorrect:

p(PF2+(h1) ∩ PF2+(h2)) = s(h2)[a(h1) Σ_{k=1}^{K−1} bk(h1)(1 − (1 − g1^(m1−2))(1 − g0^(m1−2))^k − g1^(m1−2)(1 − g0^(m1−2))^k − k g0^(m1−2)(1 − g1^(m1−2))(1 − g0^(m1−2))^(k−1)) + (1 − a(h1)) Σ_{k=2}^{K−1} bk(h1)(1 − (1 − g0^(m1−2))^k − k g0^(m1−2)(1 − g0^(m1−2))^(k−1))].  (38)

Using formulas (17), (19), (21) and (35)–(38), it is possible to obtain the resulting outcome probability formulas for the generic IC with the CSF-based code:

pcr = s(h1)a(h1)b0(h1) + s(h2)a(h2) Σ_{k=1}^{K−1} bk(h1)(1 − g0^(m1−2))^k,  (39)

pfr = s(h1)(1 − a(h1))b1(h1) + s(h2)a(h1)(1 − g1^(m1−2)) Σ_{k=1}^{K−1} bk(h1) k g0^(m1−2)(1 − g0^(m1−2))^(k−1) + s(h2)(1 − a(h1)) Σ_{k=2}^{K−1} bk(h1) k g0^(m1−2)(1 − g0^(m1−2))^(k−1),  (40)


ppf = (1 − s(h1)) + s(h1)(1 − a(h1))b0(h1) +
+ s(h1)(1 − g1²)[a(h1) Σ_{k=1}^{K−1} bk(h1) + (1 − a(h1)) Σ_{k=2}^{K−1} bk(h1)] +
+ s(h2)[a(h1)(1 − g1^(m1−2)) Σ_{k=1}^{K−1} bk(h1)(1 − g0^(m1−2))^k + (1 − a(h1)) Σ_{k=2}^{K−1} bk(h1)(1 − g0^(m1−2))^k] +
+ s(h2)[a(h1) Σ_{k=1}^{K−1} bk(h1)(1 − (1 − g1^(m1−2))(1 − g0^(m1−2))^k − g1^(m1−2)(1 − g0^(m1−2))^k − k g0^(m1−2)(1 − g1^(m1−2))(1 − g0^(m1−2))^(k−1)) +
+ (1 − a(h1)) Σ_{k=2}^{K−1} bk(h1)(1 − (1 − g0^(m1−2))^k − k g0^(m1−2)(1 − g0^(m1−2))^(k−1))].  (41)
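Formulas (39)–(41) can be checked numerically: for any consistent channel statistics the three outcome probabilities must form a full set of events and sum to 1. All rij values, K and m1 below are assumed illustrative numbers, not taken from the paper:

```python
from math import comb

r10_h1, r10_h2 = 0.05, 0.12   # working-pulse erasure probabilities, r10(h1) <= r10(h2)
r01_h1, r01_h2 = 0.04, 0.01   # false-pulse probabilities, r01(h1) >= r01(h2)
K, m1 = 8, 5                  # PC alphabet size and SC code word weight (assumed)

s1, s2 = (1 - r10_h1) ** 2, (1 - r10_h2) ** 2                  # (13) at h1 and h2
a1, a2 = (1 - r10_h1) ** (m1 - 2), (1 - r10_h2) ** (m1 - 2)    # (14) at h1 and h2
p1 = r01_h1 ** (m1 - 2)
bk = [comb(K - 1, k) * p1**k * (1 - p1) ** (K - 1 - k) for k in range(K)]  # (15)

g1 = (1 - r10_h2) / (1 - r10_h1)   # (31)
g0 = r01_h2 / r01_h1               # (32)
G0, G1 = g0 ** (m1 - 2), g1 ** (m1 - 2)

pcr = s1 * a1 * bk[0] + s2 * a2 * sum(bk[k] * (1 - G0) ** k for k in range(1, K))  # (39)
pfr = (s1 * (1 - a1) * bk[1]                                                       # (40)
       + s2 * a1 * (1 - G1) * sum(bk[k] * k * G0 * (1 - G0) ** (k - 1) for k in range(1, K))
       + s2 * (1 - a1) * sum(bk[k] * k * G0 * (1 - G0) ** (k - 1) for k in range(2, K)))
ppf = ((1 - s1) + s1 * (1 - a1) * bk[0]                                            # (41)
       + s1 * (1 - g1 ** 2) * (a1 * sum(bk[1:]) + (1 - a1) * sum(bk[2:]))
       + s2 * (a1 * (1 - G1) * sum(bk[k] * (1 - G0) ** k for k in range(1, K))
               + (1 - a1) * sum(bk[k] * (1 - G0) ** k for k in range(2, K)))
       + s2 * (a1 * sum(bk[k] * (1 - (1 - G1) * (1 - G0) ** k - G1 * (1 - G0) ** k
                                 - k * G0 * (1 - G1) * (1 - G0) ** (k - 1))
                        for k in range(1, K))
               + (1 - a1) * sum(bk[k] * (1 - (1 - G0) ** k - k * G0 * (1 - G0) ** (k - 1))
                                for k in range(2, K))))

print(pcr, pfr, ppf)  # the three outcomes form a full set of events
```

The check relies on the consistency relations s(h2) = s(h1)g1² and a(h2) = a(h1)g1^(m1−2), which follow directly from (13), (14) and (31).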

6 Conclusion

The work introduces and investigates the mathematical model of the IC with transformations and the mathematical model of the generic IC, where both transformations and erasures of the SC working impulses are possible. The information reliability of the introduced mathematical models is assessed. The formulas to calculate the outcome probabilities are obtained for each IC type. It is shown that the usage of the code based on CSF on the last stage of the encoding significantly improves the IC noise immunity. The results of the modelling show that the false reception probability can be decreased by 2–3 orders of magnitude compared to the IC without the CSF-based code.

References

1. Peterson, W.W., Weldon, E.J.: Error-Correcting Codes, 593 p. The MIT Press, Cambridge (1972)
2. Golomb, S.W.: Digital Communications with Space Applications, 272 p. Prentice-Hall, Englewood Cliffs (1964)
3. Fink, L.M.: The Theory of Discrete Messages Transmission, 2nd edn., reworked and improved, 728 p. Soviet Radio, Moscow (1970). (in Russian)
4. Gladkikh, A.A.: The Theory of the Redundant Codes Soft Decoding in the Erasure Communication Channel, 379 p. UlSTU, Ulyanovsk (2010). (in Russian)
5. Berlekamp, E.R.: Algebraic Coding Theory, 478 p. McGraw-Hill Book Company, New York (1968)
6. Viterbi, A.J., Omura, J.K.: Principles of Digital Communication and Coding, 584 p. McGraw-Hill, New York (1979)
7. Jurgenson, R.I.: Noise Immunity of Digital Systems of Transmission of Telemechanical Information, 250 p. Energiya, Leningrad (1971). (in Russian)


8. Forney, D.: Concatenated Codes, 104 p. Massachusetts Institute of Technology, Cambridge, Massachusetts, USA (1965)
9. Zolotarev, V.V., Ovechkin, G.V.: Noise Immune Encoding. In: Zubarev, Y.B. (ed.) Methods and Algorithms: Reference Book, 126 p. Hotline-Telecom, Moscow (2004). (in Russian)
10. Sklar, B.: Digital Communications: Fundamentals and Applications, 2nd edn., 1104 p. Prentice Hall, New Jersey (2001)
11. Klenov, D.V., L'vov, A.A., L'vov, P.A., Svetlov, M.S., Svetlova, M.K.: Mathematical model of the information channel with code signal feature-based codec. In: Proceedings of the 8th All-Russian Conference on System Synthesis and Applied Synergy, SSPS-2017, Nizhniy Arkhyz, pp. 322–330 (2017). (in Russian)
12. Svetlov, M.S., L'vov, A.A., Klenov, D.V., Dolinina, O.N.: Self-synchronized encoding and decoding algorithms based on code signal feature. In: Proceedings of the 27th International Conference Radioelektronika (RADIOELEKTRONIKA), Brno, Czech Republic, pp. 1–5 (2017). https://doi.org/10.1109/RADIOELEK.2017.7936641
13. L'vov, A.A., Svetlov, M.S., Martynov, P.V.: Improvement of information reliability of digital systems with QAM/COFDM modulation. In: Proceedings of the 20th IMEKO TC4 Symposium, Benevento, Italy, pp. 479–484 (2014)
14. Elias, P.: Error-free coding. IEEE Trans. Inform. Theor. 4, 29–37 (1954)

Using of Linguistic Analysis of Search Query for Improving the Quality of Information Retrieval

Nadezhda Yarushkina, Aleksey Filippov, and Maria Grigoricheva

Ulyanovsk State Technical University, Severny Venets str., 32, Ulyanovsk 432027, Russian Federation
{jng,al.filippov}@ulstu.ru, [email protected]

Abstract. The paper describes the process of research and development of methods for the linguistic analysis of search queries. The linguistic analysis of a search query is used to improve the quality of information retrieval. After the syntactic analysis of the original search query, it is translated into a search query in a new format. Taking into account the features of the information retrieval query language allows improving the quality of information retrieval. Also, the paper describes the results of experiments that confirm the correctness of the method.

Keywords: Information retrieval · Syntactic analysis · Search queries

1 Introduction

The growth of the data volume and the need to reduce the search time have led to the need to improve information retrieval methods. The first information retrieval systems worked primarily with factual information. Factual information can include, for example, the characteristics of objects and their relationships. Information retrieval systems can process text documents in natural language and other data presentation formats [1, 2].

Currently, a large amount of data is presented in an unstructured form. This fact proves the relevance of research in the field of information retrieval and text mining. The number of full-text databases still increases. Full-text databases are electronic analogs of printed publications and documents. The unstructured presentation of information is one of the factors strongly affecting information retrieval systems [1, 2].

The quality of search in information retrieval systems is usually characterized by two criteria: recall and precision. The recall is determined by the ratio between the total count of found relevant documents and the total count of all relevant documents. The precision is determined by the ratio between the count of found relevant documents and the total count of found documents [2].

The characteristics of the information retrieval system itself and the quality of the search query affect the quality of the search. An ideal search query can be formed by a user who knows the domain area well. Also, to form an ideal query, a user needs to

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 215–226, 2019. https://doi.org/10.1007/978-3-030-12072-6_19


know the features of the current information retrieval system and its information retrieval query language. Otherwise, the search result will have low precision or low recall values [2].
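The recall and precision definitions above can be sketched on a toy document collection (the document identifiers are invented for illustration):

```python
# Recall and precision as defined above, on an invented toy collection.
retrieved = {"d1", "d2", "d3", "d4"}     # documents returned by the system
relevant  = {"d2", "d4", "d5"}           # all relevant documents in the collection

found = retrieved & relevant
recall    = len(found) / len(relevant)   # found relevant / all relevant
precision = len(found) / len(retrieved)  # found relevant / all found
print(recall, precision)
```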

2 Main Problem

Information retrieval is the process of searching in an extensive collection of data for some semi-structured (unstructured) material (document) that satisfies the information needs of a user. Semi-structured data is data that does not have a clear, semantically visible and easily distinguishable structure. Semi-structured data is the opposite of structured data. The canonical example of structured data is relational databases. Relational databases are typically used by enterprises to store product registers, employee personal data, etc. [2].

For information retrieval, a user formulates a search query. A search query is a formalized way of expressing the information needs of information retrieval system users. An information retrieval query language is used for the expression of the information needs. The syntax of the information retrieval query language varies from system to system. Modern information retrieval systems allow entering a query in natural language in addition to the information retrieval query language [1].

An information retrieval system finds documents containing the specified keywords, or words that are in any way related to the keywords, based on the user search query. The result of an information retrieval system is a list of documents sorted by relevance [2].

In this paper, we consider the work of the proposed method on the example of the existing information retrieval subsystem of the system for opinion mining in social media (SOM). Understanding the meaning of publications in social media is the most critical and complex element of automated text processing [3, 4].

The SOM consists of the following subsystems (Fig. 1):
1. Subsystem for importing data from external sources. This subsystem works with the social network VKontakte [5], popular in Russia, through the public application programming interface (VK API). The mass media loader retrieves data from the HTML pages of mass media sites based on rules. Own rules must be created for each mass media site; a rule should contain a set of CSS selectors. The ontology loader allows loading ontologies in OWL or RDF format into the data storage subsystem. The ontology is used for the description of the features of the problem area [6].
2. Data storage subsystem provides the representation of the information extracted from social and mass media in a unified structure that is convenient for further processing. The data is stored in the context of users, collections, data sources, versions, etc. The following database management systems are used:
   • Elasticsearch for indexing and retrieving data [7];
   • MongoDB for storing data in JSON format [8];
   • Neo4j for storing graphs of social interaction (social graph) and the ontology [9].
3. Data converter converts the data imported from social and mass media into internal SOM unified structures.


Fig. 1. The architecture of the system for opinion mining in social media

4. Social graph builder constructs a social graph. The social graph is based on the relationships of users and communities of the social network. The OWL/RDF-ontology translator translates the ontology into the graph representation [10].
5. Semantic analysis subsystem performs the preprocessing of text resources. Also, this subsystem performs the statistical and linguistic analysis of text resources.
6. Information retrieval subsystem finds objects related to a specific search query. In this case, the search query can be semantically extended using the ontology.

Elasticsearch provides a full Query DSL [11]. Think of the Query DSL as an AST of queries consisting of two types of clauses:
1. Leaf query clauses look for a particular value in a specific field, such as the match, term or range queries. These queries can be used by themselves.
2. Compound query clauses wrap other leaf or compound queries and are used to combine multiple queries logically, or to alter their behavior.

The query string is parsed into a series of terms and operators. A term can be a single word (quick or brown) or a phrase surrounded by double quotes ("quick brown"), which searches for all the words in the phrase in the same order.

218

N. Yarushkina et al.

Users can customize the search as follows:
1. By default, all terms are optional, as long as at least one term matches. A search for "foo bar baz" will find any document that contains one or more of "foo", "bar" or "baz". The preferred operators are + (this term must be present) and - (this term must not be present). All other terms are optional. For example, the query quick brown +fox -news states that:
• fox must be present;
• news must not be present;
• quick and brown are optional—their presence increases the relevance.
2. Multiple terms or clauses can be grouped with parentheses to form sub-queries: (quick OR brown) AND fox.
Therefore, the SOM search algorithm, based on the Elasticsearch Query DSL, has several disadvantages:
1. A user may not know the features of the Elasticsearch Query DSL.
2. By default, query terms are joined with the OR operator, which unnecessarily increases the recall of information retrieval and reduces its precision.
It is therefore necessary to develop a method of linguistic analysis and translation of a search query into a search query in the format of the Elasticsearch Query DSL. The new form of the search query takes into account the features of the Elasticsearch Query DSL and improves the quality indicators (precision and recall) of information retrieval.
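The leaf/compound distinction and the +/- operators above can be illustrated with two request bodies; the JSON structure follows the Elasticsearch Query DSL, while the field and term values are illustrative.

```python
# A query_string query (leaf clause) using the +/- operators described above,
# and a bool query (compound clause) combining leaf clauses.
leaf_query = {
    "query_string": {
        "query": "quick brown +fox -news",  # fox required, news forbidden
        "default_field": "text",
    }
}

compound_query = {
    "bool": {
        "must": [{"match": {"text": "fox"}}],       # must be present
        "must_not": [{"match": {"text": "news"}}],  # must not be present
        "should": [{"match": {"text": "quick"}},    # optional; presence
                   {"match": {"text": "brown"}}],   # increases relevance
    }
}
print(sorted(leaf_query), sorted(compound_query))
```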

3 The Method of Linguistic Analysis of Search Query for Improving Quality of Information Retrieval

The primary goal of the developed method of linguistic analysis and translation of a search query into a search query in the format of the Elasticsearch Query DSL is the improvement of information retrieval quality. The main task is to select in a search query the groups of terms united by some semantics.

3.1 The Method of Linguistic Analysis and Translation of a Search Query

The scheme of linguistic analysis of texts does not depend on the natural language itself. Regardless of the language of a source text, its analysis goes through the same stages [12, 13]: 1. Splitting the text into separate sentences. 2. Splitting the text into separate words.


3. Morphological analysis.
4. Syntactic analysis.
5. Semantic analysis.
The first two stages are the same for most natural languages. Language-specific differences usually appear in the processing of word abbreviations and in the processing of punctuation marks to determine the end of a sentence. The results of the syntactic analysis are used to select in a search query the groups of terms united by some semantics. To identify meaningful query terms, it is necessary to extract noun phrases from the query. It is also necessary to define the relationships between the terms of a query. A noun phrase, or nominal phrase, is a phrase that has a noun (or indefinite pronoun) as its head or performs the same grammatical function as such a phrase. Noun phrases are ubiquitous cross-linguistically, and they may be the most frequently occurring phrase type.

SyntaxNet is used to implement the syntactic analysis. SyntaxNet is a TensorFlow-based syntactic parsing framework that uses a neural network. Currently, 40 languages, including Russian, are supported. The source code of the pre-trained Parsey McParseface neural network model, suitable for parsing text, is published for TensorFlow. The main task of SyntaxNet is to make computer systems able to read and understand human language. The precision of the model trained on the SynTagRus corpus is estimated at 87.44% for the LAS metric (Labeled Attachment Score) and 91.68% for the UAS metric (Unlabeled Attachment Score); the model determines the part of speech and the grammatical characteristics of words with an accuracy of 98.27% [14].

On the first step of the algorithm, it is necessary to parse a search query to obtain a parse tree. The resulting parse tree is used to collect data about the search query structure, the dependencies between words, and the types of these dependencies. The parse tree can be represented as the following set:

T = {t_1, t_2, …, t_k},  (1)

where k is the count of nodes in the parse tree, and t_i is a node of the parse tree, which can be described as

t_i = <i, w_i, m_j, c>, i = 1, …, k,

where i is the index of the word in the search query; w_i ∈ W, W = {w_1, w_2, …, w_k} is the set of words of the search query; m_j ∈ M, M = {Noun, Pronoun, Verb, Adverb, Adjective, Conjunction, Preposition, Interjection} is the set of parts of speech of the natural language; c is the index of the word in the search query that depends on the i-th word.

Thus, a search query is converted into a parse tree on the first step of the algorithm. For each word in a search query, its part of speech, its index in the search query, and its relations with other words of the search query are set.
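The node structure t_i = <i, w_i, m_j, c> can be sketched as a small data type; this is a hypothetical illustration of the representation, not the authors' implementation.

```python
# A minimal sketch of the parse-tree representation T = {t1, ..., tk}:
# each node ti = <i, wi, mj, c> stores the word index, the word itself,
# its part of speech, and the index of the related word.
from dataclasses import dataclass

PARTS_OF_SPEECH = {"Noun", "Pronoun", "Verb", "Adverb", "Adjective",
                   "Conjunction", "Preposition", "Interjection"}

@dataclass(frozen=True)
class Node:
    i: int      # index of the word in the search query
    w: str      # the word wi
    m: str      # part of speech mj
    c: int      # index of the related word (0 if none)

# Parse tree for a toy query "public transport" (adjective -> noun):
T = [Node(1, "public", "Adjective", 2), Node(2, "transport", "Noun", 0)]
print(all(n.m in PARTS_OF_SPEECH for n in T))
```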


On the second step of the algorithm, the parse tree is used in the form of a structure containing information about two or more related words, with an indication of their parts of speech and their location in the original query. In the process of analyzing an input parse tree, the nodes that reflect the semantics of the query are selected. The search for selected nodes and their translation into the Elasticsearch Query DSL are executed using a set of rules. The rules add special characters from the Elasticsearch Query DSL to the words of a search query; stop words are also deleted from the search query during the translation. The result of the algorithm is a new search query that takes into account the semantics of the information need and the features of the Elasticsearch Query DSL. The algorithm can be represented as the following mapping:

F_Query: (T, R) → Q*,

where the input parameters of the function F_Query are the parse tree of the search query T (Eq. 1) and the set of rules R, and the result is a translated query Q*. R = {R_1, R_2, …, R_n} is the set of rules for searching for elements in the parse tree and their translation into the Elasticsearch Query DSL. Each rule can be represented as the following expression:

R_i(p, t_1, t_2, …, t_m) = Q*_j,

where p is the rule priority; t_k is the k-th element of the rule, which selects the node (or nodes) of the parse tree to be processed; m is the count of elements in the rule; Q*_j ∈ Q*, j = 1, …, q, is an element of the translated query. Each element of a translated query contains the word or words of the original search query, escaped by a symbol from the set of Elasticsearch Query DSL operators.

3.2 Examples of Rules for Linguistic Analysis and Translation of a Search Query

The formal description of the rule for finding a noun phrase in a search query can be represented as follows:

where d is the count of adjectives in the noun phrase. Extraction of a noun phrase from the parse tree of a search query finds one or more adjectives subordinate to the current noun node. The result of this rule is a noun phrase escaped with the " character at the beginning and at the end.


Figure 2 shows the flowchart of the algorithm for finding a noun phrase in a search query.

Fig. 2. The flowchart of the algorithm for finding a noun phrase in a search query

The formal description of the rule for finding related nouns in a search query can be represented as follows:


where d is the count of nouns related to the i-th noun. Extraction of related nouns from the parse tree of a search query finds one or more nouns subordinate to the current noun node. The result of this rule is a set of nouns escaped with the " character at the beginning and at the end. The formal description of the rule for finding a proper noun subordinate to a noun in a search query can be represented as follows:

where d is the count of proper nouns related to the i-th noun. Extraction of proper nouns subordinate to a noun from the parse tree of a search query finds one or more proper nouns subordinate to the current noun node. The result of this rule is a set of nouns escaped with the " character at the beginning and at the end. The formal description of the rule for finding a proper noun in a search query can be represented as follows:

where d is the count of proper nouns related to the i-th proper noun. Extraction of proper nouns from the parse tree of a search query finds one or more proper nouns subordinate to the current proper-noun node. The result of this rule is a set of nouns escaped with the " character at the beginning and at the end. The formal description of the rule for finding a single noun in a search query can be represented as follows:

R_single_noun = (1, <i, w_i, Noun, c>) = +w_i.

This rule has a lower priority and is executed only after the higher-priority rules have failed to match. The rule finds a node of the parse tree whose part of speech is a noun and that is not associated with other nodes whose part of speech is a noun, adjective, or proper noun. The formal description of the rule for finding a verb in a search query can be represented as follows:


R_verb = (1, <i, w_i, Verb, c>) = +w_i.

This rule has the highest priority and does not overlap with any other rule. It finds a node of the parse tree whose part of speech is a verb.
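The rules above can be sketched as a simplified translator; this is an illustration, not the authors' exact rule engine (no explicit priorities), and the dependency links in the toy tree below are assumed for illustration.

```python
# A simplified sketch of the rule-based translation: a noun with adjective
# or noun dependents becomes a required phrase '+"..."'; a lone noun, proper
# noun or verb becomes a required term '+word'; stop words (prepositions,
# conjunctions) are dropped.
STOP_POS = {"Preposition", "Conjunction"}

def translate(nodes):
    """nodes: list of (index, word, part_of_speech, head_index) tuples."""
    by_head = {}
    for i, w, pos, head in nodes:
        by_head.setdefault(head, []).append((i, w, pos))
    parts, used = [], set()
    for i, w, pos, head in nodes:
        if pos in STOP_POS or i in used:
            continue
        if pos == "Verb":
            parts.append("+" + w)
        elif pos in ("Noun", "ProperNoun"):
            deps = [(j, dw) for j, dw, dpos in by_head.get(i, [])
                    if dpos in ("Adjective", "Noun")]
            if deps:  # noun phrase / related nouns -> required phrase
                words = sorted(deps + [(i, w)])
                parts.append('+"' + " ".join(dw for _, dw in words) + '"')
                used.update(j for j, _ in deps)
            else:     # single noun / proper noun -> required term
                parts.append("+" + w)
    return " ".join(parts)

# Assumed parse tree for "Amount of fare in public transport in Ulyanovsk"
# (0 means the node has no head here):
tree = [(1, "Amount", "Noun", 0), (2, "of", "Preposition", 1),
        (3, "fare", "Noun", 1), (4, "in", "Preposition", 6),
        (5, "public", "Adjective", 6), (6, "transport", "Noun", 0),
        (7, "in", "Preposition", 8), (8, "Ulyanovsk", "ProperNoun", 6)]
print(translate(tree))  # → +"Amount fare" +"public transport" +Ulyanovsk
```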

4 Experiments

To test the method of linguistic analysis of a search query proposed in this study, several experiments were conducted. Figure 3 shows the parse tree for the search query "Amount of fare in public transport in Ulyanovsk". Ulyanovsk is a city in Russia [15]. The nodes of the parse tree are the words of the search query. Each node has a part of speech as a parameter.

Fig. 3. The parse tree for search query “Amount of fare in public transport in Ulyanovsk”

After the work of the algorithm, the significant elements were found in the parse tree (Fig. 3). In the resulting tree (Fig. 4), the nodes are labeled with the rules by which they were found.

Fig. 4. The result tree for search query “Amount of fare in public transport in Ulyanovsk”


Thus, after linguistic analysis and translation, the resulting search query for the search query "Amount of fare in public transport in Ulyanovsk" is '+"Amount fare" +"public transport" +Ulyanovsk'. To assess the quality of the proposed method, the precision indicator of information retrieval is used. The precision value is calculated using the following expression:

P = a / b,  (2)

where a is the count of relevant documents in the search result and b is the total count of documents in the search result. The recall value is not used because the data storage subsystem of SOM contains a very large count of documents. For the search query Q_O "Amount of fare in public transport in Ulyanovsk", the count of relevant documents in the search result is 8, while the total count of documents is 44857. Thus, the precision P(Q_O) of the information retrieval for this search query is (Eq. 2):

P(Q_O) = 8 / 44857 ≈ 0.00018.

For the search query Q* = '+"Amount fare" +"public transport" +Ulyanovsk', translated from the search query "Amount of fare in public transport in Ulyanovsk" using the proposed method, the count of relevant documents in the search result is 8, and the total count of documents is also 8. Thus, the precision P(Q*) of the information retrieval for this query is (Eq. 2):

P(Q*) = 8 / 8 = 1.

Thus, the proposed method improves the precision of information retrieval by reducing the count of documents in the search result.
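The two precision values above can be reproduced directly from Eq. (2):

```python
# Checking the precision figures from Eq. (2): P = a / b, where a is the
# count of relevant documents and b the total count of returned documents.
def precision(relevant, returned):
    return relevant / returned

p_original = precision(8, 44857)   # original query Q_O
p_translated = precision(8, 8)     # translated query Q*
print(round(p_original, 5), p_translated)  # → 0.00018 1.0
```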

5 Conclusion

The quality of information retrieval is affected both by the characteristics of the information retrieval system itself and by the quality of the search query. An ideal search query can be formed by a user who knows the domain area well. The user also needs to know the features of the information retrieval system at hand to form an ideal query. Otherwise, the information retrieval will have low precision or low recall.

In this paper, the work of the proposed method was considered on the example of the existing information retrieval subsystem of SOM, which is based on Elasticsearch. Elasticsearch provides a full Query DSL. The Elasticsearch Query DSL has several disadvantages:


1. The user may not know the features of the Elasticsearch Query DSL.
2. By default, query terms are joined with the OR operator, which unnecessarily increases the recall and reduces the precision of information retrieval.
The method of linguistic analysis and translation of a search query into the Elasticsearch Query DSL improves the precision of information retrieval. On the first step of the algorithm, a search query is converted into a parse tree. For each word in the search query, its part of speech, its index in the search query, and its relations with other words are set. On the second step of the algorithm, the parse tree is used in the form of a structure containing information about two or more related words, with an indication of their parts of speech and their location in the original query. In the process of analyzing an input parse tree, the nodes that reflect the semantics of the search query are selected. The search for selected nodes and their translation into the Elasticsearch Query DSL are executed using a set of rules. The rules add special characters from the Elasticsearch Query DSL to the words of the search query; stop words are also deleted from the search query during the translation. The result of the algorithm is a new search query that takes into account the semantics of the information need and the features of the Elasticsearch Query DSL. The results of 20 computational experiments show that the proposed method increases the precision of information retrieval by an average factor of 18.

Acknowledgments. The study was supported by: the Ministry of Education and Science of the Russian Federation in the framework of project No. 2.1182.2017/4.6 "Development of methods and means for automation of production and technological preparation of aggregate-assembly aircraft production in the conditions of a multiproduct production program"; the Russian Foundation for Basic Research (Grants No. 18-47-730035 and 16-47-732054).

References
1. Voorhees, E.M.: Natural language processing and information retrieval. In: Information Extraction, pp. 32–48. Springer, Heidelberg (1999)
2. Manning, C., Raghavan, P., Schütze, H.: Introduction to Information Retrieval. Cambridge University Press, Cambridge (2008)
3. Crammer, K., Dekel, O., Keshet, J., Shalev-Shwartz, S., Singer, Y.: Online passive-aggressive algorithms. JMLR 7, 551–585 (2006)
4. Turney, P.: Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In: Proceedings of the Association for Computational Linguistics, pp. 417–424 (2002)
5. VKontakte. https://vk.com/. Accessed 20 Oct 2018
6. Gruber, T.: Ontology. http://tomgruber.org/writing/ontology-in-encyclopedia-of-dbs.pdf. Accessed 20 Oct 2018
7. Elasticsearch. https://www.elastic.co/. Accessed 20 Oct 2018
8. MongoDB. https://www.mongodb.com/. Accessed 20 Oct 2018


9. Neo4j. https://neo4j.com/. Accessed 20 Oct 2018
10. Yarushkina, N., Filippov, A., Moshkin, V.: Development of the unified technological platform for constructing the domain knowledge base through the context analysis. In: Creativity in Intelligent Technologies and Data Science, pp. 62–72 (2017)
11. Elasticsearch Query DSL. https://www.elastic.co/guide/en/elasticsearch/reference/2.3/query-dsl-query-string-query.html. Accessed 20 Oct 2018
12. SRILM—The SRI Language Modeling Toolkit. http://www.speech.sri.com/projects/srilm. Accessed 20 Oct 2018
13. Manning, C., Schütze, H.: Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge (1999)
14. Sboev, A.G., Gudovskikh, D.V., Ivanov, I., Moloshnikov, I.A., Rybka, R.B., Voronina, I.: Research of a Deep Learning Neural Network Effectiveness for a Morphological Parser of Russian Language (2017). http://www.dialog-21.ru/media/3944/sboevagetal.pdf. Accessed 20 Oct 2018
15. Ulyanovsk. https://en.wikipedia.org/wiki/Ulyanovsk. Accessed 20 Oct 2018

Improved Quality Video Transmission by Optical Channel from Underwater Mobile Robots

Sergey Kirillov, Vladimir Dmitriev, Leonid Aronov, Petr Skonnikov, and Andrew Baukov

Ryazan State Radio Engineering University, Ryazan, Russia
[email protected], [email protected], [email protected]

Abstract. Searching for minerals on the continental shelf of Russia, monitoring gas and oil pipelines, inspecting the underwater parts of vessels, and resolving navigational uncertainty under water are impossible without underwater mobile robots that transmit control data, telemetric information and video images of improved quality in real time. To address these tasks, a prototype of an underwater optical channel for transmitting information and control data, as well as enhanced underwater images, at a speed of 10…100 Mbit/s has been developed, and the requirements for its technical parameters have been formulated. The limiting distances for transmitting information in different types of water have been determined for an underwater transmission system with a budget of 45 dB.

Keywords: Optical channel · Video quality improvement · Small size remote operated submersibles · Sea water · Absorption · Scattering

1 Introduction

Manned submersibles as well as unmanned small size remote operated submersibles (SROS) are used in various areas of activity [1]. In addition, intensive study of coastal shelves is under way, related to underwater monitoring of climatic, biological, chemical and ecological changes in oceans, seas, lakes and rivers, as well as to the development of underwater mineral deposits. These tasks are solved with the help of various SROS, which require a reliable and efficient system for transmitting information and control data in real time [2]. The transmitted information contains various characteristics of the underwater environment and improved-quality video data from underwater cameras, as well as commands for controlling underwater robots. This video information can be used to operate and manage SROS, which requires its transmission in real time [3]. Thus, the task arises of developing high-speed underwater systems for transmitting video and control commands, as well as of improving the quality of the transmitted video data.

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 227–239, 2019. https://doi.org/10.1007/978-3-030-12072-6_20

Currently, wired and wireless communication lines are used to transfer information in the aquatic environment [3]. Wired underwater communication lines, including fiber-optic ones, significantly limit the range of action and reduce the mobility of SROS. Practical use of a wired data transmission channel based on coaxial cable or optical fiber is problematic, since it imposes restrictions on the maneuverability of the underwater vehicle, limits the distance between the objects of the information network, etc. In addition, the use of such lines becomes impossible as the number of serviced underwater vehicles grows, and their range is significantly limited by their large mass and drag.

Wireless acoustic communication lines are based on the transmission of sound waves through the aquatic environment. However, these communication lines provide a low transmission rate and have low noise immunity due to the large amount of acoustic noise [4]. The acoustic channel is unsuitable for transmitting a video data stream, since its maximum bandwidth does not exceed 500 kbit/s.

The use of electromagnetic waves can significantly increase the speed of information transmission; moreover, their propagation does not depend on water temperature and pressure. However, radio waves are subject to strong attenuation in the aquatic environment due to absorption and scattering, which significantly limits the communication range; their disadvantages are a small radius of action and the large required power, which cannot be provided on a small underwater vehicle.

The analysis in [5] has shown that the smallest absorption in the aquatic environment is experienced by electromagnetic waves in the optical range with wavelengths within 400…600 nm. Wireless optical communication lines make it possible to obtain a high information transfer rate due to the use of a large frequency band of tens of GHz [6].
Taking the above into consideration, it is advisable to use a wireless optical system for transmitting information and control data for mobile SROS. However, due to the interaction of optical radiation with the water medium during propagation, significant scattering and absorption of the signal occur, which are rather difficult to predict. Another disadvantage of optical systems for transmitting information and control signals is the high requirements on the accuracy of laser beam pointing in underwater conditions.

2 Optical Signal Propagation Model in Sea Water

As noted in [7], sea water contains three main components: clean water, dissolved substances (inorganic and organic) and suspended matter (mineral and organic). However, the mechanisms by which these components influence the signal are different, so while substantiating the optical signal propagation model we suggest classifying the substances contained in water according to their influence on the signal. The main effects of the propagation medium on an optical signal are absorption and scattering.

The ability of water and the substances contained in it to absorb electromagnetic radiation over a wide range of intensities is described by the Bouguer law [4]. This law connects the transmitted and received power of the optical signal by means of the absorption coefficient a(λ), which depends on the wavelength λ of the propagating signal and characterizes


the losses of electromagnetic energy in the aqueous medium due to heating, changes of chemical composition, ionization, re-radiation at other wavelengths, etc. In the general case, ocean water contains water molecules, organic particles, gases, viruses, bacteria, phytoplankton and inorganic particles [7]. The structure of the influence of various factors on the attenuation of optical radiation in seawater is shown in Fig. 1.

Fig. 1. The structure of the influence of various factors on the attenuation of optical radiation in the ocean

The concentration of the mentioned elements varies within rather wide ranges depending on the type of aqueous medium, so in some cases it is impossible to take into account all components of the optical signal absorption index in the aqueous medium. Therefore, usually [1] only the main components with the highest concentrations are taken into account. On the basis of the above, the absorption coefficient has the following form:

a(λ) = a_w(λ) + a_c(λ) + a_d(λ) + a_h(λ),  (1)

where a_w(λ), a_c(λ), a_d(λ), a_h(λ) are the coefficients of absorption by clean water, chlorophyll, detritus and humus-like compounds, respectively. The absorption coefficient depends on the optical signal wavelength and on the composition of the aqueous medium. Paper [4] presents measured dependences of the absorption coefficient for various types of water, according to which the optical signal wavelength with the lowest absorption is chosen. For clean water, the absorption minimum is at about 450 nm. When impurities are added, the absorption minimum shifts slightly towards longer wavelengths, and its value depends on the impurity type and concentration.


The intensity and direction of scattering depend on the dimensions of the particles in the water. Accordingly, paper [3] distinguishes scattering by particles much smaller than the wavelength λ (Rayleigh molecular scattering) and scattering by particles with dimensions comparable to or greater than λ (Mie scattering). Scattering on atoms and molecules is described by the Rayleigh theory [3], which fully accounts for the scattering of the signal in clean water. Scattering on particles with dimensions greater than λ has a distinct direction determined by the scattering indicatrix [7, 8]. This type of scattering is calculated using the Mie theory, which takes into account the relative dimensions of the particles, their form, the distances between them, their mutual location and the refractive indices. Here we assume that the particles are spherical and of similar dimensions, so the full scattering is estimated as the sum of the scatterings produced by each particle. Most models of aqueous media are limited to scattering by clean water and scattering on particles, not taking into account other possible scattering components [8]. Thus, the general scattering coefficient can be represented as follows:

b(λ) = b_w(λ) + b_p(λ),  (2)

where b_w(λ) and b_p(λ) are the coefficients of scattering of the optical signal by clean water molecules and by particles, respectively. Taking the above into account, the transmission coefficient of the aqueous medium is determined by the expression [3]:

τ_aq.m(λ) = exp(−ε(λ)R),  (3)

where ε(λ) = a(λ) + b(λ) is the light attenuation coefficient in the aqueous medium and R is the distance between the optical signal source and the receiver. While developing the model, we assumed that a line-of-sight optical communication line is used. The signal power P_C on the photodetector surface is described by the range equation of the communication system [3]:

P_C = (π² · τ_Tx · τ_Rx · τ_aq.m · d_Tx² · d_Rx² · P_L) / (32 · R² · λ²),  (4)

where P_L is the laser power; τ_Tx, τ_aq.m, τ_Rx are the transmission coefficients of the optical transmitter, the medium and the optical receiver, which take into account the energy losses in the transmitter, medium and receiver, respectively; d_Tx and d_Rx are the aperture diameters of the transmitter and receiver, respectively. The block diagram of the developed model is shown in Fig. 2.


Fig. 2. Block diagram of algorithmic model of optical radiation propagation in aquatic environment

The developed model uses as its input data the optical source power P_L, the optical oscillation wavelength λ, the immersion depth h of the transmitter and receiver, the aperture dimensions d_Tx and d_Rx, the visibility angles α_Tx and α_Rx of the transmitter and receiver, and their transmittances τ_Tx and τ_Rx.
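The propagation model, Eqs. (3) and (4), can be sketched numerically. The transmittances, transmitter aperture and distance below are illustrative assumptions; the 80 mW laser power, 8 mm receiver aperture and the attenuation coefficient 0.12 1/m for pure ocean water are values quoted in this paper.

```python
# A sketch of the propagation model: Eq. (3) for the transmission of the
# aqueous medium and Eq. (4) for the power on the photodetector surface.
from math import exp, pi

def tau_aq(eps, R):
    """Eq. (3): transmission coefficient of the aqueous medium."""
    return exp(-eps * R)

def received_power(P_L, tau_tx, tau_rx, eps, d_tx, d_rx, R, lam):
    """Eq. (4): signal power on the photodetector surface."""
    return (pi ** 2 * tau_tx * tau_rx * tau_aq(eps, R)
            * d_tx ** 2 * d_rx ** 2 * P_L) / (32 * R ** 2 * lam ** 2)

# 80 mW laser, 450 nm, 8 mm receiver aperture; assumed 5 mm transmitter
# aperture, transmittances 0.9, pure ocean water (eps = 0.12 1/m), 50 m link.
P_C = received_power(P_L=0.08, tau_tx=0.9, tau_rx=0.9, eps=0.12,
                     d_tx=0.005, d_rx=0.008, R=50.0, lam=450e-9)
print(f"{P_C:.2e} W")
```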

3 SROS Requirements

The following requirements are imposed on the equipment of the optical information transmission system installed on board the submersible:
1. Minimal mass-dimensional indices and power consumption;
2. Operational capability of the optical communication line in the presence of significant pointing errors, caused by the impossibility of exact positioning in a moving aqueous medium;
3. Operational capability of the submersible over a long period of time without technical maintenance, with a high degree of reliability, because access to underwater robotic complexes is complicated;
4. Transmission of video data in real time at a transmission speed of not less than 10 Mbit/s for 830 × 480 pixel images at 25 frames/s.

When designing an underwater optical link, it is first necessary to select the light source, since the design of the transmitting and receiving parts of the communication system is determined by the type of radiating element. Currently, LEDs and laser diodes can be used to build underwater communication lines of SROS [3]. LEDs emit waves in a narrow spectral range. Their advantages are high efficiency, stability, long service life, reliability and low cost; their disadvantage is the transition to a non-linear mode when heated. Laser diodes provide coherent radiation. The switching speed of laser diodes is significantly higher than that of LEDs, which allows information transfer rates up to several Gbit/s [6, 8]. The disadvantage of laser diodes is their high sensitivity to temperature changes.

Currently, the following types of photon receivers are used in underwater optical communication lines [3, 8]: photodiodes, avalanche photodiodes and photomultipliers. Photodiodes are easy to use, but their sensitivity in the blue-green part of the spectrum is limited, which is a drawback, since in this region of the spectrum the optical signal experiences the least attenuation [3–7]. Avalanche photodiodes significantly increase the sensitivity with compact dimensions; however, their gain factor strongly depends on temperature. A photomultiplier allows a compromise between gain and sensitivity to changes in external conditions.

Thus, as a result of the analysis, a laser diode with a radiation wavelength of 450 nm and a power of 80 mW can be used as the radiation source, and a photomultiplier with a receiving aperture diameter of 8 mm can be used as the receiver. The parameters of the transmitter and receiver were chosen, taking into account the above requirements, after analyzing currently proposed optical devices. Preliminary dependences of the normalized radiation power on the distance between the receiver and transmitter were obtained for different types of water (Fig. 3), where line 1 is pure ocean water with an average attenuation coefficient ε(λ) = 0.12; 2—coastal ocean water, ε(λ) = 0.3; 3—coastal seawater, ε(λ) = 0.5; 4—water in places of strong biological activity (turbid harbor), ε(λ) = 2.19.

Fig. 3. Dependences of the normalized receiver input power on the distance

The dependences obtained were used to calculate the error probability P_err of transmitting the optical signal as a function of the distance R to the optical receiver for different types of water (Fig. 4).


Fig. 4. Dependences of the error probability on the distance to the optical receiver

Solid lines in Fig. 4 correspond to a transmission rate of 10 Mbit/s, and dotted lines to a transmission rate of 100 Mbit/s. The developed model is consistent with the results of the studies conducted in [3, 8] with respect to the calculated signal power at the receiver input and the symbol error probability.
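As a rough cross-check of these ranges, one can ask at what distance the Bouguer term exp(−εR) alone exhausts the 45 dB link budget mentioned in the abstract. This back-of-the-envelope estimate is not the authors' calculation (it ignores geometric and aperture losses), but it gives distances of the same order as the experimentally reported ranges.

```python
# Distance at which exp(-eps * R) alone uses up a given link budget,
# for the four water types quoted above (absorption/scattering only).
from math import log

WATER_TYPES = {"pure ocean": 0.12, "coastal ocean": 0.3,
               "coastal sea": 0.5, "turbid harbor": 2.19}  # eps(lambda), 1/m

def range_for_budget(eps, budget_db=45.0):
    """Solve 10 * log10(exp(eps * R)) = budget_db for R."""
    return budget_db * log(10) / (10 * eps)

for name, eps in WATER_TYPES.items():
    print(f"{name}: {range_for_budget(eps):.1f} m")
```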

4 SROS Equipment

The block diagram of the transmitter and receiver of optical radiation is shown in Fig. 5.

Fig. 5. Block diagram of the transmitter and receiver of optical radiation

The optical radiation transmitter transforms an amplitude-modulated electric signal into an optical one. The source of optical radiation is an NDB7875 (blue) semiconductor laser with an operational wavelength of 445 nm made by Nichia. The current through the laser is set by a controlled current source based on a powerful field-effect transistor. The information signal is fed to a pair of MC2042 current drivers connected in parallel. The laser is shunted when a logic level "0" is supplied to the driver inputs; at this moment, only the bias current flows through the laser diode.


The laser is not shunted when a logic level "1" is supplied to the driver inputs; in this case, both the bias current and the modulation current flow through the laser diode. Thus, amplitude modulation of the optical radiation is performed. The laser radiation power is set by the control circuit of the optical transmitter, based on the LM2902 operational amplifier. To avoid over-heating, the laser diode is mounted on the massive base of the optical collimating assembly, which serves as a good heat sink. The maximum optical radiation power was more than 20 dBm. The optical radiation receiver transforms the modulated radiation of the optical transmitter, focused by a receiving lens onto the light-sensitive area of an S5973 silicon photodiode made by HAMAMATSU, into an electric signal. The resulting photocurrent enters the input of an MO2011 transimpedance amplifier, which implements the first amplification stage. The appearance of the transceiver module without and with a sealed container is shown on the left and right sides of Fig. 6, respectively.

Fig. 6. The appearance of the transceiver module

5 SROS Test

The test of the scattering medium’s influence on real-time video transmission was carried out using an IP camera connected to an SROS transceiver module. The video stream was transmitted via the wireless optical channel from the camera to a second SROS transceiver module, to which a personal computer receiving the video stream was connected. For video image quality assessment, the method of comparative analysis of video files, in which the IP camera stream was recorded under various conditions, was chosen. While studying the influence of the scattering medium on the transfer rate of real-time video, it was determined that a significant reduction in transfer rate appears at a concentration of 5 ml of MAALOX in 65 L of water. Each video was analyzed using VirtualDub version 1.5.10, and parameters such as data transfer rate and frame rate were determined. The frame rate parameter was used to eliminate the effect of video stream compression on the evaluation of transmission quality. Figure 7 shows the dependences of video transmission quality (the ratio of data transfer rate to the required rate Vq) and frame rate Vk on channel BER.

Improved Quality Video Transmission by Optical Channel

235

Fig. 7. Video transmission quality and frame rate dependences on BER channel

As can be seen from Fig. 7, a significant quality reduction of real-time video transmission appears when the BER level is about 4·10⁻⁶. On the other hand, the addition of a scattering medium does not significantly affect video stream quality, and its influence is qualitatively no different from the effect of a simple signal level reduction. Experimental studies of the underwater optical information transmission system showed that at a data transfer rate of 10 Mbit/s, the error probability did not exceed 10⁻⁷ at distances of up to 64 m in clear ocean water, 42 m in coastal ocean water and 19 m in oceanic water in places of strong biological activity. At a data transfer rate of 100 Mbit/s, the error probability did not exceed 10⁻⁷ at distances of 53 m in clear ocean water, 32 m in coastal ocean water and 9 m in ocean water in places of strong biological activity. The same operating distances as at the transmission rate of 10 Mbit/s were achieved with an error probability of 10⁻³. Table 1 shows the main parameters of known underwater communication instruments. The analysis of Table 1 shows that at present there are at least three instrument options for wireless optical communication with data transmission speeds from 10 to 50 Mb/s at distances not exceeding 20 m (under weak turbidity of water).

Table 1. Main parameters of known underwater communication instruments

Publication year | Authors, source | Distance, m | Transmission speed, Mb/s | Light source
1992 | Source [9] | 9 | 50 | Laser diode
1995 | Source [10] | 20 | 10 | Laser diode
2005 | Source [11] | 12 | 10 | LED
2013 | Scientific team headed by Kirillov S.N.; Patent “Underwater optical communication instruments” #2526207 [12] | 100 | 100 | Laser diode
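The distances reported above can be roughly cross-checked with a simple exponential (Beer–Lambert) link-budget model. The sketch below assumes that the 45 dB figure mentioned in the conclusion is the available link budget, and the attenuation coefficients are typical literature values for different water types, not measurements from this paper.

```python
import math

# Rough consistency check: received power falls as exp(-c*d), so channel
# loss in dB is 10*log10(e)*c*d ~= 4.343*c*d.  The 45 dB budget and the
# attenuation coefficients c below are assumptions (typical literature
# values for Jerlov-like water types), not measured system parameters.
BUDGET_DB = 45.0
ATTENUATION = {            # c, 1/m -- assumed typical values
    "clear ocean": 0.15,
    "coastal ocean": 0.30,
    "turbid harbor": 2.2,
}

def max_range_m(budget_db, c):
    """Distance at which exponential attenuation alone uses up the budget."""
    return budget_db / (10.0 * math.log10(math.e) * c)

for water, c in ATTENUATION.items():
    print(f"{water}: ~{max_range_m(BUDGET_DB, c):.0f} m")
```

For clear ocean water this gives on the order of 70 m, which is close to the 64 m measured at 10 Mbit/s, so the simple model is at least in the right range.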


The results obtained in the present paper (transmission speed of 100 Mb/s at distances up to 100 m in coastal ocean water) allow considering the possibility of creating equipment whose parameters exceed all known models [9–11].

6 Video Quality Improvement Algorithm

The underwater optical communication channel is designed to transmit real-time video data of improved quality, obtained with the help of a video enhancement algorithm. The proposed approach envisages an increase in the visibility distance in images as well as their “cleaning” from suspension, that is, removing visible organic and mineral particles that impair the view of underwater scenes, especially of small objects. The increase in visibility range is provided by increasing image contrast using contrast-limited adaptive histogram equalization [13]. In the proposed algorithm, automatic contrast adjustment is performed according to the calculated contrast value of the original frame [14]. The result of the video stream quality improvement algorithm is shown in Fig. 8.
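The contrast-limited equalization step cited from [13] can be illustrated with a minimal single-tile sketch. A real CLAHE implementation processes a grid of tiles with bilinear interpolation, and the clip limit here is an arbitrary assumed value rather than a parameter of the authors’ algorithm.

```python
# Minimal sketch of contrast-limited histogram equalization on one
# (global) tile of an 8-bit grayscale image, in the spirit of [13].
# The clip limit is an assumed illustrative value.

def clipped_equalize(pixels, clip_limit=40):
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    # Clip the histogram and redistribute the excess uniformly: this is
    # what limits contrast amplification in near-uniform regions.
    excess = 0
    for i in range(256):
        if hist[i] > clip_limit:
            excess += hist[i] - clip_limit
            hist[i] = clip_limit
    for i in range(256):
        hist[i] += excess // 256
    # Map grey levels through the cumulative distribution function.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    scale = 255.0 / cdf[-1]
    return [round(cdf[p] * scale) for p in pixels]

# Low-contrast example: values clustered around mid-grey get spread out.
out = clipped_equalize([120, 121, 122, 123, 124] * 20)
print(min(out), max(out))
```

The clustered input spanning only five grey levels is stretched across most of the output range, which is the visibility-enhancing effect described in the text.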

Fig. 8. Comparison of original and processed images. A—original image, B—processed image, C—enlarged fragment of the original image, D—enlarged fragment of the processed image.

As a result of experimental studies using natural underwater images, it was found that the proposed algorithm increases the visibility range by a factor of 2.5–4. Further advantages of the algorithm are fully automatic operation in real time and adaptation to changing underwater scenes, meaning automatic adjustment of the algorithm parameters.


7 Properties of Ocean Water at Great Depth

Scientific and applied underwater research involves working on the ocean floor. According to the physical map of the World Ocean, this is a significant area with a depth of 4,000 m or more, decreasing in coastal seas and sill areas to 1,500–2,000 m. Ocean water is an inhomogeneous structure: while a horizontal cut is fairly uniform within the boundaries of a given geographic area, the vertical profile varies considerably. World Ocean maps with average values of temperature and salinity at different depths are given in [7]. As the depth of the ocean increases, the amount of sunlight decreases [15], which, together with the decreasing amount of oxygen, leads to a decrease in phytoplankton content [16]. Organic matter and suspension have different vertical distributions in the ocean, and the concentration of organic matter in oceans and seas decreases with increasing depth. Average suspended matter concentrations are 0.05–0.5 mg/l for surface waters of the open ocean and only 1–250 µg/l for deep waters [17]. This leads to a decrease in the effects associated with scattering of optical radiation in seawater and, consequently, to a decrease in the total attenuation coefficient. Phytoplankton and organic impurity concentrations decrease down to a depth of about 200 m and then remain stable [7, 16, 17]. The vertical profile of salinity and temperature is of greatest interest for us [18]. The salinity vertical cut varies for different areas of the World Ocean, but its structure remains the same. In the near-surface layer (up to 200 m), salinity gradually increases with depth, and then a halocline takes place, i.e. a sharp salinity jump with the change in depth. The part of the vertical section located at depths of 1,000–1,500 m is subject to convective mixing, and the salinity of this part of the vertical ocean profile varies greatly depending on geographic location and season. However, at depths of more than 1,000 m water salinity is almost unchanged and lies in the range 34.5–35, and this pattern is characteristic of all geographic latitudes, including polar ones [18]. In contrast to salinity, which varies within narrow limits, the temperature range of World Ocean waters is wide, from −1 to 30 °C [18]. The greatest variations in temperature occur, as with salinity, in the upper layers up to 1,000 m deep. At depths of more than 1,000 m the temperature variation range is small, within 3–5 °C, with a monotonous decrease as depth increases.

8 Conclusion

As a result of work performed on the basis of theoretical and experimental studies [12, 15, 19, 20], a wireless optical channel prototype was created and the requirements for its technical parameters were formulated. Limiting video transmission distances in different types of water were determined for a 45 dB underwater transmission system. On the basis of this model, a fully programmed algorithm for preprocessing underwater images was developed, consisting of a strictly defined sequence of actions with optimized filter parameters. This algorithm was tested on natural and synthesized underwater images and produced a 3–4 times increase in the distance at which small objects can be recognized, compared to the original image. In addition, the algorithm is fully automatic and does not require any calibration during operation.


References

1. Francois, R.E., et al.: Unmanned Arctic Research Submersible (UARS) system development and test report. Technical report no. APL-UW 7219. Applied Physics Laboratory, University of Washington (1972)
2. Baulo, E.N., Bukin, O.A., Doroshenko, I.M., Major, A.Y., Salyuk, P.A.: Teleupravlyaemyj podvodnyj kompleks dlya issledovaniya bioopticheskih parametrov morskoj vody [Remote-controlled underwater complex for the study of bio-optical parameters of sea water]. Optika atmosfery i okeana 27(3), 3–8 (2014). (in Russian)
3. Shlomi, A.: Underwater optical wireless communication network. J. Opt. Eng. 59, 110 (2010)
4. Doronin, Y.P.: Fizika okeana [Ocean Physics]. Gidrometeoizdat, St. Petersburg (1978). (in Russian)
5. William, M.I., James, B.P.: Infrared optical properties of water and ice spheres. Icarus 8, 324–360 (1968)
6. Pratt, V.: Lazernye sistemy svyazi [Laser Communication Systems]. Svyaz, Moscow (1972). (in Russian)
7. Shifrin, K.S.: Vvedenie v optiku okeana [Introduction to Ocean Optics]. Gidrometeoizdat, St. Petersburg (1983). (in Russian)
8. Hanson, F., Stojan, R.: High bandwidth underwater optical communication. Appl. Opt. 47(10), 90 (2008)
9. Snow, J.B., Flatley, J.P., Freeman, D.E., Landry, M.A., Lindstrom, C.E., Longacre, J.E., Shwartz, J.A.: Underwater propagation of high data rate laser communication pulses. In: SPIE, vol. 1750, pp. 419–427 (1992)
10. Bales, J.W., Chryssostomidis, C.: High-bandwidth, low-power, short-range optical communication underwater. In: International Symposium on Unmanned Untethered Submersible Technology, vol. 9, pp. 406–415 (1995)
11. Chancey, M.A.: Short range underwater communication links. Master thesis, North Carolina State University (2005)
12. Dmitriev, V.T., Kirillov, S.N., Kuznecov, S.N., Locmanov, A.A., Polyakov, S.Y.: Apparatura podvodnoj opticheskoj svyazi [Submarine Optical Communications Equipment]. Patent no. 2526207, patent holder: Ryazan State Radio Engineering University. (in Russian)
13. Zuiderveld, K.: Contrast limited adaptive histogram equalization. In: Graphics Gems IV, pp. 474–485 (1994)
14. Michelson, A.A.: Studies in Optics. University of Chicago Press (1927)
15. Kirillov, S.N., Balyuk, S.A., Kuznecov, S.N., Esenin, A.S.: Razrabotka modeli rasprostraneniya opticheskogo signala v vodnoj srede dlya podvodnyh sistem peredachi informacii [Development of a model of optical signal propagation in an aquatic medium for underwater information transmission systems]. Vestn. RSREU 2(40), 3–8 (2012). (in Russian)
16. Mobley, C.D.: Terrestrial optics. Applied Electromagnetics and Optics Laboratory, SRI International, Menlo Park, California
17. Johnson, L.J.: The underwater optical channel. Department of Engineering, University of Warwick, p. 18 (2012)
18. Temperature, Salinity, Density and Ocean Circulation. http://ocean.stanford.edu/courses/bomc/chem/lecture_03.pdf
19. Kostkin, I.V., Pushkin, V.A., Locmanov, A.A., Korsukov, I.D.: Algoritm uluchsheniya kachestva podvodnyh izobrazhenij [Algorithm for improving the quality of underwater images]. Vestn. RSREU 2(40), 40–46 (2012). (in Russian)
20. Kirillov, S.N., Kostkin, I.V., Dmitriev, V.T.: Opticheskij kanal peredachi videoizobrazhenij s podvodnyh mobilnyh robotov dlya raznyh tipov voln i klimaticheskih zon [Optical video transmission channel from underwater mobile robots for different types of waves and climatic zones]. Morskie informacionno-upravlyayushchie sistemy 3(6), 44–51 (2014). (in Russian)

Sketch Design of Information System for Personnel Management of Large State Corporation in the Field of Control Engineering

Vadim Zhmud (1), Alexander Liapidevskiy (2), and Galina Frantsuzova (1)

1 Novosibirsk State Technical University, Karl Marx Ave. 20, 630073 Novosibirsk, Russia
[email protected], [email protected]
2 Novosibirsk Institute of Program Systems, Novosibirsk, Russia
[email protected]

Abstract. The need to create an information system for proactive personnel management is particularly acute for many state corporations. The strategy of such management consists, first of all, not only in finding the necessary specialists in due time as the need for them arises, but also in anticipating these needs. In particular, it is necessary to have a reliable idea of what personnel are trained in the country and what competencies they possess. This analysis by profession and level of preparation should also be obtained with reference to the geography of university graduates, since it is unlikely that a significant part of graduates are ready to change their place of residence without sufficiently weighty motives. The paper analyzes the main tasks of creating such a system and the possibility of using Federal State Educational Standards both for such a search and for actively influencing the process of preparing students. Such influence can be carried out by opening new training profiles, as well as by creating individual educational trajectories through the use of a network-based form of education. The paper also introduces terminology and discusses the basic background information for creating a preliminary technical project for a pilot version of the system.

Keywords: Control engineering · Automation · Higher education · Personnel management · Competencies · Federal educational standards · Decision making

1 Introduction

The question of the best higher technical education is especially relevant [1–27]. It is important that the level of knowledge of graduates be reliably high in the field of their future job. This is most important in the field of Control Engineering, due to the variety of knowledge demanded and the various fields of its use for creating new devices, machines and even robots. For example, the State Corporation

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 240–255, 2019. https://doi.org/10.1007/978-3-030-12072-6_21


“Rostechnologii” is especially interested in the planned education of students in the field of “Control in Technical Systems”. This field is related to the directions “Cybernetics”, “Automation”, “Mechatronics”, “Robotics” and so on. This paper proposes an analysis of the possibility of creating a smart information system for automated search for future workers in this field; if the search proves ineffective, this system would help in organizing complex individual trajectories for preparing such specialists. Currently, even the biggest corporations, to replenish their personnel shortage, use random search through acquaintances, through advertisements in the media, or through the services of external personnel agencies (the Employment Service or commercial personnel agencies). Such a search is not professional enough; moreover, these services are often unnecessarily expensive, since payment is made for each candidate separately. The search usually covers those candidates who could not get a job on their own for various reasons, including, in some cases, lack of professionalism, a difficult character, or other features undesirable for a potential employer. In addition, recruitment agencies themselves recruit their clients by advertisement; they are in fact intermediaries between unknown workers and unknown employers, their services are purely formal, and they are not based on a deep understanding of the competencies required by the employer and the actual presence of such competencies in the employee. This correspondence is supposed to be established by the employer in the form of an interview. This creates the risk of recruiting incompetent employees and the risk of not being able to find enough competent specialists in the required time. This “as is” situation creates the starting conditions for the development of an information system that could contribute to the management of the competences of the personnel of a large organization or corporation. Before discussing possible ways to create such a system, let us consider the terminology, which should be understood in the same way by all participants in the process. In the next section, this terminology is given in the simplest formulations; standard terms can be found with the help of search engines in various directories.

2 Terminology

Integrated group of specialties (IGS)—a group of specialties in accordance with the Federal State Educational Standards of Higher Education (FSES) in force in Russia, whose six-digit numbers share the same first two digits (from 01 to 54).
Direction of training—the name of a direction of preparation in accordance with the FSES, from among the standard names defined by a six-digit number.
Level of training—one of the following levels of higher education: bachelor degree (4 years of study), specialist degree (5 years of study), master degree (2 years of study based on a bachelor degree), or PhD (4 years of study based on a master or specialist degree).
Educational program (EP)—(here) a training program implemented by an educational organization, having a corresponding set of documentation confirming that the organization is provided with everything necessary for its implementation, and its actual implementation confirmed by the results of examinations, final qualification works, testing and other necessary facts. The organization must have a license for educational activities in this area; the current EP must be accredited in the manner prescribed by law. This means that there should be personnel, material, methodological and other support. In the absence of accreditation, an educational organization is not entitled to issue diplomas of education, although it may carry out educational activities on the basis of an existing license.
Profile—(here) a variant (one of several) of the curriculum, determined by the educational organization’s choice of elective disciplines.
Individual educational trajectory (IET)—(here) a variant of the curriculum within a given profile, determined by the choice of the student.
Network form of training—a form of training in which the student completes a part of the training at another enterprise, educational or scientific.
Network practice-oriented form of education—(here) a form of network education in which the learner completes only practices at the second enterprise (introductory, educational, industrial, pre-graduation, or several types of these practices).
Network interuniversity form of education—(here) a form of network education in which the second training enterprise is an educational organization of higher education that has state accreditation in this area of education.
Joint educational program (JEP)—an educational program implemented in the network form of education, in which any part of the training can be carried out in any of the participating educational organizations; used for the exchange of students and teachers, for other forms of cooperation, and for Double Degree programs.
International JEP—a JEP in which one of the educational organizations is foreign.
Program of double diploma (PDD)—a special type of JEP, as a result of which the student receives a diploma from both educational organizations in which he studied.
International PDD—a PDD in which one of the educational organizations is foreign.
Competences—a set of knowledge, skills, abilities and readiness acquired by students as a result of training; formulated as a list of what a graduate should know, be able and ready to do, and have an idea about; divided into GCC—general cultural competences, GPC—general professional competences, and PC—professional competences.
Information system (IS)—a software tool that is installed on several computers and has access to a number of databases, including those generated from this system.
Role—a differentiation of access rights for entering and retrieving information while working with the IS.
Database—an information resource that holds the information necessary for the operation of the information system; it is replenished from the information system and from the Internet, and the accuracy of its information is in some way ensured and guaranteed.
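As an illustration only, the terms above could map onto a data model like the following sketch. All class and field names are our assumptions, not part of any published schema for the proposed IS.

```python
# Illustrative data model for the terminology above; names are assumed.
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):                 # differentiation of access rights
    EMPLOYER = "employer"
    EDUCATIONAL_ORGANIZATION = "educational organization"
    EMPLOYEE = "employee"
    ADMINISTRATOR = "administrator"
    MODERATOR = "moderator"

class CompetenceKind(Enum):       # GCC / GPC / PC from the FSES model
    GCC = "general cultural"
    GPC = "general professional"
    PC = "professional"

@dataclass
class Competence:
    kind: CompetenceKind
    code: str                     # e.g. "PC-1" -- illustrative
    description: str

@dataclass
class EducationalProgram:
    igs_code: str                 # first two digits of the FSES number
    direction: str
    level: str                    # bachelor / specialist / master / PhD
    accredited: bool
    competences: list = field(default_factory=list)

ep = EducationalProgram("27", "Control in technical systems",
                        "bachelor", True,
                        [Competence(CompetenceKind.PC, "PC-1",
                                    "design of control systems")])
print(ep.level, len(ep.competences))
```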


3 Our Vision of a Prototype Information System

3.1 Purpose of the IS

Forecasting the future needs of the Corporation for specialists of the highest qualification (bachelors, specialists, masters, graduate students, candidates of science, doctors of science) and for improving the qualifications of existing personnel.
Automated search for organizations engaged in the preparation of the required personnel or in advanced training under the required programs.
Automated comparison of proposed educational programs by the following indicators: standard competencies, additional competencies, level of training (including the level of the educational organization, competition in this specialty, indicators of learning success and residual knowledge, indicators of publication activity, and others).
Automated formation of proposals for individual educational trajectories based on existing network forms of education or on the formation of new network forms of education.
Automated formation of offers for retraining of personnel using current qualification improvement programs or proposals to create additional advanced training programs.
Integration of information about the actual implementation of personnel programs using the information system, in order to modify it and improve its user performance.
Compilation of the required selection lists for any indicator used (for example, by employer, by educational organization, breakdowns by age, etc.).

3.2 Situation “As Necessary”

A human resources manager (HRM) or head of a division who wants to recruit new employees should have modern personnel search tools that allow them to preliminarily assess the competencies of potential personnel with a sufficient degree of reliability. The information system should prompt the specialist which search criteria can be set from a pop-up list. The system should also be able to accept competencies that are not in the ready-made list; in this case additional text is formed for perception by potential users on the part of educational organizations. The information system should allow work with users who have different roles: Employer (HRM), Educational organization, Employment service, Employee, Administrator and Moderator.
An employer can search for an organization that prepares the necessary specialists, using in the search key competencies, keywords (not included in standard competencies), names of training directions, profiles, activities, specialties, etc. The Employer may also use such characteristics as the level of training, age, work experience in the specialty and others, if necessary. An employer can also record the results of previous uses of the IS, in particular, note its usefulness in searching, a high level of accepted specialists, an insufficient level of recommended and hired specialists, an insufficient level of training in a given educational organization, etc. The employer can leave requests and receive answers to them immediately or after processing. The employer must be authorized, having received access rights (login and password); the accuracy of authorization can be checked by the system administrator or moderator.
An educational organization can enter, at its discretion, information about the educational programs implemented, the disciplines taught in them, the competencies acquired by the students, the levels of education, the places for passing practice, the implemented JEPs, PDDs, etc. An educational organization can leave requests for proposals from employers, as well as proposals for creating JEPs and PDDs, for implementing practices, and so on. An educational organization must be registered by a representative who has obtained access rights (login and password); the accuracy of the authorization can be checked by the system administrator or moderator.
An employee can enter information about the education he has received, existing skills, knowledge and competencies, create a “Portfolio” of his work, scientific, technical, design or other developments, work experience, interests, preferences, hobbies, scientific publications and other material at his discretion, except for information not related to employment opportunities. Beforehand, the employee must familiarize himself with the rules of working with the IS, confirm his agreement with them, and consent not to attempt to use the IS for purposes not related to employment (such as social networking or other non-professional forms of communication). The employee must register, having received access rights (login and password); the authenticity of the authorization can be checked by the system administrator or moderator.
The IS Administrator guarantees the non-disclosure of personal data without the consent of the employee. The Administrator, using his own resources and moderators, ensures that the IS is used only for professional purposes.
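The role differentiation described above could be backed by a simple permission table; the permission names below and their assignment to roles are purely illustrative assumptions.

```python
# Sketch of role-based access differentiation for the proposed IS.
# Permission names and the role-permission mapping are assumptions.
PERMISSIONS = {
    "employer": {"search_specialists", "post_vacancy", "leave_review"},
    "educational organization": {"search_employers", "publish_program"},
    "employee": {"search_employers", "publish_resume"},
    "moderator": {"check_registration", "delete_record"},
    "administrator": {"check_registration", "delete_record",
                      "send_warning", "block_user"},
}

def allowed(role, action):
    """True if a user with the given role may perform the action."""
    return action in PERMISSIONS.get(role, set())

print(allowed("administrator", "block_user"))   # administrators may ban
print(allowed("employee", "delete_record"))     # employees may not
```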
In case of detected violations of the rules for the use of the IS by users or violations of ethical standards, the Administrator deletes the records that constitute such violations and sends a warning to the violator. In case of gross violations of the rules or ethical norms, or in the case of repeated violations, the Administrator denies the user access to the IS.
Information from the Internet can be used to pre-fill information about an educational organization using its official website. Under current legislation, all educational organizations are obliged to provide this information on their official websites and are responsible for its accuracy. Such information includes information about the educational organization, its Charter, License and Accreditation Certificate [30], a list of implemented training directions by IGS and level, with details down to profiles and individual educational trajectories, curricula and time schedules. Competency models are not required to be made publicly available; therefore, this information should be requested by IS representatives through announcements, mailings, personal appeals or in other ways, in accordance with applicable legal regulations. Students’ portfolios may be publicly available; however, according to current standards, an educational organization should provide students with opportunities to create a portfolio, but creating one is not mandatory for students. Therefore, the portfolio base can be replenished only by individual actions of the employee, including by specifying a link to a page with open access to this portfolio on another, external website. In this case, an icon should appear in the employee questionnaire indicating the presence of a portfolio on the external site in open access. Clicking on this icon opens the link in an additional window or offers to download the page, for example, as a PDF file.

4 Approximate Scenario of the Actions of the Personnel Specialist

4.1 Employer Actions

Initial situation: the need to recruit specialists with the necessary set of competencies. The employer logs in and presses the “Search for specialists” button. The following search options are provided: “By educational organization”, “By direction of preparation”, “By integrated specialty group”, “By region”, “By keywords”, “Other”. Each of these options may contain refinement buttons; for example, the second search layer may contain refinements of the first layer: “By educational organization” -> “By specialty” -> “By profile” -> “By competences”, and so on. Also, in each new search refinement, further refinements can be added, for example, “work experience in the specialty not less than (specify) years”, etc. When searching by competence, the employer can select competencies from an accumulating list, for example, select all or several GCCs, all or several GPCs, all or several PCs from a given training direction, and can also view a list of additional PCs arranged alphabetically or by activity, or found by keywords, etc. The employer also sets the level of training (bachelor, master, etc.). If the required educational program is found, the employer can see when the next graduation will take place and how many students can be “ordered”. The order is implemented by forming an employment proposal. When creating a long-term application (with a waiting period of more than two years), an opportunity is formed to conclude an agreement on targeted student training. When creating an application for a period of more than four years for bachelors or more than two years for masters, an opportunity is formed to send the Educational organization a proposal to create targeted budgetary places for training specialists for this employer. The employer can also use the system to find a partner and form a proposal for practical training of future employees.
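The accumulating, layer-by-layer refinement described above can be sketched as a simple filter over program records; the sample data and field names are illustrative assumptions, not part of the proposed system.

```python
# Sketch of layered search refinement for the Employer scenario.
# Program records, field names and competence codes are assumptions.
PROGRAMS = [
    {"organization": "University A",
     "direction": "Control in technical systems",
     "level": "bachelor", "competences": {"PC-1", "PC-2", "GPC-3"}},
    {"organization": "University B",
     "direction": "Mechatronics and robotics",
     "level": "master", "competences": {"PC-2", "PC-5"}},
]

def refine(programs, **criteria):
    """Keep programs matching every criterion; 'competences' means the
    program must cover the whole selected competence set."""
    out = []
    for p in programs:
        ok = all(
            criteria[k] <= p[k] if k == "competences" else criteria[k] == p[k]
            for k in criteria
        )
        if ok:
            out.append(p)
    return out

step1 = refine(PROGRAMS, level="bachelor")               # first layer
step2 = refine(step1, competences={"PC-1", "PC-2"})      # second layer
print([p["organization"] for p in step2])                # -> ['University A']
```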
The employer who has already received specialists using the IS can also post a review of the system, a review of a specialist, or other feedback.

4.2 Educational Organization Actions

Initial situation: an employer is sought for trained personnel or for organizing practices. The representative of the Educational organization logs in and presses the “Search for Employers” button. The following search options are provided: “By direction of preparation”, “By integrated specialty group”, “By region”, “By keywords”, “Other”.


Each of these options may contain refinement buttons, for example, Minimum Wage, Social Package, Dormitory Support, and more. Also, in each new search step, refinements can be added. When searching by competence, a representative of an Educational organization can select competencies from a cumulative list, for example, select all or several of them, and can also view a list of required additional PCs, arranged alphabetically or by activity, or found by keywords, and so on. The Educational organization also sets the level of training (bachelor, master, etc.). If the required employer is found, it is possible to form a proposal for placement in a practice, to conclude an agreement on targeted recruitment, etc. The representative of the Educational organization can also use the system to find a partner for a JEP. In addition, the representative of the Educational organization can offer the employer who has already received specialists through the IS to write a review about the Employee, and offer the Employee to fill out a “Success Story” page on the website of his Educational organization.

4.3 Employee (Worker) Actions

Initial situation: the search for an employer is required for employment. The worker is registered and after checking his data by the moderator he enters the system. Press the “Search for Employers” button. Search options are provided: “By the direction of preparation”, “By the integrated group of the specialty”, “By region”, “By keywords”, “Other”. In each of these options, there may be refinement buttons, for example, Minimum Wage, Social Package, Dormitory Support, and others. Also, in each new search refinement, refinements can be added. Also one can set the level of training (bachelor, master, etc.). If the desired Employer is found, one can send a Resume or fill out a questionnaire, then click the “Check” button. An automatic check reveals blank required items or incorrectly filled data. If no errors were found, there is a “Save data” button and there is a “Send to employer” button. There is also a button “Post on the website for all employers”, while the Employee may prohibit some employers to see this information, for which there is a “Black list of employers”. Also, an employee may limit the search for Employers by region, by industry, and so on. 4.4

4.4 Actions of a Recruitment Agency or an Employment Service

The actions of a recruitment agency or an employment service can be carried out on the basis of a contract that defines a limited list of actions available to this participant. This option is not provided in the trial version.

Sketch Design of Information System for Personnel Management

247

5 The Main Objectives of the Project

The main objective of the project is to turn the situation "As is" into the situation "As necessary". To achieve this, all the missing components of the project must be developed and implemented. The proposed structural diagram of the interaction of the system with users in different roles is shown in Fig. 1; the work scenarios of participants in different roles are shown in Figs. 2, 3 and 4.

Fig. 1. The proposed structural diagram of the interaction of the information system with users with different roles

Fig. 2. Work scenario of the Employer


V. Zhmud et al.

Fig. 3. Scenario of the work of the Educational Organization

Fig. 4. Employee’s work scenario


6 An Example of Working with the Information System and Formal Difficulties

6.1 Problem Statement

Suppose that the recruitment agency of the Corporation is seeking graduates. First of all, it is necessary to decide on the required level of training: bachelor or master. The information system should explain that a bachelor is a graduate with a higher education who has completed four academic years (eight semesters) of training, while a master is a bachelor who has completed two additional years of study in a magistracy and received an additional diploma; some personnel specialists consider such a graduate to have two higher educations. The ideal situation would be the following. An HR manager has a list of the necessary professional competences C_i. The future specialist should have all of them, i.e. his competence as the total amount of knowledge should be

W = \sum_{i=1}^{N} C_i .

In reality, for each competence the student does not retain the total amount of knowledge but only a residual amount of it. If this amount is sufficient for working in the field, the competence can be treated as existing; if it is not sufficient, the competence is absent, although it was given during the education. The fact of having a competence is denoted k_i, meaning that the university graduate has the competence C_i. For example,

c_i \ge 0.6\,C_i \;\Rightarrow\; k_i = k(c_i) = 1, \qquad \text{where } k(x) = \begin{cases} 1, & \text{if } x \text{ is true} \\ 0, & \text{if } x \text{ is false} \end{cases}

The fact of having all competences is the logical statement

K = k_1 \wedge k_2 \wedge \ldots \wedge k_N, \qquad \text{where } K = \begin{cases} 1, & \text{if } k_i = 1 \;\forall i \\ 0, & \text{if } \exists k_j = 0 \end{cases}

Several competences can be demanded as "one of the following list". Denote such competences as l_j; the fact of having any of them is given by the statement

M = l_1 \vee l_2 \vee \ldots \vee l_N, \qquad \text{where } M = \begin{cases} 1, & \text{if } \exists l_i = 1 \\ 0, & \text{if } l_j = 0 \;\forall j \end{cases}

If there are several such lists M_1, M_2, \ldots, M_Q, then the total demand is

K_\Sigma = K_1 \wedge K_2 \wedge \ldots \wedge K_R \wedge M_1 \wedge M_2 \wedge \ldots \wedge M_Q, \qquad \text{where } K_\Sigma = \begin{cases} 1, & \text{if } K_i = 1 \;\forall i \text{ and } M_j = 1 \;\forall j \\ 0, & \text{if } \exists K_i = 0 \text{ or } \exists M_j = 0 \end{cases}

In reality, fuzzy logic should be used here, because one can seldom say for sure whether the graduate has the demanded competence or not.
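The crisp checks above, and the fuzzy relaxation just mentioned, can be sketched in a few lines. The 0.6 threshold is the one used in the example above, while taking min/max as the fuzzy AND/OR is our assumption of a standard choice, not something the paper specifies:

```python
# Sketch of the competence checks above. Crisp version: a competence counts as
# present when the residual knowledge c_i reaches 60% of the full amount C_i.
def k(c_i: float, C_i: float, threshold: float = 0.6) -> int:
    return 1 if c_i >= threshold * C_i else 0

def K(ks):            # all mandatory competences present (conjunction)
    return int(all(ks))

def M(ls):            # at least one competence from an "one of the list" group
    return int(any(ls))

def K_total(K_groups, M_groups):   # total demand K_Sigma
    return int(all(K_groups) and all(M_groups))

# Fuzzy relaxation (an assumed standard choice, not spelled out in the paper):
# membership c_i / C_i clipped to [0, 1], min as AND, max as OR.
def mu(c_i, C_i):
    return max(0.0, min(1.0, c_i / C_i))

def K_fuzzy(mus):
    return min(mus)

def M_fuzzy(mus):
    return max(mus)

ks = [k(0.7, 1.0), k(0.5, 1.0)]       # [1, 0] -> one competence is missing
print(K(ks), M(ks))                    # 0 1
print(round(K_fuzzy([0.7, 0.5]), 2))   # 0.5
```

In the fuzzy variant the employer would see not a yes/no answer but a degree of fit, and could rank candidates by it.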

6.2 Formal Difficulties

One of the significant formal difficulties is that approximately the same competences are named differently, and worded differently, even in the closest educational standards. There is no fixed list of competences C_i that would be universal for every EP. The employer may use a subject instead of a competence. For example, "Theory of Probability" can mean some set of competences, say C_1, C_2, C_3, while the subject "Statistic Mathematics" can give the competences C_4, C_5, C_6. Hence the employer can demand a list of subjects instead of a list of competences, with similar mathematical conditions for their presence or absence. In this case, close subjects can give close competences; for example, "Theory of Probability" can be accepted instead of "Statistic Mathematics", i.e. we can state that

C_1 \cup C_2 \cup C_3 \approx C_4 \cup C_5 \cup C_6 .

But these subjects are not fully equal. For example, the first subject may give four competences, H_1 = C_1 + C_2 + C_3 + C_4, while the second gives the same two competences plus two different ones, H_2 = C_1 + C_2 + C_5 + C_6. If the employer needs the list H_3 = C_1 + C_2 + C_3 + C_5, then he can demand both of the former subjects, or competences based on H_1 with additional study of C_5 from H_2, or competences based on H_2 with additional study of C_3 from H_1:

H_3 \subset H_1 + H_2, \quad \text{or} \quad H_3 = H_1 + C_5, \quad \text{or} \quad H_3 = H_2 + C_3 .
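The subject-versus-competence reasoning above is plain set algebra and can be verified mechanically; the competence labels are the ones from the example:

```python
# Competence sets of the two subjects from the example above.
H1 = {"C1", "C2", "C3", "C4"}   # e.g. "Theory of Probability"
H2 = {"C1", "C2", "C5", "C6"}   # e.g. "Statistic Mathematics"
H3 = {"C1", "C2", "C3", "C5"}   # what the employer needs

# The three ways of covering H3 mentioned in the text:
assert H3 <= H1 | H2                      # both subjects together suffice
assert H3 <= H1 | {"C5"}                  # H1 plus extra study of C5
assert H3 <= H2 | {"C3"}                  # H2 plus extra study of C3
assert not H3 <= H1 and not H3 <= H2      # neither subject alone is enough
print("all coverage checks pass")
```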


Therefore the problem could be resolved rather easily if there were standard competences or standard subjects (studied disciplines). However, neither exists. At a minimum, an add-on should be created that searches for matching competencies and approximately assigns them to the same class (clustering). It would be much easier if matching competencies had the same numbers and identical formulations. The same applies to the activities, since, apparently, the authors of the standards did not intend a significant difference between concepts such as "operational and technological activities" and "service and operational activities". However, it should be borne in mind that even if such differences merely reflect different authorship, the developers of educational programs relied on these standards and implemented literally what the standards ordered; that is, even differences in names that can be considered synonymous produced a more significant difference in competences, and an even greater difference in the taught disciplines and in their emphases. Therefore, not only assignment to the same clusters is required, but also preservation of the original formulations, as well as an indication of subtle differences, so that the employer can independently understand these subtleties and decide how important they are.
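The clustering add-on suggested above could start from something as simple as word-overlap (Jaccard) similarity between competence formulations; both the measure and the 0.5 cut-off are illustrative assumptions, not a prescription from the standards:

```python
# Sketch of the clustering add-on: group near-identical competence wordings
# by word-overlap similarity. Measure and cut-off are illustrative choices.
def jaccard(a: str, b: str) -> float:
    """Word-set similarity of two formulations, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster(formulations, cutoff=0.5):
    """Greedy clustering: attach each wording to the first cluster whose
    representative (first member) is similar enough, else open a new cluster."""
    clusters = []
    for f in formulations:
        for c in clusters:
            if jaccard(f, c[0]) >= cutoff:
                c.append(f)
                break
        else:
            clusters.append([f])
    return clusters

names = [
    "operational and technological activities",
    "service and operational activities",
    "research activities",
]
print(cluster(names))  # the first two wordings fall into one cluster
```

As the text insists, the original formulations are kept inside each cluster, so the employer can still inspect the subtle differences.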

7 Additional (Individual) Competencies

Additional competencies, which are discussed in clause 5.6 of the FSES, are a further problem for the employer, since they differ not only across areas of training but also across educational organizations, and even within one educational organization when it implements educational programs according to the same standards but in different profiles, for example, in different departments. As shown above, even formulations within standards from the same Ministry often differ without sufficient justification, and here the employer will deal with formulations developed by individual departments, where the subjectivity of individual teachers' vision of the problem has a much more noticeable effect. A shortcoming of the competencies, even within the framework of the standard, is that they are insufficiently formalized. For example, one could distinguish more clearly between an employee who is competent to "participate" as one of several co-contractors (and therefore not personally responsible for the final result, bearing only shared responsibility) and an employee who is competent to perform the work personally, with personal responsibility for the result. In this sense, "readiness to participate in the work" should be recognized as a relatively weak competence in comparison with, for example, "the ability to provide" some result. Apparently, it would be desirable for each type of activity to include not only competencies where the future employee is required only to show "willingness to participate", but also those requiring the "ability to implement", "provide", "complete", etc. An even weaker wording, which also occurs, is "to have an idea about …".
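The ordering of wordings argued here can be encoded as an ordinal scale; the numeric values below are arbitrary, and only their order, taken from the discussion above, matters:

```python
# Illustrative ordinal scale over competence wordings, following the ordering
# argued in the text ("have an idea" < "participate" < "provide" < "skills").
# The numeric values are arbitrary; only their order is meaningful.
STRENGTH = {
    "to have an idea about": 1,
    "readiness to participate": 2,
    "ability to provide": 3,
    "to have the skills": 4,
}

def stronger(a: str, b: str) -> bool:
    """True if wording a denotes a stronger competence than wording b."""
    return STRENGTH[a] > STRENGTH[b]

print(stronger("to have the skills", "readiness to participate"))  # True
```

Such a scale would let the information system rank candidates whose diplomas certify the "same" competence under different wordings.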


For example, "the ability to use first aid techniques, methods of protection in emergency situations" does not formally guarantee that the employee will really use these techniques in an emergency. Being able to do something does not mean that this action will be done when necessary: many people are "capable" of things that they still do not do. Moreover, "willingness" to do something is even further from ability. One person may be able to defend a dissertation but does not; another is "ready" to do it, in the sense of "I agree to take on this work", yet not only does not do it but is not able to. Apparently, the competence "to have the skills" of some activity would be stronger, in the sense that it obliges the educational organization not only to provide the necessary knowledge of the activity, but also to teach it in practice and to monitor its effectiveness. That is why, in critical cases, the employer prefers to hire an employee with experience (positive experience, confirmed by an employer's reference or, at least, by resume entries for completed contracts with their numbers and deadlines, etc.). For this reason, a simple search by educational standards is clearly insufficient. Additional competencies, actually confirmed knowledge and skills, and the results of previously performed work must be taken into account, i.e. a resume in the full sense of the word. Consequently, the information system should provide opportunities for importing ready-made resumes from the websites of personnel agencies and employment services, as well as from the websites of educational organizations (where such resumes exist). In addition, the information system should allow potential employees to fill out a resume directly in the system. Thus, we come to the need to create personal accounts of employees.

8 About Personal Accounts of Employees

Employees' personal accounts should be filled out in the manner of Western-style questionnaires, i.e. as a CV. The employee should be able to upload all the information that he considers necessary to inform a potential employer, but disclosure of personal information is undesirable. Consequently, the system should not request passport data, pension insurance certificate data or other information that could be used to the detriment of an employee if it fell into the hands of unscrupulous persons. The system should be organized so that it not only does not request such information, but also does not provide the ability to enter it. At present, many systems have been created and operate successfully in which the scientific achievements of researchers are integrated, since these systems bring together all the publications of the authors. Such systems include, for example, scientometric databases such as RISC, Scopus, Web of Science (Web of Knowledge), ORCID, and many others. Some of them are generated automatically; authors can only request a search for information about themselves, gather the available information together, exclude information about namesakes, merge several of their own profiles, and so on. In other databases authors can work actively: they can add not only information about completed publications but also upload publications in open access,


track the readers of these publications and citations of them, follow publications of selected authors, and so on. Unfortunately, these databases are not sufficiently interconnected; sometimes their connection with each other is simply impossible. For example, an author who has publications in the Scopus, WoS and RISC databases cannot, by pressing a few buttons and entering, say, an individual identifier, transfer all the publications available there to the databases created for generating grant applications. The databases of the Russian Science Network, as well as the portals of the Russian Foundation for Basic Research and the Russian Science Foundation, ask participants to enter these data manually, which, on the one hand, is extremely time-consuming work and, on the other hand, also requires verification. It would be much easier if it were enough to enter, for example, an ORCID number, after which all these databases would receive the necessary data from Scopus, WoS and RISC. But this is not yet the case. Approximately the same is true of the resumes of potential employees: if an employee did not write something about himself, the employer does not know it and never will. An immodest person gets the best place (ceteris paribus), while a modest one remains out of work. If the system cannot integrate information from open databases, then it should, at a minimum, generate hints about which positions to fill in, so that the prospective employee fills out his questionnaire as fully as possible.
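The hint mechanism proposed above, combined with the ban on collecting sensitive data discussed in Sect. 8, can be sketched as follows; the section and field names are illustrative assumptions:

```python
# Sketch of the hints mechanism above: flag recommended CV sections that are
# still empty, and refuse fields the system must not collect (passport data
# and the like). Section and field names are illustrative assumptions.

RECOMMENDED = ["education", "work_experience", "publications", "skills", "references"]
FORBIDDEN = {"passport_number", "pension_insurance_id"}

def cv_hints(cv: dict) -> list[str]:
    hints = [f"consider filling in: {s}" for s in RECOMMENDED if not cv.get(s)]
    hints += [f"field not allowed, will be discarded: {f}"
              for f in FORBIDDEN & cv.keys()]
    return hints

cv = {"education": "NSTU, M.Sc.", "skills": ["control systems"],
      "passport_number": "0000 000000"}
for h in cv_hints(cv):
    print(h)
```

An empty hint list would mean the questionnaire is as complete as the system can suggest, with no forbidden data present.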

9 About the Forecast of Staffing Needs

Staffing requirements will not be met in a timely manner if they have not been planned in advance. Of course, only those who develop a strategic program for the development of their own organization, and a program of joint work with partner enterprises, can engage in such forecasting. The Corporation may undertake activities such as the construction of new workshops, factories, data processing centers, shared service centers, etc., which may require the simultaneous employment of several tens or even several hundred employees. This task cannot be solved with the help of a recruitment agency, nor by a single search in the information system for competencies among the educational programs of various universities. It requires planned, advance work with an educational organization, perhaps even with several educational organizations. For these purposes, a toolkit is required for analyzing the compliance of existing educational programs with the goals and objectives of staffing the Corporation, as well as a toolkit for preparing proposals for creating new educational programs and, if necessary, new individual educational trajectories. This can only be done on the basis of a sufficiently in-depth analysis of existing EP and ITI, as well as a formal analysis of future needs and of the opportunities for the Employer to participate in training personnel by organizing all types of practices. Also important is the possibility of creating SOPs between several educational organizations.


Advanced training is also an important way to manage and improve the competencies of employees; therefore, the information system should provide tools for forming such advanced-training programs.

10 Conclusion

The problems, and the methods of solving them, considered in this paper require the creation of an effective information system that fully takes into account the needs of a large corporation interested in managing the competencies of future employees. With the active assistance of interested parties, such a system can be created within a relatively acceptable time frame, i.e. in one to two years. Demonstration pilot versions of the system can be created much faster.


Models and Methods for Determining Damage from Atmospheric Emissions of Industrial Enterprises

Elena Kushnikova1, Ekaterina Kulakova1, Sergei Alipchenko1, Alexander Rezchikov2, Vadim Kushnikov1,2, and Vladimir Ivaschenko2

1 Saratov State Technical University, 77, Politechnicheskaya str., Saratov 410054, Russia
[email protected]
2 Institute of Precision Mechanics and Control, RAS, 24, Rabochaya str., Saratov 410028, Russia

Abstract. Mathematical models and algorithms have been developed that allow analytically determining the total amount of damage using the metric functions of the state space, piecewise-defined functions and S-shaped curves. It has been established that the amount of damage from atmospheric pollutants can be approximated by the developed nonlinear evaluation functions obtained using the minimax criterion and the Savage criterion, which makes it possible to minimize the objective damage function under conditions of uncertainty of disturbing influences in such a way that the damage does not exceed a given value. The developed software will be used in the modernization of automated systems of services of the chief ecologist of industrial enterprises.

Keywords: Mathematical model · Industrial enterprises · Pollutants

1 Introduction

Intensive growth of industrial production is currently practically impossible without pollution of the surface layer of the atmosphere, including the air basins over cities, villages, recreation areas, agricultural facilities, forests, nature reserves, oceans, seas, lakes, rivers, etc. This leads to climate change on the planet and an increase in the morbidity of the population; it damages the genetic apparatus of humans, animals and plants, reduces the duration and quality of life, reduces labor productivity and the productivity of farmland, increases corrosion of the metal structures of buildings and the cost of their protection, destroys historical monuments, increases housekeeping costs, causes social unrest, and also leads to mass migration of residents from areas and cities with environmentally dirty production. Some authors estimate the magnitude of only the direct damage from atmospheric pollution by emissions from industrial enterprises, thermal power plants and vehicle exhaust gases to be hundreds of billions of dollars annually. About 2.1 million people die from the effects of air pollution every year, and another 470 thousand people die due to the destruction of the ozone layer. Particularly large material and human losses occur in developing countries, such as the states of South and East Asia, Brazil, and Russia. The theoretical substantiation of the principles of functioning of environmental monitoring and management systems was carried out in the works of such domestic and foreign scientists as [1–7]. The above considerations determine the relevance, economic feasibility and practical significance of this research, which involves the development of mathematical models, algorithms and software packages to minimize damage from atmospheric emissions of an industrial enterprise. The article presents a short description of the problems solved in minimizing various types of damage, and proposes models and algorithms for determining the scalar components of the vector optimality criterion characterizing various types of damage from atmospheric emissions of industrial enterprises.

© Springer Nature Switzerland AG 2019. O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 256–267, 2019. https://doi.org/10.1007/978-3-030-12072-6_22

2 Short Description of Problems Being Solved with the Minimization of Various Types of Damage

In accordance with the general methodology for solving the problem of minimizing the damage caused by atmospheric emissions from industrial enterprises, it is necessary to develop mathematical models and algorithms to determine the scalar components of the vector optimality criterion C_fi = C_fi(C, t), i = 1, …, 5, that characterize various types of damage from industrial pollutants. In particular, it is necessary to create mathematical software that allows determining the scalar components of the vector optimality criterion as functions of the concentration of pollutants C and the duration t of their impact on objects and territories at control points (recipient points). Usually these points are located in places of mass gathering of the population, in close proximity to administrative buildings and industrial enterprises, in residential areas, etc. The scalar components C_fi, i = 1, …, 5, respectively characterize the following types of damage:

• the damage associated with an increase in the incidence of disease in the population;
• the loss of agriculture because of the effects of atmospheric pollutants;
• the damage from changes in the natural environment;
• the degradation of the quality of life of the population as a result of the systematic impact of the atmospheric emissions of the industrial enterprise;
• the damage to enterprises arising from the regulation of air emissions and the payment of fines for violation of requirements.

The formal definition and mathematical modeling of these types of damage caused by atmospheric pollutants to the population, industry, agriculture and the environment is a difficult problem that has not yet received a rigorous mathematical solution.


Its main difficulty is related to the fact that atmospheric pollutants significantly affect the activities of complex biological, socio-economic and human-machine systems. Mathematical modeling of the functioning of these systems, as well as their management by rigorous, mathematically sound algorithms, is very difficult to implement because of such properties of the object of study as large dimensionality, diversity of species, activity and expediency of functioning, structural complexity, nonlinearity and nonstationarity, the presence of a large number of positive and negative causal relationships, emergence, etc. Because of this feature of complex systems, heuristic algorithms should be developed for the mathematical modeling and control of the above-mentioned types of damage arising from the impact of atmospheric pollutants on the population, industry, agriculture and the environment. These algorithms are based on decision-making theory, which is widely used in the mathematical substantiation of management procedures for complex, insufficiently formalized systems. Along with the formal apparatus of decision-making theory, the determination of the scalar components C_fi = C_fi(C, t), i = 1, …, 5, also uses mathematical models applied in assessing damage from emergencies, models for predicting safe levels of exposure to potentially toxic substances, and the recipient method for determining the economic damage caused by pollution. It should be noted that the scalar components C_fi = C_fi(C, t), i = 1, …, 5, are determined taking into account a number of restrictions that allow determining the concentration of pollutants at the controlled objects and territories.

3 Models and Algorithms for Determining the Scalar Components of the Vector Optimality Criterion, Characterizing Various Types of Damage

Consider the models and algorithms for determining damage used in calculating the scalar components C_fi = C_fi(C, t), i = 1, …, 5. In determining these components, which characterize various types of damage from the effects of atmospheric pollutants and from regulating the performance of the technological equipment of an industrial enterprise, the formal apparatus of utility theory is used, in particular the decision matrix. Let us explain the features of its application by the example of determining the value of the first scalar component C_f1 of the vector objective function. This type of damage consists of the damage y_1 accumulated during the time interval Δt from the impact of atmospheric pollutants, and the damage y_2 caused by an unfavorable coincidence of circumstances that increased the negative impact of atmospheric emissions on public health: C_f1 = y_1 + y_2. When determining C_f1, a decision matrix is used; such a matrix is constructed for each component of the damage. To determine the component y_2, the decision matrix ||e_ij||^{y_2} defined by Table 1 is used.


Table 1. Decision matrix ||e_ij|| for determining the damage.

        F1    F2    F3    …    Fn
  E1   e11   e12   e13   …    e1n
  E2   e21   e22   e23   …    e2n
  …     …     …     …    …     …
  Em   em1   em2   em3   …    emn

Consider the notation used in Table 1. E_i, i = 1, …, m, are the variants of decisions related to a decrease in the productivity of the industrial enterprise's equipment in order to control the concentration of pollutant emissions C at the controlled points. In particular, E_1 is the decision to reduce productivity by ΔC, and E_m is the decision to reduce productivity by mΔC. F_i, i = 1, …, n, are the kinds of external conditions that significantly affect the amount of damage in the zone of influence of the atmospheric pollutants of the industrial enterprise. These external conditions are determined for each enterprise separately at the stage of implementation of the developed mathematical and information software. They include, for example, normal conditions, calm weather, smog, fog, forest or steppe fires, abnormal heat or cold, a highly polluted atmosphere, increased population morbidity, as well as various combinations of these conditions. At the stage of adaptation of the developed software to the conditions of a particular enterprise, the external conditions must be taken into account as completely as possible, which will increase the quality of the decisions. e_ij, i = 1, …, m, j = 1, …, n, are the damage values corresponding to the different decision variants and to the external conditions listed above, which increase the negative impact of atmospheric emissions on public health. The decision matrix ||e_ij||^{y_1}, which characterizes the accumulated damage from the impact of atmospheric emissions on public health, is constructed similarly. The main difference between these matrices lies in the external conditions F_i, i = 1, …, n, affecting the value of e_ij.

So, in particular, in the second decision matrix the external conditions include: control points located in the capital or in large, medium and small cities of the country; a significant proportion of people of the older and middle age groups among the population of these cities; city architecture that is unsuccessful in terms of weathering emissions (wind shadow around large buildings); the lack of a sufficient number of specialized medical institutions; and the location of sources of atmospheric pollutant emissions in the center or within the city limits. Combinations of various external conditions are also possible. The main difficulty in forming the matrix ||e_ij|| used to solve the problem is the determination of the damage values e_ij, i = 1, …, m, j = 1, …, n. In the ongoing study it is proposed to use for this:

• expert evaluations;
• metric functions of the state space;


• piecewise-defined functions;
• S-shaped curves;
• recipient techniques used in determining the economic damage from the effects of pollutants;
• combined methods.

In the general case, each of the above methods can be used to determine the damage caused by the incidence of disease in the population in the zone of impact of the atmospheric pollutants of an industrial enterprise. However, from the standpoint of ease of use, the apparatus of expert evaluations, piecewise-defined functions and S-shaped curves is the most effective.
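The minimax and Savage criteria mentioned in the abstract can be applied directly to a damage matrix of the form shown in Table 1; the sketch below uses made-up numbers and treats e_ij as a cost to be minimized:

```python
# Minimax (Wald) and Savage criteria over a damage matrix e[i][j]:
# rows = decisions E_i, columns = external conditions F_j, entries = damage.
# The numeric values are made up for illustration.

def minimax(e):
    """Index of the decision whose worst-case damage is smallest (Wald)."""
    return min(range(len(e)), key=lambda i: max(e[i]))

def savage(e):
    """Index of the decision with the smallest worst-case regret (Savage)."""
    n = len(e[0])
    col_min = [min(row[j] for row in e) for j in range(n)]          # best per condition
    regret = [[e[i][j] - col_min[j] for j in range(n)] for i in range(len(e))]
    return min(range(len(e)), key=lambda i: max(regret[i]))

e = [
    [4, 9, 2],   # E1
    [6, 5, 7],   # E2
    [3, 8, 6],   # E3
]
print(minimax(e), savage(e))
```

The two criteria need not agree: here Wald's rule prefers the decision with the mildest worst case, while Savage's rule prefers the one that is never far from the best achievable outcome.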

3.1 The Use of Expert Estimates in Determining the Amount of Damage

The magnitude of the damage is determined from an expert survey; the methodology of conducting such surveys and the algorithms for processing the information received are discussed in detail in the specialized literature and do not require additional explanation.
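For illustration only (the paper defers to the specialized literature for survey methodology), one simple way to aggregate several experts' damage estimates is a trimmed mean that discards extreme judgments; the function name and all values below are hypothetical:

```python
def expert_damage(estimates, trim=1):
    """Aggregate expert damage estimates with a trimmed mean: drop the
    `trim` lowest and `trim` highest values, then average the rest.
    This is an illustrative aggregation rule, not the paper's method."""
    if len(estimates) <= 2 * trim:
        raise ValueError("not enough estimates to trim")
    s = sorted(estimates)
    core = s[trim:len(s) - trim] if trim else s
    return sum(core) / len(core)

# One clearly outlying estimate (100.0) is discarded before averaging.
print(expert_damage([3.0, 5.0, 4.0, 100.0, 4.5]))  # 4.5
```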

3.2 The Use of the Metric Function of the State Space to Determine the Damage

The heuristic algorithm developed in this section for determining the values e_ij, i = 1,…,n, j = 1,…,m, is based on the following hypothesis. Assume that at time t_0, at a point with coordinates x_0, y_0, z_0, the pollutant concentration C(t_0) is known, as well as the damage e(x_0, y_0, z_0, C(t_0)) caused by the morbidity of the population in the area affected by the atmospheric pollutants of the industrial enterprise. Then at time t_k, at points with coordinates x_k, y_k, z_k that satisfy the conditions

$$|x_k - x_0| \le \varepsilon_x, \quad |y_k - y_0| \le \varepsilon_y, \quad |z_k - z_0| \le \varepsilon_z, \quad |C(t_k) - C(t_0)| \le \varepsilon_C,$$

the damage is

$$e(x(t_k), y(t_k), z(t_k), t) = g \rho_S, \quad \rho_S = \left[ l_1 (x(t_0) - x(t_k))^2 + l_2 (y(t_0) - y(t_k))^2 + l_3 (z(t_0) - z(t_k))^2 + l_4 (C(t_0) - C(t_k))^2 \right]^{0.5} \quad (1)$$

Here $\sum_{i=1}^{4} l_i = 1$; l_i, i = 1,…,4, are weight coefficients characterizing the degree of influence of the deviation in the i-th coordinate on the value of the distance function ρ_S; g is the known scaling factor used to express the damage in monetary terms; ρ_S is the distance function defined in the metric state space between the points S_0, S_k ∈ {S} with coordinates (x_0, y_0, z_0, C(t_0)) and (x_k, y_k, z_k, C(t_k)), respectively; {S} is the set of admissible states of the control object; ε_x, ε_y, ε_z, ε_C define the neighborhood of the point S_0 within which dependence (1) is satisfied.

The algorithm for determining the quantities e_ij, i = 1,…,n, j = 1,…,m, using the metric function of the state space (1) consists of the following main steps:

Models and Methods for Determining Damage from Atmospheric


• choice of the decision variants E_i, i = 1,…,m, associated with a decrease in the productivity of the equipment of the industrial enterprise;
• compiling a list of the most significant external conditions F_i, i = 1,…,n, affecting the magnitude of e_ij, i = 1,…,n, j = 1,…,m;
• selection of the coordinates of the base points so that their neighborhoods ε_x, ε_y, ε_z, ε_C cover the entire controlled area;
• determination of the damage from atmospheric pollution at these base points, for example by the method of expert evaluations or other known methods;
• selection of the coordinates of the control points at which the damage from population morbidity in the zone of influence of the atmospheric pollutants of the industrial enterprise must be determined;
• determination of the damage e_ij, i = 1,…,n, j = 1,…,m, at the control points by formula (1) and formation of the decision matrix ||e_ij||^{y_2}.
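The steps above can be sketched in code as a minimal illustration of formula (1); the base points, weights l_i, scaling factor g and neighborhoods ε used below are all hypothetical:

```python
import math

def state_distance(s0, sk, l=(0.25, 0.25, 0.25, 0.25)):
    """Distance rho_S between two states (x, y, z, C) in the metric
    state space; the weight coefficients l must sum to 1."""
    assert abs(sum(l) - 1.0) < 1e-9
    return math.sqrt(sum(li * (a - b) ** 2
                         for li, (a, b) in zip(l, zip(s0, sk))))

def damage_at(control_state, base_states, g=1.0, eps=(5.0, 5.0, 2.0, 0.5)):
    """Damage e = g * rho_S relative to the first base point whose
    neighborhood (eps_x, eps_y, eps_z, eps_C) contains the control state."""
    for base in base_states:
        if all(abs(c - b) <= e for c, b, e in zip(control_state, base, eps)):
            return g * state_distance(base, control_state)
    raise ValueError("control state not covered by any base-point neighborhood")

# Hypothetical base point (x0, y0, z0, C(t0)) and a nearby control state.
base = [(0.0, 0.0, 0.0, 1.0)]
print(damage_at((3.0, 4.0, 0.0, 1.0), base))  # sqrt(0.25*9 + 0.25*16) = 2.5
```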

3.3 The Use of a Piecewise-Defined Function to Determine Damage

The algorithm is based on the following idea of how the damage changes as the concentration of pollutants at the control points (receptor points) increases. In particular, analysis of the results of mathematical modeling of air pollution in cities and of the impact of potentially toxic substances on the morbidity of the population leads to the conclusion that the damage y_2 practically does not occur as long as the concentration of harmful substances in the atmosphere does not exceed the threshold limit value (TLV), C ≤ C_TLV. If the impact of adverse external conditions increases, the concentration of atmospheric pollutants goes beyond the threshold limit value and damage occurs, whose value in the interval (C_TLV, C_1] is described by the linear function y_2 = kC + b, C ∈ (C_TLV, C_1] (k, b are the coefficients of the linear function). This stage of the development of the unfavorable situation corresponds to moderate additional pollution of the controlled objects and territories, which leads to damage directly proportional to the concentration C ∈ (C_TLV, C_1] of atmospheric pollutants at the receptor point. If the amount of atmospheric pollutants recorded at the control point continues to increase because their removal is disrupted, for example by windless weather, smog appears and intensifies, and the damage grows exponentially. This corresponds to the stage of significant deterioration of health and reduced working capacity of healthy people, hospitalization of patients with chronic diseases, an increase in emergency medical calls by elderly citizens, a surge in child morbidity, etc. These phenomena can be significantly amplified, for example, in industrial centers with a large number of operating enterprises, heavy traffic, or forest and peat fires in the vicinity of the cities.
Thus, at this stage of the development of the emergency situation, the damage is described by the dependence y_2 = a^C, C ∈ (C_1, C_2] (a is the base of the exponential function, a ≥ 1; C_2 is the upper bound of the concentration interval at this stage).


This stage of the development of an unfavorable situation can last quite a long time and significantly complicates the normal life activity of people. In the most severe cases, human casualties may occur at this stage. If the unfavorable development of the situation continues, its last stage begins, in which the growth of damage gradually slows down due to the partial adaptation of the young and healthy part of the population, the suspension of the most environmentally harmful industries, and the departure or evacuation of children, the elderly and the sick from the zone of intense exposure to atmospheric pollutants. At this stage the damage, as in the second stage, is described by a linear function, but with different coefficients: y_2 = k*C + b*, C ∈ (C_2, C_3] (k*, b* are the coefficients of the linear function; C_1, C_2, C_3 are known constants).

Taking into account the assumptions made, the damage e_ij, i = 1,…,n, j = 1,…,m, at the various stages of the development of an unfavorable situation associated with an increase in the concentration of harmful substances in the atmosphere can be determined from the following expression:

$$e_{ij} = \begin{cases} 0, & \text{if } C_{ij} \in [0,\, C_{TLV_{ij}}], \\ k_{ij} C_{ij} + b_{ij}, & \text{if } C_{ij} \in (C_{TLV_{ij}},\, C_{1ij}], \\ a_{ij}^{\,C_{ij}}, & \text{if } C_{ij} \in (C_{1ij},\, C_{2ij}], \\ k^{*}_{ij} C_{ij} + b^{*}_{ij}, & \text{if } C_{ij} \in (C_{2ij},\, C_{3ij}], \end{cases} \qquad i = 1,\dots,n, \; j = 1,\dots,m \quad (2)$$

(the interval bounds C_TLVij, C_1ij, C_2ij, C_3ij are known constants).

The algorithm for determining the values of e_ij, i = 1,…,n, j = 1,…,m, using piecewise-defined functions (2) includes: choosing the decision variants E_i, i = 1,…,m, associated with a decrease in the productivity of the equipment of the industrial enterprise; compiling a list of the most significant external conditions F_i, i = 1,…,n, affecting the value of e_ij; determining the pollutant concentration at the control points; and calculating the damage e_ij at the control points by formula (2) and forming the decision matrix ||e_ij||^{y_2}.

In conclusion, it should be noted that the use of dependence (2) to estimate the amount of damage to the population or to agriculture is appropriate only when the atmospheric pollutant is neither a poisonous nor a moderately toxic substance.
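A minimal sketch of the four-stage dependence (2), with illustrative constants chosen so that the linear and exponential pieces join continuously at C_1 and C_2 (all parameter values are hypothetical):

```python
def piecewise_damage(C, C_tlv, C1, C2, C3, k, b, a, k_star, b_star):
    """Damage as a function of pollutant concentration C, following the
    four stages of expression (2): no damage below the threshold limit
    value, linear growth, exponential growth, then slower linear growth."""
    if C <= C_tlv:
        return 0.0
    if C <= C1:
        return k * C + b
    if C <= C2:
        return a ** C
    if C <= C3:
        return k_star * C + b_star
    raise ValueError("C outside the modeled range (0, C3]")

# Illustrative constants: k*C1 + b = a**C1 = 4 and a**C2 = k_star*C2 + b_star = 16,
# so the pieces join continuously at C1 = 2 and C2 = 4.
params = dict(C_tlv=1.0, C1=2.0, C2=4.0, C3=10.0,
              k=2.0, b=0.0, a=2.0, k_star=1.0, b_star=12.0)
print(piecewise_damage(0.5, **params))  # 0.0 (below the TLV)
print(piecewise_damage(3.0, **params))  # 8.0 (exponential stage: 2**3)
```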

3.4 The Use of S-Shaped Curves to Determine Damage

Models and algorithms of this method for determining the scalar components of the optimality criterion apply mainly to the damage caused by atmospheric pollutants to population health and agriculture. The developed software is based on the assumption that the magnitude of the damage is proportional to the magnitude of the relative deviation of the functioning parameters of the biological system from the norm, i.e. it can be accurately approximated by the linear dependence e_ij = k_2 y + b_2, where

$$y = \frac{a k s}{2 \ln 2} \, \frac{d^2 f(R_0)}{dR^2} \, \frac{t - t_0}{s} \, 2^{-\frac{t - t_0}{s}} \left[ 1 + \frac{1}{3 \ln 2} \, \frac{d^3 f(R_0)/dR^3}{d^2 f(R_0)/dR^2} \, k s Q \left( 1 - 2^{-\frac{t - t_0}{s}} \right) \right] \quad (3)$$

(y is the relative deviation of the parameters of the biological system from the norm; Q is the mass of the biological system; a is the amount of the harmful substance received; s is the time of removal of the harmful substance from the body or from the biological system; t is the duration of exposure of the biological system to the toxic substance; k is the coefficient of proportionality; f(R_0) is a polynomial function whose coefficients are determined experimentally at the stage of adapting the developed software to the operating conditions of a particular control object; k_2, b_2 are known constants).

Function (3) was used in the construction of a mathematical model for predicting safe levels of exposure to potentially toxic substances. The assumption underlying the considered mathematical model is justified by the following properties of the process by which the potentially toxic substances contained in atmospheric pollutants act on the body:

• initially, the effect of the toxic substance on the body is directly proportional to its concentration (linear dependence);
• at first, small changes in concentration cause significant differences in the effect; then large changes in concentration cause only a slight increase in the effect;
• with an increase in concentration or dose, the effect at first increases slightly and then begins to grow rapidly.

Based on these properties, when determining the damage value e_ij, i = 1,…,n, j = 1,…,m, it is also proposed to use the linear dependence

$$e_{ij} = k_3 a + b_3, \qquad a = \frac{t - t_0}{s} \, 2^{-\frac{t - t_0}{s}} \left[ 1 + C \left( 1 - 2^{-\frac{t - t_0}{s}} \right) \right] \quad (4)$$

obtained from expression (3) under condition (5):

$$\left[ 1 + \frac{1}{3 \ln 2} \, \frac{d^3 f(R_0)/dR^3}{d^2 f(R_0)/dR^2} \right] k s = \mathrm{const} \quad (5)$$


In dependencies (3) and (4) above, the effect is understood as a dimensionless value characterizing the intensity of the action of the toxic substances contained in atmospheric pollutants on the biological system. Figure 1 shows a graph of the dependence a = a(C, t), constructed for different values C ∈ [−10, 10] and t ∈ [0, 7.5].

Fig. 1. The dependence of the effect of exposure on the concentration (C) of a toxic substance and the duration of its impact (t) on the biological system.
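The effect function a(C, t) of expression (4) can be sketched directly; the values of t_0, s and the constant C below are illustrative, not taken from the paper:

```python
def effect(C, t, t0=0.0, s=1.0):
    """Effect a(C, t) from expression (4): a dimensionless intensity of
    exposure with elimination half-time s and exposure start t0.
    C here is the constant arising from condition (5), not a concentration."""
    u = (t - t0) / s
    decay = 2.0 ** (-u)
    return u * decay * (1.0 + C * (1.0 - decay))

# The effect rises with exposure duration, peaks, and then declines
# as elimination dominates.
values = [effect(C=2.0, t=t) for t in (0.5, 1.0, 2.0, 4.0)]
print([round(v, 4) for v in values])  # [0.5607, 1.0, 1.25, 0.7188]
```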

In conclusion, it should be noted that the above dependence can also be used to determine the value of the other scalar criteria Cf_i = Cf_i(C, t), i = 1,…,5, of the problem being solved. The algorithm for determining the values of e_ij, i = 1,…,n, j = 1,…,m, using the S-shaped function (3) is similar to the algorithm described above for determining damage using a piecewise-defined function (2).

3.5 The Use of Recipient Methods in Determining Damage from Exposure to Pollutants

With the help of recipient methods, the actual and potential losses of the national economy associated with environmental pollution are often determined. To determine the damage e_ij, i = 1,…,n, j = 1,…,m, when forming the decision matrix ||e_ij||, the following can be used:
• the method of control areas;
• statistical methods;
• combined methods.


When the method of control areas is used, the damage e_ij, i = 1,…,n, j = 1,…,m, is determined by comparing the condition indicators of the area polluted with atmospheric pollutants and those of a control area. The main difficulty lies in selecting comparable regions in which the external conditions affecting economic performance coincide closely enough; it is assumed that the damage in the polluted area is caused only by exposure to atmospheric pollutants. When statistical methods are used, the damage e_ij is determined using regression models that relate its value to the concentration C of harmful substances and the duration of their impact on the controlled object or territory. Along with regression analysis, neural network models can be applied to this problem. Combined methods involve the joint use of various recipient methods for determining the damage e_ij, for example, the method of control areas together with neural networks. The algorithm for determining the magnitude of economic damage using recipient techniques is well known from the literature; therefore, the features of its use for determining e_ij, i = 1,…,n, j = 1,…,m, and forming the decision matrix ||e_ij|| do not require additional consideration.

Concluding this section, it can be stated that the damage e_ij, i = 1,…,n, j = 1,…,m, required for the formation of the decision matrix ||e_ij||, is best determined using the method of expert evaluations, the metric functions of the state space, piecewise-defined functions, S-shaped curves, recipient techniques and combined methods. Table 2 provides recommendations on the choice of methods for calculating the magnitude of the damage for the different scalar components of the optimized objective function Cf_i = Cf_i(C, t), i = 1,…,5.
The effectiveness of a particular method in determining these components is assessed according to the following scale:

• 1—very effective application of the method;
• 2—method often used;
• 3—possible application;
• 4—used as an auxiliary method;
• 5—rarely used.

Besides, Table 2 uses the following notation: 6—the method of control areas is used; 7—statistical methods are used; 8—combinations of the method of control areas, statistical methods and neural networks are used. The final decision on the choice of the method for calculating the damage e_ij, i = 1,…,n, j = 1,…,m, is made at the stage of adapting the developed mathematical software to the operating conditions of a particular industrial enterprise.

Table 2. Applicability of the methods for calculating damage from exposure to pollutants

Damage type | Expert evaluations | Metric functions | Piecewise-defined functions | S-shaped functions | Recipient methods | Combined methods
Damage from morbidity of the population (Cf1 criterion) | 1, 2 | 3, 4 | 3, 4 | 1, 2 | 1, 2 | 6, 7, 8
Loss of agricultural products (Cf2 criterion) | 1, 2 | 3, 4 | 3, 4 | 3, 4 | 1, 2 | 6, 7, 8
Damage caused by environmental change (Cf3 criterion) | 1, 2 | 2, 3 | 2, 3 | 3, 4 | 2, 3 | 6, 7
Damage due to deteriorating quality of life (Cf4 criterion) | 1, 2 | 3, 4 | 3, 4 | 3, 4 | 1, 2 | 6, 7
Enterprise damage (Cf5 criterion) | 2, 3 | 3, 4 | 3, 4 | 3, 4 | | 6, 7, 8

4 Conclusion

Mathematical models and algorithms have been developed that make it possible to determine analytically the total amount of damage using metric functions of the state space, piecewise-defined functions and S-shaped curves, which allows the decision matrix to be formed in real time, increasing the efficiency and accuracy of calculating the optimized objective functions. It has been established that the magnitude of damage from atmospheric pollutants can be approximated by the developed nonlinear estimation functions obtained using the minimax criterion and the Savage criterion, which makes it possible to solve the vector optimization problem under uncertainty of the disturbing influences so that the damage does not exceed a specified value. The developed software can be used to create information systems that minimize damage from the atmospheric emissions of industrial enterprises.
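The standard minimax (Wald) and Savage criteria mentioned above can be illustrated on a toy damage matrix; the data and the tie-breaking by lowest index are assumptions, not the paper's:

```python
def minimax_decision(E):
    """Minimax (Wald) criterion on a damage matrix E: rows are external
    conditions, columns are decisions; choose the decision whose
    worst-case damage over all conditions is smallest."""
    worst = [max(col) for col in zip(*E)]
    return min(range(len(worst)), key=worst.__getitem__)

def savage_decision(E):
    """Savage criterion: minimize the maximum regret, where the regret of
    a decision under a condition is its excess damage over the best
    decision under that same condition."""
    regrets = [[e - min(row) for e in row] for row in E]
    worst = [max(col) for col in zip(*regrets)]
    return min(range(len(worst)), key=worst.__getitem__)

# Hypothetical damage matrix: 3 external conditions x 3 decisions.
E = [[4.0, 6.0, 9.0],
     [7.0, 5.0, 6.0],
     [9.0, 8.0, 5.0]]
print(minimax_decision(E), savage_decision(E))  # both pick decision index 1
```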


Computer Analysis of the Equilibrium in Painting

Alexander Voloshinov and Olga Dolinina

Yuri Gagarin State Technical University of Saratov, Saratov, Russia
{alvoloshinov,odolinina09}@gmail.com

Abstract. All artists and art theorists recognize that equilibrium is the simplest and most important principle of artistic construction, by means of which the elements of a composition are organized into a single perceived and narrative whole. An important factor in ensuring the balance of a composition is the distribution of the "weights" of its elements, especially around the vertical and horizontal axes of the picture. The problem consists in constructing an adequate mathematical model that allows the balance of a pictorial composition to be estimated mathematically. In the paper, two methods of solution are proposed: the concept of the colorimetric barycenter and a probabilistic model. 1161 paintings by 16 famous artists have been analyzed. The results of the analysis show that color balance is a necessary condition for the construction of a painterly composition and is observed almost strictly by all artists.

Keywords: Equilibrium in painting · Colorimetric barycenter · Probabilistic model · Composition

1 Introduction

Communicating with artists, one often hears that the composition of a picture is "well balanced", that "there are no empty spaces" in it or, on the contrary, that the picture is "unbalanced" and there are "many voids" in it. In fact, all artists and art theorists recognize that equilibrium is the simplest and most important principle of artistic construction, by means of which the elements of a composition are organized into a single perceived and narrative whole. It is no accident that the distinguished American art theorist Rudolf Arnheim begins his famous book "Art and Visual Perception" [1] with the chapter "Balance". For the same reason, balance becomes the most important condition for the harmony of an artistic composition. It is clear that in the age of the dominance of information technology it would be desirable to make a decision about the harmony or disharmony of a work of art on the basis of precise mathematical methods.

An important factor in ensuring the balance of a composition is the distribution of the "weights" of its elements, especially around the vertical and horizontal axes of the picture. The perceived weight of a picture element within the structural organization of a composition is determined by the size of the element, its shape, its position within the composition, and its implicit "direction". Another important, if not the most important, factor in achieving a balanced composition is the distribution of color masses in the picture. In research on composition, art theorists long ago noticed that a large area of a dull unsaturated color can be balanced by a small area of a highly saturated color, i.e. saturated color "weighs" more than unsaturated. This qualitative principle of color balance in painting was formulated more precisely in 1905 by Munsell in the form of a quantitative law of the inverse ratio of areas: the areas of balanced colors are inversely proportional to the product of their brightness and saturation [2]. Subsequently Munsell's formula, which establishes a color balance between pairs of colors, was subjected to repeated experimental checks and worked quite satisfactorily [3–5].

It is clear that in a real picture the situation is much more complicated. The equilibrium of its composition depends not only on the areas and location of the main color masses, but also on the organization of the compositional center, on the plastic and rhythmic structure of the composition, on its proportional divisions, and on the color, tonal and texture relations of the individual parts among themselves and with the whole. However, the single most important characteristic in achieving equilibrium of the composition is the center of equilibrium, the center around which the composition organizes itself. The outstanding role of the center in the composition of a painting (the center of equilibrium, the geometric center, the compositional center) was considered by Arnheim in the work "The Power of the Center" [6]. So, all elements of the composition must be distributed around a certain center of equilibrium in such a way that the effect of the composition's balance is achieved. This is almost the first commandment and the main axiom for every artist, and the eye of the artist estimates the balance of an artistic composition more accurately than any scales.

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 268–288, 2019. https://doi.org/10.1007/978-3-030-12072-6_23
Nevertheless, in the age of science there is a temptation to assess the accuracy of the artist’s eye quantitatively.

2 The Problem

Thus, the problem consists in constructing an adequate mathematical model that allows the balance of a pictorial composition to be estimated with mathematical accuracy. For the quantitative analysis of the balance of a pictorial composition and the exact location of its center of equilibrium, it is natural to apply the mechanical idea of the center of mass, because the relationship between the color masses of a picture and the physical masses of mechanics seems obvious (it is no accident that artists themselves speak of "color masses"). The idea of the center of mass of a system of material points, or, in Greek, the barycenter, was developed by Archimedes in the book "On the Equilibrium of Plane Figures" [7] and allowed the great Greek scientist to find brilliant solutions to a large number of problems in mechanics and mathematics. In particular, using the mechanical idea of the lever, Archimedes found the area of a curvilinear triangle bounded by a quadratic parabola, thus anticipating the idea of the integral calculus of Newton and Leibniz by 2000 years. It must be said that Archimedes himself did not consider his "barycentric" solutions of geometric problems mathematically correct, since he understood that his definition of the barycenter of a system of material points was rather intuitive. Only in 1827 did the German geometer Möbius, in the treatise "Barycentric Calculus" [8], dispel


Archimedes' fears, giving a mathematically rigorous definition of the barycenter. In the same treatise Möbius developed projective geometry on the basis of the concept of the barycenter; thus a connection was established between mechanics, geometry and the Renaissance theory of perspective. So, if the color spots of a picture are assigned a certain mass or weight, then the center of equilibrium of the picture is the usual center of mass, or center of gravity. The possibility of applying the mechanical concept of the center of mass to the analysis of the center of equilibrium in painting was pointed out by Arnheim back in the 1960s [1, 6]. In 1971 Orlov literally applied the idea of color masses to the analysis of the harmony of painting, cutting out elements of identical colors from reproductions of Kuindzhi and Levitan and weighing them on pharmacy scales; owing to the homogeneity of the sheet of paper, the masses are proportional to the areas [9]. This technique, which looks extremely archaic in the computer era, nevertheless made it possible to establish a dependence in the distribution of the areas of colors in a picture similar to Zipf's well-known linguistic law. The idea of the colorimetric barycenter was first used by Voloshinov in 1997 in analyzing Andrei Rublev's icon "The Trinity" and Kazimir Malevich's Suprematist composition "Eight Red Rectangles" [10], which made it possible to establish internal analogies in the compositional structure of these paintings.

3 Methods of Solution

To solve this problem, two methods are proposed: (A) the concept of the colorimetric barycenter and (B) a probabilistic model.

3.1 The Concept of a Colorimetric Barycenter

The definition of the barycenter given by Archimedes more than two thousand years ago, although true in substance, can hardly be called mathematically correct today: "The center of gravity of some material body (a system of material points) is a certain point located inside it, possessing the property that, if the heavy body is suspended at this point, it remains at rest and maintains its original position" [7]. The properties of the barycenter are described by Archimedes using the following three axioms:

A1. Every system of material points has a barycenter, and only one.

A2. The barycenter of a system of two material points is located on the segment connecting these points, and its position is determined by the rule of the Archimedean lever: the product of the mass of the first point and its "arm" (the distance from the point to the barycenter) is equal to the product of the mass of the second point and its arm.

A3. The position of the barycenter of a system of material points does not change if some material points of the system are selected and their masses are transferred to the barycenter of the selected subsystem of material points.

Denoting the masses of the material points A and B by m_1(A) and m_2(B), and their barycenter by Z, the rule of the Archimedean lever is expressed by the equality

m_1(A)·AZ = m_2(B)·BZ


and is illustrated by a figure familiar from the school physics course (Fig. 1), where P_A and P_B are the gravity force vectors applied at the points m_1(A) and m_2(B), and (P_A + P_B) is the resultant of the forces P_A and P_B.

Fig. 1. The barycenter of a system of two material points, determined by the rule of the Archimedean lever.

We note that Arnheim extends the principle of the Archimedean lever even to the perception of the depth of pictorial space: "Apparently, the effect of the lever can also be applied to the third dimension, the depth. In other words, in the depicted space, the farther objects are located from the viewer, the more weight they carry" [1]. Here Arnheim is not entirely accurate: the farther the objects are located, the longer the Archimedean lever arm and the greater the product of mass and arm. As already noted, only in the 19th century did Möbius give a mathematically rigorous definition of the barycenter. According to Möbius, the barycenter of the system of material points m_1(A_1), …, m_n(A_n) is the point Z of space for which the following vector equality holds:

$$m_1 \vec{ZA}_1 + \dots + m_n \vec{ZA}_n = \vec{0},$$

where for brevity m_i = m_i(A_i), i = 1,…,n. It is easy to show (see, for example, [11]) that Möbius' definition of the barycenter fully corresponds to Archimedes' axioms A1–A3, turning them into theorems. In particular, the following theorem is valid.


Theorem. If Z is the barycenter of the system of material points m_1, …, m_n, then for any choice of the point O in space the following equality holds:

$$\vec{OZ} = \frac{m_1 \vec{OA}_1 + \dots + m_n \vec{OA}_n}{m_1 + \dots + m_n} \quad (1)$$

The corollary of this theorem is Archimedes' axiom A1. If some rectangular coordinate system Oxyz is chosen in space, in which the vectors entering Eq. (1) have the coordinates OZ(x, y, z) and OA_i(x_i, y_i, z_i), i = 1,…,n, then, writing the vector equality (1) in coordinate form, we obtain the following expressions for the coordinates of the point Z, the barycenter of the system of material points m_1, …, m_n:

$$x_0 = \frac{m_1 x_1 + \dots + m_n x_n}{m}, \quad y_0 = \frac{m_1 y_1 + \dots + m_n y_n}{m}, \quad z_0 = \frac{m_1 z_1 + \dots + m_n z_n}{m}, \quad (2)$$

where m = m_1 + … + m_n. Evidently, for flat easel painting the barycenter is defined by the two coordinates x and y.

The concept of the colorimetric barycenter is essentially the extension of Archimedes' barycentric ideas to the color space of painting. It was first formulated in [12] and then developed in [13–15]. According to this concept, the pictorial image is formally represented as a bounded region of a certain surface, called the image surface Im, with each point of which a certain color shade, expressed as an element of the color space F, is uniquely associated. The surface Im can be spherical or more complex, as in dome murals; most often, however, Im is a closed region of the Euclidean plane, as in easel painting, and it is this last case that is considered in this article. As for the space F, two cases predominate: either a black-and-white image, as, for example, in engravings or lithographs, or a color image, which is characteristic of easel painting. Thus, formally, a pictorial image is realized by some subset of the Cartesian product Im × F, which formalizes the semantic space of the image in question and is the object of perception. The concept of the colorimetric barycenter involves the construction of a mapping

$$Im \times F \to M, \quad (3)$$

which, to each point of the pictorial image, depending on its color, uniquely assigns, according to a certain rule, a certain non-negative number from the set M, which is regarded as the "colorimetric mass" of the given point. As a rule, M ⊆ [0, 1]. The mapping (3) makes it possible to obtain the structural-colorimetric spectrum of the pictorial work under consideration and then, using formulas of the form (2), to determine its colorimetric barycenter, which implies the use of a corresponding computer program [12].
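Formula (2) translates directly into code; the masses and point coordinates below are illustrative:

```python
def barycenter(masses, points):
    """Barycenter coordinates by formula (2): the mass-weighted mean of
    the point coordinates, with weights given by the (colorimetric) masses."""
    m = sum(masses)
    dims = len(points[0])
    return tuple(sum(mi * p[d] for mi, p in zip(masses, points)) / m
                 for d in range(dims))

# Archimedean lever check: masses 2 and 1 at x = 0 and x = 3 give the
# barycenter at x = 1, twice as close to the heavier point.
print(barycenter([2.0, 1.0], [(0.0, 0.0), (3.0, 0.0)]))  # (1.0, 0.0)
```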


The simplest case for analysis is black-and-white graphics, where the color space F consists of only two colors, black and white. Since light tones are perceived by a person as "light" and dark tones as "heavy" [3], we gave white the minimum weight 0 and black the maximum weight 1. Thus, for black-and-white graphics the colorimetric mass M = {0, 1}. If, in addition to black and white, the color space F contains shades of gray (as in black-and-white photography, for example), it is natural to increase the weight of the gray as it passes from white to black. Modern computers distinguish 2^8 = 256 shades of gray, so the gray tones from white to black were put in correspondence with the color masses

$$\frac{0}{255} = 0, \; \frac{1}{255}, \; \frac{2}{255}, \; \dots, \; \frac{255}{255} = 1 \quad (4)$$

As a test example of the effect of the background on the position of the barycenter, the following series of calculations was carried out. First, a black circle was placed on a white background. Since the weight of white is zero, the barycenter of the entire "picture" clearly coincides with the center of the circle (Fig. 2a). Then the background was given progressively darker shades of gray; its weight increased, and the barycenter of the picture shifted toward its geometric center (Fig. 2b). Finally, when the background became black and merged with the circle, the barycenter arrived at the geometric center of the picture (Fig. 2c).

Fig. 2. The displacement of the barycenter from the center of the figure to the geometric center of the “picture” as the background darkens from white (a) to gray (b) and black (c)

Finally, in the case of a color image, at least two approaches to finding the barycenter are possible. The color image can be converted to a black-and-white image with shades of gray, after which the just-described procedure for finding the barycenter of a black-and-white tone image is applied to it. The second method is based on a three-component color representation. The RGB color system, adopted by the International Commission on Illumination (CIE) in 1931, is the most widely used in modern computers; it represents the color space as a set of three-dimensional vectors whose coordinates measure the proportions of the standardized primary colors red (R), green (G) and blue (B) in white. Finding the barycenter of a color picture then breaks up into finding three barycenters in the red-white, green-white and blue-white decompositions of the original color space according to the described procedure. The "red", "green" and "blue" barycenters form the so-called monochromatic triangle, whose center of gravity is the colorimetric barycenter of the original color image. As was shown in [14], the colorimetric barycenters of a color image calculated from the black-and-white tone representation and from the RGB model practically coincide, so we used the simpler black-and-white tone representation. The RGB standard was repeatedly refined by the CIE, and in 1976 the so-called equal-contrast Lab color system was introduced [16], which also gives practically the same barycenter values as the RGB system. It is easy to notice that choosing the colorimetric mass m ∈ M is the most delicate task in setting up the calculation of the colorimetric barycenter. Ideally, one should take into account the psychology of color perception, which affects the "weight" of a perceived color; this was not done in our work. For example, it is known from the experiments of Pinkerton and Humphrey [17] that yellow was judged by the subjects as substantially lighter than all other colors, and red as significantly heavier than green, orange and blue. Likewise, McManus et al. [18] established in their experiments that, other things being equal, a red square is perceived as substantially heavier than squares of other monochromatic colors.
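The three-channel approach just described can be sketched as follows. Whether a channel's mass should grow with its intensity or with its distance from white is a modelling choice; for illustration this sketch uses intensity, and the function names are assumptions:

```python
def channel_barycenter(img, c):
    # Barycenter of one channel of an RGB image given as nested lists
    # img[row][col] = (r, g, b); channel intensity serves as the mass.
    total = x0 = y0 = 0.0
    for j, row in enumerate(img):
        for i, px in enumerate(row):
            w = px[c] / 255          # channel intensity as mass (an assumption)
            total += w
            x0 += i * w
            y0 += j * w
    if total == 0:
        raise ValueError("channel carries no mass")
    return x0 / total, y0 / total

def rgb_barycenter(img):
    # Centroid of the "monochromatic triangle" formed by the
    # red, green and blue barycenters, as described in the text.
    pts = [channel_barycenter(img, c) for c in range(3)]
    return (sum(p[0] for p in pts) / 3, sum(p[1] for p in pts) / 3)

# One pure-red, pure-green and pure-blue pixel at three corners:
img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (0, 0, 0)]]
```

For this tiny image the three channel barycenters sit at three corners, and their centroid lies at (1/3, 1/3).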
These and similar data have yet to be quantified; they can then be used to estimate the colorimetric masses. But for any method of choosing the colorimetric mass m ∈ M, a picture on the computer screen is a rectangular matrix of pixels containing k columns and n rows. Each pixel with coordinates (x_i, y_j) is associated with some colorimetric mass m_ij (i = 1, 2, …, k; j = 1, 2, …, n), and formulas (2) for determining the colorimetric barycenter take the form

x_0 = (1/m) Σ_{i=1}^{k} Σ_{j=1}^{n} x_i m_ij,   y_0 = (1/m) Σ_{j=1}^{n} Σ_{i=1}^{k} y_j m_ij,   (5)

where m = Σ_{i=1}^{k} Σ_{j=1}^{n} m_ij is the colorimetric mass of the whole picture.
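Formula (5) can be sketched directly over the pixel matrix. This minimal version uses 0-based pixel indices with the origin at the top-left row (the paper counts from 1 with the origin at the lower left; a shift or flip of the coordinate frame does not change the balance analysis):

```python
def barycenter(masses):
    # masses[j][i] = colorimetric mass m_ij of the pixel in column i, row j
    n, k = len(masses), len(masses[0])
    m = sum(sum(row) for row in masses)          # total colorimetric mass
    if m == 0:
        raise ValueError("picture has zero colorimetric mass")
    x0 = sum(i * masses[j][i] for j in range(n) for i in range(k)) / m
    y0 = sum(j * masses[j][i] for j in range(n) for i in range(k)) / m
    return x0, y0

# A single black pixel (mass 1) in row 0, column 2 of a white 3x4 "picture":
pic = [[0, 0, 1, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
```

The barycenter of `pic` is the lone black pixel itself, and a uniformly filled picture yields its geometric center, matching the circle-on-background experiment of Fig. 2.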

The Results of the Barycentric Model Analysis

The results of the analysis of equilibrium in painting performed on the barycentric model allow us to draw the following conclusions.


1. There is no fundamental difference in the behavior of the barycenter between black-and-white graphics and color painting. In both cases the colorimetric barycenter is located near the geometric center of the picture, which indicates a good sense of balance in the artists (see Fig. 3).

Fig. 3. The position of the barycenter near the geometric center for black-and-white graphics (a, b) and color painting (c, d) (A. Dürer. Melancholy. 1514 (a); I. Bilibin. Bookplate of A.E. Benakis. 1922 (b); I. Shishkin. In the north, wild … 1891 (c); Salvador Dali. Atomic Leda. 1949 (d))


2. Artists of different eras and nationalities, of different artistic movements and schools, balance the composition of their works quite accurately. This is true of the engravings of Albrecht Dürer (Fig. 3a), of the linocuts of Ivan Bilibin (Fig. 3b), of the realistic paintings of Ivan Shishkin (Fig. 3c), and of the surrealist paintings of Salvador Dali (Fig. 3d).

3. The colorimetric barycenter often correlates with special semantic points of the pictorial composition. Thus, in Leonardo da Vinci's fresco "The Last Supper" the barycenter is located near the geometric center of the fresco and the center of perspective of its composition (the vanishing point of parallel lines), which falls on the right eye of Christ. Avoiding excessive pedantry, one can say that in this ideal mirror-symmetric composition the geometric center, the perspective center and the barycenter coincide (Fig. 4a). Five hundred years later, Salvador Dali repeated Leonardo's mirror symmetry in his "Last Supper", but shifted the center of perspective from the eyes to the mouth of the Savior: for Dali the spoken word coming from the mouth of Christ was more important than Leonardo's vision, that window of the soul into the world. Dali's barycenter lies strictly on the vertical axis of symmetry of the picture but is shifted downward onto the chest of Christ, which gives the composition greater stability (Fig. 4b).

Fig. 4. The coincidence of the semantic and geometric centers, the perspective center and the composition barycenter (Leonardo da Vinci. The Last Supper. 1495–1497 (a); Salvador Dali. The Last Supper. 1955 (b))

4. The colorimetric barycenter is often located on the semantic lines of the picture. As we have just seen, in the mirror-symmetric compositions of "The Last Supper" by both Leonardo and Dali, the barycenter lies on the vertical axis of symmetry of the composition, ensuring mirror symmetry in the horizontal direction. In many landscapes where the horizon line is depicted, the barycenter is most often located on this line near the vertical axis of symmetry of the picture. In turn, the horizon line often coincides with, or is close to, the horizontal golden-section line of the composition (Fig. 5a and b).


Fig. 5. The position of the colorimetric barycenter on the horizon line (a, b) and the coincidence of the horizon line with the golden-section line (a) (A. Kuindzhi. Forgotten Village. 1874 (a); I. Levitan. Above Eternal Rest. 1894 (b))

5. The colorimetric barycenter of a sufficiently large ensemble of the studied pictures is located inside the so-called "golden rectangle" of the picture, formed by the golden-section lines drawn vertically and horizontally, counting from each of the four sides of the picture (Figs. 3, 4 and 5). This once again emphasizes the special form-building role of the golden section in achieving harmony in a pictorial work.

6. No significant features were found in the position of the colorimetric barycenter in figurative as opposed to non-figurative painting. This indicates that compositional balance matters equally in realistic and abstract painting. In other words, abstract painting is no less balanced than realistic painting based on natural forms. Moreover, one can even say (see Fig. 6) that abstract painting is balanced more strictly and more consistently than realistic painting. This balance of the abstract composition provides the mysterious sense of harmony that we experience when looking at the enigmatic spots of Kandinsky's paintings (Fig. 6a) or the laconic geometric figures of Malevich (Fig. 6b).

Fig. 6. The almost exact coincidence of the colorimetric barycenter and the geometric center of the painting in abstract painting (V. Kandinsky. Improvisation. 1917–18 (a); K. Malevich. Suprematist composition. 1916 (b))

3.2 The Probabilistic Model of Equilibrium in Painting

The barycentric equilibrium model in painting has one significant drawback, which is evident from Fig. 7: it does not take into account the spread of color masses relative to the center of equilibrium. As is known, for a numerical characteristic of the spread of a random variable in probability theory, there exists a special quantity called variance. Therefore, for more detailed studies of equilibrium in painting, it is advisable to move from a barycentric to a probabilistic equilibrium model.

Fig. 7. Various symmetrical “pictures”, giving the same values of the barycenter.

Consider for simplicity the case of black-and-white graphics. Any engraving represented on the computer by a matrix of black and white pixels can be regarded as a two-dimensional random variable that takes the values x_i (i = 1, 2, …, k) horizontally and y_j (j = 1, 2, …, n) vertically with probabilities p_i and p_j, denoting the probability of occurrence of black pixels in the i-th column or j-th row, respectively. As is customary in probability theory, this random variable can be specified by Tables 1 and 2:

Table 1. Values of the variable x

| x_i | x_1 | x_2 | … | x_k |
| p_i | (1/m) Σ_{j=1}^{n} m_1j | (1/m) Σ_{j=1}^{n} m_2j | … | (1/m) Σ_{j=1}^{n} m_kj |

Table 2. Values of the variable y

| y_j | y_1 | y_2 | … | y_n |
| p_j | (1/m) Σ_{i=1}^{k} m_i1 | (1/m) Σ_{i=1}^{k} m_i2 | … | (1/m) Σ_{i=1}^{k} m_in |

where m_ij is the "weight" of the pixel in the i-th column and j-th row, equal to 1 if the pixel is black and 0 if it is white; m = Σ_{i=1}^{k} Σ_{j=1}^{n} m_ij is the colorimetric mass of the engraving (in our case equal to the number of black pixels); x_i = i, y_j = j if the dimensions of the engraving are counted in pixels, or x_i = ia/k, y_j = jb/n if the size is counted in linear units (a and b being the linear dimensions of the engraving along the x and y axes). The origin is placed at the lower left corner of the engraving. The sum of the probabilities of all values of a random variable must, as is known, equal 1. Indeed,

Σ_{i=1}^{k} p_i = Σ_{i=1}^{k} (1/m) Σ_{j=1}^{n} m_ij = (1/m) Σ_{i=1}^{k} Σ_{j=1}^{n} m_ij = m/m = 1.

Similarly, Σ_{j=1}^{n} p_j = 1. Let us find the mathematical expectation of the two-dimensional random variable from the well-known formulas of probability theory:

M_x = Σ_{i=1}^{k} x_i p_i = (1/m) Σ_{i=1}^{k} Σ_{j=1}^{n} x_i m_ij,
M_y = Σ_{j=1}^{n} y_j p_j = (1/m) Σ_{j=1}^{n} Σ_{i=1}^{k} y_j m_ij.   (6)

Comparing formulas (5) and (6), it is easy to see that M_x = x_0 and M_y = y_0, i.e. if the probability of occurrence of a random variable is treated as its mass, then the mathematical expectation of the random variable coincides with the coordinates of its barycenter. Taking equalities (6) into account, the scattering of the random variable is determined from the well-known dispersion formulas

D_x = Σ_{i=1}^{k} (x_i - M_x)^2 p_i = (1/m) Σ_{i=1}^{k} Σ_{j=1}^{n} (x_i - M_x)^2 m_ij,
D_y = Σ_{j=1}^{n} (y_j - M_y)^2 p_j = (1/m) Σ_{j=1}^{n} Σ_{i=1}^{k} (y_j - M_y)^2 m_ij.   (7)

A more convenient characteristic of the scattering of a random variable is, as is known, the standard deviation, which has the same dimension as the random variable itself and is calculated by the formulas

σ_x = √D_x,   σ_y = √D_y.   (8)
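Formulas (6)-(8) can be spot-checked on synthetic "patterns" like those of Fig. 8. The sketch below uses normalized pixel-center coordinates x_i = (i + 0.5)/k, y_j = (j + 0.5)/n, an assumption of this sketch (the paper takes x_i = ia/k with a = b = 1, which behaves the same for fine grids). A uniformly filled picture indeed gives σ ≈ 0.289, the value cited for a uniform distribution of the color mass:

```python
from math import sqrt

def picture_stats(masses):
    # masses[j][i] = tone mass m_ij; coordinates normalized to the unit square
    n, k = len(masses), len(masses[0])
    m = sum(sum(row) for row in masses)
    xs = [(i + 0.5) / k for i in range(k)]   # pixel-center x coordinates
    ys = [(j + 0.5) / n for j in range(n)]   # pixel-center y coordinates
    Mx = sum(xs[i] * masses[j][i] for j in range(n) for i in range(k)) / m
    My = sum(ys[j] * masses[j][i] for j in range(n) for i in range(k)) / m
    Dx = sum((xs[i] - Mx) ** 2 * masses[j][i] for j in range(n) for i in range(k)) / m
    Dy = sum((ys[j] - My) ** 2 * masses[j][i] for j in range(n) for i in range(k)) / m
    return Mx, My, sqrt(Dx), sqrt(Dy)

# A uniformly filled 100x100 "picture": Mx = My = 0.5, sigma close to 0.289
uniform = [[1] * 100 for _ in range(100)]
```

A single massive pixel, by contrast, gives zero scatter: all the mass sits at one point, so both standard deviations vanish.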

In all formulas (6)-(8) we proceed from the linear dimensions of the random variable, assuming x_i = ia/k, y_j = jb/n. Moreover, it is convenient to carry out a comparative analysis of the numerical characteristics of pictures in normalized form, setting a = b = 1. In this case the random variable and its main characteristics take values in the following intervals: 0 ≤ x, y ≤ 1; 0 < M_x, M_y < 1; 0 ≤ σ_x, σ_y < 0.5.

Typical patterns of "paintings" and their main characteristics are shown in Fig. 8, where the first column is closer to the structure of a portrait, the second to a landscape, and the third to an abstract composition. Obviously, "patterns" symmetric about the ox axis have M_x = 0.5, and symmetry about the oy axis gives M_y = 0.5 (the first and third columns in Fig. 8). A uniform distribution of the color mass along the whole ox or oy axis gives σ_x = 0.289 or σ_y = 0.289, respectively (see Fig. 8c-f). The standard deviation is largest when the color masses are concentrated at the corners of the "picture" (σ_x = σ_y = 0.404 in Fig. 8g), which does not happen in real pictures.

Fig. 8. Typical patterns of "paintings" and their main characteristics: (a) M_x = M_y = 0.5, σ_x = σ_y = 0.058; (b) M_x = M_y = 0.5, σ_x = σ_y = 0.173; (c) M_x = M_y = 0.5, σ_x = σ_y = 0.289; (d) M_x = 0.5, M_y = 0.1, σ_x = 0.289, σ_y = 0.058; (e) M_x = 0.5, M_y = 0.2, σ_x = 0.289, σ_y = 0.115; (f) M_x = 0.5, M_y = 0.44, σ_x = 0.289, σ_y = 0.345; (g) M_x = M_y = 0.5, σ_x = σ_y = 0.404; (h) M_x = M_y = 0.507, σ_x = σ_y = 0.313; (i) M_x = M_y = 0.507, σ_x = σ_y = 0.299

Alternation of small color masses in the manner of a chessboard

with small cells (Fig. 8i) gives σ_x = σ_y = 0.299, close to the uniform distribution of the color mass along the whole axis (σ_x = σ_y = 0.289 in Fig. 8c). In the case of tone graphics and of the black-and-white tone representation of color painting, formulas (6)-(8) remain valid, with the difference that the tone masses m_ij take not only the values 0 and 1 but the whole spectrum of values (4).

Results of the Study on the Probability Model

In total, 1161 works of 16 well-known artists were analyzed (the number of works examined is indicated in parentheses): the pillars of the Renaissance Leonardo da Vinci (6) and Albrecht Dürer (175); the classics of Russian painting Karl Bryullov (16) and Ilya Repin (70); the famous Russian landscape painters Arkhip Kuindzhi (23), Ivan Shishkin (79), Isaak Levitan (22) and Vasily Polenov (24); the marine painter Ivan Aivazovsky (13) and the artist-storyteller Victor Vasnetsov (19); the pillars of modern Western art, the abstractionist Pablo Picasso (69) and the surrealist Salvador Dali (264); the pillars of the Russian avant-garde Vasily Kandinsky (30) and Kazimir Malevich (30); as well as Gustav Klimt (114) and Marc Chagall (207). The results obtained with the barycentric model can be supplemented by the following conclusions drawn from the probabilistic model.

1. In most cases the mathematical expectation (the colorimetric barycenter) lies near the geometric center of the picture, inside the rectangle formed by the golden-section lines of the canvas (Fig. 9). Consequently, in most cases the artists balance the composition of their works quite accurately. The role of the golden section in organizing the composition of a painting (and not only in painting) is widely known. For example, on the canvas of I. Shishkin examined above (Fig. 3c), the trunk of the lone pine, the main semantic dominant of the picture, lies on the vertical golden-section line, and in Dali's "Atomic Leda" (Fig. 3d) the horizontal golden-section line separates the elements of air and water depicted in the picture. Thus, the obtained results on the compositional equilibrium of color masses in painting once again confirm the special role of the golden section in achieving harmony in a pictorial work.

2. In normalized coordinates, the ensemble of more than 1000 mathematical expectations (barycenters) of the color masses of the pictures looks like a vertically elongated ellipse with its center near the geometric center of the canvas (Fig. 9). In each quadrant of Fig. 9 the picture with the greatest deviation of the barycenter from the geometric center is shown. For example, the black hat occupying the entire upper right corner of G. Klimt's "The Black Hat" pulls the barycenter to the right and upward (first quadrant), while in A. Kuindzhi's painting "Steppe. Niva" the bright sky, occupying the top two thirds of the canvas, on the contrary, lowers the barycenter considerably while leaving it horizontally symmetric (third quadrant). The vertically elongated ellipse of barycenters suggests that artists attach greater importance to balancing the composition horizontally than vertically. Given the bilateral (right-left) symmetry of the living world, this result looks quite natural and once again demonstrates the outstanding role of right-left symmetry in nature and art. The quantitative data are quite eloquent: nearly the same number of pictures (628 and 630, respectively) lies to the left and to the right of the vertical axis of


symmetry, while below the horizontal axis of symmetry of the pictures there are approximately twice as many barycenters as above it (815 and 443, respectively).

3. The average value over the ensemble of colorimetric barycenters lies on the vertical axis of symmetry but is shifted downward relative to the horizontal axis of symmetry (Fig. 9); its coordinates are (0.50, 0.48). Moreover, in abstract painting this lowering of the barycenter is manifested to a lesser degree (see Fig. 6), which is apparently explained by its fundamental detachment from reality, where every mechanical system acquires greater stability when its center of gravity is shifted downward. In figurative painting, and especially in landscape, where the upper part of the picture often depicts empty celestial space, this downward displacement of the barycenter is rather large (as in the Kuindzhi painting just examined). Perhaps the use of a more stable composition (perceived as calmer and more stable) is one of the reasons for the calming effect of landscape painting. This result fully agrees with Arnheim's statement that the lower part of a visually perceived model requires more weight in order to look stable [1].

Fig. 9. The ensemble of mathematical expectations of color masses (colorimetric barycenters) of 1161 paintings by artists of different eras and schools. In each of the four quadrants, the picture with the largest deviation of the barycenter is shown


For all artists. Number of works: 1161
Above the axis of symmetry Y = 0.5: 419; below: 742
Left of the axis of symmetry X = 0.5: 575; right: 586
Average expected values (barycenters): X = 0.50, Y = 0.48
Standard deviations: σ_x = 0.2837, σ_y = 0.2801
⊛ – the average value of the expectations

4. There are no fundamental features in the position of the color barycenter in figurative as opposed to non-figurative painting. This indicates that compositional balance matters equally in realistic and abstract painting. In other words, abstract painting is no less (and on average even slightly more) balanced than realistic painting based on natural forms.

5. The horizon line in landscape painting often coincides with the horizontal golden-section line of the canvas (Fig. 5a). In such cases the barycenter is also located on this line. But even if the horizon line does not coincide with the golden-section line, the barycenter still lies on the horizon line (Fig. 5b). This balance of the "heavenly" and the "earthly" in the landscape composition is very important in the semantic sense and opens a wide field for aesthetic and philosophical generalizations. In general, landscape painters maintain the horizontal balance quite accurately, which is explained by the importance of right-left symmetry in nature, whereas vertically they allow significant deviations (most often lowering the barycenter). Thus, the ensemble of barycenters of landscape painters looks more like a vertical line (Fig. 10).

Fig. 10. Lowered barycenter vertically and strict observance of right-left symmetry horizontally by landscape painters A. Kuindzhi and I. Levitan


Artist: A. Kuindzhi. Number of works: 23
Above the axis of symmetry Y = 0.5: 2; below: 21
Left of the axis of symmetry X = 0.5: 14; right: 9
Average expected values (barycenters): X = 0.49, Y = 0.42
Standard deviations: σ_x = 0.2919, σ_y = 0.2755
⊛ – the average value of the expectations

Artist: I. Levitan. Number of works: 22
Above the axis of symmetry Y = 0.5: 4; below: 18
Left of the axis of symmetry X = 0.5: 11; right: 11
Average expected values (barycenters): X = 0.50, Y = 0.44
Standard deviations: σ_x = 0.2913, σ_y = 0.2622
⊛ – the average value of the expectations

6. In the paintings of the abstractionists and surrealists, on the contrary, deviations of the barycenter to the left and right and up and down from the geometric center of the picture are equally probable, since, unlike realistic painting, where the force of gravity is invisibly present, in abstract and surreal painting the top and bottom are just as equivalent as the left and right. The barycenter ensemble of the abstractionists and surrealists therefore forms a circle rather than an ellipse (Fig. 11). Figure 11 shows the ensembles of barycenters of the surrealists S. Dali and M. Chagall. Quite similar is the

Fig. 11. The practical coincidence of the mean value of the barycenters with the geometric center of the painting for the surrealist artists S. Dali and M. Chagall


ensemble of 88 barycenters of the abstractionist P. Picasso, whose mean barycenter coordinates (0.50, 0.50) coincide with the coordinates of the geometric center to within two decimal places.

Artist: S. Dali. Number of works: 264
Above the axis of symmetry Y = 0.5: 91; below: 173
Left of the axis of symmetry X = 0.5: 116; right: 148
Average expected values (barycenters): X = 0.50, Y = 0.48
Standard deviations: σ_x = 0.2918, σ_y = 0.2941
⊛ – the average value of the expectations

Artist: M. Chagall. Number of works: 207
Above the axis of symmetry Y = 0.5: 74; below: 133
Left of the axis of symmetry X = 0.5: 105; right: 102
Average expected values (barycenters): X = 0.50, Y = 0.48
Standard deviations: σ_x = 0.2720, σ_y = 0.2670
⊛ – the average value of the expectations

7. As already noted, artists, especially landscape painters, prefer a lowered center of gravity (a displacement of the barycenter down from the horizontal axis of symmetry of the picture). As for deviations of the barycenter to the left and right of the vertical axis of symmetry, in the overwhelming majority of cases they are equally probable. Perhaps only Aivazovsky can be called a "left" artist, and Shishkin a "right" one. In Aivazovsky's pictures, as a rule, a bright ray of hope shines from the upper right corner onto a raging dark sea: the bright spot on the right shifts the colorimetric barycenter to the left, and the dark sea below lowers it. Thus, Aivazovsky's barycenters are grouped in the third quadrant. In Shishkin, on the contrary, the right half of the picture is usually darker than the left, so Shishkin's barycenters lie predominantly to the right of the vertical axis of symmetry of the picture (Fig. 12).

Artist: I. Aivazovsky. Number of works: 13
Above the axis of symmetry Y = 0.5: 3; below: 10
Left of the axis of symmetry X = 0.5: 9; right: 4
Average expected values (barycenters): X = 0.49, Y = 0.46
Standard deviations: σ_x = 0.2942, σ_y = 0.2903
⊛ – the average value of the expectations


Fig. 12. The displacement of the mean value of the barycenters to the left in I. Aivazovsky and to the right in I. Shishkin

Artist: I. Shishkin. Number of works: 79
Above the axis of symmetry Y = 0.5: 20; below: 59
Left of the axis of symmetry X = 0.5: 28; right: 51
Average expected values (barycenters): X = 0.50, Y = 0.46
Standard deviations: σ_x = 0.2896, σ_y = 0.2727
⊛ – the average value of the expectations

8. As is easy to see from Fig. 9, A. Kuindzhi's picture "Steppe. Niva" is the picture with the smallest value of the mathematical expectation M_y. The numerical characteristics of the picture are as follows: M_x = 0.474, M_y = 0.172, σ_x = 0.300, σ_y = 0.204. It is seen that the horizontal symmetry of the picture is fairly good, which fully corresponds to the horizontal symmetry of the steppe landscape, while the vertical asymmetry of the picture emphasizes the boundlessness of the whitish sky over the boundless steppe. As for the standard deviations, σ_x = 0.300 in Kuindzhi's picture coincides with good accuracy with σ_x = 0.289 for the "landscape" patterns (second column in Fig. 8), while σ_y = 0.204 occupies an intermediate position between σ_y = 0.115 for the "absolutely white" sky (Fig. 8e) and σ_y = 0.345 for the sky with "absolutely black" clouds (Fig. 8f). Note that if, from the thousand-odd pictures considered, we select the 10 pictures with the minimum values of M_y, all of them turn out to be works of landscape painters (4 by Kuindzhi, 3 by Levitan and 3 by Shishkin), with the mathematical expectation M_y in the interval from 0.31 to 0.34.


4 Conclusion

In conclusion, returning to the summarizing Fig. 9, it should be said that all the artists considered balance the color masses of their paintings quite well, so that the barycenters of the pictures are grouped around the geometric center of the picture. Thus Arnheim's statement about the primary role of balance in visual perception, with which we started this article, is entirely justified. It can be added that such unerring balancing of color masses by the artists is also due to the fact that even a light non-white background significantly displaces the barycenter of an absolutely non-centered figure toward the geometric center of the picture (see Fig. 2a and b). As for the dispersion of the color masses over the space of the picture, the standard deviations in both coordinates have values of the order of 0.28-0.30, which are typical of a uniform distribution of color masses over the canvas (see Fig. 8c, f and i). Thus, in addition to balancing their paintings well, artists also prefer to paint pictures "without emptiness". In the course of the investigation, the authors did not find a single pictorial work in which the color masses were concentrated only at the center or only at the corners of the picture (see Fig. 8a and g, respectively), i.e. which would have very low (of the order of 0.05) or very high (of the order of 0.4) values of the standard deviations.

References

1. Arnheim, R.: Art and Visual Perception. A Psychology of the Creative Eye. The New Version. University of California Press, Berkeley, Los Angeles, London (1974)
2. Munsell, H.A.: A Color Notation. George H. Ellis Co, Boston (1905)
3. Alexander, K., Shansky, M.: Influence of hue, value and chroma on the perceived heaviness of colors. Percept. Psychophys. 19, 72–74 (1976)
4. Linnett, C., Morris, R., Dunlap, W., Fritchie, C.: Differences in color balance depending upon mode of comparison. J. Gen. Psychol. 111, 271–283 (1991)
5. Morris, R., Dunlap, W.: Influence of chroma and hue on spatial balance of color pairs. Color Res. Appl. 13, 385–388 (1988)
6. Arnheim, R.: The Power of the Center: A Study of Composition in the Visual Arts. The New Version. University of California Press, Berkeley, Los Angeles, London (1988)
7. Veselovsky, I.N. (ed.): Arhimed. Writings, p. 640. Fizmatgiz (1962). (in Russian)
8. Möbius, A.F.: Der barycentrische Calcul. Gesammelte Werke, Bd. 1, Leipzig (1885)
9. Orlov, Y.K.: Invisible Harmony. Number & Thought, vol. 3, pp. 70–106. Znanie (1980). (in Russian)
10. Voloshinov, A.V.: Trinity by Andrew Rublev: Geometry & Philosophy. Chelovek, no. 6, pp. 52–74 (1997). (in Russian)
11. Balk, M.B., Boltyansky, V.G.: Geometry of Mass, p. 160. Nauka (1987). (in Russian)
12. Firstov, V., Voloshinov, A.: Conception of colorimetric barycenter in painting analysis. In: Proceedings of the International Congress on Aesthetics, Creativity, and Psychology of the Arts, Perm, Russia, 1–3 June 2005, pp. 258–260. Smysl, Perm (2005). (in Russian)
13. Firstov, V., Voloshinov, A.: The concept of colorimetric barycenter in group analysis of painting. In: Gottesdiener, H., Vilatte, J.-C. (eds.) Culture and Communication: Proceedings of the XIX Congress of the International Association of Empirical Aesthetics, pp. 439–443. Université d'Avignon et des Pays de Vaucluse, Avignon (2006). (in Russian)


14. Firstov, V., Voloshinov, A.: The concept of colorimetric barycenter and some structural regularities of the color space of painting. Vestnik SGTU, no. 2(13), pp. 150–160 (2006). (in Russian)
15. Firstov, V., Voloshinov, A., Locher, P.: The colorimetric barycenter of paintings. Empir. Stud. Arts 25(2), 209–217 (2007)
16. Judd, D., Wyszecki, G.: Color in Science and Technique, p. 592. Mir (1978)
17. Pinkerton, E., Humphrey, N.: The apparent heaviness of colors. Nature 250, 164–165 (1974)
18. McManus, I., Edmondson, D., Rodger, J.: Balance in pictures. Br. J. Psychol. 76, 311–324 (1985)

Using System Dynamics for the Software Quality Management of the Decision Making Software Systems Development

Olga Dolinina¹ and Vadim Kushnikov²

¹ Yury Gagarin State Technical University of Saratov, Saratov, Russia
[email protected]
² Institute of Precision Mechanics and Control of the Russian Academy of Sciences, Saratov, Russia
[email protected]

Abstract. The quality of software is considered as an integrated criterion consisting of numerical and qualitative characteristics defined by standards of various levels: ISO, national standards, company standards and project standards. It is shown that software quality planning at various stages of development can be carried out using the system dynamics method, which is proposed as a formal method of quality management. Examples are given of the equations describing the variation of software quality characteristics in response to changes in external factors such as the complexity of development, the experience of the software developers, the experience of the operating personnel, the currency exchange rate, the business reputation of the software developers, and the business reputation of the company in which the software is operated. A solution example for the system of dynamics equations used for planning software quality is described as well.

Keywords: Integrated criterion of the software quality · System dynamics · Planning of the software quality

1 Introduction

Software quality can be defined as a structured set of quantitative and qualitative characteristics described in international (ISO/IEC 9126) and national standards, for example the Russian GOST. Basic characteristics such as functionality, reliability, usability, efficiency, maintainability and portability can be decomposed into subcharacteristics. For example, reliability can be described in terms of maturity, fault tolerance and recoverability. Company standards and even project standards can include more diverse software characteristics and subcharacteristics defined by experts and developers. The key difficulty of software quality management is that the problem described is essentially informal. Nevertheless, researchers try to find formal solutions for software quality management. It was shown by Sterman [1] that system

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 289–297, 2019. https://doi.org/10.1007/978-3-030-12072-6_24


dynamic modeling [2] can be effectively used for the management of large-scale projects, including software development [3]. Barlas [4] notes that system dynamics is a "white box" approach using the model structure to produce the results. Jeng and An [5] showed how a system dynamics model can be used to enhance the effectiveness and agility of Service-Oriented Architecture software project management. In [6, 7], from the point of view of system analysis, the goal of achieving the required level of software quality is presented as a problem of optimizing a complex criterion. The purpose of this paper is to show that system dynamic modeling can be effectively used for software quality assurance.

Let M be a set of time-dependent characteristics meeting the ISO/IEC 9126 and Russian GOST standards, where: m1(t) is functionality; m2(t) is reliability; m3(t) is practicality; m4(t) is effectiveness; m5(t) is maintainability; m6(t) is mobility; m7(t) is suitability; m8(t) is correctness; m9(t) is interoperability; m10(t) is security; m11(t) is consistency of the system as a whole; m12(t) is completeness; m13(t) is resistance to errors; m14(t) is recoverability; m15(t) is availability; m16(t) is clarity; m17(t) is learnability; m18(t) is easiness of use; m19(t) is attractiveness; m20(t) is inconsistency in the performance of functions; m21(t) is temporal effectiveness; m22(t) is resource consumption; m23(t) is consistency; m24(t) is analyzability; m25(t) is variability; m26(t) is stability; m27(t) is testability; m28(t) is the presence of significant errors in knowledge bases; m29(t) is adaptability; m30(t) is ease of installation; m31(t) is coexistence; m32(t) is interchangeability; m33(t) is flaws in the design documentation. In spite of the fact that some of the variables m_i(t) (i = 1, …, 33) are non-numerical, in practice numerical scales are usually used.
For the numerical simulation of the process of change, it is assumed that the variables mi(t), i = 1, ..., 33, are measured on a quantitative scale, and their normalized values are used in the calculations, determined from the following expression: mi(t) = mi(t)/miH(t), i = 1, ..., 33, where mi(t) is the current value of the characteristic defined on the numerical scale and miH(t) is the normalization coefficient. The point of modeling the variables mi(t) in this way is that, at a fixed time moment t in [tb, te], the simulation results show how many times the value of the corresponding software quality characteristic has changed relative to the specified values. The mathematical apparatus of system dynamics was chosen as the formal tool describing the change of the considered characteristics in time. According to [2, 8], a system dynamics model consists of levels; flows that transfer the contents from one level to another; decision-making procedures that regulate the flow rate between levels; and information channels that connect the decision procedures with the levels.
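The normalization step described above can be sketched as follows (a minimal illustration; the function name and sample values are hypothetical, not from the paper):

```python
# Normalize quality characteristics m_i(t) by their normalization
# coefficients m_i^H(t), as in the expression above.
def normalize(m_values, m_norm):
    """Return the normalized characteristics m_i(t) / m_i^H(t)."""
    return [m / h for m, h in zip(m_values, m_norm)]

# Hypothetical sample: three characteristics and their coefficients.
m_values = [4.0, 2.5, 9.0]
m_norm = [8.0, 5.0, 10.0]
print(normalize(m_values, m_norm))  # each value is the ratio to its norm
```

Each normalized value then directly expresses how the characteristic relates to its reference level, which is what the simulation plots in Sect. 2 show over time.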

Using System Dynamics for the Software Quality Management


For the simulated characteristics, represented in terms of system dynamics by system levels, the following differential equations are valid:

dmi(t)/dt = mi+(t) - mi-(t),    (1)

where mi+(t) is the positive rate of the variable mi(t), including all the factors that cause the growth of this variable, and mi-(t) is the negative rate of the variable mi(t), including all the factors that cause its decrease. It is assumed that these rates split into products of functions that depend only on "factors", i.e. combinations of the main variables that are, in turn, themselves functions of the system levels:

mi±(t) = mi(y1(t), y2(t), ..., yn(t)) = f(F1(t), F2(t), ..., Fk(t)) = f1(F1(t)) f2(F2(t)) ... fk(Fk(t)),

where Fj = gj(yi1(t), ..., yiS(t)) are the factors; S = S(j) < n, and k = k(j) < 33 is the number of levels involved. A system of nonlinear differential equations is then compiled, from which the values of the simulated variables mi(t), i = 1, ..., 33, are determined for a given time interval. Thus, to solve the task, it is necessary to carry out the following studies:

• to identify the set of the most significant external factors Faki(t), i = 1, ..., v, affecting the simulated variables mi(t), i = 1, ..., 33;
• to construct a graph of cause-effect relationships Gpss reflecting the relationships between the variables mi(t), i = 1, ..., 33, and the external factors Faki(t), i = 1, ..., v;
• to develop the incidence matrix of the graph Gpss and, on its basis, to write the equations of system dynamics whose solution allows determining the values of the variables mi(t), i = 1, ..., 33, at different time intervals;
• to select the auxiliary functions f1(F1), f2(F2), ..., fk(Fk) used in calculating the simulated variables;
• to offer an algorithm that is efficient in terms of complexity for the numerical solution of the equations of system dynamics;
• to propose and justify a procedure for checking the adequacy of the constructed model against statistical data.
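The multiplicative decomposition of a rate into auxiliary factor functions can be illustrated as follows (the specific functions f_j and factor values are invented for the example; the paper leaves their selection to a later step):

```python
import math

# A rate m_i^{+/-}(t) is a product f_1(F_1) * f_2(F_2) * ... * f_k(F_k),
# where each factor F_j is itself a function of several system levels y.
def rate(factor_values, factor_functions):
    """Multiply the auxiliary functions evaluated at their factor values."""
    result = 1.0
    for f, F in zip(factor_functions, factor_values):
        result *= f(F)
    return result

# Hypothetical auxiliary functions: a saturating and a damping response.
f1 = lambda F: min(F, 1.0)          # saturating growth factor
f2 = lambda F: math.exp(-0.5 * F)   # damping factor
print(rate([0.8, 0.4], [f1, f2]))
```

The product form means that any single factor driven to zero suppresses the whole rate, which matches the intended interpretation of the auxiliary functions.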
The results of the system analysis of the complex of measures necessary to maintain the required quality level of the software of intelligent systems show that, in the general case, the following indicators should be used as perturbations (external factors) in the model:

• Fak1(t) is the experience of the software developers;
• Fak2(t) is the experience of the operating personnel;
• Fak3(t) is the complexity of software development;


• Fak4(t), Fak5(t) are the currency exchange rates (Euro and US dollar against the national currency);
• Fak6(t) is the business reputation of the software developers;
• Fak7(t) is the business reputation of the company in which the software is operated.

The variables Fak1(t), ..., Fak5(t) are initially measured on a quantitative scale; the variables Fak6(t), Fak7(t) are reduced to a quantitative scale, for example, using well-known algorithms and procedures of fuzzy set theory [9]. When performing mathematical modeling, their normalized values are used, determined from the following expression:

Faki(t) = Faki(t)/FakiH, i = 1, ..., 7,    (2)

where Faki(t) is the current value of the variable defined on the numerical scale and FakiH is the normalization coefficient. The graph of cause-effect relationships Gpss between the simulated variables mi(t), i = 1, ..., 33, and the environmental factors Faki(t), i = 1, ..., 7, is formed mainly on the event tree GTE using the well-known method of graphical construction of cause-effect complexes [10, 11], as well as well-known recommendations on the representation of cause-effect relationships in models of system dynamics [11, 12]. Because of the considerable complexity of this graph and its unsuitability for practical purposes, it is split into separate subgraphs Gmi, i = 1, ..., 33, each of which is used in the formation of the corresponding nonlinear differential equation.

The incidence matrix of the graph Gpss is a matrix A(|M + Fak| x |E|) of size 33 x 40, corresponding to the number of simulated variables mi(t), i = 1, ..., 33, and environmental factors Faki(t), i = 1, ..., v. The values of the elements of this matrix are determined as follows:

1. For all i <= 33, j <= 40, aij = +1 if an increase in the value of the variable mj(t) or environmental factor Fakj(t) leads to an increase in the variable mi(t).
2. For all i <= 33, j <= 40, aij = -1 if an increase in the value of the variable mj(t) or environmental factor Fakj(t) leads to a decrease in the variable mi(t).

In the absence of a connection between these variables and factors, aij = 0. Let us assume that all vertices of the graph Gpss are related to each other, i.e. the graph is complete. The mathematical model of system dynamics constructed for the complete graph Gpss is too cumbersome for detailed consideration in this paper. Therefore, below the model is developed for a particular case of the graph.
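The 33 x 40 incidence matrix and its sign rules can be represented directly in code; the sketch below builds an empty matrix and fills a few entries (the chosen links are consistent with the first row shown later in Table 1, but the helper function is an illustration, not the paper's implementation):

```python
N_VARS, N_FACTORS = 33, 7

# a[i][j] = +1 if growth of variable/factor j increases variable i,
# -1 if it decreases it, and 0 if there is no causal link.
A = [[0] * (N_VARS + N_FACTORS) for _ in range(N_VARS)]

def set_link(i, j, sign):
    """Record the sign of the causal link j -> i (0-based indices)."""
    assert sign in (-1, 0, 1)
    A[i][j] = sign

# Example entries for the row of m1: m3 and Fak1 increase it, m2 decreases it.
set_link(0, 2, +1)            # m3 -> m1, positive
set_link(0, 1, -1)            # m2 -> m1, negative
set_link(0, N_VARS + 0, +1)   # Fak1 -> m1, positive
print(len(A), len(A[0]))      # 33 rows, 40 columns
```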
In conclusion of this section, it should be noted that determining the characteristics mi(t), i = 1, ..., 33, that affect software reliability does not, even with the use of the complete graph, lead to a trans-computational problem [13], i.e. it does not have insurmountable computational complexity.


2 The Solution

The special literature contains descriptions of two approaches to the formation of the differential equations of system dynamics describing the variation of the simulated variables [10, 11, 14]. In the first case, a graph of cause-effect relationships is initially constructed, each vertex of which is associated with differential equations of the form

dmi(t)/dt = fi(m, Fak, t), i = 1, ..., n,    (3)

where m and Fak denote the vectors of simulated variables and external factors.

When using the second approach, the matrix A(|M + Fak| x |E|) of the cause-effect relationship graph is first formed, and for each of its rows Eqs. (3) are composed. We use the second approach here. The first row of the matrix A(|M + Fak| x |E|) of the cause-effect relationship graph Gpss, defining the first differential equation of the system being formed, is given in Table 1.

Table 1. The row of the matrix of the graph Gpss defining the cause-effect relationships affecting m1(t)

Values of the variables m1 to m10:
m1  m2  m3  m4  m5  m6  m7  m8  m9  m10
0   -1  1   1   -1  0   0   0   0   0

Values of the variables m11 to m20:
m11 m12 m13 m14 m15 m16 m17 m18 m19 m20
0   0   -1  -1  1   -1  1   -1  1   0

Values of the variables m21 to m30:
m21 m22 m23 m24 m25 m26 m27 m28 m29 m30
0   1   0   0   0   -1  -1  0   0   0

Values of the variables m31 to m33 and the factors Fak1 to Fak7:
m31 m32 m33 Fak1 Fak2 Fak3 Fak4 Fak5 Fak6 Fak7
0   0   0   1    1    -1   0    0    0    0

The values of the elements of the matrix are chosen in accordance with the opinion of software experts on the relevance of the cause-effect relationships affecting the modeled variable. These values can be changed when introducing the developed software to achieve the required level of quality of intelligent systems software at a particular enterprise.
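Splitting a matrix row into the index sets that feed the positive and negative rates of Eq. (1) can be sketched as follows (the row fragment and function name are illustrative only):

```python
def split_row(row):
    """Return (growth, decline) index lists from a row of signs."""
    growth = [j for j, a in enumerate(row) if a == 1]
    decline = [j for j, a in enumerate(row) if a == -1]
    return growth, decline

# Illustrative fragment of a row: indices 2, 3 drive growth; 1, 4 drive decline.
row = [0, -1, 1, 1, -1, 0]
print(split_row(row))  # ([2, 3], [1, 4])
```

Applied to the full 40-element row of Table 1, the two lists name exactly the variables and factors that enter B1(t) and D1(t) below.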


The subgraph Gm1 of the cause-effect relationship graph Gpss for the system variable m1(t) is shown in Fig. 1. As a result, the first differential equation for the system variable m1(t) has the form

dm1(t)/dt = (1/m1*) (B1(t) - D1(t)),

where

B1(t) = f1(m3(t)) f2(m4(t)) f3(m15(t)) f6(m19(t)) f7(m22(t)) (Fak1(t) + Fak2(t)),

D1(t) = f8(m2(t)) f9(m5(t)) f10(m13(t)) f11(m14(t)) f12(m16(t)) f13(m17(t)) f14(m18(t)) f15(m26(t)) f16(m27(t)) f17(m30(t)) Fak3(t).


Fig. 1. Subgraph Gm1 of the cause-effect relationships, which affect the value of the system variable m1(t)

The normalization is performed using the multiplier 1/m1*, where m1* is the maximum value of the level of functionality of the software on the selected numerical measurement scale.


Thus, the first differential equation of the system takes the form

dm1(t)/dt = (1/m1*) (f1(m3(t)) f2(m4(t)) f3(m15(t)) f6(m19(t)) f7(m22(t)) (Fak1(t) + Fak2(t)) - f8(m2(t)) f9(m5(t)) f10(m13(t)) f11(m14(t)) f12(m16(t)) f13(m17(t)) f14(m18(t)) f15(m26(t)) f16(m27(t)) f17(m30(t)) Fak3(t)).    (4)

Each of the 33 equations of the system is formed similarly for the other system variables mi. Solving the system of differential equations for various time intervals, taking the external parameters into consideration, allows predicting the values of the software quality characteristics.

Let us show the possibility of using the suggested method for managing the sub-characteristics of software quality by analyzing the change of such an important indicator of software development as the complexity-of-development parameter Fak3(t). Figure 2 shows the results of solving the system of differential equations with an increase in the complexity of software development by 100%; the results for an increase of 120% are shown in Fig. 3, and a comparative analysis is given in Fig. 4.
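A numerical solution of such a system can be sketched with a simple explicit Euler scheme (the two-variable rate functions and initial values below are placeholders in the spirit of Eq. (1), not the paper's calibrated 33-equation model):

```python
def euler_step(m, dt, rates):
    """One explicit Euler step for dm_i/dt = rates[i](m)."""
    return [mi + dt * r(m) for mi, r in zip(m, rates)]

# Toy two-variable system: growth driven by the other variable,
# decline proportional to the variable itself.
rates = [
    lambda m: 0.5 * m[1] - 0.2 * m[0],
    lambda m: 0.3 * m[0] - 0.1 * m[1],
]

m = [1.0, 1.0]
for _ in range(100):          # integrate over [0, 1] with dt = 0.01
    m = euler_step(m, 0.01, rates)
print(m)
```

In practice a stiff or higher-order integrator may be preferable; the point here is only the structure: each step evaluates all positive and negative rate terms at the current state and advances every level simultaneously.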

Fig. 2. Changing the software quality settings with an increase in the complexity of development by 100%


Fig. 3. Changing the software quality settings with an increase in the complexity of development by 120%

Fig. 4. Comparative analysis of the changes in the software quality settings with increases in the complexity of development by 120% and 100%

It is possible to generate management solutions during all stages of software development and to keep the complex quality criteria stable with respect to the planned values. The time interval [tb, te] can be selected according to the demands of software quality management: long-term, short-term or operational.


3 Conclusion

It has been shown that the problem of increasing software quality for decision-making systems can be formalized as a task of system dynamics. This task is solved by solving the system of differential equations for various time intervals, taking the external parameters into consideration, which allows predicting the values of the quality characteristics. The solution requires the participation of software experts who define the importance of each software quality characteristic for the overall software quality.

References

1. Sterman, J.D.: System Dynamics Modeling for Project Management (1992). http://web.mit.edu/jsterman/www/SDG/project.html
2. Forrester, J.: Counterintuitive behavior of social systems. Technol. Rev. 73(3), 52–68 (1971)
3. Abdel-Hamid, T., Madnick, S.: Software Project Management: An Integrated Approach. Prentice Hall, New York (1991)
4. Jeng, J.J., An, L.: System dynamics modeling for SOA project management. In: IEEE International Conference on Service-Oriented Computing and Applications (SOCA 2007) (2007)
5. Barlas, Y.: Formal aspects of model validity and validation in system dynamics. Syst. Dynam. Rev. 12, 183–210 (1996)
6. Rodrigues, A., Bowers, J.: The role of system dynamics in project management. Int. J. Project Manage. 14, 213–220 (1996)
7. Dolinina, O.N., Kushnikov, V.A., Pechenkin, V.V., Rezchikov, A.F.: The way of quality management of the decision making software systems development. In: Advances in Intelligent Systems and Computing: Proceedings of 7th Computer Science On-line Conference 2018, vol. 1, pp. 90–99 (2018)
8. Forrester, J.W.: Industrial Dynamics. MIT Press, Cambridge (1961)
9. Zadeh, L.A.: The concept of a linguistic variable and its application to approximate reasoning. Inf. Sci. 8, 199–249, 301–357; 9, 43–80 (1975)
10. Goodman, M.: Study Notes in System Dynamics. Pegasus Communications (1989)
11. Sterman, J.: Business Dynamics. Irwin McGraw-Hill, Boston (2000)
12. Rezchikov, A.F., Tverdokhlebov, V.A.: Causal complexes of interactions in production processes. Problemy Upravlenia, no. 3, pp. 51–59 (2007). (in Russian)
13. Rezchikov, A.F., Tverdokhlebov, V.A.: Cause-effect complexes as models of processes in complex systems. Mechatronics, Automation, Control, no. 7, pp. 1–9 (2007). (in Russian)
14. Meadows, D.H., Meadows, D.L., Randers, J., Behrens, W.W.: Limits to Growth. Universe Books, New York (1972)

Bernstein’s Theory of Levels and Its Application for Assessing the Human Operator State

Sergey Suyatinov

Bauman Moscow State Technical University, 2-Ya Baumanskaya 5, Moscow 105005, Russia
[email protected]

Abstract. Currently, the essence of intelligence and, accordingly, the mechanisms of its implementation are represented in two ways. In the first case, it is based on speculative conclusions, dressed in one or another mathematical form. In the second case, it is based on a biological model of intelligence, formed in living systems in the process of evolution and adaptation to changing external influences. The nervous system of living organisms is the biological embodiment of intelligence, and intelligence shows its greatest perfection in the organization of motion control. The article deals with the origins and basic provisions of the biological theory of levels of human movement regulation. This theory was proposed by the Russian scientist Bernstein, one of the founders of biomechanics. It is shown how new neural structures (layers) appeared in the process of evolution and the complication of the movements of living organisms. These structures, receiving information from sensory fields, formed the corresponding "semantic" behavior models and the control commands for their implementation. The features of the formation and functioning of the layers and the mechanisms of their interaction are considered. Based on an analysis of the formation of control signals in the organization of motion control, the principles of intelligent information processing are formulated. Examples of the implementation of these principles in an intelligent control system are given. The results show the relevance of Bernstein's scientific principles for the development of intelligent systems.

Keywords: Neural networks · Motion control levels · Natural intelligence · Intelligent control · Human operator

1 Approaches to Understanding the Essence of Intelligence

Currently, the leading paradigm for the development of cybernetics is the intellectualization of information processing and control systems. Intellectualization, in the most general interpretation, implies the accumulation of knowledge and operating with it to achieve certain goals [1, 2].

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 298–312, 2019. https://doi.org/10.1007/978-3-030-12072-6_25


Despite the wide range of existing theories of artificial intelligence and methodologies and methods of operating knowledge, the understanding of the essence of intelligence is not straightforward. Depending on the interpretation of the concept of "knowledge", we can distinguish two approaches to understanding the essence of intelligence.

In the first case, knowledge is understood as the truth characterizing the properties of objects in the real world. In accordance with this, it is considered that the basis of intelligence is logical reasoning based on knowledge. Historically, such an approach to the interpretation of the essence of intelligence arose first. Its implementation includes the construction of formal models and the corresponding reasoning mechanisms. Developing this approach, Newell and Simon formulated the physical symbol system hypothesis, which is one of the leading principles of the methodology of artificial intelligence [3–5]. The methodology is based on specially developed mathematical methods of operating with symbols that denote different aspects of objects.

Without considering here all the advantages and disadvantages of this methodology, we note that from the standpoint of the organization of intelligent control, the second approach to the interpretation of the concept of "knowledge" is preferable. In contrast to the first interpretation, it implies that intelligence is based not on knowledge of the truth, but on knowledge of how to behave in a changeable environment. In this case, the mechanisms of learning and adaptation inherent in living systems prevail. Obviously, this interpretation of intelligence is closest to the tasks of adaptive control. It should be noted that the mechanisms for implementing intelligence based on knowledge of possible behavioral reactions have been improved in the process of evolution of living organisms since the onset of life on Earth.
That is why biological models of artificial intelligence, even in simplified forms of implementation, show high efficiency in information processing and control tasks. A classic example is artificial neural networks. Being a very primitive analogue of biological neural networks, they have found application in various information processing and control systems. Note that the era of neural networks began with the work of W. McCulloch and W. Pitts. Based on a very simplified representation of the biological organization of the nervous system and the basic properties of a neuron, impressive results were obtained at that time and the prerequisites were created for modeling some functions of higher nervous activity. Other methods of realizing the elements of artificial intelligence, prompted by Nature itself, also proved fruitful. Therefore, at present, the emphasis in research on the mechanisms of intellectual activity is on the knowledge and modeling of the internal mechanisms of the brain. The two concepts of understanding the essence of intelligence, together with the most common methods of artificially implementing intelligence, are presented in Fig. 1. Taking into account the fruitfulness of the rather simple ideas of W. McCulloch and W. Pitts, an appeal to the theory of levels of the Russian physiologist Bernstein seems promising for understanding the mechanisms of natural intelligence and formulating the principles of artificial intelligence functioning.


Fig. 1. Essence and examples of the implementation of intelligence

2 Bernstein’s Theory of Levels and the Principles of Intellectual Control

The history of the development of cybernetics has convincingly proved that biology is a fruitful source of new systemic ideas in various fields of technology. The works of W. McCulloch, N. Wiener, P. Anokhin and many other scientists confirm this [6–10]. Until recently, the name of the Russian physiologist and practicing scientist Nikolai Alexandrovich Bernstein was absent from this series of remarkable scientists. Bernstein was the first in world science to use the study of the organization of movements as a way of understanding the patterns of the brain. Before him, human movements were studied for the purpose of describing them. Bernstein tried to understand how the nervous system exercises control and how intelligence is realized in the process of organizing movement. The fruitfulness of Bernstein's approach is due to the fact that human movements are the most rewarding processes for understanding the mechanisms of functioning of the central nervous system (CNS).

The basis of Bernstein's scientific provisions is his new understanding of the functioning of the body [11]. In contrast to the conditioned reflex theory of I. Pavlov, the organism according to Bernstein is an active, purposeful system created in the process of evolution. In other words, the process of life is not a simple "balancing with the external environment" of the "stimulus-response" type, but an active overcoming of this environment. In fact, it is about the body accumulating knowledge of how to behave in a changeable environment. Discovering and studying the mechanisms of how


knowledge is accumulated and used, Bernstein defined the fundamental biological principles of the functioning of the central nervous system in the process of organizing movement. Note that the scientist himself did not use the concept of "intellect" in his works. Analysis of his work from the standpoint of modern scientific knowledge allows us to consider Bernstein's research as a study of the mechanisms of functioning of natural intelligence in the case of the organization of movements.

Movement is one of the main vital reactions of living beings to external influence. In the process of the evolution of the animal world, new sensors and motor mechanisms appeared. The motor responses became more complex, providing better adaptation. The process of continuous motor adaptation was accompanied by anatomical complication of the central nervous structures that controlled the new types of movements. Accordingly, neural control systems were developed and became more complex. These new, more advanced feedback loops were layered on the previous neural systems, not suppressing them but interacting with them. Each new brain superstructure determined the quality of a new class of motor tasks. As a result, a neural network coordination-motor structure was formed. This structure consists of several levels of motion control (in evolutionary terms). Each level is characterized by its own set of sensitive elements (its own sensory field). The uniqueness of the sensory field of each layer determines the uniqueness of the sensory corrections of this layer. Each sensory correction corresponds to a specific character ("sense") of movement. A more detailed study of each layer revealed that for each set of sensory information there is a "semantic" correction model (control model). This model is stored in memory as a "semantic" model. The younger level, being responsible for a more complex movement, has a more complex control model.
But when implementing complex control, the leading younger level has to enlist the more ancient levels as assistants. Such levels and their sensory corrections are called background levels and corrections. Performing a greater number of auxiliary corrections, they provide smooth, fast, economical and accurate movements, since they are better suited specifically for these types of corrections.

Summing up, we note the following. Each level is focused on solving a certain class of tasks of organizing a movement, and each task corresponds to a certain type of correction. A level is the result of the whole previous stage of evolutionary development. It is characterized by a certain maximum complexity of the control problems being solved and, together with the underlying levels, implements a given "semantic" model. Each layer stores in memory a set (list) of movements and the corresponding corrections. When solving a task from a certain class, the leading level ensures switchability, maneuverability and resourcefulness, and the background levels ensure coordination, plasticity, obedience and accuracy. From the point of view of physiology, specialized "layers" in the CNS act as levels. The levels of the spinal cord and the medulla oblongata, the level of the subcortical centers and the levels of the cortex were distinguished. Each level has specific motor manifestations peculiar only to it; each level has its own class of motions. Bernstein identified five levels, denoting them by the letters A, B, C, D and E. The levels differ in the degree of detail of the representation of movement.


Level A is the lowest and genetically the most ancient; it controls muscle tone. It participates in the organization of any movement together with the other levels. This level receives signals from muscle proprioceptors, which report the degree of muscle tension, and also from the equilibrium organs.

At level B, signals reporting the mutual position and movement of body parts are processed. It is the level of analysis of the state "in body space". Its mission is to support the movements of the higher levels through the coordination of its own complex motor ensembles.

Level C receives all information about external space. Therefore, movements adapted to the spatial properties of objects (form, position, length, weight and so on) are created here.

Level D is a cortical level, which testifies to its high organization. This circumstance provides a higher level of abstraction and the realization of mechanisms of object actions. At this level, only the final object-related result is set; the way the action is performed and the set of motor operations are indifferent to this level.

Level E is the highest level of the organization. It is the level of intelligent motor acts. Movement patterns of this level are defined not by an object-related but by an abstract, verbal sense. Figure 2 shows the organization of the layers.

Fig. 2. Hierarchical organization of information processing levels

In the works of Bernstein, the role of learning and the mechanisms of forming “semantic” models, as well as the interaction of levels, are considered in detail.


The analysis of the works of Bernstein and his followers [12–15] made it possible to reveal the mechanisms of level functioning in the organization of movement and to formulate the following fundamental principles of intelligent control:

• the existence of a multi-channel information system for monitoring the external environment and the internal state of the system;
• the multi-scale principle of control: the behavior program (movement model) has a hierarchical structure containing a set of submodels of varying degrees of detail and abstraction;
• the learning ability of the system for the purpose of increasing intelligence and improving its own behavior;
• multi-model reflection of the external world: as a result of training, each level builds its own model;
• the principle of multi-channel control: the organization of complex movements involves, as a rule, several levels: the one on which the movement is built (called the leading level) and all lower levels;
• the principle of subordination and coordination of levels: only those components of the movement that are based on the leading level are represented in human consciousness;
• the principle of multi-permission and duplication: formally, one and the same movement can be based at different levels;
• the existence of mechanisms for forecasting changes in the external world and the system's own behavior, and of anticipatory control;
• continued functioning under partial failure of communication or control.

Bernstein presented a hierarchical model of intelligence from the point of view of physiology. At present, there is no doubt about the hierarchical organization of complex control systems [16, 17]. Bernstein's merit is that he not only determined and experimentally proved the hierarchical structure of the mechanism for controlling movement in biological systems, but also revealed the intellectual content of each level.
According to Bernstein, as the level increases, the generalizing abilities of the "semantic" models (their degree of abstraction) increase, and the mechanisms for working with them change accordingly. In part, Bernstein was able to uncover the meaning of these mechanisms, but the processes implementing them remain unknown. However, knowing the semantic content of the information processing mechanisms at each level, one can put them into correspondence with well-known mathematical methods. Figure 3 presents one of the possible options for the mathematical description of the levels. From the figure it is clear that the methods of both mathematical and biological models of intelligence are used. This is because some functions of biological models are known, while the mechanisms of their implementation are currently unknown. Based on the principle of functionality, they are represented by models of a symbolic mathematical system.


Fig. 3. Mathematical representation of the levels of intellectual processing of information

3 Application of Bernstein’s Theory Provisions in Control and Information Processing

The ideas of the organization of natural intelligence set out in the theory of levels are embodied in modern control systems. For example, in [18] a three-level control system for steel-smelting furnaces is presented (see Fig. 4).

Fig. 4. The general structural scheme of the intelligent control


The top-level expert system is combined with a lower-level artificial neural network. In this case, the neural network is trained by the expert system, which is a priori coarser and initially fully responsible for control. Over time, the neural network produces a more accurate formation of the control action, taking into account the actually determined operating conditions of the object. In the event of abrupt changes in the object or the operating environment, the system transfers control back to the expert system, and the training of the neural network begins anew.

In addition to building intelligent control systems, it is promising to use the provisions of the theory of levels in information processing systems, in particular in systems for identifying the functional state of a human operator. Despite the high level of automation in the control of complex technical objects, the main element of all man-machine complexes remains the human operator. The activity of a human operator is characterized by high psycho-emotional stress, which can adversely affect the quality of the tasks being solved. Therefore, constant monitoring of the physical state of people controlling complex devices, equipment and complexes is necessary. Moreover, not only the efficiency of the technical objects but often also the safety of people (for example, in transport) depends on the state of the human operator. Thus, the analysis of the functional state of a human operator is relevant for many areas of professional activity related to the maintenance of complex equipment. Currently, there are various methods for assessing the state of a human operator: by galvanic skin response, by biometric images, by the frequency-amplitude spectrum of EEG (electroencephalogram) signals, and by heart rate variability [19–22].
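The expert-system/neural-network handover described for the furnace controller above can be sketched as a simple supervisory loop (the class name, drift threshold and interfaces are assumptions for illustration, not taken from [18]):

```python
class HybridController:
    """Expert system controls until the neural network is trained;
    an abrupt environment change hands control back and restarts training."""

    def __init__(self, drift_threshold=0.5):
        self.nn_trained = False
        self.drift_threshold = drift_threshold

    def control(self, state, drift):
        # An abrupt change in the object/environment: fall back to the
        # coarser but robust expert system and retrain the network.
        if drift > self.drift_threshold:
            self.nn_trained = False
        if self.nn_trained:
            return "neural_network"
        return "expert_system"

    def finish_training(self):
        self.nn_trained = True

ctrl = HybridController()
print(ctrl.control(state=None, drift=0.1))   # expert system before training
ctrl.finish_training()
print(ctrl.control(state=None, drift=0.1))   # trained network takes over
print(ctrl.control(state=None, drift=0.9))   # abrupt change: fallback
```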
It is known that one of the most objective methods, and one convenient for automated processing, is the evaluation of the functional state of the cardiovascular system by biosignals (ECG, pulsogram, etc.) [23, 24]. In [25–28] it is shown that the use of two biosignals of the cardiac cycle increases the reliability of the estimate. In this way, the problem of identification of the cardiovascular system is solved. The result of identification is a model of the system dynamics, and the state is evaluated from the values of the model parameters. The problem is that the parameters characterizing the same human state have different values for different people. To solve this problem, a two-level system of intelligent information processing is proposed. The structure of the system for assessing the human state by biosignals is presented in Fig. 5.

Fig. 5. System for assessing the human state by biosignals

306

S. Suyatinov

On the basis of the input information, the upper level determines the "semantic" model, or psycho-type of behavior. At the lower level, a model of the behavior of the cardiovascular system is constructed on the basis of biosignals [29]. Then, in the information merge block, the current state of a person is determined from the chosen psycho-type of behavior and the parameters of the model obtained in the lower block. In this case the mechanism of merging models is the solution of a classification problem. The information field of the upper level is formed by the results of psychophysiological testing on the following indicators:

1. Intensity of the nervous processes (weak, medium, strong);
2. Equilibrium of nervous processes (excitation, inhibition, normal);
3. Severity of emotional stress (maximum, strong, moderate, absent);
4. Level of deterioration (high, moderate, absent);
5. Holmes social adaptation (low, threshold, high);
6. Toronto scale (neurosis, psychosomatic diseases, healthy);
7. Age (20–35 and 36–50).

Based on this information, the expert system forms groups of psycho-physiological types of behavior. Each group has its own set of parameters of dynamic lower-level models that determine a particular human state. The specific state is defined in the block that merges the behavior models.

Thus, we have the following state estimation algorithm. According to the results of psychophysiological testing, the upper level determines the psychological image (model) of the subject. In the memory of this level, the sets of lower-level model parameters characterizing the functional states are stored for each image. At the lower level, a model of the system dynamics is constructed from the biosignals; the parameters of the resulting model form the feature vector of the functional state. In the merge block, this vector is matched against the parameter sets of the particular psycho-physiological image.

Thus, when assessing the state of a human operator, his psycho-type is taken into account. This narrows the ranges of possible parameter variation in the gradation of functional states, and as a result the objectivity of the assessment increases. Physically, the upper level and the merge block can be implemented in the form of expert systems.
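The merge step described above amounts to a classification: the lower-level parameter vector is matched against the parameter sets stored in memory for the chosen psycho-type. A minimal sketch of this idea follows; the psycho-type names, parameter values and the Euclidean nearest-neighbour rule are illustrative assumptions, not taken from the paper:

```python
import math

# Hypothetical reference memory of the upper level: for each psycho-type,
# parameter vectors of the lower-level model labeled with a functional state.
REFERENCE = {
    "type_A": {"normal": [0.9, 1.2, 0.3], "stressed": [1.4, 0.7, 0.9]},
    "type_B": {"normal": [0.5, 1.0, 0.2], "stressed": [1.1, 0.4, 0.8]},
}

def assess_state(psycho_type, params):
    """Merge block: match the lower-level parameter vector against the
    reference sets stored for the given psycho-type (nearest neighbour
    in the Euclidean metric)."""
    refs = REFERENCE[psycho_type]
    return min(refs, key=lambda state: math.dist(params, refs[state]))

print(assess_state("type_A", [1.3, 0.8, 0.85]))  # closest to "stressed"
```

Restricting the comparison to one psycho-type's reference sets is exactly what narrows the parameter ranges and makes the state gradation more reliable.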

4 Formation of Models of the Lower Level

Because of the complex nature of the interrelations between the considered subsystems and the limited capabilities of noninvasive methods for studying them, it is proposed to construct the functional model of subsystem interaction on the basis of trained dynamic networks of original structure. Unlike the currently used simplified models of proportional interaction, dynamic networks make it possible to use large amounts of experimental data effectively when training the network and to capture complex interaction mechanisms of the subsystems in the model.

Bernstein’s Theory of Levels and Its Application for Assessing

307

To identify the "heart-vessels" biosystem, it is proposed to localize the frequency spectrum of the recorded signals in the band determined mainly by myocardial contractile activity. The "heart-vessels" model is represented as a dynamic network whose input signal is an electrocardiogram and whose output is a sphygmogram (Fig. 6).

Fig. 6. Model as a dynamic network

It should be noted that existing technologies allow several biosignals of different functional origin to be recorded simultaneously, without interfering with the work of the human operator [30]. In this complex, new PS25255 sensors are used, which do not hamper the work of the operator.

Fig. 7. Registration of the ECG and sphygmogram, and the sensors used

Figure 7 shows the device layout, allowing simultaneous recording of two biosignals (ECG and sphygmogram), using only the fingers.


We present a model of the cardiovascular system in the form of a Volterra neural network designed for simple and effective nonlinear processing of a sequence of signals delayed relative to each other. The excitation for such a network at time n is the vector x = [x_n, x_{n-1}, …, x_{n-L}]^T, where L is the number of unit delays and (L + 1) is the vector length. In this case the vector contains the values characterizing the magnitude (voltage) of the ECG signal at different instants of time. The output signal of the neural network is determined by the expression:

y(n) = \sum_{i_1=1}^{L} w_{i_1} x(n-i_1) + \sum_{i_1=1}^{L} \sum_{i_2=1}^{L} w_{i_1 i_2} x(n-i_1) x(n-i_2) + \ldots + \sum_{i_1=1}^{L} \ldots \sum_{i_k=1}^{L} w_{i_1 i_2 \ldots i_k} x(n-i_1) x(n-i_2) \ldots x(n-i_k),    (1)

where x denotes the input signal, and the weights w_i, w_{ij}, …, w_{ijk}, etc., called Volterra kernels, correspond to the reactions of increasing order. To simplify the structure of the network and reduce its computational complexity, the Volterra decomposition can be represented in the following form:

y_n = \sum_{i=0}^{L} x_{n-i} \Big[ w_i + \sum_{j=0}^{L} x_{n-j} \Big[ w_{ij} + \sum_{k=0}^{L} x_{n-k} \big( w_{ijk} + \ldots \big) \Big] \Big],    (2)

where y_n = y(n), x_{n-i} = x(n - i), etc. Each term in square brackets is a first-order linear filter, in which the corresponding weights represent the impulse response of a linear filter of the corresponding order. The number of levels at which the filters are created is equal to K, the order of the above polynomial, i.e. the degree of the Volterra series. Selection of weights is carried out sequentially, layer by layer, and these processes are independent of each other. In this case, the output signal of the neural network is the value of the sphygmogram corresponding to the current value of the ECG signal.

The problem of parametric identification of such a model is reduced to finding the weight coefficients, which are the characteristic indicators of the state of the model and reflect its features. To find the weights of the neural network, it is necessary to define the objective function and minimize its value using universal methods for optimizing neural networks. In this case, the method reduces to solving the differential equation:

\frac{dw}{dt} = -\mu \frac{dE}{dw},    (3)

where w is the vector of the network weights, E is the objective function, and dE/dw is the gradient. In this case, the objective function E is the difference between the experimentally obtained and theoretically calculated values of the pulse wave. After registration of two


biosignals, the neural network is tuned (trained) using the input data in the form of ECG signal values and the output data in the form of sphygmogram values. In the experiment, a Volterra network with parameters L = 2 and K = 3 was used to identify the unknown function connecting the ECG and sphygmogram (pulse wave) signals:

y_n = \sum_{i=0}^{2} x_{n-i} \Big[ w_i + \sum_{j=0}^{2} x_{n-j} \Big[ w_{ij} + \sum_{k=0}^{2} x_{n-k} \, w_{ijk} \Big] \Big].    (4)
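Equation (4) can be evaluated directly as three nested sums. The following plain-Python sketch implements this forward pass; the weight values are illustrative (only w_0 and w_00 are non-zero), and the symmetric storage of the kernels is not exploited:

```python
def volterra_output(x, n, w1, w2, w3, L=2):
    """Forward pass of the truncated Volterra network of Eq. (4):
    y_n = sum_i x_{n-i} * (w_i + sum_j x_{n-j} * (w_ij + sum_k x_{n-k} * w_ijk))."""
    y = 0.0
    for i in range(L + 1):
        inner_j = 0.0
        for j in range(L + 1):
            inner_k = sum(x[n - k] * w3[i][j][k] for k in range(L + 1))
            inner_j += x[n - j] * (w2[i][j] + inner_k)
        y += x[n - i] * (w1[i] + inner_j)
    return y

# Illustrative weights for L = 2: only w_0 and w_00 are non-zero.
w1 = [1.0, 0.0, 0.0]
w2 = [[0.5 if i == j == 0 else 0.0 for j in range(3)] for i in range(3)]
w3 = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]

x = [2.0, 3.0, 4.0]                       # ECG samples x_0 .. x_2
print(volterra_output(x, 2, w1, w2, w3))  # 4*(1 + 4*0.5) = 12.0
```

During training, the weights w1, w2, w3 would be adjusted by a gradient method in the spirit of Eq. (3) so that the network output matches the recorded sphygmogram values.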

Proceeding from the natural symmetry of the Volterra kernels, the weights w_{i_1 i_2 … i_k} have the same value for every permutation of the indices i_1, i_2, …, i_k. To simplify the experiment, two different age groups participated, in accordance with point 7 of the psychophysiological tests; psychological tests were not used. There were 35 people in each group: 20 healthy and 15 suffering from stage II hypertension. As a result of network training, averaged values of the weight coefficients were obtained from the subjects' biosignals. Tables 1 and 2 list the first 9 (of 14) Volterra network weights; the values for healthy people are written in the first line and for patients in the second line.

Table 1. Averaged weights for the first group of people

          w0       w1      w01     w10     w00     w11        w000     w111        w011
Healthy   −7.83    43.58   564.5   564.5   53.14   5.2·10⁻²   −10.39   −9.3·10⁻⁵   2.2·10⁻³
Patients  −10.13   60.11   41.33   648.6   648.6   0.07       −8.21    4.29        4.29

Table 2. Averaged weights for the second group of people

          w0       w1      w01      w10     w00     w11     w000     w111        w011
Healthy   −17.83   33.57   464.5    264.5   43.41   0.012   −19.91   −6.7·10⁻⁴   1.2·10⁻²
Patients  −21.13   40.13   201.55   441.6   348.7   0.16    −17.29   1.89        14.15

This example shows that the parameters (weight coefficients) of the neural network are integral indicators of the state of the cardiovascular system. As the experiments show, the character of the dependence of the pulse wave on the ECG signal is determined by the human state; consequently, the values of the weight coefficients in the Volterra-network model of the cardiovascular system change with it. This circumstance is used to assess the functional state of the cardiovascular system of the human operator. It can be seen that the values of the weight coefficients of the Volterra neural network for healthy and sick persons differ from each other to varying degrees. By separating and comparing the most significant parameters, we assign a functional state to them. Knowing which group the model parameters belong to, the functional state is estimated with greater certainty.


Thus, in general, the criterion for the functional state of the human operator is the range of values of the main parameters of the cardiovascular system model for a specific group.

5 Conclusion

Bernstein's theory of levels describes the biological mechanisms and structures of intelligent information processing in the organization of human movement. Understanding these mechanisms makes it possible to design their technical prototypes. The use of such prototypes in automatic control systems extends the functionality of the systems and improves the quality of control. The presented mechanisms of intelligent information processing are universally applicable; their effectiveness in assessing the state of a human operator has been shown.

References

1. Proletarsky, A.V., Shen, K., Neusypin, K.A.: Intelligent control systems: contemporary problems in theory and implementation in practice. In: 5th International Workshop on Computer Science and Engineering: Information Processing and Control Engineering, pp. 39–47 (2015)
2. Shen, K., Selezneva, M.S., Neusypin, K.A., Proletarsky, A.V.: Novel variable structure measurement system with intelligent components for flight vehicles. Metrol. Meas. Syst. 24(2), 347–356 (2017)
3. Gugerty, L.: Newell and Simon's logic theorist: historical background and impact on cognitive modeling. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 50(9), 880–884 (2006). https://doi.org/10.1177/154193120605000904
4. Mitchell, T.M., Michalski, R.S., Carbonell, J.G.: Machine Learning: An Artificial Intelligence Approach, vol. 1, 572 p. Elsevier (2014)
5. Luger, G.F.: Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 6th edn. Addison-Wesley Longman, London (2008)
6. Piccinini, G.: The first computational theory of mind and brain: a close look at McCulloch and Pitts's "Logical Calculus of Ideas Immanent in Nervous Activity". Synthese 141, 175–215 (2004)
7. Pospíchal, J., Kvasnička, V.: 70th anniversary of publication: Warren McCulloch & Walter Pitts—a logical calculus of the ideas immanent in nervous activity. In: Sinčák, P., Hartono, P., Virčíková, M., Vaščák, J., Jakša, R. (eds.) Emergent Trends in Robotics and Intelligent Systems. Advances in Intelligent Systems and Computing, vol. 316. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-10783-7_1
8. Egiazaryan, G.G., Sudakov, K.V.: Theory of functional systems in the scientific school of P.K. Anokhin. J. Hist. Neurosci. 16(1–2), 194–205 (2007)
9. Novikov, D.: Cybernetics: From Past to Future, p. 107. Springer, Berlin (2016)
10. Montagnini, L.: Wiener and Computers. Act 2. In: Harmonies of Disorder. Springer Biographies. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-50657-9_7


11. Bernstein, N.A.: The current problems of modern neurophysiology. In: Sporns, O., Edelman, G.M. (eds.) Bernstein's Dynamic View of the Brain: The Current Problems of Modern Neurophysiology. Motor Control 2(4), 285–299 (1998). (Original work published 1945)
12. Labra-Spröhnle, F.: Human, all too human: Euclidean and multifractal analysis in an experimental diagrammatic model of thinking. Cogn. Syst. Monogr. 29, 105–133 (2016). https://doi.org/10.1007/978-3-319-22599-9_9
13. Bongaardt, R., Meijer, O.G.: Bernstein's theory of movement behavior: historical development and contemporary relevance. J. Mot. Behav. 32(1), 57–71 (2000)
14. Selezneva, M.S., Neusypin, K.A.: Development of a measurement complex with intelligent component. Meas. Tech. 59(9), 916–922 (2016)
15. Buldakova, T.I., Suyatinov, S.I.: Registration and identification of pulse signal for medical diagnostics. In: Proceedings of SPIE—The International Society for Optical Engineering, vol. 4707, pp. 343–350 (2002). Paper 48
16. Valavanis, K.P., Saridis, G.N.: Intelligent Robotic Systems: Theory, Design and Applications. Springer, New York (2012)
17. Forrest, J., Novikov, D.: Modern trends in control theory: networks, hierarchies and interdisciplinarity. Adv. Syst. Sci. Appl. 12(3), 1–13 (2012)
18. Vasil'ev, S.N., Doganovskij, S.A., Edemskij, V.M.: To the intelligent control of electric arc furnaces. Avtomatizacija v promyshlennosti (3), 39–43 (2003). (in Russian)
19. Suyatinov, S.I., Kolentev, S.V., Bouldakova, T.I.: Criteria of identification of the medical images. In: Proceedings of SPIE—The International Society for Optical Engineering, vol. 5067, pp. 148–153 (2002)
20. Lantsberg, A.V., Treusch, K., Buldakova, T.I.: Development of the electronic service system of a municipal clinic (based on the analysis of foreign web resources). Autom. Doc. Math. Linguist. 45(2), 74–80 (2011)
21. Boucsein, W., Haarmann, A., Schaefer, F.: Combining skin conductance and heart rate variability for adaptive automation during simulated IFR flight. In: Harris, D. (ed.) EPCE 2007. LNCS, vol. 4562. Springer, Berlin (2007). https://doi.org/10.1007/978-3-540-73331-7_70
22. Kalinichenko, A.N., Yur'eva, O.D.: Assessment of human psychophysiological states based on methods for heart rate variability analysis. Pattern Recognit. Image Anal. 22(4), 570–575 (2012). https://doi.org/10.1134/S1054661812040074
23. Lia, B.N., Fu, B.B., Donga, M.C.: Development of a mobile pulse waveform analyzer for cardiovascular health monitoring. Comput. Biol. Med. 38, 438–445 (2008)
24. Geisler, F.C., Kubiak, T., Siewert, K., Weber, H.: Cardiac vagal tone is associated with social engagement and self-regulation. Biol. Psychol. 93(2), 279–286 (2013)
25. Allen, J.: Photoplethysmography and its application in clinical physiological measurement. Physiol. Meas. 28, R1–R39 (2007). https://doi.org/10.1088/0967-3334/28/3/R01
26. Gil, E., Orini, M., Bailon, R., Vergara, J., Mainardi, L., Laguna, P.: Photoplethysmography pulse rate variability as a surrogate measurement of heart rate variability during non-stationary conditions. Physiol. Meas. 31, 1271–1290 (2010)
27. Poh, M.-Z., McDuff, D.J., Picard, R.W.: Advancements in noncontact, multiparameter physiological measurements using a webcam. IEEE Trans. Biomed. Eng. 58(1), 7–11 (2011)
28. Birrenkott, D.A., Pimentel, M.A.F., Watkinson, P.J., Clifton, D.A.: A robust fusion model for estimating respiratory rate from photoplethysmography and electrocardiography. IEEE Trans. Biomed. Eng. 65(9), 2033–2041 (2018). https://doi.org/10.1109/TBME.2017.2778265


29. Buldakova, T.I., Suyatinov, S.I.: Reconstruction method for data protection in telemedicine systems. In: Progress in Biomedical Optics and Imaging—Proceedings of SPIE, vol. 9448 (2014). https://doi.org/10.1117/12.2180644. Paper 94481U
30. Suyatinov, S.I.: The use of active learning in biotechnical engineering education. In: Smirnova, E., Clark, R. (eds.) Handbook of Research on Engineering Education in a Global Context, pp. 233–242. IGI Global, Hershey, PA (2019). https://doi.org/10.4018/978-1-5225-3395-5.ch021

Semantic Marking Method for Non-text Documents of Website Based on Their Context in Hypertext Clustering

Sergey Papshev¹, Alexander Sytnik¹, Nina Melnikova¹, and Alexey Bogomolov²

¹ Yuri Gagarin State Technical University of Saratov, Politehnicheskaya Street 77, Saratov, Russia
{psv,as,MelnikovaNI}@sstu.ru
² Institute of Precision Mechanics and Control, Russian Academy of Sciences, Rabochaya Street 24, Saratov, Russia
[email protected]

Abstract. Initial indexing and structuring of information on the Internet are preconditions for solving the task of effectively finding the information that best matches a user's query. Existing approaches mainly rely on time-expensive text-based processing methods. The hyper-structured nature of the web offers an alternative, but websites also contain information in non-text formats (images, movies, PDF files, etc.). Such documents are intended first of all for perception by a person rather than for automated processing. In this article, we propose a method that addresses this problem by semantic marking of non-text documents based on their context in hypertext clustering. At the same time, we develop an approach to context-independent semantic clustering of a website using web-analytics information, which exploits the internal hypertext structure and user behavior statistics and does not require full-text content analysis. For this purpose, we represent the hypertext structure of the site as a graph and apply flow simulation algorithms to produce the web clustering. We then build a semantic description of the clusters as sets of keywords. Non-text documents are hyperlinked to some web clusters, so we take the keywords extracted for the related cluster as their semantic marking. We have checked the suggested method on the example of the site sstu.ru.

Keywords: Semantic marking · Hypertext clustering · Graph · Non-text document

1 Introduction to the Problem

According to Internet Live Stats (http://www.internetlivestats.com/), the number of websites on the Internet grows exponentially and will soon reach two billion. Given this, technologies for searching, analyzing and quickly accessing information become crucially important. The development of methods

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 313–323, 2019. https://doi.org/10.1007/978-3-030-12072-6_26


for finding information relevant to a user's query is one of the challenges for the Web now. One of the best ways to accomplish this task is a semantic description of web pages. As an answer to this challenge, the concept of the Semantic Web, based on an ontology description of the web space by means of OWL 2, was proposed. The main part of web pages are HTML documents in text format, and keywords must be extracted from web documents for the ontology description. These methods do not apply to non-text documents such as Flash, Flex and Silverlight objects, Java applets, binary documents, image files, audio and video files, PDF documents and so forth. In [1] the semantic classification of images is based on multiple-instance learning, i.e. on text instances in the image, which is not far from the same text-based methods.

How can we define the semantics of non-text web documents automatically, by program means? In this case, well-known text processing methods are not effective, and we need new approaches to the semantic identification of non-text documents. Certainly, a hyperlink to a web object is defined by a tag with an "href" attribute. In most cases this tag contains an "alt" attribute with a label of the object. Unfortunately, sometimes it is only the name of the object or even only its filename². Besides, the label (name of the object) does not always carry semantic meaning. Moreover, we frequently see an absolutely empty alt-field³. Therefore, in these cases an indirect way of semantic marking of non-text documents of a website is needed.

In [2] the authors proposed the idea of context-based searching of image semantics. This approach selects collections of thematically connected web pages and then extracts keywords for the images relevant to them. In other words, these thematic collections can be treated as webpage clusters.
A variety of web data extraction applications, for example knowledge extraction, search-results representation or recommendation algorithms, use webpage clustering [3, 4]. Most such applications use clustering methods based on text analysis, treating web pages as simple text documents and pushing aside their hypertext nature. Full-text analysis runs into the well-known limitations of text analysis techniques, such as the polysemy-capturing problem. Besides, text clustering requires preliminary document indexing, which the software performs for every document in a target collection (thus it requires full-text scanning of every web page).

In this paper we propose a solution to the semantic identification of non-text web documents that draws on web clustering technology and takes into account the hypertext nature of the web and the web-analytics information of the site. The applicability of the method was checked on the example of the educational website www.sstu.ru. First, a weighted graph model of the site is constructed on the basis of the hypertext structure of the website and its web analytics. Then the graph is clustered by two well-known algorithms, BorderFlow [5] and MCL [6], to allocate the clusters, keeping

² The hyperlink to the image "_MG_0878.JPG" on the page http://photo.sstu.ru/main.php?g2_itemId=889.
³ See, for example, https://www.ibm.com/blogs/policy/dataresponsibility-at-ibm/.


in mind that they contain semantic information. The set of keywords for the web cluster to which a non-text document is hyperlinked establishes the semantic meaning of that non-text document. To produce this semantic description, keyword selection for each web cluster is carried out.

2 Semantic Clustering of Website Pages

2.1 Semantic Clustering of Hypertext Documents

To solve the problem of website semantic clustering, several algorithms and engines have been developed [3]. The essential difference between clustering ordinary text documents and web documents lies in the hypertext nature of the web. Brin and Page [7] noted that a hypertext link from one page to another indirectly points to a semantic link between them. Thus, a meta-description of one web page (for example, in the form of keywords) gives us an opportunity to infer the semantics of the other web pages to which the first one refers.

2.2 Graph-Based Approach to Web Clustering

The rationale of our approach is that if we group web pages so that the number of hyperlinks inside a group is higher than the number of hyperlinks to web pages outside the group, the web pages in the group can be considered thematically connected. The partitioning of a website into such groups forms a set of semantic clusters of webpages. To perform this partition, graph-clustering algorithms are usually used (see, for example, [8, 9]).

The realization of semantic clustering for a website based on its hypertext structure and taking into account the hits of users was suggested in [10]. This approach assumes that the behavior of users divides website pages into groups with a higher level of page connectivity inside the group than with pages outside it. These groups of web pages form clusters of the website referring to particular topics. The approach is based on a mathematical model of the hypertext structure in the form of a weighted graph. In our investigation the weights of the edges are calculated from the web-analytics data on the numbers of user transitions along hyperlinks. We modify this model by using extended web-analytics data, and along with the BorderFlow algorithm we also produce a clustering with another well-known algorithm, MCL, to compare the clustering results.

Collecting Statistics About Behavior of Users on the Website. Apart from the content of documents, there exists metadata concerning additional information about the behavior of users on the website [11]. Applied to web documents, this may be information about the number of page views, the number of unique views, the average duration of sessions, the percentage of user exits and so on. The enhanced graph model suggested in [10] uses for the weights of edges extended web-analytics data, including the average duration of a user's viewing of a web page along with the number of hyperlink transitions. The Google Analytics service allows accumulating data about user behavior on the website.
The service works via JavaScript code embedded in each page of the site. This script monitors the actions of the user in each new session and sends all gathered information to a server, which we can access at any time.


The Google Analytics metadata which we use for our graph model contains:

• Page views: the total number of successful accesses to each page by users; repeated accesses are counted.
• Time access: the time between access to and leaving of the page by the user during the session, i.e. the total time for which the user views the page.

As the Google Analytics service does not allow unloading these metadata directly, the Query Explorer tool is used for this. It works as a query agent via the web interface and allows us to get the selected web-analytics data in a format suitable for use in Excel.

Graph Model Constructing. To explore a hypertext website, the most obvious graph model is used: nodes of the graph correspond to pages, and edges to hyperlinks between pages. Thus the graph model of the website is defined as the two-set object G = {P, L}, where the set of vertices P = {p1, …, pn} is the set of hypertext pages, and the set of edges L = {l(p1, p2) : p1, p2 ∈ P} is the set of pairs of pages p1 and p2 connected by a hyperlink.

Let us consider the behavior of users on a hypertext structure as the set of routes R = {ri} over pages P′ ⊆ P, observed within a time interval ΔT, where ri = (p′m, …, p′n), p′j ∈ P′, j ∈ N. The user routes may also be represented by graphs G′i = (V′i, E′i, W′i), where V′i ⊆ V are the visited nodes, E′i ⊆ E the fulfilled transitions between them, and W′i the weights of edges in the routing graphs (the weight of each edge corresponds to the number of transitions along the hyperlink and the time access during ΔT). In some cases, because of a large number of pages, one is forced to reduce the number of nodes and edges according to a limiting parameter ΔW for the weights in the routing graphs, e.g. dropping all routes with W′i < ΔW. Thus, in general, the hypertext structure is presented as the graph H = {P, L, W}, where P and L have the same sense as above, and W are the weights of the edges.
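The construction of the weighted graph H = {P, L, W} from the two indicators can be sketched as follows. The combination rule (a normalized transition count plus the destination page's time-access share split equally over its incoming edges) follows the description in the text, while the page names and numbers are purely illustrative:

```python
# Illustrative raw web-analytics data.
transitions = {               # (from_page, to_page) -> number of transitions
    ("/", "/news"): 120,
    ("/", "/about"): 30,
    ("/news", "/about"): 10,
}
time_access = {"/news": 300.0, "/about": 90.0}  # total viewing time per page, s

def build_weighted_graph(transitions, time_access):
    """Edge weight = normalized transition count + normalized time-access
    share of the destination page, split equally over its incoming edges."""
    in_degree = {}
    for (_, dst) in transitions:
        in_degree[dst] = in_degree.get(dst, 0) + 1
    max_t = max(transitions.values())
    max_time = max(time_access.values())
    weights = {}
    for (src, dst), count in transitions.items():
        time_share = time_access.get(dst, 0.0) / in_degree[dst]  # equal split
        weights[(src, dst)] = count / max_t + time_share / max_time
    return weights

W = build_weighted_graph(transitions, time_access)
```

The resulting dictionary W, keyed by page pairs, is the weighted edge set handed to the graph-clustering stage.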
As additional web-analytics information concerning the edges is used, our graph is a weighted graph, in which the weight of an edge is an integral number calculated from the two characteristics mentioned above. These parameters are collected over a specific time interval (in our work, a calendar year), then normalized and combined into one number. The numerical component of the time access for a node is split equally over its input edges, since web analytics does not let us know the distribution of the overall time access among incoming routes.

Clustering of the Graph. Formally, the semantic clustering problem can be stated as follows. Let P = {p1, …, pn} be the set of hypertext documents and ρsem(p, p′) a metric function defined as the distance between them. It is required to split the initial set P of documents into subsets (clusters) so that each cluster contains objects that are close, while objects from different clusters are far apart according to the metric ρsem. If Y is the set of numeric cluster marks, then as a result the number yi ∈ Y is attributed to each hypertext document pi ∈ P. The function fsem : P → Y is called the semantic clustering function, and its concrete realization depends on the chosen clustering method.


Taking into account the information on the routes R, the semantic clustering function has to consider not only data about the hypertext structure but also the routes over it: fsem(H, R). Thus, the task of semantic clustering may be reduced to the clustering of a weighted graph, and we take fsem(G, R) : V → Y, R = {G′i} as the semantic clustering function. There exist many special algorithms for the clustering of graphs [8]. We use the application Graph Clustering and Visualization Framework [12]. After loading the data, the application allows choosing one of the implemented algorithms. In our case the clustering of the sstu.ru website fragment was produced by the algorithms MCL and BorderFlow, because these methods are specialized for graph processing.
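For reference, the core of the MCL algorithm (alternating expansion and inflation of a column-stochastic matrix until flow concentrates inside clusters) can be sketched in plain Python; a real experiment would use the Graph Clustering and Visualization Framework as described, and this toy version omits pruning and convergence checks:

```python
def normalize(m):
    """Rescale each column so it sums to 1 (column-stochastic matrix)."""
    n = len(m)
    for j in range(n):
        s = sum(m[i][j] for i in range(n)) or 1.0
        for i in range(n):
            m[i][j] /= s

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mcl(adj, inflation=2.0, iters=50):
    """Minimal Markov Cluster sketch: self-loops, then repeat
    expansion (matrix squaring) and inflation (entrywise power)."""
    n = len(adj)
    m = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    normalize(m)
    for _ in range(iters):
        m = matmul(m, m)                                   # expansion
        m = [[v ** inflation for v in row] for row in m]   # inflation
        normalize(m)
    clusters = {}  # group columns by the row (attractor) holding most flow
    for j in range(n):
        attractor = max(range(n), key=lambda i: m[i][j])
        clusters.setdefault(attractor, set()).add(j)
    return sorted(sorted(c) for c in clusters.values())

# Two triangles joined by one bridge edge (2-3) separate into two clusters.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = [[0.0] * 6 for _ in range(6)]
for a, b in edges:
    adj[a][b] = adj[b][a] = 1.0
print(mcl(adj))
```

The inflation parameter controls cluster granularity: higher values sharpen the flow and produce more, smaller clusters.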

3 Semantic Identification of Non-text Website Documents

3.1 The Results of Clustering

Figure 1 describes the sequence of three steps realizing the web clustering method: extracting the web-analytics data, constructing the graph of the website, and clustering the graph.

[Figure 1 shows the pipeline: Google Analytics / Query Explorer (connection to the service, unloading of metadata) → sites.py (depth-based recursive by-pass of the website, constructing the graph structure) → prepare.py (combining the unloaded web-analytics data with the graph structure) → normalize.py (normalizing the weights of edges) → Graph Clustering and Visualization Framework (clustering of the weighted graph by the chosen algorithm) → the set of clusters.]

Fig. 1. The diagram of steps for carrying out the web clustering of the website. The input of the algorithm is the URL of the website; the output is the set of clusters. Each step is presented by a short description and a header: the name of the Python module or the software used

As a result of applying the method, a collection of clusters consisting of the graph's vertices is produced. In this way the initial set of hypertext pages is split into thematically similar clusters by the reverse mapping of the clusters, i.e. subsets of vertices of the


graph, to the set of hypertext documents. The parameters of the page distribution among the clusters are presented in Figs. 2 and 3.

Fig. 2. The histogram of page distribution on clusters according to MCL algorithm. The horizontal scale marks the numbers of clusters, the vertical scale—the number of web pages in the clusters.

Fig. 3. The histogram of page distribution on clusters according to BorderFlow algorithm. The horizontal scale marks the numbers of clusters, the vertical scale—the number of web pages in the clusters.

These figures show that both algorithms give similar clustering results and produce eight clusters. The analysis of the content intersection of the clusters is presented in Table 1. In the table, rows and columns are marked by the numbers of the calculated clusters: down, the numbers of the BorderFlow clusters; across, the numbers of the MCL clusters. The cells contain the percentage of the MCL cluster's web pages that fall into the given BorderFlow cluster.
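Such an intersection table can be computed directly from the two cluster collections. A small sketch follows; the cluster contents are illustrative, not the actual sstu.ru clusters:

```python
# Illustrative cluster collections: cluster id -> set of page identifiers.
mcl_clusters = {1: {"a", "b", "c", "d"}, 2: {"e", "f"}}
bf_clusters = {1: {"a", "b", "c"}, 2: {"d"}, 3: {"e", "f"}}

def intersection_percent(mcl_clusters, bf_clusters):
    """For each (BorderFlow, MCL) pair, the percentage of the MCL cluster's
    pages that fall into the BorderFlow cluster."""
    table = {}
    for bf_id, bf in bf_clusters.items():
        for mcl_id, mc in mcl_clusters.items():
            table[(bf_id, mcl_id)] = round(100 * len(bf & mc) / len(mc))
    return table

t = intersection_percent(mcl_clusters, bf_clusters)
print(t[(1, 1)], t[(2, 1)], t[(3, 2)])  # 75 25 100
```

Each column of the resulting table sums to 100%, since every MCL page lands in exactly one BorderFlow cluster.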


The table demonstrates that some clusters in both splittings are almost the same; however, some pages the BorderFlow algorithm distributes over several clusters, while the MCL algorithm groups them in one cluster. The features of the splitting can be considered selectively when the final semantic description of the clusters is carried out.

3.2 The Linking of Non-text Documents to Semantic Web Clusters

As mentioned above, our approach is able to form semantically close webpage clusters. Obviously, the semantics of non-text documents can be discovered by analyzing the content of the text documents in the cluster.

Table 1. The percent of the intersection of MCL and BorderFlow web clusters (rows: BorderFlow clusters; columns: MCL clusters).

BF\MCL   1     2     3     4      5     6     7     8
1        73%   0%    4%    0%     0%    0%    0%    0%
2        18%   0%    0%    0%     0%    0%    0%    0%
3        0%    91%   0%    0%     0%    0%    0%    0%
4        1%    0%    0%    0%     50%   33%   0%    0%
5        0%    0%    0%    100%   0%    0%    0%    0%
6        0%    0%    0%    0%     0%    0%    0%    100%
7        1%    0%    0%    0%     0%    0%    0%    0%
8        1%    0%    0%    0%     0%    0%    100%  0%

Our approach is also able to form webpage clusters consisting entirely of non-text content. We can discover the semantics of such pages by analyzing the incoming hits to them from outside pages or resources. In this case text-based clustering is inapplicable, while the graph-based algorithms correctly detected clusters for such pages: they found exact clusters with .jpg and .pdf documents related to the original web pages. For other formats, like .csv or .ttl, they also work well and relate such documents to clusters of well-known pages.

In other cases non-text documents cannot be involved in clusters at all during the reduction procedure under the parameter ΔW. This happens very often because non-text documents are not hypertext pages, have no hypertext links to other pages, and therefore run a high risk of being truncated. In this case, we can link the non-text document to the nearest cluster in the sense of the semantic metric fsem(H, R). Some additional aspects arise if we take into account the parameters ΔT and ΔW of the graph model construction. These parameters are individual for each website, and we need to run an adaptive cycle of experiments under expert observation to pick suitable graph model parameters.

S. Papshev et al.

3.3 Semantic Marking of Non-text Documents

Our clustering approach can now be applied to estimate the semantic sense of non-text documents. The clustering procedure divides the set of webpages into clusters. To understand what semantic senses the produced clusters carry, keywords are extracted from the pages of each cluster, and each cluster receives these keywords for semantic marking. The developed program works with a database in SQLite format, which Python supports through the SQLite3 module. Frequency analysis of each cluster's files is carried out with the TfidfVectorizer class of the Scikit-learn library. Before the frequency analysis, the text of the files is stemmed; for this purpose the SnowballStemmer tool [13] for Python, which supports Russian-language processing, is applied.

Table 2. Examples of semantic marking of non-text objects of the website sstu.ru.

Key words for cluster | URL of page in cluster | Name of object | Type | URL of object
student, club, competition, event | http://photo.sstu.ru/main.php?g2_itemId=889 | Performance of dance group “Carmen” | jpg | http://photo.sstu.ru/main.php?g2_view=core.DownloadItem&g2_itemId=891&g2_serialNumber=2
entrant, exam, statement, deadline, submission, competition, school, pupils | http://www.sstu.ru/abiturientu/v-o/2018/ | The information booklet about SSTU and deadline of document submission | pdf | http://www.sstu.ru/upload/medialibrary/ce9/buklet_sgtu_18-09-2017.pdf
(same cluster) | http://www.sstu.ru/abiturientu/ | The presentation about the university, rules and order of document submission | pdf | http://www.sstu.ru/uload/medialibrary/d37/2019-_UIT_Prezentatsiya-na-sayt.pdf
(same cluster) | http://www.sstu.ru/abiturientu/v-o/2018/ | Time list of exams | pdf | http://www.sstu.ru/upload/medialibrary/ad5/Perechen-VI-2018-_2_.pdf

For convenient statistical processing, the received keyword data is exported from the SQLite database to .csv format, and the final processing is done in Excel. During this step the most frequent and least frequent words are deleted. The resulting set of keywords for a cluster describes the semantics of the non-text files linked to it. This set may be described and categorized by an expert.
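The frequency-analysis step can be sketched without the scikit-learn dependency as plain TF-IDF scoring over a cluster's documents (the paper itself uses TfidfVectorizer with Snowball stemming; the documents below are illustrative and stemming is omitted):

```python
import math
from collections import Counter

# Sketch of the keyword-extraction step: TF-IDF over a cluster's documents,
# keeping the top-scoring terms. The cluster texts are illustrative.
def top_keywords(docs, k=3):
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    scores = Counter()
    for toks in tokenized:
        tf = Counter(toks)
        for term, count in tf.items():
            # smoothed IDF so terms present in every document still score > 0
            scores[term] += (count / len(toks)) * math.log((1 + n) / (1 + df[term]))
    return [term for term, _ in scores.most_common(k)]

docs = ["student student club", "student exam", "exam deadline"]
kw = top_keywords(docs, k=2)
```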

Semantic Marking Method for Non-text Documents of Website


Table 2 shows instances of semantic marking for non-text objects related to two clusters. Each row of the table concerns one cluster. The columns contain the set of keywords extracted for the cluster, the URL of the page in the cluster, the context name of the non-text object to which the page refers by hyperlink, the type of the non-text object, and its URL. The first row of the table concerns an image in .jpg format; the second one, information about three documents in .pdf format. As can be seen, the hyperlink to the first non-text object on the page “http://photo.sstu.ru/main.php?g2_itemId=889” is described by the tag:



Note that the “alt” parameter of the tag contains only the filename of the image and gives no semantic information. For both clusters, the object's real semantic content corresponds well to the cluster's set of keywords. Therefore the cluster's keyword set can be accepted as a semantic mark for the non-text objects linked to the cluster's pages. Algorithm A below summarizes the proposed method as a whole.

Algorithm A. Semantic Marking of Non-text Documents of the Website
Input: URL of the website.
Output: The set SD of non-text documents, accompanied by their keyword markings.
Step 1. Graph model construction. Construct the graph model of the website as the two-set object G = {P, L}, where P is the set of vertices corresponding to webpages and L is the set of edges corresponding to hyperlinks between webpages.
Step 2. Web analytics data extraction. Import web analytics data using Query Explorer.
Step 3. Weighted graph construction. Combine the web analytics data unloaded from Query Explorer with the graph G, producing the graph H = {P, L, W}, where P and L have the same sense and W is the set of edge weights calculated from the web analytics data.
Step 4. Webpage clustering. Cluster the weighted graph H with the BorderFlow algorithm, obtaining clusters C_1, C_2, ..., C_k.
Step 5. Keyword extraction. For each cluster C_i, i = 1..k, extract the keyword set E_i = {e_i1, ..., e_ij_i}.
Step 6. Semantic marking of non-text documents of clusters. In each cluster C_i, i = 1..k, select the set of non-text documents D_i = {d_i1, ..., d_il_i} and mark it semantically with the keyword set E_i, forming SD_i = {(d_ij, E_i)}, j = 1..l_i.
Step 7. Semantic marking of non-text documents of the website. Form the set SD of non-text documents of the website with their semantic marks: SD = ∪_{i=1..k} SD_i.
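Steps 6–7 of Algorithm A reduce to pairing each cluster's non-text documents with that cluster's keyword set and taking the union. A minimal sketch with illustrative data:

```python
# Sketch of steps 6-7 of Algorithm A: pair each cluster's non-text
# documents with that cluster's keyword set E_i, then take the union SD.
# The file names and keywords below are illustrative.
def semantic_marking(clusters_docs, cluster_keywords):
    """clusters_docs: cluster id -> list of non-text document URLs;
    cluster_keywords: cluster id -> keyword set E_i."""
    sd = []
    for cid, docs in clusters_docs.items():
        sd.extend((doc, cluster_keywords[cid]) for doc in docs)
    return sd

docs = {1: ["carmen.jpg"], 2: ["buklet.pdf", "exams.pdf"]}
keywords = {1: {"student", "club"}, 2: {"entrant", "exam"}}
sd = semantic_marking(docs, keywords)
```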


4 Conclusion and Discussion

The results obtained in this work expand and supplement the possibilities of information-space research on the Internet. We carried out language-independent semantic clustering of a website based on web-analytics information, which used the internal hypertext structure and user-behavior statistics and did not require full-text content analysis. As a basis, a weighted graph model of the hypertext structure was developed. The model rests on the connection between the semantic similarity of web pages and the behavior of users on the website. The weighted graph, with web-analytics data incorporated as modified flow simulations, was clustered to produce a set of webpage clusters. The paper presented the clustering results for the website sstu.ru obtained with the MCL and BorderFlow algorithms. The cluster sets produced by the two algorithms were compared, and the objective character of the obtained clusters was established: in both cases the clusters contain thematically similar web pages. The semantic similarity of these pages was established by analyzing page views and access times from outside pages or resources; the graph-based algorithms detected clusters for such pages correctly. The clusters containing .jpg and .pdf documents related to the original web pages were then extracted, and the semantic marking of the non-text documents was determined from the content of the text documents in the same cluster. For other formats, such as .csv or .ppt, the approach also works well. In some cases non-text documents were not included in any cluster after the reduction procedure governed by the parameter ΔW: such documents run a high risk of being truncated because they are not hypertext and have no hyperlinks to other pages. In this case the non-text document was related to the nearest cluster in the sense of the semantic metric f_sem(H, R).
Additional aspects arise when the parameters ΔT and ΔW of the graph-model construction are taken into account. These parameters are individual for each website, and an adaptive cycle of experiments under expert observation is needed to pick suitable values. Future investigations may compare the effectiveness of the clustering procedure under different clustering algorithms and explore the choice of clustering parameters for websites of different types.

References

1. Manjaly, A.V., Priya, B.S.: Malayalam text and non-text classification of natural scene images based on multiple instance learning. In: IEEE International Conference on Advances in Computer Applications, ICACA 2016, pp. 190–196, Coimbatore, India (2016). https://doi.org/10.1109/icaca.2016.7887949
2. Franzoni, V., Milani, A., Pallottelli, S., Leung, C.H.C., Yuanxi, L.: Context-based image semantic similarity. In: Proceedings of the 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), pp. 1280–1284. IEEE (2015)

3. Carpineto, C., Osinski, S., Romano, G., Weiss, D.: A survey of web clustering engines. ACM Comput. Surv. 41(3), Article 17 (2009)
4. Sridevi, K., Umarani, R., Selvi, V.: An analysis of web document clustering algorithms. Int. J. Sci. Technol. 1(6), 275–282 (2011)
5. Kosala, R., Blockeel, H.: Web mining research: a survey. ACM SIGKDD Explor. Newslett. 2(1), 1–15 (2000)
6. MCL—a cluster algorithm for graphs. http://micans.org/mcl/. Accessed 20 Oct 2018
7. Brin, S., Page, L.: The anatomy of a large-scale hypertextual web search engine. Comput. Netw. 30(1–7), 107–117 (1998)
8. Aggarwal, C.C., Wang, H.: A survey of clustering algorithms for graph data. Springer, Boston, pp. 275–301 (2010)
9. Ngomo, N., Schumacher, F.: BorderFlow: a local graph clustering algorithm for natural language processing. In: Computational Linguistics and Intelligent Text Processing, pp. 547–558 (2009)
10. Salin, V., Slastihina, M., Ermilov, I., Speck, R., Auer, S., Papshev, S.: Semantic clustering of website based on its hypertext structure. In: Proceedings of the 6th International Conference, KESW 2015. Communications in Computer and Information Science, pp. 182–194 (2015)
11. Kumbaroska, V., Mitrevski, P.: Behavioural-based modelling and analysis of navigation patterns across information networks. Emerg. Res. Solut. ICT 1, 60–74 (2016). https://doi.org/10.20544/ERSICT.02.16.P06
12. Schaeffer, S.E.: Graph clustering. Comput. Sci. Rev. 1(1), 27–64 (2007). https://doi.org/10.1016/j.cosrev.2007.05.001
13. Scikit-learn: machine learning in Python. http://scikit-learn.org/stable/modules/clustering.html. Accessed 18 Apr 2018

Optimal Control Problems of Compressor Facilities Processes at Industrial Enterprise

Ekaterina Kulakova(1), Sergei Alipchenko(1), Alexander Rezchikov(2), Vadim Kushnikov(1,2), Elena Kushnikova(1,2), and Olga Glukhova(1)

(1) Department of Applied Information Technologies, Yuri Gagarin State Technical University of Saratov, Saratov, Russia
[email protected]
(2) Institute of Precision Mechanics and Control, Russian Academy of Sciences, Saratov, Russia

Abstract. A complex of problems of optimal control of the compressor-facilities equipment at an industrial enterprise is defined, whose solution allows a considerable economic effect to be gained. Mathematical models, a method and a real-time decision algorithm are developed. The decision procedure is illustrated with an example for the standard compressor facilities of a machine-building enterprise.

Keywords: Compressor facilities · Operational control · Mathematical model

1 Introduction

Economic development under market relations is characterized by ever-increasing prices for raw materials and energy. Under these conditions, the profit of enterprises with power-intensive flow processes directly depends on the correct organization of intra-factory accounting and control of energy-resource consumption, the accuracy of settlements with third-party suppliers of energy carriers, and the efficiency and quality of the decisions made in controlling the complex and power-intensive equipment of compressor facilities. Modern industrial enterprises are large consumers of compressed air, used for technological purposes, for powering various stations, and for other needs. According to experts [1–3], its production consumes from 20 to 60% of the total electric-power expenditure of an enterprise. In industry, low-pressure compressed air (up to 0.9 MPa), used mainly as an energy carrier, has gained the widest distribution. The total power consumed by the compressor-facilities equipment of a large industrial enterprise lies in the range of 10–15 MW. The technological features of compressed-air production and distribution include the considerable complexity of the equipment, the variety of its admissible operating modes, the rapid change of the information characterizing the process, and the considerable influence of compressed-air parameters on the operating modes of the consumers of pneumatic energy.

© Springer Nature Switzerland AG 2019. O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 324–337, 2019. https://doi.org/10.1007/978-3-030-12072-6_27


A significant amount of compressor-facilities automation equipment [4–6] has now been implemented, and systems for dispatching management of enterprise air supply have been developed and approved. Further increase in the effectiveness of managing this object is connected with improving the quality of the decisions made. The vast majority of process-automation devices are intended, first of all, to keep the complex equipment in one of its admissible modes. The choice of optimum values of the mode's control parameters is left to the service personnel of the compressor and master stations and is carried out mostly on the basis of the intuition and experience of the persons making decisions. In this regard, one way to substantially increase the quality of the decisions made, leading to the identification of essentially new sources of economy, is to solve the optimal control problem of compressor facilities as part of the automated power-supply control systems of the industrial enterprise [6, 7].

2 Problem Definition

According to the results of the research in [5–7], the priority tasks for optimal on-line control realized on the basis of a domestic hardware complex are the management of flow distribution in pneumatic networks, load dispatch between compressors, and the choice of a rational compressor cooling mode [4, 6]. Analysis of the decision-making procedures for these tasks revealed an essential interrelation of the controlled parameters, which makes it necessary to unite them in a single complex. The efficiency criterion of the task complex was defined from the following reasoning. The main goal of operating the control object is to provide the prescribed air-supply modes of the enterprise's pneumatic equipment at the lowest admissible cost of compressed-air production. In view of this, the efficiency criterion should include two components: J1, characterizing the damage to consumers from violation of the prescribed air-supply mode, and J2, characterizing the compressed-air production costs. The damage to consumers results from the key parameters of the compressed air (pressure, relative humidity and dust content) leaving their established limits. The end air coolers, oil filters and other devices regulating the latter two parameters, as a rule, do not need automated management. Therefore the first criterion is the functional characterizing the deviation of compressed-air pressure at the points of consumption from the preset values. Analysis of the compressed-air cost structure at operating enterprises has shown that under automated management its variable part, consisting of the costs of electricity, cooling water, fuels and lubricants, etc., decreases, and the largest controllable items are the costs of electricity and cooling water. In this regard, the second component of the optimized criterion is the functional characterizing the total power demand of the compressors, pumps and fans of the air-supply system. With the above taken into account, the statement of the complex of compressor-facilities control tasks at an industrial enterprise has the following form. It is necessary to develop an algorithm for finding the vector of control actions u*(t) ∈ {U} minimizing

326

E. Kulakova et al.

in real time on the interval [t_F, t_C], for any admissible values of the environment-state vector x(t) ∈ {X}, the efficiency criterion of the task complex J = (J1, J2):

J1 = ∫_{t_F}^{t_C} Σ_{i=1}^{A1} (P_i(t) − P_i*(t))² H_i(t) dt,

J2 = ∫_{t_F}^{t_C} Σ_{i=1}^{Amax} N_Ci(t, x, x′, u, u′) dt + b ∫_{t_F}^{t_C} Σ_{i=1}^{C2} N_Fi(t, x, x′, u, u′) dt + a ∫_{t_F}^{t_C} Σ_{i=1}^{Pu1} N_Pui(t, x, x′, u, u′) dt   (1)

subject to the functional restrictions F_c(t, x, u) ≤ 0, c = 1..n1, and the boundary conditions F_c^{(t_F)}(x, u) = 0, c = n2+1..n3, and F_c^{(t_C)}(x, u) = 0, c = n3+1..n4, caused by the specifics of the functioning of the control object. Here J1 is the objective function of damage from non-compliance with the prescribed air-supply mode; J2 is the objective function of electric-power costs for compressed-air production; P_i(t) and P_i*(t) are the current and optimum compressed-air pressures at the inlet of the i-th consumer; H_i(t) is the priority coefficient of the i-th consumer; n1–n4 are constants; N_Ci, N_Fi, N_Pui are the powers of the i-th compressor, fan and pump, respectively; A1, Amax, C2, Pu1 are the numbers of compressed-air consumers, compressors, fans and pumps, respectively; a, b are structural coefficients:

b = 1 if the fan cooling tower is part of the compressor facilities, 0 otherwise;
a = 1 if the compressor station includes a pumping station, 0 otherwise.

In this article the solution of task (1) is obtained for a control object of the most characteristic structure (a = 1, b = 0), equipped with centrifugal compressor units (CCU) and with a pneumatic-network characteristic that is linear on the working section.

3 Analysis of the Dynamic Properties of the Control Object

When searching for the task extremal on [t_F, t_C] by indirect methods of the calculus of variations, considerable difficulties arise that are connected with the uncertainty of the model parameters due to drift of the input and manipulated variables, and also with the complexity of the solution


of the nonlinear system of high-order differential equations under the time restrictions of the real-time mode. In this regard, the extremal search was carried out with a piecewise-linear approximation method: the integration interval was broken into pieces of length Δt, and the optimized integral was replaced, to some approximation, by a finite sum. Analysis of the dynamic properties of the control object allows the admissible range of Δt to be evaluated for different air-supply systems. To exclude an unstable mode of system functioning, the lower bound of Δt must satisfy the inequality:

Δt_min > Σ_{i=1}^{4} Δt_i   (2)
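The piecewise-linear approximation above, replacing the criterion integral over [t_F, t_C] by a finite sum over pieces of length Δt, can be sketched as a trapezoidal sum; the integrand here is illustrative, not one of the paper's criteria:

```python
# Sketch of the piecewise-linear approximation of Section 3: the integral
# over [t_f, t_c] is replaced by a finite (trapezoidal) sum over pieces
# of length dt. The integrand f is an illustrative stand-in.
def approx_integral(f, t_f, t_c, dt):
    total, t = 0.0, t_f
    while t + dt <= t_c:
        total += 0.5 * (f(t) + f(t + dt)) * dt
        t += dt
    return total

# for a linear integrand the trapezoidal sum is exact: integral of t over
# [0, 10] equals 50
val = approx_integral(lambda t: t, 0.0, 10.0, 1.0)
```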

The retrieval time Δt_1 of the optimum control, including the time of sensor polling, averages 7–10 min for the decision method developed below. The implementation time Δt_2 of the control actions depends on the type of compressor-facilities equipment (compressors, pumps, fans, choking devices, etc.), the nature of the action (loading, shutdown, transfer to no-load operation) and the presence of a person in the control loop. For compressor units using preliminary start of the compressor with subsequent transfer to no-load operation it is 2–3 min. The transport lag is defined by the expression:

Δt_3 = l_max / V_T   (3)

where l_max is the air-duct length to the most remote adjustable consumer and V_T is the propagation speed of the compressed air. For most enterprises l_max lies within 0.8–1.5 km, and the optimum transportation speed of compressed air is 12–20 m/s. The transitional delay Δt_4 is 2–3 min [2]. The upper bound of the interval Δt is chosen so that, in the time between two task solutions, the most rapidly changing parameter P_K does not exceed its established norm P_K* by more than the dynamic runaway ΔP_K [2]:

Δt_max = Q_c ΔP_K P_K / ((Q_K − Q_m) P_atm²)   (4)

where Q_c is the volume of the pneumatic network; ΔP_K is the admissible deviation of P_K from the established norm P_K* that causes no damage to consumers; Q_K and Q_m are the maximum productivity of the compressors and the minimum gas rate in the network, respectively. Implementation experience with compressor-facilities control tasks shows that for most industrial enterprises Δt ∈ [20, 30] min.
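Formulas (2)–(4) bound the admissible scheduling interval. A sketch of the computation follows; all numeric inputs in the example call are illustrative, not values from the paper:

```python
# Sketch of bounds (2)-(4) on the scheduling interval dt (in minutes).
# Every numeric value in the example call is illustrative.
def dt_bounds(dt1, dt2, dt4, l_max, v_t, q_c, dp_k, p_k, q_k, q_m, p_atm):
    dt3 = l_max / v_t / 60.0                # transport lag (3), seconds -> minutes
    dt_min = dt1 + dt2 + dt3 + dt4          # lower bound (2)
    # upper bound (4): interval over which P_K cannot drift past dP_K
    dt_max = (q_c * dp_k * p_k) / ((q_k - q_m) * p_atm ** 2)
    return dt_min, dt_max

lo, hi = dt_bounds(dt1=8.0, dt2=3.0, dt4=2.0, l_max=1200.0, v_t=15.0,
                   q_c=600.0, dp_k=0.05, p_k=0.8, q_k=5.0, q_m=1.0, p_atm=0.1)
```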


4 Mathematical Model

As a result of a study of the relaxation times of the model parameters, carried out with simulation modeling, it was established that on a time interval Δt ∈ [20, 30] min a rather exact description of the optimized processes can be obtained on the basis of quasi-stationary models. This considerably simplified the developed solution algorithm.
Model of the flow-distribution optimization problem in the pneumatic network. The greatest difficulty in its development is the need for repeated real-time calculation of the dependence P_K = P_K(P_1, ..., P_N1), where P_K is the air pressure at the controlled consumers. In [6] it was noted that the traditional method of calculating this dependence leads to an inadmissible increase in the retrieval time of the extrema of criterion J1. The features of the control object allowed this calculation to be simplified considerably. It was established that for most pneumatic networks, throttling of pneumatic-energy consumers for the purpose of optimizing the air-supply mode is inadmissible for technical reasons. Then, with a pneumatic-network characteristic that is linear on the working section, the collector pressure P_K changes in proportion to the pressure change at any point of the pneumatic network [7]. As a result, the mathematical model of the flow-distribution optimization problem takes the following form:

J1(P_K) = Σ_{i=1}^{A1} (P_i(P_K) − P_i^opt)² H_i → min,
dP_i/dP_K = P_i/P_K, i = 1..A1; P_K0 = C_i P_i0;
P_j ≤ B_j, j = 1..A2; P_l ≥ D_l, l = 1..A3; P_i^opt = P_i*, i = 0..A4   (5)

where B_j, D_l, P_i^opt are the technological limits of the pressure change at the controlled large consumers of pneumatic energy, and A2, A3, A4 are known constants.
Mathematical model of the centrifugal compressor set (CCS) power. The power calculation is based on the technique of [1] as the most suitable for on-line control purposes. A number of changes based on the specifics of the task were introduced into it, simplifying the calculation procedure.
1. During the extremum search, the value of the optimized functional serves only as an indicator of decision correctness. Since the CCS power can be presented as the dependence:

N_C(x, u) = N_C^0(x, u) + N^mec + N^C(x) + N^R(x)   (6)

where N_C^0 is the power without the mechanical losses N^mec, the convection losses N^C and the radiation losses N^R, the components N^mec, N^C and N^R, whose determination is connected


with carrying out a series of repeated laboratory experiments, were not considered in the calculation, as they do not influence the position of the extremum.
2. In the CCS power-calculation technique, the pressure losses in the compressor's intermediate air coolers, as well as their efficiency, are recommended to be determined experimentally. For control of a specific unit installed at an industrial enterprise, this requires a series of labor-consuming and exacting experiments demanding a long stop of the compressor, its partial dismantling, and special laboratory equipment. In view of this, the pressure losses at intermediate cooling were not considered in the developed model. The resulting error (Table 1) was compensated by the component:

ΔN(G) = ΔN^L(G) + 0.5 (ΔN^U(G) − ΔN^L(G))   (7)

where ΔN^U(G), ΔN^L(G) are the upper and lower limits of the CCS power-calculation error caused by neglecting the pressure losses or by the error in determining the air-cooler efficiency function Z, and G is the mass air consumption. As simulation results have shown, this simplification barely affected the position of the required extrema while significantly simplifying the mathematical model.

Table 1. Power consumption and losses.

CCS type    | Passport power consumption, kW | Power losses, kW | Absolute accuracy, kW | Relative accuracy, %
K-1500-61-1 | 6950 | 382.2 | 332.0 | 4.8
K-1500-61-2 | 6950 | 382.2 | 382.0 | 4.8
K-500-61-1  | 2950 | 137.7 | 119.6 | 4.1
K-500-61-2  | 2600 | 133.8 | 119.6 | 4.6
K-350-61-2  | 1940 | 101.4 |  88.1 | 4.5
K-250-61-2  | 1470 |  64.9 |  56.4 | 3.8
K-100-61-1  |  480 |  23.3 |  20.3 | 4.2

3. The CCS power-calculation technique was supplemented with an algorithm for operational correction of the thermal conductivity K_1 of the intermediate air coolers, applied during the model-parameter correction process. As a result, the power of the compressor unit was determined by the following model:


N_C = 10⁻³ G R (K/(K−1)) Σ_{j=1}^{3} Δt′_j + 10.81 G,
Δt′_1 = T_HO1 (E_1^{1/δ1(Q1)} − 1),
Δt′_2 = T_HO2 (E_2^{1/δ2(Q2)} − 1),
Δt′_3 = T_HO3 (E_3^{1/δ3(Q3)} − 1),
Q_1 = Q, T_HO1 = T_atm,
T_HO2 = T_K1 − Z_1 (T_K1 − T*_HO2), T_K1 = T_HO1 + Δt′_1,
Z_1 = [1 − e^{−(K_1 F / W_1)(1 − W_1/W_2)}] / [1 − (W_1/W_2) e^{−(K_1 F / W_1)(1 − W_1/W_2)}],
W_1 = M_B C_1, W_2 = G C_2,
T_HO3 = T_K2 − Z_2 (T_K2 − T*_HO2), Z_2 = Z_1, T_K2 = T_HO2 + Δt′_2,
Q_2 = G R T_HO2 / (P_BC E_1), Q_3 = G R T_HO3 / (P_BC E_1 E_2),
E_1 = [((E_1^{(t)})^{1/δ1(Q1)} − 1) (T_atm^{(t)} / T_atm) + 1]^{δ1(Q1)},
E_2 = [((E_2^{(t)})^{1/δ2(Q2)} − 1) (T_HO2^{(t)} / T_HO2) + 1]^{δ2(Q2)},
E_3 = [((E_3^{(t)})^{1/δ3(Q3)} − 1) (T_HO3^{(t)} / T_HO3) + 1]^{δ3(Q3)}   (8)

where N_C is the compressor power; Q_1, Q_2, Q_3 are the productivities at the inlets of the first, second and third stages of the CCS; Δt′_1, Δt′_2, Δt′_3 are the air-temperature gains in the first, second and third stages; T_HO1, T_HO2, T_HO3 are the air temperatures at the inlets of the first, second and third stages; E_1, E_2, E_3 are the compression ratios in the first, second and third stages; δ_1(Q_1), δ_2(Q_2), δ_3(Q_3) are dimensionless experimental dependences of the first, second and third stages; Q is the CCS productivity at suction; T_K1, T_K2 are the water temperatures at the inlets of the first and second stages; T_atm is the free-air temperature; T*_HO2 is the cooling-water temperature; M_B is the cooling-water flow through an intermediate air cooler; C_1, C_2 are the air and water heat capacities; K_1 is the thermal conductivity of an intermediate air cooler; F is the cooling area; P_BC is the air pressure at the first-stage inlet; Z_1, Z_2 are the cooling efficiencies of the first and second air coolers; R is the gas constant; E_1^{(t)}, E_2^{(t)}, E_3^{(t)}, T_atm^{(t)}, T_HO2^{(t)}, T_HO3^{(t)} are the compression ratios and inlet air temperatures of the three stages recorded at the factory tests of the compressor.
Mathematical model of the compressed-air production process. It consists of combined equations describing the electric-power consumption of the compressors and pumps, together with the restrictions obtained from the minimization of criterion J1. Mathematical models for different CCS productivity-control modes are considered: the general case, regulation by throttling at suction, and engine-speed change. The formalized description of the definition domain of criterion J2 is formed by three groups of restrictions: those connected with the joint operation of the compressors on one collector; those resulting from the solution of the flow-distribution control task; and those connected with the cooling-water supply, caused by the joint operation of the pumps. The restriction equations of the first group for the general case of compressor productivity regulation have the following form:

Σ_{i=1}^{r_k} Q_i = Q; P_Ki = P_K, i = 1..r_K; Q_i > 0;
P_Ki ≤ P′_BCi + A_1i Q + A_2i Q² + A_3i Q³;
P_Ki ≥ P″_BCi + A_11i Q + A_21i Q² + A_31i Q³;
P_Ki ≤ K_x^max Q_i; P_Ki ≥ K_x^min Q_i, i = 1..r_K   (9)

where r_k is the number of working CCS; P_Ki, Q_i are the pressure and productivity of the i-th compressor; P′_BCi, P″_BCi, A_1i, A_11i, A_2i, A_21i, A_3i, A_31i, K_x^max, K_x^min are known coefficients. Under regulation of productivity by throttling at suction or by engine-speed change, expression (9) simplifies significantly. For throttling, the restrictions of the first group are described by the equations:

P_BCi^min ≤ P_BCi ≤ P_BCi^max   (10)

where P_BCi^max, P_BCi^min are the upper and lower limits of the pressure change at the suction of the i-th compressor; for regulation by engine-speed change, by the equations:

n_i^min ≤ n_i ≤ n_i^max   (11)

where n_i^min, n_i^max are the limits of the rotation-speed change of the i-th compressor's engine. The restrictions of the second group do not depend on the CCS productivity-control mode and represent the solution of optimization problem (5). They are described by the equation:

P_K = f*(x, u)   (12)


The restrictions caused by the parallel operation of the cooling-water pumps also do not depend on the compressor productivity-control mode. For water supply by centrifugal pumps, this group of restriction equations has the form:

H = H_g + (λ l/d + Σ_j ξ_j) · 8V² / (g π² d⁴) + Σ_{i=1}^{Amax} (V / (μ_i √l̄_i))²,
H = a_0 + a_1 V + a_2 V²,
a_0 = Σ_{i=1}^{K2} a_0i, a_1 = Σ_{i=1}^{K2} a_1i, a_2 = Σ_{i=1}^{K2} a_2i,
V_i = (−a_1i + √(a_1i² − 4 a_2i (a_0i − H))) / (2 a_2i), V_i ≥ 0, i = 1..K2,
H^min ≤ H ≤ H^max   (13)

where H and V are the cooling-water head and flow at the pumping-station collector; H_g is the geometrical height of water rise; λ is the hydraulic-resistance coefficient on the straight sections of the “pumping station – compressor” piece of the water-supply system; l is the length of this piece, of diameter d; Σ_j ξ_j is the sum of the local-resistance coefficients of the same piece; g, π are constants; Amax is the number of compressors connected to the water-supply main at the moment of the task solution; μ_i is the metering characteristic of the i-th compressor's heat exchangers; l̄_i is the equivalent pipeline length of the i-th compressor's heat exchanger. With the above taken into account, the complex of compressor-facilities control tasks for the general case of compressor productivity regulation takes the form:

Amax X

NCi ðx; uÞ þ

i¼1

Amax X

xx NCi hi þ 103 gHq

i¼1

K2 X Vi

! min

gi

i¼1

ð14Þ

where ρ is the water density; N_Ci is the CCS power determined by (8); N_Ci^xx is the no-load power of the i-th CCS; η_i is the efficiency of the i-th pump; the change domain of the optimized functional is set by restrictions (9), (12) and (13). The same statement for CCS productivity regulation by throttling at suction is described by the equations:

J2 = Σ_{i=1}^{Amax} N_Ci^xx h_i + Σ_{i=1}^{Amax} (P_BCi / P_atm) N_Ci + 10⁻³ g H ρ Σ_{i=1}^{K2} V_i / η_i → min,

h_i = 0 if the i-th compressor is working in the network, 1 if it is running at idle   (15)

with restrictions (10), (12), (13); and for regulation by speed change:

J2 = Σ_{i=1}^{Amax} N_Ci^xx h_i + Σ_{i=1}^{Amax} (n_i / n_j)³ N_Ci + 10⁻³ g H ρ Σ_{i=1}^{K2} V_i / η_i → min   (16)

with restrictions (11), (12), (13). The developed method for solving task (1) is illustrated in Fig. 1.
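Criterion (15) for throttle control can be evaluated directly for a given compressor state and pump loading. A sketch with illustrative numbers; the compressor and pump data structures are assumptions introduced here, not from the paper:

```python
# Sketch of criterion (15) for throttle control at suction: working
# compressors (h_i = 0) contribute (P_BCi / P_atm) * N_Ci, idle ones
# (h_i = 1) their no-load power; plus the pump term. All numbers and the
# record layout are illustrative.
def j2_throttle(compressors, pumps, g=9.81, rho=1000.0, head=30.0):
    """compressors: list of dicts with keys idle, n_idle, p_bc, p_atm, n_c;
    pumps: list of (flow, efficiency) pairs."""
    total = 0.0
    for c in compressors:
        if c["idle"]:
            total += c["n_idle"]                       # no-load power
        else:
            total += (c["p_bc"] / c["p_atm"]) * c["n_c"]
    total += 1e-3 * g * head * rho * sum(v / eta for v, eta in pumps)
    return total

j2 = j2_throttle(
    [{"idle": False, "n_idle": 60.0, "p_bc": 0.09, "p_atm": 0.1, "n_c": 1470.0},
     {"idle": True, "n_idle": 60.0, "p_bc": 0.1, "p_atm": 0.1, "n_c": 1470.0}],
    pumps=[(0.05, 0.8)])
```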


Fig. 1. Scheme of the extremum-search procedure for the task complex: D0 is the initial set; D9L is the solution set of the general task; D_ij, i = 1..8, are intermediate sets.

5 Decision Method

The task belongs to the class of multicriteria variational problems on a conditional extremum. Its solution must be preceded by establishing an order relation between the components of the vector J. Analysis of the components J1 and J2 has shown that in some cases the losses from violation of the air-supply mode considerably exceed the electric-power costs of compressed-air production. Therefore the convolution of the criteria was carried out by assigning an absolute priority to the flow-distribution optimization problem (5). This task belongs to the class of convex programming problems, for which the necessary and sufficient conditions of an extremum are defined by the Kuhn–Tucker theorem.


The decision is:

P_K = P_K0 · (Σ_{i=1}^{A1} P_i^opt P_i0 H_i) / (Σ_{i=1}^{A1} H_i P_i0²)   (17)

if P_K ∈ [P_K^min, P_K^max]; otherwise the function reaches its minimum at one of the boundary points of the interval [P_K^min, P_K^max].
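Solution (17), together with projection onto the admissible interval when the unconstrained optimum falls outside it, can be sketched as follows; all numeric inputs are illustrative:

```python
# Sketch of the closed-form solution (17) with projection onto the
# admissible interval [pk_min, pk_max]. Inputs are illustrative.
def optimal_collector_pressure(pk0, p_opt, p0, h, pk_min, pk_max):
    """p_opt[i], p0[i], h[i]: optimal pressure, initial pressure and
    priority coefficient of consumer i."""
    num = sum(po * pi0 * hi for po, pi0, hi in zip(p_opt, p0, h))
    den = sum(hi * pi0 * pi0 for pi0, hi in zip(p0, h))
    pk = pk0 * num / den
    # outside the admissible interval the minimum lies on its boundary
    return min(max(pk, pk_min), pk_max)

pk = optimal_collector_pressure(
    pk0=0.8, p_opt=[0.7, 0.6], p0=[0.7, 0.6], h=[1.0, 1.0],
    pk_min=0.6, pk_max=0.9)
```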

min . on an interval PK ; Pmax K High dimension of a optimization complex problems (4. 8)–(4. 10), belonging to the class the nonlinear and having difficult computable criterion functions set on nonconvex sets, a large number of restrictions like equalities and inequalities complicates search of the optimal solution in real time. In this regard the general decompositional method of problem solving has been developed (14)–(16), based on use of the principle of immersion. Tasks (14)–(16) were transformed to a look: w0 ðx; u; H Þ þ 103 gHq

K2 P Vi i¼1

gi

! min

wi ðx; uÞ ¼ 0; i ¼ 1; K1 ;

ð18Þ

wi ðx; uÞ  0; i ¼ 1; K1 þ 1; KL1 at restrictions (13). The design (18) at H = const breaks up to two tasks solved independently from each other: optimum load dispatch and choice of the compressors best refrigerating duty. It has been established what consumed NC compressor capacities monotonously depends on the pressure size H. Experimentally confirmed also small sensitivity NC to change H at different conditions of the environment and control actions has been confirmed, and also approximately linear nature of this dependence. Proceeding from told, the interval of change H has been broken into 4 sites, for each of which consistently decided a problem of optimum load dispatch and choice of a refrigerating duty. The optimum problem load dispatch was a decomposed on a local tasks set, each of which corresponded to the working compressors certain combination. As on a collector of compressor station not all combinations can provide the required pressure size and an expense, the technique of options preliminary exception which do not have the decision has been developed. The other local tasks extrema were defined by method of dynamic programing. Similar approach has been used also at the choice of the compressors best refrigerating duty. For assessment the efficiency of the developed models and methods decision (1) the provision of an extremum J2 established experimentally was compared to its design value. The comparative analysis results of dependences J2 = J2 (V) received for the compressor station completed five CCS of the K-250-61-2 series and four centrifugal pumps 12D-19 and 10D-19 (on two each brands), are given in Fig. 2.
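The combinational part of the load-dispatch step described above (enumerate compressor combinations, exclude the infeasible ones in advance, then optimize over the remainder) can be sketched as follows. The compressor data and names are hypothetical, and a plain minimum search stands in for the dynamic-programming pass used by the authors.

```python
from itertools import combinations

# Hypothetical compressor park: (name, output in m3/min, power draw in kW).
PARK = [("K1", 250, 1500), ("K2", 250, 1550), ("K3", 125, 800), ("K4", 125, 820)]

def feasible_combinations(park, required_flow):
    """Preliminary exclusion: keep only combinations able to cover the demand."""
    for r in range(1, len(park) + 1):
        for combo in combinations(park, r):
            if sum(c[1] for c in combo) >= required_flow:
                yield combo

def best_combination(park, required_flow):
    """Among the feasible combinations, pick the one with minimal total power."""
    best = min(feasible_combinations(park, required_flow),
               key=lambda combo: sum(c[2] for c in combo))
    return [c[0] for c in best], sum(c[2] for c in best)

names, power = best_combination(PARK, 350)
```

For a demand of 350 m³/min the single compressors are excluded up front, and the cheapest feasible pair is selected; the real method repeats such a search for each of the four pressure sections.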


Fig. 2. Example of determining the optimal water discharge using the basic and the modified method of calculating J2 = f(V) (curves 1 and 2, respectively): ЭR is the total electric power expense; Эk is the electric power expense of the compressors.

From this it follows that, despite the discrepancy of approximately 200 kW between the curves ЭR(1) and ЭR(2) along the ordinate axis, the optimal cooling water discharges practically coincide, amounting to 1650 m3/h and 1620 m3/h, respectively. The procedure of solving the complex of tasks in the control of standard compressor facilities at a machine-building enterprise is presented in the form of the datalogical scheme in Fig. 3: 1 is the collection of information characterizing the normal technological process mode; 2 is the registration of the normal operating mode of the air supply system; 3 is the identification of an emergency at the object; 4 is the decision-making on elimination of the emergency; 5 is the collection of information about the optimal operating mode of the equipment; 6 is the entry of the information into the database files; 7 is the solution by the MAIN program; 8 is the choice of the structure and productivity of the working compressors; 9 is the choice of the structure of the working pumps; 10 is the issue of the message on the need to change the structure and productivity of the compressors; 11 is the issue of the message on the need to change the structure of the working pumps; 12 is the solution by the LUFT program; 13 is the issue of the message containing the name of the switched-off (connected) unit and the values of the key controlled parameters of the technological process; 14, 15 are the messages on an emergency in the pneumatic network of the enterprise; 16 is the calculation of the average daily indicators of the technological process; 17 is the analysis of


Fig. 3. Information-logical scheme of production control.

the implemented control actions; 18 is the solution by the KOR program; 19 is the drawing up of the list of unrealized control actions; 20 is the analysis of the list of unrealized influences; 21 is the determination of the compressed air cost value; 22 is the control of the achievement of the planned decrease in cost value; 23 is the awarding; 24 is the analysis of the causes of infringement of the optimal mode; 25 is the experimental check of the parameters of the mathematical models used by the MAIN program; 26 is the correction of the mathematical model parameters.

6 Conclusion

The considered complex of optimal control problems for the compressor facility equipment has undergone approbation and has been implemented at the Saratov electric modular production association; it has also been approved by the Editorial Council of the Energy Saving Program for consideration and practical implementation in industry. Work is currently under way on its inclusion in the structure of a standard hybrid-intelligence system managing the power economy of the enterprise.



BEM Based Numerical Approach for the Study of the Dispersed Systems Rheological Properties

Yulia A. Pityuk, Olga A. Abramova, Nazgul B. Fatkullina, and Aiguzel Z. Bulatova

Center for Micro and Nanoscale Dynamics of Dispersed Systems, Bashkir State University, Ufa, Russia
[email protected]
http://cmnd.bashedu.ru/en.html

Abstract. The relevance of adequate modeling of disperse systems at the microscale is driven by the need to solve applied problems appearing in the oil and gas industry, micro-manufacturing, environmental, bio-, medical-, nano- and other technologies. Highly efficient computational techniques for modeling large volumes of a dispersed system are required to determine more accurately the rheological parameters of such systems based on the calculated properties of their components. The present work is dedicated to the study of the features of dispersed systems in a shear flow at low Reynolds numbers. The computational approach is based on the boundary element method accelerated using the fast multipole method on heterogeneous computing architectures. The results of the simulations and details of the method are discussed. Furthermore, the standard viscometric functions that characterize the behavior of an emulsion or bubbly liquid, when it is regarded as a homogeneous medium, were calculated and studied. Keywords: Dispersed systems · Shear flow · Rheology · Boundary element method · Fast multipole method · Graphics processors

1 Introduction and Motivation

© Springer Nature Switzerland AG 2019. O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 338–352, 2019. https://doi.org/10.1007/978-3-030-12072-6_28

Multiphase flows, such as flows of bubbly liquids, emulsions, suspensions, and gas-particle mixtures, are, as a rule, very complex. Generally, they depend on the microscale flows (the scale of single inclusions), which determine the mass, momentum and energy exchange between the phases and depend on the particle shapes, the interactions of the particles with the domain boundaries, particle-particle interactions, the presence of admixtures, etc. Mathematical modeling of such phenomena based on the first principles of mechanics is a challenging and significant problem. The need for adequate modeling of disperse systems at the microscale is driven by the need to solve applied problems appearing in the


oil and gas industry, micro-manufacturing, environmental, bio-, medical-, nano- and other technologies. The behavior of a deformable dispersed inclusion, such as a liquid drop or gas bubble, immersed in a viscous flow is a topical problem of modern science and technology with applications in the fields of suspension rheology, fluid mixing, dispersion, and two-phase flow. Research into the dynamics of bubbles and drops allows a more detailed study of the interface dynamics between two fluids and provides a basis for calculating the rheological characteristics of an emulsion. The ability to predict the properties of water-in-oil emulsions is very important for accelerating oil production from the porous layer, for separating oil from water and other impurities, and for the processing and transportation of raw materials. Furthermore, the dynamics of droplets in a viscous liquid flow is a prototypical motion of complex particles and capsules such as red blood cells. Moreover, the study of bubbly liquid properties is significant for the surface cleaning of microelectronic devices. Direct numerical simulation is very important for the hydrodynamics of multiphase flows and for multiscale problems. First, experiments at the micro scale are very expensive, visualization is complicated, and the problems are multi-parametric. Second, three-dimensional modeling of the dynamics of deformable objects, such as bubbles and drops, is practically impossible without numerical simulations. The features and properties of mixtures of fluids with droplet structure under the influence of various external forces have long been of interest to a wide range of researchers.
The dynamics of emulsion droplets under the action of various external fields is calculated using various numerical methods of continuum mechanics, for instance, finite-difference methods [7], the finite element method [10], control volume and VOF methods [5], and the boundary element method (BEM). The major scientific problem addressed in the present work is therefore the application of contemporary methods to the study of the properties of dispersed systems under an imposed simple shear flow. The simulations are performed using the boundary element method accelerated both via advanced scalable algorithms, particularly the fast multipole method (FMM), and via the utilization of advanced hardware, particularly graphics processors (GPUs) and multicore CPUs. We developed and tested this efficient approach [1] and applied it to the study of a wide range of processes, such as emulsion flow in an unbounded domain and in microchannels of various shapes, viscous fluid flow around rigid structures, and bubble dynamics in different domains [1,2,8]. The use of this approach reduces the computational complexity of the overall problem and can potentially handle direct simulations of large dispersed systems with millions of boundary elements [1]. The BEM is appropriate for the study of the motion of drops with arbitrary deformation in unbounded domains. BEM for Stokes flow is described in [11] and was successfully applied to the simulation of the dynamics and interaction of droplets, bubbles and rigid particles in dispersed flows [6,12,14,16]. But for a more detailed study of the dispersed systems


properties, the three-dimensional simulation of a large number of dispersed inclusions is needed. Some results of such modeling are presented in [14], where the authors achieved substantial accelerations of droplet dynamics simulations via the use of multipole expansions and translation operators, which is very much in the spirit of the FMM and can be considered as one- and two-level FMM. However, O(N) scalability of the FMM can be achieved only on hierarchical (multilevel) data structures, which were not implemented there. Highly efficient computational techniques for numerical modeling are required to determine more accurately the rheological properties of emulsions, suspensions or bubbly liquids, based on the calculated properties of their components. There are several theoretical models for the calculation of rheological characteristics, for instance the relative viscosity of dispersed systems, but most of them are valid only for small concentrations and non-deformable spherical particles. One of the first relations for the effective viscosity μ_eff of a suspension of rigid spherical particles dispersed in a viscous Newtonian fluid, as a function of the volume concentration of the dispersed phase, was derived by Einstein. In his works it was shown that the increase of the suspension viscosity can be related to the volume concentration of solid particles by a proportionality factor. This relation has the following form:

μ_eff = μ_1 (1 + kα),   (1)

where for spherical particles the coefficient k = 2.5, μ_1 is the dynamic viscosity of the ambient fluid, and α is the volume fraction. The equation describes the increase of the viscosity of the entire system due to the presence of spherical particles. This formula approximates the experimental data for dilute suspensions well, under the assumption that the concentration of the dispersed phase is small enough that the particles are sufficiently far from each other and move without interactions.
In several experimental studies it was shown that for dilute suspensions of solid spherical particles in a Newtonian fluid the values of μ_relative in most cases agree with those obtained by the Einstein formula (1), but with an increase of α the slope of the curve of the relative viscosity versus the volume concentration increases, and Eq. (1) no longer describes the process. As α increases, the interaction between the particles cannot be neglected, and therefore the experimental data for more concentrated systems are in poor agreement with the results obtained by formula (1). Subsequently, many theoretical models for determining rheological characteristics were based on the Einstein formula with various modifications for more concentrated systems and also for non-spherical particle shapes. A large number of published papers are related to the study of the rheology of suspensions [3,13]. The theoretical expression for the viscosity of a dispersed system consisting of small liquid droplets dispersed in a carrier liquid was first proposed by Taylor:

μ_eff = μ_1 (1 + α (μ_1 + 2.5 μ_2)/(μ_1 + μ_2)),   (2)

where μ_2 is the dynamic viscosity of the fluid inside the dispersed inclusions. For μ_2 ≫ μ_1 this relation reduces to the Einstein equation for solid spherical particles.


But this formula is also valid only for small values of the capillary number Ca = μ_1 a G / γ, where a is the particle radius, G is the shear rate, and γ is the surface tension. Various modifications of the above formulas have been developed to extend the range of applicability of the relations to more concentrated dispersions. Among the most widely used relations between μ_eff and α are the Mooney formula

μ_eff = μ_1 exp(kα / (1 − α/α_max))   (3)

and the Dougherty–Krieger formula

μ_eff = μ_1 (1 − α/α_max)^{−α_max k},   (4)

where k = 2.5 for spherical particles and k > 2.5 for non-spherical shapes; α_max is the maximum particle packing density. These formulas describe the dependence of μ_eff on α over a wider range of α. The complexity of constructing theoretical models for determining the rheological characteristics of dispersed systems consisting of deformable inclusions of arbitrary size and arbitrary distribution is due to the fact that it is difficult to predict the shape of a deformed inclusion, because it changes in different ways under the combined effect of viscous forces and surface tension forces. For many years, the evaluation of dispersed system rheology was carried out only by experimental or theoretical methods, but in recent years, due to the development of high-performance computing and numerical methods, direct numerical simulation of the dynamics of such systems, consisting of deformable particles of arbitrary size, has become possible. Computer simulation is one of the most efficient approaches for determining the rheological properties of dispersed systems with different parameters, and it is a good alternative to expensive experiments because it gives the opportunity to choose the optimal parameters in each case based on the practical requirements for studying various properties of dispersed systems. Numerical experiments make it possible to determine the contribution of the dispersed phase to the components of the effective stress tensor of the system and, based on the values of the normal stress differences, to detect the effects when non-Newtonian properties arise.
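For reference, formulas (1)–(4) can be collected into a small script. The parameter values in the demonstration are arbitrary, and formula (2) is written in the form μ_eff = μ_1(1 + α(μ_1 + 2.5μ_2)/(μ_1 + μ_2)) as reconstructed from the text.

```python
import math

def einstein(mu1, alpha, k=2.5):
    """Formula (1): dilute suspension of rigid spheres."""
    return mu1 * (1.0 + k * alpha)

def taylor(mu1, mu2, alpha):
    """Formula (2): dilute emulsion of droplets with internal viscosity mu2."""
    return mu1 * (1.0 + alpha * (mu1 + 2.5 * mu2) / (mu1 + mu2))

def mooney(mu1, alpha, alpha_max, k=2.5):
    """Formula (3): Mooney relation for concentrated dispersions."""
    return mu1 * math.exp(k * alpha / (1.0 - alpha / alpha_max))

def krieger_dougherty(mu1, alpha, alpha_max, k=2.5):
    """Formula (4): Dougherty-Krieger relation."""
    return mu1 * (1.0 - alpha / alpha_max) ** (-alpha_max * k)
```

As a sanity check, Taylor's formula approaches the Einstein result as μ_2 ≫ μ_1, and both (3) and (4) reduce to μ_1 at α = 0 while growing faster than (1) at higher concentrations.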

2 Problem Statement and Numerical Procedure

The dynamics of a deformable inclusion (index 2) in an unbounded viscous liquid (index 1) is considered. Since the motion is slow, the viscous forces resulting from the fluid flow are higher than the inertia forces related to the acceleration or slowdown of the particles in the fluid. This fact makes it possible to neglect the inertial terms in the calculations completely. All processes are studied under isothermal conditions, without taking into account the intermolecular Van der


Waals forces. In this case, we can use the Stokes equations to describe the motion of each fluid:

∇ · σ_i = −∇p_i + μ_i ∇²u_i = 0,   ∇ · u_i = 0,   (5)

where u and σ are the velocity and the stress tensor, μ is the dynamic viscosity, and p is the pressure, which includes the hydrostatic component, i = 1, 2. At the fluid-fluid interface S, the boundary conditions for the velocity u and the traction f are

u_1 = u_2 = u,   f = σ_1 · n_1 − σ_2 · n_2 = f_1 − f_2 = f n,   f = γ(∇ · n) + (ρ_1 − ρ_2)(g · x),   x ∈ S,   (6)

where n is the normal to S pointing into fluid 1, ρ and g are the density and the gravity acceleration, respectively. In the case of infinite domains, the condition u_1(x) → u_∞(x) should be imposed on the carrier fluid, where u_∞(x) is a solution of the Stokes equations. The dynamics of the fluid-fluid interface can be determined from the kinematic condition

dx/dt = u(x),   x ∈ S,   (7)

where u(x) is the interface velocity determined from the solution of the elliptic boundary value problem stated above. Although the governing equations and boundary conditions are linear with respect to u and f, the dynamics of the interface is a non-linear problem, since u(x) depends on all the points of the surface. The problem is solved using the boundary element method, which is based on the integral equations for the determination of the velocity distribution over the boundary. The object surfaces are covered by triangular meshes. The collocation points are located at the mesh vertices. More details of the present implementation can be found in [1]. The boundary integral equations combined with the boundary conditions in discrete form result in a system of linear algebraic equations (SLAE)

A X = b,   (8)

where A is the system matrix, X is the solution vector, and b is the right-hand-side vector. For the integration of Eq. (7), we used the Adams–Bashforth–Moulton predictor-corrector scheme of the sixth order. This scheme requires two calls of the right-hand-side function per time step. It also requires an initialization, which was provided by a fourth-order Runge–Kutta scheme.
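The time-marching scheme can be illustrated with a lower-order variant: a fourth-order Adams–Bashforth–Moulton predictor-corrector initialized by Runge–Kutta, applied here to a scalar test equation dx/dt = −x rather than the interface equation (7). The sixth-order scheme of the paper differs only in its coefficients and history length.

```python
import math

def rk4_step(f, t, x, h):
    """Classical fourth-order Runge-Kutta step (used for initialization)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def abm4(f, t0, x0, h, n_steps):
    """4th-order Adams-Bashforth predictor / Adams-Moulton corrector."""
    ts, xs = [t0], [x0]
    for _ in range(min(3, n_steps)):           # RK4 start-up builds the history
        xs.append(rk4_step(f, ts[-1], xs[-1], h))
        ts.append(ts[-1] + h)
    fs = [f(t, x) for t, x in zip(ts, xs)]
    for _ in range(3, n_steps):
        # Predictor: Adams-Bashforth over the four stored slopes.
        xp = xs[-1] + h / 24 * (55 * fs[-1] - 59 * fs[-2] + 37 * fs[-3] - 9 * fs[-4])
        t_new = ts[-1] + h
        # Corrector: Adams-Moulton, one extra call with the predicted slope.
        xc = xs[-1] + h / 24 * (9 * f(t_new, xp) + 19 * fs[-1] - 5 * fs[-2] + fs[-3])
        ts.append(t_new); xs.append(xc); fs.append(f(t_new, xc))
    return ts, xs

ts, xs = abm4(lambda t, x: -x, 0.0, 1.0, 0.01, 100)
```

Like the scheme used in the paper, this costs two right-hand-side evaluations per step (one for the corrector, one to store the accepted slope), while the predictor reuses stored history.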

3 Results and Discussion

Numerical tests are performed on a workstation equipped with two Intel Xeon 5660 CPUs and one NVIDIA Tesla K20 GPU. Several algorithm implementations were developed, including CPU and CPU/GPU versions of the iterative algorithm with the


FMM-accelerated MVP and a conventional BEM in which the BEM matrices were computed and stored. The latter implementation was developed for verification and validation purposes, to ensure that the algorithms produce consistent results. It is assumed that the dynamic viscosity and density of the gas can be neglected in comparison with the corresponding parameters of the liquid. In this case, we consider the model of an ideal gas and viscous liquid motion.

Calculation of Rheological Properties. Some numerical approaches allow a more detailed investigation of the behavior of deformable drops in the flow, which is the basis for the study and description of the rheological properties of liquid-liquid systems. The dynamics of droplets under the influence of shear flow, which is a standard rheometric flow, is considered. In [4] a way to calculate the effective stress tensor Σ of the dispersed system as an average of the stress tensor T over a selected volume V of the dispersed system is presented:

Σ = (1/V) ∫_V T dV.   (9)

In the case of a Newtonian incompressible viscous fluid without suspended particles, the stress tensor is expressed through the strain rate tensor Ṡ as follows:

Σ = −P I + 2μṠ   (10)

or in component form

Σ_ij = μ(∂u_i/∂x_j + ∂u_j/∂x_i) for j ≠ i,   Σ_ij = −P + 2μ ∂u_i/∂x_i for j = i,   (11)

where P is the hydrostatic pressure in the liquid at rest. It was derived that if the dispersed phase is also a Newtonian fluid and the motion occurs at low Reynolds numbers, then Σ for the dispersed system in a shear flow u_∞ = (Gy, 0, 0) is defined as

Σ_ij = −δ_ij P + μ_1 (∂u_i/∂x_j + ∂u_j/∂x_i) + α Σ^d_ij,   (12)

Σ^d_ij = (1/V_2) ∫_S [f_i x_j − μ_1 (1 − λ)(u_i n_j + u_j n_i)] dS,   i, j = 1, 2, 3,   (13)

where α is the volume concentration of the dispersed phase. The first two terms on the right-hand side of Eq. (12) are the contribution of the continuous phase; the last integral expresses the contribution of the dispersed phase to the stress tensor of the whole system, and its value depends on the microstructure of the


considered system. From the quantities constituting formula (13) it is seen that the geometry of the inclusions (deformation and orientation in the flow) significantly affects the stress tensor of the system. If the velocity and traction on the surface of each particle are defined, then using formula (12) the following rheological characteristics can be calculated:

μ_eff = μ_1 + α Σ^d_12 / G,   N_1 = α(Σ^d_11 − Σ^d_22),   N_2 = α(Σ^d_22 − Σ^d_33),   (14)

here μ_eff is the effective viscosity and N_1, N_2 are the first and second normal stress differences. For Newtonian fluids, the viscosity does not depend on the shear rate, and the normal stress differences are zero. When the system shows non-Newtonian behavior, a change of the system viscosity with changing G is observed. Furthermore, normal stress differences appear, for example, in polymer melts and solutions due to the elasticity of the polymer chains which extend downstream. Because of the extensible nature of the molecular resistance to such deformations, the first normal stress difference is positive. The second normal stress difference is usually much smaller in magnitude and negative. Since the shear flow is the standard rheometric flow, we consider different types of dispersed systems under an imposed shear flow (Gy, 0, 0). First of all, to validate the method of calculation of the rheological characteristics used in this work, the obtained contribution of one drop to the stress tensor of the emulsion was compared with the numerical results presented in the literature [9]. The calculations were carried out for the stable state of a deformed single drop with its center at the origin, placed in a stationary uniform shear flow (Gy, 0, 0), for different values of the capillary number Ca and for given λ = 1 and λ = 6.4. There were N = 642 points, or NΔ = 1280 triangular elements, on the drop surface.
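Formula (13) can be transcribed directly into code as a facet-wise surface quadrature. This is a hedged sketch: the mesh data (facet centroids x, tractions f, velocities u, normals n and areas dS) are synthetic stand-ins for the BEM output, not the authors' data structures.

```python
def stress_contribution(x, f, u, n, dS, mu1, lam, V2):
    """Sigma^d_ij = (1/V2) * sum_k [f_i x_j - mu1(1-lam)(u_i n_j + u_j n_i)] dS_k,
    a quadrature of formula (13) over the facets of one inclusion surface."""
    S = [[0.0] * 3 for _ in range(3)]
    for xk, fk, uk, nk, dSk in zip(x, f, u, n, dS):
        for i in range(3):
            for j in range(3):
                S[i][j] += (fk[i] * xk[j]
                            - mu1 * (1.0 - lam) * (uk[i] * nk[j] + uk[j] * nk[i])) * dSk
    return [[s / V2 for s in row] for row in S]

# Two synthetic facets, just to exercise the formula.
x = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # facet centroids
f = [(0.5, 0.0, 0.0), (0.0, 0.5, 0.0)]   # tractions
u = [(0.0, 0.1, 0.0), (0.1, 0.0, 0.0)]   # interface velocities
n = x                                     # outward normals (unit-sphere patches)
dS = [0.2, 0.2]                           # facet areas
Sd = stress_contribution(x, f, u, n, dS, mu1=1.0, lam=1.0, V2=1.0)
```

Note that for λ = 1 (matched viscosities) the velocity term drops out, leaving only the traction moment f_i x_j; the effective viscosity then follows from (14) as μ_1 + α Σ^d_12 / G.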


Fig. 1. The dependence of the rheological characteristics on Ca: lines are the calculations obtained in this work, the symbols are calculations presented in [9].



Fig. 2. The dependence of the relative viscosity on the volume fraction in the logarithmic scale: −− calculations presented in this work; − − −− using Einstein’s formula (1); −− using Einstein’s formula with k = 4 (1); − × − using formula (4); − ◦ − using formula (3).

Calculations for such a small-scale system were carried out using the direct BEM, without any acceleration. The contribution of one drop to the components of the emulsion stress tensor was calculated using the following formulas: Σ12 = Σd12, N1 = N1d = Σd11 − Σd22 and N2d = Σd22 − Σd33. The behaviour of viscous emulsions is closely related to the orientation and deformation of the droplets suspended in the carrier liquid. It is known that drops placed in a shear flow after some time reach a stable shape, which is determined by a balance between the hydrodynamic forces (which tend to pull the droplet downstream) and the surface forces (which tend to keep the drop spherical). The ratio of these forces is expressed by a dimensionless parameter, the capillary number. Figure 1 shows the dependence of the calculated values on the capillary number; the results of this work are marked by lines, and the symbols mark the results presented in [9]. As one can see, Σ12 is positive for all values of Ca and λ. Thus, the presence of deformable droplets affects the effective viscosity of the disperse system as a whole, which increases with increasing Ca. Moreover, for different λ the influence of the presence of droplets on the effective viscosity varies. For λ = 6.4, the values of Σ12 grow faster with increasing capillary number than for λ = 1. The comparison of Σ12, N1, N2 for the same λ and several capillary numbers Ca = 0.2, Ca = 0.3, Ca = 0.5, calculated for a dilute emulsion consisting of 8 drops in a periodic cell with α = 10−4, with the results from article [9] is also presented in work [15]. The results obtained in this paper are in good agreement with the values published in [9] and [15]. Besides, calculations of the relative viscosity μ∗ = μrelative = μeff/μ1 of an ordered dispersed system were conducted for various parameters. The ordered


system consists of periodically distributed cells, each of which contains only one inclusion (a rigid particle, a deformable drop or a bubble). We consider the volume of the ordered dispersed system for Ca ≪ 1 and λ ≫ 1. In this case, one may regard the simulation as that of spherical rigid non-deformable particles in a viscous liquid.

Fig. 3. Stable bubble shape in simple shear flow for different Ca: (a) Ca = 0.05; (b) Ca = 0.1; (c) Ca = 0.15; (d) Ca = 0.2.


Fig. 4. The dependence of the rheological characteristics on Ca, λ = 0.

To compare the obtained values of μrelative of the suspension with the theoretical expressions (1), (3), (4), a volume of dispersed system formed of 729 spherical non-deformable particles was subjected to an imposed shear flow. There were NΔ = 1280 triangular elements on each particle surface. The relative viscosity was calculated for different volume fractions of the dispersed phase. Figure 2 shows the results of the calculations in a logarithmic scale, together with calculations using the various formulas (1), (3), (4). It is seen from the figure that at low concentrations the dependence of μrelative on α is linear; the behavior of the curves is

Fig. 5. A snapshot of the initial spatial distribution in a cell (left: 27 bubbles) and in the whole computational domain (right: 729 bubbles) for Ca = 0.1, at (a) α = 3.35%; (b) α = 6.54%; (c) α = 11.31%; (d) α = 17.96%.

Fig. 6. The dependence of the relative viscosity on Ca for λ = 0 and volume concentrations α = 3.35%, 6.54%, 11.31%, 17.96%.

similar, but the angle of inclination is different. The obtained numerical results are better approximated by the Einstein formula with a coefficient k = 4, which was obtained by Ward and Whitmore to approximate experimental data. Moreover, the values of the first and second normal stress differences in the case of non-deformable particles were much less than unity, that is, practically zero.


Then, we can consider another type of dispersed system, for instance, a bubbly liquid in the case λ = 0. It is assumed that the dynamic viscosity and density of the gas can be neglected in comparison with the corresponding parameters of the liquid. In this case, we consider the model of an ideal gas. There were NΔ = 1280 triangular elements on the bubble surface. The steady-state bubble shapes in simple shear flow (Fig. 3) are used to calculate the contribution of each individual inclusion to the effective stress tensor of the dispersed system as a whole. The obtained results are presented in Fig. 4 for different values of Ca. As Ca increases, the bubbles become more elongated and the angle between the central axis of the bubble and the flow direction decreases. The higher the deformation of the inclusion, the greater the values of the first and second normal stress differences. Further, numerical experiments for an ordered bubbly liquid of various concentrations were carried out. As the initial state we took the deformed steady-state shape of the inclusion shown in Fig. 3. The bubble distribution in a cell and in the whole considered domain is shown in Fig. 5 for the volume concentrations α = 3.35%, α = 6.54%, α = 11.31%, α = 17.96%. The obtained results for the relative viscosity in these cases are presented in Fig. 6. One can see that the presence of deformable inclusions (even with λ = 0) influences the value of μrelative; at the same time, the relative viscosity does not strongly depend on the capillary number in this range of values. Of particular interest is the study of the rheological properties of emulsions consisting of deformable droplets of different radii, and the investigation of the relationship between these properties of the whole system and its components. Besides, in practice, polydisperse emulsions are more common. In this paper, calculations are performed for such "liquid-liquid" systems at low concentrations.
The computations were conducted for initially spherical drops with randomly uniform spatial distributions in a cubic cell with its center at the origin. There were 30 drops, with radii varied in the range 1.0412 ≤ a ≤ 1.9287. In the second case, for α = 20%, 31 spherical drops with 1.1606 ≤ a ≤ 2.1131 were generated in a cubic cell. Then 26 similar cells were distributed around the central cubic cell. The obtained initial distributions are shown in Fig. 7. Thus, we considered the volume of a polydisperse emulsion with 810 drops for α = 10% and 837 drops for α = 20%, for λ = 1, λ = 1.5 and λ = 3.6, placed in a shear flow (Gy, 0, 0). The surface of each drop was covered by a triangular mesh with N = 642 vertices. Note that these computations are performed for the non-dimensional time t = tnon−dim = γ tdim/(μ1 a), where a = amin is the minimal initial droplet size. During the simulation, the conservation of the drop volume is checked. For the given droplet distribution with α = 10%, the capillary number was varied in the range 0.05206 ≤ Ca ≤ 0.096435. The contribution of the particles to the stress tensor components Σ12, N1, N2 was found by formulas (13), (14) for the droplets distributed in the central cell. Figure 8 shows the change in time of the rheological characteristics during the motion of a polydisperse emulsion in a shear flow. After some simulation time, the shape of the droplets changed and achieved a stable

Study of the Dispersed Systems Properties

Fig. 7. A snapshot of the initial spatial distribution in the cell (left) and the whole computational domain (right): (a) α = 10%, 810 drops; (b) α = 20%, 837 drops.
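The text notes that the conservation of the drop volume is checked during the simulation. For a closed, outward-oriented triangular surface mesh (such as the N = 642-vertex drop surfaces), the enclosed volume can be computed with the divergence theorem. A minimal sketch; the function name and the octahedron test mesh are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Enclosed volume of a closed, outward-oriented triangular mesh via the
    divergence theorem: V = (1/6) * sum over triangles of (v0 x v1) . v2."""
    v = vertices[triangles]                       # shape (n_tri, 3, 3)
    return np.einsum('ij,ij->', np.cross(v[:, 0], v[:, 1]), v[:, 2]) / 6.0

# Illustrative test mesh: a regular octahedron (analytic volume 4/3).
verts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
tris = np.array([[0, 2, 4], [2, 1, 4], [1, 3, 4], [3, 0, 4],
                 [2, 0, 5], [1, 2, 5], [3, 1, 5], [0, 3, 5]])
vol0 = mesh_volume(verts, tris)
# In a BEM time loop, the current volume is compared with the initial one;
# here a 0.1% radial inflation produces a ~0.3% relative volume change.
rel_drift = abs(mesh_volume(1.001 * verts, tris) - vol0) / vol0
```

In an actual run the drift between time steps should stay near machine precision; a growing drift signals a meshing or quadrature problem.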

Fig. 8. Changing in time of the rheological characteristics (Σ12, N1, N2) for the central cell of a polydisperse emulsion with α = 10%: (a) λ = 1, (b) λ = 1.5, (c) λ = 3.6.

deformed state. Accordingly, the values of Σ12, N1, N2 also changed and, as can be seen from Fig. 8, came to their stationary values for each λ. Of greatest interest is the change in time of the relative viscosity of the emulsion. The calculated μrelative for all considered λ is represented in Fig. 9. It

Y. A. Pityuk et al.

Fig. 9. The relative viscosity of a polydisperse emulsion, α = 10%: λ = 1 is the solid line, λ = 1.5 is the dashed line, λ = 3.6 is the dotted line.

Fig. 10. The relative viscosity of polydisperse emulsions for λ = 1: α = 10% is the dashed line, α = 20% is the solid line.

is shown that during the emulsion dynamics the character of approaching the steady-state value differs slightly as the viscosity ratio varies. The smaller λ is, the faster the curve reaches its steady-state value and the greater its slope. This is explained by the fact that, for the given parameters, the stable state of a deformed drop is reached more slowly as λ increases. It is also shown that the larger the value of λ, the greater the steady-state value of the relative viscosity. The results of these studies showed the presence of normal stress differences in the effective stress


tensor of the emulsion that are similar to those found in non-Newtonian fluids with elastic properties. Furthermore, calculations were carried out for a polydisperse emulsion with α = 20% and λ = 1. Figure 10 shows the change of μrelative in time. One can see that the character of approaching the steady-state value varies not only with λ but also with the volume fraction of the dispersed phase. For a more concentrated system, the steady-state value is reached more slowly, t1 < t2. Since the relative viscosity changes until the droplets have taken a stable deformed shape for the specified parameters, it follows that, with the same distribution, the droplets in a more concentrated emulsion deform more slowly. Note that in all cases the change of the rheological characteristics in time is nonmonotonic. This is due to the change in time of the relative droplet locations and their influence on each other's deformation and, broadly speaking, depends on the initial distribution. As can be seen from Fig. 10, this effect is more pronounced with increasing emulsion concentration. When the volume fraction increases by 10%, the relative viscosity increases on average by 15%.

4 Conclusions

The application of a BEM-based approach to 3D calculation of the dynamics of dispersed inclusions, accelerated both via an advanced scalable algorithm (FMM) and via a heterogeneous computing architecture (multicore CPUs and graphics processors), is presented. Thus, direct numerical simulation of dispersed systems consisting of deformable inclusions can be used for a more detailed and realistic description of the rheological characteristics and for prediction of the microstructure. The features of dispersed systems are considered for a wide range of viscosity ratios and capillary numbers. The results of 3D numerical simulation of a dilute emulsion and a bubbly liquid in simple shear flow for different volume fractions are also presented. The presented approach can be used to determine the rheological parameters of monodisperse and polydisperse dilute emulsions or bubbly liquids, as well as of suspensions consisting of particles of various non-spherical shapes. It allows one to study the change of different rheological characteristics in time, as well as their dependence on the physical parameters, in a range that remains uncovered by various theoretical models, which are often limited by the assumption of monodispersity of the emulsion, non-deformability of particles, or very small concentrations.

Acknowledgements. The reported study was funded by the Russian Science Foundation according to the research project No. 18-71-00068. The FMM library is provided by Fantalgo, LLC (Maryland, USA).


Formalization of Requirements for Locked-Loop Control Systems for Their Numerical Optimization

Vadim Zhmud1, Galina Frantsuzova1, Lubomir Dimitrov2, and Jaroslav Nosek3

1 Novosibirsk State Technical University, Karl Marx Ave. 20, 630073 Novosibirsk, Russia
[email protected], [email protected]
2 Faculty of Mechanical Engineering, Technical University of Sofia, Bul. St. Kliment Ohridski 8, 1756 Studentski Complex, Sofia, Bulgaria
[email protected]
3 Technical University of Liberec, Studentská 1402/2, 461 17 Liberec, Czech Republic
[email protected]

Abstract. Methods of designing regulators for locked-loop systems have been studied and developed for more than half a century. Recently, due to the development of computing hardware and software, numerical optimization methods have come to the fore. These methods make it possible to effectively calculate the controller for a known mathematical model of a particular object. The developer can obtain positive results from such calculations even without a sufficiently deep knowledge of control theory. The greatest difficulty lies in transforming the requirements for the optimization result, formulated in a technical language understandable to the developer, into formal requirements that can be processed by the software performing these calculations. In this article, these requirements are formulated and systematically set out on the basis of long experience in the development of these methods and their use for a wide variety of control tasks.

Keywords: Control theory · Optimization · Numerical modeling · Cost functions · Objective functions · Transient processes · Regulator · Controllers

1 Introduction

A number of control tasks are so complex [1–22] that solving them in analytical form is a significant problem, whereas finding the coefficients of a regulator by the method of numerical optimization can be carried out quite simply [23–32]. However, this method has not yet become sufficiently widespread, since any optimization requires formalizing the statement of the problem as an objective function, and there is not yet a general consensus on the methodology for forming such an objective function.

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 353–365, 2019. https://doi.org/10.1007/978-3-030-12072-6_29


Nevertheless, the requirements for closed-loop systems, formulated on the basis of the characteristic indicators of the transient process, are fairly well formalized and generally accepted. Therefore, it is important to link these requirements to one another. This paper aims to solve this problem.

2 Problem Statement

Traditional requirements for closed-loop systems are formulated according to the shape of the transient process. Numerical optimization requires setting a cost function, which becomes the single optimization criterion. Note that this approach is not the only one possible. In particular, the requirements for the form of the transient function can be converted into parameters of graphically defined boundaries for the resulting transient process, after which the Monte-Carlo method can be applied. Namely, one can randomly set the regulator coefficients: if the transient process chart goes beyond the predetermined boundaries, the search continues; if the graph lies completely within the specified boundaries, the search is terminated, and the search results are those regulator coefficients that provided the transient process specified in the demands. Such a procedure is built into the MATLAB-Simulink software package and is successfully used in many cases. However, this is not optimization in the strict sense, since the given graphical limits of the transient process may be either excessively or insufficiently spacious. In the first case, the resulting process can be very far from the best possible; in the second case, the solution of the problem is impossible. Optimization in its true sense is finding the regulator that is the best from the standpoint of formally specified parameters. It remains only to formally set these parameters and apply adequate software for modeling and optimization. The task of choosing adequate software has actually been solved; we recommend the VisSim program as the most suitable for this purpose [23–32]. The formation of optimization criteria will be discussed below.
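The Monte-Carlo procedure described above (accept the first random coefficient set whose transient stays inside prescribed graphical bounds) can be sketched in a few lines. The plant model, the corridor, and the gain ranges below are illustrative assumptions, not taken from the paper:

```python
import random

def step_response(kp, ki, T=10.0, dt=0.01):
    """Euler simulation of a PI controller on the illustrative plant
    dy/dt = -y + u (not a model from the paper); unit step reference."""
    y, integ, out = 0.0, 0.0, []
    for _ in range(int(T / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += (-y + u) * dt
        out.append(y)
    return out

def within_bounds(y, dt=0.01):
    """Prescribed graphical corridor (illustrative): never exceed 1.3, and
    after t = 3 s stay within 5% of the setpoint."""
    for i, yi in enumerate(y):
        if yi > 1.3 or (i * dt > 3.0 and abs(yi - 1.0) > 0.05):
            return False
    return True

random.seed(0)
found = None
for _ in range(200):                 # random coefficient search, as in the text
    kp, ki = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    if within_bounds(step_response(kp, ki)):
        found = (kp, ki)             # first chart inside the corridor wins
        break
```

As the text notes, the accepted coefficients are merely admissible, not optimal: any pair whose chart fits the corridor terminates the search.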

3 Traditional System Requirements

Traditionally, requirements for closed-loop systems are formed as a set of conditions based on technological requirements. As a rule, the following demands are imposed:

1. Zero or negligibly small static error. This can be formulated mathematically:

$$\lim_{t \to \infty} y(t) = v(t). \quad (1)$$

In addition, in the case of linear systems, this requirement can be expressed in terms of images:

$$Y(0) = V(0). \quad (2)$$

This requirement can also be expressed as a requirement on the open-loop transfer function:

$$w < w_0 \Rightarrow |W_P(w)| \ge 10^J. \quad (3)$$

Here J is a positive integer, for example, 10; w0 is a small value of frequency; WP is the open-loop transfer function. Figure 1 shows a transient process with a steady error, which contradicts requirement 1.

Fig. 1. Transient process with a fixed error

Fig. 2. Transient process with overshooting

2. A small overshoot that does not exceed a preset value, expressed as a percentage of the task jump that caused it. In particular, there may be a requirement for no overshoot at all. Figure 2 shows a transient process with overshoot, which contradicts requirement 2.

3. Short duration of the transient process, lasting until the error decreases below a certain small threshold. For example, the time to reach a relative error of 5% can be specified:

$$t > t_{0.05} \Rightarrow |e(t)| < 0.05\,v(t). \quad (4)$$

4. Absence of oscillations, or a small number of them, or a bound on the ratio of the amplitude of each oscillation to the amplitude of the previous one (damping index). Figure 3 shows a transient process with oscillations, which contradicts requirement 4.

5. No reverse overshoot. Reverse overshoot is motion of the output signal in the direction opposite to the prescribed direction of the change. Figure 4 shows a transient process with a reverse overshoot, which contradicts requirement 5.

In addition, a transient process may have features that worsen its attractiveness but are often permissible; the presence of such features is not by itself a reason to consider the system bad or inoperable. However, in some special cases such features may be extremely undesirable.


Fig. 3. Transient process with oscillations

Fig. 4. Transient process with reverse overshoot

6. Non-monotonic course of the transient process. Figure 5 shows a transient process with non-monotonic motion, which contradicts requirement 6.

Fig. 5. Transient process with non-monotonic motion
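Requirements 1–5 above can be measured automatically on a sampled step response. A minimal sketch; the helper and the test signal (a standard underdamped second-order step response) are illustrative, not from the paper:

```python
import math

def transient_metrics(ts, ys, setpoint=1.0, tol=0.05):
    """Measure requirements 1-5 on a sampled step response
    (illustrative helper, not from the paper)."""
    overshoot = max(0.0, (max(ys) - setpoint) / setpoint * 100.0)  # req. 2
    reverse = min(ys) < 0.0                        # req. 5: reverse overshoot
    static_err = abs(ys[-1] - setpoint)            # req. 1 (approximate)
    settle = ts[0]
    for i in range(len(ys) - 1, -1, -1):           # req. 3: last exit from band
        if abs(ys[i] - setpoint) > tol * setpoint:
            settle = ts[min(i + 1, len(ts) - 1)]
            break
    return {"static_error": static_err, "overshoot_pct": overshoot,
            "settling_time": settle, "reverse_overshoot": reverse}

# Illustrative underdamped second-order step response (sigma = 2, omega_d = 3).
ts = [0.01 * i for i in range(1000)]
ys = [1 - math.exp(-2 * t) * (math.cos(3 * t) + 2 / 3 * math.sin(3 * t))
      for t in ts]
m = transient_metrics(ts, ys)
```

For this signal the overshoot is about 12% and the 5% settling time about 1.5 s; requirement 4 (oscillation count) could be added by counting sign changes of ys − setpoint.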

4 Demands to the Cost Function

The requirements for the regulator coefficients are expressed through an objective function. The optimization procedure must find its extremum, that is, a maximum or a minimum. If the procedure seeks the maximum, the objective function is called a profit function; if it seeks the minimum, a cost function. When using the VisSim software, it is most convenient to use a cost function, that is, a positive-definite function calculated from the results of simulating the transient process, which should take the minimum of all possible values at the computed arguments, in this case at the found coefficients of the regulator. If, according to the technical requirements, the objective function is specified as a profit function, then it is easy to form a cost function from it by setting it equal to the value opposite to the profit function, i.e., the profit with a negative sign. So that this function does not become negative, a constant positive value known to be greater than any current value of the profit can be added to it. To treat lost profit as cost or prevented


losses as profit is logical; this interpretation makes it easy to move from cost to profit and back and, therefore, to ensure that the minimum of the cost function corresponds to the best adjustment of the regulator. At least the following requirements are imposed on the cost function:

1. The cost function should depend, directly or indirectly, on all the parameters that need to be optimized. It is not necessary that this dependence be expressed analytically; it is enough that it exists.
2. The cost function must be a real nonnegative function for all values of all optimized parameters.
3. The cost function must correspond to the optimization objectives: the better the optimization goals are achieved, the smaller the cost function.
4. The dependence of the cost function on each parameter should be smooth and have a single minimum, and reaching this minimum must correspond to the objectives of the control (the goals of regulator adjustment).
5. The cost function must not decrease indefinitely with unbounded growth (in absolute value) of any optimized parameter.

If the cost function does not depend on even one of the coefficients, then in the optimization process this coefficient will increase to infinity or decrease to minus infinity, the procedure will not terminate (the software will interrupt it), and the optimization will not be successful. If the dependence of the cost function is not smooth, the optimization procedure will be unstable. This will manifest itself either in the fact that each attempt to continue the optimization from the achieved values leads to new values far from the initial ones, or in the result of the optimization depending on the starting conditions (or both). If the cost function has several minima, the optimization procedure can give different results depending on the starting conditions. This defect can be corrected using a global optimization procedure, i.e., finding the global minimum of the cost function. If the cost function decreases unboundedly with the growth of at least one parameter, the optimization procedure will also not complete, since it will be interrupted because the absolute value of this parameter reaches an unacceptably large value. A global optimization procedure can be obtained by applying the usual optimization procedure repeatedly from different starting values, storing the results (the obtained values of the cost function), and then choosing the result that corresponds to the minimum among them.
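The multistart scheme just described can be sketched directly. The crude derivative-free local search below stands in for the optimizer built into the modeling software, and the two-minimum toy cost is illustrative:

```python
def local_minimize(f, x0, step=0.5, tol=1e-6):
    """Crude derivative-free descent (stands in for the local optimizer
    built into the modeling software): shrink the step until convergence."""
    x, fx = x0, f(x0)
    while step > tol:
        moved = False
        for cand in (x - step, x + step):
            fc = f(cand)
            if fc < fx:
                x, fx, moved = cand, fc, True
        if not moved:
            step *= 0.5
    return x, fx

def multistart_minimize(f, starts):
    """Global procedure from the text: run the local search from several
    starting values, store the results, and keep the smallest cost."""
    return min((local_minimize(f, s) for s in starts), key=lambda r: r[1])

# Illustrative two-minimum cost: a local minimum near k = 1 and the
# global one near k = -1.
cost = lambda k: (k * k - 1.0) ** 2 + 0.3 * k
x_best, f_best = multistart_minimize(cost, [-2.0, 0.5, 2.0])
```

A single local run started at k = 0.5 gets stuck in the shallower minimum near k = 1; the multistart wrapper recovers the global one near k = −1.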

5 Tools of the Cost Functions

In the simulation, various transient processes can be obtained depending on the values of the PID-regulator coefficients. By comparing the various transient processes, the developer can choose the preferred options and discard the unacceptable ones. The developer of the system always strives to ensure the greatest speed, the least overshoot


and the smallest static error. If, however, a change in some coefficient improves one indicator, it simultaneously worsens another indicator of the quality of the transient process; hence it is difficult to choose the best option. In particular, an increase in a coefficient can simultaneously increase both the speed and the overshoot, and from the form of the transients it may not be clear to the developer which of the two variants is more attractive. Therefore, a quality criterion is necessary that combines all other criteria into a single characteristic. In some cases, the developer can use two or more criteria, and even if there is no reason to choose one criterion from many, it is sufficient to show that both transient processes satisfy the technical requirements for the locked-loop system. Still, there is one problem that cannot be solved without a single criterion of system quality: automatic iterative optimization of the regulator coefficients. We have already mentioned that the VisSim software tool can automatically optimize one or more parameters if a quality criterion is given. The criterion of quality can be any cost function that satisfies the requirements imposed on it. The cost function in general is written in the form of a functional:

$$\Psi(T) = \int_0^H \psi(t)\,dt. \quad (5)$$

Here H is the duration of the simulated transient process, and under the integral there is a function of time. As a rule, this function is associated with the transient process in the system during the development of a jump or another kind of change in the perturbation h(t) or the prescription v(t). The most obvious, but not the best, option is the following:

$$\Psi_1(T) = \int_0^H e^2(t)\,dt. \quad (6)$$

The following cost function is more effective in comparison with (6):

$$\Psi_2(T) = \int_0^H |e(t)|\,dt. \quad (7)$$
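Discretized, (6) and (7) are straightforward to evaluate on a simulated error record. A minimal sketch with an illustrative exponentially decaying error, for which the integrals over this horizon are analytically 0.5 and 1.0:

```python
import math

def ise(err, dt):
    """Relation (6): integral of the squared error (rectangle rule)."""
    return sum(e * e for e in err) * dt

def iae(err, dt):
    """Relation (7): integral of the error modulus (rectangle rule)."""
    return sum(abs(e) for e in err) * dt

# Illustrative error transient e(t) = exp(-t) on [0, 10).
dt = 0.001
err = [math.exp(-i * dt) for i in range(10000)]
W1, W2 = ise(err, dt), iae(err, dt)
```

Because |e(t)| < 1 over most of the record, the squared error in (6) down-weights the long small-error tail, which is one reason (7) is preferred in the text.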

However, relation (7) is also not the best choice. The best cost function is obtained when the integrand is the sum of several elementary functions with corresponding weight coefficients. In general, this function can be written as follows:

$$\Psi(H) = \int_{t=0}^{H} \sum_{q=1}^{Q} p_q\,\psi_q(t)\,dt. \quad (8)$$

Here, the cost function is defined as the time integral of the weighted sum of the positive-definite functions ψq, from the beginning of the transient process t = 0 to its end t = H. The weight coefficients pq allow us to establish the ratio of the contributions of each of these functions. One of the effective functions ψq for (8) is the error modulus |e(t)| multiplied by the time t from the beginning of the transient process:

$$\psi_1(t) = |e(t)|\,t. \quad (9)$$

The use of such a function allows us to find the regulator that most effectively reduces the error modulus multiplied by time. The expediency of reducing the error modulus requires no justification, and the multiplication of this quantity by time is justified by the fact that the more time has elapsed since the jump that caused the error, the better the remnants of this error should be suppressed. The initial value of the error is excluded from the objective function entirely, because at that moment t = 0. The factor t plays the role of a continuously and linearly increasing weighting coefficient. Time can also be raised to some positive power, for example, t². This strengthens the requirement for rapid fading of the error and weakens the requirement on its magnitude at the very beginning of the transient process. The disadvantage of a cost function based only on the term (9) in relation (8) is that a regulator tuned by optimization with such a cost function often produces oscillations in the transient process of the resulting system. Several modifications of the objective function can be proposed to suppress oscillations. For example, an additional term may grow when the overshoot exceeds a certain prescribed value. If it is required that the overshoot not exceed 10%, then for a linear system this means that for a unit step jump the output signal, changing from zero to one, should never exceed 1.1. In this case, the developer can form a "penalty" addition equal to the positive part of the difference between the output value and 1.1. If this difference is negative, the penalty function is zero; if it is positive, the penalty equals the value of the following function:

$$\psi_2(t) = \max\{0,\ x(t) - 1.1\}. \quad (10)$$

Here the function max{0, f} is a limiter:

$$\max\{0, f\} = \begin{cases} 0, & \text{if } f < 0, \\ f, & \text{if } f \ge 0. \end{cases} \quad (11)$$

This cost function is relevant only for the development of a unit step action; for other test signals it must be changed.
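The limiter (11) and the overshoot penalty (10) can be sketched directly; the two sampled responses below are illustrative, not from the paper:

```python
def limiter(f):
    """Relation (11): max{0, f}."""
    return f if f > 0 else 0.0

def overshoot_penalty(x):
    """Relation (10): nonzero only where the unit-step response x(t)
    exceeds 1.1, i.e. where the overshoot is above 10%."""
    return [limiter(v - 1.1) for v in x]

# Two illustrative responses: one stays below 1.1, one peaks at 1.25.
ok = [min(1.0, 0.2 * i) for i in range(20)]
bad = [1.25 if 8 <= i <= 10 else min(1.0, 0.2 * i) for i in range(20)]
p_ok = sum(overshoot_penalty(ok))     # zero: no forbidden overshoot
p_bad = sum(overshoot_penalty(bad))   # positive: three samples exceed 1.1
```

Summed (or integrated) over the record and weighted, the penalty becomes one of the ψq terms in (8).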


Another and more effective way to suppress oscillations in the transient process is the use of the error growth detector [28]:

$$\psi_3(t) = \max\left\{0,\ e(t)\,\frac{de(t)}{dt}\right\}. \quad (12)$$

For the best transient processes, the product of the error and its derivative must be negative. In this case, that is, if the error and its derivative have different signs, the error value decreases during the process, the function (12) is zero, and its contribution to the cost function (8) is also zero. This situation corresponds to the desired development of the process. If the error and its derivative have the same signs, the error grows in magnitude, the product of the error and its derivative is positive, and the function (12) is also positive. Then the cost function (8) with the addition of (12) under the integral increases as a result of integrating the positive function (12). The optimization procedure will find regulator parameters that minimize the value of relation (8); consequently, it minimizes the parts of the transient process in which (12) is nonzero. The term (12) does not ensure the absence of transient regions in which the error grows, but it makes the contribution of such regions minimal, that is, it minimizes their extent and the value of (12) on them. In relation (6), the corresponding term has the form

$$\psi_4(t) = [e(t)]^2. \quad (13)$$

It does not work well enough, but in some cases this cost function is justified. At the beginning of any process, the error is large. If the prescription is v(0) = 1, then the initial error is e(0) = 1. During the rest of the process, the error modulus is much less than unity; therefore, the term (13) has an initial value that cannot be reduced. In the general case, the developer can use the error modulus raised to some power, multiplied by the time raised to another integer power. Then the cost function is the following:

$$\Psi(H) = \int_0^H |e(t)|^M\,t^N\,dt. \quad (14)$$

For M = 1, N = 1, we obtain the situation with the function (9); for M = 2, N = 0 we obtain (6); and for M = 1, N = 0 we obtain (7). This is a positive-definite function, that is, it cannot be negative for any values of its arguments. We investigated the dependence of the effectiveness of this cost function on the degrees M and N. It was shown that in the case of relatively simple linear objects this function works most effectively at M = 1, N = 1, and also at M = 2, N = 3, and in some other cases. In general, growth of the exponent M requires a stronger growth of the exponent N. However, because of the possibility of using the composite cost function (8), it is M = 1, N = 1, that is, relation (9), that can be recommended for most practical problems.
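The family (14) can be probed numerically. In this sketch (illustrative signals, not from the paper), two error records with equal integral of the modulus are distinguished by the recommended setting M = 1, N = 1, which penalizes the slowly decaying tail:

```python
import math

def cost_mn(err, dt, M, N):
    """Relation (14): integral of |e(t)|^M * t^N (rectangle rule)."""
    return sum(abs(e) ** M * (i * dt) ** N for i, e in enumerate(err)) * dt

# Two illustrative error records with (nearly) equal integral of |e|:
dt = 0.001
n = 10000
fast = [math.exp(-2 * i * dt) for i in range(n)]    # decays quickly
slow = [0.5 * math.exp(-i * dt) for i in range(n)]  # same area, long tail
itae_fast = cost_mn(fast, dt, 1, 1)   # analytic value 1/4
itae_slow = cost_mn(slow, dt, 1, 1)   # analytic value 1/2
```

With M = 1, N = 0 (relation (7)) the two records cost the same; the time weighting N = 1 is what makes the slow tail expensive.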


When creating a composite cost function, it is necessary to be guided by the principles of complementarity, competition, and completeness. We shall call the integral of each of the terms under the integral in (8) a particular criterion; the sum of these particular criteria equals the cost function as a whole.

6 The Principle of Complementarity

If any of the particular criteria works correctly but not sufficiently, then other particular criteria should complement its action. The contribution of each of them should influence the optimization result, which is ensured by the selection of weight coefficients.

7 The Principle of Competition

If a particular criterion acts on the result in a way that increases the regulator's coefficients, then it is most effectively supplemented by a particular criterion that acts in the direction of decreasing these coefficients. For example, terms that increase with a large overshoot in the system tend to increase the coefficient of the derivative link but to decrease the coefficient of the proportional link. Criteria depending on the dynamic error act to increase the coefficient of the proportional link. A criterion that depends on the integral of the error contributes to an increase in the coefficient of the integral link, but it may not be sufficient. For this purpose, the cost function (8) contains, as a term, the integral of the product of the error modulus and the time from the beginning of the transient process.

8 The Principle of Completeness

If the regulator has an integral link, this may be because the system requires zero static error. If the cost function does not include terms that increase dramatically when the static error is nonzero, then the coefficient of the integral link will not be calculated reasonably enough by the optimization procedure. If second-order astatism is required of the system, it is not enough to introduce double integration into the regulator structure; it is also necessary to introduce into the cost function a term that sharply increases if the system does not possess second-order astatism.

9 Additional Considerations for Choosing a Cost Function

One of the best-performing variants of the cost function follows from relation (14) with M = 1, N = 1, that is, it is based on (9):


$$\Psi(T) = \int_0^H t\,|e(t)|\,dt. \quad (15)$$

Since e(t) depends on the coefficients of the proportional, integral, and derivative links of the regulator (P, I, D), the first condition is satisfied. Further terms can be added to this basic variant, proceeding from special requirements. The presence of t in relation (15) will eliminate the static error shown in Fig. 1. In particular, if it is necessary to restrict the search for coefficients within certain bounds, for example, so that all coefficients do not exceed the maximum permissible values, it is sufficient to introduce terms that are zero as long as the coefficients remain within the allotted boundaries but sharply increase once they go beyond these limits. To do this, it is enough to calculate the difference between each coefficient and its permissible value: if the difference is negative, the result is set to zero; if it is positive, it is left as is; then all these differences are summed, multiplied by the corresponding weights, and added to relation (15), preferably under the integral. This method converts a global search into a local one; therefore, even if the task has no solution as a global search, after conversion to a local task it will have one. In addition, it is advisable to add to the specified cost function a term of the form (12) [28]. This will dramatically increase the calculated value of the cost function in the case of all types of overshoot, as well as in the case of an oscillatory or non-monotonic transient process. Thus, such an addition will exclude solutions characterized by the disadvantages shown in Figs. 2, 3, 4 and 5. It also turns out to be extremely expedient to introduce into the objective function a term expressing the energy cost of control, i.e., the integral of the square of the control signal [32]. The control signal is formed at the output of the regulator and is easy to obtain when modeling. The introduction of such a term allows one to find the solution with the greatest savings of energy spent on control. This addition works especially effectively if there is an integrator in the object. It should be especially emphasized that the form of the cost function by itself does not sufficiently determine the formulation of the optimization problem. It should be considered only in conjunction with the test action applied to the system model during optimization. In particular, if second-order astatism is required, this means that the system should have zero error when tracking a linearly increasing task or disturbance. If the test signal during optimization is a simple step action, this will not allow effective calculation of the coefficient of the double integrator, which is responsible for second-order astatism. At the same time, simulation revealed that if such an optimization uses only a linearly increasing signal, the resulting system can be quite effective in tracking such an input but can have an extremely large overshoot when responding to a step jump. Therefore, in solving this problem, a complex type of test action is required, containing a section with a jump and a section with a linearly increasing task.
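Several of the additions described in this section can be combined with (9) into one composite cost in the spirit of (8). A sketch with coefficient-bound penalties and the control-energy term; the weights, signals, and helper names are illustrative assumptions:

```python
import math

def bound_penalty(coeffs, limits):
    """Soft-constraint term from the text: zero while each regulator
    coefficient stays within its permissible value, growing once it leaves."""
    return sum(max(0.0, abs(k) - lim) for k, lim in zip(coeffs, limits))

def composite_cost(ts, err, u, coeffs, limits, w_u=0.1, w_b=100.0):
    """Weighted sum in the spirit of (8): the time-weighted error modulus (9),
    a control-energy term [32], and the coefficient-bound penalty.
    The weights w_u and w_b are illustrative."""
    dt = ts[1] - ts[0]
    itae = sum(t * abs(e) for t, e in zip(ts, err)) * dt
    energy = sum(v * v for v in u) * dt
    return itae + w_u * energy + w_b * bound_penalty(coeffs, limits)

ts = [i * 0.01 for i in range(500)]
err = [math.exp(-t) for t in ts]
u = [1.0 - e for e in err]            # placeholder control record
c_in = composite_cost(ts, err, u, (1.0, 0.5, 0.1), (10.0, 10.0, 10.0))
c_out = composite_cost(ts, err, u, (12.0, 0.5, 0.1), (10.0, 10.0, 10.0))
```

With a large w_b, any coefficient that leaves its permitted range dominates the cost, which is exactly how the text converts the global search into a local one.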

Formalization of Requirements for Locked-Loop Control Systems


Another way to meet these two requirements simultaneously is to simulate two systems in parallel, in which the object models and controller models coincide but the test actions differ. In this case, the test action on one of the systems is a step jump, and on the other a linearly increasing task. The cost function should then be a weighted sum of the cost functions calculated for each of these systems, with the weights determining the trade-off between the two requirements. It is not possible to formalize the requirements for the weight coefficients; we can only recommend investigating the resulting transients in the system. If the dynamic error in tracking the linearly increasing task tends to zero rather quickly while the overshoot in response to the step action is too large, then the weight of the cost function calculated in the system with the step task should be increased. If the overshoot is sufficiently small but the damping rate of the dynamic error is insufficient, this weight should be reduced.
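The two-scenario idea can be sketched as follows. This is a hedged toy setup, not the authors' one: a PID controller on an assumed first-order plant is simulated twice, once with a step task and once with a ramp, and a weighted sum of the two cost values is minimized by a crude random search (the plant time constant, weights and search parameters are all illustrative).

```python
# Toy sketch: weighted two-scenario PID tuning (step + ramp test actions).
# The first-order plant, Euler integration and random search are assumptions.
import random

def simulate(pid, setpoint, dt=0.01, steps=2000, tau=0.5):
    kp, ki, kd = pid
    y = i_term = prev_e = 0.0
    cost = 0.0
    for n in range(steps):
        e = setpoint(n * dt) - y
        i_term += e * dt
        u = kp * e + ki * i_term + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u) / tau          # first-order plant dy/dt = (u - y)/tau
        cost += abs(e) * (n * dt) * dt    # time-weighted error integral
    return cost

def total_cost(pid, w_step=1.0, w_ramp=1.0):
    j_step = simulate(pid, lambda t: 1.0)  # step jump
    j_ramp = simulate(pid, lambda t: t)    # linearly increasing task
    return w_step * j_step + w_ramp * j_ramp

random.seed(0)
best = (1.0, 1.0, 0.0)
best_j = total_cost(best)
for _ in range(200):                       # crude random search over P, I, D
    cand = tuple(max(0.0, b + random.uniform(-0.5, 0.5)) for b in best)
    j = total_cost(cand)
    if j < best_j:
        best, best_j = cand, j
```

Raising `w_step` relative to `w_ramp` biases the search toward less step overshoot, exactly the tuning knob described above.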

10 Conclusion

In this paper, all the developed criteria for forming the cost functions used in the design of regulators by the method of numerical optimization are brought together. These studies can be continued, since there remain classes of fairly complex objects whose control is still a challenge. In particular, this applies to objects that are extremely prone to oscillations due to the presence of internal circuits with positive feedback. When controlling such objects, it is in some cases extremely difficult to eliminate the reverse overshoot shown in Fig. 4; nevertheless, it can be significantly reduced to the values that are in principle achievable with such object models.

References

1. http://www.vissim.nm.ru/download.html
2. Ang, K.H., Chong, G., Li, Y.: PID control system analysis, design, and technology. IEEE Trans. Control Syst. Technol. 13(4), 559–576 (2005)
3. Skogestad, S.: Simple analytic rules for model reduction and PID controller tuning. J. Process Control 13(4), 291–309 (2003)
4. Wang, Q.-G., Zhang, Z., Astrom, K.J., Chek, L.S.: Guaranteed dominant pole placement with PID controllers. J. Process Control 19(2), 349–352 (2009)
5. Shmaliy, Y.: Continuous-Time Systems. Springer, Dordrecht (2007)
6. Ichikawa, A., Katayama, H.: Linear Time Varying Systems and Sampled Data Systems. Springer, London (2001)
7. Amato, F.: Robust Control of Linear Systems Subject to Uncertain Time-Varying Parameters. Springer, Berlin (2006)
8. Berdnikov, V.P.: Algorithm of determination of non-stationary nonlinear systems full stability areas. Russ. Technol. J. 5(6), 55–72 (2017)
9. Berdnikov, V.P.: Modified algorithm of determination of non-stationary nonlinear systems full stability areas. Russ. Technol. J. 6(3), 39–53 (2018)


V. Zhmud et al.

10. Sastry, S.: Nonlinear Systems: Analysis, Stability, and Control, p. 668. Springer, New York (1999)
11. Merlet, J.: Parallel Robots. Solid Mechanics and its Applications, p. 394. Kluwer Academic Publishers, Dordrecht (2000)
12. Wang, J., Gosselin, C.M.: A new approach for the dynamic analysis of parallel manipulators. Multibody Syst. Dyn. 2(3), 317–334 (1998)
13. Briot, S., Khalil, W.: Dynamics of Parallel Robots: From Rigid Bodies to Flexible Elements. Springer, Heidelberg (2015)
14. Zhang, Y., Luo, J., Hauser, K.: Sampling-based motion planning with dynamic intermediate state objectives: application to throwing. In: 2012 IEEE International Conference on Robotics and Automation (ICRA), pp. 2551–2556 (2012)
15. Mori, W., Ueda, J., Ogasawara, T.: 1-DOF dynamic pitching robot that independently controls velocity, angular velocity, and direction of a ball: contact models and motion planning. In: Proceedings of the IEEE ICRA 2009, pp. 1655–1661 (2009)
16. Senoo, T., Namiki, A., Ishikawa, M.: High-speed throwing motion based on kinetic chain approach. In: Proceedings of the IEEE/RSJ IROS 2008, pp. 3206–3211 (2008)
17. Kober, J., Wilhelm, A., Oztop, E., Peters, J.: Reinforcement learning to adjust parametrized motor primitives to new situations. Auton. Robots 33(4), 361–379 (2012)
18. Khalil, H., Saberi, A.: Adaptive stabilization of a class of nonlinear systems using high-gain feedback. IEEE Trans. Autom. Control 32(11), 1031–1035 (1987)
19. Qian, C., Lin, W.: Output feedback control of a class of nonlinear systems: a nonseparation principle paradigm. IEEE Trans. Autom. Control 47(10), 1710–1715 (2002)
20. Marino, R., Tomei, P.: Output regulation for linear minimum phase systems with unknown order exosystem. IEEE Trans. Autom. Control 52, 2000–2005 (2007)
21. Marino, R., Tomei, P.: Global estimation of unknown frequencies. IEEE Trans. Autom. Control 47, 1324–1328 (2002)
22. Bobtsov, A.: New approach to the problem of globally convergent frequency estimator. Int. J. Adapt. Control Signal Process. 22(3), 306–317 (2008)
23. Zhmud, V., Liapidevskiy, A., Prokhorenko, E.: The design of the feedback systems by means of the modeling and optimization in the program VisSim 5.0/6. In: Proceedings of the IASTED International Conference on Modelling, Identification and Control, AsiaMIC 2010, Phuket, Thailand, 24–26 November 2010, pp. 27–32 (2010)
24. Zhmud, V., Yadrishnikov, O., Poloshchuk, A., Zavorin, A.: Modern key technologies in automatics: structures and numerical optimization of regulators. In: Proceedings of the 2012 7th International Forum on Strategic Technology, IFOST 2012, Tomsk, Russia (2012)
25. Zhmud, V., Yadrishnikov, O.: Numerical optimization of PID-regulators using the improper moving detector in cost function. In: Proceedings of the 8th International Forum on Strategic Technology (IFOST 2013), vol. II, Ulaanbaatar, Mongolia, 28 June–1 July 2013, pp. 265–270 (2013)
26. Zhmud, V., Zavorin, A.: Method of designing energy-efficient controllers for complex objects with partially unknown model. In: Proceedings of the XVI International Conference on Control and Modeling in Complex Systems, Samara, Russia, 30 June–3 July 2014, pp. 557–567 (2014)
27. Zhmud, V., Dimitrov, L.: Designing of complete multi-channel PD-regulators by numerical optimization with simulation. In: Proceedings of the 2015 International Siberian Conference on Control and Communications, SIBCON 2015 (2015)
28. Zhmud, V., Yadrishnikov, O., Semibalamut, V.: Control of the objects with a single output and with two or more input channels of influence. WIT Trans. Model. Simul. 59, 147–156 (2015). https://www.witpress.com



29. Zhmud, V., Dimitrov, L.: Investigation of the causes of noise in the result of multiple digital derivations of signals: researches with mathematical modeling. In: 11th International IEEE Scientific and Technical Conference on Dynamics of Systems, Mechanisms and Machines (Dynamics), Omsk, Russia, 14–16 November 2017 (2017)
30. Zhmud, V., Dimitrov, L., Roth, H.: New approach to numerical optimization of a controller for feedback system. In: 2nd International Conference on Applied Mechanics, Electronics and Mechatronics Engineering (AMEME), Beijing, 22–23 October 2017. Destech Publications Inc. (2017)
31. Zhmud, V., Zavorin, A.: Compensation of the sources of unwanted direction of the transient process in the control of oscillatory object. Autom. Softw. Eng. (3) (2013). http://jurnal.nips.ru/sites/default/files/ASE-3-2013-3.pdf
32. Ivoilov, A., Zhmud, V., Trubin, V., Roth, H.: Using the numerical optimization method for tuning the regulator coefficients of the two-wheeled balancing robot. In: 2018 IEEE Proceedings of the 14th International Scientific-Technical Conference APEIE—44894, Novosibirsk, Russia, pp. 228–236 (2018)

Accented Visualization in Digital Industry Applications

Anton Ivaschenko¹, Pavel Sitnikov², and Georgiy Katirkin³

¹ Samara State Technical University, 443100 Samara, Russia, [email protected]
² ITMO University, Saint-Petersburg, Russia, [email protected]
³ SEC “Open Code”, 443001 Samara, Russia, [email protected]

Abstract. The paper proposes a new approach of accented visualization useful to develop system architectures implementing interactive user interfaces in digital industry applications. The proposed solution is suitable for image data processing, analysis, virtualization and presentation based on Augmented Reality and the Internet of Things. Accented visualization is based on adaptive construction and virtual consideration of the content of the current real scene in the field of view of a person, as well as the viewer’s experience that contains perceptions, points of view and expected behavior. The proposed approach was implemented in a specialized intelligent system for manual operation control. Such a system implements the ideas of Industry 4.0 for smart manufacturing by introducing cyber-physical decision-making support. The overall solution is used to identify gaps and failures of the operator in real time, predict possible operating mistakes and suggest better procedures by comparing the sequence of actions with the experience of highly qualified operators captured in a knowledge base. The results of the industrial implementation of the solution using neural networks and AR accented visualization in practice are presented.

Keywords: Augmented reality · Smart manufacturing · Industry 4.0 · Ontology · Decision-making support

1 Introduction

Modern goals of digital economy development distinguish the concept of Industry 4.0 as one of the key technological trends of today. This concept is based on developing cyber-physical systems capable of monitoring real manufacturing processes, supplementing them with specifically generated virtual entities, and providing contextual and decentralized decision-making support. These features require the implementation of innovative user interfaces suitable for image data processing, analysis, virtualization and presentation based on Augmented Reality and the Internet of Things.

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 366–378, 2019. https://doi.org/10.1007/978-3-030-12072-6_30


Despite high expectations for the successful implementation of these technologies, their practical use remains problematic. Based on comparatively extensive experience in the development of software and information systems for simulation and decision-making support in various problem domains, a number of challenges can be specified, including problems with usability, relevance and performance. To address these issues, a new concept of accented visualization was developed, which improves the efficiency of modern user interfaces used in Industry 4.0.

2 State of the Art

The concept of Industry 4.0 is intensively and massively explored in modern literature [1–3]. It describes a composite solution vision based on the implementation of modern IT technologies to develop cyber-physical systems for smart factories. Based on the existing rich experience of manufacturing automation, it picks out the most efficient technologies that can be applied in practice to improve the efficiency of the overall supply chain. By means of monitoring and the application of intelligent technologies for prediction and decision-making support, Industry 4.0 concentrates on providing maximum controllability of business and production processes. Under these conditions the role of human decision-makers remains critical, which makes it important to provide usable and useful user interfaces. One of the basic technologies used in Industry 4.0 is the Internet of Things [4, 5]. Over the Internet of Things, cyber-physical systems communicate and cooperate with each other and with humans in real time, both internally and across organizational services. Using multiple devices with different functionality as a cloud can improve the quality of monitoring and decision-making support in real time. Modern protocols and architectures of wireless networks allow implementing a variety of topologies at the technical level. Practical implementation of Industry 4.0 is concerned with Big Data processing, which is a separate branch of research and development nowadays [6–8]. The combination of the Internet of Things as a major data source and Big Data analysis technologies as a powerful tool for information processing and analysis is successfully used at modern production enterprises. Still, Big Data analysis turns out to be a challenging problem in industrial applications due to the requirement to process unstructured volumes of data in real time.
The results of this analysis should be presented to decision-makers in the form of illustrative infographics drawing attention to relevant indicators and their critical changes. There is a requirement to avoid overloading users with subordinate information and to make user interfaces context sensitive. Major modern trends of contextual data visualization are widely explored in [9, 10]. The examples are given for medical data but can easily be extended to the description of a complex technical system. It is noted that the main goal of data visualization is to combine several data sets to analyze multiple layers of a biological system at once. The system should interlink all related data sets (e.g., images, text,


measured values, scans) and offer visual analytics to support experts. This approach supports the idea of maximally effective visualization of complex data for professionals instead of automatic decision-making. This concept gains a foothold and is taken further in [11], which presents the “human-in-the-loop” approach, according to which humans (decision makers) are not only involved in pre-processing, by selecting data or features, but also during the learning phase, directly interacting with the algorithm. The main reason is that when the set of parameters needed for decision-making is large in size and various in type, it becomes problematic to present them in a single picture. Therefore, it is proposed to involve the decision maker in the process of data processing and visualization by means of continuous interaction with the system, which helps optimize the learning behavior of both humans and algorithms. One of the most promising yet challenging technologies of contextual data visualization is Augmented Reality (AR) [12–14]. Various AR devices (goggles, head-mounted displays or widely spread tablets) provide overlaid information, additive to or masking the real environment, that can be informative for the users and provide decision-making support. AR technology allows developing interactive and context-dependent user interfaces that provide the possibilities of computer vision and object recognition in real time. These features make AR a powerful tool for implementing Industry 4.0 in practice that can considerably improve the capabilities of computer-human interfaces. At the same time AR faces the same core usability challenges as traditional interfaces, such as the potential for overloading users with too much information and making it difficult for them to determine a relevant action.
However, AR exacerbates some of these problems because multiple types of augmentation are possible at once, and proactive apps run the risk of overwhelming users. To solve the problems of AR usability, the following issues should be mitigated: (a) the limitations of the AR goggles available on the market, (b) the low performance and quality of image recognition algorithms capable of functioning in real time, and (c) the lack of a methodology for the development and practical implementation of AR-based user interfaces. Applying AR to Industry 4.0 makes it possible for users to be trained for their tasks, and actively assisted during their performance, without ever needing to refer to separate paper or electronic technical orders [15–17]. Incorporating instruction and assistance directly within the task domain, and directly referencing the equipment at which the user is looking, could eliminate the current need for personnel to continually switch their focus of attention between the task and its separate documentation. As a result, the user can receive visual aids through user interfaces that naturally support each task without distraction. Summarizing this overview, there can be stated the problem of AR implementation in Industry 4.0 using modern technologies of context-driven and adaptive user interfaces. First results in this area [18–20] were improved and transformed into the idea of accented visualization, which is discussed below as the main subject of this paper.


3 Accented Visualization Approach

In this section a combined formal model is proposed for contextual visualization of overlay data using an AR device. This model was originally developed for maintenance manuals of technical equipment and is based on the experience of AR and VR projects, but it can be extended and used in industrial applications. The model described below is presented for an example of manufacturing decision-making support using AR. The notion of accented visualization is based on adaptive construction and virtual consideration of the content of not only the current real scene in the field of view of a person, but also the viewer’s experience that contains perceptions, points of view and expected behavior. This is especially true for the actual support of work and decision-making, when a correct solution to the situation is required. Such a system can treat the handling of a situation by performers with different education, knowledge and skills in a unified way.

Let us consider the scene $s_j$, where $j = 1..N^s$ is the scene number, that contains a number of real objects $w_{i,j}$, $i = 1..N_j^w$, and corresponding virtual entities $q_{i,j,l}$, $l = 1..N_{i,j}^q$:

$$s_j = \{ w_{i,j},\; q_{i,j,l} \}. \tag{1}$$

Objects are considered independent in view of various features of visualization in various scenes. Objects can have different displays in different scenes but refer to the same entity in the real world; such objects are considered equal. This equality can be described using a compliance matrix over all scenes:

$$C(w_{i_1,j_1}, w_{i_2,j_2}) = \begin{cases} 1, & w_{i_1,j_1} \equiv w_{i_2,j_2}, \\ 0, & \text{otherwise}. \end{cases} \tag{2}$$

At a certain point in time the viewer’s attention is given to a number of objects (one or more). Each focus on an object can be described by an event represented by a Boolean variable:

$$v_{i,j,k} = v(w_{i,j}, t_{i,j,k}) \in \{0, 1\}. \tag{3}$$

This focus may end or require an action. In this case, a sequence of actions can be considered as a scenario. For example, the service process for a particular device can be described by a scenario of actions organized in a sequence. The work process is formalized by a standard scenario:

$$c_{j,m} = \{ e_{i,j,m,n} \}, \tag{4}$$

where $e_{i,j,m,n} = e(w_{i,j}, d_{i,j,m,n}, t_{i,j,m,n}, \Delta t_{i,j,m,n}) \in \{0, 1\}$.

Events $e_{i,j,m,n}$ represent the facts that actions $d_{i,j,m,n}$ that refer to the objects $w_{i,j}$ have to be done at lap time $t_{i,j,m,n} \pm \Delta t_{i,j,m,n}/2$. An effective manufacturing process requires correspondence of $e_{i,j,m,n}$ and $v_{i,j,k}$:

$$M(e_{i_1,j,m,n}, v_{i_2,j,k}) = e_{i_1,j,m,n} \cdot v_{i_2,j,k} \cdot C(w_{i_1,j}, w_{i_2,j}) \cdot \left[\, t_{i_2,j,k} \in \left[ t_{i_1,j,m,n} - \tfrac{\Delta t_{i_1,j,m,n}}{2},\; t_{i_1,j,m,n} + \tfrac{\Delta t_{i_1,j,m,n}}{2} \right] \right]. \tag{5}$$

AR implementation in this case should support maximum correspondence of $v_{i,j,k}$ to $e_{i,j,m,n}$ and result in prompt contextual data visualization according to the current viewer’s focus. It can be formalized as generating a set of focus attractors in textual, mark or highlight form:

$$Q_{i,m} = \{ q_{i,j,l} \}, \quad q_{i,j,l} = q(w_{i,j}, t_{i,j,l}) \in \{0, 1\}, \tag{6}$$

to meet the following objective:

$$I(s_j, c_{j,m}) = \sum_{n=1}^{N^c_{j,m}} \sum_{k=1}^{N^v_j} \sum_{i=1}^{N^w_j} e_{i,j,m,n} \left( 1 - M(e_{i,j,m,n}, v_{i,j,k}) \right) \left[ \sum_{l=1}^{N^q_j} q_{i,j,l} > 0 \right] = 0. \tag{7}$$

This means that, in order to support a standard scenario, an interactive AR guide should produce virtual entities with contextual data at the moments when the viewer’s attention should be drawn to specific objects.
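A toy reading of the model above may help: a scenario event requires an object to receive the viewer's focus within a time window, and unmatched events are where focus attractors should be generated. This is an assumed, single-scene simplification, not the authors' implementation.

```python
# Simplified single-scene sketch of the event-matching model.
from dataclasses import dataclass

@dataclass
class ScenarioEvent:      # e: action on object `obj` at lap time t +/- dt/2
    obj: str
    t: float
    dt: float

@dataclass
class FocusEvent:         # v: object `obj` focused at time t
    obj: str
    t: float

def matched(e: ScenarioEvent, v: FocusEvent) -> bool:
    """M(e, v) = 1 when the objects coincide and focus falls in the time window."""
    return e.obj == v.obj and abs(v.t - e.t) <= e.dt / 2

def unmatched_events(scenario, focus):
    """Scenario events with no matching focus event: candidates for attractors q."""
    return [e for e in scenario if not any(matched(e, v) for v in focus)]
```

Driving `unmatched_events` to the empty list corresponds to the objective (7) reaching zero.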

4 Solution Architecture

The proposed solution is based on the analysis of user behavior according to the introduced model. Embedded software with intelligent decision-making support captures user behavior in the form of events and compares these event chains with typical operational scenarios. The analysis is performed over the period of the standardized production procedure using cross-correlation functions. Such an analysis allows identifying possible gaps in the viewer’s perception, if no necessary attention is paid to certain scene objects at the necessary times. This knowledge is captured in the form of rules in the knowledge base associated with the specified types of scene objects and steps of operating scenarios. As a result, scenarios are supplied with virtual entities (textual items, marks or highlights) that draw the user’s attention to the necessary scene objects when needed. The solution approach is illustrated in Fig. 1. It comprises two modules that maintain the knowledge base of user actions and production scenarios in the form of an Ontology, and several decision-making support modules that analyze the user’s actions and focus attractors.
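The cross-correlation comparison can be sketched as follows. This is an assumed encoding, not the authors' code: each chain is sampled into a 0/1 sequence per time slot, and the lag of maximum normalized correlation shows how far the observed chain is shifted against the reference scenario.

```python
# Sketch: comparing a captured event chain with a typical scenario by
# normalized cross-correlation over candidate lags. Encoding is assumed.
import math

def xcorr(ref, obs, lag):
    """Normalized correlation of obs shifted by `lag` against ref."""
    pairs = [(ref[n], obs[n - lag]) for n in range(len(ref)) if 0 <= n - lag < len(obs)]
    num = sum(a * b for a, b in pairs)
    den = math.sqrt(sum(a * a for a, _ in pairs) * sum(b * b for _, b in pairs))
    return num / den if den else 0.0

def best_lag(ref, obs, max_lag=5):
    """Lag at which the observed chain best aligns with the reference scenario."""
    return max(range(-max_lag, max_lag + 1), key=lambda l: xcorr(ref, obs, l))
```

A low peak correlation at every lag would indicate a perception gap: required scene objects never received attention, so attractors should be generated.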


Different users have different behaviors: some prefer to obtain the maximum information displayed in the field of view, others try to reduce non-essential data, leaving only the absolute minimum that is important. To capture the appropriate preferences and adjust the system to the user, special eye tracking systems can be used that determine and track the movement of the eyes using a front camera. The first experience of research in this field ended with an understanding of the difference between eye movement in the case of conventional (screen-based) user interfaces and AR. The main difference is that in the case of AR interfaces additional efforts are required to provide control elements to the user in the expected places.

Fig. 1. Accented visualization solution.

As part of the proposed solution, an original algorithm for the identification of production objects was developed. An ontology of industrial production was introduced to describe the critical features or characteristics of each type of object and form. The problem of its use in practice concerns the possible overlap of several objects, which may affect the identification accuracy. In addition, objects specific to the manufacturing industry are usually monochrome-colored and have contours of complex geometric shapes. To overcome this, an intelligent algorithm based on neural networks was introduced, which helps to identify production objects by a partial view. The user’s focus coordination is based on an intelligent analysis of the production or maintenance process (implemented by the intelligent navigator module). The system tracks user attention and adapts the additional data introduced into the virtual scene according to the current context and need. The user’s focus is captured in the form of event chains and compared with typical scenarios. Context is a set of concepts that describe the current


situation and background that determines the decision. Focus is a concrete object processed at a certain moment. Such a decomposition allows introducing a control loop, where the correct focus is generated according to the context in real time. The analysis is performed over the time frame of the standardized production procedure using cross-correlation functions; it allows identifying possible gaps in the viewer’s perception, if no required attention is given to certain scene objects at the necessary times. This knowledge is captured in the form of rules in the Ontology linked to the specified types of scene objects and the steps of operating scenarios. As a result, the scenarios are supplied with virtual entities (textual items, marks or highlights) that attract the user’s attention to the required scene objects when needed. QA experiments show that, e.g., for a dataset of 74 different objects, 66 were successfully identified, i.e. an efficiency of 89%. The main feature of the proposed approach is the ability to function in real time. The following functionality is currently available:
• scene object identification based on image analysis;
• analysis of complex devices, including identification of components by partial view and generation of assembly tips;
• contextual description of the object in view;
• search for and highlighting of the required object;
• identification of user attention and generation of contextual add-ons according to the principles of accented visualization;
• operating scenario processing, tracking, and control.

5 Implementation. Intelligent System for Manual Operation Control

The proposed approach was implemented in a specialized intelligent system for manual operation control. Such a system implements the ideas of Industry 4.0 for smart manufacturing by introducing cyber-physical decision-making support. The architecture introduced above is implemented as follows. Several video cameras are used to track operations according to the technological process and to identify the objects to be operated in the real scene (details and units). Intelligent software provides image recognition of the objects and their matching with a corresponding description in a knowledge base. Video panels or AR goggles are used to present the corresponding contextual information to an operator. The results of implementation are illustrated by Fig. 2. One can see that no extra restrictions on the working place and lighting (such as a green background) are required. The overall solution is used to identify gaps and failures of the operator in real time, predict possible operating mistakes and suggest better procedures by comparing the sequence of actions with the experience of highly qualified operators captured in the knowledge base. Implementation details are specific in terms of the technologies used. To track and capture the objects, simple web cameras can be utilized: the quality of video is good enough for most up-to-date models. Still, it is recommended to introduce


a minimum of two cameras to reduce the lighting defects caused by occlusion and blur. The quality of lighting turns out not to be critical, which makes the system beneficial for deployment at real enterprises; only considerable lighting changes require additional calibration.

Fig. 2. Intelligent manual operations’ control. Working place.

Intelligent image recognition and object identification was performed using standard neural networks. Several alternative libraries were considered, including Tensorflow (the fastest), Keras, Theano (no longer supported by its developers) and Deeplearning4j (which supports Java). Tensorflow was chosen for the implementation: in addition to high performance, it is distributed under the open Apache 2.0 license, provides APIs for Python, C++, Java, Haskell, Go and Swift, supports Linux, Windows, macOS, iOS and Android, supports cloud computing, and enjoys high popularity among developers. Among the possible neural network topologies (AlexNet, VGG16, GoogLeNet/Inception, etc.), ResNet-50, a Residual Network developed by Microsoft, was chosen. ResNet-50 is a convolutional neural network trained on more than a million images from the ImageNet database [21]. It is 50 layers deep, with direct (shortcut) links between neurons located one level apart. The AR scene is deployed on a tablet or AR goggles; in this example Epson Moverio goggles were used. Using AR allowed implementing an algorithm that improves image recognition by considering the features of the identified components and the results of its implementation and practical use.
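The distinguishing feature of the Residual Network family is the shortcut connection y = f(x) + x, which is what lets a 50-layer network such as ResNet-50 train reliably. A minimal numpy sketch of one residual block follows; the layer sizes and random weights are arbitrary, purely to show the structure, and this is not the system's actual network.

```python
# Minimal illustrative residual block: two weighted layers plus the
# identity shortcut, y = relu(w2 @ relu(w1 @ x) + x). Sizes are arbitrary.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Output of one residual block; the `+ x` term is the shortcut link."""
    return relu(w2 @ relu(w1 @ x) + x)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
```

With zero weights the block reduces to the identity (through the relu), which is why stacking many such blocks does not degrade the signal the way plain deep stacks do.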


A vehicle truck engine unit (a turbo-compressor) was taken as an example. Object identification is illustrated by Fig. 3. The required object is contextually highlighted according to the production technological process (see Fig. 4).

Fig. 3. Intelligent manual operations’ control. Objects identification.

Fig. 4. Intelligent manual operations’ control. Relevant object identified and highlighted


As operation proceeds, the system continues to track the operations and is capable of processing not only individual details but also assembly units (see Fig. 5).

Fig. 5. Intelligent manual operations’ control. Operating in process.

The proposed example required the identification of 8 details in 11 steps of the technological process. Some details were split into separate objects to improve the quality of identification; as a result, 27 objects were introduced for image recognition. Each object (and subunit) was filmed separately from different angles. The footage was fed to a neural network, which gave different values of average identification probability (see Table 1); the resulting dependency is given in Fig. 6. Neural network learning required 27500 frames (with different angles and backgrounds), with a distribution of about 1000 frames per object. The resulting identification probability rose to 0.95, which required 5500000 learning steps. To improve the quality of identification, the system should know the stage of operating, and no foreign objects should appear in the scene. The last requirement corresponds to the 5S methodology of workplace organization. Still, in case side objects do appear, the AR device can mark them and exclude them from the image recognition process. The described intelligent system for manual operation control with the provided identification quality won a specialized contest organized by KAMAZ, the Skolkovo Foundation and the Foundation for Advanced Studies in the nomination for the best industrial solution in August 2018.

Table 1. Details’ identification probability.

| No. of steps | Detail 1 | Detail 2 | Detail 3 | Detail 4 | Detail 5 | Detail 6 | Detail 7 | Detail 8 | Detail 9 |
| 10           | 1.21E−05 | 1.32E−05 | 1.75E−05 | 1.87E−05 | 1.79E−05 | 1.46E−05 | 1.46E−05 | 1.65E−05 | 1.12E−05 |
| 100          | 0.000111 | 0.000123 | 0.000135 | 0.000154 | 0.000153 | 0.000123 | 0.000142 | 0.000124 | 0.000142 |
| 1000         | 0.001234 | 0.001465 | 0.001765 | 0.001424 | 0.001863 | 0.001538 | 0.001414 | 0.001368 | 0.001734 |
| 10000        | 0.0153   | 0.0112   | 0.0147   | 0.0178   | 0.0114   | 0.0132   | 0.0174   | 0.0187   | 0.0124   |
| 100000       | 0.1358   | 0.135    | 0.1142   | 0.114    | 0.177    | 0.188    | 0.172    | 0.168    | 0.187    |
| 1000000      | 0.35325  | 0.3354   | 0.367    | 0.312    | 0.342    | 0.378    | 0.385    | 0.314    | 0.378    |
| 1700000      | 0.502124 | 0.50345  | 0.508    | 0.5034   | 0.50737  | 0.50737  | 0.50357  | 0.50373  | 0.50121  |
| 1800000      | 0.53421  | 0.535234 | 0.53635  | 0.53742  | 0.5367   | 0.53769  | 0.53312  | 0.53423  | 0.5352   |
| 2100000      | 0.56694  | 0.5667   | 0.56457  | 0.56491  | 0.56378  | 0.56706  | 0.56503  | 0.56153  | 0.56349  |
| 2200000      | 0.59148  | 0.59169  | 0.598    | 0.59185  | 0.59973  | 0.59361  | 0.5927   | 0.59749  | 0.5986   |
| 2300000      | 0.62248  | 0.62116  | 0.62192  | 0.6233   | 0.6212   | 0.6248   | 0.62933  | 0.62541  | 0.6284   |
| 2400000      | 0.65338  | 0.655    | 0.65803  | 0.6574   | 0.65706  | 0.65705  | 0.65328  | 0.65828  | 0.65128  |
| 3000000      | 0.6849   | 0.68562  | 0.6881   | 0.68917  | 0.68397  | 0.68224  | 0.6819   | 0.68463  | 0.68467  |
| 3100000      | 0.71988  | 0.71646  | 0.71616  | 0.71235  | 0.71657  | 0.71954  | 0.71831  | 0.71344  | 0.71607  |
| 3200000      | 0.7425   | 0.74324  | 0.74447  | 0.74863  | 0.74689  | 0.74837  | 0.74446  | 0.74313  | 0.74303  |
| 3300000      | 0.77724  | 0.77375  | 0.77357  | 0.7777   | 0.77741  | 0.77443  | 0.77326  | 0.77387  | 0.77576  |
| 3700000      | 0.80475  | 0.80335  | 0.80279  | 0.80497  | 0.803    | 0.80749  | 0.8079   | 0.80494  | 0.8043   |
| 3800000      | 0.83406  | 0.83123  | 0.83291  | 0.83811  | 0.83633  | 0.83427  | 0.83112  | 0.8378   | 0.83343  |
| 4300000      | 0.86215  | 0.86207  | 0.86692  | 0.86602  | 0.86782  | 0.86683  | 0.86837  | 0.8676   | 0.86704  |
| 4600000      | 0.8969   | 0.8985   | 0.89342  | 0.8911   | 0.89447  | 0.89445  | 0.89539  | 0.89107  | 0.89612  |
| 4700000      | 0.908    | 0.9085   | 0.9048   | 0.90425  | 0.90787  | 0.90198  | 0.905    | 0.9044   | 0.90462  |
| 4800000      | 0.91914  | 0.91524  | 0.91971  | 0.91535  | 0.91303  | 0.91249  | 0.91177  | 0.91399  | 0.9142   |
| 5000000      | 0.923    | 0.92422  | 0.92578  | 0.922    | 0.92401  | 0.92234  | 0.92119  | 0.9219   | 0.92151  |
| 5300000      | 0.93779  | 0.93972  | 0.93822  | 0.93735  | 0.93185  | 0.9352   | 0.93781  | 0.93254  | 0.93508  |

Accented Visualization in Digital Industry Applications


Fig. 6. Intelligent manual operations’ control. Neural network learning efficiency.

6 Conclusion

The main benefit of accented visualization in AR is the ability to customize user interfaces so as to reduce non-essential data. The described approach addresses modern problems of AR implementation at industrial enterprises concerned with usability, performance and image recognition quality. Next steps concern improving the quality of image processing and trialing the proposed solution in industry.

References

1. Digital Russia. New Reality. Digital McKinsey, 133 p. (2017). https://www.mckinsey.com/ru/our-work/mckinsey-digital
2. Lasi, H., Kemper, H.-G., Fettke, P., Feld, T., Hoffmann, M.: Industry 4.0. Bus. Inf. Syst. Eng. 4(6), 239–242 (2014)
3. Kagermann, H., Wahlster, W., Helbig, J. (eds.): Recommendations for Implementing the Strategic Initiative Industrie 4.0: Final Report of the Industrie 4.0 Working Group, 82 p. (2013)
4. Hersent, O., Boswarthick, D., Elloumi, O.: The Internet of Things: Key Applications and Protocols, 370 p. Wiley, Chichester (2012)
5. Ivaschenko, A., Novikov, A., Kosov, D., Kuzmin, V.: Moving sensors concept for distributed diagnostics. In: IEEE SAI Intelligent Systems Conference 2015, London, UK, pp. 1051–1053 (2015)
6. Bessis, N., Dobre, C.: Big Data and Internet of Things: A Roadmap for Smart Environments, 450 p. Springer (2014)
7. Baesens, B.: Analytics in a Big Data World: The Essential Guide to Data Science and Its Applications, 232 p. Wiley (2014)
8. Surnin, O.L., Sitnikov, P.V., Ivaschenko, A.V., Ilyasova, N.Yu., Popov, S.B.: Big data incorporation based on open services provider for distributed enterprises. In: CEUR Workshop Proceedings, Proceedings of the International Conference Information Technology and Nanotechnology, Session Data Science (DS-ITNT 2017), vol. 190, pp. 42–47 (2017)
9. Holzinger, A.: Extravaganza tutorial on hot ideas for interactive knowledge discovery and data mining in biomedical informatics. Lecture Notes in Computer Science, vol. 8609, pp. 502–515 (2014)
10. Sturm, W., Schreck, T., Holzinger, A., Ullrich, T.: Discovering medical knowledge using visual analytics – a survey on methods for systems biology and *omics data. In: Eurographics Workshop on VCBM, Eurographics (EG), pp. 71–81 (2015)
11. Holzinger, A.: Interactive machine learning for health informatics: when do we need the human-in-the-loop? Brain Inform. 3(2), 119–131 (2016)
12. Van Krevelen, R.: Augmented reality: technologies, applications, and limitations (2007). https://doi.org/10.13140/rg.2.1.1874.7929
13. Navab, N.: Developing killer apps for industrial augmented reality. IEEE Comput. Graph. Appl. 24(3), 16–20 (2004)
14. Singh, M., Singh, M.P.: Augmented reality interfaces. IEEE Internet Comput. 17(6), 66–70 (2013)
15. Ke, C., Kang, B., Chen, D., Li, X.: An augmented reality based application for equipment maintenance. In: Tao, J., Tan, T., Picard, R.W. (eds.) Affective Computing and Intelligent Interaction. ACII 2005. Lecture Notes in Computer Science, vol. 3784, pp. 836–841. Springer, Heidelberg (2005)
16. Lee, K.: Augmented reality in education and training. TechTrends 56, 13–21 (2012)
17. Friedrich, W.: ARVIKA: augmented reality for development, production and service. Siemens AG, Automation and Drives Advanced Technologies and Standards (2003)
18. Ivaschenko, A., Milutkin, M., Sitnikov, P.: Accented visualization in maintenance AR guides. In: Proceedings of SCIFI-IT 2017, Belgium, EUROSIS-ETI, pp. 42–45 (2017)
19. Ivaschenko, A., Khorina, A., Sitnikov, P.: Accented visualization by augmented reality for smart manufacturing applications. In: 2018 IEEE Industrial Cyber-Physical Systems (ICPS), pp. 519–522. ITMO University, Saint Petersburg (2018)
20. Ivaschenko, A., Sitnikov, P., Milutkin, M., Khasanov, D., Krivosheev, A.: AR optimization for interactive user guides. In: Proceedings of Intelligent Systems Conference (IntelliSys) 2018, 6–7 September 2018, London, UK, pp. 1183–1186 (2018)
21. ImageNet. http://www.image-net.org. Accessed 30 Nov 2018

Dynamic Capabilities Indicators Estimation of Information Technology Usage in Technological Systems

Alexander Geyda
St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, St. Petersburg, Russia
[email protected]

Abstract. The article outlines conceptual and corresponding formal models that provide means for estimating operational properties of information technology usage. Dynamic capability is defined as an operational property of a system that describes its ability to adapt to changes in the system’s environment. Operational property indicators of IT usage are defined as a kind of system operational property indicators under conditions of a changing environment, in such a way that their values can be estimated analytically. Such estimation is fulfilled by plotting the dependences of predicted values of operational properties of IT usage against the variables and options of the problems solved. To develop this type of model, the use of information technologies during system functioning is analyzed through the example of a technological system. General concepts and principles of modeling information technology usage during the operation of such systems are defined. An exemplary modeling of the effects of technological information operations and related technological material (non-information) operations is provided. Based on concept models of the operation of technological systems with regard to information technology usage, set-theoretical models and then functional models of technological systems operating with information technologies are introduced. An example of operational property indicators estimation based on ARIS diagramming tools is considered.

Keywords: Information technology · Efficiency · Efficacy · Effectiveness · Capabilities · Dynamic capabilities · Potential · Potentiality



© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 379–395, 2019. https://doi.org/10.1007/978-3-030-12072-6_31

1 Introduction

Dynamic capabilities are usually defined [1] as the ability of a firm to integrate, build, and reconfigure internal and external competences to address rapidly changing environments. A more detailed definition of dynamic capabilities as a firm’s “behavioral orientation to continuously integrate, reconfigure, renew, and recreate its resources and capabilities, focusing on upgrading and reconstructing its core capabilities in line with dynamic, changing environment to obtain and sustain competitive advantage” was given in [2]. A role of dynamic capabilities consists in “changing internal components of the firm and creating new changes” [3]. As we can see, these definitions describe the ability of a firm or an organization to change, adapt, compete, and perform in a changing environment.

I define system dynamic capability as a systemological property: a system’s ability to pursue its changing goals in its changing environment. This definition is similar to our previous definitions of a system’s potential and other operational properties of systems and of information technology usage [4–8]. Other examples of models and methods for the definition and estimation of such properties can be found in [9–27]. The ability to pursue a system’s changing goals in its changing environment requires a system to check system and environment states and their relations, to learn, to produce information about the actions needed, and then to perform those actions in order to change the system and its actions. This ability manifests on the changing border of the system and its environment. The system must therefore be able to perform information actions to check the states of the system and its environment, to learn, and to produce information about the required actions. Environment changes generate the need for these information actions, which are causal for the non-information actions that follow. Thus, an environment change makes IT usage necessary, which in turn causes IT effects, and IT effects produce dynamic capability effects on the changing border of the system and its environment. IT of this kind is always required for dynamic capability effects to be realized, and environment change is required to generate the need for such IT usage.
Therefore, when one talks about the operational properties of IT usage or dynamic capabilities, one shall estimate the role of IT in the creation of system dynamic capability effects in response to a changing environment. To describe the relations between information and non-information actions and dynamic capability effects during system functioning, concepts and principles (a concept model) of IT application for the realization of dynamic capability effects are suggested. Through applying these concepts and principles, the author reveals general patterns of IT application. The suggested conceptual model provides for a transition first to graph-theoretical and set-theoretical models and then to a functional model (to estimate a probabilistic measure [9]) of IT usage for dynamic capability effects. It is based on patterns of non-information effects development with the use of the information obtained. General concepts and principles of IT usage for the creation of dynamic capability effects, or IT-enabled dynamic capabilities [28], are described in section two; modeling concepts, principles and patterns of such capabilities are described in section three. Examples of schemas for the estimation of operational property indicators, including dynamic capability indicators, are introduced in section four. In section five, prototypes of a software package for the estimation of IT-enabled dynamic capability indicators are described.

2 General Concepts and Principles of IT Usage for Dynamic Capabilities Effects

I shall describe the use of IT through a technological system example. A system is considered technological if its functioning is defined by technological documentation (e.g. manuals, descriptions, instructions). Such systems include, for instance, systems that support the manufacturing of unique products (e.g. in the aerospace industry) and systems for the implementation of state projects and targeted programs. General concepts required for the development of IT application models in the context of technological systems’ dynamic capabilities include: IT, IT application, information, information use, system, system operation, purposeful changes in system operation, goal, outline of changes in system operation, benefit, technological information operation, technological non-information operation, system operation effects, and effects of transition processes during functioning. The concepts are linked in a schema of purposeful changes of technological systems through the application of IT (Fig. 1). IT effects [3] are manifested in a technological system conditioned by changes in operation (for example, by transition processes from reaching one goal to reaching another). This change in operation becomes apparent in changes in non-information actions (their composition, properties and sequence). The changes in non-information actions are caused by the results of information actions. The implementation of information actions is governed by the necessary consideration of the environment’s impact on a technological system. As a result of this series of changes, personnel using a technological system obtain effects different from those that would appear had there been no changes, that is, without considering the environment’s impact or the new state of the technological system conditioned by this impact.
The implementation of operation with newly chosen parameters is explained by technological information operations implemented to take into account the impact of the environment on a technological system. These technological information operations provide for the selection of next technological operations with better parameters (under the affected conditions) depending on changes in the states of a technological system and its environment. The best operational effects are achieved by considering these changes at the execution of technological information operations. The use of different types of technological operations (hereinafter “TlOp”), e.g. information and non-information, in technological system functioning, depending on verified states of the technological system and its environment, is illustrated in Fig. 1. When TlOp sequences are implemented, technological information operations (hereinafter “TIO”) are executed first. These operations estimate the changed states of the environment and system elements with regard to the environment’s impact. Further TIO responsible for changing TlOp are executed (if necessary). Their ultimate goal is to obtain information about the state of the technological system and its environment, and about what should be changed in this regard. Then, technological non-information operations (hereinafter “TNIO”), connected with information operations by cause-effect relations, are executed through practical implementation. The notions of information and IT, benefits of IT, benefits of information, information and non-information actions, TlOp, TIO and TNIO, and other related notions, were specified in [5]. Principles of technological system research and a number of related notions were introduced in [4, 5]. General OP characteristics were defined in [6]. Let us specify the notions used further in the context of functional modeling of a technological system.

A technological information operation is an action to be executed according to the technological documentation, the goal of which is to provide needed information (for example, instructions) to perform other actions. A technological non-information operation is an action to be taken according to the technological documentation, the goal of which is to perform an exchange of material and energy (according to the instructions obtained). Technological information operations are executed according to a certain information technology. A TIO (or, as a rule, a number of TIOs) aims at obtaining (creating) and transforming information into a form in which it can be used by a person or technical equipment to solve a task of choosing (for instance, choosing a mode of TNIO). During the implementation of TIO and TNIO sequences, different TIO are executed depending on the occurred events and the revealed states of the system elements and environment. The TIOs are then used for choosing various TNIO, resulting in the occurrence of various events and states of the system. In this regard, the system and environment states do not recur during operation in reality, and the sequences of TlOp, events and states (a loop in Fig. 1) should be expanded into structured sequences of events and states (an outcome tree). As a result, numerous possible state sequences are obtained. They are connected by branches (events) depending on the states of the system and environment, the implemented sequences of TlOp (TIO and TNIO), and the events revealed during TlOp execution.
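The loop of Fig. 1 can be sketched as a simple control cycle. Everything below (state names, the choice rule) is a hypothetical illustration of the TIO/TNIO interplay, not the paper's implementation:

```python
# Hypothetical sketch of the TlOp loop in Fig. 1: a state-estimation TIO,
# a TNIO-choosing TIO, and the chosen TNIO, repeated per cycle.
import random

random.seed(1)  # deterministic "environment" for the demonstration

def tio_estimate_state():
    """TIO of state estimation: check states of the system and environment."""
    return {"env": random.choice(["nominal", "disturbed"])}

def tio_choose_tnio(state):
    """TIO of TNIO choice: select the next non-information operation."""
    return "corrective_tnio" if state["env"] == "disturbed" else "target_tnio"

def execute_tnio(name):
    """TNIO: the material (non-information) operation itself."""
    return f"executed {name}"

log = []
for _ in range(3):                      # three iterations of the loop
    state = tio_estimate_state()        # TIO of state estimation
    tnio = tio_choose_tnio(state)       # TIO of TNIO choice
    log.append(execute_tnio(tnio))      # conversion / target TNIO
print(log)
```

Each cycle produces a different TNIO depending on the state revealed by the TIO, which is exactly why the loop must be expanded into an outcome tree when modeled.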

Fig. 1. A loop of different types of TlOp used during system functioning: TIO of state estimation, TIO of TNIO choice, conversion and target TNIO, and the states prior to and after each operation.

The system operation outcome is a sequence of conditioned states of the system and branches (events) between them, caused by TlOp (both TIO and TNIO) and by actions of the system environment. Let us denote by L_i the layer of possible chains of actions, events and obtained states; it depends on the environment state when the i-th loop in Fig. 1 is performed. Chains of actions, events and states obtained through sequences of such loops (L1, L2, L3) are illustrated in Fig. 2. They depend on the TIO and IT used for system functioning. During planning, the possible operation outcomes are reviewed as sequences of possible states and branches between them caused by TlOp (TIO and TNIO). The composition and characteristics of TlOp, which lead to possible operation outcomes, change as a result of TIO. TIO lead to various sequences of random events and states revealed as a result of changes in the environment states. These events and states form possible outcomes. Each possible outcome, besides various possibility measures of its implementation (depending on the states of the system and environment and the implemented TlOp), corresponds to different effects (results with specified requirements) of operation and different operation efficiency.

Fig. 2. Sequences of states in system functioning (layers L1, L2, L3).

Operational properties of technological systems, namely the system potential [4] or dynamic capability of such systems (with regard to IT application), describe future system parameters associated with operational efficiency in a changing environment. This property should be estimated based on the modeling of all possible future operation outcomes under all possible environment changes. System potential, or dynamic capability of a technological system, is a property that indicates whether a technological system is suitable for reaching changing goals (actual and possible) in a changing environment. It would be rational to use the difference between technological systems with the “new” and the “old” IT applied as an indicator of the IT-enabled dynamic capability of the “new” IT compared with the one used previously. Thus, this indicator can be used as an analytical estimation of an operational property indicator of IT usage. It should be estimated based on analytic models developed through the description of the laws and manifestation patterns of effects resulting from the execution of TIO and TNIO sequences of various characteristics at different technological system operation outcomes. Use cases of such indicators include choosing IT and TIO characteristics for the optimal implementation of new IT, such as the usage of distributed ledger technologies for various business processes or robotic technological process automation.
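The comparison-based indicator just described can be sketched as a simple difference between two estimated dynamic capability values; the numbers below are placeholders, not results from the paper:

```python
# Hypothetical sketch: IT-enabled dynamic capability indicator as the
# difference between the dynamic capability estimate of a technological
# system with the "new" IT and the estimate with the "old" IT.
def it_enabled_capability(capability_new_it: float,
                          capability_old_it: float) -> float:
    """Positive values mean the new IT improves dynamic capability."""
    return capability_new_it - capability_old_it

delta = it_enabled_capability(0.91, 0.78)   # placeholder estimates
print(round(delta, 2))
```

A positive difference would support adopting the new IT; the two inputs would themselves come from the estimation schemas described in section four.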

3 Concepts, Principles and Patterns of Operational Properties of IT Usage Modeling

Concepts applied during the development of system functioning models with regard to transition actions of system improvement, and principles applied during conceptual and formal modeling of a technological system, were defined in [5–8]. Let us consider the general concepts that require interpretation under the suggested concept of IT application in the context of technological system functioning. A simplex of TlOp (simplex) is a sequence of an initial TIO (the TIO required to initiate a TNIO), a TNIO and a final TIO (the TIO required to terminate the TNIO). A reduced simplex (hereinafter “RS”) is a simplex containing zero TNIO. There are several types of RS depending on the type of state evaluation task they solve: if an RS solves a task of general system state evaluation at (or up to) the moment, it is a type one RS; if an RS solves a problem of state evaluation of one or several sites (i.e. the “workplaces” that constitute the system as a whole) at (or up to) the moment, it is a type two RS. Depending on their specifics, different RS should be executed to evaluate the states of the technological system and environment as a result of the execution of simplexes. This rule is fixed by a principle of simplex linking through RS implementation. These RS are implemented differently depending on the results of the execution of prior TlOp and the environment states. The targeted result of an RS consists in the chosen composition and prescripts of further actions. This result should be used in subsequent simplexes to achieve the targeted results of TNIO. While different sequences of simplexes and RS are executed differently (depending on the various recorded states of the system and environment), different states are implemented as a result. These states can then lead to the implementation of various simplexes and, as a result, to the transition of the technological system into next states. The creation of these sequences is governed by a principle of functional dependency of the system operation outcome on the simplexes and the states of the system and its environment.

Nodes of an outcome tree are possible states achieved as a result of TlOp (TIO and TNIO by selected means), and tree edges stemming from a parent node are possible outcomes (transitions between states) resulting from TlOp implementation. Such sequences of states and operations are then parameterized with the possibilities of outcomes. A fragment of such a parameterized graph-theoretical model is shown in Fig. 3. To keep the size of a model smaller, a principle of aggregation is suggested. It consists in the aggregation of states achieved up to the moment of completion of certain types of reduced simplexes. Aggregation schema R2 applied to a reduced simplex of type 2 is shown in Fig. 4. Tree branching at system operation complies with one of the possible event chains if it is actualized. If the system state during operation is calculated on the basis of the states of several workplaces and several respective RS of type 2, the subtrees corresponding to the possible states of workplaces and their combinations are connected into the branch. The outcome tree corresponds to all possible technological system operation outcomes. The composition and characteristics of outcomes and of the outcome tree depend on the TlOp composition and characteristics and, as a result, on the used IT. In particular, the possibility measure of a possible outcome’s implementation (the possibility measure of the outcome becoming reality) depends on the composition and characteristics of TlOp (TIO and TNIO) and on the state of the environment during operation. The operation effects achieved as a consequence of a certain outcome’s implementation depend on the composition and characteristics of TlOp (and the IT) and on the states of the environment at operation. Knowing the possible outcomes and the characteristics of the effects, provided an outcome is realized, one can calculate the system dynamic capability indicator (system potential).
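Under the assumption that each edge of the outcome tree carries a probabilistic possibility measure and each leaf carries an operation effect, the indicator can be sketched as a sum over root-to-leaf paths whose effects meet the requirement. The tree, probabilities and effects below are invented for illustration:

```python
# Hypothetical sketch: an outcome tree whose edges carry probabilities and
# whose leaves carry operation effects. The dynamic capability indicator is
# the total probability of outcomes whose effect meets the requirement.
# Node structure: (state_name, [(edge_probability, child), ...], leaf_effect)
tree = ("s0", [
    (0.7, ("s1", [
        (0.9, ("s3", [], 1.0)),   # leaf: effect value 1.0
        (0.1, ("s4", [], 0.4)),   # leaf: effect value 0.4
    ], None)),
    (0.3, ("s2", [], 0.2)),       # leaf: effect value 0.2
], None)

def indicator(node, required, prob=1.0):
    """Sum the probabilities of outcomes whose effect >= required value."""
    name, children, effect = node
    if not children:                       # leaf = a completed outcome
        return prob if effect >= required else 0.0
    return sum(indicator(child, required, prob * p) for p, child in children)

print(round(indicator(tree, required=0.5), 2))
```

Only the path s0 → s1 → s3 (probability 0.7 · 0.9) yields an adequate effect here, so the indicator equals 0.63; changing the TlOp composition would change both the edge probabilities and the leaf effects.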

Fig. 3. Parameterized model fragment and its aggregation.

Aggregation schema R2 applied to a reduced simplex of type 1 is shown in Fig. 3. Nested aggregation schema R1, applied to the results of schema R2 application, is shown in Fig. 4. The role of IT usage in the formation of system functioning effects is illustrated in Fig. 5 through the example of the execution of RS of type 1 and the resulting schema. A sequence RS11 to RS1N is considered (upper part of the figure). Each RS of type 1 in the sequence checks the results of operation fulfillment (from RS of type 2 at the corresponding workplaces, where a TIO has just finished checking the operation results). Following this check, an initial state of the corresponding RS of type 1 is formed based on the RS type 2 effects. Such effects are the information effects of a check. Next, based on this initial information state (of the check type), the RS of type 1 is performed, and a goal state of the RS of type 1 is formed through its fulfillment. This goal state formed by the RS of type 1 carries precepts, i.e. information effects. The precepts obtained as a result of an RS of type 1 depend on the check results and on the IT and IT operations used to perform the RS of type 1. They use the results of the check, technological data and environment check data to calculate effects compliance during RS type 1 fulfillment. Based on an indicator of such compliance, the actual precepts are obtained. The precepts obtained during an RS of type 1 are then sent to RS of type 2 and next to simplexes in order to start the corresponding TlOp.

Fig. 4. Nested aggregation of parameterized model.

As a result, the TlOp workflow changes, and therefore the production of non-information effects changes as well. Thus, information effects appear because of possible changes (detected during checks) and then cause changes in non-information effects through changed precepts. Once a TlOp is finished, the corresponding TIO initiates the verification process again. This cycle repeats for RS12 and further, until the last RS1N is fulfilled. To measure the results of system functioning with regard to IT usage, appropriate system dynamic capability indicators shall be suggested.

Fig. 5. Role of IT during system functioning effects formation (sequence RS11 to RS1N: check results, effects compliance, precepts sent to the workflow).

An estimation of system dynamic capability indicators is proposed in the form of an estimation of a probabilistic (or other) correspondence measure between predicted effects and their required values. Such estimation is the basis for implementing the loop of targeted changes. Estimation can be conducted either on the basis of analytical mathematical methods and models or through the generalization of one’s experience (heuristically). The models of state changes shown are linked together by a graph-theoretic nested tree model illustrated in Fig. 6. In this model, trees that model the states of a system and its environment are linked together by nested trees T1, T2, T3. The difference between solutions of the considered problems based on an analytical evaluation of system dynamic capability indicators and those achieved heuristically consists in the possibility to build predictive mathematical models and to automate the solution of practical problems as mathematical problems of analytical estimation, analysis and synthesis (for example, as operations research or mathematical programming problems). Specifically, taking into account transition actions during the improvement of a system and its functioning, and the role of various information actions and IT in this process, the formulation and solution of practical problems of improving a system, its functioning and its IT usage as mathematical problems of system dynamic capability (potential) research become possible. Typical system operational property indicators, including schemas for the estimation of dynamic capability indicators allowing for such research, are described below.

Fig. 6. Linking models with graph theoretic nested tree model.

4 Schemas for Operational Properties Estimation

4.1 A Sequence of Three Schemas for OP Indicators Estimation

Let us introduce a sequence of three schemas for the estimation of system operational property indicators, including the operational property of system dynamic capability (OP). Each successive schema uses the previous one. The first, basic schema is aimed at the estimation of operation efficiency in the case where the goal of functioning does not change and the system does not improve during functioning. Under this schema, it is assumed that the decision to improve actions during their implementation, along with the system and the processes of their execution according to the perceived goal, was made (using IT) before functioning started. Thus, the functioning is not interrupted. The second schema generalizes the first one to account for a plurality of possible functionings aimed at reaching different goals under different environment conditions. According to this schema, it is assumed that the possible improvements are determined in advance (with the use of IT): there are multiple possible goals to achieve, and certain transition improvement actions are determined (with IT use) before the start of operation.

Finally, the third schema summarizes the first two in order to account both for possible transition actions selected before their application to achieve different goals and for targeted transition actions selected and implemented during operation, depending on the prevailing conditions.

4.2 Basic Schema for Operational Properties Estimation

Let us introduce Ip as the value of a measure on a set, p as the function defining this measure, Y as the set of vectors of random characteristics of operational effects (characteristics of operational quality), R as the set of required component-wise relations between the random values of effect characteristics and their desired values, and Y^r as the set of vectors of characteristic values of the required functioning effects. Then the estimation of Ip is set by the following schema:

p : Y × R × Y^r → [0, 1]   (1)

Ip = p(Y × R × Y^r)   (2)

Ip is the measure value of the possibility [29] that the predicate in parentheses takes the value “true”, or the value indicating that the random event corresponding to the predicate occurs. Thus, if Y = y is a random variable defined on the axis of real numbers, R is the relation “at most”, and y^r is a point on the real axis (the required limit value of the random variable y), then p = F_y(x) and Ip = F_y(y^r), the value of the distribution function of the random variable y at the point y^r.

4.3 Schema of Operational Properties Estimation Given That the Transition Processes Are Known During Goal Changing

Let us introduce Y^r(t) as a random process modeling possible accidental changes of Y^r in time (for example, due to changes in the goal of system operation), t0 as the starting point of system operation, Y^r = Y^r(t0), and T as the goal duration of system operation. Then the schema of estimation of operational properties takes the following form:

p : Y × R × Y^r(t), t0, T → [0, 1]   (3)

Iop = p(Y × R × Y^r(t)), t ∈ [t0, T]   (4)

Iop is the measure value of the possibility that the predicted values of system operation effects (under varying goals) comply with the desired values of effects in the corresponding way, whereby:

Iop(t0) = Ip   (5)

If the requirements change and the corresponding changes can be determined before operation, it is necessary to plan a transition process (with characteristic u) from one operation to another under the stated changes, which makes it possible to estimate the value of Iop. However, if the characteristics u of transition actions cannot be determined in advance and depend upon the state of the system and the environment during operation, it is necessary to use the third estimation schema. According to this schema, the transition actions are a sequence of changing actions that depend upon the state s of the system and the environment during operation. The states and transition actions are determined with the use of IT.

4.4 Schema of the Estimation of OP Given That the Processes of Improvement Depend on System States During Operation

The last OP research schema is used when sequences of transition actions described by the characteristics u are implemented during operation, depending on the achieved states s of the system and the environment. Transition actions are selected in accordance with the IT applied to change the "goal" functioning and its prescripts. Transition action effects are manifested through the workflow actions and prescripts selected for the "goal" functioning, i.e., the functioning the transition is fulfilled for. At the same time, resources are spent for transition fulfillment. The construct u(s) describes the characteristics of transition actions necessary for calculating transition effects and then, as a result, the effects of the "goal" functioning. The characteristics of a sequence uc = u1 … un of such transition actions out of the possible sequences Uc depend on the characteristics of the manifested sequence of states sc = s1 … sn out of the possible sequences of states Sc. The schema of estimation of OP in this case is as follows:

p : Y(Uc(Sc), t) R Yr(t) → [0, 1]   (6)

Iops = p(Y(Uc(Sc), t) R Yr(t)), t ∈ [t0, T], sc ∈ Sc, uc ∈ Uc   (7)

Iops is the measured value of the possibility that the predicted values of TSF effects (under varying goals and transition actions) meet the required values of effects in a corresponding way, in accordance with these goals. The possible sequences sc and uc depend on the applied IT. The previous estimation schema corresponds to the special case Uc = u:

Iops(Uc(Sc), t) = Iops(u, t) = Iop(t)   (8)
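The state-dependent schema (6)–(7) can be illustrated by a small Monte Carlo sketch in which a transition action u(s) is chosen from the achieved state s at every step, and Iops is estimated as the fraction of simulated runs whose cumulative effect satisfies the requirement. The states, actions and threshold below are invented for illustration:

```python
import random

random.seed(7)

def transition_action(state):
    # u(s): a larger disturbance triggers a stronger corrective action
    # (hypothetical rule, standing in for the IT-prescribed selection).
    return 2.0 if state > 1.0 else 1.0

def run_once(t0=0, T=10, y_required=5.0):
    effect = 0.0
    for _ in range(t0, T):
        state = random.uniform(0.0, 2.0)            # s: achieved state
        effect += transition_action(state) - state  # effect of "goal" functioning
    return effect >= y_required                     # predicate Y R Yr over [t0, T]

trials = 10_000
I_ops = sum(run_once() for _ in range(trials)) / trials
print(I_ops)
```

Each simulated run realizes one sequence sc of states and the induced sequence uc of transition actions; averaging over runs estimates the possibility measure.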

5 Prototypes of Software for Estimation of Operational Property Indicators of IT Usage

Modeling of operational properties of IT usage requires the creation of multiple system functioning models under multiple scenarios of environment functioning. Creating multiple models may be quite complex; therefore, I propose to use diagrammatic means. Graph-theoretic, diagrammatic models are built and transformed into parametric ones by adding parameters and variables to the graph-theoretic models. A database of restrictions on parameters and variables is used for this purpose. In the example considered, the diagrammatic models were created with the ARIS toolset, modernized so as to use nested diagrams to reflect some relations through graph-theoretic models.

Dynamic Capabilities Indicators Estimation of Information Technology

391

Next, the parameterized models are transformed into functional ones by adding formulas to ARIS model elements. Then the nested diagrammatic models are transformed into the Microsoft Excel spreadsheets shown below. The resulting spreadsheets constitute a program model for estimating the dynamic capability of an IT-enabled system. Examples of diagrammatic models are shown below. They are based on some common sub-process models (Fig. 7).

Fig. 7. Sub-processes used by diagrammatic ARIS models
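The model-to-spreadsheet transformation described above can be sketched as follows: model elements annotated with formulas are evaluated against a parameter set and flattened into spreadsheet-like rows. The element names, formulas and parameter values here are hypothetical, not taken from the ARIS models of the example:

```python
# A parameterized process model whose elements carry formulas, "flattened"
# into table rows, mimicking the generation of a spreadsheet program model.
model = [
    {"element": "prepare", "formula": lambda p: p["t_prep"]},
    {"element": "execute", "formula": lambda p: p["t_exec"] * p["volume"]},
    {"element": "control", "formula": lambda p: 0.1 * p["t_exec"] * p["volume"]},
]

params = {"t_prep": 2.0, "t_exec": 0.5, "volume": 20}

# Each model element becomes one spreadsheet-like row (name, computed value).
rows = [(e["element"], e["formula"](params)) for e in model]
total = sum(v for _, v in rows)

for name, value in rows:
    print(f"{name:10s}{value:8.2f}")
print(f"{'total':10s}{total:8.2f}")
```

In the actual tool chain the same role is played by ARIS code generation; this sketch only shows the shape of the transformation from formula-annotated elements to an evaluated table.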

The simplest available models were used. For example, only four scenarios of environment functioning are possible, and there are four changing goals as a result. A diagrammatic model of functioning can be built for each goal. The use of an IT is modeled with the relevant IT operations, resulting in a change of the course of functioning. Such operations require additional resources and time when a functioning goal is altered due to a change of environment. Different model versions are considered. Version 1 (Fig. 8) differs from version 2 (Fig. 9) by the respective TIO characteristics, according to the different IT used.

Fig. 8. Diagrammatic ARIS model version 1 to estimate system dynamic capability indicator of IT usage

392

A. Geyda

Next, an indicator of IT-enabled dynamic capability is estimated as a probabilistic mix of system functioning efficiency, with IT used for functioning changes according to four different scenarios of functioning change. The resulting Microsoft Excel table example (Fig. 10) constitutes a program model for the estimation of operational properties of IT usage and the corresponding dynamic capability indicators. It was obtained automatically, using model-driven meta-modeling [30–35] and the ARIS facilities for generating program code.

Fig. 9. Diagrammatic ARIS model version 2 to estimate operational properties of IT usage and dynamic capability indicators

Fig. 10. Program model to estimate operational properties of IT usage and dynamic capability indicators
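The probabilistic mix over four scenarios can be sketched as a probability-weighted sum of per-scenario efficiencies. The probabilities and efficiency values below are illustrative assumptions, not the figures of the worked example:

```python
# Dynamic capability indicator as a probabilistic mix of functioning
# efficiency over four scenarios of environment/goal change.
scenario_prob = [0.4, 0.3, 0.2, 0.1]    # four scenarios of functioning change
efficiency    = [0.95, 0.80, 0.60, 0.30]  # efficiency with the given IT

assert abs(sum(scenario_prob) - 1.0) < 1e-9  # probabilities form a full group

indicator = sum(p * e for p, e in zip(scenario_prob, efficiency))
print(round(indicator, 3))
```

Comparing such indicators for two model versions (Figs. 8 and 9) is what allows the IT variants to be ranked by the dynamic capability they enable.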


6 Discussion

The obtained results allow for the evaluation of predicted values of operational properties of IT usage and of dynamic capability indicators. They can help to analytically estimate IT operational properties, dynamic capability properties and other operational properties related to IT usage, depending on the variables and options in the tasks solved. This could lead to the solution of contemporary research problems dedicated to the operational properties of IT usage and system dynamic capabilities, as well as other operational properties, using predictive analytical mathematical models and mathematical methods of research problem solving, for example, mathematical programming and operations research models and methods. Examples of problems that can be solved include choosing IT and TIO characteristics for the optimal implementation of new IT, such as the optimal usage of distributed ledger technologies for business processes, the optimization of robotic technological process automation, and the choice of cyber-physical system characteristics.

Acknowledgment. Performed under support of the RFBR grant No. 16-08-00953.

References

1. Teece, D., Pisano, G., Shuen, A.: Dynamic capabilities and strategic management. Strateg. Manag. J. 18(7), 509–533 (1997)
2. Wang, C., Ahmed, P.: Dynamic capabilities: a review and research agenda. Int. J. Manag. Rev. 9(1), 31–51 (2007)
3. Teece, D.: Explicating dynamic capabilities: the nature and microfoundations of (sustainable) enterprise performance. Strateg. Manag. J. 28(13), 1319–1350 (2007)
4. Geyda, A., Lysenko, I.: Tasks of study of the potential of socio-economic systems. In: SPIIRAN Proceedings, vol. 10, pp. 63–84 (2009). (in Russian)
5. Geyda, A., Lysenko, I.: Operational properties of agile systems and their functioning investigation problems: conceptual aspects. J. Appl. Inform. 12, 93–106 (2017). (in Russian)
6. Geyda, A., Lysenko, I.: Schemas for the analytical estimation of the operational properties of agile systems. In: SHS Web Conference, vol. 35 (2017)
7. Geyda, A., Lysenko, I., Yusupov, R.: Main concepts and principles for information technologies operational properties research. In: SPIIRAN Proceedings, vol. 42, pp. 5–36 (2015). (in Russian)
8. Geyda, A., Ismailova, Z., Klitny, I., Lysenko, I.: Research problems in operating and exchange properties of systems. In: SPIIRAN Proceedings, vol. 35, pp. 136–160 (2014). (in Russian)
9. Taylor, J.: Decision Management Systems: A Practical Guide to Using Business Rules and Predictive Analytics, 312 p. IBM Press, Indianapolis (2011)
10. Kendrick, T.: How to Manage Complex Programs. AMACOM, New York (2016)
11. Dinsmore, T.: Disruptive Analytics: Charting Your Strategy for Next-Generation Business Analytics, 276 p. Apress, New York (2016)
12. Downey, A.: Think Complexity: Complexity Science and Computational Modeling. O'Reilly Media, Newton (2012)
13. Cokins, G.: Performance Management: Myth or Reality? Performance Management: Integrating Strategy Execution, Methodologies, Risk, and Analytics, 274 p. Wiley, New York (2009)
14. Cokins, G.: Why is modeling foundational to performance management? Dashboard Inside Newsletter, March 2009
15. Hood, C., Wiedemann, S., Fichtinger, S., Pautz, U.: Requirements Management. The Interface Between Requirements Development and All Other Systems Engineering Processes, 275 p. Springer, Heidelberg (2008)
16. Hybertson, D.: Model-Oriented Systems Engineering Science: A Unifying Framework for Traditional and Complex Systems, 379 p. AUERBACH, Boca Raton (2009)
17. Aslaksen, E.: The system concept and its application to engineering, 266 p. (2013)
18. Aslaksen, E.: Designing Complex Systems. Foundations of Design in the Functional Domain. Complex and Enterprise Systems Engineering Series, 176 p. CRC Press/AUERBACH, Boca Raton (2008)
19. Franceschini, F., Galetto, M., Maisano, D.: Management by Measurement: Designing Key Indicators and Performance Measurement Systems, 242 p. Springer, Heidelberg (2007)
20. Roedler, G., Schimmoller, R., Rhodes, D., Jones, C. (eds.): Systems engineering leading indicators guide. INCOSE Technical Product, INCOSE-TP-2005-001-03, Version 2.0, Massachusetts Institute of Technology, INCOSE, PSM, 146 p. (2010)
21. Tanaka, G.: Digital Deflation: The Productivity Revolution and How It will Ignite the Economy, 418 p. McGraw-Hill, New York (2003)
22. Guide to the System Engineering Body of Knowledge, SEBoK v. 1.3.1. INCOSE (2014)
23. Simpson, J.J., Simpson, M.J.: Formal systems concepts. Formal, theoretical aspects of systems engineering. Comments on "Principles of Complex Systems for Systems Engineering". Syst. Eng. 13(2), 204–207 (2010)
24. Elm, J., Goldenson, D., Emam, Kh., Donatelli, N., Neisa, A.: A survey of systems engineering effectiveness - initial results (with detailed survey response data). NDIA SE Effectiveness Committee, Special report CMU/SEI-2008-SR-034, Acquisition Support Program, 288 p. Carnegie-Mellon University, NDIA (2008)
25. Patel, N.: Organization and Systems Design. Theory of Deferred Action, 288 p. Palgrave McMillan, New York (2006)
26. Stevens, R.: Engineering Mega-systems: The Challenge of Systems Engineering in the Information Age. Complex and Enterprise Systems Engineering Series, 256 p. CRC Press, Boca Raton (2011)
27. Mikalef, P., Pateli, A.: Information technology-enabled dynamic capabilities and their indirect effect on competitive performance: findings from PLS-SEM and fsQCA. J. Bus. Res. 70(C), 1–16 (2017)
28. Taticchi, P.: Business Performance Measurement and Management: New Contexts, Themes and Challenges, 376 p. Springer, Heidelberg (2010)
29. Zio, E., Pedroni, N.: Literature review of methods for representing uncertainty, 61 p. FONCSI, Toulouse (2013)
30. Lee, E.: The past, present and future of cyber-physical systems: a focus on models. Sensors 15, 4837–4869 (2015)
31. Henderson-Sellers, B.: On the Mathematics of Modelling, Metamodelling, Ontologies and Modelling Languages. Springer Briefs in Computer Science, 118 p. Springer, Heidelberg (2012)
32. Kendrick, T.: How to Manage Complex Programs, 336 p. AMACOM, USA (2016)
33. Debevoise, T., Taylor, J.: The MicroGuide to Process and Decision Modeling in BPMN/DMN: Building More Effective Processes by Integrating Process Modeling with Decision Modeling, 252 p. CreateSpace Independent Publishing Platform, USA (2014)

34. Lankhorst, M.: Enterprise Architecture at Work: Modelling, Communication and Analysis. The Enterprise Engineering Series, 352 p. Springer, Heidelberg (2013)
35. Kleppe, A.: Software Language Engineering: Creating Domain-Specific Languages Using Metamodels, 240 p. Addison-Wesley Professional, Boston (2008)

Modeling of Struggle Processes in the Computer-Related Crime Field

Aleksey Bogomolov(1,3), Alexander Rezchikov(1,3), Vadim Kushnikov(1,2,3), Vladimir Tverdokhlebov(1), Oksana Soldatkina(4), and Tatyana Shulga(2)

1 Institute of Precision Mechanics and Control, Russian Academy of Sciences, 24 Rabochaya Str., 410028 Saratov, Russia, [email protected]
2 Yuri Gagarin State Technical University, 77 Politechnicheskaya Str., 410054 Saratov, Russia
3 Saratov State University, 83 Astrakhanskaya Str., 410012 Saratov, Russia
4 Saratov Branch of the Institute of State and Law, Russian Academy of Sciences, 135 Chernyshevskogo Str., 410028 Saratov, Russia

Abstract. A complex of system dynamics models has been developed. It allows the modeling and analysis of the dynamics factors of the commission, investigation and prevention of computer-related crime. The complex includes schemes of causal relationships between dynamics factors and systems of differential equations of system dynamics. The solution of such systems of equations allows the analysis and forecasting of the dynamics of the specified indicators, depending on external and internal factors. This is necessary when control actions in crime fighting are chosen. An example of the practical application of the developed complex of models is given.

Keywords: Crime · Information · Malware · Virus · Information and telecommunication system · The Internet · Net · Attack · Hacking · Causal relationship · Model of system dynamics · Forrester model

1 Introduction

In accordance with the current criminal law of the Russian Federation, computer crimes form an institution of the special part of the criminal law of Russia, relating to the sub-institution "Crimes against public security and public order". The specific object of these crimes is the set of public relations concerning information security and computer information processing systems. Various types of computer-related crime are negative aspects of the increasing informatization of society. The Council of Europe Convention on Cybercrime [1], concluded in Budapest on November 23, 2001 and ratified by almost 50 states, establishes five groups of computer crimes: crimes against the confidentiality, integrity and availability of computer data and systems; offenses related to the use of computer tools; offenses related to the maintenance of computer data;

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 396–405, 2019. https://doi.org/10.1007/978-3-030-12072-6_32


offenses related to violation of copyright and related rights; and acts of racism and xenophobia committed through computer networks. According to the Criminal Code of the Russian Federation, the crimes in the field of computer information are: unlawful access to computer information (item 272 of the Criminal Code of the Russian Federation); creation, use and distribution of malicious software (item 273); and breaking the rules of operation of the means of safekeeping, processing and transmission of computer information and of information and telecommunication networks (item 274). Responsibility for crimes committed with the use of information and telecommunication technologies has also been established in a number of other articles of the Criminal Code of the Russian Federation relating, in particular, to the sale of drugs and the spread of pornography. The seriousness of the threats is evidenced by the Information Security Doctrine, which was adopted in the Russian Federation in 2016. According to Directorate "K" of the Ministry of Internal Affairs of the Russian Federation, 11,000 computer crimes were reported in Russia in 2014; in 2016 their number was 65,949, and in 2017 it reached 90,587 [2]. Currently, the increase in the number of computer crimes is accelerating. These tendencies are due to the high rates of informatization, the increasing role of the information sphere in the life of society and, consequently, the increasing interest of the criminal world in this area. Currently, tens of thousands of cybercrimes are committed annually in Russia and in the world, and this number is growing. In 2011, the number of computer crimes was about 8,000, and in 2014 the Ministry of the Interior registered about 11,000 computer crimes. During the first quarter of this year, 25,773 crimes were committed by means of computer and telecommunication technologies.
Considering the high latency of Internet crimes, it should be noted that these data come from official statistics, and the real number is much higher. Crimes in the sphere of computer information are committed both by organized groups and by individuals who become involved in criminal activity under the influence of external factors, against the background of a lack of legislative regulation of the Internet space. According to experts, the main problem in dealing with such crimes is their high latency—more than 80%. Its main reasons are user anonymity, the absence of material traces of crime, and reluctance of publicity on the part of the victim. The trend of increase in the number of computer crimes is due to the high rates of informatization, the increasing role of the IT sphere in society and, consequently, the growing interest of criminality in this field. Crimes in the computer information field are committed both by organized groups and by individual specialists, who become involved in criminal activity under the influence of external factors and owing to imperfections in the criminal law protection of computer information. The growth of computer crime should be hindered not only by law enforcement agencies, but also by all of civil society. President of Russia V.V. Putin stresses the need to develop an international universal legal framework to combat cybercrime as a global threat. The work on developing the segment of law regulating relations on the Internet has to be comprehensive, focused and structured. It is necessary to build an effective legal policy in the field of information security in general and on the Internet in particular. In these conditions, a quantitative assessment of the resources used for legal counteraction to computer crimes and an assessment of the impact of the actions undertaken are needed.


Therefore, it is necessary to develop mathematical models and methods that make it possible to quantitatively assess and forecast computer crime indicators, given the numerous complex causal relationships arising between these indicators. The basis of legal policy, according to the general theory of law, is the concept of activity. Any activity is purposeful by nature. Legal policy also has a target component, and its goals are arranged in a complex tree hierarchy. The guidance outlined for this purpose defines the objectives of the legal policy. The most important and primary targets are called priorities. The presence of priorities allows one to choose the right means and methods to achieve goals and to integrate the efforts of all subjects in working out and implementing strategic legal ideas. The issue of the selection of priorities is particularly acute in the field of information security on the Internet. Ways to assess the effectiveness of implementing the selected areas are needed, and the following method is proposed in order to define them. One option for assessing the effectiveness of a chosen direction of development in the sphere of influence of legal policy is a quantitative assessment of the resources used to combat computer crime, as well as an analysis of the consequences of the actions taken. Therefore, it is of interest to develop mathematical models and methods that allow obtaining a quantitative assessment of the dynamics of computer crime indicators, taking into account the numerous complex causal relationships arising between these indicators.

2 Statement of the Problem

The objective of the proposed research is the development of software for forecasting and analyzing the dynamics of crime indicators in the field of computer information. The results of the study make it possible to identify trends and key factors affecting crime rates in the field of computer information, and to identify measures for reducing the number of crimes. This will make it possible to correctly determine the priorities of this segment of legal policy and to predict the reaction of society to certain planned actions.

3 Mathematical Models and Algorithms

To solve the problem, it is proposed to develop a mathematical model based on system dynamics [3–5]. The system dynamics model has been used by the authors to analyze complex systems of various nature, in particular in [6, 7]. It is necessary:
– to determine the totality of predicted variables characterizing crime in the field of computer information, and the external factors affecting these variables;
– to develop a scheme of causal relationships between individual indicators and the factors that affect them;
– based on the established dependencies, to construct the equations of system dynamics and determine algorithms for their solution;


– as a result of solving the system of equations, to determine the values of the system variables at specified time intervals;
– to analyze the adequacy of the constructed model on the basis of retrospective data and, if necessary, to correct the model;
– to analyze the influence of various factors on the indicators of computer crime and to conduct a simulation of the dynamic indicators of computer crime depending on various conditions;
– to identify and process conclusions based on the conducted numerical experiments.

It should be noted that the system dynamics model was used to study forensic processes by well-known researchers earlier [8]. However, the unprecedented quantitative and qualitative growth of informatization in recent years and, as a result, significant changes in the studied area force us to consider new models of the processes of struggle with computer crime. Studying the statistics, assumptions and mechanisms of the investigation of computer crimes [9, 10], it is possible to identify the following list of characteristics that adequately define the indicators of their number over long time intervals. Among these indicators, we consider the number of appeals of citizens and organizations to the Ministry of Internal Affairs per year in connection with: Xatt(t)—hacking of computer systems; Xvir(t)—the use of malicious programs; Xfra(t)—cases of fraud in the field of computer information (hereinafter—CI) and fraud committed using computer and telecommunication technologies.

The following indicators of the forces aimed at combating computer crime were also identified—the number in the country per year of: Xre(t)—registered crimes in the field of CI; Xde(t)—solved crimes in the field of CI; Xju(t)—convictions for crimes in the field of CI; Xla(t)—adopted legislative acts against crimes in the field of CI; as well as the number of events and activities in the country: Xsaf(t)—to promote safe online behavior; Xtea(t)—against computer crime by citizens and organizations; and Xfi(t)—the amount of cash (rubles) allocated by public and private companies to carry out activities to combat computer crime. In addition, indicators were identified whose relationship with the development of computer-related crime is more complex: Xsci(t)—scientific developments in the field of information technology; Xpr(t)—specialists in the field of IT graduated by universities; Xus(t)—Internet users in the country; Xso(t)—the share of domestic software and information technology in the market; and Xmo(t)—the amount of money in the accounts of the population (average estimate).


The above indicators affecting the number of computer crimes in the country will also be referred to as system variables. Besides the variables of this system, there are external factors that do not depend on the system variables but affect them: Yune(t)—the average annual number of unemployed; Ysan(t)—the annual number of foreign sanctions against the country; Ydoll(t)—the average annual ruble to US dollar exchange rate. The list of variables resulting from this step may, generally speaking, be incomplete in the sense of reflecting all aspects and mechanisms that influence the occurrence of dangerous combinations of events in the system. This circumstance is explained by the fact that the researcher faces certain limitations when choosing variables: the variables must be measurable, certain statistics on them must be available, and, in addition, the subjective interests of the researcher always exist. Therefore, in case of doubts about the completeness of the used list of variables and external factors, it should be noted that, first, the list can be supplemented as the circumstances of solving the problem change; second, the selected variables are, as a rule, not independent, and therefore some can be accounted for through the consideration of others closely correlated with them; and third, in any case a certain projection of the phenomena under study onto the subspace of its chosen characteristics is considered. The listed arguments satisfactorily justify the viability of the system-dynamic approach, provided that the resulting models are adequate. The selected variables are measurable, and their statistics are quite accessible. Moreover, the choice is determined by the high role of these indicators in shaping the social situation and economic realities, which significantly affect the financial situation of citizens and their motivation to commit certain types of crimes.
It can be argued that a number of other indicators related to the socio-economic situation in the country have the same significance, but most of them are largely related to the selected ones; therefore, adding them to the list of external factors would be redundant. Each system variable has connections with other variables and external factors that increase or decrease its value. To establish these relationships, comparisons between the time series of variables and external factors are used, together with expert conclusions based on the experience of practical and research work of specialists in the area considered. As a result of this work, cause-and-effect schemes of the interrelations of variables and external factors are formed. To reflect these schemes, a matrix of causal relationships is constructed, in which the symbol "↑" (or "↓") at the intersection of the row corresponding to a variable A(t) with the column corresponding to another variable or external factor B(t) means the presence of a positive (or negative) growth dependence of A(t) on B(t) (see Table 1). Empty cells mean that the relationship between the corresponding variables is rather irrelevant and is not taken into account in the model. Diagonal cells are also empty, since the dependence of a variable on itself is not separately written out. Forrester used graphs to depict cause-effect relationships between variables. However, in the case of a large number of variables, such graphs become almost unreadable and are convenient mainly for demonstrating the complexity of the system. When analyzing cause-effect relationships and when constructing the differential equations of system dynamics, the matrix of cause-effect relationships of the system variables is more convenient.

Table 1. The matrix of cause-effect relationships in the system. (Rows and columns: the system variables Xre, Xde, Xju, Xso, Xsci, Xpr, Xsaf, Xtea, Xla, Xmo, Xfi, Xus, Xatt, Xvir, Xfra and the external factors Yune, Ysan, Ydoll; "↑" marks a positive and "↓" a negative growth dependence.)

Assuming that the established scheme of cause-and-effect relationships is constant over the considered time interval, we use the constructed table to compile the differential equations of system dynamics. The system-dynamic approach suggests constructing, for each system variable X, a differential equation of the form

dX(t)/dt = PX+ · RX+ − PX− · RX−,

where PX+ is the product of the system variables that, according to the assumed cause-effect relationships, increase X; RX+ is the sum of the external factors that increase X; PX− is the product of the system variables that decrease X; and RX− is the sum of the external factors that decrease X. The constructed system of nonlinear differential equations was solved numerically by the 4th-order Runge-Kutta method.
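A minimal sketch of this computational scheme, under stated assumptions: a toy two-variable system (standing in for Xsaf and Xfra) whose rates follow the Forrester-type form described above, integrated with the classical 4th-order Runge-Kutta method. All coefficients are invented for illustration and are not the authors' fitted values:

```python
def rates(x):
    x_saf, x_fra = x
    d_saf = 0.08 * x_saf                               # promotion activities grow
    d_fra = 0.30 * x_fra - 0.001 * x_saf * x_fra       # fraud grows, damped by promotion
    return [d_saf, d_fra]

def rk4_step(x, h):
    # One classical 4th-order Runge-Kutta step for the vector ODE x' = rates(x).
    k1 = rates(x)
    k2 = rates([xi + h / 2 * ki for xi, ki in zip(x, k1)])
    k3 = rates([xi + h / 2 * ki for xi, ki in zip(x, k2)])
    k4 = rates([xi + h * ki for xi, ki in zip(x, k3)])
    return [xi + h / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

x = [100.0, 20000.0]   # initial Xsaf, Xfra (illustrative)
h = 0.1
for _ in range(50):    # integrate over 5 "years"
    x = rk4_step(x, h)
print([round(v) for v in x])
```

The actual model uses the full set of variables and the sign structure of Table 1 to assemble each right-hand side; the sketch only shows the integration mechanics.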

4 Discussion of the Results

Under the assumptions of the model, the dependencies FX–Z(Z) appearing in the above system of equations are assumed to be close to linear, which is confirmed by the correlations established between them. After establishing the specific form of these dependencies and the expressions for the mathematical modeling of the dynamics of the external factors, a system of nonlinear differential equations was obtained whose solution gives an acceptable error with respect to the available statistics (about 15%). In particular, the computed values of the variable Xfra(t)—the number of cases of fraud in the field of computer information and fraud committed using computer and telecommunication technologies—differ from the Ministry of the Interior statistics for 2014–2016 by no more than 10%. Taking all of the above into consideration, we assume that the results of numerical experiments with the constructed model provide certain grounds for conclusions regarding the nature of the dynamics of computer crime and its dependence on various factors. Among the system variables of computer-related crime, we consider the index Xfra(t)—the number of fraud cases in the sphere of computer information and fraud committed with the use of computer and telecommunication technologies. We consider the influence on its dynamics of the index Xsaf(t)—the number of actions for the promotion of safe behavior on the network—and Xfi(t)—the volume of state and private investments in the sphere of the fight against computer crime. As can be assumed from the columns corresponding to these variables in Table 1, they have a positive impact on computer crime counteraction. Let k1 be the first coefficient of the Xsaf(t) equation, and k2 the first coefficient of the Xfi(t) equation. The values of these coefficients determine the growth of the Xsaf(t) and Xfi(t) variables.
The baseline solution was obtained for k1 = 0.1, k2 = 0.1. The computing experiment consisted in varying k1 and k2 around certain values selected on the basis of the available data, i.e., in solving the differential equation system of the model for Xfra(t) in two cases. In the former case, the coefficient k1 takes values in the range [0.1, 0.5], with k2 = 0.1 held constant. The values of the variable Xfra(t) determined under these conditions using the constructed model are given in Table 2.

Table 2. The value of the variable Xfra(t) in 2014–2019, when the coefficient k1 increases from 0.1 to 0.5 with other conditions in the model unchanged (k2 = 0.1 constantly).

k1    2014    2015    2016    2017    2018     2019
0.1   20000   36000   49000   65000   80000    87000
0.2   30000   40000   50000   70000   90000    95000
0.3   33000   43000   60000   70000   95000    100000
0.4   34000   45000   66000   80000   100000   125000
0.5   38000   48000   70000   85000   120000   140000

In the latter case, k1 behaves in the same way, but the coefficient k2 is varied simultaneously, up to 4.5. The values of the variable Xfra(t) determined under these conditions using the constructed model are given in Table 3.

Table 3. The value of the variable Xfra(t) when the coefficient k1 increases from 0.1 to 0.5 while the coefficient k2 decreases from 4.5 to 0.5 (other conditions unchanged).

k1    k2    2014    2015    2016    2017    2018    2019
0.1   4.5   14000   16200   18000   19400   22500   25000
0.2   3.5   15000   15600   16900   19020   20900   22800
0.3   2.5   17500   19500   21400   23040   28000   31000
0.4   1.5   21500   23000   26700   28650   31400   35900
0.5   0.5   25000   28700   31000   35000   36000   40000

As the result of the experiment, in significant reduction of k1 in the former case will cause the significant increase in number of cybercrimes in the next years. Probably, indices of solvability will grow much slower as weakening of safe online behavior promotion will lead to bigger passivity of the fraud victims, consequently it will increase latency of computer crimes. The carried-out calculations confirm the significance promotion of safe online behavior for preventing of cybercrimes, especially considering that such crimes are often committed using technical methods coupled with methods of social engineering. Then, in case of reserves connection on financing of fight against computer crime (it expresses in increasing of k2 in the range from 0.5 to 4.5), arises an opportunity accepted (around 20000–30000 in the next year) control of cybercrime growth. On this basis it is possible to recommend combination of measures for promotions of citizen safe behavior in the computer sphere: k1 = 0.3 and measures for increase in volume of the state and private investments of fight against computer-related crime: k2 = 2.5. It


will make it possible both to inform citizens about possible types of fraud and to provide protection against such threats at the technological level. The conditions corresponding to the recommended coefficient values stated above are defined depending on the planned dynamics of the appropriate variable and on external factors. As follows from the structure of the equation for Xsaf(t) and the linearity of the correlative dependences between variables, the coefficient k1 in this equation is determined by the terms entering it for the quantities Xpr(t), the number of information technology specialists graduating from higher education; Xtea(t), the actions of citizens and organizations against computer-related crime; and Xla(t), the adopted legislative rules directed against computer crimes. Similarly, the coefficient k2 in the equation for Xfi(t) is determined by the terms involving the variables Xmo(t), the volume of money resources in people's accounts; Xus(t), the number of Internet users; and Xla(t), the adopted legislation against computer crime.
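The kind of computational experiment described above, varying k1 and k2 and projecting Xfra(t), can be illustrated with a toy one-equation difference model. This is only a sketch: the function name, the growth rate and the damping weights are our own assumptions and do not reproduce the paper's actual system of equations.

```python
# Illustrative sketch only: a toy difference equation showing how varying the
# coefficients k1 (safe-behavior promotion) and k2 (funding of countermeasures)
# changes a projected cybercrime trajectory X_fra(t). The growth rate and the
# damping weights are assumptions, not the paper's fitted model.

def simulate_xfra(x0, k1, k2, growth=0.35, years=6):
    """Project X_fra over `years` steps: intrinsic growth damped by k1 and k2."""
    xs = [x0]
    for _ in range(years - 1):
        x = xs[-1]
        # growth term minus damping proportional to prevention (k1) and funding (k2)
        xs.append(max(0.0, x * (1 + growth - 0.2 * k1 - 0.05 * k2)))
    return xs

weak = simulate_xfra(20000, k1=0.1, k2=0.5)    # little prevention, little funding
strong = simulate_xfra(20000, k1=0.3, k2=2.5)  # recommended mix from the text
print(weak[-1] > strong[-1])                   # stronger measures: fewer crimes
```

Under this toy dynamics, the recommended combination (k1 = 0.3, k2 = 2.5) produces a visibly slower growth of the projected crime count than the weak-measures scenario.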

5 Conclusions

Thus, based on a graph of causal relationships between indicators of crimes and of the investigation and prevention of such crimes, on causal tables, and on a system of equations, a model that allows the analysis of cybercrime was developed. The results of solving the obtained equation system make it possible to predict the dynamics of crime over various time intervals depending on the socio-economic situation in the country and on external factors. External factors can be varied in computational experiments, for example, by changing the coefficients in the equations for their dynamics, as was done above in the case of k1 for Xsaf(t) and k2 for Xfi(t). The use of mathematical models and methods is especially advisable here, since experiments with the real system are not reproducible and are very expensive. The results of the work allow us to identify general directions of legal policy and to estimate the efficiency of implementing the selected directions (promoting safe online behavior, attracting funding, increasing the number of trained specialists in the computer security sphere, etc.), which will facilitate the solution of problems in the sphere of fighting computer-related crime at various levels.

References

1. The Council of Europe Computer Crime Convention. https://clck.ru/EnBgi. Last accessed 24 Nov 2018
2. Official website of the Ministry of Internal Affairs of the Russian Federation. https://мвд.рф/folder/101762/item/10287274. Last accessed 24 Nov 2018
3. Forrester, J.: Principles of Systems. Wright Allen Press, Cambridge (1960)
4. Forrester, J.: Counterintuitive Behavior of Social Systems. Reidel Publishing Company, Dordrecht (1971)
5. Meadows, D.H., Meadows, D.L., Randers, J., Behrens, W.W.: The Limits to Growth. Universe Books, New York (1972)
6. Tikhonova, O., Kushnikov, V., Fominykh, D., Rezchikov, A., Ivashchenko, V., Bogomolov, A., Filimonyuk, L., Dolinina, O., Kushnikov, O., Shulga, T., Tverdokhlebov, V.: Mathematical model for prediction of efficiency indicators of educational activity in high school. J. Phys.: Conf. Ser. 1015, 032143 (2018)
7. Spiridonov, A., Rezchikov, A., Kushnikov, V., Ivashchenko, V., Bogomolov, A., Filimonyuk, L., Dolinina, O., Kushnikova, E., Shulga, T., Tverdokhlebov, V., Kushnikov, O., Fominykh, D.: Prediction of main factors' values of air transportation system safety based on system dynamics. J. Phys.: Conf. Ser. 1015, 032140 (2018)
8. Minaev, V., Vains, E., Gracheva, Y.: Application of system-dynamic modeling techniques to solve information security problems. In: Proceedings of Sovremennye problemy i zadachi obespecheniya informacionnoj bezopasnosti, Moskva, pp. 177–183 (2017) (in Russian)
9. Efremova, M.A., Agapov, P.V.: Crimes against information security: international legal aspects of fighting and experience of some states. J. Internet Bank. Commer. 21(S3) (2016)
10. Rogova, E.V., Karnovich, S.A., Ivushkina, O.V., Laikova, E.A., Efremova, M.A.: State of the contemporary criminal law policy of Russia. J. Adv. Res. Law Econ. 7(1), 93–99 (2016)

Towards Fuzzy Partial Global Fault Diagnosis

Sofia Kouah¹ and Ilham Kitouni²

¹ RELA(CS)2 Laboratory, University of Larbi Ben M'Hidi, Oum El Bouaghi, Algeria
[email protected]
² MISC Laboratory, University of Abdelhamid Mehri—Constantine 2, 25000 Ali Mendjeli, Algeria
[email protected]

Abstract. Diagnosis aims at identifying a faulty system based on observations of its behavior. It has emerged widely across computer science fields, among others aeronautics, space exploration, nuclear energy, process industries, manufacturing, healthcare, networking, automation and many other control applications. Diagnosis involves distributed components with an uncertain global view. This paper intends to provide an efficient fuzzy-based diagnosis mechanism that enables local hosts' diagnosis; these local decisions can then be merged to provide the global diagnosis. The choice of fuzziness is motivated by the incompleteness and uncertainty of system descriptions and observations, and is also justified by the difficulty of obtaining a complete view of all system parts when control is distributed. Our diagnosis mechanism, named FPG-Diag for Fuzzy Partial Global Diagnosis, consists of two main steps. First, each remote control host detects and localizes abnormal behaviors, which results in a local diagnosis; each host then applies planned recovery actions to maintain system functioning. Second, the local diagnoses are sent to the global part in order to be merged and analyzed, giving a precise and exhaustive global diagnosis. The automatic diagnosis reasoning is a fuzzy system which, based on fuzzy rules, handles incomplete information to deduce system malfunctioning.

Keywords: Complex system · Diagnosis · Fuzzy logic · Internet of things

1 Introduction

Complex systems are omnipresent in all fields of daily life. A complex system can be viewed as a collection of various components which interact, directly or indirectly, in order to fulfil the main objective for which the system has been conceived. Studying and modeling such systems aims to investigate the relationships between all parts of the system under study, in order to analyze and identify their impact on the collective behavior. In fact, such relationships give rise to several types of interactions that relate the system's parts and the physical environment in which they cohabit. Since system components are strongly coupled, a failure that occurs in a given component can propagate to others, following the existing relationships, and in this way have disastrous consequences on the system's functioning. Accordingly, developing

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 406–420, 2019. https://doi.org/10.1007/978-3-030-12072-6_33


fault diagnosis is an important issue for modeling complex systems and improving their maintenance activities [1]. Therefore, this paper focuses on the study of fault diagnosis. The diagnosis process aims to detect deviations from the desired behavior of a given system. It intends to identify the causes of failure and localize all components responsible for the unexpected events. Indeed, the fault diagnosis research area has received wide attention over the last decades [1, 2]. A variety of fault detection and diagnosis approaches have been developed, including model-based approaches, knowledge-based approaches, qualitative-simulation-based approaches, neural-network-based approaches and classical multivariate statistical approaches [2]. We are mainly interested in the model-based category, where we intend to provide an efficient diagnosis mechanism which can go beyond fault tolerance. Considering the distributed nature of the system and the fact that the handled information (the usual observations and the expected system descriptions) is relatively uncertain and incomplete, the proposed methods [3] suffer from the lack of an overall view of the system state, which greatly influences diagnosis quality. This challenge can be addressed on the basis of two main considerations:
• The necessity to distribute the diagnosis process over the relevant system parts that can control and influence the system behavior. Thus, the decision process investigates partial diagnoses in order to achieve a complete and concise global diagnosis. Partial diagnoses are those provided locally by these parts, whereas the global one is the merging of partial decisions.
• Handling incomplete and uncertain information: to deal with imprecision, ambiguity and uncertainty, several extensions of classical logic have been proposed, such as fuzzy logic [4] and probability logic [5].
Since we need to model imprecision in the diagnostic process, and not to handle imprecision in the occurrence of events, we adopt fuzzy logic rather than probability logic [6]. Although fuzzy logic deals with imprecise information, the information is handled within a sound mathematical theory. In other words, "fuzzy logic is not fuzzy": basically, fuzzy logic is a precise logic of imprecision and approximate reasoning [7]. Our contribution focuses on proposing a practical fuzzy-based diagnosis approach where decision making is distributed over the system's parts. This approach, namely Fuzzy Partial Global Diagnosis (FPG-Diag), intends to generate a complete and consistent diagnosis from distributed parts that can be viewed as concurrent sub-systems interacting together to achieve a common goal. Each sub-system generates a local, partial diagnosis and sends it to the unique coordinator (i.e. the global part) of the whole system. Based on these individual and separate diagnoses, the coordinator makes an effective decision, despite the incomplete and uncertain nature of the exchanged data and information. Both the diagnosis processes of the sub-systems and the diagnosis process of the coordinator are based on fuzzy logic reasoning, such that the latter is a reliable combination of the former. This paper is structured as follows: Sect. 2 provides the theoretical background related to our subject, mainly diagnosis and fuzzy systems. Section 3 discusses some application areas of fault diagnosis and reviews some related works from the literature. Next, Sect. 4 presents the FPG-Diag strategy. Finally, Sect. 5 presents the conclusions of the paper and gives prospects for future work.


2 Background

This section recalls some basic concepts and reviews some works related to our study.

2.1 Fault Diagnosis

This paper addresses the problem of fault diagnosis in complex systems, focusing on distributed discrete-event systems [8]; the problem consists in determining the occurrences of fault events from observations and system knowledge [3]. It has been widely studied in the last decades [2, 8, 9]. Such works aim at modeling the system and then applying an automatic algorithm to both the system description and the current observations in order to detect faulty components. In fact, a complementary process to fault diagnosis, well known as prognostics, can be applied as a further step. It consists in using automated methods to detect and diagnose degradation of physical system performance, anticipating future failures and forecasting the remaining life of physical systems in an acceptable operating state before faults or unacceptable degradations of performance occur [2]. Indeed, such methods provide a cornerstone for condition-based maintenance of engineered systems. Let us now pinpoint the main steps of this overall maintenance framework in order to better clarify the importance and the application of fault diagnosis. Generally, a condition-based maintenance process using automated fault diagnosis can be viewed as a process of four steps [2]: First, the fault detection step consists in monitoring the physical system under study and detecting any abnormal conditions. After an abnormal condition is detected, fault diagnosis is used to evaluate the fault and determine its causes. Following these two steps, which are jointly known as fault detection and diagnosis [2], the fault evaluation step assesses the size and significance of the impact on system performance; examples of performance parameters are energy consumption and execution time cost. Finally, based on the fault evaluation results, the last step is decision making about how to respond to the fault, for instance by taking a corrective action or giving advice.
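The four-step process above can be sketched as a simple pipeline. All function and parameter names below are hypothetical, since the cited framework describes the steps only abstractly.

```python
# A minimal sketch (not from the paper) of the four-step maintenance process:
# detection -> diagnosis -> evaluation -> decision. All names are assumptions.

def maintenance_cycle(reading, expected_range, diagnose, evaluate, respond):
    lo, hi = expected_range
    # Step 1: fault detection: is the monitored value abnormal?
    if lo <= reading <= hi:
        return "healthy"
    # Step 2: fault diagnosis: determine the cause of the abnormality
    cause = diagnose(reading)
    # Step 3: fault evaluation: assess the impact on system performance
    impact = evaluate(cause)
    # Step 4: decision making: corrective action or advice
    return respond(cause, impact)

result = maintenance_cycle(
    reading=130.0,
    expected_range=(0.0, 100.0),
    diagnose=lambda r: "sensor_drift",
    evaluate=lambda c: "low",
    respond=lambda c, i: f"recalibrate ({c}, impact={i})",
)
print(result)  # recalibrate (sensor_drift, impact=low)
```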
One can notice that diagnosis is highly attractive; this is why it is usually integrated into the larger framework of monitoring, supervision and maintenance. Having situated diagnosis and given an overview of how it can be used, let us consider its important features and components. Generally, fault diagnosis can be defined as a process of three complementary tasks: fault detection, fault isolation and fault identification [10]. These tasks are defined hereafter.
• Fault detection: this task consists in deciding whether the system works in normal conditions or whether a fault has occurred. It is a logical operation whose answer must be true or false.
• Fault isolation: this task is triggered whenever a fault has occurred. It aims at localizing the potential components causing the fault.
• Fault identification: this task intends to identify the nature of the fault. In other words, it analyses specific fault parameters, among others its size, criticality and significance.
Fault detection operates on a model of the desired behavior, whereas both fault isolation and identification involve a model of the faulty system behavior under the considered faults.
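The three tasks can be sketched as a minimal interface; the types, names and toy heuristics below are our own illustrative assumptions, since the tasks are defined above only conceptually.

```python
from dataclasses import dataclass

# Minimal sketch of the three complementary tasks; all types/names are assumptions.

@dataclass
class FaultReport:
    components: set    # result of fault isolation
    nature: str        # result of fault identification
    criticality: str

def detect(observed, expected):
    """Fault detection: a strictly true/false answer on the desired-behavior model."""
    return observed != expected

def isolate(symptoms):
    """Fault isolation: localize the potential components causing the fault."""
    return {comp for comp, abnormal in symptoms.items() if abnormal}

def identify(components):
    """Fault identification: characterize the fault (nature, criticality, ...)."""
    nature = "sensor" if any(c.startswith("s") for c in components) else "actuator"
    return FaultReport(components, nature, "high" if len(components) > 1 else "low")

if detect(observed=7, expected=5):
    report = identify(isolate({"s1": True, "a1": False}))
    print(report.components, report.nature, report.criticality)  # {'s1'} sensor low
```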


To the best of our knowledge, Reiter's approach [11] is the basic foundation of diagnosis approaches. The author [11] defined a complete theory which deals with diagnosing faulty components in a precise and concise manner. Accordingly, based on this work, we present the diagnosis formalization.

Diagnosis Formalization
In this section, we recall the basic intuition of Reiter's diagnosis theory.

System Representation: A system is described by a pair (SD, COMPS), such that:
– SD is a set of first-order logical predicates that represent the system description.
– COMPS is a finite set of constants that represent the system components.
Intuitively, the system description specifies how a given system behaves normally under the hypothesis that all its components are functioning correctly. A component is a part of the system characterized as follows: it can be replaced independently of any other part, and together with all the other components it produces the system functionality.

Normal and Abnormal Behavior: Two unary predicates are defined on the components of COMPS in a given SD, namely AB (i.e. abnormal behavior) and ¬AB (i.e. not abnormal behavior), such that for any component ci in COMPS, AB(ci) means that ci behaves abnormally, whereas the literal ¬AB(ci) represents the fact that ci behaves correctly.

Observation: The observations form a set of first-order predicates, noted OBS. The reader can refer to [11], where an interesting example of Davis' circuit is entirely modeled. It is noteworthy that a diagnosis problem arises when the set OBS is inconsistent with the normal behavior of the system representation. Given a system representation (SD, COMPS) and the observations OBS, it is now possible to introduce the notion of a diagnosis problem.

Diagnosis Problem: A diagnosis problem is a 3-tuple (SD, COMPS, OBS), such that (SD, COMPS) is the system description and OBS is the set of observations.
The semantic definition of the diagnosis is based on the concept of health state, which is built on the system representation; it is given by:

σ(Δ, COMPS \ Δ) = ⋀_{c ∈ Δ} AB(c) ∧ ⋀_{c ∈ COMPS \ Δ} ¬AB(c).

Therefore, diagnosis is given by the following definition.

Diagnosis: Let Δ ⊆ COMPS. A diagnosis Δ for a diagnosis problem (SD, COMPS, OBS) is a set of candidates σ(Δ, COMPS \ Δ) such that SD ∪ OBS ∪ {σ(Δ, COMPS \ Δ)} is satisfiable.

After defining the diagnosis, an important question arises: how to demonstrate such a theory and compute all diagnoses? In fact, the diagnosis definition provides a well-defined semantics for the diagnosis concept. Nevertheless, as diagnosis concepts are given in terms of models, this offers little at the computational level. Indeed, the proof theory (i.e. a demonstrator) provides a syntactical approach and a more attractive


algorithm, named the diagnosis algorithm, which aims at determining all possible diagnoses for a given faulty system. This is why it is convenient to consider that there is an automatic demonstrator underlying each diagnosis algorithm; for instance, Reiter's algorithm [11] explicitly uses such a demonstrator. In this way, the Diagnosis System definition introduces the syntactical counterpart.

Diagnosis System: A diagnosis system is a quadruple (SD, COMPS, OBS, A), such that (SD, COMPS, OBS) is the diagnosis problem and A is the diagnosis algorithm.

Intuitively, a diagnosis is the set of candidates computed by the diagnosis algorithm A. In other words, a diagnosis system is defined by the system representation, the potential observations and the diagnosis algorithm. Owing to the power and effectiveness of Reiter's approach, we plan to use a similar intuition for our future problem formalization. Concerning the automatic reasoning algorithm, we propose to use existing inference engines which have already been tested and implemented; an ongoing paper focuses on these aspects. As a matter of fact, in practice many diagnosis solutions are possible for a given diagnosis problem (SD, COMPS, OBS). For instance, the diagnosis system can be modeled by the function Diagno: (SD, COMPS, OBS, A) → 2^COMPS, which returns the potential solutions for a given diagnosis system by applying the underlying diagnosis algorithm to (SD, COMPS, OBS). Two cases can hold:
• |Diagno(SD, COMPS, OBS, A)| = 1: the resulting component is certainly faulty and should be replaced by another one.
• |Diagno(SD, COMPS, OBS, A)| > 1: the resulting conflict set of concurrent solutions raises uncertainty in exactly identifying the faulty components; in other words, some resulting components may not be faulty.
On the other hand, each component has an acceptable failure rate which can be designated by the component manufacturer or deduced from experimentation. Let ΔCi be the acceptable failure rate associated with ci ∈ COMPS.
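For a toy system, the candidate-based definition above can be illustrated by a naive enumeration that checks every health-state assignment for consistency. This is a brute-force sketch for illustration only, not Reiter's hitting-set algorithm; the example circuit and all names are our own, with SD encoded as a Python predicate instead of first-order formulas.

```python
from itertools import combinations

# Brute-force illustration of candidate diagnoses: enumerate subsets Δ ⊆ COMPS,
# assume the components in Δ abnormal, and keep the health states consistent
# with SD and OBS. Toy SD: adder A1 computes out = x + y, adder A2 computes
# z = out + 1. (Not Reiter's hitting-set algorithm; all names are assumptions.)

COMPS = ["A1", "A2"]

def sd_consistent(abnormal, obs):
    """A component assumed normal must actually behave normally."""
    ok_a1 = (obs["out"] == obs["x"] + obs["y"])
    ok_a2 = (obs["z"] == obs["out"] + 1)
    return (("A1" in abnormal) or ok_a1) and (("A2" in abnormal) or ok_a2)

def diagnoses(obs):
    cands = []
    for r in range(len(COMPS) + 1):
        for delta in combinations(COMPS, r):
            if sd_consistent(set(delta), obs):
                cands.append(set(delta))
    return cands

# Observation inconsistent with A1 behaving normally: out should be 3, we saw 4.
obs = {"x": 1, "y": 2, "out": 4, "z": 5}
print(diagnoses(obs))  # every candidate contains A1
```

On this observation the empty candidate (all components normal) is inconsistent, so every returned candidate must assume A1 abnormal; the minimal diagnosis is {A1}.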
Thus, the information delivered by sensors or any other data source is approximate. Notice that the handled information is uncertain and the diagnosis process gives approximate results. Consequently, an adequate theory should be introduced to cope with these drawbacks. As mentioned in the introduction, we adopt fuzzy logic theory.

2.2 Fuzzy Logic

Fuzzy logic is used for modeling a wide range of computer systems where data are imprecise, uncertain or changing rapidly over time. It is used in different engineering areas, among others facial pattern recognition, air conditioners, vacuum cleaners, anti-skid braking systems, transmission systems, control of subway systems and unmanned helicopters, knowledge-based systems for multi-objective optimization of power systems, expert systems, robotics and biotechnology. Fuzzy logic is a mathematical theory dealing with uncertainty. Generally, a fuzzy logic system is a non-linear input-output mapping, where variables take their truth values from the closed interval [0,1] of real numbers, which generalizes the Boolean truth values. Likewise, fuzzy facts are true


only to some degree between 0 and 1, and false to some other degree in the same interval. A fuzzy logic system operates on fuzzy sets. A fuzzy set F, which extends the ordinary crisp set, is characterized by a membership function µF(x), which gives the degree of membership of x in F. Intuitively, input and output variables are handled as partial truths, where each truth value may range between completely true and completely false. The system description is designed as if-then rules. These rules are expressed as a collection of IF-THEN statements, having fuzzy propositions as antecedents (i.e. premises) and consequents (i.e. conclusions). Generally, the core of any fuzzy-based application can be described as a general structure composed of four main steps [12]:
• Fuzzification: the membership functions map the current crisp input values into fuzzy sets. Thereby, the truth degrees of the premises of potential rules can be determined.
• Inference: after computing the premise truth value of each rule, the fuzzified values activate the potential rules. Such rules are designed by experts or extracted from statistical data. This step results in one fuzzy subset being assigned to each output variable for each rule.
• Composition: the fuzzy inference engine combines all the fuzzy subsets assigned to each output variable in order to obtain an aggregated fuzzy output. In other words, a single fuzzy subset is generated for each output variable.
• Defuzzification: this last step is optional; it can be used for converting the fuzzy output sets into crisp numbers that are useful for making decisions or controlling actions.
Fuzzy logic is chosen owing to the fact that it deals with the imprecise nature of problem solving and copes with the imprecision of real-world situations, such as designing environmental control systems.
Accordingly, using fuzzy logic in a fault diagnosis system is promising in the sense that it can easily capture the necessary information and come up with sound diagnosis decisions.
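The four steps above can be sketched with a minimal single-input example. The membership functions, rules and output singletons below are illustrative assumptions, not the authors' rule base.

```python
# A minimal Mamdani-style sketch of fuzzification, inference, composition and
# defuzzification. All membership functions and rules are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def diagnose(temp):
    # 1. Fuzzification: crisp temperature -> degrees of "normal" / "high"
    normal = tri(temp, 10.0, 25.0, 40.0)
    high = tri(temp, 30.0, 60.0, 90.0)
    # 2. Inference: IF temp is high THEN fault likely; IF temp is normal THEN
    #    fault unlikely (rule strength = premise truth degree)
    likely, unlikely = high, normal
    # 3. Composition: aggregate rule outputs onto output singletons 0.9 and 0.1
    # 4. Defuzzification: centroid of the weighted singletons
    total = likely + unlikely
    return 0.5 if total == 0 else (0.9 * likely + 0.1 * unlikely) / total

print(round(diagnose(20.0), 2))  # 0.1 -> low fault degree for a normal reading
print(round(diagnose(60.0), 2))  # 0.9 -> high fault degree for a hot reading
```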

3 Related Works

This section reviews some papers related to fault diagnosis and to fuzzy logic in this field.
• Paper [2] provides an overview of fault detection, diagnosis and prognostics (FDD&P). First, it describes the fundamental processes and some important definitions. Then it identifies the strengths and weaknesses of methods across the broad spectrum of approaches; a generic application of fault detection and diagnostics to the operation and maintenance of engineered systems is also established clearly, and the terminology and the first range of applications are set out. The second part reviews FDD&P research in the heating, ventilating, air conditioning and refrigeration (HVAC&R) field and other building systems. The authors conclude by discussing the current state of applications in buildings and the expected contributions to operating and maintaining buildings in the future.


• The authors of [13] are interested in fault localization. It is well known that fault localization is a central concern of the diagnosis process, notably in communication networks. The authors set out terminology and concepts related to the fault management field. They propose a survey focused on fault localization as a process of deducing the exact source of a failure from the set of observed failure indications. The complexity of fault localization results from the complexity, unreliability and non-determinism of communication systems. They also state that a fault management system should provide means to represent and interpret uncertain data within the system knowledge and fault evidence; many examples are given in [14–16]. Nevertheless, many problems remain unsolved, particularly those related to data incompleteness, uncertainty, distribution and the dynamic aspect of networks.
• In paper [17], the authors provide a review of different approaches adopted so far in the field of fault diagnosis in dynamic systems. The different methodologies presented reflect the type and depth of research done in this area. The review also takes into account the limitations of techniques, for example the model-based methods of parameter estimation, parity space and other such approaches; those methods are employed to monitor process variables such as temperature, pressure, etc., and to generate alarms. Fault management is performed in several steps, each of which is dissected. For fault isolation, two methodologies are used: the directional residual method and the structured residual method. For fault detection, different methods and tools are described, e.g. the parity space method for the generation of residuals, state and algebraic observers, and limit checking; as frameworks we find neural networks, fuzzy logic and fuzzy-neural networks.
• In [2] the authors motivate the use of fuzzy logic instead of other methods and logics for dealing with uncertainty in expert systems, such as subjective probability, certainty factors, fuzzy measures and fuzzy set theory. Fuzzy logic allows working with ambiguous or fuzzy quantities, large or small, or with data that are subject to interpretation. This reference proposes to reduce outage time and enhance service reliability by locating faulty sections in a power system as soon as possible using fuzzy logic. It is well known that fault detection and diagnosis currently use heuristic rules and past experience for this purpose, and the important role of such experience has motivated extensive work. The authors give a new structure for an expert fault diagnosis system based on fuzzy logic, which seems to be flexible and requires a small number of rules.
• Paper [18] presents an application of fuzzy logic for fault detection and isolation in industrial processes. Since most data in industrial practice are characterized by uncertainty and inaccuracy, fuzzy logic is a very efficient tool for dealing with information having such features. First, fuzzy models of sub-systems are applied in fault detection algorithms, especially for the generation of residuals on the basis of fuzzy models; in a second step, residual values are evaluated based on fuzzy rules, followed by the description of a faults-symptoms relation. Fuzzy logic is applied both in diagnosing algorithms based on the pattern recognition methodology and in algorithms of automatic inference. We notice that the advantages of diagnostic algorithms based on fuzzy logic have been taken into consideration at all steps of the process.
• Recently, [19] introduced the fuzzy inference approach to fault diagnosis in large-scale industrial systems. A centralized diagnostic in a single-level structure is presented, notably for the step of fault isolation. The proposed approach is illustrated with an example. The authors conclude the paper with the expected benefits of decentralized two-level diagnostic structures.

3.1 Discussion and Hypothesis

Topics in diagnosis research point toward several aspects and issues, among others:
• The representation model of the behavior and of the diagnosis process.
• The reasoning model (i.e. the algorithm) and the associated decision-making process, which should comply with the model used.
• The global diagnosis strategy and the corresponding system architecture.
• The consideration of particular aspects and characteristics of the system to be diagnosed.
• The ontological structure which describes the different fault concepts and their relationships.
• System diagnosability.
Our contribution concerns the global diagnosis strategy, with fuzzy logic as the representation and reasoning model, using a semi-centralized structure. From the diagnosis viewpoint, for complex systems it is quite difficult to obtain concise and correct results from a centralized diagnosis process; this is due to their nature, which is usually open and distributed. On the other hand, a completely distributed diagnosis solution generally converges to incoherent and imprecise results. Thus, combining both alternatives gives rise to an appropriate and efficient answer: the diagnosis process should be semi-centralized. Indeed, some assumptions have to be made, mainly concerning fault kinds and transmission reliability. Faults are restricted to those related to data sources. In addition, we trust the communication medium; that is, transmission faults that can occur in the different transmission layers are not studied in this work. We also assume that the system is diagnosable.

4 FPG-Diag: Fuzzy Partial Global Diagnosis

4.1 Overview

This section presents an overall overview of the FPG-Diag diagnosis approach. The FPG-Diag process is based on a system decomposition which distinguishes two kinds of parts: partial parts (i.e. partial systems, with several instances) and the global part (i.e. the global system, a unique instance). Each partial part processes diagnosis locally; it can thereby react in real time to some critical situations, and it delivers the possible diagnoses in a timely manner to the global part. The latter synthesizes the partial diagnoses and analyzes the upcoming results in order to make the global decision. Accordingly, two kinds of diagnosis are distinguished in FPG-Diag and used conjointly: partial and global diagnosis. Figure 1 depicts the overall process.


Fig. 1. Fuzzy partial global diagnosis strategy.

• Partial diagnosis: each partial part is assumed to be able to diagnose faults locally. Based on its local observations and on information from its interrelated and surrounding connected components, communications and further functioning features, this local part diagnoses the possible faults itself. Since a partial part has a local viewpoint of the whole system to which it belongs, the diagnosis result is partial. With each local part Sysi is associated a partial local diagnosis process, noted PDi. It aims to detect faults locally upon occurrence and to send the relevant information needed for the global diagnosis. The underlying diagnosis mechanism is modeled by the activity diagram of Fig. 2.
• Global diagnosis: by global diagnosis we mean the full diagnosis process of the entire system. Based on a non-empty set of partial diagnoses and a global viewpoint of the system, including the system structure, communications, functioning information and other features, the global part diagnoses the system. The global diagnosis is noted GD and modeled by the activity diagram of Fig. 3.

4.2 Local Part Behavior Description

The local part behavior (see Fig. 2) is described as follows:
– Initially, the local part proceeds by gathering data from the different sensors and data sources. These data, which come in different forms (numerical, analogue, etc.), have to be fuzzified.
– The fuzzified data (i.e. Fuzzy Input (i)) are then used by the fuzzy inference engine to check their consistency with respect to the expected data (i.e. the desired model). For instance, each sensor has an interval of expected values for its physical measurements.


If data values fall outside this interval, then a fault has occurred, which means that the sensor has failed.
– The result of the fuzzy reasoning about data consistency, after defuzzification, distinguishes three cases:
• Healthy state: this result is obtained whenever the diagnosis degree is equal to zero; it expresses the fact that the system is healthy and consistency is preserved. The subsequent task is to proceed with the next CollectData after a specified time.

Fig. 2. Local part behavior activity diagram.

• Uncertain diagnosis: this result is obtained when the diagnosis degree lies in the interval ]0, 0.5[ (i.e. the "May be" linguistic variable). It means that a fault could possibly occur. In that case, a revision routine should be applied with respect to the diagnosis history and the current data (i.e. Fuzzy Input (i+1)). Also, a notification about the diagnosis uncertainty should be sent to the global part diagnosis system. The revision routine can converge to the healthy state or to a certain diagnosis.
• Certain diagnosis: a certain diagnosis, obtained whenever the diagnosis degree lies in the interval [0.5, 1], confirms the failure upon its occurrence. Thereafter, it only remains to localize the failing components and identify the nature of the faults. To do so, another fuzzy reasoning should be applied, namely the fuzzy reasoning of the fault isolation activity. During this activity, the fuzzy inference engine uses the fuzzified data to deduce the set of possibly faulty components. "Fault isolation" is similar to "fault detection"; the latter is modeled by the fuzzification, fuzzy reasoning on consistency and defuzzification activities; however, the fuzzy rules handled are altogether different.
– After the "fault isolation" activity, "fault identification" and "local diagnosis results and analysis" are executed. They are modeled as follows:


S. Kouah and I. Kitouni

Fault identification consists in identifying the type of fault with respect to a set of possible faults known beforehand. Moreover, the controllability of the fault is examined. Two cases are distinguished:

• The fault is controllable: in this case, a repair plan is triggered and a notification about the situation is sent to the global part. The repair plan consists of a set of corrective actions, such as replacing the faulty component, correcting the error, etc. After the repair, the system starts collecting data again.
• The fault is non-controllable: here, the local system administrator is notified and the fault criticality level is analyzed. If the criticality level is high, then the emergency alert system is triggered. Usually, non-controllable faults lead to the shutdown of the system.

Illustrated discussion: in the local part behavior there are two kinds of fuzzy reasoning. The first one consists in calculating the state of failure. The fuzzy engine operates on fuzzy rules that model consistency by taking into account temporal consistency, spatial consistency, functioning consistency and structural consistency. To clarify the temporal correlation, we present two examples:

• If data from a given sensor has not been received by the partial part after a given time, then the sensor may be faulty.
• A significant change between two consecutive measurements also indicates a possible fault occurrence.

The second fuzzy reasoning is about isolating failures. The corresponding fuzzy rules, which take the above constraints into account, analyze the interdependences to localize the failure with respect to the set of faults. Fault models and relationships between faults are constructed on the basis of expert knowledge, the nature of the faults and components, or inferences from historical data of the studied system.
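The three-way split of the diagnosis degree and the two temporal-consistency examples above can be sketched as follows. This is a minimal illustration: the function names, thresholds and the jump-scaling scheme are our assumptions, not part of the authors' model.

```python
def classify_diagnosis(degree):
    """Map a defuzzified diagnosis degree in [0, 1] to one of the three
    cases used by the local part: 0 -> healthy state,
    ]0, 0.5[ -> uncertain diagnosis, [0.5, 1] -> certain diagnosis."""
    if degree == 0.0:
        return "healthy"
    return "uncertain" if degree < 0.5 else "certain"

def temporal_suspicion(elapsed_s, timeout_s, prev_value, curr_value, max_jump):
    """Derive a suspicion degree in [0, 1] from the two temporal checks:
    (1) no data received within the expected period, and
    (2) a significant change between two consecutive measurements."""
    if elapsed_s > timeout_s:          # data did not arrive in time
        return 1.0
    jump = abs(curr_value - prev_value)
    return min(jump / max_jump, 1.0)   # scale the jump into [0, 1]
```

In a full implementation, `temporal_suspicion` would feed the fuzzification step rather than being read directly as a diagnosis degree.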

4.3 Global Part Behavior Description

The global part behavior (see Fig. 3) is described as follows:

– Initially, the global part has to analyze the received data flow, which can be a set of essential collected data from the different local parts or the various kinds of fault reports indicating the state of failures. In fact, collected data that have not caused a fault in their local parts could do so once the data are merged together, due to relationships that may exist between them. Whatever the data flow (i.e., data or reports), the treatment is similar:
– Data (respectively reports) have to be pre-processed and normalized in order to unify their format and remove redundancy.
– Then, data (respectively reports) are merged together according to a pre-established merger scheme.
– After defuzzification, three results are distinguished: healthy state, uncertain diagnosis and certain diagnosis. They have the same intuition as those of the local part, except that the result concerns the global system.


– The global healthy state and the global uncertain diagnosis receive the same treatment as in the local part. A global certain diagnosis, whether due to data, to reports, or to both, is followed by the fault isolation activity.
– This activity uses a fuzzy inference engine to deduce the faulty components. Here the task is more complex than isolation at the local level: it analyzes all possible correlations by taking into account the spatial, temporal, functional and structural constraints and, in addition, reasons about merged data and reports together. Fault identification then determines the fault's controllability.
– Controllable faults are repaired and the global system administrator is notified. A non-controllable fault, however, leads to a fault criticality analysis.
– Whatever the fault criticality level, a notification of the fault criticality with the associated level is sent to the global system administrator. If the fault is highly critical, the emergency alert system is triggered and the system can be halted.
– The merged data (respectively merged reports) are fuzzified and passed to the fuzzy inference engine for reasoning on data (respectively reasoning on reports) and checking system consistency.
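The pre-processing, redundancy removal and merging of local reports described above can be sketched as follows. The report structure and the max-based merger scheme are illustrative assumptions; the paper only states that a pre-established merger scheme is used.

```python
def merge_reports(reports):
    """Normalize local-part reports, remove redundancy, and merge them.
    Each report is a dict with 'source', 'component' and 'degree' keys.
    Duplicates (same source and component) are dropped, and the merged
    global degree per component is the maximum over the local degrees."""
    seen = set()
    merged = {}
    for rep in reports:
        key = (rep["source"], rep["component"])
        if key in seen:            # redundant duplicate of an earlier report
            continue
        seen.add(key)
        comp = rep["component"]
        merged[comp] = max(merged.get(comp, 0.0), float(rep["degree"]))
    return merged
```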

Fig. 3. Global part behavior activity diagram.

4.4 Some Employment Feasibility

To show the applicability of the proposed approach, let us consider an example of an Internet of Things (IoT) system: the diagnosis of an IoT-based air quality monitoring system. The main goal of this application is to achieve remote monitoring of air quality in a given building and to ensure the diagnosis of its sensor failures. The building (seen as the global part) contains N rooms, with N > 0. Each room i (i.e., LocalPart i) controls the temperature, the Carbon Dioxide (CO2) percentage and the humidity. It also uses a distance or motion detector to ensure room security. Such a system can be useful in various fields such as healthcare (patient monitoring), business management, agriculture, chemistry, etc. We consider two rooms Ri (1 ≤ i ≤ 2), such that Ri represents LocalPart i. Each room is equipped with the following hardware components: a temperature and humidity sensor (DHT11), a CO2 sensor (MQ-7) and a distance sensor (HC-SR04). Concerning the microcontroller and communication hardware, an Arduino Uno and Bluetooth (HC-06) are respectively used. In addition, alarms and LEDs should


be used to alert users. The smartphone is used as a gateway that ensures communication between the things and the LocalPart server; this communication is carried out by means of the Bluetooth protocol. Each LocalPart server, which should be internet-enabled, exchanges relevant data with the GlobalPart server through the TCP/IP protocol. The system is depicted in Fig. 4. The connected things are realized and launched (see Fig. 5(a)). The gateway (i.e., the mobile application on the smartphone) is implemented under the Android platform (see Fig. 5(b)). The global part and the local ones are developed under the JADE platform. Each LocalPart has two main modules: monitoring and failure diagnosis. The monitoring process depends on the user's preferences for the desired measured values. It collects data measurements, compares and processes these data, and triggers the appropriate actions, for instance firing an alert when safety or security is threatened, or turning on the heating when the temperature is low. Simultaneously, the diagnosis process behaves as described in Fig. 2. The collected measurement data are fuzzified according to the linguistic variables and membership functions. The linguistic variables should be chosen to cover abnormal and normal behaviors in concordance with the sensor properties specified in the datasheet, such as sensor accuracy and measurement range. Behavior coherency is checked, by means of the “fuzzy reasoning on coherency” module, over a set of rules. These rules operate on the fuzzified data; they specify the main features related to coherent behavior, taking into account the spatial, temporal, structural and functional correlations that can exist between measured data or between sensors. For instance, ambient temperature, CO2 and humidity can be interrelated by a formula derived from expertise. The result should then be defuzzified, and the diagnosis process continues according to the obtained results.
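Fuzzification against linguistic variables can be sketched with triangular membership functions. The variable names and breakpoints below are illustrative assumptions, not values taken from the DHT11 datasheet.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_temperature(t_celsius):
    """Fuzzify a temperature reading into illustrative linguistic variables."""
    return {
        "Low":    tri(t_celsius, -5.0, 5.0, 15.0),
        "Medium": tri(t_celsius, 10.0, 20.0, 30.0),
        "High":   tri(t_celsius, 25.0, 35.0, 45.0),
    }
```

Note that adjacent variables overlap, so a reading such as 12.5 °C belongs partly to "Low" and partly to "Medium", which is what the fuzzy inference engine then reasons over.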

Fig. 4. IoT system diagnosis topology: the Global Part exchanges data flows with Local Part 1 and Local Part 2.

Fig. 5. (a) Connected things, (b) mobile application.

Concerning the fuzzy reasoning on fault diagnosis, a set of fuzzy rules based on several features related to the sensors' faults is used. Such features can be the measurement precision range, a null measured value, or a value outside the expected sensor interval. Examples of linguistic variables corresponding to each sensor measurement are: Below-Normal, Low, Medium, High, Above-Normal. Similarly, the GlobalPart behaves as depicted in Fig. 3. Preprocessing and normalization concern, for instance, the unification of temperature values received from local parts in different measuring units (degrees Celsius, degrees Fahrenheit). Fuzzy reasoning on fault diagnosis takes into account relationships between all the received data, which can be spatially, temporally, structurally and functionally interrelated; for instance, it can detect a large difference between the ambient temperatures in the two rooms. These values should be relatively close, since the sensors are located in the same building (i.e., the same geographical site). Fault isolation is based on fault types. A classification of IoT sensor faults is under development; we attempt to organize them into an ontology together with their related phenomena.
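The unit normalization and the cross-room temperature comparison can be sketched as follows; the 5 °C tolerance is an illustrative assumption, not a value given in the paper.

```python
def to_celsius(value, unit):
    """Unify temperature values received in different measuring units."""
    if unit == "C":
        return value
    if unit == "F":
        return (value - 32.0) * 5.0 / 9.0
    raise ValueError("unknown unit: " + unit)

def rooms_consistent(readings, tolerance=5.0):
    """Check that ambient temperatures reported by rooms of the same
    building are relatively close; a large spread hints at a fault
    detectable only at the global level."""
    celsius = [to_celsius(v, u) for v, u in readings]
    return max(celsius) - min(celsius) <= tolerance
```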

5 Conclusion

In this paper, we have considered an important research area, fault diagnosis, which has potential impacts on complex systems. First, we outlined the key elements and the main features of this field, in order to define and extract its properties, requirements and challenging points. Second, we recalled the formalization of the diagnosis problem with respect to Reiter's theory. After that, we gave an overview of fuzzy logic and reviewed some related works. The proposed approach was then presented. FPG-Diag is an incremental and hierarchical strategy that takes into account spatio-temporal, functional and structural constraints. The approach is general and can be used for diagnosing a wide range of systems. Fuzziness is used at different levels, which reflects the real case. The diagnosis process detects a fault upon its occurrence and reacts in real time. This work can be continued in several ways:

• The reasoning algorithms and the implied routines should be refined and studied.
• Application of FPG-Diag to Internet of Things systems.
• Refinement of the approach and its modeling by means of the multi-agent paradigm.



Development of a Software for the Semantic Analysis of Social Media Content

Aleksey Filippov, Vadim Moshkin, and Nadezhda Yarushkina

Ulyanovsk State Technical University, Ulyanovsk, Russia
{al.filippov,v.moshkin,jng}@ulstu.ru

Abstract. The paper presents a developed intelligent tool for Opinion Mining of social media. In addition, the article presents new algorithms for the hybridization of ontological analysis and knowledge engineering methods with natural language processing (NLP) methods for extracting the semantic and emotional components of semi-structured and unstructured text resources. These approaches will improve the efficiency of the analysis of social media content, taking into account the specifics of the data and the fuzziness of natural language. An original algorithm for translating an RDF/OWL ontology into a graph knowledge base is also proposed. In addition, the article presents an approach to inference on the ontology repository, based on translating SWRL constructs into elements of the Cypher language.

Keywords: Ontology · Semantic analysis · Social media · Unstructured resources · Graph knowledge base · Inference · SWRL · OWL

1 Introduction

The active growth of the social media audience on the Internet (social networks, forums, blogs and online media) has made them a new source of data and knowledge. The specifics of working with social media have several advantages and disadvantages. The advantages include:

– high speed of access to information;
– a broad audience;
– a wide range of data topics;
– a large amount of data.

The disadvantages are:

– a large amount of data;
– unstructured presentation of information;
– absence of a single conceptual framework.

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 421–432, 2019. https://doi.org/10.1007/978-3-030-12072-6_34


A. Filippov et al.

A large amount of social media data is both an advantage and a disadvantage at the same time. According to statistics for 2017, about 30 million unique authors publish 580 billion messages monthly in Russian social networks. A large amount of data makes it possible to obtain large training sets for machine learning methods and a large statistical sample for social studies. However, the billions of unstructured text messages and publications that users leave monthly cannot be processed manually. There is a need for methods of automated intelligent and sentiment analysis of text data. These methods handle large amounts of data, understand their meaning (Text Mining), and determine the sentiment (Opinion Mining) of user messages and publications in a short time [1–5]. Understanding the meaning and sentiment of publications in social media is the most important and most complex element of automated text processing [6–11]. Our scientific group has created an intelligent tool for Opinion Mining of social media. This tool includes new approaches to the hybridization of ontological analysis and knowledge engineering methods with natural language processing (NLP) methods for extracting the semantic and emotional components of semi-structured and unstructured text resources [12–16]. These approaches will improve the efficiency of the analysis of social media content, taking into account the specifics of the data and the fuzziness of natural language.

2 The Architecture of the Software System for Opinion Mining of Social Media

A service-oriented approach is the basis of the architecture of the software system for Opinion Mining of social media (SOM). This approach allows:

1. Increasing the overall fault tolerance of the SOM by executing services in different address spaces.
2. Increasing the scalability of the SOM by running several instances of services and balancing the load between them.
3. Providing the ability to use different operating systems, programming languages, storage technologies, etc.
4. Reducing the downtime of the SOM when making changes, correcting errors, etc.
5. Providing the possibility of completely replacing services while maintaining the interface of interaction with the other parts of the SOM.

REST in conjunction with the HTTP protocol [17] is the basis for the organization of the interface for the interaction of SOM services. REST allows a distributed system of any type to have the following properties: performance, extensibility, simplicity, updatability, intelligibility, portability and reliability. The architecture of the SOM is shown in Fig. 1.


Fig. 1. Architectural diagram of the software system for Opinion Mining social media.

The SOM consists of the following subsystems:

1. The subsystem for importing data from social media. This subsystem works with popular Internet services (Vkontakte, Facebook, Odnoklassniki, Twitter, Instagram, Youtube) through their public application programming interfaces (public APIs). The data loader for Internet media retrieves data from HTML pages based on rules. A separate rule, consisting of a set of CSS selectors, has to be created for each Internet media source. The ontology loader loads into the storage subsystem a description of the features of the problem area (PrA) in the form of ontologies in the RDF or OWL language.
2. The data storage subsystem provides the representation of information extracted from social media in a unified structure that is convenient for further processing. The data is stored in the context of users, collections, data sources, versions, etc. The following database management systems (DBMS) are used:
– Elasticsearch for indexing and retrieving data [18];
– MongoDB for storing data in JSON format [19];
– Neo4j for storing graphs of social interaction (the social graph) and the ontology [20].


The data converter converts the data imported from social media into an internal SOM representation. The social graph builder constructs a social graph based on the relationships of users and communities in social media. The OWL/RDF-ontology translator translates the ontology into the graph knowledge base [21–23].
3. The semantic data analysis subsystem performs preprocessing of text resources. In addition, this subsystem performs statistical and linguistic analysis of text resources.
4. The sentiment data analysis subsystem determines, from text, the attitude of a speaker, writer, or other subject with respect to some topic, or the emotional reaction to a document, interaction, or event.
5. The data search subsystem searches for objects related to a specific task, presented as a set of keywords. In this case, the user's query can be extended semantically using an ontology that contains descriptions of the features of the PrA.

2.1 The Graph Knowledge Base and a Social Graph as Data Models of SOM

The SOM storage subsystem stores the following kinds of data:

– data extracted from social media;
– a description of the PrA in the form of a graph knowledge base;
– a social graph that reflects the users and their connections in social media.

The graph DBMS Neo4j is used to store the description of the PrA in the form of a graph knowledge base and the social graph. The main advantages of Neo4j are:

1. A native storage format for graphs.
2. One copy of the DBMS can manage graphs with billions of nodes and links.
3. Neo4j can manage graphs that do not completely fit into RAM.
4. A graph-oriented query language, Cypher.

The search engine Elasticsearch is used to organize data retrieval. The main advantages of Elasticsearch are:

1. Elasticsearch can process petabytes of structured and unstructured data.
2. Denormalization is used to increase search efficiency.
3. Elasticsearch is one of the most popular search engines and is currently used by many large organizations and services such as Wikipedia, The Guardian, StackOverflow, GitHub, etc.

The document-oriented DBMS MongoDB is used to store the data extracted from social media. The main advantages of MongoDB are:

1. High performance.
2. A document-oriented query language.
3. Fault tolerance.
4. Scaling.


3 The Model of the KB of the Ontology Repository

The ontology is a model of the PrA representation and is visualized in the form of a semantic graph. The graph-oriented database management system (graph DBMS) Neo4j is the basis of the ontology store for the fuzzy KB. Neo4j is currently one of the most popular graph databases and has the following advantages:

1. A free community version.
2. A native format for data storage.
3. One copy of Neo4j can work with graphs containing billions of nodes and relationships.
4. The presence of a graph-oriented query language, Cypher.
5. Availability of transaction support.

Neo4j was chosen to store the description of the PrA in the applied ontology form, since an ontology is in fact a graph. In this case, it is only necessary to restrict the set of nodes and graph relations into which RDF and OWL ontologies will be translated. A context of an ontology is some state of the ontology obtained during versioning. A context can also be a subject area. Formally, the ontology is:

O = ⟨T, C^{T_i}, I^{T_i}, P^{T_i}, S^{T_i}, F^{T_i}, R^{T_i}⟩, i = 1, ..., t,   (1)

where
t is the number of ontology contexts,
T = {T_1, T_2, ..., T_n} is a set of ontology contexts,
C^{T_i} = {C_1^{T_i}, C_2^{T_i}, ..., C_n^{T_i}} is a set of ontology classes within the i-th context,
I^{T_i} = {I_1^{T_i}, I_2^{T_i}, ..., I_n^{T_i}} is a set of ontology objects within the i-th context,
P^{T_i} = {P_1^{T_i}, P_2^{T_i}, ..., P_n^{T_i}} is a set of ontology class properties within the i-th context,
S^{T_i} = {S_1^{T_i}, S_2^{T_i}, ..., S_n^{T_i}} is a set of ontology object states within the i-th context,
F^{T_i} = {F_1^{T_i}, F_2^{T_i}, ..., F_n^{T_i}} is a set of the logical rules fixed in the ontology within the i-th context,
R^{T_i} is a set of ontology relations within the i-th context, defined as:

R^{T_i} = ⟨R_C^{T_i}, R_I^{T_i}, R_P^{T_i}, R_S^{T_i}, R_F^{T_i}⟩,   (2)

where
R_C^{T_i} is a set of relations defining the hierarchy of ontology classes within the i-th context,
R_I^{T_i} is a set of relations defining the 'class-object' ontology tie within the i-th context,
R_P^{T_i} is a set of relations defining the 'class-class property' ontology tie within the i-th context,


R_S^{T_i} is a set of relations defining the 'object-object state' ontology tie within the i-th context,
R_F^{T_i} is a set of relations generated on the basis of logical ontology rules within the i-th context.

Some relations of the graph (R_C^{T_i} and R_I^{T_i}) may be functional relations. Functional relations are characteristic of the OWL language.

3.1 Translation of RDF/OWL-Ontology into a Graph KB

It is necessary to select the structural elements of TBox (structure, scheme) and ABox (content) of the graph KB, respectively, for the successful translation of RDF or OWL ontologies into graph KB objects. Formally, the functions translating an RDF/OWL ontology into a graph KB are:

f_O^{RDF}: RDF → O,  f_O^{OWL}: OWL → O,

where
RDF = ⟨C^{RDF}, I^{RDF}, P^{RDF}, S^{RDF}, R^{RDF}⟩ is the set of entities of an RDF ontology (corresponding to the entities of expression 1),
OWL = ⟨C^{OWL}, I^{OWL}, P^{OWL}, S^{OWL}, R^{OWL}⟩ is the set of entities of an OWL ontology (corresponding to the entities of expression 1),
O is the set of ontology entities of the graph KB (expression 1).

Table 1 shows the correspondence of the RDF/OWL-ontology entities to the graph KB entities.

Table 1. Correspondence of RDF/OWL-ontology entities to the graph KB entities

     | RDF                     | OWL                                               | Graph KB
TBox | rdfs:Resource           | owl:Thing                                         | C^{T_i}
     | rdfs:Class              | owl:Class                                         | C^{T_i}
     | rdfs:subClassOf         | owl:SubclassOf                                    | R_C^{T_i}
     | rdf:Property            | owl:ObjectProperty, owl:DataProperty              | P^{T_i}
     | rdfs:domain, rdfs:range | owl:ObjectPropertyDomain, owl:DataPropertyDomain, | R_P^{T_i}
     |                         | owl:ObjectPropertyRange, owl:DataPropertyRange    |
ABox | rdf:type, rdf:ID        | owl:NamedIndividual                               | I^{T_i}
     | rdf:resource            | owl:ClassAssertion                                | R_I^{T_i}
     | rdf:ID                  | owl:ObjectPropertyAssertion,                      | S^{T_i}, R_S^{T_i}
     |                         | owl:DataPropertyAssertion                         |
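The TBox part of this correspondence can be sketched as a small translation routine over RDF-style triples. The triple format and the helper below are illustrative assumptions; a real translator would emit Cypher CREATE/MERGE statements against Neo4j rather than Python tuples.

```python
def translate_tbox(triples):
    """Translate a few RDF/OWL TBox statements into graph-KB elements:
    classes become Class nodes (the C set), rdfs:subClassOf becomes a
    SubClassOf relation (the R_C set), rdf:Property becomes a Property
    node (the P set)."""
    nodes, edges = [], []
    for subj, pred, obj in triples:
        if pred == "rdf:type" and obj in ("rdfs:Class", "owl:Class"):
            nodes.append(("Class", subj))
        elif pred in ("rdfs:subClassOf", "owl:SubclassOf"):
            edges.append((subj, "SubClassOf", obj))
        elif pred == "rdf:type" and obj == "rdf:Property":
            nodes.append(("Property", subj))
    return nodes, edges
```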


The main entities of RDF and OWL ontologies correspond to the entities of the graph KB ontology. The graph KB entities unify the different ontology formats and form a data model. Using the generated data model, the developer can build Cypher queries against the contents of the ontology repository. This method of extracting knowledge from the ontology repository is more familiar to developers than working with reasoners. Nevertheless, inference on the contents of our ontology repository is also possible.

3.2 Description of the Main Concepts of the Social Media and Their Relations in the Knowledge Base

The main concepts of the SOM data model are:

The MassMedia concept stores information about different social media (VKontakte, Facebook, Twitter, etc.) or news sites. The SOM import subsystem downloads data from these social media using their APIs, and from news sites by using a set of CSS selectors.
The Person concept is a list of users extracted from social media. The Person concept has a set of attributes often used in social networks: surname, first name, date of birth, hobbies, education, etc.
The Group concept stores information about communities extracted from social media. The Group concept has a set of attributes often used in social networks: group name, group description, age restrictions, creation date, etc.
The Post concept stores information about records in social media. The Post concept has the following attributes: author, title, content, creation date, attachments, etc.
The Comment concept stores information about comments in social media. The Comment concept has the following attributes: author, title, content, creation date, attachments, etc.
The Attachment concept stores information about the attachments of entries and comments in social media. The Attachment concept has several types and allows storing the following kinds of attachments: photos, photo albums, audio, video, links, documents (files), surveys, etc.

Table 2 shows the correspondence of the social media concepts to the SOM concepts.

Table 2. The correspondence of the social media concepts and SOM concepts

SOM        | VKontakte, Facebook, ok.ru | Twitter    | Instagram   | Youtube | Social media
MassMedia  | URL, for example vk.com    | URL        | URL         | URL     | URL
Person     | User                       | User       | User        | User    | –
Group      | Group                      | –          | –           | –       | –
Post       | Post                       | Twit       | Photo       | Video   | News, article
Comment    | Comment                    | Comment    | Comment     | Comment | Comment
Attachment | Attachment                 | Attachment | Tags, links | Link    | Attachment


The main concepts of the SOM data model allow storing data downloaded from most existing social media. The unified representation of the data allows the SOM to process, analyze and search it efficiently. The data converter is used to transform data downloaded from social media into the internal representation of the SOM. A data converter module has to be developed for each new Internet resource. The Internet media loader generates the same data representation for all sites; therefore, the converter does not need to be adapted for each site separately.
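The per-source converter modules can be sketched as a registry of functions, each mapping a source's raw record into the unified SOM representation. The field names and the registry mechanism are illustrative assumptions, not the SOM implementation.

```python
CONVERTERS = {}

def converter(source):
    """Register a converter module for one social media source."""
    def register(func):
        CONVERTERS[source] = func
        return func
    return register

@converter("vk")
def convert_vk(raw):
    # Map a hypothetical VKontakte record to the unified Post concept.
    return {"concept": "Post", "author": raw["from_id"], "content": raw["text"]}

@converter("twitter")
def convert_twitter(raw):
    # Map a hypothetical Twitter record to the same unified structure.
    return {"concept": "Post", "author": raw["user"], "content": raw["full_text"]}

def to_internal(source, raw):
    """Convert a downloaded record into the internal SOM representation."""
    return CONVERTERS[source](raw)
```

Adding support for a new resource then amounts to registering one more converter function, which matches the modular design described above.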

4 The Inference on the KB Contents

Inference is the process of reasoning from premises to a conclusion. Reasoners are used to implement the inference function. Reasoners derive logical consequences from a set of statements, facts and axioms [24, 25] and also control logical integrity. The Neo4j graph DBMS does not allow the use of existing reasoners. Thus, there is a need to develop a mechanism for inference from the ontology repository contents. Formally, a logical rule of the ontology of the fuzzy knowledge base is:

F^{T_i} = ⟨A_Tree, A_SWRL, A_Cypher⟩,

where
T_i is the i-th context of the ontology of the fuzzy KB;
A_Tree is a tree-like representation of the logical rule F^{T_i};
A_SWRL is the SWRL representation of the logical rule F^{T_i};
A_Cypher is the Cypher representation of the logical rule F^{T_i}.

The tree view A_Tree of a logical rule F^{T_i} is:

A_Tree = ⟨Ant, Cons⟩,

where
Ant = Ant_1 Θ Ant_2 Θ ... Ant_n is the antecedent (condition) of the logical rule F^{T_i};
Θ ∈ {AND, OR} is the set of permissible logical operations between antecedent atoms;
Cons is the consequent (consequence) of the logical rule F^{T_i}.

Figure 2 shows an example of a tree-like representation of a logical rule. This rule describes the nephew-uncle relationship.


Fig. 2. Example of a tree-like representation of a logical rule.

The tree-like logical rule is translated into the following SWRL [26]:

1. hasParent(?a,?b) & hasBrother(?b,?c) => hasUncle(?a,?c)
2. hasChild(?b,?a) & hasSister(?c,?b) => hasUncle(?a,?c),

and into the following Cypher view:

1. MATCH (s1:Statement{name: "hasChild", lr: true})
   MATCH (r1a)<-[:Domain]-(:Statement{name:"hasFather"})-[:Range]->(r1b)
   MERGE (r1b)-[:Domain]->(s1)
   MERGE (r1a)-[:Range]->(s1)

2. MATCH (s1:Statement{name: "hasChild", lr: true})
   MATCH (r2c)<-[:Domain]-(:Statement{name:"hasSister"})-[:Range]->(r2a)
   MATCH (r2c)<-[:Domain]-(:Statement{name:"hasFather"})-[:Range]->(r2b)
   MERGE (r2b)-[:Domain]->(s1)
   MERGE (r2a)-[:Range]->(s1)

Thus, when rules in the SWRL language are imported into the KB of logical rules, they are translated into their tree view. The tree-like representation of a logical rule allows forming both its SWRL representation and its Cypher representation. Relations of a special type, R_F^{T_i} (expression 2), are formed between the graph KB entities by applying a logical rule F^{T_i} (expression 1) in Cypher. The graph KB entities must satisfy the atoms of the antecedent of the logical rule. These relations are formed to organize inference from the ontology repository contents.
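The generation of the SWRL representation from the tree-like representation ⟨Ant, Cons⟩ can be sketched as follows. The Atom and Rule classes are illustrative assumptions; a real implementation would also emit the Cypher form from the same tree.

```python
class Atom:
    """One atom of a rule, e.g. hasParent(?a,?b)."""
    def __init__(self, name, *args):
        self.name, self.args = name, args
    def swrl(self):
        return "%s(%s)" % (self.name, ",".join(self.args))

class Rule:
    """Tree-like rule: antecedent atoms joined by one logical operation
    (AND or OR), plus a single consequent atom."""
    def __init__(self, antecedent, op, consequent):
        self.antecedent, self.op, self.consequent = antecedent, op, consequent
    def swrl(self):
        joiner = " & " if self.op == "AND" else " | "
        body = joiner.join(a.swrl() for a in self.antecedent)
        return body + " => " + self.consequent.swrl()
```

For the uncle example above, a rule built from hasParent and hasBrother atoms renders to the first SWRL string shown in the listing.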


A set of Cypher queries is formed to control the logical integrity of the ontology repository contents. The Cypher queries invert the axioms of the TBox. If one of these queries returns a result, then the logical integrity is violated.

5 Conclusion

The intelligent tool for Opinion Mining of social media developed by our research group allows downloading data from the social network VKontakte and from Internet media. The social graph is formed during the download of data from the social network VKontakte. This social graph contains the following types of relationships: is a friend, is a subscriber, is a relative, is in a relationship, is in the community. A statistical index of the text data is formed when data is loaded, using the search engine Elasticsearch. The data is converted into the concepts of the SOM data model and stored in MongoDB. The data search subsystem searches for data by keywords in the context of data sources and concept types: users, communities, entries, comments and attachments. The user's initial search query can be extended during the search based on the graph knowledge base. The graph knowledge base is formed during the translation of the ontology in the OWL format into the nodes and relationships of the graph knowledge base. Further development of the SOM consists of:

1. Development of downloaders for the social networks Twitter, Facebook, Instagram, Youtube and ok.ru.
2. Testing the storage subsystem on large amounts of data.
3. Development of a subsystem of sentiment data analysis.
4. Development of a subsystem of semantic data analysis.
5. Finalization of the user interface.

The resulting SOM should improve the effectiveness of analyzing the content of social media, taking into account the specifics of data representation and the fuzziness of natural language.

Acknowledgments. This study was supported by the Russian Foundation for Basic Research (Grants No. 18-47-730035, 18-47-732007, 18-37-00450, 18-47-732007).

Development of a Software for the Semantic Analysis

431

An Analysis of Road Traffic Flow Characteristics Using Wavelet Transform

Oleg Golovnin, Anastasia Stolbova and Nikita Ostroglazov

Samara University, 34, Moskovskoye shosse, Samara 443086, Russia
[email protected]

Abstract. Measures to obtain reliable information about the current state of traffic flows are necessary to introduce the effective control methods offered by modern intelligent transport systems. We developed a method and software for the wavelet analysis of road traffic flow characteristics in the frequency and time domains without restoring missing samples. The developed method was implemented as software embedded in an intelligent transport system. The method of wavelet analysis of road traffic flow characteristics takes into account the non-equidistance of the data, which allows the construction of a time-frequency scan with a uniform representation, without restoring the missing samples, by adjusting the sampling intervals. Background data on traffic flows for the analysis was obtained from the CityPulse Dataset Collection. We analyzed characteristics such as average speed and vehicle count. We analyzed wavelet spectra and scalograms, identified common dependencies in the frequency distribution of extremes, and revealed differences in spectral power for different road segments.

Keywords: Spectral analysis · Data mining · Traffic pattern · Wavelet

1 Introduction

Measures to obtain reliable information about the current state of traffic flows are necessary to introduce the effective management methods offered by modern intelligent transport systems [1]. Under monitoring, traffic flow characteristics are unstable, diverse and difficult to obtain in practice [2]. The effectiveness of monitoring traffic flow characteristics can be enhanced by the use of automatic technological processes for collecting, storing, planning and analyzing information [3]. There are monitoring technologies based on sensors [4], image analysis [5] and satellite navigation systems [6], based on data from cellular operators [7] and Earth remote sensing data [8], as well as approaches using prediction models for time series [9], statistical methods [10] and spectral analysis [11]. There are also monitoring and prediction methods using hybrid approaches: spectral-statistical analysis [12] and spatio-temporal analysis [13], methods based on statistics and neural networks [14], and on statistics combined with macroscopic models of traffic flows [15].

Frequency methods of spectral analysis do not allow determining the time of existence of a frequency in the process under study, which leads to limited opportunities in the analysis of non-stationary frequency processes. Wavelet analysis belongs to the time-frequency methods and makes it possible to analyze the lifetime of a frequency in a process; it is one of the actively developing methods of spectral analysis of non-stationary processes [16] and prediction [17]. Wavelets are successfully used for searching for similarities and differences in the characteristics of traffic flows [18], for traffic flow forecasting [19] and for the detection of traffic events [20]. In existing analyses, however, the non-equidistant process is reduced to a uniform one; this approach is simple and does not require the development of new algorithms, but it leads to errors in dating, so the problems of analyzing non-equidistant processes are not fully solved. To improve the quality of data analysis and prediction accuracy, wavelet neural networks [21] and fuzzy wavelet neural networks [22] are used. Fuzzy logic, wavelet transforms and neural network technologies are used for the initial preparation of data on traffic flows [23], the search for trends and patterns in the data, and traffic flow forecasting [24]. The use of neural networks involves training, which leads to problems with scaling and replication when implementing solutions in practice.

So, the purpose of the paper is to develop a method and software for the wavelet analysis of road traffic flow characteristics in the frequency and time domains without restoring the missing samples. The software must be scalable and replicable for use in practice as part of an intelligent transport system.

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 433–445, 2019. https://doi.org/10.1007/978-3-030-12072-6_35

434

O. Golovnin et al.

2 The Method of Wavelet Analysis of Traffic Flow Characteristics

Data on the characteristics of traffic flows, obtained, in general, from unreliable monitoring systems, form a time series:

$$\{x_i, \Delta t_i\}_{i=1\ldots N}, \quad \Delta t_i = t_{i+1} - t_i, \tag{1}$$

where $i$ is the sample number and $t_i$ is the sample time. Typical data series obtained as a result of monitoring the characteristics of traffic flows include rows with missing observations (2):

$$\begin{cases} x_i = x_i(t_i), \\ t_i = \sum_{k=1}^{i} Y_k \cdot \Delta t_0. \end{cases} \tag{2}$$

Then the applied wavelet transformation takes the following form:

$$W(a, b) = \frac{1}{\sqrt{a}} \sum_i x_i\, \psi\!\left(\frac{t_i - b}{a}\right), \tag{3}$$

where $\psi(t)$ is the selected analyzing wavelet, $a \neq 0$ is the scale parameter and $b \geq 0$ is the shift parameter.
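A minimal sketch of the summation in (3) for a non-equidistant series can be written in Python as follows (the Haar-like wavelet and the toy data with a missing sample are illustrative assumptions; the paper itself proposes Morlet wavelets, Eq. (4)):

```python
import numpy as np

def wavelet_transform(t, x, psi, a, b):
    """Discrete wavelet transform of a (possibly non-equidistant) series
    {x_i, t_i}, following Eq. (3): W(a, b) = a^{-1/2} * sum_i x_i * psi((t_i - b) / a)."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    return (1.0 / np.sqrt(abs(a))) * np.sum(x * psi((t - b) / a))

# Toy non-equidistant series with a missing sample (gap between t=10 and t=20).
t = np.array([0.0, 5.0, 10.0, 20.0, 25.0])
x = np.array([1.0, 0.5, -0.3, 0.8, -0.1])

# Illustrative compactly supported wavelet (not the Morlet wavelet of Eq. (4)).
haar_like = lambda u: np.where(np.abs(u) <= 1.0, np.sign(-u), 0.0)

print(wavelet_transform(t, x, haar_like, a=10.0, b=12.0))
```

Note that the sum runs directly over the irregular sample times $t_i$, so no interpolation of missing samples is needed.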


For the analysis and interpretation of non-stationary, non-equidistant time series obtained from traffic flow monitoring systems, we propose to use the wavelet transform method with a time-frequency scan with a uniform representation. To do this, the following steps are performed:

1. Select the forced sampling interval $\Delta t_0$.
2. Taking into account the obtained interval, restore the array of shifts $b$ corresponding to a uniform time series.
3. When calculating the wavelet coefficients $W(a, b)$ at each step of the shift $b$, recalculate the wavelet $\psi$ with the new non-uniform sampling intervals corresponding to the time series intervals $t_{i+1} - t_i$.

Like the algorithm applied in [25], we propose to use Morlet wavelets to analyze the characteristics of traffic flows:

$$\psi(t) = \exp(ikt) \exp\!\left(-\frac{t^2}{2\sigma^2}\right). \tag{4}$$

In step 1 we use the minimum possible value of the forced sampling interval of the time series, because the most likely reason for the non-equidistance in the data on traffic flows is the omission of observations (2).

For the analysis of the results, we apply the wavelet spectrum describing the distribution of energy over scales:

$$S(a_i, b_j) = \left| W(a_i, b_j) \right|^2. \tag{5}$$

We also use a scalogram, which has the following form:

$$S_g(a_i) = \frac{1}{N_b} \sum_{j=0}^{N_b - 1} S(a_i, b_j). \tag{6}$$

We apply wavelet analysis to the macroscopic characteristics of the traffic flow: speed $v(t)$, intensity $I(t)$ and density $k(t)$:

$$v(t) = \frac{I(t)}{k(t)}, \tag{7}$$

$$I(t) = \frac{\partial Q}{\partial t}, \tag{8}$$

where $Q$ is the vehicle count on the road segment.
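The steps and formulas (3)–(6) above can be sketched as follows; the time grid, scales, forced sampling interval and Morlet parameters $k$ and $\sigma$ are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def morlet(t, k=6.0, sigma=1.0):
    """Morlet wavelet of Eq. (4): exp(i*k*t) * exp(-t^2 / (2*sigma^2))."""
    return np.exp(1j * k * t) * np.exp(-t**2 / (2.0 * sigma**2))

def wavelet_coeffs(t, x, scales, shifts):
    """W(a, b) of Eq. (3), evaluated on the raw, possibly non-equidistant, time grid."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    W = np.empty((len(scales), len(shifts)), dtype=complex)
    for i, a in enumerate(scales):
        for j, b in enumerate(shifts):
            W[i, j] = np.sum(x * morlet((t - b) / a)) / np.sqrt(a)
    return W

# Step 1-2: a uniform shift grid restored with a forced sampling interval dt0.
dt0 = 1.0
t = np.array([0, 1, 2, 4, 5, 6, 8, 9], dtype=float)   # samples at t=3 and t=7 missing
x = np.sin(2 * np.pi * t / 4.0)
shifts = np.arange(t[0], t[-1] + dt0, dt0)            # b = 0, 1, ..., 9
scales = np.array([0.5, 1.0, 2.0, 4.0])

W = wavelet_coeffs(t, x, scales, shifts)
S = np.abs(W)**2        # wavelet spectrum, Eq. (5)
Sg = S.mean(axis=1)     # scalogram, Eq. (6): average of S over all shifts
```

The key point of the method survives in the sketch: the coefficients are computed against the original irregular sample times, while the shift grid $b$ is uniform, so the resulting time-frequency scan has a uniform representation without restoring the missing samples.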


3 Implementation of Integrated Software

We integrate the developed software, implementing the proposed method of wavelet analysis of traffic flow characteristics, into the intelligent geographic information platform for transport process analysis [26]. The wavelet analysis software is implemented on the ITSGIS framework [27], designed for building intelligent transport systems. Integration with the intelligent transport system is carried out at three levels: the data level, receiving raw data from monitoring systems; the business logic level, presenting processed data to intelligent transport system services; and the presentation level, embedding visual components into ITSGIS user interfaces.

The software implements the following functions for the wavelet analysis of traffic flow characteristics:

– extracting data sequences with uniform and non-uniform sampling from various data sources: XML, JSON and CSV files, databases and ontological knowledge bases;
– obtaining the spectral characteristics of the process;
– computation of wavelet functions and wavelet transform coefficients;
– construction of wavelet spectra and scalograms;
– calculation of conversion errors.

The software architecture is shown in Fig. 1.

Fig. 1. Software architecture (the wavelet analysis software comprises the Processing, Wavelet Domain, Data Input, Data Output, Testing and GUI components; the ITSGIS framework provides the Knowledge Base, Service Locator, Graphical User Interface, Road Reports, Enterprise Service Bus, Data Collector, Utilities and Raw Traffic Data Accessor, with raw data and analytics data stored in the database)

An Analysis of Road Traffic Flow Characteristics Using Wavelet Transform

437

The wavelet analysis software consists of the following components, which implement the functions described above:

– Data Input Component;
– Data Output Component;
– GUI Component—user interaction: displaying data and receiving commands;
– Wavelet Domain Component;
– Processing Component—performing calculations;
– Testing Component—quality control of calculations.

We implemented the software for the .NET Framework 4.5 in C#. The graphical user interface is based on WinForms, with advanced visualization using OpenGL. Interaction with the services of the intelligent transport system is performed via the SOAP protocol. We use the NHibernate object-relational mapping technology for integration with various relational data sources.

4 Case Study

4.1 Source Data Preparation

We obtain the background data on traffic flows for the analysis from the CityPulse Dataset Collection for Smart City Applications [28]. The baseline data are presented in CSV format for the city of Aarhus, Denmark [29]. Each measurement of the traffic flow characteristics is taken every 5 min; the lack of equidistance in the source data arises from measurement omissions. One record contains the following information about the traffic flow: status, average measured time, average speed, median measured time, time stamp and vehicle count. We use the weekly interval from Monday to Sunday as one data set for the analysis. The analyzed characteristics are average speed (7) and vehicle count (8). For each data series, we perform a centering operation, which makes the signal under investigation stationary. For example, Fig. 2 shows an initial data series for the vehicle count; the trend line is shown in red.

Fig. 2. Initial data series for vehicle count


Figure 3 shows the centered stationary series for the vehicle count.

Fig. 3. Centered data series for vehicle count
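The centering operation can be sketched as subtracting a fitted trend from the series; a linear (least-squares) trend model is an assumption here, since the paper shows only the trend line without specifying the model:

```python
import numpy as np

def center_series(t, x, degree=1):
    """Center a series by subtracting a fitted polynomial trend.
    A linear trend (degree=1) is assumed; the exact trend model used
    in the paper is not specified, so this is an illustrative choice."""
    coeffs = np.polyfit(t, x, degree)       # least-squares trend fit
    trend = np.polyval(coeffs, t)
    return x - trend

# Toy series: linear growth plus an oscillating component.
t = np.arange(0.0, 10.0)
x = 3.0 * t + 7.0 + np.array([0.5, -0.5] * 5)
xc = center_series(t, x)
print(xc)  # centered series with the trend removed
```

Because the least-squares fit includes an intercept, the centered residuals have zero mean, which is what makes the subsequent spectral analysis meaningful.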

A wavelet analysis of the non-equidistant data on traffic flow characteristics was carried out for three road segments: high traffic intensity (Nordjyske Motorvej), medium traffic intensity (Randersvej) and low traffic intensity (Søftenvej). Figure 4 shows their location on the map.

Fig. 4. Location of the studied areas on the map: green—Nordjyske Motorvej, blue—Randersvej, orange—Søftenvej


The characteristics of the studied road sections are given in Table 1.

Table 1. The studied road sections

Property                    Nordjyske Motorvej (158475)   Randersvej (158836)   Søftenvej (173011)
ROAD_TYPE                   MAJOR_ROAD                    MAJOR_ROAD            ROAD
DISTANCE_IN_METERS          2335                          1195                  2061
NDT_IN_KMH                  112                           81                    51
POINT_1_LAT, POINT_1_LNG    56.23489, 10.12501            56.21071, 10.17302    56.21508, 10.13978
POINT_2_LAT, POINT_2_LNG    56.21740, 10.10702            56.20391, 10.17512    56.22579, 10.11658

4.2 Wavelet Data Analysis of Average Speed

In the wavelet-spectrum graphs calculated by (5), time is plotted along the X-axis and frequency (Hz) along the Y-axis; the larger the spectrum value, the brighter the picture. In the scalogram graphs calculated by (6), frequency (Hz) is plotted along the X-axis and power along the Y-axis. We use Morlet wavelets (4) in the calculations. For the Nordjyske Motorvej segment, Fig. 5 shows the calculated wavelet spectrum for the average speed data. An increase in spectral density is noticeable on Monday, Wednesday, Thursday and Friday, which is typical of this class of road.

Fig. 5. Wavelet spectrum for Nordjyske Motorvej (average speed)

For the Randersvej segment, Fig. 6 shows the wavelet spectrum for the average speed data; a noticeable increase in the spectral density is observed only on Friday. For the Søftenvej segment, Fig. 7 shows the wavelet spectrum for the average speed data; the increase in spectral density is noticeable on Tuesday and Friday.


Fig. 6. Wavelet spectrum for Randersvej (average speed)

Fig. 7. Wavelet spectrum for Søftenvej (average speed)

The wavelet spectra show that there are two frequency bands for each day of the week: a high band of 0.004–0.016 Hz and a low band of 0.001–0.003 Hz. Within these ranges, an increase in spectral density occurs. For high-speed roads both bands are involved; for low-speed roads, mainly the low frequencies. The high-frequency range is involved only on days with the lowest intensity of movement.

The scalograms built for the three road sections are combined in Fig. 8. The scalograms show that the weekly interval of the time series under study is characterized, in terms of average speed, by 5 special points – local frequency maxima in the intervals 0.0006–0.0006, 0.0009–0.0012, 0.0012–0.0021, 0.0039–0.0056 and 0.0073–0.0101 Hz. So, we can conclude that the general regularity of the time series of speeds is observed regardless of the level of intensity of traffic flows on the road segment, the direction of movement or the type of road. The location of the first three largest extremes in one low-frequency region indicates the predominance of low-frequency components in the time series. Thus, the intensity of movement and the speed mode on the road do not affect the overall appearance of the scalogram for average speed, but limit the variation in speed.

Fig. 8. Scalograms (average speed): green—Nordjyske Motorvej, blue—Randersvej, orange—Søftenvej
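Local frequency maxima of a scalogram, such as the 5 special points reported above, can be located programmatically. A minimal sketch with synthetic data (the peak positions below are illustrative, not the paper's scalograms):

```python
import numpy as np

def local_maxima(freqs, power):
    """Return (frequency, power) pairs where power has a strict local maximum."""
    f = np.asarray(freqs, dtype=float)
    p = np.asarray(power, dtype=float)
    idx = [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
    return [(f[i], p[i]) for i in idx]

# Synthetic scalogram with peaks near 0.001 Hz and 0.008 Hz (illustrative only).
freqs = np.linspace(0.0005, 0.016, 200)
power = (np.exp(-((freqs - 0.001) / 0.0004)**2) +
         0.6 * np.exp(-((freqs - 0.008) / 0.001)**2))

peaks = local_maxima(freqs, power)
print([round(f, 4) for f, _ in peaks])
```

Applying such a detector to each road segment's scalogram would let the frequency intervals of the extremes be compared automatically rather than read off the plots.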

4.3 Wavelet Data Analysis (Vehicle Count)

For the Nordjyske Motorvej segment, Fig. 9 shows the wavelet spectrum for vehicle count data. The spectral density increases are noticeable in the high frequency range on Tuesday, Wednesday, Thursday, and Friday.

Fig. 9. Wavelet spectrum for Nordjyske Motorvej (vehicle count)


For the Randersvej segment, Fig. 10 shows the wavelet spectrum for the vehicle count data. The increase in the spectral density in the low frequency range is noticeable from Monday to Thursday.

Fig. 10. Wavelet spectrum for Randersvej (vehicle count)

For the Søftenvej segment, Fig. 11 shows the wavelet spectrum for vehicle count data. The increases in spectral density in the middle and high frequency ranges are noticeable on Monday, Tuesday, Wednesday and Friday.

Fig. 11. Wavelet spectrum for Søftenvej (vehicle count)

The scalograms for the vehicle count data for the three sections of roads are combined in Fig. 12.

Fig. 12. Scalograms (vehicle count): green—Nordjyske Motorvej, blue—Randersvej, orange—Søftenvej

The frequency locations of the first three extremes fall within the ranges 0.0005–0.0006, 0.0010–0.0013 and 0.0018–0.0020 Hz and almost coincide, which allows us to conclude that the intensity of traffic flows varies in the low-frequency range, while the different power indicates distinctions in the characteristics. The scalogram for the Søftenvej road segment is very different at higher frequencies and has an extremum in the middle frequency range, at 0.0080 Hz, which is caused by the low speed mode and low throughput of the highway. Therefore, the intensity of movement and the speed limit on a road section influence the general view of the scalogram for vehicle count and make it possible to identify the frequency components that are most significant from the point of view of traffic intensity.

5 Conclusion

We propose a method of wavelet analysis of traffic flow characteristics that takes into account the non-equidistance of the data, which allows the construction of a time-frequency scan with a uniform representation, without restoring the missing samples, by adjusting the sampling intervals. We implemented the developed method as software embedded in an intelligent transport system. We applied the wavelet analysis method to traffic flows on the example of three road segments in the city of Aarhus, Denmark, differing in intensity and speed. We analyzed wavelet spectra and scalograms, identified common dependencies in the frequency distribution of extremes, and revealed differences in spectral power for different sections of highways. We experimentally tested the developed software, implemented in an intelligent transport system, in the city of Saransk, Russia, to identify and analyze specific traffic conditions.


References

1. Lima, S.F., Barbosa, S.A.A., Palmeira, P.C., Matos, L., Secundo, I., Nascimento, R.: Systematic review: techniques and methods of urban monitoring in intelligent transport systems. In: International Conference on Wireless and Mobile Communications, ICWMC, pp. 1–5. ARIA, Nice (2017)
2. Taylor, M.A., Bonsall, P.W.: Understanding Traffic Systems: Data Analysis and Presentation, 2nd edn. Routledge, London (2017)
3. Jain, N.K., Saini, R.K., Mittal, P.A.: Review on traffic monitoring system techniques. Soft Comput.: Theories Appl. 742, 569–577 (2019)
4. Askari, H., Asadi, E., Saadatnia, Z., Khajepour, A., Khamesee, M.B., Zu, J.: A hybridized electromagnetic-triboelectric self-powered sensor for traffic monitoring: concept, modelling, and optimization. Nano Energy 32, 105–116 (2017)
5. Sahgal, D., Ramesh, A., Parida, M.: Real-time vehicle queue detection at urban traffic intersection using image processing. Int. J. Eng. Sci. Generic Res. 4(2), 12–15 (2018)
6. Liu, Z., Jiang, S., Zhou, P., Li, M.: A participatory urban traffic monitoring system: the power of bus riders. IEEE Trans. Intell. Transp. Syst. 18(10), 2851–2864 (2017)
7. Bellavista, P., Caselli, F., Corradi, A., Foschini, L.: Cooperative vehicular traffic monitoring in realistic low penetration scenarios: the COLOMBO experience. Sensors 18(3), 822 (2018)
8. Fedoseev, A., Golovnin, O., Mikheeva, T.: An approach for GIS-based transport infrastructure model synthesis on the basis of hyperspectral information. Procedia Eng. 201, 363–371 (2017)
9. Wang, Y.D., Xu, D.W., Lu, Y., Shen, J.Y., Zhang, G.J.: Compression algorithm of road traffic data in time series based on temporal correlation. IET Intell. Transp. Syst. 12(3), 177–185 (2017)
10. Crawford, F., Watling, D.P., Connors, R.D.: A statistical method for estimating predictable differences between daily traffic flow profiles. Transp. Res. Part B: Methodol. 95, 196–213 (2017)
11. Tchrakian, T.T., Basu, B., O'Mahony, M.: Real-time traffic flow forecasting using spectral analysis. IEEE Trans. Intell. Transp. Syst. 13(2), 519–526 (2012)
12. Zhang, Y., Zhang, Y., Haghani, A.: A hybrid short-term traffic flow forecasting method based on spectral analysis and statistical volatility model. Transp. Res. Part C: Emerg. Technol. 43, 65–78 (2014)
13. Jiang, Y., Kang, R., Li, D., Guo, S., Havlin, S.: Spatio-temporal propagation of traffic jams in urban traffic networks. arXiv preprint arXiv:1705.08269 (2017)
14. Moretti, F., Pizzuti, S., Panzieri, S., Annunziato, M.: Urban traffic flow forecasting through statistical and neural network bagging ensemble hybrid modeling. Neurocomputing 167, 3–7 (2015)
15. Zeroual, A., Harrou, F., Sun, Y., Messai, N.: Monitoring road traffic congestion using a macroscopic traffic model and a statistical monitoring scheme. Sustain. Cities Soc. 35, 494–510 (2017)
16. Mallat, S.: A Wavelet Tour of Signal Processing, 3rd edn. Academic Press, Orlando (2009)
17. Aminghafari, M., Poggi, J.M.: Nonstationary time series forecasting using wavelets and kernel smoothing. Commun. Stat.-Theory Methods 41(3), 485–499 (2012)
18. Cheng, Y., Zhang, Y., Hu, J., Li, L.: Mining for similarities in urban traffic flow using wavelets. In: Intelligent Transportation Systems Conference, ITSC, pp. 119–124. IEEE, Seattle (2007)
19. Zhang, H., Wang, X., Cao, J., Tang, M., Guo, Y.: A multivariate short-term traffic flow forecasting method based on wavelet analysis and seasonal time series. Appl. Intell. 8, 3827–3838 (2018)
20. Mohan, D.M., Asif, M.T., Mitrovic, N., Dauwels, J., Jaillet, P.: Wavelets on graphs with application to transportation networks. In: International Conference on Intelligent Transportation Systems, ITSC, pp. 1707–1712. IEEE, Qingdao (2014)
21. Antonios, K., Alexandridis, A., Zapranis, D.: Wavelet neural networks: a practical guide. Neural Netw. 42, 1–27 (2013)
22. Linhares, L.S.L., Araújo Jr., J.M., Araújo, F.M.U., Yoneyama, T.: A nonlinear system identification approach based on fuzzy wavelet neural network. J. Intell. Fuzzy Syst. 28(1), 225–235 (2015)
23. Chen, J.F., Lo, S.K., Do, Q.H.: Forecasting short-term traffic flow by fuzzy wavelet neural network with parameters optimized by biogeography-based optimization algorithm. Comput. Intell. Neurosci. 2018, 1–13 (2018)
24. Boto-Giralda, D., Díaz-Pernas, F.J., González-Ortega, D., Díez-Higuera, J.F., Antón-Rodríguez, M., Martínez-Zarzuela, M., Torre-Díez, I.: Wavelet-based denoising for traffic volume time series forecasting with self-organizing neural networks. Comput.-Aided Civil Infrastruct. Eng. 25(7), 530–545 (2010)
25. Khaymovich, A.I., Prokhorov, S.A., Stolbova, A.A., Kondratyev, A.I.: A model of milling process based on Morlet wavelets decomposition of vibroacoustic signals. In: International Conference Information Technology and Nanotechnology, vol. 1904, pp. 135–140. ITNT, Samara (2017)
26. Golovnin, O., Fedoseev, A., Mikheeva, T.: Intelligent geographic information platform for transport process analysis. In: CEUR Workshop Proceedings, vol. 1901, pp. 78–85. RWTH Aachen (2017)
27. ITSGIS Homepage. http://www.itsgis.ru. Accessed 30 Oct 2018
28. Tönjes, R., Barnaghi, P., Ali, M., Mileo, A., Hauswirth, M., Ganz, F., Ganea, S., Kjærgaard, B., Kuemper, D., Nechifor, S., Puiu, D., Sheth, A., Tsiatsis, V., Vestergaard, L.: Real time IoT stream processing and large-scale data analytics for smart city applications. In: European Conference on Networks and Communications, Poster Session (2014)
29. CityPulse Dataset Collection. http://iot.ee.surrey.ac.uk:8080/datasets.html. Accessed 30 Oct 2018

An Approach to Estimating of Criticality of Social Engineering Attacks Traces

Anastasiia Khlobystova, Maxim Abramov and Alexander Tulupyev

1 Laboratory of Theoretical and Interdisciplinary Problems of Informatics, St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, 14-th Linia, VI, No. 39, St. Petersburg 199178, Russia
[email protected], [email protected], [email protected]
2 Mathematics and Mechanics Faculty, St. Petersburg State University, Universitetsky pr., 28, Stary Peterhof, St. Petersburg 198504, Russia

Abstract. In this article we propose to consider the trajectories of social engineering attacks that are the most critical from the point of view of the expected damage to the organization, and not only from the point of view of the probability of a successful attack on the user and, indirectly, on the critical documents to which he has access. The article proposes an approach to solving the problem of identifying the most critical path of a multiway social engineering attack. The most critical trajectory is understood here as the most probable trajectory of the attack that will bring the greatest damage to the organization. As a further development of this research direction, we can consider models that describe the context in more detail and take into account the probability distribution over the proportion of documents available to the user, offering models for building integrated estimates of the damage associated with an affected user, various access policies, and accounting for the hierarchy of documents in terms of their criticality or value.

Keywords: Multi-pass social engineering attacks · Social graph of company employees · Critical trajectories in social graph · Social engineering attacks · Users protect · Information security

1 Introduction

1.1 Prerequisites for Studying

One of the most pressing tasks in the field of information security today is the protection of information system users against social engineering attacks. Attacks executed by means of social engineering methods have become among the most effective: more than 82% of them succeed [1–6]. The software and hardware protection tools used in an organization are of little consequence here, because in the final reckoning information security depends on the user, who can take actions, whether intentional or otherwise, that lead to a successful incident. Different organizations may use different software with specific vulnerabilities, and various components may be present or absent, but ultimately it is the users who work with all of this; that is, social engineering attacks, unlike attacks on specific software and hardware, can be and are applied in most cases. Thus, the problem of protecting users of information systems from social engineering attacks is urgent.

An important point is the overall goal of this line of research, which consists in strengthening information security by automating the analysis of information system users' protection against social engineering attacks. This matters given the increasing complexity of information systems, the growing average number of staff working with critical documents, and the need for preventive measures intended to raise the level of information security. During protection analysis it is important to consider that social engineering attacks can be both direct, i.e. executed directly on a user having access to the target critical documents, and multiway, when the malefactor attacks the target user through a chain of users. Multiway social engineering attacks are the subject of this research.

Social graphs are used for the analysis of users' protection from multiway social engineering attacks and for model analysis of these attacks [7]. Usually it is possible to allocate several propagation paths of a social engineering attack. The realization of social engineering attacks along various trajectories will, first, have different success probability estimates related to the malefactor's activity and, second, bring differently sized damage to the organization.

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 446–456, 2019. https://doi.org/10.1007/978-3-030-12072-6_36
Thus, the propagation paths of multiway social engineering attacks can be ranked by the probability of their traversal by the malefactor and by the extent of damage caused to the organization in case of their realization. Of greatest interest to decision makers is obviously the identification of the most critical trajectories, i.e. the most likely trajectories whose realization brings the maximum damage. Identifying such trajectories will help prevent the implementation and spread of social engineering attacks and improve the company's level of information security.

1.2 Related Works

A. Khlobystova et al.

The foundational works for this research are [8, 9], where the first approaches to the assessment of users' security are described; both direct and multiway social engineering attacks are considered there. An approach to identifying the most critical trajectory along which a social engineering attack spreads from user to user is given in [10]. Research on strengthening user security is presented in [11]; its authors propose a multi-layered model of user vulnerability assessment with respect to multiway social engineering attacks, based on three basic elements (method of contact, system image and attack scenario), and also report experiments analyzing the efficiency of the developed system. The approach described in [12] is based on transformation of safety requirements into the elements of a training game; by using the designed game program, users of an information system learn to recognize the main scenarios of social engineering attacks. The findings of [13–16] are empirical estimates which can in turn support the construction of estimates of user protection level. A similar study is [17], which also demonstrates the factors influencing employees' compliance with security policy. In [18] the factors influencing users' exposure and the reasons for susceptibility to social engineering attacks are examined. A study based on text analysis and focused on identification of phishing emails is presented in [19]. The study [20] addresses protection against inference of hidden confidential user data in social networks; questions of explicit and concealed information in users' social profiles are investigated, and a safety method is developed as a result. A similar question is raised in [21]; the approach offered by its authors is based on automated collection of information from open sources, its analysis, and identification of places that are critical from the safety point of view. The work [22] develops a web service based on text mining that allows detecting security hazards on social media platforms such as Twitter. The authors of [23–26] analyze the social graph for the purpose of detecting abnormal behavior and suspicious or fake accounts. In [27] the problem of violation of data confidentiality in social networks is discussed and several methods for its solution are offered. The study [28] provides a comprehensive review of key academic research in the area of information confidentiality over the last 40 years.

1.3 Posing of the Problem

In [29], the degree of protection of critical documents and information system users from social engineering attacks is treated as the likelihood that a document or user remains unaffected, in other words, the likelihood of a social engineering attack's failure. The closely linked complementary quantities, the likelihoods that documents or users are compromised by an intruder carrying out a social engineering attack, are considered simultaneously. Such an approach is interesting because it focuses on identifying the weak links of the "critical documents - information system - user - intruder" complex as a whole. However, this approach, without losing its importance, is incomplete if one takes into account the value of critical documents, which is generally highly heterogeneous. Intuitively, the case in which an intruder gains access to a document with an estimated value of 100 rubles with 95% likelihood is very different from the case in which he compromises a document with an estimated value of 10,000,000 rubles with 1% likelihood. The latter case is more critical, which can be justified by reasoning about expected damage: the expected damage in the first case is 95 rubles, while in the second it is 100,000 rubles. This calculation rationally confirms the intuitive conclusion. Thus, the question arises of finding the attack trajectories that are most critical in terms of expected damage. The goal of this article is to offer a graph model of the context in which social engineering attacks can develop and to describe an algorithm for identifying the most critical attack trajectories in terms of expected damage. For the sake of brevity, let us suppose that an integral characteristic of every user is available, expressing the expected damage if an intruder successfully influences that user in the course of a social engineering attack. This characteristic can be built on the basis of analyzing the user's access rights to documents of different criticality, but algorithms for its synthesis are the object of separate, independent consideration.

2 Research Methods

Multiway social engineering attacks, modeled using the staff's social graph, are considered as the research object. A directed graph $G = (U, E)$ is considered, where $U = \{User_i\}_{i=1}^{n}$ is the set of vertices (users) and $E = \{(u_i, u_j, p_{i,j})\}_{1 \le i,j \le n,\, i \ne j}$ is the set of ordered triples with a specified estimate $p_{i,j}$ of the likelihood of the attack spreading between two users. Methods of graph theory are used for attack modeling and analysis. Estimates of a social engineering attack's success, as well as the likelihoods of users' protection or compromise, reduce to calculating the likelihood of a complex event. In the social graph, each node is a user of the information system. Some critical documents are usually available to each user. In order to move from security assessments of users of information systems to security assessments of critical documents, it is necessary to understand which documents are available to which users. It should be noted that access rights to critical documents can be distributed differently in different information systems. Critical documents in an information system can have different levels of criticality, so their compromise will lead to damage of different size. Ideally, a document's criticality should be expressed financially through the company's damage on its compromise, and the compromise of critical documents should be accounted for per document; the expected damage of a social engineering attack could then be calculated from this information. However, this approach is not always possible and requires significant resources to implement. Critical documents are therefore usually divided by level of criticality: the first group consists of the most critical documents and the second one consists of the least critical. Let us consider approaches to distributing access rights to critical documents of different levels in information systems.

2.1 Approaches to Distribution of Access Rights to Critical Documents

The first approach to distribution of access rights to critical documents is that critical documents are divided into groups based on their level of criticality, and each user of an information system has access to documents of one level of criticality (Fig. 1). In this case the models of critical documents and of the information system user can be presented as follows (Table 1).


Fig. 1. Distribution of user access rights, at which every user of an information system has access to documents of one level of criticality.

Table 1. Models of critical documents and user

№ | Name | Presentation | Comment
1 | Critical documents | cd(id, lc) | A critical document is characterized by an identification number and a level of criticality
2 | User | users(id, lc) | A user in the system has an identification number, through which he is associated with other attributes, and an access level to critical documents

But more often users have access not only to documents of one level of criticality, but also to documents of lower levels of criticality (Fig. 2): critical documents are divided into groups by criticality level, and users have access to critical documents of their level of criticality and of all levels below. The most widespread distribution model by access levels is the one in which users have access not to all documents of a certain level of criticality but only to part of them (Fig. 3): critical documents are divided by criticality levels, and users have access to a certain number of critical documents of each level. In this article we consider the problem of identifying the most critical trajectory of a social engineering attack for an information system in which access rights are distributed so that critical documents are divided into groups based on their level of criticality, and each user of the information system has access to documents of a single level of criticality.
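The first two access-distribution schemes can be sketched directly from the cd(id, lc) and users(id, lc) models of Table 1. The concrete documents, the user, and the function names below are illustrative assumptions, not part of the paper:

```python
from typing import NamedTuple

class CriticalDocument(NamedTuple):
    id: int
    lc: int  # level of criticality

class User(NamedTuple):
    id: int
    lc: int  # access level to critical documents

def can_access_same_level(user: User, doc: CriticalDocument) -> bool:
    # Scheme of Fig. 1: access only to documents of exactly the user's level.
    return doc.lc == user.lc

def can_access_level_and_below(user: User, doc: CriticalDocument) -> bool:
    # Scheme of Fig. 2: access to the user's level and every level below it.
    return doc.lc <= user.lc

# Three hypothetical documents, one per criticality level, and one user of level 2.
docs = [CriticalDocument(1, 1), CriticalDocument(2, 2), CriticalDocument(3, 3)]
u = User(id=7, lc=2)
print([d.id for d in docs if can_access_same_level(u, d)])       # [2]
print([d.id for d in docs if can_access_level_and_below(u, d)])  # [1, 2]
```

The third scheme (Fig. 3) would additionally require an explicit per-user list of accessible document identifiers rather than a rule over levels.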


Fig. 2. Distribution of user access rights: each user has access to critical documents of their level of criticality and documents of all levels below.

Fig. 3. Distribution of user access rights: each user has access to certain documents of different levels of criticality.


Earlier, a study was conducted to identify the most critical path of a social engineering attack between two users [10]. Within that research, the critical trajectory was understood as the trajectory comprising the set of arcs for which the product of the probabilities of attack propagation is maximal, i.e., the most likely trajectory of the social engineering attack. Note that the problem of finding such a trajectory generally comes down to a shortest-path problem by means of simple mathematical transformations [10]. The research showed that the most efficient solution of the problem of finding the most probable path between two users, in terms of resource intensity and time complexity, is a combination of the Dijkstra and Bellman-Ford algorithms: it is convenient to use Dijkstra's algorithm when the number of arcs in the graph is greater than or equal to the number of vertices, and the Bellman-Ford algorithm in the opposite case.
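The reduction mentioned above, from a most-probable-path search to a shortest-path search, rests on weighting each arc with -log p, so that maximizing a product of probabilities becomes minimizing a sum of nonnegative weights. A minimal sketch using Dijkstra's algorithm follows; the three-user graph and the arc probabilities are illustrative assumptions, not taken from the paper:

```python
import heapq
import math

def most_probable_path(prob, start, goal):
    """Dijkstra on arc weights -log(p): the product of probabilities along a
    path is maximal exactly when the sum of -log(p) is minimal."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == goal:
            break
        for v, p in prob.get(u, {}).items():
            nd = d - math.log(p)  # p in (0, 1], so the weight is nonnegative
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if goal not in dist:
        return None, 0.0
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[goal])

# Illustrative probabilities of the attack spreading between users 1, 2, 3.
prob = {1: {2: 0.8}, 2: {1: 0.9, 3: 0.6}, 3: {2: 0.8}}
path, p = most_probable_path(prob, 1, 3)
print(path, round(p, 3))  # [1, 2, 3] 0.48
```

Because all weights are nonnegative, Dijkstra is applicable; Bellman-Ford would compute the same result at higher cost on dense graphs.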

3 Results

By the probability of a social engineering attack passing between two users we formally understand the probability of the most likely attack trajectory between these users. In other words, the probability of the attack passing from user m to user l is

$$p_{ml} = \max_{User_i, User_j \in U} \left( p_m \prod_{i,j} p_{ij} \right),$$

where $p_m$ is the estimate of the probability of success of a direct social engineering attack by the attacker on user m, and $p_{ij}$ is the corresponding estimate of the probability of the attack spreading to user j through user i. As noted above, identification of the most probable trajectories without assessing the damage of their realization does not give the information needed to take the preventive, targeted measures that raise the level of information security in the organization. In this regard, it is necessary to pass from identification of the most probable trajectories to identification of the most critical trajectories. The most critical trajectory is the most likely trajectory of the social engineering attack that brings the maximum damage to the organization. To assess the criticality of trajectories, we propose a metric that can be formalized as follows:

$$ct_{ml} = p_{ml} \cdot loss(l, lc),$$

where $ct_{ml}$ is the criticality assessment of the trajectory between users m and l, $p_{ml}$ is the maximum estimate of the probability of the social engineering attack passing between these users, and $loss(l, lc)$ is the potential damage to the organization when the critical documents of criticality level lc available to user l are compromised. Thus, it is necessary to find the trajectory

$$ct = \max_{User_m, User_l \in U} (ct_{ml}).$$

The simplest way of finding such a trajectory is to calculate and rank the values $ct_{ml}$ for all pairs m and l. However, this approach is resource-intensive. To reduce resource consumption, one can narrow the search area of the probability estimates. Such a filter can be a lower threshold on the probability estimates of trajectories, as well as a threshold on the criticality level of documents (the loss at their compromise) below which the total criticality of a trajectory will be negligible.
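The brute-force variant of the criticality metric above can be sketched as follows. The user set, direct-attack probabilities, arc probabilities and losses below are hypothetical values chosen for illustration (they are not the Fig. 4 values), and the function name is an assumption:

```python
import itertools

def trajectory_criticality(p_direct, p_edge, loss, users):
    """Brute-force ct_ml = p_ml * loss(l): for every ordered pair (m, l),
    enumerate simple paths from m to l, take the most probable one, and
    weight it by the damage at l. Feasible only for small graphs; the
    paper proposes threshold filters to narrow the search instead."""
    def best_path_prob(m, l):
        best = 0.0
        others = [u for u in users if u not in (m, l)]
        for k in range(len(others) + 1):
            for middle in itertools.permutations(others, k):
                path = (m,) + middle + (l,)
                p = p_direct[m]  # direct attack on the entry user m
                for a, b in zip(path, path[1:]):
                    p *= p_edge.get((a, b), 0.0)  # missing arc => impossible
                best = max(best, p)
        return best

    ct = {(m, l): best_path_prob(m, l) * loss[l]
          for m in users for l in users if m != l}
    return ct, max(ct, key=ct.get)

# Hypothetical inputs for three users.
users = [1, 2, 3]
p_direct = {1: 0.5, 2: 0.4, 3: 0.1}
p_edge = {(1, 2): 0.7, (2, 3): 0.6, (2, 1): 0.3, (3, 2): 0.2}
loss = {1: 1, 2: 2, 3: 3}
ct, worst = trajectory_criticality(p_direct, p_edge, loss, users)
print(worst, round(ct[worst], 3))  # (2, 3) 0.72
```

With these numbers the most critical trajectory is the direct arc from user 2 to user 3, because the high loss at user 3 outweighs the slightly more probable attack on user 2 via user 1.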

Fig. 4. Example of social graph of company employees

By way of illustration of the proposed approach, consider the example in Fig. 4. Let a social graph of three company employees be given, with the probabilities of the social engineering attack passing from user to user indicated. Let us also assume that the information system contains three critical documents of three different levels of criticality, each available to only one user. The following values are set for $loss(l, lc)$: $loss(1, 1) = 1$, $loss(2, 2) = 2$, $loss(3, 3) = 3$. That is, document 1 has criticality level 1, document 2 has level 2, and document 3 has level 3; level 1 corresponds to the least critical documents and level 3 to the most critical. Let us calculate the estimates of trajectory criticality between users under these conditions:

$$c_{12} = 0.9 \cdot 0.8 \cdot 2 = 1.44, \quad c_{13} = 0.9 \cdot (0.8 \cdot 0.6) \cdot 3 = 1.296,$$
$$c_{21} = 0.67 \cdot 0.9 \cdot 1 = 0.603, \quad c_{23} = 0.9 \cdot 0.6 \cdot 3 = 1.62,$$
$$c_{31} = 0.23 \cdot (0.8 \cdot 0.9) \cdot 1 = 0.1656, \quad c_{32} = 0.23 \cdot 0.8 \cdot 2 = 0.368.$$


Thus, $ct = 1.62$, which corresponds to the attack on the critical document of criticality level 3 through user 3, who is attacked via user 2. Note that the simplest values of the function $loss(l, lc)$ were considered here, as was the simplest example of an information system with three users, critical documents and levels of criticality. Further research may show the need to change the principles of assigning documents' levels of criticality; these quantities can be set by experts and expressed in a more complex way.

4 Discussion

For the first time, this article proposes to consider the trajectories of social engineering attacks that are most critical from the point of view of expected damage to the organization, rather than from the point of view of the probability of successfully compromising a user and, indirectly, the critical documents to which he has access. The paper presents approaches to the distribution of access rights to critical documents in an organization, as well as an approach to identifying the most critical trajectories of social engineering attacks. As further development of this research direction, one can consider models that describe the context in more detail and take into account the probability distribution over the share of documents available to the user, models for building integrated damage estimates associated with an affected user, various access policies, and accounting for the hierarchy of documents in terms of their criticality or value.

Acknowledgments. The research was carried out in the framework of the project on state assignment SPIIRAN № 0073-2018-0001, with the financial support of the RFBR (project № 18-37-00323 "Social engineering attacks in corporate information systems: approaches, methods and algorithms for identifying the most probable traces"; project № 18-01-00626 "Methods of representation, synthesis of truth estimates and machine learning in algebraic Bayesian networks and related knowledge models with uncertainty: the logic-probability approach and graph systems").

References

1. Phishing campaign targets developers of Chrome extensions. https://www.zdnet.com/article/phishing-campaign-targets-developers-of-chrome-extensions/. Accessed 08 Oct 2018
2. One coffee? Your total is some personal data. http://nymag.com/selectall/2018/08/shiru-cafsoffer-students-free-coffee-for-harvested-data.html. Accessed 27 Sept 2018
3. Cybersecurity threatscape: Q1 2018. https://www.ptsecurity.com/ww-en/analytics/cybersecurity-threatscape-2018-q1/. Accessed 10 Sept 2018
4. Cybersecurity threatscape: Q2 2018. https://www.ptsecurity.com/ww-en/analytics/cybersecurity-threatscape-2018-q2/. Accessed 20 Sept 2018
5. The cyber-crooks found a new way to withdraw money from Russians' cards. http://www.amur.info/news/2018/09/05/143017. Accessed 02 Sept 2018
6. Russia lost 600 billion rubles due to hacker attacks in 2017. https://ria.ru/economy/20181016/1530769673.html. Accessed 18 Oct 2018


7. Suleimanov, A., Abramov, M., Tulupyev, A.: Modelling of the social engineering attacks based on social graph of employees communications analysis. In: Proceedings of 2018 IEEE Industrial Cyber-Physical Systems (ICPS), St. Petersburg, pp. 801–805 (2018). https://doi.org/10.1109/icphys.2018.8390809
8. Azarov, A.A., Tulupyeva, T.V., Suvorova, A.V., Tulupyev, A.L., Abramov, M.V., Usupov, R.M.: Social Engineering Attacks: The Problem of Analysis. Nauka Publishers, St. Petersburg (2016). (in Russian)
9. Abramov, M.V., Tulupyev, A.L., Suleymanov, A.A.: Analysis of users' protection from socio-engineering attacks: social graph creation based on information from social network websites. Sci. Tech. J. Inf. Technol. Mech. Opt. 18(2), 313–321 (2018). https://doi.org/10.17586/2226-1494-2018-18-2-313-321. (in Russian)
10. Abramov, M.V., Tulupyev, A.L., Khlobystova, A.O.: Identifying the most critical trajectory of the spread of a social engineering attack between two users. In: 2nd International Scientific-Practical Conference Fuzzy Technologies in the Industry (FTI 2018), Ulyanovsk, pp. 38–43 (2018)
11. Jaafor, O., Birregah, B.: Multi-layered graph-based model for social engineering vulnerability assessment. In: IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pp. 1480–1488. IEEE (2015). https://doi.org/10.1145/2808797.2808899
12. Yasin, A., Liu, L., Li, T., Wang, J., Zowghi, D.: Design and preliminary evaluation of a cyber Security Requirements Education Game (SREG). Inf. Softw. Technol. 95, 179–200 (2018). https://doi.org/10.1016/j.infsof.2017.12.002
13. Junger, M., Montoya, L., Overink, F.J.: Priming and warnings are not effective to prevent social engineering attacks. Comput. Hum. Behav. 66, 75–87 (2017). https://doi.org/10.1016/j.chb.2016.09.012
14. Dang-Pham, D., Pittayachawan, S., Bruno, V.: Why employees share information security advice? Exploring the contributing factors and structural patterns of security advice sharing in the workplace. Comput. Hum. Behav. 67, 196–206 (2017). https://doi.org/10.1016/j.chb.2016.10.025
15. Öğütçü, G., Testik, Ö.M., Chouseinoglou, O.: Analysis of personal information security behavior and awareness. Comput. Secur. 56, 83–93 (2016). https://doi.org/10.1016/j.cose.2015.10.002
16. Algarni, A., Xu, Y., Chan, T.: An empirical study on the susceptibility to social engineering in social networking sites: the case of Facebook. Eur. J. Inf. Syst. 26(6), 661–687 (2017). https://doi.org/10.1057/s41303-017-0057-y
17. Li, H., Luo, X.R., Zhang, J., Sarathy, R.: Self-control, organizational context, and rational choice in Internet abuses at work. Inf. Manag. 55(3), 358–367 (2018). https://doi.org/10.1016/j.im.2017.09.002
18. Albladi, S.M., Weir, G.R.S.: User characteristics that influence judgment of social engineering attacks in social networks. Hum. Centric Comput. Inf. Sci. 8(1), 5 (2018). https://doi.org/10.1186/s13673-018-0128-7
19. Bhakta, R., Harris, I.G.: Semantic analysis of dialogs to detect social engineering attacks. In: IEEE International Conference on Semantic Computing (ICSC), pp. 424–427. IEEE Xplore Digital Library, California (2015). https://doi.org/10.1109/icosc.2015.7050843


20. Cai, Z., He, Z., Guan, X., Li, Y.: Collective data-sanitization for preventing sensitive information inference attacks in social networks. IEEE Trans. Dependable Secure Comput. 15(4), 577–590 (2018). https://doi.org/10.1109/TDSC.2016.2613521
21. Edwards, M., Larson, R., Green, B., Rashid, A., Baron, A.: Panning for gold: automatically analysing online social engineering attack surfaces. Comput. Secur. 69, 18–34 (2017). https://doi.org/10.1016/j.cose.2016.12.013
22. Lee, K.C., Hsieh, C.H., Wei, L.J., Mao, C.H., Dai, J.H., Kuang, Y.T.: Sec-buzzer: cyber security emerging topic mining with open threat intelligence retrieval and timeline event annotation. Soft. Comput. 21(11), 2883–2896 (2017). https://doi.org/10.1007/s00500-016-2265-0
23. Cao, J., Fu, Q., Li, Q., Guo, D.: Discovering hidden suspicious accounts in online social networks. Inf. Sci. 394, 123–140 (2017). https://doi.org/10.1016/j.ins.2017.02.030
24. Zhang, M., Qin, S., Guo, F.: Satisfying link perturbation and k-out anonymous in social network privacy protection. In: IEEE 17th International Conference on Communication Technology (ICCT), pp. 1387–1391. IEEE Xplore, Chengdu (2017). https://doi.org/10.1109/icct.2017.8359860
25. Kaur, R., Singh, S.: A comparative analysis of structural graph metrics to identify anomalies in online social networks. Comput. Electr. Eng. 57, 294–310 (2017). https://doi.org/10.1016/j.compeleceng.2016.11.018
26. Yang, Z., Xue, J., Yang, X., Wang, X., Dai, Y.: VoteTrust: leveraging friend invitation graph to defend against social network sybils. IEEE Trans. Dependable Secure Comput. 13(4), 488–501 (2016). https://doi.org/10.1109/TDSC.2015.2410792
27. Abawajy, J.H., Ninggal, M.I.H., Herawan, T.: Privacy preserving social network data publication. IEEE Commun. Surv. Tutor. 18(3), 1974–1997 (2016). https://doi.org/10.1109/COMST.2016.2533668
28. Choi, H.S., Lee, W.S., Sohn, S.Y.: Analyzing research trends in personal information privacy using topic modeling. Comput. Secur. 67, 244–253 (2017). https://doi.org/10.1016/j.cose.2017.03.007
29. Abramov, M.V., Azarov, A.A.: Identifying user's of social networks psychological features on the basis of their musical preferences. In: Proceedings of 2017 XX IEEE International Conference on Soft Computing and Measurements (SCM 2017), pp. 90–92. Saint Petersburg Electrotechnical University "LETI", Saint Petersburg (2017). https://doi.org/10.1109/scm.2017.7970504

Ontologies of the Fire Safety Domain

Yuliya Nikulina, Tatyana Shulga, Alexander Sytnik, Natalya Frolova, and Olga Toropova

Yuri Gagarin State Technical University of Saratov, 77, Politechnicheskaya st., Saratov 410054, Russia

Abstract. This article considers recent research on developing ontologies of fire safety. The authors give a brief overview of ontologies existing in this field and consider possible ways to use them. Recent studies have explored the problem from different angles, showing various applications of ontology: there are projects on wildfires, forest fire risk management, fires in buildings, and visualization of the spread of smoke. The article draws attention to the main characteristics of ontologies and the applicability of a particular ontology in different areas. Criteria for ontology comparison have been formulated; according to this research, the most important criteria are the availability of an ontology and its scope of use. The type of ontology as a selection criterion helps at the initial stage to shortlist candidate ontologies. The authors present the results of the analysis in the form of a comparative table. The results can help developers make the right choice of ontologies.

Keywords: Ontology · Ontological engineering · Fire safety

1 Introduction

The problem of fire safety is becoming increasingly important. An important aspect of ensuring fire safety is the widespread use of new information technologies, models, methods and decision-making support systems to prevent fire-hazardous situations at enterprises. Modern information technologies allow automating routine operations, analyzing regulatory documents and carrying out system checks in a single database, thereby easing the task of the design engineer. Designing an automatic fire extinguishing system is a time-consuming process that requires not only a highly qualified specialist, but also knowledge of the current regulatory and legal acts in the field of fire safety. Computer-aided design systems make it possible to increase the productivity of design work and to reduce the time spent on a project. Despite the large number of systems on the market, it is necessary to understand which data model to choose. The number of examples of using ontologies for modeling the process of fire fighting and ensuring fire safety is increasing, and there are many examples worldwide of using ontologies to solve problems in the field of fire safety. This approach shows good results

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 457–467, 2019. https://doi.org/10.1007/978-3-030-12072-6_37


and has several advantages. Using an ontology makes it possible to integrate knowledge from various sources and to present knowledge about the subject area in an explicit form, which in turn facilitates the understanding and maintenance of knowledge, promotes its development, and reduces duplication and inconsistency. Ontologies have been developed for various subject areas in recent years [1–8]. It is important to understand which ontology can be used to solve a given problem. Sometimes an existing ontology will meet the requirements; in other situations one can take some of its classes and properties and add new features as necessary. This article offers a comprehensive overview of the ontologies existing in the field of fire safety and considers possible ways to use them.

2 Ontologies Overview

We will conduct a brief overview of ontologies of the fire safety domain and consider possible ways to use them.

2.1 Fire Ontology

The Fire ontology was created to represent the set of concepts about fire occurring in natural vegetation, its characteristics, causes and effects, with a focus on the Cerrado vegetation domain; fire plays a determinant role in the structure and composition of Cerrado physiognomies [9]. There are 53 classes and 19 properties in this ontology. Much attention is given to fire characteristics, such as:

– area burned,
– fire frequency,
– fire intensity,
– fire severity,
– flame height,
– speed,
– spread.

This ontology will be useful for solving problems of wildfires, but some parts may be used in other situations; for example, the fire characteristics are also suitable for describing fires in buildings.

2.2 Fire Ontology Network

The fire ontology network is part of the project SemSorGrid4Env (Semantic Sensor Grids for Rapid Application Development for Environmental Management), a joint project of seven European partners co-funded by the European Commission's Seventh Framework Programme [10]. The fire ontology network supports the project's use case on forest fire risk management. After an analysis of the requirements for this application, we have


developed these ontologies mainly by reusing a number of ontologies from the SWEET suite, which cover the following domains: fire, forest and vegetation, weather, geography, water body, infrastructure, location and time; plus the core sensor network ontology. Since SWEET already covers most of the representational needs of this application, we have just added a SpatialObject class to represent objects that have a location, identified the classes to be considered spatial objects (bodies of water, landforms, infrastructures, and fire), and extended the definition of datasets to make them cover a region and a temporal extent. Figure 1 presents an overview of the main classes of this ontology network. In the figure, classes are represented as circles, properties as arrows, and rdfs:subClassOf properties as dotted arrows. The classes are grouped according to the ontology they belong to, and the concepts and properties in green are those that were not present in the reused ontologies and had to be added.

Fig. 1. Main classes at fire ontology network

2.3 Semantic Web for Earth and Environmental Terminology (SWEET)

SWEET ontologies are written in the OWL ontology language and are publicly available [11]. SWEET 2.3 is highly modular, with 6000 concepts in 200 separate ontologies. The entire concept space can be viewed from an OWL tool such as Protégé by reading in sweetAll.owl; alternatively, these ontologies can be viewed individually. SWEET 2.3 consists of nine top-level concepts/ontologies, and some of the next-level concepts are shown in the figure. SWEET is a middle-level ontology; most users add a domain-specific ontology using the components defined here to satisfy end-user needs. Figure 2 presents the part of the SWEET ontology that deals with fire.


Fig. 2. Fragment of a graph representing a part of the SWEET ontology in the fire domain

2.4 EmergencyFire: An Ontology for Fire Emergency Situations

The aim of this project is to develop an ontology for emergency response protocols, in particular for fires in buildings. The developed ontology supports the knowledge sharing, evaluation and review of the protocols used, contributing to the tactical and strategic planning of organizations. The construction of the ontology was based on Methontology. The domain specification and conceptualization were based on qualitative research, from which 131 terms with definitions were extracted, of which 85 were assessed by specialists. From there, the domain's taxonomy and the axioms were created in the Protégé tool. The specialists validated the ontology using assessment by humans (taxonomy, application and structure); thus, a sustainable ontology model for the rescue tactical phase was ensured [12]. The approved ontology comprises 103 classes, 67 subclasses, 34 object properties, 26 datatype properties, 34 instances and 21 SWRL rules. The main objective is to mitigate problems such as the lack of standardization and documentation of emergency response protocols for fires in buildings. The EmergencyFire ontology may help organizations respond quickly to fire emergencies in buildings, since people and systems will have a common understanding of the protocols. The development of the ontology was based on interviews and conceptual analysis of documents (manuals, guides, norms and technical terms). As future work, the researchers plan evaluation of the ontology by specialists from agencies of other Brazilian states, a study including other variables not yet evaluated, such as the insertion of new emergency response protocols, implementation of an application that consumes the created ontology (axioms and concepts), and a case study to attest the applicability of EmergencyFire. In this way, new contributions will be added to the work.

Community-Based Fire Management

Participation of the public in fire situations is vital to reduce losses in the community. However, due to the complexity and variability of fire information, achieving this goal is quite a challenge. The ability to manage fire plays a crucial role when a fire accident happens in a community. In this paper a community-based fire ontology is presented, built by researching different sources, such as emergency management guidelines, the community itself and ontology theory, which take into account the different concepts of the fire domain. We think that the proposed ontology addresses the information needs of community-based fire management by providing accessibility for the community [13]. Figure 3 presents community-based fire management.

Ontologies of the Fire Safety Domain

Fig. 3. Graph representing community-based fire management (concepts under owl:Thing include fire_mitigation, fire_preparation, fire_response and fire_recovery, with subconcepts such as routine_inspection, volunter_training, resources_support, fire_propagation, launch_volunteergroup, extinguishment, alarm, scene_control, evacuation, medical_aid, investigation, scene_clean-up and loss_check-in)

2.6 Ontology for Fire Emergency Planning and Support

This work aims to build a building ontology and to provide support during fire emergencies. Case studies are done to infer smoke propagation and to determine escape outlets, which can provide the basis for further query and inference [14]. During fire emergencies, some parts of a building may carry higher risks than others. This distribution of risks depends on the location of the fire, the placement of hazardous materials in the building, the condition of doors and windows around the building, vents and so on.

Y. Nikulina et al.

Before getting to safety, one needs to know where the safe places are and how to approach them safely. This can be difficult in a large building complex. As the aforementioned attributes can be seen as objects and classes, they can be assigned semantic meanings. This includes the description of the rooms and their relationships with every other room, such as their adjacency. The building can be modeled by OWL description logic, which defines the building as a graph, where objects are classified into classes, with properties connecting them. The graph generated via description logic can be used for further inference to obtain more data and for querying. This project focuses on visualization of smoke spread.
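The room-adjacency modeling described above can be illustrated with a minimal sketch: rooms as graph nodes, adjacency as edges, and a breadth-first search for a safe escape route. The building layout and risk flags are hypothetical, and plain Python stands in for OWL/description-logic inference:

```python
from collections import deque

# Hypothetical building graph: rooms as nodes, passable adjacency as edges.
adjacency = {
    "office":   ["corridor"],
    "corridor": ["office", "kitchen", "lobby"],
    "kitchen":  ["corridor"],
    "lobby":    ["corridor", "exit"],
    "exit":     [],
}
on_fire = {"kitchen"}  # rooms with elevated risk, to be avoided

def escape_route(start, goal="exit"):
    """Breadth-first search for the shortest route avoiding risky rooms."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency[path[-1]]:
            if nxt not in seen and nxt not in on_fire:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no safe route exists

print(escape_route("office"))  # → ['office', 'corridor', 'lobby', 'exit']
```

In an actual ontology-based system the adjacency relation would be an OWL object property and the query would run over the inferred graph; the traversal logic, however, is the same.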

3 Ontology Comparison

The analysis of the six ontologies described above allows us to distinguish the following criteria for their comparison. The quantitative criterion is the most obvious one. Table 1 presents a comparison of the quantitative indicators of each ontology.

Table 1. Quantitative criteria of the described ontologies

Name of ontology                                    Number of classes   Number of object properties
Fire ontology                                       53                  19
Fire ontology network                               42                  8
SWEET                                               6000                n/a
EmergencyFire                                       103                 34
Community-based fire management                     15                  n/a
Ontology for fire emergency planning and support    n/a                 n/a

The qualitative criteria are the most interesting to analyze. The following qualitative criteria can be distinguished.

The scope of use of an ontology is an important criterion that shows the practical utility of the developed project. This criterion will often be decisive for the choice of one or another ontology. The next valuable criterion is the availability of the ontology: in some cases there are many publications and a detailed description of an ontology, but it is not available in ontology databases. The ontology type is also an important criterion. By type of creation, four types of ontologies are distinguished: representation ontologies, upper ontologies, domain ontologies and applied ontologies. The ontologies listed in this article belong to the upper, domain and applied ontology types.

A domain ontology generalizes the concepts used in some domain tasks, abstracting from the tasks themselves. In many disciplines, standard ontologies are being developed that can be used by subject matter experts to share and annotate information in their field. The purpose of applied ontologies is to describe the conceptual model of a specific task or application. Applied ontologies describe concepts that depend both on the task ontology and on the subject domain ontology; such ontologies contain the most specific information.

Three types of ontologies are distinguished by content: general ontologies, task-oriented ontologies and subject ontologies. General ontologies describe the most common concepts that are independent of a particular problem or area; both representation ontologies and upper ontologies fall into this category. A task-oriented ontology is an ontology used by a specific application program, containing terms used in developing software that performs a specific task. It reflects the specifics of the application but may also contain some general terms (for example, a graphical editor will contain specific terms such as palette, fill type and layer overlay, as well as general ones such as save and load file). The tasks to which an ontology can be devoted are very diverse: scheduling, goal setting, diagnostics, sales, software development, building classification. At the same time, a task-oriented ontology specializes the terms represented in upper-level (general) ontologies.

Every ontology has been analyzed according to the described qualitative criteria, and the comparative Table 2 has been compiled.

Table 2. Qualitative criteria of the described ontologies

Name of ontology                                    Scope of use                  Availability    Ontology type by purpose   Ontology type by content
Fire ontology                                       Ecology, wildfires            Free            Domain ontology            Task-oriented ontology
Fire ontology network                               Forest fire risk management   n/a             Domain ontology            Task-oriented ontology
SWEET                                               Wide range                    Free            Upper ontology             General ontology
EmergencyFire                                       Fire in buildings             Not available   Applied ontology           Task-oriented ontology
Community-based fire management                     n/a                           n/a             Applied ontology           Task-oriented ontology
Ontology for fire emergency planning and support    Fire in buildings             Not available   Applied ontology           Task-oriented ontology


As can be seen from the comparison, the article presents ontologies of various types and applications. The comparative analysis is discussed further below.

4 Discussion

As can be seen from Table 1, the SWEET ontology is the largest one and has the widest scope of application, but at the same time it has no direct applied value. Developers can therefore use this ontology as a basis and add a domain-specific ontology, using the components defined there, to satisfy end-user needs.

According to Table 2, the Fire Ontology is available in BioPortal; it is a domain ontology created as a task-oriented ontology, but its scope of use is narrow. This ontology specializes in solving a specific problem: it was created to represent the set of concepts about fire occurring in natural vegetation, its characteristics, causes and effects, with a focus on the Cerrado vegetation domain. It solves this particular task well, but improvements are needed for use in related fields.

The Fire ontology network was developed to support a project use case on forest fire risk management and is suitable for solving similar problems, but there is no evidence that the ontology is publicly available.

The EmergencyFire ontology seems to be the most suitable for further work. In the publications its classes, properties and axioms are described in detail, and the authors engaged experts who analyzed them. The development of the ontology was based on interviews and on conceptual analysis of documents (manuals, guides, norms and technical terms) using the manual coding method, and it was evaluated by experts [12]. However, this ontology is not available in open sources, so we cannot use it as a basis for solving our problem.

The Ontology for Fire Emergency Planning and Support can provide information on smoke propagation and the determination of escape outlets, obtained through multistage inference that is decidable [14]. It was developed specifically to solve a certain problem, so it may be applicable to similar tasks. The classes and entities described in the ontology can be used for fire analysis and fire alarm system design, but again there is no information on the availability of this ontology in open sources.

Since a fire safety ontology must be agreed with an expert, it is necessary to analyze ontology quality, part of which is calculated from the topology of the ontology graph. Methods for assessing cognitive ergonomics can also be used to compare ontologies of the same subject area made by different people or teams. The calculated metrics help to understand which of them is better from the point of view of cognitive ergonomics and to make a choice in favor of one of them if the estimates of other important criteria are not fundamentally different [15]. A quality assessment approach based on the ontology graph topology is used in [16]. Among the metrics used to analyze ontology quality, we list those that relate to cognitive ergonomics:

– ontology depth: the greater the depth, the harder the graph is to perceive;
– ontology width: the smaller the width of the ontology, the better in terms of cognitive ergonomics;
– ontology tangledness: the smaller the final value, the better the ontology in terms of cognitive ergonomics;
– the ratio of the number of classes to the number of properties: the greater this value, the easier it is to perceive the ontology.

It is also important to mention subjective metrics of ontology quality assessment. An essential feature of all the metrics considered above is the possibility of their automatic calculation. This can seriously simplify the work of an expert in evaluating ontologies with a large number of concepts (if there are fewer than 50 concepts in an ontology, an experienced expert can evaluate it at a single glance from the point of view of cognitive ergonomics). In [15] the researchers developed the COAT (Cognitive Ontology AssessmenT) tool for automatic assessment of the cognitive ergonomics of ontologies. In addition to directly calculating the metrics, the COAT user can obtain information about these metrics, their purpose and the interpretation of their values from the dictionary built into the tool. At this stage COAT is implemented as a console application in Java. In some cases this model is not applicable, since there is no need to evaluate the cognitive ergonomics of an ontology: for example, when only computational efficiency is important and only programs, not people, will deal with the created ontology.
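As an illustration, such metrics can be computed automatically from the class graph. The sketch below uses simplified metric definitions (tangledness counted here as the number of multi-parent classes) on an invented toy hierarchy; these are illustrative assumptions, not the exact formulas of [15, 16]:

```python
# Toy class hierarchy: child -> list of parents (a class with several
# parents contributes to tangledness).  All names are invented.
parents = {
    "Fire": [], "WildFire": ["Fire"], "BuildingFire": ["Fire"],
    "Alarm": [], "SmokeAlarm": ["Alarm"],
    "KitchenFire": ["BuildingFire", "WildFire"],
}
n_properties = 3  # e.g. hasCause, hasLocation, hasSeverity

def depth(cls):
    """Longest path from cls up to a root class."""
    return 0 if not parents[cls] else 1 + max(depth(p) for p in parents[cls])

ontology_depth = max(depth(c) for c in parents)           # deeper = harder to read

levels = {}
for c in parents:
    levels.setdefault(depth(c), []).append(c)
ontology_width = max(len(v) for v in levels.values())     # widest hierarchy level

tangledness = sum(1 for p in parents.values() if len(p) > 1)   # multi-parent classes
class_property_ratio = len(parents) / n_properties        # higher = easier to perceive
```

For this toy hierarchy the depth is 2, the width is 3, the tangledness is 1 and the class/property ratio is 2.0; a tool such as COAT computes analogous values over a real OWL file.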

5 Conclusion

This work analyzes ontologies in the fire safety domain. Fire safety is an important problem nowadays, and many researchers around the world work on it to avoid the repetition of tragic events. An overview of ontologies may help developers in their research and keep them from redoing work that has already been done. In some cases a ready-made ontology can be used and will meet the requirements; using ready-made fragments of an ontology reduces the time required for research. The main objective is to mitigate problems such as the lack of documentation and descriptions of existing solutions. We believe that this article may help researchers to make the right choice or to decide to continue developing one of the existing projects.

As future work we plan to develop an ontology that meets our requirements and adds some new characteristics. In Russia there is a lack of such research, while foreign projects do not take into account the particular qualities of Russian documents (manuals, guides, norms and technical terms) [17]. We are going to study in detail the problem of analyzing a large number of documents (federal laws, regulations, sets of rules). At the moment, the general system of fire safety requirements is not presented in an observable form: requirements and directives are scattered across a large number of regulatory documents containing abundant, mutually exclusive and sometimes deliberately impracticable norms. It is proposed to translate the analysis of legislation into an algorithm.


In recent years the list of requirements for the functionality of fire safety systems has been constantly growing, while project implementation timeframes are shrinking. The functionality of the developed software product allows it to be used in organizations engaged in the design, installation and maintenance of automatic fire alarms. It will greatly facilitate the work of design engineers and reduce the time for implementation, adjustment and approval of a project. There will be no need to regularly check newly issued regulatory documents and look up the necessary values. The software product will also automate routine operations and reduce the number of errors associated with the human factor.

At the moment there are examples on the market where the functionality of a single imported solution can be replaced only by the joint use of several Russian analogues. But to implement this, a design organization has to change its technological chain, integrate several new products into its business processes and ensure their integration and joint interaction. The installation of a new Russian engineering complex is not an easy task even if it is cheaper. This fact is also a reason for rejecting domestic developments in favor of foreign software.

Based on the analysis of existing software products, the necessary functionality for such an application was determined:

– import of data presented in a number of common formats (the main format for uploading a project is DWG; for uploading texts of laws and regulations, DOC);
– preliminary data processing;
– building an ontological model based on the uploaded documents;
– creation of a rule base;
– automatic placement of fire detectors and laying of loops;
– automatic preparation of reporting documents;
– export of the results to a file.

Based on the identified functionality of the application, the requirements for the structure and functioning of the system were determined.
It is proposed to allocate the following functional subsystems in the system:

– data loading subsystem;
– data processing subsystem;
– data storage subsystem;
– data manipulation subsystem;
– modeling subsystem;
– results visualization subsystem;
– subsystem for automatic preparation of reporting documents;
– data upload subsystem.

The result of the work will be a fire alarm installation project based on the analysis of customer requirements, legal requirements and technical limitations. Ready-made project fragments, or recommendations in the case of variable norms existing in federal legislation, will be displayed. Automatic placement of fire detectors and cabling will be carried out.


References

1. Kumar, S., Baliyan, N.: Semantic Web-Based Systems. Springer, Singapore (2018)
2. Hoppe, T., Humm, B., Reibold, A.: Semantic Applications. Springer, Berlin (2018)
3. Dolinina, O.: Method of the debugging of the knowledge bases of intellectual decision making systems. In: Automation Control Theory Perspectives in Intelligent Systems, Proceedings of the 5th Computer Science On-Line Conference 2016 (CSOC 2016), vol. 3, pp. 307–314. Springer, Cham (2016)
4. Danilov, N.A., Shulga, T.E.: Use cases for the usability analysis based on the ontology. In: International Conference on Information Technologies ICIT-2016: Information and Communication Technologies in Education, Manufacturing and Research, pp. 160–166 (2017)
5. Danilov, N., Shulga, T., Frolova, N., Melnikova, N., Vagarina, N., Pchelintseva, E.: Software usability evaluation based on the user pinpoint activity heat map. In: Silhavy, R., Senkerik, R., Oplatkova, Z., Silhavy, P., Prokopova, Z. (eds.) Software Engineering Perspectives and Application in Intelligent Systems, CSOC 2016. Advances in Intelligent Systems and Computing, vol. 465, pp. 217–225. Springer, Cham (2016)
6. Danilov, N.A., Shulga, T.E., Sytnik, A.A.: Repetitive event patterns search in user activity data. In: Proceedings of the 2018 IEEE Northwest Russia Conference on Mathematical Methods in Engineering and Technology (MMET NW), St. Petersburg, Russia, 10–14 September 2018, pp. 92–94. Saint Petersburg Electrotechnical University “LETI” (2018). (in Russian)
7. Peroni, S.: The SPAR ontologies. In: Vrandečić, D., Bontcheva, K., Suárez-Figueroa, M.C., Presutti, V., Celino, I., Sabou, M., Kaffee, L.-A., Simperl, E. (eds.) The Semantic Web, ISWC 2018, vol. 11137, pp. 119–136. Springer, Cham (2018)
8. Sytnik, A.A., Shulga, T.E., Danilov, N.A.: Ontology of the “software usability” domain. Trudy ISP RAN/Proc. ISP RAS 30(2), 195–214 (2018). (in Russian)
9. Souza, A.: The ontology of fire. National Center for Biomedical Ontology (2016). https://bioportal.bioontology.org/ontologies/FIRE. Accessed 20 Oct 2018
10. Semantic Sensor Grids for Rapid Application Development for Environmental Management. http://linkeddata4.dia.fi.upm.es/ssg4env/index.php/ontologies/12-fire-ontology-network/default.htm. Accessed 15 Oct 2018
11. Semantic web for earth and environmental technology. https://sweet.jpl.nasa.gov. Accessed 21 Oct 2018
12. Bitencourt, K., Durão, F.A., Mendonça, M., De Souza Santana, L.L.B.: An ontological model for fire emergency situations. IEICE Trans. Inf. Syst. E101–D(1), 108–115 (2018)
13. Liu, G., Xu, B., Tu, Q., Sha, Y., Xu, Z.: Towards building ontology for community-based fire management. In: Proceedings 2011 International Conference on Transportation, Mechanical, and Electrical Engineering (TMEE) (2011)
14. Tay, N.N.W., Kubota, N., Botzheim, J.: Building ontology for fire emergency planning and support. E-J. Adv. Maint. 8(2), 13–22 (2016)
15. Bolotnikova, E., Gavrilova, T., Gorovoy, V.: To one method of ontology evaluation. Int. J. Comput. Syst. Sci. 50(3), 448–461 (2011). ISSN 1064-2307
16. Gangemi, A., Catenacci, C., Ciaramita, M., Lehmann, J.: Modelling ontology evaluation and validation. In: Sure, Y., Domingue, J. (eds.) The Semantic Web: Research and Applications, ESWC 2006. Lecture Notes in Computer Science, vol. 4011. Springer, Berlin (2006)
17. Shamszaman, Z.U., Ara, S.S., Chong, I., Jeong, Y.K.: Web-of-objects (WoO)-based context aware emergency fire management systems for the internet of things. Sensors 14, 2944–2966 (2014)

Wavelet-Based Arrhythmia Detection in Medical Diagnostics Sensor Networks

Anastasya Stolbova (1), Sergey Prokhorov (1), Andrey Kuzmin (2, corresponding author), and Anton Ivaschenko (3)

(1) Samara National Research University, 34, Moskovskoye Shosse, Samara 443086, Russia, [email protected]
(2) Penza State University, 40, Krasnaya Street, Penza 440026, Russia, [email protected]
(3) Samara State Technical University, 244, Molodogvardeyskaya Street, Samara 443100, Russia, [email protected]

Abstract. This paper describes an application of the wavelet transform for non-equidistant time series analysis in distributed sensor networks. Based on the implementation of modern Internet of Things and Big Data technologies in digital medicine, the problem of uneven time series analysis specific to medical diagnostics, in particular electrocardiogram (ECG) monitoring, is outlined. As a solution, an original approach and algorithms are proposed for calculating the wavelet transform coefficients using only those samples of the time series that are contained within the width of the wavelet. The advantage of this approach is that the result of the transformation is an even representation. The speed of the algorithm is improved by taking into account the effective radius of the mother wavelet and calculating its width. A method and a software tool for wavelet-based analysis of ECG signals are proposed for the arrhythmia detection task. Experimental results show that the proposed wavelet-based method of ECG analysis can detect signs of arrhythmia. Results of a wireless channel speed test confirm that the chosen hardware meets the requirements of the wireless protocol bandwidth. The proposed solutions are suitable for portable heart monitoring systems.

Keywords: Medical diagnostics · The Internet of Things · ECG analysis · Wavelet transform

1 Introduction

Present-day medical diagnostics requires innovative solutions implementing a combination of the Internet of Things and Big Data technologies. Therefore there is a strong demand for new methods and algorithms capable of processing large quantities of data in real time using mobile sensor networks. One of the challenging application areas for such solutions is mobile heart monitoring.

Modern mobile devices for heart monitoring are comparatively cheap and portable, which makes them useful for everyday patient tracking. As a basic characteristic they capture and analyze electrocardiogram (ECG) signals that can be used to identify a number of the most common heart diseases. Despite the powerful hardware of such devices, software capable of processing large quantities of data in real time is still required. In this paper a solution to this problem based on the apparatus of the wavelet transform is proposed.

© Springer Nature Switzerland AG 2019. O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 468–479, 2019. https://doi.org/10.1007/978-3-030-12072-6_38

2 State of the Art

The Internet of Things is one of the most promising architectural concepts nowadays and is widely studied and implemented in technical and medical diagnostics [1, 2]. Using several devices with different functionality as a cloud can improve the quality of data collection, processing and analysis in real time. Modern protocols and architectures of wireless networks allow implementing a variety of topologies at the technical level. Software solutions [3] can improve the quality of monitoring of complex objects, meeting up-to-date requirements of problem domains.

In medical care the concept of the Internet of Things is effectively implemented for remote patient monitoring [4, 5]. It allows improving the quality of medical care and personalizing it for use in intelligent hospital wards or as an outpatient procedure. In this case the vital signs are transmitted using wireless network technologies that provide flexibility and mobility to patients. The most considerable benefit of such an approach is its flexibility: each diagnostic sensor is not bound to a particular patient care system, which allows combining an arbitrary set of devices into an arbitrary scanning system.

The wavelet transform is widely used for signal processing in various applications, including electrocardiogram (ECG) analysis. For example, it is considerably useful in the study of P- and T-waves: it can be used to identify changes caused by acute coronary artery occlusion and can derive ECG signal components sensitive to transient ischemia [6]. A number of approaches implement the Daubechies wavelet to identify, classify and analyze arrhythmia based on ECG signals. The Morlet wavelet transform is used to determine patients with ventricular tachycardia [7]. In [8, 9] the wavelet transform was used to measure the myocardial action potential, which allows automatic diagnostics of heart diseases, e.g. myocardial ischemia and heart failure.
The practical implementation of the wavelet transform faces the problem of its application to the analysis of non-equidistant (uneven) time series. Below, an algorithm is proposed for obtaining an array of even shifts of the transformation that takes the unevenness of the analyzed process into account: we choose the forced discretization interval, determine the number of shifts and then calculate their values. Based on the proposed algorithm, we develop an algorithm for the continuous wavelet transform of non-equidistant time series.


3 Continuous Wavelet Transform of Uneven Time Series

The wavelet transform is one of the methods of time-frequency analysis of data. The wavelet transform coefficients are calculated as follows:

\[ W(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} x(t)\, \psi\!\left(\frac{t-b}{a}\right) dt, \quad (1) \]

where x(t) is a random process, ψ(t) is the selected analyzing wavelet, a ≠ 0 is the scale parameter and b ≥ 0 is the shift parameter.

Spectral methods can be used to analyze medical signals such as the electrocardiogram, the electroencephalogram and heart rate variability (HRV), as well as in the analysis of cosmophysical phenomena and in other areas. Such processes, as a rule, are non-stationary in frequency, and spectral analysis does not allow localizing frequencies in time. In this case the methods of time-frequency analysis, which include the wavelet transform, are applicable.

In these areas the researcher has to deal with data that is both non-stationary and non-equidistant (for example, HRV), i.e. Δt_k = t_{k+1} − t_k is random. Usually an adaptive wavelet transform method is considered for a model with gaps in observations; its disadvantage is that one has to know at which points in time counts were skipped.

When calculating the estimate of the wavelet coefficients of irregular processes by the rectangle method, expression (1) is converted to the following form:

\[ W(a,b) = \frac{1}{\sqrt{a}} \sum_{k=0}^{N-2} (t_{k+1}-t_k)\, x_k\, \psi\!\left(\frac{t_k-b}{a}\right), \quad (2) \]

where N is the number of counts of the non-equidistant time series. The expression for estimating the coefficients by the trapezoid method has the following form:

\[ W(a,b) = \frac{1}{2\sqrt{a}} \sum_{k=0}^{N-2} \left[ x_{k+1}\, \psi\!\left(\frac{t_{k+1}-b}{a}\right) + x_k\, \psi\!\left(\frac{t_k-b}{a}\right) \right] (t_{k+1}-t_k). \quad (3) \]

When estimating the transformation coefficients, we use only those samples of the time series that are contained within the width of the wavelet. The advantage of this approach is that the result of the transformation is an even representation. The speed of the algorithm is improved by taking into account the effective radius of the mother wavelet and calculating its width. Algorithm details are provided below.
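Expressions (2) and (3) can be tried directly on a toy unevenly sampled signal. The sketch below is a minimal illustration; the real-valued Morlet-style wavelet and all parameter values are assumptions for the example, not the authors' configuration:

```python
import numpy as np

def psi(t, omega0=5.0):
    """Real part of a Morlet-style mother wavelet (illustrative choice)."""
    return np.exp(-t**2 / 2.0) * np.cos(omega0 * t)

def wavelet_coeff(t, x, a, b, rule="rectangle"):
    """Estimate W(a, b) for an unevenly sampled series via eq. (2) or (3)."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    dt = np.diff(t)                    # t_{k+1} - t_k, k = 0 .. N-2
    psi_k = psi((t[:-1] - b) / a)      # psi((t_k - b)/a)
    if rule == "rectangle":            # rectangle rule, eq. (2)
        return (dt * x[:-1] * psi_k).sum() / np.sqrt(a)
    psi_k1 = psi((t[1:] - b) / a)      # psi((t_{k+1} - b)/a)
    return ((x[1:] * psi_k1 + x[:-1] * psi_k) * dt).sum() / (2.0 * np.sqrt(a))

rng = np.random.default_rng(0)
t = np.cumsum(rng.uniform(0.005, 0.015, 500))   # uneven time stamps
x = np.sin(2 * np.pi * 2.0 * t)                 # 2 Hz test component
W_rect = wavelet_coeff(t, x, a=0.1, b=t[250])
W_trap = wavelet_coeff(t, x, a=0.1, b=t[250], rule="trapezoid")
```

For densely sampled data the two quadrature rules give close estimates; the trapezoid rule (3) is somewhat more accurate on strongly uneven grids at the cost of one extra wavelet evaluation per sample.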


4 Shifts Array Estimation Algorithm

When calculating the wavelet transform of regular processes, the shift values are defined as multiples of the sampling interval, and to perform the transform it suffices to know only the shift number. When the sampling interval is not constant, the following algorithm is proposed for obtaining the array of shifts.

1. Select a forced sampling interval Δt0, defined as the minimum, average or maximum value of the discretization intervals of the non-equidistant time series:

\[ \Delta t_0 = \begin{cases} \min_k \Delta t_k, \\ \frac{1}{N} \sum_{k=0}^{N-1} \Delta t_k, \\ \max_k \Delta t_k. \end{cases} \quad (4) \]

2. Determine the number of counts of the estimated regular time series:

\[ N = \mathrm{ent}\!\left[ \frac{t_{N-1} - t_0}{\Delta t_0} \right], \quad (5) \]

where ent[·] is the operation of taking the integer part.

3. Determine the number of shifts:

\[ N_b = \mathrm{ent}\!\left[ \frac{N}{K} \right], \quad (6) \]

where K is the thinning coefficient.

4. Build the array of shifts:

\[ b_j = j \cdot K \cdot \Delta t_0, \quad (7) \]

where j = 0, …, N_b − 1.
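The four steps above can be sketched as follows (a minimal illustration; the rule chosen in eq. (4) and the thinning coefficient K are left as parameters, and the shifts are taken relative to the start of the series):

```python
import numpy as np

def build_shifts(t, K=1, mode="mean"):
    """Build a regular shift array b_j for an uneven time series (eqs. (4)-(7)).

    t    : increasing, unevenly spaced time stamps
    K    : thinning coefficient
    mode : forced sampling interval rule, "min", "mean" or "max" (eq. (4))
    """
    dt = np.diff(t)
    dt0 = {"min": dt.min(), "mean": dt.mean(), "max": dt.max()}[mode]  # eq. (4)
    N = int((t[-1] - t[0]) / dt0)     # eq. (5): counts of the regular series
    Nb = N // K                       # eq. (6): number of shifts
    return dt0 * K * np.arange(Nb)    # eq. (7): b_j = j * K * dt0

t = np.cumsum(np.random.default_rng(1).uniform(0.5, 1.5, 100))
b = build_shifts(t, K=2, mode="mean")
```

The resulting shifts are evenly spaced regardless of how irregular the input time stamps are, which is exactly what makes the transformed representation even.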

5 Coefficients Estimation Algorithm

Taking into account the features described above, the algorithm for estimating the coefficients of the wavelet transform of time series with irregular discretization is as follows.

1. Load the process under study, represented by a non-equidistant time series \( [x_k^l (t_k^l / \Delta t_k^l)]_{k=0 \ldots N}^{l=0 \ldots M} \):

x_0, x_1, …, x_{k−1}, x_k, x_{k+1}, …, x_{N−1}.

2. Get the array of scales:

\[ a_i = \frac{1}{\omega_{\min} + i \cdot \Delta\omega}, \quad (8) \]

where ω_min is the minimum frequency and Δω is the frequency sampling interval:

a_0, a_1, …, a_{i−1}, a_i, a_{i+1}, …

3. Get the array of shifts according to the algorithm proposed above:

b_0, b_1, …, b_{j−1}, b_j, b_{j+1}, …

4. For the current scale value a_i, calculate the width of the wavelet w_t:

\[ w_t = 8 a_i \Delta t, \quad (9) \]

where Δt is the effective radius of the mother wavelet.

5. Calculate the value of the wavelet transform coefficient using expression (2) or (3). Only those counts of the irregular time series should be taken into account whose time stamps fall within the width of the wavelet relative to the current shift b_j:

\[ W_{ij} = \frac{1}{\sqrt{a_i}} \sum_k (t_{k+1} - t_k)\, x_k\, \psi\!\left(\frac{t_k - b_j}{a_i}\right), \quad b_j - \frac{w_t}{2} \le t_k \le b_j + \frac{w_t}{2}. \quad (10) \]

Since not all wavelet values can be used at the signal boundaries, where errors appear, it is recommended (though not mandatory) to perform the calculations only when the following condition is met:

\[ \frac{w_t}{2} < b_j < t_{N-1} - \frac{w_t}{2}. \quad (11) \]

6. Repeat steps 4 and 5 for all scales a_i and shifts b_j.

The result of the algorithm is thus the matrix of wavelet transform coefficients. Note that the result of the transformation is regular, since the scale function is inversely proportional to the uniformly sampled frequency, and the proposed algorithm for calculating shifts ensures their regularity.
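Steps 1–6 can be assembled into a compact sketch. The Morlet-style mother wavelet, the mean-interval choice for Δt0, the placement of the shifts relative to the series start and all numeric parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def morlet(t, omega0=5.0):
    """Real Morlet-style mother wavelet (illustrative choice)."""
    return np.exp(-t**2 / 2.0) * np.cos(omega0 * t)

def cwt_uneven(t, x, w_min, dw, n_scales, K=1, eff_radius=1.0):
    """Wavelet-coefficient matrix W[i, j] for an uneven series (steps 1-6).

    Scales follow eq. (8): a_i = 1/(w_min + i*dw); shifts follow eqs. (4)-(7)
    with the mean sampling interval; only counts inside the wavelet width
    w_t = 8*a_i*eff_radius around each shift enter the sum (eqs. (9), (10)).
    """
    t, x = np.asarray(t, float), np.asarray(x, float)
    dt = np.diff(t)
    dt0 = dt.mean()
    Nb = int((t[-1] - t[0]) / dt0) // K
    b = t[0] + dt0 * K * np.arange(Nb)               # shift array, eq. (7)
    a = 1.0 / (w_min + dw * np.arange(n_scales))     # scale array, eq. (8)
    W = np.zeros((n_scales, Nb))
    for i, ai in enumerate(a):
        wt = 8.0 * ai * eff_radius                   # wavelet width, eq. (9)
        for j, bj in enumerate(b):
            m = (t[:-1] >= bj - wt / 2) & (t[:-1] <= bj + wt / 2)  # window (10)
            W[i, j] = (dt[m] * x[:-1][m]
                       * morlet((t[:-1][m] - bj) / ai)).sum() / np.sqrt(ai)
    return a, b, W

t = np.cumsum(np.random.default_rng(2).uniform(0.005, 0.015, 400))
x = np.sin(2 * np.pi * 3.0 * t)
a, b, W = cwt_uneven(t, x, w_min=2 * np.pi, dw=np.pi, n_scales=10)
```

Because each coefficient uses only the samples inside the window of width w_t, the cost per coefficient is bounded by the wavelet support rather than the full series length, which is the speed-up discussed above.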


6 Implementation

The proposed approach was implemented as a specialized software solution in C# on the .NET platform using design patterns. It consists of the following modules:

1. Data loading and generation module, designed to upload or generate the original time series of the following types:
   • random stationary with a given type of correlation function;
   • random non-stationary with a given type of correlation function;
   • deterministic.
   The module allows obtaining processes with regular and irregular data sampling.
2. Spectral analysis module, for obtaining the spectral characteristics of a signal using the following methods:
   • Fourier transform;
   • windowed Fourier transform;
   • wavelet transform with regular sampling;
   • wavelet transform with irregular discretization.
3. Analysis module, designed to estimate the efficiency and accuracy of the calculated wavelet transform coefficients.
4. Wavelet construction module, introduced to calculate the wavelet functions. The system uses 10 main types of wavelets. To construct them, one needs to specify the number of wavelet samples N, the wavelet sampling interval Δt, the scale a and the shift b.
5. Simulation module, designed to simulate wavelet transform processes with irregular data sampling and to assess the adequacy of the developed algorithms.
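As an illustration of the wavelet construction module's inputs (N, Δt, a, b), a mother wavelet can be discretized as follows. The Mexican-hat mother wavelet and the 1/√a normalization are illustrative choices, not a description of the actual C# module:

```python
import numpy as np

def sample_wavelet(psi, N, dt, a, b):
    """Discretize mother wavelet psi on N samples with step dt, scale a, shift b."""
    t = np.arange(N) * dt
    return psi((t - b) / a) / np.sqrt(a)

# One common mother wavelet, used here purely as an example.
mexican_hat = lambda u: (1 - u**2) * np.exp(-u**2 / 2)

w = sample_wavelet(mexican_hat, N=256, dt=0.01, a=0.2, b=1.28)
```

The sampled array peaks at t = b (sample index b/dt) and its effective support scales with a, which is exactly the information the coefficient estimation algorithm needs.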

7 Implementation of Wavelet Analysis to Diagnose Arrhythmia

The proposed approach was used to process two channels of ECG data with a sampling frequency of 128 Hz. Test signals were obtained from a special long-term ECG records database [10]. This free-access database is available at physionet.org and is well known as the paroxysmal atrial fibrillation (PAF) prediction challenge database. The contents of this resource include arrhythmic signals and relatively healthy signals. The arrhythmic signal set in turn consists of signals just before the attack (30 min long) and during the attack (5 min long). In the current research the original enumeration of signal records is preserved, which allows the results to be reproduced. The following signals are considered as examples:

• patients without arrhythmia (signals 1 and 2);


• patients with arrhythmia (signals 15 and 16); • just before the attack; • during the attack. The note “clear record” mentioned below describes the signal of ECG of people without arrhythmia. ECG wavelet transform analysis identified 5 typical frequency ranges being presented as parts of ECG signal (see Table 1). Healthy patients (see cases 1 and 2) have frequencies in the ranges of 2–5 from 0.45 to 1 Hz in wavelet spectrum of the first ECG channel. Patients with arrhythmia (see

Table 1. Typical ECG ranges

Number   Frequency range
1        below 1 Hz
2        1–2 Hz
3        2–4 Hz
4        4–5 Hz
5        5–8 Hz

cases 15 and 16) get frequencies from ranges 2 and 3 on the first channel, and the power of the frequencies from ranges 4 and 5 decreases 2 to 3 times compared with healthy patients. The results of the analysis of the first ECG channel for the continuation records are identical to those for the test records. Table 2 presents the frequencies detected in the signal and their power. Patients 15 and 16 show a frequency power increase on the second ECG channel just before the attack and during it. The power of frequencies from range 5 increases 2–4 times

Table 2. First ECG channel

1 Clear record      2 Clear record      15 Arrhythmia       16 Arrhythmia, just before the attack
x, Hz    A          x, Hz    A          x, Hz    A          x, Hz    A
1.2660   0.97       1.3451   0.83       1.6616   1          1.5825   1
2.6902   1          2.7694   1          3.4024   0.48       3.1650   0.55
4.0354   0.8        4.1145   0.65       5.0640   0.18       4.8266   0.2
6.8839   0.5        6.9630   0.45       7.5169   0.19       7.5960   0.17

compared with other records (see Tables 3 and 4). The experiment was split into two phases for patient 16 (phase 1 describes the situation just before the heart attack, and phase 2 shows how the attack continues and develops). Figures 1, 2 and 3 illustrate the experimental results. One can see that during the arrhythmia the ECG signal becomes non-stationary in frequency on both the first and the second channel.
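The band-power comparison described above can be sketched with a continuous wavelet transform. The following Python fragment is an illustrative reconstruction, not the authors' software: the real Morlet wavelet, the scale-to-frequency relation s = w0/(2πf), the ±4s support window and the band representatives are all assumptions of this sketch.

```python
import numpy as np

FS = 128  # Hz, sampling frequency of the ECG records used in the paper

def morlet(t, w0=5.0):
    """Real-valued Morlet mother wavelet (one possible choice; the
    authors' system supports 10 wavelet types)."""
    return np.cos(w0 * t) * np.exp(-t ** 2 / 2)

def band_powers(signal, freqs, fs=FS, w0=5.0):
    """Mean squared CWT coefficient at each analysis frequency; the scale
    giving centre frequency f is s = w0 / (2*pi*f)."""
    powers = []
    for f in freqs:
        s = w0 / (2 * np.pi * f)
        t = np.arange(-4 * s, 4 * s, 1.0 / fs)   # effective wavelet support
        psi = morlet(t / s) / np.sqrt(s)         # L2-normalised daughter wavelet
        coef = np.convolve(signal, psi, mode="same") / fs
        powers.append(float(np.mean(coef ** 2)))
    return powers

# A 3 Hz test tone should dominate range 3 (2-4 Hz) of Table 1.
t = np.arange(0, 20, 1.0 / FS)
tone = np.sin(2 * np.pi * 3.0 * t)
p = band_powers(tone, [0.5, 1.5, 3.0, 4.5, 6.5])
```

Replacing the synthetic tone with a 128 Hz ECG channel and comparing such band powers across records is the kind of per-range comparison summarised in Tables 2, 3 and 4.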

Wavelet-Based Arrhythmia Detection in Medical Diagnostics …


Table 3. Second ECG channel, phase 1

1 Clear record      2 Clear record      15 Arrhythmia       16 Arrhythmia, just before the attack
x, Hz    A          x, Hz    A          x, Hz    A          x, Hz    A
–        –          0.3956   1          0.3956   1          0.3165   0.45
1.2660   1          1.3451   0.5        1.6616   0.37       1.1869   0.2
2.6902   0.6        2.7694   0.5        3.4024   0.15       1.8990   0.2
4.0354   0.17       –        –          –        –          –        –
5.6179   0.25       5.6179   0.15       6.8839   0.35       6.4882   1

Table 4. Second ECG channel, phase 2

1 Clear record      2 Clear record      15 Arrhythmia       16 Arrhythmia, just before the attack
x, Hz    A          x, Hz    A          x, Hz    A          x, Hz    A
–        –          0.3956   1          0.3165   0.05       0.3956   0.4
1.4242   1          1.5034   0.1        1.5034   0.2        1.5825   0.27
3.0067   0.45       2.3737   0.05       3.0859   0.25       –        –
4.5101   0.2        –        –          –        –          –        –
6.0135   0.25       –        –          6.4091   1          5.2222   1

8 Implementation of Medical Diagnostics Sensor Network

Figure 4 presents the common architecture of a medical diagnostics sensor network. It consists of:

Fig. 1. Healthy patient ECG wavelets transform

476

A. Stolbova et al.

Fig. 2. ECG transform for a patient with arrhythmia just before the heart attack

Fig. 3. ECG transform for a patient with arrhythmia heart attack

Wavelet-Based Arrhythmia Detection in Medical Diagnostics …

• patients;
• monitoring devices;
• wireless communication network;
• data server.

Fig. 4. Architecture of medical diagnostics sensor network


Every portable monitoring device registers, processes, stores and transmits biomedical data. Every monitoring device consists of two main elements:
• ECG sensor;
• data processing device.

These elements can be coupled into a single device or divided into two independent devices connected to each other via a wired or wireless link. The latter variant is widely used [11]. In this case, the ECG sensor is a specially designed device that registers and transmits raw data, while the user's smartphone or tablet PC plays the role of the data processing device that processes, stores, analyses, displays and transmits the data to a remote data server via the Internet. This allows the development of quite cheap monitoring devices based on AFE (Analog Front-End) solutions that connect to a user's smartphone or tablet PC via Bluetooth. The user just needs to install a proper mobile application on their smartphone.

The ECG sensor registers the ECG signal, amplifies it, converts it into digital form and performs some special functions like contact-break detection and battery control. The data processing device is a mobile computational platform for ECG analysis. Specially developed mobile software records and analyses signals and communicates with a remote server or another mobile device. Wavelet analysis is performed on this mobile platform.

Monitoring devices should meet some up-to-date requirements [11], such as the ability of offline ECG analysis, usability for patients, high reliability, etc. The main requirements for ECG sensors [11] include miniaturized dimensions, ergonomic shape, continuous autonomous work, safety and others. Engineering solutions for ECG sensors should take into account the specific use environment, first of all free-movement conditions, which lead to a large number of artifacts and a considerable level of noise of different natures. Therefore, the monitoring device should provide special preprocessing before transmitting the signal to the remote receiver.

This preprocessing includes artifact detection and filtering. Artifact detection finds ECG signal areas containing artifacts; these areas are excluded from the following processing. Filtering is an important problem, as the ECG signal is recorded under the influence of different noise factors such as the electrical network, muscle activity, body movement, changing parameters of the electrode-skin contact and so on. It requires the application of powerful filtering methods that can be implemented on a mobile platform.

Communication between every single monitoring device and the remote data server can be based on a SOAP-protocol secure connection. If signs of arrhythmia are detected, additional analysis of the recorded signal may be needed. This possibility is provided by the medical institution server, as well as remote access to the patient's electronic health record for medical professionals, for example a cardiologist. The medical institution server maintains the medical database that stores all data obtained from the individual monitoring devices.
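The two preprocessing steps just described can be sketched as follows. This is a minimal illustration, not the authors' implementation: the moving-average baseline window and the MAD-based artifact threshold are assumed values chosen only for demonstration.

```python
import numpy as np

FS = 128  # Hz

def remove_baseline(ecg, win_s=0.6):
    """Subtract a moving-average estimate of baseline wander; the window
    length is an illustrative choice."""
    w = int(win_s * FS)
    baseline = np.convolve(ecg, np.ones(w) / w, mode="same")
    return ecg - baseline

def artifact_mask(ecg, k=4.0):
    """Flag samples deviating more than k robust standard deviations from
    the median; flagged areas are excluded from further processing."""
    med = np.median(ecg)
    mad = np.median(np.abs(ecg - med)) + 1e-12   # median absolute deviation
    return np.abs(ecg - med) > k * 1.4826 * mad  # 1.4826*MAD ~ sigma for Gaussian noise
```

Both functions are cheap enough for a mobile platform; heavier filtering (e.g. band-pass or wavelet denoising) would follow the same interface.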


9 Conclusion

The results of testing the proposed solution in practice prove the promise of wavelet transform implementation in medical diagnostics sensor networks, especially in mobile heart monitoring systems. The main advantage of the proposed approach and algorithms is that the result of the transformation is an evenly sampled representation. The speed of the algorithm is improved by taking into account the effective radius of the mother wavelet and calculating its width. The method and software tool for wavelet-based analysis of ECG signals are recommended for the arrhythmia detection task. The proposed tool can be used as part of medical diagnostics sensor network software. The next steps concern extending the set of algorithms implemented in digital medical diagnostics on the basis of the Internet of Things.

References

1. Fortino, G., Trunfio, P.: Internet of Things Based on Smart Objects: Technology, Middleware and Applications. Springer, New York (2014)
2. Bessis, N., Dobre, C.: Big Data and Internet of Things: A Roadmap for Smart Environments. Springer, Switzerland (2014)
3. Ivaschenko, A., Minaev, A.: Multi-agent solution for adaptive data analysis in sensor networks at the intelligent hospital ward. In: Ślȩzak, D., et al. (eds.) International Conference on Active Media Technology. LNCS, vol. 8610, pp. 453–463. Springer, Switzerland (2014)
4. Sahandi, R., Noroozi, S., Roushanbakhti, G., Heaslip, V., Liu, Y.: Wireless technology in the evolution of patient monitoring on general hospital wards. J. Med. Eng. Technol. 34(1), 51–63 (2010)
5. Aminian, M., Naji, H.R.: A hospital healthcare monitoring system using wireless sensor networks. J. Health Med. Inform. 4(2), 121 (2013)
6. Saritha, C., Sukanya, V., Narasimha Murthy, Y.: ECG signal analysis using wavelet transforms. Bulg. J. Phys. 35, 68–77 (2008)
7. Addison, P.S.: Wavelet transforms and the ECG: a review. Physiol. Meas. 25(5), 155–199 (2005)
8. Peng, Z., Wang, G.: A novel ECG eigenvalue detection algorithm based on wavelet transform. Biomed. Res. Int. 2017, 5168346 (2017)
9. Gutiérrez-Gnecchi, J.A., Morfin-Magaña, R., Lorias-Espinoza, D., Tellez-Anguiano, A., Reyes-Archundia, E., Méndez-Patiño, A., Castañeda-Miranda, R.: DSP-based arrhythmia classification using wavelet transform and probabilistic neural network. Biomed. Signal Process. Control 32, 44–56 (2017)
10. Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P., Mark, R., Mietus, J., Moody, G., Peng, C.-K., Stanley, H.: PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101(23), 215–220 (2000)
11. Kuzmin, A., Safronov, M., Bodin, O., Petrovsky, M., Sergeenkov, A.: Device and software for mobile heart monitoring. In: Proceedings of the 19th Conference of Open Innovations Association FRUCT, pp. 121–127. FRUCT Oy, Helsinki (2016)

On Parallel Addition and Multiplication via Symmetric Ternary Numeral System

Iurii V. Stroganov, Liliya Volkova, and Igor V. Rudakov

BMSTU, 2-ya Baumanskaya ul. 5/1, 105005 Moscow, Russia
{stroganovyv,liliya,irudakov}@bmstu.ru

Abstract. This article is concerned with the application of ternary logic. Usage of the ternary numeral system is recommended, particularly of the symmetric ternary numeral system, as implementing arithmetic operations in ternary allows reducing roundoff errors accumulated during finite-precision computation. A shift is suggested towards ternary computational machines. The basis of ternary computations is given; addition and multiplication algorithms are discussed in classic and adapted versions, the latter suggested in order to develop a parallel implementation. Particular effects are highlighted which allow computing these operations in parallel mode, and several examples illustrate the suggested algorithms. The resulting time and acceleration gain is discussed based on data aggregated by means of an implementation in Haskell. Based on the experimental data, a multithreaded implementation is recommended in order to accelerate the modelling of addition and multiplication operations. This research justifies the prospect of application of ternary co-processors for more precise computation.

Keywords: Ternary computation · Ternary logic · Symmetric ternary numeral system · Arithmetic operations implementation · Parallel algorithms

1 Introduction

When calculating by complex numerical methods, computational errors appear and distort the results [1–4]. One particular method of decreasing roundoff errors is using residue arithmetic [5], but the complexity of such operations as comparison, division, square root extraction and overflow detection in residue arithmetic is higher than in positional numeral systems [6, 7]. Residue arithmetic has found its application in cryptography and in digital image and signal processing. A different approach is shifting towards a non-binary numeral system and non-binary machines, which is enabled by physical factors [8]. Furthermore, such a shift is associated with the necessity of extending the set of control commands when using a limited discrete communication channel [9] (e.g., due to the limited number of CPU pins, only two-bit commands can be transmitted in Arduino [10]). The most economical positional numeral system (NS) is the one with the base equal to e, the base of the natural logarithm [11–13]. The closest integer bases are 2 and 3, the latter being closer to e.
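The radix-economy argument behind this claim can be checked numerically: representing numbers up to N takes about log_b N digits of b states each, so the relative hardware cost is proportional to b / ln b, which is minimised at b = e. A small sketch (illustrative only):

```python
import math

def radix_economy(b):
    """Relative cost of a radix-b positional system: digits needed for a
    fixed range (~ 1 / ln b) times states per digit (b)."""
    return b / math.log(b)

costs = {b: radix_economy(b) for b in (2, 3, 4, 10)}
# b / ln b is minimised at b = e; among integers, base 3 is the cheapest,
# and bases 2 and 4 are exactly tied.
```

Note that base 2 and base 4 have identical economy (4/ln 4 = 2/ln 2), while base 3 beats both.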

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 480–487, 2019. https://doi.org/10.1007/978-3-030-12072-6_39


This article is concerned with examining ternary computation principles and outlining their higher robustness to roundoff errors. Particular parallel-ready algorithms for addition and multiplication are introduced. Effects of time gain are shown on the material of an implementation in Haskell.

2 Ternary Computations

2.1 The Ternary Symmetric Numeral System

By analogy with a bit, a trit is the least unit of memory in the ternary numeral system; it stores one of three possible values. The following denotations exist: {0, 1, 2}, {0, ½, 1}, {1̄, 0, 1}. The ternary NS can be symmetric or asymmetric, due to the oddness of its base. Within a symmetric NS, a digit can be negative. The symmetric ternary NS (STNS) has such features as higher information density and a roundoff error which is statistically equal to zero [14, 15]. The STNS was put in the basis of the series-produced ternary machine «Setun», designed by a team of Soviet scientists headed by Brusentsov [16]. The «Setun» machine operates on integers and real numbers; it lacks a logical type (analogous to boolean in a binary NS). When encoding numbers in symmetric code (i.e. in STNS), there is no need for a sign digit, as the sign is distributed over the digits of the represented number. E.g., the decimal number 35 is represented in the asymmetric ternary NS as 1022₃, and in the STNS as 1101̄₃ (an overbar denotes a negative digit). Different implementations of ternary addition and multiplication are possible: in particular, non-parallel digitwise, and parallel, either table-based or with subdivision into subsequences of digits. The common practice is digitwise addition according to Table 1 [17]. It is suggested to group digits into triads for processing, with a carry to the next triad and consideration of that next triad. These triads can be added separately and independently. The important point is handling the carry digits. Given two integers, each consisting of 6 digits, a common approach (as in the binary numeral system) would be subsequent addition of the lower digits, then of the higher digits, each pair of digits also yielding a carry digit value. In order to parallelize the calculation, parallel addition of the lower and higher subsequences of trits is suggested.
The result of the lower subsequences' addition is written into the answer, and then the higher subsequences' addition result is incremented by the carry digit from the addition of the lower subsequences and is written into the result as well.

2.2 The Addition Operation

For number addition, digits should be added according to the usual positional NS principle, from lower- to higher-order digits, with the initial carry value being 0. In Table 1 the result of adding two trits is given in the following format: (carry digit, current digit).

Table 1. Table addition for ternary logic (cell format: carry digit, current digit; an overbar denotes a negative digit).

Carry digit 1̄:
+    1̄      0      1
1̄    1̄,0    1̄,1    0,1̄
0    1̄,1    0,1̄    0,0
1    0,1̄    0,0    0,1

Carry digit 0:
+    1̄      0      1
1̄    1̄,1    0,1̄    0,0
0    0,1̄    0,0    0,1
1    0,0    0,1    1,1̄

Carry digit 1:
+    1̄      0      1
1̄    0,1̄    0,0    0,1
0    0,0    0,1    1,1̄
1    0,1    1,1̄    1,0
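The STNS encoding used throughout the examples (e.g. 35₁₀ = 1101̄₃ from Sect. 2.1) can be sketched by the standard balanced-ternary conversion, with digit −1 standing for the overbarred 1̄. This is an illustrative Python sketch, not the authors' Haskell code:

```python
def to_balanced_ternary(n):
    """Balanced-ternary digits of integer n, most significant first,
    with -1 standing for the overbarred digit."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3                  # Python's % yields 0, 1 or 2
        if r == 2:                 # remainder 2 -> digit -1 plus a carry
            digits.append(-1)
            n = (n + 1) // 3
        else:
            digits.append(r)
            n = (n - r) // 3
    return digits[::-1]

def from_balanced_ternary(digits):
    """Horner evaluation of a digit list, most significant first."""
    value = 0
    for d in digits:
        value = value * 3 + d
    return value
```

For 35 the digits come out as [1, 1, 0, -1], i.e. 27 + 9 − 1 = 35, and no separate sign digit is ever needed: negating a number just negates every digit.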

According to the aforesaid, the addition of two integers is illustrated by the following example, where an overbar denotes a negative digit; the trits are added pairwise from the lowest order, each pair yielding a current digit and a carry according to Table 1:

70₁₀ = 101̄1̄1₃
125₁₀ = 11̄1̄1̄01̄₃        (1)
70 + 125 = 195₁₀ = 11̄111̄0₃

One can consider a number as a list of digits and divide it into two sublists of equal length, so that the number D1 = 70 would be represented by two lists of decimal digits, A1 = [7] and B1 = [0]. In a similar manner, in the ternary NS, for D1 = 11011 the sublists would be A1 = 011 and B1 = 011, and for D2 = 111101 the sublists would be A2 = 111 and B2 = 101. When adding D1 + D2, the sublists are added separately, i.e. A1 + A2 and B1 + B2, and the carry digit from B1 + B2 is taken into consideration when computing A1 + A2. Concatenation in the following sample is marked as ++ (P ++ C2 denotes the concatenation of the carry digit and the remaining digits):

D1 = A1 ++ B1 = 011011
D2 = A2 ++ B2 = 111101
C1 = A1 + A2 = 011 + 111 = 111
P ++ C2 = B1 + B2 = 011 + 101 = 0110        (2)
P = 0, C2 = 110
D3 = D1 + D2 = (C1 + P) ++ C2 = 111110

In the sample computations above, the carry digit P is equal to zero, but the given approach remains correct in the overflow case as well, as shown below.


D1 = A1 ++ B1 = 111111 = 111 ++ 111
D2 = A2 ++ B2 = 111111
C1 = A1 + A2 = 111 + 111 = 1001̄
P ++ C2 = B1 + B2 = 111 + 111 = 1001̄        (3)
P = 1
C2 = 001̄
D3 = D1 + D2 = (C1 + P) ++ C2 = (1001̄ + 1) ++ 001̄ = 1000001̄
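The split-and-carry scheme of examples (2) and (3) can be sketched in Python, with digits −1/0/1 standing for 1̄/0/1; this is an illustrative sketch rather than the authors' Haskell implementation.

```python
def add_bt(x, y):
    """Digitwise balanced-ternary addition; digits are -1/0/1, most
    significant first.  The result may carry one extra leading trit."""
    n = max(len(x), len(y))
    x = [0] * (n - len(x)) + x
    y = [0] * (n - len(y)) + y
    carry, out = 0, []
    for a, b in zip(reversed(x), reversed(y)):
        s = a + b + carry          # s lies in -3..3
        carry = 0
        if s > 1:                  # e.g. 1 + 1 = 3 - 1 -> digit -1, carry 1
            s -= 3
            carry = 1
        elif s < -1:
            s += 3
            carry = -1
        out.append(s)
    if carry:
        out.append(carry)
    return out[::-1]

def add_split(x, y):
    """Add the lower and the higher halves independently (the two calls
    are parallelizable), then fold the lower half's carry into the
    higher result, as in examples (2) and (3)."""
    h = len(x) // 2                # assumes len(x) == len(y), even length
    lo = add_bt(x[h:], y[h:])
    hi = add_bt(x[:h], y[:h])
    if len(lo) > h:                # carry out of the lower half
        hi = add_bt(hi, [lo[0]])
        lo = lo[1:]
    return hi + lo
```

Applied to the overflow case of example (3), adding 111111 (= 364₁₀) to itself yields a seven-trit result whose value is 728.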

2.3 The Multiplication Operation

Multiplication of two numbers can be calculated in much the same way. In general, this operation reduces to addition and digitwise shifts. It may be remarked that when multiplying two-digit numbers the result does not exceed 4 digits, and for three-digit numbers it does not exceed 6 digits. When multiplying X1 by X2, these can be divided into subsequences A1, B1 and A2, B2 correspondingly. The result of A1 × A2 is considered as two subsequences A11 and A12, and the result of B1 × B2 as subsequences B11 and B12. The additions of the partial products A1 × B2 and A2 × B1 can be executed independently, with subsequent incrementing by the carry digits of the higher trit subsequences. Using the distributive property of multiplication, one can get the following:

111111 × 111111 = 111 × 111111 × 1000 + 111 × 111111
= 111 × 1000 × (111 × 1000 + 111) + 111 × (111 × 1000 + 111)        (4)
= 111 × 111 × 1000000 + (111 × 111 + 111 × 111) × 1000 + 111 × 111

In other words (an overbar denotes a negative digit):

X1 = 364₁₀ = A1 ++ B1 = 111 ++ 111
X2 = 364₁₀ = A2 ++ B2 = 111 ++ 111
A1 × A2 = 111 × 111 = 11̄011̄1 = A11 ++ A12
A1 × B2 + A2 × B1 = 111 × 111 + 111 × 111 = 11̄011̄1 + 11̄011̄1 = 1111̄1̄1̄ = B11 ++ B12
B1 × B2 = 111 × 111 = 11̄011̄1 = C11 ++ C12
D0 = C12 = 11̄1
P1 ++ D1 = C11 + B12 = 11̄0 + 1̄1̄1̄ = 01̄11̄        (5)
P2 ++ D2 = B11 + A12 + P1 = 111 + 11̄1 + 0 = 11̄11̄
D3 = A11 + P2 = 11̄0 + 1 = 11̄1
X3 = X1 × X2 = D3 ++ D2 ++ D1 ++ D0 = 11̄11̄11̄1̄11̄11̄1 = 132496₁₀


Besides, increasing the number of partitions leads to an increasing amount of calculations, though it would still allow decreasing the physical size of the arithmetic logic unit performing addition and multiplication of the given sequences:

X1 = 111111 = A1 ++ B1 ++ C1
X2 = 111111 = A2 ++ B2 ++ C2
X3 = X1 × X2 = A1 × A2 × 100000000 + (A1 × B2 + B1 × A2) × 1000000 + (B1 × B2 + A1 × C2 + A2 × C1) × 10000 + (B1 × C2 + B2 × C1) × 100 + C1 × C2        (6)

In order to find the product, one can use a multiplication table of three-digit integers. Due to the symmetry of numbers with respect to zero, the table need not contain all possible options (there exist 27 ways of recording a three-digit ternary number); it contains only the nonnegative options (14 lines and 14 columns). Obviously, when increasing the number of digits in the multiplied numbers, the size of the table increases as well. The rest of the cases can be accounted for by determining the signs of the factors: taking the opposite sign requires inversion of each digit, and the sign of the multiplication result is negative in case exactly one of the factors is negative. The division operation can be implemented similarly.
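The two-way decomposition of the product into independent partial products can be sketched as follows. Here the small sub-products are computed directly via integer arithmetic, standing in for the paper's precomputed three-digit table; that substitution, and the equal even-length assumption, are simplifications of this sketch.

```python
def mul_split(x, y):
    """Product of two equal, even-length balanced-ternary digit lists via
    X*Y = A1*A2*3^(2h) + (A1*B2 + A2*B1)*3^h + B1*B2; the partial
    products are independent and hence parallelizable."""
    def val(d):                    # digit list -> integer (Horner scheme)
        v = 0
        for trit in d:
            v = v * 3 + trit
        return v
    h = len(x) // 2
    a1, b1 = val(x[:h]), val(x[h:])
    a2, b2 = val(y[:h]), val(y[h:])
    base = 3 ** h
    return a1 * a2 * base * base + (a1 * b2 + a2 * b1) * base + b1 * b2
```

For the operands of example (5), `mul_split([1]*6, [1]*6)` reproduces 364 × 364 = 132496.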

3 Parallel Algorithms for Addition and Multiplication

The algorithm described above was intentionally developed to allow parallel execution. When adding D1 and D2, they can be subdivided into subsequences A1, B1 and A2, B2 correspondingly. The calculation of C1 and of the pair (P, C2) can be executed independently, where P is the carry digit. Finally, C1 should be incremented by P. Thus, the addition operation exhibits parallelism and is ready for a parallel implementation.

The implementation is developed in Haskell [18–20], which was chosen for its system of type classes. The latter allows implementing a new type simply by defining interfaces, particularly addition for numbers. Num is an interface which requires such operations as addition, sign inversion and multiplication. Functions which take arguments of type Num (variables belonging to the given class of types) use the implementations of the above-mentioned required operations defined in the instances (each instance needs to implement the interface consisting of these functions). In order to implement an integer ternary type in the STNS, the following functions were implemented (including those required by parent classes): equality check, comparisons, minimum, maximum, addition, multiplication and conversion into string.

With the parallel implementation in Haskell, experiments were conducted on a PC with a Core i7, 10 cores. Figure 1 shows the time of modeling the addition of 18-digit ternary integers; operating times for several numbers of threads are provided. The least time is achieved with 2 threads; 3 threads require more additional operations, which shows no further improvement. There is a gain in addition modeling time of 10% for 2 threads, and a smaller one of 5% for 3 threads.


Fig. 1. Addition operation modeling time (ms) versus number of threads, for the parallel implementation executed via several threads.

Figure 2 shows the time of modeling the multiplication of 18-digit ternary integers; operating times for several numbers of threads are provided. The least time is achieved with 2 threads (10% off), followed by 3 threads (3% off).

Fig. 2. Multiplication operation modeling time (ms) versus number of threads, for the parallel implementation executed via several threads.

Finally, the experimental data, including the acceleration achieved, are given in Table 2.

Table 2. Time and acceleration data for operations modelling.

Operation / threads   Time of operation modelling, ms      Acceleration achieved
                      1     2     3     4     5            1     2     3     4     5
+                     4.4   4.03  4.15  4.4   5.09         1.00  1.09  1.06  1.00  0.86
*                     17.7  16.4  17.3  17.8  18.6         1.00  1.08  1.02  0.99  0.95


4 Conclusion

It is shown that the STNS allows different implementations of the addition and multiplication operations. A parallel algorithm and its implementation are introduced. The execution time was obtained in experiments conducted by means of an implementation in Haskell. The modeling time was reduced by 10% for both addition and multiplication executed by 2 threads, which in turn allows reducing the calculation time of complicated mathematical methods. The shift towards ternary arithmetic, e.g. in the form of co-processors, is of considerable prospective value.

References

1. Higham, N.J.: Accuracy and Stability of Numerical Algorithms, 2nd edn., pp. 43–44. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (2002)
2. Ralston, A., Rabinowitz, P.: A First Course in Numerical Analysis, Dover Books on Mathematics, 2nd edn. Courier Dover Publications, Mineola (2012)
3. Aksoy, P., DeNardis, L.: Information Technology in Theory. Cengage Learning, Boston (2007)
4. Anoprienko, A.I., Ivanitsa, S.V.: Features of representation of real numbers in post-binary formats (in Russian). Math. Mach. Syst. 1(3), 49–60 (2012)
5. Isupov, K.S.: Modular-positional format and software package for high-precision bit-parallel calculations in floating-point format (in Russian). Bull. South Ural State Univ. Ser.: Comput. Math. Comput. Sci. 2, 65–79 (2012)
6. Lavrinenko, A.N., Chervyakov, N.I.: Study of non-modular operations in the system of residual classes (in Russian). Sci. Sheets Belgorod State Univ. Ser.: Econ. Inform. 21, 110–122 (2012)
7. Isupov, K.S.: On an algorithm for number comparison in the residue number system (in Russian). Bull. Astrakhan State Tech. Univ. Ser.: Manag. Comput. Eng. Inform. 3, 40–49 (2014)
8. Denisenko, B.: New physical effects in nanometer MOSFETs (in Russian). Compon. Technol. 12, 158–162 (2009)
9. Polyakov, V.I., Skorubsky, V.I.: The use of multivalued logic in the design of functional circuits (in Russian). Proc. High. Educ. Inst. Ser.: Instrum. 57(4), 57–60 (2014)
10. Budyakov, P.S., Chernov, N.I., Yugai, V.Ya., Yugai, N.N.: Logic functions representation and synthesis of k-valued digital circuits in linear algebra. In: 24th Telecommunications Forum (TELFOR 2016), pp. 1–4. IEEE (2016)
11. Hayes, B.: Third base. Am. Sci. 89(6), 490–494 (2001)
12. Kushnerov, A.: Ternary digital technology. Retrospective and contemporary state (in Russian). Ben-Gurion University, Beersheba, pp. 1–5 (2005). http://314159.ru/kushnerov/kushnerov1.pdf. Accessed 20 June 2015
13. Bobreshov, A.M., Koshelev, A.G., Zolotukhin, E.V.: Multichannel organic light emitting RGB diode as a ternary logic element. In: Proceedings of Voronezh State University, Voronezh (2016)
14. Voevodin, V.V., Kim, G.D.: A mathematician's view on machine operations (in Russian). In: Computational Methods and Programs, vol. 26. MSU, Russia (1977)
15. Stroganov, I.V., Rudakov, I.V.: Ternary virtual machine for calculating. Int. J. Adv. Stud. 4–3. Publishing House Science and Innovation Center, Ltd., Saint-Louis (2017)


16. Brusentsov, N.P., Maslov, S.P., Rozin, V.P., Tishulina, A.M.: Small Digital Computer "Setun" (in Russian). MSU Publishing House, Moscow (1965)
17. Knuth, D.: The Art of Computer Programming, vol. 2: Seminumerical Algorithms, chapter 4.1. Addison-Wesley Professional, Boston (2011)
18. Marlow, S.: Haskell 2010 Language Report. https://www.haskell.org/onlinereport/haskell2010/. Accessed 21 Oct 2018
19. Marlow, S.: Parallel and Concurrent Programming in Haskell. O'Reilly Media Inc., Sebastopol (2013)
20. Mena, A.S.: Beginning Haskell: A Project-Based Approach. Apress, New York (2015)

Simulation of Power Assets Management Process

Oleg Protalinsky¹, Anna Khanova², and Ivan Shcherbatov¹

¹ Moscow Energy Institute, 14 Krasnokazarmennaya St., Moscow 111250, Russia
[email protected]
² Astrakhan State Engineering Institute, 16 Tatishchev St., Astrakhan 414056, Russia

Abstract. Implementation of the Industry 4.0 concept leads to in-depth end-to-end automation of all activities of an integrated power grid and requires fundamentally new technologies that change conventional business models. Operational assets of power grid companies are characterized by semantic, syntactical, structural and systematic heterogeneity, which hinders the interaction at all management levels aimed at accident prevention and performance improvement. A cognitive double-level ontological model was developed as an aggregate of the conceptual confinement model and a set of hierarchic confinement models of the processes of technical condition diagnostics, repair program development and optimization, as well as optimization of logistic processes as the repair program is being implemented in a power grid company. A structural process scheme of distribution zone repair management in Q-scheme symbolism, as well as generalized and detailed modeling algorithm schemes, were developed. An interaction graph for the components of the power grid repair management process was developed. Elementary components of the stochastic process of interaction of the repair management process elements were detailed by their sets of states. The sub-model "Consumption of consumable materials and resources", realized in the Arena simulation package, was detailed. Examples were provided regarding the application of intelligent techniques at the strategic and operational levels of power grid management for the structural synthesis of a balanced scorecard system and the identification of apparent defects of process equipment.

Keywords: Power grid companies · Industry 4.0 · Ontology · Confinement model · Management · Simulation model · Artificial neural network · Internet of things

1 Introduction

Origination and development of the Industry 4.0 concept are based on the notion of the fourth industrial revolution, which is now an observable, predicted and controlled process consisting in the mass adoption of cyber-physical systems into production and the area of servicing human needs [1]. There is no production or service area that can do without electricity, and the shortage of resources and climatic changes by 2030 will

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 488–501, 2019. https://doi.org/10.1007/978-3-030-12072-6_40


result in a power demand increase of 50% [2]. During the implementation of the Industry 4.0 concept, the transformation of the production and service sectors should start from the integrated power grid. The power grid structure comprises the management of the Unified National (All-Russian) Power Grid, 14 interregional power grid operators and about 3,000 territorial grid operators. The share of interregional power grid operators (IRPGO) in the power supply market is 70%. The principal direction of power grid development is the consolidation of all facilities of the Unified National (All-Russian) Power Grid and the active introduction of online condition monitoring systems, using various sensors in power distribution networks and maintaining communication with other facilities and relevant control systems. This will enable the acquisition of historic or daily information on all power grid processes and parameters. The lack of necessary investments in the power grid over the last 20 years has resulted in considerable physical and technological obsolescence of power networks. The overall deterioration of electrical distribution networks has reached 70%. The introduction of online monitoring systems and the power grid upgrade shall be coordinated and aimed at the achievement of certain long-term objectives, expressed as indicators, which are often disorganized and do not provide full details on the progress of the Energy Strategy of Russia [3]. Changes related to the implementation of the Industry 4.0 concept will manifest themselves at all levels of power grid performance: from methods of business process organization to the execution of specific technological operations [4]. Commitment to the implementation of the Industry 4.0 concept necessitates the use of modern computer simulation, since it is an efficient tool for the research of new processes and the testing of new equipment and technologies.

It is becoming urgent to improve power grid management mechanisms, including simulation-based evaluation of the Energy Strategy's progress, digitalization of production, artificial intelligence and robotics technology, cloud-based computing and Big Data technologies.

2 Power Grid Strategic Management System

The Industry 4.0 concept is based on four components: cyber-physical systems, the "Internet of things", the "Internet of services" and the "Smart factory". Cyber-physical systems (CPS) result from the integration of computing and physical processes. CPSs can store and analyze data; they are equipped with multiple sensors and actuators and can be connected to computer networks. The "Internet of things" is a network where CPSs interact with each other via unique addressing schemes. Examples of the "Internet of things" include Smart Grids [1]. With the "Internet of services", companies can offer not only their various products but also their production technologies. A "Smart factory" (SF) is an enterprise where CPSs communicate via the "Internet of things" and help the personnel and equipment fulfil their tasks using the situation awareness available in the power grid. SF technical facilities work on the basis of information received both from the physical world (including corporate information systems) and the virtual one (computer models) [5]. Let us consider the IRPGO management level in the context of SF. Information on various processes of IRPGO cyber-physical systems is automatically gathered from


sensors—this is the operational management level (Fig. 1a). At the tactical management level, this information is further transferred via “Internet of things” [6] and accumulated in the corporate information system (ERP system) of the power grid (Fig. 1b).

Fig. 1. Structure of the IRPGO strategic management system in the context of implementation of the Industry 4.0 concept.

For the purpose of multi-aspect representation of the power grid subject domain as a situational awareness storage, a structure of basic computer models is developed:

1. Ontological model (OM)—a tool for structuring, description and elimination of the heterogeneity of terms in the SF subject domain. It has a double-level structure: the top-level confinement model (IDEF0 notation) and the low-level regular ontology (IDEF5 notation).
2. Simulation model (SM): details solutions, assigns authorities and resources. Development of a simulation model requires ontological and process models related to design notations of the IDEF family.
3. Neural simulation: represented by artificial neural networks designed for the solution of various classification, prediction and identification tasks.

Basic models may be supplemented with other types of models (process, cognitive, etc.) depending on the tasks to solve. Application of multi-aspect computer models results in a term base, extensive statistics (idle time, equipment loading, situations, etc.),

Simulation of Power Assets Management Process


trained artificial neural networks (ANN) that represent the predicted functioning of power grid processes under different conditions. At the strategic management level, it is necessary to select a tool for systematization of the disorganized indicators of current and predicted information. Organization performance optimization is studied in two main directions: evaluation of financial measures as scorecards, and formation of integrated systems aimed at performance assessment across different aspects of the organization's activities. Analysis of 19 business performance improvement tools showed that Quantum Performance Measurement, the Balanced Scorecard and the Hewlett-Packard internal market concept are the most popular tools. According to the consulting firm Bain & Company, which has been performing annual Management Tools & Trends reviews for more than 20 years, the Balanced Scorecard is the most effective. Joint usage of current (from the ERP system) and predictive (from multi-aspect models) information enables the manager to make managerial decisions.

3 Power Grid Ontological Modeling

Works of L.V. Massel are devoted to power ontological engineering, graphically representing ontologies that reflect the basic concepts of situational management, including situation analysis and situation modeling [7]. Advanced studies are carried out for development of ontologies based on interpretation of special cognitive models—confinement models based on the system-cognitive approach (Table 1). There are confinement models in various subject areas: management of organizational information-intellectual assets, the personality of a specialist, quality control of personnel performance, strategic management of social-economic systems, etc. [8, 9]. Let us consider development of an ontology system for assurance of interrelations and coherence of studies as exemplified by equipment maintenance and repair (M&R), as well as IRPGO operational assets management.

Table 1. Types of confinement models

Type | Relation | Structure | Ontology level
Conceptual (CCM) | "causes"/"depends on" | identification of the main factors to be achieved | top-level ontology ML_O^BY
Hypernymic (HCM) | "being" | classification of concept types | low-level ontologies ML_O^HY, belonging to a hierarchic CM
Meronymic (MCM) | "being a part of" | classification of concept classes | low-level ontologies ML_O^HY
Attributive (ACM) | "tends to" | classification of concept properties (attributes) | low-level ontologies ML_O^HY


In case of M&R process management in power grid companies, the knowledge domain specification covers several interrelated subject areas (power grid companies, repair and operational assets management, reliability, logistics, efficiency, quality, etc.) and is represented by a double-level ontological model (1):

ML_O = ⟨ML_O^BY, {ML_O^HY}, MB⟩,   (1)

where ML_O^BY—top-level ontology of the M&R management process in power grid companies, represented by a functional model developed in the IDEF0 methodology; {ML_O^HY}—set of low-level ontologies of the M&R management process in power grid companies, developed in the IDEF5 methodology; MB—inferential mechanism. Each level of an ontological model can be represented by one or several confinement models (CM) of different types (Table 1). A CCM is always on the top CM level, and each consequent level represents a specification of an element (any, except for the central one) of the preceding CM level by a model of any of the four types described above. As a result of developing an ontological model in the form of a double-level confinement model, an expert is enabled to determine a set of factors of the highest significance for achieving the objective and a set of relations between them.

3.1 Top-Level Ontological Model

The OM top level (Fig. 2) in terms of an IDEF0 model is an aggregate of four sets:

ML_O^BY = ⟨I_MLO, U_MLO, O_MLO, M_MLO⟩,   (2)

where I_MLO = {X_p, M_p}—finite set of arcs called inputs; U_MLO = {DL, P, St}—finite set of arcs called management; M_MLO = {R, DL, OP}—finite set of objects called mechanisms; O_MLO = {S, X_f, M_f}—finite set of arcs called outputs.

Fig. 2. Generalized context diagram of execution of the M&R and operational assets management system process in power grid companies.


In terms of the generalized context diagram of the M&R management system process in power grid companies (Fig. 2), the elements designate the following: ① Xp—process objective (R costs as per the repair program implementation plan), and Xf—process results (actual M&R costs as per repair program implementation); ② Y—M&R activities; ③ R—material and labor resources for implementation of M&R processes; ④ Mp—process objective interpretation (planned equipment specifications), Mf—process result interpretation (actual equipment specifications); ⑤ St—repair program; ⑥ DL—resource limitations; ⑦ P—regulations and procedures of the M&R management process in power grid companies (process flow charts); ⑧ OP—set of planning approaches aimed at optimization of the repair program implementation in power grid companies; ⑨ S—proper technical condition of an asset. In semiotic terms, confinement model elements (Fig. 3) are grouped according to their belonging to “circles”: internal circle (syntactic level), correlative external circle (semantic level), and their supplemental middle circle (pragmatic level). M&R costs minimization ① (variable costs first of all) is carried out by execution of various repair activities ②, considering regulations and procedures ⑦ (syntactic level—internal circle). Based on planning approaches ⑧ and considering repair program parameters ⑤, equipment specifications ④ are interpreted as the M&R process result (semantic level—external circle). Change of technical condition of operational assets of power grid companies ⑨ is based on selection of material and labor resources ③, considering resource limitations ⑥ of the power grid company (pragmatic level—middle circle). Model elements are interconnected by conceptual relations “Causes”/“Depends on” of three types: management (M), input/output (I/O), tool (T).

Fig. 3. CCM of M&R and IRPGO operational assets management process.

In ontological terms, the elements are grouped according to their belonging to “sectors”: prescriptive, descriptive and relational. In the first case, the model elements are used for imperative (No. 2), declarative (No. 5) and situational (No. 3)


representation of the activity substance. In the second case, denotative (No. 1), conceptual (No. 4) and comparative (No. 9) descriptions of a specific activity result are made. In the third case, tactical (No. 7), strategic (No. 8) and operational (No. 6) evaluations of performance results are made.

3.2 Low-Level Ontological Model

The low-level ontological model of M&R and operational assets management process in power grid companies can be represented by the following tuple:

Fig. 4. Low-level ontological model of M&R and IRPGO operational assets management process in power grid companies.


ML_O^HY = ⟨A, M, R⟩,

where A = {a_i}—set of notions (concepts) creating the low-level ontology of the repair management process of a power grid company, i = 1, …, I, i.e. |A| = I; M_i = {m_i^1, …, m_i^d}—set of attributes of notion a_i (d—number of attributes describing the notion); R ⊆ A × A—immediate inheritance relation. The hierarchy of the low-level ontological model of the M&R and operational assets management process in power grid companies includes ontology notions and a set of notion attributes (Fig. 4). In terms of a model of the M&R management process life cycle, the low-level OM ML_O^HY (Fig. 4) is represented as an integration of interrelated ontologies:

ML_O^HY = ML_O1^HY ∪ ML_O2^HY ∪ ML_O3^HY,

where ML_O1^HY—ontology of process equipment diagnostics (PED); ML_O2^HY—ontology of the optimal repair program formation (ORPF) process; ML_O3^HY—ontology of logistics process optimization (LPO) during the repair program implementation. We proposed a method for structuring the power grid M&R process information as an ontological knowledge base on the basis of an aggregate of interrelated cognitive double-level models of a special kind (confinement models), a combination of system analysis and synthesis methods, analogic inference and system triads identification.
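The tuple ML_O^HY = ⟨A, M, R⟩ above can be sketched as plain data structures. This is only an illustration: the notion names and attributes below are invented placeholders, not taken from the paper's actual PED/ORPF ontologies.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Notion:
    """A concept a_i with its attribute set M_i."""
    name: str
    attributes: frozenset = frozenset()

class Ontology:
    """ML_O^HY = <A, M, R>: notions A with attributes M, inheritance R ⊆ A × A."""
    def __init__(self):
        self.notions = {}        # A, keyed by notion name
        self.inherits = set()    # R: (child, parent) pairs

    def add(self, name, attributes=(), parent=None):
        self.notions[name] = Notion(name, frozenset(attributes))
        if parent is not None:
            self.inherits.add((name, parent))

    def union(self, other):
        """Integration of interrelated ontologies, as in ML_O1 ∪ ML_O2 ∪ ML_O3."""
        merged = Ontology()
        merged.notions = {**self.notions, **other.notions}
        merged.inherits = self.inherits | other.inherits
        return merged

# hypothetical fragments of the PED and ORPF ontologies
ped = Ontology()
ped.add("equipment")
ped.add("transformer", {"load", "oil_temperature"}, parent="equipment")

orpf = Ontology()
orpf.add("repair_program")
orpf.add("repair_order", {"priority", "cost"}, parent="repair_program")

ml_o = ped.union(orpf)
```

The union operation mirrors how the three subject-area ontologies are integrated into one knowledge base.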

4 Power Grid Simulation Modeling

IRPGO process management during the implementation of the Industry 4.0 concept [6] includes not only aspects of structural organization improvement, but also internal transport logistics, since 90% of the time the subjects of labor are not in the technological conversion condition but in the logistic standby condition [5]. IRPGO operational assets form a complex system with an immense, practically uncountable number of possible conditions, in which the repair request flow is processed and material and technical resources (MTR) and repair crews are allocated during M&R activities. M&R is characterized by the stochastic performance of maintenance items (supports, racks, channels, etc.), of non-expendable MTR (NEMTR) and of the operations of repair crews, and by the consumption of expendable MTR (EMTR), which signifies the discrete-stochastic nature of the M&R process and once more confirms that simulation modeling is the correct choice of research technique. According to the task set for simulation modeling of structure-forming elements of a power distribution zone (PDZ), a generalized chart was developed (Fig. 5).


Fig. 5. Generalized chart of the process of logistic interaction of PDZ structure-forming elements.

Performance dynamics of structure-forming elements of PDZ M&R processes can be described by the vector stochastic process X(t) with dependent components:

X(t) = {X1(t), X2(t), X3(t), X4(t)},

where X1(t)—stochastic process describing the PDZ M&R work process; X2(t)—stochastic process describing the operations of repair crews during maintenance activities (M&R); X3(t)—stochastic process describing NEMTR functioning (allocation); X4(t)—stochastic process describing EMTR functioning (allocation). Interaction of the maintenance process components X(t) is transitive and is determined (set) by the graph (Fig. 6).

Fig. 6. Graph of interaction of M&R process components X(t).

The maintenance rate per repair item (power transmission line, transformer or other substation, power distribution station and other power supply and transmission facilities) depends, on the one hand, on the number of items requiring repair and, on the other hand, on the performance of a repair crew and the number of currently available crews. That is why, in terms of the maintenance plan, X2(t) governs X1(t). However, the rate of call-outs of repair crews for maintenance of assets (per repair crew) also depends, on the one hand, on the number of items requiring repair and, on the other hand, on the number of available repair crews, so in terms of maintenance demand X1(t) governs X2(t). The reasoning regarding X1(t) and X3(t), and X1(t) and X4(t), is similar. As for the interaction of X2(t) and X3(t), the intensity of failures and recovery of NEMTR is determined by their interaction; but if NEMTR recovery is carried out by a specialized service, there is no such interaction, and an additional process X5(t) appears in the system, so the interaction of X2(t) and X3(t) in Fig. 6 is shown as a dotted line.
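The interaction graph just described can be captured as a small adjacency structure. The edge set below follows the verbal description above and is illustrative; X5 stands for the specialized NEMTR recovery service mentioned in the text.

```python
# edges follow the verbal description of the component-interaction graph (Fig. 6)
interactions = {
    "X1": {"X2", "X3", "X4"},   # M&R work interacts with crews, NEMTR, EMTR
    "X2": {"X1", "X3"},         # crews interact with work and (conditionally) NEMTR
    "X3": {"X1", "X2"},
    "X4": {"X1"},
    "X5": {"X3"},               # specialized service recovering NEMTR
}

def mutual(a, b):
    """Two components govern each other if each lists the other as a neighbor."""
    return b in interactions.get(a, set()) and a in interactions.get(b, set())
```

For example, X1 and X2 govern each other (plan vs. demand), while X4 and X5 do not interact directly.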


In queuing theory, a special class of mathematical schemes, called queuing systems or Q-schemes, was developed for formalization of system functioning processes of exactly this maintenance kind. A structural scheme of the conceptual M&R process model in Q-scheme symbolism was developed (Fig. 7).

Fig. 7. Structural chart of functioning of structure-forming PDZ elements in Q-scheme symbolism (S—source; A—accumulator; C1–C5—channels; 1–6—valves).

The source S simulates the receipt of repair orders (requests, in queuing terms) by the process dispatch control service. The accumulator A simulates execution of the repair program. Channel C1 simulates the processing of orders for inclusion in the repair program. Channels C2–C5 respectively simulate the processes of repair crew formation, allocation of NEMTR, functioning of EMTR, and execution of the repair process. Valves 1–6 with corresponding control connections (dotted lines), by interlocking the inputs and outputs of the accumulator and channels, reflect the management of procurement and utilization of PDZ resources (repair crews and MTR). In the simulation model (SM) of PDZ M&R processes, each marked-out state graph, as well as certain auxiliary processes, forms a detailed logical diagram of the modeling algorithm (Fig. 8), as well as program schemes. The proposed simulation model is realized in the Arena simulation modeling environment, but the model can be developed in another simulation environment on the basis of the logical diagrams and modeling algorithms.
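A minimal event-driven sketch of the Q-scheme idea (source S, one serving channel, accumulator A) might look as follows. The exponential arrival and service distributions are assumed for illustration only; the full model of Fig. 7 additionally has channels C2–C5 and valve logic, which are omitted here.

```python
import heapq
import random

def simulate(n_orders=100, mean_arrival=1.0, mean_service=0.8, seed=1):
    """Toy single-channel queue: returns the number of completed repair orders."""
    rng = random.Random(seed)
    events = []              # heap of (time, kind)
    t = 0.0
    for _ in range(n_orders):            # source S: repair-order arrivals
        t += rng.expovariate(1.0 / mean_arrival)
        heapq.heappush(events, (t, "arrival"))
    queue = 0                # orders waiting before channel C1
    busy_until = 0.0         # time when C1 becomes free
    done = 0                 # accumulator A: completed orders
    while events:
        now, kind = heapq.heappop(events)
        if kind == "arrival":
            queue += 1
        # channel C1 takes the next queued order whenever it is free
        while queue and busy_until <= now:
            queue -= 1
            busy_until = now + rng.expovariate(1.0 / mean_service)
            heapq.heappush(events, (busy_until, "departure"))
        if kind == "departure":
            done += 1
    return done
```

Every generated order is eventually served, so the accumulator count equals the number of orders; a fuller model would add the valve interlocks and resource channels.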

Fig. 8. Detailed scheme of modeling algorithm.


Let us have a closer look at the simulation model as exemplified by sub-model 4 “EMTR Consumption”, designed for simulation of the process of supplies loading on freight motor transport and tracking of further movements of EMTR (Fig. 9).

Fig. 9. Module "EMTR Consumption".

The sub-model "EMTR Consumption" consists of the following types of blocks: Station—animation modules that determine the stations; Assign—sets new graphic images for transactions and changes variable values; Separate—separates the carrier and supplies; Request—requests carriers Avto 1-4, Gruz 1-4 (vehicles and loaders, respectively) that will distribute the supplies; Process—the supplies loading process; Transport—determines the name of the carrier and the station to which the carrier shall arrive; Move—dispatches the carrier to the station; Free—releases the carriers. Simulation runs of the model reproduce situations arising during repair program implementation depending on different supplies routing schemes and exogenous factors. Performance indicators of power grid repair activities are evaluated in the Balanced Scorecard System.

5 Power Grid Neural Simulation

Let us investigate the possibility of applying intellectual data processing methods at all levels of power grid functioning: from business process organization (as exemplified by generation of the balanced scorecard system structure) to methods of execution of specific operations (as exemplified by electrical equipment monitoring for defect identification). Power grid activities are characterized by an enormous number of various indicators: financial (balance sheets, different financial statements), production, etc. Though there are hundreds of indicators, a BSS includes the 8–25 most significant indicators in accordance with a selected strategy. For selection of the BSS structure (i.e. the set of indicators) it is proposed to use an ANN. The power grid top management, i.e. experts, including the chief accountant, discipline directors or department managers, shall prepare variants of the BSS structures (i.e. select a set of indicators from an available list) in accordance with a selected strategy. The BSS structures developed in


accordance with the strategy are the cases used for ANN training. The trained ANN will generate a BSS in accordance with a selected strategy. If there is no set of objectives for the required strategy, the system is trained by entering the opinions of various experts regarding the necessary set of objectives for the selected strategy into the system (Fig. 10). Then the ANN weights are adjusted. Once the system is trained, i.e. all variants of objectives are sorted out, the system proceeds to synthesis of BSS variants [10].

Fig. 10. Kohonen ANN structure for BSS structure synthesis.
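A toy sketch of the winner-take-all training behind such a Kohonen layer is given below. The sizes, the binary training cases and the absence of a neighborhood function are simplifications for illustration, not the authors' configuration.

```python
import random

def train_kohonen(cases, n_units=3, epochs=50, lr=0.3, seed=0):
    """Competitive (winner-take-all) training of a small Kohonen layer."""
    rng = random.Random(seed)
    dim = len(cases[0])
    # one weight vector per output unit (per candidate BSS "cluster")
    w = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in cases:
            # winner: the unit closest to the expert's indicator set
            j = min(range(n_units),
                    key=lambda u: sum((w[u][k] - x[k]) ** 2 for k in range(dim)))
            for k in range(dim):         # move the winner toward the case
                w[j][k] += lr * (x[k] - w[j][k])
    return w

def winner(w, x):
    dim = len(x)
    return min(range(len(w)),
               key=lambda u: sum((w[u][k] - x[k]) ** 2 for k in range(dim)))

# two hypothetical expert-prepared BSS structures (1 = indicator included)
cases = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
w = train_kohonen(cases)
```

After training, a new indicator vector is assigned to the cluster whose weight vector it is closest to, which is the mechanism the paper relies on for grouping BSS variants by strategy.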

Electrical equipment may suffer from defects, which do not affect its operability immediately. Defects may be described in reference literature, but identification of each defect requires complex diagnostics. The introduced EAM-system is a source of extensive data on equipment state. Feedforward artificial neural networks, where outputs of one layer are connected strictly to inputs of a subsequent layer, were selected for identification of electrical equipment defects. The number of neurons of the input layer exceeds the number of equipment parameters by one to take into account a free term (Fig. 11).

Fig. 11. Structure of a feedforward multi-layer ANN for identification of electrical equipment defects.


"1" is always fed to the input of the "extra" neuron. The number of output layer neurons exceeds the number of possible equipment defects by one. This is necessary for computation of the zero-defect probability. For the output layer neurons, the sigmoid activation function is used, since it estimates the probability of the object belonging to a certain class [11]. Examples of the application of intellectual methods at the strategic and operational levels of power grid management, for synthesis of the BSS structure and identification of apparent defects of process equipment, were demonstrated.
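The forward pass just described (equipment parameters plus a constant-1 bias input, sigmoid outputs read as per-defect probabilities) can be sketched as follows; the weights here are illustrative, not trained values from the paper.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(params, w_hidden, w_out):
    """Feedforward pass: layers connected strictly to the next layer."""
    x = params + [1.0]                  # the "extra" neuron fed a constant 1
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)))
              for row in w_hidden]
    # sigmoid outputs interpreted as per-class (per-defect) probabilities
    return [sigmoid(sum(wi * hi for wi, hi in zip(row, hidden)))
            for row in w_out]

# 2 parameters -> 3 inputs with bias; 2 hidden; 3 outputs (2 defects + "no defect")
w_hidden = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.4]]
w_out = [[1.0, -1.0], [0.2, 0.6], [-0.5, 0.4]]
probs = forward([0.7, 0.1], w_hidden, w_out)
```

Note how the output vector is one longer than the defect list, matching the "no defect" output described above.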

6 Conclusion

Practical application of the proposed models and methods showed performance improvement, expressed by a 30–50% reduction of the time required for interaction of analysts with experts, as well as simplification of the model structure by 40–80% as compared with conventional methods of knowledge base formation. The proposed approaches have already been realized as part of the EAM Optima software. This software product enables equipment condition control, diagnostics and repair planning, as well as cost reduction during repair management through development of optimal (from the financial expenditure perspective) repair programs. Application of the proposed method provides a convenient toolkit for effective management of equipment maintenance and repair processes in power grid companies.

References

1. Program: Digital economics of the Russian Federation. Approved by Decree of the RF government No. 1632-p, 28 July 2017. http://government.ru/docs/28653. Accessed 18 Oct 2018
2. Hermann, M., Pentek, T., Otto, B.: Design principles for Industrie 4.0 scenarios. In: 49th Hawaii International Conference on System Sciences, pp. 3928–3937. IEEE, Koloa (2016). https://doi.org/10.1109/hicss.2016.488
3. Energy strategy of Russia till 2030. Institute of Energy Strategy, Moscow (2010)
4. Lu, Y.: Industry 4.0: a survey on technologies, applications and open research issues. J. Ind. Inf. Integr. 6, 1–10 (2017). https://doi.org/10.1016/j.jii.2017.04.005
5. Khanova, A., Protalinskiy, O., Averianova, K.: The elaboration of strategic decisions in the socio-economic systems. J. Inf. Organ. Sci. 41(1), 57–67 (2017)
6. Zhou, K., Liu, T., Zhou, L.: Industry 4.0: towards future industrial opportunities and challenges. In: Conference on Fuzzy Systems and Knowledge Discovery, pp. 2147–2152. IEEE, Zhangjiajie (2015). https://doi.org/10.1109/fskd.2015.7382284
7. Massel, L.: Problems of transition to intelligent and digital power engineering from the point of information technologies. In: Critical Infrastructures: Contingency Management, Intelligent, Agent-Based, Cloud Computing and Cyber Security, pp. 13–14. Atlantis Press, Irkutsk (2018)
8. Sitthithanasakul, S., Choosri, N.: Using ontology to enhance requirement engineering in agile software process. In: 10th International Conference on Software, Knowledge, Information Management & Applications, pp. 181–186. IEEE, Chengdu (2016). https://doi.org/10.1109/skima.2016.7916218


9. Suarez-Figueroa, M.C., Gomez-Perez, A., Motta, E., Gangemi, A. (eds.): Ontology Engineering in a Networked World. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-24794-1
10. Khalyasmaa, A., Dmitriev, S., Kokin, S., Eroshenko, S.: Fuzzy neural networks' application for substation integral state assessment. WIT Trans. Ecol. Environ. 190, 599–605 (2014). https://doi.org/10.2495/EQ140581
11. Protalinsky, O., Shcherbatov, I., Stepanov, P.: Identification of the actual state and entity availability forecasting in power engineering using neural-network technologies. J. Phys: Conf. Ser. 891(1), 1–6 (2017). https://doi.org/10.1088/1742-6596/891/1/012289

Examination of the Process of Automated Closure of Containers with Screw Caps

Slav Dimitrov, Lubomir Dimitrov, Reneta Dimitrova, and Stelian Nikolov
Technical University of Sofia, Sofia, Bulgaria
{sbd,lubomir_dimitrov,rkd,st_nikolov2}@tu-sofia.bg

Abstract. A test-rig for the study of the parameters of the process of automated closing of containers with screw caps is designed and realized. The purpose of the developed test-rig is to study and manage the parameters of the process of automated closure of screw caps. The aim of the experiment is to create a regression model of the process of automated closure of screw caps, which takes into account the influence of certain factors.

Keywords: Automation · Test rig · Mathematical modelling

1 Introduction

Small and medium enterprises are the basis of Bulgarian industry. They account for more than 40% of GDP in the country. These enterprises are mainly in the field of single and small-scale production. On the other hand, the lack of service staff, both engineers and workers, makes the automation of this production highly relevant. Before proceeding with the development of automated flow lines, it is necessary to investigate the processes that will need to be automated. Various test rigs are made for this purpose [1–3]. This article is devoted to examination of the process of automation of closure of containers with screw caps and to the development of an experimental test rig. For the devices used in the developed test-rig for testing the parameters of the process of automated closure of screw caps, the following basic requirements are defined: the possibility to use different types of screw caps; the possibility of using different closing containers; the possibility to control the basic parameters of the closing process—rotation speed and compressive force; the possibility of changing the way the caps are fed into the turning area; a mobile and repairable test-rig construction; minimum price. Taking these requirements into account, a conceptual 3D model of the test-rig has been developed, as shown in Fig. 1 [4]. The construction of the test-rig is built on a modular principle [5–7] and includes the following basic modules:

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 502–514, 2019. https://doi.org/10.1007/978-3-030-12072-6_41


• Housing—The housing (1) is divided into two parts—lower and upper. In the lower part is the control of the test-rig and part of the drive of the conveyor (2). At the upper part of the housing the drive part of the closing module is located (4). All other components are attached to the housing. • Conveyor—The conveyor (2) serves to feed the closed containers to the closure and removal position of these containers after their closure. A linear conveyor will be used in the test-rig, which will allow the parameters of the closing process to be examined for containers of different shapes and dimensions (within certain limits).

Fig. 1. Conceptual 3D model of the designed test-rig

• Magazine collector—The magazine collector (3) serves to store the screw caps in the oriented position and feed them to the closed containers when the bench cycle is started. Orientation and charging of caps in the collector will be done manually; if available, this process can be automated using a vibratory bowl feeder.
• Closing module—The closing module (4) provides the necessary movements for turning the caps onto the respective containers. These are a rotary motion ensuring screwing and vertical clamping of the cap to the container. The drive used must be able to control the torque when turning the caps.
• Closing cup holder—The closing container holder (5) serves to cut off and, if necessary, lock the closed containers in the closure position.
• Closing head—The closing head (6) serves to grip the cap and provide the necessary movements to move it to the turning area and to make the turn itself. It is


mounted to the closing module (4) and is changed when using caps of different construction and dimensions.
• Control panel—The control panel (7) is located at the front of the lower half of the housing (1) and contains the necessary controls for controlling the test-rig (Fig. 2).

Fig. 2. General appearance, part of the electrical circuit and control of the test-rig

2 Working Principle and Control

The main operations when working with the developed test-rig are as follows:

• Attaching the closing head required for the type of screw cap
• Adjusting the automatic feeding capacities in the placement area on the opening of the closed container
• Loading of the screw caps in collector 3
• Setting up of the linear conveyor to the gauges of the closing containers
• Loading of the closing containers on the linear conveyor
• Setting the closing process parameters
• Starting the closing process
• Manually removing the closed containers from the linear conveyor and assessing the closure quality
• After the magazine collector templates run out, the test-rig stops automatically

For development of the control system, the SIMATIC S7-1200 PLC and the "TIA Portal" software are used [8]. This is specialized software for programming Siemens controllers and managing their peripheral devices. The software allows the use of different types of visualization, including computer systems or an external display. It also allows the use of external devices managed by different protocols. Parts of the developed program


are shown in Fig. 2. For the purposes of this work, a computer program "NS3" has been developed by us. It is used for servo motor control and recording of the data from the conducted experiments. It performs the following functions:

• Reading data from servo motor operation;
• Conversion of the read data from hex into decimal code;
• Writing the data array;
• Checking the array for errors;
• Transmitting data via the serial channel to the master controller;
• Transmitting time intervals to the master controller;
• Writing cycles in the main program;
• Plotting of investigation graphs;
• Design of graphics on a computer;
• Saving archive data;
• Showing investigation errors.
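Two of the listed functions can be sketched in a few lines. The frame layout and the 16-bit value range are assumptions for illustration; the paper does not specify the servo drive's data format.

```python
def hex_to_dec(values):
    """Convert hex strings read from the servo drive to decimal integers."""
    return [int(v, 16) for v in values]

def check_array(frame):
    """Flag an error if any value falls outside the assumed 16-bit range."""
    return all(0 <= v <= 0xFFFF for v in frame)

# hypothetical frame of three readings from the serial channel
frame = hex_to_dec(["1A", "FF", "0400"])   # -> [26, 255, 1024]
```

A real implementation would additionally validate the frame against the drive's protocol (length, checksum) before forwarding it to the master controller.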

3 Examination of Parameters of the Process of Automated Closing with Screw Caps

The theoretically developed methodology (based on [9]) was experimentally tested. For this purpose, an experimental test-rig was developed. The study was carried out with 500 ml plastic bottles.

3.1 Selection of Factors and Factor Space

A factor is a variable subject to change during the process of assembly. Several factors are commonly altered during experiment planning. For this reason, they must be compatible (all combinations of factors are possible) and independent (a factor can be set at different levels regardless of the levels of the other factors). The set of factors used to plan the experiment is called a factor space. Factors vary on three levels—upper, lower and zero. The center of the experiment (reporting point) is a point of the factor space with zero (central) coordinates. This is the point around which the individual factors vary. It is recommended that the center of the experiment be a point having coordinates equal to the average level of the factors [10, 11]. The following factors and factor space were selected for the conducted experiment:

• First factor X1—the angle A° of the slope of the magazine collector. It is one of the major factors in "taking" a cap from the magazine collector. In the developed test-rig, this angle may change within the limits 2–50°;


• Second factor X2—the speed V m/min of the linear conveyor. This factor affects the successful "taking" of a cap from the magazine collector and the productivity of the whole process of automated closure of screw caps. In the developed test-rig, this speed may vary within the range 4–70 m/min;
• Third factor X3—the time T s for screwing of the caps. This factor affects the successful closure of caps, the ultimate result of automated closing and the productivity of this process. In the developed test-rig this time may change in the interval 1–15 s.

Table 1 gives a plan for conducting the planned experiment. The permanent factors in the experiment are:

• Failure distribution law—normal;
• Availability factor of the test-rig—K_G = 0.98.

The determining parameter Y is the percentage of properly closed containers.

Table 1. Central regressive experiment 2³, factors and levels of the factors [4]

Coded value | I factor X1—angle of the slope of the MC, A [°] | II factor X2—movement speed of the linear conveyor, V [m/min] | III factor X3—turning time, T [s]
+1.682 | 5 | 5 | 3
+1 | 15 | 15 | 5
0 | 25 | 25 | 7
−1 | 35 | 35 | 9
−1.682 | 45 | 45 | 11
Variation interval Δ | 10 | 10 | 2

3.2 Purpose of the Experiment

The aim of the experiment is to create a regression model [12] of the process of automated closure of screw caps, which takes into account the influence of the factors defined above. The type of the desired mathematical model is:

Y = b_0 + Σ(i=1..k) b_i·X_i + Σ(i<j) b_ij·X_i·X_j + Σ(i=1..k) b_ii·X_i²

where:
– k is the number of variable factors (k = 3);
– b_i are the regression coefficients.

The number of regression coefficients l is determined by the formula:

l = (k + 1)(k + 2)/2 = (3 + 1)(3 + 2)/2 = 10
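The level coding used in the plan and the coefficient count above can be checked in a few lines. The natural-to-coded transform x_coded = (x − x0)/Δ is the usual convention of experiment planning and is assumed here, since the paper does not spell it out.

```python
def to_coded(x, x0, delta):
    """Natural level -> coded level, assuming the usual (x - x0)/Δ convention."""
    return (x - x0) / delta

def coefficient_count(k):
    """l = (k + 1)(k + 2)/2 regression coefficients for k factors."""
    return (k + 1) * (k + 2) // 2

# for the turning time X3 (center 7 s, interval 2 s), 9 s codes to 1.0
```

With k = 3 this reproduces the ten coefficients of the quadratic model above.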

Choosing the right value of the star arm α is done by using the tables suggested in [10, 11]. For n0 = 1 and k = 3 we define: α = 1.683, α² = 2.829. The overall pattern of the composite plan developed for the experiment at these values is shown in Table 2.

Table 2. Composite plan for the central regressive experiment 2³ [4]

№ | X0 | X1 | X2 | X3 | X1X2 | X1X3 | X2X3 | X1² | X2² | X3²
1 | + | − | − | − | + | + | + | + | + | +
2 | + | + | − | − | − | − | + | + | + | +
3 | + | − | + | − | − | + | − | + | + | +
4 | + | + | + | − | + | − | − | + | + | +
5 | + | − | − | + | + | − | − | + | + | +
6 | + | + | − | + | − | + | − | + | + | +
7 | + | − | + | + | − | − | + | + | + | +
8 | + | + | + | + | + | + | + | + | + | +
9 | + | +1.683 | 0 | 0 | 0 | 0 | 0 | 2.829 | 0 | 0
10 | + | −1.683 | 0 | 0 | 0 | 0 | 0 | 2.829 | 0 | 0
11 | + | 0 | +1.683 | 0 | 0 | 0 | 0 | 0 | 2.829 | 0
12 | + | 0 | −1.683 | 0 | 0 | 0 | 0 | 0 | 2.829 | 0
13 | + | 0 | 0 | +1.683 | 0 | 0 | 0 | 0 | 0 | 2.829
14 | + | 0 | 0 | −1.683 | 0 | 0 | 0 | 0 | 0 | 2.829
15 | + | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
16 | + | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
17 | + | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
18 | + | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
19 | + | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
20 | + | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0

3.3 Processing of Experimental Data

For experimental data processing, the predictive matrices given in Tables 3, 4 and 5 are compiled. A regression analysis was performed to determine the coefficients of the mathematical model using Microsoft Excel. The formula used for this purpose is [10, 11]:

b_j = Σ(i=1..N) X_ij·Y_i / Σ(i=1..N) X_ij²

where:
– X_ij is the coded value of the i-th attempt in the j-th column of the experiment matrix;
– Y_i is the parameter value at the i-th attempt.

Table 3. Central regressive experiment 2³ at X1 = 0 [4] (since X1 = 0 in every run, the X1, X1X2, X1X3 and X1² columns are identically zero and are omitted here)

№ | X2 | X3 | X2X3 | X2² | X3² | Y
1 | −1 | −1 | 1 | 1 | 1 | 43.230
2 | 1 | −1 | −1 | 1 | 1 | 35.270
3 | −1 | 1 | −1 | 1 | 1 | 52.790
4 | 1 | 1 | 1 | 1 | 1 | 44.830
5 | −1 | −1 | 1 | 1 | 1 | 43.230
6 | 1 | −1 | −1 | 1 | 1 | 35.270
7 | −1 | 1 | −1 | 1 | 1 | 52.790
8 | 1.682 | −1.682 | −2.829 | 2.829 | 2.829 | 32.515
9 | 1.682 | 1.682 | 2.829 | 2.829 | 2.829 | 48.595
10 | −1.682 | −1.682 | 2.829 | 2.829 | 2.829 | 45.903
11 | −1.682 | 1.682 | −2.829 | 2.829 | 2.829 | 61.983
12 | 0 | 0 | 0 | 0 | 0 | 42.270
13 | 0 | 1.682 | 0 | 0 | 2.829 | 55.289
14 | 0 | −1.682 | 0 | 0 | 2.829 | 39.209
15 | 1 | 1.682 | 1.682 | 1 | 2.829 | 51.309
16 | −1 | 1.682 | −1.682 | 1 | 2.829 | 59.269
17 | 1 | −1.682 | −1.682 | 1 | 2.829 | 35.229
18 | −1 | −1.682 | 1.682 | 1 | 2.829 | 43.189
19 | 0 | 1 | 0 | 0 | 1 | 48.810
20 | 0 | −1 | 0 | 0 | 1 | 39.250
21 | 1 | 0 | 0 | 1 | 0 | 38.290
22 | −1 | 0 | 0 | 1 | 0 | 46.250
23 | −1.682 | −1 | 1.682 | 2.829 | 1 | 45.944
24 | −1.682 | 0 | 0 | 2.829 | 0 | 48.964
25 | −1.682 | 1 | −1.682 | 2.829 | 1 | 55.504
26 | 1.682 | −1 | −1.682 | 2.829 | 1 | 32.556
27 | 1.682 | 0 | 0 | 2.829 | 0 | 35.576
28 | 1.682 | 1 | 1.682 | 2.829 | 1 | 42.116

Based on this formula and the data from the experiment matrix, the following formulas can be derived for:


• The free term of the regression equation:
\[ b_0 = \frac{\sum_{i=1}^{20} X_{i0} Y_i}{\sum_{i=1}^{20} X_{i0}^2} \]

• The linear terms of the regression equation:
\[ b_j = \frac{\sum_{i=1}^{20} X_{ij} Y_i}{\sum_{i=1}^{20} X_{ij}^2}, \quad j = 1 \ldots 3 \]

• The interaction terms:
\[ b_{uj} = \frac{\sum_{i=1}^{20} X_{iu} X_{ij} Y_i}{\sum_{i=1}^{20} X_{ij}^2}, \quad u = 1, 2; \; j = 1 \ldots 3; \; u < j \]
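Each of these least-squares estimates is a ratio of two column sums, so it can be sketched in a few lines of Python. The `coeff` helper and the toy numbers below are illustrative, not data from the paper; the interaction coefficients b_uj use the same helper applied to the element-wise product column X_u·X_j:

```python
def coeff(x_col, y_col):
    """b = sum(x_i * y_i) / sum(x_i^2) for one (orthogonal) design column."""
    num = sum(x * y for x, y in zip(x_col, y_col))
    den = sum(x * x for x in x_col)
    return num / den

# toy example: linear coefficient for one factor of a 2-level design
x1 = [-1, 1, -1, 1]
y = [40.0, 36.0, 50.0, 46.0]
b1 = coeff(x1, y)
print(b1)  # -2.0
```

For b_uj, pass `[a * b for a, b in zip(xu, xj)]` as the numerator column, as in the formula above.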

Table 4. Central composite experiment 2³ at X2 = 0 [4]

No. | X1 | X2 | X3 | X1X2 | X1X3 | X2X3 | X1² | X2² | X3² | Y
1 | −1 | 0 | −1 | 0 | 1 | 0 | 1 | 0 | 1 | 46.040
2 | 1 | 0 | −1 | 0 | −1 | 0 | 1 | 0 | 1 | 32.460
3 | −1 | 0 | 1 | 0 | −1 | 0 | 1 | 0 | 1 | 55.600
4 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 40.820
5 | −1 | 0 | −1 | 0 | 1 | 0 | 1 | 0 | 1 | 44.840
6 | 1 | 0 | −1 | 0 | −1 | 0 | 1 | 0 | 1 | 33.660
7 | −1 | 0 | 1 | 0 | −1 | 0 | 1 | 0 | 1 | 56.800
8 | 1.682 | 0 | −1.682 | 0 | −2.829 | 0 | 2.829 | 0 | 2.829 | 31.183
9 | 1.682 | 0 | 1.682 | 0 | 2.829 | 0 | 2.829 | 0 | 2.829 | 40.473
10 | −1.682 | 0 | −1.682 | 0 | 2.829 | 0 | 2.829 | 0 | 2.829 | 47.235
11 | −1.682 | 0 | 1.682 | 0 | −2.829 | 0 | 2.829 | 0 | 2.829 | 70.105
12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 42.270
13 | 0 | 0 | 1.682 | 0 | 0 | 0 | 0 | 0 | 2.829 | 55.289
14 | 0 | 0 | −1.682 | 0 | 0 | 0 | 0 | 0 | 2.829 | 39.209
15 | 1 | 0 | 1.682 | 0 | 1.682 | 0 | 1 | 0 | 2.829 | 46.481
16 | −1 | 0 | 1.682 | 0 | −1.682 | 0 | 1 | 0 | 2.829 | 64.097
17 | 1 | 0 | −1.682 | 0 | −1.682 | 0 | 1 | 0 | 2.829 | 34.437
18 | −1 | 0 | −1.682 | 0 | 1.682 | 0 | 1 | 0 | 2.829 | 43.981
19 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 48.810
20 | 0 | 0 | −1 | 0 | 0 | 0 | 0 | 0 | 1 | 39.250
21 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 35.480
22 | −1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 49.060
23 | −1.682 | 0 | −1 | 0 | 1.682 | 0 | 2.829 | 0 | 1 | 48.652
24 | −1.682 | 0 | 0 | 0 | 0 | 0 | 2.829 | 0 | 0 | 53.691
25 | −1.682 | 0 | 1 | 0 | −1.682 | 0 | 2.829 | 0 | 1 | 62.249
26 | 1.682 | 0 | −1 | 0 | −1.682 | 0 | 2.829 | 0 | 1 | 29.848
27 | 1.682 | 0 | 0 | 0 | 0 | 0 | 2.829 | 0 | 0 | 30.849
28 | 1.682 | 0 | 1 | 0 | 1.682 | 0 | 2.829 | 0 | 1 | 35.371


S. Dimitrov et al.

• The quadratic terms:
\[ b_{uu} = \frac{\sum_{i=1}^{20} X'_{iu} Y_i}{\sum_{i=1}^{20} (X'_{iu})^2}, \quad u = 1 \ldots 3 \]

After evaluating the significance of the obtained coefficients according to Student's criterion and passing to the natural values of the studied factors, the following mathematical model was obtained:

\[ Y = 42.27 - 6.79A - 3.98V + 4.78T - 1.2AV + 1.76T^2 \]

Table 5. Central composite experiment 2³ at X3 = 0 [4]

No. | X1 | X2 | X3 | X1X2 | X1X3 | X2X3 | X1² | X2² | X3² | Y
1 | −1 | −1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 11.920
2 | 1 | −1 | 0 | −1 | 0 | 0 | 1 | 1 | 0 | 27.480
3 | −1 | 1 | 0 | −1 | 0 | 0 | 1 | 1 | 0 | 5.620
4 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 15.140
5 | −1 | −1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 11.920
6 | 1 | −1 | 0 | −1 | 0 | 0 | 1 | 1 | 0 | 27.480
7 | −1 | 1 | 0 | −1 | 0 | 0 | 1 | 1 | 0 | 5.620
8 | 1.682 | −1.682 | 0 | −2.829 | 0 | 0 | 2.829 | 2.829 | 0 | 43.055
9 | 1.682 | 1.682 | 0 | 2.829 | 0 | 0 | 2.829 | 2.829 | 0 | 18.835
10 | −1.682 | −1.682 | 0 | 2.829 | 0 | 0 | 2.829 | 2.829 | 0 | 13.419
11 | −1.682 | 1.682 | 0 | −2.829 | 0 | 0 | 2.829 | 2.829 | 0 | 6.287
12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12.110
13 | 0 | 1.682 | 0 | 0 | 0 | 0 | 0 | 2.829 | 0 | 4.272
14 | 0 | −1.682 | 0 | 0 | 0 | 0 | 0 | 2.829 | 0 | 19.948
15 | 1 | 1.682 | 0 | 1.682 | 0 | 0 | 1 | 2.829 | 0 | 10.932
16 | −1 | 1.682 | 0 | −1.682 | 0 | 0 | 1 | 2.829 | 0 | 3.472
17 | 1 | −1.682 | 0 | −1.682 | 0 | 0 | 1 | 2.829 | 0 | 31.688
18 | −1 | −1.682 | 0 | 1.682 | 0 | 0 | 1 | 2.829 | 0 | 14.068
19 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7.450
20 | 0 | −1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 16.770
21 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 21.310
22 | −1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8.770
23 | −1.682 | −1 | 0 | 1.682 | 0 | 0 | 2.829 | 1 | 0 | 11.973
24 | −1.682 | 0 | 0 | 0 | 0 | 0 | 2.829 | 0 | 0 | 9.853
25 | −1.682 | 1 | 0 | −1.682 | 0 | 0 | 2.829 | 1 | 0 | 7.733
26 | 1.682 | −1 | 0 | −1.682 | 0 | 0 | 2.829 | 1 | 0 | 38.145
27 | 1.682 | 0 | 0 | 0 | 0 | 0 | 2.829 | 0 | 0 | 30.945
28 | 1.682 | 1 | 0 | 1.682 | 0 | 0 | 2.829 | 1 | 0 | 23.745
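Assuming the factors are expressed in coded units (an assumption made here for illustration; the paper passes to natural values of A, V and T), the fitted model can be evaluated directly. At the center of the plan it returns the free term, 42.27, matching the center-point rows of Tables 3 and 4:

```python
def y_model(a, v, t):
    """Second-order model from the text: Y = f(A, V, T), coded units assumed."""
    return 42.27 - 6.79 * a - 3.98 * v + 4.78 * t - 1.2 * a * v + 1.76 * t ** 2

print(y_model(0, 0, 0))  # 42.27, the free term, at the center of the plan
```

Evaluating such a function on a grid of two factors with the third fixed at zero reproduces the three-dimensional sections shown in Figs. 3, 4 and 5.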


The coefficients of the mathematical model obtained are given in Table 6.

Table 6. Mathematical model coefficients [4]

Coefficient | b0 | b1 | b2 | b3 | b12 | b13 | b23 | b11 | b22 | b33
Defined value | 92.72 | 0.28 | 2.04 | −1.04 | 1.75 | 1.25 | 0.50 | 8.94 | 6.64 | 2.22

3.4 Verification of Adequacy of the Model Obtained

For the mathematical model obtained, an adequacy check was performed using Fisher's F-criterion [11, 12]. For this purpose, the following ratio is used:

\[ F = \frac{S_{ad}^2}{S_Y^2} \]

where:
– S_Y² is the average dispersion;
– \( S_{ad}^2 = \frac{\sum_{i=1}^{N} (Y_i - Y_{iM})^2}{N - l} \) is the dispersion of adequacy;
– Y_i is the value of the parameter at the i-th trial;
– Y_iM is the value of the parameter at the i-th trial determined with the help of the created model;
– l is the number of significant coefficients in the mathematical model, l = 6.

The obtained value is F = 1.94. Taking into account the conditions of this experiment, the table value determined from the references [1, 3] is F_T = 3.69. Since 1.94 < 3.69, i.e. F < F_T, the hypothesis of model adequacy is accepted. Therefore, the mathematical model obtained as a result of the experiment is adequate.
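The adequacy check can be sketched as follows; the function and the toy numbers are illustrative (the average dispersion S_Y² is simply passed in as `s2_y`):

```python
def adequacy_f(y_obs, y_pred, n_signif, s2_y):
    """Fisher criterion F = S_ad^2 / S_Y^2, where
    S_ad^2 = sum((Y_i - Y_iM)^2) / (N - l)."""
    n = len(y_obs)
    ss = sum((yo - ym) ** 2 for yo, ym in zip(y_obs, y_pred))
    s2_ad = ss / (n - n_signif)
    return s2_ad / s2_y

# toy numbers (not the experimental data): residuals of +/-0.5 everywhere
f = adequacy_f([40.0, 42.0, 44.0, 46.0, 48.0, 50.0, 52.0, 54.0],
               [40.5, 41.5, 44.5, 45.5, 48.5, 49.5, 52.5, 53.5],
               n_signif=6, s2_y=1.0)
print(f)         # 1.0
print(f < 3.69)  # True: the model would be judged adequate
```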

3.5 Graphic Interpretation of the Obtained Model

The graphical interpretation was performed using the three-dimensional section method. The following three sections are implemented for the obtained mathematical model:
• For the coded value of the first factor, X1 = 0. The natural value of the slope angle A of the magazine collector is 25°. The graph is shown in Fig. 3;

Fig. 3. Graphical interpretation of the mathematical model obtained for A = 25°

• For the coded value of the second factor, X2 = 0. The natural value of the speed V of the linear conveyor is 25 m/min. The graph is shown in Fig. 4;

Fig. 4. Graphical interpretation of the mathematical model obtained for V = 25 m/min


• For the coded value of the third factor, X3 = 0. The natural value of the time T for screwing the caps is 7 s. The graph is shown in Fig. 5.
The graphs shown in Figs. 3, 4 and 5 can be used to evaluate the proportion of successfully closed containers in different modes of setting up and operating an automated screw-closure system.

Fig. 5. Graphical interpretation of the mathematical model obtained for T = 7 s

4 Conclusions

On the basis of the research and the analysis of its results, an optimal combination of the main factors influencing the process of automated closure with screw caps can be determined. It can be used when:
• designing new systems for the automated closure of screw caps;
• setting up existing automated screw-cap closure systems to increase their performance.
The developed test rig allows the study of individual stages of the process of automated closure of screw caps, which can be useful in:
• developing new closing-head constructions;
• developing new constructions of the magazine collector;
• controlling the effort required to open already closed containers.

References
1. Caldwell, D.G. (ed.): Robotics and Automation in the Food Industry: Current and Future Technologies. Woodhead Publishing, Cambridge (2013)
2. Ahmadzadeh, H., Masehian, E., Asdapour, M.: Modular robotic systems: characteristics and applications. J. Intell. Rob. Syst. 81, 317–357 (2015)


3. Schütz, D., Wahl, F.M. (eds.): Robotic Systems for Handling and Assembly. Springer Tracts in Advanced Robotics (STAR), vol. 67. Springer (2011)
4. Dimitrov, S.: Investigation of the Process of Automated Closure of Screw Bottles. Ph.D. thesis, Sofia, Bulgaria (2017)
5. Groover, M.: Automation, Production Systems and Computer-Integrated Manufacturing, 3rd edn. Prentice Hall Press (2007)
6. Kyrylovych, V., Morgunov, R., Dimitrov, L., Toropova, O., Kumova, S.: Information models of flexible manufacturing cell components and related drawing up features. Recent 18(51), 22–32 (2017)
7. Salonitis, K.: Modular design for increasing assembly automation. CIRP Ann.-Manuf. Technol. 63(1), 189–192 (2014)
8. Programming Guideline for S7-1200/S7-1500. http://www1.siemens.cz/ad/current/content/data_files/automatizacni_systemy/mikrosystemy/simatic_s71200/programming-guideline-for-s71200-s71500_2014-09_en.pdf. Accessed 04 Nov 2018
9. Ghalyan, I.F.J.: Force-Controlled Robotic Assembly Processes of Rigid and Flexible Objects: Methodologies and Applications. Springer International Publishing, Switzerland (2016)
10. Yuan, M., Lin, Y.: Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B 68(1), 49–67 (2006)
11. Dean, A., Voss, D., Draguljić, D.: Design and Analysis of Experiments, 2nd edn. Springer International Publishing (2017)
12. Malina, W.: On an extended Fisher criterion for feature selection. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-3(5), 611–614 (1981)

About the Concept of Information Support System for Innovative Economy in the Republic of Kazakhstan

Irbulat Utepbergenov1, Leonid Bobrov2, Irina Medyankina2, Zinaida Rodionova2, and Shara Toibaeva1

1 Institute of Information and Computational Technologies, 125 Pushkin Street, Almaty 050010, Republic of Kazakhstan
[email protected], [email protected]
2 Novosibirsk State University of Economics and Management, 56 Kamenskaya Street, Novosibirsk 630099, Russia
[email protected], {i.p.medyankina,z.v.rodionova}@edu.nsuem.ru

Abstract. The development of mechanisms ensuring the functioning of a unified data-processing environment in the Republic of Kazakhstan is a priority for innovation development. This is especially important for multidisciplinary innovation projects, where close cooperation between the ICT and innovation activity subject areas is necessary. Here, information resources and technologies play a decisive role in the development of the basic innovation infrastructure. They also allow innovators to concentrate on solving the most important tasks without duplicating tasks solved by others earlier. A brief analysis of the current situation in the field of information support for innovative development is given, covering the terminological, theoretical, methodological and informational aspects. The goal and objectives of an information support system for innovative development are formulated. The basic principles underlying the creation of an information support system for the innovation economy in the Republic of Kazakhstan are described. The article also gives a brief description of the system architecture as a single entry point into the global information space through the created information portal. This portal contains meta-information both on Kazakhstan information resources and on the resources of other countries. The mathematical formulation of the problem of forming a polythematic innovation cluster database is given; the solution of this problem allows total costs to be minimized.

Keywords: Decision making systems · Information support · Methodology · Architecture · Math modeling · Application

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 515–526, 2019. https://doi.org/10.1007/978-3-030-12072-6_42




I. Utepbergenov et al.

1 Introduction

The development of the innovative economy and the strengthening of the innovation support infrastructure in order to form new industries in Kazakhstan require a scientific base that takes into account the world experience and knowledge reflected in terabytes of diverse information [1, 2]. In this connection, a team from the Institute of Information and Computing Technologies of the Kazakhstan Ministry of Education and Science is currently working on a project aimed at creating a unified republican information support system for innovation activities. The project includes studying the problems of information support for innovations taking into account the specificity of the individual stages of their life cycle, developing theoretical and methodological approaches to solving these problems, creating appropriate working tools, and formulating specific project proposals for improving the regional information support system. This paper presents the main principles of the concept of an information support system for innovation activities in the Republic of Kazakhstan. The building of this system takes into account the specificity of the individual stages of the innovation life cycle.

2 Innovative Development and Digital Economy of Kazakhstan

The subheadings of the Global Innovation Index (GII), published annually by Cornell University, INSEAD and WIPO, clearly demonstrate the importance of innovation and the attention paid to its various aspects:
– The Human Factor in Innovation [3];
– Effective Innovation Policies for Development [4];
– Winning with Global Innovation [5];
– Innovation Feeding the World [6].

The annual ratings published in these indices reflect the dynamics of the innovative development of different countries and the effectiveness of their efforts. As an example, see the illustration "Movement in the top 10 of the GII" [6]. The position of Kazakhstan in the Global Innovation Index ratings is illustrated in Table 1, which, for comparison, also reflects the position of Russia.

Table 1. Kazakhstan and Russia in GII ratings

Country | 2014 | 2015 | 2016 | 2017
Kazakhstan | 79 | 82 | 75 | 78
Russia | 49 | 48 | 43 | 45


To intensify innovation development in Kazakhstan, the State Program of Industrial and Innovative Development for 2015–2019 [7] was adopted. It is focused on solving a wide range of problems and, in particular, provides for:
– cooperation with the United Nations Industrial Development Organization, the International Bank for Reconstruction and Development and other international institutions;
– the use of advanced international practices in order to enhance the competitiveness of the national economy;
– the development of the innovation infrastructure, increasing the level of business development, including the overall quality of the business environment, and stimulating cluster development;
– measures for the transfer of relevant technologies for priority sectors and the further qualitative development of the country's own innovation system;
– methodological and informational support to all participants of innovation activity and the cluster process;
– strengthening grant funding of research activities.
The innovative research development policy should be based on the achievements of historically established scientific schools, existing scientific experience and the use of accumulated information and knowledge [2]. The development of digital technologies is recognized as one of the ways to diversify the national economy and reorient it from raw materials to an industrial service model. On December 12, 2017, the Government of the Republic of Kazakhstan approved the State Program "Digital Kazakhstan" by Decree No. 827. According to this program, it is necessary to take measures to improve the quality of the existing infrastructure of innovation development, and the key direction in ICT industry development is ensuring the growth of the information technology service share. The activities of the program are planned to be implemented in five directions, one of which is "Creating an Innovation Ecosystem".
Thus, innovation and digital development are considered in organic unity, which creates favorable prerequisites for the implementation of projects focused on information support of innovation activities. One such project is the project of the Institute of Information and Computing Technologies, carried out by the authors under a grant from the Ministry of Education and Science of the Republic of Kazakhstan in collaboration with Russian specialists.

3 Information Support of Innovative Development: Analysis of the Current Situation

The problems of information support of scientific and technical development have become aggravated since the second half of the last century due to the information explosion, characterized by an exponential increase in the volume of published scientific and technical information. During this time, great experience has been accumulated in overcoming these problems. An example of the successful development and implementation of a large regional system of scientific and technical information


can be an automated system of scientific and technical information focused on information support of scientific research. This system covers scientific organizations of the Siberian Branch of the Academy of Sciences in Novosibirsk, Tomsk, Omsk, Kemerovo, Krasnoyarsk, Irkutsk, Ulan-Ude and Yakutsk. The scientific ideas underlying the creation of this system are briefly presented in the monograph [8]. This century, with its rapid development of information and communication technologies, has not dampened information support problems but aggravated them, due to the rapid growth of information stored in digital form, which is clearly illustrated by the results of IDC's research [9]. In these conditions, the mechanistic transfer of past developments to today's conditions is inexpedient: socio-economic, technical, informational and communicative changes force us to take a fresh look at many aspects of the information activities of libraries in market conditions. Thus, there is a need for creative reworking of already known scientific principles to adapt them to today's specific conditions and problems. The project of creating a system of information support for the processes of innovation economy development in Kazakhstan is a logical continuation of the previous works. It develops the ideas of the authors' group as applied to a new problem area: information support for innovations as a specific kind of current activity.

3.1 Terminological Aspect

There is a wide variety of interpretations of the concepts "innovative economy", "innovative activity" and "innovation". For example, a search for the meaning of the term "innovation" in on-line dictionaries gives dozens of different definitions. In [10], it is noted that there are hundreds of definitions in the literature, and 16 authors' interpretations of this concept taken from different monographs and textbooks are given (for a wider list of authors' definitions, see http://reffire.ru/tema749229text.html). Note that this term is also ubiquitously used in various speeches, publications, plans, etc. One can get the impression that each speaker puts their own meaning into it. Many publications consider terminological discussions and attempts to find a unified, scientifically based and comprehensive interpretation of the term "innovation" so that practical tasks can be understood unambiguously. At the same time, information support of innovation is considered (in cases where it is taken into account at all, as an important and integral component of the innovation infrastructure) only in a general form, without mentioning the specifics of each life cycle stage. The implementation of the project is thus associated with the need to solve a number of non-trivial scientific problems, including those of a fundamental nature.

3.2 Theoretical and Methodological Aspects

The reason for the brief terminological review above is the importance of a clear answer to the question of what the problem of information support for innovations includes and what kind of information resources are necessary for its successful solution.


At the moment, despite the more than decade-long history of innovation development in Kazakhstan, there is no single generally accepted concept that would reflect a scientifically based system of views determining the main directions, conditions and procedure for solving the problems of creating and using distributed information support systems for innovations as a specific kind of activity. At the same time, individual attempts to create such a concept and to solve the methodological issues of creating regional information support systems for innovations (see, for example, the Concept of scientific and information support for programs and projects of CIS countries in the innovation sphere) indicate the need for further in-depth research in this area. The situation is complicated by several important circumstances:
– analysis of publications, strategies for the innovative development of regions and other documents of a regulatory and methodological nature indicates a clear underestimation of the importance of information support for innovations, as well as insufficient use of the entire diversity of world information resources at each stage of innovation activity;
– numerous materials reflect the diversity of opinions about information support for innovation, which range from the extremely optimistic, close to the well-known slogan "Yandex: you'll find everything!", to statements about the sufficiency of using only patent information resources;
– information resources potentially useful at the regional level are dispersed across a multitude of organizations that have different departmental affiliations and are not at all aimed at making these resources available to all interested parties;
– even the availability of information resources does not guarantee the quality of information support for the various tasks arising in the process of innovation, because there is practically no information service infrastructure, and innovative workers are not willing to spend up to one third of their working time on the regular search, selection and analysis of information according to their profile;
– serious financial investments are needed to create regional information support systems for each life cycle stage of innovations, ensuring the regular provision of relevant information on a variety of innovation profiles.

3.3 Informational Aspect

Modern innovation activity takes place in conditions of sustainable development and the building up of digital bibliographic, factual and full-text collections, created in collaboration with bodies of scientific and technical information, federal and industry libraries, universities, enterprises, associations and consortiums of the innovation sphere, academic and industry research institutes, scientific and professional societies and unions, business information integrators, etc. At the same time, it must be admitted that representatives of small and medium-sized innovative businesses often solve the problems of creating and selling their products without taking into account existing, publicly available developments. This results in wasted intellectual effort and precious material and financial resources.


4 Goal and Objectives of the Information Support System for Innovations

The goal of this project is to create an informational consulting environment to support innovation activities in Kazakhstan through information consulting and the provision of meta-information about world information resources, in line with the objectives of expanded innovative production. To achieve this goal, it is necessary to meet the following objectives:
– to study the specifics of innovators' information needs at each stage of the innovation life cycle, from idea generation to the withdrawal of a product from the market;
– to form a meta-information base (knowledge base) on existing world information resources relevant to the tasks of each innovation life cycle stage (idea generation, R&D, experimental development, production of an experimental batch, market launch, growth, saturation, decline, withdrawal from the market);
– to develop and commission a specialized information retrieval system designed for the situational orientation of innovative organizations in the global information space with the goal of effective information support for innovation activities.
Solving these problems involves the generalization and development of Kazakhstan, Russian and European experience in providing information support for science and education in relation to innovation as a modern activity with its own specific features.

5 General System Requirements

The system should provide a search for information resources in the meta-information database by:
– the description of a practical situation (problem, task), by finding it directly or reducing it to one of the model situations represented in the multilevel system classifier;
– the specified stage of the innovation life cycle;
– the thematic classifier of information resources;
– an arbitrary query in terms of keywords, with indication of the required search fields.
The content of the meta-information DB should meet the following requirements:
– a large number of themes;
– the provision of information sources of different kinds;
– multinational content;
– the international nature of information resources;
– the use of a single metadata model for all sources of information;
– the provision of detailed information about each information resource, making it possible to evaluate for which stage of the innovation life cycle it is advisable to use the resource, its content and record formats, vendors and access conditions, etc.
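As an illustration only (the field names below are assumptions for the sketch, not the project's actual schema), a single metadata model covering these requirements could look like this:

```python
# Hypothetical unified metadata record for one information resource
resource = {
    "title": "Example patent database",
    "kind": "patent information",           # scientific, patent, business, ...
    "country": "KZ",                        # multinational content
    "lifecycle_stages": ["R&D", "market launch"],
    "subjects": ["mechanical engineering"], # thematic classifier entries
    "record_format": "bibliographic",
    "vendor": "example vendor",
    "access": "subscription",               # open / subscription / trial
}

def usable_at(res, stage):
    """Check whether a resource is recommended for a given life cycle stage."""
    return stage in res["lifecycle_stages"]

print(usable_at(resource, "R&D"))  # True
```

Keeping one record shape for every source kind is what makes the classifier-based and keyword-based search paths described below possible over heterogeneous resources.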


6 Basic Principles of Creating the System

The creation of the system of information support for the innovation economy in Kazakhstan is based on the following principles:
(1) The principle of the dominance of the system approach as a methodological basis for creating an information infrastructure to support innovation. The infrastructure must be analyzed, first of all, as a set of interrelated elements that form a complex system with feedback. At the input of the system there are various information resources serving as a source for forming the meta-information base (knowledge base); the search results (the output) are given to the user to select the information sources that best meet the task. The use of both Kazakhstan information resources (as internal resources) and world resources in the form of various types of databases (as external resources) is envisaged.
(2) The orientation principle, which means orientation towards the real information needs of innovative enterprises and clusters. This principle involves both identifying needs through questionnaires, interviews, etc., and maintaining constant user feedback.
(3) The principle of considering the specifics of the individual life cycle stages of innovation and the resulting information support features. When forming the meta-information base, various kinds of information sources need to be covered: for example, scientific, technical and patent information bases (the R&D stage), business information bases (market entry), etc.
(4) The principle of using previous experience. To prevent unproductive expenditure of resources when creating the system, it is advisable to take into account the positive and negative experiences of developed countries, both by analyzing relevant publications and by obtaining practical information.
(5) The principle of reasonable minimization of effort, whereby the simplest and most economical models and methods that can obtain the required result are used.
(6) The principle of efficiency. A rational relationship must be observed between the costs of creating the system and the target effects, including the final results that affect the activities of enterprises in the innovation sphere.
(7) The principle of one-time processing of information and its reuse. Following this principle eliminates the re-entry of information and its primary processing, but does not close off the possibility of adding existing data.
(8) The principle of partnership. When creating the system, one should take into account the strengths and competitive opportunities of the Republic, not focusing only on the use of its internal potential. In each situation, it is advisable to consider the possibilities of creating partner communities and integrating partners' resources.
(9) The principle of focusing on the external environment. Adherence to this principle makes it possible to avoid excessive concentration on internal problems, as well as to purposefully monitor the external environment and respond in a timely manner to the emergence of new information resources.


(10) The modeling principle, which involves modeling real situations and processes using quantitative estimates and relevant mathematical methods, including methods for solving optimization problems.
(11) The principle of continuous development of the system, to ensure resistance to external and internal disturbing influences. Changes to the system should not disrupt its operation. The implementation of this principle requires in-depth analytical pre-project work, including a rational grouping of the tasks to be performed, so that each group can be provided with possible development directions for damping possible disturbing influences.
(12) The standardization (unification) principle. When creating the system, standard, unified and standardized elements, design solutions, application packages, complexes and technologies should be used rationally.
The functioning of this system will inevitably require the implementation of appropriate information security measures [11, 12]. In this regard, when implementing these measures, it is necessary to observe:
(13) The principle of comprehensive use of the entire arsenal of available protection means in the organization at all stages of the information processing cycle, as well as:
(14) The principle of minimum risk and minimum damage, which follows from the impossibility of creating an ideal protection system. Following principles (13) and (14) involves taking into account the specific conditions of the protection object at each stage of processing and using information.

7 System Architecture

Detailing the content, focus and priorities of the work at the individual innovation stages, we present a model of the innovation life cycle in relation to the external information resources necessary for the successful implementation of the work (Fig. 1).


Fig. 1. Model of innovation life cycle in relation to external information resources

The system provides for the creation of a single entry point for navigation in the Kazakhstan and global information space. The entry point is an information portal where meta-information can be found both about Kazakhstan information resources and about the resources of other countries. Turning to a specialized information retrieval system designed for the situational orientation of innovative organizations in the information space, the user can find the necessary resources in two ways:
– through the system of classifiers, indicating the life cycle stage, the required subject, the problem to be solved, etc.;
– by formulating a query in terms of keywords, in the way they are used to when searching the Internet.
When, as a result, the user receives a list of information resources described in detail and selects a specific resource (or resources), they can:
– get access to the resource by clicking on the corresponding hyperlink (if it is an open resource);
– read the terms of access and contact the vendor through the specified contact details to conclude a contract, or use the mediation services of the owner of the meta-information base (if the resource is paid).
In some cases, the user may be given the opportunity to study a demo version or to get trial access to some resources. The system architecture is illustrated in Fig. 2.


Fig. 2. System architecture

Independent work of enterprises in the innovation sphere in terms of organizing information support requires personnel with the necessary level of information competencies. In this connection, the project provides for the development and implementation of appropriate advanced training programs. It is assumed that these programs will be based on the European ICT Qualifications Framework and will include short-term overseas internships.

8 Innovation Clusters: Polythematic Database Formation

The development of innovation cluster forms in Kazakhstan involves using new technologies of interaction between the member organizations of a cluster. When forming the cluster information infrastructure, it is possible to organize a system of distributed information processing that minimizes the total costs of forming and operating the innovation cluster's polythematic database. The mathematical formulation of this problem is given below:

Σ_{n=1}^{N} Σ_{r=1}^{R} Σ_{q: O_q ∈ O″} x_{qnr} h_{qnr} + Σ_{n=1}^{N} Σ_{r=1}^{R} Σ_{q: O_q ∈ O″} (1 − x_{qnr}) p_{nr} h_{qnr} → min   (1)

under the restrictions

Σ_{r=1}^{R} Σ_{q=1}^{Q} x_{qnr} s^{qi}_{nr} ≤ l^n_i,   (2)

Σ_{n=1}^{N} x_{qnr} = 1, q = 1,…,Q, r = 1,…,R,   (3)

About the Concept of Information Support System for Innovative Economy

525

x_{qnr} ∈ {0, 1},  Σ_{r=1}^{R} p_{nr} = 1 (0 ≤ p_{nr} ≤ 1),   (4)

where x_{qnr} = 1 if the participant U_n performs the operation O_q on the array M_r, and x_{qnr} = 0 otherwise;

h_{qnr} = Σ_{i=1}^{I} s^{qi}_{nr},   (5)

s^{qi}_{nr} = V_r t^{qi}_n,   (6)

t^{qi}_n is the amount of the ith resource required to perform the operation O_q in the center U_n (taking into account the characteristics of software and hardware, as well as other factors affecting the real cost of data processing operations in the center U_n); l^n_i is the limit on the total cost of the ith resource in the center U_n; V_r is the number of documents in the thematic array M_r (here r = 1, 2, …, R is the thematic rubric), which is part of the polythematic database; p_{nr} is an indicator characterizing the degree of interest of the center U_n in using the array M_r; O″ is the subset of information processing technological operations performed in a distributed way; N is the number of organizations in the cluster.

As a result of solving this problem of integer linear programming, we find the values x_{qnr} showing which processing of which thematic arrays each of the centers U_n should perform when forming the polythematic database for the innovation cluster. By the conditions of the problem, each of the operations O_q on any array M_r is necessarily executed, and by only one of the centers U_n, i.e. the principle of one-time processing of information is observed. The above task is also one of the possible forms of practical implementation of the principle of partnership.
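For a tiny invented instance, the problem (1)–(4) can even be solved by exhaustive search, which is enough to illustrate the one-time-processing principle. The numbers below (centers, operations, arrays, costs, limits) are all hypothetical, a single resource type (I = 1) is assumed so that h coincides with s, and a realistic instance would be handed to an ILP solver instead.

```python
from itertools import product

# Hypothetical small instance of problem (1)-(6): N centers, Q operations,
# R thematic arrays, one resource type (I = 1). All numbers are invented.
N, Q, R = 2, 2, 2
V = [100, 50]                    # V_r: documents in thematic array M_r
t = [[0.2, 0.3], [0.25, 0.2]]    # t[n][q]: unit resource cost of O_q at U_n
limit = [40.0, 40.0]             # l^n: resource limit of center U_n
p = [[0.5, 0.5], [0.5, 0.5]]     # p[n][r]: interest of U_n in array M_r

def s(n, q, r):                  # resource cost (6): s = V_r * t_n^q
    return V[r] * t[n][q]

best_cost, best_plan = None, None
for choice in product(range(N), repeat=Q * R):
    # constraint (3): each pair (q, r) is processed by exactly one center
    x = {(q, r): choice[q * R + r] for q in range(Q) for r in range(R)}
    # constraint (2): the total load of each center stays within its limit
    load = [sum(s(n, q, r) for (q, r), m in x.items() if m == n)
            for n in range(N)]
    if any(load[n] > limit[n] for n in range(N)):
        continue
    # objective (1): own processing cost plus the discounted cost
    # p_nr * h_qnr for every center that does not process the pair itself
    cost = sum(s(n, q, r) + sum(p[m][r] * s(m, q, r)
                                for m in range(N) if m != n)
               for (q, r), n in x.items())
    if best_cost is None or cost < best_cost:
        best_cost, best_plan = cost, x

print(best_cost, best_plan)
```

Note that in this instance the resource limits make "one center does everything" infeasible, so the optimum necessarily distributes the processing, in the spirit of the partnership principle.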

9 Conclusion

The main principles described above are used as a methodological basis for developing a system of information support for innovation activities in the Republic of Kazakhstan. This system is considered the core of the information infrastructure of innovation support, providing situational orientation of innovative organizations in the global information space to effectively support innovation activities. The creation and approbation of a working prototype of the information system will make it possible to integrate fragmentary experience in the information support of innovations and to introduce possible refinements into the proposed solutions.

Acknowledgments. This work was supported by a grant from the MES RK (project No. AP05134019 “Development of scientific and methodological foundations and applied aspects of building a distributed information support system for innovation activities, considering the specific features of each of the stages of the innovation life cycle”).


Possibilities of Typical Controllers for Low Order Non-linear Non-stationary Plants

Galina Frantsuzova, Vadim Zhmud, and Anatoly Vostrikov

Novosibirsk State Technical University, Karl Marx Ave. 20, 630092 Novosibirsk, Russia
[email protected], [email protected], a.s. [email protected]

Abstract. A possible approach to calculating typical controllers for low-order nonlinear non-stationary plants is presented in this paper. It is assumed that the differential channel is moved to the feedback circuit in the stabilization system. As a result, two control loops are formed. The internal contour contains the proportional and differential components of a typical controller. For a first order plant, the output signal derivative implicitly contains all information about the nonlinear and non-stationary characteristics, as well as about the external uncontrolled perturbations. For this reason, the inner loop control can be interpreted as a variation of the control law based on the localization method. It is proposed to use the basic relations of this method to calculate the controller components in the feedback circuit. As a result, the inner loop dynamics can be subordinated to a linear equation. For second order nonlinear plants, it is proposed to introduce an additional differential component into the typical PID controller and thus consider a PIDD2 controller. In this case, it is also proposed to calculate the inner control loop by means of the localization method. It is shown that after the inner loop is stabilized by means of the differential components, the calculation of both the PID and PIDD2 controllers can be carried out using the modal approach and the formation of the desired roots. Thus, we obtain a system invariant to the action of external perturbations for first and second order nonlinear non-stationary plants. Numerical simulation results in MATLAB illustrate the basic properties of such systems.

Keywords: Typical controllers · Nonlinear plant · Perturbations · Localization method · Invariant system

1 Introduction

Simple typical controllers (P, PI, PID) are still widely used in industry to solve many practical problems due to their simplicity and reliability, their well-known properties and operating principle, as well as their low cost [1]. To date, a large number of recommendations have been proposed for tuning [2–4], calculating [1, 5–9] and optimizing [10–12] typical controllers for low-order linear plants. However, it is often necessary to refine the controllers' parameters and carry out manual adjustment after the calculation.

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 527–539, 2019. https://doi.org/10.1007/978-3-030-12072-6_43

528

G. Frantsuzova et al.

Note that the typical PID controller allows us to completely solve the stabilization problem for first-order plants [3, 13]. Its use for higher order plants leads to a forced weakening of the requirements on the processes in the automatic system. Typical controllers do not always provide the required quality if some external factors (change in load, ambient temperature, etc.) have a significant effect or the plant parameters change over time. Various techniques are used to expand the scope of traditional PID controllers. First of all, such modifications involve transforming the controller in order to obtain a system that is invariant to external perturbations. In this case, additional integration or differentiation channels can be added to the PID controller [14–16]. In some situations, it is possible to use fractional order controllers [17], controllers with weighted error coefficients, etc. Thus, interest in this issue continues unabated despite the huge number of publications devoted to various methods of calculating typical controllers. This is due to the fact that most variants of controller synthesis are focused on a specific plant type.

The localization method [18, 19] is an effective approach to controller synthesis for a class of nonlinear plants with external perturbations. The resulting automatic system has the invariance property both in relation to the nonlinear plant characteristics and to the action of external disturbing factors. A feature of this method is the use of the highest derivative of the plant output in the controller. This derivative implicitly contains all information about the nonlinear and non-stationary characteristics, as well as the external uncontrolled perturbations. Since standard industrial PID controllers contain a differential component, they can be considered as a version of the controller based on the localization method for first-order plants [20].

The possibility of using the localization method to calculate controllers for first and second order nonlinear plants is discussed in this paper. It is proposed to add an additional differentiation channel to the typical PID controller in order to effectively control a second-order nonlinear plant. Note that here we consider real differentiation, i.e. the derivatives are obtained using a special differentiator with low inertia. In order to simplify the operation of the stabilization system, it is customary to transfer the differential component of the PID controller to the feedback channel. This technique can significantly reduce the control throws under a step input signal [13, 16]. The transfer of some controller components to the feedback leads to the appearance of two control loops in the system. Accordingly, it is proposed to calculate the controller coefficients in two stages. At the first stage, the processes in the inner contour are stabilized by means of a controller based on the localization method [18, 19]. As a result, the whole system becomes linear and the calculation of the external control loop is carried out using the modal method [16].

The paper is organized as follows. Section 2 presents the problem statement; Sect. 3 contains recommendations on the controller calculation for low-order plants, taking into account the practical implementation of the differential components; Sect. 4 presents the simulation results; Sect. 5 contains conclusions.

Possibilities of Typical Controllers…

529

2 Problem Statement

Let us consider the possibility of control by means of typical controllers for first and second order nonlinear non-stationary plants. The model of such plants is represented by the equation

y^(n) = f(t, y^(n−1), y) + b(t, y^(n−1), y) u,   (1)

where y ∈ R^1 is the plant output variable, u ∈ R^1 is the control variable, and n = 1, 2; the functions f(·) and b(·) can vary within a certain range depending on the system operating conditions: |f(·)| ≤ fmax, 0 < bmin ≤ |b(·)| ≤ bmax. The dependence of these functions on time reflects the effect of perturbations on the plant.

The aim is to stabilize the system at the setpoint v = const by using u. This is equivalent to providing lim_{t→∞} [y(t) − v] ≤ Δ0 with the required quality indicators. These requirements are specified in the form of estimates for the transient time, the overshoot and the permissible static error.

Next, we discuss the options for using a typical controller with its differential component in the feedback channel.

3 Typical Controllers Synthesis

3.1 PD-Controller for a First Order Plant

Let us consider the possibility of stabilizing the first order plant (1) by means of a typical PD controller with the transfer function

W_PD(s) = kp + kD s = kD (s + c1).   (2)

As noted, this controller is placed in the feedback channel; the corresponding system diagram is shown in Fig. 1.

Fig. 1. System with PD controller.

In accordance with (2), the control law is written in the form

u = kD [v − c1 y − ẏ],


or, introducing the designation F(y, v) = v − c1 y, we can write

u = kD [F(y, v) − ẏ].

As can be seen, this form of the controller corresponds to the localization method [18–20]. Choosing the coefficient kD in accordance with the method's recommendation, from the relation bmin kD ≈ 20…100, allows us to provide the required properties of the system, whose dynamics are given by the linear equation

ẏ = F(y, v) = v − c1 y.   (3)

Moreover, the system dynamics do not depend on external perturbations or changes in the plant parameters. Thus, a system with such a PD controller becomes invariant with respect to the plant's varying parameters and external perturbations. The calculation of the PD controller consists of choosing the differential component coefficient kD and forming the linear Eq. (3) according to the quality requirements on the processes using the modal approach. The parameter c1 in Eq. (3) is the coefficient of the proportional component of the controller.

3.2 PID-Controller for a First Order Plant

Now let us discuss the use of the PID controller in systems with a first-order plant (1). We assume that the integral component is in the direct channel and the proportional-differential one is in the feedback circuit (Fig. 2). This scheme corresponds to the following controller:

W_PID(s) = kp + ki/s + kD s = kD [c0/s + (s + c1)].   (4)

As can be seen from Fig. 2, this system consists of two loops. The inner contour is the scheme of Fig. 1, so it can be calculated based on the localization method as described above.

Fig. 2. System with PID controller and a first-order plant.

As a result, the inner contour dynamics will correspond to the linear Eq. (3). This allows us, instead of the two-loop system (Fig. 2), to consider an equivalent linear scheme presented in Fig. 3.
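This reduction of the inner contour can be checked numerically. The sketch below closes the inner loop around an invented first-order plant (f = sin y + 0.5 sin t, b = 1) with the derivative obtained through the second-order differentiating filter discussed later in Sect. 3.4, and, for simplicity, a constant loop input v; all gains and parameters are illustrative assumptions, not values from the paper.

```python
import math

# Minimal sketch of the inner loop reduced to Eq. (3).
# Plant: y' = f(t, y) + b*u with invented f = sin(y) + 0.5*sin(t), b = 1.
b, kD, c1, v = 1.0, 20.0, 2.0, 1.0    # bmin*kD = 20, per the method
mu, d = 0.02, 0.5                      # differentiating filter parameters

y = yf = z = 0.0                       # plant output, filter state, z ~ y'
dt, steps = 1e-4, 100_000              # 10 s of explicit-Euler simulation
for k in range(steps):
    t = k * dt
    u = kD * (v - c1 * y - z)          # u = kD[F(y, v) - y'] with filtered y'
    dy = math.sin(y) + 0.5 * math.sin(t) + b * u
    dz = (y - yf - 2 * d * mu * z) / mu**2   # filter: z estimates y'
    y += dt * dy
    yf += dt * z
    z += dt * dz

# the slow motions follow y' = v - c1*y, whose equilibrium is v/c1 = 0.5,
# regardless of the nonlinear, time-varying f(t, y)
print(round(y, 2))
```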


Fig. 3. Scheme of the equivalent system with PID controller.

At the second stage, we use known linear control theory methods and form the desired distribution of roots in the system of Fig. 3. The unknown controller coefficients c1 and c0 are calculated according to the modal approach. Note that such a controller does not depend on the plant model and ensures the invariance of the system properties with respect to external uncontrolled perturbations: they are suppressed by means of the coefficient kD in the inner contour of the system (Fig. 2).

3.3 PIDD2-Controller for a Second Order Plant

Let us consider the second order plant (1) with n = 2. In order to obtain a more effective controller, we recommend adding a double differentiation channel in the feedback (Fig. 4).

Fig. 4. System with PIDD2 controller.

As a result, we obtain a PIDD2 controller with a transfer function of the form

W_PIDD2(s) = kp + ki/s + kD1 s + kD2 s² = kD2 [c0/s + (s² + c2 s + c1)].   (5)

It can be seen that in this case the inner contour (Fig. 4) also corresponds to the structure of systems based on the localization method. Therefore, the coefficient kD2 can be calculated from the similar relation bmin kD2 ≈ 20…100. In this case, it is possible to suppress the influence of the perturbations and to subordinate the contour dynamics to the following second-order equation:

ÿ = r − c1 y − c2 ẏ.   (6)


It is clear that the integral component at the system input allows us to provide zero static error. As a result, we obtain an equivalent linear system similar to that shown in Fig. 3. It is calculated by means of the modal approach, forming the desired distribution of the system roots [9, 16].

3.4 Controllers Implementation

The practical implementation of the control laws (2), (4) and (5) assumes the use of a special differentiating device in order to filter high-frequency noise. It is called the differentiating filter [16, 19] and is described by a model of the form

Wf(s) = 1 / (μ²s² + 2dμs + 1),   (7)

where μ is a small parameter that determines the inertia of the processes in the filter, and d is the damping factor. The numerical value of μ should be chosen so that the transients in the filter are an order of magnitude faster than those in the system. The damping coefficient is chosen from the condition of absence of oscillations, i.e., as a rule, in the range d ≈ 0.5…0.7.

The practical implementation of the controller (4) using the device (7) is as follows:

W_PID(s) = kD [c0/s + (s + c1)/(μ²s² + 2dμs + 1)],   (8)

and the controller (5) takes the form

W_PIDD2(s) = kD2 [c0/s + (s² + c2 s + c1)/(μ²s² + 2dμs + 1)].   (9)

As an example, a block diagram of the system with the controller (9) is shown in Fig. 5 [16].

Fig. 5. System with the PIDD2 controller and filter.

Note that it is necessary to follow the recommendations of the localization method and increase the order of the device (7) if large measurement noise is present. Due to the use of the filter, "fast" processes are superimposed on the "slow" basic one; this is a feature of all systems based on the localization method. If the order of the differentiating device is above two, the fast processes can become unstable, so their stability must be checked and the system corrected at the design stage.
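As an illustration of how (7) acts as a real differentiator, the sketch below feeds an invented test signal y(t) = sin t through the differentiating branch s/(μ²s² + 2dμs + 1) of (8); after the fast transient decays, the filter state z tracks ẏ = cos t. The signal, step size and horizon are assumptions for the illustration.

```python
import math

# Sketch of the differentiating filter (7) used for real differentiation.
mu, d = 0.01, 0.5
yf = z = 0.0                # filter output yf and its derivative z ~ y'
dt, steps = 1e-4, 50_000    # 5 s of explicit-Euler simulation
for k in range(steps):
    y = math.sin(k * dt)                          # input signal
    dz = (y - yf - 2 * d * mu * z) / mu**2        # filter dynamics
    yf += dt * z
    z += dt * dz

print(round(z, 3), round(math.cos(5.0), 3))       # estimate vs true derivative
```

The residual error is of order μ times the signal frequency, which is why μ must make the filter an order of magnitude faster than the system.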

4 Simulation Results

The following examples illustrate the possibilities of closed-loop systems with the different types of controllers. All simulations were carried out in MATLAB Simulink.

4.1 System with PID-Controller and First Order Plant

First, let us consider the plant with the following model:

ẏ = a1(t) y + a2(t) e^y + b u + M(t),

where −1 ≤ a1(t) ≤ 0, 0 ≤ a2(t) ≤ 2, b = 0.5, |M(t)| ≤ 10. The process requirements are tn ≤ 6 s and no static error. The desired characteristic equation is

D(s) = s² + 3.5s + 3 = 0.

The calculated controller and filter parameters are c0 = 3, c1 = 3.5, kD = 30, μ = 0.01 and d = 0.5. The desired transient process of the system is shown in Fig. 6. The processes with variable plant parameters and external perturbation coincide exactly with the desired one; therefore, these graphs are not shown separately.


Fig. 6. Transient process of the system.
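As a quick cross-check of these numbers (not part of the original simulation): once the inner loop is linearized, the equivalent scheme of Fig. 3 reduces, up to the fast filter dynamics, to the transfer function c0/(s² + c1 s + c0). With the values above this is exactly D(s) = s² + 3.5s + 3 = (s + 1.5)(s + 2), so the step response should settle to 1 without overshoot well within tn = 6 s. A minimal Python sketch (an assumption here, since the paper uses MATLAB Simulink):

```python
# Equivalent linear loop of Fig. 3 for Sect. 4.1: c0/(s^2 + c1*s + c0).
c0, c1, v = 3.0, 3.5, 1.0
y, dy = 0.0, 0.0
dt, steps = 1e-3, 10_000          # 10 s of explicit-Euler simulation
ys = []
for _ in range(steps):
    ddy = c0 * v - c1 * dy - c0 * y
    y += dt * dy
    dy += dt * ddy
    ys.append(y)

print(round(ys[-1], 3))           # steady-state output
```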

Figure 7 illustrates control in the system when the plant parameters are constant and there is no external perturbation. The effect of a jump perturbation on the control is shown in Fig. 8.

Fig. 7. Graph of the control change.

Fig. 8. Illustration of the perturbation action on the control.

Graphs of the change of the parameter a1(t) and the corresponding control are shown in Figs. 9 and 10, respectively.

Fig. 9. Illustration of the non-stationary parameter change.


Fig. 10. Control at the plant parameter change.

It should be noted that the perturbation M(t) and the non-stationary parameter a1(t) do not affect the system output: the transient process is unchanged and has the form shown in Fig. 6.

4.2 System with PIDD2-Controller and Second Order Plant

Now we consider the properties of the system with the following second-order plant:

ÿ = a1(t) ẏ y + a2(t) y² + b(t) u + M(t),

where −2 ≤ a1(t) ≤ −1, |a2(t)| ≤ 3, 2 ≤ b(t) ≤ 5, |M(t)| ≤ 10. The transient process requirements are a process time tn ≤ 6 s, no overshoot and no static error. The desired characteristic equation is

D(s) = s³ + 5.2s² + 9s + 5.1 = 0.

The controller and filter parameters are c0 = 5.1, c1 = 9, c2 = 5.2, kD2 = 30, μ = 0.01 and d = 0.5. The desired transient process of the system is presented in Fig. 11. As in the previous case, the processes with variable plant parameters and external perturbation coincide exactly with the desired one.


Fig. 11. Transient process of the system with the second order plant.
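The same cross-check applies here (again a sketch of my own, not the paper's Simulink model): the equivalent linear loop is c0/(s³ + c2 s² + c1 s + c0), i.e. exactly the desired D(s) = s³ + 5.2s² + 9s + 5.1, and its step response should settle to 1 within the required 6 s.

```python
# Equivalent linear loop for Sect. 4.2: c0/(s^3 + c2*s^2 + c1*s + c0).
c0, c1, c2, v = 5.1, 9.0, 5.2, 1.0
y, dy, ddy = 0.0, 0.0, 0.0
dt, steps = 1e-3, 10_000          # 10 s of explicit-Euler simulation
ys = []
for _ in range(steps):
    dddy = c0 * v - c2 * ddy - c1 * dy - c0 * y
    y += dt * dy
    dy += dt * ddy
    ddy += dt * dddy
    ys.append(y)

print(round(ys[-1], 3))           # steady-state output
```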


Figures 12 and 13 illustrate the control change over different time intervals. The oscillations in the initial part of Fig. 12 are due to the differentiating filter with small parameter μ; this fast process is shown separately in Fig. 13.

Fig. 12. Illustration of the control change.

Fig. 13. Fast processes in the initial control area.

Figure 14 shows the action of the jump perturbation on the control signal.

Fig. 14. Effect of the perturbation on the control.


The non-stationary parameter a1(t) and the corresponding control change are shown in Figs. 15 and 16, respectively.

Fig. 15. Graph of the coefficient change.

Fig. 16. Effect of non-stationary coefficient on control.

Let us now sum up the results. First of all, the typical controllers based on the localization method provide suppression of external perturbations and time-varying parameters for first and second order nonlinear plants. These perturbations are quickly processed in the inner loop, as is clearly seen in the corresponding control change graphs. The calculated closed-loop systems have the required properties, including zero static error due to the integral component of the controller.

5 Conclusion

A procedure for calculating typical controllers for stabilization systems with nonlinear characteristics and non-stationary parameters of first and second order plants has been proposed. It involves the preliminary relocation of the controller's differential components to the feedback channel; this technique eliminates control throws when the input is a step signal. Since the corresponding derivatives are present in the feedback, the relationships of the localization method can be used for calculating the controller. As a result, the closed-loop control system has the required process quality, as well as invariance with respect to external uncontrolled perturbations.

Note that the differential component of the PID controller implicitly provides a current estimate of the right-hand side of the first order plant description (1). Adding a double differentiation channel to the PID controller provides the same information about the second-order plant. Thus, if we add a third derivative to the controller, we can also obtain an invariant system with the required process quality for a third-order nonlinear non-stationary plant. Its structure will be similar to that shown in Fig. 5, and the controller can be calculated by the proposed procedure.

References

1. Ang, K.H., Chong, G., Li, Y.: PID control system analysis, design, and technology. IEEE Trans. Control Syst. Technol. 4(13), 559–576 (2005)
2. Reference materials of PID controller in Simulink (in Russian). http://www.mathworks.com/help/simulink/slref/pidcontroller.html. Accessed 19 Mar 2018
3. Rotach, V.Ya.: Automatic Control Theory, 5th edn. Publishing House MEI, Moscow (2008)
4. Denisenko, V.V.: PID controller: principles of construction and modification. STA 4, 66–74 (2006)
5. Nikulin, E.F.: Fundamentals of Automatic Control Theory. Frequency Methods of Systems Analysis and Synthesis. Publishing BHV-Petersburg, St. Petersburg (2004)
6. Skogestad, S.: Simple analytic rules for model reduction and PID controller tuning. J. Process Control 4(13), 291–309 (2003)
7. Schei, T.S.: Automatic tuning of PID controllers based on transfer function estimation. Automatica 12(30), 1983–1989 (1994)
8. Wang, Q.-G., Zhang, Z., Astrom, K.J., Chek, L.S.: Guaranteed dominant pole placement with PID controllers. J. Process Control 2(19), 349–352 (2009)
9. Dorf, R., Bishop, R.: Modern Control Systems. Publishing Laboratory of Basic Knowledge, Moscow (2002)
10. Zhmud, V.A., Dimitrov, L.V., Taichenachev, A.V., Semibalamut, V.M.: Calculation of PID-regulator for MISO system with the method of numerical optimization. In: International Siberian Conference on Control and Communications SIBCON 2017, pp. 670–676, Astana, Kazakhstan (2017)
11. Zhmud, V.A., Dimitrov, L.V., Roth, H.: A new approach to numerical optimization of a controller for feedback system. In: 2nd International Conference on Applied Mechanics, Electronics and Mechatronics Engineering, pp. 213–219. DEStech Publications Inc., Beijing (2017)
12. Zhmud, V.A., Pyakillya, B.I., Liapidevskii, A.V.: Numerical optimization of PID-regulator for object with distributed parameters. J. Telecommun. Electron. Comput. Eng. 2(9), 9–14 (2017)
13. Vostrikov, A.S., Frantsuzova, G.A.: Synthesis of PID-controllers for nonlinear non-stationary plants. Optoelectron. Instrum. Data Process. 5(51), 471–477 (2015)
14. Vostrikov, A.S.: Controller synthesis problem for automation systems: state and prospects. Autometriya 2(46), 3–19 (2010)
15. Kotova, E.P., Frantsuzova, G.A.: Application PI2D controller in automatic control systems. In: International Siberian Conference on Control and Communications SIBCON 2017, pp. 692–695, Astana, Kazakhstan (2017)
16. Frantsuzova, G.A.: PI2D-controllers synthesis for nonlinear non-stationary plants. In: 14th International Scientific-Technical Conference APEIE, vol. 1, pp. 212–216. NSTU, Novosibirsk (2018)
17. Zhmud, V.A., Zavoryn, A.N.: Fractional-exponent PID-controllers and ways of their simplification with increasing control efficiency. Autom. Softw. Eng. 1, 30–36 (2013)
18. Vostrikov, A.S., Utkin, V.I., Frantsuzova, G.A.: Systems with state-vector derivative in control. Autom. Remote Control 3, 22–25 (1982)
19. Vostrikov, A.S.: Control Systems Synthesis by Localization Method. Publishing NSTU, Novosibirsk (2007)
20. Yurkevich, V.D.: Calculation and tuning of controllers for nonlinear systems with different-rate processes. Optoelectron. Instrum. Data Process. 5(48), 447–453 (2012)

Mathematical Models and Algorithms for the Management of Liquidation Process of Floods Consequences

Maria Khamutova¹, Alexander Rezchikov¹,², Vadim Kushnikov¹,², Vladimir Ivaschenko², Elena Kushnikova²,³, and Andrey Samartsev²

¹ Saratov State University, 83 Astrakhanskaya Street, 410012 Saratov, Russia
[email protected]
² Institute of Precision Mechanics and Control of RAS, 24 Rabochaya Street, 410028 Saratov, Russia
³ Institute of Control Problems of the Russian Academy of Science, 65 Profsouznaya Ave., 117997 Moscow, Russia

Abstract. A problem statement for the management of the liquidation process of floods consequences is formulated. A model forecasting the floods consequences that affect the amount of damage is proposed as the model of the control object. Positive and negative feedbacks between the system variables and the external factors affecting the dynamics of the object are taken into account in this model. The model was developed on the basis of system dynamics and is represented by a system of differential equations. The results of solving the system of differential equations were compared with real data of the flood that occurred in Primorye in 2001. An algorithm for solving the problem of management of the liquidation process of floods consequences was developed, in which the control functions are presented in the form of action plans aimed at reducing the characteristics of the floods consequences. Selection of the optimal action plan is based on a comparison of the calculated values of the cost function for each action plan from a set of action plans.

Keywords: Management of liquidation process of floods consequences · Model of forecasting floods consequences · System dynamics
1 Introduction

Floods occur increasingly often around the world due to global warming, population growth, deforestation and the growth of human activities [1]. It is impossible to prevent a flood, but it is quite possible to weaken and minimize its possible consequences. This requires effective management of the liquidation process of floods consequences.

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 540–551, 2019. https://doi.org/10.1007/978-3-030-12072-6_44

Mathematical Models and Algorithms for the Management of Liquidation

541

It follows from the above that the development of models and algorithms for the management of the liquidation process of floods consequences is an important issue. The works of many researchers are devoted to this issue, in particular those of V. V. Kulba, B. N. Porfiriev, V. A. Akimov, S. V. Borscht, K. Sene, T. E. Adams, T. Pagano and others. Their research addressed emergency management, flood modeling, the development of flood monitoring and forecasting systems, and the development of management information systems [2, 3]. However, in these works insufficient attention is paid to the development of models for forecasting possible floods consequences, which significantly affects the quality of decision making and the effectiveness of management of the flood response process.

2 The Solution of the Problem of Management of Liquidation Process of Floods Consequences

2.1 The Problem Statement

For the management information systems of EMERCOM, it is necessary to develop models and algorithms that determine a control p(t) ∈ P minimizing the cost function on the time interval t ∈ [t0, tN]:

Z(p(t)) = ∫_{t0}^{tN} Σ_{i=1}^{n} (X̄i − Xi(t, p(t)))² ci dt → min   (1)

subject to the first-order dynamic constraints

dXi(t)/dt = f(t, p(t), X1(t), …, Xn(t)), i = 1,…,n, t > 0, Xi(t) > 0,   (2)

and the boundary conditions

Fi^{t0}(X, X′, p) = 0, Fj^{tN}(X, X′, p) = 0, i = 1,…,k1, j = 1,…,k2,   (3)

where X̄i, i = 1,…,n are the preferable values of the characteristics of floods consequences, Xi(t), i = 1,…,n are the characteristics of floods consequences, and ci, i = 1,…,n are the weight coefficients of the characteristics Xi(t). In Fig. 1 the scheme of management of liquidation process of floods consequences is presented. The main stages of solving the problem of the management of liquidation process

542

M. Khamutova et al.

Fig. 1. The scheme of management of liquidation process of floods consequences

of floods consequences are shown on the scheme.

2.2 The Mathematical Model

The mathematical model of forecasting the characteristics of floods consequences affecting the amount of damage is considered as the model of the control object. The model was developed on the basis of system dynamics, according to which the object is described by a system of first-order differential equations:

dXi(t)/dt = Xi⁺ − Xi⁻, i = 1,…,n,   (4)

where Xi⁺, Xi⁻, i = 1,…,n are continuous or piecewise continuous functions defining the positive and negative rates of change of the variable Xi, i = 1,…,n. In turn, Xi⁻ = fi⁻(F1, F2, …, Fm) and Xi⁺ = fi⁺(F1, F2, …, Fm) are functions whose arguments Fj, j = 1,…,m affect the rate of change of the variable Xi; Fj can be functions whose arguments are the system variables Xi, i = 1,…,n or external factors [4, 5].

According to the normative document [6], the following characteristics of floods consequences were chosen as the system variables of the model: X1(t) is the number of forces involved in emergency-and-rescue operations; X2(t) is the number of houses destroyed and damaged in the floods; X3(t) is the number of people evacuated from the flooded area; X4(t) is the number of deaths (human life losses); X5(t) is the length of railways

Mathematical Models and Algorithms for the Management of Liquidation

543

and roads in the flood zone; X_6(t) is the number of industrial enterprises in the flood zone; X_7(t) is the number of facilities involved in emergency-and-rescue operations; X_8(t) is the population in the flood zone; X_9(t) is the area of agricultural land in the flood zone; X_10(t) is the number of dead farm animals; X_11(t) is the damage to the fixed production assets in the flood zone; X_12(t) is the damage to the current production assets in the flood zone. Based on the reasoning of [7] and on the analysis of the relationships between the characteristics of flood consequences, the model of forecasting the characteristics of flood consequences is presented in the following form:

...
dX_7(t)/dt = f_7^+(X_1(t))
dX_8(t)/dt = f_8^+(D(t), S(t)) − f_8^−(X_4)
dX_9(t)/dt = f_9^+(I(t), S(t)) − f_9^−(X_1(t), X_7(t))
dX_10(t)/dt = f_10^+(F(t), G(t), T(t), S(t), X_1(t), X_7(t))
dX_11(t)/dt = f_11^+(F(t), G(t), S(t), X_6, D(t), P, C)
dX_12(t)/dt = f_12^+(X_11)
  (5)

where A(t) is the density of transport networks in the flooded area, D(t) is the density of the population, F(t), G(t), T(t) are the average daily values of the flow rate, the water level and the temperature, respectively, I(t) is the fraction (share) of agricultural land, S(t) is the flooded area, P is the population density in the subject of the Federation, and C is the cost of fixed assets of the subject of the Federation.


Suppose that the functions of the right-hand sides of the system (5) have the form

f_i^{+/−}(F_1, ..., F_n) = Σ_{l=1}^{n} k_{i,l}^{+/−} Π_{j=1}^{n} f_{i,l}(F_j),

and the coefficients satisfy k_{i,l}^{+/−} = 0 for l = 1, ..., m−1, k_{i,l}^{+/−} ≠ 0 for l = m, and k_{i,l}^{+/−} = 0 for l = m+1, ..., n; then the functions take the form

f_i^{+/−}(F_1, ..., F_n) = k_i^{+/−} Π_{j=1}^{n} f_i(F_j),

where the F_j are the system variables or external factors, and the coefficients k_i^{+/−}, i = 1, ..., 12 are determined at the stage of adaptation of the model to the object of study. Therefore, model (5) can be represented by the following system of differential equations:

...
dX_7(t)/dt = k_7^+ f_7^{X_1}(X_1(t))
dX_8(t)/dt = k_8^+ D(t) f_8^S(S(t)) − k_8^− f_8^{X_4}(X_4)
dX_9(t)/dt = k_9^+ I(t) f_9^S(S(t)) − k_9^− f_9^{X_1}(X_1(t)) f_9^{X_7}(X_7(t))
dX_10(t)/dt = k_10^+ F(t) G(t) T(t) f_10^S(S(t)) f_10^{X_1}(X_1(t)) f_10^{X_7}(X_7(t))
dX_11(t)/dt = k_11^+ P C F(t) G(t) D(t) f_11^S(S(t)) f_11^{X_6}(X_6(t))
dX_12(t)/dt = k_12^+ f_12^{X_11}(X_11(t))
  (6)

where f_j^{X_i} is the function that defines the relationship between the system variable X_i(t) and the dependent system variable X_j(t), and f_j^S, in turn, defines the dependence of X_j(t) on S(t), i, j = 1, ..., 12. If there are no formulas to set the relationship between system variables, then the functions f_j^{X_i}, f_j^S are determined on the basis of statistical data. Using statistics on the floods that occurred in Primorye in 2001 [8], the polynomials f_j^{X_i},


f_j^S were constructed. In Figs. 2 and 3 the graphs of the constructed polynomials f_1^S = 0.001S^3(t) − 0.04S^2(t) + 0.6S(t) − 2.1 and f_1^{X_8} = 54X_8^4(t) − 137X_8^3(t) + 103.4X_8^2(t) − 20.7X_8(t) + 1.9 are presented, respectively.

Fig. 2. The graph of the polynomial f1S

Fig. 3. The graph of the polynomial f1X8


Taking into account the constructed polynomials, the system (6) takes the following form:

...
dX_7(t)/dt = (1/X_7^max) (k_7^+ (3.5X_1^3(t) − 5.3X_1^2(t) + 3.27X_1(t) + 0.0003))
dX_8(t)/dt = (1/X_8^max) (k_8^+ D(t)(0.18S^3(t) − 0.06S^2(t) + 0.77S(t) − 1.77) − k_8^− (2.17X_4^2(t) − 0.0024X_4(t) + 0.16))
dX_9(t)/dt = (1/X_9^max) (k_9^+ I(t)(0.002S^2(t) + 0.07S(t) + 0.5) − k_9^− (0.43X_1^3(t) − 2.3X_1^2(t) + 3.2X_1(t) − 0.07)(1.15X_7^3(t) − 1.78X_7^2(t) + 0.93X_7(t) − 0.024))
dX_10(t)/dt = (1/X_10^max) (k_10^+ F(t)G(t)T(t)(0.0007S^4(t) + 0.03S^3(t) − 0.46S^2(t) + 2S(t) − 0.4)(0.25X_1^3(t) − 1.24X_1^2(t) + 2.04X_1(t) − 0.049)(10.9X_7^3(t) − 26.57X_7^2(t) + 16.7X_7(t) − 0.515))
dX_11(t)/dt = (1/X_11^max) (k_11^+ P C F(t)G(t)D(t)(0.0005S^3(t) + 0.02S^2(t) − 0.01S(t) + 0.4)(3.5X_6^3(t) + 7.8X_6^2(t) − 2.7X_6(t) + 0.25))
dX_12(t)/dt = (1/X_12^max) (k_12^+ (45.3X_11^4(t) + 111.95X_11^3(t) − 84.07X_11^2(t) + 20.04))
t_0 = 1, X_i(t_0) = X_i^0, i = 1, ..., 12
  (7)

The system of nonlinear differential equations (7) constitutes a Cauchy problem and is solved numerically by the fourth-order Runge–Kutta method.

Fig. 4. The results of numerical solution of the system (7)

For presentation of the results of calculation, the characteristics were normalized relative to their maximum values. The results of the numerical solution of the system (7) are presented in Fig. 4. The values of the characteristics of flood consequences calculated by the system (7) were compared with the real data of the flood that occurred in Primorye in 2001 [8]. As follows from Table 1, the characteristics calculated from the model differ only slightly from the corresponding real values, where ΔX_cp^i are the average values of the relative errors of the model results (7).

Table 1. The comparison of the average values of relative errors

X_i      | X1  | X2  | X3 | X4 | X5  | X6 | X7  | X8  | X9 | X10 | X11 | X12
ΔX_cp^i  | 16% | 14% | 6% | 3% | 11% | 2% | 14% | 14% | 5% | 3%  | 15% | 14%
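The fourth-order Runge–Kutta integration mentioned above can be sketched as follows. The right-hand side below is a toy stand-in with the system-dynamics structure of (4) — a positive inflow term minus a negative outflow term — with invented coefficients, since the object-specific coefficients k_i^± and polynomials are fitted per object; this is an illustration, not the authors' implementation.

```python
import numpy as np

def rk4(f, x0, t0, t1, n):
    """Integrate dx/dt = f(t, x) from t0 to t1 in n classical RK4 steps."""
    h = (t1 - t0) / n
    x = np.asarray(x0, dtype=float)
    for i in range(n):
        t = t0 + i * h
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# Toy right-hand side shaped like (4): inflow minus outflow (coefficients invented).
def rhs(t, x):
    inflow = 0.5 * np.ones_like(x)   # stands in for k_i^+ * f_i^+(...)
    outflow = 0.1 * x                # stands in for k_i^- * f_i^-(...)
    return inflow - outflow

x_final = rk4(rhs, [1.0] * 12, 1.0, 4.0, 300)  # t0 = 1 as in system (7)
```

The same loop applies unchanged to the actual right-hand sides of (7) once the coefficients have been identified at the adaptation stage.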

The developed model is adapted to each specific object of study. Information about the object is collected and stored; on this basis the system variables are selected from the normative document [6], the form of the functions of the right-hand sides of the system (5) is determined, the coefficients of the model are specified, and at each stage of adaptation the model is adjusted.
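The agreement between the adapted model and the object can be checked with the standard relative-error measure; the sketch below (the function names and the averaging convention are ours, not the paper's code) reproduces the mean of the Table 1 values.

```python
def relative_error(model, real):
    """Relative error (in percent) of a modelled characteristic vs. the real value."""
    return abs(model - real) / abs(real) * 100

def mean_relative_error(errors):
    """Average relative error over all characteristics, as reported in Table 1."""
    return sum(errors) / len(errors)

# Average relative errors (%) for X1..X12 from Table 1.
table1 = [16, 14, 6, 3, 11, 2, 14, 14, 5, 3, 15, 14]
```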

2.3 The Algorithm of Solving the Problem of Management of the Liquidation Process of Flood Consequences

To solve problem (1)–(3), it is necessary to construct the functions of the right-hand sides of the system of differential equations (5) for a specific object. The results of the numerical solution of system (5) are approximated by polynomials of low degrees. For the characteristics of the flood that occurred in Primorye in 2001, these polynomials are as follows:

X_1(t, p(t)) = 0.001t^3 + 0.0665t^2 − 0.0345t − 0.008
X_2(t, p(t)) = 0.0536t^3 + 0.4455t^2 − 0.786t + 0.447
X_3(t, p(t)) = 0.011t^3 + 0.151t^2 − 0.14t + 0.25
X_4(t, p(t)) = 0.0923t^3 − 0.859t^2 + 2.6156t − 1.849
X_5(t, p(t)) = 0.04t^3 + 0.288t^2 − 0.187t + 0.239
X_6(t, p(t)) = 0.0063t^3 + 0.104t^2 + 0.107t + 0.045
X_7(t, p(t)) = 0.03t^3 − 0.032t^2 + 0.01t + 0.023
X_8(t, p(t)) = 0.0132t^3 − 0.0245t^2 + 0.245t − 0.067
X_9(t, p(t)) = 0.009t^3 + 0.1115t^2 − 0.06t − 0.038
X_10(t, p(t)) = 0.16t^3 − 1.5t^2 + 4.57t − 3.23
X_11(t, p(t)) = 0.004t^3 + 0.01t^2 + 0.21t − 0.22
X_12(t, p(t)) = 0.034t^3 − 0.127t^2 + 0.24t − 0.145
  (8)

The weight coefficients c_i, i = 1, ..., 12 for the cost function (1) are selected based on the experience of the dispatching personnel and determine the significance of the characteristics X_i(t), with Σ_{i=1}^{12} c_i = 1. For the flood in Primorye, these coefficients are as follows:

c_1 = 0.2, c_2 = 0.09, c_3 = 0.03, c_4 = 0.125, c_5 = 0.075, c_6 = 0.03, c_7 = 0.14, c_8 = 0.16, c_9 = 0.08, c_10 = 0.07, c_11 = 0.05, c_12 = 0.05.
  (9)

The polynomials (8), the weight coefficients (9) and the preferable values of the characteristics are substituted into the cost function (1); the integrand is then simplified, and the definite integral is calculated on the interval [1, 4]:

Z(p_0) = ∫_1^4 (1.6 − 1.73t^3 + 3.5t^2 − 3.64t + 0.003t^6 − 0.06t^5 + 0.45t^4) dt = 0.634
  (10)

The action plans that minimize the characteristics of flood consequences are used as control functions. The set of action plans P = {p_1(t), ..., p_k(t)} is formed on the basis of the experience of experts, and, as practice shows, this set is finite. To solve problem


(1)–(3), the values Z(p_j(t)) for each action plan from the set P are calculated, and then the plan whose implementation gives the minimum value of the cost function is selected by enumeration. Consider the action plans p_j(t) ∈ P, j = 1, ..., 4 for the flood in Primorye (2001). The characteristics of flood consequences are calculated by the model (5) for each action plan, the obtained results are approximated by polynomials, and these polynomials are substituted into the cost function (1). Table 2 presents the values of the cost function Z according to the original plan p_0 and under the implementation of the action plans p_1(t), p_2(t), p_3(t), p_4(t). It follows from the table that the minimum value of the cost function (1) is achieved with the implementation of the action plan p_4(t).

Table 2. The value of the cost function Z under implementation of the action plans

Plan      | p_0(t) (original) | p_1(t) | p_2(t) | p_3(t) | p_4(t)
Z(p_j(t)) | 0.634             | 0.41   | 0.48   | 0.5    | 0.404
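The enumeration step over the finite plan set P can be sketched directly from the Table 2 data (the plan labels p0..p4 are our shorthand):

```python
# Cost-function values Z(p_j(t)) from Table 2.
plan_costs = {
    "p0": 0.634,  # original plan
    "p1": 0.41,
    "p2": 0.48,
    "p3": 0.5,
    "p4": 0.404,
}

def best_plan(costs):
    """Pick the action plan whose implementation minimizes the cost function Z."""
    return min(costs, key=costs.get)
```

For the data above the enumeration selects p4, in agreement with the table.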

For the plan p_4(t), the cost function (1) takes the form:

Z(p_4(t)) = ∫_1^4 (0.27t − 0.189t^3 + 0.25t^2 + 0.001t^6 − 0.01t^5 + 0.07t^4 + 0.32) dt = 0.404
  (11)

The geometrical interpretation of the definite integral (11) is shown in Fig. 5.

Fig. 5. Geometrical interpretation of the definite integral Zðp4 ðtÞÞ
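Definite integrals of polynomial cost functions such as (10) and (11) can be evaluated numerically; the composite Simpson's rule below is a generic utility of ours (not the authors' code), applied to a hypothetical integrand whose coefficients are illustrative rather than the paper's exact values.

```python
def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; uses an even number of subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Hypothetical integrand in the shape of the simplified cost functionals (10)/(11).
def z_integrand(t):
    return 0.32 + 0.27 * t + 0.25 * t**2 + 0.001 * t**6

Z = simpson(z_integrand, 1, 4)  # integral over [1, 4], as in (10) and (11)
```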


Thus, the action plan p_4(t) is the most effective action plan, and its implementation allows more effective management of the liquidation process of flood consequences.

3 Implementation of the Developed Models and Algorithms

The software package FCFAAD was developed to implement the models and algorithms [9]. FCFAAD is a client/server application, the structure of which is shown in Fig. 6. The basic functionalities of FCFAAD are:
• calculation of the characteristics of flood consequences affecting the amount of damage, and their output to the screen;
• storage of input and calculated data in the database for further analysis and correction;
• finding the minimum of the cost function.

Fig. 6. The structure of the software package FCFAAD

FCFAAD can be used as part of the software of management information systems of EMERCOM. The FCFAAD program can also work in the mode of a computer simulator used to train decision makers. With the help of the simulator it is possible to develop practical skills and gain experience in the field of flood forecasting and decision-making on the elimination of flood consequences.


4 Conclusion

The problem statement of the management of the liquidation process of flood consequences was formulated. A system-dynamics mathematical model for forecasting the characteristics of flood consequences was developed. The algorithm for solving the problem of the management of the liquidation process of flood consequences was presented. The control functions were selected in the form of action plans aimed at reducing the characteristics of flood consequences, according to the presented algorithm. The software package FCFAAD for management information systems was developed on the basis of the presented models and algorithms.

References

1. Natural Catastrophes 2015: Analyses, Assessments, Positions. Topics Geo, p. 82. Munich Re, Munich (2016)
2. Adams, T.E., Pagano, T.C.: Flood Forecasting – A Global Perspective, p. 480. Academic Press, Cambridge (2016)
3. Sene, K.: Flood Warning, Forecasting and Emergency Response, p. 303. Springer, Berlin (2008)
4. Forrester, J.W.: World Dynamics, 2nd edn. Productivity Press, Portland (1973)
5. Sadovnichiy, V., Akayev, A., Korotayev, A., Malkov, S.: Modelling and Forecasting World Dynamics. Scientific Council for Economics and Sociology of Knowledge Fundamental Research Programme of the Presidium of the RAS, RAS ISPR, Moscow (2012). (in Russian)
6. State Standard 22.0.06-97/State Standard 22.0.06-95: Safety in emergencies. The sources of natural emergencies. Injuring factors. Nomenclature of parameters of injuring influences. (in Russian)
7. Khamutova, M., Rezchikov, A., Kushnikov, V., Ivashchenko, V., Bogomolov, A., Filimonyuk, L., Dolinina, O., Kushnikova, E., Shulga, T., Tverdokhlebov, V., Fominykh, D.: Forecasting characteristics of flood effects. J. Phys.: Conf. Ser. 1015, 052012 (2018). https://doi.org/10.1088/1742-6596/1015/5/052012
8. Vorobiev, Yu., Akimov, V., Sokolov, Yu.: Catastrophic Floods of the Beginning of the XXI Century: Lessons and Conclusions, p. 352. Dex-Press, Moscow (2003). (in Russian)
9. Franco, E.F., Hirama, K., Carvalho, M.: Applying system dynamics approach in software and information system projects: a mapping study. Inf. Softw. Technol. 93, 58–73 (2018)

Analysis of Three-Dimensional Scene Visual Characteristics Based on Virtual Modeling and Parameters of Surveillance Sensors

Vitaly Pechenkin, Mikhail Korolev, Kseniya Kuznetsova, and Dmitriy Piminov

Yuri Gagarin State Technical University of Saratov, 77, Politechnicheskaya Street, Saratov 410054, Russia
[email protected]

Abstract. The article proposes an approach to evaluating and optimizing the configuration of surveillance camera locations in a complex three-dimensional scene. Optimization is carried out on the basis of virtual modeling of the three-dimensional scene, taking into account the parameters of the surveillance cameras used. A visibility "heat map" for scene observability, which allows selection of the optimal sensor configuration for various tasks, is proposed. There is an option to simulate camera parameters when calculating this heat map. The article defines the visibility function for positions of objects in the three-dimensional scene, taking into account the complex geometry of the space, the mutual overlap of the visibility of three-dimensional objects, the parameters of light sources, and different noises depending on the shadows of the virtual objects. The authors describe the architecture of the software package, the working principle of the surveillance devices, and the algorithm for determining the "heat map". The formal problem statement is based on solving the optimization task by maximizing the observability level of the scene objects and defining blind zones by means of a specially defined graph.

Keywords: Surveillance devices · Monitoring tools · Auditory sensors · Visual sensors · Heat map · Optimization of location · Sensitivity · Lighting · Observability · Placement optimization

1 Introduction

The article deals with the problem of building a visibility heat map for positions in a three-dimensional scene. The three-dimensional scene is a virtual model of a real site with objects placed on it which must be monitored. The scene itself contains obstacles for the surveillance devices, either in the form of geometric three-dimensional objects or in the form of surfaces. When selecting positions for the placement of surveillance devices (hereinafter referred to as cameras or sensors), so-called "blind zones" are formed on the scene, which usually should be minimized. The main purpose of the article is to determine the visibility function for the positions of the scene, which is represented by the heat map in the developed software. In this paper the task of selecting the

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 552–562, 2019. https://doi.org/10.1007/978-3-030-12072-6_45

Analysis of Three-Dimensional Scene Visual Characteristics …

553

equipment for surveillance according to its characteristics is not considered, although this problem is relevant and its solution requires special attention [1]. In this section the main currently existing approaches are described briefly. One of the issues in solving the problems of optimizing the placement of surveillance devices in an open area, or in a room with a specific internal configuration, is the need to recalibrate the cameras regularly [2]. The use of filtering algorithms together with the analysis of the results of previous calibrations makes it possible to "extrapolate" the characteristics of the camera when a quick calculation at a certain point in time is required [3]. In most cases, the detection of objects in a dynamically distributed three-dimensional environment [4], built on the basis of tracking algorithms, depends on the effectiveness of the detection methods. However, the use of such algorithms in a 3D scene is complicated by the mutual overlap of object visibility and the complex geometry of the space, which, combined with the presence of light sources, creates additional difficulties that impede the observation of the particular scene and accurate recognition of the situation [5, 6]. In cases where the locations of the target objects on the scene are not known in advance and they need to be detected by examining the entire scene, surveillance devices should be placed so as to cover the entire observation area as much as possible, minimizing the number of "blind zones" — areas that are not observed by any sensor. The scene can have a complex geometric structure, which must be taken into account in the calculations. One of the effective ways to accomplish this task is to solve the optimization problem of efficiently deploying surveillance, which is based on approximate methods for finding optimal solutions [7, 8].
Compared with the tasks of monitoring a perimeter or individual objects, the automated placement and control of dynamically changing scenes requires a higher intellectual level both of the surveillance system itself, which must filter information from a massive data stream effectively, and of the software, which must provide the ability to add, edit, and predict missing data [9]. Frequently, objects in the virtual scene are located outside the line of sight of the surveillance cameras, so the detection and positioning of a 3D object is complicated. To solve this task, an algorithm has been proposed for calculating the position of the scene elements using two additional light sources [10]. Inappropriate installation of surveillance sensors on straight lines leads to the appearance of "dead angles" in space; the optimal-location problem reduces to the selection of exact closed forms of the same type of surveillance cameras [11]. One of the sub-types of the problem of optimizing the placement of such sensors is the problem of their optimal placement on the border of the controlled area. Through the use of combinatorial methods, this approach allows the use of probabilistic models of localization of entry into the secure area [12, 13]. To solve the problem of choosing the optimal configurations of surveillance equipment in a virtual dynamic three-dimensional scene, simulating the scene itself and positioning the surveillance cameras on it is not enough. It is also necessary to analyze the parameters of the surveillance cameras and the lighting conditions, and to refine the existing algorithms for camera placement taking into account the cameras' sensitivity.

554

V. Pechenkin et al.

2 Problem Statement

This paper describes the procedure for calculating the visibility function of positions for a three-dimensional scene with a given configuration of surveillance cameras. It is proposed to take into account the parameters of the sensors, the parameters of the light sources, and various noises depending on the shadows of the virtual objects. The structure of the software package that allows creating the visibility heat map is also described below.

2.1 Informal Problem Statement

In general, the informal problem statement can be described as follows. In a scene with a defined location of a set of complex-geometry objects, it is necessary to fix the surveillance devices (sensors), which must monitor the whole zone of the scene. Sensors can be located in a limited set of positions within the scene. Obstacles or relief on the scene can create "blind zones" for surveillance, and these zones can be "blind" for sensors of various types: audio, visual, motion sensors. The main task is to calculate the total measure of visibility for positions on the scene, which allows evaluating the configuration and location of the sensors visually. This article does not consider the issues of identifying objects of observation, determining the trajectory of their movement, or determining their location. It is assumed that their combination is given by a set of possible positions on the scene, in particular by coordinates on a flat scene. The suggested solution is based on the three-dimensional model, which allows determining whether an object at a certain position can be observed by the camera located at one of the fixed points (see Fig. 1).

Fig. 1. Observed objects on the three-dimensional scene.

It is assumed that the observed three-dimensional scene contains a grid with a certain step and dimensions, whose nodes are considered as locations of possible placement of target objects; there are obstacles on the scene, under which grid nodes are not considered. This technique is often used for solving problems of

Analysis of Three-Dimensional Scene Visual Characteristics …

555

optimizing the camera placement. Cameras cannot be located in arbitrary positions, but only in a finite set of discrete positions defined by the grid spacing, preset parameters and constraints. This formulation allows considering a wide class of applied optimization problems of camera placement, both in enclosed spaces and in large outdoor areas (industrial ones in particular).

2.2 Formal Problem Statement

The work [14] presents a formalization of the problem of optimizing the placement of surveillance cameras in terms of its various target statements: maximizing the observability level of scene objects, determining blind zones, and minimizing the number of surveillance cameras while maintaining a predetermined observability level. To build the formalization, it is proposed to use the weighted oriented graph of visibility of object positions in a three-dimensional scene G = (V ∪ A, E, f), which is based on the grid superimposed on the observed surface (2D and 3D grids are possible), where

– V is the set of grid nodes corresponding to the possible positions of objects on the scene;
– A = {A_i} is the family of sets of grid nodes in which cameras can be located, A_i corresponding to the possible positions of the i-th camera;
– E is the set of oriented edges of the graph that connect the vertices of the set A with the vertices of V and determine the presence of a line of sight from the position of the camera;
– f: E → R is the function of visibility of a position for a specific camera (the weight of an arc of the visibility graph; the function will be defined further).

Formally, we define

ā = {a_1, a_2, ..., a_k}, a_i ∈ A_i for all 1 ≤ i ≤ k,   (1)

as the set which represents a configuration of sensors. The set of all possible configurations of the sensors is denoted as Ā. Our task is to determine the observability function of the scene positions taking into account the parameters of the installed cameras. When building monitoring systems, special attention is paid to the following parameters of surveillance cameras:

– sensitivity (the minimum illumination of the object providing the required image quality);
– noise (the degree of manifestation in the image of the so-called "snow", influencing the image quality);
– resolution (the maximum number of lines detected in the test-table image with the required detection accuracy);
– gamma correction (non-linear conversion of the light-signal characteristic to match the observation conditions and the modulation characteristics of the display device with the contrast sensitivity of vision).
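The visibility graph G = (V ∪ A, E, f) can be represented, for illustration, by a small adjacency structure. This is a sketch of ours, not the authors' implementation; the arc weights stand in for the visibility function f:

```python
class VisibilityGraph:
    """Weighted oriented visibility graph G = (V ∪ A, E, f): arcs run from
    camera positions (vertices of A) to grid nodes (vertices of V), and the
    arc weight is the visibility of the node for that camera."""

    def __init__(self):
        self.nodes = set()             # V: possible object positions
        self.camera_positions = set()  # union of the sets A_i
        self.weights = {}              # E with f: (camera, node) -> weight

    def add_arc(self, camera, node, weight):
        self.camera_positions.add(camera)
        self.nodes.add(node)
        self.weights[(camera, node)] = weight

    def visible_from(self, camera):
        """Grid nodes to which a line of sight exists from this camera position."""
        return {v for (a, v) in self.weights if a == camera}

    def cameras_seeing(self, node):
        """Camera positions from which the given grid node is visible."""
        return {a for (a, v) in self.weights if v == node}
```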


The sensitivity of the camera directly depends on the parameter of its light-signal characteristic, the direct dependence of the output signal on the scene illumination: the higher this parameter, the higher the sensitivity at the required threshold. The setting of the task consists in determining the parameters of the surveillance cameras of the monitoring system and their influence on the degree of visibility of the object on the scene. Formally, the problem is solved by determining the node weight function g: V × Ā → R for the visibility graph. In [15, 16] it was proposed to determine the sensitivity value of the surveillance camera from the minimum allowable illumination of the CCD matrix, taking into account the variables: the luminosity of the lens used, the distance to the object, its contrast and some others. As a result, a formula is obtained that links the matrix L_0 (the illumination on the object) with the sensitivity matrix of the camera S:

S = 1 + L_0 · (k_0 · s) / (4 · (1 + m)^2 · F^2),   (2)

where k_0 is the reflectance of the object (test-table white), s is the transmittance of light by the lens, m is the ratio of the focal length of the lens to the distance to the object, and F is the ratio of the focal length of the lens to the diameter of its entrance pupil. All of these are parameters of the surveillance cameras and of the monitored scene. The matrix S, in turn, can be considered as the matrix of weights of the visibility graph, whose dimension is determined by the number of grid nodes superimposed on the scene. Thus, the sensitivity function of a camera for a specific position of the three-dimensional scene is determined by three groups of parameters: the parameters of the surveillance camera (cameras), the parameters of the scene illumination (number of light sources, directionality, intensity) and the parameters of the 3D scene itself. In the 3D scene, the illumination of an object differs significantly from the illumination of a real room. The main factor limiting sensitivity in the virtual scene is noise, depending on the shadows of dynamic objects. As the density and number of objects in the scene increase, the number of obstacles to the light source increases, thereby raising the percentage of so-called "snow" in the output image. To account for this noise in the image, it is proposed to introduce the variable N representing the noise (the shadows of the dynamic objects of the scene), whose value is calculated from the following expression:

N = N_FRONT / N_BACK,   (3)

where N_FRONT is the object illumination from the side of the camera for which the sensitivity is calculated, and N_BACK is the object illumination from the side opposite to the camera. To calculate the degree of visibility of the grid nodes, the modified formula (2) is used, which takes into account the shadows on the three-dimensional scene:

f(v, a_i) = 1 + L_0 · (k_0 · s · N) / (4 · (1 + m)^2 · F^2)   (4)
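Formulas (2)–(4) translate directly into code; the sketch below uses the paper's symbols as parameter names, while the sample values in the test are our assumptions rather than calibrated camera data.

```python
def shadow_noise(n_front, n_back):
    """Formula (3): N = N_FRONT / N_BACK, the shadow-noise factor."""
    return n_front / n_back

def visibility_weight(L0, k0, s, m, F, N=1.0):
    """Formula (4): sensitivity-based visibility weight f(v, a_i).
    With N = 1 it reduces to the base sensitivity formula (2)."""
    return 1 + L0 * (k0 * s * N) / (4 * (1 + m) ** 2 * F ** 2)
```

Note that the weight grows with the object illumination L_0 and falls as the relative aperture F increases, matching the qualitative discussion above.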


As the function of visibility of a grid node for a specific camera, the corresponding value of the matrix S is used:

g(v, ā) = ( Σ_{i=1}^{k} f(v, a_i) ) / ( k · max_{v, a_i} f(v, a_i) ),   (5)

where f(v, a_i) is the sensitivity value calculated for a specific point of the scene (grid node), and k is the number of cameras from which the position v is visible. It is obvious that

0 ≤ g(v, ā) ≤ 1 for all v ∈ V, ā ∈ Ā.   (6)

The heat map for the given three-dimensional scene and sensor configuration ā is the pair HM = (V, ā).
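Formula (5) and the heat map construction can be sketched as follows. This is a minimal illustration of ours; `per_node_f` is a hypothetical input that maps each grid node to the list of f(v, a_i) values of the cameras that see it.

```python
def heat_map(per_node_f):
    """Build the heat map: g(v, ā) per formula (5), normalized by the global
    maximum of f over all node/camera pairs so that 0 <= g <= 1 holds, as in (6)."""
    f_max = max(f for fs in per_node_f.values() for f in fs)
    return {
        v: sum(fs) / (len(fs) * f_max)   # k = len(fs) cameras see node v
        for v, fs in per_node_f.items()
        if fs                            # nodes seen by no camera lie in blind zones
    }
```

Nodes absent from the result (seen by no camera) are exactly the blind zones discussed above.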

3 The Structure of the Software

To simulate the three-dimensional scene and solve the problem of optimizing the sensor placement, software was developed that allows building and updating the observability graph in real time, as well as solving the formalized problem (options are available for obtaining an optimal solution or an approximate solution produced by heuristic algorithms). The toolkit allows building virtual scenes of great complexity and contains the following types of objects:
– Target. An object located on the scene; establishing observation of it is the goal of this software.
– Obstacles. They imitate real obstacles on the scene and prevent the establishment of visual contact between the camera and the target object. They can have a geometric shape of varying complexity.
– Surveillance devices. They can have different operation algorithms based on different observation functions.
The software package contains three groups of classes. Description classes contain the descriptions of the entities and objects of the scene (Fig. 2). This class group contains:
• Classes for describing and building a scene grid. The scene grid is a set of nodes whose location in space depends on the grid construction method; it is intended for the subsequent formation of the graph of the entire scene. At present, the "table" grid has been implemented, whose nodes in each of the three planes are equidistant from their neighbors, as well as the "projection" grid, in which the nodes are projected onto the surface of a complex object (this grid is intended for use on an uneven landscape).
– Classes for describing a scene and its objects:
– The Scene class allows building the needed scene, as well as managing objects on it.


– The Scene_Object class is the main parent class for all objects in the scene. It allows "binding" an object to the grid nodes (determining which of the nodes the object belongs to).
– SurveillanceDeviceController is the base class for all sensors. Class objects have characteristics common to all sensor types, such as the spatial position and methods of obtaining the grid nodes that are "observed" by this device. The class heirs are SurveillanceCameraController, SurveillanceMicrophoneController, SurveillanceProximityController and SurveillanceSensitivityController, which describe the logic of the sensors: respectively, the camera, the microphone, the motion sensor and the sensitivity sensor. All sensors have the characteristics of real devices, which are taken into account when determining the observability of nodes by the sensor.
• Classes used to build the visibility graph:
– The Graph class describes the visibility graph and is used when building a scene model.
– The Node, Edge and Light classes are used to describe nodes, edges and lighting, respectively. Nodes store their own position in space, illumination and a list of adjacent edges of a certain weight.

Fig. 2. The structure of the main classes of the software.

• Logic classes are designed to perform calculations; they describe the optimization algorithm for solving the problem of determining the optimal location of cameras in space.
– Computation_Center contains algorithms for calculating parameters and properties of a graph (an object of the Graph class).
– The static class Evaluation_Function contains the evaluation functions of object visibility, which are used in the formation of the graph, namely in determining the weights of the edges connecting the sensors and targets.
– The Graph_Builder class allows building a weighted oriented graph using the existing scene grid (Grid) and scene objects (Scene_Object).


Algorithms for the calculations related to determining the blind zones for a given configuration of sensors are placed in the heirs of the ComputationModuleController class (Fig. 3). These classes are also intended to display the control interfaces of these modules.

Fig. 3. Classes of modules for calculating blind zones and surveillance camera positions.

• Rendering classes (Fig. 4) perform a single function: creating a graphic representation of an existing scene and its models.

Fig. 4. Logic and rendering classes.

– The main class Scene_UI manages the graphical representation of the scene and provides user interaction.
– The GridRenderer drawing class makes it possible to create a visible representation of the graph and of the scene grid. To display nodes and the links between them, this class uses the NodeConnectionRenderer helper class.
– A visual representation of the camera’s connection with the grid or graph is displayed on the screen via the SurveillanceDeviceRenderer class, which displays the links between the camera and the visible grid nodes of the scene.
– The SurveillanceDeviceLight class displays a visual representation of the sensitivity of a grid node on the screen, showing the connections between the camera and the illuminated grid nodes of the scene.

4 Results
The task of calculating the sensitivity of a camera in an illuminated three-dimensional dynamic scene can be considered as one of the steps in determining the effectiveness of a specific configuration of a camera-based surveillance system.


V. Pechenkin et al.

The developed software uses formula (5) to determine the degree of visibility of a scene position for a given configuration of cameras. Figure 5 shows the visualization of the calculated blind zones and the visibility of the scene positions.

Fig. 5. 3D scene with two surveillance cameras and light sources.

Optimization of the placement of surveillance cameras to improve the efficiency of the entire system should be addressed with a more accurate accounting of the placement of light sources. The current version of the software uses the rather simply calculated coefficient N in formula (6) to account for the illumination of the scene and of its specific positions. The mutual arrangement of the light sources and their parameters can also serve as a source for optimizing a particular camera layout configuration.

Fig. 6. Heat map for the visibility of the 3D-surface nodes.


Figure 6 shows the heat map of the visibility of the nodes of a real surface modeled in the developed software. The heat map is based on the sensitivity function (6) of two surveillance cameras in an illuminated three-dimensional dynamic scene. The visibility of the nodes on the map is displayed in a limited range defined by formula (1). The most visible nodes approach a value of 1, invisible nodes a value of 0.
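A heat map of this kind can be sketched in two steps: combine the per-camera sensitivities of each node into one value in [0, 1], then map that value to a colour. The combination rule below (complement product) is an illustrative assumption standing in for the paper's sensitivity function (6):

```python
def node_visibility(weights):
    """Combine per-camera sensitivities in [0, 1] for one node: the node is
    treated as invisible only if it is invisible to every camera. This
    complement-product rule is an assumption, not the paper's formula (6)."""
    miss = 1.0
    for w in weights:
        miss *= (1.0 - w)
    return 1.0 - miss

def heat_color(value):
    """Map a visibility value in [0, 1] to a blue-to-red heat-map colour
    (R, G, B): invisible nodes render blue, fully visible nodes red."""
    v = max(0.0, min(1.0, value))
    return (v, 0.0, 1.0 - v)
```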

5 Conclusion
The article proposes a way to formalize the problem of optimal placement of video surveillance sensors via a special visibility graph defined for a complex 3D scene. A mathematical model is presented that estimates the visibility of the nodes of the grid superimposed on the scene, taking into account the parameters of the observation sensors, the light sources, and their positions. On the basis of the developed model, a visibility heat map for the entire scene is constructed. The heat map visualization makes it possible to see the blind zones for a given configuration of sensors. The developed software allows the user to interactively edit the 3D scene and configure the placement and parameters of the sensors and light sources. Each proposed sensor configuration can be evaluated both visually and quantitatively. The software was implemented in the Unity 3D environment.


Search of Optimum Conditions of Plating Using a Fuzzy Rule-Based Knowledge Model
Denis Solovjev1,2, Alexander Arzamastsev1, Inna Solovjeva2, Yuri Litovka2, Alexey L’vov3, and Nina Melnikova3

1 Tambov State University Named After G.R. Derzhavin, Tambov, Russian Federation {solovjevdenis,arz_sci}@mail.ru
2 Tambov State Technical University, Tambov, Russian Federation [email protected], [email protected]
3 Yuri Gagarin State Technical University of Saratov, Saratov, Russian Federation [email protected], [email protected]

Abstract. The paper discusses existing approaches to modeling the thickness distribution of plating. However, these approaches do not take into account the experience, knowledge, and intuition of the decision makers in the process of searching for the optimal conditions of the technological process of electroplating. The authors propose an original approach to the search for the optimum plating conditions using a rule-based knowledge model, with the aim of reducing the uneven thickness distribution on the product. The structural schemes of the traditional galvanic process control system and of a system based on a rule-based knowledge model are studied. The system with the fuzzy rule-based knowledge model makes it possible to obtain a predetermined plating unevenness with a high degree of adequacy. Keywords: Electroplating process · Unevenness of plating · Mathematical model · Rule-based model of knowledge · Decision-maker · System of fuzzy rules · Object of control · System of control · Decision support system

1 Introduction
The choice of the optimal process conditions of electroplating must ultimately eliminate manufacturing defects of the product. Rejecting defective articles ensures the minimum cost per unit of output, which is the main criterion for the evaluation of the galvanic process. The main causes of a defective product are its appearance, uneven coating thickness, coating porosity, and the adhesion strength of the coating to the substrate. Obtaining galvanic coatings with the desired characteristics involves processing and analyzing large amounts of experimental and statistical information. It is necessary to select and control the modes of electrolysis and the composition of the electrolyte. The technologist should optimize the configuration of tubs, anodes and protective screens to implement the optimum conditions of plating on the product.
© Springer Nature Switzerland AG 2019. O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 563–574, 2019. https://doi.org/10.1007/978-3-030-12072-6_46
Modeling of technological processes of producing galvanic coatings with the desired


characteristics is discussed in numerous papers. A mathematical model that takes into account the influence of geometrical and electrochemical factors was considered in [1]. Another mathematical model, of the dynamics of mixing and of the intensity upgrade of the electrolyte in the contact area, was developed in [2]. In [3], a mathematical model describing the thickness distribution on the product with the use of fractals is given. However, the search for the optimum conditions of plating on a product belongs to a class of poorly structured and multi-criteria problems, in some cases not subject to formalization at all. Therefore, the optimal conditions in problems of this class can be found only with a combination of the experience, knowledge, and intuition of the decision maker (DM). The optimal conditions can be found using methods of data mining, reasoning on the basis of rule-based knowledge models, simulation modeling, evolutionary computing and genetic algorithms, neural networks, situational analysis, and cognitive modeling. These methods are implemented in the development of special decision support systems (DSS), which are based on modern computer technology. The use of an artificial neural network to predict the growth rate of the coating thickness on the product surface was considered in [4]. The article [5] discusses the choice of metal plating from the point of view of economic, physical-mechanical, technological, and ecological criteria based on the analytic hierarchy method. The work [6] compares the effectiveness of genetic algorithms with statistical regression with respect to the task of identifying the factors that have the greatest influence on the characteristics of galvanic coatings. Nevertheless, there is currently no work on searching for the optimal conditions for applying electroplated coatings to a product that is based on rule-based knowledge models.
The aim of this work is to search for the optimum conditions of electroplating on a product with a decision-support system that uses rule-based knowledge models to prevent spoilage of the product. The first, relatively short description of the proposed research was given in [7]. In this paper, a detailed description of the obtained results is offered for discussion.

2 Searching the Optimal Plating Process Conditions: Problem Statement
Consider the problem of searching for the optimum conditions of the electroplating process to reduce the uneven distribution of the coating thickness on the product. The unevenness of the coating thickness may be evaluated by the criteria proposed by L.I. Kadaner in [8]. The unevenness of the plating is determined by the ratio of the mean thickness δ̄ of the metal deposited on the surface Sc of the product to a predetermined minimum thickness δmin:

R = δ̄ / δmin,   (1)

where δmin and δ̄ are the minimum and average thickness of the coating.


Calculation of the coating average thickness can be based on Faraday’s law:

δ̄ = E · η(i, t, C1, C2, …, Cj, …) · i · T / q,   (2)

where E is the electrochemical equivalent of the metal; T is the duration of the electroplating process; q is the density of the coating metal; η is the current efficiency of the metal deposition, obtained by processing experimental data. The parameter η depends, as a rule, on the current density i, the temperature t of the electrolyte, and the concentration Cj of the j-th component of the electrolyte. The current density and the temperature of the electrolyte have a major influence on the average coating thickness, as follows from formula (2). It is established that, for electrolytes with a monotonous polarization curve, a temperature increase raises the conductivity of the solution, while the scattering ability decreases with increasing temperature. According to the Tafel equation, the current density can have either a positive or a negative impact on the scattering ability. If at high current densities the current efficiency is lower than at low current densities, this ratio has a positive effect on the scattering ability. To find the optimal operating parameters it is necessary to solve the following problem: find the current density i* and the temperature t* of the electrolyte which deliver the minimum value of the unevenness criterion (1) for the coating on the product.
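Formulas (1) and (2) translate directly into code. A minimal sketch; the numeric values in the example call are illustrative only (E and q for nickel are taken from Sect. 6, while η, i and T are invented for the example):

```python
def average_thickness(E, eta, i, T, q):
    """Average coating thickness by Faraday's law, formula (2):
    delta = E * eta * i * T / q (the current efficiency eta, which in the
    paper depends on i, t and the C_j, is passed here as a plain number)."""
    return E * eta * i * T / q

def unevenness(delta_avg, delta_min):
    """Coating unevenness criterion, formula (1): R = delta_avg / delta_min."""
    return delta_avg / delta_min

# Illustrative call: E = 1.024 g/(A*h), q = 8900 g/dm^3 (nickel, Sect. 6);
# eta = 0.95, i = 2 A/dm^2, T = 1 h are assumed numbers.
delta = average_thickness(1.024, 0.95, 2.0, 1.0, 8900.0)
```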

3 The Conventional Mathematical Model of the Plating Process
A mathematical model with distributed coordinates is presented in [9]. In order to solve the problem, the average current density is defined as follows:

ī = (1 / Sc) ∫Sc i(x, y, z) dSc,   (3)

where (x, y, z) are the coordinates of a point in the space of the plating bath belonging to the surface Sc of the product. The current density on the product surface is calculated on the basis of Ohm’s law, which has the following differential form:

i(x, y, z) = χ · grad φ(x, y, z)|Sc,   (4)

where χ is the inherent electrolyte conductivity; φ is the potential distribution of the electric field in the space of the plating bath. The potential distribution of the electric field is defined by the Laplace partial differential equation, which can be written in the following form:

∂²φ(x, y, z)/∂x² + ∂²φ(x, y, z)/∂y² + ∂²φ(x, y, z)/∂z² = 0   (5)


with the following boundary conditions:

• on the border “electrolyte–insulator”:

∂φ(x, y, z)/∂n |Sins = 0;   (6)

• on the border “electrolyte–anode”:

φ(x, y, z) + Fa(i)|Sa = U;   (7)

• on the border “electrolyte–cathode”:

φ(x, y, z) − Fc(i)|Sc = U,   (8)

where Sins is the surface of the galvanic bath walls; n is the normal to this surface; U is the voltage; Fa, Fc are the functions of the anode and cathode polarization, respectively. To solve the set of Eqs. (3)–(8) one can use the methods of finite differences or finite elements, iterative methods, and methods for calculating integrals over curved surfaces. The solution of the optimization problem is assumed to be obtained using efficient nonlinear programming algorithms.
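As a minimal illustration of the finite-difference route (not the authors' solver), the sketch below runs a Jacobi relaxation for a two-dimensional analogue of Eq. (5) on a rectangular bath cross-section: fixed potentials on the anode and cathode edges, insulated walls per condition (6); the polarization terms of (7)–(8) are omitted for brevity:

```python
import numpy as np

def solve_laplace_2d(nx, ny, u_anode, u_cathode, iters=2000):
    """Jacobi relaxation of the discrete Laplace equation on an nx-by-ny grid.
    Left edge: anode potential; right edge: cathode potential; top/bottom:
    insulated walls enforced by mirroring (zero normal derivative)."""
    phi = np.zeros((ny, nx))
    phi[:, 0] = u_anode
    phi[:, -1] = u_cathode
    for _ in range(iters):
        phi[0, :] = phi[1, :]        # insulated wall: d(phi)/dn = 0
        phi[-1, :] = phi[-2, :]
        # five-point stencil average over the four neighbours
        phi[1:-1, 1:-1] = 0.25 * (phi[1:-1, :-2] + phi[1:-1, 2:]
                                  + phi[:-2, 1:-1] + phi[2:, 1:-1])
        phi[:, 0] = u_anode          # re-impose Dirichlet boundaries
        phi[:, -1] = u_cathode
    return phi
```

On this simplified geometry the potential relaxes toward the linear drop between the electrodes; the real problem adds the nonlinear polarization functions on the electrode boundaries.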

4 Rule-Based Model for Searching the Optimal Plating Process Conditions
The solution of Eqs. (3)–(8) is a nontrivial and time-consuming task. In this regard, the use of a fuzzy rule-based knowledge model [10] is proposed to solve this problem. The current density i and the electrolyte temperature t serve as the input linguistic variables for this model. For the variable i we define the following membership functions: µ1(i) is “low”; µ2(i) is “average”; µ3(i) is “high”. Similar functions are selected for the variable t, namely: µ1(t) is “low”; µ2(t) is “average”; µ3(t) is “high”. The change in the current density Δi and the change in the electrolyte temperature Δt serve as the output linguistic variables. These variables have the term membership functions: µ1(Δi) is “decrease”; µ2(Δi) is “do not change”; µ3(Δi) is “increase”; µ1(Δt) is “decrease”; µ2(Δt) is “do not change”; µ3(Δt) is “increase”. The ranges of the linguistic variables and the specific form of the membership functions of the terms depend on the metal coatings and the electrolytes used in the electroplating processes. The set of membership functions defines a rule-based model for the search of the operational parameters of electroplating in terms of fuzzy logic. The knowledge base of the rule-based model contains a system of rules based on Mamdani-type conditional statements written in the form “IF… THEN…”. The system of rules generates the output values of the variables, based on the values of the input variables, in the following way:
1. IF i = “high” AND t = “low” THEN Δi = “decrease” AND Δt = “increase”;
2. IF i = “high” AND t = “high” THEN Δi = “decrease” AND Δt = “decrease”;


3. IF i = “average” AND t = “average” THEN Δi = “do not change” AND Δt = “do not change”;
4. IF i = “high” AND t = “average” THEN Δi = “decrease” AND Δt = “do not change”;
5. IF i = “average” AND t = “high” THEN Δi = “do not change” AND Δt = “decrease”;
6. IF i = “low” AND t = “average” THEN Δi = “increase” AND Δt = “do not change”;
7. IF i = “average” AND t = “low” THEN Δi = “do not change” AND Δt = “increase”;
8. IF i = “low” AND t = “NOT high” THEN Δi = “increase” AND Δt = “do not change”;
9. IF i = “NOT high” AND t = “low” THEN Δi = “do not change” AND Δt = “increase”.

The defuzzification for this system of rules is proposed to be carried out by the “centre of gravity” method, which is the most suitable one for solving optimization problems:

Δi* = ∫_{Δi_min}^{Δi_max} Δi · µR(Δi) dΔi / ∫_{Δi_min}^{Δi_max} µR(Δi) dΔi,   (9)

Δt* = ∫_{Δt_min}^{Δt_max} Δt · µR(Δt) dΔt / ∫_{Δt_min}^{Δt_max} µR(Δt) dΔt,   (10)

where Δi_min, Δi_max, Δt_min, Δt_max are the ranges of the output variables; µR is the final membership function of the fuzzy set of the output variable.
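In a discretized implementation the integrals in (9)–(10) become sums over a sampled output range; a sketch of the centre-of-gravity step:

```python
def centroid(xs, mu):
    """Discrete centre-of-gravity defuzzification: the sampled analogue of
    formulas (9) and (10). `xs` are sample points of the output range,
    `mu` the aggregated membership values at those points."""
    num = sum(x * m for x, m in zip(xs, mu))
    den = sum(mu)
    return num / den if den else 0.0
```

For a membership profile symmetric about a point, the centroid coincides with that point, which is why the method behaves well as an optimization step.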

5 Block-Diagram of the Electroplating Process Control System
Consider the process control systems of electroplating: (a) using a traditional mathematical model and (b) using a fuzzy rule-based model to search for the optimal conditions. To do this, we represent the galvanic process as a control object, selecting a finite set of input x(s) and output y(s) coordinates as well as external perturbations f(s) and control actions u(s). The input x(s) of the control object receives the following information: the configuration of the anodes Sa and of the surface Sc of the product, and the quality H of the preliminary surface preparation of the parts. The output y(s) of the control object is information about the appearance, uneven coating thickness, porosity, and adhesion of the coating to the substrate.


The current density i on the surface of the product, the temperature t of the electrolyte, the concentration Cj of the j-th component of the electrolyte, the pH of the electrolyte, the level L of the electrolyte solution, and the number N of products and their location F in the galvanic bath are set as the control actions u(s) on the object. Among the measurable and non-measurable disturbances f(s) are: the presence of electrolyte impurities P, surface defects D of the product, the ablation Q of the electrolyte from the bath on the surface of the product, the evaporation E of the electrolyte from the bath, interrupted electrical contact B in the coating process, and the experience O of the operator of the galvanic lines. External disturbances are stochastic in nature, but their effect on the galvanic process can be reduced or prevented. It is necessary to pay attention to the choice of electroplating equipment for the preparation and application of coatings and to the maintenance of the equipment. Periodic analysis of the composition of the electrolytes is required. Depending on the operating conditions of the product, it is necessary to improve the inspection of the part surface. Timely training and refresher courses for the operators of the electroplating lines should also be conducted. Traditional control systems (Fig. 1a) usually use mathematical models in conjunction with optimization algorithms to generate commands z(s) for the control devices (power supplies, heating elements, pumps for correction of the electrolyte, mechanisms for the swing rods, motor-reducers, positioning of suspensions, etc.).

Fig. 1. The block diagram of a classical control system (a) and of a decision-support system (b)

A block diagram of a control system using a decision support system (DSS) based on knowledge is presented in Fig. 1b. The DSS should form recommendations v(s) for decision-makers to eliminate a specific type of defect based on


information about the input and output coordinates, as well as the control action. The recommendations v(s) contain not only rules with exact values of the input and output variables, but also fuzzy values. The form of these recommendations v(s) can be: “decrease/increase the acidity of the electrolyte”, “decrease/increase the electrolyte level”, “increase/decrease the surface of the anodes”, “clean the electrolyte from impurities”, “install protective screens”, “improve the surface preparation”, “troubleshoot the electrical contact”, etc. We want to evaluate the effectiveness of the traditional control system and of the control system that uses the DSS. To do this, we draw a comparison with experimental results on concrete examples.

6 Materials and Methods
Consider one of the most common electroplating processes: nickel plating in a sulfate electrolyte. The high concentration of nickel salts in the sulfate electrolyte makes it possible to increase the cathodic current density and, consequently, the productivity of the process. The most common composition of the sulfate electrolyte for the galvanic process is the following, g/l: NiSO4·7H2O — 240–250; NaCl — 22.5; H3BO3 — 30. Nickel plating is carried out at temperatures of 50–60 °C and current densities of 0.1–10 A/dm². The constants for the calculation equations of the traditional mathematical model have the following values: electrochemical equivalent E = 1.024 g/(A·h); density q = 8900 g/dm³. According to [10], the functions of the anode and cathode polarization, as well as the current efficiency for nickel plating in the sulfate electrolyte at the current concentrations of the electrolyte components and its acidity pH from 4.5 to 5.5, are of the form shown in Fig. 2. For the control system using the DSS, we selected a triangular form for the membership functions of the term sets of the input variables:

µm(v; a, b, c) =
  0,               v ≤ a;
  (v − a)/(b − a), a < v ≤ b;
  (c − v)/(c − b), b < v ≤ c;
  0,               v > c,   (11)

where [a, c] is the interval of the fuzzy set; b is its nucleus; m is the number of the function; v is the input variable. For the current density, the membership functions µ1(i; –; 0; 4), µ2(i; 2; 5; 8), µ3(i; 6; 10; –) are shown in Fig. 3a, and the corresponding membership functions µ1(t; –; 50; 54), µ2(t; 52; 55; 58), µ3(t; 56; 60; –) for the electrolyte temperature are given in Fig. 3b.
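Formula (11) is straightforward to implement; a sketch, with the parameter triple matching, e.g., the “average” current-density term µ2(i; 2; 5; 8) from the text:

```python
def tri_mf(v, a, b, c):
    """Triangular membership function of formula (11) on the interval [a, c]
    with nucleus b; returns the membership degree of input v."""
    if v <= a or v > c:
        return 0.0
    if v <= b:
        return (v - a) / (b - a)
    return (c - v) / (c - b)
```

The open-ended “low” and “high” shoulder terms (with a dash in place of a or c) would keep the membership at 1 beyond the nucleus and are omitted here.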


Fig. 2. The dependencies of the anodic and cathodic polarization (a) and of the current efficiency (b) on the current density and temperature for nickel plating in sulfate electrolyte

A bell-shaped (π-curve) type of membership function was selected for the term sets of the output variables:

µm(w; d, g, h) = 1 / (1 + |(w − h)/d|^(2g)),   (12)

where d, g and h are the coefficients of concentration, slope, and the maximum of the membership function, respectively; m is the number of the function; w is the output variable.


The corresponding membership functions for the current density variation and the electrolyte temperature variation have the following form: µ1(Δi; 0.2; 1; −1), µ2(Δi; 0.2; 1; 0), µ3(Δi; 0.2; 1; 1), and µ1(Δt; 0.38; 0.9; −2), µ2(Δt; 0.38; 0.9; 0), µ3(Δt; 0.38; 0.9; 2); they are shown in Fig. 3c, d, respectively. Charts with the results of the fuzzy inference rule system are shown in Fig. 4 for the example of the exact input values i = 2.4 A/dm² and t = 57.9 °C. Figure 5a, b shows the surfaces of the changes of the output variables Δi and Δt as the values of the input variables are varied. The experimental study used T-, V- and Z-shaped work pieces with surface areas Sc = 1 × 100 dm². The parts underwent galvanic nickel plating to the same specified coating thickness δmin = 9 µm. The average thickness of the deposited nickel coating was measured according to State Standard 9.302-88 with a “Constant K5” thickness gauge. The thickness gauge has a measurement error of ±1 µm in the range 0…100 µm with a resolution of 0.1 µm. To reduce the influence of random factors, a series of experiments was conducted and the coating-layer thicknesses measured at geometrically equivalent points on the surface of the work piece were averaged.

Fig. 3. Membership functions of the term sets for the input (a, b) and output (c, d) variables


Fig. 4. The results of the operation of the rule system

Fig. 5. The surfaces of the fuzzy output Δi (a) and Δt (b) for the fuzzy model

More accurate values of the coefficients of the membership functions of the term sets of the input and output variables were chosen by minimizing the deviation of the calculated coating thicknesses from the experimental measurements for each of the T-, V- and Z-shaped work pieces.

7 Results and Discussion
The deviations of the experimentally measured average coating thicknesses from those calculated by the conventional mathematical model (Δδ_exp^mm) and by the fuzzy rule-based model (Δδ_exp^fl) are shown in Fig. 6.


Fig. 6. The results of the comparison of the experimental deviations of the average values of the coating thickness from the predicted ones for the studied T-, V- and Z-shaped work pieces

For the T-shaped work piece, the calculation error of the fuzzy rule-based model is 2% higher than that of the traditional model. In the other cases (the V- and Z-shaped work pieces), the fuzzy rule-based model gave a lower error (by 5% and 3%, respectively). Thus, for some shapes of work pieces the fuzzy model yields a more accurate average value of the coating thickness than the conventional mathematical model.

8 Conclusion
Recently, the intellectualization of the stages of modeling technological processes for producing galvanic coatings with the desired characteristics has been increasingly in demand. This work shows that obtaining galvanic coatings with a given uniformity is possible, with a high degree of adequacy, not only with control systems based on traditional mathematical models but also with those based on fuzzy rule-based models. To improve the calculation accuracy of traditional mathematical models, it is necessary to remove assumptions and to increase the number of equations and the number of grid nodes along each coordinate. Improving the calculation accuracy of the fuzzy rule-based model is a much simpler task: it requires only a change in the form of the membership functions of the terms and an increase in their number and in the number of inference rules.

References
1. Robison, M.R., Free, M.L.: Modeling and experimental validation of electroplating deposit distributions from copper sulfate solutions. ECS Trans. 61, 27–36 (2014)
2. Filzwieser, A., Hein, K., Mori, G.: Current density limitation and diffusion boundary layer calculation using CFD method. J. Miner. Metals Mater. Soc. 54(4), 28–31 (2002)


3. Zhou, J.G., He, Z., Guo, J.: Fractal growth modeling of electrochemical deposition in solid freeform fabrication. In: Proceedings of 10th Solid Freeform Fabrication Symposium, pp. 229–238. University of Texas at Austin (1999)
4. Sánchez, L.F., Vilán Vilán, J.A., García Nieto, P.J., Coz Díaz, J.J.: The use of design of experiments to improve a neural network model in order to predict the thickness of the chromium layer in a hard chromium plating process. Math. Comput. Model. 52(7–8), 1169–1176 (2010)
5. Kaoser, M.M., Mamunur, R.M., Ahmed, S.: Selecting a material for an electroplating process using AHP and VIKOR multi attribute decision making method. In: International Conference on Industrial Engineering and Operations Management, pp. 834–841 (2014)
6. Ossman, M.E., Sheta, W., Eltaweel, Y.: Linear genetic programming for prediction of nickel recovery from spent nickel catalyst. Am. J. Eng. Appl. Sci. 3(2), 482–488 (2010)
7. Solovjev, D.S., Solovjeva, I.A., Litovka, Yu.V., Arzamastsev, A.A., Glazkov, V.P., L’vov, A.A.: Using fuzzy rule-based knowledge model for optimum plating conditions search. In: XI International Conference on Mechanical Engineering, Automation and Control Systems, IOP Conference Series: Materials Science and Engineering, vol. 327, p. 022045 (2018). https://doi.org/10.1088/1757-899x/327/2/022045
8. Kadaner, L.I.: The Uniformity of Electroplated Coatings. Publishing House of the Kharkov State University, Kharkov (1961)
9. Subramanian, V.R., White, R.E.: Simulating shape changes during electrodeposition: primary and secondary current distribution. J. Electrochem. Soc. 149(10), 498–505 (2000)
10. Phukon, L.J., Baruah, N.: Design of fuzzy logic controller for performance optimisation of induction motor using indirect vector control method. Int. J. Electr. Electron. Data Commun. 3(2), 72–78 (2015)
11. Kudryavtsev, N.T. (ed.): Prikladnaya Elektrokhimiya [Applied Electrochemistry]. Khimia, Moscow (1975). (In Russian)

Part II

Mathematical Modelling for Industry and Research

Mathematical Model of Adaptive Control in Fuel Supply Logistic System
Ekaterina Kasatkina, Denis Nefedov, and Ekaterina Saburova
Kalashnikov Izhevsk State Technical University, Izhevsk, Russia
[email protected], [email protected], [email protected]

Abstract. Optimal control of a fuel supply system boils down to choosing an energy development strategy which provides consumers with the most efficient and reliable fuel and energy supply. As a part of the program on switching the distributed heat supply control system of the Udmurt Republic to renewable energy sources, an “Information-analytical system of regional alternative fuel supply control” was developed. The information-analytical system is designed to deal with problems of optimal control of the regional distributed fuel supply system of the Udmurt Republic. In order to increase the effectiveness of the regional fuel supply system, a modification of the information-analytical system and an extension of its set of functions with methods of quick response to emergencies are required. The object of the research is the logistic distributed fuel supply system consisting of three interconnected levels: raw material accumulation points, fuel preparation points, and fuel consumption points, which are heat sources. A mathematical model of optimal control of the fuel supply logistic system is introduced. Emergencies which occur at any one of these levels demand that the control of the whole system be reconfigured. The paper demonstrates models and algorithms of optimal control in case of an emergency involving the breakdown of such production links of the logistic system as raw material accumulation points and fuel preparation points. The implementation of the developed algorithms is based on genetic optimization algorithms, which make it possible to obtain a more accurate solution in less time. The developed models and algorithms are integrated into the information-analytical system, which enables effective control of the alternative fuel supply of the Udmurt Republic in case of emergency. Keywords: Genetic algorithm · Optimal control · Fuel supply · Mathematical modeling · Alternative energy

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 577–593, 2019. https://doi.org/10.1007/978-3-030-12072-6_47

578

E. Kasatkina et al.

1 Introduction

The development of the fuel and energy complex (FEC) and its inner industry systems is a complex multistage process that covers decision-making issues along all production stages: from the extraction of raw materials to their processing, transportation and final consumption. System analysis and optimal control of a fuel supply system boil down to choosing an energy development strategy that provides consumers with the most efficient and reliable fuel and energy supply. In 2010 the program on switching the distributed heat supply control system to renewable energy sources was launched in the Udmurt Republic [1]. This paper describes the developed program "Information-analytical system of regional alternative fuel supply control" [2]. The information-analytical system (IAS) is designed to deal with problems of optimal control of the regional distributed fuel supply system of the Udmurt Republic. The structure of the IAS comprises three main blocks: an information subsystem, an analytical subsystem and a geoinformation subsystem. To increase the performance of the regional fuel supply system, a modification of the IAS and an extension of its set of functions with methods of quick response to emergencies are required.

2 Mathematical Model of Optimal Control of Fuel Supply System

The logistic scheme of supplying heat sources with fuel consists of three levels [3]. At logging and woodworking enterprises, as well as in timber harvesting zones, wood raw materials are produced and then transported to the raw material accumulation points (RMAP); this is the first level. At the RMAPs primary processing of wood raw materials is executed. The collection of raw materials at an RMAP begins at the time $t^+_{RMst}$ and uniformly runs until the time $t^+_{RMend}$. The outflow of raw materials to fuel preparation points starts at the time $t^-_{RMst}$ and runs until the time $t^-_{RMend}$. Basic technological operations related to fuel preparation take place at the second level. The second level comprises fuel preparation points (FPP), where primary processed wood raw materials are sorted, cut into small pieces, heat-treated and packed. After that the finished fuel is delivered to the heat sources in the region; this is the third level of the logistic system. The time required to supply fuel to heat sources is determined by the interval $[t^+_{Fst}, t^+_{Fend}]$. Fuel consumption at heat sources occurs during the heating season $[t^-_{Fst}, t^-_{Fend}]$ and is determined by their loading and the change of temperature during the heating season. Every level of the logistic system includes warehouses for raw material storage. The diagram of raw material movement on the different levels of the logistic system is demonstrated in Fig. 1.

Mathematical Model of Adaptive Control in Fuel Supply Logistic System

579

Fig. 1. The scheme of raw material and fuel movement on different levels of logistic system

The solving of the fuel supply logistic system design problem consists of four stages, each of which boils down to dealing with certain tasks: routing [4, 5], clustering, optimal distribution of resources and stock control [6–9]. As a result of consistent execution of all design stages, a fuel supply logistic system is developed with defined locations for all objects and their links, as well as the volumes and performances of raw material and fuel preparation and consumption at every object of the system [10]. The given logistic system contains $M$ raw material accumulation points, $N$ fuel preparation points and $L$ heat sources.

Let $\tilde{Q}^{RMAP}_{RMi}$, $\tilde{Q}^{FPP}_{RMk}$, $\tilde{Q}^{FPP}_{Fk}$, $\tilde{Q}^{H}_{Fj}$ denote the current volume of wood raw materials at the $i$th RMAP ($i = \overline{1,M}$), the current volumes of wood raw materials and fuel at the $k$th FPP ($k = \overline{1,N}$) and the current volume of fuel at the $j$th heat source ($j = \overline{1,L}$) respectively, t. of n. f.; $q^{+RMAP}_{RMi}$, $q^{+H}_{Fj}$ are the speeds of wood raw material replenishing at the $i$th RMAP and of refueling at the $j$th heat source, t. of n. f./day; $q^{-RMAP}_{RMi}$, $q^{-H}_{Fj}$ are the speeds of wood raw material consuming at the $i$th RMAP and of fuel consuming at the $j$th heat source, t. of n. f./day; $q^{-FPP}_{RMk}$, $q^{+FPP}_{Fk}$ are the speeds of wood raw material consuming and refueling at the $k$th FPP, t. of n. f./day. The speed of raw material replenishing at the $k$th FPP is defined as the sum of the speeds of wood raw material consuming $q^{-RMAP}_{RM}$ at the RMAPs that supply that given FPP.

The fuel preparation line launches at the time $t^-_{RMst}$, when wood raw material is delivered to the FPP warehouse. Therefore, the speeds of wood raw material consuming and

580

E. Kasatkina et al.

refueling at the $k$th FPP are equal and are defined by the performance of the equipment $p_k(t)$, t. of n. f./day:

$$q^{-FPP}_{RMk}(t) = q^{+FPP}_{Fk}(t) = p_k(t), \quad k = \overline{1,N}. \qquad (1)$$

The fuel preparation line works at this performance under normal conditions. The performance of the equipment at an FPP can be increased, if needed, by $\gamma$ %, and the raw material and fuel warehouses have a corresponding capacity reserve:

$$p^{max}_k(t) = (1 + \gamma)\, p_k(t), \quad k = \overline{1,N}. \qquad (2)$$

The lack of capacity in case of emergency can thus be mitigated by increasing the capacity of the working equipment. The speeds of wood raw material replenishing at the RMAPs depend on the amount of deforestation approved by the forest plan. The amount of fuel consumed by heat sources during the heating season is not constant; the dynamics of fuel consumption are defined by the seasonality function $s(t)$:

$$q^{-H}_{Fj}(t) = q^{-H}_{Foj}\, s(t), \quad j = \overline{1,L}, \qquad (3)$$

where $q^{-H}_{Foj}$ is the specific fuel consumption at the $j$th heat source under uniform consumption during the heating season, t. of n. f./day. The system of equations describing the stock change on the different levels of the fuel supply logistic system is as follows:

$$\frac{d\tilde{Q}^{RMAP}_{RMi}}{dt} = q^{+RMAP}_{RMi}(t) - q^{-RMAP}_{RMi}(t), \quad i = \overline{1,M}, \qquad (4)$$

$$\sum_{j=1}^{N} \frac{d\tilde{Q}^{FPP}_{RMj}}{dt} = \sum_{i=1}^{M} q^{-RMAP}_{RMi}(t) - \sum_{j=1}^{N} q^{-FPP}_{RMj}(t), \qquad (5)$$

$$\sum_{j=1}^{N} \frac{d\tilde{Q}^{FPP}_{Fj}}{dt} = \sum_{j=1}^{N} q^{+FPP}_{Fj}(t) - \sum_{k=1}^{L} q^{+H}_{Fk}(t), \qquad (6)$$

$$\frac{d\tilde{Q}^{H}_{Fj}}{dt} = q^{+H}_{Fj}(t) - q^{-H}_{Fj}(t), \quad j = \overline{1,L}. \qquad (7)$$
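As a numerical illustration, the stock balances (4) and (7) together with the seasonal consumption law (3) can be integrated step by step; the sketch below does this with a simple forward-Euler scheme for a single RMAP and a single heat source. All rates, the horizon and the seasonality profile `s(t)` are made-up illustrative values, not data from the paper.

```python
# Forward-Euler integration of the stock balances (4) and (7) for one RMAP
# and one heat source; the rates and the seasonality profile s(t) below are
# illustrative assumptions only.

def s(day, season_len=180):
    # toy seasonality: consumption tapers linearly over the heating season
    return max(0.0, 1.5 - day / season_len)

def simulate(days=180, dt=1.0):
    q_plus_rm = 4.0   # raw-material replenishing rate at the RMAP, t.o.f./day
    q_minus_rm = 3.0  # raw-material outflow rate from the RMAP, t.o.f./day
    q_plus_h = 2.5    # refueling rate at the heat source, t.o.f./day
    q_h0 = 2.0        # specific consumption q_Foj^-H, t.o.f./day
    q_rmap, q_heat = 0.0, 50.0
    for step in range(int(days / dt)):
        t = step * dt
        q_rmap += (q_plus_rm - q_minus_rm) * dt        # Eq. (4)
        q_heat += (q_plus_h - q_h0 * s(t)) * dt        # Eqs. (3) and (7)
    return q_rmap, q_heat

stock_rmap, stock_heat = simulate()
print(round(stock_rmap, 1), round(stock_heat, 1))
```

With these toy rates the RMAP accumulates stock at a constant 1 t.o.f./day, while the heat source's stock dips early in the season (when $s(t) > 1.25$) and recovers later.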

Suppose that at the end of each period all fuel resources at the FPP warehouses and heat sources, as well as the wood raw material supplies at the RMAPs, are consumed without remainder. This implies the following balance equations:

$$\int_{t^+_{RMst}}^{t^+_{RMend}} q^{+RMAP}_{RMi}(t)\, dt = \int_{t^-_{RMst}}^{t^-_{RMend}} q^{-RMAP}_{RMi}(t)\, dt, \quad i = \overline{1,M}, \qquad (8)$$

$$\sum_{i=1}^{M} \int_{t^-_{RMst}}^{t^-_{RMend}} q^{-RMAP}_{RMi}(t)\, dt = \sum_{k=1}^{L} \int_{t^+_{Fst}}^{t^+_{Fend}} q^{+H}_{Fk}(t)\, dt, \qquad (9)$$

$$\int_{t^+_{Fst}}^{t^+_{Fend}} q^{+H}_{Fk}(t)\, dt = \int_{t^-_{Fst}}^{t^-_{Fend}} q^{-H}_{Fk}(t)\, dt, \quad k = \overline{1,L}, \qquad (10)$$

where $\Delta t^+_{RM}$, $\Delta t^-_{RM}$, $\Delta t^+_{F}$, $\Delta t^-_{F}$ are the periods of wood raw material and fuel replenishing and consuming respectively. Let us introduce the restrictions on the amount of stock at the warehouses, taking into account raw material humidity:

$$\beta_1 \int_{t^+_{RMst}}^{t} q^{+RMAP}_{RMi}(s)\, ds - \beta_2 \int_{t^-_{RMst}}^{t} q^{-RMAP}_{RMi}(s)\, ds \le V^{RMAP}_{RMi}, \quad i = \overline{1,M}, \qquad (11)$$

$$\sum_{i=1}^{M} \int_{t^-_{RMst}}^{t} q^{-RMAP}_{RMi}(s)\, ds - \int_{t^-_{RMst}}^{t} q^{-FPP}_{RM}(s)\, ds \le \frac{V^{FPP}_{RM}}{\beta_2}, \qquad (12)$$

$$\int_{t^-_{RMst}}^{t} q^{+FPP}_{F}(s)\, ds - \sum_{j=1}^{L} \int_{t^+_{Fst}}^{t} q^{+H}_{Fj}(s)\, ds \le \frac{V^{FPP}_{F}}{\beta_2}, \qquad (13)$$

$$Q^{Fr}_{Hj} + \int_{t^+_{Fst}}^{t} q^{+H}_{Fj}(s)\, ds - \int_{t^-_{Fst}}^{t} q^{-H}_{Fj}(s)\, ds \le \frac{V^{F}_{Hj}}{\beta_2}, \quad j = \overline{1,L}, \qquad (14)$$

where $\beta_1$, $\beta_2$ are ratios that define the number of bulk cubic meters of wood raw material per ton of standard fuel, bulk cub. m/t. of n. f.; $V^{RMAP}_{RMi}$ is the volume of the wood raw material warehouse at the $i$th RMAP, bulk cub. m; $V^{FPP}_{RM}$, $V^{FPP}_{F}$ are the volumes of the wood raw material and fuel warehouses at an FPP, bulk cub. m; $Q^{Fr}_{Hj}$ is the size of the reserve fuel supply at the $j$th heat source, t. of n. f.; $V^{F}_{Hj}$ is the volume of the fuel warehouse at the $j$th heat source, bulk cub. m.
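A warehouse constraint such as (14) can be monitored at every step of a stock simulation; the sketch below checks the heat-source warehouse over a planning horizon with constant daily flows. The value of $\beta_2$, the reserve stock and the warehouse volume are hypothetical numbers chosen only for the example.

```python
# Check the heat-source warehouse constraint (14): the reserve stock plus fuel
# received minus fuel burned must never exceed the warehouse capacity V / beta2.
# All numeric values are illustrative, not taken from the paper.

def heat_source_stock_ok(q_in, q_out, reserve, volume, beta2, dt=1.0):
    """q_in, q_out: per-day refueling/consumption profiles (t.o.f./day)."""
    stock = reserve
    cap = volume / beta2  # warehouse capacity expressed in t.o.f.
    for qi, qo in zip(q_in, q_out):
        stock += (qi - qo) * dt
        if stock > cap:
            return False
    return True

ok = heat_source_stock_ok(q_in=[3.0] * 30, q_out=[2.0] * 30,
                          reserve=10.0, volume=120.0, beta2=2.4)
print(ok)
```

In this example the capacity is 120 / 2.4 = 50 t.o.f.; the stock climbs from 10 to 40 over 30 days, so the schedule is feasible, whereas the same net inflow over a longer horizon would violate the constraint.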


Thus, Eqs. (4)–(14) describe the dynamics of stock change on the different levels of the fuel supply logistic system.

The operation of any organizational system always bears risk. When establishing fuel supply for a regional heat supply system, the focus should be on operational risks. The disruption of technological fuel preparation operations poses a threat to the security of the given region's energy resources and may cause not only significant financial losses but also serious social implications. High-quality risk analysis and control enables a prompt response to failures in fuel supply system operation and, hence, raises the level of energy security, while optimal control of emergencies minimizes the negative effects of disruption [11]. Operational risks in a regional fuel supply system depend primarily on the malfunctioning of logistic system components, so three types of risk can be distinguished: emergencies caused by RMAP breakdown, emergencies caused by FPP breakdown, and emergencies caused by heat source breakdown.

2.1 Optimal Control in Case of Emergency Related to RMAP Breakdown

Let us assume that at the time $t_{br}$, $m$ RMAPs broke down, and these accumulation points supplied raw materials to $n$ FPPs. To be specific, suppose that the broken RMAPs have indexes $1, 2, 3, \ldots, m$, and the corresponding FPP indexes are $1, 2, 3, \ldots, n$. Optimal control boils down to redistributing the supply of raw materials from the remaining $M - m$ RMAPs to all FPPs, so that the total expenditures in the system over the period of RMAP recovery $t_{rec}$ are minimal. During the period $t_{rec}$ the volume of raw materials that must be delivered from the $m$ broken RMAPs to the corresponding FPPs can be calculated as follows:

$$Q^{rec}_{RM} = \sum_{k=1}^{m} \int_{t_{br}}^{t_{br}+t_{rec}} q^{-RMAP}_{RMk}\, dt. \qquad (15)$$

As long as all wood raw materials at the RMAPs are consumed without remainder after each period, then, once the broken RMAPs are recovered, raw materials need to be distributed in the volume $Q^{rec}_{RM}$ from their warehouses between the FPPs that received fewer resources than expected because of the emergency. Moreover, raw materials can be transported from one of the $M - m$ working RMAPs to one of the $n$ FPPs in need of raw materials only if the raw material volume in this RMAP warehouse exceeds the amount needed during the period $t_{rec}$. Thus, the amount of raw materials which can be transported from the $i$th working RMAP to an FPP in need of raw materials is calculated as follows:

$$Q^{h}_{i} = \tilde{Q}^{RMAP}_{RMi} + \int_{t_{br}}^{t_{br}+t_{rec}} q^{+RMAP}_{RMi}\, dt - \int_{t_{br}}^{t_{br}+t_{rec}} q^{-RMAP}_{RMi}\, dt, \quad i = \overline{m+1, M}. \qquad (16)$$
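For constant flow rates the integrals in (15) and (16) reduce to simple products, which the following sketch computes. The stocks and rates are hypothetical example values.

```python
# Raw material to be re-routed after an RMAP breakdown: Eq. (15) gives the
# shortfall from the m broken points over the recovery window, Eq. (16) the
# surplus available at each working point. Constant (illustrative) rates are
# assumed, so each integral reduces to rate * t_rec.

def shortfall(broken_out_rates, t_rec):
    # Eq. (15): outflow the broken RMAPs can no longer deliver during t_rec
    return sum(q * t_rec for q in broken_out_rates)

def surplus(stock, q_in, q_out, t_rec):
    # Eq. (16): current stock + inflow - planned outflow over the recovery period
    return stock + (q_in - q_out) * t_rec

q_rec = shortfall([2.0, 1.5], t_rec=10.0)       # two broken RMAPs
q_h = [surplus(30.0, 4.0, 3.0, 10.0),           # a working RMAP
       surplus(12.0, 3.0, 3.5, 10.0)]           # another working RMAP
print(q_rec, q_h)
```

Here 35 t.o.f. must be re-routed, and the two working RMAPs can offer 40 and 7 t.o.f. respectively.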


The essence of optimal control of fuel supply in case of an emergency related to RMAP breakdown is to minimize the total expenditures during the recovery period of the RMAPs:

$$F\left(q^{-RMAP}_{RM(m+1)}, q^{-RMAP}_{RM(m+2)}, \ldots, q^{-RMAP}_{RMM}\right) \to \min, \qquad (17)$$

where $q^{-RMAP}_{RMi}(t)$ are the control functions. The additional expenses on fuel supply related to RMAP breakdown consist of two parts:

$$F = F^{I} + F^{II}. \qquad (18)$$

(1) The costs of raw material transportation from the working RMAPs to the FPPs in need of materials:

$$F^{I} = \sum_{i=m+1}^{M} \sum_{j=1}^{n} s^{RM}_{ij} Q^{h}_{i}, \qquad (19)$$

where $s^{RM}_{ij}$ are the specific transportation costs of wood raw material delivery from the $i$th RMAP to the $j$th FPP, rub./t. of n. f.

(2) The costs of raw material transportation from the recovered RMAPs to the FPPs that received less raw material than expected:

$$F^{II} = \sum_{i=1}^{m} \sum_{j=n+1}^{N} s^{RM}_{ij} Q^{h}_{i}, \qquad (20)$$

$$\sum_{i=m+1}^{M} Q^{h}_{i} = \sum_{i=1}^{m} Q^{h}_{i}. \qquad (21)$$

To solve the problem of optimal control of the regional fuel supply system in case of an emergency related to the breakdown of the logistic system's objects, genetic optimization algorithms adjusted to the current problems are used [12, 13]. A general algorithm for solving the problem of optimal control of the fuel supply system in case of an emergency related to RMAP breakdown is described by the flow chart in Fig. 2.

584

E. Kasatkina et al.

Fig. 2. The flow chart of an algorithm for solving the problem of optimal control of fuel supply system in case of emergency related to RMAP breakdown
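A minimal sketch of a genetic search in the spirit of the flow chart in Fig. 2, applied to the redistribution problem (17)–(19): an individual is a vector of shipped volumes from the working RMAPs, unmet demand is penalized, and the population evolves by selection and mutation. The encoding, operators, penalty weight and all numeric data are simplified illustrative assumptions, not the authors' implementation.

```python
import random

# Toy GA: choose how much raw material each of three working RMAPs ships to
# an FPP in need so that the demand Q_RM^rec is met at minimum transport cost
# (17)-(19). Costs, surpluses and demand are illustrative.
random.seed(1)

COST = [5.0, 8.0, 3.0]        # s_ij: specific transport costs, rub./t.o.f.
SURPLUS = [20.0, 30.0, 15.0]  # Q_i^h available at the working RMAPs
DEMAND = 40.0                 # volume that must be delivered, t.o.f.

def fitness(ind):
    # transport cost plus a heavy penalty for any unmet demand
    cost = sum(c * q for c, q in zip(COST, ind))
    short = max(0.0, DEMAND - sum(ind))
    return cost + 1e4 * short

def mutate(ind):
    # perturb one gene, clamped to [0, surplus of that RMAP]
    i = random.randrange(len(ind))
    child = list(ind)
    child[i] = min(SURPLUS[i], max(0.0, child[i] + random.uniform(-5.0, 5.0)))
    return child

pop = [[random.uniform(0.0, s) for s in SURPLUS] for _ in range(40)]
for _ in range(300):
    pop.sort(key=fitness)                       # selection: keep the best half
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]
best = min(pop, key=fitness)
print([round(q, 1) for q in best], round(fitness(best), 1))
```

The cheapest exact plan in this toy instance ships the full surplus of the lowest-cost RMAP first, giving a total cost of 185 rub.; the GA converges toward that value.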

2.2 Optimal Control in Case of Emergency Related to FPP Breakdown

Let us assume that at the time $t_{br}$ fuel preparation stopped at $n$ FPPs. These production points were provided with raw materials from $m$ RMAPs and supplied fuel to $l$ heat sources. To be specific, suppose that the broken FPPs have indexes $1, 2, 3, \ldots, n$, the corresponding RMAPs have indexes $1, 2, 3, \ldots, m$, and the heat sources have indexes $1, 2, 3, \ldots, l$. The objective of optimal control is to redistribute the supply of raw materials from the $m$ RMAPs to the $N - n$ working FPPs, and to redistribute the fuel supply to the $l$ heat sources, so that the total expenditures in the system during the period of FPP recovery $t_{rec}$ are minimal.


At the time $t_{br}$ the volume of fuel located at the FPPs is equal to:

$$\sum_{j=1}^{n} \tilde{Q}^{FPP}_{Fj} = \begin{cases} \sum\limits_{j=1}^{n} (t_{br} - t^{+}_{RMst})\, p_j, & t_{br} \le t^{+}_{Fst}, \\ \sum\limits_{j=1}^{n} (t_{br} - t^{+}_{RMst})\, p_j - \sum\limits_{j=1}^{n} (t_{br} - t^{+}_{Fst})\, q^{+H}_{Fj}(t), & t_{br} > t^{+}_{Fst}, \end{cases} \qquad (22)$$

and the volume of fuel at the corresponding heat sources is defined as follows:

$$\sum_{k=1}^{l} \tilde{Q}^{H}_{Fk} = \begin{cases} \sum\limits_{k=1}^{l} (t_{br} - t^{+}_{Fst})\, p_k, & t_{br} \le t^{-}_{Fst}, \\ \sum\limits_{k=1}^{l} (t_{br} - t^{+}_{Fst})\, p_k - \sum\limits_{k=1}^{l} (t_{br} - t^{-}_{Fst})\, q^{-H}_{Fk}(t), & t_{br} > t^{-}_{Fst}. \end{cases} \qquad (23)$$

Then the volume of fuel needed to power the heat sources during the FPP recovery time $t_{rec}$ is equal to:

$$\sum_{k=1}^{l} Q^{rec}_{Fk} = t_{rec} \sum_{k=1}^{l} p_k - \sum_{k=1}^{l} \tilde{Q}^{H}_{Fk}. \qquad (24)$$
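For constant rates the piecewise stock expression (23) and the recovery demand (24) are straightforward to evaluate; the sketch below does so for two heat sources with hypothetical times and rates.

```python
# Fuel picture at breakdown time t_br: Eq. (23) gives the stock already at
# the affected heat sources (piecewise in t_br), Eq. (24) the extra fuel they
# need over the recovery window t_rec. All numbers are illustrative.

def heat_stock(t_br, t_f_start_plus, t_f_start_minus, p, q_cons):
    """Piecewise stock per Eq. (23) for one heat source (constant rates)."""
    received = (t_br - t_f_start_plus) * p
    if t_br <= t_f_start_minus:          # heating season has not started yet
        return received
    return received - (t_br - t_f_start_minus) * q_cons

def recovery_need(t_rec, performances, stocks):
    # Eq. (24): demand over t_rec minus fuel already on hand
    return t_rec * sum(performances) - sum(stocks)

stocks = [heat_stock(40.0, 10.0, 30.0, 1.2, 0.9),
          heat_stock(40.0, 15.0, 30.0, 0.8, 0.7)]
need = recovery_need(30.0, performances=[1.2, 0.8], stocks=stocks)
print([round(s, 1) for s in stocks], round(need, 1))
```

With these numbers the two heat sources hold 27 and 13 t.o.f. at the moment of breakdown, and 20 t.o.f. must be sourced elsewhere during the 30-day recovery window.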

The essence of optimal control of fuel supply in case of an emergency related to FPP breakdown is to minimize the total expenditures during the FPP recovery period:

$$F\left(q^{-RMAP}_{RM1}, q^{-RMAP}_{RM2}, \ldots, q^{-RMAP}_{RMm}, q^{+H}_{F1}, q^{+H}_{F2}, \ldots, q^{+H}_{Fl}\right) \to \min, \qquad (25)$$

where $q^{-RMAP}_{RMi}(t)$, $q^{+H}_{Fk}(t)$ are the control functions. The total expenditures (25) consist of four parts: stock costs; organizational costs related to stock registration, its loading, discharging, etc.; storage costs; and the costs of shipping FPP raw materials and fuel to a certain heat source:

$$F = F^{I} + F^{II} + F^{III} + F^{IV}. \qquad (26)$$

(1) Stock costs:

$$F^{I} = \sum_{k=1}^{l} c_{Fk} \int_{t_{br}}^{t_{br}+t_{rec}} q^{+H}_{Fk}(t)\, dt, \qquad (27)$$

where $c_{Fk}$ is the cost of fuel delivery from an FPP to the $k$th heat source, rub./t. of n. f.


(2) Organizational costs:

$$F^{II} = \sum_{i=1}^{m} z_{RMi} n_{RMi} + \sum_{k=1}^{l} z_{Fk} n_{Fk}, \qquad (28)$$

where $z_{RMi}$ are the organizational costs of one raw material shipment from the $i$th RMAP, rub./shipment; $z_{Fk}$ are the organizational costs of one fuel shipment to the $k$th heat source, rub./shipment; $n_{RMi}$, $n_{Fk}$ are the numbers of wood raw material shipments from the $i$th RMAP and of fuel shipments to the $k$th heat source during the FPP recovery time. Let us introduce the functions $g_{RMi}(t)$, $i = \overline{1,m}$, and $g_{Fk}(t)$, $k = \overline{1,l}$, such that:

$$g_{RMi}(t) = \begin{cases} 1, & \text{if } q^{-RMAP}_{RMi}(t) > 0, \\ 0, & \text{if } q^{-RMAP}_{RMi}(t) = 0, \end{cases} \quad i = \overline{1,m}, \qquad (29)$$

$$g_{Fk}(t) = \begin{cases} 1, & \text{if } q^{+H}_{Fk}(t) > 0, \\ 0, & \text{if } q^{+H}_{Fk}(t) = 0, \end{cases} \quad k = \overline{1,l}. \qquad (30)$$

Then

$$n_{RMi} = \sum_{t_{rec}} g_{RMi}(t), \quad i = \overline{1,m}, \qquad (31)$$

$$n_{Fk} = \sum_{t_{rec}} g_{Fk}(t), \quad k = \overline{1,l}. \qquad (32)$$
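The shipment counting via the indicator functions (29)–(32) and the corresponding term of (28) can be sketched directly; the daily flow profile and the per-shipment cost below are illustrative.

```python
# Organizational costs (28) need the number of shipments n; Eqs. (29)-(32)
# count the days on which a flow is active via the indicator g(t). The daily
# flow profile and cost figure here are illustrative assumptions.

def shipments(flow_by_day):
    # Eqs. (29)-(32): g(t) = 1 when the flow is positive, summed over t_rec
    return sum(1 for q in flow_by_day if q > 0)

def organizational_cost(z_per_shipment, flow_by_day):
    # one term of Eq. (28): z * n
    return z_per_shipment * shipments(flow_by_day)

flow = [0.0, 2.5, 2.5, 0.0, 3.0, 0.0, 1.0]   # t.o.f./day over one week
print(shipments(flow), organizational_cost(150.0, flow))
```

Four of the seven days carry a positive flow, so this flow contributes four shipments, i.e. 600 rub. at 150 rub./shipment.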

(3) Storage costs:

$$F^{III} = h^{RM} \int_{t_{br}}^{t_{br}+t_{rec}} \tilde{Q}^{FPP}_{RM}(t)\, dt + h^{F} \int_{t_{br}}^{t_{br}+t_{rec}} \tilde{Q}^{FPP}_{F}(t)\, dt + \sum_{k=1}^{l} h_{Fk} \int_{t_{br}}^{t_{br}+t_{rec}} \tilde{Q}^{H}_{Fk}(t)\, dt, \qquad (33)$$

where $h^{RM}$, $h^{F}$ are the unit costs of wood raw material and fuel storage at an FPP, rub./(t. of n. f. · day); $h_{Fk}$ is the unit cost of fuel storage at the $k$th heat source, rub./(t. of n. f. · day). Unit costs include warehouse lease costs, amortization costs during storage, etc.


(4) Costs of raw material and fuel shipping:

$$F^{IV} = \sum_{j=n+1}^{N} \left( \sum_{i=1}^{m} Q^{RMAP}_{RMij}\, s^{RM}_{ij} + \sum_{k=1}^{l} Q^{H}_{Fjk}\, s^{F}_{jk} \right) + \sum_{a=1}^{n} \sum_{b=1}^{l} Q^{FPP}_{Fab}\, s^{F}_{ab}, \qquad (34)$$

$$\sum_{i=1}^{m} Q^{RM}_{ij} \le \gamma\, p_j\, t_{rec}, \qquad (35)$$

$$\sum_{j=1}^{N} Q^{F}_{jk} = q^{-H}_{Fk}\, t_{rec}. \qquad (36)$$

A general algorithm for solving the problem of optimal control of the fuel supply system in case of an emergency related to FPP breakdown is described by the flow chart in Fig. 3. Its main steps are: initial population generation; forming of structural links between FPPs and heat sources and between RMAPs and FPPs; assessment of individuals; checking of the stopping criterion; selection of individuals; application of genetic operators with gene value redistribution; checking the feasibility of the obtained solution, with changing of the structural links between FPPs and heat sources and between RMAPs and FPPs if it is infeasible; forming of a new generation; and output of the optimal structure of connections between the objects and the parameter values of the system.

Fig. 3. The flow chart of an algorithm for solving the problem of optimal control of fuel supply system in case of emergency related to FPP breakdown


3 Results of Optimal Control of Regional Fuel Supply System in Case of Emergency

In case of an emergency in the regional fuel supply system, the information-analytical system redefines the structural links between the objects and calculates the parameters of the changed system in accordance with the mathematical models implemented in the analytical subsystem of the program complex. The results of the calculations are plotted on the electronic map of the Udmurt Republic as new routes of raw material and fuel movement. Quantitative characteristics of the changed system are shown in the information bar as well as in the control table of fuel supply system stock. An example of the visual representation of optimal control of the regional fuel supply system in case of FPP breakdown is shown in Figs. 4, 5, 6 and 7. Figures 4 and 5 demonstrate the initial map of fuel supply in the Vavozh region, the Udmurt Republic.


Fig. 4. Wood raw material transportation routes in Vavozh region, the Udmurt Republic

It is planned to establish two fuel preparation points in the villages of Volipelga and Novaya Biya in the Vavozh region, the Udmurt Republic. These FPPs will be provided with raw materials from six accumulation points, and they will supply fuel to about ten regional heat sources.


Fig. 5. Fuel transportation routes in Vavozh region, the Udmurt Republic


Fig. 6. Raw material transportation routes in Vavozh region, the Udmurt Republic, in case of FPP breakdown in Volipelga village



Fig. 7. Fuel transportation routes in Vavozh region, the Udmurt Republic, in case of FPP breakdown in Volipelga village

If the FPP in Volipelga village breaks down, the raw material and fuel transportation routes in the system are rearranged. The result of the calculation of the links between the fuel supply system's objects in case of emergency is shown in Figs. 6 and 7. Figure 8 shows the diagrams depicting the change of wood raw material and fuel volumes at the FPP located in Volipelga village, as well as at the heat sources powered by this FPP, when the fuel supply system operates under normal conditions. In case of FPP breakdown, raw materials are redistributed to neighboring fuel preparation points, which supply fuel to the heat sources for the FPP repair period while maintaining the fuel delivery schedule. Figure 9 shows the diagrams depicting the change of wood raw materials and fuel at the FPP located in Starye Kopki village in case of the FPP breakdown in Volipelga village. The latter FPP will supply fuel to a part of the heat sources of the broken FPP.


[Figure 8 plots the stock curves $\tilde{Q}^{FPP}_{RM}$, $\tilde{Q}^{FPP}_{F}$ and $\tilde{Q}^{H}_{F}$ (t. of n. f.) over the year (0–360 days) for the Volipelga FPP and the heat sources Volipelga, Staroe Zhue and Tyloval-Pelga; the emergency interval from $t_{br}$ to $t_{br}+t_{rec}$ is marked.]

Fig. 8. Change in the amount of wood raw materials and fuel under normal conditions and in case of emergency at FPP in Volipelga village and heat sources being powered by this FPP


[Figure 9 plots the stock curves $\tilde{Q}^{FPP}_{RM}$ and $\tilde{Q}^{FPP}_{F}$ (t. of n. f.) over the year (0–360 days) for the Starye Kopki FPP; the emergency interval from $t_{br}$ to $t_{br}+t_{rec}$ is marked.]

Fig. 9. Change in the amount of wood raw materials and fuel at FPP in Starye Kopki village, under normal conditions and in case of emergency at FPP in Volipelga village

4 Conclusion

The developed information-analytical system makes it possible to model and control the alternative fuel supply system in an optimal way. The developed mathematical models of optimal control in case of emergency were integrated into this system. The application of genetic algorithms enables a quick response to emergency situations thanks to the high performance of the algorithm without loss of accuracy. The information-analytical system has been introduced at the Ministry of Industry and Energy of the Udmurt Republic and is used for optimal control of the regional fuel and energy complex. Today the system keeps information about 1470 boilers, four pellet plants, 24 FPPs and eight biogas complexes.


References

1. Rusyak, I.G., Presnukhin, V.K., Ketova, K.V., Korolev, S.A., Trushkova, E.V.: Development of the concept of fuel supply distributed regional heating system of local renewable fuels. Energobezopasnost i energosberezheniye 5, 14–20 (2010). (in Russian)
2. Rusyak, I.G., Kasatkina, E.V., Sairanov, A.S.: An information-analytical system of regional alternative energy sources fuel supply control, vol. 65, no. 4, pp. 83–87. IUS, St. Petersburg (2013). (in Russian)
3. Rusyak, I.G., Ketova, K.V., Nefedov, D.G.: Matematicheskaya model' i metod resheniya zadachi optimal'nogo razmeshcheniya proizvodstva drevesnyh vidov topliva. In: Proceedings of the Russian Academy of Sciences. Power Engineering, no. 2, pp. 177–187 (2017). (in Russian)
4. Veenstra, M., Roodbergen, K.J., Coelho, L.C., Zhu, S.X.: A simultaneous facility location and vehicle routing problem arising in health care logistics in the Netherlands. Eur. J. Oper. Res. 268(2), 703–715 (2018)
5. Dinh, T., Fukasawa, R., Luedtke, J.: Exact algorithms for the chance-constrained vehicle routing problem. Math. Program. 172(1–2), 105–138 (2018)
6. Daskin, M.S.: What you should know about location modeling. Nav. Res. Logist. 55, 283–294 (2008)
7. Shen, Z.-J., Coullard, C., Daskin, M.S.: A joint location-inventory model. Transp. Sci. 37(1), 40–55 (2003)
8. Gong, M., Xu, Z., Xie, Y., Pan, J., Li, R.: Fault-section location of distribution network containing distributed generation based on the multiple-population genetic algorithm of chaotic optimization. In: Proceedings - 2017 Chinese Automation Congress, CAC 2017, pp. 4984–4988 (2017)
9. Huang, K.-M., Lu, C.-W., Lian, M.-J.: Modeling and algorithm for multi-echelon location-routing problem. Control Decis. 32(10), 1803–1809 (2017)
10. Ketova, K.V., Trushkova, E.V.: The solution of the logistics task of fuel supply for the regional distributed heat supply system. Comput. Res. Model. 4(2), 451–470 (2012)
11. Marianov, V., ReVelle, C.S.: The queueing maximal availability location problem: a model for siting of emergency vehicles. Eur. J. Oper. Res. 93, 110–120 (1996)
12. Bertsekas, D.P.: Nonlinear Programming, 3rd edn. Athena Scientific, Nashua (2016)
13. Kramer, O.: Genetic Algorithm Essentials: Studies in Computational Intelligence. Springer, Oldenburg (2017)

Mathematical Model for Prediction of the Main Characteristics of Emissions of Chemically Hazardous Substances into the Atmosphere

Ekaterina Kusheleva³, Alexander Rezchikov¹, Vadim Kushnikov¹,³, Vladimir Ivaschenko¹, Elena Kushnikova¹,², and Andrey Samartsev¹

¹ Institute of Precision Mechanics and Control, Russian Academy of Sciences, 24, Rabochaya Street, Saratov 410028, Russia
² Yuri Gagarin State Technical University, 77 Politechnicheskaya Str., Saratov 410054, Russia
³ Saratov State University, 83 Astrakhanskaya Str., Saratov 410012, Russia
[email protected]

Abstract. Based on the formal apparatus of system dynamics, a mathematical model was developed to predict the main characteristics of emissions of chemically hazardous substances into the atmosphere. When building the model, the main characteristics of emissions at chemically hazardous facilities were selected on the basis of GOST R 22.1.10, together with the external factors that should be taken into account. A graph of the cause-effect relationships existing between the simulated characteristics is constructed. The proposed model is described by a system of nonlinear differential equations of the first order. A model example of an emission of a chemically dangerous substance into the atmosphere is presented; the simulated characteristics are calculated and the corresponding graphs are given. The numerical solution of the system of equations is obtained with the Runge-Kutta method. The comparison of the results calculated by the model with the actual data of an emergency confirms the adequacy of the proposed model. The results obtained with the model can be used in the development of information systems for predicting the effects of emissions of chemically hazardous substances for the operational dispatching staff of the MES.

Keywords: Mathematical model · System dynamics · Emissions of chemically hazardous substances

1 Introduction

Emissions of chemically hazardous substances at industrial facilities are among the most dangerous technogenic disasters. In most cases, they lead to poisoning and death of people and severe environmental consequences. Over the past decades in Russia there have been a number of chemical accidents at enterprises and warehouses of toxic substances, as well as on the roads during transportation of chemically dangerous goods [1].

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 594–607, 2019. https://doi.org/10.1007/978-3-030-12072-6_48


On the territory of Russia there are more than 3 thousand facilities where accidents and disasters can lead to mass casualties. More than 2 thousand of them are classified as chemically hazardous, with total reserves of chemically hazardous substances of more than 1 million tons. In connection with the growth of industrial production, the probability of industrial accidents caused by the uncontrolled release of chemically hazardous substances into the environment increases; hence the urgency of forecasting the possible effects of chemical contamination [2]. To improve the efficiency of eliminating the consequences of emissions of chemically hazardous substances into the atmosphere, it is necessary to be able to predict the characteristics of emissions that affect the amount of damage. Forecasting makes it possible to prepare for emergencies, develop plans for further actions, mitigate the consequences of a chemical accident, and reduce the risk of severe ecological consequences and losses among the population. The existing forecasting models do not allow determining the set of characteristics of chemical emissions taking into account the large number of nonlinear feedbacks between them, which leads to a decrease in prediction accuracy. To solve this problem, it is relevant to use the apparatus of system dynamics [3], which has been applied in the modeling of complex processes [4, 5], national security systems [6] and aviation security [7–9]. Based on this, the article proposes a mathematical model based on the apparatus of system dynamics that allows determining the simulated characteristics for different time intervals, taking into account the changing parameters of the environment.

2 Mathematical Model

When developing the mathematical model, according to [10], the following main characteristics of the consequences of an emission of chemically hazardous substances at an industrial installation were selected: X1—the time of evaporation of chemically hazardous substances in the area of the accident from the earth's surface; X2—the time of liquidation of the consequences of the accident at chemically hazardous objects; X3—the area of infection; X4—the time of arrival of the primary and/or secondary cloud at settlements; X5—the number of people affected by the primary cloud; X6—the number of people affected by secondary clouds; X7—the number of people who received outpatient care; X8—the number of people placed in hospital and intensive care; X9—the number of affected units of equipment; X10—the number of solutions for disinfection of the area; X11—the number of forces and means necessary for the emergency operation; X12—the percentage of effectiveness of the warning system. The environmental factors that affect the speed of change of the values of the main characteristics are: F1—the total amount of chemically hazardous substances ejected at the facility; F2—the number of staff at the chemically hazardous facility; F3—wind speed; F4—air temperature; F5—the time before the beginning of the alert; F6—the size of the population; F7—the number of shelters.

Taking into account the fact that a large number of industrial facilities are located near cities or in proximity to settlements, it is necessary to consider the environmental conditions of the population in the cities. In large cities, one of the main sources


of atmospheric pollutants is road transport. In connection with the increase in the number of vehicles, the problem of traffic jams is becoming aggravated: their number, duration and length are increasing. As a result of prolonged congestion, there is a massive release of pollutants into the atmosphere. The problem is felt most acutely in residential areas located on the roadside, near intersections and highways. Because of the constant congestion, people living or working near the road regularly breathe air with a high content of harmful substances, which adversely affects their respiratory tract and the whole body, even though such people feel satisfactory and do not seek medical help. This part of the population becomes more susceptible to the negative effects of other chemicals, and the consequences of such effects can be exacerbated by their weakened state. This part of the population should be taken into account in building the mathematical model as an external factor F8—the number of people exposed to the regular emissions of traffic congestion. Considering F8 may require more forces and means for the rescue operation. Moreover, F8 should be taken into account when predicting X7 and X8, because instead of outpatient care such people may require assistance in a hospital or intensive care.

Using the mathematical apparatus of system dynamics to describe the object under study, a system of nonlinear differential equations of the first order (1) is constructed:

$$\frac{dX_i(t)}{dt} = X_i^{+}(t) + X_i^{-}(t), \quad i = \overline{1,n}, \qquad (1)$$

where $X_i^{+}(t)$, $X_i^{-}(t)$, $i = \overline{1,n}$, are continuous or piecewise continuous functions which determine the positive and negative speeds of change of the characteristic value $X_i(t)$; $X_i^{-}(t) = f_i^{-}(F_1, F_2, \ldots, F_m)$, $X_i^{+}(t) = f_i^{+}(F_1, F_2, \ldots, F_m)$; $F_j$, $j = \overline{1,m}$, are the factors affecting the speed of change of the characteristic value [11]. Based on the analysis of the relationships between the studied characteristics of an accident at a chemically dangerous object, a directed graph of cause-and-effect relationships is constructed (see Fig. 1). A subgraph for the characteristic $X_3(t)$ is shown in Fig. 2. When developing the mathematical model, for each characteristic $X_i(t)$, $i = \overline{1,12}$, it is necessary to construct an equation of the form (1). For example, for the variable $X_3(t)$ the differential equation (1) has the form:

$$\frac{dX_3(t)}{dt} = X_3^{+}(t) + X_3^{-}(t) = f_3^{+}(X_1(t), F_1, F_3, F_4),$$

where $f_3^{+}$ is the functional dependence of the area of contamination on the time of evaporation of chemically hazardous substances in the area of the accident from the earth's surface, the total amount of emitted chemically hazardous substances at the facility, wind speed and air temperature. Equations for the other variables are prepared in the same way. Taking into account the above, the general form of the mathematical model used to predict the consequences of accidents at chemically dangerous objects looks like (2):

Mathematical Model for Prediction of the Main Characteristics

Fig. 1. Graph of cause-effect relationships between the simulated characteristics.

Fig. 2. Subgraph for X3(t) characteristic.

597


dX1(t)/dt  = f1+(F1) − f1−(X10(t), X11(t), F3, F4),
dX2(t)/dt  = f2+(X3(t), X7(t), X8(t), X9(t), F1) − f2−(X10(t), X11(t)),
dX3(t)/dt  = f3+(X1(t), F1, F3, F4),
dX4(t)/dt  = f4+(X1(t)) − f4−(F1, F3, F4),
dX5(t)/dt  = f5+(X1(t), F2),
dX6(t)/dt  = f6+(F5, F6) − f6−(X4(t), X11(t), X12(t), F7),
dX7(t)/dt  = f7+(X5(t), X6(t), F8),
dX8(t)/dt  = f8+(X5(t), X6(t), X11(t), F8),
dX9(t)/dt  = f9+(X3(t)) − f9−(X10(t), X11(t)),
dX10(t)/dt = f10+(X3(t), X9(t)),
dX11(t)/dt = f11+(X3(t), F6),
dX12(t)/dt = f12+(X11(t), F6, F7) − f12−(F5).   (2)

The system of Eq. (2) is solved with the initial conditions t0 = 0, Xi(t0) = Xi0, i = 1, …, 12. The functional dependencies fi+/−, i = 1, …, 12 are determined in the process of adaptation of the model to a specific emission of chemicals and are based on statistical data. In the absence of statistically significant information, it is proposed to use appropriate dependencies determined from the analysis of the experience of specialists and the physical meaning of the problem.

3 Model Example

3.1 Actual Emission Data

Below is an example of the calculation of the dependencies Xi(t), i = 1, …, 12, characterizing the effects of emissions of chemically hazardous substances into the atmosphere. At an industrial facility of one of the Russian cities there was a spill of 27 tons of chlorine [12]. The gas cloud, at an air temperature t = −1 °C and a wind speed of 1 m/s, penetrated to a depth of 7.5 km into a residential area with a population density of 2500 people/km². The focus of the actual infection was divided into the following sectors (see Table 1):

Table 1. Sectors of the hotbed of the actual infection.

  Pollution level                            Size of the sector, km²   Number of affected people
  Slightly polluted area                     3.2                       7900
  Sector with moderate pollution             1.1                       2737
  Sector with medium contamination density   0.6                       1436
  Heavily polluted sector                    0.2                       429


In the slightly polluted area, the population experienced informational and mental stress, characterized by asthenic neurosis, which does not require emergency medical care. In the sector with moderate pollution, sanitary losses with mild chlorine injury were formed. The victims were diagnosed with toxic rhinoconjunctivitis, laryngotracheitis and chemical stress; they received first aid on an outpatient basis. In the sector with medium pollution, the victims suffered moderate damage and were diagnosed with a toxic form of acute respiratory infection and chemical stress. Such patients were placed in prehospital-stage hospitals to receive first aid. In the sector with strong gas content, 429 people were detected. They were diagnosed with toxic bronchopneumonia and exotoxic shock. Such patients were transferred to the intensive care unit. The area of the focus of the actual infection was 5 km², and this area was inhabited by 12 500 people. The experimental data obtained from the analysis of the effects of the chlorine emission at the industrial facility were used in the construction of the functions fi+/−, i = 1, …, 12.

3.2 Building the Functions fi+/−, i = 1, …, 12

The functional dependences fi+/−, i = 1, …, 12 are built separately for each specific case of emission, depending on the specifics of the predicted object. In this paper the construction of these dependencies is considered for the selected model example. The functions fi+/−, i = 1, …, 12 are based on statistical data, the analysis of the experience of specialists, or the physical meaning of the problem. In particular, the function characterizing the speed of change of the variable X11 was chosen in direct proportion to the area of the infection zone X3(t) and the population in the infection zone F6, which corresponds to the experimental data. The product of the factors X3(t) and F6 is raised to the power 0.5, which provides the most accurate match with the real results. Thus, the required dependence has the form:

f11+ = k11+ · sqrt(X3 F6).

The dependencies fi+/−, i = 1, …, 12 for the proposed example of the emission of chemically hazardous substances into the atmosphere are presented in Table 2. The coefficients ki+/−, i = 1, …, 12 are determined at the stage of adaptation of the model to the object of study by means of a computational experiment. In particular, a series of computational experiments showed that for values of the coefficient k7+ in the range [0.3, 1.7] the greatest agreement of the characteristic X7(t) with the real data is achieved. Since the characteristics Xi(t), i = 1, …, 12 are interrelated, k7+ must be selected so that all the modelled variables correspond to the actual emission data as closely as possible. A series of computational experiments showed that this is achieved at the coefficient value k7+ = 0.7.
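As an illustration of how a model of the form (1)–(2) is integrated numerically, the sketch below advances a small three-variable subsystem by the explicit Euler method. The choice of equations, the external factor values and all coefficients are simplified assumptions in the spirit of the dependencies fi+/−, not the calibrated model of the paper.

```python
# Minimal sketch: Euler integration of a three-equation subsystem of a
# system-dynamics model dXi/dt = Xi+ + Xi-.  All values are illustrative.
import math

# Hypothetical external factors on the scale of the chlorine example:
# total emission F1 (t), wind speed F3 (m/s), temperature factor F4, population F6.
F1, F3, F4, F6 = 27.0, 1.0, 1.0, 12500.0

k1p, k3p, k11p = 0.03, 0.05, 0.06      # assumed positive-rate coefficients

def rates(X1, X3, X11):
    """Right-hand sides (negative rate components omitted for brevity)."""
    dX1 = k1p * F1 ** 0.8              # evaporation-time characteristic
    dX3 = k3p * X1 * F3 * F4           # infected area grows with X1 and wind
    dX11 = k11p * math.sqrt(X3 * F6)   # forces ~ sqrt(area * population)
    return dX1, dX3, dX11

def integrate(t_end=5.0, dt=0.01):
    X1 = X3 = X11 = 0.0                # initial conditions Xi(0) = 0
    t = 0.0
    while t < t_end:
        d1, d3, d11 = rates(X1, X3, X11)
        X1, X3, X11 = X1 + dt * d1, X3 + dt * d3, X11 + dt * d11
        t += dt
    return X1, X3, X11

X1, X3, X11 = integrate()
```

In practice a stiff-aware solver (the paper uses Matlab) and the full twelve-equation system with both positive and negative rate components would replace this toy loop.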


Table 2. Analytical form of the functions fi+, fi− used to calculate the characteristics of the effects of emission of chemically hazardous substances into the atmosphere [3].

  f1+  = k1+ · F1^0.8                             k1+  = 0.03
  f1−  = k1− · X10(t)X11(t) / (F3F4)              k1−  = 0.05
  f2+  = k2+ · X3(t)(X7(t) + X8(t))X9(t)F1^0.8    k2+  = 0.12
  f2−  = k2− · X10(t)X11(t)                       k2−  = 0.31
  f3+  = k3+ · F1^0.8 F3F4 / X1(t)^2.9            k3+  = 2.2 · 10^6
  f4+  = k4+ · X1(t)                              k4+  = 1.1
  f4−  = k4− · F1^0.8 F3F4                        k4−  = 0.07
  f5+  = k5+ · F2^2.1 / X1(t)                     k5+  = 0.03
  f6+  = k6+ · F5F6                               k6+  = 0.05
  f6−  = k6− · X11(t)X12(t) / (X4(t)F7)           k6−  = 0.08
  f7+  = k7+ · (X5(t) + X6(t))^0.6 F8             k7+  = 0.7
  f8+  = k8+ · (X5(t) + X6(t))^0.7 X11(t)F8       k8+  = 0.25
  f9+  = k9+ · X3(t)                              k9+  = 0.81
  f9−  = k9− · X10(t)X11(t)                       k9−  = 0.75
  f10+ = k10+ · (X3(t)X9(t))^0.2                  k10+ = 1.15
  f11+ = k11+ · sqrt(X3(t)F6)                     k11+ = 0.06
  f12+ = k12+ · X11(t)F7 / F6                     k12+ = 305.5
  f12− = k12− · F5                                k12− = 0.003

3.3 Calculation of the Factor F8

To calculate the factor F8, it is necessary to determine the area of the territory where increased concentrations of harmful substances released into the atmosphere as a result of road congestion are regularly observed. To solve this problem it is proposed to use the mathematical model presented in [13]. The modeling is carried out in three stages. The first stage is to determine the wind speed. To do this, the model of an ideal incompressible fluid is considered. Taking into account the assumption of an irrotational flow, the equation for determining the air flow speed has the form:

∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z² = 0,   (3)

where φ is the speed potential. The setting of the boundary conditions for Eq. (3) is considered in [13]. The components of the air flow speed vector are:

u = ∂φ/∂x,  v = ∂φ/∂y,  w = ∂φ/∂z.   (4)

To solve Eq. (3) we use the establishment (pseudo-time) method:

∂φ/∂τ = ∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z²,   (5)

where τ is a fictitious time.


When ∂φ/∂τ → 0, the solution reaches "establishment", that is, the solution of Eq. (3). Let us construct a difference scheme for Eq. (5). Let the independent variables be set on the following intervals of their change:

τ ∈ [0, τ1]; x ∈ [x1, x2]; y ∈ [y1, y2]; z ∈ [z1, z2].

We obtain a difference grid by dividing each of these intervals into a number of equal parts and introduce the following notation: n, i, j, k—ordinal numbers of the division points of τ, x, y, z, respectively; Δτ = τn+1 − τn, Δx = xi+1 − xi, Δy = yj+1 − yj, Δz = zk+1 − zk—intervals between the points of τ, x, y, z, respectively; φ(τn, xi, yj, zk) = φ^n_{i,j,k}, φ(τn, xi−1, yj, zk) = φ^n_{i−1,j,k}, φ(τn+1, xi, yj, zk) = φ^{n+1}_{i,j,k}—values of the function φ at the corresponding grid points. Using the introduced notation, as well as the approximation of the differential operators that make up Eq. (5), we write an implicit difference scheme approximating Eq. (5) at the point (τn, xi, yj, zk):

(φ^{n+1}_{i,j,k} − φ^n_{i,j,k})/Δτ = (φ^{n+1}_{i+1,j,k} − 2φ^{n+1}_{i,j,k} + φ^{n+1}_{i−1,j,k})/Δx²
  + (φ^{n+1}_{i,j+1,k} − 2φ^{n+1}_{i,j,k} + φ^{n+1}_{i,j−1,k})/Δy²
  + (φ^{n+1}_{i,j,k+1} − 2φ^{n+1}_{i,j,k} + φ^{n+1}_{i,j,k−1})/Δz².   (6)

To resolve the implicit difference scheme (6), the method of fractional steps is used. We transform the implicit difference scheme into a splitting scheme:

(φ^{n+1/3}_{i,j,k} − φ^n_{i,j,k})/Δτ = (φ^{n+1/3}_{i+1,j,k} − 2φ^{n+1/3}_{i,j,k} + φ^{n+1/3}_{i−1,j,k})/Δx²,   (7)

(φ^{n+2/3}_{i,j,k} − φ^{n+1/3}_{i,j,k})/Δτ = (φ^{n+2/3}_{i,j+1,k} − 2φ^{n+2/3}_{i,j,k} + φ^{n+2/3}_{i,j−1,k})/Δy²,   (8)

(φ^{n+1}_{i,j,k} − φ^{n+2/3}_{i,j,k})/Δτ = (φ^{n+1}_{i,j,k+1} − 2φ^{n+1}_{i,j,k} + φ^{n+1}_{i,j,k−1})/Δz².   (9)

After solving (7)–(9) and determining the field of the speed potential, the components of the air flow velocity vector are calculated by the formulas:


u_{i,j,k} = (φ_{i,j,k} − φ_{i−1,j,k})/Δx,   (10)

v_{i,j,k} = (φ_{i,j,k} − φ_{i,j−1,k})/Δy,   (11)

w_{i,j,k} = (φ_{i,j,k} − φ_{i,j,k−1})/Δz.   (12)
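The first stage can be sketched as a pseudo-time relaxation of the Laplace equation on a coarse grid, followed by the one-sided differences (10)–(12). The Jacobi iteration form, the toy grid size, and the boundary condition (a uniform inflow along x, imposed by fixing a linear potential on the boundary) are illustrative assumptions; the actual boundary conditions are set as in [13].

```python
# Sketch: establishment-method relaxation of Laplace's equation for the
# velocity potential phi, then velocity components by formulas (10)-(12).
N = 8                        # grid points per axis (toy resolution)
dx = dy = dz = 1.0

# phi[i][j][k]: potential; phi = x on the boundary encodes a uniform inflow.
phi = [[[float(i) for k in range(N)] for j in range(N)] for i in range(N)]

for _ in range(200):         # pseudo-time steps until the field "settles"
    new = [[[phi[i][j][k] for k in range(N)] for j in range(N)] for i in range(N)]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            for k in range(1, N - 1):
                # Jacobi update: average of the six neighbours (steady state
                # of the pseudo-time equation (5)).
                new[i][j][k] = (phi[i+1][j][k] + phi[i-1][j][k]
                                + phi[i][j+1][k] + phi[i][j-1][k]
                                + phi[i][j][k+1] + phi[i][j][k-1]) / 6.0
    phi = new

def velocity(i, j, k):
    """Flow velocity components by the one-sided differences (10)-(12)."""
    u = (phi[i][j][k] - phi[i-1][j][k]) / dx
    v = (phi[i][j][k] - phi[i][j-1][k]) / dy
    w = (phi[i][j][k] - phi[i][j][k-1]) / dz
    return u, v, w

u, v, w = velocity(4, 4, 4)
```

For this boundary condition the relaxed potential stays linear in x, so the recovered velocity is the uniform inflow (u, v, w) = (1, 0, 0); with terrain or obstacles in the boundary data the same loop produces a non-trivial field.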

In the second stage, it is necessary to determine the number, location and intensity of the sources of emissions of harmful substances, i.e. the vehicles in congestion. The third stage involves the transport of pollutants in the atmosphere. As the modeling equation, we use the mass transfer equation presented in [13]:

∂c/∂t + u ∂c/∂x + v ∂c/∂y + w ∂c/∂z = ∂/∂x(kx ∂c/∂x) + ∂/∂y(ky ∂c/∂y) + ∂/∂z(kz ∂c/∂z) − αc + Qs(t)δ(x − xs)δ(y − ys)δ(z − zs),   (13)

where c is the sought concentration of the toxic substance; x, y, z are the coordinates of the calculation point along the abscissa, ordinate and applicate axes, respectively; t is time; u, v, w are the projections of the averaged velocity vector of the substance on the abscissa, ordinate and applicate axes, respectively; α is the coefficient of change of the substance concentration due to chemical transformations; kx, ky, kz are the components of the exchange coefficient along x, y, z, respectively; Qs is the emission intensity of toxic substances; δ(x − xs) is the Dirac delta function; xs, ys, zs are the coordinates of the source of emission of toxic substances. Equation (13) must be supplemented with an initial condition and three boundary conditions for each of the spatial coordinates:

c(t = 0, x, y, z) = θ(x, y, z);
c → 0 at |x| → ∞;
c → 0 at |y| → ∞;
c → 0 at z → ∞.

The numerical solution of Eq. (13) must be carried out taking into account Eqs. (10)–(12). We write an explicit difference scheme approximating Eq. (13) at the point (tn, xi, yj, zk):

(c^{n+1}_{i,j,k} − c^n_{i,j,k})/Δt + u_{i,j,k}(c^n_{i,j,k} − c^n_{i−1,j,k})/Δx + v_{i,j,k}(c^n_{i,j,k} − c^n_{i,j−1,k})/Δy + w_{i,j,k}(c^n_{i,j,k} − c^n_{i,j,k−1})/Δz
  = kx(c^n_{i+1,j,k} − 2c^n_{i,j,k} + c^n_{i−1,j,k})/Δx² + ky(c^n_{i,j+1,k} − 2c^n_{i,j,k} + c^n_{i,j−1,k})/Δy² + kz(c^n_{i,j,k+1} − 2c^n_{i,j,k} + c^n_{i,j,k−1})/Δz² − α·c^n_{i,j,k} + Qs(n)·δ(xi − xs)δ(yj − ys)δ(zk − zs).   (14)
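A one-dimensional simplification of the explicit update — upwind advection, diffusion, decay and a point source — can be sketched as follows. All parameter values and the grid are illustrative assumptions; the full scheme (14) applies the same update along y and z as well.

```python
# Sketch: explicit time stepping of a 1-D advection-diffusion-decay equation
# with a continuous point source (a simplification of scheme (14)).
N, dx, dt = 50, 10.0, 0.5
u, kx, alpha = 1.0, 5.0, 0.01      # wind speed, exchange coefficient, decay rate
src, Qs = 10, 2.0                  # source cell index and emission intensity

c = [0.0] * N                      # initial concentration field

def step(c):
    cn = c[:]
    for i in range(1, N - 1):
        adv = u * (c[i] - c[i - 1]) / dx                     # upwind advection
        dif = kx * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2  # diffusion
        q = Qs if i == src else 0.0                          # delta-function source
        cn[i] = c[i] + dt * (-adv + dif - alpha * c[i] + q)
    return cn

for _ in range(100):               # march forward in time
    c = step(c)

peak = max(range(N), key=lambda i: c[i])   # cell of maximum concentration
```

The chosen Δt respects the stability restriction of an explicit scheme (the Courant and diffusion numbers here are 0.05 and 0.025); with a steady source the concentration maximum sits at the source cell and decays downwind.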


The explicit finite-difference scheme (14) is conditionally stable; the condition of its stability is given in [13]. On the basis of the proposed model, it is possible to determine the concentration of atmospheric pollutants at each point of the controlled territory, which makes it possible to predict the size of the territory where the concentration of harmful substances is regularly increased. Knowing the population density, we can estimate the number of people who have been exposed to regular emissions from congestion. According to the calculations carried out for the presented model example, the size of the territory affected by the emission of harmful substances at the industrial facility, where road congestion regularly increases the concentration of atmospheric pollutants, is about 1 km² (see Fig. 3). With the model example's population density of 2500 people/km², the predicted value is F8 ≈ 2500 people.

Fig. 3. A—the territory affected by the emission of harmful substances at the industrial facility; B, C, D, E, F—the territory where due to road congestion there are regular increases in the concentration of atmospheric pollutants.

3.4 Calculation of the Main Predicted Characteristics

The predicted values of the main characteristics Xi(t), i = 1, …, 12 of the chlorine release resulting from the emergency at the industrial facility are determined by numerical solution of the system of Eq. (2) using the software package Matlab v.9.4 (R2018a). Let KiN, i = 1, …, 12 be the normalization coefficient for the measure Xi(t):

KiN = (Xi(t) − Ximin) / ((Ximax − Ximin) · Xi(t)),

where Ximin, Ximax are, respectively, the minimum and maximum values of the indicator Xi. Accepting Ximin = 0 for all indicators, we obtain:

KiN = 1 / Ximax.

The equation for XiN(t), i = 1, …, 12 then takes the form:

XiN(t) = Xi(t) / Ximax.
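In code, this normalization amounts to dividing each simulated trajectory by its maximum value; a minimal sketch with an illustrative trajectory:

```python
# Sketch: normalization Xi_N(t) = Xi(t) / Xi_max (the case Xi_min = 0).
# The sample trajectory is illustrative, not simulation output.
X3 = [0.0, 1.2, 2.9, 4.1, 4.8, 5.0]       # e.g. infected area, km^2, over time
X3_max = max(X3)
X3_norm = [x / X3_max for x in X3]        # values now lie in [0, 1]
```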

Figure 4 presents a graph of the numerical solution of system (2), normalized with respect to the maximum values of the simulated characteristics. In particular, the analysis of the dependence X3N(t) allows us to conclude that a significant increase in the pollution area is observed in the first hour after the accident, which is confirmed by the data of the model example.

3.5 Checking the Adequacy of the Model

The adequacy of the model is checked by comparing the values of the predicted characteristics obtained from the solution of system (2) with the real data of the chlorine emission into the atmosphere that occurred at the industrial facility of one of the cities of the Russian Federation. Figure 5 compares the characteristics X6N(t) and X8N(t) determined by model (2) with the real statistical data of the model example, interpolated by the Lagrange polynomials Y6(t) and Y8(t). The analysis of the obtained graphs shows that the obtained values of X6N(t) and X8N(t) differ only slightly from the real values of Y6(t) and Y8(t). The analysis of the values obtained for XiN(t), i = 1, …, 12 shows that the characteristics calculated with the developed model (2) differ only slightly


Fig. 4. The main characteristics of the consequences of accidental release of chlorine at an industrial facility.

from their real values. The average value of the relative errors at the simulation nodes for each characteristic does not exceed 15%, which suggests that the developed mathematical model is adequate and can be used in simulator systems for the training of the operational dispatch staff of the Ministry of Emergency Situations.


Fig. 5. Comparison of calculated values X6N ðtÞ and X8N ðtÞ with actual emergency data.

4 Conclusion Based on the formal apparatus of system dynamics, a mathematical model is developed to predict the main characteristics of emissions of chemically hazardous substances into the atmosphere. The predicted characteristics are compared with the real emission data, which confirmed the adequacy of the proposed model. The developed mathematical software can be used in the development of information systems for predicting the effects of emissions of chemically hazardous substances, as well as in training simulator systems for operational dispatch personnel of the Ministry of emergency situations.

References 1. Klyuev, V.V., Sosnin, F.R.: Nondestructive testing in oil refining and the chemical industry. Chem. Pet. Eng. 40(3–4), 241–247 (2004) 2. Ivanov, A.S., et al.: The cause-and-effect approach to investigation of emergency situations in human-machine systems. Mechatron. Autom. Control, (2), 38–43 (2012). (in Russian) 3. Forrester, J.W.: World Dynamics, 2nd edn. Productivity Press, Portland (1973)


4. Rutkovsky, V.Y., et al.: New adaptive algorithm of flexible spacecraft control. In: Studies in Systems, Decision and Control, vol. 55, pp. 313–326 (2016) 5. Glumov, V.M., et al.: Constructing the general scheme of a departmental management system. Autom. Remote Control 57(12), 1794–1806 (1996) 6. Bogomolov, A.S.: Analysis of the ways of occurrence and prevention of critical combinations of events in man-machine systems. Izvestiya Saratovskogo Universiteta. Novaya Seriya-Matematika Mekhanika Informatika, vol. 17, pp. 219–230 (2017) 7. Filimonyuk, L.Y.: The problem of critical events’ combinations in air transportation systems. In: Advances in Intelligent Systems and Computing, vol. 573, pp. 384–392 (2017) 8. Syrov, A.S., et al.: Motion control problems for multimode unmanned aerial vehicles. Autom. Remote Control 78(6), 1128–1137 (2017) 9. Glumov, V.M., et al.: Design and analysis of lateral motion control algorithms for an unmanned aerial vehicle with two control surfaces. Autom. Remote Control 78(5), 924–935 (2017) 10. State Standard 22.1.10-2002. Safety in emergency situations. Monitoring of chemically dangerous objects. General requirements, p. 9. Standartinform Publications, Moscow (2002). (in Russian) 11. Brodsky, Y.: Lectures on Mathematical and Simulation Modeling. Direct Media, Moscow, Berlin (2015). (in Russian) 12. Marukhlenko, S.L., Degtyarev, S.V., Marukhlenko, A.L.: Technogenic accident hazard software module. Izvestiya Yugo-Zapadnogo Gosudarstvennogo Universiteta, no. 6–2 (39), pp. 41–45 (2011). (in Russian) 13. Kusheleva, E.V., et al.: A Model to predict the distribution of atmospheric pollutants in road congestion. Control systems and information technology, no. 2, pp. 55–60 (2018). (in Russian)

Increasing the Safety of Flights with the Use of Mathematical Model Based on Status Functions

Irina Veshneva¹, Aleksander Bolshakov², and Aleksei Kulik³

¹ Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russian Federation
[email protected]
² Peter the Great St. Petersburg Polytechnic University, Polytechnicheskaya, 29, Saint Petersburg 195251, Russian Federation
[email protected]
³ Yuri Gagarin State Technical University of Saratov, 77 Politechnicheskaya Street, Saratov 410054, Russian Federation
[email protected]

Abstract. The article deals with the application of complex-valued status functions for the development of a method of mathematical modeling of flights to ensure their safety based on the prevention of flight accidents. The application of the proposed method based on status functions is shown using a precedent matrix of flight accidents. The method contains steps corresponding to the Mamdani algorithm: the formation of the rule base, fuzzification, aggregation, activation, and accumulation. Notable is the use of an orthonormal basis of complex-valued status functions instead of membership functions, which changes the implementation at each stage. A configuration of the flight operations safety management system is used, whose input receives information about the condition of the crew, from instruments measuring external factors, and about the airborne equipment. The main parameters of the aircraft flight safety assessment are identified by formalization of expert information, and the values of linguistic variables are formulated with their use. Orthonormal status functions were formed, for which interpretation rules are presented. For activation, status functions are used, which makes it possible to create a rule for the double evaluation of an object and a phenomenon when creating the rule base. Analogues of minimax operations are used for accumulation, with a demonstration of the form of these functions for different values of the factors. A comparison of the proposed method with analogues (the algorithms of Mamdani, Tsukamoto, Larsen and Sugeno) is given.

Keywords: Status functions · Membership functions · Mamdani algorithm · Flight safety · Mathematical model

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 608–621, 2019. https://doi.org/10.1007/978-3-030-12072-6_49


1 Introduction

According to statistics, 83% of flight accidents are attributable to the human factor, 15% to equipment failures, and 2% to unfavorable external effects [1]. The cause of an incident may be a single factor or a combination thereof. Therefore, ensuring the flight safety of civil and military aircraft in the presence of the human factor is an actual scientific and technical problem. To solve it, it is necessary to create new software and hardware systems and methods for redundancy, management and diagnostics of the functioning of the aircraft equipment. Based on the use of measuring tools and computer technology, aviation enterprises have increased the reliability indicators of their products. To this end, redundancy of individual elements and various means of internal control are applied, and new methods of ensuring flight safety are used. These approaches are related to the assessment of the flight conditions of the apparatus, the detection of dangerous events, the forecasting of their development, and the formation of relevant information transmitted to onboard display systems. In this context, the concept of a dangerous event in the control of the aircraft is associated with a change in operating conditions, caused by the impact of various internal and external factors, which leads to loss of controllability. To predict such events, artificial intelligence methods are currently used [2]. To solve the above tasks, aircraft safety management systems with intelligent decision support for the crew are used, implemented as a component of the aircraft's onboard equipment [3]. Usually, the intellectual support system of the crew contains a logic unit that processes signals of measured and recorded aircraft flight characteristics affecting its safety conditions, including the human's psychophysical state [4, 5]. The processed signals are then fed to the input of the limiting signal system, which interacts with the automatic and manual control system. It should be noted that the transmission of the limiting signals to the control systems (CS) of the aircraft imposes high requirements on the reliability of the hardware and software of the limiting signal system. At the same time, the urgency of forecasting the development of an unfavorable flight accident in order to prevent it in the early stages remains, which in turn requires the construction of a control complex ensuring aircraft safety with integrated decision support modules [6, 7]. Methods and means of artificial intelligence [8], for example, algorithms of fuzzy logic [6, 7, 9], expert systems and neural networks [10, 11], and simulation modeling [12], have been used extensively in solving problems of forecasting and supporting decision making. The main advantage of using artificial intelligence is the high speed of processing input data having different physical origins and connected by common synergetic principles. Thus, the improvement of systems for forecasting the development of a flight accident using artificial intelligence methods is an important task in ensuring the safety of aircraft control.


2 The Problem of Development of Mathematical Modeling of Flight Safety on the Basis of the Status Functions Method

To improve flight safety, it is required to create a method that allows forecasting future states based on qualitative information on key flight characteristics, including the psychophysiological state of the pilot, the level of his training, the state of the aircraft, and the evaluation of external factors. The description of data on the state of the man-machine system includes a large number of parameters, measured in different units and making different contributions to the safety of the flight. To combine quantitative and qualitative indicators into a common system, it is advisable to use intelligent technologies for interpreting data. Assume that each of the indicators will be evaluated by an ordered pair of values, used as the arguments of a complex-valued function. Such a representation makes it possible to spread the estimates along different coordinate axes that affect the indicator differently, for example, the level of pilot training and the impact of a stressful situation on him. In [13, 14], a method of status functions (SF) is proposed that can be used to form such estimates. The authors develop the method of mathematical modeling on the basis of SF, taking the above assumptions into account. The proposed method is designed to convert the values of input variables to output values based on the use of SF. It can be based on the Mamdani method [15, 16]. For this, the algorithm should contain the main stages corresponding to the stages of the most popular algorithms (Mamdani, Tsukamoto, Larsen and Sugeno):
1. Formation of the rules base.
2. Fuzzification.
3. Aggregation.
4. Activation.
5. Accumulation.
6. Defuzzification.
It is necessary to implement the following steps: (1) to design and propose a flight operations safety management system that differs from the one in use in the formation of a hierarchical system of parameters; (2) to develop an algorithm for interpreting and transforming data for the aircraft safety management system (SMS) based on the SF method, which makes it possible to combine indicators of different dimensions into a single system and to take into account their mutual influence; (3) to compare the proposed algorithm with the most common known algorithms for processing qualitative information using fuzzy models and to show its differences and advantages; (4) to use the algorithm for interpreting and transforming data based on SF to improve flight safety by predicting future states. The method using SF is based on the canonical representation of random functions [17] and is applicable for the simulation of feedback channels [18], which is an important step in the process of developing the SMS of the aircraft flight.
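The six stages listed above can be illustrated by a compact sketch in conventional membership-function form — the baseline that the status-function method modifies. The linguistic terms (pilot fatigue, wind), the triangular shapes, and the representative risk levels below are illustrative assumptions, not taken from the paper.

```python
# Sketch: a minimal Mamdani-style inference pass (rule base, fuzzification,
# aggregation, activation, accumulation, defuzzification).  Illustrative only.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(fatigue, wind):
    # 1. Rule base: IF fatigue is high AND wind is strong THEN risk is high;
    #               IF fatigue is low  AND wind is weak   THEN risk is low.
    # 2. Fuzzification of the crisp inputs.
    fat_hi, wind_hi = tri(fatigue, 40, 100, 160), tri(wind, 5, 15, 25)
    fat_lo, wind_lo = tri(fatigue, -60, 0, 60), tri(wind, -15, 0, 15)
    # 3-4. Aggregation of the conditions (min) activates each conclusion.
    risk_hi = min(fat_hi, wind_hi)
    risk_lo = min(fat_lo, wind_lo)
    # 5-6. Accumulation and a simple weighted-centroid defuzzification with
    #      representative risk levels 0.9 ("high") and 0.1 ("low").
    if risk_hi + risk_lo == 0:
        return 0.5
    return (0.9 * risk_hi + 0.1 * risk_lo) / (risk_hi + risk_lo)

risk = infer(fatigue=80, wind=12)
```

The SF method keeps this six-stage skeleton but replaces the membership functions with an orthonormal basis of complex-valued status functions, which changes the operations performed at each stage.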

3 Aircraft Safety Management System

The SMS, which includes a complex of software and hardware, is installed on board the aircraft. The functioning of the SMS modules is aimed at identifying changes in operating conditions and is associated with predicting their impact on the controllability of the aircraft and with the formation of information and control signals


for the flight crew and control systems. The assessment and forecast are carried out on the basis of data that describe the change of the factors affecting the control object in real time. Figure 1 shows the structural diagram of the SMS. The main modules provide preliminary information processing, decision support, and information transfer. Preliminary processing of information is associated with obtaining SMS information and measurement information from the aircraft equipment, as well as with the formation of signals that are initiated when specified disturbing variables go beyond the established limits. The decision support module forms a conclusion about the degree of danger of the event and forms recommendations on how to eliminate it. These data are then transferred to the information transfer module, which generates electrical signals; the latter go to the indicating and warning devices and, if necessary, to the control systems. The proposed aircraft SMS prevents a false conclusion about a flight accident on the basis of two-level identification of the operating conditions of the aircraft. As the computing core, programmable logic integrated circuits and converting devices are used; their type is determined by the type of the information exchange interfaces with the peripheral devices of the aircraft equipment.

Fig. 1. Scheme of SMS functioning during the aircraft flight. (Block diagram: inputs — crew status, devices for measurement of external factors, state of the onboard equipment; SMS — preliminary processing of data, decision support, transfer of information; outputs — devices for indication and notification of equipment, onboard equipment control systems.)

The signals that enter the control system are limiting and do not lead to a deterioration in the controllability of the aircraft. In the development of the aircraft control system, in addition to signal monitoring, there are additional ways to improve its reliability. An important aspect of the design is the use of artificial intelligence methods to adapt functioning in real time to changes in flight conditions. As means of artificial intelligence, programs based on fuzzy logic methods, neural networks and expert systems, implemented on the basis of their joint use, are employed [19, 20]. Moreover, the preprocessing modules use fuzzy logic algorithms, and the decision support module contains prediction blocks, i.e. it belongs to the class of hybrid expert systems. The proposed aircraft SMS makes it possible to assess the threat of a flight incident, to forecast its consequences, and also to generate recommendations to the pilot for its neutralization. At the same time, the SMS sends information to the control system of the aircraft that is taken into account when costs are reduced and reconfiguration is carried out.


4 The Status Functions Method as the Basis of the Database of Rules for the Systems of Interpretation and Data Transformation

The SF rules base represents a finite set of production rules that are consistent with the linguistic variables used in them. In the algorithms of the theory of fuzzy sets, the fuzzy rules are represented in the form:

RULE i: IF "Condition i" THEN "Conclusion i" [Fi],   (1)

where Fi (i2{1, 2, …, n}) are coefficients of the degree of definiteness or weighted in the interval [0, 1]. For harmonization of the rules with respect to linguistic variables in the form of conditions and conclusions of the rules, only fuzzy linguistic expressions of the form are used: 00

00

00

RULE\# [ : IF ‘‘ b1 equally a0 & ‘‘ b2 equally a00 THEN ‘‘ b3 equally v or 00

00

RULE\# [ : IF ‘‘ b1 equally a0 OR ‘‘ b2 equally a00 THEN ‘‘ b3 equally v

00

ð2Þ
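As an illustration only (the names and the weight value below are ours, not the paper's), weighted production rules of the forms (1) and (2) can be held in a small data structure:

```python
from dataclasses import dataclass

# Hypothetical in-memory representation of weighted production rules of the
# forms (1)-(2); all identifiers and values are illustrative.

@dataclass
class Clause:
    variable: str   # linguistic variable, e.g. "fatigue"
    term: str       # term it must equal, e.g. "low"

@dataclass
class Rule:
    conditions: list    # list of Clause, combined with the connective below
    connective: str     # "AND" or "OR"
    conclusion: Clause
    weight: float       # certainty coefficient F_i in [0, 1]

rule = Rule(
    conditions=[Clause("fatigue", "low"), Clause("attention", "scattered")],
    connective="AND",
    conclusion=Clause("state", "psi_11"),
    weight=0.9,
)
assert 0.0 <= rule.weight <= 1.0
```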

In fuzzy expressions, the membership functions are defined on the term set of the linguistic variables. To do this, a base variable x is introduced. If the variable has a numerical scale (for example, speed), its values can be used directly. If there is no numerical scale (for example, pilot fatigue), an ordinal scale can be used, for example, from 0 to 100%. Usually triangular and trapezoidal membership functions are used. When SF are used, a complex-valued function is constructed instead; it is assumed that this complex function describes the state of the modeled object or subject. Suppose that two characteristics are used to describe a phenomenon: the characteristic itself and its perception. The set of possible states of a certain system is characterized by introducing random variables to describe the variables that form this set; each value is random in the sense of its correspondence to the real state of the system. Assume that any random variable attributed to the state of an object (or subject) is composed of the characteristic Z′ and its perception Z″:

Z = Z′ + Z″.   (3)

Then, to introduce a system of state measurements for an object (subject), it is necessary to introduce an ordered pair of real random variables S = {S1, S2} belonging to different sets of estimates of observable "events" corresponding to the measurement of the object (subject): S1 is a practical estimate of the object (subject), and S2 is the perception of the event S1, corresponding to the imaginary part of the event. This gives a simple and intuitive way to introduce complex random variables attributed to the state of an object or phenomenon. This ordered pair will be used in the exponential representation of a complex-valued function.

4.1 The Creation of a Knowledge Base of Rules for the Systems of Interpretation and Data Transformation on the Basis of the Status Functions Method

To form the rule base of the interpretation and data transformation systems on the basis of the SF method, it is necessary to create rules for a double estimation of an object and a phenomenon. A rule for interpreting the data should look like this:

RULE ij: IF "Condition i" AND "Condition j" THEN "Conclusion ij" [Fij],   (4)

where Fij (i ∈ {1, 2, …, n}) are certainty coefficients (weight coefficients) from the interval [0, 1]. The condition on i uses the estimate of the measurable part; its distribution function is represented in an orthonormal basis of the coordinate functions of the system. The basis coordinate functions are obtained by orthogonalizing Gaussian functions that are as close as possible to the membership functions of fuzzy-set theory. The condition on j enters the exponent and characterizes the intentional direction of the aspirations of the object.

In fuzzy-set theory, the form of the membership function is chosen for the convenience of representation and computation, provided that it adequately represents the corresponding linguistic variable. The form of an SF is chosen from the orthonormal basis of the estimation system under the same adequacy condition. The stage is considered complete when all the rules for interpreting and transforming data on the basis of the SF method have been formed.

Consider the stage using the example of assessing the safety of an aircraft during a flight along a route. On the basis of formalized expert information, the main parameters of the aircraft flight safety assessment are outlined and combined into pairs of estimates. The rules for setting the values of the linguistic variables are formulated on the basis of these data. Orthonormalized SF are formed as

φ(r, k) = fl(r) e^(i2πki r),   (5)

where the values of l and i are determined according to the values of the linguistic variables from the expert data and take the values l = 1, 2, 3; i = 1, 2, 3, 4. The form of the SF (see Fig. 2) is given by the following expressions:

Fig. 2. The real parts of the SF: f1, f2, f3

f1(r) = 3.3761 (0.5802 e^(−22.22 r²) + e^(−49.99 (0.14 + r)²)),   (6)

f2(r) = 1.9393 e^(−22.22 r²),   (7)

f3(r) = 1.3766 e^(−49.99 (0.14 − r)²) + 1.8710 e^(−49.99 (0.14 + r)²) − 5.0211 e^(−22.22 r²),   (8)

k1 = 1/4, k2 = 0.33, k4 = 1/2.   (9)
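A minimal numerical sketch of a status function of the form (5), assuming a generic normalized Gaussian envelope in place of the fitted functions (6)-(8); all numbers here are illustrative:

```python
import numpy as np

# Sketch of a complex-valued status function phi(r, k) = f(r) * exp(i*2*pi*k*r),
# with a generic normalized Gaussian envelope instead of the paper's fitted
# functions; sigma and the grid are illustrative assumptions.

def f(r, sigma=0.15):
    # Gaussian envelope normalized so that the integral of f(r)^2 dr equals 1
    return (1.0 / (np.pi * sigma**2))**0.25 * np.exp(-r**2 / (2 * sigma**2))

def phi(r, k):
    return f(r) * np.exp(1j * 2 * np.pi * k * r)

r = np.linspace(-1.0, 1.0, 4001)
dr = r[1] - r[0]
# the phase factor drops out of |phi|^2, so it still integrates to ~1
norm = (np.abs(phi(r, k=0.33))**2).sum() * dr
print(round(norm, 3))  # 1.0
```

The squared modulus of such a function behaves as a probability density over the base variable r, which is what the integral-moment calculations below rely on.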

The list and description of the parameters affecting the safety of the aircraft flight are presented in Table 1. The rules for interpreting the data are of the form:

RULE 11: IF "Fatigue = low" AND "Attention = scattered" THEN "ψ11 = f1 · e^(i2πk1 r)",   (10)

and so on. In the case under consideration, the rule base contains 3 · 4 · 6 = 72 rules. The next step is to enter specific values of the variables.

4.2 The Fuzzification of Input Variables

Fuzzification, i.e. the assignment of membership function values of fuzzy terms using ordinary data, is carried out as follows. First, all the concrete values of the input variables of the fuzzy inference procedure are considered known, i.e., the values V′ = {a1, a2, …, am}. Then the conditions of the form "bi is equal to a" in the rules of the fuzzy inference procedure are examined; here b is the name of the linguistic variable, a is a value on the scale of the base variable x, and a′ is a term with a given membership function μ(x). The value ai is used as the argument of μ(x), and the quantitative value bi′ = μ(ai) is calculated, which is the result of the fuzzification of the condition "bi is equal to a". The fuzzification is complete when all the values bi′ = μ(ai) are determined for each of the conditions in the rule base of the fuzzy procedure. If a definite term a″ of the linguistic variable bi is absent in the fuzzy statements, then the corresponding membership function value is not determined.

Table 1. The parameters affecting the safety of the aircraft flight.

Group | Subgroup | Parameter | Kind of SF | Method of measurement (device) | Linguistic variables
Psychophysical state of the pilot | 1.1 | Tiredness | Real, fi(r) | Eye reaction sensor, strain gauges | High f1, middle f2, low f3
Psychophysical state of the pilot | 1.1 | Attention | Imaginary, exp(i2πkj r) | Eye reaction sensor | High k1, middle k2, low k3, scattered k4
Psychophysical state of the pilot | 1.2 | Level of training (competence) | Real, fi(r) | Test assignments with the classification of the pilot | High f1, middle f2, low f3
Psychophysical state of the pilot | 1.2 | Stress | Imaginary, exp(i2πkj r) | Eye reaction sensor | No k1, low k2, middle k3, high k4
State of the aircraft | 2.1 | Failure of functionally significant elements | Real, fi(r) | Means of signaling and indication of failures | Minor f1, crash f2, catastrophic f3
State of the aircraft | 2.1 | Deformation of the structural components | Imaginary, exp(i2πkj r) | Sensors for measuring loads on power cells | Absent k1, small k2, significant k3, critical k4
State of the aircraft | 2.2 | Manageability and stability of the aircraft | Real, fi(r) | Characteristic of the control object (Cooper-Harper table) | High f1, middle f2, low f3
State of the aircraft | 2.2 | Error in the software of the aircraft control systems | Imaginary, exp(i2πkj r) | Means of detecting the failure of the aircraft control function | No k1, small k2, significant k3, critical k4
External factors | 3.1 | Headwind | Real, fi(r) | Change of aircraft flight parameters | Weak f1, middle f2, strong f3
External factors | 3.1 | Visibility | Imaginary, exp(i2πkj r) | Photocells | Good k1, poor k2
External factors | 3.2 | Lateral wind | Real, fi(r) | Change of aircraft flight parameters | Weak f1, middle f2, strong f3
External factors | 3.2 | Visibility | Imaginary, exp(i2πkj r) | Photocells | Good k2, poor k1

In accordance with the above description of the method, possible values of the input variables for the procedure of predicting flight safety are selected on the basis of the created rule base. For example, suppose the following situation is diagnosed:
1. Psychophysical state of the crew:
1.1. Fatigue is medium; attention is medium.
1.2. The level of training is medium; stress is absent.
2. State of the aircraft:


2.1. The failure of functional elements is negligible; the deformation of the force elements is absent.
2.2. The steerability and stability of the aircraft are high; there is no error in the software of the aircraft control systems.
3. External influencing factors:
3.1. The headwind is medium; visibility is good.
3.2. The lateral wind is medium; visibility is good.

Table 1 shows the kind of the status function and its value on the scale of the base variable r for each group of flight safety indicators. This value is calculated as the first integral moment of the SF, often called the mathematical expectation or mean value of the random function.
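The first integral moment mentioned above can be sketched numerically; the Gaussian envelope and the grid below are illustrative, not one of the paper's fitted functions:

```python
import numpy as np

# The scale value attributed to a state is the first integral moment (mean)
# of the squared SF modulus; envelope and grid are illustrative assumptions.

def first_moment(envelope, r, dr):
    w = np.abs(envelope)**2
    w = w / (w.sum() * dr)        # normalize |phi|^2 to a density
    return (r * w).sum() * dr     # expected value of r

r = np.linspace(-0.5, 0.5, 2001)
dr = r[1] - r[0]
env = np.exp(-49.99 * (r + 0.14)**2)       # Gaussian centred at r = -0.14
print(round(first_moment(env, r, dr), 2))  # -0.14
```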

4.3 The Aggregation of the Conditions of the Fuzzy Rules

At this stage, the truth values of the conditions B = {bi′} are considered known, and the conditions of the rules of the fuzzy procedure are investigated. If a rule condition is a fuzzy expression of the form "b is equal to a" or "b ∇ a", then its truth value is bi′, where ∇ is a sign corresponding to the expressions "MORE", "LESS", "MUCH MORE" and others. When the algorithm for interpreting and transforming data on the basis of the SF method is used, the degree of truth is determined from the procedure for calculating the correlation of the mathematical expectations of the squares of the SF moduli. For example, using expert estimates of the degree of truth means measuring the consistency of the opinion of a particular expert with the group's opinion. In our case, let us use an algorithm reflecting the relative frequency with which each of the indicators is observed in the precedents of aircraft safety. The weight coefficients [21] are determined from the correlation coefficient of the j-th indicator with the total score over the base of precedents:

Rjy = [Σi Kij·Yi − (Σi Kij)(Σi Yi)/n] / √{[Σi Kij² − (Σi Kij)²/n] · [Σi Yi² − (Σi Yi)²/n]},   (11)

where the sums run over i = 1, …, n, Kij is the value of the i-th precedent for the j-th indicator, Yi is the final test score of the i-th precedent, and n is the number of precedents. The truth values of the subconditions determined from the precedent matrix are {0.04719, 0.21491, 0.12476, 0.19781, 0.20766, 0.20766}.
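Formula (11) is the sample (Pearson) correlation coefficient. A sketch with invented precedent data, cross-checked against NumPy's built-in correlation:

```python
import numpy as np

# Formula (11): sample correlation between the j-th indicator over the
# precedents, K[:, j], and the final scores Y; the data below are made up.

def r_jy(K_j, Y):
    n = len(Y)
    num = np.sum(K_j * Y) - np.sum(K_j) * np.sum(Y) / n
    den = np.sqrt((np.sum(K_j**2) - np.sum(K_j)**2 / n)
                  * (np.sum(Y**2) - np.sum(Y)**2 / n))
    return num / den

K_j = np.array([3.0, 5.0, 2.0, 4.0, 5.0])     # j-th indicator per precedent
Y = np.array([60.0, 85.0, 55.0, 70.0, 90.0])  # final score per precedent

# (11) coincides with the standard Pearson correlation coefficient
assert abs(r_jy(K_j, Y) - np.corrcoef(K_j, Y)[0, 1]) < 1e-12
```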

4.4 The Activation of Subconclusions in Fuzzy Rules

This is the process of calculating the degree of truth of the conclusions of the fuzzy production rules. For the rules, the truth values of the fuzzy procedure conditions are assumed known, i.e., the set of values B″ = {bi″} and the coefficients Fi. The truth value of a conclusion is determined as the algebraic product of the corresponding quantity bi″ and the coefficient Fi.
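As a sketch of the activation step (the weights Fi below are invented; the truth values are the first three from the precedent matrix above):

```python
# Activation sketch: the truth degree of each conclusion is the algebraic
# product of the aggregated condition truth b'' and the rule weight F;
# the weights are invented for illustration.

b_conditions = [0.04719, 0.21491, 0.12476]   # aggregated truth values b''
weights = [0.9, 0.8, 1.0]                    # certainty coefficients F_i

activated = [b * f for b, f in zip(b_conditions, weights)]
assert all(0.0 <= a <= 1.0 for a in activated)
print([round(a, 4) for a in activated])  # [0.0425, 0.1719, 0.1248]
```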


At the stage of formation of the rule base, rules for a double estimation of an object and a phenomenon are created, which distinguishes the SF algorithm from the Mamdani algorithm. The degree of truth of each of the subconclusions is determined by calculating the correlation of the mathematical expectations of the squares of the SF moduli.

4.5 The Accumulation of Conclusions of Fuzzy Rules

This is the procedure for determining the membership functions of the output linguistic variables of the collection W = {w1, w2, …, ws}. The accumulation combines the degrees of truth of the conclusions when forming the functions of the output variables. Minimax operations are used to accumulate data on the basis of the SF method (see Fig. 3).

Fig. 3. View of the SF: (a) the real and imaginary parts before the change in wind and crew stress; (b) the square of the modulus and the argument of the SF before the change in wind and crew stress; (c) the real and imaginary parts after the change in wind and crew stress; (d) the square of the modulus and the argument of the SF after the change in wind and crew stress
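The minimax operations can be sketched in the generic Mamdani style (our interpretation; the membership shapes and clip levels below are invented):

```python
import numpy as np

# Generic min/max accumulation sketch: each fired rule clips (min) its
# conclusion membership function, and the clipped conclusions are combined
# by a pointwise max; all shapes and levels here are illustrative.

x = np.linspace(0.0, 1.0, 101)
# triangular conclusion membership functions clipped at the activation levels
mu1 = np.minimum(0.6, np.maximum(0.0, 1.0 - np.abs(x - 0.3) / 0.3))
mu2 = np.minimum(0.8, np.maximum(0.0, 1.0 - np.abs(x - 0.7) / 0.3))
accumulated = np.maximum(mu1, mu2)   # union of the conclusions via max
print(round(accumulated.max(), 2))   # 0.8
```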

4.6 Defuzzification

Defuzzification is the process of determining the deterministic (crisp) values of the output linguistic variables of the collection. It is complete when quantitative characteristics (real numbers) are defined for all the output linguistic variables.


For defuzzification on the basis of the SF method, the rules for calculating integral moments, Fourier transforms and the transfer function are used. In our case, the decision support rules of the aircraft SMS are developed on the basis of a precedent matrix of flight incidents. For example, suppose that during the flight a sharp change in the wind and in the stress of the crew is observed. The values of the integral moments and of the transfer function are presented in Table 2.

Table 2. Integral moments assessment of the state of aircraft flight.

Characteristic | At the initial moment of time | At the final moment of time
Expected value | −0.127696 | −0.179503
Dispersion | 0.0566004 | 0.029943
Asymmetry | −0.00431677 | −0.004551
Excess | 0.00228713 | 0.001038
Transfer function | max: p = 11.9 − 0.4i; min: p = 23.6 − 2.1i |

Terms for evaluation: complex flight conditions, close to emergency; the trend is positive; management is effective.

Fig. 4. The skeleton of accumulation of fuzzy rules for the example of ensuring the safety of aircraft flight (combining the estimates for the aircraft staff, the aircraft and the external factors into the overall flight safety estimate)

As a result of the analysis of this situation on the basis of the SF method, it can be diagnosed that the aircraft is being controlled in difficult flight conditions, close to emergency ones; however, the expected trend is positive and the aircraft is controlled effectively (Fig. 4).

5 The Comparison of the Status Functions Method with Analogues

Let us briefly compare the steps of the data transformation algorithm based on the status functions method, as well as the well-known Tsukamoto, Larsen and Sugeno algorithms, with the Mamdani algorithm. All the algorithms consist of the following steps: (1) formation of the rule base; (2) fuzzification; (3) aggregation; (4) activation; (5) accumulation; (6) defuzzification.

Step-by-step agreement with the Mamdani algorithm:
- Tsukamoto algorithm: (1) agrees; (2) agrees; (3) agrees; (4) differs; (5) differs; (6) agrees.
- Larsen algorithm: (1) agrees; (2) agrees; (3) differs; (4) agrees; (5) agrees; (6) agrees.
- Sugeno algorithm: (1) differs; (2) agrees; (3) differs; (4) differs; (5) differs; (6) agrees.
- Data transformation algorithm based on the status functions method: all six steps differ.

The difference in the content of the steps of the algorithm is due to the use of complex-valued status functions (SF).

6 Conclusion

As a result of the work performed, a mathematical modeling method based on status functions for enhancing flight safety is proposed and implemented as part of the development of the flight safety management system. A distinctive feature of the proposed method is the use of status functions, which are selected on the condition that the linguistic variable of the measured characteristic adequately represents the process of assessing the state of the control object. The use of status functions also makes it possible to take into account the real and imaginary components of the input characteristics of the system in the flight safety assessment algorithms. Therefore, the use of status functions as part of the SMS algorithms will ensure high reliability of assessing the presence of the threat of an accident and of its consequences. Further development of the method is related to its interaction with the decision support device of the SMS and to its modification depending on the flight mode of the aircraft.


References

1. Popov, Ju.V.: Safety indicators of aviation flights. Internet-zhurnal «Tehnologii tehnosfernoj bezopasnosti», No. 6(58) (2014). http://agps-2006.narod.ru/ttb/2014-6/10-06-14.ttb.pdf. Accessed 12 Apr 2017. (in Russian)
2. Kluev, V.V., Rezchikov, A.F., Kushnikov, V.A., Tverdokhlebov, V.A., Ivashenko, V.A., Bogomolov, A.S., Filimonyuk, L.Yu.: An analysis of critical situations caused by unfavorable concurrence of circumstances. Kontrol'. Diagnostika 7, 12–14 (2014). (in Russian)
3. Sapogov, V.A., Anisimov, K.S., Novozhilov, A.V.: Fail-safe computing system for integrated flight control systems. Electron. J. «Trudy MAI», No. 45, 42 (2008). www.mai.ru/science/trudy. (in Russian)
4. Harris, J.: An Introduction to Fuzzy Logic Applications. Springer, Dordrecht (2000)
5. Bol'shakov, A.A., Kulik, A.A., Sergushov, I.V.: Development of control system algorithms for the flight safety of helicopter-type aircraft. Proceedings of the Samara Scientific Center of the Russian Academy of Sciences 18(1(2)), 358–362 (2016). (in Russian)
6. Luo, J., Lan, E.: Fuzzy logic controllers for aircraft flight control. In: Fuzzy Logic and Intelligent Systems. International Series in Intelligent Technologies, vol. 3. Springer, Dordrecht (1995)
7. Nonami, K., Kendoul, F., Suzuki, S., Wang, W., Nakazawa, D.: Autonomous Flying Robots: Unmanned Aerial Vehicles and Micro Aerial Vehicles. Springer, Japan (2010). ISBN 978-4-431-53855-4
8. Ionita, S., Sofron, E.: The fuzzy model for aircraft landing control. In: Pal, N.R., Sugeno, M. (eds.) Advances in Soft Computing – AFSS 2002. LNCS, vol. 2275. Springer, Heidelberg (2002)
9. Zadeh, L.A.: Outline of a new approach to the analysis of complex systems and decision processes. IEEE Trans. Syst. Man Cybern. SMC-3(1), 28–44 (1973)
10. Fedunov, B.E., Prohorov, M.D.: Conclusion on the precedent in knowledge bases of onboard intellectual systems. Artif. Intell. Decis. Mak. 3, 63–72 (2010). (in Russian)
11. Kontogiannis, T., Malakis, S.: Proactive approach to detecting and identifying human errors in aviation and air traffic control. Saf. Sci. 47, 693–706 (2009)
12. Vishnjakova, L.V., Degtjarev, O.V., Slatin, A.V.: Simulation modeling of the functioning of complex aviation systems and control complexes. In: Conference Proceedings «Simulation Modeling: Theory and Practice», SPIIRAS, vol. 1, pp. 30–41 (2011). (in Russian)
13. Veshneva, I.V., Chistjakova, T.B., Bol'shakov, A.A.: The status functions method for processing and interpretation of the measurement data of interactions in the educational environment. SPIIRAS Proc. 49, 144–166 (2016). (in Russian)
14. Veshneva, I.V., Chistjakova, T.B., Bol'shakov, A.A., Singatulin, R.A.: Model of formation of the feedback channel within ergatic systems for monitoring of quality of processes of formation of personnel competences. Int. J. Qual. Res. 9(3), 495–512 (2015)
15. Ossovskij, S.: Neural Networks for Information Processing. Finansy i statistika, Moscow (2002). (in Russian)
16. Mamdani, E.H.: Advances in the linguistic synthesis of fuzzy controllers. Int. J. Man-Mach. Stud. 8, 669–678 (1976)


17. Sinicyn, I.N.: Canonical Representations of Random Functions and Their Application in Problems of Computer Support of Scientific Research. TORUS-PRESS, Moscow (2009). (in Russian)
18. Batenkov, K.A.: Continuous channel modeling in shape of some space transformation operators. SPIIRAS Proc. 32, 171–198 (2014). (in Russian)
19. Matthew, S., Sunitha, M.S.: Links in graphs and fuzzy graphs. Achiev. Fuzzy Sets Syst. 6, 107–119 (2010)
20. Shevchenko, A.M., Nachinkina, G.N., Solonnikov, Ju.I.: Modeling of means of information support of the pilot during the take-off phase of the aircraft. In: Proceedings of the Moscow Institute of Electromechanics and Automatics (MIEA), vol. 5, pp. 54–64 (2012). (in Russian)
21. Bol'shakov, A.A., Veshneva, I.V., Mel'nikov, L.A., Perova, L.G.: New Methods of Mathematical Modeling of the Dynamics of the Formation and Management of Competences in the Learning Process at the University. Hot Line-Telecom, Moscow (2014). (in Russian)

Mathematical Modeling of Electronic Records Management and Office Work in the Executive Bodies of State Administration

Olga Perepelkina and Dmitry Kondratov

Russian Presidential Academy of National Economy and Public Administration, ul. Moskovskaya 164, Saratov, Russia
[email protected], [email protected]

Abstract. The activities of the executive bodies of state administration consist in making management decisions within the framework of their powers. The efficiency of this process is determined by the office work and records management system. The introduction of an electronic document management system is a priority for state administration, and successful implementation of such a system allows a transition to a qualitatively higher level of functioning. However, electronic document management systems are not being introduced at a sufficiently high rate, largely because there is no accurate system for monitoring the effectiveness of implementing electronic document management and office work, which prevents quick adjustment and appropriate changes in the system. The relevance of the study stems from the importance of increasing the efficiency of the electronic document management system in administration, which is possible only by means of mathematical models, advanced and reliable algorithms, and program complexes for quality assessment according to certain criteria. The purpose of the research is to develop mathematical models and a software package for evaluating the effectiveness of the introduction of an electronic document management system in order to improve administrative effectiveness. The main objectives of modeling the information flow in the administration are defined in the paper as: improving management efficiency, accelerating the movement of documents, and reducing the complexity of processing documents. The formalization method, which consists in exploring objects by displaying their content and structure in symbolic form, was used. The main result of the study is a mathematical model for evaluating the implementation of the system of electronic document and records management in the executive bodies of administration, constructed by means of "soft" mathematical modeling. On the basis of the constructed model, software for evaluating the effectiveness of implementation of the system in the administration can be developed. This software can be implemented using any Russian software.

Keywords: Document management · Electronic document management · Electronic document management system · Evaluation criteria · Mathematical modeling · Object modeling · Soft mathematical modeling · Forrester system dynamics model · Software package · Domestic office software

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 622–633, 2019. https://doi.org/10.1007/978-3-030-12072-6_50


1 Introduction

At present, information and communication technologies are being actively implemented in the interaction between citizens and the administration, determining the ways of transformation and modernization of the state apparatus. The introduction of information and communication technologies makes the work of the executive bodies of state power more dynamic, flexible and interactive. They become new administrative technologies that improve the quality of public services through operational interaction with citizens [1]. The main function of a state institution is to make management decisions. In this case, the initial data for making decisions, and the decisions themselves, are presented in the form of documents. The volume of technical document management is often so great that it overshadows the main activities of both the administration and management [2]. Office automation is based on the fact that a registered paper document is fully or partially converted into electronic form, and further work is carried out mainly with electronic registration cards and electronic representations of documents [3]. Management effectiveness largely depends on the coordinated work of office work elements, so the creation and movement of documents in the public management system is the basis of these bodies' activities. An electronic document management system is a system for storing, transmitting and processing electronic data in the form of electronic documents [4]. Thus, introducing an electronic document management system is a priority for the administration, because successful implementation of this system ensures a transition to a higher quality level of functioning. However, the implementation of electronic document management systems is not quick enough, which is largely due to the lack of an accurate control system over the effectiveness of introducing electronic document management and record keeping, and this does not allow quick adjustment and appropriate changes in the system.

In this case, all authorities must use the same electronic document management system, or electronic document management systems should be easily integrated with each other, which is especially important for the administration's transition to Russian software in fulfillment of the Order of the Ministry of Communications and Mass Media of the Russian Federation No. 520 of 29.09.2017 "On approval of the schedule of transition of the Ministry of Communications and Mass Media of the Russian Federation to the use of domestic office software during 2017–2018 and the planning period until 2020". The search for possible ways to create a unified electronic document management system requires an analysis evaluating the effectiveness of system implementation, which is one of the main objectives of our research. The results of the analysis will help to select the optimal strategy for the use of an electronic document management system in the administration in order to optimize the e-government system.


2 Research Methods

The results of the research are obtained on the basis of the theory of system analysis, set theory, theoretical and graphic models, methods of structural analysis and design, mathematical modeling, and modern technologies of data storage and processing. The research methodology is based on the results of discrete mathematics, the theory of object-oriented design, and relational databases. Theoretical results were obtained by deductive reasoning, and practical models were constructed by means of mathematical modeling.

3 Results and Discussion

A document is an information object that fixes and regulates the activities of any organizational structure. Documents are widely used in various spheres of public life, and the movement of documents from the moment they are received or created to the completion of execution or dispatch is called document circulation [4]. Electronic document circulation is the movement of documents created in electronic form in an organization (government body, company, enterprise) from the moment of their creation or receipt until the completion of execution or dispatch: registration, formation of cases, distribution, execution control, storage and reuse of documentation, reference work with documents [5]. An electronic document management system is understood as a system for the storage, transmission and processing of electronic data in the form of electronic documents [4]. The basic structural unit of paperwork and document management in the authorities is the document. The term "document" is commonly understood as "information recorded on a tangible medium with details that allow it to be identified" [6]. Work with documents is an integral part of the activities of the authorities. For the purpose of ordering this work and establishing its uniformity, the Government of the Russian Federation, by Order No. 477 of 15.06.2009, approved the Rules of office work in federal executive authorities [7]. In organizing document management and record keeping, the authorities are guided by these Rules in accordance with Article 11 of Federal Law No. 149 of 27.07.2006 "On information, information technologies and information protection", as well as by the requirements for electronic document management systems [8, 9].

An electronic document management system is an electronic system that provides a strictly regulated and formally controlled movement of documents within and outside the organization on the basis of information and communication technologies [10]. The possibility of using electronic document management systems in Russia appeared with the adoption of Federal Law No. 1 of 10.01.2002 "On electronic digital signature" [11] and a number of normative documents [12–16], which formed the necessary legislative framework for the creation and development of secure, legally significant electronic document management [17].


The introduction of electronic document management and record keeping in the authorities is aimed at solving three main blocks of tasks: document processing; control over executive discipline; and organization of access to information. We select the following criteria for the classification of indicators of the efficiency of introducing an electronic document management system in government: types of resources; principal objectives; evaluated characteristics; direction of performance evaluation. It is quite difficult to quantify the effectiveness of the implementation of an electronic document management system, as it is necessary to take into account a large number of factors and process a significant amount of information: the more complex and large-scale the system, the more difficult it is to quantify its effectiveness. However, a quantitative assessment is necessary for the implementation of an electronic document management and records management system in the authorities, as it can be used to determine the values of the system's indicators and, on their basis, to assess the degree of comparative preference of different decisions or to compare the results obtained at each stage of activity with the previous or subsequent results [18]. To analyze objects and processes with the help of mathematical methods, it is necessary to produce a mathematical description of them, which is called a mathematical model [19]. The electronic document management system will be the object of simulation. The main objectives of modeling the information flow in the administration are: improving the efficiency of administrative activity; accelerating the movement of documents; and reducing the complexity of processing documents [7].

When determining the criteria for evaluating the effectiveness of an electronic document management system, it is appropriate to evaluate an implemented system, because if the introduction of the system has not been completed, the estimated effect will be low or absent. The effect of the introduction of an electronic document management system can be divided into two parts: the direct effect, associated with savings on materials, staff time, etc.; and the indirect effect, due to the advantages that the system provides for the functioning of the organization (transparency of management, control of performance discipline, the possibility of accumulating knowledge, and others) [1]. At each stage of implementation, it is necessary to control the achievement of certain results. The primary task is to determine the criteria for evaluating the effectiveness of the electronic document management system at each stage: pre-project examination, analysis and preliminary planning, informational investigation, development, and implementation of the system. It is then necessary to determine how the identified criteria are achieved at each implementation stage: how the process of document functioning is controlled and managed, and whether operational and high-quality management decisions are achieved, which becomes possible only with the use of mathematical models, modern and reliable algorithms, and complexes of quality assessment programs according to certain criteria. In general, the whole process of implementation of the electronic document-flow management system can be represented in the form of a formal model with application of the mathematical apparatus to ensure the effectiveness of the control objectives.

626

O. Perepelkina and D. Kondratov

When constructing the mathematical model of the evaluation system we chose "soft" mathematical modeling, as proposed by V. I. Arnold, since it allows building a mathematical evaluation model, studying objects and processes through their mathematical models, and interpreting the obtained results. The practical value of a "soft" mathematical model depends on how adequately and fully it reflects the internal structure of the object or process under study [20]. We considered the use of "soft" mathematical modeling in constructing the model of the electronic office-work evaluation system in administration bodies in [21]. In [21] we also identified the indicators for assessing the effectiveness of the implementation, on the basis of which the criteria values of the indicators X0, X1, X2, X3, X4, X5, X6 were determined. To develop mathematical models for modeling and forecasting the efficiency-estimation indicators of electronic document circulation system implementation in the administration, it is advisable to use the graph method of data representation. When analyzing risks, it is more convenient to use a common model of the electronic document management system that combines all possible document-processing processes in the organization. To do this, it is necessary to analyze the information flows, determine the routes of the documents, and combine the information into a common graph representing a complete interconnected model of the electronic document management system, used later in the factor analysis of information risks presented in [22]. 
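As an illustration of the graph representation described above, a common document-flow model can be sketched as an adjacency-list graph whose nodes and routes are merged from the flows of individual document types. All stage names in this sketch are hypothetical, not taken from the paper:

```python
# Sketch (assumed structure): a document-flow graph for an EDMS, where nodes are
# processing stages and edges are possible document routes. Stage names are
# illustrative placeholders.
from collections import defaultdict

class DocumentFlowGraph:
    def __init__(self):
        self.edges = defaultdict(set)

    def add_route(self, src, dst):
        """Register a possible document transition between two stages."""
        self.edges[src].add(dst)

    def merge(self, other):
        """Combine another flow graph into a common model, as the paper suggests."""
        for src, dsts in other.edges.items():
            self.edges[src] |= dsts

    def reachable_from(self, start):
        """All stages a document can reach from `start` (depth-first search)."""
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(self.edges[node])
        return seen

# Two per-document-type flows merged into one common graph.
incoming = DocumentFlowGraph()
incoming.add_route("registration", "review")
incoming.add_route("review", "execution")

internal = DocumentFlowGraph()
internal.add_route("draft", "approval")
internal.add_route("approval", "registration")

common = DocumentFlowGraph()
common.merge(incoming)
common.merge(internal)
print(sorted(common.reachable_from("draft")))
```

Such a merged graph is the kind of complete interconnected model that can then feed the factor analysis of information risks mentioned above.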
Since the electronic office-work evaluation system in the administration is a complex, non-deterministic, stochastic system, we used Jay Forrester's system dynamics model to describe it; it is based on the principle of system dynamics and assumes that the system's behavior is determined by its information-logical structure. The model, designed to simulate world processes, was analyzed in his works devoted to the study of industrial and urban systems [23]. The basic model of Forrester's global dynamics consists of the following elements: levels; flows, conveying the content of one level to another; decision procedures, which regulate the rate of flow between the levels; and information channels connecting the decision procedures with the levels. Differential equations are written for the main phase variables, where the positive growth rate of a variable includes all factors causing the growth of the variable y, and the negative growth rate includes all factors causing its decrease. The rate is a product of functions that depend only on "factors", that is, combinations of the basic variables, which in their turn are themselves functions of the system equations. The number of factors is less than the number of key variables in the model, and each factor does not depend on all system equations; this simplifies the modeling task.

Mathematical Modeling of Electronic Records Management

627

Several interrelated flows are used to assess the implementation of the electronic document management system in the administration by means of the Forrester model: paper/electronic documents; the number of employees of the authorities; the number of employees registered in the electronic document management system; the number of registered employees actually working in the system; the number of employees not working in the system; the number of electronic documents in the databases; and the number of registration cards of paper documents. The block diagram of the streams used for modeling the assessment of the electronic document system in the administration is supplemented by a system of equations which makes it possible to measure the dynamic changes occurring as these streams flow at different input rates and with different lag and gain parameters. Thus the impact of external and internal influences on the developed model for assessing the implementation of the electronic office-work evaluation system in the administration is taken into account. A system level in the implementation-estimation model is a variable depending on the difference between incoming and outgoing flows. The model also uses rates, which account for the instantaneous flows between levels in the system. The levels measure the state achieved by the electronic document management system as a result of the combined influence of certain factors. The decision-making procedures are carried out by the decision maker, who plays the key role in changing the values of the factors influencing the system. An example of a subgraph representation of a system level is shown in Fig. 1.

Fig. 1. View of the system-level subgraph X0


Electronic document management systems consist of several modules (subsystems): office work, citizens' requests, orders, mobile workplace, reporting center. In the process of electronic document management system implementation the modules are introduced gradually (module by module). That is why the number of registered users is an important indicator, and the function f1(X6) depends on time. Once the electronic document management system is implemented, all administration employees should work in it, and the share of employees registered in the system should be close to 100% of the total number of employees of the authority. In addition to the dependencies considered above, we presented a mathematical model of additional functional dependencies in [7]. The worked-out mathematical model for assessing electronic document management system implementation in the administration is then presented in the form of the following system of differential equations:

$$
\begin{cases}
\dfrac{dX_0}{dt} = X_0(t)\,\dfrac{B\cdot 100}{A}\,\dfrac{V_1}{CI}\,f_1(X_6)\\[4pt]
\dfrac{dX_1}{dt} = X_1(t)\,\dfrac{C\cdot 100}{B}\,\dfrac{V_2}{CI}\,f_0(X_0)\\[4pt]
\dfrac{dX_2}{dt} = X_2(t)\,\dfrac{ED\cdot 100}{D}\,\dfrac{V_5}{ED}\,f_{10}(X_0)\\[4pt]
\dfrac{dX_3}{dt} = X_3(t)\,\dfrac{V_6}{ED}\,f_2(X_1)\,f_6(X_2)\\[4pt]
\dfrac{dX_4}{dt} = X_4(t)\,f_3(X_1)\,f_7(X_2)\,f_{11}(X_3)\\[4pt]
\dfrac{dX_5}{dt} = X_5(t)\,X_4(t)\,\dfrac{V_5 D}{V_6 D}\,f_4(X_1)\,f_8(X_2)\,f_{12}(X_3)\,f_{15}(X_4)\\[4pt]
\dfrac{dX_6}{dt} = X_6(t)\,\dfrac{(K S V_4)\cdot 100}{K S V_3}\,f_5(X_1)\,f_9(X_2)\,f_{13}(X_3)\,f_{14}(X_4)\,f_{16}(X_5)
\end{cases}
\quad (1)
$$

The right-hand sides of the equations contain the constants B, A, C, D, ED, I, K, S, V1, V2, V3, V4, V5, V6. They are determined experimentally at the adaptation stage of the software designed to simulate a particular object. The system of equations uses the functional dependencies f0(X0), f1(X6), f2(X1), f3(X1), f4(X1), f5(X1), f6(X2), f7(X2), f8(X2), f9(X2), f10(X0), f11(X3), f12(X3), f13(X3), f14(X4), f15(X4), f16(X5): functions that determine the effect of one criterion variable on the rate of change of the criterion variable on the left-hand side of the equation. In the course of the study these dependencies were approximated by polynomials of low degree. Figure 2 shows the dependencies f1(X6), f2(X6), f3(X6) and their graphs on a one-year interval, where: f1(X6) is the ratio of the number of registered employees of the authorities in the system to the number of employees actually working at this stage in the electronic document management system; f2(X6) is the ratio of the number of electronic documents in the databases created in the electronic document management system to the number of registration cards of paper documents in the administration; f3(X6) is the ratio of the amount of time spent on typical operations with electronic documents (registration of incoming, outgoing and internal documents).
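A level system of this general form can be integrated numerically, for instance with the explicit Euler method. The sketch below uses invented rate constants and influence functions purely for illustration; the experimentally determined constants B, A, ..., V6 and the fitted polynomials f_j would replace them in practice:

```python
def simulate(x0, rates, dt=0.01, t_end=1.0):
    """Explicit Euler integration of dX_i/dt = X_i(t) * rate_i(X), the generic
    form of Forrester-type level equations such as system (1)."""
    steps = int(round(t_end / dt))
    x = list(x0)
    history = [list(x)]
    for _ in range(steps):
        dx = [x[i] * rates[i](x) for i in range(len(x))]
        x = [x[i] + dt * dx[i] for i in range(len(x))]
        history.append(list(x))
    return history

# Placeholder rate functions: products of constants and low-degree polynomial
# influence functions, with invented coefficients for illustration only.
f1 = lambda v: 0.2 * v + 0.5              # f1(X6) approximated linearly
rates = [
    lambda x: 0.3 * f1(x[6]),             # dX0/dt / X0, cf. the first equation
    lambda x: 0.1,                        # remaining rates kept constant for brevity
    lambda x: 0.2,
    lambda x: -0.1,                       # a decreasing level, e.g. X3
    lambda x: 0.05,
    lambda x: 0.0,
    lambda x: 0.15,
]
# Initial conditions: normalized 2013 values of X0..X6 as fractions.
traj = simulate([0.48, 0.67, 0.16, 0.08, 0.09, 0.01, 0.60], rates)
print(len(traj), len(traj[0]))  # 101 7
```

With the paper's fitted constants and polynomials substituted for the placeholders, the same loop produces the forecast trajectories of the indicators.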


Fig. 2. Graphs of approximation of dependencies by polynomials of low degree

The calculation results for the indicators of the electronic document management system according to the developed model are presented as graphs (Fig. 3).

Fig. 3. Graphs of calculation of indicators of electronic document management system on the developed model

Thus, the developed model is intended for simulating and forecasting the basic parameters of the evaluation of the electronic document management system in administration bodies; it consists of levels, rates and decision-making procedures and has a complex structure with many feedbacks. The model makes it possible to take internal and external impact factors into account and supports management decisions by the heads of the administration. Because the developed model is rather complex in the analysis of interdependencies, a regression model is used to test its adequacy. This model is built on the basis of the administration's factual material, including the compilation and analysis of reports, ratings and analytical reports. The coefficients in the equations are determined by the least-squares method. The adequacy of the developed model is assessed by calculating the approximation errors and evaluating the correlation coefficient.


To carry out a computational experiment on the developed mathematical model with certain parameters of the electronic document management system, using the normalized indicators of the system given in Table 1 as initial conditions, an adequacy check was carried out by means of the regression analysis apparatus (a study of the effectiveness of electronic document management system implementation in the Penza region administration).

Table 1. Normalized indicators for evaluating the effectiveness of the system implementation for 2013–2017

Year | X0     | X1     | X2    | X3   | X4   | X5   | X6
2013 | 48.39  | 66.67  | 16.42 | 8.00 | 9.00 | 1.00 | 60.00
2014 | 64.52  | 85.00  | 20.50 | 7.00 | 8.00 | 1.00 | 40.00
2015 | 47.62  | 93.33  | 88.23 | 5.00 | 8.00 | 1.00 | 30.00
2016 | 100.00 | 100.00 | 93.24 | 4.00 | 7.00 | 1.00 | 100.00
2017 | 100.00 | 100.00 | 97.87 | 3.00 | 7.00 | 1.00 | 100.00

Using the least-squares method, graphs of the indicators as functions of time t were built from the statistics, together with their approximating functions (linear, polynomial, logarithmic). Since the initial data form a sample, the significance of the correlation coefficient must be assessed, i.e. it must be determined how time t affects the values of the parameters for assessing electronic document management system implementation in the administration. The system of regression equations built from the observed changes of the system indicators is presented below:

$$
\begin{cases}
X_1^n(t) = 0.0089t^3 + 0.0654t^2 - 0.134t + 0.608\\
X_2^n(t) = 0.03471t^4 - 0.3142t^3 + 1.2658t^2 - 1.8908t + 1.33\\
X_3^n(t) = 0.0248t^3 + 0.1835t^2 - 0.1217t + 0.326\\
X_4^n(t) = 0.0117t^3 - 0.0971t^2 + 0.2712t - 0.082\\
X_5^n(t) = 0.0077t^4 - 0.0603t^3 + 0.3589t^2 - 0.4587t + 0.385\\
X_6^n(t) = 0.0608t^4 - 0.7075t^3 + 3.4792t^2 - 4.0233t + 2.8
\end{cases}
\quad (2)
$$
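Regression polynomials of this kind can be fitted by ordinary least squares, e.g. with `numpy.polyfit`. The sketch below fits a cubic to the normalized X1 values from Table 1, with the years 2013–2017 coded as t = 1..5 (an assumed coding, so the coefficients need not match (2) exactly), and computes the adequacy measures mentioned above:

```python
import numpy as np

# Normalized X1 indicator from Table 1 (2013-2017 coded as t = 1..5), as fractions.
t = np.array([1, 2, 3, 4, 5], dtype=float)
x1 = np.array([66.67, 85.00, 93.33, 100.00, 100.00]) / 100.0

coeffs = np.polyfit(t, x1, deg=3)        # least-squares cubic fit
model = np.poly1d(coeffs)

# Adequacy check as in the paper: mean approximation error (percent) and
# correlation between observed and fitted values.
fitted = model(t)
approx_error = np.mean(np.abs((x1 - fitted) / x1)) * 100
corr = np.corrcoef(x1, fitted)[0, 1]
print(coeffs, round(approx_error, 2), round(corr, 4))
```

The same call with `deg=4` reproduces the quartic fits used for X2, X5 and X6 in (2).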

On the basis of the constructed model, software for evaluating the effectiveness of electronic document management system implementation in administration bodies can be built. The block diagram of the software system is represented in Fig. 4. The interface of the software package "Analysis of the effectiveness of the implementation of electronic document management system" is shown in Fig. 5.


Fig. 4. Block diagram of the software complex “Analysis of the effectiveness of the implementation of the electronic document management system”

Fig. 5. The interface of the software complex “Analysis of the effectiveness of the implementation of the electronic document management system”


At the first step, in "Data Entry", the user inputs the normalized and criteria values of the indicators for assessing the effectiveness of introducing the electronic document management system in administration bodies, the number of years over which the effectiveness of the introduction is assessed, and the initial Tstart and final Tfinite values of the time interval from the drop-down list. If the analysis of a single parameter is required, the user selects the desired option from the drop-down list. Clicking "Save" stores the entered data Xki(t): X0, X2, X3, ..., X6. At the second stage, "Data Analysis", to analyze the calculated values and print the graph, the user clicks "Analysis of the calculated values". Clicking the "Print" button displays the chart in an additional window, where the "Save" and "Print" actions can be performed. The software package can work with any Russian electronic document management software and evaluate the effectiveness of the implementation process at any stage.

4 Summary and Conclusion

On the basis of the foregoing study it can be concluded that a new mathematical model for assessing the effectiveness of the introduction of electronic document management and record keeping in the executive bodies of state power has been built, which differs from previously known models by the use of "soft" mathematical modeling. The developed model makes it possible to automate the process of evaluating the effectiveness of electronic document management system implementation in the authorities and to significantly reduce calculation time when making management decisions on introducing an electronic document management system in administration bodies. The results obtained in developing the software package can be used to study the effectiveness of automation of numerous classes of data structures and can be implemented in administration practice.

References
1. Perepelkina, O.A., Kondratov, D.V.: Evaluation of key performance indicators of electronic document management system implementation in the executive bodies of administration of Penza region. In: Problems of Management, Processing and Transmitting Information, pp. 230–234 (2015)
2. Perepelkina, O.A.: Introduction of electronic document management system and office work in the Penza region. In: Prospects for Development of Russian State and Society in Modern Conditions, pp. 86–87 (2015)
3. State system of document management. Archive, Institute of Records Management and Archival Affairs, Moscow, Project (1999)
4. Shafeeva, Y.I., Bykov, N.N.: Electronic document management system in government. The Young Scientist №23, 78–81 (2015). https://moluch.ru/archive/103/23890
5. GOST 34.003-90: Information technology (IT). Set of standards for automated systems. Automated systems. Terms and definitions
6. GOST 51141-98: Paperwork and archiving. Terms and definitions
7. Perepelkina, O.A.: Mathematical modeling of the system of electronic document circulation and office work in executive bodies of state authority on the example of the Penza region. Internet magazine "Naukovedenie" 9(6) (2017). https://naukovedenie.ru/PDF/89TVN617.pdf
8. Law of the Russian Federation of 27.07.2006 №149-FZ "On information, information technologies and data protection"
9. Dmitriev, A.P.: Experience of monitoring workflow in the federal bodies of executive power. In: Documentation in the Information Society: Challenges of Workflow Optimization, Rosarchiv, VNIIDAD, Moscow, pp. 181–186 (2012)
10. Kabashi, S.Y., Asfandiyarova, I.G.: Records Management and Archiving in Terms and Definitions, p. 107. Design PoligrafServis, Ufa (2008)
11. Federal Law of the Russian Federation of 10.01.2002 №1 "On electronic digital signature"
12. Russian Federation Government Resolution of 06.09.2012 №890 "On measures on improvement of electronic document management in public authorities"
13. Order of the Russian Government dated 12.02.2011 №176-p "On approval of the action plan for the transition of federal executive bodies to paperless document flow in the internal activity of the organization"
14. GOST 6.30-2003: Unified systems of documentation. Unified system of organizational and administrative documentation. Requirements for registration of documents
15. GOST 7.0.8-2013: System of standards on information, librarianship and publishing. Paperwork and archiving. Terms and definitions
16. GOST 6.10.4-84: Unified systems of documentation. Giving legal force to documents on machine carriers and machinograms created by means of computer facilities. The main provisions
17. Alshanskaya, T.V., Zakharova, A.V., Yumaeva, L.R.: Features of protection of electronic documents. NovaInfo.Ru №28, 279–281 (2014)
18. Tsekhan, O.B., Mantsevich, V.A.: Indicators of integrated assessment of the effectiveness of the electronic document management system. Technologies of Informatization and Management, Minsk, Issue 2 (2011)
19. Samarsky, A.A., Mikhailov, A.P.: Mathematical Modeling: Ideas, Methods, Examples, 2nd edn. FIZMATLIT, Moscow (2001)
20. Gubenkov, A.N., Fedorova, O.S.: "Soft" mathematical modeling of real objects and processes. Bulletin of Saratov State Technical University 1(1(63)), 7–14 (2012)
21. Perepelkina, O.A., Kondratov, D.V.: Use of "soft" mathematical simulation in the development of mathematical valuation models for electronic document circulation and office work systems. Software Systems and Computational Methods №1, 63–72 (2018)
22. Afanasiev, E.P., Kasarin, O.V.: Identification of contradictions in ensuring the quality and effectiveness of the protection system of cloud electronic document systems. In: Proceedings of the All-Russian Scientific-Practical Conference "Modern Problems and Challenges of Information Security – SIB-2013", Moscow, pp. 104–108 (2013)
23. Forrester, J.: Fundamentals of Cybernetics of the Enterprise. Progress, Moscow (1971)

Mathematical Modeling of the Process of Engineering Structure Curvature Determination for Remote Quality Control of Plaster Works

Nadezhda Ivannikova¹, Pavel Sadchikov¹, and Alexandr Zholobov²

¹ Astrakhan State University of Architecture and Civil Engineering, Tatishcheva St. 18, 414056 Astrakhan, Astrakhan Region, Southern Federal District, Russia
[email protected]
² Don State Technical University, Gagarin Square 1, 344000 Rostov-on-Don, Russia

Abstract. Nowadays plaster works are very popular, especially in the restoration of old buildings that have a complex geometric configuration. Plaster works are labor-intensive and shall be controlled. The quality control of plaster works on curved surfaces assumes the use of individual templates of curved surfaces for control of compliance with the design statement. This is not an efficient technology. The analysis of methods of surface curvature determination for large objects available in the allied fields of science showed that the existing methods that can be used for computer-aided monitoring are not applicable for measurement of the surface curvature of fixed 3D structures. To solve this problem the authors of this Article have developed a method for noncontact quality control of plaster works. This method is based on a math model of the process of determination of the geometrical form of a structure to be plastered. The curvature of the plastered surface is defined with the use of formulae received by way of solution of equations of surfaces of complex geometric bodies. This Article describes the process of determination of a formula for the surface of the central part of the Cathedral of Vladimir Icon of the Mother of God, which has a shape of a rectangular parallelepiped limited in the upper part by a dome having a shape of elliptic paraboloid. The results of this study are used for the development of a method of noncontact determination of curvature of an engineering structure surface.

Keywords: Mathematical modeling · Curved surface · Elliptic paraboloid · Plastering · Engineering structure · Projected plane · Remote inspection · Construction quality control

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 634–645, 2019. https://doi.org/10.1007/978-3-030-12072-6_51


Mathematical Modeling of the Process

635

At the present time the construction industry is developing rapidly, and greater interest is shown in structures of complex shapes. This is especially seen in the construction of religious, sports and cultural buildings, as well as in the restoration of buildings that are monuments of architecture. Structures with complex curved shapes require more qualified effort: whatever materials are used for their construction, the level of complexity of their design and finishing is higher than that of elements with flat surfaces [1–5]. Usually structures with complex geometry are plastered. The plaster parameters shall be in accordance with the design requirements and technical standards. The most important but at the same time one of the most difficult-to-control parameters of plastered structures is the surface evenness or curvature. According to the Rules of inspection of buildings, structures and complexes of worship buildings, the quality control of plastered surfaces shall be performed with an individual template for each type of coating. The fabrication of individual templates is an imperfect process, and templates for inspection of curved surface profiles are not applicable for remote quality control. In modern construction, methods of determining the curvature of a structure's surface are sometimes based on electronic total stations or, less frequently, 3D scanners, although the first method is too labor-intensive and the second is cost-inefficient. The analysis of methods for determining the curvature of large structure surfaces available in the allied fields of science showed that there exists a method of noncontact determination of the curvature of a long object's surface that can be used for automated curvature control of various long objects, e.g. rolled products and pipes. 
However, this method, like some others, is not applicable for determining the curvature of fixed engineering structures. To solve this problem the authors of this Article have developed a method of noncontact quality control of plaster works. The basis of this method is a math model of the process of determining the geometric shape of the structure to be plastered. The math model studied in this Article is based on the example of the central part of the Cathedral of Vladimir Icon of the Mother of God. In the plan view, the main structure of the Cathedral (see Fig. 1a, b, c) has the shape of a rectangular parallelepiped (see Fig. 2a), with a dome of elliptic paraboloid shape in the upper part (see Fig. 2b), with the parameters «p» and «q».

636

N. Ivannikova et al.

Fig. 1. The Cathedral of Vladimir Icon of the Mother of God in the city of Akhtubinsk, Astrakhan Region, Russia: a photo image; b elevation view of the cathedral, longitudinal; c cross-section view.


Fig. 2. Scheme of the main structure of the Cathedral: a rectangular parallelepiped, where h–height, b–length, a–breadth; b scheme of the limiting structure of the main structure of the Cathedral which has an elliptic paraboloid shape, where h0–height, b0–length, a0–breadth.


The general elliptic paraboloid equation:

$$z = h - \frac{x^2}{2p} - \frac{y^2}{2q} \quad (1)$$

The volume of this body is determined by integrating over the rectangular base:

$$V = \int_{-a/2}^{a/2}\int_{-b/2}^{b/2}\left(h - \frac{x^2}{2p} - \frac{y^2}{2q}\right) dy\, dx = \int_{-a/2}^{a/2}\left(hb - \frac{x^2 b}{2p} - \frac{b^3}{24q}\right) dx = abh - \frac{a^3 b}{24p} - \frac{a b^3}{24q} \quad (2)$$

So, the volume of this body is the following:

$$V = ab\left(h - \frac{a^2}{24p} - \frac{b^2}{24q}\right) \quad (3)$$

If the circular paraboloid is limited by a parallelepiped, then p = q, and:

$$V = abh - \frac{ab}{24p}\left(a^2 + b^2\right) \quad (4)$$

It is required to determine the area of the surface of the central dome of the Cathedral (see Fig. 3a), limiting the parallelepiped.
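Formulae (3) and (4) are easy to evaluate directly; the sketch below (with invented dimensions, not the Cathedral's measurements) also checks that they coincide when p = q:

```python
def dome_base_volume(a, b, h, p, q):
    """Volume under the elliptic paraboloid z = h - x^2/(2p) - y^2/(2q)
    over the a-by-b rectangular base, formula (3)."""
    return a * b * (h - a**2 / (24 * p) - b**2 / (24 * q))

def dome_base_volume_circular(a, b, h, p):
    """Special case p = q of formula (4)."""
    return a * b * h - a * b * (a**2 + b**2) / (24 * p)

# Invented dimensions (metres) for illustration only.
a, b, h, p = 6.0, 6.0, 2.5, 4.0
assert abs(dome_base_volume(a, b, h, p, p) - dome_base_volume_circular(a, b, h, p)) < 1e-9
print(dome_base_volume(a, b, h, p, p))  # 63.0
```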


Fig. 3. Image of the interior structures of the Cathedral of Vladimir Icon of the Mother of God, to be plastered: a central dome; b drum of a cylindrical shape; c pendentive structure adjacent to the dome.

This surface is a part of the elliptic paraboloid:

$$z = h - \frac{x^2}{2p} - \frac{y^2}{2q} \quad (5)$$


The area can be determined as follows:

$$S_{surf} = \iint \sqrt{1 + \left(\frac{\partial z}{\partial x}\right)^2 + \left(\frac{\partial z}{\partial y}\right)^2}\, dx\, dy \quad (6)$$

where the domain of integration is represented by a rectangle (see Fig. 4).

Fig. 4. Scheme of the domain of integration for elliptic paraboloid, represented by a rectangle

Partial derivatives of z = z(x, y):

$$\frac{\partial z}{\partial x} = -\frac{2x}{2p} = -\frac{x}{p} \quad (7)$$

$$\frac{\partial z}{\partial y} = -\frac{2y}{2q} = -\frac{y}{q} \quad (8)$$

So, the area of the surface is:

$$S_{surf} = \int_{-a/2}^{a/2}\int_{-b/2}^{b/2} \sqrt{1 + \frac{x^2}{p^2} + \frac{y^2}{q^2}}\, dy\, dx \quad (9)$$

As the upper part of the dome represented by the elliptic paraboloid is cut for installation of a cylindrical drum (see Fig. 3b), the area of the curved surface is determined as follows:

$$S = S_{surf} - S_{top} \quad (10)$$

where:

$$S_{top} = 4\int_{0}^{\sqrt{2ph_0}} dx \int_{0}^{\sqrt{2qh_0 - \frac{q}{p}x^2}} \sqrt{1 + \frac{x^2}{p^2} + \frac{y^2}{q^2}}\, dy \quad (11)$$
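The double integrals (9) and (11) have no convenient closed form in general, so in practice they can be approximated numerically. A minimal midpoint-rule sketch with invented parameters (not the Cathedral's measured values):

```python
import math

def integrand(x, y, p, q):
    """Surface-area element sqrt(1 + (x/p)^2 + (y/q)^2) from formula (9)."""
    return math.sqrt(1.0 + (x / p) ** 2 + (y / q) ** 2)

def s_surf(a, b, p, q, n=200):
    """Formula (9): midpoint Riemann sum over [-a/2, a/2] x [-b/2, b/2]."""
    hx, hy = a / n, b / n
    total = 0.0
    for i in range(n):
        x = -a / 2 + (i + 0.5) * hx
        for j in range(n):
            y = -b / 2 + (j + 0.5) * hy
            total += integrand(x, y, p, q)
    return total * hx * hy

def s_top(h0, p, q, n=200):
    """Formula (11): four times the integral over one quarter of the ellipse
    0 <= y <= sqrt(2*q*h0 - (q/p)*x^2), 0 <= x <= sqrt(2*p*h0)."""
    xmax = math.sqrt(2 * p * h0)
    hx = xmax / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * hx
        ymax = math.sqrt(max(2 * q * h0 - (q / p) * x ** 2, 0.0))
        hy = ymax / n
        for j in range(n):
            y = (j + 0.5) * hy
            total += integrand(x, y, p, q) * hy
    return 4 * total * hx

# Invented parameters for illustration only.
a, b, p, q, h0 = 6.0, 6.0, 4.0, 4.0, 0.5
area = s_surf(a, b, p, q) - s_top(h0, p, q)   # formula (10)
print(round(area, 2))
```

A library quadrature routine (e.g. an adaptive 2D integrator) would be used for production accuracy; the midpoint rule is shown only to make the structure of (10) and (11) explicit.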


The domain of integration of the cut surface of the elliptic paraboloid is represented by an ellipse (see Fig. 5).

Fig. 5. Scheme of the domain of integration for cut surface of the elliptic paraboloid, represented by an ellipse.

So:

$$S_{top} = 4S_0 \quad (12)$$

In the cross section of the elliptic paraboloid with the top (h0, 0, 0), z = 0, so

$$0 = h_0 - \frac{x^2}{2p} - \frac{y^2}{2q} \;\Rightarrow\; \frac{x^2}{2p} + \frac{y^2}{2q} = h_0 \;\Rightarrow\; y = \pm\sqrt{2qh_0 - \frac{q}{p}x^2} \quad (13)$$

Then:

$$\frac{y^2}{2q} = h_0 - \frac{x^2}{2p} \;\Rightarrow\; y^2 = 2qh_0 - \frac{q}{p}x^2 \quad (14)$$

Basing on the results of measurement of the linear dimensions and heights h1, a0, a1 of the pendentive structures (see Fig. 3c), presented as elements of the elliptic paraboloid, let us define the height h0 of the upper cut part of the dome. In the trace of the surface xOz we have a parabola, whose vertex we bring into coincidence with the top of the paraboloid of the Cathedral (see Fig. 6a).

Fig. 6. Curved surface trace: (a) initial diagram of the curved surface trace; (b) diagram of the coincidence of a curved surface trace with the origin of coordinates.

As the location of the vertex of the parabola has no impact on the determination of the dimensions h0, h1, a0, a1, we can bring it into coincidence with the origin of coordinates (see Fig. 6b). Then, using the equation of the conic, we get

$$z = \frac{x^2}{2p} \quad (15)$$

$$x^2 = 2pz \;\Rightarrow\; p = \frac{x^2}{2z} \quad (16)$$

Solving a system of equations:

$$\begin{cases} p = \dfrac{(a_0/2)^2}{2h_0}\\[4pt] p = \dfrac{(a_1/2)^2}{2(h_1 + h_0)} \end{cases} \;\Rightarrow\; \frac{a_0^2}{8h_0} = \frac{a_1^2}{8(h_1 + h_0)} \quad (17)$$

Basing on the property of proportionality, we have the following:

$$\frac{a_0^2}{h_0} = \frac{a_1^2}{h_1 + h_0} \quad (18)$$

$$a_1^2 h_0 = a_0^2 h_1 + a_0^2 h_0 \quad (19)$$

$$a_1^2 h_0 - a_0^2 h_0 = a_0^2 h_1 \quad (20)$$

The height of the cut part of the dome:

$$h_0 = \frac{a_0^2 h_1}{a_1^2 - a_0^2} \quad (21)$$

$$p = \frac{a_0^2}{8h_0} = \frac{a_0^2}{8}\cdot\frac{a_1^2 - a_0^2}{a_0^2 h_1} = \frac{a_1^2 - a_0^2}{8h_1} \quad (22)$$

The value of «q» is determined using the same method. For this purpose we have studied the plane section yOz of the elliptic paraboloid:

$$q = \frac{b_1^2 - b_0^2}{8h_1} \quad (23)$$
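Formulae (21)–(23) turn the measured dimensions directly into the dome parameters; a minimal sketch with invented measurements:

```python
def dome_parameters(a0, a1, b0, b1, h1):
    """Height of the cut part and paraboloid parameters p, q
    from measured dimensions, per formulae (21)-(23)."""
    h0 = a0**2 * h1 / (a1**2 - a0**2)       # (21)
    p = (a1**2 - a0**2) / (8 * h1)          # (22)
    q = (b1**2 - b0**2) / (8 * h1)          # (23)
    return h0, p, q

# Invented measurements (metres) for illustration only.
h0, p, q = dome_parameters(a0=2.0, a1=4.0, b0=2.0, b1=4.0, h1=1.5)
# Consistency check: (22) must equal a0^2/(8*h0), cf. formula (17).
assert abs(p - 2.0**2 / (8 * h0)) < 1e-9
print(h0, p, q)  # 0.5 1.0 1.0
```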

Substituting the received formulae for p and q into the main design formulae, we can determine the volumes and areas of the curved surfaces to be plastered. The results of the calculations can be verified with any software for architectural modeling [6]. In the example below the verification was carried out in «3D Max–Autodesk» (see Fig. 7).

Fig. 7. Modelling of a cut dome of the Cathedral of Vladimir Icon of the Mother of God located in the city of Akhtubinsk in «3D Max–Autodesk».

At the present time, software for a PC is being developed [7]. This software will allow building geometrical models of the curved surface under study in an automatic mode, basing on the received equations, and detecting and assessing deviations from the set curvature of the surface during the quality control of plaster works. The results of this study are used in the development of a method of noncontact determination of the curvature of an engineering structure surface, covered by patent of Russia No. 2559168 [8].

References
1. Ross, H., Stahl, F.: Stoffe, Verarbeitung, Schadensvermeidung. Rudolf Müller, Köln (2003)
2. Ballay, F., Frey, H., Hein, S., Herrmann, A., Kuhn, V., Lindau, D., Nutsch, W., Stemmler, C., Traub, M., Uhr, U., Waibel, H., Werner, H.: Bautechnik Fachkunde für Maurer/Maurerinnen, Beton- und Stahlbetonbauer/Beton- und Stahlbetonbauerinnen, Zimmerer/Zimmerinnen und Bauzeichner/Bauzeichnerinnen, 10th edn. Verlag EUROPA-LEHRMITTEL, Nourney, Vollmer GmbH & Co. KG, Haan-Gruiten, Germany (2003)
3. Guelberth, C.R., Chiras, D.: The Natural Plaster Book: Earthen, Lime, and Gypsum Plasters for Natural Homes. New Society Publishers, Gabriola Island, BC, Canada (2003)
4. Morrison, A.: Plastering with Natural Hydraulic Lime. Straw Bale Innovations, LLC, OR, USA (2007)
5. McKee, H.: An Introduction to Early American Masonry, Stone, Brick, Mortar and Plaster. Association for Preservation Technology, Springfield, IL (2017)
6. Rogers, D.F., Adams, J.A.: Mathematical Elements for Computer Graphics. McGraw-Hill, New York (2001)
7. Tamrazyan, A.G., Zholobov, A.L., Ivannikova, N.A.: Technology of examination of plastered surfaces of complex architectural forms of building structures using methods of geometrical modeling. Vestnik MGSU (Proceedings of Moscow State University of Civil Engineering) 2011(11), 125–130 (2012). (In Russian)
8. Ivannikova, N.A.: A complex of remote testing of the specified profile of curved surfaces of building structures. Promyshlennoe i grazhdanskoe stroitel'stvo (6), 20–24 (2014). (In Russian)

Mathematical Models, Algorithms and Software Package for the National Security State of Russia

Natalya Yandybaeva¹, Alexander Rezchikov², Vadim Kushnikov²,³, Vladimir Ivaschenko², Oleg Kushnikov³, and Anatoly Tsvirkun⁴

¹ Balakovo Branch, Russian Presidential Academy of National Economy and Public Administration, 107 Chapaeva Str., Balakovo 413865, Russia
[email protected]
² Institute of Precision Mechanics and Control, Russian Academy of Sciences, Rabochaya Str. 24, Saratov 410028, Russia
³ Yuri Gagarin State Technical University, 77 Polytechnic Str., Saratov 410054, Russia
⁴ Institute of Control Problems, Russian Academy of Science, 65 Profsouznaya Ave., Moscow 117997, Russia

Abstract. A mathematical model to determine and predict the main characteristics of the national security of the Russian Federation is developed in the article. A system-dynamics approach is used to develop the predictive mathematical model. To characterize the state of national security, the Decree of the President of the Russian Federation identifies the main indicators of national security: citizens' satisfaction with the degree of protection of their constitutional rights and freedoms (expert evaluation of sociological surveys); the share of modern weapons, military and special equipment in the Armed Forces of the Russian Federation (% of modern means in the existing weapons park of the Russian Federation); life expectancy at birth (years); gross domestic product per capita (RUB); the decile coefficient; the inflation rate (%), the average annual growth of prices in the country; the unemployment rate (%); the share of gross domestic product spent on the development of science, technology and education (%); the share of gross domestic product spent on culture (%); and the share of the territory of the Russian Federation that does not meet environmental standards (%). A heuristic algorithm is offered and a computer program for modeling and forecasting the main characteristics of national security is developed. The results of computational experiments with the developed model, algorithm and software are presented. The method of applying the developed mathematical software in a management decision-making support system is presented, showing the main stages of decision-making in analyzing the state of national security of the country at different time intervals using the calculated values of the national security characteristics. The mathematical software presented in the article is used to analyze the state of the national security of the country and to predict the main indicators of national security of Russia. 
Keywords: National security · Differential equations · Mathematical model · System dynamics model · Object-oriented programming

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 646–659, 2019. https://doi.org/10.1007/978-3-030-12072-6_52


Mathematical Models, Algorithms and Software Package


1 Introduction

The regulatory and legal basis for ensuring the security of any country is the national security strategy, which is approved by the President of the country and declares the directions of the foreign and domestic policy of the state for the near future. The Decree of the President of the Russian Federation dated December 31, 2015, № 683 “On the national security strategy of the Russian Federation” is the document implementing measures to protect national security in our country today (hereinafter the Strategy) [1]. The Strategy defines the concept of national security as the protection of the individual, society and the state from existing internal and external threats, which allows the realization of constitutional rights, freedoms, and a worthy quality and standard of living of citizens. To characterize the state of national security, the Strategy identifies the main indicators of national security. The works of such Russian scientists as V. A. Sadovnichiy, A. A. Akayev, A. V. Korotaev, K. Y. Kondratyev, N. N. Moiseev, A. D. Ursul, V. K. Levashov and others are devoted to working out the concept of the steady development of the country. Methods and models of the transition of the country to steady development from the perspective of national security were investigated in the works of G. G. Malinetskii, A. V. Podlazov, S. Yu. Malkov, V. M. Matrosov, V. A. Koptyug [2], D. M. Gvishiani and V. A. Egorova. Possible security threats were investigated in the works of these scientists, and mathematical models and methods to build predictive scenarios of the socio-economic development of the country were developed. The models were characterized by a large dimension and a large set of variables describing the object of study, as well as by complex causal relationships between the modeled variables.
Mathematical software to predict the values of the main indicators of national security, defined in accordance with Presidential Decree № 537 of 2009 “On the Strategy of national security of the Russian Federation until 2020”, was developed by the authors earlier [3]. With the adoption of the new Strategy in 2015, the development of new mathematical models, algorithms and software systems for modeling and forecasting its main characteristics became expedient. Based on the above, the research problem is formulated as follows: models, methods, algorithms and software should be developed to assess the state of, and forecast the indicators of, the national security of the country.

2 Research Methods

2.1 Development of Mathematical Model

The following main indicators of the national security of the Russian Federation are used as the simulated variables in the developed mathematical model:
• X1(t) – citizens’ satisfaction with the degree of protection of their constitutional rights and freedoms (expert evaluation of sociological surveys);
• X2(t) – the share of modern weapons, military and special equipment in the Armed Forces of the Russian Federation (% of modern assets in the existing weapons stock of the Russian Federation);


N. Yandybaeva et al.

• X3(t) – life expectancy at birth (years);
• X4(t) – gross domestic product per capita (rub.);
• X5(t) – decile coefficient;
• X6(t) – inflation rate (%), the average annual growth of prices in the country;
• X7(t) – unemployment rate (%);
• X8(t) – share of GDP expenditures on the development of science, technology and education (%);
• X9(t) – share of GDP expenditures on culture (%);
• X10(t) – share of the territory of the Russian Federation that does not meet environmental standards (%).

This set of variables does not fully characterize the studied system; it models a projection of the entire system onto the selected subspace of its characteristics. The relationships between the simulated variables X1–X10 are presented using the digraph shown in Fig. 1. They were obtained by applying correlation analysis to the initial statistical data, using the theory of cause-and-effect complexes [4]. A system-dynamic approach, which is widely used today for the analysis of processes in complex socio-economic systems [5], is applied to develop the predictive mathematical model. We investigate the derivatives of the variables X1–X10 as functions of these variables. The dynamic levels dXi(t)/dt, i = 1…n, are called flows. Flows in the model are the rates of change of the levels per unit of time. The relationships between the flows and the modeled variables (levels) form a system of differential equations of the following form:

$$\frac{dX_i}{dt} = F_i\left(X_1(t), \ldots, X_n(t)\right), \quad i = 1 \ldots n \qquad (1)$$

Let us expand the functions F_i into a series in powers of X_k(t) and keep the first, linear terms of the expansion:

$$\frac{dX_i(t)}{dt} = a_{i,0} + a_{i,1}X_1(t) + \ldots + a_{i,n}X_n(t), \quad i = 1 \ldots n \qquad (2)$$

The coefficients a_{i,k} are determined experimentally. The products a_{i,k}X_k(t), i = 1…n, represent the rate of the i-th flow; t is time. Since the analyzed system is complex and nonlinear, the rates depend on the levels. Further:

$$a_{i,k}\left(X_1(t), \ldots, X_n(t)\right) = a_{i,k}\, f_{i,k,1}\left(X_1(t)\right) \cdots f_{i,k,n}\left(X_n(t)\right), \quad k = 1 \ldots n \qquad (3)$$

Here a_{i,k} = const, and each multiplier f_{i,k,l}, l = 1…n, depends only on “its” level X_l(t). The value f_{i,k,l} = 1 is taken as a reference; the multipliers can deviate from it in either direction.

Fig. 1. The digraph of the relationships between the variables X1–X10


As a result, we have:

$$\frac{dX_i(t)}{dt} = a_{i,0} + \sum_k a_{i,k} \prod_l f_{i,k,l}\left(X_l(t)\right) X_k(t), \quad i = 1 \ldots n. \qquad (4)$$

The system of Eqs. (4) is a decomposition of the model (1) [6]. The model developed on the basis of the system dynamics model has the following form:

$$\begin{aligned}
dX_1(t)/dt &= \left(P_l(t) + P_{se}(t)\right) f_1(X_2)f_2(X_3)f_3(X_4)f_7(X_8)f_8(X_9) - Sh_{pr}(t)\, f_9(X_{10})f_4(X_5)f_5(X_6)f_6(X_7);\\
dX_2(t)/dt &= \left(V(t) + SO(t)\right) f_{10}(X_8) - T(t);\\
dX_3(t)/dt &= \left(E(t) + Z_d(t) + X_8(t)\right) f_{11}(X_4)f_{20}(X_8) - \left(BN(t) + I(t) + U(t)\right) f_{12}(X_5)f_{13}(X_{10})f_{18}(X_7);\\
dX_4(t)/dt &= \left(V(t)/P(t) + D(t)\right) - \left(I(t) + U(t)\right) f_{14}(X_7);\\
dX_5(t)/dt &= \left(P(t) + S_c(t) + U(t)\right) f_{16}(X_7) - I(t)\, f_{15}(X_6);\\
dX_6(t)/dt &= \left(D_e(t) + D(t) + E(t)\right) - \left(V(t) + W(t)\right);\\
dX_7(t)/dt &= \left(I(t) + S_c(t) + D(t)\right) f_{15}(X_6) - \left(W(t) + T(t) + V(t)\right) f_{27}(X_5);\\
dX_8(t)/dt &= \left(V(t) + H(t) + T_{ch}(t) + S_r(t)\right) f_{26}(X_4) - \left(I(t) + D(t) + M(t) + P(t) + S_c(t)\right) f_{19}(X_6);\\
dX_9(t)/dt &= \left(V(t) + D(t) + S_c(t)\right) f_{25}(X_4) - \left(T(t) + P(t)\right) f_{21}(X_6);\\
dX_{10}(t)/dt &= \left(PZ(t) + VZ(t)\right) f_{22}(X_2) - Z_e(t)\, f_{23}(X_4)f_{24}(X_8).
\end{aligned} \qquad (5)$$

Where:
– X1(t)–X10(t) are the current levels of the simulated variables X1…X10;
– Pl(t) – the degree of realization of personal rights of citizens (expert evaluation);
– Pse(t) – the degree of realization of socio-economic rights (expert evaluation);
– Shpr(t) – the number of registered crimes (units);
– P(t) – the population (pers.);
– SO(t) – the amount of the state defense order (rub.);
– E(t) – the average salary (rub.);
– Zd(t) – the share of GDP expenditure on health (%);
– BN(t) – morbidity (units);
– U(t) – the unemployment rate (%);
– De(t) – money issue (rub.);
– I(t) – inflation (%);
– Sc(t) – income (rub.);
– V(t) – the gross domestic product of the country (rub.);
– D(t) – the number of economically active population (pers.);
– W(t) – labor supply (pers.);
– T(t) – taxes (rub.);
– H(t) – the number of educational institutions (units);
– Tch(t) – the number of teaching staff with academic degrees and titles (pers.);
– Sr(t) – the average annual amount of research funding (rub.);
– M(t) – migration (pers.);
– PZ(t) – the number of industrial enterprises (units);
– VZ(t) – the volume of emissions of pollutants into water, soil and air (conv. units);
– Ze(t) – the costs of environmental protection (rub.) [7].
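A level-flow system of the form (4) lends itself to a simple explicit integration scheme. The Python sketch below is a minimal illustration of this idea only: the two-level toy system, its coefficients and its multiplier functions are invented placeholders, not the calibrated coefficients of model (5).

```python
# Minimal Euler integration of a system-dynamics model of the form (4):
#   dXi/dt = a_i0 + sum_k a_ik * f_ik(X) * Xk
# Coefficients and multipliers below are toy placeholders for illustration.

def simulate(levels, a0, a, f, dt=0.1, steps=100):
    """levels: initial values Xi; a0[i]: free terms; a[i][k]: flow coefficients;
    f[i][k]: multiplier depending on the current levels (reference value 1)."""
    X = list(levels)
    history = [tuple(X)]
    n = len(X)
    for _ in range(steps):
        rates = []
        for i in range(n):
            rate = a0[i]
            for k in range(n):
                rate += a[i][k] * f[i][k](X) * X[k]
            rates.append(rate)
        X = [X[i] + dt * rates[i] for i in range(n)]
        history.append(tuple(X))
    return history

# Hypothetical two-level example: each level's flow depends on the other level.
a0 = [0.0, 0.0]
a = [[0.0, 0.1], [-0.1, 0.0]]
f = [[lambda X: 1.0, lambda X: 1.0],
     [lambda X: 1.0, lambda X: 1.0]]
trajectory = simulate([1.0, 0.5], a0, a, f)
```

With the multipliers held at their reference value 1, the scheme reduces to the linear expansion (2); nonlinear behavior enters through the f[i][k] functions, as in the decomposition (3)–(4).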


Since all the variables included in (5) have different dimensions, indicators normalized relative to those of 2000 were used in the calculations. The functional dependencies fi(Xi) in model (5) are used to estimate the interaction of the simulated variables. The form of f1(X2)–f27(X5) is determined by the apparatus of regression analysis. The adequacy of the developed mathematical model (5) is checked with the help of retrospective data [8]. Let us analyze, for example, the simulated variable X7, the unemployment rate. In 2016, the values of the variables included in X7, normalized relative to 2000, were:

I = 0.64; Sc = 14.3; D = 1.05; W = 1.03; T = 7.32; V = 11.78.

Then, according to model (5), we obtain:

X7(2016) = (0.64 + 14.3 + 1.05) · 0.9 − (1.03 + 7.32 + 11.78) · 0.69 = 0.502.

The retrospective value of X7 in 2016, normalized relative to 2000, was 0.518. Let us determine the absolute error:

A = |X2016^retr − X2016^calc| / X2016^retr · 100% = (0.518 − 0.502)/0.518 · 100% = 3%.

In Fig. 2, graphs comparing the values of the simulated variables X6 and X7, calculated using the developed mathematical model, with retrospective data on the time interval 2015–2016 are presented.

Fig. 2. Comparison of the dynamics of the variables X6 and X7 calculated by model (5) with retrospective data
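The adequacy check above is easy to reproduce directly. A minimal Python sketch follows; the 0.9 and 0.69 multiplier values are taken from the worked example, and the small discrepancy with 0.502 comes from rounding of intermediate values.

```python
# Reproduce the adequacy check for X7 (unemployment rate) in 2016,
# with variables normalized relative to the year 2000.
I, Sc, D = 0.64, 14.3, 1.05   # inflation, income, economically active population
W, T, V = 1.03, 7.32, 11.78   # labor supply, taxes, GDP

x7_calc = (I + Sc + D) * 0.9 - (W + T + V) * 0.69   # about 0.501
x7_retr = 0.518                                     # retrospective value

error_pct = abs(x7_retr - x7_calc) / x7_retr * 100  # about 3%, as in the text
print(round(x7_calc, 3), round(error_pct, 1))
```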


In Table 1, the results of calculating the values of the simulated variables X1–X10 by model (5) and their comparison with the retrospective data for 2016 are presented; the absolute errors of the predicted indicators are also calculated.

Table 1. Checking the adequacy of the developed mathematical model

Xi     Xi^retr   Xi^calc   A (%)
X1     1.1       1.09      0.9
X2     3.89      3.7       4.9
X3     1.1       0.99      10
X4     14.26     14.19     0.5
X5     1.12      1.08      3.5
X6     0.27      0.28      3.7
X7     0.518     0.502     3.0
X8     1.47      1.44      2.04
X9     1.54      1.5       2.6
X10    1         1.02      2

A minor absolute error (up to 10%) indicates the adequacy of the developed mathematical model (5).

2.2 A Software Package for Calculating the Target Values of Indicators of National Security

When carrying out simulation modeling of complex socio-economic systems, many computer programs are used, for example, the “Simulation system of socio-economic development of the city”, “Modeling of socio-economic development of the region”, etc. [9–11]. To predict the activities of enterprises and to model the socio-economic development of the regions, software products of the company “Forecast”, which take into account the peculiarities of the Russian market, are widely used. It is also necessary to mention the IAS “Forecast” for the indicators of socio-economic development of the Russian Federation, which is used in the Leningrad region. IAS “Forecast” provides the Ministry of economic development with forecast scenarios for the development of the region and possible control actions in the execution of a particular scenario. A software product for solving the problems of predicting the development of complex systems was developed by the authors [12]. Taking into consideration the complexity of the object of study, and to automate the calculation of the forecast values of the indicators of national security, “The program for the modeling and forecasting of main indicators of the national security of the Russian Federation” has been developed (Fig. 3).


Fig. 3. Program interface when loading

The developed program (Fig. 4) takes into account the features of the object under study and solves a specific problem related to the calculation of the forecast values of national security indicators of the Russian Federation.

Fig. 4. Calculation of forecast values


This program is created in the GUIDE environment of Matlab and is designed to facilitate the preparation of forecasts by specialist analysts of the departments of economic analysis and forecasting at various levels of management [13]. The algorithm of the program implements the following steps sequentially:

1. The simulated variable X1–X10 in model (5) is chosen and the functional dependencies for this variable are set. The type of the functional dependencies can also be seen in the program dialog box.
2. The time interval of forecasting is set and the calculated values of the socio-economic variables X4–X7 are determined.
3. To predict the main characteristics of national security, model (5) is used: the initial values Xi^0 are entered, and the initial and final points of the interval [t1; t2] are set.
4. The dialog box displays a graph of the dynamics of the predicted values of the variables X1–X10.
5. The program also builds graph models that display a diagram of the relationships between the characteristics of national security.
6. When assessing the state of national security of the country, the indicator values proposed by Glazyev [14] and V. V. Lokosov are used as thresholds, which are compared with the calculated values.
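Step 6, the comparison of the calculated values with threshold levels, can be sketched as a simple check. The indicator names, threshold numbers and crisis directions below are placeholders chosen for illustration, not the published Glazyev or Lokosov values.

```python
# Sketch of step 6: flag indicators whose forecast values cross their
# threshold levels. All numbers below are illustrative placeholders.
def check_thresholds(forecast, thresholds, exceed_is_bad):
    """Return the list of indicator names in a critical state.
    exceed_is_bad[name] is True if crossing the threshold from below
    signals a crisis (e.g. unemployment), False if falling below does
    (e.g. life expectancy)."""
    critical = []
    for name, value in forecast.items():
        t = thresholds[name]
        if exceed_is_bad[name] and value >= t:
            critical.append(name)
        elif not exceed_is_bad[name] and value <= t:
            critical.append(name)
    return critical

forecast = {"X6_inflation": 0.28, "X7_unemployment": 0.502, "X3_life_expectancy": 0.99}
thresholds = {"X6_inflation": 0.50, "X7_unemployment": 0.45, "X3_life_expectancy": 0.90}
exceed_is_bad = {"X6_inflation": True, "X7_unemployment": True, "X3_life_expectancy": False}
print(check_thresholds(forecast, thresholds, exceed_is_bad))  # ['X7_unemployment']
```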

3 Results

3.1 Computational Experiment

A computational experiment with the indicators of the national security of Russia was carried out on the time interval of [0; 21] years (Fig. 5).

Fig. 5. Dynamics of simulated variables X1–X10 in the interval of [0; 21] years


In Fig. 5 we can note a significant increase in the variable X4 (GDP per capita): from 1.0 (49,835 rubles) in 2000 to 14.18 (706,500 rubles) in 2015 and to 12.8 (637,888 rubles) in the forecast year 2020. There is also an increase in the variable X2 (the share of modern weapons, military and special equipment in the Armed Forces of the Russian Federation): from 1.0 (15%) in 2000 to 3.13 (47%) in 2015 and to 3.1 (46.5%) in the forecast year 2020. The other variables X1, X3, X5–X10 vary only slightly.

3.2 Methods of Practical Use of the Developed Models, Algorithms and Programs

The developed information system automates the process of accumulation and processing of statistical information to create predictive models. It is also used to check the adequacy of the constructed models. Let us consider the main stages of working with the information system. The initial statistical information used for the development of model (5) comes from the territorial departments of statistics and is accumulated in the database “Statistics”, which is located in the Department of economic analysis and forecasting of the municipality [15, 16]. A predictive model of socio-economic development of the region is built by the specialists of the departments of economic analysis and forecasting using econometric methods. The formed forecast, which is the basis for the development of the forecast of socio-economic development of the country, goes to the Ministry of economic development of the region from the administration of the municipality. At the next stage, the work of an expert from the expert-analytical center of the FSBEI HE “Russian Academy of National Economy and Public Administration under the President of the Russian Federation” to assess the state of national security of the Russian Federation is organized. The key functions of the expert-analytical center of RANEPA are:

• organization of operational cooperation with the Administration of the President of the Russian Federation and the Government of the Russian Federation;
• analysis of information, development of forecasts and scenarios for the development of socio-economic relations, preparation of relevant proposals for public authorities;
• interaction with Russian and foreign partners engaged in research, analytical work and expert support of the authorities.

The expert’s work consists in calculating the forecast values of the national security indicators Xi^f, comparing them with the thresholds and, if the forecast values exceed the critical ones, analyzing the causes of the crisis.
Taking into account the expert’s opinion, the calculated forecast is provided to the specialists of the Ministry of economic development. The medium-term forecast is developed annually for the next financial year and the planning period by the Ministry of economic development of the Russian Federation. Then the Ministry of economic development of the Russian Federation, with the use of scenario conditions and the main parameters of the medium-term forecast approved


by the Government, calculates the medium-term forecast. The developed forecast is sent for consideration and approval to the Ministry of Finance of the Russian Federation and is also submitted to the Government Commission on budget projections [17]. An information-logic circuit (ILC) is used to illustrate the practical use of the developed mathematical software. On the ILC (Fig. 6), the main stages of decision-making for analyzing the state of the national security of the country at different time intervals, using the calculated values of the characteristics of national security, are shown. In Fig. 6 the following stages of the analysis of the state of the national security of the country are designated:

1. monitoring of the socio-economic situation in the country;
2. recording of information in the database;
3. determination of Xi^ac, the actual values of the security indicators, and checking the adequacy of the mathematical model;
4. calculation of the forecast values of the national security indicators Xi^f by an expert;
5. do the forecast values of national security exceed the thresholds, Xi^f ≥ Xi^th?
6. analysis of the causes of the crisis situation in the country by an expert;
7. determination of the consequences of the crisis in the sectors of the national economy, national defense, ecology, etc.;
8. preparation of strategies for the development of the sectors of the economy, taking into account the changes in the values of the national security characteristics, by the Department of economic sectors development;
9. development of possible scenarios for the development of the country by the Department of macroeconomic analysis and forecasting: innovative, inertial, optimistic and pessimistic ones;
10. definition of a list of specific measures to solve problems in the economic, social, political and other spheres;
11. decision-making by the DMP (decision maker);
12. saving of information in the database;
13. updating of statistical information in the database;
14. determination of Xi^ac, the actual values of the national security indicators;
15. adjustment of the mathematical model by the expert using the current information;
16. calculation by the expert of the forecast values of the indicators of national security;
17. do the forecast values of national security exceed the threshold values Xi^th?
18. checking by the expert of the implementation of the plan of measures approved by the Government of the Russian Federation;
19. evaluation of the prospects of development, or of the restriction of development, of the sectors;
20. preparation of a quarterly report to the Government of the Russian Federation;
21. saving of information in the database;
22. updating of statistical information in the database;
23. determination of Xi^ac, the actual values of the indicators of national security;
24. adjustment of the mathematical model using relevant information by an expert;

Fig. 6. Information and logic diagram for the state of the national security of Russia



25. calculation by the expert of the forecast values of the indicators of national security Xi^f;
26. do the forecast values of national security exceed the thresholds, Xi^f ≥ Xi^th?
27. verification of the implementation of the plan of measures approved by the Government of the Russian Federation;
28. assessment of the consequences of the forecast values of national security exceeding the thresholds for the economy, defense, environment and other areas;
29. preparation of the annual report to the Government;
30. saving of information in the database.

4 Discussion

Thus, a system-dynamic approach is used to develop a predictive mathematical model. Taking into consideration the complexity of the object of study, and to automate the calculation of the forecast values of the indicators of national security, “The program for the modeling and forecasting of main indicators of the national security of the Russian Federation” has been developed. The main stages of decision-making for analyzing the state of the national security of the country at different time intervals, using the calculated values of the characteristics of national security, are shown. The mathematical software presented in the article is used to analyze the state of the national security of the country and to predict the main indicators of the national security of Russia at different time intervals, with the given initial conditions and functional dependencies.

References

1. On the National Security Strategy of the Russian Federation (approved by Decree of the President of the Russian Federation of December 31, 2015, No. 683). Rossiyskaya Gazeta, No. 6871, 13 January 2016
2. Koptyug, V.A., Matrosov, V.M., Levashov, V.K., et al.: Approaches to the Development of the National Strategy for Sustainable Development of Russia, 409 p. Publishing House “Academy”, Moscow (2001)
3. Klyuev, V.V., Rezchikov, A.F., Kushnikov, V.A., et al.: Mathematical models for control, diagnosis and forecasting of the national security of Russia. Control. Diagnostics, no. 3, pp. 43–51 (2016)
4. Yandybaeva, N.V.: Modeling and forecasting performance indicators for educational activities of higher educational institution. Vestnik Mordovskogo universiteta = Mordovia Univ. Bull. 28(1), 120–136 (2018)
5. Forrester, J.W.: World Dynamics. Wright-Allen Press, Inc., Cambridge (1971). Mode of access: http://green-sector.com/upload/mirovaya_dinamika_-_forester.pdf
6. Brodsky, I.I.: Lectures on Mathematical and Simulation Modeling, 240 p. Direct-Media, Berlin (2015)
7. Rezchikov, A.F., Kushnikov, V.A., Yandybaeva, N.V., et al.: Model to assess the state of Russia’s national security, based on system dynamics theory. Appl. Inform. 2(68), 106–118 (2017)


8. The site of the State Committee on Statistics. Mode of access: http://gks.ru
9. Tuglular, T., Belli, F., Linschulte, M.: Input contract testing of graphical user interfaces. Int. J. Softw. Eng. Knowl. Eng. 26(2), 183–215 (2016)
10. Koo, H.M., Ko, I.Y.: An analysis of problem-solving patterns in open source software. Int. J. Softw. Eng. Knowl. Eng. 25(6), 1077–1103 (2015)
11. Sheriyev, M.N., Atymtayeva, L.B., Beissembetov, I.K., Kenzhaliyev, B.K.: Intelligence system for supporting human-computer interaction engineering processes. Appl. Math. Inf. Sci. 10(3), 927–935 (2016). https://doi.org/10.18576/amis/100310
12. Yandybayeva, N.V., Kozhanova, E.R., Kushnikov, V.A.: Development of a software product for determining the effectiveness of a higher education institution’s activity. Bull. Saratov State Technical University 2(1(75)), 214–219 (2014)
13. Certificate No. 2016661727 of the state registration of the computer program. The program for the modeling and forecasting of main indicators of the national security of the Russian Federation / N. V. Yandybaeva (RF); publ. 19 October 2016
14. Glazyev, M.: The basis of ensuring the economic security of the country: the alternative reform. Russ. Econ. Mag. 1, 8–9 (1997)
15. Piatetsky-Shapiro, G.: Data mining and knowledge discovery 1996 to 2005: overcoming the hype and moving from “university” to “business” and “analytics”. Data Min. Knowl. Discov. J. (2007). Mode of access: https://link.springer.com/article/10.1007/s10618-006-0058-2
16. Gruber, T.R., Tenenbaum, J.M., Weber, J.C.: Toward a knowledge medium for collaborative product development. In: Gero, J.S. (ed.) Artificial Intelligence in Design 1992. Kluwer Academic Publishers, Boston (1992). Mode of access: http://tomgruber.org/writing/onto-design.pdf
17. Resolution of the Government of the Russian Federation of November 14, 2015, No. 1234 “On the procedure for the development, adjustment, monitoring and control of the implementation of the forecast of socio-economic development of the Russian Federation for the medium term and the invalidation of some acts of the Government of the Russian Federation”. Website of the Ministry of economic development of the Russian Federation. Mode of access: http://economy.gov.ru/minec/activity/sections/strategicPlanning/regulation/

Mathematical Modeling of Waves in a Non-linear Shell with Viscous Liquid Inside It, Taking into Account Its Movement Inertia

Lev Mogilevich1(B), Yury Blinkov2, Dmitry Kondratov1, and Sergey Ivanov2

1 Yuri Gagarin State Technical University of Saratov, 77 Politechnicheskaya street, Saratov 410054, Russia
[email protected], [email protected]
2 Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russia
[email protected], [email protected]

Abstract. New mathematical models of wave movements in an infinitely long, physically non-linear shell are constructed. The shell contains a viscous incompressible liquid, and the inertia of its movement is taken into account. The models are based on the related hydrodynamics problems, described by the equations of shell dynamics and of a viscous incompressible liquid, in the form of generalized MKdV equations. An effective numerical algorithm using the Gröbner bases technology is proposed, aimed at constructing difference schemes to solve the generalized MKdV equation obtained in the given article. The algorithm was used to analyze the propagation of non-linear deformation waves in elastic and inelastic cylinder shells with viscous incompressible liquid inside them. Numerical experiments carried out on the basis of the obtained numerical algorithm made it possible to reveal new effects of the viscosity and inertia of the incompressible liquid on the deformation wave behavior in the shell, depending on the Poisson ratio of the shell material. In particular, exponential growth of the wave amplitude in the presence of liquid is revealed for shells made of inorganic materials (various pipelines and technological constructions). In the case of organic materials (blood vessels), the impact of the viscous liquid leads to a quick damping of the wave. The presence of liquid movement inertia leads to a transformation of the deformation wave velocity. Supported by grants RFBR 19-01-00014-a and the President of the Russian Federation MD-756.2018.8.

Keywords: Non-linear shell · Viscous incompressible liquid · Deformation waves · Numerical experiment

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 660–670, 2019. https://doi.org/10.1007/978-3-030-12072-6_53


1 Stating the Problem

The article’s goal is to develop mathematical and computer modeling methods for the non-linear wave dynamics processes of cylinder shells filled with viscous incompressible liquid, taking into account the inertia forces of the liquid movement, on the basis of computer algebra methods. The obtained results can be used for the diagnosis of material damage by means of acoustic methods. The developed model also describes the processes in pipes of small diameter (nanotubes) as related to the wavelength; the cases of pipes made of inorganic materials (various technological pipelines), as well as of organic material (blood vessels), are considered. This allows this method to be used in the field of hemodynamics. Reference [1] investigates the case of laminar movement of a viscous incompressible liquid in an absolutely rigid pipe with circular cross-section under a harmonic-in-time pressure change; only the liquid viscosity was taken into account, while the movement inertia was neglected. The wave process was investigated for a rod, a plate and a pipe (an elastic cylinder shell) in the absence of liquid in [2–6], with an ideal liquid in [7–9], and for a physically linear shell in [10].

Fig. 1. Axisymmetric shell sagging

Let us consider an elastic infinitely long shell with a circular cross-section (Fig. 1) with viscous incompressible liquid inside it. Writing down the movement equation of a cylinder shell element in the shifts for the Kirchhoff–Love model, we consider the material to be non-linearly elastic, with a cubic dependence of the tension intensity on the deformation intensity [11]:

$$\sigma_i = E e_i + m e_i^3, \quad i = x, y \qquad (1)$$

where E is the Young modulus and m is a material constant, determined in experiments on stretching or compression.

662

L. Mogilevich et al.

The tensions of the liquid layer are defined by

$$\begin{gathered}
q_n = P_{rr}\cos(\bar n, \bar n_r) + P_{rx}\cos(\bar n, \bar i), \qquad
q_x = P_{rx}\cos(\bar n, \bar n_r) + P_{xx}\cos(\bar n, \bar i),\\
P_{rr} = -p + 2\rho\nu\frac{\partial V_r}{\partial r}; \qquad
P_{rx} = \rho\nu\left(\frac{\partial V_x}{\partial r} + \frac{\partial V_r}{\partial x}\right); \qquad
P_{xx} = -p + 2\rho\nu\frac{\partial V_x}{\partial x}
\end{gathered} \qquad (2)$$

where q_x, q_n are the tensions from the liquid on the cross-section; r, x are the cylinder coordinates; V_r, V_x are the projections of the velocity on the cylinder coordinate axes; W is the shell sagging, positive towards the curvature center; p is the liquid pressure; ρ is the liquid density; ν is the kinematic viscosity coefficient; n̄ is the normal to the median surface of the shell; n̄_r, ī are the basis vectors of the cylindrical coordinate system (r, Θ, x) whose center is located on the geometric axis. If the tensions are shifted onto the undisturbed surface of the shell, then cos(n̄, n̄_r) = 1, cos(n̄, ī) = 0.

The shell dynamics equations are written down in the form [11–13]:

$$\frac{E h_0}{1-\mu_0^2}\frac{\partial}{\partial x}\left\{\left[1 + \frac{4}{3}\frac{m}{E}\left(\left(\frac{\partial U}{\partial x}\right)^2 - \frac{\partial U}{\partial x}\frac{W}{R} + \frac{W^2}{R^2}\right)\right]\left(\frac{\partial U}{\partial x} - \mu_0\frac{W}{R}\right)\right\} - \rho_0 h_0\frac{\partial^2 U}{\partial t^2} = -\left(q_x - W\frac{\partial q_x}{\partial r} + U\frac{\partial q_x}{\partial x}\right);$$

$$\frac{E h_0}{1-\mu_0^2}\left\{\frac{h_0^2}{12}\frac{\partial^2}{\partial x^2}\left(\frac{\partial^2 W}{\partial x^2}\right) - \frac{\partial}{\partial x}\left(\frac{\partial W}{\partial x}\frac{\partial U}{\partial x}\right) + \frac{1}{R}\left[1 + \frac{4}{3}\frac{m}{E}\left(\left(\frac{\partial U}{\partial x}\right)^2 - \frac{\partial U}{\partial x}\frac{W}{R} + \frac{W^2}{R^2}\right)\right]\left(\frac{W}{R} - \mu_0\frac{\partial U}{\partial x}\right)\right\} + \rho_0 h_0\frac{\partial^2 W}{\partial t^2} = q_n - W\frac{\partial q_n}{\partial r} + U\frac{\partial q_n}{\partial x}. \qquad (3)$$

Here μ0 is the Poisson ratio, t is time, and U is the longitudinal shift. The equations of motion of a viscous incompressible liquid and the continuity equation in the cylindrical coordinate system (r, Θ, x), in the case of axisymmetric flow, are written down in the form [14]:

$$\begin{aligned}
\frac{\partial V_r}{\partial t} + V_r\frac{\partial V_r}{\partial r} + V_x\frac{\partial V_r}{\partial x} + \frac{1}{\rho}\frac{\partial p}{\partial r} &= \nu\left(\frac{\partial^2 V_r}{\partial r^2} + \frac{1}{r}\frac{\partial V_r}{\partial r} + \frac{\partial^2 V_r}{\partial x^2} - \frac{V_r}{r^2}\right),\\
\frac{\partial V_x}{\partial t} + V_r\frac{\partial V_x}{\partial r} + V_x\frac{\partial V_x}{\partial x} + \frac{1}{\rho}\frac{\partial p}{\partial x} &= \nu\left(\frac{\partial^2 V_x}{\partial r^2} + \frac{1}{r}\frac{\partial V_x}{\partial r} + \frac{\partial^2 V_x}{\partial x^2}\right),\\
\frac{\partial V_r}{\partial r} + \frac{V_r}{r} + \frac{\partial V_x}{\partial x} &= 0.
\end{aligned} \qquad (4)$$

The liquid adhesion conditions on the boundary with the shell are written according to the Lagrange approach:

$$\frac{\partial U}{\partial t} = V_x + U\frac{\partial V_x}{\partial x} - W\frac{\partial V_x}{\partial r}, \qquad -\frac{\partial W}{\partial t} = V_r + U\frac{\partial V_r}{\partial x} - W\frac{\partial V_r}{\partial r}. \qquad (5)$$

1.1 The Dynamics Equation Derivation, Taking into Account Liquid Presence Inside Elastic Shell

Considering l as the characteristic wavelength, let us switch to dimensionless variables:

$$W = w_m u_3;\quad U = u_m u_1;\quad t^* = \frac{c_0}{l}t;\quad x^* = \frac{x}{l};\quad r^* = \frac{r}{R_1};\quad c_0 = \sqrt{\frac{E}{\rho_0\left(1-\mu_0^2\right)}};\quad V_r = w_m\frac{c_0}{l}v_r;\quad V_x = w_m\frac{c_0}{R_1}v_x \qquad (6)$$

here c0 is the sound speed. Let us introduce a small parameter ε ≪ 1 of the problem and the relations characterizing the problem:

$$\frac{u_m}{l} = \varepsilon = o(1);\quad \frac{R}{l} = O\!\left(\varepsilon^{1/2}\right);\quad \frac{h_0}{R} = O(\varepsilon);\quad \frac{m}{E} = O\!\left(\varepsilon^{-1}\right);\quad \frac{w_m}{R} = O(\varepsilon) \qquad (7)$$

Let us introduce semi-characteristic (running) coordinates and fast time:

$$\xi = x^* - ct^*, \qquad \tau = \varepsilon t^*, \qquad (8)$$

where c is the unknown dimensionless wave velocity. Substituting (6), (7) and (8) into the system (3), we obtain the equations for the dimensionless values u1, u3:

$$\frac{u_m}{l}\frac{\partial}{\partial\xi}\left\{\left[1 + \frac{4}{3}\frac{m}{E}\frac{u_m^2}{l^2}\left(\left(\frac{\partial u_1}{\partial\xi}\right)^2 - \frac{w_m l}{u_m R}\frac{\partial u_1}{\partial\xi}u_3 + \frac{w_m^2 l^2}{u_m^2 R^2}u_3^2\right)\right]\left(\frac{\partial u_1}{\partial\xi} - \mu_0\frac{w_m l}{u_m R}u_3\right)\right\}$$
$$- \frac{u_m}{l}\left(c^2\frac{\partial^2 u_1}{\partial\xi^2} - 2\varepsilon c\frac{\partial^2 u_1}{\partial\xi\,\partial\tau} + \varepsilon^2\frac{\partial^2 u_1}{\partial\tau^2}\right) = -\frac{l\left[q_x - \frac{w_m}{R}u_3\frac{\partial q_x}{\partial r^*} + \frac{u_m}{l}u_1\frac{\partial q_x}{\partial x^*}\right]}{\rho_0 h_0 c_0^2} \qquad (9)$$

x + um u ∂qx l[qx − wRm u3 ∂q 1 ∂x∗ ] l ∂r ∗ . ρ0 h0 c20

wm R ∂ 2 u3 um l ∂ξ 2



 1 + −μ0 ∂u ∂ξ +

wm l um R u3



{1+  2 2  2 wm l ∂u1 1 + uwmmRl ∂u + 43 ulm m + E ∂ξ ∂ξ u3 + u2m R2 u3   wm ∂u3 ∂u1 wm l ∂ + Rl ∂ξ l ∂ξ − ∂ξ + μ0 um R u3 {1+  2 u2m ∂u1 + 43 m + 2 E l ∂ξ   2 2 2 wm l R ∂ 2 u3 2 1 + wm c2 ∂∂ξu23 − 2εc ∂ξ∂τ + + uwmmRl ∂u ∂ξ u3 + u2m R2 u3 l2  ∂q ∂q w u n m 2 R(qn − Rm u3 ∂rn ∗ + l u1 ∂x∗ ) +ε2 ∂∂τu23 = . ρ0 h0 c2 

2

0

Let us expand the elastic shifts in powers of ε = um/l:

$$u_1 = u_{10} + \varepsilon u_{11} + \ldots, \qquad u_3 = u_{30} + \varepsilon u_{31} + \ldots \qquad (11)$$

Substituting them into the equations, we keep the members of orders ε⁰ and ε¹. By equating the coefficients at ε⁰ to zero, we obtain the system of equations

$$\frac{\partial^2 u_{10}}{\partial\xi^2} - \mu_0\frac{w_m l}{u_m R}\frac{\partial u_{30}}{\partial\xi} - c^2\frac{\partial^2 u_{10}}{\partial\xi^2} = 0, \qquad -\mu_0\frac{\partial u_{10}}{\partial\xi} + \frac{w_m l}{u_m R}u_{30} = 0. \qquad (12)$$


L. Mogilevich et al.

Then:

$$u_{30} = \mu_0\frac{u_m R}{w_m l}\frac{\partial u_{10}}{\partial\xi}, \qquad \left(1-\mu_0^2-c^2\right)\frac{\partial^2 u_{10}}{\partial\xi^2} = 0. \qquad (13)$$

Therefore u₁₀ is an arbitrary function, and the dimensionless wave velocity is c = √(1−μ₀²), since c² = 1−μ₀². By equating the coefficients at ε in the left and right parts of the equations and taking the previous results into account, we find

$$\frac{\partial}{\partial\xi}\left[\frac{\partial u_{11}}{\partial\xi} - \mu_0\frac{w_m l}{u_m R}u_{31}\right] + \frac{\partial}{\partial\xi}\left[\frac{4}{3}\frac{m}{E\varepsilon}\frac{u_m^2}{l^2}\left(1-\mu_0^2\right)\left(1+\mu_0+\mu_0^2\right)\left(\frac{\partial u_{10}}{\partial\xi}\right)^3\right] + 2\sqrt{1-\mu_0^2}\,\frac{\partial^2 u_{10}}{\partial\tau\partial\xi} = -\frac{l}{\varepsilon u_m\rho_0 h_0 c_0^2}\,q_x,$$

$$-\mu_0\frac{\partial u_{11}}{\partial\xi} + \frac{w_m l}{u_m R}u_{31} + \frac{1}{\varepsilon}\frac{R^2}{l^2}\,\mu_0\left(1-\mu_0^2\right)\frac{\partial^3 u_{10}}{\partial\xi^3} = \frac{Rl}{\varepsilon u_m\rho_0 h_0 c_0^2}\,q_n. \qquad (14)$$

By eliminating u₁₁ and u₃₁ from the system, we obtain

$$\frac{\partial^2 u_{10}}{\partial\xi\partial\tau} + \frac{1}{\varepsilon}\frac{R^2}{l^2}\frac{\mu_0^2\sqrt{1-\mu_0^2}}{2}\frac{\partial^4 u_{10}}{\partial\xi^4} + \frac{2m}{E\varepsilon}\frac{u_m^2}{l^2}\sqrt{1-\mu_0^2}\left(1+\mu_0+\mu_0^2\right)\left(\frac{\partial u_{10}}{\partial\xi}\right)^2\frac{\partial^2 u_{10}}{\partial\xi^2} = -\frac{1}{2\sqrt{1-\mu_0^2}}\frac{l}{\varepsilon u_m\rho_0 h_0 c_0^2}\left(q_x - \mu_0\frac{R}{l}\frac{\partial q_n}{\partial\xi}\right). \qquad (15)$$

In the absence of liquid, the right-hand side equals zero and we obtain the modified Korteweg–de Vries (MKdV) equation. The right-hand side has to be determined by solving the hydrodynamics equations.

1.2 Solution of the Liquid Dynamics Equations and Determination of the Stresses Acting on the Shell from the Viscous Incompressible Liquid

Considering a circular section, we introduce dimensionless variables and parameters

$$V_r = w_m\frac{c_0}{l}v_r;\quad V_x = w_m\frac{c_0}{R_1}v_x;\quad r^* = \frac{r}{R_1};\quad t^* = \frac{c_0}{l}t;\quad x^* = \frac{x}{l};\quad p = \frac{\rho\nu c_0 l w_m}{R_1^3}P + p_0;\quad \frac{R_1}{l} = \psi = O\!\left(\varepsilon^{1/2}\right);\quad \lambda = \frac{w_m}{R_1} = O(\varepsilon). \qquad (16)$$

Substituting (16) into Eq. (4) and the boundary conditions (5), we obtain the equations and boundary conditions for the dimensionless liquid velocity and pressure components. Expanding the pressure and velocity components in powers of the small parameter λ,

$$P = P^0 + \lambda P^1 + \ldots,\quad v_x = v_x^0 + \lambda v_x^1 + \ldots,\quad v_r = v_r^0 + \lambda v_r^1 + \ldots, \qquad (17)$$

we get for the first terms of the expansion the equations

$$\frac{\partial P^0}{\partial r^*} = 0;\qquad \frac{R_1 c_0}{\nu}\psi\frac{\partial v_x^0}{\partial t^*} + \frac{\partial P^0}{\partial x^*} = \frac{1}{r^*}\frac{\partial}{\partial r^*}\left(r^*\frac{\partial v_x^0}{\partial r^*}\right);\qquad \frac{1}{r^*}\frac{\partial}{\partial r^*}\left(r^* v_r^0\right) + \frac{\partial v_x^0}{\partial x^*} = 0 \qquad (18)$$


and boundary conditions in the form

$$v_r^0 = -\frac{\partial u_3}{\partial t^*},\quad v_x^0 = \frac{u_m R_1}{w_m l}\frac{\partial u_1}{\partial t^*}\ \text{at } r^* = 1;\qquad \frac{\partial v_r^0}{\partial r^*} = 0,\quad \frac{\partial v_x^0}{\partial r^*} = 0\ \text{at } r^* = 0. \qquad (19)$$

We now determine the stresses exerted by the liquid on the shell in these variables. Within the accuracy of λ and ψ we obtain

$$q_x = \lambda\frac{\nu}{R_1 c_0}\rho c_0^2\left.\frac{\partial v_x}{\partial r^*}\right|_{r^*=1}, \qquad q_n = -p_0 - \frac{\lambda}{\psi}\frac{\nu}{R_1 c_0}\rho c_0^2 P. \qquad (20)$$

At the first iteration step, considering ψR₁c₀/ν ≪ 1, we leave out the unsteady term in the second Eq. (18) and obtain

$$\frac{\partial P}{\partial x^*} = 16\frac{\partial}{\partial t^*}\left(\frac{1}{2}\frac{u_m R_1}{w_m l}u_1 - \int u_3\,dx^*\right), \qquad \frac{\partial v_x}{\partial t^*} = \frac{u_m R_1}{w_m l}\frac{\partial^2 u_1}{\partial t^{*2}} + 4\left(r^{*2}-1\right)\frac{\partial^2}{\partial t^{*2}}\left(\frac{1}{2}\frac{u_m R_1}{w_m l}u_1 - \int u_3\,dx^*\right). \qquad (21)$$

By substituting (21) into the first term of the second Eq. (18), we obtain at the second iteration step

$$P = \frac{\partial}{\partial t^*}\left[16\int\left(\frac{1}{2}\frac{u_m R_1}{w_m l}u_1 - u_3\right)dx^*\right] + \frac{2}{3}\psi\frac{R_1 c_0}{\nu}\frac{\partial}{\partial t^*}\iint\left(\frac{1}{2}\frac{u_m R_1}{w_m l}u_1 - u_3\right)dx^*dx^*,$$

$$\left.\frac{\partial v_x}{\partial r^*}\right|_{r^*=1} = \frac{\partial}{\partial t^*}\left[8\int\left(\frac{1}{2}\frac{u_m R_1}{w_m l}u_1 - u_3\right)dx^*\right] + \frac{1}{3}\psi\frac{R_1 c_0}{\nu}\frac{\partial}{\partial t^*}\iint\left(\frac{1}{2}\frac{u_m R_1}{w_m l}u_1 - u_3\right)dx^*dx^*. \qquad (22)$$

The iteration method converges.

1.3 The Main Equation Describing the Deformation Wave in the Shell Containing a Viscous Incompressible Liquid

Taking into account that the variables ξ = x* − ct*, τ = εt*, c = √(1−μ₀²) have been introduced, we obtain, with accuracy to ε and considering the relation (20),

$$P = \sqrt{1-\mu_0^2}\;8\left(2\int u_{30}\,d\xi - \frac{u_m R_1}{w_m l}u_{10}\right) - \frac{1}{3}\psi\frac{R_1 c_0}{\nu}\left(8u_{30} - \frac{u_m R_1}{w_m l}\frac{\partial u_{10}}{\partial\xi}\right)\left(1-\mu_0^2\right). \qquad (23)$$

Here

$$\left.\frac{\partial v_x}{\partial r^*}\right|_{r^*=1} = -\sqrt{1-\mu_0^2}\;4\left(2u_{30} - \frac{u_m R_1}{w_m l}\frac{\partial u_{10}}{\partial\xi}\right) - \frac{1}{6}\psi\frac{R_1 c_0}{\nu}\left(2u_{30} - \frac{u_m R_1}{w_m l}\frac{\partial^2 u_{10}}{\partial\xi^2}\right)\left(1-\mu_0^2\right). \qquad (24)$$

Then, taking into account that (w_m l/(u_m R)) u₃₀ = μ₀ ∂u₁₀/∂ξ, we obtain

$$q_x - \mu_0\frac{R}{l}\frac{\partial q_n}{\partial\xi} = -\frac{\nu}{R_1 c_0}\rho c_0^2\;4\sqrt{1-\mu_0^2}\left(1 - (2\mu_0)^2\frac{R}{R_1}\right)\frac{u_m}{l}\frac{\partial u_{10}}{\partial\xi} + \frac{R_1}{l}\rho c_0^2\;\frac{1}{6}\left(1-\mu_0^2\right)\frac{u_m}{l}\left(1 - (4\mu_0)^2\frac{R}{R_1}\right)\frac{\partial^2 u_{10}}{\partial\xi^2}. \qquad (25)$$

Therefore, we have the equation

$$\frac{\partial^2 u_{10}}{\partial\xi\partial\tau} + \frac{1}{\varepsilon}\frac{R^2}{l^2}\frac{\mu_0^2\sqrt{1-\mu_0^2}}{2}\frac{\partial^4 u_{10}}{\partial\xi^4} + \frac{2m}{E\varepsilon}\frac{u_m^2}{l^2}\sqrt{1-\mu_0^2}\left(1+\mu_0+\mu_0^2\right)\left(\frac{\partial u_{10}}{\partial\xi}\right)^2\frac{\partial^2 u_{10}}{\partial\xi^2} = -\frac{1}{2}\frac{l}{\varepsilon\rho_0 h_0}\frac{\nu}{R_1 c_0}\rho\left[-4\left(1-(2\mu_0)^2\frac{R}{R_1}\right)\frac{1}{l}\frac{\partial u_{10}}{\partial\xi} + \frac{1}{6}\left(1-(4\mu_0)^2\frac{R}{R_1}\right)\sqrt{1-\mu_0^2}\,\frac{R_1 c_0}{\nu}\frac{R_1}{l^2}\frac{\partial^2 u_{10}}{\partial\xi^2}\right]. \qquad (26)$$

2 Analysis of the Resolving Equation

The adopted precision in (26) allows us to set R₁ = R. Let us introduce the notation ∂u₁₀/∂ξ = c₀φ, η = c₁ξ, t = c₂τ and obtain

$$\frac{\partial\varphi}{\partial t} + \frac{1}{\varepsilon}\frac{R^2}{l^2}\frac{\mu_0^2\sqrt{1-\mu_0^2}}{2}\frac{c_1^3}{c_2}\frac{\partial^3\varphi}{\partial\eta^3} + \frac{c_0^2 c_1}{c_2}\frac{2m}{E\varepsilon}\frac{u_m^2}{l^2}\sqrt{1-\mu_0^2}\left(1+\mu_0+\mu_0^2\right)\varphi^2\frac{\partial\varphi}{\partial\eta} - 2\left(1-(2\mu_0)^2\right)\frac{\rho\,l}{\rho_0 h_0\varepsilon}\frac{\nu}{R_1 c_0}\frac{1}{c_2}\,\varphi + \frac{1}{12}\sqrt{1-\mu_0^2}\left(1-(4\mu_0)^2\right)\frac{\rho\,R_1}{\rho_0 h_0\varepsilon}\frac{c_1}{c_2}\frac{\partial\varphi}{\partial\eta} = 0. \qquad (27)$$

Let us choose

$$c_2 = 2\frac{\nu}{R_1 c_0}\frac{\rho\,l}{\rho_0 h_0\varepsilon}; \qquad (28)$$

$$c_1 = \left[c_2\left(\frac{1}{\varepsilon}\frac{R^2}{l^2}\frac{\mu_0^2\sqrt{1-\mu_0^2}}{2}\right)^{-1}\right]^{1/3}; \qquad (29)$$

$$c_0 = \left[6\,\frac{c_2}{c_1}\left(\frac{2m}{E\varepsilon}\frac{u_m^2}{l^2}\sqrt{1-\mu_0^2}\left(1+\mu_0+\mu_0^2\right)\right)^{-1}\right]^{1/2}, \qquad (30)$$

and introduce

$$\sigma_0 = 1 - 4\mu_0^2, \qquad (31)$$

with σ₀ > 0 at μ₀ < 1/2; σ₀ < 0 at μ₀ > 1/2; σ₀ = 0 at μ₀ = 1/2;

$$6\sigma_1 = \frac{c_0^2 c_1}{c_2}\frac{2m}{E\varepsilon}\frac{u_m^2}{l^2}\sqrt{1-\mu_0^2}\left(1+\mu_0+\mu_0^2\right); \qquad (32)$$

as was specified above, with the chosen c₀ we can consider σ₁ = 1;

$$\sigma = \frac{1}{12}\sqrt{1-\mu_0^2}\left(1-16\mu_0^2\right)\frac{\rho\,R_1}{\rho_0 h_0\varepsilon}\frac{c_1}{c_2}, \qquad (33)$$

with σ > 0 when μ₀ < 1/4; σ < 0 when μ₀ > 1/4; σ = 0 when μ₀ = 1/4;
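The sign conditions on σ and σ₀ stated above depend only on the Poisson ratio μ₀. A minimal sketch encoding just these sign regimes (the positive prefactors, which do not affect the signs, are dropped; the function name is illustrative):

```python
def dispersion_signs(mu0):
    """Signs of the MKdV coefficients from (31) and (33):
    sigma  ~ (1 - 16*mu0**2): positive for mu0 < 1/4,
    sigma0 ~ (1 - 4*mu0**2):  positive for mu0 < 1/2.
    Positive prefactors are dropped; only the signs are modeled."""
    sign = lambda v: (v > 0) - (v < 0)
    return sign(1 - 16 * mu0 ** 2), sign(1 - 4 * mu0 ** 2)
```

For example, `dispersion_signs(0.3)` gives `(-1, 1)`: the velocity correction σ is negative while σ₀ is still positive, the regime 1/4 < μ₀ < 1/2.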


and we obtain the generalized modified Korteweg–de Vries (MKdV) equation

$$\frac{\partial\varphi}{\partial t} + \sigma\frac{\partial\varphi}{\partial\eta} + 6\varphi^2\frac{\partial\varphi}{\partial\eta} + \frac{\partial^3\varphi}{\partial\eta^3} - \sigma_0\varphi = 0. \qquad (34)$$

Under the condition σ₀ = 0 (μ₀ = 1/2) we have the exact solution

$$\varphi = \pm k\,\mathrm{sech}\!\left(k\left(\eta - \left(k^2+\sigma\right)t\right)\right) \qquad (35)$$

with phase velocity

$$\omega = \left(k^2+\sigma\right)k. \qquad (36)$$

When t = 0 we have

$$\Phi(\eta, 0) = \pm k\,\mathrm{sech}(k\eta). \qquad (37)$$

The inertial force of the liquid motion acts simultaneously with the viscous force of the moving liquid. Their effect on the wave amplitude is determined by a computational experiment: solving the Cauchy problem for Eq. (34) with the initial condition (37). We use the following difference scheme for Eq. (34), similar to the Crank–Nicolson scheme for the heat conduction equation [15,16]:

$$\frac{u_j^{n+1}-u_j^n}{\tau} + \sigma\frac{\left(u_{j+1}^{n+1}-u_{j-1}^{n+1}\right)+\left(u_{j+1}^{n}-u_{j-1}^{n}\right)}{4h} + \frac{\left({u^3}_{j+1}^{n+1}-{u^3}_{j-1}^{n+1}\right)+\left({u^3}_{j+1}^{n}-{u^3}_{j-1}^{n}\right)}{4h} + \frac{u_{j+2}^{n+1}-2u_{j+1}^{n+1}+2u_{j-1}^{n+1}-u_{j-2}^{n+1}+u_{j+2}^{n}-2u_{j+1}^{n}+2u_{j-1}^{n}-u_{j-2}^{n}}{4h^3} - \sigma_0\frac{u_j^{n+1}+u_j^n}{2} = 0. \qquad (38)$$

Using difference scheme (38), a numerical study of the model (34) was carried out. The results are shown in the figures below.
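The time stepping described above can be sketched as follows. This is a minimal illustration, not the authors' code: it integrates Eq. (34) on a periodic grid with a Crank–Nicolson-type averaging of the right-hand side (the nonlinearity is written as 2(φ³)_η, equivalent to 6φ²φ_η), and the implicit step is resolved by simple fixed-point iteration; grid sizes and step lengths are arbitrary choices.

```python
import numpy as np

def mkdv_step(u, dt, h, sigma, sigma0, iters=50, tol=1e-12):
    """One Crank-Nicolson-type step for u_t + sigma*u_x + 6u^2*u_x + u_xxx - sigma0*u = 0
    on a periodic grid; the implicit equation is solved by fixed-point iteration."""
    def rhs(v):
        d1 = (np.roll(v, -1) - np.roll(v, 1)) / (2.0 * h)          # v_x
        d1c = (np.roll(v**3, -1) - np.roll(v**3, 1)) / (2.0 * h)   # (v^3)_x
        d3 = (np.roll(v, -2) - 2 * np.roll(v, -1)
              + 2 * np.roll(v, 1) - np.roll(v, 2)) / (2.0 * h**3)  # v_xxx
        return -(sigma * d1 + 2.0 * d1c + d3) + sigma0 * v
    u_new = u.copy()
    for _ in range(iters):
        u_next = u + 0.5 * dt * (rhs(u) + rhs(u_new))
        if np.max(np.abs(u_next - u_new)) < tol:
            return u_next
        u_new = u_next
    return u_new

def soliton(eta, t, k, sigma):
    """Exact solution (35) for sigma0 = 0."""
    return k / np.cosh(k * (eta - (k**2 + sigma) * t))
```

Propagating the initial profile (37) with σ = σ₀ = 0 reproduces the undisturbed solitary wave: the amplitude stays close to k.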

3 Conclusion

In the absence of liquid influence, the solitary wave moves without changing its velocity and amplitude (Fig. 2). The presence of the liquid (Fig. 3) leads to an increase in the velocity and amplitude of the wave (inorganic materials). In Fig. 4 the wave velocity drops while the amplitude increases (inorganic materials). In Fig. 5 both the velocity and the amplitude of the wave fall (living organisms).

Fig. 2. σ = 0, σ₀ = 0

Fig. 3. σ > 0, σ₀ > 0, 0 < μ₀ < 1/4

Fig. 4. σ < 0, σ₀ > 0, 1/4 < μ₀ < 1/2

If σ + k² > 0, then the wave velocity is greater than the velocity of small perturbations in the rod, and this is true for μ₀ < 1/4 (living organisms); but if μ₀ > 1/4, then possibly σ < 0 and possibly σ + k² < 0, so that the velocity is less than the velocity of small disturbances (sound) √(E/ρ₀). The inertia of the fluid motion increases the wave velocity at μ₀ < 1/4 and reduces it at μ₀ > 1/4. If σ₀ > 0, which is possible at μ₀ < 1/2, then the wave amplitude increases (inorganic substances); but if μ₀ > 1/2, then

Fig. 5. σ < 0, σ₀ < 0, μ₀ > 1/2

σ₀ < 0 and the wave amplitude drops (living organisms), which reflects the viscosity impact.

Supported by the Grants of RFBR 19-01-00014-a and of the President of the Russian Federation MD-756.2018.8.

References

1. Gromeka, I.S.: K teorii dvizheniya zhidkosti v uzkikh tsilindricheskikh trubakh [On the theory of liquid motion in narrow cylindrical tubes]. Izd-vo AN SSSR, Moscow, pp. 149–171 (1952)
2. Nariboli, G.A.: Nonlinear longitudinal dispersive waves in elastic rods. J. Math. Phys. Sci. 4, 64–73 (1970)
3. Nariboli, G.A., Sedov, A.: Burgers-Korteweg-De Vries equation for viscoelastic rods and plates. J. Math. Anal. Appl. 32, 661–667 (1970)
4. Yerofeyev, V.I., Klyuyeva, N.V.: Solitony i nelineynyye periodicheskiye volny deformatsii v sterzhnyakh, plastinakh i obolochkakh (obzor) [Solitons and nonlinear periodic strain waves in rods, plates and shells (a review)]. Izvestiya VUZ, AND 48(6), 725–740 (2002)
5. Zemlyanukhin, A.I., Mogilevich, L.I.: Nelineynyye volny v neodnorodnykh tsilindricheskikh obolochkakh: novoye evolyutsionnoye uravneniye [Nonlinear waves in inhomogeneous cylindrical shells: a new evolution equation]. Izvestiya VUZ, AND 47(3), 359–363 (2001)
6. Bochkarev, S.A.: Sobstvennyye kolebaniya vrashchayushcheysya krugovoy tsilindricheskoy obolochki s zhidkost'yu [Natural vibrations of a rotating circular cylindrical shell with liquid]. VMSS 3(2), 24–33 (2010)
7. Paidoussis, M.P., Nguyen, V.B., Misra, A.K.: A theoretical study of the stability of cantilevered coaxial cylindrical shells conveying fluid. J. Fluids Struct. 5(2), 127–164 (1991). https://doi.org/10.1016/0889-9746(91)90454-W
8. Amabili, M., Garziera, R.: Vibrations of circular cylindrical shells with nonuniform constraints, elastic bed and added mass. Part III: steady viscous effects on shells conveying fluid. J. Fluids Struct. 16(6), 795–809 (2002). https://doi.org/10.1006/jfls.2002.0446
9. Amabili, M.: Nonlinear Vibrations and Stability of Shells and Plates, p. 374. Cambridge University Press, Cambridge (2008). https://doi.org/10.1017/CBO9780511619694
10. Blinkov, Yu.A., Blinkova, A.Yu., Evdokimova, E.V., Mogilevich, L.I.: Mathematical modeling of nonlinear waves in an elastic cylindrical shell surrounded by an elastic medium and containing a viscous incompressible liquid. Acoustical Physics 64(3), 283–288 (2018). ISSN 1063-7710
11. Kauderer, H.: Nichtlineare Mechanik, p. 685. Springer-Verlag, Berlin (1958). (Russ. ed.: Nelineynaya mekhanika, p. 778. Inostrannaya Literatura, Moscow (1961))
12. Vol'mir, A.S.: Obolochki v potoke zhidkosti i gaza: zadachi gidrouprugosti [Shells in a flow of liquid and gas: problems of hydroelasticity], p. 320. Nauka (1979)
13. Vol'mir, A.S.: Nelineynaya dinamika plastinok i obolochek [Nonlinear dynamics of plates and shells]. Nauka (1972)
14. Loytsyanskiy, L.G.: Mekhanika zhidkosti i gaza [Fluid Mechanics], p. 840. Drofa, Moscow (2003)
15. Gerdt, V.P., Blinkov, Yu.A.: Involution and difference schemes for the Navier-Stokes equations. In: CASC 2009. Lecture Notes in Computer Science, vol. 5743, pp. 94–105 (2009). https://doi.org/10.1007/978-3-642-04103-7_10
16. Amodio, P., Blinkov, Yu.A., Gerdt, V.P., La Scala, R.: On consistency of finite difference approximations to the Navier-Stokes equations. In: CASC 2013. Lecture Notes in Computer Science, vol. 8136, pp. 46–60 (2013). https://doi.org/10.1007/978-3-319-02297-0_4

Mathematical Modeling of Hydroelastic Interaction Between Stamp and Three-Layered Beam Resting on Winkler Foundation

Aleksandr Chernenko¹, Dmitry Kondratov², Lev Mogilevich¹, Victor Popov¹(✉), and Elizaveta Popova³

¹ Yuri Gagarin State Technical University of Saratov, Saratov, Russia
[email protected], [email protected], [email protected]
² Russian Presidential Academy of National Economy and Public Administration, Saratov, Russia
[email protected]
³ Saratov State University, Saratov, Russia
[email protected]

Abstract. The purpose of the article is to develop the mathematical model of bending oscillations of a three-layered beam resting on Winkler foundation and interacting with a vibrating stamp through a thin layer of viscous incompressible liquid. The three-layered beam with incompressible lightweight filler is considered using the broken-normal hypothesis. The bending oscillations equation of the three-layered beam resting on Winkler foundation and interacting with the vibrating stamp through a viscous liquid layer is obtained. On the basis of the plane hydroelasticity problem solution, the laws of the three-layered beam deflections and of the pressure in the liquid along the channel are found. The frequency-dependent functions of the beam deflection amplitude distribution and of the liquid pressure along the channel are constructed. The obtained results allow defining the oscillation resonance frequencies and studying the stress-strain state of the three-layered beam, as well as the hydrodynamic parameters of the viscous liquid interacting with the vibrating stamp and the three-layered beam resting on Winkler foundation. The study was funded by the Russian Foundation for Basic Research (RFBR) according to the projects № 18-01-00127-a and № 19-01-00014-a.

Keywords: Hydroelasticity · Three-layered beam · Vibrating stamp · Winkler foundation · Viscous liquid · Oscillations



1 Introduction

Nowadays, composite and multi-layered materials are widely used in aerospace, machine building, civil construction and other industries. That is why the mathematical modeling problems of the static and dynamic behavior of three-layered elastic beams and plates are very important both for practical use and for theoretical purposes. For example, a review of different approaches to analyzing the behavior of multilayer structural elements was made in [1, 2]. In particular, reference [1] considers zig-zag theories for multilayered beams and plates, and reference [2] deals with the approach to studying

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 671–681, 2019. https://doi.org/10.1007/978-3-030-12072-6_54


three-layered elastic elements with incompressible filler based on the hypothesis of the broken normal, as well as static and dynamic problems of three-layered structural elements with compressible filler under various loadings.

In many practical cases multi-layered construction elements rest on an elastic foundation. One of the first investigations of this problem for a homogeneous beam was made in [3]; the model of Winkler foundation was considered in that study. In the monograph [4] an elastic foundation is studied on the basis of a single- or double-layer model, whose properties are described by two or more elastic characteristics. Contemporary analytical and numerical studies of the interaction of homogeneous beams and plates with an elastic foundation are considered in the review [5]. The statics and dynamics problems of three-layered beams and plates resting on an elastic foundation under local and distributed loads of various natures are considered in [6–9]. In particular, reference [6] is devoted to studying the thermal force bending of an elastoplastic three-layered beam resting on Winkler foundation; the core layer is assumed to be rigid. Reference [7] investigates the axisymmetric vibrations of an elastic circular three-layered plate with lightweight filler resting on an elastic foundation under the action of sudden local loads; the foundation is modeled as the Winkler one. Symmetric transverse oscillations of a circular sandwich plate resting on Winkler foundation under a thermal impact are studied in [8]. Reference [9] studies the static and dynamic stability of an asymmetric sandwich beam with a viscoelastic core resting on a Pasternak foundation under a pulsating axial load and subjected to a one-dimensional thermal gradient.

On the other hand, there are many applications for homogeneous beams and plates interacting with liquid, i.e. hydroelastic problems. For instance, one of the first hydroelasticity investigations was made in [10].
That study deals with the axisymmetric free oscillations of a circular plate interacting with water. The hydroelastic response model of a pipeline filled with a transported fluid, based on a beam interacting with an ideal liquid, is considered in [11]. The hydroelastic response model of an internal combustion engine wet cylinder liner, based on studying hydroelastic forced oscillations of a cylindrical shell surrounded by a viscous liquid layer, is considered in [12]. Reference [13] studies the dynamic behavior of an annular channel formed by two elastic cylindrical shells interacting with a layer of ideal fluid between them. The hydroelastic oscillations of a circular plate immersed in an ideal incompressible liquid contained in a rigid cylinder are investigated in [14]. The stability and dynamic hydroelastic problem of a plate forming part of the boundary dividing regions filled with a viscous incompressible fluid is studied in [15]. The investigation of vibrations of an infinite beam resting on a viscous liquid layer is carried out in [16]. The forced oscillations of an elastic fixed channel wall interacting with a viscous liquid layer are considered in [17]. The bending oscillations of a cantilever beam surrounded by a viscous incompressible liquid are studied in [18], and an analogous problem for a beam located in a viscous incompressible flow is considered in [19]. The hydroelastic models of wall oscillations of parallel-plate and tapering channels filled with a liquid are developed in [20–25]. References [26–29] consider hydroelastic oscillations of plates resting on Winkler or Pasternak foundation. However, the mathematical modeling of the dynamic interaction of a three-layered plate with a liquid is of theoretical and practical interest too. For instance, free hydroelastic oscillations of composite plates interacting with an ideal liquid are investigated in [30], and the oscillations of multilayered plates interacting with liquid are studied in [31, 32]. But still, up to now no work has been done to study the dynamic interaction between a viscous liquid layer and a sandwich beam resting on an elastic foundation. Thus, the present work deals with the hydroelastic oscillations of a three-layered beam resting on Winkler foundation and interacting with a vibrating stamp through a viscous liquid layer.

2 Formulation of the Problem

Let us consider a narrow channel filled with a viscous incompressible liquid (see Fig. 1). The channel walls are parallel to each other and the channel length is 2ℓ. The upper channel wall is a rigid vibrating stamp. The bottom channel wall is a three-layered beam resting on Winkler foundation. The three-layered beam consists of outer layers 1, 2 and an incompressible lightweight filler 3. The outer layer thicknesses are h₁ and h₂, and the filler thickness is 2c. The broken-normal hypothesis is accepted for the three-layered beam [2], i.e. the Kirchhoff hypothesis is valid for the outer layers, while the normal in the beam filler remains straight and turns by the angle φ. Following [2], we assume rigid diaphragms to be situated at the beam edges, hindering the relative shift of the layers but not impeding deformation out of their plane. Let us introduce the Cartesian coordinate system Oxz, with its center located at the center of the beam filler in the unperturbed state. The three-layered beam oscillations are due to the impact of the vibrating stamp through the liquid layer, and the beam deformations are considered small. The three-layered beam is simply supported at the edges. The liquid layer thickness is h₀ ≪ ℓ. The liquid pressure at the right and left edges is the constant p₀. The stamp vibrations occur along the axis Oz with amplitude z_m. We consider the plane hydroelastic problem for steady-state harmonic oscillations, since the damping in the liquid layer due to its viscosity leads to a quick decay of transient processes [33]. According to [34], inertial forces in the longitudinal direction are not considered and we study the bending oscillations of the three-layer beam only.

Fig. 1. The three-layered beam, resting on Winkler foundation and interacting with vibrating stamp through a viscous liquid between them


The stamp vibration law is presented as

z(ωt) = z_m f(ωt),  f(ωt) = sin ωt, (1)

where z_m is the amplitude of the stamp vibrations and ω is the frequency. According to [2], the dynamic equations of the three-layered beam resting on Winkler foundation are obtained in the form:

a₁ ∂²u/∂x² + a₆ ∂²φ/∂x² − a₇ ∂³w/∂x³ = −q_zx,
a₆ ∂²u/∂x² + a₂ ∂²φ/∂x² − a₃ ∂³w/∂x³ = 0, (2)
a₇ ∂³u/∂x³ + a₃ ∂³φ/∂x³ − a₄ ∂⁴w/∂x⁴ − κw − m₀ ∂²w/∂t² = −q_zz,

where the following notation is used:

a₁ = K₁⁺h₁ + K₂⁺h₂ + 2K₃⁺c;
a₂ = c²(K₁⁺h₁ + K₂⁺h₂ + (2/3)K₃⁺c);
a₃ = c[K₁⁺h₁(c + h₁/2) + K₂⁺h₂(c + h₂/2) + (2/3)K₃⁺c²];
a₄ = K₁⁺h₁(c² + ch₁ + h₁²/3) + K₂⁺h₂(c² + ch₂ + h₂²/3) + (2/3)K₃⁺c³;
a₆ = c(K₁⁺h₁ − K₂⁺h₂);
a₇ = K₁⁺h₁(c + h₁/2) − K₂⁺h₂(c + h₂/2);
K_j⁺ = K_j + (4/3)G_j;  m₀ = ρ₁h₁ + ρ₂h₂ + 2ρ₃c.

Here j = 1, 2, 3 are the layer numbers, G_j is the shear modulus of the j-th layer, K_j is the bulk modulus of the j-th layer, ρ_j is the density of the j-th layer material, u is the longitudinal displacement of the three-layered beam, w is the three-layered beam deflection, φ is the angle of rotation of the deformed normal in the three-layered beam filler, κ is the foundation modulus, q_zz is the normal stress in the viscous liquid, and q_zx is the shear stress in the viscous liquid. The boundary conditions of Eq. (2) are

u = φ = w = ∂²w/∂x² = 0 at x = ±ℓ. (3)

Taking into account that the movement of the viscous liquid layer in a narrow channel is a creeping one, according to [35] the dynamic equations of the liquid layer are presented in the form:

(1/ρ) ∂p/∂x = ν(∂²u_x/∂x² + ∂²u_x/∂z²),
(1/ρ) ∂p/∂z = ν(∂²u_z/∂x² + ∂²u_z/∂z²), (4)
∂u_x/∂x + ∂u_z/∂z = 0,

where u_x, u_z are the liquid velocity projections on the coordinate axes, ρ is the liquid density, ν is the kinematic coefficient of the liquid viscosity, and p is the pressure. The boundary conditions of Eq. (4) consist of the no-slip conditions and the conditions for the pressure at the edges:

u_x = 0, u_z = dz/dt at z = h₀ + c + h₁, (5)
u_x = ∂u/∂t, u_z = ∂w/∂t at z = w + c + h₁, (6)
p = p₀ at x = ±ℓ.

3 Solution of the Problem

Let us introduce the small parameters

λ = z_m/h₀ ≪ 1, ψ = h₀/ℓ ≪ 1,

and the dimensionless variables:

ζ = (z − c − h₁)/h₀; ξ = x/ℓ; τ = ωt;
w = w_m W(ξ, τ); u = u_m U(ξ, τ); φ = φ_m Φ(ξ, τ);
p = p₀ + (ρν z_m ω / (h₀ψ²)) P(ξ, ζ, τ); (7)
u_z = z_m ω U_ζ(ξ, ζ, τ); u_x = (z_m ω / ψ) U_ξ(ξ, ζ, τ).

Bearing in mind the small parameters and the dimensionless variables (7), Eq. (4) in the zero approximation in ψ and λ takes the form:

∂P/∂ξ = ∂²U_ξ/∂ζ²; ∂P/∂ζ = 0; ∂U_ξ/∂ξ + ∂U_ζ/∂ζ = 0, (8)


and the corresponding boundary conditions (5), (6) take the form:

U_ξ = 0, U_ζ = df/dτ at ζ = 1, (9)
U_ξ = 0, U_ζ = (w_m/z_m) ∂W/∂τ at ζ = 0, (10)
P = 0 at ξ = ±1.

Thus, according to the second equation of system (8), in the zero approximation in ψ the pressure does not depend on the coordinate ζ. By solving the hydrodynamic Eq. (8) with the boundary conditions (9), (10) we obtain:

U_ξ = ((ζ² − ζ)/2) ∂P/∂ξ,  U_ζ = (w_m/z_m) ∂W/∂τ + ((3ζ² − 2ζ³)/12) ∂²P/∂ξ², (11)

P = 6(ξ² − 1) df/dτ − 12 (w_m/z_m) ∫₋₁^ξ ∫₋₁^ξ (∂W/∂τ) dξ dξ + 6(ξ + 1)(w_m/z_m) ∫₋₁^1 ∫₋₁^ξ (∂W/∂τ) dξ dξ. (12)

The normal and shear stresses of the viscous liquid layer acting on the three-layered beam are, in the variables (7):

q_zz = −p₀ − (ρν z_m ω / (h₀ψ²)) P at ζ = 0, (13)
q_zx = (ρν z_m ω / (h₀ψ)) ∂U_ξ/∂ζ at ζ = 0. (14)

Comparing Eqs. (13) and (14) we get q_zz ≫ q_zx, i.e. the shear stress can be neglected. Substituting (13) into Eq. (2) and neglecting (14), we obtain:

a₁ ∂²u/∂x² + a₆ ∂²φ/∂x² − a₇ ∂³w/∂x³ = 0,
a₆ ∂²u/∂x² + a₂ ∂²φ/∂x² − a₃ ∂³w/∂x³ = 0, (15)
a₇ ∂³u/∂x³ + a₃ ∂³φ/∂x³ − a₄ ∂⁴w/∂x⁴ − κw − m₀ ∂²w/∂t² = p₀ + (ρν z_m ω / (h₀ψ²)) P.

From the first and second Eqs. (15) we obtain:

∂²u/∂x² = b₁ ∂³w/∂x³,  ∂²φ/∂x² = b₂ ∂³w/∂x³, (16)


where the following notation is used:

b₁ = (a₂a₇ − a₃a₆)/(a₁a₂ − a₆²),  b₂ = (a₁a₃ − a₇a₆)/(a₁a₂ − a₆²).

Taking into account Eqs. (16) and (12) in the third Eq. (15), we obtain the bending oscillations equation of the three-layered beam resting on Winkler foundation and interacting with the vibrating stamp through the viscous liquid layer:

D ∂⁴w/∂x⁴ + κw + m₀ ∂²w/∂t² = −p₀ − (ρν z_m ω / (h₀ψ²)) [ 6(ξ² − 1) df/dτ − 12 (w_m/z_m) ∫₋₁^ξ ∫₋₁^ξ (∂W/∂τ) dξ dξ + 6(ξ + 1)(w_m/z_m) ∫₋₁^1 ∫₋₁^ξ (∂W/∂τ) dξ dξ ], (17)

where D = a₄ − a₇b₁ − a₃b₂. The boundary conditions of Eq. (17) are

w = ∂²w/∂x² = 0 at x = ±ℓ. (18)

According to the boundary conditions (18), the solution of Eq. (17) is presented in the form:

w = w_m Σ_{k=1}^∞ (R_k⁰ + R_k(τ)) cos((2k − 1)πξ/2). (19)

Here the upper index 0 denotes the solution corresponding to the static pressure p₀. By substituting Eqs. (7), (19) into Eq. (17) and expanding the pressure p₀ and (ξ² − 1) in series of trigonometric functions of the longitudinal coordinate ξ, we obtain

w_m Σ_{k=1}^∞ [ (D/ℓ⁴)((2k−1)π/2)⁴ + κ ] (R_k⁰ + R_k) cos((2k−1)πξ/2)
+ m₀ω² w_m Σ_{k=1}^∞ (d²R_k/dτ²) cos((2k−1)πξ/2)
+ w_m Σ_{k=1}^∞ 12 (ρνω/(h₀ψ²)) (2/((2k−1)π))² (dR_k/dτ) cos((2k−1)πξ/2) (20)
= −p₀ Σ_{k=1}^∞ (4(−1)^k/((2k−1)π)) cos((2k−1)πξ/2)
− z_m Σ_{k=1}^∞ 12 (ρνω/(h₀ψ²)) (4(−1)^k/((2k−1)π)) (2/((2k−1)π))² (df/dτ) cos((2k−1)πξ/2).


Due to the linearity of Eq. (20), we get the system of linear algebraic equations for the definition of R_k⁰:

w_m [ (D/ℓ⁴)((2k−1)π/2)⁴ + κ ] R_k⁰ = −p₀ 4(−1)^k/((2k−1)π),  k = 1, 2, ...,

and the system of ordinary differential equations for the definition of R_k(τ):

w_m [ (D/ℓ⁴)((2k−1)π/2)⁴ + κ ] R_k + m₀ω² w_m d²R_k/dτ² + 12 (ρνω/(h₀ψ²)) (2/((2k−1)π))² w_m dR_k/dτ
= −z_m 12 (ρνω/(h₀ψ²)) (4(−1)^k/((2k−1)π)) (2/((2k−1)π))² df/dτ.

Finally, after solving these systems, we obtain the following expression for the three-layered beam elastic deflection due to the influence of the vibrating stamp through the viscous liquid layer:

w = p₀ (2ℓ⁴/D) Σ_{k=1}^∞ [ (−1)^{k+1} (2/((2k−1)π))⁵ / (1 + (κ/D)(2ℓ/((2k−1)π))⁴) ] cos((2k−1)πx/(2ℓ)) − z_m A(x, ω) sin(τ + φ(x, ω)). (21)

Here we introduce the symbols:

A(x, ω) = √(C² + B²),  φ(x, ω) = arctg(C/B),
B = Σ_{k=1}^∞ [ K_k^z K_k^w ω² / ((Δ_k − m₀ω²)² + (K_k^w ω)²) ] cos((2k−1)πx/(2ℓ)),
C = Σ_{k=1}^∞ [ K_k^z ω (Δ_k − m₀ω²) / ((Δ_k − m₀ω²)² + (K_k^w ω)²) ] cos((2k−1)πx/(2ℓ)),
Δ_k = (D/ℓ⁴)((2k−1)π/2)⁴ + κ,
K_k^z = 12 (ρν/(h₀ψ²)) (4(−1)^k/((2k−1)π)) (2/((2k−1)π))²,
K_k^w = 12 (ρν/(h₀ψ²)) (2/((2k−1)π))².


As a result, taking into account (7) and substituting (19) into (12), the pressure in the liquid layer between the vibrating stamp and the three-layered beam can be written down as:

p = p₀ + z_m P(x, ω) sin(ωt + φ_p(x, ω)). (22)

Here we introduce the symbols:

P(x, ω) = √(S² + Q²),  φ_p(x, ω) = arctg(Q/S),
Q = Σ_{k=1}^∞ K_k^z ω [ 1 − (K_k^w ω)² / ((Δ_k − m₀ω²)² + (K_k^w ω)²) ] cos((2k−1)πx/(2ℓ)),
S = Σ_{k=1}^∞ [ K_k^w K_k^z ω² (Δ_k − m₀ω²) / ((Δ_k − m₀ω²)² + (K_k^w ω)²) ] cos((2k−1)πx/(2ℓ)).
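The frequency-dependent amplitude A(x, ω) can be evaluated numerically to locate the resonance frequencies. A sketch under illustrative parameter values (`visc_coef` stands for 12ρν/(h₀ψ²); all numbers are placeholders, not the paper's data):

```python
import numpy as np

def beam_amplitude(x, omega, D, kappa, m0, ell, visc_coef, n_terms=50):
    """Deflection amplitude A(x, omega) built from the series in Eq. (21)."""
    k = np.arange(1, n_terms + 1)
    a = (2 * k - 1) * np.pi
    Delta = D / ell**4 * (a / 2) ** 4 + kappa                    # Delta_k
    Kw = visc_coef * (2 / a) ** 2                                # K_k^w
    Kz = visc_coef * (4 * (-1.0) ** k / a) * (2 / a) ** 2        # K_k^z
    den = (Delta - m0 * omega**2) ** 2 + (Kw * omega) ** 2
    cosx = np.cos(a / 2 * x / ell)
    B = np.sum(Kz * Kw * omega**2 / den * cosx)
    C = np.sum(Kz * omega * (Delta - m0 * omega**2) / den * cosx)
    return np.hypot(B, C)

# scanning omega shows a sharp peak near sqrt(Delta_1/m0), the first resonance
omegas = np.linspace(0.5, 5.0, 2001)
A = np.array([beam_amplitude(0.0, w, 1.0, 0.0, 1.0, 1.0, 0.01) for w in omegas])
```

With these placeholder values Δ₁ = (π/2)⁴, so the peak of A sits near ω = (π/2)² ≈ 2.47, which is how the resonance frequencies mentioned in the abstract can be read off the frequency response.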

4 Summary and Conclusion

Thus, the mathematical model of hydroelastic interaction between a vibrating stamp and a three-layered beam resting on Winkler foundation was obtained by means of the perturbation method. The presented mathematical model can be used for investigating the resonance bending oscillations of the three-layered beam and its stress-strain state in the case when the three-layered beam is a wall of a channel resting on an elastic foundation.

The bending oscillation equation of the three-layered beam resting on Winkler foundation and interacting with the vibrating stamp through a viscous liquid layer is obtained on the basis of the analytical solution of the plane hydrodynamic problem for the creeping movement of a viscous liquid layer in a narrow channel with vibrating walls. As a result of solving this equation for steady-state harmonic oscillations, we find the expressions for the deflection of the three-layered beam, Eq. (21), and for the pressure along the channel, Eq. (22). These expressions contain the frequency-dependent functions A(x, ω) and P(x, ω). The function A(x, ω) in Eq. (21) is the frequency-dependent function of the three-layered beam deflection distribution along the channel. The function P(x, ω) in Eq. (22) is the frequency-dependent function of the pressure distribution along the channel. Thus, investigating these functions gives the opportunity to study the hydroelastic response of the channel wall resting on Winkler foundation and the hydrodynamic parameters of the viscous incompressible liquid in the channel. Furthermore, the obtained results can be used for the mathematical modeling of hydroelastic vibrations of three-layered elements used in instrument engineering, aerospace, machine building, civil construction and so on.

Acknowledgments. The study was funded by the Russian Foundation for Basic Research (RFBR) according to the projects № 18-01-00127-a and № 19-01-00014-a.


References

1. Carrera, E.: Historical review of zig-zag theories for multilayered plates and shells. Appl. Mech. Rev. 56(3), 287–308 (2003)
2. Gorshkov, A.G., Starovoitov, E.I., Yarovaya, A.V.: Mechanics of Layered Viscoelastoplastic Structural Elements. Fizmatlit, Moscow (2005). (in Russian)
3. Krylov, A.N.: On Analysis of Beams Lying on Elastic Base. Izd-vo AN SSSR, Leningrad (1931). (in Russian)
4. Pleskachevskii, Y.M., Starovoitov, E.I., Leonenko, D.V.: Mechanics of Three-Layer Beams and Plates Connected with an Elastic Foundation. Fizmatlit, Moscow (2011). (in Russian)
5. Wang, Y.H., Tham, L.G., Cheung, Y.K.: Beams and plates on elastic foundations: a review. Prog. Struct. Eng. Mater. 7(4), 174–182 (2005)
6. Starovoitov, E.I., Leonenko, D.V.: Deformation of a three-layer elastoplastic beam on an elastic foundation. Mech. Solids 46(2), 291–298 (2011)
7. Starovoitov, E.I., Leonenko, D.V.: Vibrations of circular composite plates on an elastic foundation under the action of local loads. Mech. Compos. Mater. 52(5), 665–672 (2016)
8. Starovoitov, E.I., Leonenko, D.V.: Thermal impact on a circular sandwich plate on an elastic foundation. Mech. Solids 47(1), 111–118 (2012)
9. Pradhan, M., Dash, P.R., Pradhan, P.K.: Static and dynamic stability analysis of an asymmetric sandwich beam resting on a variable Pasternak foundation subjected to thermal gradient. Meccanica 51(3), 725–739 (2016)
10. Lamb, H.: On the vibrations of an elastic plate in contact with water. Proc. Roy. Soc. A 98, 205–216 (1921)
11. Veklich, N.A.: Equation of small transverse vibrations of an elastic pipeline filled with a transported fluid. Mech. Solids 48(6), 673–681 (2013)
12. Mogilevich, L.I., Popov, V.S., Popova, A.A.: Oscillations of a cylinder liner of an internal combustion engine with a water cooling system caused by piston group impacts. J. Mach. Manuf. Reliab. 37(3), 293–299 (2008)
13. Bochkarev, S.A., Lekomtsev, S.V.: IOP Conference Series: Materials Science and Engineering, vol. 208, 012009 (2017)
14. Askari, E., Jeong, K.-H., Amabili, M.: Hydroelastic vibration of circular plates immersed in a liquid-filled container with free surface. J. Sound Vib. 332(12), 3064–3085 (2013)
15. Velmisov, P.A., Ankilov, A.V.: Dynamic stability of plate interacting with viscous fluid. Cybern. Phys. 6(4), 262–270 (2017)
16. Önsay, T.: Effects of layer thickness on the vibration response of a plate-fluid layer system. J. Sound Vib. 163(2), 231–259 (1993)
17. Ageev, R.V., Mogilevich, L.I., Popov, V.S., Popova, A.A., Kondratov, D.V.: Mathematical model of pulsating viscous liquid layer movement in a flat channel with elastically fixed wall. Appl. Math. Sci. 8(159), 7899–7908 (2014)
18. Faria, C.T., Inman, D.J.: Modeling energy transport in a cantilevered Euler-Bernoulli beam actively vibrating in Newtonian fluid. Mech. Syst. Signal Process. 45(2), 317–329 (2014)
19. Akcabay, D.T., Young, Y.L.: Hydroelastic response and energy harvesting potential of flexible piezoelectric beams in viscous flow. Phys. Fluids 24(5) (2012)
20. Bochkarev, S.A., Lekomtsev, S.V.: Effect of boundary conditions on the hydroelastic vibrations of two parallel plates. Solid State Phenom. 243, 51–58 (2016)
21. Mogilevich, L.I., Popov, V.S., Popova, A.A.: Dynamics of interaction of elastic elements of a vibrating machine with the compressed liquid layer lying between them. J. Mach. Manuf. Reliab. 39(4), 322–331 (2010)
22. Bochkarev, S.A., Lekomtsev, S.V.: Numerical investigation of the effect of boundary conditions on hydroelastic stability of two parallel plates interacting with a layer of ideal flowing fluid. J. Appl. Mech. Tech. Phys. 57(7), 1254–1263 (2016)
23. Kurzin, V.B.: Streamwise vibrations of a plate in a viscous fluid flow in a channel, induced by forced transverse vibrations of the plate. J. Appl. Mech. Tech. Phys. 52(3), 459–463 (2011)
24. Ageev, R.V., Kuznetsova, E.L., Kulikov, N.I., Mogilevich, L.I., Popov, V.S.: Mathematical model of movement of a pulsing layer of viscous liquid in the channel with an elastic wall. PNRPU Mech. Bull. 3, 17–35 (2014)
25. Mogilevich, L.I., Popov, V.S., Popova, A.A.: Longitudinal and transverse oscillations of an elastically fixed wall of a wedge-shaped channel installed on a vibrating foundation. J. Mach. Manuf. Reliab. 47(3), 227–234 (2018)
26. Hosseini-Hashemi, S., Karimi, M., Hossein Rokni, D.T.: Hydroelastic vibration and buckling of rectangular Mindlin plates on Pasternak foundations under linearly varying in-plane loads. Soil Dyn. Earthq. Eng. 30(12), 1487–1499 (2010)
27. Ergin, A., Kutlu, A., Omurtag, M.H., Ugurlu, B.: Dynamics of a rectangular plate resting on an elastic foundation and partially in contact with a quiescent fluid. J. Sound Vib. 317(1–2), 308–328 (2008)
28. Hasheminejad, S.M., Mohammadi, M.M.: Hydroelastic response suppression of a flexural circular bottom plate resting on Pasternak foundation. Acta Mech. 228(12), 4269–4292 (2017)
29. Ergin, A., Kutlu, A., Omurtag, M.H., Ugurlu, B.: Dynamic response of Mindlin plates resting on arbitrarily orthotropic Pasternak foundation and partially in contact with fluid. Ocean Eng. 42, 112–125 (2012)
30. Kramer, M.R., Liu, Z., Young, Y.L.: Free vibration of cantilevered composite plates in air and in water. Compos. Struct. 95, 254–263 (2013)
31. Ageev, R.V., Mogilevich, L.I., Popov, V.S.: Vibrations of the walls of a slot channel with a viscous fluid formed by three-layer and solid disks. J. Mach. Manuf. Reliab. 43(1), 1–8 (2014)
32. Mogilevich, L.I., et al.: Mathematical modeling of three-layer beam hydroelastic oscillations. Vibroeng. Procedia 12, 12–18 (2017)
33. Panovko, Y.G., Gubanova, I.I.: Stability and Oscillations of Elastic Systems. Consultants Bureau Enterprises, Inc., New York (1965)
34. Gorshkov, A.G., Morozov, V.I., Ponomarev, A.T., Shklyarchuk, F.N.: Aerohydroelasticity of Structures. Fizmatlit, Moscow (2000). (in Russian)
35. Loitsyanskii, L.G.: Mechanics of Liquid and Gas. Drofa, Moscow (2003). (in Russian)

The Mathematical Model for Describing the Principles of Enterprise Management “Just in Time, Design to Cost, Risks Management”

Igor Lutoshkin, Svetlana Lipatova, Yuriy Polyanskov, Nailya Yamaltdinova, and Margarita Yardaeva

Ulyanovsk State University, L. Tolstoy 42, Ulyanovsk 432017, Russia
[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract. The principles “just in time”, “design to cost” and “risks management” are formalized in the form of a mathematical model. These principles are chosen on the basis of an analysis of the management methodologies used in the practice of industrial enterprises. The model is recommended for use in the design of information systems that support management decisions. The model can be built on the basis of different methods: stochastic, parametric, heuristic, etc. The proposed approach allows heterogeneous submodels to be used for assessment and management tasks based on one or more specific criteria: just in time, design to cost and risks management. The proposed mathematical model is a model of the upper level of abstraction; in practical use, the task being solved, the selected criteria and the available limitations must be taken into account. Practical implementation of the proposed model and its introduction into digital production presuppose implementation and monitoring with the help of an information system. The use of such a methodology is advisable at large enterprises, in particular in machine building and aviation.

Keywords: Assessment of the company’s activities · A comprehensive model for assessing the activities of the enterprise · The methodology for assessing the activities of the enterprise · Just in time · Design to cost · Risks management

1 Introduction

In difficult economic circumstances (shortage of financial resources and labor, limited productive capacities, competitors’ activities, economic sanctions) it is important for all enterprises to develop effective economic strategies in order to maintain their advantages on the market. The main goals of enterprises are to produce goods (provide services) on time and to reduce input costs and other expenses.

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 682–695, 2019. https://doi.org/10.1007/978-3-030-12072-6_55


In this case they need to improve management tools under various risks associated with a constantly changing external environment and the company’s own capacity (range of goods, prices, quality of raw materials, etc.). Risks management problems were solved in [3]. The authors described the process of innovative goods production and considered the impact of risks on competitive edge. The organizational principles “Just in time” and “Design to cost” are treated together in [3]; the principles “Just in time” and “fast-reacting production” are treated together in [18]. In our paper a formalization of the principles of an enterprise’s activities is offered. It sets the main priorities of management and allows the developed model to be adapted to the company’s organizational structure. On the one hand, modern methods and tools are required for applying the model; in particular, information systems are used in the field of digital manufacturing. On the other hand, the development of information systems in the digital economy makes both accounting and management functions part of the system functionality. This raises the question of a formal description of management principles. Therefore, a formal management model should be the base of intelligent information systems in economics.

2 Existing Management Methodologies

The most widespread organizational principles in Russia are “Just in time” [3,18,20], “Design to cost” [3,25] and “Risks management” [3,5,13]. However, there are some differences in the interpretations of these terms resulting from differences among management methodologies. The methodologies are focused on the features of a specific company’s activities, which causes difficulties in applying them in another area. Also, each methodology is aimed at implementing one particular principle, whereas several principles may act at once. Let us consider the main principles of the methodologies and identify those of them that can be used in different settings.

The most common logistical concept which provides resources to companies within specific time frames is the methodology “Just in time” (Just In Time, JIT). The methodology implies the following: the supply of raw materials should be in accordance with the production plan with respect to the required amount, place of taking over the goods, time of delivery and sale. The application of the methodology may lead to a reduction in stock holdings and lower costs [30]. Published results suggest that the JIT principle is effective in service systems [32], production processes [4] and supply chain management [16]. Time frames for the production of goods and services shape the activity of all enterprises, so the principle “Just in time” is taken into account in our research.

The methodology “Design to cost” lies in creating above-average quality goods within a limited cost. Note that the total costs of production include the costs of design, manufacturing, escorting, software support and recycling. Here, the cost parameters of the products can be controlled in the implementation of this methodology [6]. As an objective function, the total profit or costs can be considered: in the first case, the objective function should be maximized [19]; in the second, minimized [17]. It is not possible to plan an enterprise’s activities without cost accounting, so the methodology “Design to cost” is included in the model.

The methodology “Risks management” is applied when there are risk management alternatives [1]. Recent studies [15] focus on the conceptualization of risk in supply chain management. The authors of [15] emphasize that the main problem in the field of risk management is the quantitative assessment and modeling of the management process. The authors of [14,28] pay attention to the level of experts’ knowledge about risk assessment techniques. New approaches to uncertainty estimation are developed in [8,27]. Principles of sustainability [10,11,23], reliability [10,21,26] and validity [2,12,14] are considered. There is a recognized need to develop management control and dynamic assessment of risk management [9]. Fundamental questions of risk management and forecasting events in different areas are stated in [7,17,24]. Business management includes several levels of management: the risk management principle is needed at the top management level, whereas production units work in accordance with approved production plans.

The methods of cost management, such as direct costing, absorption costing, the standard-cost method, target costing, CVP-analysis, LCC-analysis and the VCC method, are aimed at cost reduction. They set the functions of new goods production planning and cost monitoring in accordance with the environment. Currently, the quality-enhancement system “Total Quality Management” (TQM) is being implemented in the production processes of many enterprises.
The advantage of this methodology is that it improves the quality not only of goods but also of the company’s internal processes. The main disadvantage is the high cost of initializing processes, supporting services and improving the quality of goods. The quality of organizational processes depends on staff qualifications, resources and the management tools that are used in the proposed model. Such categories as client satisfaction, financial indicators and job satisfaction also have an impact. These categories form a set of constraints which define the feasible region of resources and factors.

The above-mentioned methodologies are the most commonly used, although a number of other methodologies aimed at improving the effectiveness of companies’ activities could also be applied. The choice of methodology depends on the type of the enterprise’s activity, the units of the enterprise and the management function. In particular, in [29] the experience of introducing the Japanese production model into the British car market is presented. The authors conclude that there are obstacles to the implementation of the model because of the specific structure of the Japanese automobile industry. Differences in mentality, the management ladder and geographic location resulted in a failure to apply the model in the European market.


Let us consider the mathematical model, which is based on the principles “Just in time”, “Design to cost”, “Risks management”.

3 The Mathematical Model of Management

Suppose the production process can be divided into N phases. Each phase i is characterized by a fixed period of time τ_i, 1 ≤ i ≤ N. The duration of each phase is determined by the specifics of management, normative documents, staff qualifications, the efficiency of capital resource use, etc. In the case when the production process is linear, the total production time T can be represented as

T = \sum_{i=1}^{N} \tau_i.

If the production process is not linear, there are time points at which the process branches into parts. Let phases k and l be implemented independently of each other, with their results used in the next phase. In this case the total production time is determined as

T = \max(\tau_k, \tau_l) - \tau_k - \tau_l + \sum_{i=1}^{N} \tau_i.

That is, in nonlinear processes the total period may be less than in linear ones. Let us introduce the following notation:

– n_i is the number of types of resources that are transformed in phase i, 1 ≤ i ≤ N;
– R_{ij} is the amount of resource j, 1 ≤ j ≤ n_i, that is transformed in phase i, 1 ≤ i ≤ N;
– R_i = (R_{i1}, R_{i2}, …, R_{i n_i}) is the vector of the amounts of resources that are transformed in phase i, 1 ≤ i ≤ N;
– r_i is the number of types of factors (mechanisms) that are used in phase i, 1 ≤ i ≤ N;
– X_{ij} is the value of factor j, 1 ≤ j ≤ r_i, that is used in phase i, 1 ≤ i ≤ N;
– X_i = (X_{i1}, X_{i2}, …, X_{i r_i}) is the vector of the values of the factors that are used in phase i, 1 ≤ i ≤ N;
– y_i is the result determined in phase i, 1 ≤ i ≤ N.

The relationship between results, resources and factors can be expressed as a (production) function F_i: y_i = F_i(R_i, X_i). The process of achieving the result y_i takes τ_i time units. Changing y_i is possible through changing R_i and X_i. Let R_i and X_i be limited by the feasible sets P_R^i and P_X^i, respectively.
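The two formulas for the total production time can be illustrated with a small sketch; the phase durations below are hypothetical values, not data from the paper.

```python
# Illustrative sketch of the total production time T for a linear process
# and for a process where two phases run in parallel. The durations tau_i
# are hypothetical, not taken from the paper.
tau = [2.0, 3.0, 1.5, 4.0]           # tau_1 .. tau_N

# Linear process: phases run one after another.
T_linear = sum(tau)                  # T = sum of all tau_i

# Phases 2 and 3 (0-based indices 1 and 2) run independently, so their
# combined contribution is max(tau_k, tau_l) instead of tau_k + tau_l.
k, l = 1, 2
T_branched = max(tau[k], tau[l]) - tau[k] - tau[l] + sum(tau)

print(T_linear)     # 10.5
print(T_branched)   # 9.0  (less than the linear total, as the text notes)
```

As expected, the branched total is smaller than the linear one whenever the two independent phases have positive durations.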


In general, each phase of the production process has both inward and outward resource flows. The inward resources are determined by the results of other phases. Denote by A_i the set of the results which influence the activity in phase i,

A_i = \{y_{k_1}, y_{k_2}, \dots, y_{k_{s_i}}\}, \quad \{k_1, k_2, \dots, k_{s_i}\} \subset \{1, 2, \dots, N\},

then R_i = R_i(A_i). In general, the results, factors and resources depend on the time t:

X_i = X_i(t), \quad R_i = R_i(t) = R_i(A_i(t), t), \quad y_i = y_i(t) = F(R_i(A_i(t - \tau_i), t - \tau_i), X_i(t - \tau_i), t).

Let the enterprise’s finished product be made in the last phase and expressed in monetary terms. Depending on the kind of production, the product can be described in continuous or discrete form. Let the function f(t) be defined on [0; T] and let us introduce the notation SI_{0 \le t \le T}(f(t)):

SI_{0 \le t \le T}(f(t)) = \begin{cases} \sum_{t=0}^{T} f(t), & [0; T] = \{0, 1, 2, \dots, T\}, \\ \int_0^T f(t)\,dt, & [0; T] \text{ is a continuous interval.} \end{cases}

Suppose the enterprise’s planned volume of production is y_N^* and the planning period is T^*. We can formulate the following problem: to define the functions X_i(t) ≥ 0, R_i(t) ≥ 0, 1 ≤ i ≤ N under the conditions

SI_{0 \le t \le T}(y_N(t)) = y_N^*;   (1)

R_i(t) \in P_R^i, \quad X_i(t) \in P_X^i, \quad 1 \le i \le N, \quad 0 \le t \le T^*;   (2)

y_i(t) = F(R_i(A_i(t - \tau_i), t - \tau_i), X_i(t - \tau_i), t), \quad 1 \le i \le N, \quad 0 \le t \le T^*;   (3)

A_i = \{y_{k_1}, y_{k_2}, \dots, y_{k_{s_i}}\}, \quad \{k_1, k_2, \dots, k_{s_i}\} \subset \{1, 2, \dots, N\}.   (4)

The solution to problem (1)–(4) is a schedule for the optimal use of the resources and factors in order to implement the production plan. The developed model is based on the methodology “Just in time”. Denote by S_JIT the feasible set of problem (1)–(4). If S_JIT is empty, problem (1)–(4) does not have any solutions. In this sense, it can be argued that the existing technologies are not sufficiently effective and should be changed with the aim of reducing the τ_i. If S_JIT contains more than one element, then any element can be a solution and we need to choose the most appropriate one. The choice can be based on additional criteria. One of the most important indicators of any business is the total production cost. Let

– C_i^R be the vector of resource costs in phase i, 1 ≤ i ≤ N;
– C_i^X be the vector of factor costs in phase i, 1 ≤ i ≤ N;


– C_F be the cost that is not related to the production process;
– C be the accumulated cost related to the production process.

Then the total production cost can be calculated by the functional

C(T) = SI_{0 \le t \le T}\left( \sum_{i=1}^{N} \left( (C_i^R(t - \tau_i), R_i(t - \tau_i)) + (C_i^X(t - \tau_i), X_i(t - \tau_i)) \right) + C_F(t) \right),   (5)

where (·,·) denotes the scalar product. Denote by PC^* the maximum allowed cost per unit of output. Then we can write the following condition:

PC^* \ge C(T) / SI_{0 \le t \le T}(y_N(t)).   (6)

Finding the functions X_i(t) ≥ 0, R_i(t) ≥ 0, 1 ≤ i ≤ N under conditions (2), (3), (4), (6) is based on the application of the methodology “Design to cost”. Denote by S_DC the feasible set of problem (2), (3), (4), (6). If S_DC is empty, then the problem does not have any solutions and the existing technologies do not provide the output at an affordable cost. If S_DC contains two or more elements, then we need to choose the most appropriate of them.

As a rule, the issue of choosing the optimal solution is more relevant for higher organizational levels. Here, each choice is accompanied by risks of profit loss caused by a wrong decision or a no-action alternative. The following stochastic variables can negatively affect the results:

– ξ_i(t), the vector of stochastic variables that indicate the profit loss caused by the lack of resources in phase i, 1 ≤ i ≤ N;
– η_i(t), the vector of stochastic variables that indicate the profit loss caused by the lack of factors in phase i, 1 ≤ i ≤ N;
– ζ_i(t), the scalar that represents the profit loss caused by imperfections in the technological system in phase i, 1 ≤ i ≤ N.

Generally, each of them depends on resources, factors and the time component t ≥ 0: ζ_i = ζ_i(R_i, X_i, y_i, t), ξ_i = ξ_i(R_i, t), η_i = η_i(X_i, t). Then Eq. (3) can be rewritten as

y_i(t) = F\big( R_i(A_i(t - \tau_i), t - \tau_i) + \xi_i(R_i(A_i(t - \tau_i), t - \tau_i), t),\; X_i(t - \tau_i) + \eta_i(X_i(t - \tau_i), t - \tau_i),\; t \big) + \zeta_i(R_i(A_i(t - \tau_i), t - \tau_i), X_i(t - \tau_i), y_i(t), t),   (7)

1 \le i \le N, \quad 0 \le t \le T.

Equation (7) can be used for controlling the production phases. We can regulate

– the average value and dispersion of ζ_i, knowing the technology in phase i;
– the stochastic variable ξ_i, by choosing input resources (e.g. suppliers) in phase i;
– the profit loss caused by random deviation of the values of the factors, η_i, in phase i.
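The cost functional (5) and the unit-cost condition (6) can be checked numerically in the discrete-time case. The sketch below uses hypothetical dimensions, prices and amounts; the scalar products become dot products of cost and amount vectors.

```python
import numpy as np

# Discrete-time sketch of the cost functional (5) and condition (6).
# All numbers (horizon, phase lags, prices, amounts, PC*) are
# hypothetical stand-ins, not data from the paper.
T = 4                       # discrete planning horizon, periods 0..T
N = 2                       # number of phases
tau = [1, 1]                # phase durations, giving the shifts t - tau_i

rng = np.random.default_rng(0)
CR = rng.uniform(1, 2, size=(N, T + 1, 3))   # resource price vectors C_i^R(t)
R  = rng.uniform(0, 1, size=(N, T + 1, 3))   # resource amount vectors R_i(t)
CX = rng.uniform(1, 2, size=(N, T + 1, 2))   # factor price vectors C_i^X(t)
X  = rng.uniform(0, 1, size=(N, T + 1, 2))   # factor value vectors X_i(t)
CF = 0.5                                     # non-production cost per period
yN = rng.uniform(1, 2, size=T + 1)           # output of the last phase y_N(t)

# C(T): sum over t of (scalar products over all phases with lagged
# arguments) plus the fixed cost C_F(t).
C_T = sum(
    sum(np.dot(CR[i, t - tau[i]], R[i, t - tau[i]]) +
        np.dot(CX[i, t - tau[i]], X[i, t - tau[i]])
        for i in range(N) if t - tau[i] >= 0) + CF
    for t in range(T + 1)
)

unit_cost = C_T / yN.sum()  # left side of condition (6) rearranged
PC_star = 20.0              # maximum allowed unit cost (hypothetical)
print(unit_cost <= PC_star) # True: condition (6) holds for this draw
```

A schedule (R, X) belongs to S_DC exactly when this comparison succeeds for the given PC^*.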


Let us introduce the function (α)_+: (α)_+ = α if α ≥ 0, and (α)_+ = 0 if α < 0. Let δ_{ξ_i}, δ_{η_i}, δ_{ζ_i} be the permissible error norms for the stochastic variables ξ_i, η_i, ζ_i, respectively, 1 ≤ i ≤ N. In case of going beyond the permissible norms, the total enterprise loss is defined as

L(R, X, y) = \sum_{i=1}^{N} L_i(R_i, X_i, y_i) = \sum_{i=1}^{N} \left( \beta_R \rho_R(\xi_i) + \beta_X \rho_X(\eta_i) + \beta_y \rho_y(\zeta_i) \right),

where R = (R_1, R_2, …, R_N), X = (X_1, X_2, …, X_N), y = (y_1, y_2, …, y_N), and the values

\rho_R(\xi_i) = SI_{0 \le t \le T}\left( \| E(\xi_i(R_i(A_i(t - \tau_i), t - \tau_i), t) - \delta_{\xi_i})_+ \| \right),

\rho_X(\eta_i) = SI_{0 \le t \le T}\left( \| E(\eta_i(X_i(t - \tau_i), t) - \delta_{\eta_i})_+ \| \right),

\rho_y(\zeta_i) = SI_{0 \le t \le T}\left( E(\zeta_i(R_i(A_i(t - \tau_i), t - \tau_i), X(t - \tau_i), y(t), t) - \delta_{\zeta_i})_+ \right)

identify the extent of the deviation of the stochastic effects from the given error norms. The norms ‖·‖ are Euclidean.

The choice of X_i(t) ≥ 0, R_i(t) ≥ 0, 1 ≤ i ≤ N under conditions (2), (4), (7) is based on the methodology “Risks management”. The risk is measured through the function L(R, X, y). Denote by S_RM the set of solutions of problem (2), (4), (7) which satisfy the condition L(R, X, y) < δ_L, where δ_L is the permissible norm of the loss. If the set S_RM is empty, the risks are unacceptably high; it is recommended to change technologies and resources and to improve staff qualifications. If S_RM is not empty, then each element of the set is a solution and we should choose the most appropriate one.

Let us consider the integration of the mentioned methodologies into one methodology, “Just in time, design to cost, risks management”. According to this methodology, we need to fix the planning period T = T^*, the cost of production PC^*, the output y_N^* and the permissible norms δ_{ξ_i}, δ_{η_i}, δ_{ζ_i}, δ_L. The obtained solutions should be sought in S = S_JIT ∩ S_DC ∩ S_RM. If the set S is not empty, then each element of the set can be considered a solution. If S contains more than one element, then additional criteria need to be introduced, e.g. constructing a projection of the enterprise’s established distribution of resources and factors (R̃, X̃) onto S.

However, the transition from the current distribution of resources and factors to some solution in S requires additional costs. In this regard, we need to choose the appropriate solution in S. Suppose that (R̃, X̃) does not belong to S, that is, this distribution does not satisfy the methodology “Just in time, design to cost, risks management”. We need to find the most appropriate distribution and evaluate the transition to it.
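The loss terms ρ defined above combine an expectation, a positive-part operator and a Euclidean norm, which makes Monte Carlo estimation natural. The sketch below estimates a single term ρ_R(ξ_i) under a hypothetical distribution of the shortage variable; all constants are illustrative.

```python
import numpy as np

# Monte Carlo sketch of one loss term rho_R(xi_i): the Euclidean norm of
# the positive part (xi_i - delta_xi)_+, averaged over samples and summed
# over discrete time. The distribution of xi_i and all constants are
# hypothetical, not taken from the paper.
rng = np.random.default_rng(1)

def positive_part(a):
    # (alpha)_+ applied elementwise
    return np.maximum(a, 0.0)

T = 5                     # discrete horizon 0..T
delta_xi = 0.2            # permissible error norm delta_xi
n_samples = 100_000       # Monte Carlo sample size per period

rho_R = 0.0
for t in range(T + 1):
    # xi_i(t): a 2-dimensional resource-shortage loss, here |N(0, 0.3)|
    xi = np.abs(rng.normal(0.0, 0.3, size=(n_samples, 2)))
    # sample estimate of the norm of E(xi - delta_xi)_+
    rho_R += np.linalg.norm(positive_part(xi - delta_xi), axis=1).mean()

delta_L = 2.0             # permissible total loss norm (hypothetical)
print(rho_R < delta_L)    # True: this loss term stays within the norm
```

A candidate schedule belongs to S_RM when the weighted sum of all such terms stays below δ_L.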
The evaluation of the transition from the current distribution (R̃, X̃) to another one (R, X) is represented by the following function:

\rho_{cost}(\tilde{R}, \tilde{X}, R, X) = \sum_{i=1}^{N} \left( \rho_{X_i}(X_i, \tilde{X}_i) + \rho_{R_i}(R_i, \tilde{R}_i) \right),   (8)


where ρ_{X_i}(X_i, X̃_i) is the cost associated with the transition from X̃_i to X_i in phase i, and ρ_{R_i}(R_i, R̃_i) is the cost associated with the transition from R̃_i to R_i in phase i. Then the problem

\rho_{cost}(\tilde{R}, \tilde{X}, R, X) \to \min_{(R,X) \in S}   (9)

is to find the distribution (R, X) that satisfies the methodology at the lowest cost. Therefore, the developed mathematical model of management makes it possible to find a feasible distribution of resources within the frame of the enterprise’s production process.
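The selection scheme of this section (intersect the three feasible sets, then pick the element nearest to the current distribution by the transition cost (8)-(9)) can be shown on a toy example. The candidate points and all three admissibility tests below are hypothetical scalar stand-ins for the full (R, X) description.

```python
import numpy as np

# Toy illustration of selecting a solution from S = S_JIT ∩ S_DC ∩ S_RM
# by the transition-cost criterion (8)-(9). Candidates and thresholds
# are hypothetical, not from the paper.
rng = np.random.default_rng(2)
candidates = rng.uniform(0, 10, size=(200, 2))      # columns: (R, X)

def in_S_JIT(r, x): return r + x >= 8               # output plan is met
def in_S_DC(r, x):  return 1.2 * r + 0.8 * x <= 14  # unit-cost cap, cf. (6)
def in_S_RM(r, x):  return abs(r - x) <= 4          # loss below delta_L

current = np.array([2.0, 2.0])                      # current (R~, X~)

S = [c for c in candidates
     if in_S_JIT(*c) and in_S_DC(*c) and in_S_RM(*c)]
print(len(S) > 0)   # True: S is non-empty for this draw

if S:
    # Problem (9): the admissible point cheapest to move to, with the
    # transition cost (8) taken here as a simple L1 distance.
    best = min(S, key=lambda c: np.abs(c - current).sum())
    print(best.round(2))
```

If the filtered list were empty, the model's recommendation would be to revise technologies or relax the norms rather than to force a schedule.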

4 The Example of the Application of the Model

Consider the investment strategy development process as an example of business management. Let y(t) be the revenue function at moment t, I(t) the investment function, v(t) the function of the investment effect, which is accumulated over the fixed interval [τ_1, τ_2], G(τ) a function describing the nature of this effect, τ the time lag, and w(t) the combined impact of non-real-investment factors:

v(t) = \int_{\tau_1}^{\tau_2} G(\tau) I(t - \tau)\,d\tau;   (10)

w(t) = g(y(t - 1));   (11)

y(t) = f(v(t), w(t)).   (12)

The financial result of the company for the planning period [0; T] is the total profit or loss:

\Pi(T) = \int_0^T \pi(y(t), I(t))\,dt = \int_0^T \left( f(v(t), w(t)) - I(t) - c(y(t), t) \right) dt.   (13)

Here c is the total cost at the point of time t that is independent of investment. The investment budget is limited in the following way:

0 \le I(t) \le B, \quad t \in [0; T].   (14)

The types of the functions g(y), f(v, w), c(y, t) and G(τ) are defined by statistical methods applied to the enterprise’s financial data. However, it is possible to formulate general requirements for these functions:

– f(v, w), c(y, t) and G(τ) are non-negative functions;
– f(v, w) increases with respect to v;
– G(τ) has a unique maximum at τ = τ^* and \lim_{\tau \to +\infty} G(\tau) = 0.
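A discretization of the model (10)-(13) shows how revenue responds to lagged investment. The functional forms and numbers below are hypothetical stand-ins chosen only to satisfy the stated requirements (f increasing in v, G non-negative with a single interior maximum).

```python
import numpy as np

# Discrete-time sketch of the investment model (10)-(12) and a crude
# quadrature of the profit (13). All forms and numbers are hypothetical.
dt = 0.5                            # half-year step, as in the data set
tau1, tau2 = 1.0, 2.5               # lag window [tau1, tau2] (assumed)
T = 3.0                             # planning horizon, years

def G(tau):                         # single-peaked non-negative kernel
    return np.exp(-(tau - 1.5) ** 2)

times = np.arange(0.0, T + dt, dt)
I = np.full_like(times, 10.0)       # constant investment policy
y = np.zeros_like(times)
y[0] = 100.0                        # initial revenue

for n in range(1, len(times)):
    t = times[n]
    # v(t): quadrature of the lag integral (10) over [tau1, tau2];
    # investment before t = 0 is taken as zero (left=0.0).
    v = sum(G(tau) * np.interp(t - tau, times, I, left=0.0) * dt
            for tau in np.arange(tau1, tau2 + dt, dt))
    w = 0.9 * y[n - 1]              # (11) with g linear
    y[n] = w + 1.5 * v              # (12) with f linear, increasing in v

profit = np.sum((y - I) * dt)       # quadrature of (13) with c = 0
print(profit > 0)                   # True for these toy numbers
```

In the paper the forms of f, g, c and G are instead estimated from the corporation's financial data, as described in the next section.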


Let y^* be the required output over the time period T^*. The problem of finding the optimal investment strategy (10), (11), (12), (14) under the condition

\int_0^{T^*} y(t)\,dt = y^*   (15)

defines the principle “Just in time” and forms the set of feasible investment strategies S_JIT. If S_JIT is not empty, then we can maximize the profit functional (13) on S_JIT and find an optimal solution that satisfies the principle “Just in time”. Therefore, we can state the optimal control problem: to maximize (13) under conditions (10), (11), (12), (14), T = T^*.

Introduce the value PC^* which restricts the production cost:

PC^* \ge \int_0^{T^*} \left( c(y(t), t) + I(t) \right) dt \Big/ \int_0^{T^*} y(t)\,dt.   (16)

The variational problem (10), (11), (12), (14), (16) defines the principle “Design to cost” and forms the set of possible investment strategies S_DC. If S_DC is not empty, then we can find the optimal solution (the best investment strategy) by introducing an additional criterion, e.g. maximization of the profit (13).

The realization of the principle “Just in time, design to cost” consists in finding the possible investment strategies (10), (11), (12), (14), (15), (16). Identification of the best strategy requires introducing an additional criterion on S = S_JIT ∩ S_DC. Note that the problem cannot be solved using only analytical methods; numerical methods are required, e.g. the parameterization method [22].

5 Practical Realization

In order to test the dynamic model of real investment, the corporation’s semiannual data set of revenue and real investment (in millions of Russian rubles) from consolidated financial statements (January 2007–June 2017) is analyzed. The Public Joint Stock Company United Aircraft Corporation (PJSC UAC) was established pursuant to a Decree of the Russian President dated February 20th, 2006 (its former name, before 2015, was OJSC UAC). Its priority activity areas are the development, production, sales, operation support, warranty and servicing, modernization, repair and disposal of civil and military aircraft [31].

Correlation analysis is conducted to observe the relationship between investment and sales: ρ(τ ) = corr(y(t), I(t − τ )), τ = 0, 1, 2, . . ..


The most significant correlation coefficients are determined by the t-test and lie in the closed interval τ ∈ [2; 5]. Thus, we can conclude that the current output is influenced by previous real investment with delays from 1 to 2.5 years. Based on the properties of the function G(τ), we can represent it in the following form:

G(\tau) = \exp\left( a\tau^2 + b\tau + c \right).

The parameter estimates are obtained by the method of least squares: â = −0.46381, b̂ = 4.70459, ĉ = −9.26659, α̂ = −0.84039. We can also note that it is impossible to maintain the production level in the absence of systematic real investment; the accumulated impact of the other factors on the production volume is insufficient.

Let the planning period be T = 3 years. The investment budget limit B is equal to 25902.5 million Russian rubles (the maximal real investment according to the biannual data). The total profit of the corporation over the last 3 years (2015–2017) according to the data set is 989012.5 million rubles. Let the enterprise’s goal be to make a profit in the amount of 1,000,000 million rubles. The set of potential solutions is denoted by S_JIT. We can check whether S_JIT is non-empty by solving the optimal control problem

\Pi(3) = \int_0^3 \left( f(v(t), w(t)) - I(t) \right) dt \to \max_{S_{JIT}}.
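The estimated lag kernel can be inspected directly: with a < 0 the exponent is a downward parabola, so G has a unique maximum at τ* = −b/(2a) and vanishes as τ grows, matching the requirements stated earlier. The sketch below only evaluates the reported least-squares estimates; it performs no fitting.

```python
import numpy as np

# The lag kernel G(tau) = exp(a tau^2 + b tau + c) with the least-squares
# estimates reported in the paper. Since a < 0, G has a unique maximum at
# tau* = -b / (2a) and G(tau) -> 0 as tau -> +infinity.
a, b, c = -0.46381, 4.70459, -9.26659

def G(tau):
    return np.exp(a * tau ** 2 + b * tau + c)

tau_star = -b / (2 * a)     # argmax of the exponent (and hence of G)
print(round(tau_star, 3))   # 5.072
```

With a half-year time step, the peak lag of about 5.07 half-years corresponds to roughly 2.5 years, near the upper end of the significant-lag window τ ∈ [2; 5] found by the t-test.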

The optimal control problem can be solved using the maximum principle. Introduce the functions

\bar{G}(\tau) = \begin{cases} G(\tau), & \tau_1 \le \tau \le \tau_2, \\ 0, & \text{otherwise}; \end{cases}

\bar{\delta}(s - t + 1) = \begin{cases} 0, & 0 \le t < 1, \\ \delta(s - t + 1), & t \ge 1, \end{cases}

where δ(t) is the Dirac delta function. Then the Hamiltonian function is represented as

H(s, v, w, I, \psi_1, \psi_2, \psi_3) = \psi_1(s)\bar{G}(0)I + \psi_3(s)\left( f(v(s), w(s)) - I \right) + \int_s^3 \left( \psi_1(t)\frac{\partial(\bar{G}(t - s)I)}{\partial t} + \psi_2(t)\frac{\partial(\bar{\delta}(s - t + 1) f(v(s), w(s)))}{\partial t} \right) dt.

Note that the Hamiltonian function is linear with respect to I. This means that the maximum can be attained only if I takes the extreme values B or 0. The partial derivative of the Hamiltonian function with respect to I is

\frac{\partial H(s, v, w, I, \psi_1, \psi_2, \psi_3)}{\partial I} = \psi_1(s)\bar{G}(0) - \psi_3(s) + \int_s^3 \psi_1(t)\frac{\partial \bar{G}(t - s)}{\partial t}\,dt.


The optimal distribution of real investment has the following form:

I^*(s) = \begin{cases} B, & \partial H(s, v, w, I, \psi_1, \psi_2, \psi_3)/\partial I > 0, \\ 0, & \partial H(s, v, w, I, \psi_1, \psi_2, \psi_3)/\partial I < 0, \\ \bar{I}, \; 0 \le \bar{I} \le B, & \partial H(s, v, w, I, \psi_1, \psi_2, \psi_3)/\partial I = 0. \end{cases}

The partial derivative ∂H/∂I does not depend on the state variables and is determined only by the adjoint variables. Consider the Cauchy problem:

\frac{d\psi_1}{ds} = -\alpha_2 \left( \psi_3(s) + \frac{d\psi_2(s + 1)}{ds} \right);

\frac{d\psi_2}{ds} = -\alpha_1 \left( \psi_3(s) + \frac{d\psi_2(s + 1)}{ds} \right);

\frac{d\psi_3}{ds} = 0, \quad \psi_1(3) = \psi_2(3) = 0, \quad \psi_3(3) = 1.

To solve the Cauchy problem, a numerical method based on the second-order Runge–Kutta method is used. The solutions are found for different integration steps h. In each case the solution has the same structure:

I(t) = \begin{cases} B, & 0 \le t < t^*, \\ 0, & t^* \le t \le 3, \end{cases}

where t^* is the switch point of the control. Table 1 shows the obtained values of the total profit and the switch points for the different integration steps.

Table 1. Experimental results for the different integration steps h

h        Π(T), million rubles   t^*, years
0.01000  1.13763 × 10^6         1.41
0.00500  1.13649 × 10^6         1.413
0.00250  1.13590 × 10^6         1.4125
0.00125  1.13561 × 10^6         1.4125

As shown in the table, there is an investment strategy that yields a total profit of 1.135 × 10^6 million rubles. The obtained value is greater than the planned one. Hence the set S_JIT is not empty, and we can apply a strategy by which the planned value of the profit can be achieved. In particular, by applying the strategy

I(t) = \begin{cases} B, & 0 \le t < 1.41, \\ 0, & 1.41 \le t \le 3, \end{cases}

the enterprise can make a profit in the amount of 1.135 × 10^6 million rubles.
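The bang-bang structure of the optimal policy (invest at the cap B up to a switch point t*, then stop) can also be located by a direct grid search when the adjoint system is unavailable. The dynamics below are a simplified hypothetical stand-in with a one-period response lag, not the corporation's fitted model; only the budget cap B is taken from the paper.

```python
import numpy as np

# Grid search for the switch point t* of a bang-bang investment policy
# on a toy first-order response model (hypothetical; not the fitted
# model from the paper). B is the budget cap reported in the paper.
dt = 0.01
T = 3.0
B = 25902.5                        # million rubles
times = np.arange(0.0, T, dt)
lag = round(1.0 / dt)              # one-period response lag, in steps

def profit(t_switch):
    I = np.where(times < t_switch, B, 0.0)
    y = 0.0
    total = 0.0
    for n in range(len(times)):
        I_lagged = I[n - lag] if n >= lag else 0.0
        y += dt * (2.0 * I_lagged - y)   # toy lagged revenue response
        total += (y - I[n]) * dt         # running profit, as in (13)
    return total

switches = np.arange(0.0, T + 0.1, 0.1)
best = max(switches, key=profit)
print(0.0 < best < T)   # True: the optimum is an interior switch point
```

Investing too late leaves no time for the lagged revenue to arrive before the horizon, so the optimal switch point is interior, mirroring the structure of Table 1.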

6 Conclusion

In our paper a mathematical model of business management is offered. This model is based on the organizational principles “Just in time”, “Design to cost” and “Risks management” and describes these principles within an integral approach. When applying the model to the description of an enterprise, the introduced mathematical expressions must be transformed taking into account the chosen criteria, limits and level of detail. The model may be used in designing management decision support systems to control the activities of either the whole enterprise or its units. The model can be constructed using various methods: stochastic, parametric, heuristic, etc. Our approach allows a number of heterogeneous sub-models to be used in a problem of assessment and control based on the principles “Just in time”, “Design to cost”, “Risks management”. The application of the methodology is appropriate for large-scale enterprises. In particular, the proposed model is tested on the real data set of the Public Joint Stock Company United Aircraft Corporation (PJSC UAC). The problem of finding the optimal real-investment strategy is formulated, the parameter estimates are obtained, and the model is verified. The results are in accordance with the understanding of an optimal investment strategy. The novelty of the paper consists in the developed mathematical model and the integrated management approach.

Acknowledgements. This work was carried out in the framework of state task 2.1816.2017/PCH of the Ministry of Education and Science of the Russian Federation.

References

1. Aven, T.: Risk assessment and risk management: review of recent advances on their foundation. Eur. J. Oper. Res. 253(1), 1–13 (2016)
2. Aven, T.: Risk analysis validation and trust in risk management: a postscript. Saf. Sci. 99(part B), 255–256 (2017)
3. Baklashov, V.I., Kazanskaya, D.N., Skobelev, P.O., Shpilevoy, V.F., Shepilov, Y.Y.: Multi-agent system “Smart Factory” for strategic and operational management of machine-building production “Just in time” and “For a given price”. Izvestiya Samara Sci. Center Russ. Acad. Sci. 16, 1292–1295 (2014). 1(5). (in Russian)
4. Che-Ani, M.N., Kamaruddin, S., Azid, I.A.: Towards just-in-time (JIT) production system through enhancing part preparation process. In: Proceedings of IEEE International Conference on Industrial Engineering and Engineering Management, pp. 669–673. IEEE (2017)
5. Chursin, A.A., Davydov, V.A.: Economic and mathematical model of the impact of risks on the competitiveness of enterprises of the rocket and space industry. Econ. Manag. Eng. 5, 46–52 (2012). (in Russian)
6. Denkena, B., Horst, P., Schmidt, C., Behr, M., Krieglsteiner, J.: Estimation of production cost in an early design stage of CFRP lightweight structures. Proc. CIRP 62, 45–50 (2017)


7. Feduzi, A., Runde, J.: Uncovering unknown unknowns: towards a Baconian approach to management decision-making. Org. Behav. Hum. Decis. Process. 124, 268–283 (2014)
8. Flage, R., Aven, T., Baraldi, P., Zio, E.: Concerns, challenges and directions of development for the issue of representing uncertainty in risk assessment. Risk Anal. 34(7), 1196–1207 (2014)
9. Flage, R., Aven, T.: Emerging risk—conceptual definition and a relation to black swan types of events. Reliab. Eng. Syst. Saf. 144, 61–67 (2015)
10. Gabrel, V., Murat, C., Thiele, A.: Recent advances in robust optimization: an overview. Eur. J. Oper. Res. 235, 471–483 (2014)
11. Giannakis, M., Papadopoulos, T.: Supply chain sustainability: a risk management approach. Int. J. Prod. Econ. 171(4), 455–470 (2016)
12. Goerlandt, F., Khakzad, N., Reniers, G.: Validity and validation of safety-related quantitative risk analysis: a review. Saf. Sci. 99(part B), 127–139 (2017). https://doi.org/10.1016/j.ssci.2016.08.023
13. Guskova, T.N., Spiridonova, E.E.: Static methodology and practical issues of risk management. Bull. Volga State Univ. Serv. Series: Econ. 1(47), 87–93 (2017). (in Russian)
14. Hansson, S.O., Aven, T.: Is risk analysis scientific? Risk Anal. 34(7), 1173–1183 (2014)
15. Heckmann, I., Comes, T., Nickel, S.: A critical review on supply chain risk—definition, measure and modeling. Omega 52, 119–132 (2015)
16. Jardini, B., Kyal, M.E., Amri, M.: The management of the supply chain by the JIT system (Just in Time) and the EDI technology (Electronic Data Interchange). In: Proceedings of the 3rd IEEE International Conference on Logistics Operations Management, Article ID 7731712 (2016)
17. Khan, F., Rathnayaka, S., Ahmed, S.: Methods and models in process safety and risk management: past, present and future. Process Saf. Environ. Prot. 98, 116–147 (2015)
18. Klochkov, V.V., Vdovenkov, V.A.: The problem of ensuring the production of aviation equipment “Just in time” and the concept of “fast-reacting production”. Izvestiya Samara Sci. Center Russ. Acad. Sci. 16, 1418–1425 (2014). 1(5). (in Russian)
19. Li, X., Guo, S., Liu, Y., Du, B., Wang, L.: A production planning model for make-to-order foundry flow shop with capacity constraint. Math. Probl. Eng. (2017). https://doi.org/10.1155/2017/6315613
20. Liu, L., Wang, J.J., Liu, F., Liu, M.: Single machine-by-product planning and resource allocation scheduling problem with learning and general positional effects. J. Manuf. Syst. 43, 1–14 (2017)
21. Lundberg, J., Johansson, B.J.E.: Systemic resilience model. Reliab. Eng. Syst. Saf. 141, 22–32 (2015)
22. Lutoshkin, I.V.: The parameterization method for optimizing the systems which have integro-differential equations. Bull. Irkutsk State Univ. Series “Mathematics” 4(1), 44–56 (2011). (in Russian)
23. Malek, R., Baxter, B., Hsiao, C.: A decision-based perspective on assessing system robustness. Proc. Comput. Sci. 44, 619–629 (2015)
24. Pasman, H., Reniers, G.: Past, present and future of Quantitative Risk Assessment (QRA) and the incentive it obtained from Land-Use Planning (LUP). J. Loss Prev. Process Ind. 28, 2–9 (2014)

The Mathematical Model for Describing the Principles

695

25. Petrenya, Y.K., Glukhov, V.V., Shilin, P.S.: The concept of “designing for competition” as the basis for the formation of an innovative enterprise policy. Econ. Sci. 10(1), 155–163 (2017). Scientific and technical statements of the St. Petersburg State Polytechnic University. (in Russian) 26. Sahebjamnia, N., Torabi, S.A., Mansouri, S.A.: Innovative applications of O.R. integrated business continuity and disaster recovery planning: towards organizational resilience. Eur. J. Oper. Res. 242, 261–273 (2015) 27. Spiegelhalter, D.J., Riesch, H.: Don’t know, can’t know: embracing deeper uncertainties when analysing risks. Philos. Trans. Roy. Soc. A 369, 4730–4750 (2014) 28. SRA: Glossary society for risk analysis (2015). www.sra.com/resources. Accessed 14 Aug 2018 29. Turnbull, P., Oliver, N., Wilkinson, B.: Buyer-supplier relations in the UK automotive industry: strategic implications of the Japanese manufacturing model. Strateg. Manag. J. 13, 159–168 (1992) 30. Zahedi, R., Yusriski, R.: Stepwise optimization for model of integrated batch production and maintenance scheduling for single item processed on flow shop with two machines in JIT environment. Proc. Comput. Sci. 116, 408–420 (2017) 31. http://www.uacrussia.ru/en/corporation/. Accessed 01 June 2018 32. Wang D., Chen Y., Chen D.: Efficiency optimization and simulation to manufacturing and service systems based on manufacturing technology Just-In-Time. Pers. Ubiquit. Comput. 22, 1–13 (2018)

Algebraic Bayesian Networks: The Use of Parallel Computing While Maintaining Various Degrees of Consistency

Nikita A. Kharitonov, Anatoly G. Maximov, and Alexander L. Tulupyev

St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS), St. Petersburg, Russia
[email protected]

Abstract. This paper presents approaches to the parallelization of algorithms for maintaining external and internal consistency in algebraic Bayesian networks, one of the representatives of probabilistic graphical models. The algorithms modified on the basis of these approaches are described and presented in the form of schemes.

Keywords: Algebraic Bayesian networks · Probabilistic graphical models · Consistency · External consistency · Internal consistency · Parallel computing

1

Introduction

When making decisions, it is often necessary to operate with incomplete, imprecise, non-numeric, and otherwise imperfect data or knowledge; in other words, data (or knowledge) with uncertainty [13,14]. This occurs, for example, when working with incomplete data or data obtained on the basis of expert assessments. Among the models for working with data with uncertainty are probabilistic graphical models [2,7,12]. In addition, the question of optimizing the response time often arises when making decisions [1,3,4]. This paper addresses the issue of optimizing work with algebraic Bayesian networks. Algebraic Bayesian networks [16] are a logical-probabilistic graphical model of knowledge pattern bases with uncertainty. A knowledge pattern in this model is represented as a conjunct ideal. When working with a newly constructed or modified network, the problem of checking the network for consistency naturally arises. An example of an algebraic Bayesian network and its knowledge patterns is shown in Fig. 1. There are four types of consistency [16]:

This work was partially supported by the RFBR according to the research project no. 18-01-00626 as well as by RF Governmental Assignment no. 0073-2018-0001 to SPIIRAS.
© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 696–704, 2019. https://doi.org/10.1007/978-3-030-12072-6_56


Fig. 1. Example of algebraic Bayesian network and knowledge patterns in it

– Local—consistency of each separately taken knowledge pattern;
– External—local consistency plus the coincidence of the estimates of conjuncts at the intersections of knowledge patterns;
– Internal—local consistency plus the ability to choose scalar estimates over the entire network such that the resulting algebraic Bayesian network is consistent;
– Global—the ability to immerse the network into a single comprehensive consistent knowledge pattern.

The purpose of this paper is to provide a theoretical description of parallel algorithms for maintaining consistency, namely external and internal consistency, as the kinds most used when working with algebraic Bayesian networks. Sequential algorithms for these purposes have been described in [9,16].

2

Related Works

The use of parallel algorithms in algebraic Bayesian networks has previously been addressed only in [16]. For Bayesian belief networks [6,8], a model related to algebraic Bayesian networks, the problems of parallel computing were considered in works such as [5,17,18]; however, these papers did not address maintaining consistency, since no such concept exists for Bayesian belief networks. Issues of parallel computing in general were considered widely in the works of Kulagin [10,11].

3

External Consistency

The first task to be solved on the way to this goal is to study approaches to the use of parallel computing in the algorithms for maintaining external consistency.


For algebraic Bayesian networks with scalar estimates, external consistency implies internal and global consistency. The algorithm for maintaining external consistency was described in [9,16] and is shown in Fig. 2.

Fig. 2. Algorithm for maintaining external consistency without using parallel computing

The main stages of this algorithm are:

1. Sequentially adding knowledge patterns to the stack, checking their consistency and removing them from the network;
2. Sequentially adding items from the stack back to the network, changing their estimates and checking their consistency.

Both of these processes can be partially executed concurrently. Since only knowledge patterns located "on the edge" of the algebraic Bayesian network can be removed, it is impossible to take the next knowledge pattern without deleting the previous one. It is therefore proposed first to remove a knowledge pattern from the network, and only then to maintain its consistency and add it to the stack in a new process. In this case, the order of adding elements to the stack is important, so it must occur sequentially. When recreating the algebraic Bayesian network from the stack, a similar situation occurs: knowledge patterns must be removed from the stack and added to the algebraic Bayesian network sequentially. However, maintaining their consistency is again possible in parallel. Figure 3 presents an algorithm for maintaining external consistency using parallel computations.


Fig. 3. Algorithm for maintaining external consistency using parallel computations
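The two-phase procedure above can be sketched with a thread pool. The sketch below is illustrative only: a knowledge pattern is modeled as a plain dict of estimates, and `reconcile` is a hypothetical stand-in for the actual per-pattern consistency-maintenance operation.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical placeholder for maintaining the consistency of one pattern.
def reconcile(pattern):
    total = sum(pattern.values())
    return {k: v / total for k, v in pattern.items()}

def maintain_external_consistency(network, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Phase 1: sequential removal (only an "edge" pattern may be taken),
        # parallel consistency maintenance, order-preserving stack.
        futures = []
        while network:
            futures.append(pool.submit(reconcile, network.pop()))
        stack = [f.result() for f in futures]
        # Phase 2: sequential re-insertion into the network, with the
        # consistency of each re-added pattern again maintained in parallel.
        futures = []
        while stack:
            pattern = stack.pop()
            network.append(pattern)              # must stay sequential
            futures.append(pool.submit(reconcile, pattern))
        return [f.result() for f in futures]

result = maintain_external_consistency([{"x1": 2.0, "x2": 2.0},
                                        {"x2": 1.0, "x3": 3.0}])
```

Note that only the removal and insertion steps are serialized; the reconciliation work itself is free to overlap across workers.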

4

Internal Consistency

The second task to be solved on the way to this goal is to study approaches to the use of parallel computing in the algorithm for maintaining internal consistency. Internal consistency is one of the main types of consistency of algebraic Bayesian networks. For networks without cycles, global consistency follows directly from internal consistency. Moreover, any algebraic Bayesian network can either be brought to an internally consistent form or proven to be inconsistent. The algorithm for maintaining internal consistency was considered in [9,16]. Figure 4 shows the operation of this algorithm. According to the scheme, the algorithm can be divided into two main stages:

1. Adding conditions from the knowledge patterns to a linear programming problem;
2. Solving the linear programming problem for the maximum and minimum of each variable, with a corresponding change of estimates.

Each of the stages can be performed in parallel, but the moment of transition to solving the linear programming problem is critical: starting to solve the problem before all conditions from the knowledge patterns have been added may in some cases yield an incorrect result. Let us consider possible ways of parallelizing each of the processes.

4.1

First Process

Fig. 4. Algorithm of maintaining internal consistency without the use of parallel computing

In the current version of the algorithm for maintaining internal consistency, the process of adding conditions from the knowledge patterns to the linear programming problem occurs sequentially. However, since the solution of a linear programming problem does not depend on the order of its conditions, this process can be parallelized. At the same time, access to the linear programming problem is a critical resource: two processes must not be allowed to write two different conditions to the same position at the same time. The number of concurrently executed processes in this case corresponds to the number of knowledge patterns in the network. Taking the comments above into account, the parallel addition of conditions is presented schematically in Fig. 5.

4.2

Second Process

In the current version of the algorithm, the created linear programming problem is also solved sequentially with respect to each of the variables. However, the problem itself does not change during the solution, so it can be solved in parallel. Each process solves the problem for the maximum or minimum of one specific variable of the linear programming problem. Since each variable of the linear programming problem corresponds to a particular conjunct of the algebraic Bayesian network, and solving the problem for the maximum and the minimum updates the upper and lower bounds respectively, there is no critical resource that two processes could try to change at the same time.


Fig. 5. Parallel adding of conditions from knowledge pattern to a linear programming problem
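The lock-protected scheme of Fig. 5 can be sketched as follows; the pattern contents and the textual condition format are hypothetical placeholders, not the actual consistency conditions of a real network.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Each knowledge pattern contributes its conditions to the shared linear
# programming problem in its own worker; a lock guards the single critical
# resource (the shared constraint list), so no two workers write at once.
lp_constraints = []
lp_lock = threading.Lock()

def add_conditions(pattern_id, conditions):
    prepared = [(pattern_id, c) for c in conditions]  # done in parallel
    with lp_lock:                                     # critical section
        lp_constraints.extend(prepared)

# Hypothetical patterns with their (hypothetical) consistency conditions.
patterns = {0: ["p(x1) >= 0"], 1: ["p(x2) >= 0", "p(x1 x2) <= p(x1)"]}
with ThreadPoolExecutor(max_workers=2) as pool:
    for pid, conds in patterns.items():
        pool.submit(add_conditions, pid, conds)
# on exiting the context manager all workers have finished
```

The order in which conditions land in the shared list may vary between runs, which is harmless because, as noted above, the LP solution does not depend on the order of its conditions.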

The number of processes executed in parallel in this case corresponds to twice the number of variables in the linear programming problem. Taking these comments into account, the parallel solution of the linear programming problem is presented schematically in Fig. 6.

Fig. 6. Parallel solution of the linear programming problem
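The scheme of Fig. 6 can be sketched as follows. The "problem" here is a toy LP solved in closed form (probabilities x_i with sum(x) = 1 and 0 <= x_i <= u_i), a hypothetical stand-in for the real consistency conditions; what the sketch illustrates is that the shared problem data is never modified, so each worker writes only to its own result slot.

```python
from concurrent.futures import ThreadPoolExecutor

u = [0.9, 0.5, 0.4]                 # hypothetical upper estimates
bounds = [None] * len(u)            # one result slot per variable

def tighten(k):
    # Solve the toy LP for max x_k and min x_k in closed form.
    hi = min(u[k], 1.0)                          # maximum of x_k
    lo = max(0.0, 1.0 - (sum(u) - u[k]))         # minimum of x_k
    bounds[k] = (lo, hi)                         # no shared critical resource

with ThreadPoolExecutor() as pool:
    for k in range(len(u)):
        pool.submit(tighten, k)
```

Here one worker handles both objectives of a variable; splitting the maximum and minimum into separate workers, as in the text, doubles the process count without introducing any shared writes.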

4.3

Algorithm for Maintaining Internal Consistency Using Parallel Computing

The algorithm for maintaining the internal consistency of algebraic Bayesian networks using parallel computations is presented in Fig. 7. The main steps of the algorithm are:

– An empty linear programming problem is created;
– Knowledge patterns are sequentially taken from the algebraic Bayesian network;


Fig. 7. Algorithm for maintaining internal consistency using parallel computing

– For each knowledge pattern, the consistency conditions are added to the linear programming problem in a new process;
– The completion of all processes is awaited;
– The variables of the constructed linear programming problem are taken sequentially;
– In a new process, the linear programming problem is solved for the maximum of the variable and the estimates in the network are changed accordingly;
– In a new process, the linear programming problem is solved for the minimum of the variable and the estimates in the network are changed accordingly;
– The completion of all processes is awaited; if at least one of them returns false, the algebraic Bayesian network cannot be made internally consistent, otherwise the algebraic Bayesian network has been reduced to an internally consistent form.

5

Conclusion

The paper presents algorithms for maintaining external and internal consistency using parallel computations. Further work in this direction includes the implementation of the described algorithms and numerical experiments measuring the difference in running time with and without parallel computing. The result of this and further research is the optimization of work with algebraic Bayesian networks, which are planned to be used, for example, for the analysis of critical documents in the theory of social engineering attacks [15].


References 1. Bloom, F: Optimizing decision making. In: Opportunities in Neuroscience for Future Army Applications, pp. 36–44. The National Academies Press, Washington, DC (2009) 2. Das, M., Ghosh, S.K.: FB-STEP: a fuzzy Bayesian network based data-driven framework for spatio-temporal prediction of climatological time series data. Expert Syst. Appl. 117, 211–227 (2019). https://doi.org/10.1016/j.eswa.2018.08.057 3. Falzer, P.R., Garman, D.M.: Optimizing clozapine through clinical decision making. Acta Psychiatr. Scand. 126(1), 47–58 (2012). https://doi.org/10.1111/j.16000447.2012.01863.x 4. Fehlings, M.G., Noonan, V.K., Atkins, D., Burns, A.S., Cheng, C.L., Singh, A., Dvorak, M.F.: Optimizing clinical decision making in acute traumatic spinal cord injury. J. Neurotrauma 34(20), 2841–2842 (2017). https://doi.org/10.1089/neu. 2016.4926 5. Guzm´ an, E., V´ azquez, M., Del Valle, D., P´erez-Rodr´ıguez, P.: Artificial neuronal networks: a Bayesian approach using parallel computing. Rev. Colomb. Estad. 41(2), 173–189 (2018). https://doi.org/10.15446/rce.v41n2.55250 6. Gan, H.X., Zhang, Y., Song, Q.: Bayesian belief network for positive unlabeled learning with uncertainty. Pattern. Recogn. Lett. 90, 28–35 (2017). https://doi. org/10.1016/j.patrec.2017.03.007 7. Hosseini, S., Sarder, M.D.: Development of a Bayesian network model for optimal site selection of electric vehicle charging station. Int. J. Electr. Power 105, 110–122 (2019). https://doi.org/10.1016/j.ijepes.2018.08.011 8. Ibrahimovic, S., Turulja, L., Bajgoric, N.: Bayesian belief networks in IT investment decision making. In: Maximizing Information System Availability Through Bayesian Belief Network Approaches: Emerging Research and Opportunities, pp. 75–107 (2017). https://doi.org/10.4018/978-1-5225-2268-3.ch004 9. Kharitonov, N.A., Zolotin, A.A., Tulupyev, A.L.: Software implementation of algebraic Bayesian networks consistency algorithms. 
In: 2017 XX IEEE International Conference on Soft Computing and Measurements (SCM), Saint-Petersburg, Russia, pp. 8–10 (2017) 10. Kulagin, V.: Design of control systems for parallel computing structures based on net models. In: 2016 International Siberian Conference on Control and Communications (SIBCON), Moscow, Russia, pp. 1–4 (2016). https://doi.org/10.1109/ SIBCON.2016.7491749 11. Kulagin, V.P.: Problems of parallel computing. Prospects Sci. Educ. 1(19), 7–11 (2016). (in Russian) 12. Li, J., Song, G., Semakula, H.M., Zhang, S.: Climatic burden of eating at home against away-from-home: a novel Bayesian belief network model for the mechanism of eating-out in urban China. Sci. Total Environ. 650, 224–232 (2019). https:// doi.org/10.1016/j.scitotenv.2018.09.015 13. Quintanilha, A.: Knowledge and dialogue to deal with uncertainty. Free Radical Bio. Med. 106, S4–S4 (2018). https://doi.org/10.1016/j.frb.2018.04.0551 14. Sreelekha, S.: NeuroSymbolic integration with uncertainty. Ann. Math. Artif. Intel. 106(3–4), 201–220 (2018). https://doi.org/10.1007/s10472-018-9605-y 15. Suleimanov, A., Abramov, M., Tulupyev, A.: Modelling of the social engineering attacks based on social graph of employees communications analysis. In: Proceedings—2018 IEEE Industrial Cyber-Physical Systems, ICPS 2018, pp. 801– 805. IEEE (2018). https://doi.org/10.1109/ICPHYS.2018.8390809


16. Tulupyev, A.L.: Algebraic Bayesian networks: a probabilistic-logic graphical model of knowledge patterns bases with uncertainty. Doctor of science dissertation. St. Petersburg State University (2009). (in Russian) 17. Vasimuddin, M., Chockalingam, S.P., Aluru, S.: A parallel algorithm for Bayesian network inference using arithmetic circuits. In: Proceedings—2018 IEEE 32nd International Parallel and Distributed Processing Symposium, IPDPS 2018, pp. 34–43. IEEE (2018). https://doi.org/10.1109/IPDPS.2018.00014 18. Zhang, M.M., Lam, H., Lin, L.: Robust and parallel Bayesian model selection. Comput. Stat. Data Anal. 127, 229–247 (2018). https://doi.org/10.1016/j.csda. 2018.05.016

Mathematical Modeling and Calibration Procedure of Combined Multiport Correlator

Nickita Semezhev, Alexey L’vov, Adel Askarova, Sergey Ivzhenko, Natalia Vagarina, and Elena Umnova

Yuri Gagarin State Technical University of Saratov, Saratov, Russian Federation
{semezhevn,v-n-s}@yandex.ru, {alvova,aach,sarvizir,eg-umnova}@mail.ru

Abstract. Increasing amounts of information transmitted over the air require an expansion of the data transmission bandwidth and increase the requirements for transmitting and receiving devices. This problem can be solved with the help of software defined radio (SDR) systems. Special types of multiport devices are promising elements of SDR systems because of their low production costs and the possibility of using them in wideband microwave radio systems. The combined multiport correlator (CMPC) is one such device. This multiport device is a symbiosis of a multiport correlator and a multi-probe transmission line, which allows it to be calibrated without precisely known loads. This article presents a mathematical model of the combined multiport correlator that allows data transmission in a wide frequency band, which is very important for modern software defined radio systems. The high efficiency of the CMPC was confirmed with the help of numerical experiments. The computer simulation confirms the theoretical conclusions.

Keywords: Complex amplitude · Calibration procedure · Multi-port correlator · Software defined radio receiver · Maximum likelihood method · Calibration standards · Multiport · Mathematical model · Multi-probe transmission line correlator

1 Introduction

Continuous evolution of communication standards in today's communication technology sphere demands frequent changes in communication hardware and the supporting software. Researchers are looking for a generic architecture for communication transceiver units which can accommodate multiple communication standards, or changes in communication standards and protocols, without requiring multiple transceiver units or replacement of the existing hardware. Software defined radio (SDR) architecture promises to fulfill this requirement by virtue of its flexibility and adaptability to new communication standards and protocols through changes in the accompanying software running on its platform [1].

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 705–719, 2019. https://doi.org/10.1007/978-3-030-12072-6_57


Broadband RF signals are converted to base-band and fed to a high-speed analogue-to-digital converter (ADC) and a programmable digital filter or channel decoder to select the desired channel signal, so that the demodulation of RF signals with various bandwidths, transmission rates and modulation schemes is readily programmed. The hardware for this procedure requires down-conversion to base-band of the entire bandwidth for various mobile standards and different frequency bands. This base-band signal is digitized, and all the subsequent processing is implemented in software [2]. The most widely used architecture, employed in almost all modern receiver systems, is the super-heterodyne one. It offers superior sensitivity, selectivity and higher dynamic range, but requires tedious frequency planning and necessitates the use of multiple bulky and costly filter components at different stages in the receiver line-up, which put limitations on the frequency bands over which the receiver system can operate. Such an architecture is not well suited for SDR applications that require very broad coverage in the frequency band. The objective is to realize an application of SDR that provides a multi-channel, multi-mode wireless direct digital receiver [2]. Therefore, an alternative direct conversion receiver architecture based on six-port technology has been proposed [3, 4, 14]. The six-port technique has proved successful in a variety of applications, including radar systems [5]. The six-port based receiver (SPR) uses a passive, linear, six-port circuit, which can be designed to cover a very large bandwidth, an ideal property for SDR systems. The combination of SDR and six-port technology provides great flexibility in system configuration and a significant reduction in hardware cost, particularly at millimeter-wave frequencies.
Different types of six-port circuits have been designed with center frequencies varying from 2.4 to 28 GHz [4, 6], operating over large frequency bands. The attractiveness of the SPR is its capability to decode a complex (vector) communication signal by measuring scalar output responses from power detectors. It is well known that the problem of parameter estimation of a received RF signal using the multi-port (usually six-port) technique reduces to the solution of the following canonical set of equations [4, 5, 7, 14]:

P_i = |A_i a + B_i b|^2,   i = 1, 2, …, N,   (1)

where a and b are the complex amplitudes of the communication and local oscillator (LO) signals to be estimated, respectively; A_i and B_i are the calibration constants of the multi-port, which are assumed to be known; P_i is the power response of the i-th measuring port; N is the number of measuring ports (N ≥ 4). The conventional procedure put forward by Engen and Hoer [7] consists in eliminating the unknown variable b by forming the ratios

p_i = P_i / P_r = |A_i R + B_i|^2 / |A_r R + B_r|^2,   i = 2, …, N,   (2)

where index r designates the reference measuring port, and R = b/a is the complex ratio to be estimated. As a result, the unknown R is derived from the intersection of three circles, whose radii and centers depend on the power ratios p_i as well as on the constants A_i and B_i of the SPR [4, 7]. In practice, the circles do not intersect in one point due to measurement errors, and it has been proposed to determine the geometrical centre of their intersection by means of least squares [8]. However, the calculation of power ratios at early stages of data processing cannot be considered well grounded. This procedure reduces the number of unknowns but, on the other hand, makes the distribution of errors in the power ratios p_i (induced by the initial measuring errors ei) very complicated and far from normal. Therefore, it is impossible to use any optimal solution procedure for the set (2) that provides an efficient estimator of R, or to carry out a theoretical analysis of the estimation uncertainties. That is why the achieved estimation accuracy is insufficient, and satisfactory demodulation results have been reported only for QPSK and QAM16 signals [2–4]. Besides, a suitable selection of the six-port junction structure (i.e. the appropriate phase and amplitude relationships between measuring ports) is required for all possible communication frequencies from the operating range in order to provide a stable solution of Eq. (2). Thus, the SPR architecture comprises complicated hybrid circuits, directional couplers, power dividers, etc.; hence, its cost is rather high [2–8]. This paper describes the analysis and mathematical modeling of a new calibration method for a multi-port based on the combined multi-port correlator (CMPC), consisting of a multi-probe transmission line (MPTL) and an arbitrary multi-port junction (MPC), which enables accurate calibration without precisely known reflection or transmission standards. The block-diagram of the CMPC is shown in Fig. 1.

2 Mathematical Model of MPTL

The measurement process with the MPTL consists in analyzing the distribution of the electromagnetic field inside the line [9, 10], which depends on the parameters of the attached load (the modulus and phase of its complex reflection coefficient (CRC)) and the amplitude of the standing wave in the line. The MPTL consists of a synthesizer of microwave signals (G) connected through a transition device to the microwave measuring path of the MPTL. There are N measuring probes arranged along the central longitudinal axis of the line. The output of the local oscillator is connected to the second flange of the measuring path. The probe detector responses are transferred via the data acquisition board (DAB) to the personal computer (PC), which controls the synthesizer frequency. The parameters of the standing wave occurring in the path of the MPTL are uniquely related to the parameters of the connected generator and the load. The model assumes the following [9, 11]:

• the transmission line has no losses;
• the probes are located at precisely known distances from the measured load;
• the probes' own reflection coefficients are negligible, i.e. they do not disturb the field pattern inside the line;
• the probe detectors have ideal quadratic characteristics.


Fig. 1. Block-diagram of the combined multi-port correlator: G is the microwave generator; LO is the local oscillator; DPS is the digital phase shifter; MPC is the conventional multi-port correlator; MPTL is the multi-probe transmission line; DAB is the data acquisition board; PC is the personal computer.

To ensure the smallness of the probes' own reflection coefficients, it is necessary to use attenuators at their outputs, which strongly weaken the coupling of the sensors with the field inside the MPTL (less than −30 dB). This circumstance causes a small signal-to-noise ratio at the outputs of the detectors, which is a serious obstacle in the development of precision MPTL-based meters. However, at the same time the requirement of quadratic characteristics of the detectors is well satisfied because of the small amplitudes of their output voltages (about several microvolts). Under the named assumptions, the mathematical model of the MPTL can be represented by the following set of equations [9, 11]:

u_i = α_i |a|^2 {1 + q^2 + 2q [cos φ cos(4π d_i/λ) + sin φ sin(4π d_i/λ)]} + n_i,   i = 1, …, N,   (3)

where u_i is the output voltage of the i-th probe's detector; α_i is the i-th detector gain; q and φ are the unknown modulus and phase of the complex ratio R in the multiport, respectively; a is the unknown standing wave amplitude in the line tract; d_i is the distance from the flange of the multiport to the i-th probe; λ is the known wavelength in the MPTL tract; n_i is the output voltage measurement error of the i-th probe; N is the number of probes. For the sake of brevity, the following notations are usually introduced: κ = 2π/λ is the wave number and w_i = 4π d_i/λ = 2κ d_i is the phase shift between the incident and reflected waves at the i-th detector. Model (3) is fundamentally different from its analog [10]:

u_i = α_i |a|^2 [1 + q^2 + 2q (cos φ cos(4π d_i/λ) + sin φ sin(4π d_i/λ))].


It takes into account the measurement errors, which cannot be neglected when measurements of maximum accuracy are required. It is assumed that the errors n_i are mainly caused by the shot noise of the probe detectors and the thermal noise of the matching amplifiers on the DAB; therefore, with sufficient accuracy for practice, they can be regarded as independent normally distributed quantities with zero mathematical expectations and unknown variances σ_1^2, …, σ_N^2. During the measurement, the set (3) is to be solved for the unknown parameters q, φ and a. However, measurements can proceed only after all the other model parameters are known. A specific feature of the MPTL is that the distances d_i from the flange connecting the loads to the corresponding probe and the wavelength λ in the tract are assumed to be exactly known. This does not contradict reality, since these parameters can be measured before the experiment. Therefore, it only remains to determine the values of the detector transmission coefficients α_i. The procedure of estimating the coefficients α_i, known as the MPTL calibration, is usually accomplished by alternately connecting several loads from the set to the line. In this case the change of the phase difference of the waves a and b is achieved by varying the digital phase shifter state (Fig. 1). Then the model (3) takes the following form [11]:

u_ij = α_i a_j^2 {1 + q_j^2 + 2q_j [cos φ_j cos(2κ d_i) + sin φ_j sin(2κ d_i)]} + n_ij,   (4)

where i = 1, …, N, j = 1, …, M; the index j refers to the parameters q, φ and a of the corresponding DPS state out of M calibration states; the voltage u_ij is the signal taken from the i-th probe in the j-th DPS position; n_ij is the measuring error. After that, the unknown transmission coefficients of the probe detectors α_i are found from the solution of the set (4). Thus, models (3) and (4) are basic for the MPTL. The conventional variable substitution [11, 12]

x_1i = α_i,
x_2i = 2α_i cos(4π d_i/λ),   (i = 1, …, N),   (5)
x_3i = 2α_i sin(4π d_i/λ),

q_1 = a^2 (1 + q^2),
q_2 = a^2 q cos φ,
q_3 = a^2 q sin φ,

or

q_1j = a_j^2 (1 + q_j^2),
q_2j = a_j^2 q_j cos φ_j,   (j = 1, …, M),   (6)
q_3j = a_j^2 q_j sin φ_j,

transforms models (3) and (4) into linear sets of equations

u_i = q_1 x_1i + q_2 x_2i + q_3 x_3i + n_i,   (i = 1, …, N),   (7)

u_ij = q_1j x_1i + q_2j x_2i + q_3j x_3i + n_ij,   (i = 1, …, N; j = 1, …, M),   (8)

for the new unknowns q_1, q_2, q_3 and q_1j, q_2j, q_3j. The fundamental difference between the model (7) used for measurements and the model (8) used for calibration is that during the calibration process the transmission coefficients of the probe detectors α_i are assumed to be unknown. Therefore, in the calibration equations the values x_1i, x_2i, x_3i characterizing the measuring circuit are also unknown. Hence, the system (7) is linear, whereas the system (8) is nonlinear in its unknowns. For convenience, it is better to represent these systems in matrix form:

u = X q + n,   (9)

U = X Q + N,   (10)

where u = {u_1, u_2, …, u_N}^T is the vector of measurements; q = {q_1, q_2, q_3}^T is the state vector; n = {n_1, n_2, …, n_N}^T is the error vector; U = ||u_ij|| and N = ||n_ij|| are the N × M matrices composed of the output signals u_ij and the errors n_ij, respectively (at the MPTL calibration stage); Q = [q_1, …, q_M] is the matrix of model state vectors q_j = {q_1j, q_2j, q_3j}^T (j = 1, …, M) obtained in the j-th DPS position; X is the experiment design matrix of size N × 3, composed of the values x_1i, x_2i, x_3i, which are determined by the design of the MPTL and the frequency at which the measurements are taken [9, 10]. For the model (10), where the parameters x_1i, x_2i, x_3i are unknown, the following constraints can be found from (5) [11]:

x_1i − x_2i/(2 cos w_i) = 0,   x_1i − x_3i/(2 sin w_i) = 0,   w_i = 4π d_i/λ.   (11)

Thus, models (9) and (10) with constraints (11) are fundamental in developing an optimal strategy for improving the accuracy of a software defined radio (SDR) receiver based on the CMPC. In this formulation, the problem of measuring the unknown parameters q and φ consists in finding the optimal estimates of the state parameters q on the basis of the measurements u of the probe detector responses, with the subsequent calculation of the estimates of the components of the vector c from (6). In accordance with the methodology discussed in [13], there are three independent optimization problems that are integral parts of the overall strategy for optimal measurement of load parameters using the combined MPC:

• an optimal choice of the algorithm for estimating the state parameters q and the required parameters c based on the signals u;
• selection of the optimal mathematical model of the system generator–MPC–measuring load (assuming that the structure of the model (3) is already known and it is only necessary to find the optimal values of the unknown coefficients α_i, that is, to calibrate the MPTL);
• choice of the optimal arrangement of the measurements u (design of optimal standing wave analyzers based on the MPTL).
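The measurement chain (3)–(7) can be illustrated numerically: the sketch below (with hypothetical wavelength, probe layout, detector gains and load parameters) generates noise-free probe responses from model (3), solves the linear set (7) by least squares via its normal equations, and inverts the substitution (6) to recover q, φ and a.

```python
import math

# Hypothetical MPTL configuration and load.
lam = 0.03                                    # wavelength in the tract, m
N = 8
d = [i * lam / 7.3 for i in range(1, N + 1)]  # probe positions d_i
alpha = [0.8 + 0.05 * i for i in range(N)]    # detector gains alpha_i
a, q, phi = 1.5, 0.6, 0.9                     # true amplitude, modulus, phase

w = [4 * math.pi * di / lam for di in d]      # phase shifts w_i
u = [alpha[i] * a**2 * (1 + q**2 + 2 * q * (math.cos(phi) * math.cos(w[i])
                                            + math.sin(phi) * math.sin(w[i])))
     for i in range(N)]                       # noise-free responses, model (3)

# Design matrix (5) and least-squares solution of (7) via normal equations.
X = [[alpha[i], 2 * alpha[i] * math.cos(w[i]), 2 * alpha[i] * math.sin(w[i])]
     for i in range(N)]
XtX = [[sum(X[i][r] * X[i][c] for i in range(N)) for c in range(3)]
       for r in range(3)]
Xtu = [sum(X[i][r] * u[i] for i in range(N)) for r in range(3)]

def solve3(M, v):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    A = [row[:] + [vi] for row, vi in zip(M, v)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, 3):
            f = A[r][c] / A[c][c]
            for k in range(c, 4):
                A[r][k] -= f * A[c][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (A[r][3] - sum(A[r][k] * x[k] for k in range(r + 1, 3))) / A[r][r]
    return x

q1, q2, q3 = solve3(XtX, Xtu)

# Inverting (6): a^4 - q1*a^2 + (q2^2 + q3^2) = 0; for q < 1 the larger
# root of this quadratic is a^2.
disc = math.sqrt(q1**2 - 4 * (q2**2 + q3**2))
a2 = (q1 + disc) / 2
q_est = math.hypot(q2, q3) / a2
phi_est = math.atan2(q3, q2)
```

With noise added as in (3), the same least-squares step yields the estimates whose optimality the strategy above is concerned with.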


3 Mathematical Model of the MPC

It is assumed that the multiport itself is a passive linear device and that the signal amplitude detectors are located at the outputs of its measuring ports. The signal level from the generator is assumed to be low, which allows the characteristics of the detectors to be considered ideally quadratic. Therefore, the following mathematical model is valid for the signals taken from the MPC ports [12]:

u_i = |A_i a + B_i b|^2 + n_i,  i = 1, …, N,    (12)

where n_i is the measuring error at the ith port. Similarly to the mathematical models of the MPTL considered in the previous subsection, the model (12) differs from the conventional one, u_i = |A_i a + B_i b|^2 [2–8], in that it takes into account the stochastic nature of the measurements. When designing a precision SDR receiver, one cannot neglect the errors n_i, which are assumed to be independent normally distributed values with zero mathematical expectations and unknown variances σ_1^2, …, σ_N^2. The model (12) is the main one for developing the optimal algorithm for processing the signals u_i from the MPC ports. In the case of the CMPC, there is no need to find the parameters a and b, because this task is solved when the calibration of the MPTL is carried out. The calibration procedure for the multiport correlator consists in the preliminary determination of the MPC constants A_i and B_i, when the DPS is put in M states with unknown parameters a and b. In this case, the calibration equations have the following form:

u_ij = |A_i a_j + B_i b_j|^2 + n_ij,  i = 1, …, N,  j = 1, …, M,    (13)

where the index j corresponds to the jth DPS state, and M is the number of states used for calibration. The models (12) and (13) can be represented in a form analogous to (7), (8) by the following variable substitution [12]:

q_1 = |a|^2,  q_2 = |b|^2,  q_3 = |ab| cos φ,  q_4 = |ab| sin φ;
q_1j = |a_j|^2,  q_2j = |b_j|^2,  q_3j = |a_j b_j| cos φ_j,  q_4j = |a_j b_j| sin φ_j,  j = 1, …, M;    (14)

x_1i = |A_i|^2,  x_2i = |B_i|^2,  x_3i = 2|A_i B_i| cos ψ_i,  x_4i = −2|A_i B_i| sin ψ_i,  i = 1, …, N,    (15)

where the left set of (14) refers to (12) and the right set to (13); φ = arg(b/a) is the phase of the ratio R; ψ_i = arg(B_i/A_i) is the phase difference between A_i and B_i. Therefore, the sets can be written in the form:


N. Semezhev et al.

u_i = x_1i q_1 + x_2i q_2 + x_3i q_3 + x_4i q_4 + n_i,  i = 1, …, N,    (16)

u_ij = x_1i q_1j + x_2i q_2j + x_3i q_3j + x_4i q_4j + n_ij,  i = 1, …, N,  j = 1, …, M,    (17)

for the new unknowns q = (q1, q2, q3, q4)T and qj = (q1j, q2j, q3j, q4j)T, respectively. The principal difference between the model (16) used for measurements and the model (17) used for calibration is that in the calibration procedure the coefficients A_i and B_i are assumed to be unknown. Therefore, in the calibration equations, the values x_ij characterizing the meter are also unknown. Consequently, the set of Eq. (16) is linear with respect to q, while the set (17) is nonlinear with respect to its unknowns. Moreover, in the MPC models the number of state parameters exceeds the number of parameters to be estimated. Indeed, in Eq. (12) only three parameters are unknown, |a|, |b| and φ, but in the set (16) four state parameters q1, q2, q3, q4 are to be determined. Therefore, there must exist constraint equations, which are easily obtained from (14), (15):

q_1 q_2 = q_3^2 + q_4^2,    (18)

q_1j q_2j = q_3j^2 + q_4j^2,  j = 1, …, M.    (19)

Failing to take these constraints into account leads to statistically inefficient estimates of the measured parameters and to an increase in measurement errors. Thus, when passing to the state parameters from the MPC model (13), the mathematical model of the MPC should include the linear system (16) together with the quadratic constraint (18) in order to preserve the correctness of the reasoning. When considering the calibration equations, where the parameters x_ij characterizing the measuring circuit are also unknown, one needs to take into account Eqs. (17), (19) and the following constraints:

4 x_1i x_2i = x_3i^2 + x_4i^2,  i = 1, …, N.    (20)
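The substitution (14), (15) and the constraints (18)–(20) can be verified numerically: for any complex A_i, B_i, a, b, the quadratic response |A_i a + B_i b|^2 must equal the linear form of (16). A minimal noise-free sketch with arbitrary illustrative values (the sign convention on x_4i is the one under which the identity holds for ψ_i = arg(B_i/A_i)):

```python
import numpy as np

A, B = 0.8 - 0.3j, 0.5 + 0.6j          # port constants A_i, B_i (illustrative)
a, b = 1.2 + 0.4j, -0.7 + 0.9j         # wave amplitudes from LO and generator

phi = np.angle(b / a)                  # phase of the ratio R
psi = np.angle(B / A)                  # phase difference between A_i and B_i

# state parameters (14) and design coefficients (15)
q = np.array([abs(a)**2, abs(b)**2,
              abs(a * b) * np.cos(phi), abs(a * b) * np.sin(phi)])
x = np.array([abs(A)**2, abs(B)**2,
              2 * abs(A * B) * np.cos(psi), -2 * abs(A * B) * np.sin(psi)])

lhs = abs(A * a + B * b)**2            # quadratic detector model (12), noise-free
rhs = x @ q                            # linear model (16)
```

Both the identity lhs = rhs and the quadratic constraints (18), (20) hold exactly for these definitions.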

For convenience, it is better to consider the named equations in matrix form:

u = X q + n,  U = X Q + N,

where all the designations of vectors and matrices coincide with the corresponding notation for the MPTL, except that the state parameters q and qj now have dimension 4 instead of 3. In this case, the constraints (18)–(20) can be rewritten as follows:

0.5 qT G q = 0;  0.5 qjT G qj = 0,  j = 1, …, M;  xiT Gx xi = 0,  i = 1, …, N,

where the vectors qj constitute the matrix of state vectors Q = [q1, …, qM]; the vectors xi constitute the N × 4 experiment design matrix; and the matrices G and Gx determine the quadratic forms of the constraints:


X = \begin{bmatrix} x_{11} & x_{21} & x_{31} & x_{41} \\ x_{12} & x_{22} & x_{32} & x_{42} \\ \vdots & \vdots & \vdots & \vdots \\ x_{1N} & x_{2N} & x_{3N} & x_{4N} \end{bmatrix}, \quad
G = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & -2 & 0 \\ 0 & 0 & 0 & -2 \end{bmatrix}, \quad
G_x = \begin{bmatrix} 0 & 2 & 0 & 0 \\ 2 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix}.    (21)

In this case, the parameter to be evaluated is the value c = φ. The dependence between the vectors q and c is given by (16), where a and b are already determined from the mathematical model of the MPTL. Therefore, when developing a strategy for increasing the accuracy of an SDR receiver based on the CMPC, it is necessary to solve Eqs. (17), (19), (20) during the calibration of both parts of the CMPC (the MPTL and the MPC, Fig. 1). After that, the input signals can be measured using Eqs. (16) and (18).
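In the simplest unweighted case, the measurement step reduces to a linear least-squares fit of (16) followed by reading off the phase from (14). The paper's optimal estimator additionally enforces the quadratic constraint (18) and weights by the unknown error variances; this hedged sketch, built on a synthetic design matrix, omits both refinements:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                        # number of MPC measuring ports
X = rng.normal(size=(N, 4))                  # design matrix, assumed known from calibration
q_true = np.array([2.0, 0.5, 0.6, 0.8])      # satisfies (18): 2.0*0.5 = 0.6**2 + 0.8**2
u = X @ q_true + 1e-3 * rng.normal(size=N)   # noisy port responses, model (16)

q_hat, *_ = np.linalg.lstsq(X, u, rcond=None)
phi_hat = np.arctan2(q_hat[3], q_hat[2])     # phase of the ratio R, from (14)
```

With small measurement noise the unconstrained estimate already lands close to the true state vector; the constrained estimator improves on it statistically.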

4 Calibration Procedure

The CMPC calibration method is as follows. Using the calibration procedure proposed in [11], it is possible to calibrate the part of the CMPC probes that belong to the MPTL and, at the same time, to determine the estimates of the unknown parameters a_j^2 (the squared magnitude of the signal from the generator G) as well as the magnitudes ρ_j and phases φ_j of the ratios R_j corresponding to the various states of the DPS. After that, it is easy to determine the complex amplitudes a_j and b_j of the signals from the LO and G, respectively, from (14) and (15). After the estimation of the unknown complex ratios R_j and the relative values of the amplitudes a_j, an adaptive Bayesian approach is applied: the obtained maximum likelihood estimates of these parameters are substituted into (10) or (11), and the resulting set can then be solved by the same method for x_i. In order to make the number of equations in the sets (4) and (13) no less than the number of unknown variables, both the MPC and the MPTL should have more than three measuring ports. Therefore, the total number of ports in the CMPC should be no less than eight. A detailed description of the solution of the calibration Eqs. (10), (17)–(20) is given in [11–13], where it is shown that the following iteration procedure gives asymptotically efficient estimates of the intermediate parameters x_i corresponding to the ith port of the MPC:

x_i^{(m+1)} = x_i^{(m)} − 0.5 \frac{x_i^{(m)T} G_x x_i^{(m)}}{x_i^{(m)T} G_x \hat{Q} \hat{Q}^T G_x^T x_i^{(m)}} \hat{Q} \hat{Q}^T G_x^T x_i^{(m)},  i = 1, …, N,    (22)



where Q̂ = ||q̂_jl|| is the matrix of the parameters defined in (14), whose maximum likelihood estimates are obtained from the preceding calibration of the MPTL, and m is the number of the current iteration. Thus, the CMPC calibration procedure does not require reflection or transmission standards; moreover, the design of the multi-port proposed in [14] is very simple, as the statistical method for the solution of the set (10) does not set the specific limits on the calibration constants A_i and B_i that exist in multi-ports based on the conventional measurement procedures [4–8, 15] in order to ensure the stability and uniqueness of the solution of the set (10). The main advantage of the proposed CMPC architecture is that it can be calibrated without exactly known standards. The only parameters in the model (2) that are implied to be exactly known are the distances d_j and the measuring frequency (or wavelength λ). The unknown parameters in the set (2) are all a_i, a_j, ρ_j, and φ_j. Their total number is N + 3K, and the number of equations in (2) is NK. So, if the following relation is true:

NK ≥ N + 3K,    (23)

then the set (17) can be solved by the maximum likelihood method. The solution is based on the singular value decomposition of the signal matrix U consisting of the values u_ij (taken from the MPTL probes) and is described in [11]. Thus, in addition to the calibration of the MPTL (determination of the probe detector gains a_i), one can simultaneously estimate the parameters |a_j|, ρ_j, φ_j (hence, the complex amplitudes a_j, b_j for all considered states of the DPS). This is a very important feature of the MPTL, which enables cheap and high-precision measuring instruments to be made on its base. The CMPC calibration procedure may be carried out, for example, with the help of a single digital phase shifter. In order to satisfy the inequality (23), the number of different phase shifter positions M should be no less than four. However, it is strongly recommended to use greater values of M, which allows the calibration accuracy to be increased. Therefore, the digital phase shifter is successively set in M positions, and for each position the detector responses of the MPC ports and MPTL probes are measured and transmitted to the PC memory via the DAB. The process of calibration is as follows. First, the data from the weakly coupled probes is used for the calibration of the part corresponding to the MPTL; in parallel, all the parameters a_j, b_j are estimated. After that, the obtained estimates of these parameters are substituted into the set (10) describing the MPC part of the CMPC. As can be seen, Eq. (10) is symmetrical with respect to the parameters A_i, B_i and a_j, b_j. Therefore, this set can be solved for the unknown MPC parameters A_i, B_i using the procedure described in [11]. The measurement procedure with this CMPC does not differ from the measurement procedure of the conventional MPC and was discussed in [11, 12].
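Once the MPTL calibration has supplied the state vectors q_j, the MPC calibration equations (17) become linear in each port's coefficient vector x_i, so the data flow of the procedure can be sketched with a plain per-port least-squares fit on synthetic data. This is only an illustration under assumed values; the paper's estimator additionally enforces the constraint (20) through the iteration (22):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 4, 12                                     # MPC ports, phase-shifter states
A = np.array([1.0 + 0.5j, -0.8 + 1.2j, 0.6 - 1.0j, 1.3 + 0.2j])  # true A_i (illustrative)
B = np.array([0.9 - 0.7j, 1.1 + 0.4j, -0.5 + 0.8j, 0.7 + 1.0j])  # true B_i (illustrative)
a = rng.normal(size=M) + 1j * rng.normal(size=M) # wave amplitudes a_j, b_j per DPS state
b = rng.normal(size=M) + 1j * rng.normal(size=M)

# state vectors q_j from (14), assumed already known from the MPTL calibration
Q = np.stack([abs(a)**2, abs(b)**2,
              (np.conj(a) * b).real, (np.conj(a) * b).imag])     # shape (4, M)

# simulated port responses (13): u_ij = |A_i a_j + B_i b_j|^2 + n_ij
U = abs(np.outer(A, a) + np.outer(B, b))**2 + 1e-4 * rng.normal(size=(N, M))

# per-port linear least squares for x_i in (17): U[i, :] = Q.T @ x_i
X_hat, *_ = np.linalg.lstsq(Q.T, U.T, rcond=None)  # columns are the x_i, shape (4, N)

A_mag = np.sqrt(X_hat[0])                          # |A_i| recovered from (15)
B_mag = np.sqrt(X_hat[1])                          # |B_i|
psi_hat = np.arctan2(-X_hat[3], X_hat[2])          # psi_i = arg(B_i / A_i)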



5 Simulation Results

The comparative study of the accuracy of the proposed statistical CMPC calibration technique has been carried out using computer numerical simulation. The block diagram of the examined SDR receiver is shown in Fig. 2.

Fig. 2. Block diagram of the SDR transceiver based on the CMC: BPF is the bandpass filter; LNA is the low noise amplifier; LO is the local oscillator; ML is the matched load; CMC is the combined multi-port correlator; ADC is the analog-to-digital converter; DSP is the digital signal processor.

Two series of experiments were made, in which two calibration techniques were tested. The first technique was discussed in [15] for the six-port correlator (the procedure and six-port parameters were taken from that paper), and the second is the technique proposed above. The simulation consists in the following. At the first stage, the values of all parameters presented in the models (10) and (22) are set. Then the values of the output ports' and probes' responses u_nk and u_jk are calculated for different DPS states. The number of ports for both the MPC and the MPTL was N = 4. At the final stage, the estimates of the parameters A_i, B_i (implied to be unknown) are obtained by both calibration procedures using these simulated responses. In all tests, the calibration accuracy is characterized by the mean standard deviation (STD) error of the estimates of the parameters A_i, B_i over all ports. During the first series, the measuring errors n_ik were equal to zero (the ideal situation). In the second series, these errors were included and generated by the standard random number generator producing Gaussian zero-mean values. The variance of these numbers was chosen in accordance with the required signal-to-noise ratio. The operating frequency was 18.0 GHz. The power signal-to-noise ratio was about 10^4 for the MPC and about 10^3 for the MPTL. Typical results are shown in Figs. 3 and 4, which present the dependence of lg(STD) of the calibration errors of the MPC parameter magnitudes |A_n| and |B_n| on the number of phase shifter positions M. At every experimental point, 10,000 simulation tests were made, and the STD for all N ports was calculated. Figure 3 corresponds to calibration without the errors n, and Fig. 4 considers the case when additive errors exist at the ports' outputs.



As can be seen from the dependences, the conventional calibration technique [15] has systematic errors that cannot be eliminated by increasing the number of phase shifter positions. The systematic errors of the CMPC statistical calibration are significantly smaller; hence, the calibration accuracy grows drastically. This fact becomes even more evident in the presence of additive random noise, even though the magnitude of the signal from the generator and the LO was about 100 times higher than the noise voltage at the detector outputs.

Fig. 3. Dependence of mean error STD on the number of phase shifter positions M.

Fig. 4. Dependence of mean error STD on the number of phase shifter positions M in presence of additive noise of detector responses’ measurements.

In addition, a further series of simulation experiments was carried out to compare a conventional calibration procedure with the proposed one. In these experiments, signals with quadrature amplitude modulation (64-QAM) were used. In order to evaluate the performance of the proposed calibration procedure, 2500 random 64-QAM symbols were generated. The first 100 symbols are treated as a training sequence and are used to calibrate the CMPC system.



The simulation results are presented in Figs. 5 and 6. The pulse-shaped signal is modulated and frequency up-converted to the desired carrier frequency to simulate an actual RF modulated received signal. This modulated passband data is fed to the RF port of the CMPC system.

Fig. 5. Constellations of 64-QAM modulated transmitted and received symbols for SNR = 20 dB; conventional approach (Left), proposed approach (Right).

Fig. 6. Constellations of 64-QAM modulated transmitted and received symbols for SNR = 40 dB; conventional approach (Left), proposed approach (Right).

Figures 5 and 6 show the comparison of the constellation points of the transmitted and received signals when the CMPC is calibrated using the conventional approach at a carrier frequency of 18.0 GHz and when the CMPC system is calibrated according to the proposed approach. The results in Fig. 5 correspond to a signal-to-noise ratio (SNR) of 20 dB, and the results shown in Fig. 6 correspond to an SNR of 40 dB. As can be seen from the diagrams, the suggested calibration technique, based on the CMPC proposed in [14] and analyzed in this paper, improves the accuracy of the SDR receiver. For both experiments, the modulation error ratio (MER) [16] has been calculated using the obtained constellation diagrams. The offered calibration algorithm



allowed reducing the MER for the 64-QAM signals from 16.04 to 2.81% for the diagram in Fig. 5 (SNR = 20 dB) and from 7.93 to 1.6% for the diagram in Fig. 6 (SNR = 40 dB), respectively. The carrier frequency was 18.0 GHz.
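The MER figures above are quoted in percent, which suggests an RMS error-vector ratio between received and ideal constellation points; the exact definition used in [16] may differ. A hedged sketch of such a computation on synthetic 64-QAM data:

```python
import numpy as np

def mer_percent(tx, rx):
    """RMS error-vector power relative to the signal power, in percent."""
    tx, rx = np.asarray(tx), np.asarray(rx)
    return 100.0 * np.sqrt(np.sum(np.abs(rx - tx)**2) / np.sum(np.abs(tx)**2))

# illustrative data: 2500 ideal 64-QAM symbols plus additive Gaussian noise
rng = np.random.default_rng(0)
levels = np.arange(-7, 8, 2)                 # I/Q levels {-7, -5, ..., 7}
tx = rng.choice(levels, 2500) + 1j * rng.choice(levels, 2500)
rx = tx + 0.1 * (rng.normal(size=2500) + 1j * rng.normal(size=2500))
m = mer_percent(tx, rx)
```

A smaller percentage corresponds to a tighter constellation around the ideal points.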

6 Conclusion

Studies carried out analytically and using computer simulation showed that the application of the optimal processing method for the voltages from the measuring arms of the CMPC makes it possible to increase the accuracy of the estimation of the parameters |a|, |b| and φ regardless of the values of the MPC complex coefficients A_i and B_i characterizing the specific multiport receiver. Therefore, it is not necessary to impose the strict requirements on the design of a CMPC that exist in the traditional ways of designing an SDR receiver [2–8, 15]. In fact, the only requirement for the construction of the CMPC is that the experiment design matrix X have full rank, i.e., that the columns of this matrix not be linearly dependent. Consequently, together with increasing the accuracy of measurements, the proposed algorithm makes it possible to use simplified multiport designs, which leads to a reduction in their cost. A new calibration procedure is proposed and analyzed for the CMPC, which consists of the MPC itself augmented by the MPTL. The use of optimal statistical digital processing of the signals taken from its ports by the maximum likelihood method allows one to raise the calibration and measurement accuracy of the named meter considerably. Moreover, the calibration technique does not require precise calibration standards and can be carried out using an arbitrary phase shifter. The results of computer simulation confirmed the theoretical conclusions. Hence, the authors recommend the concept of the combined multi-port correlator, together with the suggested calibration technique, as promising for the design of broadband receivers in software defined radio systems.

References

1. Kennington, P.B.: RF and Baseband Techniques for Software Defined Radio. Artech House, Boston/London (2005)
2. Xu, X., Wu, K., Bosisio, R.G.: Encyclopedia of RF and Microwave Engineering. École Polytechnique de Montreal, Montreal (2005)
3. Li, J., Bosisio, R.G., Wu, K.: Computer and measurement simulation of a new digital receiver operating directly at millimeter-wave frequencies. IEEE Trans. Microw. Theory Tech. 43(12), 2766–2772 (1995)
4. Tatu, S.O., Moldovan, E., Wu, K., Bosisio, R.G.: A new direct millimeter-wave six-port receiver. IEEE Trans. Microw. Theory Tech. 49(12), 2517–2522 (2001)
5. Ghannouchi, F.M., Mohammadi, A.: The Six-Port Technique with Microwave and Wireless Applications. Artech House, London/Boston (2009)
6. Gagné, J.F., Gauthier, J., Bosisio, R.G.: High speed low cost architecture of direct conversion digital receiver. In: Conference Proceedings of the IEEE IMS Symposium, Phoenix, AZ, vol. 2, pp. 1093–1096 (2001)
7. Engen, G.F., Hoer, C.A.: Application of an arbitrary six-port junction to power measurement problems. IEEE Trans. Instrum. Meas. 21(5), 470–474 (1972)
8. Engen, G.F.: Least square solution for use in the six-port measurement technique. IEEE Trans. Microw. Theory Tech. 28(12), 1473–1480 (1980)
9. Katz, B.M., Meschanov, V.P., Shikova, L.V., Lvov, A.A., Shatalov, E.M.: Synthesis of a wideband multiprobe reflectometer. IEEE Trans. Microw. Theory Tech. 56(2), 507–514 (2008)
10. Caldecott, R.: The generalized multiprobe reflectometer and its application to automated transmission line measurements. IEEE Trans. Anten. Propag. AP-21(4), 550–554 (1973)
11. L'vov, A.A., Semenov, K.V.: A method of calibrating an automatic multiprobe measurement line. Meas. Tech. 4, 357–365 (1999)
12. Semezhev, N., L'vov, A.A., L'vov, P.A., Geranin, R.V., Solopekina, A.A.: A novel parameter estimation technique for software defined radio system based on broadband multiport receiver. In: Proceedings of the XI International Conference on SIBCON-2015, Omsk, pp. 320–324 (2015). https://doi.org/10.1109/sibcon.2015.7147132
13. L'vov, A.A., Meschanov, V.P., Svetlov, M.S.: Optimal estimation of microwave circuit parameters with automatic network analyzers. Radiotehnika 10, 240–244 (2016). (in Russian)
14. Xu, Y., Gauthier, J., Bosisio, R.G.: Six-port digital receivers: a new design approach. Microwave Opt. Technol. Lett. 25(5), 356–360 (2000)
15. Yakabe, T., Xiao, F., Iwamoto, K., Ghannouchi, F.M., Fujii, K., Yabe, H.: Six-port based wave-correlator with application to beam direction finding. IEEE Trans. Instrum. Meas. 50(2), 377–380 (2001)
16. Malarić, K., Suć, I., Bačić, I.: Measurement of DVB-S and DVB-S2 parameters. In: 2015 23rd International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia (2015)

Mathematical Models for the Analysis of Destabilization Processes of the Socio-Political Situation in the Country Using the Methods of Non-violent Resistance

Aleksey Bogomolov1,3, Alexander Rezchikov1,3, Vadim Kushnikov1,2,3, Vladimir Ivaschenko1, Elena Kushnikova1,2, and Vladimir Tverdokhlebov1

1 Institute of Precision Mechanics and Control, Russian Academy of Sciences, 24 Rabochaya Str., Saratov 410028, Russia
[email protected]
2 Yuri Gagarin State Technical University, 77 Politechnicheskaya Str., Saratov 410054, Russia
3 Saratov State University, 83 Astrakhanskaya Str., Saratov 410012, Russia

Abstract. The task of counteracting unfavorable combinations of events, deliberately organized with the aim of destabilizing the socio-political situation in the country by actions of non-violent resistance of the citizens, is considered. The destabilization plan is represented in the form of a logical tree, and adverse combinations of events in the form of its minimum cross sections. The probabilities of realization of the minimum cross sections are determined by solving systems of Kolmogorov-Chapman differential equations. Counteracting efforts to destabilize the socio-political environment consists in monitoring these probabilities and taking the necessary measures to reduce them.

Keywords: Dangerous combination of events · Color revolution · State security · Fault tree · Minimum cross-section · Logical probabilistic analysis · Expert system · Destabilization

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 720–728, 2019. https://doi.org/10.1007/978-3-030-12072-6_58

1 Introduction

The analysis of the features of the historical development of different countries showed that, regardless of the political system, they are sensitive to the technology of "non-violent" overthrow of power [1, 2]. The application of such information technologies, organized from the outside, leads to unrest, turmoil, coups, civil wars and, eventually, to the overthrow of legitimate government. Currently, the methods of preparing a situation of destabilization, modernized to take into account the progress in the fields of information, science and technology, are actively applied to overthrow objectionable governments through acts of civil disobedience, backed by rebellion in law enforcement agencies. Such impacts, whose non-violent nature is ultimately conditional enough, cause destabilization of the political and social situation in the world and threaten the independence of many countries.

The phenomenon of non-violent resistance has a long history; in the twentieth century, its ideas were developed in [3, 4]. This phenomenon is widely studied in the West [5–7] and is declared a tool for promoting democracy [8, 9]. Of course, it has not been ignored by Russian researchers [10, 11]. In the area of the purposeful development of methods used to dismantle undesirable state regimes, the main scientific and practical organization is the A. Einstein Institute (Boston, USA). In addition to the founder of this institute, Gene Sharp (1928–2018), and its directors, its researchers and research associates include: Robert L. Helvey, a career officer who served for 30 years in the US Army and a strategic consultant to non-governmental organizations that promote pro-democratic reforms by non-violent methods, who consulted the corresponding groups in Burma, Thailand, Tibet, Belarus, Serbia, Venezuela and Zimbabwe; Cornelia Sargent, a lawyer and specialist in human rights; Elizabeth Defeis, a specialist in international law and human rights who advises on the processes of building democracy in various countries; and Mary King, a professor, conflictologist and world problems specialist at the University for Peace in Costa Rica and a lead researcher at the American University Center for Global Peace, Washington. During the administration of President Carter, Dr. King was responsible for the Peace Corps in 60 countries, was an adviser to President Carter on Middle Eastern issues, and was his representative at meetings with business leaders in the region. Curt Goering was Executive Director of the United States Amnesty International Center for over 20 years, responsible for the organization's strategy and policy, its public representation and fundraising, and conducted research in the Middle East, Africa, Europe and Asia. Researchers at the A. Einstein Institute have long been involved in numerous projects related to the direct implementation of the ideology of non-violent resistance in specific countries. In addition to publishing research results and giving lectures, they actively advise leaders of anti-government groups. Other researchers who study methods of organizing political change include Professor David H. Weaver of Indiana University and Dr. Jacob Groshek of Boston University, USA; their additional research includes data mining and visualizing social media content. Professors Marc Lynch and Katerina O'Donnell are also researching the role of social media (YouTube, Twitter, Facebook) in protest events in the Middle East (Tunisia, Egypt). To do this, they study individual profiles and competencies of participants of protest events, tools for managing protest activity, the intragroup dynamics of the development of relations, as well as the role of power structures, evaluating the achievement of the goals of protest actions by external actors.

In connection with the above, the development of mathematical models and methods to support decision-making in countering external forces engaged in the planning and organization of destabilization situations is relevant. Analyses and reviews of publications (for example, [12]) show that mathematical models and methods are used, as a rule, for the analysis of military confrontation, while work on mathematical support in the form of methods, models, algorithms and information systems to counter the development of destabilization situations is practically absent.



Since the methods of organizing destabilization situations are most productive in certain combinations, including management errors and adverse economic, social and natural factors, the counteraction, in the authors' opinion, should be based on the concept of dangerous combinations of events [13–15]. The proposed approach is to identify the combinations that lead to destabilization situations, assess their probabilities and determine the sequence of actions to prevent them. A consulting system created on the basis of the developed models, methods and algorithms will make it possible, using modern high-performance computing tools, to determine recommendations for countering destabilization situations at different time intervals.

2 Statement of the Problem

Let there be a plan of actions A for the organization of destabilization situations, developed using the modernized list of methods described in [1, 2]. According to this plan, an event graph is constructed in the form of a tree D(A), the root of which is the event corresponding to the destabilization situation. The vertices of the tree D(A) correspond to individual events of the plan A, the realization of which leads to the overthrow of power. The arcs of the tree D(A) represent causal relationships between these events. Each event e of the tree D(A), except for terminal events, corresponds to the logical operator "OR" or "AND", expressing the condition for the occurrence of this event as a consequence of the events of the lower level. The minimal cross sections of the tree D(A) are used to construct models of dangerous combinations of events. Let the enemy, preparing the situation of destabilization, carry out the actions of his plan, corresponding to the events e1, …, ek, with intensities λ1, …, λk, and let the country oppose the implementation of this plan with intensities µ1(t), …, µk(t). We denote the dangerous combinations of events by C1, …, Cn. The probability of the realization of each combination depends on the vector λ(t) = (λ1(t), …, λk(t)) of the intensities of occurrence of the events e1, …, ek, the vector µ(t) = (µ1(t), …, µk(t)) of the intensities of counteraction to these events, and the vector x(t) of the state of the external environment. We denote by X(t) the set of possible values of x(t).

It is necessary, on the time interval [t0, t1], where t0 and t1 are the initial and final times of events, respectively, to determine the probabilities Pi(λ(t), µ(t), x(t)), i = 1, …, n, of each dangerous combination Ci leading to the destabilization situation, and to determine the vector of actions µ*(t) = (µ*1(t), …, µ*k(t)) for which the following conditions are fulfilled on the time interval [t0, t1] for all admissible states x(t) ∈ X(t) of the environment:

P_i(λ(t), µ(t), x(t)) ≤ ε_i(t),    (1)

where ε_i(t) are given functions on [t0, t1], and

\sum_{i=1}^{n} \int_{t_0}^{t_1} F_i(\lambda(t), \mu(t), x(t), t)\,dt \to \min,    (2)



where Fi(λ(t), µ(t), x(t), t) are given functions of the time and other resources needed to prevent the situation of destabilization, under the boundary conditions

F_i(λ(t_0), µ(t_0), x(t_0), t_0) = 0,    (3)

F_i(λ(t_1), µ(t_1), x(t_1), t_1) = 0,    (4)

and the restriction

C_j ≤ G_j(λ(t), µ(t), x(t), t) ≤ D_j,    (5)

where j = 1, …, m, and Cj, Dj, m are given constants. When an adversary tries to organize a situation of destabilization, the state must exert influences under which the probabilities of the options for implementing this plan do not go outside the permissible safe corridor. The models, methods, and algorithms for solving the discussed problems are given below.

3 Mathematical Models and Algorithms

To determine the combinations of non-violent influences that lead to the destabilization situation, it is proposed to represent the plans of their organization in the form of fault tree complexes. An example of such a structure, a tree Ds, a variant of the top-level tree of the development of a situation of destabilization based on actions of non-violent resistance, is shown in Fig. 1, where the following notation is used: 1 is work "strictly according to the instructions"; 2 is a "bumper" (selective, alternate) strike; 3 is absenteeism "due to illness"; 4 is strike through dismissal; 5 is the destruction of their property; 6 is installation of new street signs and names; 7 is hanging flags, using items of symbolic colors; 8 is the symbolic "development" of land; 9 is reduction in the pace of work; 10 is absence from work; 11 is refusal of conscription and deportation; 12 is thefts, escapes and production of false documents; 13 is removal of signs of property and street marking; 14 is arbitrariness; 15 is refusal to accept the appointment of officials; 16 is refusal to dissolve existing institutions; 17 is wearing symbols; 18 is leaflets, pamphlets and books; 19 is public statements signed by famous people; 20 is boycott of social events; 21 is refusal of honors; 22 is reluctant and slow obedience; 23 is disobedience in the absence of direct supervision; 24 is a sit-down strike; 25 is illegal movement; 26 is seminars; 27 is protest rallies; 28 is failure to comply with the order to disperse a meeting or rally; 29 is newspapers and magazines; 30 is refusal of loyalty to the authorities; 31 is literature and speeches calling for resistance; 32 is refusal to help the police; 33 is fraternization with soldiers; 34 is intentional inefficiency of work and selective refusal to cooperate with the executive bodies; 35 is blocking the transfer of commands and information; 36 is alternative economic institutions; 37 is alternative markets; 38 is dumping; 39 is selective patronage over firms and institutions; 40 is civil disobedience; 41 is resettlement of citizens; 42 is national disobedience; 43 is financing of strikes by foreign sources; 44 is departure from government educational institutions; 45 is agitation on the Internet, radio and television; 46 is insurrection; 47 is



delays and obstacles to the work of institutions; 48 is cessation of work and trade ("hartal"); 49 is peasant strikes; 50 is civil unrest; 51 is a solidarity strike; 52 is student strikes; 53 is failure to fulfill the functions of suppressing protests; 54 is termination of all economic activity; 55 is riots; 56 is non-violent government overthrow.

Fig. 1. Tree Ds, a variant of the top-level tree of a destabilization situation based on non-violent resistance actions

The minimum cross sections of the tree Ds (Fig. 1) correspond to dangerous combinations of events: combinations of terminal vertices whose realization, regardless of the realization of the other terminal vertices, leads to the root event.

The event graph and the system of Kolmogorov-Chapman differential equations [16]. Let the vertices e1, …, ek form a minimum section of the fault tree. To analyze the process of the origin of the root event, an event graph is used whose 2^k vertices correspond to the occurrence of the various combinations of the events e1, …, ek. A mathematical model applicable, under certain conditions, for determining the probabilities of the realization of minimal sections is a system of linear Kolmogorov-Chapman differential equations, consisting of 2^k equations for the functions P_0(t), …, P_{2^k−1}(t) representing the probabilities of the events corresponding to the vertices of the event graph:
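The minimal cross sections (minimal cut sets) of such an AND/OR tree can be enumerated by a classical top-down expansion of the gates. A minimal sketch on a toy three-event tree (the gate structure is illustrative, not the tree Ds of Fig. 1):

```python
# gate name -> ("AND" | "OR", children); names absent from `tree` are terminal events
tree = {
    "top": ("OR",  ["g1", "e3"]),
    "g1":  ("AND", ["e1", "g2"]),
    "g2":  ("OR",  ["e2", "e3"]),
}

def cut_sets(node):
    """All cut sets of `node`, as frozensets of terminal events."""
    if node not in tree:
        return {frozenset([node])}
    op, kids = tree[node]
    kid_sets = [cut_sets(k) for k in kids]
    if op == "OR":                       # any child's cut set fires the gate
        return set().union(*kid_sets)
    combos = {frozenset()}               # AND: combine one cut set per child
    for ks in kid_sets:
        combos = {s | t for s in combos for t in ks}
    return combos

def minimal(sets):
    """Drop every cut set that strictly contains another one."""
    return {s for s in sets if not any(t < s for t in sets)}

cuts = minimal(cut_sets("top"))          # minimal cut sets: {e3} and {e1, e2}
```

Each resulting set is a dangerous combination of events in the sense of the problem statement above.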

Mathematical Models for the Analysis

725

P′_0(t) = Σ_{j=1}^{k} (μ_j P_j(t) − λ_j P_0(t)),
…
P′_i(t) = Σ_{j=0}^{2^k−1} p_{i,+j} P_j(t) − P_i(t) p_{−i},
…
P′_{2^k−1}(t) = Σ_{j=1}^{k} λ_j P_{2^k−1−2^{j−1}}(t),

where

p_{i,+j} = λ_l, if for some l ∈ {1, …, k} an arc of the state graph labeled λ_l enters from state j into state i;
p_{i,+j} = μ_l, if for some l ∈ {1, …, k} an arc labeled μ_l enters from state j into state i;
p_{i,+j} = 0, if there is no arc from j to i in the graph;

p_{−i} is the sum of the labels of all arcs going out of vertex i to other vertices of the graph, i, j ∈ {0, …, 2^k − 1}.

An algorithm for solving the task
Step 1. Construction of models of the processes leading to the emergence of a destabilization situation, and development of algorithms for solving the task of their prevention.
Step 1.1. Define a set A = {A1, …, An} of dangerous situations.
Step 1.2. Construct a set of fault trees D = {D1, …, Dm} corresponding to the situations from the set A, with a set of elementary events E = {e1, …, ek}.
Step 1.3. For each of e1, …, ek define sets of instructions Q(µ1(t)), …, Q(µk(t)) providing the parrying of events with intensities µ1(t), …, µk(t), and the correspondence between the values µ1(t), …, µk(t) for the events e1, …, ek and the lists of actions Q(µ1(t)), …, Q(µk(t)) ensuring the specified intensities.
Step 1.4. For each fault tree Dh ∈ D, h = 1, …, m, define all the minimum sections.
Step 1.5. Develop a set of algorithms for solving the task of determining the probabilities Pi(λ(t0), µ(t0), x(t0), t0) of the emergence of a destabilization situation and determining the intensities µ1(t), …, µk(t) of countering and parrying the events from the set E, under which conditions (1)–(5) are satisfied for given intensities of occurrence of events λ1(t), …, λk(t) and influences of the environment x(t).
Step 2. Modeling of dangerous combinations of events.
Step 2.1. Construction of complexes for modeling the development of the destabilization situation at different time intervals.
Step 2.2. Modeling the development of destabilization situations and refinement of the probability of their occurrence.
Step 2.3. Entering information into the database and adjustment of the model.
Step 2.4. Development of recommendations to prevent the development of a destabilization situation at different time intervals and stages of development.
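As an illustration of the event graph and system above, the sketch below (the function names, the explicit-Euler scheme and the numerical values are our own illustrative assumptions, not from the paper) builds the off-diagonal transition rates of the 2^k-vertex event graph for a section of k = 2 events and integrates the resulting Kolmogorov-Chapman system:

```python
def build_rates(lam, mu):
    """Off-diagonal rates of the 2^k-vertex event graph: vertex i is a
    bitmask of active events; event l occurs at rate lam[l] (bit l: 0 -> 1)
    and is parried at rate mu[l] (bit l: 1 -> 0)."""
    k, n = len(lam), 1 << len(lam)
    q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for l in range(k):
            bit = 1 << l
            q[i][i ^ bit] = mu[l] if i & bit else lam[l]
    return q

def solve(lam, mu, t_end, steps=20000):
    """Explicit-Euler integration of the Kolmogorov-Chapman system."""
    n = 1 << len(lam)
    q = build_rates(lam, mu)
    p = [0.0] * n
    p[0] = 1.0                      # P0(0) = 1: no events at t = 0
    dt = t_end / steps
    for _ in range(steps):
        dp = [sum(q[j][i] * p[j] for j in range(n)) - p[i] * sum(q[i])
              for i in range(n)]
        p = [p[i] + dt * dp[i] for i in range(n)]
    return p

# probability that both events of a two-event minimum section are
# active simultaneously after 3 weeks (lam = 0.7, mu = 1 per week)
p = solve([0.7, 0.7], [1.0, 1.0], t_end=3.0)
print(round(p[-1], 3))
```

The last entry of p corresponds to the vertex where all events of the section hold simultaneously, i.e. the realization of the dangerous combination.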


Step 3. Preventing the development of a destabilization situation.
Step 3.1. Introduction of the developed intelligent decision support system into the information systems of the relevant government agencies.
Step 3.2. When using the system, based on the current state of the situation in the country and the registered events of the set E, solve the task and find the vector µ*(t) = (µ*1(t), …, µ*k(t)) under which conditions (1)–(4) are satisfied; according to it, define the recommendations Q(µ*(t)) = (Q(µ*1(t)), …, Q(µ*k(t))) containing the regulated actions of the decision maker to normalize the situation in the country, and offer the performers to implement these actions.

4 An Example of the Use of Mathematical Models

As an example, consider actions for the non-violent overthrow of power in one of the countries. Define the set of minimum sections of the tree Ds corresponding to dangerous combinations of events leading to the emergence of a destabilization situation (see Table 1).

Table 1. The set of minimum sections of the tree Ds (Fig. 1) corresponding to dangerous combinations of events leading to the non-violent overthrow of power

Minimum section   Events leading to dangerous combinations of events
C1                37, 38
C2                37, 39
C3                36, 38
C4                36, 39
C5                31, 32, 15, 16
C6                17, 18, 19, 3, 20, 21
C7                17, 18, 19, 4, 20, 21
C8                17, 18, 19, 1, 2, 20, 21
C9                51, 22, 23, 24, 43, 44, 45
C10               51, 11, 12, 13, 43, 44, 45
C11               51, 5, 6, 7, 8, 43, 44, 45
C12               51, 26, 27, 28, 29, 43, 44, 45
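Minimum sections of an AND/OR fault tree such as Ds can be enumerated mechanically by a bottom-up expansion (in the spirit of the classical MOCUS procedure). A minimal sketch; since Fig. 1 is not reproduced here, the tree fragment below is a hypothetical one, chosen so that its cut sets match rows C1–C4 of Table 1:

```python
from itertools import product

def cut_sets(node):
    """Bottom-up enumeration of minimal cut sets for an AND/OR fault tree.
    Leaves are event numbers; gates are ('AND', [...]) or ('OR', [...])."""
    if isinstance(node, int):
        return [frozenset([node])]
    op, kids = node
    kid_sets = [cut_sets(k) for k in kids]
    if op == 'OR':                       # any child's cut set suffices
        sets = [s for ks in kid_sets for s in ks]
    else:                                # 'AND': one cut set per child, united
        sets = [frozenset().union(*combo) for combo in product(*kid_sets)]
    # keep only minimal sets (no proper subset is also a cut set)
    return [s for s in sets if not any(t < s for t in sets)]

# hypothetical fragment: root = OR(AND(37, OR(38, 39)), AND(36, OR(38, 39)))
tree = ('OR', [('AND', [37, ('OR', [38, 39])]),
               ('AND', [36, ('OR', [38, 39])])])
print(sorted(sorted(s) for s in cut_sets(tree)))
# -> [[36, 38], [36, 39], [37, 38], [37, 39]]
```

The four enumerated sets reproduce the dangerous combinations C1–C4 (alternative markets or institutions combined with dumping or selective patronage).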

For example, the minimum section C5 corresponds to a dangerous combination of four events: mass anti-government agitation (31, literature and speeches calling for resistance) and the actual transition of neutral citizens to the side of the protesters (in the form of refusal to cooperate with the police and the army units involved, 32), in conjunction with the non-recognition by a significant part of the citizens of the power of the government and ministers of the country (15, refusal to accept the appointment of officials) and, at the same time, the preservation of alternative elites (16, refusal to dissolve existing institutions), leads to a change of power in the country. Suppose that


the listed factors each act with an intensity of 0.7 per week, and the counteraction of the government proceeds with an intensity of 1 action per week: λj = 0.7, µj = 1, j = 1, …, 4. With the initial conditions P0(0) = 1, P1(0) = P2(0) = … = P15(0) = 0, which corresponds to a sharp jump in the occurrence of these events, we numerically solve the system of Kolmogorov-Chapman differential equations and build polar diagrams characterizing the time variation of the probability of realization of the plan to overthrow the government in various ways P(Ci), i = 1, …, 12, associated with the found minimum sections Ci (Fig. 2).

Fig. 2. Diagrams characterizing the time variation of the probability of realization of the plan to overthrow the government in various ways P(Ci), associated with the found minimum sections Ci: (a), (b), and (c) are the first, second and third weeks of events respectively

The diagram in Fig. 2a demonstrates the high probability of implementing a dangerous combination of events in the first week, Fig. 2b the reaching of a dangerous result in the second, and Fig. 2c the decrease in probability by the third week as a result of the government's actions to prevent the overthrow of power. The likelihood of implementing the scenarios corresponding to C3 and C8 remains high by the third week of events because the efforts to prevent the overthrow of power for the C5 combination have relatively little effect on the prevention of C3 and C8, owing to the different composition of the basic events. The task of preventing a change of power requires the application of efforts, in the required quantities, to all dangerous combinations of events related to the plan of the organization of non-violent regime change.

5 Conclusions

On the basis of modern information technologies, a mathematical approach is proposed for counteracting the preparation and organization of the development of destabilization situations leading to a change of legitimate power in the interests of external forces. The approach is based on the determination of dangerous combinations of "non-violent" impacts leading to the development of destabilization situations, the assessment of the possibility of implementing these combinations using high-performance computing tools, and the development of measures to prevent them. An algorithm for solving the problem and the concept of its implementation in an information-advising system are proposed.


The proposed approach is illustrated by an example. The results of the work are intended for use in the development of advising systems to counter violations of the integrity and national security of the country with the use of information technology.

References
1. Sharp, G.: The Politics of Nonviolent Action. Porter Sargent, Boston (1973)
2. Stephan, M.J., Chenoweth, E.: Why civil resistance works. The strategic logic of nonviolent conflict. Int. Secur. 33(1), 7–44 (2008)
3. King, M.L.: Stride Toward Freedom: The Montgomery Story. Harper & Row, New York (1958)
4. Collected Works of Mahatma Gandhi, vol. 22. www.gandhiserve.org/cwmg/VOL022.PDF. Accessed 24 Nov 2018
5. Clarke, R., Knake, R.: Cyber War. The Next Threat to National Security and What to Do About It. HarperCollins, New York (2010)
6. Lynn, W.: Defending a new domain: the Pentagon's cyberstrategy. Foreign Affairs, September/October, pp. 97–108 (2010)
7. Universal Declaration of Human Rights. http://www.un.org/ru/documents/decl_conv/declarations/declhr.shtml. Accessed 24 Nov 2018
8. Brzezinski's Fear: Class Warfare and Destruction of the New World Order. http://www.prisonplanet.com/brzezinski%E2%80%99s-fear-class-warfare-and-destruction-of-the-new-world-order.html. Accessed 24 Nov 2018
9. Obama: Gaddafi death is warning to iron-fist rulers. www.reuters.com/article/2011/10/20/us-libya-gaddafi-whitehouse-idUSTRE79J6WJ20111020. Accessed 24 Nov 2018
10. Tsaturyan, S., Dzhavlakh, K.: Ukraine 2014: equipment and preliminary results of the coup. Mezhdunarodnoe publichnoe i chastnoe pravo 5, 11–15 (2014). (in Russian)
11. Filimonov, G., Tsaturyan, S.: Social networks as an innovative mechanism of "soft" impact and management of mass consciousness. Politika i obshchestvo 1, 65–75 (2012). (in Russian)
12. Novikov, D.A.: Hierarchical models of military action. In: Managing Large Systems: Proceedings, vol. 37, pp. 25–62 (2012). (in Russian)
13. Bogomolov, A.S.: Analysis of the ways of occurrence and prevention of critical combinations of events in man-machine systems. Izvestiya Saratovskogo Universiteta, Novaya Seriya-Matematika Mekhanika Informatika 17, 219–230 (2017)
14. Rezchikov, A.F., Kushnikov, V.A., Ivashchenko, V.A., Bogomolov, A.S., Filimonyuk, L.Y., Sholomov, K.I.: The dynamical cause-effect links' presentation in human-machine systems. Izvestiya Saratovskogo Universiteta, Novaya Seriya-Matematika Mekhanika Informatika 17, 109–116 (2017)
15. Mal'ko, A.V., Rezchikov, A.F., Ivashhenko, V.A., Kushnikov, V.A., Bogomolov, A.S., Soldatkina, O.L., Semikina, S.A., Filimonuk, L.Y.: Interstate conflicts: modeling and escalation of legal policy in the field of prevention. Legal Sci. Pract.-Bull. Nizhniy Novgorod Acad. Ministry Interior Russia 40(4), 28–34 (2017)
16. Verma, A.K., Ajit, S., Karanki, D.R.: Reliability and Safety Engineering. Springer, London (2010)

Part III

Smart City Technologies

Mobile Platform for Decision Support System During Mutual Continuous Investment in Technology for Smart City

Bakhytzhan Akhmetov1,2, Lyazzat Balgabayeva1, Valerii Lakhno2, Vladimir Malyukov3, Raya Alenova4, and Anara Tashimova5

1 Department of Computer and Software Engineering, Turan University, Almaty, Kazakhstan
{bakhytzhan.akhmetov.54,lyazzat_iso}@mail.ru
2 Department of Computer Systems and Networks, National University of Life and Environmental Sciences of Ukraine, Kiev, Ukraine
[email protected]
3 Department of Information Systems and Mathematical Disciplines, European University, Kiev, Ukraine
[email protected]
4 International University of Information Technologies, Almaty, Kazakhstan
[email protected]
5 Department of Computer Science and Information Technology, Aktobe Regional State University named after K.Zhubanov, Aktobe, Kazakhstan
[email protected]

Abstract. The article describes a model for a mobile platform for a decision support system for mutual investment in technology for Smart City. Unlike existing approaches, our model is based on solving a bilinear differential quality game with several terminal surfaces. In the obtained solution, a new class of bilinear differential games was considered for the first time. This makes it possible to adequately describe the process of finding rational strategies of players during mutual investment in the advanced technologies of a rapidly developing Smart City. During the research, the software product "Invest Smart City" was developed in the Android Studio environment. The developed software product makes it possible to reduce the discrepancies between forecast data and real returns from investment in Smart City technologies, as well as to optimize the investment strategies of both sides of the investment process.

Keywords: Smart city · Optimal investment strategies · Decision support · Differential game · Android mobile platform · Software



1 Introduction

Conceptually, the idea of creating and developing smart cities (hereinafter Smart City) began to be actively discussed more than 15 years ago. Few forums devoted to urban issues and the prospects of urban development using new information technologies went by without discussion of advanced Smart City technologies. Many players in the investment market, as well as public institutions, began to consider Smart City in the context of the prospects for investing in smart technologies and creating new zones of cooperation among producers of high-tech products for the needs of the urban economy. The city authorities of many large cities, primarily at the level of municipalities, announced investment strategies for Smart City projects. This was dictated by the desire to improve the city's status, as well as by the opportunity to attract long-term investment. The idea of localizing high-tech business within the city infrastructure has also become very promising. At the same time, companies faced tasks aimed at solving local urban problems. Such investment projects are characterized by a high degree of uncertainty and risk. In works [1, 2] the authors noted that, in order to increase the effectiveness and efficiency of evaluating such large projects, it is advisable to use the potential of various computerized decision support systems (DSS). Without a doubt, this also applies to large interstate or interregional projects for investing in Smart City technologies [3–5]. All of the above determines the relevance of the topic of our research, in particular the need to develop new models and a corresponding software product that will reduce the discrepancies between forecast data and the real return on investment in Smart City technologies.

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 731–742, 2019.
https://doi.org/10.1007/978-3-030-12072-6_59

2 Literature Review

In recent years, a large number of works have been devoted to the mathematical and cybernetic aspects of effective financial investment in Smart City technologies [6–10]. According to a number of authors [8, 9], when analyzing the models and algorithms used to assess mutual investment in technology, it is advisable to consider possible situations in the context of the actions of two sides (players): side 1, an investor (Inv1) from one region (Reg1); side 2, an investor (Inv2) from another region (Reg2). In accordance with [9, 10], Inv2 is considered as a certain set of potential threats that may arise as a result of incompetent, inconsistent investor actions, which will lead to a loss of the capital spent on the project, in particular on Smart City technology. Works [12–14] noted that, as applied to this class of problems, the most adequate models describing the behavior of a complex system are models based on game theory. As an analysis of similar research in this area [9, 12, 14, 15] showed, most of the models and algorithms given in works [15–17] do not contain real recommendations for investors in Smart City. This especially concerns the search for rational strategies of mutual financial investment. A new and quite relevant direction of research in this area comprises works devoted to the use of various intellectualized expert systems (ES) [7, 8] and decision support systems (DSS) [9, 10] for selecting rational investment strategies in Smart City. This task is particularly relevant for the segment of software products (SP) for mobile devices and smartphones on the Android platform. The existing SPs of this kind are not very informative and are not suitable for evaluating real investment projects.


We also note that the approaches described by the authors of [11–13] do not make it possible to find effective recommendations and strategies for the control of investments in Smart City. This circumstance determines the need for, and relevance of, the development of new models and software products oriented to the Android platform that are able to support decision-making procedures in the search for optimal strategies of mutual financial investment in the advanced technologies of Smart City.

3 Objectives of the Research

The objectives of the article are:
– the development of a model and algorithms for a decision support system, adapted to the Android platform, for the search for rational strategies of mutual investment in technologies for Smart City;
– the approbation of the model and of the software product for the Android platform using computational experiments.

4 Methods and Models

4.1 Problem Statement

One of the most important tasks facing the services that ensure the development, creation and implementation of advanced technologies for Smart City is the financial support of projects and the attraction of the financial resources (FinR) of investors. At the same time, decision-making on investing in Smart City technologies should be based on procedures that take all possible factors into account. This is possible if a DSS or ES is developed and implemented, in particular popular software products for the Android platform that allow rational decisions to be made on investing funds in the development of such technologies. The proposed model is based on an analysis of the possibilities for mutual investment of players in Smart City technology. The model is a continuation of our works [9, 16, 19] and is based on solving a bilinear differential quality game with two terminal surfaces. We considered the problem in the following formulation. There are two players (investors) who control a dynamic system defined by a system of bilinear differential equations with dependent motions. We define the sets of strategies (U) and (V) of the players. Two terminal surfaces S0 and F0 are also given. The aim of the first player (hereinafter Inv1) is to bring the dynamic system, with the help of his control strategies, to the terminal surface S0, no matter how the second player (hereinafter Inv2) acts. The aim of Inv2 is to bring the dynamic system, with the help of his control strategies, to the terminal surface F0, no matter how Inv1 acts. The solution consists in finding the sets of initial states of the objects and the strategies that allow the objects to bring the system to one or the other surface.


The following notation is used in the article:

Inv1 – player 1, investor No. 1 in Smart City technologies;
Inv2 – player 2, investor No. 2 in Smart City technologies;
FinR – financial resource of an investor;
g – coefficient determining the equilibrium beam;
S0 – terminal surface for Inv1;
F0 – terminal surface for Inv2;
r1 – coefficient characterizing the elasticity of the investments of Inv2 with respect to the investments of Inv1 in Smart City;
r2 – coefficient characterizing the elasticity of the investments of Inv1 with respect to the investments of Inv2 in Smart City;
R²₊ – positive orthant;
t – time parameter;
u* – optimal strategy of Inv1;
U – strategies of Inv1;
V – strategies of Inv2;
h1 – value of the financial resource of Inv1;
h2 – value of the financial resource of Inv2;
Z1 – set of preferences of Inv1;
Z2 – set of preferences of Inv2;
g1 – growth rate of the financial resources of Inv1 upon successful implementation of Smart City;
g2 – growth rate of the financial resources of Inv2 upon successful implementation of Smart City.

In task 3, Inv1 is treated as the ally player and Inv2 as the opponent player; vice versa, in task 4 Inv2 is treated as the ally player and Inv1 as the opponent. The first player seeks to invest in Smart City technology in the second region; the second player seeks to invest in Smart City in the first region. We assume that for a specified period of time [0, T] (T is a real positive number) financial resources h1(0) for Inv1 and h2(0) for Inv2 are allocated. These parameters determine the forecast, at the moment of time t = 0, of the values of FinR of the players. We describe the dynamics of the change of FinR for the players:

dh1(t)/dt = −h1(t) + g1(t)·h1(t) − u(t)·g1(t)·h1(t) + r2·v(t)·g2(t)·h2(t),   (1)

dh2(t)/dt = −h2(t) + g2(t)·h2(t) − v(t)·g2(t)·h2(t) + r1·u(t)·g1(t)·h1(t).   (2)
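The dynamics (1)-(2) can be simulated directly for constant controls. The sketch below (the parameter values, the explicit-Euler scheme and the practical exhaustion threshold eps are our own illustrative assumptions, not from the paper) integrates the system until one of the terminal surfaces is reached:

```python
def simulate(h1, h2, g1, g2, r1, r2, u, v, dt=0.01, t_end=50.0, eps=1e-6):
    """Explicit-Euler integration of the mutual-investment dynamics (1)-(2)
    for constant control shares u, v in [0, 1]. Stops when a terminal
    surface is (practically) reached: S0 (h2 = 0) or F0 (h1 = 0)."""
    t = 0.0
    while t < t_end:
        dh1 = -h1 + g1 * h1 - u * g1 * h1 + r2 * v * g2 * h2
        dh2 = -h2 + g2 * h2 - v * g2 * h2 + r1 * u * g1 * h1
        h1, h2 = h1 + dt * dh1, h2 + dt * dh2
        t += dt
        if h2 <= eps:
            return 'S0'   # Inv2's resource exhausted: condition (3)
        if h1 <= eps:
            return 'F0'   # Inv1's resource exhausted: condition (4)
    return 'ongoing'

# illustrative parameters (not from the paper): Inv2 invests his entire
# growth, cancelling it out, so his resource decays to the surface S0
outcome = simulate(h1=1.0, h2=1.0, g1=1.0, g2=2.0, r1=1.0, r2=1.0, u=0.0, v=1.0)
print(outcome)  # -> S0
```

Varying u and v over such simulations gives a numerical feel for the preference sets Z1 and Z2 discussed below.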

The interaction ends when one of the following conditions is met:

(h1(t), h2(t)) ∈ S0,   (3)

(h1(t), h2(t)) ∈ F0.   (4)

We assume that

S0 = {(h1, h2) : (h1, h2) ∈ R²₊, h1 > 0, h2 = 0},   (5)

F0 = {(h1, h2) : (h1, h2) ∈ R²₊, h1 = 0, h2 > 0}.   (6)

If condition (3) is fulfilled, the mutual investment procedure in Smart City is considered completed: Inv2 did not have enough funds to continue the investment process. If condition (4) is fulfilled, the procedure is likewise considered completed: Inv1 did not have enough funds to continue the investment process. If neither condition (3) nor (4) is fulfilled, mutual investment continues. The values (h1(T), h2(T)) show the result of mutual investment over the planned interval [0, T].

The process of mutual investment in Smart City is considered within the framework of a positional differential game with full information [9, 19]. Owing to the symmetry, we restrict ourselves to considering the problem from the position of Inv1; the second problem is solved in a similar way. We denote by T* the set [0, T]. The definition of the pure strategy of the first player was given in works [9, 14, 19]. The solution of problem 1 consists in finding the sets of "preferences" of Inv1 and its optimal strategies; the problem is posed similarly from the point of view of Inv2. We give the solution of the game, i.e. the sets of "preferences" Z1 and the optimal strategies of Inv1.

Variant 1. r1·r2 = 1, g2 ≥ g1. The following result is obtained:
Z1 = {(h1(0), h2(0)) : (h1, h2) ∈ int R²₊, g2·h2(0) < r1·g1·h1(0)},
u*(h1(0), h2(0)) = 1 for (h1, h2) ∈ int R²₊ with g2·h2(0) < r1·g1·h1(0),
and u* is not defined otherwise.

Variant 2. r1·r2 = 1, g2 < g1. The following result is obtained:
Z1 = {(h1(0), h2(0)) : (h1, h2) ∈ int R²₊, g2·h2(0) < r1·g1·h1(0)},
u*(h1(0), h2(0)) = 0 for (h1, h2) ∈ int R²₊ with g2·h2(0) < r1·g1·h1(0) < g1·h2(0),
u*(h1(0), h2(0)) = 1 for (h1, h2) ∈ int R²₊ with g2·h2(0) < r1·g1·h1(0),
and not defined otherwise.

Variant 3. r1·r2 > 1, g2 > r1·g1·r2. Here u*(·) and Z1 are defined as in Variant 1.

Variant 4. r1·r2 > 1, g1 ≤ g2 < r1·g1·r2. The following result is obtained:
Z1 = {(h1(0), h2(0)) : (h1, h2) ∈ int R²₊, (r1·g1·r2·g2)^0.5·h2(0) < r1·g1·h1(0)},
u*(h1(0), h2(0)) = 1 for (h1, h2) ∈ int R²₊ with (r1·g1·r2·g2)^0.5·h2(0) < r1·g1·h1(0),
and not defined otherwise.

Variant 5. r1·r2 > 1, g1/(r1·r2) < g2 < g1. Here u*(·) and Z1 are defined as in Variant 4.

Variant 6. r1·r2 > 1, g2 < g1/(r1·r2). The following result is obtained:
Z1 = {(h1(0), h2(0)) : (h1, h2) ∈ int R²₊, r2·g2·h2(0) < g1·h1(0)},
u*(h1(0), h2(0)) = 0 for (h1, h2) ∈ int R²₊ with r1·g1·r2·h2(0) < r1·g1·h1(0) < g2·h2(0),
u*(h1(0), h2(0)) = 1 for (h1, h2) ∈ int R²₊ with r1·g1·h1(0) > g2·h2(0),
and not defined otherwise.

Variant 7. r1·r2 < 1, g2 ≥ g1. Here u*(·) and Z1 are defined as in Variant 1.

Variant 8. r1·r2 < 1, r1·g1·r2 ≤ g2 < g1. In this case we obtain:
Z1 = {(h1(0), h2(0)) : (h1, h2) ∈ int R²₊, r2·g2·h2(0) < g1·h1(0)},
u*(h1(0), h2(0)) = 0 for (h1, h2) ∈ int R²₊ with r1·g2·r2·h2(0) < r1·g1·h1(0) < g1·h2(0),
u*(h1(0), h2(0)) = 1 for (h1, h2) ∈ int R²₊ with r1·g1·h1(0) ≥ g1·h2(0),
and not defined otherwise.

Variant 9. r1·r2 < 1, g2 < r1·g1·r2. Here u*(·) and Z1 are defined as in Variant 8.

The task from the point of view of the second player-ally is solved in a similar way. The sets of "preferences" (cones) of the second player-ally "adjoin" the sets of "preferences" of the first player-ally; these sets are separated by equilibrium beams. Equilibrium beams have the property that if a pair (h1(0), h2(0)) belongs to a beam, then the players have strategies that make it possible to stay on the equilibrium beam at all subsequent moments in time. This makes it possible, for a given (h1(0), h2(0)), to find the ratio of the interaction parameters under which the pair (h1(t), h2(t)) remains on the equilibrium beam.
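For a quick check of which side of an equilibrium beam a given initial state falls on, the Variant 1 inequality can be coded directly. A minimal sketch; the three-way classification and the tolerance handling are our own illustrative reading (the paper defines Z1 only by the strict inequality, and the opposite side and the beam are taken here by symmetry):

```python
def classify(h1_0, h2_0, g1, g2, r1, tol=1e-9):
    """Classification of an initial state for Variant 1 (r1*r2 = 1, g2 >= g1):
    Inv1's preference set Z1 is g2*h2(0) < r1*g1*h1(0); the opposite strict
    inequality is read as Inv2's side, equality as the equilibrium beam."""
    lhs, rhs = g2 * h2_0, r1 * g1 * h1_0
    if abs(lhs - rhs) <= tol * max(lhs, rhs, 1.0):
        return 'beam'
    return 'Z1' if lhs < rhs else 'Z2'

print(classify(10.0, 1.0, g1=1.2, g2=1.2, r1=1.0))  # -> Z1
print(classify(1.0, 1.0, g1=1.2, g2=1.2, r1=1.0))   # -> beam
```

States classified as 'Z1' are exactly those where, per Variant 1, u* = 1 leads the system to the surface S0.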

5 Computational Experiment

Computational experiments were performed in the MATLAB environment, as well as using the Invest Smart City software product for the Android platform (the software product was created in the Android Studio environment); see Figs. 1, 2 and 3.


Fig. 1. Results of the computational experiment 1

Fig. 2. Results of the computational experiment 2


Fig. 3. Results of the computational experiment 3

The data on investment projects in the Smart City technology of large cities of Ukraine and Kazakhstan (Kiev, Kharkov, Lvov, Zaporozhiye, Almaty, Astana) were taken as initial data. Figures 1, 2, 3 and 4 show the results of four test calculations of the computational experiment. The purpose of the experiment is to determine the sets of strategies of the players U and V (shown by blue lines with diamond-shaped markers). Cases are considered in which the strategies of the players lead them to the corresponding terminal surfaces S0, F0. During the experiment, sets of initial states of the objects and strategies were found that allow the objects to lead the system to one or the other terminal surface. On the plane, the H1 axis shows the financial resources of Inv1, and the H2 axis the financial resources of Inv2. The area under the beam is Z1 (the "preference" area of Inv1); the area above the beam is Z2 (the "preference" area of Inv2) [20, 21]. Equilibrium beams on the smartphone screen are displayed as gray lines with round markers. The obtained results demonstrate the effectiveness of the proposed approach. During the model testing in the MATLAB environment, as well as in the Invest Smart City software product, the correctness of the obtained results was established.

6 Discussion of the Results of the Computational Experiment

Figure 1 illustrates the situation when Inv1 has an advantage in the ratio of initial financial resources when investing in Smart City, that is, the FinR lie in the set of preferences of Inv1. In this case, the first player, applying his optimal strategy, will achieve his goal, namely, bring the state of the system to "his" terminal surface.


Fig. 4. Results of the computational experiment in "Invest Smart City" in comparison with MATLAB

Figure 2 demonstrates a situation in which Inv1 uses a non-optimal strategy at the initial moment of time; player 2 then "leads" the state of the system to "his" terminal surface. Figure 3 corresponds to the case when the initial state of the system is on the equilibrium beam: it "satisfies" Inv1 and Inv2 simultaneously, and we get a "sustainable" system. Figure 4 shows the acceptable accuracy of the "Invest Smart City" software product in relation to the results of the computational experiments in MATLAB; the discrepancy does not exceed 5–7%. A disadvantage of the model is that the predictive assessment data obtained using the Invest Smart City software product when choosing investment strategies in Smart City did not always coincide with the actual data. However, in comparison with the existing models, the proposed solution improves the indicators of efficiency and predictability for the investor on average by 9–12% [8, 12]. Note that the solution of problems 1 and 2 was previously obtained in [22]. A further prospect for the development of the research outlined in the article is to transfer the accumulated experience to the real practice of optimizing investment policies in technology for Smart City in other countries.


7 Conclusions

– A model is proposed for software products and decision support systems for the process of mutual investment in Smart City technologies. The model differs from the existing ones in that a system of equations describing a bilinear differential quality game with several terminal surfaces is solved;
– a new class of bilinear differential games is considered, which made it possible to adequately describe the process and to find the best strategies for mutual investment in Smart City technology;
– the software product "Invest Smart City" was developed in the Android Studio environment. "Invest Smart City" makes it possible to reduce the discrepancies between forecast data and real returns from investment in Smart City technologies, as well as to optimize the investment strategies of both sides of the investment process.

References
1. Albino, V., Berardi, U., Dangelico, R.M.: Smart cities: definitions, dimensions, performance, and initiatives. J. Urban Technol. 22(1), 3–21 (2015)
2. Angelidou, M.: Smart cities: a conjuncture of four forces. Cities 47, 95–106 (2015)
3. Glasmeier, A., Christopherson, S.: Thinking about smart cities, pp. 3–12 (2015)
4. Zanella, A., Bui, N., Castellani, A., Vangelista, L., Zorzi, M.: Internet of things for smart cities. IEEE Internet of Things J. 1(1), 22–32 (2014)
5. Paroutis, S., Bennett, M., Heracleous, L.: A strategic view on smart city technology: the case of IBM smarter cities during a recession. Technol. Forecast. Soc. Chang. 89, 262–272 (2014)
6. Hollands, R.G.: Critical interventions into the corporate smart city. Cambridge J. Reg. Econ. Soc. 8(1), 61–77 (2015)
7. Angelidou, M.: Smart city policies: a spatial approach. Cities 41, S3–S11 (2014)
8. Irani, Z., Sharif, A., Kamal, M.M., Love, P.E.: Visualising a knowledge mapping of information systems investment evaluation. Expert Syst. Appl. 41(1), 105–125 (2014)
9. Lakhno, V., Malyukov, V., Bochulia, T., Hipters, Z., Kwilinski, A., Tomashevska, O.: Model of managing of the procedure of mutual financial investing in information technologies and smart city systems. Int. J. Civil Eng. Technol. (IJCIET) 9(8), 1802–1812 (2018)
10. Altuntas, S., Dereli, T.: A novel approach based on DEMATEL method and patent citation analysis for prioritizing a portfolio of investment projects. Expert Syst. Appl. 42(3), 1003–1012 (2015)
11. Gottschlich, J., Hinz, O.: A decision support system for stock investment recommendations using collective wisdom. Decis. Support Syst. 59, 52–62 (2014)
12. Strantzali, E., Aravossis, K.: Decision making in renewable energy investments: a review. Renew. Sustain. Energy Rev. 55, 885–898 (2016)
13. Cascetta, E., Carteni, A., Pagliara, F., Montanino, M.: A new look at planning and designing transportation systems: a decision-making model based on cognitive rationality, stakeholder engagement and quantitative methods. Transp. Policy 38, 27–39 (2015)
14. Malyukov, V.P.: Discrete-approximation method for solving a bilinear differential game. Cybern. Syst. Anal. 29(6), 879–888 (1993)


15. Akhmetov, B.B., Lakhno, V.A., Akhmetov, B.S., Malyukov, V.P.: The choice of protection strategies during the bilinear quality game on cyber security financing. Bull. Nat. Acad. Sci. Republic Kazakhstan 3, 6–14 (2018)
16. Lakhno, V., Malyukov, V., Gerasymchuk, N., et al.: Development of the decision making support system to control a procedure of financial investment. Eastern-Eur. J. Enterpr. Technol. 6(3), 24–41 (2017)
17. Smit, H.T., Trigeorgis, L.: Flexibility and games in strategic investment (2015)
18. Arasteh, A.: Considering the investment decisions with real options games approach. Renew. Sustain. Energy Rev. 72, 1282–1294 (2017)
19. Lakhno, V., Malyukov, V., Parkhuts, L., Buriachok, V., Satzhanov, B., Tabylov, A.: Funding model for port information system cyber security facilities with incomplete hacker information available. J. Theor. Appl. Inf. Technol. 96(13), 4215–4225 (2018)
20. Lee, J.H., Phaal, R., Lee, S.H.: An integrated service-device-technology roadmap for smart city development. Technol. Forecast. Soc. Chang. 80(2), 286–306 (2013)
21. Paskaleva, K.A.: Enabling the smart city: the progress of city e-governance in Europe. Int. J. Innov. Reg. Dev. 1(4), 405–422 (2009)
22. Lakhno, V., Malyukov, V., Bochulia, T., et al.: Model of managing of the procedure of mutual financial investing in information technologies and smart city systems. Int. J. Civil Eng. Technol. 9(8), 1802–1812 (2018)

Automatic Traffic Control System for SOHO Computer Networks

Evgeny Basinya and Aleksander Rudkovskiy

Novosibirsk State Technical University, 20 K. Marx Street, Novosibirsk 630073, Russian Federation
[email protected]
Institute of Information and Communication Technologies, 48 Deputatskaya Street, Novosibirsk 630099, Russian Federation

Abstract. One can say without a shred of doubt that network security plays a significant role in the modern world. The problem with information security lies in the imperfection of the TCP/IP technology stack and in software vulnerabilities. Major manufacturers of network equipment do not pay enough attention to the security infrastructure of SOHO-class networks, which are mostly based on the MIPS or ARM hardware platforms. One solution to this problem is outlined in this article: an algorithm that ensures the information security of small computer networks. The algorithm makes it possible to identify suspicious network activity and eliminate threats through remote control of L3 network equipment. Traffic processing is performed on a personal computer using an intrusion detection and prevention system, along with a system for analysing and correlating information security events. Information flows are redirected using port mirroring technology on a router. The traffic control system for a SOHO-class computer network, which has weak computational capabilities at gateway hosts, functions on the basis of a client-server model using the programming languages Python and C++. The combined use of these tools provided greater efficiency across a wide range of tasks. Both manual and automated testing techniques were involved in the final evaluation of the solution. To evaluate the effectiveness of the proposed product, several experiments were conducted modelling malicious network activity such as DoS and IP spoofing. As a result, the system successfully identified and eliminated all threats. This solution is recommended for SOHO networks that have weak computational power at internetwork hosts and lack a comprehensive firewall.

Keywords: IDS · IPS · SIEM · Network information security system · Threats · Vulnerability · Intrusion · Information

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 743–754, 2019. https://doi.org/10.1007/978-3-030-12072-6_60

1 Introduction

One of the main factors responsible for the development of information technology is network information security. Automation of business processes in any enterprise or government agency is not complete without the use of the TCP/IP (Transmission Control Protocol/Internet Protocol) stack. It is important to note that this stack has


significant vulnerabilities in basic algorithms and protocols. For example, OSPF (Open Shortest Path First) is a dynamic link-state routing protocol that uses Dijkstra's algorithm to find the shortest path [1]. OSPF is designed to manage traffic within an autonomous system: it looks for the best paths between source and destination based on a link-state database. OSPF has a vulnerability that allows an unauthorised user to gain control over an OSPF autonomous system, inject unwanted traffic, and intercept transmitted data. Specially crafted OSPF packets can be sent to vulnerable devices, resulting in leakage of routing tables, and a forged router link-state advertisement (Router LSA) can be distributed through the target domain. The security threat is that a potential attacker can introduce false routes into the network, including instructions that let him view the traffic sent from that point onwards. An OSPF router detects neighbours, establishes adjacency relationships, and then maintains the neighbourhood using the Hello protocol. The packets of this protocol carry the Router Priority value (used for choosing the DR, the designated router) and the HelloInterval (the interval between Hello packets). They also indicate how long a neighbour may remain silent before being considered down (RouterDeadInterval). The HelloInterval and RouterDeadInterval values must be identical for all routers. Before an interface starts to operate, the network is checked for a DR. The DR performs two tasks: it generates network-LSA announcements (these LSAs list the routers currently connected to the network) and maintains adjacency with all other routers (in case of failure its functions pass to the BDR, the backup designated router). If such a router is already defined, it is accepted regardless of the Router Priority value.
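The election rules described here can be sketched as follows. This is a simplified illustration only: the full election in RFC 2328 also honours routers that already declare themselves DR or BDR.

```python
def _id_key(rid):
    # Compare dotted-quad Router IDs numerically rather than as strings.
    return tuple(map(int, rid.split(".")))

def elect_dr(routers):
    """routers: iterable of (router_id, priority) pairs learned from Hello packets."""
    eligible = [r for r in routers if r[1] > 0]  # priority 0 is never eligible
    if not eligible:
        return None
    # Highest Router Priority wins; ties are broken by the higher Router ID.
    return max(eligible, key=lambda r: (r[1], _id_key(r[0])))[0]
```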
If a DR has not yet been assigned, the given router takes this role, provided that it has the highest Router Priority value. The router then describes its link-state database by sending a sequence of Database Description packets to its neighbour. This exchange of Database Description packets is called the Database Exchange process. After the Database Exchange process and all Link State Requests are completed, the databases are synchronised and the routers are marked as adjacent. At this point the adjacency is complete, and router-LSAs (router link advertisements) are announced. Each LSA is re-advertised every 30 minutes (the architectural constant LSRefreshTime is responsible for this), with each subsequent LSA carrying a larger sequence number than the previous one; an LSA with a higher sequence number replaces one with a smaller number. These LSAs spread throughout the autonomous system by flooding: a router that receives an LSA from one of its neighbours sends it to all of its other neighbours, so each router forms its own LSA database. On the basis of this database, each router builds a shortest-path tree with itself as the root. The tree contains routes to all destinations within the AS; routing information of external origin appears as leaves of the tree. The tree includes the path to any network or host, but when forwarding packets to a destination, only the next router (next hop) is used. To exploit the vulnerability, the attacker must accurately determine the LSA database parameters on the target router. The vulnerability can be exploited exclusively by sending a specially crafted unicast or multicast Router LSA packet; no other type of LSA can be used. Violation of the confidentiality, integrity and availability of data can entail significant reputational and financial costs for businesses and individuals. Business priorities are predominantly focused on making a profit and increasing the profitability of


economic activity, while information security issues are often neglected. According to Cisco's 2018 annual cybersecurity report, almost 53% of all cyber threats resulted in costs of more than $500,000 [2]. Accordingly, the importance of managing computer network traffic while ensuring a high level of information security is increasing. Within this topic, scientific research is conducted by Russian and foreign scientists: Craig H. Rowland, Yunlu Gong, Shingo Mabu, Yifei Wang, Kotaro Hirasawa, Myung-Kyu Yi, Chong-Sun Hwang, Sergey M. Avdoshin and Anton V. Antineleskul [3, 4]. Works in which IDS and SIEM systems converge offer interesting solutions. Unfortunately, these approaches are not focused on SOHO (small office/home office) class networks, which have weak computational capabilities on gateway hosts. Another disadvantage is the lack of source code for the implemented projects, which makes it difficult to evaluate them and perform a comparative analysis.

2 Purpose of Work

The aim of this work was to research and develop a traffic management system for a computer network of the SOHO class. The main task was to develop an original algorithm for ensuring the information security of small computer networks, by which the system will function.

3 Theory and Practice

SOHO means a small local network that can integrate computers, TVs on the Smart TV platform, digital video cameras, players and other microprocessor devices. The emergence of Smart TV technology has made it possible to connect TVs to a wireless network (Wi-Fi) and to a local cable network (Ethernet), which has changed the quality of services provided by a SOHO computer network. The class of equipment designed for SOHO networks is very poorly protected. Products from D-Link, TP-Link, Cisco and others contain built-in firewalls that perform the functions of packet filters, which, as practice shows, is not enough for thorough security. The routers themselves also have a broad range of vulnerabilities, which prevent the built-in firewalls from completing their task. Major manufacturers of network equipment for small corporate networks do not pay sufficient attention to security, since they aim at simplicity and cheapness of components in order to reach a larger audience. Consequently, the confidentiality of the consumers who use their products suffers. One point worth drawing attention to is the weak computational power of the MIPS/ARM-based processors used in routers, which cannot satisfy the technical requirements needed to increase security against intruders and current threats. Implementing security and intrusion-detection solutions on such hardware is also quite problematic.

3.1 Analysis of Network Equipment

One of the main factors behind such a wide choice of products is the variety of hardware components. These include the presence of certain ports and different types of processors, which affect the overall computing power of a device. The higher the power, the more opportunities there are to implement a more expensive set of security functions. At the same time, a large proportion of equipment fails to ensure the complete safety of devices in a number of ways. Let us look at some of them. The more expensive models of TP-Link routers have a HomeCare function [5], which provides an antivirus positioned as protection against modern cyber threats. D-Link offers a wide range of specialised router models for SOHO networks; from a security point of view they all have a built-in firewall, but the problems that arise are similar to those of TP-Link routers: firewalls do not protect against vulnerabilities of the routers themselves. One current way to process network traffic is port mirroring. A large number of managed network switches allow traffic from one or more ports or VLANs (Virtual Local Area Networks) to be duplicated to a single port. This is mainly needed to monitor all traffic for security purposes and to evaluate the performance and load of network equipment. The Internet of Things (IoT), like any high-growth technology, is experiencing a number of “growing pains”, among which the most serious is the problem of security. The “smarter” the devices connected to the network, the higher the risks associated with unauthorised access to the IoT system and the use of its capabilities by attackers. Today, the efforts of many companies and organisations in the IT field are aimed at finding solutions that will minimise the threats hindering the full implementation of IoT.
For IoT devices, security lies primarily in the integrity of the code, authentication of users (devices), the establishment of ownership rights (including over the data generated by them), and the ability to repel virtual and physical attacks. In fact, however, most IoT devices are not equipped with security elements: they have externally accessible management interfaces and default passwords, i.e. all the signs of a web vulnerability. According to Statista.com [6], the volume of the Internet of Things market exceeded one billion dollars in 2017. The total number of devices connected to the Internet is currently estimated at more than 23 billion, with the prospect of increasing to 30 billion by 2020; after that, the analytical agency IHS Markit predicts non-linear growth of up to 125 billion devices by 2030. Such a production volume is entirely possible, but even now the rapid production rates of IoT devices are achieved mainly through the cheapest “Chinese” devices, in whose development safety was considered last. Many IoT devices are not patched and are not monitored by the information security department [7]. Typical vulnerabilities of IoT devices include:

• Old firmware
• No update procedure
• Older protocol versions
• Attacks against the web interface
• Known vulnerabilities
• Physical access
• Absent or weak encryption algorithms

Based on the above weaknesses, it is clear that the network must be well protected and its traffic continuously monitored, so that even when an attacker physically accesses an IoT device, the system will detect the abnormal activity in the network and take appropriate measures [8]. The idea of the solution is that the security of the network can be increased without additional cost and without changing the equipment that acts as the boundary between the internal network and the outside world. The point is that even cheap home routers offer traffic mirroring. Thanks to this feature, all traffic is duplicated, with the help of mirroring technology, to users' home stations in the case of a home network, or to the main company server in the case of a small business. This technology does not require significant computing power, so a regular home computer or an ordinary server is suitable for enhancing security. What is needed is an original algorithm for ensuring information security in small networks.

3.2 Algorithm Development

In order to develop an original algorithm for ensuring information security in small networks, it was necessary to draw a block diagram (Fig. 1) and to think through the logic of the client-server application that underlies the traffic management system. At the first stage, a profile operating system is installed on the routers of the computer network. This enables the installation of the client side of the software and a more flexible configuration of its functions. The next step is to install the client software on all routers in the network. Then the intelligent functions of the managed network equipment are configured automatically. Insecure protocols that transmit information in the clear, such as Telnet (Teletype Network), are disabled, followed by the configuration of SSH (Secure Shell) communication, since this protocol is cryptographically strong. It is also advised to use the secure HTTPS (Hypertext Transfer Protocol Secure) protocol instead of HTTP (Hypertext Transfer Protocol) to encrypt control traffic, for example by generating a certificate and moving the service to a non-standard port. For node authentication it is best to use SSL (Secure Sockets Layer) certificates. NTP (a network protocol for synchronising the internal clock of a device over networks with variable latency) is then enabled, and lastly Port Security technology is configured. This allows the MAC (Media Access Control) addresses of hosts to be “bound” to the ports of the router; frames of an information flow that do not satisfy the rules are discarded. IP binding is also configured, which pre-designates a strict correspondence between IP and MAC addresses; if these parameters do not match in an information flow, the traffic is discarded. The next step is to install the server side of the application, which processes network traffic for malicious activity.
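The IP-binding check can be illustrated with a short sketch. The names (`BINDINGS`, `filter_frame`) and the table contents are hypothetical; the point is only the rule stated above: a frame whose source IP and MAC do not match the pre-configured binding is discarded.

```python
# Hypothetical binding table: each permitted IP address is mapped to the
# MAC address it must arrive from.
BINDINGS = {
    "192.168.1.10": "aa:bb:cc:dd:ee:01",
    "192.168.1.11": "aa:bb:cc:dd:ee:02",
}

def filter_frame(src_ip: str, src_mac: str) -> bool:
    """Return True if the frame may pass, False if it must be discarded."""
    expected = BINDINGS.get(src_ip)
    # An unknown IP, or a MAC that differs from the binding, means discard.
    return expected is not None and expected == src_mac.lower()
```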


Fig. 1. Algorithm for Ensuring Information Security

Next, the IDS/IPS and SIEM systems on a machine located in the internal network are configured for analysing, demonstrating and managing threats. It is also worth noting that, when a video card is present, a parallelisation function can be enabled that performs calculations without loss of performance for the rest of the system [9]. The next step is to configure traffic mirroring using the corresponding functions of the network equipment, so that the traffic is duplicated to the system where the server is located. After that, the traffic is analysed for malicious activity, and if such activity is detected, control commands are sent to the client. Depending on the incident, there is a pool of control commands to choose from, such as breaking a session, blocking an IP address, or blocking a port.
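The pool of control commands can be sketched as a small dispatch table. This is an illustration, not the authors' implementation: the incident names, the fallback action and the JSON message format are assumptions.

```python
import json

# Assumed mapping from incident type to the control command that the server
# sends to the client module on the router; unknown incidents fall back to
# an alert-only action.
COMMANDS = {
    "dos":            "block_ip",
    "ip_spoofing":    "block_ip",
    "port_scan":      "block_port",
    "session_hijack": "break_session",
}

def build_command(incident_type: str, target: str) -> str:
    """Serialise a control command for transmission to the client."""
    action = COMMANDS.get(incident_type, "alert_only")
    return json.dumps({"action": action, "target": target})
```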

3.3 Overview of Existing Intrusion Detection System Solutions

The difference between the two is that a firewall acts as a packet filter and controls only session parameters (IP addresses, port numbers and connection state), while an IDS “looks” inside the packet


(up to the seventh OSI level), analysing the transmitted data [10]. There are several types of intrusion detection systems:

• Network-based IDS (NIDS) tracks intrusions by inspecting network traffic and monitors multiple hosts. A network intrusion detection system gains access to network traffic by connecting to a hub, to a switch configured for port mirroring, or to a network TAP device.
• Protocol-based IDS (PIDS) is a system that tracks and analyses communication protocols with related systems or users. For a web server, this kind of IDS usually monitors the HTTP and HTTPS protocols. When HTTPS is used, the IDS should be located at an interface where it can view HTTPS packets before they are encrypted and sent to the network.
• Application protocol-based IDS (APIDS) is a system that monitors and analyses data transmitted using application-specific protocols. For example, on a web server with an SQL database, the APIDS tracks the contents of the SQL commands sent to the server.
• Host-based IDS (HIDS) is a system located on a host. It monitors intrusions by analysing system calls, application logs, file modifications (executables, password files, system databases), the state of the host and other sources.

An IDS is the first barrier of network security, but not every router manufacturer can afford equipment with the necessary technical capabilities [11]. The market for intrusion detection systems is quite wide, but three IDSs are particularly popular and firmly entrenched in it:

• Snort
• Suricata
• Bro

Snort is the first freely available IDS. It logs, analyses and searches content, and is widely used for blocking or passively detecting a variety of attacks and probes, such as buffer overflow attempts, stealth port scans, attacks on web applications, SMB probes, and attempts to fingerprint the operating system. The software is mainly used to prevent penetration and to block attacks if they occur.
Snort rules are regularly replenished by the community [12], and there are also services that provide rule subscriptions. Regarding application control, Snort can only differentiate and apply rule actions such as alert, drop, etc., but it does not distinguish which application the traffic belongs to. However, there is an add-on for Snort, OpenAppID, which makes it possible to take L7 traffic into account and to drop or flag it based on standard Snort rules. The advantage of Snort is its relatively broad adoption, which allows emerging threats to be addressed quickly: for example, a Snort rule to monitor the vulnerability behind the Equifax breach was available about a day after it was announced. The disadvantage of Snort is its age. Snort is 20 years old and was designed to work on older infrastructure. Although the rules are relatively easy to write, it becomes more and more difficult to adapt them to increasingly complex threats and the requirements of high-speed networks. Suricata is also a freely distributed IDS; it is compatible with Snort rules but developed into an independent project. Suricata was


introduced in 2009 in an attempt to meet the needs of modern infrastructure. The rules of Suricata are also distributed by subscription, but unlike Snort it provides additional features, such as port-independent protocol recognition and detection of files and their contents. One of the main advantages of Suricata is that it was developed much later than Snort, so it has many features on board that Snort still virtually lacks. One of these is multithreading support. Over the years, the growth of network traffic has been accompanied by rising processing requirements for IDS devices (measured in packets per second). Suricata supports multithreading; Snort does not: no matter how many cores the processor contains, Snort uses only one core or thread. Like Snort, Suricata is rule-based, and although it offers compatibility with Snort rules, its multithreading provides the theoretical ability to process more rules in faster networks with large traffic volumes on the same equipment. In the area of application control, Suricata differs slightly from Snort. It supports application-level discovery rules and can, for example, identify HTTP or SSH traffic on non-standard ports based on the protocol, and then apply protocol-specific settings to these detections. Because it is multithreaded, one instance balances the processing load across the processors of the sensor that Suricata is configured to use, allowing the hardware to reach speeds of 10 gigabits per second without sacrificing rule-set coverage. Suricata supports file extraction, an incredibly useful feature that automatically extracts selected files when a rule containing the “filestore” option is triggered. It can, for example, extract all .pdf files, or all one-pixel .png files, and save them in a pre-configured folder for further manual analysis.
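As an illustration of this file-extraction feature, a Suricata rule of roughly the following shape would store every transferred .pdf file for later analysis. The rule text is an assumption for illustration (the message, sid and exact option keywords should be checked against the rule documentation of the Suricata version in use):

```
alert http any any -> any any (msg:"Store PDF for manual analysis"; fileext:"pdf"; filestore; sid:1000001; rev:1;)
```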
Suricata also includes the Lua scripting language, which provides more flexibility for creating rules with conditions that would be difficult or impossible to express in the outdated Snort rule syntax. Simply put, it allows users to adapt Suricata to the complex threats an enterprise typically faces. A significant shortcoming of both systems is that there are ways to deceive and circumvent them: for example, an attacker can flood the traffic with empty packets so that, while the system processes them, a real attack is carried out. Bro is an intrusion detection system that differs from the others in its focus on network analysis. While rule-based engines are designed to detect exceptions, Bro looks for specific threats and triggers warnings. The Bro network security monitor is, rather, a system for detecting malicious network activity. Where Snort and Suricata work with traditional IDS signatures, Bro uses scripts to analyse traffic. Bro's significant advantage is that these scripts also allow highly automated workflows between different systems, an approach that enables much more granular decisions than the old pass-or-drop. However, its configuration can become quite complex. Although Bro can certainly be used as a traditional IDS, users are more likely to use it to record detailed network behaviour. For example, it can store long-term records of all HTTP requests and results, or tables matching MAC addresses to IP addresses. Bro stores network metadata more efficiently than packet captures, which means that it can be searched, indexed, queried and reported in ways that were


previously unavailable. This makes Bro especially suitable for detecting malicious network activity and hunting for threats. While this flexibility is an obvious advantage, the disadvantage is that Bro, with its in-depth packet inspection, is resource-intensive. It is worth noting that studying threats is easier with Snort or Suricata; for this purpose Bro is quite difficult to use, although the community is actively working to improve it. IDS Bro was chosen for this work, as it meets the security requirements of the criteria mentioned above. Bro relies heavily on its extensive scripting language to define and analyse detection policies. In addition, Bro provides an independent signature language for performing low-level pattern matching.

3.4 Comparing Profile Operating Systems

The operating system plays an essential role, since it will be installed on special-purpose hardware. An important requirement is that the profile operating system must support installing the client and configuring the router. Three options were selected for analysis: m0n0wall, pfSense and OpenWrt. m0n0wall is a mini-distribution based on FreeBSD 6.4 for creating network gateways. The distribution is equipped with a simple and convenient web interface for setting all system parameters, and supports saving the entire configuration as a single XML file. Notable functions include: operation as a wireless access point, 802.1Q VLAN, firewall, NAT, traffic limiting, traffic monitoring with SVG graph generation, an SNMP agent, DNS cache, DynDNS client, IPsec, client/server for PPTP VPN, PPPoE, and DHCP. pfSense is probably one of the most popular distributions for routing tasks. It is based on FreeBSD and is capable of turning a simple, low-powered machine into a router. One delivery option is a LiveCD, which allows the system to be loaded from a disk image, but without the ability to install packages; for that, a complete installation on the hard disk is necessary. Images for installation on Compact Flash cards are also available. A large share of home routers run the OpenWrt operating system [13]. This OS is based on the Linux kernel and has several advantages, such as components whose size is optimised for the memory limitations of network equipment. The main feature of OpenWrt is full support of the JFFS2 file system, which allows the opkg package manager to be used for managing packages. All this makes OpenWrt an easily customisable and adaptable system for each specific case.
For routers that have a larger amount of flash memory (4 MB and up), the SquashFS file system is usually used with an overlay (a combination of mutable and immutable files in the same directory). In this case the file system uses space less efficiently, as it stores the description of changes in a separate section, but it allows an easy rollback to the default settings. The standard firmware provides a basic set of features; additional packages are used to extend the available functionality.

3.5 IDS Signatures

At the heart of the network traffic control system is the standard IDS/IPS Bro knowledge base. To ensure a high level of information security, this knowledge base is expanded with custom rules for identifying and processing various threats. Below are two rules aimed at detecting attacks on IoT devices and a typical attack on MS Windows, which the built-in protection functions of network equipment cannot cope with.

Signature for detecting the Mirai botnet:

signature mirai-botnet-detect {
  ip-proto == tcp
  dst-port == 80
  payload /.*mirai-botnet\.bin/
  event "Found botnet Mirai!"
}

Signature for detecting a Windows reverse shell:

signature windows_reverse_shell {
  ip-proto == tcp
  tcp-state established,originator
  payload /.*Microsoft Windows.*\x28C\x29 Copyright 1985-.*Microsoft Corp\./
  event "ATTACK-RESPONSES Microsoft cmd.exe banner (reverse-shell originator)"
}

To ensure a high level of network security, it is necessary to supplement and expand the functionality of the security system by adding scripts that take currently known vulnerabilities into account, using the most suitable methods on top of the existing knowledge base.
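As a sketch of how the server side can consume Bro's output, the fragment below parses a simplified, tab-separated notice line and extracts the offending source address for the control module. The three-column layout is an assumption for illustration: real notice.log files carry more columns, declared in their `#fields` header lines.

```python
def parse_notice_line(line: str):
    """Parse one line of a simplified, tab-separated notice log.

    Returns (note, src_ip) or None for headers and malformed lines.
    A three-column layout (ts, note, src) is assumed for illustration;
    a real parser should read the '#fields' header instead.
    """
    if line.startswith("#"):           # Bro log headers start with '#'
        return None
    fields = line.rstrip("\n").split("\t")
    if len(fields) < 3:
        return None
    _ts, note, src_ip = fields[:3]
    return note, src_ip
```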

4 Results

To study the performance of the developed traffic management system, both manual and automated testing was performed. To ensure network security, the existing knowledge base, extended with custom scripts, was integrated. Next, an experiment was carried out that involved emulating attacks such as DoS (Denial of Service) and IP spoofing, to test how well the detection of malicious traffic actually works. Within this experiment the IDS ran correctly, as confirmed by the traffic logs. Further analysis made it clear that IDS Bro has demonstrated


excellent performance: all of the simulated attacks were caught by the signatures and recorded in the logs, and the system successfully neutralised all the threats and gave instructions to the client module. During the evaluation, the following advantages and disadvantages were also noted. The advantage of this solution is that, thanks to the developed algorithm with its client-server architecture, the number of monitored routers can be increased. As for the disadvantages: when analysing the IDS logs, it was discovered that event processing and decision-making occur with a delay, after the malicious activity itself. This is due to the mechanism of the IDS on which the testing was carried out: the detection of malicious activity in traffic occurs ex post.

5 Conclusion

The aim of this study was to develop an original algorithm that ensures the information security of small computer networks. The algorithm makes it possible to identify suspicious network activity and eliminate threats through remote control of L3 network equipment. Traffic processing is performed on a personal computer using an intrusion detection and prevention system, together with a system for analysing and correlating information security events. Information flows are redirected using port mirroring technology on the router. The software implementation of this algorithm formed the basis of the implemented traffic management system for a SOHO-class computer network with weak computing capacities on gateway hosts. To test the performance of the solution, manual and automated testing was used. As a result, the system successfully identified and eliminated threats. This solution is recommended for SOHO networks that have weak computational power on internetwork hosts and no comprehensive firewall. The novelty of this solution lies in the centralised security management of the network infrastructure carried out from the user's PC, delegating the task of analysing malicious activity to the user's computer. The specificity of problems in this area remains crucial, since neglecting security measures, as some of the top manufacturers of home network equipment have previously done, can put the confidential data of many customers at risk. The possibility of scaling the computing power of the user's workplace and increasing the bandwidth of the communication channel made it possible to implement this solution in a SOHO-class network. The solution provides network security and covers a wide range of tasks, from improving the quality of network activity logging to auditing IoT devices for malicious activity. Thus, this solution can be further improved with various functions.
For example, the solution could be ported to built-in network equipment for small, medium and large-scale computing capacities with the use of more powerful processors. To increase the speed and volume of information processing, calculations can be parallelised on the GPU (graphics processing unit) at users' workplaces. For more accurate detection of malicious network activity and traffic logging on the network, a correlation engine and a SIEM system are recommended.


Combined Intellectual and Petri Net with Priorities Approach to the Waste Disposal in the Smart City

Olga Dolinina, Vitaly Pechenkin, and Nikolay Gubin

Department of Information Systems and Technology, Yuri Gagarin State Technical University of Saratov, SSTU, Saratov, Russia
[email protected]

Abstract. The disposal of solid household waste from large cities is an important part of their environmental safety system, which is in turn part of the “Smart City” concept. The suggested approach is based on a Petri net with priorities, which makes it possible to simulate the process of solid waste disposal. A way to evaluate the effectiveness of the use of a specialized expert system is also considered. The priorities of the Petri net transitions are treated as probabilistic characteristics of the live transitions in the network; changing the priorities allows the Petri net behavior to be tuned to existing empirical data. The results make it possible to investigate the impact of quantitative characteristics on the whole solid waste disposal process, such as the filling speed of the waste containers, the number of trucks used, the waiting time for job assignment, and some others. The main result of the work is the evaluation of the effectiveness of the expert system in the management of specialized transport for the removal of municipal solid waste. The simulation results are based on the specified probabilistic levels of making the right decision when using the expert system.

Keywords: Smart City · Simulation modeling · Petri net · Transition · Place · Expert system · Garbage collection · Priority

1 Introduction

Decision support systems are now becoming increasingly common in Smart City applications designed for emergency warning systems, law enforcement, public safety, and many other areas of human activity. However, a control system needs to take into account a large number of technical, economic, political, social and legal factors while making decisions, which significantly complicates the process of choosing the right solution. As a rule, this is due to the difficulties that arise in collecting relevant, reliable and complete information on the subject. A significant increase in the volume of incoming information changes the methods of analysis and requires not only automating the processing and examination of data, but also intellectualizing management processes and using effective intelligent decision-making support technologies [1]. Recently, the main direction in the development of decision-making systems has been the development of knowledge-based systems [2].

© Springer Nature Switzerland AG 2019
O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 755–767, 2019. https://doi.org/10.1007/978-3-030-12072-6_61


The basic infrastructure of the “Smart City” must include adequate water supply, assured electricity supply, efficient urban mobility and public transport, IT connectivity and digitalization, social services, and sanitation, including garbage collection management. There are also problems related to the social organization of the society, the level of understanding of values and standards, and attitudes towards environmentally sound behavior. All of the above leads to the conclusion that solving the problem of waste management requires an integrated approach based on the use of intelligent management systems. The task of waste removal is part of the overall problem of creating an environmentally friendly environment in the urban space, usually associated with the natural habitat and protecting the city's ecology from pollution [3]. “Smart Environment” can be considered a part of the “Smart City” concept, the core of which is the use of mobile communication technologies (the Internet of Things). Obviously, targeted waste collection on time saves expenditures and fuel and reduces exhaust gas emissions. When considering the essence of the “Smart City” concept, it is important to identify a set of factors necessary for understanding the projects implemented within its framework. The project initiatives for the smart city should be focused on the creation of urban space infrastructure and organizational systems based on modern technologies that respond to emerging problems. Among the considered factors one must mention the following [4]:

– management and organization;
– technologies;
– policy;
– social communities;
– the economy;
– infrastructure;
– natural environment.

A typical task for most of the projects implemented within the framework of the “Smart City” concept is the targeted management of the removal of solid industrial and domestic garbage. This task is directly related to all the characteristics of a typical project given in the definition of “Smart City” [5]. As a rule, the solution methods contain components responsible for the use of mobile technologies and intellectual information systems based on knowledge bases. This approach involves information, communication (based on mobile communication) and Web 2.0 technologies, which make it possible to accelerate decision-making processes, apply innovative methods of city management, and improve the environmental safety of the urban space [6]. Such tasks remain relevant, and at the present time methodological fundamentals and applied methods for their practical solution are being actively developed.

2 A Platform “Smart Clean City” Structure

The general structure of the “Smart Clean City” system and the schedule optimization algorithms for garbage collection trucks (GCT) are described in [7], where the system model is presented in the form of a dynamic graph whose weight functions depend on time and are determined by the current state of the areas for garbage containers (AGC) and the current road traffic. The current structure of the platform for application development within the “Smart Clean City” project is presented in Fig. 1.

Fig. 1. Structure of “Smart Clean City” platform

The platform unites several subsystems that solve various problems of analysis and management for the solid waste collection and removal system. These subsystems are combined by means of special adapters that synchronize data and control the various parts of the entire system. At an early stage of the system development, this interaction was carried out directly between subsystems through inherited interfaces. The developed applications have a unified interface and maintain the integrity of the data model across the subsystems. The subsystems are server applications that simulate the system of collection and removal of waste (SUB_1), apply optimization algorithms for the truck schedule over several parameters (SUB_2), and use the knowledge base built on the experience of the waste disposal enterprise (SUB_3) in the decision-making system. The developed applications are located both on the server part of the system and in client applications on mobile platforms. The “User Management” service is used by administrators who manage user access to various parts of the system and devices; it performs authentication, authorization, and control of user access to IT resources. The “Database Management” service organizes access to data sources in applications and backs up the entire system database. The “User Interface” service connects the drivers of the specialized waste disposal vehicles with the system through client applications implemented on the Android mobile platform. “Vehicle Monitoring” is used to determine the location of a vehicle using GPS/GLONASS technology, to collect data while the vehicle follows its route, and to transfer the data to the system.


In the next section, the mathematical apparatus of Petri nets used in SUBSYSTEM 1 (“Simulation”) for developing the garbage collection system model and for analyzing its effectiveness will be described.

3 Simulation of the Garbage Collection Process by Petri Nets with Priority

Petri nets are a well-established mathematical formalism for modeling and analyzing distributed systems. This approach makes it possible to take into account a large number of details of the functioning of the analyzed processes, which can range from technical systems to business systems or social interactions [8–11]. In recent years many extensions have been proposed in order to capture specific, possibly quite complex, behavior in a more direct manner. These extensions include inhibitor Petri nets [12] and nets with priorities for transitions [13, 14].
A model of the garbage collection process is proposed, and a set of methodologies is described that solves a wide range of issues for increasing the efficiency of using the trucks that transport household waste to landfills outside the city. The methodological complex includes three levels of building decision-making systems, based on different formal mathematical approaches.
The first level is responsible for stochastic modeling of the process being controlled. For its implementation, the apparatus of Petri nets is used, which allows analyzing parallel processes in the contest mode, as close as possible to the real conditions of a dynamically changing situation [15]. This tool is also traditionally used for the analysis of transport systems that perform cargo handling tasks [16]. Stochastic modeling is used to analyze the parameters of the system and to determine the optimal parameters of the loading and unloading cycle of the trucks that take out solid domestic waste. The optimized model allows, in turn, the other two approaches to be tested.
To solve the problem of constructing optimal schedules, a dynamic network model based on the formalization proposed below is used. In this setting, the system model and method of solution differ from the existing ones [17] in that the network model of the transport system is dynamic and takes into account the actual state of the road network and traffic. In addition, information about the fullness of the containers is received in real time and used in the calculations.
The third approach, which allows using the expert knowledge of specialists for managing the logistics of waste disposal, is the methodology of designing and applying knowledge bases, namely rule-based expert systems that evaluate the variants of automatically constructed schedules and choose the most realistic ones among them. Expert knowledge can include knowledge about typical traffic problems, including peak hours, the time needed to eliminate road accidents in various parts of the city, etc.
Garbage disposal is a complex technological process in which garbage collection companies are involved. The effectiveness of this process is affected by a large number of factors, described earlier, which make it difficult to obtain the optimal solution. To analyze the process of waste disposal, it is also necessary to take into account random factors and complex dependencies between all subjects of the process. To implement this analysis, one can use the methodology of simulation.
Formally, a Petri net with priorities is defined as a 6-tuple of the form

PN = (P, T, F, m0, W, ρ),  (1)

where P = {p1, p2, …, pn} is a finite set of places; T = {t1, t2, …, tm} is a finite set of transitions; F ⊆ (P × T) ∪ (T × P) is a finite set of arcs; m0 : P → {0, 1, 2, 3, …} is some initial marking; W : F → {1, 2, 3, …} is a weight function for arcs; ρ : T → {1, 2, 3, …} is a transition priority mapping; P ∩ T = ∅, P ≠ ∅, T ≠ ∅.
Places pi are represented by circles, transitions ti by boxes, and the flow relation F by directed arcs. Places can carry tokens, represented by filled circles or natural numbers near the place. An initial marking in a Petri net is a function m0 mapping each place to some natural number (possibly zero). A current marking m is designated by putting m(p) tokens into each place p ∈ P. For a transition t ∈ T, an arc (x, t) is called an input arc, and an arc (t, x) an output arc. A place p ∈ P with an arc from itself to a transition t ∈ T is an input place for t, and a place with an arc from a transition t to itself is an output place. A transition t ∈ T is enabled in the marking m if

∀p ∈ P: m(p) ≥ W(p, t).  (2)

A transition with no input places is referred to as a source transition, and a transition with no output places is called a sink transition. A source transition is always enabled. An enabled transition t can fire or not fire. The firing of an enabled transition t removes W(p, t) tokens from each input place p of t and adds W(t, p) tokens to each output place p of t, yielding a new marking m′ (denoted m →t m′) such that

∀p ∈ P: m′(p) = m(p) − W(p, t) + W(t, p).  (3)

We say in this case that the marking m′ is directly reachable from the marking m. The marking m is called dead if it enables no transition. An initial run in PN is a finite or infinite sequence of firings of enabled transitions

m0 → m1 → m2 → …  (4)

A marked Petri net with priorities is a Petri net together with a priority mapping ρ. The firing rule for a Petri net PN with priorities is defined as follows. Let m be a marking in PN, and let T′ ⊆ T be the subset of transitions enabled in m (according to the usual rules for Petri nets). Then the probability of firing for an enabled transition t ∈ T′ in m is equal to

ρ(t) / Σti∈T′ ρ(ti),  (5)

i.e. transitions with higher priorities have a higher probability of firing than transitions with lower priorities.
Within the framework of the Petri net approach, it is proposed to build a model of the process of handling waste containers, which includes various states of vehicles and waste containers, technical specialists and drivers, and the processes of their interaction. The tokens of the model are the trucks for the removal of waste and the container sites. The model fragment contains 16 places and 19 transitions:

P = {pi | i = 1, …, 16},  (6)

T = {ti | i = 1, …, 19}.  (7)

The transitions and places selected for modeling reflect the parallel processes taking place and take into account the various possible states depending on the factors influencing them. The obtained model is used to carry out a simulation experiment to analyze the characteristics of the process of removal of solid domestic waste. The sets of places P and transitions T are presented in Tables 1 and 2, respectively.
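As a minimal illustration, the 6-tuple (1) and the firing rules (2), (3) and (5) can be sketched in Python. This is our own sketch (class and method names are assumptions, and it is not the simulation tool used in the paper); the arc set F is represented implicitly by the keys of the weight map:

```python
import random

class PriorityPetriNet:
    """Sketch of PN = (P, T, F, m0, W, rho) with probabilistic,
    priority-weighted firing."""

    def __init__(self, places, transitions, weights, marking, priority):
        self.places = list(places)            # P
        self.transitions = list(transitions)  # T
        self.w = dict(weights)                # W(x, y) for each arc (x, y) in F
        self.m = dict(marking)                # current marking m (initially m0)
        self.rho = dict(priority)             # rho: transition priorities

    def enabled(self, t):
        # Eq. (2): every input place of t must hold at least W(p, t) tokens.
        # A source transition (no input arcs) is therefore always enabled.
        return all(self.m[p] >= self.w.get((p, t), 0) for p in self.places)

    def fire(self, t):
        # Eq. (3): remove W(p, t) from each input place, add W(t, p)
        # to each output place.
        for p in self.places:
            self.m[p] += self.w.get((t, p), 0) - self.w.get((p, t), 0)

    def step(self):
        # Eq. (5): choose among enabled transitions with probability
        # proportional to their priorities; return None on a dead marking.
        candidates = [t for t in self.transitions if self.enabled(t)]
        if not candidates:
            return None
        t = random.choices(candidates,
                           weights=[self.rho[t] for t in candidates])[0]
        self.fire(t)
        return t
```

For example, a two-place net p1 → t1 → p2 with m0 = {p1: 1, p2: 0} fires t1 once, moving the token to p2, and then reaches a dead marking.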

Table 1. Places of the Petri net for garbage collection modeling

p1: GCT is in the parking (garage)
p2: GCT is waiting for assignment
p3: AGC is filled
p4: GCT is assigned to clean up the AGC
p5: AGC is being processed (“is busy”)
p6: GCT is on the AGC
p7: AGC is empty
p8: GCT is on the AGC
p9: GCT diagnostics
p10, p11, p13, p15: Route definition
p12: Waiting for traffic completion
p14, p16: Waiting for completion of emergency
p17: GCT on the route
p18: GCT is on the solid garbage dumps
p19: GCT is empty
p20: GCT is partially full
p21: AGC driver requests for the next assignment
p22: Solid garbage dump


Table 2. Transitions of the Petri net for garbage collection modeling

t1: Driver receives permission for work
t2: Driver does not get a work permission
t3: There is a filling of the AGC (TSource)
t4: There is a partial filling of the AGC
t5: GCT movement to the AGC
t6, t7: There is a complete AGC cleaning
t8: There is a partial AGC cleaning
t9: There is a GCT breakdown
t10: There is a GCT filling
t11: GCT goes to next AGC
t12: No route with optimization
t13: Traffic optimized route
t14: Route optimized by Expert System
t15: In traffic jam
t16: Completion of traffic
t17, t20, t23, t24: On the route
t18: Arrive to unexpected situation
t19: Completion of an unexpected situation
t21: Get an unexpected situation
t22: Completion of an unexpected situation
t25: Unloading waste
t26: The driver is ready for the next assignment
t27: Finish of the working day for a GCT
t28: GCT is assigned for unloading
t29: Determination of the next AGC
t30: On the route to the next AGC
t31: On-site GCT repair

The Grin software (grin-software.net) is suggested for implementing priority Petri nets and simulating their dynamic behavior. Grin provides a unifying graph theory framework with a special extension for Petri net simulation. With this software tool one can design, animate, and simulate Petri net models, and it facilitates running our Petri nets in automatic simulation mode. The software allows editing the network in an interactive graphical mode, assigning the attributes of places and transitions, and running the simulation process with specified quantitative restrictions (the number of transitions in one simulation cycle). It is important to note that every transition is assigned a priority, which is directly related to the probability of its firing during the simulation procedure. Figure 2 shows a fragment of the Petri net used to analyze the process of waste disposal. After the analysis, the network parameters are modified and a comparative analysis of the effectiveness of the organization of the process is carried out.


The proposed model of the Petri net for the process of waste collection and removal allows a computational simulation experiment to be performed to determine an effective algorithm for managing the collection and removal of waste.

Fig. 2. Fragment of the Petri net for modeling the process of collection and removal of solid waste

The relevance of the Petri net model represented in Fig. 2 and described in the previous sections is demonstrated through several simulations performed for different configurations of the system, defined according to the functions to be activated in the Petri net model. Since the network shown in Fig. 2 contains a source transition (TSource), the net always has at least one enabled transition and is therefore live at any moment. The tokens corresponding to the trucks move through the network cyclically, returning either to the garage P1 (transition T16) or to the queue for a new job assignment (firing transition T17).
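This interplay between the always-enabled source transition and the cyclic truck tokens can be illustrated with a self-contained toy model (all names, priorities and the two-transition structure are illustrative assumptions, not the paper's net): a “fill” source transition races against a “clean” transition that is enabled only while at least one container site is filled, with winners drawn by the priority-weighted rule (5).

```python
import random

def simulate(steps, fill_priority, clean_priority, seed=0):
    """Toy priority race between a source transition ('fill', always
    enabled, cf. TSource) and a cleaning transition enabled only while
    at least one site is filled; clean_priority=0 models 'no trucks'."""
    rng = random.Random(seed)
    filled = 0  # tokens in a place like p3 ("AGC is filled")
    for _ in range(steps):
        enabled = [("fill", fill_priority)]
        if filled > 0 and clean_priority > 0:
            enabled.append(("clean", clean_priority))
        names, weights = zip(*enabled)
        t = rng.choices(names, weights=weights)[0]  # Eq. (5) selection
        filled += 1 if t == "fill" else -1
    return filled

# With no cleaning capacity the source transition fires every step,
# so the backlog of filled sites grows without bound:
print(simulate(100, fill_priority=1, clean_priority=0))  # → 100
# With a high-priority cleaner the backlog stays near zero:
print(simulate(100, fill_priority=1, clean_priority=9))
```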

4 Expert System for Evaluation of Calculated Optimal Routes

The evaluation of the proposed route (optimized by the criterion of minimum time) is complemented by an evaluation of the confidence level for the route, which is determined by the expert system. The expert system checks the compliance of the route with the latest information about the traffic situation and with the experts' forecast of road network congestion. If the constructed route does not have a high level of reliability, it is rebuilt. The rules of the expert system are as follows:

pri : ri : vi : If aj then bk with the confidence level ck,  (8)

where ri ∈ {R} is the set of the rules; pri ∈ {PR} is the set of the priorities; vi ∈ {V} is the set {Points on the Road}; aj ∈ {A} is the set of the facts; ck ∈ {C} is the set of the linguistic variables, where C = {‘possible’, ‘probable’, ‘most likely’} and ck is a fuzzy variable described by a trapezoid membership function. The rules are formed by experts (from the traffic police or professional drivers) who are well acquainted with the traffic situation in the city. For example, in case of a traffic accident and the corresponding traffic jam, the experts could decide whether to select another route or to wait. If the described algorithm for calculating the optimal path tries to select the next node vi but receives a message from the mobile maps service about a high load on the transport network, and the knowledge base contains a rule ri with priority pri ≥ 80, then the decision (to follow the algorithm or to select another node) is made on the basis of ri. The structure of the decision-making system is shown in Fig. 3.

Fig. 3. The structure of calculating the optimal route

The knowledge base consists of rules, examples of which are presented below (AGC stands for Area for Garbage Containers, GCT for Garbage Collection Truck):
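As a sketch of how a rule in the form (8) and its trapezoidal confidence level ck might be represented and evaluated, consider the following. The rule text, the membership breakpoints and all names here are illustrative assumptions, not taken from the paper's knowledge base:

```python
def trapezoid(x, a, b, c, d):
    # Trapezoidal membership: 0 outside [a, d], rising on [a, b],
    # 1 on [b, c], falling on [c, d].
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical breakpoints for C = {'possible', 'probable', 'most likely'}
# over a 0-100 confidence axis.
LEVELS = {
    "possible":    (0, 0, 30, 50),
    "probable":    (30, 50, 70, 85),
    "most likely": (70, 85, 100, 100),
}

def confidence_label(score):
    # Map a numeric confidence score to the strongest linguistic level.
    return max(LEVELS, key=lambda k: trapezoid(score, *LEVELS[k]))

# A hypothetical rule in the form (8): its priority 85 clears the
# pri >= 80 threshold mentioned above, so it would override the
# plain shortest-path choice.
rule = {
    "priority": 85,
    "condition": "traffic jam reported near the next AGC",
    "action": "reroute the GCT to an alternate node",
    "confidence": confidence_label(90),
}
```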


5 Results for the Process of Waste Collection and Removal Modeling

Let us consider the results of the modeling of the system. This article presents only some of the results obtained, which allow us to evaluate the effectiveness of the entire system. One of the important results of the presented analysis is the estimation of the number of trucks necessary for waste disposal, at a given rate of appearance of filled containers, in order to reduce the number of untreated containers. It is clear that increasing the number of trucks makes it possible to minimize the number of such containers, but this, in turn, increases the waiting time of the trucks for the next task assignment. Figure 4 presents the dependence of the elapsed time (number of network cycles) for processing a certain number of containers under various types of route optimization. The solid line shows the time when the non-optimized route is selected. If traffic optimization is used, the elapsed time is reduced by about 10%. If optimization according to the rules of the expert system is used, taking into account the current actual state of the traffic, the time spent is reduced by another 10%.

Fig. 4. Time for processing containers on AGC (time in % versus number of containers, for no optimization, optimization on traffic, and optimization on the expert system)

Obviously, the results depend on the prioritization of the transitions that determine the chosen optimization method. Important parameters are also the container loading rate on the AGC (priority of transition T3) and the number of GCTs used (the number of tokens in place P1 at the start of the process simulation). Figure 5 shows the change in the number of unprocessed containers. Stabilization of the parameter means the absence of strong deviations between modeling sessions for a given probability of using the expert rules. The Y axis is the number of unprocessed containers after the stabilization of the modeling process; the X axis is the probability of applying the expert system. The graphs show this relationship for different numbers of GCTs used.

Fig. 5. Number of unprocessed containers versus probability of expert system optimization (in %), for 5, 10, 15, 20 and 25 trucks

The graphs clearly show that, for the given AGC fill rate, the differences between using 20 or 25 GCTs are not significant. Another regularity is the increase in the value of using the expert system as the number of GCTs grows. The greater the number of GCTs used, the later the dependency curve flattens out, after which the efficiency of using the expert system no longer increases.


6 Conclusion

To increase the effectiveness of waste collection, it is necessary to develop a dynamically managed, targeted waste collecting system. The separate solutions which are currently implemented should be prepared for integration into the intellectual urban infrastructure. The proposed model of solid waste disposal takes into account the dynamic nature of the analyzed process and makes it possible to evaluate the benefits of using the knowledge base. The dependence of the number of unprocessed containers on the probability of correct prediction of the best route by the expert system is obtained. The nature of the dependence of the processing time for a certain number of containers on the chosen method of route optimization is also demonstrated. The described system can be considered a targeted waste management system; it has been tested in Saratov (Russia), a city with a population of approximately 1 million people. Pilot implementation of the system from September 2015 to the present shows a 21% decrease in the processing time of containers compared to traditional manual planning, without taking into account the dynamic parameters of the system described above. Similar advantages are demonstrated by the simulation of the waste removal process based on Petri nets with several levels of optimization. Comparative analysis has been carried out using real data and simulation results. The area of the city for which the model has been developed has about 250 containers at 56 sites, and two landfills for the removal of solid household waste are used. Each container has a capacity of 100 kg; the carrying capacity of a truck is estimated at 5000 kg (the actual capacity depends on the degree of compressibility of the waste), so one truck can serve up to 50 full containers per trip. The model of the system is universal and can be used to optimize the process of collecting and removing solid domestic waste in large metropolitan areas.

References

1. Rassel, S., Norvig, P.: Artificial Intelligence: A Modern Approach. Vil'yams, Moscow, 1408 p. (2006)
2. Sahina, S., Tolunb, M.R., Hassanpourb, R.: Hybrid expert systems: a survey of current approaches and applications. Expert Syst. Appl. 39(4), 4609–4617 (2012)
3. Arnott, D., Pervan, G.: A critical analysis of decision support systems research. J. Inf. Technol. 20(2), 67–87 (2005)
4. Chourabi, H., Nam, T., et al.: Understanding smart cities: an integrative framework. In: Proceedings of the 2012 45th Hawaii International Conference on System Sciences, pp. 2289–2296. IEEE Computer Society (2012)
5. IEEE Smart City definition. http://smartcities.ieee.org/about. Accessed 21 Oct 2018
6. Toppeta, D.: The Smart City Vision: How Innovation and ICT Can Build Smart, “Livable”, Sustainable Cities. The Innovation Knowledge Foundation. http://inta-aivn.org/. Accessed 19 Oct 2018
7. Brovko, A., Dolinina, O., Pechenkin, V.: Method of the management of garbage collection in the “Smart Clean City” project. In: Gaj, P., Kwiecień, A., Sawicki, M. (eds.) CN 2017. CCIS, vol. 718, pp. 432–443. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59767-6_34


8. Huang, Y., Chung, T.: Modeling and analysis of urban traffic lights control systems using timed CP-nets. J. Inf. Sci. Eng. 24, 875–890 (2008). http://www.iis.sinica.edu.tw/page/jise/2008/200805_13.pdf
9. DiCesare, F., Kulp, P.T., Gile, M., List, G.: The application of Petri nets to the modeling, analysis and control of intelligent urban traffic networks. In: Valette, R. (ed.) ICATPN 1994. LNCS, vol. 815, pp. 2–15. Springer, Heidelberg (1994). https://doi.org/10.1007/3-540-58152-9_2
10. Benarbia, T., Labadi, K., Moumen, D., Chayet, M.: Modeling and control of self-service public bicycle systems by using Petri nets. Int. J. Model. Ident. Control 17, 173–194 (2012)
11. Malkov, M.V., Maligina, S.N.: Petri nets and modeling. In: Proceedings of the Kolsk Science Center RAS 3, pp. 35–40 (2010). (in Russian)
12. Verbeek, H.M.W., Wynn, M.T., van der Aalst, W.M.P., Hofstede, A.H.M.: Reduction rules for reset/inhibitor nets. J. Comput. Syst. Sci. 76(2), 125–143 (2010)
13. Best, E., Koutny, M.: Petri net semantics of priority systems. Theor. Comput. Sci. 96(1), 175–215 (1992)
14. Lomazova, I., Popova-Zeugmann, L.: Controlling Petri net behavior using priorities for transitions. Fundam. Inform. 143(1–2), 101–112 (2016)
15. Ryabtsev, V., Utkina, T.: Information technology for design of automated control of technological processes systems. Control Commun. Secur. Syst. 1, 207–239 (2016)
16. Naumov, V.: Petri nets in modeling the process of freight forwarding services. Road transport (Kharkov) 24, 120–124 (2009). (in Russian)
17. Anagnostopoulos, T., Zaslavsky, A., Medvedev, A., Khoruzhnikov, S.: Top-k query based dynamic scheduling for IoT-enabled smart city waste collection. In: Proceedings of the 16th IEEE International Conference on Mobile Data Management, MDM 2015, Pittsburgh, US (2015)

Author Index

A Abramov, Maxim, 398 Abramova, Olga A., 311 Abrosimov, Mikhail, 73 Akhmetov, Bakhytzhan, 678 Alenova, Raya, 678 Alipchenko, Sergei, 240, 301 Aronov, Leonid, 217 Arzamastsev, Alexander, 516 Askarova, Adel, 652 B Balgabayeva, Lyazzat, 678 Basinya, Evgeny, 689 Baukov, Andrew, 217 Berman, Aleksandr, 117 Blinkov, Yury, 603 Bobrov, Leonid, 463 Bogomolov, Aleksey, 354, 665 Bogomolov, Alexey, 288 Bolshakov, Aleksander, 557 Brovko, Alexander, 73, 157 Bulatova, Aiguzel Z., 311 Buldakova, Tatyana, 3 C Chernenko, Aleksandr, 616 Chernyshkova, Elena, 179 D Daurov, Stanislav, 179 Dimitrov, Lubomir, 321, 453 Dimitrov, Slav, 453 Dimitrova, Reneta, 453 Dmitriev, Vladimir, 217

Dolinina, Olga, 25, 250, 262, 700
Dorodnykh, Nikita, 117

F
Fatkullina, Nazgul B., 311
Filimonyuk, Leonid, 25
Filippov, Aleksey, 168, 207, 376
Fominykh, Dmitry, 25, 48
Frantsuzova, Galina, 229, 321, 475
Frolova, Natalya, 408

G
Geyda, Alexander, 343
Glukhova, Olga, 81, 301
Golovnin, Oleg, 387
Grigoricheva, Maria, 207
Gubin, Nikolay, 700

I
Ivannikova, Nadezhda, 579
Ivanov, Sergey, 603
Ivaschenko, Anton, 333, 423
Ivaschenko, Vladimir, 25, 48, 240, 487, 544, 592, 665
Ivzhenko, Sergey, 652

K
Kalinina, Anna, 127
Kamalov, Leonid, 137
Kamenskikh, Tatiana, 179
Karapetyan, Ani, 62
Kasatkina, Ekaterina, 533
Katirkin, Georgiy, 333
Khamutova, Maria, 487
Khanova, Anna, 445

© Springer Nature Switzerland AG 2019 O. Dolinina et al. (Eds.): ICIT 2019, SSDC 199, pp. 769–771, 2019. https://doi.org/10.1007/978-3-030-12072-6


Kharitonov, Nikita A., 642
Khlobystova, Anastasiia, 398
Khryashchev, Vladimir, 148
Kirillov, Sergey, 217
Kitouni, Ilham, 364
Klenov, Dmitry, 191
Kolbenev, Igor, 179
Kondratov, Dmitry, 127, 570, 603, 616
Kondratova, Yulia, 127
Korolev, Mikhail, 504
Kostanyan, Armen, 62
Kouah, Sofia, 364
Krivosheeva, Darina, 3
Kulakova, Ekaterina, 240, 301
Kulik, Aleksei, 557
Kumova, Svetlana, 179
Kusheleva, Ekaterina, 544
Kushnikov, Oleg, 81, 592
Kushnikov, Vadim, 25, 48, 81, 240, 262, 301, 354, 487, 544, 592, 665
Kushnikova, Elena, 240, 301, 487, 544, 665
Kuzmin, Andrey, 423
Kuznetsova, Kseniya, 504

L
L’vov, Alexey, 191, 516, 652
Lada, Alexander, 104
Lakhno, Valerii, 678
Lebedev, Anton, 148
Liapidevskiy, Alexander, 229
Link, Guido, 157
Lipatova, Svetlana, 630
Litovka, Yuri, 516
Lutoshkin, Igor, 630

M
Malyukov, Vladimir, 678
Maximov, Anatoly G., 642
Medyankina, Irina, 463
Melnikova, Nina, 288, 516
Mishchenko, Dmitry, 191
Mogilevich, Lev, 127, 603, 616
Moiseeva, Tatiana V., 14
Moshkin, Vadim, 376

N
Nefedov, Denis, 533
Neusypin, Konstantin, 36
Nikolaychuk, Olga, 117
Nikolov, Stelian, 453
Nikulina, Yuliya, 408
Nosek, Jaroslav, 321

O
Osinin, Ilya, 92
Ostroglazov, Nikita, 387

P
Papshev, Sergey, 288
Pechenkin, Vitaly, 504, 700
Perepelkina, Olga, 570
Piminov, Dmitriy, 504
Pityuk, Yulia A., 311
Polyanskov, Yuriy, 630
Popov, Victor, 127, 616
Popova, Elizaveta, 616
Potemkin, Sergey, 179
Prokhorov, Sergey, 423
Proletarsky, Andrey, 36
Protalinsky, Oleg, 445

R
Rezchikov, Alexander, 25, 48, 81, 240, 301, 354, 487, 544, 592, 665
Rodionova, Zinaida, 463
Romanov, Anton, 168
Rudakov, Igor V., 433
Rudkovskiy, Aleksander, 689

S
Saburova, Ekaterina, 533
Sadchikov, Pavel, 579
Samartsev, Andrey, 25, 48, 487, 544
Selezneva, Maria, 36
Semezhev, Nickita, 652
Shcherbatov, Ivan, 445
Shiskin, Vadim, 137
Shulga, Tatyana, 48, 354, 408
Sitnikov, Pavel, 333
Skonnikov, Petr, 217
Smirnov, Sergey V., 14
Smirnov, Sergey, 104
Soldatkina, Oksana, 354
Solovjev, Denis, 516
Solovjeva, Inna, 516
Srednyakova, Anastasiya, 148
Stepanova, Olga, 148
Stolbova, Anastasia, 387
Stolbova, Anastasya, 423
Stroganov, Iurii V., 433
Suyatinov, Sergey, 276
Svetlov, Michael, 191
Svetlova, Marina, 191
Sytnik, Alexander, 288, 408
Sytnik, Irina, 81

T
Tairova, Kate, 137
Tashimova, Anara, 678
Toibaeva, Shara, 463
Toropova, Olga, 408
Tsvirkun, Anatoly, 592
Tulupyev, Alexander L., 642
Tulupyev, Alexander, 398
Tverdokhlebov, Vladimir, 354, 665

U
Umnova, Elena, 652
Utepbergenov, Irbulat, 463

V
Vagarina, Natalia, 652
Veshneva, Irina, 557
Volkova, Liliya, 433
Voloshinov, Alexander, 250
Vostrikov, Anatoly, 475

Y
Yamaltdinova, Nailya, 630
Yandybaeva, Natalya, 592
Yardaeva, Margarita, 630
Yarushkina, Nadezhda, 168, 207, 376
Yurin, Aleksandr, 117

Z
Zhmud, Vadim, 229, 321, 475
Zholobov, Alexandr, 579