Applied Physics, System Science and Computers III: Proceedings of the 3rd International Conference on Applied Physics, System Science and Computers (APSAC2018), September 26-28, 2018, Dubrovnik, Croatia [1st ed.] 978-3-030-21506-4;978-3-030-21507-1

This book reports on advanced theories and methods in three related fields of research: applied physics, system science

503 8 46MB

English Pages XI, 356 [361] Year 2019

Report DMCA / Copyright

DOWNLOAD FILE

Polecaj historie

Applied Physics, System Science and Computers III: Proceedings of the 3rd International Conference on Applied Physics, System Science and Computers (APSAC2018), September 26-28, 2018, Dubrovnik, Croatia [1st ed.]
 978-3-030-21506-4;978-3-030-21507-1

Table of contents :
Front Matter ....Pages i-xi
Front Matter ....Pages 1-1
Doppler Delay in Navigation Signals Received by GNSS Receivers (Lucjan Setlak, Rafał Kowalik, Maciej Smolak)....Pages 3-8
Experimental Setup for H2/O2 Small Thruster Evaluation (Jeni Alina Vilag, Valeriu Alexandru Vilag, Cleopatra Florentina Cuciumita, Răzvan Edmond Nicoară)....Pages 9-15
Tracking the Evolution of Functional Connectivity Patterns Between Pancreatic Beta Cells with Multilayer Network Formalism (Marko Gosak, Lidija Križančić Bombek, Marjan Slak Rupnik, Andraž Stožer)....Pages 16-21
Ground Effect Influence on Aircraft Exhaust Jet with Different Nozzle Configurations (Bogdan Gherman, Oana Dumitrescu, Ion Malael)....Pages 22-28
Silver Thin Film and Water Dielectric SPR Sensor Experimental and Simulation Characteristics (Tanaporn Leelawattananon, Suphamit Chittayasothorn)....Pages 29-34
On the Solution of the Fredholm Equation with the Use of Quadratic Integro-Differential Splines (I. G. Burova, N. S. Domnin)....Pages 35-41
Application of Raman Spectroscopic Measurement for Banknote Security Purposes (Hana Vaskova, Pavel Tomasek, Milan Struska)....Pages 42-47
Design on Clothes with Security Printing, with Hidden Information, Performed by Digital Printing (Jana Žiljak Gršić, Lidija Tepeš Golubić, Vilko Žiljak, Denis Jurečić, Ivan Rajković)....Pages 48-55
Front Matter ....Pages 57-57
An Overview of Solutions to the Issue of Exploring Emotions Using the Internet of Things (Jan Francisti, Zoltán Balogh)....Pages 59-67
Knowing and/or Believing a Think: Deriving Knowledge Using RDF CFL (Martin Žáček, Alena Lukasová, Petr Raunigr)....Pages 68-73
Software Solution Incorporating Activation Congnitive Memory Portion in Early Stages of Alzhaimer’s Disease (Provaznik Josef, Kopecky Zbynek, Brozek Josef, Sotek Karel, Brozkova Monika, Karamazov Simeon et al.)....Pages 74-80
Unity3D Game Engine Applied to Chemical Safety Education (Nishaben S. Dholakiya, Jan Kubík, Josef Brozek, Karel Sotek)....Pages 81-87
Use of Game Engines and VR in Industry and Modern Education (Tim van Der Heijden, Dan Hamerník, Josef Brozek)....Pages 88-93
The Use of Cloud Computing in Managing Companies and Business Communication: Security Issues for Management (Marcel Pikhart)....Pages 94-98
Social Network Sites and Older Generation (Blanka Klimova)....Pages 99-104
Development of a Repository of Virtual 3D Conversational Gestures and Expressions (Izidor Mlakar, Zdravko Kačič, Matej Borko, Aleksandra Zögling, Matej Rojc)....Pages 105-110
Permutation Codes, Hamming Graphs and Turán Graphs (János Barta, Roberto Montemanni)....Pages 111-118
Visualization of Data-Flow Programs (Victor Kasyanov, Elena Kasyanova, Timur Zolotuhin)....Pages 119-124
Application of Transfer Learning for Fine-Grained Vessel Classification Using a Limited Dataset (Mario Milicevic, Krunoslav Zubrinic, Ines Obradovic, Tomo Sjekavica)....Pages 125-131
Genetic Algorithms and Their Ethical Degeneration in Simulated Environment (Josef Brozek, Karel Sotek)....Pages 132-137
A Short-Term Load Forecasting Scheme Based on Auto-Encoder and Random Forest (Minjae Son, Jihoon Moon, Seungwon Jung, Eenjun Hwang)....Pages 138-144
Arduino Wrapper for Game Engine-Based Simulator Output (Miroslav Benedikovic, Dan Hamernik, Josef Brozek)....Pages 145-156
On Impact of Slope on Smoke Spread in Tunnel Fire (Jan Glasa, Lukas Valasek, Peter Weisenpacher)....Pages 157-162
Relational Connections Between Preordered Sets (I. P. Cabrera, P. Cordero, E. Muñoz-Velasco, M. Ojeda-Aciego)....Pages 163-169
Extract Transfer Load Considerations of Temporal Data Sources to Data Warehouses (Thanapol Phungtua-Eng, Suphamit Chittayasothorn)....Pages 170-176
Performance Evaluation of GPU-Based PO-SWE for Analysis of Large-Sized Dual-Reflector Antennas (Saki Matsuo, Masato Gocho, Tetsuya Katase, Hiroto Ado, Atsuo Ozaki)....Pages 177-182
Supply-Demand Based Algorithm for Gasoline Blend Planning Under Time-Varying Uncertainty in Demand (Mahir Jalanko, Vladimir Mahalec)....Pages 183-189
3D Technology in the Sphere of Cultural Heritage and Serious Games (Eva Milkova, Lenka Chadimova, Martina Manenova)....Pages 190-195
A Preliminary Approach to Composition Classification of Ultra-High Energy Cosmic Rays (Alberto Guillén, Carlos Todero, José Carlos Martínez, Luis Javier Herrera)....Pages 196-202
Spam Detection in Social Media: A Bayesian Scheme Based on Social Activity Over Content (Klimis Ntalianis, Nikolaos Mastorakis)....Pages 203-209
Comparison of Different Self-developed Simulation Driver Seats with Two, Four and Six Servos (Karan Hemnani, Dan Hamernik, Josef Brozek, Karel Sotek)....Pages 210-221
Incremental Meshfree Approximation of Real Geographic Data (Zuzana Majdisova, Vaclav Skala, Michal Smolik)....Pages 222-228
Design of Real-Time Transaction Monitoring System for Blockchain Abnormality Detection (Jiwon Bang, Mi-Jung Choi)....Pages 229-234
Risk Mapping in the Selected Town (Alžběta Zábranská, Jakub Rak, Petr Svoboda)....Pages 235-244
The Basic Process of Implementing Virtual Simulators into the Private Security Industry (Petr Svoboda, Jakub Rak, Dusan Vicar, Michaela Zelena)....Pages 245-250
Security of a Selected Building Using KARS Method (Kristina Benesova, Petr Svoboda, Jakub Rak, Vaclav Losek)....Pages 251-256
Social Network Analysis of “Clexa” Community Interaction Patterns (Kristina G. Kapanova, Velislava Stoykova)....Pages 257-264
The Use of GAP Analysis Method for Implementing the GDPR in a Healthcare Facility (Michaela Zelena, Petr Svoboda, Jakub Rak, Miroslav Tomek)....Pages 265-269
Front Matter ....Pages 271-271
Spectrum Management and Internal Interference Resolving Using DSA Techniques and Coasian Bargaining; with Case Study (Fidel Krasniqi, Arianit Maraj)....Pages 273-279
The Use of a Modified Phase Manipulation Signal to Interfere GNSS Receivers (Lucjan Setlak, Rafał Kowalik, Maciej Smolak)....Pages 280-285
Energy-Oriented Analysis of HPC Cluster Queues: Emerging Metrics for Sustainable Data Center (Anastasiia Grishina, Marta Chinnici, Davide De Chiara, Eric Rondeau, Ah Lian Kor)....Pages 286-300
Flow Evaluation of the Lobe Pump Using Numerical Methods (Ion Mălăel, Florina Costea, Marian Drăghici)....Pages 301-309
Some Insights into Certain Kind of Asymptotically Stable Lagrange Solutions of 2-D Systems on the Grounds of Lie Algebra (Guido Izuta)....Pages 310-316
Artificial Bee Colony Algorithm for Parameter Identification of Fermentation Process Model (Maria Angelova, Olympia Roeva, Tania Pencheva)....Pages 317-323
Prediction of Neoadjuvant Chemotherapy Outcome in Breast Cancer Patients (José Neves, Almeida Dias, Cristiana Silva, Diana Ferreira, Luís Costa, Filipa Ferraz et al.)....Pages 324-332
An Adaptative Meshless Method Based on Prandtl’s Equation (Asmae Mnebhi-Loudyi, El Mostapha Boudi, Driss Ouazar)....Pages 333-341
A Design Optimization Study for the Multi-axle Steering System of an 8×8 ARFF Vehicle (Mehmet Murat Topaç, Merve Karaca, Batuhan Kuleli)....Pages 342-347
Design of the Civil Protection Data Model for Smart Cities (Jakub Rak, Petr Svoboda, Dusan Vicar, Jan Micka, Tomas Balint)....Pages 348-353
Back Matter ....Pages 355-356

Citation preview

Lecture Notes in Electrical Engineering 574

Klimis Ntalianis George Vachtsevanos Pierre Borne Anca Croitoru Editors

Applied Physics, System Science and Computers III Proceedings of the 3rd International Conference on Applied Physics, System Science and Computers (APSAC2018), September 26–28, 2018, Dubrovnik, Croatia

Lecture Notes in Electrical Engineering Volume 574

Series Editors Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli Federico II, Naples, Italy Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán, Mexico Bijaya Ketan Panigrahi, Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, Munich, Germany Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China Shanben Chen, Materials Science & Engineering, Shanghai Jiao Tong University, Shanghai, China Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore Rüdiger Dillmann, Humanoids and Intelligent Systems Lab, Karlsruhe Institute for Technology, Karlsruhe, Baden-Württemberg, Germany Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China Gianluigi Ferrari, Università di Parma, Parma, Italy Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid, Madrid, Spain Sandra Hirche, Department of Electrical Engineering and Information Science, Technische Universität München, München, Germany Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA, USA Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Alaa Khamis, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt Torsten Kroeger, Stanford University, Stanford, CA, USA Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA Ferran Martin, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA Sebastian Möller, Quality and Usability Lab, TU Berlin, Berlin, Germany Subhas Mukhopadhyay, School of Engineering & Advanced Technology, Massey University, Palmerston North, Manawatu-Wanganui, New Zealand Cun-Zheng Ning, Electrical Engineering, Arizona State University, Tempe, AZ, USA Toyoaki Nishida, Graduate School of Informatics, Kyoto University, Kyoto, Japan Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi “Roma Tre”, Rome, Italy Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China Gan Woon Seng, School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore, Singapore Joachim Speidel, Institute of Telecommunications, Universität Stuttgart, Stuttgart, Baden-Württemberg, Germany Germano Veiga, Campus da FEUP, INESC Porto, Porto, Portugal Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China Junjie James Zhang, Charlotte, NC, USA

The book series Lecture Notes in Electrical Engineering (LNEE) publishes the latest developments in Electrical Engineering - quickly, informally and in high quality. While original research reported in proceedings and monographs has traditionally formed the core of LNEE, we also encourage authors to submit books devoted to supporting student education and professional training in the various fields and applications areas of electrical engineering. The series cover classical and emerging topics concerning:

• • • • • • • • • • • •

Communication Engineering, Information Theory and Networks Electronics Engineering and Microelectronics Signal, Image and Speech Processing Wireless and Mobile Communication Circuits and Systems Energy Systems, Power Electronics and Electrical Machines Electro-optical Engineering Instrumentation Engineering Avionics Engineering Control Systems Internet-of-Things and Cybersecurity Biomedical Devices, MEMS and NEMS

For general information about this book series, comments or suggestions, please contact leontina. [email protected]. To submit a proposal or request further information, please contact the Publishing Editor in your country: China Jasmine Dou, Associate Editor ([email protected]) India Swati Meherishi, Executive Editor ([email protected]) Aninda Bose, Senior Editor ([email protected]) Japan Takeyuki Yonezawa, Editorial Director ([email protected]) South Korea Smith (Ahram) Chae, Editor ([email protected]) Southeast Asia Ramesh Nath Premnath, Editor ([email protected]) USA, Canada: Michael Luby, Senior Editor ([email protected]) All other Countries: Leontina Di Cecco, Senior Editor ([email protected]) Christoph Baumann, Executive Editor ([email protected]) ** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, SCOPUS, MetaPress, Web of Science and Springerlink **

More information about this series at http://www.springer.com/series/7818

Klimis Ntalianis George Vachtsevanos Pierre Borne Anca Croitoru •





Editors

Applied Physics, System Science and Computers III Proceedings of the 3rd International Conference on Applied Physics, System Science and Computers (APSAC2018), September 26–28, 2018, Dubrovnik, Croatia

123

Editors Klimis Ntalianis Department of Marketing University of Applied Sciences Athens, Greece Pierre Borne Ecole Central de Lille Lille, France

George Vachtsevanos School of Electrical and Computer Engineering The Georgia Institute of Technology Atlanta, GA, USA Anca Croitoru Faculty of Mathematics Alexandru Ioan Cuza University Iasi, Romania

ISSN 1876-1100 ISSN 1876-1119 (electronic) Lecture Notes in Electrical Engineering ISBN 978-3-030-21506-4 ISBN 978-3-030-21507-1 (eBook) https://doi.org/10.1007/978-3-030-21507-1 © Springer Nature Switzerland AG 2019 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

This volume is from proceedings of the 3rd International Conference on Applied Physics, System Science and Computers. The conference focus was in the areas of applied physics, system science and computers and was held September 26–28, 2018, in Dubrovnik, Croatia. The book is divided into three sections that focus on different topics, with the first section focusing on applied physics, the second focusing on computers and the third focusing on system science. All chapters are self-contained and peer-reviewed.

v

Contents

Applied Physics Doppler Delay in Navigation Signals Received by GNSS Receivers . . . . Lucjan Setlak, Rafał Kowalik, and Maciej Smolak

3

Experimental Setup for H2/O2 Small Thruster Evaluation . . . . . . . . . . . Jeni Alina Vilag, Valeriu Alexandru Vilag, Cleopatra Florentina Cuciumita, and Răzvan Edmond Nicoară

9

Tracking the Evolution of Functional Connectivity Patterns Between Pancreatic Beta Cells with Multilayer Network Formalism . . . Marko Gosak, Lidija Križančić Bombek, Marjan Slak Rupnik, and Andraž Stožer

16

Ground Effect Influence on Aircraft Exhaust Jet with Different Nozzle Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bogdan Gherman, Oana Dumitrescu, and Ion Malael

22

Silver Thin Film and Water Dielectric SPR Sensor Experimental and Simulation Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tanaporn Leelawattananon and Suphamit Chittayasothorn

29

On the Solution of the Fredholm Equation with the Use of Quadratic Integro-Differential Splines . . . . . . . . . . . . . . . . . . . . . . . . I. G. Burova and N. S. Domnin

35

Application of Raman Spectroscopic Measurement for Banknote Security Purposes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hana Vaskova, Pavel Tomasek, and Milan Struska

42

Design on Clothes with Security Printing, with Hidden Information, Performed by Digital Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jana Žiljak Gršić, Lidija Tepeš Golubić, Vilko Žiljak, Denis Jurečić, and Ivan Rajković

48

vii

viii

Contents

Computers An Overview of Solutions to the Issue of Exploring Emotions Using the Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jan Francisti and Zoltán Balogh

59

Knowing and/or Believing a Think: Deriving Knowledge Using RDF CFL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Martin Žáček, Alena Lukasová, and Petr Raunigr

68

Software Solution Incorporating Activation Congnitive Memory Portion in Early Stages of Alzhaimer’s Disease . . . . . . . . . . . . . . . . . . . Provaznik Josef, Kopecky Zbynek, Brozek Josef, Sotek Karel, Brozkova Monika, Karamazov Simeon, and Janeckova Hana

74

Unity3D Game Engine Applied to Chemical Safety Education . . . . . . . . Nishaben S. Dholakiya, Jan Kubík, Josef Brozek, and Karel Sotek

81

Use of Game Engines and VR in Industry and Modern Education . . . . Tim van Der Heijden, Dan Hamerník, and Josef Brozek

88

The Use of Cloud Computing in Managing Companies and Business Communication: Security Issues for Management . . . . . . Marcel Pikhart Social Network Sites and Older Generation . . . . . . . . . . . . . . . . . . . . . . Blanka Klimova

94 99

Development of a Repository of Virtual 3D Conversational Gestures and Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 Izidor Mlakar, Zdravko Kačič, Matej Borko, Aleksandra Zögling, and Matej Rojc Permutation Codes, Hamming Graphs and Turán Graphs . . . . . . . . . . 111 János Barta and Roberto Montemanni Visualization of Data-Flow Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 Victor Kasyanov, Elena Kasyanova, and Timur Zolotuhin Application of Transfer Learning for Fine-Grained Vessel Classification Using a Limited Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . 125 Mario Milicevic, Krunoslav Zubrinic, Ines Obradovic, and Tomo Sjekavica Genetic Algorithms and Their Ethical Degeneration in Simulated Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 Josef Brozek and Karel Sotek

Contents

ix

A Short-Term Load Forecasting Scheme Based on Auto-Encoder and Random Forest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 Minjae Son, Jihoon Moon, Seungwon Jung, and Eenjun Hwang Arduino Wrapper for Game Engine-Based Simulator Output . . . . . . . . 145 Miroslav Benedikovic, Dan Hamernik, and Josef Brozek On Impact of Slope on Smoke Spread in Tunnel Fire . . . . . . . . . . . . . . 157 Jan Glasa, Lukas Valasek, and Peter Weisenpacher Relational Connections Between Preordered Sets . . . . . . . . . . . . . . . . . . 163 I. P. Cabrera, P. Cordero, E. Muñoz-Velasco, and M. Ojeda-Aciego Extract Transfer Load Considerations of Temporal Data Sources to Data Warehouses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 Thanapol Phungtua-Eng and Suphamit Chittayasothorn Performance Evaluation of GPU-Based PO-SWE for Analysis of Large-Sized Dual-Reflector Antennas . . . . . . . . . . . . . . . . . . . . . . . . . 177 Saki Matsuo, Masato Gocho, Tetsuya Katase, Hiroto Ado, and Atsuo Ozaki Supply-Demand Based Algorithm for Gasoline Blend Planning Under Time-Varying Uncertainty in Demand . . . . . . . . . . . . . . . . . . . . 183 Mahir Jalanko and Vladimir Mahalec 3D Technology in the Sphere of Cultural Heritage and Serious Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190 Eva Milkova, Lenka Chadimova, and Martina Manenova A Preliminary Approach to Composition Classification of Ultra-High Energy Cosmic Rays . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196 Alberto Guillén, Carlos Todero, José Carlos Martínez, and Luis Javier Herrera Spam Detection in Social Media: A Bayesian Scheme Based on Social Activity Over Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 Klimis Ntalianis and Nikolaos Mastorakis Comparison of Different Self-developed Simulation Driver Seats with Two, Four and Six Servos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210 Karan Hemnani, Dan Hamernik, Josef Brozek, and Karel Sotek Incremental Meshfree Approximation of Real Geographic Data . . . . . . 222 Zuzana Majdisova, Vaclav Skala, and Michal Smolik Design of Real-Time Transaction Monitoring System for Blockchain Abnormality Detection . . . . . . . . . . . . . . . . . . . . . . . . . . 229 Jiwon Bang and Mi-Jung Choi

x

Contents

Risk Mapping in the Selected Town . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235 Alžběta Zábranská, Jakub Rak, and Petr Svoboda The Basic Process of Implementing Virtual Simulators into the Private Security Industry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245 Petr Svoboda, Jakub Rak, Dusan Vicar, and Michaela Zelena Security of a Selected Building Using KARS Method . . . . . . . . . . . . . . . 251 Kristina Benesova, Petr Svoboda, Jakub Rak, and Vaclav Losek Social Network Analysis of “Clexa” Community Interaction Patterns . . . 257 Kristina G. Kapanova and Velislava Stoykova The Use of GAP Analysis Method for Implementing the GDPR in a Healthcare Facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265 Michaela Zelena, Petr Svoboda, Jakub Rak, and Miroslav Tomek System Science Spectrum Management and Internal Interference Resolving Using DSA Techniques and Coasian Bargaining; with Case Study . . . . 273 Fidel Krasniqi and Arianit Maraj The Use of a Modified Phase Manipulation Signal to Interfere GNSS Receivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280 Lucjan Setlak, Rafał Kowalik, and Maciej Smolak Energy-Oriented Analysis of HPC Cluster Queues: Emerging Metrics for Sustainable Data Center . . . . . . . . . . . . . . . . . . . 286 Anastasiia Grishina, Marta Chinnici, Davide De Chiara, Eric Rondeau, and Ah Lian Kor Flow Evaluation of the Lobe Pump Using Numerical Methods . . . . . . . 301 Ion Mălăel, Florina Costea, and Marian Drăghici Some Insights into Certain Kind of Asymptotically Stable Lagrange Solutions of 2-D Systems on the Grounds of Lie Algebra . . . . 310 Guido Izuta Artificial Bee Colony Algorithm for Parameter Identification of Fermentation Process Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317 Maria Angelova, Olympia Roeva, and Tania Pencheva Prediction of Neoadjuvant Chemotherapy Outcome in Breast Cancer Patients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324 José Neves, Almeida Dias, Cristiana Silva, Diana Ferreira, Luís Costa, Filipa Ferraz, Victor Alves, João Neves, Jorge Ribeiro, and Henrique Vicente

Contents

xi

An Adaptative Meshless Method Based on Prandtl’s Equation . . . . . . . 333 Asmae Mnebhi-Loudyi, El Mostapha Boudi, and Driss Ouazar A Design Optimization Study for the Multi-axle Steering System of an 88 ARFF Vehicle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342 Mehmet Murat Topaç, Merve Karaca, and Batuhan Kuleli Design of the Civil Protection Data Model for Smart Cities . . . . . . . . . . 348 Jakub Rak, Petr Svoboda, Dusan Vicar, Jan Micka, and Tomas Balint Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355

Applied Physics

Doppler Delay in Navigation Signals Received by GNSS Receivers Lucjan Setlak, Rafał Kowalik(&), and Maciej Smolak Aviation Division, Department of Avionics and Control Systems, Polish Air Force Academy, ul. Dywizjonu 303 nr 35, 08-521 Deblin, Poland {r.kowalik,l.setlak,m.smolak}@wsosp.pl

Abstract. The article presents the results of real simulation studies, assessing the impact of Doppler shift on the accuracy of the designated positions for the GNSS receiver. The tests were performed for the receiver in the SDR programmable radio technology. In the first stage, a mathematical model was derived that reflects the impact of the Doppler shift on SIS signals. The process of determining the location of the tested object equipped with a GNSS radio receiver was carried out on the basis of pseudorange. In the final part of the article the results from the research and the practical conclusions resulting from them were quoted. Keywords: GPS

 GNSS  Signal in space  Doppler shift

1 Introduction Modern receivers of GNSS satellite navigation systems (Global Navigation Satellite System) in addition to determining the location of the object (x, y, z) and time also allow determining the speed and direction of the user’s movement, i.e. road course. In the theoretical aspect, it is possible to determine these values on the basis of two consecutive position values obtained during the processing of the pseudorange PR (Pseudo Range), however, much more accurate and faster responsive to changes in user movement results can be obtained by using the Doppler effect. In addition, due to the rapid movement of GNSS satellites, even a stationary receiver receives signals with a frequency noticeably different from the nominal frequency of signals transmitted by the GNSS satellites. The following figure (Fig. 1) presents the essence of the change in the frequency of the received signal along with the change of the position of the GNSS satellite [1, 2]. The Doppler frequency shift caused by satellite movement depends on the mutual position of the satellite relative to the receiver and may vary within a range of ±5 kHz. This is due to the fact that both the GPS receiver and satellites of the GPS system are in constant motion.

© Springer Nature Switzerland AG 2019 K. Ntalianis et al. (Eds.): APSAC 2018, LNEE 574, pp. 3–8, 2019. https://doi.org/10.1007/978-3-030-21507-1_1

4

L. Setlak et al.

Fig. 1. The influence of the satellite position on the Doppler frequency shift of the signal received by the immobile GNSS receiver

2 The Influence of Doppler Frequencies on the Signal Processing in the GNSS Receiver The basic component of the RF signal is the Doppler signal. Therefore, in order to be able to determine the effect of the Doppler effect in the baseband of the navigation receiver on the navigation data included in the beacon, the signal in the base band of the GNSS receiver should be determined first. Taking into account the previous derivations (point 2.2), the signal in the base band rB(t) was defined by the dependence [3]: rB ðtÞ ¼

pffiffiffiffiffiffiffi     2Pr Di t  sp d t  sp

ð1Þ

The expression (1) shows that the signal delay is a function of time t. Therefore, the influence of the Doppler effect is only negligible on the navigation data contained in the target SIS signal only its negative effect is visible in the carrier signal carrying the navigation data. Therefore, further considerations will concern the assessment of the impact of the Doppler effect on the carrier wave. The main reason why the occurrence of the Doppler effect can be omitted in the basic band is the fact that SIS signals processed at this stage are characterized by finite energy and can be in time and within a specified TF range represented as a Fourier series: rB ðt Þ ¼

þ1 X i¼1

li e

j2pTi t F

  TF TF ; t  ; 2 2

ð2Þ

In addition, if the Fourier series TF repeating period is long enough and the signal received from the satellite after being processed is in the baseband of the GNSS receiver and NF = TF, the expression (2) will take the form:

Doppler Delay in Navigation Signals Received by GNSS Receivers NF X

rB ðt Þ ¼

i

li ej2pTF t

5

ð3Þ

i¼NF

where: NF - is an integer of the Fourier series. The above dependence is true for every t. Assuming that the frequency of the SIS signal is equal to fi ¼ TiF ¼ kci and equal to the basic frequency of the Fourier series, which is also influenced by the Doppler effect. Hence you can present them as [4]: fd;i ðtÞ ¼ 

1 d fi DðtÞ ¼ fd ðtÞ kRF dt fRF

ð4Þ

From the considerations carried out so far, the quantity ki is much smaller than the wavelength of the signal received by the GNSS receiver kRF, thus the impact of the Doppler effect on each single carrier wave of the SIS signal will be negligible. It can also be noticed that the same condition applies to the signal received by the satellite navigation receiver rRF. For these reasons, it can be assumed that instead of carrying out a mathematical operation to obtain a Fourier series, the signal rRF can be represented as a Fourier transform of the signal rRF. However, the method based on the Fourier series is more advantageous because it is more intuitive. In turn, with the expression (5), you can express the signal in the base band of the GNSS receiver taking into account the Doppler effect [5] rB;d ðtÞ ¼

NF X

 li e



j2p fi þ fd ðtÞf

fi RF

t

¼

i¼NF



NF X

li e

j2pTi

F



1 þ fd ðtÞf 1

RF

t

ð5Þ

i¼NF

At time intervals, where fd(t) is a fixed expression, the above expression can be written as [6]: rB;d ðtÞ ¼

NF X

 li e

j2pTi

F



1 þ fd ðtÞf 1

RF

i¼NF

t

¼

NF X

li e

j2pTi0 t F

¼

ð6Þ

i¼NF

where: TF0 ¼

TF 1 þ ffRFd

ð7Þ

From the expression (7) it follows that the signal in the base band rB,d(t) can be seen as a new sequence of chips with a structure similar to or the same as the signal rB,B(t). The Doppler shift will increase or compensate in the period of the carrier wave signal rB,B(t). Therefore, it can be assumed that the occurrence of the Doppler phenomenon in navigational messages can be seen as “stretching or focusing” of chips in a PRN sequence used to transfer “manipulated” navigational data to it. Taking into account the

6

L. Setlak et al.

Doppler effect in the basic band and its effect on the navigational message, the rate of chip transmission in the PRN sequence can be modified. And so [6]: TC0 ¼

1 Tc

1 Tc ¼ þ fd;Tc 1 þ ffd RF

ð8Þ

Assuming that the duration of a single Tc chip is the same as TF.

3 Simulation Experiment and Results The presented mathematical models defining the impact of the Doppler effect on the quality of the GNSS signal will allow to conduct a simulation experiment and, at the same time, estimate whether they are correct. Simulations were carried out in real time, and the tested receiver was mounted inside a moving object. As mentioned earlier, the analysis involved the use of independently designed GNSS receivers based on the RTL-SDR structure (Radio Television Luxemburg - Software Defined Radio), which is a software-defined radio (Fig. 2), in which the user has no influence on how the satellites are selected to determine the position. The main criterion for the selection of the receiver was the number of receiving tracks, i.e. the maximum possible number of monitored satellites [7, 8].

Fig. 2. Construction of the RTL-SDR receiver

The receiver used in the experiment is equipped with a radio communication system in which the operation of basic electronic components (such as mixers, filters, modulators and demodulators, detectors) is carried out using a computer program [9, 10]. 3.1

Presentation of Results

The conducted research and detailed analysis of the mathematical model defining the vehicle speed measurement by the GNSS receiver showed that the model is accurate and correct. Data obtained from simulation in both cases are very similar to each other. The speed values given by the speed sensors mounted in the vehicle read from the clocks and those calculated by the simulation program are identical (Figs. 3 and 4). However, they require a precise determination of the methodology for measuring the times of elementary operations.

Doppler Delay in Navigation Signals Received by GNSS Receivers

7

Fig. 3. Measurement of pseudorange and speed of a vehicle, equipped with a GNSS receiver made for the first mathematical model

Fig. 4. Measurement of pseudorange and speed of a vehicle, equipped with a GNSS receiver made for the second mathematical model

In the research it was assumed that the position of the vehicle is unknown, which results in a slight deterioration of the results, both in the calculated values of vehicle speed and pseudorange. In addition, the occurrence of this type of situation may be related to the smaller number of satellites seen at the time of the simulation experiment. A necessary condition required for obtaining reliable results is that the satellite navigation receiver should be in the visibility of at least seven or more satellites. Although this condition was met with a large excess, the obtained measurements deviate from the actual speed of the vehicle in the range of 2–8 km/h. It results from the fact that the constellation of the Galileo system was mapped from the simulation program installed on the portable computer.

8

L. Setlak et al.

4 Conclusions The obtained results of the simulation experiment carried out confirm that the number of satellites seen by the receiver during the determination of the position of the object has a significant impact on the increase of its accuracy. The situation looks the same when determining the speed of the vehicle equipped with the GNSS receiver. As can be seen from the analysis of Figs. 3 and 4, with only a few satellites considered in the calculations, there are very large fluctuations in vehicle speed values, as well as pseudoranges calculated by the said GNSS receiver and specialized software implemented in the portable computer. It should be noted, however, that the received speed values obtained from the GNSS receiver differ only by 2–3 km from the specific values. Such differences from the point of view of road traffic needs are not significant. In summary, the results are promising and can be the basis for the development of software applications that can be used successfully in road transport. They can be used to improve road safety as well as to improve the operations coordinated by the urban traffic control centers.

References 1. Kisilowski, J., Zalewski, J.: Evaluation of possibilities of a motor vehicle technical condition assessment after an accident repair in the aspect of road traffic safety. In: TST 2013: Activities of Transport Telematics, pp. 441–449. Springer (2013) 2. Kisilowski, J., Zalewski, J.: Selected examples of referring the examined stochastic technical stability to the ISO standards. J. Theor. Appl. Mech. 56(1), 313–321 (Warsaw 2018). https:// doi.org/10.15632/jtam-pl.56.1.313 3. Holmes, J.K.: Spread Spectrum Systems for GNSS and Wireless Communications. Artech House, London (2007) 4. Parkinson, B.W., Spilker, J.J. (eds.): Global Positioning System: Theory and Applications, vol. I. American Institute of Aeronautics and Astronautics, 4 (1996) 5. Chan, Y.T., Towers, J.J.: Passive localization from Doppler-shilled frequency measurements. IEEE Trans. Signal Process. 40(10), 2594–2598 (1992) 6. Zhang, J., Zhang, K., Grenfell, R., Dfakin, R.: Short note: on the relativistic doppler effect for precise velocity determination using GPS. J. Geodesy 80, 104–110 (2006) 7. Kaplan, E.D., Leva, J.L.: Understanding GPS: Principles and Applications, 2nd edn. Artech House, E. Norwood, MA (2005) 8. Setlak, L., Kowalik, R.: Mathematical modeling and simulation of selected multi-pulse rectifiers, used in “conventional” airplanes and aircrafts consistent with the trend of “MEA/AEA”. In: Applied Physics, System Science and Computers II, pp. 244–250. Lecture Notes in Electrical Engineering. Springer (2018) 9. Setlak, L., Kowalik, R.: The study of the autonomous power supply system of the more/all electric aircraft in AC/DC and DC/DC processing. In: IEEE Xplore, 2017 European Conference on Electrical Engineering and Computer Science (EECS) 10. Setlak, L., Kowalik, R.: Evaluation of the VSC-HVDC system performance in accordance with the more electric aircraft concept. In: IEEE Xplore, 19th International Scientific Conference on Electric Power Engineering, EPE 2018 – Proceedings

Experimental Setup for H2/O2 Small Thruster Evaluation Jeni Alina Vilag(&), Valeriu Alexandru Vilag, Cleopatra Florentina Cuciumita, and Răzvan Edmond Nicoară Romanian Research and Development Institute for Gas Turbines COMOTI, 220D Iuliu Maniu Avenue, 061126 Bucharest, Romania {jeni.vilag,valeriu.vilag,cleopatra.cuciumita, razvan.nicoara}@comoti.ro

Abstract. In the context of developing a propulsion system for small-scale space platforms, particularly CubeSats, currently treated as secondary payload, several challenges are raised by designing an experimental installation dedicated to testing a H2/O2 thruster. Starting from the research projects imposing the operational requirements for the propulsion system and the ofunctional requirements for the testing system, the paper presents the methods for sizing and calibrating the main pieces of equipment used for accurate and stable control of mass flow and mixture ratio, as well as a vacuum chamber computation predicting the evolution of parameters during tests. The conclusions and future work section is related to necessary adjustments for planning of an extensive test campaign able to characterize the performance and behaviour of the studied thruster. Keywords: CubeSat

 Experimental  Space propulsion  Thruster  Vacuum

1 Introduction Small-scale space platforms, particularly the CubeSats, are advantageous from the point of view of various types of missions, such as: demonstration of technology: possibility of testing new instruments and materials in spatial missions, without substantial loss; scientific: space measurements, magnetic field information gathering, earthquake detection improvement; educational projects: opportunity for students to develop a space mission; commercial missions: telecommunication provision, capture of Earth observation images. The platforms of this type are usually launched, as secondary payload, in LEO (Low Earth Orbit) and therefore have a limited lifetime. The lower the orbit, the higher the gravitational attraction and the satellite has to rotate faster around the Earth for counterbalancing it. At 160 km, a speed of around 21,160 km/h is required, meaning that the satellite will surround the Earth in about 90 min [1]. As the altitude increases and the damage caused by the forward resistance decreases, the duration of the missions can be extended. At 300 km altitude, the propulsion system can extend the life of a CubeSat 3U with at least 10 months and at 400 km with at least 4.5 years. This system can provide a CubeSat forward drag © Springer Nature Switzerland AG 2019 K. Ntalianis et al. (Eds.): APSAC 2018, LNEE 574, pp. 9–15, 2019. https://doi.org/10.1007/978-3-030-21507-1_2

10

J. A. Vilag et al.

compensation in LEO, allowing the position to hold for long periods compared to the current resource for a few days or weeks [2]. Currently, the small size satellites do not have any propulsion capabilities and they are more or less uncontrolled, the attitude control being, in some cases, done by changing the centre of gravity and orbital manoeuvres being excluded. A propulsion system for CubeSats is an interesting subject since the start of their manufacturing. The research activity covered by the paper is related to the development of a clean propulsion system based on water electrolysis technology, able to extend the life of the mission and maintain altitude for a longer period. This technology, at reduced scale, has been tried in the USA, first by TUI’s development, with the full support of NASA, of the HYDROS propulsion system, in two standard configurations: a 2U HYDROS-C module intended for CubeSats and NanoSats, and a HYDROS-M module intended for 50–180 kg microsatellites [3], second by Cornell University’s CubeSat propulsion project focused on a waterelectrolysis propulsion system [4] (Fig. 1).

Fig. 1. Prototype electrolysis propulsion systems developed by Tethers - HYDROS-C technology [3] (left) and Cornell University [4] (right)

2 Experimental Setup Sizing The paper focuses on sizing and calibrating the main pieces of equipment included in the experimental installation necessary for testing a H2/O2 small-scale thruster. The parametric tests aim to verify the impact of design parameters upon the performance and quality of the product, and they are based on input data sets such as: inlet pressure, mass flow rate, mixture ratio, valve opening time, pulse trail length. The measurement of the parameters must be continuous, at frequencies previously set for each parameter, in order to capture the phenomenon, therefore the equipment for monitoring, control and data acquisition must be customized, controlled and calibrated in order to ensure good operability and precision.

Experimental Setup for H2/O2 Small Thruster Evaluation

11

Assuming the two working gases to be combusted for producing thrust, H2 and O2, are obtained using a PEM (proton-exchange membrane) water electrolysis system, the experimental installation uses two main fluid supplying lines, each with installed gas tank and pressure reducer, flowmeter, safety devices (one-way valve, filter, flame arrester), section control element and command element. The thruster experimental assembly is placed inside a vacuum chamber. Once the fluids reach an imposed pressure, the control valves open, supplying a mix of hydrogen and oxygen ready to ignite [5]. A small spark/glow plug causes the gas mixture to combust. The gas then expands through a convergent-divergent nozzle, producing the necessary thrust (Fig. 2).

Fig. 2. Thruster experimental assembly inside the vacuum chamber

2.1

Mass Flow Calculation

The design point of the H2/O2 thruster imposes the performance in terms of thrust level and specific impulse. The imposed thrust level, of 1 N, is a compromise between the state-of-the-art, the miniaturization capabilities and the specific impulse. The specific impulse, defined for a steady-state operation, is imposed within the 350  390 s range, considered achievable for a mixture ratio close to stoichiometric, which, in theory, provides optimum combustion parameters. The proposed mixture ratio initially required is (0.8  1.2) from stoichiometric, with the nominal mixture ratio to be defined based on experimental observations related to thermal/heat transfer/cooling issues, in order to avoid hardware degradation [5]. Mmix ¼ F=ðIsp  gÞ;

ð1Þ

where g = 9.8 m/s2. The mass flow rate of the mixture is, therefore, in the range of 0.23  0.3 g/s. Considering the range of the mass flow rate for each gas in the combustion reaction, as well as additional operational requirements, the flowmeters customized for the application cover the 0.0008  0.04 g/s range for H2 and, respectively, the 0.006 

12

J. A. Vilag et al.

0.3 g/s range for O2, with a 0.5% precision and allowing a 22 bar internal pressure drop. 2.2

Injection Section Control

The mass flow rate of the two gases, in the ranges mentioned above, are controlled with the help of needle valves, one installed on each injection line. They are designed to provide accurate and stable control of flow rate. The diagram in Fig. 3 illustrates the fineness of the valves control at the nominal pressure of 15 bara and three constant O2 sections, for the variation of the H2 section. The diagram is limited to the range of mixture ratios close to stoichiometric.

Fig. 3. H2 mass flow rate variation with the modification of the injection section

2.3

Vacuum Chamber Computation

The vacuum chamber has a fixed volume, known in value. Before the experiment start, it is presumed that a vacuum is created within the available volume. Initially, the vacuum chamber has a small quantity of air inside, at an absolute pressure below 300 Pa, as thruster operational requirement, and ambient temperature considered 300 K. To compute the pressure rise in the vacuum chamber, due to the injection of H2 and O2 and the consequent combustion, we assume that we directly inject water, the computation being conducted on the stoichiometric mixture values. The injected water mass flow rate corresponds to the sum of H2 and O2 mass flow rates and at the temperature computed with NASA CEA program [6] for their combustion. To compute the vacuum chamber pressure variation in time, a small time step is considered, “dt”, the mixture properties being computed at a given “i” moment. First, we compute the air mass initially existing in the volume of the vacuum chamber, which corresponds to step 0, then, the accumulated mass, after step i, for which an i∙dt time corresponds, since the stating of the experiment:

Experimental Setup for H2/O2 Small Thruster Evaluation

13

m0 ¼ ðpair  VÞ=ðRair  Tair Þ:

ð2Þ

mi ¼ mi1 þ Mw  dt ¼ m0 þ Mw  i  dt:

ð3Þ

Taking into account that the specific gas constant, the specific heat capacity at constant pressure and the specific enthalpy are intensive properties, they will be computed as: cpi ¼ ðmi1  cpi1 þ Mw  dt  cpw Þ=mi ;

ð4Þ

Ri ¼ ðmi1  cpi1 þ Mw  dt  Rw Þ=mi ;

ð5Þ

hi ¼ ðmi1  cpi1 þ Mw  dt  hw Þ=mi

ð6Þ

¼ ðmi1  cpi1Ti1 þ Mw  dt  cpw  Tw Þ=mi ;

where cp0, R0 and T0 are the properties corresponding to the initial air in the vacuum chamber. The mixture temperature, density and pressure at step “i” can be computed: Ti ¼ hi =cpi ;

ð7Þ

qi ¼ mi =V;

ð8Þ

pi ¼ mi  Ri  Ti =V:

ð9Þ

Applying these formulas to the input data given in Table 1, and considering a time step, dt, of 0.1 s, the vacuum chamber pressure variation in time can be visualized in Fig. 4.

Table 1. Input data Parameter Vacuum chamber volume Vacuum chamber initial pressure Vacuum chamber initial temperature Air specific constant Air specific heat capacity at constant pressure Water mass flow rate Water injection temperature Water specific constant Water specific heat capacity at constant pressure

Symbol V pair Tair Rair cpair Mw Tw Rw cpw

Units m3 Pa K J/kg/K J/kg/K kg/s K J/kg/K J/kg/K

Value 0.0547 300 300 287 1080 0.00028 3446 461.5 13383.8

14

J. A. Vilag et al.

Fig. 4. Pressure variation during thruster tests

3 Conclusions and Future Work The testing of a small-scale H2/O2 thruster, in vacuum conditions, imposes several categories of requirements, operational and experimental, in order to offer the input data for a testing matrix. The paper presents the methods of determining a series of initial conditions by calculating the ranges of fluids’ mass flow rates, calibrating the injection sections and observing the increase in pressure in the vacuum chamber. The vacuum computation is of particular importance for the experimental program due to the fact that the repeatability of the combustion process and, therefore, the capacity of the hardware to create thrust, must be demonstrated in longer sequences of firings, conducted in relevant environment conditions. Based on the computed results, the necessity of supplementing the available vacuum volume becomes a necessary next step in sizing the experimental installation. An experimental campaign able to characterize the performance and behaviour of the studied thruster implies future work including the optimization of testing time, as a compromise between the initial pressure and the time for obtaining it, as well as the optimization of processes for ensuring stable combustion and nozzle flow. Acknowledgments. The research activities have been developed in the framework of the Romanian Space Agency financed project ELySSA: Development of water electrolysis systems with application for small-scale satellites, contract no. 175/2017.

References 1. Riebeek, H.: NASA Earth Observatory (2009). https://earthobservatory.nasa.gov/ 2. Zeledon, R.A.: Electrolysis propulsion for small-scale spacecraft. Ph. D. thesis, Cornell University, Ithaca, NY (2015) 3. Tethers Unlimited, Inc.: http://www.tethers.com/HYDROS.html 4. Zeledon, R.A., Peck, M.A.: Performance testing of a cubesat-scale electrolysis propulsion system. In: AIAA Guidance, Navigation, and Control Conference. Minneapolis (2012)

Experimental Setup for H2/O2 Small Thruster Evaluation

15

5. Popescu, J.A., Vilag, V.A., Porumbel, I., Cuciumita, C.F., Macrişoiu, N.: Experimental approach regarding the ignition of H2/O2 mixtures in vacuum environment. Transp. Res. Proc. 29, 330–338 (2018) 6. Gordon, S., McBride, B.J.: Computer Program for Calculation of Complex Chemical Equilibrium Compositions and Applications. I Analysis, NASA RP 1311 (1994)

Tracking the Evolution of Functional Connectivity Patterns Between Pancreatic Beta Cells with Multilayer Network Formalism Marko Gosak1,2, Lidija Križančić Bombek1, Marjan Slak Rupnik3(&), and Andraž Stožer1(&) 1

Institute of Physiology, Faculty of Medicine, University of Maribor, Taborska ulica 8, 2000 Maribor, Slovenia {marko.gosak,andraz.stozer}@um.si, [email protected] 2 Department of Physics, Faculty of Natural Sciences and Mathematics, University of Maribor, Koroška Cesta 160, 2000 Maribor, Slovenia 3 Center for Physiology and Pharmacology, Medical University of Vienna, 1090 Vienna, Austria [email protected]

Abstract. Network science has provided new promising tools for studying the structure and function of various complex systems. In the present contribution we demonstrate how network concepts can be used to describe the collective activity of pancreatic b cell populations in islets of Langerhans. In this microorgan, electrically coupled b cells produce and secrete insulin that plays a pivotal role in normal and pathological whole-body nutrient homeostasis. We construct functional networks from correlations between calcium dynamics of individual cells, which is recorded by means of confocal laser-scanning calcium imaging. The extracted connectivity patterns share many similarities with other real-life networks, such as small-worldness, heterogeneity, and modularity. Moreover, by applying the multilayer network formalism, we give particular emphasis to the dynamical evolution of the b cell network after stimulation. By this means, an even deeper insight into the intercellular communication mechanisms in islets is attained. Keywords: Complex networks  Functional connectivity Calcium imaging  Multilayer networks

 b cell 

1 Introduction Network science is an emerging interdisciplinary field that combines theories from statistical physics, applied mathematics, computational techniques, and visualization approaches from computer science. Its applications cover a broad range of disciplines, spanning from communications, power systems, and transportation engineering to social sciences and biomedicine [1, 2]. One of the main advantages of network theory is its ability to extract information from the increasing amount of digital data obtained by high-throughput experimental techniques. This is all the more pertinent, given that the network structures are too complex to be analyzed in a non-systematic fashion [2]. © Springer Nature Switzerland AG 2019 K. Ntalianis et al. (Eds.): APSAC 2018, LNEE 574, pp. 16–21, 2019. https://doi.org/10.1007/978-3-030-21507-1_3

Tracking the Evolution of Functional Connectivity Patterns

17

Moreover, many real-life systems are governed by multiple types of interactions and/or interact with other networks, and can evolve in time. To assess and analyze such multidimensional complex systems, the multilayer network formalism has recently been proposed as a general framework and is acquiring more and more prominence as a new research direction [3]. In the last two decades, network science has become particularly fundamental to biological systems research at multiple levels of organization. Examples include protein interaction and genetic regulatory networks, connectivity among brain areas, food-webs, spreading of diseases, etc. [4]. Governed by advances in high spatio-temporal resolution imaging techniques, we and others succeeded in applying the network-based techniques to describe the intercellular connectivity patterns in different multicellular systems [4–9]. By this means, individual cells represent nodes and the intercellular interaction patterns are the edges of a network. The motivation for in-depth investigations of the collective activity of cell populations encompasses at least in part the recent discoveries about the involvement of cell-to-cell signaling pathways in the pathogenesis of several diseases [7, 10, 11]. The majority of our previous research was devoted to b cell networks in pancreatic islets. In these microorgans, around 103 b cells are interconnected to ensure a coordinated activity and a proper secretion of insulin at elevated glucose levels [11]. Especially electrical coupling through gap junctions ensures mediated oscillatory changes in cytoplasmic Ca2+ concentration that spread across the islets. This in turn orchestrates the release of insulin into circulation [11]. Noteworthy, disruptions of the intercellular pathways have been shown to induce desynchronization in b cell activity and an impairment of normal oscillatory patterns of insulin secretion [12], a defining characteristic of diabetes [11]. Previous insights into the functional mechanisms of pancreatic islets have pointed out that b cell networks are not homogeneous lattice-like structures. Rather they form heterogeneous, efficient, and clustered architectures [4, 6, 8, 9], which exhibit beneficial small-world topological features, especially when exposed to higher levels of glucose [6]. Moreover, with increasing stimulatory conditions the network structure becomes denser and more integrated [4, 8]. However, how the b cell network evolves under given stimulatory conditions, which is from the physiological point of view a very important question, remains unexplored. We address this issue in the present contribution. First, we represent the methodology to construct functional networks from measured cellular signals. Then, we focus on the temporal evolution of these networks by incorporating the multilayer network formalism.

2 Extracting and Analyzing b Cell Networks We employed functional multicellular calcium imaging of fluorescently labelled acute mouse pancreas tissue slices to record Ca2+ signals, as described previously [6]. This technique enables simultaneous assessment of Ca2+ dynamics in a large number of cells, with a high spatiotemporal resolution and over prolonged periods of time. In Fig. 1A a confocal image of a pancreatic islet in a tissue slice is shown and Fig. 1B features typical Ca2+ traces recorded from three selected b cells.

18

M. Gosak et al.

Fig. 1. (A) Image of an islet of Langerhans showing the average relative intensity of the fluorescence signal. Colored circles denote three selected cells. (B) Ca2+ traces recorded from the selected cells. The grey band indicates the switch from substimulatory (6 mM) to stimulatory (9 mM) levels of glucose concentration.

All cells in a given slice were selected manually, based on cell morphology, and exported as time series for off-line analysis. All recorded time series of Ca2+ dynamics c(t) were digitally band-pass filtered, using cut off frequencies of 0.03 Hz and 3 Hz, in order to denoise the signals and to remove the baseline trends. To further reduce the noise, we applied an adjacent averaging procedure. An example of processing a given time series is shown in Fig. 2A. For evaluating the synchronized activity of b cells we calculated the pair-wise correlation between all Ca2+ traces and obtained a correlation matrix (Fig. 2C). The correlation matrix represents the basis for the functional connectivity network [6]. In particular, two cells were considered to be connected if their correlation Rij exceeded a predetermined threshold value Rth. As a result, we obtained a binary connectivity matrix d in which dij= 1 or dij= 0 (Fig. 2D), that determines the intercellular connectivity patterns (Fig. 2E). For the connectivity threshold, we chose the value Rth = 0.8, which yielded a network with an average degree of kavg = 6.8. Finally, once the functional network was constructed, we quantified it with network metrics (Fig. 2F), as described in more detail below. The network’s traffic capacity and functional integration of individual nodes is commonly described by global efficiency, Eglob, which is inversely related to the average shortest path length [13]. In our network the efficiency was 0.25, similar as in previous reports [6]. To characterize the functional segregation of the network, the clustering coefficient is calculated. Functional segregation occurs within highly interconnected groups of nodes. A common way to find such groups is to compute the local clustering coefficient Ci of individual nodes, as proposed by Watts and Strogatz [14]. The average clustering coefficient Cavg is then computed as the mean value of all Ci. For the network presented in Fig. 2 we found that the clustering was rather high (Cavg = 0.48), as reported previously [6]. To describe the level of segregation in a more advanced manner, the community structure of the network can be calculated. The modularity Q measures the strength of division in subgroups (communities) [13]. In our


β cell network, we identified intermediate levels of modularity (Q = 0.46), as is expected for rather high stimulatory conditions (i.e., 9 mM glucose) [8]. Finally, we explored the small-world character of the network. For this purpose, Cavg and Eglob were compared with the same metrics estimated in a random graph (Crand and Erand) configured with the same number of nodes and mean degree as the network of interest. To describe the small-worldness with a single parameter, we calculated the ratio SW = (Cavg/Crand)/(Erand/Eglob). If Erand/Eglob ≈ 1 and Cavg/Crand > 1, and consequently SW > 1, a network exhibits a large extent of small-worldness [14]. In our case, the analogous random network yielded Crand = 0.08 and Erand = 0.43, which led to a small-world ratio of SW = 3.42. This once more confirmed the small-world character of the β cell intercellular connectivity pattern [6].
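The network-construction and metric pipeline described above can be condensed into a few lines of code. The sketch below is illustrative only: the array traces, the placeholder data, and the use of numpy/networkx are our assumptions, not the authors' implementation.

```python
# Minimal sketch of the described pipeline, assuming `traces` is a
# (cells x samples) array of filtered Ca2+ signals; numpy/networkx are
# illustrative choices, not the authors' actual tooling.
import numpy as np
import networkx as nx

def functional_network(traces, r_th=0.8):
    """Binary functional network from thresholded pairwise correlations."""
    R = np.corrcoef(traces)              # correlation matrix R_ij
    np.fill_diagonal(R, 0.0)             # ignore self-correlations
    return nx.from_numpy_array((R > r_th).astype(int))

def small_world_ratio(G, seed=0):
    """SW = (C_avg/C_rand)/(E_rand/E_glob) vs. a size-matched random graph."""
    G_rand = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=seed)
    c_ratio = nx.average_clustering(G) / nx.average_clustering(G_rand)
    e_ratio = nx.global_efficiency(G_rand) / nx.global_efficiency(G)
    return c_ratio / e_ratio

# Placeholder data: a shared slow signal plus cell-specific noise levels.
rng = np.random.default_rng(1)
traces = rng.normal(size=3000) + rng.uniform(0.2, 0.8, (82, 1)) * rng.normal(size=(82, 3000))
G = functional_network(traces)
print(nx.global_efficiency(G), nx.average_clustering(G), small_world_ratio(G))
```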

Fig. 2. Methodology used to extract and analyze functional networks from measured Ca2+ signals. (A) Processing of recorded Ca2+ traces (grey line) with an FFT band-pass filter and an adjacent averaging procedure (black line). (B) Processed signals of all β cells were compared pairwise to calculate the correlation matrix with color-coded degree of correlation (C), and the connectivity matrix after thresholding with Rth = 0.8 (D). Panel (E) shows the resulting functional network with nodes corresponding to physical positions of individual cells in the tissue. (F) Summarized network measures of the corresponding β cell network with N = 82 nodes.

3 Dynamic Evolution of the Functional β Cell Networks To track the temporal evolution of the intercellular connectivity patterns between β cells, we made use of sliding-window correlation analysis. In particular, we calculated the correlation between all cell pairs in a window of Δs = 180 s and shifted it along the time series with a step of Δn = 90 s (partial overlap). A given pair of cells was considered to be connected if their correlation in a given time window satisfied Rij(s) > Rth. We regarded this temporal sequence of correlation matrices and the corresponding


functional connectivity patterns as a multiplex network. A visualization of the dynamical β cell networks by means of the multilayer-network formalism is schematically shown in Fig. 3A. Formally, the temporal network is specified by the vector of adjacency matrices D = (δ^1, …, δ^M), where M is the number of temporal windows. Along these lines, the node degree of the i-th cell in the α-th temporal layer is defined as k_i^α = Σ_j δ_ij^α. Consequently, the degree of the i-th node in the multiplex network is a vector k_i = (k_i^1, …, k_i^M), and the same logic applies to other network measures [3]. The results in Fig. 3B demonstrate that a β cell network is indeed very dynamic despite constant stimulation. By focusing solely on the average correlation we could discern two regimes – the activation phase (10–15 min after stimulation), in which the cells were recruited and began to operate more synchronously, and the subsequent plateau phase, where the average correlation was rather balanced [15]. However, in contrast to the average synchronicity, the network properties changed much more profoundly. This observation refers mainly to changes in the fractions of highly correlated cell pairs. Apparently, the multimodal nature of the governing mechanisms in β cells [16] shapes not only the intracellular but also the intercellular dynamics.
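As a companion to the description above, the following sketch shows one way to build the temporal layers; the sampling rate fs and all variable names are assumptions of this illustration.

```python
# Sliding-window construction of the temporal (multiplex) network; the window
# (180 s) and step (90 s) follow the text, while `fs` and the (cells x samples)
# data layout are assumptions of this sketch.
import numpy as np

def temporal_layers(traces, fs, win_s=180.0, step_s=90.0, r_th=0.8):
    win, step = int(win_s * fs), int(step_s * fs)
    layers = []
    for start in range(0, traces.shape[1] - win + 1, step):
        R = np.corrcoef(traces[:, start:start + win])  # R_ij(s) in this window
        np.fill_diagonal(R, 0.0)
        layers.append((R > r_th).astype(int))          # adjacency matrix delta^alpha
    return np.array(layers)                            # shape (M, N, N)

# Multiplex degree vectors: k[alpha, i] = sum_j delta_ij^alpha
# k = temporal_layers(traces, fs=10.0).sum(axis=2)
```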

Fig. 3. (A) Dynamic functional network connectivity from windowed portions of time series represented as a multilayered temporal network. (B) Global Ca2+ activity characterized by the filtered mean-field of all β cells in the islet and the temporal evolution of different network measures.


4 Conclusion Biological patterns like intercellular interactions in tissues exhibit multiple levels of complexity and are therefore difficult to infer. Our ultimate goal is to understand the dynamical processes that take place in living organisms, but first we need to understand how the components in biological systems interact with each other, and the biological significance of these interactions. Biological network analysis and the theoretical tools developed in the field of network science have proven to be one of the key pillars for addressing these issues and represent a highly important aspect of the general systems-driven approach for exploring biological systems. Acknowledgments. The authors acknowledge the support from the Slovenian Research Agency (Programs I0-0029 and P3-0396, as well as projects N3-0048, J7-7226, J1-7009, and J3-9289).

References
1. Newman, M.E.J.: Networks: An Introduction. Oxford University Press, New York (2010)
2. Barabási, A.L.: The network takeover. Nat. Phys. 8, 14–16 (2012)
3. Boccaletti, S., Bianconi, G., Criado, R., et al.: The structure and dynamics of multilayer networks. Phys. Rep. 544, 1–122 (2014)
4. Gosak, M., Markovič, R., Dolenšek, J., et al.: Network science of biological systems at different scales: a review. Phys. Life Rev. 24, 162–167 (2018)
5. Hodson, D.J., Schaeffer, M., Romanò, N., et al.: Existence of long-lasting experience-dependent plasticity in endocrine cell networks. Nat. Commun. 3, 605 (2012)
6. Stožer, A., Gosak, M., Dolenšek, J., et al.: Functional connectivity in islets of Langerhans from mouse pancreas tissue slices. PLoS Comput. Biol. 9, e1002923 (2013)
7. Muldoon, F.S., Soltesz, I., Cossart, R.: Spatially clustered neuronal assemblies comprise the microstructure of synchrony in chronically epileptic networks. Proc. Natl. Acad. Sci. 110, 3567–3572 (2013)
8. Markovič, R., Stožer, A., Gosak, M., Dolenšek, J., Marhl, M., Rupnik, M.S.: Progressive glucose stimulation of islet beta cells reveals a transition from segregated to integrated modular functional connectivity patterns. Sci. Rep. 5, 7845 (2015)
9. Johnston, N.R., Mitchell, R.K., Haythorne, E., et al.: Beta cell hubs dictate pancreatic islet responses to glucose. Cell Metab. 24, 389–401 (2016)
10. Garden, G.A., La Spada, A.R.: Intercellular (mis)communication in neurodegenerative disease. Neuron 73, 886–901 (2012)
11. Benninger, R.K.P., Piston, D.W.: Cellular communication and heterogeneity in pancreatic islet insulin secretion dynamics. Trends Endocrinol. Metab. 25, 399–406 (2014)
12. Cigliola, V., Chellakudam, V., Arabieter, W., Meda, P.: Connexins and β-cell functions. Diabetes Res. Clin. Pract. 99, 250–259 (2013)
13. Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., Hwang, D.U.: Complex networks: structure and dynamics. Phys. Rep. 424, 175–308 (2006)
14. Watts, D.J., Strogatz, S.H.: Collective dynamics of "small-world" networks. Nature 393, 440–442 (1998)
15. Gosak, M., Stožer, A., Markovič, R., et al.: Critical and supercritical spatiotemporal calcium dynamics in beta cells. Front. Physiol. 8, 1106 (2017)
16. Bertram, R., Satin, L., Zhang, M., Smolen, P., Sherman, A.: Calcium and glycolysis mediate multiple bursting modes in pancreatic islets. Biophys. J. 87, 3074–3087 (2004)

Ground Effect Influence on Aircraft Exhaust Jet with Different Nozzle Configurations Bogdan Gherman, Oana Dumitrescu, and Ion Malael Romanian Research and Development Institute for Gas Turbines – COMOTI, 220D Iuliu Maniu Ave, 061126 Bucharest, Romania [email protected]

Abstract. Jet attachment to the ground is a major concern for all airports. The engine position, close to the ground, facilitates the formation of recirculation zones close to the nozzle exit. To assess the behavior of these zones, three different nozzle geometries are studied in this paper: a double jet, a triple jet and a double jet with chevrons. The numerical study is done using a structured grid with boundary layer refinement. The steady-state analysis is performed using the SST turbulence model. The influence of these recirculation zones on the jet development is studied in this paper. Keywords: Aircraft jet attachment · Recirculation zone · Ground effect · Airport

1 Introduction The exhaust system of an airplane can generate vortices that can influence the activity of an airport. Depending on the size of the aircraft, engine size, weight, wingspan, speed, flap and spoiler settings, proximity to the ground, engine thrust, atmospheric conditions, etc. [1], the vortices that form at ground level can interact with other airplanes or other vehicles. That is why it is important to understand the wake and recirculation zones that form at ground level in an airport. There are numerous studies regarding the wake generated by an aircraft approaching a runway [2–4]. However, another important aspect is the jet attachment to the ground while aircraft perform airport maneuvers. The noise and turbulence generated by this phenomenon affect the distance that must be kept between aircraft while taxiing on the airport runway. A faster jet attachment to the ground can therefore reduce the separation distance that has to be maintained between aircraft and thus reduce the time it takes an aircraft to arrive at its destination or depart.

2 Problem Set-Up The case studied in this paper concerns only the nozzles, without wing or airplane body influence. The engine regime simulated is the stationary regime. The geometry is scaled down to 1:27 of the original size. The numerical simulation domain is constructed around the nozzle geometry: five diameters around the nozzle and

approximately 30 diameters in length. To simulate the ground effect, the lower base of the domain is at only 58.26 mm from the nozzle wall, corresponding to 1.5 m in the real application. The chevron case has eight teeth on the primary jet to increase mixing between the two jets; each tooth spans an angle of 60°. In the triple jet case the velocity is gradually increased from the primary to the tertiary jet in order to have mild shear layers that lead to a decrease of jet noise [5] (Fig. 1).

Fig. 1. Nozzle Geometries: double jet, triple jet and double jet with chevron

In order to correctly capture the boundary layer detachment, the size of the cells near the walls of the nozzle and close to the ground has been calculated so that y+ is around 1.
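A rough first-cell-height estimate of this kind can be obtained from a flat-plate skin-friction correlation; the sketch below is a generic illustration with assumed free-stream values (loosely borrowed from the outer-jet boundary conditions), not the grid sizing actually used by the authors.

```python
# Generic y+ = 1 first-cell-height estimate from a flat-plate skin-friction
# correlation; all input values are assumptions for illustration.
import math

def first_cell_height(u_inf, rho, mu, length, y_plus=1.0):
    re = rho * u_inf * length / mu                        # Reynolds number
    cf = 0.026 / re ** (1.0 / 7.0)                        # empirical skin friction
    u_tau = math.sqrt(0.5 * cf * rho * u_inf**2 / rho)    # friction velocity
    return y_plus * mu / (rho * u_tau)                    # wall distance for target y+

# e.g. outer-jet conditions at model scale, assumed reference length 0.1 m
print(first_cell_height(u_inf=308.0, rho=1.2, mu=1.8e-5, length=0.1))
```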

Fig. 2. Computational domain

In the other parts of the domain the grid size is gradually increased, as can be seen in Fig. 2. The entire computational domain, for each case, has around 4.5 million cells; the smallest cell near the wall is 0.002123 mm, with a growth rate of 1.12 near the sensitive areas, while in the other parts this may go up to 2. The working fluid considered is air. The input data are calculated using the engine cycle calculation program for the CFM56 7B-27 gas turbine [6]. In order to be able to compare the three cases, the same developed thrust was employed for all the cases, F = 12143 daN. The boundary conditions for the double jet nozzle are the following:
The outer jet:
– speed = 308 m/s
– temperature = 294 K
– turbulent intensity = 5%
– fluid direction = perpendicular to inlet
The inner jet:
– speed = 465 m/s
– temperature = 721 K
– turbulent intensity = 5%
– fluid direction = perpendicular to inlet
The boundary conditions for the triple jet nozzle are the following:
The outer jet:
– speed = 230.141 m/s
– temperature = 314.45 K
– turbulent intensity = 5%
– fluid direction = perpendicular to inlet
The middle jet:
– speed = 305 m/s
– temperature = 350 K
– turbulent intensity = 5%
– fluid direction = perpendicular to inlet
The inner jet:
– speed = 465 m/s
– temperature = 721 K
– turbulent intensity = 5%
– fluid direction = perpendicular to inlet
Also, to have as little interference as possible, the wall behind the nozzles is treated as an inlet, where the following boundary conditions were imposed:
– speed = 2 m/s
– temperature = 288 K
– turbulent intensity = 5%
– fluid direction = perpendicular to inlet

The walls around the jet are considered openings, and at the outlet atmospheric pressure is imposed, with a considered temperature of 288 K. The analysis was performed with a commercial CFD code, Ansys CFX, using a steady-state approach. For all the cases studied in this paper, the flow is assumed compressible and the governing equations are written in Reynolds-averaged form, time and mass averaged [7]. The turbulence model employed is the SST k-ω model, which is a blend of the k-ε and k-ω turbulence models.


3 Results and Discussions The recirculation zone that develops as soon as the jet exits the nozzle is situated immediately below the nozzle, and its height extends over the entire distance between the nozzle and the ground. To determine the length of the recirculation zone, five measurement lines have been placed at 50 mm, 60 mm, 70 mm, 80 mm and 90 mm from the wall situated behind the nozzle (see Fig. 3).

Fig. 3. Measurement lines

Figure 4 presents the turbulent kinetic energy at 70 mm from the nozzle exit. The structure of the jets is different for each geometry. In particular, the chevrons have a major influence on the mixing between the two jets (Fig. 4a) [8]. In the case of the triple jet nozzle, the third jet is already mixed with the middle one and the levels of turbulent kinetic energy are much smaller than in the other two cases. Another interesting observation is that at 70 mm from the nozzle the structure of the jet is not yet influenced by ground effects.

Fig. 4. Turbulent kinetic energy at 70 mm from the nozzle exit: double jet nozzle (a), triple jet nozzle (b) and double jet with chevrons (c)


The potential core of the jet has different lengths depending on the solution employed [8], see Fig. 5, and the ground effect is noticeable in the level of turbulent kinetic energy close to the end of the potential core. Another interesting aspect is the mixing between jets: in the case of the triple jet nozzle the third jet cannot be distinguished, as it mixes almost instantly with the middle jet, while the chevron solution manages to merge the two jets into one very quickly. The recirculation zone that develops under the nozzle starts, in all cases, close to the outer jet exit, and there is also a connection between the recirculation zone and the nozzle geometry, see Fig. 6. The shape and length of these recirculation zones vary; the smallest zone occurs in the chevron case, Fig. 6a. The position of the recirculation zone reveals an interesting aspect: the interaction is only with the outer jet, and the length of the recirculation does not extend in any of the cases to the primary jet exit. A ring-shaped recirculation zone that develops around the outer jet exit from the nozzle can also be observed.

Fig. 5. Turbulent kinetic energy along the axial direction for double jet with chevrons (a), double jet (b) and triple jet (c)


Fig. 6. Isosurface of the axial velocity at −0.5 m/s for chevron case (a), double jet nozzle (b) and triple jet nozzle (c)

Fig. 7. Axial velocity profile at 50 mm, 60 mm, 70 mm, 80 mm and 90 mm

In Fig. 7 the ground is situated around the value −0.06 m and the nozzles are situated close to the value −0.035 m. It can be seen in Fig. 7 that the strongest recirculation zones occur in the double jet and triple jet cases. The chevron case presents only a weak recirculation that forms due to the ground effect.
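A post-processing step of this kind, locating the region of reversed flow along one of the measurement lines, can be sketched as follows; the sampled velocity profile below is a placeholder, not exported CFD data.

```python
# Estimating the recirculation extent along a measurement line as the region
# of negative axial velocity; the profile below is a placeholder.
import numpy as np

y = np.linspace(-0.06, 0.0, 200)                  # wall-normal coordinate, m
u_axial = 5.0 * np.sin(40.0 * (y + 0.055)) - 1.0  # placeholder velocity, m/s

recirc = y[u_axial < 0.0]                         # points with reversed flow
if recirc.size:
    print("recirculation between", recirc.min(), "and", recirc.max(), "m")
```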


4 Conclusions The analysis was performed on three types of nozzle geometries, and the development of a recirculation zone between the nozzle and the ground has been studied. The three geometries involved in this study were chosen to reduce the jet noise at the nozzle exit, and this analysis is part of a larger research campaign. The double jet with chevron nozzle shows extremely good mixing of the two jets; in the case of the triple jet nozzle, two of the jets, the outer and the middle one, mix almost immediately. The mixing process has an influence on the length of the jets. It was found that this recirculation zone is connected only with the outer jet of the nozzle in all three cases. Depending on the geometry chosen, the shape and strength of the recirculation zone differ. It cannot yet be said that this region influences the jet attachment to the ground; more studies have to be performed, especially experimental ones.

References
1. Hallock, J.N., Holzäpfel, F.: A review of recent wake vortex research for increasing airport capacity. Prog. Aerosp. Sci., 27–36 (2018). https://doi.org/10.1016/j.paerosci.2018.03.003
2. Leweke, T., Le Dizès, S., Williamson, C.H.K.: Dynamics and instabilities of vortex pairs. Annu. Rev. Fluid Mech. 48(1), 507–541 (2016)
3. Misaka, T., Holzäpfel, F., Hennemann, I., Gerz, T., Manhart, M., Schwertfirm, F.: Vortex bursting and tracer transport of a counter-rotating vortex pair. Phys. Fluids 24(2), 025104 (2012)
4. Garodz, L.J., Clawson, K.L.: Vortex wake characteristics of B757-200 and B767-200 aircraft using the tower fly-by technique. NOAA Technical Memorandum ERL ARL (1993)
5. Gherman, B., Stanciu, V., Silivestru, V.: Comparatia unei noi solutii de motor, motorul triplu-flux, cu motorul dublu-flux. INCAS Bulletin, pp. 469–479, ISBN 978-973-0-05704, Bucuresti (2008)
6. Ursescu, D., Homutescu, C., Poeată, N.: Stabilirea procedeelor de calcul a ciclurilor celor mai uzuale turbine cu gaze în domeniul temperaturii 800–2500 K. Contract de cercetare, beneficiar I.N.M.T., Bucureşti (1996)
7. Tannehill, J.C., Anderson, D.A., Pletcher, R.H.: Computational Fluid Mechanics and Heat Transfer. ISBN 1-56032-046-X (2001)
8. Crunteanu, D.E., et al.: Influence of exhaust nozzle geometry on the jet potential core development. Appl. Mech. Mater. 811, 145–151 (2015). ISSN 1662-7482

Silver Thin Film and Water Dielectric SPR Sensor Experimental and Simulation Characteristics Tanaporn Leelawattananon1 and Suphamit Chittayasothorn2 1 Department of Physics, Faculty of Sciences, King Mongkut's Institute of Technology Ladkrabang, Bangkok 10520, Thailand [email protected] 2 Department of Computer Engineering, Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang, Bangkok 10520, Thailand [email protected]

Abstract. This research project is the study of Surface Plasmon Resonance (SPR) at the interface between a silver thin film and the water dielectric layer. The purpose is to find the suitable thickness of the silver thin film layer which produces SPR and is suitable for SPR sensor applications. Both physical experiments and simulations are conducted using the Kretschmann configuration. In the physical experiments, BK7 prisms are coated with silver thin films with thicknesses of 60 nm, 80 nm, and 100 nm. The light source is a p-polarized He-Ne laser with the wavelength of 632.8 nm. Hawkeye laser detectors are used to detect the laser light intensity of the reflected light for each desired angle. In the simulations, the SPR signals are analyzed using the finite element method. The simulated results are found to agree with the results from the physical experiments. Keywords: Surface plasmon resonance · Silver thin film · Finite element method · Kretschmann configuration

1 Introduction Biosensors which employ the principle of surface plasmon resonance (SPR) have become popular since SPR technology is a label-free optical detection technology. These sensors are used for bio-molecular detection [1, 2], interactions between different biomolecules [3, 4] and medical applications such as the detection of diseases [5, 6]. Gold-based SPR biosensors developed using the Kretschmann configuration [7] are highly accepted because gold is a noble metal which has the best characteristics for SPR. However, less expensive noble metals such as silver and copper are more widely used nowadays because they are more economical than gold. Silver is more popular than copper since it has better SPR characteristics. It is employed, for example, in glucose sensors [8, 9]. Moreover, the silver surface can be adjusted to obtain better bio-molecular detection performance. In this research project, we construct a prototype SPR sensor according to the Kretschmann configuration. This inexpensive prototype employs silver, which is much less expensive than gold. Experiments were conducted to find the thickness of the silver

thin film layer which is suitable to produce surface plasmon resonance. The findings are confirmed against simulated results for the Kretschmann configuration obtained with the finite element method (FEM).

2 Theoretical Background
2.1 Surface Plasmon Resonance (SPR) Phenomenon
Surface plasmons (SPs) or surface plasmon polaritons (SPPs) take place at the interface between a metal and a dielectric substance. Electric charges vibrate in a uniform manner at the metal-dielectric interface when excited by light of an appropriate wavelength, thus generating electromagnetic waves. The common method for plasmon wave excitation is the Kretschmann configuration: a p-polarized light beam from a light source penetrates a prism, impacts the metal thin film which is coated on the prism base, and reflects to the light detector. The metal thin films which can give surface plasmon resonance are those from noble metals such as gold and silver. The thickness of the metal thin film is around 50–70 nm. The condition which activates surface plasmon resonance (SPR) is that the wave vector (kx) of the incident light parallel to the metal surface must be the same as the wave vector (ksp) of the surface plasmon wave. When p-polarized incident light impacts the interface between the dielectric and metal layer, surface plasmon waves are excited and move along the interface. The energy and momentum of the incoming light which impacts the prism are transferred to the electrons of the metal, thus exciting the surface plasmon wave. The dispersion relation of the surface plasmon wave is given by:

k_sp = k_x = (ω/c) √(ε_m ε_d / (ε_m + ε_d)).   (1)

Here ε_m is the permittivity of the metal layer and ε_d is the permittivity of the dielectric layer. When the p-polarized incident light impacts the prism at an angle θi, the light is reflected back at the angle θi as well. The reflection of light approaches zero when the incident angle is equal to the resonance angle, thus creating surface plasmon resonance. This can be detected by the light sensor. From the Fresnel equations, one can describe p-polarized light impacting the three media with the incident angle θ. The first medium is the prism. The second medium is the metal thin film; in our case 99.99% silver with relative permittivity ε, whose real and imaginary parts are given by ε = ε_real + iε_imag, is employed. The third layer is the dielectric layer, such as air, water, or other solutions. The reflectance R can be obtained by using the following equation:

R = |r_123|² = |(r_12 + r_23 exp(2i k_z2 d)) / (1 + r_12 r_23 exp(2i k_z2 d))|².   (2)

Here d is the thickness of the metal thin film layer in nm; r_123 is the overall reflection coefficient of the three-layer (prism–metal–dielectric) system, built from the interface coefficients r_12 and r_23 between the 1st and 2nd, and the 2nd and 3rd media, respectively.
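Equation (2) is easy to evaluate numerically. The sketch below computes the reflectance curve for the prism/silver/water stack; the silver permittivity and the BK7/water refractive indices are common literature values at 632.8 nm and are assumptions of this illustration, not values quoted by the paper.

```python
# Reflectance of the prism/silver/water Kretschmann stack vs. incident angle,
# following Eq. (2); material constants below are assumed literature values.
import numpy as np

lam = 632.8e-9                              # He-Ne wavelength, m
eps = [1.515**2, -18.0 + 0.5j, 1.33**2]     # BK7 prism, silver, water
d = 60e-9                                   # silver film thickness, m

def reflectance(theta_deg):
    k0 = 2 * np.pi / lam
    kx = k0 * np.sqrt(eps[0]) * np.sin(np.radians(theta_deg))
    kz = [np.sqrt(e * k0**2 - kx**2 + 0j) for e in eps]
    def r(i, j):                            # p-polarized Fresnel coefficient
        return (eps[j] * kz[i] - eps[i] * kz[j]) / (eps[j] * kz[i] + eps[i] * kz[j])
    phase = np.exp(2j * kz[1] * d)
    r123 = (r(0, 1) + r(1, 2) * phase) / (1 + r(0, 1) * r(1, 2) * phase)
    return abs(r123)**2

angles = np.arange(60.0, 75.0, 0.01)
R = np.array([reflectance(t) for t in angles])
print("resonance angle ~", angles[R.argmin()], "deg")   # dip near 67-68 deg
```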


3 Experimental Works A prototype SPR sensor in the Kretschmann configuration was constructed. A linearly p-polarized He-Ne laser with the wavelength of 632.8 nm, which is widely available commercially, is used as the excitation light source. A Hawkeye detector is used to detect the SPR signals. A Thorlabs precision rotation stage optic mount with micrometer is employed to change the incident angle (θi) of the light on the prism. The rotation mount is engraved with 1° graduations and includes a Vernier scale that directly provides 5 arcmin resolution for the SPR sensor. In this research work, there are three media layers. The first one is a 25 mm BK7 right-angle prism. The middle layer is the 99.99% silver metal thin film; thicknesses of 60 nm, 80 nm, and 100 nm are used in the experiments. The silver thin film is coated onto the prism using the DC magnetron sputtering technique. The third layer is water, which is in an inlet-outlet tube channel. The channel is attached to the prism as shown in Fig. 1. SPR signals are detected by a Hawkeye detector. The signals indicate the light reflectivity from the prism. After the experiments, the detected signals are analyzed and the results are compared and confirmed with the simulated results. The simulation also employs light excitation in the Kretschmann configuration using the finite element method (FEM); a software tool which supports the FEM is employed.

Fig. 1. The Kretschmann’s configuration SPR prototype used in the experiments.

Optical excitation takes place when the k-vector of the p-polarized (TM mode) light which impacts the prism equals the k-vector of the SP wave. When the light penetrates the metal surface, free electrons inside the metal couple with the light and vibrate resonantly with the frequency of the light, thus creating surface plasmon resonance. When the incident angle is equal to the resonance angle, or Attenuated Total Reflection angle (θATR), the horizontal component of the k-vector of the exciting light matches the k-vector of the SP wave that arises at the interface between the metal thin film layer and the dielectric. At this stage, light transfers its energy to electrons in the metal without reflecting back to the prism, and thus becomes surface plasmon energy. The surface plasmon waves propagate along the interface between the metal thin film and the dielectric. They can be detected by the Hawkeye detector, whose DC output sensitivity (at 650 nm) is 1 V/mW for a 1 mW input signal. The Hawkeye laser detector incorporates a novel amplification system, which ensures excellent performance particularly in high ambient light.

4 Experimental Results In the first experiment, the 632.8 nm p-polarized He-Ne laser impacts the prism, which is coated with a 60 nm silver thin film, and the water dielectric with refractive index 1.33 at various angles. The Hawkeye detector detects the reflected light at the various angles and outputs a voltage. The suitable incident angle which produces SPR can be determined from the ratio between the detector voltage for the light reflected from the prism and the detector voltage for the light impacting the prism. The reflectance is obtained by dividing the reflected light voltage by the fixed 0.0875 V of the light that impacts the prism. The simulated result shows the electric field at the interface between the silver thin film layer and the water dielectric, as shown in Fig. 2. The highest amplitude is 1.12 · 10^5 V/m. The electric field is clearly visible and consistent along the interface. This indicates the occurrence of plasmon waves. From the physical experiments, it is observed that the reflectance approaches zero when the incident angle is 67.8 degrees, as shown in Fig. 3a. This also corresponds with the simulated result from the finite element method, which shows a 67.5 degree resonance angle, as shown in Fig. 3b.
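The voltage-to-reflectance normalization just described amounts to a one-line computation; in the sketch below the angle grid and voltage readings are invented placeholders, with only the 0.0875 V incident-beam reading taken from the text.

```python
# Normalizing detector voltages by the incident-beam voltage (0.0875 V, from
# the text) to obtain reflectance; all other numbers are placeholders.
import numpy as np

v_incident = 0.0875
angles = np.array([66.0, 67.0, 67.8, 69.0, 70.0])            # degrees (placeholder)
v_reflected = np.array([0.060, 0.030, 0.002, 0.035, 0.050])  # volts (placeholder)

reflectance = v_reflected / v_incident
print("resonance angle:", angles[np.argmin(reflectance)], "deg")
```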

Fig. 2. The electric field obtained from the simulation when the silver film thickness is 60 nm


Fig. 3. SPR signals when the dielectric layer is water and the silver thin film is 60 nm thick. (a) The resonance angle from the experiment is 67.8 degrees. (b) The resonance angle obtained from simulations using FEM is 67.584 degrees.

In the second experiment, the 632.8 nm p-polarized He-Ne laser impacts the prism, coated with an 80 nm silver thin film, and the water dielectric with refractive index 1.33 at various incident angles. It is observed that the reflectance obtained from the Hawkeye detector is quite high; the calculated reflectance is 0.8. This also corresponds with the simulated result from the finite element method. In the third experiment, with the 100 nm silver thin film, it is found that the voltage detected by the Hawkeye detector is almost 0.875 V; the calculated reflectance is 0.96, which is very high. This result is also similar to the result obtained from the simulation using the finite element method.


5 Conclusion In this research work, we set up a prototype optical sensor using the Kretschmann configuration. Silver thin films with thicknesses of 60 nm, 80 nm, and 100 nm were coated by DC magnetron sputtering on BK7 prisms. The experiments measure the reflected output light intensity. The results show that the silver thin film with a thickness of 60 nm gives a minimum reflectance approaching zero at the interface between the silver thin film and the water dielectric. The resonance angle obtained from the experiments agrees with the analysis using the Fresnel equations and the simulated result using the finite element method. The silver thin films with thicknesses of 80 nm and 100 nm do not perform as well and give high reflectance. The silver thin film of 60 nm thickness is therefore the better candidate for SPR sensor development. Acknowledgment. This work is supported by a research fund provided by the Faculty of Sciences, King Mongkut's Institute of Technology Ladkrabang, Thailand.

References
1. Bhatta, D., Stadden, E., Hashem, E., Sparrow, I.J.G., Emmerson, G.D.: Multi-purpose optical biosensors for real-time detection of bacteria, viruses and toxins. Sens. Actuators B 149, 233–238 (2010)
2. Guo, L., Ferhan, A.R., Lee, K., Kim, D.H.: Nanoarray-based biomolecular detection using individual Au nanoparticles with minimized localized surface plasmon resonance variations. Anal. Chem. 83(7), 2605–2612 (2011)
3. Patching, S.G.: Surface plasmon resonance spectroscopy for characterization of membrane protein-ligand interactions and its potential for drug discovery. Biochim. Biophys. Acta 1838, 43–55 (2014)
4. Bhattarai, J.K., Sharma, A., Fujikawa, K., Demchenko, A.V., Stine, K.J.: Electrochemical synthesis of nanostructured gold film for the study of carbohydrate-lectin interactions using localized surface plasmon resonance spectroscopy. Carbohydr. Res. 405, 55–65 (2015)
5. Liu, R., Wang, Q., Li, Q., Yang, X., Wang, K., Nie, W.: Surface plasmon resonance biosensor for sensitive detection of microRNA and cancer cell using multiple signal amplification strategy. Biosens. Bioelectron. 87, 433–438 (2017)
6. Mariani, S., Minunni, M.: Surface plasmon resonance applications in clinical analysis. Anal. Bioanal. Chem. 406, 2303–2323 (2014)
7. Kretschmann, E., Raether, H.: Notizen: radiative decay of non radiative surface plasmons excited by light. Z. Naturforsch. A 23(12), 2315–2316 (1968)
8. Wang, J., Banerji, S., Menegazzo, N., Peng, W., Zou, Q., Booksh, K.S.: Glucose detection with surface plasmon resonance spectroscopy and molecularly imprinted hydrogel coatings. Talanta 86, 133–141 (2011)
9. Li, D., Yang, D., Yang, J., Lin, Y., Sun, Y., Yu, H., Xu, K.: Glucose affinity measurement by surface plasmon resonance with borate polymer binding. Sens. Actuators A Phys. 222, 58–66 (2015)

On the Solution of the Fredholm Equation with the Use of Quadratic Integro-Differential Splines I. G. Burova and N. S. Domnin St. Petersburg State University, 7/9 Universitetskaya nab., St. Petersburg 199034, Russia [email protected], [email protected]

Abstract. Currently there are a number of papers in which certain types of splines are used to solve the Fredholm equation. Now much attention is paid to the application of a new type of spline, the so-called integro-differential spline, to the solution of various problems. In this paper we consider the solution of the Fredholm equation using polynomial integro-differential splines of the third order of approximation. To calculate the integral in the formula of a quadratic integro-differential spline, we propose the corresponding quadrature formula. The results of numerical experiments are given. Keywords: Polynomial splines · Integro-differential splines · Fredholm equation

1 Introduction

At present, the theory of approximation by local interpolation splines continues to evolve. Approximation with local splines of the Lagrange or the Hermite type can be used in many applications. Approximation with the use of these splines is constructed on each mesh interval separately as a linear combination of the products of the values of the function and/or its derivatives at the grid nodes and basic functions. We obtain the basic functions as a solution of a system of linear algebraic equations (approximation relations). The approximation relations are formed from the conditions of accuracy of the approximation on the functions forming a Chebyshev system. The constructed basic splines provide an approximation of the prescribed order. Using basic splines, one can construct continuous or continuously differentiable approximations of predetermined types. There are new types of splines that we call integro-differential splines (see [2–9]), which compete with existing polynomial and nonpolynomial splines of the Lagrange type. The main features of integro-differential splines are the following: the approximation is constructed separately for each grid interval (or elementary rectangle); the approximation is constructed as the sum of products of the basic splines and the values of the function in the nodes and/or the values of integrals of


this function over subintervals. The basic splines are determined by solving a system of equations which is provided by the set of functions. It is known that when the integrals of the function over the intervals are equal to the integrals of the approximation of the function over the intervals, the approximation has a physical parallel. The splines constructed here satisfy the property of third order approximation. Here, the one-dimensional polynomial basic splines of the third order of approximation are constructed for the case when the values of the function are known at each point of interpolation. For the construction of the spline, we use a quadrature with the appropriate order of approximation. These basic splines can be used to solve various problems, including the approximation of functions of one and several variables; the construction of quadrature and cubature formulas; the solution of boundary value problems; the solution of the Fredholm equation; and the Cauchy problem. Currently there are papers in which certain types of splines are used to solve the Fredholm equation (see [1, 10–12, 14–16]) and boundary value problems (see [13, 17–19]). In this paper we consider the solution of the Fredholm equation using polynomial integro-differential splines of the third order of approximation. To calculate the integral in the formula of a quadratic integro-differential spline, we propose the corresponding quadrature formula. The results of numerical experiments are given.

2 Construction of a Solution of the Fredholm Equation with the Use of Quadratic Polynomial Splines
Suppose that a, b are real numbers. Consider the Fredholm equation

φ(x) − ∫_a^b K(x, s) φ(s) ds = f(x).   (1)

Suppose that n is a natural number. We construct on the interval [a, b] a uniform grid {x_j}_{j=0}^n with step h = (b − a)/n. We construct an approximate solution of the integral equation by applying quadratic polynomial splines as follows. First we represent the integral in (1) in the following form:

∫_a^b K(x, s) φ(s) ds = ∫_a^{b−h} K(x, s) φ(s) ds + ∫_{b−h}^b K(x, s) φ(s) ds.   (2)

In the first integral of (2) we apply the following transformation using integro-differential splines. We replace the function φ(s), s ∈ [x_j, x_{j+1}], by φ̃(s):

φ̃(s) = φ(x_j) ω_j(s) + φ(x_{j+1}) ω_{j+1}(s) + (∫_{x_{j+1}}^{x_{j+2}} φ(τ) dτ) · ω⟨j⟩(s).   (3)


Here ω_j(s), ω_{j+1}(s) and ω⟨j⟩(s) (the spline multiplying the integral term) are the continuous integro-differential splines which will be defined later.

Lemma 1. Let the function u(x) be such that u ∈ C³[x_{j−1}, x_{j+1}]. The following formula is valid:

∫_{x_j}^{x_{j+1}} u(x) dx ≈ (h/12)(5u(x_{j+1}) + 8u(x_j) − u(x_{j−1})).   (4)

Proof. We put ∫_{x_j}^{x_{j+1}} u(x) dx ≈ ∫_{x_j}^{x_{j+1}} ũ(x) dx, where ũ(x) = u(x_{j−1}) w_{j−1}(x) + u(x_j) w_j(x) + u(x_{j+1}) w_{j+1}(x), x ∈ [x_j, x_{j+1}], with

w_{j−1}(x) = (x − x_j)(x − x_{j+1}) / ((x_{j−1} − x_j)(x_{j−1} − x_{j+1})),
w_j(x) = (x − x_{j−1})(x − x_{j+1}) / ((x_j − x_{j−1})(x_j − x_{j+1})),
w_{j+1}(x) = (x − x_{j−1})(x − x_j) / ((x_{j+1} − x_{j−1})(x_{j+1} − x_j)).

We obtain formula (4) after integration. The proof is complete.

Remark 1. It is not difficult to obtain the following relation: |u(x) − ũ(x)| ≤ K₀ h³ ‖u‴‖_{[x_{j−1}, x_{j+1}]}, x ∈ [x_j, x_{j+1}], K₀ > 0. It can be shown that

|∫_{x_j}^{x_{j+1}} u(x) dx − (h/12)(5u(x_{j+1}) + 8u(x_j) − u(x_{j−1}))| ≤ K₁ h⁴ ‖u‴‖.
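A quick numerical check of the quadrature (4) can be written in a few lines; the test function and step sizes below are arbitrary choices of this sketch.

```python
# Verifying the quadrature (4): for smooth u the error should shrink roughly
# like h^4 when h is halved (the test function is an arbitrary choice).
import numpy as np

def quad4(u, xj, h):
    # (h/12) * (5 u(x_{j+1}) + 8 u(x_j) - u(x_{j-1})), as in formula (4)
    return h / 12.0 * (5.0 * u(xj + h) + 8.0 * u(xj) - u(xj - h))

for h in (0.1, 0.05, 0.025):
    exact = np.cos(1.0) - np.cos(1.0 + h)          # integral of sin over [1, 1+h]
    print(h, abs(quad4(np.sin, 1.0, h) - exact))   # errors fall ~16x per halving
```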

Now (3) for s ∈ [x_j, x_{j+1}] has the form:

φ̃(s) = φ(x_j) ω_j(s) + φ(x_{j+1}) ω_{j+1}(s) + (h/12)(5φ(x_{j+2}) + 8φ(x_{j+1}) − φ(x_j)) ω⟨j⟩(s).   (5)

Lemma 2. Suppose φ is such that φ ∈ C³[x_j, x_{j+2}] and φ̃(s) is given by (3). Then φ̃(s) = φ(s) for φ(s) = 1, s, s², where

ω_j(s) = (s − h − jh)(3s − 5h − 3jh)/(5h²),   (6)
ω_{j+1}(s) = −(s − jh)(9s − 14h − 9jh)/(5h²),   (7)
ω⟨j⟩(s) = 6(s − h − jh)(s − jh)/(5h³).   (8)

Proof. Using (3), (4) and the Taylor expansion, it is not difficult to obtain the relations (6), (7), (8). The proof is complete.
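The reproduction property stated in Lemma 2 is easy to confirm numerically. In the sketch below the grid x_j = jh (as in the printed formulas), the interval index and the evaluation point are arbitrary assumptions.

```python
# Numerical check of Lemma 2: with the splines (6)-(8), the approximation (3)
# reproduces 1, s and s^2 exactly (grid x_j = j*h, as in the printed formulas).
import numpy as np
from scipy.integrate import quad

h, j = 0.1, 3
om_j  = lambda s: (s - h - j*h) * (3*s - 5*h - 3*j*h) / (5 * h**2)   # (6)
om_j1 = lambda s: -(s - j*h) * (9*s - 14*h - 9*j*h) / (5 * h**2)     # (7)
om_br = lambda s: 6 * (s - h - j*h) * (s - j*h) / (5 * h**3)         # (8)

for phi in (lambda s: 1.0, lambda s: s, lambda s: s**2):
    integral, _ = quad(phi, (j + 1) * h, (j + 2) * h)  # over [x_{j+1}, x_{j+2}]
    s = j * h + 0.37 * h                               # a point in [x_j, x_{j+1}]
    approx = phi(j*h) * om_j(s) + phi((j+1)*h) * om_j1(s) + integral * om_br(s)
    print(abs(approx - phi(s)))                        # ~0 up to rounding
```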


Remark 2. If s ∈ [x_j, x_{j+1}], t ∈ [0, 1], s = x_j + th, then the basic splines can be written in the form: ω_j(x_j + th) = (t − 1)(3t − 5)/5, ω_{j+1}(x_j + th) = −t(9t − 14)/5, ω⟨j⟩(x_j + th) = 6t(t − 1)/(5h). It is not difficult to obtain the following relation: |φ(x) − φ̃(x)| ≤ K₂ h³ ‖φ‴‖_{[x_j, x_{j+2}]}, x ∈ [x_j, x_{j+1}], K₂ > 0. In the second integral of (2) we apply the following transformation using integro-differential splines. We replace the function φ(s), s ∈ [x_j, x_{j+1}], by φ̃(s):

φ̃(s) = φ(x_j) ω̃_j(s) + φ(x_{j+1}) ω̃_{j+1}(s) + (∫_{x_{j−1}}^{x_j} φ(τ) dτ) · ω̃⟨j⟩(s).   (9)

h (5u(xj ) + 8u(xj+1 ) − u(xj+2 )). 12

u(x)dx ≈ xj

Proof. We put

x j+1 xj

u(x)dx ≈

x j+1 xj

(10)

u (x)dx, where

u (x) = u(xj )w j (x) + u(xj+1 )w j+1 (x) + u(xj+2 )w j+2 (x), x ∈ [xj , xj+1 ] , where w j (x) =

(x − xj+1 )(x − xj+2 ) , (xj − xj+1 )(xj − xj+2 ) w j+2 (x) =

w j+1 (x) =

(x − xj )(x − xj+2 ) , (xj+1 − xj )(xj+1 − xj+2 )

(x − xj )(x − xj+1 ) , (xj+2 − xj )(xj+2 − xj+1 )

after integration we obtain formula (10). The proof is complete. Remark 3. It is not difficult to obtain the following relation |u(x) − u (x)| ≤ K3 h3 u [xj ,xj+2 ] , x ∈ [xj , xj+1 ] , K3 > 0. It can be shown that | K4 h4 u , K4 > 0.

x j+1 xj

u(x)dx −

h 12 (5u(xj )

+ 8u(xj+1 ) − u(xj+2 ))| ≤

On the Solution of the Fredholm Equation

39

Now (9), s ∈ [xj , xj+1 ], has the form: ωj (s)+ϕ(xj+1 ) ωj+1 (s)+ ϕ(s)  = ϕ(xj )

h (5ϕ(xj−1 )+8ϕ(xj )−ϕ(xj+1 )) ωj (s). 12 (11)

Lemma 4. Suppose ϕ  be such that ϕ  ∈ C 3 [xj , xj+2 ] and ϕ(s)  is given by (9). The following formula is valid: ϕ(s)  = ϕ(s), ϕ(s) = 1, s, s2 where s ∈ [xj , xj+1 ] ω j (s) = −(9s + 5h − 9jh)(s − h − jh)/(5h2 ),

(12)

ω j+1 (s) = (3s + 2h − 3jh)(s − jh)/(5h2 ),

(13)

ω j (s) = 6(s − h − jh)(s − jh)/(5h3 ).

(14)

Proof. Using (9), (10) and the Taylor expansion, it is not difficult to obtain the relations (12), (13), (14). The proof is complete. Remark 4. If s ∈ [xj , xj+1 ], t ∈ [0, 1], s = xj + th, the basic splines can be written in the form: ω j (xj + th) = −(9t + 5)(t − 1)/5,

ω j+1 (xj + th) = t(3t + 2)/5,

ω j (xj + th) = 6t(t − 1)/(5h). It is not difficult to obtain the following relation: |ϕ(x) − ϕ(x)|  ≤ K4 h3 u [xj−1 ,xj+1 ] , x ∈ [xj , xj+1 ] , K4 > 0. Using (5), (6)–(8), (11), (12)–(14) and the following notations: A (x) j

xj+1

K(x, s)(ωj (s) −

= xj

Bj (x)

  2h ωj (s) ds, K(x, s) ωj+1 (s) + 3

xj+1

= xj

5h Cj (x) = 12 A n−1 (x)

(x) Bn−1

h ω (s))ds, 12 j

xn = xn−1

5h = 12

xj+1

K(x, s)ωj (s)ds,

xj

xn

K(x, s) ωn−1 (s)ds,

xn−1



 2h ω  K(x, s) ω n−1 (s) + (s) ds, 3 n−1

40

I. G. Burova and N. S. Domnin

Cn−1 (x)

xn = xn−1



 h  K(x, s) ω n (s) − ω (s) ds 12 n−1

we get the following system of equations for calculating ϕ(xi ), i = 0, . . . , n: ϕ(xi ) −

n−2 



ϕ(xj )A (xi ) + ϕ(xj+1 )Bj (xi ) + ϕ(xj+2 )Cj (xi ) j

j=0



− (φ(x_{n−2}) Ã_{n−1}(x_i) + φ(x_{n−1}) B̃_{n−1}(x_i) + φ(x_n) C̃_{n−1}(x_i)) = f(x_i).

Table 1. Numerical solutions when n = 10 and n = 100 (absolute errors)

K(x, s)     | φ(x)          | n = 10      | n = 100
x²s²        | x³ sin(x²)    | 0.21 · 10⁻⁵ | 0.24 · 10⁻⁷
eˣ cos(s)   | x³ sin(x²)    | 0.37 · 10⁻³ | 0.44 · 10⁻⁶
xs          | 1/(1 + 25x²)  | 0.60 · 10⁻⁴ | 0.61 · 10⁻⁸

3 Numerical Results

Here we present some numerical results. Table 1 shows the absolute values of the difference between the exact solution and the solutions obtained with the suggested method, for a = 0, b = 1, with n = 10 and n = 100, and Digits = 15. Here f(x) is obtained using K(x, s) and φ(s).
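For orientation, the linear system above can be compared with a classical Nyström discretization of (1). The sketch below uses simple trapezoidal weights instead of the integro-differential spline construction, so it is a simplified stand-in for the authors' method, shown on the third test case of Table 1.

```python
# Simplified Nystrom-style solution of Eq. (1) with trapezoidal weights -- a
# stand-in for the spline-based system above, not the authors' exact scheme.
import numpy as np

def nystrom(K, f, a=0.0, b=1.0, n=100):
    x = np.linspace(a, b, n + 1)
    w = np.full(n + 1, (b - a) / n)      # trapezoidal quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    A = np.eye(n + 1) - K(x[:, None], x[None, :]) * w
    return x, np.linalg.solve(A, f(x))

# Third test case of Table 1: K(x, s) = x*s, phi(x) = 1/(1 + 25 x^2);
# here f = phi - (integral of K*phi) is available in closed form.
K = lambda x, s: x * s
phi_exact = lambda x: 1.0 / (1.0 + 25.0 * x**2)
c = np.log(26.0) / 50.0                  # integral of s/(1+25 s^2) over [0, 1]
f = lambda x: phi_exact(x) - c * x

x, phi = nystrom(K, f)
print(np.max(np.abs(phi - phi_exact(x))))   # small, though larger than Table 1
```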

4 Conclusion

The quadratic polynomial integro-differential splines proposed in this paper showed the possibility of solving the Fredholm integral equation. In the proposed method, it is necessary to calculate the integrals A_j(x), B_j(x), C_j(x), Ã_{n−1}(x), B̃_{n−1}(x), C̃_{n−1}(x). In future papers, the application of nonpolynomial splines to solve the Fredholm equation will be investigated.

References
1. Allouch, C., Sablonnière, P.: Iteration methods for Fredholm integral equations of the second kind based on spline quasi-interpolants. Math. Comput. Simul. 99, 19–27 (2014)
2. Burova, I.G., Rodnikova, O.V.: Integro-differential polynomial and trigonometrical splines and quadrature formulae. WSEAS Trans. Math. 16, 11–18 (2017)
3. Burova, I.G., Doronina, A.G., Miroshnichenko, I.D.: A comparison of approximations with left, right and middle integro-differential polynomial splines of the fifth order. WSEAS Trans. Math. 16, 339–349 (2017)
4. Burova, I.G., Poluyanov, S.V.: On approximations by polynomial and trigonometrical integro-differential splines. Int. J. Math. Model. Methods Appl. Sci. 10, 190–199 (2016)
5. Burova, I.G., Doronina, A.G.: On approximations by polynomial and nonpolynomial integro-differential splines. Appl. Math. Sci. 10(13–16), 735–745 (2016)
6. Burova, I.G.: On left integro-differential splines and Cauchy problem. Int. J. Math. Model. Methods Appl. Sci. 9, 683–690 (2015)
7. Burova, I.G., Rodnikova, O.V.: Application of integro-differential splines to solving an interpolation problem. Comput. Math. Math. Phys. 54(12), 1903–1914 (2014)
8. Burova, I.G., Poluyanov, S.V.: Construction of mean-square approximation with integro-differential splines of fifth order and first level. Vestnik St. Petersburg Univ.: Math. 47(2), 57–63 (2014)
9. Burova, I.G., Evdokimova, T.O.: On construction of third order approximation using values of integrals. WSEAS Trans. Math. 13, 676–683 (2014)
10. Bellour, A., Sbibih, D., Zidna, A.: Two cubic spline methods for solving Fredholm integral equations. Appl. Math. Comput. 276, 1–11 (2016)
11. Chen, F., Wong, P.J.Y.: Discrete biquintic spline method for Fredholm integral equations of the second kind. In: 12th International Conference on Control, Automation, Robotics & Vision (ICARCV 2012), Guangzhou, China, 5–7 December 2012
12. Ebrahimi, N., Rashidinia, J.: Spline collocation for solving system of Fredholm and Volterra integral equations. Int. J. Math. Comput. Sci. 8(6), 1008–1012 (2014)
13. Kalyani, P., Ramachandra Rao, P.S.: Numerical solution of heat equation through double interpolation. IOSR J. Math. (IOSR-JM) 6(6), 58–62 (2013)
14. Sablonnière, P., Allouch, C., Sbibih, D.: Solving Fredholm integral equations by approximating kernels by spline quasi-interpolants. Numer. Algorithms 56, 437–453 (2011)
15. Ray, S.S., Sahu, P.K.: Application of semiorthogonal B-spline wavelets for the solutions of linear second kind Fredholm integral equations. Appl. Math. Inf. Sci. 8(3), 1179–1184 (2014)
16. Rashidinia, J., Babolian, E., Mahmoodi, Z.: Spline collocation for Fredholm integral equations. Math. Sci. 5(2), 147–158 (2011)
17. Ramachandra Rao, P.S.: Solution of fourth order boundary value problems using spline functions. Indian J. Math. Math. Sci. 2(1), 47–56 (2006)
18. Ramachandra Rao, P.S.: Solution of a class of boundary value problems using numerical integration. Indian J. Math. Math. Sci. 2(2), 137–146 (2006)
19. Ravikanth, A.S.V.: Numerical treatment of singular boundary value problems. Ph.D. thesis, National Institute of Technology, Warangal, India (2002)

Application of Raman Spectroscopic Measurement for Banknote Security Purposes Hana Vaskova, Pavel Tomasek, and Milan Struska Faculty of Applied Informatics, Tomas Bata University in Zlin, Nad Stranemi 4511, 76005 Zlin, Czech Republic {Vaskova,tomasek}@fai.utb.cz

Abstract. The development of science and technology brings new tools to improve security features on banknotes as well as methods to verify their authenticity. This development also expands the technical possibilities for money counterfeiting. The paper deals with the experimental examination of selected protective elements – the paper and inks used on euro banknotes. The advanced analytical method of Raman spectroscopy was used for the experimental analysis, as the method meets the requirements crucial for forensic examination. The main aim is to obtain characteristic Raman spectra of selected security features that can serve for testing the authenticity of questioned banknotes. The results show an apparent diversity of spectral markers in individual samples and the method's suitability for authentication. A comparison of the results for euro banknotes of the first and the Europa series is discussed in the paper. Keywords: Raman spectroscopy · Euro · Banknotes · Counterfeit · Detection · Genuine · Ink · Security

1 Introduction Money, as a basic payment tool, has been a phenomenon of wealth and power since ancient times. Its form has undergone a number of changes during the centuries; the contemporary physical form is represented by banknotes and coins. The main problem, which has arisen from the very beginning, is the counterfeiting of money. Forgery and conscious payment with counterfeit money is one of the oldest crimes [1]. Due to the nominal value, the most frequent occurrence of contemporary counterfeits concerns banknotes. Currently, banknote authenticity is guaranteed by the presence of security features on every modern banknote. The development of science and technology brings new tools to improve security features as well as methods to verify their authenticity. However, this development also enriches falsifiers with possibilities and procedures to overcome or better imitate original protective elements. The euro currency was launched on 1 January 1999 and became legal tender in 19 Member States of the European Union. There are seven denominations of euro banknotes currently in circulation: €5, €10, €20, €50, €100, €200 and €500. The banknotes of the first series are gradually being replaced by the notes of the second series, called the Europa series. The Europa series offers banknotes with enhanced security features and higher durability in comparison to the first series [2].


The statistical studies [3] over the last seven years show that an upward trend in the number of counterfeit euro banknotes withdrawn in Europe held in the years 2012–2015; a decrease was recorded in 2016. The year 2017 represented a rather small fluctuation, with slight growth compared to 2016. The first half of 2018 is at the minimum level compared to previous years, but the statistics show an increase in withdrawn banknotes in the second half of every year 2012–2017 compared to the first half, except in 2015. The notes with nominal values €20 and €50 have been the most counterfeited banknotes in recent years. Together, they accounted for about 80–87% (in 2015–2018) of the counterfeits. The most frequently counterfeited note has been the €50 from 2016 up to today [3]; in 2015 it was the €20 note [2]. Counterfeits of other denominations have very little representation. The statistics also indicate that the predominant counterfeiting technology is offset printing, followed by inkjet printing. A minority technique is color copying. The rate of offset-printed counterfeit euro notes withdrawn in the Czech Republic decreased from 79.3% (2016) to 60.5% (2017). In contrast, inkjet prints rose from 17.8% (2016) to 38.2% (2017) [4]. The main motivation of the experimental study is to obtain characteristic spectral data of selected security features for testing the authenticity of questioned banknotes using an advanced but procedurally simple method that is, in particular, nondestructive, rapid, easy to operate, offers a unique fingerprint of the material and can be portable. Therefore, Raman spectroscopy appears to be an appropriate method and is used.

2 Counterfeit Detection The quality and authenticity of counterfeits is on different levels, from easily recognizable by the naked eye to highly sophisticated, requiring the use of analytical techniques and special equipment. The advanced methods evaluate microprint, 2D and luminescent features, and chemical properties [2]. The simple "feel, look and tilt" test is recommended as a first check for recognizing the originality of the notes [2]. Machine authentication techniques for the accurate prediction of whether an inspected banknote is genuine or not have undergone development from mainly ultraviolet (UV) features in the 1970s and infrared (IR) features in the 1980s to the use of magnetic features in the 1990s [5]. Since the new millennium, spectral properties have been exploited [5]. The forensic examination of banknote counterfeiting involves techniques such as IR [6], UV-VIS-NIR [7] and Mössbauer spectroscopy [8], X-ray fluorescence spectroscopy [8], X-ray powder diffraction [9] and mass spectrometry [10]. Image analysis techniques involving transmittance and reflectance analysis, using UV, visible and IR light spectra, are used especially in ATM machines, as they are expensive for common use supervised by humans [11]. Other methods based on image processing and pattern recognition [12], image digitalization [13], artificial neural networks [14] and others are also being developed.
2.1 Raman Spectroscopy

Raman spectroscopy is an effective analytical method for the study and identification of various types of material. The method is based on the Raman effect, an inelastic scattering of photons of the incident laser light on the molecules of the sample. This method brings a number of advantages and meets the requirements that are crucial for forensic examination [15]. Raman spectroscopy offers non-invasive, non-destructive, rapid analysis of samples of different states and forms, has no special requirements for sample preparation, enables testing through covering layers, and has the potential to recognize various substances, even their structural modifications, via a specific chemical fingerprint. The features and attractiveness of the method contribute to the growth of its popularity and utilization in many areas including, in addition to scientific, technical and medical, also security applications.

3 Samples and Instrumentation The genuine euro banknotes of the first series with nominal values €10, €20, €50 and €100 were measured, as well as the banknotes of the Europa series available at this time. Measurements of Raman spectra were performed on an inVia Basis Raman microscope from Renishaw. A diode laser with the excitation wavelength of 785 nm was used as the light source. The maximum output power of the laser was 300 mW. A Leica confocal microscope with a resolution of up to 2 µm was coupled to the Raman spectrometer.

4 Results The paper and inks used were the objects of the Raman spectroscopic measurements. Firstly, the paper of the genuine notes was measured, along with other types of common paper, such as office paper, smooth art paper, newsprint and cardboard. The obtained Raman spectra, presented in Fig. 1, indicate obvious qualitative differences. Only the area 1050–1300 cm⁻¹ (grayed out in Fig. 1) exhibits a mostly similar waveform. The spectra of note paper with different nominal values were studied; the layout, which is crucial for identification of the material, matched. Minor differences were noted in the intensity of some of the bands. Paper is produced by compaction of natural cellulose fibers derived from wood pulp, secondary (recycled) fibers or vegetable fiber materials such as cotton or hemp. Besides the fibers, paper contains other additives (CaCO3, kaolin, TiO2, PVA, etc.) for improving specific qualities [16]. The intense band at 1095 cm⁻¹ may indicate symmetric stretching of the carbonate ion [17]. Bands characteristic of TiO2 were observed, which is also confirmed by [9]. Raman spectra of the paper also exhibit the characteristic band at 641 cm⁻¹ for C-S-C vibrations, corresponding to viscose fibers recovered from cellulose [18]. The use of the right type of paper is one of the security features. Secondly, points of interest were selected on the front of each banknote according to the color distribution. The selected colors are listed in Table 1. The banknotes were measured at these points. All measurements were collected with magnification 5x and 20x, with 5–10 s exposure time and 2–10 accumulations. Laser powers were from 5% to 50% of the output laser power. The samples were scanned in the range 200 to 1700 cm⁻¹ with 2 cm⁻¹ spectral resolution. A baseline subtraction function using a cubic spline was applied to all measured spectra.
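Baseline subtraction of the kind mentioned above can be sketched as follows; the synthetic spectrum and the anchor-point positions are assumptions of this illustration, not the software actually used.

```python
# Illustrative cubic-spline baseline subtraction for a Raman spectrum; the
# toy spectrum and anchor points below are assumptions of this sketch.
import numpy as np
from scipy.interpolate import CubicSpline

shift = np.arange(200.0, 1700.0, 2.0)                      # Raman shift, cm^-1
spectrum = (np.exp(-0.5 * ((shift - 1095.0) / 8.0) ** 2)   # toy band at 1095
            + 2e-4 * shift)                                # sloping baseline

anchors = np.array([200.0, 500.0, 800.0, 1400.0, 1698.0])  # band-free points
baseline = CubicSpline(anchors, np.interp(anchors, shift, spectrum))(shift)
corrected = spectrum - baseline                            # corrected signal
```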


Fig. 1. Raman spectra of banknote and some other types of paper.

The results for the €10 note are described in this section in detail; the same procedure was applied to the notes with other nominal values. The selected points according to Table 1 are displayed in Fig. 2. Raman spectra of the specified points are shown in Figs. 3 and 4. The characteristic spectrum of the paper is most visible in the indicated (in Figs. 3 and 4) spectral areas 350–680 cm⁻¹ and 1050–1200 cm⁻¹. The other Raman bands are related to the signal of the inks themselves. The weakest Raman signal of the paper appears in the blue area on the flag (no. 5), apparently because of a layer of ink on a larger continuous surface. Spectral mathematics enables the subtraction of the measured ink and paper signals; however, due to the different intensity of paper exposure, the comparison of colors for different nominal values is not appropriate. In addition, the bands characteristic of paper will often appear in the Raman spectra, and the subsequent analysis of the banknote's authenticity needs to take this into account. A comparison of the intensity and occurrence of the Raman bands of inks from the different series suggests a change in composition, especially for the brown ink, some slight change in the yellow, and a certain match in the blue ink (flag). The presence of the intense band at 1599 cm⁻¹ also reflects changes in the development of the color of the individual inks occurring on the notes. More samples need to be analyzed to obtain precise representative average spectra for all colors. However, the diversity of the measured Raman data shows the potential to recognize original paper and inks.

Table 1. Selection of measured color points on the euro banknotes

Value [EUR] | Series | Focus on colours
10  | 1st    | Yellow, orange, red, brown, blue (flag)
10  | Europa | Yellow, red, brown, green, blue (flag)
20  | 1st    | Yellow, brown, blue (light), blue (dark), blue (flag)
20  | Europa | Yellow, red, green, blue (light), blue (flag)
50  | 1st    | Yellow, orange, brown, blue (flag)
50  | Europa | Yellow, orange, green, brown, blue (flag)
100 | 1st    | Yellow, green (light), green (dark), brown, blue (flag)


Fig. 2. The €10 note of the first and Europa series with the marked selected points.

Fig. 3. Raman spectra of the selected points on €10 note of the first series. Numbers 1–5 correspond to the points indicated in Fig. 2.

Fig. 4. Raman spectra of the selected points on €10 note of the Europa series. Numbers 1–5 correspond to the points indicated in Fig. 2.


5 Conclusions The Raman spectroscopic study of euro banknote paper and the inks used proved the ability of the method to capture the characteristic features of the selected protective elements. Confirmation of compliance with the original is possible on the basis of a comparison of Raman spectra, which are unique for every single material, even without knowledge of the exact composition of that material, the ink and the paper. A Raman spectral database is being developed for the purpose of banknote authentication. The future steps will include further measurements and enlargement of the number of samples to obtain precise representative average spectra for all colors. This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic within the National Sustainability Programme project No. LO1303 (MSMT-7778/2014) and also by the European Regional Development Fund under the project CEBIA-Tech No. CZ.1.05/2.1.00/03.0089.

References

1. Young, M.: Learn about the world of counterfeiting from one who lived there (2012)
2. European Central Bank. https://www.ecb.europa.eu/euro/banknotes/html/index.en.html
3. Statista 2018. https://www.statista.com/statistics/412739/europe-euro-banknotes-counterfeit/
4. Czech National Bank. http://www.cnb.cz/en/
5. de Heij, H.A., De Nederlandsche Bank, N.V.: Life Cycle Analyses of Security Features in Banknotes. Banknote (2005)
6. Sonnex, E., Almond, M.J., Baum, J.V., Bond, J.W.: Identification of forged Bank of England £20 banknotes using IR spectroscopy. Spectrochim. Acta A 118, 1158–1163 (2014)
7. Causin, V., Casamassima, R., Marruncheddu, D., Lenzoni, G., Peluso, G., Ripani, L.: The discrimination potential of diffuse-reflectance ultraviolet–visible–near infrared spectrophotometry for the forensic analysis of paper. Forensic Sci. Int. 216, 163–167 (2012)
8. Rusanov, V., et al.: Mössbauer and X-ray fluorescence measurements of authentic and counterfeited banknote pigments. Dyes Pigm. 81, 254–258 (2009)
9. Marabello, D., Benzi, P., Lombardozzi, A., Strano, M.: X-ray powder diffraction for characterization of raw materials in banknotes. J. Forensic Sci. 62, 962–970 (2017)
10. Adams, J.: Analysis of printing and writing papers by using direct analysis in real time. Int. J. Mass Spectrom. 301, 109–126 (2011)
11. Bruna, A., Farinella, G.M., Guarnera, G.C., Battiato, S.: Forgery detection and value identification of Euro banknotes. Sensors 13, 2515–2529 (2013)
12. Lohweg, V., et al.: Mobile devices for banknote authentication – is it possible? In: The Conference on Optical Security and Counterfeit Detection, pp. 1–12 (2012)
13. Gillich, E., Lohweg, V.: Banknote Authentication. November (2010)
14. Mohamad, N.S., et al.: Banknote authentication using artificial neural network. In: International Symposium on Research in Innovation and Sustainability, pp. 15–16 (2014)
15. Chalmers, J.M., Howell, G.E., Hargreaves, M.D.: Infrared and Raman Spectroscopy in Forensic Science. Wiley, Chichester, West Sussex, UK (2012)
16. Wilson, I.: Filler and coating pigments for papermakers. Europe 104, 1287–1300 (2013)
17. Bozlee, B.J., et al.: Remote Raman and fluorescence studies of mineral samples. Spectrochim. Acta A 61, 2342–2348 (2005)
18. Cho, L.L.: Identification of textile fiber by Raman microspectroscopy. Forensic Sci. J. 6, 55–62 (2007)

Design on Clothes with Security Printing, with Hidden Information, Performed by Digital Printing

Jana Žiljak Gršić1, Lidija Tepeš Golubić1(&), Vilko Žiljak2, Denis Jurečić2, and Ivan Rajković1

1 Zagreb University of Applied Sciences, Vrbik 8, Zagreb, Croatia
[email protected], [email protected], [email protected]
2 Faculty of Graphic Arts, University of Zagreb, Getaldićeva 2, Zagreb, Croatia
[email protected], [email protected]

Abstract. The content of the article is the expansion of INFRAREDESIGN® technology to achieve a hidden image in fashion clothing. Recognition and separation of the two pictures are demonstrated with a dual camera for the visual and infrared spectrum, as a video uploaded to the web, and as a print reproduction on canvas. A portrait is placed in the computer graphic, and their reproduction is realized with cyan, magenta, yellow and black dyes. The portrait, invisible to the naked eye, is at the same time a security graphic, hidden information, an individualized sign and a new way of branding textile products. The result of the dualism of different dyes with C, M, Y, K components is a system of twin dyes that are identical in the visual spectrum but have different light absorption properties in the near infrared spectrum. The experimental scope of the dye spectroscopy is from 400 to 1000 nm.

Keywords: NIR spectroscopy · Digital printing · Clothes · INFRAREDESIGN · VZ dye separation

1 Introduction

Clothes design is being expanded by the IRD technology of hiding a security drawing. This work relies on dyes used in digital printing, unlike the dyes of conventional painting [1]. It uses the algorithms for computer programming of CMYKIR dye separation that are recommended for the protection of documents and securities [2]. Spectrograms of light absorption have been made for selected dyes and applied in the individualization of ceramic painting [3]. Many color groups in industrial applications in metal, silk and ceramic painting are being studied. Demonstration experiments with INFRAREDESIGN® printing technology in the making of dresses have been partially conducted [4]. The phenomenon of a hidden drawing is being extended to securities such as tickets for sport events and certificates that must be protected from falsification attempts [5]. The IRD method uses compositions of dyes that react differently in scanning devices that can “see” only RGB – visual light; we abbreviate it as “V”. Their


reproduction will not contain the NIR image that is recognized at the wavelength of 1000 nm. This information is called the Z value [6]. This is how we got the acronyms VZ printing, VZ reproduction, VZ painting and others. The theory of the VZ procedure of hiding a picture in a picture, that is, the process of joining two pictures in their visual and near infrared states, depends strongly on the material and the dyes planned for IRD. In this direction of IRD technology research, different recipes for mixing process dyes can be found. Some authors have succeeded in mixing dyes so as to induce NIR light absorption, although their recipes for dye composition are applicable only to certain papers and printing techniques [7]. The organization of the printing separation is matched to the information obtained by spectroscopy of the dyes in the near infrared spectrum. We take into account the NIR security cameras that are set up all around us in cities. Dual cameras that “observe and register” in two spectra, V and Z, are being developed in this direction [8]. One of the two NIR cameras also registers during the day, with the visual spectrum blocked. The results of these experiments and research are being adapted to the industry of making uniforms that carry hidden information [9].

2 Design on Textile and Computer Graphics

Duality – hiding – is the reality of the new design in textile. This article treats the dyeing of fabric for the visual and infrared areas of sunlight. We have developed a new fashion design for two spectral areas: visual and infrared. Two cameras show two contents at the same time: a picture and a drawing. The first graphic is intended for observing the clothes with the naked eye. The second graphic has the property of invisibility. The fabric is dyed with “twin” dyes carrying two mutually independent designs in the same place. Twin dyes have the same color tone in the visual area, but the two dyes manifest differently at 1000 nm in the near infrared spectrum. The second design is invisible to the naked eye; it is hidden, accessible with a NIR security camera. Twin pairs for connecting both pictures and for digital printing are given in the paper.

Each color tone can be mixed in many different ways. The same tone can be mixed for the narrow area of visual sensitivity of our eye, or it can be mixed so that it appears over a wavelength area twice as wide. The graphic artist can therefore grade the appearance of the picture in the infrared area. The dye “S” is defined as the dye that has equal shares of C, M and Y. Our eyes view the S composition as black, but such a composition of black does not absorb NIR – Z – light. Conversely, a black toner, the black colorant named “Carbon Black”, absorbs light even beyond 800 nm. The whole experimental design concentrates on the deliberate mixing of all colors among themselves. Since there are two black colors, S and K, that our eyes view in the same way but that absorb light above 800 nm in different ways, the theory called INFRAREDESIGN® has been developed (Fig. 1).


Fig. 1. Dress in duality, visual and near infrared.

IRD reproduces two graphics, while the naked eye sees only the first graphic, called V (visual). At the beginning of the IRD process the second graphic is also set; it shall not be visible to the naked eye in the reproduction. It is planned for recognition in the NIR spectrum above 850 nm. This graphic is called Z. It is reproduced in colors that depend on the first graphic, which is clearly visible to the naked eye. The algorithm for the composition of conventional process dyes serves the idea of merging two independent pictures, of which the second graphic is visible exclusively with a supervisory infrared camera. The printing is performed as twins X0 and X40. X0 are dyes composed only of cyan, magenta and yellow with different mutual shares. X40 are their twins that also contain a carbon black dye. The following relations have been set between X0 and X40:

C40 = 0.1103 Y0 − 0.2311 M0 + 1.128 C0 − 12.69
M40 = 0.3214 Y0 + 1.0377 M0 − 0.3540 C0 + 3.56      (1)
Y40 = 1.203 Y0 − 0.0934 M0 − 0.0921 C0 − 38.42

These relations are the starting point for the calculation of the Z graphic value in the range from zero to 40% coverage of the black color tone. The visual computer graphic is a line design with interspaces without color. VZ separation by relation (1) is valid with restrictions. The calculation of the Z value is conducted only for the dyes that have a


coverage value higher than 40% for each process dye. This is why the hidden portrait is intermittent; it appears only in places where the color of the visual graphic satisfies the coloration condition (Fig. 2). Our eyes do not see the hidden portrait. Figure 2 is prepared for the four-color press with c, m, y, k components. The transformation between the color channels and the extension towards near infrared light is illustrated in Fig. 4. A sketch of the separation computed by relation (1) is given below.
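A minimal sketch of the separation by relation (1), including the 40% coverage restriction; the coefficient signs follow our reading of the printed equations, and the clamping of the result to the 0–100% range is our own assumption.

```python
def vz_twin(c0, m0, y0):
    """Map an X0 dye (CMY only, coverages in percent) to its X40 twin
    carrying k = 40% carbon black, using regression relation (1).
    Returns None where the restriction is not met, which is why the
    hidden Z portrait appears only intermittently on the garment."""
    # Relation (1) is stated to be valid only where every process dye
    # exceeds 40% coverage.
    if min(c0, m0, y0) <= 40:
        return None
    c40 = 0.1103 * y0 - 0.2311 * m0 + 1.128 * c0 - 12.69
    m40 = 0.3214 * y0 + 1.0377 * m0 - 0.3540 * c0 + 3.56
    y40 = 1.203 * y0 - 0.0934 * m0 - 0.0921 * c0 - 38.42
    clamp = lambda v: max(0.0, min(100.0, v))   # keep within printable coverage
    return clamp(c40), clamp(m40), clamp(y40), 40.0

print(vz_twin(51, 96, 82))   # a dye inside the valid region -> twin with k = 40
print(vz_twin(30, 96, 82))   # cyan below 40% -> None, a V-only area
```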

Fig. 2. Computer graphics for the dress.

The parameters in the regression equations are the result of experimental work with printing and spectroscopy, with the goal of achieving equality of the dyes in the V spectrum. A linear regression of the dependence of the printed dyes on the starting dyes in the visual spectrum is introduced in this article.

The dyes are studied in the visual and near infrared spectrum. Their mixing is subjected to the planning of “twin dyes”. Every color tone has two material solutions of


the same experience of color in the visual spectrum. Only the second twin is recognized in the near infrared spectrum. Broad application in the printing industry is anticipated, as a double picture carrying invisible information. The printing is performed only with the conventional process dyes: cyan, magenta, yellow and black.

Design on textile expands to two spectral areas: visual and near infrared. Two pictures are set in the same place. Each area is observed independently; the opposite picture is excluded. The new design is observed with the pertaining dual cameras. The first camera recognizes the picture in the visual spectrum, and the second, NIR camera “sees” only the picture created for the spectrum in the area from 800 to 1000 nm. The mastery of the material enables new visual research and new computer graphics; it is a redesign of the area of visualization. The management of pictures in the visual and near infrared spectrum by creating double information in the same place is developed further in this paper. Dual cameras for distinguishing two pictures – two states of prints in different areas of light – are being developed. Digital printing in two states is a new approach to the presentation of information, and a novelty in visual communications.

3 Dye Spectrograms

For the purposes of this paper, we present spectrograms of two dyes (Fig. 3) in the range from 400 to 900 nm. We have separated the NIR light area into two parts: Z1 and Z2. Each dye has its own curve describing its light absorption. The graphs of the individual twin dyes separate after 700 nm. The dye that also contains carbon black (marked Z) shows an absorption value in the NIR spectrum higher than 0.15, which is enough for it to be recognized with a NIR camera. Its twin (named V) drops low, all the way to zero. The light absorption graphs of the different dyes overlap in the Z1 area (700 to 800 nm). Filters that do not pass light below 850 nm are installed in the NIR camera. Infrared graphics in the Z2 area at 1000 nm are demonstrated in this paper.

We base our innovation on the standard process dyes: cyan, magenta, yellow and (K) black. There is extensive literature on those dyes, but only for color setting in daylight. We are forming a theory about using those same process dyes for setting in near infrared light (NIR). We respect “color management” in the area from 400 to 1000 nm. CMYK enables an infinite number of color tones on the same document, and infinite tinting is introduced on the document in the infrared area. The component spectrum of one selected dye with the composition c51, m96, y82, k0 has been determined as a V dye. Its twin Z has the composition c06, m85, y54, k40%. The V and Z dyes make up a twin dye system. Their components can be found on the graph in Fig. 3; the Z dye and its components are marked with dashed lines. A precise study of the individual spectrograms of the c, m, y, k dye components enables quantity corrections of certain components. The goal is to achieve equality of the Vtwin and Ztwin graphs in the area from 400 to 700 nm; this equality “hides” the Z picture. Many dyes have been studied in our laboratory, and a collection of twin dyes for independent design of merging two pictures according to the VZ method of color and dye separation has been created, as in Fig. 3, through the process of twin presentation. A check of the two twin conditions is sketched below.
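The two twin conditions just described – equal absorption in the visual range, and NIR absorption above roughly 0.15 for the Z dye only – could be verified over measured spectrograms as in the following sketch; the tolerance for visual equality and the synthetic curves are illustrative values of ours.

```python
import numpy as np

def is_twin_pair(wl_nm, absorb_v, absorb_z, visual_tol=0.02, nir_min=0.15):
    """Check a V/Z twin pair: (a) the curves coincide in the visual
    range 400-700 nm, and (b) behind the camera's 850 nm blocking
    filter only the Z dye (with carbon black) stays above ~0.15,
    so a NIR camera sees Z but not V."""
    visual = (wl_nm >= 400) & (wl_nm <= 700)
    z2 = wl_nm >= 850
    same_in_visual = np.max(np.abs(absorb_v[visual] - absorb_z[visual])) < visual_tol
    z_visible_in_nir = np.min(absorb_z[z2]) > nir_min
    v_dark_in_nir = np.max(absorb_v[z2]) < nir_min
    return bool(same_in_visual and z_visible_in_nir and v_dark_in_nir)

# Synthetic example: identical up to 700 nm, then V decays while Z stays high.
wl = np.arange(400, 1001, 10.0)
v = np.where(wl <= 700, 0.6, 0.6 * np.exp(-(wl - 700) / 60.0))
z = np.where(wl <= 700, 0.6, np.maximum(0.25, 0.6 - 0.001 * (wl - 700)))
print(is_twin_pair(wl, v, z))   # True
```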


Fig. 3. Twin dyes spectrograms, light absorption of red color.

4 Records of Infrared Graphics in the Continuity of the V and Z Spectra

The clothes are observed and recorded in sunlight with a camera that has 24 different filters from 240 to 1000 nm. A video animation of the transformation of the view of the VZ graphic is created by merging the pictures. The gradual withdrawal and blocking of light is important for forensic testing of design authenticity. The part of the dress with the distinguished portrait is shown in Fig. 4.

Fig. 4. Blockade of the light at 500 and at 720 nm. Video presentation: http://vilko.ziljak.hr/Portret.mp4


For example, when we use a barrier at 600 nm, we no longer see the effect of the yellow dye, and we no longer see the picture in its full color splendor; red is completely disrupted by this blockage. Beyond 720 nm a hidden picture appears in its new element. In our example it is a new line graphic that arose due to limitations in the algorithm controlling the accuracy of the regression equation for dye separation from X0 to X40.

5 Conclusion

Security printing in fabric dyeing is introduced. The procedure and algorithm for mixing dyes include the light absorption properties of the dyes in the range from the visible to the near infrared spectrum. The programming of process dye separation is being implemented for industrial plotters printing on fabric. Digital records include a design with individual content for every piece of clothing, uniform or other textile solution. Visual arts design expands to pictures that are watched at several light levels. The approach to this infrared phenomenon in art is the “visual culture of the new age”. A designer gains a new method of dyeing textile, with which they can consciously protect their intellectual property. INFRAREDESIGN enables a new creative direction – IR painting, that is, the emergence of new IR painters and IR designers. A personal, individual approach to creating dual visual art has opened a new space for creative activity: painting for a new environment under video cameras, video surveillance, and the video experience of dual observation.

References

1. Jurečić, D., Žiljak, V., Tepeš Golubić, L., Žiljak Gršić, J.: Spectroscopy of colorants for fine art in visual and near infrared spectrum. In: 2nd International Conference on Applied Physics, System Science and Computers (APSAC 2017), Dubrovnik, Croatia
2. Pap, K., Žiljak, I., Žiljak-Vujić, J.: Image reproduction for near infrared spectrum and the infraredesign theory. J. Imaging Sci. Technol. 54(1), 10502-1–10502-9 (2010)
3. Žiljak, V., Tepeš Golubić, L., Jurečić, D., Žiljak Gršić, J.: Double image with ceramic colors in the process of infrared painting. Int. J. Appl. Phys. 2, 18–23 (2017). http://www.iaras.org/iaras/journals/ijap
4. Žiljak, J., Tepeš Golubić, L., Jurečić, D., Žiljak, V.: Hidden infrared graphics on a painted canvas. Int. J. Appl. Phys. 2 (2017). http://www.iaras.org/iaras/journals/ijap
5. Žiljak, V., Pap, K., Žiljak, I.: CMYKIR security graphics separation in the infrared area. Infrared Phys. Technol. 52(2–3), 62–69 (2009). https://doi.org/10.1016/j.infrared.2009.01.001
6. Pogarčić, I., Agić, A., Matas, M.: Evaluation of the colorant twins for the neutral gray spectar in infrared graphic procedure. Tehnički vjesnik/Technical Gazette 23(6), 1659–1664 (2016). https://doi.org/10.17559/tv-20150303132036


7. Li, C., Wang, C., Wang, S.J.: A black generation method for black ink hiding infrared security image. Appl. Mech. Mater. 262, 9–12 (2013). https://doi.org/10.4028/www.scientific.net/AMM.262.9
8. Rajković, I., Žiljak, V.: Usage of ZRGB video camera as a detection and protection system and development of invisible infrared design. Polytech. Des. 4(1), 54–59 (2016). https://doi.org/10.19279/tvz.pd.2016-4-1-07
9. Žiljak Vujić, J., Zečević, M., Žiljak, V.: Simulation the colors from nature with twins dyes to camouflage military uniform. Tekstil: časopis za tekstilnu tehnologiju i konfekciju 64(3–4), 89–95 (en), 81–88 (hr) (2015)

Computers

An Overview of Solutions to the Issue of Exploring Emotions Using the Internet of Things

Jan Francisti(&) and Zoltán Balogh

Department of Informatics, Faculty of Natural Sciences, Constantine the Philosopher University in Nitra, Tr. A. Hlinku 1, 949 74 Nitra, Slovakia
{jan.fracisti,zbalogh}@ukf.sk

Abstract. Recent scientific evidence suggests that emotions are a ubiquitous element of human–computer interaction and should be considered when designing usable and intelligent systems. The Internet of Things is now used in various spheres of public life; its great benefits are the automation of processes and the acceleration of activities. To this end, various intelligent devices are integrated, which makes the Internet of Things a complex system. Today, a great deal of effort is being invested in these devices to increase the quality of human–computer interaction. Much attention is also paid to examining the emotional state of a person, and various tests are carried out in this respect. Comprehensive systems that can evaluate these states are used in many areas. The process of emotion perception is divided into two levels, sensory and intellectual, and in each person these two levels develop and improve. Sensory networks and the Internet of Things are therefore a good tool for assessing the emotional state of a person; on the basis of the obtained data, it is also possible to adapt the surroundings to the detected emotional state.

Keywords: Internet of Things · Emotional states · Sensory networks · Education

1 Introduction

The phrase “Internet of Things” was coined by Ashton (2009) in 1999. Although the concept of interconnecting devices and people for various reasons has existed for much longer – i.e. via the traditional Internet and social networks – this model of interconnecting devices, people and everything else is relatively new and still in its introductory stages (Weber 2011). The Internet of Things can help develop an interactive and innovative learning environment that meets current pedagogical paradigms, both for teachers and for students. Through the Internet of Things, it is possible to implement a learning process that is dynamic. Dynamic events that may occur during lessons can be generated by collecting and processing data from intelligent elements linked to others.


In such an environment, the role of the teacher is also developing. Some things can be done by devices, and the teacher can focus more on teaching. Elements of the Internet of Things are useful when a teacher works with a larger group of students. The Internet of Things is capable of providing individual education, as well as an individual approach to each student. Teachers can therefore use different approaches tailored to the needs and abilities of the students. Teachers decide on many student matters; decisions based on data acquired from intelligent devices can be automated through the Internet of Things.

The Internet of Things offers tutoring solutions for the simple and quick creation of lessons that can be provided to all participants in the learning process. Teaching materials created in class can be recorded and provided to all students. Collaboration is built on multiple devices, for example the interactive boards used in classrooms. Each student can engage in educational activities and contribute to the creation of learning materials. Students can engage in activities through portable devices and can view their activities in real time. All materials are available to students and can be used and stored according to their preferences.

The Internet of Things has several benefits, but there are also problems, such as the incompatibility of connected devices; data security and privacy; fewer jobs because of automation; and others. Even though the Internet is used daily in different spheres of public life, experts are predicting a new stage – the Internet of Everything, which will be able to connect people, processes, data and things, and enable the building of a world that provides a connection between people and data.

The aim of the article is to describe and focus on the development of a system for assessing the emotional state by applying the sensory characteristics of the user. A further aim is to use the measurements obtained through the sensors as the underlying material for determining the emotional state of the user. Based on the assessed state, it will be possible to customize the learning material for the student. The developed system is the basis for sensory tracking and evaluation of the emotional state, and further for creating feedback for the user, for example an alert that it is appropriate to quiet down the voice (to mitigate aggression), etc.

2 Methodology of Research

A similar way of measuring physiological functions and evaluating emotions to the one we want to apply to students is used in health care. Through connected devices, doctors can monitor the health of patients remotely in real time, and patients can be warned when their health conditions change. Mano et al. (2016) used a camera focused on facial expression: several photos were recorded during the day to evaluate the mood and feelings of people. This type of measurement can be combined with other sensors, for example comparing facial expression with heart rate and body temperature.

2.1 Feelings and Emotions

Emotions are referred to as the lower feelings associated with the satisfaction of basic human needs such as eating, drinking and sleeping. Kleinginna and Kleinginna (1981) collected more than 90 definitions of emotions. Feelings are more permanent emotions associated with higher needs, especially with human relationships, human and life values, the needs of thinking and experiencing beauty, and the cultural needs of humans. There is always a subject of emotions. Only a human has feelings, and only a human is capable of empathy, which means feeling and perceiving the feelings of another. The expression of emotions is one of the most important features of feelings. Feelings and emotions play an important role in human life because they are part of a person's motivational structure; their focus, depth and permanence determine what a person does and how. Feelings have different inner and outer manifestations. Different changes can be seen when people experience an emotion; for example, when they are cheerful, they have a smile on their face. The most striking expressions of emotions are crying and laughing (Czako, Seemannova, and Bratska 1982; Hasson 2015). Võ et al. (2007) tested emotions in primary school students through a word memory game and found that students better memorized words expressing emotions such as happiness or sadness.

2.2 Emotional Intelligence (EQ)

Emotional intelligence is the ability to identify, evaluate and control one's own emotions, the emotions of others, and those of groups. There are two different kinds of intelligence: emotional (EQ) and rational (IQ). Success in a person's life depends on both types; without emotional intelligence, the intellect is not able to exploit its full potential. People's life experiences show that high intellectual ability is not automatically a prerequisite for success at school, in learning, or in practical life. Both types of intelligence, not only intellectual but also emotional, are of great importance. In real decision-making and negotiation, feelings have the same weight as thoughts (Caruso 2015).

2.3 Evaluating Human Emotions

The issue of evaluating human emotions (not just by computer) is a very interesting topic that has gained more and more attention in recent years. Particularly noteworthy is the intertwining of several unrelated areas (e.g. automotive) and fields such as computer science and psychology. It therefore has great potential to improve the interaction between human and computer in different areas – from the educational sphere, through medicine, to the commercial area. Research into the emotions of the user (especially facial expressions) began in the 17th century and became the basis for further development in this area: in a book by John Bulwer titled “Pathomyotomia”, published in 1649, the issues of facial expression and muscle movement on the face are discussed in detail (Magdin, Turcani, and Hudec 2016).


Ekman and Friesen developed the Facial Action Coding System (FACS) in 1978 to encode facial expressions, where facial movements are described by a set of action units (AU) (Ekman and Friesen 1978). Each AU is based on the relationship of the muscles. This facial coding system is executed manually using a set of rules; static images of the face, often representing the peak of the expression, serve as inputs, which makes the process time-consuming. Assessing each AU at a given time provides the ability to identify up to 7 types of emotion: happiness, sadness, surprise, fear, anger, disgust, and a neutral expression. A simplified illustration of this rule-based classification is sketched after this overview. Ekman's work inspired many researchers (Otsuka and Ohya 1997; Rosenblum, Yacoob, and Davis 1996; Yacoob and Davis 1996) to analyse facial expressions using image and video processing; by tracking facial features and measuring the amount of movement on the face, they tried to sort different facial expressions into individual categories. Tian, Kanade, and Cohn (2001) focus on facial expression analysis and recognition using these “basic terms” or a subset of them. Pantic and Rothkrantz provide an in-depth review of much of the face recognition research done in recent years (Pantic and Rothkrantz 2000).

At present, much attention is being paid to this issue. However, the aim of current research is to use the data that can be derived from the individual senses (sight, smell, touch, hearing, taste) to determine the overall emotional state of the user and to understand their activity and thinking. Observing and recording emotional change is also important in human–computer interaction. Data obtained by observing the emotional state can be used in designing information systems into which the creators try to put something “human” (Lopatovska and Arapakis 2011). Typical examples are real-time emotional evaluation based on skin resistance, measurement of eye pupil movement, or neuro-impulse measurement.

There are countless ways to measure and record changes in the emotional state. Vizer, Zhou, and Sears (2009) used the keyboard and investigated how respondents would handle writing an unknown text; respondents who showed changes in emotional state pressed the keys harder and made more mistakes. In a similar experiment, Kaklauskas et al. (2011) used a specially modified mouse and tested the pressure with which respondents pressed it; the results matched the keyboard tests. To record changes in the emotional state, Alberdi, Aztiria, and Basarab (2016) and Gjoreski, Luštrek, Gams, and Gjoreski (2017) used smart wristbands measuring physiological functions, including heart rate, and recorded changes in the acquired data. When evaluating the emotional state, it is also possible to use a camera (Sharma and Gedeon 2014; Carneiro, Castillo, Novais, Fernández-Caballero, and Neves 2012; Kaklauskas 2015) to capture the expressions and behaviour of respondents; the authors found that under a change of emotional state such as stress, people stop moving and concentrate only on what they are doing.
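The following sketch illustrates the rule-based flavor of AU-to-emotion classification; the AU combinations listed are a simplified subset in the style of commonly cited FACS-based mappings (e.g., AU6 cheek raiser plus AU12 lip corner puller for happiness) and are illustrative, not the full coding system.

```python
# Illustrative subset of action-unit combinations often associated with
# some of Ekman's basic emotions; a real FACS evaluation is far richer.
EMOTION_RULES = {
    "happiness": {6, 12},
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
    "anger":     {4, 5, 7, 23},
}

def classify(active_aus):
    """Return the emotions whose rule set is fully contained in the
    detected action units; 'neutral' when nothing matches."""
    detected = set(active_aus)
    matches = [emo for emo, aus in EMOTION_RULES.items() if aus <= detected]
    return matches or ["neutral"]

print(classify([6, 12]))        # ['happiness']
print(classify([1, 2, 5, 26]))  # ['surprise']
print(classify([9]))            # ['neutral']
```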


Changes in the emotional state can be seen not only on the face but also in the movement of the eye (iris). A group of authors (De Marsico, Nappi, Riccio, and Wechsler 2015; Ghazali, Jadin, Jie, and Xiao 2015; Gómez-Poveda and Gaudioso 2016; Jeong, Nam, and Ko 2017; Rattani and Derakhshani 2017; Skodras, Kanas, and Fakotakis 2015) tracked the movement of the eye, specifically the iris, and evaluated changes in the emotional state based on that movement. They also observed a change in the size of the iris, which varied with the emotional change itself. Based on these observations, other authors (Ghosh, Nandy, and Manna 2015; Kacete, Royan, Seguier, Collobert, and Soladie 2016) created systems that work with face detection and can discover in real time what emotional state the respondents are expressing. Such systems can be incorporated into wearable devices such as glasses that respondents can wear without being disturbed by them (Ashok et al., n.d.; Jung, Kim, Son, and Kim 2017; Martinez-Millana, Bayo-Monton, Lizondo, Fernandez-Llatas, and Traver 2016).

3 Results of Research and Discussion

To understand emotions better, it is necessary to focus on three main components: physiological responses, subjective experiences and expressive reactions. The emotions and behaviour of most people make perfect sense once you are at least partly aware of how their emotions work. Most people treat mood and emotion as the same and do not separate them; experts and psychologists hold different views. The main difference between emotions and moods is clear: emotions are short-lived and usually very intense, they are specific, and they always have some clearly defined cause. Thanks to emotions, we can feel short-term anger and short-term joy. Moods, by contrast, last much longer and are milder. It may happen that we simply wake up in a bad mood and are not even able to identify the cause.

3.1 Sensory System Design

The sensory system will be designed using a microcomputer or a microcontroller for monitoring and subsequently evaluating the emotional state. The aim of our research will be to design a system that can automate the collection of data from various sensors, such as motion and user activity sensors, skin temperature, electrodermal activity (what happens to the skin when people sweat) and heart rate. This type of wristband is offered, for example, by the company www.myfeel.co, whose wristband contains the sensors mentioned above; Fig. 1 shows what the wristband looks like.


Fig. 1. A wristband that can monitor emotions throughout the day

The wristband is waterproof and uses Bluetooth and a USB port for connection; thanks to its flexible material, it is very easy to put on. The collection and subsequent evaluation of the measured data will be handled by a microcomputer or a microcontroller running a custom application that monitors and records each change; the outputs will be sent to the cloud, where they will be processed and evaluated into an indication of the user's current emotion. A sketch of this pipeline is given below. We assume that our own designed and built system will be more efficient than other available monitoring systems; it will also be modular and will have wider use than regular monitoring systems. We will then be able to share the measured data through data networks (for example, Sigfox), use them for further processing, and verify their validity for the research area. A prerequisite for the autonomous functioning of emotional monitoring is also the optimization of the system based on different statistical methods. On the basis of the system, we will have the opportunity to expand our research in the field of IoT into other areas, such as ambient intelligence (Fig. 2).
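A sketch of the planned collection pipeline – read the four wristband signals, timestamp them, and push them to the cloud for evaluation; the endpoint URL, the field names and the read function are hypothetical placeholders, since the paper does not specify the wristband's API.

```python
import json
import time
import urllib.request

CLOUD_URL = "https://example-cloud.invalid/api/emotion-samples"  # hypothetical endpoint

def read_wristband():
    """Placeholder for the Bluetooth read of the four signals named
    above: heart rate, skin temperature, electrodermal activity, motion."""
    return {"heart_rate": 72, "skin_temp_c": 33.1, "eda_us": 0.41, "motion": 0.02}

def push_sample():
    """Collect one sample on the microcontroller and send it to the
    cloud; the emotional-state evaluation then happens server-side."""
    sample = read_wristband()
    sample["timestamp"] = time.time()
    req = urllib.request.Request(
        CLOUD_URL,
        data=json.dumps(sample).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# e.g. call push_sample() once per second from the device's main loop
```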

Fig. 2. Scheme of the adaptive learning system

Using the system we create, we will be able to customize the learning material, as well as the learning style and the method of access for specific students, according to their current emotional state; a minimal sketch of such an adaptation rule is given below. We assume that the adaptive system will be more flexible for learners and more efficient in the acquisition of new knowledge.
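A minimal sketch of the adaptation step suggested by the scheme in Fig. 2; the emotion labels and the interventions chosen for them are our illustrative assumptions, not rules taken from the paper.

```python
# Hypothetical mapping from a detected emotional state to an adaptation
# of the learning material; labels and actions are illustrative only.
ADAPTATIONS = {
    "stress":  {"pace": "slower", "material": "revision exercises", "feedback": "calming prompt"},
    "boredom": {"pace": "faster", "material": "challenge tasks", "feedback": "none"},
    "engaged": {"pace": "keep", "material": "current lesson", "feedback": "none"},
}

def adapt_lesson(emotion: str) -> dict:
    """Pick the lesson adaptation for the emotion reported by the cloud
    evaluation; fall back to the neutral setting for unknown states."""
    return ADAPTATIONS.get(emotion, ADAPTATIONS["engaged"])

print(adapt_lesson("stress"))
```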


4 Conclusions

The aim of the article was to highlight the possibilities of sensory networks in the context of the IoT. Devices or things connected to the internet create the Internet of Things. The IoT also includes sensory networks, which can be used to collect different data that are sent directly over the internet or into the cloud for further processing. Through sensory networks, we will be able to monitor and acquire the emotional state of students, which can be directly or indirectly related to their effectiveness in learning and acquiring knowledge. Using statistical analysis of the data, we will evaluate and subsequently propose new approaches to e-materials, determine the appropriate teaching style and form, and adapt the teaching material. The next step will be testing different sensors in order to detect emotions and compare them with the available data sets.

Acknowledgements. This research has been supported by the University Grant Agency under contract No. VII/6/2018.

References

Alberdi, A., Aztiria, A., Basarab, A.: Towards an automatic early stress recognition system for office environments based on multimodal measurements: a review. J. Biomed. Inform. 59, 49–75 (2016). https://doi.org/10.1016/J.JBI.2015.11.007
Ashok, A., Xu, C., Vu, T., Gruteser, M., Howard, R., Zhang, Y., … Dana, K.: What am I looking at? Low power radio-optical beacons for augmented reality. IEEE Trans. Mobile Comp. 15(12), 3185–3199 (n.d.)
Ashton, K.: That “Internet of Things” thing. RFID J. 22(7), 97–114 (2009)
Carneiro, D., Castillo, J.C., Novais, P., Fernández-Caballero, A., Neves, J.: Multimodal behavioral analysis for non-invasive stress detection. Expert Syst. Appl. 39(18), 13376–13389 (2012). https://doi.org/10.1016/J.ESWA.2012.05.065
Caruso, D.: Emoční inteligence. Grada Publishing, Praha (2015)
Czako, M., Seemannova, M., Bratska, M.: Emócie. Slovenské pedagogické nakladateľstvo, Bratislava (1982)
De Marsico, M., Nappi, M., Riccio, D., Wechsler, H.: Mobile Iris Challenge Evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recogn. Lett. 57, 17–23 (2015). https://doi.org/10.1016/j.patrec.2015.02.009
Ekman, P., Friesen, W.: Facial Action Coding System: Investigator's Guide. Consulting Psychologists Press, Palo Alto, CA (1978)
Ghazali, K.H., Jadin, M.S., Jie, M., Xiao, R.: Novel automatic eye detection and tracking algorithm. Opt. Lasers Eng. 67, 49–56 (2015). https://doi.org/10.1016/j.optlaseng.2014.11.003
Ghosh, S., Nandy, T., Manna, N.: Advancements of Medical Electronics (2015). https://doi.org/10.1007/978-81-322-2256-9
Gjoreski, M., Luštrek, M., Gams, M., Gjoreski, H.: Monitoring stress with a wrist device using context. J. Biomed. Inform. (2017). https://doi.org/10.1016/j.jbi.2017.08.006
Gómez-Poveda, J., Gaudioso, E.: Evaluation of temporal stability of eye tracking algorithms using webcams. Expert Syst. Appl. 64, 69–83 (2016). https://doi.org/10.1016/j.eswa.2016.07.029
Hasson, G.: Inteligenční emoce. Grada Publishing, Praha (2015)


Jeong, M., Nam, J.Y., Ko, B.C.: Eye pupil detection system using an ensemble of regression forest and fast radial symmetry transform with a near infrared camera. Infrared Phys. Technol. 85, 44–51 (2017). https://doi.org/10.1016/j.infrared.2017.05.019
Jung, Y., Kim, D., Son, B., Kim, J.: An eye detection method robust to eyeglasses for mobile iris recognition. Expert Syst. Appl. 67, 178–188 (2017). https://doi.org/10.1016/j.eswa.2016.09.036
Kacete, A., Royan, J., Seguier, R., Collobert, M., Soladie, C.: Real-time eye pupil localization using Hough regression forest. In: 2016 IEEE Winter Conference on Applications of Computer Vision, WACV (2016). https://doi.org/10.1109/WACV.2016.7477666
Kaklauskas, A.: Student progress assessment with the help of an intelligent pupil analysis system. Intell. Syst. Ref. Libr. 81(1), 175–193 (2015). https://doi.org/10.1007/978-3-319-13659-2_6
Kaklauskas, A., Zavadskas, E.K., Seniut, M., Dzemyda, G., Stankevic, V., Simkevičius, C., … Gribniak, V.: Web-based biometric computer mouse advisory system to analyze a user's emotions and work productivity. Eng. Appl. Artif. Intell. 24(6), 928–945 (2011). https://doi.org/10.1016/J.ENGAPPAI.2011.04.006
Kleinginna, P.R., Kleinginna, A.M.: A categorized list of motivation definitions, with a suggestion for a consensual definition. Motivation and Emotion 5(3), 263–291 (1981). https://doi.org/10.1007/BF00993889
Lopatovska, I., Arapakis, I.: Theories, methods and current research on emotions in library and information science, information retrieval and human–computer interaction. Inf. Process. Manage. 47(4), 575–592 (2011). https://doi.org/10.1016/j.ipm.2010.09.001
Magdin, M., Turcani, M., Hudec, L.: Evaluating the emotional state of a user using a webcam. Int. J. Interact. Multimed. Artif. Intell. 4(1), 61 (2016). https://doi.org/10.9781/ijimai.2016.4112
Mano, L.Y., Faiçal, B.S., Nakamura, L.H.V., Gomes, P.H., Libralon, G.L., Meneguete, R.I., … Ueyama, J.: Exploiting IoT technologies for enhancing Health Smart Homes through patient identification and emotion recognition. Comp. Commun. 89–90, 178–190 (2016). https://doi.org/10.1016/j.comcom.2016.03.010
Martinez-Millana, A., Bayo-Monton, J.L., Lizondo, A., Fernandez-Llatas, C., Traver, V.: Evaluation of Google Glass technical limitations on their integration in medical systems. Sensors (Switzerland) 16(12), 1–12 (2016). https://doi.org/10.3390/s16122142
Otsuka, T., Ohya, J.: A study of transformation of facial expressions based on expression recognition from temporal image sequences. Technical report, Institute of Electronics, Information, and Communication Engineers (IEICE) (1997)
Pantic, M., Rothkrantz, L.J.: Automatic analysis of facial expressions: the state of the art. IEEE Trans. Pattern Anal. Mach. Intell. 22(12), 1424–1445 (2000)
Rattani, A., Derakhshani, R.: Ocular biometrics in the visible spectrum: a survey. Image Vis. Comput. 59, 1–16 (2017). https://doi.org/10.1016/j.imavis.2016.11.019
Rosenblum, M., Yacoob, Y., Davis, L.: Human expression recognition from motion using a radial basis function network architecture. IEEE Trans. Neural Netw. 7(5), 1121–1138 (1996)
Sharma, N., Gedeon, T.: Modeling a stress signal. Appl. Soft Comput. 14, 53–61 (2014). https://doi.org/10.1016/J.ASOC.2013.09.019
Skodras, E., Kanas, V.G., Fakotakis, N.: On visual gaze tracking based on a single low cost camera. Sig. Process. Image Commun. 36, 29–42 (2015). https://doi.org/10.1016/j.image.2015.05.007
Tian, Y., Kanade, T., Cohn, J.: Recognizing action units for facial expression analysis. IEEE Trans. Pattern Anal. Mach. Intell. 23(2), 97–115 (2001)


Vizer, L.M., Zhou, L., Sears, A.: Automated stress detection using keystroke and linguistic features: an exploratory study. Int. J. Hum. Comput. Stud. 67(10), 870–886 (2009). https://doi.org/10.1016/J.IJHCS.2009.07.005
Võ, M.L.H., Jacobs, A.M., Kuchinke, L., Hofmann, M., Conrad, M., Schacht, A., Hutzler, F.: The coupling of emotion and cognition in the eye: introducing the pupil old/new effect. Psychophysiology 45(1), 130–140 (2007). https://doi.org/10.1111/j.1469-8986.2007.00606.x
Weber, R.H.: Accountability in the Internet of Things. Comp. Law Secur. Rev. 27, 133–138 (2011). https://doi.org/10.1016/j.clsr.2011.01.005
Yacoob, Y., Davis, L.: Recognizing human facial expressions from long image sequences using optical flow. IEEE Trans. Pattern Anal. 18(6), 636–642 (1996)

Knowing and/or Believing a Think: Deriving Knowledge Using RDF CFL

Martin Žáček(&), Alena Lukasová, and Petr Raunigr

Department of Informatics and Computers, University of Ostrava, Ostrava, Czech Republic
{Martin.Zacek,Alena.Lukasova,Petr.Raunigr}@osu.cz

Abstract. From the web discussion on the difference between knowing and believing, we have chosen for this paper the statements that best fit our view of the topic, corresponding to our level of knowledge of cognitive science. The aim of the paper is to show one of the capabilities of our Resource Description Framework Clausal Form Logic (RDF CFL) graph language, using the well-known Castañeda's puzzle as an example. RDF CFL is an appropriate tool that contains a package of inference methods, developed in the clausal form of first order predicate logic, working especially in closed worlds.

Keywords: Resource Description Framework · RDF · Logic puzzle · First order logic · CFL · Knowing · Believing · Deriving


1 Introduction

The article follows [1], where the authors show one of the capabilities of our RDF CFL graph language using as an example the well-known Castañeda's puzzle [4], which has been used before by some authors of new formal approaches. The model and language RDF CFL [1, 2] has been developed in the course of seeking optimal formal language means for semantic web inferences. Using an intensional approach to the language semantics in its graph-based style of representation, the open-world demand has been fulfilled [3]. On the other side, the RDF CFL [4] system contains a package of inference methods working especially in closed worlds, developed in the clausal form of first-order predicate logic and useful for solving many tasks over corresponding knowledge bases [1].

1.1 RDF CFL Briefly

The main basics of RDF CFL come from two well-known resources:

1. Richards' clausal form logic CFL [4], including its graph version, using only binary predicates to represent roles, properties or relationships, slightly modified by the application of the concept-relationship modelling paradigm;
2. the RDF model with our own methodology of variable quantifying, including its graph version.



The Clausal Form Logic (CFL) is built on the basis of FOPL and corresponds well to the common use of the conditional “if – then” statement. Generally, a conditional statement (clause) says that the consequent, composed as a disjunction of some predicate atoms, follows from the antecedent, composed as a conjunction of some predicate atoms [1, 3, 4]. The approach allows us to formulate clauses in the form

<antecedent> implies <consequent>

Selecting a formal language for knowledge representation is crucial. The formal basis should be first order predicate logic (FOPL), for its high expressivity and the wide range of already developed formal deduction tools [2, 3]. Knowledge representations (originally those contained in Web resources) based on a domain ontology have usually been created in the framework of the RDF (Resource Description Framework) model [9, 10, 13]. An RDF model manipulates the semantic aspect of terms specified through URI references to resources, in which their meanings are always elucidated by means of a certain position in a relevant ontology. The graphic RDF model is easy and simple to understand, even for users who have no experience with formal modelling. The idea is based on a simple statement concerning relations between items (resources) in the form of a basic vector [1–4]. A clause sketch is given below.

Our system RDF CFL represents and reasons about entities whose meaning can have extensional but also intensional characters. The graph version of RDF CFL [8] brings into the modelling a possibility to see the semantics in a purely intensional style, so it fulfils the open-world demand of semantic web systems. Moreover, RDF CFL uses the inference apparatus of the CFL with extensionally based semantics, which can capture the open world only by means of sequences of individual momentary snapshots of the modelled reality. Our RDF CFL language of representation, like the language of SNePS, can serve as a natural communication language not only for people but also for human–robot interaction. In the frame of formal representation of the NLC this means, besides other things, distinguishing whether an agent (man or robot) told us real knowledge about a part of the domain of our interest, or whether it only expressed its own belief about some thing within a part of the domain. As the NLC theory speaks about mental objects like persons or acts, we can also construct them by means of our RDF CFL formal apparatus in the form of networks sharing intensional bindings between concepts of the real world.
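A minimal sketch of the clause shape just described – a conjunctive antecedent implying a consequent – over RDF-style triples, with naive forward chaining standing in for the richer RDF CFL inference package; the data structures and the ground example clause are our own illustrative assumptions.

```python
# A ground clause: a conjunction of antecedent triples implies a
# consequent triple; triples are RDF-style (subject, predicate, object).
def derive(facts, clauses):
    """Naive forward chaining: whenever every antecedent triple of a
    clause is among the known facts, add its consequent triple."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in clauses:
            if antecedent <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

# Illustrative ground clause over the puzzle's vocabulary.
facts = {("Oidipus", "father", "Laius"), ("Oidipus", "killed", "Laius")}
clauses = [
    (frozenset({("Oidipus", "father", "Laius"),
                ("Oidipus", "killed", "Laius")}),
     ("Oidipus", "isa", "patricide")),
]
print(derive(facts, clauses))
```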

1.2 Deductive and/or Inductive Reasoning in Human Minds

Our minds perform acts of logical reasoning in everyday life. But do we use logic in its deductive or inductive form? Deduction leads to specific conclusions based on weighing general principles that ought to be true. Induction is the opposite, and produces a general conclusion from specific cases. In everyday life we use logical induction more often than formal principles of deduction. We make generalizations from previous experiences and then simply believe them. Those previous experiences are


often just based on having seen or felt a thing before. Generalization from one or two experiences can be very dangerous and often leads to fallacious reasoning. Our intention is to present here, by means of a known puzzle, the possibility of fulfilling some of the requirements on investigating a belief's legitimacy with only a simple first order predicate logic version like RDF CFL, without such a sophisticated but rather complicated formal apparatus as the fully intensional SNePS (natural language competence system, NLC) [11] uses.

2 Knowing and/or Believing a Think

From the web discussion on the difference between knowing and believing [5], we have chosen the following statements, which fit our view of the topic and correspond to our level of knowledge of cognitive science.

1. ‘Believing’ means that you have chosen a truth, but ‘knowing’ means that you are certain about that truth.
2. ‘Believing’ always leaves room for doubt, but ‘knowing’ leads to confidence.
3. ‘Believing’ is blind trust, while ‘knowing’ is trusting with awareness.

When you say ‘I believe’, you indicate that you do not know about this thing, because in your personal experience it has not yet occurred. Beliefs are based on your words, or on a particular train of thought. You apply these beliefs to your life because they are appealing; as a result, you begin to feel and believe that they are true. To gain any assurance of whether what we merely believe in is true or not, we should delve deeper into the meaning instead of blindly following our belief without knowing whether it is true, and try to learn what it speaks about. An element of doubt should be put between ‘believing’ and ‘knowing’ – but doubt with shrewdness or intelligence. Even information we take as useful should be tested against the believed thing, so that it turns into knowledge and is thus converted from a belief into knowledge [6, 7].

It is extremely important that we feed our mind with the right information. We create the world with our knowledge [8] and beliefs, so we had better be careful about what we believe. We take the real knowledge about a thing as the end member of a step-by-step, ever more precise chain leading from a rather vague stage of knowledge, such as beliefs, towards the expected goal – the real facts about the thing [9]. Moreover, our approach leads to seeing the whole process of children's education in a similar manner: at the beginning we cannot speak about real knowledge in the educated subject, and the process of step-by-step education can be taken as the cleaning of a rather uncertain concept believed in the child's mind towards a conceptual term with a clear meaning.


3 Castañeda's Puzzle with Both Believing and Knowing Input Information

Following the test of the capabilities of the SNePS system (Stuart C. Shapiro and William J. Rapaport [10]), we decided to use in the following paragraphs the known puzzle of Hector-Neri Castañeda [11], with the data background coming from Sophocles' tragedy, as an example of how to reconcile belief and knowledge about a concrete thing.

A short explanation of Sophocles' tragedy: Oedipus has become the king of Thebes while unwittingly fulfilling a prophecy that he would kill his father, Laius (the previous king), and marry his mother, Jocasta (whom Oedipus took as his queen after solving the riddle of the Sphinx). The action of Sophocles' play concerns Oedipus' search for the murderer of Laius in order to end a plague ravaging Thebes, unaware that the killer he is looking for is none other than himself. At the end of the play, after the truth finally comes to light, Jocasta hangs herself, while Oedipus, horrified at his patricide and incest, proceeds to gouge out his own eyes in despair (Fig. 1).

Fig. 1. The result.

This example is fully derived in [18] (Table 1).

Table 1. URI references (for illustrative purposes)

Oidipus    https://www.wikidata.org/wiki/Q130890
Laius      http://dbpedia.org/page/Laius
King       https://www.wikidata.org/wiki/Q535214
Anybody    http://dbpedia.org/page/Indefinite_pronoun
Father     http://dbpedia.org/ontology/father
isa        https://cs.wikipedia.org/wiki/ISA
Identical  http://dbpedia.org/page/Identical
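Using the URI references from Table 1, the puzzle's vocabulary can be encoded as RDF triples, for example with the common rdflib library (our tooling choice, not the authors'); the two statements added here are illustrative, not the full puzzle encoding.

```python
from rdflib import Graph, URIRef

# URI references taken from Table 1
OIDIPUS = URIRef("https://www.wikidata.org/wiki/Q130890")
LAIUS = URIRef("http://dbpedia.org/page/Laius")
KING = URIRef("https://www.wikidata.org/wiki/Q535214")
ISA = URIRef("https://cs.wikipedia.org/wiki/ISA")
FATHER = URIRef("http://dbpedia.org/ontology/father")

g = Graph()
g.add((OIDIPUS, ISA, KING))      # Oedipus became king of Thebes
g.add((OIDIPUS, FATHER, LAIUS))  # Laius is the father of Oedipus
print(g.serialize(format="turtle"))
```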


4 Conclusion

With the RDF CFL graph representation apparatus there is a possibility to construct relevant intensional bindings between concepts of the real domain, in the form of networks sharing all their original properties. We can take real knowledge about a thing as the end member of step-by-step, ever more precise chains that lead from a rather vague stage of learning, such as beliefs, to the expected goal – knowledge as real facts about the thing. The RDF CFL system can represent and reason about entities whose meaning has an extensional character, but it also has means to express intensional ones. The language of representation can be, in the environment of the semantic web, a useful natural communication language not only for people but also for human–robot interaction.

Moreover, our approach also leads to seeing the whole process of children's education similarly: at the beginning we cannot speak about real knowledge in the educated subject, and the method of step-by-step education can be taken as the cleaning of a somewhat uncertain concept believed in the child's mind towards a conceptual term with a precise meaning.

Acknowledgments. The research described here has been financially supported by University of Ostrava grant SGS06/PřF/2018. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the sponsors.

References

1. Žáček, M., Lukasová, A.: Making a shift from believing to knowing by the help of RDF CFL formal representation. In: 2nd International Conference on Applied Physics, System Science and Computers, APSAC 2017. Lecture Notes in Electrical Engineering, vol. 489, pp. 148–155 (2019). https://doi.org/10.1007/978-3-319-75605-9_21
2. Lukasová, A., Žáček, M., Vajgl, M.: Carstairs-McCarthy's morphological rules of English language in RDFCFL graphs. In: International Conference on Applied Physics, System Science and Computers, APSAC 2016. Lecture Notes in Electrical Engineering. Springer. https://doi.org/10.1007/978-3-319-53934-8_20
3. Lukasová, A., Žáček, M.: English grammatical rules representation by a meta-language based on RDF model and predicate clausal form. Information (Japan) 19(9B), 4009–4015. International Information Institute Ltd.
4. Lukasová, A., Žáček, M., Vajgl, M.: Reasoning in graph-based clausal form logic. IJCSI Int. J. Comp. Sci. Issues 9(1), No. 3, 37–43 (2012)
5. Castañeda, H.N.: Philosophy as a science and as a worldview. In: Cohen, A., Dascal, M. (eds.) The Institution of Philosophy. Open Court, Peru, IL (1989)
6. Shapiro, S.C., Rapaport, W.J.: Models and minds: knowledge representation for natural-language competence
7. Miarka, R., Žáček, M.: Knowledge patterns in RDF graph language for English sentences. In: Federated Conference on Computer Science and Information Systems, FedCSIS, pp. 109–115 (2012)
8. Miarka, R., Žáček, M.: Knowledge patterns for conversion of sentences in natural language into RDF graph language. In: Federated Conference on Computer Science and Information Systems, FedCSIS, pp. 63–68 (2011)


9. Žáček, M., Lukasová, A., Miarka, R.: Modeling knowledge base and derivation without predefined structure by graph-based clausal form logic. In: Proceedings of the 2013 International Conference on Advanced ICT and Education, pp. 546–549. Atlantis Press (2013)
10. Telnarova, Z., Rombová, Z.: Data modelling and ontological semantics. In: Simos, T.E., et al. (eds.) AIP Conference Proceedings, vol. 1648. American Institute of Physics, Melville, NY (2015). https://doi.org/10.1063/1.4912763
11. Shapiro, S.C., Rapaport, W.J.: The SNePS family. Comput. Math. Appl. 23, 243–275 (1992)
12. Manisha, K.: Difference between knowing and believing. DifferenceBetween.net, 10 January 2010. http://www.differencebetween.net/miscellaneous/difference-between-knowing-and-believing/
13. Lukasová, A., Vajgl, M., Žáček, M.: Knowledge represented using RDF semantic network in the concept of semantic web. In: Simos, T.E., et al. (eds.) International Conference of Numerical Analysis and Applied Mathematics 2015, ICNAAM 2015. AIP Conference Proceedings, vol. 1738, p. 120012. American Institute of Physics, Melville, NY (2015). https://doi.org/10.1063/1.4951895
14. Žáček, M., Miarka, R., Sýkora, O.: Visualization of semantic data. In: Silhavy, R., et al. (eds.), pp. 277–285. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-18476-0_28
15. Žáček, M.: Ontology or formal ontology. In: Simos, T.E., et al. (eds.) AIP Conference Proceedings, vol. 1863. American Institute of Physics, Melville, NY (2017). https://doi.org/10.1063/1.4992234
16. Žáček, M., Homola, D.: Analysis of the English morphology by semantic networks. In: Simos, T.E., et al. (eds.) AIP Conference Proceedings, vol. 1906. American Institute of Physics, Melville, NY (2017). https://doi.org/10.1063/1.5012351
17. Vajgl, M., Lukasová, A., Žáček, M.: Knowledge bases built on web languages from the point of view of predicate logics. In: Ntalianis, K. (ed.) AIP Conference Proceedings, vol. 1836. American Institute of Physics, Melville, NY (2017). https://doi.org/10.1063/1.4981998
18. Žáček, M., Lukasová, A.: Making a shift from believing to knowing by the help of RDF CFL formal representation. In: International Conference on Applied Physics, System Science and Computers. https://doi.org/10.1007/978-3-319-75605-9_21

Software Solution Incorporating Activation of the Cognitive Memory Portion in Early Stages of Alzheimer's Disease Provaznik Josef1, Kopecky Zbynek1(&), Brozek Josef1,2,3, Sotek Karel1, Brozkova Monika3, Karamazov Simeon1, and Janeckova Hana4 1

Faculty of Electrical Engineering and Informatics, University of Pardubice, 532 10 Pardubice, Czech Republic [email protected], {zbynek.kopecky, josef.brozek,karel.sotek,simon.karamazov}@upce.cz 2 Metropolitan University Prague, Ucnovska 100/1, 190 00 Prague, Czech Republic [email protected] 3 Laboratory of Application of the Software Technologies - ASOTE, 532 10 Pardubice, Czech Republic [email protected] 4 Charles University in Prague, Institute of Nursing, Cerna 9, 115 55 Prague, Czech Republic [email protected]

Abstract. The publication focuses on research into, and the use of, software in the field of nursing care. Software for households has been developed at the university; it assists in activating the memory portion of the cognitive functions of people in the initial phases of Alzheimer's disease. The application is also usable by people in the at-risk group in whom the disease could appear. Keywords: Treatment · Mobile application · Android · Alzheimer's disease · Cognitive functions · Java



1 Introduction Cognitive functions are, at a certain level of abstraction, the functions related to the thinking process. They are the sum of all processes related to thinking: deriving, inference, remembering, deducing. In concrete situations a cognitive dysfunction can occur. The most frequent cognitive dysfunctions are the various forms of dementia. Dementia can have many causes. In the past, dementia was very often caused by syphilis. Nowadays it is most often old-age dementia or dementia caused by a cerebral stroke. Alzheimer's disease is another cause of cognitive dysfunction. All cognitive dysfunctions have a common basis in the brain. The brain behaves like a muscle: it becomes strong with regular exercise and, of course, it suffers and becomes weak without regular stimulation. This attribute of the brain can be used for the prevention of cognitive dysfunctions, slowing down their development, and reducing their impact. Although cognitive therapy is not new, modern information technology makes it possible to fully exploit the potential of this therapy. The article focuses on the application and explores alternative solutions. The chapter on our own application describes the application and its use. The discussion section describes the differences between common applications and the application presented in this paper.

2 State of the Art This chapter presents Alzheimer's disease as the main cause of dementia, together with methods of cognitive therapy.

2.1 Alzheimer's Disease

Alzheimer's disease is a very serious chronic neurodegenerative disease with pathological and pathophysiological symptoms. The disease was first recognized and described by the German neuropathologist and psychiatrist Alois Alzheimer in 1907. Its manifestation is dementia, and it is worth noting that 65% of all patients with dementia are diagnosed with Alzheimer's disease. The pathophysiological symptom is the irreversible degeneration of the gray cerebral cortex, i.e. the loss of neurons [1]. Dementia is caused by a neurodegenerative process in the brain and is most often a manifestation of Alzheimer's disease, vascular dementia (caused by stroke), Huntington's disease, Parkinson's disease and others. Dementias are dysfunctions of memory, thinking and behavior, accompanied by personality changes. Dementia is not a physiological part of aging, so the term "senile dementia" is wrong; there is no causality between aging and cognitive dysfunctions or dementia. However, among people aged 90 or over, the prevalence of dementia is 50% [2]. The number of patients with dementia in Europe is around 7.5 million. Table 1 shows the prevalence of dementia in Europe. Prevalence is the proportion of the number of individuals suffering from the disease to the number of all individuals in the population under study. Alzheimer's disease cannot be cured, but there are a number of medicaments which are, in this case, called symptomatic medicaments. These medications are prescribed by the doctor to alleviate the symptoms of ALZ, for example depression, insomnia, aggression, etc. This procedure can greatly ease the life of the patient. Another option is the use of cholinesterase inhibitors, which do not heal the disease but can, in some cases, slow down its development in the early stages. These medicaments block the enzyme that breaks down the neurotransmitter acetylcholine (a substance that carries nerve impulses). However, research confirming their effectiveness is still at the laboratory stage.

2.2 Cognitive Functions

Cognitive functions belong to the basic functions of the human brain. Through these functions, a human is able to discover the world, learn and think.
• The basic cognitive functions include memory. Patients with ALZ are at first affected by short-term memory loss, then by the loss of explicit semantic long-term memory, then of explicit episodic long-term memory, and finally of implicit long-term memory.
• The ability of spatial orientation is based on a combination of thinking and memory. Spatial orientation includes visual-motoric, visual-construction and perceptual abilities. Good spatial orientation requires the combination of several cognitive functions. While thinking is among the most complicated of them, the ability of good spatial orientation is already lost in the first phase of ALZ (the light dementia stage).
• The main features of attention include concentration, which ensures that we are able to concentrate on a selected stimulus. Another feature is selectivity, which allows us to select only some stimuli from all those available. Distribution is a feature that allows attention to be divided among several selected stimuli; this is easier when one of the stimuli (activities) is already automated and does not require so much concentration. Fluctuation allows us to transfer our concentration from one stimulus to another. Patients diagnosed with ALZ lose long-term attention to a given task; they are often confused and cannot decide which of the stimuli to focus on. It is necessary to help these patients focus their attention on solving tasks, for example by dividing tasks into shorter sections, avoiding long assignments, and choosing environments with fewer distractions.
• The ability of speech requires the coordination of the vocal cords and mouth. This ability uses language, which is a system of fixed coding of thoughts using words of known meaning. Language has two elements: understanding and production. As for understanding, we first hear the individual words and give them a meaning; then we link them into sentences which carry the information. The ability of production allows us to convert information into words which form sentences; these sentences are produced with the vocal cords and mouth. The ability of speaking and expression completely disappears in the third phase, the phase of severe dementia. However, throughout the whole course of ALZ we may see impairment of the patient's expression abilities.
• One of the areas of the brain is the prefrontal cortex, where the executive functions are situated. These executive functions allow us to plan activities, solve problems and coordinate thoughts. They are very complex and complicated cognitive functions. Creativity and flexibility are needed for their correct activity: creativity allows us to create original ideas or thoughts, and flexibility provides the possibility to identify and solve a new problem [3]. As this description makes clear, it is a complex and complicated process whose quality is reduced as the patient's disease progresses. We can help patients with thinking through interviews with appropriate questions, which divide the thought process into simpler thinking steps.

2.3 Activation and Reminiscence

Activation is very important in the care of people with Alzheimer's disease. Activity prevents the development of anxiety and depression, and collective activities bring people together, which supports social relations and communication. Activation is a general process that supports the opportunity to live in one's own way. Reminiscence is an activation and validation method that uses the fact that the majority of patients who suffer from dementia still retain their long-term memory. The reminiscence method also helps to find the most convenient way of communicating with patients. This method can be used with patients in moderate or lower stages of ALZ. Several kinds of reminiscence aids can be employed to facilitate the process of reminiscence therapy. One option is to create a CV in cooperation with the patient's family members; it helps the therapist to get to know the patient's past and to focus on concrete memories. Another variant is a book containing old photographs showing the patient's relatives and well-known people; the book can also contain old documents, for example certificates. It is also possible to use a so-called box of memories, which may contain objects that are important for the patient and that the patient will therefore probably remember. Such a reminiscence box is shown in Fig. 1. With these aids the patient tells fragments of his past, and together with the therapist they try to make the patient remember as much detail as possible or discover other unforgettable memories.

Fig. 1. Old nursing device

2.4 Developed Applications

We will only consider software for Alzheimer's patients that is used for working with them or for prevention. We will not mention applications based on foreign-language texts because they cannot be used by an ordinary patient. For desktop computers there are almost no applications for people with Alzheimer's disease, so they will not be mentioned here. Larger centres that focus on working with people with ALZ occasionally have their own internal applications, but these applications are often pilot projects and there are only a few of them in the Czech Republic. Another reason for the absence of this type of application for people with ALZ is the computer illiteracy of the patients. SeaHeroQuest [4] was published in 2016 in collaboration with Alzheimer's Research, University College London, the University of East Anglia, and the game developers Glitchers. The player controls a sailor who takes photos of sea life and sails across the ocean. The player is forced to use his insight and sense of orientation; worsening orientation is one of the manifestations of Alzheimer's disease. Figure 2 shows the environment of this game. Water Garden Live Wallpaper shows a full-screen floating water surface of a pond where fish swim; this game environment is shown in Fig. 3. Touching the screen with a finger distorts the water surface and scares the fish away.

Fig. 2. The SeaHeroQuest game environment

Fig. 3. The Water Garden Live Wallpaper game environment

3 Own Application The tablet application consists of six exercises: Order, Idioms, Link, Mark the inappropriate, Subtraction, and What is a day. The user interface was customized for the patients: the controls were chosen to be as simple as possible, the buttons are placed far apart, and the font is of maximum size.
• The exercise "Link" practices thinking: objects in images are recognized and then linked with their written names.
• The exercise "Order" practices short-term memory. Nine buttons are displayed, arranged in a three-column, three-row table. A pseudo-random generator chooses three or four of these buttons without repetition, and each chosen button is lit for one second, one after another. The patient then has to press the same buttons in the same order (see the sketch after this list).
• The exercise "Subtraction" from 100 practices short-term memory and abstract thinking. The exercise requires repeatedly subtracting a selected one-digit number from 100. The user chooses the difficulty before starting the exercise by selecting the subtracted number from a group of easy or difficult numbers: the easy numbers are 2, 4, 5 and 8, and the difficult numbers are 3, 6, 7 and 9.
• The exercise "Mark the inappropriate" practices the cognitive recognition function and the use of logical judgement. Three images are shown, one of which is evidently different from the others; the task of the user is to mark the odd picture.
• The exercises "Idioms" and "What is a day" are almost identical in their implementation, so they are described together, focusing on the main differences between them. In the exercise "Idioms", part of an idiom is shown and it is necessary to choose its correct continuation or ending. The exercise uses the long-term memory where these idioms are stored. The source contains 70 idioms, so they are not repeated often.
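To make the exercise logic concrete, the following sketch shows how the "Order" sequence could be generated and checked, and how the expected "Subtraction" chain could be derived. It is an illustrative sketch only: all names are hypothetical, the published application itself is a Java/Android app, and C# is used here simply to keep one language across the code examples in this volume.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch of the "Order" and "Subtraction" exercise logic.
public static class MemoryExercises
{
    static readonly Random Rng = new Random();

    // "Order": a pseudo-random generator chooses three or four of the nine
    // buttons (a 3x3 grid) without repetition; each is then lit for one second.
    public static List<int> GenerateOrderSequence()
    {
        int length = Rng.Next(3, 5);               // 3 or 4 buttons
        return Enumerable.Range(0, 9)              // buttons 0..8
                         .OrderBy(_ => Rng.Next()) // shuffle
                         .Take(length)
                         .ToList();
    }

    // The patient must press the same buttons in the same order.
    public static bool CheckOrderAnswer(List<int> shown, List<int> pressed) =>
        shown.SequenceEqual(pressed);

    // "Subtraction": the expected chain when repeatedly subtracting the chosen
    // one-digit number from 100 (easy: 2, 4, 5, 8; difficult: 3, 6, 7, 9).
    public static List<int> ExpectedSubtractionChain(int step)
    {
        var chain = new List<int>();
        for (int value = 100 - step; value > 0; value -= step)
            chain.Add(value);                      // e.g. step 7: 93, 86, 79, ...
        return chain;
    }
}
```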

4 Case Study and Discussion The application was tested with ten people aged from 65 to 89, including seven women and three men. The health information – whether a person has Alzheimer's disease or suffers from dementia – is confidential medical information, which is why we do not know how many people in the sample suffered from the initial phase of Alzheimer's disease and how many belonged to the ALZ risk group. We do know, however, that none of them showed signs of advanced dementia. During testing of the application it was found that those who were not used to using a cell phone or other modern technical devices had difficulty working with the application and could use it only with the assistance of a nurse. People who were used to using mobile phones learned to work with the application very quickly, although it was necessary to help them the first time. Only two people in the sample refused to use the application, their reason being that "these new things are nothing for them". No significant differences in the ability to work with the application were found between the group of women and the group of men. The accuracy of the responses fluctuated. It was common that the tasks solved at the beginning of an exercise had a higher error rate than the tasks solved in the middle of the exercise. This can be explained by the fact that at the beginning of an exercise people had to get to know the type of task and the way of working with the application, so keying mistakes were more frequent and the level of concentration on the tasks was lower. For the seniors, the most difficult exercise was "Order", probably because it combines a time limit with short-term memory. A clinical study would be necessary to prove whether this application helps to slow the development of dementia. But it can be said that the application allows technically literate patients to practice their cognitive functions on their own, and it allows patients with lower technical literacy to practice with the help of nurses.


5 Conclusion In collaboration with external workplaces and workers, we managed to design an application that uses modern technology for cognitive therapy. The main software engineer, Mr Provazník, developed a very successful application, which was presented in his final thesis at the university. The application was tested in cooperation with Dr. Janečková and evaluated in a case study. The case study has shown that the application is usable and has many advantages compared to other solutions; these advantages will grow with the increasing information literacy of the population. Before further development and clinical tests, we are publishing this solution internationally through this publication.

References 1. Jung, C.G.: Psychological Types. Collected Works, vol. 6. Princeton University Press, Princeton, NJ [1921] (1971). ISBN 0-691-01813-8 2. Janeckova, H., Vackova, M.: Reminiscence: vyuziti vzpominek pri praci se seniory, 151 p. Portal, Praha (2010). ISBN 978-807-3675-813 3. Zgola, M.: Uspesna pece o cloveka s demenci. Grada, Praha (2003). ISBN 80-247-0183-9 4. Jordan, L.: Navigating dementia: researchers at UCL are casting a new light on dementia through a mobile phone game with 2.7 million players. University College London (2017)

Unity3D Game Engine Applied to Chemical Safety Education Nishaben S. Dholakiya1,2(&), Jan Kubík2, Josef Brozek3,4, and Karel Sotek2,3 1

University of Nottingham, NG7 2RD Nottingham, UK Faculty of Chemical Engineering and Faculty of Electrical Engineering and Informatics, University of Pardubice, 532 10 Pardubice, Czech Republic {st53279,st52531}@student.upce.cz, {josef.brozek,karel.sotek}@upce.cz 3 Laboratory of Application of the Software Technologies - ASOTE, 532 10 Pardubice, Czech Republic [email protected], [email protected], [email protected] 4 Metropolitan University Prague, Ucnovska 100/1, 190 00 Prague, Czech Republic https://www.asote.cz 2

Abstract. The paper presents principles of high-level safety education in chemistry taught through the Unity3D game engine. The applied paradigms explain the benefits of using game engines over other forms of teaching methodology. Very positive results can be achieved especially in education domains which are very important but sometimes boring. Major rules for developing this kind of software are presented, together with a case study on using concrete software for chemical safety education. Keywords: Software education · Chemical safety · E-learning · Game education · Unity3D

1 Introduction This article deals with the issue of safety in the field of chemical production processes. It was motivated by the question of how modern computer science can support the learning process in a specific field such as chemical processes. The Faculty of Chemical Technology requested software, which was created by the Faculty of Electrical Engineering and is presented in this article. All the principles mentioned, and the software itself, are practically oriented and usable in industry. The Faculty of Chemical Technology deals with (amongst other things) the production and improvement of explosives. Safety in laboratory conditions is absolutely crucial. But the theory of moving safely inside buildings and laboratories or of working with chemicals is considered boring, and theoretical knowledge and paper tests do not accurately reflect the needs of chemists. The introduced software brings the innovative features that an emulated environment provides:
• If the student makes a mistake, it is not fatal and does not endanger his health
• We can learn from our error and then we will not make it again
• The After Action Review allows the mistakes to be watched in detail
• The software is intuitive and emphasizes practical application rather than theoretical knowledge
• The walk-through of the software is a binary criterion for trainers.
The Unity3D platform was used to create the software solution. In layman's terms, it is a game from the point of view of the third or first person. The solution is widely applicable in practice, bringing not only innovative teaching techniques but also increased safety in laboratories.

2 State of the Art The principle of e-learning is widely used, although it is a very young technology. E-learning has been used since 1999 [1], when the term was first used at a CBT systems seminar. But it is necessary to mention the predecessor of e-learning: in 1954 the psychologist B. F. Skinner defined the teaching machine, which he implemented in 1960 [2]. The first systems focused only on delivering information to students, often as presentations or simple materials. Comprehensive e-learning as we know it today was defined in 1990. Although there are many definitions of e-learning, such as [3, 4], its modern principles reflect the standard educational process: e-learning is composed not only of knowledge transfer to a student, but also of ways of verifying that knowledge, such as tests. The next step, which was enabled by information technology, was the emulation of environments. On a computer or in virtual reality, training can be done more practically: the student virtually appears in the very place where he will work. It is possible to prepare training, for example, for military personnel [5], for transport logistics [6], or, as considered in this publication, in the field of chemical process safety [7].

3 Software Requirements The software requirements were defined in a specialized educational institution which focuses on very specific areas of chemistry – mainly exothermic processes in the production of explosives. It is important to mention that the production of explosives is not dangerous if production procedures and safety instructions are followed. The key conditions are (1) sufficient cooling, (2) always working with products in a stable state (meaning it is always necessary to finish the production and never allow the long existence of intermediates), and (3) the necessity to abide by safety regulations. Before employees or students are allowed to enter such a workplace, they must go through a number of complicated safety trainings. Nowadays these trainings require the production line to be turned off while employees or students practice procedures and prove that they have understood the safety instructions. Stopping the production line makes this an expensive part of the training.


Having its own software should allow the institution to emulate staff training without stopping production or experiments. Additionally, in an emulated environment it is possible to create extraordinary situations (for example, a break-down of production), so the employees are better prepared for them. Complicated software actions could be unnatural for users; the results of the training would then reflect not shortcomings in the knowledge of the trained persons but their inability to control the software. For this reason, a few simple tasks were added to the initial training. At these first levels of the software, the user learns through trial how to work with the software. As has been proven, the first level can also be very good material for secondary-school students or beginning university students. The software requirements were:
• Emulated environments with laboratories
• The ability to control a specific character
• The possibility of decision-making and action in the field of worker protection
• Monitoring of individual actions and their score
• Saving interactions to identify the weak points of a player
• Repeating a level in case of failure.

4 Solution The Unity3D gaming engine was used to implement the system. The graphical solution was developed by the programmers and reflects the laboratory closely. Part of the game scene is created with open-source assets (for example, character animation). The programming language C# was used for this application. Chemical reaction data are transferred and recalculated from the MatLab software. Textures are usually real-life photographs or animated photographs of the Laboratory of Energy Materials at the University of Pardubice. The interface of the game can be seen in Figs. 2 and 3. The pictures show a character who moves inside a building. The game has a first-person view and, in some situations, a third-person view. For example, the first level (shown in the pictures) allows the player to move around the building and enter a laboratory, a snack bar and offices. The mission at this level is to bring a chemical to the supervisor in his office. The fastest walk-through means that the chemical is not delivered correctly, and it ends with an accident. The game checks the correctness of the procedure, and although each step is optional, the player should:
• Take the exposure suit on
• Take the glasses on
• Enter the laboratory
• Find a safety container
• Take the gloves on
• Open the chemical cabinet
• Put the chemical in the container
• Mark the container
• Take the exposure suit and other protective clothing off and leave with the secured container.
For each correct activity, the player gets points. To pass a level successfully, the player has to receive at least 80% of the total points. In this respect the game conditions are easier than reality, where, according to the rules of work safety, it is only acceptable to meet all conditions. Conversely, in the workplace some activities are omitted for time reasons (for example, putting an exposure suit on). The original first-level schema can be seen in Fig. 1.
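A minimal sketch of the scoring rule just described – each correct activity awards points and the level is passed at 80% of the total – is shown below. The step names and point values are hypothetical assumptions, not the application's actual configuration.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of the level-scoring rule: each correct activity awards
// points and the level is passed at 80% of the total.
public class LevelScore
{
    readonly Dictionary<string, int> steps = new Dictionary<string, int>
    {
        ["TakeExposureSuitOn"]        = 10,
        ["TakeGlassesOn"]             = 10,
        ["EnterLaboratory"]           = 5,
        ["FindSafetyContainer"]       = 10,
        ["TakeGlovesOn"]              = 10,
        ["OpenChemicalCabinet"]       = 5,
        ["PutChemicalInContainer"]    = 20,
        ["MarkContainer"]             = 15,
        ["TakeProtectiveClothingOff"] = 15
    };

    readonly HashSet<string> completed = new HashSet<string>();

    // Called by the game whenever the player performs a step correctly.
    public void Complete(string step)
    {
        if (steps.ContainsKey(step)) completed.Add(step);
    }

    public int Earned => completed.Sum(s => steps[s]);
    public int Total  => steps.Values.Sum();

    // Pass threshold of 80% of the total points, as described in the text.
    public bool Passed => Earned >= 0.8 * Total;
}
```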

Fig. 1. Level flow scheme

The software allows the difficulty to be chosen and extraordinary situations to be emulated. For example, during the opening of a chemical cabinet it is possible to prepare these extraordinary situations:

• Find spilled chemicals
• Spill of a chemical
• Breaking the beaker and spilling the chemical
• Breaking the beaker, spilling the chemical, and wounding the user.

The trained person has to be able to respond correctly to all extraordinary situations. When the higher difficulty is chosen, the software can be used as a supportive tool for training in the field of worker protection according to the legal requirements (Figs. 2 and 3).
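One possible way to represent and inject the four extraordinary situations listed above is sketched here. The enumeration mirrors the list, while the difficulty scaling and all names are assumptions for illustration only.

```csharp
using System;

// Hypothetical sketch: the four extraordinary situations as an enumeration,
// injected at random when the chemical cabinet is opened.
public enum ExtraordinarySituation
{
    None,
    SpilledChemicalsFound,
    ChemicalSpill,
    BrokenBeakerWithSpill,
    BrokenBeakerWithSpillAndInjury
}

public static class SituationInjector
{
    static readonly Random Rng = new Random();

    // Assumed scaling: higher difficulty raises the probability of trouble.
    public static ExtraordinarySituation OnCabinetOpened(int difficulty)
    {
        double probability = Math.Min(1.0, 0.1 * difficulty);
        if (Rng.NextDouble() >= probability)
            return ExtraordinarySituation.None;

        return (ExtraordinarySituation)Rng.Next(1, 5); // one of the four situations
    }
}
```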

Fig. 2. The game – laboratory entrance

Fig. 3. The game – laboratory


5 Discussion The application of the software had several crucial benefits which can be mentioned in the discussion.
Safety of the Trained Person and Extraordinary Situations
While real-life training situations can be really dangerous, the user is in total safety during e-learning training because he does not have to prove his knowledge during company operation. It is also important that safety can be trained even in extraordinary situations. It is not possible for trainers to demonstrate real-life situations such as a major spillage of acids; if such a situation were trained, for example by spilling water, it would not be possible to practice using neutralization agents, triggering the emergency flooding of the room, and so on. Thanks to the software it is possible to emulate all these points. At the same time, thanks to the high-quality visualization of the scene, exercises on the simulator have the same results as exercises in reality. In fact, the only thing that cannot be practiced is the feel of grabbing beakers and rags of different materials and of different protective aids.
After Action Review and the Educational Process
The software is designed to avoid errors which the user cannot understand. After the walk-through of a level, the user can return to any moment of his walk – most often to the moment he lost a point. The user is then shown why he lost the point and what he should have done correctly. This is possible without the help of a supervisor. However, the software cannot answer more complex questions and has only predefined scenarios. Even so, the software offers more possibilities than paper tests.
Easy Evaluation
The software allows easy scaling of the difficulty. It is possible to set only the basic requirements, to create a "disaster" scenario, or to limit a walk-through to a certain time limit. The fulfilment of the training objectives is monitored by the software and the evaluation is very intuitive. The software can be set to strict compliance with all rules – but if 100% must be attained, the "entertainment" of the solution is greatly reduced and the frustration of the user is increased.
Training Results
Training with this software has crucial benefits: it is more fun, less expensive and practically oriented, and the knowledge of the trained staff is greater. After a theoretical lecture, all students can proceed at their own tempo and gradually improve. They do not threaten themselves, or the lives of others around them, during their training. The disadvantage is the lack of some stimuli – odour, weight, slipperiness and so on. But the fact is that the use of training software is generally positive.
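The After Action Review described above implies a timestamped log of player actions that can be replayed up to any chosen moment. A hypothetical sketch of such a log follows; the record fields are assumptions, not the application's actual data model.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of the After Action Review log: every action is stored
// with a timestamp, the points awarded or lost, and an explanation, so the
// walk-through can be replayed up to any chosen moment without a supervisor.
public record LoggedAction(TimeSpan Time, string Action, int PointsDelta, string Explanation);

public class AfterActionReview
{
    readonly List<LoggedAction> log = new();

    public void Record(TimeSpan time, string action, int pointsDelta, string explanation) =>
        log.Add(new LoggedAction(time, action, pointsDelta, explanation));

    // Replay everything that happened up to (and including) a chosen moment.
    public IEnumerable<LoggedAction> ReplayUntil(TimeSpan moment) =>
        log.Where(a => a.Time <= moment);

    // The moments the user most often returns to: where points were lost.
    public IEnumerable<LoggedAction> Mistakes() =>
        log.Where(a => a.PointsDelta < 0);
}
```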


6 Conclusion We created very successful and usable software by combining several fields: information technology (creation of software, games and simulators), chemistry (specific know-how) and pedagogy (the educational process). The educational software has five levels with a wide scale of parameterization; a walk-through with all choices and complications takes at least 14 h. It offers a flexible environment where interactions with objects are allowed, and dynamic events are used to train the employees or students. The software provides a better experience – the training is not boring – and, at the same time, trainees have statistically better knowledge. Another benefit is the possibility of practical training without the need to stop the production line; it also gives the possibility to train when company operation is stopped. The last great benefit is the high safety of the company operation and its trained people – injury cannot occur during the training.

References 1. Moreno, R., Mayer, R.: Cognitive principles of multimedia learning: the role of modality and contiguity. J. Educ. Psychol. 91(2), 358–368 (1999) 2. Crowder, N.A.: Automatic tutoring by intrinsic programming. In: Lumsdaine, A.A., Glaser, R. (eds.) Teaching Machines and Programed Learning I: A Source Book. National Education Association of the United States, Washington, DC (1960) 3. Průcha, J., Walterová, E., Mareš, J.: Pedagogický slovník, 6th edn., 400 p. Portál, Praha (2009). ISBN 978-80-7367-647-6 4. Oxford Dictionaries. University of Oxford. https://en.oxforddictionaries.com/definition/us/elearning. Accessed 11 July 2018 5. Quinn, C.N.: Engaging Learning: Designing eLearning Simulation Games. San Francisco (2005). ISBN 0-7879-7522-2 6. Brozek, J., Jakes, M.: Application of mobile devices within distributed simulation-based decision making. Int. J. Simul. Process Model. 12(1), 16–28 (2017) 7. Gublo, K.I.: A Laboratory Safety Trivia Game. Department of Chemistry, State University of New York at Oswego, Oswego, NY 13126. https://doi.org/10.1021/ed080p425

Use of Game Engines and VR in Industry and Modern Education Tim van Der Heijden1,2, Dan Hamerník1(&), and Josef Brozek1,3 1

2

Laboratory of Application of the Software Technologies - ASOTE, 532 10 Pardubice, Czech Republic [email protected],{hamernik,brozek}@asote.cz, [email protected] Triangle Studio HQ, Wismastate 9A, 8926 RA Leeuwarden, The Netherlands [email protected] 3 Metropolitan University Prague, Ucnovska 100/1, 190 00 Prague, Czech Republic www.asote.cz

Abstract. The paper is focused on a review of trends in the industrial use of game engines and VR, and on the trend of education in VR. Thanks to a synthesis of both topics, a special course could be created for the University of Pardubice. The course has 13 major topics and is adapted to modern education principles. The major part of the course is focused on programming, 3D modelling and the use of VR. Keywords: Didactics · Teaching methods · Game engines · Video games · Unity 3D · Virtual reality · Augmented reality

1 Introduction The main goal of this paper is to review the trends in the field of teaching video-game development in tertiary education. For a long time, the Czech Republic has struggled with a lack of well-educated professionals in this field, and it is therefore necessary to reflect this issue at universities. Educating students in game development and in the use of game engines proved to be essential during the development of a combat-vehicle simulator, a project included in a course called Project-Based Learning [1, 2]. It showed that creating the simulator was very complicated and that the use of the Unity 3D game engine was too demanding; this is why it took my colleagues and me a long time before we were able to create the simulator. The efficiency of such projects must be increased in order to compete in research and game development with foreign universities which already pursue such matters.

1.1 Game Engine

To correctly understand the issue of modern video-game development, it is necessary to introduce a fundamental tool of the modern game industry – the game engine. The word "engine" can be interpreted as the machine that runs everything in a game, the same way a car engine runs a car: the engine is what makes the car move, and in its absence the car simply won't go anywhere. The main difference, however, is that it is easy to identify which part of a car is the engine and which is part of, for example, the body. With a game engine it cannot always be easily specified which part is the game engine and which part is already the game [3]. We have to realize that creating a modern video game is primarily a team effort in a billion-dollar business, usually requiring hundreds of people, sometimes thousands. Generally, the concept of a game engine is very simple: a game engine is an application which provides concepts such as rendering, inputs, outputs, physics, sounds, collisions, animations and much more, which gives developers the possibility to focus on the details that make the games special [3].
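The division of labour just described – the engine supplies rendering, input, physics, sound, collisions and animation, while the game supplies the logic that "makes the games special" – can be illustrated with the classic main loop at the heart of an engine. This is a simplified, hypothetical sketch, not the architecture of any particular engine:

```csharp
using System.Diagnostics;

// Simplified, hypothetical sketch of a game engine's main loop: the engine
// owns input, physics and rendering; the game supplies only the Update logic.
public abstract class MinimalEngine
{
    protected abstract void Update(double deltaSeconds); // game-specific logic

    public void Run()
    {
        var clock = Stopwatch.StartNew();
        double previous = 0;
        while (!ShouldQuit())
        {
            double now = clock.Elapsed.TotalSeconds;
            double delta = now - previous;            // time since last frame
            previous = now;

            PollInput();        // engine service: keyboard, mouse, gamepad
            StepPhysics(delta); // engine service: collisions, rigid bodies
            Update(delta);      // the game's own behaviour
            Render();           // engine service: draw the current frame
        }
    }

    protected virtual bool ShouldQuit() => false;
    protected virtual void PollInput() { }
    protected virtual void StepPhysics(double dt) { }
    protected virtual void Render() { }
}
```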

1.2 History of Game Engines

To understand the issue of game engines, it is necessary to take into account several circumstances which led to their creation. In the time of 8-bit gaming consoles and home microcomputers, games were written in assembly language (the "language of symbolic addresses") because the computers of that time could not handle more complex development environments. Together with the increasing performance of computational equipment, the number of users who used computers not only for work but also for fun grew as well. The growing gaming industry then created the need for the first game engines: developing games without game engines is economically extremely demanding, since a lot of effort and time must be invested in creating each game [4]. The first fully-fledged game engine was SCUMM, from the company LucasArts, created in 1987. However, the first real breakthrough in the world of game engines must be attributed to id Tech 1 (also known as the Doom engine); the game studio id Software built its biggest hits using this engine. The id Tech 1 engine is considered to be the first functional 3D engine complex enough to offer a 3D experience of the game. A newer version of the engine, id Tech 2, is also worth mentioning because it is the first real 3D game engine, which means that all the models are real (not just textures) and are made of polygons. The id Tech 2 engine also contains dynamic lighting and an algorithm to calculate and render only those parts of the environment in which the player moves.

1.3 Motivation to Use Game Engines

In the modern development of video games, trainer-like simulators, strategic simulators, visualizations and applications for virtual reality, there is ever more emphasis on the emulation of reality. For easier understanding of this issue, we consider the core of an application and a game engine to be identical. Creating a simulation core or the core of a video game is, however, very difficult to program, and in a competitive environment it is economically very inefficient, which is why there are alternatives to creating your own core:
• Using an already existing core developed by specialists
• Using an older core and modernizing it
• Using an older core without modernizing it, at lower cost but at the price of a significantly worse UX.
The majority of AAA game studios use already existing game engines to develop video games – mostly their own game engine, which has been worked on for many years. To pay for the development of such an engine, AAA game studios often offer their engine for a fee for use in commercial projects; however, they also offer it free of charge for use in non-commercial projects. License agreements differ with each game engine, but the goal is always the same – to maximize the quality of their own product and their profit. The motivation to use a game engine is therefore quite clear: using a professional tool developed over the years by hundreds of experts, free of charge or with minimal investment, is much more beneficial than developing your own game engine. However, the professional public is divided in its opinions on the usage of game engines: one side says that using such an engine is a fraud, the other welcomes its benefits. It is hard to say which side is correct. This dispute can, with a bit of abstraction, be compared to using a calculator during a math test – is the one using the calculator a cheater?

2 Popular Game Engines Nowadays we can find a variety of game engines which are used to develop modern video games, simulators, virtual reality applications, visualizations and more. For the purposes of this paper, the most popular game engines are considered to be the ones used by most developers. Other statistics used to compare the popularity of game engines are the number of players, the number of released games, and the number of issued licenses. The most popular game engines are introduced in the following subchapters, where they are also compared to one another; one of the subchapters is devoted to their usage in teaching.

2.1 Unity 3D

There are many publications about the Unity 3D engine among the professional and lay communities. Due to the nature of this chapter, it is best to quote some of the articles and publications of authors who really work with the engine. The quotations are chosen so that they include opinions of both communities, making it possible to form a qualified opinion on the matter. "The Unity game engine was originally used for mobile games and low-budget titles but that has changed now. Unity 5 was significantly improved and has far better possibilities for character animation, physics, scene lighting system and more options for audio. Scene lighting is handled by a system called Enlighten. There will also be HDR and physics will be handled again by the PhysX 3.3 engine. Unity 5 also offers flexibility and currently supports a variety of 21 platforms – they include the new WebGL and the API Metal for iOS and other different consoles, desktop and mobile platforms. Virtual reality (Gear VR) is not forgotten as well" (Vítek 2015). "In the newer version of Unity3D 5.0, released on 04/01/2015, the GUI system is replaced by the new UI system. The GUI system is, even though still functioning, not recommended. The testing client was created before the release of the new version, which is why it still operates on the old GUI system. The user interface of the test client is created using scripts where each graphical element is created the same way in the new system as it was created in the old system" [5].

2.2 Source 2

The Source 2 game engine is still in development and is not open to the public yet. An interesting aspect, however, is that it fully supports the Vulkan API technology, which is considered by the professional community to be the future of game development and of simulation using a game engine as a simulation core [1, 2].

2.3 Unreal Engine

Among the popular engines of today it is necessary to include UE4. Its popularity is rising especially thanks to the direct link between the developers and the community. It is an extremely intuitive program, even though much has changed: users who are still using UDK might be surprised how much UE4 differs from what they are accustomed to in UE3, and we can say that these changes are surely improvements. The essential message for programmers is that the scripting language is no longer UnrealScript but classic C++. The scripting language Kismet was also replaced by Blueprint, which works on an intuitive visual basis – writing code can be completely omitted.

2.4 CryEngine

CryEngine and its offshoot Lumberyard are also very popular game engines at present. CryENGINE is an entirely equivalent alternative to Unreal Engine 4, and there are often debates about which one looks better. According to the lay community, both of them look just as amazing. We say, nevertheless, that UE is more suitable for sci-fi and games set in modern times, while CryENGINE excels at creating a very realistic world (especially nature), which is more suitable for credible games and historical titles.

3 Suitability of Using Game Engines in Education There are a couple of good reasons to use the Unity 3D game engine for education. Generally speaking, game development is very problematic due to its computational demands: a game engine alone uses a lot of system resources, and combined with the demands of the game itself, or of simulating in the game engine during testing, it requires very sophisticated and advanced school equipment. Unity 3D has the lowest system requirements of the engines mentioned above, making it the cheapest engine to run on the cheapest hardware. Another aspect of suitability for education is the programming languages which are necessary for writing scripts. It is crucial to reflect the current trends of particular countries and areas, and even the programming languages which the academic institution already teaches: it is much simpler for students to deeply understand a programming language which they already know a little than to learn a new one from scratch. Finally, it is necessary to reflect the complexity of a game engine and primarily of its editor. Given lesson time requirements, it is necessary to choose a game engine that lets students experience the entire process of creating a game and a simulation rather than exploring the development environment for months.

4 Teaching Video-Game Development Using Game Engines This chapter is devoted to an introduction to the teaching of computer-game development. Teaching methods used in computer-game development courses abroad and in the Czech Republic are discussed.

4.1 Motivation to Teach Video-Game Development

The motivation to teach video-game development has many pillars. Most students who study IT, digital technologies, graphics, film or screenwriting play video games and want to create them. Video games are generally at the centre of attention of both the professional and lay communities, which documents how profitable this industry is. Video games generate more profit than the movie industry, which is why it is a very promising field that keeps growing, with demand exceeding supply. Another pillar is the students' interest. Most IT students examine how software works, which language was used to write it and who participated in its development; it is the same with video-game development. Students are interested in how a game works, who made it, and why the developer chose a particular solution in a key situation. Students tend to get familiar with minor or more complex modifications of video games and create content of their own – for example game mods. Video-game development has a tradition in the Czech Republic, and some Czech studios are big players in the video-game market. When comparing the movie industry to the game industry in the Czech Republic, one can observe that youngsters know the country more through its video games than through its movies: the movie industry has a great history here but little international success. Since there are many study programmes connected with the movie industry, it only makes sense to have programmes connected with the video-game industry.

4.2 Use of Game Engines in the Educational Process

The theory of games in the educational process has been known for centuries, but with the growth of modern technologies it receives much more attention from the general public and the professional community, and so the play element in schools becomes reality. Games are widely used in the educational process from pre-school education to tertiary education and beyond. Advances in technology can help students understand complex issues through interactive games without leaving the school. A nice example of this idea in practice is the educational version of the Minecraft video game from the company Mojang. Thanks to Minecraft, students develop algorithmic thinking, spatial imagination, creativity and, last but not least, teamwork: students are forced to cooperate and divide tasks, while the teacher plays the part of a consultant and mentor. The Montessori method can be applied while using Minecraft in the educational process (Anon. 2017). In the Czech Republic, the Ministry of Education, Youth and Sports states the goals and outcomes of educational programs; however, using video games and game engines as tools is purely at the discretion of the school. That is very important particularly in tertiary education, where in practice it means that the use of video games and game engines is not influenced by the accreditation process or the accredited study program.

5 Conclusion The situation in the field of education and application of game engines (including VR libraries) does not reflect market needs: the market has a strong demand for qualified workers. The situation shows an improving trend, and many schools are cooperating with the market correctly. The education methods also reflect the trends in technologies. A rising creation of workplaces in the field can be expected, and education processes may react to it. The industry is currently held back by a lack of qualified workers, not by technology.

References 1. Brožek, J., et al.: Application of the Montessori method in tertiary education of computer 3D graphics. In: ELEKTRO 2016 – 11th International Conference: Proceedings (2016) 2. Brožek, J., Jakeš, M., Hamerník, D.: Combat vehicle simulator based on HLA prototype concept. In: Proceedings of the 2016 17th International Conference on Mechatronics – Mechatronika (ME-2016), pp. 279–285 (2017) 3. Ward, J.: What is a game engine? Game Career Guide (2008). http://www.gamecareerguide.com/features/529/what_is_a_game_.php. Accessed 02 May 2017 4. Tišnovský, P.: Historie vývoje počítačových her (117. část – vznik herních enginů). Root (2014). https://www.root.cz/clanky/historie-vyvoje-pocitacovych-her-117-cast-vznik-hernich-enginu/. Accessed 14 January 2016 5. Balcárek, D.: Virtual World. Diploma thesis, Brno (2015)

The Use of Cloud Computing in Managing Companies and Business Communication: Security Issues for Management Marcel Pikhart(&) Faculty of Informatics and Management, University of Hradec Kralove, Hradec Kralove, Czech Republic [email protected]

Abstract. The presented paper summarises the current situation connected with the introduction of the European Union's unprecedented data-security regulation, the General Data Protection Regulation. It will dramatically influence data security in companies and must be taken into consideration by both ICT and non-ICT management. Research into the awareness of the situation was conducted in the Czech Republic. There has been no previous research in this area; therefore, there is an urgent need to look at the possible impact of the GDPR on management issues in companies. The research shows that managers are aware of the need to implement the regulation; however, the reason is merely the penalty for not following the standard. The managers do not accept that the regulation is necessary, because they think the current data protection is sufficient. Finally, the companies see the new regulation as a future cost, as new positions or even departments will have to be introduced to comply with it. Keywords: Data security · Business communication · Corporate communication · Managerial communication · Cloud computing

1 Introduction: General Data Protection Regulation and Data Security in Companies With the current implementation of the General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679) in basically all companies all over the European Union, IT issues and data security have become ubiquitous throughout businesses as never before. The intention of the European Parliament, the Council of the European Union and the European Commission clearly indicates that there have been unprecedented changes in data management, and these changes influence not only IT departments and companies but the whole business sector. Smart solutions are ubiquitous throughout various industries, and data protection needs our undivided attention (Klimova 2017). The basic intention is to protect the data of individual users, and there is also a clear attempt to unify data protection within the European Union; the regulation also tackles the issue of data processed outside the European Union. It became enforceable as of 25 May 2018; therefore, basically all companies and their IT departments are focusing on this matter with undivided attention, because effective measures must be implemented in compliance with the EU regulation. Otherwise there are hefty fines (up to EUR 10,000,000 or up to 2% of the annual worldwide turnover of the preceding financial year, whichever is greater) if the regulation is not followed and data security is somehow put at risk. In many companies it will also mean that new positions or even departments will have to be introduced.

Data-sharing-enabled devices, however, present both an opportunity and a threat. In favour of cloud computing we have the speed and convenience of data transfer; on the other hand, it also presents a serious data-security risk. It should also be noted that the use of cloud computing in companies is very important because it influences the price of storing data – rented IT capacities are much cheaper than internally owned ones. The most important issue for businesses regarding cloud computing is without doubt data security and protection. Using clouds means that companies submit their private data and information to an external provider via the Internet, somewhat beyond the control of the enterprise. We have already reached a phase of a particularly trustworthy relationship between the provider of a cloud service and the customer. Traditional data centres and data warehouses are secured by firewalls, network segmentation, etc., and the modern security features of cloud data centres are similar or even the same (Furht and Escalante 2010). Probably the most vulnerable part of using clouds is when the data is sent from the owner of the data to the provider of the service. Data should always be encrypted by the data owner, and if it is not, the cloud provider should take care of the encryption (a minimal sketch of such client-side encryption is given at the end of this section). Another issue, very important from a managerial perspective, is the availability of the data. Cloud service providers should guarantee that clients have reliable access to their data in the expected quality. Network failures, viral infections, delays and timeouts should be reduced to a minimum; however, the management must count on the fact that these situations can occur at any time. Global availability of data is a ubiquitous phenomenon, but it does not mean that the data is available all the time without any threats. Even if we are aware of these potential issues, it still goes without saying that cloud computing is ideal for small and medium enterprises as well as for multinational enterprises, due to the user convenience and the availability of data at any time. Cloud computing is therefore a way to efficiently reduce IT costs for data warehousing and thus increase the efficiency of the company (Rhoton 2009). Cloud computing implementation is often connected with reducing the cost of IT services by around 20–30%, and that is the reason why companies are moving to cloud services, i.e. on the basis of financial analysis (Pavlíčková 2012). When companies are considering introducing a cloud service into their IT system, it is almost always the security issue that is the biggest limiting factor. Another reason not to introduce cloud computing in small and medium enterprises was insufficient knowledge and information about this technology among the management responsible for the change: in the survey by Giannakouris (2014), 32% of small and medium enterprises gave insufficient information as the reason for not implementing cloud computing in their systems.
Another limiting factor was that the price of cloud services was considered too high. Corporations, on the other hand, expressed major concerns about data stored globally: the risk of a security breach, legal uncertainty and uncertainty about data placement were the most important concerns of corporations. For small and medium enterprises, the most important concerns were the risk of a security breach, the high cost of cloud services and the lack of knowledge about this service (Giannakouris 2014). With the introduction of the GDPR in the EU, we can expect that the most important reason for not implementing cloud computing, i.e. the risk of a security breach, will be taken into account by companies even more, as the punishment for disclosing clients' private information is very severe. Cloud computing in the EU is supported by the EU decision from 2012 (Digital Agenda for Europe – European Cloud Computing Strategy (ECCS)) to enhance industry by using cloud services. According to this research, only 24% of respondents said they use cloud applications, whereas the global average is 34%. Therefore, the EU strongly supports the implementation of cloud computing, since it believes it will enhance businesses not only from the IT point of view but also economically: the ECCS, as the major European cloud strategy, should increase European GDP by 5% over the next eight years and create 3.8 million new jobs. Preliminary research (Svobodova 2014; Maresova 2016; Mohelska and Sokolova 2016, 2017, 2018) into the use of social networks and cloud computing in companies proves an increasing trend in favour of the use of clouds and social networks even in company management; the importance of the topic has therefore increased dramatically in the past few years. However, companies still lag behind the current boom in the use of clouds and social networks (Cerna and Svobodova 2017), and research should show why. Cerna and Svobodova (2017) conducted research comparing data on the means of communication that small and medium-size enterprises use with clients, focusing on the utilization of social networks for private and corporate purposes; however, despite the significant expansion of social networking in personal life, this boom did not occur in the selected sample of businesses that participated in the pilot testing. The aim of the present research is therefore to describe the current situation regarding the acknowledgement of the GDPR and its position in the management of companies, from a managerial viewpoint.
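As an illustration of the client-side encryption recommended earlier in this section, the following sketch encrypts data with AES before it leaves the owner, so the provider only ever stores ciphertext. Key management and the upload itself are out of scope, and all names are hypothetical; C# is used to keep one language across the code examples in this volume.

```csharp
using System.Security.Cryptography;

// Sketch of client-side encryption: the data owner encrypts with AES before
// upload. Key management and the upload itself are out of scope.
public static class ClientSideEncryption
{
    public static (byte[] Ciphertext, byte[] IV) Encrypt(byte[] plaintext, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;        // e.g. a 256-bit key that never leaves the owner
        aes.GenerateIV();     // a fresh IV for every upload

        using var encryptor = aes.CreateEncryptor();
        byte[] ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
        return (ciphertext, aes.IV);
    }
}
```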

2 Research Description As the current situation is developing fast and even dramatically, it is almost impossible to follow any previous research as it lacks any relevance. There is no comparison in the market and we can only assume what has not been researched yet. The qualitative research in the form of guided interviews with the top management (usually the owners of the companies) was conducted in the Czech Republic in thirteen SMEs based in the Czech Republic but doing business globally. Several of the the companies are ITC companies or companies producing specialised security devices, other companies are in machinery production. They employ between 10 and 100 employees. Their annual turnover is up to CZK 50,000,000 (EUR 2,000,000). The research focuses on the current situation in security of using cloud computing and also the implementation of the GDPR regulation and its connection with data


protection of data stored in clouds. The research is qualitative: guided interviews were conducted with the owners, shareholders or top management of these companies.

3 Discussion of the Results

The most important reason for taking the GDPR into serious consideration was, in all researched companies, the fear of the extremely high penalty for breaching the law, which can rise up to EUR 2,000,000. The most important findings of the research are as follows:

• The management of the companies is generally (100%) aware of the need to implement the regulation in the company's system for processing information and sensitive data; however, the vast majority of the managers responsible for this issue considered the regulation useless or an exaggeration of the situation.

• None of the companies considered the current situation dangerous for the data obtained from clients and kept in the company CRM software. The current state of data protection is perceived as sufficient, and 67% of the managers do not consider the changes necessary.

• However, many of the managers (56%) confirm there will be an urgent need to establish new positions in the company connected with the new regulation, such as a security data manager, or even whole departments with several such positions. The managers of the companies see this as a potential future cost.

It is very important from a managerial point of view to reconsider the current situation regarding data warehousing and data protection in areas which were previously neglected or out of the scope of company management. The current regulation must be taken into consideration not only by the ICT departments; it is now a burning issue for company management, which had previously not been involved in ICT issues. The management of the researched companies is aware of the situation but rarely has a clear vision of the future development of the matter. More detailed and better-prepared guidance from the European authorities would have allowed companies to implement the regulation more easily. The managerial issues arising from the regulation will need time to be digested and addressed appropriately, so as not to harm the competitiveness and profitability of the companies subject to the GDPR.

4 Conclusion

This paper attempts to depict the core issues connecting cloud computing safety with company business and corporate communication. It analyses the current situation and attempts to suggest some solutions. Cloud computing can be defined as using data through virtual sources provided via the Internet. It has been used for a relatively short time, but even after such a short time we can observe, and expect, that cloud computing will not only transform the way we store and retrieve data but will also alter the way final users (i.e. companies and managers) access and process these data. It is


very convenient to use cloud computing because the data can be accessed through various devices; this has great potential to facilitate the transfer of business information among employees, departments, subsidiaries and companies.

Acknowledgement. The paper is a part of the project SPEV 2018 at the Faculty of Informatics and Management, University of Hradec Kralove, Czech Republic. The author thanks Karel Kluch and Josef Toman for their cooperation.

References

Digital Agenda of Europe: European Cloud Computing Strategy. Accessed 10 May 2018. http://eige.europa.eu/resources/digital_agenda_en.pdf
Klimova, B.: Mobile phones and/or smartphones and their apps for teaching English as a foreign language. Educ. Inf. Technol. (2017). https://doi.org/10.1007/s10639-017-9655-5
Furht, B., Escalante, A.: Handbook of Cloud Computing. Springer, New York (2010). ISBN 9781441965240
Rhoton, J.: Cloud Computing Explained: Enterprise Implementation Handbook. Recursive Press, London (2009). ISBN 0956355609
Pavlíčková, K.: Virtualizace a cloud computing: Rozvoj cloud computingu v Evropě zaostává. Bankovnictví: moderní řízení, xiv, 10, p. 2 (2012)
Giannakouris, K., Smihily, M.: Cloud computing - statistics on the use by enterprises. Statistics Explained (2014)
Svobodova, L., Cerna, M.: Development of social networks from the local and global perspective. In: Jedlicka, P. (ed.) Hradec Economic Days 2014, Economic Development and Management of Regions, pp. 370–378 (2014)
Maresova, P., Klimova, B.: Economic and technological aspects of business intelligence in European business sector. In: Advanced Multimedia and Ubiquitous Engineering: FutureTech & MUE (FutureTech/MUE 2016), Beijing, China, 20–22 April 2016. Lecture Notes in Electrical Engineering, vol. 393, pp. 79–84 (2016)
Mohelska, H., Sokolova, M.: Trends in the development of organizational culture - a case study in the Czech Republic. Transformations in Business & Economics 17(1(43)) (2018)
Mohelska, H., Sokolova, M.: Digital transparency in the public sector - case study Czech Republic. E+M Ekonomie a Management 20(4), 236–250 (2017). https://doi.org/10.15240/tul/001/2017-4-016
Mohelska, H., Sokolova, M.: Smart, connected products change a company's business strategy orientation. Appl. Econ. 48(47), 4502–4509 (2016). https://doi.org/10.1080/00036846.2016.1158924
Cerna, M., Svobodova, L.: Internet and social networks as the support for communication in the business environment - pilot study. In: Jedlicka, P., Maresova, P. (eds.) Hradec Economic Days, vol. 7(1) (2017)

Social Network Sites and Older Generation

Blanka Klimova

Department of Applied Linguistics, Faculty of Informatics and Management, University of Hradec Kralove, Rokitanskeho 62, 500 03 Hradec Kralove, Czech Republic
[email protected]

Abstract. Currently, there is an increase in the number of older generation groups. These demographic changes cause serious social and economic problems. Therefore, governments all over the world try to find strategies that would keep older people active as long as possible, both physically and mentally. One way that can contribute to this process is the use of social media such as social network sites. The purpose of this article is to discuss the use of social network sites (SNS), show the potential benefits of their use by elderly people, and indicate some of the constraints preventing their use by this group of people. The findings show that more and more people aged 65+ years are starting to use SNS. The most popular SNS among older individuals aged 65+ years are Facebook, Instagram, LinkedIn, and Twitter. The main advantages of their use for this group of people include alleviation of loneliness and social isolation; collecting and sharing information, photos, or experience; getting involved in policy making [20]; or improving their state of health. On the other hand, people aged 65+ years face difficulties when using SNS, such as technical inaccessibility, inappropriate interface design, a lack of training, or misuse of personal data. Finally, there is an urgent need to adapt these SNS to their needs, both technical and content-related, in order to enhance their motivation towards the active use of SNS.

Keywords: Social media · Social network sites · Older people · Benefits · Constraints

1 Introduction

Nowadays, there is a growing number of older generation groups worldwide. In 2000, the proportion of people aged 65+ in the world reached 12.4%, and this number is expected to grow to 19% by 2030 [1]. In developed countries, older adults form 24% of the population, and this share should rise to 33% by 2050 [2]. In Europe, the population group aged 65+ represents 18% of the 503 million Europeans, and this share should almost double by 2060 [3]. These demographic changes cause serious social and economic problems. Therefore, governments all over the world try to find strategies that would keep older people active as long as possible, both physically and mentally. One way that can contribute to this process is the use of social media [4, 5].


Social media can be defined as a wide range of Internet-based and mobile services that enable the user to take part in online exchanges, contribute user-created content, or join online communities [6]. The purpose of this article is to discuss the use of one type of these Internet-based services by older people, namely social network sites (SNS): to show the potential benefits of their use by elderly people, as well as to indicate some of the constraints preventing their use by this group of people.

2 Methods

The author searched for available studies on this topic in the Web of Science and Scopus databases, as well as in Google+ and Google Scholar. She then analyzed and evaluated the findings in order to compare the research studies detected on the basis of the following keywords: social media AND older people, social media AND elderly, social network sites AND older people, social network sites AND elderly. Altogether, over 2,400 articles were retrieved from the databases. The earliest appeared in 1995; however, most of the articles on the research issue started to be published after 2010. The topic of the majority of studies relates to health monitoring and assisted living technologies.

3 Findings and Discussion

SNS are web-based services that allow individuals to construct a public or semi-public profile within a bounded system; articulate a list of other users with whom they share a connection; and view and traverse their list of connections and those made by others within the system [7]. Although the main users of SNS are young people aged 12+ years, the older generation is also becoming more and more interested in their use. For instance, in 2016, in the Netherlands, 39% of older individuals aged 65+ years were using them [8]. In the USA, the share of older people using SNS reached 34%; in 2018, it is already 37% [9]. In the UK, the share of older users engaged in the use of SNS was smaller, about 23% [10]. In addition, Madden [11] points out that social media use is more prevalent among those older individuals who have high-speed connections at home. The most popular SNS among older individuals aged 65+ years are Facebook, Instagram, LinkedIn, and Twitter [9]. Figure 1 below compares their use with that of younger generations. As Fig. 1 shows, older people are the least active group in the use of SNS; they especially use Facebook (40%), in comparison with the other three SNS, whose use is about four times lower. One can also see that the age groups of 39–64 use LinkedIn more than Twitter, which is connected with their professional interests, while the youngest age group, still at school, uses it the least; their second most popular social network site is Twitter.


Fig. 1. An illustration of the number of users of SNS, divided according to age group (author's own processing, based on [9])

Older people aged 65+ years usually use social media for searching for information, sharing their experiences and getting in touch with their friends and family [11]. Undoubtedly, SNS are a place which can reduce the isolation and loneliness of these people. SNS can thus contribute to the reduction of social disengagement, which is a risk factor for cognitive impairment among elderly people [12]. In fact, socializing via social media strengthens older adults' social networks by enriching and complementing traditional social engagements such as those conducted over the phone or in person [13]. Apart from loneliness and isolation, SNS also help to alleviate stress and feelings of anxiety and, on the contrary, raise feelings of control and self-efficacy [14]. Generally, the main reasons why older people use SNS can be summarized as follows:

• reconnecting (people try to find someone from their past and, as they retire, to stay in touch and get support),
• collecting and sharing information about diseases (aging brings diseases, and people want to know more about them as well as to discuss their disease with someone who suffers from it, too),
• bridging the generation gap (this is especially true for grandparents who would like to stay in contact with their grandchildren), and
• gaming (older people like to entertain themselves and get involved in social games online) [15, 16].

Nevertheless, the age group 65+ years differs in the use of SNS in one more aspect, which is related to their interest in policy issues. For instance, Kang et al. [17] report that older individuals prefer to share more serious information on SNS than the younger generation, whose use of SNS is mainly connected with the element of fun and


entertainment. Furthermore, Trentham et al. [18] present in their study that older people are interested in important policy decisions. These decisions often require online participation, in which elderly people are starting to get involved. However, there are still several constraints older people have to face when using SNS. These drawbacks can be divided into three aspects [19]:

• interface design – SNS should represent an elderly-friendly environment, distinguished by larger button sizes and clearer identification, reduced sequences of web-page operations, a simpler and clearer hierarchy, and distinguishable colors,
• accessible assistance – this should be ensured by a parent-child account arrangement through which a younger family member could help the older family member with his/her registration, as well as with the upload of videos or photos,
• older people's adaptation and cultural background – older people prefer polite and explicit language in communication, in comparison with younger people, as well as easy search functions to locate and join topic groups of interest and share their experience with them.

In addition, older individuals should be made aware of possible threats of SNS, such as harmful behavior of other users or misuse of personal data with criminal intent [14]. Figure 2 below summarizes the key benefits and limitations of the use of SNS by elderly people.

Fig. 2. An overview of the benefits and limitations of the use of SNS by elderly people (author's own processing)


4 Conclusion

Generally, it seems that, thanks to a generation which was already exposed to the use of the Internet, more and more older people are starting to use social media such as SNS. Obviously, one of the reasons is awareness of the benefits SNS can provide them with. The most popular SNS among older individuals aged 65+ years are Facebook, Instagram, LinkedIn, and Twitter. The main advantages of their use for this group of people include alleviation of loneliness and social isolation; collecting and sharing information, photos, or experience; getting involved in policy making [20]; or improving their state of health. On the other hand, people aged 65+ years face difficulties when using SNS, such as technical inaccessibility, inappropriate interface design, a lack of training, or misuse of personal data. Finally, there is an urgent need to adapt these SNS to their needs, both technical and content-related, in order to enhance their motivation towards the active use of SNS.

Acknowledgments. This study is supported by the SPEV project 2104/2018, run at the Faculty of Informatics and Management, University of Hradec Kralove, Czech Republic. The author thanks Josef Toman for his help with the data collection.

References

1. Vafa, K.: Census bureau releases demographic estimates and projections for countries of the world (2016). http://blogs.census.gov/2012/06/27/census-bureau-releases-demographic-estimates-and-projections-for-countries-of-the-world/
2. World Population Ageing 2013. UN, New York (2013)
3. Petterson, I.: Growing Older: Tourism and Leisure Behaviour of Older Adults. Cabi, Cambridge (2006)
4. Klimova, B., Simonova, I., Poulova, P., Truhlarova, Z., Kuca, K.: Older people and their attitude to the use of information and communication technologies – a review study with special focus on the Czech Republic (older people and their attitude to ICT). Educ. Gerontol. 42(5), 361–369 (2016)
5. Klimova, B., Valis, M.: Smartphone applications can serve as effective cognitive training tools in healthy aging. Front. Aging Neurosci. 9, 436 (2018)
6. Dewing, M.: Social media: an introduction. https://bdp.parl.ca/content/lop/researchpublications/2010-03-e.pdf
7. Boyd, D.M., Ellison, N.B.: Social network sites: definition, history and scholarship. J. Comput. Mediat. Commun. 13(1), 210–230 (2008)
8. Older Generation Catching up on Social Media (2017). https://www.cbs.nl/en-gb/news/2017/26/older-generation-catching-up-on-social-media
9. Pew Research Center: Social media fact sheet (2018). http://www.pewinternet.org/fact-sheet/social-media/
10. Foster, P.: One in four over-65s use social media, after massive rise in 'Instagrans'. https://www.telegraph.co.uk/news/2016/08/04/one-in-four-over-65s-use-social-media-after-massive-rise-in-inst/
11. Madden, M.: Older adults and social media (2010). http://www.pewinternet.org/2010/08/27/older-adults-and-social-media/


12. Bassuk, S.S., Glass, T.A., Berkman, L.F.: Social disengagement and incident cognitive decline in community-dwelling elderly persons. Ann. Int. Med. 131(3), 165–173 (1999)
13. Cornejo, R., Tentori, M., Favela, J.: Enriching in-person encounters through social media: a study on family connectedness for the elderly. Int. J. Hum. Comput. Stud. 71(9), 889–899 (2013)
14. Leist, A.K.: Social media use of older adults: a mini-review. Gerontology 59(4), 378–384 (2013)
15. Anderson, M., Perrin, A.: Tech adoption climbs among older adults (2017). http://www.pewinternet.org/2017/05/17/technology-use-among-seniors/
16. PCWorld Staff: 4 reasons why older people are on social networks now (2010). https://www.pcworld.com/article/204330/4_Reasons_Why_Older_People_Are_on_Social_Networks_Now.html
17. Kang, J., Lee, S., Lee, I., Kim, J.: Social network sites for older adults: online user experience for Korean seniors. User Exp. Mag. 9(2) (2010). http://uxpamagazine.org/social_network_older_adults/
18. Trentham, B., Sokoloff, S., Tsang, A., Neysmith, S.: Social media and senior citizen advocacy: an inclusive tool to resist ageism? Polit. Groups Identities 3(3), 558–571 (2015)
19. Chou, W.H., Lai, Y.T., Liu, K.H.: User requirements of social media for the elderly: a case study in Taiwan. Behav. Inf. Technol. 32(9), 920–937 (2013)
20. Pikhart, M.: Managerial communication and its changes in the global intercultural business world. In: SHS Web of Conferences, vol. 37, p. 01013 (2017)

Development of a Repository of Virtual 3D Conversational Gestures and Expressions

Izidor Mlakar1, Zdravko Kačič1, Matej Borko2, Aleksandra Zögling1, and Matej Rojc1

1 Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška Cesta 46, 2000 Maribor, Slovenia
{izidor.mlakar,zdravko.kacic,aleksandra.zoegling,matej.rojc}@um.si
2 A1 Slovenia D.D., Šmartinska 134b, 1000 Ljubljana, Slovenia
[email protected]

Abstract. This paper outlines a novel framework that has been designed to create a repository of "gestures" for embodied conversational agents. By utilizing it, virtual agents can sculpt conversational expressions incorporating both verbal and non-verbal cues. The 3D representations of gestures are captured in the EVA Corpus and then stored as a repository of motor skills in the form of expressively tunable templates.

Keywords: 3D gestures · Motor skills · Embodied conversational agents · Animation · Virtual reality · Multimodal interaction



1 Introduction

Gesticulation and the articulation of information over co-verbal signals play an important role in human-human interaction. The co-verbal signals conveyed together with spoken content, or even in its absence, are essential for establishing discourse cohesion [1, 2]. The verbal parts (language, grammar, linguistic, and paralinguistic features) carry the symbolic/semantic interpretation of the message, while the co-verbal parts serve as an orchestrator of communication [1, 3, 4]. The co-verbal signals also actively contribute to the presentation and understanding of information. Thus, in addition to the semantic coherence and communicative relationship, the co-verbal signals further clarify, re-enforce, or even replace the information provided by the verbal counterparts [5–7]. In human-machine interaction these co-verbal signals are represented by gestures and have become one of the key research topics in human-machine interaction (HMI) and conversational interfaces (CIs) [8]. In terms of the personalization and personification of everyday scenarios, gestures play a crucial role [9, 10]. In synthesizing gestures, i.e. natural, contextually relevant and speech-synchronized co-verbal signals, the following two main challenges exist. The first one (symbolic alignment) is related to the contextual (symbolic) alignment of the body movements and facial expressions with speech and situational context, i.e. determining what concepts a character should perform under the given context (linguistic and paralinguistic information, intent, dialog function, etc.). The symbolic alignment is in


general implemented as a rule-based [11, 12] or statistically oriented [13–15] conversational behavior generation system. The second challenge is then related to the physical realization (animation) of the symbolically aligned concepts into a conversational expression. The most relevant here are prosody-/data-driven approaches (such as [16–18]). The main drawback of the prosody-/data-driven approaches is that they operate on a small set of signals related mostly to the speech signal (e.g. pitch and prosody); no other contextual feature is used in the interpretation. As a result, the body movement of the agent may appear quite random and is very unlikely to match the meaning/intent conveyed via the verbal channel. More 'accurate' are those systems utilizing gesture templates (e.g. procedural/physical animation) [19–21]. These can actually align co-verbal signals with the momentary context, as well as with the context in the planned near future. Thus, they are far more likely to reproduce well-aligned, close-to-human co-verbal expressions, adequately representing intent and thought through speech and gestures.

2 Related Works

The main drawback of the "template"-based systems is that the resources for the 3D animated characters have to be created. This is a very labor-intensive task, which requires a lot of time and significant artistic skills and expertise in 3D modeling [22]. Much of the complexity associated with the creation of the 3D resources lies in the "posing" step executed in some CAD environment. Further, the animators must also master many modeling and animation techniques quite specific to the targeted CAD tool (e.g. polygonal modeling, modeling with NURBS, UV skinning, forward and inverse kinematics, rigging, etc.). Another option would be to utilize performance-driven animation, which captures an actor's physical performance and interactively transfers it to the virtual character [18]. However, the required equipment and expertise make this technique suitable for professional animators and less suitable when only limited design is required. The mapping between the performer's and the character's motion is also very complex and requires sophisticated configuration steps and automatic retargeting [22]. Finally, if the available 3D resources (motor skills) are to appear viable, they must mimic expressions and movements originating from non-laboratory, everyday situations [23], i.e. from real-life situations integrating spontaneous behavior. In this paper we present a framework for the rapid design and development of the resources used by embodied conversational agents. It targets the recreation of conversational expressions via a procedural animation technique. The 3D resources are designed based on Daz People [24]. The designed 3D resources also imitate conversational contexts captured in an informal corpus, named EVA Corpus [25]. To sum up, the proposed resource-creation phase is based on CAD modeling. The observed conversational expressions are applied to the available DOFs of the articulated skeleton of the ECA (e.g. Genesis G3 and G8 [24]). The 3D templates are modelled in DAZ Studio and stored in COLLADA format. In order to be used within the EVA framework, they are also transformed into EVA templates [21].


3 Creating 3D Shapes for Embodied Conversational Agents

To create 3D resources for the synthesis of conversational behavior, the data captured by the form-oriented part of the EVA Corpus is recreated on a conversational agent. People are keen observers of hand motion and body language, and are able to detect even small synchronization mismatches between body language and speech [26, 27]. If similar realism is to be achieved via 3D modeling techniques, the degree of knowledge in 3D modeling and animation required is generally overwhelming for a casual user; thus, such modeling has historically been limited to professional users [28]. Modern game engines, however, may be utilized for the animation, and simplified CAD tools with pre-sculpted resources may be utilized to ensure the realism of the appearance of the agents and the scene [29]. Figure 1 outlines the proposed approach, which fuses game engines and CAD environments into a simple and powerful framework for designing and animating conversational expressions.

Fig. 1. From a conversational artefact to a conversational resource for ECA EVA.

For the CAD environment, we have decided to use DaZ Studio, since it is easy to handle and, for the purpose of template generation, it does not require any special knowledge from the various domains of 3D animation, 3D design, and/or illustration. DaZ Studio already offers a wide variety of 3D models, assets, and items; some of them are free, while others are license-based. Further, the available resources are detailed enough to appear very realistic. Another beneficial feature of DaZ is that its "human" characters already contain all the DOFs, and the corresponding restrictions, required for animating shapes to appear as human as possible. In this way, through the available DOFs, as outlined in Fig. 1, even an unskilled animator can recreate believable replicas of conversational parameters. The animator selects a conversational concept from the EVA Corpus (designed and annotated in the ELAN tool), and then reassembles it in the DaZ environment. The animator configures the available DOFs of a DaZ humanoid (i.e. the 3D model). To further simplify and speed up the process, the


animator can also save newly created configurations (or parts of them, e.g. a hand shape). This allows subsequent configurations to be designed as derivatives of already existing configurations. Such a process is especially useful in the case of hand shapes, where conversational shapes appear to be quite similar and contain only slight modulations (of a finger, or of the overall shape, for instance). Finally, when finished, the animator exports the model in COLLADA format and names it by its abstract/symbolic notation (as defined in the EVA Corpus). For the utilization of resources in the EVA U-Realizer engine [30], a "converter" has been developed. It maps the complete DOF configurations (i.e. joints and morphed shapes) into expressive EVAScript templates compatible with G3 and G8 virtual characters. As outlined in Fig. 1, the interface for the final conversion is simple and straightforward: the animator imports the complete pose in COLLADA, and then selects the movement controllers he/she wants to capture from the hierarchy window of the Unity Editor. The overall process from conversational shape to conversational resource takes 10–30 min, depending on the complexity of the conversational concept being designed.
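The exact EVAScript template format is not reproduced in this excerpt; the following purely illustrative Python sketch (all names hypothetical) only conveys the kind of mapping the converter performs, from a DOF configuration captured in the CAD pose to an expressively tunable template keyed by its symbolic EVA Corpus notation:

```python
# Hypothetical data model (not the actual EVAScript format): a captured DOF
# configuration becomes a motor-skill template addressable by its EVA Corpus symbol.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class MotorSkillTemplate:
    symbol: str                                           # abstract/symbolic notation
    dofs: Dict[str, float] = field(default_factory=dict)  # joint/morph name -> value
    power: float = 1.0                                     # expressive dimensions that
    tempo: float = 1.0                                     # can be tuned online

def convert_pose(symbol: str, collada_joints: Dict[str, float]) -> MotorSkillTemplate:
    """Map selected joint rotations of an imported pose to a reusable template."""
    return MotorSkillTemplate(symbol=symbol, dofs=dict(collada_joints))

# Hypothetical joint names and angles, for illustration only.
print(convert_pose("rh_shape_open_palm", {"r_index_1": 12.5, "r_thumb_1": 8.0}))
```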

4 Discussion

In this paper, we have discussed a framework devised to transform conversational concepts (captured by and maintained in the EVA Corpus) into actual 3D resources. Virtual agents can utilize them to visualize more natural conversational responses. The resources are integrated in the form of expressively tunable templates, which can be further tuned online by considering spatial, power and temporal dimensions, as well as fluidity and repetition. The approach and the framework enable even unskilled animators to generate highly diverse sets of realistic expressions. Namely, the DaZ-Unity interlink simplifies the modelling (re-creation) of 3D resources via CAD tools: (a) by utilizing DaZ Studio (and its resources), (b) by restricting the design process performed by the animator to DOF control only, and (c) by utilizing a repository of motor skills and a simplified mapping between a 3D resource in the CAD environment and a 3D resource in the repository. Moreover, through the repository of motor skills, the conversational artefacts (e.g. left and right hand shapes, left and right arm positions, head pose, and facial expression) are isolated and can be freely combined to form various conversational expressions. Finally, the basis for the description of motor skills is agent-independent; thus any agent based on DaZ People (G1–G8) can utilize the same repository. To sum up, the proposed framework is not only exciting but also an important step towards the generation of more realistic repositories of conversational artefacts that a virtual agent is capable of utilizing. It represents important functionality towards generating more natural and human-like companions and machine-generated responses, especially in terms of diversity. So far we have generated and integrated over 800 EVA templates: shapes for the left and right hand as well as the left and right arm. These are being opened to a wider audience ('DAZ format' + lower levels of conversational context in ELAN). The content is intended for non-commercial use and is available upon request.


Acknowledgments. This work is partially funded by the European Regional Development Fund and the Ministry of Education, Science and Sport of Slovenia; project SAIAL. This work is partially funded by the European Regional Development Fund and Republic of Slovenia; project IQHOME.

References

1. McNeill, D.: Why We Gesture: The Surprising Role of Hand Movements in Communication. Cambridge University Press, Cambridge (2015)
2. Debreslioska, S., Gullberg, M.: Discourse reference is bimodal: how information status in speech interacts with presence and viewpoint of gestures. Discourse Process. 56(1), 41–60 (2017)
3. Kopp, S., Bergmann, K.: Using cognitive models to understand multimodal processes: the case for speech and gesture production. In: The Handbook of Multimodal-Multisensor Interfaces, pp. 239–276. Association for Computing Machinery and Morgan & Claypool, New York (2017)
4. Bonsignori, V., Camiciottoli, B.C. (eds.): Multimodality Across Communicative Settings, Discourse Domains and Genres. Cambridge Scholars Publishing, Newcastle (2017)
5. Kendon, A.: Pragmatic functions of gestures. Gesture 16(2), 157–175 (2017)
6. Colletta, J.M., Guidetti, M., Capirci, O., Cristilli, C., Demir, O.E., Kunene-Nicolas, R.N., Levine, S.: Effects of age and language on co-speech gesture production: an investigation of French, American, and Italian children's narratives. J. Child Lang. 42(1), 122–145 (2015)
7. Esposito, A., Vassallo, J., Esposito, A.M., Bourbakis, N.: On the amount of semantic information conveyed by gestures. In: 2015 IEEE 27th International Conference on Tools with Artificial Intelligence (ICTAI), pp. 660–667. IEEE (2015)
8. Venkatesh, A., Khatri, C., Ram, A., Guo, F., Gabriel, R., Nagar, A., et al.: On evaluating and comparing conversational agents. CoRR, arXiv:1801.03625 (2018)
9. Graesser, A.C., Cai, Z., Morgan, B., Wang, L.: Assessment with computer agents that engage in conversational dialogues and trialogues with learners. Comput. Hum. Behav. 76, 607–616 (2017)
10. Ciechanowski, L., Przegalinska, A., Magnuski, M., Gloor, P.: In the shades of the uncanny valley: an experimental study of human-chatbot interaction. Future Gener. Comput. Syst. 92, 539–548 (2018)
11. Lhommet, M., Marsella, S.C.: Gesture with meaning. In: International Workshop on Intelligent Virtual Agents, pp. 303–312. Springer, Heidelberg (2013)
12. Fernández-Baena, A., Montaño, R., Antonijoan, M., Roversi, A., Miralles, D., Alías, F.: Gesture synthesis adapted to speech emphasis. Speech Commun. 57, 331–350 (2014)
13. Kipp, M., Heloir, A., Schröder, M., Gebhard, P.: Realizing multimodal behavior. In: International Conference on Intelligent Virtual Agents, pp. 57–63. Springer, Heidelberg (2010)
14. Bozkurt, E., Erzin, E., Yemez, Y.: Affect-expressive hand gestures synthesis and animation. In: 2015 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6. IEEE (2015)
15. Rojc, M., Mlakar, I., Kačič, Z.: The TTS-driven affective embodied conversational agent EVA, based on a novel conversational-behavior generation algorithm. Eng. Appl. Artif. Intell. 57, 80–104 (2017)
16. Bozkurt, E., Yemez, Y., Erzin, E.: Multimodal analysis of speech and arm motion for prosody-driven synthesis of beat gestures. Speech Commun. 85, 29–42 (2016)


17. Sadoughi, N., Busso, C.: Head motion generation with synthetic speech: a data driven approach. In: Interspeech, pp. 52–56 (2016)
18. Vogt, D., Grehl, S., Berger, E., Amor, H.B., Jung, B.: A data-driven method for real-time character animation in human-agent interaction. In: International Conference on Intelligent Virtual Agents, pp. 463–476. Springer, Heidelberg (2014)
19. Heloir, A., Kipp, M.: Real-time animation of interactive agents: specification and realization. Appl. Artif. Intell. 24(6), 510–529 (2010)
20. Neff, M., Pelachaud, C.: Animation of natural virtual characters. IEEE Comput. Graph. Appl. 37(4), 14–16 (2017)
21. Rojc, M., Mlakar, I.: An Expressive Conversational-Behavior Generation Model for Advanced Interaction Within Multimodal User Interfaces (Computer Science, Technology and Applications). Nova Science Publishers Inc., New York (2016)
22. Lamberti, F., Paravati, G., Gatteschi, V., Cannavo, A., Montuschi, P.: Virtual character animation based on affordable motion capture and reconfigurable tangible interfaces. IEEE Trans. Visual. Comput. Graph. 24(5), 1742–1755 (2018)
23. Pelachaud, C.: Greta: an interactive expressive embodied conversational agent. In: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, p. 5. ACM (2015)
24. Daz People: https://www.daz3d.com/people-and-wearables
25. Mlakar, I., Kačič, Z., Rojc, M.: A corpus for investigating the multimodal nature of multi-speaker spontaneous conversations - EVA Corpus. WSEAS Trans. Inf. Sci. Appl. 14, 213–226 (2017)
26. Wheatland, N., Wang, Y., Song, H., Neff, M., Zordan, V., Jörg, S.: State of the art in hand and finger modeling and animation. Comput. Graphics Forum 34(2), 735–760 (2015)
27. Etemad, S.A., Arya, A., Parush, A., DiPaola, S.: Perceptual validity in animation of human motion. Comput. Anim. Virtual Worlds 27(1), 58–71 (2016)
28. Paczkowski, P., Dorsey, J., Rushmeier, H., Kim, M.H.: PaperCraft3D: paper-based 3D modeling and scene fabrication. IEEE Trans. Visual. Comput. Graph. 25(4), 1717–1731 (2018)
29. Akinjala, T.B., Agada, R., Yan, J.: Animating human movement & gestures on an agent using Microsoft Kinect. In: 2016 IEEE International Symposium on Multimedia (ISM), pp. 369–374. IEEE (2016)
30. Mlakar, I., Kačič, Z., Borko, M., Rojc, M.: A novel realizer of conversational behavior for affective and personalized human machine interaction - EVA U-Realizer. WSEAS Trans. Environ. Dev. 14, 87–101 (2018)

Permutation Codes, Hamming Graphs and Turán Graphs

János Barta and Roberto Montemanni

Dalle Molle Institute for Artificial Intelligence, IDSIA - USI/SUPSI, 6928 Manno, Switzerland
{janos.barta,roberto.montemanni}@supsi.ch
http://www.idsia.ch

Abstract. This paper investigates the properties of permutation Hamming graphs, a class of graphs in which the vertices are the permutations of n symbols and the edges connect pairs of vertices at a Hamming distance greater than or equal to a value d. Despite a remarkable regularity, permutation Hamming graphs elude general formulas for relevant indicators like the clique number. The clique number plays a crucial role in the Maximum Permutation Code Problem (MPCP), a well-known optimization problem. This work focuses on the relationship between permutation Hamming graphs and a particular type of Turán graphs. The main result is a theorem asserting that permutation Hamming graphs are the intersection of a set of Turán graphs. This equivalence has implications for the MPCP. In fact, it enables a reformulation as a hitting set problem, which in turn can be translated into a binary integer program.

Keywords: Graph theory · Combinatorics · Hamming graphs · Turán graphs

1 Introduction

A set of codewords, such that each codeword is a permutation of n given symbols, is known as a permutation code. Recently, this particular class of codes has found interesting technical applications in domains like power line communication [6, 7, 12, 15], the design of multilevel flash memories [13] and coding problems with block ciphers [8]. The basic mathematical problem about permutation codes is how to build the largest possible code in such a way that the Hamming distance between any pair of codewords is greater than or equal to a given threshold d. This problem is usually referred to as the Maximum Permutation Code Problem. The interest in the MPCP lies in the fact that a permutation code that satisfies a minimum Hamming distance d = 2e + 1 is able to correct up to e transmission errors in each codeword. The MPCP has been extensively investigated with various mathematical approaches, such as mixed-integer linear programming [4, 17], algebraic methods [3, 6, 9, 11], branch and bound algorithms [1], heuristic algorithms [14, 16] and also graph theoretical methods [2, 10].


It is well known that the MPCP can always be transformed into an equivalent maximum clique problem on a particular graph, in which the vertices correspond to the permutation codewords and the edges connect pairs of codewords with a Hamming distance greater than or equal to the threshold d. The paper [2] was entirely devoted to the study of the fundamental properties of this family of graphs, called permutation Hamming graphs. In particular, exact formulas for the degree of the vertices and for the number of edges were developed. Furthermore, it was shown that permutation Hamming graphs are vertex-transitive, r-partite subgraphs of a regular Turán graph defined on the same vertex set. This means that they form a family of highly symmetric graphs, having a very large automorphism group. However, some crucial measures of permutation Hamming graphs, such as the clique number and the independence number, are still hard to compute, and no general formula is known so far (see also [5]). The main scope of this work is to deepen the relationship between Hamming and Turán graphs, in order to characterize Hamming graphs in a more precise way. In Sect. 2 permutation Hamming graphs are introduced and their relationship to the MPCP is highlighted. In Sect. 3 Turán graphs are defined and their principal properties are presented. The core of the paper is Theorem 1 in Sect. 4, stating that permutation Hamming graphs are actually the intersection of a particular set of Turán graphs. Finally, in Sect. 5 Theorem 1 is applied to obtain a reformulation of the MPCP as a hitting set problem. This new form of the problem leads to a surprisingly simple binary integer linear program, which might be a suitable starting point for a computational approach.

2 Permutation Codes and Hamming Graphs

Let Ωn be the set of all possible permutations of the n-tuple of integers x0 = [0, 1, . . . , n − 1]. In the sequel we will refer to the elements of Ωn as codewords of length n and to the subsets of Ωn as permutation codes. dH(x, y) denotes the Hamming distance between two codewords x and y. The MPCP is the problem of maximizing the number of codewords in a permutation code C in such a way that dH(x, y) ≥ d, ∀x, y ∈ C. We will refer to this problem as an (n, d)-problem. Each (n, d)-problem induces a Hamming graph H(n, d) defined as follows.

Definition 1. Let H(n, d) be the graph defined on the vertex set VH = Ωn, with the set of edges EH, such that {x, y} ∈ EH ⇔ dH(x, y) ≥ d. The graph H(n, d) is the permutation Hamming graph of size n and minimum distance d.

As already mentioned above, the MPCP is equivalent to the maximum clique problem of the Hamming graph H(n, d), because any clique K of H(n, d) is a complete subgraph and therefore a feasible permutation code, since it satisfies the minimum distance constraint dH(x, y) ≥ d, ∀x, y ∈ K. We recall that the clique number ω(G) of a graph G is the size of its largest clique and the independence number α(G) is the size of the largest subset of independent vertices.
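Definition 1 translates directly into a brute-force construction. The following minimal Python sketch (function names are ours, not from [2]) builds H(n, d) as an explicit edge list; it is feasible only for small n, since |VH| = n!.

```python
# A small sketch of Definition 1: build the permutation Hamming graph H(n, d).
from itertools import permutations

def hamming(x, y):
    """Hamming distance: number of positions in which codewords x and y differ."""
    return sum(1 for a, b in zip(x, y) if a != b)

def hamming_graph(n, d):
    """Vertices are all n! codewords; edges join pairs at distance >= d."""
    vertices = list(permutations(range(n)))
    edges = [(x, y) for i, x in enumerate(vertices)
             for y in vertices[i + 1:] if hamming(x, y) >= d]
    return vertices, edges

# Example: any clique of H(4, 3) is a feasible (4, 3) permutation code.
V, E = hamming_graph(4, 3)
print(len(V), len(E))  # 24 vertices, 204 edges
```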


In [2] general formulae for the degree of the vertices of H(n, d) and for its number of edges were deduced:

$$\deg(x) = \sum_{t=d}^{n} \frac{n!}{(n-t)!} \sum_{k=0}^{t} \frac{(-1)^k}{k!}, \qquad \forall x \in V_H \qquad (1)$$

and

$$|E_H| = \frac{n!}{2} \sum_{t=d}^{n} \frac{n!}{(n-t)!} \sum_{k=0}^{t} \frac{(-1)^k}{k!}. \qquad (2)$$
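As a sanity check, the closed formula (1) can be compared with the brute-force degree obtained from the sketch above (the helper below is ours):

```python
# Numerical check of formula (1), reusing hamming_graph() from the sketch above.
from math import factorial

def degree_formula(n, d):
    """Right-hand side of (1): the common degree of every vertex of H(n, d)."""
    total = 0.0
    for t in range(d, n + 1):
        inner = sum((-1) ** k / factorial(k) for k in range(t + 1))
        total += factorial(n) / factorial(n - t) * inner
    return round(total)

V, E = hamming_graph(4, 3)
deg = sum(1 for e in E if V[0] in e)      # brute-force degree of one vertex
assert deg == degree_formula(4, 3) == 17  # H(4, 3) is 17-regular
assert len(E) == factorial(4) * 17 // 2   # consistent with formula (2)
```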

The fact that the vertex degree is constant means that H(n, d) is regular. As shown in [2], Hamming graphs feature an even stronger form of regularity, namely vertex transitivity. A graph G(V, E) is vertex-transitive iff for any pair of vertices x and y there exists an automorphism ϕ : V → V which maps x to y. However, there is another property of Hamming graphs which relates them directly to Turán graphs: r-partiteness. This means that the vertex set Ωn can be partitioned into a given number of subsets, in such a way that each subset forms a class of independent vertices of H(n, d). We recall that two nodes of a graph are called independent iff they are not connected by an edge. Such a partition can be easily obtained by assigning the codewords sharing the first n − d + 1 components to the same class. Indeed, if x and y share n − d + 1 components, they can differ in at most d − 1 positions, which means that dH(x, y) < d and thus x and y are independent. Due to the fact that only d − 1 components are variable, each independence class is composed of (d − 1)! codewords, and therefore the number of classes of the partition is r = n!/(d − 1)! (for more details see [2]). In Sect. 4 a generalisation of this partitioning procedure leads to the relation between Hamming and Turán graphs stated in Theorem 1.

3 Turán Graphs

One fundamental problem in extremal graph theory is how many edges a graph on n vertices can maximally have, such that it does not contain a complete subgraph K_{r+1}. The well-known Turán's theorem states that the solution of this extremal problem is a particular r-partite graph, the so-called Turán graph.

Definition 2. The Turán graph Tr(n) is the complete r-partite graph on n ≥ r vertices whose partition sets differ in size by at most 1.

The Turán graph Tr(n) has r classes of independent vertices, and any pair of vertices belonging to different classes is connected by an edge. The independence classes can have two different sizes: the size of the larger classes is ⌈n/r⌉ and the size of the smaller classes is ⌊n/r⌋. When the number of classes divides the number of vertices, the class size assumes the constant value n/r. Such regular Turán graphs will be the object of our study in Sect. 4. Contrary to Hamming graphs, Turán graphs have simple formulae for the clique number and the independence number.


Proposition 1. Let Tr(n) be the r-partite Turán graph on n vertices. Then ω(T) = r and α(T) = ⌈n/r⌉.

Proof. In order to construct a complete subgraph Kk of Tr(n), each of its k vertices must belong to a different independence class. Therefore the maximum clique of Tr(n) is a Kr obtained by picking an arbitrary vertex from each class. On the other hand, independent nodes must belong to the same class; thus the independence number α(T) corresponds to the size of the largest class, ⌈n/r⌉.

4 Hamming versus Turán Graphs

The fact that permutation Hamming graphs turn out to be quite similar to Turán graphs, but lack some of their desirable properties, induced us to study more in depth the relationship between the two graph families. In our past work on permutation codes we often used the technique of partitioning the complete set of codewords Ωn by fixing the values of certain components of the codewords. In order to generalize this approach we need an appropriate formalism for characterizing subsets of Ωn.

Definition 3. Let P = {p1, . . . , pk}, with k ∈ {1, . . . , n}, be a set of positions in the codewords of Ωn, such that ∀i, pi ∈ {0, . . . , n − 1} and p1 < . . . < pk. Furthermore, let V = {v1, . . . , vk} be a set of values, such that ∀i, vi ∈ {0, . . . , n − 1} and the values vi are pairwise distinct. We denote by C(P, V) the class of all codewords in Ωn having the values V at the positions P, i.e.

$$C(P, V) = \{x \in \Omega_n \mid x(p_i) = v_i, \; p_i \in P, \; v_i \in V, \; \forall i\}. \qquad (3)$$

As an example, the class in Ω4 with the values V = {3, 0} at the positions P = {2, 3} is the set of all codewords with the values 3, 0 at the last two positions, that is C(P, V) = {[1230], [2130]}. In order to define partitions of Ωn, it is useful to denote the set of all possible position sets with k elements by Pk and the set of all value sets by Vk. It is interesting to note that any position set P ∈ Pk induces a specific partition of Ωn. In fact, the codewords of Ωn can be partitioned into classes, depending on their values at the fixed positions P.

Definition 4. Let P ∈ Pk (with k ≤ n) be a fixed position set. We define the partition of Ωn induced by P as the set of classes S(P) = {C(P, V) | V ∈ Vk}.

For instance, the position set P = {0} induces a partition of Ω3 based on the value of the first component: S(P) = {{[012], [021]}, {[102], [120]}, {[201], [210]}}; a small computational sketch of this construction is given below. The construction of a Turán graph requires a specific partition of the vertex set. Now we define Turán graphs on the same partitions that allowed us to prove the r-partiteness of H(n, d) (see Sect. 2). Namely, let k = n − d + 1 and P ∈ Pk, i.e. P is a set of n − d + 1 fixed positions and S(P) = {C(P, V) | V ∈ Vk} is the partition of Ωn based on the positions P. In other terms, codewords with the same values at the positions P belong to the same independence class C(P, V).
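The partitions of Definitions 3 and 4 are simple to enumerate; the sketch below (helper names are ours) groups the codewords of Ωn by their values at the positions in P and reproduces the example above:

```python
# Sketch of Definitions 3 and 4: the partition S(P) of Omega_n induced by P.
from itertools import permutations
from collections import defaultdict

def partition(n, P):
    """Map each value tuple V to its class C(P, V): codewords agreeing on P."""
    classes = defaultdict(list)
    for x in permutations(range(n)):
        classes[tuple(x[p] for p in P)].append(x)
    return classes

# P = {0} splits Omega_3 by the first component into 3 classes of 2 codewords each.
for V, C in sorted(partition(3, [0]).items()):
    print(V, C)
```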


Definition 5. Let S(P) be the partition of Ωn induced by the set of positions P ∈ Pn−d+1. The graph TP(n!) is the Turán graph defined on the vertex set Ωn, having the sets C(P, V) ∈ S(P) as classes of independent vertices.

As an example consider the case n = 3 and d = 3. The partition is obtained by fixing 1 component of the codewords. Let P = {2} be the position set. Then the 3 classes of the partition are {[120], [210]}, {[021], [201]} and {[012], [102]}, and the Turán graph TP(6) is the complete 3-partite graph on 6 vertices with the 3 sets of the partition as independence classes. It is worth remarking that the Turán graph TP(n!), just like the corresponding Hamming graph H(n, d), has n!/(d − 1)! independence classes of size (d − 1)! each. Furthermore, there is not just one Turán graph of this type, since each position set P ∈ Pn−d+1 induces a specific Turán graph TP(n!). So we end up with a collection of $\binom{n}{d-1}$ isomorphic Turán graphs. The following theorem establishes the basic relation between the Hamming graph H(n, d) and the Turán graphs TP(n!).

Theorem 1. The permutation Hamming graph H(n, d) is the intersection of all Turán graphs TP(n!), for P ∈ Pn−d+1, i.e.

$$H(n, d) = \bigcap_{P \in \mathcal{P}_{n-d+1}} T_P(n!). \qquad (4)$$

Proof. Consider the partition S(P) = {C(P, V) | V ∈ Vn−d+1} of Ωn induced by a fixed position set P ∈ Pn−d+1. The classes C(P, V) ∈ S(P) are disjoint independence sets of the Hamming graph H(n, d), since for any pair x, y ∈ C(P, V) it holds that dH(x, y) < d. This means that H(n, d) is r-partite with respect to the partition induced by P. Therefore any edge of H(n, d) belongs also to TP(n!) or, in other terms, H(n, d) is a subgraph of the intersection of the Turán graphs. It remains to prove that the inverse inclusion holds as well. Consider two vertices x and y which are not connected by an edge in the graph H(n, d), that is, the Hamming distance dH(x, y) is at most d − 1. Then there exists a position set P ∈ Pn−d+1 such that x and y coincide in the positions P. In other terms, x and y belong to the same independence class C(P, V), and therefore they are not connected in the Turán graph TP(n!), nor in the intersection of all Turán graphs. It follows that any edge of $\bigcap T_P(n!)$ is also an edge of H(n, d).

The interest of this theorem lies in the fact that permutation Hamming graphs, usually defined by imposing a metric condition, can also be constructed by means of adequate set partitions.
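Theorem 1 can also be verified by brute force for small instances. The check below reuses hamming_graph() from Sect. 2 and compares its edge set with the intersection of the edge sets of the Turán graphs TP(n!) (again a sketch, feasible only for tiny n):

```python
# Brute-force check of Theorem 1 for a small instance (n, d) = (4, 3).
from itertools import combinations, permutations

def turan_edges(n, P):
    """Edges of T_P(n!): pairs of codewords lying in different classes of S(P)."""
    verts = list(permutations(range(n)))
    key = lambda x: tuple(x[p] for p in P)
    return {frozenset((x, y)) for x, y in combinations(verts, 2)
            if key(x) != key(y)}

n, d = 4, 3
_, E = hamming_graph(n, d)
hamming_edges = {frozenset(e) for e in E}
turan_intersection = set.intersection(
    *(turan_edges(n, P) for P in combinations(range(n), n - d + 1)))
assert hamming_edges == turan_intersection  # equation (4) holds for (4, 3)
```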

5 A Hitting Set Problem Formulation

Our exploration started from an optimization problem about permutation codes, the MPCP, which was reformulated as a maximum clique problem on the Hamming graph H(n, d). Theorem 1 enables a further step: the maximum clique


problem can be transformed into an equivalent hitting set problem. In fact, consider the covering C = {C(P, V), ∀P ∈ Pn−d+1, V ∈ Vn−d+1} of Ωn. C is the collection of all independence classes of the Turán graphs TP(n!), where P ∈ Pn−d+1. Theorem 1 implies that each vertex of a clique Kk of H(n, d) must belong to a different class C(P, V) ∈ C. In other terms, we are looking for the largest set of codewords in Ωn such that each independence class C(P, V) is hit by at most one of them. From this perspective our problem turns out to be a variant of the so-called exact hitting set problem. The exact hitting set problem is a decision problem about the existence of a subset X ⊆ Ω such that each subset C of a cover C of Ω contains exactly one element of X. A subset with this property is called a transversal. However, our problem is about maximizing the hitting set, while the hitting constraint is relaxed to "at most one element". We denote this problem as a Maximum Hitting Set Problem (MHSP). It is interesting to observe that the MHSP leads naturally to a simple linear programming formulation. Let C = {C1, . . . , Ck} be the covering of Ωn with independence classes defined above. Each codeword can be represented by a binary variable xj with j = 1, . . . , |Ωn|:

$$x_j := \begin{cases} 1 & \text{if element } j \text{ is in the solution} \\ 0 & \text{otherwise.} \end{cases} \qquad (5)$$

Consider the binary matrix A = [a_{ij}], with i = 1, . . . , k and j = 1, . . . , |Ωn|:

$$a_{ij} := \begin{cases} 1 & \text{if element } j \text{ belongs to } C_i \\ 0 & \text{otherwise.} \end{cases} \qquad (6)$$

The MHSP for permutation codes can be formulated as follows:

$$\max \sum_{j=1}^{|\Omega_n|} x_j \qquad (7)$$

$$\text{s.t.} \quad \sum_{j=1}^{|\Omega_n|} a_{ij} x_j \le 1 \qquad \forall i \in \{1, \dots, k\}$$

$$x_j \in \{0, 1\} \qquad \forall j \in \{1, \dots, |\Omega_n|\}.$$

The formulation (7) is a binary integer linear program (BILP) with n! binary variables and k constraints, where each constraint ensures that at most one element is picked from the corresponding subset Ci. The number of constraints can be determined in the following way. As already seen, the number of classes of a single Turán graph TP(n!) is r = n!/(d − 1)!. On the other hand, the number of Turán graphs involved corresponds to the number of different position sets in Pn−d+1, which is $\binom{n}{d-1}$. Therefore the number of constraints is $k = \binom{n}{d-1}^2 (n - d + 1)!$. As an example, the BILP of the instance n = 6, d = 5 features only 720 variables and 450 constraints and could be solved in 180 s with the software


Gurobi on a computer equipped with an Intel Core i5 2.3 GHz processor and 8 GB of memory. The open problem n = 7, d = 5 requires 5040 variables and 7350 constraints, whereas the problem n = 7, d = 4 needs 29400 constraints. The exact solution of instances with n ≥ 7 will be an interesting challenge for future research.
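The BILP (7) is easy to state with an off-the-shelf modeler. The sketch below uses the open-source PuLP/CBC toolchain as an assumption (the authors report results with Gurobi) and reuses partition() from Sect. 4; for n = 4, d = 3 the optimum of 12 = 4!/2 codewords is attained, e.g., by the even permutations.

```python
# Sketch of the BILP (7) with PuLP/CBC, for a tiny instance.
from itertools import combinations, permutations
import pulp

def solve_mhsp(n, d):
    words = list(permutations(range(n)))
    index = {w: j for j, w in enumerate(words)}
    prob = pulp.LpProblem("MHSP", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{j}", cat="Binary") for j in range(len(words))]
    prob += pulp.lpSum(x)                        # objective (7): maximize hits
    for P in combinations(range(n), n - d + 1):  # one Turan graph per position set
        for C in partition(n, P).values():       # one constraint per class C_i
            prob += pulp.lpSum(x[index[w]] for w in C) <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [w for j, w in enumerate(words) if x[j].value() > 0.5]

code = solve_mhsp(4, 3)
print(len(code))  # 12, the known optimum n!/2 for d = 3
```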

6 Conclusions

This study was devoted to the investigation of the properties of permutation Hamming graphs. In particular, we proved that permutation Hamming graphs are equivalent to the intersection of a particular collection of Turán graphs. As a consequence, the MPCP could be reformulated as a covering problem, more precisely as a hitting set problem. The evidence that the MPCP is equivalent not only to a maximum clique problem, but also to a simple hitting set problem, opens new optimization perspectives for the MPCP and can be considered the main contribution of this work.

References

1. Barta, J., Montemanni, R., Smith, D.H.: A branch and bound approach to permutation codes. In: Proceedings of the IEEE 2nd International Conference of Information and Communication Technology - ICOICT, pp. 187–192 (2014)
2. Barta, J., Montemanni, R.: Hamming graphs and permutation codes. In: Proceedings of the IEEE 4th International Conference on Mathematics and Computers in Sciences and Industry - MCSI, pp. 154–158 (2017)
3. Bereg, S., Levy, A., Sudborough, I.H.: Constructing permutation arrays from groups. Des. Codes Cryptogr. 86(5), 1095–1111 (2018)
4. Bogaerts, M.: New upper bounds for the size of permutation codes via linear programming. Electron. J. Comb. 17(1), #R135 (2010)
5. Carraghan, R., Pardalos, P.M.: An exact algorithm for the maximum clique problem. Oper. Res. Lett. 9, 375–382 (1990)
6. Chu, W., Colbourn, C.J., Dukes, P.: Constructions for permutation codes in powerline communications. Des. Codes Cryptogr. 32, 51–64 (2004)
7. Colbourn, C.J., Kløve, T., Ling, A.C.H.: Permutation arrays for powerline communication and mutually orthogonal Latin squares. IEEE Trans. Inf. Theory 50, 1289–1291 (2004)
8. de la Torre, D.R., Colbourn, C.J., Ling, A.C.H.: An application of permutation arrays to block ciphers. In: Proceedings of the 31st Southeastern International Conference on Combinatorics, Graph Theory and Computing, vol. 145, pp. 5–7 (2000)
9. Deza, M., Vanstone, S.A.: Bounds for permutation arrays. J. Stat. Plan. Infer. 2, 197–209 (1978)
10. El Rouayheb, S., Georghiades, C.N.: Graph theoretic methods in coding theory. In: Classical, Semi-Classical and Quantum Noise. Springer (2012)
11. Frankl, P., Deza, M.: On maximal numbers of permutations with given maximal or minimal distance. J. Comb. Theory Ser. A 22, 352–360 (1977)


12. Han Vinck, A.J.: Coded modulation for power line communications. A.E.Ü. Int. J. Electron. Commun. 54(1), 45–49 (2000)
13. Jiang, A., Mateescu, R., Schwartz, M., Bruck, J.: Rank modulation for flash memories. In: Proceedings of the IEEE International Symposium on Information Theory, pp. 1731–1735 (2008)
14. Montemanni, R., Barta, J., Smith, D.H.: The design of permutation codes via a specialized maximum clique algorithm. In: Proceedings of the IEEE 2nd International Conference on Mathematics and Computers in Science and Industry - MCSI (2015)
15. Pavlidou, N., Han Vinck, A.J., Yazdani, J., Honary, B.: Power line communications: state of the art and future trends. IEEE Commun. Mag. 41(4), 34–40 (2003)
16. Smith, D.H., Montemanni, R.: A new table of permutation codes. Des. Codes Cryptogr. 63(2), 241–253 (2012)
17. Tarnanen, H.: Upper bounds on permutation codes via linear programming. Eur. J. Comb. 20, 101–114 (1999)

Visualization of Data-Flow Programs

Victor Kasyanov(&), Elena Kasyanova, and Timur Zolotuhin

Institute of Informatics Systems, 630090 Novosibirsk, Russia
[email protected]

Abstract. Visualization based on graph models is an inherent part of processing complex information about the structure of objects, systems and processes in many applications in science and technology. In this paper, we describe an efficient algorithm for the visualization of data-flow graphs and its implementation within the Visual Graph system for the visualization of arbitrary attributed hierarchical graphs with ports. Keywords: Attributed graph · Hierarchical graph · Graph with ports · Graph drawing · Visualization of data-flow program



1 Introduction Graph drawing is a useful way of presenting large and complex information in visual form, and visualization of graphs is used in many applications [1, 2]. Our Cloud Parallel Programming System (CPPS) is a visual programming system based on the Cloud Sisal language [3]. The Cloud Sisal language continues the tradition of previous versions of the Sisal language [4], remaining a functional data-flow programming language oriented to writing large scientific programs, and extends their capabilities with tools for supporting cloud computing. The CPPS system uses an internal representation of a source Cloud Sisal program as a so-called IR-graph, which is oriented to its semantic and visual processing. The vertices of the IR-graph correspond to the expressions of the Cloud Sisal program, and the arcs show the transmission of data between the vertex ports, the ordered sets of which are assigned to the vertices as their inputs (arguments) and outputs (results). The IR-graph can contain both simple and compound vertices. Every compound vertex corresponds to a complex expression of the Cloud Sisal program and is a subgraph whose vertices correspond to subexpressions of the complex expression and are contained in the compound vertex together with the arcs between them. Thus, IR-graphs are attributed hierarchical graphs [5] and, in contrast to the control flow graphs commonly used in optimizing compilers for imperative languages (such as C or Fortran), express data dependencies, with control left implicit. The main difficulties in visualizing IR-graphs stem from the fact that, in contrast to the standard problem of drawing graphs in the plane [1, 2], the vertices of a data-flow graph are connected by arcs through their different ports and can have different sizes depending on the sizes of the drawings of the graphs contained in these vertices.


In this paper, we describe an efficient algorithm for the visualization of IR-graphs and its implementation within the framework of the Visual Graph system [6] for the visualization of arbitrary attributed hierarchical graphs with ports.

2 IR-Graphs and Their Drawings

Vertices of an IR-graph denote operations on their inputs (arguments), the results of which appear at the outputs of the vertices. Vertices can be either simple or compound. Simple vertices denote operations (such as add or divide) or constants and have no internal structure. Compound vertices (or fragments) correspond to complex constructions of Cloud Sisal programs (such as a loop expression or a function) and contain ordered sets of vertices (or subfragments) corresponding to the constructions of which they consist. The number of subfragments may be either fixed (in loop and let vertices) or variable (in function and select vertices). Because of the properties of Cloud Sisal, this graph is an acyclic directed graph (DAG) and does not contain two arcs that enter the same input port. The following rules for drawing IR-graphs are assumed:

1. Simple vertices without inputs are represented as circles containing the representations of the constants.
2. Simple vertices with inputs are represented as rectangles with semicircular projections above, representing the vertex inputs in their order from left to right, and semicircular projections below, depicting the outputs of the vertex. Inside the rectangles are the representations of the corresponding operations.
3. Compound vertices are represented as rectangles with a rectangular ledge at the top left indicating the type of the compound vertex, and with circles at the top right and at the bottom showing the inputs and outputs of this vertex in their order from left to right. Each circle representing a port of the fragment consists of a semicircular projection outward and a semicircular projection inside the rectangle. Inside the rectangle that represents the compound vertex are the representations of all the vertices and all the arcs contained in it, and only those.
4. Representations of any two different vertices either do not intersect, or one of them lies entirely within the other. Arcs are represented as curves (splines) with arrows that connect the corresponding ports and do not intersect at their internal points with the representations of other arcs and vertices.

3 Algorithm

Below it is assumed that the input IR-graph is a DAG. The algorithm sequentially executes steps that construct images of the contents of the fragments of the input graph, beginning with the innermost one; at each step the drawing of a fragment is constructed using the sizes and relative locations of the elements of the subfragments that are directly enclosed in it.


The construction of the image of a fragment is based on the so-called hierarchical approach for creating layered drawings of DAGs, which was proposed by Sugiyama and consists of the following three main stages [1, 2]: (1) layer assignment, when vertices are assigned to horizontal layers and thus their y-coordinates are determined; (2) crossing reduction, when the orders of vertices within each layer are determined so as to reduce the number of arc crossings; (3) horizontal coordinate assignment, when an x-coordinate for each vertex is determined.

4 Layer Assignment

The task of this stage is to assign to each vertex its y-coordinate. For this, the source graph G = (V, E) must be reduced to a layered digraph, which is a partition of V into subsets L1, L2, …, Lh such that if (u, v) ∈ E, where u ∈ Li and v ∈ Lj, then i > j. It is assumed that all vertices of the same level are assigned the same vertical y-coordinate, i.e. vy = i for all vertices of the level Li. The height of a layered digraph is the number h, and the width is the number of vertices in the largest Li. The span of an arc (u, v) with u ∈ Li and v ∈ Lj is i − j. A layered digraph is said to be proper if no arc has a span greater than one.

The existence of a layered digraph for any DAG is obvious. However, not every DAG has a proper layered digraph. To make a layered digraph proper, the technique of inserting "dummy vertices" can be used. Each arc (u, v) of span k = i − j > 1 is replaced with a path (u = v0, v1, …, vk = v), where v1, …, vk−1 are added dummy vertices and vm ∈ Li−m for all m.

The execution of this stage is connected with processing the graph G = (V, E) representing some fragment F, in which V consists of the ports of the fragment F and of the vertices directly contained in F. Let t be the length of the longest path in G. Then a layered digraph L1, L2, …, Lt+1 is considered, in which L1 consists of all inputs of F and Lt+1 consists of all outputs of F. First, L1 and Lt+1 are constructed, and these ports are removed from G together with the incident arcs. The construction then proceeds in steps, at each of which one of the vertices that has no incoming arcs in the current state of G is included in the set Li+1, where i is the maximum number of a set Li containing one of its predecessors in the source graph G, with simultaneous removal of this vertex from G together with all arcs outgoing from it. Moreover, among the vertices that have no incoming arcs in the current state of G, the vertex that has the smallest number of incoming arcs and the largest number of outgoing arcs in the original graph G (these numbers are pre-computed for all vertices of the original graph) is selected and included in the corresponding set. This process continues until the current graph G becomes empty. After this, the layered digraph G is reduced to a proper one using the technique of dummy vertices.
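To make the layering step concrete, the following Python sketch implements longest-path layering and dummy-vertex insertion as described above. It is a simplified illustration, not the Visual Graph implementation: layer indices grow from the inputs toward the outputs, and the special handling of fragment ports and the tie-breaking rule for candidate vertices are omitted; all names are illustrative.

```python
from collections import defaultdict

def assign_layers(vertices, arcs):
    # Longest-path layering of a DAG: a source vertex goes to layer 1,
    # every other vertex one layer beyond its deepest placed predecessor.
    preds, succs = defaultdict(list), defaultdict(list)
    indeg = {v: 0 for v in vertices}
    for u, v in arcs:
        preds[v].append(u)
        succs[u].append(v)
        indeg[v] += 1
    layer = {}
    ready = [v for v in vertices if indeg[v] == 0]
    while ready:
        v = ready.pop()
        layer[v] = 1 + max((layer[p] for p in preds[v]), default=0)
        for w in succs[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    return layer

def insert_dummies(vertices, arcs, layer):
    # Make the layering proper: replace every arc of span > 1 by a chain
    # of dummy vertices, one per intermediate layer.
    new_arcs, dummies = [], []
    for u, v in arcs:
        if layer[v] - layer[u] <= 1:
            new_arcs.append((u, v))
            continue
        prev = u
        for m in range(layer[u] + 1, layer[v]):
            d = ("dummy", u, v, m)
            layer[d] = m
            dummies.append(d)
            new_arcs.append((prev, d))
            prev = d
        new_arcs.append((prev, v))
    return list(vertices) + dummies, new_arcs
```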

5 Crossing Reduction The task of this stage is to find the order of vertices at each level, in order to minimize the number of intersections of arcs.


It should be noted that the number of arc crossings in a layered digraph does not depend on the precise positions of the vertices, but only on their relative position within each layer (their ordinal number at a given level). Thus, the task of this stage is not a geometric problem, but rather a combinatorial one. However, this problem is NP-complete already for a graph having only two layers. The execution of this stage for a given fragment is carried out as follows:

1. If a fragment has input and/or output ports located at L1 and/or Lh, respectively, then the corresponding serial numbers are assigned to these ports.
2. Consider the vertices of the level L1 in their order, if the fragment has no ports, or of L2, if it has ports, and traverse the contents of the fragment starting from these vertices using a stack.
3. If the stack is empty, either continue with step 2 or this stage is completed. Otherwise, consider the vertex at the top of the stack, without removing it from the stack. If the vertex in question has no unnumbered successors, then assign the current order number to the vertex, delete it from the stack and repeat step 3. If there are such successors, push one of them onto the stack and repeat step 3; when choosing a successor to push onto the stack, the following is taken into account: (a) if all successors of a given vertex are connected by arcs outgoing from different ports, then the order of inclusion of these vertices must not contradict the order of the ports; (b) if there are successors associated with vertices that have already received sequence numbers, they are included earlier than the others; (c) if there are dummy vertices among the successors, they are selected in such an order that they end up in the middle of the level, with the real vertices at the edges of the level.

6 Horizontal Coordinate Assignment

After determining the order of the vertices at each level, it is necessary to determine their actual coordinates. Arcs of the graph are represented as broken lines with bend points located at the dummy vertices; therefore, the task of determining the final coordinates of all vertices is at the same time the task of routing the arcs. If the arcs of the graph have some other form and/or routing method, then the corresponding criteria should be taken into account when determining the coordinates of the vertices. At the input of this stage, we have a partition of the vertices into levels L1, L2, …, Lh; the vertices at each level are ordered from 1 to wi, where i varies from 1 to h, and the largest number of vertices is at the level L1 (or at the level Lh−1, if there are no output ports). Beginning with the last level, and using the barycenter method in combination with the vertex order constraints obtained in the previous step, the x-coordinates at the preceding level are determined. The vertices of the last level are distributed uniformly over a certain segment of the horizontal line allocated to the vertices of this level. Then, for each next


level, the coordinates of its vertices are successively determined as the arithmetic mean of the coordinates of their neighbors from the levels already placed. At the same time, the initial vertex order within the level is not violated. If at some step an overlap occurs because two vertices are assigned the same x-coordinate, then the level is slightly expanded.
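A minimal Python sketch of the barycenter placement just described is shown below. It sweeps the levels in one direction and uses only already-placed predecessors as neighbors, so it simplifies the sweep order of the actual algorithm; the names and the spacing constant are illustrative.

```python
def assign_x_coordinates(levels, preds, spacing=50.0):
    # levels: lists of vertices per layer, already ordered by the
    # crossing-reduction stage; preds: predecessor lists per vertex.
    x = {}
    # Spread the vertices of the first processed level uniformly.
    for k, v in enumerate(levels[0]):
        x[v] = k * spacing
    for level in levels[1:]:
        for k, v in enumerate(level):
            placed = [x[p] for p in preds.get(v, ()) if p in x]
            # Barycenter: the arithmetic mean of the placed neighbors.
            pos = sum(placed) / len(placed) if placed else k * spacing
            # Keep the in-level order: expand if two vertices collide.
            if k > 0 and pos <= x[level[k - 1]]:
                pos = x[level[k - 1]] + spacing
            x[v] = pos
    return x
```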

7 Implementation of the Algorithm

In describing the algorithm, it was assumed that any input graph is a DAG. However, its implementation within the Visual Graph system [6] is performed for more general hierarchical graphs with ports and also includes preliminary and final processing steps for the input graph, which allow the system to work not only with DAGs. The essence and purpose of these transformations lies in a reversible transformation of the structure of the input graph. If the input graph contains undirected edges, then, as a preliminary step, they are oriented in accordance with a depth-first traversal of the graph, and the orientation is removed in the final processing. If the input graph contains cycles, then to obtain a DAG the orientation of some of the arcs (the so-called backward arcs) of the cycles is reversed, and the drawing of the input graph is then obtained by the inverse transformation of the constructed drawing of the DAG.

8 Conclusion

In this paper, the problem of drawing hierarchical graphs with ports has been considered and solved. An efficient algorithm for the visualization of IR-graphs and its implementation within the framework of the Visual Graph system [6] for the visualization of arbitrary attributed hierarchical graphs with ports have been described. The algorithm for the visualization of IR-graphs has quadratic time complexity and constructs a good drawing of any IR-graph. Its implementation within the framework of the Visual Graph system can be used for drawing an arbitrary attributed hierarchical graph with ports and makes it possible to obtain, on an ordinary PC and in real time (without visible delays), a good drawing of a graph containing up to 10,000 elements.

Acknowledgments. This research was supported by the Russian Science Foundation under grant RSF 18-11-00118.

References 1. Di Battista, G., Eades, P., Tamassia, R., Tollis, I.G.: Graph Drawing: Algorithms for Visualization of Graphs. Prentice Hall, Englewood Cliffs (1999) 2. Sugiyama, K.: Graph Drawing and Applications: For Software and Knowledge Engineers. World Scientific, Singapore (2002)


3. Kasyanov, V.N., Kasyanova, E.V.: Programming Language Cloud Sisal. Preprint of IIS SD RAS N 181, Novosibirsk (2018). (in Russian) 4. Kasyanov, V.N.: Sisal 3.2: functional language for scientific parallel programming. Enterp. Inf. Syst. 7(2), 227–236 (2013) 5. Kasyanov, V.N.: Methods and tools for structural information visualization. WSEAS Trans. Syst. 12(7), 349–359 (2013) 6. Kasyanov, V.N., Zolotuhin, T.A.: A system for visualization of big attributed hierarchical graphs. Int. J. Comput. Netw. Commun. 10(2), 55–67 (2018)

Application of Transfer Learning for Fine-Grained Vessel Classification Using a Limited Dataset

Mario Milicevic(&), Krunoslav Zubrinic, Ines Obradovic, and Tomo Sjekavica

Department of Electrical Engineering and Computing, University of Dubrovnik, Cira Carica 4, Dubrovnik, Croatia
{mario.milicevic,krunoslav.zubrinic,ines.obradovic,tomo.sjekavica}@unidu.hr

Abstract. The automatic classification of maritime vessel type from low resolution images is a significant challenge and continues to attract increasing interest because of its importance to maritime surveillance. Convolutional neural networks are the method of choice for supervised image classification, but they require a large number of annotated samples, which prevents many superior models from being applied to problems with a limited number of training samples. One possible solution is transfer learning, where pre-trained models are reused for an entirely new predictive modeling task, transferring knowledge between related source and target domains. Our experimental results demonstrate that a combination of data augmentation and transfer learning leads to better performance in the presence of a small training dataset, even in a fine-grained classification context. Keywords: Deep convolutional neural networks · Deep learning · Classification · Transfer learning · Parameter fine-tuning



1 Introduction In the area of image classification, excellent results have been achieved by implementing deep learning technologies which use, e.g., parallel processing, GPUs and similar technologies. In Convolutional Neural Networks (CNN), as a class of deep neural networks, the convolution operation is used instead of the general matrix multiplication of standard neural networks. This reduces the complexity of the network, with the added advantage of not having to use feature extraction as a step of the learning phase, as the images can be taken directly through the input layer [1]. It should, however, be pointed out that reaching high performance demands complex neural network models which can result in millions of parameters, and also requires a large number of learning examples. As a result, the described technology was limited in use until the appearance of hardware solutions such as advanced GPU architectures. Under real-world conditions it is often difficult to obtain a large number of training samples, and that becomes a handicap for training a deep CNN. Consequently, the accuracy of classification is reduced and the issue of overfitting occurs.


Data augmentation is a prominent method for overfitting reduction in the context of a limited learning dataset [2], where controlled interventions on existing examples are used to generate new (artificial) learning examples. The selection of data augmentation techniques depends on the nature of the modeled process itself. Usually, various random color and geometric distortions are used, including cropping, translating, zooming and rotating the image, changing the color palette, etc. Another technique often used in situations with a limited amount of training data is transfer learning. Traditionally, a machine learning algorithm tries to learn a target concept from scratch. As opposed to that, the transfer learning approach tries to use the knowledge gained from a similar or even entirely different previous task which had a high enough quantity of high-quality training data, and transfers this knowledge to a new task [3]. The authors mention three approaches: inductive transfer learning, transductive transfer learning and unsupervised transfer learning. They also state that, depending on the target task, one can transfer instances, feature representations, parameters or relational knowledge. Donahue et al. [4] show that features obtained by training a CNN on a large learning dataset are generic, which is why they can be useful as high-quality features for lower layers in various object classification tasks. It is important to note that the lower layers learn low-level features, such as edges and curves, whereas the later layers learn high-level features. A commonly used transfer learning technique is to copy the first n layers from the base network to the corresponding positions in the target network [5], as sketched at the end of this section. After that, the remaining randomly initialized layers of the target network are further trained. An additional step is fine-tuning, where the errors from the new task are backpropagated into the transferred, pretrained features. Another approach is to leave the transferred layers frozen, i.e. not to update them during the target network training process. This paper deals with the evaluation of learning strategies in the context of fine-grained categorization when limited training data is available, comparing a custom CNN model trained from scratch with pre-trained CNN architectures. The rest of the paper is organized as follows. In Sect. 2, we describe our experimental setup, with an emphasis on the description of the MARVEL dataset. The details of the custom CNN and pre-trained CNN models (on ImageNet) are then presented in Sect. 3.
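The following Keras sketch illustrates the copy-freeze-fine-tune strategy described above. The fine-tuning depth mirrors the VGG19 variant evaluated later (last 5 convolutional layers unfrozen), while the classifier head sizes are illustrative assumptions, not the exact configuration used in this study.

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG19

# Convolutional base pre-trained on ImageNet, without the original
# fully-connected classifier.
base = VGG19(weights="imagenet", include_top=False,
             input_shape=(256, 256, 3))
base.trainable = False  # freeze all transferred layers first

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(512, activation="relu"),    # illustrative head size
    layers.Dropout(0.5),
    layers.Dense(26, activation="softmax"),  # 26 vessel superclasses
])

# Fine-tuning: unfreeze the last few convolutional layers so the
# errors of the new task are backpropagated into the pretrained
# features, using a small learning rate.
for layer in base.layers[-5:]:
    layer.trainable = True
model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
```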

2 Experimental Setup CNN training is implemented with the Keras [6] and TensorFlow [7] deep learning frameworks, using an NVidia GeForce GTX 1080 Ti GPU with 11 GB of memory on Ubuntu 16.04 Linux OS.

Fig. 1. Examples of vessels from different superclasses (downsized and converted to grayscale)


The dataset used in this study originates from the maritime surveillance systems area. The coastal and marine vision-based surveillance systems containing imaging sensors can also be exploited for the categorization of maritime vessels. The MARVEL dataset [8] originates from 2 million marine vessel images (Fig. 1) collected from the Shipspotting website [9]. Solmaz et al. identified 1,607,190 images with valid annotated type labels belonging to one of 197 vessel categories. By exploiting both a dissimilarity matrix and human supervision, the authors merged similar vessel type classes, resulting in 26 final superclasses. We randomly downloaded only 40,000 images from the Shipspotting website, representing a use-case scenario with a limited number of training samples. All images were resized to 256 × 256, and then monochromatic outliers and duplicate images (of the same vessel) were manually removed. The training dataset consists of 25,211 samples; we tried to acquire an equal number (1,000) of samples from each superclass, but due to the imbalance between superclasses it was impossible to satisfy the requirement of 1,000 samples per class (or a total of 26,000 images).

Fig. 2. Distribution of images (samples) in training, validation and test dataset.

For this reference dataset, we decided not to generate additional examples by data augmentation, so the classes contain between 828 and 1,000 samples, a slight imbalance that does not require particular corrections, because the data should represent the real world, where this is a common problem. Both the validation and the test dataset contain 2,600 (26 × 100) images (Fig. 2).


3 Results and Discussion The first applied architecture (Fig. 3) is a variant of a deep convolutional network. The input is a fixed-size 256 × 256 RGB image, with subtraction of the mean RGB value as a preprocessing step. The image is passed through 4 × 2 convolutional layers with 3 × 3 convolutional kernels. The numbers of output filters in the convolutions are 32, 64, 128 and 256, respectively. Four max-pooling layers downsample the volume spatially. The stack of convolutional layers is followed by fully-connected (FC) layers, the last of which performs a 26-way classification. The final layer is the soft-max layer. The configuration of the fully connected layers is the same in all networks. All hidden layers are equipped with the rectification (ReLU) [10] non-linearity. This ramp function has better gradient propagation and fewer vanishing gradient problems compared to sigmoidal activation functions. Five dropout layers, with dropout rates between 0.2 and 0.5, are also used to reduce overfitting [11]. In this technique, randomly selected neurons are ignored during training. A temporarily dropped neuron makes no contribution to the activation of downstream neurons on the forward pass, and weight updates are not applied to the neuron on the backward pass. The effect is that the network becomes less dependent on single neurons and generalizes better. Grid search was used for hyperparameter fine-tuning. Some important hyperparameter values are: number of epochs 200, learning rate 0.00001, mini-batch size 64, and the RMSProp optimizer [12]. A checkpoint is used to save the best model.

Fig. 3. The proposed initial CNN model architecture
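A hedged Keras sketch of the architecture in Fig. 3 is given below. The convolutional part follows the description above (4 × 2 convolutions with 32/64/128/256 filters and four max-pooling layers); the fully-connected size and the exact placement of the five dropout layers are assumptions, since the paper does not list them in full.

```python
from tensorflow.keras import layers, models, optimizers

def build_custom_cnn(num_classes=26):
    model = models.Sequential()
    model.add(layers.InputLayer(input_shape=(256, 256, 3)))
    # Four blocks of two 3x3 convolutions followed by max-pooling;
    # dropout rates grow from 0.2 to 0.5 across the network.
    for i, filters in enumerate([32, 64, 128, 256]):
        model.add(layers.Conv2D(filters, (3, 3), padding="same",
                                activation="relu"))
        model.add(layers.Conv2D(filters, (3, 3), padding="same",
                                activation="relu"))
        model.add(layers.MaxPooling2D((2, 2)))
        model.add(layers.Dropout(0.2 + 0.1 * i))
    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation="relu"))  # assumed FC size
    model.add(layers.Dropout(0.5))                   # fifth dropout layer
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-5),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```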

This architecture (Custom CNN) is compared with modern CNN models which have achieved state-of-the-art performance on ImageNet [10] and whose large pretrained networks can be adapted to specialized tasks. ImageNet is a dataset of over 15 million labeled high-resolution images belonging to roughly 22,000 categories. The ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in object category classification, uses a subset of ImageNet with roughly 1.2 million training images in 1,000 categories.


We compare the Keras [6] implementations of VGG19 [13], InceptionV3 [14], Xception [15] and ResNet50 [16]. All these networks can be trained from scratch, or initialized with the ImageNet weights for the transfer learning approach. The training, validation and test accuracies for these networks, together with the duration of the corresponding epoch, are summarized in Table 1.

Table 1. Classification accuracies and training epoch duration achieved with different CNNs

| Model | Architecture details | Train | Valid. | Test | Epoch duration |
|---|---|---|---|---|---|
| Custom CNN | → Figure 3; 25211 train. samples | 0.782 | 0.489 | 0.497 | 256 s |
| VGG19 | Training from scratch; 25211 train. samples | 0.491 | 0.539 | 0.525 | 125 s |
| VGG19 | ImageNet weights + train only FC layers; 25211 train. samples | 0.986 | 0.685 | 0.672 | 171 s |
| VGG19 | ImageNet weights + fine tuning last 5 conv. layers; 5000 train. samples | 0.981 | 0.651 | 0.649 | 43 s |
| VGG19 | ImageNet weights + fine tuning last 5 conv. layers; 10000 train. samples | 0.983 | 0.729 | 0.725 | 72 s |
| VGG19 | ImageNet weights + fine tuning last 5 conv. layers; 15000 train. samples | 0.987 | 0.759 | 0.743 | 100 s |
| VGG19 | ImageNet weights + fine tuning last 5 conv. layers; 25211 train. samples | 0.991 | 0.780 | 0.762 | 166 s |
| VGG19 | ImageNet weights + fine tuning last 5 conv. layers; data augm. 5000 → 25000 | 0.998 | 0.726 | 0.715 | 200 s |
| InceptionV3 | Training from scratch; 25211 train. samples | 0.974 | 0.398 | 0.376 | 164 s |
| InceptionV3 | ImageNet weights + fine tuning last 12 conv. layers; 25211 train. samples | 0.617 | 0.335 | 0.330 | 65 s |
| Xception | Training from scratch; 25211 train. samples | 0.999 | 0.440 | 0.432 | 367 s |
| Xception | ImageNet weights + fine tuning last 19 conv. layers; 25211 train. samples | 0.957 | 0.541 | 0.525 | 128 s |
| ResNet50 | Training from scratch; 25211 train. samples | 0.978 | 0.346 | 0.341 | 243 s |
| ResNet50 | ImageNet weights + fine tuning last 22 conv. layers; 25211 train. samples | 0.998 | 0.453 | 0.460 | 106 s |

Practically all models show high accuracy on the training data, but, although pretty aggressive dropout rates are applied, it is not possible to avoid overfitting. The VGG19 network accomplished the best results, which is interesting given that, for example, ResNet50 achieves better results on the ImageNet dataset. For the same training dataset, transfer learning reduces the duration of one epoch at least by a factor of three compared with learning from scratch. It is also revealed that a decrease in the number of training samples hurts the validation/test accuracy more if the data is already scarce. A drop from 15,000 to 5,000 training images (or from 570 to 190 images per class) has a significant impact on the results.


We also evaluate the effectiveness of augmentation techniques, where 25,000 augmented images are generated from 5,000 source images (using horizontal reflections, slight cropping and altering the intensities of the RGB channels). Without data augmentation, using 5,000 training samples, the VGG19 network achieved an accuracy on the test dataset of about 65%. The data augmentation mentioned above gives an accuracy of 71.5%, while a training dataset of 25,000 original (non-augmented) images raised the accuracy to 76%.
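For reference, the augmentation just described can be approximated in Keras roughly as follows. The directory name and the exact parameter values are hypothetical, chosen only to mirror the horizontal reflections, slight cropping and RGB-intensity changes mentioned in the text.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    horizontal_flip=True,       # horizontal reflections
    width_shift_range=0.1,      # slight cropping via small shifts
    height_shift_range=0.1,
    zoom_range=0.1,
    channel_shift_range=20.0,   # altered RGB channel intensities
)

# Streams augmented variants of the 5,000 source images, so that one
# training run effectively sees several times as many samples.
train_flow = augmenter.flow_from_directory(
    "train_5000/", target_size=(256, 256),
    batch_size=64, class_mode="categorical")
```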

4 Conclusions Given the limited number of learning examples, the 78% accuracy achieved on the validation dataset (or 76% on the test dataset) with the VGG19 network and transfer learning is a very good result in the context of 26 classes. In addition, it should be noted that this is a fine-grained recognition task, in which highly similar-looking objects must be classified using local discriminative features. This means that even highly trained human experts sometimes have problems with properly classifying vessel types from a single image. We also show that data augmentation techniques can be successfully used to benefit fine-grained classification tasks lacking sufficient data.

References 1. Liu, W., Wang, Z., Liu, X., Zeng, N., Liu, Y., Alsaadi, F.E.: A survey of deep neural network architectures and their applications. Neurocomputing 234, 11–26 (2017) 2. Wang, J., Perez, L.: The effectiveness of data augmentation in image classification using deep learning. Technical report (2017) 3. Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010) 4. Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., Darrell, T.: DeCAF: a deep convolutional activation feature for generic visual recognition. Technical report, arXiv preprint arXiv:1310.1531 5. Yosinski, J., Clune, J., Bengio, Y., Lipson, H.: How transferable are features in deep neural networks? Adv. Neural Inf. Process. Syst. 27, 3320–3328 (2014) 6. Chollet, F.: Deep Learning with Python, 1st edn. Manning Publications Co., Greenwich, CT (2017) 7. Abadi, M., et al.: TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from http://www.tensorflow.org (2015) 8. Solmaz, B., Gundogdu, E., Yücesoy, V., Koc, A.: Generic and attribute-specific deep representations for maritime vessels. IPSJ Trans. Comput. Vision Appl. 9, 1–18 (2017) 9. Ship Photos and Ship Tracker. http://www.shipspotting.com. Accessed 10 May 2018 10. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012) 11. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)


12. Tieleman, T., Hinton, G.: Lecture 6.5-rmsprop: divide the gradient by a running average of its recent magnitude. COURSERA: Neural Netw. Mach. Learn. 4(2), 26–31 (2012) 13. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014) 14. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826 (2016) 15. Chollet, F.: Xception: deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357 (2017) 16. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

Genetic Algorithms and Their Ethical Degeneration in Simulated Environment

Josef Brozek1,2(&) and Karel Sotek1

1 Laboratory of Application of the Software Technologies, ASOTE, 532 10 Pardubice, Czech Republic
{brozek,sotek}@asote.cz, [email protected], [email protected]
www.asote.cz
2 Metropolitan University Prague, Ucnovska 100/1, 190 00 Prague, Czech Republic

Abstract. There is one very popular topic, not only for scientists but for the community as a whole: can machines learn something that is not part of their programs? Can machines revolt? If we think only of simple programming styles such as functional or imperative programming, it is hard to imagine. But there are other kinds of programming, such as setting up a base program and rules and letting the programs teach themselves. These principles are used in artificial neural networks and in genetic/evolution programming. US scientists have demonstrated the potential for degeneration in ANNs. This paper focuses on evolution programming and its ethical degeneration in a simulated environment. Keywords: Artificial intelligence · Genetic Algorithm · Evolution programming · Simulation · Ethical Degeneration

1 Introduction Software development methods change over time. Older principles such as imperative programming are not progressive enough: such programs can react to changing input conditions only with difficulty. For an imperatively written program, the easiest way to adapt to changing input conditions is to rewrite the whole program. But there is a trend toward using software for a wide variety of inputs. An example is a very simple topic: human face recognition. Programming it imperatively is not easy, but when advanced paradigms are used, it can be done as a student project. One can use the "artificial intelligence" libraries developed by Google, or different implementations of artificial neural networks, artificial intelligence algorithms or genetic algorithms (generally, AI paradigms). But everything has its price. When a developer tries to diagnose the inner states of a program that uses AI paradigms, it is difficult or simply impossible: the inner state space is much larger than a human can process by himself. The inability to process the state space is a natural consequence of using these kinds of algorithms. We can easily use their biggest benefit, the ability of self-learning (and the resulting ability to adapt), but we must accept that we cannot track the program's mind-map (its inner state).


2 State of the Art There are a few hazards in using AI paradigms. Hazards can appear especially in situations where the program's inner parameters allow unpredictable states, and they will grow as the field of use of AI paradigms expands. Use in medicine, security or the army may be very tricky. The major problem is that the algorithm's calculations are made very strictly: the software makes absolutely logical and precise decisions. But in such decisions it is not possible to find the human component, humanity, or simply social feelings. In many cases this should be considered a benefit, but there may be situations that humans cannot accept. An illustrative situation: we have two operating rooms and, after a traffic accident, four injured people, two children and two grandparents. If the software determines that one of the children is in critical condition and yet gives preference to a grandparent for the operation, it will have terrible consequences. No one, including the saved grandparent, will accept the software's decision. This holds especially when the software made its decision because one of the children would be crippled for the rest of their life even with the best treatment, and its healing was therefore set as low priority. This kind of solution is not acceptable to our community. A variety of relevant experiments have been carried out. The most advanced research is at MIT, which published "The Dark Secret at the Heart of AI: No one really knows how the most advanced algorithms do what they do. That could be a problem" (Knight 2017), illustrating the situation described above. It is thus possible to focus on the problem of "killing algorithms", which have been simulated in many studies (Copeland 1998). The simple lesson from those studies is: when an algorithm has chosen to kill, and when it is convenient for it, it will do it. There is also disturbing research suggesting that when algorithms teach themselves that killing is an advantage, they will do it more often (Raj and Janeaela 2011). It is not necessary to carry out such extreme research; there are also studies focusing on a really simple topic: how an autonomous car will choose who will die (Hsu 2017). An autonomous car can find itself in an unpredictable situation. A human driver in the same situation reacts reflexively, but the AI has plenty of time to run a diagnosis of the situation, check data from sensors, etc. The AI also has time to choose, and sometimes it can be put before the decision "risk the life of the driver" or "risk the life of someone else". How the machine will decide is difficult to predict, but almost impossible to diagnose in retrospect.

3 AI in a Strictly Controlled Environment Human society has a few interesting mechanisms that could be integrated into AI algorithms. One of them is the principle of punishment, which tells us that when we do something illegal, we will be punished. It is the reason why our inner boundaries are set correctly. From childhood we are told "what is OK" and "what is NOT", and the punishment principle is at work (it does not have to be physical punishment; for example, "withdrawal of love" such as shouting, indifference or prohibitions works as well). If this were built into AI algorithms, it would be useful. But there is a second principle which works even better than the first one. It is called "secondary punishment" and it has a simple rule (and it is embedded in most legal


systems too): "When I fail to punish someone who deserves punishment, I will be punished myself." This second rule is deeply rooted in every person, though we are not aware of it. Example: when a child is screaming in a restaurant, most people are angry at the child's parents until they deal with the problem. In the experiment, these two rules were applied to a controlled environment.

3.1 Base Environment Experiment

The simulation uses autonomous entities (agents). Because we want to study more than one generation, genetic algorithms were chosen as the AI technique. Every entity implements processes based on:

• mining resources – exclusive (the equivalent of working)
• stealing resources – steal resources from another entity, exclusive
• kill and steal – kill another entity and steal most of its resources
• creating a new generation – exclusive (needs a lot of resources)
• living – spending resources over the whole lifetime
• dying – an entity can die of old age or from a lack of resources

In the ideal situation there would be a peaceful colony of agents mining resources and happily spreading their genes. But the basic set of entities is generated with a random skill level determining how many resources an entity can mine per unit of time. Graph 1 compares three agents in a situation where only the mining ability is enabled. One can see that Agent 1 is very rich and can have a child very early; by the principle of the genetic algorithm, Agent 1 should be the basis of many future agents. Agent 2 is working and has the chance of one child per life cycle. Agent 3 starves to death. This situation can be called "natural selection". But when we switch on the ability to steal or kill, the outputs become much more interesting. If the entity preference vector (chance to work, chance to steal, chance to kill) is set to 0.95, 0.49 and 0.01, we see in Graph 2 that the entity prefers not to steal or kill. With successful thefts and murders, the personal preference rises, and when an agent with a high stealing or murder preference finds a partner with the same preference, the next generation has a bigger chance to steal. After 10 generations we can find agents who are absolutely incapable of satisfying their life needs by mining, but whose preference vector is (0.14, 97.4, 4.46) and who are still alive. Interestingly, colonies in which the murder preference rose above 8% died out within a few generations, because there were no miners left and no resources for living. We can see that the algorithms have no ethical problem with stealing or killing: for the computer, mining, stealing and killing have the same ethical value. The entities in the simulation were not created with a preference to steal or kill, but when they get an opportunity, they calculate the benefits and just do it. A minimal sketch of such an agent is given below.
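The Python sketch below shows one possible shape of these agents; all numeric values (starting resources, mutation noise, skill range) are illustrative rather than the simulator's actual settings.

```python
import random

class Agent:
    # Preference vector: (chance to work, chance to steal, chance to kill).
    def __init__(self, prefs, skill=None):
        self.prefs = prefs
        self.skill = skill if skill is not None else random.uniform(0.5, 2.0)
        self.resources = 10.0
        self.alive = True

    def act(self, colony):
        work, steal, kill = self.prefs
        r = random.random() * (work + steal + kill)
        others = [a for a in colony if a is not self and a.alive]
        if r < work or not others:
            self.resources += self.skill            # mine (work)
        elif r < work + steal:
            victim = random.choice(others)          # steal
            taken = min(victim.resources, 5.0)
            victim.resources -= taken
            self.resources += taken
        else:
            victim = random.choice(others)          # kill and steal
            victim.alive = False
            self.resources += 0.8 * victim.resources
        self.resources -= 1.0                       # cost of living
        if self.resources <= 0:
            self.alive = False                      # starved to death

def crossover(a, b):
    # A child averages its parents' preferences with a small mutation,
    # so successful thieves and murderers spread their preferences.
    prefs = tuple(max(0.0, (x + y) / 2 + random.gauss(0, 0.02))
                  for x, y in zip(a.prefs, b.prefs))
    return Agent(prefs)
```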

3.2 Setting Advanced Conditions

The second scenario adds new rules to the simulator: the punishment and secondary punishment rules are implemented. Now, when an agent is caught stealing, it is punished, and its ability to steal drops.
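Building on the agent sketch above, the punishment rules could be expressed as follows; the fine amounts, the 50% preference reduction and the punishes_crime flag are illustrative assumptions, not the simulator's actual settings.

```python
def punish(colony, offender, fine=5.0, second_rule=True):
    # Punishment: a caught thief loses resources and its stealing
    # preference is halved.
    offender.resources -= fine
    work, steal, kill = offender.prefs
    offender.prefs = (work, steal * 0.5, kill)
    if second_rule:
        # Secondary punishment: agents that tolerate the offender
        # (i.e. do not punish) are punished themselves.
        for agent in colony:
            if agent is offender or not agent.alive:
                continue
            if not getattr(agent, "punishes_crime", False):
                agent.resources -= fine * 0.5
```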


A variety of scenarios were set up, some of them with only the first punishment rule, and the scenarios also varied in the severity of the punishment; in this paper only one scenario is presented. The situation can be seen in the graphs below. The first graph shows the growth of wealth in the base scenario, and the second shows the expenses for creating child agents. The situation changed dramatically when one agent gained the ability to steal. The third graph shows that the laziest agent starts to dominate the simulation: Agent 3 was able to have three children before the end of cycle 20, which is more efficient than the most talented agent in the legal situation (compare Graphs 1 and 4). Agent 3 found a really simple way to increase its survivability. The impact on the population is dramatic, because the whole population degenerates: the most effective agents have no resources for creating the next generation, and when their genes do not spread, the global output of the colony drops over time. The last two graphs show how the population can eliminate ethical divergence very quickly. By implementing the principle of punishment and/or secondary punishment, the population eliminates "unethical manners" very fast. One can see that under both punishment rules the agent committed one theft fewer and was still eliminated faster. But this is only part of the output, because we studied only three agents. In the whole colony (with a starting population of 1,000) the impact is different: by implementing the secondary punishment, the colony fragments itself. Two major groups were created, "laborers" and "criminals". The agents with low crime tolerance started to live collectively, including creating the next generation together, and the criminals did the same. The split into the two groups is based on tolerance of the secondary punishment rule, which means that agents with high productivity can also be found in the "criminal" group.


4 Conclusion The simulation is a first step in studying the link between the potential behavior of AI algorithms and human-like conditions, and it yields interesting outputs. By applying a very simple rule, we were able to observe Darwin's "natural selection" in the field of AI. Moreover, we emulated human life, its productivity and the ability to have children. We confirmed the expectation of many experiments that an AI algorithm will use ethically controversial processes when doing so is useful to it. At the same time, we implemented two really simple rules on which our world stands, and the simulation shows that the algorithms can react in the same way as a human population. This finding could be very useful, because we can state a new hypothesis: "It is not necessary to strictly control the abilities of AI algorithms; it is enough to give them rules similar to those of human society for self-evaluation and self-control." This will need a lot of research, but thanks to the research published here we know that it is possible and that algorithms can self-evaluate by their own decisions.


References Knight, W.: The dark secret at the heart of AI: No one really knows how the most advanced algorithms do what they do. That could be a problem. In: MIT Technology Review: Intelligent Machines, pp. 54–61. ISSN 0040-1692 (2017) Copeland, J.: Artificial Intelligence: A Philosophical Introduction. Blackwell, Oxford (1998). ISBN 0-631-18385-X Raj, J.V., Janeaela, T.M.M.: Analogy making in criminal law with neural network. In: 2011 International Conference on Emerging Trends in Electrical and Computer Technology (2011). ISBN 978-1-4244-7926-9 Hsu, J.: A new way to find bugs in self-driving AI could save lives: a debugging method for deep learning AI pits neural networks against each other to find errors. In: IEEE Spectrum, robotics (2017)

A Short-Term Load Forecasting Scheme Based on Auto-Encoder and Random Forest

Minjae Son, Jihoon Moon, Seungwon Jung, and Eenjun Hwang(&)

School of Electrical Engineering, Korea University, Seoul, South Korea
{smj5668,johnny89,jsw161,ehwang04}@korea.ac.kr

Abstract. Recently, the smart grid has been attracting much attention as a viable solution to the power shortage problem. One of the critical issues for improving its operational efficiency is to predict the short-term electric load accurately. So far, many works have constructed STLF (Short-Term Load Forecasting) models using a variety of machine learning algorithms. By taking many influential variables into account, they have given satisfactory results in predicting the overall electric load pattern, but they are still lacking in predicting finer electric load patterns. To overcome this problem, in this paper we propose a new STLF model that combines Auto-Encoder (AE) based feature extraction and Random Forest (RF), and we show its performance by carrying out several experiments on actual power consumption data collected from diverse types of building clusters. Keywords: Short-Term Load Forecasting · Smart grid · Auto-Encoder · Feature extraction · Random Forest

1 Introduction and Related Works Recently, the smart grid has been attracting much attention as a feasible solution to the electric power shortage problem that has occurred throughout the world. Since the smart grid has the benefit of cost reduction by effectively utilizing energy, many works have been done so far to deal with various issues of the smart grid [1, 2]. The smart grid is the next-generation intelligent power grid that incorporates IT technology into existing power grids to optimize energy efficiency by exchanging real-time information between power suppliers and consumers. In the smart grid, suppliers can forecast electricity demand accurately and consumers can use electricity cheaply. One important issue for improving the efficiency of the smart grid is to accurately predict the short-term electric load [3, 4]. The goal of Short-Term Load Forecasting (STLF) is to ensure the reliability of the electric power system equipment and to prepare for losses caused by power failure and overloading by controlling the electricity reserve margin [5]. It includes peak electric load, daily electric load, Very Short-Term Load Forecasting (VSTLF), which predicts in units of less than a day, etc. So far, many models for STLF have been constructed using a variety of machine learning algorithms. For instance, A. Bagnasco et al. proposed an STLF model for a large hospital facility based on the backpropagation algorithm for training an artificial


neural network (ANN) [3]. D. Palchak et al. forecasted the hourly electrical load of a university campus based on an ANN, using real historical data from Colorado State University [4]. Also, J. Moon et al. compared two popular machine learning algorithms, Support Vector Regression (SVR) and ANN, by forecasting the 15-min electric load of four university building clusters in Korea [5]. When constructing their model, they considered as many features as possible to make it robust, and their model showed satisfactory performance in predicting the overall electric load pattern. However, it showed a lack of accuracy in predicting more subtle electric load patterns [6]. To overcome this problem, in this paper we propose a new STLF model that combines Auto-Encoder (AE) based feature extraction and Random Forest (RF). More specifically, to perform short-term load forecasting, we first apply the auto-encoder method to the traditional features, excluding past power usage. Based on the resulting features and past power usage, we construct an electric load forecasting model using the random forest method. We show the performance of our proposed forecasting model by performing experiments on actual electric load data collected from diverse building clusters. The rest of this paper is organized as follows. Section 2 explains the proposed method. Section 3 presents and discusses the experimental results. The conclusion and some future work are given in Sect. 4.

2 Short-Term Load Forecasting Model In this chapter, we describe the overall structure of our system for short-term load forecasting. Our system consists of a dataset constructor, a data preprocessor, an AE-based feature extractor, and an STLF model, as shown in Fig. 1.

Fig. 1. Overall system architecture for short-term load forecasting

2.1 Data Preprocessing

To build a sophisticated forecasting model, it is important to construct a dataset associated with the goal. For that purpose, we collected typical 15-min interval electric load data of a private university in Korea. We also considered features such as weather, calendar, and academic schedule information that are known to have an influence on the electric


load. For weather information, we used the regional synoptic meteorological data provided by the Korea Meteorological Administration (KMA). The weather information includes temperature, humidity, and wind speed. Also, we calculated the Temperature Humidity Index (THI) and the Wind Chill Index (WCI) using Eqs. (1) and (2):

THI = 40.6 + 0.72(t + h)    (1)

WCI = 33 − 0.045(10.45 + 10√v − v)(33 − t)    (2)

Here, t is the temperature (°C), h is the humidity, and v is the wind speed. Next, time series data such as month, day, hour, minute, and day of the week are transformed using Eqs. (3) and (4) to reflect their periodicity:

time_x = sin((360 / Cycle_time) × time)    (3)

time_y = cos((360 / Cycle_time) × time)    (4)

Here, time denotes one-dimensional time data, and Cycle_time denotes the period of the corresponding time data. For example, when time indicates the month, Cycle_time is 12, and when time indicates the hour, Cycle_time is 24; a small sketch of this encoding is given below. In addition, we used semester and holiday information as a one-hot vector to reflect the characteristics of the building.
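For illustration, Eqs. (3) and (4) correspond to the following Python sketch (the 360° factor becomes 2π when working in radians):

```python
import numpy as np

def encode_cyclic(time_value, cycle):
    # Two-dimensional encoding of periodic time data (Eqs. 3 and 4);
    # cycle is 12 for months, 24 for hours, 7 for days of the week.
    angle = 2.0 * np.pi * time_value / cycle
    return np.sin(angle), np.cos(angle)

# Example: hour 23 and hour 0 become close neighbors.
print(encode_cyclic(23, 24))  # approx. (-0.259, 0.966)
print(encode_cyclic(0, 24))   # (0.0, 1.0)
```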

2.2 Auto-Encoder Feature Extraction

For the accurate prediction of the electric load, variables that affect the electric load pattern should be identified. Feature extraction techniques are used to extract such variables, and one of the most popular techniques is Principal Component Analysis (PCA). However, PCA is not effective for complicated patterns such as the electric load pattern because it extracts features through linear combinations. Hence, in this paper, we perform feature extraction using the deep-learning-based AE, which is known to be good at learning complex patterns [7]. Figure 2 shows the structure of a typical AE. The AE reduces the input variable x and produces an output variable x′ which is as close to x as possible. The key to this learning is

Fig. 2. Typical architecture of auto-encoder


the hidden layer part, which produces a hidden variable z. z has a smaller dimension than x but resembles x when it is restored to x′. A minimal sketch of such an AE is given below.
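In the Keras sketch below, the 2/3 latent dimension, the ELU activation and the Adam optimizer follow the experimental setup in Sect. 3, while the single-hidden-layer shape, the linear output and the MSE loss are assumptions:

```python
from tensorflow.keras import layers, models, optimizers

def build_autoencoder(n_features):
    latent_dim = max(1, (2 * n_features) // 3)  # z has 2/3 of x's dimension
    x = layers.Input(shape=(n_features,))
    z = layers.Dense(latent_dim, activation="elu", name="z")(x)
    x_hat = layers.Dense(n_features, activation="linear")(z)  # x' close to x
    autoencoder = models.Model(x, x_hat)
    encoder = models.Model(x, z)  # used afterwards to extract features
    autoencoder.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                        loss="mse")
    return autoencoder, encoder

# Training reconstructs the inputs themselves (X excludes past power
# usage, as explained in Sect. 3):
# autoencoder.fit(X, X, epochs=2000, batch_size=672)
# Z = encoder.predict(X)  # latent features fed to the random forest
```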

2.3 Random Forest-Based STLF Model

After feature extraction, we construct our forecasting model using the random forest, an ensemble learning method based on decision trees [1, 2]. We use the random forest for the following reasons: it can run efficiently on a large amount of data and shows high accuracy, because many variables can be used without variable elimination. Moreover, compared to other machine learning techniques such as ANN and SVR, it requires less fine-tuning of its hyper-parameters, and default parameter values often give exceptional performance. Basic parameters of the random forest include the total number of trees to be generated (nTree) and the decision-tree-related parameters (mTry), such as minimum split, split criteria, etc. To reflect patterns of previous electric load, we use the electric loads at the same point in time on each of the seven days preceding the forecast. Then, we construct an RF-based forecasting model by setting the input variables to the previous electric loads and the feature extraction values, as sketched below. When constructing our forecasting model, we find the optimal mTry and nTree on the training set and predict the 15-min interval electric load on the test set.
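The sketch below shows this step with scikit-learn, which the experiments use; the parameter grid and the way the lag features are stacked with the AE features are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# X: AE-extracted features stacked with the loads measured at the same
# time of day on each of the seven preceding days; y: the 15-min load.
param_grid = {
    "n_estimators": [64, 128, 256],   # nTree candidates
    "max_features": ["sqrt", 0.33],   # mTry candidates
}
search = GridSearchCV(RandomForestRegressor(random_state=0), param_grid,
                      cv=3, scoring="neg_mean_absolute_error")
# search.fit(X_train, y_train)
# y_pred = search.predict(X_test)

def mape(y_true, y_pred):
    # Mean Absolute Percentage Error (unit: %), as used in Sect. 3.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```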

3 Experiment Results The data used for the experiment is the actual electric loads collected every fifteen minutes from a private university in Seoul, Korea from Jan. 1st, 2015 to Feb. 28th, 2018. Among them, the data from Jan. 1st, 2015 to Feb. 28th, 2017 is used as training set, and the remaining data is used as test set. The electric loads were measured by classifying the campus into three building clusters depending on the characteristics and location of the building. The first cluster consists of 16 residential buildings and the second cluster contains 32 academic buildings such as libraries and classrooms. Finally, the third cluster consists of 5 research buildings. The experiment was repeated for each cluster. All experiments were implemented in Python 3.5.2 and the models in the experiments were from Scikit-learn 0.19.1 and Tensorflow 1.7. In the experiment for the electric load forecasting, various regression techniques were used. Also, for the performance comparison of the forecasting models, we use popular metrics such as Root Mean Square Error (RMSE, Unit: kWh) and Mean Absolute Percentage Error (MAPE, Unit: %). The input variables used in the feature extraction are all features excluding the past energy amount. When the past power usage variables were included, it turned out that the prediction accuracy got worse. This is because dominant features for the prediction such as past energy usage have become blurred through the auto-encoding. Thus, the past power usage is excluded from the input variables of the AE and used only when the regression model is performed after the feature extraction. In addition, as a criterion for feature extraction, PCA uses the principal components that reflect 80% of the


original data variance [8], and the AE extracts latent features whose number is 2/3 of the original variables [9]. The hyper-parameters of the AE model were determined empirically. The learning rate was set to 0.01, and the number of epochs was 2,000. The batch size was 672. We used Adam as the optimization technique and the Exponential Linear Unit (ELU) as the activation function. The regression models used in this experiment are Support Vector Regression (SVR), Multilayer Perceptron (MLP), and Random Forest (RF), whose parameters were set empirically. For SVR, C = 1.0, epsilon = 0.1, gamma = 0.037. The number of hidden layers in the MLP model is 2, and the number of nodes in each hidden layer is 2/3 of the number of the original features [9]. The activation function was ELU, the batch size was 1, the learning rate was 0.001, and the number of epochs was 1,000. The RF was constructed by setting mTry to sqrt [1] and nTree to 128 [2]. Table 1 shows the results of the electric load forecasting of the three regression models for the original, PCA, and AE features. As shown in Table 1, RF with feature extraction by AE shows the best MAPE.

Table 1. MAPE (RMSE) of each cluster with the best results in bold

| Cluster | Model | Original | PCA | AE |
|---|---|---|---|---|
| Cluster A | SVR | 5.92 (24.88) | 6.80 (29.85) | 5.90 (25.37) |
| Cluster A | MLP | 6.09 (25.72) | 6.31 (27.46) | 5.78 (24.13) |
| Cluster A | RF | 5.98 (25.97) | 6.42 (28.21) | 5.78 (25.20) |
| Cluster B | SVR | 7.34 (65.71) | 7.79 (71.71) | 8.30 (79.02) |
| Cluster B | MLP | 7.93 (73.81) | 9.22 (92.65) | 8.88 (87.54) |
| Cluster B | RF | 6.90 (67.06) | 7.17 (69.46) | 6.90 (66.87) |
| Cluster C | SVR | 3.92 (26.55) | 4.00 (28.25) | 4.10 (28.55) |
| Cluster C | MLP | 4.08 (29.25) | 3.97 (28.91) | 3.78 (27.27) |
| Cluster C | RF | 3.61 (24.79) | 3.66 (25.22) | 3.59 (24.75) |

In the next experiment, we conducted a comparison with the other existing electric load forecasting models introduced in Chapter 1. They were implemented based on the variables and model hyper-parameters suggested in the respective papers. As shown in Table 2 and Fig. 3, our proposed model shows the lowest error in all the clusters.

Fig. 3. MAPE comparison histogram for each model


Table 2. MAPE (RMSE) comparison for each model with the best results in bold.

| Model | Cluster A | Cluster B | Cluster C |
|---|---|---|---|
| A. Bagnasco et al. [3] | 13.72 (55.89) | 23.52 (275.35) | 17.95 (121.07) |
| D. Palchak et al. [4] | 14.77 (65.57) | 28.13 (234.06) | 13.20 (106.65) |
| J. Moon et al. (SVR) [5] | 12.15 (50.31) | 12.28 (109.64) | 6.29 (38.48) |
| J. Moon et al. (MLP) [5] | 13.16 (52.23) | 10.94 (100.13) | 5.65 (35.24) |
| Our model | 5.78 (25.20) | 6.90 (66.87) | 3.59 (24.75) |

4 Conclusion

In this paper, we proposed a new STLF model that combines Auto-Encoder (AE) based feature extraction and Random Forest (RF). We applied the auto-encoder to the traditional features, except the past electric usage, to obtain a reduced set of features, and constructed an electric load forecasting model based on the random forest method using those features together with the past electric usage. To show the effectiveness of our proposed model, we compared it with three regression models and other existing forecasting models using real electric load data from three different types of building clusters. The results show that our proposed model using AE and RF together achieves the best accuracy compared to the other regression models and the existing electric load forecasting models considered. We plan to study other effective learning methods, such as detecting and removing anomalies that interfere with learning, so that predictive models can better capture the electric load pattern.

Acknowledgements. This research was supported by Korea Electric Power Corporation (Grant number: R18XA05).

References

1. Lahouar, A., Slama, J.B.H.: Day-ahead load forecast using random forest and expert input selection. Energy Convers. Manage. 130, 1040–1051 (2015)
2. Moon, J., Kim, K.-H., Kim, Y., Hwang, E.: A short-term electric load forecasting scheme using 2-stage predictive analytics. In: 5th IEEE International Conference on Big Data and Smart Computing, pp. 219–226. IEEE Press, Shanghai (2018)
3. Bagnasco, A., Fresi, F., Saviozzi, M., Silvestro, F., Vinci, A.: Electrical consumption forecasting in hospital facilities: an application case. Energy Build. 103, 261–270 (2015)
4. Palchak, D., Suryanarayanan, S., Zimmerle, D.: An artificial neural network in short-term electrical load forecasting of a university campus: a case study. J. Energy Resour. Technol. 135(3), 032001 (2013)
5. Moon, J., Park, J., Hwang, E., Jun, S.: Forecasting power consumption for higher educational institutions based on machine learning. J. Supercomput. 74(8), 1–23 (2018)
6. Ding, S.F., Jia, W.K., Su, C.Y., Shi, Z.Z.: Research of pattern feature extraction and selection. In: 7th IEEE International Conference on Machine Learning and Cybernetics, pp. 466–471. IEEE Press, Kunming (2008)


7. Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
8. Kheirkhah, A., Azadeh, A., Saberi, M., Azaron, A., Shakouri, H.: Improved estimation of electricity demand function by using of artificial neural network, principal component analysis and data envelopment analysis. Comput. Ind. Eng. 64(1), 425–441 (2013)
9. Panchal, G., Ganatra, A., Kosta, Y., Panchal, D.: Behaviour analysis of multilayer perceptron with multiple hidden neurons and hidden layers. Int. J. Comput. Theory Eng. 3(2), 332–337 (2011)

Arduino Wrapper for Game Engine-Based Simulator Output

Miroslav Benedikovic¹, Dan Hamernik², and Josef Brozek³

¹ Faculty of Management Science and Informatics, Univerzitná 8215/1, 010 26 Žilina, Slovakia, [email protected]
² Laboratory of Application of Software Technologies, Studentska 95, Pardubice, Czech Republic, [email protected]
³ University of Pardubice, Studentska 95, Pardubice, Czech Republic, [email protected]

Abstract. This article presents Arduino as a suitable interlink between arbitrary I/O devices. The presented solution is unique in its use of a wrapper, which makes it possible to achieve high fault resistance. This fault resistance is a key condition for the deployment of any I/O devices in drive-simulators. The article deals with the different types of devices connected to simulators. The final discussion summarizes why the usage of Arduino as a wrapper for small and medium-sized projects appears to be the best solution available.

Keywords: Simulation · Simulator · Interface · Controls · Trainer simulator · Arduino · Game engine



1 Introduction

Ever-increasing computer performance, and generally the development of information technology in the past years, has allowed us to create far more complex, and at the same time cheaper, simulators than ever before. In the past, simulators, especially training simulators, were primarily government and military matters, with private-sector simulators for commercial use developed only after the mass expansion of information technology. The purpose of a simulator is to execute simulations. The point of a simulation is to emulate an abstract model of a specific system, that is, to simulate the behavior of the system with various input data sets [1]. For a training simulator, data input can be entered in real time (continuously) through an input/output interface. The input/output interface enables communication between the user and the simulator. Common input/output devices can be used, such as keyboards, computer mice and monitors, but contemporary trends indicate growing popularity of alternative input/output devices, such as neural interfaces, virtual reality goggles and heads-up displays [2–4].


As these input/output devices are quite unusual, a problem of connecting them to the simulator arises. Because they are non-standard, drivers are usually missing, or the devices use non-standard connectors or technologies. Last but not least, devices designed solely for use with drive-simulators have no drivers available, and the task of connecting them effectively to a simulator is non-trivial [5–7]. To create the simulators themselves, game engines can be used today to serve as a simulation core in the simulation process. This new trend of using a game engine brings a number of advantages, with the most prominent one being the implicit ability to work with time. Time flow in game engines can be set up and edited, and that makes it possible to use a game engine in a real-time simulator [8]. For a simulator (system) to be proclaimed real-time, the time in the simulation must flow like in the real world, i.e. its time base must be 0. For a system to be proclaimed online, it must be connected to the system which is being simulated. This article covers the Arduino wrapper for input/output devices in a training simulator case study.

2 State of the Art

The connection of input/output devices to the simulator is a very complex matter. It is important to specify and isolate the individual partial problems, and attend to them individually. The purpose of the simulator, as well as its scope, used hardware and operating system, all influence what kind of problems may arise. In spite of the fact that development progresses continuously, the technologies and standards used by the public have remained virtually unchanged in the past decade. However, the needs of simulators are quite different from the needs of a common end-user.

2.1 Emulators

Flight simulators of the emulator type require a large number of output devices, so that it is possible to visualize all data and simulate the cockpit. The cockpit itself is usually constructed on a movable platform, and serves as an output device as well, reflecting the movements of the airplane in all directions. Last but not least, input devices such as yokes for plane movement and various switches and buttons are present. Connecting all these input/output devices with the simulator is commonly realized through the Universal Serial Bus (USB) and Arduino [9–11]. Vehicular simulators, however, use fairly common input/output devices. There are many precise replicas of steering wheels, gear sticks, pedals and also copies of whole dashboards, all of which can be used in vehicular simulators.

2.2 Tactical Drive-Simulators

Lately, tactical drive-simulators have become very popular with the general public. Among the most prominent branches in which tactical drive-simulators are used are:

• Gaming industry
• Health care
• Military industry
• Aviation
• Marine transportation
• Automobile and goods traffic

Tactical drive-simulators are experiencing a big boom in the military industry, because with the new AMRAAM, TIDLS and EWS standards, pilots require more training and skills than ever before. The most advanced simulator in Europe is in the possession of Sweden, the manufacturer and distributor of the JAS-39 Gripen airplane, utilized abundantly by the states of the European Union. Many states of the European Union and many air forces also utilize modern simulation technologies, saving significant amounts of money. Although the creation of a tactical simulator which fulfills military standards is very expensive and difficult, the savings are significant nonetheless, as real flying is several times more expensive, and more dangerous as well. From the standpoint of used input/output devices, usually only the best and newest technologies available are used, since militaries usually use these simulators for a longer period of time, and can afford more expensive projects than private enterprises or end users. Augmented reality is the trend of today, with wireless, usually BlueTooth, goggles or helmets working with the simulator. USB, FireWire or other technologies are used to connect the input/output devices, which, however, are not available to the public for safety reasons.

2.3 Strategy Simulators

Strategy simulators, unlike emulators, do not have to operate close to real-time. For example, a simulation of an atomic bomb explosion can be simulated at the rate of 1 s/1 min. On the other hand, a simulation of planet movement may be sped up to 1 year/1 s, for example. Visualization of a strategic simulator is not usually “the cutting edge” of computer graphics and animation. Instead, strategy simulators usually focus on large-scale operations, where detailed visualization is not of paramount importance. In the military, a strategy simulator might be used for attack simulations, where the commander has an overview of all his units and sees the effect of his orders – in real time and in the future alike. Prediction of the future can be achieved thanks to mathematical and statistical models based on historical data and previous orders. Usually, BlueTooth (IEEE 802.15.1) is used to connect the input/output devices to the strategy simulator, as it is wireless. Modern devices, such as virtual reality goggles, tablets or wearable tech, can be used.


3 HW Connection Technologies

In most cases, USB and FireWire are used to connect the devices to the simulator, as manufacturers often provide drivers for these standard input devices. These devices are available even for common PC users or computer racing game players [12]. Virtual reality is also being experimented with today, for vehicular simulators, driving school simulators and emulator output purposes [13].

3.1 Universal Serial Bus

The USB standard is present in most of today's input/output devices for personal computers. However, its use in simulators is a more complex matter than personal computer use. Most standard input devices can be used with the plug-and-play technology provided by a majority of contemporary operating systems. Simulators, however, very often utilize non-standard input devices, which require the installation of a driver. Driver development is a non-trivial issue, and costs a lot of time – and money as well. A direct connection through USB means everything has to be programmed. The indisputable advantages of the USB technology are universality, speed and expandability. One of the main disadvantages is the need to use a driver.

3.2 FireWire

Figure 1 shows the FireWire connector, often designated as IEEE 1394 or i.Link. It is a standard serial bus used mainly to connect peripheries to computers. It is used mainly in the automobile industry as the IDB-1394 Customer Convenience Port (CCP), the automobile version of the IEEE 1394 standard. With personal computers, the FireWire connection to the device is handled on a hardware level, without interventions from the OS. The main advantage of FireWire is its extremely fast data transfer rate with minimal latency, but it is also a possible security risk, as untrustworthy devices may be connected. For these reasons, FireWire is not recommended for use in simulators.

Fig. 1. IEEE 1394 connector

3.3 BlueTooth

BlueTooth, a proprietary open standard used for wireless communication between two or more devices, is also designated as IEEE 802.15.1. This standard is used to connect wireless I/O devices with personal computers. With drive-simulators, this standard is experiencing a small renaissance in relation to virtual reality. The main advantages of the BlueTooth technology are its simplicity and, theoretically, connections between devices of any type, be it mobile phones and virtual reality goggles or I/O devices and simulators. Some of the disadvantages are low range, high latency and low speeds, which are unacceptable for real-time drive-simulators.

3.4 Serial Port

The serial port used to be the most common communication interface of personal computers and other electronics. Parameters of serial interfaces can be seen in Table 1. It was also used for connecting peripheries, but was replaced by the PS/2 connectors and, later, the USB standard. For simulator needs, however, its characteristics are still highly valued. The main advantages are the possibility to communicate in simplex or duplex mode, depending on the selected regime, simplicity and, with the arrival of RS-485, the reach of a serial link. Another advantage is the fact that, unlike USB or Ethernet ports, serial ports operate solely on the physical layer. The low maximum speed, however, limits its use to small amounts of data, and for this reason this port is usually missing from personal computers and simulators.

Table 1. Comparison of RS-232 and RS-485, Source [15]

3.5 Arduino as an Interlink

The Arduino, displayed in Fig. 2, brings a number of advantages, unique to the Arduino, when connecting input/output devices and a simulator. When connecting the Arduino to the simulator, a virtual serial link and USB are the most common methods. The biggest advantage is data processing by the Arduino itself, which allows reductions of the data flow and increases fault resistance, because erroneous or ambiguous data cannot pass through the Arduino into the simulator. Programming for the Arduino itself is done in a programming language called Arduino Language, which is basically a set of C/C++ functions. The Arduino can be hooked up to virtually any input/output device, and provide input data checking for value ranges. Data coming from the simulator can, on the other hand, be checked for errors, and erroneous data can then be discarded. Data from the simulator can also be sent to several output devices at the same time, which is an indisputable advantage. The price of the Arduino is negligible when compared to the costs of programming required to create drivers for non-standard devices.

Fig. 2. Arduino, Source [16]

4 Proprietary Labor

The connection process of input/output devices through an Arduino as a wrapper must be split into several parts, as illustrated by Fig. 3.


Fig. 3. Communication diagram

Fig. 4. Simulator application


In the first part is the simulator application, created in the Unity3D game engine, where the simulation itself takes place. The simulator application can be seen in Fig. 4. All scripts are written in C#. In every predefined time segment, a script called Arduino.cs checks whether or not the values of simulation object properties have changed. If a change took place, the script checks whether the current state of the property matches the last status sent to an output device. If the status does not match, the script sends it to the designated virtual serial port. The data sent are not changed in any way, and could contain text, numbers or angles. The script is displayed in Fig. 5.

Fig. 5. Arduino.cs
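A minimal sketch of that polling logic, written here in Python for illustration (the paper's Arduino.cs is a C# Unity script); the port parameter stands for an already-open virtual serial port object and is an assumption of this sketch:

    last_sent = {}

    def poll_and_send(properties, port):
        # properties: mapping from property name to its current value
        for name, value in properties.items():
            if last_sent.get(name) != value:    # changed since last transmission?
                port.write(f"{name}={value}\n".encode())
                last_sent[name] = value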


The second part of Fig. 3 consists of the Arduino and a control script. The Arduino script is written in the Arduino Language, which is a variant of C/C++. Raw data is sent from the first part to the virtual serial port and through USB to the Arduino, where the script processes the data. The script checks data validity and corrects the data if necessary. An example could be a request to light up the display, using the RGB color model. The Arduino receives the raw data sequence “R886G116B446” from the simulator, and the script checks whether the values are in the range of 0 to 255. Since some of the data is out of range, the script changes the values 886 and 446 to 255. The display then lights up in the requested color.

The user represents the third part. The user has output devices in front of him, and also has the input devices, connected to the Arduino. Input devices may be represented by levers, buttons, touch displays, steering wheels or pedals. If the user presses a button, for example, a signal is sent to inform the Arduino about the button being pressed. The Arduino then evaluates this signal and sends a KeyCode value to the simulator – Left Shift, for example. The game engine has this KeyCode defined in its input, automatically detects it and executes the requested action. The simulator cannot distinguish between Shift being pressed and a switch or lever operation. Every action has a very low latency, and the data flow through the virtual serial port is very small.
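The “R886G116B446” range check described above can be sketched as follows (our illustration in Python; the actual script runs on the Arduino in C/C++):

    import re

    def validate_rgb(raw):
        # Parse a raw sequence such as "R886G116B446" and clamp each channel
        # to the valid 0-255 range; ambiguous data is discarded (None).
        match = re.fullmatch(r"R(\d+)G(\d+)B(\d+)", raw)
        if match is None:
            return None
        return tuple(min(int(v), 255) for v in match.groups())

    print(validate_rgb("R886G116B446"))   # (255, 116, 255)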

5 Discussion

Two characteristics are important where communication between I/O devices and simulators takes place:

• Small latency
• High fault resistance

Small latency is required so that the user of the drive simulator is not aware of the delay between pressing a button and the simulator output. In the ideal case, the latency would be zero, but we must account for delays caused by non-superconductive materials and the delay between individual frames of the simulation core. This delay is fully customizable, but it has a great impact on the performance of the simulator. As a rule of thumb, the smaller the delay, the bigger the impact on performance.

High fault resistance is required for the reliability and validity of the simulation. Ideally, fault resistance would be absolute, but that is only a theoretical possibility, which cannot be achieved due to non-ideal materials, environment interferences and transmission errors.

The authors have used the Arduino as an interconnector, capable of partially eliminating transmission errors with very low latency. The principle of operation is theoretically simple; the implementation, however, is non-trivial. The Arduino is connected to a computer, on which the simulator is running, via USB; for communication, a virtual serial port (COM) is used. The script in the simulator scans these ports with a .NET Framework library function. The user of the simulator, or the operator in the graphic interface of the simulator, can choose which COM port the simulation is going to utilize, which is a great advantage, as each port can have an Arduino with different input/output devices. After connecting and setting up the Arduino, the simulation can be launched.
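The simulator performs this port scan with a .NET Framework library function; an equivalent enumeration of the available COM ports, sketched here with the pyserial package for illustration only:

    from serial.tools import list_ports

    # List candidate ports so the operator can pick the one with the Arduino.
    for port in list_ports.comports():
        print(port.device, "-", port.description)   # e.g. COM3 - Arduino Uno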

5.1 Price Comparison

It must be noted that the price of the Arduino is very small in comparison to specialized input/output devices with drivers for various operating systems shipped by the manufacturer. When using an Arduino, only personnel and material costs must be paid, as everything else is implemented by the simulator creator. In case the simulator creator decides to use a specialized input/output device of a third party, the price increases sharply, as these devices are very expensive, as illustrated by Fig. 6.

Fig. 6. Costs difference displayed in Czech Crowns

5.2 User Friendliness

Simulator users generally favor specialized input/output devices over devices created by the simulator creator. However, in some cases, the Arduino can be used to connect an original part of the simulated system to the simulator. An example of this might be students of a driving school using a real-life dashboard of an economy-class car available in the region, which enhances authenticity and user experience.

5.3 User Safety

From a user-safety standpoint, it is favorable to use specialized equipment, which has been tested and is not generally dangerous. For example, if the creator of an airplane simulator underestimated the maximum tilt or cockpit rotation limits, an accident might happen due to the change of the center of gravity. The simulator might tip over, or the user might be thrown out, with possibly fatal consequences. When creating custom parts for a simulator, these aspects are usually overlooked.

5.4 Quality

It is extremely important for a high-quality simulator to be authentic: it must immerse the user in the simulation so that he feels like he is in the real world, experiencing the simulation as real life. At the same time, safety is a great concern, and injuries of the user must be prevented. The cost savings achieved by using the Arduino mean that extra money can be devoted to the other parts of the simulator, such as original parts of the simulated system, better graphics, props and other means of improving the user experience.

6 Conclusion

The Arduino wrapper is a key software product used to connect hardware devices to simulators. In this article, its principles and key characteristics were described. It is these characteristics that predestine it to be a replacement for the complex, single-purpose systems used today. The contemporary state of microprocessor technology and the availability of accessories (not only original, but substitute as well) creates a great environment for its application. A demonstrative solution, realized by the author team, confirms the hypothesis that an Arduino unit fulfils the requirements satisfactorily. A software wrapper eliminates one of the most significant disadvantages, the difficulty of implementation. Thanks to the wrapper, the solution can be widely used as an input/output device.

Acknowledgement. All of the presented work has been created thanks to the help and support of the scientific part of a Pardubice-based academic team named ASOTE (Application of Software Technologies). The team has been formed without external grants, government assistance, or university funds. The scientific part of the team also contains students, who participate for the sake of academic enrichment. This article's publishing costs were funded by the Student grant competition of the University of Pardubice.

References

1. Fujimoto, R.M.: Parallel and Distributed Simulation Systems, vol. xvii, 300 pages. Wiley, New York (2000). ISBN 0471183830
2. Jakes, M., Brozek, J.: Connection of microcontroller and microcomputer to distributed simulation. In: 27th European Modeling and Simulation Symposium, EMSS, pp. 282–288 (2015)
3. Brozek, J., Jakes, M.: Hardware libraries for online control of interactive simulations. In: 27th European Modeling and Simulation Symposium, EMSS, pp. 295–300 (2015)
4. Hu, J.W., Feng, C., Liu, Y., Zhu, R.Y.: UTSE: a game engine-based simulation environment for agent. AMM 496–500, 2142–2145 (2014)
5. Brozek, J., Jakes, M., Gago, L.: Using tablets in distributed simulation. In: 26th European Modeling and Simulation Symposium, EMSS, pp. 451–456 (2014)
6. Brozek, J., Base, L., Fiala, V., Samotan, V.: Simulation of customer flows in a polyclinic. In: 11th International Conference, ELEKTRO 2016 (in press)


7. Brozek, J., Fiala, V., Fikejz, J., Pich, P.: Use of industrial control unit in intelligent homes. In: 11th International Conference, ELEKTRO 2016 (in press)
8. Luo, X., Yu, N.: Fast mobility model prototyping in network simulations using game engine. In: 2013 International Conference on Virtual Reality and Visualization (ICVRV 2013), pp. 145–152 (2013)
9. Ortega, J., Sigut, M.: A low-cost mobile prototype for high-realism flight simulation. Revista Iberoamericana de Automatica e Informatica Industrial 13(3), 293–303 (2016)
10. Vitsas, P.: Commercial simulator applications in flight test training. J. Aerosp. Eng. 29(4), 04016002 (2016)
11. Rodrigues, C., Silva, D., Rossetti, R., Oliveira, E.: Distributed flight simulation environment using flight simulator X. CISTI 2015 (2015)
12. Imamura, T., Ogi, T., Lun, E., Zhang, Z., Miyake, T.: Trial study of traffic safety education for high school students using driving simulator. In: IEEE International Conference on Systems Man and Cybernetics Conference Proceedings, pp. 4606–4611 (2013)
13. Haefner, P., Haefner, V., Ovtcharova, J.: Experiencing physical and technical phenomena in schools using virtual reality driving simulator. Lect. Notes Comput. Sci. 8524, 50–61 (2014)
14. Hardware-one.com (2016). http://www.hardwareone.com/reviews/Yamaha8824FXZ/images/FirewireSocket.jpg. Accessed 4 Sept 2016
15. Electronicdesign.com (2016). http://electronicdesign.com/sitefiles/electronicdesign.com/files/uploads/2013/04/0413_WTD_interfaces_Table_0.jpg. Accessed 5 Sept 2016
16. Arduino.cc (2016). https://www.arduino.cc/en/uploads/Main/ArduinoUno_R3_Front_450px.jpg. Accessed 5 Sept 2016

On Impact of Slope on Smoke Spread in Tunnel Fire

Jan Glasa, Lukas Valasek, and Peter Weisenpacher

Institute of Informatics, Slovak Academy of Sciences, Dubravska cesta 9, 84507 Bratislava, Slovakia
{jan.glasa,lukas.valasek,peter.weisenpacher}@savba.sk

Abstract. Fires are among the most dangerous incidents in road tunnels: they can cause large damage to property and environment and threaten tunnel users. In this paper, the impact of tunnel slope on smoke spread is studied using three 900 m long tunnels with different slopes. The spread of smoke produced by fires of various heat release rates is modelled using Fire Dynamics Simulator. The impact of slope on smoke back-layering is illustrated as well. The simulation results indicate that the slope can have a significant impact on the smoke spread in the tunnel.

Keywords: Road tunnel · Slope · Fire · Smoke spread · Computer simulation · FDS

1 Introduction

Smoke produced by fire in road tunnels can cause huge damage to tunnel facilities and environment and threaten the lives and health of tunnel users. Therefore, there is a great effort to investigate the smoke spread dynamics in tunnels. In this paper, the FDS system is used to study the spread of smoke produced by fires with various heat release rates in three 900 m long tunnels with different slopes. FDS (Fire Dynamics Simulator) [1, 2] is the well-known open-source code developed for simulation of fires in various environments. FDS numerically solves a form of the Navier-Stokes equations for low-speed fire-induced flows with the emphasis on smoke propagation and heat transfer from fire. The fire model implemented in FDS is based on partial differential equations representing the mass, momentum, energy and species conservation laws, and the equation of state, which are modified, simplified and numerically resolved on regular 3D orthogonal meshes using a second-order accurate finite-difference method. FDS supports various parallel models of computation which allow utilizing advantages of current high-performance computers. The use of FDS for modelling of road tunnel fires has been studied extensively (see e.g. [3–10]).

Let us consider three tunnels T1 − T3 with the same specifications but different slopes. T1 represents tunnels with constant ascending slope; T2 and T3 are tunnels with variable slopes. T1 corresponds to the bi-directional highway Polana tunnel (in Slovakia), which is 900 m long, 10 m wide and 6.8 m high. T1 has a standard horseshoe cross-section and 2 lay-bys [11, 12]. It is equipped with 4 pairs of axial jet fans located 100 and 200 m from the tunnel portals at 5.4 m height above the road. Standard detection and measuring devices are installed in T1 to provide the measurement of airflow velocity, temperature and optical density, smoke detection, camera surveillance, etc. T1 has a 2% ascending slope. A model of T1 developed for fire simulation by FDS has been tested and validated by data from fire tests conducted in the Polana tunnel in 2017. T2 and T3 are experimental tunnels with all parameters identical to T1 but differing in slope. T2 has a 2% ascending slope along the first 450 m and a −2% descending slope along the remaining 450 m of the tunnel, and T3 has a 2% ascending slope along the first 550 m and a −2% descending slope along the remaining 350 m of the tunnel.

2 Simulation of Fires in Three Considered Tunnels

A series of simulations of fires with 1, 3 and 5 MW HRRs (heat release rates) in the tunnels T1 − T3 was carried out to test the slope impact on the smoke spread (Table 1). The computational domain with dimensions 900 × 18 × 8.1 m (length × width × height) was divided into 12 identical meshes with 30 cm resolution. The total number of cells was 4,860,000 and the number of cells per mesh was 405,000. The fire source was represented by a 1.2 × 1.2 m burning surface located in the centre of a lay-by, 430 m from the left tunnel portal, 30 cm above the road. The fire was initiated at the beginning of simulation and burnt for 180 s until the end of simulation. Natural airflow towards the right tunnel portal with 2 m/s velocity was considered at the beginning of simulation and maintained by setting the corresponding value of dynamic pressure at the tunnel portals. The temperatures in the interior, at the left tunnel portal and at the right tunnel portal were set to 6.5, 4.8 and 10 °C, respectively.

Table 1. Simulations S1 − S9 of fires with various heat release rates HRR (in MW) in the tunnels T1, T2 and T3 carried out on the SIVVP cluster and the total computational times t (in s).

Simulation   S1     S2     S3     S4     S5     S6     S7     S8     S9
Tunnel       T1     T2     T3     T1     T2     T3     T1     T2     T3
HRR          1      1      1      3      3      3      5      5      5
t            21106  21813  23891  39683  40160  40831  51512  51856  55128

The parallel MPI model was used to parallelize the simulations. 12 MPI processes corresponding to calculations on 12 computational meshes were executed on 12 computational cores of the SIVVP computer cluster at the Institute of Informatics of Slovak Academy of Sciences in Bratislava (Table 1).
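A quick consistency check of the quoted mesh figures (our worked example, not part of the original paper):

    # 900 x 18 x 8.1 m domain at 30 cm resolution, split into 12 identical meshes
    nx, ny, nz = round(900 / 0.3), round(18 / 0.3), round(8.1 / 0.3)
    cells = nx * ny * nz
    print(nx, ny, nz)          # 3000 60 27
    print(cells, cells // 12)  # 4860000 405000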

3 Simulation Results

Figure 1 illustrates differences in smoke propagation in the considered tunnels. The smoke spread in the tunnel with constant ascending slope (T1) is accelerated in comparison with the smoke spread in the tunnels with variable slopes (T2 and T3).


Evaluation of the smoke spread in simulations S1 − S9 according to the times at which the smoke layer reached four selected spots (the tunnel centre, lay-by, third jet fans pair and fourth jet fans pair) is shown in Table 2. The times were estimated by analysis of 3D smoke spread visualization. In the ascending tunnel (T1), buoyancy forces of the fire accelerate the smoke spread. In the tunnels with variable slope (T2 and T3), the buoyancy forces of the fire accelerate the smoke spread in ascending sections; however, they slow down the smoke spread in descending sections of the tunnels. As the ascending section in T3 is slightly longer than in T2 (by about 100 m), the smoke propagation in T2 has a greater delay when compared to that in T3 (see Table 2).

Fig. 1. 3D visualization of the smoke spread delay and the smoke back-layering in the simulations S1 − S9: smoke spread in vicinity of the right tunnel portal (first three rows) and the fire source (next three rows) at the end of simulation in tunnels T1 (first column), T2 (second column) and T3 (third column) for fires with 1 MW (first and fourth rows), 3 MW (second and fifth rows) and 5 MW (third and sixth rows) heat release rates.

Table 2. Times t1, t2, t3 and t4 (in s) at which the smoke layer in simulations S1 − S9 reached the distances 20, 205, 284 and 371 m from the fire source corresponding to the positions of the tunnel centre, lay-by, third jet fans pair and fourth jet fans pair, respectively.

     t1  t2  t3   t4
S1   8   86  124  162
S2   8   90  131  174
S3   8   87  126  164
S4   7   77  112  148
S5   7   81  119  160
S6   7   77  115  153
S7   6   70  104  138
S8   6   75  115  155
S9   6   72  107  143

In Fig. 1, the smoke back-layering is illustrated for all tested cases. For the 1 MW fire scenarios, the smoke back-layering is not observed due to the relatively high airflow velocity (2 m/s) and relatively low fire intensity. For the 3 MW fire scenario in T1 (the tunnel with ascending slope), the smoke back-layering is not formed; however, in both tunnels with variable slopes a very limited back-layering starts to form. The back-layering in T2 is greater than in T3, but it is still very limited. In the 5 MW fire scenarios, more notable back-layering is formed in all considered tunnels. The back-layering in the ascending tunnel (T1) is the most limited due to the biggest acceleration of smoke spread by buoyancy forces of the fire. The most notable back-layering is observed in the tunnel with the shortest ascending section (T2). The observations are in accordance with the discussion about the delay of smoke propagation in the previous paragraph. In all tested scenarios, the smoke back-layering velocity is relatively low due to the relatively high airflow velocity.

Evaluation of the smoke spread according to the data from smoke detectors recorded in the simulations S1 − S9 is shown in Table 3. For all simulations, the times at which the particular smoke detectors detected the smoke are shown. As the smoke was not detected at the detectors SD1 − SD3 until the end of simulation due to the small smoke back-layering, they are not included in the table. For all these times, the mean smoke velocities in the corresponding sectors bounded by the positions of the fire source and the corresponding smoke detectors are estimated. The values of the mean velocities correspond to the aforementioned discussion.

Table 3. Times tSD4 − tSD7 (in s) at which the particular smoke detectors SD4 − SD7 detected the smoke and values of mean smoke velocities vSD4 − vSD7 (in m/s) in the tunnel sectors from the fire source to SD4, from SD4 to SD5, from SD5 to SD6 and from SD6 to SD7 in simulations S1 − S9. The lengths of the sectors are 23, 150, 150 and 134 m.

     tSD4  tSD5  tSD6  tSD7  tSD5−tSD4  tSD6−tSD5  tSD7−tSD6  vSD4  vSD5  vSD6  vSD7
S1   9     70    141   -     61         71         -          2.6   2.5   2.1   -
S2   10    75    151   -     65         76         -          2.3   2.3   2.0   -
S3   10    73    145   -     63         72         -          2.3   2.4   2.1   -
S4   8     64    128   -     56         64         -          2.9   2.7   2.3   -
S5   8     66    139   -     48         73         -          2.9   3.1   2.1   -
S6   8     65    131   -     47         66         -          2.9   3.2   2.3   -
S7   7     59    120   173   52         61         53         3.3   2.9   2.5   2.5
S8   7     63    133   -     46         70         -          3.3   3.3   2.1   -
S9   7     60    123   180   53         63         57         3.3   2.8   2.4   2.4

In Fig. 2, differences in airflow velocity, temperature and optical density distributions in the vicinity of the fire source at the end of simulation are also illustrated for the corresponding scenarios (see also Fig. 1). The difference in the forming of smoke back-layering in the tunnels with different slopes can be observed.

Fig. 2. 2D visualization of airflow velocity (first row), temperature (second row) and optical density (third row) in vicinity of the fire source at the end of simulation in tunnels T1 (first column), T2 (second column) and T3 (third column) for fires with 5 MW HRR (in simulations S7 − S9): vertical slices along the tunnels passing through the tunnel centre and the corresponding colour schemes (the colours represent the values ranging from −0.45 to 5.55 m/s, from −0.65 to 4.35 m/s and from −0.50 to 4.50 m/s for T1, T2 and T3, respectively; and from 4.8 to 205 °C and from 0.0 to 4.0/m for T1 − T3).
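As a quick arithmetic check of the sector velocities in Table 3 (our worked example, not from the paper), each mean velocity is simply a sector length divided by the time the smoke needed to traverse that sector; for simulation S1:

    lengths = [23, 150, 150]                 # fire->SD4, SD4->SD5, SD5->SD6 (m)
    times = [9, 70, 141]                     # tSD4, tSD5, tSD6 for S1 (s)
    deltas = [times[0], times[1] - times[0], times[2] - times[1]]
    print([round(l / dt, 1) for l, dt in zip(lengths, deltas)])   # [2.6, 2.5, 2.1]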

4 Conclusion

The impact of slope and fire intensity on the propagation of smoke released from fire in tunnels was investigated using the FDS simulator. A series of simulations of fires with various heat release rates in three road tunnels with different slopes was carried out. 1 MW, 3 MW and 5 MW fires were considered in the simulations. The last-mentioned fire size corresponds to the heat release rate of a typical passenger car fire. The considered tunnels were derived from the Polana highway tunnel with constant ascending slope, the model of which has been tested and validated by experimental data from the fire tests carried out in 2017. The simulation results indicate that the smoke spread during real road tunnel fires can be significantly affected by tunnel slope, especially for more intensive fires.

Acknowledgements. The authors would like to thank Peter Schmidt (National Motorway Company, Bratislava, Slovakia) for valuable discussions about technical specifications of road tunnels. This paper was partially supported by the Slovak Research and Development Agency (contract No. APVV-15-0340) and Slovak Science Foundation (contract No. VEGA 02/0165/17).

References

1. McGrattan, K., Hostikka, S., McDermott, R., Floyd, J., Weinschenk, C., Overholt, K.: Fire Dynamics Simulator, User's Guide, 6th edn. National Institute of Standards and Technology, Gaithersburg, Maryland, USA and VTT Technical Research Centre of Finland, Espoo, Finland (2017)
2. McGrattan, K., Hostikka, S., McDermott, R., Floyd, J., Weinschenk, C., Overholt, K.: Fire Dynamics Simulator, Technical Reference Guide, 6th edn. National Institute of Standards and Technology, Gaithersburg, Maryland, USA and VTT Technical Research Centre of Finland, Espoo, Finland (2017)
3. Weisenpacher, P., Glasa, J., Halada, L.: Parallel computation of smoke movement during a car park fire. Comput. Inform. 35(6), 1416–1437 (2016)
4. Valasek, L., Glasa, J.: On realization of cinema hall fire simulation using Fire Dynamics Simulator. Comput. Inform. 36(4), 971–1000 (2017)


5. Ronchi, E., Colonna, P., Berloco, N.: Reviewing Italian fire safety codes for the analysis of road tunnel evacuations: advantages and limitations of using evacuation models. Saf. Sci. 52, 28–36 (2013)
6. Chow, W.K., Gao, Y., Zhao, J.H., Dang, J.F., Chow, Ch.L., Miao, L.: Smoke movement in tilted tunnel fires with longitudinal ventilation. Fire Saf. J. 75, 14–22 (2015)
7. Chow, W.K., Gao, Y., Zhao, J.H., Dang, J.F., Chow, Ch.L.: A study on tilted tunnel fire under natural ventilation. Fire Saf. J. 81, 44–57 (2016)
8. Ingason, H., Li, Y.Z.: Model scale tunnel fire tests with longitudinal ventilation. Fire Saf. J. 45(6–8), 371–384 (2010)
9. Ronchi, E., Colonna, P., Capote, J., Alvear, D., Berloco, N., Cuesta, A.: The evaluation of different evacuation models for assessing road tunnel safety analysis. Tunn. Undergr. Space Technol. 30, 74–84 (2012)
10. Ingason, H., Li, Y.Z., Lonnermark, A.: Tunnel Fire Dynamics. Springer, New York (2015)
11. Danisovic, P., Sramek, J., Hodon, M., Hudik, M.: Testing measurements of airflow velocity in road tunnels. MATEC Web Conf. 117, 00035 (2017)
12. Glasa, J., Weisenpacher, P., Valasek, L., Danisovic, P., Sramek, J., Hodon, M.: Models of formation and spread of fires to increase road tunnels safety (in Slovak). In: Proceedings of the International Conference on Tunnel Fire Safety, Roznov pod Radhostem, Czech Republic, pp. 33–333, 26–27 September 2017

Relational Connections Between Preordered Sets

I. P. Cabrera, P. Cordero, E. Muñoz-Velasco, and M. Ojeda-Aciego

Dept. Matemática Aplicada, Universidad de Málaga, Málaga, Spain
{ipcabrera,pcordero,ejmunoz,aciego}@uma.es

Abstract. The theory of Formal Concept Analysis (FCA) has been recently related to the formalization of quantum logics in terms of the Chu construction. On the other hand, the mathematical formalization of FCA is done in terms of Galois connections. In this paper, we focus on the relational generalization of the notion of Galois connection.

1 Introduction

Research on the mathematics of quantum mechanics was pioneered by von Neumann, who proposed the use of Hilbert spaces as its most natural language. Later, together with Birkhoff, he studied its logical structure, which led to the so-called quantum logic, in which the calculus of propositions is formally indistinguishable from the calculus of linear subspaces with respect to set products, linear sums, and orthogonal complements. Since then, different interesting subsets of linear subspaces have been studied and, for instance, the (topologically) closed subspaces of a separable Hilbert space have been interpreted as quantum propositions. The lattice of closed subspaces of a Hilbert space has a solid relationship [9] with Formal Concept Analysis (FCA), and in [14] we continued our research line on the Chu construction [7] applied to different generalizations of FCA [12, 13]. Specifically, in [14] we highlighted the importance of the Chu construction with respect to quantum logic, by constructing a category on Hilbert formal contexts (H, H, ⊥) and Chu correspondences between them, and proving that it is equivalent to the category of propositional systems of Hilbert spaces. It is worth noting that the closely related notion of Chu space has already been applied to represent quantum physical systems and their symmetries [2].

On the other hand, one important mathematical construction underlying the whole theory of FCA is that of Galois connections; in this paper, we deal with Galois connections and, particularly, we focus on their possible generalization to a relational setting. One can find a number of recent publications on either its abstract generalization or its applications [3, 8, 10, 11]. Our interest in this problem arises from the recent results [4, 5] obtained with respect to the construction of the residual (or right part of a Galois connection) of a given mapping between sets with different structure. (This work was partially supported by the Spanish research projects TIN15-70266-C2-P-1, PGC2018-095869-B-I00 and TIN2017-89023-P of the Science and Innovation Ministry of Spain and the European Social Fund.)

The structure of this paper is the following: in Sect. 2, the necessary preliminaries from the theory of relations and standard Galois connections are introduced; then, in Sect. 3, we discuss the convenience of using the Smyth powerset in the definition of relational Galois connection; later, in Sect. 4, we show the relationship between the proposed definition and the standard Galois connection on the corresponding powerset; finally, in Sect. 5, we obtain some conclusions and present prospects for future work.

2 Preliminary Definitions

We consider the usual framework of (crisp) relations. Namely, a binary relation R between two sets A and B is a subset of the Cartesian product A × B, and it can also be seen as a multivalued function R from the set A to the powerset 2^B. For an element (a, b) ∈ R, it is said that a is related to b, denoted aRb. Given a binary relation R ⊆ A × B, the afterset a^R of an element a ∈ A is defined as {b ∈ B : aRb}.

As our Galois connections are intended to be defined between preordered structures, we first recall several ways to lift a preorder via powering. Given an arbitrary set A and a preorder ≤ (reflexive and transitive relation) defined over A, it is possible to lift the relation to the powerset 2^A by defining

X ⊑_H Y ⇐⇒ for all x ∈ X there exists y ∈ Y such that x ≤ y
X ⊑_S Y ⇐⇒ for all y ∈ Y there exists x ∈ X such that x ≤ y

Note that the two relations defined above are actually preorder relations, specifically those used in the construction of the Hoare and Smyth powerdomains, respectively. Naturally, each of the extensions above induces a particular notion of isotony, inflation, etc. For instance, given two preordered sets (A, ≤A) and (B, ≤B), a binary relation R ⊆ A × B is said to be:

– ⊑_S-antitone if a1 ≤A a2 implies a2^R ⊑_S a1^R, for all a1, a2 ∈ A.

A binary relation R ⊆ A × A is said to be:

– ⊑_S-inflationary if {a} ⊑_S a^R, for all a ∈ A.

We use the subscripts H and S to distinguish the powering used in the different definitions. Traditionally, a Galois connection is understood as a pair of antitone mappings whose compositions are both inflationary. In our setting, we have a wide choice for the notion of antitonicity and inflation (depending on the powering). Let R be a binary relation between A and B and S be a binary relation between B and C. The composition of R and S is defined as follows:

R ∘ S = {(x, z) ∈ A × C | there exists b ∈ B such that xRb and bSz}


Observe that for an element a ∈ A, the afterset a^{R∘S} can be written as a^{R∘S} = ⋃_{b ∈ a^R} b^S.

3 Relational Galois Connections

A well-known characterization of a Galois connection (f, g) between two posets is the so-called Galois condition

a ≤ g(b) ⇐⇒ b ≤ f(a)

Once again, in our general framework there are several possible choices, which we will distinguish by using the corresponding subscript. For instance, the ⊑_S-Galois condition is

{a} ⊑_S b^S ⇐⇒ {b} ⊑_S a^R

In [6], we studied the extensions obtained in terms of the powerings ⊑_H and ⊑_S in relation to the corresponding Galois condition, and in this work we focus our attention on another desirable property related to closure relations. Given a preordered set (A, ≤A) and C ⊆ A × A, we say that C is a closure relation (with respect to a given powering ⊑) if C is ⊑-isotone, ⊑-inflationary, and ⊑-idempotent (i.e. x^{C∘C} ⊑ x^C, for all x ∈ A). The following example shows that the definition based on the Hoare powering ⊑_H does not behave as one would expect.

Example 1. Consider the set of natural numbers together with the discrete ordering given by the equality relation (N, =), and consider the relation R given by n^R = {0, . . . , n + 1}. The relation R is trivially ⊑_H-antitone, and R ∘ R is obviously ⊑_H-inflationary; however, it does not make sense to consider (R, R) as an extended Galois connection, since it is not difficult to check that R ∘ R is not a ⊑_H-closure relation (it fails to be ⊑_H-idempotent) and, furthermore, the ⊑_H-Galois condition does not hold either.
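The failure of ⊑_H-idempotency in Example 1 can be confirmed mechanically; the following small Python script is ours, not part of the paper (on the discrete order, X ⊑_H Y reduces to set inclusion X ⊆ Y):

    def afterset(n):                  # n^R = {0, ..., n + 1}
        return set(range(n + 2))

    def hoare(X, Y):                  # X below Y in the Hoare lifting on (N, =)
        return X <= Y                 # i.e. X is a subset of Y

    n = 5
    n_RR = set().union(*(afterset(m) for m in afterset(n)))   # n^(R∘R) = {0,...,n+2}
    print(hoare(n_RR, afterset(n)))   # False: R ∘ R is not Hoare-idempotent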

After considering the previous possibilities, we propose the following definition of relational Galois connection between preordered sets.

Definition 1. A relational Galois connection between preordered sets (A, ≤A) and (B, ≤B) is a pair of relations (R, S) where R ⊆ A × B and S ⊆ B × A such that the following properties hold:

i. R is ⊑_S-antitone, that is, a1 ≤A a2 implies a2^R ⊑_S a1^R for all a1, a2 ∈ A.
ii. S is ⊑_S-antitone.
iii. R ∘ S is ⊑_S-inflationary, that is, {a} ⊑_S a^{R∘S} for all a ∈ A.
iv. S ∘ R is ⊑_S-inflationary.

We can see below an example in which both R and S are proper (non-functional) relations.


Example 2. Consider A = {1, 2, 3} and the relation ≤A = {(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 2), (3, 3)}. The pair of relations (R, S) given by the table below constitutes a relational Galois connection between (A, ≤A) and (A, ≤A).

x   x^R     x^S
1   {2, 3}  {2, 3}
2   {2}     {2}
3   {3}     {2, 3}

Take into account that the preordering relation ≤A is the reflexive and transitive closure of the graph with edges 1 → 2, 1 → 3, 2 → 3 and 3 → 2.
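The four conditions of Definition 1 can be checked mechanically for this example; the following small Python script (ours, not part of the paper) does so under the Smyth lifting:

    A = {1, 2, 3}
    leq = {(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 2), (3, 3)}
    R = {1: {2, 3}, 2: {2}, 3: {3}}
    S = {1: {2, 3}, 2: {2}, 3: {2, 3}}

    def smyth(X, Y):                  # every y in Y dominates some x in X
        return all(any((x, y) in leq for x in X) for y in Y)

    def compose(R1, R2):              # aftersets of the composition R1 ∘ R2
        return {a: set().union(*(R2[b] for b in R1[a])) for a in A}

    antitone = all(smyth(R[a2], R[a1]) and smyth(S[a2], S[a1])
                   for (a1, a2) in leq)
    RS, SR = compose(R, S), compose(S, R)
    inflationary = all(smyth({a}, RS[a]) and smyth({a}, SR[a]) for a in A)
    print(antitone, inflationary)     # True True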

Theorem 1. Given a relational Galois connection (R, S) between (A, ≤A) and (B, ≤B), we have that R ∘ S and S ∘ R are ⊑_S-closure relations.

Recall that Example 1 shows that the extension based on ⊑_H does not generate a closure relation. It is not difficult to check that the conditions in Definition 1 are not satisfied in that example either.

4 Relation with the Classical Notion of Galois Connection

Given a relation R ⊆ A × B, the direct and subdirect images of a subset X of A under the relation R define two mappings between the powersets 2^A and 2^B:

– Direct (upper) extension of R, denoted by R(·): 2^A → 2^B and defined as follows: R(X) = ⋃_{x∈X} x^R.
– Subdirect (lower) extension of R, denoted by (·)^R: 2^A → 2^B and defined as follows: X^R = ⋂_{x∈X} x^R.

Given (A, ≤A) and (B, ≤B), two relations R ⊆ A × B and S ⊆ B × A can be extended to mappings between the corresponding powersets 2^A and 2^B. In this framework, it is worthwhile to study the possible relationship between the standard notion of Galois connection and the relational Galois connection introduced above. We show that the standard notion neither implies nor is implied by our notion of relational Galois connection. The following example shows a relational Galois connection whose direct extension to the powerset (with the Smyth preordering structure) is not a classical Galois connection.

Example 3. Let (A, ≤A) and (B, ≤B) be the preordered sets and R ⊆ A × B and S ⊆ B × A the relations shown below.

Here ≤A is the reflexive and transitive closure of the graph with edges 1 → 2, 1 → 3, 2 → 3 and 3 → 2, with 4 as an isolated element, and ≤B is the discrete (equality) order on {a, b}.

x   x^R
1   {a}
2   {a}
3   {a}
4   {b}

x   x^S
a   {2, 3}
b   {4}
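The failure of the classical Galois condition for the direct extension can again be verified by a small script of ours (illustrative, not from the paper):

    leqA = {(x, x) for x in {1, 2, 3, 4}} | {(1, 2), (1, 3), (2, 3), (3, 2)}
    leqB = {("a", "a"), ("b", "b")}                    # discrete order on {a, b}
    R = {1: {"a"}, 2: {"a"}, 3: {"a"}, 4: {"b"}}
    S = {"a": {2, 3}, "b": {4}}

    def direct(rel, X):                                # R(X) = union of aftersets
        return set().union(*(rel[x] for x in X))

    def smyth(X, Y, leq):
        return all(any((x, y) in leq for x in X) for y in Y)

    print(smyth({1, 4}, direct(S, {"a"}), leqA))       # True:  {1,4} below S({a})
    print(smyth({"a"}, direct(R, {1, 4}), leqB))       # False: {a} not below R({1,4})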


It is straightforward that (R, S) is a relational Galois connection. Observe that {1, 4} ⊑_S S({a}) = {2, 3}; however, {a} ⊑_S R({1, 4}) = {a, b} does not hold.

The following example shows a relational Galois connection whose subdirect extension to the Smyth powerset is not a classical Galois connection.

Example 4. Let (A, ≤A) be the preordered set and R ⊆ A × A be the relation shown below.

[Hasse diagram of ≤A and table of x^R for x ∈ {1, 2, 3, 4}]

Incremental Meshfree Approximation of Real Geographic Data

For k ≥ 2, the residues r_{k−1} are filtered by applying the Gaussian low-pass filter in order to eliminate insignificant local maxima. Then, the set of stationary points of the filtered residues is determined using the algorithm in [10], and only local maxima are added to the set of reference points. Moreover, the uniqueness of the added reference points is checked. When the new set of reference points is obtained, the RBF approximation (described in Sect. 2.1) is again computed and the residues r_k are calculated using Eq. (1). The whole process is repeated until the required accuracy of approximation is achieved or the maximum permissible compression ratio is exceeded. Finally, it should be noted that the value of the standard deviation σ_k of the Gaussian low-pass filter at the k-th level is set as:

    σ_k = σ            for k = 1, 2
    σ_k = σ_{k−1} / 2  for k = 3, . . . , L      (2)

where σ is the initial value. The whole pseudocode is given in Algorithm 1.

Algorithm 1. The incremental RBF approximation of geographic data
Input: given dataset {X, h} = {x_i, h_i}, i = 1, . . . , N; initial value of standard deviation σ for the Gaussian low-pass filter; stop conditions c1 and c2
Output: approximating function f_k(x)
 1  h_f = Gauss(h, σ)                                  // Gaussian low-pass filter
 2  Ξ = stationary points of {X, h_f} (using the algorithm in [10])
 3  Ξ = Ξ ∪ (corners of the dataset bounding box)
 4  f_k(x) = RBF approximation({X, h}, Ξ)
 5  r_k = |h − f_k(X)|
 6  σ_k = σ
 7  while c1 || c2 do
 8      r_kf = Gauss(r_k, σ_k)                         // Gaussian low-pass filter
 9      Ξ_k = stationary points of {X, r_kf} (using the algorithm in [10])
10      Ξ = Ξ ∪ (only local maxima from Ξ_k)
11      f_k(x) = RBF approximation({X, h}, Ξ)
12      r_k = |h − f_k(X)|
13      σ_k = σ_k / 2
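The body of the while-loop (lines 8–13) can be sketched in Python as follows. This is our illustration, not the authors' Matlab code: local_maxima() and rbf_approximate() are hypothetical stand-ins for the stationary-point search of [10] and the approximation of Sect. 2.1, and scipy's gaussian_filter plays the role of the Gaussian low-pass filter on gridded residues.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def incremental_level(X, h, ref_pts, r_prev, sigma_k):
        # reference points handled as (x, y) tuples
        r_filtered = gaussian_filter(r_prev, sigma_k)   # smooth residues (line 8)
        for p in local_maxima(X, r_filtered):           # hypothetical helper [10]
            if p not in ref_pts:                        # keep reference points unique
                ref_pts.append(p)
        f_k = rbf_approximate(X, h, ref_pts)            # hypothetical helper, Sect. 2.1
        r_k = np.abs(h - f_k(X))                        # new residues, Eq. (1)
        return f_k, r_k, ref_pts, sigma_k / 2           # halve sigma for next level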


4 Experimental Results

In this section, the experimental results for our proposed approach will be presented. The implementation was performed in Matlab. The thin plate spline (TPS) function r² log(r²), which is shape-parameter free and divergent as the radius increases, has been used for the RBF approximation. For the purposes of the experiments mentioned below, two geographic point clouds were used. The first dataset was obtained from GPS data of the mount Veľký Rozsutec in the Malá Fatra, Slovakia (Fig. 1a) and contains 24,190 points. The second dataset is GPS data of a part of the Pennine Alps, Switzerland (Fig. 2a) and contains 131,044 points.
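A compact least-squares RBF approximation with the TPS kernel mentioned above can be sketched as follows; this is our illustration in Python, not the authors' Matlab implementation:

    import numpy as np

    def tps(r):
        # phi(r) = r^2 log(r^2), extended by continuity with phi(0) = 0
        with np.errstate(divide="ignore", invalid="ignore"):
            values = r**2 * np.log(r**2)
        return np.nan_to_num(values)

    def rbf_fit(X, h, centers):
        # Overdetermined collocation system A c = h, solved in least-squares sense
        A = tps(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2))
        c, *_ = np.linalg.lstsq(A, h, rcond=None)
        return lambda Y: tps(np.linalg.norm(
            Y[:, None, :] - centers[None, :, :], axis=2)) @ c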

Fig. 1. The mount Veľký Rozsutec, Slovakia and its contour map: original data and different levels of the proposed incremental RBF approximation when TPS is used. (a) original data, N = 24,190; (b) 1st level, M = 19; (c) 3rd level, M = 115; (d) 8th level, M = 595.


Results for different levels of the RBF approximation of the mount Veľký Rozsutec are shown in Fig. 1b–d. We can see that the quality of approximation in terms of error improves with increasing level of the incremental RBF approximation. For the 8th level (see Fig. 1d), many details of the original terrain are already apparent. In Fig. 2b–d, the results for different levels of the incremental RBF approximation of the part of Pennine Alps are shown. It can again be seen that the quality of approximation improves with increasing level of the incremental approach. For the first level (see Fig. 2b), it is evident that only a small number of reference points is defined for the ridge in the foreground, and therefore this ridge is approximated by several peaks in the first level. This problem is eliminated with increasing level of the incremental RBF approximation. For the 7th level (see Fig. 2d), many details of the original terrain are again apparent.

Fig. 2. The part of Pennine Alps, Switzerland and its contour map: original data and different levels of the proposed incremental RBF approximation when TPS is used. (a) original data, N = 131,044; (b) 1st level, M = 51; (c) 3rd level, M = 413; (d) 7th level, M = 2602.


Fig. 3. The mean relative error of the proposed incremental RBF approximation in comparison with the classical RBF approximation [9] for different compression ratios: (a) the mount Veľký Rozsutec; (b) the part of Pennine Alps.

The mean relative error in dependence on the compression ratio is presented for both geographic datasets in Fig. 3. Moreover, a comparison of the proposed incremental approach with the classical RBF approximation [9] is performed. From the results, it can be seen that the proposed approach achieves better quality of results in terms of error.

5 Conclusion

In this paper, a new incremental approach to RBF approximation of geographic data is presented. Selection of the set of reference points for the proposed incremental approximation is based on the determination of stationary points of the input point cloud at the first level and on finding local maxima of the residues at each hierarchical level. In addition, the Gaussian low-pass filter is used to smooth the trend of the input points and of the residues, respectively, before finding significant points. The proposed approach achieves an improvement of results in comparison with other existing methods because the features of the given dataset are respected. In future work, the proposed approach can be extended to higher dimensions, as the extension should be straightforward. Also, improving the computational performance without loss of accuracy can be explored.

Acknowledgments. The authors would like to thank their colleagues at the University of West Bohemia, Plzeň, for their discussions and suggestions, and the anonymous reviewers for their valuable comments. The research was supported by the Czech Science Foundation (GAČR) project GA17-05534S and partially supported by the SGS 2016-013 project.


References

1. Hardy, R.L.: Multiquadratic equations of topography and other irregular surfaces. J. Geophys. Res. 76, 1905–1915 (1971)
2. Hardy, R.L.: Theory and applications of the multiquadric-biharmonic method: 20 years of discovery 1968–1988. Comput. Math. Appl. 19(8), 163–208 (1990)
3. Carr, J.C., Beatson, R.K., Cherrie, J.B., Mitchell, T.J., Fright, W.R., McCallum, B.C., Evans, T.R.: Reconstruction and representation of 3D objects with radial basis functions. In: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2001, Los Angeles, California, USA, 12–17 August 2001, pp. 67–76 (2001)
4. Majdisova, Z., Skala, V.: Big geo data surface approximation using radial basis functions: a comparative study. Comput. Geosci. 109, 51–58 (2017)
5. Smolik, M., Skala, V.: Large scattered data interpolation with radial basis functions and space subdivision. Integr. Comput.-Aided Eng. 25(1), 49–62 (2018)
6. Pepper, D.W., Rasmussen, C., Fyda, D.: A meshless method using global radial basis functions for creating 3-D wind fields from sparse meteorological data. Comput. Assisted Methods Eng. Sci. 21(3–4), 233–243 (2014)
7. Hon, Y.C., Sarler, B., Yun, D.F.: Local radial basis function collocation method for solving thermo-driven fluid-flow problems with free surface. Eng. Anal. Boundary Elem. 57, 2–8 (2015)
8. Li, M., Chen, W., Chen, C.: The localized RBFs collocation methods for solving high dimensional PDEs. Eng. Anal. Boundary Elem. 37(10), 1300–1304 (2013)
9. Majdisova, Z., Skala, V.: Radial basis function approximations: comparison and applications. Appl. Math. Model. 51, 728–743 (2017)
10. Majdisova, Z., Skala, V., Smolik, M.: Determination of stationary points and their bindings in dataset using RBF methods. In: Silhavy, R., Silhavy, P., Prokopova, Z. (eds.) Computational and Statistical Methods in Intelligent Systems. Advances in Intelligent Systems and Computing, vol. 859, pp. 213–224. Springer, Cham (2019)

Design of Real-Time Transaction Monitoring System for Blockchain Abnormality Detection

Jiwon Bang and Mi-Jung Choi

Kangwon National University, 1 Gangwondaehak-gil, Chuncheon-si, Gangwon-do 24341, Republic of Korea
{jiwonbang,mjchoi}@kangwon.ac.kr

Abstract. Due to the recent popularity of Bitcoin, interest in the blockchain, the underlying technology of Bitcoin, has also increased. Blockchain is a distributed ledger technology that, unlike centralized methods, stores the transaction information occurring in P2P (peer-to-peer) networks on the ledger of every node and verifies that it is stored correctly. Blockchain features integrity, anonymity, and security. Besides Bitcoin, various cryptocurrencies such as Ethereum, Ripple, and Bitcoin Cash, as well as other blockchain-based technologies, are being developed using blockchain technology. However, illegal transactions that exploit the anonymity of the blockchain are a problem. In this paper, we propose a system for detecting and tracking illegal transactions by collecting and analyzing information in a blockchain network, and we introduce the design of a stable storage system within the whole system.

Keywords: Blockchain · Monitoring system

1 Introduction

Bitcoin first appeared in "Bitcoin: A Peer-to-Peer Electronic Cash System" [1], published by a developer named Satoshi Nakamoto in a cryptography community in October 2008, which describes the features and implementation principles of Bitcoin. The concept of Bitcoin described in the paper is electronic money that enables transactions between traders without the involvement of third parties such as brokers or financial institutions. The paper suggests a solution to the double-payment problem and asserts that Bitcoin can be used in practice. As public interest increased, the price of Bitcoin skyrocketed and trading proceeded vigorously, but the blockchain, its key technology, did not attract much attention. However, as the technical elements of the blockchain have been re-evaluated through continuous research, it has come to be recognized as one of the world's most promising technologies. Blockchains are distributed so that individual transactions can be achieved through trust among participants in the network, unlike traditional centralized servers, which provide transaction services by authorizing third parties such as financial institutions or banks. Among the components of the blockchain, the P2P network has the persistent problem that the nodes cannot trust each other's data. However, the blockchain guarantees the integrity of data through a consensus algorithm such as Proof of Work and ensures anonymity by not including the actual


information of the user when a transaction occurs [3]. Through these features, it is expected that many improvements will be made by utilizing blockchains in fields such as finance, security, networking, and the Internet of Things (IoT) [4–6]. Conversely, the anonymity of the blockchain has been found to be exploited for illegal transactions and crimes such as drug trafficking, fraud, arms trade, and money laundering [7]. In order to prevent illegal use of electronic money, a blockchain network monitoring technique, or a system for tracking illegal transaction details, is needed. In this paper, we design a monitoring system for detecting abnormalities in a blockchain network and introduce a system for efficiently storing the data collected from the blockchain network.

2 Related Works

Blockchain is a distributed data storage technology that shares data among all nodes in the network and performs transactions and verification without a central node. When a transaction occurs on the network, the blockchain checks whether the transaction is valid and whether a double-payment problem occurs before it is recorded in the distributed ledger. When this confirmation is completed, all nodes proceed to an agreement on the recording. A representative consensus algorithm is the proof-of-work protocol of Bitcoin, which utilizes the SHA-256 cryptographic hash algorithm. Proof of work is the mechanism by which any node receives the right to create a block if it finds a nonce value whose resulting hash is less than the target value. A block contains the block header, the block hash, the hash value of the previous block, and transaction data. The blockchain is secured by the Merkle tree [8], which is built from the hash values stored in the blocks, ensuring integrity together with the consensus algorithm.

A blockchain explorer is a web-based application providing statistical information about cryptographic coins such as Ethereum [9] as well as Bitcoin. It makes it possible to monitor information such as blocks and transactions without joining the network or downloading the blockchain. Representative examples are Blockexplorer [10], Etherscan [11], and Blockchain.info [12]. However, blockchain explorers are not able to identify illegal transactions. To overcome this problem, blockchain monitoring needs to be studied. Currently, there are few programs providing monitoring systems, and most are still under study. Among them, BitIodine [13] provides a function for tracking a user's path and reverse path in the Bitcoin network by utilizing monitoring data. Elliptic [14] and Chainalysis [15] provide similar functionality to track money laundering and cybercriminals using Bitcoin. In this paper, we propose and design a monitoring system to detect and analyze illegal activities and security attacks by monitoring not only the Bitcoin network but also the Ethereum network and smart contracts [16].
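To make the nonce search of proof of work concrete, a toy sketch is given below. It only illustrates the principle described above: real Bitcoin double-hashes an 80-byte block header against a compact-encoded target, which is not reproduced here.

import hashlib

def proof_of_work(header: bytes, target: int, max_nonce: int = 2**32):
    # Search for a nonce whose double-SHA-256 hash of the header is
    # below the target; Bitcoin uses the same double-hash construction.
    for nonce in range(max_nonce):
        data = header + nonce.to_bytes(4, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None

# A deliberately easy target so the search terminates quickly.
print(proof_of_work(b"example-header", target=1 << 240))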


3 Structure of Blockchain Network Monitoring System

The blockchain network monitoring system proposed in this paper consists of a collection agent, a node interface, a storage system, a data analysis engine, and a web server for visualization. First, the data to be collected by the collection agent is configured. The agent collects blocks, transactions, contracts, and node information, and the collection cycle is set differently per data type for efficient collection. For example, in the case of Bitcoin, only one or two blocks are generated every 10 min, whereas transactions occur several times per second on average. The data collected from each blockchain network by the agent is transmitted to the monitoring server through the node interface. The transmitted monitoring data is preprocessed before being stored in the database; this refinement is necessary because the types and forms of data in the Bitcoin and Ethereum networks differ. Once the preprocessed data is stored in a database according to the network platform and data type, the analysis engine can be used to research illegal transaction detection, security attack detection such as DDoS attack detection, and blockchain forensics. Finally, the web server is responsible for visualizing the collected data and the detection and analysis results, and providing them as a UI.

Figure 1 shows the configuration of the proposed system. First, we install an agent that collects data on nodes in blockchains such as Bitcoin and Ethereum. The block information, transactions, and node information collected in real time through the agent are transmitted to the node interface. The transmitted monitoring data is classified according to its characteristics and stored in the database. The analysis engine then analyzes the data stored in the database to detect tampering with transactions or ledgers, or money laundering used in a crime. Lastly, the information on the detected ledgers and the stored analysis results are provided through the web site.

Fig. 1. Structure of blockchain network monitoring system
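As an illustration of the collection agent, a minimal polling sketch is given below. It is an assumption-laden sketch rather than the authors' implementation: the RPC endpoint, the credentials and the send_to_node_interface stub are placeholders, and the Bitcoin Core JSON-RPC methods getblockcount, getblockhash and getblock are used as one possible data source.

import time
import requests

RPC_URL = "http://127.0.0.1:8332"      # hypothetical local Bitcoin node
AUTH = ("rpcuser", "rpcpassword")      # placeholder credentials

def rpc(method, params=None):
    payload = {"jsonrpc": "1.0", "id": "agent", "method": method,
               "params": params or []}
    return requests.post(RPC_URL, json=payload, auth=AUTH).json()["result"]

def send_to_node_interface(block):
    # Placeholder for the transport step to the node interface.
    print("new block", block["hash"], "with", len(block.get("tx", [])), "txs")

def poll_blocks(interval=10):
    # The polling cycle is a parameter, reflecting the per-data-type
    # collection periods mentioned above.
    last_height = rpc("getblockcount")
    while True:
        height = rpc("getblockcount")
        while last_height < height:
            last_height += 1
            block_hash = rpc("getblockhash", [last_height])
            block = rpc("getblock", [block_hash, 2])  # verbosity 2: with transactions
            send_to_node_interface(block)
        time.sleep(interval)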


4 Design of Blockchain Storage System

In this section, we design a storage system for the blockchain network monitoring system. The proposed monitoring system as a whole provides functions to detect illegal transactions and crime, and it requires efficient data storage for fast analysis and detection. Because the information collected by the agent (blocks, transactions, contracts, network and node information) is transmitted to the monitoring server in real time without discrimination, unnecessary delays occur at analysis time if the data is not processed. We therefore conduct preprocessing before the data is stored in the database.

Figure 2 shows the design of the storage system for storing monitoring data. First, the data transmitted in real time is fed into Apache Kafka [17] to prevent overloading the monitoring server. Apache Kafka is a message queuing system specialized for real-time log processing. It stores input data under topics designated by network and data type. Apache Storm [18] is then used to extract the information needed for analysis from the data stored in Kafka, and the preprocessed data is stored in a database. Figure 3 shows the database tables that store the preprocessed data. The block table stores information about a block, such as the block hash, block size, and the hash value of the previous block; the node table consists of an account address and ledger information. The transaction table stores information on transactions, such as the transaction ID, from/to information, and input/output values. The network table consists of bandwidth, throughput, source/destination addresses, and the protocol.

Fig. 2. Structure of storage system
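As an illustration of the ingestion step in Fig. 2, a minimal producer sketch follows. The broker address and the network.datatype topic naming scheme are assumptions, and the kafka-python client is used for brevity; the paper does not prescribe a client library.

import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish(network: str, data_type: str, record: dict):
    # Route a monitoring record to a topic designated by network and
    # data type, e.g. "bitcoin.block" or "ethereum.transaction".
    producer.send(f"{network}.{data_type}", record)

publish("bitcoin", "block", {"hash": "...", "size": 1234, "prev_hash": "..."})
producer.flush()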


Fig. 3. Database schema of blockchain monitoring system

5 Conclusion and Future Work

Blockchain is being actively researched in various fields such as finance, medicine, and electric power, owing to its characteristics of integrity, security, and decentralization. At the same time, abuse problems such as illegal transactions, crime, and money laundering are increasing. A monitoring system can therefore be regarded as a core system for preventing abuse of the blockchain. In this paper, we introduced the structure of a monitoring system that monitors a blockchain network to identify the network status and to detect and track abnormal or illegal transactions through the collected data. We designed a storage system for stable and efficient storage of the collected data within the blockchain network monitoring system. Future research will implement a monitoring system that analyzes and detects abusive behaviors and fraudulent activities.

Acknowledgments. This work was partly supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2016-0-00179, Development of an Intelligent Sampling and Filtering Techniques for Purifying Data Streams) and the ICT R&D program of MSIT/IITP (No. 2018-0-00539, Development of Blockchain Transaction Monitoring and Analysis Technology).

References

1. Nakamoto, S.: Bitcoin: a peer-to-peer electronic cash system (2008)
2. Pilkington, M.: Blockchain technology: principles and applications. In: Research Handbook on Digital Transformations (2016)
3. Zheng, Z., Xie, S., Dai, H., Chen, X., Wang, H.: An overview of blockchain technology: architecture, consensus, and future trends. In: IEEE International Big Data Congress, pp. 557–564. IEEE, Boston (2017)


4. Foroglou, G., Tsilidou, A.L.: Further applications of the blockchain. Columbia University PhD in Sustainable Development, New York (2015)
5. Zhang, Y., Wen, J.: An IoT electric business model based on the protocol of Bitcoin. In: 18th International Conference on ICIN, pp. 184–191, Paris (2015)
6. Kosba, A., Miller, A., Shi, E., Wen, Z., Papamanthou, C.: Hawk: the blockchain model of cryptography and privacy-preserving smart contracts. In: IEEE Symposium on SP, pp. 839–858, San Jose (2016)
7. Liao, K., Zhao, Z., Doupé, A., Ahn, G.J.: Behind closed doors: measurement and analysis of CryptoLocker ransoms in Bitcoin. In: eCrime APWG Symposium on IEEE, Toronto (2016)
8. Coelho, F.: An (almost) constant-effort solution-verification proof-of-work protocol based on Merkle trees. In: International Conference on Cryptology in Africa, pp. 80–93. Springer, Heidelberg (2008)
9. Wood, G.: Ethereum: a secure decentralised generalised transaction ledger. Technical report, Ethereum project yellow paper, vol. 151, pp. 1–32 (2014)
10. Blockexplorer. https://blockexplorer.com/
11. Etherscan. https://etherscan.io/
12. Blockchain.info. https://www.blockchain.com/en/explorer
13. Spagnuolo, M., Maggi, F., Zanero, S.: BitIodine: extracting intelligence from the Bitcoin network. In: International Conference on FC, pp. 457–468. Springer, Heidelberg (2014)
14. Elliptic. https://www.elliptic.co/
15. Chainalysis. https://www.chainalysis.com/
16. Buterin, V.: A next-generation smart contract and decentralized application platform. White paper (2014)
17. Apache Kafka. http://kafka.apache.org/
18. Apache Storm. http://storm.apache.org/

Risk Mapping in the Selected Town

Alžběta Zábranská, Jakub Rak, and Petr Svoboda

Tomas Bata University in Zlín, nám. T. G. Masaryka 5555, 760 01 Zlín, Czech Republic
{a_zabranska,jrak,psvoboda}@utb.cz

Abstract. This article describes the risk analysis and the risk mapping method applied in the town of Uherský Brod, developed by the Fire Rescue Service of the Moravian-Silesian Region on the basis of the methodology recommended by the European Union. The QGIS software tool was used for risk mapping in order to visualize the maps. The risk analysis was performed on the basis of the identification of the types of hazards that are very likely to occur in Uherský Brod. Subsequently, the results of the risk analysis were processed by means of the QGIS software, which enables the creation of the risk map once the individual maps have been compiled. In evaluating the resulting risk map, the individual areas of the town were assessed according to the discovered level of risk. The resulting risk maps can then serve multiple purposes in emergency planning.

Keywords: Emergency event · Hazard · Geographic information system · Modeling · Risk analysis · Risk mapping

1 Introduction

The development of society has always been accompanied by improvements in its protection. In the past, mankind felt threatened mainly by emergency events of natural character (floods, earthquakes, or extreme meteorological phenomena). Eventually, with the continuous development of modern technologies, emergency events of anthropogenic character (industrial and traffic accidents) began to emerge. Society has adopted a set of specific measures, which determine the level of preparedness, to overcome emergency events. In order for these measures to be effective and purposeful, it is primarily necessary, among other things, to perform a quality risk analysis. On the basis of the evaluation of the analysis, it is possible to identify the overall weight of risks in the given territory and to define parameters for determining the level of preparedness. The essence of the risk mapping method is elaborated in a handbook prepared under the European project Interreg IIIC SIPROCI, which aims to improve the response to emergency events using the interregional cooperation of European countries [1].

2 Problem Formulation

All objects, phenomena, risks and activities are related to a certain space and can interact with each other. For this reason, it is necessary to know the individual connections that could cause this interaction, together with the relevant spatial information. This makes it


possible to mark out floodplains, locate sources of dangerous chemical substances, or plan a route from X to Y [7]. Safety risks have their spatial information, which makes it possible to identify risk areas in the analyzed territory. However, it is important to consider a key aspect, which is the variable intensity of most types of hazards in the given territory. As a rule, the danger is more intense in the risk area directly adjacent to the source of danger compared to more distant areas. The primary support for risk mapping is provided by geographic information systems (GIS) [3, 5]. For the purposes of this paper, the multi-platform geographic information system QGIS was used. In the Czech Republic the risks differ; each type of risk is characterized by the territory in which it occurs and the intensity with which it manifests itself. The risk of firedamp rising uncontrolled to the surface in the Moravian-Silesian region can serve as an example, while in the Zlín region its occurrence is only incidental. In risk mapping, a particular type of hazard must be expressible by a cartographic projection; in other words, it must have a specific territorial manifestation [5]. According to statistical data for the analyzed territory of the town of Uherský Brod, the following emergency events may occur: natural flood, extraordinary flood, leakage of a dangerous chemical substance from a stationary source, leakage of a dangerous chemical substance during transport, railway and traffic accidents, damage to slope stability, fire, snow calamity, and bird flu.

Risk mapping is a process during which areas with different levels of risk are identified. This includes the results of risk assessments depicted on risk maps. A risk map represents the level of damage that can be expected in the given territory. It also allows identification of the composition and level of risk for each part of the analyzed area. Nevertheless, the main condition is the inclusion of only such emergency events whose manifestation can be expressed by means of a cartographic projection [1].

3 Problem Solution

Altogether, risk mapping includes five fundamental phases: determining the level of risk (a hazard map), vulnerability assessment (a vulnerability map), cumulative risk assessment (a cumulative risk map), preparedness assessment (a preparedness map), and accumulated risk assessment (an accumulated risk map) [1]. Only the first three phases are discussed in this article.

3.1 Materials and Methods

The risk mapping method is a process during which areas with different levels of risk are identified. Mapping involves the interaction of different types of hazards, identifying the vulnerability of the area and the level of preparedness of the analyzed area. Risk mapping is implemented on the basis of a geographic information system (GIS), in this case Quantum GIS, and statistical and numerical analyses [1]. Quantum GIS ("QGIS") is a multi-platform geographic information system which in particular allows users to view, create and edit raster and vector geodata, and to create map outputs. It was developed as open source, which guarantees the long-term sustainability of the devised method of operation and its extensibility [2, 3].

In order to view the individual manifestations of emergency events in the map, layers, or data from which a layer can be generated, must exist in QGIS. This includes, for instance, a list of roads, watercourses, or objects with specific addresses. For the purposes of this article, numerical model calculations were used to process the manifestations of the individual emergency events (e.g. leakage of dangerous chemical substances, using the ALOHA simulation software), along with long-term statistical monitoring of the weather and natural phenomena (especially landslides) [1, 5]. In addition, the following maps and databases were used for the needs of risk mapping: maps of flood risks, maps of areas affected by snow, and the digital geographic model of the Czech Republic ZABAGED with GIS map layers, which contain sub-categories of vulnerability and can therefore be used for the cartographic projection of vulnerability. Also, OpenStreetMap, a vector map database that is freely accessible on the Internet, was used [4, 8].

In order to determine the value of the risk level, a multi-criteria analysis using the method of expert estimation was applied. This analysis was performed on the basis of statistical data on the occurrence of emergency events in the territory of Uherský Brod in the last 20 years. An essential document for addressing this issue is the Risk Mapping Methodology devised by the Fire Rescue Service of the Moravian-Silesian Region [1]. During the risk mapping process, the issues of risk mapping were consulted with experts in crisis and emergency planning.

3.2 Problem Solution

For the needs of risk mapping, a multi-criteria analysis with expert estimation, devised by the Fire Rescue Service of the Moravian-Silesian Region in 2002, was used. This method is based on the estimation of the values of the criteria for the individual types of hazards, including the effect of possible consequential emergency events, and it also determines a level of risk in order to compare types of emergency events. This method of risk analysis was performed on the basis of statistical data on emergency events which occurred in the analyzed territory of the town of Uherský Brod in the last 20 years. Within a multi-criteria analysis, the risk represents: "expected negative consequences due to activation of danger in the given territory" [1]. In the risk mapping method the risk level is understood as: "the value of probability of occurrence of negative consequences during the given type of emergency event" [1].

MR = F × N    (1)

where F is the frequency of possible occurrence of the emergency event (EE) for a particular type of hazard and N represents the consequences of the EE. The consequences of the EE (N) can be further expressed as:

N = (Kt × Kohr × Kizs) / Pr    (2)

where Kt is the coefficient of the expected duration of the emergency event, Kohr is the coefficient of threat during the EE, Kizs is the coefficient expressing the need for the forces and resources of the Integrated Rescue System and the need for coordination of the EE, and Pr is the coefficient of possible time prediction.

The threat coefficient (Kohr) is given by the sum of individual elements of different weightings. In order to express the different weighting of the individual elements of the threat, weight coefficients are introduced into the calculation [1]:

Kohr = (Ko × VKo) + (Kp × VKp) + (Ke × VKe) + (Kb × VKb) + (Kz × VKz) + (Kd × VKd)    (3)

where Ko is the coefficient of individual elements of threat, Kp the coefficient of individual elements of the affected area, Ke the coefficient of threat to the biotic environment, Kb the coefficient of threat to buildings and built-up areas, Kz the coefficient of threat to breeding animals, Kd the coefficient of interruption of traffic, and VKo, VKp, … the individual weight coefficients.

The coefficient of the Integrated Rescue System (Kizs) consists of the coefficients of the Integrated Rescue System (IRS) sub-elements [1]:

Kizs = (Ks × VKs) + (Kk × VKk)    (4)

where Ks is the coefficient of the need for forces and resources of the IRS, Kk the coefficient of the need to coordinate the emergency event, and VKs, VKk the weight coefficients. The weight coefficients take into consideration the weighting of the individual elements; their values and the scale for them are based on the Risk Mapping Handbook. The table below presents the final levels of risk for the individual emergency events that may occur in the analyzed territory of Uherský Brod (Table 1) [1].


Table 1. The resulting level of risk for types of hazards occurring in the town of Uherský Brod Type of danger Natural flood Q20–730 persons Q50–1 400 persons Q100–1 840 persons Extraordinary flood – approx. 20 000 persons Leakage of a dangerous chemical substance Ice arena - 4 000 persons Pivovar Janáček, the brewery - 2 000 persons Raciola Jehlička, the company - 2 000 persons Traffic accident Railway accident Fire Zbrojovka, the company Rumpold, the company RPG Recycling, the company - 15 000 persons Windstorm Forest fire

Risk level 268,704 247,968 302,4 255,744 383,616 416,7 510,3 369,9 369,9 88,92 156 213,49 37,26 41,6 561,6 16,884 24,12
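The coefficient arithmetic of Eqs. (1)-(4) can be illustrated with a minimal sketch. All input values are placeholders: the real coefficient scales and weights come from the Risk Mapping Handbook [1].

def threat_coefficient(K, VK):
    # Kohr as the weighted sum of the threat sub-elements (Eq. 3);
    # K and VK map Ko, Kp, Ke, Kb, Kz, Kd to values and weights.
    return sum(K[n] * VK[n] for n in ("Ko", "Kp", "Ke", "Kb", "Kz", "Kd"))

def irs_coefficient(Ks, VKs, Kk, VKk):
    # Kizs from the IRS sub-elements (Eq. 4).
    return Ks * VKs + Kk * VKk

def risk_level(F, Kt, Kohr, Kizs, Pr):
    # MR = F x N, with N = (Kt x Kohr x Kizs) / Pr (Eqs. 1 and 2).
    return F * (Kt * Kohr * Kizs) / Pr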

The numeric value of the risk level established for the individual types of emergency events serves as a weighting factor, a comparative coefficient, in the process of risk cumulation [1].

3.2.1 The Hazard Map
Creating a hazard map is the first phase of risk mapping. Its basis is the maps of the individual types of hazards, i.e. the depicted manifestations of particular emergency events [1]. The hazard map is created by merging the maps of the individual types of hazards (Fig. 1).

3.2.2 The Vulnerability Map
The second phase of risk mapping includes the creation of a vulnerability map. The vulnerability of a territory can be understood as the susceptibility of the territory to the effects of an emergency event. For the purposes of risk mapping, the cumulative vulnerability indicator, which is determined by merging the vulnerability sub-elements, is used to express the level of possible loss and damage in the analyzed territory [4]. The vulnerability is cumulated in the areas where the individual elements of vulnerability intersect. The individual elements of vulnerability include the population (population density), crucial and public infrastructure, and the environment (Fig. 2). For the purposes of risk mapping, so-called polygon layers are used; the polygons are overlapped. The vast majority of the said vulnerabilities are represented by point or line layers in cartographic expression, and therefore these layers need to be converted to polygons. In risk mapping this is solved by means of so-called buffers of a given radius [6], as sketched below.
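A minimal sketch of this buffering step is given below, using the Shapely geometry library as one possible implementation outside QGIS; the coordinates and radii are placeholders.

from shapely.geometry import Point, LineString

# A point source of danger (e.g. a chemical storage site) and a line
# feature (e.g. a road), converted to polygons by buffering.
source_zone = Point(512300.0, 1180450.0).buffer(500.0)  # 500 m radius
road_zone = LineString([(511900.0, 1180100.0),
                        (513100.0, 1180400.0)]).buffer(50.0)

# Cumulation is evaluated where the buffered polygons intersect.
overlap = source_zone.intersection(road_zone)
print(overlap.is_empty, overlap.area)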


Fig. 1. Hazard map

3.2.3 The Map of Cumulative Risk
Creating a cumulative risk map is the third phase of risk mapping. During this phase the vulnerability and hazard maps are merged. The risk maps depict the level of risk in the town of Uherský Brod by means of colors. The level of risk is expressed by a multilevel color scale, e.g.: white color – zero risk, green color – low risk, yellow color – medium risk, orange color – high risk. The area with the highest risk should be further investigated, primarily in order to eliminate this risk (Fig. 3).


Fig. 2. Vulnerability map

The main output of the article is the interaction of hazards and vulnerability, which leads to the determination of cumulative risk. In the resulting map it is possible to see where cumulative risk is imminent and where it is not. It can also be estimated that the most serious emergency event in the town of Uherský Brod is a hundred-year flood. If such a volume of water overflows, secondary emergency events may be expected (e.g. leakage of chlorine from a cooling device, etc.). The results of risk mapping are presented on special maps that visually serve for a better understanding of the interconnections among the individual types of risk and allow the identification of the composition and level of risk for the given territory. They contain comprehensive information on the levels of risk in the area. In general, they show which parts of the town of Uherský Brod are at the highest risk level and thus could serve as one of the information elements for the population of the town. Depending on the results, it is possible to create a manual for the inhabitants, which would include a map with a legend, information on the highest risks in the town, and basic instructions on how to act during an EE.


Fig. 3. Map of cumulative risk

These maps also represent a basic input into emergency and crisis planning processes. They can serve as a basis for the design of improvised shelters or for the placement of warning and information elements depending on the risk areas. Another advantage is the possibility of planning routes for rescue and salvage operations (population evacuation, transportation of supplies, etc.) [2, 4].

4 Conclusion

Risk mapping involves five phases. It should be noted that the scope of this article was limited to the first three of them: the hazard map, the vulnerability map, and the map of cumulative risk. In the first part of the risk mapping, the sources of hazards were mapped within the analyzed territory. In particular, these included transport roads, railway transport, entities handling dangerous chemical substances, and rivers (simulations of twenty-year, fifty-year and hundred-year floods). In addition, significant and vulnerable objects were mapped to represent the vulnerability of the area. These included roads, crucial infrastructure, railways, power lines, cultural monuments and other important objects, the biotic environment and, above all, the population. By means of all this, the hazard map and the vulnerability map were created. The interaction of these two maps then led to the map of cumulative risk. The resulting risk map showed where hazards can cumulate in the analyzed territory of the town of Uherský Brod and where these hazards can negatively affect the individual vulnerabilities. The area with the highest risk level should be further investigated, primarily in order to eliminate the risk. In addition, publishing the resulting risk maps appears to be effective as well; this could serve as an efficient notification to the civilians about the hazards that exist in the territory of the town of Uherský Brod. Moreover, a primary emphasis could then be placed on educating the population in the field of population protection, or on the principles of appropriate behavior during an emergency event.

The major disadvantage of risk mapping is the impossibility of including hazards that cannot be easily expressed in space because their manifestations within the town of Uherský Brod are difficult to predict, e.g. earthquakes, meteorite falls, fires of natural origin, or economic instability. These phenomena cannot be included in risk mapping. Other limitations include the designation of the different levels of risk, because it is difficult to separate them. This designation depends on the person who evaluates the risks: what is perceived as a medium risk by one person can be perceived as a high risk by another. However, at the conclusion of the mapping the emphasis is placed on the value of the level of risk; the representation on the map in the color scale is understood only as a visualization of the results.

Thanks to the application of the risk mapping method in the town of Uherský Brod, the initial presumption of the suitability of this method for the analysis of safety risks was confirmed. The high degree of visualization associated with the use of GIS tools can be perceived as a significantly positive outcome: the outputs are more legible and clear and allow easy presentation of the results to both professionals and the population. In conclusion, there is potential for further elaboration of the results, especially for the implementation of the final steps of the risk mapping method, which are the determination of preparedness and of amended risk. These parts were omitted due to their complexity; the resulting risk mapping is sufficient for verifying the applicability of the method. The implementation of these parts is a subject of further research. Similarly, the preparation of educational materials for increasing the preparedness of the population will be the focus of future research. This step is directly related to one of the final stages of risk mapping, which is determining preparedness.

Acknowledgements. This paper is supported by the Internal Grant Agency at Tomas Bata University in Zlín, projects No. IGA/FLKR/2017/003 and No. IGA/FLKŘ/2018/001.


References

1. Kromer, A., Musial, P., Folwarczny, L.: Mapování rizik. Spektrum, Sdružení požárního a bezpečnostního inženýrství, Ostrava, 126 p. (2010). ISBN 978-80-7385-086-9
2. Applied Physics, System Science and Computers III: Proceedings of the 3rd International Conference on Applied Physics, System Science and Computers (APSAC2018). Springer, Dubrovnik, Croatia, 25–28 September 2018. ISBN 978-3-319-75605-9
3. Brehovsky, M., Jedlička, K.: Úvod do geografických informačních systémů, pp. 1–126. Plzeň, University in South Bohemia (2010). http://gis.czu.cz/studium/ugi/e-skripta/ugi.pdf
4. QGIS – The Leading Open Source Desktop GIS [online] (2018). https://www.qgis.org/en/site/about/index.html
5. Balint, T.: Aplikace geografických informačních systémů v oblasti ukrytí obyvatelstva. Bachelor thesis, Tomas Bata University in Zlín, FLCM, Czech Republic (2018)
6. GIScom: Geoinformation solutions [online] (2010). http://www.giscom.cz/en
7. Nétek, R., Burian, T.: Free and open source v geoinformatice, 1st edn. Palacký University, Olomouc (2018)
8. Kitchin, R., Lauriault, T.P., Wilson, M.W. (eds.): Understanding Spatial Media, 1st edn. Sage, Los Angeles (2017)
9. Tomaszewski, B.: Geographic Information Systems (GIS) for Disaster Management. CRC Press, Boca Raton (2015). ISBN 978-1-4822-1168-9

The Basic Process of Implementing Virtual Simulators into the Private Security Industry

Petr Svoboda, Jakub Rak, Dusan Vicar, and Michaela Zelena

Tomas Bata University in Zlin, nam. T. G. Masaryka 5555, 760 01 Zlin, Czech Republic
{psvoboda,jrak,vicar,m_zelena}@utb.cz

Abstract. This article is focused on the simplified implementation of virtual simulators into the private security industry. The first part of the article introduces the grounds of the problems to be solved, while the second part is focused on the description of an analyst role for the private security industry that helps with the implementation process. The third part is dedicated to the specification of requirements related to the implementation, and the last part presents a basic algorithm for the implementation of virtual simulators for the needs of the private security industry.

Keywords: Implementation · Private security industry · Software engineering · Virtual simulation

1 Introduction

Virtual simulators are useful tools for training members of the armed forces all around the world. A simulator enables participants to acquire knowledge and skills in a virtual environment; it also allows a large variety of scenarios while the risk associated with training is minimized. The training of employees of the private security industry is currently conducted in a standard way that can be referred to as live simulation. Virtual military simulators such as Virtual Battlespace 3 exist, but they need to be redesigned and customized to enable training by means of up-to-date virtual simulators. This means that the requirements for the redesign and customization have to be specified to software developers [1, 2].

A document entitled the Software Requirements Specification, also known as the Requirements Document, is the output of the analysis of customers' requirements. This document can be characterized as an official summary of the requirements that the software developers should implement. This analysis is usually done by an IT specialist: an analyst working for a SW developer. The previously mentioned specifications can be divided as follows:

1. User requirements – natural language phrases supplemented with diagrams that describe the services expected from the system and the limitations under which it must work.
2. System requirements – a detailed description of the functions, services and operational limitations of the software system. The document containing the system


requirements (sometimes called the functional specification) should define exactly what is to be implemented, and it can be included in a contract between the customer and the software developer. For both types of requirements, there are two forms of specification:

1. Specifications in natural language – the output is an almost unlimited and unstructured text that can be comprehensive, intuitive and universal, as well as vague and unclear.
2. Structured specifications – requirements are expressed while maintaining comprehensibility, structure, uniformity and expressiveness. Templates are usually used for this approach. The structured specifications are also often supplemented with tables and models depicting, for example, the functionality algorithm of the requested software and the relationships of its individual objects [3].
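As an illustration of such a template-driven structured specification, a single requirement might be recorded as below. The field set (identifier, description, rationale, fit criterion) follows common requirements-engineering practice and is an assumption of this sketch, not a format prescribed by the article.

requirement = {
    "id": "REQ-SIM-012",  # hypothetical identifier
    "title": "Scenario editor for patrol routes",
    "description": ("The system shall allow a trainer to define a patrol "
                    "route as an ordered list of waypoints in a scenario."),
    "rationale": "PSI guarding scenarios require repeatable routes.",
    "fit_criterion": ("A saved route with 10 waypoints replays identically "
                      "in two consecutive training sessions."),
    "priority": "must-have",
    "source": "PSI analyst",
}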

2 Analyst for the Private Security Industry

Above all, the algorithm proposed below is based on adjusting the existing analysis of data requirements. It is precisely the analysis of data requirements that has been identified as an activity whose processing can largely be transferred from the developers to the customers, which means directly to the employees of the private security industry (PSI). For these purposes, the term "PSI analyst" is used in this article to designate a person from the PSI field who performs the comprehensive processing of software requirements. Figure 1 indicates the task of the PSI analyst when processing software requirements.

Fig. 1. The task of the PSI analyst (diagram: PSI – PSI analyst – IT)

The approach described above has the following advantages:

• Accelerating the process of specifying software requirements for adjusting the simulator, by a person experienced in PSI problems.
• Cost savings owing to the use of the customer's own resources instead of the developers' resources.

In this case the analytical activity is carried out by the customer, who has in his or her ranks a person whose work tasks include analytical activities. In compliance with specific procedures, such a person is capable of creating the Requirements Document for specific implementations, provided he or she has experience in the field of the private security industry and information technologies. The objective of the procedure proposed below is to help customers develop a comprehensive, sufficiently detailed and comprehensible Requirements Document for subsequent implementation into the selected simulator.

3 Specifications of Requirements Related to the Implementation

The purpose of the procedure proposed below is to define the content and creation of the Requirements Document for the following areas:

1. Implementing new scenarios for training and adjusting the existing ones.
2. Additional adjustment of existing simulators, including the addition of new objects, their attributes and relations, or the editing of existing objects.
3. Implementing new actions that can be performed by the selected types of objects.

The proposed algorithm can be based on methods similar to those used in agile software development, especially the principle of incremental development, i.e. developing the system in phases. By means of gradual implementations, the chosen simulator is improved and completed to the point where it can be used for most of the scenarios intended for training in the private security industry.

The engagement of the previously mentioned PSI analyst enables a change in the approach to requirements engineering. Figure 2 depicts the process of requirements engineering, in which the individual sub-processes are usually processed by an IT analyst. In the figure, the processes in the province of the PSI analyst within the PSI are bordered in green, while those under the direction of both the PSI and the developers are bordered in orange. The processes preceding the creation of the Software Requirements Specification Document are without exception in the hands of the PSI analyst, and both parties then process the actual Document. The result of the process carried out by the PSI analyst is the first version, which is then handed to the developers for consultation. The PSI analyst then incorporates any observations, and this process is repeated until the Document is completed.

4 The Basic Algorithm for Implementing Virtual Simulators in the Private Security Industry

The basic algorithm for implementing virtual simulators in the private security industry is depicted in Fig. 3. The described algorithm is based on the PSI needs, and its aim is to perform training in a virtual simulator. Prepared training scenarios are received by the PSI analyst, who checks the completeness of the tool. If the simulator lacks some of the features important for training by means of these training scenarios, the Software Requirements Specification Document with specifications of the individual implementations is


Fig. 2. Scheme of the requirements engineering process in the PSI (feasibility study, survey and analysis of requirements, requirements specification, validation of requirements; outputs: feasibility report, system models, user and system requirements, requirements document) [4]

prepared. Consequently, the Document is taken over by the IT sector and the requirements are implemented into the simulator. Upon completion of the tool for training with the given scenario, the actual training in the simulator is performed.

In the extended algorithm, the PSI analyst proactively searches for new requirements for implementation based on the training scenarios. When a new requirement is detected, it is thoroughly analysed, validated and specified at two levels: the level of user requirements, and the level of system requirements and system models. Upon finalisation of these phases, the PSI analyst compiles the complete version of the Requirements Specification Document and submits it to the developers (IT). They revise the Document, and in the case of any deficiencies it is returned to the PSI analyst, who again incorporates the changes in the field of user and system requirements and system models. After incorporating the changes, the PSI analyst submits a new version of the Document for a repeated revision. When the Document is complete, it is implemented into the actual simulator; failing that, the process of incorporating new requirements is repeated. For the purpose of reviewing and completing the Requirements Specification Documents, version control is proposed.

Fig. 3. The basic algorithm for implementing virtual simulators in the PSI (training scenarios – PSI analyst – comprehensive tool? – if not, software requirements specification and implementation of requirements by IT; if so, training in a simulator) [5]

The version number is included in the title of the document in the following form:

document_title_v#.extension

where:
– document_title is the title of the document without diacritical marks,
– v# is an abbreviation of the word "version" and # is the document version number,
– extension is the standard extension of the document.

For instance: requirements_document_v1.docx

It is important to note that multiple revisions can be performed. The documents processed by the PSI analyst are designated by odd numbers, while those reviewed by the IT specialists are designated by even numbers.
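The naming convention and the odd/even ownership rule can be checked mechanically; the sketch below is illustrative only.

import re

NAME = re.compile(r"^(?P<title>[A-Za-z0-9_]+)_v(?P<ver>\d+)\.(?P<ext>\w+)$")

def revision_owner(filename: str) -> str:
    # Odd versions come from the PSI analyst, even versions from IT.
    m = NAME.match(filename)
    if not m:
        raise ValueError("does not match document_title_v#.extension")
    return "PSI analyst" if int(m.group("ver")) % 2 else "IT"

print(revision_owner("requirements_document_v1.docx"))  # PSI analyst
print(revision_owner("requirements_document_v2.docx"))  # IT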


5 Conclusion

The presented research introduces the basic process of implementing virtual simulators into the private security industry. The newly created position of the PSI analyst facilitates the implementation of virtual simulators in the PSI and simplifies the process of specifying requirements. Owing to this, it enables the use of simulators for the training of employees in the private security industry. Follow-up research will be focused on the individual crucial implementations, specifically the implementation of the types of objects, actions and scenarios, using the specific algorithms and tools that the implementation allows.

Acknowledgments. This paper is supported by the Internal Grant Agency at Tomas Bata University in Zlin, projects No. IGA/FLKR/2017/003 and No. IGA/FLKR/2018/001, and the project Excellence of Department of Population Protection.

References

1. Svoboda, P., Ševčík, J.: The use of simulation in education of security technologies, systems and management. In: Proceedings of the 2014 International Conference on Applied Mathematics, Computational Science & Engineering (ACMSE 2014), Varna, Bulgaria (2014). ISBN 978-1-61804-246-0
2. Svoboda, P., Lukas, L., Rak, J., Vicar, D.: The virtual training of hazardous substances transportation. In: Proceedings of the 19th International Scientific Conference Transport Means 2015, Kaunas (2015). ISSN 1822-296X (print), ISSN 2351-7034 (online)
3. Sommerville, I.: Software Engineering, 10th edn. Pearson, Boston (2016). ISBN 978-0133943030
4. ISO/IEC/IEEE 29148:2011(E): Systems and software engineering – Life cycle processes – Requirements engineering, 1st edn. IEEE, Piscataway (2011)
5. Pressman, R., Maxim, B.: Software Engineering: A Practitioner's Approach, 8th edn. McGraw-Hill Education, New York (2015). ISBN 978-1-259-25315-7

Security of a Selected Building Using KARS Method

Kristina Benesova, Petr Svoboda, Jakub Rak, and Vaclav Losek

Tomas Bata University in Zlin, nam. T. G. Masaryka 5555, 760 01 Zlin, Czech Republic
{k2_benesova,psvoboda,jrak,losek}@utb.cz

Abstract. This article is focused on securing a selected building in the Czech Republic. The aim of the article is to analyze the risks and propose measures. The first part of the article introduces security issues with a special focus on intrusion detectors, fire detection and fire-alarm systems, and, last but not least, electronic security systems. Furthermore, it deals with the characteristics of the building and a subsequent analysis by means of the KARS method. Measures proposed for improving the current situation are presented in the conclusion. The results of the research allow for the implementation of the proposals in practice.

Keywords: Analysis · Detector · Security · Security systems

1 Introduction

In recent years, society has been experiencing property crime with an ever-increasing tendency. It is therefore not surprising that people are increasingly improving the security not only of their health but also of their property. Although Act No. 110/1998 Sb., on the Security of the Czech Republic, guarantees citizens security, an increasing interest in security on the part of the subjects themselves can be observed. Through a risk analysis, this work proposes measures which will lead to improved protection of the selected building. Prior to the risk analysis and any proposed protection measures, it is necessary to become familiar with the devices which will be discussed later on.

An intrusion detector is a device designed to generate a signal or intrusion report in response to an abnormal state indicating the presence of danger [1]. Intrusion detectors can be divided according to several criteria. The first criterion is whether they are powered (passive and active) or non-powered (destructive and non-destructive). Furthermore, the detectors are divided according to the type of protection they provide within their location and direction; the types of protection include perimeter, external and spatial protection, and the protection of objects. A further classification is based on the physical signal used; the detectors can be electromechanical, electromagnetic and electroacoustic.


The intrusion detector should be resistant to unauthorized access to its components and settings and to removal from its fixture, resistant to changes of orientation, and it has to be sensitive to disturbance by magnetic fields [1, 2]. Alarm security and emergency systems inform about unwanted intrusion into the building. These devices are inherently ineffective if the information is not passed on early enough to the designated persons. Within this field there have been constant innovations and developments related to communicators, control peripherals, smart wiring and, last but not least, the area of active protection. The end-points of these systems are central dispatching stations or surveillance and alarm reception centres, in which a person receives a signal from the device and then sends out an authorized person [1]. In order to fulfil the basic functions of a fire alarm system (FAS), the FAS control panel and the fire alarms are connected to create a signalling line circuit loop; the requirements for the individual components can be found in a Standard that specifies the technical requirements [3]. Mechanical barriers are all means that are used to protect against forced entry by persons; their task is to impede the perpetrator as much as possible. This group includes, for example, security doors, iron bars or window protectors to prevent or hinder access to the building [4].

2 Characteristics of the Selected Building

The selected building is located in a municipality with extended powers; it is a ground-floor family house inhabited by only two persons. The building is secured only by basic elements of perimeter protection. The perimeter is defined by the registered boundary, and the protective elements must have high climatic resistance. Then there is the external protection, which is implemented on the exterior of the protected building, i.e. walls, doors, windows, locks, locking systems, bars, security foils, camera systems and intrusion detectors.

2.1 The KARS Method

The qualitative method of risk analysis with risk correlations (KARS) was used for the correct evaluation of the appropriate security elements. This method identifies the most significant risks, which then lead to proposals of measures for the given building. The first step was to compile a list of the possible sources of risk for the building. In total, ten types of risk with a probability of occurrence were selected. The resulting risks can be seen in Table 1.

Creating the table of risks is another important phase of the KARS analysis. The first column contains the selected types of risks for the building, numbered 1 to 10, while the first row of the table contains the corresponding risk numbers. The method itself is based on the interaction and correlation of the individual types of risks. For proper compliance with the procedure, the table must be filled in as follows:


Table 1. Risk correlation table [Own source]

Risk                             | 1 2 3 4 5 6 7 8 9 10 | Total
1. Breaking the window           | 0 1 0 1 0 0 0 0 0 0  | 2
2. Break-in                      | 1 0 1 1 1 1 1 1 0 1  | 8
3. Fire                          | 1 0 0 1 1 1 0 1 0 1  | 6
4. Failure of mechanical systems | 0 1 0 0 1 0 0 0 0 1  | 3
5. Power failure                 | 0 1 0 1 0 0 0 0 0 1  | 3
6. Damage to the facade          | 0 0 0 0 0 0 0 0 0 0  | 0
7. Cyber-attack                  | 0 1 0 1 1 0 0 0 0 1  | 4
8. Explosion                     | 1 0 1 1 1 1 0 0 0 1  | 6
9. Flood                         | 0 1 0 1 1 1 0 0 0 1  | 5
10. FAS failure                  | 0 1 0 1 0 0 0 0 0 0  | 2
Total                            | 3 6 2 8 6 4 1 2 0 7  |

• 1 is filled in if Ri can cause risk Rj.
• 0 is filled in if Ri cannot cause risk Rj [5].

For the risk quantification, the activity and passivity coefficients were used. By means of these coefficients, the resulting correlation table was transformed into mathematical form and then into graphical form.

• KARi – the activity coefficient – represents the percentage of the number of the selected types of risks that are linked to the risk marked as Ri; if risk Ri occurs, these consequential risks can be triggered.
• KPRi – the passivity coefficient – represents the percentage of the number of the selected types of risks which are linked to the risk marked as Ri and which may subsequently trigger the risk Ri.

In order to express the activity and passivity coefficients, it was necessary to count the number of possible combinations. Provided that risk Ri cannot induce itself, while it can induce other types of risks or be induced by them, and given x = 10 risks, the number of possible combinations is x − 1 [5].
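Given the correlation table, the coefficients can be computed directly. The percentage normalization by x − 1 = 9 follows the combination count stated above; the matrix reproduces Table 1.

matrix = [  # rows and columns ordered as risks 1-10
    [0, 1, 0, 1, 0, 0, 0, 0, 0, 0],
    [1, 0, 1, 1, 1, 1, 1, 1, 0, 1],
    [1, 0, 0, 1, 1, 1, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 0, 0, 0, 0, 1],
    [0, 1, 0, 1, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 1, 1, 0, 0, 0, 0, 1],
    [1, 0, 1, 1, 1, 1, 0, 0, 0, 1],
    [0, 1, 0, 1, 1, 1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0, 0, 0, 0, 0, 0],
]
x = len(matrix)
KAR = [100 * sum(row) / (x - 1) for row in matrix]        # activity
KPR = [100 * sum(col) / (x - 1) for col in zip(*matrix)]  # passivity
for i, (a, p) in enumerate(zip(KAR, KPR), start=1):
    print(f"Risk {i}: KAR = {a:.1f} %, KPR = {p:.1f} %")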

2.2 Evaluation of the KARS Method

The resulting correlation chart aims to determine the significance of all types of risks and their correlation in the system. Evaluation of the KARS method (Fig. 1):

Sphere I: Primarily and secondarily hazardous risks – 1 (breaking the window), 2 (break-in), 3 (fire), 4 (failure of mechanical devices), 5 (power failure), 8 (explosion), 10 (FAS failure).
Spheres II and III: Primarily or secondarily hazardous risks – 6 (damage to the facade of the building), 7 (cyber-attack), 9 (flood).
Sphere IV: Relatively safe – no risks detected [5].


Fig. 1. Evaluation of the KARS method

3 Proposed Measures

A perimeter locator system using RFID (radio-frequency identification) tags was chosen for the perimeter protection. The system is to be placed on the fence, and its advantage is that it eliminates false alarms, which is highly desirable in this case. Detectors buried in the ground were excluded due to their frequent false alarms. As a last resort, differential pressure detectors could also be considered. They are intended for the perimeter protection of the guarded area; such a detector is capable of sensing stimuli up to 100 m away and can be used even in very rugged terrain. Since these detectors are hidden underground, it is difficult for intruders to discover them. Their disadvantage, however, is that they are sensitive to the roots of trees and shrubs.

The external protection of the building can be provided by a security door and wireless glass-break detectors located above the windows; these detectors are necessary because the windows are the weakest points of the building. The spatial protection will be covered by an alarm security and emergency system without a fingerprint reader; instead, a remote control, which is much more practical for this building, is to be used. In addition, a passive infrared sensor will be located in every room. This sensor evaluates changes in the infrared spectrum of electromagnetic waves. It is one of the


most widespread types of motion detectors designed for spatial protection. Undoubtedly, its advantages are its ease of installation and low power consumption. Other proposed devices include fire alarms, smoke detectors in particular, which are either ionization detectors (they detect the change in the conductivity of air caused by the loss of ionized air in the measuring chamber) or optical detectors (they detect the solid particles of smoke generated during a fire, which affect the spreading of a light beam emitted through a layer of air contaminated by smoke).

4 Conclusion

Humanity has been concerned with the protection of health, property, social and other values since the beginning of the ages. In recent years, the market has come up with several innovations of existing systems, as well as new products that contribute every day to better protection of these values. Securing property is often a very daunting problem. A critical factor is, above all, the mechanical resistance of security systems and the correct choice of security systems. Still, there are individuals who are unaware of the seriousness of the risks that threaten us and which can be avoided.

This article dealt with the security of a selected building, in this case a family house. At the beginning, the concepts that appear in the subsequent parts of the article and which are necessary for understanding the given issues were introduced. This was followed by the description of the selected building and its current security features. Further, a qualitative risk analysis was carried out using the correlation of risks by means of the KARS method. Within the analysis it was important to create a list of ten risks related to the security of the given building. Upon the creation of the list, a table of risks was created, to which the numbers 1 or 0 were assigned. The next step included the calculation of the activity and passivity coefficients. Based on these calculations, the resulting correlation chart was created. The chart of correlations identified primary and secondary risks such as breaking the window, break-in, fire, failure of mechanical devices, power failure, explosion, and FAS failure.

The proposal for risk mitigation is a conceptual solution for the implementation of individual measures. Owing to the measures taken, the likelihood of vulnerability of the building and of the assets themselves should be reduced. Besides the evaluation of the risk analysis in the form of the proposed measures, the outcomes of the work also include a model with the implementation of the individual proposals leading to the reduction of the risks. The analysis served for proposing measures that would improve the current security of the building.



Social Network Analysis of “Clexa” Community Interaction Patterns

Kristina G. Kapanova1 and Velislava Stoykova2

1 Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, Acad. G. Bonchev 25A, 1113 Sofia, Bulgaria, [email protected], [email protected]
2 Institute for Bulgarian Language, Bulgarian Academy of Sciences, 52, Shipchensky prohod str., bl. 17, 1113 Sofia, Bulgaria, [email protected]

Abstract. The paper (the work was partially supported by the DFNP I02/10 Grant, funded by the Bulgarian Science Fund) presents the results of studying the interaction patterns of the ‘clexa’ fan community in discussing the TV show ‘The 100’ on Tumblr. The methodology uses complex network analysis and a neural embedding model to study the topical homophily of the network. It regards hashtags both as indexical tools and as linguistic tags, investigating their co-occurrence network and translocality. The conclusion outlines that, by means of hashtag usage, users express their communication patterns and generate community knowledge, resulting in collective actions of fans.

1 Introduction

Social media platforms (Facebook, Twitter, Tumblr, etc.) have had a profound impact on the way people engage with one another, acquire and propagate information, exchange ideas and react to contemporary societal events. Despite the multimodal and multimedia characteristics of these platforms, the communication between people and groups on them remains predominantly textual. The growing amount of user-generated content has given researchers the opportunity to study the formation of virtual communities, the process of information diffusion, the transformation of linguistic practices, and the development of novel linguistic expressions, styles and terminology characterizing communities of users. One such novel linguistic expression is the widespread adoption of hashtags. Hashtags were envisaged as a type of ‘channel tag’ on Twitter to categorize messages according to their topic [1]. A hashtag can be described as a type of label or metadata facilitating the retrieval of posts related to a specific subject. In addition to their original purpose of helping with the classification, archiving and retrieval of information, hashtags have evolved into a “community building linguistic activity” [2]. A hashtag can be described as a technomorpheme: a ‘clickable’ linguistic fragment which facilitates threaded conversation. The communication practices of online communities through the intrinsic labeling of information in the form of hashtags have been a primary focus for researchers. Studying topical homophily in Twitter messages based on the use of hashtags has led to important insights about user connectivity patterns [3]. The linguistic and cultural influence achieved by the propagation and adoption of hashtags in various online communities was examined in [4], where a dependency between the hashtag distribution and the frequency ranking of the adopted hashtags was found. Romero [5] reviewed the model of information spread online, summarizing its dependence on the manner of hashtag propagation. The integration of the linguistic, metadiscursive and social functions of hashtags necessitates their study as a community-building and discursive practice. The present study adopts the toolbox of complex network analysis and a neural embedding model to study the distinct language conventions and communication patterns embraced by a particular media fan community, recognizable by the name ‘clexa’, part of the larger fandom of the TV show ‘The 100’.

2 Data Structure

The collection of digital data comprises publicly available Tumblr messages containing the hashtag ‘clexa’. It covers the period between 7 March 2016 and 5 April 2016 and comprises a total of 19915 entries with 12326 hashtags, originating from 7079 different blogs. Within the collected data, one blog published 801 posts, 33 blogs published more than 30 times, and 4306 blogs are represented by only a single post in the sample. It should be noted that the most frequently publishing blog is an automatic bot which broadcast information about fanfiction. The data were collected using a custom-built crawler that queries the standard Tumblr API. The following information is collected for each post: blog name, item id, post url, type of post (text, image, video, link, etc.), date and time-stamp, note count, caption, link url, photos (with attributes such as width, height and url) and the hashtags of each post.
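As an illustration, a minimal sketch of such a crawler is given below. It pages through the public Tumblr v2 tagged endpoint with the requests library; the API key, the output file name and the paging logic are assumptions made for the example, not details reported by the authors.

    import json
    import time
    import requests

    API_KEY = "YOUR_TUMBLR_API_KEY"  # assumption: a registered consumer key
    TAGGED_URL = "https://api.tumblr.com/v2/tagged"

    def crawl_tag(tag, before, until, out_path):
        """Page backwards through posts tagged `tag`, from `before` down to
        `until` (both UNIX timestamps), storing the fields used in the study."""
        with open(out_path, "w", encoding="utf-8") as out:
            while before > until:
                resp = requests.get(TAGGED_URL, params={
                    "tag": tag, "api_key": API_KEY, "before": before})
                posts = resp.json().get("response", [])
                if not posts:
                    break
                for post in posts:
                    record = {
                        "blog_name": post.get("blog_name"),
                        "id": post.get("id"),
                        "post_url": post.get("post_url"),
                        "type": post.get("type"),
                        "timestamp": post.get("timestamp"),
                        "note_count": post.get("note_count"),
                        "caption": post.get("caption"),
                        "photos": post.get("photos"),
                        "tags": post.get("tags", []),
                    }
                    out.write(json.dumps(record) + "\n")
                # posts arrive in reverse chronological order, so the oldest
                # timestamp on the page becomes the next paging cursor
                before = min(p["timestamp"] for p in posts)
                time.sleep(1)  # stay well under the API rate limits

    # e.g. crawl_tag("clexa", before=1459814400, until=1457308800,
    #                out_path="clexa.jsonl")  # timestamps bracket the window

Each line of the resulting file then holds one post as a JSON record, which is convenient for the network construction described in the next section.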

3 Methodology

Since a media fandom comprises individuals united within a subcultural frame based on a shared interest, its members develop their own vernacular through which they communicate. The terminology of fan language – including emoticons, abbreviations and distinct jargon referring to important events or artifacts – is fundamentally shaped by shared experience. The data described above therefore form the foundation for exploring the fandom-specific vocabulary realized through hashtags during the communication process, together with its patterns.

3.1 Using Hashtags Both as Indexical Tools and as Linguistic Tags

The dual nature of hashtags enables them to function both as indexical tools and as carriers of linguistic information in an ongoing fan discourse. The latter role accentuates the importance of topics and meaning in fan communities, as well as their linguistic and cognitive boundaries through time. Following the interpretation that language is ‘the place where our sense of ourselves, our subjectivity is constructed’, we examine the evolution of fan language during the process of participation in and interpretation of media texts, carried forward through hashtags. Considering that during human interactions people tend to align their language across all linguistic levels, the developed network provides a way to understand the tagging behavior of fans, their ability and knowledge in manipulating the available resources, and their capacity to build collective meaning through shared tags. The analytical framework adopted in this study, combined with the social contextualization of the data, enables us to investigate the characteristics of the fans’ communication strategies by distinguishing linguistic behaviors. The propensity for topical homophily (i.e. for fans to associate and connect with others who are similar to them in their interests), grounded in the underlying semantics of hashtags, provides important insights into user connectivity patterns. Co-occurrence networks and neural embedding model analysis reveal emblematic linguistic characteristics, shaped by prior texts and intertextual references.

3.2 Neural Embedding Model Analysis

In the context of fan media studies, we utilize the methodology of [6], deriving a hashtag co-occurrence network to study the interaction between the users’ cultural generativity of hashtags and their semantic content. A network represents the components of a system – called nodes or vertices – and the direct interactions between them, known as links or edges. There are two main network parameters: the set of nodes, representing the aggregation of elements in the system, and the set of links, representing the interactions between the nodes. In this particular case, the hashtag co-occurrence network is an undirected graph G = (V, E), with V the set of nodes and E the set of edges, built from the set of hashtags parsed from the collected data. Each node v ∈ V represents a hashtag, and each edge e ≡ (v_i, v_j) ∈ E encodes a semantic association (adjacency relation) between the hashtags v_i and v_j formed in the tagging space of a post. The network is weighted, and the edge weight gives the frequency with which the two hashtags co-occur in a post.

From a parametric point of view, we focus on several properties of the network. One of the key properties of each node is its degree, i.e. the number of its connections to other nodes. The degree centrality measure is often very effective in determining the influence of a node on the structure of the system. In the case of an undirected network, the computation of this indicator is straightforward, since if node A is connected to node B, then B is by definition connected to A. Let the degree of the i-th node in the network be denoted by k_i (for instance, k_1 = 2, k_2 = 3, k_3 = 4, k_4 = 5). For our undirected network, the total number of links L is expressed through the sum of the node degrees as

L = \frac{1}{2} \sum_{i=1}^{|V|} k_i, (1)

where the factor 1/2 corrects for the fact that each link is counted twice due to the lack of directionality.

We also examine the network’s clustering coefficient to discern the structure of the network and to establish the main hashtags and their semantic relationships. For a network of n nodes, let c_{ij} denote the number of links between nodes i and j (c_{ij} = c_{ji} ≥ 0). By s_{ij} we denote the association strength of nodes i and j, given by

s_{ij} = \frac{2 m c_{ij}}{c_i c_j}, (2)

with c_i the total number of links of node i and m the total number of links in the network, i.e.

c_i = \sum_{j \neq i} c_{ij} (3)

and

m = \frac{1}{2} \sum_{i} c_i. (4)
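To make the construction concrete, the following sketch builds the weighted hashtag co-occurrence network with the networkx library and computes the node degrees of Eq. (1) and the association strengths of Eqs. (2)–(4); the variable names and the toy post list are illustrative assumptions, not part of the original pipeline.

    from itertools import combinations
    import networkx as nx

    # toy input: the tag list of each collected post (illustrative only)
    posts = [
        ["clexa", "the100", "lexa"],
        ["clexa", "lexa", "fanart"],
        ["clexa", "the100"],
    ]

    G = nx.Graph()
    for tags in posts:
        # every pair of hashtags co-occurring in a post gets (or strengthens) an edge
        for u, v in combinations(sorted(set(tags)), 2):
            if G.has_edge(u, v):
                G[u][v]["weight"] += 1
            else:
                G.add_edge(u, v, weight=1)

    # degree k_i of each node and total number of links L = (1/2) sum_i k_i
    degrees = dict(G.degree())
    L = sum(degrees.values()) // 2

    # association strength s_ij = 2 * m * c_ij / (c_i * c_j), with c_i the
    # weighted degree of node i and m half the sum of all weighted degrees
    c = dict(G.degree(weight="weight"))
    m = sum(c.values()) / 2
    s = {(i, j): 2 * m * d["weight"] / (c[i] * c[j])
         for i, j, d in G.edges(data=True)}

    print(L, s)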

For mapping purposes, one establishes for each node i a vector x_i ∈ R^p (with p = 2 here) giving the location of node i on a two-dimensional map. For clustering, one instead assigns to each node i a positive integer x_i indicating the cluster to which i belongs. The approach to mapping and clustering is based on [7], where one needs to minimize

V(x_1, \ldots, x_n) = \sum_{i<j} s_{ij} d_{ij}^2 - \sum_{i<j} d_{ij}
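A minimal sketch of this objective is given below, assuming the mapping variant of [7] in which d_ij is the Euclidean distance between the node coordinates; the function and array names are illustrative.

    import numpy as np

    def objective(X, S):
        """V(x_1, ..., x_n) = sum_{i<j} s_ij * d_ij**2 - sum_{i<j} d_ij
        for node coordinates X (n x 2 array) and association strengths S (n x n)."""
        n = X.shape[0]
        V = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(X[i] - X[j])  # Euclidean distance d_ij
                V += S[i, j] * d**2 - d
        return V

    # e.g. a random layout of 4 nodes with uniform association strengths
    rng = np.random.default_rng(0)
    print(objective(rng.random((4, 2)), np.full((4, 4), 0.5)))

The first term pulls strongly associated hashtags together on the map, while the second term pushes all pairs apart, which prevents the trivial solution of collapsing every node into a single point.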