Computational Intelligence: 9th International Joint Conference, IJCCI 2017 Funchal-Madeira, Portugal, November 1-3, 2017 Revised Selected Papers [1st ed.] 978-3-030-16468-3;978-3-030-16469-0

This book presents revised and extended versions of the best papers presented at the 9th International Joint Conference on Computational Intelligence (IJCCI 2017), held in Funchal, Madeira, Portugal, from November 1 to 3, 2017.


English. Pages: XIV, 352 [355]. Year: 2019.


Table of contents :
Front Matter ....Pages i-xiv
Particle Stability Time in the Stochastic Model of PSO (Krzysztof Trojanowski, Tomasz Kulpa)....Pages 1-18
Optimizing Costs and Quality of Interior Lighting by Genetic Algorithm (Alice Plebe, Vincenzo Cutello, Mario Pavone)....Pages 19-39
Mining of Keystroke and Mouse Dynamics to Increase the Engagement of Students with Programming Assignments (Mario Garcia Valdez, Juan-J. Merelo, Amaury Hernandez Aguila, Alejandra Mancilla Soto)....Pages 41-61
Improving Genetic Programming for Classification with Lazy Evaluation and Dynamic Weighting (Sašo Karakatič, Marjan Heričko, Vili Podgorelec)....Pages 63-75
Defuzzification of a Fuzzy p-value by the Signed Distance: Application on Real Data (Rédina Berkachy, Laurent Donzé)....Pages 77-97
Foundations of a DPLL-Based Solver for Fuzzy Answer Set Programs (Ivor Uhliarik)....Pages 99-117
Exploring Internal Representations of Deep Neural Networks (Jérémie Despraz, Stéphane Gomez, Héctor F. Satizábal, Carlos Andrés Peña-Reyes)....Pages 119-138
Adapting Self-Organizing Map Algorithm to Sparse Data (Josué Melka, Jean-Jacques Mariage)....Pages 139-161
A Diffusion Approach to Unsupervised Segmentation of Hyper-Spectral Images (Alon Schclar, Amir Averbuch)....Pages 163-178
Evaluating Methods for Building Arabic Semantic Resources with Big Corpora (Georges Lebboss, Gilles Bernard, Noureddine Aliane, Adelle Abdallah, Mohammad Hajjar)....Pages 179-197
Efficient Approaches for Solving the Large-Scale k-Medoids Problem: Towards Structured Data (Alessio Martino, Antonello Rizzi, Fabio Massimo Frattale Mascioli)....Pages 199-219
Automated Diagnostic Model Based on Isoline Map Analysis of Myocardial Tissue Structure (Olga V. Senyukova, Danuta S. Brotikovskaya, Svetlana G. Gorokhova, Ekaterina S. Tebenkova)....Pages 221-238
Framework for Discrete-Time Model Reference Adaptive Control of Weakly Nonlinear Systems with HONUs (Peter M. Benes, Ivo Bukovsky, Martin Vesely, Jan Voracek, Kei Ichiji, Noriyasu Homma)....Pages 239-262
Assessing Transfer Learning on Convolutional Neural Networks for Patch-Based Fingerprint Liveness Detection (Amirhosein Toosi, Sandro Cumani, Andrea Bottino)....Pages 263-279
Environment Scene Classification Based on Images Using Bag-of-Words (Taurius Petraitis, Rytis Maskeliūnas, Robertas Damaševičius, Dawid Połap, Marcin Woźniak, Marcin Gabryel)....Pages 281-303
Cellular Transport Systems Improved: Achieving Efficient Operations with Answer Set Programming (Steffen Schieweck, Gabriele Kern-Isberner, Michael ten Hompel)....Pages 305-326
Reinforcement Learning and Attractor Neural Network Models of Associative Learning (Oussama H. Hamid, Jochen Braun)....Pages 327-349
Back Matter ....Pages 351-352


Studies in Computational Intelligence 829

Christophe Sabourin · Juan Julian Merelo · Kurosh Madani · Kevin Warwick, Editors

Computational Intelligence 9th International Joint Conference, IJCCI 2017 Funchal-Madeira, Portugal, November 1–3, 2017 Revised Selected Papers

Studies in Computational Intelligence Volume 829

Series Editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland

The series “Studies in Computational Intelligence” (SCI) publishes new developments and advances in the various areas of computational intelligence—quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output. The books of this series are submitted to indexing to Web of Science, EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink.

More information about this series at http://www.springer.com/series/7092


Editors
Christophe Sabourin, IUT Sénart, Université Paris-Est Créteil (UPEC), Créteil, France
Juan Julian Merelo, Department of Computer Architecture and Technology, University of Granada, Granada, Spain
Kurosh Madani, Université Paris-Est Créteil (UPEC), Créteil, France
Kevin Warwick, University of Reading, Reading, UK, and Coventry University, Coventry, UK

ISSN 1860-949X  ISSN 1860-9503 (electronic)
Studies in Computational Intelligence
ISBN 978-3-030-16468-3  ISBN 978-3-030-16469-0 (eBook)
https://doi.org/10.1007/978-3-030-16469-0

© Springer Nature Switzerland AG 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Organization

Conference Co-chairs
Kurosh Madani, University of Paris-EST Créteil (UPEC), France
Kevin Warwick (honorary), University of Reading and Coventry University, UK

Program Co-chairs
Christophe Sabourin, IUT Sénart, University of Paris-EST Créteil (UPEC), France
Juan Julian Merelo, University of Granada, Spain
Una-May O’Reilly, MIT Computer Science and Artificial Intelligence Laboratory, USA

Program Committee
Ajith Abraham, Machine Intelligence Research Labs (MIR Labs), USA
Salmiah Ahmad, International Islamic University Malaysia, Malaysia
Jesús Alcalá-Fdez, University of Granada, Spain
Aboul Ali, Faculty of Computers and Information, Egypt
Plamen Angelov, Lancaster University, UK
Michela Antonelli, University of Pisa, Italy
Shawki Areibi, University of Guelph, Canada
William Armitage, University of South Florida, USA
Sansanee Auephanwiriyakul, Chiang Mai University, Thailand
Dalila B. M. M. Fontes, Faculdade de Economia and LIAAD-INESC TEC, Universidade do Porto, Portugal
Thomas Baeck, Leiden University, The Netherlands


Helio Barbosa, Laboratorio Nacional de Computaçao Cientifica, Brazil
Mokhtar Beldjehem, University of Ottawa, Canada
Gilles Bernard, Paris 8 University, France
Daniel Berrar, Tokyo Institute of Technology, Japan
Mohsin Bilal, Umm Al-Qura University, Saudi Arabia
Yevgeniy Bodyanskiy, Kharkiv National University of Radio Electronics, Ukraine
Ahmed Bufardi, NA, Switzerland
Ivo Bukovsky, Czech Technical University in Prague, Faculty of Mechanical Engineering, Czech Republic
Daniel Callegari, Pontificia Universidade Catolica do Rio Grande do Sul (PUCRS), Brazil
Heloisa Camargo, UFSCar, Brazil
Erik Cambria, Nanyang Technological University, Singapore
Rahul Caprihan, Dayalbagh Educational Institute, India
Pablo Carmona, University of Extremadura, Spain
Fabio Casciati, Università degli Studi di Pavia, Italy
Giovanna Castellano, University of Bari, Italy
Pedro A. Castillo, University of Granada, Spain
Wen-Jer Chang, National Taiwan Ocean University, Taiwan
Mu-Song Chen, Da-Yeh University, Taiwan
France Cheong, RMIT University, Australia
Costin Chiru, University Politehnica of Bucharest, Romania
Amine Chohra, University of Paris-EST Créteil (UPEC), France
Chi-Yin Chow, City University of Hong Kong, Hong Kong
Catalina Cocianu, The Bucharest University of Economic Studies, Faculty of Cybernetics, Statistics and Informatics in Economy, Romania
Vincenzo Conti, Kore University of Enna, Italy
Valerie Cross, Miami University, USA
Martina Dankova, University of Ostrava, Czech Republic
József Dombi, University of Szeged, Institute of Informatics, Hungary
Yongsheng Dong, Chinese Academy of Sciences, China
António Dourado, University of Coimbra, Portugal
Peter Duerr, Sony Corporation, Japan
Marc Ebner, Ernst-Moritz-Arndt-Universität Greifswald, Germany
El-Sayed El-Alfy, King Fahd University of Petroleum and Minerals, Saudi Arabia
Fabio Fassetti, DIMES, University of Calabria, Italy
Carlos Fernandes, University of Lisbon, Portugal
Artur Ferreira, ISEL (Instituto Superior de Engenharia de Lisboa), Portugal
Stefka Fidanova, Bulgarian Academy of Sciences, Bulgaria
Valeria Fionda, University of Calabria, Italy
Simon Fong, University of Macau, Macau
Leonardo Franco, Universidad de Málaga, Spain
Yoshikazu Fukuyama, Meiji University, Japan
David Gil Mendez, University of Alicante, Spain


Stefan Glüge, ZHAW School of Life Sciences and Facility Management, Switzerland
Vladimir Golovko, Brest State Technical University, Belarus
Antonio Gonzalez, University of Granada, Spain
Sarah Greenfield, De Montfort University, UK
Tan Guan, Universiti Malaysia Kelantan, Malaysia
Hazlina Hamdan, Universiti Putra Malaysia, Malaysia
Oussama Hamid, University of Nottingham, UK
Thomas Hanne, University of Applied Arts and Sciences Northwestern Switzerland, Switzerland
Susana Muñoz Hernández, Universidad Politécnica de Madrid (UPM), Spain
Arturo Hernández-Aguirre, Centre for Research in Mathematics, Mexico
Chris Hinde, Loughborough University, UK
Katsuhiro Honda, Osaka Prefecture University, Japan
Wei-Chiang Hong, Jiangsu Normal University, China
Alexander Hošovský, Technical University of Kosice, Slovak Republic
Gareth Howells, University of Kent, UK
Jiansheng Huang, Western Sydney University, Australia
Daniela Iacoviello, Sapienza Università di Roma, Italy
Yuji Iwahori, Chubu University, Japan
Colin Johnson, University of Kent, UK
Magnus Johnsson, Lund University, Sweden
Cengiz Kahraman, Istanbul Technical University, Turkey
Dmitry Kangin, University of Exeter, UK
Iwona Karcz-Duleba, Wroclaw University of Science and Technology, Poland
Christel Kemke, University of Manitoba, Canada
Wali Khan, Kohat University of Science and Technology (KUST), Kohat, Pakistan
Georges Khazen, Lebanese American University, Lebanon
Ahmed Kheiri, Lancaster University, UK
Frank Klawonn, Ostfalia University of Applied Sciences, Germany
Mario Köppen, Kyushu Institute of Technology, Japan
Vladik Kreinovich, University of Texas at El Paso, USA
Ondrej Krejcar, University of Hradec Kralove, Czech Republic
Pavel Krömer, VSB Ostrava, Czech Republic
Jiri Kubalik, Czech Technical University, Czech Republic
Yau-Hwang Kuo, National Cheng Kung University, Taiwan
Dario Landa-Silva, University of Nottingham, UK
Anne Laurent, Lirmm, Montpellier University, France
Nuno Leite, Instituto Superior de Engenharia de Lisboa, Portugal
Hui Li, Nankai University, China
Diego Liébana, Queen Mary University of London, UK
Ahmad Lotfi, Nottingham Trent University, UK
Edwin Lughofer, Johannes Kepler University, Austria
Wenjian Luo, University of Science and Technology of China, China
Francisco Gallego Lupianez, Univ. Complutense de Madrid, Spain


Jinwen Ma, Peking University, China
Stephen Majercik, Bowdoin College, USA
Agnese Marchini, Pavia University, Italy
Jean-Jacques Mariage, Paris 8 University, France
Mitsuharu Matsumoto, The University of Electro-Communications, Japan
John McCall, Smart Data Technologies Centre, Robert Gordon University, UK
Mohamed Mellal, N/A, Algeria
Corrado Mencar, University of Bari, Italy
Juan Julian Merelo, University of Granada, Spain
Marjan Mernik, University of Maribor, Slovenia
Konstantinos Michail, Cyprus University of Technology, Cyprus
Mustafa Misir, Nanjing University of Aeronautics and Astronautics, China
Chilukuri Mohan, Syracuse University, USA
Ambra Molesini, Alma Mater Studiorum, Università di Bologna, Italy
José Molina, Universidad Carlos III de Madrid, Spain
Ronei Marcos de Moraes, Universidade Federal da Paraíba, Brazil
Ruby Moritz, Universität Magdeburg, Germany
Bernhard Moser, Software Competence Center Hagenberg GmbH, Austria
Luiza Mourelle, State University of Rio de Janeiro, Brazil
Pawel Myszkowski, Wroclaw University of Technology, Poland
Vesa Niskanen, Univ. of Helsinki, Finland
Yusuke Nojima, Osaka Prefecture University, Japan
Vilém Novák, University of Ostrava, Czech Republic
Luis Nunes, Instituto Universitário de Lisboa (ISCTE-IUL) and Instituto de Telecomunicações (IT), Portugal
Thomas Ott, Institute of Applied Simulation, ZHAW Zurich University of Applied Sciences, Switzerland
Ender Özcan, University of Nottingham, UK
Ben Paechter, Edinburgh Napier University, UK
Rainer Palm, Orebro University, Orebro Sweden, Germany
Gary Parker, Connecticut College, USA
David A. Pelta, University of Granada, Spain
Parag Pendharkar, Pennsylvania State University, USA
Valentina Plekhanova, University of Sunderland, UK
Petrica Pop, North University of Baia Mare, Romania
Radu-Emil Precup, Politehnica University of Timisoara, Romania
Daowen Qiu, Sun Yat-sen University, China
Marek Reformat, University of Alberta, Canada
Joaquim Reis, ISCTE, Portugal
Antonello Rizzi, Università di Roma “La Sapienza”, Italy
Olympia Roeva, Institute of Biophysics and Biomedical Engineering, Bulgarian Academy of Sciences, Bulgaria
Nizar Rokbani, University of Sousse, Tunisia
Neil Rowe, Naval Postgraduate School, USA
Suman Roychoudhury, Tata Consultancy Services, India


Carlo Sansone, University of Naples Federico II, Italy
Miguel Sanz-Bobi, Comillas Pontifical University, Spain
Jurek Sasiadek, Carleton University, Canada
Gerald Schaefer, Loughborough University, UK
Robert Schaefer, AGH University of Science and Technology, Poland
Alon Schclar, Academic College of Tel-Aviv Yaffo, Israel
Tjeerd Scheper, Oxford Brookes University, UK
Christoph Schommer, University Luxembourg, Campus Belval, Maison du Nombre, Luxembourg
Olga Senyukova, Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Russian Federation
Nurfadhlina Sharef, Universiti Putra Malaysia, Malaysia
Ruedi Stoop, Universität Zürich/ETH Zürich, Switzerland
Catherine Stringfellow, Midwestern State University, USA
Mu-Chun Su, National Central University, Taiwan
Laszlo T. Koczy, Szechenyi Istvan University, Hungary
Norikazu Takahashi, Okayama University, Japan
Yi Tang, Yunnan University of Nationalities, China
Gianluca Tempesti, The University of York, UK
Philippe Thomas, Université de Lorraine, France
Juan-Manuel Torres-Moreno, Ecole Polytechnique de Montréal, Canada
Dat Tran, University of Canberra, Australia
Carlos M. Travieso, University of Las Palmas de Gran Canaria, Spain
Krzysztof Trojanowski, Uniwersytet Kardynała Stefana Wyszyńskiego, Poland
Elio Tuci, Middlesex University, UK
Alexander Tulupyev, St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS), Russian Federation
Jessica Turner, Georgia State University, USA
Lucia Vacariu, Technical University of Cluj-Napoca, Romania
Arjen van Ooyen, VU University Amsterdam, The Netherlands
Salvatore Vitabile, University of Palermo, Italy
Neal Wagner, Systems Technology and Research, USA
Jingyu Wang, Northwestern Polytechnical University (NPU), China
Guanghui Wen, Southeast University, China
Li-Pei Wong, Universiti Sains Malaysia, Malaysia
Jian Wu, School of Economics and Management, Shanghai Maritime University, China
Chung-Hsing Yeh, Monash University, Australia
Jianqiang Yi, Institute of Automation, Chinese Academy of Sciences, China
Wenwu Yu, Southeast University, China
Slawomir Zadrozny, Polish Academy of Sciences, Poland
Cleber Zanchettin, Federal University of Pernambuco, Brazil
Hans-Jürgen Zimmermann, ELITE (European Laboratory for Intelligent Techniques Engineering), Germany


Invited Speakers
António Dourado, University of Coimbra, Portugal
Emma Hart, Edinburgh Napier University, UK
Paulo Novais, Universidade do Minho, Portugal
Jonathan Garibaldi, University of Nottingham, UK


Preface

The present book includes extended and revised versions of a set of selected papers from the 9th International Joint Conference on Computational Intelligence (IJCCI 2017), held in Funchal, Madeira, Portugal, from November 1 to 3, 2017. IJCCI 2017 received 66 paper submissions from 29 countries, of which 25% are included in this book. The papers were selected by the event chairs, and their selection is based on a number of criteria that include the scores and comments provided by the program committee members, the session chairs' assessments, and the program chairs' global view of all papers included in the technical program. The authors of the selected papers were then invited to submit revised and extended versions of their papers containing at least 30% innovative material.

The purpose of the International Joint Conference on Computational Intelligence (IJCCI) is to bring together researchers, engineers, and practitioners interested in the field of computational intelligence from both theoretical and application perspectives. Four simultaneous tracks were held, covering different aspects of computational intelligence: evolutionary computation, fuzzy computation, neural computation, and cognitive and hybrid systems. The connection of these areas, in all their wide range of approaches and applications, forms the International Joint Conference on Computational Intelligence.

The papers selected for this book contribute to the understanding of relevant trends of current research on computational intelligence, including metaheuristics, affective computing, reality mining, big data, and deep learning. We would like to thank all the authors for their contributions, and also the reviewers who have helped in ensuring the quality of this publication.

Christophe Sabourin, Créteil, France
Juan Julian Merelo, Granada, Spain
Kurosh Madani, Créteil, France
Kevin Warwick, Coventry, UK
November 2017



Particle Stability Time in the Stochastic Model of PSO
Krzysztof Trojanowski and Tomasz Kulpa

Abstract Stability properties of a particle in stochastic models of PSO are the subject of the presented analysis. Measures of the number of particle steps necessary to reach an equilibrium state are developed; in particular, generalized weak versions of two measures: the particle convergence expected time (pcet) and the particle location variance convergence time (pvct). A new measure, namely the particle stability time, is also proposed. For all the measures, graphs of estimated and recorded values are presented. Finally, an adaptation of the expected running time (ERT) measure is proposed, which can be applied to the identification of convergence regions in the parameter space.

1 Introduction

Particle swarm optimization (PSO) is a stochastic search algorithm based on a population model of social influence and social learning, where population members, namely particles, follow a very simple set of rules, and the rule parameters configure the behavior of the particles. Typically, the purpose of the optimization is to find an element of the search space which maximizes objective criteria, and the PSO model appeared to be particularly well suited to this class of tasks. Since the first publication [1] in 1995, numerous variants of the PSO model have been developed. Some of them became regarded as "standard" PSO algorithms and were used in publications as reference models for comparisons with proposed new variants. In every case, one of the most important tasks concerned finding a configuration of the model parameters which makes the algorithm able to find suboptimal solutions with minimum computational cost. The set of PSO parameters defines a configuration space in which one looks for the convergence area, that is, the subset of configurations guaranteeing the convergent behavior of particles and a swarm.

Definitions of convergence areas, as well as measures of the time necessary for a particle or a swarm to converge, can be found in the literature. Measures estimating the number of steps necessary for a particle to reach a particular stable state can be helpful in studying the behavior of particles. For the deterministic model, a particle convergence time (pct) was proposed, while for the stochastic model one can find two measures: the particle convergence expected time (pcet) and the particle location variance convergence time (pvct) [2–4]. These measures aim to evaluate the number of steps necessary for the particle to reach the equilibrium state with a given precision. Additionally, it is assumed that, once the state is reached, there is no return to any of the previous states. These measures are hardly applicable in practice; thus, their weak versions have additionally been proposed. Characteristics of the weak measures for different particle configurations can be found in [3]. The characteristics were generated for the stochastic model of PSO with inertia weight under the stagnation assumption.

In the presented research we propose theoretical particle convergence measures based on the idea of an observation window wider than one. This method of evaluating the number of steps extends the earlier proposed weak versions of the measures pcet and pvct. New, generalized versions of these weak measures are proposed and applied in simulations. Besides, a new measure which evaluates the number of steps necessary to reach the equilibrium state, namely the particle stability time (pst), is proposed and discussed. Methods of evaluating the expected value of pst as well as the variance of pst are presented. Characteristics of the expected value and variance of pst obtained in simulations are compared with characteristics of the generalized weak versions of the particle convergence measures. The characteristics recorded in simulations are generated for two versions of PSO: PSO with inertia weight (IPSO) and a recently proposed variant of PSO called Standard PSO2011 (SPSO2011). Finally, a new measure, ERT_pst, is proposed, and its characteristics recorded in simulations for IPSO and SPSO2011 are presented. The measure ERT_pst is based on the idea of the expected running time (ERT) of an algorithm, but in this case the aim is to estimate an expected number of steps.

The chapter consists of eight sections. In Sect. 2 a brief review of stability analysis approaches based on the evaluation of the number of steps is presented. Section 3 describes the versions of PSO selected for simulations. In Sect. 4 definitions of generalized versions of the weak convergence measures are presented. In Sect. 5 the particle stability time is defined. In Sect. 6 the particle stability expected time and the variance of the particle stability time are studied, and results of simulations are presented. Section 7 presents a new measure of the expected running time of a particle based on the idea of the ERT measure, together with the results of simulations. Section 8 concludes the paper.

K. Trojanowski · T. Kulpa: Faculty of Mathematics and Natural Sciences, School of Exact Sciences, Cardinal Stefan Wyszyński University in Warsaw, Wóycickiego 1/3, 01-938 Warsaw, Poland. E-mail: [email protected]; [email protected]
© Springer Nature Switzerland AG 2019. C. Sabourin et al. (eds.), Computational Intelligence, Studies in Computational Intelligence 829, https://doi.org/10.1007/978-3-030-16469-0_1


2 Related Work

Particle stability analysis based on the number of steps necessary for a particle or a swarm to reach a particular state can be found in the literature. In [5] a formal definition of the first hitting time (FHT) and the expected FHT (EFHT) is proposed. Both concepts refer to an entire swarm; precisely, FHT represents the number of times the evaluation function $f_{eval}$ is called until the swarm for the first time contains a particle x for which $|f_{eval}(x) - f_{eval}(y^*)| < \delta$. Particle stability under the stagnation assumption, that is, the case where $y_t = y$ and $y_t^* = y^*$ for all t sufficiently large, is studied in [3]. New stability measures are proposed: the particle convergence expected time pcet(δ), which represents the minimal number of steps necessary for the expected particle location to reach equilibrium with the given precision δ, and the particle location variance convergence time pvct(δ), the minimal number of steps necessary to get the variance of the particle location lower than δ for all subsequent time steps. For experimental evaluations, weak versions of pcet(δ) and pvct(δ), that is, pwcet(δ) and pvwct(δ), are also proposed, in which one evaluates the minimal number of steps necessary to get the first expected difference of particle locations, or the first variance of particle location, lower than δ. This idea differs from FHT, where the minimal number of steps is also evaluated; in the case of FHT, however, the stopping criterion is based on the difference between $f_{eval}(x)$ and $f_{eval}(y^*)$, whereas in pcet it is based on the Euclidean distance between subsequent expected locations.

3 Concerned Versions of PSO

This research consists of two parts: theoretical and practical. In the former, new measures for the analysis of particle stability are presented, while in the latter an experimental study is conducted. The simulations were performed for the PSO model with inertia weight (IPSO) [6] and Standard PSO2011 (SPSO2011) [7].

3.1 IPSO

IPSO implements the following velocity and position update equations:

$$\begin{cases} v_{t+1} = w \cdot v_t + \varphi_{t,1} \otimes (y_t - x_t) + \varphi_{t,2} \otimes (y_t^* - x_t), \\ x_{t+1} = x_t + v_{t+1}, \end{cases} \tag{1}$$

where $v_t$ is a particle's velocity, $x_t$ the particle's location, $y_t$ the best location the particle has found so far, $y_t^*$ the best location found by particles in its neighborhood, and w the inertia coefficient. The vectors $\varphi_{t,1}$ and $\varphi_{t,2}$ control the influence of the attractors on the velocity: $\varphi_{t,1} = R_{t,1} c_1$ and $\varphi_{t,2} = R_{t,2} c_2$, where $c_1, c_2$ represent acceleration coefficients, $R_{t,1}, R_{t,2}$ are two vectors of random values uniformly generated in the range [0, 1], and ⊗ denotes the pointwise vector product. The values of the coefficients w, $c_1$ and $c_2$ define the convergence properties of the particle.
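As an illustration of Eq. (1), a single IPSO move can be sketched in Python; this is our own minimal rendering, not part of the original chapter (the function name and the NumPy representation are assumptions):

```python
import numpy as np

def ipso_step(x, v, y, y_star, w, c1, c2, rng):
    """One IPSO move per Eq. (1): phi_{t,i} = R_{t,i} * c_i,
    with R_{t,i} ~ U(0,1) drawn independently per coordinate."""
    phi1 = c1 * rng.uniform(size=x.shape)
    phi2 = c2 * rng.uniform(size=x.shape)
    v_new = w * v + phi1 * (y - x) + phi2 * (y_star - x)  # pointwise products
    return x + v_new, v_new
```

Under the stagnation assumption of Sect. 3.3, y and y_star would both be held at 0 throughout a run.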

3.2 SPSO2011

In SPSO2011, in contrast to IPSO, the evaluation of a single move involves all coordinates of the particle in a single procedure. First, for a particle, a center of gravity of three points is calculated: its current position and two points related to the personal and global attractors. Then, a hypersphere centered at this gravity center, with radius equal to the Euclidean distance between the current position and the center, is defined. Next, a random point is drawn within the hypersphere and, eventually, the difference between this point and the current position becomes a component of the sum in the velocity update equation. Formally, it is evaluated as follows:

$$\begin{cases} p = x_t + \varphi_{t,1} \otimes (y_t - x_t), \\ l = x_t + \varphi_{t,2} \otimes (y_t^* - x_t), \\ g = \frac{1}{3}(x_t + p + l), \end{cases} \tag{2}$$

where g represents the current center of gravity. Then, a random point x′ within the hypersphere H(g, ||g − x_t||) is generated. In our simulations the point x′ was generated with the use of Algorithm 2. In this algorithm, a random point z uniformly distributed on the surface of a unit hypersphere is generated first, with the use of Algorithm 1. Then, the coordinates of z are multiplied by a power random variate. Eventually, x′ = g + z · ||g − x_t||.

Algorithm 1. Generation of an N-dimensional random point z uniformly distributed on the surface of a unit hypersphere.
1: procedure RandPointOnAHypersphere
2:   Generate N independent normal random variates: z_1, ..., z_N
3:   z ← [z_1, ..., z_N]
4:   for j ← 1, N do
5:     z_j ← z_j / ||z||
6:   return z

Algorithm 2. Generation of an N-dimensional random variate z uniformly distributed within a unit hypersphere.
1: z ← RandPointOnAHypersphere()   (call Algorithm 1)
2: Generate the power random variate w ← y^{1/N}, where y ← U(0, 1)
3: z ← w · z
4: return z

Now the velocity and position can be updated:

$$\begin{cases} v_{t+1} = w \cdot v_t + x' - x_t, \\ x_{t+1} = x_t + v_{t+1}. \end{cases} \tag{3}$$
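A compact Python sketch of Algorithms 1 and 2, and of the random point used in Eqs. (2) and (3), might look as follows; this is our own illustration, and the function names are assumptions:

```python
import numpy as np

def rand_point_on_hypersphere(n, rng):
    # Algorithm 1: normalize an N-dimensional Gaussian sample
    z = rng.normal(size=n)
    return z / np.linalg.norm(z)

def rand_point_in_hypersphere(n, rng):
    # Algorithm 2: scale a surface point by the power variate w = y^(1/N)
    z = rand_point_on_hypersphere(n, rng)
    return (rng.uniform() ** (1.0 / n)) * z

def spso2011_random_point(x, g, rng):
    # x' uniform in the hypersphere H(g, ||g - x||)
    return g + np.linalg.norm(g - x) * rand_point_in_hypersphere(x.size, rng)
```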

3.3 General Assumptions

The simulations were conducted under the stagnation assumption, that is, $y_t = y$ and $y_t^* = y^*$ for all t. Thus, due to the lack of communication between particles, there is just one particle to observe. It is also assumed that $y^* = y = 0$.

4 Generalized Weak Versions of Convergence Measures

The measures pcet(δ) and pvct(δ) proposed in [3] evaluate the time necessary for a particle to reach an equilibrium state. Due to the individual evaluation of velocity vector coordinates in IPSO, the definitions of the two measures concern a one-dimensional search space:

Definition 1 (The Particle Convergence Expected Time) For a given positive number δ,

$$pcet(\delta) = \min\{t \mid |e_{s+1} - e_s| < \delta \ \text{for all}\ s \ge t\}, \tag{4}$$

where $e_s = E[x_s]$.

Definition 2 (The Particle Location Variance Convergence Time) For a given positive number δ,

$$pvct(\delta) = \min\{t \mid d_s < \delta \ \text{for all}\ s \ge t\}, \tag{5}$$

where $d_s = \mathrm{Var}[x_s]$.

In practice, both pcet(δ) and pvct(δ) need infinite time to be calculated. Thus, their weak versions, pwcet(δ) and pvwct(δ), were also proposed:

Definition 3 (The Particle Weak Convergence Expected Time) For a given positive number δ,

$$pwcet(\delta) = \min\{t \mid |e_t - e_{t+1}| < \delta\}, \tag{6}$$

where $e_t = E[x_t]$.

Definition 4 (The Particle Location Variance Weak Convergence Time) For a given positive number δ,

$$pvwct(\delta) = \min\{t \mid d_t < \delta\}, \tag{7}$$

where $d_t = \mathrm{Var}[x_t]$.

When studying the visualizations of simulations given in [4], one can observe a significant difference between graphs of values of pwcet(δ) obtained with Algorithm 1 [4] and the theoretical upper bounds pcet_ub obtained with Eqs. (37, 38) [4]. Values of pwcet(δ) are calculated in an iterative algorithm where the step counter stops immediately when the distance between subsequent expected locations is small enough. However, sometimes this can be a false signal of particle convergence. Therefore, for a better approximation of the theoretical measures pcet(δ) and pvct(δ), we extend their weak versions with an observation window of size lw. The window size defines the number of subsequently recorded small steps required to regard the particle as still.

Definition 5 (The Particle Generalized Weak Convergence Expected Time) Let lw be a given positive integer and δ a given positive number. The particle generalized weak convergence expected time pgwcet(lw, δ) is the minimal number of steps necessary to get lw subsequent differences between expected values of particle locations lower than δ, that is,

$$pgwcet(l_w, \delta) = \min\{t \mid |e_{t+k+1} - e_{t+k}| < \delta \ \text{for all}\ k \in \{0, 1, \ldots, l_w - 1\}\}. \tag{8}$$

The measure pgwcet(lw, δ) generalizes the earlier proposed particle weak convergence time pwcet(δ). For lw = 1 one gets the particle weak convergence time:

$$pgwcet(1, \delta) = pwcet(\delta). \tag{9}$$

It is obvious that for a given δ > 0,

$$l_{w_1} < l_{w_2} \Rightarrow pgwcet(l_{w_1}, \delta) \le pgwcet(l_{w_2}, \delta). \tag{10}$$

It is also interesting that when lw → ∞ the measure pgwcet approximates pcet(δ), that is,

$$\lim_{l_w \to \infty} pgwcet(l_w, \delta) = pcet(\delta). \tag{11}$$

Analogously, one can also generalize the measure of the particle location variance weak convergence time:

Definition 6 (The Generalized Particle Location Variance Weak Convergence Time) Let lw be a given positive integer and δ a given positive number. The generalized particle location variance weak convergence time gpvwct(lw, δ) is the minimal number of steps necessary to get lw subsequent variances of particle location lower than δ, that is,

$$gpvwct(l_w, \delta) = \min\{t \mid d_{t+k} < \delta \ \text{for all}\ k \in \{0, 1, \ldots, l_w - 1\}\}. \tag{12}$$

Similarly, for lw = 1 one gets the particle location variance weak convergence time:

$$gpvwct(1, \delta) = pvwct(\delta), \tag{13}$$

and for a given δ > 0,

$$l_{w_1} < l_{w_2} \Rightarrow gpvwct(l_{w_1}, \delta) \le gpvwct(l_{w_2}, \delta). \tag{14}$$

When lw → ∞, the measure gpvwct approximates pvct(δ), that is,

$$\lim_{l_w \to \infty} gpvwct(l_w, \delta) = pvct(\delta). \tag{15}$$

The proposed generalized versions of the measures have been applied in simulations for IPSO particles defined in a one-dimensional search space. The influence of the parameter lw on the characteristics of pgwcet can be observed in Fig. 1. This figure depicts the values of pgwcet generated for δ = 0.0001 as a function of the initial location and velocity represented by the expected locations e0 and e1, where E[φt] and w are fixed. Graphs for two settings of [E[φt], w] are presented: [0.06, 0.96] and [1.76, 0.96]. The grid of pairs [e0, e1] consists of 40,000 points (200 × 200) varying from −10 to 10 for both e0 and e1. One can see that the characteristics are more regular for larger values of lw. The estimated convergence times of the particle location pgwcet(lw, δ) and the estimated convergence times of the particle location variance gpvwct(lw, δ) for δ = 0.001 are presented in Figs. 2, 3 and 5 [8]. The size of the observation window, lw = 5, has been estimated experimentally as offering satisfying precision with little oversize. Figures 2 and 3 were obtained for δ = 0.001, lw = 5, example starting conditions e0 = 0 and e1 = 0.5, and different values of the IPSO parameters E[φt] ∈ [−2, 10.5] and w ∈ [−2, 2]. Figure 5 depicts graphs for four configurations of (E[φt], w): type A (0.06, 0.96), type B (1.76, 0.96), type C (3.91, 0.96), and type D (2.11, 0.06), with e0 ∈ [−10, 10] and e1 ∈ [−10, 10].
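The window-based stopping rule shared by Definitions 5 and 6 can be made concrete with a small sketch; this is our own illustration, assuming the sequence of expected locations e_t (or variances d_t) has already been computed and stored in a list:

```python
def first_stable_index(values, lw, delta, use_diffs):
    """Smallest t with lw consecutive tests below delta, or None.

    use_diffs=True tests |values[t+k+1] - values[t+k]| < delta, as in
    pgwcet (Eq. 8); use_diffs=False tests values[t+k] < delta, as in
    gpvwct (Eq. 12).
    """
    seq = ([abs(b - a) for a, b in zip(values, values[1:])]
           if use_diffs else list(values))
    for t in range(len(seq) - lw + 1):
        if all(v < delta for v in seq[t:t + lw]):
            return t
    return None
```

Note that enlarging lw can only keep the returned index the same or push it later, which is exactly the monotonicity expressed in Eqs. (10) and (14).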

5 Particle Stability Time

The measures discussed in Sect. 4 concern order-1 (pgwcet) and order-2 (gpvwct) stability analysis in the stochastic model of IPSO. They are both based on the same idea of counting the minimal number of steps necessary for the expected value of the particle location, or the location variance, to be regarded as stable. This methodology can also be applied to the analysis of a particle's observed movement.

Fig. 1 Characteristics of pgwcet(lw, 0.0001) for lw ∈ {1, 2, 3} for two particle configurations [E[φt], w]: [0.06, 0.96] (left column) and [1.76, 0.96] (right column). [3D surface plots over the (e0, e1) grid; graphics not reproduced.]

Simply, the number of steps necessary for a particle to obtain a quasi-equilibrium state can be observed in simulations:

Definition 7 (The Particle Stability Time) Let lw be a given positive integer and δ a given positive number. The particle stability time, pst(lw, δ), is defined as

$$pst(l_w, \delta) = \min\{t \mid \|x_{t+k+1} - x_{t+k}\| < \delta \ \text{for all}\ k \in \{0, 1, \ldots, l_w - 1\}\}, \tag{16}$$

Fig. 2 Estimated values of pgwcet(5, 0.001) for example starting conditions and different values of the IPSO parameters E[φt] and w: 3D shape with logarithmic scale for pgwcet(5, 0.001) (left), and isolines from 0 to 500 with step 20 (right); extracted from [8]. [Graphics not reproduced.]

Fig. 3 Estimated values of gpvwct(5, 0.001) for example starting conditions and different values of the IPSO parameters E[φt] and w: 3D shape with logarithmic scale for gpvwct(5, 0.001) (left), and isolines from 0 to 500 with step 20 (right); extracted from [8]. [Graphics not reproduced.]

where ||·|| denotes the Euclidean norm in $\mathbb{R}^n$. The definition of pst has two parameters: the precision δ and the observation window size lw. The parameter δ defines the threshold step length below which one can say that the particle has not moved. The parameter lw defines the number of subsequent sufficiently small steps which must be observed for the particle to be regarded as still. In the case of a one-dimensional space, when lw → ∞ the measure pst(lw, δ) approximates pct(δ) proposed in [2, 4]. For a given particle, the measure pst has an analogous property to the measure pgwcet, that is, for a given δ > 0,

$$l_{w_1} < l_{w_2} \Rightarrow pst(l_{w_1}, \delta) \le pst(l_{w_2}, \delta). \tag{17}$$
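Read off a recorded trajectory, pst follows directly from Eq. (16). The sketch below is our own illustration; it assumes the trajectory is a sequence of NumPy position vectors:

```python
import numpy as np

def particle_stability_time(trajectory, lw, delta):
    """pst(lw, delta) per Eq. (16): first t with lw consecutive
    Euclidean step lengths below delta; None if the particle never
    settles within the recorded trajectory."""
    steps = [np.linalg.norm(b - a) for a, b in zip(trajectory, trajectory[1:])]
    for t in range(len(steps) - lw + 1):
        if all(s < delta for s in steps[t:t + lw]):
            return t
    return None
```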

6 Expected Value and Variance of pst

The value of pst for a particle depends on the explicitly given parameters lw and δ, but also on the particle configuration parameters $x_0$ and $x_1$, as well as on the parameters of the selected PSO variant, for example $c_1$, $c_2$ and w in the case of IPSO. However, even for fixed values of all these parameters, we are still dealing with randomness due to the presence of random parameters in the velocity formula. Thus, any conclusions based on observations of just a single run of a fully configured particle are not justified, because the observed value of pst(lw, δ) represents a realization of a random variable (that is, a random variate) which depends on the particle and its trajectory. For such a random variable one can consider both an expected value of pst, namely the particle stability expected time

$$pset(l_w, \delta) = E[pst(l_w, \delta)], \tag{18}$$

and a variance of pst,

$$pstv(l_w, \delta) = \mathrm{Var}[pst(l_w, \delta)]. \tag{19}$$

Applying the expectation operator to both sides of Eq. (17), one obtains, for a given δ > 0,

$$l_{w_1} < l_{w_2} \Rightarrow pset(l_{w_1}, \delta) \le pset(l_{w_2}, \delta). \tag{20}$$

For the purpose of simulations, particularly due to limited time, an upper limit pst_max on the number of particle steps is proposed. Thus, the recorded outcome of a single particle run equals

$$pst_{rec}(l_w, \delta) = \min(pst(l_w, \delta), pst_{max}), \tag{21}$$

and this value approximates the value of pst. In the presented simulations we estimate an expected value of pst_rec,

$$pset_{rec}(l_w, \delta) = E[pst_{rec}(l_w, \delta)], \tag{22}$$

and a variance of pst_rec,

$$pstv_{rec}(l_w, \delta) = \mathrm{Var}[pst_{rec}(l_w, \delta)]. \tag{23}$$

For an obtained set of n values $pst_{rec,i}$, one can estimate pset_rec as

$$pset_{rec}(l_w, \delta) = \frac{1}{n}\sum_{i=1}^{n} pst_{rec,i}(l_w, \delta), \tag{24}$$

and the variance of pst as

$$pstv_{rec}(l_w, \delta) = \frac{1}{n-1}\sum_{i=1}^{n}\left(pst_{rec,i}(l_w, \delta) - pset_{rec}(l_w, \delta)\right)^2. \tag{25}$$

Due to the fact that every simulation stops when the number of steps reaches pst_max, for the cases where pst tends to infinity one obtains $pset_{rec} = pst_{max}$ and $pstv_{rec} = 0$.
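A minimal Monte Carlo sketch of the estimators (24) and (25), reusing the particle_stability_time function above; the run_particle simulator is a placeholder of ours (not defined in the chapter) that should return a position trajectory of at most pst_max + lw steps:

```python
import statistics

def estimate_pset_pstv(run_particle, n, lw, delta, pst_max):
    """pset_rec (Eq. 24) and pstv_rec (Eq. 25) over n capped runs (Eq. 21)."""
    recorded = []
    for _ in range(n):
        t = particle_stability_time(run_particle(), lw, delta)
        recorded.append(pst_max if t is None else min(t, pst_max))
    return statistics.mean(recorded), statistics.variance(recorded)
```

The statistics.variance function uses the (n − 1) denominator, matching Eq. (25); when every run hits the cap, the returned pair degenerates to (pst_max, 0), as noted above.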

Fig. 4 Characteristics of pset_rec(lw, 0.0001) for lw ∈ {1, 2, 3} for two particle configurations [E[φt], w]: [0.06, 0.96] (left column) and [1.76, 0.96] (right column). [Graphics not reproduced.]

Besides, from Eq. (20) it follows that for a given δ > 0,

$$l_{w_1} < l_{w_2} \Rightarrow pset_{rec}(l_{w_1}, \delta) \le pset_{rec}(l_{w_2}, \delta). \tag{26}$$

In the presented research, the simulations deriving pst_rec were repeated n = 100 times with pst_max = 10000. Example characteristics of pst_rec(lw, 0.0001) for lw ∈ {1, 2, 3} are presented in Fig. 4, where δ = 0.0001 and the grids of pairs [e0, e1] and [x0, x1] consist of 40,000 points (200 × 200) varying from −10 to 10. When compared with the characteristics of pgwcet given in Fig. 1, the regularity improvement of the characteristics of pst_rec(lw, 0.0001) for lw > 1 is less spectacular, but still easy to observe.

Fig. 5 Graphs of pgwcet(5, 0.001) for selected configurations (E[φt], w) of IPSO particles; panels (a)-(d) correspond to particle types A-D (extracted from [8]). [Graphics not reproduced.]

Figure 5 [8] depicts graphs of pgwcet obtained for IPSO as a function of the expected locations e0 and e1, where E[φt] and w are fixed. The four pictures in Fig. 5 show pgwcet for representatives of the four main types of a particle (A, B, C and D, according to the classification proposed in [4]). Respectively, Fig. 6 [8] depicts graphs of pst as a function of the initial location and velocity represented by the locations x0 and x1, where φt and w are fixed. In both cases δ = 0.001, lw = 5, and the grids of pairs [e0, e1] and [x0, x1] consist of 40,000 points (200 × 200) varying from −10 to 10. Recorded values of pset_rec for IPSO are presented in Fig. 7 [8], while those for SPSO2011 are presented in Fig. 8 [8]. They were obtained for a grid of configurations (φ, w) starting from [φ = −2, w = −2] to [φ = 10.5, w = 2] and changing with step 0.05 for w and for φ (which gave 250 × 80 points), and for e0 = 0, e1 = 0.5 and δ = 0.001. In both cases lw = 5 and c1 = c2 = φ/2. Both figures consist of two types of pictures: a 3D presentation, and a map of isolines for values from 0 to 1000 with step 20. Respectively, graphs of the recorded variance of pst_rec are depicted in Figs. 9 [8] (IPSO) and 10 [8] (SPSO2011). One can observe that the region of low values of pset and pstv (which can also be regarded as the region of particle convergent configurations) for IPSO remains almost unchanged when the number of dimensions of the search space grows from 2 to 50. Just the opposite observation can be made for SPSO2011: the region shrinks along the φ axis as the number of dimensions grows.

Fig. 6 Graphs of recorded values of pst(5, 0.001) for selected configurations (φt, w) of IPSO particles; panels (a)-(d) correspond to particle types A-D (extracted from [8]). [Graphics not reproduced.]

Fig. 7 Recorded particle stability expected time pset(5, 0.001) for IPSO particles: 2-dimensional (left column) and 50-dimensional (right column) search space (extracted from [8]). [Graphics not reproduced.]

Fig. 8 Recorded particle stability expected time pset(5, 0.001) for SPSO2011 particles: 2-dimensional (left column) and 50-dimensional (right column) search space (extracted from [8]). [Graphics not reproduced.]

Fig. 9 Recorded variance of the particle stability time pstv(5, 0.001) for IPSO particles: 2-dimensional (left column) and 50-dimensional (right column) search space (extracted from [8]). [Graphics not reproduced.]

Fig. 10 Recorded variance of the particle stability time pstv(5, 0.001) for SPSO2011 particles: 2-dimensional (left column) and 50-dimensional (right column) search space (extracted from [8]). [Graphics not reproduced.]

7 Expected Running Time of a Particle

Another approach to the problem of convergence region identification can be based on the measure of expected running time (ERT) [9, 10]. This measure was originally proposed for the comparison of algorithm performance in the special session on real-parameter optimization of CEC 2005. Currently, it is used, for example, in the COCO (COmparing Continuous Optimisers) platform (http://coco.gforge.inria.fr/). The measure is addressed to cases where reaching a suboptimal solution with satisfying precision is not always guaranteed, particularly when the compared algorithms have different probabilities of success. It returns finite positive values for those configuration parameters of an algorithm which guarantee a finite execution time with a strictly positive probability. In the presented research, the pst value is an input for the ERT formula. This way, the ERT method can be applied as an alternative to the earlier studied weak measures for the identification of configuration regions where the particle has a finite time to reach an equilibrium state with a strictly positive probability.


The ERT formula can be adapted as follows:

Definition 8 (The Expected Running Time of a Particle) Let lw be a given positive integer and δ a given positive number. The expected running time of a particle is

$$ERT_{pst}(l_w, \delta) = pst_{succ}(l_w, \delta) + \frac{1 - p_{succ}}{p_{succ}}\, pst_{max}, \tag{27}$$

where $pst_{succ}(l_w, \delta)$ denotes the average value of $pst(l_w, \delta)$ over the cases in which the particle reached the stability state before pst_max, and $p_{succ}$ the fraction of such cases. Respectively, one also obtains the following equation for the evaluation of ERT_pst:

$$ERT_{pst}(l_w, \delta) = \frac{\sum_{i=1}^{n} pst_{rec,i}(l_w, \delta)}{\#succ}, \tag{28}$$

where #succ is the number of simulations in which the equilibrium state was reached in fewer steps than pst_max. One can see that the evaluation of ERT_pst does not depend on the PSO model, that is, it can be evaluated in the same way for IPSO, SPSO2011, and any other version of PSO as well. In the case when none of the simulations ends successfully, $ERT_{pst}(l_w, \delta) = n \times pst_{max}$. Otherwise, for the convergent configurations, ERT_pst closely approximates pset. For the configurations from the border of the convergence region, one can identify areas with a positive but incomplete chance of convergence. In our simulations, ERT_pst is applied for the identification of the region in a particle configuration space where $pst_{rec}(l_w, \delta) < pst_{max}$ is satisfied with strictly positive probability. Figure 11 presents the recorded values of ERT_pst for IPSO and SPSO2011, obtained for a grid of configurations (φ, w) starting from [φ = −2, w = −2] to [φ = 10.5, w = 2] and changing with step 0.05 for w and for φ (which gave 250 × 80 points). In both cases pst_max = 10^4, lw = 5 and c1 = c2 = φ/2.

Fig. 11 Recorded expected running time of a particle (ERT_pst) for IPSO (top row) and SPSO2011 (bottom row): 2-dimensional (left column) and 50-dimensional (right column) search space. [Graphics not reproduced.]
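Given the list of capped stability times produced, for example, by the estimation sketch in Sect. 6, ERT_pst of Eq. (28) reduces to a few lines; this is our own illustration:

```python
def ert_pst(recorded, pst_max):
    """ERT_pst per Eq. (28): sum of the capped values pst_rec,i divided
    by the number of successful runs (#succ)."""
    succ = sum(1 for t in recorded if t < pst_max)
    if succ == 0:
        # convention from the text: n * pst_max when no run succeeds
        return len(recorded) * pst_max
    return sum(recorded) / succ
```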

8 Conclusions

In this paper, we study particle stability properties in the stochastic model of PSO, and in simulations for two versions of PSO: IPSO and SPSO2011. In the case of the stochastic model, the particle is regarded as stable when its expected location and its expected location variance are convergent. Thus, we develop the two existing measures, the particle convergence expected time pcet(δ) and the particle location variance convergence time pvct(δ), and propose generalized approaches to their weak versions. The new measures also evaluate the number of steps necessary to get the monitored value lower than δ for all subsequent time steps, but the measurement is based on an observation window wider than one. In the second part of the paper, a new measure, pst, for the analysis of the recorded particle stability time has been proposed. The pst measure has been applied in simulations for IPSO and SPSO2011. In a series of experiments, an expected value as well as a variance of pst, namely pset and pstv, have been calculated, and their empirical characteristics are presented. Finally, a new measure, ERT_pst, is proposed, which can be useful for the identification of regions of particle configuration parameters where a finite time to reach an equilibrium state is guaranteed with a strictly positive probability.

References 1. Kennedy, J., Eberhart, R.C.: Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, IEEE, pp. 1942–1948 (1995) 2. Trojanowski, K., Kulpa, T.: Particle convergence time in the PSO model with inertia weight. In: Proceedings of the 7th International Joint Conference on Computational Intelligence (IJCCI 2015), vol. 1: ECTA, pp. 122–130 (2015) 3. Trojanowski, K., Kulpa, T.: Particle convergence expected time in the PSO model with inertia weight. In: Proceedings of the 8th International Joint Conference on Computational Intelligence (IJCCI 2016), vol. 1: ECTA., SCITEPRESS, pp. 69–77 (2016) 4. Trojanowski, K., Kulpa, T.: Particle convergence time in the deterministic model of PSO. In: Computational Intelligence. Volume 669 of Studies in Computational Intelligence, pp. 175– 194. Springer (2017) 5. Lehre, P.K., Witt, C.: Finite First Hitting Time Versus Stochastic Convergence in Particle Swarm Optimisation, pp. 1–20. Springer, New York (2013) 6. Shi, Y., Eberhart, R.C.: A modified particle swarm optimizer. In: Proceedings of the IEEE Congress on Evolutionary Computation 1998, IEEE, pp. 69–73 (1998) 7. Zambrano-Bigiarini, M., Clerc, M., Rojas, R.: Standard particle swarm optimisation 2011 at CEC-2013: a baseline for future PSO improvements. In: 2013 IEEE Congress on Evolutionary Computation, IEEE Publishing, pp. 2337–2344 (2013)


8. Trojanowski, K., Kulpa, T.: Particle stability in PSO under stagnation assumption. In: Proceedings of the 9th International Joint Conference on Computational Intelligence (IJCCI 2017), vol. 1, SCITEPRESS, pp. 273–280 (2017)
9. Price, K.: Differential evolution vs. the functions of the second ICEO. In: Proceedings of the 4th IEEE ICEC, IEEE Publishing, pp. 153–157 (1997)
10. Auger, A., Hansen, N.: Performance evaluation of an advanced local search evolutionary algorithm. In: IEEE Congress on Evolutionary Computation, IEEE, pp. 1777–1784 (2005)

Optimizing Costs and Quality of Interior Lighting by Genetic Algorithm

Alice Plebe, Vincenzo Cutello and Mario Pavone

Abstract This paper proposes the use of multi-objective optimization to help in the design of interior lighting. The optimization provides an approximation of the inverse lighting problem, the determination of potential light sources satisfying a set of given illumination requirements, for which there are no analytic solutions in real instances. In order to find acceptable solutions we use the metaphor of genetic evolution, where individuals are lists of possible light sources, their positions and lighting levels. We group the many, and often not explicit, requirements for good lighting into two competing groups, pertaining to the quality and the costs of a lighting solution. The cost group includes both energy consumption and the electrical wiring required for the light installation. Objectives inside each group are blended with weights, and the two groups are treated as multi-objectives. The architectural space to be lighted is reproduced with the 3D graphic software Blender, used to simulate the effect of illumination. The final Pareto set resulting from the genetic algorithm is further processed with clustering, in order to extract a very small set of candidate solutions, to be evaluated by the architect.

Keywords Lighting design · Genetic algorithm · Decision maker · Blender

A. Plebe
Department of Information Engineering and Computer Science, University of Trento, Trento, Italy
e-mail: [email protected]

V. Cutello · M. Pavone (B)
Department of Mathematics and Computer Science, University of Catania, Catania, Italy
e-mail: [email protected]

© Springer Nature Switzerland AG 2019
C. Sabourin et al. (eds.), Computational Intelligence, Studies in Computational Intelligence 829, https://doi.org/10.1007/978-3-030-16469-0_2

1 Introduction

In this paper we explore the possibility of supporting the task of designing light in architecture by formulating a multi-objective optimization problem, to be solved using evolutionary algorithms, followed by a clustering technique to reduce the final
Pareto set. The design of interior lighting is the crucial and complex process of integrating luminaires into the fabric of architecture [1, 2]. Humans, like most primates and several mammals, are predominantly visual creatures. Forms of artificial lighting have been introduced since antiquity, to make visual perception possible when and where sunlight lacks [3]. In most of the contemporary world a considerable amount of time is spent indoors and with insufficient daylight illumination. Our ability to move through interior environments, orientate ourselves, and go about our business relying on the perceptions we form of the surrounding objects depends on the level and quality of the ambient illumination.

Contemporary lighting design has the goal of selecting the lighting equipment, and its placement in the interior environment, that results in a comfortable and pleasant visual experience. The design process should take into account several aspects, such as the type of occupants and the type of activities in the given space, or the interior surface finishes and furnishings. Unlike most multi-objective optimization problems in other domains, such as industrial engineering [4], in lighting design there is rarely an explicit formulation of the requirements for the optimal solution. However, it is often possible to identify at least two types of objectives for a lighting system: on one side the properties that enhance the quality and the pleasantness of the interior light, on the other side the costs involved in providing the chosen lighting solution. The first component of the costs is related to the realization of the lighting plant. In addition, in the last decades increasing attention has been paid to the issue of energy savings. In the U.S. the energy consumed for lighting accounts for about 30% of the total energy consumed by commercial buildings [5], and in the European Union the yearly consumption is over 170 TWh [6]. Therefore, the concept of sustainable lighting design has become central in architectural strategies [7]. In the approach proposed here, we group the objectives that belong to the quality or the cost categories by weighted summations, and then treat the two combined values as contrasting multi-objectives.

It should be highlighted that lighting design is a process that combines strict physical evaluations with aesthetic and stylistic evaluations, upon the premise that the lighting condition has an enormous emotional, psychological, and physiological impact on people. It is not possible, nor even desirable, for a computerized optimization to provide a single, deterministic solution. The great help of a system like the one proposed here would be to shortlist a manageable small set of lighting solutions for the architect, to be evaluated with her subjective expertise and sensibility. As with most real problems solved with multi-objective optimization, the final Pareto set is far too large for a visual evaluation. We apply a technique to reduce it, even in the absence of known preferences of the decision maker, typically assumed in the literature [8, 9]. We partition the whole Pareto set into a small number of clusters, and in each cluster we pick a representative solution.

The algorithm presented here is an extension of an earlier model [10], based on the combination of the 3D graphic software Blender and a genetic algorithm for solving the multi-objective optimization problem. Blender provides the rendering engine for a physical simulation of the effect of a lighting solution on a model of the interior environment. In the current version the problem objectives have been refined, with three components in the quality group and two components in the group related to the cost of the lighting solution. In the latter, we compute an estimate of the realization cost of each candidate lighting solution.

2 The Lighting Problem and the Optimization Solution

The design of interior lighting is the crucial and complex process of integrating luminaires into the fabric of architecture [1, 2]. The goal is to select the lighting equipment, and its placement in the interior environment, that results in a comfortable and pleasant visual experience. The design process should take into account several aspects, such as the type of occupants and the type of activities in the given space, or the interior surface finishes and furnishings. Since the invention of the electric light system by Thomas Edison in 1879, lighting design has experienced several significant revolutions, such as fluorescent lamps in 1938 and, more recently, solid-state lighting. Traditionally, illumination design has been seen as a blend of art and practice, where all the challenges are left to the creativity and the experience of the design architect. Given the aesthetic nature of the task, lighting design may seem difficult to model formally. Nevertheless, in the last years the design process has been increasingly considered as a mathematical and physical problem to be solved with optimization techniques.

A well-established aid offered by computational tools to the designer is photorealistic architectural rendering, which simulates in computer graphics the effect of a lighting solution on a model of the interior environment [11]. Mathematically, this is the solution of the direct lighting problem, that is, the computation of the radiance distribution in an environment that is completely known a priori, including its lighting parameters. The drawback of adopting direct lighting as the only aiding tool is that, if the achieved illumination is not satisfactory, it is not easy to infer which modifications to the current solution may lead to improvements. Very likely, the final solution chosen by the designer over a collection of trials will be far from optimal.

A more effective assistance would be given by computational tools implementing the inverse lighting problem [12–15]: the determination of potential light sources satisfying a set of given illumination requirements, for a pre-defined interior space. For this problem there are no reliable analytic solutions, even for simple geometries. One of the earliest attempts to solve this problem with optimization [13] was based on the Broyden–Fletcher–Goldfarb–Shanno method [16], applied to the Hessian matrix derived from the objectives of lighting uniformity. The limitation of this class of methods is that the dependence on gradients easily leads to poor local minima. In very simple cases constrained least-squares optimization may work, for example when the positions of the lights are fixed and only their intensities can be varied [14].

A system facing the inverse lighting problem must provide a virtual environment able to accurately reproduce the architectural space and its spectral reflectometric properties. Moreover, a physical simulation platform must be considered as well for correct illumination calculation in sample points of the architectural space.


Several different tools can be considered for this purpose. Lightsolve [17] is an interactive dedicated environment for daylight design, with a performance-driven decision support system; however, the system lacks a detailed architectural reproduction, and the inclusion of interior furniture is difficult to manage. In the work of [18] the 3D models of building facades are obtained with the simple modelling tool Google SketchUp, which offers a quick and easy way to outline an architectural space, but results in a low level of realism. Conversely, the popular software Radiance, widely used in the field of optimal lighting design [11, 19, 20], consists of a sophisticated physically-correct rendering engine for illumination calculation, and it allows the reproduction of architectural spaces at arbitrary levels of detail. Nevertheless, it is a non-interactive system composed of a collection of command-line programs, and all architectural specifications have to be coded into configuration files.

In this paper we adopt the 3D graphic software Blender as a unified solution to the two requirements stated above. Firstly, Blender is the most comprehensive open-source 3D computer graphics tool available. It is particularly suitable for modelling architectural interiors, with the possibility of importing components from CAD files. Secondly, Blender provides a physically-based rendering engine, able to exhaustively evaluate the lighting configurations needed for solving the inverse lighting problem. Moreover, Blender embeds a Python interpreter, which can run scripts supplied by the user in order to extend its functionalities. Thanks to its intrinsic versatility, Blender has already been applied to a number of different problems, from the medical field [21] to industrial applications [22], and to the inverse lighting problem itself [10].

2.1 Non-dominated Sorting Genetic Algorithm

Due to the clashing of the multiple factors involved in interior lighting design, the resulting problem is multi-objective in nature. In contrast to single-objective optimization problems, where there is usually a single global minimum solution to be found, the goal of multi-objective optimization is to determine the set of best trade-offs between all the conflicting criteria, namely the Pareto-optimal set.

Genetic algorithms are a popular class of computational models, which have been extensively applied in various multi-objective optimization domains over the last decades. As the name suggests, genetic algorithms mimic the working principles of natural genetics and natural selection to construct robust search algorithms that require minimal problem information. The algorithm structure is borrowed from the sexual genetic reproduction process, divided into three fundamental phases: selection (or reproduction), crossover and mutation. Starting from a random population of solutions, the algorithm iteratively computes fitness values for each solution in order to identify the best ones and to converge to a Pareto-optimal set. The selection operator is used to promote the best individuals in the population, by duplicating good solutions and discarding the bad ones, while keeping the population size constant. The crossover and mutation operators perform the creation of new solutions. The first randomly picks two solutions from the mating pool, and exchanges some portion between the two chromosomes to create two new solutions. Afterwards, the mutation operator introduces diversity in the population by randomly mutating the chromosomes obtained after crossover.

One of the key working principles of genetic algorithms is the chromosomal representation of a solution. The algorithm works with a coding of the decision variables, instead of the variables themselves, and choosing the right representation scheme is crucial to its performance [23]. The most traditional approach is to code the decision variables in a binary string of fixed length, which is a natural translation of real-life genetic chromosomes. Such strings are directly manipulated by the genetic operators, crossover and mutation, to obtain a new (and hopefully better) set of individuals. Another well-established method is the floating-point representation of chromosomes, where each solution is coded as a vector of floating-point numbers, and the crossover and mutation operators are adapted to handle real parameter values. For the algorithm presented in this work, we adopted a novel chromosomal representation of solutions, specifically tailored for lighting design optimization. As will be further described in Sect. 3.1, each individual represents a possible illumination configuration, and it is coded as an ordered set of variable length containing lamp specifications. Special crossover and mutation operators are implemented to handle this peculiar chromosomal representation.

The specific genetic algorithm adopted in the present paper is the Non-dominated Sorting Genetic Algorithm II (NSGA-II), introduced by Deb et al. [24], an elitist multi-objective genetic algorithm that performs well with real-world problems, producing Pareto-optimal solutions to the optimization problem. The elitist approach favours the best solutions of a population by giving them an opportunity to be directly carried over to the next generation. This strategy ensures that the best fitness values do not deteriorate during the evolution, and it enhances the probability of creating better offspring. The elitism is integrated in the algorithm by selecting the next-generation population of size N among the best individuals from the offspring and the parent population combined together (size 2N). This selection strategy, named crowded tournament selection, takes into account two criteria: the non-domination and the crowding distance of the individuals. The first is the non-domination rank of the solution in the population, and it is used to classify the entire 2N population into non-dominated fronts. The second criterion is a measure of the search space around the solution which is not occupied by any other solution in the population. Giving preference to solutions that are less crowded (with larger crowding distances) ensures a better spread among the solutions during the evolution. These conditions make sure that non-dominated individuals belonging to a high-rank front and residing in a less crowded area are selected to reproduce more often than others. The result of the algorithm is the set of non-dominated solutions of the whole final population, namely the Pareto front.
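As an illustration of the crowded tournament selection just described, the sketch below expresses the comparison rule in Python; the field names are ours, and the rank and crowding distance are assumed to have been computed by the usual NSGA-II non-dominated sorting.

from dataclasses import dataclass

@dataclass
class Individual:
    rank: int        # index of the non-dominated front it belongs to (lower is better)
    crowding: float  # crowding distance in objective space (higher is better)

def wins_tournament(a: Individual, b: Individual) -> bool:
    """Crowded comparison: prefer the lower front; break ties in favour of the
    individual residing in the less crowded region of the front."""
    return a.rank < b.rank or (a.rank == b.rank and a.crowding > b.crowding)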


3 The Proposed Model

The algorithm presented in this paper has been implemented in the form of a Blender script, structured in four groups of Python modules. The first group of modules, which relies on Blender's modelling features, performs the set-up of the simulation environment. The architectural interior scene of interest is represented inside the computer graphics software by means of geometric meshes and material shaders. The room structure (walls, floors, ceiling) and its furnishings are defined by the meshes, while colours, textures and reflectivity properties of the objects are specified through the shaders. Within the definition of the 3D model, the user has to provide a grid of points on the ceiling and the walls corresponding to the feasible set of coordinates for lamp positioning. This step is required because, depending on the room design, there might be some areas where lamp placement is not allowed, for example in the presence of windows, pillars, or supporting beams.

The evaluation of light quality in the 3D interior space is achieved by performing individual lighting measurements over some supporting elements, called samplers, composed of surfaces with plain materials, which are introduced in the scene by the second group of Python modules. The resulting rendered images are stored in HDR format, in order to preserve all the information of the dynamic range. Their pixel values are used, in the third group of modules, by the genetic algorithm to compute the actual fitness values of a solution, as will be detailed in Sect. 3.1. After evaluating the entire current population and selecting the mating pool, the genetic operators of crossover and mutation are applied to generate the offspring. The operators are specifically implemented for the presented case problem, as further described in Sect. 3.2, with the support of an evolutionary computation Python framework named DEAP [25], which allows any component of the genetic algorithm workflow to be freely customized. At the end of the evolution, the obtained result is the Pareto front of the final population, namely the set of non-dominated solutions, each one of them representing an optimal lighting configuration for the given interior environment. A decision maker is then used to select a small number of best representative solutions, as will be described in Sect. 3.3. Figure 1 shows a flowchart representing the overall optimization process.
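As a concrete illustration of this workflow, the fragment below sketches how a script could add a point lamp and render the scene through Blender's embedded Python API. It runs only inside Blender (the operator is named lamp_add in the 2.7x series contemporary with this work, light_add in later versions), and the lumen-to-energy scaling is a placeholder rather than a photometric calibration.

import bpy  # available only inside Blender's embedded Python interpreter

def place_lamp(location, lumens):
    """Add a point lamp at a given (x, y, z) position of the feasibility grid."""
    bpy.ops.object.lamp_add(type='POINT', location=location)
    lamp = bpy.context.object
    lamp.data.energy = lumens / 1000.0  # placeholder scaling, not photometric
    return lamp

def render_to(filepath):
    """Render the scene (samplers included) to an image file for measurement."""
    bpy.context.scene.render.filepath = filepath
    bpy.ops.render.render(write_still=True)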

3.1 Definition of the Objectives

We can first split the requirements of our problem into two different overall goals: on the one hand the achievement of the highest quality of the lighting for the interior space for which it has to be designed, on the other hand the minimization of the lighting costs. The two groups of objectives are clearly contrasting: it is easier to achieve a uniform and adequate level of illumination with an expensive illumination plant. The objective functions related to the quality of the lighting are computed using the rendering of a solution S performed by Blender, and measuring the illumination


Fig. 1 Flowchart summarizing the proposed optimization model

on a set of samplers placed in the interior space. A solution S is, similarly to [10], the coding for a set of lamps with their specifications and placement:

$$\mathcal{S} = \langle \mathbf{L}_1, \mathbf{L}_2, \ldots, \mathbf{L}_L \rangle, \tag{1}$$

$$\mathbf{L} = \langle d, \{C, W\}, \mathbf{v}, l, w, k \rangle. \tag{2}$$

The genetic code of a solution S is an ordered set of lamp descriptions L, in which d is a code identifying the type of commercial lamp, and the second parameter specifies the type of placement: C for ceiling and W for wall. The vector v specifies the 3D coordinates of the lamp placement, l is the intensity of the lamp in lumen, w its electrical efficiency in lumen/watt, and k the color temperature in kelvin degrees. The many possible combinations of parameters in a lamp description are restricted to a feasible subset according to a list of predefined lamp definitions. The lamp code d specifies a real commercial light fixture, and the possible combinations of intensity l, consumption w, color temperature k and type of placement are extracted from the specification sheets provided by the manufacturer. We define P as the ordered set of vectors $\mathbf{p}_i$, each of them made of N pixel values measured on one of the M samplers in the scene:

$$P(\mathcal{S}) = \langle \mathbf{p}_1(\mathcal{S}), \mathbf{p}_2(\mathcal{S}), \ldots, \mathbf{p}_M(\mathcal{S}) \rangle. \tag{3}$$
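A possible in-memory form of this encoding is sketched below; the field names are ours. Each individual is a list of lamp descriptions kept sorted by position, which is the property exploited by the crossover of Sect. 3.2.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Lamp:
    d: str                         # commercial lamp model code
    placement: str                 # 'C' (ceiling) or 'W' (wall)
    v: Tuple[float, float, float]  # 3D coordinates on the feasibility grid
    l: float                       # intensity [lumen]
    w: float                       # electrical efficiency [lumen/watt]
    k: float                       # color temperature [kelvin]

Solution = List[Lamp]  # kept ordered by position v, with L_MIN <= len <= L_MAX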

The desired level of illumination in the environment is specified with the target value $\hat{t}$, and in each sampler the deviation from the target is averaged:

$$\Delta P(\mathcal{S}) = \left\{ \frac{1}{N} \sum_{p \in \mathbf{p}} |p - \hat{t}| \;:\; \mathbf{p} \in P(\mathcal{S}) \right\}. \tag{4}$$

The quality of the lighting solution S is evaluated with the following three objective functions:


$$q_1(\mathcal{S}) = \frac{1}{M} \sum_{d \in \Delta P(\mathcal{S})} d, \tag{5}$$

$$q_2(\mathcal{S}) = \max \Delta P(\mathcal{S}), \tag{6}$$

$$q_3(\mathcal{S}) = \frac{1}{M} \sum_{\mathbf{p} \in P(\mathcal{S})} \sqrt{\frac{1}{N} \sum_{p \in \mathbf{p}} (p - \bar{p})^2}, \tag{7}$$

where $\bar{p}$ is the average value of the pixels in the sampler. The first two functions, in Eqs. (5) and (6), evaluate the compliance with the target level of light, respectively in the average and in the worst case among the samplers. Function $q_3$ in Eq. (7) is an evaluation of the overall uniformity of the lighting. Treating these three functions as separate fitnesses in a multi-objective optimization would be incorrect, because they are not conflicting: it can be easily verified that, in the limit case of an individual $\hat{\mathcal{S}}$ that illuminates all samplers exactly at the target level $\hat{t}$, we obtain $q_1(\hat{\mathcal{S}}) = q_2(\hat{\mathcal{S}}) = q_3(\hat{\mathcal{S}}) = 0$. Therefore, the fitness function for lighting quality is combined from the three objective functions:

$$f_q(\mathcal{S}) = w_1^{(q)} q_1(\mathcal{S}) + w_2^{(q)} q_2(\mathcal{S}) + w_3^{(q)} q_3(\mathcal{S}), \tag{8}$$

where $w_1^{(q)} + w_2^{(q)} + w_3^{(q)} = 1$.

The second group of objective functions to minimize is related to the cost of the lighting solution, both in terms of the initial cost for its realization and of the running cost when the lights are switched on. The realization cost is dominated by the electrical wiring, and its dependence on the solution S is related to the placement of the lamps. We compute an estimate of this dependency using an approximate solution of the rectilinear Steiner problem, appropriate for the wiring pattern adopted in houses. The general Steiner problem asks for a minimum spanning network connecting a given set of points, allowing for the introduction of new auxiliary points so that a spanning network of all the points will be shorter than otherwise possible [26]. In a rectilinear Steiner tree only horizontal or vertical line segments in a plane can connect the points [27], a situation of great importance in VLSI design [28]. Note that an efficient minimum spanning computation is out of scope here: for the purpose of evaluating the dependency of the installation cost on the lighting solution a rough approximation is enough, therefore we adopted a simple heuristic called the refined single-trunk tree [29]. In the “simple” single-trunk tree it is assumed that there is a main trunk that runs horizontally or vertically, and all points are connected with stems orthogonal to the trunk. In the refined version, for each point it is checked whether it is shorter to connect it to the trunk, or to the nearest stem connecting another point to the trunk. If the latter holds, the point is connected to this stem. Figure 2 shows the different structures resulting from the two approaches in the computation of the single-trunk Steiner tree, for an example of lighting configuration.

For simplicity's sake, let $\mathbf{v}_{\mathbf{L}_i} = [x_i, y_i]^T$ be the 2D position of the lamp $\mathbf{L}_i \in \mathcal{S}$ on the ceiling plane, and $y_0$ the coordinate of the trunk, supposed to run horizontally


Fig. 2 Difference between a simple single-trunk Steiner tree (a) and a refined single-trunk Steiner tree (b), connecting all the lamps in a room

all across the room. As mentioned above, the exact computation of the optimal tree is not the aim of this work, therefore all the lamps can be considered on the same 2D plane. The contribution of lamp $\mathbf{L}_i$ to the length of the approximate rectilinear Steiner tree is the following:

$$\ell(\mathbf{L}_i) = \begin{cases} y_i - y_0 & \text{if } \mathbf{v}_{\mathbf{L}_i} \text{ is closer to the trunk,} \\ x_i - x_v & \text{if } \mathbf{v}_{\mathbf{L}_i} \text{ is closer to the vertical stem connecting } \mathbf{v}_{\mathbf{L}_v}, \\ y_i - y_h & \text{if } \mathbf{v}_{\mathbf{L}_i} \text{ is closer to the horizontal stem connecting } \mathbf{v}_{\mathbf{L}_h}. \end{cases} \tag{9}$$
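The sketch below gives one possible reading of this heuristic; it is a simplification (only vertical stems are candidates for reuse, and the connection is measured to the stem's x coordinate), which is sufficient for the rough cost estimate required here.

import math

def refined_single_trunk_length(points, y0):
    """Approximate total stem length for lamps at 2D points, with a horizontal
    trunk at y = y0. Each lamp is connected either straight to the trunk or
    sideways to the nearest already-placed vertical stem, whichever is shorter."""
    total = 0.0
    stem_xs = []  # x coordinates of vertical stems already connected to the trunk
    for x, y in sorted(points):
        to_trunk = abs(y - y0)
        to_stem = min((abs(x - sx) for sx in stem_xs), default=math.inf)
        if to_trunk <= to_stem:
            total += to_trunk
            stem_xs.append(x)  # this lamp creates a new stem others may reuse
        else:
            total += to_stem
    return total

For instance, refined_single_trunk_length([(0.0, 1.0), (0.2, 3.0)], y0=0.0) connects the second lamp to the first lamp's stem, giving 1.2 instead of 4.0.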

The cost function derived from the electrical wiring is computed by cumulating all stem lengths:

$$c_1(\mathcal{S}) = \frac{1}{\sqrt{A}} \sum_{\mathbf{L} \in \mathcal{S}} \ell(\mathbf{L}), \tag{10}$$

where A is the area in m² of the interior environment to be lighted. Energy consumption represents the second objective in the group of costs to minimize, and it is quantified as the overall power consumption of the lamps (measured in watts) divided by the volume of the room:

$$c_2(\mathcal{S}) = \frac{1}{V} \sum_{\mathbf{L} \in \mathcal{S}} C_{\mathbf{L}}, \tag{11}$$

where $C_{\mathbf{L}}$ is the amount of watts consumed by the lamp L of the individual S, and V the volume of the interior space in m³. The fitness function related to costs is the weighted sum of the installation cost and the energy consumption cost:

$$f_c(\mathcal{S}) = w_1^{(c)} c_1(\mathcal{S}) + w_2^{(c)} c_2(\mathcal{S}), \tag{12}$$


where $w_1^{(c)} + w_2^{(c)} = 1$. Finally, the bi-dimensional fitness function used in the genetic algorithm is the following:

$$\mathbf{f}(\mathcal{S}) = \begin{bmatrix} f_q(\mathcal{S}) \\ f_c(\mathcal{S}) \end{bmatrix}. \tag{13}$$
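To make the objective pipeline concrete, here is a compact sketch of Eqs. (4)–(13), under the assumption that the rendered sampler readings are available as NumPy arrays; the weights and helper names are illustrative.

import math
import numpy as np

def quality_fitness(samplers, t_hat, wq=(1/3, 1/3, 1/3)):
    """samplers: M arrays of N pixel values each; t_hat: target illumination."""
    dP = [np.abs(p - t_hat).mean() for p in samplers]     # Eq. (4)
    q1 = float(np.mean(dP))                               # Eq. (5): average deviation
    q2 = float(np.max(dP))                                # Eq. (6): worst sampler
    q3 = float(np.mean([p.std() for p in samplers]))      # Eq. (7): non-uniformity
    return wq[0] * q1 + wq[1] * q2 + wq[2] * q3           # Eq. (8)

def cost_fitness(stem_lengths, lamp_watts, area, volume, wc=(0.5, 0.5)):
    c1 = sum(stem_lengths) / math.sqrt(area)              # Eq. (10): wiring cost
    c2 = sum(lamp_watts) / volume                         # Eq. (11): consumption
    return wc[0] * c1 + wc[1] * c2                        # Eq. (12)

def fitness(samplers, t_hat, stem_lengths, lamp_watts, area, volume):
    """Eq. (13): the bi-objective vector handed to NSGA-II."""
    return (quality_fitness(samplers, t_hat),
            cost_fitness(stem_lengths, lamp_watts, area, volume))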

3.2 Genetic Operations

At each step t of the evolution there is a population $G^{(t)} = \{\mathcal{S}_i\}$, whose elements are individuals coding a lighting solution, as described in Eqs. (1) and (2). The population size $|G| = N$ is constant during the evolution. The initial population $G^{(0)}$ is generated randomly. Note that the number L of lamp descriptions in a single solution is not fixed, but constrained: $L_{\mathrm{MIN}} \leq L \leq L_{\mathrm{MAX}}$. The variation of the population is based on two fundamental operations: crossover and mutation. Given two individuals:

$$\mathcal{S}_1 = \langle \mathbf{L}_1^{(1)}, \ldots, \mathbf{L}_{L^{(1)}}^{(1)} \rangle, \tag{14}$$

$$\mathcal{S}_2 = \langle \mathbf{L}_1^{(2)}, \ldots, \mathbf{L}_{L^{(2)}}^{(2)} \rangle, \tag{15}$$

we define as two-points crossover the following function:

$$\chi(\mathcal{S}_1, \mathcal{S}_2) = \left( \begin{array}{l} \langle \mathbf{L}_1^{(1)}, \ldots, \mathbf{L}_i^{(1)}, \mathbf{L}_{i+1}^{(2)}, \ldots, \mathbf{L}_j^{(2)}, \mathbf{L}_{j+1}^{(1)}, \ldots, \mathbf{L}_{L^{(1)}}^{(1)} \rangle, \\ \langle \mathbf{L}_1^{(2)}, \ldots, \mathbf{L}_i^{(2)}, \mathbf{L}_{i+1}^{(1)}, \ldots, \mathbf{L}_j^{(1)}, \mathbf{L}_{j+1}^{(2)}, \ldots, \mathbf{L}_{L^{(2)}}^{(2)} \rangle \end{array} \right), \tag{16}$$

where i and j are random integers such that $1 < i < j < \min\{L^{(1)}, L^{(2)}\}$. Note that χ takes two solutions as input and returns two modified solutions. The operator can guarantee consistent results thanks to the fact that the set of lamps in a solution is ordered by their locations v in the interior space. This expedient implies that the choice of i and j is tantamount to partitioning the room into simply-connected spaces. Therefore the crossover is able to propagate the topological relationships of the parent solutions to the new individuals.

The mutation function ω operates on a single individual, and it is the composition of two different levels of mutation. The upper level is that of the ordered set of lamp descriptions, which is mutated as follows:

$$\omega_U(\mathcal{S}) = \begin{cases} \mathcal{S} \setminus \mathbf{L}_i & \text{if } r < 0.5, \\ \mathcal{S} \cup \{\mathbf{L}_{L+1}\} & \text{if } r \geq 0.5, \end{cases} \tag{17}$$


where r, here and in all the following equations, is a random number in the range [0, 1], and i is a random integer in the range [1, L]. The lamp description $\mathbf{L}_{L+1}$ is a new lamp generated randomly from the set of possible lamps. Mutation at the lower level, that of a single lamp description, is given by:

$$\omega_L(\mathbf{L}) = \begin{cases} \langle d, \{C, W\}, \mathbf{v} + \Delta\mathbf{v}, l, w, k \rangle & \text{if } r > \pi_p, \\ \langle d, \{C, W\}, \mathbf{v}, \tilde{l}, w, k \rangle & \text{if } r > \pi_l, \\ \langle d, \{C, W\}, \mathbf{v}, l, w, \tilde{k} \rangle & \text{if } r > \pi_k, \end{cases} \tag{18}$$

where $\tilde{l}$ is a new level of lighting, selected randomly from the possible light intensities for the lamp of type d, and similarly for $\tilde{k}$. The displacement Δv of the lamp position is computed in a random direction from the center v, with a random offset within a neighborhood that is decreased in the course of the evolution. The parameters $\pi_{\{p,l,k\}}$ are the mutation probabilities for, respectively, lamp position, lighting level, and color temperature.

Crossover and mutation are applied to the solutions selected from the current population $G^{(t)}$ using the concept of crowded tournament selection, introduced in Sect. 2.1. We compare two randomly selected individuals and keep one winner, taking care that each solution of $G^{(t)}$ is never selected for more than two different couples. The comparison metric requires first the partitioning of $G^{(t)}$ into progressively non-dominated Pareto fronts, and a related ranking r(S) of the solutions:

$$G = F_1 \cup F_2 \cup \cdots \tag{19}$$

$$r(\mathcal{S}) = i \quad \text{if } \mathcal{S} \in F_i. \tag{20}$$

In addition, each solution has an associated crowding distance c(S), measuring how crowded with other solutions the neighborhood of the given solution is. We skip the details of this computation, which follows the conventional niche count metric [30]. By combining r(S) and c(S) we define a comparison operator ≺, indicating when a solution $\mathcal{S}_i$ wins a tournament against another solution $\mathcal{S}_j$:

$$\mathcal{S}_i \prec \mathcal{S}_j \quad \text{if} \quad r(\mathcal{S}_i) < r(\mathcal{S}_j) \;\lor\; \left( r(\mathcal{S}_i) = r(\mathcal{S}_j) \land c(\mathcal{S}_i) > c(\mathcal{S}_j) \right). \tag{21}$$

For the construction of the set M of mating couples, two random permutations of $[1 \cdots N]$ are generated: $\left( i_1^{(1)} \cdots i_N^{(1)} \right)$ and $\left( i_1^{(2)} \cdots i_N^{(2)} \right)$. Each couple is made of two winning solutions of the tournament:


$$M = \left\{ \left( \arg\min_{\prec}\left\{ \mathcal{S}_{i_1^{(1)}}, \mathcal{S}_{i_2^{(1)}} \right\},\; \arg\min_{\prec}\left\{ \mathcal{S}_{i_3^{(1)}}, \mathcal{S}_{i_4^{(1)}} \right\} \right), \left( \arg\min_{\prec}\left\{ \mathcal{S}_{i_1^{(2)}}, \mathcal{S}_{i_2^{(2)}} \right\},\; \arg\min_{\prec}\left\{ \mathcal{S}_{i_3^{(2)}}, \mathcal{S}_{i_4^{(2)}} \right\} \right), \ldots \right\}. \tag{22}$$

Equation (22) ensures that the two solutions in a couple are always different, and that the same solution can appear in no more than two different couples. Note that $|M| = N/2$, and the strategy in Eq. (22) requires N to be a multiple of 4. The operator χ is applied to the couples of M with the crossover probability, and the operator ω can be applied to both elements of a couple, with the mutation probability. Let us express with the composite operator φ the generation of M from G, followed by the application of crossover and mutation, and the flattening of the couples into a set of new individual solutions, which will again be of size N. One complete step of evolution can then be described as:

$$G^{(t+1)} \leftarrow \left\lfloor G^{(t)} \cup \phi\left(G^{(t)}\right) \right\rfloor_{\prec}^{N}, \tag{23}$$



where $\lfloor \cdot \rfloor_{\prec}^{N}$ is the reduction of a set to its first N elements, ranked with the comparison operator ≺. The size N of the population remains constant during the evolution. When $t = t_f$, the final generation programmed for the evolution, a Pareto set $F_1$ is available as the non-dominated set of solutions in $G^{(t_f)}$.

In the presented problem of lighting optimization there are some conditions on the design process to be satisfied, therefore a constraint handling method has to be considered as well. The constraints in question concern the positioning of the lamps inside the interior environment:

• a lamp must be placed inside the room and in contact with the room surface;
• two lamps cannot be placed in the same location;
• a lamp should be mounted on the walls or on the ceiling in accordance with its model of light fixture;
• some areas of the room are not suitable for lamp placement.

The constraint specifications are provided to the system within the 3D model of the environment itself. As stated at the beginning of Sect. 3, the walls and ceiling are structured as a discrete grid of vertices, each representing a feasible position for a lamp. With this approach, the set of constraints can be effortlessly reformulated for different experiments, ensuring flexibility in the design process. Since the satisfaction of the above constraints is mandatory for the problem, they can be referred to as hard constraints. To handle them, we adopted a strategy based


on preserving the feasibility of solutions, where the crossover and mutation operations are designed to always produce feasible offspring from feasible individuals. By construction, the mutation operator is only able to produce solutions that satisfy all the constraints. Conversely, if the crossover generates an infeasible solution, the latter is discarded and the operation is repeated with a different choice of crossover points.
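A compact sketch of the variation step of Eqs. (16)–(18), including the feasibility-preserving retry on crossover, is given below. It operates on the list-of-lamps encoding sketched in Sect. 3.1; the is_feasible predicate and the random_lamp factory are assumed to be supplied by the environment model, and solutions are assumed to contain at least three lamps.

import random

def lamp_key(lamp):
    return lamp.v  # solutions are kept ordered by lamp position

def two_point_crossover(s1, s2):
    """Eq. (16): swap the middle segment between two position-ordered solutions."""
    n = min(len(s1), len(s2))
    i, j = sorted(random.sample(range(1, n), 2))
    c1 = sorted(s1[:i] + s2[i:j] + s1[j:], key=lamp_key)
    c2 = sorted(s2[:i] + s1[i:j] + s2[j:], key=lamp_key)
    return c1, c2

def crossover_feasible(s1, s2, is_feasible, max_tries=100):
    """Repeat the crossover with new cut points until both offspring are feasible."""
    for _ in range(max_tries):
        c1, c2 = two_point_crossover(s1, s2)
        if is_feasible(c1) and is_feasible(c2):
            return c1, c2
    return s1, s2  # fall back to the parents

def mutate_set(s, random_lamp, l_min, l_max):
    """Eq. (17): drop a random lamp or append a freshly generated one."""
    s = list(s)
    if random.random() < 0.5 and len(s) > l_min:
        del s[random.randrange(len(s))]
    elif len(s) < l_max:
        s.append(random_lamp())
    return sorted(s, key=lamp_key)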

3.3 Partitioning the Final Pareto Set

As in most multi-objective optimization problems, our lighting design system typically generates too many solutions in the final Pareto set, and selecting a single one that best reflects the preferences of the architect can be a daunting task. A considerable amount of research effort has been devoted to alleviating this inconvenience in the general multi-objective case, with several proposed methods that reduce the Pareto-optimal set to a set of solutions that is attractive to the decision maker. A large part of the proposed methods assumes that the preferences of the decision maker are well known in advance, and can be expressed in mathematical terms and incorporated in the optimization algorithm [8, 9]. The situation of architectural lighting design is different. Although the objectives defined in our optimization problem capture important requirements of the design process, there are aesthetic and stylistic components of the design process that elude mathematical formulations. The great advantage of a tool like the one proposed here is for the architect to drastically restrict the search space of solutions, and to concentrate his or her creativity on a small number of simulated solutions. It is difficult to prescribe in advance any preferred part of the Pareto front: in principle, the entire front can offer attractive solutions to the lighting designers, and the choice is up to their expertise and aesthetic disposition. For this reason we focused on methods commonly classified as a posteriori [31], where the selection of a small subset of solutions is made on the entire final approximate Pareto front, computed without the incorporation of preferences from the decision maker.

First, we partitioned the set of solutions into a predefined number of clusters $N_c$, using the subtractive clustering algorithm [32, 33]. Let us define O as the set of vectors in the fitness space of the final solutions F:

$$O = \{ \mathbf{f}(\mathcal{S}) \mid \mathcal{S} \in F \}. \tag{24}$$

The vectors are normalized with all dimensions in the range [0, 1]; we call $\bar{O}$ the set of normalized vectors. For each solution a “potential” function ψ is introduced, which captures the neighborhood size of the solutions:

$$\psi^{(0)}(\mathbf{o}_i) = \sum_{\mathbf{o} \in \bar{O}} e^{-\frac{4}{r_I^2} \left\| \mathbf{o}_i - \mathbf{o} \right\|^2}, \quad \mathbf{o}_i \in \bar{O}. \tag{25}$$


The superscript (0) indicates that Eq. (25) provides the initial values of the potentials, which are updated recursively, each time identifying as a cluster center the solution with the largest potential:

$$\mathbf{c}_k = \arg\max_{\mathbf{o} \in \bar{O}} \psi^{(k)}(\mathbf{o}), \tag{26}$$

$$\psi^{(k+1)}(\mathbf{o}_i) = \psi^{(k)}(\mathbf{o}_i) - e^{-\frac{4}{r_O^2} \left\| \mathbf{o}_i - \mathbf{c}_k \right\|^2} \psi^{(k)}(\mathbf{c}_k), \quad \mathbf{o}_i \in \bar{O}. \tag{27}$$

Equation (26) computes the center of the k-th cluster; the recursive loop is terminated when $k = N_c$, the predefined number of clusters. The parameters $r_I$ in Eq. (25) and $r_O$ in Eq. (27) act effectively as radii, influencing, respectively, the range of the neighborhood of a solution and the closeness of distinct clusters. Their values are computed as a function of the number of desired clusters $N_c$:

$$r_I = \frac{2}{N_c}, \tag{28}$$

$$r_O = \frac{2.5}{N_c}. \tag{29}$$

All solutions S in F are partitioned into the clusters according to the distance of their vectors in fitness space to the cluster centers. Calling $\bar{\mathcal{S}}^{(k)}$ the solution in F that is the center of cluster k, corresponding to the normalized vector $\mathbf{c}_k$, the partitioning is done as follows:

$$Q = \left\{ \left\{ \mathcal{S} : \arg\min_{k \in [1..N_c]} \left\| \mathbf{f}(\mathcal{S}) - \mathbf{f}(\bar{\mathcal{S}}^{(k)}) \right\| = 1 \right\}, \ldots, \left\{ \mathcal{S} : \arg\min_{k \in [1..N_c]} \left\| \mathbf{f}(\mathcal{S}) - \mathbf{f}(\bar{\mathcal{S}}^{(k)}) \right\| = N_c \right\} \right\}. \tag{30}$$

For each cluster, a central representative is picked, so that the final set of solutions presented to the designer is very small.
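The whole a posteriori selection of Eqs. (24)–(30) fits in a few lines of NumPy; the sketch below returns the indices of the chosen centers and the cluster assignment of every solution, under the assumption that the fitness vectors have already been normalized to [0, 1].

import numpy as np

def subtractive_clustering(O_bar, n_clusters):
    """O_bar: array of shape (n, 2) of normalized bi-objective fitness vectors."""
    r_i, r_o = 2.0 / n_clusters, 2.5 / n_clusters                # Eqs. (28)-(29)
    d2 = ((O_bar[:, None, :] - O_bar[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    psi = np.exp(-4.0 * d2 / r_i ** 2).sum(axis=1)               # Eq. (25)
    centers = []
    for _ in range(n_clusters):
        k = int(np.argmax(psi))                                  # Eq. (26)
        centers.append(k)
        psi = psi - psi[k] * np.exp(-4.0 * d2[:, k] / r_o ** 2)  # Eq. (27)
    assignment = np.argmin(d2[:, centers], axis=1)               # Eq. (30)
    return centers, assignment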

4 Results

As discussed in Sect. 1, a satisfactory lighting quality depends on the visual tasks that are to be performed in the interior space, and on specific requirements of visual interest within the space. The translation of these user requirements into the model is basically achieved by means of the placement of samplers in the areas most critical from the lighting point of view, and by imposing the target illumination level.


All genetic parameters of the model have been tuned in a preliminary phase on simpler and smaller rooms, and these settings did not require further tweaking in the case studies eventually considered. The two case environments chosen for evaluating our lighting optimization algorithm empirically are complex architectural interiors, with irregular and non-convex planimetries.

The first case study is a reproduction of a coffee shop; the dimensions of the environment are 14 × 10 × 2.8 m. The architecture of this room is characterized by a narrow dining area leading to a wider space with a lounge room and a bar counter. A total of 13 samplers have been used to evaluate the illumination levels, placed in key areas where light should create visual interest. The genetic algorithm has been run with a population of 200 individuals; the final Pareto front is shown in Fig. 3, where it is possible to appreciate how the solutions smoothly span a large front of the two fitness functions. In the upper plot the complete population of final solutions is shown, together with the Pareto front. The lower plot contains the partitioning of the final front into three clusters, and the solutions highlighted in red are the best representatives of each cluster. The solutions in clusters 1 and 2 are the solutions most qualified for, respectively, a lighting plan that privileges optimal cost and energy, or one that gives more importance to the quality of the illumination.

The plots in the left column of Fig. 4 show the structure of the Steiner tree wiring the lighting solutions: in (a) the solution with optimal light quality, in (c) that with the lowest cost. It can be seen that the overall wiring is shorter in (c) than in (a). The plots in the right column of Fig. 4 are the isophotes computed in the room at a surface 1.5 m above the floor, for the solution with the best quality in (b), and for that with the lowest costs in (d). The target level is well approximated in solution (b), even close to the internal walls. In solution (d) the overall level is slightly below the target, especially in the center of the shop; the uniformity is still acceptable, except at the lower side of the horizontal internal wall, an area difficult to light properly with few lamps. Figure 5 shows photorealistic renderings of the interior space from two different points of view: (a) and (b) refer to the solution with optimal light quality, while

Fig. 3 Final Pareto front of the optimization of the coffee shop case. In the upper plot the complete population at the end of the optimization is shown, together with the Pareto front. In the lower plot the solutions are grouped into three clusters, and for each cluster the best representative is marked in red


Fig. 4 Results of the optimization for the coffee shop case. Plots on the left are schemes of the Steiner tree for the electrical wiring of the light fixtures. Plots on the right are isophotes at the 1.5 m level; colors are coded with green marking a perfect match of the desired illumination target, and hues towards yellow, orange, red, and purple marking progressive displacements from the target level. Plots a and b refer to the solution with the best quality, plots c and d to that with the lowest costs. In all plots the internal walls and pillars are shown

(c) and (d) refer to the solution with minimum cost. It can be seen that even this solution, which saves 67% of the energy consumption of the previous solution, has an acceptable level of lighting with fair uniformity.

The second case study is a reproduction of a hall in a shopping mall, with dimensions of 12 × 11 × 4.0 m, composed of a central area connected to a secondary small shop. The main space contains a column with display stands and an area serving as a lounge room, while the secondary area for the small shop has a lower ceiling level and contains several product racks and a counter with the cash register. A total of 14 samplers have been used, with a genetic population of 200 individuals. As in the previous case study, there is a wide and smooth coverage of the Pareto front in Fig. 6, although some solutions of this case study reached a poor level of quality fitness compared to the previous case. This result can be explained by the brighter shading of the walls and floors in the mall environment (pale yellow and white), reflecting more light than the deep red and beige color tones of the coffee shop, which require more intense light sources in order to reach the same perceived illumination level. Moreover, the construction of the suboptimal Steiner tree is less straightforward than in the coffee shop case, because it is not possible to set up a trunk exploiting symmetry


Fig. 5 Rendering of two views inside the coffee shop, lighted with two different solutions, the representative of the cluster with best quality in (a) and (b), and the representative of lower costs in (c) and (d)

Fig. 6 Final Pareto front of the optimization of the shopping mall case. In the upper plot the complete population at the end of the optimization is shown, together with the Pareto front. In the lower plot the solutions are grouped into three clusters, and for each cluster the best representative is marked in red

along the horizontal dimension, as visible in the left column of Fig. 7. Nonetheless, the visual results are rather satisfying, as shown in the photorealistic renderings of two of the best representative solutions in Fig. 8, the first one preferring light quality and uniformity, the second one considering an optimal level of energy saving and costs. The solution illustrated in (c) and (d), even though it is clearly darker than the other, achieves an energy saving as high as 66% with respect to the light configuration in (a) and (b).


Fig. 7 Results of the optimization for the shopping mall case. Plots on the left are schemes of the Steiner tree for the electrical wiring of the light fixtures. Plots on the right are isophotes at the 1.5 m level; colors are coded with green marking a perfect match of the desired illumination target, and hues towards yellow, orange, red, and purple marking progressive displacements from the target level. Plots a and b refer to the solution with the best quality, plots c and d to that with the lowest costs. In all plots the internal walls and pillars are shown

This case study demonstrates how the presented algorithm can be a suitable tool to effectively design light configurations for a frequently changing environment, such as a shopping mall, with minimum effort from the user.

5 Conclusions

In this paper we described a system for the inverse design of interior lighting, based on the integration of the 3D computer graphics software Blender, an NSGA-II based multi-objective genetic algorithm, and a post-selection of the best solutions based on


Fig. 8 Rendering of two views inside the shopping mall, lighted with two different solutions, the representative of the cluster with best quality in (a) and (b), and the representative of lower costs in (c) and (d)

cluster analysis. The system takes as input an arbitrary interior environment, including realistic furniture and materials, with the description of the lighting requirements in terms of the desired average illumination and the placement of samplers in the key locations of the interior space. We grouped two conceptually different sets of objective functions: on one side those contributing to the pleasantness of the lighting, and on the other side those contributing to the expenses in the realization and the functioning of the lighting system. In the implementation described here, we chose specific objectives, common in the lighting design process: for the first group, the compliance with the target illumination level and the uniformity of the light distribution in the interior space; for the second group, the overall length of the electrical wiring and the consumption of electric power. The generality of our approach allows for the easy addition of other requirements, like, for example, a desired distribution of color spectra, or glare avoidance. The cases presented as results demonstrate the effectiveness of the system in helping the process of interior lighting design.

References

1. Gordon, G.: Interior Lighting for Designers. Wiley, New York (2014)
2. Livingston, J.: Designing With Light: The Art, Science, and Practice of Architectural Lighting Design. Wiley, New York (2015)


3. Wunderlich, C.H.: Light and economy: an essay about the economy of prehistoric and ancient lamps. In: Chranovski, L. (ed.) Lychnological News, pp. 251–264. LychnoServices, Hauterive (Suisse) (2003)
4. Kahraman, C. (ed.): Computational Intelligence Systems in Industrial Engineering With Recent Theory and Applications. Atlantis Press, Paris (2012)
5. Commercial buildings energy consumption survey. Technical report, U.S. Energy Information Administration (2012)
6. Bertoldi, P., Hirl, B., Labanca, N.: Energy efficiency status report. Technical report, European Commission—Institute for Energy and Transport (2012)
7. Sansoni, P., Farini, A., Mercatelli, L. (eds.): Sustainable Indoor Lighting. Springer, Berlin (2015)
8. Jaimes, A.L., Coello, C.A.C.: Interactive approaches applied to multiobjective evolutionary algorithms. In: Doumpos, M., Grigoroudis, E. (eds.) Multicriteria Decision Aid and Artificial Intelligence: Theory and Applications, pp. 191–207. Wiley, New York (2013)
9. Bechikh, S., Kessentini, M., Said, L.B., Ghédira, K.: Preference incorporation in evolutionary multiobjective optimization: a survey of the state-of-the-art. Adv. Comput. 98, 141–207 (2015)
10. Plebe, A., Cutello, V., Pavone, M.: Evolving illumination design following genetic strategies. In: Sabourin, C., Merelo, J.J., Warwick, K., Madani, K., O'Reilly, U.M. (eds.) 9th International Joint Conference on Computational Intelligence, pp. 222–233. Scitepress (2017)
11. Larson, G.W., Shakespeare, R.: Rendering with Radiance: The Art and Science of Lighting Visualization. Morgan Kaufmann, San Francisco, CA (1997)
12. Baltes, H. (ed.): Inverse Source Problems in Optics. Princeton University Press, Princeton, NJ (1978)
13. Kawai, J., Painter, J.S., Cohen, M.F.: Radioptimization: goal based rendering. In: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, pp. 147–154 (1993)
14. Schoeneman, C., Dorsey, J., Smits, B., Arvo, J., Greenberg, D.: Painting with light. In: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, pp. 143–146 (1993)
15. Patow, G., Pueyo, X.: A survey of inverse rendering problems. Comput. Graph. Forum 22, 663–687 (2003)
16. Papalambros, P.Y., Wilde, D.J.: Principles of Optimal Design. Cambridge University Press, Cambridge, UK (1988)
17. Andersen, M., Gagne, J.M., Kleindienst, S.: Interactive expert support for early stage full-year daylighting design: a user's perspective on Lightsolve. Autom. Constr. 35, 338–352 (2013)
18. Gagne, J., Andersen, M.: A generative facade design method based on daylighting performance goals. J. Build. Perform. Simul. 5, 141–154 (2012)
19. Futrell, B., Ozelkan, E.C., Brentrup, D.: Optimizing complex building design for annual daylighting performance and evaluation of optimization algorithms. Energy Build. 92, 234–245 (2014)
20. Moylan, K., Ross, B.J.: Interior illumination design using genetic programming. In: Johnson, C., Carballal, A., Correia, J. (eds.) Proceedings IV Conference on Evolutionary and Biologically Inspired Music, Sound, Art and Design, pp. 148–160 (2015)
21. Daenzer, S., Montgomery, K., Dillmann, R., Unterhinninghofen, R.: Real-time smoke and bleeding simulation in virtual surgery. In: Westwood, J.D., Haluck, R.S., Hoffman, H.M., Mogel, G.T., Phillips, R., Robb, R.A., Vosburgh, K.G. (eds.) Medicine Meets Virtual Reality, pp. 94–99. IOS Press, Amsterdam (2007)
22. Plebe, A., Grasso, G.: Particle physics and polyedra proximity calculation for hazard simulations in large-scale industrial plants. In: American Institute of Physics Conference Proceedings, pp. 090003-1–090003-4 (2016)
23. Janikow, C.Z., Michalewicz, Z.: An experimental comparison of binary and floating point representations in genetic algorithms. In: Proceedings of the 4th International Conference on Genetic Algorithms, pp. 31–36 (1991)


24. Deb, K., Agrawal, S., Pratap, A., Meyarivan, T.: A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In: International Conference on Parallel Problem Solving From Nature, pp. 849–858 (2000)
25. Fortin, F.A., De Rainville, F.M., Gardner, M.A., Parizeau, M., Gagné, C.: DEAP: evolutionary algorithms made easy. J. Mach. Learn. Res. 13, 2171–2175 (2012)
26. Hwang, F.K., Richards, D.S., Winter, P.: The Steiner Tree Problem. North Holland, Amsterdam (1992)
27. Hanan, M.: On Steiner's problem with rectilinear distance. SIAM J. Appl. Math. 14, 255–265 (1966)
28. Kahng, A.B., Robins, G.: On Optimal Interconnections for VLSI. Springer, Berlin (1994)
29. Chen, H., Qiao, C., Zhou, F., Cheng, C.K.: Refined single trunk tree: a rectilinear Steiner tree generator for interconnect prediction. In: Proceedings of the International Workshop on System-Level Interconnect Prediction, pp. 85–89 (2002)
30. Deb, K.: Multi-objective Optimization Using Evolutionary Algorithms. Wiley, New York (2001)
31. Zio, E., Bazzo, R.: Multiobjective optimization of the inspection intervals of a nuclear safety system: a clustering-based framework for reducing the Pareto front. Ann. Nucl. Energy 37, 798–812 (2010)
32. Chiu, S.L.: Fuzzy model identification based on cluster estimation. J. Intell. Fuzzy Syst. 2, 267–278 (1994)
33. Zio, E., Bazzo, R.: A comparison of methods for selecting preferred solutions in multiobjective decision making (4), 23–43

Mining of Keystroke and Mouse Dynamics to Increase the Engagement of Students with Programming Assignments

Mario Garcia Valdez, Juan-J. Merelo, Amaury Hernandez Aguila and Alejandra Mancilla Soto

M. G. Valdez (B) · A. H. Aguila · A. M. Soto
Department of Graduate Studies, Instituto Tecnologico de Tijuana, Tijuana, Mexico
e-mail: [email protected]

J.-J. Merelo
Geneura Team and CITIC, University of Granada, Granada, Spain
e-mail: [email protected]

© Springer Nature Switzerland AG 2019
C. Sabourin et al. (eds.), Computational Intelligence, Studies in Computational Intelligence 829, https://doi.org/10.1007/978-3-030-16469-0_3

Abstract The aim of the experiments described in this paper is to evaluate the use of keyboard and mouse dynamics as an appropriate non-obtrusive sensory input for a system that is sensitive to the affective state of its user. Our motivation for starting this research line has been the lack of tools and methodologies for taking this affective state into account in learning environments. In an ideal situation, when instructors have to choose from a collection of programming assignments, they should consider the student's affective state and skills in order to select a learning resource with the appropriate level of difficulty. However, neither the data nor the ability to process it are present in current learning management systems. This work tries to address this problem by focusing on the capture and pre-processing of data that is going to be fed to several machine learning techniques, with the objective of classifying the affective states of users with different levels of expertise when learning a new programming language. We capture student data from a web-based platform, a learning management system where students interact with programming exercises. We introduce in this paper a series of pre-processing techniques that are able to convert data from keyboard and mouse dynamics, captured from students as they were coding basic Python programming assignments, into feature vectors, which have later been used for the classification into five affective states: boredom, frustration, distraction, relaxation and engagement. The following classification algorithms have been evaluated: k-nearest neighbors, feed-forward neural networks, naïve Bayes classifier, the J-48 tree induction algorithm, deep learning, random forest, gradient boosted trees and naïve Bayes kernel. The best accuracy was around 78% and was achieved by the tree induction algorithms. Results show that data gathered from readily available, non-


obtrusive sensors can be used successfully as another input to hybrid classification models in order to predict an individual's affective states, and these in turn can be used to make students more engaged and thus learn more efficiently.

Keywords Affective computing · Neural networks · Machine learning · Learning analytics

1 Introduction

Programming courses are often regarded as difficult [1–3] and there is a common conception, backed by data, that they often have a high failure rate [4]. Jenkins [5] argues that this high rate of failure has multiple factors:

– There is a deep relation with the expectations, attitudes, and previous experiences of the teaching staff and students.
– The nature of the subject involves the learning of new abstract constructs, syntax and tools.
– Groups are often heterogeneous, and thus it is difficult to design courses that are beneficial for everyone.

Many students begin to program in their first year of university, which means they are tackling a totally new topic that does not respond to their habitual study approaches. Dijkstra [6] argues that the subject of programming is very problem-solving intensive, which means that high precision is required, since even the slightest deviation from the syntax can render a program totally worthless. These are hurdles that often frustrate novice students. This is aggravated by the fact that the time devoted to a particular topic is fixed, and when it is exhausted, instructors move on to explain more advanced subjects, while a group of students might still be struggling with basic syntax. Again, Jenkins [5, 7] notes that many students expect the course to be difficult and come with the notion that they will have to struggle; others may have a stereotyped image of a programmer, and these beliefs can negatively affect their initial motivation and expectations.

During the course and its delivery, novice programmers have been shown to experience emotions such as frustration and confusion when a bug in a program eludes them, but also joy when they successfully run a challenging program for the first time. They can also become bored if they find the assignments too repetitive or too easy. Learning to program is a difficult task, and similarly to other learning tasks, emotions play a significant role. Research has gone far to understand the role of emotion in learning, both in the process and in the outcome; for instance, in e-learning [8, 9] and in programming [5, 10–12]. In these works the emotion of flow/engagement is regarded as the ideal affective state, in which students tend to be most capable of acquiring meaningful information through the learning process. Flow is defined by


Csikszentmihalyi [13] as a mental state in which a person performing an activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment. Engagement is also a positive affect, in which students are thought to be more involved behaviorally, intellectually, and emotionally in their learning tasks [14]. In order to promote engagement, instruction material is sometimes designed with the objective of creating learning activities that are challenging to students, but not so much as to frustrate them. Engagement is not only achieved through the design of single learning activities, but also through the appropriate selection and sequence in which they are presented. For this, experienced instructors gauge a student's affective state and then assign an activity with the appropriate level of difficulty.

However, the recognition of emotions is a difficult task, even for humans. Instructors often use their social intelligence to recognize students' affective states. During presential classes, an instructor habitually reads the faces of the students in the classroom to see if they are confused, bored or engaged, and then decides what to do next. However, that is impossible in asynchronous and virtual learning environments, which is why there have been several ways of including automatic perception of emotions in Interactive Learning Environments (ILEs) and Learning Management Systems [15], and even in Integrated Development Environments [16]. But this of course presupposes that the affective state of the student is available to the system. This detection and recognition can be done in real time in different ways. In the wider context of the detection of emotions and affective states, many researchers have demonstrated that affect-aware computers can provide a better performance in assisting humans [3, 17, 18], and that providing a richer experience while learning to program might improve the outcome by making the students more engaged with it [19].

However, when ILEs embrace these methodologies and techniques a problem arises: sensors which are not present in the default computer configuration must be used to gather the data needed to perform the recognition of users' affective states. Some sensors rely on physiological readings and must be in physical contact with the students [20–22]. These sensors can be considered intrusive or invasive, not to mention that one has to acquire them and convince the student to wear them while learning. They can also disrupt the student's learning experience [23–25]. Other less cumbersome sensors, such as video cameras, which are in fact default equipment in most laptops, can also be considered invasive, since they always transmit the user's identity, appearance and behavior along with the emotional data [17]. So what we would need in order to embed the recognition of affective states in learning systems would be non-invasive, anonymous and also ubiquitous sensors.

In this paper we are going to test such a system. In order to do that, we will use the dynamics of the learning environment itself. In programming courses, students solve assignments by implementing their solution in a particular language; in order to do this, they must type the code and run it, directly or by compiling it, to check its results. This cycle can be repeated many times, and while they are in this cycle, students often change their affective state according to their performance. Data can, then, be non-intrusively collected while students are working
on their assignments, typing on the keyboard and interacting with the GUI via mouse or other pointing device. All the dynamics of actually typing the code, the ILE feedback received by students, the corrections they tried, the elapsed time between each step, the number of attempts, compilation errors and exceptions, among others, are collected by the ILE and can be used. Mining all this data could bring us a better understanding of the learning process, and also a better model of the student's state, both for selecting the appropriate content to be shown next and for evaluating the learner's competencies. But the problem that remains is how to recognize the student's affective state so that it can be used by an adaptive system. In this work, a method for the recognition of affective states in real time through the mining of keystroke and mouse dynamics data is analyzed further after its initial presentation in [26]. Since students can vary their keystroke dynamics according to their affective state when programming, a correlation between the user's interactions (keyboard and mouse) and an emotional state was established in the preliminary work of Zimmermann et al. [27]; however, their experiments were conducted by asking users to write a predefined message, not to solve programming exercises. In order to test the correlation between keyboard and mouse dynamics and an affective state when programming, an interactive web-based programming platform was initially proposed by us in [26]. In this paper we complete the background and motivation for this research, update the state of the art, better justify the methodology used and the way data has been collected, and extend the results and the methods used. The platform we presented in that paper was originally designed to collect keyboard and mouse data from the students' interaction for data analysis. The proposed method has three main components: a JavaScript library to capture keyboard and mouse dynamics from a browser-based editor, a pre-processing step whose output is a feature vector, and a third component to which that vector is sent, the classification algorithm. An experiment was conducted where students solved a series of programming assignments using the web-based learning environment. Using only the data from the students' keyboard and mouse dynamics, six affective states were recognized for each of the attempts at solving the assignments. Results obtained from the experiment are promising. The affective states recognized include the most common in novice programmers according to the study of Bosch et al. [11]: boredom, engagement/flow, confusion and frustration. For the initial experiment [26] four binary classifiers were compared: J-48 decision trees, a k-nearest neighbour classifier, a feed-forward neural network, and a naïve Bayes classifier. The output of each classifier determined whether the student experienced the affective state during the assignment. In this experiment we also use Deep Learning [29] and Gradient Boosted Trees via H2O version 3.8.2.6, and the Random Forest and Naïve Bayes Kernel implementations from RapidMiner Studio version 8.1. The proposed method could be used in an ensemble with other sensor channels to improve the non-invasive recognition of the students' affective states when they are working on a programming task.

In order to determine what affective states a student was experiencing, the Experience Sampling Method (ESM) of Csikszentmihalyi and Larson [30] was used. After students successfully solve a programming assignment, they are presented with an ESM survey that asks what they were feeling while solving it. If a relationship between interaction dynamics and affective state can be modeled, ILEs could adapt the instruction using the same devices required to input the program into the computer, which could constitute a breakthrough, since writing programs on a computer is the best way to learn programming by actually doing it. A study by Lahtinen et al. [2] reports that students rated "working alone on programming coursework" as a more useful situation than lectures. The rest of the paper is structured as follows: the next section presents a series of works related to the method proposed in this paper; Sect. 3 describes the proposed method for the recognition of affective states in a web-based learning environment for the teaching of programming languages; Sect. 4 explains the experimental evaluation of the proposed method and its results. Finally, we draw some conclusions about our methodology and experiments.

2 Related Work

Affect recognition is an active field of research. Many methods have been proposed, some requiring the intervention of the user to fill in questionnaires or forms; these selections remain static until the user changes the values, and while such methods are easy to implement, they cannot detect dynamic changes. A more dynamic approach requires the use of sensors to capture affective states in real time, and that is what we do in this paper. The actual sensors used are determined by the context, the environment and the learning activity. The most common learning environment is the classroom, a physical space that provides a context to facilitate learning. But learning is possible in a wide variety of settings, such as out-of-school locations and outdoor environments, which are sometimes referred to as ubiquitous learning environments. An analysis of such context-aware environments is done by Yang [31], but his work does not consider affective information. There are also virtual learning environments, such as the one proposed by Dillenbourg [32], where learners can have avatars and virtual places where they can play roles and socialize. In general, the importance of affect in learning environments has been recognized at a later stage of research. For instance, the work of Kapoor and Picard [33] uses a multimodal approach with sensory information from facial expressions and postural shifts of the learner, combined with information about the learner's activity on the computer; the learning activity was solving a puzzle on a computer. They report that a multimodal Gaussian Process approach achieves an accuracy of over 86%. Sidney et al. [24] enhance
the intelligent tutoring system AutoTutor with affective recognition using non-intrusive sensory information from facial expressions, gross body movements, and conversational cues from logs. The subject material of that system consisted of lectures on computer literacy. In this early stage of research, the importance of privacy and non-intrusiveness was set aside. Some other works try to introduce affective computing features into interactive learning environments without explicitly detecting or using the student's affective state. For instance, Elliott et al. [34, 35] propose the integration of an affective reasoner with an agent in a virtual environment. In this work an agent called Steve responds emotionally to other agents and interactive users. The agent simulates its emotions through different multimedia modes, including facial expressions and speech. Affective reasoning agents were used for training, by putting students in work-related situations. Agents could react to the behavior of students; for instance, if a student was being careless in a task and was in a dangerous situation, Steve would show distress or fear. Also in a virtual environment, the work of McQuiggan, Robison and Lester [36] extends this line of research by investigating the affective transitions that occur throughout narrative-centered learning experiences. The analysis of affective state transitions in this work replicated the findings of D'Mello et al. [35] and Baker et al. [10], where engagement/flow also dominated self-reported affect. For outdoor environments, Shen et al. [37] augment a pervasive e-learning platform with affective recognition. Their results on emotion recognition from physiological signals achieved a best-case accuracy of 86.3% for four types of learning emotions. Bosch et al. [11] analyzed the relationship between affective states and performance of novice programmers learning basic Python. The results of their study indicated that the most common emotions students experienced were engagement (23%), confusion (22%), frustration (14%), and boredom (12%). It was useful to consider these results, as they presented evidence of which affective states need to be targeted in order to obtain less biased data. Similarly to the previous work, Rodrigo et al. [10] observed which affective states and behaviors relate to students' achievement within a basic computer science course. The authors found that confusion, boredom and engagement in IDE-related (on-task) conversation are associated with lower achievement. Measuring mood is important, since it may have an impact on a programmer's performance according to Khan et al. [12]; it may be possible to detect moods on the basis of information regarding the programmer's use of the keyboard and mouse, and to integrate mood detection into development environments to improve programmer performance. There has been very little research reported on the effectiveness of keyboard and mouse dynamics as a sensory channel for affective recognition, and the few existing studies have not focused on programming. The preliminary work by Zimmermann et al. [27] describes a method to correlate users' interactions (keyboard and mouse) with an emotional state, measuring different physiological parameters. The work of Vizer et al. [38] also uses sensory data based on the time elapsed between each key press; the task given to users was to write free text, and linguistic features were used in order to recognize both physical and emotional stress. The results show a classification
accuracy of 62.5% for physical stress and 75% for emotional stress; the authors argue that these results are comparable with other approaches in affective computing. They also stress that their methods must be validated further in other contexts. Khan et al. [12] briefly describe a further experiment that could use the keyboard and mouse, but only as future work. There are other studies about the behaviour of programmers which are not directly concerned with affective states but are nevertheless important to their performance. Eye tracking in computing education is proposed in the work of Busjahn et al. [39], and it is also used for assessing learners' comprehension of C++ and Python programs in Turner et al. [40]. Blikstein [41] proposed an automated technique to assess, analyse and visualize students learning computer programming. Blikstein employs different quantitative techniques to extract students' behaviours and categorize them in terms of programming experience. It is obviously necessary, in order to detect the moods, to prove that there is an actual relationship between them and the keyboard activity. In fact, Keystroke Dynamics (KD) has been investigated using either fixed texts or free texts [42]. KD performed on fixed texts involves the recognition of typing patterns when typing a pre-established, fixed-length text, e.g., a password. In the other case, free-text KD achieves the recognition of typing patterns when typing a text of arbitrary length, e.g., a description of an item. However, as noted by Janakiraman and Sim [43], most of the research regarding KD is done on fixed-text input, the reason being that fixed-text KD usually yields better results than free-text KD. Yet, the authors of this work share Janakiraman and Sim's opinion that it would be more useful if KD could handle free text as well as fixed text; this is also a requirement if the text is a program. This is what we actually use in this paper, since programs are created by the student and do not have to follow exactly the same pattern. Although the use of KD is found in several research works as a biometric measure, its use for identifying affective states is rare in comparison. Epp et al. [44] effectively used KD in conjunction with decision-tree classifiers for the identification of 15 affective states. Although their work was based on fixed text, their technique for extracting a feature vector was an inspiration for the method proposed in this work. As for free-text KD, Bixler and D'Mello [28] present a method for the identification of boredom and engagement based on several classification models. A similar situation exists in the case of Mouse Dynamics (MD), which is used mainly for authentication; however, Salmeron-Majadas et al. [45] use both MD and KD to predict four affective states using five different classification algorithms. Bakhtiyari and Husain [46] discuss a method based on fuzzy models for the recognition of emotions through KD, MD and touch-screen interactions. Lim [47] uses several sensor sets and methods for detecting stress levels, including neural networks; the detected stress level is then fed into the
inference system that decides on adaptation. However, this work focuses only on stress level, disregarding the more general affective state. Although it is no longer current, the state of the art was described quite comprehensively in Kołakowska's paper [48]. More recent papers, like the one by Carneiro and Novais [49], take a more comprehensive approach, also using a non-intrusive system based on keystroke analysis; several affective states are measured: mental fatigue, stress and emotional valence. However, it is not focused particularly on programming, which has its own characteristics; it uses music to create a certain mood in the subjects. The KD methodology, however, is applicable to a wide range of situations. Kołakowska has also recently published a paper [50] assessing the usefulness of KD not only in emotion recognition, but also in user authentication. Whatever the method, it has indeed been shown that KD can be used for detecting a range of user moods and attitudes [51]. In this paper we are more concerned with the data collection and the selection of a particular method for detecting mood with the limited amount of data obtained via KD.

3 Proposed Method

The goal of this work is to propose an affective recognition method based on the sensory data provided by the keyboard and mouse dynamics generated by a learner as he or she types a programming assignment. A detailed explanation of the process outlined in previous work [26] follows. The process starts when the learner begins to type a program in a browser-based editor. As the learner types or moves the mouse, all the dynamics data is recorded on the client. When the learner submits the code for evaluation, the HTTP request includes, in a JSON string, all the sensory data along with the code and information about the current session. On the web server, the code is evaluated by an external container that provides a virtual sandbox in which to execute the program. The sandbox is used to prevent malicious or erroneous code from halting the server. When a container is halted, it is simply removed and a new container is created. When the result is ready, it is recorded along with the sensory data and sent to the preprocessing module, which is explained next. The output is a feature vector, ready for classification. A previously trained classifier is responsible for the classification, and it outputs the predicted affective state. Currently, the method does not consider other sensory data, but it could be integrated with other sensory inputs in a multi-modal approach. Each of the steps is explained in detail next.
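To make the flow concrete, the following is a minimal sketch of the pipeline just described. Every name in it (run_in_sandbox, preprocess, classify) is a hypothetical stand-in rather than the actual Protoboard API, and the function bodies are stubs.

```python
# Minimal sketch of the submission pipeline described above; all function
# names are illustrative stand-ins, not the actual Protoboard API.

def run_in_sandbox(code: str) -> str:
    """Stand-in for executing the program inside an isolated container."""
    return "tests passed"  # the real sandbox returns unit-test results

def preprocess(events: list) -> list:
    """Stand-in for the feature-extraction step described in Sect. 3.2."""
    return [0.0] * 39  # the method produces a 39-component feature vector

def classify(features: list) -> str:
    """Stand-in for the previously trained affect classifier."""
    return "flow/engaged"

def handle_submission(code: str, events: list) -> dict:
    result = run_in_sandbox(code)   # sandboxing protects the server
    features = preprocess(events)   # raw dynamics -> feature vector
    return {"result": result, "affect": classify(features)}
```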

3.1 Capturing the Keystroke and Mouse Data

As explained before, while a student is trying to solve an assignment, a script coded in JavaScript runs in the background, capturing every keystroke, mouse movement
and mouse button pressed. Each record of these events includes a timestamp in milliseconds (obtained with the getTime() method of JavaScript's built-in Date class) that describes when the event occurred. If the event is a keystroke, the script captures which key was pressed and what type of event occurred: either a key-down or a key-up event. If it is an event related to a mouse button press, the key code of that button is recorded, as well as the type of event, again key-down or key-up. Finally, if the event was a mouse movement, the mouse coordinates inside the web browser are recorded. The script monitors the mouse position every 100 ms and records the new position only if it has changed. Each time a learner tries to evaluate a program, all the data generated is sent to the server along with the code. When the result of the execution is returned, all records are cleared and the process starts again. There is no problem if the user leaves for a long period of time, because no event will be triggered. If a user copies and then pastes code from another source, this will be recorded. There is a limitation: only the browser editor can be used, which could be a problem for more advanced programmers needing specialized editors. On the other hand, a browser-based editor with the corresponding remote execution does not require the installation of interpreters or compilers on the learner's computer. The code could even be written on a mobile device or any web-enabled device. Programming assignments are evaluated using unit tests, and the results of the evaluation are shown to users; if all tests pass, the program is considered correct. The source code for the Protoboard web-based learning environment, including the KD and MD functions, is open source and available as GitHub repositories at https://github.com/amherag/keyboard-mouse-dynamics; the code for the sandbox is also available at https://github.com/mariosky/sandbox.
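The capture script itself is JavaScript, but the shape of the records it produces can be sketched compactly. The field names below are assumptions chosen for illustration, not the library's actual schema.

```python
import time

def make_event(kind: str, **data) -> dict:
    """One captured event: a millisecond timestamp plus event-specific data."""
    return {"t": int(time.time() * 1000), "type": kind, **data}

# Examples of the record kinds described above.
events = [
    make_event("key-down", key="K"),
    make_event("key-up", key="K"),
    make_event("mouse-down", button=0),
]

# Mouse position is polled every 100 ms and recorded only when it changes.
last_pos = None
def poll_mouse(pos):
    global last_pos
    if pos != last_pos:
        events.append(make_event("mouse-move", x=pos[0], y=pos[1]))
        last_pos = pos
```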

3.2 Preprocessing of the Keystroke and Mouse Data

The raw data obtained from the script needs to be preprocessed to obtain a feature vector. Basically, this preprocessing consists of measuring the delays between the key-down, key-up and mouse-move events triggered during an assignment. These events have the dynamics shown in Fig. 1. For example, when typing the word "key", a user first presses the letter K and triggers the key-down event; this is indicated with an arrow that changes the state of the key, and the time of the event is also recorded. Only the event data is received from the browser; in order to generate feature vectors, the patterns and rhythm users exhibit when pressing consecutive keys are captured by measuring the delays between events, as is common practice when dealing with keystroke dynamics. In this work, the definitions proposed by Sim and Janakiraman [52] are used. Held time (Ht) is defined as the time between a key-down and a key-up of the same key; in the example this would be Ht(K). Inter-key time (It) is defined as the time between the key-up of one key and the key-down of the next; this time can be negative if the second key is pressed before the first is released. A sequence is defined as a list of consecutive keystrokes.
Table 1 The components of the feature vector; adapted from Epp et al. [44]. These components were used in [26] too

ID  Description
1   Mean duration between 1st and 2nd down keys of the digraphs
2   Standard deviation of the previous feature
3   Mean duration of the 1st key of the digraphs
4   Standard deviation of the previous feature
5   Mean duration between 1st key up and next key down of the digraphs
6   Standard deviation of the previous feature
7   Mean duration of the 2nd key of the digraphs
8   Standard deviation of the previous feature
9   Mean duration of the digraphs from 1st key down to last key up
10  Standard deviation of the previous feature
11  Mean number of key events that were part of the graph
12  Standard deviation of the previous feature
13  Mean duration between 1st and 2nd down keys of the trigraphs
14  Standard deviation of the previous feature
15  Mean duration of the 1st key of the trigraphs
16  Standard deviation of the previous feature
17  Mean duration between 1st key up and next key down of trigraphs
18  Standard deviation of the previous feature
19  Mean duration between 2nd and 3rd down keys of the trigraphs
20  Standard deviation of the previous feature
21  Mean duration of the 2nd key of the trigraphs
22  Standard deviation of the previous feature
23  Mean duration between 2nd key up and next key down of trigraphs
24  Standard deviation of the previous feature
25  Mean duration of the third key of the trigraphs
26  Standard deviation of the previous feature
27  Mean duration of the trigraphs from 1st key down to last key up
28  Standard deviation of the previous feature
29  Mean number of key events that were part of the graph
30  Standard deviation of the previous feature
31  Mean duration of mouse key presses
32  Standard deviation of the previous feature
33  Mean duration of all mouse movements
34  Standard deviation of the previous feature
35  Mean distance in pixels in the X coordinate
36  Standard deviation of the previous feature
37  Mean distance in pixels in the Y coordinate
38  Standard deviation of the previous feature
39  Number of attempts

Fig. 1 Keystroke dynamics, adapted from Sim and Janakiraman [52]

In the above example, valid sequences would be [K, E], [E, Y] and [K, E, Y]; the first two are digraphs and the third is a trigraph. While there can be sequences of any size in a text, working only with digraphs and trigraphs is preferred. Even if only digraphs and trigraphs are considered, the number found in free text is too large to be useful in a feature vector, as noted by Epp et al. [44], so they propose using statistical measures to capture the patterns; the same strategy is used in this work. The averages and standard deviations are calculated from the delays between the events of the digraphs and trigraphs in each program. To calculate the averages and standard deviations of mouse button presses, the delays between the key-down and key-up events of left-button clicks are used. In addition to these averages and standard deviations of the delays between keystrokes and mouse button presses, the average and standard deviation of the total number of events contained in a digraph and a trigraph are calculated. These features are proposed and explained by Epp et al. [44]. Most of the time, a digraph should contain four events and a trigraph six. However, sometimes an individual can start a digraph or a trigraph before ending the previous one. These additional features represent such cases, and could be meaningful for the estimation of a learner's affective states. Regarding the mouse movements, the average and standard deviation of the duration of each mouse movement, and the averages and standard deviations of the movements along the X and Y axes, are also calculated. Lastly, a final feature is added in the preprocessing of the data: the web tutorial records, and includes in the feature vector, how many attempts a student required before successfully solving an assignment. The final feature vector consists of 39 features; these are based on the work of Epp et al. [44] and are shown in Table 1. Once feature vectors are obtained from an experiment, the generated dataset is normally used as training data for a classifier. Researchers of affective recognition have used a wide variety of classification algorithms. As a proof of concept, four well-known classification algorithms were compared: the k-Nearest Neighbors (k-NN) algorithm, a feed-forward neural network trained with back-propagation, a naïve
Bayes classifier, and finally a decision tree induction algorithm (J-48) [53]. Experiments and results are presented next.
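As a rough illustration of the preprocessing step (not the platform's actual code), the following sketch computes the held-time and inter-key-time statistics that underlie several of the 39 features of Table 1. It assumes event records shaped like those sketched in Sect. 3.1 and ignores mouse events and unmatched key events.

```python
from statistics import mean, pstdev

def key_timing_features(events):
    """Mean/standard deviation of held time (Ht) and inter-key time (It)."""
    # Pair each key-down with the matching key-up of the same key.
    strokes, open_keys = [], {}
    for e in events:
        if e["type"] == "key-down":
            open_keys[e["key"]] = e["t"]
        elif e["type"] == "key-up" and e["key"] in open_keys:
            strokes.append((open_keys.pop(e["key"]), e["t"]))
    strokes.sort()                              # order by key-down time
    held = [up - down for down, up in strokes]  # Ht of each keystroke
    inter = [strokes[i + 1][0] - strokes[i][1]  # It; negative when keys overlap
             for i in range(len(strokes) - 1)]
    stats = lambda xs: (mean(xs), pstdev(xs)) if xs else (0.0, 0.0)
    return [*stats(held), *stats(inter)]        # means and standard deviations
```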

4 Experiment and Results

The aim of this research is to evaluate the use of keyboard and mouse dynamics as an appropriate sensory input for an affective recognition system in a learning environment for programmers, who interactively write short programs. An experimental approach was adopted with this aim: sensory and quantitative data was collected from learners enrolled in a basic Python programming course. This data was then pre-processed using the method described earlier in Sect. 3, and, together with the quantitative data obtained from users, classifiers were trained and validated. The results were then compared and we arrived at some conclusions regarding the effectiveness of the method. The goal given to subjects was to solve as many assignments as they could in a period of two weeks. There was neither a time limit nor a minimum amount of time required for a participant trying to solve the assignments or complete the tutorial. The participants were able to stop and resume their interaction with the system at any time. A tutorial was developed to obtain the necessary data using the proposed platform Protoboard, a web-based environment whose latest version has been released under an open source license and can be found online at https://github.com/mariosky/protoboard. The main functionality used for this experiment is shown schematically in Fig. 2. Users log in to Protoboard by creating an account or by using their account credentials from the social network Facebook. The web tutorial begins with three introductory videos that explain the fundamentals of programming in Python and how to use the editor to execute code.

Fig. 2 Code execution and affect recognition architecture. Figure extracted from [26]

Fig. 3 Web-based editor used for code evaluation. The lower panel indicates (in Spanish) the tests passed successfully and the program output [26]

These videos are followed by 13 programming assignments that students need to solve in sequential order. Assignments are of incremental difficulty. In Fig. 3, the user interface of the programming editor is presented. For this experiment Protoboard was configured to allow unlimited attempts at solving each assignment. A total of 55 volunteers with no previous experience in Python were recruited in response to three announcements in a special interest group of programming students on a social network. The fact that they needed Facebook kept some prospects from volunteering (one subject reported that he normally creates temporary email accounts to try new web services); on the other hand, the use of Facebook gave researchers access to public information about the subjects. Their ages were in the range of 18–30 years. Although no experience in software programming was needed, as the web tutorial's course is of a very basic level, participants had different levels of experience, from 0 years (8 persons) to more than 5 years (22 persons) (Table 2).

Table 2 Survey question: how many years of programming experience do you have?

Years programming  Number of learners
0                  8
1                  13
2–4                15
5+                 22

Fig. 4 Completed activities and assignments by learner in ascending order (left) and distribution of emotions reported (right) [26]

In order to determine what affective states a student was experiencing, the Experience Sampling Method (ESM) [30] was used. After successfully solving a programming assignment, students were presented with an ESM survey asking what they were feeling while solving it. A very brief description is given of what to do in this survey, followed by statements the students need to answer according to how they were feeling. As an example, the statement "I was feeling frustrated" is presented, and a student needs to answer either Strongly agree, Agree, Neutral, Disagree or Strongly disagree (Fig. 4). After the two weeks of the experiment, only four learners had completed all of the programming assignments and 22 had not completed any. Out of the total activities available (videos, survey and questionnaires), only two learners completed all. Figure 4 shows the number of assignments and activities completed by each learner (Table 3). The participants' interaction generated a total of 142 feature vectors, one for each successfully completed assignment. The affective states reported by learners after completing each assignment were grouped in three classes: Yes, Neutral and No. The class distribution of the emotions reported is shown in Fig. 4. These results show that the most common emotions were flow/engagement (72%) and relaxation (61%), while few learners reported distraction (5%), frustration (8%) and boredom (2%). This distribution is different from what was reported by D'Mello et al. [28] and Rodrigo et al. [10]. Although flow/engagement was also the predominant class, the distribution is skewed towards the first two classes. There are some possible reasons for this: first, the majority of students had prior experience in programming, so learning a new language is not that problematic; a second reason could be the freedom users had to abandon the activities: perhaps frustrated or bored learners simply quit the tutorial. A high number of learners (49%) did not complete any assignment, and the ones who completed more were also the more experienced (Fig. 4).

Table 3 Performance of every classifier with 10-fold cross validation [26]. Each cell shows accuracy and (κ)

Affect        Naïve Bayes    J-48           k-NN           ANN
Flow/engaged  79.00% (0.50)  80.48% (0.51)  76.14% (0.43)  78.43% (0.45)
Relaxation    71.10% (0.40)  72.57% (0.45)  69.14% (0.37)  71.24% (0.34)
Distraction   70.33% (0.43)  71.00% (0.43)  74.00% (0.50)  72.52% (0.39)
Frustration   74.71% (0.49)  73.86% (0.45)  72.62% (0.42)  78.00% (0.50)
Boredom       74.71% (0.47)  84.57% (0.59)  83.81% (0.58)  76.19% (0.45)

Table 4 Parameters used in the evolutionary algorithm for feature selection

Parameter                   Value
Population size             700
Generations                 30
Selection                   Tournament
Tournament size             Population size/4
Crossover                   Uniform
Crossover probability       0.5
Mutation rate               1/39
Minimum number of features  5

Table 5 Parameters used in the classification algorithms

Algorithm       Parameters
J48             Confidence threshold for pruning = 0.25; Minimum instances per leaf = 2; Folds for reduced error pruning = 3
Neural network  Learning algorithm = Back-propagation; Hidden layers = 2; Hidden layer size = 20 neurons; Learning rate = 0.6; Momentum = 0.6; Training iterations = 200

As part of the data mining process, a canonical genetic algorithm was used for feature selection, with the parameters shown in Table 4. These parameters were set heuristically and were the same for selecting the features of all classification algorithms; only the population size was changed, mainly because some of the algorithms are more computationally demanding. The genetic algorithm is included as a RapidMiner operator. In the algorithm, each individual consists of a binary list indicating whether a feature will be used or not. The mutation operator simply flips a switch with the probability indicated in the mutation-probability parameter. When the population is initialized, each switch has a 0.5 probability of being "on".
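A minimal sketch of such a feature-selection GA is given below. The operators follow the reconstruction in Table 4, but the fitness function is left abstract (in the experiments it is the cross-validated performance of the classifier on the selected features), and details such as the mutation rate of 1/39 and how the minimum-feature constraint is enforced are our assumptions.

```python
import random

N_FEATURES, MIN_FEATURES = 39, 5

def random_mask():
    """A random binary mask; each switch is 'on' with probability 0.5."""
    while True:
        mask = [random.random() < 0.5 for _ in range(N_FEATURES)]
        if sum(mask) >= MIN_FEATURES:
            return mask

def mutate(mask, p=1 / N_FEATURES):
    """Flip each switch independently with probability p."""
    child = [bit ^ (random.random() < p) for bit in mask]
    return child if sum(child) >= MIN_FEATURES else mask

def uniform_crossover(a, b):
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def tournament(pop, fitness, k):
    return max(random.sample(pop, k), key=fitness)

def evolve(fitness, pop_size=700, generations=30):
    pop = [random_mask() for _ in range(pop_size)]
    k = max(2, pop_size // 4)              # tournament size from Table 4
    for _ in range(generations):
        pop = [mutate(uniform_crossover(tournament(pop, fitness, k),
                                        tournament(pop, fitness, k)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)
```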

Table 6 Performance of classifiers for Flow/engaged with 10-fold cross validation and optimized feature weights

Classifier              Accuracy (%)  κ
Naïve Bayes             72.52         0.339
Deep neural network     78.29         0.419
Random forest           78.81         0.360
Gradient boosted trees  78.95         0.415
kNN                     76.19         0.242
Naïve Bayes Kernel      80.22         0.346
J48                     78.19         0.433

Table 7 Parameters used in the evolutionary algorithm for feature weight optimization

Parameter                   Value
Initialize probability      0.5
Population size             10
Generations                 30
Selection                   Tournament
Tournament size             Population size/4
Crossover                   Uniform
Crossover probability       0.5
Mutation rate               1/39
Minimum number of features  5

In the end, the subset for the frustration classifier includes only 11 features, and the subset for the boredom classifier consists of only 13 features. The selected classifiers were implemented using RapidMiner, with the parameters shown in Table 5. Table 3 shows the performance of each of the four classifiers together with the κ coefficients; 10-fold cross validation was used. The accuracies and κ coefficients obtained are close to what is usually obtained with fixed-text methods, for example the results presented in Epp et al. [44]. In a fixed-text method, subjects are asked to write specific texts. The results obtained with our method are satisfactory, considering that learners were writing a program, a task comparable with free-text writing, where there is no restriction on what has to be written. In the case of the κ coefficients, it is usual to see values below 0.2 in methods involving free text; in this case, some values of κ were close to 0.5. It is important to consider the values of κ in these results because the class distribution is not uniform. As is observed in many works on affective recognition, decision trees normally produce competitive accuracy. In this case the J-48 classifier obtained an accuracy of 80.48%, which is marginally better than the others, with an acceptable κ statistic for this kind of problem. The artificial neural network (ANN) gives the highest accuracy in the classification of frustration.
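For readers who want to reproduce this evaluation protocol, the sketch below runs 10-fold cross-validation reporting both accuracy and the κ coefficient. It uses scikit-learn and random placeholder data for brevity (the original experiments used RapidMiner), so the decision tree here is only a rough analogue of J-48.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer, cohen_kappa_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((142, 39))    # placeholder for the 142 real feature vectors
y = rng.integers(0, 2, 142)  # placeholder binary affect labels

clf = DecisionTreeClassifier(min_samples_leaf=2)  # rough analogue of J-48
acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
kap = cross_val_score(clf, X, y, cv=10,
                      scoring=make_scorer(cohen_kappa_score))
print(f"accuracy {acc.mean():.3f}, kappa {kap.mean():.3f}")
```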

Table 8 Parameters used for the different classification algorithms used for recognizing the Flow/engaged state

Algorithm               Parameters
Random forest           Number of trees = 10; Criterion = Gain ratio; Maximal depth = 20; Confidence = 0.25; Minimal gain = 0.1; Minimal leaf size = 2
J48                     Confidence = 0.79; Minimal leaf size = 2
kNN                     k = 4
Gradient boosted trees  Number of trees = 20; Maximal depth = 5; Minimum rows = 10; Minimum split improvement = 0.0; Number of bins = 20; Learning rate = 0.1; Sample rate = 1.0
Bayes Kernel            Laplace correction = True; Estimation mode = Greedy; Minimum bandwidth = 0.1; Number of kernels = 10
Deep learning           Activation = Rectifier; Hidden layer sizes = [50, 50]; Epochs = 10

Table 6 shows the performance of the additional classifiers evaluated specifically for the Flow/engaged affective state. The goal was to improve on the performance achieved earlier. Again, a ten-fold cross-validation was conducted, and an evolutionary algorithm was used for feature-weight optimization, with the parameters shown in Table 7. The classification algorithms used in this case, and their parameters, are shown in Table 8. Again, the accuracy and κ values of the algorithms show the difficulty of dealing with classes that are not uniformly distributed, with several algorithms reaching an acceptable accuracy but a low value of κ; this indicates algorithms that classify almost everything as the dominant class. The results of the tree induction algorithms are better when we consider both accuracy and κ values. Kernel-based and deep learning methods provide competitive results, but at a higher computational cost.
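A small worked example shows why κ matters with skewed classes: a degenerate classifier that always predicts the dominant class (here 72% of labels, echoing the flow/engaged share in Fig. 4) still scores 72% accuracy, yet its κ is exactly 0, i.e., no agreement beyond chance.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

y_true = [1] * 72 + [0] * 28   # 72% of instances in the dominant class
y_pred = [1] * 100             # classifier that always predicts that class

print(accuracy_score(y_true, y_pred))     # 0.72 -> looks acceptable
print(cohen_kappa_score(y_true, y_pred))  # 0.0  -> no skill beyond chance
```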

5 Conclusion

The aim of this research was to present an affective recognition system that uses keyboard and mouse dynamics (KD) as a non-intrusive sensory input. This system was applied to students of different levels learning to program by carrying out assignments consisting of writing short programs interactively. An experimental approach was adopted with this objective, by taking KD measurements of real subjects using the environment; these subjects also responded to surveys. Measurements and surveys were merged into a dataset that was used for training different machine learning systems, with our objective being to find out which method yields the best results in this environment. Results obtained with these machine learning systems show that the method used for gathering affective data conveys sufficient information for an adequate classification of affective states. In fact, although it is usual for a classification method based on
free text to obtain accuracy and κ measures below their fixed-text counterparts, in this work the classification accuracy obtained with the best machine learning systems was similar to that of fixed-text methods. This level of accuracy, beyond the state of the art in free-text systems, might be due to the addition of the mouse dynamics features and the additional preprocessing performed on the feature vectors. With data captured and preprocessed as indicated in the methodology, and out of all the machine learning algorithms tested, tree induction algorithms obtained the best results when both accuracy and κ values are taken into consideration. These methods, besides, are able to give us some insight into the key KD features that effectively describe the different affective states. While these are promising results, further experiments are needed. In this work we have used programmers with different levels of expertise; however, it would be interesting to focus on novice programmers, with whom we expect to gather more data and more representative examples for all the classes. Such experiments should give researchers the ability to draw stratified samples to balance the class distribution and achieve better results. Additional experiments should give more insight into the relationship between affective state and interactive behavior. Results show near 80% accuracy; these results are competitive against other classifiers in a similar domain, but we do not yet know what their effectiveness is when applied to affect-aware learning environments. A different line of work will focus on extracting other features from the learners' interaction, such as behavioral data, as proposed by Blikstein [41]; the parameters proposed by this author include the size of the program and changes in the code. Other authors also propose treating the use of certain keys as features, e.g., the delete key, because it is mainly used for correcting sentences. Also, a multi-modal approach to the detection of the affective state might be needed, using attention detectors alongside KD to measure at least the boredom/attention state independently of the verbal assessment by the student. Since all these need a completely new approach, a new methodology will be developed to gather data and take measurements.

Acknowledgements This work has been supported in part by the Spanish Ministerio de Economía y Competitividad under projects TIN2014-56494-C4-3-P (UGR-EPHEMECH) and DeepBio (TIN2017-85727-C4-2-P), and by CONACYT PEI Project No. 220590.

References

1. Robins, A., Rountree, J., Rountree, N.: Learning and teaching programming: a review and discussion. Comput. Sci. Educ. 13, 137–172 (2003)
2. Lahtinen, E., Ala-Mutka, K., Järvinen, H.M.: A study of the difficulties of novice programmers. In: ACM SIGCSE Bulletin, vol. 37, pp. 14–18. ACM, New York (2005)
3. Munson, J.P., Zitovsky, J.P.: Models for early identification of struggling novice programmers. In: Proceedings of the 49th ACM Technical Symposium on Computer Science Education, pp. 699–704. ACM, New York (2018)
4. Bennedsen, J., Caspersen, M.E.: Failure rates in introductory programming. ACM SIGCSE Bull. 39, 32–36 (2007)
5. Jenkins, T.: The motivation of students of programming. In: ACM SIGCSE Bulletin, vol. 33, pp. 53–56. ACM, New York (2001)
6. Dijkstra, E.W., et al.: On the cruelty of really teaching computing science. Commun. ACM 32, 1398–1404 (1989)
7. Jenkins, T.: On the difficulty of learning to program. In: Proceedings of the 3rd Annual Conference of the LTSN Centre for Information and Computer Sciences, vol. 4, pp. 53–58 (2002). Citeseer
8. Kort, B., Reilly, R., Picard, R.W.: An affective model of interplay between emotions and learning: reengineering educational pedagogy – building a learning companion. In: ICALT, vol. 1, pp. 43–47 (2001)
9. Rossin, D., Ro, Y.K., Klein, B.D., Guo, Y.M.: The effects of flow on learning outcomes in an online information management course. J. Inf. Syst. Educ. 20, 87 (2009)
10. Rodrigo, M.M.T., Baker, R.S., Jadud, M.C., Amarra, A.C.M., Dy, T., Espejo-Lahoz, M.B.V., Lim, S.A.L., Pascua, S.A., Sugay, J.O., Tabanao, E.S.: Affective and behavioral predictors of novice programmer achievement. In: ACM SIGCSE Bulletin, vol. 41, pp. 156–160. ACM, New York (2009)
11. Bosch, N., D'Mello, S., Mills, C.: What emotions do novices experience during their first computer programming learning session? In: International Conference on Artificial Intelligence in Education, pp. 11–20. Springer, Berlin (2013)
12. Khan, I.A., Hierons, R.M., Brinkman, W.P.: Mood independent programming. In: Proceedings of the 14th European Conference on Cognitive Ergonomics: Invent! Explore!, pp. 269–272. ACM, New York (2007)
13. Csikszentmihalyi, M.: Flow: The Psychology of Optimal Performance. Cambridge University Press, Cambridge (1990)
14. Bangert-Drowns, R.L., Pyke, C.: Teacher ratings of student engagement with educational software: an exploratory study. Educ. Technol. Res. Dev. 50, 23–37 (2002)
15. Gottardo, E., Pimentel, A.R.: Affective human-computer interaction in educational software: a hybrid model for emotion inference. In: Proceedings of the XVI Brazilian Symposium on Human Factors in Computing Systems, p. 54. ACM, New York (2017)
16. Hundhausen, C., Olivares, D., Carter, A.: IDE-based learning analytics for computing education: a process model, critical review, and research agenda. ACM Trans. Comput. Educ. (TOCE) 17, 11 (2017)
17. Picard, R.W., Vyzas, E., Healey, J.: Toward machine emotional intelligence: analysis of affective physiological state. IEEE Trans. Pattern Anal. Mach. Intell. 23, 1175–1191 (2001)
18. Bosch, N., D'Mello, S.: The affective experience of novice computer programmers. Int. J. Artif. Intell. Educ. 27, 181–206 (2017)
19. Asensio Montiel, J.J., García Sánchez, P., Mora García, A.M., Fernández Ares, A.J., Merelo Guervós, J.J., Castillo Valdivieso, P.Á.: Progamer: aprendiendo a programar usando videojuegos como metáfora para visualización de código. ReVisión 7 (2014)
20. Kula, I., Branaghan, R.J., Atkinson, R.K., Roscoe, R.D.: Assessing user experience via biometric sensor affect detection. In: End-User Considerations in Educational Technology Design. IGI Global (2017)
21. Swansi, V., Herradura, T., Suarez, M.T.: Analyzing novice programmers' EEG signals using unsupervised algorithms
22. Gonzalez-Sanchez, J., Baydogan, M., Chavez-Echeagaray, M.E., Atkinson, R.K., Burleson, W.: Affect measurement: a roadmap through approaches, technologies, and data analysis. In: Emotions and Affect in Human Factors and Human-Computer Interaction, pp. 255–288. Elsevier (2017)
23. Zhai, J., Barreto, A.: Stress detection in computer users through non-invasive monitoring of physiological signals. Blood 5 (2008)
24. Sidney, K.D., Craig, S.D., Gholson, B., Franklin, S., Picard, R., Graesser, A.C.: Integrating affect sensors in an intelligent tutoring system. In: Affective Interactions: The Computer in the Affective Loop Workshop at, pp. 7–13 (2005)
25. Arroyo, I., Cooper, D.G., Burleson, W., Woolf, B.P., Muldner, K., Christopherson, R.: Emotion sensors go to school. AIED, vol. 200, pp. 17–24 (2009)
26. Valdez, M.G., Aguila, A.H., Merelo, J.J., Soto, A.M.: Enhancing student engagement via reduction of frustration with programming assignments using machine learning. In: Proceedings of the 9th International Joint Conference on Computational Intelligence, IJCCI, INSTICC, vol. 1, pp. 297–304. SciTePress (2017)
27. Zimmermann, P., Guttormsen, S., Danuser, B., Gomez, P.: Affective computing - a rationale for measuring mood with mouse and keyboard. Int. J. Occup. Saf. Ergon. 9, 539–551 (2003)
28. Bixler, R., D'Mello, S.: Detecting boredom and engagement during writing with keystroke analysis, task appraisals, and stable traits. In: Proceedings of the 2013 International Conference on Intelligent User Interfaces, pp. 225–234. ACM, New York (2013)
29. Candel, A., Parmar, V., LeDell, E., Arora, A.: Deep learning with H2O. Technical report, H2O.ai Inc (2016). http://docs.h2o.ai/h2o/latest-stable/h2o-docs/booklets/DeepLearningBooklet.pdf
30. Kubey, R., Larson, R., Csikszentmihalyi, M.: Experience sampling method applications to communication research questions. J. Commun. 46, 99–120 (1996)
31. Yang, S.J.: Context aware ubiquitous learning environments for peer-to-peer collaborative learning. Educ. Technol. Soc. 9, 188–201 (2006)
32. Dillenbourg, P., Schneider, D., Synteta, P.: Virtual learning environments. In: 3rd Hellenic Conference "Information & Communication Technologies in Education", Kastaniotis Editions, Greece, pp. 3–18 (2002)
33. Kapoor, A., Picard, R.W.: Multimodal affect recognition in learning environments. In: Proceedings of the 13th Annual ACM International Conference on Multimedia, pp. 677–682. ACM, New York (2005)
34. Elliott, C., Rickel, J., Lester, J.: Lifelike pedagogical agents and affective computing: an exploratory synthesis. In: Artificial Intelligence Today, pp. 195–211. Springer, Berlin (1999)
35. D'Mello, S., Jackson, T., Craig, S., Morgan, B., Chipman, P., White, H., Person, N., Kort, B., el Kaliouby, R., Picard, R., et al.: AutoTutor detects and responds to learners' affective and cognitive states. In: Workshop on Emotional and Cognitive Issues at the International Conference on Intelligent Tutoring Systems, pp. 306–308 (2008)
36. McQuiggan, S.W., Robison, J.L., Lester, J.C.: Affective transitions in narrative-centered learning environments. Educ. Technol. Soc. 13, 40–53 (2010)
37. Shen, L., Wang, M., Shen, R.: Affective e-learning: using "emotional" data to improve learning in pervasive learning environment. Educ. Technol. Soc. 12, 176–189 (2009)
38. Vizer, L.M., Zhou, L., Sears, A.: Automated stress detection using keystroke and linguistic features: an exploratory study. Int. J. Hum.-Comput. Stud. 67, 870–886 (2009)
39. Busjahn, T., Schulte, C., Sharif, B., Begel, A., Hansen, M., Bednarik, R., Orlov, P., Ihantola, P., Shchekotova, G., Antropova, M., et al.: Eye tracking in computing education. In: Proceedings of the Tenth Annual Conference on International Computing Education Research, pp. 3–10. ACM, New York (2014)
40. Turner, R., Falcone, M., Sharif, B., Lazar, A.: An eye-tracking study assessing the comprehension of C++ and python source code. In: Proceedings of the Symposium on Eye Tracking Research and Applications, pp. 231–234. ACM, New York (2014)
41. Blikstein, P.: Using learning analytics to assess students' behavior in open-ended programming tasks. In: Proceedings of the 1st International Conference on Learning Analytics and Knowledge, pp. 110–116. ACM, New York (2011)
42. Gunetti, D., Picardi, C.: Keystroke analysis of free text. ACM Trans. Inf. Syst. Secur. (TISSEC) 8, 312–347 (2005)
43. Janakiraman, R., Sim, T.: Keystroke dynamics in a general setting. In: International Conference on Biometrics, pp. 584–593. Springer, Berlin (2007)
44. Epp, C., Lippold, M., Mandryk, R.L.: Identifying emotional states using keystroke dynamics. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 715–724. ACM, New York (2011)
45. Salmeron-Majadas, S., Santos, O.C., Boticario, J.G.: Exploring indicators from keyboard and mouse interactions to predict the user affective state. In: Educational Data Mining (2014)
46. Bakhtiyari, K., Husain, H.: Fuzzy model on human emotions recognition (2014). arXiv:1407.1474
47. Lim, Y.M.: Detecting and modelling stress levels in e-learning environment users (2017)
48. Kołakowska, A.: A review of emotion recognition methods based on keystroke dynamics and mouse movements. In: 2013 6th International Conference on Human System Interactions (HSI), pp. 548–555. IEEE (2013)
49. Carneiro, D., Novais, P.: Quantifying the effects of external factors on individual performance. Futur. Gener. Comput. Syst. 66, 171–186 (2017)
50. Kołakowska, A.: Usefulness of keystroke dynamics features in user authentication and emotion recognition, pp. 42–52. Springer International Publishing (2018)
51. Wrobel, M.R.: Applicability of emotion recognition and induction methods to study the behavior of programmers. Appl. Sci. 8, 323 (2018)
52. Sim, T., Janakiraman, R.: Are digraphs good for free-text keystroke dynamics? In: 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–6. IEEE (2007)
53. Tan, P.N., et al.: Introduction to Data Mining. Pearson Education India (2006)

Improving Genetic Programming for Classification with Lazy Evaluation and Dynamic Weighting

Sašo Karakatič, Marjan Heričko and Vili Podgorelec

S. Karakatič · M. Heričko · V. Podgorelec, Institute of Informatics FERI, University of Maribor, Smetanova 17, 2000 Maribor, Slovenia; e-mail: [email protected], [email protected], [email protected]

Abstract In the standard process of creating classification decision trees with genetic programming, the evaluation process is the most time-consuming part of the whole evolution loop. Here we introduce a lazy evaluation approach for classification decision trees in the evolution process, which does not evaluate the whole population but only the individuals chosen to participate in the tournament selection method. Further on, we use dynamic weights for the classification instances; an instance's weight determines its chance of being picked for the evaluation process and changes based on that instance's misclassification rate. We thoroughly describe and experiment with lazy evaluation on standard classification benchmark datasets, and show that the lazy evaluation approach not only uses less time to evolve a good solution, but can even produce statistically better solutions, as the changing instance weights help prevent overfitting of the solutions. Keywords Classification · Machine learning · Genetic programming · Lazy evaluation · Dynamic weighting

1 Introduction

The whole family of evolutionary algorithms is inspired by Darwin's theory of evolution, in which a population evolves through generations, genetic material is mixed between individuals, and individuals have more or fewer children in accordance with their fitness. In computer science, individuals of evolutionary methods
are candidate solutions; the fitness value of every solution is the metric that the algorithm optimizes through the evolution. Solutions go through multiple evolutionary operators, where they mix, mutate and evolve into (hopefully) optimal or near-optimal solutions to the given optimization problem. Genetic Programming (GP) is one instance of evolutionary algorithms, where solutions are programs. The basic evolution loop of GP is the following. The loop starts with the evaluation of the programs in the whole population, then continues with the selection of the evaluated programs. Next, the mating process (crossover) between various solutions generates new offspring solutions, and then other evolutionary operators can be applied (such as mutation and elite selection) [1]. The whole process of genetic programming is computationally intensive, and every speedup of the evolution that does not sacrifice the quality of the final program is welcome. On the other hand, we have the problem of classification, one of the basic machine learning problems, where a machine learns to classify instances into one of the predefined classes. One way to solve the classification problem is with GP, where programs are represented in a hierarchical decision tree structure [1] that classifies instances. The classical techniques for building these decision trees used in industry and academia are CART [2], C4.5 [3], ID3 [4] and ensembles of these methods [5, 6]. With GP we can utilize the power of evolution to build these decision trees, as has been done numerous times before. In the evolutionary process of constructing classification decision trees, the computationally most intensive part is the evaluation of the decision trees [7]. In this paper, we present experiments with the lazy evaluation of classification decision trees made with GP, where solutions from the population are evaluated only when needed and only on a limited number of classification instances. Here we build on our previous work [8–10] and add a dynamic evaluation process, which can be expanded if the evaluation does not differentiate between the quality of different solutions. The basic idea of lazy dynamic evaluation is to change the importance of classification instances through the process of evolution, giving more importance to those instances that are more often misclassified and less importance to those that are more often correctly classified. Existing research on this subject is limited, but lazy evaluation still builds on the ideas of other researchers. The most notable influence on our work is Gathercole and Ross [11], who proposed dynamic training subset selection for supervised problems (such as classification), presenting three different subset selection processes that heuristically change the testing classification set in each generation. Our method of lazy and dynamic evaluation builds on their idea of weighting instances through the evolution, but we expand it with the lazy evaluation, where an individual is tested only when it is needed and only on its testing set. Also, Zhang and Cho introduced the idea that incrementally selected testing subsets can reduce evaluation time without sacrificing the generalization accuracy of evolved solutions [7]. Furthermore, Šprogar introduced the idea that even excluding the fitness of the genetic solutions can improve the robustness of the evolution process [12].
This method eliminates the evaluation operator without much sacrifice in the quality of the final solution and can also speed up the process of evolution.
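To make the idea concrete before the formal treatment, here is a compact sketch of lazy evaluation with dynamic instance weighting. It is our illustration of the concept rather than the exact algorithm used later; in particular, the weight-update factors (0.9 and 1.1), the sample size, and the classify stand-in are arbitrary assumptions.

```python
import random

def lazy_fitness(tree, data, labels, weights, classify, sample_size=50):
    """Score a tree on a weighted sample of instances; as a side effect,
    misclassified instances gain weight and correct ones lose it."""
    idx = random.choices(range(len(data)), weights=weights, k=sample_size)
    correct = 0
    for i in idx:
        if classify(tree, data[i]) == labels[i]:
            correct += 1
            weights[i] *= 0.9   # assumed decay for often-correct instances
        else:
            weights[i] *= 1.1   # assumed boost for misclassified instances
    return correct / sample_size

def lazy_tournament(pop, data, labels, weights, classify, k=4):
    """Only the k tournament contestants are ever evaluated (lazily)."""
    contestants = random.sample(pop, k)
    return max(contestants,
               key=lambda t: lazy_fitness(t, data, labels, weights, classify))

# Usage: weights start equal, e.g. weights = [1.0] * len(data).
```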

The rest of this paper is organized as follows. We start with a section analyzing the processing time of the genetic programming method for classification purposes. Next, we present the idea of the lazy evaluation method and describe it in detail. In the following section, we describe the layout of the experiment and present the results of the implemented lazy evaluation method. We conclude with final remarks, the interpretation of the results, and our plans for future research.

2 Computational Complexity of Genetic Programming for Classification

The standard evolutionary loop of GP proceeds as follows; it is presented in Fig. 1 (based on [10]), highlighted in the greyed area on the right. First, the initial population is randomly generated, where one individual is one classification decision tree. Next, the evaluation of each individual's classification tree starts. In this step, every individual is given a score, called fitness. In the case of classification, some combination of classification metrics, such as accuracy, F-score, and the size of the classification decision tree, is used.

Fig. 1 Flowchart of genetic programming with the evolution loop highlighted in the light grey background [10]

Fig. 2 Representation of one individual in the evolution process as classification decision tree

Then the selection of individuals follows. Here the classification decision trees are chosen to go through the mating process. There are many different selection methods, but here we explore only the tournament selection method. Tournament selection chooses k random individuals from the population, and the best individual (fitness-wise) wins the tournament and is selected to participate in the mating process. The mating process is heavily dependent on the genotype representation. In GP we represent one individual in the form of a tree; more specifically, here one individual is a classification decision tree, as shown in Fig. 2. In our implementation, we set the chance of the crossover happening to 100%. In the regular form of GP (without heuristic crossover), the crossover process chooses a random node in the tree from the first parent and a random node from the second parent, and exchanges the subtrees, creating one or two offspring. After the crossover, the mutation process is next. Here we must consider the chance of the mutation happening, which randomly changes a selected decision tree. Standard chances of mutation happening on any particular decision tree vary from 1% all the way to 50%. After the mutation, the new generation of classification trees is evaluated and the evolution loop continues.
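The subtree crossover just described can be sketched on a toy node structure as follows; the Node class and helper are illustrative, not the authors' implementation.

```python
import copy
import random

class Node:
    """A decision-tree node: a decision rule plus child subtrees."""
    def __init__(self, rule, children=None):
        self.rule, self.children = rule, children or []

def all_nodes(tree):
    """All nodes of the tree, root included."""
    return [tree] + [n for c in tree.children for n in all_nodes(c)]

def crossover(parent_a, parent_b):
    """Pick a random node in each parent and graft the second parent's
    subtree into the first, producing one offspring."""
    child = copy.deepcopy(parent_a)
    graft = copy.deepcopy(random.choice(all_nodes(parent_b)))
    target = random.choice(all_nodes(child))
    target.rule, target.children = graft.rule, graft.children
    return child
```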

2.1 Processing Time Analysis of GP

In this section we use the big Omicron (big O) notation for evaluating computational complexity. The big O notation expresses an upper bound on the growth rate of the processing time of a process, in other words, the time complexity of an algorithm. The approximation of the processing time is as follows. As is evident from Fig. 1, most of the processing time is spent in the evolution loop, and the amount of this time depends mostly on the stopping criterion. If there is a dynamic stopping criterion, such as a number of stagnating generations (generations without improvement), the total processing time varies from run to run and is mostly due to chance. If we have a fixed number of generations, we can approximate the processing time more precisely. Let us analyze the individual processing times of each genetic operator in the GP process.


With the tournament selection method, the order of growth for one generation is O(mk), where m is the number of parents chosen (usually the same as the population size, when two parents produce two offspring) and k is the tournament size (usually between 2 and 10).
In GP, one individual is a classification decision tree, so the processing time of the crossover process is also dependent on this representation. The choosing of nodes happens twice (once per parent tree), and the node-choosing loop runs from a minimum depth of 1 to the maximum depth of the tree. The maximum depth of the tree is in turn heavily dependent on the classification problem: simpler classification problems permit shallower trees, while more complex ones demand bigger decision trees. So the order of growth of the crossover process in one generation is O(2d) (the factor 2 coming from the two parent trees), or simply O(d), where d is the maximum depth of the tree.
The time complexity of the mutation operator is similar to that of crossover. When a new child is created and it is determined by chance that it goes through mutation, a node is chosen at random and is replaced by a random subtree, or the node content (the decision rule) is changed. The node picking itself has the order of growth O(d), as does the creation of the random subtree (if the maximum depth of the tree is set to d). So the time complexity is O(2d), or simply O(d), but keep in mind that this does not happen to every new individual.
Now for the most time-consuming process in the GP loop: the evaluation process. Here we take every individual and use it to classify every classification instance in the training set. From these classification results, the classification metrics (accuracy, F-score, recall, precision, AUC, and others) can be calculated, which then form part of the fitness of that individual. Calculating the classification metrics is itself a time-consuming process. The calculation of accuracy is straightforward, just counting the correctly classified instances, but calculating the F-score takes more time, as we have to calculate recall and precision for every class, compute the individual F-score of each class, and then aggregate them to get the final F-score. Let us assume we have n new offspring individuals, the maximum depth of each individual tree is d, there are t training classification instances to be classified by each tree, and we calculate only the accuracy for the fitness. The time complexity of the evaluation for every individual in one generation is then O(ndt). Combining the time complexities of the individual operators, we get the following time complexity of the evolution loop for one generation:

O(mk) + O(d) + O(d) + O(ndt),

where
m = number of parents from selection,
k = tournament size,
d = maximum depth of the tree,
n = number of offspring individuals,
t = number of classification instances in the training set.


Fig. 3 Pie chart showing the average processing times of the genetic operators in one generation of GP for classification decision tree construction. The experiment was made on the car dataset with 1382 instances in the training set, 150 solutions in the population, 2000 generations and 100 independent runs of GP [10]

As is evident from the calculations, all of the terms are linear, and thus the total time should depend mostly on the largest factor in the equation. We also ran the GP and timed each genetic operator in the evolution loop multiple times. Figure 3 shows a pie chart with the proportions of the processing times in one generation, averaged over 100 independent runs. As is evident from the pie chart in Fig. 3, 94% of the total processing time in the evolution loop is spent on the evaluation process. Note that this cannot be generalized to all GAs or even all GPs; it is specific to GP constructing classification decision trees. Even using a different data set (we used the car data set) could produce slightly different results. Despite this, we see that shortening the evaluation time should significantly impact the running time of the GP in general.

3 Lazy Evaluation Method

As was shown, the most time-consuming process in the evolution loop is the evaluation process, so we propose an approach that shortens this time. In our proposed extension of GP, which we named lazy evaluation, we do not evaluate all of the decision trees in the population on all of the classification instances. Instead, we evaluate only the decision trees chosen to participate in the selection process, and only on some classification instances.
Figure 4 shows the probability of one decision tree from the population being chosen for evaluation in one tournament, dependent on the size of the tournament and the population size in the standard GP. We can clearly see that each decision tree has a substantial chance of being chosen for evaluation, and the chances only grow as the size of the tournament grows. Figure 5 shows the number of evaluations of decision trees in one generation, dependent on the number of classification instances and the size of the population of


Fig. 4 Line chart showing the probability of one decision tree being chosen for the evaluation, dependent on the tournament size and the population size in standard GP

Fig. 5 Line chart showing the number of evaluations in one generation dependent on the number of classification instances in the classification problem and number of decision trees in the population in standard GP

standard GP without the lazy evaluation. We can see that the number of evaluations rises greatly as we get larger classification problems with more instances. Figure 6 shows the number of evaluations (number of classification instances times population size) in one generation, dependent on the lazy evaluation tournament size (and for the standard GP) and the population size. As is evident from Fig. 6, the number of evaluations per generation is drastically lower than with the traditional evaluation (all decision trees on all classification instances).
Let us take the example where the population size is 100 and the number of classification instances is 1000. If we use lazy evaluation with a tournament size of 2, we have 400 evaluations (2 ∗ 100 ∗ 2); for lazy evaluation with a tournament size of 5 we


Fig. 6 Number of evaluations in generation dependent on a number of classification instances and population size. Standard GP denotes the standard GP without extensions

get 1000 evaluations (5 ∗ 100 ∗ 2), and if we have lazy evaluation with a tournament size of 10, we have 2000 (10 ∗ 100 ∗ 2). But the standard GP always has 100,000 evaluations (100 ∗ 1000), much more than any of the lazy evaluation variants. Of course, this holds only in theory, so the time saved with this approach will not be in exactly the same proportion in practice. We will measure the real time saved in the experiment in the next section.

3.1 Weighting the Classification Instances

We included the weighting of classification instances through the evolution process; this has already been proven to work in [11]. In contrast to the paper by Gathercole and Ross [11], we choose different instances for every tournament in one generation, not the same instances for every evaluation in one generation. This raises the chance of an instance being chosen for evaluation and further diversifies the search space, without raising the total number of evaluations in one generation.
The weights of the classification instances determine the probability of a particular classification instance being chosen for the evaluation process: the higher the weight, the greater the chance of that instance being picked; the lower the weight, the smaller the chance. The weights change based on the difficulty of classifying a particular instance: the more often an instance is misclassified, the higher its weight becomes and the greater its chance of being chosen again for evaluation. This forces the GP to focus on the more difficult instances and compensates for the few instances used in the evaluation.
Our weighting strategy is as follows. In the beginning, all of the instances have the same initial weight of 1. For every misclassification of an instance, its weight


Fig. 7 The line in the charts shows the number of times a particular instance was chosen for the evaluation process in the 2500th generation (top) and the 5000th generation (bottom). The area in the bottom part of the charts shows the weight of that particular instance. Note that the number of picks of any particular instance for evaluation is linked to its weight [10]

is increased by 1/n, where n is the number of classification instances in the test set. Figure 7 shows the instance weights and the number of picks, the first chart in the middle of the evolution (2500th generation) and the second chart for the last (5000th) generation, from the research in [10]. As we can see from the charts, some of the weights (and consequently the number of picks) stay the same throughout the evolution, while others increase.
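As a minimal sketch of this strategy, the following Python fragment shows the weight update and the weighted sampling of instances for one evaluation; the function names and the dictionary-based bookkeeping are our illustrative choices, not the paper's actual implementation:

    import random

    def update_weights(weights, misclassified_ids, n):
        # every misclassification of an instance raises its weight by 1/n,
        # where n is the number of classification instances in the set
        for i in misclassified_ids:
            weights[i] += 1.0 / n
        return weights

    def sample_instances(weights, how_many):
        # instances are drawn with probability proportional to their weights,
        # so frequently misclassified instances are picked more often
        ids = list(weights.keys())
        return random.choices(ids, weights=[weights[i] for i in ids], k=how_many)

    # all instances start with the same weight of 1
    weights = {i: 1.0 for i in range(1000)}
    chosen = sample_instances(weights, how_many=10)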

4 Experiments with Lazy Evaluation of GP for Classification

We conducted an experiment with our proposed approach with lazy evaluation of evolutionary classification decision trees. We used 10 classification benchmark datasets from the UCI repository [13] and measured the following metrics: total


accuracy, average F-score (β = 1) and total running time of the whole process from start to finish. All of the tests were done using 5-fold cross-validation. The datasets used in the experiments were the following: autos, balance-scale, breast-cancer, breast-w, car, credit-a, diabetes, heart-c, iris and vehicle. The GP settings were set to the following values:
– selection method: tournament
– fitness function: (1 − accuracy) + (0.02 ∗ numberOfNodes)
– population size: 150
– elite size: 1
– number of generations: 2000
– crossover probability: 100%
– mutation probability: 10%
– number of runs: 10
Although some operators in GP can be parallelized, we compared non-parallelized versions in our experiments; parallelizing the lazy evaluation is one of our goals for the future.
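For clarity, the fitness function from the settings above can be read as the following one-liner (a sketch in Python; the helper names are ours). Lower values are better, since both the classification error and the tree size are penalized:

    def fitness(accuracy: float, number_of_nodes: int) -> float:
        # (1 - accuracy) penalizes misclassification; 0.02 * numberOfNodes
        # penalizes large trees, so evolution prefers small, accurate trees
        return (1.0 - accuracy) + 0.02 * number_of_nodes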

4.1 Classification Results

Figure 8 shows two classification metrics, overall accuracy and average F-score, for all of the 10 datasets. As is evident from the numbers in the table in Fig. 8, there are slight differences in both metrics between the different settings. The best-performing GP accuracy-wise is the GP with lazy evaluation (0.70) where we used 10 instances in the evaluation process, followed by the standard GP with no lazy evaluation. The same lead is shown in the F-score metric, where the GP with lazy evaluation scored 0.54.

4.2 Statistical Analysis of the Classification Results

We also conducted a statistical analysis of the classification results, to evaluate whether the differences are statistically significant or due to chance. First, we tested whether the result data are normally distributed. The Shapiro–Wilk test showed that none of the results for any GP setting or any metric are normally distributed (p < 0.05 for all results). With this, one of the assumptions for the usage of parametric tests is violated, and we have to use non-parametric statistical tests to conduct the analysis. Thus the Kruskal–Wallis test for multiple independent groups was used to determine whether there are statistically significant differences between the different GP settings. The Kruskal–Wallis test returned that there are statistically significant differences between groups for accuracy (χ² = 370.465, p < 0.001). Because of


Fig. 8 Classification metrics of the resulting classification decision trees on all 10 datasets with 5-fold cross-validation. LE = lazy evaluation; inst = number of instances used in the evaluation process [10]

this, we conducted post-hoc tests to determine between which GP settings these differences lie. We used the Wilcoxon signed-rank test for pair-wise comparisons on the accuracy metric, with the Holm–Bonferroni correction for multiple comparisons. The post-hoc test shows that there are no statistically significant differences between the two best-performing GPs (between the standard GP and the GP with lazy evaluation with 10 instances; p = 0.852). The results of the Kruskal–Wallis test for the average F-score metric are similar: there are statistically significant differences between the different settings (χ² = 191.558, p < 0.001). Here the Wilcoxon signed-rank post-hoc test with the Holm–Bonferroni correction shows that there are statistically significant differences between the standard GP and the GP with lazy evaluation with 10 instances (p < 0.001).

4.3 Evolution Time Analysis

The running times in Fig. 9 show a clear lead of the lazy evaluation GPs over the standard GP. The slowest lazy evaluation GP is the one with 10 instances used in the evaluation process, which took 4495.92 ms on average, just 62.6% of the average total running time of the standard GP (7182.97 ms). We see that the running times are not proportionally shorter compared to the theoretical saving due to fewer evaluations, but they are still shorter.


Fig. 9 Average total running times of the whole evolution process in milliseconds for all settings. LE = Lazy evaluation; inst = number of instances used in the evaluation process [10]

4.4 Discussion

The classification results show that there are statistically significant differences between the standard GP and the GP with lazy evaluation with 10 instances on the F-score metric. With this, we can conclude that if we want to optimize our classification method for the F-score metric, the usage of lazily evaluated GP is definitely appropriate. On the other hand, there were no statistically significant differences in the accuracy metric. This means that the lazily evaluated GP was statistically neither better nor worse than its counterparts. As the evolution time results showed, the lazily evaluated variants of GP took less time to reach the results discussed here. With this, we can conclude that even when lazily evaluated GP does not improve the classification metrics, it is still worth using, as it runs for a shorter time.
The interesting question here is why we get better results F-score-wise and similar results accuracy-wise, while still running for a shorter time. One answer could be that with the changing weights of the classification instances, the environment of the GP keeps changing, so the chance of getting stuck in local optima is lowered. Also, giving more importance to harder-to-classify instances forces the evolution to evolve trees that explore the more difficult patterns and not only the simple, obvious ones.

5 Conclusions

We proposed a lazy and dynamic evaluation approach within the evolutionary method of genetic programming. It was applied in the process of creating classification decision trees, using dynamic choosing of the instances. We tested this approach in an experimental setting, where we applied it to several classic classification benchmarks and compared it to the standard GP without our improvements. The results of the experiments show that this approach has great potential and should be explored further. Not only did all of the lazy evaluation GPs take less


processing time to finish the whole evolution process in comparison to the standard GP, some settings (with more instances in the evaluation process) also returned comparable results (in accuracy and average F-score). One of the lazy evaluation settings included in the experiment (with 10 instances in the evaluation) even returned better results than the standard GP. This can be attributed to the changing environment of the GP, which prevents the solutions from overfitting, and to the weighting process, which gives more importance (more chance of being involved in the evaluation process) to harder-to-classify instances. All of the results were supplemented with a statistical analysis, which confirmed that the advantage in the F-score metric is statistically significant.
In the future, we are planning to research lazy evaluation further. The focus will be given to the importance of the tournament size and to exploring the number of evaluation instances. Also, a parallelized version of lazy evaluation should be directly comparable to parallel GP for decision tree creation, and the computation times should be analyzed to determine whether the improvements in the time analysis are also present then.

References

1. Espejo, P.G., Ventura, S., Herrera, F.: A survey on the application of genetic programming to classification. IEEE Trans. Syst. Man Cybern. Part C: Appl. Rev. 40, 121–144 (2010)
2. Breiman, L., Friedman, J., Stone, C.J., Olshen, R.A.: Classification and Regression Trees. CRC Press, Boca Raton (1984)
3. Quinlan, J.R.: C4.5: Programs for Machine Learning. Elsevier, Amsterdam (2014)
4. Cheng, J., Fayyad, U.M., Irani, K.B., Qian, Z.: Improved decision trees: a generalized version of ID3. In: Proceedings of the Fifth International Conference on Machine Learning, pp. 100–107 (1988)
5. Liaw, A., Wiener, M., et al.: Classification and regression by randomForest. R News 2, 18–22 (2002)
6. Ganjisaffar, Y., Caruana, R., Lopes, C.V.: Bagging gradient-boosted trees for high precision, low variance ranking models. In: Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 85–94. ACM (2011)
7. Zhang, B.T., Cho, D.Y.: Genetic programming with active data selection. In: Asia-Pacific Conference on Simulated Evolution and Learning, pp. 146–153. Springer (1998)
8. Podgorelec, V., Zorman, M.: Decision tree learning. In: Encyclopedia of Complexity and Systems Science, pp. 1–28. Springer (2015)
9. Podgorelec, V., Šprogar, M., Pohorec, S.: Evolutionary design of decision trees. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 3, 63–82 (2013)
10. Karakatič, S., Heričko, M., Podgorelec, V.: Experiments with lazy evaluation of classification decision trees made with genetic programming. In: Proceedings of the 9th International Joint Conference on Computational Intelligence - Volume 1: IJCCI, INSTICC, SciTePress, pp. 348–353 (2017)
11. Gathercole, C., Ross, P.: Dynamic training subset selection for supervised learning in genetic programming. Parallel Probl. Solving Nat. PPSN III, 312–321 (1994)
12. Šprogar, M.: Excluding fitness helps improve robustness of evolutionary algorithms. In: Knowledge-Based Intelligent Information and Engineering Systems, pp. 905–905. Springer (2005)
13. Lichman, M.: UCI machine learning repository (2013)

Defuzzification of a Fuzzy p-value by the Signed Distance: Application on Real Data

Rédina Berkachy and Laurent Donzé

Abstract We develop a fuzzy hypothesis testing approach in which we consider the fuzziness of the data as well as the fuzziness of the hypotheses. We give the corresponding fuzzy p-value with its α-cuts. In addition, we use the so-called "signed distance" operator to defuzzify this p-value, and we provide the corresponding decision rule. Obtaining a defuzzified p-value and being able to interpret it can be of good use in many situations. We illustrate our testing procedure with a detailed numerical example in which we study a right one-sided fuzzy test and compare it with a classical one. We close the paper with an application of the method to a survey from the financial place of Zurich, Switzerland. We display the decisions related to tests on the mean made on a set of variables of the sample. Both fuzzy and classical tests are conducted. One of our main findings is that, although the two approaches have different decision rules in terms of interpretation, the decisions made are by far the same. In this perspective, we can state that the fuzzy testing procedure can be seen as a generalization of the classical one.

Keywords Test of the mean · t-test · Fuzzy p-value · Fuzzy statistics · Fuzzy hypotheses · Fuzzy data · One-sided and two-sided tests · Defuzzification · Signed distance method

1 Introduction and Motivation

Many research papers have lately discussed the extension of the classical approach of statistical inference to the fuzzy environment. We note for example Kruse and Meyer [1], Filzmoser and Viertl [2], Parchami et al. [3], Berkachy and Donzé [4],


Grzegorzewski [5], Berkachy and Donzé [6], and many others. Filzmoser and Viertl [2] presented a fuzzy testing approach and gave a fuzzy p-value. The authors stated in their paper that the fuzziness comes from the data. In addition, similarly to the Neyman-Pearson conjecture [7], they gave a three-level decision rule in which they assumed that a region exists where neither the null nor the alternative hypothesis is rejected. Simultaneously, Parchami et al. extended the ideas of Filzmoser and Viertl [2], but assumed that the fuzziness is a matter of the hypotheses rather than the data. From their side, Berkachy and Donzé [4], inspired by the previous works, generalized the case and presented tests based on fuzzy data and fuzzy hypotheses at the same time. We note that Kruse and Meyer [1] were among the first to present fuzzy hypotheses testing with fuzzy data. On the other hand, Grzegorzewski [5] provided a testing procedure based on fuzzy confidence intervals; for this test, he gave a decision rule measured by the so-called degree of conviction of a given hypothesis. In the same way, Berkachy and Donzé [6] discussed an approach also based on fuzzy confidence intervals, but with the assumption of both fuzzy hypotheses and fuzzy data.
From another side, a fuzzy decision is in many situations difficult to understand and interpret, and thus being able to have a crisp one can be convenient. We know that even if defuzzifying a fuzzy set makes us lose some information, this operation is useful in several cases of decision making. Grzegorzewski [8] displayed different operators to defuzzify his fuzzy decisions. Afterwards, based on the work of Grzegorzewski [8], Berkachy and Donzé [9] proposed the signed distance as a defuzzification operator and showed its use on fuzzy decisions. We recall that this method has been extensively used in other contexts, such as evaluating linguistic questionnaires, as seen in Berkachy and Donzé [10].
We recall in this work the testing approach described by Berkachy and Donzé [4]. In particular, we provide the fuzzy p-values corresponding to the case where we consider both fuzzy data and fuzzy hypotheses. Furthermore, since we are interested in the signed distance defuzzification method, we focus on how to defuzzify these fuzzy p-values using this distance. We illustrate our theory with a detailed example of a right one-sided fuzzy test. We also perform the same test with the same setup using the classical testing approach; the aim of this task is to compare the decisions made using both approaches. In addition, one of the main contributions of this paper is to show how one can compute these fuzzy (and classical) testing approaches on real data. For this purpose, we use a survey coming from the financial place of Zurich in Switzerland, called "Finanzplatz: Umfrage 2010". The latter was conducted in 2010 by the Office of the Economy of the Canton of Zurich to understand the present situation of Zurich firms. We then run fuzzy and classical tests of the mean on a set of variables carefully chosen in order to give us some information about our data.
To sum up, we give in Sect. 2 some fundamental definitions about fuzzy sets. In Sect. 3, we define concisely the signed distance. Section 4 is devoted to presenting briefly the classical hypothesis testing approach, followed by the fuzzy one. Afterwards, we show in Sect. 5 the detailed numerical example performed with the fuzzy and classical approaches. We close the paper with an application of the method on real data.


2 Definitions and Notations

Let us give some essential definitions and notations about fuzzy sets.

Definition 1 (Fuzzy Set) If A is a collection of objects denoted generically by x, then a fuzzy set X̃ in A is a set of ordered pairs:

\tilde{X} = \{(x, \mu_{\tilde{X}}(x)) : x \in A\},  (1)

where μ_X̃(x) is the membership function of x in X̃, which maps A to the closed interval [0, 1] and characterizes the degree of membership of x in X̃.

Definition 2 (Fuzzy Number) A fuzzy number X̃ is a convex and normalized fuzzy set on ℝ, such that its membership function is continuous and its support is bounded.

Definition 3 (α-cut of a Fuzzy Number) The α-cut of a fuzzy number X̃ is a non-fuzzy set defined as:

\tilde{X}_\alpha = \{x \in \mathbb{R} : \mu_{\tilde{X}}(x) \ge \alpha\}.  (2)

The fuzzy number X̃ can be represented by the family set {X̃_α : α ∈ [0, 1]} of its α-cuts.

Definition 4 (Indicator Function of an α-cut) An indicator function I_{X̃_α} : ℝ → {0, 1} of an α-cut of a fuzzy number X̃ is defined as follows:

I_{\tilde{X}_\alpha}(x) = I_{\{x \in \mathbb{R};\, \mu_{\tilde{X}}(x) \ge \alpha\}}(x) = \begin{cases} 1 & \text{if } \mu_{\tilde{X}}(x) \ge \alpha, \\ 0 & \text{otherwise.} \end{cases}

We note that the α-cut of a fuzzy number X̃ is the closed interval [X̃_α^L, X̃_α^R], where its left α-cut is given by X̃_α^L = inf{x ∈ ℝ : μ_X̃(x) ≥ α}, and its right one by X̃_α^R = sup{x ∈ ℝ : μ_X̃(x) ≥ α}. In addition, the α-cut of a fuzzy number X̃ is a union of finite compact and bounded intervals. Furthermore, by the least-upper-bound property generalized to ordered sets and the extension principle, the following expression of the membership function of X̃ (see Viertl [11]) is induced:


\mu_{\tilde{X}}(x) = \max\{\alpha I_{\tilde{X}_\alpha}(x) : \alpha \in [0, 1]\},  (3)

where I_{X̃_α}(x) is the indicator function.

Definition 5 (Triangular Fuzzy Number) A triangular fuzzy number X̃ is a fuzzy number with membership function given as follows:

\mu_{\tilde{X}}(x) = \begin{cases} \frac{x-a}{b-a} & \text{if } a < x \le b, \\ \frac{x-c}{b-c} & \text{if } b < x \le c, \\ 0 & \text{elsewhere.} \end{cases}  (4)

A triangular fuzzy number is commonly represented by a tuple composed of three values a, b and c, i.e. X̃ = (a, b, c), where a < b < c ∈ ℝ. In this case, its left and right α-cuts X̃_α^L and X̃_α^R are written as

\tilde{X}_\alpha^L = a + (b - a)\alpha, \qquad \tilde{X}_\alpha^R = c - (c - b)\alpha.  (5)

From another side, it is very useful to define some arithmetic on fuzzy numbers. For instance, the sum of and difference between two fuzzy numbers are given as follows:

Definition 6 (Sum of Two Fuzzy Numbers) The sum S̃_{X̃₁,X̃₂} of two fuzzy numbers X̃₁ and X̃₂ with their corresponding α-cuts X̃_{1,α} = [X̃_{1,α}^L, X̃_{1,α}^R] and X̃_{2,α} = [X̃_{2,α}^L, X̃_{2,α}^R] is written as:

\tilde{S}_{\tilde{X}_1, \tilde{X}_2} = \tilde{X}_{1\alpha} + \tilde{X}_{2\alpha} = [\tilde{X}_{1\alpha}^L + \tilde{X}_{2\alpha}^L,\ \tilde{X}_{1\alpha}^R + \tilde{X}_{2\alpha}^R].  (6)

Definition 7 (Difference between Two Fuzzy Numbers) The difference D̃_{X̃₁,X̃₂} between two fuzzy numbers X̃₁ and X̃₂ with their corresponding α-cuts is given by:

\tilde{D}_{\tilde{X}_1, \tilde{X}_2} = \tilde{X}_{1\alpha} - \tilde{X}_{2\alpha} = [\tilde{X}_{1\alpha}^L - \tilde{X}_{2\alpha}^R,\ \tilde{X}_{1\alpha}^R - \tilde{X}_{2\alpha}^L].  (7)

From the previous definitions, we are able to define the fuzzy sample mean. This definition will be useful in further sections. It is given by:

Definition 8 (α-cuts of the Fuzzy Sample Mean) We denote by X̄̃ the fuzzy sample mean. It is given by its α-cuts as:

\bar{\tilde{X}}_\alpha = \Big[\frac{1}{n}\sum_{i=1}^{n} \tilde{X}_{i\alpha}^L,\ \frac{1}{n}\sum_{i=1}^{n} \tilde{X}_{i\alpha}^R\Big],  (8)


where X̃_{i,α}^L and X̃_{i,α}^R are respectively the left and right α-cuts of the fuzzy number X̃_i.
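Since triangular fuzzy numbers and their α-cut arithmetic reappear throughout the paper, the following short Python sketch (our own illustration; the class and function names are hypothetical) implements Eqs. (5) and (8) for triangular fuzzy numbers, for which the fuzzy sample mean reduces to the component-wise mean of the tuples (a, b, c):

    from dataclasses import dataclass

    @dataclass
    class TriangularFN:
        """Triangular fuzzy number X = (a, b, c) with a < b < c."""
        a: float
        b: float
        c: float

        def alpha_cut(self, alpha: float) -> tuple[float, float]:
            # Eq. (5): left and right alpha-cuts of a triangular fuzzy number
            return (self.a + (self.b - self.a) * alpha,
                    self.c - (self.c - self.b) * alpha)

    def fuzzy_sample_mean(sample: list[TriangularFN]) -> TriangularFN:
        # Eq. (8) applied to triangular numbers: averaging the alpha-cuts
        # component-wise is equivalent to averaging the tuples (a, b, c)
        n = len(sample)
        return TriangularFN(sum(x.a for x in sample) / n,
                            sum(x.b for x in sample) / n,
                            sum(x.c for x in sample) / n)

Applied, for instance, to the ten fuzzified observations of the numerical example in Sect. 5, this yields (1.9, 2.4, 2.9), in agreement with the α-cuts of Eq. (42).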

3 The Signed Distance Defuzzification Method

Yao and Wu [12] and Lin and Lee [13] mainly described the signed distance defuzzification method. Afterwards, Berkachy and Donzé [10] used it in another context, the evaluation of linguistic questionnaires. Given its nice properties, this method will be implemented in our test procedure for defuzzifying the fuzzy p-values. Let us first briefly define it.

Definition 9 The signed distance d₀(a, 0) measured from 0 for a real value a in ℝ is a itself.

Definition 10 (Signed Distance of a Fuzzy Number Measured from the Origin) Let X̃ be a fuzzy set on ℝ, such that X̃ = {(x, μ_X̃(x)) | x ∈ ℝ}, where μ_X̃(x) is the membership function of x in X̃. Suppose that the α-cuts X̃_α^L and X̃_α^R exist and are integrable for α ∈ [0, 1]. The signed distance of X̃ measured from the fuzzy origin 0̃ is:

d(\tilde{X}, \tilde{0}) = \frac{1}{2}\int_0^1 [\tilde{X}_\alpha^L + \tilde{X}_\alpha^R]\, d\alpha.  (9)
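As a hedged illustration of Eq. (9): for a triangular fuzzy number (a, b, c) with the α-cuts of Eq. (5), the integral has the closed form (a + 2b + c)/4. The following sketch (our own; scipy is assumed to be available) checks this numerically:

    from scipy.integrate import quad

    def signed_distance_triangular(a: float, b: float, c: float) -> float:
        # Eq. (9) with the alpha-cuts of Eq. (5): integrate X_alpha^L + X_alpha^R
        integrand = lambda alpha: (a + (b - a) * alpha) + (c - (c - b) * alpha)
        value, _ = quad(integrand, 0.0, 1.0)
        return 0.5 * value

    # numeric integral agrees with the closed form (a + 2*b + c) / 4
    print(signed_distance_triangular(2, 2.3, 2.6))   # 2.3
    print((2 + 2 * 2.3 + 2.6) / 4)                   # 2.3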

4 Testing Fuzzy Hypotheses with Fuzzy Data

In the following, we recall the theory described in Berkachy and Donzé [4], based on the contributions of Filzmoser and Viertl [2] and Parchami et al. [3], who introduced the concept of the fuzzy p-value. First of all, let us recall the main statements of the classical testing approach.

4.1 Testing Hypotheses in the Classical Approach

We consider a population described by a probability distribution P_θ depending on the parameter θ and belonging to a family of distributions P = {P_θ : θ ∈ Θ}. The classical approach of testing hypotheses on a parameter θ consists in considering a null hypothesis denoted by H₀, H₀ : θ ∈ Θ_{H₀}, and an alternative one denoted by H₁, H₁ : θ ∈ Θ_{H₁}. We denote by Θ_{H₀} and Θ_{H₁} the subsets of Θ such that Θ_{H₀} ∩ Θ_{H₁} = ∅. Consider a random sample X₁, ..., Xₙ. A test statistic T is a function of this sample used in testing the null hypothesis against the alternative one, where T : ℝⁿ → ℝ. For such tests, two decisions can be given: "do not reject the null hypothesis H₀" or "reject the null hypothesis H₀".


Yet, by the Neyman-Pearson testing approach [7], one can have the possibility of a three-level decision, where a third case appears: "both the null and the alternative hypotheses are neither rejected nor not rejected". The hypothesis testing statement is reduced to a decision problem based on the test statistic T. We define a space of possible values of T decomposed into a rejection region R and its complement R^c. Three forms of the rejection region R are possible, depending on the alternative hypothesis H₁. We consider the following three tests:

1. H_0 : \theta \ge \theta_0 vs. H_1 : \theta < \theta_0;  (10)
2. H_0 : \theta \le \theta_0 vs. H_1 : \theta > \theta_0;  (11)
3. H_0 : \theta = \theta_0 vs. H_1 : \theta \ne \theta_0;  (12)

where θ is the parameter to test and θ₀ a particular value of this parameter. We would reject the null hypothesis H₀ if, respectively:

1. T \le t_l (left one-sided test);  (13)
2. T \ge t_r (right one-sided test);  (14)
3. T \notin (t_a, t_b) (two-sided test);  (15)

where t_l, t_r, t_a and t_b are quantiles of the distribution of T. From another side, we denote by δ the significance level of the test. The quantiles t_l, t_r, t_a and t_b of the distribution are found such that the following probabilities hold:

1. P(T \le t_l) = \delta,  (16)
2. P(T \ge t_r) = \delta,  (17)
3. P(T \le t_a) = P(T \ge t_b) = \delta/2.  (18)

By this method, we decide to reject the null hypothesis if the value of the test statistic t = T(y₁, ..., yₙ) falls into the rejection region R. Deciding whether or not to reject a given null hypothesis H₀ can be done by considering the p-value. The computation of the latter depends in particular on the boundary of the null hypothesis. We denote by p_{θ*} a p-value defined as a function of the boundary θ* of the null hypothesis. This p-value p_{θ*} for the three cases (13), (14) and (15) can be written respectively in the following manner:

1. p_{\theta^*} = P_{\theta^*}(T \le t_l),  (19)
2. p_{\theta^*} = P_{\theta^*}(T \ge t_r),  (20)
3. p_{\theta^*} = 2 \min[P_{\theta^*}(T \le t_a),\ P_{\theta^*}(T \ge t_b)],  (21)


where "P_{θ*}" means that the probability distribution depends on the boundary θ*.

Decision rule. We make a decision by comparing the p-value to a predefined significance level δ as follows:
– If the p-value is smaller than δ, we reject the null hypothesis H₀.
– Otherwise, we do not reject it.

4.2 Fuzzy Hypotheses

Filzmoser and Viertl [2] showed a hypothesis test in which they considered the case of fuzzy data. However, Parchami et al. [3] asserted that the fuzziness rather comes from the hypotheses. In this paper, we display the approach described in Berkachy and Donzé [4], in which, inspired by these papers, we treated the case where both the data and the hypotheses are fuzzy. Let us begin by defining a fuzzy hypothesis.

Definition 11 (Fuzzy Hypothesis) A fuzzy hypothesis H̃ on the parameter θ, denoted as "H̃ : θ is H", is a fuzzy subset of the parameter space Θ with its corresponding membership function μ_H̃.

Remark 1 A given fuzzy hypothesis H̃ reduces to a crisp hypothesis H when the membership function μ_H̃ = I_Θ.

It is common practice to model fuzzy hypotheses by triangular fuzzy numbers. The fuzzy versions of the hypotheses (10), (11) and (12) can be respectively written as:

1. \tilde{H}^{OL} = (a, b, b) (fuzzy left one-sided hypothesis),  (22)
2. \tilde{H}^{OR} = (a, a, b) (fuzzy right one-sided hypothesis),  (23)
3. \tilde{H}^{T} = (a, b, c) (fuzzy two-sided hypothesis),  (24)

where a < b < c ∈ ℝ.

Let X₁, ..., Xₙ be a crisp random sample with probability distribution P_θ. We model this sample by fuzzy numbers and get the fuzzy random sample X̃ = (X̃₁, ..., X̃ₙ), where X̃ᵢ is a fuzzy number as described in Definition 2, with its corresponding membership function μ_{X̃ᵢ}. For the membership function μ_X̃ of X̃, with μ_X̃ : ℝⁿ → [0, 1]ⁿ, there exists a value x, seen as an n-dimensional vector, where this function reaches 1. Thus, the α-cuts of μ_X̃ can be seen as closed, compact and convex subsets of ℝⁿ. From another side, let φ be a real-valued function, φ : ℝⁿ → ℝ. We denote by Z̃ the fuzzy number resulting from applying the function φ to the fuzzy random sample, in other terms Z̃ = φ(X̃₁, ..., X̃ₙ). Then, by the extension principle [14], the membership function μ_Z̃ of Z̃ is written in the following manner:


\mu_{\tilde{Z}}(z) = \begin{cases} \sup\{\mu_{\tilde{X}}(x) : \varphi(x) = z\} & \text{if } \exists x : \varphi(x) = z, \\ 0 & \text{if } \nexists x : \varphi(x) = z, \end{cases}  (25)

for all z ∈ ℝ. Moreover, the α-cuts of Z̃ are given by:

\tilde{Z}_\alpha = \big[\min_{x \in \tilde{X}_\alpha} \varphi(x),\ \max_{x \in \tilde{X}_\alpha} \varphi(x)\big],  (26)

for all α ∈ (0, 1] [11].

We finally have to define the fuzzy boundaries of fuzzy hypotheses.

Definition 12 (Boundary of a Hypothesis) The boundary H̃* of a hypothesis H̃ : θ is H, is a fuzzy subset of Θ with membership function μ_{H̃*}.

The fuzzy boundaries corresponding to the tests (10), (11) and (12) are respectively written as:
1. H̃* = H if θ ≤ θ₀, 0 otherwise (H̃ is left one-sided);
2. H̃* = H if θ ≥ θ₀, 0 otherwise (H̃ is right one-sided);
3. H̃* = H (H̃ is two-sided).

4.3 Fuzzy p-value

Considering p-values as fuzzy is a direct consequence of the fuzziness of the hypotheses. For this purpose, showing their α-cuts is necessary in assessing the results of the test statistics. In this case, we take into consideration the three possible rejection regions as defined in (13)–(15). The α-cuts of the corresponding fuzzy p-values are shown in the following proposition.

Proposition 1 Given a test procedure based on fuzziness of data and hypotheses, and considering the three rejection regions (13), (14) and (15), the α-cuts of the fuzzy p-value p̃ are given by:

1. \tilde{p}_\alpha = [P_{\theta^R}(T \le \tilde{t}_\alpha^L),\ P_{\theta^L}(T \le \tilde{t}_\alpha^R)];  (27)
2. \tilde{p}_\alpha = [P_{\theta^L}(T \ge \tilde{t}_\alpha^R),\ P_{\theta^R}(T \ge \tilde{t}_\alpha^L)];  (28)
3. \tilde{p}_\alpha = \begin{cases} [2P_{\theta^R}(T \le \tilde{t}_\alpha^L),\ 2P_{\theta^L}(T \le \tilde{t}_\alpha^R)] & \text{if } A_l > A_r, \\ [2P_{\theta^L}(T \ge \tilde{t}_\alpha^R),\ 2P_{\theta^R}(T \ge \tilde{t}_\alpha^L)] & \text{if } A_l \le A_r; \end{cases}  (29)

for all α ∈ (0, 1], where t̃_α^L and t̃_α^R are the left and right α-cuts of t̃ = φ(X̃₁, ..., X̃ₙ), θ^L and θ^R are the α-cuts of the boundary of H̃₀, A_l is the area under the membership function μ_t̃ of the fuzzy number t̃ on the left side of the median, and A_r is the one on the right side. In this case, one has to decide on which side the median is located, based on the larger amount of fuzziness.


Proof 1 The proof of Proposition 1 is done in three steps, since we consider the case of both fuzzy data and fuzzy hypotheses:

1. We denote by supp(μ_t̃) = {x ∈ ℝ : μ_t̃(x) > 0} the support of μ_t̃, where μ_t̃ is the membership function of t̃ = T(X̃₁, ..., X̃ₙ), the fuzzy value resulting from applying the test statistic T to the fuzzy random sample X̃₁, ..., X̃ₙ as seen in the above sections. Filzmoser and Viertl [2] expressed the precise p-value, called p, for a one-sided test according to the extension principle of Zadeh [14]. The p-values corresponding respectively to cases (10) and (11) are given by:

1. p = P(T \le t = \max \operatorname{supp}(\mu_{\tilde{t}}));  (30)
2. p = P(T \ge t = \min \operatorname{supp}(\mu_{\tilde{t}})).  (31)

We want to write the α-cuts of the fuzzy p-value p̃. We know that μ_t̃ is a membership function and that all its α-cuts are compact and closed on ℝ. Consequently, the α-cuts of the fuzzy p-value p̃_FV related to (30) and (31) are written as:

1. \tilde{p}_{FV,\alpha} = [P(T \le \tilde{t}_\alpha^L),\ P(T \le \tilde{t}_\alpha^R)];  (32)
2. \tilde{p}_{FV,\alpha} = [P(T \ge \tilde{t}_\alpha^R),\ P(T \ge \tilde{t}_\alpha^L)].  (33)

The case of the two-sided test is similarly conceivable.

2. Our next step is to extend these formulas to the case of fuzzy hypotheses. According to Parchami et al. [3], we present the α-cuts of the fuzzy p-value where the fuzziness comes from the hypotheses. These α-cuts are of the following form:

1. \tilde{p}_{PA,\alpha} = [P_{\theta^R}(T \le t),\ P_{\theta^L}(T \le t)];  (34)
2. \tilde{p}_{PA,\alpha} = [P_{\theta^L}(T \ge t),\ P_{\theta^R}(T \ge t)];  (35)
3. \tilde{p}_{PA,\alpha} = \begin{cases} [2P_{\theta^R}(T \le t),\ 2P_{\theta^L}(T \le t)] & \text{if } A_l > A_r, \\ [2P_{\theta^L}(T \ge t),\ 2P_{\theta^R}(T \ge t)] & \text{if } A_l \le A_r. \end{cases}  (36)

Consequently, we get the left and right α-cuts, p̃_α^L and p̃_α^R, of the fuzzy p-values based on the fuzziness of data and hypotheses, as seen in Proposition 1. This is the result of combining Eqs. (19), (20), (32) and (33), using Definition 11 and the fuzzy p-value discussed by Parchami et al. [3].

3. Our last step is to ensure the fulfillment of the properties of a membership function: μ_t̃ and μ_{H̃₀} are membership functions and the probabilities are restricted to [0, 1]. This fact implies that the resulting membership function of p̃ lies between 0 and 1; we add that it reaches 1 for a given value. Moreover, the α-cuts of each case form closed finite intervals. Hence, they are compact and convex subsets of ℝ for all α ∈ (0, 1].


Decision rule. Following the Neyman-Pearson assertion, Filzmoser and Viertl [2] adopted a three-decision problem related to the left and right α-cuts of p̃. The decision rule for a given test with significance level δ is as follows:
– if p̃_α^R < δ, the null hypothesis is rejected;
– if p̃_α^L > δ, the null hypothesis is not rejected;
– if δ ∈ [p̃_α^L, p̃_α^R], the null and alternative hypotheses are neither rejected nor not rejected.

4.4 Defuzzification of the Fuzzy p-value by the Signed Distance

The signed distance has previously been presented as a defuzzification operator with apparently nice properties. For this purpose, we intend to use it to defuzzify the fuzzy p-value, according to Berkachy and Donzé [4]. We would like afterwards to assess whether the decision made with the defuzzified p-values is similar to the one of the classical approach. To accomplish this task, we consider the α-cuts of the fuzzy p-values given in Eqs. (27)–(29) and apply Eq. (9) to defuzzify them. We get the following defuzzified p-values:

1. d(\tilde{p}, \tilde{0}) = \frac{1}{2}\int_0^1 \big(P_{\theta^R}(T \le \tilde{t}_\alpha^L) + P_{\theta^L}(T \le \tilde{t}_\alpha^R)\big)\, d\alpha;  (37)
2. d(\tilde{p}, \tilde{0}) = \frac{1}{2}\int_0^1 \big(P_{\theta^L}(T \ge \tilde{t}_\alpha^R) + P_{\theta^R}(T \ge \tilde{t}_\alpha^L)\big)\, d\alpha;  (38)
3. d(\tilde{p}, \tilde{0}) = \begin{cases} \frac{1}{2}\int_0^1 \big(2P_{\theta^R}(T \le \tilde{t}_\alpha^L) + 2P_{\theta^L}(T \le \tilde{t}_\alpha^R)\big)\, d\alpha & \text{if } A_l > A_r, \\ \frac{1}{2}\int_0^1 \big(2P_{\theta^L}(T \ge \tilde{t}_\alpha^R) + 2P_{\theta^R}(T \ge \tilde{t}_\alpha^L)\big)\, d\alpha & \text{if } A_l \le A_r. \end{cases}  (39)

Decision rule. The defuzzified p-values are similar to those of the classical approach in terms of interpretation. Therefore, the following decisions are considered:
– if d(p̃, 0̃) < δ, the null hypothesis is rejected;
– if d(p̃, 0̃) > δ, the null hypothesis is not rejected, with the degree of conviction d(p̃, 0̃);
– if d(p̃, 0̃) = δ (a rare case), one has to decide whether or not to reject the null hypothesis.
A main difference between the fuzzy and the defuzzified p-values in terms of decision rules is that a no-decision case is considered in the former. The decision of rejecting neither H₀ nor H₁ does not occur once the p-values are defuzzified, since they are then crisp. Thus, not detecting the no-decision region might be a disadvantage of the defuzzification of the fuzzy p-value.
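The crisp decision rule above is simple enough to state as code; this small sketch (our own illustration) maps a defuzzified p-value to a decision:

    def decide(d_p: float, delta: float = 0.05) -> str:
        # decision rule for the defuzzified (crisp) p-value
        if d_p < delta:
            return "reject H0"
        if d_p > delta:
            return f"do not reject H0 (degree of conviction {d_p:.4f})"
        return "boundary case: decide whether or not to reject H0"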


5 Numerical Example

We give in this section a detailed example illustrating our approach. We note that we treat a one-sided test as shown in (11); for a two-sided test, one can reason similarly. We afterwards provide a comparison between the fuzzy and the classical tests for the same setup.

5.1 The Test and Its Setups

We consider a random sample of size n = 10 observations. We assume that the sample is drawn from a normal distribution with an unknown mean and a standard deviation of 1.075, N(μ, 1.075), as seen in Table 1.

1. The Fuzzified Sample
We consider the data set as fuzzy. Different types of fuzzy numbers can be adopted in order to model this uncertain data set. To simplify the example, we use triangular fuzzy numbers. The fuzzified sample is shown in Table 1.

2. Define the Hypotheses of the Test
The aim is to test the following null hypothesis H̃₀ at the significance level δ = 0.05:

H̃₀ : μ is approximately μ₀ = 2.3 vs. H̃₁ : μ is approximately bigger than μ₀ = 2.3.

Table 1 The data set and the corresponding fuzzy number of each observation—Example of Sect. 5

Index | x_i | Triangular fuzzy number
1  | 1 | (0.5, 1, 1.5)
2  | 3 | (2.5, 3, 3.5)
3  | 2 | (1.5, 2, 2.5)
4  | 2 | (1.5, 2, 2.5)
5  | 4 | (3.5, 4, 4.5)
6  | 2 | (1.5, 2, 2.5)
7  | 4 | (3.5, 4, 4.5)
8  | 1 | (0.5, 1, 1.5)
9  | 2 | (1.5, 2, 2.5)
10 | 3 | (2.5, 3, 3.5)

R. Berkachy and L. Donzé

1.0

Membership functions of the null and alternative hypotheses

0.0

0.2

0.4

α

0.6

0.8

The null hypothesis The alternative hypothesis

2.0

2.2

2.4

x

2.6

2.8

3.0

Fig. 1 The membership functions of the null and the alternative hypotheses—Example of Sect. 5

3. Fuzzify the Hypotheses
We consider not only the sample as fuzzy but the hypotheses as well. For this reason, a fuzzification of the hypotheses is needed. For instance, we can model them by triangular fuzzy numbers. Let the fuzzy null hypothesis H̃₀ᵀ be given by the fuzzy number H̃₀ᵀ = (2, 2.3, 2.6) and the alternative one by H̃₁ᴼᴿ = (2.3, 2.3, 3). Both hypotheses are shown in Fig. 1. In addition, the α-cuts of H̃₀ᵀ are given by:

(\tilde{H}_0^T)_\alpha^L = 2 + 0.3\alpha, \qquad (\tilde{H}_0^T)_\alpha^R = 2.6 - 0.3\alpha.  (40)

4. Calculate the Fuzzy Sample Mean
Suppose that the membership function of the observed fuzzy sample mean X̄̃, given by Eq. (8), is written as:

\mu_{\bar{\tilde{X}}}(x) = \begin{cases} 2x - 3.8 & \text{if } 1.9 < x \le 2.4, \\ -2x + 5.8 & \text{if } 2.4 < x \le 2.9, \\ 0 & \text{otherwise}, \end{cases}  (41)

with the corresponding α-cuts:

\bar{\tilde{X}}_\alpha^L = 1.9 + 0.5\alpha, \qquad \bar{\tilde{X}}_\alpha^R = 2.9 - 0.5\alpha.  (42)

(42)


In statistical inference, we have to define the rejection region, denoted R. For this one-sided test, in the classical approach we would reject the null hypothesis H₀ if T ≥ t_r, where t_r is such that P(T ≥ t_r) = δ. Note that t_r is a quantile of the distribution of the test statistic T. This rejection region is transferred to the fuzzy environment.

6. Calculate the Functions θ₁(α) and θ₂(α)
We are now able to calculate the functions θ₁(α) and θ₂(α). These will serve as the bounds of the integrals associated with the normal distribution density function. Based on Eqs. (40) and (42), and Definitions 6 and 7, the functions θ₁(α) and θ₂(α) are as follows:

\theta_1(\alpha) = \frac{\bar{\tilde{X}}_\alpha^R - (\tilde{H}_0^T)_\alpha^L}{\sigma/\sqrt{n}} = \frac{0.9 - 0.8\alpha}{\sigma/\sqrt{n}} = 2.65 - 2.35\alpha,

\theta_2(\alpha) = \frac{\bar{\tilde{X}}_\alpha^L - (\tilde{H}_0^T)_\alpha^R}{\sigma/\sqrt{n}} = \frac{-0.7 + 0.8\alpha}{\sigma/\sqrt{n}} = -2.06 + 2.35\alpha.

7. Calculate the α-cuts of the Fuzzy p-value
Combining all the above information, we provide the fuzzy p-value. The latter is given via its α-cuts p̃_α, as seen in Eq. (28), and can be written in the following manner:

\tilde{p}_\alpha = \Big[\int_{\theta_1(\alpha)}^{\infty} (2\pi)^{-\frac{1}{2}} \exp\Big(\frac{-u^2}{2}\Big)\, du,\ \int_{\theta_2(\alpha)}^{\infty} (2\pi)^{-\frac{1}{2}} \exp\Big(\frac{-u^2}{2}\Big)\, du\Big],  (43)

Fig. 2 The membership function of the fuzzy p-value p̃_α (together with the significance level)—Example of Sect. 5


where θ₁(α) and θ₂(α) are the functions of α calculated in the previous step. The membership function corresponding to the fuzzy p-value p̃ is shown in Fig. 2.

8. Defuzzify the Fuzzy p-value by the Signed Distance
From Fig. 2, we can see that the fuzzy p-value and the significance level overlap; thus, we cannot make any visual decision. The defuzzification of the fuzzy p-value is an efficient way of making a decision. We apply the signed distance operator as described in Eq. (38) and get the following result (a numerical sketch reproducing this value follows after step 9):

d(\tilde{p}, \tilde{0}) = \frac{1}{2}\int_0^1 \big(P_{\theta^L}(T \ge \tilde{t}_\alpha^R) + P_{\theta^R}(T \ge \tilde{t}_\alpha^L)\big)\, d\alpha
= \frac{1}{2}\int_0^1 \Big[\int_{2.65 - 2.35\alpha}^{\infty} (2\pi)^{-\frac{1}{2}} \exp\Big(\frac{-u^2}{2}\Big)\, du + \int_{-2.06 + 2.35\alpha}^{\infty} (2\pi)^{-\frac{1}{2}} \exp\Big(\frac{-u^2}{2}\Big)\, du\Big]\, d\alpha
= 0.43877.

9. Decide
The defuzzified p-value (0.43877) is bigger than the significance level (0.05). Therefore, the decision in this case is not to reject the null hypothesis at the level δ = 0.05, with the degree of conviction 0.43877.
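As referenced in step 8, the following Python sketch (our own illustration, assuming scipy is available) reproduces the defuzzified p-value numerically from the α-cuts of Eqs. (40) and (42):

    import math
    from scipy.integrate import quad
    from scipy.stats import norm

    se = 1.075 / math.sqrt(10)  # sigma / sqrt(n)

    def theta1(alpha):
        # (mean right alpha-cut minus H0 left alpha-cut) / (sigma/sqrt(n))
        return ((2.9 - 0.5 * alpha) - (2.0 + 0.3 * alpha)) / se

    def theta2(alpha):
        # (mean left alpha-cut minus H0 right alpha-cut) / (sigma/sqrt(n))
        return ((1.9 + 0.5 * alpha) - (2.6 - 0.3 * alpha)) / se

    # Eq. (38): integrate the sum of the two upper-tail probabilities and halve
    integrand = lambda a: norm.sf(theta1(a)) + norm.sf(theta2(a))
    value, _ = quad(integrand, 0.0, 1.0)
    print(round(0.5 * value, 5))  # approximately 0.43877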

5.2 Fuzzy versus Classical Approach

In another study, we discussed the influence of the shape of the fuzzy hypotheses on the test decision. We were interested in investigating the effect of varying the spread of the fuzzy hypotheses. As a result, we showed that, even though the alternative hypothesis determines the rejection region, this spread has no effect on the test decision in the treated approach. From another side, we asserted that the defuzzified p-value is sensitive to the form and spread (the fuzziness) of the fuzzy null hypothesis. We finally found that the highest defuzzified p-value corresponds to the most widely spread fuzzy p-value, and inversely (see Berkachy and Donzé [4]).
Since we are interested in the differences between the fuzzy and classical approaches of hypotheses testing, we next provide the same test performed with the classical approach, and we close the section with a comparison between the approaches.

The Test by the Classical Approach
If we perform the same test with the classical approach at the same significance level δ = 0.05, in other terms, if we consider the data and hypotheses as not fuzzy, we get the following results:
– a test statistic T = 0.2942,
– a quantile of the distribution of T, t(1−δ; n−1) = t(0.95; 9) = 1.833,


– a p-value p = 0.388.
There are two ways of making a decision in such cases: by directly comparing the test statistic with the quantile t(1−δ; n−1), or by comparing the p-value with the significance level δ. As our interest is in the p-values, we know that we reject the null hypothesis at the level δ if the p-value associated with the test statistic is smaller than or equal to δ. Then, regarding our example, we do not reject the null hypothesis, since the p-value p = 0.388 is bigger than δ = 0.05.

Comparison between the Test in the Fuzzy Approach and the One in the Classical Approach
As previously seen, the decision made by the fuzzy approach is similar to the one made by the classical approach. One main difference is that, if we consider the fuzzy approach instead of the classical one, we are able to treat the fuzziness and subjectivity of the hypotheses in such tests; the fuzzy p-value is useful in these cases. In addition, despite the fact that defuzzifying the fuzzy p-value (by the signed distance in this paper) makes us lose information induced by the fuzziness of the decision, a crisp decision is needed in many situations. Being able to issue a "fuzzy" or a "defuzzified" decision associated with an uncertain context is definitely an advantage. Therefore, we can say that the defuzzified p-value can be a relevant indicator of the fuzziness of the null hypothesis, and with this measure we can make a convenient decision. We finally highlight that, since the fuzzy p-value is related to the spread of the null hypothesis, the question of the modelling of the fuzziness becomes an important point to consider.
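As a cross-check of the classical numbers above, a short sketch (our own; it assumes the crisp sample mean 2.4 of the data in Table 1 and uses scipy's Student-t survival function) recovers the reported test statistic and p-value:

    import math
    from scipy.stats import t

    n, sigma = 10, 1.075
    x_bar, mu0 = 2.4, 2.3          # crisp sample mean of Table 1 and tested value

    t_stat = (x_bar - mu0) / (sigma / math.sqrt(n))
    p_value = t.sf(t_stat, df=n - 1)  # right one-sided test with 9 degrees of freedom

    print(round(t_stat, 4), round(p_value, 3))  # 0.2942 0.388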

6 Application on Real Data

In this last section, we show an application of our fuzzy approach on real data. We use a survey entitled "Finanzplatz: Umfrage 2010" of the financial place of Zurich in Switzerland in 2010. This survey was made by the Office of Economy of the canton of Zurich in order to understand the actual and foreseeable situations of the canton's firms from different perspectives. It is composed of 234 observations, i.e. firms, answering 21 categorical (linguistic) variables, each with 5 possible answers going from bad (1) to good (5). The variables concern the following subjects:
– the present state of business,
– the demand for the services or products,
– the gross profit,
– the employment,
– the use of technical and personal capacities.

Our purpose is to apply our testing approach to this survey and thereby obtain information regarding the state of the businesses in Zurich. We remark that having such variables induces fuzziness in the answers, and thus fuzzy logic can be of good use in such cases.


Table 2 The possible linguistic terms and their corresponding fuzzy numbers—Application in Sect. 6

Modality | Linguistic | Triangular fuzzy number
X1 | bad | X̃₁ = (0, 1, 2.5)
X2 | between bad and fair | X̃₂ = (1, 2, 3)
X3 | fair | X̃₃ = (2.5, 3, 3.5)
X4 | between fair and excellent | X̃₄ = (3, 4, 5)
X5 | excellent | X̃₅ = (3.5, 5, 6)
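Encoded in Python, the scale of Table 2 is simply a lookup from a linguistic answer to its triangular fuzzy number (a sketch of our own; the variable name is illustrative):

    # linguistic answers of the survey mapped to triangular fuzzy numbers (a, b, c)
    LINGUISTIC_SCALE = {
        "bad": (0.0, 1.0, 2.5),
        "between bad and fair": (1.0, 2.0, 3.0),
        "fair": (2.5, 3.0, 3.5),
        "between fair and excellent": (3.0, 4.0, 5.0),
        "excellent": (3.5, 5.0, 6.0),
    }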

[Figure: membership functions of the corresponding fuzzy numbers X̃₁–X̃₅ of the treated example, plotted over x ∈ [0, 6].]

For this reason, we model the possible answers (also called linguistics) of the variables by fuzzy numbers. For instance, triangular fuzzy numbers are chosen for this example, as seen in Table 2.
The first objective of this study is to know whether the situation of the firms is close to fair or approximately less than fair. In this work we test some conjectures represented by three of the variables: the present state of business, the gross profit compared to the last 12 months, and the employment compared to the last 12 months. We perform tests around the mean as described in Sect. 4, using both the classical and fuzzy approaches.
For the first test, we would like to test whether the average of the present state of business is approximately 3 (fair) or approximately smaller than 3, at the significance level δ = 0.05. This left one-sided test can be written as follows:

H̃₀ : μ is approximately 3 vs. H̃₁ : μ is approximately smaller than 3.

Since we suppose that not only the data but also the hypotheses are fuzzy, we model the hypotheses by triangular fuzzy numbers and get:

H̃₀ᵀ = (2.9, 3, 3.1) vs. H̃₁ᴼᴸ = (3, 5, 5).


Table 3 The results of the crisp and fuzzy tests, and the membership function of the fuzzy p-value corresponding to the variable "The present state of business"—Application in Sect. 6

The state of business | The p-value | The decision
Crisp left-sided test, H₀ : μ = 3 vs. H₁ : μ < 3 | 1 | H₀ is totally not rejected
Fuzzy left-sided test, H̃₀ : μ is approximately 3 vs. H̃₁ : μ is approximately smaller than 3 | 0.8581 | H₀ is not rejected with a degree of conviction of 0.8581, or 85.81%

[Figure in Table 3: membership functions of the fuzzy p-value and the significance level.]

Then, we fuzzify our data set and get the corresponding fuzzy random sample. We assume that this sample is normally distributed with a mean μ. We calculate the fuzzy mean of this sample as seen in Eq. (8) and get the following tuple: X̃ = (2.797, 3.765, 4.596). We then apply the testing procedure previously described, with both the classical (crisp) and the fuzzy approach, the latter given by the defuzzified p-value. The p-value and the decision of the classical approach, the membership function of the fuzzy p-value, and the decision related to the defuzzified p-value are presented in Table 3. It shows that with both approaches we tend not to reject the null hypothesis H₀ at the significance level δ = 0.05. We note that in this case we define a degree of conviction to further interpret the resulting fuzzy decision. Thus, we do not reject the statement that the present state of business in Zurich is close to fair.
For the second variable of interest, i.e. "the gross profit compared to the last 12 months", we would like to test whether the gross profit is approximately 5 (excellent) or not, at the significance level δ = 0.05. The hypotheses of our two-sided test are:

H̃₀ : μ is close to 5 vs. H̃₁ : μ is away from 5.


Table 4 The results of the crisp and fuzzy tests, and the membership function of the fuzzy p-value corresponding to the variable “The gross profit compared to the last 12 months”—Application in Sect. 6

The gross profit compared to the last 12 months | The p-value | The decision
Crisp two-sided test, H₀ : μ = 5 vs. H₁ : μ ≠ 5 | 0 | H₀ is totally rejected
Fuzzy two-sided test, H̃₀ : μ is close to 5 vs. H̃₁ : μ is away from 5 | 4.7124e−38 | H₀ is strongly rejected

[Figure in Table 4: membership functions of the fuzzy p-value and the significance level.]

The fuzzy hypotheses are written in the following manner:

H̃₀ᵀ = (4.8, 5, 5.2) vs. H̃₁ᵀ = 1 − H̃₀ᵀ.

Similarly to the previous case, we calculate the mean of the fuzzy sample and get: Ỹ = (2.367, 3.205, 4.032). The results corresponding to this test are shown in Table 4. We can see that both the classical and the fuzzy test tend to strongly reject the null hypothesis H₀, with a very small fuzzy p-value and a null crisp one, at the confidence level 1 − δ. Consequently, the hypothesis that the mean of the gross profit compared to the last 12 months is close to 5 (excellent) is rejected, and we could not reject the alternative hypothesis that this mean is away from it.
Our third test, related to the variable "the employment compared to the last 12 months", is a left one-sided one. The hypotheses are the following:

H̃₀ : μ is approximately 5 vs. H̃₁ : μ is approximately smaller than 5.

The fuzzy hypotheses are written in the following manner:


Table 5 The results of the crisp and fuzzy tests, and the membership function of the fuzzy p-value corresponding to the variable "The employment compared to the last 12 months"—Application in Sect. 6

The employment compared to the last 12 months | The p-value | The decision
Crisp left-sided test, H₀ : μ = 5 vs. H₁ : μ < 5 | 5.632e−91 | H₀ is totally rejected
Fuzzy left-sided test, H̃₀ : μ is approximately 5 vs. H̃₁ : μ is approximately smaller than 5 | 5.8168e−55 | H₀ is strongly rejected

1.0

Membership functions of the fuzzy p−value and the significance level

0.0

0.2

0.4

μ

0.6

0.8

The fuzzy p−value The significance level

0.0

0.2

0.4

0.6

0.8

1.0

p

H̃0^T = (4.8, 5, 5.2) vs. H̃1^OL = (5, 7, 7). In this case, the fuzzy sample mean is Z̃ = (2.491, 3.265, 4.011). The classical and the defuzzified p-values and their corresponding decisions are given in Table 5. This table shows that the null hypothesis tends to be strongly rejected at the significance level 0.05. This decision is interpreted by asserting that the employment compared to the last 12 months is not approximately 5. Hence, we cannot reject that the employment situation is approximately smaller than 5 (less than excellent).
Discussion
The results associated with the three tests show the similarity of the decisions between the classical and the fuzzy approaches. However, we recall that the interpretation of the classical p-value differs from that of the fuzzy approach, which is seen as a degree of conviction related to the hypotheses; this gives a broad idea of the applicability of both. We can clearly see that the fuzzy approach is a sort of


generalization of the classical one. Finally, we highlight that the shape and spread of the fuzzy hypotheses and of the fuzzy data affect the modelling of uncertainty in such situations.
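To make the data manipulation above concrete, the following minimal Python sketch represents triangular fuzzy numbers as (left, peak, right) tuples and computes a componentwise sample mean. This is only an illustration under the assumption that the fuzzy sample mean of Eq. (8) reduces to a componentwise arithmetic mean for triangular fuzzy numbers, which is consistent with the shape of the tuples X̃, Ỹ, and Z̃ reported above; the class and function names are our own.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TriangularFuzzyNumber:
    """A triangular fuzzy number X = (left, peak, right)."""
    left: float
    peak: float
    right: float

    def membership(self, x: float) -> float:
        """Piecewise-linear membership function of the triangle."""
        if self.left < x <= self.peak:
            return (x - self.left) / (self.peak - self.left)
        if self.peak < x < self.right:
            return (self.right - x) / (self.right - self.peak)
        return 1.0 if x == self.peak else 0.0

def fuzzy_sample_mean(sample):
    """Componentwise mean of triangular fuzzy numbers (assumed form of Eq. (8))."""
    n = len(sample)
    return TriangularFuzzyNumber(
        sum(f.left for f in sample) / n,
        sum(f.peak for f in sample) / n,
        sum(f.right for f in sample) / n,
    )

# e.g. three fuzzified answers rated around 3 to 4:
print(fuzzy_sample_mean([TriangularFuzzyNumber(2, 3, 4),
                         TriangularFuzzyNumber(2.5, 3.5, 4.5),
                         TriangularFuzzyNumber(3, 4, 5)]))
```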

7 Conclusion Based on previous contributions, we presented in this work a hypothesis testing approach with its corresponding decision rule, where both the data and the hypotheses are considered fuzzy. In particular, we showed the related fuzzy p-value with its α-cuts. One objective of this paper was to give the procedure for defuzzifying this fuzzy p-value by the so-called signed distance. We illustrated the approach with a detailed numerical example of a right one-sided test. In addition, we performed the same test with the same hypothetical sample at the same significance level, but with the classical testing approach, in order to compare the decision made by the fuzzy approach with the one obtained by the classical approach. We found that the same decision is made with both approaches, and we clearly saw that the fuzzy testing procedure can be perceived as a generalization of the classical one. Another objective of this paper was to present an application of this approach on real data coming from a survey entitled "Finanzplatz: Umfrage 2010". We applied the testing procedure for the mean on a set of carefully chosen variables, providing the same tests with the fuzzy and the classical approaches, as well as the fuzzy p-value and its defuzzified counterpart, so that the decisions can be interpreted and information about the hypotheses can be extracted. To sum up, while the defuzzification operation reduces the amount of information carried by a fuzzy number (the fuzzy p-value in our case), a defuzzified value can be useful in the interpretation of the decision. Our approach is advantageous in that it takes into consideration the fuzziness of both the data and the hypotheses; proposing such measures and interpreting the related decisions therefore deserves attention in decision making. In further research, we will focus on testing hypotheses about other statistical measures, such as the variance.


Foundations of a DPLL-Based Solver for Fuzzy Answer Set Programs Ivor Uhliarik

Abstract Recent years have witnessed the effort to extend answer set programming (ASP) with properties of fuzzy logic. The result of this combination is fuzzy answer set programming (FASP), a powerful framework for knowledge representation and non-monotonic reasoning with graded levels of truth. The various results in solving FASP make use of transformations into fuzzy satisfiability (SAT) problems, optimization programs, satisfiability modulo theories (SMT), or classical ASP, each of which comes with limitations or scaling problems. Moreover, most of the research revolves around Gödel and Łukasiewicz semantics. The first of these approaches, the transformation into fuzzy SAT, is elegant in its attempt to generalize well-known methods of classical ASP to the fuzzy case. In our work we seek to extend this approach under the product semantics, utilizing the fuzzy generalization of the DPLL algorithm. As such, we design the inner works of a DPLL-based fuzzy SAT solver for propositional product logic, which should provide foundations for the technical implementation of the solver. Keywords Fuzzy ASP · Fuzzy logic · Answer set programming · Product logic · Fuzzy SAT · DPLL

1 Introduction Answer set programming (ASP) is a well-known and popular logic programming paradigm based on the stable model semantics [11]. The theory behind ASP [21] has allowed a range of effective solvers to become well-established, such as the Potassco suite [10], smodels [26], or DLV [20]. ASP can be used as a declarative programming framework for modeling and solving combinatorial search problems. If we take into account the notion of graded truth, the result is a continuous optimization system capable of preserving the intuitive way of representing problems in classical ASP.

I. Uhliarik, Department of Applied Informatics, Comenius University, Mlynská dolina, 842 48 Bratislava, Slovak Republic. E-mail: [email protected]


Fuzzy answer set programming (FASP) [29] was introduced as the combination of ASP and fuzzy logic, where atoms may be assigned graded levels of truth. The paradigm is subject to active research, with the most recent solvers and results introduced in [24], [2], and [1]. A real-world application focused on modeling biological networks was shown in [25]. Although there have been many proposals of FASP solvers, the design and implementation of the available tools are still far from reaching the maturity of classical ASP solvers.
A notable approach to solving FASP involves the reduction of programs to fuzzy SAT instances [19] using fuzzy extensions of Clark's completion [8] and loop formulas [22]. However, the definition of loop formulas in FASP relies on the properties of Łukasiewicz logic and, to our knowledge, no implementation of this approach is available. Another approach [5] proposes the reduction of FASP programs to bilevel linear programming problems. The pitfall of bilevel (and mixed integer) programs is the introduction of many auxiliary variables that do not scale well to real-world problems. This approach is also limited to Łukasiewicz logic, but an implementation is available in later work [3]. Other proposals for solving FASP focus on searching finite many-valued domains, such as the approach leveraging DLVHEX [28] or the reduction to classical ASP [23], refined in [24]; only the latter two support disjunctive FASP programs and have available implementations in the form of the tool ffasp. An important contribution was the proposal and implementation of a method for finding approximations of fuzzy answer sets [3], resulting in the tool fasp, a prototype for solving propositional FASP programs. It operates only on normal programs, yields exact results in the case of positive and stratified programs, and mitigates the problem of introducing many auxiliary variables by constraining the optimization space of mixed integer or bilevel programs. To our knowledge, this is the first implemented approach in which any of the Łukasiewicz, Gödel, and product t-norms may be used (although not in combination). The authors of fasp have also tackled the problem by translating FASP into satisfiability modulo theories [2], further constraining the output stable models using the minimum undefinedness semantics [1].
Problems in continuous domains are sometimes difficult to define in pure propositional Gödel, Łukasiewicz, or product fuzzy logics. As described above, most of these approaches consider Łukasiewicz logic only, because of its properties that allow many concepts to be easily fuzzified. While some results also cover the Gödel and product t-norms, there is no specific insight into FASP under the product semantics. We seek to fill this gap by adopting the approach of reducing a FASP program to a fuzzy SAT problem [19]. The reason for concentrating on product t-norms is the ability to embed Łukasiewicz and (extended) Gödel logics in (extended) product logic [4]. Given such a product logic framework, we can represent and reason upon problems involving any of the three semantics in a uniform way.


2 Fuzzy Answer Set Programming In our previous work [27] we have described the syntax and semantics of fuzzy answer set programs focused on product logic, derived from the definitions of Janssen et al. [18] and Guller [13]. Below we revisit the important notions that will be used throughout the paper.

2.1 Syntax Given the lattice L = ⟨[0, 1], ≤⟩, let PropAtom be the set of propositional atoms of product logic over L. The set of truth constants is C̄ = {c̄ | c ∈ C}, where {0, 1} ⊆ C ⊆ [0, 1] and C is a countable set; 0̄ and 1̄ denote false and true in product logic, respectively. A classical literal is either a truth constant c̄ ∈ C̄, an atom a, or a classical negation literal ¬a. An extended literal is either a classical literal a or a default negation literal not a. Fuzzy ASP rules are expressions of the form

    r ≡ a ← f(b1, ..., bn; c1, ..., cm)

where a, bi, cj ∈ PropAtom ∪ {0̄, 1̄} for all 1 ≤ i ≤ n, 1 ≤ j ≤ m, and f is a function symbol representing a mapping L^(n+m) → L that is increasing in its first n and decreasing in its last m arguments. The atom a is a classical literal; the atoms b1, ..., bn are classical literals, and the atoms c1, ..., cm are default negation literals. The head and body of the rule r, denoted rh and rb, are the left-hand and right-hand sides of the rule, respectively. The Herbrand base Br of the rule r is the set of atoms present in r. A rule of the aforementioned form is called
– a constraint iff a ∈ C̄ (i.e., the head is a truth constant),
– a fact iff all bi, cj ∈ C̄ for 1 ≤ i ≤ n, 1 ≤ j ≤ m (i.e., the body consists of truth constants),
– positive iff m = 0 or cj ∈ C̄ for 1 ≤ j ≤ m (i.e., the negative part of the rule is empty or consists of truth constants),
– simple if it is positive and not a constraint.
A FASP program is a finite set of FASP rules. Let P be a FASP program. Then the Herbrand base B_P is B_P = ⋃{Br | r ∈ P}. A program is called
– constraint-free if it does not contain constraints,
– positive if all rules occurring in it are positive,
– simple if all rules occurring in it are simple.
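To make these notions concrete, the following Python sketch encodes classical and default negation literals, FASP rules, and the rule classes defined above. It is our own simplified illustration; in particular, the rule body is flattened to two literal tuples rather than a general body expression f.

```python
from dataclasses import dataclass
from typing import Tuple, Union

Atom = str
Constant = float  # a truth constant c in C, with 0.0 <= c <= 1.0

@dataclass(frozen=True)
class Literal:
    base: Union[Atom, Constant]
    classically_negated: bool = False  # the "not-a" sign of classical negation
    default_negated: bool = False      # the "not a" default negation

    def is_constant(self) -> bool:
        return isinstance(self.base, float)

@dataclass(frozen=True)
class Rule:
    head: Literal                       # a classical literal
    pos_body: Tuple[Literal, ...] = ()  # b_1, ..., b_n
    neg_body: Tuple[Literal, ...] = ()  # c_1, ..., c_m

    def is_constraint(self) -> bool:
        return self.head.is_constant()

    def is_fact(self) -> bool:
        return all(l.is_constant() for l in self.pos_body + self.neg_body)

    def is_positive(self) -> bool:
        return all(c.is_constant() for c in self.neg_body)

    def is_simple(self) -> bool:
        return self.is_positive() and not self.is_constraint()

def herbrand_base(program):
    """B_P: the union of the atoms occurring in the rules of P."""
    atoms = set()
    for r in program:
        for l in (r.head,) + r.pos_body + r.neg_body:
            if not l.is_constant():
                atoms.add(l.base)
    return atoms
```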


2.2 Semantics We interpret product logic in the standard way (as in Guller [13]) by the Π-algebra

    Π = ([0, 1], ≤, ∨, ∧, ·, ⇒, ∼, 0, 1)

where ∨ is the supremum and ∧ the infimum operator on [0, 1], · is the algebraic product, and

    a ⇒ b = 1 if a ≤ b, and b/a otherwise;
    ∼a   = 1 if a = 0, and 0 otherwise.

The mapping f in the rule a ← f(b1, ..., bn; c1, ..., cm) constructs a body expression recursively:
– constants c̄ ∈ C̄ and extended literals are body expressions,
– for body expressions α and β, α ⊙ β is also a body expression for ⊙ being any of {∧, &, ∨, →, ↔}, where ∧, ∨, →, ↔ are standard propositional connectives and & is strong conjunction.
We define the interpretation of a FASP program P as a B_P → L mapping I = {a1^l1, ..., an^ln}; I(ai) = li if 1 ≤ i ≤ n, and I(a) = 0 otherwise. It can be extended to constants and expressions as shown below (as in [27]):
– I(c̄) = c if c ∈ C,
– I(not α) = ∼I(α),
– I(α & β) = I(α) · I(β),
– I(α ⊙ β) = I(α) ⊙̃ I(β) for ⊙ ∈ {∧, ∨, →}, where ⊙̃ is the corresponding algebra operator (∧, ∨, ⇒),
– I(α ↔ β) = (I(α) ⇒ I(β)) · (I(β) ⇒ I(α)),
for expressions α and β. Similarly to [13], Π is a complete linearly ordered lattice algebra (we do not explicitly refer to the properties and neutral elements of ∨, ∧); the residuum operator ⇒ of · satisfies the condition of residuation. The semantics of minimal and stable models were described in our previous work [27] and are quoted below for reference:

A rule (r: a ← α) ∈ P is satisfied by an interpretation I of P iff I(a) ≥ I(α) (this is because we can regard rules as residual implicators). An interpretation I of program P is a model of P iff every rule r ∈ P is satisfied by I. For interpretations I and J of P we define I ⊆ J iff ∀a ∈ B_P: I(a) ≤ J(a), and I ⊂ J iff


(∀a ∈ B_P: I(a) ≤ J(a)) ∧ (I ≠ J). Given this ordering on interpretations, we say a model I of P is minimal iff no model J exists such that J ⊂ I. Let P be a positive FASP program. An interpretation A of P is called the answer set of P iff A is the minimal model of P. For non-positive programs we use the fuzzy generalization of the GL reduct. For a non-positive program P, the reduct of a rule (r: a ← f(b1, ..., bn; c1, ..., cm)) ∈ P w.r.t. an interpretation I of P is the positive rule r^I defined as

    r^I = r: a ← f(b1, ..., bn; I(c1), ..., I(cm)),

i.e. the occurrences of default negation literals not a are replaced by the constants I(not a) ∈ L. The reduct of P is the set of rules P^I defined as P^I = {r^I | r ∈ P}. An interpretation A of a program P is an answer set of P iff A is the answer set of P^A. [27]

Finally, a simple FASP program always has exactly one fuzzy answer set, whereas a positive FASP program may have any number of fuzzy answer sets.
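Before moving on, the product semantics above can be prototyped in a few lines. The following sketch is our own illustration (tuple-encoded body expressions, interpretations as dicts), not code from the cited works:

```python
def residuum(a: float, b: float) -> float:
    """Residuum of the product t-norm: a => b."""
    return 1.0 if a <= b else b / a

def g_neg(a: float) -> float:
    """Goedel negation ~a."""
    return 1.0 if a == 0.0 else 0.0

def ev(expr, I: dict) -> float:
    """Evaluate a body expression: float constant, atom name,
    ('not', e), or (op, e1, e2) with op in {'&', 'and', 'or', '->', '<->'}."""
    if isinstance(expr, float):
        return expr
    if isinstance(expr, str):
        return I.get(expr, 0.0)
    if expr[0] == 'not':
        return g_neg(ev(expr[1], I))
    op, a, b = expr[0], ev(expr[1], I), ev(expr[2], I)
    if op == '&':   return a * b          # strong conjunction
    if op == 'and': return min(a, b)
    if op == 'or':  return max(a, b)
    if op == '->':  return residuum(a, b)
    if op == '<->': return residuum(a, b) * residuum(b, a)
    raise ValueError(f"unknown connective {op!r}")

def satisfies(I: dict, head: str, body) -> bool:
    """A rule a <- alpha is satisfied iff I(a) >= I(alpha)."""
    return I.get(head, 0.0) >= ev(body, I)

# Example: under I(a) = 0.6, I(b) = 0.8, the rule a <- b & b is not satisfied,
# since 0.6 < 0.8 * 0.8 = 0.64.
print(satisfies({'a': 0.6, 'b': 0.8}, 'a', ('&', 'b', 'b')))  # False
```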

3 Solving FASP In the first section we have mentioned existing related work on solving FASP programs. Our previous work [27] studies these approaches and provides insight into the approach of Janssen et al. [19] based on the reduction of FASP programs to fuzzy SAT theories. We have identified the problems that would need to be solved in order to successfully extend this reduction to the product semantics. In short, the major limitation lies in the generalization of loop formulas and the ASSAT procedure [22] to the fuzzy case, which relies on properties of Łukasiewicz (not Gödel or product) logic. Our motivation to focus on product logic in particular is the embeddability of Łukasiewicz and (extended) Gödel logics in extended product logic [4].
The full system for solving FASP programs will consist of the following pipeline:
1. the reduction of the input FASP program to a propositional product fuzzy logic theory,
2. the translation of the theory into normal form,
3. finding a valuation of the atoms (if one exists) corresponding to an answer set of the original FASP program.
While the problem of the stated reduction under product semantics still remains open, we investigate the methods for solving the fuzzy SAT problem in isolation, as an important module to be later used in the pipeline of solving fuzzy answer set programs.


3.1 Fuzzy SAT The paper [27] reviews several existing fuzzy SAT solver proposals. Most of these are based on numerical optimization methods, such as mixed integer programming (MIP) [16] for Łukasiewicz logic or bounded mixed integer quadratically constrained programming (bMICQP) for product logic; the latter was described in terms of fuzzy description logics by Bobillo and Straccia [6]. The downside of these approaches is the introduction of many auxiliary variables [3], which may negatively affect performance, and the use of these methods as black boxes without the possibility of ad-hoc optimizations suitable for the semantics in use. Similar work uses the state-of-the-art optimization algorithm termed covariance matrix adaptation evolution strategy (CMA-ES) [7], but in general, the stochastic nature of the algorithm is prone to converging to local optima [17].
We choose to focus on a different branch of research in the field, which aims to provide the theory and technical foundations of solving fuzzy SAT in product logic based on the fuzzy generalization of the Davis-Putnam-Logemann-Loveland (DPLL) procedure [13]. The paper proposes (a) the transformation of a propositional product theory into order clausal form, (b) the inference (branching) rules, and (c) a method to find models of theories (valuations of atoms) in open branches. The procedure is proved to be refutation sound and complete for finite order clausal theories. Although the paper lacks the notion of intermediate truth constants (constants in the open interval (0, 1)), which are abundant in most practical applications of FASP, these are introduced in related work [12, 14].

4 DPLL-Based Fuzzy SAT Solver In this section we describe the architecture of a fuzzy SAT solver based on the fuzzy generalization of the DPLL procedure described in Sect. 3.1, using the foundations laid by Guller [13, 15]. We revisit the syntax and semantics of extended propositional product logic, the translation to clausal form, and the DPLL inference procedure. We omit some formal definitions and proofs (these may be found in [13]), but describe how each of the components could be implemented, identify key problems, and propose their solutions.
Remark 1 The syntax and semantics of such a system, as defined in our previous work [27] and in Sect. 2, included the notion of intermediate truth constants. In the following proposal we shall omit them due to the complexity they introduce and consider them for future work.



4.1 Extended Product Logic In the following sections we will use the common concepts and notation of propositional product logic, which we adopt from Guller [13] and our previous work [27]. We denote the set of propositional atoms of the product logic as PropAtom and assume the truth constants 0, 1; 0 denotes the false and 1 the true in the product logic. We introduce the unary connective Δ and the binary connectives ≍ (equality) and ≺ (strict order). The order propositional formulae of the product logic are built up from PropAtom ∪ {0, 1} using the connectives ¬, Δ, ∧, &, ∨, →, ↔, and ≍, ≺ (with the decreasing connective precedence ¬, Δ, &, ≍, ≺, ∧, ∨, →, ↔). OrdPropForm is the set of all such order propositional formulae. Let εi, 1 ≤ i ≤ n, be either an order formula, a set of order formulae, or a set of sets of order formulae, in general. By atoms(ε1, ..., εn) we denote the set of all atoms present in ε1, ..., εn.
We interpret extended product logic (as in Guller [13, 14]) by the standard algebra augmented by the operators ≍, ≺, Δ for the connectives ≍, ≺, Δ, respectively:

    Π = ([0, 1], ≤, ∨, ∧, ·, ⇒, ∼, ≍, ≺, Δ, 0, 1)

where ∨ is the supremum and ∧ the infimum operator on [0, 1], · is the algebraic product, and (with the decreasing operator precedence ∼, Δ, ·, ≍, ≺, ∧, ∨, ⇒):

    x ⇒ y = 1 if x ≤ y, and y/x otherwise;
    ∼x   = 1 if x = 0, and 0 otherwise;
    x ≍ y = 1 if x = y, and 0 otherwise;
    x ≺ y = 1 if x < y, and 0 otherwise;
    Δx   = 1 if x = 1, and 0 otherwise.

Similarly to Sect. 2.2 and [13], Π is a complete linearly ordered lattice algebra; ∨, ∧ are commutative, associative, idempotent, and monotone, with 0, 1 as their respective neutral elements; · is commutative, associative, monotone, with 1 as its neutral element; the residuum operator ⇒ of · satisfies the residuation principle. Gödel negation ∼ satisfies the condition ∼x = x ⇒ 0 for all x ∈ Π, and Δ satisfies the condition Δx = x ≍ 1 for all x ∈ Π. Next, we define the valuation of propositional atoms (similarly to Guller [13]) as the mapping V: PropAtom → [0, 1] such that V(0) = 0 and V(1) = 1.



Let φ ∈ OrdPropForm and V be a valuation. The truth value ‖φ‖V ∈ [0, 1] of φ in V is defined recursively on the structure of φ as follows (analogously to [13]):

    ‖φ‖V = V(φ)                                       if φ ∈ PropAtom;
    ‖φ‖V = ∼‖φ1‖V                                     if φ = ¬φ1;
    ‖φ‖V = Δ‖φ1‖V                                     if φ = Δφ1;
    ‖φ‖V = ‖φ1‖V ⊙ ‖φ2‖V                              if φ = φ1 ⊙ φ2, ⊙ ∈ {∧, &, ∨, →, ≍, ≺};
    ‖φ‖V = (‖φ1‖V ⇒ ‖φ2‖V) · (‖φ2‖V ⇒ ‖φ1‖V)          if φ = φ1 ↔ φ2.

We call a set of order formulae an order theory. Let φ, φ′ ∈ OrdPropForm and T ⊆ OrdPropForm. V ⊨ φ (φ is true in V) iff ‖φ‖V = 1. V ⊨ T (V is a model of T) iff V ⊨ φ for all φ ∈ T. φ is a tautology iff V ⊨ φ for every valuation V. φ ≡ φ′ (φ is equivalent to φ′) iff ‖φ‖V = ‖φ′‖V for every valuation V.
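The algebra operators and the recursive definition of ‖φ‖V translate directly into code. The following sketch uses our own tuple encoding of order formulae; it illustrates the semantics and is not Guller's implementation:

```python
def truth_value(phi, V: dict) -> float:
    """||phi||_V for formulas encoded as: atom name, float constant,
    ('neg', f), ('delta', f), or (op, f1, f2) with op in
    {'and', '&', 'or', '->', '<->', 'eq', 'lt'}."""
    res = lambda x, y: 1.0 if x <= y else y / x   # residuum x => y
    if isinstance(phi, float):
        return phi
    if isinstance(phi, str):
        return V[phi]
    if phi[0] == 'neg':                            # Goedel negation
        return 1.0 if truth_value(phi[1], V) == 0.0 else 0.0
    if phi[0] == 'delta':                          # Delta operator
        return 1.0 if truth_value(phi[1], V) == 1.0 else 0.0
    op, x, y = phi[0], truth_value(phi[1], V), truth_value(phi[2], V)
    if op == 'and': return min(x, y)
    if op == '&':   return x * y
    if op == 'or':  return max(x, y)
    if op == '->':  return res(x, y)
    if op == '<->': return res(x, y) * res(y, x)
    if op == 'eq':  return 1.0 if x == y else 0.0  # equality connective
    if op == 'lt':  return 1.0 if x < y else 0.0   # strict order connective
    raise ValueError(op)

# A formula is true in V iff its truth value is 1, e.g. a & b < a:
V = {'a': 0.5, 'b': 0.5}
print(truth_value(('lt', ('&', 'a', 'b'), 'a'), V))  # 1.0, since 0.25 < 0.5
```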

4.2 Translation to Clausal Form In classical logic it is common for inference algorithms to operate over theories in (conjunctive or disjunctive) normal form. Our case is analogous: the algorithm performing the DPLL procedure in propositional product logic accepts an order clausal theory as input. We briefly cover the constitution of such theories, which are defined more formally by Guller [13, Sect. 3].
Given the set of propositional atoms, a conjunction Cn of powers of atoms is a non-empty finite set written in the form a0^m0 & · · · & an^mn. A conjunction {p} is called unit and denoted as p (without braces). An order literal l is an expression of the form ε1 ⋈ ε2, where each εi is a truth constant (0 or 1) or a conjunction of powers of atoms, and ⋈ is either ≍ or ≺. A pure order literal is a literal which does not contain truth constants. An order clause is a finite set of order literals {l0, ..., ln} ≠ ∅ written in the form l0 ∨ · · · ∨ ln. A pure order clause is a finite set of pure order literals; the empty order clause ∅ is denoted as □; a unit order clause is of the form {l} and is denoted as l (without braces). An {order, pure order, unit order} clausal theory is a set of {order, pure order, unit order} clauses, respectively.
The input order theory, in the form of a single formula, is represented as a binary tree where the root is the operation of lowest precedence. The algorithm performing the translation traverses the tree and applies the interpolation rules [13] in each step. These rules substitute subformulae with new auxiliary atoms. The proofs of the correctness of the translation and of the time and space complexities were given by Guller [13, Sect. 3]. The implementation based on the interpolation rules is straightforward and depicted in Algorithm 1.


Input: A non-empty order theory T
Result: Order clausal form (set of clauses) S of theory T
    S ← {a0 ≍ 1} ∪ T  (a0 is a newly introduced atom)
    repeat
        S ← apply interpolation rules over new atoms in S where possible
    until no new atoms are introduced
    return S
Algorithm 1. The translation of an order theory into an order clausal theory.
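The interpolation rules themselves are beyond the scope of this summary, but the renaming mechanics that Algorithm 1 relies on can be sketched as follows. This is a generic structure-preserving renaming in the spirit of the translation (our own encoding; the actual rules of [13] additionally turn each definition into order clauses):

```python
import itertools

def rename_subformulas(expr):
    """Replace every compound subformula of a tuple-encoded formula with a
    fresh auxiliary atom, recording a definition (aux, subformula) for each.
    Atoms (str) and truth constants (float) are kept as-is."""
    counter = itertools.count()
    defs = []

    def walk(e):
        if isinstance(e, (str, float)):
            return e
        renamed = tuple([e[0]] + [walk(sub) for sub in e[1:]])
        aux = f"_t{next(counter)}"     # fresh auxiliary atom
        defs.append((aux, renamed))    # definition linking aux to the subformula
        return aux

    return walk(expr), defs

# Example: the formula (a & b) -> c yields two definitions.
top, definitions = rename_subformulas(('->', ('&', 'a', 'b'), 'c'))
print(top, definitions)   # _t1 [('_t0', ('&', 'a', 'b')), ('_t1', ('->', '_t0', 'c'))]
```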

4.3 DPLL Rules A variant of the DPLL procedure operating over finite order clausal theories was introduced by Guller [13, 15]; in those papers we can find proofs of its refutational soundness and completeness. Below we describe some of the notions occurring in Guller's work that will be used throughout the text; we then list the rules and briefly describe their intuition.
C is a guard iff either C = a ≍ 0, C = 0 ≺ a, C = a ≺ 1, or C = a ≍ 1. Let S be an order clausal theory. We denote guards(a) = {a ≍ 0, 0 ≺ a, a ≺ 1, a ≍ 1} and guards(S) = {C | C ∈ S is a guard}. We will use the auxiliary function simplify [13, 15], which replaces every occurrence of an atom in an expression by a truth constant and returns a simplified expression according to the laws holding in Π.
Definition 1 (Auxiliary function simplify [15])
    simplify(0, a, υ) = 0;
    simplify(1, a, υ) = 1;
    simplify(Cn, a, 0) = 0 if a ∈ atoms(Cn), and Cn otherwise;
    simplify(Cn, a, 1) = 1 if Cn = a^(n*) for some n*;
                         Cn − a^(n*) if a^(n*) ∈ Cn and Cn ≠ a^(n*);
                         Cn otherwise;
    simplify(l, a, υ) = simplify(ε1, a, υ) ⋈ simplify(ε2, a, υ) if l = ε1 ⋈ ε2;
    simplify(C, a, υ) = {simplify(l, a, υ) | l ∈ C}.
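For illustration, the function simplify of Definition 1 can be implemented over a small clause encoding of our own: conjunctions as tuples of (atom, exponent) pairs (or the constants 0, 1), literals as (left, relation, right) triples, and clauses as frozensets:

```python
def simplify_conj(cn, a, v):
    """simplify over a conjunction of powers of atoms (Definition 1)."""
    if cn in (0, 1):
        return cn
    atoms = [b for b, _ in cn]
    if v == 0:
        return 0 if a in atoms else cn
    if a in atoms:                      # v == 1: drop the power of a
        rest = tuple((b, m) for b, m in cn if b != a)
        return rest if rest else 1      # nothing left: the conjunction is 1
    return cn

def simplify_literal(lit, a, v):
    """lit = (eps1, rel, eps2) with rel in {'eq', 'lt'}."""
    e1, rel, e2 = lit
    return (simplify_conj(e1, a, v), rel, simplify_conj(e2, a, v))

def simplify_clause(clause, a, v):
    """A clause is a frozenset of literals."""
    return frozenset(simplify_literal(l, a, v) for l in clause)
```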

Another auxiliary function, the product ⊙ [13, 15], returns the product of two expressions (powers of atoms, their conjunctions, or literals). For two conjunctions of powers of atoms Cn1, Cn2, the function is defined as follows:
Definition 2 (Auxiliary function ⊙ over conjunctions of powers of atoms [15])


    0 ⊙ ε = ε ⊙ 0 = 0;  1 ⊙ ε = ε ⊙ 1 = ε;
    Cn1 ⊙ Cn2 = {a^(m+n) | a^m ∈ Cn1, a^n ∈ Cn2} ∪ {a^n | a^n ∈ Cn1, a ∉ atoms(Cn2)} ∪ {a^n | a^n ∈ Cn2, a ∉ atoms(Cn1)}.

For {0, 1} and literals l1 and l2, the function is extended component-wise [15]:
Definition 3 (Auxiliary function ⊙ over literals [15])
    0 ⊙ ε = ε ⊙ 0 = 0;  1 ⊙ ε = ε ⊙ 1 = ε;
    l1 ⊙ l2 = (ε1 ⊙ ε2) ⋈ (υ1 ⊙ υ2) if li = εi ⋈i υi, where ⋈ is ≍ if ⋈1 = ⋈2 = ≍, and ≺ otherwise.

The DPLL rules follow.

(1) Unit contradiction rule [15]: from S derive S ∪ {□}, where S is unit; there exist 0 ≺ a0, ..., 0 ≺ am, a0 ≺ 1, ..., am ≺ 1 ∈ guards(S) and l0, ..., ln ∈ S such that each li is a pure order literal and atoms(l0, ..., ln) = {a0, ..., am}; and there exist αi* ≥ 1, i = 0, ..., n, J* ⊆ {j | j ≤ m}, βj* ≥ 1, j ∈ J*, such that

    l0^(α0*) ⊙ · · · ⊙ ln^(αn*) ⊙ ⊙_{j ∈ J*} (aj ≺ 1)^(βj*)

is a contradiction. Rule (1) derives □ (the unit S is unsatisfiable) iff we can find a product of powers of the input pure order literals and guards aj ≺ 1, j ∈ J*, that leads to a contradiction of the form ε ≺ ε.

(2) Trichotomy branching rule [15]: from S branch into S ∪ {a ≍ 0} | S ∪ {0 ≺ a, a ≺ 1} | S ∪ {a ≍ 1}, where a ∈ atoms(S). The branching rule (2) splits the derivation into three subcases of the trichotomy (a ≍ 0) ∨ (0 ≺ a ∧ a ≺ 1) ∨ (a ≍ 1). In the first, second, and third branch, we assume that a ≍ 0, 0 ≺ a ∧ a ≺ 1, or a ≍ 1 holds, respectively.


(3) Pure trichotomy branching rule [15]: from S branch into (S − {φ}) ∪ {l1} | (S − {φ}) ∪ {C} ∪ {l2} | (S − {φ}) ∪ {C} ∪ {l3}, where φ = (l1 ∨ C) ∈ S, C ≠ □, and l1 ∨ l2 ∨ l3 is a pure trichotomy. Similarly to the previous rule, rule (3) is a branching rule, splitting the derivation into the three subcases of the trichotomy of the pure literals l1, l2, and l3.

(4) Contradiction rule [15]: from S derive (S − {l ∨ C}) ∪ {C}, where l ∨ C ∈ S and l is a contradiction. The order literal l can be removed from the input order clause l ∨ C if it is a contradiction.

(5) Tautology rule [15]: from S derive S − {l ∨ C}, where l ∨ C ∈ S and l is a tautology. The input order clause l ∨ C can be removed from S if it is a tautology.

(6) 0-simplification rule [15]: from S derive (S − {C}) ∪ {simplify(C, a, 0)}, where a ≍ 0 ∈ guards(S), C ∈ S, a ∈ atoms(C), a ≍ 0 ≠ C. If a ≍ 0 ∈ guards(S) and the input order clause C contains a, then C can be simplified.

(7) 1-simplification rule [15]: from S derive (S − {C}) ∪ {simplify(C, a, 1)}, where a ≍ 1 ∈ guards(S), C ∈ S, a ∈ atoms(C), a ≍ 1 ≠ C. If a ≍ 1 ∈ guards(S) and the input order clause C contains a, then C can be simplified.


(8) 0-contradiction rule [15]: from S derive (S − {a0^α0 & · · · & an^αn ≍ 0 ∨ C}) ∪ {C}, where 0 ≺ a0, ..., 0 ≺ an ∈ guards(S) and a0^α0 & · · · & an^αn ≍ 0 ∨ C ∈ S − guards(S). If 0 ≺ a0, ..., 0 ≺ an ∈ guards(S), then a0^α0 & · · · & an^αn ≍ 0 is contradictory; it can be removed from the input order clause and C can be derived. The 1-contradiction rule is analogous.

(9) 1-contradiction rule [15]: from S derive (S − {a0^α0 & · · · & an^αn ≍ 1 ∨ C}) ∪ {C}, where ai ≺ 1 ∈ guards(S) for some i ≤ n, and a0^α0 & · · · & an^αn ≍ 1 ∨ C ∈ S − guards(S).

(10) 0-consequence rule [15]: from S derive S − {0 ≺ a0^α0 & · · · & an^αn ∨ C}, where 0 ≺ a0, ..., 0 ≺ an ∈ guards(S) and 0 ≺ a0^α0 & · · · & an^αn ∨ C ∈ S − guards(S). If 0 ≺ a0, ..., 0 ≺ an ∈ guards(S), then it must hold that 0 ≺ a0^α0 & · · · & an^αn. Therefore, the input order clause 0 ≺ a0^α0 & · · · & an^αn ∨ C is a consequence of the guard(s) and may be removed.

(11) 1-consequence rule [15]: from S derive S − {a0^α0 & · · · & an^αn ≺ 1 ∨ C}, where ai ≺ 1 ∈ guards(S) for some i ≤ n, and a0^α0 & · · · & an^αn ≺ 1 ∨ C ∈ S − guards(S). Analogous to rule (10).

(12) 0-annihilation rule [15]: from S derive S − {a ≍ 0}, where a ≍ 0 ∈ guards(S) and a ∉ atoms(S − {a ≍ 0}).


(13) 1-annihilation rule [15]: from S derive S − {a ≍ 1}, where a ≍ 1 ∈ guards(S) and a ∉ atoms(S − {a ≍ 1}). If an atom a different from 0, 1 occurs in S only in the guard a ≍ 0 or a ≍ 1, then this guard may be removed from S (rules (12) and (13)).
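Building on the simplify sketch above, the deterministic guard-driven rules can be prototyped as set-to-set transformations. The encoding and names are ours; this only illustrates rules (6), (7), (12), and (13):

```python
def mentions(clause, a):
    """True iff atom a occurs in some conjunction of some literal of the clause."""
    return any(isinstance(e, tuple) and a in [b for b, _ in e]
               for lit in clause for e in (lit[0], lit[2]))

def apply_simplification(S, a, v):
    """Rules (6)/(7): given that the equality guard a = v (v in {0, 1}) is in S,
    simplify every other clause of S that mentions atom a."""
    guard_clause = frozenset({(((a, 1),), 'eq', v)})
    return {clause if clause == guard_clause or not mentions(clause, a)
            else simplify_clause(clause, a, v)
            for clause in S}

def apply_annihilation(S, a, v):
    """Rules (12)/(13): remove the guard a = v when a occurs nowhere else in S."""
    guard_clause = frozenset({(((a, 1),), 'eq', v)})
    if guard_clause in S and not any(mentions(c, a) for c in S if c != guard_clause):
        return S - {guard_clause}
    return S
```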

4.4 Inference Having described the rules of the DPLL procedure, we now propose how the algorithm could be implemented using the foundations laid by Guller [13, 15]. The procedure consists of constructing the DPLL tree from the (non-empty) input order clausal theory using the rules (1)–(13). If all branches are closed (we derive the empty clause □ in each branch), the tree is closed and the theory is unsatisfiable. Otherwise, if there exists a branch that cannot be closed, the tree is open and there exists a model related to the tree [13, Theorem 4.2]. The pseudocode in Algorithms 2–6 depicts the inference of the DPLL procedure. The process starts with Algorithm 2, which returns a model of the input theory T, or □ in the case of a closed tree. First, if there are any atoms in T with missing guards, we need to obtain them in further branches by calling the Trichotomy(S) function.

Input: A non-empty theory T in order clausal form
Result: Model of T, or the empty order clause □ if the tree is closed
    return Trichotomy(T)
Algorithm 2. The DPLL procedure.

The Trichotomy(S) function in Algorithm 3 uses the trichotomy branching rule (2) for each atom in S which is not fully guarded. An atom a is fully guarded in a theory T iff T contains both 0 ≺ a and a ≺ 1. After the application of the branching rule, the resulting subcases are processed by the Reduce(S) function. If we derive a branch that cannot be recursively closed, we immediately return its corresponding partial model using the auxiliary function Valuation(T). Otherwise, if the recursive construction of the DPLL tree yields only closed branches, we return □. If all atoms are fully guarded, we return the result of applying the reduction to the input theory.

Function Trichotomy(S):
    foreach a ∈ atoms(S) where a is not fully guarded do
        S′ ← split S using rule (2) over a
        foreach s ∈ S′ do
            s ← Reduce(s)
            if s ≠ □ then
                return Valuation(s)
            end
        end
        if all s ∈ S′ are closed then
            return □
        end
    end
    return Reduce(S)
Algorithm 3. The Trichotomy function of the DPLL procedure.

Next, the Reduce(S) function in Algorithm 4 repeatedly applies the rules (4)–(13), in the order they are defined, until all equality guards are eliminated. During each iteration, if □ is derived, the branch is closed and □ is returned. Once the input theory is cleared of all equality guards, we proceed with the PureTrichotomy(S) function if all atoms are fully guarded, or call the Trichotomy(S) function otherwise to obtain the missing guards in deeper levels of the tree.

Function Reduce(S):
    while S contains an equality guard do
        S ← application of rules (4)–(13) consecutively; return □ if any of the rules returns □
    end
    if all atoms in S are fully guarded then
        return PureTrichotomy(S)
    end
    return Trichotomy(S)
Algorithm 4. The Reduce function of the DPLL procedure.


The pure trichotomy branching rule (3) is used in Algorithm 5 within the function PureTrichotomy(S). First, we check whether the input contains equality guards; in such a case we pass the branch to the Reduce(S) function. At this point, the input theory should already be pure and fully guarded w.r.t. every atom. If the theory is also unit, we proceed with the UnitContradiction(S) function. Otherwise, we apply the branching rule (3) and recursively process the resulting branches. There exists a proof [15, Lemma 9] stating that the repeated application of rules (1) and (3) in this way leads either to a finite closed subtree or to an open branch with an associated submodel.

Function PureTrichotomy(S):
    if S contains an equality guard then
        return Reduce(S)
    end
    if S is unit then
        return UnitContradiction(S)
    end
    S′ ← split S using rule (3)
    foreach s ∈ S′ do
        s ← PureTrichotomy(s)
        if s ≠ □ then
            return Valuation(s)
        end
    end
    return □
Algorithm 5. The PureTrichotomy function of the DPLL procedure.

Finally, Algorithm 6 represents the application of the unit contradiction rule (1). If the rule is applicable (i.e., there exists a contradictory product of powers of the literals and guards), the branch is closed. Otherwise, the branch is open (the associated subtheory is satisfiable) and we can find the model using the partial valuation method [13, Table 5].

Function UnitContradiction(S):
    if S is unit and rule (1) is applicable then
        return □
    end
    return Valuation(S)
Algorithm 6. The UnitContradiction function of the DPLL procedure.
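A Python skeleton of the mutual recursion in Algorithms 2–6 could look as follows. The helper functions are placeholders whose bodies would carry the actual logic of Sects. 4.3 and 4.5; all names are our own:

```python
EMPTY = frozenset()   # stands for the empty order clause (closed branch)

# Placeholders for the machinery of Sects. 4.3 and 4.5:
def pick_unguarded_atom(S): ...      # an atom of S lacking 0 < a, a < 1, or None
def split_trichotomy(S, a): ...      # rule (2): the three subcases
def split_pure_trichotomy(S): ...    # rule (3): the three subcases
def has_equality_guard(S): ...
def all_fully_guarded(S): ...
def apply_rules_4_to_13(S): ...
def is_unit(S): ...
def unit_contradiction(S): ...       # rule (1): True iff a contradictory product exists
def valuation(S): ...                # partial model of an open branch [13, Table 5]

def dpll(T):
    """Algorithm 2: returns a model (dict) or EMPTY if the tree is closed."""
    return trichotomy(T)

def trichotomy(S):                   # Algorithm 3
    a = pick_unguarded_atom(S)
    if a is None:
        return reduce_guards(S)
    for s in split_trichotomy(S, a):
        r = reduce_guards(s)
        if r is not EMPTY:
            return r                 # open branch found: partial model
    return EMPTY                     # all three branches closed

def reduce_guards(S):                # Algorithm 4
    while has_equality_guard(S):
        S = apply_rules_4_to_13(S)
        if EMPTY in S:
            return EMPTY
    return pure_trichotomy(S) if all_fully_guarded(S) else trichotomy(S)

def pure_trichotomy(S):              # Algorithm 5
    if has_equality_guard(S):
        return reduce_guards(S)
    if is_unit(S):                   # Algorithm 6
        return EMPTY if unit_contradiction(S) else valuation(S)
    for s in split_pure_trichotomy(S):
        r = pure_trichotomy(s)
        if r is not EMPTY:
            return r
    return EMPTY
```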


4.5 Unit Contradiction In Sect. 4.3 we have defined the unit contradiction rule (1) of the DPLL procedure. As we have indicated in the previous section, this rule designates the branch in question as either closed (contradictory) or open (satisfiable), depending on whether the rule can be applied or not. We have already stated that the rule can be applied iff we can find a contradictory product of powers of pure order literals and guards, of the form ε ≺ ε. Example 1 demonstrates this problem: given the order clausal theory {a² ≍ b³, b ≺ a} and the full guards of the contained atoms, we find a product of the literals (14)–(16) with the associated powers (the strict order literal (15) has the power 2; the literals (14) and (16) have the power 1) such that we obtain the contradiction (17).

Example 1 (Application of the unit contradiction rule) Guards: 0 ≺ a, 0 ≺ b, a ≺ 1, b ≺ 1; theory: a² ≍ b³, b ≺ a.

    a² ≍ b³    (14)
    b ≺ a      (15)
    b ≺ 1      (16)

    (a² ≍ b³) ⊙ (b ≺ a)² ⊙ (b ≺ 1) = a² & b³ ≺ a² & b³, a contradiction.    (17)

Remark 2 To obtain a contradictory product, at least one of the literals has to be a strict order literal, according to Definition 3 (operator ⊙ over literals).
Deciding whether such a product exists and finding the relevant powers is a nontrivial problem. We propose to represent the problem in the likeness of a linear programming (LP) problem: Ax ≤ b ∧ x ≥ 0. In particular, we represent the powers of atoms on the left and right side of the literals in which they appear in the matrix

    A = (a_{i,j}),  i = 1, ..., m,  j = 1, ..., 2q + p,

where each row represents an atom and each column a literal; a_{i,j} is the difference between the power assigned to the atom at position i in the literal at position j on the left and right


side of the literal; q is the number of equality literals, p is the number of strict order literals. The reason the count of equality literals is doubled in the matrix (columns 1, ..., 2q) is the commutativity of the ≍ operation: we represent the original and the swapped literals separately. The initial state of the matrix for the theory in Example 1 would be

    A_ex = [  2  −2  −1   0 ]
           [ −3   3   1   1 ]

with the ordering of literals (14)–(16), the first two columns being the original and the swapped literal (14). Note that, for the sake of simplicity, we have omitted the guards that were not used (these would be valued 0 in the matrix). The yielded contradiction (17) would be represented as a zero vector. Then, for each strict order literal lt in the input theory, we construct the matrix A_lt in the slack form suitable for performing the simplex method [9]:

    A_lt = [ a_{1,1} ... a_{1,2q} ... a_{1,2q+p−1}   1 ... 0   a_{1,lt} ]
           [   ...        ...           ...          ...         ...   ]
           [ a_{m,1} ... a_{m,2q} ... a_{m,2q+p−1}   0 ... 1   a_{m,lt} ]

where the column representing the strict order literal lt has been selected as the pivot column and moved to the right-hand side; the identity submatrix has been appended to represent the slack variables; and the matrix has been modified using standard matrix operations so that the pivot column is non-negative. Having obtained the matrices in the aforementioned representation, we would be able to perform the simplex method. However, this solution proposal needs further work in order to properly define the maximization goal and the extraction of the powers of atoms from the solution candidate.
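For illustration only, the following NumPy/SciPy sketch builds A_ex from Example 1 and uses a linear program to look for a nonnegative combination of literal columns that cancels to the zero vector of (17). Since the maximization goal is exactly the open point noted above, we use a placeholder (zero) objective and a lower bound that forces the strict order literal to be used; the integrality of the powers is also ignored here:

```python
import numpy as np
from scipy.optimize import linprog

# Columns: (14) original, (14) swapped, (15) = b < a, (16) = b < 1.
# Rows: atoms a, b; each entry is (left power) - (right power).
A_ex = np.array([[ 2.0, -2.0, -1.0, 0.0],
                 [-3.0,  3.0,  1.0, 1.0]])

# Find x >= 0 with A_ex @ x = 0, requiring the strict order literal (15)
# to appear at least once (x[2] >= 1, a stand-in for the missing objective).
res = linprog(c=np.zeros(4),                 # placeholder objective
              A_eq=A_ex, b_eq=np.zeros(2),
              bounds=[(0, None), (0, None), (1, None), (0, None)])

print(res.status == 0, res.x)   # feasible; e.g. x close to (1, 0, 2, 1) up to scaling
```

A feasible solution such as x = (1, 0, 2, 1) corresponds exactly to the powers used in Example 1.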

5 Conclusions We have reviewed several approaches to solving FASP programs and investigated the work of Janssen et al. [19] based on the reduction of FASP programs to fuzzy SAT problem instances. Our focus is on solving FASP under the product logic semantics, since Łukasiewicz and Gödel logic are both sublogics of the extended product logic [4]. This introduces several challenges, which have already been identified in our previous work [27]. Next, we have examined the design of a fuzzy SAT solver based on the DPLL procedure customized for propositional product logic, using the foundations of Guller [13]. The main contribution of this paper is the algorithmization of some of the parts of the inference pipeline: the translation of order theories into order clausal form and the DPLL procedure. We have also provided insight into the non-trivial problem of applying the unit contradiction rule (1) and its connection to the satisfiability


of the theory. We have proposed an approach to solving this problem based on the simplex method, although the concept needs further work. The set of possible truth constants in the input order theories was restricted to the boundary truth constants 0 and 1; we seek to extend the DPLL approach with intermediate constants in future research. A pilot implementation of the DPLL-based fuzzy SAT solver is underway, with preliminary results, and we hope to make it available in the coming months.
Acknowledgements The research reported in this paper was supported by the grant UK/244/2018.

References
1. Alviano, M., Amendola, G., Peñaloza, R.: Minimal undefinedness for fuzzy answer sets. In: Singh, S.P., Markovitch, S. (eds.) Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pp. 3694-3700. AAAI Press (2017). http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14309
2. Alviano, M., Peñaloza, R.: Fuzzy answer set computation via satisfiability modulo theories. TPLP 15(4-5), 588-603 (2015). https://doi.org/10.1017/S1471068415000241
3. Alviano, M., Peñaloza, R.: Fuzzy answer sets approximations. Theory Pract. Log. Program. 13(4-5), 753-767 (2013)
4. Baaz, M., Hájek, P., Švejda, D., Krajíček, J.: Embedding logics into product logic. Stud. Log. 61(1), 35-47 (1998). https://doi.org/10.1023/A:1005026229560
5. Blondeel, M., Schockaert, S., De Cock, M., Vermeir, D.: NP-completeness of fuzzy answer set programming under Łukasiewicz semantics, pp. 43-50 (2012)
6. Bobillo, F., Straccia, U.: A fuzzy description logic with product t-norm. In: 2007 IEEE International Fuzzy Systems Conference, pp. 1-6 (2007). https://doi.org/10.1109/FUZZY.2007.4295443
7. Brys, T., Drugan, M.M., Bosman, P.A., De Cock, M., Nowé, A.: Solving satisfiability in fuzzy logics by mixing CMA-ES. In: Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, GECCO '13, pp. 1125-1132. ACM, New York, NY, USA (2013). https://doi.org/10.1145/2463372.2463510
8. Clark, K.L.: Negation as failure, pp. 293-322. Springer US, Boston, MA (1978). https://doi.org/10.1007/978-1-4684-3384-5_11
9. Dantzig, G.B.: Origins of the simplex method. In: A History of Scientific Computing, pp. 141-151. ACM, New York, NY, USA (1990). https://doi.org/10.1145/87252.88081
10. Gebser, M., Kaminski, R., Kaufmann, B., Schaub, T.: Answer Set Solving in Practice. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan and Claypool Publishers (2012)
11. Gelfond, M., Lifschitz, V.: The stable model semantics for logic programming, pp. 1070-1080. MIT Press, Cambridge (1988)
12. Guller, D.: Expanding Gödel logic with truth constants and the equality, strict order, delta operators, pp. 241-269. Springer International Publishing, Cham (2017). https://doi.org/10.1007/978-3-319-48506-5_13
13. Guller, D.: A DPLL procedure for the propositional product logic. In: Proceedings of the 5th International Joint Conference on Computational Intelligence - Volume 1: FCTA (IJCCI 2013), pp. 213-224. INSTICC, SciTePress (2013). https://doi.org/10.5220/0004557402130224
14. Guller, D.: An order hyperresolution calculus for Gödel logic with truth constants and equality, strict order, delta. In: 2015 7th International Joint Conference on Computational Intelligence (IJCCI), vol. 2, pp. 31-46 (2015)


15. Guller, D.: Technical foundations of a DPLL-based SAT solver for propositional product logic. Unpublished manuscript (2016)
16. Hähnle, R.: Many-valued logic and mixed integer programming. Ann. Math. Artif. Intell. 12(3), 231-263 (1994). https://doi.org/10.1007/BF01530787
17. Hansen, N.: The CMA evolution strategy: a comparing review, pp. 75-102. Springer, Berlin, Heidelberg (2006). https://doi.org/10.1007/3-540-32494-1_4
18. Janssen, J., Schockaert, S., Vermeir, D., De Cock, M.: Answer Set Programming for Continuous Domains: A Fuzzy Logic Approach. Atlantis Computational Intelligence Systems. Atlantis Press, Paris (2012)
19. Janssen, J., Schockaert, S., Vermeir, D., De Cock, M.: Reducing fuzzy answer set programming to model finding in fuzzy logics (2011). http://arxiv.org/abs/1104.5133
20. Leone, N., Pfeifer, G., Faber, W., Eiter, T., Gottlob, G., Perri, S., Scarcello, F.: The DLV system for knowledge representation and reasoning. ACM Trans. Comput. Log. 7(3), 499-562 (2006). https://doi.org/10.1145/1149114.1149117
21. Lifschitz, V.: Action languages, answer sets and planning. In: The Logic Programming Paradigm: a 25-Year Perspective, pp. 357-373. Springer, Berlin (1999). http://www.cs.utexas.edu/users/ai-lab/?lif99
22. Lin, F., Zhao, Y.: ASSAT: computing answer sets of a logic program by SAT solvers. Artif. Intell. 157(1), 115-137 (2004). https://doi.org/10.1016/j.artint.2004.04.004
23. Mushthofa, M., Schockaert, S., De Cock, M.: A finite-valued solver for disjunctive fuzzy answer set programs. In: Proceedings of the Twenty-First European Conference on Artificial Intelligence, ECAI'14, pp. 645-650. IOS Press, Amsterdam, The Netherlands (2014). https://doi.org/10.3233/978-1-61499-419-0-645
24. Mushthofa, M., Schockaert, S., De Cock, M.: Solving disjunctive fuzzy answer set programs, pp. 453-466. Springer International Publishing, Cham (2015). https://doi.org/10.1007/978-3-319-23264-5_38
25. Mushthofa, M., Schockaert, S., De Cock, M.: Computing attractors of multi-valued gene regulatory networks using fuzzy answer set programming. In: Proceedings of the 2016 IEEE International Conference on Fuzzy Systems, FUZZ-IEEE'2016, pp. 1955-1962. IEEE (2016)
26. Simons, P., Niemelä, I., Soininen, T.: Extending and implementing the stable model semantics. Artif. Intell. 138(1), 181-234 (2002). https://doi.org/10.1016/S0004-3702(02)00187-X
27. Uhliarik, I.: Solving fuzzy answer set programs in product logic. In: Proceedings of the 9th International Joint Conference on Computational Intelligence - Volume 1: IJCCI, pp. 367-372. INSTICC, SciTePress (2017). https://doi.org/10.5220/0006518303670372
28. Van Nieuwenborgh, D., De Cock, M., Vermeir, D.: Computing fuzzy answer sets using DLVHEX, pp. 449-450. Springer, Berlin, Heidelberg (2007). https://doi.org/10.1007/978-3-540-74610-2_40
29. Van Nieuwenborgh, D., De Cock, M., Vermeir, D.: An introduction to fuzzy answer set programming. Ann. Math. Artif. Intell. 50(3), 363-388 (2007). https://doi.org/10.1007/s10472-007-9080-3

Exploring Internal Representations of Deep Neural Networks Jérémie Despraz, Stéphane Gomez, Héctor F. Satizábal and Carlos Andrés Peña-Reyes

Abstract This paper introduces a method for the generation of images that activate any target neuron or group of neurons of a trained convolutional neural network (CNN). These images are created in such a way that they contain attributes of natural images such as color patterns or textures. The main idea of the method is to pre-train a deep generative network on a dataset of natural images and then use this network to generate images for the target CNN. The analysis of the generated images allows for a better understanding of the CNN internal representations, the detection of otherwise unseen biases, or the creation of explanations through feature localization and description. Keywords Deep-learning · Convolutional neural networks · Autoencoders · Generative neural networks · Activation maximization · Interpretability

Supported by the Hasler Foundation, project number 16015.
J. Despraz (B) · S. Gomez · H. F. Satizábal · C. A. Peña-Reyes: School of Business and Engineering Vaud (HEIG-VD), University of Applied Sciences of Western Switzerland (HES-SO), Yverdon-les-Bains, Switzerland. E-mail: [email protected]
J. Despraz · S. Gomez · C. A. Peña-Reyes: Computational Intelligence for Computational Biology (CI4CB), SIB Swiss Institute of Bioinformatics, Lausanne, Switzerland


1 Introduction The growing amount of data available to researchers and companies, coupled with the increasing computational power, allows for the development of complex machine learning systems. In this context, artificial deep neural networks are a powerful tool, able to extract information from large datasets and, using this acquired knowledge, make accurate predictions on previously unseen data. As a result, deep neural networks are being applied in a wide variety of domains ranging from genomics to autonomous driving, from speech recognition to gaming [1, 2]. Many areas where neural network-based solutions can be applied require a validation, or at least some explanation, of how the system makes its decisions. If the system controls a potentially life-threatening device, such as a car, there ought to be a way to validate the decision mechanism in order to ensure it will not make a bad or dangerous decision in a given scenario. Another need for explainable systems comes from the recent European Union regulation [3] that provides a right to non-discrimination and a right to explanation for every automated decision-making process that significantly affects users. Unfortunately, the very large number of parameters required by deep neural networks is extremely challenging for explanation methods to cope with, and these networks remain for the most part black boxes. This demonstrates the real need for accurate explanation methods that are able to scale with this large quantity of parameters and provide useful information to a potential user. In this context, we propose a method that focuses on convolutional neural networks (CNNs) and provides a way to represent the knowledge gathered by a trained network. In contrast with many existing similar approaches, our method relies on the use of a deep generative network to produce outputs that reflect the internal representations of the target CNN. This offers several advantages; in particular, it allows the generation of images displaying natural characteristics, especially in terms of colors and textures. These images can then be used to analyze the network, detect its biases, understand which features are the most relevant, and ultimately help interpret its decisions. In the rest of this paper, Sect. 2 summarizes the existing related work and state-of-the-art methods in the field of activation maximization and deep generative networks, Sect. 3 introduces our method and its implementation in detail, Sects. 4 and 5 present the main results obtained on the well-known VGG-16 image classifier together with examples of useful applications, and the study is concluded in Sect. 6. Additional examples and results are presented in the Appendix.

2 Related Work Explanation methods for artificial neural networks are driven by the need for interpretable systems. As a consequence, several approaches have been proposed, especially in the domain of convolutional neural networks. Besides their ability to learn


accurately on large datasets, CNNs have the advantage of dealing with images, which are relatively easy for humans to interpret and analyze. The problem of CNN interpretability has been addressed by various studies [4] and, as a result of this research, several novel methods have been introduced. Many of these methods focus on analyzing a single input and try to explain the behavior of the network in its vicinity. Ribeiro et al. [5], for instance, proposed a method where a simpler model, locally identical to the original model, is constructed around the desired input; this local model can for example be a simple linear regression used to highlight regions of interest in the input image. Zhou et al. [6] proposed entirely replacing the fully-connected layers of the CNN by applying global average pooling to the extracted features and training a simple fully-connected layer to produce outputs similar to those of the original network; this allows for the generation of heatmaps that highlight the regions of importance in the network's predictions. Other methods such as deconvolution [7], guided backpropagation [8], and activation visualisation [9] have been used to visualize the regions of the input image that have the largest influence on a network response.
In parallel, other methods have been introduced that recreate preferred inputs for a given network and a given class. In two separate studies, Simonyan et al. [8] and Yosinski et al. [10] proposed methods based on gradient ascent allowing for the generation of images maximizing a target class activation. These images are, however, quite unrealistic-looking and lack natural characteristics, especially in terms of color and texture. A more recent approach that has been very successful at creating patterns closely matching the properties of real images is that of generative methods. This field is very dynamic and methods have evolved quickly, from the generation of images by evolutionary algorithms using direct and indirect encodings [11], to more complex reconstruction techniques [12], to the current state-of-the-art generative adversarial networks (GANs) [13]. Studies on GANs have demonstrated that deep generative networks can be used to create images exhibiting properties very similar to natural ones, sometimes making them indistinguishable to humans [14]. Based on the generative power of GANs, it is possible to create new methods for activation maximization that generate much higher quality images [15]. In particular, a method similar to the one presented herein has recently been proposed by Nguyen et al. [16], allowing the generation of photo-realistic images from a deep generative neural network. In order to obtain well-structured images, they used part of a GAN previously trained to generate sets of images similar to the ones they targeted. To ensure the convergence of the optimization to a realistic-looking image, they further constrained the inputs to be within a well-defined range, allowing only for values that trigger neuron responses close to the ones measured with images from the training dataset.
The work presented herein is based on a previous study [17] where we considered the generation of images representative of a given class and of certain filters within the CNN. In that work, we analyzed two types of networks, one pre-trained with an autoencoder and another one where the weights were randomized. The quality


of the images generated by the pre-trained network was limited by the quality of the pre-training, which led to blurry images. In this paper, we propose a supplementary training step that addresses this issue and leads to higher quality images. In addition, we also illustrate how these images can then be used to create explanations. Details on our specific implementation are presented in Sect. 3.

3 Methods In order to generate images that trigger strong responses for a given classifier, one needs to create a generative network that is complex and unbiased enough to be able to generate a large range of images, but not so unconstrained that it generates noisy, unrealistic images. As demonstrated in [17], a good starting point is to use an autoencoder which, by design, possesses a module that specializes in encoding the information and another, complementary one that is able to reconstruct an image from the encoded information. Therefore, the first step of the methodology is to train an autoencoder and, once it performs sufficiently well on a given dataset, to further train it to generate images that, when given to a trained classifier, yield the same activations of the various features as the original image. This significantly improves the visual quality of the results and ensures that the generative part of the autoencoder can be used to produce high quality images containing all of the features the classifier can detect. Once the training of the autoencoder is achieved, we consider only its generative half and couple it with the trained classifier one wishes to analyze, thus enabling us to generate preferred inputs for any target class or target group of neurons. This step is achieved by standard backpropagation algorithms on the coupled network. The various steps of the method are described in more detail in Sects. 3.1-3.3.

3.1 Training the Autoencoder The training of the autoencoder is achieved with 4,000 images randomly selected from the ImageNet database [18]. Through this training phase, the network learns how to extract the information contained in the input image and to store it in a reduced vector space, as depicted in light red in Fig. 1. We denote by v a compressed information vector lying in this space. In addition, the generative half of the autoencoder (depicted in light blue) learns how to reconstruct the original input image from this vector v. Thus, for an input image I, the output of the autoencoder is defined mathematically as:

    Î = G(v) = G(E(I))    (1)

where E and G are the functions corresponding respectively to the encoding and the generation of the image, computed such that distance(I, Î) is minimal (the


Fig. 1 Schematic view of the autoencoder with an example input and output image. The encoding part of the network (E) is filled in green while the generative part (G) is represented in blue. The red layer, in between the encoder and generator is the layer of neurons that contains a compressed representation (v) of the input image. This simple architecture enables the recreation of images close to the input in terms of mean squared error but can less accurately reproduce higher frequency components of the input and hence yields relatively blurry outputs

distance here is defined as the RMSE between I and \hat{I}). The detailed network architectures are presented in the Appendix in Tables 2 and 3. As observed in our previous work [17], the trained network is not able to reconstruct the high-frequency components of the input images with sufficient precision (see Fig. 1). The resulting outputs are somewhat blurry, and these imperfections can be observed simply by looking at the output images.
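For concreteness, the RMSE used here as distance(I, \hat{I}) can be written as follows (a trivial sketch; the function name is ours, not the authors'):

```python
# Trivial sketch of the reconstruction objective: distance(I, I_hat) is the
# root-mean-square error between the input image and the autoencoder output.
import numpy as np

def rmse(I, I_hat):
    I, I_hat = np.asarray(I, dtype=float), np.asarray(I_hat, dtype=float)
    return np.sqrt(np.mean((I - I_hat) ** 2))
```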

3.2 Training the Generator In order to improve the reconstruction of the images, we perform a second training step where we optimize the various features of the image instead of the image itself. The objective is to reduce the distance between the features of the generated image (\hat{F}_k) and those of the input image (F_k). This ensures that the generative half of the autoencoder is able to recreate any given feature of the input image. This step is achieved by using a trained classifier (VGG-16 [19] in our experiments) to extract a set of features through a series of convolutions and max-pooling operations. The distance between features is then computed as:

$$d_k = \sqrt{\sum_{i,j} \left[ (F_k)_{ij} - (\hat{F}_k)_{ij} \right]^2} \tag{2}$$


Fig. 2 Schematic view of the generator training process. In this step, the encoder and classifier parameters remain fixed and only the generator is trained. For each image presented to the networks, the autoencoder (encoder and generator) reproduces the image. This image is then passed to the classifier and the reconstruction error is computed. This error is defined as the distance between each feature of the generated image (\hat{F}_k) and the original image features (F_k), as extracted by the classifier

where each of the F_k and \hat{F}_k features is a two-dimensional square matrix and (·)_{ij} denotes the element at index (i, j) in these matrices. Note that the parameters of the encoding part are kept fixed for this step and are therefore not trained. In addition, the distance computed by Eq. (2) is given a weight inversely proportional to the corresponding feature size and to the number of features in the same layer, hence giving each layer the same relative importance. The method is depicted schematically in Fig. 2 and typical outputs from this training step are presented in Fig. 3. As illustrated in Fig. 3, this approach yields images that are visually much closer to the input and is therefore a significant improvement over the simple autoencoder. Interestingly, we also observe from this figure that the attributes of the resulting image depend on the set of features chosen as a target. For features close to the input, the generated image is visually very close to the original image, whereas for features deeper into the network (i.e. farther away from the input), the resulting image differs significantly from the original; in particular, a significant part of the spatial information has been lost. In order to maximize the diversity of the images seen by the generative network, we trained the autoencoder with 1,000 new images drawn randomly from the 1,000 classes of ImageNet and made sure that we had one image per class. Furthermore,


Fig. 3 Example of an image recreated by a trained generator, according to Fig. 2. We observe that, depending on the features one wishes to recreate, the attributes of the reconstructed image vary considerably. For features close to the input, high frequency and color are well reconstructed and the resulting image is almost indistinguishable from the original. In contrast, when reconstructing only the features from deep layers, we observe that the resulting image lacks certain attributes and much of the original spatial information is lost

since we are ultimately interested in recreating images representative of any arbitrary feature learned by the classifier, we ran our training with the objective of minimizing the distance to all extracted features.
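To make this multi-feature objective concrete, here is a minimal NumPy sketch of the weighted distance built on Eq. (2); the function and variable names are ours, not the authors':

```python
# Minimal sketch of the weighted feature-reconstruction loss built on
# Eq. (2); illustrative code, not taken from the authors' implementation.
import numpy as np

def feature_distance(F, F_hat):
    """Euclidean distance between two 2-D feature maps, Eq. (2)."""
    return np.sqrt(np.sum((F - F_hat) ** 2))

def weighted_loss(features, features_hat):
    """Each argument: list of layers, every layer an array of shape
    (n_features, h, w) as extracted by the fixed classifier (e.g. VGG-16)."""
    total = 0.0
    for layer, layer_hat in zip(features, features_hat):
        n_feat, h, w = layer.shape
        weight = 1.0 / (n_feat * h * w)  # same relative importance per layer
        for F, F_hat in zip(layer, layer_hat):
            total += weight * feature_distance(F, F_hat)
    return total
```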

3.3 Generating Preferred Inputs Using the trained generative network, extracted from the trained autoencoder of Sect. 3.1, we can couple it to a trained classifier, as depicted in Fig. 4, to produce images that strongly activate or inhibit a target neuron or group of neurons. This does not require any modification to the vanilla classifier, except for the activation function on the last layer, where the original softmaxes are replaced by rectified linear units (ReLUs). The resulting coupled network has one input layer: a 1-dimensional vector v, and two outputs: an image whose dimensions are identical to the classifier's inputs, and a series of activations for each neuron of the classifier. With this architecture, we can thus iterate on the compressed information space to produce an image that maximizes the activation of a particular output neuron of the classifier. Furthermore, the pre-training of the generator ensures that the generated image will lie within (or at least close to) the space of natural images. The action of the coupled network from Fig. 4 can be summarized as:

$$\hat{I} = G(v) \ \text{ such that } C(\hat{I}) \text{ is maximum for neuron } c_k, \quad k \in \{1, 2, \ldots, n\} \tag{3}$$

where \hat{I} is the image produced by the generator associated with the function G, v is the input vector, and C is the function associated with the classifier. In this configuration, the vector v, defined in the compressed space of the original autoencoder, is built as:


Fig. 4 Architecture of a coupled generative-classifier network. The generative network G (blue) creates images from an input vector v lying in the compressed information space (red), and these images are fed to the trained classifier C (yellow). The weights of the generator and the classifier are fixed, and the components of the vector v are trained with a specific loss function so as to generate images that yield the desired response from a given target neuron (typically a class neuron), represented here in dark red

$$v = (v_1, \ldots, v_m)^T = (\varepsilon(\alpha_1), \ldots, \varepsilon(\alpha_m))^T \tag{4}$$

where the expression ε(·) is a function introducing a random Gaussian perturbation (similar to jitter in other works [20]), and the α_i ∈ ℝ are parameters to be optimized. The loss of the coupled network is then defined as:

$$\text{loss} = -a(c_k) + \lambda \sum_{i \neq k} a(c_i) \tag{5}$$

where a(·) is the target neuron activation function and λ ∈ ℝ⁺ is a factor penalizing the activation of the other classes. The loss function is then minimized using standard backpropagation methods over the entire coupled network to compute the optimal components α_i of v according


Table 1 Implementation details and parameters used for the optimization of the coupled generator-classifier network. U(a, b) denotes here the continuous uniform distribution with lower limit a and upper limit b

Variables                  3,200
Optimizer                  Stochastic gradient descent
Parameters                 Learning rate = 0.01, Momentum = 0.1, Nesterov momentum, Decay = 0
Loss function              λ = 0.1, a(·) = ReLU(·)
Initial conditions for v   α_i ∈ U(−1, 1) ∀i
Noise parameters (ε)       Type = multiplicative, μ = 1, σ = 0.05

to Eq. (4). Note that the choice of ReLU as an activation function on the last layer is justified because it prevents a decrease in the loss due to negative contributions of the non-targeted neurons c_i. Therefore, in the case where a(c_i) = 0 ∀i ≠ k (i.e. all classes other than k do not contribute to the loss), the optimization should favor a maximal expression of the target neuron c_k. Convergence can be controlled by ensuring that a(c_k) > 0 at all times, thus removing the risk of a vanishing loss gradient at some point in the optimization, which would prevent full convergence to the optimum solution. Table 1 lists the different parameters that were used in our experiments.
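As a rough sketch of how these pieces fit together (Eqs. (4)–(5) with the settings of Table 1); every name here is an illustrative placeholder, not the authors' code:

```python
# Illustrative sketch of the optimization ingredients of Sect. 3.3 with the
# settings of Table 1; all names are placeholders, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
m, lam = 3200, 0.1                   # variables and lambda from Table 1
alpha = rng.uniform(-1.0, 1.0, m)    # initial conditions alpha_i ~ U(-1, 1)

def jitter(a, mu=1.0, sigma=0.05):
    """Eq. (4): multiplicative Gaussian noise, v_i = eps(alpha_i)."""
    return a * rng.normal(mu, sigma, a.shape)

def loss(activations, k):
    """Eq. (5): reward the target neuron k, penalize all other classes."""
    return -activations[k] + lam * (activations.sum() - activations[k])

# In the full method, v = jitter(alpha) is fed to the fixed coupled network
# and the alpha_i are optimized by SGD (lr = 0.01, momentum = 0.1, Nesterov).
v = jitter(alpha)
```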

4 Results We tested our method on the VGG-16 [19] classifier trained on the ImageNet [18] dataset. We coupled it to our pre-trained generator to produce images for all of the 1,000 available classes. To implement, test, and optimize the deep neural networks, we used the open-source library Keras [21] configured with the TensorFlow [22] backend. Figure 5 shows some of the results obtained for a few classes that lead to a good convergence of the method. In order to create those images, we computed the optimal choice of the input vector v to the coupled (generative-classifier) network that led to the strongest activation of the final-layer neuron corresponding to the class we were targeting. Further results for other classes are available in Figs. 11, 12 and 13 in the Appendix. We observe in these images that the objects are recreated with the vivid colors one would expect in real-life examples. In addition, some of the details and textures


Fig. 5 Well-converged selection of images that activate target classes of the VGG-16 classifier. The colors and texture from natural images are well recreated. More examples are available in the Appendix. This image is best viewed in color/screen

have been reproduced by the generative network. We observe for instance the spikes in the buckeye and in the cockroach image, the fur of the beaver, or the reflections on the lemon. The context surrounding the object is often represented as well; this is typically the case in the barber chair picture, where we see what looks like white walls, or in the castle and the barn pictures, where we see grass and blue patches like water and parts of the sky. In addition to the activation of class neurons, we also analyzed the response of convolutional filters within the classifier network. The procedure was the same, i.e. finding the input vector v that triggers the strongest response of all of the neurons contained in a given filter of a convolutional layer. Results are presented in Fig. 6 for a top layer and Fig. 7 for a deep layer. We observe from Figs. 6 and 7 that the complexity of the detected features increases as their position gets deeper into the network. Layers close to the input respond to plain, simple colors and/or simple motifs, while layers closer to the class output are more sensitive to complicated shapes and patterns that, for some, can be recognized as attributes of the classes the classifier is able to identify. We notice for instance objects that resemble fish eyes, rabbit ears, horns, dog noses or human fingers. Some more examples are presented in Fig. 14 in the Appendix.


Fig. 6 Selection of generated images that maximize the second layer of the VGG-16 classifier. As expected, preferred images exhibit simple patterns (e.g. straight lines, uniform colors). This image is best viewed in color/screen

5 Discussion and Applications Visualizing the preferred inputs of neurons enables a user to assess the quality of the training by understanding whether the data has been assimilated successfully and without biases (see Sect. 5.1). Furthermore, by allowing the representation of images associated with various features within the classifier, it also provides a tool to generate explanations, in combination with methods such as Class Activation Maps (CAMs) [6], to rank, localize, and explain their presence in a given class (see Sect. 5.2).

5.1 Assessing the Training Quality Since a particular context is sometimes strongly linked to a given class, or because datasets can simply be biased by construction, the trained classifier can easily be biased itself as a reflection of the intrinsic data distribution. Yet, it is sometimes very hard to measure these biases without a proper tool to analyze the network. Our method offers a way to detect biases in the classifier's internal representations, as illustrated in Fig. 8. In these examples, we observe that the context surrounding


Fig. 7 Selection of generated images that maximize the last convolutional layer of the VGG-16 classifier (i.e. the layer farthest from the input). As expected and in contrast to Fig. 6, preferred images for layers deep into the network exhibit high levels of complexity. Some of the high-level features of the final classes such as eyes, feathers, horns, fingers, or rabbit ears can easily be recognized. More examples are available in the Appendix. This image is best viewed in color/screen

Fig. 8 Selection of images displaying some bias in the CNN internal representations. Row 1 shows the generated image for the target class while row 2 shows a typical example of the training dataset. In these cases, the biases are typically elements from the broader context of the class


Fig. 9 Natural explanation from a human perspective: description of the object in the picture by identifying relevant features and locating them on the image. “I decided it is a lion because I see round ears here, large paws there, golden fur, and a typical mouth pattern here”

some objects can be strongly present in the generated image, occasionally even more than the object itself. Notice for example how eyebrows and eyes appear in the image for academic gown, or how trees, forest paths, and cycling gear seem to appear in the tandem class. This suggests that the classifier's internal representations of those classes might be wrong and that additional training would be required to correct this bias. Furthermore, we hypothesize that a correction of those biases might be a way to easily increase the network accuracy on typical benchmark tests. Interestingly, this analysis can also reveal which attributes are neglected by the network despite being intrinsically part of the object. For instance, we observe that the body of the beaver does not appear at all in the image; only the head seems to be relevant for the network. We come to the same conclusion for the coyote, the collie, or the samoyed in Figs. 11, 12 and 13, where only head regions are represented.

5.2 Generating Explanations Besides bias detection and training analysis, another useful application of our method is the automated generation of explanations that describe the reasons for a certain prediction of a classifier. Consider for instance the lion picture in Fig. 9. If a human were to describe why they think this represents a lion, they would presumably construct an explanation from the features that are typical of a lion and show on the picture where they appear (e.g. "I decided it is a lion because I see round ears here, large paws there, golden fur, and a typical mouth pattern here."). Interestingly, we can produce a similar type of explanation with our approach when coupled with a method that generates Class Activation Maps. Based on the previous works of Zhou et al. [6], features of high importance for a given class and


Fig. 10 Generated explanation for a volcano image. The feature importance and localization are computed by applying class activation mapping, and the corresponding filter images are created by activation maximization. In this example, we can explain the classification into the class volcano by the peak structure of the object, the presence of smoke around the top, and flames at several locations on the object

a given classifier can be computed by emulating the behavior of the classifier’s fully connected layers with a simpler model. As a result, it allows for the creation of heatmaps that highlight the locations where the most relevant class features appear in a given input. Since we are able to generate an image representative of each feature that the classifier can detect, we can add to the heatmap an image that explains what was detected at this location. An example of such an explanation is presented in Fig. 10 where the class volcano is predicted and the reason for this particular result is explained by the detection and localization on the image of three attributes that are particularly relevant to the class volcano, namely: peak/rocks, smoke/clouds, and flames/fire.
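To make this concrete, here is a minimal sketch of a class activation map in the spirit of Zhou et al. [6]; this is our toy code, not tied to the authors' implementation:

```python
# Minimal sketch of a class activation map in the spirit of Zhou et al. [6];
# illustrative code, not the authors' implementation.
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """feature_maps: (K, h, w) last-conv features; class_weights: (K,)
    weights of the emulating linear layer for the target class."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (h, w) map
    cam -= cam.min()
    return cam / (cam.max() + 1e-12)  # normalized heatmap in [0, 1]
```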


Despite the usefulness of such a tool, its application is limited by the ability of a human to put a name, or a semantic meaning, to the generated image for each relevant feature. While some images depicting characteristics or objects such as sky, flames, red, or round are easy to understand and label with an adjective, some other images can be much more confusing and do not necessarily correspond to an obvious characteristic that has a simple verbal explanation. Therefore, it would be interesting to develop this approach further by adding another layer of explanation that would translate the feature image into sensible words that could easily be understood by a potential human user.

6 Conclusion We introduced a method for the generation of images that are representative of a CNN's learned representations. The method is based on the construction and appropriate training of a deep generative network that is first optimized to accurately recreate natural images, and then coupled to a trained classifier to generate realistic images that strongly activate one neuron or a group of neurons from the classifier. The generator is trained in two distinct steps: first as part of a simple autoencoder, then as part of a modified autoencoder where one seeks to reproduce each feature of the image with great precision instead of reconstructing the image itself. We showed that this methodology leads to high-quality images. We presented several results based on our analysis of the VGG-16 network, where we displayed generated images for various classes and convolutional layers within the VGG-16 network. Finally, we proposed two applications of our method: one to improve the learning process and analyze the training efficiency by detecting biases, and another to generate automated explanations of a classifier's predictions.

Appendix See Tables 2 and 3 and Figs. 11, 12, 13 and 14.


Fig. 11 Well converged selection of images that activate target classes of the VGG-16 classifier (sample #2). This image is best viewed in color/screen

Fig. 12 Well converged selection of images that activate target classes of the VGG-16 classifier (sample #3). This image is best viewed in color/screen


Fig. 13 Well converged selection of images that activate target classes of the VGG-16 classifier (sample #4). This image is best viewed in color/screen

Fig. 14 Selection of generated images that maximize the last layer of the VGG-16 classifier (i.e. the layer farthest from the input) (sample #2). This image is best viewed in color/screen


Table 2 Detailed encoding network architecture (E)

layer                 dim             layer (cont.)        dim (cont.)
Input image           3 × 224 × 224   Convolution 2D       128 × 3 × 3
Gaussian noise                        Batch normalization
Convolution 2D        32 × 3 × 3      Max pooling          2 × 2
Batch normalization                   Convolution 2D       256 × 3 × 3
Convolution 2D        32 × 3 × 3      Batch normalization
Batch normalization                   Convolution 2D       256 × 3 × 3
Max pooling           2 × 2           Max pooling          2 × 2
Convolution 2D        64 × 3 × 3      Convolution 2D       512 × 3 × 3
Batch normalization                   Batch normalization
Convolution 2D        64 × 3 × 3      Convolution 2D       512 × 3 × 3
Batch normalization                   Max pooling          2 × 2
Max pooling           2 × 2           Flatten
Convolution 2D        128 × 3 × 3     Dense                1 × 2048
Batch normalization                   Dense                1 × 3200

Table 3 Detailed generative network architecture (G)

layer                 dim             layer (cont.)        dim (cont.)
Input vector          1 × 3200        Batch normalization
Gaussian noise                        Upsampling           2 × 2
Locally connected 1D  1 × 1           Convolution 2D       128 × 3 × 3
Reshape               128 × 5 × 5     Batch normalization
Upsampling            2 × 2           Convolution 2D       128 × 3 × 3
Convolution 2D        512 × 2 × 2     Batch normalization
Batch normalization                   Upsampling           2 × 2
Convolution 2D        512 × 2 × 2     Convolution 2D       128 × 3 × 3
Batch normalization                   Batch normalization
Upsampling            2 × 2           Convolution 2D       128 × 3 × 3
Convolution 2D        256 × 3 × 3     Batch normalization
Batch normalization                   Upsampling           2 × 2
Convolution 2D        256 × 3 × 3     Convolution 2D       64 × 3 × 3
Batch normalization                   Batch normalization
Upsampling            2 × 2           Convolution 2D       64 × 3 × 3
Convolution 2D        256 × 3 × 3     Batch normalization
Batch normalization                   Convolution 2D       3 × 3 × 3
Convolution 2D        256 × 3 × 3     Batch normalization


References
1. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521, 436–444 (2015)
2. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D.: Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015)
3. Goodman, B., Flaxman, S.: European Union regulations on algorithmic decision-making and a "right to explanation". arXiv:1606.08813 (2016)
4. Montavon, G., Samek, W., Müller, K.-R.: Methods for interpreting and understanding deep neural networks. arXiv:1706.07979 (2017)
5. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016)
6. Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929 (2016)
7. Zeiler, M.D., Krishnan, D., Taylor, G.W., Fergus, R.: Deconvolutional networks. In: 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2528–2535 (2010)
8. Simonyan, K., Vedaldi, A., Zisserman, A.: Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv:1312.6034 (2013)
9. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) Computer Vision – ECCV 2014. Lecture Notes in Computer Science, vol. 8689, pp. 818–833. Springer International Publishing, Cham (2014)
10. Yosinski, J., Clune, J., Nguyen, A., Fuchs, T., Lipson, H.: Understanding neural networks through deep visualization. arXiv:1506.06579 (2015)
11. Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 427–436 (2015)
12. Mahendran, A., Vedaldi, A.: Understanding deep image representations by inverting them. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5188–5196 (2015)
13. Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive growing of GANs for improved quality, stability, and variation. arXiv:1710.10196 (2017)
14. Denton, E.L., Chintala, S., Fergus, R., et al.: Deep generative image models using a Laplacian pyramid of adversarial networks. In: Advances in Neural Information Processing Systems, pp. 1486–1494 (2015)
15. Olah, C., Mordvintsev, A., Schubert, L.: Feature visualization. Distill 2 (2017)
16. Nguyen, A., Dosovitskiy, A., Yosinski, J., Brox, T., Clune, J.: Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In: Advances in Neural Information Processing Systems, pp. 3387–3395 (2016)
17. Despraz, J., Gomez, S., Satizábal, H.F., Peña-Reyes, C.A.: Towards a better understanding of deep neural networks representations using deep generative networks. In: Proceedings of the 9th International Joint Conference on Computational Intelligence – Volume 1: IJCCI, INSTICC, pp. 215–222. SciTePress (2017)
18. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211–252 (2015)
19. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 (2014)


20. Mahendran, A., Vedaldi, A.: Visualizing deep convolutional neural networks using natural pre-images. Int. J. Comput. Vis. 120, 233–255 (2016)
21. Chollet, F., et al.: Keras (2015)
22. Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., et al.: TensorFlow: large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467 (2016)

Adapting Self-Organizing Map Algorithm to Sparse Data

Josué Melka and Jean-Jacques Mariage

Abstract Machine learning techniques applied to data-mining face the challenge of time and memory requirements, and should therefore take full advantage of the increase in power that recent multi-core processors bring. When applied to sparse data, it is also sometimes necessary to find an appropriate reformulation of the algorithms, keeping in mind that memory load was and still is an issue. In [1], we presented a mathematical reformulation of the standard and the batch versions of the Self-Organizing Map algorithm for sparse data, proposed a parallel implementation of the batch version, and carried out initial performance evaluation tests. We here reproduce and extend our experiments on a more powerful hardware architecture and compare the results to our previous ones. A thorough quantitative and qualitative analysis confirms our preceding results.

Keywords Neural-based data-mining · Self-organizing map algorithm · Parallel computing · Sparse data

1 Introduction In the field of data-mining, machine learning (ML) algorithms are widely used for data classification, clustering, data analysis and visualization. They are applied to data volumes that are already very large and keep increasing. Database sizes expand as storage facilities progress (cloud or hard-disk capacity), and by now the WWW is definitely the widest knowledge base ever seen. Mining these kinds of data has become intractable on conventional computers. It requires endlessly growing technical and algorithmic resources. The more data volumes grow, the more we need to reduce computation time. It thus becomes of prime


importance to resort to distributed high-performance computing technology, either on cluster computing systems or on multi-core CPUs and GPUs. To this end, parallel implementations of specifically tuned versions of former ML algorithms remain a key concern. In the approach we adopt, we consider data-mining as the automatic "inspection of a large dataset with the aim of Knowledge Discovery" [2]. In real-world applications, a sampling subset of a given raw data space is collected, with the hope that it approximately captures the intrinsic structural characteristics of the far wider and possibly infinite original space. After various preprocessing steps, in order to clean the data and to reduce its size to an acceptable dimensionality, a reduced, refined discrete set of vectors is extracted to create a training data set. From this training data, a set of prototype vectors (commonly referred to as a codebook) is then extracted that approximately represents the structural discriminating features of the original data space. For this task, clustering methods based on the unsupervised competitive learning paradigm are very efficient. Unsupervised learning is necessary because we do not know in advance what we are looking for in the data. Approximation allows a better generalization ability, so that the final prototypes absorb data variability. Dimensionality reduction aims to yield reasonable training times with regard to the computing resources offered by today's computers. Among unsupervised competitive ML algorithms, the Self-Organizing Map (SOM) [3] is one of the most popular neural network (NN) models. Over the last decades, SOM has gained popularity through a vast variety of applications in numerous domains. Many variants of the original algorithm have been derived and adapted to specific subfields of ML. Since the dawn of data-mining, SOM has been applied to text-mining, with the well-known WEBSOM [4, 5] text navigation interface for automatic keyword generation and text classification, which generated a prolific research activity. Indeed, aside from its powerful efficiency as a clustering algorithm, SOM offers the crucial advantage of extracting the topological ordering of the mutual relationships among data categories, and of depicting a graphic projection of them on a bidimensional map, providing an easy interface with the underlying data space. Similar data items are clustered close together in the topology, while different ones are separated in remote clusters, far from each other. Moreover, SOM clustering proportionately reflects the density of the data categories. Statistically less frequent items in the data are represented by smaller clusters than the most frequent ones, which extend over a wider area. Last but not least, SOM provides straightforward and very useful properties for data analysis and knowledge discovery in databases. Increasing the map size can act as a magnification parameter, to bring different resolution granularities into the mapping and to discover and zoom in on emergent topologies [6]; concrete applications are described in [7]. The individual layout of each dimension of the prototype vector can also be visualized with the so-called component planes technique, which consists in projecting dimensions onto the map. This makes it possible to visually detect less obvious or even hidden component combinations that may eventually need to be more deeply investigated to track intricate dependencies between the variables. Component planes are color


coding of the same component values in each prototype vector. They reflect the nodes' sensitivity relative to each dimension, i.e., component correlations: "not only linear correlations, but also nonlinear and local or partial correlations between variables" [8]. The topological ordering capability of the SOM locates similar patterns in identical positions, which indicates correlation between the respective components. Yet, SOM is clearly outperformed in speed by other popular clustering algorithms. The ART2 family of models [9] and the closely related K-means algorithm require far shorter or significantly reduced computation times, respectively. See e.g. [10] for a cluster validity and learning efficiency comparison of ART-C 2A with ART 2A [11], online K-Means, batch K-Means, and SOM. Nevertheless, albeit not the fastest, nor often the most efficient, over the years SOM has proven to be an essential reference neuronal tool for data inspection. Thanks to its distinctive topology-preserving characteristic, SOM is particularly well suited for the visual inspection of very high-dimensional information spaces, from structured databases with clearly identified fields, to (and most of all) poorly structured raw data spaces like the WWW or text data collections. When applied to such vast amounts of data, to achieve meaningful results, NN training times typically require days to weeks. In the particular case of SOM, the training stage implements the usual lengthy iterative convergence procedure, which is further penalized by a complementary diffusion mechanism over a wide part of the units; this is unavoidable because it is the key point to achieve topology extraction and thus visual inspection quality. Regarding the data, as mentioned in [1], the computational cost of the SOM algorithm of course depends on the size of the learning database, but it is vector dimensionality that has the most significant impact on the algorithm complexity. However, especially in text-mining, the data extraction methods produce very sparse data vectors, with spatial correlation. Albeit large in dimensionality, the vectors have only a few percent of non-zero values. We thus focus on improving the efficiency of the SOM algorithm in order to reduce the computing time for this kind of data, and show that modifying some crucial low-level parts of the original algorithm can bring a substantial gain in execution time without losing the original properties. To this end, we propose a rewrite of the conventional version of the SOM algorithm adapted to sparse input, and suggest a modified batch SOM version more specifically tailored for both sparsity and parallelism efficiency. We refer to our variants as Sparse-Som and Sparse-BSom. We next consider the parallel implementation of these algorithms, and present our batch version with OpenMP acceleration. As a first performance evaluation of our modifications, we compared our Sparse-BSom to Somoclu [12] as a standard benchmark. We check the evolution of execution time for increasing parallelization levels, obtained by varying the number of cores and threads devoted to the calculations. Regardless of result precision, maps are identically configured to avoid unwanted parametric influence effects, while focusing on the speed improvements brought by our modifications.
To evaluate their respective performance, we here compared our two implementations to each other in a series of extensive experiments, in order to confirm or disprove the validity of our preliminary results on extended data sets, and using a


more powerful hardware architecture. We measured the training time over datasets of gradually varying densities. For consistency, the batch version was used in serial mode in the speed benchmark. We also conducted a qualitative evaluation over twelve artificial and real datasets, with vectors varying in number, size and density. We assess training accuracy with the usual average quantization error and by means of recall and precision measures following majority-voting calibration of the maps. Results confirm our first conclusions, and also reveal some new interesting facts about the influence of cache misses. In what follows, we first recall the standard SOM algorithm and its batch variant and proceed to their computational complexity analysis (Sect. 2). Afterwards, in light of the rather scarce related work concerning SOM and data sparsity, we describe our modified versions of the standard SOM and of the batch SOM (Sect. 3). We next consider the parallel implementation of these algorithms, and present our batch version with OpenMP acceleration (Sect. 4). We then evaluate their performance on sparse data and comment on the results (Sect. 5). Finally, we draw conclusions from the experiments and suggest further developments for the proposed methods (Sect. 6).

2 SOM Algorithm The Self-Organizing Map [3] is an artificial neural network trained by unsupervised competitive learning. The network usually consists of a two-dimensional lattice of units, as depicted in Fig. 1. Associated with each unit are a weight vector and a position in the map. The learning process produces a low-dimensional map of the input space, each input being represented by its closest unit (prototype). We here briefly recall the main steps of the standard algorithm and the difference with the batch version, in order to specify our adaptations to sparse data and

Fig. 1 The SOM map [13]


parallelization. The reader interested in a more detailed description may refer to the abundant literature about SOM and its thousands of applications.

2.1 Standard Algorithm In the standard algorithm [14], the weight vectors (the codebook) are updated immediately after the presentation of each data vector. This process is repeated during $t_{max}$ training iterations, as summarized in Algorithm 1.

Algorithm 1: Standard SOM.
Input: x, a set of N vectors of D components.
Data: w, the codebook of M vectors.
 1  for t ← 1 to T do
 2      compute α(t)                                  // current learning rate
 3      compute σ(t)                                  // current width
 4      choose i ∈ 1…N randomly
 5      for k ← 1 to M do                             // (1): compute distances
 6          d_k ← 0
 7          for j ← 1 to D do
 8              d_k ← d_k + (x_ij − w_kj)²
 9      c ← arg min_k d                               // (2): get bmu
10      for k ← 1 to M do                             // (3): update phase
11          h_ck ← exp(−‖r_k − r_c‖² / 2σ(t)²)
12          for j ← 1 to D do
13              w_kj ← w_kj + α(t) h_ck (x_ij − w_kj)

First, the distance between an input vector x and the weight vectors w is computed at each time step t. The squared Euclidean distance¹ is usually used when the data is modeled by a Euclidean space:

$$d_k(t) = \|x(t) - w_k(t)\|^2 \tag{1}$$

Second, the best-matching unit c is determined by

$$d_c(t) = \min_k d_k(t) \tag{2}$$

Then the weight vectors are updated using the learning rule

$$w_k(t+1) = w_k(t) + \alpha(t)\, h_{ck}(t)\, [x(t) - w_k(t)] \tag{3}$$

¹ Using the squared distance here is equivalent to using the Euclidean distance, and avoids the square root computation.


where 0 < α(t) < 1 is the learning rate, which decreases monotonically over time, and h_ck(t) is the neighborhood function. A commonly used neighborhood function is the Gaussian

$$h_{ck}(t) = \exp\left(-\frac{\|r_k - r_c\|^2}{2\sigma(t)^2}\right) \tag{4}$$

where r_k and r_c denote the coordinates of the nodes k and c respectively, and the width of the neighborhood σ(t) decreases over time.
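As an illustration, a minimal sketch of Eq. (4) on a two-dimensional grid (toy code, not taken from the authors' implementation):

```python
# Toy illustration of the Gaussian neighborhood of Eq. (4): every unit k
# receives a weight h_ck that decays with its grid distance to the BMU c.
import numpy as np

def gaussian_neighborhood(grid, c, sigma):
    """grid: (M, 2) array of unit coordinates, c: index of the BMU,
    sigma: current neighborhood width. Returns h_ck for all M units."""
    sq_dist = ((grid - grid[c]) ** 2).sum(axis=1)  # ||r_k - r_c||^2
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

# Example on a 30 x 40 map (the grid size used later in Sect. 5):
grid = np.stack(np.meshgrid(np.arange(30), np.arange(40), indexing='ij'),
                axis=-1).reshape(-1, 2)
h = gaussian_neighborhood(grid, c=0, sigma=4.5)
```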

2.2 Batch Algorithm The batch version of the SOM (Algorithm 2) batches all the input samples together in each epoch [15, 16]. A proof of the convergence and ordering of the Batch Map is established in [17].

Algorithm 2: Batch SOM.
Input: x, a set of N vectors of D components.
Data: w, the codebook of M vectors.
 1  for e ← 1 to K do
 2      compute σ(e)                                  // current width
 3      for i ← 1 to N do
 4          for k ← 1 to M do                         // (1): compute distances
 5              d_k ← 0
 6              for j ← 1 to D do
 7                  d_k ← d_k + (x_ij − w_kj)²
 8          c(i) ← arg min_k d                        // (2): get bmu
 9      for k ← 1 to M do
10          n ← (0, …, 0)                             // init numerator (vector)
11          y ← 0                                     // init denominator
12          for i ← 1 to N do                         // accumulate n and y
13              h_c(i)k ← exp(−‖r_k − r_c(i)‖² / 2σ(e)²)
14              for j ← 1 to D do
15                  n_j ← n_j + h_c(i)k x_ij
16              y ← y + h_c(i)k
17          for j ← 1 to D do                         // (5): update phase
18              w_kj ← n_j / y

In this variant, Eqs. (1) and (2) are computed for all data samples at the start of each epoch. As in the standard algorithm, weight vectors of the triggered nodes and their neighbors are updated, but only once at the end of each epoch, with the average of all the training samples that trigger them:


$$w_k(t_f) = \frac{\sum_{t'=t_0}^{t_f} h_{ck}(t')\, x(t')}{\sum_{t'=t_0}^{t_f} h_{ck}(t')} \tag{5}$$

where $t_0$, $t'$ and $t_f$ respectively refer to the first, current and last time indexes over the running epoch, and the neighborhood does not shrink during the epoch, thus σ(t′) = σ(t₀). Another batch-oriented version, closer to the original algorithm, has been proposed by [18] but is much less used.
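To make the update of Eq. (5) concrete, here is a minimal vectorized sketch of one epoch's update step (illustrative NumPy code, not the authors' implementation; `grid` holds the unit coordinates):

```python
# Vectorized sketch of the Batch Map update of Eq. (5): each prototype
# becomes the neighborhood-weighted average of the samples that trigger it.
import numpy as np

def batch_update(X, bmu, grid, sigma):
    """X: (N, D) data, bmu: (N,) best match unit per sample,
    grid: (M, 2) unit coordinates, sigma: fixed width for this epoch."""
    sq = ((grid[:, None, :] - grid[None, :, :]) ** 2).sum(-1)  # (M, M)
    H = np.exp(-sq / (2.0 * sigma ** 2))                       # h between units
    Hn = H[:, bmu]                                             # h_{c(i)k}, (M, N)
    return (Hn @ X) / Hn.sum(axis=1, keepdims=True)            # new codebook
```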

2.3 Complexity Hereafter, we will use the following notations. M is the number of units in the network grid, D is the number of vector dimensions, N is the number of sample vectors, T is the $t_{max}$ of the standard version and K is the number of epochs of the batch version.
Time. The computational complexity of the standard version is O(TMD) for both Eqs. (1) and (3).² For the batch version, the complexity of Eqs. (1) and (5) is O(KNMD). The complexity of the two versions being similar if one chooses T = KN, we therefore only refer hereafter to the standard version for simplicity. Since we use sparse vectors as inputs, let us define d = Df, where f is the fraction of non-zero values in the inputs; the resulting complexity could be O(TMd) if we express the equations appropriately, which may be very attractive in the case of d ≪ D.
Memory. The memory requirements for the SOM algorithm depend on three factors, namely the vector size, the number of units and the amount of data used as input. With the sparse version, the size of the codebook remains unchanged and still requires O(MD) space, but the size of the input data is reduced from O(ND) to O(Nd). This can considerably lower memory requirements for highly sparse large data sets, especially when M ≪ N, which is usually the case for complex information processing in data-mining applications.

² Complexity of Eq. (2) does not depend on the vector size; it is only O(TM).

3 Making Good Use of Sparseness To make good use of sparseness, we wrote an appropriate distance computation for the batch version, similarly to [19, 20]. We also used the key idea from the variants proposed by [21, 22] to make a sparse standard version. One other option for taking advantage of data sparseness, already proposed in [14, 23], is to replace the Euclidean distance with a dot-product. A serious drawback of this approach is that it is limited to the cosine similarity metric, and requires units


normalization after each update, which makes it less convenient for the standard algorithm. Recently, another interesting variant of sparse SOM was proposed by [24]. In their version, the codebook is devised to be sparse, and hence the algorithm formulation is not mathematically equivalent to the standard algorithm, contrary to our approach. Reference [24] addresses sparsity regarding both the dimensionality of the data set (number of vectors) and that of the weight vectors (number of components). Since discontinuous data sets are samples of a generally far wider space, they are sparse by nature. It is thus possible to start from an initial subsample of the data and to increase it progressively. Vector dimensions are interpreted as a probability distribution. They select the most significant ones, based on the input data already fed to the network. Dimensions with the highest probability are kept in the units' prototype vectors. Others are neglected.

3.1 Batch Version The computation of Eq. (5) depends only on the non-zero values in the input. Rewriting Eq. (1) accordingly gives:

$$d_k(t) = \|w_k(t)\|^2 + \|x(t)\|^2 - 2\,(w_k(t) \cdot x(t)) \tag{6}$$

The values of the squared norms can be precomputed, once for x and before each epoch for w, and their influence on the computation time is thus negligible.
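As a toy illustration of this precomputation (a sketch using SciPy sparse matrices; this is not the Sparse-BSom source code):

```python
# Sketch of Eq. (6) with precomputed norms on sparse inputs: only the dot
# products touch the sparse data, so the cost scales with the non-zeros.
import numpy as np
from scipy.sparse import random as sparse_random

X = sparse_random(1000, 500, density=0.01, format='csr', random_state=0)
W = np.random.default_rng(0).random((1200, 500))         # 30 x 40 codebook

x_norms = np.asarray(X.multiply(X).sum(axis=1)).ravel()  # once, before training
w_norms = (W ** 2).sum(axis=1)                           # once per epoch

d2 = w_norms[None, :] + x_norms[:, None] - 2.0 * (X @ W.T)
bmu = np.asarray(d2).argmin(axis=1)                      # best match units
```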

3.2 Standard Version To simplify the notation, β(t) replaces α(t) h_ck(t) in the following. Codebook update. We express Eq. (3) as [1]:

$$\begin{aligned}
w_k(t+1) &= w_k(t) + \beta(t)\,[x(t) - w_k(t)] \\
         &= w_k(t) - \beta(t)\,w_k(t) + \beta(t)\,x(t) \\
         &= (1 - \beta(t))\,w_k(t) + \beta(t)\,x(t) && (7a) \\
         &= (1 - \beta(t))\left[\, w_k(t) + \frac{\beta(t)}{1 - \beta(t)}\,x(t) \right] && (7)
\end{aligned}$$

Therefore, by storing the coefficient (1 − β(t)) separately, we do not need to update all the values of w in the update phase, but only those affected by x(t). Distance computations. We can rewrite Eq. (1) as Eq. (6), but the computation of ‖w(t)‖² at each step remains problematic. However, by Eq. (7a), the value of ‖w(t)‖² allows us to compute ‖w(t+1)‖² efficiently [1]:


$$\begin{aligned}
\|w_k(t+1)\|^2 &= \|(1 - \beta(t))\,w_k(t) + \beta(t)\,x(t)\|^2 \\
&= (1 - \beta(t))^2\,\|w_k(t)\|^2 + \beta(t)^2\,\|x(t)\|^2 + 2\beta(t)(1 - \beta(t))\,(w_k(t) \cdot x(t))
\end{aligned} \tag{8}$$

4 Parallelism The SOM algorithm has experienced numerous parallel implementation attempts, both with dedicated hardware (neurocomputers) and massively parallel computers in the early years [25, 26] and later by using different cluster architectures [27–30]. A comprehensive, but somewhat outdated review of the different approaches can be found in [31]. More recently, several GPU implementations have been made, exploiting the potential of current graphic cards to run massively parallel computing [32–35]. Mixing both approaches is also possible using a GPU cluster [36]. However, to our best knowledge current GPU implementations are restricted to dense data, although it may be possible to adapt this approach for sparse data. It should be noted that the batch version is generally preferred for computational performance reasons, as it only needs a few iteration cycles and it can be parallelized efficiently, which greatly speeds up the learning process [12, 19, 20, 37–39].

4.1 Workload Partitioning Different levels of parallelism are suitable for neural network computations [40], but the following are most widely applicable: – Network partitioning splits the NN, dividing up the neuron units among different processors; it is advantageous since most of the calculations are unit located, and thus independent.


Algorithm 3: Sparse SOM (standard version).
Input: x, a set of N sparse vectors of D components.
Data: z, the codebook of M dense vectors.
Data: γ, an array of reals, satisfying w_k = γ_k z_k
Data: ω, an array of reals, satisfying ω_k = Σ_j w_kj²
Data: χ, an array of reals, satisfying χ_i = Σ_j x_ij²
Data: ε, to control the numerical stability; set it to a very small value.
 1  Procedure Init
 2      for i ← 1 to N do χ_i ← Σ_j x_ij²            // init χ
 3      for k ← 1 to M do                             // init z, ω and γ for t = 0
 4          initialize z_k
 5          ω_k ← Σ_{j∈1,…,D} z_kj²
 6          γ_k ← 1
 7  Procedure Rescale(k)
 8      for j ← 1 to D do
 9          z_kj ← γ_k z_kj
10      γ_k ← 1
11  Procedure Main
12      Init()
13      for t ← 1 to t_max do
14          choose an input i ∈ 1…N
15          for k ← 1 to M do                         // compute distance between x_i and w_k
16              d_k ← ω_k + χ_i − 2γ_k Σ_j z_kj x_ij
17          c ← arg min_k d
18          interpolate α and σ
19          foreach k ∈ N_c do                        // update z_k and ω_k
20              β ← α exp(−‖r_k − r_c‖² / 2σ²)
21              ω_k ← (1 − β)² ω_k + β² χ_i + 2β(1 − β) γ_k Σ_j z_kj x_ij
22              foreach j such that x_ij ≠ 0 do
23                  z_kj ← z_kj + (β / ((1 − β) γ_k)) x_ij
24              γ_k ← (1 − β) γ_k
25              if γ_k < ε then                       // rescale z_k
26                  Rescale(k)
27      for k ← 1 to M do                             // get the actual codebook w
28          Rescale(k)

– Data partitioning dispatches the input data among processors; in this case the complete network needs to be duplicated (or shared).
Due to the serial nature of the standard SOM version, data partitioning is irrelevant for it, and it turns out that it is hard to parallelize efficiently by network partitioning. The main reason is that the high frequency of thread synchronization prevents it from taking real advantage of parallelism. That does not apply to the batch version [19]. Therefore, parallel applications of the standard SOM algorithm often relax the requirement to update the network at each time step, but then the parallel and serial algorithms are no longer equivalent, and the algorithm's stability may be lost.


Reference [19] also points out that the data partitioning approach offers better scalability, because the amount of training data is generally bigger than the network size and can be potentially huge.

4.2 Cache Management While implementing the batch algorithm, we noticed that memory access latency is a key performance issue on modern CPUs, even without parallelism, and much more so in the shared-memory multiprocessing paradigm. This is due to sparse-by-dense vector operations: the unpredictable pattern of memory accesses cannot take advantage of processor cache prefetching, and makes it challenging to split the workload evenly across processors. It is therefore necessary to find a proper management of the processor cache, which has a very significant impact on performance on modern processors, by avoiding multiple accesses to memory. In order to improve the data cache locality, we have modified the loop order for certain portions of code, without changing the underlying algorithm. The resulting algorithm is shown in Algorithm 4. The outer loops (lines 5 and 12) are set on the codebook, which is made up of high-dimensional dense vectors, and the inner loops (lines 7 and 15) on the compressed sparse data.

4.3 OpenMP OpenMP [41] provides a shared-memory multiprocessing paradigm easily applicable to C/C++ or Fortran code with special directives, without modifying the existing code. Thanks to this simplicity, we were able to parallelize our batch version without significant changes to the source code. With true partitioning (e.g. with distinct machines on a cluster using message passing), which is often communication-bound, it is difficult to mix both partitioning schemes, although some authors [42, 43] proposed such hybrid approaches. This is less problematic with shared-memory systems. In order to simplify the underlying code and prevent concurrent writes to shared variables, our parallel implementation uses an outer parallel loop for the best match units search (line 7) and an inner parallel loop for the updates (line 12), both using the omp for directive. This is equivalent to using network partitioning for the BMU search, and data partitioning for the updates.


Algorithm 4: Sparse BSOM (batch version).
Input: x, a set of N sparse vectors of D components.
Data: w, initialized codebook of M dense vectors.
Data: χ, an array of N reals, satisfying χ_i = Σ_j x_ij²
Data: dst, array of N reals to store best distances.
Data: bmu, array of N integers to store best match units.
Data: num, array of D reals to accumulate numerator values.
 1  for i ← 1 to N do χ_i ← Σ_j x_ij²                // init χ
 2  for e ← 1 to e_max do                             // train one epoch
 3      interpolate σ
 4      for i ← 1 to N do dst_i ← ∞                   // initialize dst
 5      for k ← 1 to M do                             // find all bmus
 6          ω ← Σ_j w_kj²
 7          forall i ∈ 1, …, N do
 8              d ← ω + χ_i − 2(x_i · w_k)
 9              if d < dst_i then                     // store best match unit
10                  dst_i ← d
11                  bmu_i ← k
12      forall k ∈ 1, …, M do
13          den ← 0                                   // init denominator
14          for j ← 1 to D do num_j ← 0               // init numerator
15          for i ← 1 to N do                         // accumulate num and den
16              c ← bmu_i
17              h ← exp(−‖r_k − r_c‖² / 2σ²)
18              den ← den + h
19              for j ← 1 to D do
20                  num_j ← num_j + h x_ij
21          for j ← 1 to D do                         // update w_k
22              w_kj ← num_j / den

5 Performance Evaluation To evaluate the performance of our implementations, we have trained several networks with the same configuration parameters on various datasets and measured their relative performance, using the following parameters:
– 30 × 40 rectangular unit grids for all the networks
– $t_{max}$ = 10 × $N_{samples}$ (or $K_{epochs}$ = 10 for the batch version)
– rectangular neighborhood limits, with the radius r(t) decreasing linearly from 15 to 0.5
– Gaussian neighborhood function, with σ(t) = 0.3 r(t)
– α(t) = 1 − (t / $t_{max}$) if applicable


5.1 Datasets We have selected several large datasets in sparse format from [44] to evaluate the performance of the two approaches on true examples. Since the first publication of this study [1], we were able to extend our results using larger datasets.

url       Identifying suspicious URLs [45].
kddb      KDD Cup 2010 Challenge dataset [46], encoded in binary sparse coding.
avazu     Data used in the Avazu 2014 competition on click-through rate prediction, preprocessed by [47]. We use the site part only.
rcv1      Reuters corpus dataset [48], multiclass.
news20    netnews dataset [49], normalized.
sector    text categorization dataset [50], normalized.
mnist     MNIST database of handwritten digits [51].
usps      subset of the CEDAR handwritten database [52].
protein   bioinformatic dataset [53].
dna       recognizing splice-junctions of primate gene sequences [54].
satimage  classification of satellite images [55].
letter    character recognition dataset [56].

The detailed properties of these datasets are given in Table 1: 'features' denotes the number of values inside the vectors, 'samples' is the number of vectors, and 'density' gives the percentage of non-zero values; in cells with double rows, the first row indicates the value for the training set and the second the value for the test set.

5.2 Speed Benchmark We have conducted our tests on a multicore computer with 2 sockets of 18 cores Intel Xeon E7-8867 at 2.40 GHz (2 threads at 1.2 GHz per core). The results obtained confirm our previous benchmark [1]. Parallel comparison on batch algorithm. We measured the performance of the parallel implementation of the batch algorithm in terms of execution time, with various levels of parallelism. As a comparison baseline we have used the open-source tool Somoclu³ [12], whose characteristics are the following:
– supports both dense and sparse vectors as input
– uses the batch algorithm for training
– massively parallel using OpenMP and/or MPI
– designed for performance (without optimization on sparse inputs)

³ We used version 1.7.4. Later versions of Somoclu have been improved by us and use more efficient sparse computation (see: https://github.com/peterwittek/somoclu/commit/d5ffcf250db77aa103a9de96968ef0e27dc14d15).


Table 1 Characteristics of the datasets

           Classes   Features   Samples     Nonzero      Density (%)
url        2         3231961    2396130     277058644    0.0036
kddb       2         1129522    19264097    173376873    0.0008
                                748401      6735609      0.0008
avazu      2         999962     23567843    353517436    0.0015
                                2264987     33974763     0.0015
rcv1       53        47236      15564       1028284      0.14
                                518571      33486015     0.14
news20     20        62061      15933       1272569      0.13
                                3993        321123       0.13
sector     105       55197      6412        1045412      0.29
                                3207        524492       0.30
mnist      10        780        60000       8994156      19.22
                                10000       1511219      19.37
usps       10        256        7291        1866496      100.00
                                2007        513792       100.00
protein    3         357        17766       1839250      29.00
                                6621        615906       26.06
dna        3         180        2000        91233        25.34
                                1186        53669        25.14
satimage   6         36         4435        158048       98.99
                                2000        71254        98.96
letter     26        16         15000       240000       100.00
                                5000        80000        100.00

For this test, we have used the sector, news20, mnist and usps training datasets. The first two are very sparse and sufficiently large to evaluate the optimization effect on sparse data, and the last two are intended to observe the implementation behavior on mostly dense data. Several runs were made with different numbers of CPUs assigned to the computation using OpenMP. Contrary to our previous benchmark, the measurements shown here do not take into account the loading time of the datasets, giving more precise results. Results shown in Fig. 2 demonstrate that Somoclu's speed is correlated with the total input vector dimension, while Sparse-BSom's speed is closely correlated with the number of non-zero values. Notably, Sparse-BSom is several orders of magnitude faster than Somoclu in the case of very sparse data, and stays faster in all four cases. For both implementations, execution time decreases when the number of cores grows (the dotted lines represent the theoretical linear speed-up based on the number of cores).


Fig. 2 Parallel speed benchmark, based on [1]

However, the speed of Sparse-BSom appears to reach a plateau with 64 cores, and Somoclu seems to scale worse with smaller datasets as well. This is explained by the fact that the computing time becomes dominated by thread synchronization in faster executions. Another visible point is that Sparse-BSom scales better with dense datasets than with large sparse datasets.
Serial comparison of optimized versions. We carried out experiments to compare our optimized approaches to each other. To this end, we ran our two implementations using the same parameters on selected datasets with various densities. As stated before, the standard version cannot be parallelized efficiently, so we compared single-threaded versions only in these tests. Results are shown in Fig. 3. Sparse-BSom performs better than Sparse-Som on very sparse data, which seems easy to explain, because this last algorithm involves more calculations, and for this reason has a larger constant factor in its time complexity. Less clear is the reason why Sparse-Som performs better on dense data. A deeper analysis with the Linux perf tool shows that the major slowdown factor here is the cache-miss rate (see Appendix 6). This is because the large vectors in the codebook incur a lot of cache misses during the sparse computation. In fact, the memory access management of Sparse-Som is beneficial for large sparse datasets, but not for smaller and denser datasets.

5.3 Quality Evaluation Some authors have reported degradation of the resulting maps using the batch algorithm compared to the standard algorithm [57, 58]. Hence, we looked for such effects with our sparse implementations.


Fig. 3 Serial speed benchmark, based on [1]

We already noticed in [1] that results for Somoclu and Sparse-BSom are perfectly consistent. Therefore, we will focus our analysis here on the differences between our standard version and our batch version.
Methodology. Because our datasets are mostly multi-class, we calculate the following metrics for each label, and find their average, weighted by support (the number of true instances for each label).
Error metrics. Various error measures can be used to analyze the maps without human labeling; the most common ones are the Average Quantization Error and the Topographic Error.
– Average Quantization Error is the average of the quantization error over all the training samples, which represents the distance between the data vectors and the BMU [59].

$$Q = \frac{1}{N} \sum_{i=1}^{N} \|x_i - w_c\| \quad \text{where } w_c \text{ is the best match unit for } x_i \tag{9}$$

– Topographic Error is the percentage of the data for which the second BMU is not neighboring the first one [60].

N 1 0, first and second bmu adjacents Q= u(x i ) where u(x i ) = N i=1 1, otherwise

(10)
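Both measures are simple to compute from a trained codebook. The NumPy sketch below is our illustration, not the authors' code; the array shapes and the 8-neighbourhood grid adjacency are assumptions.

```python
import numpy as np

def quantization_error(X, codebook):
    # Eq. (9): mean distance between each sample and its best matching unit.
    d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)  # (N, units)
    return d.min(axis=1).mean()

def topographic_error(X, codebook, n_rows, n_cols):
    # Eq. (10): fraction of samples whose first and second BMUs are not
    # adjacent on the map grid (8-neighbourhood adjacency assumed here).
    assert codebook.shape[0] == n_rows * n_cols
    d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    first, second = order[:, 0], order[:, 1]
    r1, c1 = np.divmod(first, n_cols)
    r2, c2 = np.divmod(second, n_cols)
    return (np.maximum(abs(r1 - r2), abs(c1 - c2)) > 1).mean()
```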

Classification. Since SOM can be used in a supervised manner to classify input vectors, one can also use standard evaluation metrics (recall, precision and F-score) to evaluate the clustering quality. We have used the following evaluation method for all datasets:

1. train the SOM network with the training part of the dataset;
2. perform unit calibration with the associated labels (each unit is labeled according to the majority of the data it matches);

Table 2 Quantization and topographic errors

            Sparse-Som                        Sparse-BSom
            Quantization    Topographic       Quantization    Topographic
url         3.534 ± 0.004   0.285 ± 0.004     3.762 ± 0.015   0.249 ± 0.009
kddb        0.653 ± 0.001   0.248 ± 0.006     0.696 ± 0.004   0.192 ± 0.013
avazu       0.597 ± 0.000   0.310 ± 0.005     0.627 ± 0.001   0.272 ± 0.013
rcv1        0.842 ± 0.001   0.128 ± 0.005     0.822 ± 0.006   0.352 ± 0.010
news20      0.914 ± 0.000   0.243 ± 0.001     0.904 ± 0.002   0.600 ± 0.016
sector      0.839 ± 0.001   0.121 ± 0.005     0.780 ± 0.007   0.511 ± 0.026
mnist       4.437 ± 0.003   0.263 ± 0.004     4.513 ± 0.004   0.253 ± 0.002
usps        3.463 ± 0.002   0.089 ± 0.003     3.124 ± 0.015   0.281 ± 0.005
protein     2.465 ± 0.001   0.321 ± 0.004     2.452 ± 0.000   0.449 ± 0.009
dna         4.675 ± 0.005   0.048 ± 0.003     3.289 ± 0.032   0.263 ± 0.024
satimage    0.461 ± 0.001   0.058 ± 0.004     0.381 ± 0.001   0.223 ± 0.015
letter      0.377 ± 0.001   0.122 ± 0.004     0.348 ± 0.001   0.255 ± 0.012
3. predict the labels of the training data according to the label attributed to their best matching units;
4. do the same as step 3 on the test data.

If a unit has not attracted data in the training stage, it is not labeled; if in the test stage it attracts some input data, we assign that data a non-existent class. Though this strategy can significantly decrease the overall recall score (it is possible to use more sophisticated approaches to deal with such cases), this simple method is in general sufficient to analyze the clustering quality.

Results. The experiments were run five times, and we report mean values and standard deviations for each test. Detailed results are shown in Table 2 (quantization and topographic errors) and Table 3 (prediction evaluation). Error metrics were measured on the training data. It should be emphasized that no per-dataset parameter optimization was performed, and it is certainly possible to obtain better results with careful parameter tuning. For example, the network we used (1200 units) is too large for small training datasets, which explains the low recall rate for the dna dataset. It seems, however, that the standard SOM version is more robust against this type of difficulty, indicating that data samples are better distributed over the network with this algorithm. Regarding the quantization error, the standard version seems to produce lower error rates than the batch version with a large number of samples, and the opposite is true with fewer samples. The inverse occurs for the topographic error. This raises the question of whether choosing identical parameters for the batch and standard algorithms actually makes them equivalent.
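For concreteness, the calibration-and-scoring procedure above might look as follows. This is a hypothetical sketch, not the authors' code; the `bmus` arrays (best-matching-unit index per sample), the unit count and the use of scikit-learn are our assumptions.

```python
import numpy as np
from collections import Counter
from sklearn.metrics import precision_recall_fscore_support

def calibrate_units(bmus_train, y_train, n_units):
    # Step 2: label each unit by the majority label of the training samples
    # it attracts; units attracting no data stay unlabeled (None).
    labels = [None] * n_units
    for u in range(n_units):
        votes = Counter(y_train[bmus_train == u])
        if votes:
            labels[u] = votes.most_common(1)[0][0]
    return labels

def predict(bmus, unit_labels, missing=-1):
    # Steps 3-4: an unlabeled unit that attracts data yields a non-existent
    # class, so those samples count as errors, as explained above.
    return np.array([unit_labels[u] if unit_labels[u] is not None else missing
                     for u in bmus])

# Support-weighted multi-class scores, as reported in Table 3:
# p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted")
```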


Table 3 Prediction evaluation (for each dataset, the first row reports the training split and the second the test split; url has a single split)

            Sparse-Som                              Sparse-BSom
            Precision    Recall       F-score       Precision    Recall       F-score
url         90.5 ± 0.2   90.4 ± 0.2   90.2 ± 0.2    91.5 ± 0.1   91.5 ± 0.1   91.4 ± 0.1
kddb        82.0 ± 0.2   86.1 ± 0.0   80.6 ± 0.1    82.1 ± 0.1   86.1 ± 0.0   80.2 ± 0.2
            83.9 ± 0.2   88.6 ± 0.0   84.2 ± 0.2    84.1 ± 0.1   88.7 ± 0.0   83.9 ± 0.2
avazu       76.7 ± 0.0   80.7 ± 0.0   74.3 ± 0.1    76.6 ± 0.0   80.6 ± 0.0   74.2 ± 0.2
            77.1 ± 0.0   80.4 ± 0.0   74.3 ± 0.1    76.4 ± 0.4   80.2 ± 0.1   73.8 ± 0.4
rcv1        79.3 ± 0.5   80.1 ± 0.3   79.3 ± 0.4    80.1 ± 1.0   80.9 ± 0.9   80.0 ± 0.9
            73.7 ± 0.5   73.0 ± 0.4   72.9 ± 0.3    76.2 ± 0.4   71.6 ± 0.9   72.9 ± 0.6
news20      64.0 ± 0.5   63.2 ± 0.5   63.4 ± 0.5    47.7 ± 0.8   47.0 ± 0.8   47.0 ± 0.8
            59.5 ± 0.4   58.1 ± 0.4   58.4 ± 0.4    44.9 ± 0.8   41.5 ± 0.5   42.2 ± 0.6
sector      72.7 ± 0.8   69.8 ± 1.1   70.1 ± 1.1    57.4 ± 1.4   54.0 ± 1.6   53.9 ± 1.6
            66.1 ± 0.8   58.2 ± 1.4   59.5 ± 1.3    59.1 ± 2.8   41.7 ± 2.2   45.0 ± 2.2
mnist       93.2 ± 0.1   93.1 ± 0.1   93.1 ± 0.1    91.1 ± 0.2   91.1 ± 0.2   91.0 ± 0.2
            93.3 ± 0.1   93.3 ± 0.1   93.2 ± 0.1    91.3 ± 0.2   91.4 ± 0.2   91.3 ± 0.2
usps        95.6 ± 0.2   95.5 ± 0.2   95.5 ± 0.2    95.7 ± 0.1   95.7 ± 0.1   95.7 ± 0.1
            91.3 ± 0.7   90.6 ± 0.5   90.9 ± 0.6    92.2 ± 0.3   91.3 ± 0.5   91.7 ± 0.4
protein     56.8 ± 0.2   57.5 ± 0.2   55.7 ± 0.2    57.0 ± 0.1   57.4 ± 0.1   55.1 ± 0.2
            49.8 ± 1.0   51.3 ± 0.8   49.6 ± 0.9    50.7 ± 0.4   52.1 ± 0.5   50.0 ± 0.5
dna         91.2 ± 0.3   90.6 ± 0.3   90.7 ± 0.3    89.2 ± 0.5   88.7 ± 0.5   88.8 ± 0.5
            79.4 ± 1.2   68.9 ± 1.2   73.6 ± 1.1    82.2 ± 4.7   29.7 ± 1.7   42.6 ± 2.6
satimage    91.9 ± 0.2   92.1 ± 0.2   91.9 ± 0.2    92.0 ± 0.6   92.2 ± 0.5   92.0 ± 0.5
            86.6 ± 0.2   84.5 ± 0.9   85.5 ± 0.6    88.2 ± 0.6   85.2 ± 0.6   86.6 ± 0.5
letter      81.6 ± 0.3   81.5 ± 0.3   81.5 ± 0.3    82.1 ± 0.5   81.9 ± 0.4   81.9 ± 0.4
            79.2 ± 0.4   78.7 ± 0.5   78.8 ± 0.5    80.4 ± 0.6   80.1 ± 0.6   80.1 ± 0.6

In fact, the topographic error appears to be correlated with the F-score. The interpretation of the quantization error is less clear, as it is not possible to compare the error across different datasets since this metric is not normalized. The prediction benchmark results are globally better with the standard version than with the batch version. Furthermore, the results of Sparse-Som also seem to be more stable, and never fall much below the Sparse-BSom results. In particular, a significant gap occurs between the two versions for the news20 and sector datasets, which are both very sparse. However, we cannot conclude that sparseness has a generally negative impact on the batch version.


6 Conclusions

In [1], we showed that an appropriate reformulation of the calculations of the SOM algorithm compensates for the computation-time cost due to the data sparseness commonly found in data mining, and more specifically in text mining. Similar results, obtained with an extended dataset on a more powerful computer architecture, confirm the consistency of our previous tests. The training time evolves in relation to vector density, decreasing proportionally as the sparsity rate increases; and the sparser the vectors, the larger the speed advantage of Sparse-BSom over Somoclu.

As pointed out in our earlier experiments, the advantage brought by our implementations is clearly effective on very sparse data, but it is somewhat attenuated by memory access latency. Memory access time is a key performance factor on common computer architectures, so optimizing processor cache usage is very important and remains a prime concern. Regarding the sequential comparison of our two optimized versions against each other, it initially remained unclear why Sparse-Som performed better than Sparse-BSom on dense data. As highlighted in further analysis, sparse computations generate an increase in cache misses for large vectors. In short, Sparse-Som is efficient with large sparse datasets but not with small dense ones, while Sparse-BSom is effective with very sparse datasets. The latter can be parallelized on multiple CPUs, as demonstrated by our experiments with OpenMP, while the former is harder to parallelize, due to the amount of synchronization required.

We have not explored ways to leverage the parallelization capabilities of today's GPUs to accelerate sparse SOM computation. More study in this direction would certainly be beneficial. Because the codebook itself is still dense, it remains difficult to manage when it becomes extremely large. To solve this issue, the SOM algorithm would have to be modified to use a sparse codebook, which leads us to plan further research on this subject.

As regards the maps obtained with both our versions, we carried out an empirical analysis using various datasets. Our results support the assumption that the behavior of the standard version is more stable and generally produces better overall results than the batch version.

In order to ensure reliable reproducibility of our results, our complete implementation is freely available online for the research community, with its documentation, on GitHub, under the terms of the GNU General Public License (https://github.com/yoch/sparse-som).


Appendix: Perf Analysis of Serial Runs

Note: sparse-bsom-v2 is a variation of the Sparse-BSom algorithm with the outer loop on the data and the inner loop on the codebook in the BMU search.

MNIST
=====
Performance counter stats for './sparse-som -i mnist.scale ...':

    1 218 883 259 667    instructions
           12 268 689    cache-misses
              203 797    page-faults

    161,142366866 seconds time elapsed

Performance counter stats for './sparse-bsom -i mnist.scale ...':

    1 139 135 319 378    instructions
       11 609 573 560    cache-misses
              260 207    page-faults

    194,187298049 seconds time elapsed

Performance counter stats for './sparse-bsom-v2 -i mnist.scale ...':

    1 249 206 278 565    instructions
        2 135 657 517    cache-misses
              165 979    page-faults

    177,912495869 seconds time elapsed

NEWS20
======
Performance counter stats for './sparse-som -i news20.scale ...':

      198 365 328 105    instructions
        8 905 964 788    cache-misses
               35 271    page-faults

    144,574849202 seconds time elapsed

Performance counter stats for './sparse-bsom -i news20.scale ...':

      186 124 184 501    instructions
           80 973 586    cache-misses
               34 429    page-faults

    34,220834622 seconds time elapsed

Performance counter stats for './sparse-bsom-v2 -i news20.scale ...':

      202 038 121 713    instructions
        5 552 343 423    cache-misses
               28 890    page-faults

    108,985345691 seconds time elapsed


References

1. Melka, J., Mariage, J.: Efficient implementation of self-organizing map for sparse input data. In: Proceedings of the 9th International Joint Conference on Computational Intelligence, IJCCI 2017, pp. 54–63, Funchal, Madeira, Portugal (2017)
2. Ultsch, A.: Data mining and knowledge discovery with emergent self-organizing feature maps for multivariate time series. Kohonen Maps 46, 33–46 (1999)
3. Kohonen, T.: Self-organized formation of topologically correct feature maps. Biol. Cybern. 43, 59–69 (1982)
4. Honkela, T., Kaski, S., Lagus, K., Kohonen, T.: Newsgroup exploration with WEBSOM method and browsing interface. Technical Report A32, Helsinki University of Technology (1996)
5. Kaski, S., Honkela, T., Lagus, K., Kohonen, T.: WEBSOM - self-organizing maps of document collections. Neurocomputing 21, 101–117 (1998)
6. Ultsch, A., Mörchen, F.: ESOM-Maps: tools for clustering, visualization, and classification with Emergent SOM. Technical Report 46, Department of Mathematics and Computer Science, University of Marburg, Germany (2005)
7. Polzlbauer, G., Dittenbach, M., Rauber, A.: A visualization technique for self-organizing maps with vector fields to obtain the cluster structure at desired levels of detail. In: Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, IJCNN'05, vol. 3, pp. 1558–1563. IEEE (2005)
8. Vesanto, J., Ahola, J.: Hunting for correlations in data using the self-organizing map. In: Proceedings of the International ICSC Congress on Computational Intelligence Methods and Applications (CIMA'99), pp. 279–285. ICSC Academic Press (1999)
9. Carpenter, G.A., Grossberg, S.: ART 2: self-organization of stable category recognition codes for analog input patterns. Appl. Opt. 26, 4919–4930 (1987)
10. He, J., Tan, A.-H., Tan, C.-L.: Modified ART 2A growing network capable of generating a fixed number of nodes. IEEE Trans. Neural Netw. 15, 728–737 (2004)
11. Carpenter, G.A., Grossberg, S., Rosen, D.B.: ART 2-A: an adaptive resonance algorithm for rapid category learning and recognition. Neural Netw. 4, 493–504 (1991)
12. Wittek, P., Gao, S.C., Lim, I.S., Zhao, L.: Somoclu: an efficient parallel library for self-organizing maps. J. Stat. Softw. 78, 1–21 (2017)
13. Liao, G., Chen, P., Du, L., Su, L., Liu, Z., Tang, Z., Shi, T.: Using SOM neural network for X-ray inspection of missing-bump defects in three-dimensional integration. Microelectron. Reliab. 55, 2826–2832 (2015)
14. Kohonen, T.: Self-Organizing Maps, 2nd edn. Springer Series in Information Sciences, vol. 30. Springer, Berlin (1997)
15. Kohonen, T.: Things you haven't heard about the self-organizing map. In: 1993 IEEE International Conference on Neural Networks, pp. 1147–1156 (1993)
16. Mulier, F., Cherkassky, V.: Self-organization as an iterative kernel smoothing process. Neural Comput. 7, 1165–1177 (1995)
17. Cheng, Y.: Convergence and ordering of Kohonen's batch map. Neural Comput. 9, 1667–1676 (1997)
18. Ienne, P., Thiran, P., Vassilas, N.: Modified self-organizing feature map algorithms for efficient digital hardware implementation. IEEE Trans. Neural Netw. 8, 315–330 (1997)
19. Lawrence, R.D., Almasi, G.S., Rushmeier, H.E.: A scalable parallel algorithm for self-organizing maps with applications to sparse data mining problems. Data Min. Knowl. Discov. 3, 171–195 (1999)
20. Maiorana, F.: Performance improvements of a Kohonen self organizing classification algorithm on sparse data sets. In: Proceedings of the 10th WSEAS International Conference on Mathematical Methods, Computational Techniques and Intelligent Systems, MAMECTIS'08, pp. 347–352. World Scientific and Engineering Academy and Society (WSEAS) (2008)
21. Natarajan, R.: Exploratory data analysis in large, sparse datasets. Technical Report, IBM Thomas J. Watson Research Division (1997)


22. Roussinov, D.G., Chen, H.: A scalable self-organizing map algorithm for textual classification: a neural network approach to thesaurus generation. Commun. Cogn. Artif. Intell. J. (1998)
23. Kohonen, T.: Essentials of the self-organizing map. Neural Netw. 37, 52–65 (2013)
24. Olteanu, M., Villa-Vialaneix, N.: Sparse online self-organizing maps for large relational data. In: Advances in Self-Organizing Maps and Learning Vector Quantization (Proceedings of WSOM 2016). Advances in Intelligent Systems and Computing, vol. 428, pp. 27–37. Springer, Houston, Texas, USA (2016)
25. Wu, C.H., Hodges, R.E., Wang, C.J.: Parallelizing the self-organizing feature map on multiprocessor systems. Parallel Comput. 17, 821–832 (1991)
26. Seiffert, U., Michaelis, B.: Multi-dimensional self-organizing maps on massively parallel hardware. In: Advances in Self-Organising Maps, pp. 160–166. Springer, Berlin (2001)
27. Guan, H., Li, C.K., Cheung, T.Y., Yu, S.: Parallel design and implementation of SOM neural computing model in PVM environment of a distributed system. In: Proceedings of the Advances in Parallel and Distributed Computing, pp. 26–31. IEEE (1997)
28. Bandeira, N., Lobo, V., Moura-Pires, F.: Training a self-organizing map distributed on a PVM network. In: 1998 IEEE International Joint Conference on Neural Networks Proceedings. IEEE World Congress on Computational Intelligence, vol. 1, pp. 457–461 (1998)
29. Tomsich, P., Rauber, A., Merkl, D.: Optimizing the parSOM neural network implementation for data mining with distributed memory systems and cluster computing. In: Proceedings of the 11th International Workshop on Database and Expert Systems Applications, pp. 661–665. IEEE (2000)
30. Labonté, G., Quintin, M.: Network parallel computing for SOM neural networks. In: High Performance Computing Systems and Applications, pp. 575–586. Springer, Berlin (2002)
31. Hämäläinen, T.D.: Parallel implementations of self-organizing maps. In: Seiffert, U., Jain, L.C. (eds.) Self-Organizing Neural Networks, pp. 245–278. Springer, New York (2002)
32. Campbell, A., Berglund, E., Streit, A.: Graphics hardware implementation of the parameterless self-organising map. In: International Conference on Intelligent Data Engineering and Automated Learning, pp. 343–350. Springer, Berlin (2005)
33. Moraes, F.C., Botelho, S.C., Duarte Filho, N., Gaya, J.F.O.: Parallel high dimensional self organizing maps using CUDA. In: Robotics Symposium and Latin American Robotics Symposium (SBR-LARS), pp. 302–306. IEEE, Brazilian (2012)
34. Richardson, T., Winer, E.: Extending parallelization of the self-organizing map by combining data and network partitioned methods. Adv. Eng. Softw. 88, 1–7 (2015)
35. Daneshpajouh, H., Delisle, P., Boisson, J.C., Krajecki, M., Zakaria, N.: Parallel batch self-organizing map on graphics processing unit using CUDA. In: Latin American High Performance Computing Conference, pp. 87–100. Springer, Berlin (2017)
36. Wittek, P., Darányi, S.: Accelerating text mining workloads in a MapReduce-based distributed GPU environment. J. Parallel Distrib. Comput. 73, 198–206 (2013)
37. Kohonen, T., Kaski, S., Lagus, K., Salojarvi, J., Honkela, J., Paatero, V., Saarela, A.: Self organization of a massive document collection. IEEE Trans. Neural Netw. 11, 574–585 (2000)
38. Lagus, K., Kaski, S., Kohonen, T.: Mining massive document collections by the WEBSOM method. Inf. Sci. 163, 135–156 (2004)
39. Takatsuka, M., Bui, M.: Parallel batch training of the self-organizing map using OpenCL. In: Neural Information Processing: Models and Applications, pp. 470–476. Springer, Berlin (2010)
40. Nordström, T.: Designing parallel computers for self organizing maps. In: Proceedings of the 4th Swedish Workshop on Computer System Architecture (DSA-92), pp. 13–15 (1992)
41. Dagum, L., Menon, R.: OpenMP: an industry standard API for shared-memory programming. IEEE Comput. Sci. Eng. 5, 46–55 (1998)
42. Yang, M.H., Ahuja, N.: A data partition method for parallel self-organizing map. In: International Joint Conference on Neural Networks, IJCNN'99, vol. 3, pp. 1929–1933. IEEE (1999)
43. Silva, B., Marques, N.: A hybrid parallel SOM algorithm for large maps in data-mining. New Trends in Artificial Intelligence (2007)
44. Chang, C.C., Lin, C.J.: LIBSVM data: classification (multi class). https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html (2006)


45. Ma, J., Saul, L.K., Savage, S., Voelker, G.M.: Identifying suspicious URLs: an application of large-scale online learning. In: Proceedings of the 26th Annual International Conference on Machine Learning, pp. 681–688. ACM (2009)
46. Stamper, J., Niculescu-Mizil, A., Ritter, S., Gordon, G., Koedinger, K.: Bridge to Algebra 2008–2009. Challenge data set from KDD Cup 2010 Educational Data Mining Challenge (2010)
47. Juan, Y., Zhuang, Y., Chin, W.S., Lin, C.J.: Field-aware factorization machines for CTR prediction. In: Proceedings of the 10th ACM Conference on Recommender Systems, pp. 43–50. ACM (2016)
48. Lewis, D.D., Yang, Y., Rose, T.G., Li, F.: RCV1: a new benchmark collection for text categorization research. J. Mach. Learn. Res. 5, 361–397 (2004)
49. Lang, K.: Newsweeder: learning to filter netnews. In: Proceedings of the 12th International Conference on Machine Learning, pp. 331–339 (1995)
50. McCallum, A., Nigam, K.: A comparison of event models for Naive Bayes text classification. In: AAAI/ICML-98 Workshop on Learning for Text Categorization. Technical Report WS-98-05, pp. 41–48 (1998)
51. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998)
52. Hull, J.J.: A database for handwritten text recognition research. IEEE Trans. Pattern Anal. Mach. Intell. 16, 550–554 (1994)
53. Wang, J.Y.: Application of support vector machines in bioinformatics. Ph.D. Thesis, National Taiwan University (2002)
54. Noordewier, M.O., Towell, G.G., Shavlik, J.W.: Training knowledge-based neural networks to recognize genes in DNA sequences. Adv. Neural Inf. Process. Syst. 3, 530–536 (1991)
55. King, R.D., Feng, C., Sutherland, A.: StatLog: comparison of classification algorithms on large real-world problems. Appl. Artif. Intell. Int. J. 9, 289–333 (1995)
56. Frey, P.W., Slate, D.J.: Letter recognition using Holland-style adaptive classifiers. Mach. Learn. 6, 161–182 (1991)
57. Fort, J.C., Letremy, P., Cottrell, M.: Advantages and drawbacks of the batch Kohonen algorithm. ESANN 2, 223–230 (2002)
58. Nöcker, M., Mörchen, F., Ultsch, A.: An algorithm for fast and reliable ESOM learning. In: ESANN, 14th European Symposium on Artificial Neural Networks, pp. 131–136 (2006)
59. Kohonen, T., Hynninen, J., Kangas, J., Laaksonen, J.: SOM_PAK: the self-organizing map program package. Report A31, Helsinki University of Technology, Laboratory of Computer and Information Science (1996)
60. Kiviluoto, K.: Topology preservation in self-organizing maps. In: IEEE International Conference on Neural Networks, vol. 1, pp. 294–299 (1996)

A Diffusion Approach to Unsupervised Segmentation of Hyper-Spectral Images Alon Schclar and Amir Averbuch

Abstract Hyper-spectral cameras capture images at hundreds and even thousands of wavelengths. These hyper-spectral images offer orders of magnitude more intensity information than RGB images. This information can be utilized to obtain segmentation results which are superior to those that are obtained using RGB images. However, many of the wavelengths are correlated and many others are noisy. Consequently, the hyper-spectral data must be preprocessed prior to the application of any segmentation algorithm. Such preprocessing must remove the noise and inter-wavelength correlations and, due to complexity constraints, represent each pixel by a small number of features which capture the structure of the image. The contribution of this paper is three-fold. First, we utilize the diffusion bases dimensionality reduction algorithm (Schclar and Averbuch in Diffusion bases dimensionality reduction, pp. 151–156, [1]) to derive the features which are needed for the segmentation. Second, we describe a faster version of the diffusion bases algorithm which uses symmetric matrices. Third, we propose a simple algorithm for the segmentation of the dimensionality-reduced image. Successful application of the algorithms to hyper-spectral microscopic images and remote-sensed hyper-spectral images demonstrates the effectiveness of the proposed algorithms.

Keywords Segmentation · Diffusion bases · Dimensionality reduction · Hyper-spectral sensing

A. Schclar (B) School of Computer Science, Academic College of Tel-Aviv Yaffo, POB 8401, 61083 Tel Aviv, Israel e-mail: [email protected] A. Averbuch School of Computer Science, Tel Aviv University, POB 39040, 69978 Tel Aviv, Israel e-mail: [email protected] © Springer Nature Switzerland AG 2019 C. Sabourin et al. (eds.), Computational Intelligence, Studies in Computational Intelligence 829, https://doi.org/10.1007/978-3-030-16469-0_9


1 Introduction

Image segmentation is the process of partitioning an image into disjoint regions, where pixels that belong to the same region are more similar than pixels that belong to different regions. Each region is referred to as a segment.

Regular CCD cameras provide very limited spectral information, as they are equipped with sensors that only capture details that are visible to the naked eye. Hyper-spectral cameras, however, are equipped with multiple sensors; each sensor is sensitive to a subrange of the light spectrum, including spectrum ranges that are not visible to the naked eye, namely infra-red and ultra-violet. Their output contains the reflectance values of a scene at all the wavelengths of the sensors. Hyper-spectral cameras can be mounted on airplanes (e.g. [2]) or microscopes [3], or they can be hand held [4].

A hyper-spectral image is composed of a set of images, each containing the reflectance values for a particular wavelength subrange. We refer to the set of reflectance values at a coordinate (x, y) as a hyper-pixel. Each hyper-pixel can be represented by a vector in R^n, where n is the number of wavelength subranges. This data can be used to achieve inferences that cannot be derived from the limited number of wavelengths obtained by regular cameras.

Usually, the number of wavelengths is much higher than the actual number of degrees of freedom of the data. Unfortunately, this phenomenon is usually unavoidable, since one rarely knows in advance which sensor values are more important for the task at hand and cannot produce a special set of sensors for each application. Consider for example a task that separates red objects from green objects using an off-the-shelf CCD camera. In this case, the camera will produce, in addition to the red and green channels, a blue channel, which is unnecessary for this task.

Effective utilization of the wealth of wavelengths can yield segmentation results that are better than those obtained by merely using RGB data; for example, infra-red data describes the temperature of the scene. One could simply apply classical image processing techniques to each wavelength image individually. However, this disregards inter-wavelength connections, which are inherent in the spectral signatures of the captured objects. Furthermore, the high number of wavelengths renders the application of segmentation algorithms to the entire hyper-spectral image useless due to the curse of dimensionality. Thus, the entire hyper-spectral cube needs to be preprocessed in order to analyze the physical nature of the scene. Naturally, this has to be done efficiently due to the large volume of the data.

Commonly, hyper-spectral images contain a high degree of correlation between many of the wavelengths, which renders many of them redundant. Moreover, certain wavelengths contain noise as a result of poor lighting conditions and the physical condition of the camera at the time the images were captured. Consequently, the noise and the redundant data need to be removed while maintaining the information which is vital for the segmentation. This information should be represented as concisely as possible, i.e. each hyper-pixel should be represented using a small number of features. This will alleviate the curse of dimensionality and allow the efficient application of


segmentation algorithms to the concise representation of the hyper-spectral image. To achieve this, dimensionality reduction is applied to the hyper-spectral image.

In this paper we extend the results in [5]. We reduce the dimensionality of the hyper-spectral image by using the diffusion bases (DB) dimensionality reduction algorithm [1]. The DB algorithm efficiently captures non-linear inter-wavelength correlations and produces a low-dimensional representation in which the amount of noise is drastically reduced. We also propose a modified version of the DB algorithm that uses the eigen-decomposition of symmetric matrices that are conjugate to the Markov matrix used in [1, 5]. We segment the dimension-reduced data using a simple and efficient two-phase histogram-based segmentation algorithm. We refer to this method as the Wavelength-wise Global (WWG) segmentation algorithm.

This paper is organized as follows: Sect. 2 presents a survey of related work on the segmentation of hyper-spectral images. The diffusion bases scheme [1], along with its modified version, is described in Sect. 3. In Sect. 4 we introduce the Wavelength-wise Global (WWG) segmentation algorithm. Section 5 contains experimental results from the application of the algorithm to several hyper-spectral images. Concluding remarks and future research are given in Sect. 6.

2 Related Work

Segmentation methods for hyper-spectral images can be divided into two categories: supervised and unsupervised. Supervised methods use a-priori spectral information on the sought-after segments, information about the shape of the segments, or both. Unsupervised segmentation techniques do not use any a-priori information. The method proposed in this paper falls into the latter category.

In [6], a variational model for simultaneous segmentation and denoising/deblurring of a hyper-spectral image is proposed. The image is modeled as a set of three-dimensional tensors. The spectral signatures of the sought-after materials are known a-priori and are used in the model. The segmentation is obtained via a statistical moving-average method which uses the spatial variation of spectral correlation. Specifically, a coarse-grained spectral correlation function is computed over a small moving 2D spatial cell of fixed shape and size. This function produces sharp variations as the averaging cell crosses a boundary between two materials.

The method in [7] uses both a-priori spectral information and shape information about the segments. Specifically, it uses the model proposed in [8], which is a convexification of the two-phase version of the Mumford-Shah model. The model uses variational methods to find a smooth, minimal-length curve that divides the image into two regions that are as close as possible to being homogeneous. The a-priori spectral and shape information is incorporated in the variational model and its optimization.

In [9], a supervised Bayesian segmentation approach is proposed. The method makes use of both spectral and spatial information. The two-phase algorithm first


implements a learning step, which uses multinomial logistic regression via variable splitting and augmented Lagrangian (LORSAL) [10] to infer the class distributions. This is followed by a segmentation step which infers the labels from a posterior distribution built on the learned class distributions. Then, a maximum a-posteriori (MAP) segmentation is computed via a min-cut-based integer optimization algorithm. In order to reduce the size of the training set, the algorithm uses an active learning technique based on the mutual information (MI) between the MLR regressors and the class labels.

An extension of the watershed segmentation algorithm [11] is proposed in [2]. The algorithm is used to derive information about spatial structures and uses one-band gradient functions. The segmentation maps are incorporated into a spectral-spatial classification scheme based on a pixel-wise SVM classifier.

3 The Diffusion Bases Dimensionality Reduction Algorithm

The Diffusion Bases (DB) dimensionality reduction algorithm [1] reduces the dimensionality of a dataset by utilizing the inter-coordinate variability of the original data (in this sense it is dual to the Diffusion Maps algorithm [12–14]). It first constructs the graph Laplacian using the image wavebands as the datapoints. It then uses the Laplacian eigenvectors as an orthonormal system and projects the hyper-pixels onto it. The eigenvectors are sorted in descending order of their corresponding eigenvalues, and only the eigenvectors that correspond to the highest eigenvalues are used. These eigenvectors capture the non-linear coordinate-wise variability of the original data. Although bearing some similarity to PCA, this process yields better results than PCA due to: (a) its ability to capture non-linear manifolds within the data by local exploration of each coordinate; (b) its robustness to noise. Furthermore, this process is more general than PCA and produces similar results to PCA when the weight function $w_\varepsilon$ is linear, e.g. the inner product.

Let $X = \{x_i\}_{i=1}^{m}$, $x_i \in \mathbb{R}^n$, be the original dataset of hyper-pixels and let $x_i(j)$ denote the $j$th coordinate (the reflectance value of the $j$th band) of $x_i$, $1 \le j \le n$. We define the vector $x'_j \triangleq (x_1(j), \ldots, x_m(j))$ to be the $j$th coordinate of all the points in $X$, i.e. the image corresponding to the $j$th band. We construct the set

$$X' = \{x'_j\}_{j=1}^{n}. \qquad (1)$$

Let $w_\varepsilon(x'_i, x'_j)$ be a weight function which measures the pairwise similarity between the points in $X'$. A Markov transition matrix $P$ is constructed by normalizing the sum of each row in the matrix $w_\varepsilon$ to be 1:

$$p(x'_i, x'_j) = \frac{w_\varepsilon(x'_i, x'_j)}{d(x'_i)} \qquad (2)$$

where

$$d(x'_i) = \sum_{j=1}^{n} w_\varepsilon(x'_i, x'_j). \qquad (3)$$

Equations 1–3 were extracted from [5]. Next, the eigen-decomposition of $p(x'_i, x'_j)$ is performed:

$$p(x'_i, x'_j) \equiv \sum_{k=1}^{n} \lambda_k \, \nu_k(x'_i) \, \mu_k(x'_j)$$

where the left and the right eigenvectors of $P$ are given by $\{\mu_k\}$ and $\{\nu_k\}$, respectively, and $\{\lambda_k\}$ are the eigenvalues of $P$ in descending order of magnitude. We use the eigenvalue decay property of the eigen-decomposition to extract only the first $\eta(\delta)$ eigenvectors $B \triangleq \{\nu_k\}_{k=1,\ldots,\eta(\delta)}$, which contain the non-linear directions with the highest variability of the coordinates of the original dataset $X$. We project the original data $X$ onto the basis $B$. Let $X_B = \{g_i\}_{i=1}^{m}$, $g_i \in \mathbb{R}^{\eta(\delta)}$, be the set of these projections, where $g_i = \left(x_i \cdot \nu_1, \ldots, x_i \cdot \nu_{\eta(\delta)}\right)$, $i = 1, \ldots, m$, and $\cdot$ denotes the inner product operator. $X_B$ is the reduced-dimension representation of $X$ and it contains the coordinates of the original points in the orthonormal system whose axes are given by $B$. The DB algorithm is summarized in Algorithm 1.

3.1 Numerical Enhancement of the Eigen-Decomposition

The Markov matrix obtained in Eq. 2 is not symmetric, and working with a symmetric matrix is faster. A symmetric matrix $A$, which is conjugate to $P$, can be obtained in the following way:

$$a(x'_i, x'_j) = \frac{w_\varepsilon(x'_i, x'_j)}{\sqrt{d(x'_i)}\sqrt{d(x'_j)}} \qquad (4)$$

where $d(x'_i)$ and $d(x'_j)$ are defined in Eq. 3. Let $\{\vartheta_k\}_{k=1,\ldots,m}$ be the eigenvectors of $A$. It can be shown (see [15]) that $P$ and $A$ have the same eigenvalues and that

$$\nu_k = \frac{\vartheta_k}{\vartheta_1}; \qquad \mu_k = \vartheta_k \vartheta_1 \qquad (5)$$

where $\{\mu_k\}$ and $\{\nu_k\}$ are the left and right eigenvectors of $P$, respectively. This leads to modifications of the DB algorithm (Algorithm 1). The modified DB algorithm (abbreviated as MDB from this point on) projects the hyper-pixels onto the orthonormal basis $\{\vartheta_k\}_{k=1}^{\eta(\delta)}$ instead of $\{\nu_k\}_{k=1}^{\eta(\delta)}$. The MDB algorithm is given in Algorithm 2.


Algorithm 1. The Diffusion Basis algorithm [1, 5].
DiffusionBasis(X', w_ε, ε, δ)

1. Calculate the weight function $w_\varepsilon(x'_i, x'_j)$, $i, j = 1, \ldots, n$.
2. Construct a Markov transition matrix $P$ by normalizing each row in $w_\varepsilon$ to sum to 1:
   $$p(x'_i, x'_j) = \frac{w_\varepsilon(x'_i, x'_j)}{d(x'_i)}$$
   where $d(x'_i) = \sum_{j=1}^{n} w_\varepsilon(x'_i, x'_j)$.
3. Perform the eigen-decomposition of $p(x'_i, x'_j)$:
   $$p(x'_i, x'_j) \equiv \sum_{k=1}^{n} \lambda_k \, \nu_k(x'_i) \, \mu_k(x'_j)$$
   where the left and the right eigenvectors of $P$ are given by $\{\mu_k\}$ and $\{\nu_k\}$, respectively, and $\{\lambda_k\}$ are the eigenvalues of $P$ in descending order of magnitude.
4. Project the original data $X$ onto the orthonormal system $B \triangleq \{\nu_k\}_{k=1,\ldots,\eta(\delta)}$:
   $$X_B = \{g_i\}_{i=1}^{m}, \quad g_i \in \mathbb{R}^{\eta(\delta)}$$
   where $g_i = \left(x_i \cdot \nu_1, \ldots, x_i \cdot \nu_{\eta(\delta)}\right)$, $i = 1, \ldots, m$, $\nu_k \in B$, $1 \le k \le \eta(\delta)$, and $\cdot$ is the inner product.
5. Return $X_B$.
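For illustration, a minimal NumPy sketch of Algorithm 1 follows. It assumes a Gaussian kernel as the weight function $w_\varepsilon$ (the algorithm leaves $w_\varepsilon$ generic) and dense arrays; it is a sketch, not the authors' implementation.

```python
import numpy as np

def diffusion_basis(X, eps, n_components):
    # X: (m, n) matrix of hyper-pixels; its columns are the wavebands x'_j.
    Xp = X.T                                      # the set X' of Eq. (1)
    sq = ((Xp[:, None, :] - Xp[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq / eps)                         # Gaussian choice of w_eps
    P = W / W.sum(axis=1, keepdims=True)          # Eqs. (2)-(3)
    vals, vecs = np.linalg.eig(P)                 # right eigenvectors nu_k
    order = np.argsort(-np.abs(vals))
    B = np.real(vecs[:, order[:n_components]])    # leading eta(delta) directions
    return X @ B                                  # g_i = (x_i . nu_1, ..., x_i . nu_eta)
```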

4 The Wavelength-Wise Global (WWG) Segmentation Algorithm

We introduce a simple and efficient two-phase approach for the segmentation of hyper-spectral images. The first phase reduces the dimensionality of the data using the DB algorithm and the second phase applies a histogram-based method to cluster the low-dimensional data.

We model a hyper-spectral image as a three-dimensional cube where the first two coordinates correspond to the position (x, y) and the third coordinate corresponds to the wavelength $\lambda_k$. Let

$$I = \left(p_{ij}^{\lambda_k}\right)_{i,j=1,\ldots,m;\; k=1,\ldots,n} \in \mathbb{R}^{m \times m \times n} \qquad (6)$$

be a hyper-spectral image cube, where the size of the image is $m \times m$ and $n$ is the number of wavelengths. For notational simplicity, we assume that the images are square. It is important to note that almost always $n \ll m^2$.


Algorithm 2. The modified Diffusion Basis algorithm.
ModifiedDiffusionBasis(X', w_ε, ε, δ)

1. Calculate the weight function $w_\varepsilon(x'_i, x'_j)$, $i, j = 1, \ldots, n$.
2. Construct the matrix $A$:
   $$a(x'_i, x'_j) = \frac{w_\varepsilon(x'_i, x'_j)}{\sqrt{d(x'_i)}\sqrt{d(x'_j)}}$$
   where $d(x'_i) = \sum_{j=1}^{n} w_\varepsilon(x'_i, x'_j)$.
3. Perform the eigen-decomposition of $a(x'_i, x'_j)$:
   $$a(x'_i, x'_j) \equiv \sum_{k=1}^{n} \lambda_k \, \vartheta_k(x'_i) \, \vartheta_k(x'_j)$$
   where $\{\vartheta_k\}$ are the eigenvectors of $A$ and $\{\lambda_k\}$ are the eigenvalues of $A$ in descending order of magnitude.
4. Project the original data $X$ onto the orthonormal system $B \triangleq \{\vartheta_k\}_{k=1,\ldots,\eta(\delta)}$:
   $$X_B = \{g_i\}_{i=1}^{m}, \quad g_i \in \mathbb{R}^{\eta(\delta)}$$
   where $g_i = \left(x_i \cdot \vartheta_1, \ldots, x_i \cdot \vartheta_{\eta(\delta)}\right)$, $i = 1, \ldots, m$, $\vartheta_k \in B$, $1 \le k \le \eta(\delta)$, and $\cdot$ denotes the inner product.
5. Return $X_B$.
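The symmetric variant differs from the sketch above only in the normalization and in the eigensolver; a symmetric matrix admits a dedicated, faster routine with guaranteed real output. Again a hedged sketch with a Gaussian kernel, not the authors' code:

```python
import numpy as np

def modified_diffusion_basis(X, eps, n_components):
    Xp = X.T
    sq = ((Xp[:, None, :] - Xp[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq / eps)
    d = W.sum(axis=1)
    A = W / np.sqrt(np.outer(d, d))      # Eq. (4): symmetric, conjugate to P
    vals, vecs = np.linalg.eigh(A)       # symmetric solver: faster, real output
    order = np.argsort(-np.abs(vals))
    return X @ vecs[:, order[:n_components]]
```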

$I$ can be viewed in two ways:

1. Wavelength-wise: $I = \{I^{\lambda_l}\}$ is a collection of $n$ images of size $m \times m$, where

   $$I^{\lambda_l} \triangleq \left(p_{ij}^{\lambda_l}\right) \in \mathbb{R}^{m \times m}, \quad 1 \le l \le n \qquad (7)$$

   is the image that corresponds to wavelength $\lambda_l$.
2. Point-wise: $I = \left(\vec{I}_{ij}\right)_{i,j=1}^{m}$ is an $m \times m$ collection of $n$-dimensional vectors, where

   $$\vec{I}_{ij} \triangleq \left(p_{ij}^{\lambda_1}, \ldots, p_{ij}^{\lambda_n}\right) \in \mathbb{R}^n, \quad 1 \le i, j \le m \qquad (8)$$

   is the hyper-pixel at position $(i, j)$.

The proposed WWG algorithm assumes the wavelength-wise setting of a hyper-spectral image. Thus, we regard each image as an $m^2$-dimensional vector. Formally, let

$$\tilde{I} \triangleq \left(\pi_{i,\lambda_l}\right)_{i=1,\ldots,m^2;\; l=1,\ldots,n} \in \mathbb{R}^{m^2 \times n} \qquad (9)$$

be a 2-D matrix corresponding to $I$, where


$$\pi_{i+(j-1)\cdot m,\,\lambda_k} \triangleq p_{ij}^{\lambda_k}, \quad 1 \le k \le n, \quad 1 \le i, j \le m$$

($p_{ij}^{\lambda_k}$ is defined in Eq. 6) and let

$$\tilde{I}^{\lambda_k} \triangleq \begin{pmatrix} \pi_{1,\lambda_k} \\ \vdots \\ \pi_{m^2,\lambda_k} \end{pmatrix} \in \mathbb{R}^{m^2}, \quad 1 \le k \le n \qquad (10)$$

be a column vector that corresponds to $I^{\lambda_k}$ (see Eq. 7).

4.1 Phase 1: Reduction of Dimensionality via DB

Different sensors can produce values at different scales. Thus, in order to have a uniform scale for all the sensors, each column vector $\tilde{I}^{\lambda_k}$, $1 \le k \le n$, is normalized to be in the range [0, 1]. We form the set of vectors $X = \{\tilde{I}^{\lambda_1}, \ldots, \tilde{I}^{\lambda_n}\}$ from the columns of $\tilde{I}$ and we apply the DB algorithm to $X$. We denote the dimension-reduced representation of $X$ by $X_B$.

4.2 Phase 2: Histogram-Based Segmentation

We introduce a histogram-based segmentation algorithm that extracts objects from $X$ using $X_B$. For notational convenience, we denote $\eta(\delta) - 1$ by $\eta$ hereinafter. We denote by $G$ the cube representation of the set $X_B$ in accordance with Eq. 6:

$$G \triangleq \left(g_{ij}^{k}\right)_{i,j=1,\ldots,m;\; k=1,\ldots,\eta}, \quad G \in \mathbb{R}^{m \times m \times \eta}.$$

We assume a wavelength-wise setting for $G$. Let $\tilde{G}$ be a 2-D matrix in the setting defined in Eq. 9 that corresponds to $G$. Thus, $\tilde{G}^{l} \triangleq \left(g_{ij}^{l}\right)_{i,j=1,\ldots,m} \in \mathbb{R}^{m \times m}$, $1 \le l \le \eta$, corresponds to a column in $\tilde{G}$ and $\vec{g}_{ij} \triangleq \left(g_{ij}^{1}, \ldots, g_{ij}^{\eta}\right) \in \mathbb{R}^{\eta}$, $1 \le i, j \le m$, corresponds to a row in $\tilde{G}$. The coordinates of $\vec{g}_{ij}$ will be referred to hereinafter as colors.

The segmentation is achieved by clustering hyper-pixels with similar colors. This is based on the assumption that similar objects in the image will have a similar set of color vectors in $X_B$. These colors contain the correlations between the original hyper-pixels and the global inter-wavelength changes of the image. Thus, homogeneous


regions in the image have similar correlations with the changes, i.e. close colors, where closeness between colors is measured by the Euclidean distance.

The segmentation-by-colors algorithm consists of the following steps (a Python sketch of steps 1–4 is given after Algorithm 3):

1. Normalization of the Input Image Cube $G$: First, we normalize each wavelength of the image cube to be in [0, 1]. Let $G^k$ be the $k$th ($k$ is the color index) color layer of the image cube $G$. We denote by $\hat{G}^k = \left(\hat{g}_{ij}^{k}\right)_{i,j=1,\ldots,m}$ the normalization of $G^k$ and define it to be

   $$\hat{g}_{ij}^{k} \triangleq \frac{g_{ij}^{k} - \min\left(G^k\right)}{\max\left(G^k\right) - \min\left(G^k\right)}, \quad 1 \le k \le \eta. \qquad (11)$$

2. Uniform Quantization of the Normalized Input Image Cube $\hat{G}$: Let $l \in \mathbb{N}$ be a given number of quantization levels. We uniformly quantize every value in $\hat{G}^k$ to be one of $l$ possible values. The quantized matrix is given by $Q$:

   $$Q \triangleq \left(q_{ij}^{k}\right)_{i,j=1,\ldots,m;\; k=1,\ldots,\eta}, \quad q_{ij}^{k} \in \{1, \ldots, l\} \qquad (12)$$

   where $q_{ij}^{k} = \left\lceil l \cdot \hat{g}_{ij}^{k} \right\rceil$. We denote the quantized color vector at coordinate $(i, j)$ by

   $$\vec{c}_{ij} \triangleq \left(q_{ij}^{1}, \ldots, q_{ij}^{\eta}\right) \in \mathbb{R}^{\eta}, \quad 1 \le i, j \le m. \qquad (13)$$

3. Construction of the Frequency Color Histogram: We construct the frequency function $f: \{1, \ldots, l\}^{\eta} \to \mathbb{N}$, where for every $\kappa \in \{1, \ldots, l\}^{\eta}$, $f(\kappa)$ is the number of quantized color vectors $\vec{c}_{ij}$, $1 \le i, j \le m$, that are equal to $\kappa$.

4. Finding Peaks in the Histogram: Local maxima points (called peaks) of the frequency function $f$ are detected. We assume that each peak corresponds to a different object in the image cube $G$. Here we use the classical notion of segmentation: separating object from background. Indeed, the highest peak corresponds to the largest homogeneous area, which in most cases is the background. The histogram may have many peaks. Therefore, we perform an iterative procedure to find the $\theta$ highest peaks, where the number $\theta$ of sought-after peaks is given as a parameter to the algorithm. This parameter corresponds to the number of objects we seek. The algorithm is also given an integer parameter $\xi$, which specifies the cube radius around a peak. We define the $\xi$-neighborhood of a coordinate $(x_1, \ldots, x_\eta)$ to be $N_\xi(x_1, \ldots, x_\eta) = \left\{ (y_1, \ldots, y_\eta) \,\middle|\, \max_k \{|y_k - x_k|\} \le \xi \right\}$. The coordinates outside the neighborhood $N_\xi$ are the candidates for the locations of new peaks. An iterative procedure is used in order to find all the peaks. The peaks are labeled $1, \ldots, \theta$. The output of the algorithm is a set of vectors $\Psi = \{\vec{\rho}_i\}_{i=1,\ldots,\theta}$, $\vec{\rho}_i = \left(\rho_i^1, \ldots, \rho_i^\eta\right) \in \mathbb{N}^\eta$, that contains the highest peaks. A summary of this step is given in Algorithm 3.


Algorithm 3. The PeaksFinder Algorithm [5].
PeaksFinder(f, θ, ξ)

(a) Ψ ← ∅
(b) while |Ψ| ≤ θ
(c)   Find the next global maximum c of f.
(d)   Add the coordinates of c to Ψ.
(e)   Zero all the values of f in the ξ-neighborhood of c.
(f) end while
(g) return Ψ.
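The sketch below renders steps 1–3 of the segmentation-by-colors procedure and Algorithm 3 in Python. It is an illustration under stated assumptions: the ceiling in the quantization is our reading of Eq. 12, and the histogram is represented as a dictionary rather than a dense $l^\eta$ array.

```python
import numpy as np
from collections import Counter

def wwg_histogram(G, l):
    # Steps 1-3: normalize each colour layer to [0, 1] (Eq. 11), quantize it
    # to l levels (Eq. 12) and count the quantized colour vectors (Eq. 13).
    lo = G.min(axis=(0, 1))
    hi = G.max(axis=(0, 1))
    Q = np.ceil(l * (G - lo) / (hi - lo)).clip(1, l).astype(int)
    f = Counter(map(tuple, Q.reshape(-1, Q.shape[2])))
    return Q, f

def peaks_finder(f, theta, xi):
    # Algorithm 3: repeatedly take the most frequent colour and suppress its
    # xi-neighbourhood (max-norm cube) before looking for the next peak;
    # the loop stops once theta peaks have been collected.
    f = dict(f)
    peaks = []
    while len(peaks) < theta and f:
        c = max(f, key=f.get)
        peaks.append(c)
        f = {k: v for k, v in f.items()
             if max(abs(a - b) for a, b in zip(k, c)) > xi}
    return peaks
```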

Algorithm 4. A drill-down segmentation algorithm.
DrillDown(X', w_ε, ε, δ, C)

(a) X_B = DiffusionBasis(X', w_ε, ε, δ)            // Algorithm 1
(b) Ω_ij = WWG(X_B)                                 // described in Sect. 4
(c) X_B(C) = DiffusionBasis(X'(C), w_ε, ε, δ)       // Algorithm 1
(d) Ω_ij(C) = WWG(X_B(C))                           // described in Sect. 4

5. Finding the Nearest Peak to Each Color: Once the highest peaks are found, each quantized color vector is associated with a single peak. The underlying assumption is that quantized color vectors which are associated with the same peak belong to the same object in the color image cube $I$. Each quantized color is associated with the peak that is closest to it with respect to the Euclidean distance, and is labeled by the number of its associated peak. We denote by

   $$\gamma: \vec{c}_{ij} \to d \in \{1, \ldots, \theta\} \qquad (14)$$

   this mapping function, where

   $$\gamma\left(\vec{c}_{ij}\right) \triangleq \arg\min_{1 \le k \le \theta} \left\| \vec{\rho}_k - \vec{c}_{ij} \right\|. \qquad (15)$$

6. Construction of the Output Image: The final step assigns a unique color $\kappa_i$, $1 \le i \le \theta$, to each coordinate in the image according to its label $\gamma(\vec{c}_{ij})$. We denote the output image of this step by $\Omega$.

Equations 6–15 were extracted from [5].
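Steps 5 and 6 amount to a nearest-neighbour assignment over the peak set; a minimal NumPy sketch (our illustration, with label indices starting at 0 rather than 1):

```python
import numpy as np

def label_by_nearest_peak(Q, peaks):
    # Eqs. (14)-(15): map every quantized colour vector to the index of its
    # closest peak (Euclidean distance); reshaping gives the output image Omega.
    m1, m2, eta = Q.shape
    colours = Q.reshape(-1, eta).astype(float)
    P = np.asarray(peaks, dtype=float)                        # (theta, eta)
    d = np.linalg.norm(colours[:, None, :] - P[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(m1, m2)                   # labels 0..theta-1
```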

4.3 Hierarchical Extension of the WWG Algorithm

We construct a hierarchical extension to the WWG algorithm in the following way: given the output $\Omega$ of the WWG algorithm, the user can choose one of


the objects and apply the WWG algorithm to the original hyper-pixels which belong to this object. Let $C$ be the color of the chosen object. We define $X(C)$ to be the set of the original hyper-pixels which belong to this object:

$$X(C) = \left\{ \left(p_{ij}^{1}, \ldots, p_{ij}^{n}\right) \,\middle|\, \Omega_{ij} = C,\; i, j = 1, \ldots, m \right\}.$$

This facilitates a drill-down function that enables a finer segmentation of a specific object in the image. We form the set $X'(C)$ from $X(C)$ as described in Sect. 3 and run the DiffusionBasis algorithm (Algorithm 1) on $X'(C)$. Obviously, the size of the input is smaller than that of the original data, thus allowing the finer segmentation of the chosen object. We denote the result of this stage by $X_B(C)$. Next, WWG is applied to $X_B(C)$ and the result is given by $\Omega_{ij}(C)$. The drill-down algorithm is outlined in Algorithm 4. This step can be applied to other objects in the image, as well as to the drill-down result itself.

5 Experimental Results

The results are divided into two parts: (a) segmentation of hyper-spectral microscopy images; and (b) segmentation of remote-sensed hyper-spectral images. We provide the results using the two dimensionality reduction schemes that were described in Algorithms 1 and 2. We denote the size of the hyper-spectral images by $m \times m \times n$, where the size of every wavelength image is $m \times m$ and $n$ is the number of wavelengths.

The geometry (objects, background, etc.) of each hyper-spectral image is displayed using a gray image $\Upsilon$. This image is obtained by averaging the hyper-spectral image along the wavelengths. Given a hyper-spectral image $I$ of size $m \times m \times n$, $\Upsilon = \left(\upsilon_{ij}\right)_{i,j=1,\ldots,m}$ is obtained by (taken from [5])

$$\upsilon_{ij} = \frac{1}{n} \sum_{k=1}^{n} I_{ij}^{k}, \quad 1 \le i, j \le m. \qquad (16)$$
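In code, Eq. (16) is a one-line reduction over the wavelength axis; the sketch below assumes the cube is held as a NumPy array of shape (m, m, n).

```python
import numpy as np

def wav(I):
    # Eq. (16): average the m x m x n cube along the wavelength axis
    # to obtain the grey-level geometry image Upsilon.
    return I.mean(axis=2)
```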

We refer to $\Upsilon$ as the wavelength-averaged version (WAV) of the image. All the results were obtained using the automatic procedure for choosing ε which is described in [1].

Segmentation of Hyper-spectral Microscopy Images. Figures 1 and 2 contain samples of healthy human tissue and the results of the application of the WWG algorithm to them. The images are of sizes 300 × 300 × 128 and 151 × 151 × 128, respectively. The images contain three types of substances: nuclei, cytoplasm and glass; the glass belongs to the plate on which the tissue sample lies. Figure 1b, c show the 50th and 95th wavelengths, respectively. The images in the 50th through the 70th wavelengths are less noisy than the rest of the wavelengths, which resemble Fig. 1c. Figure 1d, e display the results after the application of the WWG and the modified-WWG algorithms, respectively. The algorithm clearly segments this image into three parts: the background is colored in dark gray, the cytoplasm in medium shaded gray and the nuclei in light gray.


Fig. 1 A hyper-spectral microscopy image of a healthy human tissue. a The WAV of the original image. b The 50th wavelength. c The 95th wavelength. d The results of the application of the WWG algorithm with η(δ) = 4, θ = 3, ξ = 3, l = 32. e The results of the application of the modified-WWG algorithm with η(δ) = 4, θ = 3, ξ = 1, l = 16. Taken from [5]

Figure 2b, c show the 40th and 107th wavelengths, respectively. The images in the 40th through the 55th wavelengths are less noisy than the rest of the wavelengths, which resemble Fig. 2c. Figure 2d, e display the results of the application of the hierarchical extension of the WWG algorithm, and Fig. 2f, g display the results of the application of the hierarchical extension of the modified-WWG algorithm. Figure 2d, f depict the first iteration of the WWG algorithm and the modified-WWG algorithm, respectively. The second iteration receives as input the


Fig. 2 A hyper-spectral microscopy image of a healthy human tissue (figures a–e taken from [5]). a The WAV of the original image. b The 40th wavelength. c The 107th wavelength. d The results after the first iteration of the WWG algorithm using η (δ) = 2, θ = 4, ξ = 1, l = 8. e The results after the second iteration of the WWG algorithm on the hyper-pixels in the green area of (d) using η (δ) = 2, θ = 4, ξ = 6, l = 32. f The results after the first iteration of the modified-WWG with η (δ) = 3, θ = 2, ξ = 3, l = 16. g The results after the second iteration of the modified-WWG on the hyper-pixels in the green area of (f) using η (δ) = 3, θ = 2, ξ = 1, l = 8


Fig. 3 A hyper-spectral satellite image of the Washington DC’s National Mall (taken from [5]). a The WAV of the image. The image contains water, two types of grass, trees, two types of roofs, roads, trails and shadow. b The 10th wavelength. c The 80th wavelength. d The result after the application of the WWG algorithm using η (δ) = 4, θ = 8, ξ = 7, l = 32. e The result after the application of the modified-WWG algorithm using η (δ) = 4, θ = 8, ξ = 7, l = 32. The water is colored in blue, the grass is colored in two shades of light green, the trees are colored in dark green, the roads are colored in red, the roofs are colored in pink and yellow, the trails are colored in white and the shadow is colored in black


hyper-pixels that are in the light gray region of Fig. 2d, f. The results of the second iteration of the WWG algorithm and the modified-WWG algorithm are shown in Fig. 2e, g, respectively. The image is clearly segmented into three parts: the background is colored in light gray, the cytoplasm in dark gray and the nuclei in medium shaded gray.

Segmentation of Remote-Sensed Images. Figure 3 contains a hyper-spectral satellite image of the Washington DC National Mall and the result after the application of the WWG algorithm. The image is of size 300 × 300 × 100. Figure 3a shows the WAV of the image. The image contains water, two types of grass, trees, two types of roofs, roads, trails and shadow. Figure 3b, c show the 10th and 80th wavelengths, respectively. Figure 3d, e show the results of the WWG algorithm and the modified-WWG algorithm, respectively, where the water is colored in blue, the grass in two shades of light green, the trees in dark green, the roads in red, the roofs in pink and yellow, the trails in white and the shadow in black.

6 Conclusions and Future Research

We presented a method for unsupervised segmentation of hyper-spectral images using the Diffusion Bases dimensionality reduction algorithm. The effectiveness of the method has been demonstrated on microscopy hyper-spectral images as well as on hyper-spectral remote-sensed images. We also introduced a modified version of the diffusion bases algorithm which uses the eigen-decomposition of symmetric matrices that are conjugate to the non-symmetric Markov matrices in [1]. This modification produces results that are slightly inferior to the results obtained by the eigen-decomposition of the Markov matrices. The authors are currently investigating methods to improve these results.

The results in this paper were obtained using a Gaussian kernel. However, according to [12], any positive semi-definite kernel may be used for the dimensionality reduction. The authors are currently investigating a broad spectrum of kernels. Furthermore, a method for the automatic derivation of the optimal kernel for a given set X is currently being sought by the authors.

Successful segmentation is highly dependent on η(δ), the dimension of the diffusion space [1, 13]. This value should be as small as possible while allowing the dimension-reduced points to convey as much information as possible from the original space. A rigorous way of efficiently choosing the optimal η(δ) is currently being studied by the authors. Clearly, η(δ) is data driven [16] (similarly to the choice of ε in [1]), i.e. it depends on the set X at hand.

Sub-pixel segmentation of remote-sensed images is another important problem that is currently being investigated by the authors. This problem is also known as anomaly detection in hyper-spectral images. One of the main challenges in this problem is the fact that sub-pixel objects are composed of several materials that are mixed. Their identification requires an unmixing algorithm that separates the different substances of the object.


References

1. Schclar, A., Averbuch, A.: Diffusion bases dimensionality reduction. In: Proceedings of the 7th International Joint Conference on Computational Intelligence (IJCCI 2015), Volume 3: NCTA, Lisbon, Portugal, 12–14 Nov 2015, pp. 151–156 (2015)
2. Tarabalka, Y., Chanussot, J., Benediktsson, J.A.: Segmentation and classification of hyperspectral images using watershed transformation. Pattern Recognit. 43(7), 2367–2379 (2010)
3. Cassidy, R.J., Berger, J., Lee, K., Maggioni, M., Coifman, R.R.: Analysis of hyperspectral colon tissue images using vocal synthesis models. In: Conference Record of the Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, vol. 2, pp. 1611–1615 (2004)
4. Zheludev, V., Polonen, I., Neittaanmaki-Perttu, N., Averbuch, A., Gronroos, P.N.M., Saari, H.: Delineation of malignant skin tumors by hyperspectral imaging using diffusion maps dimensionality reduction. Biomed. Signal Process. Control 16, 48–60 (2015)
5. Schclar, A., Averbuch, A.: Unsupervised segmentation of hyper-spectral images via diffusion bases. In: Proceedings of the 9th International Joint Conference on Computational Intelligence, IJCCI 2017, Funchal, Madeira, Portugal, 1–3 Nov 2017, pp. 305–312 (2017)
6. Li, F., Ng, M.K., Plemmons, R., Prasad, S., Zhang, Q.: Hyperspectral image segmentation, deblurring, and spectral analysis for material identification. In: Proceedings SPIE 7701, Visual Information Processing XIX (2010)
7. Ye, J., Wittman, T., Bresson, X., Osher, S.: Segmentation for hyperspectral images with priors. In: Proceedings of the 6th International Symposium on Visual Computing, Las Vegas, NV, USA, vol. 1, pp. 1–4 (2010)
8. Chan, T., Esedoglu, S., Nikolova, M.: Algorithms for finding global minimizers of image segmentation and denoising models. SIAM J. Appl. Math. 66, 1632–1648 (2006)
9. Li, J., Bioucas-Dias, J., Plaza, A.: Supervised hyperspectral image segmentation using active learning. In: IEEE GRSS Workshop on Hyperspectral Image and Signal Processing, vol. 1, pp. 1–4 (2010)
10. Bioucas-Dias, J., Figueiredo, M.: Logistic regression via variable splitting and augmented Lagrangian tools. Technical Report, Instituto Superior Tecnico, TULisbon (2009)
11. Vincent, L., Soille, P.: Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 13(6), 583–598 (1991)
12. Coifman, R.R., Lafon, S.: Diffusion maps. Applied and Computational Harmonic Analysis (special issue on Diffusion Maps and Wavelets), vol. 21, pp. 5–30 (2006)
13. Schclar, A.: A Diffusion Framework for Dimensionality Reduction, pp. 315–325. Springer US, Boston, MA (2008)
14. Schclar, A., Averbuch, A., Hochman, K., Rabin, N., Zheludev, V.: A diffusion framework for detection of moving vehicles. Digit. Signal Process. 20, 111–122 (2010)
15. Chung, F.R.K.: Spectral graph theory. In: AMS Regional Conference Series in Mathematics, vol. 92 (1997)
16. Johnson, W.B., Lindenstrauss, J.: Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math. 26, 189–206 (1984)

Evaluating Methods for Building Arabic Semantic Resources with Big Corpora Georges Lebboss, Gilles Bernard, Noureddine Aliane, Adelle Abdallah and Mohammad Hajjar

Abstract This paper presents detailed data on the workings of a system, presented in a previous work [1], that extracts semantic clusters from a large general Arabic corpus, and proposes some bases for a better evaluation using Arabic WordNet. In the first experiments, using an evaluation corpus of about 8 million words and GraPaVec, a method for word vectorization based on automatically generated frequency patterns, our system clustered word vectors with a Self Organizing Map neural network model and evaluated them against existing Arabic WordNet synsets. We compared the results with the state-of-the-art Word2Vec and GloVe methods. As our results were astonishingly high, without clear explanations, we present here a more thorough testing protocol: we evaluate with a much larger corpus (1.4 billion words), introduce more refined measures, including a refined definition of multiclass recall and precision that better takes into account the specifics of WordNet classification, and use NLTK tools. Observations on the corpus are given in order to help researchers interested in our approach to assess methods of implementation and evaluation.

Keywords Arabic semantic resources · Arabic WordNet · Word vectors · Large corpus · Word2Vec · GloVe · Self organizing maps

G. Lebboss (B) · G. Bernard · N. Aliane · A. Abdallah LIASD, Paris 8 University, Saint-Denis, France e-mail: [email protected] G. Bernard e-mail: [email protected] N. Aliane e-mail: [email protected] A. Abdallah e-mail: [email protected] M. Hajjar GRIT, Lebanese University, Saida, Lebanon e-mail: [email protected] © Springer Nature Switzerland AG 2019 C. Sabourin et al. (eds.), Computational Intelligence, Studies in Computational Intelligence 829, https://doi.org/10.1007/978-3-030-16469-0_10


1 Introduction

Open digital Arabic semantic resources are scarce. Among them, we have chosen to work with Arabic WordNet [2–4], an open semantic database where lexical items are organized in synsets (sets of synonymous words) linked by semantic relationships, based on WordNet [5], now in version 2.1 [6]. Arabic WordNet (hereafter AWN) is still very poor in words and synsets; our aim is ultimately to enrich it with semantic relations automatically extracted from big corpora.

Existing methods for the automated building of Arabic semantic resources (Sect. 2.1) are based on dictionaries (either digitized paper ones or database dictionaries such as Wiktionary), on translation and aligned multilingual corpora, on WordNets and ontologies, on morphological parsing, or on combinations of those resources. Apart from our system, there are no methods based on large general corpora; most methods are based on translation.

Extracting semantic resources from large corpora of course requires large corpora, the lack of which is a recurrent problem in Arabic NLP. That is why we built the largest possible open corpus [7] (described in Sect. 3), keeping in mind that it should be dynamically computed (so as to expand as much as possible as resources grow) and that the building tool should be freely available to researchers. The corpus built contains around 8 billion words; it is by far the largest ever made for the Arabic language.

The end-to-end system we first presented in [1] produces word clusters computed from large corpora. It was evaluated with an evaluation corpus built by intersecting our large corpus with AWN; the evaluation corpus had 7,787,525 words and 395,014 unique words. The results of the evaluation were considerably higher than expected (and above those of state-of-the-art systems), without clear explanations. In order to reassess the system and its results, and get a clearer view of them, we need a completely different protocol. The aim of this paper is to establish the bases of such a protocol and give some preliminary findings that could help to make it work with a very large corpus; our explorations were made on an extract of 1.4 billion words.

The rest of the paper is as follows. The second section presents related work on building Arabic semantic resources, drawing from our first paper but bringing it up to date. The third section presents word vectorization issues and state-of-the-art methods, drawing also from our first paper. Our system is presented in detail in the fourth section. The fifth briefly presents the Self Organizing Map, the neural network model used to cluster words. The basics of the evaluation protocol are proposed in the sixth, before concluding.



2 Related Work

2.1 Building Semantic Resources

In 2008, three methods were proposed by the AWN team. One [4] builds a bilingual lexicon of tuples from several publicly available translation resources. It merges into one set the base concepts of EuroWordNet (1024 synsets) and Balkanet (8516 synsets). Keeping only the tuples whose English word was included in the merged set, they produced candidate tuples; Arabic words linked to the same concept were candidates to enter a synset in AWN. Their candidatures were to be validated by lexicographers; however, as of today only 64.5% have been processed, of which 74.2% were rejected as incorrect.

They obtained better results with another method [4], where they generated new Arabic forms by morphological derivation from the words in AWN synsets, controlled their existence with databases such as the non-free GigaWord Arabic corpus, the Logos multilingual translation portal, or the New Mexico State University Arabic-English lexicon, and used their translation to link them to WordNet synsets and then back to AWN, to be validated by lexicographers. A similar method was proposed later [8]; words were morphologically hand-parsed by linguists, then translated and associated to synsets, with equivalence relations between the synsets made explicit in the Inter-Lingual Index deep structure [9].

The third method [10] extracted named entities from the Arabic Wikipedia, linked them to named entities from the corresponding English Wikipedia page, linked those to named entities from WordNet, and then back to synsets of AWN. Though the result was much better (membership was correct up to 93.3%), the coverage was scarce.

A different approach [11] exported the entire set of data embedded in AWN into a database integrated with the Amine AWN ontology, tapped by a Java module based on Amine Platform APIs. This module used the mapping between English synsets in WordNet and Suggested Upper Merged Ontology [12] concepts to build the Amine AWN type hierarchy. Then, it added Arabic synonyms based on the links between WordNet synsets and AWN synsets. Later the same team [13, 14] used YAGO (Yet Another Great Ontology) from the Max-Planck Institute, translating its named entities into Arabic with Google translation, then added them to AWN according to two types of mappings (direct mapping through WordNet, mapping through YAGO relations to AWN synsets).

Abdul Hay's Ph.D. thesis [15] extracted semantic categories from a multilingual aligned corpus with English and two languages from EuroWordNet. If all but the Arabic words were members of synsets linked by the Inter-Lingual Index, then the Arabic word should also be in a linked synset in AWN. Results were correct up to 84%.

Another team worked on iSPEDAL, a database monolingual dictionary digitizing monolingual paper dictionaries [16]. Two methods have been proposed [17] for enriching iSPEDAL. One used semi-structured information from plain dictionaries to deduce links (synonymy, antonymy).



The other used translation by available resources to and from a foreign language to compute synonymy of Arabic words by correlating their translations. A somewhat similar approach [18] extracts synonymy and antonymy relationships from the Arabic Wiktionary.

The Arabase platform [19] aims to integrate every available Arabic semantic resource, from the King Abdulaziz City for Science and Technology database to the Arabic Stopwords Sourceforge resource and AWN. It has, according to the authors, “a good potential to interface with WordNet”. Arabase computes semantic properties of vocalized words¹ by hand-made rules and forms a sort of virtual WordNet.

Until 2017, research on Arabic semantic categories had extensively used foreign resources; very little had been done on extracting semantic information from Arabic data alone, and nothing based on an Arabic general corpus. In 2017, [20] proposed a procedure for extracting terms and relationships from an Arabic thematic corpus. The hand-made corpus was small (304,665 words) and preprocessed by hand, then normalized and lemmatized with a light stemming procedure and removal of stopwords; all repeated word n-grams (from unigrams to 4-grams) with sufficient TF-IDF weight were then considered as terms. A multilayer perceptron was used to learn semantic relations between these terms, using AWN and a monolingual dictionary. Results in recall and precision were deemed encouraging.

At the same time, El Moatez and Didier [21] proposed a word embedding-based system to calculate the semantic similarity of Arabic sentences. It is based upon a large corpus and uses the part-of-speech tagger of [22], TF-IDF weighting, and Word2Vec as word embedder (see next subsection). They show that with TF-IDF weighting and POS tagging, both CBOW and SKIP-G are significantly faster to train and yield a better accuracy. The correlation rate reached 72.33% without them and 78% with them. But this method is only applied to similarity between sentences and not to word similarity, contrary to the other methods. They do not say how many sentences were clustered.

2.2 Word Vectorization

The main issue is word vectorization. The methods considered here are based upon the distributional properties of words. The main characteristic of GraPaVec, as opposed to the state-of-the-art methods, is that the context taken into account is the surrounding pattern of high-frequency words rather than a window of neighbouring lexical items or a skipgram of lexical items (Sect. 3.2). In other words, we keep what others throw away and throw away what others keep.

Vectors are fed to a clustering algorithm. We have chosen here the neural network model Self Organizing Maps [23], because of two advantages: minimization of misclassification errors (misclassified items go to adjacent clusters) and easy visualization of the results.

¹ Short vowels are not written in Arabic words in normal use and in a majority of documents.



Those results are then evaluated by comparison with AWN synsets (Sect. 5).

Structural linguistics [24] postulated that words with similar distributions have similar categories and meanings. Since [25] first proposed it, projecting words into a vector space has been the first step in many models of word clustering, where semantic properties can be linked to similarities in the distribution of the word vectors. In such models a distribution is defined by a vector of the contexts of a word. A context is defined by two elements: units and distance to the word. Units can be words, phrases or n-grams, more recently skipgrams (n-grams with “holes”). The distance can be a step function, as in the bag-of-words model (only words in the same document are nearby contexts), or a function of the number of units separating word and context.

On the resulting {words × contexts} matrix (where each component is the frequency of a word in a context), mathematical models have been applied in order to reduce it to a set of clusters, from the simplest, tf-idf, to much more complex ones, such as latent semantic analysis, latent Dirichlet allocation, neural networks (various models), linear or bilinear regression models... Clustering models display quite a large variety of reduction methods, which contrasts with the poverty of context variety: usually units are lexical items or n-grams of lexical items, and sets are either documents or fixed-length windows.

Some examples: in Hyperspace Analogue to Language [26], context is a fixed-length window of lexical items around the word. In WebSOM [27], a model with two layers of Self Organizing Map neural network, context is also a fixed-length window of lexical items around the word. Among the rare exceptions to the lexical-items context was a SOM classifier applied to a context of a fixed-length window of grammatical categories to the left of the word [28]. Word2vec [29] is a set of two unsupervised neural models, with log-linear classifiers, where word and context vectors are trained simultaneously; in CBOW (Continuous Bag Of Words) context is defined by a fixed-length window of lexical items around the word, while in Skipgram lexical items are grouped in skipgrams, that is, n-grams “with holes”. Glove (Global Vectors for Word Representation) [30] is a global log-bilinear regression model designed by the NLP team at Stanford University; paraphrasing the authors, this model combines the advantages of global matrix factorization with local-context windowing methods. Context is a fixed-length window of lexical items centered on the word. Both models represent the state of the art. One should note that though they use fixed-length windows, they use a continuous function for distance, thus introducing a non-bag-of-words approach (even in CBOW) which had rarely been used before.
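As a concrete illustration of the fixed-length-window context shared by most of the methods above, the following minimal Python sketch builds the raw {words × contexts} count matrix from a token list. The window size, the toy sentence and the function name are ours, not taken from any of the cited systems.

from collections import Counter, defaultdict

def cooccurrence_matrix(tokens, window=2):
    # For each word, count the words appearing within a fixed-length
    # window around it: the {words x contexts} matrix in its most
    # common (lexical-items) form.
    counts = defaultdict(Counter)
    for pos, word in enumerate(tokens):
        lo, hi = max(0, pos - window), min(len(tokens), pos + window + 1)
        for ctx_pos in range(lo, hi):
            if ctx_pos != pos:
                counts[word][tokens[ctx_pos]] += 1
    return counts

tokens = "the boy went to school the teacher went to school".split()
print(cooccurrence_matrix(tokens)["went"])  # Counter({'to': 2, 'school': 2, ...})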



3 Our System

The global architecture of our system is summarily presented in Fig. 1. The preprocessing stage is not original in this kind of work, except for two points: the size of the corpus (for the Arabic language) and the fact that we tried to stay as close as possible to the raw data, as will be clear from our description, and to present reproducible experiments in a field, Arabic semantic extraction, where there are very few of them. In the second stage, we used our own method, GraPaVec, as well as the state-of-the-art Word2Vec and Glove methods. In the third stage, we used Self-Organizing Maps, making full use of their capacity to represent data in 2D space, as well as AWN with Stanford wordnet tools, which take advantage of the tree representation of wordnets.

3.1 Preprocessing

The large corpus presented in [1] was built in two steps: all available Arabic corpora, including the Arabic Wikipedia and Wiktionary, were first merged into a static corpus. But the bulk comes from the Alshamela library on-line resource (http://shamela.ws) and, above all, from crawling and converting web sites (more than 120, mostly news web sites) and their documents. In order to clean and convert documents on the fly, we created our own Arabic Corpus Builder, which crawls queues of sites, merges them in plain text format with the outcome of the previous corpus, and imports the result into a database.

Fig. 1 Global view of our system



It also converts on the fly the usual encodings of Arabic characters (Microsoft and MacOS) into Unicode. It can be found at https://sites.google.com/site/georgeslebboss. The bulk of the corpus is mostly dynamic, as the results of crawling vary over time. The corpus used in the present study is a subpart of it that contains 75,994 documents, with 1,212,703,705 words and 7,411,151 unique words.

Some raw preprocessing was necessary because the corpus contains very long words (up to 473 characters), which produced unnecessarily long branches in the prefix trie (with one or two occurrences). So in a first stage we eliminated words in Arabic script of more than 50 characters and words in other scripts of more than 30 characters.

Then we had to take care of specific characteristics of Arabic script. With Arabic writing conventions every morpheme can have several writings (not counting errors), and morphemes are not separated inside written words. That is why orthographic normalization is usual in Arabic language processing systems (at the price of more ambiguity) and some kind of morphological analysis is often used, even though Arabic morphology, especially derivational morphology, is complex. We have shown in [1] that orthographic normalization and lemmatization enhance the F-score of all vectorization methods to more or less the same degree. Accordingly, we chose here to include normalization and lemmatization in all our tests.

Orthographic normalization has been changed from the primitive version of our first work (six rules of replacement of special characters by more usual ones, suppression of optional diacritics, etc.) to the more sophisticated version used in the Stanford Arabic Tree parser (“normalization confusion method”) [31, 32], converting into IBM normalized Arabic. We also had to add suppression of Null characters, and some others, as the corpus built contained quite a lot of spurious data that caused numerous bugs.

Lemmatization is the process of replacing the different forms of the same word with its lemma (dictionary entry), which can be its stem; it is also called stemming or sometimes rooting. Available Arabic lemmatizers have been thoroughly analyzed by Al Hajjar in his Ph.D. thesis [16]; our system includes the lemmatizer that, according to his evaluation, yielded the best results [33]. Even if better solutions have been devised since, it remains one of the easiest to use on very large corpora, and is easily available on Github and Sourceforge. This lemmatizer was first proposed in [34]. Its method involves orthographic normalization; removal of stop words, punctuation, the definite article (ال), inseparable conjunctions, and the longest prefix and suffix; and comparison of the result to a list of patterns. If a match is found, the characters representing the root in the pattern are extracted and matched against a list of known roots. It was enhanced by various successive works: with more patterns in [35], and by integration in the Qarab system in [36]. Adding still more patterns enabled [37] to eliminate the stage of checking against the root dictionary, thus reducing the search cost.

Our preprocessing is done by a multi-threaded program taking as input files generated with our trie (see Sect. 3.3). This program includes the command-line java implementation of the Khoja stemmer by Mota Saad (from its github repository; Apache 2.0 license) and another kind of “lemmatisation”: replacement of all numbers by the number ‘1’, and replacement of all words written in foreign scripts by a single character (‘X’), in order to keep track of those as potential context.



Fig. 2 Preprocessing

After normalization and lemmatization, our corpus has only 194,471 unique words, a drastic reduction (less than 2.62% of the original list of unique words); the whole preprocessing chain is summarized in Fig. 2.
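The ad-hoc replacements described above (numbers collapsed to ‘1’, foreign-script words to ‘X’) can be sketched as follows in Python. The regular expressions are our own simplifications (treating only Latin script as foreign), and the real pipeline also applies the Khoja stemmer and the Stanford-style normalization, which are not reproduced here.

import re

ARABIC = re.compile(r'^[\u0600-\u06FF]+$')
LATIN = re.compile(r'^[A-Za-z]+$')  # simplification: foreign = Latin script

def normalize_token(token):
    if token.isdigit():
        return '1'     # all numbers collapse to the number '1'
    if not ARABIC.match(token) and LATIN.match(token):
        return 'X'     # foreign-script words collapse to 'X'
    return token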

3.2 GraPaVec

Our method of word vectorization is based on the idea of grammatical context found in [28], but with important modifications, mainly due to our aim of developing a method as independent from specific languages as possible. So the stopword list and the stopword categories used in that paper were out of the picture. The left window (or any fixed-length window, for that matter) was also too restrictive, as we did not want to make any assumption as to the order of parts of speech or the type of syntax rules. Instead, we wanted to empirically discover recurrent patterns of very general words. So the context we take into account is composed of (ordered) patterns of such words in the vicinity of a given word, inside segments delimited by punctuation markers. We called our method Grammatical Pattern Vector, or GraPaVec, though the relation of this algorithm to grammar is indirect (see Sect. 3.5). GraPaVec has five steps:



Fig. 3 General view of GraPaVec

• Trie preparation
• Segment preparation
• Pattern element selection
• Pattern discovery
• Word vectorization.

3.3 Trie Preparation

We begin by importing every word in the corpus into a prefix Trie. A Trie is a structure that represents a very large number of words in a format that is both economical and fast to explore²; it is more efficient here than hash-codes or binary trees, and directed acyclic word graphs lose some information we wanted to keep (Fig. 3). Each path from the root to a leaf of the Trie represents a word (see Fig. 4). Each node contains a unicode character. A node is marked as a leaf the first time a word ends there, and its occurrence count is incremented each subsequent time. Thus a node is a simple structure with a unicode character, a field indicating the number of occurrences (if not zero, the node is a leaf), a pointer towards its sons and one towards its brothers. A leaf can have sons, as words can be parts of other words.

² Its maximal depth is given by the longest word in the corpus and its maximal breadth is given by the number of possible characters at any point. As shown by [38], in language the number of successors is constrained, so the tree quickly shrinks.



Fig. 4 A Trie (taken from [1])

It is important to note that we built this trie before any normalization or lemmatization took place, not only because we wanted to know the raw data better, but also for efficiency reasons: this way, we can adapt to various types of lemmatization and normalization, and accordingly generate segments (see the following section) more easily.
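The node structure described above (a character, an occurrence count marking leaves, a pointer to the first son and one to the next brother) can be sketched in Python as follows; this is a minimal illustration, not our production implementation.

class TrieNode:
    __slots__ = ('char', 'occurrences', 'son', 'brother')

    def __init__(self, char):
        self.char = char
        self.occurrences = 0   # > 0 means a word ends here (leaf)
        self.son = None        # first child
        self.brother = None    # next sibling

def insert(root, word):
    node = root
    for char in word:
        child, prev = node.son, None
        while child is not None and child.char != char:
            prev, child = child, child.brother
        if child is None:              # character not found among sons
            child = TrieNode(char)
            if prev is None:
                node.son = child
            else:
                prev.brother = child
        node = child
    node.occurrences += 1              # count this word ending here

# usage: root = TrieNode(''); insert(root, 'كتاب')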

3.4 Segment Preparation

In order to build patterns, we first need a textual unit. Our textual units are segments of sentences delimited by punctuation, or separated from other contexts by more than one linefeed or carriage return (depending on the system where the text was produced).

We tried at first to use a trie structure for reading and exploiting all the segments of our big corpus; but the resources consumed, in time and memory, were far too large, even using file swapping with JSON representations of parts of the trie, and usual databases already could not handle our word trie in JSON format. The structure of the segment trie is very different from that of the word trie: there are far fewer common prefixes, and the trie alphabet (that is, the unique words of the corpus) is very big, even after normalization and lemmatization. This entails that the segment trie is very wide while at the same time some parts of it are very deep. We considered using Patricia trees [39] or Deep Packed Tries [40], but these algorithms are concerned with the shortening of long branches, while our algorithms suffer much more from the trie's width. Thus we used much more basic representations, with integer arrays indexed in our database (integers being word identifiers), and extensively used file swapping with multi-threading.

We generated segments according to the normalization and lemmatization mentioned above. The first preprocessing done on those segments was to suppress all segments reduced to one word, as they would not be useful for context determination. Segment lengths vary between one and 94,596 words; the longest ones come from files where the segmentation process did not succeed in parsing the text, recognizing neither punctuation nor block

Table 1 Distribution of segment frequencies

Minimal occurrences    Number of segments
2                      13,723,315
3                      10,983,335
4                      9,274,362
5                      8,048,216
6                      7,159,900
7                      6,488,806
8                      5,969,157
9                      5,551,273
10                     5,208,699
11                     4,921,310
12                     4,672,660

ends. Identification of the files concerned, probably strongly degraded ones, was not possible due to time constraints.

The total number of segments for the big corpus was 136,297,374, with 118,278,743 unique ones; 11.6% of these occurred more than once. Table 1 presents the distribution of the segments according to their frequency. Though this data shows that the distribution of segments is some sort of long-tail distribution (with 104,555,428 segments having only one occurrence), it is clearly not any kind of Zipf distribution.
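A minimal sketch of segment generation as described above might look as follows in Python; the punctuation class is an assumption (the exact delimiter list used in our system is not reproduced here).

import re

# Split on punctuation or on more than one linefeed/carriage return,
# then drop one-word segments, which are useless as context.
SEGMENT_BREAK = re.compile(r'[.!?;:،؛؟]+|(?:\r?\n){2,}')

def segments(text):
    for chunk in SEGMENT_BREAK.split(text):
        words = chunk.split()
        if len(words) > 1:
            yield words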

3.5 Pattern Element Selection

This is the most important step, and the one where a human eye is necessary (for now). If the corpus is big enough, the most frequent words are markers with a grammatical or very abstract function (with no independent meaning or referent, the syncategoremes of Aristotle); we tested this on English as well as on Arabic. The user, who just needs to know the language, has to set the frequency threshold that separates markers from “ordinary” words (lower-frequency words). This is done by looking for the most frequent ordinary lexical item appearing in the list displayed by our system and establishing its frequency as the threshold.

With our large corpus, a threshold of 167,000 occurrences separated 189 markers. With the previous corpus, a threshold of 9000 occurrences separated 196 markers. We observe that though this corpus is about 980 times bigger than the extract previously used for evaluation, the number of unique words is about 17.5 times bigger, and the number of markers is only 1.26 times bigger, with a threshold 2.57 times bigger. In the list of the large corpus, 53 markers were added and 12 were lost. Half of these 12 were



combinations of markers whose components had correctly been classified; with better segmentation, those would be eliminated, leaving an error margin of 3% relative to the number of markers detected. On the whole, the bigger the corpus, the more homogeneous the marker distribution is, and the neater its identification with grammatical words.

We compared these 196 markers with the hand-made Arabic Stopwords Sourceforge resource: half of them (97) were not included in the resource. Most of them should have been included as stopwords; others were combinations of stopwords. More generally, though Arabic Stopwords includes 449 words, it seems rather incomplete, and could easily be enriched by our method.
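Once the threshold has been chosen by the user, marker selection itself reduces to a frequency cut-off; a minimal sketch follows, with the figures of the large corpus as an example.

from collections import Counter

def select_markers(tokens, threshold):
    # Every word whose corpus frequency reaches the user-chosen
    # threshold becomes a marker; with the large corpus, a threshold
    # of 167,000 occurrences yielded 189 markers.
    freq = Counter(tokens)
    return {w for w, c in freq.items() if c >= threshold}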

3.6 Pattern Building

A pattern is a sequence of words containing markers and ordinary words, as in:

• The boy went to home
• The teacher went to school
• The woman went to the market

As the most frequent words in these phrases are markers, these phrases are instances of the same pattern. The star (joker) represents a sequence of ordinary words. Patterns are built according to the following principles (m represents a marker, x an ordinary word, p a punctuation):

• A pattern does not contain p.
• A pattern is a sequence of m and *.
• A pattern contains at least one *.
• * is a string of x with n as maximum length.
• * contains at least one x.

The maximum length n is called JokerLength. Let us take the following sequence, representing an extract from the corpus:

xmmxxmxxxpmxmmxxxxmpmmxxxm

Our objective is to generate all possible patterns compatible with this sequence. These patterns are represented by sequences of m and *. Supposing that JokerLength = 3, we first obtain the following patterns:

• *mm*m* (followed by p)
• m*mm* (followed by more than 3 x)
• *m (followed by p)
• mm*m (end-of-file considered as p)

From each of these patterns, all potential patterns included in it are deduced. For instance, *mm*m* contains the following sub-patterns:


• *m
• *mm
• *mm*
• *mm*m
• *mm*m*

Then we skip the first element and do the same with the remaining sequence (here mm*m*) and its sub-patterns, and recurse until the pattern is exhausted. Of course, in real patterns, m is replaced by true markers; thus a pattern such as *mm* is in reality a set of patterns differing by the identity of both m's. In the actual implementation, patterns read from the corpus are stored in a prefix Trie similar to the one used for words. Every star is a node that permits back-reference from the ordinary word in the database to the positions it can occupy in the pattern Trie.
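The pattern-extraction rules above can be made concrete with the following Python sketch, which reproduces the four base patterns of the worked example. Treating end-of-file as p and requiring at least one marker per pattern are our reading of the rules, and the function names are ours.

def base_patterns(seq, joker_len=3):
    # Extract base patterns from a string over 'm' (marker), 'x'
    # (ordinary word) and 'p' (punctuation). A '*' stands for a run of
    # 1..joker_len x's; longer runs and punctuation break the pattern.
    patterns, current, run = [], '', 0

    def flush(cur):
        if '*' in cur and 'm' in cur:
            patterns.append(cur)

    for tok in seq + 'p':                  # end-of-file counts as p
        if tok == 'x':
            run += 1
            continue
        if 0 < run <= joker_len:
            current += '*'
        elif run > joker_len:              # too many x's: pattern breaks;
            flush(current + '*')           # a trailing joker closes it and
            current = '*'                  # a leading joker opens the next
        run = 0
        if tok == 'm':
            current += 'm'
        else:                              # punctuation
            flush(current)
            current = ''
    return patterns

def sub_patterns(pattern):
    # All contiguous sub-sequences still containing a '*' and an 'm'.
    return {pattern[i:j] for i in range(len(pattern))
            for j in range(i + 1, len(pattern) + 1)
            if '*' in pattern[i:j] and 'm' in pattern[i:j]}

print(base_patterns('xmmxxmxxxpmxmmxxxxmpmmxxxm'))
# ['*mm*m*', 'm*mm*', '*m', 'mm*m']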

3.7 Word Vectorization

As the preceding process builds all possible patterns in the vicinity of a word, most of them will not be relevant and will not be repeated. We need a frequency threshold to eliminate spurious patterns that would not discriminate words. We compute for each word the number of times it occurs in every selected pattern. This process yields a (sparse) {words × patterns} matrix. We then eliminate from this matrix all patterns whose frequency is less than the selected pattern threshold. Thus word vectorization depends on three parameters: the marker threshold, the JokerLength and the pattern threshold.

4 Self Organizing Map

Self Organizing Map is an unsupervised neural network model designed by Kohonen [23]. In its standard version, it projects the space of input data onto a two-dimensional map. It implicitly performs a dual clustering: on one hand, the data is clustered into neurons, and on the other hand the clusters themselves are grouped by similarity on the map. It operates in “best matching unit” mode: all neurons compete for each input vector and the best matching neuron is the winner. With $X$ the input vector, $j$ an index over the $n$ neurons in the map and $W_j$ the memory vector of neuron $j$, the winner $j^*$ is determined by Eq. (1), where $d(x, y)$ is a distance measure:

$$d(X, W_{j^*}) = \min_{j \in \{1..n\}} d(X, W_j) \qquad (1)$$

The distance can be Euclidean (the usual choice), Manhattan, or some other; it can also be replaced with a similarity measure such as cosine (normalized dot product, Eq. (2)), if min is replaced by max in Eq. (1). With sparse vectors, cosine similarity drastically reduces computation time:

$$CosSim(X, W_j) = \frac{\sum_{i=1}^{n} X_i W_{i,j}}{\|X\| \times \|W_j\|} \qquad (2)$$

Every neuron has a weight vector $W_j$ of the dimension of the input vector, initialized randomly and possibly pre-tuned to the set of possible values. In the learning phase, the winner and every neuron in its neighbourhood learn the input vector according to Eq. (3), where $N_\sigma(j, j^*)$ is the neighbourhood of radius $\sigma$ and the bracketed superscript indicates the epoch:

$$W_j^{(t+1)} = W_j^{(t)} + \alpha^{(t)} N_\sigma^{(t)}(j, j^*) (X^{(t)} - W_j^{(t)}) \qquad (3)$$

The learning rate $\alpha$ decreases in time following Eq. (4), where $\alpha^{(0)}$ is its initial value:

$$\alpha^{(t)} = \alpha^{(0)} \left(1 - \frac{t}{t_{max}}\right) \qquad (4)$$

Learning in the neighbourhood of the winner decreases in space following the Gaussian in Eq. (5), which yields better results than the Mexican hat or other variants; $M(j, j^*)$ is the Manhattan distance between map indexes:

$$N_\sigma^{(t)}(j, j^*) = e^{-\frac{M(j, j^*)}{2\sigma^{2(t)}}} \qquad (5)$$

$\sigma$ obeys Eq. (6), where $\sigma^{(0)}$ is the initial value of the radius, typically the radius of the map, and $\sigma^{(t_{max})}$ is its final value, typically 1:

$$\sigma^{(t)} = \sigma^{(0)} \left(\frac{\sigma^{(t_{max})}}{\sigma^{(0)}}\right)^{t/t_{max}} \qquad (6)$$

Our implementation offers the choice of Euclidean distance, Manhattan distance or cosine similarity, different topologies for the neighbourhood (square or hexagonal), and initialization of the memories either to the center of the learning set values or at random.
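A minimal NumPy sketch of a SOM trained with the update rules of Eqs. (1) and (3)–(6) is given below; the parameter defaults are illustrative, and the sketch omits the hexagonal topology and cosine-similarity options of our implementation.

import numpy as np

def train_som(X, map_side=10, t_max=100, alpha0=0.5, sigma0=None, seed=0):
    # X: (num_patterns, dim) data matrix. Returns the weight matrix W;
    # neuron j sits at grid position (j // map_side, j % map_side).
    rng = np.random.default_rng(seed)
    n_neurons = map_side * map_side
    W = rng.uniform(X.min(), X.max(), size=(n_neurons, X.shape[1]))
    grid = np.array([(j // map_side, j % map_side) for j in range(n_neurons)])
    sigma0 = sigma0 or map_side / 2          # initial radius ~ map radius
    sigma_final = 1.0
    for t in range(t_max):
        alpha = alpha0 * (1 - t / t_max)                        # Eq. (4)
        sigma = sigma0 * (sigma_final / sigma0) ** (t / t_max)  # Eq. (6)
        for x in X[rng.permutation(len(X))]:
            j_star = np.argmin(np.linalg.norm(W - x, axis=1))   # Eq. (1)
            # Manhattan distance on map indexes to the winner
            M = np.abs(grid - grid[j_star]).sum(axis=1)
            h = np.exp(-M / (2 * sigma ** 2))                   # Eq. (5)
            W += alpha * h[:, None] * (x - W)                   # Eq. (3)
    return W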

5 Evaluation Protocol Proposal

5.1 Arabic WordNet and NLTK Word Similarities

In our first study we wanted to check whether existing AWN synsets were correctly retrieved, and found the following issues:



(A) 4712 synsets are singletons.
(B) 1110 synsets are subsets of others.
(C) A non-negligible number of synsets are false.

Type (A) synsets artificially increase the recall value of any method (they would always be in the same cluster). As synsets of type (B) do not form a complete partition of their supersets, some words would not have been taken into account and the number of singletons would have increased. After eliminating these synsets, we were left with 5807 synsets. It is easy to see why type (C) synsets were not to be used, but much less easy to eliminate them, as this has to be done by hand. For our first experiments, we controlled and chose 900 synsets grouping 2107 words. Those synsets can be found at https://sites.google.com/site/georgeslebboss.

In our proposed evaluation protocol, we want the experiments to be completely reproducible, so we used the NLTK tools, release 3.3 [41], especially “nltk.corpus”, with the wordnet similarities called wup_similarity and path_similarity. The first, wup_similarity [42], returns a score between 0 and 1, based on the depths of both synsets relative to their lowest common subsumer (the one closest to them both) in a taxonomy, according to Eq. (7):

$$wup\_similarity(s_1, s_2) = \frac{2 \cdot depth(lcs(s_1, s_2))}{depth(s_1) + depth(s_2)} \qquad (7)$$

The second, path_similarity, returns a score between 0 and 1, based on the length of the shortest path connecting the two synsets in a taxonomy (in the case of WordNet, the similarity between two words belonging to the same synset is equal to 1), according to Eq. (8):

$$path\_similarity(s_1, s_2) = \frac{1}{1 + shortestPath(s_1, s_2)} \qquad (8)$$

In order to compute these similarities on Arabic words, this tool imports and converts AWN release 2, losing in the process any synsets not in Princeton WordNet 3.0 (according to the unnamed author of the tool). Apart from this conversion loss, this release is the one we also used. On the output of NLTK, our protocol eliminates type (A) and type (B) synsets. But it is impossible to directly correct AWN synsets, so type (C) synsets will not be taken into account in this protocol. After our first tests, we left path_similarity aside, because the distances it yields between even relatively close words quickly become very high. So only wup_similarity is proposed for our protocol.

5.2 SOM Topology and Confusion Matrix

Distances inside SOM maps are computed on the map indexes according to $dist(C_{w_i}, C_{w_j}) = \max(|i_x - j_x|, |i_y - j_y|)$, where $C_{w_i}(i_x, i_y)$ and $C_{w_j}(j_x, j_y)$ are the clusters respectively containing the words $w_i$ and $w_j$.



In order to compute the confusion matrix and the F-score (harmonic mean of recall and precision) of the four methods tested here, we have to run them on the same corpus, insert the vectors in the database, then cluster them with the SOM model. The resulting clusters should be compared to AWN synsets. The following definitions give the confusion matrix (TP, true positive; TN, true negative; FP, false positive; FN, false negative) we propose for the future protocol:

• $TP = \#(w_i, w_j)$ such that $sim(w_i, w_j) \geq \theta$ and $dist(C_{w_i}, C_{w_j}) \leq \delta$
• $TN = \#(w_i, w_j)$ such that $sim(w_i, w_j) < \theta$ and $dist(C_{w_i}, C_{w_j}) > \delta$
• $FP = \#(w_i, w_j)$ such that $sim(w_i, w_j) < \theta$ and $dist(C_{w_i}, C_{w_j}) \leq \delta$
• $FN = \#(w_i, w_j)$ such that $sim(w_i, w_j) \geq \theta$ and $dist(C_{w_i}, C_{w_j}) > \delta$

$\#(w_i, w_j)$ is the number of word pairs ($w_i \neq w_j$) satisfying the conditions, and $sim(w_i, w_j)$ is computed as:

$$sim(w_i, w_j) = \max_{s_i \in synsets(w_i),\; s_j \in synsets(w_j)} wup\_similarity(s_i, s_j) \qquad (9)$$
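Equation (9) can be computed directly with the NLTK tools mentioned in Sect. 5.1. The sketch below assumes the Open Multilingual WordNet data (which carries the converted AWN release) has been downloaded, e.g. via nltk.download('wordnet') and nltk.download('omw'), and that missing words yield a similarity of 0.

from nltk.corpus import wordnet as wn

def word_similarity(w1, w2, lang='arb'):
    # Best wup_similarity over all synset pairs of the two words,
    # as in Eq. (9); 0.0 if either word is absent from the resource.
    best = 0.0
    for s1 in wn.synsets(w1, lang=lang):
        for s2 in wn.synsets(w2, lang=lang):
            score = s1.wup_similarity(s2)
            if score is not None and score > best:
                best = score
    return best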

1. $\theta$ is a minimal similarity in AWN, computed with wup_similarity;
2. $\delta$ is a maximal distance in the SOM, computed on the map indexes; we thus make use of the SOM property of classifying related vectors in nearby clusters.

Recall would be computed as usual as $TP/(TP+FN)$ and precision as $TP/(TP+FP)$, the F-score being the harmonic mean of both.

Our first experiments showed that when the SOM map size was 10% bigger than the number of synsets to be retrieved, the results were degraded by more than 20% with all the methods used; this was also the case when the map size was smaller, which is expected, but with a less drastic effect. This means we need to give special attention to the SOM map size. However, contrary to previous evaluations, the proposed protocol takes into account the topology and spatial properties of the SOM map, which attenuates the aforementioned effect.
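Given sim() from Eq. (9) and the map coordinates of each word's cluster, the confusion matrix and the derived recall and precision can be sketched as follows; words, cluster_of and the two thresholds are illustrative inputs.

from itertools import combinations

def confusion_matrix(words, cluster_of, sim, theta, delta):
    # cluster_of maps a word to the (x, y) index of its SOM cluster;
    # the map-index distance is the one defined in Sect. 5.2.
    tp = tn = fp = fn = 0
    for wi, wj in combinations(words, 2):
        similar = sim(wi, wj) >= theta
        (ix, iy), (jx, jy) = cluster_of[wi], cluster_of[wj]
        near = max(abs(ix - jx), abs(iy - jy)) <= delta
        if similar and near: tp += 1
        elif not similar and not near: tn += 1
        elif not similar and near: fp += 1
        else: fn += 1
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return tp, tn, fp, fn, recall, precision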

6 Conclusion

While the first tests in [1] showed the feasibility of our approach, we have presented here an evaluation technique much better adapted to the structure of AWN as well as to the properties of SOM mapping. Our preliminary tests show that the trie remains a good structure for the internal representation of words even with very large corpora, while this is not the case for the internal representation of segments.

All our tests were conducted on a 64-bit machine with 36 physical cores (72 virtual cores), 125 GB of RAM, a high-speed disk and Ubuntu Xenial (16.04). As regards time consumption, construction of the word trie (see Sect. 3.3), which was done before any preprocessing, took 33 min with 90 threads, including saving the table in CSV format. It then took one minute to insert it into the PostgreSQL database.



In the normalization and lemmatization stage we integrated the Khoja stemmer and the Stanford ArabicUtils tools in a multi-threaded java program, with the ad-hoc “stemming” described in Sect. 3.1. It took 11.5 min to generate the data from the database in files formatted to suit our java program. This program then took one minute to make the transformation and produce CSV files, integrated into our database in 2 min. This part of the process thus took more or less 50 min. Producing the segments then took much longer, about 7 h 40 min; integrating them into the database, removing and counting duplicates, took 13 min.

The observations we made during this process show the obstacles to keeping as close to the real data as possible while processing very big corpora in a framework like ours, but also what the possible solutions are. Our first tests thus show that a fine-grained evaluation of the hypotheses underlying the GraPaVec method on very big corpora is feasible. Many of the processes conducted here could be adapted to big data frameworks (which would not solve all the problems described). In the near future we aim to apply this new protocol to such large corpora, in Arabic and other languages, and to use a dynamic growing neural model that can find by itself the number of categories. There are still a number of questions to be addressed: it should be possible to automate the selection of the marker threshold, and to measure the impact on the results of moving this threshold down or up.

References

1. Lebboss, G., Bernard, G., Aliane, N., Hajjar, M.: Towards the enrichment of Arabic WordNet with big corpora. In: Proceedings of the 9th International Joint Conference on Computational Intelligence, vol. 1, pp. 101–109 (2017)
2. Black, W., Elkateb, S., Rodriguez, H., Alkhalifa, M., Vossen, P., Pease, A., Fellbaum, C.: Introducing the Arabic WordNet project. In: Sojka, P., Choi, F., Vossen, P. (eds.) Proceedings of the Third International WordNet Conference, pp. 295–300 (2006)
3. Regragui, Y., Abouenour, L., Krieche, F., Bouzoubaa, K., Rosso, P.: Arabic WordNet: new content and new applications. In: Proceedings of the Eighth Global WordNet Conference, pp. 330–338, Bucharest, Romania (2016)
4. Rodriguez, H., Farwell, D., Farreres, J., Bertran, M., Alkhalifa, M., Martí, M.A., Black, W., Elkateb, S., Kirk, J., Pease, A., Vossen, P., Fellbaum, C.: Arabic WordNet: current state and future extensions. In: Proceedings of the Fourth Global WordNet Conference, Hungary, pp. 387–405 (2008)
5. Miller, G.A.: WordNet: a lexical database for English. Commun. ACM 38(11), 39–41 (1995)
6. Miller, G.A., Fellbaum, C., Tengi, R., Wolff, S., Wakefield, P., Langone, H., Haskell, B.: WordNet 2.1. Cognitive Science Laboratory, Princeton University (2005)
7. Lebboss, G.: Contribution à l'analyse sémantique des textes arabes. Ph.D. thesis, University Paris 8, France
8. Al-Barhamtoshy, H.M., Al-Jideebi, W.H.: Designing and implementing Arabic WordNet semantic-based. In: The 9th Conference on Language Engineering, pp. 23–24 (2009)
9. Vossen, P.: EuroWordNet: a multilingual database of autonomous and language-specific WordNets connected via an inter-lingual index. Int. J. Lexicogr. 17(2), 161–173 (2004)
10. Alkhalifa, M., Rodriguez, H.: Automatically extending named entities coverage of Arabic WordNet using Wikipedia. Int. J. Inf. Commun. Technol. 1(1), 1–17 (2008)



11. Abouenour, L., Bouzoubaa, K., Rosso, P.: Improving Q/A using Arabic WordNet. In: Proceedings of the 2008 International Arab Conference on Information Technology (ACIT'2008), Tunisia (2008)
12. Niles, I., Pease, A.: Linking lexicons and ontologies: mapping WordNet to the suggested upper merged ontology. In: Proceedings of the International Conference on Information and Knowledge Engineering (IKE '03), vol. 2, pp. 412–416, Las Vegas, Nevada, USA (2003)
13. Abouenour, L., Bouzoubaa, K., Rosso, P.: Using the Yago ontology as a resource for the enrichment of named entities in Arabic WordNet. In: Proceedings of The 7th International Conference on Language Resources and Evaluation (LREC 2010) Workshop on Language Resources and Human Language Technology for Semitic Languages, pp. 27–31 (2010)
14. Abouenour, L., Bouzoubaa, K., Rosso, P.: On the evaluation and improvement of Arabic WordNet coverage and usability. Lang. Resour. Eval. 47(3), 891–917 (2013)
15. Abdulhay, A.: Constitution d'une ressource sémantique arabe à partir d'un corpus multilingue aligné. Ph.D. thesis, Université de Grenoble (2012)
16. Al Hajjar, A.E.S.: Extraction et gestion de l'information à partir des documents arabes. Ph.D. thesis, University of Paris 8 (2010)
17. Hajjar, M., Al Hajjar, A.E.S., Abdel Nabi, Z., Lebboss, G.: Semantic enrichment of the iSPEDAL corpus. In: 3rd World Conference on Innovation and Computer Science (INSODE) (2013)
18. Abdelali, B., Tlili-Guiassa, Y.: Extraction des relations sémantiques à partir du Wiktionnaire arabe. Revue RIST 20(2), 47–56 (2013)
19. Raafat, H., Zahran, M., Rashwan, M.: Arabase: a database combining different Arabic resources with lexical and semantic information. In: Proceedings of the International Conference on Knowledge Discovery and Information Retrieval, pp. 233–240, Scitepress (2013)
20. Benabdallah, A., Abderrahim, M.A., Abderrahim, M.E.A.: Extraction of terms and semantic relationships from Arabic texts for automatic construction of an ontology. Int. J. Speech Technol. 20, 289 (2017). https://doi.org/10.1007/s10772-017-9405-5
21. El Moatez, N., Didier, D.: Semantic similarity of Arabic sentences with word embeddings. In: Proceedings of the Third Arabic Natural Language Processing Workshop, pp. 18–24, Valencia (2017)
22. Gahbiche-Braham, S., Bonneau-Maynard, H., Lavergne, T., Yvon, F.: Joint segmentation and POS tagging for Arabic using a CRF-based classifier. In: LREC, pp. 2107–2113 (2012)
23. Kohonen, T.: Self-Organizing Maps. Springer, Berlin (1995)
24. Harris, Z.S.: Distributional structure. Word 10(2–3), 146–162 (1954)
25. Salton, G., Wong, A., Yang, C.-S.: A vector space model for automatic indexing. Commun. ACM 18(11), 613–620 (1975)
26. Lund, K., Burgess, C., Atchley, R.A.: Semantic and associative priming in high-dimensional semantic space. In: Proceedings of the 17th Annual Conference of the Cognitive Science Society, vol. 17, pp. 660–665 (1995)
27. Honkela, T., Kaski, T., Lagus, K., Kohonen, T.: WEBSOM—self-organizing maps of document collections. In: Proceedings of WSOM'97, Workshop on Self-Organizing Maps, Espoo, Finland, pp. 310–315, Helsinki University of Technology (1997)
28. Bernard, G.: Experiments on distributional categorization of lexical items with Self Organizing Maps. In: International Workshop on Self Organizing Maps WSOM'97, pp. 304–309 (1997)
29. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. In: Proceedings of the International Conference on Learning Representations, Workshop Track (2013)
30. Pennington, J., Socher, R., Manning, C.D.: Glove: global vectors for word representation. In: EMNLP 14, pp. 1532–1543 (2014)
31. Klein, D., Manning, C.D.: Accurate unlexicalized parsing. In: Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, vol. 1, pp. 423–430, Association for Computational Linguistics (2003)



32. Green, S., Manning, C.D.: Better Arabic parsing: baselines, evaluations, and analysis. In: COLING (2010)
33. Khoja, S., Garside, R., Knowles, G.: An Arabic tagset for the morphosyntactic tagging of Arabic. In: A Rainbow of Corpora: Corpus Linguistics and the Languages of the World, pp. 341–350 (2001)
34. Khoja, S., Garside, R.: Stemming Arabic Text. Computing Department, Lancaster University, Lancaster (1999)
35. Aljlayl, M., Frieder, O.: On Arabic search: improving the retrieval effectiveness via a light stemming approach. In: Proceedings of ACM Eleventh Conference on Information and Knowledge Management, McLean, VA (2002)
36. Hammo, B., Abu-Salem, H., Lytinen, S., Evens, M.: A question answering system to support the Arabic language. In: Proceedings of the ACL-02 Workshop on Computational Approaches to Semitic Languages, Philadelphia, Pennsylvania, pp. 1–11 (2002)
37. Nwesri, A.F.A., Tahaghoghi, S.M.M., Scholer, F.: Capturing out-of-vocabulary words in Arabic text. In: Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, Sydney, Australia (2006)
38. Harris, Z.S.: Mathematical Structures of Language. Wiley, New York (1968)
39. Morrison, D.R.: PATRICIA: practical algorithm to retrieve information coded in alphanumeric. J. ACM 15(4), 514–534 (1968)
40. Takagi, T., Inenaga, S., Sadakane, K., Arimura, H.: Packed compact tries: a fast and efficient data structure for online string processing (2016). https://doi.org/10.1587/transfun.E100.A.1785, arXiv:1602.00422
41. Bird, S., Loper, E., Klein, E.: Natural Language Processing with Python. O'Reilly Media Inc. (2009)
42. Wu, Z., Palmer, M.: Verbs semantics and lexical selection. In: Proceedings of the 32nd Annual Meeting on Association for Computational Linguistics, pp. 133–138, Stroudsburg, PA, USA (1994)
43. Goldberg, Y.: On the importance of comparing apples to apples: a case study using the GloVe model. Google docs (2014)
44. Levy, O., Goldberg, Y., Dagan, I.: Improving distributional similarity with lessons learned from word embeddings. Trans. Assoc. Comput. Linguist. 3, 211–225 (2015)

Efficient Approaches for Solving the Large-Scale k-Medoids Problem: Towards Structured Data

Alessio Martino, Antonello Rizzi and Fabio Massimo Frattale Mascioli

Abstract The possibility of clustering objects represented by structured data with possibly non-trivial geometry certainly is an interesting task in pattern recognition. Moreover, in the Big Data era, the possibility of clustering huge amounts of (structured) data challenges computer science and pattern recognition researchers alike. The aim of this paper is to bridge the gap on large-scale structured data clustering. Specifically, following a previous work, in this paper a parallel and distributed k-medoids clustering implementation is proposed and tested on real-world biological structured data, namely pathway maps (graphs) and primary structures of proteins (sequences). Furthermore, two methods for medoid evaluation, based on exact and approximate procedures respectively, are proposed and compared in terms of scalability. Computational results show that the proposed implementation is flexible with respect to the dissimilarity measure and the input space adopted, with satisfactory results in terms of scalability.

Keywords Cluster analysis · Parallel and distributed computing · Large-scale pattern recognition · Unsupervised learning · Big Data mining · Non-metric spaces analysis

A. Martino (B) · A. Rizzi · F. M. Frattale Mascioli
Department of Information Engineering, Electronics and Telecommunications, University of Rome “La Sapienza”, Via Eudossiana 18, 00184 Rome, Italy
e-mail: [email protected]
A. Rizzi e-mail: [email protected]
F. M. Frattale Mascioli e-mail: [email protected]

1 Introduction

The recent explosion of interest in data mining and knowledge discovery has put cluster analysis in the spotlight again. Cluster analysis is the most basic approach for extracting regularities from datasets. In plain terms, clustering a dataset consists in




discovering clusters (groups) of patterns such that similar patterns fall within the same cluster, whereas dissimilar patterns fall in different clusters. Indeed, not only different algorithms, but different families of algorithms have been proposed in the literature: partitional clustering (e.g. k-means [1, 2], k-medians [3], k-medoids [4]), which breaks the dataset into k non-overlapping clusters; density-based clustering (e.g. DBSCAN [5], OPTICS [6]), which detects clusters as the most dense regions of the dataset; and hierarchical clustering (e.g. BIRCH [7], CURE [8]), where clusters are found by (intrinsically) building a dendrogram in either a top-down or a bottom-up approach.

Moreover, in the Big Data era, the need for clustering massive datasets has emerged. Efficient and rather easy-to-use large-scale processing frameworks such as MapReduce [9] or Apache Spark [10] have been proposed, gaining a lot of attention from computer science and machine learning researchers alike. As a matter of fact, several large-scale machine learning algorithms have been grouped in MLlib [11], the machine learning library built into Apache Spark.

The core of this work is k-medoids, a (hard) partitional clustering algorithm similar to the widely-known k-means. However, in contrast to k-means, designing a parallel and distributed k-medoids version is not trivial, due to the fact that the medoid evaluation is quadratic in space and time and might therefore be unsuitable for large clusters. In a previous work [12], a novel k-medoids implementation based on Apache Spark has been proposed, with two different procedures for medoid evaluation: an exact procedure and an approximate one. The former, albeit exact, requires the evaluation of the entire intra-cluster distance matrix, which, as already noted, might be unsuitable for large clusters. The latter overcomes this problem by adopting a scan-and-replace workflow, but returns a (sub)optimal medoid amongst the patterns in the cluster at hand.

The remainder of the paper is structured as follows: Sect. 2 summarises the k-medoids problem; in Sect. 3 the proposed approaches and related implementations are described; in Sect. 4 the efficiency and effectiveness of both proposed medoid evaluation routines are evaluated and discussed; Sect. 5 draws some conclusions.

1.1 Contribution and State of the Art Review

As introduced in the previous section, not only has a parallel and distributed k-means implementation been proposed in MapReduce [13], but one is also included in MLlib. As far as k-medoids is concerned, to the best of our knowledge, there are very few parallel and distributed implementations proposed in the literature. In [14] an implementation based on Hadoop/MapReduce has been proposed where, similarly to [13], the Map phase evaluates the point-to-medoid assignment, whereas the Reduce phase re-evaluates the new medoids. Results show how the proposed implementation scales well with the dataset size, but considerations regarding effectiveness are lacking. In [15] the k-medoids problem is decomposed (space partitioning) into many local search problems which are solved in parallel. In [16] the



k-medoids problem is again solved using Hadoop/MapReduce with a three-phase workflow (Map-Combiner-Reduce). Although it is unclear how the new medoids are evaluated, the authors show both efficiency and effectiveness. Specifically, the latter is measured on a single labelled dataset, thus casting the clustering problem (unsupervised by definition) as a post-supervised learning problem. Both [14, 16] start from Partitioning Around Medoids (PAM), the most famous algorithm for solving the k-medoids problem [17]: a greedy algorithm which scans all patterns in a given cluster and, for each point, checks whether using that point as a medoid further minimises the objective function (Sect. 2); if so, that point becomes the new medoid.

In a previous work [12], a k-medoids implementation based on Apache Spark has been proposed which, thanks to its caching functionalities, is more efficient than MapReduce, especially when dealing with iterative algorithms (Sect. 3.1) [18]. Rather than using PAM, the (large-scale) k-medoids problem has been solved with the implementation proposed in [19], which has a very k-means-like workflow and therefore allows relying on a very easy-to-parallelise algorithm flowchart (see e.g. [13]). The critical task is re-evaluating/updating the medoids, since this procedure consists in evaluating the pairwise distances amongst the patterns in a given cluster in order to find the new medoid (namely, the element which minimises the sum of distances). Two alternatives have been proposed: an Exact Medoid Update evaluation based on the cartesian product (suitable for small/medium clusters) and an Approximate Medoid Tracking method (suitable for large clusters as well), originally proposed in [20].

The major contribution of this paper is to address the capability of the large-scale k-medoids implementation proposed in [12] to process structured data, as, to the best of our judgement, this is one of the most intriguing facets of k-medoids clustering. As a case study, real-world biological datasets will be considered, since in biology it is rather common to deal with structured data such as graphs or sequences [21]. Additional contributions are the implementation of a procedure to re-initialise a cluster if it loses all of its members, and the implementation of a locally-parallel evaluation of the distance matrix for user-defined “small clusters” (useful to boost the Approximate Medoid Tracking method).

2 The k-Medoids Clustering Problem

k-medoids is a hard partitional clustering algorithm; it aims at partitioning the dataset $S = \{x_1, x_2, ..., x_N\}$ into $k$ non-overlapping groups (clusters), i.e. $S = \{S_1, ..., S_k\}$ such that $S_i \cap S_j = \emptyset$ if $i \neq j$ and $\cup_{i=1}^{k} S_i = S$. In order to find the optimal partition, k-medoids minimises the following objective function, namely the Within-Clusters Sum-of-Distances:

$$WCSoD = \sum_{i=1}^{k} \sum_{x \in S_i} d(x, m(i))^2 \qquad (1)$$



where $d(\cdot, \cdot)$ is a suitable dissimilarity measure, usually the standard Euclidean distance, and $m(i)$ is the medoid for cluster $i$. In this work, a recent implementation based on the Voronoi iteration is adopted; its main steps can be summarised as follows:

1. Select an initial set of k medoids
2. Expectation Step: assign each data point to the closest medoid
3. Maximisation Step: update each cluster's medoid
4. Loop 2–3 until either the medoids stop updating (or their update is negligible) or a maximum number of iterations is reached.

It is worth stressing that minimising the sum of squares (1) is an NP-hard problem [22], and therefore all methods (heuristics) proposed in the literature strictly depend, both in terms of efficiency and effectiveness, on the initial medoids: this led to initialisation heuristics such as k-means++ [23] or DBCRIMES [24].

It has already been noted that evaluating the medoid is more complex than evaluating the mean, since it requires the complete pairwise distance matrix $D$ between all points in a given cluster, say $S_h$, formally defined as an $|S_h| \times |S_h|$ real-valued matrix whose generic entry is given by

$$D_{i,j}^{(S_h)} = d(x_i, x_j) \qquad (2)$$

for any two patterns $x_i, x_j \in S_h$. Despite this complexity, by minimising the sum of pairwise distances rather than the squared Euclidean distance from the average point (as k-means does), k-medoids is more robust with respect to noise and outliers. Moreover, k-medoids is particularly suited when the cluster prototype must be an element of the dataset (the same is generally not true for k-means), and it is applicable to ideally any input space, not necessarily metric, since there is no need to define an algebraic structure in order to compute the representative element of a cluster.
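As an illustration of the medoid computation via Eq. (2), the following hedged NumPy sketch returns the medoid of a set of patterns under an arbitrary dissimilarity d; it is quadratic in the cluster size, which is precisely the bottleneck discussed in this paper.

import numpy as np

def medoid(X, d=lambda a, b: np.linalg.norm(a - b)):
    # Build the pairwise distance matrix of Eq. (2) and return the
    # index of the element minimising its row sum, i.e. the medoid.
    n = len(X)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = d(X[i], X[j])
    return int(np.argmin(D.sum(axis=1)))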

3 Proposed Approach

3.1 A Quick Spark Introduction

In this work Apache Spark (hereinafter simply Spark) has been chosen over Hadoop/MapReduce due to its efficiency, as introduced in Sect. 1.1. Indeed, MapReduce forces any algorithm pipeline into a sequence of Map → Reduce steps, possibly with an intermediate Combiner phase, which strains the implementation of more complex operations such as joins or filters, as they have to be cast into Map → Reduce as well. In MapReduce, computing nodes do not have memory of past executions: this does not suit the implementation of iterative algorithms, as the master process



must forward data to workers at each MapReduce job, which typically corresponds to a single iteration of a given algorithm. Spark overcomes these problems, as it includes highly efficient map, reduce, join, filter (and many other) operations which are not only natively distributed, but can be arbitrarily pipelined together. As far as iterative algorithms are concerned, Spark offers caching in memory and/or on disk, so there is no need to forward data back and forth from/to workers at each iteration.

The atomic data structures in Spark are the so-called Resilient Distributed Datasets (RDDs) [25]: distributed (across workers¹) collections of data with fault-tolerance mechanisms, which can be created starting from many sources (distributed file systems, databases, text files and the like) or by applying transformations to other RDDs. Examples of transformations² which will turn out useful in the following are:

map(): RDD2 = RDD1.map(f) creates RDD2 by applying function f to every element in RDD1
join(): RDD3 = RDD2.join(RDD1) creates RDD3 containing pairs with matching keys from RDD1 and RDD2
filter(): RDD2 = RDD1.filter(pred) creates RDD2 by filtering elements from RDD1 which satisfy predicate pred (i.e. if pred is True)
union(): RDD3 = RDD1.union(RDD2) creates RDD3 by vertically stacking RDD1 and RDD2
cartesian(): RDD3 = RDD1.cartesian(RDD2) creates RDD3 as the cartesian product between RDD1 and RDD2, namely RDD3 will contain all possible pairs of items drawn from RDD1 and RDD2
reduceByKey(): RDD2 = RDD1.reduceByKey(f) creates RDD2 by merging, according to function f, all values for each key in RDD1: RDD2 will have the same keys as RDD1 and a new value obtained by the merging function

Similarly, examples of actions which can be applied to RDDs are:

count(): RDD.count() counts the number of elements in RDD
collect(): RDD.collect() collects in memory on the master node the entire RDD content.
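The following self-contained toy pipeline, with made-up data and names, illustrates the transformations and actions listed above in a local pyspark session; it is not part of the proposed implementation.

from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-demo")
patterns = sc.parallelize([(0, 1.5), (1, 0.2), (2, 3.1)])   # (ID, value)
labels   = sc.parallelize([(0, "a"), (1, "b"), (2, "a")])   # (ID, cluster)

joined = patterns.join(labels)                   # (ID, (value, cluster))
big    = joined.filter(lambda kv: kv[1][0] > 1)  # keep values above 1
per_cluster_sum = big.map(lambda kv: (kv[1][1], kv[1][0])) \
                     .reduceByKey(lambda a, b: a + b)
print(per_cluster_sum.collect())                 # [('a', 4.6)]
sc.stop()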

¹ In Spark, workers are the elementary computational units, which can be either single cores on a CPU or entire computers, depending on the environment configuration (i.e. the Spark Context).
² For a more extensive list, we refer the interested reader to the official Apache Spark RDD API at https://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.



3.2 Main Algorithm

The main parallel and distributed k-medoids flowchart is summarised in Algorithm 1. It is possible to figure datasetRDD as a key-value pair RDD,³ namely an RDD where each record has the form ⟨key ; value⟩. Specifically, records in datasetRDD have the form ⟨i ; xi⟩, where i is a sequential integer ID and xi is the i-th pattern composing the dataset.

At the beginning of each iteration, a new key-value pair RDD is created (distancesRDD), whose generic record has the form ⟨i ; distanceVectori⟩, where the key is still the pattern ID, whereas the value (distanceVectori) is a k-dimensional real-valued vector whose j-th element contains the distance between the i-th pattern and the j-th medoid. The following RDD (nearestClusterRDD) maps each pattern (still thanks to its ID) to the nearest distance value and the nearest cluster ID (i.e. medoid). Its generic record will thus have the form ⟨i ; min(distanceVectori) ; argmin(distanceVectori)⟩. These evaluations end the Expectation Step of the Voronoi iteration (Sect. 2).

nearestClusterRDD is the main RDD for three important routines:

– the medoids update step (i.e. the Maximisation Step of the Voronoi iteration), as it contains the pattern-to-medoid (also, pattern-to-cluster) assignments. Indeed, it is possible to create patternsInClusterRDD, namely an RDD containing only the patterns belonging to a given cluster. This can be done by joining nearestClusterRDD and datasetRDD according to the ID (namely, mapping each pattern with its cluster ID) and then filtering by cluster ID (thus discarding patterns not belonging to the cluster under analysis). This RDD is, in turn, the main input for the medoids update step, which will be discussed separately in Sect. 3.4. Furthermore, this RDD still is a key-value pair of the form ⟨ĩ ; x̃i⟩, where ĩ is a sequential integer within-cluster ID.⁴

³ In the following subsections and pseudocodes, every variable whose name ends with RDD shall be considered an RDD, therefore a data structure whose shards are distributed across the workers or, similarly, with no guarantee that a single worker has its entire content in memory.
⁴ Not to be confused with i in datasetRDD, distancesRDD and nearestClusterRDD, which is a within-dataset pattern ID.



– checking whether there are empty clusters. Indeed, it is possible to strip the second value (namely, the argmin(·), i.e. the cluster ID) and collect the unique (distinct) values. This list contains the cluster IDs which have at least one pattern; cluster IDs not appearing in it are, by exclusion, empty and shall be re-spawned. This procedure is discussed separately in Sect. 3.3.
– evaluating the WCSoD, as in Eq. (1). Indeed, it is possible to strip the first value (namely, the min(·), i.e. the pattern-to-medoid distance) of each record and take the sum of said values. For the sake of argument, one might object that this WCSoD value refers to a not-yet-updated pattern-to-medoid assignment; however, it is worth remarking that in order to check for convergence two consecutive iterations are needed, so the final WCSoD can safely be considered up-to-date. On the other hand, depending on the stopping criteria adopted, the WCSoD evaluation can also be discarded altogether.⁵ The WCSoD evaluation has been deliberately presented not only for consistency with the objective function (1), and because the difference between its values at consecutive iterations is commonly compared against a given threshold in stopping criteria, but also because it is widely used to determine the optimal number of clusters (e.g. the elbow plot [26]) or in case of multi-start with stochastic selection of the initial seeds.

3.3 Re-spawning Clusters

In k-clustering, for some datasets and/or values of k, a cluster might lose all of its members. There are three main heuristics to overcome this problem:
1. drop the empty cluster altogether: this procedure, however, does not guarantee to find exactly k clusters;
2. initialise a new singleton cluster with a random data point: this procedure, however, might select patterns near to their respective medoids, ruining cluster quality;
3. initialise a new singleton cluster with the farthest pattern from its respective medoid, in order to (a) keep k clusters and (b) avoid ruining good clusters by selecting peripheral data points.

Algorithm 2 shows the pseudocode for the third heuristic, where k̄ indicates the empty cluster ID. Recalling from Sect. 3.2 the structure of nearestClusterRDD, the record corresponding to the greatest pattern-to-medoid distance (namely, the min(·)) is collected, and the pattern ID (namely, i) and the original cluster it belongs to (namely, the argmin(·)) are retained. Said record is then filtered away from nearestClusterRDD and replaced with a new record which maps the farthest pattern ID to the empty cluster ID (k̄). Since this pattern is also the new medoid for cluster k̄, as in Line 6, its distance from the medoid itself is set to 0.

⁵ Indeed, in many cases the pattern-to-medoid assignments rather than the WCSoD are compared between consecutive iterations.


Algorithm 1. Pseudocode for the main k-medoids skeleton (namely, Voronoi iterations). Based on [12].

    Load datasetRDD from source (e.g. text file);
    Cache datasetRDD on workers;
    Set maxIterations, k, approximateTrackingFlag;
    medoids = initialSeeds();
    for iter in range(1:maxIterations) do
        previousMedoids = medoids;
        distancesRDD = datasetRDD.map(d(pattern, medoids));
        nearestClusterRDD = distancesRDD.map(min(distanceVector), argmin(distanceVector));
        WCSoD = nearestClusterRDD.values(1).sum();
        nonEmptyClusters = nearestClusterRDD.values(2).distinct().collect();
        for j in range(1:k) do
            if j not in nonEmptyClusters then
                medoids[j] = respawnCluster();
            end
        end
        for j in range(1:k) do
            patternsInClusterRDD = nearestClusterRDD.join(datasetRDD).filter(clusterID is j);
            if patternsInClusterRDD is empty then
                medoids[j] = None;
                continue;
            end
            if approximateTrackingFlag is True then
                medoids[j] = approximateMedoidTracking();
            else
                medoids[j] = exactMedoidUpdate();
            end
        end
        if stoppingCriteria is True then
            break;
        end
    end

Algorithm 2. Pseudocode for the re-spawning clusters routine.
Input: nearestClusterRDD, the empty cluster ID k̄ and the medoids list medoids
Output: nearestClusterRDD

    1  farthestRecord = nearestClusterRDD.values(1).max();
    2  farthestPoint = farthestRecord[1];
    3  originalCluster = farthestRecord[3];
    4  nearestClusterRDD = nearestClusterRDD.filter(patternID is not farthestPoint);
    5  nearestClusterRDD = nearestClusterRDD.union([farthestPoint, 0, k̄]);
    6  medoids[k̄] = datasetRDD.filter(patternID is farthestPoint);
    7  return nearestClusterRDD


3.4 Updating Medoids

3.4.1 Exact Medoid Update

The Exact Medoid Update procedure consists in evaluating the pairwise distance matrix amongst all the patterns in a given cluster and then finding the element which minimises the sum by rows/columns⁶ of such matrix. The end-user can define a priori a parameter T, a threshold value which states the maximum allowed cluster cardinality in order to consider such cluster as "small". This parameter can be tuned by considering both the expected cluster sizes and the amount of memory available on the master node. Algorithm 3 describes the Exact Medoid Update routine. Basically, if the cluster is "small", it is possible to collect the entire set of patterns on the master node and therefore use one of the many in-memory⁷ algorithms (pdist) for evaluating the pairwise distance matrix according to a given dissimilarity measure d(·, ·). Conversely, if the cluster is "not-small", via Spark it is possible to evaluate the cartesian product of this cluster's RDD with itself, leading to a new RDD (pairsRDD) where each record is a pair of records from the original RDD; therefore it is possible to figure pairsRDD to have the form ⟨⟨ĩ; xĩ⟩; ⟨j̃; xj̃⟩⟩. Given these pairs, evaluating the dissimilarity measure is straightforward and distancesRDD will have the form ⟨ĩ; j̃; d(xĩ, xj̃)⟩. Given the analogy with the pairwise distance matrix, in order to perform the sum by rows or columns, distancesRDD will be reduced by using either the ID of the first pattern (by rows) or the ID of the second pattern (by columns) as key, and using the addition operator on values. In other words, for ĩ (or j̃) all the distances with other IDs (i.e. other patterns) are summed together. Given these sums, the final step consists in evaluating the minimum of the resulting RDD considering the sum of distances rather than the ID, leading to medoidID, the ID of the new medoid, which will be filtered from patternsInClusterRDD and returned as the new, updated medoid (a minimal PySpark sketch of this distributed branch is given below).

⁶ In the following, we suppose that the dissimilarity measure d(·, ·), although it might not be metric, is at least symmetric, i.e. d(a, b) = d(b, a). For symmetric dissimilarity measures the pairwise distance matrix is symmetric by definition, and the sum by rows or columns leads to the same result. In case of non-symmetric dissimilarity measures, it is possible to 'force' symmetry by letting (for example) d̄(a, b) = ½ (d(a, b) + d(b, a)).
⁷ Indeed, for very small clusters a Spark-driven parallelisation is discouraged, as the majority of the execution time would be spent on thread scheduling and worker communication rather than processing.
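As a hedged illustration of the distributed branch just described (our sketch, not the authors' implementation; patternsInClusterRDD is assumed to hold ⟨ĩ; xĩ⟩ records and d(·, ·) to be any symmetric dissimilarity defined as above):

    from operator import add

    # Cartesian product: records of the form ((i~, x_i~), (j~, x_j~))
    pairsRDD = patternsInClusterRDD.cartesian(patternsInClusterRDD)

    # Key each pair by the first pattern ID: (i~, d(x_i~, x_j~))
    rowDistancesRDD = pairsRDD.map(lambda p: (p[0][0], d(p[0][1], p[1][1])))

    # Sum "by rows" of the implicit pairwise distance matrix
    rowSumsRDD = rowDistancesRDD.reduceByKey(add)

    # The new medoid minimises the row sum (min by value, not by key)
    medoidID = rowSumsRDD.min(key=lambda kv: kv[1])[0]
    newMedoid = (patternsInClusterRDD
                 .filter(lambda kv: kv[0] == medoidID)
                 .collect()[0][1])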


The distributed Exact Medoid Update routine (the else branch in Algorithm 3) has time and space complexity of O(C²/p), where C is the cluster size and p is the number of processing units (workers). Conversely, the non-distributed counterpart (the if branch in Algorithm 3) has time and space complexity of O(T²), where ideally T ≪ C. If the measure d(·, ·) is at least symmetric, it is possible to evaluate only the upper (lower) triangular part of the distance matrix and then generate the lower (upper) triangular part by taking its transpose. This remark drops the time complexity to O(½·C²/p) for the distributed case and to O(½·T²) for the non-distributed case, respectively.

3.4.2 Approximate Medoid Tracking

Whilst the (large-scale) Exact Medoid Update (Algorithm 3, namely the else branch) relies on natively distributed and highly efficient Spark operations, the cartesian product might be infeasible for very large clusters, since it creates an RDD whose size is the square of the cluster size (i.e. the square of the input RDD size) according to Eq. (2).

Algorithm 3. Pseudocode for the Exact Medoid Update routine.
Input: cluster contents patternsInClusterRDD
Output: the new medoid for the input cluster

    if patternsInClusterRDD.count() ≤ T then
        patterns = patternsInClusterRDD.collect();
        distanceMatrix = pdist(patterns, d);
        Sum distanceMatrix by rows or columns;
        medoidID = argmin of rows/columns sum;
        return patterns[medoidID];
    else
        pairsRDD = patternsInClusterRDD.cartesian(patternsInClusterRDD);
        distancesRDD = pairsRDD.map(d(pattern1, pattern2));
        distancesRDD = distancesRDD.reduceByKey(add);
        medoidID = distancesRDD.min(key=dist);
        return patternsInClusterRDD.filter(ID == medoidID).collect();
    end

To this end, a second, approximate medoid evaluation based on [20] is proposed, which can be summarised as follows:
1. set a pool size P and fill the pool with the first P patterns from the cluster at hand;
2. for every remaining pattern x, select uniformly at random two items from the pool (say, x₁ and x₂) and check their distances with respect to the current medoid m:
   • if d(x₁, m) ≥ d(x₂, m), remove x₁ from the pool;
   • if d(x₁, m) < d(x₂, m), remove x₂ from the pool;


   • insert x in the pool;
3. evaluate the new medoid with any standard in-memory routine using the patterns in the pool.

Algorithm 4 describes the Approximate Medoid Tracking routine. As in Algorithm 3, if the cluster is "small" there is no need to trigger any parallel and/or distributed medoid evaluation, which can thus be done locally, in an exact manner. Conversely, if the cluster is "not-small", the first P items are cached and removed from the RDD which contains the patterns in such cluster. Finally, the above scan-and-replace starts on the remainder of the RDD (a plain-Python sketch of the pool update follows Algorithm 4).

Algorithm 4. Pseudocode for the Approximate Medoid Tracking routine.
Input: cluster contents patternsInClusterRDD
Output: the new medoid for the input cluster

    if patternsInClusterRDD.count() ≤ P then
        patterns = patternsInClusterRDD.collect();
        distanceMatrix = pdist(patterns, d);
        Sum distanceMatrix by rows or columns;
        medoidID = argmin of rows/columns sum;
        return patterns[medoidID];
    else
        pool = patternsInClusterRDD.filter(ID in range(1:P)).collect();
        patternsInClusterRDD = patternsInClusterRDD.filter(ID not in range(1:P));
        for pattern in patternsInClusterRDD do
            Extract x₁ ≠ x₂ from pool;
            if d(x₁, medoids[clusterID]) ≥ d(x₂, medoids[clusterID]) then
                Remove x₁ from pool;
            else
                Remove x₂ from pool;
            end
            pool = pool.append(pattern);
        end
        distanceMatrix = pdist(pool, d);
        Sum distanceMatrix by rows or columns;
        medoidID = argmin of rows/columns sum;
        return pool[medoidID];
    end
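The pool update itself is inherently sequential, so a plain-Python sketch conveys it best (our illustration; in a Spark setting the remainder of the cluster would be streamed to the driver, e.g. via toLocalIterator()):

    import random

    def approximate_medoid_tracking(patterns, medoid, d, P):
        # patterns: cluster members; medoid: current medoid; P: pool size.
        patterns = list(patterns)
        pool = patterns[:P]                    # step 1: fill the pool
        for x in patterns[P:]:                 # step 2: scan-and-replace
            i1, i2 = random.sample(range(len(pool)), 2)  # two distinct items
            x1, x2 = pool[i1], pool[i2]
            # evict the pool item farther from the current medoid, insert x
            pool.pop(i1 if d(x1, medoid) >= d(x2, medoid) else i2)
            pool.append(x)
        # step 3: exact in-memory medoid of the pool
        sums = [sum(d(a, b) for b in pool) for a in pool]
        return pool[sums.index(min(sums))]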

The if branch of Algorithm 4 (inherited from Algorithm 3) still has a time and space complexity of O(P²), whereas the Approximate Medoid Tracking has a space complexity of O(C/p) and a time complexity of O(C − P) + O(P²).


3.4.3 On the Local Distance Matrix Evaluation


In both Algorithms 3 and 4, in-memory procedures (pdist) have been presented for evaluating the pairwise distance matrix of small clusters, or for evaluating the pool distance matrix (Algorithm 4). Depending on the programming language adopted, several libraries might be available for efficiently evaluating such matrices or, alternatively, a naïve double for-loop suffices. The former case is particularly true for Minkowski-based dissimilarity measures or, more generally, for dissimilarity measures working on real-valued vectors, whereas the latter case is particularly true for ad-hoc dissimilarity measures (e.g. edit distances, exact/inexact sequence/graph matching) for which built-in optimised routines hardly exist [27, 28]. For computationally expensive dissimilarity measures, a naïve double for-loop might take a long time. Recalling that these in-memory evaluations are performed on the master node, it is possible to exploit the multi-core nature of modern CPUs by parallelising such evaluations. This can be done according to two strategies (a sketch of the second follows this paragraph):
– let a single core build an entire row of the distance matrix (for N patterns, N threads will be spawned, each of which with complexity O(N));
– let a single core compute a single dissimilarity evaluation for a given pair of patterns (for N patterns, N² threads will be spawned, each of which with complexity O(1)).
It is worth remarking that this parallelisation has to be performed locally, by exploiting the multi-core CPU on the master node, and not via Spark. Indeed, as already discussed, triggering massive parallelisations via Spark jobs for small tasks is unlikely to be helpful, as most of the time will be spent on scheduling and communication. Such overhead is far smaller for small-scale parallelism such as local multi-core parallelisation. This remark drops the time complexity of pdist (originally O(T²) for the Exact Medoid Update and O(P²) for the Approximate Medoid Tracking) to O(T²/p̃) and O(P²/p̃), where p̃ is the number of CPU cores on the master node, whilst keeping the space complexity at O(T²) and O(P²), respectively. Again, time complexities can be halved in case of symmetric dissimilarity measures. In this work, the second of the two strategies above is adopted, for the sake of consistency with the Exact Medoid Update routine.
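A minimal sketch of the second (per-pair) strategy using Python's multiprocessing on the master node (our illustration; d must be a picklable, module-level function):

    import itertools
    import numpy as np
    from multiprocessing import Pool

    def local_pdist(patterns, d, n_cores):
        # Per-pair strategy: each dissimilarity evaluation is an O(1) task
        # dispatched to the local core pool (no Spark involved).
        n = len(patterns)
        pairs = list(itertools.product(range(n), repeat=2))
        with Pool(n_cores) as p:
            dists = p.starmap(d, [(patterns[i], patterns[j]) for i, j in pairs])
        return np.array(dists).reshape(n, n)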

4 Experimental Results

The aim of this section is to assess the scalability performance of the proposed distributed k-medoids clustering. Structured datasets have been considered for these experiments not only to show how the proposed implementation scales when clustering data with non-trivial geometry, but also to remark an important facet of k-medoids clustering, that is, its adaptability with respect to the input space domain, given an ad-hoc dissimilarity measure d(·, ·) suitable for dealing with such input space. Specifically, real-world pathway maps (represented by graphs) and proteomes


(represented by sequences) will be considered, and discussed separately in Sects. 4.1 and 4.2, respectively. For the sake of consistency with our previous work [12], minimal changes have been made to the hardware and software setup: a Linux Ubuntu 17.10 workstation with two Intel® Xeon® CPUs @2.60 GHz for a total of 12 physical cores, 32 GB RAM and 1 TB HDD. These 12 cores have been grouped into 4 groups of 3 cores each, in order to simulate 4 PCs working in parallel, which serve as the workers for these experiments. Apache Spark v2.2.1 has been used and, specifically, its Python API (namely, PySpark) driven by Python v2.7.14 with NumPy v1.14.1 [29] and SciPy v1.0.0 [30] for efficient numerical computations [31, 32]. Furthermore, the same two parameters as in [12] will be used for assessing scalability performances:

– speedup measures the ability of the parallel and distributed algorithm to reduce the running time as more workers are considered. More formally, the dataset size is kept constant and the number of workers increases from 1 to m. The speedup for an m-node cluster is defined as:

    speedup(m) = (running time on 1 worker) / (running time on m workers)

  (for instance, if one worker takes 100 s and four workers take 30 s on the same dataset, speedup(4) ≈ 3.3);

– sizeup measures the ability of the parallel and distributed algorithm to manage an m times larger dataset while keeping the number of workers constant. The sizeup for an m times larger dataset S is defined as:

    sizeup(S, m) = (running time for processing m × S) / (running time for processing S)

In Sect. 3.4 it has been remarked that, for symmetric dissimilarity measures, the Exact Medoid Update computational burden can be halved by evaluating only the upper (lower) part of the distance matrix and then building the lower (upper) part by taking its transpose. Although the dissimilarity measures for graphs and sequences used in this work are indeed symmetric (see Sects. 4.1 and 4.2), for the sake of testing the full distance matrix has been evaluated. Results presented herein can therefore be considered as a 'lower bound' for symmetric dissimilarity measures. Conversely, as far as algorithm tuning is concerned, a threshold of T = 20 has been set for the Exact Medoid Update in order to avoid an in-memory medoid evaluation (namely, to avoid the if branch of Algorithm 3) and concentrate the computational effort on solving the Exact Medoid Update in a distributed fashion. Instead, a pool size of P = 500 has been selected for the Approximate Medoid Tracking in order to take advantage of the parallel in-memory pool distance matrix evaluation, as remarked at the end of Sect. 3.4.


Table 1 Pathway maps datasets

Dataset name | # patterns | # metabolites
BA           | 4991       | 2136
BSM          | 5081       | 2413
MMDE         | 4822       | 2289
MP           | 5256       | 4431

4.1 Pathway Maps

A pathway map is a chain of chemical reactions which can be effectively described by a network (graph) of interacting molecules responsible for specific cellular functions. Pathways can be seen as protein (enzyme) networks or chemical networks, depending on whether nodes correspond to gene products or chemical compounds. In this work, chemical networks are considered. The data retrieval process can be summarised as follows:
1. using the Python Bioservices library [33], gather the entire organisms list from the KEGG database [34–36] (to the best of our knowledge, the most famous open-access online pathways database);
2. for each organism, using the KEGG REST API,⁸ check whether the pathway for the following metabolism functions exists: Metabolic pathway (MP), Biosynthesis of secondary metabolites (BSM), Biosynthesis of antibiotics (BA), Microbial metabolism in diverse environments (MMDE); if so, download the pathway in KEGG Markup Language⁹ format;
3. convert the downloaded pathways from KEGG Markup Language (an edge list-like format) to adjacency matrices. Weights on nodes and edges have been deliberately discarded in order to focus on the pathways' topological structure: this remark leads to binary and non-symmetric (directed-graph) adjacency matrices.

In order to ease the definition of an ad-hoc dissimilarity measure between such matrices, for each of the four datasets (MP, BA, BSM, MMDE) the maximal coverage of the whole set of intervening metabolites (nodes) is retained,¹⁰ as in [37]. In this manner, for each dataset, all adjacency matrices have the same number of nodes (i.e. the same size) and the Hamming distance [38] can be selected as a straightforward ad-hoc dissimilarity measure (a minimal sketch is given below). Table 1 summarises the main characteristics of the four pathway maps datasets. It is worth remarking that, for each pattern (organism), the size of the adjacency matrix is the square of the number of nodes (metabolites). Figures 1 and 2 show the performances (sizeup and speedup) for the Exact Medoid Update and the Approximate Medoid Tracking routines, respectively.

⁸ http://www.kegg.jp/kegg/rest/keggapi.html.
⁹ http://www.kegg.jp/kegg/xml/docs/.

¹⁰ It is sufficient for a given node to exist in a single network to be included.
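A minimal NumPy sketch of the Hamming distance between two equally-sized binary adjacency matrices (our illustration; whether and how to normalise is left open):

    import numpy as np

    def hamming_distance(A, B):
        # A, B: binary adjacency matrices over the same (maximal) node set;
        # the Hamming distance counts the entries where the two matrices differ.
        assert A.shape == B.shape
        return int(np.sum(A != B))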

Fig. 1 Pathway maps performances (exact medoid update). [Figure: speedup vs. number of workers and sizeup vs. dataset increasing factor m, for the MP, BSM, MMDE and BA datasets]

Fig. 2 Pathway maps performances (approximate medoid tracking). [Figure: speedup vs. number of workers and sizeup vs. dataset increasing factor m, for the MP, BSM, MMDE and BA datasets]

As far as the speedup is concerned, a linear speedup is the expected behaviour, since an m times larger computational environment (i.e. number of workers) should take m times less time for processing a given dataset. However, a perfectly linear speedup is hard to achieve, especially for large m, since the more workers, the more communication/scheduling overhead between master and workers. Another factor which lowers the speedup performances is the number of dataset (RDD) shards with respect to the number of workers: indeed, if the number of shards is higher than the number of workers, there will be queued tasks. Similarly, as far as the sizeup is concerned, a linear sizeup is the expected behaviour, since an m times larger dataset should take m times more time on a given computational environment. For the speedup tests, the datasets have been kept constant and the number of workers has been varied in the range m = [1, 4]. For the sizeup tests, the number of


workers has been kept constant (all workers have been used) and the dataset increasing factor has been varied in the range m = [1, 4]. The Exact Medoid Update shows very good speedup performances, which approach the linear behaviour as the dataset size increases, meaning that large datasets can be treated efficiently. As already confirmed in [12], the Exact Medoid Update suffers from low sizeup performances (e.g. a 4-times larger dataset needs around 10 times more time). Whilst this is partially due to the quadratic complexity of the medoid update routine, additional comparative tests show that the computational complexity of the dissimilarity measure itself plays a huge role. Indeed, in [12], better sizeup performances were obtained on larger datasets (in terms of number of patterns) with a plain Euclidean distance.¹¹ Furthermore, by considering sparse rather than dense adjacency matrices, the sizeup performances drastically improve. The Approximate Medoid Tracking behaves in a dual fashion with respect to the Exact Medoid Update: whilst the latter outperforms the former in terms of speedup, the former outperforms the latter in terms of sizeup. The worse speedup performances with respect to the former case are due to the fact that the Approximate Medoid Tracking is computationally lighter than the Exact Medoid Update, and a massive parallelisation is unlikely to be helpful. Indeed, by looking at the speedup plot in Fig. 2, with 2 workers (6 cores) a speedup of 1.8 is obtained, which is reasonable, and with 3 workers (9 cores) a speedup of 2.2 is obtained, thus the 'gap' between the number of workers and the speedup factor starts increasing. This means that adding more than 3 workers does not improve the overall running times significantly. Conversely, since the Exact Medoid Update needs to solve a quadratic procedure for the medoids' update, the more computational power, the better. On the other hand, the Approximate Medoid Tracking greatly outperforms the Exact Medoid Update in terms of sizeup, showing a remarkable sub-linear behaviour: a 4-times larger dataset needs approximately 1.3 times more time.

4.2 Primary Structure of Proteins

Proteins are macromolecules in charge of a vast series of functions within living organisms, such as transporting other molecules, DNA replication, response to stimuli, and catalysing metabolic reactions. Originally, proteins are encoded in genes, so they can be represented as DNA sequences. Due to transcription, DNA sequences are converted into RNA sequences, which are loaded onto the ribosome which, in turn, reads three nucleotides at a time and converts each triplet (codon) into one of the twenty amino-acids. This protein representation (sequence of amino-acids) is known as the primary structure. When leaving the ribosome, due to non-covalent

¹¹ For the sake of comparison, it is worth remarking that both the Euclidean distance and the Hamming distance have complexity O(n), where n is the number of attributes. However, in this work, the number of attributes ranges from 2136² to 4431², whereas in [12] it ranged from 4 to 28102.

Table 2 Proteins' primary structures datasets (Min/Max/Mean/Std refer to primary structure length)

Dataset name | # patterns | Min | Max   | Mean | Std
HSA          | 71785      | 5   | 35991 | 337  | 494
DME          | 21973      | 11  | 22949 | 681  | 949
SCE          | 6049       | 16  | 4910  | 485  | 383
ECO          | 4313       | 14  | 2358  | 313  | 210

interactions, the protein folding starts: hydrogen bonds stabilise regularly repeating local (sub-)structures (i.e. the secondary structure), such as α-helices and β-sheets. Starting from the (possible) secondary structure(s), the overall folded shape of the protein is formed (the tertiary structure), where the secondary structure(s) link together in a unique three-dimensional configuration. In the literature, many studies can be found where tertiary structures are modelled by three-dimensional networks [39], namely the so-called Protein Contact Networks [40] (see e.g. [21, 41–46]). However, due to the high variability in proteins' structures (and functions), it is impossible to design a 'shared nodes space' as in the pathway maps case. Furthermore, since networks have already been used to model pathway maps, each protein will be represented by its primary structure, namely its amino-acid sequence. The data retrieval process can be summarised as follows:
1. using the Python Bioservices library, perform a query on the UniProt database [47] and gather the entire proteome in FASTA format for the following four organisms: Homo sapiens (HSA), Escherichia coli str. K-12 (ECO), Drosophila melanogaster (DME), Saccharomyces cerevisiae (SCE);
2. cast each protein from FASTA to a plain sequence, so that the Levenshtein distance [48] can be selected as a straightforward ad-hoc dissimilarity measure [49] (a minimal sketch is given below).

Table 2 summarises the main characteristics of the four proteome datasets in terms of the number of patterns (proteins) and some statistics on the primary structure lengths (minimum, maximum, mean and standard deviation). Figures 3 and 4 show the performances (sizeup and speedup) for the Exact Medoid Update and the Approximate Medoid Tracking routines, respectively. Results obtained with the pathway maps datasets (Figs. 1 and 2) are coherent with the results obtained with the primary structure datasets (Figs. 3 and 4), meaning that the overall parallel and distributed implementation is robust with respect to the dissimilarity measure adopted. Specifically, the Exact Medoid Update features a very good speedup which, again, approaches the expected linear behaviour as the dataset size increases. On the other hand, the Exact Medoid Update does not excel in terms of sizeup (a 4-times larger dataset needs from 7 to 12 times more time), which is, however, improved with respect to the pathway maps case. The Approximate Medoid Tracking still features a remarkable sub-linear behaviour, where a 4-times larger dataset needs from 1.2 to 2.2 times more time.
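A compact dynamic-programming sketch of the Levenshtein distance (our illustration, unrelated to the FPGA implementation of [49]):

    def levenshtein(s, t):
        # Classic DP over prefix lengths: O(len(s)*len(t)) time, O(len(t)) space.
        prev = list(range(len(t) + 1))
        for i, cs in enumerate(s, start=1):
            curr = [i]
            for j, ct in enumerate(t, start=1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (cs != ct)))   # substitution
            prev = curr
        return prev[-1]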


Fig. 3 Proteins' primary structure performances (exact medoid update). [Figure: speedup vs. number of workers and sizeup vs. dataset increasing factor m, for the HSA, DME, SCE and ECO datasets]

Fig. 4 Proteins' primary structure performances (approximate medoid tracking). [Figure: speedup vs. number of workers and sizeup vs. dataset increasing factor m, for the HSA, DME, SCE and ECO datasets]

5 Conclusions

One of the k-medoids advantages is its flexibility with respect to the dissimilarity measure adopted, due to the fact that there is no need to define an algebraic structure for updating clusters' representatives, as only pairwise distances are needed. The same is not true for the other two well-known k-clustering counterparts (k-means and k-medians), for which updating representatives requires the evaluation of the mean and the median, respectively. Especially in input spaces with non-trivial geometry, defining said operators might be meaningless. In this paper, the parallel and distributed k-medoids implementation based on Apache Spark, previously tested on real-valued feature vectors, has been stressed on structured input spaces (namely graphs and sequences). Moreover, additional


improvements have been introduced in order to face the computational complexity of some ad-hoc dissimilarity measures. As a case study, biological datasets have been considered, not only because structured data are very common in biology, but also because high-throughput technologies demand processing of large amounts of data. Computational results on pathway maps and proteins' primary structures show that the Exact Medoid Update scales very well with the number of nodes, meaning that large datasets can be processed efficiently. The Approximate Medoid Tracking has lower speedup performances due to its (lighter) computational burden, as no within-cluster full distance matrix is required. On the other hand, the Approximate Medoid Tracking greatly outperforms the Exact Medoid Update in terms of sizeup, showing a remarkable sub-linear behaviour.

References

1. Lloyd, S.: Least squares quantization in PCM. IEEE Trans. Inf. Theory 28, 129–137 (1982)
2. MacQueen, J.B.: Some methods for classification and analysis of multivariate observations. In: Cam, L.M.L., Neyman, J. (eds.) Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, pp. 281–297. University of California Press (1967)
3. Bradley, P.S., Mangasarian, O.L., Street, W.N.: Clustering via concave minimization. In: Proceedings of the 9th International Conference on Neural Information Processing Systems, NIPS'96, pp. 368–374. MIT Press, Cambridge, MA, USA (1996)
4. Kaufman, L., Rousseeuw, P.J.: Clustering by means of medoids. In: Statistical Data Analysis Based on the L1-Norm and Related Methods (1987)
5. Ester, M., Kriegel, H.P., Sander, J., Xu, X., et al.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, vol. 96, pp. 226–231 (1996)
6. Ankerst, M., Breunig, M.M., Kriegel, H.P., Sander, J.: OPTICS: ordering points to identify the clustering structure. In: Proceedings of the 1999 ACM SIGMOD International Conference on Management of Data, SIGMOD '99, pp. 49–60. ACM, New York, NY, USA (1999)
7. Zhang, T., Ramakrishnan, R., Livny, M.: BIRCH: an efficient data clustering method for very large databases. In: Proceedings of the 1996 ACM SIGMOD International Conference on Management of Data, SIGMOD '96, pp. 103–114. ACM, New York, NY, USA (1996)
8. Guha, S., Rastogi, R., Shim, K.: CURE: an efficient clustering algorithm for large databases. SIGMOD Rec. 27, 73–84 (1998)
9. Dean, J., Ghemawat, S.: MapReduce: simplified data processing on large clusters. Commun. ACM 51, 107–113 (2008)
10. Zaharia, M., Chowdhury, M., Franklin, M.J., Shenker, S., Stoica, I.: Spark: cluster computing with working sets. HotCloud 10, 95 (2010)
11. Meng, X., Bradley, J., Yavuz, B., Sparks, E., Venkataraman, S., Liu, D., Freeman, J., Tsai, D., Amde, M., Owen, S., et al.: MLlib: machine learning in Apache Spark. J. Mach. Learn. Res. 17, 1–7 (2016)
12. Martino, A., Rizzi, A., Frattale Mascioli, F.M.: Efficient approaches for solving the large-scale k-medoids problem. In: Proceedings of the 9th International Joint Conference on Computational Intelligence, Volume 1: IJCCI, INSTICC, pp. 338–347. SciTePress (2017)
13. Zhao, W., Ma, H., He, Q.: Parallel k-means clustering based on MapReduce. In: Jaatun, M.G., Zhao, G., Rong, C. (eds.) Cloud Computing, pp. 674–679. Springer, Berlin, Heidelberg (2009)
14. Yue, X., Man, W., Yue, J., Liu, G.: Parallel k-medoids++ spatial clustering algorithm based on MapReduce (2016). arXiv:1608.06861


15. Arbelaez, A., Quesada, L.: Parallelising the k-medoids clustering problem using space-partitioning. In: Sixth Annual Symposium on Combinatorial Search (2013)
16. Jiang, Y., Zhang, J.: Parallel k-medoids clustering algorithm based on Hadoop. In: 2014 5th IEEE International Conference on Software Engineering and Service Science (ICSESS), pp. 649–652. IEEE (2014)
17. Kaufman, L., Rousseeuw, P.J.: Finding Groups in Data: An Introduction to Cluster Analysis, vol. 344. Wiley (2009)
18. Martino, A., Rizzi, A., Frattale Mascioli, F.M.: Distance matrix pre-caching and distributed computation of internal validation indices in k-medoids clustering. In: 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2018)
19. Park, H.S., Jun, C.H.: A simple and fast algorithm for k-medoids clustering. Expert Syst. Appl. 36, 3336–3341 (2009)
20. Del Vescovo, G., Livi, L., Frattale Mascioli, F.M., Rizzi, A.: On the problem of modeling structured data with the MinSOD representative. Int. J. Comput. Theory Eng. 6, 9 (2014)
21. Martino, A., Giuliani, A., Rizzi, A.: Granular computing techniques for bioinformatics pattern recognition problems in non-metric spaces. In: Pedrycz, W., Chen, S.M. (eds.) Computational Intelligence for Pattern Recognition, pp. 53–81. Springer International Publishing, Cham (2018)
22. Aloise, D., Deshpande, A., Hansen, P., Popat, P.: NP-hardness of Euclidean sum-of-squares clustering. Mach. Learn. 75, 245–248 (2009)
23. Arthur, D., Vassilvitskii, S.: k-means++: the advantages of careful seeding. In: Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1027–1035. Society for Industrial and Applied Mathematics (2007)
24. Bianchi, F.M., Livi, L., Rizzi, A.: Two density-based k-means initialization algorithms for non-metric data clustering. Pattern Anal. Appl. 3, 745–763 (2016)
25. Zaharia, M., Chowdhury, M., Das, T., Dave, A., Ma, J., McCauley, M., Franklin, M.J., Shenker, S., Stoica, I.: Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing. In: Proceedings of the 9th USENIX Conference on Networked Systems Design and Implementation, pp. 2–2. USENIX Association (2012)
26. Thorndike, R.L.: Who belongs in the family? Psychometrika 18, 267–276 (1953)
27. Livi, L., Rizzi, A.: The graph matching problem. Pattern Anal. Appl. 16, 253–283 (2013)
28. Livi, L., Del Vescovo, G., Rizzi, A.: Graph recognition by seriation and frequent substructures mining. In: ICPRAM 2012, Proceedings of the 1st International Conference on Pattern Recognition Applications and Methods, vol. 1, pp. 186–191 (2012)
29. van der Walt, S., Colbert, S.C., Varoquaux, G.: The NumPy array: a structure for efficient numerical computation. Comput. Sci. Eng. 13, 22–30 (2011)
30. Jones, E., Oliphant, T., Peterson, P., et al.: SciPy: open source scientific tools for Python (2001). Accessed 13 Mar 2018
31. Millman, K.J., Aivazis, M.: Python for scientists and engineers. Comput. Sci. Eng. 13, 9–12 (2011)
32. Oliphant, T.E.: Python for scientific computing. Comput. Sci. Eng. 9 (2007)
33. Cokelaer, T., Pultz, D., Harder, L.M., Serra-Musach, J., Saez-Rodriguez, J.: BioServices: a common Python package to access biological web services programmatically. Bioinformatics 29, 3241–3242 (2013)
34. Kanehisa, M., Furumichi, M., Tanabe, M., Sato, Y., Morishima, K.: KEGG: new perspectives on genomes, pathways, diseases and drugs. Nucleic Acids Res. 45, D353–D361 (2016)
35. Kanehisa, M., Goto, S.: KEGG: Kyoto Encyclopedia of Genes and Genomes. Nucleic Acids Res. 28, 27–30 (2000)
36. Kanehisa, M., Sato, Y., Kawashima, M., Furumichi, M., Tanabe, M.: KEGG as a reference resource for gene and protein annotation. Nucleic Acids Res. 44, D457–D462 (2015)
37. Tun, K., Dhar, P.K., Palumbo, M.C., Giuliani, A.: Metabolic pathways variability and sequence/networks comparisons. BMC Bioinform. 7, 24 (2006)
38. Hamming, R.W.: Error detecting and error correcting codes. Bell Labs Tech. J. 29, 147–160 (1950)


39. Giuliani, A., Krishnan, A., Zbilut, J.P., Tomita, M.: Proteins as networks: usefulness of graph theory in protein science. Curr. Protein Pept. Sci. 9, 28–38 (2008)
40. Di Paola, L., De Ruvo, M., Paci, P., Santoni, D., Giuliani, A.: Protein contact networks: an emerging paradigm in chemistry. Chem. Rev. 113, 1598–1613 (2012)
41. Livi, L., Giuliani, A., Sadeghian, A.: Characterization of graphs for protein structure modeling and recognition of solubility. Curr. Bioinform. 11, 106–114 (2016)
42. Livi, L., Maiorino, E., Giuliani, A., Rizzi, A., Sadeghian, A.: A generative model for protein contact networks. J. Biomol. Struct. Dyn. 34, 1441–1454 (2016)
43. Maiorino, E., Rizzi, A., Sadeghian, A., Giuliani, A.: Spectral reconstruction of protein contact networks. Phys. A Stat. Mech. Appl. 471, 804–817 (2017)
44. Martino, A., Maiorino, E., Giuliani, A., Giampieri, M., Rizzi, A.: Supervised approaches for function prediction of proteins contact networks from topological structure information. In: Sharma, P., Bianchi, F.M. (eds.) Image Analysis, pp. 285–296. Springer International Publishing, Cham (2017)
45. Martino, A., Rizzi, A., Frattale Mascioli, F.M.: Supervised approaches for protein function prediction by topological data analysis. In: 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2018)
46. De Santis, E., Martino, A., Rizzi, A., Frattale Mascioli, F.M.: Dissimilarity space representations and automatic feature selection for protein function prediction. In: 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2018)
47. The UniProt Consortium: UniProt: the universal protein knowledgebase. Nucleic Acids Res. 45, D158–D169 (2017)
48. Levenshtein, V.I.: Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady 10, 707–710 (1966)
49. Cinti, A., Bianchi, F.M., Martino, A., Rizzi, A.: A novel algorithm for online inexact string matching and its FPGA implementation (2017). arXiv:1712.03560

Automated Diagnostic Model Based on Isoline Map Analysis of Myocardial Tissue Structure

Olga V. Senyukova, Danuta S. Brotikovskaya, Svetlana G. Gorokhova and Ekaterina S. Tebenkova

Abstract Diagnostics of heart diseases by myocardial tissue structure analysis is very important, since several cardiac pathologies exist that are indistinguishable by other symptoms. The most similar existing methods for automatic diagnostics by myocardium analysis in CT images are based only on intensity histogram features. In this work we describe the proposed method that uses isoline map-based features. We use real MSCT images with manually segmented LV myocardium to compare the existing algorithm and variations of the proposed algorithm, utilizing different strategies of isoline map construction (single- and double-level maps) for further computation of features, feature selection methods (Information Gain and chi-squared test), and algorithms for binary classification of slice images into normal/abnormal classes (SVM and Random Forest). All considered types of isoline map-based diagnostic models demonstrate better results than the histogram-based model. The best diagnostic models achieve 99.7% ROC AUC, 96.7% F-score and 0.7% false negative rate.

The work was supported by the Grant of President of Russian Federation for young scientists No. MK-1896.2017.9 (contract No. 14.W01.17.1896-MK).

O. V. Senyukova (B) · D. S. Brotikovskaya
Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Leninskie Gory, Moscow 119991, Russia
e-mail: [email protected]

D. S. Brotikovskaya
e-mail: [email protected]

S. G. Gorokhova
FSBEI FPE Russian Medical Academy of Continuous Professional Education, Barrikadnaya str., 2/1, Moscow 125993, Russia
e-mail: [email protected]

S. G. Gorokhova · E. S. Tebenkova
Research Clinical Center of JSC Russian Railways, 20, Chasovaya str., Moscow 125315, Russia
e-mail: [email protected]

© Springer Nature Switzerland AG 2019
C. Sabourin et al. (eds.), Computational Intelligence, Studies in Computational Intelligence 829, https://doi.org/10.1007/978-3-030-16469-0_12


Keywords Automated heart disease diagnostics · Heart tissue analysis · LV myocardium · Isoline map · Supervised machine learning · Multispiral computed tomography

1 Introduction

Heart diseases are the leading cause of mortality in Russia and worldwide [10]. The strategies for their treatment vary and depend on the type of pathological processes that affect the heart. Clinical signs and symptoms of many cardiovascular diseases are very similar, which additionally complicates their correct identification. Modern visualization techniques often cannot distinguish between the norm and pathology, as well as between various pathological conditions which are different in their nature and therefore require different treatment tactics. For example, ventricular wall thickening due to adaptation to sport loads may be confused with thickening caused by hypertrophic cardiomyopathy [13]. Serious difficulties arise in the differential diagnosis of different types of dilated cardiomyopathy [7, 14]. Ischemic cardiomyopathy may be a result of coronary artery occlusion by atherosclerotic plaques, chronic myocardial ischemia, or multiple cardiomyocyte loss with focal myocardial scarring. On the other hand, primary dilated cardiomyopathy is characterized by fibrosis and other microstructural myocardial changes without coronary artery disease. Therefore, it is important to consider specific morphological changes caused by different conditions. Modern tomographic methods, e.g. multislice spiral computed tomography (MSCT) and magnetic resonance imaging (MRI), make it possible to identify structural defects of the heart. For their accurate interpretation, in-depth analysis is required that overcomes the limitations of standard diagnostic data processing algorithms. Computer vision and machine learning-based algorithms can be applied for automated diagnostics, because pathologies of heart tissue are visible in MR and CT images (see Fig. 1). Several MRI-based myocardial infarction diagnostic approaches were introduced, based on deep learning algorithms [18], a Bayesian probability model [15], and Linear Discriminant Analysis using intensity characteristics [1]. However, for myocardial structure analysis one key advantage of CT images is that they directly visualize tissue density at each point, and CT scanners are also much more accessible than MRI scanners. Thus CT image analysis is a relevant problem for heart tissue disease diagnostics. To the best of our knowledge, only a few papers exist that are dedicated to heart tissue analysis in CT images using a machine learning approach. The most similar research was presented in [2]. The authors performed analysis of myocardial texture aimed to detect post-myocarditis scars using several CT acquisition techniques: basal scans before and after iodine contrast agent injection, CT angiographic images and a myocardial extracellular volume fraction map. A detection algorithm based on classification


Fig. 1 Contrast-enhanced CT images of heart. Left ventricle myocardium contours are highlighted in green. a Normal myocardium. b Myocardial pathology (red box)

by Random Forest [5] and a feature model based on statistical characteristics of CT image histograms were developed, and promising results were achieved. Another approach to automated myocardial infarction diagnostics, based on myocardium strain modeling and analysis of CT images, was introduced in [17] and further developed in [16]. Supervised machine learning algorithms were applied. For feature selection, the left ventricle was divided into 17 zones using the American Heart Association (AHA) nomenclature [6]. Each zone was used for computation of mean strains and mean intensity. At the classification stage, Random Forest and Support Vector Machine (SVM) [3] algorithms were compared. Experiments revealed consistent improvement when using a combination of strain and intensity-based features compared to a strain-only model, which shows the significance of intensity values for myocardial infarction diagnostics. Both existing methods of tissue analysis involve only histogram-based intensity characteristics of the myocardium area, which are highly sensitive to noise. Previously we proposed a novel algorithm for accurate heart disease diagnostics based on classification of isoline map features of heart tissue [12]. The algorithm was applied to the problem of classification of MSCT slice images based on left ventricle (LV) myocardium microstructural changes. In this work we provide a wider description, deeper analysis and discussion of the proposed diagnostic model. The rest of the paper is organized as follows. Section 2 describes the proposed algorithm of heart disease diagnostics based on isoline map analysis. The experimental results and the discussion are provided in Sect. 3. The conclusions are drawn in Sect. 4.


2 Method

The proposed algorithm for automated heart disease diagnostics based on heart tissue structure analysis consists of two steps: (1) isoline map building and feature extraction, i.e. computation of statistical characteristics of these maps, and (2) classification of extracted features into two classes, "normal" and "abnormal".

2.1 Noise Robustness of Isoline Map Representation

We introduce a feature representation of the considered area (in our case, the myocardium) based on an isoline map. An isoline of a certain level in the image is a curve along which the image has a constant intensity value. An isoline map is a set of isolines of one or several levels. Isoline map analysis is a convenient tool that provides an intuitive representation of data with small computational costs. The approach has been successfully applied to other medical image analysis problems [11]. An isoline map makes it possible to detect certain patterns in the image and provides a robust quantitative description which is more informative than a set of histogram-based characteristics and less sensitive to noise. The analysis provided below demonstrates this useful property of the isoline map-based representation. Consider two contrast-enhanced MSCT images of LV myocardium: a normal myocardium image with noise (Fig. 2a) and an image of tissue with pathology (Fig. 2b). In Fig. 3 the intensity histogram of LV myocardium in the normal case (Fig. 2a) is presented as the blue area. The histogram of LV myocardium tissue with pathology (Fig. 2b) is presented as the red area. A common part of the two histograms is colored with purple. It

Fig. 2 LV myocardium contrast-enhanced MSCT images from [12]. a Normal myocardium with noise in the image. b Myocardium with pathology


Fig. 3 Intensity histogram comparison for normal myocardium with noise in the image (blue, on the right) and myocardium with pathology (red, on the left) based on [12]. A common part of two histograms is colored with purple

can be seen that the areas overlap significantly, and construction of a decision rule for differentiation of normal and abnormal cases is complicated. At the same time, considering isoline maps of certain levels makes it possible to obtain quantitative indexes that can characterize the presence of pathology in the image. As demonstrated in Fig. 4, isoline maps of levels 30 and 50 represent substantially different patterns for normal tissue with image noise and tissue with pathology, which makes isoline map-based features much more informative for classification than histogram-based features.

2.2 Feature Extraction

Choosing intensity range. According to contrast-enhanced MSCT imaging properties, pixel intensity is determined by tissue density at the point. For that reason, only fixed intensity levels are to be considered in the myocardium tissue analysis task. According to Fig. 5, after building a histogram of the LV myocardium area in the [0, 255] intensity range, averaged over sample images of both classes and smoothed with a Gaussian kernel, it can be shown that the intensity distribution of the region of interest is close to a normal distribution with mean 75 and standard deviation 21.94. As can be seen from Fig. 5, the 0.8 quantile has the value 90, which means that 80% of pixels of the region of interest have an intensity value less than or equal to


Fig. 4 Double-level (30, 50) isoline maps examples based on [12]. a Normal myocardium with noise in the image. b Myocardium with pathology

90. So 90 was chosen as the right border of the intensity range. Intensity values of pixels corresponding to injuries tend to decrease, as demonstrated in Fig. 1. So the 0.01 quantile value equal to 26, rounded to 30, was chosen as the left border of the intensity range. Thus, the intensity range [30, 90] was chosen for isoline map-based feature extraction and for the histogram-based feature extraction that was implemented for comparison with the proposed method.

Isoline maps construction. Since each intensity level characterizes different tissue types presented in the image, several isoline maps for uniformly distributed intensity levels were built for the LV myocardium area and further analysed separately. As a result, two isoline map models were used:
– Single-level Isoline Maps. 31 isoline maps were built with corresponding levels: {30}, {32}, {34}, {36}, … {90}.


Fig. 5 Intensity histogram of LV myocardium averaged over normal and abnormal samples (blue graph) based on [12]. Distribution mean (red line), 0.01 quantile (green line), 0.80 quantile (purple line)

Table 1 Levels of double-level isoline maps from [12]

Map number | Intensity levels
1          | 30, 35
2          | 35, 40
3          | 40, 45
4          | 45, 50
5          | 50, 55
6          | 55, 60
7          | 60, 65
8          | 65, 70
9          | 70, 75
10         | 75, 80
11         | 80, 85

– Double-level Isoline Maps. The level distribution for the built isoline maps is presented in Table 1.

Examples of double-level isoline maps for an abnormal-class sample are presented in Fig. 6. The isoline map building procedure was based on the contouring algorithm from the MATLAB online documentation.


Fig. 6 Double-level isoline map examples. a Initial image (myocardial infarction sample). b 30/35 levels map. c 40/45 levels map. d 50/55 levels map. e 60/65 levels map. f 70/75 levels map. g 80/85 levels map

Isoline maps features. During the feature extraction step, five statistical characteristics were calculated for each isoline map. The final feature vector was constructed by concatenation of all statistical values. Single-level and double-level isoline maps were considered as separate feature models. As a result, two isoline-based feature models were built:
– single-level model: 31 × 5 = 155 features;
– double-level model: 11 × 5 = 55 features.

Consider a grayscale image I of a contrast-enhanced MSCT scan. Its corresponding LV myocardium area is presented as a point set S_myo: S_myo = {(x, y) | I(x, y) ∈ LV}. The isoline map S_isoline of the LV myocardium area is presented as a set of its isoline contours C, S_isoline = {C}, where each isoline contour is presented as a set of its points: C = {(x, y)}. Statistical computation was provided as in [12]:

– isoline count on the map:

    N = |S_isoline| / |S_myo|;                                              (1)

– minimum isoline length:

    L_min = ( min_{C ∈ S_isoline} |C| ) / |S_myo|;                          (2)

– maximum isoline length:

    L_max = ( max_{C ∈ S_isoline} |C| ) / |S_myo|;                          (3)

– mean isoline length:

    L_mean = ( Σ_{C ∈ S_isoline} |C| / |S_isoline| ) × 1/|S_myo|;           (4)

– standard deviation of isoline length:

    L_std = sqrt( Σ_{C ∈ S_isoline} (|C| − L_mean)² / |S_isoline| ) × 1/|S_myo|.   (5)

All the values are normalized, i.e. divided by the area of the LV myocardium region.
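A hedged Python sketch of these five statistics; the original implementation is MATLAB-based, so skimage.measure.find_contours is used here as a stand-in contouring routine, and approximating a contour's length by its point count is our simplification:

    import numpy as np
    from skimage import measure

    def isoline_features(img, mask, levels):
        # img: grayscale slice; mask: boolean LV myocardium mask; levels: isoline levels.
        area = float(mask.sum())                 # |S_myo|
        masked = np.where(mask, img, np.nan)     # NaN regions are left uncontoured
        contours = []
        for lvl in levels:
            contours += measure.find_contours(masked, lvl)
        if not contours:
            return [0.0] * 5
        lengths = np.array([len(c) for c in contours], dtype=float)
        return [len(contours) / area,            # N
                lengths.min() / area,            # L_min
                lengths.max() / area,            # L_max
                lengths.mean() / area,           # L_mean
                lengths.std() / area]            # L_std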

2.3 Classification

During the classification stage, every MSCT slice image is represented as a one-dimensional vector x of N features described in the previous section:

    x = {ξ₁, …, ξ_N}, ξᵢ ∈ ℝ, i = 1, …, N.                                  (6)

A binary classification algorithm a(x) is a function ℝ^N → M, M = {+1, −1}. Class label +1 stands for the positive, abnormal class, when pathology was detected. Class label −1 stands for the negative, normal class, when pathology was not found. In this research, classification algorithms based on a supervised machine learning approach were applied. Training dataset feature vectors xᵢ, i = 1, …, N′, with class labels yᵢ were used by the classification algorithm a(x).


We applied SVM and Random Forest classifiers, which are among the best classification algorithms and have demonstrated high accuracy in a wide range of problems.

SVM. The classical SVM algorithm builds a hyperplane in the feature space that maximizes the margin between the two classes. The classification algorithm is represented as follows:

    a(x) = sign( Σ_{i=1}^{N′} λᵢ yᵢ ⟨xᵢ, x⟩ − w₀ ).                          (7)

Coefficients λᵢ are non-zero only for support vectors, i.e. vectors lying on the margin between the two classes. Since classes are usually not linearly separable, the initial feature space X should be converted to a higher-dimensional space H where the data is linearly separable [3]. It can be provided directly using a certain transformation ψ : X → H. As shown in (7), data vectors are used only for dot product calculation, ⟨xᵢ, x⟩. In the space H it is replaced with ⟨ψ(xᵢ), ψ(x)⟩. In practice, nonlinear kernel functions are usually used: K(x′, x) = ⟨ψ(x′), ψ(x)⟩. Thus, the SVM classification rule with a nonlinear kernel is defined as follows:

    a(x) = sign( Σ_{i=1}^{N′} λᵢ yᵢ K(xᵢ, x) − w₀ ).                         (8)

In this work, SVMs with different nonlinear kernel functions were compared:

1. polynomial kernel:

    K(x₁, x₂) = (⟨x₁, x₂⟩ + 1)^d;                                           (9)

2. radial basis function (RBF):

    K(x₁, x₂) = exp(−γ ||x₁ − x₂||²), γ > 0;                                (10)

3. sigmoid:

    K(x₁, x₂) = tanh(k⟨x₁, x₂⟩ + c), k > 0, c > 0.                          (11)

Random forest. The Random Forest classifier is an ensemble learning method that operates by constructing a multitude of decision trees with an element of randomness at the training stage. Each new object is assigned the class label that was chosen by the majority of decision trees. Randomness is introduced in two ways: (1) each tree is trained on a random subsample of the whole training sample (this is called bagging), and (2) each tree node is constructed using a random subset of the whole feature set. In this work, Random Forest based on CART trees [4] was applied.


3 Experiments

3.1 Dataset and Labeling

The dataset for training and evaluation of the proposed algorithm consists of 11 contrast-enhanced MSCT sequences of healthy patients and 8 contrast-enhanced MSCT sequences of patients with heart diseases, in DICOM format. Since pixel intensity in MSCT images is linearly dependent on tissue density at the point, intensity values themselves are used for further analysis. Certain MSCT sequence slices, presented as grayscale PNG images of 512 × 512 size, were manually selected from each MSCT image sequence. The final dataset consists of 309 grayscale PNG images. Myocardium tissue structural elements, cardiomyocytes, have oblong shapes. Since cardiomyocytes are co-directed with the axial plane, MSCT axial slices are analyzed in this research. On each image, the LV myocardium was preliminarily segmented manually. During experiments, the whole dataset was divided into: (1) a parameter estimation dataset (5 normal, 4 abnormal MSCT sequences) and (2) an evaluation dataset (the remaining 6 normal and 4 abnormal MSCT sequences). For feature selection and classifier parameter estimation, k-fold cross-validation was used on the first dataset, where each separate MSCT sequence was considered as a fold. On each iteration of validation, a pair of MSCT sequences of both classes was considered as the validation set, and all the remaining MSCT sequences (4 + 3 = 7 in total) were used for training. For classification evaluation, the second dataset was considered. Several iterations were made, and the mean False Negative Rate (FNR), mean F-score and mean ROC (Receiver Operating Characteristic) AUC (Area Under ROC Curve) values were calculated. At each iteration, 2 normal and 1 abnormal random MSCT sequences were used for training (about 30% of the whole evaluation dataset), and all the remaining sequences were used for testing (4 + 3 = 7). All the algorithms were implemented in MATLAB. Class weights were set to 3 for the positive class and 1 for the negative class. The Random Forest consisted of 100 CART trees [4]. For the SVM classifier, the cost of constraints violation was set to 80. For the polynomial kernel, d from (9) was set to 3. For the RBF kernel, γ from (10) was set to 1/N, where N is the feature space dimension. For the sigmoid kernel (11), k = 0.01 and c ∈ [−0.5, −2] were used.
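As a rough scikit-learn equivalent of these settings (the paper's implementation is in MATLAB; the library choice and the exact mapping of options are our assumptions):

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC

    weights = {+1: 3, -1: 1}  # class weights: 3 for abnormal, 1 for normal

    # Random Forest of 100 trees (sklearn trees are CART-like)
    rf = RandomForestClassifier(n_estimators=100, class_weight=weights)

    # RBF-kernel SVM; C plays the role of the constraint-violation cost (80),
    # and gamma='auto' corresponds to gamma = 1/N (N = feature space dimension)
    svm_rbf = SVC(C=80.0, kernel='rbf', gamma='auto', class_weight=weights)

    # Polynomial kernel of degree d = 3
    svm_poly = SVC(C=80.0, kernel='poly', degree=3, class_weight=weights)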

3.2 Histogram-Based Feature Model

For comparison with the proposed isoline map-based method, seven histogram-based characteristics utilized in [2] were calculated:


– energy:

    E = Σ_{k=1}^{N} I(k)²;                                                  (12)

– mean:

    Ī = (1/N) Σ_{k=1}^{N} I(k);                                             (13)

– intensity distribution median;

– entropy:

    T = − Σ_{k=30}^{90} H(k) log₂ H(k);                                     (14)

– kurtosis:

    K = [ (1/N) Σ_{k=1}^{N} (I(k) − Ī)⁴ ] / [ (1/N) Σ_{k=1}^{N} (I(k) − Ī)² ]²;   (15)

– root mean square error:

    RMSE = sqrt( (1/N) Σ_{k=1}^{N} I(k)² );                                 (16)

– skewness:

    S = [ (1/N) Σ_{k=1}^{N} (I(k) − Ī)³ ] / [ (1/N) Σ_{k=1}^{N} (I(k) − Ī)² ]^{3/2},   (17)

where I(k) is the intensity value of image I of size N at point k, and H(k) is the normalized histogram value at the k-th bin, k ∈ [30, 90]. In total, 68 histogram-based features were obtained:
– 61 values of the normalized histogram;
– 7 intensity-based statistics from the feature selection step of [2].

3.3 Feature Selection

Two techniques based on analysis of each feature's impact on the class label were used in this work in order to select the best features (a small sketch of the IG computation is given at the end of this subsection).

Information gain. The Information Gain (IG) magnitude [9] characterizes the correlation between a feature and the class label (response), evaluated using entropy.


Table 2 Isoline map-based features selected by the IG ≥ 0.65 criterion, based on [12]

Feature type | Single-level map levels | Double-level map levels
LMean        | [30, 46], 54, 60        | 30/35, 35/40, 40/45, 50/55
LMin         | [30, 36]                | 30/35
Lσ           | [30, 38], [48, 62]      | 30/35, 35/40, 40/45, 45/50, 50/55, 55/60, 60/65

Consider a variable X that takes values {x_1, …, x_n}. The probability of the value x_i is denoted as p(x_i), i = 1, …, n. The entropy magnitude H measures the uniformity of the distribution of the variable and is expressed as
$$H(X) = -\sum_{x_i \in X} p(x_i)\,\log_2 p(x_i). \qquad (18)$$

Consider a variable Y that represents the class label of an object and a variable X that represents an object feature. The magnitude that characterizes the mean uniformity of class labels within each separate feature value is called conditional entropy and is expressed as
$$H(Y|X) = \sum_{x_i \in X} p(x_i)\cdot H(Y|X = x_i), \qquad (19)$$
where H(Y|X = x_i) is the specific conditional entropy that is calculated as the class label entropy within the fixed feature value. The IG magnitude depends on the class label entropy itself and on the conditional entropy; it measures whether Y values become more uniform when X values are known, and is expressed as
$$IG = H(Y) - H(Y|X). \qquad (20)$$

The bigger the achieved IG value, the higher the correlation. In this work the IG range normalized to [0, 1] was considered and features with IG ≥ 0.65 were selected. The selected features are presented in Table 2.

Chi-squared test. The chi-squared test [8] is one of the most commonly used statistical hypothesis testing methods. It is usually applied in statistics to check whether two events are independent. In the feature selection problem, the appearance of a certain feature value, X = {x_1, …, x_N}, is the first event, and the appearance of a certain class label, Y = {y_1, …, y_M}, is the second event. The chi-squared statistic is expressed as
$$\chi^2 = \sum_{x_i}\sum_{y_j} \frac{(\hat{V}_{x_i,y_j} - V_{x_i,y_j})^2}{V_{x_i,y_j}}. \qquad (21)$$
Here V̂_{x_i,y_j}, i = 1, …, N, j = 1, …, M, is the observed frequency of the event x_i ∩ y_j, and V_{x_i,y_j} is the expected frequency of the event x_i ∩ y_j under the assumption that the events x_i and y_j are independent. Consider a dataset of K examples. V_{x_i,y_j} is then expressed as
$$V_{x_i,y_j} = K \cdot \mathbb{P}(x_i)\cdot\mathbb{P}(y_j), \quad i = 1, \ldots, N, \; j = 1, \ldots, M. \qquad (22)$$

The chi-squared statistic measures the difference between the observed and expected frequencies. The bigger the statistic value, the smaller the probability that the two events are independent; in that case the class labels depend on the feature values and the feature is significant. The features selected by the chi-squared test with 0.05 significance level for the isoline map-based models are presented in Table 3.
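Both criteria admit a compact sketch; the feature values are assumed to be discretized beforehand, and all names are illustrative:

```python
# Sketch of both selection criteria for one discretized feature column;
# thresholds (IG >= 0.65 on the normalized score, p < 0.05) follow the text.
import numpy as np
from scipy.stats import chi2_contingency

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))                        # Eq. (18)

def information_gain(feature, labels):
    h_y_given_x = 0.0
    for x in np.unique(feature):
        mask = feature == x
        h_y_given_x += mask.mean() * entropy(labels[mask])  # Eq. (19)
    return entropy(labels) - h_y_given_x                    # Eq. (20)

def chi2_significant(feature, labels, alpha=0.05):
    # contingency table of observed co-occurrence frequencies, Eqs. (21)-(22)
    xs, ys = np.unique(feature), np.unique(labels)
    table = np.array([[np.sum((feature == x) & (labels == y)) for y in ys]
                      for x in xs])
    _, p_value, _, _ = chi2_contingency(table)
    return p_value < alpha
```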

3.4 Evaluation Metrics

The testing data set is divided into the cases presented in Table 4. False positive cases are called type I errors and false negative cases are called type II errors. Based on Table 4, several classification metrics are built:
– precision = TP / (TP + FP);
– recall = TP / (TP + FN);
– F-score = 2 · Precision · Recall / (Precision + Recall);
– false negative rate, FNR = FN / (FN + TP);
– false positive rate, FPR = FP / (FP + TN);
– the Receiver Operating Characteristic (ROC curve) represents TPR against FPR over different classification function thresholds;
– ROC AUC (area under the curve).

The F-score measure represents both precision and recall values in a single magnitude and is a basic common metric for classification problems.

Table 3 Isoline map-based features selected by the chi-squared test with 0.05 significance level, based on [12]

Feature type | Single-level isoline levels | Double-level isoline levels
LMean        | [30, 62]                    | 30/35, 35/40, 40/45, 45/50, 50/55, 55/60, 60/65
LMin         | [30, 44]                    | 30/35, 35/40, 40/45, 45/50
LMax         | [42, 50], 58                | –
Lσ           | 30, 32, [44, 70]            | 40/45, 45/50, 50/55, 55/60, 60/65, 65/70

Table 4 Testing data set cases

            | Classified as healthy | Classified as diseased
Is healthy  | True negative, TN     | False positive, FP
Is diseased | False negative, FN    | True positive, TP


In medical diagnostics tasks type II errors are significantly more critical, thus the FNR value should be measured separately. The ROC AUC metric is a common tool representing the mean classification function accuracy over different thresholds.

Results and discussion. For quality evaluation purposes the F-score, FNR and ROC AUC were analyzed. The final classification results are presented in Tables 5, 6 and 7. The following feature models were compared in this research:
– Model 1: single-level isoline map features. The 31 single intensity level isoline maps described in Sect. 2.2 were constructed and the five statistical characteristics from Eqs. (1)–(5) were calculated for each map; 155 features in total.
– Model 2: single-level isoline map features selected by information gain. For the 155 single-level isoline map-based features from model 1, the information gain magnitude (IG) was calculated and the IG ≥ 0.65 criterion was used. As shown in Table 2, the mean isoline length was calculated for 11 isoline maps, the minimal isoline length for 4 isoline maps and the standard deviation of isoline length for 13 isoline maps; 28 features in total.
– Model 3: single-level isoline map features selected by the chi-squared test. Each of the 155 model 1 features was used for chi-squared statistic evaluation, and based on the test at the 0.05 significance level the features shown in Table 3 were selected. The mean isoline length was calculated for 17 isoline maps, the minimal isoline length for 8 isoline maps, the maximal isoline length for 6 isoline maps and the standard deviation of isoline length for 16 isoline maps; 47 features in total.
– Model 4: double-level isoline map features. The 11 double intensity level isoline maps described in Sect. 2.2 were constructed and the five statistical characteristics from Eqs. (1)–(5) were calculated for each map; 55 features in total.
– Model 5: double-level isoline map features selected by information gain. For the 55 double-level isoline map-based features from model 4, the IG magnitude was calculated and the IG ≥ 0.65 criterion was used. As shown in Table 2, the mean isoline length was calculated for 4 isoline maps, the minimal isoline length for 1 isoline map and the standard deviation of isoline length for 7 isoline maps; 12 features in total.
– Model 6: double-level isoline map features selected by the chi-squared test. Each of the 55 model 4 features was used for chi-squared statistic evaluation, and based on the test at the 0.05 significance level the features shown in Table 3 were selected. The mean isoline length was calculated for 7 isoline maps, the minimal isoline length for 4 isoline maps and the standard deviation of isoline length for 6 isoline maps; 17 features in total.


Table 5 FNR comparison from [12]. The best result is highlighted in bold

Model | SVM RBF | SVM polynomial | SVM sigmoid | Random forest
1     | 0.054   | 0.054          | 0.023       | 0.050
2     | 0.034   | 0.034          | 0.017       | 0.021
3     | 0.027   | 0.028          | 0.021       | 0.015
4     | 0.063   | 0.056          | 0.025       | 0.051
5     | 0.014   | 0.010          | 0.007       | 0.017
6     | 0.011   | 0.011          | 0.010       | 0.016
7     | 0.100   | 0.103          | 0.028       | 0.095

Table 6 F-scores comparison from [12]. The best result is highlighted in bold

Model | SVM RBF | SVM polynomial | SVM sigmoid | Random forest
1     | 0.930   | 0.929          | 0.910       | 0.935
2     | 0.950   | 0.950          | 0.947       | 0.961
3     | 0.958   | 0.957          | 0.953       | 0.959
4     | 0.926   | 0.929          | 0.912       | 0.930
5     | 0.962   | 0.963          | 0.955       | 0.961
6     | 0.967   | 0.966          | 0.958       | 0.952
7     | 0.864   | 0.861          | 0.882       | 0.879

Table 7 ROC AUC comparison. The best result is highlighted in bold

Model | SVM RBF | SVM polynomial | SVM sigmoid | Random forest
1     | 0.972   | 0.970          | 0.959       | 0.991
2     | 0.977   | 0.974          | 0.980       | 0.997
3     | 0.985   | 0.985          | 0.978       | 0.996
4     | 0.965   | 0.965          | 0.942       | 0.988
5     | 0.977   | 0.979          | 0.986       | 0.996
6     | 0.987   | 0.986          | 0.984       | 0.994
7     | 0.888   | 0.891          | 0.888       | 0.963

– Model 7: histogram-based features. The 68 features based on the [30, 90] intensity histogram described in Sect. 3.2 were calculated.

It can be seen from Tables 5, 6 and 7 that both the single-level and double-level isoline map models demonstrate a consistent improvement of the FNR, F-score and AUC values over the histogram-based feature representation from [2]. This holds even for the models without feature selection (rows 1 and 4 versus row 7 in Tables 5, 6 and 7). Optimal feature selection using the chi-squared test or the IG criterion achieves up to 5% improvement for FNR and up to 4% for F-score and ROC AUC. Which feature selection method is better depends on the classifier type and the isoline map construction method.


SVM and Random Forest demonstrate comparable accuracy for all models across all three metrics. However, Random Forest is substantially better than SVM in terms of ROC AUC for all considered kernel types. SVM with the sigmoid kernel function shows the best FNR scores, even better than Random Forest. As for F-scores, the best classifiers are SVM with the RBF kernel and Random Forest. The models with double-level isoline maps demonstrate slightly worse results than the single-level isoline map models without feature selection, but become better than the single-level models after feature selection. The best proposed models provide an increase of accuracy over the histogram-based model of ≈2% for FNR, ≈9% for F-score and ≈3% for ROC AUC. The best FNR score (0.7%) was achieved by double-level isoline map-based IG features and SVM with the sigmoid kernel. The best F-score (96.7%) was achieved by double-level isoline map-based chi-squared features and SVM with the RBF kernel. The best ROC AUC score (99.7%) was achieved by single-level isoline map-based IG features and the Random Forest classifier.

4 Conclusions

In this paper we described in detail the proposed approach to the diagnostics of heart diseases based on heart tissue microstructure analysis using isoline maps. We provided the description of and rationale for the algorithms of isoline map construction, feature selection and classification. Application to the problem of selecting MSCT slice images of myocardium with pathology allowed us to compare several strategies within the proposed algorithm using three types of metrics. These experiments revealed the three best models, differing in the isoline map construction process, feature selection algorithm or classifier type. All these models demonstrated better accuracy than the existing histogram-based diagnostics model.

In the future, when many more labeled training images are available, we will explore the applicability of deep neural networks (DNNs), currently a very popular computer vision approach, since a DNN is able to extract useful features for classification autonomously and to perform the classification itself at the final stage. In this work it was not applied due to the low amount of data. However, when it becomes possible, the proposed approach will still be useful, but for automatic labeling purposes.

References

1. Afshin, M., Ben Ayed, I., Punithakumar, K., Law, M., Islam, A., Goela, A., Ross, I., Peters, T., Li, S.: Assessment of regional myocardial function via statistical features in MR images. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2011, pp. 107–114 (2011)
2. Antunes, S., Esposito, A., Palmisanov, A., Colantoni, C., de Cobelli, F., Del Maschio, A.: Characterization of normal and scarred myocardium based on texture analysis of cardiac computed tomography images. In: 2016 IEEE 38th Annual International Conference of the Engineering in Medicine and Biology Society (EMBC), pp. 4161–4164. IEEE (2016)
3. Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. In: Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144–152. ACM (1992)
4. Breiman, L., Friedman, J., Olshen, R., Stone, C.: Classification and Regression Trees. Wadsworth & Brooks/Cole Advanced Books & Software, Pacific Grove, California (1984)
5. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001)
6. Cerqueira, M.D., Weissman, N.J., Dilsizian, V., Jacobs, A.K., Kaul, S., Laskey, W.K., Pennell, D.J., Rumberger, J.A., Ryan, T., Verani, M.S., et al.: Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart. Circulation 105(4), 539–542 (2002)
7. Fennira, S., Zairi, I., Jnifene, Z., Lakhal, M., Kammoun, S., Kraiem, S.: Differences between idiopathic and ischemic dilated cardiomyopathy. Tunis. Med. 94(8–9), 535–540 (2016)
8. Greenwood, P.E., Nikulin, M.S.: A Guide to Chi-Squared Testing, vol. 280. Wiley, New York (1996)
9. Hall, M.A.: Correlation-based feature selection for machine learning. University of Waikato, Hamilton (1999)
10. Nichols, M., Townsend, N., Scarborough, P., Rayner, M.: Cardiovascular disease in Europe 2014: epidemiological update. Eur. Heart J. 35(42), 2950–2959 (2014)
11. Senyukova, O.V.: Segmentation of blurred objects by classification of isolabel contours. Pattern Recognit. 47(12), 3881–3889 (2014)
12. Senyukova, O.V., Brotikovskaya, D., Gorokhova, S., Tebenkova, E.: Automated diagnostic model based on heart tissue isoline map analysis. In: IJCCI (2017)
13. Sharma, S., Elliott, P.M., Whyte, G., Mahon, N., Virdee, M.S., Mist, B., McKenna, W.J.: Utility of metabolic exercise testing in distinguishing hypertrophic cardiomyopathy from physiologic left ventricular hypertrophy in athletes. J. Am. Coll. Cardiol. 36(3), 864–870 (2000)
14. Suthar, D., Dodd, D.A., Godown, J.: Identifying non-invasive tools to distinguish acute myocarditis from dilated cardiomyopathy in children. Pediatr. Cardiol. (2018)
15. Wang, Z., Salah, M.B., Gu, B., Islam, A., Goela, A., Li, S.: Direct estimation of cardiac biventricular volumes with an adapted Bayesian formulation. IEEE Trans. Biomed. Eng. 61(4), 1251–1260 (2014)
16. Wong, K., Tee, M., Chen, M., Bluemke, D.A., Summers, R.M., Yao, J.: Regional infarction identification from cardiac CT images: a computer-aided biomechanical approach. Int. J. Comput. Assist. Radiol. Surg. 11(9), 1573–1583 (2016)
17. Wong, K.C., Tee, M., Chen, M., Bluemke, D.A., Summers, R.M., Yao, J.: Computer-aided infarction identification from cardiac CT images: a biomechanical approach with SVM. In: 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2015, pp. 144–151. Springer, Berlin (2015)
18. Xu, C., Xu, L., Gao, Z., Zhang, H., Zhang, Y., Du, X., Zhao, S., Ghista, D., Li, S., et al.: Direct detection of pixel-level myocardial infarction areas via a deep-learning algorithm. arXiv:1706.03182 (2017)

Framework for Discrete-Time Model Reference Adaptive Control of Weakly Nonlinear Systems with HONUs

Peter M. Benes, Ivo Bukovsky, Martin Vesely, Jan Voracek, Kei Ichiji and Noriyasu Homma

Abstract This paper reviews the Higher Order Nonlinear Units (HONUs) and their fundamental supervised sample-by-sample and batch learning algorithms for data-driven controller learning when only measured data are known about the plant. We recall the recently introduced conjugate gradient batch learning for weakly nonlinear plant identification with HONUs and compare its performance to the classical Levenberg-Marquardt (L-M) algorithm. Further, we recall recursive least squares (RLS) adaptation and compare its performance to L-M learning for both plant approximation and controller tuning. Further, a model reference adaptive control (MRAC) strategy with efficient controller learning for linear and weakly nonlinear plants is proposed with static HONUs that avoids recurrent computations, and its potentials and limitations with respect to plant nonlinearity are discussed. The recently developed stability approach for recurrent HONUs and for closed control loops with a linear plant and a nonlinear (HONU) controller is recalled and discussed in connection with the stability of the adaptive closed control loop.

P. M. Benes · M. Vesely: Department of Instrumentation and Control Engineering, Czech Technical University in Prague, Prague, Czech Republic; e-mail: [email protected]; [email protected]
I. Bukovsky: Department of Mechanics, Biomechanics and Mechatronics, Center of Advanced Aerospace Technology, Czech Technical University in Prague, Prague, Czech Republic; e-mail: [email protected]
J. Voracek: College of Polytechnics Jihlava, Jihlava, Czech Republic; e-mail: [email protected]
K. Ichiji · N. Homma: Department of Radiological Imaging and Informatics, Tohoku University Graduate School of Medicine, Sendai, Japan; e-mail: [email protected]; [email protected]

© Springer Nature Switzerland AG 2019. C. Sabourin et al. (eds.), Computational Intelligence, Studies in Computational Intelligence 829, https://doi.org/10.1007/978-3-030-16469-0_13

Abstract This paper reviews the Higher Order Nonlinear Units (HONUs) and their fundamental supervised sample-by-sample and batch learning algorithms for datadriven controller learning when only measured data are known about the plant. We recall recently introduced conjugate gradient batch learning for weakly nonlinear plant identification with HONUs and we compare its performance to classical Levenberg-Marquard (LM). Further, we recall recursive least square (RLS) adaptation and compare its performance to L-M learning both for plant approximation and controller tuning. Further, a model reference adaptive control (MRAC) strategy with efficient controller learning for linear and weakly nonlinear plants is proposed with static HONUs that avoids recurrent computations, and its potentials and limitations with respect to plant nonlinearity are discussed. Recently developed stability approach for recurrent HONUs and for closed control loops with linear plant and nonlinear (HONU) controller is recalled and discussed in connotation stability of the adaptive closed control loop. P. M. Benes · M. Vesely Department of Instrumentation and Control Engineering, Czech Technical University in Prague, Prague, Czech Republic e-mail: [email protected] M. Vesely e-mail: [email protected] I. Bukovsky (B) Department of Mechanics, Biomechanics and Mechatronics, Center of Advanced Aerospace Technology, Czech Technical University in Prague, Prague, Czech Republic e-mail: [email protected] J. Voracek College of Polytechnics Jihlava, Jihlava, Czech Republic e-mail: [email protected] K. Ichiji · N. Homma Department of Radiological Imaging and Informatics, Tohoku University Graduate School of Medicine, Sendai, Japan e-mail: [email protected] N. Homma e-mail: [email protected] © Springer Nature Switzerland AG 2019 C. Sabourin et al. (eds.), Computational Intelligence, Studies in Computational Intelligence 829, https://doi.org/10.1007/978-3-030-16469-0_13


Keywords Polynomial neural networks · Higher order neural units · Model reference adaptive control · Conjugate gradients · Nonlinear dynamics

Nomenclature

CG        Conjugate Gradient algorithm
CNU       Cubic Neural Unit (HONU, r = 3)
colx      Long column vector of polynomial terms
d         Desired value (setpoint)
LNU       Linear Neural Unit (HONU, r = 1)
QNU       Quadratic Neural Unit (HONU, r = 2)
k         Discrete index of time
L-M       Levenberg-Marquardt batch learning algorithm
n, m      Length of vector x, ξ
n_y, n_u  Length of recent history of y or u [samples]
r, γ      Order of polynomial nonlinearity (plant, controller)
r_o       Control input gain at plant input
T         Vector transposition
u         Control input
w, v      Long row vectors of all neural weights (plant, controller)
x, ξ      Augmented input vector to HONU (plant, controller)
ỹ         Neural output from HONU
y         Controlled output variable (measured)
y_ref     Reference model output
e         Error between real output and HONU
e_ref     Error between reference model and control loop
μ         Learning rate

1 Introduction

With the booming advancements in the field of both linear and non-linear process control, computational methods such as not only deep learning but also long short-term memory (LSTM) networks, extreme learning machines (ELM), polynomial ridge regression and random vector functional link (RVFL) networks, e.g. [1], are active topics of research, especially for real control applications. In the last decade, many powerful and quite novel approaches have been introduced, advancing the capabilities of our modern industry. From our review, we may classify three main areas of adaptive control which are relevant to the approaches presented in this paper, namely model predictive control (MPC), reinforcement learning (sometimes also denoted as adaptive dynamic programming, ADP) and model reference adaptive control (MRAC).


With regards to advanced methods for the control of unstable dynamic systems, reinforcement learning (ADP) approaches, e.g. [2–4] and references therein, have come to the fore. There, the controller design is constructed via heuristic tuning based on monitoring of the control inputs and the respective system response. The ADP approach therefore does not require any mathematical analysis of the process itself to derive a controller, but rather a series of penalization criteria or functions for manipulation of the controller tuning towards the desired controller response. The price for this controller design can be the time needed for experiments and the need to access the real device, while analytical stability assessment of the control loop cannot be carried out because no system model is utilized (just responses).

In contrast to ADP design, model-based approaches are popular in real industrial applications. Model predictive control, e.g. [5–7] and references therein, focuses on optimization of the control input of the process itself before every applied actuation, as opposed to tuning or optimization of controller parameters, and the optimizing controller computations are applied at every time sample. Another key control scheme is MRAC, where a model of the system is required and the objective of the control loop design is to tune the controller parameters, e.g. neural weights in the sense of neural network based forms. Since its conception, many advancements in the field of computational intelligence based methods have been published; key works to mention are e.g. [8–11] and references therein. The MRAC control scheme allows neural network controllers to be adapted offline and/or online, depending on the applied controller strategy for a given process.

Our paper focuses on the adaptive feedback form of the MRAC control scheme, with the introduction of standalone higher order neural units (HONUs) as a plant model and their extension with single or multiple HONU feedback controllers, while a technique for avoiding recurrent computation is proposed and BIBS stability of the HONU control loop is developed. The proposed control strategy is intended for unknown plants where only input and output data are available, so that the controlled system dynamics can be approximated and, in turn, the BIBS stability of the control loop can be analyzed.

Higher order neural units [12, 13] are standalone architectures which can identify non-linear process characteristics whilst maintaining in-parameter linearity, and they fall out as a class of polynomial neural units, e.g. [14], or higher order neural networks [15–18]. An advantage of the proposed MRAC-HONU based control loop design is its computationally efficient and fast real-time performance with learning algorithms such as conjugate gradients (CG) [19–21] and the recursive least squares (RLS) algorithm, achieving strong error minimization capabilities even for nonlinear process control. Several works to mention are [22, 23], which focus on real industrial applications of single-input single-output (SISO) form. There, the capabilities of an extended backpropagation method via the famous Levenberg-Marquardt (L-M) algorithm are presented, where one HONU is used as a plant model and another single HONU is extended in feedback as an adaptive controller. A further work is [24], where real-time applications to fluid-based tank systems were presented. Another topic behind any controller or control loop design is the assurance of stability of the applied control loop for the dynamic process. Though the topic of stability is not the main focus of this paper, an extension of its analysis is provided via a rather novel input-to-state (ISS) approach for justification of bounded-input-bounded-state (BIBS) stability as presented in [25, 26]. Other key works on the topic of ISS stability with application to conventional recurrent neural networks can be found in [27, 28]. Due to the form of the controller design at hand, there is generally no single correct or universal method for stability evaluation. However, bounded-input-bounded-output (BIBO) and BIBS [29, 30] based approaches are often more practical for real industrial applications than equilibrium point justification approaches.

Given the above, this paper presents a novel framework for MRAC-based HONU control via a multiple feedback controller approach, focused on application to weakly non-linear systems. The conjugate gradient learning algorithm is further presented, where its capabilities as an extension to the L-M learning algorithm are highlighted. As an extension of the work [31], this paper also provides a study of incremental learning algorithms (gradient descent (GD) and RLS) within the presented approach, with more examples of RLS for weakly non-linear process control. Further, an additional stability analysis section is provided which extends the approach presented in [26] for dynamic HONU-MRAC closed control loops to static HONU plant models and the multiple feedback controller configuration. The most common symbols and abbreviations are explained in the Nomenclature section above, while other terms are explained at their first appearance.

P. M. Benes et al.

input to state (ISS) approach for justification of bounded-input-bounded-state (BIBS) stability as presented in [25, 26]. Other key works on the topic of ISS stability with application to conventional recurrent neural networks can be found in [27, 28]. Due to the form of controller design at hand, there is generally no one correct or universal method for stability evaluation. However, bounded-input-bounded-output (BIBO) and BIBS [29, 30] based approaches are often more practical for real industrial applications than equilibrium point justification approaches. Given the above, this paper presents a novel framework for MRAC based HONU control via a multiple feedback controller approach, which in this paper is focused towards application on weakly non-linear systems. The conjugate gradient learning algorithm is further presented, where the capabilities with extension to the L-M learning algorithm are highlighted. As the extension of work [31], this paper also provides study of incremental learning algorithms (gradient decent (GD) and RLS) on the presented approach with more examples to RLS for weakly non-linear process control. Further, an additional stability analysis section is provided which extends the approach presented in [26] for dynamic HONU-MRAC closed control loops to static HONU plant models and multiple feedback controller configuration. The most common symbols and abbreviations are explained in Nomenclature section above, while other terms are explained at their first appearance.

2 Background to HONUs This section recalls the fundamental architectures of HONUs for SISO linear and nonlinear dynamic systems as overviewed from the work [31]. For clarity in definition, the term weakly non-linear will be addressed here as a nonlinear dynamic system, which can be approximated sufficiently by a HONU model of up to 3rd polynomial order i.e. r ≤ 3 that corresponds to 3rd order HONU (i.e. CNU). The capability to properly learn the process dynamics is paramount for extension of a further single HONU or multiple standalone HONUs as a feedback controller. As is emphasized in Sect. 6.3 application to processes featuring sinusoidal nonlinearity such as the discretized torsional pendulum model, is an example of strong process non-linearity which cannot be fully be captured by a HONU r ≤ 3 i.e. sufficient approximation is possible for certain parts of the period in the non-linear characteristic. The classical form or long vector form of HONUs of up to 3rd order are summarized in Table 1, where the augmented input vector for dynamical system approximation can be defined in (1) and its total length is n = 1 + n y + n u . The notation y implies a measured process output value (1) implying a static a HONU, i.e. static function mapping because all the values in x are measured, thus it yields that ⎡ ⎢ ⎢ x=⎢ ⎣

⎤ x0 = 1 ⎡ ⎤ 1 T  x1 ⎥ ⎥ = ⎣ yx ⎦. .. ⎥ = 1 y(k − 1) . . . y(k − n y ) u(k − 1) . . . u(k − n u ) . ⎦ ux xn

(1)

Framework for Discrete-Time Model Reference Adaptive Control …

243

Table 1 Summary of HONUs for weakly nonlinear systems (of up to 3rd polynomial order of nonlinearity, adopted from [31]) HONU (neural output y˜ (k))

Details

Order

Classical form of HONU

r= 1(LNU)

y˜ =

r= 2(QNU)

y˜ =

r= 3(CNU)

n

wi · xi =

i=0 n

n

wi, j · xi · x j =

i=0 j=i

y˜ =

n n



n

i=0 j=i κ= j

wi, j,κ xi x j xκ

HONU form

x0 = 1∀r

w· col r (x) = w·x

w = [w0 w1 . . . wn ] x = [x0 x1 . . . xn ]T

w· col r (x) = w · colx

colx = [{xi x j }]

w· col r (x) = w · colx

w = [{wi, j }]T ;

i = 0...n j = i ...n

colx = [{xi x j xκ }] w = [{wi, j,κ }]T ;

i = 0...n j = i ...n κ = j ...n

For definition of a dynamic (recurrent) HONU, the measured output value y is replaced with step-delayed outputs of the HONU model, i.e. y ← y˜ resp. yx ← y˜ x in (1), so a recurrent neural architecture is obtained which can be more difficult to train, but is necessary for training a HONU feedback controller for nonlinear dynamic systems, as is commented and exampled later in this paper. Regarding adaptive identification via batch training, an extension to classic backpropagation via the Levenberg-Marquardt algorithm can be recommended. It is applicable for both static and dynamic HONUs where the weight updates w can be calculated as follows

1 w = J · J + · I μ T

−1

· JT · e,

(2)

where J is Jacobian matrix, I is identity matrix, upper index −1 stands for matrix inversion, and the error yields as e = y − y˜ . The Jacobian matrix for static HONU, i.e. the input vector (1) with only measured values, is as follows ⎧ T f or r = 1 ⎨ x T J(k) = ∂ y˜ (k)/∂w = colx f or r = 2 ⎩ colxT f or r = 3,

(3)

where colx is a long column vector of polynomial terms as indicated in Table 1. Then, Eq. (3) shows that J is constant for all training epochs of static HONUs (i.e. when input vector is defined as in (1)). Therefore, for real time computation this variation accelerates the training process as HONUs themselves are constructed to be in-parameter wise linear, even for non-linear architectures. In the sense of recurrent

244

P. M. Benes et al.

HONUs for the input vector (1) with replacement of the process output value y˜ and learning formula (2), via by parts derivation rule it yields that J(k) = ∂ y˜ (k)/∂w = (colx( yx ))T + w · ∂colx( yx )/∂w,

(4)

Recurrent HONUs however perform more accurately in terms of dynamic approximation of weakly nonlinear dynamical systems from measured data than static architectures, which is shown in the experimental analysis section. However, a challenge with dynamical HONUs is the time-variance of the Jacobian, so the respective rows have to be recalculated in every time sample resulting in a varying Jacobian during every training epoch. In spite of this, static HONUs can still be sufficient for identification of weakly nonlinear dynamical systems so a more straightforward and very efficient means of controller tuning uncovers which is also well converging in comparison to recurrent architectures (shown in the experimental analysis section of this paper).

3 Efficient Learning Algorithms for HONUs A challenge in proper tuning of a HONU feedback controller lies in proper tuning of the plant model. In the sense of batch training via the L-M formula (2), due to not well conditioned data, often a small learning rate is necessary to properly identify the process dynamics. Then, the convergence of such algorithm often requires long training epochs. In this paper, two key efficient learning algorithms are discussed namely, CG and RLS learning algorithms. The relations in terms of the CG algorithm are recalled from the work [31].

3.1 Application of CG to HONU Plant Identification With regards to CG learning, this method may be directly applied to the HONU structure. Recall that the principle of CG is to solve a set of equations as follows b − A · w = 0,

(5)

where b is a column vector of constants, A is positively semi-definite matrix, and w is a column vector of unknowns (neural weights). Due to the in-parameter linearity of HONUs, the Jacobian (3) is not directly a function of weights. Thus, the training with respect to a HONU can be restated from (5) as y − colX · w = 0.

(6)

Framework for Discrete-Time Model Reference Adaptive Control …

245

where colX is defined [12] as follows (assuming all initial conditions are known) ⎡

⎤ colx(k = 1)T ⎢ colx(k = 2)T ⎥ ⎢ ⎥ colX = ⎢ ⎥ = J, .. ⎣ ⎦ .

(7)

colx(k = N )T which is in fact the Jacobian of a HONU for all training data (of total length N). By multiplying (6) from the left with the term colXT it yields that b − A · w = colX · y − colXT · colX · w = 0.

(8)

This results in a positive definite matrix, therefore the CG learning form may be directly applied to both static or recurrent HONUs. On initiation of training i.e. for the very first epoch we initiate CG with re ( = 0) = c − A · w( = 0),

(9)

p( = 0) = re ( = 0).

(10)

and with

Then for proceeding training epochs (i.e.  > 0) we calculate the following α() =

reT () · re () . pT () · A · p()

(11)

With the parameter calculation from (11), the following weight update rule yields w( + 1) = w() + α() · p(),

(12)

where w = α() · p() and other CG parameters for next training epoch are then calculated or updated as follows re ( + 1) = re () − α() · A · p(),

(13)

and in similar sense to the Fletcher-Reeves nonlinear CG method β() =

reT ( + 1) · re ( + 1) , reT () · re ()

(14)

therefore resulting in p( + 1) = re ( + 1) + β() · p().

(15)

246

P. M. Benes et al.

Fig. 1 Training of static CNU for system in Fig. 3 starting with L-M learning accelerated with CG learning for training epochs ≥ 10 (adopted from [31])

This section derived the extension of the classical Conjugate Gradient learning algorithm for application to HONUs. Due to its structure, is not so suitable for controller weights v training as the symmetric positive definite matrix is not so achievable and CG for a controller then becomes much more complicated task. As illustrated in the Fig. 1, the use of pre-training via L-M algorithm can be enhanced via CG training following a switch after several epochs. Thus, the proposed CG training is suggested to use in lieu with L-M training algorithm for rapid acceleration during plant identification, where only L-M can be used as a comprehensible training algorithm for HONUs including a feedback controller algorithm via MRAC control scheme Fig. 3. As an extension to the work [31] the proceeding section extends an efficient incremental training algorithm (RLS) to the presented HONU architectures. RLS Training Algorithm for HONUs Another efficient learning algorithm advantageous for real-time plant identification is the RLS learning algorithm. Though its application in the field of adaptive filters is quite readily published, its extension to HONUs for plant identification is still a not widely investigated area. The advantages of the RLS algorithm is also its applicability for used in the whole MRAC-HONU closed control loop plant and controller tuning due to its fundamental composition comprising from the covariance matrix of the principle partial derivative for adaptation. As an initial, we may recall the classical form of the inverse covariance matrix R−1 (k) to the RLS algorithm as 1 R1 −1 −1 , (16) R (k) = · R (k − 1) − μ R2 where the term R represents the n w dimension square covariance matrix and I is the n w × n w identity matrix which is initialized on the very first epoch by

Framework for Discrete-Time Model Reference Adaptive Control …

R(0) =

1 · I. δ

247

(17)

For a small initialization constant δ. Then the terms R1 , R2 are respectfully derived as R1 = R−1 (k − 1) ·

R2 =μ +

∂ y˜ (k) ∂ y˜ (k) T · R−1 (k − 1), · ∂colW ∂colW

∂ y˜ (k) T ∂ y˜ (k) . · R−1 (k − 1) · ∂colW ∂colW

Now, simplifying the partial derivative

∂ y˜ (k) ∂colW

(18)

(19)

= colx(k), we yield

1 · (R−1 (k − 1) μ R−1 (k − 1) · colx(k) · colx(k)T · R−1 (k − 1) − ), μ + colx(k)T · R−1 (k − 1) · colx(k)

R−1 (k) =

(20)

where the final weight update rule results to be w = e(k) · colxT · R−1 (k).

(21)

The relations (16)–(21) may also be extended for application to tuning of the HONU y˜ (k) . Application of feedback controller, where considering the partial derivative ∂∂colv the RLS algorithm to both linear and non-linear process identification can vary in performance, where in certain cases due to the sensitivity of the learning rate, can require smaller setting and longer runs of epochs in similar sense to the classical gradient descent (GD) learning algorithm or L-M algorithm. However as illustrated in Fig. 2, the RLS in most cases can outperform the classical gradient descent and L-M algorithm in rapid minimization of sum of square errors (SSE) therefore examples of its application are highlighted as a further component of this paper in the experimental analysis section.

4 Batch Learning Strategy for Control with Static HONU as a Plant Model From previous works as such [22–24, 32] HONUs are initially identified as dynamic plant models i.e. via recurrent HONUs followed by training of a single HONU feedback controller. In this paper, we extend the classical HONU-MRAC control loop with of multiple feedback HONU controllers, in accordance to Fig. 3. Usually we assume that the magnitudes of the input and output variables i.e. of d, y and

Water Level[m]

248

P. M. Benes et al. 0.08 0.07 0.06 0.05 0.04 0.03 0.02 0.01 0.00 -0.01 700

d y_, ref y_, Plant y_, GD-QNU y_, LM-QNU y_, RLS-QNU y_, RLS-Plant.-QNU

750

800

850

900

950

1.8 1.6 1.4 1.2 1.0 0.8 0.6 0.4 0.2 0.0 1.0

0.08

SSE_, GD-QNU

SSE_, GD-QNU SSE_, LM-QNU

SSE_, LM-QNU SSE_, RLS-QNU

0.06

SSE_, RLS-Plant.-QNU

SSE

SSE

time [s]

0.04 0.02 0.00

1.5

2.0

2.5

3.0

3.5

4.0

4.5

5.0

40

41

42

43

44

45

46

47

48

49

epochs

epochs

Fig. 2 MRAC-HONU control loop on Two-Tank liquid system (39)–(40) where one dynamic HONU is as a plant model and second as a nonlinear state feedback controller. The RLS algorithm is superior to classical GD and L-M learning

yr e f are normalized (z-scored). Further, the input gain r0 can be also adaptive and it compensates for the true static gain of the controlled plant. An added advantage of the customizable controller non-linearity of the extended HONU-MRAC control scheme is that the controller computation can be tailored to target different aspects of control for the process dynamics e.g. one HONU controller can be used to suppress noise and another may be used to minimize steady state error. As depicted in Fig. 3, we introduce the extension of multiple HONU feedback controllers to calculated the new samples of control inputs, therefore the following control law may be defined  u(k) = ro · d(k) −

nq 

 qι (k) ,

(22)

ι=1

where n q is introduced as the number of extended HONU feedback controllers. Then in the sense of L-M algorithm, the controller weights are updated via following relation −1 1 T v = Jv · Jv + · Iv · JvT · er e f , μv

(23)

Framework for Discrete-Time Model Reference Adaptive Control …

249

Fig. 3 Discrete time model reference adaptive control (MRAC) loop with multiple HONU controller: one HONU serves as a plant model and other HONUs serve as a single-layer neuro controller

where the subscript v indicates it is the controller learning rule. Further the Jacobian matrix Jv includes recurrent backpropagation computations of partial derivatives (given proper application of the time indexes), it thus follows that Jv [k, :] =

∂ y˜ ∂u ∂q ∂ y˜ (k) = , ∂v ∂u ∂q ∂v

(24)

thus the Jacobian (24) should be recurrently calculated because ⎡

0 ⎢ ∂ y˜ (k − 1)/∂v ⎢ ∂q(k) ⎢ = (col γ (ξ ))T + v · ⎢ ∂ y˜ (k − 2)/∂v ⎢ ∂v .. ⎣ .

∂ y˜ (k − m)/∂v

⎤ ⎥ ⎥ ⎥ ⎥. ⎥ ⎦

(25)

250

P. M. Benes et al.

where it follows that the dimension of the upper rightmost matrix is (1 + m) × n v , and n v denotes the total number of neural controller weights (i.e. for corresponding length of v). From our contemporary knowledge as in the works [22–24], the proposed control scheme in Fig. 3 works well given plant identification is performed with as a dynamic HONU. We therefore suggest the HONU-MRAC control scheme is applied via first trying to identify the plant dynamics with a recurrent HONU and to follow with controller tuning. In the case of weakly nonlinear plants however, i.e. with not too high degree of non-linearity, the recurrent computations in practical sense can be avoided which leads to a more straightforward and computationally efficient control algorithm. An essential in controller tuning is to define a reasonable data set of measured values for plant identification and further controller tuning. In this paper we define several key components of the principle input vector x further, v in the sense of a HONU feedback controller. The term d denotes the desired value that we can feed into the plant to measure the corresponding plant output data y, further yr e f corresponds to the reference model of process i.e. simulating the desired behavior of the process taking into account the dynamic capabilities and limits of the system properties. The ultimate objective is to modify the control loop so it adopts the reference model behavior once the controller is properly trained. We thus desire that the trained controller provides such output value q so the next sample of the controlled plant y match with the prescribed reference model output yr e f .  T x = 1 yr e f (k − 1) yr e f (k − n y ) u(k − 1) . . . u(k − n u ) , T  ξ = 1 yr e f (k − 1) yr e f (k − 2) . . . yr e f (k − m) .

(26)

With this objective, in this paper we further propose a deviation from the traditional HONU-MRAC scheme via the relation (26). On feeding the reference model values directly into the input vector (1) of the original plant model, and maintaining the scheme as per Fig. 3 it is not necessary to simulate the closed loop output with a recurrent HONU model of the plant. Instead we may directly use the apriori reference model values and can thus update the controller weights directly. Furthermore, the kth row of Jacobian (24) can be directly evaluated as in (27) where the weights w were obtained by L-M and (or) CG learning algorithm with static HONU as a plant model. Thus, we yield a method for bypassing the recurrent computations in both plant identification as well as controller training, where the computational efficiency is as a further effect is optimized  ∂ y˜ (k) = w · 0 ... 0 ∂u

∂u(k−1) ∂v

...

 ∂u(k−n u ) T , ∂v

(27)

where, ∂u(k − 1) = −ro · (col γ (ξ))T . ∂v

(28)

Framework for Discrete-Time Model Reference Adaptive Control …

251

However a drawback behind the proposed enhancement is that the controller design is limited to linear or weakly nonlinear dynamic system applications. i.e. to such plants for which the static HONU sufficiently approximate the system from the measured data. In other sense, a recurrent HONU would be required for plant identification for the control scheme and is thus more difficult to train properly or in certain cases impossible given the degree of nonlinearity.

5 Stability Analysis 5.1 Decomposition Approach to HONU-MRAC Control Loop Stability Given the control approach presented in Sect. 4, this section analyzes the dynamical stability of the proposed MRAC based closed loop via decomposition method presented in [25] and further [26] where its extension as a MRAC control loop was presented. For better comprehensibility, let us assume a single static LNU as a plant, with extension of the control law (22), with this statement the HONU as per Table 1 can be redefined as n y +n u

y˜ (k) =



xˆi (k − i) · αˆ i +

i=1

n u +n u

uˆ i (k − i)βˆi +Ci (w0 ),

(29)

i=1

where we introduce a new vector of state variable terms xˆ (k − 1) corresponding to previous step-delayed output terms of the HONU-MRAC based control loop. ˆ − 1) corresponds to the vector of step-delayed input terms, both can Similarly, u(k be explicitly defined as T

xˆ (k) = [ y˜ (k − (n y + n u ) + 1) . . . y˜ (k − 1) y˜ (k)] , T

ˆ u(k) = [ d(k − (n u + n u ) + 1) . . . d(k − 1) d(k)] .

(30)

(31)

Then, for simplification the operator Ci (.) is now introduced to denote the sum of constant neural bias weight terms. Then the local characteristic coefficients αˆ i and may be computed via the following sub-polynomial expressions αˆ i = −ro ·

nu  j=1

wn y + j ·

nq  l=1

Ci (ql (k − j)) f or i = 1, 2, . . . , n y + n u ,

(32)

252

P. M. Benes et al.

where in the sense of a dynamic LNU plant the coefficients αˆ i for i = 1, 2, 3, . . . , n y is given as αˆ i = wi − ro ·

nu 

wn y + j ·

j=1

nq 

Ci (ql (k − j)) f or i = 1, 2, 3, . . . , n y .

(33)

l=1

ˆ − 1) may then be The corresponding coefficient terms βˆi for the input vector u(k computed as   ⎧ nq nu



⎪ ⎪ ⎪ wn y + j · Ci (ql (k − j)) f or i = 1, . . . , n u ⎨ ro · wn y +i − j=1 l=1 ˆ βi = (34) nq nu ⎪



⎪ ⎪ −ro · wn y + j · Ci (ql (k − j)) f or i = n u + 1, . . . , n u + n u , ⎩ j=1

l=1

where the resulting expressions (32)–(33) further, (34) may be applied to the more classical HONU-MRAC configuration as in [24, 26] and considering its modification via (26). Then, we may then express the resulting canonical state space form as ˆ · xˆ (k), ˆ − 1) · xˆ (k − 1) + N ˆ a · uˆ a (k − 1); y˜ (k) = C xˆ (k) = M(k ⎡ ⎢ ⎢ ⎢ ˆ =⎢ M ⎢ ⎢ ⎢ ⎣

0

1

0

0

0

0

0 0 αˆ n y+ n u αˆ n y+ n u −1

⎤ 0 ··· 0 ⎡ .. ⎥ 0 0 ⎥ 1 ··· . ⎥ ⎢ .. .. ⎢ ⎥ ˆ .. .. . ,N=⎢ . . . 0⎥ ⎥ ⎣ 0 0 ⎥ .. . 0 1⎦ βˆn u +n u βˆ(n u +n u )−1 · · · αˆ 2 αˆ 1

··· .. . ··· ···

⎤ 0 .. ⎥ . ⎥ ⎥. 0⎦ βˆ1

(35)

(36)

ˆ as the local matrix of dynamics (LMD). Further, For further reference we term M the augmented input matrix and input vector may be defined as ⎡

⎤ 0 ⎢ .. ⎥ T  ˆ . ⎥ ˆa = ⎢ N ⎢N ⎥ , uˆ a (k − 1) = u(k ˆ − 1) Ci (w0 ) . ⎣ 0⎦

(37)

1 According to the definitions of BIBO and ISS stability [29], the forms (35)–(36) for the HONU-MRAC closed control loop may be justified for ISS (further BIBS) stability [26] if the following holds from an initial state sample k0 until k as follows

Framework for Discrete-Time Model Reference Adaptive Control …

253

 k−1 k−1   k−1                ˆ ˆ ˆ a (κ) M(κ) ·N S = xˆ (k) −   M(i)  · xˆ (k0 ) +  · uˆ a (κ) ≤ 0,     κ=k0

κ=k0

i=κ

(38) where a sufficient condition for maintaining BIBS yields if S(k) = S(k) − S(k − 1) ≤ 0 given the relation (38) in sample k is not violated.

5.2 Two-Tank Liquid Level System To investigate application of the decomposed stability approach described in Sect. 5.1, let us consider a weakly non-linear two-tank liquid level system described via the following balancing equations  dh 1 = Q t − Cdb · s1 · 2 · g · (h 1 − h 2 ), dt   dh 2 A· = Cdb · s1 · 2 · g · (h 1 − h 2 ) − Cdc · s2 · 2 · g · h 2 , dt A·

(39) (40)

where Q t [m3 s−1 ] denotes the inlet flow rate of the system. The tank cross-sectional area A = 0.002[m2 ], orifice cross-sectional areas s1 = s2 = 0.000785[m2 ], orifice discharge coefficients Cdb = Cdc = 0.60, the density of water ρ = 1000[kg/m3 ] and gravitational constant of acceleration g = 9.81[m s−2 ]. To add discussion on application of dynamic HONUs, the approach (33)–(34) is investigated with two HONUs i.e. one dynamic HONU as a plant and the second as a feedback controller as in [26]. For the offline tuned HONU-MRAC control loop, a single dynamic HONU is identified via RLS training with 5 previous model output values. 4 previous process inputs are further incorporated into the input vector. The HONU feedback controller consists of a single HONU feedback controller with the same input vector length i.e. n y = 5 and n u = 4 and the feedback gain r0 = 0.01. On real time application, of the derived control loop, similar performance yields with respect to the offline tuned HONU-MRAC control loop and applied online version as a constant parameter control loop (Fig. 4). As a further, the offline tuned HONU-MRAC control loop tuned after 200 epochs is in its last training epoch introduced with a large increase in its learning rate to μ = 0.9998 at time t > 488[s]. From Fig. 5c, d it is evident that from t > 490[s] the condition (38) switches from a monotonic decrease to S(S > 0) and hence signifies the onset of instability where the BIBS condition is violated for t > 492[s]. This is reflected in Fig. 5b via the violation of BIBO stability corresponding to spectral radii ρ(.) > 1 however due to the relation (38) accounting for the previous samples of HONU-MRAC state transitions, S(S > 0) yields a stronger condition that clearly pronounces that onset of instability. It further justifies the trajectory in state space for the given control input is becoming unstable as opposed to the

254

P. M. Benes et al. d 3.5

y_ HONU-tuned y_ HONU-real

y[cm]

3.0 2.5 2.0 1.5 1.0 0.5 0.0

0

200

400

t [s]

600

800

1000

Fig. 4 Comparison of offline tuned and real-time application of HONU-MRAC as constant parameter control loop on real two-tank liquid level system: one as a plant model and second as a nonlinear state feedback controller

local dynamics in vicinity of the discrete state point. In such case, though the LMD eigenvalues may locally recover i.e. ρ(.) ≤ 1 in the sense of an adaptive control loop, however in a global sense the HONU-MRAC response may be dynamically unstable with respect to transition from neighboring states.

6 Experimental Analysis Following the theoretical background of HONUs and proposed HONU-MRAC control loop scheme (Fig. 3) this section provides several practical examples on weakly non-linear dynamic systems. For identification of plant dynamics, a single static HONU is considered with static HONU(s) applied as feedback controller(s) (Fig. 3). The Python (2.7) programming language and scientific computational library Scipy are used for all presented results.

6.1 Linear Oscillating Dynamical System This section analyses the proposed HONU-MRAC control loop approach on a linear oscillating dynamic system. Thus, as an initial let us consider the following transfer function, where s denotes the Laplace operator G(s) =

50s 2 + 10s + 5E4 . 50s 4 + 500s 3 + 5E4s 2 + 4E4s + 2E6

(41)

Framework for Discrete-Time Model Reference Adaptive Control …

(a) 30 y[cm]

20

255

d yHONU_Stable yHONU_Sim. Model

10 0 -10 487

488

489

490

491

492

488

489

490

491

492

488

489

490

491

492

490

491

492

rho(M)

(b) 1.20 rho(M)_Unstable

1.15 1.10 1.05 1.00 0.95 0.90 0.85 0.80 487

(c)

15

S

10

S

5 0 -5 -10 487

(d) dS(+)

4

dS(+)_Sim. Model

3 2 1 0 487

488

489 t[s]

Fig. 5 a Adaptive (RLS) LNU-QNU control loop becomes unstable soon after learning rate μ(t > 488) = 0.9998. b Spectral radii through time of LNU-QNU closed loop LMD c BIBS condition (38) through time. d Showing the positive difference of (38), i.e. S(S > 0) reveals instability onset soon after learning rate becomes changed for t > 488

For setup of the necessary training data, a sampling interval of t = 1 [time unit] is used, where a single static HONU featuring n y = 5 process outputs and n u = 5 plant inputs are chosen. L-M training is used over 300 epochs with a learning rate μ = 0.1 considering the same constant sampling interval as the original data set. In this example, two parallel LNUs are chosen as a controller to enhance the control performance as introduced recently in [12]. Thus, on recalling the general control law (22) it yields that

256

P. M. Benes et al.

u(k) = ro · (d(k) − q1 (k) − q2 (k)),

(42)

where q1 and q2 are outputs of two parallel LNUs (Fig. 3). Both controller LNUs are trained via L-M learning with input vector setups of 5 previous step-delayed samples of the reference model i.e. m 1 = 5, m 2 = 5 from (26). Further, the same learning rate μv = 1E4 for the weights and for r0 within 30 training epochs are used. Figure 6 illustrates the performance of the HONU-MRAC control loop and respectively trained controller weights.

6.2 Application to Weakly Nonlinear System To investigate application to a weakly nonlinear system, let us consider the following theoretical plant of second-order dynamics as in Fig. 7, where τ denotes the time

Fig. 6 Results for linear oscillatory plant (41) via static LNU as a plant model (trained via L-M) and two parallel LNUs as feedback controllers (trained via L-M); the control loop output follows the desired unit steps reference signal yr e f (the upper plot), the control input (2nd from top), bottom axes shows training by L-M batch learning of static LNU for plant (bottom left), input gain (middle) and static LNU as controller (bottom right) (adopted from [31])

u (t )

τ ⋅ χ (t ) + χ (t ) = S u ⋅ u (t )

0.1 y( t) + y( t) = 0.1χ ( t)

y (t )

Fig. 7 A weakly nonlinear plant where τ = τ (u) and Su = Su (y) are nonlinear functions (43)

Framework for Discrete-Time Model Reference Adaptive Control …

257

constant and Su is the static gain corresponding to the first subsystem. The respective nonlinear functions follow as τ = τ (u) = 0.2 + |u| · 0.1; Su = Su (y) = 0.7 + |y| · 0.2.

(43)

In this example, both the plant model and continuous time control loop are simulated with a sampling interval of t = 0.01 [time unit] via forward the forward Euler method. A static CNU (i.e. HONU, r = 3) is identified from the measured data with n y = 3, n u = 3 set as the respective neural input vector lengths. The HONUMRAC control loop is trained via the L-M algorithm with μ = 0.01 followed with 10 epochs of CG for accelerated learning (see the accelerated convergence in Fig. 1). An additional single CNU as a feedback controller (as in Fig. 3) with setup m = 2 and also r0 is also trained via L-M (10 epochs, μv = 1E6). For control loop tuning the sampling interval of t H O NU = 0.5 [time unit] is used. The performance of the trained control loop compared to the plant without a controller is shown in Fig. 8.

Fig. 8 Results for control of weakly nonlinear dynamical system with variable time constant and static gain (43), static CNU as a plant model (trained via L-M + CG, Fig. 1) and static CNU controller (via L-M) (the bottom axes show the detail of the desired behavior with the trained control loop output) (adopted from [31])

258

P. M. Benes et al.

6.3 Torsional Pendulum To provide further analysis on the incremental control algorithms namely normalized gradient descent (NGD) and RLS and further to emphasis the limits for applicability of the discussed approach, let us consider a torsional pendulum (44) adopted from [4]. As a modification, we introduce a increased friction coefficient for stabilization of the system to obtain training data. It is thus a key point to note that our MRACHONU control loop scheme, requires sufficiently stable training data in order to properly identify the corresponding process dynamics and extend controller tuning. With this note, the modified (stabilized) discrete time inverted pendulum model is then as follows χ1 (k + 1) = χ1 (k) + 0.1 · χ2 (k) χ2 (k + 1) = −0.49 · sin(χ1 (k)) + (1 − 0.1 f d ) · χ2 (k) + 0.1 · u(k),

(44)

where the measured output is simulated as y = χ1 , f d = 0.6 is the friction and u is control input. Given the successful setup in Fig. 8, in a similar manner a static CNU

2.5 2.0 1.5 1.0 0.5 0.0 -0.5 -1.0 -1.5

Plant ident HONU r=3 L-M + CG for last epoch u yr yn ... static HONU yn_test ... dynamic HONU

0

50

100

150

200

250

300

Control loop output, HONU controller r=2 via one epoch of NGD

0.6

d

0.4

yref

0.2

yn_NGD

0.0 -0.2 -0.4 -0.6 0

50

100

150

500

1000

1500

t[s]

200

250

300

2000

2500

3000

vall

2 1 0 -1 -2 0

samples

Fig. 9 Investigation of NGD performance after one training epoch for control of plant (44). A single static HONU is trained via L-M with CG algorithm after 20 epochs, a single static HONU controller is tuned on identified HONU model (dynamically) via NGD and applied as an extension to a P-control loop

Framework for Discrete-Time Model Reference Adaptive Control …

259

featuring n y = n u = 2 is trained via L_M over 20 epochs followed by an additional 20 epochs of CG training to accelerate the plant identification. Following successful plant identification, a single static QNU (i.e. HONU, r = 2) is applied as a feedback controller for controller input vector lengths n qy = n qe = 3 respectfully. Two setups are tuned on the previously identified HONU model (as a dynamically trained HONU) on the first epoch via NGD and RLS tuning respectively for comparison. Figures 9 and 10 illustrate the performance of both approaches on the same HONU-MRAC control loop configuration. From Fig. 9 although the NGD algorithm towards the final samples of training data starts to fit closer to the desired control loop response, as can be seen in the HONU feedback controller weights the tuned values are more erratic during training and not so marginally retuned as compared to the neural weights shown in Fig. 10 for RLS training of the same control loop. In the final samples of training data the RLS algorithm even after one epoch is able to adequately minimize the steady state error to fit the desired control loop response, which is reflected in the substantially retuned neural weights across the whole training data.

2.5 2.0 1.5 1.0 0.5 0.0 -0.5 -1.0 -1.5

Plant ident HONU r=3 L-M + CG for last epoch u yr yn ... static HONU yn_test ... dynamic HONU

50

0

100

200

150

250

300

Control loop output, HONU controller r=2 via one epoch of RLS

0.6

d

0.4

yref

0.2

yn_RLS

0.0 -0.2 -0.4 -0.6 0

50

100

150

200

250

300

2000

2500

3000

t[s] 10

vall

5 0 -5

-10 -15 0

500

1000

1500

samples

Fig. 10 Superior controller tuning performance via RLS compared to NGD after one training epoch for control of plant (44). A single static HONU is trained via L-M with CG algorithm after 20 epochs, a single static HONU controller is tuned on identified HONU model (dynamically) via RLS and applied as an extension to a P-control loop

260

P. M. Benes et al.

Fig. 11 Control of plant (44) with the increasing desired value (dashed), the accurate control is more difficult to achieve as the sinusoidal nonlinearity becomes stronger with increasing magnitude of desired value, so it demonstrates current limits and challenges of the identification and control of strongly nonlinear systems with HONUs (adopted from [31])

However, for higher amplitudes at the applied plant input a strong sinusoidal nonlinearity can be seen as depicted in Fig. 11. On application of higher amplitudes of the plant input in this case desired value of the control loop, it is evident that for increased amplitudes the HONU-MRAC closed control loop struggles to maintain accurate control of such strong nonlinearity within the system hence highlighting the boundaries for application of such control approach to nonlinear systems.

7 Conclusion This paper presented a HONU-MRAC control strategy with focus on static HONUs to avoid recurrent computations and further improve convergence of the controller training and multiple HONU feedback controller configuration. The CG algorithm was presented as an efficient technique for accelerating plant identification in lieu with the L-M training algorithm. This adaptive control technique was shown to be easily applied to weakly nonlinear systems which can be well approximated with HONUs of appropriate polynomial order (here < 3). In addition to the work [31] deeper investigation behind incremental training algorithms namely (GD, NGD and RLS) were presented and the capabilities of RLS training with application to the presented HONU-MRAC control loop approach were highlighted. Furthermore, study into stability analysis of the proposed control loop configurations was presented, whereas in any adaptive control loop design plays a paramount role in ensuring the adaptive control loop maintains dynamical stability along its trajectory in state-space

Framework for Discrete-Time Model Reference Adaptive Control …

261

especially in sense of real-time application. Connotations to practical industrial processes was highlighted via this straightforward control approach however, as seen in the sense of the torsional pendulum example for dynamical systems which such strong nonlinearity, maintaining accurate control across all operating points remains a challenge of the presented HONU-MRAC based design. Acknowledgements Authors acknowledge support from the EU Operational Programme Research, Development and Education, and from the Center of Advanced Aerospace Technology (CZ.02.1.01/0.0/0.0/16_019/0000826), and the Japanese JSPS KAKENHI Grant Number 15J05402.

References
1. Suganthan, P.N.: Letter: on non-iterative learning algorithms with closed-form solution. Appl. Soft Comput. 70, 1078–1082 (2018)
2. Wang, F., Zhang, H., Liu, D.: Adaptive dynamic programming: an introduction. IEEE Comput. Intell. Mag. 4(2), 39–47 (2009)
3. Wang, X., Cheng, Y., Sun, W.: A proposal of adaptive PID controller based on reinforcement learning. J. China Univ. Min. Technol. 17(1), 40–44 (2007)
4. Liu, D., Wei, Q.: Policy iteration adaptive dynamic programming algorithm for discrete-time non-linear systems. IEEE Trans. Neural Netw. Learn. Syst. 25(3), 621–634 (2014)
5. Garcia, C.E., Prett, D.M., Morari, M.: Model predictive control: theory and practice—a survey. Automatica 25(3), 335–348 (1989)
6. Ławryńczuk, M.: Neural networks in model predictive control. Intell. Syst. Knowl. Manag. 31–63 (2009)
7. Morari, M., Lee, J.H.: Model predictive control: past, present and future. Comput. Chem. Eng. 23(4–5), 667–682 (1999)
8. Osburn, P.V.: New Developments in the Design of Model Reference Adaptive Control Systems. Institute of the Aerospace Sciences (1961)
9. Narendra, K.S., Valavani, L.S.: Direct and indirect model reference adaptive control. Automatica 15(6), 653–664 (1979)
10. Parks, P.C.: Liapunov redesign of model reference adaptive control systems. IEEE Trans. Autom. Control 11(3), 362–367 (1966)
11. Elbuluk, M.E., Tong, L., Husain, I.: Neural-network-based model reference adaptive systems for high-performance motor drives and motion controls. IEEE Trans. Ind. Appl. 38(3), 879–886 (2002)
12. Bukovsky, I., Homma, N.: An approach to stable gradient-descent adaptation of higher order neural units. IEEE Trans. Neural Netw. Learn. Syst. 28(9), 2022–2034 (2017)
13. Gupta, M.M., Bukovsky, I., Homma, N., Solo, A.M.G., Hou, Z.-G.: Fundamentals of higher order neural networks for modeling and simulation. In: Zhang, M. (ed.) Artificial Higher Order Neural Networks for Modeling and Simulation, pp. 103–133. IGI Global, Hershey, PA, USA (2013)
14. Nikolaev, N.Y., Iba, H.: Adaptive Learning of Polynomial Networks: Genetic Programming, Backpropagation and Bayesian Methods. Springer, New York (2006)
15. Ivakhnenko, A.G.: Polynomial theory of complex systems. IEEE Trans. Syst. Man Cybern. SMC-1(4), 364–378 (1971)
16. Taylor, J.G., Coombes, S.: Learning higher order correlations. Neural Netw. 6(3), 423–427 (1993)
17. Kosmatopoulos, E.B., Polycarpou, M.M., Christodoulou, M.A., Ioannou, P.A.: High-order neural network structures for identification of dynamical systems. IEEE Trans. Neural Netw. 6(2), 422–431 (1995)


18. Tripathi, B.K.: Higher-order computational model for novel neurons. In: High Dimensional Neurocomputing, vol. 571, pp. 79–103. Springer India, New Delhi (2015)
19. Dai, Y., Yuan, Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10(1), 177–182 (1999)
20. El-Nabarawy, I., Abdelbar, A.M., Wunsch, D.C.: Levenberg-Marquardt and conjugate gradient methods applied to a high-order neural network. In: 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, pp. 1–7 (2013)
21. Zhu, T., Yan, Z., Peng, X.: A modified nonlinear conjugate gradient method for engineering computation. Math. Probl. Eng. 1–11 (2017)
22. Benes, P.M., Bukovsky, I., Cejnek, M., Kalivoda, J.: Neural network approach to railway stand lateral skew control. In: Computer Science & Information Technology (CS & IT), Sydney, Australia, vol. 4, pp. 327–339 (2014)
23. Benes, P., Bukovsky, I.: Neural network approach to hoist deceleration control. In: 2014 International Joint Conference on Neural Networks (IJCNN), pp. 1864–1869 (2014)
24. Bukovsky, I., Benes, P., Slama, M.: Laboratory systems control with adaptively tuned higher order neural units. In: Silhavy, R., Senkerik, R., Oplatkova, Z.K., Prokopova, Z., Silhavy, P. (eds.) Intelligent Systems in Cybernetics and Automation Theory, pp. 275–284. Springer International Publishing (2015)
25. Benes, P., Bukovsky, I.: On the intrinsic relation between linear dynamical systems and higher order neural units. In: Silhavy, R., Senkerik, R., Oplatkova, Z.K., Prokopova, Z., Silhavy, P. (eds.) Intelligent Systems in Cybernetics and Automation Theory. Springer International Publishing (2016)
26. Benes, P., Bukovsky, I.: An input to state stability approach for evaluation of nonlinear control loops with linear plant model. In: Silhavy, R., Senkerik, R., Oplatkova, Z.K., Prokopova, Z., Silhavy, P. (eds.) Cybernetics and Algorithms in Intelligent Systems, vol. 765, pp. 144–154. Springer International Publishing (2018)
27. Ahn, C.K.: Robust stability of recurrent neural networks with ISS learning algorithm. Nonlinear Dyn. 65, 413–419 (2011)
28. Yang, Z., Zhou, W., Huang, T.: Exponential input-to-state stability of recurrent neural networks with multiple time-varying delays. Cogn. Neurodyn. 8(1), 47–54 (2014)
29. Wang, Z., Liu, D.: Stability analysis for a class of systems: from model-based methods to data-driven methods. IEEE Trans. Ind. Electron. 61(11), 6463–6471 (2014)
30. Zhang, H., Wang, Z., Liu, D.: A comprehensive review of stability analysis of continuous-time recurrent neural networks. IEEE Trans. Neural Netw. Learn. Syst. 25(7), 1229–1262 (2014)
31. Bukovsky, I., Voracek, J., Ichiji, K., Noriyasu, H.: Higher order neural units for efficient adaptive control of weakly nonlinear systems. In: Proceedings of the 9th International Joint Conference on Computational Intelligence, Funchal, Madeira, Portugal, vol. 1, pp. 149–157 (2017)
32. Benes, P.M., Erben, M., Vesely, M., Liska, O., Bukovsky, I.: HONU and supervised learning algorithms in adaptive feedback control. In: Applied Artificial Higher Order Neural Networks for Control and Recognition, pp. 35–60. IGI Global (2016)

Assessing Transfer Learning on Convolutional Neural Networks for Patch-Based Fingerprint Liveness Detection Amirhosein Toosi, Sandro Cumani and Andrea Bottino

Abstract Fingerprint based biometric identification systems are vulnerable to spoofing attacks that involve the use of fake replicas of real fingerprints. The resulting security issues can be mitigated through the development of software modules capable of detecting the liveness of an input image and, thus, of discarding fake fingerprints before the classification step. In this work we present a fingerprint liveness detection method that combines a patch-based voting approach with Transfer Learning techniques. Fingerprint images are first segmented to discard background information. Then, small-sized foreground patches are extracted and processed by popular Convolutional Neural Network models, whose pre-trained versions were adapted to the problem at hand. Finally, the individual patch scores are combined to obtain the fingerprint label. Experimental results on well-established benchmarks show the promising performance of the proposed method compared with several state-of-the-art algorithms. Keywords Fingerprint spoofing · Convolutional Neural Networks · Fingerprint segmentation · Patch based classification · Transfer learning

A. Toosi · S. Cumani · A. Bottino (B)
Department of Control and Computer Engineering, Politecnico di Torino, Turin, Italy
e-mail: [email protected]
A. Toosi
e-mail: [email protected]
S. Cumani
e-mail: [email protected]
© Springer Nature Switzerland AG 2019
C. Sabourin et al. (eds.), Computational Intelligence, Studies in Computational Intelligence 829, https://doi.org/10.1007/978-3-030-16469-0_14

1 Introduction
In recent years, fingerprint based authentication has become more and more pervasive [1]. Fingerprint sensors are being deployed on a variety of consumer devices, like



notebooks and mobile phones, and are becoming a solution for access control in common facilities, like schools, health clubs and hospitals. The wide adoption of these kinds of devices, however, raises several security concerns, due to their vulnerability to different forms of attack, which might result in granting access to unauthorized persons. The development of effective countermeasures has therefore become a relevant topic.
Attacks can be divided into two categories: direct attacks, operating directly on the sensor, usually by means of fake replicas of real fingerprints, and indirect attacks, which target the inner modules of the fingerprint recognition system. Clearly, the first kind of attacks are easier to implement for intruders without expert knowledge of the hardware and software architectures of the sensors. Fingerprint replicas can be easily obtained by creating a mold from a latent or real fingerprint, and then filling it with materials like latex, gelatin, vinyl or wood glue and so on. It has been demonstrated that even a high quality digital image of a fingerprint allows performing successful attacks [2]. The literature shows that the success rate of such spoofing attacks can be higher than 70% [3], highlighting the need for specific protection methods capable of identifying live samples and rejecting fake ones.
Liveness detection can then be addressed either at hardware or software level. Hardware based approaches equip the fingerprint scanner with additional sensors able to detect typical cues of real fingerprints, like temperature, pulse and skin resistance. However, these methods are, in general, limited by their costs and their inability to adapt to novel forms of attacks. On the contrary, software-based approaches are cost-effective and maintainable solutions that merely rely on additional software modules expressly designed to tell a live from a fake image. Software methods can be further divided into dynamic, which analyze an image stream, and static, which process a single fingerprint scan. Static methods are usually more attractive, since they require less data and less computational resources, and can be applied as well to those sensors not able to capture image streams.
Several solutions have been proposed in the literature for static software-based liveness detection. Initial attempts were based on the observation that fake samples are usually characterized by a lower image quality. This led to the development of methods aimed at capturing the quality of the fingerprint scans by exploiting a variety of different holistic features [4–7]. A comparison of these methods on public benchmarks, however, showed the limited effectiveness of such approaches. Better performances were obtained using well known standard local descriptors, such as Local Binary Pattern (LBP), Weber Local Descriptor (WLD), Binary Statistical Image Features (BSIF), Local Phase Quantization (LPQ), Scale-Invariant Feature Transform (SIFT), DAISY and the Scale-Invariant Descriptor (SID). Recently, interesting results have been obtained with the introduction of descriptors expressly designed for fingerprint liveness detection, like the Histogram of Invariant Gradients (HIG) [8], the Local Contrast Phase Descriptor (LCPD) [9], and the Convolutional Comparison Pattern (CCP) [10]. The effectiveness of descriptor-based approaches fostered as well the development of feature fusion methods, able to combine the different characteristics of several handcrafted features.
Examples are SVM classification of LPQ and LBP [11], LPQ


and WLD [12], and the integration of various image filters and statistical measures [13]. In a recent paper, we also analyzed the combination of various local descriptors and different methods for their aggregation [14]. All these works show that fusion approaches allow improving the detection accuracy compared to methods based on individual features.
In recent years the development of Convolutional Neural Networks (CNN) and Deep Learning (DL) approaches has greatly affected the field of image recognition. The impressive results on a large number of visual recognition and classification challenges (such as MNIST, ImageNet, CIFAR and so on) have shown that CNNs and DL approaches are often able to replace hand-crafted feature engineering and to achieve better accuracies than methods based on local descriptors. Thus, researchers started analyzing their contribution to the fingerprint liveness detection problem as well. The proposed deep learning approaches can be roughly divided into two classes. In the first class we can find works based on the definition of ad-hoc models. These methods include the work of [15], which proposes a Deep Belief Network (DBN) with multiple layers of restricted Boltzmann machines, and [16], which presents spoofnet, a deep CNN architecture able to greatly improve the state-of-the-art, which was created by optimizing both the architecture hyperparameters and the filter weights. The second class includes methods based on Transfer Learning approaches, whose rationale is to exploit the knowledge learned while solving a problem and apply it to a similar problem in a different context. When applied to CNNs, the common transfer learning strategy starts from picking deep models that were pre-trained on the ImageNet dataset [17] and then fine-tuning them to the novel task. The rationale of this procedure is that the ImageNet dataset contains millions of natural images that include objects belonging to 1,000 different categories (like animals, vehicles, buildings and so on). Therefore, models trained on this dataset are capable of extracting high level features that are general enough to be “adaptable” to novel vision problems. Examples of TL approaches applied to the liveness detection problem can be found in [16, 18], where several reference models, like AlexNet, VGG and CIFAR-10, were analyzed.
The objective of our work is to further investigate the effectiveness of CNN based Transfer Learning approaches in the context of fingerprint liveness detection. In particular, we propose a patch-based strategy where, after a preliminary segmentation step aimed at discarding (noisy) background information, we divide fingerprint images into non-overlapping patches. These patches are then individually classified by the neural network and, finally, their individual classification scores are combined to obtain the final image label. The rationale of our approach is threefold. First, since the dimension of the CNN input layers is necessarily limited, using small sized patches makes it possible to avoid resizing the samples and, thus, helps retain the original resolution and information of the image. Second, the use of patches rather than the full images as samples increases the size of the training set, thus (hopefully) making the classifier more robust and improving its generalization capabilities.
Third, similarly to what has been done with handcrafted features, the combination of the different evidences extracted from the patches is likely to improve the robustness of the final fingerprint classification process.


A preliminary version of this work appeared in [19]. The main extensions introduced in the current paper are related to the comparison of different CNN reference models (namely, AlexNet and VGG, whereas [19] analyzed a single model) and to their assessment with respect to the current state-of-the-art. Experimental results show that patch-based TL classification (in particular, when using VGG as reference model) is able to both outperform non-patch based TL approaches and achieve, on average, similar results to those of CNN architectures built expressly for the fingerprint liveness detection task. The rest of the work is organized as follows. In Sect. 2 we introduce a detailed description of our approach. Section 3 presents and discusses the experimental results and, finally, conclusions are drawn in Sect. 4.

2 Methodology
The approach we propose is based on the extraction of small fingerprint patches, which are then independently classified by means of a CNN model. This process requires a preliminary segmentation step to divide the fingerprint image into foreground, i.e. the region of interest (ROI), and background pixels. Image patches are then extracted from the ROI and normalized to zero mean and unit variance before being fed to the classifier. As for the patch classification, rather than defining an ad-hoc CNN architecture, we adopt a Transfer Learning approach, consisting of fine-tuning well-known architectures that achieve state-of-the-art performance in object recognition tasks. In particular, we focus on AlexNet and VGG networks. This fine-tuning consists of a further training step that exploits as input the patches extracted from the fingerprint training datasets. During this step, we also apply data augmentation to (i) reduce the risk of overfitting and (ii) increase the robustness of the classifier. When an input fingerprint image has to be analyzed, it is first divided into patches, which are then normalized and independently analyzed by the fine-tuned CNN to obtain a patch score. The “live” or “fake” fingerprint label is finally inferred from the combination of all the scores of its patches. These steps are summarized in Fig. 1 and detailed in the following subsections.

2.1 Segmentation Fingerprint segmentation is based on the method proposed in [20], which is built upon the preliminary observation that the patterns of fingerprint images have frequencies only in specific bands of the Fourier spectrum. In order to preserve these frequencies, the Fourier transform of the original image is first convolved with a directional Hilbert transform of a Butterworth bandpass filter, obtaining 16 directional sub-bands. Then, soft-thresholding is applied to remove spurious patterns. Finally,

[Fig. 1 block-diagram residue removed. Training phase: training set image → segmentation (ROI) → patch extraction → data augmentation → normalized patches → fine-tuning of the CNN model. Testing phase: test set image → segmentation (ROI) → patch extraction → normalized patches → classification of the image patches → patch scores → fingerprint image classification.]

Fig. 1 Outline of the proposed fingerprint liveness detection approach

Fig. 2 Examples of segmented fingerprint images from different sensors: a Sagem 2011. b Italdata 2011. c Biometrika 2013. d Italdata 2013. e Digital 2011. f Biometrika 2011 and g Swipe 2013 [19]

the feature image is binarized and the final segmentation is obtained by means of morphological operators. The method requires fine-tuning a set of hyperparameters, whose optimal values can vary across different benchmarks. These parameters were selected by optimizing the segmentation error on a small set of manually segmented images (around 30), which are taken from the training set to include both live and fake samples created with different spoofing materials. Some examples of the segmentation results can be seen in Fig. 2. The only exception to this procedure is represented by one of the benchmarks used in our experiments, the Swipe 2013 dataset (see Sect. 3.1), whose images are obtained by swiping the fingerprint on a linear scanner. In some cases, these images include


[Fig. 3 flow-diagram residue removed: original image → remove white space → cropped image → segmentation (FDB algorithm) → incorrectly segmented image → compare top ROI boundary with starting row of the segmented image → crop and re-segment (FDB algorithm) → identify region-of-interest boundaries (top and bottom).]

Fig. 3 An example showing the segmentation algorithm applied to Swipe 2013 images [19]

other finger parts beyond the pulp (the finger extremity). When this happens, we noticed that the segmentation algorithm might be “attracted” by these parts, discarding the pulp. Thus, for Swipe 2013 images, we adopted a slightly different procedure. First, we removed the blank rows at the image bottom and identified the beginning and end of the impressed fingerprint by detecting large peaks of the gradient between consecutive image lines. We then applied the segmentation algorithm to the extracted region. Clearly, a successful segmentation should start at the beginning of this region. If, on the contrary, it starts below a certain line (which we heuristically fixed at the value 300), we take the starting line of the (incorrectly) segmented area as the lower boundary of the actual fingerprint region and apply the segmentation again to obtain the final foreground mask (see Fig. 3 for an example).
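The row-gradient heuristic described above can be sketched as follows; this is a rough illustration of the idea (assuming an 8-bit grayscale image with a bright background), and the two thresholds are our illustrative assumptions, not the authors' values.

import numpy as np

def swipe_fingerprint_rows(img, blank_thresh=250, peak_thresh=10.0):
    """Rough sketch: trim blank rows at the bottom, then locate the imprinted
    fingerprint by large jumps of mean intensity between consecutive rows."""
    row_mean = img.mean(axis=1)
    # drop trailing blank (near-white) rows at the image bottom
    last = len(row_mean) - 1
    while last > 0 and row_mean[last] >= blank_thresh:
        last -= 1
    # large gradient peaks between consecutive lines mark the imprint's extent
    grad = np.abs(np.diff(row_mean[: last + 1]))
    peaks = np.flatnonzero(grad > peak_thresh)
    if peaks.size < 2:
        return 0, last
    return peaks[0], peaks[-1]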

2.2 Patch Extraction and Normalization
The segmentation mask defines the ROI on which the next computation steps are focused. This region is divided into patches of size w × w pixels, where w is a parameter of the method. In order to avoid any influence of background pixels, we only extract those patches whose pixels are all labeled as foreground. The algorithm works in the following way. We scan the ROI line by line, starting from its top-left corner and treating each pixel (i, j) as the top-left corner of a candidate patch. If all pixels of this patch belong to the ROI and are labeled as foreground, the patch is stored and the ROI scan restarts

[Fig. 4 diagram residue removed: fingerprint image → segmentation algorithm (FDB) → mask image (ROI) → segmented image, with a w × w patch window advanced from (i, j) to (i + 64, j + 64) for w = 64.]

Fig. 4 Example of the subdivision in patches of a segmented fingerprint for a patch size w = 64 [19]

at pixel (i + w, j). When the scan of line j is concluded, if no patches have been found, the scan restarts at line j + 1, otherwise at line j + w (see Fig. 4). Finally, as already stated, we normalize each patch to zero mean and unit variance before feeding it to the CNN classifier.
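The scan just described translates almost line-by-line into code; the following is a minimal sketch (ours), assuming a binary foreground mask of the same shape as the image, and using j for lines and i for columns as in the text:

import numpy as np

def extract_patches(img, mask, w=64):
    """Scan the ROI line by line; keep a w x w patch only if all of its
    pixels are labeled as foreground, then normalize it."""
    patches = []
    H, W = img.shape
    j = 0
    while j + w <= H:
        found, i = False, 0
        while i + w <= W:
            if mask[j:j + w, i:i + w].all():        # every pixel is foreground
                patch = img[j:j + w, i:i + w].astype(np.float64)
                patch = (patch - patch.mean()) / (patch.std() + 1e-8)  # zero mean, unit variance
                patches.append(patch)
                found = True
                i += w                              # scan restarts at pixel (i + w, j)
            else:
                i += 1
        j += w if found else 1                      # next line: j + w if a patch was found, else j + 1
    return patches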

2.3 Fine Tuning Pre-trained CNN Models
The patch classifier consists of a “standard” CNN model, pre-trained on a generic object recognition task and “adapted” by fine-tuning the network weights to our specific problem. In this work we focus on two reference CNN architectures, AlexNet [21] and VGG [22], whose characteristics are detailed in the following. Our choice is motivated by the impressive object recognition results obtained by these models, by the availability of their pre-trained versions, and by the possibility to compare our TL patch-based approach with TL full-image approaches based on the same models. The fine-tuning of the network weights is performed through a further learning step that exploits as input the patches extracted from the fingerprint training datasets.

2.3.1 AlexNet

The AlexNet model used in our work is substantially equivalent to the one described in [21] and is summarized in Fig. 5. The network architecture contains five convolutional layers, interwoven with three subsampling layers, followed by three fully-connected layers. The receptive field of the convolutional layers decreases from 11 in the first layer to 5 in the second and 3 in the remaining ones. The network uses Rectified


Fig. 5 AlexNet-BN architecture [19]

Linear Unit (ReLU) as activation function, in order to decrease the learning time and induce sparsity in the computed features. The size of the input layer is w × w × 1. In our work, we replaced the original 1,000-unit soft-max classification layer of [21], designed to predict 1,000 different classes, with a 2-unit soft-max layer, which provides an estimation of the posterior probabilities of the live and fake classes. Since fingerprint patches are grayscale images while the original AlexNet input consists of RGB color images, we simply picked the first channel of the weights of the first convolutional layer. As a note, we also tried to transform our samples from grayscale to color by simply replicating the image plane three times, with no significant differences. Stochastic gradient descent was used to fine-tune the network weights. Both data augmentation (see Sect. 2.3.3) and dropout regularization [23], applied to the first two fully connected layers with probability 0.5, have been used to soften the overfitting issues. As suggested in [24], we also used batch normalization (BN) to improve the network performance. BN, first proposed in [25], aims at stabilizing the learning process and decreasing the learning rates by reducing the internal covariate shift.
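The authors' implementation is in MATLAB with MatConvNet (see Sect. 3.5); purely as an illustration, the two adaptations just described (single-channel input and 2-unit head) can be sketched in PyTorch, where the torchvision model and its layer indices are our assumptions, not the chapter's code:

import torch.nn as nn
from torchvision import models

net = models.alexnet(pretrained=True)

# single-channel input: keep the first channel of the pre-trained RGB filters
old = net.features[0]                   # Conv2d(3, 64, kernel_size=11, stride=4, padding=2)
conv1 = nn.Conv2d(1, old.out_channels, kernel_size=old.kernel_size,
                  stride=old.stride, padding=old.padding)
conv1.weight.data = old.weight.data[:, :1].clone()
conv1.bias.data = old.bias.data.clone()
net.features[0] = conv1

# replace the 1000-way soft-max head with a 2-unit live/fake layer
net.classifier[6] = nn.Linear(4096, 2)
# note: dropout is already applied before the first two FC layers (p = 0.5),
# and torchvision's adaptive pooling lets the net accept w x w inputs

Fine-tuning then amounts to training this modified network with stochastic gradient descent on the (augmented) patch set.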

2.3.2 VGG-19

VGG networks were introduced by Simonyan and Zisserman in [22] as an improvement of the AlexNet architecture. VGG replaces the large receptive fields of AlexNet with multiple cascaded small-sized filters (3 × 3). The rationale of this choice is two-fold. First, the smaller size of the kernels allows extracting image features at a finer grain. Second, the filter cascades increase the depth of the network with respect to AlexNet, thus enabling the learning of more complex features.
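The effect of cascading small filters can be checked numerically: with stride 1, a stack of k 3 × 3 convolutions has the receptive field of a single (2k + 1) × (2k + 1) filter, with fewer weights. A quick illustrative check (ours, not from the paper):

def receptive_field(kernels, strides=None):
    """Receptive field of a stack of conv layers (stride 1 by default)."""
    strides = strides or [1] * len(kernels)
    r, jump = 1, 1
    for k, s in zip(kernels, strides):
        r += (k - 1) * jump
        jump *= s
    return r

assert receptive_field([3, 3]) == 5       # two 3x3 convs cover a 5x5 field
assert receptive_field([3, 3, 3]) == 7    # three cover a 7x7 field, as in VGG blocks
# weights per input-output channel pair: 2*9 = 18 vs 25, and 3*9 = 27 vs 49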


[Fig. 6 architecture-diagram residue removed: input layer (W × W × 1) → five convolution blocks of 2, 2, 4, 4 and 4 stacked 3 × 3 convolutions (feature maps W × W × 64 down to W/16 × W/16 × 512), separated by max-pooling layers → two 1 × 4096 fully connected layers → 1 × 2 output layer.]

Fig. 6 VGG-19 architecture

VGG architectures have shown very good performance on several image recognition problems. In particular, they achieved the best performance in the ILSVRC2014 classification task, and they were able to attain state-of-the-art performance on a wide variety of image recognition datasets, both as classification tools and as feature extractors [26–29]. Among the different available VGG architectures, we chose the VGG-19 variant, which, as the name suggests, has 19 layers. This choice was motivated by the availability of non patch-based TL results using the same architecture [18], which thus allows (similarly to AlexNet) the comparison between a full-image and a patch-based TL approach on the same model. The original VGG-19 input layer consists of 224 × 224 RGB images, and is followed by a stack of 16 3 × 3 convolutional layers. These layers are divided into 5 convolutional blocks, separated by 4 max-pooling layers. The convolutional layers are followed by two 4096-dimensional fully connected layers. The output layer consists of 1000 nodes, designed to predict 1000 different object classes (see Fig. 6). The network uses ReLU as activation function. As we did for AlexNet, we changed the size of the input layer to w × w × 1, and we replaced the network output with a layer consisting of two nodes, representing live and fake fingerprints, respectively. Again, data augmentation and dropout regularization, applied to the first two fully connected layers with probability 0.5, have been used to soften the overfitting issues. In this case, we did not exploit BN since, to the best of our knowledge, a batch-normalized pre-trained version of VGG is not publicly available.

2.3.3 Data Augmentation

Data augmentation is a commonly used technique to enhance the generalization capabilities of neural network models. The technique consists in generating synthetic training samples by applying small variations to the original data. For images, the


[Fig. 7 residue removed: original image, rotations of −22.5°, 0° and +22.5°, and mirroring of each.]

Fig. 7 Data augmentation

variations are often obtained through affine transformations and cropping [21]. The advantage of DA is that it “forces” the classifier to learn small variations of the input data, thus making it (possibly) more robust to unseen data. Furthermore, the increase of the sample population can also reduce overfitting in deep neural networks [30]. In this work, for each training image we produce five additional variations, obtained by (i) mirroring the image, (ii) rotating the image by −22.5° and +22.5°, and (iii) mirroring the rotated images. The same transformations are applied to the segmentation masks. Finally, we extract patches from both the original and augmented images following the same procedure detailed in Sect. 2.2. The process is detailed in Fig. 7. We underline again that the augmentation process is applied only for training.
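Generating the five variants is straightforward; a minimal sketch (ours) using PIL, where the rotation fill behavior is an implementation detail not specified in the paper:

from PIL import Image

def augment(img):
    """Five synthetic variants per training image: mirror, +/-22.5 degree
    rotations, and mirrored versions of the two rotations."""
    variants = [img.transpose(Image.FLIP_LEFT_RIGHT)]
    for angle in (-22.5, 22.5):
        rotated = img.rotate(angle)      # keeps the image size; borders are filled
        variants.append(rotated)
        variants.append(rotated.transpose(Image.FLIP_LEFT_RIGHT))
    return variants

The same function would be applied to the segmentation mask of each image, so that patch extraction on the augmented images remains consistent with their foreground regions.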

2.4 Patch Based Classification
The liveness of an input fingerprint image is determined by combining the scores of each of the sample patches, where, as patch score, we take the difference of the two inputs of the output layer (before the softmax). These scores are averaged to produce an image score. The scores can be interpreted as log-likelihood ratios between live and


fake hypotheses, and the image can be labeled by simply comparing the score to a threshold τ. Theoretically, the optimal accuracy should be obtained by setting τ = 0. In practice, we have observed that the scores are not well calibrated, i.e., the optimal accuracy is achieved with a different value of τ. In order to “recalibrate” the scores, we adopted a strategy that has been successfully employed in speaker verification tasks [31]. The method assumes that the scores for live and fake images can be modeled by means of Gaussian distributions, whose parameters can be estimated on a validation set. Given a score s, the calibrated score s_cal is obtained by computing the log-likelihood ratio

s_cal = log [ N(s; μ_L, σ_L) / N(s; μ_F, σ_F) ]    (1)

where μ_L, σ_L and μ_F, σ_F denote the mean and standard deviation of the live and fake uncalibrated scores, respectively. The sample label is then obtained by comparing the calibrated score s_cal with the theoretical threshold τ = 0. We underline that if no patches can be extracted from a test sample, we arbitrarily assign the fake label to the fingerprint. This choice derives from the observation that having a false fake is better than a false live, which could result in granting unauthorized access to the system.
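Equation (1) and the surrounding decision rule fit in a few lines; a minimal sketch (ours), assuming scipy for the Gaussian log-densities:

import numpy as np
from scipy.stats import norm

def fit_calibration(live_scores, fake_scores):
    """Estimate the Gaussian parameters of the uncalibrated scores on a validation set."""
    return (np.mean(live_scores), np.std(live_scores),
            np.mean(fake_scores), np.std(fake_scores))

def classify_image(patch_scores, mu_l, sd_l, mu_f, sd_f):
    """Average the patch scores, recalibrate with Eq. (1), threshold at tau = 0."""
    if len(patch_scores) == 0:
        return "fake"                     # no foreground patches: safer to reject
    s = np.mean(patch_scores)
    s_cal = norm.logpdf(s, mu_l, sd_l) - norm.logpdf(s, mu_f, sd_f)
    return "live" if s_cal > 0 else "fake"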

3 Results and Discussion In the following, we describe the results of our experiments. First, we introduce the experimental benchmarks used in our work (Sect. 3.1). Then, we briefly discuss the effects of data augmentation, batch normalization and score calibration (Sect. 3.2) and that of the patch size (Sect. 3.3) on each of the reference models. Finally, we assess our approach with the current state-of-the-art and discuss the obtained results (Sect. 3.4).

3.1 LivDet Datasets
The benchmarks used in this work are those made publicly available for the LivDet 2011 [32] and LivDet 2013 [33] competitions. These datasets have been largely used in the literature and enable a comparison with a great variety of methods and, in particular, with previous deep learning based approaches. Overall, the benchmarks consist of eight sets of live and fake fingerprints acquired with different devices (Table 1), all of which are equipped with flatbed scanners, with the exception of Swipe, which has a linear sensor. Its images are obtained by swiping the fingerprint and thus include a temporal dimension as well. Six out of the eight fake sets were acquired using a consensual method, where the subject actively cooperated


Table 1 Characteristics of the datasets used in the experiments

|                           | LivDet2011 |           |           |           | LivDet2013 |           |           |            |
| Scanner                   | Biom.      | Digital   | Italdata  | Sagem     | Biom.      | XMatch    | Italdata  | Swipe      |
| Image size                | 312 × 372  | 355 × 391 | 640 × 480 | 352 × 384 | 312 × 372  | 800 × 750 | 480 × 640 | 1500 × 208 |
| Live samples              | 2000       | 2004      | 2000      | 2009      | 2000       | 2500      | 2000      | 2500       |
| Fake samples              | 2000       | 2000      | 2000      | 2037      | 2000       | 2000      | 2000      | 2000       |
| Total subjects            | 200        | 82        | 92        | 200       | 45         | 64        | 45        | 70         |
| Spoof materials           | 5          | 5         | 5         | 5         | 5          | 5         | 5         | 5          |
| Cooperative               | Yes        | Yes       | Yes       | Yes       | No         | Yes       | No        | Yes        |
| Training patches (w = 32) | 605,582    | 688,225   | 703,702   | 748,368   | 582,306    | 848,384   | 643,363   | 1,454,649  |
| Training patches (w = 64) | 106,952    | 123,659   | 125,344   | 132,120   | 99,272     | 151,142   | 112,298   | 256,472    |

to create a mold of his/her finger, increasing the challenges related to the analysis of these datasets. Each dataset is divided into separate training and test sets, and is characterized by a different image size and resolution, number of individuals, number of fake and live samples, and number and type of materials used for creating the spoof artifacts. Since Data Augmentation is applied to the training sets, we also report the total number of training patches obtained after the DA step for each of our experimental benchmarks. According to the standard LivDet protocols, in the following the results are reported in terms of the Average Classification Error (ACE), which is the average between the percentage of misclassified live (ferrlive) and fake (ferrfake) samples, i.e. ACE = (ferrlive + ferrfake) / 2.
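For completeness, the ACE metric can be computed directly from the predicted and true labels; a small sketch (ours):

def ace(labels, predictions):
    """Average Classification Error: mean of ferrlive and ferrfake (in %)."""
    live = [p for t, p in zip(labels, predictions) if t == "live"]
    fake = [p for t, p in zip(labels, predictions) if t == "fake"]
    ferrlive = 100.0 * sum(p != "live" for p in live) / len(live)
    ferrfake = 100.0 * sum(p != "fake" for p in fake) / len(fake)
    return (ferrlive + ferrfake) / 2.0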

3.2 Contribution of Data Augmentation, Batch Normalization and Score Calibration Here we briefly recall the results of our analysis on the contributions of data augmentation, batch normalization and score calibration. In [19], we separately analyzed the effects of these elements on AlexNet using a patch size of 64 × 64. Our results showed that all these techniques consistently improve the classification accuracy.


Similar results, which have been omitted for the sake of brevity, were obtained with VGG for data augmentation and score calibration (we recall that batch normalization was not applied to VGG, as explained in Sect. 2.3). In conclusion, all these stages were included in our experiments.

3.3 Effect of Patch Size
Another parameter whose effect is worth analyzing is the patch size, since different sizes capture different amounts of local information, a fact that, in turn, can affect the classification accuracy in different ways. The experiments are organized as follows. For each reference model, we classified all the benchmarks using the two patch sizes of 32 × 32 and 64 × 64 and applying the techniques described in Sect. 3.2. The results we obtained are summarized in Table 2a. The table reports as well the average accuracy for both challenges (columns Avg). Since some authors (e.g. [34]) discarded the Crossmatch 2013 dataset, due to its generalization problems [33], we also report, for LivDet 2013, the average results excluding XMatch (column Avg−Xm).

Table 2 Classification errors on the experimental benchmarks

|                              | LivDet2011 |         |          |       |     | LivDet2013 |        |          |       |     |        |
| Method                       | Biom.      | Digital | Italdata | Sagem | Avg | Biom.      | XMatch | Italdata | Swipe | Avg | Avg−Xm |

(a) Patch-based TL approaches; bold italic values in the original represent the best (average/per benchmark) accuracy

| Patch-based AlexNet (w = 32) | 7   | 3.1 | 8.5 | 5.1 | 5.9 | 0.8 | 12.7 | 0.4 | 7.2 | 5.3 | 2.8 |
| Patch-based AlexNet (w = 64) | 4.0 | 4.5 | 6.3 | 3.7 | 4.6 | 0.4 | 5.4  | 0.5 | 1.3 | 1.9 | 0.7 |
| Patch-based VGG-19 (w = 32)  | 3.6 | 1.0 | 6.2 | 2.7 | 3.4 | 0.1 | 30.2 | 0.1 | 5.4 | 9.0 | 1.9 |
| Patch-based VGG-19 (w = 64)  | 2.8 | 2.7 | 5.8 | 1.8 | 3.3 | 0.2 | 10.1 | 0.4 | 0.6 | 2.8 | 0.4 |

(b) Baseline methods; bold italic values in the original represent the best (average/per benchmark) accuracy

| CNN-Random                   | 8.2 | 3.6 | 9.2 | 4.6 | 6.4 | 0.8 | 3.2  | 2.4 | 7.6 | 4.0 | 6.9 |
| DBN                          | –   | –   | –   | –   | –   | 1.2 | 7.0  | 0.6 | 2.9 | 2.9 | 1.6 |
| Spoofnet                     | –   | –   | –   | –   | –   | 0.2 | 1.7  | 0.1 | 0.9 | 0.7 | 0.4 |
| CIFAR-10                     | –   | –   | –   | –   | –   | 1.5 | 2.7  | 2.7 | 1.3 | 2.1 | 1.8 |
| AlexNet                      | 5.6 | 4.6 | 9.1 | 3.1 | 5.6 | 1.9 | 4.7  | 0.5 | 4.3 | 2.9 | 2.2 |
| VGG                          | 5.2 | 3.2 | 8   | 1.7 | 4.5 | 1.8 | 3.4  | 0.4 | 3.7 | 2.3 | 2.0 |


A first comment is related to the CNN model. It can be seen that, for a given patch size, VGG consistently outperforms AlexNet on all benchmarks, with the exception of XMatch 2013. Then, if we consider in more detail how the patch size affects each model, a patch size of 64 is, on average, better than size 32 for both architectures. However, it can also be observed that there is no unique patch size that is optimal for all cases. For instance, in Digital 2011 and Italdata 2013 the best size is 32 for both architectures, while for five other datasets the optimal one is 64 (again for both models). Finally, Biometrika 2013 shows a mixed behaviour, with an optimal size of 64 for AlexNet and 32 for VGG. These findings suggest that the “optimal” patch size depends mainly on the dataset, and not on the CNN architecture, and, given the differences obtained using different patch sizes, they show as well that methods for (heuristically) picking such an optimal size are sorely needed.

3.4 Assessment of the Proposed Approach
In order to assess the effectiveness of our patch-based Transfer Learning approach, we compare our results with those obtained, on the same datasets (we underline that, while all methods have been tested with LivDet2013, some results are not available for LivDet2011) and with the same experimental protocols, by other non-patch based deep learning methods, employing either Transfer Learning or ad-hoc trained DNNs. Table 2b summarizes the baseline results. The first block reports the results of ad-hoc DL methods, and in particular of the CNN-Random method of [18], the DBN approach of [15], and Spoofnet [16]. The second block of Table 2b shows the results of the full-image TL approaches based on CIFAR-10 [16], and on the two architectures that we considered in this work, i.e. AlexNet and VGG [18].
Comparing our results with those of full-image TL based approaches, we can observe that patch-based classification is, in general, effective. In particular, for both architectures, a patch size of 64 allows either outperforming or achieving comparable results to those of the corresponding full-image approach on all benchmarks. Exceptions are Sagem 2011, for which we incur a small degradation, and XMatch 2013. For this latter result, it should be stressed that none of our methods was capable of coping with XMatch 2013, unlike other approaches, which instead obtained good accuracies (e.g., the 1.7% scored by Spoofnet). As for our reference models, VGG consistently outperforms AlexNet (except for XMatch 2013) and, if we consider the results obtained with the optimal patch size and neglect XMatch 2013, we can see that it also outperforms all other full-image approaches, with the exception of a very small degradation (−0.1%) on Sagem 2011. If we compare our results with those listed in Table 2b, our method outperforms both CNN-Random and DBN on all the datasets except XMatch 2013. Furthermore, if we discard XMatch 2013, our approach achieves, on average, the same

accuracy of the state-of-the-art Spoofnet model. Again, we think these results show that the proposed patch-based TL approach is a viable alternative to the development of ad-hoc architectures. In conclusion, our experimental results, obtained on a large number of different benchmarks, show the effectiveness of the proposed approach, which, when VGG is used as reference model, is capable of obtaining state-of-the-art results in 6 out of the 8 analyzed benchmarks.

3.5 Computational Complexity
As a final piece of information, we provide some details related to the computational complexity of our approach. The software was implemented in MATLAB using MatConvNet [35] and we ran our experiments on an HPC cluster, equipped with multiple Xeon E5-2680 @2.50 GHz CPUs and 3 TB of DDR4 memory, allocating 12 cores for each experiment. The operating system is CentOS 6.6. The numbers are related to experiments with Biometrika 2013 (the numbers reported differ from those in [19] due to some code optimization). Considering the AlexNet pre-trained network with BN and DA, when the patch size is 32 × 32 the system can process an average of 88,000 patches per second (PPS) during training and 318,000 PPS during testing. When the patch size is increased to 64 × 64, we have 18,000 PPS in training and 55,000 PPS in testing. As for the pre-trained VGG with DA, we have 45,000 PPS in training and 153,000 PPS in testing using 32 × 32 patches, and 18,000 PPS in training and 60,000 PPS in testing for 64 × 64.

4 Conclusion
In this work we have presented a fingerprint liveness detection approach based on the analysis of small patches extracted from the fingerprint foreground image. These patches are classified using different CNN reference models. In particular, in this work, we analyzed the contribution of AlexNet and VGG, two well-known CNN models capable of achieving state-of-the-art results in several computer vision tasks. Using Transfer Learning, the available models, pre-trained on datasets spanning a different context, were “adapted” to the fingerprint liveness detection task. These fine-tuned models are used to individually classify each patch extracted from an input fingerprint. Then, the classification scores computed for all image patches are combined to tell a live from a fake sample. Our results suggest that the proposed patch-based TL approach (i) is effective in most of the cases, (ii) is capable of improving the results of similar models based on the processing of the whole fingerprint image, and (iii) achieves results that


are better or close to those of state-of-the-art methods that do not employ transfer learning. Based on our results, future work will be devoted to the analysis of multi-scale approaches and feature fusion techniques for the combination of different patch sizes and multiple CNN architectures. Acknowledgements Computational resources were provided by HPC@POLITO, a project of Academic Computing within the Department of Control and Computer Engineering at the Politecnico di Torino (http://www.hpc.polito.it).

References
1. Maltoni, D., Maio, D., Jain, A.K., Prabhakar, S.: Handbook of Fingerprint Recognition, 2nd edn. Springer Publishing Company, Incorporated (2009)
2. arsTECHNICA: Chaos Computer Club hackers trick Apple's TouchID security feature. Online (2013)
3. Matsumoto, T., Matsumoto, H., Yamada, K., Hoshino, S.: Impact of artificial “gummy” fingers on fingerprint systems. In: Proceedings of SPIE, vol. 4677 (2002)
4. Abhyankar, A., Schuckers, S.: Fingerprint liveness detection using local ridge frequencies and multiresolution texture analysis techniques. In: 2006 IEEE International Conference on Image Processing, pp. 321–324 (2006)
5. Nikam, S.B., Agarwal, S.: Fingerprint liveness detection using curvelet energy and co-occurrence signatures. In: Fifth International Conference on Computer Graphics, Imaging and Visualisation, 2008, CGIV '08, pp. 217–222 (2008)
6. Marasco, E., Sansone, C.: An anti-spoofing technique using multiple textural features in fingerprint scanners. In: 2010 IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications (BIOMS), pp. 8–14 (2010)
7. Galbally, J., Alonso-Fernandez, F., Fierrez, J., Ortega-Garcia, J.: A high performance fingerprint liveness detection method based on quality related features. Future Gener. Comput. Syst. 28, 311–321 (2012)
8. Gottschlich, C., Marasco, E., Yang, A.Y., Cukic, B.: Fingerprint liveness detection based on histograms of invariant gradients. In: Proceedings of IEEE IJCB 2014, pp. 1–7 (2014)
9. Gragnaniello, D., Poggi, G., Sansone, C., Verdoliva, L.: Local contrast phase descriptor for fingerprint liveness detection. Pattern Recognit. 48, 1050–1058 (2015)
10. Gottschlich, C.: Convolution comparison pattern: an efficient local image descriptor for fingerprint liveness detection. PLoS ONE 11, 1–12 (2016)
11. Ghiani, L., Marcialis, G.L., Roli, F.: Experimental results on the feature-level fusion of multiple fingerprint liveness detection algorithms. In: Proceedings of the Multimedia and Security Workshop, MM&Sec '12, pp. 157–164. ACM, New York, NY, USA (2012)
12. Gragnaniello, D., Poggi, G., Sansone, C., Verdoliva, L.: Fingerprint liveness detection based on Weber local image descriptor. In: IEEE BIOMS 2013, pp. 46–50 (2013)
13. Pereira, L.F.A., Pinheiro, H.N.B., Silva, J.I.S., Silva, A.G., Pina, T.M.L., Cavalcanti, G.D.C., Ren, T.I., de Oliveira, J.P.N.: A fingerprint spoof detection based on MLP and SVM. In: Proceedings of IJCNN 2012, pp. 1–7 (2012)
14. Toosi, A., Bottino, A., Cumani, S., Negri, P., Sottile, P.L.: Feature fusion for fingerprint liveness detection: a comparative study. IEEE Access 5, 23695–23709 (2017)
15. Kim, S., Park, B., Song, B.S., Yang, S.: Deep belief network based statistical feature learning for fingerprint liveness detection. Pattern Recognit. Lett. 77, 58–65 (2016)
16. Menotti, D., Chiachia, G., Pinto, A., Schwartz, W.R., Pedrini, H., Falcao, A.X., Rocha, A.: Deep representations for iris, face, and fingerprint spoofing detection. IEEE Trans. Inf. Forensics Secur. 10, 864–879 (2015)


17. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211–252 (2015)
18. Nogueira, R.F., de Alencar Lotufo, R., Machado, R.C.: Fingerprint liveness detection using convolutional neural networks. IEEE Trans. Inf. Forensics Secur. 11, 1206–1213 (2016)
19. Toosi, A., Cumani, S., Bottino, A.: CNN patch-based voting for fingerprint liveness detection. In: Proceedings of the 9th International Joint Conference on Computational Intelligence—Volume 1: IJCCI, INSTICC, pp. 158–165. SciTePress (2017)
20. Thai, D.H., Huckemann, S., Gottschlich, C.: Filter design and performance evaluation for fingerprint image segmentation. CoRR (2015). arXiv:abs/1501.02113
21. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
22. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition (2014). arXiv:1409.1556
23. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15, 1929–1958 (2014)
24. Simon, M., Rodner, E., Denzler, J.: ImageNet pre-trained models with batch normalization (2016). arXiv:1612.01452
25. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning, pp. 448–456 (2015)
26. Nakada, M., Wang, H., Terzopoulos, D.: AcFR: active face recognition using convolutional neural networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 35–40. IEEE (2017)
27. Liu, T., Xie, S., Yu, J., Niu, L., Sun, W.: Classification of thyroid nodules in ultrasound images using deep model based transfer learning and hybrid features. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 919–923. IEEE (2017)
28. Nogueira, K., Penatti, O.A., dos Santos, J.A.: Towards better exploiting convolutional neural networks for remote sensing scene classification. Pattern Recognit. 61, 539–556 (2017)
29. Minaee, S., Abdolrashidiy, A., Wang, Y.: An experimental study of deep convolutional features for iris recognition. In: 2016 IEEE Signal Processing in Medicine and Biology Symposium (SPMB), pp. 1–6. IEEE (2016)
30. Simard, P.Y., Steinkraus, D., Platt, J.C.: Best practices for convolutional neural networks applied to visual document analysis. In: Proceedings of the Seventh International Conference on Document Analysis and Recognition—Volume 2, ICDAR '03, pp. 958–. IEEE Computer Society, Washington, DC, USA (2003)
31. Brümmer, N., Swart, A., Van Leeuwen, D.: A comparison of linear and non-linear calibrations for speaker recognition. In: Odyssey 2014: The Speaker and Language Recognition Workshop (2014)
32. Yambay, D., Ghiani, L., Denti, P., Marcialis, G., Roli, F., Schuckers, S.: LivDet 2011—fingerprint liveness detection competition 2011. In: 2012 5th IAPR International Conference on Biometrics (ICB), pp. 208–215 (2012)
33. Ghiani, L., Yambay, D., Mura, V., Tocco, S., Marcialis, G.L., Roli, F., Schuckers, S.: LivDet 2013 fingerprint liveness detection competition 2013. In: 2013 International Conference on Biometrics (ICB), pp. 1–6 (2013)
34. Gragnaniello, D., Poggi, G., Sansone, C., Verdoliva, L.: An investigation of local descriptors for biometric spoofing detection. IEEE Trans. Inf. Forensics Secur. 10, 849–863 (2015)
35. Vedaldi, A., Lenc, K.: MatConvNet: convolutional neural networks for MATLAB. In: Proceedings of the 23rd ACM International Conference on Multimedia, MM '15, pp. 689–692. ACM, New York, NY, USA (2015)

Environment Scene Classification Based on Images Using Bag-of-Words
Taurius Petraitis, Rytis Maskeliūnas, Robertas Damaševičius, Dawid Połap, Marcin Woźniak and Marcin Gabryel

Abstract We analyse environment scene classification methods based on the Bag-of-Words (BoW) model. The BoW model encodes images by a bag of visual features, which is a sparse histogram over a dictionary of visual features extracted from an image. We analyse five feature detectors (Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Features from Accelerated Segment Test (FAST), Maximally Stable Extremal Regions (MSER), and grid-based) and three feature descriptors (SIFT, SURF and U-SURF). Our experiments show that feature detection with a grid combined with feature description using the SIFT descriptor, and feature detection with SURF combined with feature description with U-SURF, are most effective when classifying (using a Support Vector Machine (SVM)) images into eight outdoor scene categories (coast, forest, highway, inside city, mountain, open country, street, and high buildings). Indoor scene classification into five categories (bedroom, industrial, kitchen, living room, and store) achieved worse results, while the most confused categories were industrial/store images. The classification of the full image dataset (15 outdoor and indoor categories) achieved an overall accuracy of 67.49 ± 1.50%, while most errors came from misclassifications of indoor images. The results of the study are applicable to assisted living applications and security systems.
Keywords Object recognition · Scene recognition · Image processing · Bag-of-Words · Assisted living

T. Petraitis · R. Maskeliūnas · R. Damaševičius (B)
Department of Multimedia Engineering, Kaunas University of Technology, 44249 Kaunas, Lithuania
e-mail: [email protected]
D. Połap · M. Woźniak
Faculty of Applied Mathematics, Institute of Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
M. Gabryel
Institute of Computational Intelligence, Czestochowa University of Technology, 42-200 Czestochowa, Poland
© Springer Nature Switzerland AG 2019
C. Sabourin et al. (eds.), Computational Intelligence, Studies in Computational Intelligence 829, https://doi.org/10.1007/978-3-030-16469-0_15


1 Introduction
The estimated number of blind people in the world reached 36.0 million in 2015, while the number of people with moderate and severe visual impairment increased to 216.6 million [1]. The integration of these people into society has long been a relevant goal; early efforts included the worldwide introduction of the Braille tactile writing system for the blind. With the advancement of technology, new opportunities for the integration of the partially sighted and the blind into society are emerging. Computer-based interpretation of visual information obtained from an environment can be used to help the blind to understand the environment, to choose the best travel routes and to avoid obstacles while moving. For example, a Microsoft Kinect sensor based real-time navigation system [2] calculates the user's distance to an obstacle and, if necessary, vibrates to warn its user about any obstacles. Any system capable of extracting contextual (semantic) information from an image or video can help the blind in performing everyday tasks. For example, relevant textual information can be extracted from environment images [3]. Such information would particularly help visually impaired people to orient themselves in artificial environments such as shops. Knowledge of the context helps to simplify the object detection task by narrowing the search field and reducing the categories of objects to be searched [4]. Of course, human performance by far exceeds the efficiency of computer systems when performing object or environment recognition tasks. However, human visual abilities are degraded in a dark environment or after a long observation time, and under certain working conditions it is dangerous or impossible for a person to work [5]. Computer-based recognition systems can extend the limits of human visual capabilities, e.g., overcoming inherited genetic disorders affecting human vision such as colour vision deficiency [6], and complement them with new abilities for environment recognition and perception, especially for mobility tasks [7].
The aim of this paper is to present a method for classifying digital images into specific environment categories (e.g., forest, city), which may be usable in a scene recognition system for partially sighted or blind people. The concept is relevant for assisted living systems [8], which aim to provide devices and services to enable independent living of disabled people. We analyse different object and scene recognition algorithms, compare methods for image feature detection and description, and present the results of experiments. This paper is an extended version of the paper [9] presented at the International Joint Conference on Computational Intelligence, IJCCI 2017, Funchal, Madeira, Portugal.


2 State of the Art
The environment recognition methods can be categorized into global and local information based methods, as well as hybrid ones. Global information based methods analyse each scene as an individual object and classify the scenes according to their global characteristics. Each scene can be described by a small set of features derived from the image. Examples of global features are the Spatial Envelope features (naturalness, openness, roughness, expansion, ruggedness) [10] that represent the dominant spatial structure of a scene. First, for each of these properties, discriminant spectral templates are generated. Then, by multiplying the energy spectrum of the image with the corresponding template, a characteristic value for that image is obtained. The classification is performed using a K-Nearest Neighbors (KNN) classifier, reaching, on average, 86% accuracy when classifying images into 8 categories. The relatively high accuracy achieved shows that, in order to classify environment images, specific information about the objects contained therein is not needed and global information about the scene is enough.
Local information based methods analyse local properties of each scene, so the analysis of an image begins from fine details and their local properties (quantity, position, composition or saliency [11]) before deciding to which category the scene belongs. For example, Vogel and Schiele [12] categorized scenes by means of a semantic assessment of typicality. These categories reflect the most general categories of scenes that are used as the starting point for describing each image. In reality, most natural scenes can be described ambiguously, depending on the subjective point of view. In each category, typical and less representative examples of this category can be found, and the differences in typicality are the most effective feature in classification.
Csurka et al. [13] propose a scene classification model with the following main steps: (1) automatic detection of specific visual image features and their description with descriptors, (2) attribution of the descriptions of these features to clusters (a visual dictionary), (3) creation of a bag of keypoints by counting how many features are assigned to each cluster, and (4) using the bag of features as an input vector to a classifier, assigning the image to the predicted category. The cluster mentioned in the second step is a pool of similar distinctive features. These pools are formed from a large set of features by vector quantization algorithms. Clusters have their own centres, which are used as the words of a visual dictionary: new features, or visual words, are assigned to the centre of the nearest cluster. This illustrates the analogy between a language dictionary made of words and a visual feature dictionary that consists of vectors representing the centres of clusters. Having a visual dictionary, scene images can be described by the histograms of the visual words in them, and the task of categorization is reduced to a template matching task. The dictionary building step is the most time consuming operation in the BoW framework. It involves clustering all the descriptors obtained from the image database to identify representative clusters. Feature reduction can help to improve performance while maintaining a similar level of quality [14]. Heuristic methods, such as the Firefly algorithm, can also be applied to find keypoints [15].
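The four steps of the BoW pipeline described above can be prototyped compactly; the following is a hedged sketch in Python, where the library choices (OpenCV, scikit-learn) and all parameters are our assumptions for illustration, not the methods used in the cited works.

import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def descriptors(img):
    """Step 1: detect keypoints and compute SIFT descriptors."""
    _, desc = sift.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def build_vocabulary(train_images, k=200):
    """Step 2: vector-quantize all training descriptors into k visual words."""
    stacked = np.vstack([descriptors(im) for im in train_images])
    return KMeans(n_clusters=k, n_init=3).fit(stacked)

def bow_histogram(img, vocab):
    """Step 3: histogram of nearest visual words (the 'bag of keypoints')."""
    desc = descriptors(img)
    hist = np.bincount(vocab.predict(desc), minlength=vocab.n_clusters)
    return hist / max(hist.sum(), 1)

def train_classifier(train_images, train_labels, vocab):
    """Step 4: train an SVM on the normalized word histograms."""
    X = [bow_histogram(im, vocab) for im in train_images]
    return SVC(kernel="linear").fit(X, train_labels)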


Several extensions or improvements of the BoW model have been proposed over the years, such as:

– the “bag of words” model with an algebraic multi-grid method, which can recognize blurred objects in the scene [16];
– a scene descriptor based on a library of raw image data, which is used to find visual phrases (VPs) as valid landmarks that discriminatively and compactly explain an input query/database image [17];
– an extension of the BoW model over both audio and video feature spaces, combined with compressive sensing for multi-modal fusion via audio-visual feature dimensionality reduction and scene classification using audio-visual data [18];
– an adaptive vocabulary tree (AVT) method for image indexing and retrieval of words, which provides improved performance over a traditional vocabulary tree in a dynamic environment [19];
– hierarchical bag-of-textons produced using a multi-scale spatial layout filter [20];
– a BoW representation based on Fast Appearance Based Mapping (FAB-MAP) and Chow Liu trees [21];
– a variant of BoW in which the “words” are not taken from the same image but from temporally ordered images using a Self-Organizing Map (SOM) [22];
– a modification of the BoW pooling step using a merging scheme that exploits the specificities of edge-based descriptors and pools low and high contrast regions separately [23];
– a human retina model to preprocess video sequences before constructing the BoW model [24];
– modified descriptors (DSP-SIFT (domain-size pooling SIFT) and CN (colour-name)) that collect colour information to form the visual words of an image [25];
– a local-global feature bag-of-visual-words scene classifier (LGFBOVW), which uses a shape-based invariant texture index as the global texture feature, the mean and standard deviation values as the local spectral feature, and the dense scale-invariant feature transform (SIFT) feature as the structural feature, applying an appropriate feature fusion strategy at the histogram level for classification of high spatial resolution (HSR) remote sensing imagery [26];
– an optimized descriptor encoding strategy based on an improved Fisher vector [27];
– a feature significance-based multiBOW scene classification method, which integrates information on the feature separating capabilities among different scene categories into the traditional two-phase classification-based score-level fusion framework, realizing different treatments for different feature channels when classifying different scene categories [28];
– a contextual bag-of-words (CBOW) discriminative appearance model for visual tracking based on a Bayesian framework, with an explicit detection method that handles drifting and occlusion in images [29];
– optimizing the feature extractor using convolutional neural networks (CNNs) to learn more suitable visual vocabularies [30];
– extraction of highly discriminative SIFT features using the within- and between-class correlation coefficients, and selection of highly discriminative SIFT feature pairs using a minimum spanning tree, which are used to construct the visual word dictionary and visual phrase dictionary and are concatenated into a joint BoW histogram with different weights to achieve higher classification accuracy [31];
– feature pre-processing based on PCA-whitening, and feature fusion to combine mid-level features and global features [32];
– higher-order Occurrence Pooling that aggregates over co-occurrences of visual words and is based on a linearization of the Minor Polynomial Kernel to work with various pooling operators [33];
– modelling the global spatial distribution of visual words by using both distance and angle information in the BoW representation [34];


– Bag of Bags of Visual Words constructed using Irregular Pyramid Matching with the Normalized Cut methodology to subdivide the image into a connected graph [35];
– and using additional image features (the colour histogram obtained from the surroundings of keypoints) to improve image classification results [36].

3 Feature Detection

Further, we review four image feature detection methods—Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Features from Accelerated Segment Test (FAST) and Maximally Stable Extremal Regions (MSER)—as well as the Vector of Locally Aggregated Descriptors (VLAD).

SIFT [37] detects features regardless of the scale and orientation of the image, and reliably detects the same features even in slightly distorted images, e.g., after adding noise or changing the lighting and/or viewing point. SIFT first detects potentially distinctive features, then measures the stability of these features and determines their magnitude, eliminating unstable ones. Then, according to the local gradient direction, one or more orientations are calculated and assigned to each feature. With this information, the image data can be normalized with respect to scale, position and orientation—so the features become invariant to these transformations. The method also includes a descriptor, which describes the detected features by 128-dimensional vectors. The gradient values and orientations are initially calculated for the surroundings of the keypoint, using a Gaussian filter on the entire image. Then the descriptor's coordinates and gradient orientations are rotated into the direction of the feature, so that the descriptor maintains a normalized orientation of the feature.

The scale space of an image is defined as a function, L(x, y, σ), produced by the convolution of a variable-scale Gaussian, G(x, y, σ), with an input image, I(x, y), i.e.:

L(x, y, σ) = G(x, y, σ) ∗ I(x, y)    (1)

here ∗ is the convolution operation and

G(x, y, σ) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))    (2)

The keypoints are detected using scale-space extrema in the difference-of-Gaussians function D, convolved with the image I(x, y):

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) ∗ I(x, y) = L(x, y, kσ) − L(x, y, σ)    (3)


here k is a constant multiplicative factor which separates two nearby scales. To detect the local maxima and minima of D(x, y, σ), each sample point is compared to its eight neighbors in the current image and to its nine neighbors in each of the scales above and below. It is selected only if it is larger than all of these neighbours or smaller than all of them.

SURF [38] uses second-order Gaussian derivative approximations with a box filter, thus losing some accuracy but significantly shortening the calculation time. To detect features at different image scales, in contrast to SIFT, there is no need to apply a Gaussian filter repeatedly; it is enough to change the size of the box used, again avoiding time-expensive calculations. SURF descriptors use only a 64-dimensional vector, which is easier to generate and compare, but stores less information that may be useful in itself. Given an image I of size m × n, its integral image can be computed as:

I_Σ(x, y) = Σ_{u=0}^{x} Σ_{v=0}^{y} I(u, v)    (4)
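To make Eqs. (1)–(4) concrete, the following sketch computes one difference-of-Gaussians level and an integral image with OpenCV (the function names and the float input type are our own illustration; the original implementation is not published):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// One difference-of-Gaussians level, D(x, y, sigma), per Eqs. (1) and (3).
// Expects a single-channel float image (CV_32F).
cv::Mat dogLevel(const cv::Mat& grayF, double sigma, double k)
{
    cv::Mat low, high;
    cv::GaussianBlur(grayF, low,  cv::Size(), sigma);      // L(x, y, sigma)
    cv::GaussianBlur(grayF, high, cv::Size(), k * sigma);  // L(x, y, k*sigma)
    return high - low;                                     // Eq. (3)
}

// Integral image per Eq. (4): cumulative sums over u = 0..x, v = 0..y.
cv::Mat integralImage(const cv::Mat& gray)
{
    cv::Mat sums;
    cv::integral(gray, sums);
    return sums;
}
```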

The detection of keypoints by SURF is based on the Hessian matrix of a point x in image I at scale σ, computed by:

H(x, σ) = ( L_xx(x, σ)  L_xy(x, σ)
            L_xy(x, σ)  L_yy(x, σ) )    (5)

here L_xx(x, σ) is the convolution of the second-order Gaussian derivative with image I in x, and σ is the standard deviation of the Gaussian; similarly for L_xy(x, σ) and L_yy(x, σ).

FAST [39] is a method for corner detection. The main feature of this detector is its speed: the FAST method can detect corners in a PAL-format video in real time, using only 7% of the available time for single-frame processing. The algorithm is fast, but it is not resistant to large amounts of noise in images, and the results depend on the choice of the threshold value. The FAST detector uses the proportion of the circular arc length L instead of the proportion of the circular area to measure the self-dissimilarity of the nucleus. Each pixel point on the circle x ∈ {x_1, x_2, …, x_16} has one of three states S_x when its intensity value I_x is compared with the intensity value of the nucleus I_p, as shown in (6). FAST then classifies p as a corner if there exists a segment which contains at least 12 contiguous pixels which are all brighter or all darker than pixel p.

S_x = { d (darker),   if I_x ≤ I_p − T
        b (brighter), if I_x ≥ I_p + T
        s (similar),  if I_p − T < I_x < I_p + T }    (6)
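The three-state test of Eq. (6) can be transcribed literally; a minimal sketch (in practice, OpenCV's FastFeatureDetector performs this test with additional machine-learned optimizations):

```cpp
// Classifies a circle pixel with intensity ix relative to the nucleus
// intensity ip and threshold t, per Eq. (6). FAST then labels the nucleus
// a corner if at least 12 contiguous circle pixels share the state
// 'Darker' or the state 'Brighter'.
enum class State { Darker, Similar, Brighter };

State classify(int ix, int ip, int t)
{
    if (ix <= ip - t) return State::Darker;    // d
    if (ix >= ip + t) return State::Brighter;  // b
    return State::Similar;                     // s
}
```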

MSER [40] detects specific interest points in an image. An interest point is a pixel which has a well-defined position and can be robustly detected; interest points have high local information content. An extremal region is a set of interconnected image points that make up a contour after thresholding the image with a threshold value t. The intensity of all points within such a region is either brighter or darker than the points on the contour. Such regions are invariant to scaling, lighting, orientation and viewing point transforms. The stability of a region R_t is expressed as follows:

Ψ(R_t) = A(R_t) / (dA(R_t)/dt)    (7)

here A(R_t) denotes the area of the region R_t. The region is considered stable when its area changes only slightly with a change of the threshold value t. The region R_t is maximally stable if the function Ψ(R_t) reaches a local maximum at the value t. Such regions are the features found by the MSER algorithm.

The Vector of Locally Aggregated Descriptors (VLAD) [41] assigns each keypoint to its closest visual word and accumulates the differences for each visual word. VLAD can be used to overcome the quantization error problem faced in the Bag-of-Words (BoW) representation.
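For orientation, all four reviewed detectors are available in OpenCV 3.x, where SIFT and SURF reside in the opencv_contrib module xfeatures2d; a construction sketch (parameter values are illustrative only; the settings actually used are given in Sects. 4 and 5):

```cpp
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>

// Construction of the reviewed detectors/descriptors (OpenCV 3.x).
void buildDetectors()
{
    cv::Ptr<cv::Feature2D> sift = cv::xfeatures2d::SIFT::create();
    // Last argument 'upright = true' yields U-SURF, i.e., the orientation
    // computation is skipped (see Sect. 4).
    cv::Ptr<cv::Feature2D> usurf =
        cv::xfeatures2d::SURF::create(100.0, 4, 3, false, true);
    cv::Ptr<cv::Feature2D> fast = cv::FastFeatureDetector::create(30);
    cv::Ptr<cv::Feature2D> mser = cv::MSER::create();
}
```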

4 Proposed Method

For environment recognition we apply a method known by several names in the literature: Bag of Keypoints [13], Bag of Features [42], Bag of Visual Words [43], or Bag-of-Words [44]. The goal of the BoW approach is to substitute each local descriptor of an image with visual words obtained from a predefined vocabulary, in order to apply traditional text retrieval techniques to content based image retrieval (CBIR). This model is widely used and has proven its effectiveness in solving image classification tasks [12]. The model covers almost the entire recognition process, but different methods can be used for each task of the model (see Fig. 1).

Before applying the method, the images are preprocessed (see Fig. 2): an image is converted to grayscale, then histogram normalization is applied, and the size of the image is reduced so that it does not exceed a predefined value.

The first stage of the method is the detection of features in the image. In this step, small patches of the image that are likely to be significant for classification are found. The detected features are described in such a way that they can be compared with each other. Then, each feature is assigned to the most similar “visual word” from a previously generated dictionary. The dictionary of visual words is derived from clusters of similar features. Then the image is encoded by a vector representing the frequency of each visual word in the image.
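A sketch of this preprocessing chain (our own reconstruction following Fig. 2, assuming OpenCV; maxSide stands for the predefined size limit mentioned above):

```cpp
#include <algorithm>
#include <opencv2/imgproc.hpp>

// Preprocessing per Fig. 2: (b) grayscale, (c) histogram normalization,
// (d) downscale so that neither side exceeds maxSide.
cv::Mat preprocess(const cv::Mat& bgr, int maxSide)
{
    cv::Mat gray, equalized, scaled;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, equalized);
    double s = static_cast<double>(maxSide) / std::max(equalized.cols, equalized.rows);
    if (s < 1.0)
        cv::resize(equalized, scaled, cv::Size(), s, s, cv::INTER_AREA);
    else
        scaled = equalized;  // never enlarge
    return scaled;
}
```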


Fig. 1 Outline of Bag-of-Words model (reproduced from [9])

Fig. 2 Image preprocessing. a input image (from dataset [45]), b grayscale image, c grayscale image with normalized histogram, d scaled image (reproduced from [9])

The vector is used as the input of a classifier. We use and analyse four feature detection methods: SIFT, SURF, FAST, and MSER.

The SIFT descriptor describes each feature using a 128-dimensional vector, which is composed of histograms of gradients in 8 different orientations for regions around the image keypoint. Depending on the distance to the keypoint, a weight is assigned to each calculated orientation. The weights are calculated using a Gaussian function with a standard deviation parameter equal to half of the scale of the feature. The resulting vector is normalized to a unit vector, a threshold function capping large components is applied to it, and the vector is normalized again.


Fig. 3 Example: a subset of a dictionary of visual words (reproduced from [9])

The SURF descriptor describes each feature with a 64-dimensional vector as follows. First, the dominant orientation of the keypoint is calculated. Then, to describe the region around the keypoint, a square region is extracted, centred on the keypoint and oriented along the dominant orientation. The region is split into smaller 4 × 4 square sub-regions, and for each one, Haar wavelet responses are extracted. A variation of the SURF descriptor is U-SURF. In this variation, the step of calculating the dominant orientation of features is skipped, thus improving the algorithm's performance, but losing resistance to orientation transforms.

The dictionary of visual words is created from a large collection of images by detecting their features and clustering them using the k-means method. To improve the algorithm's performance and results, we use an improved k-means initialization method [46], which chooses the starting centres by evaluating the distance of each candidate data point from the already chosen centres. A different number of visual words can be derived; we use 350 words (selected heuristically). As the k-means algorithm does not always converge, or converges only after a very large number of iterations, we set the maximum number of iterations to 20,000. Clustering is repeated twice and the clustering with the smallest variation is selected. An example of visual words is given in Fig. 3. For mapping keypoints to clusters, the Fast Approximate Nearest Neighbor Search Based Matcher [47] is used. Histograms record which visual words an image contains and how often (Fig. 4). Each histogram is normalized so that the sum of all its column values is equal to 1.

For classification, we use the Support Vector Machine (SVM) [48]. SVM aims to find the optimal hyperplane which separates two classes in a multidimensional space. The optimality is estimated from the distance of the hyperplane to the data of both classes. Since not all data can be separated linearly, the kernel trick is used: the data is projected into a higher dimensional space, where it may become separable. We use the χ2 kernel. The gamma parameter of this kernel, determined by the trial-and-error method, is 0.50625. For training, the number of iterations is bounded to 70,000. Since SVM is a binary classification method, classifying data into more than two classes requires several binary classifiers whose results are combined by voting. According to the voting results, the winner class is determined.
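The dictionary and classifier construction described above maps closely onto OpenCV's BoW utilities; a sketch with the stated parameters (350 words, improved seeding, at most 20,000 iterations, two clustering attempts, χ2 SVM with γ = 0.50625 and at most 70,000 training iterations) — data loading and per-image histogram extraction (cv::BOWImgDescriptorExtractor backed by a cv::FlannBasedMatcher) are omitted:

```cpp
#include <opencv2/features2d.hpp>
#include <opencv2/ml.hpp>

// Cluster all training descriptors into the 350-word visual vocabulary.
cv::Mat buildVocabulary(const cv::Mat& allDescriptors)
{
    cv::TermCriteria crit(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS,
                          20000, 1e-4);
    cv::BOWKMeansTrainer trainer(350, crit, /*attempts=*/2,
                                 cv::KMEANS_PP_CENTERS);  // k-means++ seeding
    return trainer.cluster(allDescriptors);
}

// Train the chi-squared SVM on the normalized word histograms
// (labels is a CV_32S column of class indices; OpenCV handles the
// multi-class case internally by one-vs-one voting).
cv::Ptr<cv::ml::SVM> trainClassifier(const cv::Mat& histograms,
                                     const cv::Mat& labels)
{
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setType(cv::ml::SVM::C_SVC);
    svm->setKernel(cv::ml::SVM::CHI2);
    svm->setGamma(0.50625);
    svm->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER, 70000, 1e-6));
    svm->train(histograms, cv::ml::ROW_SAMPLE, labels);
    return svm;
}
```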


Fig. 4 Calculation of histograms (reproduced from [9])

5 Experiments

5.1 Hardware and Software

The methods were implemented on a notebook PC with an Intel Core i7-3630M 3.4 GHz CPU. The algorithms were implemented in the C++ programming language using the OpenCV 3.1 open source library (https://github.com/Itseez/opencv) with the optional opencv_contrib module (https://github.com/Itseez/opencv_contrib). The CMake 3.5.0-rc3 software and the Microsoft Visual Studio 2015 compiler were used to compile the OpenCV library and the output files into binaries for the Windows 10 OS.

5.2 Dataset

We used the dataset from [45]. The dataset consists of 8 categories of environmental imagery: coast, forest, highway, city, mountain, open country, street, and high buildings. Each category contains more than 250 annotated images with 256 × 256 pixels resolution. Figure 5 shows examples of images in each category. We extended the original dataset with indoor environment image categories from [42], which added two new categories to the set of categories used in [49]. Therefore, the new dataset has 15 indoor and outdoor categories: coast, forest, highway, inside city, mountain, open country, street, high buildings, bedroom, industrial, kitchen, living room, store, office, and suburban. In Fig. 6, examples of images from the additional image categories are shown. Here we used only the first 250 images of each category: 200 images for training and 50 images for testing.


Fig. 5 Examples of image categories from dataset [45]

Fig. 6 Examples of images in additional seven categories (reproduced from [9])

5.3 Results

First, we compared different feature detectors and descriptors by analysing various combinations of them. For these experiments we used only images from the 8 outdoor categories. Each image was resized to 240 × 240 pixels. Accuracy was calculated by dividing the number of correctly categorized images by the number of images used for testing.

We compared three combinations of feature detectors/descriptors: SIFT/SIFT, SURF/SURF and SURF/U-SURF. Here the first term denotes the detector, and the second one the descriptor. The results of the experiments are presented in Fig. 7. The best accuracy (84%) was obtained using the SURF detector and the U-SURF descriptor. This result can be explained by the resistance of the BoW model to changes in the orientation of features, so no additional calculation of orientation is required.


Fig. 7 Comparison of SIFT and SURF methods (based on data from [9])

We have also analysed the effectiveness of a grid as a feature detector. We used a grid step of 12 and a feature size of 6. The results are presented in Fig. 8. The SIFT descriptor achieves the best results (82% accuracy) when describing the features sampled by the grid. The U-SURF descriptor again turned out to be better than the classic SURF, so we can claim that the orientation information is not required in the descriptor for this model.

Finally, we compared the FAST and MSER detectors. An important parameter of the FAST detector—the threshold value—is indicated by the number in the name of the detector, e.g., FAST30. Figure 9 depicts their results using different descriptors. As in previous experiments, the SURF and U-SURF descriptors appear to be worse than SIFT when describing features found by other detectors. The best accuracy (79.75%) was obtained using a FAST detector with a threshold value of 30 and a SIFT descriptor. The MSER region detector was not effective for detecting distinctive features.

We have also compared the performance in terms of the mean time required for encoding one image (which includes detecting image features, describing them with the descriptor and then describing the image by a histogram). The results are shown in Fig. 10. Using the SURF detector with the U-SURF descriptor, an image is encoded on average 33% faster than with the SIFT/SIFT combination. Both combinations yield similar results, so the combination SURF/U-SURF is more cost effective in terms of time. FAST30/SIFT was the slowest, because with a threshold value of 30 the FAST algorithm detects a very large number of features.


Fig. 8 Classification results using grid-based feature detection (based on data from [9])

Fig. 9 Comparison of FAST and MSER feature detectors


Fig. 10 Mean time of image encoding (based on data from [9])

As training used the same 200 dataset images and testing the remaining 50 dataset images from each category, the reliability of the obtained accuracy estimates is limited. In order to obtain more accurate and reliable results, the classifier was trained 100 times with the two best combinations (SURF/U-SURF and Grid/SIFT), while randomly selecting 200 images for training and 50 for testing from each category. The combination SURF/U-SURF achieved an average accuracy of 83.51 ± 1.67% and the Grid/SIFT combination achieved an accuracy of 84.99 ± 1.45%.

In Figs. 11 and 12, the confusion matrices for the outdoor environment categories are presented. The vertical axis shows the real class, and the horizontal axis the predicted class. The correctly categorized images are shown on the diagonal of the confusion matrix. The entries are the average predictions over the 100 tests, in each of which 50 images of each class were classified. Both confusion matrices are very similar and share the same general features: the images of forest and high buildings are classified most accurately, while open country images are often mixed with coastal and mountain views. Note that the dataset used is not perfect and may contain some ambiguous images. Also, some image categories are semantically similar, e.g., street imagery sometimes appears in urban imagery.

These experiments show that the BoW model is most effective when the SURF/U-SURF or Grid/SIFT combination of detectors and descriptors is used, with both achieving over 83% accuracy. The SURF descriptor produced good results only when used with the SURF detector, and the SIFT descriptor was most effective in describing the features detected by the grid technique. The FAST detector with a low threshold value turned out to detect many distinctive features, and although it yielded a good result, it took a relatively long time. The MSER detector proved to be inefficient in detecting distinctive features.
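The aggregation behind figures such as 83.51 ± 1.67% is a plain mean and standard deviation over the per-run accuracies; for completeness, a small helper (a hypothetical name, not part of the original code):

```cpp
#include <cmath>
#include <numeric>
#include <utility>
#include <vector>

// Mean and standard deviation of the accuracies from the repeated
// random-split runs (e.g., 100 runs with 200 training/50 test images
// per category).
std::pair<double, double> meanAndStd(const std::vector<double>& accuracies)
{
    const double mean = std::accumulate(accuracies.begin(), accuracies.end(), 0.0)
                        / accuracies.size();
    double variance = 0.0;
    for (double a : accuracies) variance += (a - mean) * (a - mean);
    return { mean, std::sqrt(variance / accuracies.size()) };
}
```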


Fig. 11 Confusion matrices for outdoor environment categories: SURF/U-SURF

Fig. 12 Confusion matrices for outdoor environment categories: Grid/SIFT


Fig. 13 Confusion matrices for indoor environment scenes: SURF/U-SURF

For the classification of indoor scenes, we have used the extended dataset containing 15 categories of images, five of which are indoor scenes. Since this set contains images of different sizes, they were proportionally reduced so as not to exceed 200 × 200 pixels. The combinations of Grid/SIFT and SURF/U-SURF detectors/descriptors were used to detect and describe the features. For the grid method, the grid step and feature size were also reduced proportionally, to 10 and 5. Because this dataset has fewer images per category, 200 were used for each category: 150 images for training and 50 images for testing.

First, the classification accuracy was tested on the five indoor scenes. The test was performed 50 times with randomly selected training and testing images; an average accuracy of 55.85 ± 2.81% was achieved with the SURF/U-SURF combination and an accuracy of 58.16 ± 2.22% with the Grid/SIFT combination. The results are presented as confusion matrices in Figs. 13 and 14. From the results we can see that when using the Grid/SIFT combination, there is a better separation between bedroom and kitchen images, but essentially all indoor images are mixed together. The best accuracy among the indoor scenes is achieved with the store images. Note that the visual images of the bedroom, the kitchen and the living room look quite similar even to a person, and differ only by the objects they contain. The store images are the best separated, probably because the store environment is not visually similar to home rooms, as it has many similar and repetitive objects, little furniture, and a small amount of open space.

Finally, classification of all 15 categories of environment scenes was performed. The test was performed 50 times using randomly selected 150 training


Fig. 14 Confusion matrices for indoor environment scenes: Grid/SIFT

images and 50 test images per category, using the Grid/SIFT combination with the same parameters; an average accuracy of 67.49 ± 1.50% was obtained. The confusion matrix for both indoor and outdoor scenes is presented in Fig. 15. Note that the indoor scenes are rarely mixed with outdoor scenes, but most of the indoor scenes are mixed with each other.

Two new scene categories are included: the industrial environment and the suburbs. Images of the industrial environment include both outdoor and indoor scenes. The images of suburban scenes were classified quite accurately, while the industrial environment was often mixed with most other categories, especially with store scenes—on average, 7 out of 50 images of the industrial environment were categorized as stores. As we can see from Fig. 16, although the industrial scenes are not visually very similar to other scenes, they are poorly classified, probably due to the lack of data used for training, given that the images in this category contain both outdoor and indoor scenes.

The classification of indoor scenes proved to be a much more difficult task than the classification of outdoor scenes. This is partly because the indoor scenes are created artificially, and different scene categories are similar in their visual features. Finally, we can cluster image categories based on the covariance matrix of their confusion matrix. The similarity of categories, represented as a dendrogram, is given in Fig. 17. We can see that indoor and outdoor scene categories can be clearly separated, except for the industrial/store images.

The accuracy of recognition for different image categories is summarized in Fig. 18. When we consider only the binary classification case (recognition of indoor versus outdoor scenes), indoor images are recognized with an accuracy of 86%, while


Fig. 15 Confusion matrices for all (outdoor and indoor) image categories

Fig. 16 Examples of incorrect classification (reproduced from [9])

outdoor images are recognized with an accuracy of 94%. The confusion matrix is presented in Fig. 19.

6 Conclusions

We have analysed the use of the Bag of Words (BoW) model for recognition of indoor and outdoor environment scenes using different variants of feature detection and description methods. We have investigated the SIFT, SURF, FAST, MSER, and grid-based feature detection methods, along with the SIFT, SURF and U-SURF feature description methods. We have developed an application to analyse different implementations of the BoW model using the Support Vector Machine (SVM) classifier with a χ2 kernel on a selected image dataset.

For our experiments, we have used 200 images of each image category (including both indoor and outdoor images). Our results show that a larger amount of training


Fig. 17 Dendrogram of outdoor and indoor image categories

Fig. 18 Accuracy of recognition versus image categories

data reduces the number of misclassifications. The speed and efficiency of the algorithm also depend on the feature detection and description methods, with grid-based feature detection combined with SIFT feature description achieving the highest accuracy, and the combination of FAST100 and SIFT being the fastest. Feature detection using an artificial grid with experimentally optimized grid parameters (step and feature size) and without any reference to local image information has proved to be particularly effective with the SIFT descriptor. Using a grid


Fig. 19 Confusion matrix of binary classification (indoor versus outdoor images)

step of 12 and a feature size of 6, with the images reduced to 240 × 240 pixels, we have achieved an accuracy of 84.99 ± 1.45% when classifying eight categories of outdoor scenes. Features detected by the artificial grid yielded better accuracy (by 10%) than the features discovered by the SIFT detector (using the same set of images for training and testing).

The SURF descriptor without image orientation information (U-SURF) achieved higher accuracy than the classic SURF version in the BoW model. Using the SURF detector with the U-SURF descriptor, we have achieved an average improvement in accuracy of 8.43% over the classic SURF descriptor. We can argue that feature orientation information only makes the recognition process more complex, and is not required for accurate recognition of the environment when using the BoW model. The SURF detector with the U-SURF descriptor also operates faster (an image is encoded about 33% faster than when using the grid detector with the SIFT descriptor, with an average encoding time of only 0.4 s per image), but achieved a slightly lower accuracy (83.51 ± 1.67%). The SURF descriptor produces good results only when describing the features detected by the SURF detector, while the SIFT descriptor works well with various detectors. Other combinations of detectors and descriptors were not as effective; their accuracy varied from 65 to 79.75%.

The overall accuracy with the dataset containing 15 outdoor and indoor image categories was 67.49 ± 1.50%. Most classification errors came from indoor images: industrial and store images were most often confused, but indoor images were rarely misclassified with


the outdoor images. The recognition of indoor images was more complicated because artificially created environments have many inter-categorical similarities, uniform shapes and repetitive objects, which result in similar distinctive features across different categories of images and thus lead to classification errors.

The results of the research presented in this paper could be used by researchers and practitioners developing scene recognition systems for assisted living environments as well as for security systems.

References

1. Bourne, R.R.A., Flaxman, S.R., Braithwaite, T., Cicinelli, M.V., Das, A., Jonas, J.B., et al.: Vision Loss Expert Group. Magnitude, temporal trends, and projections of the global prevalence of blindness and distance and near vision impairment: a systematic review and meta-analysis. Lancet Glob. Health 5(9), e888–97 (2017)
2. Mann, S., Huang, J., Janzen, R., Lo, R., Rampersad, V., Chen, A., Doha, T.: Blind navigation with a wearable range camera and vibrotactile helmet. In: 19th ACM International Conference on Multimedia, pp. 1325–1328 (2011)
3. Ezaki, N., Bulacu, M., Schomaker, L.: Text detection from natural scene images: towards a system for visually impaired persons. In: 17th International Conference on Pattern Recognition, vol. 2, pp. 683–686 (2004)
4. Oliva, A., Torralba, A.: The role of context in object recognition. Trends Cogn. Sci. 11(12), 520–527 (2007)
5. Chan, L.A., Der, S.Z., Nasrabadi, N.M.: Image Recognition and Classification. Marcel Dekker, Inc. (2002)
6. Zhang, L., Xu, Q., Zhu, G., Song, J., Zhang, X., Shen, P., Wei, W., Shah, S.A.A., Bennamoun, M.: Improved colour-to-grey method using image segmentation and colour difference model for colour vision deficiency. IET Image Proc. 12(3), 314–319 (2018)
7. National Research Council: Electronic Travel Aids: New Directions for Research. The National Academies Press, Washington, DC (1986)
8. Dobre, C., Mavromoustakis, C., Garcia, N., Goleva, R., Mastorakis, G.: Ambient Assisted Living and Enhanced Living Environments: Principles, Technologies and Control, 1st edn. Butterworth-Heinemann, Newton, MA, USA (2016)
9. Petraitis, T., Maskeliunas, R., Damasevicius, R., Polap, D., Wozniak, M., Gabryel, M.: Environment recognition based on images using Bag-of-Words. In: 9th International Joint Conference on Computational Intelligence, IJCCI 2017, SciTePress, pp. 166–176 (2017)
10. Oliva, A., Torralba, A.: Modeling the shape of the scene: a holistic representation of the spatial envelope. Int. J. Comput. Vis. 42(3), 145–175 (2001)
11. Damaševičius, R., Maskeliunas, R., Woźniak, M., Połap, D., Sidekerskiene, T., Gabryel, M.: Detection of saliency map as image feature outliers using random projections based method. In: 13th International Computer Engineering Conference: Boundless Smart Societies, ICENCO 2017, pp. 85–90 (2018). https://doi.org/10.1109/icenco.2017.8289768
12. Vogel, J., Schiele, B.: A semantic typicality measure for natural scene categorization. Pattern Recognit. 195–203 (2004)
13. Csurka, G., Dance, C.R., Fan, L., Willamowski, J., Bray, C.: Visual categorization with bags of keypoints. In: Workshop on Statistical Learning in Computer Vision, ECCV, Prague, pp. 1–22 (2004)
14. Khan, N., McCane, B., Mills, S.: Feature set reduction for image matching in large scale environments. In: ACM International Conference Proceeding Series, pp. 67–72 (2012)
15. Napoli, C., Pappalardo, G., Tramontana, E., Marszalek, Z., Polap, D., Wozniak, M.: Simplified firefly algorithm for 2D image key-points search. In: 2014 IEEE Symposium on Computational Intelligence for Human-like Intelligence (CIHLI), pp. 1–8 (2015)


16. Hung, Y., Wang, W.-B., Zheng, H.-H.: Algebraic multigrid based object recognition technology applied on image sensors. Dianzi Keji Daxue Xuebao/J. Univ. Electron. Sci. Technol. China 44(5), 743–748 (2015)
17. Masatoshi, A., Yuuto, C., Kanji, T., Kentaro, Y.: Leveraging image-based prior in cross-season place recognition. IEEE Int. Conf. Robot. Autom. 7139961, 5455–5461 (2015)
18. Kurcius, J.J., Breckon, T.P.: Using compressed audio-visual words for multi-modal scene classification. In: International Workshop on Computational Intelligence for Multimedia Understanding, IWCIM 2014, art. no. 7008808 (2014)
19. Hwang, S., Park, C., Choi, Y., Yoo, D., Kweon, I.S.: Evaluation of vocabulary trees for localization in robot applications. In: International Conference on Control, Automation and Systems, art. no. 6704138, pp. 1239–1242 (2013)
20. Kang, Y., Yamaguchi, K., Naito, T., Ninomiya, Y.: Road image segmentation and recognition using hierarchical bag-of-textons method. In: Lecture Notes in Computer Science, vol. 7087, pp. 248–256 (2011)
21. Mitsuhashi, M., Kuroda, Y.: Mobile robot localization using place recognition in outdoor environments with similar scenes. In: IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM, art. no. 6027041, pp. 930–935 (2011)
22. Guillaume, H., Dubois, M., Frenoux, E., Tarroux, P.: Temporal bag-of-words: a generative model for visual place recognition using temporal integration. In: International Conference on Computer Vision Theory and Application, VISAPP 2011, pp. 286–295 (2011)
23. Law, M.T., Thome, N., Cord, M.: Bag-of-words image representation: key ideas and further insight. In: Ionescu, B., Benois-Pineau, J., Piatrik, T., Quénot, G. (eds.) Fusion in Computer Vision. Advances in Computer Vision and Pattern Recognition, pp. 29–52. Springer, Berlin (2014)
24. Strat, S.T., Benoit, A., Lambert, P., Caplier, A.: Retina enhanced SURF descriptors for spatio-temporal concept detection. Multimed. Tools Appl. 69(2), 443–469 (2014)
25. Zhang, G., Yang, J., Zhang, S., Yang, F.: Image classification based on modified BOW model. In: Balas, V., Jain, L., Zhao, X. (eds.) Information Technology and Intelligent Transportation Systems. Advances in Intelligent Systems and Computing, vol. 455, pp. 337–345. Springer, Berlin (2017)
26. Zhu, Q., Zhong, Y., Zhao, B., Xia, G.S., Zhang, L.: Bag-of-visual-words scene classifier with local and global features for high spatial resolution remote sensing imagery. IEEE Geosci. Remote Sensing Lett. 13(6), 747–751 (2016)
27. Shahriari, M., Bergevin, R.: Land-use scene classification: a comparative study on bag of visual word framework. Multimed. Tools Appl. 76(21), 23059–23075 (2017)
28. Zhao, L., Tang, P., Huo, L.: Feature significance-based multibag-of-visual-words model for remote sensing image scene classification. J. Appl. Remote Sens. 10(3), 035004 (2016)
29. Zeng, F., Yuefeng Ji, Y., Levine, M.D.: Contextual bag-of-words for robust visual tracking. IEEE Trans. Image Process. 27(3), 1433–1447 (2018)
30. Feng, J., Liu, Y., Wu, L.: Bag of visual words model with deep spatial features for geographical scene classification. Comput. Intell. Neurosci. 5169675:1–5169675:14 (2017)
31. Liu, L., Ma, Y., Zhang, X., Zhang, Y., Li, S.: High discriminative SIFT feature and feature pair selection to improve the bag of visual words model. IET Image Process. 11(11), 994–1001 (2017)
32. Wu, H., Liu, B., Su, W., Chen, Z., Zhang, W., Ren, X., Sun, J.: Optimum pipeline for visual terrain classification using improved bag of visual words and fusion methods. J. Sens. 2017, 8513949:1–8513949:25 (2017)
33. Koniusz, P., Yan, F., Gosselin, P.-H., Mikolajczyk, K.: Higher-order occurrence pooling for bags-of-words: visual concept detection. IEEE Trans. Pattern Anal. Mach. Intell. 39(2), 313–326 (2017)
34. Abdi, L., Kalboussi, R., Meddeb, A.: Enhanced bags of visual words representation using spatial information. In: 19th International Conference on Image Analysis and Processing, ICIAP 2017, Part II, LNCS 10485, pp. 171–179. Springer (2017)


35. Ren, Y., Bugeau, A., Benois-Pineau, J.: Bag-of-bags of words irregular graph pyramids vs spatial pyramid matching for image retrieval. In: 4th International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 1–6 (2014)
36. Gabryel, M., Damasevicius, R.: The image classification with different types of image features. In: 16th International Conference on Artificial Intelligence and Soft Computing, ICAISC 2017, Part I, LNCS 10245, pp. 497–506 (2017)
37. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)
38. Bay, H., Tuytelaars, T., Van Gool, L.: SURF: speeded up robust features. In: Computer Vision, ECCV 2006, pp. 404–417. Springer, Berlin (2006)
39. Rosten, E., Drummond, T.: Machine learning for high-speed corner detection. In: 9th European Conference on Computer Vision—Volume Part I (ECCV'06), pp. 430–443 (2006)
40. Matas, J., Chum, O., Urban, M., Pajdla, T.: Robust wide-baseline stereo from maximally stable extremal regions. Image Vis. Comput. 22(10), 761–767 (2004)
41. Jégou, H., Douze, M., Schmid, C., Pérez, P.: Aggregating local descriptors into a compact image representation. In: 23rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), pp. 3304–3311 (2010)
42. Lazebnik, S., Schmid, C., Ponce, J.: Beyond bags of features: spatial pyramid matching for recognizing natural scene categories. In: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 2169–2178 (2006)
43. Yang, J., Jiang, Y.G., Hauptmann, A.G., Ngo, C.W.: Evaluating bag-of-visual-words representations in scene classification. In: ACM International Multimedia Conference and Exhibition, pp. 197–206 (2007)
44. Gabryel, M., Capizzi, G.: The bag-of-words method with dictionary analysis by evolutionary algorithm. In: 16th International Conference on Artificial Intelligence and Soft Computing, ICAISC 2017, Part I, LNCS 10246, pp. 43–51. Springer, Berlin (2017)
45. Oliva, A., Torralba, A.: Modeling the shape of the scene: a holistic representation of the spatial envelope. Int. J. Comput. Vis. 42(3), 145–175 (2001)
46. Arthur, D., Vassilvitskii, S.: k-means++: the advantages of careful seeding, pp. 1027–1035. Society for Industrial and Applied Mathematics (2007)
47. Muja, M., Lowe, D.G.: Fast approximate nearest neighbors with automatic algorithm configuration. In: VISAPP International Conference on Computer Vision Theory and Applications, vol. 2, pp. 331–340 (2009)
48. Vapnik, V.: Statistical Learning Theory. Wiley, New York (1998)
49. Fei-Fei, L., Perona, P.: A Bayesian hierarchical model for learning natural scene categories. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 524–531 (2005)

Cellular Transport Systems Improved: Achieving Efficient Operations with Answer Set Programming

Steffen Schieweck, Gabriele Kern-Isberner and Michael ten Hompel

Abstract Autonomous vehicles for in-plant transportation are widely accepted in industry nowadays. If applied in order picking, the efficiency of their operations is essential for the performance of the overall system. Even though the developed systems are tailored to work in highly volatile environments, their procedures are programmed in a comparably old-fashioned and inflexible manner. Answer set programming is experiencing a revival due to advances in the development of solving algorithms and computer hardware. This paper is an invited, extended version of Schieweck et al. (Proceedings of the 9th International Joint Conference on Computational Intelligence, 2017, [1]), in which several approaches for the application of answer set programming in order picking systems with autonomous vehicles are discussed. In this paper, the provided information on the approaches is extended, the discussion expanded, and the paradigm of reactive answer set programming is additionally taken into account.

Keywords Answer set programming · Multi-agent systems · Reactive ASP · Hybrid systems

The presented research is funded by the German Research Foundation DFG in the project “Advanced Solving Technology for Dynamic and Reactive Applications” (KE 1413/8-2).

S. Schieweck (B) · G. Kern-Isberner
Chair 1 Computer Science, TU Dortmund, Otto-Hahn-Str. 12, Dortmund, Germany
e-mail: [email protected]
G. Kern-Isberner
e-mail: [email protected]
S. Schieweck · M. ten Hompel
Chair of Materials Handling and Warehousing, TU Dortmund, Joseph-von-Fraunhofer-Str. 2-4, Dortmund, Germany
e-mail: [email protected]

© Springer Nature Switzerland AG 2019
C. Sabourin et al. (eds.), Computational Intelligence, Studies in Computational Intelligence 829, https://doi.org/10.1007/978-3-030-16469-0_16



1 Introduction

Autonomous vehicles on public roads have frequently been subject to media coverage in recent times due to their intense development. For in-plant transportation, the vision of autonomous vehicles became reality a long time ago: “swarms” of intelligent vehicles fulfill the supply of working stations for order picking. The utilization of such systems simultaneously provides high throughput rates and flexibility [2].

The coordination of those vehicles requires immense know-how in the digitization of formerly manual and/or static processes. In particular, the request for the highest possible flexibility requires reconsideration of the architectures of the digital systems. The vehicles are expected not only to be able to adapt rapidly to changes in their current environment but also to change their environment rapidly and easily. Existing approaches usually deploy classical, imperative programming which often is highly complex, specific and difficult to understand for non-experts. The application of declarative programs potentially reduces the complexity and length of such codes and thus improves their understandability and adaptability.

Due to the advances in computer hardware and solving algorithms, answer set programming (ASP) is currently experiencing a revival. However, the paradigm is currently only well-known in the domains of knowledge representation and artificial intelligence (AI) and has not reached the classical engineering domains yet. The combination of autonomous vehicles for order picking and ASP is an interesting topic of research for the following reasons:

– Volatile environments require fast and easy adaption of the vehicles' objectives and operations. ASP encodings are small, easily understandable and highly adaptable.
– Current industrial trends face towards distributed, highly connected systems in which decisions are made by multiple, autonomous entities. We aim to give some insights on how ASP could work beneficially in such systems.
– The approach of reactive ASP [3] appears to be tailored for environments in which decisions need to be made quickly with recurring, similar instances. Thus, we apply and evaluate the paradigm of reactive ASP for the specific planning task.

For the above-mentioned reasons, we discuss the vision of the fusion of intelligent vehicles for order-picking and answer set programming. To start with, basic fundamentals are provided for the two main topics. Afterwards, suitable planning tasks are identified. For those tasks, a number of implementations with ASP are proposed. Then, the implementations are evaluated with a simulation model. Subsequently, an implementation with reactive ASP is described, evaluated and discussed. The paper is completed with a conclusion and an outlook on future work.

This paper is an invited, extended version of [1], in which several approaches for the application of answer set programming in order picking systems with autonomous vehicles are discussed. In this paper, the provided information on the background in Sect. 2 is extended. Also, the planning task and the implementations are described more thoroughly and/or from different perspectives. The paradigm of reactive ASP is additionally implemented and discussed.


2 Fundamentals

For the understanding of the described work, the basic fundamentals are explained in the following. First, the concept of answer set programming is introduced. Second, cellular transport systems are explained. Third, related and existing work is described.

2.1 Answer Set Programming

An answer set program P consists of a number of rules of the form

r : H ← A1, …, An, not B1, …, not Bm.    (1)

where H, A1, …, An, B1, …, Bm are literals and not is the so-called default negation operator. H is head(r) and A1, …, An, not B1, …, not Bm is body(r). H holds if pos(r) = {A1, …, An} is true and neg(r) = {B1, …, Bm} is false or not known. A rule without a body is a fact and thus holds without any precondition. A rule without a head is a constraint and excludes the set defined in body(r) from the answer set. Informally speaking, a rule may be interpreted as follows: if all literals of pos(r) are included in a set that represents a problem solution and none of the literals of neg(r), then head(r) must also be part of the set [4].

An encoding of an answer set program contains a number of such rules which may be interconnected by shared variables. A valid answer set satisfies all of the given rules. While the most basic structure of a rule is as described, rules may look differently to achieve specific grounding and solving behaviors [5]. With today's ASP grounders and solvers, one may also specify objective functions to select an optimal answer set from the set of existing ones.

For the understanding of ASP it is important to note that not is not a standard negation operator. Instead, not A expresses that the literal A is not in the set of known atoms or that ¬A is in the set of known atoms. As a result, uncertainty about the status of A can be taken into account. As indicated, the classical negation operator “¬” may still be included in an ASP encoding with its respective meaning [6].

A set S of literals is a model of P if H ∈ S whenever pos(r) ⊆ S and neg(r) ∩ S = ∅ for every r ∈ P. S is a stable model (an answer set) of P if S is the ⊆-minimal model of P^S, where P^S is the reduct of P relative to the set S as defined by [7, 8]

P^S := {H ← A1, …, An | H ← A1, …, An, not B1, …, not Bm ∈ P, {B1, …, Bm} ∩ S = ∅}.    (2)

A rational agent can gain knowledge from P: it considers the literals in S true and the remaining literals false.
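To make the reduct of Eq. (2) concrete, consider (our own toy example) the program P = { p ← not q.  q ← not p. } and the candidate set S = {p}. Since q ∉ S, the first rule survives the reduct as the fact p ←, while the second rule is removed because p ∈ S. Hence P^S = { p ← }, whose ⊆-minimal model is {p} = S, so S is a stable model of P. By symmetry, {q} is the second stable model.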


2.2 Cellular Transport System


Generally, in-plant transportation systems are categorized as continuous conveyors or intermittent transportation systems. Continuous conveyors work in permanent operation without periodic acceleration and deceleration of the actuators. They are stationary systems and thus are obstacles for other entities in the warehouse, and they provide low flexibility. Generally, continuous conveyors can achieve high throughput rates. Examples of continuous conveyors are roller or belt conveyors [9].

Intermittent transportation systems operate in so-called cycles. The cycles are repeated periodically, and each fulfills one (or two, if conducted as double-cycles) transportation request. Loading and unloading is conducted while the transportation equipment (e.g., the forklift) does not move. Source and destination of the transportation operation are loosely connected. Intermittent transportation systems are not static obstacles and provide high flexibility. To increase throughput, the number of transportation systems must be increased.

The described categorization of transportation systems reveals conflicting goals. Continuous conveyors enable high throughput rates while providing low flexibility. Intermittent transportation systems are highly flexible but only achieve high throughput rates if their number is increased significantly. The development of cellular transport systems is geared towards the resolution of these conflicting goals by providing high throughput rates as well as high flexibility, as depicted in Fig. 1 [10].

Cellular transport systems are the physical embodiment of multi-agent systems which fulfill in-house transportation tasks of bins, pallets and cartons. The overall order picking system includes structural and functional elements like racks, picking stations, lifts and the conveying units.

Fig. 1 Resolution of conflicting goals (flexibility versus throughput) [10, 11]


The distinguishing feature compared to conventional transportation systems is that the transport units are intelligent and self-organizing and thus able to decide and act autonomously. A possible manifestation of those units are single continuous conveying units which are able to detect the meta-structure of the overall conveying system upon “plug-and-play” and operate without further configuration [10]. We discuss a different manifestation, which is a fleet of intelligent vehicles for the transportation of article bins. The KIVA system, for which the first publications originate from 2008, is a prominent example of such a manifestation [12]. In this order-picking system the vehicles lift and transport small racks which are stored on the ground. The system architecture incorporates a computer cluster for dispatching and traffic control [12]. Nevertheless, ever since the buyout by Amazon Ltd. [13], no information about further advances has been published.

The unique feature of the Cellular Conveyor System as developed by the Fraunhofer Institute for Material Flow and Logistics IML and Dematic GmbH is the vehicles' capability of moving on the various levels of the racks as well as on the ground floor. If a specific stock keeping unit (sku) is ordered, a vehicle retrieves the related article bin from the rack and transports it to one of the picking stations. After the picking process at the picking station has been conducted, the vehicle transports the bin back and stores it. The lifts, which are located at the beginning and the end of each rack, are required to reach the various levels. The vehicles move on dedicated virtual paths which compose a graph (see Fig. 2). The edges of the graph are unidirectional. As a consequence, one of the lifts is dedicated to the upwards transport of the vehicles and the other lift to the downwards transport.

The control architecture of the vehicles consists of a sensor and actuator layer, an operational layer and a layer for autonomous behavior. The layers are connected by UDP sockets. The autonomous behavior is implemented with software agents which are capable of communicating with other agents in the system. The dispatching is conducted by an external dispatching agent with the FIPA protocol [14]. The external system distributes a broadcast to all vehicles in the system once a new order-line arrives. The available (idle) vehicles bid for the order-line and the highest bidder is assigned the transportation task. As a consequence, if the system runs under full utilization (which is desirable for economic reasons), the current dispatching protocol results in a first-come-first-served procedure.
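The broadcast-and-bid step can be condensed to the selection of the best offer; a simplified sketch (all types and the bid semantics — here, the travel distance to the pickup node, so that the closest idle vehicle wins — are our own assumptions, not the published protocol):

```cpp
#include <limits>
#include <vector>

// One bid of an idle vehicle for a broadcast order-line.
struct Bid {
    int    vehicleId;
    double travelDistance;  // distance to the pickup node on the graph
};

// The dispatching agent awards the order-line to the "highest" bidder,
// modelled here as the smallest travel distance. Under full utilization
// (few idle vehicles per broadcast) this degenerates to the
// first-come-first-served behavior described above.
int awardOrderLine(const std::vector<Bid>& bids)
{
    int winner = -1;
    double best = std::numeric_limits<double>::infinity();
    for (const Bid& b : bids)
        if (b.travelDistance < best) { best = b.travelDistance; winner = b.vehicleId; }
    return winner;  // -1 if no idle vehicle submitted a bid
}
```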

2.3 Related Work

The following overview of related work is focused on the aspects which are relevant to the identified planning task. Thus, the assignment of driving tasks to vehicles is discussed before an overview of research on intelligent vehicles for in-plant transportation is provided.

The assignment of driving tasks to vehicles has been studied intensely in the domain of operations research. The vehicle scheduling problem decides when, where and how any vehicle in a system shall act.


Fig. 2 Example of routing graph [1]

and how any vehicle in a system shall act. Routing is included in this problem. The problem may be tackled online or offline. The offline problem can theoretically be solved to optimality using a multiple traveling salesman model which has NPcomplexity. Real-life scenarios require online solving due to uncertainty of the environment states. This may be realized with a rolling horizon, in which the planning is conducted for a limited future timespan [15]. Vehicle dispatching strategies only assign driving jobs to vehicles. Often, the planning horizon has a timespan of zero. This results in simple rules from which the assignment is made. Some exemplary rules are – – – –

First-Come-First-Served (FCFS), Random Workcentre, Shortest Travel Time/Distance and Maximum Outgoing Queue Size [16].

Researchers generally agree that the shortest travel time/distance rule is the most efficient one [17]. Especially in multi-agent systems, the choice of whether the decision is made centrally or in a distributed manner is of interest [16].


Existing research dealing with the theoretical foundations is diverse: optimization models [18], heuristics [19] and disposition strategies [20] are used, but also more complex methods such as fuzzy logic [21] and ant colony algorithms [22]. Cellular transport systems are a rather new field of research. Therefore, publications are sparse and mainly engage with the physical design of the systems [12, 14].

In the domain of operations research it is generally assumed that distributed planning strategies lead to lower throughput rates than central strategies [15]. Research in more technical domains leads to different conclusions, which prefer distributed strategies. Nonetheless, in most multi-agent transport systems dedicated disposition agents are used which offer transportation tasks to the agents, which may bid on the task [2, 23].

For the present state of research, the gap between theoretical AI research and industrial development is significant. One project which addresses this gap is the RoboCup Logistics League [24, 25]. Another publication addresses the combination of ASP and autonomous, KIVA-like vehicles for the combined task assignment and routing decision from a more theoretical point of view [26]. To our knowledge, no publications from other authors exist which deal with the combination of ASP and cellular transport systems on this practical level, except for our own publications on the topic [1, 27].

3 Planning Task

The methods and tools which are developed in the domain of computer science are in most cases not capable of changing the physical structure of a system. Nonetheless, they can be used to process and model information and, based on this, influence physical processes. The identification of the planning task is based on the precept of not requiring any physical changes of the transportation system for a significant improvement of its throughput rates.

ASP is highly beneficial when it is applied to problems which have a combinatorial characteristic with some degree of complexity. The assignment of transportation tasks to vehicles is a well-known problem which has been approached with many different methods (see Sect. 2.3). The combinatorial aspect of the planning task is provided by the high number of possible different assignments which can be made. In the context of goods-to-person order picking, a closely related task is the assignment of customer orders to picking stations. All of the articles of one customer order must be picked at the same picking station (and thus must all be transported to that station). Furthermore, the picking stations have a limited capacity of customer orders which can be assigned to them at the same time. The two tasks are interconnected by the following aspects:

– The decisions have to be completed at the same point of time: when the transportation tasks are assessed, selected and started.
– The decisions are correlated. One transportation task may only be selected if sufficient capacity for the corresponding customer order(s) is available at the picking stations. The assignment of the picking station has an influence on the transportation route and time and on possible congestions.

312

S. Schieweck et al.

stations. The assignment of the picking station has an influence on the transportation route and time and possible congestions. – The assignment of the picking stations enables further optimization of the order picking system as a whole, e.g. concerning the utilization of picking stations and the processing time of customer orders. The planning task, the regarded system and its constraints are described in a semi-formal manner in the following. We strictly follow the description as provided in [1]. In an order-picking system, customer orders are assembled from the set of skus M which are in stock. A customer order O ⊂ M consists of a number n P,O of skus which are condensed to n O ≤ n P,O order-lines l in which skus with the same identity compose to one order-line. The system has a number of picking stations S with equal capacity c S , storage positions R, vehicles F and storage levels E. Incoming customer orders are stored in a list of orders L. The orders are sorted such that t1 ≤ t2 ≤ · · · ≤ tq−1 ≤ tq , in which t O is the time of an order O. The vehicles move on a graph G = (V, E) which has a set of nodes V and unidirectional edges E. The set of nodes may be further differentiated as V = {VR ; VS ; VW }. In this, VR are nodes which identify storage locations in the rack, VS are nodes which identify picking stations where bins need to be delivered to and VW are waypoints without further functionality. For the fulfillment of a transportation task, the vehicles need to travel a distance d on G. One vehicle can only transport one bin at a time. The planning task described above is the assignment lv = (v, l) of a vehicle v to an order-line and the related assignment os = (O, S) of an order O to a picking station S. The number of satisfied order-lines per time Nl is the primary objective [1] (3) z 1 : max Nl . Also, the utilization u s of the picking stations should be as balanced as possible [1] z 2 : min

  u avg − u s 

(4)

S

with u avg =

S 1 ui . S i=1

(5)

The capacity c S of a picking station may not be exceeded by the number of orders O assigned to the same picking station at a time. Every transportation task j has one pickup node V p ∈ VR , one or multiple (eos-job) delivery nodes Vd ⊆ VS and one storing node Vr = V p . The latter definition of the storing node implies that bins are transported back to the same location from which they were extracted. Practically speaking, we assume a fixed storage policy. Finally, we define that all of the orderlines l of a customer-order O must be transported to the same picking station [1] Vd,l1 = Vd,l2 = · · · = Vd,ln

∀li ∈ O.

(6)
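As a small worked example of (4) and (5) with hypothetical numbers, not taken from the paper's experiments: for S = 2 picking stations with utilizations u_1 = 0.8 and u_2 = 0.6,

$$u_{avg} = \tfrac{1}{2}(0.8 + 0.6) = 0.7, \qquad z_2 = |0.7 - 0.8| + |0.7 - 0.6| = 0.2,$$

so an assignment that lowers this sum balances the load across the stations.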


4 System Design

The existing dispatching rules (see Sect. 2.3) indicate opportunities to handle the proposed planning task. The approaches described in the following are tailored to work in a realistic scenario and will be evaluated as such. In this section a number of different implementation alternatives are developed; they are evaluated in the subsequent section.

To achieve improvements for the planning task, the agents are given a horizon H with size n. Thus, the opportunity arises to select an order-line from a pool which consists of the next n unfulfilled order-lines of L. If the same article is contained in multiple orders in H, the vehicle may approach multiple picking stations in one transportation task (eos-job). Thus, we define a transportation task j which is selected from H. Every unique storage position is one Pos ∈ H.

To cope with the complexity of real-life scenarios, the implemented planning agents operate with limited knowledge. First, we assume a strong correlation between driving distance and driving time and thus only consider the driving distance. For the fulfillment of a transportation task the vehicle has to conduct the following operations:

1. Drive to the selected pickup node V_p.
2. Load the bin.
3. Drive to the selected picking station V_d.
4. Wait for the pick.
5. Drive to the selected storage position V_r.
6. Unload the bin.

In logistics operations, one aims to reduce the unloaded traveling distance, i.e. the distance a vehicle travels while empty, which here corresponds to operation (1). With the assignment lv, operation (1) can be influenced, but not the other operations, which are determined by o_S and O. A vehicle requests a new transportation task every time it completes operation (5). The vehicle's position at this point of time varies, which leads to highly different reachability of the pickup nodes in H. Also, every order-line needs to be completed at some point of time. As a consequence, operations (2) through (6) are fixed and cannot be changed to improve the overall system's performance. For these reasons, the rating of the order-lines l ∈ H is based on the driving distance d_p ⊂ d to the pickup nodes V_p.

The second assumption concerns the assignment o_S. Orders only occupy capacity if at least one of their transportation tasks has been started and unsatisfied order-lines remain. The first condition ensures that no order bin occupies capacity before its first order-line has been delivered and actually requires an order bin at the station. The second condition implies that any transportation task which has already been started at the time of a new assignment will reach a picking station before the currently assigned one and thus release its capacity.

A blackboard architecture has been implemented for the multi-agent system. On the blackboard, information about the incoming customer orders (previously denoted as L), the status of the order-lines and the corresponding orders is available to certain agents. Specific agents have read and write privileges for the blackboard.


Listing 1 Excerpt—distributed planning with exemplary instance (n = 5) [1]

 1   % instance
 2   order_pos(69,33,436). order_pos(70,34,322).
 3   order_pos(76,36,241). order_pos(82,39,446).
 4   order_pos(83,39,124).
 5   order_pickst(33,2).  order_pickst(34,3).
 6   order_pickst(36,1).  order_pickst(39,8).
 7   veh_position(4,573).
 8   % encoding
 9   pos(P) :- order_pos(_,_,P).
10   1{ pos_veh(P,V) : pos(P) }1 :- veh(V).
11   as_order_pickst(O,S) :- order_pickst(O,S), pickst(S).
12   1{ job_veh(Ix,V) : order_pos(Ix,_,P) } :- pos_veh(P,V).
13   as_order(O) :- job_veh(Ix,V), order_pos(Ix,O,_), not order_pickst(O,_).
14   1{ as_order_pickst(O,S) : pickst(S) }1 :- as_order(O).
15   :- pickst(S), c_pickst+1{ as_order_pickst(O,S) }.
16
17   dist(A,B,@distance(A,B)) :- veh_pos(_,A), pos(B).
18   pos_veh(P,V,C) :- pos_veh(P,V), veh_position(V,A), dist(A,P,C).
19   numJobs(K) :- K = #count{ Ix : job_veh(Ix,V) }.
20   numOrderPickst(X) :- pickst(S), X = #count{ O : as_order_pickst(O,S) }.
21   maxNumOrderPickst(Y) :- Y = #max{ X : numOrderPickst(X) }.
22
23   #minimize{ C@3 : pos_veh(P,V,C) }.
24   #maximize{ K@1 : numJobs(K) }.
25   #minimize{ Y@2 : maxNumOrderPickst(Y) }.

In this paper, four approaches are described and evaluated:

– distributed planning with hybrid encoding (Sect. 4.1) [1],
– distributed planning with numbering concept (Sect. 4.2) [1],
– central planning with hybrid encoding (Sect. 4.3) [1] and
– distributed planning with hybrid encoding and reactive ASP (Sect. 6).

The rationale for each approach is stated in the respective section. For all implementations, the ASP grounder and solver clingo [28], version 4.5.4 (Windows build), has been used.
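Assuming the encoding and the generated instance are stored in separate files (the file names below are ours), each planning run then amounts to a plain clingo call; with the optimization statements of Listing 1, clingo prints successively better models until the optimum is proven:

clingo encoding.lp instance.lp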

4.1 Distributed Planning

For this and the following approaches a detailed description of the encoding is given in [1]; an interested reader may obtain further information from that paper. Nonetheless, the developed encodings are provided here and described briefly. Additionally, a thorough description of the implemented structure and processes for the inclusion of ASP in the planning process is presented to complement the observations of [1]. Some versions of the distributed approach with hybrid encoding have been


discussed in [27]. In this paper (and [1]), only the superior version of the encodings is regarded.

The distributed planning approach follows the trend towards distributed systems by increasing the vehicles' autonomy and enabling them to make their own decisions. This approach has been selected due to its high analogy to the current system architecture, its flexibility and its low expected computing times. The structure of the system is expanded to include ASP capabilities. The picking stations, storage positions and vehicles are already part of the existing system. In this implementation, every vehicle is assigned a dedicated ASP planning agent which solves the planning task. The information exchange of the agents is carried out via the described blackboard architecture. To prevent interference, the blackboard is only accessible to one agent at a time.

Customer orders enter the system via an Enterprise Resource Planning (ERP) and/or Warehouse Management System (WMS) and are updated with warehouse-internal information, which is then published on the blackboard. Vehicles request new transportation tasks at the start of the system and whenever they complete a preceding task. They then delegate the described planning task to their respective ASP agent. The agent reads the next n available order-lines of L and the existing relations o_S of customer orders and picking stations and transforms them into an ASP instance (see Listing 1). The ASP instance also contains the current node of the vehicle on the graph. Now, the grounding and solving processes are started. The selected transportation task is published on the blackboard as "assigned" and a potentially new assignment o_S is added. The information is also forwarded to the vehicle itself, which fulfills the transportation task.

The ASP encoding is given in Listing 1. Lines 9–15 state the rules for the assignments. Lines 17–21 prepare the rating of the jobs and the output of the decision. Lines 23–25 give the optimization statements with their respective priorities.
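The distances behind the external function @distance (Listing 1, line 17) are computed outside the logic program. As a rough, hypothetical sketch of how such a function can be supplied, assuming clingo 4's Python scripting interface in which numeric terms arrive as plain integers (the paper itself does not show this part), one could embed:

#script (python)

# Hypothetical sketch only: supply the @distance external used in Listing 1.
# DIST would hold shortest-path lengths precomputed on the graph G,
# e.g. by a Dijkstra run; it is not part of the published encoding.
DIST = {}

def distance(a, b):
    # Assuming clingo 4.x passes numeric terms as Python integers.
    # Fall back to a large penalty if no distance was precomputed.
    return DIST.get((a, b), 10000)

#end.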

4.2 Location Numbering

The proposed implementation is an evolution of the one described in the preceding section. More specifically, we aim to reduce the calculation time induced by the Dijkstra algorithm for the assessment of the transportation tasks. Still, we hope to find a good approximation of the optimal solution. Also, insights into how costly the routing algorithm is for the current planning task shall be gained. Even though the algorithm has been selected carefully and routing algorithms are quite efficient, the effort of calculating the routing distance for every j ∈ H is high. In particular, it is expected to scale poorly, as the computational expense grows quadratically with the number of nodes in the graph [29] and proportionally with n. Thus, a different approach has been developed which makes use of the unidirectional structure of the routing graph. The nodes V of G are given names such that simple rules suffice for the comparison of all j ∈ H. An example of the numbering for a rack is given in Fig. 3.


Listing 2 Excerpt—distributed planning with numbering concept [1]

level_pos(N, P/100) :- order_pos(N,_,P).
level_veh(V, P/100) :- veh_position(V,P).
numPickst(Max)      :- Max = #max{ S : pickst_node(S,_) }.

% case 1
possiblePos1(N) :- ...
selectedBin1(N) :- ...

% case 2
possiblePos2(N) :- ...
selectedBin2(N) :- ...

% case 3
possiblePos3(N) :- ...
selectedBin3(N) :- ...

selectedBin(N) :- selectedBin1(N).
selectedBin(N) :- selectedBin2(N).
selectedBin(N) :- selectedBin3(N), not selectedBin2(_).
pos_veh(B,V)   :- selectedBin(N), order_pos(N,_,B), veh_position(V,_).
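The bodies of the case rules are elided above ("..."); they are given in full in [1]. Purely as a hypothetical illustration of the style of such rules (the predicate names possiblePos1/selectedBin1 come from the listing, but the bodies below are ours), "case 1" could, for instance, prefer bins on the vehicle's current level and break ties by the smallest node number:

% Hypothetical illustration; not the actual rules of [1].
possiblePos1(N) :- order_pos(N,_,_), level_pos(N,L), level_veh(_,L).
selectedBin1(N) :- possiblePos1(N), N = #min{ M : possiblePos1(M) }.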

Generally, the nodes V_p are given names such that lower numbers correspond to a lower driving distance d_p. On the lowest level of the rack, it is taken into account that the nodes up to E3 allow a direct exit of the rack without the time-consuming usage of the lifts. In the provided example the level number is encoded in the hundreds digit. Thus, the level of a storage position can be determined as

$$\text{level} = \left\lfloor \frac{\text{node number}}{100} \right\rfloor. \tag{7}$$

For example, node 207 lies on level ⌊207/100⌋ = 2.

The structure of the system is further expanded with a numbering agent, which also uses an ASP encoding. In addition, a database for the information about the storage positions is introduced at this point. When the system is initially started or any changes have been made to the system, the numbering agent receives its required information from the environment, assigns a number to every V and stores the tuple t = (id_old, id_new) in the database. The required information is the

– length of the rack,
– number of rack levels and
– position of the rack entries.

A minimal sketch of this numbering step follows below the list.
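Under the assumption that every level offers the same number of positions in driving order, the numbers could be generated as follows; the constants and the predicate name new_id are ours, not from [1].

% Hypothetical sketch: position X (in driving order) on level L receives
% the node number L*100 + X, so the hundreds digit encodes the level.
#const racklength = 13.
#const levels = 3.
new_id(L,X,L*100+X) :- L = 1..levels, X = 1..racklength.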

[Fig. 3 Example of numbering concept [1]: a rack with three storage levels; nodes are numbered 101–110 on the lowest level, with the rack entries E1–E3 interleaved among them, 201–213 on the second level and 301–313 on the third level.]


Once a new customer order enters the system it is updated for internal use with the "new" name of the storage node and can immediately be processed by the dispatching agents. Hence, the information on the blackboard carries the new labels. The vehicle's planning agent has an encoding similar to Listing 1, in which the distance computation and rating (lines 17 and 18) are substituted by Listing 2 and the distance optimization statement (line 23) is removed. A detailed description of the encoding can be found in [1].
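To make the substitution concrete, here is a hedged sketch only (the actual replacement rules are the case rules of Listing 2): the Dijkstra-based rating of lines 17–18 could, in the simplest conceivable variant, be approximated by the numeric gap between node names, which the numbering concept makes meaningful.

% Hypothetical simplification, not the encoding of [1]: rate a pickup
% node by the absolute difference of node numbers instead of @distance.
pos_veh(P,V,C) :- pos_veh(P,V), veh_position(V,A), C = |A-P|.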

4.3 Central Planning

Multi-agent systems rarely exist without any central agents whose purpose is to coordinate the overall system (see e.g. Sect. 2.2 for the current architecture). The distributed approaches address the demand for more local decision making to cope with the complexity induced by large cyber-physical networks. In most cases, the search for a global optimum increases the computational effort by a large share. However, a global optimum for a planning problem is expected to be at least as good as the combination of multiple local optima, and modern ASP solvers are capable of handling highly complex problems efficiently. The following approach has been developed to test the impact of the global optimum on the system's performance and the capability of ASP to find this optimum within acceptable computing times.

Figure 4 shows an exemplary situation in which a central planning approach is superior to a distributed approach in the current system. Vehicle 1 requests a new transportation task and bins a and b are in H. Bin b induces a shorter distance for vehicle 1 and will be selected with the distributed approach.

[Fig. 4 Distributed versus central planning [1]: vehicles 1 and 2 and bins a and b, shown once under the distributed and once under the central assignment.]


For vehicle 2, bin a remains, which induces longer travel and double usage of the lifts. With the central planning approach, the overall traveling distance of all vehicles can be minimized. In this case, vehicle 1 will be assigned bin a and vehicle 2 bin b, which induces considerably less overall traveling distance.

For the central approach, n needs to be redefined. A minimum of F order-lines must be available in H to allow one assignment lv for each vehicle v. For comparability, the number of undesirable, unassigned order-lines in H needs to remain the same. Thus, the central planning agent has

$$n_{cen} = n + F - 1 \tag{8}$$

order-lines available (for instance, n = 5 order-lines and F = 3 vehicles yield n_cen = 7). In contrast to the preceding implementations, the vehicles do not each possess an ASP agent. Instead, one central dispatching agent can be addressed by every vehicle in the system. Still, the information is exchanged via a central blackboard architecture. If a vehicle requests a new transportation task it receives information from the blackboard. Two cases may occur: either a transportation task has already been assigned to the vehicle (indicated by its ID) and can be started immediately, or the vehicle cannot find an assigned transportation task and the ASP planner needs to create a new assignment. The dispatching agent acquires the relevant information from the environment, which is the next n_cen available order-lines, the relevant assignments o_S, the current node V of the requesting vehicle and the storing node V_r of the remaining vehicles. Then, new assignments are generated and published on the blackboard, and the requesting vehicle receives a message about the completion of the planning task. The vehicle may now access its new assignment and start the transportation task.

The central planning agent has an encoding similar to that of the distributed agents in Listing 1. The differing lines are displayed in Listing 3. A thorough explanation of the encoding can be found in [1].

Listing 3 Excerpt—central planning [1]

:- pos(P), 2{ pos_veh(P,V) }.
sposition(V,Pos) :- occupied(V,Pos).
sposition(V,Pos) :- veh_position(V,Pos), not occupied(V,_).
dist(SPos,TPos,@distance(SPos,TPos)) :- sposition(_,SPos), pos(TPos).
veh_drive(V,D)   :- pos_veh(P,V), sposition(V,SPos), dist(SPos,P,D).

#minimize{ D@3 : veh_drive(V,D) }.

5 Evaluation

The approaches described in Sect. 4 are evaluated in the following. First, the experimental design is described (Sect. 5.1). Second, the results are presented and interpreted (Sect. 5.2).

Table 1 Data of tested system [1]

Parameter                         Value    Unit
Vehicle
  Speed floor                     1        m/s
  Speed rack                      2        m/s
  Acceleration/deceleration       0.5      m/s²
  Time loading/unloading          4.5      s
Lift
  Speed                           2        m/s
  Acceleration/deceleration       2        m/s²
Picking time per order-line       5        s

5.1 Experimental Design

For the evaluation and applied development a Demo3D simulation model has been created, including an interface between the simulation environment and the employed ASP grounder and solver. Demo3D was selected due to its programming structure, which is close to agent-based systems: single entities are assigned proprietary C# scripts which exchange information via messaging protocols. Thus, the simulation could be implemented similarly to a real-world application. The level of abstraction of the simulation is relatively low. Just like the real vehicles, their digital replications possess proximity sensors which slow the vehicle down (proximity