Ontologies and Concepts in Mind and Machine: 25th International Conference on Conceptual Structures, ICCS 2020, Bolzano, Italy, September 18–20, 2020, Proceedings [1st ed.] 9783030578541, 9783030578558

This book constitutes the proceedings of the 25th International Conference on Conceptual Structures, ICCS 2020, held in

308 85 6MB

English Pages XXIV, 209 [226] Year 2020

Report DMCA / Copyright

DOWNLOAD FILE

Polecaj historie

Ontologies and Concepts in Mind and Machine: 25th International Conference on Conceptual Structures, ICCS 2020, Bolzano, Italy, September 18–20, 2020, Proceedings [1st ed.]
 9783030578541, 9783030578558

Table of contents :
Front Matter ....Pages i-xxiv
Front Matter ....Pages 1-1
A Formalism Unifying Defeasible Logics and Repair Semantics for Existential Rules (Abdelraouf Hecham, Pierre Bisquert, Madalina Croitoru)....Pages 3-17
Using Grammar-Based Genetic Programming for Mining Disjointness Axioms Involving Complex Class Expressions (Thu Huong Nguyen, Andrea G. B. Tettamanzi)....Pages 18-32
An Incremental Algorithm for Computing All Repairs in Inconsistent Knowledge Bases (Bruno Yun, Madalina Croitoru)....Pages 33-47
Knowledge-Based Matching of n-ary Tuples (Pierre Monnin, Miguel Couceiro, Amedeo Napoli, Adrien Coulet)....Pages 48-56
Front Matter ....Pages 57-57
Some Programming Optimizations for Computing Formal Concepts (Simon Andrews)....Pages 59-73
Preventing Overlaps in Agglomerative Hierarchical Conceptual Clustering (Quentin Brabant, Amira Mouakher, Aurélie Bertaux)....Pages 74-89
Interpretable Concept-Based Classification with Shapley Values (Dmitry I. Ignatov, Léonard Kwuida)....Pages 90-102
Pruning in Map-Reduce Style CbO Algorithms (Jan Konecny, Petr Krajča)....Pages 103-116
Pattern Discovery in Triadic Contexts (Rokia Missaoui, Pedro H. B. Ruas, Léonard Kwuida, Mark A. J. Song)....Pages 117-131
Characterizing Movie Genres Using Formal Concept Analysis (Raji Ghawi, Jürgen Pfeffer)....Pages 132-141
Front Matter ....Pages 143-143
Restricting the Maximum Number of Actions for Decision Support Under Uncertainty (Marcel Gehrke, Tanya Braun, Simon Polovina)....Pages 145-160
Vocabulary-Based Method for Quantifying Controversy in Social Media (Juan Manuel Ortiz de Zarate, Esteban Feuerstein)....Pages 161-176
Multi-label Learning with a Cone-Based Geometric Model (Mena Leemhuis, Özgür L. Özçep, Diedrich Wolter)....Pages 177-185
Conceptual Reasoning for Generating Automated Psychotherapeutic Responses (Graham Mann, Beena Kishore, Pyara Dhillon)....Pages 186-194
Benchmarking Inference Algorithms for Probabilistic Relational Models (Tristan Potten, Tanya Braun)....Pages 195-203
Analyzing Psychological Similarity Spaces for Shapes (Lucas Bechberger, Margit Scheibel)....Pages 204-207
Back Matter ....Pages 209-209

Citation preview

LNAI 12277

Mehwish Alam Tanya Braun Bruno Yun (Eds.)

Ontologies and Concepts in Mind and Machine 25th International Conference on Conceptual Structures, ICCS 2020 Bolzano, Italy, September 18–20, 2020, Proceedings

123

Lecture Notes in Artificial Intelligence Subseries of Lecture Notes in Computer Science

Series Editors Randy Goebel University of Alberta, Edmonton, Canada Yuzuru Tanaka Hokkaido University, Sapporo, Japan Wolfgang Wahlster DFKI and Saarland University, Saarbrücken, Germany

Founding Editor Jörg Siekmann DFKI and Saarland University, Saarbrücken, Germany

12277

More information about this series at http://www.springer.com/series/1244

Mehwish Alam Tanya Braun Bruno Yun (Eds.) •



Ontologies and Concepts in Mind and Machine 25th International Conference on Conceptual Structures, ICCS 2020 Bolzano, Italy, September 18–20, 2020 Proceedings

123

Editors Mehwish Alam FIZ Karlsruhe – Leibniz Institute for Information Infrastructure Karlsruhe, Germany

Tanya Braun Institute of Information Systems University of Lübeck Lübeck, Germany

Bruno Yun Department of Computing Science University of Aberdeen Aberdeen, UK

ISSN 0302-9743 ISSN 1611-3349 (electronic) Lecture Notes in Artificial Intelligence ISBN 978-3-030-57854-1 ISBN 978-3-030-57855-8 (eBook) https://doi.org/10.1007/978-3-030-57855-8 LNCS Sublibrary: SL7 – Artificial Intelligence © Springer Nature Switzerland AG 2020 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The 25th edition of the International Conference on Conceptual Structures (ICCS 2020) took place in Bozen-Bolzano, Italy, during 18–20 September, 2020, under the title “Ontologies and Concepts in Mind and Machine.” Since 1993, ICCS has been an yearly venue for publishing and discussing new research methods along with their practical applications related to conceptual structures. This year ICCS was one of the 10 different conferences in the Bolzano Summer of Knowledge (BOSK 2020), which spanned over the course of two weeks. Each of these conferences complemented each other on a broad range of disciplines such as Philosophy, Knowledge Representation, Logic, Conceptual Modeling and Ontology Engineering, Medicine, Cognitive Science, and Neuroscience. Due to the global COVID-19 pandemic, the conference was moved to a virtual venue, with tutorials, keynotes, and research presentations taking place online to provide a safe environment for participants from around the world. The topics of this year’s conference range from Formal Concept Analysis to Decision Making, from Machine Learning to Natural Language Processing, all unified by the common underlying notion of conceptual structures. The call asked for regular papers reporting on novel technical contributions, short papers describing ongoing work or applications, and extended abstracts highlighting the visions and open research problems. Overall, 27 submissions were received out of which 24 were accepted for reviewing. The committee decided to accept 10 regular papers, 5 short papers, and 1 extended abstract which corresponds to an acceptance rate of 59%. Each submission received three to four reviews, with 3:625 reviews on average. In total, our Program Committee members, supported by four additional reviewers, delivered 87 reviews. The review process was double-blind, with papers anonymized for the reviewers and reviewer names unknown to the authors. For the first time, we organized a bidding on papers to ensure that reviewers received papers within their field of expertise. The response to the bidding process allowed us to assign each paper to reviewers who had expressed an interest in reviewing a particular paper. The final decision was made after the authors had a chance to reply to the initial reviews via a rebuttal to correct factual errors or answer reviewer questions. We believe this procedure ensured that only high-quality contributions were presented at the conference. We are delighted and proud to announce that this year’s program included two tutorials, “Mathematical Similarity Models” by Moritz Schubert and Dominik Endres (Philipps-Universität Marburg, Germany) as well as “FCA and Knowledge Discovery” by Amedeo Napoli (Université de Lorraine, France). Two keynote talks were also held during the conference: “Towards Ordinal Data Science” by Gerd Stumme (University of Kassel, Germany) and “Compositional Conceptual Spaces” by Sebastian Rudolph (TU Dresden, Germany). Note that this volume provides the extended abstracts of all the tutorials and the keynote talks. As general chair and program chairs, we would like to thank our speakers for their inspiring and insightful talks. Our thanks also go out to the local organization of BOSK

vi

Preface

who provided support in terms of registration and setting up a virtual conference. We would like to thank the Program Committee members and additional reviewers for their work. Without their substantial voluntary contribution, it would not have been possible to set up such a high-quality conference program. We would also like thank EasyChair for their support in handling submissions and Springer for their support in making these proceedings possible. Our institutions, the Leibniz Institute for Information Infrastructure Karlsruhe, Germany, the University of Lübeck, Germany, and the University of Aberdeen, UK, also provided support for our participation, for which we are grateful. Last but not least, we thank the ICCS Steering Committee for their ongoing support and dedication to ICCS. July 2020

Mehwish Alam Tanya Braun Bruno Yun

Organization

Steering Committee Madalina Croitoru Dominik Endres Ollivier Haemmerlé Uta Priss Sebastian Rudolph

University of Montpellier, France Philipps-Universitat Marburg, Germany University of Toulouse-Mirail, France Ostfalia University, Germany TU Dresden, Germany

Program Committee Mehwish Alam Bernd Amann Moulin Bernard Tanya Braun Peggy Cellier Peter Chapman Dan Corbett Olivier Corby Diana Cristea Madalina Croitoru Licong Cui Harry Delugach Florent Domenach Dominik Endres Jérôme Euzenat Raji Ghawi Marcel Gehrke Ollivier Haemmerlé Dmitry Ignatov Hamamache Kheddouci Leonard Kwuida Jérôme Lang Natalia Loukachevitch Pierre Marquis Philippe Martin Franck Michel Amedeo Napoli

Leibniz Institute for Information Infrastructure Karlsruhe, Germany Sorbonne University, France Laval University, Canada University of Lübeck, Germany IRISA, INSA Rennes, France Edinburgh Napier University, UK Optimodal Technologies, LLC, USA University of Côte d’Azur, France Babes-Bolyai University, Romania University of Montpellier, France The University of Texas Health Science Center at Houston, USA University of Alabama, Huntsville, USA Akita International University, Japan University of Marburg, Germany University of Grenoble Alpes, France TU Munich, Germany University of Lübeck, Germany University of Toulouse-Mirail, France National Research University Higher School of Economics, Russia Claude Bernard University Lyon 1, France Bern University of Applied Sciences, Switzerland Paris Dauphine University, France Moscow State University, Russia Artois University, France University of La Réunion, France University of Côte d’Azur, France University of Lorraine, France

viii

Organization

Sergei Obiedkov Nir Oren Nathalie Pernelle Heather D. Pfeiffer Simon Polovina Uta Priss Marie-Christine Rousset Sebastian Rudolph Christian Sacarea Fatiha Saïs Diana Sotropa Iain Stalker Gerd Stumme Srdjan Vesic Serena Villata Bruno Yun

National Research University Higher School of Economics, Russia University of Aberdeen, UK University of Paris-Sud, France Akamai Physics, Inc., USA Sheffield Hallam University, UK Ostfalia University, Germany University of Grenoble Alpes, France TU Dresden, Germany Babes-Bolyai University, Romania University of Paris-Saclay, France Babes-Bolyai University, Romania The University of Manchester, UK University of Kassel, Germany Artois University, France CNRS, France University of Aberdeen, UK

Additional Reviewers Rashmie Abeysinghe Dominik Dürrschnabel Maximilian Stubbemann Fengbo Zheng

University University University University

of of of of

Kentucky, USA Kassel, Germany Kassel, Germany Kentucky, USA

Abstracts of Keynote Talks

Towards Ordinal Data Science

Gerd Stumme Knowledge & Data Engineering Group, Department of Electrical Engineering and Computer Science and Research Center for Information System Design (ITeG), University of Kassel, Germany https://www.kde.cs.uni-kassel.de/stumme

Order is a predominant paradigm for perceiving and organizing our physical and social environment, to infer meaning and explanation from observation, and to search and rectify decisions. For instance, we admire the highest mountain on earth, observe pecking order among animals, schedule events in time, and organize our collaborations in hierarchies. The notion of order is deeply embedded in our language, as every adjective gives rise to a comparative (e.g., better, more expensive, more beautiful). Furthermore, specific technical and social processes have been established for dealing with ordinal structures, e.g., scheduling routines for aircraft take-offs, first-in-first out queuing at bus stops, deriving the succession order as depth-first linear extension of the royal family tree, or discussing only the borderline cases in scientific programme committees. These processes, however, are rather task-specific. In many cases, entities can be ordered through real-valued valuation functions like size or price. This process of quantification has been boosted by different factors, including i) the development of scientific measuring instruments since the scientific revolution, ii) the claim that the social sciences should use the same numerical methods which had been successful in natural sciences, and iii) nowadays by the instant availability of an enormous range of datasets to almost all aspects of science and everyday life. As real numbers constitute an ordered field, i.e., a field equipped with a linear order, the analysis of such data benefits from the existence of the algebraic operators of fields (addition, substraction, multiplication, division, and the existence of 0 and 1) together with total comparability (i.e., every pair of its elements is comparable) —a combination that allows for various measures of tendency (such as mean, variance, and skewness) as well as for a variety of transformations. If more than one real-valued dimension is present, this yields to a real vector space (as Cartesian product of the field of reals), which results in a multitude of additional descriptive measures and metric properties, such as volumes, angles, correlation, covariance. This is the standard setting for the majority of data analysis and machine learning models, and many algorithms (eg k-Means clustering, logistic regression, neural networks, support vector machines, to name just a few) have been developed for these tasks. However, organizing hierarchical relationships by means of numerical values is not always adequate, as this kind of organization presupposes two important conditions: i) every pair of entities has to be comparable, and ii) the sizes of differences between numerical values are meaningful and thus comparable. In many situations, however, this is not the case: (i) does not hold, for instance, in concept hierarchies (‘mankind’ is

xii

G. Stumme

neither a subconcept nor a superconcept of ‘ocean’) nor in organizations (a member of parliament is neither above nor below a secretary of state); and (ii) does not hold, for instance, in school grades (In the German school system, is the difference between 1 (very good) and 2 (good) equal to the difference between 4 (sufficient) and 5 (insufficient/fail)?) nor in organisations (In the European Commission, is an advisor closer to a deputy director general than a head of group to a director?). To address such variations of data types, S. S. Stevens has distinguished four levels of measurement: nominal, ordinal, interval, and ratio.1 For data on the ratio level (e.g., height), all above-mentioned operations are allowed (division, for instance, provides ratios). Data on the interval level (e.g., temperature measured in Celsius or Fahrenheit) do not have a meaningful zero as point of reference and thus do not allow for ratios, while the comparison of differences is still meaningful. Ordinal data (e.g., the parent relation) only allow for comparisons, and nominal data (e.g., eye color) only for determining equality. Although there is a large range of preliminary work, Ordinal Data Science is only just emerging as distinct research field. It focusses on the development of data science methods for ordinal data, as they lack the large variety of methods that have been developed for other data types. To this end, we define ordinal data as sets of entities (‘data points’) together with one or more order relations, i.e., binary relations that are reflexive, transitive and anti-reflexive (or variations thereof, such as quasiorders or weak orders). Ordinal data belong thus to the large family of relational data which have received high interest of the computer science community in the last years, due to developments in related fields such as sociology (“relational turn”), genetics or epidemiology, and socio-technical developments such as the rise of online social networks or knowledge graphs. This means that, for the analysis of ordinal data, one can benefit from all kinds of measures and methods for relational data, as for instance centrality measures and clustering algorithms for (social) network data or inductive logic programming or statistical relational learning from the field of relational data mining. The specific structure of ordinal data, however, allows additionally to tap on the rich – but up to date mostly unexploited for data science – toolset of mathematical order theory and lattice theory. In this talk, we present selected first approaches.

1

S. S. Stevens: On the Theory of Scales of Measurement. In: Science 103.2684 (1946), 677. https:// doi.org/10.1126/science.103.2684.677. eprint: https://science.sciencemag.org/content/103/2684/677. full.pdf.

Compositional Conceptual Spaces

Sebastian Rudolph Computational Logic Group, TU Dresden, Germany [email protected]

It has often been argued that certain aspects of meaning can or should be represented geometrically, i.e., as conceptual spaces [6]. Vector space models (VSMs) have existed for some time in natural language processing (NLP) and recently regained interest under the term (vector space) embeddings in the context of deep learning. It was shown that the principle of distributionality [4], according to which “a word is characterized by the company it keeps” – implying that words occurring in similar contexts bear similar meanings –, can serve as a powerful and versatile paradigm for automatically obtaining word-to-vector mappings from large natural language corpora. While distributional VSMs and the techniques for acquiring them (such as word2vec [7]) have shown to perform very well for tasks related to semantic similarity and other relations between singular words, the question how to lift this approach to larger units of language brings about new challenges. The principle of compositionality, going back to Frege [5], states that the meaning of a complex language construct is a function of its constituents’ meanings. A significant body of work in computational linguistics deals with the question if and how the principle of compositionality can be applied to geometrical models. For VSMs, a variety of potential vector-composition operations have been proposed and investigated. Many of those, including the most popular ones (vector addition and componentwise multiplication), are commutative and therefore oblivious to the order of words in a given phrase. Around a decade ago, compositional matrix space models (CMSMs) were proposed as a unified framework of compositionality [8]. CMSMs map words to quadratic matrices and use matrix multiplication as the one and only composition operation. Next to certain plausibility arguments in favor of CMSMs, it was shown that they are capable of emulating many of the common vector-based composition functions. During the past ten years, more results have been obtained regarding the learnability of CMSMs and their feasibility for practical NLP tasks [1–3, 9]. Among others, it has been demonstrated that CMSMs can capture sentiment analysis tasks using matrices of very low dimensionality, correctly modelling word-order sensitive phenomena (like the different intensities in the phrases “not really good” and “really not good”). Moreover, they can be successfully applied to distinguish idiomatic phrases (such as “couch potato”) from compositional ones (such as “stomach pain”). This keynote provides an introduction into CMSMs and gives an overview of their theoretical and practical aspects.

xiv

S. Rudolph

Acknowledgements. Over the years, I have collaborated on the subject with several colleagues. I am particularly thankful to Eugenie Giesbrecht, Shima Asaadi, and Dagmar Gromann.

References 1. Asaadi, S.: Compositional Matrix-Space Models: Learning Methods and Evaluation. Ph.D. thesis, TU Dresden (2020) 2. Asaadi, S., Rudolph, S.: On the correspondence between compositional matrix-space models of language and weighted automata. In: Jurish, B., Maletti, A., Würzner, K.M., Springmann, U. (eds.) Proceedings of the SIGFSM Workshop on Statistical NLP and Weighted Automata (StatFSM 2016), pp. 70–74. Association for Computational Linguistics (2016) 3. Asaadi, S., Rudolph, S.: Gradual learning of matrix-space models of language for sentiment analysis. In: Blunsom, P., et al. (eds.) Proceedings of the 2nd Workshop on Representation Learning for NLP (RepL4NLP 2017), pp. 178–185. Association for Computational Linguistics (2017) 4. Firth, J.R.: A synopsis of linguistic theory 1930-55. Studies in linguistic analysis 1952–1959, pp. 1–32 (1957) 5. Frege, G.: Die Grundlagen der Arithmetik: eine logisch-mathematische Untersuchung über den Begriff der Zahl. W. Koebner, Breslau, Germany (1884) 6. Gärdenfors, P.: Conceptual Spaces: the Geometry of Thought. MIT Press, Cambridge (2000) 7. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. In: International Conference on Learning Representations (ICLR 2013) (2013) 8. Rudolph, S., Giesbrecht, E.: Compositional matrix-space models of language. In: Hajic, J., Carberry, S., Clark, S. (eds.) Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), pp. 907–916. Association for Computational Linguistics (2010) 9. Yessenalina, A., Cardie, C.: Compositional matrix-space models for sentiment analysis. In: Barzilay, R., Johnson, M. (eds.) Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011), pp. 172–182. Association for Computational Linguistics (2011)

Tutorial Abstracts

FCA and Knowledge Discovery Tutorial at ICCS 2020 Amedeo Napoli Université de Lorraine, CNRS, Inria, LORIA, 54000 Nancy, France [email protected]

1 Introduction and Motivation In this tutorial we will introduce and discuss how FCA [1, 2, 4, 5] and two main extensions, namely Pattern Structures [3, 7] and Relational Concept Analysis (RCA) [9], can be used for knowledge discovery purposes, especially in pattern and rule mining, in data and knowledge processing, data analysis, and classification. Indeed, FCA is aimed at building a concept lattice starting from a binary table where objects are in rows and attributes in columns. But FCA can deal with more complex data. Pattern Structures allow to consider objects with descriptions based on numbers, intervals, sequences, trees and general graphs [3, 6]. RCA was introduced for taking into account relational data and especially relations between objects [9]. These two extensions rely on adapted FCA algorithms and can be efficiently used in real-world applications for knowledge discovery, e.g. text mining and ontology engineering, information retrieval and recommendation, analysis of sequences based on stability, semantic web and classification of Linked Open Data, biclustering, and functional dependencies.

2 Program of the tutorial The tutorial will be divided in three main parts, including (i) the basics of FCA, (ii) the processing of complex data with Pattern Structures and Relational Concept Analysis, and (iii) a presentation of some applications about the mining of linked data, the discovery of functional dependencies, and some elements about biclustering. A tentative program is given here below. – Introduction to Formal Concept Analysis (basics and examples): formal context, Galois connections, formal concept, concept lattice, and basic theorem of FCA. – Reduced notation, conceptual scaling for non binary contexts, implications and association rules in a concept lattice. – Algorithms for computing formal concepts and the associated concept lattice, complexity of the design process, building and visualizing concept lattices. – Measures for selecting interesting concepts in the concept lattice [8]. – Basics on Pattern Structures for mining complex data, the example of numerical and interval data, pattern concepts and pattern concept lattice.

xviii

A. Napoli

– Elements on Relational Concept Analysis, relational context family, relational concepts and relational concept lattice. – Applications: mining definitions in linked data, mining functional dependencies, biclustering, hybrid Knowledge Discovery.

3 Conclusion FCA is nowadays gaining more and more importance in knowledge and data processing, especially in knowledge discovery, knowledge representation, data mining and data analysis. Moreover, our experience in the domain shows that FCA can be used with benefits in a wide range of applications, as it also offers very efficient algorithms able to deal with complex and possibly large data. In addition, interested researchers have the possibility to attend the two main Conferences, International Conference on Concept Lattices and Applications (CLA) and International Conference on Formal Concept Analysis (ICFCA). Moreover, a companion workshop, namely FCA4AI, is regularly organized by Sergei O. Kuznetsov, Amedeo Napoli and Sebastian Rudolph (see http://fca4ai.hse.ru/). This year, we have the eighth edition of the the FCA4AI workshop co-located with the ECAI 2020 Conference at the end of August 2020. The whole series of the proceedings of the seven preceding workshops is available as CEUR proceedings (again see http:// fca4ai.hse.ru/).

References 1. Belohlavek, R.: Introduction to Formal Concept Analysis. Research report. Palacky University, Olomouc (2008). http://belohlavek.inf.upol.cz/vyuka/IntroFCA.pdf 2. Claudio Carpineto and Giovanni Romano. Concept Data Analysis: Theory and Applications. Wiley Chichester (2004) 3. Ganter, B., Kuznetsov, S.O.: Pattern structures and their projections. In: Delugach, H.S., Stumme, G. (eds.) ICCS 2001. LNCS, vol 2120, pp. 129–142. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-44583-8_10 4. Ganter, B., Obiedkov, S.A.: Conceptual Exploration. Springer, Heidelberg (2016). https://doi. org/10.1007/978-3-662-49291-8 5. Ganter, B., Wille, R.: Formal Concept Analysis. Springer, Heidelberg (1999). https://doi.org/ 10.1007/978-3-642-59830-2 6. Kaytoue, M., Codocedo, V., Buzmakov, A., Baixeries, J., Kuznetsov, S.O., Napoli, A.: Pattern structures and concept lattices for data mining and knowledge processing. In: Bifet, A., et al. (eds.) ECML PKDD 2015. LNCS, vol. 9286, pp. 227–231 (2015). Springer, Cham. https://doi.org/10.1007/978-3-319-23461-8_19 7. Kaytoue, M., Kuznetsov, S.O., Napoli, A., Duplessis, S.: Mining gene expression data with pattern structures in formal concept analysis. Inf. Sci. 181(10), 1989–2001 (2011) 8. Kuznetsov, S.O., Makhalova, TP.: On interestingness measures of formal concepts. Inf. Sci. 442–443:202–219 (2018) 9. Rouane-Hacene, M., Huchard, M., Napoli, A., Valtchev, P.: Relational concept analysis: mining concept lattices from multi-relational data. Ann. Math. Artif. Intell. 67(1), 81–108 (2013)

Mathematical Similarity Models in Psychology

Moritz Schubert and Dominik Endres Philipps University Marburg, Marburg, Germany {moritz.schubert,dominik.endres}@uni-marburg.de

Similarity assessment is among the most essential functions in human cognition. It is fundamental to imitation learning, it plays an important role in categorization and it is a key mechanism for retrieving contents of memory. Hence, the description and explanation of human similarity perception is a pertinent goal of mathematical psychology. In the following, we will outline three of the most relevant kinds of mathematical similarity models: Geometric (Sect. 1.1), featural (Sect. 1.2) and structural (Sect. 1.3) models. These models not only differ in their mathematical details, but also in the way they represent objects as cognitive concepts. Finally, we will discuss the assessment of human stimulus representations, a crucial issue for the empirical evaluation of similarity models.

1 Similarity Models 1.1

Geometric Models

Geometric models Shepard (1962) conceptualize similarity as distances, implying that object concepts can usefully be thought of as points in an imaginary multidimensional Euclidean space. As distances, similarity ratings would have to adhere to the metric axioms: 1) minimality, i.e. sða; bÞ  sða; aÞ ¼ 1, 2) symmetry, i.e. sða; bÞ ¼ sðb; aÞ and 3) triangle inequality, i.e. sða; cÞ  sða; bÞ þ sðb; cÞ, where s is a real number in the range ½0; 1 representing a similarity measure and fa; b; cg are objects. Intuitively, all of these seem to hold for similarity (e.g. in favor of triangle inequality, if a and b are very similar and b and c are very similar, it stands to reason that a and c are not all that dissimilar). An advantage of this group of models is that many stimuli can easily be represented as points on one or several dimensions (e.g. colors with the RGB dimensions). 1.2

Featural Models

Tversky (1977) criticized the use of metric axioms for explaining similarity: In confusability tasks participants tend to identify some letters more often as certain distractor letters than as themselves, a finding that is in direct contradiction with minimality. The empirical result that sðo; pÞ [ sðp; oÞ often holds in cases where o and p belong to the same category, but p is more prototypical for the category than o (e.g. an ellipse is more similar to a circle than a circle is to an ellipse) provides evidence against symmetry. Finally, plausible counter-examples can be brought against the triangle inequality (e.g.

xx

M. Schubert and D. Endres

a pineapple is similar to a melon, because they are both exotic fruits, and a melon is similar to a beach ball, because of its shape, but a pineapple is not at all similar to a beach ball). As an alternative, Tversky (1977) proposed to represent objects as sets of features. His contrast model states sða; bÞ [ sðc; d Þ whenever A \ B  C \ D, A  B  C  D and B  A  D  C, where the capital letters represent sets of features of the small lettered objects. Tversky ensured that his scale s is totally ordered through the solvability axiom, which, given a set of objects, postulates the existence of new objects with specific feature sets to ensure that any two stimuli can be ordered. However, having to invent imaginary stimuli whenever carrying out a similarity comparison seems to be an implausibly inefficient process for the mind. Another featural model, the GLW model (Geist et al. 1996) offers a solution to this problem by situating similarity ratings in a partial order, i.e. introducing the option of not being able to determine the ranking of two object pairs. This can be interpreted psychologically as the two object pairs being incomparable (e.g. asking whether a parrot is more similar to a bicycle than a toothbrush is to a car seems like an unanswerable question). The GLW model adds one statement to the set of conditions under which sða; bÞ [ sðc; d Þ: A [ B  C [ D, where A [ B are all the features under consideration that are neither part of a nor b. This new condition is supposed to account for the context under which the similarity comparison is carried out. However, an empirical evaluation of the GLW model (Schubert and Endres 2018) suggests that the requirement of all four conditions being fulfilled is too strict and that as a result the model makes very few predictions, strongly reducing its usefulness. 1.3

Structural Models

While representing objects as sets of features is a very versatile approach, it is also possible to derive representations based on relations more directly. Structural models (e.g. (Falkenhainer et al. 1989)) try to remedy this shortcoming by giving objects structured representations which state the type of relationship between features (e.g. the two features “pillars” and “roof” in the “cathedral” concept might be connected through a “supports” relationship). In this view, the similarity between two stimuli is the degree to which their two structural representations align which each other (e.g. by encompassing the same features and/or the same relationships).

2 Assessment of Stimulus Representation For a similarity model to explain human similarity perception, it needs to be able to predict human behavior. One way to test this is to feed participants’ cognitive representations of objects into a model as input and see whether the similarity ratings output by the model match the ones given by participants. The inherent challenge in this approach is finding a sufficiently accurate representation of the participants’ cognitive stimulus representations. One approach are latent variable models (LVMs) that try to extract the latent parameters underlying participants’ similarity ratings. For geometric models, a

Mathematical Similarity Models in Psychology

xxi

commonly used LVM is the nonmetric multidimensional scaling (MDS) approach described by Shepard (1962). In this procedure, the points representing the stimuli are being iteratively rearranged until the inverse ranking of the distances between them corresponds (as closely as possible) to the ranking of the similarity ratings. Since for N stimuli this can trivially be accomplished in N  1 dimensional space, an additional requirement of the solution is that it uses as few dimensions as possible. One LVM used for featural models is additive clustering (Navarro and Griffiths 2008), a type of clustering that allows an object to belong to multiple clusters. Hence, the clusters can be interpreted as features of the objects. In order to be psychologically useful, the output of LVMs has to be filled with meaning, e.g. the dimensions produced by MDS have to be named. Ideally, this is done by the participants themselves. An alternative to LVMs is to ask the participants directly about their cognitive concepts. For example, for featural models participants could be asked to type a certain number of words they would use to describe a stimulus. While direct approaches might make fewer assumptions than LVMs, for non-featural models they ask participants to think in unduely abstract ways (e.g. having to arrange objects in a two-dimensional space).

References Falkenhainer, B., Forbus, K.D., Gentner, D.: The structure-mapping engine: algorithm and examples. Artif. Intell. 41(1), 1–63 (1989). https://doi.org/10.1016/0004-3702 (89)90077-5 Geist, S., Lengnink, K., Wille, R.: An order-theoretic foundation for similarity measures. In: Lengnink, K. (ed.) Formalisierungen von Ähnlichkeit Aus Sicht Der Formalen Begriffsanalyse, pp. 75–87. Shaker Verlag (1996) Navarro, D.J., Griffths, T.L.: Latent features in similarity judgments: a nonparametric bayesian approach. Neural Comput. 20(11), 2597–2628 (2008). https://doi.org/10. 1162/neco.2008.04-07-504 Schubert, M., Endres, D.: Empirically evaluating the similarity model of Geist, Lengnink and Wille. In: Chapman, P., Endres, D., Pernelle, N. (eds.) ICCS 2018. LNCS, vol. 10872, pp. 88–95 (2018). Springer, Cham. https://doi.org/10.1007/9783-319-91379-7_7 Shepard, R.N.: The analysis of proximities: multidimensional scal-ing with an unknown distance function. I. Psychometrika 27(2), 125–140 (1962). https://doi.org/ 10.1007/bf02289630 Tversky, A.: Features of similarity, 84 (4), 327–352 (1977). https://doi.org/10.1037/ 0033-295x.84.4.327

Contents

Knowledge Bases A Formalism Unifying Defeasible Logics and Repair Semantics for Existential Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Abdelraouf Hecham, Pierre Bisquert, and Madalina Croitoru

3

Using Grammar-Based Genetic Programming for Mining Disjointness Axioms Involving Complex Class Expressions . . . . . . . . . . . . . . . . . . . . . . Thu Huong Nguyen and Andrea G. B. Tettamanzi

18

An Incremental Algorithm for Computing All Repairs in Inconsistent Knowledge Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bruno Yun and Madalina Croitoru

33

Knowledge-Based Matching of n-ary Tuples . . . . . . . . . . . . . . . . . . . . . . . Pierre Monnin, Miguel Couceiro, Amedeo Napoli, and Adrien Coulet

48

Conceptual Structures Some Programming Optimizations for Computing Formal Concepts . . . . . . . Simon Andrews

59

Preventing Overlaps in Agglomerative Hierarchical Conceptual Clustering . . . Quentin Brabant, Amira Mouakher, and Aurélie Bertaux

74

Interpretable Concept-Based Classification with Shapley Values . . . . . . . . . . Dmitry I. Ignatov and Léonard Kwuida

90

Pruning in Map-Reduce Style CbO Algorithms. . . . . . . . . . . . . . . . . . . . . . Jan Konecny and Petr Krajča

103

Pattern Discovery in Triadic Contexts . . . . . . . . . . . . . . . . . . . . . . . . . . . . Rokia Missaoui, Pedro H. B. Ruas, Léonard Kwuida, and Mark A. J. Song

117

Characterizing Movie Genres Using Formal Concept Analysis . . . . . . . . . . . Raji Ghawi and Jürgen Pfeffer

132

xxiv

Contents

Reasoning Models Restricting the Maximum Number of Actions for Decision Support Under Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Marcel Gehrke, Tanya Braun, and Simon Polovina

145

Vocabulary-Based Method for Quantifying Controversy in Social Media . . . . Juan Manuel Ortiz de Zarate and Esteban Feuerstein

161

Multi-label Learning with a Cone-Based Geometric Model. . . . . . . . . . . . . . Mena Leemhuis, Özgür L. Özçep, and Diedrich Wolter

177

Conceptual Reasoning for Generating Automated Psychotherapeutic Responses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Graham Mann, Beena Kishore, and Pyara Dhillon

186

Benchmarking Inference Algorithms for Probabilistic Relational Models . . . . Tristan Potten and Tanya Braun

195

Analyzing Psychological Similarity Spaces for Shapes . . . . . . . . . . . . . . . . . Lucas Bechberger and Margit Scheibel

204

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

209

Knowledge Bases

A Formalism Unifying Defeasible Logics and Repair Semantics for Existential Rules Abdelraouf Hecham1 , Pierre Bisquert2(B) , and Madalina Croitoru1 1

INRIA GraphIK, Universit´e de Montpellier, Montpellier, France {hecham,croitoru}@lirmm.fr 2 IATE, INRA, INRIA GraphIK, Montpellier, France [email protected]

Abstract. Two prominent ways of handling inconsistency provided by the state of the art are repair semantics and Defeasible Logics. In this paper we place ourselves in the setting of inconsistent knowledge bases expressed using existential rules and investigate how these approaches relate to each other. We run an experiment that checks how human intuitions align with those of either repair-based or defeasible methods and propose a new semantics combining both worlds.

1

Introduction

Conflicts in knowledge representation cause severe problems, notably due the principle of explosion (from falsehood anything follows). These conflicts arise from two possible sources: either the facts are incorrect (known as inconsistence), or the rules themselves are contradictory (known as incoherence). In order to preserve the ability to reason in presence of conflicts, several approaches can be used, in particular Defeasible Logics [8,18] and Repair Semantics [16]. These two approaches stem from different needs and address conflicts in different ways. A key difference between defeasible logics and Repair Semantics is that the first was designed for incoherence while the latter was designed for inconsistence. However, since inconsistence is a special type of incoherence [11], defeasible logics can be applied to inconsistent but coherent knowledge, and thus be compared to the Repair Semantics. In this paper we want to investigate how the different intuitions of defeasible logics and Repair Semantics relate to each other. In order to attain the above mentioned objective, we make use of a combinatorial structure called Statement Graph [13]. Statement Graphs have been defined as way to reason defeasibly with existential rules using forward chaining. The reasoning is based on labeling functions shown to correspond to various flavors of Defeasible Logics. This paper proposes a new labeling for Repair Semantics and paves the way to combine both conflict-tolerant approaches in one unifying formalism.

c Springer Nature Switzerland AG 2020  M. Alam et al. (Eds.): ICCS 2020, LNAI 12277, pp. 3–17, 2020. https://doi.org/10.1007/978-3-030-57855-8_1

4

2

A. Hecham et al.

Logical Language

We consider a first order language L  with constants but no other function symbol composed of formulas built with the usual quantifiers (∃, ∀) and the connectives (→, ∧), on a vocabulary constituted of infinite sets of predicates, constants and variables. A fact is a ground atom (an atom with only constants) or an existentially closed atom. An existential dependency)  rule (a.k.a. a tuple generating  r is a formula of the form ∀X, Y B(X, Y ) → ∃Z H(X, Z) where X, Y are tuples of variables, Z is a tuple of existential variables, and B, H are finite nonempty conjunctions of atoms respectively called body and head of r and denoted Body(r) and Head(r). In this paper we consider rules with atomic head (any rule can be transformed into a set of rules with atomic head [4]). A negative constraint is a rule of the form ∀X B(X) → ⊥ where B is a conjunction of atoms and X is a set of variables with possibly constants. Negative constraints are used to express conflicts. In this paper, we take into account binary negative constraints, which contain only two atoms in their body.1 A knowledge base is a tuple KB = (F, R, N ) where F is a finite set of facts, R is a finite set of rules, and N is a finite set of negative constraints. We denote the set of models of a knowledge base by models(F, R ∪ N ). A derivation is a (potentially infinite) sequence of tuples Di = (Fi , ri , πi ) composed of a set of facts Fi , a rule ri and a homomorphism πi from Body(ri ) to Fi , where D0 = (F, ∅, ∅) and such that Fi results from the application of rule ri to Fi−1 according to πi , i.e. Fi = α(Fi−1 , ri , πi ). A derivation from a set of facts F to a fact f is a minimal sequence of rules applications starting from D0 = (F0 ⊆ F, ∅, ∅) and ending with Dn = (Fn , rn , πn ) such that f ∈ Fn . A chase (a.k.a. forward chaining) is the exhaustive application of a set of rules over a set of facts in a breadth-first manner (denoted chase(F, R)) until no new facts are generated, the resulting “saturated” set of all initial and generated facts is denoted F ∗ . While this is not always guaranteed to stop, certain recognizable classes of existential rules that are decidable for forward chaining have been defined [3]; we limit ourselves to the recognizable FES (Finite Expansion Set) class [5] and use Skolem chase [17]. We consider ground atomic queries. We denote that a query is entailed from a knowledge base KB by KB |= Q (equivalent to chase(F, R) |= Q [9]). Inconsistence vs. Incoherence. Conflicts appear in a knowledge base whenever a negative constraint becomes applicable: we say that two facts f1 and f2 are in conflict if the body of a negative constraint can be mapped to {f1 , f2 }. There are two possible sources of conflicts, either the facts are incorrect (known as inconsistence), or the rules themselves are contradictory (known as incoherence). A KB = (F, R, N ) is inconsistent iff it has an empty set of models (i.e. models(F, R ∪ N ) = ∅). A knowledge base is incoherent iff R ∪ N are unsatisfiable, meaning that there does not exist any set of facts S (even outside 1

It should be noted that this restriction does not lead to a loss of expressive power, as [2] shows.

A Formalism Unifying Defeasible Logics and Repair Semantics

5

the facts of the knowledge base) where all rules in R are applicable such that models(S, R ∪ N ) = ∅ [11]. Example 1 (Inconsistence). Consider the following KB = (F, R, N ) that describes a simplified legal situation: If there is a scientific evidence incriminating a defendant then he is responsible for the crime, if there is a scientific evidence absolving a defendant then he is not responsible for the crime. A defendant is guilty if responsibility is proven. If a defendant is guilty then he will be given a sentence. If a defendant has an alibi then he is innocent. There is a scientific evidence “e1” incriminating a female defendant “alice”, while another scientific evidence “e2” is absolving her of the crime. She also has an alibi. Is Alice innocent (i.e. Q1 = innocent(alice))? Is she guilty (i.e. Q2 = guilty(alice))? – F = {incrim(e1, alice), absolv(e2, alice), alibi(alice), f emale(alice)} – R = {r1 : ∀X, Y incrim(X, Y ) → resp(Y ), r2 : ∀X, Y absolv(X, Y ) → notResp(X), r3 : ∀X resp(X) → guilty(X), r4 : ∀X alibi(X) → innocent(X), r5 : ∀X guilty(X) → ∃Y sentence(X, Y )} – N = {∀X resp(X) ∧ notResp(X) → ⊥, ∀X guilty(X) ∧ innocent(X) → ⊥} The saturated set of facts resulting from the chase is – F ∗ = {incrim(e1, alice), absolv(e2, alice), alibi(alice), f emale(alice), resp(alice), notResp(alice), guilty(alice), innocent(alice), sentence(alice, null1 )}. This knowledge base is inconsistent, because a negative constraint is applicable, (thus models(F, R ∪ N ) = ∅). This inconsistency is due to an erroneous set of facts (either one of the evidences, the alibi, or all of them are not valid). The classical answer to the Boolean queries Q1 and Q2 is “true” (i.e. Alice is guilty and innocent), because from falsehood, anything follows. However, the knowledge base is coherent because the set of rules are satisfiable i.e. there exists a set of facts (e.g. F  = {incrim(e1, bob), absolv(e3, alice), alibi(alice)}) such that all rules are applicable and models(F  , R ∪ N )) = ∅. Inconsistency Handling. Defeasible Logics and Repair Semantics are two approaches to handle conflicts. Defeasible Logics are applied to potentially incoherent situations were two types of rules are considered: strict rules expressing undeniable implications (i.e. if B(r) then definitely H(r)), and defeasible rules expressing weaker implications (i.e. if B(r) then generally H(r)). In this context, contradictions stem from either relying on incorrect facts, or from having exceptions to the defeasible implications. Repair Semantics are applied to situations where rules are assumed to hold in the true state of affair and hence inconsistencies can only stem from incorrect facts. A repair D is an inclusion-maximal subset of the facts D ⊆ F that is

6

A. Hecham et al.

consistent with the rules and negative constraints (i.e. models(D, R ∪ N ) = ∅). We denote the set of repairs of a knowledge base by repairs(KB). In presence of an incoherent set of rules, Repair Semantics yield an empty set of repairs [10].

3

Statement Graphs for Defeasible Reasoning

A Statement Graph (SG) [13] is a representation of the reasoning process happening inside a knowledge base, it can be seen as an updated Inheritance Net [15] with a custom labeling function. An SG is built using logical building blocks (called statements) that describe a situation (premises) and a rule that can be applied on that situation. A statement s is a pair that is either a ‘query statement’ (Q, ∅) where Q is a query, a ‘fact statement’ ( , f ) where f is a fact, or a ‘rule application statement’ (Φ, ψ) that represents a rule application α(F, r, π) s.t. π(B(r)) = Φ and π(H(r)) = ψ. We denote by P rem(s) the first element of the statement and by Conc(s) the second element. A statement can be written as P rem(s) → Conc(s). A statement s1 supports another statement s2 iff ∃f ∈ P rem(s2 ) s.t. Conc(s1 ) = f . A statement s1 attacks s2 ∃f ∈ P rem(s2 ) s.t. Conc(s1 ) and f are in conflict. Statements are generated from a knowledge base, they can be structured in a graph according to the support and attack relations they have between each other. Definition 1 (Statement Graph). A Statement Graph of a KB = (F, R, N ) is a directed graph SGKB = (V, EA , ES ): V is the set of statements generated from KB; EA ⊆ V × V is the set of attack edges and ES ⊆ V × V is the set of support edges. For an edge e = (s1 , s2 ), we denote s1 by Source(e) and s2 by T arget(e). For − (s) and its incoming a statement s we denote its incoming attack edges by EA − + (s) and support edges by ES (s). We also denote its outgoing attack edges by EA + outgoing support edges by ES (s). A Statement Graph (SG) is constructed from the chase of a knowledge base. Starting facts are represented by fact statements and rule applications are represented using rule application statements. Figure 1 shows SG of Example 1. An SG provides statements and edges with a label using a labeling function. Query answering can then be determined based on the label of the query statement. Continuing the previous example, in an ambiguity propagating setting (such as [12]), innocent(alice) is ambiguous because guilty(alice) can be derived, thus the ambiguity of guilty(alice) is propagated to innocent(alice) and consequently KB prop innocent(alice) and KB prop guilty(alice) ( prop denotes entailment in ambiguity propagating). On the other hand, in an ambiguity blocking setting (such as [18]), the ambiguity of resp(alice) blocks any ambiguity derived from it, meaning that guilty(alice) cannot be used to attack innocent(alice). Therefore innocent(alice) is not ambiguous, thus

A Formalism Unifying Defeasible Logics and Repair Semantics

7

f emale(alice), ∅ sentence(alice, null1 ), ∅  → f emale(alice)

innocent(alice), ∅

guilty(alice) → sentence(alice, null1 )

alibi(alice) → innocent(alice)

resp(alice) → guilty(alice)

 → alibi(alice)

incrim(e1, alice) → resp(alice)

absolv(e2, alice) → notResp(alice)

 → incrim(e1, alice)

 → absolv(e2, alice)

Fig. 1. Example 1’s Statement Graph (fact statements are gray).

KB block innocent(alice) and KB block guilty(alice) ( block denotes entailment in ambiguity blocking). The labeling function ‘PDL’ (Propagating Defeasible Logic) was proposed for Statement Graphs in [13] that yields equivalent entailment results to Defeasible Logics with ambiguity propagating [1]. Similarily, the labeling function ‘BDL’ (Blocking Defeasible Logic) was proposed by [13] to obtain entailment results equivalent to Defeasible Logics with ambiguity blocking [7].

4

Statement Graphs for Repair Semantics

In this paper we focus on two well-known semantics for inconsistent databases: IAR and ICAR repair Semantics. The Intersection of All Repairs semantic [16] is the most skeptical of the Repair Semantics. A query Q is IAR entailed (KB IAR Q) iff it is classically entailed  by the intersection of all repairs constructed from the starting set of facts (i.e. repairs(KB) ∪ R  Q). The Intersection of Closed ABox Repairs semantic [16] computes the repairs of the saturated set of facts. A query Q is ICAR entailed KB ICAR Q iff it is classically entailed by the set of facts in the intersection of the repairs constructed after generating all facts. Example 2. Consider the KB in Example 1. The repairs constructed from the starting set of facts are: – D1 = {absolv(e2, alice), alibi(alice), f emale(alice)} – D2 = {incrim(e1, alice), f emale(alice)}

8

A. Hecham et al.

D1 ∩ D2 = {f emale(alice)} therefore only f emale(alice) is entailed: KB IAR f emale(alice). The repairs constructed from the saturated set of facts are: – D1 = {absolv(e2, alice), alibi(alice), f emale(alice), notResp(alice), sentence(alice, null1 )} = {incrim(e1, alice), f emale(alice), resp(alice), guilty(alice), – D2 sentence(alice, null1 )} = {f emale(alice), sentence(alice, null1 )} thus KB D1 ∩ D2 f emale(alice) ∧ sentence(alice, null1 ). 4.1

ICAR

New Labeling for IAR Semantics

The intuition behind IAR is to reject any fact that can be used to generate conflicting atoms, meaning that only the facts that produce no conflict will be accepted. From an SG point of view, any statement that is attacked, or that supports statements that lead by an attack or support edge to an attacked statement is discarded. This can be obtained by first detecting all conflicts, then discarding any statement that either leads to a conflict or is generated from conflicting atoms. In order to detect conflicts using Statement Graphs, we need to ensure that all conflicts are represented. Given that statements attack each other on the premise, it is necessary to handle in a particular way statements with no outgoing edges (i.e. statements that do not support or attack other statements) as they might still generate conflicting atoms. That is why any statement with no outgoing edges must be linked to a query statement. We first apply PDL to detect ambiguous statements, then backwardly broadcast this ambiguity to any statement that is linked (by a support or attack edge) to an ambiguous statement (cf. Fig. 2). Labelings for Defeasible Logics start from fact statements and propagate upward towards query statements, however, for Repair Semantics, the labelings have to conduct a second pass from query statements and propagate downward towards fact statements. We use the labeling function ‘IAR’ to obtain entailment results equivalent to IAR [16]. IAR is defined as follows: edges have the same (i.e. given an edge e, IAR(e) = IAR(Source(e)). Given a statement s: (a) IAR(s) = IN iff IAR(s) = AMBIG and PDL(s) = IN . + (s) (b) IAR(s) = AMBIG iff either PDL(s) = AMBIG or ∃e ∈ ES+ (s) ∪ EA such that IAR(Target(e)) = AMBIG. (c) IAR(s) = OUT iff PDL(s) = OU T . A statement is labeled AMBIG if it was labeled ambiguous by PDL or if it leads to an ambiguous statement. Otherwise, it is IN if it has an IN complete support and is not attacked (i.e. PDL labels it IN). In the following proposition, we will denote by SGIAR KB an SG built on the KB KB that uses the ICAR labeling function and by SGIAR KB s the label of a statement s.

A Formalism Unifying Defeasible Logics and Repair Semantics

9

IN

f emale(alice), ∅ AMBIG

sentence(alice, null1 ), ∅

IN IN

 → f emale(alice) AMBIG

innocent(alice), ∅

AMBIG AMBIG

guilty(alice) → sentence(alice, null1 )

AMBIG AMBIG

AMBIG AMBIG

AMBIG

AMBIG

alibi(alice) → innocent(alice)

resp(alice) → guilty(alice)

AMBIG AMBIG

 → alibi(alice) AMBIG

AMBIG

incrim(e1, alice) → resp(alice) AMBIG AMBIG

 → incrim(e1, alice)

AMBIG

AMBIG

absolv(e2, alice) → notResp(alice) AMBIG AMBIG

 → absolv(e2, alice)

Fig. 2. IAR applied to Example 1’s Statement Graph.

Proposition 1. Let f be a fact in a KB that contains only defeasible facts and strict rules. KB IAR f iff SGIAR KB (f, ∅) = IN and KB IAR f iff (f, ∅) ∈ {AMBIG, OUT}. SGIAR KB We split the proof of (1.) in two parts, first we prove by contradiction that if KB IAR f then SGIAR KB (f, ∅) = IN: Suppose we have a fact f such that (f, ∅) = IN: KB IAR f and SGIAR KB 1. KB IAR f means that there is a derivation for f from an initial set of facts T ⊆ F and there is no consistent set of initial facts S ⊆ F such that S ∪ T is inconsistent (i.e. models(S, R ∪ N ) = ∅ and models(S ∪ T, R ∪ N ) = ∅), which means that there is no derivation for an atom conflicting with an atom used in the derivation for f i.e. f is not generated from or used to generate ambiguous atoms, thus PDL((f → ∅)) = IN. 2. SGIAR KB (f, ∅) = IN means that either: (a) SGIAR KB (f, ∅) = OUT which is impossible given 1. (i.e. PDL(f, ∅) = IN) (b) or SGIAR KB (f, ∅) = AMBIG which means either: i. PDL(f, ∅) = AMBIG (impossible given 1.), + (s) such that IAR(T arget(e)) = AMBIG which ii. or ∃e ∈ ES+ (s) ∪ EA means that f is used to generate ambiguous atoms (impossible given 1.). Now we prove by contradiction that if SGIAR KB (f, ∅) = IN then KB IAR f : (f, ∅) = IN and KB IAR f : Suppose we have a fact f such that SGIAR KB 1.

SG_KB^IAR(f, ∅) = IN means that IAR(f, ∅) ≠ AMBIG and PDL(f, ∅) = IN, which means that (f, ∅) is not attacked (i.e. there is no derivation for an atom


conflicting with f) and is not used to generate conflicting atoms (no outgoing edge leads to an AMBIG statement).
2. KB ⊭_IAR f means that either f is generated by conflicting atoms (impossible given 1.) or is used to generate conflicting atoms (impossible given 1.).
From (1.), the proposition (2.) directly holds: SG_KB^IAR(f, ∅) ≠ IN means SG_KB^IAR(f, ∅) ∈ {AMBIG, OUT}, given that IAR is a function.

4.2 New Labeling for ICAR Semantics

The intuition behind ICAR is to reject any fact that is used to generate a conflict, while accepting those that are not (even if they were generated after a conflict). From an SG point of view, any statement that is attacked or that supports statements that lead to an attack is considered "ambiguous". This is done by first applying PDL to detect ambiguous and accepted statements; the ICAR labeling then starts from query statements and propagates downward towards fact statements (cf. Fig. 3). We use the labeling function 'ICAR' to obtain entailment results equivalent to ICAR [16]. ICAR is defined as follows: given an edge e, ICAR(e) = ICAR(Source(e)). Given a statement s:
(a) ICAR(s) = IN iff ICAR(s) ≠ AMBIG and PDL(s) ∈ {IN, AMBIG}.
(b) ICAR(s) = AMBIG iff
1. either PDL(s) = AMBIG and ∃e ∈ E_A^-(s) s.t. PDL(e) ∈ {IN, AMBIG},
2. or ∃e ∈ E_S^+(s) ∪ E_A^+(s) such that ICAR(Target(e)) = AMBIG.
(c) ICAR(s) = OUT iff PDL(s) = OUT.
A statement is labeled AMBIG if it was labeled ambiguous by PDL and it is attacked, or if it leads to an ambiguous statement. It is labeled IN if it was labeled IN or AMBIG by PDL and does not lead to an ambiguous statement. In the following proposition, we will denote by SG_KB^ICAR an SG built on the KB KB that uses the ICAR labeling function and by SG_KB^ICAR(s) the label of a statement s.

Proposition 2. Let f be a fact in a KB that contains only defeasible facts and strict rules: KB ⊨_ICAR f iff SG_KB^ICAR(f, ∅) = IN, and KB ⊭_ICAR f iff SG_KB^ICAR(f, ∅) ∈ {AMBIG, OUT}.
We split the proof of (1.) in two parts. First we prove by contradiction that if KB ⊨_ICAR f then SG_KB^ICAR(f, ∅) = IN. Suppose we have a fact f such that KB ⊨_ICAR f and SG_KB^ICAR(f, ∅) ≠ IN:
1. KB ⊨_ICAR f means that there is a derivation for f and there is no consistent set of facts S ⊆ F* such that S ∪ {f} is inconsistent (models(S, R ∪ N) ≠ ∅ and models(S ∪ {f}, R ∪ N) = ∅), i.e. f is not used to generate ambiguous atoms.
2. SG_KB^ICAR(f, ∅) ≠ IN means that either:
(a) SG_KB^ICAR(f, ∅) = OUT, which is impossible given 1. (there is a derivation for f, i.e. PDL(f, ∅) ∈ {IN, AMBIG});


Fig. 3. ICAR applied to Example 1's Statement Graph.

(b) or SG_KB^ICAR(f, ∅) = AMBIG, which means either:
i. PDL(f, ∅) = AMBIG and there is an edge attacking it (impossible given 1., i.e. there is no derivable atom conflicting with f),
ii. or ∃e ∈ E_S^+(s) ∪ E_A^+(s) such that ICAR(Target(e)) = AMBIG, which means that f is used to generate ambiguous atoms (impossible given 1.).
Now we prove by contradiction that if SG_KB^ICAR(f, ∅) = IN then KB ⊨_ICAR f. Suppose we have a fact f such that SG_KB^ICAR(f, ∅) = IN and KB ⊭_ICAR f:
1. SG_KB^ICAR(f, ∅) = IN means that ICAR(f, ∅) ≠ AMBIG and PDL(f, ∅) ∈ {IN, AMBIG}, which means that (f, ∅) is not attacked (i.e. there is no derivation for an atom conflicting with f) and is not used to generate ambiguous atoms (no outgoing edge leads to an AMBIG statement).
2. KB ⊭_ICAR f means that either f is not derivable (impossible given 1. since PDL(f, ∅) ∈ {IN, AMBIG}), or there is a derivation for an atom conflicting with f, or f is used to generate ambiguous atoms (impossible given 1.).
From (1.), the proposition (2.) directly holds: SG_KB^ICAR(f, ∅) ≠ IN means SG_KB^ICAR(f, ∅) ∈ {AMBIG, OUT}, given that ICAR is a function.

5 Human Intuitions for Conflict Management

The contribution of the paper is twofold. On the one hand, we have provided new labelings for Statement Graphs shown to capture repair semantics. In this section


we go one step further and show that (1) there is practical value in combining defeasible reasoning and repair semantics, and (2) provide a Statement Graph labeling for this new semantics. In order to get an idea of what intuitions humans follow in an abstract context, we ran an experiment with 41 participants in which they were told to place themselves in the shoes of an engineer trying to analyze a situation based on a set of sensors. These sensors (with unknown reliability) give information about the properties of an object called "o", e.g. "Object 'o' has the property P" (which could be interpreted, for example, as 'o' is red). Also, as an engineer, they have knowledge that is always true about the relations between these properties, e.g. "All objects that have the property P also have the property Q". Some of the properties cannot be true at the same time on the same object, e.g. "An object cannot have the properties P and T at the same time". Using abstract situations allowed us to avoid unwanted effects of a priori knowledge while at the same time representing formal constructs (facts, rules and negative constraints) in a simplified textual manner. A transcript of the original text that the experiment participants received is shown in the following Example 3.

Example 3 (Situation 1). Textual representation: Three sensors are respectively indicating that "o" has the properties S, Q, and T. We know that any object that has the property S also has the property V. Moreover, an object cannot have the properties S and Q at the same time, nor the properties V and T at the same time. Question: Can we say that the object "o" has the property T?

Let us also provide here the logical representation of the above text. Please note that the participants did not receive the logical transcript.
– F = {s(o), q(o), t(o)}
– R = {∀X s(X) → v(X)}
– N = {∀X s(X) ∧ q(X) → ⊥, ∀X v(X) ∧ t(X) → ⊥}
– Query Q = t(o)
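For readers who want to check the logical reading of Situation 1, the following sketch (ours, not part of the experimental material; predicate arguments are dropped since every atom is about the single constant o) enumerates the repairs of this knowledge base and confirms that the query t(o) is not IAR-entailed.

```python
# Close each subset of facts under the single rule s(X) -> v(X), check it against
# the two negative constraints, and list the repairs (maximal consistent subsets).
from itertools import combinations

FACTS = {"s", "q", "t"}                      # s(o), q(o), t(o)

def closure(facts):
    closed = set(facts)
    if "s" in closed:                        # rule: s(X) -> v(X)
        closed.add("v")
    return closed

def consistent(facts):
    c = closure(facts)
    return not ({"s", "q"} <= c or {"v", "t"} <= c)   # the two negative constraints

subsets = [set(c) for k in range(len(FACTS) + 1) for c in combinations(FACTS, k)]
consistent_sets = [s for s in subsets if consistent(s)]
repairs = [s for s in consistent_sets
           if not any(s < t for t in consistent_sets)]          # maximality
print(sorted(sorted(r) for r in repairs))    # [['q', 't'], ['s']]
print(all("t" in r for r in repairs))        # False: t(o) is not in every repair
print("t" in set.intersection(*repairs))     # False: hence t(o) is not IAR-entailed
```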

Participants were shown 5 situations containing inconsistencies, in a random order. For each situation, the participant was presented with a textual description of an inconsistent knowledge base and a query. The possible answers for a query are "Yes" (entailed) or "No" (not entailed). The 41 participants are second-year university students in computer science, 12 female and 29 male, aged between 17 and 46 years old. Table 1 presents the situations and the semantics under which their queries are entailed (✓) or not entailed (−). The "% of Yes" column indicates the percentage of participants that answered "Yes". The aim of each situation is to identify if a set of semantics coincides with the majority; for example, the query in Situation 1 (Example 3) is only entailed under ⊨_block.² Not all cases can be represented, for example KB ⊨_IAR f and KB ⊭_prop f, due to productivity (cf. Sect. 4.2).

² Situations and detailed results are available at https://www.dropbox.com/s/4wkblgdx7hzj7s8/situations.pdf.


Table 1. Situations entailment and results. For each situation, the table reports whether its query is entailed (✓) or not (−) under ⊨_block, ⊨_prop, ⊨_IAR, ⊨_ICAR and ⊨_block-IAR, together with the percentage of participants who answered "Yes": #1: 73.17%, #2: 21.95%, #3: 21.95%, #4: 4.87%, #5: 78.04%.

From the results in Table 1, we observe that blocking and IAR are the most intuitive (Situations 1 and 5); however, blocking alone is not sufficient, as shown by Situations 2 and 3, and IAR alone is not sufficient either (Situation 1). One possible explanation is that participants are using a semantics that is a mix of IAR and ambiguity blocking (⊨_block-IAR). Such a semantics is absent from the literature, as it lies interestingly in between Repair Semantics and Defeasible Logics. Please note that we do not argue that this particular semantics is better than existing ones; our aim is to bridge Defeasible Logics and Repair Semantics by defining new inference relations based on intuitions coming from both of them, and then studying properties of these new inference relations.

5.1 New Semantics for Reasoning in Presence of Conflict

The intuition behind this new semantics is to apply IAR or ICAR on an SG which has been labeled using BDL rather than PDL. This amounts to replacing PDL by BDL in the definitions of IAR and ICAR. As it turns out, IAR with BDL fully coincides with the answers given by the majority of the participants in our experiment. To illustrate this semantics, consider Example 4.

Example 4. Applying IAR with ambiguity blocking on Example 1's SG gives KB ⊨_block-IAR female(alice) ∧ alibi(alice) ∧ innocent(alice). Note that the difference with BDL is that KB ⊭_block-IAR incrim(e1, alice) and KB ⊭_block-IAR absolv(e2, alice). The difference with IAR is that KB ⊭_IAR alibi(alice) and KB ⊭_IAR innocent(alice).

Let us now analyse the productivity and complexity of the new semantics. We say that a semantics ⊨_1 is less productive than ⊨_2 (represented as ⊨_1 → ⊨_2) if, for every KB and every f, it results in fewer conclusions being drawn (i.e. if KB ⊨_1 f then KB ⊨_2 f). Productivity comparison of Repair Semantics has been discussed in [6], while the productivity between Defeasible Logics semantics can be extracted from the inclusion theorem in [8]. It can be seen that ⊨_IAR → ⊨_prop since ⊨_prop only rejects facts that are challenged or generated from challenged facts, while IAR also rejects facts that would lead to a conflict. The results of the following Proposition 3 are summarized in Fig. 4.

Proposition 3. Let KB be a knowledge base with only defeasible facts and strict rules. Given a fact f:

1. if KB ⊨_IAR f then KB ⊨_prop f
2. if KB ⊨_IAR f then KB ⊨_block-IAR f
3. if KB ⊨_block-IAR f then KB ⊨_block f
4. if KB ⊨_block-IAR f then KB ⊨_block-ICAR f
5. if KB ⊨_ICAR f then KB ⊨_block-ICAR f

We prove (1.) by contradiction. Suppose there is a fact f such that KB ⊨_IAR f and KB ⊭_prop f. KB ⊨_IAR f means that there is a derivation for f from an initial set of facts T ⊆ F and there is no consistent set of initial facts S ⊆ F such that S ∪ T is inconsistent (i.e. models(S, R ∪ N) ≠ ∅ and models(S ∪ T, R ∪ N) = ∅) [16]. This means that f is derivable and does not rely on conflicting facts; therefore the statement (f → ∅) has a complete IN support and no IN or AMBIG attack edges, i.e. SG_KB^PDL(f, ∅) = IN, thus KB ⊨_prop f, which is a contradiction.
We prove (2.) by contradiction. Suppose we have KB ⊨_IAR f and KB ⊭_block-IAR f:
1. KB ⊨_IAR f means that there is a derivation for f from an initial set of facts T ⊆ F and there is no consistent set of initial facts S ⊆ F such that S ∪ T is inconsistent (i.e. models(S, R ∪ N) ≠ ∅ and models(S ∪ T, R ∪ N) = ∅), which means that f is not generated by conflicting atoms and is not used to generate conflicting atoms, i.e. PDL(f, ∅) = IN, which implies that BDL(f, ∅) = IN [8].
2. KB ⊭_block-IAR f means that either BDL(f, ∅) ≠ IN (impossible given 1.) or f is used to generate conflicting atoms (impossible given 1.).

We prove (3.) by contradiction. Suppose we have KB ⊨_block-IAR f and KB ⊭_block f:

1. KB ⊨_block-IAR f means that BDL(f, ∅) = IN and f is not used to generate conflicting atoms.
2. KB ⊭_block f means that BDL(f, ∅) ≠ IN (impossible given 1.).

We prove (4.) by contradiction. Suppose we have KB ⊨_block-IAR f and KB ⊭_block-ICAR f:

1. KB ⊨_block-IAR f means that BDL(f, ∅) = IN and f is not used to generate conflicting atoms.
2. KB ⊭_block-ICAR f means that either BDL(f, ∅) ≠ IN (impossible given 1.) or there is a derivable atom conflicting with f or f is used to generate conflicting atoms (impossible given 1.).
We prove (5.) by contradiction. Suppose we have KB ⊨_ICAR f and KB ⊭_block-ICAR f:

1. KB ⊨_ICAR f means that PDL((f → ∅)) ∈ {IN, AMBIG}, there is no derivable fact conflicting with f, and f is not used to derive conflicting atoms.
2. KB ⊭_block-ICAR f means that either (BDL((f → ∅)) ≠ IN and BDL((f → ∅)) ≠ AMBIG) or there is a derivable fact that is conflicting with f (impossible given 1.) or f is used to generate conflicting atoms (impossible given 1.).

Fig. 4. Productivity and complexity of different semantics under the Skolem-FES fragment of existential rules.

6 Discussion

In this paper we build upon Statement Graphs for existential rules and their labeling functions for ambiguity blocking and ambiguity propagating [13], and provide custom labeling functions for IAR and ICAR. These labelings explicitly show how to transition from ambiguity propagating to IAR and ICAR, and how to obtain a combination of blocking with IAR and ICAR. Using an experiment, we have shown that bringing together Defeasible Reasoning and Repair Semantics allows for the definition of new and potentially interesting semantics with respect to human reasoning. Implementing these new labelings, for instance in the platform presented in [14], would allow further study of the links between labeling functions and human reasoning. The modeling choices used in this paper (the use of forward chaining, specific Repair Semantics, particular intuitions of Defeasible Logics, and no account for preferences) stem from several considerations. More precisely:
– Skolem chase: the focus on the forward chaining mechanism is due to its ability to handle transitive rules [19], contrary to backward chaining [3]. Regarding the choice of chase, we focused on the Skolem chase given its relatively low cost and its ability to stay decidable for all known concrete classes of the FES fragment [5].
– Language: Repair Semantics make the assumption of a coherent set of rules because incoherence might lead to the trivial solution of an empty set of repairs [10]. Therefore, allowing defeasible rules with the restriction of coherence would defeat the purpose of having defeasible rules in the first place.
– Considered Repair Semantics: the appeal of IAR and ICAR is in their simplicity and low complexity. Considering other Repair Semantics such as ICR, AR, etc. would require using and defining a more complex version of SGs and of defeasible reasoning, such as well-founded semantics, which is one favored future research avenue.
– Defeasible reasoning intuitions: ambiguity handling is, of course, not the only intuition in defeasible reasoning; however, other intuitions such as team defeat, handling of strict rules, etc. are meaningless in this context given the absence of preferences and defeasible rules. Floating conclusions are applicable in the considered language; nevertheless, neither Defeasible Logics nor IAR/ICAR accept floating conclusions.


Acknowledgement. We would like to thank the anonymous reviewers for their helpful and constructive comments.

References
1. Antoniou, G., Billington, D., Governatori, G., Maher, M.J., Rock, A.: A family of defeasible reasoning logics and its implementation. In: Proceedings of the 14th European Conference on Artificial Intelligence, pp. 459–463 (2000)
2. Bacchus, F., Chen, X., van Beek, P., Walsh, T.: Binary vs. non-binary constraints. Artif. Intell. 140(1/2), 1–37 (2002). https://doi.org/10.1016/S0004-3702(02)00210-2
3. Baget, J.F., Garreau, F., Mugnier, M.L., Rocher, S.: Extending acyclicity notions for existential rules. In: ECAI, pp. 39–44 (2014)
4. Baget, J.F., Garreau, F., Mugnier, M.L., Rocher, S.: Revisiting chase termination for existential rules and their extension to nonmonotonic negation. arXiv preprint arXiv:1405.1071 (2014)
5. Baget, J.F., Leclère, M., Mugnier, M.L., Salvat, E.: On rules with existential variables: walking the decidability line. Artif. Intell. 175(9–10), 1620–1654 (2011)
6. Benferhat, S., Bouraoui, Z., Croitoru, M., Papini, O., Tabia, K.: Non-objection inference for inconsistency-tolerant query answering. In: IJCAI, pp. 3684–3690 (2016)
7. Billington, D.: Defeasible logic is stable. J. Log. Comput. 3(4), 379–400 (1993)
8. Billington, D., Antoniou, G., Governatori, G., Maher, M.: An inclusion theorem for defeasible logics. ACM Trans. Comput. Log. (TOCL) 12(1), 6 (2010)
9. Calì, A., Gottlob, G., Lukasiewicz, T.: A general datalog-based framework for tractable query answering over ontologies. Web Semant. Sci. Serv. Agents World Wide Web 14, 57–83 (2012)
10. Deagustini, C.A., Martinez, M.V., Falappa, M.A., Simari, G.R.: On the influence of incoherence in inconsistency-tolerant semantics for Datalog+-. In: JOWO@IJCAI (2015)
11. Flouris, G., Huang, Z., Pan, J.Z., Plexousakis, D., Wache, H.: Inconsistencies, negations and changes in ontologies. In: Proceedings of the National Conference on Artificial Intelligence, vol. 21, p. 1295. AAAI Press; MIT Press, Menlo Park, Cambridge, London (2006)
12. Governatori, G., Maher, M.J., Antoniou, G., Billington, D.: Argumentation semantics for defeasible logic. J. Log. Comput. 14(5), 675–702 (2004). https://doi.org/10.1093/logcom/14.5.675
13. Hecham, A., Bisquert, P., Croitoru, M.: On a flexible representation of defeasible reasoning variants. In: Proceedings of the 17th Conference on Autonomous Agents and MultiAgent Systems, pp. 1123–1131 (2018)
14. Hecham, A., Croitoru, M., Bisquert, P.: DAMN: defeasible reasoning tool for multi-agent reasoning. In: AAAI 2020 – 34th AAAI Conference on Artificial Intelligence. Association for the Advancement of Artificial Intelligence, New York, United States, February 2020. https://hal-lirmm.ccsd.cnrs.fr/lirmm-02393877
15. Horty, J.F., Thomason, R.H., Touretzky, D.S.: A skeptical theory of inheritance in nonmonotonic semantic networks. Artif. Intell. 42(2–3), 311–348 (1990)
16. Lembo, D., Lenzerini, M., Rosati, R., Ruzzi, M., Savo, D.F.: Inconsistency-tolerant semantics for description logics. In: Hitzler, P., Lukasiewicz, T. (eds.) RR 2010. LNCS, vol. 6333, pp. 103–117. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15918-3_9


17. Marnette, B.: Generalized schema-mappings: from termination to tractability. In: Proceedings of the Twenty-Eighth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, pp. 13–22. ACM (2009)
18. Nute, D.: Defeasible reasoning: a philosophical analysis in prolog. In: Fetzer, J.H. (ed.) Aspects of Artificial Intelligence. Studies in Cognitive Systems, vol. 1, pp. 251–288. Springer, Dordrecht (1988). https://doi.org/10.1007/978-94-009-2699-8_9
19. Rocher, S.: Querying existential rule knowledge bases: decidability and complexity. Ph.D. thesis, Université de Montpellier (2016)

Using Grammar-Based Genetic Programming for Mining Disjointness Axioms Involving Complex Class Expressions

Thu Huong Nguyen(B) and Andrea G. B. Tettamanzi

Université Côte d'Azur, CNRS, Inria, I3S, Nice, France
{thu-huong.nguyen,andrea.tettamanzi}@univ-cotedazur.fr

Abstract. In the context of the Semantic Web, learning implicit knowledge in terms of axioms from Linked Open Data has been the object of much current research. In this paper, we propose a method based on grammar-based genetic programming to automatically discover disjointness axioms between concepts from the Web of Data. A training-testing model is also implemented to overcome the lack of benchmarks and comparable research. The acquisition of axioms is performed on a small sample of DBpedia with the help of a Grammatical Evolution algorithm. The accuracy evaluation of mined axioms is carried out on the whole DBpedia. Experimental results show that the proposed method gives high accuracy in mining class disjointness axioms involving complex expressions.

Keywords: Ontology learning · OWL axiom · Disjointness axiom · Genetic programming · Grammatical Evolution

1 Motivation

The growth of the Semantic Web (SW) and of its most prominent implementation, the Linked Open Data (LOD), has made a huge number of interconnected RDF (Resource Description Framework) triples freely available for sharing and reuse. LOD have thus become a giant real-world data resource that can be exploited for mining implicit knowledge, i.e., for Knowledge Discovery from Data (KDD). Such wealth of data can be organized and made accessible by ontologies [1,2], formal representations of shared domains of knowledge, which play an essential role in data and knowledge integration. Through a shared schema, ontologies support automatic reasoning such as query answering or classification over different data sources. In the structure of ontologies, the definition of the incompatibility between pairs of concepts, in the form of class disjointness axioms, is important to ensure the quality of ontologies. Specifically, like other types of axioms, class disjointness axioms make it possible to check the correctness of a knowledge base or to derive new information, a task that is sometimes called knowledge enrichment. For instance, a reasoner will be able to deduce an error,


i.e., a logical inconsistency of facts in the ontology, whenever the class Fish is associated to a resource related to the class Planet, if there is a constraint of disjointness between the two concepts Fish and Planet. However, the manual acquisition of axioms, a central task in ontology construction, is exceedingly expensive and time-consuming and mainly depends on the availability of expert resources, i.e., domain specialists and knowledge engineers. We focus on a subtask of ontology learning, which goes under the name of axiom learning, and specifically on the learning of class disjointness axioms. Axiom learning is essentially bottom up. While top-down approaches require schema-level information built by domain experts to suggest axioms, bottom-up approaches use learning algorithms and rely on instances from several existing knowledge and information resources to mine axioms. Axiom learning algorithms can help alleviate the overall cost of extracting axioms and of building ontologies in general. In terms of input data sources for the learning process, supporting dynamic data sources, where the facts are updated or changed in time, is preferable, if one wants to achieve scalability and evolution, instead of only focusing on mostly small and uniform data collections. Such dynamic information can be extracted from various data resources of LOD, which constitute an open world of information. Indeed, the advantages of LOD with respect to learning, as argued in [3], is that it is publicly available, highly structured, relational, and large compared to other resources. As a consequence of the general lack of class disjointness axioms in existing ontologies, learning implicit knowledge in terms of axioms from a LOD repository in the context of the Semantic Web has been the object of research using several different methods. Prominent research towards the automatic creation of class disjointness axioms from RDF facts include supervised classification, like in the LeDA system [4], statistical schema induction via associative rule mining, like in the GoldMiner system [5], and learning general class descriptions (including disjointness) from training data, like in the DL-Learner framework [6]. Furthermore, recent research has proposed using unsupervised statistical approaches like Formal Concept Analysis (FCA) [7] or Terminological Cluster Trees (TCT) [8], to discover disjointness axioms. Along the lines of extensional (i.e., instance-based, bottom-up) methods and expanding on the Grammatical Evolution(GE) method proposed in [9,10], we propose a new approach to overcome its limitations as well as to enhance the diversity of discovered types of axioms. Specifically, a set of axioms with more diverse topics is generated from a small sample of an RDF dataset which is randomly extracted from a full RDF repository, more specifically, DBpedia. Also, the type of mined class disjointness axioms is extended to include the existential quantification (∃r.C) and value restriction (∀r.C) constructors, where r is a property and C a class, which cannot be mechanically derived from a given set of atomic axioms. Indeed, it is not because one knows that, for instance, the two classes Person and City are disjoint, that one can conclude that, say, Person and ∀birthPlace.City (i.e., who was born in a city) are disjoint too; indeed we all


know they are not! Conversely, knowing that Person and Writer are not disjoint does not allow us to conclude that Person and ∀author.Writer are not disjoint either. We propose a specific axiom grammar that generates the axioms of the type we target. A set of candidate axioms is improved thanks to an evolutionary process through the use of the evolutionary operators of crossover and mutation. Finally, the final population of generated axioms is evaluated on the full RDF dataset, specifically the whole DBpedia, which can be considered as an objective benchmark, thus eliminating the need of domain experts to assess the quality of the generated axioms on a wide variety of topics. The evaluation of generated axioms in each generation of the evolutionary process is thus performed on a reasonably sized data sample, which alleviates the computational cost of query execution and enhances the performance of the method. Following [9], we apply a method based on possibility theory to score candidate axioms. It is important to mention that, to the best of our knowledge, no other method has been proposed so far in the literature to mine the Web of data for class disjointness axioms involving complex class expressions with existential quantifications and value restrictions in addition to conjunctions. The rest of the paper is organized as follows: some basics in GE are provided in Sect. 2. The method to discover class disjointness axioms with a Grammarbased Genetic Programming approach is presented in Sect. 3. Section 4 describes the experimental settings. Results are presented and discussed in Sect. 5. A brief survey of current related work in the field of learning class disjointness axioms is provided in Sect. 6. Finally, conclusions and directions for future research are given in Sect. 7.

2 Basic Concepts of Grammar-Based Genetic Programming

Genetic Programming (GP) [11,12] is an evolutionary approach that extends genetic algorithms (GA) to allow the exploration of the space of computer programs. Inspired by biological evolution and its fundamental mechanisms, these programs are “bred” using iterative improvement of an initially random population of programs. That is an evolutionary process. At each iteration, known as a generation, improvements are made possible by stochastic variation, i.e., by a set of genetic operators, usually crossover and mutation and probabilistic selection according to pre-specified criteria for judging the quality of an individual (solution). According to the levels of fitness, the process of selecting individuals, called fitness-based selection, is performed to create a list of better qualified individuals as input for generating a new set of candidate solutions in the next generation. The new solutions of each generation are bred by applying genetic operators on the selected old individuals. Then, replacement is the last step and decides which individuals stay in a population and which are replaced on a par, with selection influencing convergence. A grammar-based form of GP, namely Grammatical Evolution (GE) [13], differs from traditional GP in that it distinguishes the search space from the solution space, through the use of a grammar-mediated representation.


Programs, viewed as phenotypic solutions or phenotypes, are decoded from variable-length binary strings, i.e., genotypic individuals or genotypes, through a transformation called mapping process. According to it, the variable-length binary string genomes, or chromosomes, are split into consecutive groups of bits, called codons, each representing an integer value, used to select, at each step, one of a set of production rules from a formal grammar, typically in Backus-Naur form (BNF), which specifies the syntax of the desired programs. A BNF grammar is a context-free grammar consisting of terminals and non-terminals, represented in the form of a four-tuple {N, T, P, S}, where N is the set of non-terminals, which can be extended into one or more terminals; T is the set of terminals, which are items in the language; P is the set of the production rules that map N to T; S is the start symbol and a member of N. When there are a number of productions that can be used to rewrite one specific non-terminal, they are separated by the '|' symbol. In the mapping process, codons are used consecutively to choose production rules from P in the BNF grammar according to the function:

production = codon modulo (number of productions for the current non-terminal)    (1)
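As a concrete illustration of this encoding, the fragment below (a minimal sketch of our own; the 8-bit codon size is an assumption for illustration, not something fixed by the text) splits a binary genome into integer codons and applies the rule of Eq. (1).

```python
# Split a variable-length binary string into integer codons and apply Eq. (1).
def codons(bitstring, codon_size=8):
    """Turn a binary genome into a list of integer codons (assumed 8-bit codons)."""
    return [int(bitstring[i:i + codon_size], 2)
            for i in range(0, len(bitstring) - codon_size + 1, codon_size)]

def choose(codon, n_productions):
    """Eq. (1): pick a production index for the current non-terminal."""
    return codon % n_productions

genome = "0110000001000001"          # two 8-bit codons: 96 and 65
print(codons(genome))                # [96, 65]
print(choose(96, 4), choose(65, 2))  # 0 1 -> first rule of a 4-way choice, second of a 2-way one
```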

3 A Grammar-Based GP for Disjointness Axioms Discovery

We consider axiom discovery as an evolutionary process and reuse the GE method of [9,10] to mine class disjointness axioms with a few modifications. As said, we focus on axioms involving class expressions with existential quantification and value restriction. Also, a “training-testing” model is applied. Specifically, the learning process is performed with the input data source derived from a training RDF dataset, a random sample of DBpedia, whereas the evaluation of discovered axioms is based on a testing one, namely the full DBpedia. In terms of GE, axioms are “programs” or “phenotypes”, obeying a given BNF grammar. A population of candidate genotypic axioms, encoded as variable-length integer strings, i.e., numerical chromosomes, is randomly generated. Then, a mapping process based on the BNF grammar is performed to translate these chromosomes to phenotypic axioms. The set of axioms is maintained and iteratively refined to discover axioms that satisfy two key quality measures: generality and credibility. The quality of generated axioms can be enhanced gradually during the evolutionary process by applying variation operators, i.e., crossover and mutation, on phenotypic axioms. In this section, we first introduce the BNF grammar construction and a specific example illustrating the decoding phase to form wellformed class disjointness axioms. A possibilistic framework for the evaluation of the discovered axioms is then presented in detail.


3.1 BNF Grammar Construction

The functional-style grammar¹ used by the W3C is applied to design the grammar for generating well-formed OWL class disjointness axioms. Like in [9,10] and without loss of generality, we only focus on the case of binary axioms of the form DisjointClasses(C1, C2), where C1 and C2 may be atomic or complex classes involving relational operators and possibly including more than one single class identifier, like DisjointClasses(VideoGame, ObjectAllValuesFrom(hasStadium, Sport)). The structure of the BNF grammar here aims at mining well-formed axioms expressing the facts, i.e., instances, contained in a given RDF triple store. Hence, only resources that actually occur in the RDF dataset should be generated. We follow the approach proposed by [9,10] to organize the structure of a BNF grammar which ensures that changes in the contents of RDF repositories will not require the grammar to be rewritten. The grammar is split into a static and a dynamic part. The static part defines the syntax of the types of axioms to be extracted. The content of this part is loaded from a hand-crafted text file. Unlike [9,10], we specify it to mine only disjointness axioms involving at least one complex class expression containing a relational operator of existential quantification ∃ or value restriction ∀, i.e., of the form ∃r.C or ∀r.C, where r is a property and C is an atomic class. The remaining class expression can be an atomic class or a complex class expression involving an operator out of ⊓, ∃ or ∀. The static part of the grammar is thus structured as follows.

(r1) Axiom := ClassAxiom
(r2) ClassAxiom := DisjointClasses
(r3) DisjointClasses := 'DisjointClasses' '(' ClassExpression1 ' ' ClassExpression2 ')'
(r4) ClassExpression1 := Class (0) | ObjectSomeValuesFrom (1) | ObjectAllValuesFrom (2) | ObjectIntersectionOf (3)
(r5) ClassExpression2 := ObjectSomeValuesFrom (0) | ObjectAllValuesFrom (1)
(r6) ObjectIntersectionOf := 'ObjectIntersectionOf' '(' Class ' ' Class ')'
(r7) ObjectSomeValuesFrom := 'ObjectSomeValuesFrom' '(' ObjectPropertyOf ' ' Class ')'
(r8) ObjectAllValuesFrom := 'ObjectAllValuesFrom' '(' ObjectPropertyOf ' ' Class ')'

https://www.w3.org/TR/owl2-syntax/#Disjoint Classes.

Grammar-Based GP for Mining Disjointness Axioms PREFIX PREFIX PREFIX PREFIX

23

dbr: http://dbpedia.org/resource/ dbo: http://dbpedia.org/ontology/ dbprop: http://dbpedia.org/property/ rdf: http://www.w3.org/1999/02/22\-rdf-syntax-ns\#

dbr:Amblycera dbr:Salweenia Dbr:Himalayas dbr:Amadeus dbr:Cat_Napping dbr:With_Abandon dbr:Idris_Muhammad dbr:Genes_Reunited

rdf:type rdf:type rdf:type rdf:type dbprop:director dbprop:artist dbprop:occupation dbo:industry

dbo:Animal. dbo:Plant. dbo:NaturalPlace. dbo:Work. dbr:William_Hanna. dbr:Chasing_Furies. dbr:Drummer. dbr:Genealogy.

The productions for Class and ObjectPropertyOf would thus be: (r9) Class := | | |

3.2

dbo:Animal dbo:Plant dbo:NaturalPlace dbo:Work

(0) (1) (2) (3)

(r10) ObjectPropertyOf := | | |

dbprop:director dbprop:artist dbprop:occuptation dbo:industry

(0) (1) (2) (3)

Translation to Class Disjointness Axioms

We illustrate the decoding of an integer chromosome into an OWL class disjointness axiom in functional-style syntax through a specific example. Let the chromosome be 352, 265, 529, 927, 419. There is only one production for the non-terminals Axiom, ClassAxiom, DisjointClasses, ObjectIntersectionOf, ObjectSome- ValuesFrom and ObjectAllValuesFrom as it can be seen from Rules 1–3, and 6–8. In these cases, we skip using any codons for mapping and concentrate on reading codons for non-terminals having more than one production, like in Rules 4, 5, 9 and 10. We begin by decoding the first codon, i.e. 352, by Eq. 1. The result, i.e 352 modulo 4 = 0, is used to determine which production is chosen to replace the leftmost non-terminal (ClassExpression1) from its relevant rule (Rule 4). In this case, the leftmost ClassExpression1 will be replaced by the value of Class. Next, the next codon will determine the production rule for the leftmost Class and dbo:Plant is selected by the value from 265 mod 4 = 1. The mapping goes on like this until eventually there is no nonterminal left in the expression. Not all codons were required and extra codons have been simply ignored in this case. 3.3

Evaluation Framework

We follow the evaluation framework based on possibility theory, presented in [10] (see [14] for the theoretical background) to determine the fitness value of generated axioms in each generation, i.e., the credibility and generality of axioms. To make the paper self-contained, we recall the most important aspects of the approach, but we refer the interested reader to [10,14] for an in-depth treatment. Possibility theory [15] is a mathematical theory of epistemic uncertainty. Given a finite universe of discourse Ω, whose elements ω ∈ Ω may be regarded as events, values of a variable, possible worlds, or states of affairs, a possibility distribution is a mapping π : Ω → [0, 1], which assigns to each ω a degree

24

T. H. Nguyen and A. G. B. Tettamanzi

of possibility ranging from 0 (impossible, excluded) to 1 (completely possible, normal). A possibility distribution π for which there exists a completely possible state of affairs (∃ω ∈ Ω : π(ω) = 1) is said to be normalized. A possibility distribution π induces a possibility measure and its dual necessity measure, denoted by Π and N respectively. Both measures apply to a set A ⊆ Ω (or to a formula φ, by way of the set of its models, A = {ω : ω |= φ}), and are usually defined as follows: Π(A) = max π(ω);

(2)

ω∈A

¯ = min{1 − π(ω)}. N (A) = 1 − Π(A) ¯ ω∈A

(3)

In other words, the possibility measure of A corresponds to the greatest of the possibilities associated to its elements; conversely, the necessity measure of A ¯ A generalization of the is equivalent to the impossibility of its complement A. above definition can be obtained by replacing the min and the max operators with any dual pair of triangular norm and co-norm. Given incomplete knowledge like RDF datasets, where there exist some missing and erroneous facts (instances) as a result of the heterogeneous and collaborative character of the LOD, adopting an axiom scoring heuristic based on possibility theory is a well-suited approach. Accordingly, a candidate axiom φ is viewed as a hypothesis that has to be tested against the evidence contained in an RDF dataset. Its content is defined as a finite set of logical consequences content(φ) = {ψ : φ |= ψ},

(4)

obtained through the instantiation of φ to the vocabulary of the RDF repository; every ψ ∈ content(φ) may be readily tested by means of a SPARQL ASK query. The support of axiom φ, uφ , is defined as the cardinality of content(φ). The support, together with the number of confirmations u+ φ (i.e., the number of ψ for which the test is successful) and the number of counterexamples u− φ (i.e., the number of ψ for which the test is unsuccessful), are used to compute a degree of possibility Π(φ) for axiom φ, defined, for u(φ) > 0, as  2    uφ − u− φ  . Π(φ) = 1 − 1 − uφ Alongside Π(φ), the dual degree of necessity N (φ) could normally be defined. However, for reasons explained in [10], the necessity degree of a formula would not give any useful information for scoring class disjointness axioms against realworld RDF datasets. Possibility alone is a reliable measure of the credibility of a class disjointness axiom, all the more so because (and this is a very important point), in view of the open world assumption, for two classes that do not share any instance, disjointness can only be hypothetical (i.e., fully possible, if not contradicted by facts, but never necessary). In terms of the generality scoring, an axiom is the more general, the more facts are in the extension of its components. In [9], the generality of an axiom is

Grammar-Based GP for Mining Disjointness Axioms

25

defined as the cardinality of the sets of the facts in the RDF repository reflecting the support of each axiom, i.e., uφ . However, in case one of the components of an axiom is not supported by any fact, its generality should be zero. Hence, the generality of an axiom should be measured by the minimum of the cardinalities of the extensions of the two class expressions involved, i.e., gφ = min{[C], [D]} where C, D are class expressions. For the above reasons, instead of the fitness function in [9], Π(φ) + N (φ) , (5) f (φ) = uφ · 2 we resorted to the following improved definition, proposed in [10]: f (φ) = gφ · Π(φ).

(6)

The fitness value of a class disjointness axiom DisjointClasses(C, D) (or Dis(C, D) in Description Logic notation) is measured by defining the numbers of counterexamples and the support. These values are counted by executing the corresponding SPARQL queries based on graph patterns, via an accessible SPARQL endpoint. Each SPARQL graph pattern here is a mapping Q(E, ?x, ?y) translated from the corresponding OWL expression in axiom where E is an OWL expression, x and y are variables such that the query SELECT DISTINCT ?x ?y WHERE {Q(E, ?x, ?y) } returns all individuals that are instances of E. The definition of Q(E, ?x, ?y) is based on different kinds of OWL expressions. – E is an atomic expression. • For an atomic class A, Q(A, ?x, ?y) = ?x rdf : type

A.

(7)

where A is a valid IRI. • For a simple relation R, Q(R, ?x, ?y) = ?x R

?y.

(8)

where R is a valid IRI. – E is a complex expression. We only focus on the case of complex class expressions involving relational operators, i.e., intersection, existential quantification and value restriction and skip complex relation expressions, i.e., we only allow simple relations in the expressions. In this case, Q can be inductively extended to complex expressions: • if E = C1  . . .  Cn is an intersection of classes, Q(E, ?x, ?y) = Q(C1 , ?x, ?y) . . . Q(Cn , ?x, ?y).

(9)

• if E is an existential quantification of a class expression C, Q(∃R.C, ?x, ?y) = Q(R, ?x, ?z1) Q(C, ?z1, ?z2) where R is a simple relation.

(10)

26

T. H. Nguyen and A. G. B. Tettamanzi

• if E is a value restriction of a class expression C, Q(∀R.C, ?x, ?y) = { Q(R, ?x, ?z0) FILTER NOT EXISTS { Q(R, ?x, ?z1) FILTER NOT EXISTS { Q(C, ?z1, ?z2) } } } .

(11)

where R is a simple relation. The support uDis(C,D) can thus be computed with the following SPARQL query: SELECT (count (DISTINCT ?x AS ?u)WHERE {Q(C, ?x, ?y) UNION Q(D, ?x, ?y)}

(12)

To compute the generality gDis(C,D) = min(uC , uD ), uC and uD are required, which are returned by the following SPARQL queries: SELECT (count (DISTINCT SELECT (count

(DISTINCT

?x) AS ?x)

?u C)WHERE {Q(C, ?x, ?y)} (13)

AS ?u D)WHERE {Q(D, ?x, ?y)} (14)

Finally, we must figure out the number of counterexamples u− Dis(C,D) . Counterexamples are individuals i such that i ∈ [Q(C, ?x, ?y)] and i ∈ [Q(D, ?x, ?y)]; this may be translated into a SPARQL query to compute u− Dis(C,D) : SELECT (count (DISTINCT ?x) AS WHERE {Q(C, ?x, ?y) Q(D, ?x, ?y)}

4

?counters)

(15)

Experimental Setup

The experiments are divided into two phases: (1) mining class disjointness axioms with the GE framework introduced in Sect. 3 from a training RDF dataset, i.e., a random sample of DBpedia 2015-04, and (2) testing the resulting axioms against the test dataset, i.e., the entire DBpedia 2015-04, which can be considered as an objective benchmark to evaluate the effectiveness of the method. 4.1

Training Dataset Preparation

We randomly collect 1% of the RDF triples from DBpedia 2015-04 (English version), which contains 665,532,306 RDF triples, to create the Training Dataset (TD).2 Specifically, a small linked dataset is generated where RDF triples are interlinked with each other and the number of RDF triples accounts for 1% of the triples of DBpedia corresponding to each type of resource, i.e., subjects and objects. An illustration of this mechanism to extract the sample training dataset 2

Available for download at http://bit.ly/2OtFqHp.

Grammar-Based GP for Mining Disjointness Axioms

² Available for download at http://bit.ly/2OtFqHp.

is provided in Fig. 1. Let r be an initial resource for the extraction process, e.g., http://dbpedia.org/ontology/Plant; 1% of the RDF triples having r as their subject, of the form r p r , and 1% of the triples having r as their object, of the form r p r , will be randomly extracted from DBpedia. Then, the same will be done for every resource r and r mentioned in the extracted triples, until the size of the training dataset reaches 1% of the size of DBpedia. If the number of triples to be extracted for a resource is less than 1 (according to the 1% proportion), we round it to 1 triple. We applied the proposed mechanism to extract a training dataset containing 6,739,240 connected RDF triples with a variety of topics from DBpedia. 4.2

Parameters

We use the BNF grammar introduced in Sect. 3.1. Given how the grammar was constructed, the mapping of any chromosome of length ≥6 will always be successful. Hence, we can set maxWrap = 0. We ran our algorithm in 20 different runs on different parameter settings. In addition, to make fair comparisons possible, a set of milestones of total effort k (defined as the total number of fitness evaluations) corresponding to each population size are also recorded for each run, namely 100,000; 200,000; 300,000 and 400,000, respectively. The maximum numbers of generations maxGenerations (used as the stopping criterion of the algorithm) are automatically determined based on the values of the total effort k so that popSize · maxGenerations = k. The parameters are summarized in Table 1. 4.3

Performance Evaluation

We measure the performance of the method using the entire DBpedia 201504 as a test set, measuring possibility and generality for every distinct axiom discovered by our algorithm. To avoid overloading DBpedia’s SPARQL endpoint, we set up a local mirror using the Virtuoso Universal Server.3 3

https://virtuoso.openlinksw.com/.

28

5

T. H. Nguyen and A. G. B. Tettamanzi

Results and Discussions

We ran the GE method 20 times with the parameters shown in Table 1 on the BNF grammar defined in Sect. 3.1. Full results are available online.4 Table 2. Number of valid distinct axioms discovered over 20 runs.

Table 1. Parameter values for GE. Parameter

Value

Total effort k

100,000; 200,000; 300,000; 400,000

initLenChrom 6 pCross

80%

pMut

1%

popSize

1000; 2000; 5000; 10000

k

popSize 1000 2000

5000

10000

100000

8806 11389

4684

4788

200000

6204 13670 10632

9335

300000

5436 10541 53021 14590

400000

5085

9080 35102 21670

The number of valid distinct axioms, i.e., axioms φ such that Π(φ) > 0 and gφ > 0, discovered is listed in Table 2. For measuring the accuracy of our results, given that the discovered axioms come with an estimated degree of possibility, which is essentially a fuzzy degree of membership, we propose to use a fuzzy extension of the usual definition of precision, based on the most widely used definition of fuzzy set cardinality, introduced in [16] as follows: given a fuzzy set F defined on a countable universe set Δ, F (x), (16) F  = x∈Δ

In our case, we may view Π(φ) as the degree of membership of axiom φ in the (fuzzy) set of the “positive” axioms. The value of precision can thus be computed against the test dataset, i.e., DBpedia 2015-04, according to the formula

true positives φ ΠDBpedia (φ) =

, (17) precision = discovered axioms φ ΠTD (φ) where ΠTD and ΠDBpedia are the possibility measures computed on the training dataset and DBpedia, respectively. The results in Table 3 confirm the high accuracy of our axiom discovery method with a precision ranging from 0.969 to 0.998 for all the different considered sizes of population and different numbers of generations (reflected through the values of total effort). According the results, we have statistically compared the performance of using different settings of popSize and k. The best setting {popSize = 5,000; k = 300,000} allows the method to discover 53,021 distinct valid axioms with very high accuracy, i.e., precision = 0.993 ± 0.007. Indeed, the plot in Fig. 2 illustrating the distribution of axioms in terms of possibility and generality shows that most discovered axioms with this setting are highly possible (Π(φ) > 23 ). 4

http://bit.ly/32YEQH1.

Grammar-Based GP for Mining Disjointness Axioms

29

In order to obtain a more objective evaluation, we analyze in detail the axioms discovered by the algorithm with this best setting. First, we observe that together with the mandatory class expression containing the ∀ or ∃ operator, most extracted disjointness axioms contain an atomic class expression. This may be due to the fact that the support of atomic classes is usually larger than the support of a complex class expression. We also analyse axioms containing complex expressions in both their members. These axioms are less general, even though they are completely possible. An example is the case with DisjointClasses(ObjectAllValuesFrom(dbprop:author dbo:Place) ObjectAllValuesFrom(dbprop:placeofBurial dbo:Place)) (Π(φ) = 1.0; gφ = 4), which states that “what can only be buried in a place cannot be what can only have a place as its author ”.

Table 3. Average precision per run (±std ) k

popSize 1,000

2,000

5,000

10,000

100,000 0.981 ± 0.019

0.999 ± 0.002

0.998 ± 0.002

0.998 ± 0.003

200,000 0.973 ± 0.024

0.979 ± 0.011

0.998 ± 0.001

0.998 ± 0.002

300,000 0.972 ± 0.024

0.973 ± 0.014

0.993 ± 0.007

0.998 ± 0.001

400,000 0.972 ± 0.024

0.969 ± 0.018

0.980 ± 0.008

0.998 ± 0.001

Fig. 2. Possibility and generality distribution of the discovered axioms

We also observe that some discovered axioms have a particularly high generality, as it is the case with DisjointClasses(dbo:Writer ObjectAllValuesFrom(dbo:writer dbo:Agent)) (Π(φ) = 0.982; gφ = 79,464). This can be explained by the existence of classes supported by a huge number of instances (like dbo:Agent or dbo:Writer). From it, we might say that it is quite possible that “writers are never written by agents”. Another similar case is axiom DisjointClasses(dbo:Journalist ObjectAllValuesFrom(dbo:distributor dbo:Agent)) (Π(φ) = 0.992; gφ = 32,533) whereby in general “journalists are not distributed by agents”, although it would appear that some journalists are, since Π(φ) < 1! Finally, we analyze an example of a completely possible and highly general axiom, DisjointClasses(dbo:Stadium ObjectAllValuesFrom(dbo:birthPlace dbo:Place)) (Π(φ) = 1.0; gφ = 10,245), which we can paraphrase as “stadiums cannot have a place as their birthplace”. Knowing that Stadium and Place are not disjoint, this axiom states that Stadium and ∀birthPlace.Place are in fact disjoint; in addition, ∀.birthPlace.Place, i.e., “(people) whose birthplace is a place” is a class with many instances, hence the high generality of the axiom.

30

6

T. H. Nguyen and A. G. B. Tettamanzi

Related Work

Some prominent works are introduced in Sect. 1 and are also analysed in [9,10]. In this paper, we only focus on recent contributions relevant to class disjointness discovery. For instance, Reynaud et al. [7] use Redescription Mining (RM) to learn class equivalence and disjointness axioms with the ReReMi algorithm. RM is about extracting a category definition in terms of a description shared by all the instances of a given class, i.e., equivalence axioms, and finding incompatible categories which do not share any instance, i.e., class disjointness axioms. Their method, based on Formal Concept Analysis (FCA), a mathematical framework mainly used for classification and knowledge discovery, aims at searching for data subsets with multiple descriptions, like different views of the same objects. While category redescriptions, i.e., equivalence axioms, refer to complex types, defined with the help of relational operators like A ≡ ∃r.C or A ≡ B  ∃r.C, in the case of incompatible categories, the redescriptions are only based on the set of attributes with the predicates of dct:subject, i.e., axioms involving atomic classes only. Another procedure for extracting disjointness axioms [8] requires a Terminological Cluster Tree (TCT) to search for a set of pairwise disjoint clusters. A decision tree is built and each node in it corresponds to a concept with a logical formula. The tree is traversed to create concept descriptions collecting the concepts installed in the leaf-nodes. Then, by exploring the paths from the root to the leaves, intensional definitions of disjoint concepts are derived. Two concept descriptions are disjoint if they lie on different leaf nodes. An important limitation of the method is the time-consuming and computationally expensive process of growing a TCT. A small change in the data can lead to a large change in the structure of the tree. Also, like other intensional methods, that work relies on the services of a reasoning component, but suffers from scalability problems for the application to large datasets, like the ones found on the LOD, caused by the excessive growth of the decision tree. In [9,10], a heuristic method by using Grammatical Evolution (GE) is applied to generate class disjointness axioms from an RDF repository. Extracted axioms include both atomic and complex axioms, i.e., defined with the help of relational operators of intersection and union; in other words, axioms like Dis(C1 , C2 ), where C1 and C2 are complex class expressions including  and operators. The use of a grammar allows great flexibility: only the grammar needs to be changed to mine different data repositories for different types of axioms. However, the dependence on SPARQL endpoints (i.e., query engines) for testing mined axioms against facts, i.e., instances, in large RDF repositories limits the performance of the method. In addition, evaluating the effectiveness of the method requires the participation of experts in specific domains, i.e., the use of a Gold Standard, which is proportional to the number of concepts. Hence, the extracted axioms are limited to the classes relevant to a small scope of topics, namely the Work topic of DBpedia.5 Also, complex axioms are defined with the help of relational operators of intersection and union, which can also be mechanically derived from the known atomic axioms. 5

https://wiki.dbpedia.org/.

Grammar-Based GP for Mining Disjointness Axioms

7

31

Conclusion

We have proposed an extension of a grammar-based GP method for mining disjointness axioms involving complex class expressions. These expressions consist of the relational operators of existential quantification and value restriction. The use of a training-testing model allows to objectively validate the method, while also alleviating the computational bottleneck of SPARQL endpoints. We analyzed some examples of discovered axioms. The experimental results confirm that the proposed method is capable of discovering highly accurate and general axioms. In the future, we will focus on mining disjointness axioms involving further types of complex classes, by bringing into the picture other relational operators such as owl:hasValue and owl:OneOf. We might also forbid the occurrence of atomic classes at the root of class expressions. We also plan on refining the evaluation of candidate axioms with the inclusion of some measurement of their complexity in the fitness function. Acknowledgments. This work has been supported by the French government, through the 3IA Cˆ ote d’Azur “Investments in the Future” project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002.

References 1. Guarino, N., Oberle, D., Staab, S.: What is an ontology? In: Staab, S., Studer, R. (eds.) Handbook on Ontologies. IHIS, pp. 1–17. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-540-92673-3 0 2. Gruber, T.R.: Toward principles for the design of ontologies used for knowledge sharing? Int. J. Hum. Comput. Stud. 43(5–6), 907–928 (1995) 3. Zhu, M.: DC proposal: ontology learning from noisy linked data. In: Aroyo, L., et al. (eds.) ISWC 2011. LNCS, vol. 7032, pp. 373–380. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-25093-4 31 4. V¨ olker, J., Vrandeˇci´c, D., Sure, Y., Hotho, A.: Learning disjointness. In: Franconi, E., Kifer, M., May, W. (eds.) ESWC 2007. LNCS, vol. 4519, pp. 175–189. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-72667-8 14 5. V¨ olker, J., Fleischhacker, D., Stuckenschmidt, H.: Automatic acquisition of class disjointness. J. Web Semant. 35, 124–139 (2015) 6. Lehmann, J.: Dl-learner: learning concepts in description logics. J. Mach. Learn. Res. 10, 2639–2642 (2009) 7. Reynaud, J., Toussaint, Y., Napoli, A.: Redescription mining for learning definitions and disjointness axioms in linked open data. In: Endres, D., Alam, M., S ¸ otropa, D. (eds.) ICCS 2019. LNCS (LNAI), vol. 11530, pp. 175–189. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-23182-8 13 8. Rizzo, G., d’Amato, C., Fanizzi, N., Esposito, F.: Terminological cluster trees for disjointness axiom discovery. In: Blomqvist, E., Maynard, D., Gangemi, A., Hoekstra, R., Hitzler, P., Hartig, O. (eds.) ESWC 2017. LNCS, vol. 10249, pp. 184–201. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58068-5 12


9. Nguyen, T.H., Tettamanzi, A.G.B.: Learning class disjointness axioms using grammatical evolution. In: Sekanina, L., Hu, T., Lourenço, N., Richter, H., García-Sánchez, P. (eds.) EuroGP 2019. LNCS, vol. 11451, pp. 278–294. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-16670-0_18
10. Nguyen, T.H., Tettamanzi, A.G.B.: An evolutionary approach to class disjointness axiom discovery. In: WI, pp. 68–75. ACM (2019)
11. Koza, J.R.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge (1992)
12. Vanneschi, L., Poli, R.: Genetic programming – introduction, applications, theory and open issues. In: Rozenberg, G., Bäck, T., Kok, J.N. (eds.) Handbook of Natural Computing, pp. 709–739. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-540-92910-9_24
13. O'Neill, M., Ryan, C.: Grammatical evolution. Trans. Evol. Comput. 5(4), 349–358 (2001). https://doi.org/10.1109/4235.942529
14. Tettamanzi, A.G.B., Faron-Zucker, C., Gandon, F.: Possibilistic testing of OWL axioms against RDF data. Int. J. Approx. Reason. 91, 114–130 (2017)
15. Zadeh, L.A.: Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 1, 3–28 (1978)
16. De Luca, A., Termini, S.: A definition of a nonprobabilistic entropy in the setting of fuzzy sets theory. Inf. Control 20, 301–312 (1972)

An Incremental Algorithm for Computing All Repairs in Inconsistent Knowledge Bases

Bruno Yun (University of Aberdeen, Aberdeen, UK) and Madalina Croitoru (University of Montpellier, Montpellier, France)

Abstract. Repair techniques are used for reasoning in the presence of inconsistencies. Such techniques rely on optimisations to avoid the computation of all repairs, even though certain applications need all repairs to be generated. In this paper, we show that the problem of computing all repairs is not trivial in practice. To account for a scalable solution, we provide an incremental approach for the computation of all repairs when the conflicts have a cardinality of at most three. We empirically study its performance on generated knowledge bases (where the knowledge base generator could be seen as a secondary contribution in itself).

Keywords: Repairs · Knowledge base · Existential rule

1 Introduction

We place ourselves in the context of reasoning with knowledge bases (KBs) expressed using Datalog± [11] and investigate inconsistent KBs, i.e. KBs with the inconsistency solely stemming from the factual level and a coherent ontology. For instance, a prominent practical application in this setting is Ontology Based Data Access (OBDA) [22], which considers the querying of multiple heterogeneous data sources via a unifying ontology. With few exceptions [7], approaches performing query answering under inconsistency in the aforementioned setting rely on repairs [3]. Repairs, originally defined for database approaches [1], are maximal subsets of facts consistent with the ontology. Inconsistency tolerant semantics [21] avoid the computational overhead of computing all repairs by various algorithmic strategies [10]. Unfortunately, certain tasks require repair enumeration, such as inconsistency-based repair ranking frameworks [26] or argumentation-based decision-making [12,25]. We focus on the problem of computing possibly some or all repairs from Datalog± inconsistent KBs. Our proposal relies on the notion of conflict (i.e. set of facts that trigger an inconsistency). Although approaches exist for computing conflicts in SAT instances or propositional logic [16,17], there are few works addressing conflict computation for Datalog±, which comes with additional
challenges given the expressivity of the language [23]. In this paper, we extend the state of the art with a computationally efficient manner to generate the set of all repairs using an incremental algorithm adapted from stable set computation in hypergraphs [9]. To this end, we make the hypothesis that the KB allows for bounded-sized conflicts (limited here to three). Our proposed algorithm, in a first step, finds the conflicts of the KB using specific sequences of directed hyperedges (derivations) of the Graph of Atom Dependency (GAD) [19] leading to falsum. In the second step, we use an efficient incremental algorithm for finding all repairs of a set of facts from the set of conflicts of a given KB. This efficient algorithm was inspired by the problem of extending a given list of maximal independent sets in hypergraphs when hyperedges have a bounded dimension [9]. The aforementioned graph-theoretical problem was proven to be in the NC complexity class if the size of the hyperedges is bounded by three [9,13], which means that the task can be efficiently solved in polylogarithmic time on a parallel computer with polynomially many processors. Please note that although conflicts of size more than three can easily occur even when the arity of the negative constraints is limited to two, we do not find this condition limiting as, in reality, it is not unlikely to find KBs with only conflicts of size two. Therefore, the proposed algorithm is more efficient than the approach of [23] for two reasons: (1) we do not compute all the "causes" and "consequences" of all the atoms and restrict ourselves to the derivations that lead to an inconsistency, and (2) we use an efficient algorithm for incrementally computing repairs from conflicts. When implementing our technique, we noticed two key aspects of our approach: (1) getting some repairs from a KB can be relatively easy, as the average number of repairs found during the allotted time did not change when the KB grew, and (2) finding the last repairs was comparatively harder than finding the first ones. Please note that although the computational problem of getting all repairs is in EXPTIME, as the number of repairs can be exponential w.r.t. the number of facts [8], the proposed algorithm has a twofold significance: (1) it improves upon the state of the art for the task of all repair computation and (2) it is a viable alternative for applications that require repair enumeration.

2 Background Notions

We introduce some notions of the Datalog± language. A fact is a ground atom of the form p(t1, . . . , tk) where p is a predicate of arity k and, for every i ∈ {1, . . . , k}, ti is a constant. An existential rule is of the form r = ∀X,Y B[X,Y] → ∃Z H[Z,X] where B (called the body) and H (called the head) are existentially closed atoms or conjunctions of existentially closed atoms and X, Y, Z their respective vectors of variables. A rule is applicable on a set of facts F iff there exists a homomorphism from the body of the rule to F. Applying a rule to a set of facts (also called chase) consists of adding the set of atoms of the conclusion of the rule to the facts according to the application homomorphism. Different chase mechanisms use different simplifications that prevent
infinite redundancies [5]. We use recognisable classes of existential rules where the chase is guaranteed to stop [5]. A negative constraint is a rule of the form ∀X,Y B[X,Y] → ⊥ where B is an existentially closed atom or a conjunction of existentially closed atoms, X, Y their respective vectors of variables and ⊥ is falsum.

Definition 1. A KB is a tuple K = (F, R, N) where F is a finite set of facts, R a set of rules and N a set of negative constraints.

Example 1. Let K = (F, R, N) with F = {a(m), b(m), c(m), d(m), e(m), f(m), g(m), h(m), i(m), j(m)}, R = {∀x(f(x) ∧ h(x) → k(x)), ∀x(i(x) ∧ j(x) → l(x))} and N = {∀x(a(x) ∧ b(x) ∧ c(x) → ⊥), ∀x(c(x) ∧ d(x) → ⊥), ∀x(e(x) ∧ f(x) ∧ d(x) → ⊥), ∀x(e(x) ∧ f(x) → ⊥), ∀x(i(x) ∧ k(x) → ⊥), ∀x(l(x) ∧ h(x) → ⊥)}.

In a KB K = (F, R, N), the saturation SatR(X) of a set of facts X is the set of atoms obtained after successively applying the set of rules R on X until a fixed point. A set X ⊆ F is R-inconsistent iff falsum can be entailed from the saturation of X by R ∪ N, i.e. SatR∪N(X) |= ⊥. A conflict of a KB is a minimal R-inconsistent subset of facts.

Definition 2. Let us consider K = (F, R, N). X ⊆ F is a conflict of K iff SatR∪N(X) |= ⊥ and, for every X′ ⊂ X, SatR∪N(X′) ⊭ ⊥. The set of all conflicts of K is denoted Conflict(K).

Example 2 (Cont'd Example 1). We have Conflict(K) = {{a(m), b(m), c(m)}, {c(m), d(m)}, {e(m), f(m)}, {f(m), h(m), i(m)}, {h(m), i(m), j(m)}}. The set {d(m), e(m), f(m)} is not a conflict since SatR∪N({e(m), f(m)}) |= ⊥.

To practically compute the conflicts, we use a special directed hypergraph [14] called the Graph of Atom Dependency (GAD), defined by [19].

Definition 3. Given a KB K = (F, R, N), the GAD of K, denoted by GADK, is a pair (V, D) such that V is the set of atoms in SatR∪N(F) and D = {(U, W) ∈ 2^V × 2^V s.t. there exists r ∈ R ∪ N and a homomorphism π such that W is obtained by applying r on U using π}.

Example 3 (Cont'd Example 1). Here, we have that V = {⊥, a(m), b(m), c(m), . . . , l(m)} and D = {D1, D2, . . . , D8} where D1 = ({f(m), h(m)}, {k(m)}), D2 = ({i(m), j(m)}, {l(m)}), D3 = ({a(m), c(m), b(m)}, {⊥}), D4 = ({c(m), d(m)}, {⊥}), D5 = ({k(m), i(m)}, {⊥}), D6 = ({h(m), l(m)}, {⊥}), D7 = ({e(m), f(m)}, {⊥}) and D8 = ({d(m), e(m), f(m)}, {⊥}).

A derivation is a sequence of rule applications such that each rule can be applied successively. A derivation for a specific atom a is a finite minimal sequence of rule applications starting from a set of facts and ending with a rule application that generates a. We now define the notions of fix and repair w.r.t. a set Y ⊆ 2^F of a KB K = (F, R, N), which will be used in the proposed algorithms.


Definition 4. Let F be a set of facts. F′ ⊆ F is a fix of F w.r.t. Y ⊆ 2^F iff for every X ∈ Y, X ∩ F′ ≠ ∅. A fix F′ of F w.r.t. Y is called a minimal fix of F w.r.t. Y iff, for all F″ ⊂ F′, it holds that F″ is not a fix of F w.r.t. Y. The set of all minimal fixes of F w.r.t. Y is denoted by MFix(F, Y).

The notion of KB reparation is linked to that of conflict via the minimal fixes. We define a repair of a set of facts F w.r.t. a set Y ⊆ 2^F.

Definition 5. Let F be a set of facts. X ⊆ F is a repair of F w.r.t. Y ⊆ 2^F iff there exists F′ ∈ MFix(F, Y) such that X = F \ F′.

Please note that there is a bijection between the set of repairs and the set of minimal fixes of a set F w.r.t. a set Y ⊆ 2^F. We denote the set of all repairs of F w.r.t. Y by Repair(F, Y). Moreover, the repairs of F w.r.t. Conflict(K) are the maximal, for set inclusion, consistent subsets of F.
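
To make Definitions 2, 4 and 5 concrete, the following brute-force Python sketch recomputes the conflicts, minimal fixes and repairs of the KB of Example 1. The propositional encoding of the facts (single letters standing for a(m), b(m), . . .) and the helper names are ours, and the exhaustive enumeration is only meant for illustration; it is not the incremental algorithm of Sect. 3.

from itertools import combinations, chain

# Example 1 KB, encoded propositionally over the facts a(m)..j(m).
facts = set("abcdefghij")
rules = [({"f", "h"}, "k"), ({"i", "j"}, "l")]                     # body -> head
neg_constraints = [{"a", "b", "c"}, {"c", "d"}, {"e", "f", "d"},
                   {"e", "f"}, {"i", "k"}, {"l", "h"}]

def saturate(atoms):
    """Apply the rules until a fixed point (the chase terminates here)."""
    atoms = set(atoms)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= atoms and head not in atoms:
                atoms.add(head)
                changed = True
    return atoms

def inconsistent(atoms):
    """X is R-inconsistent iff some negative constraint fires on Sat(X)."""
    sat = saturate(atoms)
    return any(nc <= sat for nc in neg_constraints)

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Conflicts: minimal R-inconsistent subsets of facts (Definition 2).
conflicts = [set(x) for x in subsets(facts)
             if inconsistent(x) and not any(inconsistent(set(x) - {f}) for f in x)]

# Minimal fixes: minimal hitting sets of the conflicts (Definition 4);
# repairs are their complements (Definition 5).
def is_fix(f):
    return all(c & f for c in conflicts)

min_fixes = [set(x) for x in subsets(facts)
             if is_fix(set(x)) and not any(is_fix(set(x) - {f}) for f in x)]
repairs = [facts - f for f in min_fixes]

print(sorted(map(sorted, conflicts)))   # includes {'e','f'} but not {'d','e','f'}
print(len(min_fixes), len(repairs))     # one repair per minimal fix (bijection)

Because inconsistency is monotone, checking only one-element removals suffices for the minimality tests; this is what makes the short comprehensions above correct.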

3 Repairs Generation

In this section, we detail a framework for computing all maximal consistent sets of a KB. The approach is given in Algorithm 1 and composed of two steps:

1. (Conflicts Generation) First, the GAD is constructed. Then, all conflicts are computed by extracting the facts used in the minimal derivations for ⊥.
2. (From Conflicts to Repairs) Second, repairs of F w.r.t. Conflict(K) are constructed using the FindAllRepairs call.

The provided algorithm is efficient for computing repairs in the case where the conflicts are of size at most 3.

Algorithm 1: Finding all maximal consistent sets of K
  input : A KB K = (F, R, N)
  output: A set I of repairs of F w.r.t. Conflict(K)
  1  GADK ← GADConstructor(K);
  2  C ← FindAllConflicts(K, GADK);
  3  Result ← FindAllRepairs(K, C);
  4  return Result;

Conflicts are subsets of F but there is not always a negative constraint directly triggered by the conflict (since the actual "clash" can be between the atoms generated using rules). These two kinds of conflicts are referred to as "conflicts" and "naive conflicts" in Rocher [23]. The use of the GAD allows us to keep track of all the rule applications and to propagate those "clashes" into F.¹

¹ The algorithms avoid the problem of derivation loss [19], which is important for the completeness of our approach. Note that in Hecham et al. [19] the authors discuss how finding all derivations for an atom is practically feasible despite the problem being exponential for combined complexity but polynomial for data complexity.


FindAllConflicts takes as input a KB K and GADK and outputs the set of all conflicts of this KB. It is based on three steps: (1) It builds the GAD. This step has been proven to be efficient as the GAD can be constructed alongside the chase [18]. (2) The GAD is used for finding the set of all possible minimal derivations for ⊥. (3) For each derivation for ⊥, the facts in F that enabled the generation of this derivation are extracted. They correspond to conflicts of K.

Example 4 (Cont'd Example 2). The set of all minimal derivations for ⊥ is {(D2, D6), (D1, D5), (D7), (D4), (D3)}. The set of conflicts is {{h(m), i(m), j(m)}, {f(m), h(m), i(m)}, {e(m), f(m)}, {c(m), d(m)}, {a(m), b(m), c(m)}}.
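
The sketch below illustrates steps (2) and (3) on the GAD of Example 3: it walks the hyperedges backwards from ⊥, collects the alternative sets of facts supporting each derivation, and keeps the minimal ones. Atom names are abbreviated to single letters (a stands for a(m), and so on) and the helper names are ours; a real implementation would build the GAD alongside the chase and guard against derivation loss, as discussed in footnote 1.

# Hyperedges of the GAD of Example 3: (sources, targets).
D = {
    "D1": ({"f", "h"}, {"k"}), "D2": ({"i", "j"}, {"l"}),
    "D3": ({"a", "c", "b"}, {"bot"}), "D4": ({"c", "d"}, {"bot"}),
    "D5": ({"k", "i"}, {"bot"}), "D6": ({"h", "l"}, {"bot"}),
    "D7": ({"e", "f"}, {"bot"}), "D8": ({"d", "e", "f"}, {"bot"}),
}
facts = set("abcdefghij")

def producers(atom):
    """Hyperedges whose target set contains the given atom."""
    return [name for name, (src, tgt) in D.items() if atom in tgt]

def support(atom):
    """All alternative sets of facts from which `atom` can be derived."""
    if atom in facts:
        return [{atom}]
    options = []
    for name in producers(atom):
        src, _ = D[name]
        combos = [set()]
        for s in src:                       # combine one support per source atom
            combos = [c | sup for c in combos for sup in support(s)]
        options.extend(combos)
    return options

candidates = support("bot")
# Conflicts = minimal fact sets supporting a derivation of falsum.
conflicts = [c for c in candidates
             if not any(other < c for other in candidates)]
print(sorted(map(sorted, conflicts)))
# [['a','b','c'], ['c','d'], ['e','f'], ['f','h','i'], ['h','i','j']]

Note how {d, e, f} is produced as a candidate via D8 but is discarded by the minimality filter because {e, f} is already a candidate, exactly as in Example 2.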

3.1 From Conflicts to Repairs

Our approach for computing the set of maximal consistent sets of a KB from the set of conflicts is composed of four algorithms: FindAllRepairs, NewMinimalFix, FindRepair and SubRepair. In order to compute the repairs of F w.r.t. the set of conflicts of K, we need to first compute the set of all minimal fixes of F w.r.t. Conflict(K). FindAllRepairs computes the repairs of F w.r.t. Conflict(K) by iteratively computing the set of all minimal fixes of F w.r.t. Conflict(K) before converting them into repairs of F w.r.t. Conflict(K). More precisely, FindAllRepairs repeatedly calls NewMinimalFix, which returns a new minimal fix of F w.r.t. Conflict(K) not previously found. The idea behind NewMinimalFix is that it produces new sets (U and A′) depending on the minimal fixes of F w.r.t. Conflict(K) that were previously found. A repair for U w.r.t. A′ can be modified in order to return a new fix of F w.r.t. Conflict(K). Lastly, FindRepair computes a repair for U w.r.t. a set A ⊆ 2^U by relying on SubRepair for iteratively constructing the repair. In the rest of this section, we detail the general outline of FindAllRepairs and NewMinimalFix.

FindAllRepairs (see Algorithm 2) takes as input a KB K and its set of conflicts Conflict(K) and returns the set of all repairs of F w.r.t. Conflict(K). The sets B and I contain the set of minimal fixes and repairs of F w.r.t. Conflict(K) respectively and are initially empty. NewMinimalFix(K, Conflict(K), B) is called for finding a new minimal fix of F w.r.t. Conflict(K) that is not contained in B. If NewMinimalFix(K, Conflict(K), B) returns the empty set then B already contains all possible minimal fixes of F w.r.t. Conflict(K). Otherwise, the new minimal fix of F w.r.t. Conflict(K) is stored in B and converted into a repair of F w.r.t. Conflict(K) that is stored in I.

NewMinimalFix (see Algorithm 3) takes as input a KB K, the corresponding set of conflicts Conflict(K) and a set of minimal fixes B of F w.r.t. Conflict(K); it returns the empty set if B contains all the minimal fixes of F w.r.t. Conflict(K), otherwise it returns a new minimal fix of F w.r.t. Conflict(K) that is not contained in B. First, the facts in F that are not in any conflict of K are removed. By definition, these facts cannot be in a minimal fix of F w.r.t. Conflict(K). Then, we check whether or not each fact is at least in one element of B. If this is not the case, we can build a minimal fix of F w.r.t.


Algorithm 2: FindAllRepairs

  input : A KB K = (F, R, N) and a set of conflicts Conflict(K)
  output: A set I of repairs of F w.r.t. Conflict(K)
  1  B ← ∅, I ← ∅, stops ← false;
  2  while stops = false do
  3      MF ← NewMinimalFix(K, Conflict(K), B);
  4      if MF = ∅ then
  5          stops ← true;
  6      else
  7          B ← B ∪ {MF};
  8          I ← I ∪ {F \ MF};
  9  return I;

Conflict(K) that is not in B (lines 6 to 10). To do so, we first pick an arbitrary fact u that is not in any set of B. Then, we pick an arbitrary conflict Au containing u and find a repair Rep of U w.r.t. A′, where A′ is the set of conflicts restricted by U² and U = V \ Au. The resulting U \ Rep is a minimal fix of U w.r.t. A′. It is extended to a minimal fix of F w.r.t. Conflict(K) by adding the fact u. It can thus be added to B as the first minimal fix containing u.

Example 5 (Cont'd Example 4). At step 1 in Table 1, g(m) is removed because it is in no conflict. Then, since ⋃B = ∅ is included in F, an arbitrary fact u = a(m) in F is picked. Then, an arbitrary conflict Au = {a(m), b(m), c(m)} that contains u is selected. U = V \ Au is {d(m), e(m), f(m), h(m), i(m), j(m)} and the set of conflicts restricted by U is {{d(m)}, {e(m), f(m)}, {f(m), h(m), i(m)}, {h(m), i(m), j(m)}}. The repair of U w.r.t. A′ returned is {j(m), i(m), f(m)}, which means that {d(m), e(m), h(m)} is a minimal fix of U w.r.t. A′. We conclude that the set {a(m), d(m), e(m), h(m)} is a minimal fix of F w.r.t. Conflict(K). This process is repeated by iteratively selecting the facts b(m), c(m), f(m), i(m) and j(m). As the reader can note, after step 6, we have that ⋃B = F.

If each fact is at least in one element of B then each conflict a of K is a fix of F w.r.t. B. However, if a is not a minimal fix of F w.r.t. B then we can find u such that a \ {u} is still a fix of F w.r.t. B (lines 13 to 17). We use the previous method and find a repair Rep of U w.r.t. A′ where A′ is the set of conflicts restricted by U, with U = V \ (a \ {u}). U \ Rep is thus a minimal fix of U w.r.t. A′ but also a minimal fix of F w.r.t. Conflict(K). In the example, we skipped this case as each a in Conflict(K) is a minimal fix of F w.r.t. B.

In the case where each fact is at least in one minimal fix of F w.r.t. B and each conflict is a minimal fix of F w.r.t. B, we can still find a new minimal fix of F w.r.t. Conflict(K) that is not in B (lines 18 to 28). We first find subsets S of F

² The set of conflicts restricted by a set U is the set containing each intersection of a conflict with U. Namely, it is equal to {X ∩ U | X ∈ Conflict(K)}.


Algorithm 3: NewMinimalFix

  input : A KB K = (F, R, N), the corresponding set of conflicts Conflict(K) and a set B of minimal fixes of F w.r.t. Conflict(K)
  output: Either a new minimal fix of F w.r.t. Conflict(K) that is not in B, or ∅ if B contains all of them
  1   V ← F;
  2   for v ∈ V do
  3       if there is no a ∈ Conflict(K) such that v ∈ a then
  4           V ← V \ {v};
  5   if ⋃B ⊂ V then
  6       u ← random fact in V \ ⋃B;
  7       Au ← random conflict in Conflict(K) that contains u;
  8       U ← V \ Au;
  9       A′ ← {a ∩ U | a ∈ Conflict(K), u ∉ a};
  10      return {u} ∪ (U \ FindRepair(A′, U));
  11  else
  12      for a ∈ Conflict(K) do
  13          if a is not a minimal fix of F w.r.t. B then
  14              u ← fact in a s.t. a is still a fix of F w.r.t. B after its removal;
  15              U ← V \ (a \ {u});
  16              A′ ← {a ∩ U | a ∈ Conflict(K)};
  17              return U \ FindRepair(A′, U);
  18      for S ⊆ V with |S| ≤ 3, S ⊄ a and a ⊄ S for every a ∈ Conflict(K) do
  19          for v ∈ S do
  20              BSv ← {X ∈ B such that X ∩ S = {v}};
  21          BS0 ← {X ∈ B such that X ∩ S = ∅};
  22          for {Bv | v ∈ S} with Bv ∈ BSv for every v ∈ S do
  23              if Bv ≠ ∅ for every v ∈ S then
  24                  if for every X ∈ BS0, X ⊆ ⋃v∈S Bv then
  25                      Z ← S ∪ (V \ ⋃v∈S Bv);
  26                      U ← V \ Z;
  27                      A′ ← {a ∩ U | a ∈ Conflict(K)};
  28                      return U \ FindRepair(A′, U);
  29  return ∅;

that have a size equal to or less than 3,³ that are not included in any conflict of K, and such that no conflict of K is included in S. If there is a set S that satisfies all the aforementioned conditions then it can be extended into a

³ The computational problem of finding a single repair of F w.r.t. a set Y ⊆ 2^F is only in the NC complexity class when every y ∈ Y is such that |y| ≤ 3; otherwise it has been proven to be in the RNC complexity class [6,20].


Table 1. List of minimal fixes and repairs for F w.r.t. Conflict(K) found at each step.

Step | New elements of B | New elements of I
  1  | {d, e, h, a}      | {b, c, f, g, i, j}
  2  | {d, e, h, b}      | {a, c, f, g, i, j}
  3  | {e, h, c}         | {a, b, d, f, g, i, j}
  4  | {c, h, f}         | {a, b, d, e, g, i, j}
  5  | {c, e, i}         | {a, b, d, f, g, h, j}
  6  | {c, f, j}         | {a, b, d, e, g, h, i}
  7  | {a, d, f, h}      | {b, c, e, g, i, j}
  8  | {b, d, f, h}      | {a, c, e, g, i, j}
  …  | …                 | …

minimal fix of F w.r.t. Conflict(K) that is not in B. If that is the case, the set Z does not contain any conflict of K [9]. Then, we use the previous method and find a repair Rep of U w.r.t. A′ where A′ is the set of conflicts restricted by U, with U = V \ Z. U \ Rep is a minimal fix of U w.r.t. A′ but also a minimal fix of F w.r.t. Conflict(K) (line 28) because Z does not contain any conflict of K. Note that this algorithm relies on Algorithm 4 for finding a repair w.r.t. some restricted conflicts.

Example 6 (Cont'd Example 5). Let us consider S = {c(m), e(m)} at step 7 in Table 1. We have |S| ≤ 3, S ⊄ a and a ⊄ S for every a ∈ Conflict(K). We have BSc(m) = {{c(m), h(m), f(m)}, {c(m), f(m), j(m)}} and BSe(m) = {{d(m), e(m), h(m), a(m)}, {d(m), e(m), h(m), b(m)}}. Since BS0 = ∅, for every X ∈ BS0, X ⊆ ⋃E, where E = {{d(m), e(m), h(m), a(m)}, {c(m), h(m), f(m)}} ∈ BSc(m) × BSe(m). Thus, we have Z = {c(m), e(m), b(m), i(m), j(m)}, U = {a(m), d(m), f(m), h(m)} and A′ = {{a(m)}, {d(m)}, {f(m)}, {f(m), h(m)}, {h(m)}}. The only repair of U w.r.t. A′ is ∅. We conclude that the set U is a minimal fix of F w.r.t. Conflict(K).
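
As a cross-check of Example 6, the short Python sketch below recomputes the sets BSv, Z, U and the restricted conflicts for S = {c(m), e(m)}. Atoms are abbreviated to single letters, the helper names are ours, and only the candidate-selection step of lines 18-28 is reproduced, not the full algorithm.

from itertools import product

F = set("abcdefghij")
V = F - {"g"}                                   # g is in no conflict
conflicts = [set("abc"), set("cd"), set("ef"), set("fhi"), set("hij")]
B = [set("deha"), set("dehb"), set("ehc"), set("chf"), set("cei"), set("cfj")]

S = {"c", "e"}                                  # candidate set of Example 6
BS = {v: [X for X in B if X & S == {v}] for v in S}
BS0 = [X for X in B if not (X & S)]             # empty here, as in the example

for selection in product(*(BS[v] for v in S)):  # one previously found fix per v in S
    union = set().union(*selection)
    if all(not (X <= union) for X in BS0):      # condition of line 24 (vacuous here)
        Z = S | (V - union)
        U = V - Z
        A_restricted = [c & U for c in conflicts if c & U]
        print(sorted(Z), sorted(U), sorted(map(sorted, A_restricted)))
        break

For the first selection, the printout gives Z = {b, c, e, i, j}, U = {a, d, f, h} and the restricted conflicts {{a}, {d}, {f}, {f, h}, {h}}; since the only repair of U w.r.t. these restricted conflicts is ∅, U itself is the new minimal fix of step 7 in Table 1.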

3.2 Generating a Repair Efficiently

We show how to efficiently find a single repair of U w.r.t. A ⊆ 2^U when |a| ≤ 3 for every a ∈ A. The problem of finding a single repair is in the NC complexity class (but as soon as |a| > 3, it falls into the RNC complexity class [6,20]). FindRepair gradually builds a repair by successively finding subrepairs C of large size with SubRepair and by restricting U and A.⁴ We now detail the general outline of the two algorithms FindRepair and SubRepair.

FindRepair (see Algorithm 4) takes as input a set of facts U and a set A ⊆ 2^U and returns a repair of U w.r.t. A. We first initialise the algorithm with A′ = A, V′ = U and I = ∅ (I will eventually be the repair of U w.r.t. A).

⁴ A subrepair of U w.r.t. A is a subset of a repair of U w.r.t. A.


As long as the set A′ is not empty, we update A′ and V′ by removing the facts found in a large subset of a repair of V′ w.r.t. A′. The two for-loops at lines 8 and 11 remove supersets and sets of size one that may arise in A′. The reason behind the removal of supersets is that they do not change the set of repairs obtained. Furthermore, at lines 13 and 14, we remove the facts that are in a set of size one in A′, because they cannot be in any repair. Finally, when A′ = ∅, I is returned together with the remaining facts in V′.

Algorithm 4: FindRepair

  input : A set of facts U and a set A ⊆ 2^U such that |a| ≤ 3 for every a ∈ A
  output: A repair of U w.r.t. A
  1   A′ ← A, V′ ← U, I ← ∅;
  2   while A′ ≠ ∅ do
  3       C ← SubRepair(V′, A′);
  4       I ← I ∪ C;
  5       V′ ← V′ \ C;
  6       for a′ ∈ A′ do
  7           a′ ← a′ ∩ V′;
  8       for a′ ∈ A′ do
  9           if there exists a″ ∈ A′ such that a″ ⊂ a′ then
  10              A′ ← A′ \ {a′};
  11      for a′ ∈ A′ do
  12          if |a′| = 1 then
  13              V′ ← V′ \ a′;
  14              A′ ← A′ \ {a′};
  15  return I ∪ V′;
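
For readers who prefer running code, here is a small sequential Python sketch of the FindRepair loop. The naive_subrepair helper is a deliberately simple stand-in for the parallel SubRepair of Algorithm 5 (it still yields a maximal consistent subset on this example; the size guarantees of Algorithm 5 matter only for the parallel running time), and all names are ours.

def naive_subrepair(V, A):
    """Stand-in for Algorithm 5: return a one-element subrepair of (V, A)."""
    for v in V:
        if frozenset([v]) not in A:
            return {v}
    return set()

def find_repair(U, A):
    """Sequential sketch of the FindRepair loop (Algorithm 4)."""
    A_cur = {frozenset(a) for a in A}
    V_cur, I = set(U), set()
    while A_cur:
        C = naive_subrepair(V_cur, A_cur)
        I |= C
        V_cur -= C
        A_cur = {frozenset(a & V_cur) for a in A_cur}          # lines 6-7
        A_cur = {a for a in A_cur
                 if not any(b < a for b in A_cur)}             # lines 8-10: drop supersets
        singletons = {a for a in A_cur if len(a) == 1}         # lines 11-14
        for a in singletons:
            V_cur -= a
        A_cur -= singletons
    return I | V_cur                                           # line 15

# Example 7 input: the result is a repair of U w.r.t. A, e.g. {f, i, j} as in
# Example 7 or another maximal consistent subset, depending on the chooser.
U = set("defhij")
A = [set("d"), set("ef"), set("fhi"), set("hij")]
print(sorted(find_repair(U, A)))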

SubRepair (see Algorithm 5) is used for finding a large subrepair of U w.r.t. A ⊆ 2^U. This algorithm uses two constants d0 and d1. When these constants are initialised with d0 = 0.01 and d1 = 0.25, [13] showed that SubRepair returns either a subrepair j such that |j ∪ N(j, A, U)| ≥ d0 × p/log(p), or a subrepair of size at least d0 × p/log(p), where p is the size of V′. SubRepair maintains a collection of sets J initialised with sets of the form {v} for all facts v ∈ U such that {v} is not a set in A. The sets in J are subrepairs and will remain mutually disjoint throughout the algorithm. The algorithm iteratively checks if there is a set in J that is large enough to be returned and, if not, it selects which sets of J should be merged and merges them (lines 12 to 20). However, merging subrepairs does not always produce a subrepair, which is why some facts are removed after the merging. Although the procedure for finding a matching M of Q at line 16 is not described, the reader can find it in [15]. Note that the functions N and D are defined as N(C, A, U) = {u ∈ U \ C | ∃a ∈ A, a \ C = {u}} and D(C, C′, A, U) = (N(C, A, U) ∩ C′) ∪ (N(C′, A, U) ∩ C). It has been proven that SubRepair runs in
O(log² n) time on n + m processors and, since FindRepair makes at most O(log² n) calls to SubRepair (because the subrepairs have a minimal size), FindRepair runs in time O(log⁴ n) on n + m EREW processors.

Algorithm 5: SubRepair

  input : A set of facts U and a set A ⊆ 2^U such that |a| ≤ 3 for every a ∈ A
  output: A subrepair of U w.r.t. A
  1   Q ← ∅, p ← |U|, J ← ∅;
  2   for u ∈ U do
  3       if {u} ∉ A then
  4           J ← J ∪ {{u}};
  5   stops ← false;
  6   while stops = false do
  7       for j ∈ J do
  8           if |j ∪ N(j, A, U)| ≥ d0 × p/log(p) then
  9               stops ← true;
  10              result ← j;
  11      if stops = false then
  12          for j ∈ J do
  13              for j′ ∈ J do
  14                  if |D(j, j′, A, U)| ≤ (d1 × p)/(|J| × log(p)) then
  15                      Q ← Q ∪ {{j, j′}};
  16          M ← matching of Q of size (1/4 − 2·d1/d0) × |J|;
  17          for {j, j′} ∈ M do
  18              J ← J \ {j};
  19              J ← J \ {j′};
  20              J ← J ∪ {(j ∪ j′) \ D(j, j′, A, U)};
  21  return result;

An Incremental Algorithm for Computing All Repairs

4

43

Evaluation

Since the benchmarks of [10] was too large to be handled by the algorithm with a reasonable duration, we created a generator for Datalog± KBs as follows: 1. (Facts Generation) We generates N1 facts with a fixed probability p0 of generating a new predicate. The arity of the predicates are randomly picked between two fixed constants c0 and c1 . We used the idea that some constants only appear at a specific position in a predicate. Thus, we linked multiple positions of atoms into groups that share a distinct pool of constants with a probability p1 . The size of each pool of constants is increased such that for every predicate predi , the product of the size of all the pools of constants of positions in predicate predi is superior to the number of atoms with predicate predi . Last, we created the necessary constants and “filled” the positions of atoms with constants from the corresponding pool of constants such that N1 distinct facts are generated. 2. (Rules Generation) We generate N2 rules and the number of atoms in the head and body of each rule is picked between four fixed constants c2 , c3 , c4 and c5 respectively. In order to avoid infinite rule applications, we only use new predicates in the head of the rules. We used the idea that rules should be split in levels such that a rule in level i can be applied to the N1 original facts but also to the atoms generated by the all the rules in the level j < i. Thus, we split the rules randomly in each level such that |Li | ≥ 1 for i ∈ {1, . . . , c6 } Li | = N2 where c6 is a constant corresponding to the maximum and | 1≤i≤c6

level of a rule. In order to build the rules in level one, we randomly “filled” the body of the rules with the N1 ground atom that were previously generated. The heads of the rules are filled with atoms with new predicates but, within the same rule, there is a probability p0 that the same predicate is reused. The positions of the new predicates in the head of a rule r1 are also linked to the position of the predicates in the body of r1 with a probability p2 . At this point, the atoms in the heads of the rules are also filled with constants from the corresponding pool of constants and each rule contains only ground atoms. Variables are added into the bodies of the rules by replacing constants with a probability p3 . There is a probability p4 that a variable is reused when replacing a constant at a position belonging to the same group. Each constant in the heads of the rules is replaced by a variable that is used in the body at a position that belongs to the same group with a probability p5 . The process is repeated for the rules in level superior to one by filling the bodies of the rules with the N1 original facts and the atoms in the head of the rules in inferior levels instead of only the original facts. 3. (Negative constraints Generation) We generate N3 negative constraints and the number of atoms in the body of each negative constraint is randomly fixed between two fixed constants c7 and c8 . In order to generate the negative constraints, one has to be careful not to make the set of rules incoherent, i.e. the union of the set of rules with the set of negative constraints has to

44

B. Yun and M. Croitoru

be satisfiable. GADK = (V, D) is constructed on the KB with the facts and rules generated according to the aforementioned steps. For each ground atom v ∈ V, we compute the set Wv = ⋃_{S∈Sv} ⋃_{Di∈S} U(Di), where Sv is the set of all

possible derivations for v and Di is a rule application in S. Thus, in order to create a negative constraint neg1 of size K, we first pick an atom v1 in V and add it in the body of neg1 . We then remove Wv1 from V and pick an atom v2 in V \ Wv1 such that every atom in the body of neg1 does not belong to Wv2 , we add v2 to the body of neg1 and remove Wv2 from V \ Wv1 . The process is repeated until the body of neg1 reaches the size K. In order to generate a set of different KBs, we decided to vary some parameters and fix others. Namely: N1 varies between 10, 100, 1000 and 10000, p1 varies between 0 and 0.05, N2 varies between 10, 100 and 200, c6 varies between 1 and 3 and N3 varies between 20, 40 and 60. The other parameters were fixed such that p0 = 0.5, p2 = 0.05, c1 = 3, c0 = c2 = c4 = 1, c3 = c5 = 2, p3 = 0.7 and p4 = p5 = 0.8. This resulted in the generation of 144 different combinations of the parameters and a total of 720 different KBs since we generated 5 different KBs for each combination. The generated KBs are in DLGP format [4], the tool for generating the KBs and the tool for computing all the repairs are all available online at: https://gite.lirmm.fr/yun/Generated-KB. 4.1

Evaluation Results

The algorithm for finding the repairs was launched on each KB with a timeout after 10 min. We recorded both the time for finding each repair and the time for the conflict computation. We made the following observations: (1) On KBs with 10 facts, the tool successfully terminated with a total median time of 418 ms and an average of 11.7 repairs. Although all instances with 100, 1000 and 10000 facts did not finish before the timeout, the program returned an average number of 86.5 repairs by KB and this number seems to be independent of the number of facts. (2) The last repairs are harder to find: Across all KBs, the first repair takes an average of 763 ms to be computed whereas the last found repair took an average of 64461 ms. Lastly, finding all the conflicts takes a small part of the total computational time as it amounts to only an average of 12.8% of the total time. (3) On the one hand, the parameter c6 does not impact the average number of repairs (66.8 with c6 = 1 and 68.7 with c6 = 3) but the median time for finding the conflicts does slightly increase (479 ms with c6 = 1 and 800 ms with c6 = 3). On the other hand, we noticed a sharp increase in the average time for finding the conflicts when the parameter N3 is increased (888.1 ms for N3 = 20, 1477.5 ms for N3 = 40 and 3737.7 ms for N3 = 60). 4.2

Conclusion

We showed an efficient incremental algorithm that allows for some or all repairs computation and empirically evaluated the proposed algorithm with a benchmark on inconsistent KBs expressed using Datalog±. We empirically showed

An Incremental Algorithm for Computing All Repairs

45

that our approach is able to find the first repairs after a reasonable amount of time. We argue that our approach is useful for applications where an enumeration, even partial, of the repairs is necessary. For example, in life science applications such as biodegradable packaging selection [24] or wheat transformation [2] the repairs are used by the experts in order to enrich the KB with further information. In this setting, enumerating the set of repairs could be of practical value. Last but not least, let us also highlight that the paper also provides a knowledge base generator, a contribution in itself for the OBDA community. Acknowledgement. The second author acknowledges the support of the Docamex project, funded by the French Ministry of Agriculture.

References 1. Arenas, M., Bertossi, L.E., Chomicki, J.: Consistent query answers in inconsistent databases. In: Proceedings of the Eighteenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, Philadelphia, Pennsylvania, USA, 31 May–2 June 1999, pp. 68–79 (1999). https://doi.org/10.1145/303976.303983 2. Arioua, A., Croitoru, M., Buche, P.: DALEK: a tool for dialectical explanations in inconsistent knowledge bases. In: Computational Models of Argument - Proceedings of COMMA 2016, Potsdam, Germany, 12–16 September 2016, pp. 461–462 (2016). https://doi.org/10.3233/978-1-61499-686-6-461 3. Baget, J.F., et al.: A general modifier-based framework for inconsistency-tolerant query answering. In: Principles of Knowledge Representation and Reasoning: Proceedings of the Fifteenth International Conference, KR 2016, Cape Town, South Africa, 25–29 April 2016, pp. 513–516 (2016) 4. Baget, J.F., Gutierrez, A., Lecl`ere, M., Mugnier, M.L., Rocher, S., Sipieter, C.: DLGP: An extended Datalog Syntax for Existential Rules and Datalog± Version 2.0, June 2015 5. Baget, J.F., Lecl`ere, M., Mugnier, M.L., Salvat, E.: On rules with existential variables: walking the decidability line. Artif. Intell. 175(9–10), 1620–1654 (2011) 6. Beame, P., Luby, M.: Parallel search for maximal independence given minimal dependence. In: Proceedings of the First Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, California, USA, 22–24 January 1990, pp. 212–218 (1990) 7. Benferhat, S., Bouraoui, Z., Croitoru, M., Papini, O., Tabia, K.: Non-objection inference for inconsistency-tolerant query answering. In: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9–15 July 2016, pp. 3684–3690 (2016) 8. Bienvenu, M.: On the complexity of consistent query answering in the presence of simple ontologies. In: Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, Toronto, Ontario, Canada, 22–26 July 2012 (2012). http:// www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/view/4928 9. Boros, E., Elbassioni, K.M., Gurvich, V., Khachiyan, L.: An efficient incremental algorithm for generating all maximal independent sets in hypergraphs of bounded dimension. Parallel Process. Lett. 10(4), 253–266 (2000)

46

B. Yun and M. Croitoru

10. Bourgaux, C.: Inconsistency Handling in Ontology-Mediated Query Answering. Ph.D. thesis, Universit´e Paris-Saclay, Paris, September 2016. https://tel.archivesouvertes.fr/tel-01378723 11. Cal`ı, A., Gottlob, G., Lukasiewicz, T., Marnette, B., Pieris, A.: Datalog±: a family of logical knowledge representation and query languages for new applications. In: Proceedings of the 25th Annual IEEE Symposium on Logic in Computer Science, LICS 2010, Edinburgh, United Kingdom, 11–14 July 2010, pp. 228–242 (2010). https://doi.org/10.1109/LICS.2010.27 12. Croitoru, M., Vesic, S.: What can argumentation do for inconsistent ontology query answering? In: Liu, W., Subrahmanian, V.S., Wijsen, J. (eds.) SUM 2013. LNCS (LNAI), vol. 8078, pp. 15–29. Springer, Heidelberg (2013). https://doi.org/ 10.1007/978-3-642-40381-1 2 13. Dahlhaus, E., Karpinski, M., Kelsen, P.: An efficient parallel algorithm for computing a maximal independent set in a hypergraph of dimension 3. Inf. Process. Lett. 42(6), 309–313 (1992). https://doi.org/10.1016/0020-0190(92)90228-N 14. Gallo, G., Longo, G., Pallottino, S.: Directed hypergraphs and applications. Discrete Appl. Math. 42(2), 177–201 (1993). https://doi.org/10.1016/0166218X(93)90045-P 15. Goldberg, M.K., Spencer, T.H.: A new parallel algorithm for the maximal independent set problem. SIAM J. Comput. 18(2), 419–427 (1989). https://doi.org/ 10.1137/0218029 ´ Mazure, B., Piette, C.: Boosting a complete technique to find MSS 16. Gr´egoire, E., and MUS thanks to a local search oracle. In: IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, 6–12 January 2007, pp. 2300–2305 (2007). http://ijcai.org/Proceedings/07/Papers/370. pdf ´ Mazure, B., Piette, C.: Using local search to find MSSes and MUSes. 17. Gr´egoire, E., Eur. J. Oper. Res. 199(3), 640–646 (2009). https://doi.org/10.1016/j.ejor.2007.06. 066 18. Hecham, A.: Defeasible reasoning for existential rules. (Raisonnement defaisable dans les r`egles existentielles). Ph.D. thesis (2018). https://tel.archives-ouvertes.fr/ tel-01904558 19. Hecham, A., Bisquert, P., Croitoru, M.: On the chase for all provenance paths with existential rules. In: Proceedings of the Rules and Reasoning - International Joint Conference, RuleML+RR 2017, London, UK, 12–15 July 2017, pp. 135–150 (2017) 20. Kelsen, P.: On the parallel complexity of computing a maximal independent set in a hypergraph. In: Proceedings of the 24th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, 4–6 May 1992, pp. 339–350 (1992). https://doi.org/10.1145/129712.129745 21. Lembo, D., Lenzerini, M., Rosati, R., Ruzzi, M., Savo, D.F.: Inconsistency-tolerant semantics for description logics. In: Hitzler, P., Lukasiewicz, T. (eds.) RR 2010. LNCS, vol. 6333, pp. 103–117. Springer, Heidelberg (2010). https://doi.org/10. 1007/978-3-642-15918-3 9 22. Poggi, A., Lembo, D., Calvanese, D., Giacomo, G.D., Lenzerini, M., Rosati, R.: Linking data to ontologies. J. Data Semant. 10, 133–173 (2008) 23. Rocher, S.: Interrogation tol´erante aux incoh´erences, Technical report. Universit´e de Montpellier (2013) 24. Tamani, N., et al.: Eco-efficient packaging material selection for fresh produce: industrial session. In: Hernandez, N., J¨ aschke, R., Croitoru, M. (eds.) ICCS 2014. LNCS (LNAI), vol. 8577, pp. 305–310. Springer, Cham (2014). https://doi.org/10. 1007/978-3-319-08389-6 27

An Incremental Algorithm for Computing All Repairs

47

25. Yun, B., Bisquert, P., Buche, P., Croitoru, M.: Arguing about end-of-life of packagings: preferences to the rescue. In: Garoufallou, E., Subirats Coll, I., Stellato, A., Greenberg, J. (eds.) MTSR 2016. CCIS, vol. 672, pp. 119–131. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-49157-8 10 26. Yun, B., Vesic, S., Croitoru, M., Bisquert, P.: Inconsistency measures for repair semantics in OBDA. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, Stockholm, Sweden, 13–19 July 2018, pp. 1977–1983 (2018). https://doi.org/10.24963/ijcai.2018/273

Knowledge-Based Matching of n-ary Tuples Pierre Monnin(B) , Miguel Couceiro , Amedeo Napoli, and Adrien Coulet Universit´e de Lorraine, CNRS, Inria, LORIA, 54000 Nancy, France {pierre.monnin,miguel.couceiro,amedeo.napoli,adrien.coulet}@loria.fr

Abstract. An increasing number of data and knowledge sources are accessible by human and software agents in the expanding Semantic Web. Sources may differ in granularity or completeness, and thus be complementary. Consequently, they should be reconciled in order to unlock the full potential of their conjoint knowledge. In particular, units should be matched within and across sources, and their level of relatedness should be classified into equivalent, more specific, or similar. This task is challenging since knowledge units can be heterogeneously represented in sources (e.g., in terms of vocabularies). In this paper, we focus on matching n-ary tuples in a knowledge base with a rule-based methodology. To alleviate heterogeneity issues, we rely on domain knowledge expressed by ontologies. We tested our method on the biomedical domain of pharmacogenomics by searching alignments among 50,435 nary tuples from four different real-world sources. Results highlight noteworthy agreements and particularities within and across sources. Keywords: Alignment

1

· Matching · n-ary tuple · Order · Ontology

Introduction

In the Semantic Web [4], data or knowledge sources often describe similar units but may differ in quality, completeness, granularity, and vocabularies. Unlocking the full potential of the knowledge that these sources conjointly express requires matching equivalent, more specific, or similar knowledge units within and across sources. This matching process results in alignments that enable the reconciliation of these sources, i.e., the harmonization of their content [7]. Such a reconciliation then provides a consolidated view of a domain that is useful in many applications, e.g., in knowledge fusion and fact-checking. Here, we illustrate the interest of such a matching process to reconcile knowledge within the biomedical domain of pharmacogenomics (PGx), which studies the influence of genetic factors on drug response phenotypes. PGx knowledge originates from distinct sources: reference databases such as PharmGKB, Supported by the PractiKPharma project, founded by the French National Research Agency (ANR) under Grant ANR15-CE23-0028, by the IDEX “Lorraine Universit´e d’Excellence” (15-IDEX-0004), and by the Snowball Inria Associate Team. c Springer Nature Switzerland AG 2020  M. Alam et al. (Eds.): ICCS 2020, LNAI 12277, pp. 48–56, 2020. https://doi.org/10.1007/978-3-030-57855-8_4

Knowledge-Based Matching of n-ary Tuples

49

biomedical literature, or the mining of Electronic Health Records of hospitals. Knowledge represented in these sources may differ in levels of validation, completeness, and granularity. Consequently, reconciling these sources would provide a consolidated view on the knowledge of this domain, certainly beneficial in precision medicine, which aims at tailoring drug treatments to patients to reduce adverse effects and maximize drug efficacy [5,6]. PGx knowledge consists of n-ary relationships, here represented as tuples relating sets of drugs, sets of genomic variations, and sets of phenotypes. Such an n-ary tuple states that a patient being treated with the specified sets of drugs, while having the specified genomic variations will be more likely to experience the given phenotypes, e.g., adverse effects. For example, Fig. 1 depicts the tuple pgt 1, which states that patients treated with warfarin may experience cardiovascular diseases because of variations in the CYP2C9 gene. If a source contained the same tuple but with the genetic factor unknown, then it should be identified as less specific than pgt 1. Conversely, if a source contained the same tuple but with myocardial infarction as phenotype, then it should be identified as more specific than pgt 1. CYP2C9 warfarin

causes causes

pgt 1

causes

cardiovascular diseases

Fig. 1. Representation of a PGx relationship between gene CYP2C9, drug warfarin and phenotype cardiovascular diseases. It can be seen as an n-ary tuple pgt 1 = ({warfarin} , {CYP2C9} , {cardiovascular diseases}). This tuple is reified through the individual pgt 1, connecting its components through the causes predicate.

Motivated by this application, we propose a general and mathematically wellfounded methodology to match n-ary tuples. Precisely, given two n-ary tuples, we aim at deciding on their relatedness among five levels such as being equivalent or more specific. We suppose that such tuples are represented within a knowledge base that is expressed using Semantic Web standards. In such standards, only binary predicates exist, which requires the reification of n-ary tuples to represent them: tuples are individualized and linked to their components by predicates (see Fig. 1) [12]. In these knowledge bases, entities can also be associated with ontologies, i.e., formal representations of a domain [9]. Ontologies consist of classes and predicates, partially ordered by the subsumption relation, denoted by . This relation states that a class (respectively a predicate) is more specific than another. The process of matching n-ary tuples appears naturally in the scope of ontology matching [7], i.e., finding equivalences or subsumptions between classes, predicates, or instances of two ontologies. Here, we match individuals representing reified n-ary tuples, which is somewhat related to instance matching and the extraction of linkkeys [2]. However, we allow ourselves to state that a tuple is more specific than another, which is unusual in instance matching but common when matching classes or predicates with systems such as PARIS [14] and

50

P. Monnin et al.

AMIE [8]. Besides, to the best of our knowledge, works available in the literature do not deal with the complex task of matching n-ary tuples with potentially unknown arguments formed by sets of individuals. See Appendix A for further details. In our approach, we assume that the tuples to match have the same arity, the same indices for their arguments, and that they are reified with the same predicates and classes. Arguments are formed by sets of individuals (no literal values) and may be unknown. This matching task thus reduces to comparing each argument of the tuples and aggregating these comparisons to establish their level of relatedness. We achieve this process by defining five general rules, designed to satisfy some desired properties such as transitivity and symmetry. To tackle the heterogeneity in the representation of tuples, we enrich this structure-based comparison with domain knowledge, e.g., the hierarchy of ontology classes and links between individuals. This paper is organized as follows. In Sect. 2, we formalize the problem of matching n-ary tuples. To tackle it, we propose two preorders in Sect. 3 to compare sets of individuals by considering domain knowledge: links between individuals, instantiations, and subsumptions. These preorders are used in Sect. 4 to define matching rules that establish the level of relatedness between two nary tuples. These rules are applied to PGx knowledge in Sect. 5. We discuss our results in Sect. 6 and present some directions of future work in Sect. 7. Appendices are available online (https://arxiv.org/abs/2002.08103).

2

Problem Setting

We aim at matching n-ary tuples represented within a knowledge base K, i.e., we aim at determining the relatedness level of two tuples t1 and t2 (e.g., whether they are equivalent, more specific, or similar). K is represented in the formalism of Description Logics (DL) [3] and thus consists of a TBox and an ABox. Precisely, we consider a set T of n-ary tuples to match. This set is formed by tuples whose matching makes sense in a given application. For example, in our use-case, T consists of all PGx tuples from the considered sources. All tuples in T have the same arity n, and their arguments are sets of individuals of K. Such a tuple t can be formally represented as t = (π1 (t), . . . , πn (t)), where πi : T → 2Δ is a mapping that associates each tuple t to its i-th argument πi (t), which is a set of individuals included in the domain of interpretation Δ. The index set is the same for all tuples in T . Tuples come from potentially noisy sources and some arguments may be missing. As K verifies the Open World Assumption, such arguments that are not explicitly specified as empty, can only be considered unknown and they are set to Δ to express the fact that all individuals may apply. To illustrate, pgt 1 in Fig. 1 could be seen as a ternary tuple pgt 1 = ({warfarin} , {CYP2C9} , {cardiovascular diseases}), where arguments respectively represent the sets of involved drugs, genetic factors, and phenotypes. In view of our formalism, matching two n-ary tuples t1 and t2 comes down to comparing their arguments πi (t1 ) and πi (t2 ) for each i ∈ {1, . . . , n}. For instance,
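
A hypothetical minimal encoding of this setting is sketched below: each tuple is a fixed-arity vector of sets of individuals, an unknown argument is represented by a dedicated DELTA sentinel standing for the whole domain, and the argument-wise inclusion test treats DELTA as the largest possible set. All names and values are illustrative, not part of the formalism.

# Hypothetical mini-encoding of n-ary tuples (n = 3: drugs, genetic factors,
# phenotypes); DELTA stands for an unknown argument, i.e. "all individuals".
DELTA = frozenset({"__ALL__"})

pgt_1 = (frozenset({"warfarin"}), frozenset({"CYP2C9"}),
         frozenset({"cardiovascular_diseases"}))
pgt_2 = (frozenset({"warfarin"}), DELTA,                      # genetic factor unknown
         frozenset({"cardiovascular_diseases"}))

def arg_included(a1, a2):
    """Set inclusion with DELTA treated as the whole domain."""
    return a2 is DELTA or (a1 is not DELTA and a1 <= a2)

# pgt_1 is at least as specific as pgt_2 on every argument:
print(all(arg_included(x, y) for x, y in zip(pgt_1, pgt_2)))  # True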

Knowledge-Based Matching of n-ary Tuples

51

if πi (t1 ) = πi (t2 ) for all i, then t1 and t2 are representing the same knowledge unit, highlighting an agreement between their sources. In the next section, we propose other tests between arguments that are based on domain knowledge.

3

Ontology-Based Preorders

As previously illustrated, the matching of two n-ary tuples t1 and t2 relies on the comparison of each of their arguments πi (t1 ) and πi (t2 ), which are sets of individuals. Such a comparison can be achieved by testing their inclusion or equality. Thus, if πi (t1 ) ⊆ πi (t2 ), then πi (t1 ) can be considered as more specific than πi (t2 ). It is noteworthy that testing inclusion or equality implicitly considers owl:sameAs links that indicate identical individuals. For example, the comparison of {e1 } with {e2 } while knowing that owl:sameAs(e1 , e2 ) results in an equality. However, additional domain knowledge can be considered to help tackle the heterogeneous representation of tuples. For instance, some individuals can be part of others. Individuals may also instantiate different ontological classes, which are themselves comparable through subsumption. To consider this domain knowledge in the matching process, we propose two preorders, i.e., reflexive and transitive binary relations. 3.1

Preorder p Based on Links Between Individuals

Several links may associate individuals in πi (tj ) with other individuals in K. Some links involve a transitive and reflexive predicate (i.e., a preorder). Then, for each such predicate p, we define a preorder p parameterized by p as follows1 : πi (t1 ) p πi (t2 ) ⇔ ∀e1 ∈ πi (t1 ), ∃e2 ∈ πi (t2 ), K |= p(e1 , e2 )

(1)

Note that, from the reflexivity of p and the use of quantifiers ∀ and ∃, πi (t1 ) ⊆ πi (t2 ) implies πi (t1 ) p πi (t2 ). The equivalence relation ∼p associated with p is defined as usual by: πi (t1 ) ∼p πi (t2 ) ⇔ πi (t1 ) p πi (t2 ) and πi (t2 ) p πi (t1 ) 3.2

(2)

Preorder O Based on Instantiation and Subsumption

The second preorder we propose takes into account classes of an ontology O ordered by subsumption and instantiated by individuals in πi (tj ). We denote by classes(O) the set of all classes of O. As it is standard in DL, denotes the largest class in O. Given an individual e, we denote by ci(O, e) the set of classes of O instantiated by e and distinct from , i.e., ci(O, e) = {C ∈ classes(O)\ { } | K |= C(e)} . 1

See Appendix B and Appendix C for the proof and examples.

52

P. Monnin et al.

Note that ci(O, e) may be empty. We explicitly exclude from ci(O, e) since K may be incomplete. Indeed, individuals may lack instantiations of specific classes but instantiate by default. Thus, is excluded to prevent O from inadequately considering these individuals more general than individuals instantiating classes other than 2 . Given C = {C1 , C2 , . . . , Ck } ⊆ classes(O), we denote by msc(C) the set of the most specific classes of C, i.e., msc(C) = {C ∈ C | D ∈ C, D  C}3 . Similarly, we denote by msci(O, e) the set of the most specific classes of O, except

, instantiated by an individual e, i.e., msci(O, e) = msc(ci(O, e)). Given an ontology O, we define the preorder O based on set inclusion and subsumption as follows4 :   msci(O, e1 ) = ∅ ∧ πi (t1 ) O πi (t2 ) ⇔ ∀e1 ∈ πi (t1 ), e1 ∈ πi (t2 )    (3a)

∀C1 ∈ msci(O, e1 ), ∃e2 ∈ πi (t2 ), ∃C2 ∈ msci(O, e2 ), C1  C2   

 (3)

(3b)

Clearly, if πi (t1 ) is more specific than πi (t2 ) and e1 ∈ πi (t1 ), then (3a) e1 ∈ πi (t2 ), or (3b) all the most specific classes instantiated by e1 are subsumed by at least one of the most specific classes instantiated by individuals in πi (t2 ). Thus individuals in πi (t2 ) can be seen as “more general” than those in πi (t1 ). As before, O induces the equivalence relation ∼O defined by: πi (t1 ) ∼O πi (t2 ) ⇔ πi (t1 ) O πi (t2 ) and πi (t2 ) O πi (t1 )

(4)

The preorder O can be seen as parameterized by the ontology O, allowing to consider different parts of the TBox of K for each argument πi (tj ), if needed.

4

Using Preorders to Define Matching Rules

Let t1 , t2 ∈ T be two n-ary tuples to match. We assume that

each argument i ∈ {1, . . . , n} is endowed with a preorder i ∈ ⊆, p , O that enables the comparison of πi (t1 ) and πi (t2 ). We can define rules that aggregate such comparisons for all i ∈ {1, . . . , n} and establish the relatedness level of t1 and t2 . Hence, our matching approach comes down to applying these rules to every ordered pair (t1 , t2 ) of n-ary tuples from T . Here, we propose the following five relatedness levels: =, ∼, , ≶, and ∝, from the strongest to the weakest. Accordingly, we propose five matching rules of the form B ⇒ H, where B expresses the conditions of the rule, testing equalities, equivalences, or inequalities between arguments of t1 and t2 . Classically, these conditions can be combined using conjunctions or disjunctions, respectively 2 3 4

See Appendix D for a detailed example. D  C means that D  C and D ≡ C. See Appendix E and Appendix F for the proof and examples.

Knowledge-Based Matching of n-ary Tuples

53

denoted by ∧ and ∨. If B holds, H expresses the relatedness between t1 and t2 to add to K. Rules are applied from Rule 1 to Rule 5. Once conditions in B hold for a rule, H is added to K and the following rules are discarded, meaning that at most one relatedness level is added to K for each pair of tuples. When no rule can be applied, t1 and t2 are considered incomparable and nothing is added to K. The first four rules are the following: Rule 1. ∀i ∈ {1, . . . , n} , πi (t1 ) = πi (t2 ) ⇒ t1 = t2 Rule 2. ∀i ∈ {1, . . . , n} , πi (t1 ) ∼i πi (t2 ) ⇒ t1 ∼ t2 Rule 3. ∀i ∈ {1, . . . , n} , πi (t1 ) i πi (t2 ) ⇒ t1  t2 Rule 4. ∀i ∈ {1, . . . , n} , [ (πi (t1 ) = πi (t2 )) ∨ (πi (t2 ) = Δ ∧ πi (t1 ) i πi (t2 )) ∨ (πi (t1 ) = Δ ∧ πi (t2 ) i πi (t1 )) ] ⇒ t1 ≶ t2 Rule 1 states that t1 and t2 are identical (=) whenever t1 and t2 coincide on each argument. Rule 2 states that t1 and t2 are equivalent (∼) whenever each argument i ∈ {1, . . . , n} of t1 is equivalent to the same argument of t2 . Rule 3 states that t1 is more specific than t2 () whenever each argument i ∈ {1, . . . , n} of t1 is more specific than the same argument of t2 w.r.t. i . Rule 4 states that t1 and t2 have comparable arguments (≶) whenever they have the same specified arguments (i.e., different from Δ), and these arguments are comparable w.r.t. i . Rules 1 to 3 satisfy the transitivity property. Additionally, Rules 1, 2, and 4 satisfy the symmetry property. In Rules 1 to 4, comparisons are made argument-wise. However, other relatedness cases may require to aggregate over arguments. For example, we may want to compare all individuals involved in two tuples, regardless of their arguments. Alternatively, we may want to consider two tuples as weakly related if their arguments have a specified proportion of comparable individuals. To this aim, we propose Rule 5. Let I = {I1 , . . . , Im } be a partition of {1, . . . , n}, defined by the user at the beginning of the matching process. We define the aggregated argument Ik of tj as the union of all specified πi (tj ) (i.e., different from Δ) for i ∈ Ik . Formally, πIk (tj ) = πi (tj ). i∈Ik πi (tj )=Δ

We assume that

each aggregated argument Ik ∈ I is endowed with a preorder Ik ∈ ⊆, p , O . We denote by SSD(πIk (t1 ), πIk (t2 )) the semantic set difference between πIk (t1 ) and πIk (t2 ), i.e., SSD(πIk (t1 ), πIk (t2 )) = {e1 | e1 ∈ πIk (t1 ) and {e1 } Ik πIk (t2 )} . Intuitively, it is the set of elements in πIk (t1 ) preventing it from being more specific than πIk (t2 ) w.r.t. Ik . We define the operator ∝Ik as follows: 1 if πIk (t1 ) Ik πIk (t2 ) or πIk (t2 ) Ik πIk (t1 ) πIk (t1 ) ∝Ik πIk (t2 ) = |SSD(πIk (t1 ),πIk (t2 )) ∪ SSD(πIk (t2 ),πIk (t1 ))| otherwise 1− |πI (t1 ) ∪ πI (t2 )| k

k

54

P. Monnin et al.

This operator returns a number measuring the similarity between πIk (t1 ) and πIk (t2 ). This number is equal to 1 if the two aggregated arguments are comparable. Otherwise, it is equal to 1 minus the proportion of incomparable elements. We denote by I=Δ (t1 , t2 ) = {Ik | Ik ∈ I and πIk (t1 ) = Δ and πIk (t2 ) = Δ} the set of aggregated arguments that are specified for both t1 and t2 (i.e., different from Δ). Then, Rule 5 is defined as follows: Rule 5. Let I = {I1 , . . . , Im } be a partition of {1, . . . , n}, and let γ=Δ , γS , and γC be three parameters, all fixed at the beginning of the matching process.

[ |I≠Δ(t1, t2)| ≥ γ≠Δ ] ∧ [ ( ∀Ik ∈ I≠Δ(t1, t2), πIk(t1) ∝Ik πIk(t2) ≥ γS ) ∨ ( Σ_{Ik ∈ I≠Δ(t1, t2)} 1( πIk(t1) ∝Ik πIk(t2) = 1 ) ≥ γC ) ] ⇒ t1 ∝ t2

Rule 5 is applicable if at least γ≠Δ aggregated arguments are specified for both t1 and t2. Then, t1 and t2 are weakly related (∝) whenever all these specified aggregated arguments have a similarity of at least γS or when at least γC of them are comparable. Notice that ∝ is symmetric.
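To make the aggregated comparison concrete, the following is a minimal C++ sketch of the ∝Ik operator and of the Rule 5 test. It is an illustration only, not the authors' implementation: individuals are plain strings, the preorder ⪯Ik is abstracted as a user-supplied predicate (whether a single element is subsumed by a set), and the names gammaNotDelta, gammaS and gammaC are hypothetical parameter names.

#include <algorithm>
#include <cstddef>
#include <functional>
#include <set>
#include <string>
#include <vector>

using Elem = std::string;
using Agg  = std::set<Elem>;                       // an aggregated argument

// does {e} ⪯_Ik B hold? (supplied by the knowledge base: subsumption, part-of, ...)
using LeqPred = std::function<bool(const Elem&, const Agg&)>;

// semantic set difference: elements of a preventing a from being more specific than b
Agg ssd(const Agg& a, const Agg& b, const LeqPred& leq) {
    Agg out;
    for (const Elem& e : a)
        if (!leq(e, b)) out.insert(e);
    return out;
}

// the ∝_Ik operator: 1 if one side is more specific than the other,
// otherwise 1 minus the proportion of incomparable elements
double propto(const Agg& a, const Agg& b, const LeqPred& leq) {
    Agg sab = ssd(a, b, leq), sba = ssd(b, a, leq);
    if (sab.empty() || sba.empty()) return 1.0;    // a ⪯ b or b ⪯ a
    Agg symm, uni;
    std::set_union(sab.begin(), sab.end(), sba.begin(), sba.end(),
                   std::inserter(symm, symm.end()));
    std::set_union(a.begin(), a.end(), b.begin(), b.end(),
                   std::inserter(uni, uni.end()));
    return 1.0 - double(symm.size()) / double(uni.size());
}

// Rule 5 over the aggregated arguments that are specified in both tuples
bool rule5(const std::vector<std::pair<Agg, Agg>>& specified,   // pairs (π_Ik(t1), π_Ik(t2))
           const LeqPred& leq,
           std::size_t gammaNotDelta, double gammaS, std::size_t gammaC) {
    if (specified.size() < gammaNotDelta) return false;
    bool allAboveS = true;
    std::size_t comparable = 0;
    for (const auto& [a, b] : specified) {
        double s = propto(a, b, leq);
        if (s < gammaS) allAboveS = false;
        if (s == 1.0) ++comparable;
    }
    return allAboveS || comparable >= gammaC;
}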

5 Application to Pharmacogenomic Knowledge

Our methodology was motivated by the problem of matching pharmacogenomic (PGx) tuples. Accordingly, we tested this methodology on PGxLOD (https://pgxlod.loria.fr) [10], a knowledge base represented in the ALHI Description Logic [3]. In PGxLOD, 50,435 PGx tuples were integrated from four different sources: (i) 3,650 tuples from structured data of PharmGKB, (ii) 10,240 tuples from textual portions of PharmGKB called clinical annotations, (iii) 36,535 tuples from biomedical literature, and (iv) 10 tuples from results found in EHR studies. We obtained the matching results summarized in Table 1 and discussed in Sect. 6. Details about formalization, code and parameters are given in Appendix G.

6 Discussion

In Table 1, we observe only a few inter-source links, which may be caused by missing mappings between the vocabularies used in sources. Indeed, our matching process requires these mappings to compare individuals represented with different vocabularies. This result underlines the relevance of enriching the knowledge base with ontology-to-ontology mappings. We also notice that Rule 5 generates more links than the other rules, which emphasizes the importance of weaker relatedness levels to align sources and overcome their heterogeneity. Some results were expected and therefore seem to validate our approach. For example, some tuples from the literature appear more general than those of PharmGKB (with 15 and 42 skos:broadMatch links). These links are a foreseen consequence of



Table 1. Number of links resulting from each rule. Links are generated between tuples of distinct sources or within the same source. PGKB stands for “PharmGKB”, sd for “structured data”, and ca for “clinical annotations”. As Rules 1, 2, 4, and 5 satisfy symmetry, links from t1 to t2 as well as from t2 to t1 are counted. Similarly, as Rules 1 to 3 satisfy transitivity, transitivity-induced links are counted. Regarding skos:broadMatch links, rows represent origins and columns represent destinations.

Links from Rule 1 Encoded by owl:sameAs
              PGKB (sd)   PGKB (ca)   Literature   EHRs
  PGKB (sd)   166         0           0            0
  PGKB (ca)   0           10,134      0            0
  Literature  0           0           122,646      0
  EHRs        0           0           0            0

Links from Rule 2 Encoded by skos:closeMatch
              PGKB (sd)   PGKB (ca)   Literature   EHRs
  PGKB (sd)   0           5           0            0
  PGKB (ca)   5           1,366       0            0
  Literature  0           0           16,692       0
  EHRs        0           0           0            0

Links from Rule 3 Encoded by skos:broadMatch
              PGKB (sd)   PGKB (ca)   Literature   EHRs
  PGKB (sd)   87          3           15           0
  PGKB (ca)   9,325       605         42           0
  Literature  0           0           75,138       0
  EHRs        0           0           0            0

Links from Rule 4 Encoded by skos:relatedMatch
              PGKB (sd)   PGKB (ca)   Literature   EHRs
  PGKB (sd)   20          0           0            0
  PGKB (ca)   0           110         0            0
  Literature  0           0           18,050       0
  EHRs        0           0           0            0

Links from Rule 5 Encoded by skos:related
              PGKB (sd)   PGKB (ca)   Literature   EHRs
  PGKB (sd)   100,596     287,670     414          2
  PGKB (ca)   287,670     706,270     1,103        19
  Literature  414         1,103       1,082,074    15
  EHRs        2           19          15           0

the completion process of PharmGKB. Indeed, curators achieve this completion after a literature review, inevitably leading to tuples more specific or equivalent to the ones in reviewed articles. Interestingly, our methodology could ease such a review by pointing out articles describing similar tuples. Clinical annotations of PharmGKB are in several cases more specific than structured data (9,325 skos:broadMatch links). This is also expected as structured data are a broad-level summary of more complex phenotypes detailed in clinical annotations.

Regarding our method, using rules is somewhat at odds with the current machine learning trend [1,11,13]. However, writing simple and well-founded rules constitutes a valid first step before applying machine learning approaches. Indeed, such explicit rules enable generating a "silver" standard for matching, which may be useful to either train or evaluate supervised approaches. Rules are readable and may thus be analyzed and confirmed by domain experts, and they provide a basis of explanation for the matching results. Additionally, our rules are simple enough to be generally true and useful in other domains. By relying on instantiated classes and links between individuals, we illustrate how domain knowledge and reasoning mechanisms can serve a structure-based matching. In future works, conditions under which the preorders ⪯p and ⪯O could be merged into one unique preorder deserve a deeper study. See Appendix H for further discussion.

7 Conclusion

In this paper, we proposed a rule-based approach to establish the relatedness level of n-ary tuples among five proposed levels. It relies on rules and preorders that leverage domain knowledge and reasoning capabilities. We applied our methodology to the real-world use case of matching pharmacogenomic relationships, and obtained insightful results. In the future, we intend to compare and integrate our purely symbolic approach with ML methodologies.

References
1. Alam, M., et al.: Reconciling event-based knowledge through RDF2VEC. In: Proceedings of HybridSemStats@ISWC 2017. CEUR Workshop Proceedings (2017)
2. Atencia, M., David, J., Euzenat, J.: Data interlinking through robust linkkey extraction. In: Proceedings of ECAI 2014. Frontiers in Artificial Intelligence and Applications, vol. 263, pp. 15–20 (2014)
3. Baader, F., et al. (eds.): The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press, Cambridge (2003)
4. Berners-Lee, T., Hendler, J., Lassila, O., et al.: The semantic web. Sci. Am. 284(5), 28–37 (2001)
5. Caudle, K.E., et al.: Incorporation of pharmacogenomics into routine clinical practice: the clinical pharmacogenetics implementation consortium (CPIC) guideline development process. Curr. Drug Metab. 15(2), 209–217 (2014)
6. Coulet, A., Smaïl-Tabbone, M.: Mining electronic health records to validate knowledge in pharmacogenomics. ERCIM News 2016(104) (2016)
7. Euzenat, J., Shvaiko, P.: Ontology Matching, 2nd edn. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-38721-0
8. Galárraga, L.A., Preda, N., Suchanek, F.M.: Mining rules to align knowledge bases. In: Proceedings of AKBC@CIKM 2013, pp. 43–48. ACM (2013)
9. Gruber, T.R.: A translation approach to portable ontology specifications. Knowl. Acquis. 5(2), 199–220 (1993)
10. Monnin, P., et al.: PGxO and PGxLOD: a reconciliation of pharmacogenomic knowledge of various provenances, enabling further comparison. BMC Bioinform. 20(Suppl. 4), 139 (2019)
11. Nickel, M., Murphy, K., Tresp, V., Gabrilovich, E.: A review of relational machine learning for knowledge graphs: from multi-relational link prediction to automated knowledge graph construction. Proc. IEEE 104(1), 11–33 (2016)
12. Noy, N., Rector, A., Hayes, P., Welty, C.: Defining N-ary relations on the semantic web. W3C Work. Group Note 12(4) (2006)
13. Ristoski, P., Paulheim, H.: RDF2Vec: RDF graph embeddings for data mining. In: Groth, P., Simperl, E., Gray, A., Sabou, M., Krötzsch, M., Lecue, F., Flöck, F., Gil, Y. (eds.) ISWC 2016. LNCS, vol. 9981, pp. 498–514. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46523-4_30
14. Suchanek, F.M., Abiteboul, S., Senellart, P.: PARIS: probabilistic alignment of relations, instances, and schema. PVLDB 5(3), 157–168 (2011)

Conceptual Structures

Some Programming Optimizations for Computing Formal Concepts

Simon Andrews
Conceptual Structures Research Group, Department of Computing, College of Business, Technology and Engineering, The Industry and Innovation Research Institute, Sheffield Hallam University, Sheffield, UK
[email protected]

Abstract. This paper describes in detail some optimization approaches taken to improve the efficiency of computing formal concepts. In particular, it describes the use and manipulation of bit-arrays to represent FCA structures and carry out the typical operations undertaken in computing formal concepts, thus providing data structures that are both memory-efficient and time-saving. The paper also examines the issues and compromises involved in computing and storing formal concepts, describing a number of data structures that illustrate the classical trade-off between memory footprint and code efficiency. Given that there has been limited publication of these programmatical aspects, these optimizations will be useful to programmers in this area and also to any programmers interested in optimizing software that implements Boolean data structures. The optimizations are shown to significantly increase performance by comparing an unoptimized implementation with the optimized one.

1 Introduction

Although there have been a number of advances and variations of the Close-By-One algorithm [14] for computing formal concepts, including [4,5,12,13], optimization approaches used when implementing such algorithms have not been described in detail. Mathematical and algorithmic aspects of FCA have been well covered, for example in [7,9,15], but less attention has been paid to programming. Using a bit-array to represent a formal context has previously been reported [5,11], but without the implementation details presented here. Providing detailed code for these optimizations will be useful to programmers in this area and also to any programmers interested in optimizing software that implements and manipulates Boolean data structures. Thus, this paper sets out to describe and explain these optimizations, with example code, and to explore the classical efficiency trade-offs between memory and speed in a CbO context. As an example CbO-type algorithm, this paper makes use of In-Close2 as presented in [6]. However, the optimization approaches detailed here should be generalizable for most, if not all, algorithms that compute formal concepts. The programming language chosen is C++; it is often the language of choice for efficient coding as it facilitates low level programming and its compilers are extremely adept at producing efficient assembler.

2 Formal Concepts

A description of formal concepts [9] begins with a set of objects X and a set of attributes Y. A binary relation I ⊆ X × Y is called the formal context. If x ∈ X and y ∈ Y then xIy says that object x has attribute y. For a set of objects A ⊆ X, a derivation operator ↑ is defined to obtain the set of attributes common to the objects in A as follows:

A↑ := { y ∈ Y | ∀x ∈ A : xIy }.   (1)

Similarly, for a set of attributes B ⊆ Y, the ↓ operator is defined to obtain the set of objects common to the attributes in B as follows:

B↓ := { x ∈ X | ∀y ∈ B : xIy }.   (2)

(A, B) is a formal concept iff A↑ = B and B↓ = A. The relations A × B are then a closed set of pairs in I. In other words, a formal concept is a set of attributes and a set of objects such that all of the objects have all of the attributes and there are no other objects that have all of the attributes. Similarly, there are no other attributes that all the objects have. A is called the extent of the formal concept and B is called the intent of the formal concept. A formal context is typically represented as a cross table, with crosses indicating binary relations between objects (rows) and attributes (columns). The following is a simple example of a formal context:

    0 1 2 3 4
  a ×     × ×
  b   × × × ×
  c ×   ×
  d   × ×   ×

Formal concepts in a cross table can be visualized as closed rectangles of crosses, where the rows and columns in the rectangle are not necessarily contiguous. The formal concepts in the example context are:

Formal concepts in a cross table can be visualized as closed rectangles of crosses, where the rows and columns in the rectangle are not necessarily contiguous. The formal concepts in the example context are: C1 C2 C3 C4 C5

= ({a, b, c, d}, ∅) = ({a, c}, {0}) = (∅, {0, 1, 2, 3, 4}) = ({c}, {0, 2}) = ({a}, {0, 3, 4})

C6 = ({b}, {1, 2, 3, 4}) C7 = ({b, d}, {1, 2, 4}) C8 = ({b, c, d}, {2}) C9 = ({a, b}, {3, 4}) C10 = ({a, b, d}, {4})
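As an aside, the derivation operators on this small context are easy to compute directly with bit-wise operations, which is the idea the rest of the paper builds on. The following self-contained C++ snippet is an illustration only (not code from In-Close): it encodes each object row of the example context in one integer and computes A↑ and B↓.

#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    // one row per object a..d; bit j set means the object has attribute j
    std::vector<uint32_t> row = {
        0b11001,   // a: {0, 3, 4}
        0b11110,   // b: {1, 2, 3, 4}
        0b00101,   // c: {0, 2}
        0b10110    // d: {1, 2, 4}
    };
    // A-up: attributes common to the objects selected by the bitmask A
    auto up = [&](uint32_t A) {
        uint32_t B = 0b11111;                       // start with all attributes
        for (int i = 0; i < 4; ++i)
            if (A & (1u << i)) B &= row[i];
        return B;
    };
    // B-down: objects that have every attribute selected by the bitmask B
    auto down = [&](uint32_t B) {
        uint32_t A = 0;
        for (int i = 0; i < 4; ++i)
            if ((row[i] & B) == B) A |= (1u << i);
        return A;
    };
    uint32_t A = 0b00101;                           // objects {a, c}
    std::printf("A-up   = %#x\n", (unsigned)up(A));        // {0}    -> concept C2
    std::printf("B-down = %#x\n", (unsigned)down(up(A)));  // {a, c} -> closure of A
    return 0;
}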

For readers not familiar with Formal Concept Analysis, further background can be found in [17–19].

3 A Re-Cap of the In-Close2 Algorithm

In-Close2 [6] is a CbO variant that was ‘bred’ from In-Close [2] and FCbO [13,16] to combine the efficiencies of the partial closure canonicity test of In-Close with the full inheritance of the parent intent achieved by FCbO.


The In-Close2 algorithm is given below, with a line by line explanation, and is invoked with an initial (A, B) = (X, ∅) and initial attribute y = 0, where A is the extent of a concept, B is the intent and X is a set of objects such that A ⊆ X.

In-Close2
ComputeConceptsFrom((A, B), y)
 1  for j ← y upto n − 1 do
 2      if j ∉ B then
 3          C ← A ∩ {j}↓
 4          if A = C then
 5              B ← B ∪ {j}
 6          else
 7              if B ∩ Yj = C↑j then
 8                  PutInQueue(C, j)
 9  ProcessConcept((A, B))
10  while GetFromQueue(C, j) do
11      D ← B ∪ {j}
12      ComputeConceptsFrom((C, D), j + 1)

Line 1 - Iterate across the context, from starting attribute y up to attribute n − 1, where n is the number of attributes.
Line 2 - Skip inherited attributes.
Line 3 - Form an extent, C, by intersecting the current extent, A, with the next column of objects in the context.
Line 4 - If C = A, then...
Line 5 - ...add the current attribute j to the current intent being closed, B.
Line 7 - Otherwise, apply the partial-closure canonicity test to C (is this a new extent?). Note that Yj = {y ∈ Y | y < j}. Similarly, C↑j is C closed up to (but not including) j: C↑j = { y ∈ Yj | ∀x ∈ C : xIy }.
Line 8 - If the test is passed, place the new (child) extent, C, and the location where it was found, j, in a queue for later processing.
Line 9 - Pass concept (A, B) to the notional procedure ProcessConcept to process it in some way (for example, storing it in a set of concepts).
Line 10 - The queue is processed by obtaining each child extent and associated location from the queue.
Line 11 - Each new partial intent, D, inherits all the attributes from its completed parent intent, B, along with the attribute, j, where its extent was found.
Line 12 - Call ComputeConceptsFrom to compute child concepts from j + 1 and to complete the closure of D.
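Before moving to the bit-array implementation, it may help to see the pseudocode above as running code. The following compact C++ sketch uses STL sets instead of bit-arrays and no column sorting; it is an illustration of the algorithm only, not the optimized In-Close implementation discussed in the rest of the paper, and the container choices are purely for readability. Run on the example context of Sect. 2 it reports the ten concepts listed there.

#include <cstdio>
#include <set>
#include <utility>
#include <vector>

using Set = std::set<int>;
int n;                                 // number of attributes
std::vector<Set> col;                  // col[j] = objects having attribute j

Set intersect(const Set& a, const Set& b) {
    Set r;
    for (int x : a) if (b.count(x)) r.insert(x);
    return r;
}

// partial-closure canonicity test: B ∩ Yj = C↑j ?
// (every attribute k < j common to all of C must already be in B)
bool isCanonical(const Set& C, const Set& B, int j) {
    for (int k = 0; k < j; ++k) {
        if (B.count(k)) continue;
        bool allHaveK = true;
        for (int x : C) if (!col[k].count(x)) { allHaveK = false; break; }
        if (allHaveK) return false;
    }
    return true;
}

void computeConceptsFrom(Set A, Set B, int y) {
    std::vector<std::pair<Set, int>> queue;
    for (int j = y; j < n; ++j) {
        if (B.count(j)) continue;                  // line 2: skip inherited attributes
        Set C = intersect(A, col[j]);              // line 3
        if (C == A) B.insert(j);                   // lines 4-5
        else if (isCanonical(C, B, j))             // line 7
            queue.push_back({C, j});               // line 8
    }
    std::printf("concept: |extent| = %zu, |intent| = %zu\n", A.size(), B.size()); // line 9
    for (auto& [C, j] : queue) {                   // lines 10-12
        Set D = B; D.insert(j);
        computeConceptsFrom(C, D, j + 1);
    }
}

int main() {
    n = 5;
    col.assign(n, {});
    int rows[4][5] = {{1,0,0,1,1}, {0,1,1,1,1}, {1,0,1,0,0}, {0,1,1,0,1}};  // a, b, c, d
    Set X;
    for (int i = 0; i < 4; ++i) {
        X.insert(i);
        for (int j = 0; j < n; ++j) if (rows[i][j]) col[j].insert(i);
    }
    computeConceptsFrom(X, {}, 0);                 // prints the 10 concepts of Sect. 2
    return 0;
}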

4 Implementation of the Formal Context as a Bit Array

Common to most efficient implementations of CbO-type algorithms is the implementation of the formal context as a bit array, with each bit representing a Boolean ‘true/false’ cell in the cross-table. Such an approach leads to efficient computation in two ways: it allows for bit-wise operations to be performed over multiple cells in the context at the same time, and reducing the size of cells to bits allows a much larger portion of the context to be held in cache memory. So, for example, in a 64 bit architecture the formal context can be declared in the heap using C++ thus:

unsigned __int64 **context;

Once the number of objects, m, and number of attributes, n, are known, the required memory can be allocated thus:

/* create empty context */
// calculate size for attributes - 1 bit per attribute
nArray = (n-1)/64 + 1;
// create one dimension of the context
context = new unsigned __int64*[m];
for(i = 0; i < m; i++){
    // create 2nd dimension of the context
    context[i] = new unsigned __int64[nArray];
    // initialise the cells to zero (empty context)
    for(j = 0; j < nArray; j++)
        context[i][j] = 0;
}

When the data file is read, a cross for object i and attribute j is recorded by setting the corresponding bit:

context[i][(j>>6)] |= (1i64<<(j%64));

The bit shift operator, >>, provides fast indexing into the bit array: j>>6 shifts the bits in j rightwards by 6 bits, which is equivalent to integer division by 64 (2^6 = 64). This identifies the required 64 bit integer in the row of the bit array. The mod operator in C++ is % and the 64 bit literal integer representation of 1 is defined as 1i64 so, similarly, bit shifting 1i64 leftwards by j%64 places 1 at the required bit position in a 64 bit integer, with all other bits being zero. The C++ bit-wise logical ‘or’ operator, |, can then be used to set the required bit to 1 in the context. Bit shift operators and bit-wise logical operators are extremely efficient (typically taking only a single CPU clock-cycle to execute) and thus are fundamental to the fast manipulation of structures such as bit arrays. This becomes even more important in the main cycle of CbO-type algorithms and in efficient canonicity testing, as can be seen later in this paper. However, before considering the implementation of the algorithm itself, some consideration needs to be given to the data structures required for the storage and processing of formal concepts. Note that a temporary bit array, contextTemp, is used to initially store the context in column-wise form, because we next wish to physically sort the columns (see Sect. 5: Physical Sorting of Context Columns, below). After sorting, the context will be translated into row-wise form in the permanent bit-array, context, for the main computation.
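As a concrete illustration of this indexing scheme (not from the paper), suppose attribute j = 130 is to be set for object i: 130>>6 = 2 selects the third 64-bit integer of the row, and 130%64 = 2 selects bit 2 within it. A minimal check, written here with the portable uint64_t rather than the compiler-specific __int64/1i64 used in the listings:

#include <cstdint>
#include <cstdio>

int main() {
    const int j = 130;
    uint64_t row[3] = {0, 0, 0};                  // one context row of 3 x 64 = 192 attribute cells
    row[j >> 6] |= (uint64_t)1 << (j % 64);       // set the cross for attribute j
    bool set = row[j >> 6] & ((uint64_t)1 << (j % 64));
    std::printf("segment %d, bit %d, set = %d\n", j >> 6, j % 64, set);
    return 0;
}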

5 Physical Sorting of Context Columns

It is well-known that sorting context columns in ascending order of support (density of Xs) significantly improves the efficiency of computing formal concepts in CbO-type implementations. The typical approach is to sort pointers to the


columns, rather than the columns themselves, as this takes less time. However, in actuality, physically sorting the columns in large formal contexts provides better results, because physical sorting makes more efficient use of cache memory. If data is contiguous in RAM, cache lines will be filled with data that are more likely to be used when carrying out column intersections and when finding an already closed extent in the canonicity test. This can significantly reduce level one data cache misses, particularly when large contexts are being processed [3]. The overhead of physically sorting the context is outweighed by the saving in memory loads. Thus the column-wise bit array, contextTemp, is physically sorted by making use of an array of previously logically sorted column indexes, colOriginal:

/* rewrite sorted context (physical sort) */
int tempColNums[MAX_COLS];
int rank[MAX_COLS];
for(j = 0; j < n; j++){
    // use the sorted original col nos to index the physical sort
    tempColNums[j] = colOriginal[j];
    rank[colOriginal[j]] = j;            // record the ranking of the column
}
for(j = 0; j < n - 1; j++){
    for(i = 0; i < mArray; i++){
        unsigned __int64 temp = contextTemp[j][i];
        contextTemp[j][i] = contextTemp[tempColNums[j]][i];
        contextTemp[tempColNums[j]][i] = temp;
    }
    // make note of where the swapped-out col has moved to, using its rank
    tempColNums[rank[j]] = tempColNums[j];
    rank[tempColNums[j]] = rank[j];
}
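The preparatory logical sort that fills colOriginal is not shown in the paper; the following is a hedged sketch of one way it could be done, counting the set bits of each column of the column-wise bit array and sorting the column indexes in ascending order of support. The builtin __builtin_popcountll is a GCC/Clang assumption (MSVC would use __popcnt64).

#include <algorithm>
#include <numeric>
#include <vector>

void logicalSort(unsigned long long **contextTemp, int n, int mArray,
                 std::vector<int> &colOriginal) {
    std::vector<long long> support(n, 0);
    for (int j = 0; j < n; j++)
        for (int i = 0; i < mArray; i++)
            support[j] += __builtin_popcountll(contextTemp[j][i]);   // crosses in column j
    colOriginal.resize(n);
    std::iota(colOriginal.begin(), colOriginal.end(), 0);
    std::sort(colOriginal.begin(), colOriginal.end(),
              [&](int a, int b) { return support[a] < support[b]; });
}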

If, for example, tempColNums = [4,7,0,2,1,6,5,3], it means that column 4 is the least dense column and column 3 the most dense. The array rank is used to record and maintain the relative ranking of the columns in terms of density. Thus, in this example, rank = [2,4,3,7,0,6,5,1], which means that column 0 has ranking 2, column 1 has ranking 4, column 2 has ranking 3, and so on. Once the columns have been physically sorted, the column-wise context, contextTemp, is written, a bit at a time, into the row-wise context, context, ready for the main computation:

for(int i = 0; i < m; i++)
    for(int j = 0; j < n; j++)
        if(contextTemp[j][(i>>6)] & (1i64<<(i%64)))
            context[i][(j>>6)] = context[i][(j>>6)] | (1i64<<(j%64));

6

// looking for them in current attribute column
if(context[*Ac][j>>6] & (1i64<<(j%64)))

Bcurrent[j>>6] = Bcurrent[j>>6] | (1i64<<(j%64));

/* D = B U {j} */
// inherit attributes
memcpy(Bchild, Bcurrent, nArray*8);
// record the start of the child intent in B tree
startB[Cnums[i]] = bptr;
// add spawning attribute to B tree
*bptr = Bchildren[i];
bptr++;
sizeBnode[Cnums[i]]++;
// and to Boolean form of child intent

Bchild[Bchildren[i]>>6] = Bchild[Bchildren[i]>>6] | (1i64<<(Bchildren[i]%64));

7.1

for(p = 0; p < j>>6; p++){


    // invert 64 bit segments of current intent
    Bmask[p] = ~Bcurrent[p];
}
// invert last 64 bits up to current attribute
// zeroing any bits after current attribute
Bmask[p] = ~Bcurrent[p] & ((1i64<<(j%64)) - 1);
for(p = 0; p <= j>>6; p++){
    int i;                                   // object counter
    // point to start of extent
    int *Ahighc = startA[highc];
    // iterate through objects in new extent
    for(i = endAhighc - Ahighc; i > 0; i--){
        // apply mask to context (testing 64 cells at a time)
        Bmask[p] = Bmask[p] & context[*Ahighc][p];
        // if an object is not found, stop searching this segment
        if(!Bmask[p]) break;
        Ahighc++;                            // otherwise, next object
    }
    // if extent has been found, it is not canonical
    if(i == 0) return(false);
}
// if extent has not been found, it is canonical
return(true);

8 Evaluation

Mushroom Adult Internet ads 8124 × 125 48842 × 96 3279 × 1565 17.36% 8.24% 0.77%

#Concepts

226,921

Optimized 0.16 Unoptimized 0.47

1,436,102

16,570

0.83 2.14

0.05 0.39

Artificial data sets were used that, although partly randomized, were constrained by properties of real data sets, such as many valued attributes and a

Some Programming Optimizations for Computing Formal Concepts

71

fixed number of possible values. The results of the artificial data set experiments are given in Table 2 and, again, show a significant improvement achieved by the optimized implementation. Table 2. Artificial data set results (timings in seconds). Data set |X| × |Y | Density #Concepts

M7X10G120K 120, 000 × 70 10.00% 1,166,343

Optimized 0.98 Unoptimized 2.42

9

M10X30G120K 120, 000 × 300 3.33% 4,570,498

T10I4D100K 100, 000 × 1, 000 1.01% 2,347,376

8.37 18.65

9.10 33.21

Conclusions and Further Work

It is clear that certain optimization techniques can make a significant difference to the performance of implementations of CbO-type algorithms. Bit-wise operations and efficient use of cache memory are big factors in this, along with a choice of data structures for storing formal concepts that make a good compromise between size and speed, given the memory typically available and addressable in standard personal computers. Clearly, with more specialized and expensive hardware and with the use of multi-core parallel processing, other significant improvements can be made. However, as far as optimizations are concerned, the ones presented here are probably the most important. Although space does not permit here, it would be interesting, perhaps in a future, expanded work, to investigate the individual effects of each optimization. It may be that some optimizations are more useful than others. Similarly, it may be interesting to investigate the comparative effectiveness of optimization with respect to varying the number of attributes, number of objects and context density. The power of 64-bit bit-wise operators naturally leads to the tempting possibility of using even larger bit-strings to further increase the level of fine-grained parallel processing. So-called streaming SIMD extensions (SSEs) and corresponding chip architecture from manufacturers such as Intel and AMD [1,10] provide the opportunity of 128 and even 256 bit-wise operations. However, our early attempts to leverage this power have not shown any significant speed-up. It seems that the overheads of manipulating the 128/256 bit registers and variables are outweighing the increase in parallelism. It may be because we are currently applying the parallelism to the columns of a formal context (the bit-mask in the canonicity test) rather than the rows, that we are not seeing good results from SSEs. Whereas there are typically only tens or perhaps hundreds of columns, there are often tens or even hundreds of thousands of rows, particularly if we

72

S. Andrews

are applying FCA to data sets. Thus, a 256-bit parallel process is likely to have more impact used column-wise than row-wise. The task will be to work out how to incorporate this approach into an implementation. It may also be worth exploring how the optimizations presented here could be transferred into other popular programming languages, although interpreted languages, such as Python, are clearly not an ideal choice where speed is of the essence. For Java and C#, there appears to be some debate on efficiency compared to C++. It would be interesting to experiment to obtain some empirical evidence. In-Close is available free and open source on SourceForge1 .

References 1. AMD: AMD64 Architecture Programmers Manual Volume 6: 128-Bit and 256-Bit XOP, FMA4 and CVT16 Instructions, May 2009 2. Andrews, S.: In-Close, a fast algorithm for computing formal concepts. In: Rudolph, S., Dau, F., Kuznetsov, S.O. (eds.) ICCS 2009, vol. 483. CEUR WS (2009) 3. Andrews, S.: In-Close2, a high performance formal concept miner. In: Andrews, S., Polovina, S., Hill, R., Akhgar, B. (eds.) ICCS 2011. LNCS (LNAI), vol. 6828, pp. 50–62. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22688-5 4 4. Andrews, S.: A partial-closure canonicity test to increase the efficiency of CbO-type algorithms. In: Hernandez, N., J¨ aschke, R., Croitoru, M. (eds.) ICCS 2014. LNCS (LNAI), vol. 8577, pp. 37–50. Springer, Cham (2014). https://doi.org/ 10.1007/978-3-319-08389-6 5 5. Andrews, S.: A best-of-breed approach for designing a fast algorithm for computing fixpoints of Galois connections. Inf. Sci. 295, 633–649 (2015) 6. Andrews, S.: Making use of empty intersections to improve the performance of CbO-type algorithms. In: Bertet, K., Borchmann, D., Cellier, P., Ferr´e, S. (eds.) ICFCA 2017. LNCS (LNAI), vol. 10308, pp. 56–71. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59271-8 4 7. Carpineto, C., Romano, G.: Concept Data Analysis: Theory and Applications. Wiley, Hoboken (2004) 8. Frank, A., Asuncion, A.: UCI machine learning repository (2010). http://archive. ics.uci.edu/ml 9. Ganter, B., Wille, R.: Formal Concept Analysis: Mathematical Foundations. Springer, Heidelberg (1998). https://doi.org/10.1007/978-3-642-59830-2 10. Intel: Intel Developer Zone, ISA Extensions. https://software.intel.com/en-us/ isa-extensions. Accessed June 2016 11. Krajca, P., Outrata, J., Vychodil, V.: Parallel recursive algorithm for FCA. In: Belohavlek, R., Kuznetsov, S.O. (eds.) Proceedings of Concept Lattices and their Applications (2008) 12. Krajca, P., Outrata, J., Vychodil, V.: FCbO program (2012). http://fcalgs. sourceforge.net/ 13. Krajca, P., Vychodil, V., Outrata, J.: Advances in algorithms based on CbO. In: Kryszkiewicz, M., Obiedkov, S. (eds.) CLA 2010, pp. 325–337. University of Sevilla (2010) 1

In-Close on SourceForge: https://sourceforge.net/projects/inclose/.

Some Programming Optimizations for Computing Formal Concepts

73

14. Kuznetsov, S.O.: A fast algorithm for computing all intersections of objects in a finite semi-lattice. Nauchno-Tekhnicheskaya Informatsiya, ser. 2 27(5), 11–21 (1993) 15. Kuznetsov, S.O.: Mathematical aspects of concept analysis. Math. Sci. 80(2), 1654–1698 (1996) 16. Outrata, J., Vychodil, V.: Fast algorithm for computing fixpoints of Galois connections induced by object-attribute relational data. Inf. Sci. 185(1), 114–127 (2012) 17. Priss, U.: Formal concept analysis in information science. Ann. Rev. Inf. Sci. Technol. (ASIST) 40 (2008) 18. Wille, R.: Formal concept analysis as mathematical theory of concepts and concept hierarchies. In: Ganter, B., Stumme, G., Wille, R. (eds.) Formal Concept Analysis. LNCS (LNAI), vol. 3626, pp. 1–33. Springer, Heidelberg (2005). https://doi.org/10.1007/11528784 1 19. Wolff, K.E.: A first course in formal concept analysis: how to understand line diagrams. Adv. Stat. Softw. 4, 429–438 (1993)

Preventing Overlaps in Agglomerative Hierarchical Conceptual Clustering Quentin Brabant(B) , Amira Mouakher, and Aur´elie Bertaux CIAD, Universit´e de Bourgogne, 21000 Dijon, France {quentin.brabant,amira.mouakher,aurelie.bertaux}@u-bourgogne.fr

Abstract. Hierarchical Clustering is an unsupervised learning task, whi-ch seeks to build a set of clusters ordered by the inclusion relation. It is usually assumed that the result is a tree-like structure with no overlapping clusters, i.e., where clusters are either disjoint or nested. In Hierarchical Conceptual Clustering (HCC), each cluster is provided with a conceptual description which belongs to a predefined set called the pattern language. Depending on the application domain, the elements in the pattern language can be of different nature: logical formulas, graphs, tests on the attributes, etc. In this paper, we tackle the issue of overlapping concepts in the agglomerative approach of HCC. We provide a formal characterization of pattern languages that ensures a result without overlaps. Unfortunately, this characterization excludes many pattern languages which may be relevant for agglomerative HCC. Then, we propose two variants of the basic agglomerative HCC approach. Both of them guarantee a result without overlaps; the second one refines the given pattern language so that any two disjoint clusters have mutually exclusive descriptions. Keywords: Conceptual knowledge acquisition · Conceptual clustering · Hierarchical clustering · Agglomerative clustering Conceptual structure · Overlap

1

·

Introduction

Cluster analysis or simply clustering is the unsupervised task of partitioning a set of objects into groups called clusters, such that objects belonging to the same cluster are more similar than objects belonging to different clusters. Based on regularities in data, this process is targeted to identify relevant groupings or taxonomies in real-world databases [9]. There are many different clustering methods, and each of them may give a different grouping of a dataset. One of the most important method of clustering analysis is the hierarchical clustering [14]. In fact, in the latter, the set of objects is partitioned at different levels, and the ordering of clusters, w.r.t. inclusion, can be represented as a dendogram. Traditional clustering methods are usually unable to provide human-readable definitions of the generated clusters. However, conceptual clustering techniques, c Springer Nature Switzerland AG 2020  M. Alam et al. (Eds.): ICCS 2020, LNAI 12277, pp. 74–89, 2020. https://doi.org/10.1007/978-3-030-57855-8_6

Preventing Overlaps in Agglomerative Hierarchical Conceptual Clustering

75

in addition to the list of objects belonging to the clusters, provide for each cluster a definition, as an explanation of the clusters. This definition belongs to a predefined set called the pattern language. The pair of a cluster and its definition is usually called a concept. In this paper, we are paying hand to Hierarchical Conceptual Clustering (HCC), to wit the task of generating concepts that decompose the set of data at different levels. We focus on the agglomerative approach, which is easily adaptable to various application cases. Indeed, its requirements on the pattern language are easy to meet (see Sect. 2). A particularity of conceptual clustering approaches is that only clusters having a definition in the pattern language can be returned. Quite often, the chosen pattern language does not allow expressing every possible subset of objects in the data. Thus, the pattern language can be seen as a tool for guiding the clustering and avoiding over-fitting. However, this particularity may lead to an issue: na¨ıve agglomerative HCC algorithms outputs are sometimes not, strictly speaking, hierarchical. A structure of this kind cannot be represented by a dendogram. This problem is the main focus of the present paper, which is organized as follows: The next section recalls the key notions used throughout this paper and reviews the related work. In Sect. 3, we provide a formal characterization of pattern languages ensuring that the result of agglomerative HCC does not contain overlaps. As many relevant pattern languages for the agglomerative HCC are excluded from our proposed characterization, we provide, in Sect. 4, a variant of agglomerative HCC that always outputs a set of non-overlapping clusters. Then, Sect. 5 reports an experimental study and its results. Section 6 concludes the paper and identifies avenues for future work.

2

Preliminaries and Related Work

This section reviews the fundamental notions used in this paper, then briefly surveys the related works in Agglomerative Hierarchical Conceptual Clustering. 2.1

Some Definitions

Pattern Language. We consider a partially ordered set (L, ), to which we refer as the pattern language. The partial order  is called a generality relation. For any two patterns p, q ∈ L, the relation p  q indicates that any object that is described by p is also described by q. We say that q is more general than p (or that q generalizes p) and that p is more specific than q (or that p specifies q). The set of most general elements of a set of patterns P is:     = p ∈ P  ∀q ∈ P, p  q . P

76

Q. Brabant et al.

Example 1. We now describe a pattern language that will be used as an example throughout the paper. This pattern language is composed of all subsets of {0, . . . , 9}, i.e., L = 2{0,...,9} . The generality relation  is defined as the inverse of inclusion (p  q if and only if q ⊆ p). In other words, the more elements a pattern contains, the more specific it is; for instance {1, 2}  {1}. This framework is similar to that of FCA (see, e.g. [8]). Data Representation. We consider a set of objects X. Each object x ∈ X is associated to a pattern referred to as its most specific description (MSD) through a function δ : X → D, where D is a subset of L that contains all potential MSD of objects. The MSD δ(x) can be interpreted as the pattern providing the most information about x. Example 2. We consider X = {x1 , . . . , x5 }, D = L and δ : X → D such that: x x1 x2 x3 x4 x5 . δ(x) {0, 1} {1, 2, 3, 4} {1, 2, 3, 5} {2, 5, 6} {2, 5, 7} Clusters and Concepts. For any pattern p, we denote by cl(p) the cluster of objects that p describes, i.e.:    cl(p) = x ∈ X  δ(x)  p . It follows from this definition that p  q is equivalent to cl(p) ⊆ cl(q). A pair (cl(p), p) is called a concept if there is no pattern q ∈ L such that q  p and cl(q) = cl(p). The pattern p and the cluster cl(p) are then called the intension and the extension of (cl(p), p), respectively. Note that if, for a given cluster A ⊆ X, there is no p ∈ L such that A = cl(p), then no concept corresponds to A. A natural order relation of concepts is defined by: (cl(p), p) ≤ (cl(q), q)

⇐⇒

pq

⇐⇒

cl(p) ⊆ cl(q).

(1)

Example 3. Let’s consider the following patterns: {1, 2, 3}, {2, 5} and ∅. The corresponding concepts are ({x2 , x3 }, {1, 2, 3}), ({x3 , x4 , x5 }, {2, 5}) and (X, ∅). The pattern {5} has no corresponding concept, since cl({5}) = {x3 , x4 , x5 } and {2, 5} is a more specific pattern that describes the same cluster. The cluster {x4 , x5 } has also no corresponding concept, since the most specific pattern that describes x4 and x5 (which is {2, 5}) also describes x3 . As an illustration of (1), notice that {1, 2, 3}  ∅ and {x2 , x3 } ⊆ X, and thus ({x2 , x3 }, {1, 2, 3}) ≤ (X, ∅). Agglomerative Clustering. The principle of (non-conceptual) agglomerative clustering is the following: 1. a set of clusters H = {{x} | x ∈ X} is initialized; 2. while H does not contain the cluster X: (a) pick C1 and C2 from {C ∈ H | ∀C  ∈ H, C ⊂ C  }; (b) add C1 ∪ C2 to H.

Preventing Overlaps in Agglomerative Hierarchical Conceptual Clustering

77

At the end of the process, clusters of H are either nested or disjoint, which allows to represent H by a dendogram. Usually, the chosen clusters C1 and C2 are the closest according to some inter-cluster distance, which is itself defined w.r.t. a given inter-object distance. Generalization Operation. In the case of agglomerative HCC, we substitute clusters with concepts. Merging two concepts (cl(p), p), (cl(q), q) is done by choosing a pattern r that generalises both p and q; the resulting concept is then (cl(r), r). In this paper, we consider a binary operation  on L such that p  q generalizes both p and q. Moreover, we assume that  is a join semilattice operator [4]. This assumption is not very restrictive, since the properties that characterize join operators are quite natural for the purpose of agglomerative HCC. Indeed,  begin a join semilattice operation is equivalent to: –  being idempotent, commutative and associative; –  being idempotent, monotonic and satisfies p  p  q for all p, q ∈ L; – p  q being the most specific generalization of p and q for all p, q ∈ L.  Since  is associative, it naturally extends into the function : 2L → L, where:  {p1 , . . . , pn } = p1  . . .  pn for all n > 0 and all p1 , . . . , pn ∈ L.

Example 4. In the pattern language of the previous examples, the most specific generalization of two sets is given by their intersection. Thus, the join operation associated to L and ⊆ is ∩. So, if we want to merge the concepts ({x1 }, {0, 1}) and ({x2 }, {1, 2, 3, 4}), we first compute {0, 1}  {1, 2, 3, 4} = {0, 1} ∩ {1, 2, 3, 4} = {1}, and then    cl({1}) = x ∈ X  δ(x)  {1} = {x1 , x2 , x3 }, and we get the concept ({x1 , x2 , x3 }, {1}). Finally, we assume that L has a minimal element ⊥, such that ⊥ ∈ D. Since (L, ) is a join semi-lattice with a lower bound, it is a lattice. We denote its meet operation by . Note that ⊥ cannot describe any object, since ⊥  p for all p ∈ D. Furthermore, saying that p  q = ⊥ is equivalent to say that no object can be described by both p and q. Example 5. Although {1, . . . , 9} is the least elements of L, we have {1, . . . , 9} ∈ D. Thus, in order to fit our assumptions about ⊥, we set L = 2{0,...,9} ∪ {⊥} such that ⊥  {1, . . . , 9}, and D = L \ {⊥}. Finally, the reader can check that the meet operation associated to L is the set union ∪.

78

2.2

Q. Brabant et al.

Related Work

In the following, we review the few previous works that addressed the agglomerative approach of HCC. For a broader review on conceptual clustering, we refer the reader to [12]. The work that is the most closely related to ours is certainly [6], where an agglomerative HCC algorithm is described. This algorithm takes as parameter a pattern language and a distance over the set of objects. Moreover, the authors define several levels of “agreement” between the chosen pattern language and distance. Those levels of agreement are though as conditions that limit the impact of the pattern language on the outcome of the clustering (when compared to a classical approach relying only on distance). However, the clusters produced by this approach can overlap with each other. Other papers propose algorithms that make use of a specific pattern language: [13] introduces a method (HICAP) for hierarchical conceptual clustering where patterns are itemsets. Just as the classical UPGMA method, HICAP relies on the average linkage criteria. It seems to perform as well as UPGMA or bisecting k-means, while providing clusters with human-readable definitions. One of the reason for avoiding overlaps in the result of HCC is readability. However, it is possible to allow some overlaps while preserving readability to some extent. Pyramids are cluster structures that can be drawn without crossing edges, while allowing some overlaps. In this context, Brito and Diday proposed a clustering method using the pyramidal model to structure symbolic objects [2]. Their approach is based on an agglomerative algorithm to build the pyramid structure. Each cluster is defined by the set of its elements and a symbolic object, which describes their properties. This approach is used for knowledge representation and leads to a readable graphical representation. Finally, note that the theoretical notions of this paper are directly inspired from the pattern structure formalism [7], which extends Formal Concept Analysis [8] and provides a useful theoretical framework for thinking about pattern languages. 2.3

An Abstract View of Agglomerative HCC

In this paper, we abstract ourselves from the chosen pattern language and from how the concepts to merge are picked at each step. In this respect, our work can be seen as a continuation of [6]. Algorithm 1 formalizes the general process of agglomerative HCC. It takes as argument the data (X and δ), the join operation  of the pattern language, and a function f that picks, at each step, the two patterns that are going to be merged. Typically, the selected pair is the closest according to some metric on X. Example 6. We consider a distance d defined as the complement of the Jaccard index of object MSDs: d(xi , xj ) = 1 −

|δ(xi ) ∩ δ(xj )| . |δ(xi ) ∪ δ(xj )|

Preventing Overlaps in Agglomerative Hierarchical Conceptual Clustering 1 2 3 4 5 6 7

79

functionAHCC(X, δ, ,   f ): P ← δ(x)  x ∈ X while |P | > 1 do {p1 , p2 } ← f (P , X, δ, ) P ← P ∪ {p1  p2 } end return P

Algorithm 1: The agglomerative approach of HCC. The function returns a set of patterns where each pattern characterizes a concept.

We define f to return the two concepts whose extents are the closest according to d, following a single linkage strategy, i.e., for any two clusters Ci and Cj : d(Ci , Cj ) = min{d(xi , xj ) | xi ∈ Ci , xj ∈ Cj }. The set of patterns returned by AHCC is represented in Fig. 1. {} {2} {1, 2, 3}

{2, 5}

{0, 1}

{1, 2, 3, 4}

{1, 2, 3, 5}

{2, 5, 6}

{2, 5, 7}

x1

x2

x3

x4

x5

Fig. 1. Result of AHCC in Example 6. The leaves of the tree are created at the initialization of P . The other nodes are created by merging their children, in the following order: {1, 2, 3}, {2, 5}, {2}, ∅.

2.4

Overlaps

It is usually said that two clusters A, B overlap if A ⊆ B,

B ⊆ A,

and

A ∩ B = ∅.

We define the notion of overlaps for patterns in an analogous manner. We say that two patterns p and q overlap if: p  q,

q  p

and

p  q = ⊥,

This is to say, if one does not generalize the other, and if it can exist an object that is described by both. Note that two patterns p, q necessarily overlap if cl(p) and cl(q) overlap; but the reverse is not true; if p and q overlap, whether cl(p)

80

Q. Brabant et al.

and cl(q) also overlap depends on the objects in X and there descriptions given by δ. In general, the result of Algorithm 1 can contain both extent overlaps and intent overlaps. Example 7. In Example 6, we have cl({2, 5}) = {x3 , x4 , x5 } and cl({1, 2, 3}) = {x2 , x3 }. Thus, the clusters described by {2, 5} and {1, 2, 3} overlap on x3 . Figure 2 is misleading, because it omits the link between the patterns {1, 2, 3, 5} and {2, 5}, and it suggests that x3 does not belong to cl({2, 5}). Finally, note that, although some pair of patterns describe non-overlapping clusters, any pair of patterns from 2{0,...,9} is either related by inclusion or overlapping. Indeed, since ⊥ ∈ 2{0,...,9} , we always have p ∪ q = ⊥.

3

Formal Characterization of Pattern Languages Ensuring a Result Without Overlaps

In this section, we present conditions on  and D that ensure the absence of overlap in AHCC(X, δ, , f ), regardless of X, δ and f . For any pattern p, we say  that a set B ⊆ L is a base of p if B = p, and we set     B=p . β(p) = B ⊆ D  In other words, β(p) is the set of bases of p that are included in D. Example 8. In our running example, the base of {1, 2, 3} contains all S ⊆ 2{0,...,9} such that S = {1, 2, 3}. Proposition 1. The two following propositions are equivalent: (a) D contains no overlap and for any overlapping p1 , p2 ∈ L we have ∀G2 ∈ β(p2 ) ∃q ∈ G2 : q  p1

(2)

(b) For any X, δ and f , AHCC(X, δ, , f ) contains no overlap. Proof. In order the proof easier to follow, we define, for all p ∈ L,  to make  ↓p = q ∈ L  q  p . We also reformulate (2) as follows: ∀G2 ∈ β(p2 ) :

G2 ∩ ↓p1 = ∅.

We first show by induction that (a) implies (b). Assume that (a) holds, and consider the algorithm of function AHCC.    1. P is initialized with value δ(x)  x ∈ X ⊆ D, thus without overlaps. 2. Now we consider the k-th iteration of the while loop, and assume that P contains no overlap (before line 5 is executed). We will show that P ∪{p1 p2 } , we have does not contain overlap either. For any r ∈ P , since p1 , p2 ∈ P p1  r and p2  r. Then: – if r  p1 or r  p2 then r  p1  p2 (thus r and p1  p2 do not overlap);

Preventing Overlaps in Agglomerative Hierarchical Conceptual Clustering

81

– if neither r  p1 nor r  p2 then, since P contains no overlap, we have: ↓p1 ∩ ↓r = ↓p2 ∩ ↓r = ∅,

and thus

(↓p1 ∪ ↓p2 ) ∩ ↓r = ∅

and therefore for all B1 ∈ β(p1 ) and all B2 ∈ β(p2 ) we have (B1 ∪ B2 ) ∩ ↓r = ∅. Since

   B2 = p1  p2 B1 ∪ B2 ) = B1  we also have (B1 ∪ B2 ) ∈ β(p1  p2 ). Therefore, there is a base of p1  p2 that is disjoint from ↓r. Thus, using (a), we can deduce that p1  p2 and r do not overlap. Hence, p1  p2 does not overlap with any r ∈ P , and thus P ∪ {p1  p2 } contains no overlap. The proof by induction is then complete. In order to show that (b) implies (a), we show that if (a) does not hold then (b) does not. Assume that (a) does not hold: we will show that we can define X, δ and f so that AHCC(X, δ, , f ) contains at least one overlap. – If D contains at least two overlapping patterns p1 and p2 , then AHCC(X, δ, , f ) contains overlaps for X = {x1 , x2 }, δ(x1 ) = p1 and δ(x2 ) = p2 . – If D contains no overlap, then the second part of condition (a) does not hold, i.e., there are overlapping patterns p1 , p2 ∈ L such that ∃ G2 ∈ β(p2 ) : G2 ∩ ↓p1 = ∅. Let G2 ∈ β(p2 ) such that G2 ∩ ↓p1 = ∅ and let G1 ∈ β(p1 ). We set X and δ so that δ is a bijection from X to (G1 ∪ G2 ). The algorithm initializes P to G1 ∪ G2 . Since we have not placed any restrictions on the function f , this can be chosen so as to obtain an arbitrary merger order. We choose f such that the order of merging of patterns is as follows: first, the patterns of G1 are merged with each other until obtaining p1 . When p1 is added to P , since  = {p1 } ∪ G2 . We G2 is disjoint from ↓p1 , after p1 is added to P , we have P then choose the following merging order: the patterns of G2 are merged with each other until obtaining p2 . Then, we get p1 , p2 ∈ P , and thus the result of the algorithm is not hierarchical.   Remark 1. Condition (a) on D and  is necessary and sufficient to prevent overlaps in the result of AHCC. However, this condition is quite restrictive and is not met by many interesting pattern languages. Some cases are listed below. – L is the powerset of a given set and  is the inverse of the inclusion relation (see Example 1). – L is the set of n-tuples whose components are numeric intervals, with n ≥ 3, and  is defined as in Subsect. 5.2.

82

Q. Brabant et al.

– The pattern language is as defined in [5]. Besides, the paper also presents an example of HCC whose result contains overlaps, although this is not explicitly noted (see Fig. 3, diagrams (c) and (e)). – The patterns are antichains of a given poset (X, ≤), and  is defined by: pq

⇐⇒

∀a ∈ q ∃b ∈ p : b ≤ a.

Note that this includes cases where X contains complex structures (e.g., graphs, trees, sequences, etc) and where b ≤ a means “a is the substructure of b”. Examples of such pattern languages are given, for instances, in [11] with syntactic trees, and in [10] with graphs representing molecules. – L is the set of expressible concepts of a given Description Logic, objects are individuals from an A-Box, and δ maps each individual to its most specific concept (see [1], Sect. 6.3). In this case, D equals L and contains overlaps. To our current knowledge, languages fulfilling condition (a) are either very expressive or very simple. Some cases of such simple pattern languages are the followings: – L contains no overlap (in other words, L \ {⊥} can be represented as a tree). – the elements of L \ {⊥} can be seen as interval of some total order, where the join operator is the intersection. A case of very expressive language is the following: L is the set of n-ary boolean functions (for some n > 0), and  is the logical OR. For instance, for two functions b1 , b2 ∈ L defined by b1 (x1 , . . . , xn ) = x1

and b2 (x1 , . . . , xn ) = (x2 AND x3 ),

the pattern b1  b2 is the function expressed by (x1 OR (x2 AND x3 )). Since (a) requires that D does not contain overlap, we define it as the subset of boolean functions that have only one true interpretation, i.e. functions that are of the form: b(x1 , . . . , xn ) = l1 (x1 ) AND l2 (x2 ) AND . . . AND ln (xn ) with, for each i ∈ {1 . . . n}: either li (xi ) = xi , or li (xi ) = NOT xi .

4

Preventing Overlaps in Agglomerative HCC

In this section, we describe a variant of agglomerative HCC where concepts are created agglomeratively but: (i ) the pattern merging step is corrected in order to prevent cluster overlaps; (ii ) the pattern language is extended with a “pattern negation” and (iii ) when a new pattern is created, existing patterns are updated in a top-down fashion for preventing pattern overlaps. Correcting Pattern Creation. In Algorithm 1, when a new pattern p1  p2 is added to P , it can happen that P already contains some pattern q such that cl(q) and cl(p1  p2 ) overlaps. In our variant, we avoid such situation by adding the pattern p1  p2  q to P , instead of p1  p2 (see Algorithm 2).

Preventing Overlaps in Agglomerative Hierarchical Conceptual Clustering

83

Refined Pattern Language. We define L as the set of antichains of L, i.e.:     P ⊆ L . We define meet and join operations on L by, respectively: L= P

  ∪ Q and P  Q = {p  q | p ∈ P, q ∈ Q}. P Q=P The set L endowed with  and  is a lattice (see [3], Lemma 4.1) whose maximal element is the empty set. Its order relation  is characterized by: P Q

⇐⇒

∀q ∈ Q ∃p ∈ P : q  p.

Now, we define the lattice (L∗ , ∗ ) as the product of L and L. In other words: L∗ = L × L, the meet and join operations on L∗ are defined by, respectively: (p, N ) ∗ (q, M ) = (p  q, P  Q)

and

(p, N ) ∗ (q, M ) = (p  q, P  Q),

and the associated order relation ∗ is such that (p, N ) ∗ (q, M ) ⇐⇒ p  q and

P Q .

We say that L∗ is the refinement of L. Semantic of L∗ . Each refined pattern (p, N ) ∈ L∗ can be interpreted as “p minus all patterns in N ”. A semantic of (p, N ) is given by the function Ψ : L∗ → 2L such that:    Ψ (p, N ) = q ∈ L  q  p and ∀r ∈ N : q  r . Notice that, if (p, N ) ∗ (q, M ), then Ψ (p, N ) ⊆ Ψ (q, M ) but the reversed implication is not true. We then define the set of objects described by (p, N ) by:    cl∗ (p, N ) = x ∈ X  δ(x) ∈ Ψ (p, N ) . Example 9. Following Example 5: the elements of L are antichains from 2{0,...,9} . For instance, {{0, 5}, {2}} ∈ L. The pattern ({1}, {{0, 5}, {2}}) ∈ L∗ describes all objects that are described by {1}, but are not described by {0, 5} nor {2}. The Algorithm. Algorithm 2 describes our variant AHCC-Tree of agglomerative HCC. The result of AHCC-Tree is a couple (P, neg), where P ⊆ L and neg is a map from P to L. The couple (P, neg) describes the set of refined patterns    P ∗ = p, neg(p)  p ∈ P ⊆ L∗ . Provided that there are no x1 , x2 ∈ X such that δ(x1 )  δ(x  set ofclusters  2 ), the {cl(p) | p ∈ P } does not contain overlaps. Provided that δ(x)  x ∈ X has no overlap: patterns of P may overlap, but the set of patterns P ∗ has no overlap. Example 10. We consider the same L, , X, δ and f as in Example 7. Throughout  and neg evolve as follows: the iterations of AHCC-Tree, P

84

Q. Brabant et al.

 = {{0, 1}, {1, 2, 3, 4}, {1, 2, 3, 5}, {2, 5, 6}, {2, 5, 7}}; 1. Initialization: P for p ∈ P : neg(p) = ∅ 2. {1, 2, 3, 4} and {1, 2, 3, 5} are merged into {1, 2, 3};  = {{0, 1}, {1, 2, 3}, {2, 5, 6}, {2, 5, 7}}; P for p ∈ {{0, 1}, {2, 5, 6}, {2, 5, 7}}: neg(p) = {{1, 2, 3}} 3. {2, 5, 6}  {2, 5, 7} gives {2, 5}, but cl({2, 5}) overlaps with cl({1, 2, 3}), thus {2, 5} and {1, 2, 3} are merged into {2}, which is added to P ;  = {{0, 1}, {2}}; P neg({0, 1}) = {{1, 2, 3}}  {{2}} = {{2}} 4. {0, 1} and {2} are merged into ∅;  = {∅}. P The result is represented in Fig. 2.

1 2 3 4 5 6 7 8 9 10 11 12 13 14

functionAHCC-Tree(X,   δ, , f ): P ← δ(x)  x ∈ X ∀p ∈ P : [neg(p) ← ∅] while |P | > 1 do q ← f (P , X, δ, ) while ∃p ∈ P  : [cl(p) and cl(q) overlap] do q ←qp end P ← P ∪ {q} for p ∈ P such that p and q overlap. do neg(p) ← neg(p) {q} end end return P, neg

Algorithm 2: The corrected agglomerative approach of HCC.

{}¬{} {2}¬{} {1, 2, 3}¬{} {0, 1}¬{{2}}

{1, 2, 3, 4}¬{}

{1, 2, 3, 5}¬{}

x2

x3

{2, 5, 6}¬{{1, 2, 3}} {2, 5, 7}¬{{1, 2, 3}}

δ x1

x4

x5

Fig. 2. Set of patterns P ∗ corresponding to the result of Example 10. Patterns of P ∗ are written in the form “p¬neg(p)”.

Preventing Overlaps in Agglomerative Hierarchical Conceptual Clustering

5

85

Empirical Study

In the following experiments, we perform agglomerative HCC on widely used datasets from the UCI repository1 . We try to quantify the presence of overlaps in the result of agglomerative HCC. Then, we test the corrected version of AHCC and compare it to the basic AHCC in terms of how well results fit with data. The code and datasets used for the experiments are available on GitHub2 . 5.1

Datasets

We ran experiments on 6 datasets form the UCI repository. The characteristics of those datasets are summarized in Table 1. These datasets are 2-dimensional tables where rows can be seen as objects and columns as attributes. An attribute can be numerical or categorical. In some datasets, a target attribute is specified for the purpose of supervised learning; we treat target attributes just as normal ones. Table 1. Datasets’ characteristics. Id. Name

5.2

# objects # numerical att. # categorical att.

1

Glass

214

9

1

2

Wine

178

13

0

3

Yeast

1484

8

1

4

Breast cancer wisconsin

699

10

0

5

Seeds

210

7

1

6

Travel reviews

980

10

0

Definition of a Pattern Language

The pattern language is defined as follows. Firstly, categorical attributes are translated into numerical ones using a one-hot encoding. We denote by n the number of attributes after this transformation. We define L as the set of n-tuples of the form ([a1 ; b1 ], . . . , [an ; bn ]) where each [ai ; bi ] is an interval on IR, and we define  by:   [a1 ; b1 ], . . . , [an ; bn ]  [c1 ; d1 ], . . . , [cn ; dn ] = [min(a1 , c1 ); max(b1 , d1 )], . . . , [min(an , cn ); max(bn , dn )]). Each row corresponds to an object x ∈ X. Let us denote the values in the row by a1 , . . . , an . The description of x is then δ(x) = ([a1 ; a1 ], . . . , [an ; an ]). Finally, we define f as the function that returns the two clusters whose centro¨ıds are closest according to the Euclidean distance. 1 2

Available at UC Irvine Machine Learning Database http://archive.ics.uci.edu/ml/. https://github.com/QGBrabant/ACC.

86

Q. Brabant et al.

5.3

Basic Agglomerative HCC Experiment

We applied Algorithm 1 on the different dataset presented in Table 1. For each dataset, we ran 10 tenfold cross-validations (except for yeast and travel reviews, which have many rows, and for which we ran 10 twofold cross-validations); a set of patterns P was learned on each training set using the method specified above. Then, we estimate the number of overlaps contained, on average, in P . This estimation relied on two measures: for all pattern p, we call covering rate of p the rate of objects that are described by p, and we call overlapping rate of p the rate of objects that are described by p and (at least) another pattern q ∈ P that is incomparable with p. Measuring one of these two rates can yield different values depending on whether the measure is done the training or test set. We will simply refer to the covering rate of a cluster on the training set as its size. In order to assess the relevance of a cluster, we can compare its size to its covering rate on the test set (a significantly smaller rate on the test set is a sign of overfitting). Here, however, we are mostly interested in the overlapping rate of clusters, depending on their size. The results are depicted by Fig. 3. 1

[Fig. 3 contains one panel per dataset: glass, wine, yeast, breast-cancer-Wisconsin, seeds, and travel-reviews; the plotted curves are described in the caption below.]

Fig. 3. The abscissa represents the interval of possible cluster sizes (which is [0; 1]). This interval is divided into 40 sub-intervals of length 0.025. For each sub-interval ]a; a + 0.025], we consider all clusters whose sizes belong to ]a; a + 0.025], and we plot their average value for the following rates: covering rate on the test set (vertical line); overlapping rate on the learning set (plain line); overlapping rate on the test set (dashed line). Note that, when no cluster has a size belonging to a given interval, no vertical line is displayed.

We observe that the covering rates on the test sets tend to be close to the size of the clusters. Moreover, the overlapping rates on the test sets are very


similar to those obtained on the training sets. Determining which characteristics of the dataset determine the overlapping rate would require an empirical study using more diverse datasets. However, our study allows us to conclude that high overlapping rates can occur in practice when performing agglomerative HCC.

5.4 Corrected Agglomerative HCC Experiment

In order to compare how well AHCC and AHCC-Tree fit the data, we applied Algorithm 2 on the same data, with the same cross-validation setting. One way to assess the quality of fit of a pattern set P on a test set Xtest would be to measure the distance of each x ∈ Xtest to the centroid of the most specific concept of P it belongs to. However, such a measure could fail to account for the quality of fit of concepts that are “higher in the tree”. In order to have a more comprehensive view, we introduce a parameter k ∈ [0; 1] and define P(x, k) as the set of most specific patterns p ∈ P such that δ(x) ⊑ p and |cl(p)|/|Xtrain| ≥ k. Then we define an error function e by:

e(P, X_{\mathrm{test}}, k) = \frac{1}{|X_{\mathrm{test}}|} \sum_{x \in X_{\mathrm{test}}} \frac{1}{|P(x,k)|} \sum_{C \in P(x,k)} d(x, C),

where d(x, C) is the Euclidean distance between x and the centroid of C. We rely on e to evaluate both AHCC and AHCC-Tree on each dataset. The results are reported in Fig. 4. Although the reported values are consistently in favor of AHCC, the difference between both algorithms is very slight. Thus, the proposed method AHCC-Tree seems to be a viable alternative to AHCC in terms of fitting.
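A direct reading of this measure in Python could look as follows; this is our own sketch, assuming each concept carries its centroid and the size of its closure on the training set, with the pattern-order predicates passed in as parameters.

```python
# Sketch of e(P, X_test, k).  Each concept in P is a triple
# (pattern, centroid, closure_size); described_by(x, p) tests delta(x) below p,
# more_specific(p, q) is the strict order on patterns.  We assume the root
# concept is in P, so P(x, k) is never empty for k <= 1.
import math

def euclid(x, c):
    return math.sqrt(sum((xi - ci) ** 2 for xi, ci in zip(x, c)))

def error(P, X_test, k, n_train, described_by, more_specific):
    total = 0.0
    for x in X_test:
        cand = [(p, c) for (p, c, size) in P
                if size / n_train >= k and described_by(x, p)]
        # keep only the most specific candidates: nothing strictly below them
        Pxk = [(p, c) for (p, c) in cand
               if not any(more_specific(q, p) for (q, _) in cand if q is not p)]
        total += sum(euclid(x, c) for (_, c) in Pxk) / len(Pxk)
    return total / len(X_test)
```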

[Fig. 4 contains one panel per dataset: glass, wine, yeast, breast-cancer-Wisconsin, seeds, and travel reviews; axis values omitted.]

Fig. 4. Mean value of e as a function of k, for AHCC (black) and AHCC-Tree (red). (Color figure online)

6 Conclusion and Future Work

In this paper, we were interested in Agglomerative Hierarchical Conceptual Clustering. We presented a formal characterization of pattern languages in order to guarantee the hierarchical aspect of the resulting clusters. Unfortunately, this characterization is too restrictive and excludes many relevant pattern languages for HCC. So, we proposed two variants of the basic agglomerative HCC approach that prevent overlaps. The second variant also refines the given pattern language so that any two disjoint clusters have mutually exclusive descriptions. Although more thorough experiments have to be conducted to confirm the empirical results, our study suggests that the proposed AHCC-Tree function is a viable alternative to basic agglomerative HCC.

Acknowledgments. This research was supported by the QAPE company under the research project Kover.


Interpretable Concept-Based Classification with Shapley Values

Dmitry I. Ignatov2,3(B) and Léonard Kwuida1

1 Bern University of Applied Sciences, Bern, Switzerland
[email protected]
2 National Research University Higher School of Economics, Moscow, Russian Federation
[email protected]
3 St. Petersburg Department of Steklov Mathematical Institute of Russian Academy of Sciences, Saint Petersburg, Russia

Abstract. Among the family of rule-based classification models, there are classifiers based on conjunctions of binary attributes. For example, the JSM-method of automatic reasoning (named after John Stuart Mill) was formulated as a classification technique in terms of intents of formal concepts as classification hypotheses. These JSM-hypotheses already represent an interpretable model since the respective conjunctions of attributes can be easily read by decision makers and thus provide plausible reasons for model prediction. However, from the interpretable machine learning viewpoint, it is advisable to provide decision makers with the importance (or contribution) of individual attributes to the classification of a particular object, which may facilitate explanations by experts in various domains with high-cost errors like medicine or finance. To this end, we use the notion of Shapley value from cooperative game theory, also popular in machine learning. We provide the reader with theoretical results, basic examples and attribution of JSM-hypotheses by means of Shapley value on real data.

Keywords: Interpretable Machine Learning · JSM hypotheses · formal concepts · Shapley values

1 Introduction

In this paper we consider the JSM method of inductive reasoning in terms of Formal Concept Analysis for classification problems [6,8,19] from the Interpretable Machine Learning viewpoint [24]. Briefly, this is a rule-based classification technique [7], i.e. it relies on rules in the form “if an object satisfies a certain subset of attributes, then it belongs to a certain class”. Under some conditions these subsets of attributes are called JSM-hypotheses or classification hypotheses (see Sect. 2).


In Interpretable Machine Learning (IML) we want to build from our training data a classification model that is capable not only of inferring the class of undetermined examples (not seen in the training data), but also of explaining, to some extent, the possible reasons why a particular classification has been performed in terms of separate attributes and their values [24]. From this point of view, classification based on JSM-hypotheses belongs to interpretable machine learning techniques since a combination of attributes provides a clear interpretation. To have further insights and rely on statistically stable hypotheses under examples deletion, the notion of stability of hypotheses was proposed [16,18]. It enables ranking of hypotheses to sort out the most stable ones for classification purposes. However, in modern machine learning, especially for the so-called black-box models like deep neural nets, the analyst would like to receive a plausible interpretation of particular classification cases (why to approve (or disapprove) a certain money loan, why a certain diagnosis may (or not) take place) even on the level of ranking of individual attributes by their importance for classification [24].

To further enrich the JSM-based methodology not only by ranking single hypotheses, but also by ranking individual attributes, we adopt the Shapley value notion from Cooperative Game Theory [25], which has already been proven to be useful in supervised machine learning [21,26]. Following this approach, we play a game on subsets of attributes of a single classification example, also called coalitions, so each player is a single attribute. If a coalition contains any positive hypothesis (or negative one, in the case of a binary classification problem), then it is called a winning coalition and it receives a value 1 as a pay-off (or −1, in the case of a negative hypothesis). If we fix a certain attribute m belonging to the chosen classification example, the total pay-off across all the winning coalitions containing m is aggregated into a weighted sum, which takes values in [0, 1]. This aggregation rule enjoys certain theoretical properties, which guarantee fair division of the total pay-off between players [25], i.e. attributes in our case. In what follows, we explain the details of this methodology for the JSM-method in FCA terms with illustrative classification examples and attribute ranking for classic real datasets used in the machine learning community.

The paper is organised as follows: Sect. 2 introduces the mathematical formalisation of the binary classification problem by hypotheses induction based on formal concepts. In Sect. 3, related work on interpretable machine learning and Shapley values from Game Theory is summarised. Section 4 describes one of the possible applications of Shapley values in the Formal Concept Analysis setting. Section 5 is devoted to machine experiments with real machine learning data. Finally, Sect. 6 concludes the paper.

2 JSM-Hypotheses and Formal Concepts

The JSM-method of hypothesis generation proposed by Viktor K. Finn in the late 1970s was introduced as an attempt to describe induction in purely deductive form, and thus to give at least a partial justification of induction [6]. JSM is an acronym after the philosopher John Stuart Mill, who proposed several schemes of


inductive reasoning in the 19th century. For example, his Method of Agreement is formulated as follows: “If two or more instances of the phenomenon under investigation have only one circumstance in common, ... [it] is the cause (or effect) of the given phenomenon.”

In the FCA community, the JSM-method is known as concept learning from examples in terms of inductive logic related to Galois connection [15]. Thus, in [8,19] this technique was formulated in terms of minimal hypotheses, i.e. minimal intents (see Definition 1) that are not contained in any negative example. The method proved its ability to enable learning from positive and negative examples in various domains [18], e.g., in life sciences [1]. In what follows, we rely on the definition of a hypothesis (“no counterexample-hypothesis” to be more precise) in FCA terms that was given in [8] for binary classification1. There is a well-studied connection of the JSM-method in FCA terms with other concept learning techniques like Version Spaces [10,23]. Among interesting avenues of concept learning by means of the indiscernibility relation one may find feature selection based on ideas from Rough Set Theory and JSM-reasoning [11] and Rough Version Spaces; the latter allows handling noisy or inconsistent data [3].

Let K = (G, M, I) be a formal context; that is, a set G of objects (to be classified), a set M of attributes (used to describe the objects) and a binary relation I ⊆ G × M (to indicate if an object satisfies an attribute). Let w ∉ M be our target attribute. The goal is to classify the objects in G with respect to w. The target attribute w partitions G into three subsets:

– positive examples, i.e. the set G+ ⊆ G of objects known to satisfy w,
– negative examples, i.e. the set G− ⊆ G of objects known not to have w,
– undetermined examples, i.e. the set Gτ ⊆ G of objects for which it remains unknown whether they have the target attribute or do not have it.

This partition induces three subcontexts Kε := (Gε, M, Iε), ε ∈ {−, +, τ} of K = (G, M, I). The first two of them, the positive context K+ and the negative context K−, form the training set, and are used to build the learning context K± = (G+ ∪ G−, M ∪ {w}, I+ ∪ I− ∪ G+ × {w}). The subcontext Kτ is called the undetermined context and is used to predict the class of not yet classified objects. The context Kc = (G+ ∪ G− ∪ Gτ, M ∪ {w}, I+ ∪ I− ∪ Iτ ∪ G+ × {w}) is called a classification context.

Definition 1. For a formal context K = (G, M, I) a derivation is defined as follows, on subsets A ⊆ G and B ⊆ M:

A′ = {m ∈ M | aIm, ∀a ∈ A}   and   B′ = {g ∈ G | gIb, ∀b ∈ B}.

An intent is a subset B of attributes such that B′′ = B.
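A small Python sketch of these derivation operators may be helpful; it is our own illustration with the context encoded, as an assumption, as a dict from objects to attribute sets.

```python
# Derivation operators of Definition 1 on a context given as
# I: object -> set of attributes (our own encoding).

def derive_objects(A, I, M):
    """A': attributes shared by every object in A (all of M if A is empty)."""
    result = set(M)
    for g in A:
        result &= I[g]
    return result

def derive_attributes(B, I):
    """B': objects having every attribute in B."""
    return {g for g, attrs in I.items() if B <= attrs}

def is_intent(B, I, M):
    """B is an intent iff B'' = B."""
    return derive_objects(derive_attributes(B, I), I, M) == B

I = {"g1": {"a", "b"}, "g2": {"b", "c"}}
M = {"a", "b", "c"}
print(derive_objects({"g1", "g2"}, I, M))   # {'b'}
print(derive_attributes({"b"}, I))          # {'g1', 'g2'}
print(is_intent({"b"}, I, M))               # True
```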

1 Basic FCA notions needed for this contribution can be found in one of these books [12,13].


In the subcontexts Kε := (Gε, M, Iε), ε ∈ {−, +, τ} the derivation operators are denoted by (·)+, (·)−, and (·)τ respectively. We can now define positive and negative hypotheses.

Definition 2. A positive hypothesis H ⊆ M is an intent of K+ that is not contained in the intent of a negative example, i.e. H++ = H and H ⊄ g− for any g ∈ G−. Equivalently, H++ = H and H′ ⊆ G+ ∪ Gτ. Similarly, a negative hypothesis H ⊆ M is an intent of K− that is not contained in the intent of a positive example, i.e. H−− = H and H ⊄ g+ for any g ∈ G+. Equivalently, H−− = H and H′ ⊆ G− ∪ Gτ. An intent of K+ that is contained in the intent of a negative example is called a falsified (+)-generalisation.

Example 1. Table 1 shows a many-valued context representing credit scoring data. The sets of positive and of negative examples are G+ = {1, 2, 3, 4} and G− = {5, 6, 7, 8}. The undetermined examples are in Gτ = {9, 10, 11, 12} and should be classified with respect to the target attribute which takes values + and −, meaning “low risk” and “high risk” client, respectively.

Table 1. Many-valued classification context for credit scoring

G/M | Gender | Age    | Education | Salary  | Target
1   | Ma     | Young  | Higher    | High    | +
2   | F      | Middle | Special   | High    | +
3   | F      | Middle | Higher    | Average | +
4   | Ma     | Old    | Higher    | High    | +
5   | Ma     | Young  | Higher    | Low     | −
6   | F      | Middle | Secondary | Average | −
7   | F      | Old    | Special   | Average | −
8   | Ma     | Old    | Secondary | Low     | −
9   | F      | Young  | Special   | High    | τ
10  | F      | Old    | Higher    | Average | τ
11  | Ma     | Middle | Secondary | Low     | τ
12  | Ma     | Old    | Secondary | High    | τ

To apply the JSM-method in FCA terms we need to scale the given data. The context K below is obtained by nominally scaling the attributes in Table 1. It shows the first eight objects whose class is known (either positive or negative) together with the last four objects which are still to be classified.


K   | Ma | F | Y | Mi | O | HE | Sp | Se | HS | A | L | w | w̄
g1  | ×  |   | × |    |   | ×  |    |    | ×  |   |   | × |
g2  |    | × |   | ×  |   |    | ×  |    | ×  |   |   | × |
g3  |    | × |   | ×  |   | ×  |    |    |    | × |   | × |
g4  | ×  |   |   |    | × | ×  |    |    | ×  |   |   | × |
g5  | ×  |   | × |    |   | ×  |    |    |    |   | × |   | ×
g6  |    | × |   | ×  |   |    |    | ×  |    | × |   |   | ×
g7  |    | × |   |    | × |    | ×  |    |    | × |   |   | ×
g8  | ×  |   |   |    | × |    |    | ×  |    |   | × |   | ×
g9  |    | × | × |    |   |    | ×  |    | ×  |   |   |   |
g10 |    | × |   |    | × | ×  |    |    |    | × |   |   |
g11 | ×  |   |   | ×  |   |    |    | ×  |    |   | × |   |
g12 | ×  |   |   |    | × |    |    | ×  | ×  |   |   |   |

HE F, Mi

A

L,Ma

HS Ma

g2

g3

O

Y

g4

g1

O

Mi

HE,Y g5

Sp

Se

g8

g6

A,F

Sp g7

HS

L,SE Fig. 1. The line diagrams of the lattice of positive hypotheses (left) and the lattice of negative hypotheses (right).

Shaded nodes correspond to maximal non-falsified hypotheses, i.e. they have no upper neighbours that are non-falsified hypotheses. For K+ the hypothesis {HE} is falsified by the object g5, since {HE} ⊆ g5− = {Ma, Y, HE, L}. For K− the hypothesis {A, F} is falsified by the object g3, since {A, F} ⊆ g3+ = {F, Mi, HE, A}.

2 This concise way of representation is called reduced labelling [13].


The undetermined examples gτ from Gτ will be classified as follows:

– If gττ contains a positive, but no negative hypothesis, then gτ is classified positively (presence of the target attribute w predicted).
– If gττ contains a negative, but no positive hypothesis, then gτ is classified negatively (absence of the target attribute w predicted).
– If gττ contains both negative and positive hypotheses, or if gττ does not contain any hypothesis, then this object classification is contradictory or undetermined, respectively. In both cases gτ is not classified.

For performing classification it is enough to have only minimal hypotheses (w.r.t. ⊆), negative and positive ones; a small code sketch of this rule is given at the end of this section.

There is a strong connection between hypotheses and implications. An implication in a context K := (G, M, I) is a pair of attribute sets (B1, B2), also denoted by B1 → B2. The implication B1 → B2 holds in K if each object having all attributes in B1 also has all attributes in B2. We call B1 the premise and B2 the conclusion.

Proposition 1 ([16]). A positive hypothesis H corresponds to an implication H → {w} in the context K+ = (G+, M ∪ {w}, I+ ∪ G+ × {w}). Similarly, a negative hypothesis H corresponds to an implication H → {w̄} in the context K− = (G−, M ∪ {w̄}, I− ∪ G− × {w̄}). Hypotheses are implications whose premises are closed (in K+ or in K−).

A detailed yet retrospective survey on the JSM-method (in FCA-based and original formulation) and its applications can be found in [15]. A further extension of the JSM-method to triadic data with a target attribute in FCA-based formulation can be found in [14]; there, the triadic extension of the JSM-method used a CbO-like algorithm for classification in Bibsonomy data. Input data are often numeric or categorical and need scaling; however, it might not be evident what to do in case of learning with labelled graphs [17]. Motivated by the search for possible extensions of original FCA to analyse data with complex structure, the so-called Pattern Structures were proposed [9].
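The sketch announced above: a minimal Python rendering of the classification rule, using for concreteness the minimal hypotheses of the credit-scoring example (they are listed explicitly in Sect. 4). It is our own illustration, not the authors' implementation.

```python
# JSM classification of an undetermined example from its description and the
# sets of minimal positive and negative hypotheses (attribute sets).

def classify(description, positive_hyps, negative_hyps):
    has_pos = any(h <= description for h in positive_hyps)
    has_neg = any(h <= description for h in negative_hyps)
    if has_pos and not has_neg:
        return "+"
    if has_neg and not has_pos:
        return "-"
    return "contradictory" if has_pos else "undetermined"

H_pos = [{"F", "Mi", "HE", "A"}, {"HS"}]
H_neg = [{"F", "O", "Sp", "A"}, {"Se"}, {"Ma", "L"}]
print(classify({"F", "Y", "Sp", "HS"}, H_pos, H_neg))   # '+'            (g9)
print(classify({"F", "O", "HE", "A"}, H_pos, H_neg))    # 'undetermined' (g10)
print(classify({"Ma", "O", "Se", "HS"}, H_pos, H_neg))  # 'contradictory' (g12)
```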

3 Interpretable Machine Learning and Shapley Values

3.1 Interpretable Machine Learning

In the early 1990s, the discipline of Data Mining emerged as a step of the Knowledge Discovery in Databases (KDD) process, which was defined as follows: “KDD is the nontrivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data” [5]. In [5], the primary goals of data mining were also identified as prediction and description, where the latter focuses on finding human-interpretable patterns describing the data. The authors also noted that the boundaries between prediction and description are not sharp. So, from the very beginning, Data Mining is geared towards understandable or


human-interpretable patterns, while Machine Learning and Deep Learning, in particular, do not share this constraint. However, recently machine learning researchers realised the necessity of interpretation for a wide variety of black-box models3 and even for ensemble rule-based methods when many attributes are involved [20]. Among the family of approaches, the book [24] on interpretable machine learning poses global versus local interpretations. Thus, the global interpretability “is about understanding how the model makes decisions, based on a holistic view of its features and each of the learned components such as weights, other parameters and structures”, while the local interpretability focuses on “a single instance and examines what the model predicts for this input, and explains why” [24]. From this point of view, the JSM-method provides local interpretations when we deal with a particular classification example. According to the other taxonomic criteria, the JSM-method is intrinsic but not a post-hoc model since interpretability is introduced already on the level of hypotheses generation and their application (inference phase), and provides model-specific but not model-agnostic interpretations since it is based on a concrete rule-based model.

Shapley value is a model-agnostic method and produces a ranking of individual attributes by their importance for the classification of a particular example, i.e. it provides a decision maker with local interpretations. This attribution methodology is equivalent to the Shapley value solution to value distribution in Cooperative Game Theory [25]. Štrumbelj and Kononenko [26] consistently show that the Shapley value is the only solution for the problem of single attribute importance (or attribution problem) measured as the difference between the model's prediction with and without this particular attribute across all possible subsets of attributes. Lundberg and Lee [21] extend this approach further to align it with several other additive attribution approaches like LIME and DeepLift under the name SHAP (Shapley Additive explanation) values [21]. To compute the SHAP value for an example x and an attribute m the authors define fx(S), the expected value of the model prediction conditioned on a subset S of the input attributes, which can be approximated by integrating over samples from the training dataset. SHAP values combine these conditional expectations with the classic Shapley values from game theory to assign a value φm to each attribute m:

\phi_m = \sum_{S \subseteq M \setminus \{m\}} \frac{|S|!\,(|M| - |S| - 1)!}{|M|!}\,\bigl(f_x(S \cup \{m\}) - f_x(S)\bigr), \qquad (1)

where M is the set of all input attributes and S a certain coalition of players, i.e. set of attributes. In our exposition of this problem, we will follow classical definition of Shapley value [25], where a Boolean function is used instead of the expected value fx (S).

3 The workshops on Interpretable Machine Learning: https://sites.google.com/view/whi2018 and https://sites.google.com/view/hill2019.

3.2 Shapley Values in FCA Community

In [4], the authors introduced cooperative games on concept lattices, where a concept is a pair (S, S  ), S is a subset of players or objects, and S  is a subset of attributes. In its turn, any game of this type induces a game on the set of objects, and a game on the set of attributes. In these games, the notion of Shapley value naturally arises as a rational solution for distributing the total worth of the cooperation among the players. The authors of [22] continued the development of the topic by studying algorithms to compute the Shapley value for a cooperative game on a lattice of closed sets given by an implication system; the main computational advantage of the proposed algorithms is based on maximal chains and product of chains of fixed length. Even though these approaches considering games on lattices of closed sets and implication systems are related to hypotheses families, they do not use Shapley values for interpretation or ranking attributes in classification setting.

4 Shapley Values as Means of Attribute Importance for a Given Example

Let Kc = (G+ ∪ G− ∪ Gτ, M ∪ {w}, I+ ∪ I− ∪ Iτ ∪ G+ × {w}) be our learning context. The set of all minimal positive and all minimal negative hypotheses of Kc are denoted by H+ and H−, respectively. For a given object g ∈ G, the Shapley value of an attribute m ∈ g′ is defined as follows:

\varphi_m(g) = \sum_{S \subseteq g' \setminus \{m\}} \frac{|S|!\,(|g'| - |S| - 1)!}{|g'|!}\,\bigl(v(S \cup \{m\}) - v(S)\bigr), \qquad (2)

where

v(S) = \begin{cases}
  1,  & \exists H_+ \in \mathcal{H}_+ : H_+ \subseteq S \ \text{and} \ \forall H_- \in \mathcal{H}_- : H_- \not\subseteq S, \\
  0,  & (\forall H_+ \in \mathcal{H}_+ : H_+ \not\subseteq S \ \text{and} \ \forall H_- \in \mathcal{H}_- : H_- \not\subseteq S) \ \text{or} \ (\exists H_+ \in \mathcal{H}_+ : H_+ \subseteq S \ \text{and} \ \exists H_- \in \mathcal{H}_- : H_- \subseteq S), \\
  -1, & \forall H_+ \in \mathcal{H}_+ : H_+ \not\subseteq S \ \text{and} \ \exists H_- \in \mathcal{H}_- : H_- \subseteq S.
\end{cases}

The Shapley value ϕm̃(g) is set to 0 for every m̃ ∈ M \ g′. In other words, if we consider an undetermined object gS such that gS′ = S, then v(S) = 1 when gS is classified positively, v(S) = 0 when gS is classified contradictory or remains undetermined, and v(S) = −1 if gS is classified negatively. Note that v(∅) = 0.

Let us consider the example from Sect. 2. The set of positive hypotheses H+ consists of {F, Mi, HE, A} and {HS}, while H− contains three negative hypotheses {F, O, Sp, A}, {Se}, and {Ma, L}. Now, let us classify every undetermined example and find the Shapley value of each attribute, i.e. the corresponding Shapley vector denoted Φ(g). The example g9 is classified positively since it does not contain any negative hypothesis, while {HS} ⊆ g9′ = {F, Y, Sp, HS}. The corresponding Shapley vector is Φ(g9) = (0, 0.0, 0.0, 0, 0, 0, 0.0, 0, 1.0, 0, 0).


The decimal point for some zero values of ϕm(g), i.e. 0.0, means that these values have been computed, while 0 (without a decimal point) means that the corresponding attribute m is not in g′. For the example g9, only one attribute has a non-zero Shapley value, ϕHS(g9) = 1.

The example g10 remains undetermined, since g10′ = {F, O, HE, A} does not contain positive or negative hypotheses. This results in a zero Shapley vector: Φ(g10) = (0, 0.0, 0, 0, 0.0, 0.0, 0, 0, 0, 0.0, 0).

When an undetermined example is classified negatively, the Shapley vector contains negative values that add up to −1. For example, two negative hypotheses are contained in g11, namely {Se} ⊆ g11′ = {Ma, Mi, Se, L} and {Ma, L} ⊆ g11′. Only the attributes of these two negative hypotheses have non-zero Shapley values: Φ(g11) = (−1/6, 0, 0, 0.0, 0, 0, 0, −2/3, 0, 0, −1/6).

In case an undetermined example is classified contradictory, at least two hypotheses of both signs are contained in it and the sum of Shapley values for all its attributes is equal to zero. The positive hypothesis {HS} is contained in g12′ = {Ma, O, Se, HS}, which also contains the negative hypothesis {Se}. Two components of the Shapley vector are non-zero, namely ϕSe(g12) = −1 and ϕHS(g12) = 1. Thus, Φ(g12) = (0.0, 0, 0, 0, 0.0, 0, 0, −1.0, 1.0, 0, 0).

Having Shapley values of undetermined examples, we are able not only to determine the result of classification but also to rank individual attributes by their importance for the outcome of the classification. In the case of the example g11, the attribute Se denoting secondary education has a two times higher value in magnitude than the two remaining attributes Ma and L, expressing male gender and low income, respectively; their signs show negative classification. In the case of the example g12, both attributes Se and HS are equally important, but being of opposite signs cancels classification to one of the two classes.

Proposition 2. The Shapley value ϕm(g) of an attribute m for an object g fulfils the following properties:

1. ϕm(g) = 0 for every m ∈ g′ that does not belong to at least one positive or negative hypothesis contained in g′;
2. Σ_{m∈g′} ϕm(g) = 1 if g is classified positively;
3. Σ_{m∈g′} ϕm(g) = 0 if g is classified contradictory or remains undetermined;
4. Σ_{m∈g′} ϕm(g) = −1 if g is classified negatively.

An attribute m such that ϕm (g) = 0 is known as the dummy player [25] since its contribution for every formed coalition is zero.
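For completeness, a brute-force Python sketch of formula (2) follows; it is our own code, and with the hypotheses of the credit-scoring example it reproduces the non-zero components of Φ(g11) reported above.

```python
# Brute-force Shapley values of formula (2): enumerate all coalitions
# S of the attributes of g and weight the marginal contributions of m.
from itertools import combinations
from math import factorial

def v(S, H_pos, H_neg):
    pos = any(h <= S for h in H_pos)
    neg = any(h <= S for h in H_neg)
    return 1 if pos and not neg else -1 if neg and not pos else 0

def shapley(g_attrs, H_pos, H_neg):
    n = len(g_attrs)
    phi = {}
    for m in g_attrs:
        rest = g_attrs - {m}
        total = 0.0
        for size in range(n):
            for combo in combinations(rest, size):
                S = set(combo)
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(S | {m}, H_pos, H_neg) - v(S, H_pos, H_neg))
        phi[m] = total
    return phi

H_pos = [{"F", "Mi", "HE", "A"}, {"HS"}]
H_neg = [{"F", "O", "Sp", "A"}, {"Se"}, {"Ma", "L"}]
print(shapley({"Ma", "Mi", "Se", "L"}, H_pos, H_neg))
# Se: -0.666..., Ma: -0.166..., L: -0.166..., Mi: 0.0
```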

5 Machine Experiments Demo

To provide the reader with examples of Shapley vectors on real data, we take the Zoo dataset from the UCI Machine Learning Repository4. It has 101 examples of different animals along with their 16 attributes; the target attribute, type, has seven values. One attribute expresses the number of legs with values 0, 2, 4, 5, 6 and 8. We transform it into a binary attribute using a nominal scale. The resulting input context has 101 objects and 21 attributes in total. Since this dataset supposes a solution to a multi-class problem, we consider one of the possible associated binary classification problems where the major class, mammals, is our positive class, while all the remaining examples compose the negative class. We could also consider the remaining six classes as positive classes using the scheme one-versus-the-rest. There are 41 examples for the positive class and 60 examples for the negative class5.

After generating the hypotheses, we get only one positive hypothesis, H+ = {{milk, backbone, breathes}}, and nine negative hypotheses:

H− = {{feathers, eggs, backbone, breathes, legs2, tail}, {predator, legs8}, {eggs, airborne, breathes}, {eggs, aquatic, predator, legs5}, {eggs, legs0}, {eggs, toothed, backbone}, {eggs, legs6}, {venomous}, {eggs, domestic}}.

If we apply the trained JSM classifier to all the examples of the positive class, we obtain the same Shapley vector for all of them since they are classified positively by the only positive hypothesis (in the absence of negative hypotheses): (0.0, 0, 0, 1/3, 0, 0, 0.0, 0.0, 1/3, 1/3, 0, 0, 0.0, 0, 0, 0, 0, 0, 0, 0, 0.0). Note that in the Shapley vector above, we display zero components in two different ways as was explained previously. Thus, values 0.0 are shown for non-contributing attributes of example 1, i.e. aardvark with attributes {hair, milk, predator, toothed, backbone, breathes, legs4, tail, catsize}. All the attributes from the unique positive hypothesis, i.e. milk, backbone and breathes, contribute equally by 1/3.

However, not every negative example is classified negatively. Two negative examples, crab and tortoise, remain undetermined: crab′ = {eggs, aquatic, predator, legs4} and tortoise′ = {eggs, backbone, breathes, legs4, tail, catsize}.

4 https://archive.ics.uci.edu/ml/datasets/zoo.
5 There are no undetermined examples here since we would like to test decision explainability by means of Shapley values rather than to test prediction accuracy of the JSM-method.

Let us have a look at the correctly classified negative examples. For example, duck′ = {feathers, eggs, airborne, aquatic, backbone, breathes, legs2, tail} contains two negative hypotheses {feathers, eggs, backbone, breathes, legs2, tail} and {eggs, airborne, breathes}. The Shapley vector for the duck example is6: −(0, 0.024, 0.357, 0, 0.190, 0.0, 0, 0, 0.024, 0.357, 0, 0, 0, 0, 0.024, 0, 0, 0, 0.024, 0, 0). Eggs, breathes, and airborne are the most important attributes with Shapley values −0.357, −0.357, and −0.19, respectively, while the attribute aquatic has zero importance in terms of Shapley value. A useful interpretation of classification results could be an explanation for false positive or false negative cases. Thus, one can see which attributes have the largest contribution to the wrong classification, i.e. a possible reason for the classifier's mistake. The executable iPython scripts for the synthetic context on credit scoring and the scaled zoo context can be found online7 at http://bit.ly/ShapDemo2020.

6 Conclusion

In this paper we have introduced the usage of the Shapley value from Cooperative Game Theory to rank attributes of objects while performing binary classification with the JSM-method within the FCA setting. It helps a decision maker not only to see the hypotheses contained in the corresponding example's description but also to take into account the contribution of the constituent attributes. Thus, it is also possible to explain mistakes of the classifiers known as false positive and false negative examples.

As the forthcoming work, we would like to address scalability issues, which are due to subset enumeration, and a suitable visual display for showing attribute importance. Note that all the subsets of attributes from g′ that do not contain any positive or negative hypothesis can be omitted during summation in (2). Therefore, we need to explore only order filters (w.r.t. the Boolean lattice (2^{g′}, ⊆)) of hypotheses contained in g′. Histograms of importance vectors can be implemented and used for better differentiation between small and high importance values of vectors like XGBoost feature importance diagrams [2]. The considered variant of the JSM-method might be rather crisp and cautious in the sense that it prohibits classification in case of hypotheses of both signs, so its variants with voting are important. Another interesting avenue is the prospective extension of the proposed approach for multi-class classification problems via

6 All the non-zero values are given with precision up to the third significant sign after the decimal point.
7 The full version of this script along with the used datasets will be available at https://github.com/dimachine/Shap4JSM.


one-versus-the-rest and one-versus-one schemes for reduction to binary classification problems. The comparison with SHAP [21] is also desirable. However, it is not possible in a direct way since SHAP relies on a probabilistic interpretation of Shapley values in terms of the conditional probability to belong to a certain class given an input example x, e.g. P(Class = +1|x), and is designed for classifiers with such a probabilistic output.

Acknowledgements. The study was implemented in the framework of the Basic Research Program at the National Research University Higher School of Economics, and funded by the Russian Academic Excellence Project ‘5-100’. The first author was also supported by the Russian Science Foundation under grant 17-11-01276 at St. Petersburg Department of Steklov Mathematical Institute of Russian Academy of Sciences, Russia. The first author would like to thank Prof. Fuad Aleskerov for the inspirational lectures on Collective Choice and Alexey Dral’ from BigData Team for pointing to Shapley values as an explainable Machine Learning tool.

References 1. Blinova, V.G., Dobrynin, D.A., Finn, V.K., Kuznetsov, S.O., Pankratova, E.S.: Toxicology analysis by means of the JSM-method. Bioinformatics 19(10), 1201– 1207 (2003) 2. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 785–794 (2016) 3. Dubois, V., Quafafou, M.: Concept learning with approximation: rough version spaces. In: Alpigini, J.J., Peters, J.F., Skowron, A., Zhong, N. (eds.) RSCTC 2002. LNCS (LNAI), vol. 2475, pp. 239–246. Springer, Heidelberg (2002). https://doi. org/10.1007/3-540-45813-1 31 4. Faigle, U., Grabisch, M., Jim´enez-Losada, A., Ord´ on ˜ez, M.: Games on concept lattices: Shapley value and core. Discrete Appl. Math. 198, 29–47 (2016) 5. Fayyad, U.M., Piatetsky-Shapiro, G., Smyth, P.: From data mining to knowledge discovery in databases. AI Mag. 17(3), 37–54 (1996) 6. Finn, V.: On machine-oriented formalization of plausible reasoning in F. BaconJ.S.Mill Style. Semiotika i Informatika 20, 35–101 (1983). (in Russian) 7. F¨ urnkranz, J., Gamberger, D., Lavrac, N.: Foundations of Rule Learning. Cognitive Technologies. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3540-75197-7 8. Harras, G.: Concepts in linguistics – concepts in natural language. In: Ganter, B., Mineau, G.W. (eds.) ICCS-ConceptStruct 2000. LNCS (LNAI), vol. 1867, pp. 13–26. Springer, Heidelberg (2000). https://doi.org/10.1007/10722280 2 9. Ganter, B., Kuznetsov, S.O.: Pattern structures and their projections. In: Delugach, H.S., Stumme, G. (eds.) ICCS-ConceptStruct 2001. LNCS (LNAI), vol. 2120, pp. 129–142. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-445838 10 10. Ganter, B., Kuznetsov, S.O.: Hypotheses and version spaces. In: Ganter, B., de Moor, A., Lex, W. (eds.) ICCS-ConceptStruct 2003. LNCS (LNAI), vol. 2746, pp. 83–95. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45091-7 6


11. Ganter, B., Kuznetsov, S.O.: Scale coarsening as feature selection. In: Medina, R., Obiedkov, S. (eds.) ICFCA 2008. LNCS (LNAI), vol. 4933, pp. 217–228. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78137-0 16 12. Ganter, B., Obiedkov, S.A.: Conceptual Exploration. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49291-8 13. Ganter, B., Wille, R.: Formal Concept Analysis - Mathematical Foundations. Springer, Heidelberg (1999). https://doi.org/10.1007/978-3-642-59830-2 14. Ignatov, D.I., Zhuk, R., Konstantinova, N.: Learning hypotheses from triadic labeled data. In: 2014 IEEE/WIC/ACM International Joint Conference on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014, vol. I, pp. 474– 480 (2014) 15. Kuznetsov, S.O.: Galois connections in data analysis: contributions from the Soviet era and modern Russian research. In: Ganter, B., Stumme, G., Wille, R. (eds.) Formal Concept Analysis. LNCS (LNAI), vol. 3626, pp. 196–225. Springer, Heidelberg (2005). https://doi.org/10.1007/11528784 11 16. Kuznetsov, S.O.: On stability of a formal concept. Ann. Math. Artif. Intell. 49(1– 4), 101–115 (2007) 17. Kuznetsov, S.O., Samokhin, M.V.: Learning closed sets of labeled graphs for chemical applications. In: Kramer, S., Pfahringer, B. (eds.) ILP 2005. LNCS (LNAI), vol. 3625, pp. 190–208. Springer, Heidelberg (2005). https://doi.org/10. 1007/11536314 12 18. Kuznetsov, S.: JSM-method as a machine learning method. Itogi Nauki i Tekhniki, ser. Informatika 15, 17–53 (1991). (in Russian) 19. Kuznetsov, S.: Mathematical aspects of concept analysis. J. Math. Sci. 80(2), 1654– 1698 (1996) 20. Lipton, Z.C.: The mythos of model interpretability. Commun. ACM 61(10), 36–43 (2018) 21. Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: I.G., et al. (ed.) Advances in Neural Information Processing Systems, vol. 30, pp. 4765–4774. Curran Associates, Inc. (2017) 22. Maafa, K., Nourine, L., Radjef, M.S.: Algorithms for computing the Shapley value of cooperative games on lattices. Discrete Appl. Math. 249, 91–105 (2018) 23. Mitchell, T.M.: Version spaces: a candidate elimination approach to rule learning. In: Reddy, R. (ed.) Proceedings of the 5th International Joint Conference on Artificial Intelligence, 1977, pp. 305–310. William Kaufmann (1977) 24. Molnar, C.: Interpretable Machine Learning (2019). https://christophm.github.io/ interpretable-ml-book/ 25. Shapley, L.S.: A value for n-person games. In: Contributions to the Theory of Games, vol. 2, no. 28, pp. 307–317 (1953) ˇ 26. Strumbelj, E., Kononenko, I.: Explaining prediction models and individual predictions with feature contributions. Knowl. Inf. Syst. 41(3), 647–665 (2013). https:// doi.org/10.1007/s10115-013-0679-x

Pruning in Map-Reduce Style CbO Algorithms

Jan Konecny and Petr Krajča(B)

Department of Computer Science, Palacký University Olomouc, 17. listopadu 12, 77146 Olomouc, Czech Republic
{jan.konecny,petr.krajca}@upol.cz

Abstract. Enumeration of formal concepts is crucial in formal concept analysis. Particularly efficient for this task are algorithms from the Close-by-One family (shortly, CbO-based algorithms). State-of-the-art CbO-based algorithms, e.g. FCbO, In-Close4, and In-Close5, employ several techniques, which we call pruning, to avoid some unnecessary computations. However, the number of the formal concepts can be exponential w.r.t. dimension of the input data. Therefore, the algorithms do not scale well and large datasets become intractable. To resolve this weakness, several parallel and distributed algorithms were proposed. We propose new CbO-based algorithms intended for Apache Spark or a similar programming model and show how the pruning can be incorporated into them. We experimentally evaluate the impact of the pruning and demonstrate the scalability of the new algorithm.

Keywords: Formal concept analysis · Closed sets · Map-reduce model · Close-by-One · Distributed computing

1 Introduction

Formal concept analysis (FCA) [10] is a well-established method of data analysis, having numerous applications, including, for instance, mining of non-redundant association rules [19], factorization of Boolean matrices [7], text mining [16], or recommendation systems [1]. The central notion of FCA is a formal concept and many applications of FCA depend on enumeration of formal concepts. Kuznetsov [14] proposed a tree-recursive algorithm called Close-by-One (CbO) which enumerates formal concepts in lexicographical order. The tree-recursive nature of CbO allows for more efficient implementation and further enhancements. These enhancements include parallel execution [11,12], partial closure computation [2–6], pruning [4–6,15], or execution using the map-reduce framework [13]. The CbO-based algorithms are among the fastest algorithms for enumerating formal concepts.

Supported by the grant JG 2019 of Palacký University Olomouc, No. JG 2019 008.


One of the most recognized problems of FCA is the large number of formal concepts present in the data. Typically, with growing sizes of data, the number of formal concepts grows substantially: even adding one column can double the number of formal concepts. Due to this exponential nature, large datasets become intractable. To resolve this weakness, several parallel [11,12] and distributed [8,13,17] algorithms have been proposed. However, parallel or distributed programming may be challenging even for experienced developers. To simplify the development of distributed programs, Google proposed a restricted programming model called map-reduce [9]. This model allows the user to describe algorithms solely by simple transformation functions and leave tasks related to distributed computing to the underlying framework. This approach has been the subject of enhancements and currently is superseded by a framework called Apache Spark – a state-of-the-art approach to large dataset processing. This framework transforms data similarly, but provides a less restricted interface for data processing than map-reduce and, most importantly, is by more than an order of magnitude faster [18].

The main contribution of the paper is a set of new CbO-based algorithms for enumerating formal concepts intended for Apache Spark or a similar programming model. Furthermore, we show how the pruning utilized in [5] can be incorporated into them. We experimentally evaluate the impact of the pruning and demonstrate the scalability of the new algorithm.

This paper is organized as follows. In Sect. 2, we introduce basic notions of FCA along with the programming model we use. In Sect. 3, we describe the basic CbO algorithm, its enhanced variants, and reformulation of these algorithms for an Apache Spark programming model. The performance of these algorithms is evaluated in Sect. 4. Section 5 summarizes and concludes the paper.

2 Preliminaries

We introduce basic notions of FCA and the programming model we use.

2.1 Formal Concept Analysis

The input for FCA is a formal context—a triplet ⟨X, Y, I⟩ where X is a finite non-empty set of objects, Y is a finite non-empty set of attributes, and I ⊆ X × Y is a relation of incidence between objects and attributes; incidence ⟨x, y⟩ ∈ I means that the object x ∈ X has the attribute y ∈ Y. Every formal context ⟨X, Y, I⟩ induces two so-called concept-forming operators ↑ : 2^X → 2^Y and ↓ : 2^Y → 2^X defined, for each A ⊆ X and B ⊆ Y, by

A↑ = {y ∈ Y | for each x ∈ A : ⟨x, y⟩ ∈ I},    (1)
B↓ = {x ∈ X | for each y ∈ B : ⟨x, y⟩ ∈ I}.    (2)


In words, A↑ is the set of all attributes shared by all objects from A and B↓ is the set of all objects sharing all attributes from B. For singletons {i}, we use the simplified notation i↑ and i↓. A formal concept in ⟨X, Y, I⟩ is a pair ⟨A, B⟩ such that A↑ = B and B↓ = A, where ↑ and ↓ are concept-forming operators induced by ⟨X, Y, I⟩. The sets A and B are called the extent and the intent, respectively. Throughout the paper, we assume that the set of attributes is Y = {1, 2, . . . , n}. We have introduced only the notions from FCA that are necessary for the rest of the paper. The interested reader can find a more detailed description of FCA in [10].

2.2 Map-Reduce Style Data Processing

In the seminal work [9], Google revealed the architecture of its internal system which allows them to process large datasets with commodity hardware. In short, the proposed system and its programming model represents all data as key-value pairs which are transformed with two operations, map and reduce. This is why the data processing system based on this model is called the map-reduce framework. The main advantage of this approach is that the programmer has to provide transformation functions and the framework takes care of all necessary tasks related to distributed computing, for instance, workload distribution, failure detection and recovery, etc. Several independent implementations of the map-reduce framework [9] are available. The most prominent is Apache Hadoop1. Beyond this, the map-reduce data processing model is used, for instance, in Infinispan, Twister, or Apache CouchDB.

Remark 1. It has been shown that many algorithms can be represented only with these two operations. Algorithms for enumeration of formal concepts are not an exception. Variants of CbO [13], Ganter's NextClosure [17], and UpperNeighbor [8] were proposed for the map-reduce framework. In the cases of [17] and [8], to handle iterative algorithms, specialized map-reduce frameworks are used.

Nowadays, the original map-reduce approach has been superseded by more general techniques, namely by Resilient Distributed Datasets (RDDs) [18] as implemented in Apache Spark2. RDDs allow us to work with general tuples and provide a more convenient interface for data processing. This interface is not limited to the two elementary operations, map and reduce. More convenient operations, like map, flatMap, filter, groupBy, or join, are available. Even though the underlying technology is different, the approach to data processing is similar to map-reduce. Likewise, the program is also represented as a sequence of transformations and the framework takes care of workload distribution, etc.

1 https://hadoop.apache.org/.
2 https://spark.apache.org/.


Remark 2. Throughout the paper, to describe the proposed algorithms, we use operations which are close to those provided by Apache Spark. However, all of these operations can be transformed to map and reduce operations as used in the map-reduce framework [9]. Therefore, we term the proposed algorithms map-reduce style.

We shall introduce the most important Apache Spark operations we use in this paper:

(i) Operation map(X, f) takes a collection of tuples X and function f, and applies f on each tuple in X.
(ii) Operation filter(X, p) takes a collection of tuples X and predicate p, and keeps only tuples satisfying the predicate p.
(iii) Operation flatMap(X, f) takes a collection of tuples X and function f that for every tuple returns a collection of tuples, and applies the function f on every tuple. It then merges all the collections into a single collection.

Other operations we introduce where necessary. Furthermore, we shall use x ⇒ y to denote transformation functions. Namely:

(i) ⟨x1, . . . , xn⟩ ⇒ ⟨y1, . . . , ym⟩ denotes a function transforming a tuple ⟨x1, . . . , xn⟩ into a tuple ⟨y1, . . . , ym⟩,
(ii) ⟨x1, . . . , xn⟩ ⇒ {⟨y1, . . . , ym⟩ | cond} is a function transforming a tuple ⟨x1, . . . , xn⟩ into a set of tuples ⟨y1, . . . , ym⟩ satisfying condition cond,
(iii) ⟨x1, . . . , xn⟩ ⇒ condition represents a predicate.

For example, for a set X = {⟨a, 1⟩, ⟨b, −2⟩, ⟨c, 3⟩} ⊆ {a, b, c} × Z:

map(X, ⟨x, y⟩ ⇒ ⟨x, y, y²⟩) = {⟨a, 1, 1⟩, ⟨b, −2, 4⟩, ⟨c, 3, 9⟩},
filter(X, ⟨x, y⟩ ⇒ y > 0) = {⟨a, 1⟩, ⟨c, 3⟩},
flatMap(X, ⟨x, y⟩ ⇒ {⟨x, z⟩ | z > 0 and z ≤ y}) = {⟨a, 1⟩, ⟨c, 1⟩, ⟨c, 2⟩, ⟨c, 3⟩}.
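For readers more familiar with code than with the ⇒ notation, the same three transformations can be written in plain Python as follows (Apache Spark exposes them as the RDD methods map, filter and flatMap); this is our own illustration of the example above.

```python
# The example of Remark 2 rendered with Python set comprehensions.
X = {("a", 1), ("b", -2), ("c", 3)}

mapped = {(x, y, y ** 2) for (x, y) in X}
filtered = {(x, y) for (x, y) in X if y > 0}
flat_mapped = {(x, z) for (x, y) in X for z in range(1, y + 1)}

print(mapped)       # {('a', 1, 1), ('b', -2, 4), ('c', 3, 9)}
print(filtered)     # {('a', 1), ('c', 3)}
print(flat_mapped)  # {('a', 1), ('c', 1), ('c', 2), ('c', 3)}
```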

3 Close-by-One Algorithm

We describe the basic CbO algorithm and its two reformulations for the map-reduce style programming model. Then we discuss pruning techniques improving efficiency of CbO. We show how one of these techniques can be incorporated into the map-reduce style CbO algorithms. The two new map-reduce style CbO algorithms with pruning are the main contribution of the paper.

3.1 Basic Close-by-One

The CbO algorithm belongs to a family of algorithms that enumerate formal concepts in lexicographical (or similar) order and use this order to ensure that every formal concept is listed exactly once. Note that intent B is lexicographically smaller than D, if B ⊂ D, or B ⊄ D and min((B ∪ D) \ (B ∩ D)) ∈ B.


For instance, a set {1} is lexicographically smaller than {1, 2} and the set {1, 2} is lexicographically smaller than {1, 3}. The CbO algorithm can be described as a tree-recursive procedure, GenerateFrom, which has three parameters (see Algorithm 1)—extent A, intent B, and attribute y, indicating the last attribute included into the intent. The algorithm starts (line 9) with the topmost formal concept, i.e. ⟨X, X↑⟩ and 0 (indicating no attribute has been included yet). The GenerateFrom procedure prints the given concept ⟨A, B⟩ out (line 2) and extends the intent B with all attributes i such that i ∉ B and i > y (lines 3 and 4). Subsequently, a new extent (B ∪ {i})↓ is obtained. Note that (B ∪ {i})↓ = A ∩ i↓. This fact allows us to eliminate the time-consuming operation ↓ and replace it with an intersection which is significantly faster (line 5). Afterwards, a new intent (B ∪ {i})↓↑ is computed (line 6) and its canonicity is checked (line 7). A formal concept is canonical if its intent D is not lexicographically smaller than B. That is, if D ∩ {1, . . . , i − 1} = B ∩ {1, . . . , i − 1}. For convenience, we use the shorter notation Di = D ∩ {1, . . . , i − 1}. If the concept is canonical, it is recursively passed to GenerateFrom, along with the attribute that was inserted into the intent (line 8).

Algorithm 1: Close-by-One

 1  proc GenerateFrom(A, B, y):
      input: A is extent, B is intent, y is the last added attribute
 2    print(A, B)
 3    for i ← y + 1 to n do
 4      if i ∉ B then
 5        C ← A ∩ i↓
 6        D ← C↑
 7        if Di = Bi then
 8          GenerateFrom(C, D, i)
 9  GenerateFrom(X, X↑, 0)
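A direct Python rendering of Algorithm 1 may clarify the control flow; this is our own sketch, assuming the context is stored as a dict from objects to attribute sets, and it is not the authors' implementation.

```python
# Close-by-One (Algorithm 1) over a context given as object -> set of
# attributes 1..n; down and up are the concept-forming operators.

def close_by_one(context, n):
    objects = set(context)

    def down(attrs):                        # B-down: objects having all attrs
        return {g for g in objects if attrs <= context[g]}

    def up(objs):                           # A-up: attributes shared by objs
        common = set(range(1, n + 1))
        for g in objs:
            common &= context[g]
        return common

    concepts = []

    def generate_from(A, B, y):
        concepts.append((A, B))             # line 2: output the concept
        for i in range(y + 1, n + 1):       # lines 3-4
            if i not in B:
                C = A & down({i})           # line 5: new extent
                D = up(C)                   # line 6: new intent
                if {j for j in D if j < i} == {j for j in B if j < i}:
                    generate_from(C, D, i)  # lines 7-8: canonicity test passed

    generate_from(objects, up(objects), 0)  # line 9
    return concepts

ctx = {"g1": {1, 2}, "g2": {2, 3}, "g3": {1, 3}}
for A, B in close_by_one(ctx, 3):
    print(sorted(A), sorted(B))             # 8 concepts, each exactly once
```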

3.2 Map-Reduce Style Close-by-One

We use a depth first search (DFS) strategy to describe CbO since this strategy is natural for the algorithm. However, other search strategies are possible. One may use also breadth-first search (BFS) [13] and even a combination of BFS and DFS [3–6,15]. No matter which strategy is used, CbO always enumerates the same set of formal concepts, but in different order. This feature is essential since it allows us to express CbO by means of distributed and parallel frameworks. Note that algorithms for map-reduce frameworks often demand BFS.


The map-reduce style CbO algorithm is an iterative algorithm, see Algorithm 2 for pseudo-codes. Each of its iterations represents the processing of a single layer of the CbO’s search tree. Note that each step has to be expressed in transformations provided by the given framework.

Algorithm 2: Close-by-One

 1  proc CbOPass(Li):
      input: Li is a collection of tuples ⟨A, B, y⟩ where A is extent, B is intent, y is the last added attribute
 2    Li^att ← flatMap(Li, ⟨A, B, y⟩ ⇒ {⟨A, B, y, i⟩ | i ∈ Y and i > y and i ∉ B})
 3    Li^ext ← map(Li^att, ⟨A, B, y, i⟩ ⇒ ⟨B, y, i, A ∩ i↓⟩)
 4    Li^int ← map(Li^ext, ⟨B, y, i, C⟩ ⇒ ⟨B, y, i, C, C↑⟩)
 5    Li^can ← filter(Li^int, ⟨B, y, i, C, D⟩ ⇒ (Bi = Di))
 6    Li+1 ← map(Li^can, ⟨B, y, i, C, D⟩ ⇒ ⟨C, D, i⟩)
 7    return Li+1
 8  L0 ← {⟨X, X↑, 0⟩}
 9  i ← 0
10  while |Li| > 0 do
11    Li+1 ← CbOPass(Li)
12    i ← i + 1
13  return ⋃j=0..i Lj

The algorithm starts (line 8) with the first layer L0 containing a single tuple X, X ↑ , 0 describing the topmost formal concept, along with attribute 0, indicating that no attribute has been included yet to build this formal concept. This layer is passed to the CbOPass procedure, which uses a sequence of transformations to obtain a new layer of formal concepts. The first transformation (line 2) for each formal concept generates a set of tuples consisting of a given concept along with an attribute that will extend the given concept in the next step. Subsequently, these tuples are used to compute the extents of the new concepts (line 3). Notice that the original extent is no longer necessary, thus is omitted from the tuple. Afterwards, the intents are obtained (line 4). At this point, the new formal concepts are fully available and all that remains is to filter those passing the canonicity test (line 5) and keep only the new extent and the intent along with the attribute that was added (line 6). The outcome of the CbOPass is then used as an input for the next iteration of the algorithm. The algorithm stops when no new concept is generated (lines 10 to 12). All transformations used in Algorithm 2 are obviously parallelizable. However, the first transformation (line 2) is not associated with a computationally heavy task, hence the workload distribution may cause an unnecessary overhead.
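To connect the pseudocode with executable code, the following sketch (ours) expresses one CbOPass layer with ordinary Python comprehensions standing in for flatMap/map/filter; on an actual Spark RDD the same chain would use rdd.flatMap, rdd.map and rdd.filter. The operators down and up are assumed to be the concept-forming operators, e.g. the ones from the earlier close_by_one sketch.

```python
# One layer of the map-reduce style CbO (Algorithm 2), our own sketch.
def cbo_pass(layer, n, down, up):
    """layer: iterable of (extent, intent, last_attribute) triples."""
    # flatMap (line 2): pick every attribute that may extend the intent
    attempts = [(A, B, y, i)
                for (A, B, y) in layer
                for i in range(y + 1, n + 1) if i not in B]
    # map (lines 3-4): compute the new extents, then the new intents
    extents = [(B, y, i, A & down({i})) for (A, B, y, i) in attempts]
    intents = [(B, y, i, C, up(C)) for (B, y, i, C) in extents]
    # filter (line 5): keep only canonical concepts
    canonical = [(B, y, i, C, D) for (B, y, i, C, D) in intents
                 if {j for j in D if j < i} == {j for j in B if j < i}]
    # map (line 6): produce the next layer
    return [(C, D, i) for (B, y, i, C, D) in canonical]

# Driver corresponding to lines 8-13:
#   layer = [(objects, up(objects), 0)]; concepts = []
#   while layer:
#       concepts += [(A, B) for (A, B, y) in layer]
#       layer = cbo_pass(layer, n, down, up)
```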


Thus, it may be reasonable to combine lines 2 and 3 into a single transformation as follows:

Li^ext ← flatMap(Li, ⟨A, B, y⟩ ⇒ {⟨B, y, i, A ∩ i↓⟩ | i ∈ Y and i > y and i ∉ B}).

This gives two variants of map-reduce style CbO: (i) the first one as described in Algorithm 2 and (ii) the latter, where the first two transformations are combined together. To distinguish these variants, we denote them (i) fine-grained and (ii) coarse-grained, since the first variant provides more fine-grained parallelism and the second one more coarse-grained parallelism.

3.3 Close-by-One with Pruning

The most fundamental challenge which all algorithms for enumerating formal concepts are facing is the fact that some concepts are computed multiple times. Due to the canonicity test, each concept is returned only once. However, the canonicity test does not prevent redundant computations. Therefore, several strategies to reduce redundant computations were proposed. In FCbO [15], a history of failed canonicity tests is kept during the tree descent and is used to skip particular attributes for which it is clear that the canonicity test fails. To maintain this history, FCbO uses an indexed set of sets of attributes. This means, each invocation of FCbO’s GenerateFrom requires passing a data structure of size O(|Y |2 ). This space requirement makes FCbO unsuitable for map-reduce style frameworks. In-Close4 [5] keeps a set of attributes during the descent for which intersection A ∩ i↓ is empty. This allows us to skip some unnecessary steps. Furthermore, InClose5 [6] extends In-Close4 with a new method for passing information about failed canonicity tests. Unlike FCbO, In-Close4 and In-Close5 require the passing of only a single set of attributes for which the canonicity test failed. This makes them more suitable for distributed programming. 3.4

3.4 Using Empty Intersections for Pruning

The In-Close4 algorithm is based on two observations. (i) If the intersection C = A ∩ i↓ (Algorithm 1, line 5) is empty, then the corresponding intent C↑ is Y. Note that the concept with the intent Y is the lexicographically largest concept, which always exists and is always listed as the last one. Thus, the canonicity test fails for all such concepts with the exception of the last one. (ii) Further, from the properties of concept-forming operators, it follows that for every two intents B, D ⊆ Y and attribute i ∈ Y such that B ⊂ D, i ∉ B, and i ∉ D, it holds that (B ∪ {i})↓ ⊇ (D ∪ {i})↓. In particular, if (B ∪ {i})↓ = ∅, then (D ∪ {i})↓ = ∅. In fact, observations (i) and (ii) provide information on certain canonicity test failures and their propagation. Not considering the last concept ⟨Y↓, Y⟩, we may use a simplified assumption that an empty extent implies canonicity failure. Furthermore, if adding an attribute i to the intent B leads to an empty extent, then adding i to any of its supersets D leads to an empty extent as well. Subsequently, this leads to a canonicity test failure, unless it is the last concept. Note that the concept ⟨Y↓, Y⟩ has to be treated separately.


Algorithm 3: Close-by-One with In-Close4 pruning
 1 proc GenerateFrom(A, B, y, N):
     input: A is extent, B is intent, y is the last added attribute, N is a set of attributes to skip
 2   print(A, B)
 3   M ← N
 4   for i ← y + 1 to n do
 5     if i ∉ B and i ∉ N then
 6       C ← A ∩ i↓
 7       if C = ∅ then
 8         M ← M ∪ {i}
 9       else
10         D ← C↑
11         if D_i = B_i then
12           PutInQueue(⟨C, D, i⟩)
13   while GetFromQueue(⟨C, D, i⟩) do
14     GenerateFrom(C, D, i, M)
15 GenerateFrom(X, X↑, 0, ∅)

Algorithm 3 shows how these observations can be incorporated into CbO. First, there is an additional argument N, a set of attributes that can be skipped (line 5) since their inclusion into an intent of a newly formed concept would lead to an empty extent. Additionally, a set M which is a copy of N is created (line 3). If the newly formed extent (line 6) is empty, the attribute i is inserted into the set M and the algorithm proceeds with the next attribute. To collect information on canonicity test failures, Algorithm 3 uses a combined DFS and BFS. The combination of DFS and BFS means that the recursive call is not performed immediately, but rather is postponed until all attributes are processed (line 12). All recursive calls are then processed in a single loop (lines 13 and 14).

Remark 3. Algorithm 3 is similar to In-Close4 [5]: it uses empty intersections for pruning and a combined depth-first and breadth-first search; however, it does not compute formal concepts incrementally.

In the same way as CbO is transformed into an iterative BFS map-reduce style algorithm, one may transform Algorithm 3 as well. However, handling information on empty intersections requires additional steps. All steps of the algorithm are described in Algorithm 4. We highlight only the main differences w.r.t. Algorithm 2. Namely, tuples processed in Algorithm 4 contain the additional set N with attributes to skip.
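A minimal sequential Python sketch of Algorithm 3 follows; it reuses the helpers attributes, down, up, and objects from the PySpark sketch above (our own toy definitions, not the authors' code) and starts with -1 as the initial last attribute.

from collections import deque

def generate_from(A, B, y, N, output):
    output.append((A, B))
    M = set(N)
    queue = deque()
    for i in (i for i in attributes if i > y):
        if i not in B and i not in N:
            C = A & down[i]
            if not C:
                M.add(i)                # attribute i leads to an empty extent
            else:
                D = up(C)
                # canonicity test: B and D must agree below i
                if {a for a in B if a < i} == {a for a in D if a < i}:
                    queue.append((frozenset(C), frozenset(D), i))
    while queue:                        # postponed recursive calls (BFS step)
        C, D, i = queue.popleft()
        generate_from(C, D, i, M, output)

concepts = []
generate_from(objects, up(objects), -1, set(), concepts)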


Algorithm 4: Close-by-One with In-Close4 pruning (fine-grained)
 1 proc CbO4PassFine(Li):
     input: Li is a collection of tuples ⟨A, B, y, N⟩ where A is extent, B is intent, y is the last added attribute, N is a set of attributes to skip
 2   Latt_i ← flatMap(Li, ⟨A, B, y, N⟩ ⇒ {⟨A, B, y, i, N⟩ | i ∈ Y and i > y and i ∉ B and i ∉ N})
 3   Lex0_i ← map(Latt_i, ⟨A, B, y, i, N⟩ ⇒ ⟨B, y, i, N, A ∩ i↓⟩)
 4   Lext_i ← filter(Lex0_i, ⟨B, y, i, N, C⟩ ⇒ |C| > 0)
 5   Lint_i ← map(Lext_i, ⟨B, y, i, N, C⟩ ⇒ ⟨B, y, i, N, C, C↑⟩)
 6   Lcan_i ← filter(Lint_i, ⟨B, y, i, N, C, D⟩ ⇒ (B_i = D_i))
 7   Lem0_i ← filter(Lex0_i, ⟨B, y, i, N, C⟩ ⇒ |C| = 0)
 8   Lemp_i ← map(Lem0_i, ⟨B, y, i, N, C⟩ ⇒ ⟨⟨B, y⟩, i⟩)
 9   Lfails_i ← groupByKey(Lemp_i)
10   Ljoin_i ← leftOuterJoin(Lcan_i, Lfails_i) on ⟨B, y⟩
11   Li+1 ← map(Ljoin_i, ⟨B, y, i, N, C, D, M⟩ ⇒ ⟨C, D, i, M ∪ N⟩)
12   return Li+1
13 L0 ← {⟨X, X↑, 0, ∅⟩}
14 i ← 0
15 while |Li| > 0 do
16   Li+1 ← CbO4PassFine(Li)
17   i ← i + 1
18 return ⋃_{j=0}^{i} Lj

The condition in line 2 contains an additional check whether i ∉ N. Extents are computed (line 3) and non-empty extents (line 4) are used to obtain intents (line 5), which are tested for canonicity (line 6). Besides this, information on empty extents is collected (lines 7 to 9). First, empty extents are selected (line 7) and transformed into key-value pairs ⟨⟨B, y⟩, i⟩ (line 8). Then, the operation groupByKey is used to group these pairs by their keys (line 9) to form new pairs ⟨⟨B, y⟩, M⟩, where M denotes a set of attributes which led to an empty extent. It only remains to join together the valid concepts and the information on attributes implying canonicity test failures (line 10). This subtask is achieved with the leftOuterJoin operation. For each tuple ⟨B, y, i, N, C, D⟩ in the set Lcan_i, this operation finds a matching pair ⟨⟨B, y⟩, M⟩ in Lfails_i (i.e., the values B and y of both tuples are equal) and forms a new tuple ⟨B, y, i, N, C, D, M⟩. In the case that there is no matching pair in Lfails_i, an empty set is used instead of M. Afterwards, every tuple is transformed into a form suitable for the next iteration (line 11). Namely, values related to the current iteration are stripped out and the sets M and N are merged into a single set.
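The extra bookkeeping can be sketched in PySpark as follows; this builds on the helpers attributes, down, and up from the earlier sketch and is our illustration, not the authors' implementation.

def cbo4_pass_fine(layer):
    # layer: RDD of (extent A, intent B, last attribute y, skip set N)
    att = layer.flatMap(lambda t: [(t[0], t[1], t[2], i, t[3])
                                   for i in attributes
                                   if i > t[2] and i not in t[1] and i not in t[3]])
    ex0 = att.map(lambda t: (t[1], t[2], t[3], t[4], t[0] & down[t[3]]))
    ext = ex0.filter(lambda t: len(t[4]) > 0)
    intn = ext.map(lambda t: t + (up(t[4]),))
    can = intn.filter(lambda t: {a for a in t[0] if a < t[2]} ==
                                {a for a in t[5] if a < t[2]})
    # attributes that produced an empty extent, grouped per key (intent, y)
    fails = (ex0.filter(lambda t: len(t[4]) == 0)
                .map(lambda t: ((t[0], t[1]), t[2]))
                .groupByKey())
    joined = can.map(lambda t: ((t[0], t[1]), t)).leftOuterJoin(fails)
    return joined.map(lambda kv: (kv[1][0][4],                       # new extent C
                                  kv[1][0][5],                       # new intent D
                                  kv[1][0][2],                       # attribute i
                                  frozenset(kv[1][0][3]) |
                                  frozenset(kv[1][1] if kv[1][1] is not None else [])))

The same driver loop as before applies, with the initial layer containing the single tuple (objects, up(objects), -1, frozenset()).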


Algorithm 5: Computing extents
 1 proc CoarseExtents(A, B, y, N):
     input: A is extent, B is intent, y is the last added attribute, N is a set of attributes to skip
 2   M ← N
 3   E ← ∅
 4   for i ← y + 1 to n do
 5     if i ∉ B and i ∉ N then
 6       C ← A ∩ i↓
 7       if C = ∅ then
 8         M ← M ∪ {i}
 9       else
10         E ← E ∪ {⟨i, C⟩}
11   E′ ← ∅
12   foreach ⟨i, C⟩ ∈ E do
13     E′ ← E′ ∪ {⟨B, y, i, M, C⟩}
14   return E′

In Sect. 3.2, two variants of map-reduce style CbO are proposed—fine-grained and coarse-grained. Algorithm 4 corresponds to the fine-grained variant. Analogously, the coarse-grained variant can also be considered. To simplify the description of the coarse-grained variant, we introduce the auxiliary procedure CoarseExtents (see Algorithm 5), which for a given formal concept ⟨A, B⟩, attribute y, and a set N returns a set of tuples ⟨B, y, i, M, A ∩ i↓⟩ such that i is a newly added attribute, A ∩ i↓ is a non-empty extent, and M is a set of attributes which imply empty intersections for the given intent B.

Algorithm 6: Close-by-One pass with In-Close4 pruning (coarse-grained)

 1 proc CbO4PassCoarse(Li):
     input: Li is a collection of tuples ⟨A, B, y, N⟩ where A is extent, B is intent, y is the last added attribute, N is a set of attributes to skip
 2   Lext_i ← flatMap(Li, ⟨A, B, y, N⟩ ⇒ CoarseExtents(A, B, y, N))
 3   Lint_i ← map(Lext_i, ⟨B, y, i, M, C⟩ ⇒ ⟨B, y, i, M, C, C↑⟩)
 4   Lcan_i ← filter(Lint_i, ⟨B, y, i, M, C, D⟩ ⇒ (B_i = D_i))
 5   Li+1 ← map(Lcan_i, ⟨B, y, i, M, C, D⟩ ⇒ ⟨C, D, i, M⟩)
 6   return Li+1

The coarse-grained variant of the algorithm is presented in Algorithm 6. One can see that it requires fewer steps than Algorithm 4, since all steps related to empty intersections are processed in the CoarseExtents procedure. Notice that the coarse-grained variant is similar to Algorithm 2 – both consist of four steps: (i) computation of extents, (ii) computation of intents, (iii) application of the canonicity test, and (iv) transformation for the next pass. Steps collecting information on empty intersections are omitted in Algorithm 6.
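A sketch of the coarse-grained pass, again reusing attributes, down, and up from the earlier sketches (our own illustration, not the paper's code):

def coarse_extents(A, B, y, N):
    """Emit (B, y, i, M, C) for every non-empty C = A ∩ i↓; M collects the
    attributes whose intersection with A is empty."""
    M = set(N)
    collected = []
    for i in (i for i in attributes if i > y):
        if i not in B and i not in N:
            C = A & down[i]
            if not C:
                M.add(i)
            else:
                collected.append((i, frozenset(C)))
    return [(B, y, i, frozenset(M), C) for i, C in collected]

def cbo4_pass_coarse(layer):
    ext = layer.flatMap(lambda t: coarse_extents(t[0], t[1], t[2], t[3]))
    intn = ext.map(lambda t: t + (up(t[4]),))
    can = intn.filter(lambda t: {a for a in t[0] if a < t[2]} ==
                                {a for a in t[5] if a < t[2]})
    return can.map(lambda t: (t[4], t[5], t[2], t[3]))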

4 Evaluation

We implemented all described algorithms as Apache Spark jobs to evaluate their performance and scalability. A cluster of three virtual computers (nodes), each equipped with 40 cores of Intel Xeon E5-2660 (at 2.2 GHz) and 374 GiB of RAM and interconnected with a 1 Gbps network, was used. All nodes were running Debian 10.2, OpenJDK 1.8.0, and Apache Spark 2.4.4. All source codes were compiled with Scala 2.12. One node was selected as a master, and all nodes served as workers. To improve data locality, we limited each worker to 20 GiB RAM. To distribute the workload equally, after each iteration we repartitioned the data into a number of partitions equal to twice the number of cores.

We focus on the overall performance and scalability of the proposed algorithms first. We have selected three datasets from the UCI Machine Learning Repository (mushrooms, anonymous web, T10I4D100K) and our own dataset (debian tags) and measured the time taken to enumerate all formal concepts using 1, 20, 40, and 80 cores. The results are presented in Table 1. According to these results, map-reduce style CbO without pruning is significantly slower than the novel algorithms we propose. Furthermore, coarse-grained variants of both algorithms seem to be in many cases slightly faster, but it is not a general rule.

Properties of the datasets and the results of the measurements in Table 1 suggest that the map-reduce approach is more suited for large datasets. To confirm this, we prepared a second set of experiments. We created artificial datasets (generated with the IBM Quest Synthetic Data Generator) that simulate transactional data and focused on how the number of objects and attributes affects scalability. Scalability is the ratio S_T(n) = T_1 / T_n, where T_1 and T_n are the times necessary to complete the task with one and with n CPU cores, respectively. In other words, scalability provides information on how efficiently an algorithm can utilize multiple CPU cores.

Results of this experiment are presented in Figs. 1 and 2, which show the scalability of CbO with the In-Close4 pruning for datasets having 500 attributes, 2 % density, and varying numbers of objects (Fig. 1), and the scalability for datasets consisting of 10,000 objects, 2 % density, and varying numbers of attributes (Fig. 2). These results confirm that scalability depends on the size of the data and that mainly large datasets can benefit from distributed data processing. Notice that fine-grained variants appear to scale better than coarse-grained variants. This is partially due to the fact that the fine-grained variants tend to be slower when running on a single CPU core. When multiple cores are used, the differences between fine-grained and coarse-grained variants become negligible.
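A small sketch (not the authors' code) of the driver loop with the repartitioning policy described above, and of the scalability ratio:

import time

def run(pass_fn, first_layer, total_cores):
    start = time.time()
    layer, concepts = first_layer, []
    while not layer.isEmpty():
        concepts.extend(layer.collect())
        # repartition to twice the number of cores after every iteration
        layer = pass_fn(layer).repartition(2 * total_cores)
    return concepts, time.time() - start

def scalability(t1, tn):
    return t1 / tn          # S_T(n) = T_1 / T_n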


Table 1. Running times for selected datasets (cbo—algorithm without pruning; cbo4—algorithm with pruning from In-Close4)

Data                   Cores  cbo/fine   cbo/coarse  cbo4/fine  cbo4/coarse
Debian-tags                1  475.2 s    431.4 s     34.0 s     17.2 s
Size: 8124 × 119          20  104.3 s     83.7 s     14.8 s     11.7 s
Density: 19 %             40   41.9 s     47.1 s     14.8 s     11.0 s
                          80   40.5 s     53.1 s     17.7 s     12.8 s
An. web                    1  711.6 s    603.6 s    101.5 s     61.7 s
Size: 32710 × 296         20  102.8 s     74.3 s     23.9 s     21.3 s
Density: 1 %              40   60.7 s     68.3 s     21.1 s     19.4 s
                          80   70.0 s     56.5 s     24.7 s     21.9 s
Mushrooms                  1   64.4 s     60.4 s     47.4 s     33.6 s
Size: 8124 × 119          20   22.2 s     22.7 s     18.1 s     18.5 s
Density: 19 %             40   18.3 s     17.2 s     17.1 s     19.2 s
                          80   31.7 s     30.8 s     21.6 s     32.0 s
T10I4D100K                 1   3+ h       3+ h       79.5 min   40.7 min
Size: 100000 × 1000       20   49.5 min   54.3 min    4.8 min    3.2 min
Density: 1 %              40   39.3 min   37.9 min    3.5 min    2.4 min
                          80   35.9 min   34.1 min    3.2 min    2.5 min

Fig. 1. Scalability of CbO with the In-Close4 pruning w.r.t. number of objects—fine-grained (left), coarse-grained (right)

Remark 4. Besides the CbO-based algorithms with In-Close4 pruning, we also implemented algorithms with:
– FCbO pruning [15]: The performance of the resulting algorithm was very unsatisfactory. As expected, the algorithm was slow and memory demanding due to the quadratic nature of the information on failed canonicity tests.
– In-Close5 pruning [6]: The performance of the resulting algorithm did not show any significant improvements. We will present detailed results in the full version of this paper.

Fig. 2. Scalability of CbO with the In-Close4 pruning w.r.t. number of attributes— fine-grained (left), coarse-grained (right)

5 Conclusions

We designed and implemented a CbO-based algorithm with pruning intended for Apache Spark. While a few FCA algorithms have been adapted to the map-reduce framework (see Remark 1), to the best of our knowledge, this is the first map-reduce style algorithm for the enumeration of formal concepts that utilizes pruning techniques. Our experimental evaluation indicates promising performance results.

References 1. Akhmatnurov, M., Ignatov, D.I.: Context-aware recommender system based on Boolean matrix factorisation. In: Yahia, S.B., Konecny, J. (eds.) Proceedings of the Twelfth International Conference on Concept Lattices and Their Applications, Clermont-Ferrand, France, 13–16 October 2015, CEUR Workshop Proceedings, vol. 1466, pp. 99–110 (2015). CEUR-WS.org 2. Andrews, S.: In-Close, a fast algorithm for computing formal concepts. In: 17th International Conference on Conceptual Structures, ICCS 2009. Springer (2009) 3. Andrews, S.: In-Close2, a high performance formal concept miner. In: Andrews, S., Polovina, S., Hill, R., Akhgar, B. (eds.) ICCS 2011. LNCS (LNAI), vol. 6828, pp. 50–62. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22688-5 4 4. Andrews, S.: A ‘best-of-breed’ approach for designing a fast algorithm for computing fixpoints of Galois connections. Inf. Sci. 295, 633–649 (2015) 5. Andrews, S.: Making use of empty intersections to improve the performance of CbO-type algorithms. In: Bertet, K., Borchmann, D., Cellier, P., Ferr´e, S. (eds.) ICFCA 2017. LNCS (LNAI), vol. 10308, pp. 56–71. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59271-8 4


6. Andrews, S.: A new method for inheriting canonicity test failures in Close-by-One type algorithms. In: Ignatov, D.I., Nourine, L. (eds.) Proceedings of the Fourteenth International Conference on Concept Lattices and Their Applications, CLA 2018, Olomouc, Czech Republic, 12–14 June 2018, CEUR Workshop Proceedings, vol. 2123, pp. 255–266 (2018). CEUR-WS.org 7. Belohlavek, R., Vychodil, V.: Discovery of optimal factors in binary data via a novel method of matrix decomposition. J. Comput. Syst. Sci. 76(1), 3–20 (2010) 8. Chunduri, R.K., Cherukuri, A.K.: Haloop approach for concept generation in formal concept analysis. JIKM 17(3), 1850029 (2018) 9. Dean, J., Ghemawat, S.: MapReduce: simplified data processing on large clusters. In: Brewer, E.A., Chen, P. (eds.) 6th Symposium on Operating System Design and Implementation (OSDI 2004), San Francisco, California, USA, 6–8 December 2004, pp. 137–150. USENIX Association (2004) 10. Ganter, B., Wille, R.: Formal Concept Analysis Mathematical Foundations. Springer, Heidelberg (1999). https://doi.org/10.1007/978-3-642-59830-2 11. Krajca, P., Outrata, J., Vychodil, V.: Advances in algorithms based on CbO. In: Proceedings of the 7th International Conference on Concept Lattices and Their Applications, Sevilla, Spain, 19–21 October 2010, pp. 325–337 (2010) 12. Krajca, P., Outrata, J., Vychodil, V.: Parallel algorithm for computing fixpoints of Galois connections. Ann. Math. Artif. Intell. 59(2), 257–272 (2010) 13. Krajca, P., Vychodil, V.: Distributed algorithm for computing formal concepts using map-reduce framework. In: Adams, N.M., Robardet, C., Siebes, A., Boulicaut, J.-F. (eds.) IDA 2009. LNCS, vol. 5772, pp. 333–344. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03915-7 29 14. Kuznetsov, S.O.: A fast algorithm for computing all intersections of objects from an arbitrary semilattice. Nauchno-Tekhnicheskaya Informatsiya Seriya 2-Informatsionnye Protsessy i Sistemy 27(1), 17–20 (1993). https://www. researchgate.net/publication/273759395 SOKuznetsov A fast algorithm for computing all intersections of objects from an arbitrary semilattice NauchnoTekhnicheskaya Informatsiya Seriya 2 - Informatsionnye protsessy i sistemy No 1 pp17-20 19 15. Outrata, J., Vychodil, V.: Fast algorithm for computing fixpoints of Galois connections induced by object-attribute relational data. Inf. Sci. 185(1), 114–127 (2012) 16. Poelmans, J., Ignatov, D.I., Viaene, S., Dedene, G., Kuznetsov, S.O.: Text mining scientific papers: a survey on FCA-based information retrieval research. In: Perner, P. (ed.) ICDM 2012. LNCS (LNAI), vol. 7377, pp. 273–287. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-31488-9 22 ´ Foghl´ 17. Xu, B., de Fr´ein, R., Robson, E., O u, M.: Distributed formal concept analysis algorithms based on an iterative MapReduce framework. In: Domenach, F., Ignatov, D.I., Poelmans, J. (eds.) ICFCA 2012. LNCS (LNAI), vol. 7278, pp. 292–308. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29892-9 26 18. Zaharia, M., Das, T., Li, H., Shenker, S., Stoica, I.: Discretized streams: an efficient and fault-tolerant model for stream processing on large clusters. In: Fonseca, R., Maltz, D.A. (eds.) 4th USENIX Workshop on Hot Topics in Cloud Computing, HotCloud 2012, Boston, MA, USA, 12–13 June 2012. USENIX Association (2012) 19. Zaki, M.J.: Mining non-redundant association rules. Data Min. Knowl. Discov. 9(3), 223–248 (2004)

Pattern Discovery in Triadic Contexts

Rokia Missaoui¹, Pedro H. B. Ruas², Léonard Kwuida³, and Mark A. J. Song²

¹ Université du Québec en Outaouais, Gatineau, Canada
  [email protected]
² Pontifical Catholic University of Minas Gerais, Belo Horizonte, Brazil
  [email protected], [email protected]
³ Bern University of Applied Sciences, Bern, Switzerland
  [email protected]

Abstract. Many real-life applications are best represented as ternary and more generally n-ary relations. In this paper, we use Triadic Concept Analysis as a framework to mainly discover implications. Indeed, our contributions are as follows. First, we adapt the iPred algorithm for precedence link computation in concept lattices to the triadic framework. Then, new algorithms are proposed to compute triadic generators by extending the notion of faces and blockers to further calculate implications.

Keywords: Triadic concepts · Triadic generators · Implication rules

1 Introduction

Multidimensional data are ubiquitous in many real-life applications and Web resources. This is the case of multidimensional social networks, social resource sharing systems, and security policies. For instance, in the latter application, a user is authorized to use resources with given privileges under constrained conditions. A folksonomy is a resource sharing system where users assign tags to resources. Finding homogeneous groups of users and associations among their features can be useful for decision making and recommendation purposes.

The contributions of this paper are as follows: (i) we adapt the iPred algorithm for precedence link computation in concept lattices [1] to the triadic framework, and (ii) we propose new algorithms for computing triadic generators and implications. Our research work is implemented in a mostly Python coded platform whose architecture includes the following modules, as presented in Fig. 1:

1. The call of the Data-Peeler procedure [4] to get triadic concepts
2. The computation of the precedence links by adapting iPred to the triadic setting
3. The calculation of two kinds of generators, namely feature-based and extent-based generators
4. The generation of three kinds of triadic implications as well as two kinds of association rules
5. The adaptation of stability and separation indices to the triadic framework

In this paper, we will focus on presenting our work on Module 2 and parts of Modules 3 and 4 only.

This study was financed in part by the NSERC (Natural Sciences and Engineering Research Council of Canada) and by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES) - Finance Code 001.

Fig. 1. The architecture modules of our proposed solution.

Our findings consider the initial triadic context without any transformation such as context flattening used in [7,9], and extend notions and algorithms initially defined in Formal Concept Analysis to its triadic counterpart. The paper is organized as follows: in Sect. 2 we provide a background on Triadic Concept Analysis while in Sect. 3 we propose an adapted version of the iPred algorithm to link triadic concepts (TCs) according to the quasi-order based on extents. In a similar way, one can build a Hasse diagram with respect to either the intent or the modus. The identification of links allows us not only to produce a poset of triadic concepts, but also to compute generators using the notion of faces and blockers as defined by [10]. Section 4 describes algorithms for computing triadic generators while Sect. 5 provides a procedure to generate implications. Section 6 describes a preliminary empirical study, and Sect. 7 concludes the paper and identifies the future work.

2 Background

In this section we first recall the main notions related to Triadic Concept Analysis such as triadic concepts and implications. Then, we recall other notions related to the border and the faces of a given concept.

2.1 Triadic Concept Analysis

Triadic Concept Analysis (TCA) was originally introduced by Lehmann and Wille [8,12] as an extension to Formal Concept Analysis [6], to analyze data described by three sets K1 (objects), K2 (attributes) and K3 (conditions) together with a ternary relation Y ⊆ K1 × K2 × K3. We call K := (K1, K2, K3, Y) a triadic context, as illustrated by Table 1, borrowed from [5] and its adaptation in [9]. It represents the purchase of customers in K1 := {1, 2, 3, 4, 5} from suppliers in K2 := {Peter, Nelson, Rick, Kevin, Simon} of products in K3 := {accessories, books, computers, digital cameras}. With (a1, a2, a3) ∈ Y, we mean that the object a1 possesses the attribute a2 under the condition a3. For example, the value ac at the cross of Row 1 and Column R means that Customer 1 orders from Supplier R the products a and c. We will often use simplified notations for sets: e.g. 1 2 5 (or simply 125) stands for {1, 2, 5} and a b (or simply ab) means {a, b}.

A triadic concept or closed tri-set of a triadic context K is a triple (A1, A2, A3) (also denoted by A1 × A2 × A3) with A1 ⊆ K1, A2 ⊆ K2, A3 ⊆ K3 such that A1 × A2 × A3 ⊆ Y is maximal with respect to inclusion in Y. For example, the tri-set 135 × PN × d ⊆ Y is not closed since 135 × PN × d ⊊ 12345 × PN × d ⊆ Y. Let (A1, A2, A3) be a triadic concept; we will refer to A1 as its extent, A2 as its intent, A3 as its modus, and (A2, A3) as its feature. From Table 1, we can extract three triadic concepts with the same extent 2 5, namely 25 × PNR × d, 25 × PR × ad and 25 × R × abd.

To compute triadic concepts, three derivation operators are introduced. Let K := (K1, K2, K3, Y) be a triadic context and {i, j, k} = {1, 2, 3} with j < k. Let Xi ⊆ Ki and (Xj, Xk) ⊆ Kj × Kk, where we write (Xj, Xk) ⊆ Kj × Kk to mean that Xj ⊆ Kj and Xk ⊆ Kk. The (i)-derivation [8] is defined by:

Xi^(i) := {(aj, ak) ∈ Kj × Kk | (ai, aj, ak) ∈ Y for all ai ∈ Xi}   (1)

(Xj, Xk)^(i) := {ai ∈ Ki | (ai, aj, ak) ∈ Y for all (aj, ak) ∈ Xj × Xk}.   (2)

For example, the (2)-derivation in K is the derivation in the dyadic context K(2) := (K2, K1 × K3, Y(2)) with (y, (x, z)) ∈ Y(2) :⇐⇒ (x, y, z) ∈ Y.

Table 1. A triadic context.

      P    N    R    K    S
  1   abd  abd  ac   ab   a
  2   ad   bcd  abd  ad   d
  3   abd  d    ab   ab   a
  4   abd  bd   ab   ab   d
  5   ad   abd  abd  abc  a
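As a small illustration, the following Python snippet (ours, not the authors') encodes Table 1 as reconstructed above and computes the (1)-derivation of the extent {2, 5}, i.e., the (supplier, product) pairs shared by customers 2 and 5:

table = {
    1: {"P": "abd", "N": "abd", "R": "ac",  "K": "ab",  "S": "a"},
    2: {"P": "ad",  "N": "bcd", "R": "abd", "K": "ad",  "S": "d"},
    3: {"P": "abd", "N": "d",   "R": "ab",  "K": "ab",  "S": "a"},
    4: {"P": "abd", "N": "bd",  "R": "ab",  "K": "ab",  "S": "d"},
    5: {"P": "ad",  "N": "abd", "R": "abd", "K": "abc", "S": "a"},
}
Y = {(c, s, p) for c, row in table.items() for s, ps in row.items() for p in ps}

def derive_1(extent):
    """(supplier, product) pairs related to every customer in `extent`."""
    return {(s, p) for s in "PNRKS" for p in "abcd"
            if all((c, s, p) in Y for c in extent)}

print(sorted(derive_1({2, 5})))
# the result contains ('R', 'a'), ('R', 'b') and ('R', 'd'),
# consistent with the triadic concept 25 × R × abd mentioned above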

The set of triadic concepts can be ordered and forms a complete trilattice with a tridimensional representation [3,8]. For each i ∈ {1, 2, 3}, the relation (A1, A2, A3) ≲i (B1, B2, B3) :⇔ Ai ⊆ Bi is a quasi-order whose equivalence relation ∼i is given by (A1, A2, A3) ∼i (B1, B2, B3) ⇔ Ai = Bi. These three quasi-orders satisfy the following antiordinal dependencies: for {i, j, k} = {1, 2, 3} and for all triadic concepts (A1, A2, A3) and (B1, B2, B3), if (A1, A2, A3) ≲i (B1, B2, B3) and (A1, A2, A3) ≲j (B1, B2, B3), then (B1, B2, B3) ≲k (A1, A2, A3).

Biedermann [3] was the first to investigate implications in triadic contexts. A triadic implication has the form (A → D)_C and holds if, whenever A occurs under the conditions in C as a whole, then D also occurs under the same set of conditions. Later on, Ganter and Obiedkov [5] extended Biedermann's work and defined three types of implications: attribute × condition implications (AxCIs), conditional attribute implications (CAIs), and attributional condition implications (ACIs). In this paper we focus on implications "à la Biedermann", and we borrow the following definitions from [9] for conditional attribute and attributional condition implications. A Biedermann conditional attribute implication (BCAI) has the form (A → D)_C (s), and means that whenever A occurs under the conditions in C, then D also occurs under the same condition set with a support s. A Biedermann attributional condition implication (BACI) has the form (A → D)_C (s), and means that whenever the conditions in A occur for all the attributes in C, then the conditions in D also occur for the same attribute set with a support s.

2.2 Border and Faces

Let (P, ≤) be a poset. If x ≤ y (resp. x < y) we state that x is below (resp. strictly below) y. If x < y and there is no element between x and y, we call x a lower cover of y, and y an upper cover of x, and write x ≺ y. The upper cover of x is uc(x) = {y | x ≺ y}, and its lower cover is lc(x) = {y | y ≺ x} [2]. Let L = (C, ≤1 ) be a poset such that each node represents all the triadic concepts associated with the same extent, and ≤1 the order relation induced by the quasi-order 1 . This poset is not a complete lattice since the intersection of extents is not necessarily an extent in the triadic setting. The goal is to construct the Hasse diagram of L. Thus we need the covering relation ≺1 of ≤1 . The elements will be processed in a linear ordering

1 emotional_state ← mean(emotion_list)} Else {emotional_state ← emotion_list} return emotional_state }

Fig. 1. Pseudocode for the heuristic function detecting the emotional state in the utterance

To bring patient’s conversational utterances into the picture, a text-to-CG parser is required. But even if it was feasible to construct complete representations for every utterance made by a patient, this would not be desirable, because from analysis of the


corpus, surprisingly few such representations would actually have useful implications for treatment, at least within our simplified model. Therefore whatever method we use to parse patient input into CGs can afford to be selective about the outputs it forms, using top-down influences to prefer those interpretations that are likely to lead to useful content. Our conceptual parser, SAVVY, can do this because it assembles composite CGs out of prepared conceptual components that are already pre-selected for the domain of use to which they will be put. This means that the composite graphs are strongly weighted toward semantic structures of implicit value, leaving utterances that result in disjoint or useless component parts not able to aggregate at all. For example, if the patient says "I'm scared that one day, he'll just stab me", SAVVY is able to assemble the conceptual graph

[FEAR] –
  (expr) -> [PERSON: Patient]
  (attr) -> [NEGATIVE]
  (caus) -> [ (futr) -> [SITUATION:
      [PERSON] <- (agnt) - [PERSON: Patient]
      (rslt) -> [PHARM] -> (expr) -> [PERSON: Patient]
      (inst) -> [BLADE]
      (ptim) -> [DAY: @1] ]

only because the lexicon had definitions of useful subgraphs for "scared", "stab" etc. which, because they would likely represent harm to someone, are considered important inclusions. Not every act or even every emotion is so provided for.

A simple database currently provides background knowledge for our experiments. Each entry in the knowledgebase is a history list of zero or more CGs, indexed by both a patient identifier and one of the 15 background topics (Sect. 2.3) such as suicide_attempts, willingness_to_change and chief_complaint. Entries may be added, deleted or modified during processing, so the database can be used as a working memory to update and maintain therapeutic reasoning over sessions. Initially these entries are provided manually to represent information from the pre-existing admitting interview, but they can be updated, edited or deleted by the automated therapist. Automatic entries are vetted by domain-specific heuristic filters focussed on the topic of interest, so that only relevant CGs can be pushed onto the appropriate lists. Our experience suggests that it is not difficult to write these, provided high-level conceptual functions, based on the canonical operators, are available to find and test specific sections of the graphs.

3.2 Expert System for Executive Control

Psychiatric expertise is represented by a clinical Expert System Therapist (EST), based on TMYCIN [21]. This shell is populated with rules from the above-mentioned treatment theories. Consultation of the system is performed at each conversational turn, informed by the current state of variables from the inputs. Backward-chaining inference maintains internal state variables and recommends the best "pragmatic move" and "therapist's target" for the response generation process. These parameters allow the selection and instantiation of appropriate high-level template graphs that form the therapist's utterances

only because the lexicon had definitions of useful subgraphs for “scared”, “stab” etc. which, because they would likely represent harm to someone, are considered important inclusions. Not every act or even every emotion is so provided for. A simple database currently provides background knowledge for our experiments. Each entry in the knowledgebase is a history list of zero or more CGs, indexed by both a patient identifier and one of the 15 background topics (Sect. 2.3) such as suicide_attempts, willingness_to_change and chief_complaint. Entries may be added, deleted or modified during processing, so the database can be used as a working memory to update and maintain therapeutic reasoning over sessions. Initially these entries are provided manually to represent information from the pre-existing admitting interview, but these can be updated, edited or deleted by the automated therapist. Automatic entries are vetted by domain-specific heuristic filters focussed on the topic of interest, so that only relevant CGs can be pushed onto the appropriate lists. Our experience suggests that it is not difficult to write these provided high-level conceptual functions, based on the canonical operators, are available to find and test specific sections of the graphs. 3.2 Expert System for Executive Control Psychiatric expertise is represented by a clinical Expert System Therapist (EST), based on TMYCIN [21]. This shell is populated with rules from the above-mentioned treatment theories. Consultation of the system is performed at each conversational turn, informed by the current state of variables from the inputs. Backward-chaining inference maintains internal state variables and recommends the best “pragmatic move” and” therapist’s target” for the response generation process. These parameters allow the selection and instantiation of appropriate high-level template graphs that form the therapist’s utterances

192

G. Mann et al.

to support, guide, query, inform or sympathize at that moment. In some simple cases, canned responses are issued to bypass the language pipeline and reduce processing demands. 3.3 Response Generation Responses CGs are generated based on the three input sources and two variables from the therapeutic process (Fig. 2). The heuristic first checks the patient’s emotional state. If this is outside the distressed region (Sect. 2), an expressive form CG is created by first maximally joining generalised CG templates for the pragmatic move (e.g. query) and the content recommended for the current therapeutic goal (e.g. establish_complaint). This is then instantiated with background information. If the patient’s emotional state is inside the distressed region, the EST will recommend a pragmatic move of sympathy and the heuristic will force the use of a sympathy template for its expressive form. The constructed CG is subsequently passed to a realization heuristic, SentenceRealization() which in turn expresses the CG as a grammatically correct sentence using YAG (Yet Another Generator) [22]. Function GenerateSentence (emotional_state, patient_CG, background_info, pragmatic_move, therapeutic_goal) { If the emotional_state is not in distressed region { pragmatic_move_CG ← retrieve the template for this pragmatic move therapeutic_goal_CG ← retrieve the CG for this therapeutic goal, elaborated from patient_CG expressive_form_CG ← maximal_join (pragmatic_move_CG, therapeutic_goal_CG) constructed_CG ← instantiate_background (expressive_form_CG, background_info) SentenceRealization (constructed_CG, pragmatic_move)}} else { SentenceRealization (sympathy_CG, pragmatic_move)}

Fig. 2. Pseudocode for the heuristic function creating the expressive form.

YAG is a template-based syntactic realization system, which comes with a set of core grammar templates that can be used to generate noun phrases, verbs, prepositional phrases, and other clauses. We are developing a further set of custom templates using those core grammar templates with optional syntactic constraints. The heuristic function SentenceRealization() (Fig. 3) loops through the constructed CG and creates a list of attribute-value pairs, based on grammatical/semantic roles. Function SentenceRealization (constructed_CG, pragmatic_move) {For each concept in a CG {knowledge_list ← push attribute-value pair onto knowledge_list} Retrieve template based on the pragmatic move and the constructed_CG For each slot in the template {Override the slot default value with the value from knowledge_list} Realize the template using the command surface-1 of YAG.}

Fig. 3. Pseudocode for realizing the expressive form as text.

Conceptual Reasoning for Generating Automated Psychotherapeutic Responses

193

4 Conclusion This generation component is still in development, so no systematic evaluation has yet been conducted. Some components have been coded and unit tested. Getting the heuristics of the system to interact smoothly with each other is a challenge; that is to be expected in this modelling approach. We are concerned about the number of templates that may be required, particularly at the surface expression level. If they become too difficult or too many to create, the method might become infeasible. The heuristic tests are not difficult to write, but are, of course, imperfect compared to algorithms. Also, we have not fully tested the emotion tracking on real patient texts so far. Our planned evaluation has two parts. First, a systematic “glass-box” analysis will discover the strengths and limitations of the generation component, particularly with respect to the amount of prior knowledge that needs to be provided and the generality of the techniques. Second, the “suitability”, “naturalness” and “empathy” of the response generation for human use will be tested, using a series of ersatz patient interview scenarios to avoid the ethical complications of testing on real patients. The scenarios will provide human judges (expert psychotherapists or, more likely, students in training to be psychotherapists) with information about an ongoing therapeutic intervention. Example patient utterances and the actual responses generated by the system will also be provided as transcripts. The judges will then rate these transcripts on those variables using their own knowledge of therapy. Finally, we reiterate that if hand-built conceptual representations can be practically built up using existing methods, the effort will be worthwhile if the systems are then more transparent and auditable than NN or statistical ML system and thus, more trustworthy.

References 1. Jack, H.E., Myers, B., Regenauer, K.S., Magidson, J.F.: Mutual capacity building to reduce the behavioral health treatment gap globally. Adm. Policy Mental Health Mental Health Serv. Res. 47(4), 497–500 (2019). https://doi.org/10.1007/s10488-019-00999-y 2. Meltzer, H.E., et al.: The reluctance to seek treatment for neurotic disorders. Int. Rev. Psychiatry 15(2), 123–128 (2003) 3. Fairburn, C.G., Patel, V.H.: The impact of digital technology on psychological treatments and their dissemination. Behav. Res. Therapy 88, 19–25 (2017) 4. Marcus, G.: Deep learning: a critical appraisal. arXiv preprint arXiv:1801.00631 (2018) 5. Pearl, J.: Theoretical impediments to machine learning with seven sparks from the causal revolution. arXiv preprint arXiv:1801.04016 (2018) 6. Ellis, A.: Rational-emotive therapy. Big Sur Recordings, CA, USA, pp. 32–44 (1973) 7. Greenberg, L.S., Paivio, S.C.: Working with Emotions in Psychotherapy, vol. 13. Guilford Press, New York (2003) 8. Calvo, R.A., D’Mello, S.: Affect detection: an interdisciplinary review of models, methods, and their applications. IEEE Trans. Affect. Comput. 1, 18–37 (2010) 9. Hancock, J.T., Landrigan, C., Silver, C.: Expressing emotion in text-based communication. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 929– 932. Association for Computing Machinery (2007) 10. Gill, A.J., French, R.M., Gergle, D., Oberlander, J.: Identifying emotional characteristics from short blog texts. In: 30th Annual Conference of the Cognitive Science Society, Washington, DC, pp. 2237–2242. Cognitive Science Society (2008)

194

G. Mann et al.

11. Breck, E., Choi, Y., Cardie, C.: Identifying expressions of opinion in context. In: IJCAI, vol. 7, pp. 2683–2688, January 2007 12. Smith, C.A., Ellsworth, P.C.: Attitudes and social cognition. J. Pers. Soc. Psychol. 48(4), 813–838 (1985) 13. McNally, A., et al.: Counseling and Psychotherapy Transcripts, Volume II. Alexander Street Press, Alexandria (2014) 14. Mann, G.A.: Control of a navigating rational agent by natural language. Unpublished Ph.D. thesis, University of New South Wales, Sydney, Australia (1996). https://manualzz.com/doc/ 42762943/control-of-a-navigating-rational-agent-by-natural-language 15. Paola, P.V., et al.: Evaluation of OntoLearn, a methodology for automatic learning of domain ontologies. In: Ontology Learning from Text: Methods, Evaluation and Applications, vol. 123, p. 92 (2005) 16. Leuzzi, F., Ferilli, S., Rotella, F.: ConNeKTion: a tool for handling conceptual graphs automatically extracted from text. In: Catarci, T., Ferro, N., Poggi, A. (eds.) IRCDL 2013. CCIS, vol. 385, pp. 93–104. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-543470_11 17. Morrison, J.: The First Interview: A Guide for Clinicians. Guilford Press, New York (1993) 18. Hoyt, M.F.: The temporal structure of therapy. In: O’Donohue, W.E., et al. (ed.) Clinical Strategies for Becoming a Master Psychotherapist, pp. 113–127. Elsevier (2006) 19. Shoham, V., Rohrbaugh, M., Patterson, J.: Problem-and solution-focused couple therapies: the MRI and Milwaukee models. In: Jacobson, N.S., Gurman, A.S. (eds.) Clinical Handbook of Couple Therapy, pp. 142–163. Guilford Press, New York (1995) 20. Strapparava, C., Valitutti, A.: WordNet-affect: an affective extension of WordNet. In: 4th International Conference on Language Resources and Evaluation, pp. 1083–1086 (2004) 21. Novak, G.: TMYCIN expert system tool. Technical Report AI87–52, Computer Science Department, University of Texas at Austin (1987). http://www.cs.utexas.edu/ftp/AI-Lab/techreports/UT-AI-TR-87-52.pdf. Accessed 5 Feb 2018 22. Channarukul, S., McRoy, S.W., Ali, S.S.: Enriching partially-specified representations for text realization using an attribute grammar. In: Proceedings of the 1st International Conference on NLG, Mitzpe Ramon, Israel, vol. 14, pp. 163–170. Association for Computational Linguistics (2000)

Benchmarking Inference Algorithms for Probabilistic Relational Models Tristan Potten

and Tanya Braun(B)

Institute for Information Systems, University of L¨ ubeck, L¨ ubeck, Germany [email protected], [email protected] Abstract. In the absence of benchmark datasets for inference algorithms in probabilistic relational models, we propose an extendable benchmarking suite named ComPI that contains modules for automatic model generation, model translation, and inference benchmarking. The functionality of ComPI is demonstrated in a case study investigating both average runtimes and accuracy for multiple openly available algorithm implementations. Relatively frequent execution failures along with issues regarding, e.g., numerical representations of probabilities, show the need for more robust and efficient implementations for real-world applications. Keywords: StaRAI

1

· Lifted inference · Probabilistic inference

Introduction

At the heart of many machine learning algorithms lie large probabilistic models that use random variables (randvars) to describe behaviour or structure hidden in data. After a surge in effective machine learning algorithms, efficient algorithms for inference come into focus to make use of the models learned or to optimise machine learning algorithms further [5]. This need has lead to advances in probabilistic relational modelling for artificial intelligence (also called statistical relational AI, StaRAI for short). Probabilistic relational models combine the fields of reasoning under uncertainty and modelling incorporating relations and objects in the vein of first-order logic. Handling sets of indistinguishable objects using representatives enables tractable inference [8] w.r.t. the number of objects. Very few datasets exist as a common baseline for comparing different approaches beyond models of limited size (e.g., epidemic [11], workshops [7], smokers [18]). Therefore, we present an extendable benchmarking suite named ComPI (Compare Probabilistic Inference) that allows for benchmarking implementations of inference algorithms. The suite consists of (1) an automatic generator for models and queries, (2) a translation tool for providing the generated models in the format needed for the implementations under test, and (3) a measurement module that batch-executes implementations for the generated models and collects information on runtimes and inference results. In the following, we begin with a formal definition of the problem that the benchmarked inference algorithms solve. Afterwards, we present the functions of the modules of ComPI. Lastly, we present a case study carried out with ComPI. c Springer Nature Switzerland AG 2020  M. Alam et al. (Eds.): ICCS 2020, LNAI 12277, pp. 195–203, 2020. https://doi.org/10.1007/978-3-030-57855-8_15

196

2

T. Potten and T. Braun

Inference in Probabilistic Relational Models

The algorithms considered in ComPI solve query answering problems on a model that defines a full joint probability distribution. In this section, we formally define a model and the query answering problem in such models. Additionally, we give intuitions about how query answering algorithms solve these problems. 2.1

Parameterised Models

Parameterised models consist of parametric factors (parfactors). A parfactor describes a function, mapping argument values to real values (potentials). Parameterised randvars (PRVs) constitute arguments of parfactors. A PRV is a randvar parameterised with logical variables (logvars) to compactly represent sets of randvars [9]. Definitions are based on [13]. Definition 1. Let R be a set of randvar names, L a set of logvar names, Φ a set of factor names, and D a set of constants (universe). All sets are finite. Each logvar L has a domain D(L) ⊆ D. A constraint is a tuple (X , CX ) of a sequence of logvars X = (X1 , . . . , Xn ) and a set CX ⊆ ×ni=1 D(Xi ). The symbol  for C marks that no restrictions apply, i.e., CX = ×ni=1 D(Xi ). A PRV R(L1 , . . . , Ln ), n ≥ 0 is a syntactical construct of a randvar R ∈ R possibly combined with logvars L1 , . . . , Ln ∈ L. If n = 0, the PRV is parameterless and constitutes a propositional randvar. The term R(A) denotes the possible values (range) of a PRV A. An event A = a denotes the occurrence of PRV A with range value a ∈ R(A). We denote a parfactor g by φ(A)|C with A = (A1 , . . . , An ) a sequence of PRVs, φ : ×ni=1 R(Ai ) → R+ a function with name φ ∈ Φ, and C a constraint on the logvars of A. A set of parfactors forms a model G := {gi }ni=1 . The term gr(P ) denotes the set of all instances of P w.r.t. given constraints. An instance is a grounding, substituting the logvars in P with constants from given constraints. The semantics of a model G is given by grounding and building a full joint distribution PG . Query answering refers to computing probability distributions, which boils down to computing marginals on PG . Lifted algorithms seek to avoid grounding and building PG . A formal definition follows. Definition 2. With Z as  normalising constant, a model G represents the full joint distribution PG = Z1 f ∈gr(G) f . The term P (Q|E) denotes a query in G with Q a grounded PRV and E a set of events. The query answering problem refers to solving a query w.r.t. PG . 2.2

Exact Lifted Inference Algorithms

The very first lifted algorithm is lifted variable elimination (LVE) [9], which has been further refined [2,7,11,13]. LVE takes a model G and answers a query P (Q|E) by absorbing evidence E and eliminating all remaining PRVs except Q from G (cf. [13] for further details). With a new query, LVE restarts with the original model. Therefore, further research concentrates on efficiently solving

Benchmarking Inference Algorithms for Probabilistic Relational Models

197

Fig. 1. Modules ComPI. The arrows indicate the direction of the flow of created models. The arrow bypassing the translator module indicates that no translation is needed for frameworks working on BLOG models.

multiple query answering problems on a given model, often by building a helper structure based on a model, yielding algorithms such as (i) the lifted junction tree algorithm (LJT) [1], (ii) first-order knowledge compilation (FOKC) [17,18], or (iii) probabilistic theorem proving (PTP) [3]. LJT solves the above defined query answering problem while FOKC and PTP actually solve a weighted first order model counting (WFOMC) problem to answer a query P (Q|E). For all algorithms mentioned, implementations are freely available (see Sect. 3.3). LVE and LJT work with models as defined above. The other algorithms usually use a different modelling formalism, namely Markov logic networks (MLNs) [10]. One can transform parameterised models to MLNs and vice versa [16]. Next, we present ComPI.

3

The Benchmarking Suite ComPI

ComPI allows for collecting runtimes and inference results from implementations given automatically generated models. Figure 1 shows a schematic description of ComPI consisting of three main parts, (i) a model generator BLOGBuilder that allows the automatic creation of multiple models (in the BLOG grammar [6]) following different generation strategies, (ii) a translator TranslateBLOG for translation from the BLOG format to correct input format for individual frameworks (if necessary), and (iii) a benchmarking tool PInBench for collection of runtimes and inference results, summarized in multiple reports. The modules can be accessed at https://github.com/tristndev/ComPI. Below, we briefly highlight each module. 3.1

Model Generation

The goal is to generate models as given in Definition 1, generating logvar/randvar names and domain sizes, combining logvars and randvars as well as forming parfactors with random potentials. The input to BLOGBuilder is a model creation specification, which selects a model creation and augmentation strategy, which describe how models are created possibly based on a previous model. Running BLOGBuilder creates a number of models as well as a number of reports and logs that describe specific characteristics of the created models. The output format of the models is BLOG. Additionally, BLOGBuilder generates

198

T. Potten and T. Braun

reports describing the model creation process and highlighting possible deviations from the model creation specification. The module can be extended with additional model creation and augmentation strategies. 3.2

Model Translation

The task is to translate models from the baseline BLOG format into equivalent models of those different formats required for the frameworks in PInBench. Accordingly, TranslateBLOG takes a number of model files in the BLOG format as input. It subsequently translates the parsed models into different formats of model specifications. As of now, the generation of the following output types is possible: (i) Markov logic networks (MLNs) [10], (ii) dynamic MLNs [4], and (iii) dynamic BLOG files. TranslateBLOG is easily extendable as new output formats can be added by specifying and implementing corresponding translation rules and syntactic grammar to create valid outputs. 3.3

Benchmarking

The final module in ComPI is PInBench (Probabilistic Inference Benchmarking), which serves to batch process the previously created model files, run inferences, and collect data on these runs. PInBench can be interpreted as a control unit that coordinates the running of an external implementation which it continuously and sequentially supplies with the available model files. The following implementations in conjunction with the corresponding input formats are supported: – – – –

GC-FOVE1 with propositional variable elimination and LVE, WFOMC2 with FOKC, Alchemy3 with PTP and sampling-based alternatives, and the junction tree algorithm4 in both its propositional and lifted form.

We also developed a dynamic version of PInBench, named DPInBench, for dynamic models, i.e., models with a sequence of state, which could refer to passage of time. Currently, the implementations of UUMLN, short for University of Ulm Markov Logic Networks5 and the Lifted Dynamic Junction Tree (LDJT)6 as well as their input formats are supported. The data collected by PInBench is stored in multiple reports, e.g., giving an overview on the inference success per file and query (queries on big, complex models might fail) or summarizing the run times and resulting inference probabilities. Implementations not yet supported can easily be added by wrapping each in an executable file and specifying calls needed for execution. Additionally, parsing logics for generated outputs need to be implemented to extract information. 1 2 3 4 5 6

dtai.cs.kuleuven.be/software/gcfove (accessed 16 Apr. 2020). dtai.cs.kuleuven.be/software/wfomc (accessed 16 Apr. 2020). alchemy.cs.washington.edu/ (accessed 16 Apr. 2020). ifis.uni-luebeck.de/index.php?id=518#c1216 (accessed 16 Apr. 2020). uni-ulm.de/en/in/ki/inst/alumni/thomas-geier/ (accessed 16 Apr. 2020). ifis.uni-luebeck.de/index.php?id=483 (accessed 16 Apr. 2020).

Benchmarking Inference Algorithms for Probabilistic Relational Models

4

199

Case Study

To demonstrate the process of benchmarking different implementations using ComPI, we present an exemplary case study. Within the case study, the process consists of model generation with BLOGBuilder, model translation with TranslateBLOG, and benchmarking of inference runs with PInBench. All implementations currently supported by ComPI are included here. We do not consider sampling-based algorithms implemented in Alchemy as performance highly depends on the parameter setting for sampling, which requires an analysis of its own and is therefore not part of this case study. 4.1

Model Generation

Model generation starts with creating base models given a set of specified parameters (number of logvars/randvars/parfactors, domain sizes). Subsequently, these base models are augmented according to one of the following strategies: – Strategy A - Parallel Factor Augmentation: The previous model is cloned and each parfactor extended by one additionally created PRV with randomly chosen existing logvars. – Strategy B - Increment By Model : The base model is duplicated (renaming names) and appended to the current. A random randvar from the duplicate is connected with a random randvar of the current model via a new parfactor. The strategies are set up to increase complexity based on LVE. Strategy A increases the so-called tree width (see, e.g., [12] for details). Strategy B increases the model size, while keeping the tree width close to constant. It is non-trivial to generate series of models that increase in complexity without failing executions. Overall, we intend the generation of “balanced” models with moderately connected factors to allow all methods to demonstrate their individual strengths and weaknesses while maintaining manageable runtimes. Creating multiple base models with random influences (e.g., regarding relations between model objects) leads to multiple candidate model series, which allows for selecting one series that leads to runs with the least amount of errors. We tested multiple parameter settings for both strategies to generate candidate series. The specific numbers for parameters are random but small to generate models of limited size, ensuring that the tested programs successfully finish running them. We vary the domain sizes while keeping the number of logvars small for the models to be liftable from a theoretical point of view.7 More precisely, the settings given by the Cartesian product of the following parameters have been evaluated for generating base models for each strategy: – Strategy A: domain size ∈ {10, 100, 1000}, #logvar ∈ {2}, #randvar ∈ {3, 4, 5}, #factor = #randvar − 1, max randvar args = 2. Augmentation in 16 steps. 7

Models with a maximum of two logvars per parfactor are guaranteed to have inference runs without any groundings during its calculations [14, 15].

200

T. Potten and T. Braun

Fig. 2. Mean inference times per query (with logarithmic y-axis).

– Strategy B: domain size ∈ {10, 100, 1000}, #logvar ∈ {1}, #randvar ∈ {2}, #factor ∈ {1}. Further restrictions: max randvar occurrences = 4, max randvar args = 2, max factor args = 2. Augmentation in 40 steps. In each model file, one query is created per randvar. Model generation was executed in three independent runs to obtain multiple candidate models due to the included factors of randomness. The randomness may also lead to model series of potentially varying complexity. Preliminary test runs have led to the selection of the model series investigated below. 4.2

Evaluation Results

We analyse the given frameworks regarding two aspects: Firstly, runtimes of inference and secondly, inference accuracy. Runtimes. Figure 2 shows runtimes, displaying the relation between augmentation steps, domain sizes, and mean query answering times for PTP, FOKC, LVE and LJT. The two propositional algorithms supported are not shown in the plots as both presented very steep increases along with failures on early augmentation steps for bigger domain sizes (as expected without lifting). Regarding Strategy A, the mean time per query increases for bigger domain sizes for all frameworks. PTP and FOKC have a similar early increase and fail on the models after augmentation steps 6 and 7, respectively. They are the only implementations working with MLNs as input. As the original models have

Benchmarking Inference Algorithms for Probabilistic Relational Models

201

random potentials for each argument value combination in each parfactor, the translated MLNs do not have any local symmetries, which these algorithms would be able to exploit. Generating models with many local symmetries, e.g., only two different potentials per whole parfactor, may lead to better results for FOKC and PTP, with the remaining algorithms taking longer. Another possible explanation may (of course) also be that there are bugs in implementations or employed heuristics may be improvable. One could actually use ComPI to test an implementation with random inputs for bugs or better heuristics. The lifted algorithms LJT and LVE have steeper increases for high augmentation steps and the biggest domain size. The reason lies in so-called count-conversions that have a higher complexity than normal elimination operations and become necessary starting with augmentation step 11. A Technical Note on Count Conversions: A count-conversion is a more involved concept of lifted inference where a logical variable is counted to remove it from the list of logical variables [13]. A count-conversion involves reformulating the parfactor, which enlarges it. A PRV is replaced by a so called counted PRV (CRV), which has histograms as range values. Consider a Boolean PRV R(X) with three possible X values, then a CRV #X [R(X)] has the range values [0, 3], [1, 2], [2, 1] and [3, 0], which denote that given a histogram [n1 , n2 ], n1 ground instances of R(X) have the value true and n2 instances the value f alse. Given the range size r of the PRV andthe domain size n of the logical variable that  range values, which lies in O(rn ) [12]. gets counted, the new CRV has n+r−1 n−1 In the case study, we use boolean PRVs, so r = 2. If n is small, the blow up by a count conversion is easily manageable. However, with large n, the blow up leads to a noticeable increase in runtimes, which is the reason why the increase in runtimes with step 11, at which point count conversion become necessary, is more noticeable with larger domain sizes. Given Strategy B, the collected times display less extreme increases over the augmentation steps compared to Strategy A. PTP and FOKC exhibit similar behaviour compared to Strategy A. Again, changing domain sizes has little effect on FOKC. LVE only manages to finish models with small domain sizes as the models become too large overall. LJT is able to finish the models even for larger augmentation steps. The reason is that runtimes of LJT mainly depend on the tree width, i.e., the models of Strategy B have roughly the same complexity during query answering for LJT. The jump between augmentation step 23 and 24 again can be explained by more count conversions occurring. Inference Accuracy. Investigating the actual inference results presents an alternative approach to analysing collected data. Since the tested implementations all perform exact inference, we expect the implementations to obtain the same results on equivalent files and queries. Looking at accuracy allows for identifying bugs. If adding implementations of approximate inference algorithms to ComPI, one could compare accuracy of approximate inference against exact results. Figure 3 shows a comparison of probabilities queried for one (smaller) model generated for this case study. In this case, the implementations under


Fig. 3. Comparison of inference results between different frameworks.

Inference Accuracy. Investigating the actual inference results offers an alternative way of analysing the collected data. Since the tested implementations all perform exact inference, we expect them to produce the same results on equivalent files and queries; looking at accuracy therefore allows for identifying bugs. If implementations of approximate inference algorithms were added to ComPI, their accuracy could also be compared against the exact results. Figure 3 shows a comparison of queried probabilities for one (smaller) model generated for this case study. In this case, the implementations under investigation calculated identical results for all shown queries. However, examining models created at later augmentation steps reveals that the increased model complexity leads to errors in the output probabilities or even to values that cannot be interpreted at all. The latter case is caused by flaws in the numerical representation: NaN values occur when potentials become so small that they underflow to 0 when represented as floats. When normalising a distribution, the normalising constant is then a sum of zeros, so the normalisation divides 0 by 0, which results in NaN.
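The underflow mechanism is easy to reproduce outside of any particular framework. The sketch below is a minimal NumPy illustration with hypothetical log-space potentials: it shows how naive normalisation produces NaN and how the standard log-sum-exp trick avoids it. Whether a given implementation can switch to log-space normalisation internally is beyond the scope of this case study.

```python
import numpy as np

# Hypothetical potentials that are only representable in log space.
log_potentials = np.array([-800.0, -805.0, -810.0])

# Naive normalisation: exp() underflows to 0.0 in double precision, the
# normalising constant becomes a sum of zeros, and 0/0 yields NaN.
weights = np.exp(log_potentials)               # array([0., 0., 0.])
with np.errstate(divide="ignore", invalid="ignore"):
    naive = weights / weights.sum()
print(naive)                                   # [nan nan nan]

# Log-sum-exp normalisation stays in log space until the very end.
m = log_potentials.max()
log_z = m + np.log(np.exp(log_potentials - m).sum())
stable = np.exp(log_potentials - log_z)
print(stable, stable.sum())                    # valid probabilities, summing to 1 (up to rounding)
```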

5 Conclusion

We present ComPI, a benchmarking suite for comparing various probabilistic inference implementations. ComPI consists of extendable tools for automatic model generation, model translation, and inference benchmarking. A case study demonstrates the simplicity of comparative analyses carried out with ComPI. Similar analyses are needed when evaluating future novel algorithms in the field of probabilistic inference, and the extensibility of ComPI has the potential to reduce the effort of such evaluations.

For the tested implementations, it becomes evident that there is still a need for truly robust implementations. The issues we observed range from high memory consumption to a lack of robustness regarding the heuristics used by the algorithms, the numerical representation of probabilities, and the handling of errors. Future work will need to address these points.

References

1. Braun, T., Möller, R.: Preventing groundings and handling evidence in the lifted junction tree algorithm. In: Kern-Isberner, G., Fürnkranz, J., Thimm, M. (eds.) KI 2017. LNCS (LNAI), vol. 10505, pp. 85–98. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-67190-1_7
2. Braun, T., Möller, R.: Parameterised queries and lifted query answering. In: IJCAI 2018 Proceedings of the 27th International Joint Conference on AI, pp. 4980–4986. IJCAI Organization (2018)


3. Gogate, V., Domingos, P.: Probabilistic theorem proving. In: UAI 2011 Proceedings of the 27th Conference on Uncertainty in AI, pp. 256–265. AUAI Press (2011)
4. Kersting, K., Ahmadi, B., Natarajan, S.: Counting belief propagation. In: UAI 2009 Proceedings of the 25th Conference on Uncertainty in AI, pp. 277–284. AUAI Press (2009)
5. LeCun, Y.: Learning World Models: the Next Step Towards AI. Invited Talk at IJCAI-ECAI 2018 (2018). https://www.youtube.com/watch?v=U2mhZ9E8Fk8. Accessed 19 Nov 2018
6. Milch, B., Marthi, B., Russell, S., Sontag, D., Ong, D.L., Kolobov, A.: BLOG: probabilistic models with unknown objects. In: IJCAI 2005 Proceedings of the 19th International Joint Conference on AI, pp. 1352–1359. IJCAI Organization (2005)
7. Milch, B., Zettlemoyer, L.S., Kersting, K., Haimes, M., Kaelbling, L.P.: Lifted probabilistic inference with counting formulas. In: AAAI 2008 Proceedings of the 23rd AAAI Conference on AI, pp. 1062–1068. AAAI Press (2008)
8. Niepert, M., Van den Broeck, G.: Tractability through exchangeability: a new perspective on efficient probabilistic inference. In: AAAI 2014 Proceedings of the 28th AAAI Conference on AI, pp. 2467–2475. AAAI Press (2014)
9. Poole, D.: First-order probabilistic inference. In: IJCAI 2003 Proceedings of the 18th International Joint Conference on AI, pp. 985–991. IJCAI Organization (2003)
10. Richardson, M., Domingos, P.: Markov logic networks. Mach. Learn. 62(1–2), 107–136 (2006). https://doi.org/10.1007/s10994-006-5833-1
11. de Salvo Braz, R., Amir, E., Roth, D.: Lifted first-order probabilistic inference. In: IJCAI 2005 Proceedings of the 19th International Joint Conference on AI, pp. 1319–1325. IJCAI Organization (2005)
12. Taghipour, N., Davis, J., Blockeel, H.: First-order decomposition trees. In: NIPS 2013 Advances in Neural Information Processing Systems, vol. 26, pp. 1052–1060. Curran Associates, Inc. (2013)
13. Taghipour, N., Fierens, D., Davis, J., Blockeel, H.: Lifted variable elimination: decoupling the operators from the constraint language. J. Artif. Intell. Res. 47(1), 393–439 (2013)
14. Taghipour, N., Fierens, D., Van den Broeck, G., Davis, J., Blockeel, H.: Completeness results for lifted variable elimination. In: AISTATS 2013 Proceedings of the 16th International Conference on AI and Statistics, pp. 572–580. AAAI Press (2013)
15. Van den Broeck, G.: On the completeness of first-order knowledge compilation for lifted probabilistic inference. In: NIPS 2011 Advances in Neural Information Processing Systems, vol. 24, pp. 1386–1394. Curran Associates, Inc. (2011)
16. Van den Broeck, G.: Lifted inference and learning in statistical relational models. Ph.D. thesis, KU Leuven (2013)
17. Van den Broeck, G., Davis, J.: Conditioning in first-order knowledge compilation and lifted probabilistic inference. In: AAAI 2012 Proceedings of the 26th AAAI Conference on AI, pp. 1961–1967. AAAI Press (2012)
18. Van den Broeck, G., Taghipour, N., Meert, W., Davis, J., De Raedt, L.: Lifted probabilistic inference by first-order knowledge compilation. In: IJCAI 2011 Proceedings of the 22nd International Joint Conference on AI, pp. 2178–2185. IJCAI Organization (2011)

Analyzing Psychological Similarity Spaces for Shapes

Lucas Bechberger1 and Margit Scheibel2

1 Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany, [email protected]
2 Institute for Language and Information Science, Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany, [email protected]

1 Background and Motivation

The cognitive framework of conceptual spaces [3] proposes to represent concepts and properties such as apple and round as convex regions in perception-based similarity spaces. By doing so, the framework can provide a grounding for the nodes of a semantic network. In order to use this framework in practice, one needs to know the structure of the underlying similarity space. In our study, we focus on the domain of shapes. We analyze similarity spaces of varying dimensionality which are based on human similarity ratings and seek to identify directions in these spaces which correspond to shape features from the psychological literature. The analysis scripts used in our study are available at https://github.com/lbechberger/LearningPsychologicalSpaces.

Our psychological account of shapes can provide constraints and inspirations for AI approaches. For example, distances in the shape similarity spaces can give valuable information about visual similarity which can complement other measures of similarity (such as distances in a conceptual graph). Moreover, the interpretable directions in the similarity space provide means for verbalizing this information (e.g., by noting that tools are more elongated than electrical appliances). Furthermore, the shape spaces can be used in bottom-up procedures for constructing new categories, e.g., by applying clustering algorithms. Finally, membership in a category can be determined based on whether or not an item lies inside the convex hull of a given category.

2 Data Collection

We used 60 standardized black-and-white line drawings of common objects (six visually consistent and six visually variable categories with five objects each) for our experiments (see Fig. 1 for an example from each category). We collected 15 shape similarity ratings for all pairwise combinations of the images in a web-based survey with 62 participants.


Fig. 1. Example stimuli for which various perceptual judgments were collected.

Image pairs were presented one after another on the screen (in random order) and subjects were asked to judge the respective similarity on a Likert scale ranging from 1 (totally dissimilar) to 5 (very similar). The distribution of within-category similarities showed that the internal shape similarity was higher for visually consistent categories (M = 4.18) than for visually variable categories (M = 2.56; p < .001). For further processing, the shape similarity ratings were aggregated into a global matrix of dissimilarities by taking the mean over the individual responses and by inverting the scale (i.e., dissimilarity(x, y) = 5 − similarity(x, y)).

In the psychological literature, different types of perceptual features are discussed as determining the perception of complex objects, among others the line shape (Lines) and the global shape structure (Form) [1]. We collected values for all images with respect to these two features in two experimental setups.

In a first line of experiments, we collected image-specific ratings based on attentive (att) image perception. We collected 9 ratings per image in a web-based survey with 27 participants. Groups of four images were presented one after another on the screen (in random order) together with a continuous scale representing the respective feature (Lines: absolutely straight to strongly curved; Form: elongated to blob-like). Subjects were asked to arrange the images on the respective scale such that the position of each image in the final configuration reflected its value on the respective feature scale. The resulting values were aggregated for each image by using the median.

In a second line of experiments, we collected image-specific feature values based on pre-attentive (pre-att) image perception. This was done in two laboratory studies with 18 participants each. In both studies, the images were presented individually for 50 ms on the screen; immediately before and after the image, a pattern mask was shown for 50 ms in order to prevent conscious perception of the image. Subjects were asked to decide per button press, as fast as possible, which value of the respective feature applied most to the critical image (Lines study: straight or curved; Form study: elongated or blob-like). The binary values (in total 18 per image for each feature) were transformed into graded values (the percentage of curved and blob-like responses, respectively).

A comparison of the two types of feature values revealed a strong correlation between the judgements based on attentive and pre-attentive shape perception (rs = 0.83 for Lines and rs = 0.85 for Form). In both cases, the 15 images with the highest and lowest values were used as positive and negative examples for the respective feature.
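The aggregation steps described above can be sketched as follows. The data-frame layout and column names are hypothetical and the snippet is not taken from the analysis scripts in the linked repository; it only illustrates the mean-and-invert aggregation of the similarity ratings and the conversion of binary pre-attentive responses into graded values.

```python
import pandas as pd

# Hypothetical long-format ratings: one row per participant judgement of an image pair.
ratings = pd.DataFrame({
    "img_a": ["apple", "apple", "apple", "hammer"],
    "img_b": ["pear", "pear", "hammer", "pear"],
    "similarity": [5, 4, 2, 1],   # Likert scale: 1 (totally dissimilar) .. 5 (very similar)
})

# Mean similarity per pair, then invert the scale: dissimilarity(x, y) = 5 - similarity(x, y).
mean_sim = ratings.groupby(["img_a", "img_b"])["similarity"].mean()
dissim = 5 - mean_sim

# Arrange into a symmetric dissimilarity matrix with a zero diagonal.
items = sorted(set(ratings["img_a"]) | set(ratings["img_b"]))
D = pd.DataFrame(0.0, index=items, columns=items)
for (a, b), d in dissim.items():
    D.loc[a, b] = d
    D.loc[b, a] = d

# Graded feature values from binary pre-attentive responses: fraction of
# "curved" (Lines) or "blob-like" (Form) button presses per image.
responses = pd.DataFrame({
    "image": ["apple", "apple", "hammer", "hammer"],
    "curved": [1, 1, 0, 1],       # 18 binary responses per image in the actual study
})
graded = responses.groupby("image")["curved"].mean()
print(D, graded, sep="\n\n")
```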


Fig. 2. Results of our analysis of the similarity spaces.

3 Analysis

We used the SMACOF algorithm [4] for performing nonmetric multidimensional scaling (MDS) on the dissimilarity matrix. Given a desired number n of dimensions, MDS represents each stimulus as a point in an n-dimensional space and arranges these points in such a way that their pairwise distances correlate well with the pairwise dissimilarities of the stimuli they represent. The SMACOF algorithm uses an iterative process of matrix multiplications to minimize the remaining difference between distances and dissimilarities.

A good similarity space should reflect the psychological dissimilarities accurately. Figure 2a shows the Spearman correlation of dissimilarities and distances as a function of the number of dimensions. As we can see, a one-dimensional space is not sufficient for an accurate representation of the dissimilarities. We can furthermore observe that using more than five dimensions does not considerably improve the correlation with the dissimilarities. As a baseline, we also computed the distances between the pixels of various downscaled versions of the images. These pixel-based distances reached only a Spearman correlation of rs = 0.40 with the dissimilarities, indicating that shape similarity cannot easily be determined from raw pixel information.

The framework of conceptual spaces assumes that the similarity spaces are based on interpretable dimensions. As distances between points are invariant under rotations, the axes of the coordinate system from the MDS solution might, however, not coincide with interpretable features. In order to identify interpretable directions in the similarity spaces, we trained a linear support vector machine to separate positive from negative examples for each of the psychological features. The normal vector of the separating hyperplane points from negative to positive examples and can therefore be interpreted as the direction representing this feature [2]. Figure 2b shows the quality of this separation (measured with Cohen's kappa) as a function of the number of dimensions. While a one-dimensional space again gives poor results, increasing the number of dimensions of the similarity space improves the evaluation metric; six dimensions are always sufficient for perfect classification. Moreover, the feature Form seems to be found slightly earlier than Lines. Finally, we do not observe considerable differences between pre-attentive and attentive ratings.
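This analysis pipeline can be approximated with standard Python libraries. The sketch below uses random placeholder data instead of the real dissimilarity matrix and feature labels, so it reproduces the shape of the computation (nonmetric MDS via SMACOF, the Spearman correlation between distances and dissimilarities, and an interpretable direction extracted from a linear SVM and scored with Cohen's kappa) rather than the reported numbers; the authors' actual scripts are in the repository linked above.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr
from sklearn.manifold import MDS
from sklearn.svm import LinearSVC
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Placeholder symmetric dissimilarity matrix for 60 stimuli (the real one comes
# from the aggregated similarity ratings).
raw = rng.random((60, 60))
dissimilarities = (raw + raw.T) / 2
np.fill_diagonal(dissimilarities, 0.0)

# Nonmetric MDS (SMACOF) into an n-dimensional similarity space.
n_dims = 4
mds = MDS(n_components=n_dims, metric=False, dissimilarity="precomputed",
          n_init=10, random_state=0)
points = mds.fit_transform(dissimilarities)

# How well do distances in the space reflect the dissimilarities?
rho, _ = spearmanr(pdist(points), squareform(dissimilarities, checks=False))
print("Spearman correlation:", rho)

# Interpretable direction for a feature (e.g. Form): separate positive from
# negative example images with a linear SVM; the hyperplane normal is the direction.
labels = rng.integers(0, 2, size=60)          # placeholder positive/negative examples
svm = LinearSVC(C=1.0, max_iter=10000).fit(points, labels)
direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])
kappa = cohen_kappa_score(labels, svm.predict(points))
print("direction:", direction, "kappa:", kappa)
```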


The framework of conceptual spaces furthermore proposes that conceptual regions in the similarity space should be convex and non-overlapping. We therefore constructed the convex hull for each of the categories from our data set and estimated the overlap between the resulting conceptual regions by counting, for each convex hull, the number of intruder items from other categories. Figure 2c plots the overall number of these intruders as a function of the number of dimensions. The number of intruders one would expect for randomly arranged points drops very fast with more dimensions and becomes zero in a five-dimensional space. The point arrangements found by MDS, however, produce considerably less overlap between the conceptual regions than this random baseline. Overall, it seems that conceptual regions tend to be convex in our similarity spaces.
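One way to implement the intruder count, assuming the MDS coordinates and category labels are available, is via a Delaunay triangulation of each category's points; the sketch below uses placeholder data and is not taken from the authors' scripts. Shuffling the point-to-category assignment gives a rough analogue of the random baseline.

```python
import numpy as np
from scipy.spatial import Delaunay

def count_intruders(points, categories):
    """Count items that lie inside the convex hull of a category they do not belong to."""
    cats = np.asarray(categories)
    intruders = 0
    for cat in np.unique(cats):
        members = points[cats == cat]
        others = points[cats != cat]
        if len(members) <= points.shape[1]:   # too few points for a full-dimensional hull
            continue
        hull = Delaunay(members)              # find_simplex >= 0 means "inside the hull"
        intruders += int(np.sum(hull.find_simplex(others) >= 0))
    return intruders

# Placeholder: 60 points in a 4-dimensional space, 12 categories of 5 items each.
rng = np.random.default_rng(0)
points = rng.normal(size=(60, 4))
categories = [i // 5 for i in range(60)]
print("intruders:", count_intruders(points, categories))
print("shuffled baseline:", count_intruders(rng.permutation(points), categories))
```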

4 Discussion and Conclusions

In our study, we found that similarity spaces with two to five dimensions seem to be good candidates for representing shapes: a single dimension does not seem to be sufficient, while more than five dimensions do not improve the quality of the space. The shape features postulated in the literature were indeed detectable as interpretable directions in these similarity spaces. In order to understand the similarity space for shapes even better, additional features from the literature (such as Orientation) will be investigated.

The main limitations of our results are twofold. Firstly, we only consider two-dimensional line drawings in our study, so our results are not directly applicable to three-dimensional real-world objects. Secondly, the similarity spaces obtained through MDS can only be used for a fixed set of stimuli. In future work, we aim to train an artificial neural network to map novel images to points in the shape similarity spaces as well (cf. [5]).

References

1. Biederman, I.: Recognition-by-components: a theory of human image understanding. Psychol. Rev. 94(2), 115–147 (1987)
2. Derrac, J., Schockaert, S.: Inducing semantic relations from conceptual spaces: a data-driven approach to plausible reasoning. Artif. Intell. 228, 66–94 (2015). https://doi.org/10.1016/j.artint.2015.07.002
3. Gärdenfors, P.: Conceptual Spaces: The Geometry of Thought. MIT Press, Cambridge (2000)
4. de Leeuw, J.: Applications of convex analysis to multidimensional scaling. In: Recent Developments in Statistics, pp. 133–146. North Holland Publishing, Amsterdam (1977)
5. Sanders, C.A., Nosofsky, R.M.: Using deep-learning representations of complex natural stimuli as input to psychological models of classification. In: Proceedings of the 2018 Conference of the Cognitive Science Society, Madison (2018)

Author Index

Andrews, Simon 59
Bechberger, Lucas 204
Bertaux, Aurélie 74
Bisquert, Pierre 3
Brabant, Quentin 74
Braun, Tanya 145, 195
Couceiro, Miguel 48
Coulet, Adrien 48
Croitoru, Madalina 3, 33
Dhillon, Pyara 186
Feuerstein, Esteban 161
Gehrke, Marcel 145
Ghawi, Raji 132
Hecham, Abdelraouf 3
Ignatov, Dmitry I. 90
Kishore, Beena 186
Konecny, Jan 103
Krajča, Petr 103
Kwuida, Léonard 90, 117
Leemhuis, Mena 177
Mann, Graham 186
Missaoui, Rokia 117
Monnin, Pierre 48
Mouakher, Amira 74
Napoli, Amedeo 48
Nguyen, Thu Huong 18
Ortiz de Zarate, Juan Manuel 161
Özçep, Özgür L. 177
Pfeffer, Jürgen 132
Polovina, Simon 145
Potten, Tristan 195
Ruas, Pedro H. B. 117
Scheibel, Margit 204
Song, Mark A. J. 117
Tettamanzi, Andrea G. B. 18
Wolter, Diedrich 177
Yun, Bruno 33