
Emergence, Complexity and Computation ECC

Andrew Adamatzky Vivien Kendon   Editors

From Astrophysics to Unconventional Computation Essays Presented to Susan Stepney on the Occasion of her 60th Birthday

Emergence, Complexity and Computation Volume 35

Series Editors
Ivan Zelinka, Technical University of Ostrava, Ostrava, Czech Republic
Andrew Adamatzky, University of the West of England, Bristol, UK
Guanrong Chen, City University of Hong Kong, Hong Kong, China

Editorial Board Members
Ajith Abraham, MirLabs, USA
Ana Lucia, Universidade Federal do Rio Grande do Sul, Porto Alegre, Rio Grande do Sul, Brazil
Juan C. Burguillo, University of Vigo, Spain
Sergej Čelikovský, Academy of Sciences of the Czech Republic, Czech Republic
Mohammed Chadli, University of Jules Verne, France
Emilio Corchado, University of Salamanca, Spain
Donald Davendra, Technical University of Ostrava, Czech Republic
Andrew Ilachinski, Center for Naval Analyses, USA
Jouni Lampinen, University of Vaasa, Finland
Martin Middendorf, University of Leipzig, Germany
Edward Ott, University of Maryland, USA
Linqiang Pan, Huazhong University of Science and Technology, Wuhan, China
Gheorghe Păun, Romanian Academy, Bucharest, Romania
Hendrik Richter, HTWK Leipzig University of Applied Sciences, Germany
Juan A. Rodriguez-Aguilar, IIIA-CSIC, Spain
Otto Rössler, Institute of Physical and Theoretical Chemistry, Tübingen, Germany
Vaclav Snasel, Technical University of Ostrava, Czech Republic
Ivo Vondrák, Technical University of Ostrava, Czech Republic
Hector Zenil, Karolinska Institute, Sweden

The Emergence, Complexity and Computation (ECC) series publishes new developments, advancements and selected topics in the fields of complexity, computation and emergence. The series focuses on all aspects of reality-based computation approaches from an interdisciplinary point of view, especially from the applied sciences, biology, physics, and chemistry. It presents new ideas and interdisciplinary insight on the mutual intersection of the subareas of computation, complexity and emergence, and on their impact on and limits to any computing based on physical limits (thermodynamic and quantum limits, Bremermann's limit, Seth Lloyd's limits, …) as well as algorithmic limits (Gödel's proof and its impact on calculation, algorithmic complexity, Chaitin's Omega number and Kolmogorov complexity, non-traditional calculations such as the Turing machine process and its consequences, …) and limitations arising in artificial intelligence. The topics include (but are not limited to) membrane computing, DNA computing, immune computing, quantum computing, swarm computing, analogic computing, chaos computing and computing on the edge of chaos, computational aspects of dynamics of complex systems (systems with self-organization, multiagent systems, cellular automata, artificial life, …), emergence of complex systems and its computational aspects, and agent-based computation. The main aim of this series is to discuss the above-mentioned topics from an interdisciplinary point of view and to present new ideas coming from the mutual intersection of classical as well as modern methods of computation. Within the scope of the series are monographs, lecture notes, selected contributions from specialized conferences and workshops, and special contributions from international experts.

More information about this series at http://www.springer.com/series/10624

Andrew Adamatzky · Vivien Kendon

Editors

From Astrophysics to Unconventional Computation Essays Presented to Susan Stepney on the Occasion of her 60th Birthday


Editors Andrew Adamatzky Unconventional Computing Laboratory University of the West of England Bristol, UK

Vivien Kendon Joint Quantum Centre Durham University Durham, UK

ISSN 2194-7287    ISSN 2194-7295 (electronic)
Emergence, Complexity and Computation
ISBN 978-3-030-15791-3    ISBN 978-3-030-15792-0 (eBook)
https://doi.org/10.1007/978-3-030-15792-0
Library of Congress Control Number: 2019934524

© Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Susan Stepney Photo by S. Barry Cooper, taken at the 8th International Conference on Unconventional Computation (UC 2009), Azores

Preface

The unconventional computing field is populated by experts from all fields of science and engineering. Susan Stepney is a classical unconventional computist. She started as an astrophysicist, developed as a programmer and flourished as a computer scientist.

In the early 1980s, while at the Institute of Astronomy, University of Cambridge, Susan studied relativistic thermal astrophysical plasmas. She derived exact expressions for the fully relativistic energy exchange rate, evaluated it for the cases of electron-proton, electron-electron and proton-proton relaxation, and developed a computer model of thermal plasmas at mildly relativistic temperatures [7, 8, 20, 30]. Susan's 1983 paper on simple but accurate numerical fits to various two-body rates in relativistic thermal plasmas [30] remains among her three most cited publications.

In the mid-1980s, Susan left astrophysics and academia to join the GEC–Marconi Research Centre in Chelmsford. Her first computer science paper was about modelling access control of networks with different administrations linked together [15], and she later wrote a formal model of access control in the Z formal specification language [31]. She moved to Logica Cambridge in 1989, where she developed the formal specification and correctness proofs of a high-integrity compiler [21, 22, 36] and the Mondex electronic purse [2, 29].

Susan's contributions to the Z language span over two decades. These include a highly cited textbook for advanced users of Z [4]; an edited collection of various approaches for adding object orientation to the Z language [25]; a widely applicable set of Z data refinement proof obligations allowing for non-trivial initialisation, finalisation and input/output refinement [28]; and refactoring guided by target patterns and sub-patterns [33].

Susan moved back into academia in 2002, joining the Department of Computer Science, University of York, where she established a fruitful space for natural computation and unconventional computing ideas. Susan's contributions to
unconventional computing are so diverse and numerous that it would take a separate book to describe them. Those most cited, and those closest to the editors' hearts, include artificial immune systems [35], grand challenges in unconventional computation [26, 27], evolving quantum circuits via genetic programming [16], controlling complex dynamics with artificial chemical networks [13, 14], simulating complex systems [3, 32], speed limits of quantum information processing [19], cellular automata on Penrose tilings [17], philosophy of unconventional computing [1], evolvable molecular microprograms [5, 9, 10], computing with nuclear magnetic resonance [18], heterotic computing [12], material computation [11, 23, 34], theory of programming of unconventional computers [24], and reservoir computing [6].

This book commemorates Susan Stepney's achievements in science and engineering. The chapters are authored by world leaders in computer science, physics, mathematics and engineering. Susan's work in formal specification of programming languages and semantics is echoed in the chapters "Playing with Patterns" (Fiona A. C. Polack), "Compositional Assume-Guarantee Reasoning of Control Law Diagrams Using UTP" (Kangfeng Ye, Simon Foster, and Jim Woodcock), "Understanding, Explaining, and Deriving Refinement" (Eerke Boiten and John Derrick), and "Sound and Relaxed Behavioural Inheritance" (Nuno Amálio).

Susan's life-long contributions to complex systems analysis, modelling and integration are reflected in the chapters "A Simple Hybrid Event-B Model of an Active Control System for Earthquake Protection" (Richard Banach and John Baugh), "Complex Systems of Knowledge Integration: A Pragmatic Proposal for Coordinating and Enhancing Inter/Transdisciplinarity" (Ana Teixeira de Melo and Leo Simon Dominic Caves), "Growing Smart Cities" (Philip Garnett), "On the Emergence of Interdisciplinary Culture: The York Centre for Complex Systems Analysis (YCCSA) and the TRANSIT Project" (Leo Simon Dominic Caves), and "On the Simulation (and Energy Costs) of Human Intelligence, the Singularity and Simulationism" (Alan F. T. Winfield).

The chapters "Putting Natural Time into Science" (Roger White and Wolfgang Banzhaf) and "Anti-heterotic Computing" (Viv Kendon) address topics on programmability and unconventional computing substrates raised by Susan in her previous papers. The field of unconventional computing is represented in the chapters "From Parallelism to Nonuniversality: An Unconventional Trajectory" (Selim G. Akl), "Evolving Programs to Build Artificial Neural Networks" (Julian F. Miller, Dennis G. Wilson and Sylvain Cussat-Blanc), "Oblique Strategies for Artificial Life" (Simon Hickinbotham), and "On Buildings that Compute. A Proposal" (Andrew Adamatzky et al.).


The book will be a pleasure to explore for readers from all walks of life, from undergraduate students to university professors, from mathematicians, computer scientists and engineers to chemists and biologists.

Bristol, UK    Andrew Adamatzky
Durham, UK    Vivien Kendon
March 2019

References

1. Adamatzky, A., Akl, S., Burgin, M., Calude, C.S., Costa, J.F., Dehshibi, M.M., Gunji, Y.-P., Konkoli, Z., MacLennan, B., Marchal, B., et al.: East-west paths to unconventional computing. Prog. Biophys. Mol. Biol. 131, 469–493 (2017)
2. Banach, R., Jeske, C., Hall, A., Stepney, S.: Atomicity failure and the retrenchment atomicity pattern. Formal Aspects Comput. 25, 439–464 (2013)
3. Banzhaf, W., Baumgaertner, B., Beslon, G., Doursat, R., Foster, J.A., McMullin, B., De Melo, V.V., Miconi, T., Spector, L., Stepney, S., White, R.: Defining and simulating open-ended novelty: requirements, guidelines, and challenges. Theory Biosci. 135(3), 131–161 (2016)
4. Barden, R., Stepney, S., Cooper, D.: Z in Practice. Prentice-Hall (1994)
5. Clark, E.B., Hickinbotham, S.J., Stepney, S.: Semantic closure demonstrated by the evolution of a universal constructor architecture in an artificial chemistry. J. R. Soc. Interface 14(130), 20161033 (2017)
6. Dale, M., Miller, J.F., Stepney, S., Trefzer, M.A.: Evolving carbon nanotube reservoir computers. In: International Conference on Unconventional Computation and Natural Computation, pp. 49–61. Springer (2016)
7. Guilbert, P.W., Fabian, A.C., Stepney, S.: Electron-ion coupling in rapidly varying sources. Mon. Not. R. Astron. Soc. 199(1), 19–21 (1982)
8. Guilbert, P.W., Stepney, S.: Pair production, comptonization and dynamics in astrophysical plasmas. Mon. Not. R. Astron. Soc. 212(3), 523–544 (1985)
9. Hickinbotham, S., Clark, E., Nellis, A., Stepney, S., Clarke, T., Young, P.: Maximizing the adjacent possible in automata chemistries. Artif. Life 22(1), 49–75 (2016)
10. Hickinbotham, S., Clark, E., Stepney, S., Clarke, T., Nellis, A., Pay, M., Young, P.: Molecular microprograms. In: European Conference on Artificial Life, pp. 297–304. Springer (2009)
11. Horsman, C., Stepney, S., Wagner, R.C., Kendon, V.: When does a physical system compute? Proc. R. Soc. A 470(2169), 20140182 (2014)
12. Kendon, V., Sebald, A., Stepney, S.: Heterotic computing: past, present and future. Philos. Trans. R. Soc. A 373(2046), 20140225 (2015)
13. Lones, M.A., Tyrrell, A.M., Stepney, S., Caves, L.S.: Controlling complex dynamics with artificial biochemical networks. In: European Conference on Genetic Programming, pp. 159–170. Springer (2010)
14. Lones, M.A., Tyrrell, A.M., Stepney, S., Caves, L.S.D.: Controlling legged robots with coupled artificial biochemical networks. In: ECAL 2011, pp. 465–472. MIT Press (2011)
15. Lord, S.P., Pope, N.H., Stepney, S.: Access management in multi-administration networks. IEE Secure Communication Systems (1986)
16. Massey, P., Clark, J.A., Stepney, S.: Evolving quantum circuits and programs through genetic programming. In: Genetic and Evolutionary Computation Conference, pp. 569–580. Springer (2004)


17. Owens, N., Stepney, S.: The Game of Life rules on Penrose tilings: still life and oscillators. In: Adamatzky, A. (ed.) Game of Life Cellular Automata, pp. 331–378. Springer (2010)
18. Roselló-Merino, M., Bechmann, M., Sebald, A., Stepney, S.: Classical computing in nuclear magnetic resonance. Int. J. Unconv. Comput. 6, 163–195 (2010)
19. Russell, B., Stepney, S.: Zermelo navigation in the quantum brachistochrone. J. Phys. A: Math. Theor. 48(11), 115303 (2015)
20. Stepney, S.: Two-body relaxation in relativistic thermal plasmas. Mon. Not. R. Astron. Soc. 202(2), 467–481 (1983)
21. Stepney, S.: High Integrity Compilation. Prentice Hall (1993)
22. Stepney, S.: Incremental development of a high integrity compiler: experience from an industrial development. In: Third IEEE High-Assurance Systems Engineering Symposium (HASE 1998), pp. 142–149. IEEE (1998)
23. Stepney, S.: The neglected pillar of material computation. Physica D 237(9), 1157–1164 (2008)
24. Stepney, S.: Programming unconventional computers: dynamics, development, self-reference. Entropy 14(10), 1939–1952 (2012)
25. Stepney, S., Barden, R., Cooper, D.: Object Orientation in Z. Springer (2013)
26. Stepney, S., Braunstein, S.L., Clark, J.A., Tyrrell, A., Adamatzky, A., Smith, R.E., Addis, T., Johnson, C., Timmis, J., Welch, P., Milner, R., Partridge, D.: Journeys in non-classical computation I: a grand challenge for computing research. Int. J. Parallel Emergent Distrib. Syst. 20(1), 5–19 (2005)
27. Stepney, S., Braunstein, S.L., Clark, J.A., Tyrrell, A., Adamatzky, A., Smith, R.E., Addis, T., Johnson, C., Timmis, J., Welch, P., Milner, R., Partridge, D.: Journeys in non-classical computation II: initial journeys and waypoints. Int. J. Parallel Emergent Distrib. Syst. 21(2), 97–125 (2006)
28. Stepney, S., Cooper, D., Woodcock, J.: More powerful Z data refinement: pushing the state of the art in industrial refinement. In: International Conference of Z Users, pp. 284–307. Springer (1998)
29. Stepney, S., Cooper, D., Woodcock, J.: An electronic purse: specification, refinement, and proof. Technical monograph PRG-126. Oxford University Computing Laboratory (2000)
30. Stepney, S., Guilbert, P.W.: Numerical fits to important rates in high temperature astrophysical plasmas. Mon. Not. R. Astron. Soc. 204(4), 1269–1277 (1983)
31. Stepney, S., Lord, S.P.: Formal specification of an access control system. Softw. Pract. Exper. 17(9), 575–593 (1987)
32. Stepney, S., Polack, F., Alden, K., Andrews, P., Bown, J., Droop, A., Greaves, R., Read, M., Sampson, A., Timmis, J., Winfield, A. (eds.): Engineering Simulations as Scientific Instruments: A Pattern Language. Springer (2018)
33. Stepney, S., Polack, F., Toyn, I.: Patterns to guide practical refactoring: examples targetting promotion in Z. In: International Conference of B and Z Users, pp. 20–39. Springer (2003)
34. Stepney, S., Rasmussen, S., Amos, M. (eds.): Computational Matter. Springer (2018)
35. Stepney, S., Smith, R.E., Timmis, J., Tyrrell, A.M., Neal, M.J., Hone, A.N.W.: Conceptual frameworks for artificial immune systems. Int. J. Unconv. Comput. 1(3), 315–338 (2005)
36. Stepney, S., Whitley, D., Cooper, D., Grant, C.: A demonstrably correct compiler. Formal Aspects Comput. 3(1), 58–101 (1991)


Contents

Putting Natural Time into Science (Roger White and Wolfgang Banzhaf) 1
Evolving Programs to Build Artificial Neural Networks (Julian F. Miller, Dennis G. Wilson and Sylvain Cussat-Blanc) 23
Anti-heterotic Computing (Viv Kendon) 73
Visual Analytics (Ian T. Nabney) 87
Playing with Patterns (Fiona A. C. Polack) 103
From Parallelism to Nonuniversality: An Unconventional Trajectory (Selim G. Akl) 123
A Simple Hybrid Event-B Model of an Active Control System for Earthquake Protection (Richard Banach and John Baugh) 157
Understanding, Explaining, and Deriving Refinement (Eerke Boiten and John Derrick) 195
Oblique Strategies for Artificial Life (Simon Hickinbotham) 207
Compositional Assume-Guarantee Reasoning of Control Law Diagrams Using UTP (Kangfeng Ye, Simon Foster and Jim Woodcock) 215
Sound and Relaxed Behavioural Inheritance (Nuno Amálio) 255
Growing Smart Cities (Philip Garnett) 299
On Buildings that Compute. A Proposal (Andrew Adamatzky, Konrad Szaciłowski, Zoran Konkoli, Liss C. Werner, Dawid Przyczyna and Georgios Ch. Sirakoulis) 311
Complex Systems of Knowledge Integration: A Pragmatic Proposal for Coordinating and Enhancing Inter/Transdisciplinarity (Ana Teixeira de Melo and Leo Simon Dominic Caves) 337
On the Emergence of Interdisciplinary Culture: The York Centre for Complex Systems Analysis (YCCSA) and the TRANSIT Project (Leo Simon Dominic Caves) 363
On the Simulation (and Energy Costs) of Human Intelligence, the Singularity and Simulationism (Alan F. T. Winfield) 397

Putting Natural Time into Science

Roger White and Wolfgang Banzhaf

R. White: Department of Geography, Memorial University of Newfoundland, St. John's, NL A1B 3X9, Canada. e-mail: [email protected]
W. Banzhaf: BEACON Center for the Study of Evolution in Action and Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824, USA. e-mail: [email protected]

© Springer Nature Switzerland AG 2020. A. Adamatzky and V. Kendon (eds.), From Astrophysics to Unconventional Computation, Emergence, Complexity and Computation 35, https://doi.org/10.1007/978-3-030-15792-0_1

Abstract This contribution argues that the notion of time used in the scientific modeling of reality deprives time of its real nature. Difficulties ranging from logical paradoxes to mathematical incompleteness and numerical uncertainty ensue. How can the emergence of novelty in the Universe be explained? How can the creativity of the evolutionary process leading to ever more complex forms of life be captured in our models of reality? These questions are deeply related to our understanding of time. We argue here for a computational framework of modeling, which seems to us the only currently known type of modeling available in science that can capture those aspects of the nature of time required to better model and understand real phenomena.

1 Introduction

Since its origin more than two millennia ago, epistemological thinking in the West has been driven by a desire to find a way to certify knowledge as certain. Mathematics and logic seemed to offer a template for certainty, and as a consequence, modern science as it emerged from the work of scholars like Galileo and Newton aimed as much as possible for mathematical formalization. But by the beginning of the twentieth century the certainty of logic and mathematics was no longer an unquestioned truth: it had become a research question. From paradoxes in the foundations of logic and mathematics to equations with solutions that are in a sense unknowable, certainty had begun to seem rather uncertain. By the 1930s, as mathematicians and logicians grappled with the problem of formalisation, they found themselves being forced to look beyond formal systems, and to contemplate the possibility that the solution was
of a different nature than they had imagined, that the answers lay in the realm of the living, creative world—the world of uncertainty. Although they did not systematically pursue this line of thought, we believe that their intuition was essentially correct, and it serves as our starting point.

The basic reason for the failure of the formalisation programme, we contend, has to do with the inherent nature of logic and mathematics, the very quality that makes these fields so attractive as the source of certainty: their timelessness. Logic and mathematics, as formal systems, exist outside of time; hence their truths are timeless—they are eternal and certain. But because they exclude time, they are unable to represent one of the most fundamental characteristics of the world: its creativity, its ability to generate novel structures, processes, and entities. Bergson [4], by the beginning of the twentieth century, was already deeply bothered by the inability of mathematics and formal science to handle the creativity that is ubiquitous in the living world, and he believed that this was due to the use of "abstract" time—a representation of time—rather than "concrete" or real time. In spite of his perceptive analysis of the problem, however, he saw no real solution. In effect mathematics and the hard sciences would be unable to address the phenomenon of life because there was no way they could embody real or natural time in their formal and theoretical structures (while time as a parameter has been used in mathematical tools, this amounts to merely a representation of time). Real time could only be intuited, and intuition fell outside the realm of hard science. This view had been anticipated by Goethe [1, 12], who took metamorphosis as the fundamental feature of nature. Implicitly recognising that formal systems could not encompass metamorphosis, he proposed a scientific methodology based on the development of an intuition of the phenomena by means of immersion in them. A recent echo of Bergson is found in the work of Latour [10], who argues that existence involves continuous re-creation in real (natural) time, while the representations of science short-circuit this process of constructive transformation by enabling direct jumps to conclusions. He cleverly refers to this time-eliminating aspect of science as "double clic".

The solution to the problem of time, we believe, is to re-introduce real, natural time into formal systems. Bergson did not believe there was any way of doing this, because science was essentially tied to mathematics or other systems of abstraction which seemed always to eliminate real time. Today, however, we can introduce real or natural time into our formal systems by representing the systems as algorithms and executing them on a computer, which, because it operates in natural time, introduces natural time into the algorithm. The answer, in other words, lies in computing. (In this chapter we will normally use the expression natural time rather than real time in order to avoid confusion with the common use of the latter expression to refer to something happening immediately, as in "a real time solution".)

This contribution will develop the argument that various difficulties that arise in logic and formal scientific modelling point to the necessity of introducing natural time itself into our formal treatment of science, mathematics, and logic. We first discuss some of the difficulties in logic and scientific theory that arise either from
a failure to include time in the formal system, or if it is included, from the way it is represented. We then provide examples of the explanatory power that becomes available with the introduction of natural time into formal systems. Finally, the last part of the contribution offers an outline of the way forward.

2 The Role of Time in Mathematics and Science

By the end of the nineteenth century the question of the certainty of mathematics itself was being raised. This led initially to efforts to show that mathematics could be formalised as a logical system. Problems in the form of paradox soon emerged in this programme, and those difficulties in turn led to attempts to demonstrate that the programme was at least feasible in principle. However those attempts also failed when it was proven that mathematics must contain unprovable or undecidable propositions. The result was that mathematics came to resemble an archipelago of certainties surrounded by a sea of logically necessary uncertainty. Moreover, with the discovery of the phenomenon of deterministic chaos emerging in the classic three body problem, uncertainties emerged even within the islands of established mathematics.

Meanwhile, in physics, while thermodynamics had long seemed to be somehow problematic because of the nature of entropy, at least it produced deterministic laws. During the second half of the 20th century, however, it was shown by Prigogine [15, 17] and others [13] that these laws are special cases, and that there are no laws governing most phenomena arising in such systems, because the phenomena of interest arise when the systems are far from thermodynamic equilibrium, whereas the laws describe the equilibrium state. At the same time, in biology, there was a growing realisation that living systems exist in the energetic realm in which the traditional laws of thermodynamics are of only limited use. And with the discovery that DNA is the genetic material of life, much of biology was transformed into what might be termed the informatics of life.

All of these developments have in common that they introduced, irrevocably and in a radical way, uncertainty and unpredictability into mathematics and science. They also have in common that they arose from attempts to ensure certainty and predictability by keeping time, real time, out of the formal explanatory systems. To a large extent, in the practice of everyday science, these developments have been ignored. Science continues to focus on those areas where certainty seems to be attainable. However, many of the most interesting and important problems are ones that grow out of the uncertainties that have been uncovered over the past century: problems like the origin and nature of life, the nature of creative processes, and the origin of novel processes and entities. These tend to be treated discursively rather than scientifically. However, we believe that it is now possible to develop formal treatments of problems like these by including time—real, natural time rather than any formal representation of it—in the explanatory mechanism. This will also mean recognising that a certain degree of uncertainty and unpredictability is inherent in any scientific treatment of these problems. But that would be a strength rather than a failure,
because uncertainty and indeterminacy are inherent characteristics of these systems, and so any explanatory mechanism that doesn’t generate an appropriate degree of unpredictability is a misrepresentation. Creativity itself cannot be timeless, but it relies on the timeless laws of physics and chemistry to produce new phenomena that transcend those laws without violating them. In the next four sections we discuss in more detail the role of time in the treatment of paradox, incompleteness, uncertainty, and emergence in order to justify the necessity of including natural time itself rather than formal time in our explanatory systems.

2.1 Paradox

By the end of the 19th century the desire to show that mathematics is certain had led to Hilbert's programme to show that mathematics could be recast as a syntactical system, one in which all operations were strictly "mechanical" and semantics played a minimal role. In this spirit, Frege, Russell and Whitehead reduced several branches of mathematics to logic, thus apparently justifying the belief that the programme was feasible. However, Russell's work produced a paradox in set theory that ultimately raised doubts about it. Russell's paradox asks if the set of all sets that do not contain themselves contains itself. Paradoxically, if it doesn't, then it does; if it does, then it doesn't. Several solutions have been proposed to rid logic of this paradox, but there is still some debate as to whether any of them is satisfactory [8].

The paradox seems to emerge because of the time-free nature of logic. If we treat the process described in the analysis as an algorithm and execute it, then the output is an endless oscillation. Does the set contain itself? First it doesn't, then it does, then it doesn't, … . There is no contradiction. This oscillation depends on our working in discontinuous time—in this case the time of the computer's clock. We can in principle make the clock speed arbitrarily fast, but if we could go to the limit of continuous time in operating the computer we would see the paradox re-emerge as something analogous to a quantum superposition: since the superposition endures it doesn't depend on time, and independent of time, the value is always both "does" and "doesn't". However, if we were to observe the state of the system at a particular instant, the superposition would collapse to a definite but arbitrary value—"does" or "doesn't"—just as Schrödinger's famous cat is only definitively alive or dead when the box is opened. Of course a computer must work with a finite clock speed and so the paradox cannot appear.

This resolution of the paradox by casting it as an algorithm to be executed in natural time emphasizes the difference between the existence of a set and the process of constructing the set, a difference that echoes Prigogine's [15, 17, p. 320] distinction between being and becoming. In the algorithmic construction of the set, the size of the set oscillates between n and n + 1, sizes that correspond to "doesn't" and "does". Most well-known paradoxes are similar to Russell's in that they involve self-reference. However, Yanofsky [29, pp. 24–25] has demonstrated one that is non-self-referential; but it, too, yields an oscillation (true, false, true, false, …) if executed as an algorithm.
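To make the oscillation concrete, here is a minimal sketch (ours, not the authors'; the choice of Python and of six clock ticks is arbitrary). It treats the defining rule of Russell's set as an update step executed at each tick of a discrete clock: the membership question never settles into a contradiction, it simply alternates.

```python
# A toy illustration: treat "does R contain itself?" as a state that is
# re-evaluated at each tick of a discrete clock. The defining rule of
# Russell's set R is that R contains itself exactly when it does not;
# executed in time, that rule is just an update function.

def russell_update(contains_itself: bool) -> bool:
    """Apply the defining rule of R once: membership flips on every evaluation."""
    return not contains_itself

state = False          # arbitrary initial answer to the membership question
history = []
for tick in range(6):  # six ticks of the computer's clock
    state = russell_update(state)
    history.append(state)

print(history)         # [True, False, True, False, True, False]
```

Run at any finite clock speed, the program never produces a contradiction; it produces exactly the "doesn't, does, doesn't, …" sequence described above.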


2.2 Incompleteness

Whereas Russell discovered a paradox that cast doubt on the possibility of demonstrating that mathematics is a purely syntactical system, both Gödel and Turing came up with proofs that even to the extent that it is syntactical, it is impossible to demonstrate that it is. Gödel showed that even for a relatively limited deductive system there were true statements that couldn't be proven true within the system. In order to prove those statements, the system would have to be enlarged in some way, for example with additional axioms. But in this enlarged system there would again appear true statements that were not provable within it. This result highlighted the degree to which mathematics as it existed depended not only on axioms and logical deduction—that is, syntax—but also on the products of what was called mathematical intuition—the source of new concepts and procedures introduced to deal with new problems or problems that were otherwise intractable. In other words, mathematics was progressively extended by constructions based on these intuitions. The intuitions came with semantic content, which Gödel's result implicitly suggested could not be eliminated, even in principle.

But there was another problem. Gödel, like Hilbert, thought of deduction as a mechanical procedure; thus the idea of certainty was closely linked to the idea of a machine operating in a completely predictable manner. Of course at the time the idea was metaphorical; the deductions would actually be carried out by a person (the person was invariably referred to as a computer), but for the results to be reliable, the person would have to follow without error a precise sequence of operations that would yield the desired result; in other words, for each deduction the person would have to execute an algorithm. Thus the algorithm was implicitly part of the logical apparatus used to generate mathematics.

It was clear to Turing and Church that a formal understanding of the nature of these algorithms was necessary. To that end Turing formulated what is now known as the Turing machine (at the time, of course—1937—it was purely conceptual), and at about the same time Church developed an equivalent approach, the lambda calculus. An algorithm was considered legitimate as a procedure if it could be described as a Turing machine or formulated in the lambda calculus. Each algorithm corresponded to a particular Turing machine. A Turing machine was required to have a finite number of states, and a proof calculated on the machine would need to finish in a finite number of steps. By proving that it could not in general be shown whether or not algorithms would execute in a finite number of steps (the halting problem), Turing demonstrated, in results analogous to Gödel's, that some propositions were not provable. Church arrived at the same result using his lambda calculus. The results of Gödel, Turing, and Church showing that it is impossible to prove that a consistent mathematical system is also complete are complementary to Russell's discovery of the paradox in set theory, which suggests that an attempt to make a formal system complete will introduce an inconsistency.

Turing presumably imposed the restrictions of finite machine states and finite steps in order to ensure that Turing machines would be close analogues of the recursive equations which were the standard at that time for computing proofs. These
restrictions on the Turing machine were necessary conditions for proof, but as his results showed, they did not amount to sufficient conditions. In making these assumptions he effectively restricted the machines to producing time-free results. This was entirely reasonable given that his goal was to formalize a procedure for producing mathematical results which were themselves timeless. Nevertheless he found the restrictions to be somewhat problematic, as did Church and Gödel, because they meant that a Turing machine could not capture the generation of the new concepts and procedures that flowed from the exercise of mathematical intuition by mathematicians.

There was much informal discussion of this issue, including possibilities for circumventing the limitations. Turing suggested that no single system of logic could include all methods of proof, and so a number of systems would be required for comprehensive results. Stating this thought in terms of Turing machines, he suggested a multiple machine theory of mind—mind because he was still thinking of the computer as a person, and the machine as the algorithm that the person would follow mechanically. The multiple machine idea was that Turing machines would be chained, so that different algorithms would be executed sequentially, thus overcoming some of the limitations of the simple Turing machine. How would the sequence be determined? Initially the idea was that it would be specified by the mathematician, but subsequently other methods were proposed, ranging from stochastic choice to a situation in which each machine would choose the subsequent machine. Eventually Turing proposed that learning could provide the basis of choice. He observed that mathematics advances by means of mathematicians exercising mathematical intuition, which they then use to create new mathematics, a process he thought of as learning. He then imagined a machine that could learn by experience, by means of altering its own algorithms.

Time had been kept out of mathematics by defining it as consisting only of the achieved corpus of results and ignoring the process by which those results were generated. Turing and Church took a first step toward including the process by formalizing the treatment of the algorithms by which proofs were derived. But this did not seem to them (or others) to be sufficient, because it did not capture the deeply creative nature of mathematics as seen in the continual introduction of new concepts and procedures that established whole new areas of mathematics. We might interpret the journey from Turing algorithm to learning as an implicit recognition of the necessity of natural time in mathematics. On the other hand, while Turing had formalized the proof process in terms of a machine which would have to operate in natural time, the machine was defined in such a way that the results that it produced would be time free. Thus in postulating strategies like chained Turing machines to represent learning, he probably assumed that the results would also be time free. However, if mathematics is understood to include the process by which it is created, it will have to involve natural time, even if the result of that creative process is time free. While Turing spoke of learning, a more appropriate term might be creativity, and creativity—the emergence of something new—necessarily involves time.
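As an illustration of the point that an algorithm is a finite-state machine executed step by step in time, the following sketch (our own hypothetical example, not from the chapter) runs a Turing machine given as a transition table. Because the halting problem is undecidable in general, the runner can only be given a step budget; exhausting the budget says nothing about what the machine would eventually do.

```python
from collections import defaultdict

def run_tm(program, tape, state="start", blank="_", max_steps=1000):
    """Run a Turing machine given as {(state, symbol): (write, move, next_state)}.

    Since the halting problem is undecidable in general, we can only impose a
    step budget: if the budget is exhausted we report 'no verdict'; we never
    conclude that the machine runs forever.
    """
    tape = defaultdict(lambda: blank, enumerate(tape))  # sparse tape
    head = 0
    for step in range(max_steps):
        key = (state, tape[head])
        if key not in program:       # no rule applies: the machine halts
            contents = "".join(tape[i] for i in sorted(tape)).strip(blank)
            return ("halted", step, contents)
        write, move, state = program[key]
        tape[head] = write
        head += 1 if move == "R" else -1
    return ("no verdict", max_steps, None)

# A hypothetical two-rule machine: turn every scanned 0 or 1 into a 1,
# moving right, and halt at the first blank.
program = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
}
print(run_tm(program, "0010"))   # ('halted', 4, '1111')
```

Pointed at an arbitrary program, such a runner can only ever report "halted" or "no verdict within the budget"; it cannot, in general, report "never halts".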


2.3 Uncertainty

But is mathematics, in the narrower sense of established results, really entirely timeless? Perhaps. But there is at least one small part of it—deterministic chaos—that seems to be trying to break free and take up residence in natural time. The phenomenon was discovered by Henri Poincaré at the beginning of the twentieth century as he attempted to solve the Newtonian three-body problem. An equation that exhibits deterministic chaotic dynamics can, up to a point, be treated as a timeless structure and its properties investigated using standard mathematical techniques (some invented by Poincaré for this purpose). It has been shown, for example, that the attractor is a fractal object, and therefore infinitely complicated. As a consequence the solution trajectory cannot be written explicitly. It can, however, be calculated numerically—but only up to a point: since the attractor, as a fractal, is infinitely complex, we can never know exactly where the system is on it, and hence cannot predict the future states of the system.

Take one of the simplest possible cases, the difference equation version of the logistic function:

X_{t+1} = r X_t (1 - X_t), \quad 0 < X_t < 1, \quad 0 < r < 4.

Solving for the equilibrium value of X we have

X^{*} = 1 - 1/r.

This solution is stable for r <= 3; otherwise it is oscillatory. For r > 3.57 approximately, the oscillations are infinitely complex; i.e. the dynamics are chaotic. Since the solution cannot be written explicitly, to see what it looks like we calculate successive values of X, while recognizing that these values become increasingly approximate, and soon become entirely arbitrary—that is, they become unpredictable even though the equation generating them is deterministic. This can be dismissed as simply due to the rounding errors that result from the finite precision of the computer, but it is actually a consequence of the interaction of the rounding errors with the fractal nature of the attractor. The rate at which the values evolve from precise to arbitrary is described by the Lyapunov exponent.

So while the system may look well defined and timeless from an analytical point of view because the attractor determines the complete behaviour of the system, in fact the attractor is unknowable analytically, and can only be known (and only in a very limited way) by iterating the equation, or by other iterative techniques such as the one developed by Poincaré. The iterations take place in time, and so it seems that at least some of our knowledge of the behaviour of the equation necessarily involves natural time. Note that a physical process characterised by chaotic dynamics must also be unpredictable in the long run, because as the process "executes" it will be following a fractal attractor, and the physical system that is executing the process, being finite, will be subject to "rounding errors", which in effect act as a stochastic perturbation. In other words, the resolution that can be achieved by the physical system is less than that of the
attractor. This is the case with the three body problem that Poincaré was working on, and the reason that the planetary trajectories of the solar system are unpredictable at timescales beyond a hundred million years or so. It is also, in Prigogine's (1997) view, the fundamental reason for the unpredictability of thermodynamic systems at the microscopic level (Laplace's demon cannot do the math well enough), and also a necessary factor in macroscopic self-organization in that the unpredictability permits symmetry breaking—Prigogine calls this order by fluctuations.

With chaotic systems, then, we lose the promise that mathematics has traditionally provided of a precise, God's eye view over all time, and thus of certainty and predictability. We are left only with calculations in natural time that give us rapidly decreasing accuracy and hence increasing unpredictability. But from some points of view this is not necessarily a problem. As Turing speculated in his discussion of learning, learning requires trial and error, which would be pointless in a perfectly predictable world, and specifically it requires an element of stochasticity. Many others have made the same observation—that stochasticity seems to be a necessary element in any creative process. In physical systems stochastic perturbations are the basis of the symmetry breaking by which systems become increasingly complex and organized. Chaotic dynamics may thus play a positive role by providing a necessary source of stochasticity in physical, biological, and human systems.

Physics, with the major exception of thermodynamics (together with two very specific cases in particle physics: CPT and a case of a heavy-fermion superconductor [20]), is characterized by laws that are "time reversible" in the sense that they remain valid if time runs backward. In other words, time can be treated as a variable, t, and the laws remain valid when −t is substituted for t. This is referred to by some (e.g. [22]) as spatialized time, because we can travel in both directions in it, as we can in space. Spatialized time is a conceptualization and representation of natural time, whereas natural time is the time in which we and the world exist, independently of any representation of it. Spatializing time is thus a way of eliminating natural time by substituting a model for the real thing. The physics of spatialized time is essentially a timeless physics, since we have access to the entire corpus of physical laws in the same sense that we have access to the entire body of timeless mathematics. The fact that time can be treated as a variable permits the spectacularly accurate predictions that flow from physical theory: the equations can be solved to show the state of the system as a function of time, and thus the state at any particular time, whether past, present or future. This physics is in a deep sense deterministic. This is true even of quantum physics, where, as Prigogine [16] points out, the wave function evolves deterministically; uncertainty appears only when the wave function collapses as a result of observation. The determinism of spatialized time is the basis of Einstein's famous remark that "for us convinced physicists, the distinction between past, present and future is an illusion, although a persistent one" (as quoted in [16, p. 165]). His point was that time as we experience it flowing inexorably and irreversibly is an illusion; in relativistic space-time, the reality that underlies our daily illusory existence, we have access to all times. However, Prigogine [16] points out that the space-time of relativity is not necessarily spatialized; that is just the conventional interpretation. In any case, because it is apparently timeless, the physics of quantum
theory and relativity is understood to represent our closest approximation to certain knowledge of the world.

Thermodynamics represents a rude exception to this timelessly serene picture. Here time has a direction, and when it is reversed the physics doesn't work quite the same way. In the forward direction of time, the entropy of an isolated system increases until it reaches the maximum possible value given local constraints. In this sense the system is predictable. But when time is reversed, so that entropy is progressively lowered, the system becomes unpredictable, because, as Prigogine showed, when the entropy of a system is lowered, an increasing number of possible states appears, states that are macroscopically quite distinct but have similar entropy levels. But only one of these can actually exist, and in general we have no certain way of knowing which one that will be. The same phenomenon appears in reversed computation. In other words the reversed time future is characterized by a bifurcation tree of possibilities. Its future is open; it is no longer deterministic or fully predictable, but rather path dependent.

This discussion of reversed time futures applies to isolated systems, the ones for which thermodynamic theory was developed. However, we do not live in an isolated system. Our planet is bathed in solar energy, which keeps it far from equilibrium, and we supplement this with increasing amounts of energy from other sources. Thus our open-system world is equivalent to a reversed time, isolated-system world. It is a world of path dependency and open futures of a self-organizing system. The open futures of these systems are a source of unpredictability or uncertainty, just as is the uncertainty arising from chaotic dynamics, and the two work together.

In both of these situations in which unpredictability appears, time continues to be treated as a variable, but in the case of far from equilibrium systems the behaviour is time asymmetric: if we treat decreasing entropy as equivalent to time reversal, physical systems are deterministic in +t but undetermined in −t. In the case of chaotic systems, while the process may be mathematically deterministic in both +t and −t, the outcome is undetermined for both directions of time. In both cases, since a mathematical treatment of the phenomenon is of limited use, the preferred approach is computational. This is not just a pragmatic choice. It reflects the poverty of spatialized time compared to the possibilities offered by real time. Whereas physics, with the major exception of thermodynamics, is based on the assumption that spatialized time captures all the characteristics of time that are essential for a scientific understanding of the world, natural time involves no assumptions. It is simply itself. A computer can only implement an algorithm step by step, in natural time. As a consequence, algorithms as they are executed do not depend on any conceptualization or representation of time beyond a working assumption that time is discontinuous or quantized, rather than continuous, an assumption imposed by the computer's clock.
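The loss of predictability described for the logistic map earlier in this section can be seen directly by iterating the map in natural time. The sketch below is ours and the parameter values are illustrative: it follows two trajectories whose initial conditions differ by 10^-10, and then estimates the Lyapunov exponent numerically; for r = 4 the estimate should come out near ln 2 ≈ 0.693.

```python
import math

def logistic(x, r=4.0):
    """One step of the logistic map X_{t+1} = r * X_t * (1 - X_t)."""
    return r * x * (1.0 - x)

# Two initial conditions differing by 1e-10: in the chaotic regime (r = 4)
# the tiny difference is amplified until the trajectories are unrelated.
x, y = 0.3, 0.3 + 1e-10
for t in range(60):
    x, y = logistic(x), logistic(y)
    if t % 10 == 9:
        print(f"t = {t + 1:3d}   |x - y| = {abs(x - y):.3e}")

# Lyapunov exponent estimate: the average of log|f'(X_t)| along a trajectory,
# where f'(x) = r * (1 - 2x) for the logistic map.
r, x, acc, n = 4.0, 0.3, 0.0, 100_000
for _ in range(n):
    acc += math.log(abs(r * (1.0 - 2.0 * x)))
    x = logistic(x, r)
print("Lyapunov exponent estimate:", acc / n)
```

After a few dozen iterations the two trajectories are effectively uncorrelated, even though every step of the calculation is deterministic: the initial ten-decimal-place agreement has been consumed by the exponential divergence the Lyapunov exponent measures.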


2.4 Emergence

Far from equilibrium, self-organizing systems are the ones that we live in; our planet is essentially a spherical bundle of such systems. The dynamics of plate tectonics is driven by energy generated by radioactive decay in the earth's core; the complex behaviour of the oceans and atmosphere is driven by the flux of solar energy; and life itself, including human societies, also depends on the continuous input of energy from the sun. Self-organization is a kind of emergence—it is the process by which an organized structure or pattern emerges from the collective dynamics of the individual objects making up the system, whether these are molecules of nitrogen and oxygen organizing themselves into a cyclonic storm or individual people moving together to form an urban settlement. However, as the phrase self-organization suggests, there is no prior specification of the form that is to emerge, and because of the inherent indeterminacy of far-from-equilibrium systems, there is always a degree of uncertainty as to exactly what form will appear, as well as where and when it will emerge. These forms are essentially just patterns in the collection of their constituent particles or objects. Unlike their constituent objects, they have no existence as independent entities, and they cannot, simply as patterns, act on their environment—in other words, they have no agency. For this reason the emergence of self-organized systems is called soft or weak emergence.

Strong emergence, on the other hand, refers to the appearance of new objects, or new types of objects, in the system. We can identify three levels of strong emergence:

1. In high energy physics, forces and particles emerge through symmetry breaking. Unlike the increasing energy input required to drive self-organization, this process occurs as free energy in the system decreases and entropy increases.

2. At relatively moderate energy levels, physical systems produce an increasing variety of chemical compounds. These molecules have an independent existence and distinctive properties, like a characteristic colour or solubility in water, that are not simply the sum of the characteristics of their constituent atoms. They also have a kind of passive agency: for example, they can interact with each other chemically to produce new molecules with new properties, like a new colour. Of course they can also interact physically, by means of collisions, to produce weak emergence, for example in the form of a convection cell or a cyclonic storm. But chemical reactions can result in the simultaneous occurrence of both strong and weak emergence, as when reacting molecules and their products generate the macroscopic self-organized spiral patterns of the Belousov-Zhabotinsky reaction. The production of a particular molecule may either use or produce free energy, i.e. it may be either entropy increasing or entropy decreasing.

3. Also at relatively moderate energy levels, living systems emerge through chemical processes, but also through self-assembly of larger structures (cells, organs, organisms). The key characteristic of this level of strong emergence is that the process is initiated and guided by an endogenous model of the system and its relationship with its environment. While in (1) and (2) emergence is determined by the laws of physics, in this case it is determined by the relevant models working together with the laws of physics and chemistry. We include in living systems the meta-systems of life such as ecological, social, political, technological, and economic systems.

It is this third kind of strong emergence, the kind that depends on and is guided by models, that is the focus of our interest. Nevertheless the weak emergence of self-organizing systems remains important in the context of strong emergence, because a process of strong emergence, as in the case of the development of a fertilised egg into a mature multi-cellular individual, often makes use of local self-organization. Furthermore, self-organized structures are often the precursors of individuals with agency, making the transition by means of a process of reification, as when a self-organized settlement is incorporated as a city, a process that endows it with independent agency. In general, while self-organized systems are forced to a state of lower entropy by an exogenously determined flux of energy, living systems create and maintain their organized structures in order to proactively import energy and thus maintain a state of lower entropy. This causal circularity is a characteristic of such systems.

Model-based systems are a qualitatively new type. The models provide context dependent rules of behaviour that supplement the effects of the laws of physics and chemistry. Of course we can always reduce the structures that act as the models to their basic chemical components in order to understand, for example, the chemical structure of DNA or the chemistry of the synapses in a network of neurons, and there are good reasons for doing this: it allows us to understand the underlying physical mechanisms by which the model—and by extension the system of which it is a part—functions. But this reduction to chemistry and physics tells us nothing about how or why the system as a whole exists. These questions can only be answered at the level of the model considered as a model, because it is the model that guides the creation and functioning of the system of which it is a part. In other words, the reductionist programme reveals the syntax of the system, but tells us nothing of the semantics. It is the rules of behaviour of the system as a whole, rules provided by the model, that determine the actions of the system in its environment, and thus, ultimately, its success in terms of reproduction or survival. Part of the semantic content of the model is therefore the teleonomic goal of survival. The teleonomy is the result of the evolutionary process that produced the system. In this sense evolution is the ultimate source of semantics: as Dobzhansky said in the famous title of his paper, "Nothing in biology makes sense except in the light of evolution" [5].

The mathematical biologist Rosen [18, 19] speculated that life, rather than being a special case of physics and chemistry, in fact represents a generalization of those fields, in the sense that a scientific explanation of life would reveal new physics and chemistry. In other words the models inherent in living systems could be seen as new physics and chemistry: they introduce semantics as an emergent property of physico-chemical systems.
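As a minimal computational illustration of weak emergence (our own sketch, not part of the original chapter; the rule number and seed are arbitrary): an elementary cellular automaton applies a purely local rule, yet large-scale patterns appear that are specified nowhere in the rule table, and which patterns appear, and where, depends on the random initial condition.

```python
import random

def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton on a ring.

    Each cell looks only at itself and its two neighbours; the large-scale
    structures that appear are nowhere specified in the 8-entry rule table.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

random.seed(1)  # a different seed gives different emergent patterns
cells = [random.randint(0, 1) for _ in range(72)]
for _ in range(24):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

The structures in the printout are, in the terms used above, just patterns in the collection of constituent cells: they have no independent existence and no agency of their own.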


2.4.1 Models

An interesting and useful definition of life, due to Rosen [18], is that life consists of entities that contain models of themselves, that is, entities that exist and function by virtue of the models they contain. The most basic model is that coded in DNA. But neural systems also contain models, some of them, as we know, very elaborate. And of course some models are stored in external media such as books and computers. These three loci of models correspond to the three worlds of Karl Popper: World 1 is the world of physical existence; World 2 corresponds to mental phenomena or ideas; and World 3 consists of the externally stored and manipulated representations of the ideas. Worlds 2 and 3 are not generally considered by scientists to be constituents of the world that science seeks to explain. However, as Popper points out, they are in fact part of it, and exert causal powers on World 1 [14]. The implication is that a scientific understanding of biological and social phenomena requires not just an analysis at the level of physical and chemical causation, but also consideration of the causal role of meaning, or more specifically, meaning as embodied in models. Thus semantics re-enters the picture in a fundamental way.

A model that is a part of a living system must be a formal structure with semantics, not just syntax. It can function as a model only by virtue of its semantic content, since in order to be a model it must represent another system, a system of which, in the case of living organisms, it is itself usually a part. The modelled system thus provides the model with its semantic content. As Rosen points out, this contradicts the orthodox position of reductionist science (and in particular of Newtonian particle physics) that “every material behaviour [can] be … reduced to purely syntactical sequences of configurations in an underlying system of particles” [19, p. 68; see also p. 46ff]. Non-living systems lack semantics; they might thus be characterised as identity models of themselves, or zero-order models. Models associated with living organisms (e.g. DNA or an idea of self) would then be first order models. And some scientific models, those that are models of models (e.g. a mathematical model of DNA), would be second order models. This chapter is concerned with first order models.

We propose the following definition: a is a first order model of A, i.e. a is a functional representation of A (a r A, where r denotes the representation relation), if

1. a is a structure (not just a collection) of appropriate elements in some medium, whether chemical (e.g. a DNA molecule composed of nucleotides), cellular (e.g. a synaptic structure in a network of neurons), or symbolic (e.g. a program composed of legitimate statements in some programming language).

2. The structure a can act as an algorithm when executed on some suitable machine M, where M may be either separate from A (e.g. a computer running a model of an economic system), or some part of A (e.g. a bacterial cell running the behavioural model coded in its DNA).

3. The output of the algorithm corresponds to or consists of some characteristics of A. Specifically:


(a) Given a suitable environment, a running on M can create a new instance of A (e.g. in the environment provided by a warm egg, the DNA being run by the egg cell containing the DNA can create a new instance of the kind of organism that produced the egg).

(b) a can guide the behaviour of A in response to certain changes in the state of the environment (e.g. on the arrival of night, go to your nest; if inflation is greater than 3 percent, raise the interest rate).

4. 3(a) and 3(b) are evolved (or in human World 3 systems, designed) capabilities or functions that in general serve to maximise the chance of survival of A.

5. If A is a living organism, r is an emergent property of the underlying physical and chemical systems.

First (and higher) order models are essentially predictive. Although the output of a when executed on M is literally a response to a current condition c which represents input to a, because of the evolutionary history of a that brought it into existence, the behaviour of A in response to a is actually a response to a future, potentially detrimental, condition c′ predicted by a; in other words, on the basis of the current condition c, a predicts that c′ will occur, and as a consequence produces a response in A intended to prevent the occurrence of c′ or mitigate its impact. Thus a acts as a predictive algorithm, and guides the behaviour of A on the basis of its predictions.

The model a is thus rich in time. It involves both past time (the time in which it evolved) and future time (the time of its prediction), as well as, during execution, the natural time of the present. This reminds us of Bergson’s [4, p. 20] observation regarding natural (“concrete”) time: “the whole of the past goes into the making of the living being’s present moment.” Only if we know already about evolution as a process can we see the three times present in a. Only by virtue of being such a system ourselves do we have the ability to perceive its purpose. In this three-time aspect, a as it represents A is fundamentally different from a purely chemical or physical phenomenon in the conventional sense. It has semantic content which would be eliminated by any possible reduction to the purely mechanical causation of chemical and physical events. From a reductionist standpoint we would see only chemical reactions, nothing of representation, purpose, past, or anticipation. On the other hand, since a does actually have this semantic content, that content must emerge in the chemical system itself. It does so by virtue of the relationship between the part of the system that constitutes a and the larger system that is being modelled, just as a molecular property like solubility in water emerges from interactions among the atoms making up the molecule. In this sense Rosen was correct that life represents a radical extension of chemistry and physics: at no point do we require a vital principle or a soul to breathe semantics, or even life, into chemistry.

In the specific case of DNA, as a molecule it is essentially fixed from the point of view of the organism: over that timescale, as a molecule, it is timeless. But natural time appears as the organism develops following conception, when various genes are turned on or off, and this behaviour continues in the fully developed organism as its interactions with the environment are guided by various contingently activated combinations of genes. In that sense DNA acts as a model that changes as a result of its interactions with the modelled system of which it is a part.


This is reminiscent of the chained Turing machines proposed by Turing to permit creativity. Neural models, in contrast, lack a comprehensive fixed structure analogous to that of DNA; they are open-ended and develop or change continually as a result of interactions with their host organism and the environment. But in both cases, as a computational system, life is essentially a case of open-ended computation.
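As a toy illustration (not from the original essay) of the anticipatory behaviour described in this subsection, the following Python sketch shows a model a that reads a current condition c, predicts a future condition c′, and triggers a pre-emptive response in A. The rule tables simply paraphrase the examples given in the text ("on the arrival of night, go to your nest"; "if inflation is greater than 3 percent, raise the interest rate"); all names are illustrative assumptions.

```python
def model_a(current_condition):
    # Prediction step: map an observed current condition c to an anticipated future condition c'.
    predictions = {"dusk": "night", "inflation_above_3pct": "overheating_economy"}
    return predictions.get(current_condition)

def respond(predicted_condition):
    # Response step: act now to prevent or mitigate the predicted (not yet actual) condition.
    responses = {"night": "return_to_nest", "overheating_economy": "raise_interest_rate"}
    return responses.get(predicted_condition, "do_nothing")

print(respond(model_a("dusk")))                   # 'return_to_nest'
print(respond(model_a("inflation_above_3pct")))   # 'raise_interest_rate'
```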

2.4.2 Information

The model of a system represents information, and its role in the functioning of the system depends on its being treated as information by the system. Note that this is information in the sense of semantics, or meaningful information, rather than Shannon information, which is semantics-free and represents information capacity or potential information. In other words, we could say that while semantics represents the content or meaning of information, Shannon information represents its quantity, and syntax represents its structure. Shannon information is maximised when a system is in its maximum entropy state. In the case of a self-organized system, the macroscale pattern constrains the behaviour of the constituent particles so that the system’s entropy and hence its Shannon information is less than it would be if its particles were unconstrained by the self-organized structures. We do not know of a measure of semantic information; it seems unlikely that such a measure could even be defined. Nevertheless, it seems that the model is the means by which semantic content emerges from syntax. We speculate that it is ultimately the teleonomic nature of living systems that populates the vacant lands of Shannon information with the semantics of meaning-laden information. A model embedded in a living system does not simply represent some aspect of another system; it does so purposefully. In living systems, the function of the model is to guide the behaviour of the system of which it is a part, and it does this by predicting future states of both the system and its environment. System behaviour thus depends to some degree on the anticipated future state of the system and its environment—i.e. the behaviour is goal directed. In contrast, in the case of traditional feedback systems, behaviour depends on the current state of the system and its environment. We note the apparent irony that life, a system that depends for its origin and evolution on uncertainty, nevertheless depends for its survival on an ability to predict future states. In fact, it requires a balance of predictability and unpredictability. In Langton’s [9] terms, it exists on the boundary between order and chaos.
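As a small numerical aside (not part of the original text), the sketch below computes Shannon entropy for a discrete distribution. It illustrates only the quantity/capacity sense of information discussed above, which is maximised by the maximum-entropy (uniform) distribution; it says nothing about semantic content.

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum p_i log2 p_i, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: maximum for four outcomes
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))      # ~1.36 bits: a constrained, lower-entropy state
```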

2.4.3 Agency

First order models emerged with life in an evolutionary process, one in which the model both depends on and facilitates the persistence of the system of which it is a part. The model thus necessarily has a teleonomic quality—its purpose is ultimately to enhance the likelihood of its own survival and that of the host system that implements it. To this end, the model endows its host system with agency—i.e. it transforms the system into an agent that can act independently. The relationship between model and evolutionary process, the basis of strong emergence, seems fundamental: each seems to be necessary for the other. This is in a sense the basic assumption of the theory of biological evolution. In contrast, a self-organized system, the result of weak emergence, does not act independently to ensure its own persistence. Living systems, by virtue of their agency, act to maintain themselves in a state of low entropy.

2.5 Creative Algorithms

The models that guide the generation and behaviour of living systems are necessarily self-referential, since they are models of a system of which they are an essential part. This means that they cannot be represented purely as mathematical structures. However, if the mathematical structures are appropriately embedded in algorithms being executed in natural time, the problem disappears. Nevertheless, the definition of algorithm remains crucial. Rosen, with deep roots in mathematics, was never quite able to resolve the problems arising from self-reference because he worked with Turing’s definition of algorithm; this is clear when he claims, repeatedly, that life is not algorithmic. But as we have noted, the Turing machine was defined in such a way as to produce only results that are consistent with time-free mathematics. To generate that mathematics, the Turing machine must be supplemented by a source of learning or creativity.

Learning and creativity are essential characteristics of living systems, as is the appearance of new entities with agency, which learning and creativity make possible. Consequently, a formal understanding of life must include a formal treatment of learning, creativity and strong emergence. That requires algorithms that transcend Turing’s definition. It requires algorithms that are able to model their own behaviour and alter themselves on the basis of their models of themselves. Using a computer operating in natural time to execute only Turing algorithms is like insisting on using three-dimensional space to do only two-dimensional geometry: it is a colossal waste of capacity as well as a refusal to consider the unfolding world of possibilities that emerge in natural time.

3 Further Explorations on the Role of Time in Science

We have gotten so used to the concept of creativity and completely new solutions to problems, or to inventions that make our life easier and are introduced for the first time, that we tend to overlook the principal aspect of creating new things. In the daily processes of synthetic chemistry, for example, new molecules are generated every day by combining existing molecules into new combinations. Given the enormous extent of the combinatorial space of chemistry, we have to presume that some of those are created for the very first time.


If some of these compounds are stable and created today in the Universe for the first time—note we speak of the actual realization of material compounds, as opposed to the mere possibility of their existence being “discovered”—they come with a time stamp of today. Thus, every material substance or object has in some way attached to it a time stamp of when it or its earlier copies first appeared in the Universe. Time, therefore, is of absolute importance to everything that exists, and is able to characterize it in some way. Can we make use of that in the Sciences? Here, we want to look at the two sciences that provide modeling tools for others to use in their effort to model the material universe: mathematics and computer science.

3.1 Mathematics

We have already mentioned that mathematics uses the concept of time (if at all) in a spatial sense. This means that time can be considered as part of a space that can be traversed in all directions. Notably, it can be traversed backward in time! But mathematics is actually mostly concerned with the unchanging features of the objects and transformations it has conceptualized. Thus, it glosses over, or even ignores, changes in features, as they could prevent truth from getting established. For instance, a mathematical proof is a set of transformations of a statement into the values “true” or “false”, values that are unchanging and not dynamic. This reliability is its strength. Once a statement is established to be true, it is accepted into the canon of mathematically proven statements, and can serve as an intermediate for other proof transformations.

But what about a mathematics of time? What would such a mathematics look like? We don’t know yet, perhaps because the notion of time is something many mathematicians look at with suspicion, and rather than asking what such a mathematics would look like, they ask themselves whether time exists at all and how they can prove that it does not exist—except as an illusion in our consciousness [3]. Although Science has always worked like that—ignoring what it cannot explain and focusing on phenomena it can model and explain—we have reached a point now where we simply cannot ignore the nature of time any more as a concept that is key to our modeling of natural systems.

So let’s offer another speculation here. We said before that every object in the universe carries a property with it we can characterize as a time stamp, stating when it first appeared. This is one of its unalienable properties, whether we want to consider it or not. So how about imagining that every mathematical object and all the statements and transformations in mathematics would come with the feature of a time stamp? In other words, besides its other properties, an object, statement or transformation would carry a new property, the time when it was first created. This would help sort out some of the problems when trying to include the creation of mathematics into mathematics itself. It would actually give us a way of characterizing how mathematics is created by mathematicians. The rule would be that new objects, statements and transformations can only make use of what is already in existence at the time of their own creation.


Once we have achieved such a description, can we make a model of the process? Perhaps one of the natural things to ask is whether it would be possible to at least guess which objects, statements or transformations could be created next? The situation is reminiscent of the “adjacent possible” of Kauffman, who proposed that ecological systems inhabit a state space that is constantly expanding through accessing “adjacent” states that increase its dimensionality. What this includes is the notion that only what interacts with the existing (which we can call “the adjacent”) could be realized next. Everything else would be a creatio ex nihilo and likely never be realized.

Here is an example: Suppose we have a set of differential rate equations that describe a system at the current state. For simplicity, let’s assume that all the variables of the system carry a time stamp of this moment. Suppose now that we want to introduce a new variable, for another quantity that develops according to a new differential rate equation. Would it make sense to do that even without any coupling of this new variable to the existing system? We don’t think it would. In fact, the very nature of our wish to introduce this variable has to do with its interaction with the system as it is currently described. Thus, introducing a variable that can describe the adjacent possible has at least to have some interaction with the current system.

Dynamic set theory [11] is an example of how this could work. Dynamic set theory was inspired by the need to deal with sets of changing elements in a software simulation. Mathematically, normal sets are static, in that membership in a set does not change over time. But dynamic sets allow just that: sets can be defined over time intervals T, and might contain certain elements at certain times only. For example, if you have a set of elements

X = {a_1, a_2, a_3, b_1, b_2, b_3, c_1, c_2, c_3}

we can assign specific dynamic sets to a time interval T as follows:

A_T = {(t_1, {a_1, b_1, c_1}), (t_2, {a_2, b_1}), (t_T, ∅)}

and

B_T = {(t_1, {a_1, a_2}), (t_3, {a_3, c_1, c_3}), (t_T, ∅)}

We can then manipulate these sets using set operations, for instance:

A_T ∩ B_T = {(t_1, {a_1}), (t_T, ∅)}

or

A_T ∪ B_T = {(t_1, {a_1, a_2, b_1, c_1}), (t_2, {a_2, b_1}), (t_3, {a_3, c_1, c_3}), (t_T, ∅)}
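The following is a minimal Python sketch (not from the cited dynamic set theory paper) of dynamic sets as time-indexed families of ordinary sets, loosely following the A_T, B_T example above; the class name and API are illustrative assumptions.

```python
from collections import defaultdict

class DynamicSet:
    """A set whose membership is indexed by time points within an interval T."""
    def __init__(self, assignments):
        # assignments: dict mapping time point -> elements present at that time
        self.assignments = defaultdict(set, {t: set(s) for t, s in assignments.items()})

    def at(self, t):
        # Membership at time t; empty for times with no assignment.
        return self.assignments[t]

    def intersect(self, other):
        times = set(self.assignments) | set(other.assignments)
        return DynamicSet({t: self.at(t) & other.at(t) for t in times})

    def union(self, other):
        times = set(self.assignments) | set(other.assignments)
        return DynamicSet({t: self.at(t) | other.at(t) for t in times})

A = DynamicSet({"t1": {"a1", "b1", "c1"}, "t2": {"a2", "b1"}})
B = DynamicSet({"t1": {"a1", "a2"}, "t3": {"a3", "c1", "c3"}})
print(A.intersect(B).at("t1"))   # {'a1'}, matching A_T ∩ B_T at t_1
print(A.union(B).at("t2"))       # {'a2', 'b1'}
```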


We can see here that each of these elements is tagged with a particular time at which it is part of the dynamic set, and can take part in set operations for that particular moment. (Note that we have skirted the issue of how to measure time, and how to precisely determine a particular moment and its synchronous counterparts in other regions of the Universe. For now, we stick to classical time and assume a naive ability to measure it precisely.) A generalization of set theory to this case is possible. Our hope is that—ultimately—mathematics will be able to access the constructive, intuitional aspects of its own creation. Once we have assigned the additional property of time/age to mathematical objects, perhaps its generative process can be modeled.

Another example of mathematical attempts at capturing time in mathematics is real-time process algebra [25]. The idea of this approach is to try to describe formally what a computational system is able to do, in particular its dynamic behavior. This project of formalization was generalized under the heading of “denotational mathematics” [26, 27]. These are all interesting attempts to capture the effect of time within the framework of Mathematics, but they fall short of the goal, because they are descriptive in nature, i.e. they are not generative and able to create novel structures, processes or variables themselves.

3.2 Computer Science

Computers allow the execution of mathematical models operationalized as algorithms. But as we have seen from the discussion in this chapter, mathematics currently deals with spatialized time, not real, natural time. Thus, if we were only to aim at simulating mathematical models, we would not need natural time. This is indeed Turing’s definition of an algorithm, restricted exactly in the way required to make sure that it cannot do anything that requires natural time, so that the computer executing a Turing algorithm is only doing what, in principle, timeless mathematics can do.

Here, instead, we aim for algorithms to execute on machines that need to go beyond traditional mathematical models. We need to provide operations within our algorithms that allow for modification of models. Let us briefly consider how variables (potential observables of the behavior of a [simulated] model) are realized in a computer: they are handled using the address of their memory location. Thus if we allow memory address manipulations in our algorithms, like allocating memory for new variables, or garbage collection (for variables/memory locations that have fallen out of use), we should be able to modify at least certain aspects of a model (the variables). Since the address space of a computer is limited, memory locations can be described by integer numbers in a certain range, so we are able to modify them during execution.

Of course, variables are but one class of entities that need to be modified from within the code. Reflective computer languages allow precisely this type of manipulation [21]. Reflection describes the ability of a computer language to modify its own structure and behavior.


Mostly interpreted languages have been used for reflection, yet more modern approaches like SELF offer compiling capabilities, based on an object-oriented model. As Sobel and Friedman write: “Intuitively, reflective computational systems allow computations to observe and modify properties of their own behavior, especially properties that are typically observed only from some external, meta-level viewpoint” [23]. What seems to make SELF particularly suitable is its ability to manipulate methods and variables in the same framework. In fact, there is no difference in SELF between them. Object classes are not based on an abstract collection of properties and their inheritance in instantiation, but on prototype objects, object copy and variation. We believe that SELF allows an easier implementation of an evolutionary system than other object-oriented languages. Susan Stepney’s work [24] in the context of the CoSMoS project provides a good discussion of the potential of reflective languages to capture emergent phenomena through self-modification. In order for a self-modifying system not to sink into a chaotic mess, though, we shall probably again need to time stamp the generation of objects.

However, the open-ended power of those systems might only come into its own when one of the key aspects of natural time is respected as well—the fact that one cannot exit natural time. This calls for systems that are not terminated. Natural open-ended processes like scientific inquiry or economic activity or biological evolution do not allow termination and restart. While objects in those systems might have a limited lifetime, entire systems are not “rebooted”. Instead, new objects have to be created and integrated into the dynamics of the existing systems. We return here to a theme already mentioned with Turing machines: the traditional idea of an algorithm, while having to make use of natural time during its execution as a step-by-step process, attempts to ignore time by requiring the algorithm to halt. Traditional algorithms are thus constructed to halt for their answer to be considered definitive. This, in fact, makes them closed system approaches to computation, as opposed to streaming processes, which analyze data continuously and provide transient answers at any time [6]. We might want to ask: What are the requirements for systems that do not end, i.e. do not exit natural time? [2].
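To make the idea of reflection concrete, here is a minimal Python sketch (not SELF, and not the authors' system): an object that observes its own callable members, rewrites one of its methods at run time, and time stamps the change. The class, method names, and behaviour are illustrative assumptions only.

```python
import time
import types

class ReflectiveAgent:
    """Toy agent that observes and rewrites its own behaviour at run time."""
    def __init__(self):
        self.created = {"__init__": time.time()}   # time stamp generated attributes

    def step(self, x):
        return x + 1

    def observe(self):
        # Introspect current behaviour: list the public callable members of this object.
        return [name for name in dir(self)
                if callable(getattr(self, name)) and not name.startswith("_")]

    def rewrite(self, name, func):
        # Replace (or add) a method on this instance and time stamp the modification.
        setattr(self, name, types.MethodType(func, self))
        self.created[name] = time.time()

agent = ReflectiveAgent()
print(agent.observe())                           # ['observe', 'rewrite', 'step']
print(agent.step(1))                             # 2
agent.rewrite("step", lambda self, x: x * 10)    # self-modification
print(agent.step(1))                             # 10
```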

3.3 Other Sciences

In this contribution we do not have enough space to discuss in detail how natural phenomena as encountered in simple and complex systems can inform the corresponding sciences—which attempt to model those phenomena (physics, chemistry, biology, ecology and economics)—about natural time. But we believe it is important to emphasize that a clear distinction should be made between our modeling attempts and the actual phenomena underlying them. In the past, there were times when the model and the reality were conceptually not separated. For instance, the universe was considered to be like clockwork, or later a steam engine, and even later a giant computer.


All of these attempts to understand the universe mistook the underlying system for its metaphor.

4 Conclusion

Our argument here is not that the Universe is a giant computer [7], preferably running an irreducible computation [28]. This would interchange the actual system with the model of it. Rather, our argument is that time is so fundamental to the Universe that we need tools (computers) and formalisms (algorithms) that rely on natural time to be able to faithfully model its phenomena. We believe that there are many phenomena in the natural and artificially made world making use of novelty, innovation, emergence, or creativity, which have resisted modeling attempts with current techniques. We think those phenomena are worth the effort of changing our concepts in order to accommodate them into our world view and allow us to develop models of them. As hard as it might be to do that, what would Science be without taking stock of what is out there in the world and attempting to incorporate it into our modelling efforts?

Acknowledgements This essay was written on the occasion of the Festschrift for Susan Stepney’s 60th birthday. It is dedicated to Susan, whose work has been so inspiring and deep.

References

1. Amrine, F.: The Metamorphosis of the Scientist, vol. 5, pp. 187–212. North American Goethe Society (1990)
2. Banzhaf, W., Baumgaertner, B., Beslon, G., Doursat, R., Foster, J., McMullin, B., de Melo, V., Miconi, T., Spector, L., Stepney, S., White, R.: Defining and simulating open-ended novelty: requirements, guidelines, and challenges. Theory Biosci. 135, 131–161 (2016)
3. Barbour, J.: The End of Time: The Next Revolution in Our Understanding of the Universe. Oxford University Press (2001)
4. Bergson, H.: Creative Evolution, vol. 231. University Press of America (1911)
5. Dobzhansky, T.: Nothing in biology makes sense except in the light of evolution. Am. Biol. Teach. 35, 125–129 (1973)
6. Dodig-Crnkovic, G.: Significance of models of computation, from Turing model to natural computation. Minds Mach. 21, 301–322 (2011)
7. Fredkin, E.: An introduction to digital philosophy. Int. J. Theor. Phys. 42, 189–247 (2003)
8. Irvine, A., Deutsch, H.: Russell’s paradox. In: Zalta, E. (ed.) The Stanford Encyclopedia of Philosophy (Winter 2016 Edition). https://plato.stanford.edu/archives/win2016/entries/russell-paradox (2016)
9. Langton, C.: Computation at the edge of chaos: phase transitions and emergent computation. Phys. D 42, 12–37 (1990)
10. Latour, B.: Enquête sur les modes d’existence. Une anthropologie des Modernes. La Découverte (2012)
11. Liu, S., McDermid, J.A.: Dynamic sets and their application in VDM. In: Proceedings of the 1993 ACM/SIGAPP Symposium on Applied Computing: States of the Art and Practice, pp. 187–192. ACM (1993)


12. Miller, E.: The Vegetative Soul: From Philosophy of Nature to Subjectivity in the Feminine. SUNY Press (2012)
13. Nicolis, G., Prigogine, I.: Self-organization in Nonequilibrium Systems: From Dissipative Structures to Order Through Fluctuations. Wiley, New York (1977)
14. Popper, K.: The Open Universe. Rowman and Littlefield, Totowa, NJ (1982)
15. Prigogine, I.: From Being to Becoming: Time and Complexity in the Physical Sciences. W.H. Freeman and Co., New York (1981)
16. Prigogine, I.: The End of Certainty. Simon and Schuster (1997)
17. Prigogine, I., Stengers, I.: Order Out of Chaos: Man’s New Dialogue with Nature. Bantam Books (1984)
18. Rosen, R.: Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life. Columbia University Press (1991)
19. Rosen, R.: Essays on Life Itself. Columbia University Press (2000)
20. Schemm, E., Gannon, W., Wishne, C., Halperin, W., Kapitulnik, A.: Observation of broken time-reversal symmetry in the heavy-fermion superconductor UPt3. Science 345(6193), 190–193 (2014)
21. Smith, B.: Behavioral reflection in programming languages. Ph.D. thesis, Department of Electrical Engineering and Computer Science, MIT (1982)
22. Smolin, L.: Time Reborn: From the Crisis in Physics to the Future of the Universe. Houghton Mifflin Harcourt (2013)
23. Sobel, J.M., Friedman, D.P.: An introduction to reflection-oriented programming. In: Proceedings of Reflection, vol. 96 (1996)
24. Stepney, S., Hoverd, T.: Reflecting on open-ended evolution. In: Proceedings of the 11th European Conference on Artificial Life (ECAL 2011), pp. 781–788. MIT Press, Cambridge, MA (2011)
25. Wang, Y.: The real-time process algebra (RTPA). Ann. Softw. Eng. 14, 235–274 (2002)
26. Wang, Y.: Software science: on the general mathematical models and formal properties of software. J. Adv. Math. Appl. 3, 130–147 (2014)
27. Wang, Y.: A denotational mathematical theory of system science: system algebra for formal system modeling and manipulations. J. Adv. Math. Appl. 4, 132–157 (2015)
28. Wolfram, S.: A New Kind of Science. Wolfram Science Inc. (2002)
29. Yanofsky, N.S.: The Outer Limits of Reason: What Science, Mathematics, and Logic Cannot Tell Us. MIT Press (2013)

Evolving Programs to Build Artificial Neural Networks

Julian F. Miller, Dennis G. Wilson and Sylvain Cussat-Blanc

Abstract In general, the topology of Artificial Neural Networks (ANNs) is human-engineered and learning is merely the process of weight adjustment. However, it is well known that this can lead to sub-optimal solutions. Topology and Weight Evolving Artificial Neural Networks (TWEANNs) can lead to better topologies; however, once obtained they remain fixed and cannot adapt to new problems. In this chapter, rather than evolving a fixed-structure artificial neural network as in neuroevolution, we evolve a pair of programs that build the network. One program runs inside neurons and allows them to move, change, die or replicate. The other is executed inside dendrites and allows them to change length and weight, be removed, or replicate. The programs are represented and evolved using Cartesian Genetic Programming. From the developed networks multiple traditional ANNs can be extracted, each of which solves a different problem. The proposed approach has been evaluated on multiple classification problems.

Julian F. Miller—It is with great pleasure that I offer this article in honour of Susan Stepney’s 60th birthday. Susan has been a very stimulating and close colleague for many years.

1 Introduction

Artificial neural networks (ANNs) were first proposed seventy-five years ago [45]. Yet, they remain poor caricatures of biological neurons. ANNs are almost always static arrangements of artificial neurons with simple activation functions.


Learning is largely considered to be the process of adjusting weights to make the behaviour of the network conform as closely as possible to a desired behaviour. We refer to this as the “synaptic dogma”. Indeed, there is abundant evidence that biological brains learn through many mechanisms [32] and in particular, changes in synaptic strengths are at best a minor factor in learning since synapses are constantly pruned away and replaced by new synapses [66]. In addition, restricting learning to weight adjustment leads immediately to the problem of so-called catastrophic forgetting. This is where retraining an ANN on a new problem causes the previously learned behaviour to be disrupted if not forgotten [19, 44, 55]. One approach to overcoming catastrophic forgetting is to create developmental ANNs which, in response to environmental stimulus (i.e. being trained on new problems), grow a new sub-network which integrates with the existing network. Such new networks could even share some neurons with pre-existing networks.

In contrast to the synaptic dogma in ANNs, in neuroscience it is well known that learning is strongly related to structural changes in neurons. Examples of this are commonplace. Mice reared in the dark and then placed in the light develop new dendrites in the visual cortex within days [78]. Animals reared in complex environments where active learning is taking place have an increased density of dendrites and synapses [37, 38]. Songbirds in the breeding season increase the number, size and spacing of neurons, where the latter is caused by increases in dendritic arborization [74]. It is also well known that the hippocampi of London taxi drivers are significantly larger relative to those of control subjects [43]. Rose even argues that after a few hours of learning the brain is permanently altered [60]. Another aspect supporting the view that structural changes in the brain are strongly associated with learning is simply that the most significant period of learning in animals happens in infancy, when the brain is developing [8].

There have been various attempts to create developmental ANNs and we review past work in Sect. 2. Although many interesting past approaches have been presented, generally the work has not continued. When one considers the enormous potential of developmental neural networks there needs to be sustained and long-term research on a diversity of approaches. In addition there is a need for such approaches to be applied in a variety of real-world applications.

2 Related Work on the Development of ANNs

Although non-developmental in nature, a number of methods have been devised which under supervision gradually augment ANNs by adding additional neurons or by joining trained ANNs together via extra connections. ‘Constructive neural networks’ are traditional ANNs which start with a small network and add neurons incrementally while training error is reduced [13, 18]. Modular ANNs use multiple ANNs each of which has been trained on a sub-problem and these are combined by a human expert [64]. Both of these approaches could be seen as a form of human-engineered development. More recent approaches adjust weighted connections between trained
networks on sub-problems [62, 72]. Aljundi et al. have a set of trained ANNs for each task (experts) and use an additional ANN as a recommender as to which expert to use for a particular data instance [1]. A number of authors have investigated ways of incorporating development to help construct ANNs [41, 70]. Researchers have investigated a variety of genotype representations at different levels of abstraction. Cangelosi et al. defined genotypes which were a mixture of variables, parameters, and rules (e.g. cell type, axon length and cell division instructions) [6]. The task was to control a simple artificial organism. Rust et al. constructed a binary genotype which encoded developmental parameters that controlled the times at which dendrites could branch and how the growing tips would interact with patterns of attractants placed in an environment [61]. Balaam investigated controlling simulated agents using a two-dimensional area with chemical gradients in which neurons were either sensors, affectors, or processing neurons according to location [3]. The neurons were defined as standard continuous time recurrent neural networks (CTRNNS). The genotype was effectively divided into seven chromosomes each of which read the concentrations of the two chemicals and the cell potential. Each chromosome provided respectively the neuron bias, time constant, energy, growth increment, growth direction, distance to grow and new connection weight. A variety of grammar-based developmental methods for building ANNs have been proposed in the literature. Kitano evolved matrix re-writing rules to develop an adjacency matrix defining the structure of a neural network [36]. He used backpropagation to adjust the connection weights. He applied the technique to encoder-decoder problems of various sizes. Kitano claimed that his method produced superior results to direct methods. However, it must be said, it was later shown in a more careful study by Siddiqi and Lucas, that the two approaches were of equal quality [65]. Belew [4] evolved a two-dimensional context sensitive grammar that constructed polynomial networks (rather than ANNs) for time-series prediction. Gradient descent was used to refine weights. Genotypes were variable length with a variety of mutation operators. He found a gene-doubling mutation to be critically important. Gruau devised a grammar-based approach called cellular encoding in which ANNs were developed using graph grammars [22, 23]. He evaluated this approach on hexapod robot locomotion and pole-balancing. Kodjabachian and Meyer used a “geometry-orientated” variant of cellular encoding to develop recurrent neural networks to control the behaviour of simulated insects [39]. Drchal and Šnorek [10] replaced the tree-based cellular encoding of Gruau with alternative genetic programming methods, namely grammatical evolution (GE) [63] and gene expression programming (GEP) [16]. They also investigated the use of GE and GEP to evolve edge encoded [42] development trees. They evaluated the developed ANNs on the two-input XOR problem. Jung used a context-free grammar to interpret an evolved gene sequence [31]. When decoded it generates 2D spatially modular neural networks. The approach was evaluated on predator-prey agents using a coevolution. Floreano and Urzelai [17] criticised developmental methods that developed fixed weights. They used a matrix re-writing method inspired by Kitano, but each connection either had a fixed weight or the neuron could choose one of four synaptic adjustment rules (i.e. 
plasticity).


After development stopped synaptic adjustment rules were applied to all neuron connections. They applied their method to robot controllers and found that synaptic plasticity produced ANNs with better performance than fixed weight networks. In addition they were more robust in changed environments. Jacobi presented a low-level approach in which cells used artificial genetic regulatory networks (GRNs). The GRN produced and consumed simulated proteins that defined various cell actions (protein diffusion movement, differentiation, division, threshold). After a cellular network had developed it was interpreted as a neural network [30]. Eggenberger also used an evolved GRN [12]. A neural network phenotype was obtained by comparing simulated chemicals in pairs of neurons to determine if the neurons are connected and whether the connection is excitatory or inhibitory. Weights of connections were initially randomly assigned and Hebbian learning used to adjust them subsequently. Astor and Adami devised a developmental ANN model known as Norgev (Neuronal Organism Evolution) which encoded a form of GRN together with an artificial chemistry (AC), in which cells were predefined to exist in a hexagonal grid. Genes encoded conditions involving concentrations of simulated chemicals which determine the level of activation of cellular actions (e.g. grow axon or dendrite, increase or decrease weight, produce chemical) [2]. They evaluated the approach on a simple artificial organism. In later work, Hampton and Adami showed that neurons could be removed and the developmental programs would grow new neurons and recover the original functionality of the network (it computed the NAND function) despite having a non-deterministic developmental process [24]. Yerushalmi and Teicher [80] presented an evolutionary cellular development model of spiking neurons. Inside the cells was a bio-plausible genetic regulatory network which controls neurons and their dendrites and axons in 2D space. In two separate experiments the GRN in cells was evolved to produce: (a) specific synaptic plasticity types (Hebbian, AntiHebbian, Non-Hebbian and Spike-Time Dependent Plasticity) and (b) simple organisms that had to mate. In the latter, they examined the types of synaptic plasticity that arose. Federici used a simple recursive neural network as a developmental cellular program [14]. In his model, cells could change type, replicate, release chemicals or die. The type and metabolic concentrations of simulated chemicals in a cell were used to specify the internal dynamics and synaptic properties of its corresponding neuron. The weight of a connection between cells was determined by the difference in external chemical concentrations produced by the two cells. The position of the cell within the organism is used to produce the topological properties of a neuron: its connections to inputs, outputs and other neurons. From the cellular phenotype, Federici interpreted a network of spiking neurons to control a Khepera robot. Roggen et al. devised a highly simplified model of development that was targeted at electronic hardware [59]. Circuits were developed in two phases. Diffusers are placed in a cellular grid and diffuse signals (analogous to chemicals) to local neighbours. There can be multiple signal types and they do not interact with each other. At the same time as the diffusion phase an expression phase determines the function of the cells by matching signal intensities with a signal-function expression table. 
A genetic algorithm is used to evolve the positions of diffusing cells (which are given
a maximum signal) and the expression table. By interpreting the functionalities as various kinds of spiking neurons with particular dendritic branches, the authors were able develop spiking neural networks. Connection weights were fixed constants with a sign depending on whether the pre-synaptic neuron is excitatory or inhibitory. They evaluated the approach on character recognition and robot control. Some researchers have studied the potential of Lindenmayer systems for developing artificial neural networks. Boers and Kuiper adapted L-systems to develop artificial feed-forward neural networks [5]. They found that this method produced more modular neural networks that performed better than networks with a predefined structure. They showed that their method could produce ANNs for solving problems such as the XOR function. Hornby and Pollack evolved L-systems to construct complex robot morphologies and neural controllers [26, 27]. Downing developed a higher-level, neuroscience-informed approach which avoided having to handle axonal and dendritic growth, and maintained important aspects of cell signaling, competition and cooperation of neural topologies [9]. He adopted ideas from Deacon’s Displacement Theory [7] which is built on Edelman’s Darwinistic view of neurogenesis, known as “The Theory of Neural Group Selection” [11]. In the latter, neurons will only reach maturity if they grow axons to, and receive axons from, other neurons. Downing’s method has three phases. The first (translation) decodes the genotype which defines one or more neuron groups. In the second phase (displacement) the sizes and connectivity of neuron groups undergo modification. In the final stage (instantiation) populations of neurons and their connections are established. He applied this technique to the control of a multi-limbed starfish-like animat. Khan and Miller created a complex developmental neural network model that evolved seven programs each representing various aspects of idealised biological neurons [33]. The programs were represented and evolved using Cartesian Genetic Programming (CGP) [50]. The programs were divided into two categories. Three of the encoded chromosomes were responsible for ‘electrical’ processing of the ‘potentials’. These were the dendrite, soma and axo-synapse chromosomes. One chromosome was devoted to updating the weights of dendrites and axo-synapses. The remaining three chromosomes were responsible for updating the neural variables for the soma (health and weight), dendrites (health, weight and length) and axosynapse (health, length). The evolved developmental programs were responsible for the removal or replication of neural components. The model was used in various applications: intelligent agent behaviour (wumpus world), checkers playing, and maze navigation [34, 35]. Although not strictly developmental, Koutník et al. [40] investigated evolving a compression of the ANN weight matrix by mapping it to a real-valued vector of Fourier coefficients in the frequency domain. This idea reduces the dimensionality of the weight space by ignoring high-frequency coefficients, as in lossy image compression. They evaluated the merits of the approach on ANNs solving pole-balancing, ball throwing and octopus arm control. They showed that approach found solutions in significantly fewer evaluations than evolving weights directly.


Stanley introduced the idea of using evolutionary algorithms to build neural networks constructively (called NEAT). The network is initialised as a simple structure, with no hidden neurons, consisting of a feed-forward network of input and output neurons. An evolutionary algorithm controls the gradual complexification of the network by adding a neuron along an existing connection, or by adding a new connection between previously unconnected neurons [67]. However, using random processes to produce more complex networks is potentially very slow. It also lacks biological plausibility since natural evolution does not operate on aspects of the brain directly. Later Stanley introduced an interesting and popular extension to the NEAT approach called HyperNEAT [69] which uses an evolved generative encoding called a Compositional Pattern Producing Network (CPPN) [68]. The CPPN takes coordinates of pairs of neurons and outputs a number which is interpreted as the weight of that connection. The advantage this brings is that ANNs can be evolved with complex patterns where collections of neurons have similar behaviour depending on their spatial location. It also means that one evolved function (the CPPN) can determine the strengths of connections of many neurons. It is a form of non-temporal development, where geometrical relationships are translated into weights.

Developmental Symbolic Encoding (DSE) [71] combines concepts from two earlier developmental encodings, Gruau’s cellular encoding and L-systems. Like HyperNEAT it can specify connectivity of neurons via evolved geometric patterns. It was shown to outperform HyperNEAT on a shape recognition problem defined over small pixel arrays. It could also produce partly general solutions to a series of even-parity problems of various sizes. Huizinga et al. added an additional output to the CPPN program in HyperNEAT that controlled whether or not a connection between a pair of neurons was expressed [28]. They showed that the new approach produced more modular solutions and superior performance to HyperNEAT on three specially devised modular problems. Evolvable-substrate HyperNEAT (ES-HyperNEAT) implicitly defined the positions of the neurons [56]; however, it proved to be computationally expensive. Iterated ES-HyperNEAT proposed a more efficient way to discover suitable positioning of neurons [58]. This idea was taken further leading to Adaptive HyperNEAT which demonstrated that not only could patterns of weights be evolved but also patterns of local neural learning rules [57]. Like [28], in Adaptive HyperNEAT Risi et al. increased the number of outputs from the CPPN program to encode learning rate and other neural parameters.
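As an illustration of the HyperNEAT idea described above, the Python sketch below uses a hand-written stand-in for an evolved CPPN to map the coordinates of each pair of neurons to a connection weight. The coordinates, layer layout, and the function body are assumptions for illustration; they are not taken from the HyperNEAT implementation.

```python
import math

def cppn(x1, y1, x2, y2):
    # Stand-in for an evolved Compositional Pattern Producing Network:
    # any smooth function of the two neuron positions yields a geometric weight pattern.
    return math.sin(3 * (x1 - x2)) * math.exp(-((y1 - y2) ** 2))

# Two hypothetical layers of five neurons each, placed on a 2D substrate.
source_layer = [(-1.0, y / 4) for y in range(-2, 3)]
target_layer = [(1.0, y / 4) for y in range(-2, 3)]

# One evolved function determines the weights of all connections.
weights = [[cppn(x1, y1, x2, y2) for (x2, y2) in target_layer]
           for (x1, y1) in source_layer]
```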

3 Abstracting Aspects of Biological Brains

Once we accept that developmental ANNs are desirable, it becomes necessary to abstract and simplify important mechanisms from neuroscience. Which aspects of biological neurons are most relevant depends on the nature of the abstracted neuron model. For instance, if the abstracted developmental neural programs include genetic regulatory networks, one could consider epigenetic processes inside neurons, since recent evidence from neuroscience suggests that these may be important in neuron response to past environmental conditions [29, 75].

Here we compare and contrast the proposed abstract model with various aspects of biological neurons.

Brain development: All multicellular organisms begin as a single cell which undergoes development. Some of the cells become neural stem cells which gradually differentiate into mature neurons. In the proposed model we allow the user to choose how many non-output neurons to start with prior to development. However, the model assumes that there is a dedicated output neuron corresponding to each output in the suite of computational problems being solved. Further discussion on the topic of how to handle outputs can be found in Sect. 13.

Arrangement of neurons: The overall architecture of both ANNs and many neural developmental systems is fixed once developed, whereas biological neurons move themselves (in early development) and their branches change over time. Thus the morphology of the network is time dependent and can change during problem solving. In our model, we evolve a developmental process which means that the network of neurons and branches is time dependent.

Neuron structure: Biological neurons have dendrites and axons with branches. Each neuron has a single axon with a variable number of axon branches. In addition, it has a number of dendrites and dendrite branches. There are many types of neurons with different morphologies, some with few dendrites and others with huge numbers of highly branched dendritic trees. In addition neurons have a membrane, a nucleus and a cytoplasm. Since our model of the neuron has zero volume, these aspects are also not included. In our model, the user can choose the minimum and the maximum number of dendrites neurons can have. The evolved developmental programs determine how many dendrites individual neurons can have and indeed, different neurons can have different numbers of dendrites. We have not modelled the axon in our approach. We decided this to keep the proposed model as simple as possible.

Neuron volume: Neurons have many physical properties (volume, temperature, pressure, elasticity…). We have not modelled any of these and like conventional ANNs the neurons in our model are mathematical points. Dendrites are equally unphysical and are merely lines that emanate from neurons and are positioned on the left of the neuron position. They can pass through each other.

Neuron movement: In brains undifferentiated neurons migrate to specific locations in the early stages of development. When they reach their destinations they either develop into mature neurons complete with dendrites and an axon, or they undergo cell death [73]. Moreover, this ability of neurons to migrate to their appropriate positions in the developing brain is critical to brain architecture and function [46, 54]. We have allowed neuron movement in our model. Neuron movement in real brains is closely related to the physicality of neurons and this could have important consequences. Clearly, mature biological neurons that are entangled with many other neurons will have restricted movement.

Synapses: The connections between biological neurons are called synapses. Signal propagation is via the release of neurotransmitters from the pre-synaptic neuron to the post-synaptic neuron. Like traditional ANNs, we have no notion of neurotransmitters and signals are propagated as if by wires. However, unlike traditional ANNs, dendrites connect with their nearest neuron (on the left). These connections can change when dendrites grow or shrink and when neurons move. Thus, the number of connections between neurons is time-dependent.

Activity Dependent Morphology: There are few proposed models in which changes in levels of activity (in potentials or signals) between neurons lead to changes in neural morphology. This is an extremely important aspect of real brains [53]. It has not been included in the current model. Possibly a measure of signals could be used as an input to the developmental programs. We return to this issue in Sect. 13.

Neuron State: Biological neurons have dendritic trees which change over time. New dendrites can emerge and dendrite branches can develop or be pruned away. Indeed, we discussed in the introduction how important this aspect is for learning. We have abstracted this process by allowing neurons to have multiple dendrites which can replicate (an abstraction of branching) or be removed (an abstraction of pruning). In the model, this process is dependent on a variable called ‘health’ which is an abstraction of neuron and dendrite state. Khan et al. [35] first suggested the use of this variable.

4 The Neuron Model

The model we propose is new and conceptually simple. Two evolved neural programs are required to construct neural networks. One represents the neuron soma and the other the dendrite. The role of the soma program is to allow neurons to move, change, die or replicate. For the dendrite, the program needs to be able to grow and change dendrites, cause them to be removed and also to replicate. The approach is simple in two ways. Firstly, because only two evolved programs are required to build an entire neural network. Secondly, because a snapshot of the neural network at a particular time would show merely a conventional graph of neurons, weighted connections and standard activation functions. To construct such a developmental model of an artificial neural network we need neural programs that not only apply a weighted sum of inputs to an activation function to determine the output from the neuron, but can also adjust weights, create or prune connections, and create or delete neurons.


Since developmental programs build networks that change over time it is necessary to define new problem classes that are suitable for evaluating such approaches. We argue that trying to solve multiple computational problems (potentially even of different types) is an appropriate class of problems. The pair of evolved programs can be used to build a network from which multiple conventional ANNs can be extracted each of which can solve a different classification problem. We investigate many parameters and algorithmic variants and assess experimentally which aspects are most associated with good performance. Although we have concentrated in this paper on classification problems, our approach is quite general and it could be applied to a much wider variety of problems.


Fig. 1 The model of a developmental neuron. Each neuron has a position, health and bias and a variable number of dendrites. Each dendrite has a position, health and weight. The behaviour of a neuron soma is governed by a single evolved program. In addition each dendrite is governed by another single evolved program. The soma program decides the values of the new soma variables position, health and bias based on the previous values, the average over all dendrites belonging to the neuron of dendrite health, position and weight, and an external input called problem type. The latter is a floating point value that indicates the neuron type. The dendrite program updates dendrite health, position and weight based on the previous values, the health, position and bias of the neuron the dendrite belongs to, and the problem type. When the evolved programs are executed, neurons can change, die, replicate and grow more dendrites, and their dendrites can also change or be removed.


The model is illustrated in Fig. 1. The neural programs are represented using Cartesian Genetic Programming (CGP) (see Sect. 5). The programs are actually sets of mathematical equations that read variables associated with neurons and dendrites to output updates of those variables. This approach was inspired by some aspects of a developmental method for evolving graphs and circuits proposed by Miller and Thomson [51] and is also strongly influenced by some of the ideas described in [35]. In the proposed model, weights are determined from a program that is a function of neuron position, together with the health, weight and length of dendrites. It is neuro-centric and temporal in nature.

As shown in Fig. 1 the inputs to the soma program are: the health, bias and position of the neuron, the average health, length and weight of all dendrites connected to the neuron, and the problem type. The problem type is a constant (in the range [−1, 1]) which indicates whether a neuron is not an output or, in the case of an output neuron, which computational problem the output neuron belongs to. Let Pt denote the computational problem. Define Pt = 0 to denote a non-output neuron, and Pt = 1, 2, …, Np to denote output neurons belonging to the different computational problems, where Np denotes the number of computational problems. We define the problem type input to be given by −1 + 2Pt/Np. For example, if the neuron is not an output neuron the problem type input is −1.0. If it is an output neuron belonging to the last problem its value is 1.0. For all other computational problems its value is greater than −1.0 and less than 1.0. The thinking behind the problem type input is that since output neurons are dedicated to a particular computational problem, they should be given information that relates to this, so that the identical neural programs can behave differently according to the computational problem they are associated with. Later experiments were conducted to investigate the utility of problem type (see Sect. 7).

Bias refers to an input to the neuron activation function which is added to the weighted sum of inputs (i.e. it is unweighted). The soma program updates its own health, bias and position based on these inputs. These are indicated by primed symbols in Fig. 1. The user can decide between three different ways of using the program outputs to update the neural variables. The update method is decided by a user-defined parameter called Incropt (see Sect. 4.5) which defines how neuron variables are adjusted by the evolved programs (using user-defined incremental constants or otherwise).

Every dendrite belonging to each neuron is controlled by an evolved dendrite program. As shown in Fig. 1 the inputs to this program are the health, weight and position of the dendrite and also the health, bias and position of the parent neuron. In addition, as mentioned earlier, dendrite programs can also receive the problem type of the parent neuron. The evolved dendrite program decides how the health, weight and position of the dendrite are to be updated. In the model, all the neuron and dendrite parameters (weights, bias, health, position and problem type) are defined by numbers in the range [−1, 1].
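The following Python sketch shows one possible reading of the data structures and update signatures just described. The two evolved CGP programs are represented by placeholder functions, and all function bodies, thresholds and names are assumptions for illustration only, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Dendrite:
    position: float   # all variables lie in [-1, 1]
    health: float
    weight: float

@dataclass
class Neuron:
    position: float
    health: float
    bias: float
    dendrites: List[Dendrite] = field(default_factory=list)
    problem: int = 0  # Pt: 0 = non-output neuron, 1..Np = output neuron of problem Pt

def problem_type_input(pt, num_problems):
    # -1.0 for non-output neurons, +1.0 for output neurons of the last problem.
    return -1.0 + 2.0 * pt / num_problems

def soma_step(n, soma_program, num_problems):
    # Soma inputs: own health/position/bias, dendrite averages, problem type.
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    inputs = (n.health, n.position, n.bias,
              avg([d.health for d in n.dendrites]),
              avg([d.position for d in n.dendrites]),
              avg([d.weight for d in n.dendrites]),
              problem_type_input(n.problem, num_problems))
    n.health, n.position, n.bias = soma_program(inputs)   # evolved program's outputs

def dendrite_step(n, d, dendrite_program, num_problems):
    # Dendrite inputs: own health/position/weight, parent neuron state, problem type.
    inputs = (d.health, d.position, d.weight,
              n.health, n.position, n.bias,
              problem_type_input(n.problem, num_problems))
    d.health, d.position, d.weight = dendrite_program(inputs)

if __name__ == "__main__":
    # Trivial placeholder 'evolved' programs, for wiring only.
    soma_prog = lambda z: (z[0] * 0.9, z[1], z[2])          # decay health, keep position and bias
    dend_prog = lambda z: (z[0] * 0.9, z[1] + 0.01, z[2])   # decay health, lengthen slightly
    n = Neuron(position=0.3, health=1.0, bias=0.0,
               dendrites=[Dendrite(position=0.1, health=1.0, weight=1.0)], problem=0)
    soma_step(n, soma_prog, num_problems=2)
    dendrite_step(n, n.dendrites[0], dend_prog, num_problems=2)
```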


Fig. 2 Example showing a developing fictitious example brain. The squares on the left represent the inputs. The solid circles indicate non-output neurons; their dendrites are drawn solid. The dotted circles represent output neurons; their dendrites are drawn dotted. In this example, we assume that only output neurons are allowed to move. The neurons, inputs and dendrites are all bound to the interval [−1, 1]. Dendrites connect to the nearest neuron or input on the left of their position (snapping). a The initial state of the example brain. b The example brain after one developmental step. c The example brain after two developmental steps

4.1 Fictitious Example of Early Brain Development

A fictitious developmental example is shown in Fig. 2. The initial state of the example brain is represented in (a). Initially there is one non-output neuron with a single dendrite. The curved appearance of the dendrites is purely for visualisation; in reality the dendrites are horizontal lines of various lengths emanating from the centre of neurons. When extracting ANNs, the dendrites are assumed to connect to their nearest neuron on the left (referred to as 'snapping'). Output neurons are only allowed to connect to non-output neurons, or to the first input by default if their dendrites lie to the left of the leftmost non-output neuron. Thus the ANN that can be extracted from the initial example brain has three neurons. The non-output neuron is connected to the second input and both output neurons are connected via their single dendrite to the non-output neuron.

Figure 2b shows the example brain after a single developmental step. In this step, the soma and dendrite programs are executed in each neuron. The non-output neuron (labeled 0) has replicated to produce another non-output neuron (labeled 1); it has also grown a new dendrite. Its dendrites connect to both inputs. The newly created non-output neuron is identical to its parent except that its position is a user-defined amount, MNinc, to the right of the parent and its health is set to 1 (an assumption of the model). Both its dendrites connect to the second input. It is assumed that the soma programs running in the two output neurons A and B have resulted in both output neurons having moved to the right.


Their dendrites have also grown in length. Neuron A's first dendrite is now connected to neuron one. In addition, neuron A has high health, so it has grown a new dendrite. Every time a new dendrite grows it is given a weight and health equal to 1.0, and its position is set to half the parent neuron's position. These are assumptions of the model. Thus the new dendrite is connected to neuron zero. Neuron B's only dendrite is connected to neuron one.

Figure 2c shows the example brain after two developmental steps. The dendrites of neuron zero have changed little and it is still connected in the same way as in the previous step. The dendrites of neuron one have both changed. The first one has become longer but remains connected to the first input. The second dendrite has become shorter but it still snaps to the second input. Neuron one has also replicated as a result of its health being above the replication threshold. It gets dendrites identical to its parent, its position is again incremented to the right of its parent and its health is set to 1.0. Its first dendrite connects to input one and its second dendrite to neuron zero. Output neuron A has gained a dendrite, due to its health being above the dendrite birth threshold. The new dendrite stretches to a position equal to half that of its parent neuron, so it connects to neuron zero. The other two dendrites remain the same and connect to neurons one and zero respectively. Finally, output neuron B's only dendrite has extended a little but still snaps to neuron one. Note that at this stage neuron two is not connected to by any other neuron and is redundant. It will be stripped out of the ANN that is extracted from the example brain.

4.2 Model Parameters

The model necessarily has a large number of user-defined parameters; these are shown in Table 1. The total number of neurons allowed in the network is bounded between a user-defined lower (upper) bound NNmin (NNmax). The number of dendrites each neuron can have is bounded by user-defined lower (upper) bounds denoted by DNmin (DNmax). These parameters ensure that the number of neurons and connections per neuron remain within well-defined bounds, so that a network cannot eliminate itself or grow too large. The initial number of neurons is defined by Ninit and the initial number of dendrites per neuron is given by NDinit. If the health of a neuron falls below (exceeds) a user-defined threshold NHdeath (NHbirth), the neuron will be deleted (replicated). Likewise, dendrites are subject to user-defined health thresholds DHdeath (DHbirth), which determine whether a dendrite will be deleted or a new one will be created. In fact, to determine dendrite birth the parent neuron health is compared with DHbirth rather than the dendrite health. This choice was made to prevent a potentially very rapid growth in the number of dendrites. When the soma or dendrite programs are run, the outputs are used to decide how to adjust the neuron and dendrite variables. The amount of the adjustments is decided by the six user-defined δ parameters.
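The birth and death rules driven by these thresholds can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation; the Neuron and Dendrite classes and all field names are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Dendrite:
    health: float
    position: float
    weight: float

@dataclass
class Neuron:
    health: float
    position: float
    bias: float
    dendrites: List[Dendrite] = field(default_factory=list)

def neuron_survives(neuron, nh_death):
    # A neuron is deleted when its health falls to or below NHdeath.
    return neuron.health > nh_death

def neuron_replicates(neuron, nh_birth, num_neurons, nn_max):
    # A neuron replicates when its health exceeds NHbirth, subject to the global bound NNmax.
    return neuron.health > nh_birth and num_neurons < nn_max

def dendrite_is_born(parent, dh_birth, dn_max):
    # Dendrite birth compares the parent neuron's health with DHbirth (see text),
    # which prevents runaway growth in the number of dendrites.
    return parent.health > dh_birth and len(parent.dendrites) < dn_max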


Table 1 Table of neural model constants and their meanings

Symbol              Meaning
NNmin (NNmax)       Min. (Max.) allowed number of neurons
Ninit               Initial number of non-output neurons
DNmin (DNmax)       Min. (Max.) number of dendrites per neuron
NDinit              Initial number of dendrites per neuron
NHdeath (NHbirth)   Neuron health thresholds for death (birth)
DHdeath (DHbirth)   Dendrite health thresholds for death (birth)
δsh                 Soma health increment (pre, while)
δsp                 Soma position increment (pre, while)
δsb                 Soma bias increment (pre, while)
δdh                 Dendrite health increment (pre, while)
δdp                 Dendrite position increment (pre, while)
δdw                 Dendrite weight increment (pre, while)
NDSpre              Number of developmental steps before epoch
NDSwhi              Number of 'while' developmental steps during epoch
Nep                 Number of learning epochs
MNinc               Move neuron increment if collision
Iu                  Max. program input position
Ol                  Min. program output position
α                   Sigmoid/Hyperbolic tangent exponent constant

The number of developmental steps in the two developmental phases ('pre' learning and 'while' learning) is defined by the parameters NDSpre and NDSwhi. The number of learning epochs is defined by Nep. Note that the pre-learning phase of development, 'pre', can have different incremental constants (i.e. δs) to the learning epoch phase, 'while'. In some cases, neurons will collide with other neurons (by occupying a small interval around an existing neuron) and the neuron has to be moved by a certain increment until no more collisions take place. This increment is given by MNinc. The places where external inputs are provided are predetermined uniformly within the region between −1 and Iu; the parameter Iu defines the upper bound of their position. Output neurons are initially distributed uniformly between the parameter Ol and 1. However, depending on a user-defined option, the output neurons, like all other neurons, can move according to the neuron program. All neurons are marked as to whether they provide an external output or not. In the initial network there are Ninit non-output neurons and No output neurons, where No denotes the number of outputs required by the computational problem being solved. Finally, the neural activation function (hyperbolic tangent) and the sigmoid function (which is used in nonlinear incremental adjustment of neural variables) have a slope constant given by α.
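The text does not spell out exactly how α enters these two functions; a plausible reading, consistent with α being described later as an exponent multiplier, is the following Python sketch (an assumption, not a statement of the authors' code).

import math

ALPHA = 1.5  # slope constant, see Table 1

def tanh_activation(x, alpha=ALPHA):
    # Hyperbolic tangent neuron activation with slope constant alpha (assumed form).
    return math.tanh(alpha * x)

def sigmoid(x, alpha=ALPHA):
    # Sigmoid used for the nonlinear incremental adjustment option (assumed form).
    return 1.0 / (1.0 + math.exp(-alpha * x))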


4.3 Developing the Brain and Evaluating the Fitness

An overview of the algorithm used for training and developing the ANNs is given in Overview 1.

Overview 1 Overview of fitness algorithm
1: function Fitness
2: Initialise brain
3: Load 'pre' development parameters
4: Update brain NDSpre times by running soma and dendrite programs
5: Load 'while' developmental parameters
6: repeat
7: Update brain NDSwhi times by running soma and dendrite programs
8: Extract ANN for each benchmark problem
9: Apply training inputs and calculate accuracy for each problem
10: Fitness is the normalised average accuracy over problems
11: If fitness reduces terminate learning loop and return previous fitness
12: until Nep epochs complete
13: return fitness
14: end function

The brain is always initialised with at least as many neurons as the maximum number of outputs over all computational problems. Note that each problem output is represented by a unique neuron dedicated to that particular output. However, the maximum and initial numbers of non-output neurons can be chosen by the user. Non-output neurons can grow, change or give birth to new dendrites. Output neurons can change, but not die or replicate, as the number of output neurons is fixed by the choice of computational problems. The detailed algorithm for training and developing the ANN is given in Algorithm 1.

Development of the brain can happen in two phases, 'pre' and 'while'. The 'pre' phase runs for NDSpre developmental steps and is outside the learning loop. This is an early phase of development and has its own set of developmental parameters. The 'while' phase happens inside the learning loop, which has Nep epochs. It too has its own set of developmental parameters. The idea of two phases is inspired by the phases of biological development. In early brain development, neurons are stem cells that move to particular locations and have no dendrites or axons. This can effectively be mimicked, since in the 'pre' phase the parameters controlling dendrites can be disabled by setting DHdeath = −1.0 and DHbirth = 1.0. This means that dendrites can neither be removed nor be born. In addition, setting δdh = 0.0, δdp = 0.0 and δdw = 0.0 would mean that any existing dendrites could not change. In a similar way, 'while' parameters could be chosen to disallow somas to move, die, replicate or change during the learning loop, and also to allow dendrites to grow/shrink, change, be removed, or replicate. Thus the collection of parameters gives the user a great deal of control over the developmental process.


The learning loop evaluates the brain by extracting conventional ANNs for each problem and calculating a fitness (based on classification accuracy). It checks whether the new fitness value is greater than or equal to the value at the previous epoch. If the fitness has reduced, the learning loop terminates and the previous value of fitness is returned. The purpose of the learning loop is to enable the evolution of a developmental process that progressively improves the brain's performance. The aim is to find programs for the soma and dendrite which allow this improvement to continue with each epoch, beyond the chosen limit (Nep). Later in this chapter, an experiment is conducted to test whether this general learning behaviour has been achieved (see Sect. 12).
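The epoch loop and its early-termination rule can be sketched in Python as follows. The callables develop_step, extract_ann and accuracy are placeholders for the operations described above, so this is an outline of the control flow rather than the authors' code.

def evaluate_with_learning(brain, problems, n_epochs, nds_while,
                           develop_step, extract_ann, accuracy):
    # Develop the brain, evaluate it each epoch, and stop as soon as fitness drops.
    previous_fitness = 0.0
    for _ in range(n_epochs):
        for _ in range(nds_while):
            develop_step(brain)                      # one 'while' developmental step
        per_problem = [accuracy(extract_ann(brain, p), p) for p in problems]
        fitness = sum(per_problem) / len(problems)   # normalised average accuracy
        if fitness < previous_fitness:
            return previous_fitness                  # fitness reduced: keep previous value
        previous_fitness = fitness
    return previous_fitness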

4.4 Updating the Brain

Updating the brain is the process of running the soma and dendrite programs once in all neurons and dendrites (i.e. it is a single developmental step). Doing this will cause the brain to change, and after all changes have been carried out a new updated brain will be produced. This replaces the previous brain. Overview Algorithm 2 gives a high-level overview of the update brain process.

Overview 2 Update brain overview
1: function UpdateBrain
2: Run soma program in non-output neurons to update soma
3: Ensure neuron does not collide with neuron in updated brain
4: Run dendrite program in all non-output neurons
5: If neuron survives add it to updated brain
6: If neuron replicates ensure new neuron does not collide
7: Add new neuron to updated brain
8: Run soma program in output neurons to update soma
9: Ensure neuron does not collide
10: Run dendrite program in all output neurons
11: If neuron survives add it to updated brain
12: Replace old brain with updated brain
13: end function

Section 15.2 presents a more detailed version of how the brain is updated at each developmental step (see Algorithm 2) and gives details of the neuron collision avoidance algorithm.


4.5 Running and Updating the Soma

The UpdateBrain program calls the RunSoma program to determine how the soma changes in each developmental step. As we saw in Fig. 1a, the seven soma program inputs are: neuron health, position and bias, the averaged position, weight and health of the neuron's dendrites, and the problem type. Once the evolved CGP soma program is run, the soma outputs are returned to the brain update program. These steps are shown in Overview 2.

Overview 2 Running the soma: algorithm overview
1: function RunSoma
2: Calculate average dendrite health, position and weight
3: Gather soma program inputs
4: Run soma program
5: Return updated soma health, bias and position
6: end function

The detailed version of the RunSoma function can be found in Sect. 15.3. The RunSoma function uses the soma program outputs to adjust the health, position and bias of the soma according to three user-chosen options defined by a variable Incropt. This is carried out by the UpdateNeuron function, shown in Overview Algorithm 3.

Overview 3 Update neuron algorithm overview
1: function UpdateNeuron
2: Assign original neuron variables to parent variables
3: Assign outputs of soma program to health, position and bias
4: Depending on Incropt get increments
5: If soma program outputs > 0 (≤ 0) then incr. (decr.) parent variables
6: Assign parent variables to neuron
7: Bound health, position and bias
8: end function
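The linear incremental option can be sketched as below (a simplified Python illustration of one of the three Incropt options; the helper names are ours). Each evolved-program output acts only through its sign: a positive output increments the corresponding variable by its δ constant, a non-positive output decrements it, and the result is bounded to [−1, 1].

def bound(x, lo=-1.0, hi=1.0):
    # All neuron and dendrite variables are kept in the interval [-1, 1].
    return max(lo, min(hi, x))

def adjust(value, program_output, delta):
    # Linear incremental adjustment: move the variable by +/- delta according to
    # the sign of the evolved program output, then bound it.
    return bound(value + delta if program_output > 0.0 else value - delta)

# Example: updating soma health with delta_sh = 0.1
# new_health = adjust(old_health, soma_output_health, 0.1)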

4.6 Updating the Dendrites and Building the New Neuron

This section is concerned with running the evolved dendrite programs. In every dendrite, the inputs to the dendrite program have to be gathered. The dendrite program is then executed and the outputs are used to update the dendrite. This is carried out by a function called RunDendrite. Note that in RunAllDendrites we build the completely updated neuron from the updated soma and dendrite variables. The simplified algorithm for doing this is shown in Overview Algorithm 4. The more detailed version is available in Sect. 15.5.


Overview 4 An overview of the RunAllDendrites algorithm, which runs all dendrite programs and uses all updated variables to build a new neuron.
1: function RunAllDendrites
2: Write updated soma variables to new neuron
3: if Old soma health > DHbirth then
4: Generate a dendrite for new neuron
5: end if
6: for all Dendrites do
7: Gather dendrite program inputs
8: Run dendrite program to get updated dendrite variables
9: Run dendrite to get updated dendrite
10: if Updated dendrite is alive then
11: Add updated dendrite to new neuron
12: if Maximum number of dendrites reached then
13: Stop processing any more dendrites
14: end if
15: end if
16: end for
17: if All dendrites have died then
18: Give new neuron the first dendrite of the old neuron
19: end if
20: end function

Overview Algorithm 4 (in line 9) uses the updated dendrite variables obtained from running the evolved dendrite program to adjust the dendrite variables (according to the incrementation option chosen). This function is shown in Overview Algorithm 5; the more detailed version is available in Sect. 15.5. The RunDendrite function begins by assigning the dendrite's health, position and weight to the parent dendrite variables. It writes the dendrite program outputs to the internal variables health, weight and position. It then increments or decrements the parent dendrite variables according to whether the corresponding dendrite program outputs are greater than zero, or less than or equal to zero. After this it bounds those variables. Finally, it updates the dendrite's health, weight and position provided the adjusted health is above the dendrite death threshold.

We saw in the fitness function that we extract conventional ANNs from the evolved brain. This is accomplished as follows. Since we share inputs across problems, we set the number of inputs to be the maximum number of inputs that occurs in the computational problem suite. If any problem has fewer inputs, the extra inputs are set to zero. The next phase is to go through all dendrites of the neurons to determine which inputs or neurons they connect to. To generate a valid neural network we assume that dendrites are automatically connected to the nearest neuron or input on their left. We refer to this as snapping. The dendrites of non-output neurons are allowed to connect to either inputs or other non-output neurons on their left. However, output neurons are only allowed to connect to non-output neurons on their left.


Overview 5 Change dendrites according to the evolved dendrite program.
1: function RunDendrite
2: Assign original dendrite variables to parent variables
3: Assign outputs of dendrite program to health, position and weight
4: Depending on Incropt get increments
5: If dendrite program outputs > 0 (≤ 0) then incr. (decr.) parent variables
6: Assign parent variables to neuron
7: Bound health, position and weight
8: if (health > DHdeath) then
9: Update dendrite variables
10: Dendrite is alive
11: else
12: Dendrite is dead
13: end if
14: Return updated dendrite variables and whether dendrite is alive
15: end function

It is not desirable for the dendrites of output neurons to be connected directly to inputs; however, when output neurons are allowed to move, they may end up with only inputs on their left. In this case the output neuron's dendrite is connected to the first external input of the ANN (by default). The detailed version of the ANN extraction process is given in Sect. 15.6.
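The snapping rule can be sketched as a small Python helper (hypothetical; positions are floats in [−1, 1], and the caller restricts the candidate list to non-output neurons when wiring up an output neuron).

def snap_left(dendrite_position, candidate_positions):
    # Return the index of the nearest candidate (input or neuron) strictly to the
    # left of the dendrite position, or -1 if there is no such candidate.
    best_index, best_position = -1, float("-inf")
    for i, position in enumerate(candidate_positions):
        if position < dendrite_position and position > best_position:
            best_index, best_position = i, position
    return best_index

# An output neuron whose dendrite has no non-output neuron to its left is
# connected to the first external input by default.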

5 Cartesian GP

The two neural programs are represented and evolved using a form of Genetic Programming (GP) known as Cartesian Genetic Programming (CGP). CGP [48, 50] is a form of GP in which computational structures are represented as directed, often acyclic, graphs indexed by their Cartesian coordinates. Each node may take its inputs from any previous node or program input (although recurrent graphs can also be implemented, see [77]). The program outputs are taken from the output of any internal node or program input. In practice, many of the nodes described by the CGP chromosome are not involved in the chain of connections from program input to program output, and thus do not contribute to the final operation of the encoded program. These inactive, or "junk", nodes have been shown to greatly aid the evolutionary search [49, 79, 81]. The representational feature of inactive genes in CGP is also closely related to the fact that it does not suffer from bloat [47].

In general, the nodes described by CGP chromosomes are arranged in a rectangular r × c grid of nodes, where r and c respectively denote the user-defined numbers of rows and columns. In CGP, nodes in the same column are not allowed to be connected together. CGP also has a connectivity parameter l called "levels-back" which determines whether a node in a particular column can connect to a node in a previous column. For instance, if l = 1 all nodes in a column can only connect to nodes in the previous column.


Note that levels-back only restricts the connectivity of nodes; it does not restrict whether nodes can be connected to program inputs (terminals). However, it is quite common to adopt a linear CGP configuration in which r = 1 and l = c. This was done in our investigations here. CGP chromosomes can describe multiple-input multiple-output (MIMO) programs with a range of node functions and arities. For a detailed description of CGP, including its current developments and applications, see [48].

Both the soma and dendrite programs have 7 inputs and 3 outputs (see Fig. 1). The function set chosen for this study is defined over the real-valued interval [−1.0, 1.0]. Each primitive function takes up to three inputs, denoted z0, z1 and z2. The functions are defined in Table 2.

Table 2 Node function gene values, mnemonic and function definition

Value  Mnemonic  Definition
0      abs       |z0|
1      sqrt      √|z0|
2      sqr       z0²
3      cube      z0³
4      exp       (2 exp(z0 + 1) − e² − 1)/(e² − 1)
5      sin       sin(z0)
6      cos       cos(z0)
7      tanh      tanh(z0)
8      inv       −z0
9      step      if z0 < 0.0 then 0 else 1.0
10     hyp       √((z0² + z1²)/2)
11     add       (z0 + z1)/2
12     sub       (z0 − z1)/2
13     mult      z0 z1
14     max       if z0 ≥ z1 then z0 else z1
15     min       if z0 ≤ z1 then z0 else z1
16     and       if (z0 > 0.0 and z1 > 0.0) then 1.0 else −1.0
17     or        if (z0 > 0.0 or z1 > 0.0) then 1.0 else −1.0
18     rmux      if z2 > 0.0 then z0 else z1
19     imult     −z0 z1
20     xor       if (z0 > 0.0 and z1 > 0.0) then −1.0 else if (z0 < 0.0 and z1 < 0.0) then −1.0 else 1.0
21     istep     if z0 < 1.0 then 0 else −1.0
22     tand      if (z0 > 0.0 and z1 > 0.0) then 1.0 else if (z0 < 0.0 and z1 < 0.0) then −1.0 else 0.0
23     tor       if (z0 > 0.0 or z1 > 0.0) then 1.0 else if (z0 < 0.0 or z1 < 0.0) then −1.0 else 0.0
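To make the encoding concrete, the sketch below decodes and runs a small feed-forward CGP program in Python. The genome layout (one function name and three connection indices per node) is a common CGP convention assumed here for illustration; it is not a description of the authors' data structures, and only a handful of the Table 2 functions are included.

# A small subset of the Table 2 function set; every function reads up to three inputs.
FUNCTIONS = {
    "add":  lambda z0, z1, z2: (z0 + z1) / 2.0,
    "sub":  lambda z0, z1, z2: (z0 - z1) / 2.0,
    "mult": lambda z0, z1, z2: z0 * z1,
    "step": lambda z0, z1, z2: 0.0 if z0 < 0.0 else 1.0,
}

def run_cgp(inputs, nodes, output_sources):
    # Evaluate a feed-forward CGP program. Indices below len(inputs) refer to
    # program inputs; higher indices refer to earlier nodes, so the graph is acyclic.
    # Nodes that are not on a path to any output never affect the result ("junk" nodes).
    values = list(inputs)
    for function_name, a, b, c in nodes:
        values.append(FUNCTIONS[function_name](values[a], values[b], values[c]))
    return [values[i] for i in output_sources]

# Example: a 7-input, 3-output program, as used for the soma and dendrite rules.
# run_cgp([0.1] * 7, [("add", 0, 1, 2), ("step", 7, 0, 0)], [7, 8, 8]) -> [0.1, 1.0, 1.0]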


6 Benchmark Problems

In this study, we evolve neural programs that build ANNs for solving three standard classification problems. The problems are cancer, diabetes and glass. The definitions of these problems are available in the well-known UCI repository of machine learning problems.1 These three problems were chosen because they are well-studied and also have similar numbers of inputs and a small number of classes. Cancer has 9 real attributes and two Boolean classes. Diabetes has 8 real attributes and two Boolean classes. Glass has 9 real attributes and six Boolean classes. The specific datasets chosen were cancer1.dt, diabetes1.dt and glass1.dt, which are described in the PROBEN suite of problems.2

7 Experiments

The long-term aim of this research is to explore effective ways to develop ANNs. The work presented here is just a beginning and there are many aspects that need to be investigated in the future (see Sect. 13). The specific research questions we have focused on in this chapter are:

• What type of neuron activation function is most effective?
• How many neurons and dendrites should we allow?
• Should neuron and dendrite programs be allowed to read the problem type?

These questions complement the questions asked and investigated using the same neural model in recent previous work [52]. There, the utility of neuron movement in both non-output and output neurons was investigated. It was found that statistically significantly better results were obtained when only output neurons were allowed to move. In addition, that work examined three ways of incrementing or decrementing neural variables. In the first, the outputs of the evolved programs directly determine the new values of the neural variables (position, health, bias, weight); that is to say, there is no incremental adjustment. In the second, the variables are incremented or decremented by user-defined amounts (the deltas in Table 1). In the third, the adjustments to the neural variables are nonlinear (they are adjusted using a sigmoid function). Linear adjustment of variables (increment or decrement) was found to be statistically superior to the alternatives.

To answer the questions above, a series of experiments was carried out to investigate the impact of various aspects of the neural model on classification accuracy. Twenty evolutionary runs of 20,000 generations of a 1+5-ES were used. Genotype lengths for the soma and dendrite programs were chosen to be 800 nodes. Goldman mutation [20, 21] was used, which carries out random point mutation until an active gene is changed.

1 https://archive.ics.uci.edu/ml/datasets.html.
2 https://publikationen.bibliothek.kit.edu.


For these experiments a subset of the allowed node functions was chosen, as it appeared to give better performance. These were: step, add, sub, mult, xor, istep. The remaining experimental parameters are shown in Table 3. Some of the parameter values are very precise (defined to the fourth decimal place). The process of discovering these values consisted of an informal but greedy search for good parameters that produced high fitness in the first evolutionary run. As it turned out, this was fortuitously a reasonable way of obtaining good parameters on average.

Table 3 Table of neural model parameters

Parameter                        Value
NNmin (NNmax)                    0 (20–100)
Ninit                            5
DNmin (DNmax)                    1 (5–50)
NDinit                           5
NDSpre                           4
NDSwhi                           8
Nep                              1
MNinc                            0.03
Iu                               −0.6
Ol                               0.8
α                                1.5
'Pre' development parameters
NHdeath (NHbirth)                −0.405 (0.406)
DHdeath (DHbirth)                −0.39 (−0.197)
δsh                              0.1
δsp                              0.1487
δsb                              0.07
δdh                              0.1
δdp                              0.1
δdw                              0.101
'While' development parameters
NHdeath (NHbirth)                0.435 (0.7656)
DHdeath (DHbirth)                0.348 (0.41)
δsh                              0.009968
δsp                              0.01969
δsb                              0.01048
δdh                              0.0107
δdp                              0.0097
δdw                              0.0097
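The evolutionary loop itself can be sketched as a simple 1+λ evolution strategy with a Goldman-style mutation that repeats point mutation until an active gene changes. The sketch below is schematic Python: fitness, mutate_one_gene and is_active are placeholders for the CGP-specific operations, not real library calls.

def goldman_mutate(genome, mutate_one_gene, is_active):
    # Repeatedly apply random point mutation until the mutated gene is active.
    child = list(genome)
    while True:
        gene_index = mutate_one_gene(child)   # mutates child in place, returns the index
        if is_active(child, gene_index):
            return child

def one_plus_lambda(parent, fitness, mutate, generations, lam=5):
    # 1+lambda ES: the best offspring replaces the parent when it is at least as fit
    # (allowing neutral drift across inactive genes).
    parent_fitness = fitness(parent)
    for _ in range(generations):
        offspring = [mutate(parent) for _ in range(lam)]
        scored = [(fitness(child), child) for child in offspring]
        best_fitness, best_child = max(scored, key=lambda pair: pair[0])
        if best_fitness >= parent_fitness:
            parent, parent_fitness = best_child, best_fitness
    return parent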


Table 4 Training and testing accuracy for three neural activation functions

Acc.       Hyperbolic tangent   Rectilinear       Sigmoid
           Train (Test)         Train (Test)      Train (Test)
Mean       0.7401 (0.7075)      0.7150 (0.6698)   0.6980 (0.6760)
Median     0.7392 (0.7266)      0.7101 (0.6717)   0.7036 (0.6851)
Maximum    0.7988 (0.7669)      0.7654 (0.7398)   0.7315 (0.7237)
Minimum    0.6840 (0.6200)      0.6815 (0.6217)   0.6251 (0.5894)

Table 5 Training and testing accuracy on individual problems when using tanh activation

Acc.       Cancer            Diabetes          Glass
           Train (Test)      Train (Test)      Train (Test)
Mean       0.9086 (0.9215)   0.7233 (0.6594)   0.5883 (0.5415)
Median     0.9129 (0.9281)   0.7266 (0.6615)   0.5841 (0.5849)
Maximum    0.9657 (0.9770)   0.7630 (0.6823)   0.6729 (0.6981)
Minimum    0.8571 (0.8621)   0.6693 (0.6198)   0.4673 (0.3396)

Table 6 Comparison of test accuracies on three classification problems. Model using tanh activation compared with a huge suite of classification methods as described in [15]

Acc.       Cancer            Diabetes          Glass
           ML (model)        ML (model)        ML (model)
Mean       0.935 (0.9215)    0.743 (0.6594)    0.610 (0.5415)
Maximum    0.974 (0.9770)    0.790 (0.6822)    0.785 (0.6981)
Minimum    0.655 (0.8621)    0.582 (0.6198)    0.319 (0.3340)

8 Results

The mean, median, maximum and minimum accuracies achieved over 20 evolutionary runs for three different neuron activation functions are shown in Table 4. We can see that the best values of mean, median, maximum and minimum are all obtained when the hyperbolic tangent function is used. The rectilinear activation function is zero for negative arguments and equal to its argument for positive arguments. Both the sigmoid and hyperbolic tangent activation functions have α as an exponent multiplier. The training and testing accuracies on the individual classification problems using the hyperbolic tangent activation function are shown in Table 5. Table 6 shows how the results for the model (using tanh activation) compare with the performance of 179 classifiers (covering 17 families) [15].3 The figures are given just to show that the results for the developmental ANNs are respectable.

3 The paper gives a link to the detailed performance of the 179 classifiers, which contains the figures given in the table.


Table 7 Training and testing accuracy for different upper bounds on the number of neurons (NNmax). The number of dendrites was 50 in all cases

Acc.       NNmax = 20        NNmax = 40        NNmax = 60        NNmax = 80        NNmax = 100
           Train (Test)      Train (Test)      Train (Test)      Train (Test)      Train (Test)
Mean       0.7314 (0.7026)   0.7401 (0.7075)   0.7161 (0.6927)   0.7179 (0.6969)   0.7222 (0.7072)
Median     0.7327 (0.7176)   0.7392 (0.7266)   0.7155 (0.6919)   0.7252 (0.7045)   0.7219 (0.7081)
Maximum    0.7605 (0.7468)   0.7988 (0.7669)   0.7355 (0.7461)   0.7757 (0.7483)   0.7493 (0.7480)
Minimum    0.7002 (0.6563)   0.6840 (0.6200)   0.6654 (0.6408)   0.6284 (0.5998)   0.6691 (0.6492)

Table 8 Training and testing accuracy for different upper bounds on the number of dendrites (DNmax). The upper bound on the number of neurons was 40 in all cases

Acc.       DNmax = 5         DNmax = 10        DNmax = 15        DNmax = 50
           Train (Test)      Train (Test)      Train (Test)      Train (Test)
Mean       0.7408 (0.7125)   0.7347 (0.7053)   0.7338 (0.6966)   0.7401 (0.7075)
Median     0.7463 (0.7208)   0.7335 (0.7152)   0.7325 (0.6857)   0.7392 (0.7266)
Maximum    0.7926 (0.7701)   0.7924 (0.7640)   0.7988 (0.7669)   0.7988 (0.7669)
Minimum    0.6789 (0.6047)   0.6850 (0.6395)   0.6797 (0.6281)   0.6840 (0.6200)

The results are particularly encouraging considering that the evolved developmental programs build classifiers for three different classification problems simultaneously, so the comparison is unfairly biased against the proposed model. The cancer results produced by the model are very close to those compiled from the suite of ML methods; however, the model's results for diabetes and glass are not as close. It is unclear why this is the case.

The second series of experiments examined how classifier performance varied with the upper bound on the number of allowed neurons (Table 7). The third series of experiments was concerned with varying the upper bound on the number of dendrites allowed for each neuron (DNmax). It was found that a dendrite upper bound of 20 produced exactly the same results as an upper bound of 50. All dendrites must snap to a neuron or input and therefore contribute to the output of the network. The fact that increasing the upper bound made no difference implies that the number of dendrites the neurons used never exceeded 20 anyway (Table 8). Statistical significance testing with the test data revealed that the better scenarios (NNmax = 20, 40 or DNmax = 5, 50) are only weakly statistically different from the other scenarios. So it appears that performance is not particularly sensitive to the maximum number of neurons and dendrites (within reasonable bounds).

The fourth series of experiments concerned the issue of problem type. As discussed in Sect. 4, problem type is a real-valued quantity in the interval [−1, 1] that indicates which computational problem a neuron belongs to. Non-output neurons are not committed to a problem, so their problem type is assumed to be −1.0. However, output neurons are all dedicated to a particular problem. The cancer problem has two output neurons, the diabetes problem has two and the glass problem has six.

Table 9 Training and testing accuracy with and without problem type inputs to evolved programs. The upper bounds on the number of neurons and dendrites were 40 and 50 respectively

Acc.       Without problem type   With problem type
           Train (Test)           Train (Test)
Mean       0.7314 (0.7026)        0.7401 (0.7075)
Median     0.7327 (0.7176)        0.7392 (0.7266)
Maximum    0.7605 (0.7468)        0.7988 (0.7669)
Minimum    0.7002 (0.6563)        0.6840 (0.6200)

The extraction process that obtains the ANNs associated with each problem begins from the output neurons corresponding to that problem. So the question arises as to whether it is useful to allow the neural programs to read the problem type. The results are shown in Table 9. Using the problem type as an input to the neural programs appears to be useful, as it improves the mean, median and maximum compared with not using the problem type. However, the tests of statistical significance discussed in the next section show that the difference is not significant.

9 Comparisons and Statistical Significance

The Wilcoxon Ranked-Sum test (WRS) was used to assess the statistical difference between pairs of experiments. In this test, the null hypothesis is that the results (best accuracy) over the multiple runs for the two different experimental conditions are drawn from the same distribution and have the same median. If there is a statistically significant difference between the two, then the null hypothesis is rejected with a degree of certainty which depends on the smallness of a calculated statistic called the p-value. However, in the WRS, before interpreting the p-value one needs to calculate another statistic called Wilcoxon's W value. This value needs to be compared with critical values which depend on the number of samples in each experiment. Results are statistically significant when the calculated W-value is less than or equal to certain critical values for W [82]. The critical values depend on the sample sizes and the p-value. We used a publicly available Excel spreadsheet for doing these calculations.4

The critical W-values can be calculated in two ways: one-tailed or two-tailed. The two-tailed test is appropriate here, as we are interested in whether one experiment is better than another (and vice versa). For example, in Table 10 the calculated W-value is 34 and the critical W-value for the paired sample sizes of 20 (number of runs) with p-value less than 0.01 is 38 (assuming a two-tailed test).5 The p-value gives a measure of the certainty with which the null hypothesis can be accepted.

4 http://www.biostathandbook.com/wilcoxonsignedrank.html.
5 http://www.real-statistics.com/statistics-tables/wilcoxon-signed-ranks-table/.


Table 10 Statistical comparison of testing results from experiments (Wilcoxon Rank-Sum, two-tailed)

Question             Expt. A       Expt. B       W    W critical   P-value            Significant?
Activation           tanh          Rectilinear   34   38           0.005 < p < 0.01   Yes
Activation           Rectilinear   Sigmoid       90   69           0.2 < p            No
Problem type input   Yes           No            92   69           0.2 < p            No

Thus the lower the p-value, the more likely it is that the two samples come from different distributions (i.e. are statistically different). Thus in this case, the probability that the null hypothesis can be rejected is 0.99.

The conclusions we can draw from Table 10 are that a hyperbolic tangent activation function is statistically significantly superior to either a rectilinear or a sigmoid function. In addition, the use of problem type as an evolved program input is not statistically distinguishable from not using a problem type input. This is surprising, as output neurons are dedicated to problem types and one might expect that a problem type input would allow neural programs to behave differently according to problem type.
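For readers wishing to reproduce such comparisons with standard tools: the procedure described (paired runs, a W statistic compared against critical values from signed-rank tables) corresponds to the Wilcoxon signed-rank test, which can be computed directly, for example with SciPy. The accuracy arrays below are placeholders, not the experimental data.

from scipy.stats import wilcoxon

# Placeholder per-run test accuracies for two experimental conditions (20 runs each).
accuracies_a = [0.71, 0.73, 0.70, 0.72] * 5
accuracies_b = [0.69, 0.70, 0.68, 0.71] * 5

statistic, p_value = wilcoxon(accuracies_a, accuracies_b, alternative="two-sided")
print("W =", statistic, " p =", p_value)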

10 Evolved Developmental Programs

The average number of active nodes in the soma and dendrite programs for the hyperbolic tangent activation function is 54.7 in both cases. Thus the programs are relatively simple. It is also possible that the graphs can be logically reduced to even simpler forms. The graphs of the active nodes in the CGP programs for the best evolutionary run (0) are shown in Figs. 3 and 4.

Fig. 3 Best evolved soma program. The input nodes are: soma health (sh), soma bias (sb), soma position (sp), average dendrite health (adh), average dendrite weight (adw), average dendrite position (adp) and problem type (pt). The output nodes are: soma health (SH), soma bias (SB) and soma position (SP)


Fig. 4 Best evolved dendrite program. The input nodes are: soma health (sh), soma bias (sb), soma position (sp), dendrite health (dh), dendrite weight (dw), dendrite position (dp) and problem type (pt). The output nodes are: dendrite health (DH), dendrite weight (DW) and dendrite position (DP)

The red input connections between nodes indicate the first input of the subtraction operation. This is the only node operation for which input order is important.

11 Developed ANNs for Each Classification Problem

The ANNs for the best evolutionary run (0) were extracted (using Algorithm 9) and can be seen in Figs. 5 and 6. The plots ignore connections with weight equal to zero. In the figures, a colour scheme is adopted to show which neurons belong to which problems. Red indicates a neuron belonging to the ANN predicting cancer, green indicates a neuron belonging to the ANN predicting diabetes and blue indicates a neuron belonging to the ANN predicting the type of glass. If neurons are shared, the colours are a mixture of the primary colours.


Fig. 5 Developed ANN for the cancer dataset. This dataset has 9 attributes and two outputs. The numbers inside the circles are the neuron biases. The numbers in larger font near but outside the neurons are neuron IDs. The connection weights between neurons are shown near to the connections. Attributes that are not shown are unused. The training accuracy is 0.9657 and the test accuracy is 0.9770

So white, or an absence of colour, indicates that a neuron is shared over all three problems. Magenta indicates a neuron shared between cancer and glass. Yellow would indicate neurons shared between cancer and diabetes (in the case shown there are no neurons shared between cancer and diabetes).


Fig. 6 Developed ANN for diabetes (a) and glass (b) datasets. The diabetes dataset has 8 attributes and two outputs. The glass dataset has 9 attributes and six outputs. The numbers inside the circles are the neuron bias. The numbers in larger font near but outside the neurons are neuron IDs. The connection weights between neurons are shown near to the connections. Attributes not present are unused. The diabetes training accuracy is 0.7578 and the test accuracy is 0.6823. The glass training accuracy is 0.6729 and the test accuracy is 0.6415

The ANNs use 21 neurons in total, of which only four non-output neurons are unshared (output neurons cannot be shared). It is interesting to note that the number of neurons used for the diabetes dataset was 6 (two of which were output neurons), even though the diabetes classification problem is more difficult than cancer. Also, the brain was allowed to use up to 40 neurons but many of these were redundant and not used in the extracted ANNs; these redundant neurons were not referenced in the chain of connections from output neurons to inputs. This redundancy is very much like the redundancy that occurs in CGP when extracting active computational nodes.

It is interesting that for the cancer prediction ANN, class 0 is decided by a very small network in which only two attributes are involved (the 5th and 6th attributes). This network is independent of the network determining class 1. It is also interesting that the neurons with IDs 17 and 18 are actually identical. Note that attributes 0, 1 and 4 have been ignored. There are three identical neurons connected to attribute 2 (with IDs 9, 10 and 11); these can be reduced to a single neuron!


It is surprising that the diabetes ANN is so small. Indeed, the neurons with IDs 18 and 19 can be replaced with a single neuron as the two neurons are identical. The ANN only reads three attributes of the dataset! In the case of the glass ANN, once again we see that the ANN consists of three distinct sub-networks: one predicting classes 2 and 5, one predicting classes 0 and 1, and one predicting classes 3 and 4.

When we analyzed ANNs produced in other cases, we sometimes observed ANNs containing neurons whose inputs all have weight zero (i.e. effectively no inputs). Such neurons can still be involved in prediction since, provided the bias is non-zero, the neuron will output a constant value. Another interesting feature is that pairs of neurons often have multiple connections between them. This is equivalent to a single connection whose weight is the sum of the individual connection weights. This phenomenon was also observed in CGP encoded and evolved ANNs [76].
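Such parallel connections can be collapsed mechanically, since multiple edges between the same pair of neurons are equivalent to a single edge whose weight is their sum. A small illustrative Python helper follows (the (source, weight) pair representation is an assumption made for the example).

from collections import defaultdict

def merge_parallel_connections(connections):
    # Replace several connections from the same source neuron with one connection
    # whose weight is the sum of the individual weights.
    totals = defaultdict(float)
    for source, weight in connections:
        totals[source] += weight
    return sorted(totals.items())

# merge_parallel_connections([(3, 0.25), (3, -0.5), (7, 0.1)]) -> [(3, -0.25), (7, 0.1)]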

12 Evolving Neural Learning Programs

The fitness function (see Overview Algorithm 1) includes the possibility of learning epochs. In this section we present and discuss results when a number of learning epochs are chosen. The task for evolution is then to construct two neural programs that develop ANNs which improve with each learning epoch. The aim is to find a general learning algorithm in which the ANNs change and improve with each learning epoch, beyond the limited number of epochs used in training. The 'while' experimental parameters were changed from those used previously, when there were no learning epochs. The new parameters are shown in Table 11. Twenty evolutionary runs were carried out using these parameters and the results are shown in Table 12. Once again, we see that some parameter values are very precise; they were obtained using the same greedy search discussed earlier.

Table 12 shows the results with multiple learning epochs versus those with no learning epochs. Table 13 shows the results on each problem using multiple learning epochs. It is clear that using no learning epochs gives better results. However, the results with multiple learning epochs are still reasonable, despite the fact that the task is much more difficult, since one is effectively trying to evolve a learning algorithm. It is possible that further experimentation with the developmental parameters could produce better results with multiple epochs.

In Fig. 7 we examine how the accuracy of the classifications varies with learning epochs (run 1 of 20). We set the maximum number of epochs to 100 to see if learning continues beyond the upper limit used during evolution (10). We can see that classification accuracy increases with each epoch up to 10 epochs and starts to decline gradually after that. However, at epoch 29 the accuracy suddenly drops to 0.544 and at epoch 39 the accuracy increases again to 0.647. After this the accuracy shows a slow decline.

Table 11 Table of neural model parameters

Parameter                        Value
NNmin (NNmax)                    0 (40)
Ninit                            5
DNmin (DNmax)                    1 (50)
NDinit                           5
NDSpre                           4
NDSwhi                           1
Nep                              10
MNinc                            0.03
Iu                               −0.6
Ol                               0.8
α                                1.5
'Pre' development parameters
NHdeath (NHbirth)                −0.405 (0.406)
DHdeath (DHbirth)                −0.39 (−0.197)
δsh                              0.1
δsp                              0.1487
δsb                              0.07
δdh                              0.1
δdp                              0.1
δdw                              0.101
'While' development parameters
NHdeath (NHbirth)                −0.8 (0.8)
DHdeath (DHbirth)                −0.7 (0.7)
δsh                              0.0011
δsp                              0.0011
δsb                              0.0011
δdh                              0.0011
δdp                              0.0011
δdw                              0.0011

Table 12 Test accuracy for ten learning epochs versus no learning epochs

Acc.       Learning epochs   No learning epochs
Mean       0.6974            0.7075
Median     0.6985            0.7266
Maximum    0.7355            0.7669
Minimum    0.6685            0.6200

Table 13 Test accuracies over problems using ten learning epochs

Acc.       Cancer    Diabetes   Glass
Mean       0.8799    0.6250     0.3774
Median     0.9425    0.6354     0.3962
Maximum    0.9885    0.7031     0.5660
Minimum    0.6092    0.4740     0.0000

Fig. 7 Variation of test classification accuracy with learning epoch for run 1 of 20

We obtained several evolved solutions in which training accuracy increased at each epoch until the imposed maximum number of epochs; however, as yet none of these were able to improve beyond that limit.

13 Open Questions

There are many issues and questions that remain to be investigated. Firstly, it is unclear why better results cannot at present be obtained when evolving developmental programs with multiple epochs. Neither is it clear why programs can be evolved that continuously improve the developed ANNs over a number of epochs (i.e. 10) yet do not improve subsequently.

It is worth contrasting the model discussed in this chapter with previous work on Self-Modifying CGP (SMCGP) [25]. In SMCGP, phenotypes can be iterated to produce a sequence of programs or phenotypes. In some cases genotypes were found that produced general solutions and always improved at each iteration. The fitness was accumulated over all the correct test cases summed over all the iterations. In the problems studied (i.e. even-n parity, producing π) there was also a notion of perfection. For instance, in the parity case perfection meant that at each iteration the program produced the next parity case (with more inputs) perfectly. If at the next iteration the appropriate parity function was not produced, then the iteration stopped. In the work discussed here, the fitness is not cumulative. At each epoch, the fitness is the average accuracy of the classifiers over the three classification problems.


If the fitness reduces at the next epoch, then the epoch loop is terminated. However, in principle, we could sum the accuracies at each epoch and, if the accuracy at a particular epoch is reduced, terminate the epoch loop. Summing the accuracies would reward developmental programs that produced the best history of developmental changes. At present, the developmental programs do not receive a reward signal during multiple epochs. This means that the task for evolution is to continuously improve the developed ANNs without being supplied with a reward signal. However, one would expect that as the fitness increases at each epoch, the number of changes that need to be made to the developed ANNs should decrease. This suggests that supplying the fitness at the previous epoch to the developmental programs might be useful. In fact, this option has already been implemented, but as yet the evidence that it produces improved results is inconclusive.

While learning over multiple epochs, we have assumed that the developmental parameters should be fixed (i.e. they are chosen before the development loop; see line 5 of Overview Algorithm 1). However, it is not clear that this should be so. One could argue that during early learning topological changes in the brain network are more important, and weight changes more important in later phases of learning. This suggests that at each step of the learning loop one could load new developmental parameters; this would allow control of each epoch of learning. However, this has the drawback of increasing the number of possible parameter settings.

The neural variables that are given as inputs to the CGP developmental programs are an assumption of the model. For the soma these are: health, bias, position, problem type and average dendrite health, position and weight. For the dendrite they are: dendrite health, weight, position, problem type and soma health, bias and position. Further experimental work needs to be undertaken to determine whether they are all useful. The program as written already has the inclusion of any of these variables as an option.

There are also many assumptions made in quite small aspects of the whole model. When new neurons or dendrites are born, what should the initial values of the neural variables be? What are the best upper bounds on the number of neurons and dendrites? Currently, dendrite replication is decided by comparing the parent neuron health with DHbirth rather than comparing dendrite health with this threshold. If dendrite health were compared with a threshold, it could rapidly lead to very large numbers of dendrites. Many choices have been made that need to be investigated in more detail.

In real biological brains, early in the developmental process neurons move (and have no or few dendrites), while later neurons do not move and have many, or at least a number of, dendrites. This is understandable, as moving when you have dendrites is difficult (if not impossible): they provide resistance and would be obstructed by the dendrites of other neurons. However, in the proposed model movement can happen irrespective of such matters. It would be possible to restrict the movement of whole neurons by making the movement increments depend on the number of dendrites a neuron has.


There are also very many parameters in the model, and experiment has shown that results can be very sensitive to some of them. Thus further experimentation is required to identify good choices for these parameters.

A fundamental issue is how to handle inputs and outputs. In the classification problems, the number of inputs is given by the problem with the most attributes; problems with fewer attributes are given the value zero for the extra inputs. This could be awkward if the problems have hugely varying numbers of inputs. Is there another way of handling this? Perhaps one could borrow more ideas from SMCGP and make all input connections access inputs using a pointer to a circular register of inputs. Every time a neuron connected to an input, a global pointer to the register of inputs would be incremented. Another possible idea is to assign all inputs a unique position (across all problems) and introduce the appropriate inputs at the ANN extraction stage; this would remove the need to assign zero to non-existent inputs (as mentioned above). This would mean no inputs are shared.

Equally fundamental is the issue of handling outputs. Currently, we have dedicated output neurons for each output; however, this means that development cannot start with a single neuron. Perhaps neurons could decide to be an output neuron for a particular problem, and some scheme would need to be devised to allocate the appropriate number of outputs for each computational problem (rather as was done in SMCGP). Alternatively, extra genes could be added to the genome, like output genes in standard CGP. These output connection genes would be real-valued between −1 and 1 and would snap to the nearest neurons. Essentially, this would mean that the model would have three chromosomes, one each for the soma, the dendrites and the outputs. This would have the advantage that only non-output neurons would be necessary.

So far, we have examined the utility of the developmental model on three classification problems. However, the aim of the work is to produce general problem solving on many different kinds of computational problems. Clearly, a favourable direction is to expand the list of problems and problem types. How much neuron sharing would take place across problems of different types (e.g. classification and real-time control)? Would different kinds of problems cause whole new sub-networks to grow? These questions relate to a more fundamental issue, which is the assessment of developmental ANNs. Should we have a training set of problems (rather than data) and evaluate on an unseen (but related) set of problems?

Currently the neurons exist in a one-dimensional space; however, it would be relatively straightforward to extend the model to two or even three spatial dimensions.

In brains, the morphology of neurons is activity dependent [53]. A simple way to introduce this would be to examine whether a neuron is actually involved in the propagation of signals from inputs to outputs (i.e. whether it is redundant or not). This activity could be an input to the developmental programs. Alternatively, a signal-related input could be provided to the developmental programs. This would mean that signals from other neurons in the model could influence the decisions made by the neuron and dendrite programs. However, running developmental programs during the process of passing signals through the ANN would mean that conventional ANNs could not be extracted, and it would also slow down the assessment of network responses to applied signals.


Perhaps some statistical measures of signals could be computed which are supplied to the neural programs. They could be calculated during each developmental step and then supplied to the neuron and dendrite programs at the start of the next developmental step. However, it should be noted that activity-dependent morphology implies that networks would change during training (and testing) and that network morphology and behaviour would depend on past training history. This would complicate fitness assessment! Eventually, the aim is to create developmental networks of spiking neurons. This would allow models of activity-dependent development based on biological neurons to be abstracted and included in artificial models.

14 Conclusions

We have presented a conceptually simple model of a developmental neuron in which neural networks develop over time. Conventional ANNs can be extracted from these networks. We have shown that an evolved pair of programs can produce networks that can solve multiple classification problems reasonably well. Multiple-problem solving is a new domain for investigating more general developmental neural models.

15 Appendix: Detailed Algorithms

15.1 Developing the Brain and Evaluating the Fitness

The detailed algorithm for developing the brain and assessing its fitness is shown in Algorithm 1. There are two stages to development: the first (which we refer to as 'pre') occurs prior to the learning epoch loop (lines 3–6), while the second phase (referred to as 'while') occurs inside the learning epoch loop (lines 9–12). Lines 13–22 are concerned with calculating fitness. For each computational problem an ANN is extracted from the underlying brain. This is carried out by a function ExtractANN(problem, OutputAddress), which is detailed in Algorithm 9. This function extracts a feedforward ANN corresponding to each computational problem (this is stored in a phenotype which we do not detail here). The array OutputAddress stores the addresses of the output neurons associated with the computational problem. It is used together with the phenotype to extract the network of neurons that are required for the computational problem. Then the input data is supplied and the outputs of the ANN are calculated. The class of a data instance is determined by the largest output.

The learning loop (lines 8–29) develops the brain and exits if the fitness value (in this case classification accuracy) reduces (lines 23–27 in Algorithm 1). One can think of the 'pre' development phase as growing a neural network prior to training. The 'while' phase is a period of development within the learning phase. Nep denotes the user-defined number of learning epochs.


Np represents the number of problems in the suite of problems being solved. Nex(p) denotes the number of examples for problem p. Acc is the accuracy of prediction for a single training instance. F is the fitness over all examples. TF is the accumulated fitness over all problems. Fitness is normalised (lines 20 and 22).

Algorithm 1 Develop network and evaluate fitness
1: function Fitness
2: Initialise brain
3: Use 'pre' parameters
4: for s = 0 to s < NDSpre do    # develop prior to learning
5: UpdateBrain
6: end for
7: TFprev = 0
8: for e = 0 to e < Nep do    # learning loop
9: Use 'while' parameters
10: for s = 0 to s < NDSwhi do    # learning phase
11: UpdateBrain
12: end for
13: TF = 0    # initialise total fit
14: for p = 0 to p < Np do
15: ExtractANN(p, OutputAddress)    # Get ANN for problem p
16: F = 0    # initialise fit
17: for t = 0 to t < Nex(p) do
18: F = F + Acc    # sum acc. over instances
19: end for
20: TF = TF + F/Nex(p)    # sum normalised acc. over problems
21: end for
22: TF = TF/Np    # normalise total fitness
23: if TF < TFprev then    # has fitness reduced?
24: TF = TFprev    # return previous fitness
25: Break    # terminate learning loop
26: else
27: TFprev = TF    # update previous fitness
28: end if
29: end for
30: return TF
31: end function

15.2 Updating the Brain

Algorithm 2 shows the update brain process. This algorithm is run at each developmental step. It runs the soma and dendrite programs for each neuron, and from the previously existing brain creates a new version (NewBrain) which eventually overwrites the previous brain at the last step (lines 52–53).


Algorithm 2 Update brain
1: function UpdateBrain
2: NewNumNeurons = 0
3: for i = 0 to i < NumNeurons do    # get number and addresses of neurons
4: if (Brain[i].out = 0) then
5: NonOutputNeuronAddress[NumNonOutputNeurons] = i
6: increment NumNonOutputNeurons
7: else
8: OutputNeuronAddress[NumOutputNeurons] = i
9: increment NumOutputNeurons
10: end if
11: end for
12: for i = 0 to i < NumNonOutputNeurons do    # process non-output neurons
13: NeuronAddress = NonOutputNeuronAddress[i]
14: Neuron = Brain[NeuronAddress]
15: UpdatedNeurVars = RunSoma(Neuron)    # get new position, health and bias
16: if (DisallowNonOutputsToMove) then
17: UpdatedNeurVars.x = Neuron.x
18: else
19: UpdatedNeurVars.x = IfCollision(NewNumNeurons, NewBrain, UpdatedNeurVars.x)
20: end if
21: UpdatedNeuron = RunAllDendrites(Neuron, UpdatedNeurVars)
22: if (UpdatedNeuron.health > NHdeath) then    # if neuron survives
23: NewBrain[NewNumNeurons] = UpdatedNeuron
24: Increment NewNumNeurons
25: if (NewNumNeurons = NNmax - NumOutputNeurons) then
26: Break    # exit non-output neuron loop
27: end if
28: end if
29: if (UpdatedNeuron.health > NHbirth) then    # neuron replicates
30: UpdatedNeuron.x = UpdatedNeuron.x + MNinc
31: UpdatedNeuron.x = IfCollision(NewNumNeurons, NewBrain, UpdatedNeuron.x)
32: NewBrain[NewNumNeurons] = CreateNewNeuron(UpdatedNeuron)
33: Increment NewNumNeurons
34: if (NewNumNeurons = NNmax - NumOutputNeurons) then
35: Break    # exit non-output neuron loop
36: end if
37: end if
38: end for
39: for i = 0 to i < NumOutputNeurons do    # process output neurons
40: NeuronAddress = OutputNeuronAddress[i]
41: Neuron = Brain[NeuronAddress]
42: UpdatedNeurVars = RunSoma(Neuron)    # get new position, health and bias
43: if (DisallowOutputsToMove) then
44: UpdatedNeurVars.x = Neuron.x
45: else
46: UpdatedNeurVars.x = IfCollision(NewNumNeurons, NewBrain, UpdatedNeurVars.x)
47: end if
48: UpdatedNeuron = RunAllDendrites(Neuron, UpdatedNeurVars)
49: NewBrain[NewNumNeurons] = UpdatedNeuron
50: Increment NewNumNeurons
51: end for
52: NumNeurons = NewNumNeurons
53: Brain = NewBrain
54: end function


Algorithm 2 starts by analyzing the brain to determine the addresses and numbers of non-output and output neurons (lines 3–11). Then the non-output neurons are processed. The evolved soma program is executed and returns a neuron with updated values for the neuron position, health and bias. These are stored in the variable UpdatedNeurVars. If the user-defined option to disallow non-output neuron movement is chosen, then the updated neuron position is reset to its value before the soma program was run (lines 16–17). Next the evolved dendrite programs are executed in all dendrites. The algorithmic details are given in Algorithm 6 (see Sect. 4.6). The neuron health is compared with the user-defined neuron death threshold NHdeath, and if the health exceeds the threshold the neuron survives (see lines 22–28). At this stage it is possible that the neuron has been given a position that is identical to that of one of the neurons in the developing brain (NewBrain), so a mechanism is needed to prevent this. This is accomplished by Algorithm 3 (lines 19 and 46). It checks whether a collision has occurred and, if so, an increment MNinc is added to the position, which is then bound to the interval [−1, 1]. In line 23 the updated neuron is written into NewBrain. A check is made in line 25 to see if the allowed number of neurons has been reached; if so, the non-output neuron update loop (lines 12–38) is exited and the output neuron section starts (lines 39–51). If the limit on the number of neurons has not been reached, the updated neuron may replicate, depending on whether its health is above the user-defined threshold NHhealth (line 29). The position of the newborn neuron is immediately incremented by MNinc so that it does not collide with its parent (line 30). However, its position also needs to be checked to see if it collides with any other neuron, in which case its position is incremented again until a position is found that causes no collision. This is done in the function IfCollision. In CreateNewNeuron (see line 32) the bias, the incremented position and the dendrites of the parent neuron are copied into the child neuron. However, the new neuron is given a health of 1.0 (the maximum value). The algorithm then processes the output neurons (lines 39–51) and again is terminated if the allowed number of neurons is exceeded. The steps are similar to those carried out with non-output neurons, except that output neurons can neither die nor replicate, as their number is fixed by the number of outputs required by the computational problem being solved. The details of the neuron collision avoidance mechanism are shown in Algorithm 3.

15.3 Running the Soma

The UpdateBrain program calls the RunSoma program (Algorithm 4) to determine how the soma changes in each developmental step. The seven soma program inputs, comprising the neuron health, position and bias, the averaged position, weight and health of the neuron's dendrites, and the problem type, are supplied to the CGP-encoded soma program (line 12). The array ProblemTypeInputs stores NumProblems+1


Algorithm 3 Move neuron if it collides with another
1: function IfCollision(NumNeurons, Brain, NeuronPosition)
2:   NewPosition = NeuronPosition
3:   collision = 1
4:   while collision do
5:     collision = 0
6:     for i = 0 to i < NumNeurons do
7:       if (| NewPosition - Brain[i].x | < 1.e-8) then
8:         collision = 1
9:       end if
10:       if collision then
11:         break
12:       end if
13:     end for
14:     if collision then
15:       NewPosition = NewPosition + MNinc
16:     end if
17:   end while
18:   NewPosition = Bound(NewPosition)
19:   return NewPosition
20: end function

constants equally spaced between −1 and 1. These are used to allow output neurons to know what computational problem they belong to. The soma program has three outputs relating to the position, health and bias of the neuron. These are used to update the neuron (line 13).

Algorithm 4 RunSoma(Neuron)
1: function RunSoma(Neuron)
2:   AvDendritePosition = GetAvDendritePosition(Neuron)
3:   AvDendriteWeight = GetAvDendriteWeight(Neuron)
4:   AvDendriteHealth = GetAvDendriteHealth(Neuron)
5:   SomaProgramInputs[0] = Neuron.health
6:   SomaProgramInputs[1] = Neuron.x
7:   SomaProgramInputs[2] = Neuron.bias
8:   SomaProgramInputs[3] = AvDendritePosition
9:   SomaProgramInputs[4] = AvDendriteWeight
10:   SomaProgramInputs[5] = AvDendriteHealth
11:   SomaProgramInputs[6] = ProblemTypeInputs[WhichProblem]
12:   SomaProgramOutputs = SomaProgram(SomaProgramInputs)
13:   UpdatedNeuron = UpdateNeuron(Neuron, SomaProgramOutputs)
14:   return UpdatedNeuron.x, UpdatedNeuron.health, UpdatedNeuron.bias
15: end function
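As a concrete illustration of the problem-type constants and the seven soma program inputs described above, the following Python sketch builds NumProblems+1 values equally spaced in [−1, 1] and assembles the input vector. It is not the authors' implementation; the variable names and dictionary layout of a neuron are our own assumptions.

# Illustrative sketch only; names such as problem_type_constants and the
# neuron dictionary layout are ours, not from the original implementation.

def problem_type_constants(num_problems):
    """NumProblems+1 constants equally spaced between -1 and 1."""
    return [-1.0 + 2.0 * i / num_problems for i in range(num_problems + 1)]

def soma_program_inputs(neuron, which_problem, constants):
    """Assemble the seven inputs supplied to the evolved soma program."""
    dendrites = neuron["dendrites"]
    avg = lambda key: sum(d[key] for d in dendrites) / max(1, len(dendrites))
    return [
        neuron["health"],          # neuron health
        neuron["x"],               # neuron position
        neuron["bias"],            # neuron bias
        avg("x"),                  # average dendrite position
        avg("weight"),             # average dendrite weight
        avg("health"),             # average dendrite health
        constants[which_problem],  # problem type input
    ]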


15.4 Changing the Neuron Variables

The UpdateNeuron function (Algorithm 5) updates the neuron properties of health, position and bias according to three user-chosen options defined by a variable Incropt. If this is zero, the soma program outputs directly determine the updated values of the soma's health, position and bias. If Incropt is one or two, the updated values of the soma are changed incrementally from the parent neuron's values. The increment is either linear (Incropt = 1) or nonlinear (Incropt = 2) (lines 8–16), and it is added or subtracted depending on whether the corresponding soma program output is greater than zero, or less than or equal to zero (lines 17–33). The magnitudes of the increments are defined by the user-defined constants δsh, δsp, δsb and the sigmoid slope parameter α (see Table 1). The increment methods described in Algorithm 5 change neural variables, so action needs to be taken to force the variables to lie strictly in the interval [−1, 1]. We call this 'bounding' (lines 34–36). This is accomplished using a hyperbolic tangent function.
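A minimal Python sketch of this update scheme is given below. It is not the authors' code: the constant values are placeholders standing in for δsh, δsp, δsb and α, while the sigmoid, the tanh-based bounding and the Incropt switch follow the description above.

import math

# Illustrative sketch of the neuron-variable update (cf. Algorithm 5);
# the numerical constants are placeholders, not values from the chapter.
DELTA_SH, DELTA_SP, DELTA_SB = 0.1, 0.1, 0.1   # increment magnitudes
ALPHA = 1.0                                    # sigmoid slope parameter

def sigmoid(x, alpha=ALPHA):
    return 1.0 / (1.0 + math.exp(-alpha * x))

def bound(x):
    """Force a variable to lie in [-1, 1] using a hyperbolic tangent."""
    return math.tanh(x)

def update_soma(parent, outputs, incr_opt):
    """parent = (health, position, bias); outputs = three soma program outputs."""
    if incr_opt == 0:
        new = list(outputs)                    # outputs used directly
    else:
        deltas = (DELTA_SH, DELTA_SP, DELTA_SB)
        new = []
        for p, out, delta in zip(parent, outputs, deltas):
            step = delta if incr_opt == 1 else delta * sigmoid(out)
            new.append(p + step if out > 0.0 else p - step)  # sign of output decides
    return tuple(bound(v) for v in new)        # bounding always applied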

15.5 Running All Dendrite Programs and Building a New Neuron

Algorithm 6 takes an existing neuron and creates a new neuron using the updated soma variables (position, health and bias), which are stored in UpdatedNeurVars (from Algorithm 4), and the updated dendrites which result from running the dendrite program in all the dendrites. Initially (lines 3–5), the updated soma variables are written into the updated neuron. The number of dendrites in the updated neuron is set to zero. In lines 8–11, the health of the non-updated neuron is examined and, if it is above the dendrite health threshold for birth, the updated neuron gains a new dendrite created by the function GenerateDendrite(). This assigns a weight, health and position to the new dendrite: the weight and health are set to one, and the position is set to half the parent neuron position. These choices appeared to give good results. Lines 12–33 are concerned with running the dendrite program in all the dendrites of the non-updated neuron and updating the dendrites. If an updated dendrite has a health above its death threshold then it survives and is written into the updated neuron (lines 22–28). Updated dendrites are not written into the updated neuron if it already has the maximum allowed number of dendrites (lines 25–27). In lines 30–33 a check is made as to whether the updated neuron has no dendrites; if so, it is given one of the dendrites of the non-updated neuron. Finally, the updated neuron is returned to the calling function.

Algorithm 6 calls the function RunDendrite (line 21), which is detailed in Algorithm 7. It changes the dendrites of a neuron according to the evolved dendrite program. It begins by assigning the dendrite's health, position and weight to the parent dendrite variables. It then writes the dendrite program outputs to the internal variables health, weight and position.


Algorithm 5 Neuron update function
1: function UpdateNeuron(Neuron, SomaProgramOutputs)
2:   ParentHealth = Neuron.health
3:   ParentPosition = Neuron.x
4:   ParentBias = Neuron.bias
5:   health = SomaProgramOutputs[0]
6:   position = SomaProgramOutputs[1]
7:   bias = SomaProgramOutputs[2]
8:   if (Incropt = 1) then                      # calculate increment
9:     HealthIncrement = δsh
10:     PositionIncrement = δsp
11:     BiasIncrement = δsb
12:   else if (Incropt = 2) then
13:     HealthIncrement = δsh * sigmoid(health, α)
14:     PositionIncrement = δsp * sigmoid(position, α)
15:     BiasIncrement = δsb * sigmoid(bias, α)
16:   end if
17:   if (Incropt > 0) then                     # apply increment
18:     if (health > 0.0) then
19:       health = ParentHealth + HealthIncrement
20:     else
21:       health = ParentHealth - HealthIncrement
22:     end if
23:     if (position > 0.0) then
24:       position = ParentPosition + PositionIncrement
25:     else
26:       position = ParentPosition - PositionIncrement
27:     end if
28:     if (bias > 0.0) then
29:       bias = ParentBias + BiasIncrement
30:     else
31:       bias = ParentBias - BiasIncrement
32:     end if
33:   end if
34:   health = Bound(health)
35:   position = Bound(position)
36:   bias = Bound(bias)
37:   return health, position and bias
38: end function

Then, in lines 8–16, it defines the possible increments in health, weight and position that will be used to increment or decrement the parent dendrite variables, according to the user-defined incremental options (linear or nonlinear). In lines 17–33 it carries out the increments or decrements of the parent dendrite variables, according to whether the corresponding dendrite program outputs are greater than zero, or less than or equal to zero. After this it bounds those variables. Finally, in lines 37–44, it updates the dendrite's health, weight and position, provided the adjusted health is above the dendrite death threshold (in other words, the dendrite survives). Note that if Incropt = 0 then there is no incremental adjustment and the health, weight and position of the dendrites are just bounded (lines 34–36).
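Putting these pieces together, the following Python sketch mirrors the overall flow of Algorithm 6 (listed below). It is an illustrative reconstruction rather than the published code: the threshold names DH_BIRTH and DH_DEATH, the limit MAX_NUM_DENDRITES, and the dictionary representation of neurons and dendrites are our own, and the evolved dendrite program is stood in for by an arbitrary callable.

import math

# Illustrative sketch of the dendrite-update step; thresholds and the data
# layout are assumptions, and apply_dendrite_update is a minimal stand-in
# for Algorithm 7 (direct update followed by tanh bounding).
DH_BIRTH, DH_DEATH = 0.8, -0.9
MAX_NUM_DENDRITES = 10

def generate_dendrite(parent_neuron):
    """New dendrite: weight and health one, position half the parent position."""
    return {"weight": 1.0, "health": 1.0, "x": 0.5 * parent_neuron["x"]}

def apply_dendrite_update(dendrite, outputs):
    """Stand-in for Algorithm 7: outputs are (health, weight, position)."""
    keys = ("health", "weight", "x")
    return {k: math.tanh(v) for k, v in zip(keys, outputs)}

def run_all_dendrites(neuron, dendrite_program, new_x, new_health, new_bias):
    updated = {"x": new_x, "health": new_health, "bias": new_bias,
               "isout": neuron["isout"], "dendrites": []}
    if neuron["health"] > DH_BIRTH:                     # neuron gains a dendrite
        updated["dendrites"].append(generate_dendrite(neuron))
    for dendrite in neuron["dendrites"]:
        outputs = dendrite_program(neuron, dendrite)    # evolved program, 3 outputs
        new_dendrite = apply_dendrite_update(dendrite, outputs)
        if new_dendrite["health"] > DH_DEATH:           # dendrite survives
            updated["dendrites"].append(new_dendrite)
            if len(updated["dendrites"]) >= MAX_NUM_DENDRITES:
                break
    if not updated["dendrites"]:                        # if all dendrites die
        updated["dendrites"].append(dict(neuron["dendrites"][0]))
    return updated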


Algorithm 6 Run the evolved dendrite program in all dendrites
1: function RunAllDendrites(Neuron, DendriteProgram, NewSomaPosition, NewSomaHealth, NewSomaBias)
2:   WhichProblem = Neuron.isout
3:   OutNeuron.x = NewSomaPosition
4:   OutNeuron.health = NewSomaHealth
5:   OutNeuron.bias = NewSomaBias
6:   OutNeuron.isout = WhichProblem
7:   OutNeuron.NumDendrites = 0
8:   if (Neuron.health > DHbirth) then
9:     OutNeuron.dendrites[OutNeuron.NumDendrites] = GenerateDendrite()
10:     Increment OutNeuron.NumDendrites
11:   end if
12:   for i = 0 to i < Neuron.NumDendrites do
13:     DendriteProgramInputs[0] = Neuron.health
14:     DendriteProgramInputs[1] = Neuron.x
15:     DendriteProgramInputs[2] = Neuron.bias
16:     DendriteProgramInputs[3] = Neuron.dendrites[i].health
17:     DendriteProgramInputs[4] = Neuron.dendrites[i].weight
18:     DendriteProgramInputs[5] = Neuron.dendrites[i].position
19:     DendriteProgramInputs[6] = ProblemTypeInputs[WhichProblem]
20:     DendriteProgramOutputs = DendriteProgram(DendriteProgramInputs)
21:     UpdatedDendrite = RunDendrite(Neuron, i, DendriteProgramOutputs)
22:     if (UpdatedDendrite.isAlive) then
23:       OutNeuron.dendrites[OutNeuron.NumDendrites] = UpdatedDendrite
24:       increment OutNeuron.NumDendrites
25:       if (OutNeuron.NumDendrites > MaxNumDendrites) then
26:         break
27:       end if
28:     end if
29:   end for
30:   if (OutNeuron.NumDendrites = 0) then       # if all dendrites die
31:     OutNeuron.dendrites[0] = Neuron.dendrites[0]
32:     OutNeuron.NumDendrites = 1
33:   end if
34:   return OutNeuron
35: end function

Algorithm 2 uses a function CreateNewNeuron to create a new neuron if the neuron health is above a threshold. This function is described in Algorithm 8. It makes the newborn neuron the same as the parent (note that its position will be adjusted by the collision avoidance algorithm), except that it is given a health of one. Experiments suggested that this gave better results.


Algorithm 7 Change dendrites according to the evolved dendrite program
1: function RunDendrite(Neuron, WhichDendrite, DendriteProgramOutputs)
2:   ParentHealth = Neuron.dendrites[WhichDendrite].health
3:   ParentPosition = Neuron.dendrites[WhichDendrite].x
4:   ParentWeight = Neuron.dendrites[WhichDendrite].weight
5:   health = DendriteProgramOutputs[0]
6:   weight = DendriteProgramOutputs[1]
7:   position = DendriteProgramOutputs[2]
8:   if (Incropt = 1) then
9:     HealthIncrement = δdh
10:     WeightIncrement = δdw
11:     PositionIncrement = δdp
12:   else if (Incropt = 2) then
13:     HealthIncrement = δdh * sigmoid(health, α)
14:     WeightIncrement = δdw * sigmoid(weight, α)
15:     PositionIncrement = δdp * sigmoid(position, α)
16:   end if
17:   if (Incropt > 0) then
18:     if (health > 0.0) then
19:       health = ParentHealth + HealthIncrement
20:     else
21:       health = ParentHealth - HealthIncrement
22:     end if
23:     if (position > 0.0) then
24:       position = ParentPosition + PositionIncrement
25:     else
26:       position = ParentPosition - PositionIncrement
27:     end if
28:     if (weight > 0.0) then
29:       weight = ParentWeight + WeightIncrement
30:     else
31:       weight = ParentWeight - WeightIncrement
32:     end if
33:   end if
34:   health = Bound(health)
35:   position = Bound(position)
36:   weight = Bound(weight)
37:   if (health > DHdeath) then
38:     UpdatedDendrite.weight = weight
39:     UpdatedDendrite.health = health
40:     UpdatedDendrite.x = position
41:     UpdatedDendrite.isAlive = 1
42:   else
43:     UpdatedDendrite.isAlive = 0
44:   end if
45:   return UpdatedDendrite and UpdatedDendrite.isAlive
46: end function


Algorithm 8 Create new neuron from parent neuron
1: function CreateNewNeuron(ParentNeuron)
2:   ChildNeuron.NumDendrites = ParentNeuron.NumDendrites
3:   ChildNeuron.isout = 0
4:   ChildNeuron.health = 1
5:   ChildNeuron.bias = ParentNeuron.bias
6:   ChildNeuron.x = ParentNeuron.x
7:   for i = 0 to i < ChildNeuron.NumDendrites do
8:     ChildNeuron.dendrites[i] = ParentNeuron.dendrites[i]
9:   end for
10:   return ChildNeuron
11: end function

15.6 Extracting Conventional ANNs from the Evolved Brain

In Algorithm 1, a conventional feed-forward ANN is extracted from the underlying collection of neurons (line 15). The algorithm for doing this is shown in Algorithm 9. Firstly, this algorithm determines the number of inputs to the ANN (line 5). Since inputs are shared across problems, the number of inputs is set to the maximum number of inputs that occurs in the computational problem suite. If an individual problem has fewer inputs than this maximum, the extra inputs are set to 0.0. The brain array is sorted by position (line 6). The algorithm then examines all neurons (line 7), counts the non-output and output neurons, and stores the neuron data in the arrays NonOutputNeurons and OutputNeurons. It also records their addresses in the brain array. The next phase is to go through all dendrites of the non-output neurons to determine which inputs or neurons they connect to (lines 19–33). The evolved neuron programs generate dendrites with end positions anywhere in the interval [−1, 1]. The end positions are converted to lengths (line 25); in this step the dendrite position is linearly mapped into the interval [0, 1]. To generate a valid neural network we assume that dendrites are automatically connected to the nearest neuron or input on the left. We refer to this as "snapping" (lines 28 and 44). The dendrites of non-output neurons are allowed to connect to either inputs or other non-output neurons on their left. However, output neurons are only allowed to connect to non-output neurons on their left. Algorithm 10 returns the address of the neuron or input that the dendrite snaps to. The dendrites of output neurons are not allowed to connect directly to inputs (see line 4 of the GetClosest function). However, when neurons are allowed to move, a situation can occur in which an output neuron is positioned as the first neuron to the right of the inputs, so that it can only connect to inputs. If this situation occurs, the initialisation of the variable AddressOfClosest to zero in the GetClosest function (line 2) means that all the dendrites of the output neuron will be connected to the first external input to the ANN. Thus a valid network will still be extracted, albeit with a rather useless output neuron. It is expected that evolution will avoid using programs that allow this to happen.
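The position-to-length conversion and the left-snapping rule can be illustrated with a short Python sketch. This is an interpretation of the description above rather than the published code; in particular, the exact linear mapping from dendrite position to length is our assumption.

import math

# Illustrative sketch of dendrite "snapping" (cf. Algorithms 9 and 10).
# Assumption: position in [-1, 1] is mapped linearly to length in [0, 1].

def dendrite_connection_point(neuron_x, dendrite_position):
    length = (dendrite_position + 1.0) / 2.0   # convert position to length
    return math.tanh(neuron_x - length)        # bound to [-1, 1], as in the text

def snap_left(dend_pos, candidate_positions):
    """Index of the nearest candidate strictly to the left (feed-forward only);
    defaults to 0 if nothing lies to the left, as in GetClosest."""
    best, best_dist = 0, 3.0
    for i, x in enumerate(candidate_positions):
        dist = dend_pos - x
        if 0 < dist < best_dist:
            best, best_dist = i, dist
    return best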


Algorithm 9 The extraction of neural networks from the underlying brain
1: function ExtractANN(problem, OutputAddress)
2:   NumNonOutputNeur = 0
3:   NumOutputNeurons = 0
4:   OutputCount = 0
5:   Ni = max over all problems p of Ni(p)      # number of ANN inputs
6:   sort(Brain, 0, NumNeurons-1)               # sort neurons by position
7:   for i = 0 to i < NumNeurons do
8:     Address = i + Ni
9:     if (Brain[i].isout = 0) then             # non-output neuron
10:       NonOutputNeur[NumNonOutputNeur] = Brain[i]
11:       NonOutputNeuronAddress[NumNonOutputNeur] = Address
12:       Increment NumNonOutputNeur
13:     else                                    # output neuron
14:       OutputNeurons[NumOutputNeurons] = Brain[i]
15:       OutputNeuronAddress[NumOutputNeurons] = Address
16:       Increment NumOutputNeurons
17:     end if
18:   end for
19:   for i = 0 to i < NumNonOutputNeur do      # do non-output neurons
20:     Phenotype[i].isout = 0
21:     Phenotype[i].bias = NonOutputNeur[i].bias
22:     Phenotype[i].address = NonOutputNeuronAddress[i]
23:     NeuronPosition = NonOutputNeur[i].x
24:     for j = 0 to j < NonOutputNeur[i].NumDendrites do
25:       Convert DendritePosition to DendriteLength
26:       DendPos = NeuronPosition - DendriteLength
27:       DendPos = Bound(DendPos)
28:       AddressClosest = GetClosest(NumNonOutputNeur, NonOutputNeur, 0, DendPos)
29:       Phenotype[i].ConnectionAddresses[j] = AddressClosest
30:       Phenotype[i].weights[j] = NonOutputNeur[i].weight[j]
31:     end for
32:     Phenotype[i].NumConnectionAddress = NonOutputNeur[i].NumDendrites
33:   end for
34:   for i = 0 to i < NumOutputNeurons do      # do output neurons
35:     i1 = i + NumNonOutputNeur
36:     Phenotype[i1].isout = OutputNeurons[i].isout
37:     Phenotype[i1].bias = OutputNeurons[i].bias
38:     Phenotype[i1].address = OutputNeuronAddress[i]
39:     NeuronPosition = OutputNeurons[i].x
40:     for j = 0 to j < OutputNeurons[i].NumDendrites do
41:       Convert DendritePosition to DendriteLength
42:       DendPos = NeuronPosition - DendriteLength
43:       DendPos = Bound(DendPos)
44:       AddressClosest = GetClosest(NumNonOutputNeur, NonOutputNeur, 1, DendPos)
45:       Phenotype[i1].ConnectionAddresses[j] = AddressClosest
46:       Phenotype[i1].weights[j] = OutputNeurons[i].weight[j]
47:     end for
48:     Phenotype[i1].NumConnectionAddress = OutputNeurons[i].NumDendrites
49:     if (OutputNeurons[i].isout = problem+1) then
50:       OutputAddress[OutputCount] = OutputNeuronAddress[i]
51:       Increment OutputCount
52:     end if
53:   end for
54: end function


Algorithm 10 Find which input or neuron a dendrite is closest to
1: function GetClosest(NumNonOutNeur, NonOutNeur, IsOut, DendPos)
2:   AddressOfClosest = 0
3:   min = 3.0
4:   if (IsOut = 0) then                        # only non-output neurons connect to inputs
5:     for i = 0 to i < MaxNumInputs do
6:       distance = DendPos - InputLocations[i]
7:       if distance > 0 then
8:         if (distance < min) then
9:           min = distance
10:           AddressOfClosest = i
11:         end if
12:       end if
13:     end for
14:   end if
15:   for j = 0 to j < NumNonOutNeur do
16:     distance = DendPos - NonOutNeur[j].x
17:     if distance > 0 then                    # feed-forward connections
18:       if (distance < min) then
19:         min = distance
20:         AddressOfClosest = j + MaxNumInputs
21:       end if
22:     end if
23:   end for
24:   return AddressOfClosest
25: end function

Algorithm 9 stores the information required to extract the ANN in an array called Phenotype. This contains the connection addresses of all neurons and their weights (lines 29–30 and 45–46). Finally, it stores the addresses of the output neurons (OutputAddress) corresponding to the computational problem whose ANN is being extracted (lines 49–52). These define the outputs of the extracted ANN when it is supplied with inputs, i.e. in the fitness function when the accuracy is assessed (see Algorithm 1). The Phenotype is stored in the same format as Cartesian Genetic Programming genotypes (see Sect. 5) and decoded in a similar way.
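To make the role of the Phenotype concrete, here is a hedged Python sketch of how such an extracted feed-forward network could be evaluated. The node layout (inputs first, then neurons in feed-forward order) follows the text; the choice of tanh as the neuron activation function is our assumption, as it is not stated in the section reproduced here.

import math

# Illustrative sketch: evaluate an extracted feed-forward ANN stored as a
# list of nodes with connection addresses, weights and a bias.
# The tanh activation is an assumption, not taken from the chapter.

def evaluate_ann(phenotype, inputs, output_addresses):
    values = list(inputs)                      # node values: external inputs first
    for node in phenotype:                     # neurons in feed-forward order
        total = node["bias"]
        for addr, w in zip(node["connections"], node["weights"]):
            total += w * values[addr]          # addr indexes inputs or earlier neurons
        values.append(math.tanh(total))
    return [values[a] for a in output_addresses]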


Anti-heterotic Computing

Viv Kendon

Abstract When two or more different computational components are combined to produce computational power greater than the sum of the parts, this has been called heterotic computing (Stepney et al., 8th Workshop on Quantum Physics and Logic (QPL 2011), EPTCS 95, pp. 263–273, 2012 [46]). An example is measurement-based quantum computing, in which a set of entangled qubits is measured in turn, with the measurement outcomes fed forward by a simple classical computer that keeps track of the parity of the measurement outcomes on each qubit. The parts are no more powerful than a classical computer, while the combination provides universal quantum computation. Most practical physical computers are hybrids of several different types of computational components, but not all are heterotic. In fact, the anti-heterotic case, in which the whole is less than the sum of the parts, is the most awkward to deal with. It occurs commonly in experiments on new unconventional computational substrates: the classical controls in such experiments are almost always conventional classical computers with computational power far outstripping the samples of materials being tested. Care must be exercised to avoid accidentally carrying out all of the computation in the controlling classical computer. In this overview, existing tools to analyse hybrid computational systems are summarised, and directions for future development identified.

1 Background

Nowadays, almost all practical physical computers are composed of more than one distinct computational component. For example, a desktop computer will usually have a CPU and a graphics co-processor, and a chip dedicated to ethernet or wifi protocols for fast communications. A more powerful workstation will also have space for dedicated processors, such as FPGAs (field-programmable gate arrays). Silicon-based chips are no longer becoming faster through shrinking the feature size. Instead,


increased computational power is achieved through optimising the chip design for specific applications, resulting in hundreds of different chips, several of which are combined in a general purpose digital computer. Although they are all treated as being the same type of computational device by theory, the practical difficulties in programming multi-CPU or GPU devices are considerable, and there are many open questions in current research on parallel and multi-threaded computation. Composing computational devices to gain computational power is highly non-trivial even in a conventional setting. Indeed, the theory of computation usually ignores the physical layer entirely, so it has no way to model the effects of combining different types of computational components, let alone tell us when this might be an advantageous thing to do.

When unconventional computational devices are considered, the challenges around composing them into hybrid computers increase further. Their computational capabilities are determined by the properties of the physical system [44]. Any physical system with sufficiently complex structure and dynamics can be used to compute. Examples under active current study include chemical [36, 39, 47], biological [1, 2], quantum [43], optical [48, 49], and various analog [26, 37, 38] computational substrates. Such devices may be best described by models with intrinsically different logic; quantum computers and neural nets, for example, each use a different logic, and are correspondingly challenging to program. There may be additional challenges in data encoding. Given a quantum register, it is in general impossible to efficiently represent the data it encodes in a classical register of equivalent size. Passing data between quantum and classical processors therefore has to be planned carefully, to avoid loss of information.

Passing data between processors is one way to combine different computational devices. A dedicated fast co-processor can perform a commonly used subroutine (e.g., a Fourier transform) on data passed to it and return the result for further processing. But most co-processors are not only passed data, they are also passed control instructions that tell them what processing to perform. This is potentially more powerful, since repeated cycles of data passing and control can act as a feedback loop control system. Control systems include computation as part of their basic functionality [31], so this kind of interconnection potentially adds computation in addition to that performed by the constituent components. Examples where the combination is more powerful than the sum of the parts are known, and designated heterotic computers; examples are described in Sect. 2.

When the computational device is an experimental test of a new material, there is almost always a control computer that is powerful enough to do far more computation than is being tested in the material. This is the anti-heterotic case, where we are interested in the computational power of the substrate with only minimal i/o or controls, but for practical reasons the controls are actually much more powerful. The presence of a powerful classical computer makes it difficult to tell where the observed computation is taking place. Exploiting the potential of diverse computational devices in combination needs new theoretical tools, as argued in Kendon et al. [34], Stepney et al. [45, 46].


Such tools will enable us to compose the models of different computational substrates and determine the power of the resulting hybrid computers.

This chapter is organized as follows: Sect. 2 summarises prior work on identifying heterotic systems in quantum computing. Section 3 considers cases in classical unconventional computing where the combination has less computational power, rather than more. Sections 4 and 5 provide an overview of tools available and desirable for effectively analysing hybrid computational systems. Section 6 summarises and outlines future directions.

2 Computational Enhancement in Hybrid Systems

Heterotic computing was first identified in a quantum computational setting where specific classical controls are required as part of the operation of the computer. The importance of the classical control system in quantum computers was first noted by Jozsa [33], without identifying the contribution to the computational power. A specific type of quantum computing in which the classical controls play an essential role is measurement-based quantum computing (MBQC), also known as cluster-state or one-way quantum computing [40]. In MBQC, many qubits are first prepared in an entangled state. The computation proceeds by measuring the qubits in turn. The outcomes from the measurements feed forward to determine the settings for the measurements performed on the next qubits, see Fig. 1.

Anders and Browne [3] realised that the classical computation required to control and feed forward information in MBQC provides a crucial part of the computational power. Applying measurements without feed-forward is efficiently classically simulatable. The classical part of the computation, which keeps track of the parity of measurements on each qubit, is obviously also efficiently classically simulatable. However, the combination of the two is equivalent to the quantum circuit model with unbounded fan-out [17], which is not (efficiently) classically simulatable.
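As a purely illustrative sketch of the classical side of this loop, the following Python fragment tracks measurement-outcome parities and adapts the next measurement setting accordingly. It is a schematic of the kind of bookkeeping described above, not an implementation of MBQC: the adaptation rule shown (flipping the sign of the measurement angle according to an accumulated X parity, and adding π according to a Z parity) is the standard textbook form, and the measure() routine is a stand-in for the quantum hardware.

import math, random

# Schematic of MBQC classical feed-forward: the quantum measurement is faked
# by a random outcome; only the classical parity bookkeeping is illustrated.

def measure(qubit, angle):
    """Stand-in for a real single-qubit measurement at the given angle."""
    return random.randint(0, 1)

def run_pattern(measurement_angles, dependencies):
    """dependencies[i] = (x_deps, z_deps): earlier qubits whose outcomes
    feed forward into the measurement setting for qubit i."""
    outcomes = {}
    for i, angle in enumerate(measurement_angles):
        x_deps, z_deps = dependencies[i]
        s_x = sum(outcomes[j] for j in x_deps) % 2      # accumulated X parity
        s_z = sum(outcomes[j] for j in z_deps) % 2      # accumulated Z parity
        adapted = (-1) ** s_x * angle + math.pi * s_z   # adapted measurement setting
        outcomes[i] = measure(i, adapted)
    return outcomes

The classical controller here needs only parity sums and sign flips; on its own it is trivial, which is precisely the point made by Anders and Browne about where the extra computational power arises.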

Fig. 1 Schematic diagram of a measurement-based quantum computer. The qubits (circles) are prepared in an entangled state, indicated by grey fill and connecting lines. Measurements are performed on the qubits in turn, removing them from the entangled state (unfilled) and producing a measurement output. The value of the output determines (decision) the settings for the next measurements. The measured qubits can be recycled to form new cluster state (dashed line), see Horsman et al. [27]


Fig. 2 A qubus sequence using six operations instead of eight to perform two gates. The boxes show displacements performed on the continuous variable field controlled by the qubits. Shaded boxes represent an operation acting on the position quadrature of the bus, while unshaded boxes represent operations on the momentum quadrature, see Brown et al. [15]

This shows that the combination of two or more systems to form a new computational system can be in a more powerful computational class than the constituent systems acting separately.

The theory of ancilla-based quantum computation [4] has been abstracted and developed from MBQC into a framework in which a quantum system (the ancilla) controls another quantum system (the qubits), with or without measurement of the ancilla system during the computation. This framework is capable of modelling many types of hybrid quantum computing architectures in which two or more different quantum systems are combined. When the role of the ancilla system is played by a continuous-variable quantum system instead of a qubit or qudit (d-dimensional quantum system), further efficiencies become available. The qubus quantum computer uses a coherent state as a bus connecting the individual qubits. A quantum coherent state has two quadratures, which act as two coupled continuous-variable quantum systems. A practical example is a single-mode laser, such as can be found in many state-of-the-art quantum optics experimental labs. This type of ancilla can interact with many qubits in turn, allowing savings in the number of basic operations required for gate operations [15] and for building cluster states [16, 27]. Figure 2 shows a sequence of six operations that performs four gates, one between each possible pair of the three qubits. Each gate performed separately would require two operations; thus this sequence saves at least two operations over standard methods, more if the qubits have to be swapped to adjacent positions for direct gates. Typically, this provides polynomial reductions in the number of elementary operations required for a computation, when compared with interacting the qubits directly. The natural computational power of the coherent state is only classical without some type of nonlinear interaction [9]. A nonlinear interaction can be with another type of qubit, as in the qubus case, or via measurements of entangled coherent states [35], which generates quantum computing probabilistically, with a high overhead in failed operations which have to be repeated. The theoretical heterotic advantage of hybrid quantum computers is less dramatic than in the classical control case, but the practical advantages are clear from many points of view. Using the quantum system best suited to each role (memory, processing, data transfer, read out) produces a more efficient computer overall.

Our examples in this section consist of a substrate system with a control computer. In MBQC, which has classical controls, the system clearly has two components and


can be precisely analysed. For the hybrid quantum systems such as the qubus, the controls are quantum, but there are always further classical controls in any actual experiment. This thus forms a multi-layer system. In fact, it is not clear that meaningful quantum computing without a classical component is possible if “whole system” analysis is performed.

3 Identifying the Computational Power of the Substrate

The first clearly identified classical example of heterotic computing uses a liquid-state NMR experiment to perform classical gates [41]. The use of NMR to study computation was first established for quantum computing (e.g., Jones [32]). Both liquid-state [22, 25] and solid-state [21] NMR quantum computing architectures have been implemented. However, the quantum nature of the liquid-state NMR ensemble has been questioned [14], leaving the true power of NMR computers an open question.

Roselló-Merino et al. [41] used a liquid-state NMR system to study classical computing. In experiments to perform a sequence of simple classical logic gates, such as NAND, the instruments controlling the NMR experiment pass the outputs of one gate through to the inputs of the next. The output signals and input controls are in completely different formats, so the controlling instruments perform signal transduction, which can require significant computational power. In this case, a Fourier transform is applied, which requires vastly more computational power than a single logic gate. It is thus rather stretching the definition of heterotic computing to claim these NMR experiments as an example.

Nonetheless, such experiments are very insightful. Using NMR to do classical computing involves choosing a subset of the available parameters that are suitable for representing classical bits, and restricting the operations so as to keep the spin ensemble in fully determined classical states. In this way, more robust operations are obtained at the expense of not exploiting the full capabilities of the physical system. As with MBQC, the control computer plays an essential role in the computation, but by itself does not perform the gate logic. Unlike MBQC, however, in this case the control computer very well could perform the gate logic. This is thus an example of anti-heterotic computing, in which the whole is constrained to be less than the sum of the parts.

As a step towards fully characterizing the classical computational power of the NMR system, Bechmann et al. [11] have produced a preliminary classification of the experimental NMR parameters for implementing classical logic gates. As part of this, they determined that a trinary logic (rather than binary) is natural for NMR. Using the natural logic of the model of the physical system is an important element in fully exploiting its computational capabilities. Their work has been extended to take advantage of the inherently continuous nature of the NMR parameter space of non-coupled spin species [10] by implementing continuous gates, so the combined system performs an analog computation. However, a full analysis of the computational power of the components and combinations of these NMR systems has not been done.


Analysing the roles of the NMR substrate and the controlling computer in the overall computation is not straightforward, and provides part of the motivation for developing a set of tools for this purpose. Tools for more specific settings have grown out of these early experiments. Dale et al. [23] provide a framework for analysing substrates used in reservoir computing, and Russell and Stepney [42] provide general methods for obtaining speed limits on computing using finite resources.

4 Abstraction-Representation Theory

The most basic question, which needs to be answered first, is whether a computational system is actually computing at all. To facilitate answering this question for diverse physical systems, Horsman et al. [28] developed a framework, known as abstraction-representation theory (ART), in which science, engineering, computing, and use of technology are all modelled as variations on the same diagrams. This highlights the relationships between the different activities, and allows clear distinctions to be made between experiments on potential new computational substrates, engineering a computer using a new substrate, and performing a computation on a computer made with the substrate. The ART framework facilitated the identification of some basic pre-requisites for computing to be taking place, including the need for a computational entity for which the computation is meaningful or purposeful in some way. The computational entity doesn't have to be human, nor distinct from the computer: Horsman et al. [29] explain how bacteria carry out simple computations in the course of their search for food. The ART framework was further developed in Horsman et al. [30], and applied to a variety of computational edge cases, such as slime mould and black holes.

Figure 3 illustrates the basic scientific process as an AR theory commuting diagram. An experimental procedure H(p) is performed to test a model C_T(m_p) in theory T. The results of the experiment are compared with the theory and any difference ε considered. If ε is too large to be explained by experimental precision or other known factors, then the theory needs to be improved. Science is about creating theories that describe the physical world well.

Fig. 3 Science in the AR theory framework


Fig. 4 Engineering in the AR theory framework

Fig. 5 Computing in the AR theory framework

Figure 4 illustrates the engineering process as an AR diagram. Once we have a good enough theory (small enough ε), we can build something useful with properties we trust because of our well-tested scientific theory. If it turns out that our product doesn't perform well enough, we need to change the physical system so that it fits our design specification more closely. Engineering is about making physical devices that match our design specifications.

Among the many things we build are computers, which we trust to perform our calculations. This trust is based on the prior scientific experiments to develop and test the theories, and on engineering processes that meet the design specifications. These are, of course, not a guarantee that the computer will perform correct computations, but that is a topic well beyond the scope of this paper.

Figure 5 illustrates the process of using a computer to calculate a computation c. The main extra step is the encoding of the problem c into the model of the computer m_p, and the subsequent decoding to obtain the result c′. Note that the commuting nature of the diagram is now assumed in order to carry out the computation: the abstract link between c and c′ is shown dotted, but not carried out. Computing is the use of an engineered physical system to solve an abstract problem. Engineered is used in a broad sense that includes the evolutionary development of brains and bacteria.

The AR theory framework relates science, engineering, technology, and computing by providing a framework for reasoning about these processes without needing


to first solve all the philosophical questions about exactly how the scientific method works. It is also not a "dualist" theory, despite the physical and abstract realms labelled in the diagrams. Everything in the models is physically instantiated; it is just the roles the models are playing in relation to other physical objects that differ. The model m_p of a physical particle p is instantiated in the pages of a text book, in the brain states of the students who learn about it, and on the blackboard as the lecturer describes it. What makes it abstract is that these instantiations have a degree of arbitrariness about them, and can change, while still representing the same model.

The AR theory diagrams focus on the key relationships for understanding science and computing, and avoid having to deal with too many layers of recursion of models of models. When necessary, the computational entity can be explicitly included in the AR diagrams [31], making the relationships three-dimensional (cubic) rather than the two-dimensional squares in Figs. 3, 4 and 5. The AR diagrams also extend smoothly to refinement layers [28], thus providing a link to practical programming tools and their role in the computing process.
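The commuting-diagram condition at the heart of these figures can be phrased operationally, as in the hedged Python sketch below: encode an abstract problem into the device, run the physical system, decode the result, and compare it with the abstract prediction against a tolerance ε. This comparison is carried out during the science and engineering phases, and assumed during use. The function names are ours; the sketch is a schematic of the AR reasoning pattern, not anything taken from the cited papers.

# Schematic of the AR commuting-diagram check; all function arguments are
# placeholders standing in for the abstract problem, the device model and
# the physical experiment.

def diagram_commutes(problem, encode, run_physical, decode, abstract_compute,
                     epsilon):
    """Does physical evolution, viewed through the model, track the abstract
    computation to within epsilon?"""
    predicted = abstract_compute(problem)             # abstract-level result
    observed = decode(run_physical(encode(problem)))  # via the physical device
    return abs(predicted - observed) <= epsilon

# Toy example: a "device" that doubles a number, with trivial encoding.
assert diagram_commutes(3.0, lambda c: c, lambda p: 2 * p, lambda r: r,
                        lambda c: 2 * c, epsilon=1e-9)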

5 Theoretical Tools for Heterotic Computing

As outlined in Sect. 4, AR theory analyses whole systems that compute. More tools are needed to analyse compositions of different systems, to determine if, and how, the combinations provide different computational power than the parts. There are many ways in which a combination of two or more different systems can provide an advantage besides increased computational power. It can be better optimised for a particular task, by matching the natural system dynamics to the problem. It can be more robust against errors, or have a lower power consumption. It may even provide privacy or security guarantees for distributed computing [12]. Of great practical importance is decreasing the power consumed by computation: this can be achieved by extracting more computational power from the same systems.

A more efficient representation of the data can also provide advantages for well-matched problems and computers. This was the inspiration for the first proposed application of quantum computers: simulation of one quantum system by another quantum system [24]. The state space of a quantum system is exponentially large, and best represented by another quantum system of similar dimensions. Complex problems may require several different types of variables (discrete; continuous), making a computer with several types of registers and processors a natural approach.

A framework for analysing heterotic computers needs to be capable of quantifying the computational and representational power of each component, so that meaningful comparisons can be made across data types. The standard method of mapping everything to a Turing machine is not suitable, because it fails to account for the efficiencies gained through using a natural representation for different data types in different computational substrates. The distinctions between different data types need to be retained in order to fully exploit the matching computational substrates.


Fig. 6 The simplest non-trivial composition of two different computational systems B and C. Note that both are in the physical part of an AR theory diagram; the corresponding abstract models are not shown. The transduction is required if the input and output formats differ between B and C

The nature of the connections between the components is also a core feature of any framework for modelling heterotic computers. If two components use different data encodings, then there will need to be signal transduction to convert the data formats. At a high level of abstraction, these connections look like communication channels (identity operations on the logical data). However, at a lower level, any non-trivial data encoding transduction will involve non-trivial computation, and hence contribute to the overall resource requirements and computational power of the system. A full accounting of all the computation in a hybrid device is crucial for fair comparisons to be made between disparate computational systems.

Figure 6 shows the simplest non-trivial composition of two different computational systems, with the output of one becoming the input to the other, and vice versa, after internal operations C_Op and B_Op update the internal states of C and B to C′ and B′ respectively. If the input and output formats do not match, additional signal transduction steps are required. These in general involve computation that must be accounted for. Even if the two components B and C are identical, and the transduction is not required, we still gain computational power in the sense that we now have a larger computer capable of solving larger problems than if we have two identical computers that are not connected.

From a modelling point of view, we can require that all connections match in terms of data format, and insert extra computational units between non-matching data connections to convert the format. This ensures all transduction steps are recorded as computation in the process. Even when transduction is accounted for, there are further issues, such as time scales, in connecting different computational components. Stepney et al. [45] have proposed a heterotic system combining optical, bacterial and chemical computing as a proof-of-concept exploration of the very different time scales in these systems, but with direct interfaces that do not require a classical computer to mediate. A "matching plug" model like this is crying out for a graphical applied category theory realisation. For examples of such systems in a quantum context, see Coecke and Kissinger [19]; this demonstrates that such models are powerful enough to represent quantum computers. Hence, a category theory approach should be both powerful and flexible enough for current and near-future hybrid computer applications.


Fig. 7 The step by step interactions between a computational substrate (state B, state change B_Op) and a controller (state C, state change C_Op): the input to one is the output from the other (assuming no signal transduction needed)

As already noted, connections between components usually go beyond data exchange to include control instructions. The MBQC example in Sect. 2 has one component controlling another. The basic control system is a feedback loop in which the output of the substrate is processed by the control to determine the instruction passed to the substrate for the next round of the loop. This is illustrated in Fig. 7. The dependence of each instruction on the output from the previous operation means that the control sequence cannot be compressed into parallel operations by duplicating the substrate units. Control systems also have well-developed category theory models that can be harnessed for heterotic computing analysis. The difference in a computational setting is that the goal of the process is an abstract computation, rather than a physical system or process.

As well as one component controlling a substrate, two components can also interact such that the states of both components are updated. This is the norm for quantum systems, where interaction always has a “back action” associated with it. This back action can be harnessed for computation, and forms the basis of operation of the ancilla-mediated quantum gates described in Sect. 2.

A category theory framework provides a tool for analysing and understanding hybrid computational devices and the computational capabilities they provide. From this, it is then necessary to create tools for using such devices that handle the different types of computation they support, and their interactions. As already advocated [46], building on the existing tools of refinement, retrenchment [5–8], initialisation and finalisation [18, 20] is a practical route to developing tools for programmers. This would include analysis of the propagation of errors [13], e.g. due to noise and drift, and techniques for correction and control of these errors.
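To make the feedback loop of Fig. 7 concrete, the following toy Python sketch composes a 'substrate' and a 'controller' that alternately update their internal states from each other's outputs, with an explicit (here trivial) transduction step between them. It is purely illustrative: the class names and the arithmetic standing in for B_Op and C_Op are invented for this example and do not model any real heterotic device.

# Toy sketch of the Fig. 7 feedback loop: a controller C and a substrate B
# alternately update their internal states from each other's outputs.
# All names and operations here are illustrative only.

class Substrate:
    def __init__(self, state=0):
        self.state = state          # B: internal state of the substrate

    def b_op(self, instruction):
        # B_Op: update the internal state using the controller's instruction,
        # then emit an output (a trivial arithmetic stand-in).
        self.state = (self.state + instruction) % 16
        return self.state

class Controller:
    def __init__(self, target):
        self.target = target        # C: the abstract computation we want

    def c_op(self, measurement):
        # C_Op: choose the next instruction from the substrate's last output.
        return (self.target - measurement) % 16

def transduce(raw):
    # Placeholder signal transduction between output and input formats;
    # in a real hybrid device this step is itself non-trivial computation.
    return int(raw)

def run(controller, substrate, rounds=5):
    measurement = substrate.state
    for _ in range(rounds):
        instruction = controller.c_op(transduce(measurement))
        measurement = substrate.b_op(transduce(instruction))
    return measurement

if __name__ == "__main__":
    print(run(Controller(target=7), Substrate(state=2)))  # settles on 7

In a real hybrid system the transduce step would be costed as computation in its own right, as argued above.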

6 Summary and Future Directions Use of hybrid and unconventional computational devices is expanding rapidly as computers diversify to gain performance through other routes besides smaller components. While pioneers like Stepney have been studying unconventional hybrid systems for many years, developing a theoretical framework to analyse and understand diverse combinations of computational devices is now becoming more urgent


and important. There are many subtleties that arise when composing devices with the goal of obtaining a heterotic advantage: this chapter has outlined a few of them, and proposed some directions for modelling them effectively. The AR theory framework reviewed in Sect. 4 provides the tools for the first step, understanding the development process from a new material to using a computer made from it. To incorporate unconventional physical substrates as computational devices, we need to clearly identify the computation the device is capable of performing for the purpose we want to use it for. Experiments to determine the computational capabilities of the substrate are an essential part of the development process, in which the anti-heterotic system has to be dissected to separate the contributions of the classical control computers from the substrate. While some progress has been made in this direction (e.g., Dale et al. [23]), there is much more to be done before we can confidently hand a hybrid unconventional computer to a programmer and expect efficient computing to happen with it. Nonetheless, the heterotic approach is crucial to ensure that the many forms of unconventional computation can be exploited fully. Different components should be combined with each doing what it does naturally, and best, for the whole to be a more powerful and efficient computer. Acknowledgements VK funded by the UK Engineering and Physical Sciences Research Council Grant EP/L022303/1. VK thanks Susan Stepney for many stimulating and productive hours of discussions on these subjects and diverse other topics. And for the accompanying stilton scones.

References

1. Adamatzky, A.: Physarum machines: encapsulating reaction-diffusion to compute spanning tree. Naturwissenschaften 94(12), 975–980 (2007)
2. Amos, M.: Theoretical and Experimental DNA Computation. Springer, Berlin (2005)
3. Anders, J., Browne, D.: Computational power of correlations. Phys. Rev. Lett. 102, 050502 (2009)
4. Anders, J., Oi, D.K.L., Kashefi, E., Browne, D.E., Andersson, E.: Ancilla-driven universal quantum computation. Phys. Rev. A 82(2), 020301 (2010)
5. Banach, R., Poppleton, M.: Retrenchment: an engineering variation on refinement. In: 2nd International B Conference. LNCS, vol. 1393, pp. 129–147. Springer (1998)
6. Banach, R., Jeske, C., Fraser, S., Cross, R., Poppleton, M., Stepney, S., King, S.: Approaching the formal design and development of complex systems: the retrenchment position. In: WSCS, IEEE ICECCS'04 (2004)
7. Banach, R., Jeske, C., Poppleton, M., Stepney, S.: Retrenching the purse. Fundam. Informaticae 77, 29–69 (2007)
8. Banach, R., Poppleton, M., Jeske, C., Stepney, S.: Engineering and theoretical underpinnings of retrenchment. Sci. Comput. Program. 67(2–3), 301–329 (2007)
9. Bartlett, S., Sanders, B., Braunstein, S.L., Nemoto, K.: Efficient classical simulation of continuous variable quantum information processes. Phys. Rev. Lett. 88, 097904 (2002)
10. Bechmann, M., Sebald, A., Stepney, S.: From binary to continuous gates—and back again. In: ICES 2010, 335–347 (2010)
11. Bechmann, M., Sebald, A., Stepney, S.: Boolean logic-gate design principles in unconventional computers: an NMR case study. Int. J. Unconv. Comput. 8(2), 139–159 (2013)
12. Ben-Or, M., Crepeau, C., Gottesman, D., Hassidim, A., Smith, A.: Secure multiparty quantum computation with (only) a strict honest majority. In: 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06), pp. 249–260 (2006). https://doi.org/10.1109/FOCS.2006.68
13. Blakey, E.: Unconventional complexity measures for unconventional computers. Nat. Comput. 10, 1245–1259 (2010). https://doi.org/10.1007/s11047-010-9226-9
14. Braunstein, S.L., Caves, C.M., Jozsa, R., Linden, N., Popescu, S., Schack, R.: Separability of very noisy mixed states and implications for NMR quantum computing. Phys. Rev. Lett. 83, 1054–1057 (1999). https://doi.org/10.1103/PhysRevLett.83.1054
15. Brown, K.L., De, S., Kendon, V., Munro, W.J.: Ancilla-based quantum simulation. New J. Phys. 13, 095007 (2011)
16. Brown, K.L., Horsman, C., Kendon, V.M., Munro, W.J.: Layer by layer generation of cluster states. Phys. Rev. A 85, 052305 (2012)
17. Browne, D., Kashefi, E., Perdrix, S.: Computational depth complexity of measurement-based quantum computation. In: van Dam, W., Kendon, V.M., Severini, S. (eds.) TQC 2010. LNCS, vol. 6519, pp. 35–46. Springer (2010)
18. Clark, J.A., Stepney, S., Chivers, H.: Breaking the model: finalisation and a taxonomy of security attacks. ENTCS 137(2), 225–242 (2005)
19. Coecke, B., Kissinger, A.: Picturing Quantum Processes—A First Course in Quantum Theory and Diagrammatic Reasoning. Cambridge University Press, UK (2017). ISBN 9781107104228
20. Cooper, D., Stepney, S., Woodcock, J.: Derivation of Z refinement proof rules: forwards and backwards rules incorporating input/output refinement. Technical Report YCS-2002-347, Department of Computer Science, University of York (2002)
21. Cory, D.G., Laflamme, R., Knill, E., Viola, L., Havel, T.F., Boulant, N., Boutis, G., Fortunato, E., Lloyd, S., Martinez, R., Negrevergne, C., Pravia, M., Sharf, Y., Teklemariam, G., Weinstein, Y.S., Zurek, W.H.: NMR based quantum information processing: achievements and prospects. Fortschritte der Phys. 48(9–11), 875–907 (2000)
22. Cory, D.G., Fahmy, A.F., Havel, T.F.: Ensemble quantum computing by NMR spectroscopy. Proc. Natl. Acad. Sci. 94(5), 1634–1639 (1997). https://doi.org/10.1073/pnas.94.5.1634
23. Dale, M., Miller, J.F., Stepney, S., Trefzer, M.A.: A substrate-independent framework to characterise reservoir computers (2018). CoRR arXiv:1810.07135
24. Feynman, R.P.: Simulating physics with computers. Intern. J. Theoret. Phys. 21(6/7), 467–488 (1982)
25. Gershenfeld, N.A., Chuang, I.L.: Bulk spin-resonance quantum computation. Science 275(5298), 350–356 (1997). https://doi.org/10.1126/science.275.5298.350
26. Graca, D.S.: Some recent developments on Shannon's GPAC. Math. Log. Q. 50(4–5), 473–485 (2004)
27. Horsman, C., Brown, K.L., Munro, W.J., Kendon, V.M.: Reduce, reuse, recycle for robust cluster-state generation. Phys. Rev. A 83(4), 042327 (2011). https://doi.org/10.1103/PhysRevA.83.042327
28. Horsman, C., Stepney, S., Wagner, R.C., Kendon, V.: When does a physical system compute? Proc. R. Soc. A 470(2169), 20140182 (2014)
29. Horsman, D., Kendon, V., Stepney, S., Young, P.: Abstraction and representation in living organisms: when does a biological system compute? In: Dodig-Crnkovic, G., Giovagnoli, R. (eds.) Representation and Reality: Humans, Animals, and Machines, vol. 28, pp. 91–116. Springer (2017)
30. Horsman, D., Kendon, V., Stepney, S.: Abstraction/representation theory and the natural science of computation. In: Cuffaro, M.E., Fletcher, S.C. (eds.) Physical Perspectives on Computation, Computational Perspectives on Physics, pp. 127–149. Cambridge University Press, Cambridge (2018)
31. Horsman, D., Clarke, T., Stepney, S., Kendon, V.: When does a control system compute? The case of the centrifugal governor (2019). In preparation
32. Jones, J.A.: Quantum computing with NMR. Prog. Nucl. Magn. Reson. Spectrosc. 59, 91–120 (2011)
33. Jozsa, R.: An introduction to measurement based quantum computation (2005). arXiv:quant-ph/0508124
34. Kendon, V., Sebald, A., Stepney, S., Bechmann, M., Hines, P., Wagner, R.C.: Heterotic computing. In: Unconventional Computation. LNCS, vol. 6714, pp. 113–124. Springer (2011)
35. Knill, E., Laflamme, R., Milburn, G.J.: A scheme for efficient quantum computation with linear optics. Nature 409(6816), 46–52 (2001). https://doi.org/10.1038/35051009
36. Kuhnert, L., Agladze, K., Krinsky, V.: Image processing using light-sensitive chemical waves. Nature 337, 244–247 (1989)
37. Lloyd, S., Braunstein, S.L.: Quantum computation over continuous variables. Phys. Rev. Lett. 82, 1784 (1999)
38. Mills, J.W.: The nature of the extended analog computer. Phys. D Nonlinear Phenom. 237(9), 1235–1256 (2008). https://doi.org/10.1016/j.physd.2008.03.041
39. Motoike, I.N., Adamatzky, A.: Three-valued logic gates in reaction-diffusion excitable media. Chaos Solitons Fractals 24(1), 107–114 (2005)
40. Raussendorf, R., Briegel, H.J.: A one-way quantum computer. Phys. Rev. Lett. 86, 5188–5191 (2001)
41. Roselló-Merino, M., Bechmann, M., Sebald, A., Stepney, S.: Classical computing in nuclear magnetic resonance. Int. J. Unconv. Comput. 6(3–4), 163–195 (2010)
42. Russell, B., Stepney, S.: The geometry of speed limiting resources in physical models of computation. Int. J. Found. Comput. Sci. 28(04), 321–333 (2017). https://doi.org/10.1142/S0129054117500204
43. Spiller, T.P., Munro, W.J., Barrett, S.D., Kok, P.: An introduction to quantum information processing: applications and realisations. Contemp. Phys. 46, 407 (2005)
44. Stepney, S.: The neglected pillar of material computation. Phys. D Nonlinear Phenom. 237(9), 1157–1164 (2008)
45. Stepney, S., Abramsky, S., Bechmann, M., Gorecki, J., Kendon, V., Naughton, T.J., Perez-Jimenez, M.J., Romero-Campero, F.J., Sebald, A.: Heterotic computing examples with optics, bacteria, and chemicals. In: 11th International Conference on Unconventional Computation and Natural Computation 2012 (UCNC 2012), Orléans, France. Lecture Notes in Computer Science, vol. 7445, pp. 198–209. Springer, Heidelberg (2012)
46. Stepney, S., Kendon, V., Hines, P., Sebald, A.: A framework for heterotic computing. In: 8th Workshop on Quantum Physics and Logic (QPL 2011), Nijmegen, Netherlands. EPTCS, vol. 95, pp. 263–273 (2012)
47. Tóth, Á., Showalter, K.: Logic gates in excitable media. J. Chem. Phys. 103, 2058–2066 (1995)
48. Tucker, R.S.: The role of optics in computing. Nat. Photonics 4, 405 (2010). https://doi.org/10.1038/nphoton.2010.162
49. Woods, D., Naughton, T.J.: Parallel and sequential optical computing. In: Optical SuperComputing. LNCS, vol. 5172, pp. 70–86. Springer (2008)

Visual Analytics Ian T. Nabney

Abstract We are in a data-driven era. Making sense of data is becoming increasingly important, and this is driving the need for systems that enable people to analyze and understand data. Visual analytics presents users with a visual representation of their data so that they make sense of it: reason about it, ask important questions, and interpret the answers to those questions. This paper shows how effective visual analytics systems combine information visualisation (to visually represent data) with machine learning (to project data to a low-dimensional space). The methods are illustrated with two real-world case studies in drug discovery and condition monitoring of helicopter airframes. This paper is not the final word on the subject, but it also points the way to future research directions.

1 Introduction

In a volume of this sort, it seems fitting to start this paper with my memories of working with Susan. We were both at Logica Cambridge, which was the research lab of the software consulting company Logica, absorbed into CGI a number of years ago. My work there focused on neural networks and other forms of rule induction, but my background was in pure mathematics. Susan recruited me to work with a small team developing a verifiably correct compiler for a subset of Pascal for a client. The results of that work are contained in a series of technical reports [12–14]. The heavy lifting on the key ideas had already been developed by Susan in an earlier project for a much smaller language: my role was to take those ideas and specify mathematically the high-level language, the low level language, and all the checks and compilation steps, and then to prove that the transformations were correct. This was no small task: the processor was quite limited (being 8-bit), so even 16-bit integer division required a significant body of code (more than 100 lines from memory) to implement. Any thoughts I had that I might be the ‘rigorous’


mathematician compared to Susan’s ‘empirical’ physicist were soon put aside given the immense care over the detail that Susan took in checking my work. Those hopes (of impressing her) were also dashed by a story that Susan, characteristically, told against herself. Before joining Logica, she had worked for GEC, where her colleagues had designed a new unit of pedantry: the Stepney. Unfortunately, it was useless for practical purposes, because the pedantry of anyone other than Susan could only be measured in micro-Stepneys. I consoled myself with the thought that while my precision was not measurable in a Stepney order-of-magnitude, I at least rated at one milli-Stepney, or possibly two on a good day. Since leaving Logica myself and joining Aston University and more recently the University of Bristol, I haven’t revisited my formal methods work, but have concentrated on machine learning instead. Searching for a topic that would be appropriate for this collection, I decided on writing about ‘Visual Analytics’ which if not ‘nonstandard computation’ does at least provide users with the possibility of reasoning about the results of computation in a different way. The structure of this paper is as follows. The second section defines visual analytics and explains why it is of great relevance today. The third section describes the relevance of machine learning to visual analytics both in core data projection models and further interaction and analysis that they support. These methods are illustrated by two real-world case studies in the fourth section, while the final section presents conclusions and future work.

2 Visual Analytics We are in a data-driven era. Increasingly many domains of our everyday life generate and consume data. People have the potential to understand phenomena in more depth using new data analysis techniques. Additionally, new phenomena can be uncovered in domains where data is becoming available. Thus, making sense of data is becoming increasingly important, and this is driving the need for systems that enable people to analyze and understand data. However, this opportunity to discover also presents challenges. Reasoning about data is becoming more complicated and difficult as data types and complexities increase. People require powerful tools to draw valid conclusions from data, while maintaining trustworthy and interpretable results. Visual analytics (VA) is a multi-disciplinary domain that combines information visualization with machine learning (ML) and other automated techniques to create systems that help people make sense of data [5]. Visual analytics presents people with a visual representation of their data so that they make sense of it: reason about it, ask important questions, and interpret the answers to those questions. Visualisation is an important tool for developing a better understanding of large complex datasets. It is particularly helpful for users who are not specialists in data modelling. Typical tasks that users can carry out include:


Fig. 1 A word cloud for text drawn from Primo Levi’s book ‘If this is a Man’. ‘One must understand Man’ could serve as a motto for the book. Figure produced using wordle.net

• detection of outliers (finding data points that are different from the norm);
• clustering and segmentation (identifying important groups within the data);
• aid to feature selection (identifying a subset of the features that provides the most information);
• feedback on results of analysis (improving the fundamental task performance by seeing what you are doing).

There are two important components to visual analytics: information visualisation and data projection. Information visualisation refers to a wide range of techniques for visually representing data. This data can be of many different types: numeric, discrete, spatial, temporal, textual, image, etc. A word cloud (see Fig. 1) is used to show the relative frequencies of words in a text document: the size of the word in the cloud is proportional to its frequency in the document. It is usual to set a lower bound on the frequency, to remove common stop words (such as ‘the’), and to stem the words (i.e. map all variants of a word to a single term, such as ‘move’ for ‘moving’, ‘moved’, etc.).

The key limitation of information visualisation is that it is limited to data with a small number of variables. For example, a word cloud visualises a single variable (word frequency). Even with a great deal of ingenuity using symbols, colours and three-dimensional plots, it is very hard to provide a really interpretable display of data above, say, five dimensions. And yet, many of the most interesting and difficult challenges we face are inherently multi-dimensional or are described by very high-dimensional data. Clearly, information visualisation alone is not enough to tackle these challenges.

In data projection, the goal is to project higher-dimensional data to a lower-dimensional space (usually 2d or 3d) while preserving as much information or structure as possible. Once the projection is done, standard information visualisation methods can be used to support user interaction. These may need to be modified for particularly large datasets. The quantity and complexity of many datasets means


that simple or classical visualisation methods, such as Principal Component Analysis, are not very effective. The typical result of applying them is that the data is confined to a very dense circular blob in the middle of the plot, which leaves little scope for discovering the deeper relationships between individual points and groupings within the data.
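Returning briefly to the information-visualisation side, the word-frequency computation behind a word cloud such as Fig. 1 can be sketched in a few lines of Python. The stop-word list and the crude suffix-stripping 'stemmer' below are simplified stand-ins for illustration only, not the processing used by wordle.net.

# Minimal word-frequency computation of the kind behind a word cloud (Fig. 1).
# The stop-word list and the crude suffix stripping are simplified assumptions.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "this"}

def stem(word):
    # Very crude suffix stripping: 'moving' and 'moved' both reduce to 'mov'.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

def word_frequencies(text, min_count=1):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(stem(w) for w in words if w not in STOP_WORDS)
    return {w: c for w, c in counts.items() if c >= min_count}

print(word_frequencies("One must understand Man; moving and moved, one understands."))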

3 Machine Learning and Data Projection

3.1 Uncertainty

Doubt is not a pleasant condition, but certainty is absurd. Voltaire¹

¹ In the original French: Le doute n'est pas une état bien agréable, mais l'assurance est un état ridicule.

Real data is uncertain: there is measurement error, missing data (i.e. unrecorded), censored data (values that are clipped when they pass a threshold), etc. We are forced to deal with uncertainty, yet we need to be quantitative: typically this means providing our best prediction with some measure of our uncertainty. The optimal formalism for inference in the presence of uncertainty is probability theory [4] and we assume the presence of an underlying regularity to make predictions. Bayesian inference allows us to reason probabilistically about the model as well as the data [2].

3.2 Linear Models

The simplest way to project data is a linear map from the data space to a lower-dimensional space. To choose between all the different possible linear maps, we need to determine an appropriate criterion to optimise. In this application we want to preserve as much information as possible. If we assume that information is measured by variance, this implies choosing new coordinate axes along directions of maximal data variance; these can be found by analysing the covariance matrix of the data. This method is called Principal Component Analysis (PCA): see [7]. Let S be the covariance matrix of the data, so that

S_{ij} = \frac{1}{N} \sum_{n} (x^n_i - \bar{x}_i)(x^n_j - \bar{x}_j).

The first q principal components are the first q eigenvectors w_j of S, ordered by the size of the eigenvalues λ_j. The percentage of the variance explained by the first q principal components is

\frac{\sum_{j=1}^{q} λ_j}{\sum_{j=1}^{d} λ_j},

where the data dimension is d. These vectors are orthonormal (perpendicular and unit length). The variance when the data is projected onto them is maximal. For large datasets, the end result is usually a circular blob in the middle of the screen. The reason is that we can also view PCA as a generative model. Classical PCA is made into a density model by using a latent variable approach, derived from standard factor analysis, in which the data x is generated by a linear combination of a number of hidden variables z

x = Wz + μ + ε,   (1)

where z has a zero mean, unit isotropic variance, Gaussian distribution N(0, I), μ is a constant (whose maximum likelihood estimator is the data mean), and ε is an x-independent noise process. The fact that the probabilistic component of the model is a single spherical Gaussian explains the uninformative nature of the visualised data. To do better than this, we need to use a more complex generative model.
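A minimal numpy sketch of the PCA projection just described, illustrating the equations above rather than any particular toolkit: centre the data, eigendecompose the covariance matrix S, keep the q leading eigenvectors, and report the fraction of variance they explain. The synthetic data and the choice q = 2 are arbitrary.

# Minimal PCA projection illustrating the equations above: eigen-decompose the
# covariance matrix S, keep the q leading eigenvectors, and project the data.
import numpy as np

def pca_project(X, q=2):
    """X is an (N, d) data matrix; returns (projection, variance fraction)."""
    Xc = X - X.mean(axis=0)                    # subtract the data mean
    S = (Xc.T @ Xc) / X.shape[0]               # covariance matrix S_ij
    eigvals, eigvecs = np.linalg.eigh(S)       # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]          # sort descending
    W = eigvecs[:, order[:q]]                  # first q principal axes w_j
    explained = eigvals[order[:q]].sum() / eigvals.sum()
    return Xc @ W, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))
Z, frac = pca_project(X, q=2)
print(Z.shape, round(frac, 3))    # (500, 2) and the fraction of variance explained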

3.3 Generative Models

The projection approach to visualisation is a way of reducing the data complexity in a direct fashion. An alternative view is to hypothesise how the data might have been generated from a lower-dimensional space.

A hidden connection is stronger than an obvious one. Heraclitus

We consider separately the observed variables (i.e. the data) and latent variables which generate observations. We then use (probabilistic) inference to deduce what is happening in latent variable space from the observations. In such latent models, we often use Bayes' Theorem:

P(L|O) = \frac{P(O|L)\, P(L)}{P(O)},

where L refers to latent variables and O refers to observed variables. A large number of interesting data models can be viewed in this framework, including Hidden Markov Models and State-Space Models for time series data [2]. Here we will focus on the case of static data and the Generative Topographic Mapping [3]. The Generative Topographic Mapping (GTM) is a non-linear probabilistic data visualisation method that is based on a constrained mixture of Gaussians, in which the centres of the Gaussians are constrained to lie on a two-dimensional space.

Fig. 2 The Generative Topographic Mapping: the mapping y(z; W) takes the latent space (z1, z2) to a manifold embedded in the data space (x1, x2, x3)

In the GTM, a D-dimensional data point (x_1, . . . , x_D) is represented by a point in a lower-dimensional latent-variable space t ∈ R^q (with q < D). This is achieved using a forward mapping function x = y(t; W) which is then inverted using Bayes' theorem. This function (which is usually chosen to be a radial-basis function (RBF) network) is parameterised by a network weight matrix W. The image of the latent space under this function defines a q-dimensional manifold in the data space (Fig. 2). To induce a density p(y|W) in the data space, a probability density p(t) is defined on the latent space. Since the data is not expected to lie exactly on the q-dimensional manifold, a spherical Gaussian model with inverse variance β² is added in the data space so that the conditional density of the data is given by

p(x|t, W, β) = \left( \frac{β}{\sqrt{2π}} \right)^{D} \exp\left( -\frac{(β \, \|y(t; W) - x\|)^2}{2} \right).   (2)

To get the density of the data space, the hidden space variables must be integrated out

p(x|W, β) = \int p(x|t, W, β) \, p(t) \, dt.   (3)

In general, this integral would be intractable for a non-linear model y(t; W). Hence p(t) is defined to be a sum of delta functions with centres on nodes t_1, . . . , t_K in the latent space

p(t) = \frac{1}{K} \sum_{i=1}^{K} δ(t - t_i).   (4)

This can be viewed as an approximation to a uniform distribution if the nodes are uniformly spread. Now Eq. (3) can be written as

p(x|W, β) = \frac{1}{K} \sum_{i=1}^{K} p(x|t_i, W, β).   (5)


Fig. 3 Schematic of GTM-FS (GTM with feature saliency). d1 and d2 have high saliency, d3 has low saliency

This is a mixture of K Gaussians with each kernel having a constant mixing coefficient 1/K and inverse variance β². The ith centre is given by y(t_i; W). As these centres are dependent and related by the mapping, it can be viewed as a constrained Gaussian Mixture Model (GMM): see Fig. 3. The model is trained in a maximum likelihood framework using an iterative algorithm (EM). Provided y(t; W) defines a smooth mapping, two points t_1 and t_2 which are close in the latent space are mapped to points y(t_1; W) and y(t_2; W) which are close in the data space.

We can provide more insight into the visualisation in a number of ways. The mapping of the latent space stretches and twists the manifold. This can be measured using methods from differential geometry and plotted as a background intensity: magnification factors measure the stretch of the manifold, and directional curvatures show the magnitude and direction of the main curvature [16]. Other mechanisms are more interactive, and use methods drawn from the field of information visualisation. Parallel coordinates [6] maps d-dimensional data space onto two display dimensions by using d equidistant axes parallel to the y-axis. Each data point is displayed as a piecewise linear graph intersecting each axis at the position corresponding to the data value for that dimension. It is impractical to display this for all the data points, so we allow the user to select a region of interest. The user can also interact with the local parallel coordinates plot to obtain detailed information.

There are a number of extensions to the basic GTM that allow users to visualise a wider range of datasets:
• Temporal dependencies in data handled by GTM through Time.
• Discrete data handled by Latent Trait Model (LTM): in fact, heterogeneous datasets containing both discrete and continuous data can be modelled.
• Missing data can be handled in training and visualisation.
Here we will consider just two extensions: hierarchies and feature selection.
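Before turning to those two extensions, Eq. (5) can be made concrete with a small numpy sketch that evaluates the constrained-mixture density of a toy GTM whose mapping y(t; W) is a fixed RBF network. The grid size, basis centres, widths and weights are arbitrary illustrative choices, not a trained model; in practice W and β would be fitted by EM as described above.

# Toy evaluation of the GTM density of Eq. (5): latent nodes t_i on a regular
# 2-D grid are pushed through a fixed RBF mapping y(t; W) into data space, and
# p(x|W, beta) is the equal-weight mixture of spherical Gaussians centred there.
import numpy as np

rng = np.random.default_rng(1)
D, K_side, M_basis = 3, 10, 9            # data dim, latent grid side, RBF centres

# K = K_side^2 latent nodes t_i on a regular grid in [-1, 1]^2
g = np.linspace(-1, 1, K_side)
T = np.array([(a, b) for a in g for b in g])            # (K, 2)

# Fixed RBF mapping y(t; W) = W phi(t); weights and widths are arbitrary here
centres = rng.uniform(-1, 1, size=(M_basis, 2))
W = rng.normal(scale=0.5, size=(D, M_basis))

def phi(t):
    return np.exp(-np.sum((t - centres) ** 2, axis=1) / 0.5)

Y = np.array([W @ phi(t) for t in T])                   # mixture centres y(t_i; W), (K, D)

def gtm_density(x, beta=2.0):
    # Eq. (2) for each centre, averaged with weight 1/K as in Eq. (5)
    norm_const = (beta / np.sqrt(2 * np.pi)) ** D
    comps = norm_const * np.exp(-0.5 * beta ** 2 * np.sum((Y - x) ** 2, axis=1))
    return comps.mean()

print(gtm_density(np.zeros(D)))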

3.3.1 Hierarchical Models

The GTM assumes that the data lies ‘close’ to a two dimensional manifold; however, this is likely to be too simple a model for many datasets. Hierarchical GTM [15] allows the user to drill down into data with either user-defined or automated (using a minimum message-length MML criterion) selection of sub-model positions. Bishop and Tipping [17] introduced the idea of hierarchical visualisation for probabilistic PCA. A general framework for arbitrary latent variable models has been developed from this. Because GTM is a generative latent variable model, it is possible to train hierarchical mixtures of GTMs. We model and visualise the whole data set with a GTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. Because the data can be visualised at each level of the hierarchy, the selection of clusters, which are used to train GTMs at the next level down, can be carried out interactively by the user.

3.3.2 Feature Selection

Feature selection is harder for unsupervised learning than for supervised learning as there is no obvious criterion to guide the search. Instead of selecting a subset of features, we estimate a set of real-valued (in [0, 1]) variables (one for each feature): feature saliencies. We adopted a minimum message length (MML) penalty for model selection (based on a similar approach for GMMs: [8]). Instead of a mixture of spherical Gaussians, as in standard GTM, we use a mixture of diagonal Gaussians. For each feature d ∈ {1, . . . , D}, we flip a biased coin whose probability of a head is ρ_d; if we get a head, we use the mixture component p(· · · | θ_md) to generate the dth feature; otherwise, the common density q(· · · | λ_d) is used. GTM-FS associates a variation measure with each feature by using a mixture of diagonal-covariance Gaussians. This assumes that the features are conditionally independent. The dth feature is irrelevant if its distribution is independent of the component labels, i.e. if it follows a common density, denoted by q(x_nd | λ_d), which is defined to be a diagonal Gaussian with parameters λ_d. Let Ψ = (ψ_1, . . . , ψ_D) be an ordered set of binary parameters such that ψ_d = 1 if the dth feature is relevant and ψ_d = 0 otherwise. Now the mixture density is

p(x_n | Ψ) = \frac{1}{K} \sum_{k=1}^{K} \prod_{d=1}^{D} [p(x_{nd} | β_{kd})]^{ψ_d} \, [q(x_{nd} | λ_d)]^{(1 - ψ_d)}.   (6)

The value of the feature saliencies is obtained by firstly treating the binary values in the set Ψ as missing variables in the EM algorithm and then defining it by a probability p_d that a particular feature is relevant (ψ_d = 1). Cheminformatics data was analysed in [9] using GTM, GTM-FS and SOM. In GTM and GTM-FS, the separation of data clusters was better, while GTM-FS showed more compact results because the irrelevant features were projected using a different distribution.
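A small sketch of evaluating the relevance-weighted mixture density of Eq. (6) for fixed relevance indicators ψ_d, assuming scipy is available. All parameter values are arbitrary illustrative choices; in GTM-FS the ψ_d are treated as missing data inside EM rather than fixed as here.

# Evaluating the GTM-FS mixture density of Eq. (6) for fixed relevance
# indicators psi_d; all parameter values are arbitrary and for illustration.
import numpy as np
from scipy.stats import norm

K, D = 4, 3
rng = np.random.default_rng(2)
mu = rng.normal(size=(K, D))          # component means (one diagonal Gaussian per k, d)
sd = np.full((K, D), 0.8)             # component standard deviations
mu_common = np.zeros(D)               # common density q(.|lambda_d)
sd_common = np.full(D, 2.0)
psi = np.array([1, 1, 0])             # third feature treated as irrelevant

def gtm_fs_density(x):
    p_kd = norm.pdf(x, loc=mu, scale=sd)                 # (K, D) relevant-feature terms
    q_d = norm.pdf(x, loc=mu_common, scale=sd_common)    # (D,) common-density terms
    per_component = np.prod(p_kd ** psi * q_d ** (1 - psi), axis=1)
    return per_component.mean()                          # 1/K sum_k prod_d ...

print(gtm_fs_density(np.array([0.2, -0.5, 1.0])))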


3.4 Metric-Based Models

The basic aim of metric-based models is to define a mapping from the data space to a visualisation space that preserves inter-point distances, i.e. that distances in the visualisation space are as close as possible to those in the original data space. Given a dissimilarity matrix d_ij (which is usually defined by Euclidean distance in the original data space), the aim is to map data points x_i to points y_i in a feature space such that their dissimilarities in feature space, d̃_ij, are as close as possible to the d_ij. We say that the map preserves similarities. The stress measure is used as the objective function

E = \frac{1}{\sum_{i<j} d_{ij}} \sum_{i<j} \frac{(d_{ij} - \tilde{d}_{ij})^2}{d_{ij}}.

In classical scaling, the distance between the objects is assumed to be Euclidean. A linear projection then corresponds to PCA. The Sammon mapping [11] does not actually define a map: instead it finds a point yi for each point xi to minimise stress. Essentially, this is a lookup table: if there is any change to the dataset the entire lookup table must be learned again from scratch. Neuroscale [18] is a neural network-based scaling technique that has the advantage of actually giving a map (Fig. 4).
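A short numpy sketch of the stress measure defined above, evaluated for an arbitrary candidate projection (here simply the first two coordinates of the data). A Sammon mapping or a Neuroscale network would adjust the projected points, or the network weights, to minimise this quantity; the sketch only evaluates it.

# Sammon stress of a candidate 2-D projection Y of data X, as defined above:
# E = (1 / sum_{i<j} d_ij) * sum_{i<j} (d_ij - d~_ij)^2 / d_ij.
import numpy as np

def pairwise_distances(A):
    diff = A[:, None, :] - A[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def sammon_stress(X, Y):
    d = pairwise_distances(X)          # dissimilarities in data space
    d_tilde = pairwise_distances(Y)    # distances in the projection
    iu = np.triu_indices(len(X), k=1)  # index pairs with i < j
    d, d_tilde = d[iu], d_tilde[iu]
    return np.sum((d - d_tilde) ** 2 / d) / np.sum(d)

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
Y = X[:, :2]                            # naive projection onto two coordinates
print(round(sammon_stress(X, Y), 4))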

Fig. 4 Neuroscale network architecture


3.5 Evaluation

There is a need to compare the ‘quality’ of different visualisations. This matters to algorithm developers to demonstrate the value (or lack of it) of new techniques. It also matters to visual analytics practitioners since they are likely to generate multiple visualisations (parameter settings, different visualisation methods), sometimes in the thousands, and need to choose between them or guide the process of visualisation. This is a challenging task: see [1]. Dimensionality reduction is an inherently unsupervised task: there is no ground truth or gold standard and so there is no single globally correct definition of what quality means. Instead, there is a wide variety of dimensionality reduction methods with different assumptions and modelling approaches. In practice, visualisation is used both for specific tasks and for data-driven hypothesis formation. Any quality metrics should measure what the user requires the visualization for:

• accurate representation of data point relationships (local/global);
• good class separation;
• identification of outliers;
• reduction of noise;
• ‘understanding’ data—perception of data characteristics;
• choosing how to represent data.

One way to approach this task is to use user-based evaluation, either evaluating user performance at a task based on visualisation or evaluating the user experience of the visualisation process. The key challenge is to gather enough meaningful data to make sound judgements. Humans are not good at quantifying what they see (e.g. is one plot more structured than another?). Thus, given this inherent unreliability of user judgement, arriving at a reliable metric may require a very large number of users. As a result, most studies are very under-powered (in the statistical sense). For example, a user study in [10] that compared two information visualisation methods on three different tasks had just five subjects, all of whom were graduate students. Where user trials can be valuable is in providing richer qualitative data (e.g. reaction cards, choosing cards/words to reflect UX).

A more fruitful approach is to use metric-based evaluation. This can be of three types: model-based; unsupervised learning metrics; and task-based metrics. Most models have an associated cost function. In the case of Neuroscale and multidimensional scaling, this is stress; for PCA it is variance; for GTM and other latent-variable models, it is the log likelihood of a dataset. But some models don't have a cost function (e.g. the self-organising map, or SOM); cost functions for different models are incompatible; and we need some form of regularisation to compare different architectures and parameter spaces. So, model-based metrics are only useful when comparing a relatively narrow set of possible models.

Stress can always be calculated for a data projection, but it is questionable if it is always relevant. Instead, we can argue that what is of most relevance to users is that


local neighbourhoods are preserved in the projection and that larger inter-point distances do not have to be preserved so exactly. This leads to the definition of metrics that take account of local neighbourhood preservation. Two exemplar visualisation quality measures based on comparing neighbourhoods in the data space X and projection space Z are trustworthiness and continuity [19]. A mapping is said to be trustworthy if the k-neighbourhood in the visualised space matches that in the data space; it maintains continuity if the k-neighbourhood in the data space matches that in the visualised space. For measuring the trustworthiness, we take r^X_{i,j} to be the rank of the jth data point from the corresponding ith data point with respect to the distance measure in the high-dimensional data space X, and P_k(i) to be the set of data points in the k-nearest neighbourhood of the ith data point in the latent space Z but not in the data space X. Trustworthiness with k neighbours can be calculated as

1 - \frac{2}{γ_k} \sum_{i=1}^{N} \sum_{j \in P_k(i)} (r^X_{i,j} - k).   (7)

For measuring the continuity, we take r^Z_{i,j} to be the rank of the jth data point from the ith data point with respect to the distance measure in the visualisation space Z, and Q_k(i) to be the set of data points in the k-nearest neighbourhood of the ith data point in the data space X but not in the visualisation space Z. The continuity with k neighbours can be calculated as

1 - \frac{2}{γ_k} \sum_{i=1}^{N} \sum_{j \in Q_k(i)} (r^Z_{i,j} - k).   (8)

Both for trustworthiness and continuity, we take the normalising factor γ_k as

γ_k = \begin{cases} Nk(2N - 3k - 1) & \text{if } k < N/2, \\ N(N - k)(N - k - 1) & \text{if } k \geq N/2, \end{cases}   (9)

where the definition of γ_k ensures that the values of trustworthiness and continuity lie between 0 and 1. The higher the measure, the better the visualisation, as this implies that local neighbourhoods are better preserved by the projection. The third approach to quality measurement is to use objective metrics based on a user task, but without the use of user trials or subjective experiments. These metrics often work best in a semi-supervised way: with additional class information that is not included in the visualisation but can be used to define a meaningful task. For example, if the user wants the visualization to preserve class information, we can use the nearest-neighbour classification error (in visualization space) normalised by its value in data space so that we are comparing performance in the original and projection spaces. If the user wants the visualization to be informative about class separation, we need a measure of class separation in the visualisation space. To measure class


separation, we can fit a Gaussian Mixture Model to each class in visualisation space. A variational Bayesian GMM can be used to automate the optimisation of model complexity (i.e. number of Gaussian components). We then compute the Kullback–Leibler divergence between the GMMs of all possible class pairs as a measure of overall class separation.

D_{KL}(P \| Q) = \sum_i P(i) \log \frac{P(i)}{Q(i)}.   (10)

This divergence is not symmetric, so it is usual to compute a symmetric metric D_{KL}(P \| Q) + D_{KL}(Q \| P). The larger the value of this metric, the better the class separation in the visualisation, and hence, the better the visualisation is for this task.
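The quality metrics of this section are straightforward to compute directly from pairwise distances. The following numpy sketch implements trustworthiness and continuity as defined in Eqs. (7)–(9), using brute-force rank computations that are adequate for small illustrative datasets; the example projection is simply the first two coordinates of synthetic data.

# Trustworthiness and continuity, Eqs. (7)-(9), computed by brute force from
# pairwise distance ranks in the data space X and the projection Z.
import numpy as np

def _ranks(A):
    # ranks[i, j] = rank of point j among the neighbours of point i (1 = nearest)
    d = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbour
    order = np.argsort(d, axis=1)
    ranks = np.empty_like(order)
    rows = np.arange(len(A))[:, None]
    ranks[rows, order] = np.arange(1, len(A) + 1)
    return ranks

def _gamma(N, k):
    # Normalising factor of Eq. (9)
    return N * k * (2 * N - 3 * k - 1) if k < N / 2 else N * (N - k) * (N - k - 1)

def _measure(rank_src, rank_other, k):
    # Generic form of Eqs. (7) and (8): penalise points that enter the
    # k-neighbourhood in one space but not in the other.
    N = len(rank_src)
    penalty = 0
    for i in range(N):
        escaped = (rank_other[i] <= k) & (rank_src[i] > k)
        penalty += np.sum(rank_src[i][escaped] - k)
    return 1 - 2 * penalty / _gamma(N, k)

def trustworthiness(X, Z, k=5):
    return _measure(_ranks(X), _ranks(Z), k)

def continuity(X, Z, k=5):
    return _measure(_ranks(Z), _ranks(X), k)

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 8))
Z = X[:, :2]                               # a crude 2-D projection
print(trustworthiness(X, Z), continuity(X, Z))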

4 Case Studies

4.1 Chemometrics

In this case study, we show how an interactive visualisation tool helped scientists at Pfizer to evaluate the results of large assays in the search for new targets and drugs. With the introduction of high-throughput screening techniques based on robotics, it is possible to screen millions of compounds against biological targets in a short period of time. It is thus important for scientists to gain a better understanding of the results of multiple screens through the use of novel data visualisation and modelling techniques. The initial task is to find clusters of similar compounds (measured in terms of biological activity) and to use a representative subset to reduce the number of compounds in a screen. Once these clusters have been identified, the goal is to build local in silico prediction models.

Fig. 5 GTM plot of high-throughput screening dataset. Parallel coordinate plots show how the characteristics of compounds vary in different parts of the dataset


We have taken data from Pfizer which consists of 6912 14-dimensional vectors representing chemical compounds using topological indices developed at Pfizer. The task is to predict lipophilicity. This is a component of Lipinski's ‘Rule of 5’, a rule of thumb to predict drug-likeness. The most commonly used measure of lipophilicity is LogP, which is the partition coefficient of a molecule between aqueous and lipophilic phases, usually octanol and water. Plots (see Figs. 5, 6, and 7) segment the data, which can be used to build local predictive models which are often more accurate than global models. Note that we used only 14 inputs, compared with c. 1000 for other methods of predicting logP, while the accuracy of our results was comparable to that of global models.

Fig. 6 Hierarchical GTM plot of high-throughput screening dataset. Plots at lower levels show structure that is obscured in the very dense top-level plot

Fig. 7 a GTM visualisation of chemometric data. b GTM-FS visualisation of chemometric data


4.2 Condition Monitoring

The main objective of this project for Agusta Westland (now Leonardo) is to enhance the HUMS for helicopter airframes by analysing signals related to structural vibration. Vibration information during flight is provided by sensors located at different parts of the aircraft. Before structural health can be inferred, features (i.e. sensors and frequency bands) must be chosen which provide the best information on the state of the aircraft. These selected features are then used to infer the flight modes and eventually the health and deviations from the normal state of the aircraft. The purpose of this case study is to show how flight condition can be inferred accurately from the vibration data and this information can be used to detect abnormalities in the vibration signature.

The data provided by AgustaWestland Ltd. is continuously recorded vibration signals from 8 different sensors during flight. Each sensor measures the vibration in a particular direction at a chosen location on the aircraft. During test flights, the aircraft carries out certain planned manoeuvres and we use the knowledge of these manoeuvres to help build the models on the labelled data. As opposed to fixed-wing aircraft, rotorcraft undergo more distinct flight states, such as steep approach, normal approach, hover, forward flight, etc.

Our approach is to build models using features that capture (non-stationary) frequency information by applying a short-time Fourier transform. In this way, it is possible to detect certain signatures or intensities at fundamental frequencies and their higher harmonics. Many of the key frequencies are related to the period of either the main or tail rotor. The intensity at these frequencies is greater during certain periods of time and these periods can be associated with flight conditions and transition periods. The frequency resolution we selected yields around 100 features (frequency bands) for each signal. If we were to use all the features from all the sensors together this would give a total of 800 features, which is too high-dimensional for practical modelling and inference. To reduce the dimensionality, we have used a data-driven procedure to select the sensors which provide the most relevant information about the flight conditions and select frequencies which are most relevant to our analysis of airframe condition. This process is underpinned by data visualisation in order to explore the dataset and understand the feature selection process better (particularly since that process is of necessity unsupervised).

To understand the flight data better, we used GTM and GTM-FS to visualise data from individual sensors and sensor-pairs. To confirm which data set has a better separation between classes, a Kullback–Leibler (KL) divergence (Eq. (10)) is calculated for the visualisation plots. The calculated value from the KL function indicates how much the classes in each visualisation plot are separated. The higher the resulting number, the more separated the individual classes are from each other. To compute the KL-divergence, the probability density of each class is estimated by fitting a Gaussian Mixture Model in the 2D visualisation space as shown in Fig. 8. After analysing the data with different visualisation methods, it can be concluded that the GTM visualisation showed clear transitions between classes. However, the


Fig. 8 Helicopter airframe vibration data: GTM-FS visualisation of sensors 1 and 6. Note the ellipses representing the Gaussian components of the Gaussian Mixture Model fitted to each class

data points in a class were not compactly clustered. Multiple signals with multiple flight-condition transitions were intended to be used with GTM-FS and its log version to find the relevant features. It was found that the data with the fewest irrelevant features (noise) showed better separation, first visually and then when evaluated with the KL-divergence. So, to obtain better results, the feature set should include the maximum number of relevant features with high feature saliency.
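The feature-extraction step described in this case study can be sketched as follows, assuming scipy is available. The sampling rate, window length and number of frequency bands are placeholder values chosen for illustration, not those used for the AgustaWestland data, and the synthetic sine-plus-noise signal stands in for a real vibration sensor.

# Sketch of short-time Fourier feature extraction for a vibration signal:
# compute an STFT per sensor and keep band-averaged intensities as features.
# Sampling rate, window length and band count are placeholder values only.
import numpy as np
from scipy.signal import stft

fs = 1024                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(5)
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 17 * t) + 0.3 * rng.normal(size=t.size)   # stand-in sensor signal

f, times, Zxx = stft(signal, fs=fs, nperseg=256)    # spectrogram: (freqs, windows)
power = np.abs(Zxx) ** 2

# Band-average the power into ~100 frequency-band features per time window
n_bands = 100
band_edges = np.linspace(0, len(f), n_bands + 1, dtype=int)
features = np.array([power[band_edges[b]:band_edges[b + 1]].mean(axis=0)
                     for b in range(n_bands)]).T      # (windows, n_bands)
print(features.shape)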

5 Conclusions

We need to understand the vast quantities of data that surround us; visualisation and machine learning can help us in that task. In this way, models can be used to uncover the hidden meanings of data. Visual analytics is a powerful tool that provides insight to non-specialists: the two case studies demonstrated how complex datasets could be interpreted better by domain experts with the use of visualisation. It is clear that visual analytics enables human users to understand complex multivariate data, but that this is a multi-skilled, collaborative effort. Future challenges:

• Visualisation is currently not as widely used as it might be because it requires significant expertise to set up and train models of the type discussed in this paper. One way to mitigate this is to use Bayesian methods to (semi-)automate model building and metrics to provide feedback on visualisation fidelity.
• Visualisation will provide richer insights when data projection and expert domain knowledge are better integrated.


• On a more practical level, it is important to treat data visualisation as a true component in data analysis. To that end, we need to develop systems that record users' analytical pathways for sharing, reproduction and audit.
• Visualisation is often an early stage in the data analysis pipeline. As such, automated and intelligent data cleansing and semantic annotation need to be combined with data projection.

References

1. Bertini, E., Tatu, A., Keim, D.A.: Quality metrics in high-dimensional data visualization: an overview and systematization. IEEE Trans. Vis. Comput. Graph. 17, 2203–2212 (2011)
2. Bishop, C.M.: Pattern Recognition and Machine Learning. Springer, Berlin (2006)
3. Bishop, C.M., Svensén, M., Williams, C.K.I.: GTM: the generative topographic mapping. Neural Comput. 10(1), 215–235 (1996)
4. Cox, R.T.: Probability, frequency and reasonable expectation. Am. J. Phys. 14(1), 1–13 (1946)
5. Endert, A., Ribarsky, W., Turkay, C., Wong, B.L.W., Nabney, I.T., Blanco, I.D., Rossi, F.: The state of the art in integrating machine learning into visual analytics. Comput. Graph. Forum 36(8), 458–486 (2017)
6. Inselberg, A.: The plane with parallel coordinates. Vis. Comput. 1(2), 69–91 (1985)
7. Jolliffe, I.T.: Principal Component Analysis. Springer, New York (1986)
8. Law, M.H.C., Figueiredo, M.A.T., Jain, A.K.: Simultaneous feature selection and clustering using mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 26(9), 1154–1166 (2004)
9. Maniyar, D.M., Nabney, I.T.: Data visualization with simultaneous feature selection. In: IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology, pp. 1–8 (2006)
10. Pillat, R.M., Valiati, E.R.A., Freitas, C.M.D.S.: Experimental study on evaluation of multidimensional information visualization techniques. In: Proceedings of the 2005 Latin American Conference on Human-Computer Interaction, pp. 20–30. ACM (2005)
11. Sammon, J.W.: A nonlinear mapping for data structure analysis. IEEE Trans. Comput. 18(5), 401–409 (1969)
12. Stepney, S., Nabney, I.T.: The DeCCo project papers I: Z specification of Pasp. Technical report YCS-358, University of York (2003)
13. Stepney, S., Nabney, I.T.: The DeCCo project papers II: Z specification of Asp. Technical report YCS-359, University of York (2003)
14. Stepney, S., Nabney, I.T.: The DeCCo project papers III: Z specification of compiler templates. Technical report YCS-360, University of York (2003)
15. Tino, P., Nabney, I.T.: Hierarchical GTM: constructing localized nonlinear projection manifolds in a principled way. IEEE Trans. Pattern Anal. Mach. Intell. 24(5), 639–656 (2002)
16. Tino, P., Nabney, I.T., Sun, Y.: Using directional curvatures to visualize folding patterns of the GTM projection manifolds. In: International Conference on Artificial Neural Networks, pp. 421–428. Springer (2001)
17. Tipping, M.E., Bishop, C.M.: Mixtures of principal component analysers. In: Proceedings of the International Conference on Artificial Neural Networks, vol. 440, pp. 13–18. IEE (1997)
18. Tipping, M.E., Lowe, D.: Shadow targets: a novel algorithm for topographic projections by radial basis functions. In: Proceedings of the International Conference on Artificial Neural Networks, vol. 440, pp. 7–12. IEE (1997)
19. Venna, J., Kaski, S.: Neighborhood preservation in nonlinear projection methods: an experimental study. In: Proceedings of the International Conference on Artificial Neural Networks, pp. 485–491 (2001)

Playing with Patterns Fiona A. C. Polack

Abstract Susan Stepney has created novel research in areas as diverse as formal software modelling and evolutionary computing. One theme that spans almost her whole career is the use of patterns to capture and express solutions to software engineering problems. This paper considers two extremes, both in time and topic: patterns for formal modelling languages, and patterns related to the principled modelling and simulation of complex systems.

1 Introduction

Software engineering uses patterns to express possible solutions to common problems. Patterns have been widely used both to capture expertise and to explore generic solutions to problems: in engineered emergence, for instance, it is often noted that if we could capture the patterns of multi-scale behaviour that results in desired emergent behaviours, we might be able to develop patterns that could be instantiated to efficiently engineer reliable emergent systems [57]. This paper summarises the origins of patterns (Sect. 2), and explores two areas of pattern research representing different approaches to pattern creation. Stepney's work on formal modelling, and subsequent research on formal language patterns, aimed to support practitioners and those tasked with making formal models accessible; this work is reviewed in Sect. 3. Subsequently, the focus of Prof. Stepney's work shifted to complex systems. An engineering interest in the simple algorithms that give rise to complex emergent behaviours (flocking, cellular automata, L-systems and the like) ultimately led to the CoSMoS project and patterns for simulation and modelling of complex systems, introduced in Sect. 4. The final sections focus on common features of patterns and potential directions for research, as well as reflecting specifically on Stepney's contribution.


2 Patterns

Patterns, like methods, capture expertise in a way that is accessible to less-expert or less-experienced practitioners. Patterns were developed by the architect, Christopher Alexander, to demonstrate his approach to creating buildings and townscapes [11]. Many other disciplines have subsequently adopted the concept of patterns, and successful patterns have entered the language of discourse of these disciplines. Computing-related pattern research (e.g. human computer interaction, software design, safety-critical software engineering, systems of systems and other work on large or complex computer systems) is responsible for more direct citation of Alexander's original pattern works than any other discipline [61]. At the time that his ideas were gaining popularity in software engineering, Prof. Alexander found himself being invited to address high-profile computing conferences. Coplien [10], in his introduction to a journal reprint of Alexander's address to the 1996 OOPSLA conference, states:

  Focusing on objects had caused us to lose the system perspective. Preoccupation with design method had caused us to lose the human perspective. The curious parallels between Alexander's world of buildings and our world of software construction helped the ideas to take root and thrive in grassroots programming communities worldwide. The pattern discipline has become one of the most widely applied and important ideas of the past decade in software architecture and design.¹

Coplien [10] goes on to identify Alexander's influence as one of the three underpinnings of patterns in computing—the others being the PLoP (Pattern Languages of Programming) conference series² and the pattern book of Gamma et al. [26]. Alexander's work is, of course, a key influence on both PLoP and the Gamma et al. book. However, Alexander was always sceptical about computing patterns, and of the implicit view that patterns “can make a program better” [10]. Alexander stresses the process of design [9, 11], whereas most computing-related patterns aim to provide an instantiatable solution to a problem—not a process. Alexander made his scepticism about computing's use of patterns abundantly clear when he spent a year as an honorary associate of the EPSRC CoSMoS project, 2007–2011 (see Sect. 4). A decade after his 1996 address to OOPSLA, he was still concerned that patterns—even computing-related patterns—needed to express things that made people's lives better. Alexander had a lifelong, deep but intuitive, understanding of the complexity of social and architectural environments, and was focused on the search for an objective way to express and understand the essential properties of positive spaces—that is, process patterns for making the built environment a more positive space and a reflection of a natural environment. It is unfortunate that, although patterns in computing contexts owe so much to Alexander, there is little to offer Alexander in return.

¹ Note that the YouTube recording of the 1996 talk, https://www.youtube.com/watch?v=98LdFA-_zfA, is slightly different from the journal reprint [10].
² http://hillside.net/plop/archive.html


2.1 Developing and Evaluating Computing-Related Patterns

Alexander identifies two ways in which patterns can be discovered [9]. The first starts from a specific problem or class of problems, and proposes a solution that can be generalised as a pattern. The second, which depends on established practice and experience, considers the commonalities in the ways that a problem is addressed in practice, and extracts the characteristics of best practice into patterns. The earliest published computer patterns are of the second type: for example, Gamma et al.'s 1994 book [26] presents software engineering patterns derived from observation of good solutions to common problems. Software engineering also introduced antipatterns (also called bad smells) that capture the common features of poor solutions to common problems, again based on experience and observation of practice [19, 25]. The latter, in particular, are widely discussed in the programming literature and even in educating programmers, and have motivated work on refactoring and code evolution.

The patterns-from-practice approach can be applied as a systematic process (rather than an intuitive retrospective) to derive patterns for a new domain, as demonstrated by Wania and Atwood [62]. From a systematic review of the ways in which 30 existing information retrieval (IR) systems solved six aspects of an interaction problem, Wania and Atwood derive a pattern language of 39 novel IR patterns [62]. Interestingly, further systematic research by Wania and Atwood [62] finds that many software products that are generally considered to have good quality attributes share common solutions—things that might be expressed as patterns—whereas poorer-quality software development tends not to share these quality-enhancing characteristics. It seems that the patterns-from-practice approach is embedded in good-quality software engineering, even where authors have not captured a software engineering pattern. This is good, since other work (some by the same authors) has shown that computer pattern work focuses on the development of pattern languages and libraries but pays very little attention to providing objective evidence that the identified patterns make any difference to quality or productivity [21, 23, 61].

We can speculate about what an objective evaluation of a pattern catalogue—or even a single pattern—might look like. It might, for instance, address and measure usage criteria (is it commonly used to solve the associated problem, do users find it easy to understand and instantiate, is it flexible to the contexts in which the problem arises?). However, a subjective measure of the success of a pattern is the extent to which the pattern's name has entered the language of the domain. For example, even undergraduate students discuss their programming projects in terms of model, view, controller (MVC) and factory patterns; these are now so fundamental in OO and user-facing programming contexts that they are inherent rather than being seen as patterns to be applied. Interestingly, MVC pre-dates the formal introduction of patterns in computing: it appears to have been devised in the 1970s, as a solution architecture for systems with a user interface, and is described as a "user interface paradigm" in 1988 [33]. The factory pattern is a creational pattern from Gamma et al. [26] that addresses a very common problem in OO programming; its description is: "Define an interface for creating an object, but let subclasses decide which class to instantiate" [26].

Pattern discovery can cheat the popularity metric, by adopting a name from the language of the domain—this is true of many of the formal patterns in Sect. 3. However, with these exceptions, most of the patterns considered here have not made it into the language of discourse, but they do capture an aspect of expertise or experience that the pattern's discoverers felt to be worth preserving. The pattern languages discussed in Sects. 3 and 4 include patterns-from-practice and patterns that are created as proposed solutions to a specific problem (Alexander's first approach to pattern discovery, above).
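To make the factory description quoted above concrete, here is a minimal sketch in Python; the shape-drawing domain and every class name below are invented for illustration and are not taken from Gamma et al. [26].

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def draw(self) -> str: ...

class Circle(Shape):
    def draw(self) -> str:
        return "circle"

class Square(Shape):
    def draw(self) -> str:
        return "square"

class ShapeFactory(ABC):
    """The factory interface: creation is declared here, but each
    subclass decides which concrete Shape to instantiate."""
    @abstractmethod
    def make_shape(self) -> Shape: ...

class CircleFactory(ShapeFactory):
    def make_shape(self) -> Shape:
        return Circle()

class SquareFactory(ShapeFactory):
    def make_shape(self) -> Shape:
        return Square()

def render(factory: ShapeFactory) -> str:
    # Client code depends only on the factory and product interfaces,
    # never on the concrete classes being created.
    return factory.make_shape().draw()

print(render(CircleFactory()), render(SquareFactory()))
```

The point of the pattern is visible in render(): the client never names a concrete class, so new products can be added by adding a factory subclass rather than by editing client code.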

3 Patterns for Formal Modelling

In the late twentieth century, formal methods such as Z [31], B [1] and VDM [16] were seen as a major factor in future software reliability. Notations underpinned by sound mathematical theory allowed precise software specification and proof of properties at the specification stage. Formal refinement [36] allowed the systematic formal derivation of code from formal specifications, such that proved properties could be shown to be preserved—though it had long been realised that refinement could only preserve functional properties; properties such as safety and security (and, indeed, emergent properties [40]) are not guaranteed to be preserved by systematic formal refinement, not least because the refinement says nothing about the ability of the implementation platform to preserve non-functional properties in the design. There are some programming languages that correspond so closely to formal modelling languages that programs can be correct-by-construction if appropriate coding memes (essentially patterns) are used correctly—for instance, liveness properties proved using the formal language CSP [29] can be implemented directly in the occam-π language [64].

Formal methods are often stated to be "difficult". In reality, once a practitioner has grasped the basic principles of defining updates on mathematical sets and relations, and can read the notations, the act of writing a specification is not hard (it is certainly easier than the process of working out what the specification should have in it in the first place). It is not so much their inherent difficulty which affects use, but the lack of alignment with industrial software engineering methods [17, 28, 34]. However, like coding, there are good ways and less good ways to construct and document a formal model. Again, like coding, there are ways to achieve particular goals that are specific to one formalism. Crucially, some ways of writing a specification suit particular purposes better than others—for instance, a specification that is written for readability is unlikely to be in a format that facilitates (semi-)automated proof of properties or formal refinement.

For Stepney et al. [55], the purpose and motivation for writing a catalogue of Z patterns was to capture good practice and presentation, and to support use of Z-specific processes. Later, some of the Z patterns and the conceptual-level ideas introduced via Z patterns were generalised to other formal languages [56].

The Z patterns build on best practice developed by Logica UK's Formal Methods Team (LFM), of which Prof. Stepney had been a member. In the 1990s, LFM was a key player in the industrial use of formal specification and proof, and published the book Z in Practice [15]. In LFM, Prof. Stepney worked extensively on large-scale industrial specification and proof. Published examples include the DeCCo compiler for a high-integrity system [52], and the Mondex electronic purse [51, 69]. These are significant industrial achievements, which required not a little formal methods research. The formal modelling and proof underpinning Mondex resulted in its being awarded ITSEC level E6 security clearance [49, 51, 69]; until this point, the formalism requirements of level E6 had been widely considered unattainable for software-based systems.

Although not explicitly presenting Z patterns, the LFM book [15] presents good practice in a clear and applicable manner. A decade later, our Z patterns use the LFM house style throughout. We also capture the LFM house style in patterns as, "some simple Patterns . . . to aid the original specification process, and to help with maintenance" [55]:

• Comment the intention
• Format to expose structure
• Provide navigation
• Name consistently.

We also introduce anti-patterns that capture our experience of bad presentation, for instance:

• Overmeaningful name
• Overlong name.3

3 In line with usage in the Z Patterns report [55], we use a sans-serif font for pattern names and an italicised font for anti-pattern names.

Like all good (i.e. clear, usable) patterns, the names of these patterns are almost entirely self-explanatory—even Provide navigation is an obvious name to someone using Z (it proposes that, where a Z schema includes—references and incorporates—other schemas, the accompanying text should make clear what is used, rather than leaving the reader to search a potentially large formal text for the specification of the included schemas). As noted above, many of the patterns are named with the familiar Z names of structures or activities (Delta/Xi, promotion, etc.).

In writing the Z patterns [53–55], we identified a range of possible uses for patterns. The LFM house style patterns are examples of presentation patterns. Four other uses of pattern are documented: idioms, structure patterns, architecture patterns and development patterns.

• Idioms are stylistic patterns and anti-patterns that relate to common challenges encountered in expressing a model in Z, such as Represent a 1:many mapping, Use free types for unions, and Overloaded numbers.

• Structure patterns aim to address the problem of structuring a long or complicated Z specification, and encapsulate good practice in modularisation. This includes patterns to address readability, such as Name meaningful chunks and Name predicates; ways to model specific structural elements, such as Modelling optional elements or Boolean flag; and stylistic patterns such as Use generics to control detail. The anti-pattern Fortran warns against writing code in Z rather than exploiting the abstraction and power of the language.

• Architectural patterns express paradigmatic ways in which Z notation can be used, aiming to facilitate ways of writing Z that are often unfamiliar to mainstream Z users. The patterns include Object orientation and Algebraic style, whilst the anti-pattern Unsuitable Delta/Xi pattern captures the fact that the mainstream Z approach, of specifying a state and then specifying predicates that define formal update and read operations on that state (see e.g. [15, 46]), is unsuitable for some uses of Z. Examples of (appropriate use of) the algebraic style can be found in the definitions that underpin Z: for instance, Stepney et al.'s Z definitions and laws [59], or Woodcock and Loomes' theory of natural numbers [68, Chap. 11].

• Development patterns are stylistic and usage patterns that can be applied in working with Z, such as Use integrated methods. They include patterns such as Do a refinement, Do sanity checks and Express implicit properties.

The architectural and development patterns come close to Alexander's conception of process patterns, in that they guide a user in creating or working with an appropriate (in aesthetic and architectural terms) Z model. The Z catalogue takes the support for the development process further in generative patterns: collections of patterns that capture what needs to be done to achieve or manage a particular form of Z model. Examples include collections of patterns relating to the creation of a good-quality Delta/Xi (state-and-operations) specification, and a collection which can be applied to create a Z Promotion, in which local operations are defined on a local state and then "promoted" to global operations on a global state. A set of patterns to support a full formal refinement was also presented [54].

In presenting the Z Pattern Catalogue, the original intention had been to extend the catalogue over time as formality became embedded in software engineering. To support this aim, the catalogue includes a pattern template. To make our Z patterns accessible to a wider audience, we also defined a diagrammatic language—a domain-specific language (DSL) in modern parlance. Diagrams with clearly-defined meanings were used to illustrate the structure and behaviour of pattern components. Later, the DSL was elaborated and applied to other formal languages [56].

The Z Pattern Catalogue and associated papers were well received at the time of writing, but Z patterns did not enter the language of discourse (except where we named patterns with terms from the existing language of discourse) and did not become the major resource for formalists that we had hoped.4 In retrospect, we see that our work on making Z more accessible came at the start of the decline in the belief that formal methods would be the underpinning of all software engineering; indeed, in the twenty-first century, the predominant approach to software specification and design uses model-driven engineering, and metamodelling approaches to language definition (e.g. [18]).

4 In 2018, the various papers had only some 30 citations. However, to put that in context, the LFM book [15] has only 142 citations.

4 Patterns for Modelling Complex Systems: CoSMoS

The Z Pattern Catalogue comprises patterns-from-practice, building on many years of experience using the Z language in practice and as part of a software development process. Even the seemingly-esoteric definitions and laws of Z [59] are motivated by practical requirements for fit-for-purpose formal specification tool-support, and by the need to demonstrate the internal consistency of ISO standard Z [31].

In the last decade, Prof. Stepney's focus has shifted to complex systems, combining interests in complexity and software engineering. Stepney led the development of CoSMoS, a principled approach to the modelling and simulation of complex systems, targeted at research and engineering uses of simulation.5 Simulations developed following the CoSMoS approach and principles include, for instance, [2, 27, 37, 47, 48, 66]. The CoSMoS process is summarised as a life-cycle, phases and products (see Fig. 1); the high-level problem is thus how to support the activities that are needed in each phase. CoSMoS patterns represent ways to address the sub-problems identified in decomposing the top-level problem of engineering a demonstrably fit-for-purpose simulation [58] (an example of Alexander's first pattern-discovery approach).

The top-level CoSMoS process can be summarised in three patterns [50, 58]:

• carry out the Discovery Phase,
• carry out the Development Phase,
• carry out the Exploration Phase.

Further patterns are identified as the phases are decomposed. For the Discovery Phase, for instance, the next level of patterns are [50, 58]:

• identify the Research Context,
• define the Domain,
• build a Domain Model,
• Argue Appropriate Instrument Designed.

Focusing on the pattern Identify the Research Context, Stepney [50] uses the Intent, Context and Discussion sections of the pattern to convey the role that the research context plays in a CoSMoS development (reproduced in Table 1).

5 The CoSMoS project, 2007–11, was led by Susan Stepney (York: EPSRC EP/E053505) and Peter Welch (Kent: EPSRC EP/E049419), along with researchers from York, Kent, the University of the West of England and Abertay. The outputs of CoSMoS can be found online at https://www.cosmosresearch.org/about.html. Background and motivation for CoSMoS can be found in [13, 42–45, 58].

Fig. 1 Phases (ellipses) and products (boxes) of the CoSMoS process [13, 50, 58]. The CoSMoS lifecycle starts with identification of the domain (not shown), and proceeds through development of the domain and platform models, the simulation platform and the results model. Information collected en-route is summarised in the research context, which is used to map between simulation results and domain observables

Notice that further patterns are identified in the pattern's Discussion section: whenever a variant of a problem is identified and a potential solution found, whether from scratch or from the wider software engineering experience of the CoSMoS team, a pattern captures the solution. Again, antipatterns identify some of the things to avoid in defining the research context [50, 58]; for example, an ever-expanding scope or scale is to be avoided, an antipattern referred to as Everything but the kitchen sink.

To further develop a research context, we decompose the problem again, and seek to capture more potential solutions as patterns. The following list of problems to be solved shows where patterns have been documented [50, 58]:

• Document the research goals
• Document Assumptions relevant to the research context
• Identify the team members, including the Domain Expert, the Domain Modeller, and the Simulation Implementor, their roles, and experience
• Agree the Simulation Purpose, including criticality and impact
• Note the available resources, timescales, and other constraints
• Determine success criteria
• Revisit between phases, and at discovery points; if necessary, change the context, and Propagate Changes.
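As a purely illustrative sketch (not part of the CoSMoS pattern language or its tooling), the kind of information this list asks a team to record could be held in a simple structured form; all field names and example values below are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResearchContext:
    """Hypothetical record of the items listed above; not CoSMoS tooling."""
    research_goals: List[str]
    assumptions: List[str]
    team_roles: dict                       # e.g. {"Domain Expert": "host laboratory"}
    simulation_purpose: str
    criticality: str                       # e.g. "low", "medium", "high"
    resources_and_constraints: List[str]
    success_criteria: List[str]
    change_log: List[str] = field(default_factory=list)  # revisit points, propagated changes

context = ResearchContext(
    research_goals=["Explore how cell-level interactions produce tissue-level patterning"],
    assumptions=["Cell movement can be abstracted to a 2D grid"],
    team_roles={"Domain Expert": "host laboratory",
                "Domain Modeller": "software team",
                "Simulation Implementor": "software team"},
    simulation_purpose="Generate hypotheses for follow-up wet-lab experiments",
    criticality="low",
    resources_and_constraints=["One developer for six months",
                               "Results reported using the ODD protocol"],
    success_criteria=["Simulation results statistically comparable to baseline lab observations"],
)
```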

Table 1 The intent, context and discussion sections of Stepney's pattern, Identify the Research Context [50, 58]

Intent: Identify the overall scientific context and scope of the simulation-based research being conducted.

Context: A component of the Discovery Phase, Development Phase, and Exploration Phase patterns. Setting (and resetting) the scene for the whole simulation project.

Discussion: The role of the research context is to collate and track any contextual underpinnings of the simulation-based research, and the technical and human limitations (resources) of the work.

The research context comprises the high-level motivations or goals for the research use, the research questions to be addressed, hypotheses, general definitions, requirements for validation and evaluation, and success criteria (how will you know the simulation has been successful?).

The scope of the research determines how the simulation results can be interpreted and applied. Importantly, it captures any requirements for validation and evaluation of simulation outputs. It influences the scale and scope of the simulation itself.

Consideration should be made of the intended criticality and impact of the simulation-based research. If these are judged to be high, then an exploration of how the work can be validated and evaluated should be carried out.

Determine any constraints or requirements that apply to the project. These include the resources available (personnel and equipment), and the time scale for completion of each phase of the project. Any other constraints, such as the necessity to publish results in a particular format (for example, using the ODD Protocol), should be noted at this stage. This helps ensure that later design decisions do not violate the project constraints. Ensure that the research goals are achievable, given the constraints.

As information is gathered during the project, more understanding of the domain and the research questions will be uncovered. For example, a Prototype might indicate that a simulation of the originally required detail is computationally infeasible. The research context should be revisited between the various phases, and also at any point where major discoveries are made, in order to check whether the context needs to change in light of these discoveries.

Whilst some of these patterns are specific to a CoSMoS-style simulation development, all the patterns draw on general software engineering or simulation best practice. For example, the patterns that decompose the problem of identifying roles draw on software engineering roles (cf. Scrum roles6). A CoSMoS role implies particular responsibilities, but it does not imply a single person—typically, the domain expert role is taken by a lab or research team; it is also possible for one person to play several, or even all, the roles.

As for the Z generative patterns (Sect. 3), lists of patterns that capture what could be done to address a particular problem are not ordered steps, but a collection of activities that should be considered in order to address the sub-problem of a higher goal [50, 58]. As in other pattern catalogues, patterns may be optional, alternatives, or might be instantiated in spirit only in smaller or less critical projects.

6 https://www.scrumalliance.org/.

4.1 Fitness for Purpose and Patterns

Documenting a whole development process by successive problem decomposition is not a task to take lightly: a full set of CoSMoS patterns has recently been published [58]. However, some of the lower-level patterns have been adopted in practice.7 In this section, I present some patterns that we have developed to support documentation of the fitness-for-purpose of a simulation using arguments [2, 3, 42, 66]; first, however, I explain how and why CoSMoS addresses fitness for purpose.

CoSMoS is a process to support principled modelling and simulation of complex systems. Principled modelling and simulation implies that we can show that our modelling and simulation match the purpose of our simulation—in most cases, that the models are appropriate models of the domain, and that the simulation faithfully implements the models. Validation of simulations has often been limited to checking that the simulation produces the expected results by a process that looks a bit like reality; there is little concern for the quality of the underlying simulation [24]. The lack of overt emphasis on fitness for purpose in simulation has led to intellectual debate over whether it is possible to do science through simulation [20, 22, 35, 38, 45, 65]. Similar issues with the validity of simulation and use of simulation evidence arise in safety critical systems engineering [12], and in social simulation [65]. Wheeler et al. [65] summarise the concerns about simulation in noting that, to assess the role and value of complex systems simulation, we need to address deep questions of comparability: we need a record of experience, of how good solutions are designed, of how to choose parameters and calibrate agents, and, above all, how to validate a complex system simulation. Many of these issues were subsequently considered in the CoSMoS project [13].

Of particular interest here are simulations created as tools for scientific exploration. A scientific tool enhances the ability of a human to observe or understand a scientific subject. Humphreys [30] identifies three ways in which scientific instruments enhance the range of natural human abilities.

• Extrapolation describes the extension of an existing modality. For example, vision is extended using a microscope.
• Conversion changes the mode through which something is accessed. For example, a sonar device has a visual display that converts sonar echoes into a three-dimensional image of a surface.
• Augmentation is used to access features that people cannot normally detect in their original form. Examples include the detection of magnetism or elementary particle spin.

Tools that use extrapolation, conversion or augmentation exhibit an increasingly tenuous or abstract link to reality (the domain of study). Conversely, the implicit or explicit model of reality plays an increasingly important role in understanding the outputs of the scientific instrument.

7 See, e.g., the simulation projects of the York Computational Immunology Lab, https://www.york.ac.uk/computational-immunology/.

For simulation, the properties of the real domain are often estimated and always abstracted: the environment, the input parameters, and the layers and the forms of interaction are necessarily simplified. However, just as with conventional scientific instruments, the simulation needs to be constructed, calibrated and documented in a way that allows it to be used as a robust tool. This puts a significant responsibility on the modelling. Without a principled approach to development, the results of a simulation cannot be interpreted in any meaningful way.

The CoSMoS process has mostly been used for research simulation, aiming to create a simulation (usually an agent-based or individual-based simulation) that is a credible tool for testing or developing specific scientific hypotheses in a specific laboratory (domain) context. The intended purpose and the scope of the simulation are determined by the domain expert, in conjunction with the developer (who usually has the more realistic idea of what is possible in a simulation). Purpose is defined by, and is that of, the domain expert laboratory. It is not the whole of that scientific domain. In this respect, CoSMoS-style simulation differs from much scientific simulation, which seeks to create a simulation of a whole concept using information from many domain experts and related sources—see for instance the Chaste project, which is building an in silico heart model.8

8 www.cs.ox.ac.uk/chaste.

A CoSMoS simulation purpose typically relates to the investigation of some dynamic process that underpins an observable behaviour of a real system—for example, the development of cell clusters or the evolution of a pathology that depends on cell and molecular interactions over time and/or space. The simulation is needed either because the dynamic systems behaviour cannot be observed, or because it is useful to be able to perturb the dynamics in ways that cannot easily be attempted in vivo, in vitro or in dead tissue. The fitness-for-purpose of the simulation is assessed against the purpose, and must be re-assessed if the simulation purpose is changed.

As a CoSMoS-style development proceeds, many discussions and some research take place, to establish a model of the domain. In engineering terms, the team aims to establish the scope and scale of a simulation that is computationally feasible, and can meet the purpose of the simulation. Many design decisions are made, both in terms of the domain and in terms of the modelling and potential realisation of the simulator. Many assumptions are also identified and recorded. (Of course, there are many un-noticed assumptions and instinctive design decisions that are not recorded, but that is another story.)

Most CoSMoS-based simulation projects use argumentation to capture the belief of the team that specific aspects of the development are fit for purpose. Arguments lend themselves well to patterning. CoSMoS argumentation is derived from safety case argumentation, which, from its earliest days, has used patterns to capture the structure of common safety cases such as Component contributions to system hazards and Effects of other components [32, 63]. In safety case argumentation patterns, the top-level goal or claim often presents a generalisation of a common hazard or safety concern, and further patterns present potential decompositions of the main claims (referred to as goals) of the argument. The solutions represented by safety argumentation patterns capture best practice in safety critical engineering as well as ultimately presenting evidence that a goal has been met. Safety case arguments are typically presented using the Goal Structuring Notation (GSN) [39, 67].

CoSMoS adopts, and slightly adapts, GSN for the visual representation of fitness-for-purpose arguments [4, 41, 42]. Whereas a safety case argument must be complete, from goal to evidence, the primary purpose of a fitness-for-purpose argument is to expose and express the reasoning behind a particular aspect of the simulation design. For example, Alden [2] captures a detailed rationale for the fitness-for-purpose of the domain model created to model Peyer's patch development in an embryonic mouse gut. The top claim, that the [m]odel is an adequate representation of the biology, is addressed using the decomposition strategy, [a]rgue over scientific content, the adequacy of the abstraction and the experimental results. Each element of the decomposition can be addressed in turn, broken down into sub-goals, and, in Alden's case, used to point to the eventual evidence (data, statistical analysis) that the simulator is capable of generating results that are comparable to observations of the domain.

Fig. 2 An argument pattern for the top-level fitness-for-purpose of a domain model for a simulation. Claims are represented as rectangles, strategies as parallelograms, contextual information (which could also include assumptions, justifications, etc.), as soft-cornered rectangles. Instantiation of the pattern replaces names in angle-brackets and elaborates the sub-claims [14, 42]

Fig. 3 Representation of an argument pattern for the appropriateness of statistical analysis. Claims are represented as rectangles, strategies as parallelograms, contextual information (which could also include assumptions, justifications, etc.), as soft-cornered rectangles. Instantiation of the pattern replaces names in angle-brackets

Alden's top-level argument [2] is based on an instantiation of one of the sub-claims of the top-level fitness-for-purpose argument pattern shown in Fig. 2.9

The argument in Fig. 2 presents a possible solution to the problem of demonstrating fitness for purpose of a whole simulation (the top claim, Claim 1). The argument points to sources of information about the claim in two "Context" statements—one is a pointer to the definition of the system that we want to show to be fit, and the other, here, points to the statement of purpose against which fitness is to be judged. These contextual elements are essential in documenting an argument that something is fit for purpose. Other contextual items, assumptions and justifications could be added.

The top-level claim in the argument (Fig. 2) is addressed using a decomposition strategy: arguing over the fitness for purpose of the modelling, the software engineering and the results of the simulation. Each decomposed element results in further claims, and each of these can be broken down and documented in the same way. To instantiate this argument, the terms in angle-brackets must be replaced by appropriate terms from the project, and thought must be given both to how to elaborate each sub-claim, and to what assumptions, justifications and context statements are needed to substantiate the claims.

9 Argument representations in this paper have been produced using YCIL's open-source Artoo argumentation tool, https://www.york.ac.uk/computational-immunology/software/artoo/, which runs in a browser, loads and stores argument structures as XML, and generates .png images. The tool underpins Simomics Ltd's Reason tool, www.simomics.com.
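The instantiation step described above, replacing the angle-bracket terms of a pattern with project-specific terms, can be illustrated with a small sketch. This is purely illustrative Python, not the data model of the Artoo or Reason tools, and the node wording is abbreviated rather than quoted from Fig. 2.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str          # "claim", "strategy" or "context"
    text: str          # may contain <angle-bracket> placeholders
    children: List["Node"] = field(default_factory=list)

def instantiate(node: Node, bindings: dict) -> Node:
    """Replace <placeholder> terms throughout an argument pattern."""
    text = node.text
    for name, value in bindings.items():
        text = text.replace(f"<{name}>", value)
    return Node(node.kind, text, [instantiate(c, bindings) for c in node.children])

# A cut-down, paraphrased version of the kind of pattern shown in Fig. 2.
pattern = Node("claim", "<simulation> is fit for <purpose>", [
    Node("context", "Definition of <simulation>"),
    Node("context", "Statement of <purpose>"),
    Node("strategy", "Argue over modelling, software engineering and results", [
        Node("claim", "The models of <domain> are appropriate"),
        Node("claim", "The implementation preserves the models"),
        Node("claim", "Results are comparable to <domain> observations"),
    ]),
])

argument = instantiate(pattern, {
    "simulation": "Peyer's patch simulator",
    "purpose": "exploring patch development in the embryonic mouse gut",
    "domain": "the biological domain",
})
print(argument.text)
```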

To illustrate a pattern and its instantiation, it is easiest to take a more focused argument. A common problem in demonstrating fitness for purpose in a simulation (or real-life) context is showing that the statistical analysis undertaken has used an appropriate statistic. Most common statistics make some assumptions about whether data is continuous or discrete and how population and sample data and data errors are distributed.

Fig. 4 An instantiation of the argument pattern in Fig. 3 for the use of Vargha and Delaney’s A test [60]. A fuller version of the argument, showing how it has been applied to specific experimental results, can be found in [42]

Figures 3 and 4 illustrate a general pattern for an argument about the appropriateness of a statistic, and an instantiation of that argument for the A test [60], which can be used to compare medians (an example use of the argument can be found in [42]). A key benefit of argumentation is that anyone can review the argument: for example, in Fig. 4, someone might spot an error in our assumptions, or might challenge the validity of the strategy used.

As an aside, we show the argument of fitness for the A test here because it is a test that many CoSMoS-style simulations have used, in situations where the distribution characteristics of the data cannot be known. It is one of a number of statistical tests built into the Spartan10 tool [5–8], which supports analysis of CoSMoS-style simulation and domain results.

10 https://www.york.ac.uk/computational-immunology/software/.
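As a concrete aside, the A statistic itself is simple to compute. The following is a minimal sketch, not the Spartan implementation, and the sample data are invented: A is the probability that a value drawn from the first sample exceeds one drawn from the second, with ties counted as a half.

```python
def vargha_delaney_a(xs, ys):
    """Vargha-Delaney A statistic for two samples of observations."""
    greater = ties = 0
    for x in xs:
        for y in ys:
            if x > y:
                greater += 1
            elif x == y:
                ties += 1
    return (greater + 0.5 * ties) / (len(xs) * len(ys))

# Example: compare a simulation response under two parameter settings.
baseline = [12.1, 11.8, 12.5, 13.0, 12.2]
perturbed = [12.9, 13.4, 12.8, 13.6, 13.1]
# Values near 0.5 suggest no effect; values near 0 or 1 suggest a large effect.
print(vargha_delaney_a(perturbed, baseline))
```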

5 Discussion

The concept of a pattern is appealing to engineers because it encapsulates a solution to a typical problem. In this paper, I have given examples of patterns that are synthesised from practice and experience, exemplified by the Z and formal methods patterns; and patterns that are proposed as potential solutions to problems, exemplified by the CoSMoS development and argumentation patterns. Both approaches to pattern discovery illustrate some typical issues with patterns.

The patterns-from-practice approach relies on there being established usage and expertise; the risk (demonstrated by the Z patterns work) is that the patterns are distilled too late to be of value: in a fast-changing area such as software engineering, this is not uncommon. However, the capturing of solutions in patterns can highlight aspects of solutions or best practice that are of relevance beyond the specific pattern domain. For instance, in the Z patterns work, we used generative patterns to identify sets of specific development (etc.) patterns that can be used to develop different styles of formal solution: the same meme is used in the CoSMoS patterns, and we suggest that the principle of providing a set of patterns relating to a particular problem takes patterns closer to Alexander's intent of process patterns.

In work on formal patterns, our use of "meta-patterns" shows how we adapt Z patterns to other formal and less formal modelling approaches, to capture design processes. If we replace the Z-specific patterns with either generic or other language-specific patterns, we might be able to improve the quality of development more widely. To illustrate, consider Table 2, which takes three of the component patterns from the generative pattern for a conventional Z Delta/Xi specification [55], and suggests how these might relate to, and illuminate, a more general development process. It is interesting to observe in Table 2 how things that are implicit in one context (e.g. type-checking in a formal context; drawing design diagrams in a general context) need to be made explicit in another context.


Table 2 Using components of a Z generative pattern to illuminate more general development patterns and vice versa

Z generative pattern (intent): Delta/Xi: specify a system as a state, with operations based on that state.
General development: Model the state and operations of the proposed system—note that a formal model can be proven internally consistent; a non-formal model needs an additional validation step.

Z generative pattern (intent): Diagram the structure: create a visual representation of the structure of the solution, to summarise the structure and allow critical review.
General development: Create and validate diagrammatic models; validate internal consistency of models.

Z generative pattern (intent): Strict convention: use the Δ/Ξ Z naming convention strictly, to avoid surprising the reader with hidden constraints. (Note: Z schemas whose name starts with Δ define a pre and post state, and can be used to formally define updates to the state; those whose name starts with Ξ require that the pre and post states are equal, so can only define read operations.)
General development: Consider the meaning of notations and state semantic variations. For example, when modelling in UML, consider whether the MOF definitions of class, generalisation, etc., are used strictly; consider stating the semantics of generalisation in terms of a specific theory of OO inheritance, etc.

Z generative pattern (intent): Change part of the state: define an operation that changes only part of the state. (Note: Z allows "set difference" to be applied to schemas, constraining hidden elements to remain unchanged in operations.)
General development: Validate operations to ensure that unintended side-effects are eliminated; ensure that a requirement not to change part of a state is explicit and tested.
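To make the Δ/Ξ convention described in the Strict convention row concrete, a minimal Z fragment might look as follows. This is a sketch only: it uses the standard zed/fuzz-style LaTeX schema markup, and the Counter state is invented for illustration rather than taken from the Z Pattern Catalogue.

```latex
\begin{schema}{Counter}
  value : \nat
\end{schema}

% Delta convention: the operation may change the state (value and value').
\begin{schema}{Increment}
  \Delta Counter
\where
  value' = value + 1
\end{schema}

% Xi convention: a read-only operation; the state is guaranteed unchanged.
\begin{schema}{ReadValue}
  \Xi Counter \\
  result! : \nat
\where
  result! = value
\end{schema}
```

Here Increment follows the Δ convention (it may change the state), while ReadValue follows the Ξ convention (the pre and post states are constrained to be equal).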

It is tempting to suggest that there is scope for comparison of developmental patterns for different contexts and languages, to identify a superset of generic quality or validation patterns.

Turning to patterns invented to address existing problems, a similar effect can be observed. In developing a new pattern catalogue, we use our experience of working in other problem-solving domains and with other approaches or languages. Thus, one of the underpinnings of our work on the CoSMoS pattern language is our background in formal methods and software engineering. The CoSMoS team included experienced academic and industrial software engineers who had experience of a wide range of methods, techniques and languages. Experience with formal methods (the team had a good working knowledge and practical experience of Z, B and CSP) focuses attention on the use and misuse of models more generally; type-checking focuses attention on the need to cross-validate models. In terms of software engineering modelling, the team's expertise covers conventional Harel statecharts, Petri nets, and many of the software modelling languages that comprise or influenced UML/MOF, as well as a number of bespoke modelling languages, and research expertise in model-driven engineering.

A strong background in notations and semantics, and a broad experience of modelling languages, inevitably focuses attention on the difficulty of knowing whether and to what extent a model adequately captures what is intended. Overall, this led to the CoSMoS Research Context (colloquially, the repository of all the stuff that we accumulate about the domain and the development) and the focus on fitness-for-purpose and argumentation. Thus, whilst experience of principled modelling and simulation is still limited, the CoSMoS patterns are well-grounded in extensive software engineering practice and experience. Again, it would be tempting to suggest a more systematic review of pattern languages, to establish links to patterns in other domains which may already have solutions to some of the problems of a CoSMoS-style development.

6 Summary and Conclusion

Patterns, building on the work of Alexander, have become influential and mainstream in software engineering. Prof. Stepney's work has returned to patterns repeatedly, both as a means of capturing long-time experience, and as a way to present potential solutions to problems. Stepney and her teams have produced novel patterns, with a focus on visual representations, as well as on practicality and use. This platform of pattern research has the potential to improve software development, both of research simulators and more generally. Even if none of the patterns presented here and in the referenced work were ever used again, they remain an archive of experience and practice, presented in an accessible way, to future researchers and software developers. The examples show the breadth of software engineering research led by Prof. Stepney over the last three decades.

References 1. Abrial, J.-R.: The B-book: Assigning Programs to Meanings. CUP (1996) 2. Alden, K.: Simulation and statistical techniques to explore lymphoid tissue organogenesis. Ph.D. thesis, University of York (2012). http://etheses.whiterose.ac.uk/3220/ 3. Alden, K., Andrews, P., Timmis, J., Veiga-Fernandes, H., Coles, M.C.: Towards argumentdriven validation of an in-silico model of immune tissue organogenesis. In: Proceedings of ICARIS, vol. 6825, LNCS, pp. 66–70. Springer (2011) 4. Alden, K., Andrews, P.S., Polack, F.A.C., Veiga-Fernandes, H., Coles, M.C., Timmis, J.: Using argument notation to engineer biological simulations with increased confidence. J. R. Soc. Interface 12(104) (2015) 5. Alden, K., Andrews, P.S., Veiga-Fernandes, H., Timmis, J., Coles, M.C.: Utilising a simulation platform to understand the effect of domain model assumptions. Nat. Comput. 14(1), 99–107 (2014) 6. Alden, K., Read, M., Andrews, P.S., Timmis, J., Coles, M.C.: Applying Spartan to understand parameter uncertainty in simulations. R J. (2014) 7. Alden, K., Read, M., Timmis, J., Andrews, P., Veiga-Frenandes, H., Coles, M.: Spartan: a comprehensive tool for understanding uncertainty in simulations of biological systems. PLoS Comput. Biol. 9(2) (2013) 8. Alden, K., Timmis, J., Andrews, P.S., Veiga-Fernandes, H., Coles, M.C.: Extending and applying Spartan to perform temporal sensitivity analyses for predicting changes in influential biological pathways in computational models. IEEE Trans. Comput. Biol. 14(2), 422–431 (2016) 9. Alexander, C.: The Timeless Way of Building. OUP (1979)

10. Alexander, C.: The origins of pattern theory: the future of the theory, and the generation of a living world. IEEE Softw. 16(5), 71–82 (1999) 11. Alexander, C., Ishikawa, S., Silverstein, M., Jacobson, M., Fiksdahl-King, I., Angel, S.: A Pattern Language—Towns, Buildings, Construction. OUP (1977) 12. Alexander, R.: Using simulation for systems of systems hazard analysis. Ph.D. thesis, Department of Computer Science, University of York, YCST-2007-21 (2007) 13. Andrews, P.S., Polack, F.A.C., Sampson, A.T., Stepney, S., Timmis, J.: The CoSMoS process, version 0.1. Technical Report, Computer Science, University of York, YCS-2010-450 (2010) 14. Andrews, P.S., Stepney, S., Hoverd, T., Polack, F.A.C., Sampson, A.T., Timmis, J.: CoSMoS process, models and metamodels. In: CoSMoS Workshop, pp. 1–14. Luniver Press (2011) 15. Barden, R., Stepney, S., Cooper, D.: Z in Practice. Prentice-Hall (1995) 16. Bjørner, D., Jones, C.B. (eds.): The Vienna Development Method: The Meta-Language, vol. 61, LNCS. Springer (1978) 17. Bowen, J.P., Hinchey, M.G.: Seven more myths of formal methods. IEEE Softw. 12(4), 34–41 (1995) 18. Brambilla, M., Cabot, J., Wimmer, M.: Model-driven Software Engineering (MDSE) in Practice, 2nd edn. Morgan & Claypool (2017) 19. Brown, W.H., Malveau, R.C., McCormick, H.W., Mowbray, T.J.: AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis, 1st edn. Wiley (1998) 20. Bryden, J., Noble, J.: Computational modelling, explicit mathematical treatments, and scientific explanation. In: Proceedings of Artificial Life X, pp. 520–526. MIT Press (2006) 21. Dearden, A., Finlay, J.: Pattern languages in HCI: a critical review. Hum. Comput. Interact. 21(1), 49–102 (2006) 22. Di Paolo, E., Noble, J., Bullock, S.: Simulation models as opaque thought experiments. In: Proceedings of Artificial Life VII, pp. 497–506. MIT Press (2000) 23. Duncan, I.M.M., de Muijnck-Hughes, J.: Security pattern evaluation. In: Proceedings of SOSE, pp. 428–429. IEEE (2014) 24. Epstein, J.M.: Agent-based computational models and generative social science. Complexity 4(5), 41–60 (1999) 25. Fowler, M.: Refactoring: Improving the Design of Existing Code. Addison-Wesley (1999) 26. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Objectoriented Software. Addison-Wesley (1995) 27. Greaves, R.B., Read, M., Timmis, J., Andrews, P.S., Butler, J.A., Gerckens, B., Kumar, V.: In silico investigation of novel biological pathways: the role of CD200 in regulation of T cell priming in experimental autoimmune encephalomyelitis. Biosystems (2013). https://doi.org/ 10.1016/j.biosystems.2013.03.007 28. Hall, A.: Seven myths of formal methods. IEEE Softw. 7(5), 11–19 (1990) 29. Hoare, C.A.R.: Communicating Sequential Processes. Prentice-Hall (1985) 30. Humphreys, P.: Extending Ourselves: Computational Science, Empiricism, and Scientific Method. OUP (2004) 31. Information Technology—Z formal specification notation—syntax, type system and semantics. ISO Standard 13568 (2002) 32. Kelly, T.P.: Arguing safety—a systematic approach to managing safety cases. Ph.D. thesis, Department of Computer Science, University of York, YCST 99/05 (1999) 33. Krasner, G.E., Pope, S.T.: A cookbook for using the model-view controller user interface paradigm in Smalltalk-80. J. Object Oriented Program. 1(3), 26–49 (1988) 34. Le Charlier, B., Flener, P.: Specifications are necessarily informal or: some more myths of formal methods. Syst. Softw. 40(3), 275–296 (1998) 35. 
Miller, G.F.: Artificial life as theoretical biology: how to do real science with computer simulation. Technical Report Cognitive Science Research Paper 378, University of Sussex (1995) 36. Morgan, C.: Programming from Specifications, 2nd edn. Prentice Hall (1994) 37. Moyo, D.: Investigating the dynamics of hepatic inflammation through simulation. Ph.D. thesis, University of York (2014)

38. Nance, R.E., Sargent, R.G.: Perspectives on the evolution of simulation. Oper. Res. 50(1), 161–172 (2002) 39. Origin Consulting (York): GSN community standard version 1. Technical report, Department of Computer Science, University of York (2011). http://www.goalstructuringnotation.info 40. Polack, F., Stepney, S.: Emergent properties do not refine. ENTCS 137(2), 163–181 (2005) 41. Polack, F.A.C.: Arguing validation of simulations in science. In: CoSMoS Workshop, pp. 51– 74. Luniver Press (2010) 42. Polack, F.A.C.: Filling gaps in simulation of complex systems: the background and motivation for CoSMoS. Nat. Comput. 14(1), 49–62 (2015) 43. Polack, F.A.C., Andrews, P.S., Ghetiu, T., Read, M., Stepney, S., Timmis, J., Sampson, A.T.: Reflections on the simulation of complex systems for science. In: Proceedings of ICECCS, pp. 276–285. IEEE Press (2010) 44. Polack, F.A.C., Andrews, P.S., Sampson, A.T.: The engineering of concurrent simulations of complex systems. In: Proceedings of CEC, pp. 217–224. IEEE Press (2009) 45. Polack, F.A.C., Hoverd, T., Sampson, A.T., Stepney, S., Timmis, J.: Complex systems models: engineering simulations. In: Proceedings of ALife XI, pp. 482–489. MIT press (2008) 46. Potter, B., Till, D., Sinclair, J.: An Introduction to Formal Specification and Z, 2nd edn. Prentice Hall (1996) 47. Read, M., Andrews, P.S., Timmis, J., Kumar, V.: Techniques for grounding agent-based simulations in the real domain: a case study in experimental autoimmune encephalomyelitis. Math. Comput. Model. Dyn. Syst. 18(1), 67–86 (2012) 48. Read, M.N.: Statistical and modelling techniques to build confidence in the investigation of immunology through agent-based simulation. Ph.D. thesis, University of York (2011) 49. Stepney, S.: A tale of two proofs. In: BCS-FACS Northern Formal Methods Workshop. Electronic Workshops in Computing (1998) 50. Stepney, S.: A pattern language for scientific simulations. In: CoSMoS Workshop, pp. 77–103. Luniver Press (2012) 51. Stepney, S., Cooper, D., Woodcock, J.C.P.: An electronic purse: specification, refinement, and proof. Technical Monograph PRG-126, Oxford University Computing Laboratory (2000) 52. Stepney, S., Nabney, I.T.: The DeCCo project papers, I to VI. Technical Report, Computer Science, University of York, YCS-2002-358 to YCS-2002-363 (2003) 53. Stepney, S., Polack, F., Toyn, I.: An outline pattern language for Z: five illustrations and two tables. In: Proceedings of ZB2003, vol. 2651, LNCS, pp. 2–19. Springer (2003) 54. Stepney, S., Polack, F., Toyn, I.: Patterns to guide practical refactoring: examples targetting promotion in Z. In: Proceedings of ZB2003, vol. 2651, LNCS, pp. 20–39. Springer (2003) 55. Stepney, S., Polack, F., Toyn, I.: A Z patterns catalogue I: specification and refactorings, v0.1. Technical Report, Computer Science, University of York, YCS-2003-349 (2003) 56. Stepney, S., Polack, F., Toyn, I.: Diagram patterns and meta-patterns to support formal modelling. Technical Report, Computer Science, University of York, YCS-2005-394 (2005) 57. Stepney, S., Polack, F., Turner, H.: Engineering emergence. In: ICECCS, pp. 89–97. IEEE Computer Society (2006) 58. Stepney, S., Polack, F.A.C.: Engineering Simulations as Scientific Instruments: A Pattern Language. Springer (2018) 59. Valentine, S.H., Stepney, S., Toyn, I.: A Z patterns catalogue II: definitions and laws, v0.1. Technical Report, Computer Science, University of York, YCS-2003-383 (2004) 60. 
Vargha, A., Delaney, H.D.: A critique and improvement of the CL common language effect size statistics of McGraw and Wong. J. Educ. Behav. Stat. 25(2), 101–132 (2000) 61. Wania, C.E.: Investigating an author’s influence using citation analyses: Christopher Alexander (1964–2014). Proc. Assoc. Inf. Sci. Technol. 52(1), 1–10 (2015) 62. Wania, C.E., Atwood, M.E.: Pattern languages in the wild: exploring pattern languages in the laboratory and in the real world. In: Proceedings of DESRIST, pp. 12:1–12:15. ACM (2009) 63. Weaver, R.A.: The safety of software—constructing and assuring arguments. Ph.D. thesis, Department of Computer Science, University of York, YCST-2004-01 (2003)

64. Welch, P.H., Barnes, F.R.M.: Communicating mobile processes: introducing occam-pi. In: Proceedings of 25 Years of CSP, vol. 3525, LNCS, pp. 175–210. Springer (2005) 65. Wheeler, M., Bullock, S., Di Paolo, E., Noble, J., Bedau, M., Husbands, P., Kirby, S., Seth, A.: The view from elsewhere: perspectives on ALife modelling. Artif. Life 8(1), 87–100 (2002) 66. Williams, R.A., Greaves, R., Read, M., Timmis, J., Andrews, P.S., Kumar, V.: In silico investigation into dendritic cell regulation of CD8Treg mediated killing of Th1 cells in murine experimental autoimmune encephalomyelitis. BMC Bioinform. 14, S6–S9 (2013) 67. Wilson, S.P., McDermid, J.A.: Integrated analysis of complex safety critical systems. Comput. J. 38(10), 765–776 (1995) 68. Woodcock, J., Loomes, M.: Software Engineering Mathematics. Addison-Wesley (1990) 69. Woodcock, J., Stepney, S., Cooper, D., Clark, J.A., Jacob, J.L.: The certification of the Mondex electronic purse to ITSEC Level E6. Form. Asp. Comput. 20(1), 5–19 (2008)

From Parallelism to Nonuniversality: An Unconventional Trajectory

Selim G. Akl

Abstract I had the distinct pleasure of meeting Dr. Susan Stepney in September 2006, on the occasion of the Fifth International Conference on Unconventional Computation (UC'06) held at the University of York in the United Kingdom. I learned a great deal at that conference co-chaired by Dr. Stepney, enough to motivate me to organize the sixth edition of that conference series the following year in Kingston, Ontario, Canada. This chapter relates my adventures in unconventional computation and natural computation, and offers some recollections on the path that led me to nonuniversality. It is dedicated to Susan in recognition of her contributions and in celebration of her 60th birthday.

Keywords Parallelism · Parallel computer · Inherently parallel computations · Unconventional computing · Unconventional computational problems · Quantum computing · Quantum chess · Computational geometry · Quantum cryptography · Superposition · Entanglement · Universality · Nonuniversality · Superlinear performance · Speed · Quality · Natural computing · DNA computer · Biomolecular computing · Simulation · Key distribution · Optical computing · Sensor network · Cellular automata · Fully homomorphic encryption · Cloud security.

1 Introduction

My thinking has always been influenced by unconventional wisdom. Throughout my life I always tried to see if things could be done differently. I found it incredibly interesting (and tantalizingly mischievous) to question dogmas and established ideas. Sometimes this worked in my favor, and I discovered something new. Often I was not so lucky, and I ruffled some feathers. Either way, it was a special thrill to explore the other side of the coin, the home of uncommon sense. Since the present chapter recounts my journey to unconventional computation and nonuniversality, I will restrict my recollections to this topic. This work is an extended version of a short section contributed by the author to a recent collaboration [5].

2 The Early Years

I begin with my days as a student. While I followed the conventional trajectory from undergraduate, to Master's student, to Doctoral candidate, my university experience was far from conventional.

2.1 Aérospatiale

It all started in 1969. I remember that year well. It was one of the most exciting years for me. As a space travel buff, how can I forget witnessing the first lunar landing? Humanity had just realized its dream of walking on another celestial body. But of more direct relevance to me personally, 1969 was the year I discovered Computer Science.

As an engineering student I was on an internship at Sud Aviation in Toulouse, France. Later known as Aérospatiale, this was the company that built (in partnership with British Aircraft Corporation) the supersonic passenger jet Concorde, which was at that time doing test flights, while the now popular Airbus airplane was being designed. As an intern, I was offered a choice to join a number of engineering departments. Unsurprisingly, not knowing a single thing about computer science, I selected the programming group! And this is how I came to be part of the team that wrote the first flight simulator for the Airbus.

From simulation, I learned that you could create worlds that did not previously exist, yet worlds as real, as useful, as effective, and as far reaching as one could ever imagine. I knew there and then that this is what I wanted to do. I wanted one day to become a computer scientist.

2.2 Exactly One Computer

For my undergraduate capstone project, I decided to choose as a topic computer simulation. What is unconventional about this choice is that I was an electrical engineering student planning to graduate with an electrical engineering degree (not computer science, for there was no such department at my university). The year was 1971. There was exactly one computer in the entire university, and it was an IBM 1620. The 1620 fully occupied a large room. In order to run my program, I had to book the room overnight, and hope and pray that things would work out. Lucky for me, the 1620 behaved flawlessly, my program worked, and I graduated.

2.3 Who in the World?

Now I was ready and eager to do a Ph.D. in computer science, a subject which, like I said, did not exist at my university. The reaction from the professors was negative. "After all", I was told, "who in the world could possibly need even a Master's in computing, never mind a Ph.D.?" Such was society's view of computing in the late 1960s. Computers were regarded merely as number crunching machines, useful mainly for generating telephone bills (and the like) at the end of the month. Consequently, I was advised that a much better idea would be to do graduate work in mathematics.

And so I did (but not precisely). What happened is that I ended up writing an M.Sc. thesis on computer switching circuits, disguised as a thesis in mathematical logic. Furthermore, at the time, the common approach to tackle the kind of problem I chose to solve was software-based. Instead of following the trend, my thesis explored a counterintuitive, yet effective, hardware solution to error detection in asynchronous switching circuits [116].

2.4 Give Me Something More Useful!

To study for the Ph.D., I decided to move to Canada. The official at the Canadian Embassy, a very nice and very helpful gentleman, conducted my visa pre-application interview. There was a form to be filled out. "What is your profession?" he asked. I replied (proudly!): "Computer Science." He looked at me quizzically: "That will not do. In order to improve your chances of being admitted, you need to give me something more useful than this, something of benefit to society. What else do you do?" Hesitantly, I offered: "I work in the evenings as a movie projectionist at a cultural center." The kind man beamed: "Excellent! Let us put down Audio Visual Specialist!"

2.5 Chess and Geometry

Some time later, during my Ph.D. work on combinatorial optimization, I had the good fortune of meeting two exceptional individuals who allowed me to join them in their research when I needed a break from my own work. Dr. Monty Newborn introduced me to computer chess. Together, we wrote a paper that described a new method, the "killer heuristic", which allowed chess-playing programs to run much faster by significantly reducing the size of the trees that they searched [60]. Dr. Godfried Toussaint taught me computational geometry. Our delightful dinner conversations at the Basha restaurant in Montreal led to many results, most memorably the fastest known practical algorithm for computing the convex hull of a set of planar points [67]. Throughout my career I would return often to chess and geometry [45].

3 Parallel Computation

Shortly after finishing my Ph.D. and starting an academic career, I went back to thinking about computer chess, and how to improve chess-playing programs by making them run faster, and hence allowing them to explore more positions. The idea of searching game trees in parallel, while obvious today, seemed like a huge insight at the time. This approach proved very successful and opened for me a research path on which I remain to this day.

Parallel computation is a field that requires a completely different way of thinking. From shared-memory machines to interconnection networks, from special-purpose combinational circuits to reconfigurable computers, and everything else in between, parallel algorithms are limited only by the extent of their designer's imagination [58]. Indeed, in my opinion, sequential computing is only a very primitive special case of the much more general, richer, and more powerful algorithmic possibilities offered by parallelism [10, 14, 17, 52, 54, 61, 64, 65, 108].
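To give a flavour of the game-tree idea, here is a minimal Python sketch that evaluates the subtrees of the root concurrently. It is purely illustrative (a toy tree, plain minimax, thread-based concurrency) and is not the parallel alpha-beta algorithm of [47, 48] described in the next subsection.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy game tree: internal nodes are lists of children, leaves are scores.
TREE = [
    [[3, 5], [6, 9]],
    [[1, 2], [0, -1]],
    [[8, 4], [7, 3]],
]

def minimax(node, maximizing):
    """Plain sequential minimax over the toy tree."""
    if not isinstance(node, list):          # leaf
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def parallel_root_minimax(tree):
    """Evaluate each root subtree concurrently, then combine at the root.
    Threads keep the sketch simple; parallelising alpha-beta itself needs
    more care, because the subtrees share pruning information."""
    with ThreadPoolExecutor() as pool:
        values = list(pool.map(lambda sub: minimax(sub, maximizing=False), tree))
    return max(values)                      # the root is a maximizing node

print(parallel_root_minimax(TREE))
```

Root splitting like this gains nothing on a toy tree, but it conveys the basic point: different parts of the tree can be searched at the same time.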

3.1 Parallel Algorithms

My very first paper on parallel algorithms, which also happened to be the first paper ever on searching game trees in parallel, was co-authored with D. Barnard and R. Doran [47, 48]. In it, the alpha-beta algorithm for searching decision trees is adapted to allow parallel activity in different parts of a tree during the search. Our results indicated that a substantial reduction in the time required by the search occurs because of the use of parallelism [12].

To test the waters of this new field a little more in depth, I embarked on a project to write a monograph on parallel algorithms, choosing as subject the problem of sorting, the most-studied problem in computer science. My book Parallel Sorting Algorithms appeared in 1985 [11], followed by a Japanese edition shortly thereafter. Topics covered included specialized networks for sorting, as well as algorithms for sorting on various parallel processor architectures such as linear arrays, perfect shuffles, meshes, trees, cubes, synchronous shared-memory machines, and asynchronous multiprocessors, algorithms for external sorting, and lower bounds on parallel sorting. It was the first book to appear in print that was devoted entirely to parallel algorithms (and it later earned me the privilege of writing an encyclopedia entry on parallel sorting [33], one of three encyclopedia articles I have authored).

Soon after, I published a second, more comprehensive, book on The Design and Analysis of Parallel Algorithms, each chapter of which was devoted to a different computational problem [13], namely, selection, merging, sorting, searching, generating permutations and combinations, matrix operations, numerical problems, computing Fourier transforms, graph theoretic problems, computational geometry, traversing combinatorial spaces, decision and optimization, and analyzing the bit complexity of parallel computations. The book was translated into several languages, including Spanish, Italian, and Chinese.

A third book entitled Parallel Computation: Models and Methods took the approach of presenting parallel algorithms from the twin perspectives of models of computation and algorithmic paradigms [15]. Thus there are chapters on combinational circuits, parallel prefix computation, divide and conquer, pointer-based data structures, linear arrays, meshes and related models, hypercubes and stars, models using buses, broadcasting with selective reduction, and parallel synergy. The book is listed on amazon.com among the 21 best books on algorithmics–not just parallel algorithms (Professor Christoph Koegl, the creator of the list and a theoretical computer scientist at the University of Kaiserslautern, described it as "Possibly the best book on parallel algorithms" [121]).

Using my knowledge of parallel algorithms, I returned to computational geometry, teaming up with Dr. Kelly Lyons to produce Parallel Computational Geometry [55]. The book describes parallel algorithms for solving a host of computational geometric problems, such as the convex hull, intersection problems, geometric searching, visibility, separability, nearest neighbors, Voronoi diagrams, geometric optimization, and triangulations of polygons and point sets.

3.2 Parallel Computer I should also mention that in my work, theory was never too far from practice. Indeed, as early as 1982, my students and I built one of the earliest functioning parallel computers. The computer consisted of twelve processors, each a full-fledged computer with a microprocessor, a memory, and four communication lines. Its architecture was hardware-reconfigurable in the sense that the processors could be connected in a number of different ways, including a linear array, a two-dimensional array, a perfect shuffle, and a tree. It provided great flexibility and speed and allowed several algorithms to be tested on different architectures [9].

3.3 Superlinear Performance A great deal of my work in parallelism focused on discovering what surprising results in speed and quality can be obtained once the power of parallelism is unleashed.

3.3.1 Speed

Of course, the principal reason for using parallelism is to speed up computations that are normally run on sequential machines. In order to measure the improvement in processing speed afforded by a particular parallel algorithm, the following ratio,
known as the speedup, is used: The running time of the best sequential algorithm for the problem at hand, divided by the running time of the parallel algorithm. It was widely believed that an algorithm with n processors can achieve a speedup at most equal to n. It was also believed that if only p processors, where p < n, are available, then the slowdown, that is, the running time of the p-processor algorithm divided by the running time of the n-processor algorithm, is smaller than n/p. Unconvinced, I set out to prove that these two “folklore theorems” are in fact false. I exhibited several out-of-the-ordinary computational problems for which an n-processor parallel algorithm achieves a speedup superlinear in n (for example, a speedup of 2^n), while a p-processor parallel algorithm, where p < n, causes a slowdown superlinear in n/p (for example, a slowdown of 2^n/p) [19, 43, 53].

One-way functions provide a nice example of superlinear speedup. A function f is said to be one-way if the function itself takes little time to compute, but (to the best of our knowledge) its inverse f^{-1} is computationally prohibitive. For example, let x_1, x_2, ..., x_n be a sequence of integers. It is easy to compute the sum of a given subset of these integers. However, starting from a sum, and given only the sum, no efficient algorithm is known to determine a subset of the integer sequence that adds up to this sum.

Consider that in order to solve a certain problem, it is required to compute g(x_1, x_2, ..., x_n), where g is some function of n variables. The computation of g requires Ω(n) operations. For example, g(x_1, x_2, ..., x_n) = x_1^2 + x_2^2 + · · · + x_n^2 might be such a function. The inputs x_1, x_2, ..., x_n needed to compute g are received as n pairs of the form ⟨x_i, f(x_1, x_2, ..., x_n)⟩, for i = 1, 2, ..., n. The function f possesses the following property: Computing f from x_1, x_2, ..., x_n is done in n time units; on the other hand, extracting x_i from f(x_1, x_2, ..., x_n) takes 2^n time units. Because the function g is to be computed in real time, there is a deadline constraint: If a pair is not processed within one time unit of its arrival, it becomes obsolete (it is overwritten by other data in the fixed-size buffer in which it was stored).

Sequential Solution. The n pairs arrive simultaneously and are stored in a buffer, waiting in queue to be processed. In the first time unit, the pair ⟨x_1, f(x_1, x_2, ..., x_n)⟩ is read and x_1^2 is computed. At this point, the other n − 1 pairs are no longer available. In order to retrieve x_2, x_3, ..., x_n, the sequential processor p_1 needs to invert f. This requires (n − 1) × 2^n time units. It then computes g(x_1, x_2, ..., x_n) = x_1^2 + x_2^2 + · · · + x_n^2. Consequently, the sequential running time is given by t_1 = 1 + (n − 1) × 2^n + 2 × (n − 1) time units. Clearly, this is optimal considering the time required to obtain the data.

Parallel Solution. Once the n pairs are received, they are processed immediately by an n-processor parallel computer, in which the processors p_1, p_2, ..., p_n share a common memory for reading and for writing [15]. Processor p_i reads the pair ⟨x_i, f(x_1, x_2, ..., x_n)⟩ and computes x_i^2, for i = 1, 2, ..., n. The n processors now compute the sum g(x_1, x_2, ..., x_n) using a concurrent-write operation to the shared memory. It follows that the parallel running time is equal to t_n = 1.

Speedup and slowdown. The speedup provided by the parallel computer over the sequential one, namely, S(1, n) = (n − 1) × 2^n + 2n − 1, is superlinear in n
and thus contradicts the speedup folklore theorem. What if only p processors are available on the parallel computer, where 2 ≤ p < n? In this case, only p of the n variables (for example, x_1, x_2, ..., x_p) are read directly from the input buffer (one by each processor). Meanwhile, the remaining n − p variables vanish and must be extracted from f(x_1, x_2, ..., x_n). It follows that the parallel running time is now

\[
t_p = 1 + \left\lceil \frac{n-p}{p} \right\rceil \times 2^n + \left( \sum_{i=1}^{\lceil \log_p (n-p) \rceil} \left\lceil \frac{n-p}{p^i} \right\rceil \right) + 1,
\]

where the first term is for computing x_1^2 + x_2^2 + · · · + x_p^2, the second for extracting x_{p+1}, x_{p+2}, ..., x_n, the third for computing x_{p+1}^2 + x_{p+2}^2 + · · · + x_n^2, and the fourth for producing g. Therefore, t_p/t_n is asymptotically larger than n/p by a factor that grows exponentially with n, and the slowdown folklore theorem is violated.
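To make the arithmetic above concrete, here is a minimal sketch in Python of the same accounting. The parameter values, function names, and the loop-based count of reduction levels are mine, chosen only for illustration; the cost model is the one stated above (reading a pair costs one time unit, inverting f costs 2^n, and each square or addition costs one unit).

    import math

    def t_sequential(n):
        # 1 (first pair) + (n - 1) inversions of f at 2**n each + 2*(n - 1) squares/additions
        return 1 + (n - 1) * 2**n + 2 * (n - 1)

    def t_parallel_full(n):
        # n processors: every pair is read and squared at once; the sum is formed
        # with a single concurrent-write operation
        return 1

    def t_parallel_p(n, p):
        # p < n processors: p values are read directly, the other n - p must be
        # recovered by inverting f, then their squares are summed with a p-ary reduction
        extract = math.ceil((n - p) / p) * 2**n
        reduction, remaining = 0, n - p
        while remaining > 1:
            remaining = math.ceil(remaining / p)
            reduction += remaining
        return 1 + extract + reduction + 1

    n, p = 20, 4
    t1, tn, tp = t_sequential(n), t_parallel_full(n), t_parallel_p(n, p)
    print("speedup S(1, n) =", t1 / tn)                # roughly (n - 1) * 2**n, far larger than n
    print("slowdown t_p/t_n =", tp / tn, "versus n/p =", n / p)

Running the sketch with any moderately large n shows both folklore bounds being exceeded by a factor that grows like 2^n.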

3.3.2 Quality

As a second manifestation of superlinear performance, I introduced the notion of quality-up as a measure of the improvement in the quality of a solution obtained through parallel computation [16]. I proved that a parallel computer can in some circumstances obtain a solution to a problem that is better than that obtained by a sequential computer. What constitutes a better solution depends on the problem under consideration. Thus, for example, ‘better’ means ‘closer to optimal’ for optimization problems, ‘more accurate’ for numerical problems, and ‘more secure’ for cryptographic problems. A source coding algorithm is ‘better’ if it yields a higher compression rate. An error correction code is ‘better’ if it provides a superior error correction capability. I exhibited several classes of problems having the property that a solution to a problem in the class, when computed in parallel, is far superior in quality to the best one obtained on a sequential computer [19]. This was a far more unconventional idea than superlinear speedup, for it is a fundamental belief in computer science that anything that can be computed on one computer can be obtained exactly on another computer (through simulation, if necessary). To illustrate, consider the following computational environment:

1. A computer system receives a stream of inputs in real time.
2. The numerical problem to be solved here is to find, for a continuous function f(x), a zero x_approx that falls between x = a and x = b using, for example, the bisection algorithm. At the beginning of each time unit, a new 3-tuple ⟨f, a, b⟩ is received by the computer system.
3. It is required that ⟨f, a, b⟩ be processed as soon as it is received and that x_approx be produced as output as soon as it is computed. Furthermore, one output must be produced at the end of each time unit (with possibly an initial delay before the first output is produced).
4. The operations of reading ⟨f, a, b⟩, performing one iteration of the algorithm, and producing x_approx as output once it has been computed, can be performed within one time unit.

Sequential Solution. Here, there is a single processor whose task is to read each incoming 3-tuple, to compute x_approx, and to produce the latter as output. Recall that the computational environment we assumed dictates that a new 3-tuple input be received at the beginning of each time unit, and that such an input be processed immediately upon arrival. Therefore, the sequential computer must have finished processing a 3-tuple before the next one arrives. It follows that, within the one time unit available, the algorithm can perform no more than one iteration on each input ⟨f, a, b⟩. The approximate solution computed by the sequential computer is x_approx = (a + b)/2. This being the only option available, it is by default the best solution possible sequentially.

Parallel Solution. When solving the problem on an n-processor computer, equipped with an array of processors p_1, p_2, ..., p_n, it is evident that one processor, for example p_1, must be designated to receive the successive input 3-tuples, while it is the responsibility of another processor, for example p_n, to produce x_approx as output. The fact that each 3-tuple needs to be processed as soon as it is received implies that the processor must be finished processing a 3-tuple before the next one arrives. Since a new 3-tuple is received every time unit, processor p_1 can perform only one iteration on each 3-tuple it receives. Unlike the sequential solution, however, the present algorithm can perform additional iterations. This is done as follows. Once p_1 has executed its single iteration on ⟨f, a_1, b_1⟩, it sends ⟨f, a_2, b_2⟩ to p_2, and turns its attention to the next 3-tuple arriving as input. Now p_2 can execute an additional iteration before sending ⟨f, a_3, b_3⟩ to p_3. This continues until x_approx = (a_n + b_n)/2 is produced as output by p_n. Meanwhile, n − 1 other 3-tuple inputs co-exist in the array of processors (one in each of p_1, p_2, ..., p_{n−1}), at various stages of processing. One time unit after p_n has produced its first x_approx, it produces a second, and so on, so that an output emerges from the array every time unit. Note that each output x_approx is the result of applying n iterations to the input 3-tuple, since there are n processors and each executes one iteration.

Quality-up. In what follows we derive a bound on the size of the error in x_approx for the sequential and parallel solutions. Let the accuracy of the solution be defined as the inverse of the maximum error. Sequentially, one iteration of the bisection algorithm is performed to obtain x_approx. The maximum error is |b − a|/2. In parallel, each 3-tuple input is subjected to n iterations of the bisection algorithm, where each processor performs one iteration. The maximum error is |b − a|/2^n. By defining quality-up as the ratio of the parallel accuracy to the sequential accuracy, quality-up(1, n) = 2^{n−1}. This suggests that increasing the number of processors by a factor of n leads to an increase in the level of accuracy by a factor on the order of 2^n. In other words, the improvement in quality is exponential in n, the number of processors on the parallel computer.
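The quality-up argument can be checked numerically. The sketch below is my own illustration (the function f and the interval are arbitrary); it contrasts the single bisection step a sequential machine can afford per 3-tuple with the n pipelined steps performed by the processor array, and the ratio of the two error bounds comes out at about 2^{n−1}.

    def bisection_pipeline(f, a, b, stages):
        # each stage performs one bisection step; the last stage outputs the midpoint,
        # so the error bound with respect to the original interval is |b - a| / 2**stages
        for _ in range(stages - 1):
            mid = (a + b) / 2.0
            if f(a) * f(mid) <= 0:
                b = mid
            else:
                a = mid
        return (a + b) / 2.0, (b - a) / 2.0

    f = lambda x: x * x - 2.0                             # a zero at sqrt(2) inside [1, 2]
    n = 10                                                # processors in the array
    x_seq, err_seq = bisection_pipeline(f, 1.0, 2.0, 1)   # sequential: one iteration
    x_par, err_par = bisection_pipeline(f, 1.0, 2.0, n)   # pipelined: n iterations
    print(err_seq / err_par)                              # about 2**(n - 1), the quality-up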


3.4 From Parallel to Unconventional Because parallelism is inherent to all computational paradigms that later came to be known as “unconventional”, my transition from architecture-dependent parallelism to substrate-dependent parallelism and to inherently parallel computational problems was logical, natural, and easy. This is how I embraced quantum computing [59], optical computing [114], biomolecular computing [93], cellular automata [119], slime mold computing [3], unconventional computational problems [38], nonuniversality in computation [37], and various other unconventional paradigms [18, 105–107, 120]. My first specific contribution in this direction was made in the early 1990s, when I developed, with Dr. Sandy Pavel, processor arrays with reconfigurable optical networks for computations such as integer sorting and the Hough transform [109–113]. Since most of my work has focused on that part of unconventional computation that one might call natural computing, the next section briefly outlines my perspective of this area of research [39].

4 Nature Computes In our never-ending quest to understand the workings of Nature, we humans began with the biological cell as a good first place to look for clues. Later, we went down to the molecule, and then further down to the atom, in hopes of unraveling the mysteries of Nature. It is my belief that the most essential constituent of the Universe is the bit, the unit of information and computation. Not the cell, not the molecule, not the atom, but the bit may very well be the ultimate key to reading Nature’s mind. Does Nature compute? Indeed, we can model all the processes of Nature as information processes. For example, cell multiplication and DNA replication are seen as instances of text processing. A chemical reaction is simply an exchange of electrons, that is, an exchange of information between two molecules. The spin of an atom, whether spin up or spin down, is a binary process, the answer to a ‘yes’ or ‘no’ question. Information and computation are present in all natural occurrences, from the simplest to the most complex. From reproduction in ciliates to quorum sensing in bacterial colonies, from respiration and photosynthesis in plants to the migration of birds and butterflies, and from morphogenesis to foraging for food, all the way to human cognition, Nature appears to be continually processing information. Computer scientists study information and computation in Nature in order to: 1. Better understand natural phenomena. We endeavor to show that the computational paradigm is capable of modeling Nature’s work with great precision. Thus, when viewed as computations, the processes of Nature may be better explained and better understood at their most basic state. 2. Exhibit examples of natural algorithms whose features are sufficiently attractive, so as to inspire effective algorithms for conventional computers. Nature’s
algorithms may be more efficient than conventional ones and may lead to better answers in a variety of computational situations. 3. Identify problems where natural processes themselves are the only viable approach towards a solution. Such computational problems may occur in environments where conventional computers are inept, in particular when living organisms, including the human body itself, are the subject of the computation. 4. Obtain a more general definition of what it means ‘to compute’. For example, is there more to computing than arithmetic and logic? Natural phenomena involve receiving information from, and producing information to, the external physical environment–are these computations? The next three sections offer a glimpse into the evolution of my approach to unconventional computation, in general, and natural computing, in particular.

5 Quantum Computing and Quantum Cryptography My foray into the quantum realm was one of the most intriguing and most enjoyable experiences in my research life. Everyone knows the strange and somewhat farfetched connections that some seek to establish between quantum physics and various mystical beliefs (e.g. eastern religions) and everyday observations (e.g. the behavior of identical twins). In what follows I will stay away from these speculations, and highlight instead four adventures in the quantum world.

5.1 Quantum Computers can do More than Conventional Computers Quantum computers are usually promoted as being able to quickly perform computations that are otherwise infeasible on classical computers (such as factoring large numbers). My work with Dr. Marius Nagy, by contrast, has uncovered computations for which a quantum computer is, in principle, more powerful than any conventional computer. One example of such a computation is that of distinguishing among the 2^n entangled states of a quantum system of n qubits: This computation can be performed on a quantum computer but not on any classical computer [88]. Suppose that we have a quantum system composed of n qubits whose state is not known exactly. What we know with certainty is that the system can be described by one of the following 2^n entangled states:
\[
\begin{aligned}
&\tfrac{1}{\sqrt{2}}\,(|000\cdots 0\rangle \pm |111\cdots 1\rangle),\\
&\tfrac{1}{\sqrt{2}}\,(|000\cdots 1\rangle \pm |111\cdots 0\rangle),\\
&\qquad\vdots\\
&\tfrac{1}{\sqrt{2}}\,(|011\cdots 1\rangle \pm |100\cdots 0\rangle). \qquad (1)
\end{aligned}
\]

The challenge for the two candidate computers, conventional and quantum, is to correctly identify the state of the system by resorting to all of their measurement and computational abilities. Alternatively, the problem can also be formulated as a function computation (evaluation), with the unknown quantum state as the input and the corresponding index (between 0 and 2^n − 1) as the output. It is shown in [88] that neither of the two computers can perform this task by using measurements only. However, if we resort to their processing capabilities, the situation changes. Unitary operators preserve inner products, so any unitary evolution of the system described by (1) will necessarily transform it into another orthonormal basis set. Therefore, a unitary transformation must exist that will allow a subsequent measurement in the standard computational basis without any loss of information. Indeed, we demonstrated that such a transformation not only exists, but that in fact it can be implemented efficiently. Our result is stated as follows: The transformation between the following two orthonormal basis sets for the state space spanned by n qubits,

\[
\begin{aligned}
\tfrac{1}{\sqrt{2}}\,(|000\cdots 0\rangle + |111\cdots 1\rangle) &\longleftrightarrow |000\cdots 0\rangle,\\
\tfrac{1}{\sqrt{2}}\,(|000\cdots 0\rangle - |111\cdots 1\rangle) &\longleftrightarrow |111\cdots 1\rangle,\\
\tfrac{1}{\sqrt{2}}\,(|000\cdots 1\rangle + |111\cdots 0\rangle) &\longleftrightarrow |000\cdots 1\rangle,\\
\tfrac{1}{\sqrt{2}}\,(|000\cdots 1\rangle - |111\cdots 0\rangle) &\longleftrightarrow |111\cdots 0\rangle,\\
&\;\;\vdots\\
\tfrac{1}{\sqrt{2}}\,(|011\cdots 1\rangle + |100\cdots 0\rangle) &\longleftrightarrow |011\cdots 1\rangle,\\
\tfrac{1}{\sqrt{2}}\,(|011\cdots 1\rangle - |100\cdots 0\rangle) &\longleftrightarrow |100\cdots 0\rangle, \qquad (2)
\end{aligned}
\]
can be realized by a quantum circuit consisting of 2n − 2 controlled-NOT gates and one Hadamard gate. Due to its symmetric nature, the same quantum circuit can also perform the inverse transformation, from the normal computational basis set to the entangled basis set. By applying the transformation realized by this circuit, the quantum computer can disentangle the qubits composing the system and thus make the act of measuring each qubit entirely independent of the other qubits. This is possible because the final states (after the transformation) are actually classical states which can be interpreted as the indices corresponding to the original entangled quantum states. Obtaining the correct answer to the distinguishability problem amounts to accurately computing the index associated with the given input state. The procedure detailed above gives us a reliable way to do this, 100% of the time. In other words, the function is efficiently computable (in quantum linear time) by a quantum computer.

Can the classical computer replicate the operations performed by the quantum machine? We know that a classical computer can simulate (even if inefficiently) the continuous evolution of a closed quantum system (viewed as a quantum computation in the case of an ensemble of qubits). So, whatever unitary operation is invoked by the quantum computer, it can certainly be simulated mathematically on a Turing Machine. The difference resides in the way the two machines handle the uncertainty inherent in the input. The quantum computer has the ability to transcend this uncertainty about the quantum state of the input system by acting directly on the input in a way that is specific to the physical support employed to encode or describe the input. The classical computer, on the other hand, lacks the ability to process the information at its original physical level, thus making any simulation at another level futile exactly because of the uncertainty in the input. It is worth noting that had the input state been perfectly determined, then any transformation applied to it, even though quantum mechanical in nature, could have been perfectly simulated using the classical means available to a Turing Machine. However, in our case, the classical computer does not have a description of the input in classical terms and can only try to obtain one through direct measurement. This will in turn collapse the superposition characterizing the input state, leaving the classical computer with only a 50% probability of correctly identifying the original quantum state. This means that the problem cannot be solved classically, not even by a Probabilistic Turing Machine. There is no way to improve the 50% error rate of the classical approach in attempting to distinguish among the 2^n states.

This problem taught us that what draws the separation line between a quantum and a classical computer, in terms of computational power, is not the ability to extract information from a quantum system through measurements, but the ability to process information at the physical level used to represent it. For the distinguishability problem presented here, this is the only way to deal with the non-determinism introduced by superposition of states.

At this point, it is important to clarify the implications of our result on the Church-Turing thesis. The definition of the Turing machine was an extraordinary achievement in abstracting out computation as an information manipulating process.
But although the model was thought to be free of any physical assumptions, it is clear today that the
description of the Turing machine harbors an implicit assumption: the information it manipulates is classical. Computation is a physical process and the Turing machine computes in accord with the laws of classical physics. However, the success of quantum mechanics in explaining the reality of the microcosmos is challenging our traditional views on information processing, forcing us to redefine what we mean by computation. In the context of quantum computation, the data in general, and the input in particular, are not restricted to classical, orthogonal values, but can be arbitrary superpositions of them. Therefore, computational problems, such as distinguishing among entangled quantum states, are not an attack on the validity of the Church-Turing thesis, but rather they precisely define its scope. It is in these terms that our result [88] has to be understood: The set of functions computable by a classical Turing machine is a proper subset of those computable using a quantum Turing machine.
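Returning to the disentangling circuit: the gate count quoted above (2n − 2 controlled-NOT gates and one Hadamard) can be checked numerically with a brute-force statevector simulation. The circuit used below is my own reconstruction consistent with that count and with mapping (2): a layer of CNOTs controlled by the first qubit, a Hadamard on the first qubit, and the same layer of CNOTs again. It is offered as a consistency check, not necessarily as the exact circuit of [88].

    import numpy as np

    n = 3                          # number of qubits; small enough for a brute-force check
    dim = 2 ** n

    def cnot(state, control, target):
        # flip bit `target` wherever bit `control` is 1 (qubit 0 is the leftmost bit)
        out = np.zeros_like(state)
        for idx in range(dim):
            if (idx >> (n - 1 - control)) & 1:
                out[idx ^ (1 << (n - 1 - target))] += state[idx]
            else:
                out[idx] += state[idx]
        return out

    def hadamard(state, qubit):
        out = np.zeros_like(state)
        mask = 1 << (n - 1 - qubit)
        for idx in range(dim):
            amp = state[idx] / np.sqrt(2.0)
            out[idx & ~mask] += amp                            # contribution to |0> on `qubit`
            out[idx | mask] += -amp if idx & mask else amp     # contribution to |1>, signed
        return out

    def disentangle(state):
        # 2n - 2 CNOTs and one Hadamard in total
        for i in range(1, n):
            state = cnot(state, 0, i)
        state = hadamard(state, 0)
        for i in range(1, n):
            state = cnot(state, 0, i)
        return state

    for a in range(2 ** (n - 1)):                 # entangled basis: (|0,a> +/- |1,~a>)/sqrt(2)
        for sign in (+1, -1):
            psi = np.zeros(dim)
            psi[a] = 1 / np.sqrt(2.0)
            psi[(dim - 1) ^ a] = sign / np.sqrt(2.0)
            out = disentangle(psi)
            expected = a if sign > 0 else (dim - 1) ^ a        # the index promised by mapping (2)
            assert np.allclose(np.abs(out), np.eye(dim)[expected])
    print("all", dim, "entangled basis states map to distinct classical states")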

5.2 Key Distribution Using Public Information Since ancient times, cryptography had been used primarily in military operations, in diplomatic and intelligence activities, and in the protection of industrial secrets. By the late 1970s, cryptography emerged as an endeavor of public interest. It is at that time that I became involved in cryptographic research. In the summer of 1981, I attended the first ever conference on research in cryptology (that is, cryptography and cryptanalysis) to be organized by, and open to, civilians. The conference was held on the campus of the University of California, Santa Barbara, August 24–26. Attendees were a handful of academics and researchers (who wore name tags) and a couple of spy agency intelligence operatives (without name tags). We slept on bunk beds in the university dormitory, ate our meals at the campus cafeteria, and held our meetings in an undergraduate classroom. It was a far cry from today’s gigantic and expensive security conferences. It was also a shock to my wife, as we happened to be celebrating our honeymoon at that time! She later forgave me. My early work in this area spanned the spectrum of cryptologic research, most particularly cryptosystems [8, 56, 57], digital signatures [6, 7, 83–85], access control in a hierarchy [66, 80, 81], and database security [50, 51, 76, 79]. This interest continues to this day, with new results in graph encryption [44], and fully homomorphic cryptography (which allows operations on encrypted data that are stored on an untrusted cloud) [46]. However, most of my recent work in this field has been in quantum cryptography. Dr. Marius Nagy, Dr. Naya Nagy, and I proved that, contrary to established belief, authentication between two parties, for cryptographic purposes, can be performed through purely quantum means [99]. Our quantum key distribution protocols produce secret information using public information only, which was thought to be impossible for any cryptosystem [91, 92, 94, 100]. We have also provided for the first time quantum cryptographic solutions to the problems of security and identity protection in wireless sensor networks [101, 102], multilevel security in hierarchical systems
[95], and coping with decoherence [90], as well as exposing a less well known aspect of quantum cryptography [104]. The following example illustrates the quantum key distribution protocol described in [104]. Let A and B be two parties wishing to establish a secret cryptographic key in order to protect their communications. We further assume that A and B share an array of ten entangled qubit pairs

(q_1^A, q_1^B), (q_2^A, q_2^B), ..., (q_{10}^A, q_{10}^B),

such that q_1^A, q_2^A, ..., q_{10}^A are in A’s possession, and q_1^B, q_2^B, ..., q_{10}^B are in B’s possession. Note that an array of ten entangled qubit pairs is obviously far too short to be of any practical use; it is, however, convenient and sufficient for this exposition. The type of entanglement used here is phase incompatibility. This means that if A measures q_i^A and obtains a classical bit 1 (0), then it is guaranteed that if B transforms (the now collapsed) q_i^B using a Hadamard gate, the resulting classical bit is 0 (1). The situation is symmetric if B measures q_i^B first. In order to obtain a secret key to be used in their communications, both A and B apply the steps below.

Measuring the entangled qubits. For each qubit, both A and B have the choice of measuring the qubit directly, or applying a Hadamard gate H to the qubit first and then measuring it. Let A’s random choice be:

q_1^A, H q_2^A, H q_3^A, q_4^A, q_5^A, q_6^A, H q_7^A, H q_8^A, q_9^A, q_{10}^A,

yielding 1, 1, 1, 0, 0, 0, 0, 1, 1, 1. Similarly, let B’s random choice be:

H q_1^B, H q_2^B, q_3^B, H q_4^B, q_5^B, q_6^B, q_7^B, H q_8^B, H q_9^B, q_{10}^B,

yielding 0, 1, 0, 1, 1, 0, 1, 0, 0, 1. Of the four measurement options (q_i^A, H q_i^B), (H q_i^A, q_i^B), (q_i^A, q_i^B), (H q_i^A, H q_i^B), only the first two are ‘valid’. This means that there are only five valid qubit pairs in our example, namely, (q_1^A, H q_1^B), (H q_3^A, q_3^B), (q_4^A, H q_4^B), (H q_7^A, q_7^B), (q_9^A, H q_9^B), with values (1, 0), (1, 0), (0, 1), (0, 1), (1, 0).

Publishing the measurement strategy. Suppose that the digit 0 is used by A and B to indicate that a qubit has been measured directly, and the digit 1 to indicate that a Hadamard gate was used. As well, A and B choose to publish the index and value of 40% of their qubits (that is, 4 qubits in this case) selected at random. Thus, the string 0110001100 is disclosed as A’s measurement strategy, and the quadruple
(0001)1, (0010)1, (1001)1, (1010)1 is made public as A’s four randomly selected qubits 1, 2, 9, 10. Similarly, the string 1101000110 is disclosed as B’s measurement strategy, and the quadruple (0001)0, (0101)1, (0111)1, (1000)0 is made public as B’s four randomly selected qubits 1, 5, 7, 8.

Checking for eavesdropping. By computing the exclusive-OR of their two measurement strategies, A and B can determine that only qubits 1, 3, 4, 7, 9 have been measured correctly: (0110001100) ⊕ (1101000110) = 1011001010. A compares the values of q_1^A and q_7^A with the values of q_1^B and q_7^B published by B. Similarly, B compares the values of q_1^B and q_9^B with the values of q_1^A and q_9^A published by A. In each case, in the absence of eavesdropping, the values must be the opposite of one another. For large qubit arrays and a large number of qubits checked, A and B will separately reach the same conclusion as to whether they should continue with the protocol, or whether malevolent interference has disrupted the entanglement, in which case they must discard everything and start all over.

Constructing the secret key. Assuming no eavesdropping has been detected, the unpublished correctly measured qubits form the secret key. In our small example these are qubits 3 and 4, that is, the secret key is 10.
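The bookkeeping of this protocol is easy to simulate classically. The sketch below is my own illustration: it models only the classical record-keeping, with the phase-incompatibility rule reduced to “valid pairs yield complementary bits”, and makes no attempt to simulate the quantum states themselves.

    import random

    def run_protocol(num_pairs=10, publish_fraction=0.4, seed=0):
        rng = random.Random(seed)
        # 0 = measure directly, 1 = apply a Hadamard gate first, as in the example above
        strategy_a = [rng.randint(0, 1) for _ in range(num_pairs)]
        strategy_b = [rng.randint(0, 1) for _ in range(num_pairs)]

        # Outcome model: on a 'valid' pair (exactly one side applied H) the two classical
        # bits are guaranteed complementary; otherwise they are unrelated.
        bits_a = [rng.randint(0, 1) for _ in range(num_pairs)]
        bits_b = [1 - bits_a[i] if strategy_a[i] != strategy_b[i] else rng.randint(0, 1)
                  for i in range(num_pairs)]

        valid = [i for i in range(num_pairs) if strategy_a[i] != strategy_b[i]]

        # Each side publishes its strategy string and a random subset of (index, value) pairs.
        published_a = set(rng.sample(range(num_pairs), int(publish_fraction * num_pairs)))
        published_b = set(rng.sample(range(num_pairs), int(publish_fraction * num_pairs)))

        # Eavesdropping check: on valid positions that the other side published,
        # the two values must be opposite.
        intact = all(bits_a[i] != bits_b[i]
                     for i in valid if i in published_a or i in published_b)

        # Key: A's bits at the valid positions neither side published
        # (B recovers the same key by complementing his own bits there).
        key = [bits_a[i] for i in valid
               if i not in published_a and i not in published_b]
        return intact, key

    intact, key = run_protocol()
    print("entanglement intact:", intact, " shared key:", key)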

5.3 Carving Secret Messages Out of Public Information Using our previous result on one-time pads [96], Dr. Naya Nagy, Dr. Marius Nagy, and I showed that secret information can be shared or passed from a sender to a receiver even if not encoded in a secret message. No parts of the original secret information ever travel via communication channels between the source and the destination. No encoding/decoding key is ever used. The two communicating partners are endowed with coherent qubits that can be read and set while keeping their quantum values over time. Also, any classical communication channel used need not be authenticated. As each piece of secret information has a distinct public encoding, the protocol is equivalent to a one-time pad protocol [103]. Briefly described, the idea of this result is as follows. Suppose that the sender and the receiver each holds one array of a pair of arrays of entangled qubits. Specifically, the ith qubit in the array held by the sender is entangled with the ith qubit in the array held by the receiver. When the sender wishes to send a message M made up of the bits m_1, m_2, ..., m_n, she looks up the positions in her array of an arbitrary sequence of qubits that (when measured) would yield the message
m_1, m_2, ..., m_n. Let the positions (that is, the indices) of these (not necessarily contiguous) qubits be x_1, x_2, ..., x_n, where, for example, x_1 = 24, x_2 = 77, x_3 = 5, and so on. The sender transmits these numbers to the receiver, who then reconstructs the message M using the indices x_1, x_2, ..., x_n in his array of qubits. One could say that this is, in some sense, the “book cipher” approach to cryptography revisited, with a quantum twist that is theoretically unbreakable.
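A classical sketch of the bookkeeping makes the “book cipher with a quantum twist” concrete. In the sketch below (mine, for illustration only) the pre-shared entangled arrays are replaced by identical pre-shared bit arrays, which is a simplification: only the index selection and the index-only transmission are modelled, and the variable names are hypothetical.

    import random

    rng = random.Random(7)
    shared = [rng.randint(0, 1) for _ in range(256)]   # stands in for the measured entangled arrays
    sender_array = shared
    receiver_array = shared

    def encode(message_bits, array, rng):
        # for every message bit pick, at random, a fresh index whose entry equals that bit
        pool = {b: [i for i, v in enumerate(array) if v == b] for b in (0, 1)}
        indices = []
        for b in message_bits:
            indices.append(pool[b].pop(rng.randrange(len(pool[b]))))   # use each index at most once
        return indices

    def decode(indices, array):
        return [array[i] for i in indices]

    message = [1, 0, 1, 1, 0, 0, 1]
    indices = encode(message, sender_array, rng)       # only these indices travel in public
    assert decode(indices, receiver_array) == message
    print("indices transmitted:", indices)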

5.4 Quantum Chess Games of strategy, and in particular chess, have long been considered as true tests of machine intelligence, namely, the ability of a computer to compete against a human in an activity that requires reason. Today, however, most (if not all) human players do not stand a chance against the best computer chess programs. In an attempt to restore some balance, I proposed Quantum Chess (QC), a version of chess that includes an element of unpredictability, putting humans and computers on an equal footing when faced with the uncertainties of quantum physics [32]. Unlike classical chess, where each piece has a unique and known identity, each piece in QC is in a superposition of states, revealing itself to the player as one of its states only when it is selected to move [32, 36]. It is essential to note that bringing quantum physics into chess should be understood as being significantly different from merely introducing to the game an element of chance, the latter manifesting itself, for example, in games involving dice or playing cards, where all the possible outcomes and their odds are known in advance. The behavior of the Quantum Chess pieces is defined as follows: 1. Each Quantum Chess piece is in a superposition of states. Since there are 16 pieces for each side, then (ignoring color for simplicity) four qubits suffice to represent each piece distinctly. Again for simplicity, we assume henceforth that each Quantum Chess piece is a superposition of two conventional Chess pieces. 2. “Touching” a piece is tantamount to an “observation”; this collapses the superposition to one of the classical states, and this defines the move that this piece makes. 3. A piece that lands on a white square remains in the classical state in which it arrived. By contrast, a piece that lands on a black square is considered to
have traversed a quantum circuit, thereby undergoing a quantum transformation: Having arrived in a classical state, it recovers its original quantum superposition. 4. Initially, each piece on the board is in a quantum superposition of states. However, neither of the two players knows the states in superposition for any given piece. 5. At any moment during the game, the locations on the board of all the pieces are known. Each of the two players can see (observe) the states of all the pieces in classical state, but not the ones in superposition. 6. If a player likes the current classical state of a piece about to be moved, then he/she will attempt not to land on a quantum circuit. In the opposite case, the player may wish to take a chance by landing the piece on a quantum circuit. A true QC board with true QC pieces is a long way from being constructed, however. Therefore, my undergraduate summer student Alice Wismath implemented a simulation of the game, and a competition was held pitting humans against a computer. The experiment confirmed that QC indeed provides a level playing field and drew an enormous following for QC on the Internet (in many languages). QC received coverage by the CBC and Wired magazine, among many other outlets, and a complimentary tip of the hat from the (then) reigning Women’s World Chess Champion Alexandra Kosteniuk [122]. Most notably, our work was featured on the Natural Sciences and Engineering Research Council of Canada web site [123]. Alice also created a version of Quantum Chess which includes the principle of quantum entanglement: 1. Both players’ pieces have the same superposition combinations (the specific combinations are randomly assigned at game start up). Each piece therefore has a ‘twin’ piece of the opposite colour. 2. All pieces, except for the king, start in quantum state, and both of their piece types are initially unknown. The location of the pieces within the player’s first two rows is random. 3. Each piece is initially entangled with its twin piece. When the piece is touched, its twin piece (belonging to the opponent) is also touched and collapses to the same state. Since both pieces are now in classical state, the entanglement is broken and the pieces behave normally henceforth. Recently, QC was back in the news when a program inspired by Alice’s implementations pitted physicist Dr. Stephen Hawking against Hollywood actor Paul Rudd [124]. It was fun to get back to computer chess after all these years since my graduate student days and my early work on searching game trees in parallel. Several intriguing open questions remain. As shown in my original paper, unlike a conventional computer, a quantum computer may be able to determine the superposition hidden in a QC piece; does this extra knowledge allow it to develop a winning strategy? What other games of strategy are amenable to the same modifications that led to QC? Are there implications to decision theory?
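A toy model of rules 1–3 above fits in a few lines. The sketch below is mine (equal collapse probabilities, two piece types per piece) and is far simpler than the actual simulations mentioned in the text; it only illustrates the collapse-on-touch and re-superpose-on-a-black-square behaviour.

    import random

    class QuantumChessPiece:
        """Toy piece: a superposition of two classical piece types."""
        def __init__(self, type_a, type_b, rng):
            self.types = (type_a, type_b)
            self.rng = rng
            self.classical = None                 # None means 'in superposition'

        def touch(self):
            # touching is an observation: collapse (equal odds here) and report the type
            if self.classical is None:
                self.classical = self.rng.choice(self.types)
            return self.classical

        def land(self, square_colour):
            # a white square keeps the classical state; a black square acts as a
            # quantum circuit that restores the original superposition
            if square_colour == "black":
                self.classical = None

    rng = random.Random(2024)
    piece = QuantumChessPiece("knight", "bishop", rng)
    print(piece.touch())       # collapses to 'knight' or 'bishop'; this decides the move
    piece.land("black")        # back in superposition
    print(piece.touch())       # may now collapse to the other type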


6 Biomolecular, Cellular, and Slime Mold Computing In this section I will briefly cover other aspects of my work in natural computing.

6.1 Biomolecular Computing With Dr. Virginia Walker, a biologist, I co-supervised three graduate students who built a DNA computer capable of performing a simple form of cryptanalysis [82]. They also put to the test the idea of double encoding as an approach to resisting error accumulation in molecular biology techniques such as ligation, gel electrophoresis, polymerase chain reaction (PCR), and graduated PCR. While the latter question was not completely settled, several pivotal issues associated with using ligation for double encoding were identified for future investigation, such as encoding adaptation problems, strand generation penalties, strand length increases, and the possibility that double encoding may not reduce the number of false negatives.

6.2 Cellular Automata A cellular automaton (CA) is a nature-inspired platform for massively parallel computation with the ability to obtain complex global behavior from simple local rules. The CA model of computation consists of n cells in a regular arrangement, where each cell is only aware of itself and of its immediate neighbors. In plant respiration, the stomata on a leaf are an example of a natural CA. With Dr. Sami Torbey I used the two-dimensional CA model to provide O(n^{1/2}) running-time solutions to computational problems that had remained open for some time:

1. The density classification problem is arguably the most studied problem in cellular automata theory: Given a two-state cellular automaton, does it contain more black or more white cells?
2. The planar convex hull problem is a fundamental problem in computational geometry: Given a set of n points, what is the convex polygon with the smallest possible area containing all of them?

The solution to each of these problems was unconventional. The density classification problem was solved using a “gravity automaton”, that is, one where black cells are programmed to “fall” down towards the bottom of the grid. The solution to the convex hull problem programmed the cells to simulate a rubber band stretched around the point set and then released [117, 118]. We also used cellular automata to solve a coverage problem for mobile sensor networks, thus bringing together for the first time two unconventional computational models, namely, cellular automata and sensor networks [70–74, 119].
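To give a flavour of the “gravity automaton”, here is a deliberately stripped-down sketch (mine, not the algorithm of [117, 118], which does considerably more, including reading off the answer in O(n^{1/2}) steps): black cells simply fall towards the bottom of the grid, one row per step, after which comparing the pile against half the grid hints at the density.

    def gravity_step(grid):
        # a black cell (1) falls one row when the cell directly below it is white (0);
        # the bottom-up sweep approximates a synchronous update in which each cell
        # moves at most one row per step
        rows, cols = len(grid), len(grid[0])
        new = [row[:] for row in grid]
        for r in range(rows - 2, -1, -1):
            for c in range(cols):
                if new[r][c] == 1 and new[r + 1][c] == 0:
                    new[r][c], new[r + 1][c] = 0, 1
        return new

    grid = [[1, 0, 1, 0],
            [0, 0, 0, 1],
            [1, 0, 0, 0],
            [0, 0, 0, 0]]
    for _ in range(len(grid)):          # height-many steps suffice for this small grid
        grid = gravity_step(grid)
    for row in grid:
        print(row)
    black = sum(map(sum, grid))
    print("more black than white:", black > len(grid) * len(grid[0]) - black)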


Most recently, the CA (a highly organized and structured model) provided me with a novel approach to solving two combinatorial problems, specifically, computing single-pair as well as all-pair shortest paths in an arbitrary graph (a highly unstructured collection of data) [42]. Such is the power of abstraction in the science of computation: an appropriate encoding and a clever algorithm are usually all it takes to do the job!
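The cellular flavour of such a computation can be conveyed by a tiny sketch in which every vertex behaves like a cell holding a tentative distance and all cells update synchronously from their neighbours (a Bellman-Ford-style relaxation). This is my own illustration of the general idea; the actual encoding and algorithm of [42] differ.

    INF = float("inf")

    def cellular_shortest_paths(weights, source):
        # weights[u][v] is the length of the edge from u to v, or None if there is no edge
        n = len(weights)
        dist = [INF] * n
        dist[source] = 0
        for _ in range(n - 1):            # n - 1 synchronous sweeps suffice
            dist = [min([dist[v]] + [dist[u] + weights[u][v]
                                     for u in range(n) if weights[u][v] is not None])
                    for v in range(n)]
        return dist

    w = [[None, 2,    5,    None],
         [2,    None, 1,    4   ],
         [5,    1,    None, 1   ],
         [None, 4,    1,    None]]
    print(cellular_shortest_paths(w, 0))   # [0, 2, 3, 4]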

6.3 Computing with Slime Mold Dr. Andrew Adamatzky and I demonstrated that the plasmodium of Physarum polycephalum can compute a map of the Canadian highway system fairly accurately [1, 2, 4]. The result may be interpreted as suggesting that this amoeba, a unicellular organism with no nervous system, no brain, no eyes, and no limbs, is capable of performing what humans may call a complex information processing task (network optimization). Fascinating questions arise. Do simple biologic organisms require a central brain for cognitive behavior? What are the implications to medicine if slime mold were to be bio-engineered to forage for diseased cells? Are there possibilities for green computing using such simple organisms? Our experiment generated considerable interest from the media, including the Discovery Channel, the National Post, PBS Newshour, PBS NOVA, PhysOrg, Science Codex, Science Daily, Popular Science, Scientific Canadian, CKWS TV, and the Whig Standard [125].

7 Nonuniversality in Computation One of the dogmas in Computer Science is the principle of computational universality, and the attendant principle of simulation: “Given enough time and space, any general-purpose computer can, through simulation, perform any computation that is possible on any other general-purpose computer.” Statements such as this are commonplace in the computer science literature, and are served as standard fare in undergraduate and graduate courses alike [21–23]. Sometimes the statement is restricted to the Turing Machine, and is referred to as the Church-Turing Thesis, as in: “A Turing machine can do everything that a real computer can do” [115]. Other times the statement is made more generally about a Universal Computer (which is not to be confused with the more restricted Universal Turing Machine), as in: “It is possible to build a universal computer: a machine that can be programmed to perform any computation that any other physical object can perform” [77]. I consider it one of my most meaningful contributions to have shown that such a Universal Computer cannot exist. This is the Principle of Nonuniversality in Computation [20, 25, 26, 28, 31]. I discovered nonuniversality because of a challenge. While giving an invited talk on parallel algorithms, a member of the audience kept heckling me by repeatedly interrupting to say that anything I can do in parallel he can do sequentially
(specifically, on the Turing Machine). This got me thinking: Are there computations that can be performed successfully in parallel, but not sequentially? It was not long before I found several such computations [49, 68]. The bigger insight came when I realized that I had discovered more than I had set out to find. Each of these computations had the following property: For a problem of size n the computation could be done by a computer capable of n elementary operations per time unit (such as a parallel computer with n processors), but could not be done by a computer capable of fewer than n elementary operations per time unit [34, 35, 62, 78, 87, 89, 98]. This contradicted the aforementioned principle of simulation, and as a consequence also contradicted the principle of computational universality [29, 37, 43]. Thus parallelism was sufficient to establish nonuniversality in computation. With Dr. Nancy Salay, I later proved that parallelism was also necessary for any computer that aspires to be universal [63].

7.1 Theoretical Proof of Nonuniversality in Computation Suppose that time is divided into discrete time units, and let U_1 be a computer capable of V(t) elementary operations at time unit number t, where t is a positive integer and V(t) is finite and fixed a priori for all t. Here, an elementary computational operation may be any one of the following:

1. Obtaining the value of a fixed-size variable from an external medium (for example, reading an input, measuring a physical quantity, and so on),
2. Performing an arithmetic or logical operation on a fixed number of fixed-size variables (for example, adding two numbers, comparing two numbers, and so on), and
3. Returning the value of a fixed-size variable to the outside world (for example, displaying an output, setting a physical quantity, and so on).

Each of these operations can be performed on every conceivable machine that is referred to as a computer. Together, they are used to define, in the most general possible way, what is meant by ‘to compute’: the acquisition, the transformation, and the production of information. Now all computers today (whether theoretical or practical) have V(t) = c, where c is a constant (often a very large number, but still a constant). Here, we do not restrict V(t) to be a constant. Thus, V(t) is allowed to be an increasing function of the time t, such as V(t) = t, or V(t) = 2^{2^t}, and so on, as is the case, for example, with some hypothetical accelerating machines [29]. (The idea behind these machines goes back to Bertrand Russell, Ralph Blake, and Hermann Weyl; recent work is surveyed in [78]). The crucial point is that, once defined, V(t) is never allowed to change; it remains finite and fixed once and for all. Finally, U_1 is allowed to have an unlimited memory in which to store its program, as well as its input data, intermediate results, and outputs. It can interact freely with
the outside world. Furthermore, no limit whatsoever is placed on the time taken by U_1 to perform a computation.

Theorem U: U_1 cannot be a universal computer.

Proof. Let us define a computation C_1 requiring W(t) operations during time unit number t. If these operations are not performed by the beginning of time unit t + 1, the computation C_1 is said to have failed. Let W(t) > V(t), for at least one t. Clearly, U_1 cannot perform C_1. However, C_1 is computable by another computer U_2 capable of W(t) operations during the tth time unit. ∎

This result applies to all computers, theoretical or practical, that claim to be universal. It applies to sequential computers as well as to parallel ones. It applies to conventional as well as unconventional computers. It applies to existing as well as contemplated models of computation, so long as the number of operations they can perform in one time unit is finite and fixed.

7.2 Practical Proof of Nonuniversality in Computation In order to establish nonuniversality in a concrete fashion, I exhibited functions of n variables that are easily evaluated on a computer capable of n elementary operations per time unit, but cannot be evaluated on a computer capable of fewer than n elementary operations per time unit, regardless of how much time and space the latter is given [24, 30, 86, 97]. Examples of such functions are given in what follows. In all examples, an n-processor parallel computer suffices to solve a problem of size n. In the examples of Sects. 7.2.1, 7.2.3, 7.2.5, and 7.2.7, an n-processor parallel computer is necessary to solve the problem.

7.2.1 Computations Obeying Mathematical Constraints

There exists a family of computational problems where, given a mathematical object satisfying a certain property, we are asked to transform this object into another which also satisfies the same property. Furthermore, the property is to be maintained throughout the transformation, and be satisfied by every intermediate object, if any. More generally, the computations we consider here are such that every step of the computation must obey a certain predefined mathematical constraint. (Analogies from popular culture include picking up sticks from a heap one by one without moving the other sticks, drawing a geometric figure without lifting the pencil, and so on). An example of computations obeying a mathematical constraint is provided by a variant to the problem of sorting a sequence of numbers stored in the memory of a computer. For a positive even integer n, where n ≥ 8, let n distinct integers be stored in an array A with n locations A[1], A[2], . . ., A[n], one integer per location. Thus
A[j], for all 1 ≤ j ≤ n, represents the integer currently stored in the jth location of A. It is required to sort the n integers in place into increasing order, such that:

1. After step i of the sorting algorithm, for all i ≥ 1, no three consecutive integers satisfy

   A[j] > A[j + 1] > A[j + 2],   (3)

   for all 1 ≤ j ≤ n − 2.
2. When the sort terminates we have

   A[1] < A[2] < · · · < A[n].   (4)

This is the standard sorting problem in computer science, but with a twist. In it, the journey is more significant than the destination. While it is true that we are interested in the outcome of the computation (namely, the sorted array, this being the destination), in this particular variant we are more concerned with how the result is obtained (namely, there is a condition that must be satisfied throughout all steps of the algorithm, this being the journey). It is worth emphasizing here that the condition to be satisfied is germane to the problem itself; specifically, there are no restrictions whatsoever on the model of computation or the algorithm to be used. Our task is to find an algorithm for a chosen model of computation that solves the problem exactly as posed. One should also observe that computer science is replete with problems with an inherent condition on how the solution is to be obtained. Examples of such problems include: inverting a nonsingular matrix without ever dividing by zero, finding a shortest path in a graph without examining an edge more than once, sorting a sequence of numbers without reversing the order of equal inputs (stable sorting), and so on.

An oblivious (that is, input-independent) algorithm for an n/2-processor parallel computer solves the aforementioned variant of the sorting problem handily in n steps, by means of predefined pairwise swaps applied to the input array A, during each of which A[j] and A[k] exchange positions (using an additional memory location for temporary storage) [15]. Thus, for example, the input array

7 6 5 4 3 2 1 0

would be sorted by the following sequence of comparison/swap operations (in each row, the predefined adjacent pairs are compared to one another and swapped if necessary to put the smaller first):

7 6 5 4 3 2 1 0
6 7 4 5 2 3 0 1
6 4 7 2 5 0 3 1
4 6 2 7 0 5 1 3
4 2 6 0 7 1 5 3
2 4 0 6 1 7 3 5
2 0 4 1 6 3 7 5
0 2 1 4 3 6 5 7
0 1 2 3 4 5 6 7

An input-dependent algorithm succeeds on a computer with (n/2) − 1 processors. However, a sequential computer, and a parallel computer with fewer than (n/2) − 1 processors, both fail to solve the problem consistently, that is, they fail to sort all possible n! permutations of the input while satisfying, at every step, the condition that no three consecutive integers are such that A[j] > A[j + 1] > A[j + 2] for all j. In the particularly nasty case where the input is of the form

A[1] > A[2] > · · · > A[n],   (5)

any sequential algorithm and any algorithm for a parallel computer with fewer than (n/2) − 1 processors fail after the first swap.
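The step table above is exactly what an oblivious odd-even transposition schedule produces: at odd-numbered steps the pairs (A[1], A[2]), (A[3], A[4]), ... are compared, at even-numbered steps the pairs (A[2], A[3]), (A[4], A[5]), ..., and every comparison within a step can be carried out by a separate processor. The short sketch below (my own) replays that schedule on the input above and asserts that constraint (3) holds after every step; it is a demonstration on this instance, not a proof for all inputs.

    def violates(a):
        # constraint (3): three consecutive entries in strictly decreasing order
        return any(a[j] > a[j + 1] > a[j + 2] for j in range(len(a) - 2))

    def oblivious_sort(a):
        a = list(a)
        n = len(a)
        print(a)
        for step in range(n):                    # n predefined parallel steps
            start = 0 if step % 2 == 0 else 1    # alternate the two pairings
            for j in range(start, n - 1, 2):     # all these swaps happen in parallel
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
            assert not violates(a), "constraint (3) broken at step " + str(step + 1)
            print(a)
        return a

    oblivious_sort([7, 6, 5, 4, 3, 2, 1, 0])     # reproduces the step table above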

7.2.2 Time-Varying Computational Complexity

Here, the computational complexity of the problems at hand depends on time (rather than being, as usual, a function of the problem size). Thus, for example, tracking a moving object (such as a spaceship racing towards Mars) becomes harder as it travels away from the observer. Suppose that a certain computation requires that n independent functions, each of one variable, namely, f_1(x_1), f_2(x_2), ..., f_n(x_n), be computed. Computing f_i(x_i) at time t requires C(t) = 2^t algorithmic steps, for t ≥ 0 and 1 ≤ i ≤ n. Further, there is a strict deadline for reporting the results of the computations: All n values f_1(x_1), f_2(x_2), ..., f_n(x_n) must be returned by the end of the third time unit, that is, when t = 3. It should be easy to verify that no sequential computer, capable of exactly one algorithmic step per time unit, can perform this computation for n ≥ 3. Indeed, f_1(x_1) takes C(0) = 2^0 = 1 time unit, f_2(x_2) takes another C(1) = 2^1 = 2 time units, by which time three time units would have elapsed. At this point none of f_3(x_3), ..., f_n(x_n) would have been computed. By contrast, an n-processor parallel computer solves the problem handily. With all processors operating simultaneously, processor i computes f_i(x_i) at time t = 0, for 1 ≤ i ≤ n. This consumes one time unit, and the deadline is met.
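A few lines suffice to check the arithmetic of this example (my own sketch, with the cost model exactly as stated: a function started at time t costs C(t) = 2^t steps, and every result is due by the end of time unit 3).

    def sequential_finish_time(n):
        t = 0
        for _ in range(n):          # one processor computes the f_i one after the other
            t += 2 ** t             # the function started at time t costs C(t) = 2**t steps
        return t

    def parallel_finish_time(n):
        return 2 ** 0               # n processors all start at t = 0 and finish together

    n, deadline = 3, 3
    seq, par = sequential_finish_time(n), parallel_finish_time(n)
    print("sequential finish:", seq, "-> misses the deadline" if seq > deadline else "")
    print("parallel finish:  ", par, "-> meets the deadline" if par <= deadline else "")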

7.2.3 Rank-Varying Computational Complexity

Suppose that a computation consists of n stages. There may be a certain precedence among these stages, or the n stages may be totally independent, in which case the order of execution is of no consequence to the correctness of the computation. Let the rank of a stage be the order of execution of that stage. Thus, stage i is the ith stage to be executed. Here we focus on computations with the property that the number of algorithmic steps required to execute stage i is C(i), that is, a function of i only. When does rank-varying computational complexity arise? Clearly, if the computational requirements grow with the rank, this type of complexity manifests itself in those circumstances where it is a disadvantage, whether avoidable or unavoidable, to being ith, for i ≥ 2. For example, the precision and/or ease of measurement of variables involved in the computation in a stage s may decrease with each stage executed before s. The same analysis as in Sect. 7.2.2 applies by substituting the rank for the time.

7.2.4 Time-Varying Variables

For a positive integer n larger than 1, we are given n functions, each of one variable, namely, f_1, f_2, ..., f_n, operating on the n physical variables x_1, x_2, ..., x_n, respectively. Specifically, it is required to compute f_i(x_i), for i = 1, 2, ..., n. For example, f_i(x_i) may be equal to x_i^2. What is unconventional about this computation is the fact that the x_i are themselves (unknown) functions x_1(t), x_2(t), ..., x_n(t) of the time variable t. It takes one time unit to evaluate f_i(x_i(t)). The problem calls for computing f_i(x_i(t)), 1 ≤ i ≤ n, at time t = t_0. Because the function x_i(t) is unknown, it cannot be inverted, and for k > 0, x_i(t_0) cannot be recovered from x_i(t_0 + k). Note that the value of an input variable x_i(t) changes at the same speed as the processor in charge of evaluating the function f_i(x_i(t)). A sequential computer fails to compute all the f_i as desired. Indeed, suppose that x_1(t_0) is initially operated upon. By the time f_1(x_1(t_0)) is computed, one time unit would have passed. At this point, the values of the n − 1 remaining variables would have changed. The same problem occurs if the sequential computer attempts to first read all the x_i, one by one, and store them before calculating the f_i. By contrast, a parallel computer consisting of n independent processors may perform all the computations at once: For 1 ≤ i ≤ n, and all processors working at the same time, processor i computes f_i(x_i(t_0)), leading to a successful computation.

7.2.5 Interacting Variables

A physical system has n variables, x1 , x2 , . . ., xn , each of which is to be measured or set to a given value at regular intervals. One property of this system is that measuring
or setting one of its variables modifies the values of any number of the system variables uncontrollably, unpredictably, and irreversibly. A sequential computer measures one of the values (x1 , for example) and by so doing it disturbs an unknowable number of the remaining variables, thus losing all hope of recording the state of the system within the given time interval. Similarly, the sequential approach cannot update the variables of the system properly: Once x1 has received its new value, setting x2 may disturb x1 in an uncertain way. A parallel computer with n processors, by contrast, will measure all the variables x1 , x2 , . . . , xn simultaneously (one value per processor), and therefore obtain an accurate reading of the state of the system within the given time frame. Consequently, new values x1 , x2 , . . . , xn can be computed in parallel and applied to the system simultaneously (one value per processor).

7.2.6 Uncertain Time Constraints

In this paradigm, we are given a computation consisting of three distinct phases, namely, input, calculation, and output, each of which needs to be completed by a certain deadline. However, unlike the standard situation in conventional computation, the deadlines here are not known at the outset. In fact, and this is what makes this paradigm truly uncommon, we do not know at the moment the computation is set to start, what needs to be done, and when it should be done. Certain physical parameters, from the external environment surrounding the computation, become spontaneously available. The values of these parameters, once received from the outside world, are then used to evaluate two functions, f 1 and f 2 , that tell us precisely what to do and when to do it, respectively. The difficulty posed by this paradigm is that the evaluation of the two functions f 1 and f 2 is itself quite demanding computationally. Specifically, for a positive integer n, the two functions operate on n variables (the physical parameters). A parallel computer equipped with n processors succeeds in evaluating the two functions on time to meet the deadlines.

7.2.7 The Global Variables Paradigm

In a computation C1 , we assume the presence of n global variables, namely, x0 , x1 , . . . , xn−1 , all of which are time critical, and all of which are initialized to 0. There are also n nonzero local variables, namely, y0 , y1 , . . . , yn−1 , belonging, respectively, to the n processes P0 , P1 , . . . , Pn−1 that make up C1 . The computation C1 is as follows:


P_0: if x_0 = 0 then x_1 ← y_0 else loop forever end if.
P_1: if x_1 = 0 then x_2 ← y_1 else loop forever end if.
P_2: if x_2 = 0 then x_3 ← y_2 else loop forever end if.
⋮
P_{n−2}: if x_{n−2} = 0 then x_{n−1} ← y_{n−2} else loop forever end if.
P_{n−1}: if x_{n−1} = 0 then x_0 ← y_{n−1} else loop forever end if.

Suppose that the computation C_1 begins when x_i = 0, for i = 0, 1, ..., n − 1. For every i, 0 ≤ i ≤ n − 1, if P_i is to be completed successfully, it must be executed while x_i is indeed equal to 0, and not at any later time when x_i has been modified by P_{(i−1) mod n} and is no longer equal to 0. On a parallel computer equipped with n processors, namely, p_0, p_1, ..., p_{n−1}, that conforms to the Exclusive Read Exclusive Write Parallel Random Access Machine (EREW PRAM) model of computation [15], it is possible to test all the x_i, 0 ≤ i ≤ n − 1, for equality to 0 in one time unit; this is followed by assigning to all the x_i, 0 ≤ i ≤ n − 1, their new values during the next time unit. Thus all the processes P_i, 0 ≤ i ≤ n − 1, and hence the computation C_1, terminate successfully. By contrast, a sequential computer such as the Random Access Machine (RAM) model of computation [15] has but a single processor p_0 and, as a consequence, fails to meet the time-critical requirements of C_1. At best, it can perform no more than n − 1 of the n processes as required (assuming it executes the processes in the order P_{n−1}, P_{n−2}, ..., P_1, then fails at P_0 since x_0 was modified by P_{n−1}), and thus does not terminate. An EREW PRAM with only n − 1 processors, p_0, p_1, ..., p_{n−2}, cannot do any better. At best, it too will attempt to execute at least one of the P_i when x_i ≠ 0 and hence fail to complete at least one of the processes on time.

As mentioned earlier, nonuniversality applies to all models of computation, conventional and unconventional, as long as the computational model is capable only of a finite and fixed number of operations per time unit. Accordingly, it is abundantly clear from the global variables paradigm example that unless it is capable of an infinite number of operations per time unit executed in parallel [62], no computer can be universal—not the universal Turing machine [25, 37], not an accelerating machine [26], not even a ‘conventional’ time traveling machine (that is, a machine that allows travel through time without the paradoxes of time travel) [31]. Several attempts were made to disprove nonuniversality (see, for example, [69, 75]). They all fell short [35, 37]. Clearly, in order for a challenge to nonuniversality to be valid, it must exhibit, for each of the computations described in Sects. 7.2.1–7.2.7, a concrete algorithm that can perform this computation on a ‘universal’ computational model. Because each of these computations requires n operations per time unit in order to be executed successfully, where n is a variable, and because these putative algorithms would run on a computer, by definition, capable of only a finite and fixed number of operations per time unit, this leads to a contradiction. Indeed, it has


Indeed, it has been suggested that the Principle of Nonuniversality in Computation is the computer science equivalent of Gödel’s Incompleteness Theorem in mathematical logic [27].

And thus the loop was closed. My journey had taken me from parallelism to unconventional computation, and from unconventional computational problems to nonuniversality. Now, nonuniversality has brought me back to parallel computation. All said, I trust that unconventional computation provided a perfect research home for my character and my way of thinking, and uncovered a wondrous world of opportunities in which to invent and create.

8 Looking to the Future

Unconventional is a movable adjective. What is unconventional today may be conventional tomorrow. What will unconventional computing look like 32 years, 64 years, 128 years from now? It is difficult to say which of the current unconventional information processing ideas will be conventional wisdom in the future. It is equally hard to predict what computational paradigms our descendants in 2148 will consider ‘unconventional’. In this section, I take a third, perhaps safer approach of forecasting what contributions to humanity will be made by today’s efforts in the field of unconventional computing.

8.1 The Meaning of Life

By the eighth decade of this century, humans will receive their first significant gift from unconventional computation. It would have all started by achieving a complete understanding of the biological cell through the dual lenses of information and computation. The behavior of the cell will be modeled entirely as a computer program. Thus, a cell with a disease is essentially a program with a flaw. In order to cure the cell, it suffices to fix the program. Once this is done through the help of unconventional computer scientists, healthcare will advance by leaps and bounds. Disease will be conquered. It is the end of death as a result of sickness [40].

It is also the end of death from old age. When this century closes, aging will be a thing of the past. It is well known that we grow old because our genetic code constantly makes copies of itself and these copies inevitably deteriorate. However, this need not happen if a digital version of an individual’s genetic code is created and the analog version is refreshed from the digital one on a regular basis. Natural death is no longer a necessity.

The disappearance of death from sickness and from aging, coupled with the ability to procreate indefinitely, will present the danger of an overpopulated planet Earth. Once again, unconventional computation will provide the solution, namely, fast space travel. Huge numbers of humans will undertake a voyage every year to settle on


another celestial object, in search of a new life, new possibilities, and new knowledge. They will also bring with them humanity’s ideas and ideals in every area of endeavor.

8.2 The Arrow of Time

By the middle of next century, humans will finally achieve control of time, the last aspect of their lives over which they had always been powerless. Time, which had constantly dominated and regulated their day to day existence, will no longer be their master. Human beings will be able to travel backward in time and visit their ancestors. They will be able to travel forward in time and meet their descendants. Free from time’s grip, we will be able to do extraordinary things.

The technology will be fairly simple. First, humans will use information and computation to unify all the forces of nature. This will lead to a theory of everything. Quantum theory and gravity will be unified. Reversibility is fundamental to quantum theory. The curvature of space is inherent to the general theory of relativity. Closed timelike curves (CTCs) will become a reality. Traveling through time and space will logically follow.

Travel to the past will allow universality in computation to be restored. All that will be needed is to equip the universal computer with the ability to time travel to a past where it meets a younger version of itself (unconventionally and paradoxically). If a computer can travel to the past and meet a younger version of itself, then there are two computers in the past and they can work in parallel to solve a problem that requires that two operations be applied simultaneously. More generally, the computer can travel to the past repeatedly, as many times as necessary, in order to encounter additional versions of its younger self, and solve a problem that requires that several operations be applied simultaneously [41].

9 Conclusion

It is relevant to mention in closing that the motto of my academic department is Sum ergo computo, which means I am therefore I compute. The motto speaks at different levels.

At one level, it expresses our identity. The motto says that we are computer scientists. Computing is what we do. Our professional reason for being is the theory and practice of Computing. It also says that virtually every activity in the world in which we live is run by a computer, in our homes, our offices, our factories, our hospitals, our places of entertainment and education, our means of transportation and communication, all. Just by the simple fact of living in this society, we are always computing.

At a deeper level the motto asserts that “Being is computing”. In these three words is encapsulated our vision, and perhaps more concretely our model of computing in Nature. To be precise, from our perspective as humans seeking to comprehend the


natural world around us, the motto says that computing permeates the Universe and drives it: Every atom, every molecule, every cell, everything, everywhere, at every moment, is performing a computation. To be is to compute.

What a magnificent time to be a computer scientist! Computing is the most influential science of our time. Its applications in every walk of life are making the world a better place in which to live. Unconventional computation offers a wealth of uncharted territories to be explored. Natural computing may hold the key to the meaning of life itself. Indeed, unconventional information processing may be the vehicle to conquer time, the final frontier. What more can we hope for?

References 1. Adamatzky, A., Akl, S.G.: Trans-Canada slimeways: slime mould imitates the Canadian transport network. Int. J. Nat. Comput. Res. 2, 31–46 (2011) 2. Adamatzky, A., Akl, S.G.: Trans-Canada slimeways: from coast to coast to coast. In: Adamatzky, A. (ed.) Bioevaluation of World Transport Networks, pp. 113–125. World Scientific Publishing, London (2012) 3. Adamatzky, A., Akl, S.G., Alonso-Sanz, R., Van Dessel, W., Ibrahim, Z., Ilachinski, A., Jones, J., Kayem, A.V.D.M., Martínez, G.J., De Oliveira, P., Prokopenko, M., Schubert, T., Sloot, P., Strano, E., Yang, X.S.: Biorationality of motorways. In: Adamatzky, A. (ed.) Bioevaluation of World Transport Networks, pp. 309–325. World Scientific Publishing, London (2012) 4. Adamatzky, A., Akl, S.G., Alonso-Sanz, R., Van Dessel, W., Ibrahim, Z., Ilachinski, A., Jones, J., Kayem, A.V.D.M., Martínez, G.J., De Oliveira, P., Prokopenko, M., Schubert, T., Sloot, P., Strano, E., Yang, X.S.: Are motorways rational from slime mould’s point of view? Int. J. Parallel Emergent Distrib. Syst. 28, 230–248 (2013) 5. Adamatzky, A., Akl, S.G., Burgin, M., Calude, C.S., Costa, J.F., Dehshibi, M.M., Gunji, Y.P., Konkoli, Z., MacLennan, B., Marchal, B., Margenstern, M., Martinez, G.J., Mayne, R., Morita, K., Schumann, A., Sergeyev, Y.D., Sirakoulis, G.C., Stepney, S., Svozil, K., Zenil, H.: East-west paths to unconventional computing. Prog. Biophys. Mol. Biol. Elsevier, Amsterdam (2017) (Special issue on Integral Biomathics: The Necessary Conjunction of the Western and Eastern Thought Traditions for Exploring the Nature of Mind and Life) 6. Akl, S.G.: Digital signatures with blindfolded arbitrators who cannot form alliances. In: Proceedings of 1982 IEEE Symposium on Security and Privacy, pp. 129–135. IEEE, Oakland (1982) 7. Akl, S.G.: Digital signatures: a tutorial survey. Computer 16, 15–24 (1983) 8. Akl, S.G.: On the security of compressed encodings. In: Chaum, D. (ed.) Advances in Cryptology, pp. 209–230. Plenum Press, New York (1984) 9. Akl, S.G.: A prototype computer for the year 2000. Queen’s Gaz. 16, 325–332 (1984) 10. Akl, S.G.: Optimal parallel algorithms for selection, sorting and computing convex hulls. In: Toussaint, G.T. (ed.) Computational Geometry, pp. 1–22. North Holland, Amsterdam (1985) 11. Akl, S.G.: Parallel Sorting Algorithms. Academic Press, Orlando (1985) 12. Akl, S.G.: Checkers playing programs. In: Shapiro, S.C. (ed.) Encyclopedia of Artificial Intelligence, pp. 88–93. Wiley, New York (1987) 13. Akl, S.G.: The Design and Analysis of Parallel Algorithms. Prentice-Hall, Englewood Cliffs (1989) 14. Akl, S.G.: Memory access in models of parallel computation: from folklore to synergy and beyond. In: Dehne, F., Sack, J.-R., Santoro, N. (eds.) Algorithms and Data Structures, pp. 92–104. Springer, Berlin (1991) 15. Akl, S.G.: Parallel Computation: Models and Methods. Prentice Hall, Upper Saddle River (1997)


16. Akl, S.G.: Parallel real-time computation: sometimes quantity means quality. In: Sudborough, H., Monien, B., Hsu, D.F. (eds.) Proceedings of the International Symposium on Parallel Architectures, Algorithms and Networks, pp. 2–11. IEEE, Dallas (2000) 17. Akl, S.G.: The design of efficient parallel algorithms. In: Blazewicz, J., Ecker, K., Plateau, B., Trystram, D. (eds.) Handbook on Parallel and Distributed Processing, pp. 13–91. Springer, Berlin (2000) 18. Akl, S.G.: Parallel real-time computation of nonlinear feedback functions. Parallel Process. Lett. 13, 65–75 (2003) 19. Akl, S.G.: Superlinear performance in real-time parallel computation. J. Supercomput. 29, 89–111 (2004) 20. Akl, S.G.: The myth of universal computation. In: Trobec, R., Zinterhof, P., Vajteršic, M., Uhl, A. (eds.) Parallel Numerics, pp. 211–236. University of Salzburg, Salzburg and Jozef Stefan Institute, Ljubljana (2005) 21. Akl, S.G.: Non-Universality in Computation: The Myth of the Universal Computer. Queen’s University, School of Computing (2005). http://research.cs.queensu.ca/Parallel/projects.html 22. Akl, S.G.: A computational challenge. Queen’s University, School of Computing (2006). http://www.cs.queensu.ca/home/akl/CHALLENGE/A-Computational-Challenge.htm 23. Akl, S.G.: Universality in computation: some quotes of interest. Technical Report No. 2006511, School of Computing, Queen’s University (2006). http://www.cs.queensu.ca/home/akl/ techreports/quotes.pdf 24. Akl, S.G.: Conventional or unconventional: is any computer universal? In: Adamatzky, A., Teuscher, C. (eds.) From Utopian to Genuine Unconventional Computers, pp. 101–136. Luniver Press, Frome (2006) 25. Akl, S.G.: Three counterexamples to dispel the myth of the universal computer. Parallel Process. Lett. 16, 381–403 (2006) 26. Akl, S.G.: Even accelerating machines are not universal. Int. J. Unconv. Comput. 3, 105–121 (2007) 27. Akl, S.G.: Gödel’s incompleteness theorem and nonuniversality in computing. In: Nagy, M., Nagy, N. (eds.) Proceedings of the Workshop on Unconventional Computational Problems, pp. 1–23. Sixth International Conference on Unconventional Computation, Kingston (2007) 28. Akl, S.G.: Unconventional computational problems with consequences to universality. Int. J. Unconv. Comput. 4, 89–98 (2008) 29. Akl, S.G.: Evolving computational systems. In: Rajasekaran, S., Reif, J.H. (eds.) Parallel Computing: Models, Algorithms, and Applications, pp. 1–22. Taylor and Francis, Boca Raton (2008) 30. Akl, S.G.: Ubiquity and simultaneity: the science and philosophy of space and time in unconventional computation. Keynote address, Conference on the Science and Philosophy of Unconventional Computing, The University of Cambridge, Cambridge (2009) 31. Akl, S.G.: Time travel: A new hypercomputational paradigm. Int. J. Unconv. Comput. 6, 329–351 (2010) 32. Akl, S.G.: On the importance of being quantum. Parallel Process. Lett. 20, 275–286 (2010) (Special Issue on Advances in Quantum Computation. Qiu, K. (ed.)) 33. Akl, S.G.: Bitonic sort. In: Padua, D. (ed.) Encyclopedia of Parallel Computing, pp. 139–146. Springer, New York (2011) 34. Akl, S.G.: What is computation? Int. J. Parallel Emergent Distrib. Syst. 29, 337–345 (2014) 35. Akl, S.G.: Nonuniversality explained. Int. J. Parallel Emergent Distrib. Syst. 31, 201–219 (2016) 36. Akl, S.G.: The quantum chess story. Int. J. Unconv. Comput. 12, 207–219 (2016) 37. Akl, S.G.: Nonuniversality in computation: fifteen misconceptions rectified. In: Adamatzky, A. (ed.) Advances in Unconventional Computing, pp. 
1–30. Springer, Cham (2017) 38. Akl, S.G.: Unconventional computational problems. In: Meyers, R.A. (ed.) Encyclopedia of Complexity and Systems Science. Springer, New York (2017) 39. Akl, S.G.: Natures computes. Queen’s Alumni Rev. (2), 44 (2017)


40. Akl, S.G.: Information and computation: the essence of it all. Int. J. Unconv. Comput. 13, 187–194 (2017) 41. Akl, S.G.: Time: the final frontier. Int. J. Unconv. Comput. 13, 273–281 (2017) 42. Akl, S.G.: Computing shortest paths with cellular automata. J. Cell. Autom. 13, 33–52 (2018) 43. Akl, S.G.: Unconventional wisdom: superlinear speedup and inherently parallel computations. Int. J. Unconv. Comput. 13, 283–307 (2018) 44. Akl, S.G.: How to encrypt a graph. Int. J. Parallel Emergent Distrib. Syst 45. Akl, S.G.: A computational journey in the true north. Int. J. Parallel Emergent Distrib. Syst. (Special Issue on A Half Century of Computing. Adamatzky, A.I., Watson, L.T. (eds.)) 46. Akl, S.G. and Assem, I.: Fully homomorphic encryption: a general framework and implementations. Int. J. Parallel Emergent Distrib. Syst 47. Akl, S.G., Barnard, D.T., Doran, R.J.: Searching game trees in parallel. In: Proceedings of the Third Biennial Conference of the Canadian Society for Computational Studies of Intelligence, pp. 224–231. Victoria (1980) 48. Akl, S.G., Barnard, D.T., Doran, R.J.: Design, analysis and implementation of a parallel tree search algorithm. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-4, 192–203 (1982) 49. Akl, S.G., Cordy, B., Yao, W.: An analysis of the effect of parallelism in the control of dynamical systems. Int. J. Parallel Emergent Distrib. Syst. 20, 147–168 (2005) 50. Akl, S.G., Denning, D.E.: Checking classification constraints for consistency and completeness. In: Proceedings of 1987 IEEE Symposium on Security and Privacy, pp. 196–201. IEEE, Oakland (1987) 51. Akl, S.G., Denning, D.E.: Checking classification constraints for consistency and completeness. In: Turn, R. (ed.) Advances in Computer System Security, vol. 3, pp. 271–276. Artech House, Norwood (1988) 52. Akl, S.G., Doran, R.J.: A comparison of parallel implementations of the alpha-beta and Scout tree search algorithms using the game of checkers. In: Bramer, M.A. (ed.) Computer Game Playing, pp. 290–303. Wiley, Chichester (1983) 53. Akl, S.G., Fava Lindon, L.: Paradigms for superunitary behavior in parallel computations. J. Parallel Algorithms Appl. 11, 129–153 (1997) 54. Akl, S.G., Lindon, L.: Modèles de calcul parallèle à mémoire partagée. In: Cosnard, M., Nivat, M., Robert, Y. (eds.) Algorithmique Parallèle, pp. 15–29. Masson, Paris (1992) 55. Akl, S.G., Lyons, K.A.: Parallel Computational Geometry. Prentice Hall, Englewood Cliffs (1993) 56. Akl, S.G., Meijer, H.: A fast pseudo random permutation generator with applications to cryptology. In: Blakley, G.R., Chaum, D. (eds.) Advances in Cryptology. Lecture Notes in Computer Science, vol. 196, pp. 269–275. Springer, Berlin (1985) 57. Akl, S.G., Meijer, H.: Two new secret key cryptosystems. In: Pichler, F. (ed.) Advances in Cryptology. Lecture Notes in Computer Science, vol. 219, pp. 96–102. Springer, Berlin (1986) 58. Akl, S.G., Nagy, M.: Introduction to parallel computation. In: Trobec, R., Vajteršic, M., Zinterhof, P. (eds.) Parallel Computing: Numerics, Applications, and Trends, pp. 43–80. Springer, London (2009) 59. Akl, S.G., Nagy, M.: The future of parallel computation. In: Trobec, R., Vajteršic, M., Zinterhof, P. (eds.) Parallel Computing: Numerics, Applications, and Trends, pp. 471–510. Springer, London (2009) 60. Akl, S.G., Newborn, M.M.: The principal continuation and the killer heuristic. In: Proceedings of the ACM Annual Conference, pp. 466–473. ACM, Seattle (1977) 61. Akl, S.G., Qiu, K.: Les réseaux d’interconnexion star et pancake. 
In: Cosnard, M., Nivat, M., Robert, Y. (eds.) Algorithmique Parallèle, pp. 171–181. Masson, Paris (1992) 62. Akl, S.G., Salay, N.: On computable numbers, nonuniversality, and the genuine power of parallelism. Int. J. Unconv. Comput. 11, 283–297 (2015) 63. Akl, S.G.: On computable numbers, nonuniversality, and the genuine power of parallelism. In: Adamatzky, A. (ed.) Emergent Computation: A Festschrift for Selim G. Akl, pp. 57–69. Springer, Cham (2017)


64. Akl, S.G., Stojmenovi´c, I.: Generating combinatorial objects on a linear array of processors. In: Zomaya, A.Y. (ed.) Parallel Computing: Paradigms and Applications, pp. 639–670. International Thomson Computer Press, London (1996) 65. Akl, S.G., Stojmenovi´c, I.: Broadcasting with selective reduction: a powerful model of parallel computation. In: Zomaya, A.Y. (ed.) Parallel and Distributed Computing Handbook, pp. 192– 222. McGraw-Hill, New York (1996) 66. Akl, S.G., Taylor, P.D.: Cryptographic solution to a problem of access control in a hierarchy. ACM Trans. Comput. Syst. 1, 239–248 (1983) 67. Akl, S.G., Toussaint, G.T.: A fast convex hull algorithm. Inf. Process. Lett. 7, 219–222 (1978) 68. Akl, S.G., Yao, W.: Parallel computation and measurement uncertainty in nonlinear dynamical systems. J. Math. Model. Algorithms 4, 5–15 (2005) 69. Bringsjord, S.: Is universal computation a myth? In: Adamatzky, A. (ed.) Emergent Computation: A Festschrift for Selim G. Akl, pp. 19–37. Springer, Cham (2017) 70. Choudhury, S., Salomaa, K., Akl, S.G.: A cellular automaton model for wireless sensor networks. J. Cell. Autom. 7, 223–242 (2012) 71. Choudhury, S., Salomaa, K., Akl, S.G.. A cellular automaton model for connectivity preserving deployment of mobile wireless sensors. In: Proceedings of the Second IEEE International Workshop on Smart Communication Protocols and Algorithms, pp. 6643–6647. IEEE, Ottawa (2012) 72. Choudhury, S., Salomaa, K., Akl, S.G.: Energy efficient cellular automaton based algorithms for mobile sensor networks. In: Proceedings of the 2012 IEEE Wireless Communications and Networking Conference, pp. 2341–2346. IEEE, Paris (2012) 73. Choudhury, S., Salomaa, K., Akl, S.G.: Cellular automaton based algorithms for the dispersion of mobile wireless sensor networks. Int. J. Parallel Emergent Distrib. Syst. 29, 147–177 (2014) 74. Choudhury, S., Salomaa, K., Akl, S.G.: Cellular automaton based localized algorithms for mobile sensor networks. Int. J. Unconv. Comput. 11, 417–447 (2015) 75. Dadizadeh, A.: Two problems believed to exhibit superunitary behaviour turn out to fall within the church-turing thesis. M.Sc. Thesis, Bishop’s University, Canada (2018) 76. Denning, D.E., Akl, S.G., Heckman, M., Lunt, T.F., Morgenstern, M., Neumann, P.G., Schell, R.R.: Views for multilevel database security. IEEE Trans. Softw. Eng. SE-13, 129–140 (1987) 77. Deutsch, D.: The Fabric of Reality, p. 134. Penguin Books, London (1997) 78. Fraser, R., Akl, S.G.: Accelerating machines: a review. Int. J. Parallel Emergent Distrib. Syst. 23, 81–104 (2008) 79. Kayem, A., Martin, P., Akl, S.G.: Adaptive Cryptographic Access Control. Springer, New York (2010) 80. Kayem, A.V.D.M., Martin, P., Akl, S.G.: Self-protecting access control: on mitigating privacy violations with fault tolerance. In: Yee, G.O.M (ed) Privacy Protection Measures and Technologies in Business Organizations: Aspects and Standards, pp. 95–128. IGI Global, Hershey (2012) 81. MacKinnon, S., Taylor, P.D., Meijer, H., Akl, S.G.: An optimal algorithm for assigning cryptographic keys to control access in a hierarchy. IEEE Trans. Comput. C-34, 797–802 (1985) 82. McKay, C.D., Affleck, J.G., Nagy, N., Akl, S.G., Walker, V.K.: Molecular codebreaking and double encoding - Laboratory experiments. Int. J. Unconv. Comput. 5, 547–564 (2009) 83. Meijer, H., Akl, S.G.: Digital signature schemes. In: Proceedings of Crypto 81: First IEEE Workshop on Communications Security, pp. 65–70. IEEE, Santa Barbara (1981) 84. Meijer, H., Akl, S.G.: Digital signature schemes. 
Cryptologia 6, 329–338 (1982) 85. Meijer, H., Akl, S.G.: Remarks on a digital signature scheme. Cryptologia 7, 183–186 (1983) 86. Nagy, M., Akl, S.G.: On the importance of parallelism for quantum computation and the concept of a universal computer. In: Calude, C.S., Dinneen, M.J., Paun, G., Pérez-Jiménez, M., de, J., Rozenberg, G. (eds.) Unconventional Computation, pp. 176–190. Springer, Heildelberg (2005) 87. Nagy, M., Akl, S.G.: Quantum measurements and universal computation. Int. J. Unconv. Comput. 2, 73–88 (2006)


88. Nagy, M., Akl, S.G.: Quantum computing: beyond the limits of conventional computation. Int. J. Parallel Emergent Distrib. Syst. 22, 123–135 (2007) 89. Nagy, M., Akl, S.G.: Parallelism in quantum information processing defeats the Universal Computer. Parallel Process. Lett. 17, 233–262 (2007) (Special Issue on Unconventional Computational Problems) 90. Nagy, M., Akl, S.G.: Coping with decoherence: parallelizing the quantum Fourier transform. Parallel Process. Lett. 20, 213–226 (2010) (Special Issue on Advances in Quantum Computation. Qiu, K. (ed.)) 91. Nagy, M., Akl, S.G.: Entanglement verification with an application to quantum key distribution protocols. Parallel Process. Lett. 20, 227–237 (2010) (Special Issue on Advances in Quantum Computation. Qiu, K. (ed.)) 92. Nagy, M., Akl, S.G., Kershaw, S.: Key distribution based on the quantum Fourier transform. Int. J. Secur. Appl. 3, 45–67 (2009) 93. Nagy, N., Akl, S.G.: Aspects of biomolecular computing. Parallel Process. Lett 17, 185–211 (2007) 94. Nagy, N., Akl, S.G.: Authenticated quantum key distribution without classical communication. Parallel Process. Lett. 17, 323–335 (2007) (Special Issue on Unconventional Computational Problems) 95. Nagy, N., Akl, S.G.: A quantum cryptographic solution to the problem of access control in a hierarchy. Parallel Process. Lett. 20, 251–261 (2010) (Special Issue on Advances in Quantum Computation. Qiu, K. (ed)) 96. Nagy, N., Akl, S.G.: One-time pads without prior encounter. Parallel Process. Lett. 20, 263– 273 (2010) (Special Issue on Advances in Quantum Computation. Qiu, K. (ed)) 97. Nagy, N., Akl, S.G.: Computations with uncertain time constraints: effects on parallelism and universality. In: Calude, C.S., Kari, J., Petre, I., Rozenberg, G. (eds.) Unconventional Computation, pp. 152–163. Springer, Heidelberg (2011) 98. Nagy, N., Akl, S.G.: Computing with uncertainty and its implications to universality. Int. J. Parallel Emergent Distrib. Syst. 27, 169–192 (2012) 99. Nagy, N., Akl, S.G., Nagy, M.: Applications of Quantum Cryptography. Lambert Academic Publishing, Saarbrüken (2016) 100. Nagy, N., Nagy, M., Akl, S.G.: Key distribution versus key enhancement in quantum cryptography. Parallel Process. Lett. 20, 239–250 (2010) (Special Issue on Advances in Quantum Computation. Qiu, K. (ed.)) 101. Nagy, N., Nagy, M., Akl, S.G.: Hypercomputation in a cryptographic setting: solving the identity theft problem using quantum memories. Int. J. Unconv. Comput. 6, 375–398 (2010) 102. Nagy, N., Nagy, M., Akl, S.G.: Quantum security in wireless sensor networks. Nat. Comput. 9, 819–830 (2010) 103. Nagy, N., Nagy, M., Akl, S.G.: Carving secret messages out of public information. J. Comput. Sci. 11, 64–70 (2015) 104. Nagy, N., Nagy, M., Akl, S.G.: A less known side of quantum cryptography. In: Adamatzky, A. (ed.) Emergent Computation: A Festschrift for Selim G. Akl, pp. 121–169. Springer, Cham (2017) 105. Palioudakis, A., Salomaa, K., Akl, S.G.: Unary NFAs, limited nondeterminism, and Chrobak normal form. Int. J. Unconv. Comput. 11, 395–416 (2015) 106. Palioudakis, A., Salomaa, K., Akl, S.G.: Operational state complexity of unary NFAs with finite nondeterminism. Theor. Comput. Sci. 610, 108–120 (2016) 107. Palioudakis, A., Salomaa, K., Akl, S.G.: Worst case branching and other measures of nondeterminism. Int. J. Found. Comput. Sci. 28, 195–210 (2017) 108. Osiakwan, C.N.K., Akl, S.G.: A perfect speedup parallel algorithm for the assignment problem on complete weighted bipartite graphs. 
In: Rishe, N., Navathe, S., Tal, D. (eds.) Parallel Architectures, pp. 161–180. IEEE Computer Society Press, Los Alamitos (1991) 109. Pavel, S., Akl, S.G.: Matrix operations using arrays with reconfigurable optical buses. J. Parallel Algorithms Appl. 8, 223–242 (1996)


110. Pavel, S., Akl, S.G.: Area-time trade-offs in arrays with optical pipelined buses. Appl. Opt. 35, 1827–1835 (1996) 111. Pavel, S., Akl, S.G.: On the power of arrays with reconfigurable optical buses. In: Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications, pp. 1443–1454. Sunnyvale, (1996) 112. Pavel, S., Akl, S.G.: Efficient algorithms for the Hough transform on arrays with reconfigurable optical buses. In: Proceedings of the International Parallel Processing Symposium, pp. 697– 701. Maui (1996) 113. Pavel, S. and Akl, S.G.: Integer sorting and routing in arrays with reconfigurable optical buses. Int. J. of Found. of Comput. Sci. 9, 99–120 (1998) (Special Issue on Interconnection Networks) 114. Pavel, S.D., Akl, S.G.: Computing the Hough transform on arrays with reconfigurable optical buses. In: Li, K., Pan, Y., Zheng, S.-Q. (eds.) Parallel Computing Using Optical Interconnections, pp. 205–226. Kluwer Academic Publishers, Dordrecht (1998) 115. Sipser, M.: Introduction to the Theory of Computation, p. 125. PWS, Boston (1997) 116. Taleb, N., Akl, S.G.: Error detection in asynchronous sequential circuits - the hardware approach. In: Proceedings of the Tenth Conference on Statistics and Scientific Computations, pp. S201–S215. Cairo University, Cairo (1974) 117. Torbey, S., Akl, S.G.: An exact and optimal local solution to the two-dimensional convex hull of arbitrary points problem. J. Cell. Autom. 4, 137–146 (2009) 118. Torbey, S., Akl, S.G.: An exact solution to the two-dimensional arbitrary-threshold density classification problem. J. Cell. Autom. 4, 225–235 (2009) 119. Torbey, S., Akl, S.G.: Reliable node placement in wireless sensor networks using cellular automata. In: Durand-Lose, J., Jonoska, N. (eds.) Unconventional Computation and Natural Computation, pp. 210–221. Springer, Heidelberg (2012) 120. Torbey, S., Akl, S.G., Redfearn, D.: Time-scale analysis of signals without basis functions: application to sudden cardiac arrest prediction. Int. J. Unconv. Comput. 11, 375–394 (2015) 121. https://www.amazon.com/gp/richpub/listmania/fullview/32A3PMKCJH0Y8 122. http://www.cs.queensu.ca/home/akl/QUANTUMCHESS/QCOnTheWeb.pdf 123. http://www.nserc-crsng.gc.ca/Media-Media/ImpactStory-ArticlesPercutant_eng.asp? ID=1053 124. https://www.youtube.com/watch?v=Hi0BzqV_b44 125. http://research.cs.queensu.ca/home/akl/SLIMEMOLD/SlimeMoldInTheNews.pdf

A Simple Hybrid Event-B Model of an Active Control System for Earthquake Protection

Richard Banach and John Baugh

Abstract In earthquake-prone zones of the world, severe damage to buildings and life endangering harm to people pose a major risk when severe earthquakes happen. In recent decades, active and passive measures to prevent building damage have been designed and deployed. A simple model of an active damage prevention system, founded on earlier work, is investigated from a model based formal development perspective, using Hybrid Event-B. The non-trivial physical behaviour in the model is readily captured within the formalism. However, when the usual approximation and discretization techniques from engineering and applied mathematics are used, the rather brittle refinement techniques used in model based formal development start to break down. Despite this, the model developed stands up well when compared via simulation with a standard approach. The requirements of a richer formal development framework, better able to cope with applications exhibiting non-trivial physical elements, are discussed.

R. Banach (B) School of Computer Science, University of Manchester, Oxford Road, Manchester M13 9PL, UK e-mail: [email protected]
J. Baugh Department of Civil, Construction, and Environmental Engineering, North Carolina State University, Raleigh, NC 27695-7908, USA e-mail: [email protected]
© Springer Nature Switzerland AG 2020 A. Adamatzky and V. Kendon (eds.), From Astrophysics to Unconventional Computation, Emergence, Complexity and Computation 35, https://doi.org/10.1007/978-3-030-15792-0_7

1 Introduction

In earthquake-prone zones of the world, damage to buildings during an earthquake is a serious problem, leading to major rebuilding costs if the damage is severe. This is to say nothing of the harm to people that ensues if they happen to be inside, or near to, a building that fails structurally. One approach to mitigating the problem is to make buildings so robust that they can withstand the severest earthquake that may befall them; but this not only greatly increases cost, but also places limits on the size


and shape of buildings, so that the desired robustness remains feasible with available materials. In recent decades, an alternative approach to earthquake protection has been to use control techniques to dissipate the forces that reach vulnerable elements of a building by using control strategies of one kind or another. In truth, the first proposal for intervening in a building’s ability to withstand earthquake dates back to 1870 (a patent filed at the U.S. Patent Office by one Jules Touaillon), but such ideas were not taken seriously till a century or so later.

One approach is to use passive control. In this approach massive members and/or damping mechanisms are incorporated into the building in such a way that their parameters and the coupling between them and the rest of the building are chosen just so that the destructive forces are preferentially dissipated into these additional elements, leaving the building itself undamaged.

An alternative, more recent approach, is to use active control. Since it is the amplitude and frequency of the vibrations that a building is subject to during an earthquake that determine whether it will sustain damage or not, damping earthquake vibrations by applying suitably designed counter vibrations to the building reduces the net forces that the building must withstand, and thus the damage it will sustain under a given severity of earthquake and given a specific standard of construction.

In [16, 20, 21, 31] there is a study of such an active control system for earthquake resistance for buildings. Ultimately, it is targeted at an experimental tall building of six stories. These papers investigate various aspects of verification for a system of this kind, based largely on timing considerations, which inevitably generate uncertainties due to equipment latencies. The approach to the system is rather bottom up in [20, 21]. The design is presented at a low level of detail with separate elements for the start of an action, the end of the action, and a synchronisation point within the action (as needed), with timings attached to each element. Even for a simple system, this results in a huge state space. The focus of [20, 21] then becomes reduction of the state space size, showing no loss of behaviour via bisimulation. Following this, useful properties of the system model may be demonstrated using the smaller state space version.

In the present paper, we take an alternative route, going top down instead of bottom up, and using Hybrid Event-B (henceforth HEB) [11, 12] as the vehicle for doing the development. We work top-down, and for simplicity and through lack of space, we do not get to the low level of detail present in [20, 21]. In particular, we omit replication of subsystems, timing and fault tolerance (though we comment on these aspects at the end). As well as providing the contrast to the previous treatment, the present case study offers some novelty regarding the interaction of physical and digital behaviours (in particular, regarding continuous vs. impulsive physics, as treated within the HEB formalism), compared with other case studies, e.g. [5–7, 9, 10, 15].

The rest of this paper is as follows. In Sect. 2 we briefly overview control strategies for seismic protection for buildings, and focus on the active control principles that underpin this paper’s approach. Section 3 has an outline of single machine HEB, for purposes of orientation. In Sect. 4 we present the simple dynamical model we develop, and its most abstract expression in HEB, and Sect. 5 presents an ideal but


completely unrealistic solution to the problem posed in the model. The next few sections develop and refine the original model in a less idealised way, bringing in more of the detailed requirements of a practical solution. The more detail we bring in, the greater the challenge to the usual refinement technique found in formal development frameworks, including HEB. Thus Sect. 6 presents a first refinement towards a practical solution, while Sect. 7 engages more seriously with an ‘ideal pulse’ strategy for the active control solution. Section 8 pauses to discuss the issues for refinement that this throws up. Section 9 incorporates the discretization typically seen in practical engineering solutions, and also treats decomposition into a family of machines that reflect a more convincing system architecture, one resembling the approach of [20, 21]. Section 10 continues the discussion of issues raised for refinement and retrenchment by these development steps. Section 11 presents numerical simulation work showing that the theoretically based earlier models give good agreement when compared with solutions derived using conventional engineering approaches. Section 12 recapitulates and concludes.

2 Control Strategies for Earthquake Damage Prevention

Since mechanical prevention of earthquake damage to structures began to be taken seriously, a number of engineering techniques have been brought to bear on the problem [40]. These days this is a highly active field and the literature is large, e.g. [1, 2, 19].

In passive control [17], decoupling of the building from the ground, and/or the incorporation of various additional members, are used to ensure that the forces of an earthquake do not impinge on the important building structure. Passive approaches are often used to protect historical buildings in which major re-engineering is impractical. One disadvantage of the passive approach is the potential transverse displacements relative to the ground that the protected building may undergo. If, with respect to an inertial frame, the building stays still, and the ground moves by 20 cm, then the relative movement of the building is 20 cm. This may not be practical.

In alternative approaches the engineering compensation is more active. Active approaches (such as the one we will pursue in more detail below) have the compensation mechanism trying to actively counter the forces imparted by the earthquake in order to limit the amplitude of vibrations at the building’s resonant frequencies [33, 34]. One problem experienced by active prevention systems is that they may consume a lot of energy, which is expensive and undesirable. Another is that if one is unlucky, and the parameters of the earthquake fall in the wrong region (due to imprescient design, or error), then because an active system is injecting energy into the overall structure, it may actually make things worse, driving the overall structure into instability, perhaps because the injected energy is being introduced in phase rather than in anti-phase with the earthquake itself. An increasingly popular approach these days is semi-active control [22], in which the main aim is to intervene actively only in


the dissipative part of the building’s response, decreasing energy costs and avoiding potential instabilities, for only a small decrease in performance.

In all of these active strategies for prevention of earthquake damage to buildings, the building contains a set of sensors which constantly monitor vibrations impinging on the building from the ground. The signals coming from these are analysed to differentiate between earthquake originated forces, and normal day to day vibrations caused by everyday activities in the building’s surroundings. The latter are, of course, ignored. The building also contains active members which can impart forces to the building structure. The aim of the active control is to impart forces to the building that counter the damaging forces coming from the earthquake, so that the net force that the building must withstand remains within its safe design parameters.

These days, these design aims are achieved using a sophisticated control engineering approach. Many strategies have been tried, but among the most popular currently is to use an LQG (Linear Quadratic Gaussian) strategy for designing a nominal controller, which is then modulated by clipping extreme values. This approach is based on a sophisticated formulation of noisy dynamics and its control via a Bayesian estimation of current and future behaviour. See e.g. [22] (or [28] for a simpler example). One consequence of this approach is some loss of direct contact between the control algorithm design and real time values, due to the use of L2 estimates in the derivation of the controller. This is a disadvantage regarding direct correspondence with typical formal methods approaches, which are wedded exclusively to variable values in real time.

Our own study is based on the strategy used in [20, 21]. Figure 1, taken from [20, 21], gives a schematic outline of how active elements are disposed in an experimental building in Tokyo. There are sensors near the ground, and at the top of the building. The active members, which have to be capable of exerting significant force if they are to move significant aspects of the building, are found at the bottom of the building.1

The technique by which the corrective forces are applied to the building is to have the active members impart a series of pulses to the core framework of the building. Of course, for this to be successful on the timescales of earthquake vibrations, there has to be accurate real time control, and an appropriate balance between the aggregated effect of the applied pulse series and the sensed vibrations coming from the earthquake. In contrast to the LQG approach, the technique used in [16, 20, 21, 29, 31] is based on real time monitoring of positions, velocities and accelerations in the building’s structure, thus greatly facilitating a correspondence with conventional model based formal methods techniques (a point that emerges, though obviously quite indirectly, from remarks in [29]).

It is not our aim in this paper to get deeply embroiled in the detailed control engineering aspects of the problem. We leave that to other work.

1 In sophisticated modern designs, active members are also found higher up the building, to counter vibration antinodes part way up a tall structure.


Fig. 1 A schematic of a building design, to be protected by an active earthquake damage prevention system. From [20, 21]

Instead, our aim is to take a top down approach to the implementation task, and to see how a HEB perspective can bring efficiencies and a degree of clarity to that. Accordingly, we next turn to HEB itself.

3 A Brief Outline of Hybrid Event-B

In this section we give an outline of Hybrid Event-B for single machines. In Fig. 2 we see a bare bones HEB machine, HyEvBMch. It starts with declarations of time and of a clock. In HEB, time is a first class citizen in that all variables are functions of time, whether explicitly or implicitly. However time is special, being read-only, never being assigned, since time cannot be controlled by any human-designed engineering process. Clocks allow a bit more flexibility, since they are assumed to increase their value at the same rate that time does, but may be set during mode events (see below).

Variables are of two kinds. There are mode variables (like u, declared as usual) which take their values in discrete sets and change their values via discontinuous assignment in mode events. There are also pliant variables (such as x, y), declared in the PLIANT clause, which take their values in topologically dense sets (normally R) and which are allowed to change continuously, such change being specified via pliant events (see below).

MACHINE HyEvBMch
TIME t
CLOCK clk
PLIANT x, y
VARIABLES u
INVARIANTS
  x ∈ R
  y ∈ R
  u ∈ N
EVENTS
  INITIALISATION
    STATUS ordinary
    WHEN t = 0
    THEN
      clk := 1
      x := x0
      y := y0
      u := u0
    END
  MoEv
    STATUS ordinary
    ANY i?, l, o!
    WHERE grd(x, y, u, i?, l, t, clk)
    THEN
      x, y, u, clk, o! :| BApred(x, y, u, i?, l, o!, t, clk, x′, y′, u′, clk′)
    END
  PliEv
    STATUS pliant
    INIT iv(x, y, t, clk)
    WHERE grd(u)
    ANY i?, l, o!
    COMPLY BDApred(x, y, u, i?, l, o!, t, clk)
    SOLVE
      D x = φ(x, y, u, i?, l, o!, t, clk)
      y := E(x, y, u, i?, l, o!, t, clk)
    END
END

Fig. 2 A schematic Hybrid Event-B machine

Next are the invariants. These resemble invariants in discrete Event-B, in that the types of the variables are asserted to be the sets from which the variables’ values at any given moment of time are drawn. More complex invariants are similarly predicates that are required to hold at all moments of time during a run.

Then we get to the events. The INITIALISATION has a guard that synchronises time with the start of any run, while all other variables are assigned their initial values in the usual way. As hinted above, in HEB, there are two kinds of event: mode events and pliant events.

Mode events are direct analogues of events in discrete Event-B. They can assign all machine variables (except time itself). In the schematic MoEv of Fig. 2, we see three parameters i?, l, o!, (an input, a local parameter, and an output respectively), and a guard grd which can depend on all the machine variables. We also see the generic after-value assignment specified by the before-after predicate BApred, which can specify how the after-values of all variables (except time, inputs and locals) are to be determined.

Pliant events are new. They specify the continuous evolution of the pliant variables over an interval of time. The schematic pliant event PliEv of Fig. 2 shows the structure. There are two guards: there is iv, for specifying enabling conditions on the pliant variables, clocks, and time; and there is grd, for specifying enabling conditions on the mode variables. The separation between the two is motivated by considerations connected with refinement.

The body of a pliant event contains three parameters i?, l, o!, (once more an input, a local parameter, and an output respectively) which are functions of time, defined over the duration of the pliant event. The behaviour of the event is defined


by the COMPLY and SOLVE clauses. The SOLVE clause specifies behaviour fairly directly. For example the behaviour of pliant variable y is given by a direct assignment to the (time dependent) value of the expression E(. . .). Alternatively, the behaviour of pliant variable x is given by the solution of the first order ordinary differential equation (ODE) D x = φ(. . .), where D indicates differentiation with respect to time. (In fact the semantics of the y = E case is given in terms of the ODE D y = D E, so that both x and y satisfy the same regularity properties.) The COMPLY clause can be used to express any additional constraints that are required to hold during the pliant event via its before-during-and-after predicate BDApred. Typically, constraints on the permitted range of values for the pliant variables, and similar restrictions, can be placed here.

The COMPLY clause has another purpose. When specifying at an abstract level, we do not necessarily want to be concerned with all the details of the dynamics—it is often sufficient to require some global constraints to hold which express the needed safety properties of the system. The COMPLY clauses of the machine’s pliant events can house such constraints directly, leaving it to lower level refinements to add the necessary details of the dynamics.

Briefly, the semantics of a HEB machine is as follows. It consists of a set of system traces, each of which is a collection of functions of time, expressing the value of each machine variable over the duration of a system run. (In the case of HyEvBMch, in a given system trace, there would be functions for clk, x, y, u, each defined over the duration of the run.)

Time is modeled as an interval T of the reals. A run starts at some initial moment of time, t0 say, and lasts either for a finite time, or indefinitely. The duration of the run T breaks up into a succession of left-closed right-open subintervals: T = [t0 . . . t1), [t1 . . . t2), [t2 . . . t3), . . . . The idea is that mode events (with their discontinuous updates) take place at the isolated times corresponding to the common endpoints of these subintervals ti, and in between, the mode variables are constant and the pliant events stipulate continuous change in the pliant variables.

Although pliant variables change continuously (except perhaps at the ti), continuity alone still allows for a wide range of mathematically pathological behaviours. To eliminate these, we make the following restrictions which apply individually to every subinterval [ti . . . ti+1):

I Zeno: there is a constant δZeno, such that for all i needed, ti+1 − ti ≥ δZeno.
II Limits: for every variable x, and for every time t ∈ T, and for δ > 0, the left limit limδ→0 x(t − δ), written −→x (t), and the right limit limδ→0 x(t + δ), written ←−x (t), exist, and for every t, ←−x (t) = x(t). [N. B. At the endpoint(s) of T, any missing limit is defined to equal its counterpart.]
III Differentiability: The behaviour of every pliant variable x in the interval [ti . . . ti+1) is given by the solution of a well posed initial value problem D xs = φ(xs . . .) (where xs is a relevant tuple of pliant variables and D is the time derivative). ‘Well posed’ means that φ(xs . . .) has Lipschitz constants which are uniformly bounded over [ti . . . ti+1) bounding its variation with respect to xs, and that φ(xs . . .) is measurable in t.


Regarding the above, the Zeno condition is certainly a sensible restriction to demand of any acceptable system, but in general, its truth or falsehood can depend on the system’s full reachability relation, and is thus very frequently undecidable. The stipulation on limits, with the left limit value at a time ti being not necessarily the same as the right limit at ti, makes for an easy interpretation of mode events that happen at ti. For such mode events, the before-values are interpreted as the left limit values, and the after-values are interpreted as the right limit values. The differentiability condition guarantees that from a specific starting point, ti say, there is a maximal right open interval, specified by tMAX say, such that a solution to the ODE system exists in [ti . . . tMAX). Within this interval, we seek the earliest time ti+1 at which a mode event becomes enabled, and this time becomes the preemption point beyond which the solution to the ODE system is abandoned, and the next solution is sought after the completion of the mode event.

In this manner, assuming that the INITIALISATION event has achieved a suitable initial assignment to variables, a system run is well formed, and thus belongs to the semantics of the machine, provided that at runtime:

(1) Every enabled mode event is feasible, i.e. has an after-state, and on its completion enables a pliant event (but does not enable any mode event).2
(2) Every enabled pliant event is feasible, i.e. has a time-indexed family of after-states, and EITHER:
  (i) During the run of the pliant event a mode event becomes enabled. It preempts the pliant event, defining its end. ORELSE
  (ii) During the run of the pliant event it becomes infeasible: finite termination. ORELSE
  (iii) The pliant event continues indefinitely: nontermination.

Thus in a well formed run mode events alternate with pliant events. The last event (if there is one) is a pliant event (whose duration may be finite or infinite).

We note that this framework is quite close to the modern formulation of hybrid systems. (See e.g. [27, 35] for representative formulations, or the large literature in the Hybrid Systems: Computation and Control series of international conferences, and the further literature cited therein.) In reality, there are a number of semantic issues that we have glossed over in the framework just sketched. We refer to [11] for a more detailed presentation. Also, the development we undertake requires the multi-machine version of HEB [12]. Since the issues that arise there are largely syntactic, we explain what is needed for multi-machine HEB in situ, as we go along.

2 If a mode event has an input, the semantics assumes that its value arrives at a time distinct from the previous mode event, ensuring part of (1) automatically.
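The run structure just described, pliant evolution preempted by mode events, can be mimicked by a very small simulation loop. The following sketch is only an illustration of the alternation; the fixed-step Euler integration and the particular guard and update functions are assumptions of the sketch, not part of the HEB semantics.

    # Integrate the pliant behaviour until some mode-event guard becomes
    # enabled, apply the mode update (a discontinuous change), and continue.

    def run(x0, t_end, dt, d_x, mode_guard, mode_update):
        t, x = 0.0, x0
        trace = [(t, x)]
        while t < t_end:
            x = x + dt * d_x(t, x)      # pliant event: follow D x = phi(x, t)
            t = t + dt
            if mode_guard(t, x):        # preemption point
                x = mode_update(t, x)   # mode event: discontinuous update
            trace.append((t, x))
        return trace

    # Example: x decays pliantly, and a mode event resets it whenever it
    # drops below 0.5, so mode and pliant events alternate as required.
    trace = run(x0=1.0, t_end=5.0, dt=0.01,
                d_x=lambda t, x: -x,
                mode_guard=lambda t, x: x < 0.5,
                mode_update=lambda t, x: 1.0)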

Fig. 3 The simple mechanical model that the HEB development is based on, after [31]: a lumped mass m, restrained by a spring of stiffness k and a viscous damper with coefficient c, driven by a control force p(t), with building displacement w(t) and ground displacement z(t)

4 Top Level Abstract Model of the Control System

As stated in Sect. 2, we do not get deeply embroiled in the detailed control engineering aspects of realistic active control in this paper. We base our treatment on the relatively simple strategy described in detail in [31]. Figure 3 shows the simple system investigated in [29, 31]. The model refers to the dynamics of a mechanical system with a single degree of freedom (SDOF). The building to be protected is modelled as a concentrated or lumped mass m and a structural system that resists lateral motion with a spring of stiffness k and a viscous damper with coefficient c.3 A force p is applied to the mass by the active control system. The effects of spring and damper depend on the relative position between a fiducial point in the building w, and another fiducial point in the earth z, i.e. on x = w − z. When w = z = 0, the spring is unstretched. Writing D for the time derivative, and defining e ≡ D² z, the dynamics of w, expressed in terms of the relative displacement x, is thus controlled by:

m D² x + c D x + k x = p − m e   (3)
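Equation (3) is straightforward to explore numerically. The sketch below integrates it with a simple semi-implicit Euler scheme; the values of m, c, k and the ground acceleration signal are invented for illustration only, not taken from [31].

    # Numerical sketch of (3): m D2x + c Dx + k x = p - m e.

    import math

    m, c, k = 1.0e5, 4.0e4, 4.0e6            # kg, N s/m, N/m (made-up values)

    def e(t):                                 # toy ground acceleration (m/s^2)
        return 2.0 * math.sin(2.0 * math.pi * 1.5 * t)

    def p(t, x, v):                           # control force; 0 = uncontrolled
        return 0.0

    def simulate(t_end=10.0, dt=1.0e-3):
        x, v, t, peak = 0.0, 0.0, 0.0, 0.0
        while t < t_end:
            a = (p(t, x, v) - m * e(t) - c * v - k * x) / m
            v += dt * a
            x += dt * v
            t += dt
            peak = max(peak, abs(x))
        return peak                           # largest relative displacement

    print(simulate())                         # compare against the bound XB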

Since p is to be chosen by the system, e can be measured, and the other data are known, (3) yields a law that can be used to keep x within desired bounds.

The code for the top level model of the HEB development is in Fig. 4. At this level the system consists of a single machine, ActConMch_0. There are pliant variables x, p, which capture the model elements discussed above.

The INVARIANTS are rather basic at this stage. They declare the types of the variables, and one further non-trivial property.

3 Idealizing a building by an equivalent SDOF system requires an assumption about its displaced shape and other details that are beyond the scope of the paper. The interested reader is directed to the methodology outlined by Kuramoto et al. [24], which is included in the current building design code used in Japan, as one example.

MACHINE ActConMch_0
PLIANT x, p
INVARIANTS
  x, p ∈ R, R
  |x| ≤ XB
EVENTS
  INITIALISATION
    STATUS ordinary
    BEGIN
      x, p := 0, 0
    END
  MONITOR
    STATUS pliant
    ANY e?
    WHERE e? ∈ R ∧ |e?| ≤ EB
    COMPLY INVARIANTS
    END
END

Fig. 4 A highly abstract model of the earthquake damage prevention active control system

This property actually expresses the key system requirement of the whole development, namely that the value of variable x stays within a range −XB ≤ x ≤ XB. Imposing it amounts to placing a limit on the lateral drift4 of a building structure, i.e., the horizontal displacement that upper stories undergo with respect to the base. This relative motion is resisted by the building’s structural system, but under extreme events the internal deformations may be excessive, leading to structural damage and ultimately collapse of the building.

The INITIALISATION event sets all the variables to zero. Then there is the single actual event of the model: the MONITOR pliant event, which covers the continuous monitoring of the system when it is in the monitoring mode. Since this is the only mode in the model, it does not require a specific variable or value to name it.5

The definition of the MONITOR pliant event is trivial at this level of abstraction. It consumes its input e?, evidently corresponding to the relevant model element above. For future calculational tractability, e? is assumed to be bounded by an explicit constant EB. The event simply demands that the INVARIANTS are to be maintained. This is to be established in some as yet unspecified manner, since we postpone the more demanding details of the calculations involved in the control model we have introduced. Later on, its more precisely defined job will be to monitor the information coming from the sensors (i.e. to monitor w and its derivatives, and e), to calculate what response may be necessary, and to issue pulses to the actuators, as may be required, these being the embodiment of p. Of course, in reality, the monitoring will be done via a series of discrete events, resulting in a series of readings from the sensors, but for the next few models in the development, we will assume that this is all a continuous process.

4 The International Building Code (IBC) requires that drift be limited in typical buildings to 1–2% of the building’s height for reasons of both safety and functional performance.
5 Of course, a more realistic model would contain modes for maintenance, and for other forms of partial running or of inactivity—to be used, presumably, only when the building is unoccupied.

A Simple Hybrid Event-B Model of an Active Control System … MACHINE ActConMch IDEAL REFINES ActConMch 0 PLIANT x, y, p INVARIANTS x, y, p ∈ R, R, R |x| ≤ XB EVENTS INITIALISATION REFINES INITIALISATION STATUS ordinary BEGIN x, y, p := 0, 0, 0 END ... ...

167

... ... MONITOR REFINES MONITOR STATUS pliant ANY e WHERE e? ∈ R ∧ |e?| ≤ EB SOLVE p := m e? Dx = y Dy = − mc y − mk x + m1 p − e? END END

Fig. 5 Idealised refinement of the system

5 An Idealised Refinement: Miraculous ODE Behaviour

Figure 5 presents a somewhat idealised refinement of ActConMch_0. Rather than assume the desired effect is achieved nondeterministically, we introduce the control law (3) in the SOLVE clause of MONITOR. In order to do that we introduce a new variable y so that we can translate the second order (3) into the first order form stipulated by HEB. Having done that, we notice that we need merely to set p to m e? and the zero initialisation of x, y persists to a global solution satisfying the invariants. We are done! The building stands still under all admissible earthquake conditions!

If only it were that simple. Unfortunately, it requires that p be chosen to mirror the instantaneous real time behaviour of e? with complete precision, with no allowance for quantisation effects or for signal propagation delay in equipment. This is impractical in the real world. Accordingly, we abandon this route in favour of a more achievable development route.
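The cancellation behind the ‘miraculous’ solution is easy to confirm numerically: substituting p = m e into (3) makes the right-hand side vanish, so from zero initial conditions x never leaves 0. A rough check, with invented parameter values:

    # With p(t) = m e(t), the forcing in (3) cancels, so x stays at 0
    # (up to integration error).  Parameters are illustrative inventions.

    import math

    m, c, k = 1.0e5, 4.0e4, 4.0e6
    e = lambda t: 2.0 * math.sin(2.0 * math.pi * 1.5 * t)

    x, v, t, dt, peak = 0.0, 0.0, 0.0, 1.0e-3, 0.0
    while t < 10.0:
        pf = m * e(t)                        # p chosen to mirror e exactly
        a = (pf - m * e(t) - c * v - k * x) / m
        v += dt * a
        x += dt * v
        t += dt
        peak = max(peak, abs(x))
    print(peak)                              # ~0: the building 'stands still'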

6 A More Realistic Refinement: Achievable ODE Behaviour

The problem with the MONITOR of Fig. 5 is that it is already so precise that there is no way to backtrack to a more tolerant engineering model while remaining within the restrictions of refinement theory. In this section we have a different refinement of ActConMch_0, which allows some leeway for engineering imprecision. In Fig. 6 we have a transition diagram representation of the system for this refinement. As before there is only one state, the one occupied by the MONITOR event, and there is now a mode event too, MoSkip, of which more later. The HEB code for the refinement is in Fig. 7.

Fig. 6 A transition diagram for the first refinement of the HEB model of the earthquake damage prevention system (the single pliant state occupied by MONITOR, with the mode event MoSkip looping on it)

The behaviour of the MONITOR event refines its previous incarnation by restricting the behaviour. As well as the input e? there are now two locally chosen parameters, pp and e. The former allows values that match e? imprecisely to be fed to the ODE system in the SOLVE clause (which is almost the same as in Fig. 5), while the latter permits the stipulation that e? differs from a constant value (which may be chosen conveniently) by not too much during a MONITOR transition.

The COMPLY clause takes advantage of the fact that the solution to such linear constant coefficient inhomogeneous ODE systems is routine. See [4, 18, 32, 36] as well as a host of other sources. The first term is the homogeneous solution, primed by the initial values: e^{A(t−tL)} [x(tL), y(tL)]^T, where A is the companion matrix of the homogeneous part of the ODE system in the SOLVE clause, and tL refers, generically, to the start time of any runtime transition specified by the pliant event. The second term is the convolution ∗ over the interval [tL ... t] between the homogeneous solution e^{A s} (with bound convolution variable renamed to s) and the inhomogeneous part [0, (1/m) pp − e?]^T. If the projection of all this to the x variable (written .x) achieves the desired bound, then the ODE system in the SOLVE clause establishes the desired invariant.

MACHINE ActConMch_1
REFINES ActConMch_0
CLOCK clk_pls
PLIANT x, y, p
INVARIANTS
  x, y, p ∈ R, R, R
  |x| ≤ XB
EVENTS
  INITIALISATION
    REFINES INITIALISATION
    STATUS ordinary
    BEGIN
      clk_pls := 0
      x, y, p := 0, 0, 0
    END
  MoSkip
    STATUS anticipating
    WHEN clk_pls = TP
    THEN clk_pls := 0 END
  MONITOR
    REFINES MONITOR
    STATUS pliant
    ANY pp, e, e?
    WHERE pp ∈ R ∧ |pp| ≤ PPB ∧ e ∈ R ∧ (e) ∧ e? ∈ R ∧ |e?| ≤ EB ∧ |e − e?| ≤ eB
    COMPLY
      { e^{A(t−tL)} [x(tL), y(tL)]^T + (e^{A s} ∗_[tL...t] [0, (1/m) pp − e?]^T) }.x ≤ XB
    SOLVE
      p := pp
      Dx = y
      Dy = −(c/m) y − (k/m) x + (1/m) p − e?
  END
END

Fig. 7 First refinement of the system
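For concreteness, the closed form that the COMPLY clause of Fig. 7 refers to is the standard variation of constants formula for the SOLVE ODE system (a sketch in the notation of the text, not part of the original; A is the companion matrix mentioned above):

\[
A \;=\; \begin{pmatrix} 0 & 1 \\ -k/m & -c/m \end{pmatrix},
\qquad
\begin{pmatrix} x(t) \\ y(t) \end{pmatrix}
\;=\;
e^{A(t-t_L)}\begin{pmatrix} x(t_L) \\ y(t_L) \end{pmatrix}
\;+\;
\int_{t_L}^{t} e^{A(t-s)}
\begin{pmatrix} 0 \\ \frac{1}{m}\,pp(s) - e?(s) \end{pmatrix} ds ,
\]

and the COMPLY clause asks that the x component (the .x projection) of this expression respect the bound XB throughout the transition.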


The permitted imprecision between pp and e? now makes this a practical proposition.

The mode event MoSkip interrupts the MONITOR event at intervals of TP, after which MONITOR restarts. This permits the reassignment of the constant e in MONITOR at each restart. If the interval TP is short enough it permits the choice of pp during each MONITOR transition to achieve the desired outcome.

The caption of Fig. 7 claims that ActConMch_1 is a refinement of ActConMch_0. Indeed it is, although we do not describe the details of this here; see [11] for a more thorough account. We note though that: the invariants are not weakened; the new initialisation is evidently consistent with the old; the new behaviour of MONITOR evidently satisfies the previous definition; and the new mode event only updates a newly introduced (clock) variable. All of these are characteristics of HEB refinement.

7 Refining pp

The objective of the next refinement is to address the specific form of pp, bringing the design closer to engineering practice, and to [29, 31] in particular. For this we follow the detailed formulation in [19], which contains a wealth of detailed calculation. We make a conventional change of parameters: ζ = c/(2√(km)), ωn = √(k/m), ωD = ωn√(1 − ζ²). This change reduces the LHS of (3) to (m times):

\[
D^2 x + 2\zeta\omega_n\,D x + \omega_n^2\,x
\tag{4}
\]

In terms of these quantities, the generic solution of the ODE system indicated above reduces to a Duhamel integral with specified initial values [19, 39]:

\[
x(t - t_L) \;=\;
e^{-\zeta\omega_n (t-t_L)}\Big[\, x(t_L)\cos(\omega_D (t-t_L))
 + \frac{y(t_L) + \zeta\omega_n x(t_L)}{\omega_D}\,\sin(\omega_D (t-t_L)) \Big]
\;+\; \frac{1}{m\,\omega_D}\int_0^{t-t_L} (pp(s) - m\,e?(s))\,
e^{-\zeta\omega_n((t-t_L)-s)}\,\sin(\omega_D((t-t_L)-s))\;ds
\tag{5}
\]

\[
y(t - t_L) \;=\; D\,x(t - t_L) \;=\;
e^{-\zeta\omega_n (t-t_L)}\Big[\, y(t_L)\cos(\omega_D (t-t_L))
 - \frac{\omega_n(\zeta y(t_L) + \omega_n x(t_L))}{\omega_D}\,\sin(\omega_D (t-t_L)) \Big]
\;+\; \frac{1}{m}\int_0^{t-t_L} (pp(s) - m\,e?(s))\,
e^{-\zeta\omega_n((t-t_L)-s)}\Big[ \cos(\omega_D((t-t_L)-s))
 - \frac{\zeta\omega_n}{\omega_D}\,\sin(\omega_D((t-t_L)-s)) \Big]\;ds
\tag{6}
\]


The idea now is to tailor the various parameters of the model in such a way that we can prove that the form we choose for pp lets us derive the desired bound |x| ≤ XB. To conform to engineering practice for this class of systems, the form we choose for pp will consist of pulses, as suggested earlier. Pulses have a large value for a short period, and are zero the rest of the time. As clearly explained in [19], if the support of a pulse is small, its precise shape has little effect on the dynamics, and only its overall impulse (i.e. integral) matters. Thus the natural temptation is to idealise the pulse into a 'delta function', which has zero duration but nonzero integral. Although no engineering equipment implements a delta function pulse, the idealisation simplifies calculations, and so we will pursue it here, since the deviation from a realistic pulse will be small. Technically, the idealisation also allows us to illustrate how delta functions can be handled in HEB.⁶ Tacitly, we can identify the time period TP in Fig. 7 with the interval between pulses.

⁶ The issue is not a trivial one. HEB semantics is defined in terms of piecewise absolutely continuous functions [11]. But a delta function is not piecewise absolutely continuous, because, to be precise, it is not a function at all.

Suppose then that one of these idealised delta pulses has just occurred. In the ensuing interval, the form of pp will be zero, so the pp terms can be removed from (5) and (6). Assuming that we know x(tL) and y(tL), we thus calculate the behaviour of x and y in the ensuing interval. Demanding that this remains within safe limits imposes constraints on x(tL) and y(tL), which it was the obligation of the immediately preceding pulse to have ensured. Analogously, it is the obligation of the next pulse to ensure equally safe conditions for the next interval. And so on.

Thus, we are interested in estimating the behaviour of the x and y variables during a transition of the MONITOR event. To this end, we argue as follows. Having tacitly arranged that the pulses occur at the transitions specified by the MoSkip event, and thus that pp is zero during a MONITOR transition, we note that the period of the building's vibrations during an earthquake, which is typically of the order of a second or two and is captured in the constants ωn and ζ, will be much longer than the response time of the active protection system, i.e. will be much longer than TP. Therefore, the domain of integration in (5) and (6) will always be much shorter than a half cycle of the trigonometric terms, as a consequence of which the combined exponential and trigonometric terms will always be positive throughout the domain of integration. In such a case, the extremal values of the integral will arise when the modulating factor e? takes its own extremal values. These are just the constant values e ± eB (the sign to be chosen depending on which one favours the argument we wish to make). Substituting these (and keeping both signs in case of future need) reduces the integrals to an analytically solvable form, which is readily evaluated [23, 26]. For a duration TP we get:

\[
x(T_P + t_L) \;=\;
e^{-\zeta\omega_n T_P}\Big[\, x(t_L)\cos(\omega_D T_P)
 + \frac{1}{\sqrt{1-\zeta^2}}\Big(\frac{1}{\omega_n}\,y(t_L) + \zeta\,x(t_L)\Big)\sin(\omega_D T_P) \Big]
\;-\; (e \pm e_B)\,\frac{1}{\omega_n^2}\Big[\, 1 - e^{-\zeta\omega_n T_P}\Big(\cos(\omega_D T_P)
 + \frac{\zeta}{\sqrt{1-\zeta^2}}\,\sin(\omega_D T_P)\Big) \Big]
\tag{7}
\]

\[
y(T_P + t_L) \;=\; D\,x(T_P + t_L) \;=\;
e^{-\zeta\omega_n T_P}\Big[\, y(t_L)\cos(\omega_D T_P)
 - \frac{1}{\sqrt{1-\zeta^2}}\big(\omega_n x(t_L) + \zeta\,y(t_L)\big)\sin(\omega_D T_P) \Big]
\;-\; (e \pm e_B)\,\frac{1}{\omega_D}\,e^{-\zeta\omega_n T_P}\sin(\omega_D T_P)
\tag{8}
\]

Next we observe that the impulsive force that the active protection system applies during a pulse will not significantly change x but will only have a significant impact on y. Thus, assuming the system only reacts when |x| is close to its permitted maximum value (specified by an appropriately chosen threshold value Xth), we infer that the following statements:

if 0 < Xth ≤ x(tL) ≤ XB then ensure x(TP + tL) ≤ XB fi    (9)

and

if 0 > −Xth ≥ x(tL) ≥ −XB then ensure x(TP + tL) ≥ −XB fi    (10)

express a policy for ensuring that the invariant |x| ≤ XB is maintained throughout the dynamics of the system. These allow us to focus predominantly on Eq. (7), using (8) only occasionally.

We note that for the typical scenario of interest, ζ ≲ 0.1, so that √(1 ± ζ²) ≅ 1. From this we deduce that ωn ≅ ωD, so we call both of them ω henceforth. Bearing the implications of such system parameters in mind, we embark on a process of simplifying (7) and (8). The observations just made lead to:

\[
x(T_P + t_L) \;=\;
e^{-\zeta\omega T_P}\Big[\, x(t_L)\cos(\omega T_P)
 + \Big(\frac{1}{\omega}\,y(t_L) + \zeta\,x(t_L)\Big)\sin(\omega T_P) \Big]
\;-\; (e \pm e_B)\,\frac{1}{\omega^2}\Big[\, 1 - e^{-\zeta\omega T_P}\big(\cos(\omega T_P) + \zeta\,\sin(\omega T_P)\big) \Big]
\tag{11}
\]

\[
y(T_P + t_L) \;=\;
e^{-\zeta\omega T_P}\Big[\, y(t_L)\cos(\omega T_P)
 - \big(\omega\,x(t_L) + \zeta\,y(t_L)\big)\sin(\omega T_P) \Big]
\;-\; (e \pm e_B)\,\frac{1}{\omega}\,e^{-\zeta\omega T_P}\sin(\omega T_P)
\tag{12}
\]

Also, to ensure that the system is responsive enough to adequately dampen large oscillations coming from an earthquake, it should be prepared to respond at least 20 times per building oscillation, making ωTP ≅ 0.05, and making ζωTP ≅ 0.005. This allows us to further simplify (11) and (12), keeping low order terms only. We work to second order in ωTP and regard ζ ≈ ωTP. This leads to the discarding of contributions [(1/2)ζ²ω²TP²] (y(tL) TP) to x(TP + tL) and of ζ²ω²TP² y(tL) + ζω²TP² (x(tL) ω) − (e ± eB) TP ((1/2) ζ²ω²TP²) to y(TP + tL)—these will certainly be negligible if we consider that real systems are noisy. In this way we get:

\[
x(T_P + t_L) \;=\;
x(t_L)\Big(1 - \frac{\omega^2 T_P^2}{2}\Big)
+ y(t_L)\,T_P\,\big(1 - \zeta\omega T_P\big)
- \frac{(e \pm e_B)\,T_P^2}{2}
\tag{13}
\]

\[
y(T_P + t_L) \;=\;
y(t_L)\,\big(1 - 2\,\zeta\omega T_P\big)
- \omega\,x(t_L)\,\omega T_P
- (e \pm e_B)\,T_P\,\big(1 - \zeta\omega T_P\big)
\tag{14}
\]
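As a quick numerical sanity check on this truncation (not part of the original derivation; the parameter, state and forcing values below are purely illustrative, with ζ and ωTP small as discussed in the text), one can compare the exact one-interval maps (11)–(12) with the truncated forms (13)–(14):

```python
# Compare the exact one-interval maps (11)-(12) with the truncated forms
# (13)-(14); illustrative values only (zeta and w*TP small, as in the text).
import numpy as np

zeta, w, TP = 0.05, np.pi, 0.05 / np.pi   # chosen so that w*TP = 0.05
x0, y0, E = 0.01, 0.02, 0.5               # sample state and forcing (e +/- eB)

wT = w * TP
damp = np.exp(-zeta * wT)

# Eqs. (11)-(12): exact response over one interval with pp = 0
x_exact = damp * (x0 * np.cos(wT) + (y0 / w + zeta * x0) * np.sin(wT)) \
          - E / w**2 * (1 - damp * (np.cos(wT) + zeta * np.sin(wT)))
y_exact = damp * (y0 * np.cos(wT) - (w * x0 + zeta * y0) * np.sin(wT)) \
          - E / w * damp * np.sin(wT)

# Eqs. (13)-(14): second order truncation in w*TP
x_approx = x0 * (1 - wT**2 / 2) + y0 * TP * (1 - zeta * wT) - E * TP**2 / 2
y_approx = y0 * (1 - 2 * zeta * wT) - x0 * w * wT - E * TP * (1 - zeta * wT)

print(x_exact, x_approx)   # agree to within terms of higher order in w*TP
print(y_exact, y_approx)
```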

These formulae exhibit characteristics that we would expect. Thus, the leading contribution to x(TP + tL) is x(tL) + y(tL) TP, to which are added smaller corrections, while the leading contribution to y(TP + tL) is y(tL) itself, modified by smaller corrections. The relative constancy of the velocity y over an interval TP confirms that our proposed strategy, of imposing a pulse which discontinuously alters y(tL), will be the dominant effect on the displacement variable x during the interval. We also see that the earthquake acceleration, which contributes the (e ± eB) terms, is not very significant unless it is violent enough to be comparable to the time period TP or its square.

In principle (13) and (14) give us enough to design the protection system. At the end of each TP interval, we examine x(tL) and y(tL), we calculate x(TP + tL) according to (13), and if the answer exceeds XB, we apply a pulse to change y(tL) to a new value y(tL) for which a recalculated x(TP + tL) does not exceed XB. However, we wish to do a bit better. We would like to identify a safe region, given by a threshold value Xth, such that if x(tL) ≤ Xth, no further action is needed.

To identify Xth, we need an upper bound for y(tL) so that we can estimate how much 'help' the velocity could give to x during an interval. We argue as follows. We note that starting from a stationary state, neglecting the lower order corrections (including the contribution from x(tL) whose coefficient is small), and considering the strongest earthquake the system is designed to cope with, EB, each interval can add at most EB TP to y. So after N intervals, y is at most N EB TP. Turning to x, an interval can similarly add at most EB TP²/2 from the last term of (13), and after N intervals, (1 + 2 + ⋯ + N) EB TP² is added from the velocity term, giving a total, after N intervals, of EB TP² (N² + 2N)/2. This must not exceed XB, which leads to⁷:

\[
N \;\approx\; \sqrt{\,2X_B/(E_B T_P^2) \;+\; 1\,}
\tag{15}
\]

The threshold value Xth must be small enough that the largest possible single increment of x cannot exceed XB − Xth . From (13), using the maximal velocity derived earlier, we get:

\[
X_{th} \;\le\; X_B \;-\; \Big( E_B T_P^2\,\sqrt{2X_B/(E_B T_P^2) + 1} \;+\; \frac{E_B T_P^2}{2} \Big)
\tag{16}
\]

⁷ In deriving this, we dropped a term −1 from the RHS of (15).

Fig. 8 A transition diagram for the second refinement of the HEB earthquake damage prevention model (the pliant event MONITOR, together with the mode events PulseNo, PulseMaybe, PulseYesY and PulseYesE)

For (16) to be reasonable, its RHS must be positive, which leads to the consistency condition 2 XB ≥ EB TP². This is sensible, since if not (and referring to (13)), a single cold start interval could overreach XB, and the threshold idea would not make sense. (The same condition is also necessary for (15) to yield a positive integer, when the discarded −1 is reinstated.)

From the account above, it is clear that if |x(tL)| ≤ Xth at the start of an interval, then the system need do nothing. This will be the case most of the time in reality, since the only vibrations sensed will be from normal everyday activity in the building and its surroundings. However, if |x(tL)| > Xth, then the more detailed calculation in (13) will be needed, in case there is a risk of exceeding the bound XB.

These observations underpin the next model in our HEB development, whose transition diagram is in Fig. 8, and the text of which is in Fig. 9. In this model, the MONITOR event no longer has a COMPLY clause stipulating the behaviour of the system via an implicitly chosen pp function. In accordance with our discussion, the externally imposed force p is zero during MONITOR. The job of ensuring that the invariant |x| ≤ XB is maintained becomes the responsibility of delta pulses that jolt the system into acceptable behaviour when necessary.

Here we hit a technical snag, in that delta functions do not exist in the semantics of HEB (see footnote 6). Rather than express the needed delta functions directly, we use their time integrals, which are discontinuous functions, which do exist in the semantics of HEB, and are typically implemented using mode events. The burden of implementing the pulses thus falls to refinements of the earlier MoSkip event, which implement the imposition of the needed delta functions onto the acceleration DDx = Dy, by instead imposing discontinuities on its integral y.

Accordingly, when time is a multiple of TP, if |x| < Xth, then event PulseNo executes, and just resets the clock. But if |x| ≥ Xth then we need a more complex calculation, analogous to Eq. (13). If this reveals that the projected future |x| value will nevertheless still be below XB, then the action is the same, expressed in event PulseMaybe.


Fig. 9 Second refinement of the system


However, if the calculation reveals that without intervention XB will be breached, then the system must intervene to prevent it. This is captured in mode events PulseYesY and PulseYesE and involves a case analysis as follows.

Let us call Δx the difference between the projected future |x| value and the before-value of |x| in these events, as in the two events' guards. Then if Δx turns out positive, it can only be because either the y term or the −e? term of the projected future |x| value, or both, is/are driving |x| too high. At least one of these terms has a value whose sign is the same as that of the before-value of x in the two events, else both terms would drive |x| smaller, contradicting the breaching of XB. N.B. We assume that the threshold is big enough that above threshold, a single interval cannot cause x to change sign, and thus cannot cause |x| to increase even in cases in which the rate of change of x changes sign.

Suppose then that both terms have values whose sign agrees with that of the value of x. Then one of them has a value which is at least Δx/2 since they act additively and their sum is Δx. In this case it is sufficient to invert the sign of the larger contribution to ensure that their net effect diminishes |x|. So we either flip y, or flip a suitably rescaled e?. This covers one of the two cases in each of PulseYesY and PulseYesE.

Suppose alternatively that only one of the terms has a value whose sign agrees with that of the value of x. Then the magnitude of that term must exceed Δx, since they act subtractively and the difference of their magnitudes is still Δx.⁸ In this case it is sufficient to invert the sign of this larger contribution to ensure their net effect diminishes |x|. This covers the remaining two cases in PulseYesY and PulseYesE.

⁸ Mathematically, it is possible for both terms to have magnitude bigger than Δx, unless we take into account relevant upper bounds etc. and show that it is impossible. We will just assume that there is no such possibility in our problem space.
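The case analysis above can be summarised in a small decision sketch (hypothetical code, not part of the HEB model; y_term and e_term stand for the two contributions to the projected change in x over an interval):

```python
import math

def choose_pulse(x, y_term, e_term):
    """Sketch of the PulseYesY/PulseYesE case analysis: decide which
    contribution to sign-flip so that the net effect diminishes |x|."""
    sgn = math.copysign(1.0, x)
    y_agrees = math.copysign(1.0, y_term) == sgn
    e_agrees = math.copysign(1.0, e_term) == sgn
    if y_agrees and e_agrees:
        # both terms drive |x| up: flip the larger one (its magnitude is >= dx/2)
        return 'PulseYesY' if abs(y_term) >= abs(e_term) else 'PulseYesE'
    elif y_agrees:
        # only the velocity term drives |x| up; its magnitude exceeds dx
        return 'PulseYesY'
    else:
        # by the argument in the text, the e? term must then be the offender
        return 'PulseYesE'
```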

8 On HEB Refinement

At this point we reflect on the refinement just done. A first point notes that during normal Event-B refinement [3], the behaviour of an event is typically restricted, making it more deterministic. In our case, we have taken this to an extreme, by effectively abandoning external control of the behaviour of the dynamical variables x and y via p during MONITOR, and have delegated this duty instead to the PulseXX events. So the PulseXX events are new in the model of Fig. 9, and define new behaviour for y (and potentially for x too, if it were needed). This is against the rules of Event-B refinement, since new behaviour for variables should be introduced at the same time as the variables themselves (being made more deterministic subsequently)—whereas we introduced x and y during the previous refinement.

We partly mitigated this by making the PulseXX events refine the earlier MoSkip events, introduced during the previous refinement stage, and giving the MoSkip events the status 'anticipating'.
This status allows an event, newly introduced during a refinement step, and which would normally be required to strictly decrease a relevant variant function (to ensure the convergence of the new behaviour), to not strictly decrease the variant then, postponing this obligation till later.⁹ Since we included no variants in our development, we discharged this duty trivially. The introduction of MoSkip and variable y at the same time thus not only allowed fresh choice of e in successive iterations of MONITOR but allowed the manipulation of y in refinements of MoSkip.

Unfortunately though, by not mentioning y at all, MoSkip by default specifies that y does not change during MoSkip transitions, while the PulseYes events refining it specify nontrivial changes to y. It is tempting to think that this is still a refinement, since the only invariant concerning y is y ∈ R, the weakest possible invariant. However, when the same variable exists in a machine and in its refinement, there is an implicit equality invariant between the abstract and concrete versions of the variable—otherwise writing more conventional refinements would become intolerably verbose. In this regard, 'no change' in MoSkip is incompatible with 'nontrivial update' in PulseYes, and our refinement isn't quite legal after all.

This shows that 'refining to a delta function' is not ideally handled in HEB. The only way to make the development unimpeachable according to the rules of Event-B is to introduce the variable y and the nontrivial mode event behaviour at the same time. But this is less desirable from our point of view, as it forces the choice of control strategy without permitting consideration of alternatives, and flies in the face of the objectives of a refinement driven development strategy which aims at introducing detail into designs in stages.

A second point concerns the arguments we employed in the preceding pages. Our reasoning started out being quite watertight mathematically, but rather quickly, we started to introduce simplifications which were perfectly justifiable on engineering grounds, but which would not pass the unblinking scrutiny of formal proof. Two centuries or more of rigorous mathematical analysis have, in principle, developed techniques, using which, such a shortcoming could be overcome, but the amount of work involved would be considerable, and would quickly surpass the small amount of added assurance that could be gained. The formal development field, in its somewhat strenuous avoidance of engagement with continuous mathematics hitherto, has not really developed a cost effective approach to dealing with this issue.

A third point concerns the extent to which the model of Fig. 9 can actually be proved correct using a per event proving strategy as embodied in the semantics and verification architecture of Event-B. This is by contrast with the arguments in preceding pages, which focused on the application structure and employed whatever observations seemed useful at the time, without regard to how the various points were to be structured into an overall correctness argument. Here, the news, though not perfect, is better.

We note that the MONITOR event, as written in Fig. 9, cannot by itself be correct according to the normal 'preserving the invariant' notion of event correctness, since it demands no restrictions on x(tL) and y(tL). Without prior knowledge about these, the ODE system can easily breach the x ≤ XB bound during a TP interval. Of course, we rely on the PulseYes events to ensure appropriate x(tL) and y(tL) values for the subsequent MONITOR event, but the MONITOR event correctness proof obligations know nothing of this.

However, in HEB, we also have 'well-formedness' proof obligations, that police the handover between mode and pliant events. These can check that after any of the PulseXX events, the values of x and y are appropriate. In particular, they check that after the PulseXX events the guard for at least one pliant event is true. Since we have designed the PulseXX events to ensure exactly what is required here, the trivial guard of the MONITOR event of Fig. 9 could, in fact, be strengthened to demand a suitably stringent constraint on x(tL) and y(tL), from which, 'preserving the invariant' would become possible. So, although we did not get diverted by this detail earlier, a solution entirely within the rules is available.

⁹ The formal presentation of HEB [11] does not mention the anticipating status, since that is somewhat outside the main concerns there. But there is no reason to forbid it since it concerns purely structural matters.

9 Sensors, Actuators, Sampling, Quantization, Decomposition

The next model in our development tackles a number of issues that add low level complexity. Following the structure of [20, 21], we introduce a sensor and an actuator into the system. Elements like these bring various kinds of imprecision to the development. Thus, they typically act at discrete moments of time—this brings temporal imprecision. Their inputs and outputs typically have finite ranges, and are quantized—this brings imprecision of magnitude. The impact of these sources of imprecision is similar from a formal point of view, and describing these phenomena precisely, generates complexity in the textual description of the system.

Moreover, a model close to the architectural structure of [20, 21] would place the architecturally distinct components of the system in separate constructs. To create such a model requires the decomposition of a monolithic version into smaller pieces, a process which, if done with precision, generates both textual complexity and a lot of repetition of the model text.

To minimise verbosity, our strategy will therefore be as follows. Viewing the model of Fig. 9 as being at level 2, the level 2 model is conceptually developed into an unstated, but still monolithic model, incorporating the features mentioned above, at level 3 (with machine ActConMch_3 say). This is then decomposed into a multimachine project at level 4, exhibiting the desired architectural structure. The level 4 model is presented in Figs. 11, 12, 13 and 14, and described below. We comment more extensively on the level 4 model later on.

In Fig. 10 we have a depiction of the various HEB machines of the distributed concurrent HEB model that results from the process just sketched. Figures 11, 12, 13 and 14 contain the text of the resulting model.

Fig. 10 A family of transition diagrams for the HEB machines of a distributed concurrent version of the active earthquake damage prevention system (one diagram per machine: EarthMch, BuildingMch, SensorMch, ControllerMch and ActuatorMch, each with its pliant and mode events)

We start with the PROJECT ActCon_4_Prj file in Fig. 11, which describes the overall structure. The DECOMPOSES ActCon_3_Prj line refers to the fictitious level 3 system, of which more later. The main job of the PROJECT file is to name the constituent machines and interfaces, and to define needed synchronisations between the mode events of the different machines. Thus there are machines for the earth, the building, the actuator, the sensor, and the controller.

The PROJECT file also names the INTERFACE ActCon_4_IF file. This declares any variables that are shared between more than one machine, their initialisations, and, most importantly, any invariants that mention any of these variables. (The latter point can place stringent restrictions on how variables are partitioned into different interfaces and machines.)

The final responsibility of the PROJECT file is to declare the mode event synchronisations. Thus SYNCHronisation Sample18 specifies that mode event Sample_18_S in machine SensorMch_4 and mode event Sample_18_S in machine ControllerMch_4 must be executed simultaneously. This means that they can only execute if all the guard conditions in all the events of the synchronisation are true. The same remarks apply to the other synchronisations declared in the project file.

PROJECT ActCon_4_Prj
  DECOMPOSES ActCon_3_Prj
  MACHINE EarthMch_4
  MACHINE BuildingMch_4
  MACHINE SensorMch_4
  MACHINE ControllerMch_4
  MACHINE ActuatorMch_4
  INTERFACE ActCon_4_IF
  SYNCH(Sample18)
    SensorMch_4.Sample_18_S
    ControllerMch_4.Sample_18_S
  END
  SYNCH(Sample19)
    SensorMch_4.Sample_19_S
    ControllerMch_4.Sample_19_S
  END
  SYNCH(PulseYesY)
    ActuatorMch_4.PulseYesY_S
    BuildingMch_4.PulseYesY_S
    ControllerMch_4.PulseYesY_S
  END
  SYNCH(PulseYesE)
    ActuatorMch_4.PulseYesE_S
    BuildingMch_4.PulseYesE_S
    ControllerMch_4.PulseYesE_S
  END
END

INTERFACE ActCon_4_IF
  PLIANT xx, yy, ee
  INVARIANTS
    xx, yy, ee ∈ R, R, R
    |xx| ≤ XB
  INITIALISATION
    xx, yy, ee := 0, 0, 0
END

Fig. 11 The PROJECT and INTERFACE files of the further developed and decomposed system

We turn to the machines, pictured in Fig. 10. In outline, machine EarthMch_4 is responsible for producing the earthquake acceleration, which comes from the input e?, as in previous models. This is simply captured during the pliant event QUAKE and is recorded in the shared pliant variable ee, declared in the interface (which the EarthMch_4 machine CONNECTS to). We can see that the QUAKE event comes from decomposing the earlier MONITOR pliant event, and we will see the remnants of the MONITOR event elsewhere soon. Since any HEB machine must have at least one pliant event to describe what happens over the course of time, but need not contain any other event, and since QUAKE addresses that requirement, there are no other events in EarthMch_4.

The shared variable ee is accessed by machine BuildingMch_4. This contains the remainder of the earlier MONITOR pliant event, namely the ODE system defining the building's response, which uses ee. It also contains the business end of the PulseYesY_S and PulseYesE_S mode events, which take their inputs (which are received from the actuator using input ys?) and discontinuously impose the received values on the velocity variable yy.

We come to the SensorMch_4 and ActuatorMch_4 machines. Their behaviour is essentially discrete, so to satisfy the requirement for having a pliant event, both machines have a default COMPLY INVARIANTS pliant event, named, as is typically the case, PliTrue. In fact, since all the pliant variables are handled by other machines, there is nothing for these PliTrue events to do, and that is part of the semantics of 'COMPLY INVARIANTS' in HEB.

MACHINE EarthMch_4
  CONNECTS ActCon_4_IF
  QUAKE
    STATUS pliant
    ANY e, e?
    WHERE e ∈ R ∧ (e) ∧ e? ∈ R ∧ |e?| ≤ EB ∧ |e − e?| ≤ eB
    BEGIN ee := e? END
END

MACHINE BuildingMch_4
  CONNECTS ActCon_4_IF
  EVENTS
    PulseYesY_S
      STATUS ordinary
      ANY ys? WHERE ys? ∈ R
      THEN yy := ys? END
    PulseYesE_S
      STATUS ordinary
      ANY ys? WHERE ys? ∈ R
      THEN yy := ys? END
    MONITOR
      STATUS pliant
      SOLVE
        Dxx = yy
        Dyy = −(c/m) yy − (k/m) xx − ee
  END
END

MACHINE SensorMch_4
  CONNECTS ActCon_4_IF
  EVENTS
    Sample_18_S
      ANY sens_x! WHERE sens_x! ∈ R
      THEN sens_x! := Kxsqs⁻¹ Kxsqs xx END
    Sample_19_S
      ANY sens_x!, sens_e! WHERE sens_x! ∈ R ∧ sens_e! ∈ R
      THEN
        sens_x! := Kxsqs⁻¹ Kxsqs xx
        sens_e! := Kesqs⁻¹ Kesqs ee
      END
    PliTrue
      STATUS pliant
      COMPLY INVARIANTS
  END
END

MACHINE ActuatorMch_4
  EVENTS
    PulseYesY_S
      ANY ys!, act_y? WHERE ys! ∈ R ∧ act_y? ∈ R
      THEN ys! := Kysqs⁻¹ Kysqs act_y? END
    PulseYesE_S
      ANY ys!, act_y? WHERE ys! ∈ R ∧ act_y? ∈ R
      THEN ys! := Kysqs⁻¹ Kysqs act_y? END
    PliTrue
      STATUS pliant
      COMPLY INVARIANTS
  END
END

Fig. 12 Machines for earth, building, sensor and actuator

MACHINE ControllerMch_4
  CONNECTS ActCon_4_IF
  CLOCK clk_pls
  VARIABLES x18, x19, y19, e19
  INVARIANTS x18, x19, y19, e19 ∈ R, R, R, R
  VARIANT (TP − clk_pls) × 20
  EVENTS
    INITIALISATION
      STATUS ordinary
      BEGIN
        clk_pls := 0
        x19, y19, e19 := 0, 0, 0
      END
    PliTrue
      STATUS pliant
      COMPLY INVARIANTS
    END
    Sample_18_S
      ANY sens_x?
      WHEN clk_pls = (18/20) TP
      THEN x18 := sens_x? END
    Sample_19_S
      ANY sens_x?, sens_e?
      WHEN clk_pls = (19/20) TP
      THEN
        x19 := sens_x?
        y19 := (sens_x? − x18) (20/TP)
        e19 := sens_e?
      END
    PulseNo
      STATUS ordinary
      WHEN clk_pls = TP ∧ |x19| < Xthsq
      THEN clk_pls := 0 END
    PulseMaybe
      STATUS ordinary
      WHEN clk_pls = TP ∧ |x19| ≥ Xthsq ∧
        | x19 (1 − ω² TP²/2) + y19 TP (1 − ζ ω TP) − e19 TP²/2 | ≤ XBsq
      THEN clk_pls := 0 END
    ... ...

Fig. 13 The controller machine, first part

The job of the SensorMch_4 machine is to sample the physical values required by an idealised implementation of the system. The values are required at pulse issuing time, but to allow time for computation, as in [20, 21], they are collected a little earlier. The position and earth acceleration values, from xx and ee, are collected 19/20 of the way through a TP interval, and are transmitted (to the controller machine) in output variables sens_x! and sens_e!. An extra position value is needed for calculating a velocity estimate, so another sample of xx is taken 18/20 of the way through TP. Notice that the xx and ee values are scaled (by Kxsqs and Kesqs), rounded, and then unscaled (by Kxsqs⁻¹ and Kesqs⁻¹) before sending, to model the quantization process.¹⁰ The mode events that do these jobs are Sample_18_S and Sample_19_S. The '_S' suffixes on these names indicate, for readability, that these are synchronised with mode events in one or more other machines, though, as we mentioned earlier, the formal definition of a project's synchronisations are in the project file.

The same general comments work for the ActuatorMch_4 machine. Only the velocity variable is modified in our development, so only this variable is acted on by the actuator.

¹⁰ In reality, a sensor would send values in its own units, and scaling would be done as part of the controller's job, but we avoid this so as to keep the controller calculation reasonably transparent.
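The scale, round, unscale pattern just described can be pictured as follows (a hypothetical helper, not taken from the model; K stands for any of the scaling constants Kxsqs, Kesqs, Kysqs):

```python
# Hypothetical sketch of the quantization used by the sensor and actuator
# machines: scale by K, round to the nearest representable value, unscale.
def quantize(v, K):
    return round(K * v) / K

# e.g. sens_x! would correspond to quantize(xx, Kxsqs),
# and ys! to quantize(act_y?, Kysqs)
```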


The value needed is received (from the controller machine) in the act_y? input of synchronised events PulseYesY_S and PulseYesE_S, and after quantization via Kysqs and its inverse, is transmitted (to the building machine) in the ys! output.¹¹ As for the sensor, there is no need for any non-trivial pliant event, so a default PliTrue suffices.

At the heart of the system is the ControllerMch_4 machine. This houses the remaining functionality, and the non-trivial computation. The clock clk_pls is declared here, as are local variables x18, x19, y19, e19, the sampled values of the dynamical variables, which are not needed in any other machine. We see also that ControllerMch_4 only requires the PliTrue default pliant event, since its interventions are exclusively at individual moments of time. It also contains the remaining portions of the various synchronisations we have discussed.

The Sample_18_S event picks up the sampled position at times 18/20 TP of an interval, recording them in x18. The Sample_19_S event picks up position and acceleration samples at 19/20 TP and, as well as recording these, it calculates an estimate of velocity from the position samples and records it in y19. The values in x19, y19, e19 are then ready for the pulse calculations, which would consume some time to do, but which are modelled as taking place instantaneously at the end of the interval in the various Pulse events.

Compared with the other events, events Sample_18_S and Sample_19_S are newly introduced in this development step. In such a case, Event-B practice asks that they strictly decrease some variant function, which is included in Fig. 13 after the invariants. It is clear that at the two occurrences of the Sample events in each interval, the value of the variant drops, firstly from lim_{ε→0+} (TP − (18/20 TP + ε)) × 20 = 3 to 2, and then from 2 to 1, thus strictly decreasing it, as required.¹²

The PulseNo, PulseMaybe, and now synchronised PulseYesY_S and PulseYesE_S events, handle the needed responses to building movement, as before, except that the calculations are now done using the sampled, quantized (SQ) values rather than the ideal, instantaneous (II) ones. This inevitably leads to disagreement with the ideal calculations in the border country where different behaviour regimes meet (in our case, the border country between the do pulse and don't pulse regimes).¹³ In our case, we have to cope with the possibility that the SQ values dictate a pulse in a situation where the II values don't (which is tolerable, since it will only happen in the border country, where pulses are probable anyway), or that the SQ values don't dictate a pulse in a situation where the II values do (which is intolerable since it may permit the physical system to overshoot the XB bound without the SQ model being aware of it). We must prevent the latter.

¹¹ On a technical level, the building and actuator machines illustrate the pattern whereby synchronised mode events in different machines can instantaneously share values: one event uses an output variable and the others use an input variable with a complementary (CSP style) name.

¹² Note that this critically depends on insisting that intervals of pliant behaviour are left closed and right open.

¹³ Henceforth, we will use SQ to refer to and to label elements and quantities relevant to the level 4 model of Figs. 11, 12, 13 and 14 (and, implicitly, to its unstated level 3 precursor), and II for elements relevant to the level 2 model of Fig. 9, as needed.


Fig. 14 The controller machine, second part

The approach we take is to conservatively adjust the constants Xth, XB in the model to new values Xthsq, XBsq that preclude the intolerable omissions at the price of admitting more superfluous pulses.¹⁴ For more convenient discussion, we also renamed the local variables Δx, w in Fig. 14 by adding a subscript.

Our remarks indicate that whenever PulseNoSQ or PulseMaybeSQ can run, then we must be sure that PulseNoII or PulseMaybeII will also run. This implies a condition on their guards. We take the events individually, starting with PulseNoSQ and PulseNoII. It is clear that the latter is enabled whenever the former is, provided that |x19| < Xthsq ⇒ |xx| < Xth holds. Of course, x19 and xx refer to values at different times, but recalling that Xth was derived by estimating the maximum achievable displacement over a whole interval in (16), one twentieth of the same argument will cover the difference between x19 and xx. So our implication will hold, provided:

\[
X_{thsq} \;\le\; X_{th} \;-\; \frac{1}{20}\Big( E_B T_P^2\,\sqrt{2X_B/(E_B T_P^2) + 1} \;+\; \frac{E_B T_P^2}{2} \Big)
\tag{17}
\]

We see that this is a small correction to Xth , which, for typical parameter values, will be negligible in practice, if not in mathematics, confirming the conjecture in footnote 14.

¹⁴ Speaking realistically, in a genuine earthquake scenario, noise and experimental uncertainty are likely to be such that the differences between the ideal and conservative values of the constants vanish into insignificance. But it is worth checking that the mathematics confirms this.


Turning to PulseMaybeSQ and PulseMaybeII a similar argument applies. Looking at the relevant guards, we see that as well as (17), we will be able to maintain the invariant |xx| < XB provided we make an analogous correction to XB for the purpose of the estimates made in the PulseMaybeSQ guard:

\[
X_{Bsq} \;\le\; X_B \;-\; \frac{1}{20}\Big( E_B T_P^2\,\sqrt{2X_B/(E_B T_P^2) + 1} \;+\; \frac{E_B T_P^2}{2} \Big)
\tag{18}
\]

With these two cases understood, we see that the final two events are covered also. Both PulseYesY_S SQ and PulseYesE_S SQ flip the sign of the greatest contribution to the estimated increment in displacement, based on the same estimate made in PulseMaybeSQ.
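Putting the conservative adjustments together (a sketch based on Eqs. (16)–(18), taking each inequality with equality; not part of the model text):

```python
# Sketch of the conservative threshold adjustments of Eqs. (16)-(18).
import math

def thresholds(XB, EB, TP):
    corr = EB * TP**2 * math.sqrt(2 * XB / (EB * TP**2) + 1) + EB * TP**2 / 2
    Xth    = XB  - corr        # Eq. (16), taken with equality
    Xth_sq = Xth - corr / 20   # Eq. (17)
    XB_sq  = XB  - corr / 20   # Eq. (18)
    return Xth, Xth_sq, XB_sq
```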

10 Refinement, Retrenchment and Other Technical Issues

In Figs. 11, 12, 13 and 14 the only structural directives are DECOMPOSES ActCon_3_Prj in the project file, and the CONNECTS ActCon_4_IF in the various machine files. There are a number of reasons for this. Firstly, we are presuming that the hard work of refining ActConMch_2 to incorporate the sensor, actuator and discretization features will have been achieved in the (unstated) ActConMch_3 machine, the only element of the (unstated) level 3 project ActCon_3_Prj.¹⁵ This understood, the job of decomposing a monolithic ActConMch_3 machine into the components seen in Figs. 11, 12, 13 and 14 is properly covered by the cited directives.

Before continuing, we briefly comment on this by envisaging how Figs. 11, 12, 13 and 14 might be reassembled into a single construct. Let us start with the two Sample_18_S events, shared between SensorMch_4 and ControllerMch_4, and executed synchronously. In a monolithic ActConMch_3 (which would take on the duties of both machines), there would be a single Sample_18 event, with guard clk_pls = (18/20) TP and action x18 := Kxsqs⁻¹ Kxsqs xx. There is no communication, since all the variables are accessible to the one machine. Sample_19 follows a similar pattern, with two variables assigned. Thus is the sensor machine's functionality absorbed into one encompassing machine. The actuator is dealt with similarly, except that the building is involved; the functionality of the building is also absorbed into the single encompassing machine, rather as was the case in the level 2 and earlier models. The earth machine is similarly absorbed, eliminating the need for the shared variable ee.

This account illustrates, in reverse, how the distributed model of Figs. 11, 12, 13 and 14 is arrived at, presuming the preexistence of the monolithic version. Note that it is a deliberate design objective of the multimachine HEB formalism that the monolithic and distributed versions should be, in all important aspects, semantically indistinguishable; see [12].

¹⁵ We can also regard all the previous models as each being in its own single machine project.


Thus, the ActConMch_3 is relatively easily imagined, avoiding some verbosity. Less easy is its relationship to the level 2 machine ActConMch_2—the discussion in Sect. 8, on implicit equality invariants, flags up that the introduction of imprecision via sampling and quantization may not be unproblematic regarding refinement methodology. The immediate problem was avoided by renaming variables x, y to xx, yy in the level 4 model. But this raises the question of what the relationship between x, y and xx, yy ought to be.

It is a truism in formal development that the stronger the invariants you write, the harder the work to prove that they are maintained, but the stronger the assurance that is gained thereby. And conversely. We might thus ease our task by omitting completely any non-trivial relationship between x, y and xx, yy. But this will not do since we still have the level 2 invariant x ≤ XB to establish, which is rendered impossible in the level 4 model without some coupling between x, y and xx, yy.

The obvious relationship to consider is some sort of accuracy bound relating x and xx, and y and yy. Since the damping factor ζ is positive, the dynamics is asymptotically stable, so we can expect the dynamics to be contracting¹⁶ (although a refinement relationship based on this still often requires appropriate conditions on the constants of the system [35]). To see the contracting nature of our dynamics we first need to rewrite (13) and (14) in terms of dimensionally comparable quantities, for example, in terms of x̃ ≡ x and ỹ ≡ y/ω. When this is done, (13) and (14), viewed as a matrix operating on differences in pairs of values of x̃, ỹ, has entries (δij + (−1)^[i≥j] εij), where δij is the identity, and the εij are small and positive, from which the contracting nature of the transformation can be inferred.

¹⁶ In a contracting dynamics, nearby points are driven closer by the dynamics.

With this, we can claim that a single execution of MONITOR in each of the II and SQ systems will maintain a joint invariant of the form ||(x, y) − (xx, yy)||_1̃ ≤ A,¹⁷ provided it is true at the start, but it does not tell us what value we would need to choose for A for this to be true non-trivially. The latter problem would require a global analysis which could be quite challenging. The issue is made the more difficult by the possibility mentioned before, whereby imprecision caused by conservative design in the SQ system causes the SQ system to express a pulse whereas the II system does not. If this happened, the ||.||_1̃ distance between the II and SQ systems would suddenly increase dramatically, even if it was well behaved previously, and it would consequently cause the ||.||_1̃ norm to function poorly as a joint invariant between II and SQ systems, posing a significant impediment to refinement as a convincing notion for relating the II and SQ systems.

¹⁷ The 1̃ refers to an L1 norm on the (instantaneous values of the) tilde variables.

A weakening of the highly demanding refinement concept is the idea of retrenchment [13, 14, 30]. In retrenchment the demand to preserve a 'nearness' invariant is relaxed by permitting different conditions to be specified for the before-state and after-state of a pair of transitions in the two systems being compared, and allows constants, such as A, to be declared locally per transition instance, rather than globally, as in a refinement relation.


This formulation also permits the two systems to part company during exceptional circumstances. It works well enough if the two systems quickly recover 'nearness', or if the models cease to be relevant at all after the exception. In our case, the 'exceptional' regime, requiring pulses, is precisely the raison d'etre of the whole protection system, and it is in this regime (rather than the normal, stable regime when there is no earthquake) in which the behaviours of the II and SQ systems are the most unruly. And although retrenchment, as described in [13, 14, 30], addresses the onset of unruly behaviour quite well, it does not really engage with particular properties of extended periods of unruly behaviour, as we would ideally like in our application.

A further complication of the scenario where the SQ system pulses and the II system does not, is that different events in the two systems are involved in these behaviours (PulseNo and PulseMaybe in II and PulseYesY_S and PulseYesE_S in SQ). Retrenchment and refinement, as usually defined, assume a static (and partial if needed) bijection between operations/events in the two models being compared. This does not cope with the scenario just mentioned, in which overlapping pairs of (names of) events may need to be related at different times.

Thus our reticence in writing down an explicit level 3 system (with its obligation to make clear its relationship to the level 2 system) is further explained by the absence of a suitable species of formal relationship that could be used for the purpose. Without getting embroiled in too many further details, the present case study provides a fertile stimulus for developing a richer formulation of retrenchment and refinement capable of coping with the wealth of phenomena it exhibits.

11 Experiments and Simulations

In this section, we compare the expectations raised by the preceding analytical work, with the outputs of well established conventional earthquake protection design approaches, based on numerical simulation. Our simulations were performed over a time interval from 0 to TMAX, using a control strategy that, as suggested by the analytical work, is defined by a pulse interval TP, allowable relative displacement XB, and an additional ground acceleration variability term eB. At the start of a pulse interval, the simulation chooses whether to apply a pulse based on Eq. (14), predicting a value of x at the end of the interval from the expression hx x + hy y + he (e ± eB), where:

\[
h_x = 1 - \frac{\omega^2 T_P^2}{2}, \qquad
h_y = T_P\,\big(1 - \zeta\omega T_P\big), \qquad
h_e = -\frac{T_P^2}{2}
\tag{19}
\]

using actual values of—or available estimates for—x, y, and e at time t. Figure 15 shows the essence of a Python program that performs simulations using the numerical and scientific libraries NumPy and SciPy, as well as the Matplotlib library for plotting; the complete code is available online [8].


Fig. 15 Python program for numerical simulation [8]
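Since the listing of Fig. 15 does not survive in this rendering, the following is a minimal sketch of the structure that the surrounding prose describes (a sketch only: the function and parameter names follow the prose, the defaults for ζ, ωn and the ground acceleration e(t) are illustrative assumptions, and the published program [8] differs in detail, e.g. it also collects data for plotting):

```python
import numpy as np
from scipy.integrate import odeint

def simulate(TMAX, TP, XB, eB, zeta=0.05, wn=np.pi, e=lambda t: 0.0):
    """Pulse-controlled SDOF simulation over [0, TMAX] with pulse interval TP."""
    hx = 1.0 - (wn * TP) ** 2 / 2.0          # coefficients of Eq. (19)
    hy = TP * (1.0 - zeta * wn * TP)
    he = -TP ** 2 / 2.0

    def x_future(x, y, t):
        # worst-case estimate of x at time t + TP, together with the eB sign used
        cands = [(hx * x + hy * y + he * (e(t) + s * eB), s) for s in (1.0, -1.0)]
        return max(cands, key=lambda c: abs(c[0]))

    def y_new(x, y, t, sign):
        # one possible choice: pick y so the prediction lands exactly on the bound
        target = np.copysign(XB, x)
        return (target - hx * x - he * (e(t) + sign * eB)) / hy

    def advance(x, y, a, b):
        # integrate Eqs. (20)-(21) from time a to time b
        def rhs(state, t):
            xx, yy = state
            return [yy, -2.0 * zeta * wn * yy - wn ** 2 * xx - e(t)]
        sol = odeint(rhs, [x, y], np.linspace(a, b, 20))
        return sol[-1, 0], sol[-1, 1], b

    x, y, t = 0.0, 0.0, 0.0
    n = int(round(TMAX / TP))
    for i in range(n):
        xf, sign = x_future(x, y, t)
        if abs(xf) > XB:                      # apply a pulse: jump in y
            y = y_new(x, y, t, sign)
        x, y, t = advance(x, y, i * TP, (i + 1) * TP)
    return x, y
```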

Function simulate(TMAX, TP, XB, eB) contains two nested functions, one to predict future values of x, and another to adjust current values of y, if needed, when a pulse is called for. In particular, function x_future(x, y, t) estimates x at a time t + TP in the future, returning the estimate and the sign used for the eB term that maximizes the absolute value of the estimate—the worst case. The value returned by function e(t) is the ground acceleration at time t. Function y_new(x, y, t) likewise uses Eq. (14), but in this case does so to find a new value of y that would, one hopes, cause |x(t + TP)| ≤ XB to be satisfied; the sign for eB must be supplied (in this case, by the result from x_future).

As with the HEB model, the simulation (defined by lines 8–13) is broken up into a succession of subintervals, each with duration TP. Between subintervals, a pulse may be applied that changes the value of y instantaneously. During a subinterval, time marches from i TP to (i + 1) TP.¹⁸ Function advance(x, y, a, b), not shown, lets the system evolve from time a to time b, starting from the initial values x(a) and y(a); it returns their values at time b: x(b), y(b), and b. As a side effect, it builds up collections of data for plotting time histories of x and y.

¹⁸ The form for i in range(n) is idiomatic Python for bounded iteration from 0 to n − 1 (inclusive).

With respect to numerical integration, advance solves the system of first order differential equations:

\[
Dx = y
\tag{20}
\]

\[
Dy = -2\,\zeta\,\omega_n\,y \;-\; \omega_n^2\,x \;-\; e(t)
\tag{21}
\]

Fig. 16 Response to harmonic ground motion (Tn = 2 s, ζ = 1%, Z = 1, Ω = 0.37 rad/s): (a) uncontrolled: peak relative displacement x̃ = 1.51 cm at t = 12.5 s; (b) controlled at XB = 80% of peak relative displacement

introduced in Eq. (3) and subsequently redefined in LHS (4) in terms of the variables: ζ, the viscous damping factor (dimensionless fraction of critical damping); and ωn, the undamped circular natural frequency (in units of radians per second). It does so using odeint, a SciPy function based on the Fortran LSODA routine from the ODEPACK library, which uses an Adams predictor-corrector method (when non-stiff problems like ours are encountered). The routine determines step size automatically to ensure that error bounds are satisfied.

Harmonic ground motion. To illustrate the approach, we begin with a simple example after Prucz et al. [29] of an SDOF system, like that of Fig. 3, with a natural frequency ωn = π rad/s and viscous damping factor ζ = 1%. It is subjected to harmonic ground motion z = Z sin Ωt, which has the effect of adding a reversed inertia force −m D²z to the system, with the ground acceleration given by:

\[
e(t) \;\equiv\; D^2 z \;=\; -\Omega^2\,Z\,\sin\Omega t
\tag{22}
\]
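In terms of the simulation sketch given after Fig. 15, this excitation would be supplied as something like the following (hypothetical snippet; Z and Ω take the values quoted below):

```python
# Harmonic ground acceleration of Eq. (22), with Z = 1 and Omega = 0.37 rad/s
import numpy as np

Z, Omega = 1.0, 0.37

def e(t):
    return -(Omega ** 2) * Z * np.sin(Omega * t)
```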


where amplitude Z = 1 and frequency Ω = 0.37 rad/s are given. Thus, the case is one in which the ground motion frequency is lower than the system natural frequency (i.e., Ω < ωn). The system begins at rest, so x0 = y0 = 0.

Time histories of the uncontrolled response are shown in Fig. 16a, where the dimensionally comparable quantities x̃ ≡ x and ỹ ≡ y/ωn are plotted. The peak responses are: x̃(12.5 s) = −0.0151, ỹ(0.985 s) = 0.00315. The predominant response of x̃(t) is a harmonic having the same frequency as that of the ground acceleration; its period is 2π/Ω, or in this case about 17 s. As expected, when Ω ≪ ωn there is little relative motion between the mass and the ground, and the motions are in phase: they reach their peaks at the same time. Superimposed 'wiggles' are (dying) transients induced at the natural frequency of the system, whose undamped natural period Tn = 2 s.

Pulse control can now be employed to limit the response to 80% of the peak relative displacement, which is done by setting XB = 0.0121. Continuing to be consistent with Prucz et al., we set the pulse interval to be on the order of one fourth the natural period, so TP = Tn/4 = 0.5 s. Specific to our approach, the additional ground acceleration variability term eB is set to zero for the moment. Time histories of the controlled response are shown in Fig. 16b, where the pulse trains (in red) have a 'shape' that acts to counterbalance relative displacements, where needed, that would have occurred, so as to keep them roughly within desired limits.

Though the pulse interval here is about five times larger than we would anticipate using in practice, the example motivates the definition of a metric, the exceedance level, that can be used to assess the algorithm's effectiveness as a bounded state control strategy. To quantify the exceedance level, we consider what happens at the endpoint of a TP interval where, if |x(t)| > XB, we add |x(t)| − XB to a running sum S, and define:

\[
E \;=\; 10^3\,S \,/\, (n\,X_B)
\tag{23}
\]

where n is the number of pulse intervals included in sum S. For the Prucz example, that gives an exceedance level E = 9.07. For pulses at 20 times per natural period instead (i.e., for TP = 0.1 s), we have E = 0.0250, and when in addition eB is raised to 0.001, the exceedance level E drops to zero, meaning there are no exceedances.

El Centro ground motion. As noted by Prucz et al., the aim of pulse control is to disrupt, at resonance, the 'gradual rhythmic build-up' of the system response. A more realistic and challenging scenario then is to subject the system to complex ground accelerations that include resonant frequencies, particularly ones near the fundamental natural frequency of a building, which typically produce the largest relative displacements and damage. Used in the design of earthquake resistant structures, the ground accelerations recorded in El Centro, California, during the earthquake of May 18, 1940, have a peak value of 3.13 m/s² (0.319 g), the first 20 s of which are shown in Fig. 17. We now apply them to the system. As before, the natural frequency ωn = π rad/s, so the
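The exceedance metric (23) transcribes directly (a sketch; the list xs is assumed to hold the values of x at the endpoints of the n pulse intervals):

```python
# Sketch of the exceedance metric of Eq. (23).
def exceedance(xs, XB):
    S = sum(max(abs(x) - XB, 0.0) for x in xs)
    return 1e3 * S / (len(xs) * XB)
```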


undamped natural period Tn = 2 s, a value that might correspond to the fundamental natural period of a 20-story building. We use a viscous damping factor ζ = 5%, which is representative of a modern office building and is a value often used in design. For control, the pulse interval TP = 0.1 s.

Time histories of the uncontrolled response are shown in Fig. 18a, where we again plot x̃ and ỹ. The peak responses are x̃(6.37 s) = 0.137, ỹ(11.7 s) = 0.199, which occur during a time period from about 6–13 s into the event, as the system begins oscillating near its undamped natural frequency ωn = π rad/s (with a period Tn = 2 s).

To limit the peak displacement, we apply pulse control at 80% of that value by setting XB = 0.109 (or 10.9 cm) and keep the pulse interval as before, TP = 0.1 s. Time histories of the controlled response are shown in Fig. 18b, where the pulses, shown in red, effectively counterbalance relative displacements to keep them approximately within desired bounds. Limiting displacements even further, to 50 and 40% of the peak value, is likewise shown to be effective, as demonstrated by the time histories in Figs. 18c–d, respectively. To achieve the additional level of control requires that successively more energy be put into the system, with more and sometimes larger pulses, and earlier into the event.

Looking at exceedance for the three levels of controlled response (80, 50, and 40%), we have E = 0.223, 0.664, and 0.561, respectively, which are reduced to zero when the additional ground acceleration term, eB, is increased to at least 0.512, 0.358, and 0.469, respectively. Additional analysis, that might lead to finding good eB settings a priori for anticipated ground motions, is left for future work.

Fig. 17 North-south component of the ground motion recorded at a site in El Centro, California, during the Imperial Valley earthquake of May 18, 1940 (showing first 20 s of the event)

Fig. 18 Response to El Centro ground motion (Tn = 2 s, ζ = 5%, no time delay): (a) uncontrolled: peak relative displacement x̃ = 13.7 cm at t = 6.37 s; (b) controlled at XB = 80% of peak relative displacement; (c) controlled at XB = 50% of peak relative displacement; (d) controlled at XB = 40% of peak relative displacement

12 Conclusions

In this paper, we started by reviewing how the initial ideas of earthquake protection eventually crystallised into a number of distinct approaches, and we focused on the active control approach. We also reviewed Hybrid Event-B as a suitable vehicle for attempting a formal development of an SDOF active protection model. We then pursued the development through various levels of detail, culminating in the distributed sampled and quantized model of Sect. 9. Along the way, particularly in Sects. 8 and 10, we discussed the obstacles to accomplishing this with full formality.

In Sect. 11, we subjected our analytically derived model to simulation using well established numerical tools typically used in earthquake protection engineering. We spot-tested our model both on a simple harmonic excitation, and on the El Centro ground motion data. It was encouraging to see that our model behaved well, despite the relatively small input from the empirical sphere during its derivation. Enhancing the latter can only be expected to improve matters regarding fidelity with conventional approaches.

The present study forms a launchpad for much possible future work. Firstly, there is the fact that our models' behaviour was timed with precision—in reality we will always have stochastic variations in the times of all events. Similar considerations apply to a better characterisation of the additional ground acceleration variability term eB. Taking these issues into account would bring us closer to the level of detail of [16, 20, 21]. Secondly, there is the consideration of the replication of components needed for adequate fault tolerance. Here, at least, we can see that use of standard approaches would address the issue, and would again bring us closer to [16, 20, 21]. Thirdly, we note that the SDOF modelling can readily be enriched to capture the dynamics of a genuine building more accurately. The essentially scalar description we dealt with here could be enriched to encompass a greater number of linear and angular degrees of freedom. This again is relatively standard, at least in the linear dynamics case. Fourthly, there is the investigation of richer formulations of retrenchment and refinement capable of coping with the wealth of phenomena discussed in Sect. 10. A generally applicable approach here would yield many dividends for a wide class of problems of a similar nature. Fifthly, it is regrettable that there currently is no mechanised support for Hybrid Event-B. Nevertheless, progress with the issue just discussed would be a prerequisite for a meaningfully comprehensive coverage of the development route as a whole by mechanical means, even if individual parts could be treated by conventional mechanisation of linear and discrete reasoning. Taking all the above together, there is plenty to pursue in future work.

One final comment. In a recent UK terrestrial TV broadcast [25], various aspects of the construction of Beijing's Forbidden City were described. Not least among

A Simple Hybrid Event-B Model of an Active Control System …

193

these was the capacity of the Forbidden City’s buildings to withstand earthquakes,19 particularly considering that Beijing lies in a highly seismic region. Fundamental to this is the use of bulky columns, which are essentially free standing, to support the weight of the building’s heavy roof, and the use of complex dougong brackets [37, 38] to couple the columns to the roof. The free standing construction allows the ground under the building to slip during powerful tremors without breaking the columns, and the relatively flexible dougong brackets permit relative movement between the columns and other members without risking structural failure. These building techniques were already ancient by the time the Forbidden City was constructed early in the 1400s. The cited broadcast showed a scaled structure on a shaking table withstanding a simulated magnitude 10 quake. So, more recent efforts notwithstanding, the Chinese had the problem of earthquake protection for buildings licked more than two thousand years ago!

References 1. J. Earthq. Eng. Eng. Vib. 2. World Conferences on Earthquake Engineering 3. Abrial, J.R.: Modeling in Event-B: System and Software Engineering. Cambridge University Press, Cambridge (2010) 4. Ahmed, N.: Dynamic Systems and Control With Applications. World Scientific, Singapore (2006) 5. Banach, R.: Formal refinement and partitioning of a fuel pump system for small aircraft in Hybrid Event-B. In: Bonsangue D. (eds.) Proceedings of IEEE TASE-16, pp. 65–72. IEEE (2016) 6. Banach, R.: Hemodialysis machine in Hybrid Event-B. In: Butler, S., Mashkoor B. (eds.) Proceedings of ABZ-16. LNCS, vol. 9675, pp. 376–393. Springer (2016) 7. Banach, R.: The landing gear system in multi-machine Hybrid Event-B. Int. J. Softw. Tools Tech. Transf. 19, 205–228 (2017) 8. Banach, R., Baugh, J.: Active earthquake control case study in Hybrid Event-B web site. http://www.cs.man.ac.uk/~banach/some.pubs/EarthquakeProtection/ 9. Banach, R., Butler, M.: A Hybrid Event-B study of lane centering. In: Aiguier, B., Krob M. (eds.) Proceedings of CSDM-13, pp. 97–111. Springer (2013) 10. Banach, R., Butler, M.: Cruise control in Hybrid Event-B. In: Woodcock Z.L. (ed.) Proceedings of ICTAC-13. LNCS, vol. 8049, pp. 76–93. Springer (2013) 11. Banach, R., Butler, M., Qin, S., Verma, N., Zhu, H.: Core Hybrid Event-B I: single Hybrid Event-B machines. Sci. Comput. Program. 105, 92–123 (2015) 12. Banach, R., Butler, M., Qin, S., Zhu, H.: Core Hybrid Event-B II: multiple cooperating Hybrid Event-B machines. Sci. Comput. Program. 139, 1–35 (2017) 13. Banach, R., Jeske, C.: Retrenchment and refinement interworking: the tower theorems. Math. Struct. Comput. Sci. 25, 135–202 (2015) 14. Banach, R., Poppleton, M., Jeske, C., Stepney, S.: Engineering and theoretical underpinnings of retrenchment. Sci. Comput. Program. 67, 301–329 (2007) 15. Banach, R., Van Schaik, P., Verhulst, E.: Simulation and formal modelling of yaw control in a drive-by-wire application. In: Proceedings of FedCSIS IWCPS-15, pp. 731–742 (2015)



16. Baugh, J., Elseaidy, W.: Real-time software development with formal methods. J. Comput. Civ. Eng. 9, 73–86 (1995) 17. Buckle, I.: Passive control of structures for seismic loads. In: Proceedings of 12th World Conference on Earthquake Engineering. Paper No. 2825 (2000) 18. Chicone, C.: Ordinary Differential Equations with Applications, 2nd edn. Springer, New York (2006) 19. Chopra, A.: Dynamics of Structures: Theory and Applications to Earthquake Engineering, 4th edn. Pearson, Englewood Cliffs (2015) 20. Elseaidy, W., Baugh, J., Cleaveland, R.: Verification of an active control system using temporal process algebra. Eng. Comput. 12, 46–61 (1996) 21. Elseaidy, W., Cleaveland, R., Baugh, J.: Modeling and verifying active structural control systems. Sci. Comput. Program. 29, 99–122 (1997) 22. Gattulli, V., Lepidi, M., Potenza, F.: Seismic protection of frame structures via semi-active control: modelling and implementation issues. Earthq. Eng. Eng. Vib. 8, 627–645 (2009) 23. Gradshteyn, I., Ryzhik, I.: Table of Integrals Series and Products, 7th edn. Academic Press, New York (2007) 24. Kuramoto, H., Teshigawara, M., Okuzono, T., Koshika, N., Takayama, M., Hori, T.: Predicting the earthquake response of buildings using equivalent single degree of freedom system. In: Proceedings of 12th World Conference on Earthquake Engineering. Auckland, New Zealand. Paper No. 1039 (2000) 25. More4 TV: Secrets of China’s Forbidden City. UK Terrestrial TV Channel: More4. (24 July 2017) 26. Olver, F., Lozier, D., Boisvert, R., Clark, C.: NIST Handbook of Mathematical Functions. Cambridge University Press, Cambridge (2010) 27. Platzer, A.: Logical Analysis of Hybrid Systems: Proving Theorems for Complex Dynamics. Springer, Berlin (2010) 28. Popescu, I., Sireteanu, T., Mitu, A.: A comparative study of active and semi-active control of building seismic response. In: Proceedings of DGDS-09, pp. 172–177. Geometry Balkan Press (2010) 29. Prucz, Z., Soong, T., Reinhorn, A.: An analysis of pulse control for simple mechanical systems. J. Dyn. Syst. Meas. Control. 107, 123–131 (1985) 30. Retrenchment Homepage. http://www.cs.man.ac.uk/~banach/retrenchment 31. Rose, B., Baugh, J.: Parametric study of a pulse control algorithm with time delays. Technical report. CE-302-93, North Carolina State University Department of Civil Engineering (1993) 32. Sontag, E.: Mathematical Control Theory. Springer, New York (1998) 33. Soong, T.: Active Structural Control: Theory and Practice. Longman, Harlow (1990) 34. Soong, T., Chu, S., Reinhorn, A.: Active, Hybrid and Semi-Active Control: A Design and Implementation Handbook. Wiley, New York (2005) 35. Tabuada, P.: Verification and Control of Hybrid Systems: A Symbolic Approach. Springer, US (2009) 36. Walter, W.: Ordinary Differential Equations. Springer, Berlin (1998) 37. Wikipedia: Chinese architecture 38. Wikipedia: Dougong 39. Wikipedia: Duhamel’s integral 40. Wikipedia: Earthquake engineering

Understanding, Explaining, and Deriving Refinement Eerke Boiten and John Derrick

Abstract Much of what drove us in over twenty years of research in refinement, starting with Z in particular, was the desire to understand where refinement rules came from. The relational model of refinement provided a solid starting point which allowed the derivation of Z refinement rules. Not only did this explain and verify the existing rules—more importantly, it also allowed alternative derivations for different and generalised notions of refinement. In this chapter, we briefly describe the context of our early efforts in this area and Susan Stepney’s role in this, before moving on to the motivation and exploration of a recently developed primitive model of refinement: concrete state machines with anonymous transitions.

1 Introduction: Z Refinement Theories of the Late 1990s At the Formal Methods Europe conference at Oxford in 1996 [20], there was a reception to celebrate the launch of Jim and Jim’s (Woodcock and Davies) book on Understanding Z [30]. This was a fascinatingly different book on Z for those with a firm interest in Z refinement like ourselves, one as aspirational and inspirational as the slightly earlier “Z in Practice” [5]. It contained a full derivation of the downward simulation rules for states-and-operations specifications, with inputs and outputs, all the way from Hoare, He and Sanders’ relational refinement rules [22], with the punny “relaxing” and “unwinding” important steps of the derivation process. In addition, unlike most Z textbooks, it also included upward simulation rules to achieve completeness—we were told that these had turned out to be necessary in an exciting but mostly confidential industry project called “Mondex” [29]. There was also a
strong hint then that the Mondex team couldn't tell us yet about everything they had discovered about refinement while doing this research.

At that same conference, we presented the most theoretical Z refinement paper we had produced so far [8], which constructed a common refinement of two Z specifications, in support of our work on viewpoint refinement, extending ideas of Ainsworth, Cruickshank, Wallis and Groves [3] to also cover data refinement. To satisfy our funders EPSRC that we were being practical and building prototype tools, we had also implemented [6] this construction in the Generic version of the Z Formaliser tool [19]. This tool was being developed concurrently by Susan Stepney, who provided us with advice and debugging, and we all found out more about Smalltalk in the process.

Our viewpoint unification technique, as we called it, gradually relaxed the constraints on different specifications of the same Z operation that we needed a common refinement of, to constructively show consistency, and continue from there. If the postconditions were different but the preconditions identical, conjunction of operations was sufficient. For where the postconditions differed, a form of disjunction delivered a common refinement. If the state spaces were different, we could use a "correspondence relation" to still find a common data refinement. But that is where our desire to allow viewpoints to take different perspectives hit the buffers as far as conventional Z refinement went. In particular, two viewpoint operations with different inputs or outputs could never have a common refinement according to the theory as presented in [30] or Spivey's earlier Z bible [27]. Which was odd, as both of these books already contained "refinement" examples that added inputs or outputs—not least Spivey's "birthday book" running example.

Based on all this, we set out to reconstruct a sensible theory for how refinement in Z might also include changes to inputs and outputs. The examples in the noted textbooks formed a starting point for conservative generalisation of the standard rules. We extracted some of the informal and common sense rationales, which can be found in the paper "IO Refinement in Z" [7]. The final version of this paper, inspired by the derivations in [30], reverted from common sense reasoning to solid mathematics to establish the rules for IO refinement. The steps in the Woodcock and Davies derivation of Z refinement from relational refinement where concrete ("global") input and output sequences were equated to their abstract ("local") counterparts were ripe for generalisation. So, initialisation included potential transformation of inputs, and finalisation transformation of outputs—with constraints, such as avoiding the loss of information in outputs.1

1 Our little joke was to call this the "every sperm is sacred" principle, in reference to Monty Python.

Almost in parallel, the additional output from the Mondex project that had been hinted at appeared. "More Powerful Data Refinement in Z" by Stepney, Cooper and Woodcock [28] was presented at the yearly Z conference. Its central observation was that in the derivation of Z refinement from relational rules, the finalisation was a powerful instrument for generalisation. In the standard approach, it would throw away the abstract state and any remaining inputs, and directly copy the output sequence across to the global state. Changing the type of outputs was one obvious generalisation, and had indeed been necessary in the Mondex case study.

In our work on viewpoint specification and consistency, it had been clear from the beginning that we would need to be looking at reconciling behaviour-centred and state-centred specifications, as both of these were expected to be used in the Open Distributed Processing reference model [11]. We explored the cross-over between the world of Z and the world of process algebras in a variety of ways: translating LOTOS to Z [14], comparing the respective refinement relations [18], integrating CSP and Object-Z [26], and adding process algebra features such as internal operations to Z [17]. However, none of these felt like the definitive solution or provided a comprehensive, let alone complete, refinement basis—until this thread of research was also infected by the derivation concept. Using relational data types, and their finalisations as making the correct observations (often: refusals) was a critical step forward, represented in a series of papers deriving concurrent refinement relations and simulation rules to verify them from relational characterisations, under the heading of "relational concurrent refinement" [12, 15].

2 Concrete State Machines with Anonymous Transitions Our first book on refinement [16] continued from the work on generalising refinement that we had done to support viewpoint specification. It grew almost like a bunch of flowers, with nearly a new generalisation per chapter, plus an extra chapter of unopened buds, “Further Generalisations” that we had envisaged but did not develop in detail or with examples. Relational concurrent refinement gets a brief mention in the second (2014) edition of the book, as our preferred method of integrating state-focused and behaviour-focused methods. We recently completed our second book on refinement [13], in which we take a rather different approach. We again conclude with relational concurrent refinement, but this time from a more inclusive perspective. We aimed to provide a comprehensive story of different refinement relations, mostly not of our own construction, and how they are related and reflected in existing formal methods and languages. In relating different refinement notions, of course we considered generalisation hierarchies as established by van Glabbeek [21] and Leduc [24], but also the more conceptual relationships between them. In that dimension, it almost becomes a genealogy of refinement relations. The first regular chapter in the new book [13] covers labeled transition systems as the obvious basic model for behavioural formalisms. There are states (including initial ones), and transitions between states, labelled with actions from some alphabet. Observations (traces, refusals, etc.) are in terms of these actions, and the states themselves contain no information beyond the behaviour from that point on. When later in [13] we get to the basic relational refinement model that has been central to our work for the last twenty years, there is relevance both to states and to actions. Observations are defined via finalisation of the states at the end of a trace;
refinement is inclusion of such observations, universally quantified over all traces, where a trace consists of a sequence of actions. So these are abstract state machines (as finalisation modulates the state observations) with visible transitions—labelled with an action for every transition step. Clearly that is a few steps away from the labeled transition model. How do we naturally get to that point? Looking ahead to explaining refinement in formalisms such as Event-B and ASM, how do we justify that these methods do not seem to care as much about the labels on transitions as Z (or labeled transition systems, for that matter) does? Will a deeper understanding of this improve our coverage of what "stuttering steps" and "refining skip" really mean?

We decided this called for a basic system model that is in some sense dual to labeled transition systems. Namely, we wanted a model in which the observations are based on states, and transitions do occur but have no individual meaning or observability other than through the effect they have on the state. So from a change of state we can draw the conclusion that "something must have happened" but no more than that, and in particular we also cannot assume the converse, that nothing can have happened if the state is unchanged between two observations.

Has such a model been described previously? It comes close to an abstract view of sequential programs, with possibilities for observation only crystal clear once the program has terminated. There are some candidates of formal methods in the literature which take related views, but they are all a bit more concrete than we would like in terms of the state spaces they assume. Action systems [4] have a rather concrete view of the state space, as being made up of variables, modified by assignments. The refinement theories of Abadi and Lamport [1] also have anonymous transitions, including stuttering ones, on a state space made up of variables. Hoare and He's Unifying Theories of Programming [23] (UTP) in their basic form come close to what we were looking for, also on state spaces made up of variables, although the better known variants of UTP are the ones with auxiliary variables encoding behaviour.

The model we defined has states, initial states, and a transition relation that only records that some states occur before some other states. We call it CSMAT: Concrete State Machine with Anonymous Transitions.

Definition 1 (CSMAT) A CSMAT is a tuple (State, Init, T) where State is a non-empty set of states, Init ⊆ State is the set of initial states, and T ⊆ State × State is a reflexive and transitive transition relation. We write p −→T q for (p, q) ∈ T, leaving out T when it is clear from the context.

This is in close analogy with −→ in LTSs, and at the same time also with =⇒ because T is reflexive and transitive, and thus equal to T*. In [13] we explain some, but not all, of the "design decisions" of this definition, and even then not in great detail. The intended contribution of this article is to highlight and explore these. Given that this is an artificial intermediate station in the theory development, all these decisions are up for discussion. Their best defence is if they provide some additional insight into refinement, or illuminate and foreshadow issues cropping up later in the theory development.
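To make the tuple form of Definition 1 concrete, here is a minimal Python sketch (illustrative only, not code from [13]; the helper names csmat and rt_closure are invented), representing a CSMAT as finite sets and closing a given step relation reflexively and transitively so that T = T*.

```python
from itertools import product

def rt_closure(states, steps):
    """Reflexive-transitive closure of an arbitrary step relation over `states`."""
    T = set(steps) | {(s, s) for s in states}            # reflexivity
    changed = True
    while changed:                                        # naive transitive fixpoint
        changed = False
        for (p, q), (q2, r) in product(list(T), list(T)):
            if q == q2 and (p, r) not in T:
                T.add((p, r))
                changed = True
    return T

def csmat(states, inits, steps):
    """A CSMAT per Definition 1: (State, Init, T), with T reflexive and transitive."""
    assert states and inits <= states
    return (frozenset(states), frozenset(inits), frozenset(rt_closure(states, steps)))

# A two-state example: 'idle' can move to 'done'; T then contains
# (idle, idle), (done, done) and (idle, done), and equals its own closure T*.
M = csmat({"idle", "done"}, {"idle"}, {("idle", "done")})
```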


State machine: This is justifiable already as we have states and transitions. A restriction to finite states seems unnecessary here. Expressiveness matters, but will always be secondary in a basic model where anything complex will look clunky anyway; computability or Turing-completeness of the model is not an important concern. Some of the more eccentric problems in refinement, around infinite traces and possible unsoundness of upward simulation, disappear if our model does not allow for infinite branching. We have not imagined models so abstract that they do not in some way contain that-what-is and that-what-happens, especially not when that-what-is is potentially represented by the possible futures, i.e. that-what-may-still-happen. Most machine models in theoretical computer science (Turing machines, stacks, registers, evaluation models for lambda calculus) are state machines with a particular structure of state anyway.

Non-empty set of states: This is a somewhat arbitrary choice—the trivial CSMAT-like structure with no states and hence no initial states or transitions is excluded. However …

Initial states: We do not insist on the set of initial states being non-empty or even just a singleton. Allowing the empty set means we have a large collection of trivial state machines that would behave very interestingly if it wasn't for the fact they could never start; however, for a given transition relation, set inclusion on initial states might induce some lattice-like structure, and retaining an extremal element in that may be useful. We had initially not been sure about allowing multiple initial states in the preceding chapter on LTSs. It seemed an unnecessary restriction to insist on a single initial state, but then we found that not doing so meant we needed to talk about internal versus external choice earlier than we wished to, and in a non-orthogonal way: LTSs with multiple initial states can be viewed as modelling the possibility of internal choice in initial states only. For CSMATs, the decision was forced towards multiple initial states by wanting a non-trivial notion of a state that could or would (not) lead to termination—effectively introducing external choice at initialisation only, which makes more sense for CSMATs than for LTSs, as we will explain below.

Non-determinism: As well as coming in via multiple initial states, non-determinism is implicitly present when transitions are characterised by a relation. Our excuse is that we want to use this model for refinement—if descriptions are deterministic, there is nothing left to refine.2

2 One of our most enlightening paper rejections was one for a 1990s ZUM conference, where we had argued the opposite, namely that data refinement could introduce non-determinism, but a reviewer explained how this was entirely illusory, as such non-determinism could never be made visible in external observations. Of course this holds particularly for formal methods like Z where the final refinement outcome is only beholden to the initial specification and not to any detail introduced along the way like it is in for example Event-B [2], where refinement of deterministic systems can indeed be entirely meaningful.

Concrete: We call this a concrete state machine but after this single definition that remains entirely a statement of intent. Comparing the definition to that of
labeled transition systems, we have merely removed information that might be observed: the labels on transitions. That abstraction by itself does not make the model concrete, of course. The real contrast is with abstract state machines, as in the standard relational refinement theory, where the state is not directly observable—we signpost here that it will be the states themselves that will occur in observations, and definitions of observations of CSMATs will be seen to comply with that. Anonymous: Transitions are anonymous, omitting the labels that are included in the LTS transition relation. As a consequence, virtually all of the notions of observation that LTSs provide and the refinement relations that are based on such observations become trivial in this model. At a deeper level this means that we should not look at this as a reactive model: we have removed the handle for the environment to be interacting with the system. This makes it a model of passive observation instead. Transitive: Our reasoning for making the transition relation transitive is the thought that if we cannot observe transitions, this implies that we also cannot count transitions as individual steps. This is definitely a design choice where we could have gone the other way. Turing machines, for example, are not normally viewed as labeling their steps; but the associated theory of time complexity relies on being able to count them. A bit later in the theory development, the decision on transitivity will prove to have an adverse effect: it will not be preserved under abstraction functions. If we are going to be looking at action refinement later, or at m-to-n simulation diagrams in ASM [25] where the labels do not matter so much, as we do later in [13], it is convenient to be able to move between looking at a single step and multiple steps. The main justification for transitivity is closely tied to reflexivity. If we observe a system in a way that is (unlike Turing machine time complexity) not synchronised with the system’s internal evolution, or maybe even in a continuous time model, there may be consecutive observations of the same state value. A transitive and reflexive transition relation between such observations allows us to not distinguish between the three different cases of this—which has to be the more abstract view. Reflexive: These three cases are: • nothing has happened; • something has happened, but it is not visible at the level of abstraction we are observing the system at; • multiple state changes have happened, returning us to the initial state. Look at this as different versions of someone at a traffic light. The light was red, they blinked, and when they opened their eyes again it was red. The state of the traffic lights might be identical; their light might have remained red but the other flows of traffic might have changed in the meantime; or they might have blinked long enough for their light to go through an entire cycle. Implicitly the third case will be noticeable in the transition relation anyway, as it also records what we would have seen if we had opened our eyes a little earlier. The first two really do not need to be distinguished. “Nothing happens” is often
an abstraction anyway, for example the empty statement in a busy-waiting loop in a program is an abstraction of passing time. The first two cases might be called "stuttering steps". The second case in particular relates to "refining skip", which serves a variety of roles in different formal methods, sometimes causing significant problems. We have analysed this previously [9, 10] and called the second case a "perspicuous" operation. At the more abstract level, the operation has no visible effect; but a refinement might provide some behaviour at a greater level of detail. The book contains a separate chapter on perspicuous operations and whether and how they relate to internal operations and the consequences this has for refinement. It is also the place where we deal with livelock or divergence: the idea that nothing visible or externally controllable happens infinitely often.

Reflexivity of the transition relation has an important side effect: it means that the transition relation is total, i.e. from every state there is a possible "transition", if only to that state itself. So if we wanted to define a notion of a computation that stops for some (positive or negative) reason, we could not do that by finding states where T fails to define a next state.

No final states: Finite state machines (the ones that accept regular languages) have final or accepting states. It would be possible to add those to CSMATs, but then observations would have to respect that, at some cost of complexity of description. Looking forward to relational data types in later chapters having finalisations which are (typically) applicable in every state, we decided against it here.

Having explained our decisions in defining CSMATs, we now briefly consider the possible notions of observation that go with it and form the bases for refinement on CSMATs. The most elementary of these simply characterises the states that are reachable in M, starting from a state in Init and following T.

Definition 2 (CSMAT observations) For a CSMAT M = (State, Init, T) its observations are a set of states defined by

O(M) = {s : State | ∃ init : Init • (init, s) ∈ T}

The standard method of deriving a refinement relation when the semantics generates sets is set inclusion:

Definition 3 (CSMAT safety refinement) For CSMATs C = (S, CI, CT) and A = (S, AI, AT), C is a safety refinement of A, denoted A ⊑S C, iff O(C) ⊆ O(A).

This is called "safety refinement" because the concrete system cannot end up in states that the abstract system disallows. In common with other safety-oriented refinement relations, doing nothing is always safe, so if either CI or CT is empty then so is O(C) and hence (S, CI, CT) refines any CSMAT on the same state space S. Comparing only CSMATs on the same state space is based on a form of "type correctness", as the state doubles up ("concrete"!) as the space of observations.
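Continuing the sketch above (names again invented for illustration), Definitions 2 and 3 translate directly into set comprehensions: O(M) collects the states reachable from an initial state under T, and safety refinement is inclusion of those observation sets for machines over the same state space.

```python
def observations(M):
    """O(M) = { s | (init, s) ∈ T for some init ∈ Init } (Definition 2)."""
    states, inits, T = M
    return {s for s in states if any((i, s) in T for i in inits)}

def safety_refines(A, C):
    """A ⊑S C iff O(C) ⊆ O(A), for CSMATs on the same state space (Definition 3)."""
    assert A[0] == C[0], "safety refinement compares machines over one state space"
    return observations(C) <= observations(A)

# A machine that can never start observes nothing, so it refines anything
# on the same state space, as noted in the text.
A = csmat({"idle", "done"}, {"idle"}, {("idle", "done")})
C = csmat({"idle", "done"}, set(), set())
assert safety_refines(A, C)
```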


Given reflexivity and totality of T, the best method we have come up with for characterising "termination" is the absence of non-trivial behaviour, so a terminating state is one which only allows stuttering, i.e. it is linked by T only to itself.

Definition 4 (CSMAT terminating states and observations) The terminating states and terminating observations of a CSMAT M = (S, Init, T) are defined by

term(M) = {s ∈ S | ∀ t ∈ S • (s, t) ∈ T ⇒ s = t}
OT(M) = O(M) ∩ term(M)

Set inclusion on these observations we have called "partial correctness" as it is very close to that traditional correctness relation for programs: if the computation terminates, it delivers the correct results; and when it does not, we impose no constraints.

Definition 5 (CSMAT partial correctness refinement) For CSMATs C = (S, CI, CT) and A = (S, AI, AT), C is a partial correctness refinement of A, denoted A ⊑PC C, iff OT(C) ⊆ OT(A).
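In the same illustrative sketch (invented helper names), terminating states are those whose only T-successor is themselves, and partial correctness refinement compares only the terminating observations:

```python
def terminating(M):
    """term(M): states whose only T-successor is themselves (Definition 4)."""
    states, _, T = M
    return {s for s in states if all(t == s for t in states if (s, t) in T)}

def terminating_observations(M):
    """O_T(M) = O(M) ∩ term(M)."""
    return observations(M) & terminating(M)

def partial_correctness_refines(A, C):
    """A ⊑PC C iff O_T(C) ⊆ O_T(A) (Definition 5)."""
    return terminating_observations(C) <= terminating_observations(A)

# In machine M from earlier, 'done' is terminating but 'idle' is not,
# because 'idle' also has the non-stuttering transition to 'done'.
assert terminating(M) == {"done"}
```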

To get a definition of "total correctness", we would normally have to add that the concrete computation is only allowed to not terminate whenever that is also allowed by the abstract one. The concrete computation not (ever) terminating is characterised by OT(C) = ∅, and the same for the abstract computation is then OT(A) = ∅. The former condition should then imply the latter. But this is very much an all-or-nothing interpretation of termination, that does relate closely to ideas of termination and refinement in action systems and Event-B.

The word "whenever" in the informal description implies a quantification of some kind. If we take that over the entire (shared) state space, i.e. that a state must be a terminating one in the concrete system whenever the same state is terminating in the abstract system, this forces equality between the sets of terminating states in the refinement definition, so is not very useful. A different way of looking at it is that "whenever" implies a quantification over all initial states—and this invites a different view of what the different initial states represent. Our definition of observations above only considers whether states (or terminating states) can be reached from some initial state. This implies that, when there are multiple initial states, we do not know or we do not care in which of these the computation started. Effectively, the CSMAT starts its operation by an (internal) nondeterministic choice of one of the possible initial states. We could also make this an external choice: so different initial states represent a variable input to the system. This would make observations a relation between the initial state chosen and the final state observed—in other words, we get relational observations. Thus, we can define relational observations that connect an initial state to another (final) state as follows.

Definition 6 (Relational observations of a CSMAT) For a CSMAT M = (State, Init, T) its relational observations and terminating relational observations are relations defined as

R(M) = (Init × State) ∩ T
RT(M) = (Init × term(M)) ∩ T

Analogous refinement relations can be defined using these observations.

Definition 7 (Relational refinements for CSMATs) For CSMATs C and A,
• C is a (relational) trace refinement of A, denoted A ⊑RT C, iff R(C) ⊆ R(A);
• C is a relational partial correctness refinement of A, denoted A ⊑R C, iff RT(C) ⊆ RT(A).

We can now extend partial correctness meaningfully to total correctness, see [13] for a calculation justifying the additional condition.

Definition 8 (Total correctness refinement for CSMATs) For CSMATs C and A, C is a total correctness refinement of A, denoted A ⊑R C, iff RT(C) ⊆ RT(A) and dom RT(A) ⊆ dom RT(C).

Although we have now defined relational observations, we cannot yet use simulations to verify them. This is due to the state values being directly observable, i.e. we could only ever link fully identical states in a simulation anyway. At this point we felt we had explored the space of meaningful refinement on CSMATs. ([13] also contains a state trace variant of the semantics.)
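The relational observations and refinements of Definitions 6 to 8 can be sketched in the same style (helper names invented; dom is the domain of a relation):

```python
def rel_observations(M):
    """R(M) = (Init × State) ∩ T: initial state chosen versus state observed."""
    states, inits, T = M
    return {(i, s) for i in inits for s in states if (i, s) in T}

def rel_term_observations(M):
    """R_T(M) = (Init × term(M)) ∩ T."""
    return {(i, s) for (i, s) in rel_observations(M) if s in terminating(M)}

def trace_refines(A, C):                 # A ⊑RT C (Definition 7)
    return rel_observations(C) <= rel_observations(A)

def rel_pc_refines(A, C):                # relational partial correctness (Definition 7)
    return rel_term_observations(C) <= rel_term_observations(A)

def total_correctness_refines(A, C):
    """Definition 8: R_T(C) ⊆ R_T(A) and dom R_T(A) ⊆ dom R_T(C)."""
    RA, RC = rel_term_observations(A), rel_term_observations(C)
    dom = lambda R: {i for (i, _) in R}
    return RC <= RA and dom(RA) <= dom(RC)
```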

What are the baby steps that take us towards abstract data types? First, observations restricted us to considering the same state space between concrete and abstract systems. Echoing our earlier work on output refinement, if we allowed ourselves to transform output types "at the edge of the system", we could relax that restriction. So what properties should such a transformation have? It should certainly apply to every possible "concrete" state, as otherwise we would have concrete observations that had no abstract counterparts. For a given concrete observation, we should also be able to reconstruct the corresponding abstract observation uniquely—this is the same "no information loss" principle for output transformations. As our observations are states, this together implies that the transformation is a total function from concrete to abstract states. Reassuringly, these tend to crop up in the most elementary definitions of simulations as well, for example in automata. Can we define simulations between CSMATs on this basis? As it turns out, not quite—more on this below. Instead, we define the application of such a "state abstraction" on a CSMAT.

Definition 9 (State abstraction on CSMATs) Given a (total) function f : S → S′, the state abstraction of a CSMAT M = (S, Init, T) under f is the state machine f(M) = (S′, Init′, T′) defined by

Init′ = {f(s) | s ∈ Init}
T′ = {(f(s), f(s′)) | (s, s′) ∈ T}

Here is where we might regret an earlier design decision, namely transitivity. The state abstraction image is not necessarily a CSMAT, as its transition relation may not be transitive when the function is not injective. If states s1 and s2 have the same image under f, a path ending in s1 may join up with a path beginning in s2, creating a connection that may not have existed in the original CSMAT. Injectivity of the abstraction function is not the correct fix for this. Elementarily, it would not establish an abstraction but merely a renaming, an isomorphism. In looking at the effect of an abstraction function on reflexivity we see why non-injectivity may actually be required.

State abstraction does preserve reflexivity of the transition relation. Moreover, it can introduce stuttering steps in the abstraction that were actual changes of state in the original: when both the before state and the after state of a step are abstracted to the same state. This links a non-change in the abstracted machine to a change in the concrete machine, i.e. it highlights a perspicuous step as discussed above.

The effect on "termination" is problematic again, though. A system that never terminates, moving between its two states, can be abstracted to a one state system that by our definition of termination (no transitions except to itself) always terminates. Abstract systems are typically expected to terminate less often, rather than more often, than concrete ones in refinement relations. Similarly, by collapsing parts of the state space to a single point, unbounded concrete behaviour (possibly interpreted as divergence) can be collapsed to stuttering in the abstract model. Again, we would expect concrete systems to have less rather than more divergence.

A next step from considering abstraction functions in general is to look at the structure of the state space, and define specific abstraction functions from it. For example, if the state space is made up of the values of a fixed collection of named variables, projection onto the set of observable (global) variables is a meaningful abstraction function in the sense described above.

In [13], this particular refinement model has turned out to be illuminating when thinking about divergence, about internal operations and perspicuous operations—as well as when looking at refinement in notations (such as ASM, B, and Event-B) that do not fully conform to the abstract relational datatype model as used in Z. Success in this undertaking at the most abstract theoretical level was not quite achieved. Ideally, we would have found a model M such that the abstract relational model is in some sense the minimal common generalisation of M and labeled transition systems. Maybe those models are just too subtly different. Time will tell. What we do know is that the theory of refinement in state-based languages is a lot richer than first appeared in the 1990s. Susan and colleagues' work in the late 90s initiated a line of thinking that has developed the theory and practice in a number of ways that were probably not foreseen when the first generalisations appeared.
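A small sketch of Definition 9 in the same style (invented names) illustrates the point made above: applying a non-injective abstraction function to the transition relation, without re-closing it, can break transitivity.

```python
def state_abstraction(f, M):
    """f(M) per Definition 9. The image of T is not re-closed, so it can fail
    to be transitive when f is not injective."""
    states, inits, T = M
    return (frozenset(f(s) for s in states),
            frozenset(f(i) for i in inits),
            frozenset((f(p), f(q)) for (p, q) in T))

# Collapse s1 and s2 to a single point 'm'. Concretely a -> s1 and s2 -> b but
# not a -> b; abstractly a -> m and m -> b, yet (a, b) is missing, so the image
# relation is no longer transitive and the result is not a CSMAT.
N = csmat({"a", "s1", "s2", "b"}, {"a"}, {("a", "s1"), ("s2", "b")})
f = lambda s: "m" if s in {"s1", "s2"} else s
absN = state_abstraction(f, N)
assert ("a", "m") in absN[2] and ("m", "b") in absN[2] and ("a", "b") not in absN[2]
```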


References 1. Abadi, M., Lamport, L.: The existence of refinement mappings. Theor. Comput. Sci. 2(82), 253–284 (1991) 2. Abrial, J.R.: Modelling in Event-B. CUP, Cambridge (2010) 3. Ainsworth, M., Cruickshank, A.H., Wallis, P.J.L., Groves, L.J.: Viewpoint specification and Z. Inf. Softw. Technol. 36(1), 43–51 (1994) 4. Back, R.J.R., Kurki-Suonio, R.: Distributed cooperation with action systems. ACM Trans. Program. Lang. Syst. 10(4), 513–554 (1988) 5. Barden, R., Stepney, S., Cooper, D.: Z in Practice. BCS Practitioner Series. Prentice Hall, New York (1994) 6. Boiten, E.: Z unification tools in generic formaliser. Technical report 10-97, Computing Laboratory, University of Kent at Canterbury (1997) 7. Boiten, E., Derrick, J.: IO-refinement in Z. In: Evans, A., Duke, D., Clark T. (eds.) 3rd BCSFACS Northern Formal Methods Workshop. Springer (1998). https://ewic.bcs.org/content/ ConWebDoc/4354 8. Boiten, E., Derrick, J., Bowman, H., Steen, M.: Consistency and refinement for partial specification in Z. In: Gaudel and Woodcock [20], pp. 287–306 9. Boiten, E.A.: Perspicuity and granularity in refinement. In: Proceedings 15th International Refinement Workshop, EPTCS, vol. 55, pp. 155–165 (2011) 10. Boiten, E.A.: Introducing extra operations in refinement. Form. Asp. Comput. 26(2), 305–317 (2014) 11. Boiten, E.A., Derrick, J.: From ODP viewpoint consistency to integrated formal methods. Comput. Stand. Interfaces 35(3), 269–276 (2013). https://doi.org/10.1016/j.csi.2011.10.015 12. Boiten, E.A., Derrick, J., Schellhorn, G.: Relational concurrent refinement II: internal operations and outputs. Form. Asp. Comput. 21(1–2), 65–102 (2009). http://www.cs.kent.ac.uk/ pubs/2007/2633 13. Derrick, J., Boiten, E.: Refinement – Semantics, Languages and Applications. Springer, Berlin (2018) 14. Derrick, J., Boiten, E., Bowman, H., Steen, M.: Viewpoints and consistency: translating LOTOS to Object-Z. Comput. Stand. Interfaces 21, 251–272 (1999) 15. Derrick, J., Boiten, E.A.: Relational concurrent refinement. Form. Asp. Comput. 15(1), 182– 214 (2003) 16. Derrick, J., Boiten, E.A.: Refinement in Z and Object-Z, 2nd edn. Springer, London (2014). https://doi.org/10.1007/978-1-4471-0257-1 17. Derrick, J., Boiten, E.A., Bowman, H., Steen, M.W.A.: Specifying and refining internal operations in Z. Form. Asp. Comput. 10, 125–159 (1998) 18. Derrick, J., Bowman, H., Boiten, E., Steen, M.: Comparing LOTOS and Z refinement relations. In: FORTE/PSTV’96, pp. 501–516. Chapman & Hall, Kaiserslautern (1996) 19. Flynn, M., Hoverd, T., Brazier, D.: Formaliser – an interactive support tool for Z. In: Nicholls J.E. (ed.) Z User Workshop, pp. 128–141. Springer, London (1990) 20. Gaudel, M.C., Woodcock, J.C.P. (eds.): FME’96: Industrial Benefit of Formal Methods, Third International Symposium of Formal Methods Europe. Lecture Notes in Computer Science, vol. 1051. Springer (1996) 21. van Glabbeek, R.J.: The linear time - branching time spectrum I. The semantics of concrete sequential processes. In: Bergstra, J., Ponse, A., Smolka S. (eds.) Handbook of Process Algebra, pp. 3–99. North-Holland (2001) 22. He, J., Hoare, C.A.R., Sanders, J.W.: Data refinement refined. In: Robinet, B., Wilhelm R. (eds.) Proceedings of ESOP 86, Lecture Notes in Computer Science, vol. 213, pp. 187–196. Springer, Berlin (1986) 23. Hoare, C.A.R., He, J.: Unifying Theories of Programming. Prentice Hall, Englewood Cliffs (1998) 24. Leduc, G.: On the role of implementation relations in the design of distributed systems using LOTOS. Ph.D. 
thesis, University of Liège, Liège, Belgium (1991)


25. Schellhorn, G.: ASM refinement and generalizations of forward simulation in data refinement: a comparison. Theor. Comput. Sci. 336(2–3), 403–435 (2005). https://doi.org/10.1016/j.tcs. 2004.11.013 26. Smith, G., Derrick, J.: Specification, refinement and verification of concurrent systems - an integration of Object-Z and CSP. Form. Methods Syst. Des. 18, 249–284 (2001) 27. Spivey, J.M.: The Z Notation: A Reference Manual. International Series in Computer Science, 2nd edn. Prentice Hall, Upper Saddle River (1992) 28. Stepney, S., Cooper, D., Woodcock, J.: More powerful data refinement in Z. In: Bowen, J.P., Fett, A., Hinchey M.G. (eds.) ZUM’98: The Z Formal Specification Notation. Lecture Notes in Computer Science, vol. 1493, pp. 284–307. Springer, Berlin (1998) 29. Woodcock, J., Stepney, S., Cooper, D., Clark, J., Jacob, J.: The certification of the mondex electronic purse to ITSEC level E6. Form. Asp. Comput. 20(1), 5–19 (2008). https://doi.org/ 10.1007/s00165-007-0060-5 30. Woodcock, J.C.P., Davies, J.: Using Z: Specification, Refinement, and Proof. Prentice Hall, New York (1996)

Oblique Strategies for Artificial Life Simon Hickinbotham

Abstract This paper applies Eno and Schmidt's Oblique Strategies to the research paradigm of fostering major evolutionary transition in Artificial Life. The Oblique Strategies are a creative technique for moving projects forward. Each strategy offers a non-specific way of forming a new perspective on a project. The practitioner can try as many strategies as needed until a new way forward is found. Eight randomly selected strategies were applied to the problem. Each strategy was considered for sufficient time to either sketch out a new research direction or to reject the strategy as inappropriate. Five of the eight strategies provoked suggestions for research avenues. We describe these new ideas, and reflect upon the use of creative methodologies in science.

1 Introduction Creative projects of any scale can experience 'blocks'. When a project is blocked, it seems impossible to move forward without making the piece worse—even starting again seems to be pointless, since it will probably lead us back to the place we currently find ourselves. The original spirit which inspired the project appears to have diminished and the remaining prospect of a useful result (any useful result) appears to be strewn with difficulties. The Oblique Strategies [4] were developed by the artist Peter Schmidt and Brian Eno, a musician with an enviable reputation for repeated creative innovation. Eno seems to be able to clear creative blocks quite easily, and has worked with an impressive range of creative artists since the early 1970s. The Oblique Strategies are a set of short statements, originally printed on cards similar to playing cards, which provoke the user to thinking about their project in new ways. The user can read as few or as many of the strategies as they like—the point is to move the viewpoint of the status of the project to a new perspective, and thus to find a way of removing the block.


One of the features of the Oblique Strategies is that they are purposely non-specific, with the goal that they can be applied to any creative project. In this essay, we’ll look at applying the Oblique Strategies to the research field of Artificial Life (ALife)—a field of research with the goal of discovering the principles of “life as it could be” [9]. This raises the question: Is ALife ‘blocked’? ALife researchers (including the author) would deny this, but it is possible to make the argument that there are currently no particularly strong research leads that take the field beyond the emergence of selfreplicating entities. The initial promise of results [1, 11, 12] regarding the emergence of self-replicators has stagnated: self-replicators suffer from parasitic attacks in these systems and do not seem to generate new levels of complexity after an initial period of innovation. The (well, a) current goal of ALife concerns the search for artificial systems that can generate new levels of complexity beyond these self-replicator systems. In short, we seek systems that exhibit a recognisable major transition [13] in their evolution. In addition, another particular ALife maxim needs to be observed: to create the conditions for life to emerge without enforcing the emergence—von Neumann [14] was aware of this and famously discusses the issue of “composing away the problem”. Here we will use the Oblique Strategies to try and foster new thinking about how to work towards this objective. At this point we switch from impersonal narrative to the first person, where ‘I’ is the author. This is important because the Oblique Strategies work by triggering new configurations of ideas and concepts in the mind of the user, so it is impossible to separate user from experiment as is desirable in the scientific method. First, some background: I am an ALife research professional/hobbyist depending on which year you read this. My contribution to ALife research has been in the field of Automata Chemistries (AChems), usually under the supervision and always with the support of Stepney. My main contribution to the field has been the Stringmol Automata chemistry [7], and I will be making reference to this as we go through the exercise.

2 Methods The premise of Oblique Strategies is simple: a short text statement challenges the user to think about the project in a new way. Over as many iterations as necessary, the user draws a random Oblique Strategy card, considers the application of the phrase to their project, and tries to implement the perceived New Way Forward. There are many different packs of Oblique Strategies available. The original packs are collectors' items, but thanks to the internet we were able to use the online version at http://www.oblicard.com/ to generate the draws. In the context of this essay, and in an effort to evaluate the Oblique Strategies, I've limited the number of cards drawn to 8. I've then tried to interpret each statement in the general context of ALife, trying to use them to guide new angles of research in ALife and the Automata Chemistries. I then try to assess the usefulness of the new approach, and where possible to suggest how a research project might be built
around the new idea. I tried to do this thinking over a short period of time as this seems to reflect the spirit of the approach, but the ‘wall time’ for this covered the first two weeks of February 2018. Where possible, we want to move towards a system that offers the potential to demonstrate the emergence of a major transition within the context of open-ended evolution [2].

3 Results The cards that were drawn contained the following eight strategies in this order:
• Humanize something that is free from error
• Do we need holes?
• Give the Game away
• Is the tuning appropriate?
• Move towards the unimportant
• Use something nearby as a model
• Take away the elements in order of apparent non-importance
• Call your mother and ask her what to do.

I considered each of these phrases in turn and tried to develop a research theme from each. Below, I detail the responses that each of these strategies provoked.

3.1 Humanize Something that is Free from Error What does humanize mean in this context? I think the end of the sentence is the clue— something that is free from error—and this raises the idea of imprecision in ALife systems. The process of ‘humanizing’ is to introduce what you might call ‘charming unpredictability’ to a deterministic procedure. This issue is particularly pertinent in the AChems, where reactions between entities follow the coded program(s) of each entity. Imprecision would mean that there was more than one possible outcome of a program execution. I have an intuition that inexact execution is really important—that information processing in the presence of noise is the only way to make things robust to noise. Robustness is difficult to add a posteriori. In addition, I often think back to the early implementations of Stringmol [7], which generated control flow statements between sections of the program using a version of the Smith-Waterman algorithm that didn’t use a traceback feature. Here, the resulting program execution could be wildly different if a single mutation caused different alignments. Many of the more complex reactions that were observed in Stringmol were a result of these drastic changes. This Oblique Strategy suggests that further work should be done to increase the stochasticity of program execution. This would mirror the way that enzymes strongly
catalyse one reaction but weakly catalyse many others. In Stringmol, a major chunk of processing time is dedicated to calculating alignments between program regions. Could we aim to kill two birds with one stone here—to create a ‘sloppy’ alignment function that introduces more imprecision, as well as running faster?
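As a purely hypothetical sketch of this direction (not Stringmol's actual binding algorithm; all names are invented), one could imagine replacing the alignment step with a cheap match score perturbed by noise, so that where execution jumps to becomes stochastic and the computation is also faster:

```python
import random

def sloppy_bind_site(program, probe, temperature=1.0, rng=random.Random(42)):
    """Toy 'sloppy alignment': score each offset of `probe` against `program`
    by counting exact character matches, perturb the score with noise, and
    return the best noisy offset. Higher `temperature` means more imprecision
    in where execution jumps, at much lower cost than a full alignment."""
    best_offset, best_score = 0, float("-inf")
    for offset in range(len(program) - len(probe) + 1):
        score = sum(a == b for a, b in zip(program[offset:], probe))
        score += rng.gauss(0.0, temperature)      # the deliberately sloppy part
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

# With a high temperature the same probe can bind at different places on
# different calls, loosely mimicking an enzyme that strongly catalyses one
# reaction but weakly catalyses many others.
print(sloppy_bind_site("OOXXOOXOXOOX", "OXO", temperature=2.0))
```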

3.2 Do We Need Holes?

ALife systems are arranged in some sort of space—a toroidal grid commonly. There are no 'holes' in these grids—they are completely homogeneous. What might holes achieve? Well, one thing they do is make it harder to reach one point from another. This can be a useful component of a system where parasites emerge—it makes it more difficult for parasites to reach new hosts. We have seen that aspatial systems make it too easy for parasites to swamp the system, and by introducing a spatial component, the dominance of parasites is reduced [6]. The landscape is still uniform however—a replicator can survive as well in one patch as in another, and the only variable environmental factor is formed from the sequence of the neighbouring replicators or parasites in the Moore neighbourhood of an individual. Variability in the landscape of the system might give particular regions more heterogeneity, and so solve the problem of what landscape size to use. The idea would be to offer pockets of easy living and pockets of challenge. It may also be possible to reduce the overall size of the grid but preserve interesting properties using this technique.
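A hypothetical sketch of the idea (invented names; not an existing implementation): mark some cells of the toroidal grid as holes and exclude them from the Moore neighbourhood, so that replicators and parasites must route around them.

```python
def moore_neighbours(x, y, width, height, holes):
    """Moore neighbourhood on a toroidal grid, skipping cells marked as holes.
    `holes` is a set of (x, y) positions that replicators can never occupy."""
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = (x + dx) % width, (y + dy) % height
            if (nx, ny) not in holes:
                result.append((nx, ny))
    return result

# A 10x10 torus with a wall of holes down one column, pierced by a single gap:
# parasites must route through the gap, so infection fronts travel more slowly
# than on a uniform grid.
holes = {(5, y) for y in range(10) if y != 0}
print(moore_neighbours(4, 5, 10, 10, holes))   # the neighbouring (5, *) cells are excluded
```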

3.3 Give the Game Away This phrase brings to mind fundamental ideas regarding the way ALife systems are composed. Von Neumann warned: "By axiomatizing automata in this manner one has thrown half the problem out the window and it may be the more important half. One does not ask the most intriguing, exciting and important questions of why the molecules or aggregates that in nature really occur… are the sorts of thing they are…" [14]. It's possible that to some extent, ALife systems have already given the game away and we need to find out how to get the game back. Recent work has shown that this axiomatization of functions into symbols needn't be a single step process. We have seen in [3] how the mapping of genetic codons to units of function (e.g. amino acids) can change through the bio-reflective architecture, where the encoding of the translating machinery on the genome offers parallels to computational reflection. Susan Stepney et al.'s work on sub-symbolic AChems [5] comes into play here. The idea is to have a core, composable set of very small, efficient operators that can be composed into units and then manipulated as a language. Bringing bio-reflective evolution into the system yields:
1. A 'bare-metal' set of operators (internal 'structure')
2. Compositions of the operators into functional components ('reaction properties')
3. A genetic Specification-Translation mechanism via which the reaction properties are arranged into functional machines on a larger scale.

Is it possible that 'shaking the flask' of a system like this would allow life to emerge? And can we build novel models of computation (or novel computers) that exploit this model? This feels like we are on the right track to a major transition.
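A toy illustration of these three layers (emphatically not the sub-symbolic AChem of [5] or the bio-reflective architecture of [3]; every name here is invented): bare-metal operators, compositions of them as components, and a genome translated into a machine via a codon table that is itself mutable state.

```python
# Layer 1: 'bare-metal' operators over a tiny register state.
OPS = {
    "inc": lambda st: {**st, "acc": st["acc"] + 1},
    "dbl": lambda st: {**st, "acc": st["acc"] * 2},
    "zer": lambda st: {**st, "acc": 0},
}

# Layer 2: compositions of operators act as reusable functional components.
def compose(names):
    def component(st):
        for n in names:
            st = OPS[n](st)
        return st
    return component

# Layer 3: a genome of codons is translated into components via a codon table.
# Making the table part of the evolvable state is the bio-reflective ingredient:
# mutating the table changes what every machine built from the genome does,
# without any change to the genome itself.
codon_table = {"AA": ["inc"], "AB": ["inc", "dbl"], "BB": ["zer"]}
genome = ["AA", "AB", "AA"]
machine = compose([op for codon in genome for op in codon_table[codon]])
print(machine({"acc": 0})["acc"])   # inc, inc, dbl, inc -> 5
```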

3.4 Is the Tuning Appropriate? Well, no, it isn’t. Parameterisation of these systems is a huge problem. Can we place these parameters under more direct control of the system—and reduce the initialisation burden? One day maybe, but this feels like an Oblique Strategy that doesn’t quite fit into the current situation since we are currently unsure of the relative merits of the parameters in these systems.

3.5 Move Towards the Unimportant At first I struggled to consider what unimportant feature could possibly move the ALife paradigm forward. Eventually, I began to focus on an issue that has received relatively little attention in the ALife community, probably due to the emphasis on self-replicators. This is the role of translation in living systems. This issue came to my attention via Lanier and Williams [10] who review the evidence for various models of origins of life—‘replicator first’, ‘metabolism first’ and so on. They term each model as a ‘privileged function’: an operation which, having been observed in modern life, is supposed to be capable of functioning in isolation at life’s origin. They argue that these privileged functions lack predictive power because they lack the indeterminacy and plasticity of biological and chemical processes. They go on to review the evidence with respect to ancient genes for these functions and what they code for. They find that there is heavy emphasis on translation functionality, and suggest this is a new avenue for research in the origin of life. Translation is often a neglected or un-implemented feature of ALife systems, but we’ve shown in [3] that there are interesting properties of systems that use translation to decode a genome. Essentially, if the translating entity is itself encoded on the genome, then mutations on the translator can drastically change the function of every other machine in the system. Most commonly, these mutations can be disastrous, but sometimes they change the composition of the system with no corresponding change in the genetic record. This suggests a dynamic in evolution that is simultaneously essential (because it tunes the relationship between the coding system and the resulting machinery) and difficult to trace (because it leaves little or no trace in the genetic record).


3.6 Use Something Nearby as a Model Let’s move quickly on from this strategy as it doesn’t seem to work in the ALife  context where everything nearby is already a model!

3.7 Take Away the Elements in Order of Apparent Non-importance To work on this strategy, I ranked the elements of AChems by importance as follows: 1. The opcode language, which specifies how each entity manages its state and the state of the entities it contacts. 2. The interaction protocol, fixed rules with specify broadly how an interaction proceeds. 3. The spatial arrangement, what sits next to what. 4. The initial state, how the system kicks off. This ordering is open to debate of course. I’ve put the initial state last because all of the other elements have to be in place before the initial state is formulated. Also, biology at any scale doesn’t really have an initial state—the initial state arises from whatever evolves before it (with one notable exception…) The problem is how to take away this element of course, because there are vastly more initial states that lead nowhere compared with those that don’t, which seems to make the random initialisation of an AChem a non-starter. However, in this modern age of grid computing, perhaps we shouldn’t be afraid of taking this approach. Over many thousands of runs, a randomly initialised system might teach us something about the nature of the framework—particularly the nature of low-probability events. The goal then would be to explore a range of different frameworks and uncover the configurations that would best move us nearer to our goal.

3.8 Call Your Mother and Ask Her What to Do The important thing about this strategy is the act of explaining/inviting someone into a community. In the act of doing this, the correct route becomes more apparent. It’s the idea of being able to explain the problem sufficiently that it becomes better crystallised in the explainer’s head. I remember Susan Stepney talking about a particularly constructive meeting in which many members were arriving late or had to leave early. Handing the baton of discussion on to new parties was a way of marking current progress and allows the building blocks of the project to be assembled.


So I called my mother.1 I explained that I was writing a paper on trying to find a way to get a major transition in ALife. I then had to explain what a major transition was: "It's a massive change in the way life organises itself, usually involving some sort of new scale of enclosure," I mumbled. "So how are these things enclosed now?" she asked. Which was a very good point really, because there are no enclosures within the implementations we have. Perhaps we could join this concept with the network-instead-of-grid arrangement discussed in Sect. 3.2. We could imagine some way in which the rate or probability of interaction between some regions is hindered, under control of the entities at the border. A simple way to do this might be for an entity to refuse interactions in some notional direction in the Moore neighbourhood and to be able to influence neighbours to do the same.

4 Discussion

The scientific method is a wonderful thing when the science gives clear results and presents an obvious research direction. In complex systems like ALife systems, interpretation of results is often more difficult, particularly where an artificial model needs to be designed, implemented and evaluated. In short, the scientific method requires creativity, and in this situation, it makes sense to look at how creative types like Brian Eno manage to stay productive. This exercise has provoked several interesting suggestions for ALife research with the object of fostering major transitions in artificial systems, in the following order: increase the stochasticity; make the arena heterogeneous; link sub-symbolic AChems to bio-reflective evolution; focus on the encoding of the translation apparatus; foster the formation of enclosures. I would suggest that the last three research ideas in this list are particularly strong given that they were initiated by an essentially random process.

My personal experience of using the Oblique Strategies to explore new research themes has at times rather felt like having a seance with myself. They seem to provide a conduit through which one can access and consolidate ideas that have been lurking in the back of the mind for a long time, re-igniting enthusiasm for these ideas in the process. It reminds me of Johnson's [8] concept of the "slow hunch": although society's narrative of the way breakthrough ideas happen is through so-called "Eureka moments", it is clear that these ideas can take years or decades to form, often via serendipitous coincidences mediated by a few highly-connected individuals. The process of considering the Oblique Strategies that were drawn for this essay has led me to think about the research of myself and others in unusual ways, and this process of stepping back a pace or two has been (for me at least) fruitful.

1 No, I didn't. That would be ludicrous.


References

1. Adami, C., Brown, C.T., Kellogg, W.: Evolutionary learning in the 2D artificial life system Avida. In: Artificial Life IV, vol. 1194, pp. 377–381. The MIT Press, Cambridge (1994)
2. Banzhaf, W., Baumgaertner, B., Beslon, G., Doursat, R., Foster, J.A., McMullin, B., De Melo, V.V., Miconi, T., Spector, L., Stepney, S., et al.: Defining and simulating open-ended novelty: requirements, guidelines, and challenges. Theory Biosci. 135(3), 131–161 (2016)
3. Clark, E.B., Hickinbotham, S.J., Stepney, S.: Semantic closure demonstrated by the evolution of a universal constructor architecture in an artificial chemistry. J. R. Soc. Interface 14, 20161033 (2017)
4. Eno, B., Schmidt, P.: Oblique Strategies. Opal, London (1978)
5. Faulkner, P., Krastev, M., Sebald, A., Stepney, S.: Sub-symbolic artificial chemistries. In: Inspired by Nature, pp. 287–322. Springer (2018)
6. Hickinbotham, S., Hogeweg, P.: Evolution towards extinction in replicase models: inevitable unless. In: 2nd EvoEvo Workshop, Amsterdam (NL), September 2016 (2016)
7. Hickinbotham, S.J., Clark, E., Stepney, S., Clarke, T., Nellis, A., Pay, M., Young, P.: Diversity from a monoculture: effects of mutation-on-copy in a string-based artificial chemistry. In: ALife, pp. 24–31 (2010)
8. Kelly, K., Johnson, S.: Where ideas come from. Wired 18(10) (2010)
9. Langton, C.G., et al.: Artificial Life (1989)
10. Lanier, K.A., Williams, L.D.: The origin of life: models and data. J. Mol. Evol. 84(2–3), 85–92 (2017)
11. Pargellis, A.: Self-organizing genetic codes and the emergence of digital life. Complexity 8(4), 69–78 (2003)
12. Ray, T.S.: An approach to the synthesis of life. In: Langton, C., Taylor, C., Farmer, J.D., Rasmussen, S. (eds.) Artificial Life II. Santa Fe Institute Studies in the Science of Complexity, vol. XI, pp. 371–408. Addison-Wesley, Redwood City (1991)
13. Smith, J.M., Szathmary, E.: The Major Transitions in Evolution. Oxford University Press, Oxford (1997)
14. Von Neumann, J., Burks, A.W.: Theory of Self-reproducing Automata. University of Illinois Press, Urbana (1996)

Compositional Assume-Guarantee Reasoning of Control Law Diagrams Using UTP Kangfeng Ye, Simon Foster and Jim Woodcock

Abstract Simulink is widely accepted in industry for model-based designs. Verification of Simulink diagrams against contracts or implementations has attracted the attention of many researchers. We present a compositional assume-guarantee reasoning framework to provide a purely relational mathematical semantics for discrete-time Simulink diagrams, and then to verify the diagrams against contracts in the same semantics in UTP. We define semantics for individual blocks and composition operators, and develop a set of calculation laws (based on the equational theory) to facilitate automated proof. An industrial safety-critical model is verified using our approach. Furthermore, all these definitions, laws, and the verification of the case study are mechanised in Isabelle/UTP, an implementation of UTP in Isabelle/HOL.

1 Introduction

Simulink [26] and OpenModelica [30] are widely used industrial languages and toolsets for expressing control laws diagrammatically, including support for simulation and code generation. In particular, Simulink is a de facto standard in many areas in industry. For example, in the automotive industry, General Motors, Jaguar Land Rover (JLR), Volkswagen, Daimler, Toyota, and Nissan all use Simulink and StateFlow for system-level modelling, electronics and software design and implementation, powertrain calibration and testing, and vehicle analysis and validation [36]. Model-based design, simulation and code generation make it a very efficient and cost-effective way to develop complex systems. For example, JLR report significant time and cost savings accruing from the use of these tools, with increased ability to test
more design options and faster development of embedded control designs [1]. Though empirical analysis through simulation is an important technique to explore and refine models, only formal verification can make specific mathematical guarantees about behaviour, which are crucial to ensure safety of associated implementations. Whilst verification facilities for Simulink exist [3, 8, 9, 11, 32, 35], there is still a need for assertional reasoning techniques that capture the full range of specifiable behaviour, provide nondeterministic specification constructs, and support compositional verification. Such techniques also need to be sufficiently expressive to handle the plethora of additional languages and modelling notations that are used by industry in concert with Simulink, in order to allow formulation of heterogeneous "multi-models" that capture the different paradigms and disciplines used in large-scale systems [41]. We analyse these requirements from a variety of aspects: compositional reasoning, expressiveness, algebraic loops, multi-rate models, unification and generalisation of semantics, and tool support.

Assume-Guarantee (AG) reasoning is a valuable compositional verification technique for reactive systems [4, 21, 27]. In AG, one demonstrates composite system-level properties by decomposing them into a number of contracts for each component subsystem. Each contract specifies the guarantees that the subsystem will make about its behaviour, under certain specified assumptions about the subsystem's environment. Such a decomposition is vital in order to make verification of a complex system tractable, and to allow development of subsystems by separate teams. AG reasoning has previously been applied to verification of discrete-time Simulink control law diagrams through mappings into synchronous languages like Lustre [38] and Kahn Process Networks [8]. These languages are inherently deterministic and non-terminating in nature, which is a good fit for discrete-time Simulink. However, as discussed in [37], in order to be general and rich in expressiveness, contracts should be relational (the value of outputs depends on the value of inputs), nondeterministic, and non-input-receptive (also called non-input-enabled or non-input-complete, meaning that some input values may be rejected). Relations allow systems to be specified using input-output properties, such as assumptions and guarantees. Nondeterminism is useful to give high-level specifications and abstract low-level details. Nondeterminism also plays a vital role in refinement: reducing nondeterminism results in refinement. Non-input-receptive contracts exclude illegal inputs for real systems. Though discrete-time Simulink is input-receptive (it accepts all input values), there is still a need to provide protection against, and detection of, errors due to illegal inputs for real systems, and ultimately to avoid these errors. For example, divide-by-zero is one error that is expected to be avoided. Both the compositional theory for synchronous concurrent systems (the interface theory) presented in [37] and the Refinement Calculus for Reactive Systems (RCRS) [32, 33] cater for these general contract requirements. In addition, RCRS extends relational interfaces with liveness properties.

Simulink diagrams may also contain algebraic loops (instantaneous feedbacks). Various approaches [8, 38] rely on the algebraic loop detection mechanism in Simulink to exclude diagrams with algebraic loops.
Both the interface theory and RCRS identify this as a current restriction and expect a future extension to the framework in order to cope with instantaneous feedbacks. ClawZ [3, 22]1 and its extensions [10, 11] are able to translate Simulink diagrams with algebraic loops to Z, but they do not explicitly state how to solve instantaneous feedbacks and how to reason about uniqueness of solutions as a proof obligation.

Simulink is also capable of modelling and analysing multi-rate distributed systems in which components have different sampling rates or sampling periods. That different components in a system have different sampling rates is in the nature of distributed systems, and so it is necessary to take verification of multi-rate systems into account. Verification of multi-rate Simulink models is widely supported [8, 12, 24, 38], but this is not the case for the interface theory and RCRS.

There is also a need to unify and generalise semantic domains for Simulink and other control law block diagrams. Current approaches [3, 8, 12, 38] translate Simulink diagrams into one language and contracts or implementations into the same language, and then use existing verification methodologies for these languages to reason about contracts-to-Simulink or Simulink-to-programs. The interface theory and RCRS are different: they introduce new notions, relational interfaces and monotonic property transformers (MPT) respectively, for contract-based verification of reactive systems. To the best of our knowledge, these approaches verify either contracts or implementations, but not both. What is needed is a rich unifying and generalising language capable of AG-based compositional reasoning, providing the same semantic foundation for contracts, Simulink diagrams, and their implementations in various paradigms. Eventually, the development of such systems from contracts and Simulink diagrams to final implementations (such as simulation and code generation) is supported through refinement with traceability and compositionality, and systems can be verified from contracts to implementations. One such example is the capability to compose a Simulink subsystem with another component (perhaps a contract or a C program) to form another subsystem, with the new subsystem still sharing semantics on the same foundation.

Applicable tool support with a high degree of automation is also of vital importance to enable adoption by industry. Since Simulink diagrams are data rich and usually have an uncountably infinite state space, model checking alone is insufficient and there is a need for theorem proving facilities.

Based on analysis of these aspects, it is necessary to have an approach that supports compositional reasoning, is general in contracts (facilitating a flexible contract-based specification mechanism), is able to reason about algebraic loops and multi-rate models, is capable of unifying semantics from various paradigms, and has theorem proving tool support. Our proposed solution in this paper aims to be one such approach, though the work presented here currently cannot support multi-rate models (this would be one of our future extensions). Our approach explores development of formal AG-based proof support for discrete-time Simulink diagrams through a semantic embedding of the theory of designs [40] in Unifying Theories of Programming (UTP) [18] in Isabelle/HOL [28] using our developed tool

1 ClawZ: http://www.lemma-one.com/clawz_docs/.


Isabelle/UTP [16].2 A design in UTP is a relation between two predicates, where the first predicate (the precondition) records the assumption and the second (the postcondition) specifies the guarantee. Designs are intrinsically suitable for modelling and reasoning about control law diagrams because they are relational, nondeterministic, and non-input-receptive.

Our work presented in this paper makes three contributions. The main contribution is to define a theoretical reasoning framework for control law block diagrams using the theory of designs in UTP. Our translation is based on a denotational semantics in UTP. Since UTP provides unifying theories for programs from different paradigms, it enables specifying, modelling and verifying on a common theoretical foundation. The capability to reason about diagrams with algebraic loops is another distinct feature of this reasoning framework. The second contribution is the mechanisation of our theories in the theorem prover Isabelle/HOL using our implementation of UTP, Isabelle/UTP. Both compositional reasoning and theorem proving help to tackle the state space explosion problem by giving a purely symbolic account of Simulink diagrams. Finally, the third and practical contribution is our industrial case study, verifying a subsystem in a safety-critical aircraft cabin pressure control system. We identify a vulnerable block and suggest replacing the block or strengthening its assumption. This case study gives insight into the verification methodology of our approach.

In the next section, we describe the relevant preliminary background about Simulink and UTP. Then Sect. 3 presents the assumptions we make, defines our treatment of blocks in UTP, and illustrates the translations of a number of blocks. Furthermore, in Sect. 4 we introduce our composition operators and their corresponding theorems. Afterwards, in Sect. 5 we briefly describe our verification strategies and the mechanisation of our approach in Isabelle/HOL, and demonstrate these with an industrial case study. We conclude our work in Sect. 6.

2 Preliminaries

2.1 Control Law Diagrams and Simulink

Simulink [26] is a modelling, analysis and simulation tool for model-based design of signal processing systems and control systems. It offers a graphical modelling language that is based on hierarchical block diagrams. Its diagrams are composed of subsystems and blocks, as well as connections between these subsystems and blocks. In addition, subsystems can also consist of other subsystems and blocks. Single function blocks have inputs and outputs, and some blocks also have internal states. A simple PID integrator is shown in Fig. 1. The integrator is composed of four blocks: an input port, an output port, a unit delay block (its initial value x0 is 0), and a sum block. They are connected by signals (or wires). A unit delay block delays its

2 Isabelle/UTP: https://www.cs.york.ac.uk/circus/isabelle-utp/.


Fig. 1 PID integrator

input by one sample period. It outputs x0 at the initial step and the previous input afterwards. The sum block takes the signal from the input port and the signal from the unit delay block as inputs, and outputs their sum. Furthermore, the integrator uses the unit delay block to feed the previous output of the model back to the sum block via the feedback. Therefore, the output of this model (the output of the sum block) is equal to the sum of the previous output (x0 at the initial step) and the current input. Because x0 is 0, the output at any time is a summation of all inputs up to that time.

A consistent understanding [12, 25] of simulation in discrete-time Simulink is based on an idealized time model. All executions and updates of blocks are performed instantaneously (and infinitely fast) at exact simulation steps. Between the simulation steps, the system is quiescent and all values held on lines and blocks are constant. The inputs, states and outputs of a block can only be updated when there is a time hit for this block (a simulation time t at which Simulink executes the output method of the block, given its sample period Tb and offset To, that is, (t − To) mod Tb = 0). Otherwise, all values held in the block remain constant, even at exact simulation steps. Simulation and code generation of Simulink diagrams use a sequential semantics for implementation, but this is not always necessary for reasoning about the simulation.

Based on the idealized time model, a single function block can be regarded as a relation between its inputs and outputs. For instance, a unit delay block specifies that its initial output is equal to its initial condition and its subsequent output is equal to the previous input. Connections of blocks then establish further relations between blocks. A directed connection from one block to another specifies that the output of one block is equal to the input of the other. Finally, hierarchical block diagrams establish a relation network between blocks and subsystems.
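To make the idealized time model and the running-sum behaviour of the integrator concrete, here is a small Python sketch (our own illustration; the function and variable names are not taken from Simulink or from the mechanisation described later) that simulates Fig. 1 step by step.

def simulate_integrator(inputs, x0=0.0):
    """Simulate the PID integrator of Fig. 1 under the idealized time model."""
    outputs = []
    delayed = x0                 # state of the unit delay block (initially x0)
    for u in inputs:             # one iteration per simulation step
        y = u + delayed          # sum block: current input plus delayed output
        outputs.append(y)        # y is the model output at this step ...
        delayed = y              # ... and is fed back through the unit delay
    return outputs

# With x0 = 0 the output at step n is the sum of all inputs up to step n.
assert simulate_integrator([1.0, 2.0, 3.0]) == [1.0, 3.0, 6.0]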

2.2 Unifying Theories of Programming (UTP)

Unifying Theories of Programming (UTP) [18] is a unifying framework that provides a theoretical basis for describing and specifying programs across different paradigms such as imperative, functional, declarative, nondeterministic, concurrent, reactive and higher order. A theory in UTP is described using three parts: an alphabet, a set of variables for the theory to be studied; a signature, the syntax for denoting members of the theory; and healthiness conditions, a set of conditions characterising membership of the theory.


Our understanding of the simulation in Simulink as a relation network is very similar to the concept “programs-as-predicates” [20] in UTP. This similarity makes UTP [18] intrinsically suitable for reasoning about the semantics of Simulink simulation because UTP uses an alphabetised predicate calculus to model computations.

2.2.1 Alphabetised Relation Calculus

The alphabetised relational calculus [13] is the most basic theory in UTP. A relation is defined as a predicate with undecorated variables (v) and decorated variables (v′) in its alphabet. v denotes an observation made initially and v′ denotes an observation made at an intermediate or final state. In addition to the normal predicate operators such as ∧, ∨, ¬, ⇒, etc., further operators are defined to construct relations in UTP.

Definition 1 (Relations)
P ◁ b ▷ Q ≙ (b ∧ P) ∨ (¬b ∧ Q)   [Conditional]
P ; Q ≙ ∃v0 • P[v0/v′] ∧ Q[v0/v]   [Sequence]
x := e ≙ (x′ = e ∧ u′ = u)   [Assignment (u denotes all other variables in its alphabet)]
P ⊓ Q ≙ (P ∨ Q)   [Nondeterminism]

Conventionally, we use upper-case and lower-case variables for predicates (P and Q) and conditions (b) respectively. A condition is a relation without decorated variables in its alphabet, such as x > 3. P[v0/v′] (substitution) denotes that all occurrences of v′ in P are replaced by v0.

Refinement is an important concept in UTP, concerned with program development to achieve program correctness. In UTP, the notation for program correctness is the same for every paradigm: in each state, the implementation P implies its specification S, which is denoted by S ⊑ P. Prior to the definition of refinement, we define the universal closure of a relation below.

Definition 2 (Universal Closure) [P] ≙ (∀v, v′ · P), provided the alphabet of P is {v, v′}.

Definition 3 (Refinement) S ⊑ P ≙ [P ⇒ S]

S ⊑ P means that everywhere P implies S. A refinement sequence is shown in (1).

true ⊑ S1 ⊑ S2 ⊑ P1 ⊑ P2 ⊑ false   (1)

S1 is a more general and abstract specification than S2 and thus easier to implement. The relation true is the easiest one and can be implemented by anything. P2 is a more specific and determinate program than P1, and is thus generally more useful. The relation false is the strongest predicate and is impossible to implement in practice.


For example, for a specification (x ≥ 3 ∧ x′ = 4 ∧ y′ = y), the correctness of an implementation (x := x + 1) is shown below.

Proof (Example)
(x ≥ 3 ∧ x′ = 4 ∧ y′ = y) ⊑ (x := x + 1)
= [(x := x + 1) ⇒ (x ≥ 3 ∧ x′ = 4 ∧ y′ = y)]   [refinement]
= [(x′ = x + 1 ∧ y′ = y) ⇒ (x ≥ 3 ∧ x′ = 4 ∧ y′ = y)]   [assignment]
= [x ≥ 3 ∧ x + 1 = 4 ∧ y = y]   [universal one-point rule]
= [x ≥ 3 ∧ x = 3]   [arithmetic and reflection]
= true   [arithmetic]

2.2.2 Designs

Designs are a subset of the alphabetised predicates that use a particular variable ok to record information about the start and termination of programs. The behaviour of a design is described from the initial observation to the final observation by relating its precondition P (assumption) to its postcondition Q (guarantee) as (P ⊢ Q) [18, 40] (assuming P holds initially, then Q is established). The theory of designs is actually the theoretical setting for assume-guarantee reasoning [15].

Definition 4 (Design)
(P ⊢ Q) ≙ (P ∧ ok ⇒ Q ∧ ok′)

A design is defined in Definition 4, where ok records that the program has started and ok′ that it has terminated. It states that if the design has started (ok = true) in a state satisfying its precondition P, then it will terminate (ok′ = true) with its postcondition Q established. Therefore, with designs we are able to reason about total correctness of programs. We introduce some basic designs.

Definition 5 (Basic Designs)
⊤D ≙ (true ⊢ false) = ¬ok   [Miracle]
⊥D ≙ (false ⊢ true) = true   [Abort]
(x := e)D ≙ (true ⊢ x′ = e ∧ u′ = u)   [Assignment]
IID ≙ (true ⊢ II)   [Skip]

Abort (⊥D) and miracle (⊤D) are the bottom and top elements of the complete lattice formed from designs under the refinement ordering. ⊥D is defined as the design (false ⊢ true): its precondition is the predicate false and its postcondition can be any arbitrary predicate P, since for any P, (false ⊢ P) is equal to true.


Theorem 1 (Abort)
(false ⊢ P)
= (false ∧ ok ⇒ P ∧ ok′)   [Definition 4]
= (false ⇒ P ∧ ok′)   [false zero for conjunction]
= true   [vacuous implication]

An interesting example is (false ⊢ true) = (false ⊢ false). Abort is never guaranteed to terminate and miracle establishes the impossible. In addition, abort is refined by any other design and miracle refines any other design. Assignment has precondition true provided the expression e is well-defined, and establishes that only the variable x is changed (to the value of e) while the other variables are unchanged. The skip IID is a design identity that always terminates and leaves all variables unchanged. Refinement of designs is given in the theorem below.

Theorem 2 (Refinement of Designs)
(P1 ⊢ Q1 ⊑ P2 ⊢ Q2) = ([P1 ⇒ P2] ∧ [P1 ∧ Q2 ⇒ Q1])

Refinement of designs is achieved by weakening the precondition, and strengthening the postcondition in the presence of the precondition. Designs can be sequentially composed using the following theorem.

Theorem 3 (Sequential Composition)
(P1 ⊢ Q1 ; P2 ⊢ Q2) = ((¬(¬P1 ; true) ∧ (Q1 wp P2)) ⊢ Q1 ; Q2)
where Q1 wp P2 denotes the weakest precondition (Definition 6) for Q1 to be guaranteed to establish P2.

Definition 6 (Weakest Precondition) Q wp r ≙ ¬(Q ; ¬r)

A sequence of designs terminates when P1 holds and Q1 guarantees to establish P2. On termination, the sequential composition of their postconditions is established. If P1 is a condition (denoted p1), the theorem is further simplified.

Theorem 4 (Sequential Composition (Condition))
(p1 ⊢ Q1 ; P2 ⊢ Q2) = ((p1 ∧ (Q1 wp P2)) ⊢ Q1 ; Q2)

A condition p1 is a particular predicate that has only input variables in its alphabet. In other words, a design whose precondition is a condition makes an assumption only about its initial observation (input variables), not about output variables. That is also the case for our treatment of Simulink blocks. Furthermore, sequential composition has two important properties, associativity and monotonicity, which are given in the theorem below.


Theorem 5 (Associativity, Monotonicity)
P1 ; (P2 ; P3) = (P1 ; P2) ; P3   [Associativity]
Suppose P1 ⊑ P2 and Q1 ⊑ Q2; then (P1 ; Q1) ⊑ (P2 ; Q2)   [Monotonicity]

In addition, we define two notations, preD and postD.

Definition 7 (preD and postD)
preD(P) = ¬P[true, false/ok, ok′]
postD(P) = P[true, true/ok, ok′]

preD and postD can be used to retrieve, respectively, the precondition of a design and the postcondition in the presence of the precondition, according to the following theorem.

Theorem 6 (preD and postD)
preD(P ⊢ Q) = P
postD(P ⊢ Q) = (P ⇒ Q)
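As a small illustration of designs and refinement (our own Python encoding over a toy finite state space, not part of the paper's mechanisation), a design (P ⊢ Q) can be represented as a predicate over (ok, s, ok′, s′) and the refinement S ⊑ P checked by brute force.

STATES = range(4)                                    # a toy finite state space

def design(pre, post):
    """(P |- Q): (P(s) and ok) implies (Q(s, s2) and ok2)."""
    return lambda ok, s, ok2, s2: (not (pre(s) and ok)) or (post(s, s2) and ok2)

def refines(S, P):
    """S is refined by P iff P implies S for every observation."""
    return all((not P(ok, s, ok2, s2)) or S(ok, s, ok2, s2)
               for ok in (False, True) for s in STATES
               for ok2 in (False, True) for s2 in STATES)

spec = design(lambda s: s >= 3, lambda s, s2: s2 == 0)   # assume s >= 3, guarantee s' = 0
impl = design(lambda s: True,   lambda s, s2: s2 == 0)   # weaker assumption, same guarantee
assert refines(spec, impl)      # weakening the precondition yields a refinement

This mirrors Theorem 2: the implementation weakens the precondition while keeping the postcondition in the presence of the original precondition.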

3 Semantic Representation of Simulink Blocks

In this section, we focus on the methodology for mapping individual Simulink blocks to designs in UTP semantically. Basically, a block or subsystem is regarded as a relation between inputs and outputs. We use an undashed variable and a dashed variable to denote input signals and output signals respectively. We start with the assumptions made about Simulink models when developing this work.

3.1 Assumptions

Discrete-time. In this paper, only discrete-time Simulink models are taken into account.

Causality. We assume the systems modelled in Simulink diagrams are causal, where the output at any time depends only on values of present and past inputs. Consequently, if inputs to a causal system are identical up to some time, their corresponding outputs must also be equal up to this time [31, Sect. 2.6.3].

Single-rate. This work captures single sampling rate Simulink models, which means the timestamps of all simulation steps are multiples of a base period T. Steps are abstracted and measured by step numbers (natural numbers N) and T is removed from the timestamp. In the future, we will extend the reasoning framework in this paper to support multi-rate models. The basic idea is to introduce a base period T


which is the greatest common divisor (gcd) of the sampling periods of all blocks. Then for each block, its sampling period Tb must be a multiple of T. At any simulation time n ∗ T of the nth step, if n ∗ T is a multiple of Tb, it is a hit; otherwise, it is a miss. A block reads inputs and updates its states and outputs only when there is a hit. The states and outputs are left unchanged when there is a miss.

An algebraic loop occurs in simulation when there exists a signal loop with only direct feedthrough blocks (instantaneous feedback without delay) in the loop. Several approaches [8, 9, 14] assume there are no algebraic loops in Simulink diagrams, and RCRS [32] identifies this as future work. Amalio et al. [2] detect algebraic loops of SysML [29] models using the refinement model checker FDR3 [17]. Our theoretical framework can reason about a discrete-time block diagram with algebraic loops: specifically, it can check whether the diagram has a unique solution.

The signals in Simulink can have many data types, such as signed or unsigned integer, single float, double float, and boolean. The default type for signals in Simulink is double. This work uses real numbers in Isabelle/HOL as a universal type for all signals. Real numbers in Isabelle/HOL are modelled precisely using Cauchy sequences, which enables us to reason about them in the theorem prover. This is a reasonable simplification because all other types could be expressed using real numbers, for example boolean as 0 and 1.
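The hit/miss rule for the planned multi-rate extension can be illustrated with a short Python sketch (illustrative only; integer sample periods are assumed for simplicity, and the names are ours).

from math import gcd
from functools import reduce

def base_period(periods):
    """Base period T: the gcd of all block sample periods (integers here)."""
    return reduce(gcd, periods)

def is_hit(step, T, Tb, To=0):
    """A block with sample period Tb and offset To has a hit at time step*T
    iff (step*T - To) mod Tb == 0; otherwise the step is a miss."""
    return (step * T - To) % Tb == 0

T = base_period([2, 3])                                  # T = 1
assert [n for n in range(7) if is_hit(n, T, 2)] == [0, 2, 4, 6]
assert [n for n in range(7) if is_hit(n, T, 3)] == [0, 3, 6]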

3.2 State Space

The state space of our theory for block diagrams is composed of only one variable in addition to ok, named inouts. We define it as a function from natural numbers (step numbers) to a list of inputs or outputs:

inouts : N → seq R

A block is then a design that establishes the relation between an initial observation inouts (a list of input signals) and a final observation inouts′ (a list of output signals). Additionally, this is subject to the assumption of the design.

3.3 Healthiness Condition: SimBlock

This healthiness condition characterises a block with a fixed number of inputs and outputs. Additionally, it is feasible. A design is a feasible block if there exists at least one pair of inouts and inouts′ that establishes both the precondition and the postcondition of the design.

Definition 8 (SimBlock) A design (P ⊢ Q) with m inputs and n outputs is a Simulink block if SimBlock(m, n, (P ⊢ Q)) is true. In other words, this design is SimBlock(m, n) healthy.


SimBlock(m, n, (P ⊢ Q)) ≙
( ¬[P ⇒ ¬Q] ∧
  ((∀nn · #(inouts(nn)) = m) ⊑ (P ∧ Q)) ∧
  ((∀nn · #(inouts′(nn)) = n) ⊑ (P ∧ Q)) )

The first predicate of the conjunction in the definition of SimBlock states that (1) P always holds, and (2) it is not possible to establish ¬Q provided P holds. This excludes abort and miracle as defined in Definition 5, as shown in Theorem 7, because they are never desired in Simulink diagrams. The second and third predicates of the conjunction characterise the number of inputs and outputs of a block. In short, SimBlock(m, n, (P ⊢ Q)) characterises a subset of feasible blocks which have m inputs and n outputs.

Theorem 7 For all m and n, neither ⊤D nor ⊥D is SimBlock(m, n) healthy.

Proof
SimBlock(m, n, ⊤D)
= SimBlock(m, n, true ⊢ false)   [Definition 5]
= (¬[true ⇒ ¬false] ∧ · · ·)   [Definition 8]
= false   [propositional calculus]

SimBlock(m, n, ⊥D)
= SimBlock(m, n, false ⊢ false)   [Definition 5]
= (¬[false ⇒ ¬false] ∧ · · ·)   [Definition 8]
= false   [propositional calculus]

3.4 Number of Inputs and Outputs

Two operators, inps and outps, are defined to obtain the number of input signals and output signals of a block. They are implied by the SimBlock healthiness of the block.

Definition 9 (inps and outps)
SimBlock(m, n, P) ⇒ (inps(P) = m ∧ outps(P) = n)

Provided that P is a healthy block, inps returns the number of its inputs and outps returns the number of its outputs. Additionally, inps and outps are not directly defined in terms of P. Instead, in order to obtain the number of inputs m and outputs n of a block P, we need to prove that P is SimBlock(m, n, P) healthy.

3.5 Simulink Blocks

In order to give definitions to the corresponding designs of Simulink blocks, we first define a design pattern, FBlock, that is used to define all other blocks and facilitates their definitions. We then illustrate the definitions of three typical Simulink blocks.


Definition 10 (FBlock)
FBlock(f1, m, n, f2) ≙
( (∀nn · f1(inouts, nn))
  ⊢
  (∀nn · ( #(inouts(nn)) = m ∧ #(inouts′(nn)) = n ∧ inouts′(nn) = f2(inouts, nn) )) )

FBlock has four parameters: f1 is a predicate that specifies the assumption of the block and is a function on input signals; m and n are the numbers of inputs and outputs; and f2 is a function that relates inputs to outputs and is used to establish the postcondition of the block. The precondition of FBlock states that f1 holds for the inputs at any step nn. The postcondition specifies that at any step nn the block always has m inputs and n outputs, and that the relation between outputs and inputs is given by f2; additionally, f2 always produces n outputs provided there are m inputs. A block B defined by FBlock(f1, m, n, f2) is not itself automatically SimBlock(m, n) healthy: this depends on its parameters f1 and f2. If for any step there exist some inputs that satisfy f1, and for these inputs f2 always produces n outputs, then the block is SimBlock(m, n, B) healthy. The Unit Delay block is then defined using this pattern.

Definition 11 (Unit Delay)
UnitDelay(x0) ≙ FBlock(truef, 1, 1, (λx, n · x0 ◁ n = 0 ▷ hd(x(n − 1))))
= ( (∀nn · true)
    ⊢
    (∀nn · ( #(inouts(nn)) = 1 ∧ #(inouts′(nn)) = 1 ∧ inouts′(nn) = (x0 ◁ nn = 0 ▷ hd(inouts(nn − 1))) )) )   [Definition 10]

where hd is an operator that returns the head of a sequence, and truef = (λx, n · true), meaning that there are no constraints on the input signals. Definition 11 of the Unit Delay block is straightforward: it accepts one input and produces one output. The value of the output is equal to x0 at the first step (0) and equal to the previous input at subsequent steps.

Theorem 8 UnitDelay(x0) is SimBlock(1, 1) healthy.

Proof SimBlock(1, 1, UnitDelay(x0)) is true if all three predicates in its definition are true. Use P and Q to denote the precondition and postcondition of UnitDelay respectively. Firstly,
¬[P ⇒ ¬Q]
= ¬(∀inouts, inouts′ · (P ⇒ ¬Q))   [Definition 2 of universal closure]
= ¬(∀inouts, inouts′ · ¬Q)   [P = true]
= (∃inouts, inouts′ · Q)   [negation of universal quantifier]
= true   [witness inouts = inouts′ = x0]


Then
((∀nn · #(inouts(nn)) = 1) ⊑ (P ∧ Q))
= ∀inouts, inouts′ · ((P ∧ Q) ⇒ (∀nn · #(inouts(nn)) = 1))   [Definition 3 of refinement and Definition 2 of universal closure]
= ∀inouts, inouts′ · (Q ⇒ (∀nn · #(inouts(nn)) = 1))   [P = true]
= ∀inouts, inouts′ · (¬Q ∨ (Q ∧ (∀nn · #(inouts(nn)) = 1)))   [implication]
= ∀inouts, inouts′ · (¬Q ∨ Q)   [Q is the postcondition of UnitDelay]
= true

Similarly,
((∀nn · #(inouts′(nn)) = 1) ⊑ (P ∧ Q)) = true

Hence, SimBlock(1, 1, UnitDelay(x0)) is true.

The Divide block outputs the result of dividing its first input by its second. It is defined below.

Definition 12 (Divide)
Div2 ≙ FBlock((λx, n · hd(tl(x(n))) ≠ 0), 2, 1, (λx, n · hd(x(n)) / hd(tl(x(n)))))

where tl is an operator that returns the tail of a sequence. Definition 12 of the Divide block is slightly different because it has a precondition that assumes the value of its second input is not zero at any step. In the presence of this assumption, its postcondition is guaranteed to be established. Therefore, the divide-by-zero error is avoided. In this way, the precondition enables modelling of non-input-receptive systems that may reject some inputs at some points.

Theorem 9 Div2 is SimBlock(2, 1) healthy.

The Sum block of two inputs is defined below.

Definition 13 (Sum2)
Sum2 ≙ FBlock(truef, 2, 1, (λx, n · hd(x(n)) + hd(tl(x(n)))))

Theorem 10 Sum2 is SimBlock(2, 1) healthy.
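The FBlock pattern can be pictured with a lightweight Python analogue (our own sketch, not the Isabelle/UTP definitions): a block is a record of an assumption on the inputs, the numbers of inputs and outputs, and an output function.

from dataclasses import dataclass
from typing import Callable, List

Inouts = Callable[[int], List[float]]            # step number -> signal values

@dataclass
class Block:                                      # analogue of FBlock(f1, m, n, f2)
    f1: Callable[[Inouts, int], bool]             # assumption on inputs
    m: int                                        # number of inputs
    n: int                                        # number of outputs
    f2: Callable[[Inouts, int], List[float]]      # outputs in terms of inputs

truef = lambda x, n: True

def UnitDelay(x0):
    return Block(truef, 1, 1, lambda x, n: [x0] if n == 0 else [x(n - 1)[0]])

Sum2 = Block(truef, 2, 1, lambda x, n: [x(n)[0] + x(n)[1]])
Div2 = Block(lambda x, n: x(n)[1] != 0,           # assumption: second input non-zero
             2, 1, lambda x, n: [x(n)[0] / x(n)[1]])

ins = lambda n: [6.0, 2.0]                        # two constant input signals
assert Div2.f1(ins, 0) and Div2.f2(ins, 0) == [3.0]
assert Sum2.f2(ins, 0) == [8.0] and UnitDelay(0.0).f2(ins, 1) == [6.0]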

4 Block Compositions

In this section, we define three composition operators that are used to compose subsystems and systems from blocks. We also use three virtual blocks to map Simulink's connections in our designs. We start with the integrator example and describe how to compose individual blocks.


Fig. 2 PID integrator

4.1 The PID Integrator Example

For the PID integrator illustrated in Fig. 1, in order to clearly present how the four blocks (inport, outport, sum and unit delay) compose to form this diagram, we rearrange it (Fig. 2a) into Fig. 2b by changing the icon of Sum and moving the unit delay block to the left of the sum block. Input and output ports merely provide interfaces between this diagram and its environment. They are not functional blocks; therefore, we will not translate them explicitly in our theory.3

We treat the feedback as the outermost composition. Inside the feedback, the diagram is regarded as a block with two inputs and two outputs (as if the feedback wire were broken). The first input goes to the sum block directly and the second input (after breaking the wire) goes to the sum block through the unit delay block. First, we introduce a virtual identity block Id on the first input, then compose it with the unit delay block in parallel (like a stack), because the two blocks do not share any inputs or outputs (they are disjoint). The composition has two inputs and two outputs. This composite is then sequentially composed with the sum block to output their addition (one output). Afterwards, the output is split into two equal signals: one to the output port, and one for the feedback. In addition, the feedback wire establishes that the split signal for feedback is equal to the second input. In sum, the composition of this diagram from individual blocks would be

((Id ∥B UnitDelay(0)) ; Sum2 ; Split2) fD (1, 1)

Thus, in order to compose a diagram from blocks, we need to introduce several composition operators: ;, ∥B, and fD.

4.2 Sequential Composition

The meaning of sequential composition of designs is defined in Theorem 3. It corresponds to the composition of two blocks as shown in Fig. 3, where all outputs of B1 are

3 However, because the order of input or output ports matters, we define inouts as a sequence of inputs or outputs. In this way, the order information is kept in our translation.

Fig. 3 Sequential composition

connected to the inputs of B2 (note that we use one arrowed connection to represent a collection of signals, not a single signal). A new block sequentially composed from B1 and B2 is denoted B1 ; B2. Provided that

P = FBlock(p1, m1, n1, f1)    SimBlock(m1, n1, P)
Q = FBlock(p2, n1, n2, f2)    SimBlock(n1, n2, Q)

sequential composition of two blocks can be simplified into one block using Theorem 11.

Theorem 11 (Simplification)
P ; Q = FBlock((λx, n · p1(x, n) ∧ (p2 ◦ f1)(x, n) ∧ #(x(n)) = m1), m1, n2, (f2 ◦ f1))   [Simplification]

This theorem establishes that the sequential composition of two blocks, where the number of outputs of the first block is equal to the number of inputs of the second block, is simply a new block with the same number of inputs as the first block P and the same number of outputs as the second block Q; additionally, the postcondition of the composed block is given by function composition. The sequentially composed block is still SimBlock(m1, n2) healthy, as shown in the closure theorem below.

Theorem 12 (Closure)
SimBlock(m1, n2, (P ; Q))   [SimBlock Closure]

If both p1 and p2 are true, then Theorem 11 is further simplified.

Theorem 13 (Simplification)
(P ; Q) = FBlock(truef, m1, n2, (f2 ◦ f1))   [Simplification]

The theorems above assume that the number of outputs of P is equal to the number of inputs of Q. If this is not the case, the composition is miraculous.

Theorem 14 (Miraculous) If n1 ≠ m2, then
(FBlock(truef, m1, n1, f1)) ; (FBlock(truef, m2, n2, f2)) = ⊤D
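The content of Theorem 11 can be mirrored by a short standalone Python sketch over output functions (step-indexed lists in, lists out): the composite simply feeds the first block's outputs into the second. The gain block used in the example is hypothetical, not one of the paper's blocks.

def f_sum2(x, n):                   # 2 inputs -> 1 output (the Sum2 function)
    return [x(n)[0] + x(n)[1]]

def f_gain2(x, n):                  # hypothetical 1-input gain-by-2 block
    return [2.0 * x(n)[0]]

def seq(f1, f2):
    """Output function of the sequential composition: f2 applied to f1's outputs."""
    return lambda x, n: f2(lambda k: f1(x, k), n)

# The composed assumption would be p1(x, n) together with p2 applied to
# f1's outputs, as in Theorem 11; only the output functions are shown here.
f_comp = seq(f_sum2, f_gain2)
assert f_comp(lambda n: [1.0, 2.0], 0) == [6.0]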


Fig. 4 Parallel composition

4.3 Parallel Composition

Parallel composition of two blocks stacks the inputs and outputs of both blocks and is illustrated in Fig. 4. We use the parallel-by-merge scheme [18, Sect. 7.2] in UTP. This is the scheme used to define parallel composition for reactive processes (such as ACP [5] and CSP [19]) in UTP. Parallel-by-merge is denoted P ∥M Q, where M is a special relation that explains how the outputs of the parallel composition of P and Q should be merged following execution. However, parallel-by-merge assumes that the initial observations for both predicates are the same, which is not the case for our block composition because the inputs to the first block and those to the second block are different. Therefore, in order to use parallel-by-merge, we first need to partition the inputs to the composition into two parts: one for the first block and one for the second block. This is illustrated in Fig. 5, where we assume that P has m inputs and i outputs, and Q has n inputs and j outputs. Finally, both parts of the parallel composition have the same inputs (m + n), and the outputs of P and Q are merged to get i + j outputs. Parallel composition of two blocks is defined below.

Definition 14 (Parallel Composition)
P ∥B Q ≙ ( (takem(inps(P) + inps(Q), inps(P)) ; P) ∥BM (dropm(inps(P) + inps(Q), inps(P)) ; Q) )

Fig. 5 Parallel composition of two blocks


where takem and dropm are two blocks that partition the inputs into two parts (the first part, inps(P) inputs, for P; the second part, inps(Q) inputs, for Q), and the merge relation BM is defined below.

Definition 15 (BM)
BM ≙ (ok′ = (0.ok ∧ 1.ok) ∧ inouts′ = 0.inouts ⌢ 1.inouts)

The merge operator BM states that the parallel composition terminates if both blocks terminate. On termination, the output of the parallel composition is the concatenation of the outputs from the first block and the outputs from the second block. Due to the concatenation, this merge operator is not commutative, which is different from the merge operators for ACP and CSP. Parallel composition has various interesting properties.

Theorem 15 (Associativity, Monotonicity, and SimBlock Closure) Assume that
SimBlock(m1, n1, P1), SimBlock(m2, n2, P2), SimBlock(m3, n3, P3),
SimBlock(m1, n1, Q1), SimBlock(m2, n2, Q2), P1 ⊑ Q1 and P2 ⊑ Q2;
then
P1 ∥B (P2 ∥B P3) = (P1 ∥B P2) ∥B P3   [Associativity]
(P1 ∥B P2) ⊑ (Q1 ∥B Q2)   [Monotonicity]
SimBlock(m1 + m2, n1 + n2, (P1 ∥B P2))   [SimBlock Closure]
inps(P1 ∥B P2) = m1 + m2
outps(P1 ∥B P2) = n1 + n2

Parallel composition is associative, monotonic with respect to the refinement relation, and SimBlock healthy. The inputs and outputs of the parallel composition are the combination of the inputs and outputs of both blocks.

Theorem 16 (Parallel Composition Simplification and Closure) Provided
P = FBlock(truef, m1, n1, f1)    SimBlock(m1, n1, P)
Q = FBlock(truef, m2, n2, f2)    SimBlock(m2, n2, Q)
then
(P ∥B Q) = FBlock(truef, m1 + m2, n1 + n2, (λx, n · (f1 ◦ (λx, n · take(m1, x(n))))(x, n) ⌢ (f2 ◦ (λx, n · drop(m1, x(n))))(x, n)))   [Simplification]
SimBlock(m1 + m2, n1 + n2, (P ∥B Q))   [SimBlock Closure]

Parallel composition of two FBlock-defined blocks is expanded to obtain a new block. Its postcondition is the concatenation of the outputs from P and the outputs from Q.


Fig. 6 Feedback

The outputs from P (or Q) are the function composition of its block definition function f1 (or f2) with take (or drop).
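The take/drop construction for ∥B can likewise be sketched in standalone Python (illustrative names only): the composite routes the first m1 inputs to the first block and the rest to the second, then concatenates the outputs.

def par(f1, m1, f2):
    """Output function of the parallel composition of two blocks."""
    def f(x, n):
        left  = lambda k: x(k)[:m1]       # take(m1, x(k)) for the first block
        right = lambda k: x(k)[m1:]       # drop(m1, x(k)) for the second block
        return f1(left, n) + f2(right, n) # concatenate the two output lists
    return f

f_id = lambda x, n: [x(n)[0]]                            # Id
f_ud = lambda x, n: [0.0] if n == 0 else [x(n - 1)[0]]   # UnitDelay(0)
f_stack = par(f_id, 1, f_ud)
ins = lambda n: [float(n), float(10 * n)]
assert f_stack(ins, 0) == [0.0, 0.0] and f_stack(ins, 2) == [2.0, 10.0]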

4.4 Feedback

The feedback operator loops an output back to an input, as illustrated in Fig. 6. The definition of feedback is given in Definition 16. The basic idea of the feedback operator is to use existential quantification to specify that there exists one signal sig which is both the ith input and the oth output, and whose relation is established by the block P. This is illustrated in Fig. 7, where m and n are the numbers of inputs and outputs of P. PreFD adds a signal into the inputs at position i. Then P takes the assembled inputs and produces outputs in which the oth output is equal to the supplied signal. Finally, the outputs of the feedback are the outputs of P without the oth output. Therefore, a block with feedback is translated to a sequential composition of PreFD, P, and PostFD.

Definition 16 (fD)
P fD (i, o) ≙ (∃sig · (PreFD(sig, inps(P), i) ; P ; PostFD(sig, outps(P), o)))

where i and o denote the index numbers of the input signal and the output signal that are looped, and PreFD denotes a block that adds sig into the ith place of the inputs.

Fig. 7 Feedback


Definition 17 (PreFD)
PreFD(sig, m, idx) ≙ FBlock(truef, m − 1, m, f_PreFD(sig, idx))
where f_PreFD(sig, idx) = (λx, n · take(idx, x(n)) ⌢ ⟨sig(n)⟩ ⌢ drop(idx, x(n)))

PostFD denotes a block that removes the oth signal from the outputs of P; this signal shall be equal to sig.

Definition 18 (PostFD)
PostFD(sig, n, idx) ≙
( true
  ⊢
  (∀nn · ( #(inouts(nn)) = n ∧ #(inouts′(nn)) = n − 1 ∧ inouts′(nn) = f_PostFD(idx)(inouts, nn) ∧ sig(nn) = inouts(nn)!idx )) )

where f_PostFD(idx) = (λx, n · take(idx, x(n)) ⌢ drop(idx + 1, x(n))) and ! is an operator that returns the element of a list at a given index.

Theorem 17 (Monotonicity) Provided that
SimBlock(m1, n1, P1), SimBlock(m1, n1, P2), P1 ⊑ P2, and i1 < m1 ∧ o1 < n1,
then
(P1 fD (i1, o1)) ⊑ (P2 fD (i1, o1))

The monotonicity law states that if a block is a refinement of another block, then its feedback is also a refinement of the same feedback of the other block. The feedback of an FBlock-defined block can be simplified to a block if the original block has one unique solution for the feedback and the solution sig is supplied.

Theorem 18 (Simplification) Provided that
P = FBlock(truef, m, n, f), SimBlock(m, n, P), Solvable_unique(i, o, m, n, f), and is_Solution(i, o, m, n, f, sig),
then
(P fD (i, o)) = FBlock(truef, m − 1, n − 1, (λx, n · (f_PostFD(o) ◦ f ◦ f_PreFD(sig(x), i))(x, n)))   [Simplification]
SimBlock(m − 1, n − 1, (P fD (i, o)))   [SimBlock Closure]


The postcondition of the feedback is simply the function composition of f_PreFD, f and f_PostFD. The composed feedback is SimBlock(m − 1, n − 1) healthy. In the theorem above, Solvable_unique is defined below.

Definition 19 (Solvable_unique)
Solvable_unique(i, o, m, n, f) ≙
( (i < m ∧ o < n) ∧
  (∀sigs · ( (∀nn · #(sigs(nn)) = (m − 1)) ⇒
    (∃1 sig · ∀nn · sig(nn) = (f((λn1 · f_PreFD(sig, i, sigs, n1)), nn))!o) )) )

Solvable_unique characterises that the block with feedback has a unique solution satisfying the constraint of the feedback: the corresponding output and input are equal.

Definition 20 (is_Solution)
is_Solution(i, o, m, n, f, sig) ≙
(∀sigs · ( (∀nn · #(sigs(nn)) = (m − 1)) ⇒
  (∀nn · sig(sigs, nn) = (f((λn1 · f_PreFD(sig(sigs), i, sigs, n1)), nn))!o) ))

is_Solution evaluates a supplied signal to check whether it is a solution for the feedback. The simplification law of feedback assumes that the function f used to define the block P is solvable in terms of i, o, m and n. In addition, it must have one unique solution sig that resolves the feedback. Our approach to modelling feedback in designs enables reasoning about systems with algebraic loops: if a block is defined by FBlock and Solvable_unique(i, o, m, n, f) is true, then the feedback composition of this block in terms of i and o is feasible, no matter whether there are algebraic loops or not.

4.4.1 An Example

Consider a block P, similar to the Sum block but with two identical outputs, given below:
P = FBlock(truef, 2, 2, f)
f = (λx, n · ⟨hd(x(n)) − hd(tl(x(n))), hd(x(n)) − hd(tl(x(n)))⟩)

The feedback of P, as shown in Fig. 8, can be simplified using the simplification Theorem 18. In order to use this theorem, three lemmas are required.


Fig. 8 Feedback with an algebraic loop

Lemma 1 (SimBlock) P is SimBlock(2, 2) healthy.

The feedback of P in terms of the first output and the first input has a unique solution.

Lemma 2 (Solvable_unique) Solvable_unique(0, 0, 2, 2, f) = true

Lemma 3 (is_Solution) is_Solution(0, 0, 2, 2, f, (λx, n · hd(x(n))/2)) = true

Then, the feedback of P can be simplified.

(FBlock(truef, 2, 2, f)) fD (0, 0)
= FBlock(truef, 1, 1, (λx, n · (f_PostFD(0) ◦ f ◦ f_PreFD((λnn · hd(x(nn))/2), 0))(x, n)))   [Theorem 18, Lemmas 1, 2 and 3]
= FBlock(truef, 1, 1, (λx, n · hd(x(n))/2))   [definitions of f_PreFD, f_PostFD, and f]

Finally, the feedback of P is proved to be equal to a block Q (Fig. 8) which specifies that its output is always half of its input.
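The example can also be checked numerically with a short standalone Python sketch (ours, not the Isabelle/UTP proof): it confirms that the signal u/2 satisfies the feedback constraint for this f, although it is of course not a uniqueness proof.

def f(x, n):                          # the block P: two inputs, two identical outputs
    u, y = x(n)
    return [u - y, u - y]

u = lambda n: float(n + 1)            # an arbitrary external input signal
y = lambda n: u(n) / 2.0              # the proposed solution for the looped signal

# Feed (u, y) into the block; output 0 must equal the fed-back signal y at every step.
assert all(f(lambda k: [u(k), y(k)], n)[0] == y(n) for n in range(20))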

4.5 Virtual Blocks

In addition to Simulink blocks, we introduce three virtual blocks for the purpose of composition: Id, Split2, and Router. The identity block Id has one input and one output, and the output value is always equal to the input value.

Definition 21 (Id)
Id ≙ FBlock(truef, 1, 1, (λx, n · hd(x(n))))

This establishes the fact that a direct signal line in Simulink can be treated as the sequential composition of many Id blocks, as shown in the theorem below.


Fig. 9 Id

Fig. 10 Split

Theorem 19 Id ; Id = Id

Its healthiness condition is proved.

Theorem 20 Id is SimBlock(1, 1) healthy.

The usage of Id is shown in Fig. 9, where the dotted frame denotes a virtual block (not a real Simulink block). The diagram is translated to (B1 ∥B Id) ; B2. Split2 corresponds to the signal connection splitter that produces two signals from one, where both signals are equal to the input signal.

Definition 22 (Split2)
Split2 ≙ FBlock(truef, 1, 2, (λx, n · ⟨hd(x(n)), hd(x(n))⟩))

It is SimBlock healthy.

Theorem 21 Split2 is SimBlock(1, 2) healthy.

The usage of Split2 is shown in Fig. 10. One input signal is split into two signals: one to B1 and another to B2. In addition, the value of the signal to B1 must always be equal to that of the signal to B2. The diagram is represented as Split2 ; (B1 ∥B B2). Furthermore, signal lines may cross, that is, they are not connected but their line order has changed. For example, Fig. 11 shows two input signals split into four; the second and the third of the four signals then cross, so that the second and the third become the third and the second. We introduce a virtual block Router for this purpose, to reorder inputs.

Definition 23 (Router)
Router(m, table) ≙ FBlock(truef, m, m, (λx, n · reorder(x(n), table)))


Fig. 11 Router

Router changes the order of m input signals according to the supplied table. table is a sequence of natural numbers: the order of the sequence denotes the order of the outputs, and each element is an index of the inputs. For instance, table = [2, 0, 1] denotes the new order: the third input, the first input, and the second input. The healthiness condition of Router requires well-definedness of table.

Theorem 22 Router(m, table) is SimBlock(m, m) healthy, provided that table has m elements.

The usage of Router is shown in Fig. 11. In this example, it should be instantiated to Router(4, [0, 2, 1, 3]). Eventually, the diagram is translated to

(Split2 ∥B Split2) ; Router(4, [0, 2, 1, 3]) ; (B1 ∥B B2)
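The reorder function of Router is easy to sketch in Python (names ours); the second assertion reproduces the Router(4, [0, 2, 1, 3]) instance of Fig. 11.

def reorder(values, table):
    """Output j is input table[j]; table must have exactly m = len(values) entries."""
    return [values[i] for i in table]

assert reorder(['a', 'b', 'c'], [2, 0, 1]) == ['c', 'a', 'b']   # third, first, second
assert reorder(['s1a', 's1b', 's2a', 's2b'], [0, 2, 1, 3]) == ['s1a', 's2a', 's1b', 's2b']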

4.6 Subsystems

The treatment of subsystems (whether hierarchical or atomic) in our designs is similar to that of blocks. They can be regarded as a bigger black box (as a design) that relates inputs to outputs.

4.7 Semantics Calculation

The theorems proved in this section and in the previous Sect. 3 make it possible to automatically derive the mathematical semantics of block diagrams. These theorems are called calculation laws or rules. For instance, in order to calculate the semantics of a sequentially composed block P ; Q, we can use the simplification Theorem 11 provided that its assumptions are satisfied. An example is Sum2 ; Split2.


(Sum2 ; Split2)
= {Definition 13, Theorem 10, Definition 22, Theorem 21 and Theorem 11}
  FBlock(truef, 2, 2, (λx, n · ⟨hd(x(n)), hd(x(n))⟩) ◦ (λx, n · hd(x(n)) + hd(tl(x(n)))))
= {function composition}
  FBlock(truef, 2, 2, (λx, n · ⟨hd(x(n)) + hd(tl(x(n))), hd(x(n)) + hd(tl(x(n)))⟩))

If there are more blocks in a diagram, we need to apply the calculation laws recursively to simplify the compositions and calculate their semantics. The simplification of the integrator example is illustrated below.

((Id ∥B UnitDelay(0)) ; Sum2 ; Split2)
= {Closure and Simplification Theorems 8, 10, …, 11 and 16}
  FBlock(truef, 2, 2, (λx, n · (f_Split2 ◦ f_Sum2 ◦ (λx, n · (f_Id ◦ (λx, n · take(1, x(n))))(x, n) ⌢ (f_UD ◦ (λx, n · drop(1, x(n))))(x, n)))(x, n)))
= {function composition}
  FBlock(truef, 2, 2, (λx, n · ⟨hd(x(n)) + (0 ◁ n = 0 ▷ hd(tl(x(n − 1)))), hd(x(n)) + (0 ◁ n = 0 ▷ hd(tl(x(n − 1))))⟩))   [l1]

In addition,
SimBlock(2, 2, FBlock(truef, 2, 2, f0))   [l2]
Solvable_unique(1, 1, 2, 2, f0)   [l3]
is_Solution(1, 1, 2, 2, f0, sol)   [l4]

Therefore,
(((Id ∥B UnitDelay(0)) ; Sum2 ; Split2) fD (1, 1))
= {[l1], [l2], [l3], [l4] and Theorem 18}
  FBlock(truef, 1, 1, (λx, n · (f_PostFD(1) ◦ f0 ◦ f_PreFD(sol(x), 1))(x, n)))
= {function composition}
  FBlock(truef, 1, 1, (λx, n · sol(x, n)))   [r]

where L ◁ b ▷ R is the if-then-else conditional operator; f_Id, f_UD, f_Sum2, and f_Split2 denote the functions in the corresponding block definitions that specify their outputs with regard to their inputs; f0 stands for the function

f0 = (λx, n · ⟨hd(x(n)) + (0 ◁ n = 0 ▷ hd(tl(x(n − 1)))), hd(x(n)) + (0 ◁ n = 0 ▷ hd(tl(x(n − 1))))⟩)

and sol is the unique solution for the feedback:

sol ≙ (λx, n · hd(x(n)) + (0 ◁ n = 0 ▷ sol(x, n − 1)))

Finally, the composition of the integrator is simplified to a block ([r]) that has one input and one output. Additionally, the output is equal to the current input plus


previous output, and the initial output is the initial input. In short, the output is the sum of all inputs, which is consistent with the definition of the PID integrator.
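A quick standalone Python check (ours) confirms that the recursive solution sol computes exactly this running sum.

def sol(x, n):
    """sol(x, n) = hd(x(n)) + (0 if n = 0 else sol(x, n - 1))."""
    return x(n)[0] + (0.0 if n == 0 else sol(x, n - 1))

xs = [3.0, 1.0, 4.0, 1.0, 5.0]
x = lambda n: [xs[n]]
assert [sol(x, n) for n in range(len(xs))] == [3.0, 4.0, 8.0, 9.0, 14.0]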

5 Verification Strategies and Case Study

5.1 General Procedure of Applying Assumption-Guarantee Reasoning

Simulink blocks are semantically mapped to designs in UTP, where additionally we model assumptions of blocks to avoid unpredictable behaviour (such as a divide-by-zero error in the Divide block) and to ensure healthiness of blocks. The general procedure for applying AG reasoning to verify correctness (S ⊑ I) of a Simulink subsystem or diagram (I) against its contracts S is given below.

S.1 Single blocks and atomic subsystems are translated to single designs with assumptions and guarantees, as well as block parameters. This is shown in Sect. 3. We now have the UTP semantics for each block.
S.2 Hierarchical block compositions are modelled as compositions of designs by means of sequential composition (;), parallel composition (∥B), feedback (fD), and the three virtual blocks. Composition of blocks is given in Sect. 4.
S.3 With the calculation laws of Sects. 3 and 4, the composition is simplified and its UTP semantics is calculated. Finally, we obtain the semantics of the diagram (I).
S.4 Requirements (or contracts) of the block diagram (S) to be verified are modelled as designs as well.
S.5 The refinement relation (S ⊑ I) in UTP is used to verify whether a given property is satisfied by a block diagram (or a subsystem). Because both the diagram and the contracts have UTP semantics, the refinement check can be applied. However, in order to apply the simplification rules, we need to prove some properties of the blocks to be composed, which is not always practical. In particular, the simplification Theorem 18 for feedback requires that (1) the block is SimBlock healthy, (2) it has one unique solution, and (3) the solution is supplied. Finding the solution may be difficult. To cope with this difficulty, our approach supports compositional reasoning (illustrated in Fig. 12), based on the monotonicity of the composition operators with respect to the refinement relation. Provided a property S to be verified is decomposed into two properties S1 and S2, and S1 and S2 are verified to hold of two blocks or subsystems I1 and I2 respectively, then S (the composition of the properties) is also satisfied by the same composition of the blocks or subsystems:

(S1 ⊑ I1 ∧ S2 ⊑ I2) ⇒ (S1 op S2 ⊑ I1 op I2)


Fig. 12 Compositional reasoning

where op ranges over sequential composition ;, parallel composition ∥B, and feedback fD. With compositional reasoning, this problem is largely mitigated because we do not need to simplify the composition of blocks further. The verification goal (labelled 1 in Fig. 12) becomes three subgoals: the goals labelled 2 and 3, and refinement between S and S1 op S2.
Remark 1 Since we have compositional reasoning (S.5) to cope with the difficulty of semantics calculation, what is the purpose of the simplification laws? In fact, direct semantics calculation by simplification and compositional reasoning are complementary. Semantics calculation by simplification retains the semantics of the composition of blocks (say, a subsystem), which makes the refinement check against contracts simpler and gives further compositions of the subsystem the full semantics. For instance, if there are various contracts to be verified for the subsystem, we only need to check refinement once for each contract. With compositional reasoning, however, different contracts have different decomposed subcontracts. Hence, for each contract, we have to prove correctness of each subcomponent against its corresponding subcontract.
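As an informal illustration of why this decomposition is sound, the following toy Python sketch (ours, not the UTP semantics itself) views a design as the set of before/after pairs it allows, refinement as inclusion of behaviours, and sequential composition as relational composition; monotonicity then follows directly:

```python
# Toy illustration: a "design" is the set of (before, after) pairs it allows;
# S refined-by I means every behaviour of I is allowed by S.
def refines(S, I):            # S is refined by I
    return I <= S

def seq(P, Q):                # relational (sequential) composition
    return {(a, c) for (a, b1) in P for (b2, c) in Q if b1 == b2}

S1 = {(0, 0), (0, 1), (1, 1)}     # abstract contracts (more permissive)
S2 = {(0, 2), (1, 2), (1, 3)}
I1 = {(0, 1), (1, 1)}             # concrete blocks (fewer behaviours)
I2 = {(1, 2)}

assert refines(S1, I1) and refines(S2, I2)
# Monotonicity: the composed blocks refine the composed contracts.
assert refines(seq(S1, S2), seq(I1, I2))
```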

5.2 Mechanisation

Our work in this paper has been mechanised in Isabelle/UTP. The mechanisation includes three theory files: one for the definitions of our theory, one for laws and theorems, and one for the case study (shown later in Sect. 5.3). The definitions begin with the state space inouts, the healthiness condition SimBlock, inps and outps. Based on these, we are able to give definitions to the block composition operators ;,⁴ ∥B and fD, as well as definitions of each individual block. The definition of each block starts with a function that characterises its inputs as its assumption and a function that specifies its outputs in terms of its inputs and block parameters as a guarantee, and then uses the block pattern FBlock to define the block.
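To convey the flavour of this block pattern, the following conceptual Python sketch (ours; the names Block, unit_delay and divide are illustrative stand-ins and do not reflect the Isabelle/UTP encoding) pairs an assumption over the input traces with an output function:

```python
# Conceptual sketch of an "assumption + output function" block pattern.
class Block:
    def __init__(self, assumption, m, n, out_fn):
        self.assumption = assumption   # predicate over (inputs, step)
        self.m, self.n = m, n          # numbers of inputs and outputs
        self.out_fn = out_fn           # (inputs, step) -> list of n outputs

    def step(self, inputs, k):
        assert self.assumption(inputs, k), "block assumption violated"
        return self.out_fn(inputs, k)

# A unit-delay block with initial output 0: outputs the previous input.
unit_delay = Block(lambda xs, k: True, 1, 1,
                   lambda xs, k: [xs[k - 1][0]] if k > 0 else [0.0])

# A divide block whose assumption rules out division by zero.
divide = Block(lambda xs, k: xs[k][1] != 0, 2, 1,
               lambda xs, k: [xs[k][0] / xs[k][1]])

trace = [[4.0, 2.0], [9.0, 3.0]]
print(divide.step(trace, 1))               # [3.0]
print(unit_delay.step([[5.0], [7.0]], 1))  # [5.0]
```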

⁴ Sequential composition of blocks is the same as sequence of designs, and therefore has been defined in the theory of designs.


Fig. 13 Post landing finalize. Source [6]

Then a variety of theorems are proved for verification. All lemmas and theorems presented in this paper are mechanised. In addition, for each block defined, it is proved to be SimBlock healthy. These theorems form our calculation laws. Using these definitions and laws, along with our verification strategies, a case study is verified and mechanised.

5.3 Case Study

This case study verifies a post_landing_finalize subsystem, which is part of an aircraft cabin pressure control application. The original Simulink model is from Honeywell, through our industrial link with D-RisQ. The same model is also studied in [6], and the diagram shown in Fig. 13 is from that paper. It models the behaviour of the output signal finalize_event and ensures the signal is only triggered after the aircraft door has been open for a minimum specific amount of time following a successful landing. This specific signal will be used to invoke subsequent activities such as calibration, initialisation or tests of sensors and actuators. Therefore, it is very important to ensure the signal is always triggered as expected. The model has four inputs:
• door_closed is a boolean signal and denotes whether the door is closed or not,
• door_open_time is the minimum amount of time (in seconds) that the door is continuously open before the finalize_event is triggered,
• mode means the current aircraft state, such as LANDING (4) and GROUND (8),
• ac_on_ground is a boolean signal and means whether the aircraft is on the ground or not.

Honeywell: https://www.honeywell.com/. D-RisQ: http://www.drisq.com/.


one output, finalize_event, and three subsystems:
• variableTimer ensures its output is true only after the door has been open continuously for door_open_time,
• rise1Shot models a rising edge-triggered circuit,
• latch models an SR AND-OR latch circuit.
In order to apply our AG reasoning to this Simulink model, we first translate the model (or the system) as shown in Sect. 5.3.1. Then we verify a number of properties of the three subsystems in this model, which is given in Sect. 5.3.2. Finally, in Sect. 5.3.3 we present the verification of four requirements of this model. Our verification strategies are illustrated in these subsections.

5.3.1 Translation and Simplification

We start with translation of the three small subsystems (variableTimer, rise1Shot and latch) according to our block theories. The original model of the subsystem latch is illustrated in the left diagram of Fig. 14. It has two boolean inputs, S and R, one output, and one feedback. It is rearranged to get the right diagram of Fig. 14. This diagram is modelled as a composition of design blocks below.
Remark 2 Our rearrangement of the diagram only flips the unit delay block and changes its location, which will not change the behaviour of the diagram. The purpose of the rearrangement is to make the translated latch in our theory easy to compare with the diagram.

  latch ≜ ((((UnitDelay(0) ∥B Id) ; LopOR(2)) ∥B (Id ; LopNOT)) ; LopAND(2) ; Split2) fD (0, 0)

The blocks LopOR, LopNOT and LopAND correspond to the OR, NOT and AND operators in the logic operator block. For brevity, their definitions are omitted. Then we apply composition definitions, simplification and SimBlock closure laws to simplify the subsystem. Finally, the latch subsystem is simplified to a design block as shown below.

Fig. 14 Latch subsystem


Theorem 23 latch = FBlock(truef, 2, 1, latch_simp_pat_f) where

  latch_simp_pat_f ≜ (λ x, na · ⟨latch_rec_calc_output(λ n1 · hd(x(n1)), λ n1 · x(n1)!1, na)⟩)

  latch_rec_calc_output(S, R, n) ≜
    ((0 ◁ S(0) = 0 ▷ 1) ◁ R(0) = 0 ▷ 0)
      ◁ n = 0 ▷
    ((latch_rec_calc_output(S, R, n − 1) ◁ S(n) = 0 ▷ 1) ◁ R(n) = 0 ▷ 0)

L ◁ b ▷ R is an if-then-else conditional operator. latch_simp_pat_f is the function that characterises the relation between the inputs x and the outputs of the latch. It has one output, which is in turn defined by latch_rec_calc_output. latch_rec_calc_output is a recursively defined function that establishes the relation of the current outputs with regard to the current inputs as well as their history. It has three parameters: the two inputs S and R, and the step number n. The recursion corresponds to the feedback, and the step number n − 1, denoting the previous step, corresponds to the one unit delay block in the feedback.
Remark 3 latch_simp_pat_f is calculated using the simplification Theorem 18 from the supplied solution. Our current work does not yet have an automated way to find the unique solution. It is a research problem that we need to handle in the future.
Similarly, the subsystems variableTimer (Fig. 15) and rise1Shot are translated and simplified. variableTimer has two inputs: door_open (which is the negation of door_closed) and door_open_time. Basically, its model can be seen as three parts:

Fig. 15 variableTimer subsystem


the first part is from block 4 to block 14, the second part from block 12 to varConfirm_1, and the third part is varConfirm_1 itself. The second part computes the desired number of door-open steps according to door_open_time and the sampling rate (the rate block). The first part (1) increases the door-open count (if the door is open) until it reaches the desired count, and (2) resets it to 0 (if the door is closed). Then varConfirm_1 simply compares whether the desired door-open count has been reached (the output is true) or not (the output is false).

  variableTimer1 ≜ ((((Min2 ; UnitDelay(0)) ∥B Const(1)) ; Sum2) ∥B Id ∥B Const(0)) ; Switch1(0.5) ; Split2

  variableTimer2 ≜ (Const(0) ∥B Id) ; Max2 ; (Gain Rate) ; RoundCeil ; DataTypeConvInt32Zero ; Split2

  variableTimer ≜ (((variableTimer1 ∥B variableTimer2) fD (0, 0)) fD (0, 2)) ; RopGT
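A rough behavioural sketch of this subsystem, reflecting our reading of the description above rather than the mechanised definition (the sampling rate of 10 is the value used for this model later in the section), is:

```python
# Rough, illustrative sketch of variableTimer: a saturating counter that
# increments while the door is open, resets when it is closed, and outputs
# True once the desired number of door-open samples has been reached.
import math

RATE = 10  # samples per second (the model's sampling rate)

def variable_timer(door_open, door_open_time):
    limit = math.ceil(max(0.0, door_open_time) * RATE)  # desired open samples
    count, outputs = 0, []
    for open_now in door_open:
        count = min(count + 1, limit) if open_now else 0
        outputs.append(limit > 0 and count >= limit)
    return outputs

# Door open for 12 consecutive samples with door_open_time = 1s (10 samples):
print(variable_timer([True] * 12, 1.0))
# -> False for the first 9 samples, then True from the 10th sample onwards.
```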

This composition is simplified to a block variableTimer_simp_pat (its definition is omitted).
Theorem 24 variableTimer = variableTimer_simp_pat
Finally, we can use a similar approach to compose the three subsystems with the other blocks in the diagram (Fig. 13) to get the corresponding composition of post_landing_finalise_1, and then apply similar laws to simplify it further into one block and verify requirements for this system. However, for the outermost feedback, it is difficult to simplify it into one block in the same way because it is more complicated than the feedbacks in the three small subsystems. In order to use the simplification Theorem 18 for feedback, we need to find a solution for the block and prove that the solution is unique. With increasing complexity of blocks, the application of this simplification law becomes harder and harder. Therefore, post_landing_finalise_1 has not been simplified into one block. Instead, it is simplified to a block with a feedback.
Theorem 25 (System Simplification) post_landing_finalize_1 = plf_rise1shot_simp fD (4, 1)

5.3.2 Subsystems Verification

After simplification, we can verify properties of the subsystems using the refinement relation. We start with verification of a property for variableTimer: vt_req_00. This property states that if the door is closed, then the output of this subsystem is always false. However, this property cannot be verified in the absence of an assumption on the second input, door_open_time. This is due to a type conversion block int32 used in the subsystem. If the input to int32 is larger than 2^31 − 1 (that is, if door_open_time is larger than (2^31 − 1)/10, where 10 is the sampling rate), its output is less than zero and eventually the subsystem's output is true. That is not the expected result. In practice, door_open_time should be less than (2^31 − 1)/10. Therefore, we can make an assumption on the input and eventually verify this property, as given in Lemma 4. In the lemma, the left-hand side of ⊑ is the contract and the right-hand side is variableTimer. The precondition of the contract is the assumption, which assumes that the first input door_closed is boolean ([p1]) and that the second input door_open_time is non-negative and at most (2^31 − 1)/10 = 214 748 364 (with / as integer division) ([p2]). The postcondition of the contract guarantees that variableTimer has two inputs and one output ([p3]), and establishes the property (if the door is closed, then the output is always false) ([p4]). Additionally, we suggest replacing int32 by uint32, or changing the data type of the input from double to an unsigned integer type such as uint32.
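The wrap-around behind this assumption can be reproduced with ordinary two's-complement arithmetic (a hedged illustration of ours; it models int32 conversion generically, not the Simulink block itself):

```python
# Illustration of the int32 wrap-around behind the assumption on door_open_time.
RATE = 10
MAX_INT32 = 2**31 - 1                        # 2147483647

def to_int32(x):
    # Two's-complement conversion: wrap modulo 2**32 into [-2**31, 2**31 - 1].
    x = int(x) & 0xFFFFFFFF
    return x - 2**32 if x >= 2**31 else x

safe_bound = MAX_INT32 // RATE               # 214748364
print(to_int32((safe_bound + 1) * RATE))     # negative: comparison misbehaves
print(to_int32(safe_bound * RATE))           # still non-negative within the bound
```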

Lemma 4 (vt_req_00)

  (∀ n · (hd(inouts(n)) = 0 ∨ hd(inouts(n)) = 1) ∧                               [p1]
         (hd(tl(inouts(n))) ≥ 0 ∧ hd(tl(inouts(n))) ≤ 214748364))                [p2]
  ⊢
  (∀ n · #(inouts(n)) = 2 ∧ #(inouts′(n)) = 1 ∧                                  [p3]
         (hd(inouts(n)) = 0 =⇒ hd(inouts′(n)) = 0))                              [p4]
  ⊑ variableTimer

Furthermore, one property of the latch subsystem (an SR AND-OR latch) is verified. The property latch_req_00 states that as long as the second input R is true, its output is always false. This is consistent with the definition of the SR latch in circuits.
Lemma 5 (latch_req_00)

  (∀ n · (hd(inouts(n)) = 0 ∨ hd(inouts(n)) = 1) ∧
         (hd(tl(inouts(n))) = 0 ∨ hd(tl(inouts(n))) = 1))
  ⊢
  (∀ n · #(inouts(n)) = 2 ∧ #(inouts′(n)) = 1 ∧
         (hd(tl(inouts(n))) = 0 =⇒ hd(inouts′(n)) = 0))
  ⊑ latch
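An executable reading of latch_rec_calc_output (an illustrative, iterative Python transcription of ours, not the mechanised function) makes the reset-dominant behaviour behind latch_req_00 easy to sanity-check on sample traces:

```python
# Illustrative transcription of latch_rec_calc_output: reset (R) dominates,
# set (S) latches the output to 1 until reset; iterative rather than recursive.
def latch_outputs(S, R):
    outputs, prev = [], 0
    for n, (s, r) in enumerate(zip(S, R)):
        if r != 0:
            out = 0                      # R true  -> output false
        elif s != 0:
            out = 1                      # S true  -> output true (latched)
        else:
            out = 0 if n == 0 else prev  # hold the previous value (0 initially)
        outputs.append(out)
        prev = out
    return outputs

S = [0, 1, 0, 0, 1, 0]
R = [0, 0, 0, 1, 1, 0]
print(latch_outputs(S, R))   # [0, 1, 1, 0, 0, 0]
# Whenever R is true the output is false, as latch_req_00 states.
```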

5.3.3 Verification of Requirements

Under the assumptions listed in Table 1, the four requirements to be verified for the system are given in Table 2. Our approach to cope with the difficulty of simplifying this system into one design (Sect. 5.3.1 and Theorem 25) is to apply compositional reasoning (see S.5). As a specific example, the procedure to verify the requirements of this system using compositional reasoning is shown below (also illustrated in Fig. 16). In order to verify that a requirement S is satisfied by post_landing_finalise_1, that is S ⊑ post_landing_finalise_1, we need to find a decomposed contract S1 and verify S ⊑ (S1 fD (4, 1)) and S1 ⊑ plf_rise1shot_simp. For brevity, only the verification of the first requirement is demonstrated below. The verification of the other three requirements is very similar.

Table 1 Assumptions for the system
Assumption 1: ac_on_ground can be true before the mode transitions to GROUND
Assumption 2: The mode can transition directly from LANDING to GROUND
Assumption 3: door_open_time does not change while the aircraft is on the ground
Assumption 4: door_closed must be true if ac_on_ground is false
Source [6]

Table 2 Requirements for the system
Requirement 1: A finalize event will be broadcast after the aircraft door has been open continuously for door_open_time seconds while the aircraft is on the ground after a successful landing
Requirement 2: A finalize event is broadcast only once while the aircraft is on the ground
Requirement 3: The finalize event will not occur during flight
Requirement 4: The finalize event will not be enabled while the aircraft door is closed
Source [6]

Fig. 16 Illustration of compositional reasoning of this model


Requirement 1 According to Assumption 3, “door_open_time does not change while the aircraft is on the ground” (see Table 1), and the fact that this requirement specifies that the aircraft is on the ground, door_open_time is constant in this scenario. In order to simplify the verification, we assume it is always constant. The requirement is modelled as a design contract req_01_contract (S), shown below.

req_01_contract ≜
  (∀ n · (λ x, n · (hd(x(n)) = 0 ∨ hd(x(n)) = 1) ∧                               [p1]
                   (x(n)!1 = c) ∧                                                [p2]
                   (x(n)!3 = 0 ∨ x(n)!3 = 1)) (inouts, n))                       [p3]
  ⊢
  (∀ n ·
     #(inouts(n)) = 4 ∧                                                          [r1]
     #(inouts′(n)) = 1 ∧                                                         [r2]
     (∀ m ·                                                                      [r3]
        (inouts(m)!3 = 1 ∧ inouts(m)!2 = 4 ∧ inouts(m)!0 = 1) ∧                  [r3.1]
        (inouts(m + 1)!3 = 1 ∧ inouts(m + 1)!2 = 8 ∧ inouts(m + 1)!0 = 1)        [r3.2]
        =⇒
        (∀ p ·
           ((∀ q · (q ≤ c ∗ rate) =⇒ inouts(m + 2 + p + q)!0 = 0) ∧              [r3.3]
            (∀ q · (q ≤ p + c ∗ rate) =⇒
                   inouts(m + 2 + q)!3 = 1 ∧ inouts(m + 2 + q)!2 = 8) ∧          [r3.4]
            (inouts(m + 2 + p − 1)!0 = 1) ∧                                      [r3.5]
            (∀ q · (q < p) =⇒ inouts′(m + 2 + q) = ⟨0⟩))                         [r3.6]
           =⇒ inouts′(m + 2 + p + c ∗ rate) = ⟨1⟩)))                             [r3.7]

Recall that the model has four inputs: door_closed, door_open_time, mode, and ac_on_ground. The precondition of the contract assumes, for every step n, that
• the data type of the first input door_closed is boolean ([p1]);
• the second input door_open_time is a constant c ([p2]);
• the data type of the fourth input ac_on_ground is boolean ([p3]).


Fig. 17 Scenario of requirement 1

Its postcondition specifies that
• it always has four inputs ([r1]) and one output ([r2]);
• then a scenario is specified (illustrated in Fig. 17; [r3] in its definition):
  – after a successful landing at steps m and m + 1: the door is closed, the aircraft is on the ground, and the mode is switched from LANDING (4 at step m, [r3.1]) to GROUND (8 at step m + 1, [r3.2]),
  – then the door is open continuously for door_open_time seconds, from step m + 2 + p to m + 2 + p + door_open_time ∗ rate ([r3.3]), and therefore the door is closed at the previous step m + 2 + p − 1 ([r3.5]),
  – while the aircraft is on the ground: ac_on_ground is true and mode is GROUND ([r3.4]),
  – additionally, between steps m and m + 2 + p, the finalize_event is not enabled ([r3.6]),
  – then a finalize_event will be broadcast at step m + 2 + p + door_open_time ∗ rate ([r3.7]).
Remark 4 In order to verify this requirement, Sim2SAL, the tool used in [6] to verify this system, introduces two auxiliary variables (latch and timer_count) and splits this requirement into three properties expressed in LTL. Our approach, in contrast, models this complex scenario in a single design, which demonstrates its expressive power.
The decomposed contract req_01_1_contract (S1) is shown below, where only its postcondition is displayed. Its precondition is omitted for brevity because it reuses the precondition of req_01_contract.


req_01_1_contract ≜
  · · ·
  ⊢
  (∀ n ·
     #(inouts(n)) = 5 ∧                                                          [r1]
     #(inouts′(n)) = 2 ∧                                                         [r2]
     (∀ m ·                                                                      [r3]
        (inouts(m)!3 = 1 ∧ inouts(m)!2 = 4 ∧ inouts(m)!0 = 1) ∧                  [r3.1]
        (inouts(m + 1)!3 = 1 ∧ inouts(m + 1)!2 = 8 ∧ inouts(m + 1)!0 = 1) ∧      [r3.2]
        (∀ q · inouts′(q)!1 = inouts(q)!4)                                       [r3.3]
        =⇒
        (∀ p ·
           ((∀ q · (q ≤ c ∗ rate) =⇒ inouts(m + 2 + p + q)!0 = 0) ∧              [r3.4]
            (∀ q · (q ≤ p + c ∗ rate) =⇒
                   inouts(m + 2 + q)!3 = 1 ∧ inouts(m + 2 + q)!2 = 8) ∧          [r3.5]
            (inouts(m + 2 + p − 1)!0 = 1) ∧                                      [r3.6]
            (∀ q · (q < p) =⇒ inouts′(m + 2 + q) = ⟨0, 0⟩))                      [r3.7]
           =⇒ inouts′(m + 2 + p + c ∗ rate) = ⟨1, 1⟩)))                          [r3.8]

Its postcondition differs from that of req_01_contract in the number of inputs and outputs (one more input and one more output), in an additional predicate ([r3.3]) insisting that the second output is always equal to the fifth input (because they will be connected by the feedback, req_01_1 = req_01_1_contract fD (4, 1)), and in the two equal outputs ([r3.8], corresponding to the split outputs).
According to Fig. 16, we need to prove two auxiliary lemmas in order to verify the contract.
Lemma 6 (Refinement of decomposed contract with regard to the contract) req_01_contract ⊑ req_01_1_contract fD (4, 1)
Lemma 7 (Refinement of decomposed contract with regard to the decomposed diagram) req_01_1_contract ⊑ plf_rise1shot_simp
Monotonicity is achieved by Theorem 17.
Lemma 8 (Monotonicity of feedback) req_01_1_contract fD (4, 1) ⊑ plf_rise1shot_simp fD (4, 1)


From Lemmas 6, 7 and 8, and the transitivity property of the refinement relation, the requirement is verified.
Theorem 26 (Requirement 1) req_01_contract ⊑ post_landing_finalize_1

5.4 Summary

In sum, we have translated and mechanised the post_landing_finalize diagram in Isabelle/UTP, simplified its three subsystems (variableTimer, rise1Shot and latch) and simplified post_landing_finalize into a design with feedback, and finally verified all four requirements of this system. In addition, our work has identified a vulnerable block in variableTimer. This case study demonstrates that our verification framework is expressive enough to specify scenarios for requirement verification, and it illustrates our verification methodology.

6 Conclusions

6.1 Related Work

Our work in this paper is inspired by the interface theory [37] and RCRS [32, 33] in various aspects, and shares some ideas with other approaches.
Contract. A variety of Assume-Guarantee reasoning or contract-based verification facilities for Simulink exist. SimCheck [35] defines a contract annotation language for Simulink types, including unit and dimension constraints, and then uses a Satisfiability Modulo Theories (SMT) solver to check the well-formedness of Simulink diagrams with respect to these contracts. [7, 8] define a contract syntax that is used to annotate Simulink diagrams, translate both contracts and Simulink diagrams to SDF, and then use the notion of data refinement to check correctness. The interface theory [37] simply uses relations (predicates) as contracts to specify a set of valid input/output pairs, and RCRS [32] gives contracts to components or systems as either Symbolic Transition System (STS) based or Quantified Linear Temporal Logic (QLTL) based monotonic property transformers. Contracts in our approach are simply designs, a subset of alphabetised relations, which is similar to [37].
Composition operators. The way blocks are composed by sequential composition, parallel composition, and feedback in our approach is similar to that of the interface theory and RCRS.
Theorem proving. ClawZ [3] uses the theorem prover ProofPower-Z [34]. The tool used in [9] is called S2L and is written in Java. Sim2SAL, the tool that is used in [24] and [6] (to verify the same case study as ours), is SMT solver based. Similar to


RCRS, our approach also uses theorem proving for verification, in Isabelle/UTP, an implementation of UTP in Isabelle.
Data types and typechecking. Reference [9] implements a type inference mechanism prior to the translation to get type information for each block and then uses the information during translation. ClawZ [3] and its related approaches [10–12] use a universal type (real numbers) for all blocks. RCRS defines well-formedness of components by induction on the structure of the components. Our current work is similar to ClawZ and uses a universal type (real numbers).
Multi-rate. Reference [9] implements clock inference to infer timing information and uses the information to generate Lustre programs. Verification of multi-rate diagrams is supported by both CircusTime [12] and [8]. In [8], each stream is associated with a period, and the stream can be upsampled or downsampled to match a given base period T. CircusTime introduces a solver to synchronise all blocks (each block as a process) to model the cycles of the Simulink diagrams. In fact, the cycles are defined by the simulation time parameter. Each block models the behaviour of one cycle (parametrised by the simulation time). Depending on the simulation time, a block could be a sample time miss (quiescent) or a hit (update) at a given step. We do not yet support verification of multi-rate diagrams; it is part of our future plans. We will use a similar idea to CircusTime, with a miss and a hit, but without synchronisation (purely relational).
Abstraction and internal states. Our definitions of blocks and composition operators do not need to introduce additional state variables to explicitly keep internal state for future use, which is common in other approaches. For instance, [7, Fig. 5] shows a representation of the unit delay block in Synchronous Data Flow (SDF) [23] by using an additional variable x for its internal state. [12] also needs to add variables to the state space of stateful blocks. This is necessary in their approaches, but it is not the case for the relation-based solutions: ClawZ, the interface theory, RCRS, and our work. ClawZ and RCRS [32] also define the unit delay block by introducing additional variables, but these variables are auxiliary (eliminated during relation calculation but useful for modelling). Our definition of the unit delay block uses inouts(n − 1) to represent its previous input, which is more abstract, direct and natural to its mathematical semantics. Therefore, our approach minimises the state spaces of translated block diagrams as well as the relation calculation, which may ease the verification of large-scale discrete-time systems.
Generalising and unifying semantics. Our work has its semantics based on UTP. Therefore, our framework is capable of unifying and generalising semantics from different paradigms. One example is to integrate our approach with CircusTime to deliver a solution from contracts to implementations, because the semantics of CircusTime is also given in UTP. Our current work is able to verify the abstract side, from contracts to Simulink diagrams, and CircusTime is capable of verifying the implementation side, from Simulink diagrams to code (such as Ada and C programs). In future, we will similarly support refinement of Simulink control law diagrams to implementations using Isabelle/UTP.


6.2 Conclusion and Future Work

We have presented a compositional assume-guarantee reasoning framework for discrete-time Simulink diagrams. Our approach is based on the theory of designs in UTP. In this paper, we present definitions for various blocks and block composition operators, as well as a set of calculation laws. These definitions and laws allow us to calculate the semantics of a diagram. After that, we can verify the contracts against the diagram (using the calculated semantics). One industrial example has been verified, and our approach identified one vulnerable block.
As we discussed in Sect. 1, we would like to develop an approach that is general in contracts, able to reason about algebraic loops and multi-rate models, and able to unify semantics from various paradigms. We have fulfilled some of these goals and presented them in this paper, but have left others for future work.
• One extension is to support more precise type information for signals and blocks, which allows us to infer data types for each block and each signal as well as typecheck the models. The inferred type information will further allow us to specify types in contracts and verify them.
• Our next extension is to mechanise most discrete-time Simulink blocks in our theory and to develop a translator, which will make our approach more applicable.
• In addition, as discussed in Sect. 3, our idea to support verification of multi-rate systems will be further developed and integrated into this framework. The capability to verify multi-rate models extends the usability of our approach to a wider variety of systems.
• In order to generalise our framework, we plan to extend it to support refinement of Simulink control law diagrams to implementations. Our framework will then be able to reason about Simulink diagrams from contract to implementation.
Acknowledgements This project is funded by the National Cyber Security Centre (NCSC) through the UK Research Institute in Verified Trustworthy Software Systems (VeTSS) [39]. The second author is partially supported by EPSRC grant CyPhyAssure, EP/S001190/1. We thank Honeywell and D-RisQ for sharing the industrial case study.

References 1. Add2: Jaguar Reduces Development Costs with MathWorks—Rapid Prototyping and Code Generation Tools. http://www.add2.co.uk/wp-content/uploads/add2JaguarUSERStory.pdf 2. Amalio, N., Cavalcanti, A., Miyazawa, A., Payne, R., Woodcock, J.: Foundations of the SysML for CPS modelling. Technical Report, INTO-CPS Deliverable, D2.2a (2016) 3. Arthan, R.D., Caseley, P., O’Halloran, C., Smith, A.: ClawZ: control laws in Z. In: Proceedings of 3rd IEEE International Conference on Formal Engineering Methods, ICFEM 2000, York, England, UK, 4–7 Sept 2000, pp. 169–176. IEEE Computer Society (2000). https://doi.org/ 10.1109/ICFEM.2000.873817


4. Bauer, S.S., David, A., Hennicker, R., Larsen, K.G., Legay, A., Nyman, U., Wasowski, A.: Moving from Specifications to Contracts in Component-Based Design. In: de Lara, J., Zisman, A. (eds.) Fundamental Approaches to Software Engineering—Proceedings of 15th International Conference, FASE 2012, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2012, Tallinn, Estonia, 24 Mar–1 Apr 2012. Lecture Notes in Computer Science, vol. 7212, pp. 43–58. Springer (2012). https://doi.org/10.1007/978-3-64228872-2_3 5. Bergstra, J.A., Klop, J.W.: Process algebra for synchronous communication. Inf. Control 60(1– 3), 109–137 (1984) 6. Bhatt, D., Chattopadhyay, A., Li, W., Oglesby, D., Owre, S., Shankar, N.: Contract-based verification of complex time-dependent behaviors in avionic systems. In: Rayadurgam, S., Tkachuk, O. (eds.) Proceedings of 8th International Symposium on NASA Formal Methods, NFM 2016, Minneapolis, MN, USA, 7–9 June 2016. Lecture Notes in Computer Science, vol. 9690, pp. 34–40. Springer (2016). https://doi.org/10.1007/978-3-319-40648-0_3 7. Boström, P.: Contract-based verification of simulink models. In: Qin, S., Qiu, Z. (eds.) Proceedings of 13th International Conference on Formal Engineering Methods and Software Engineering , ICFEM 2011, Durham, UK, 26–28 Oct 2011. Lecture Notes in Computer Science, vol. 6991, pp. 291–306. Springer (2011). https://doi.org/10.1007/978-3-642-24559-6_21. 8. Boström, P., Wiik, J.: Contract-based verification of discrete-time multi-rate Simulink models. Softw. Syst. Model. 15(4), 1141–1161 (2016). https://doi.org/10.1007/s10270-015-0477-x 9. Caspi, P., Curic, A., Maignan, A., Sofronis, C., Tripakis, S.: Translating discrete-time simulink to lustre. In: Alur, R., Lee, I. (eds.) Proceedings of Third International Conference on Embedded Software, EMSOFT 2003, Philadelphia, PA, USA, 13–15 Oct 2003. Lecture Notes in Computer Science, vol. 2855, pp. 84–99. Springer (2003). https://doi.org/10.1007/978-3-540-45212-6_7 10. Cavalcanti, A., Clayton, P., O’Halloran, C.: From control law diagrams to Ada via circus 11. Cavalcanti, A., Clayton, P., O’Halloran, C.: Control law diagrams in circus. In: Fitzgerald, J.S., Hayes, I.J., Tarlecki, A. (eds.) Proceedings of FM 2005: Formal Methods, International Symposium of Formal Methods Europe, Newcastle, UK, 18–22 July 2005. Lecture Notes in Computer Science, vol. 3582, pp. 253–268. Springer (2005). https://doi.org/10.1007/11526841_18 12. Cavalcanti, A., Mota, A., Woodcock, J.: Simulink timed models for program verification. In: Liu, Z., Woodcock, J., Zhu, H. (eds.) Theories of Programming and Formal Methods—Essays Dedicated to Jifeng He on the Occasion of His 70th Birthday. Lecture Notes in Computer Science, vol. 8051, pp. 82–99. Springer (2013). https://doi.org/10.1007/978-3-642-396984_6 13. Cavalcanti, A., Woodcock, J.: A tutorial introduction to CSP in unifying theories of programming. In: Cavalcanti, A., Sampaio, A., Woodcock, J. (eds.) First Pernambuco Summer School on Software Engineering, Refinement Techniques in Software Engineering, PSSE 2004, Recife, Brazil, 23 Nov–5 Dec 2004, Revised Lectures. Lecture Notes in Computer Science, vol. 3167, pp. 220–268. Springer (2004). https://doi.org/10.1007/11889229_6 14. Dragomir, I., Preoteasa, V., Tripakis, S.: Compositional semantics and analysis of hierarchical block diagrams. In: Bosnacki, D., Wijs, A. (eds.) 
Proceedings of 23rd International Symposium on Model checking software, SPIN 2016, Co-located with ETAPS 2016, Eindhoven, The Netherlands, 7–8 Apr 2016. Lecture Notes in Computer Science, vol. 9641, pp. 38–56. Springer (2016). https://doi.org/10.1007/978-3-319-32582-8_3 15. Foster, S., Cavalcanti, A., Canham, S., Woodcock, J., Zeyda, F.: Unifying theories of reactive design contracts. In preparation for Theoretical Computer Science (2017). arXiv:1712.10233 16. Foster, S., Zeyda, F., Woodcock, J.: Isabelle/UTP: a mechanised theory engineering framework. In: Naumann, D. (ed.) 5th International Symposium on Unifying Theories of Programming, UTP 2014, Singapore, 13 May 2014, Revised Selected Papers. Lecture Notes in Computer Science, vol. 8963, pp. 21–41. Springer (2014). https://doi.org/10.1007/978-3-319-148069_2 17. Gibson-Robinson, T., Armstrong, P., Boulgakov, A., Roscoe, A.: In: Proceedings of FDR3—A Modern Refinement Checker for CSP. Tools and Algorithms for the Construction and Analysis of Systems. LNCS, vol. 8413, pp. 187–201 (2014)


18. Hoare, C., He, J.: Unifying Theories of Programming, vol. 14. Prentice Hall (1998) 19. Hoare, C.A.R.: Communicating Sequential Processes. Prentice-Hall (1985) 20. Hoare, C.A.R., Roscoe, A.W.: Programs as Executable Predicates. In: Proceedings of FGCS, pp. 220–228 (1984) 21. Jones, C.B.: Wanted: a compositional approach to concurrency, pp. 5–15. Springer, New York, NY (2003). https://doi.org/10.1007/978-0-387-21798-7_1. 22. Jones, R.B.: ClawZ—The Semantics of Simulink Diagrams. Lemma 1 Ltd. (2003) 23. Lee, E.A., Messerschmitt, D.: Synchronous data flow. Proc. IEEE 75, 1235–1245 (1987) 24. Li, W., Gérard, L., Shankar, N.: Design and verification of multi-rate distributed systems. In: 2015 ACM/IEEE International Conference on Formal Methods and Models for Codesign (MEMOCODE), pp. 20–29. IEEE (2015) 25. Marian, N., Ma, Y.: Translation of Simulink Models to Component-based Software Models, pp. 274–280. Forlag uden navn (2007) 26. MathWorks: Simulink. https://www.mathworks.com/products/simulink.html 27. Meyer, B.: Applying “Design by Contract”. IEEE Comput. 25(10), 40–51 (1992). https://doi. org/10.1109/2.161279 28. Nipkow, T., Paulson, L.C., Wenzel, M.: Isabelle/HOL—a proof assistant for higher-order logic. Lecture Notes in Computer Science, vol. 2283. Springer (2002). https://doi.org/10.1007/3-54045949-9 29. Object Management Group: OMG Systems Modeling Language (OMG SysMLTM ). Technical Report. Version 1.4 (2015). http://www.omg.org/spec/SysML/1.4/ 30. OpenModelica. https://openmodelica.org/ 31. Oppenheim, A.V., Willsky, A.S., Nawab, S.H.: Signals and Systems, 2nd edn. Prentice-Hall Inc, Upper Saddle River, NJ, USA (1996) 32. Preoteasa, V., Dragomir, I., Tripakis, S.: The refinement calculus of reactive systems. CoRR (2017). arXiv:1710.03979 33. Preoteasa, V., Tripakis, S.: Refinement calculus of reactive systems. CoRR (2014). arXiv:1406.6035 34. ProofPower. http://www.lemma-one.com/ProofPower/index/index.html 35. Roy, P., Shankar, N.: SimCheck: a contract type system for Simulink. Innov. Syst. Softw. Eng. 7(2), 73 (2011). https://doi.org/10.1007/s11334-011-0145-4. 36. TeraSoft: The MathWorks in the Automotive Industry. http://www.terasoft.com.tw/product/ doc/auto.pdf 37. Tripakis, S., Lickly, B., Henzinger, T.A., Lee, E.A.: A theory of synchronous relational interfaces. ACM Trans. Program. Lang. Syst. (TOPLAS) 33(4), 14 (2011) 38. Tripakis, S., Sofronis, C., Caspi, P., Curic, A.: Translating discrete-time simulink to lustre. ACM Trans. Embed. Comput. Syst. 4(4), 779–818 (2005). https://doi.org/10.1145/1113830. 1113834 39. VeTSS: UK Research Institute in Verified Trustworthy Software Systems. https://vetss.org.uk/ 40. Woodcock, J., Cavalcanti, A.: A tutorial introduction to designs in unifying theories of programming. In: Boiten, E.A., Derrick, J., Smith, G. (eds.) Integrated Formal Methods, pp. 40–66. Springer, Berlin Heidelberg, Berlin, Heidelberg (2004) 41. Zeyda, F., Ouy, J., Foster, S., Cavalcanti, A.: Formalising cosimulation models. In: Proceedings of Software Engineering and Formal Methods (2018). https://doi.org/10.1007/978-3-31974781-1_31.

Sound and Relaxed Behavioural Inheritance Nuno Amálio

Abstract Object-oriented (OO) inheritance establishes taxonomies of OO classes. Behavioural inheritance (BI), a strong version, emphasises substitutability: objects of child classes replace objects of their ascendant classes without any observable effect difference on the system. BI is related to data refinement, but refinement's constrictions rule out many useful OO subclassings. This paper revisits BI in the light of Z and the theory of data refinement. It studies existing solutions to this problem, criticises them, and proposes improved relaxations. The results are applicable to any OO language that supports design-by-contract (DbC). The paper's contributions include three novel BI relaxations supported by a mathematical model with proofs carried out in the Isabelle proof assistant, and an examination of BI in the DbC languages Eiffel, JML and Spec#.
Keywords Refinement · Z · Object-orientation · Inheritance · Design-by-contract · JML · Eiffel · Spec# · Behavioural subtyping

1 Introduction The object-oriented (OO) paradigm has a great deal in common with biological classification (taxonomy) and the ever present human endeavour to establish taxonomies that reflect degree of relationship [30]. This is, perhaps, a factor behind OO’s popularity, which uses classification to tame diversity and complexity. Whilst biologists go from life into classification, computer scientists use classification as templates that generate computing life.

N. Amálio (B) School of Computing and Digital Technology, Birmingham City University, Millennium Point, Curzon Street, Birmingham B4 7XG, UK e-mail: [email protected] © Springer Nature Switzerland AG 2020 A. Adamatzky and V. Kendon (eds.), From Astrophysics to Unconventional Computation, Emergence, Complexity and Computation 35, https://doi.org/10.1007/978-3-030-15792-0_11


OO design builds taxonomies around classes: abstractions representing living computing objects with common characteristics. This resembles biological classifications, which group similar entities into taxa [30]. OO classes define both static and dynamic characteristics. Each object of a class is a distinct individual with its own identity. A class has a dual meaning: intension and extension [20]. Intension sees a class in terms of the properties shared by all its objects (for example, a class Person with properties name and address), whereas extension views a class in terms of its living objects (for example, class Person is {MrSilva, MsHakin, MrPatel}). Biological classifications hierarchically relate taxa through common ancestry [30]. This is akin to OO inheritance, which builds hierarchies from similarity to specificity in which higher-level abstractions (superclasses or ancestors) capture common characteristics of all descendant abstractions (subclasses). Inheritance provides a reuse mechanism: descendants reuse their ancestor definitions, and may define extra characteristics of their own. The essence of OO inheritance lies in its is-a semantics. A child abstraction (a subclass) is a kind of a parent abstraction. The child may have extra characteristics, but it has a strong link with the parent: a living object of a descendant is at the same time also an object of its ascendant classes, and a parent class includes all objects that are its own direct instances plus those of its descendants. For example, when we say that a human is a primate, then any person is both a human and a primate; characteristics of primates are also characteristics of humans, however humans have characteristics of their own which they do not share with other primates. OO computing life becomes more complicated when it comes to dynamics or behaviour, which is concerned with objects doing something when stimulated through operations. To understand operations, we resort to a button metaphor: operations are buttons, part of an object’s interface, triggered from the outside world to affect the internal state of an object and produce observable outputs. Often, such button-pushes require data (or inputs). Inheritance implies that ancestor buttons belong to descendants also and that descendants may specialise them. For example, walk on primate could be specialised differently in humans and gorillas as uprightwalk and knuckle-walk, respectively, to give rise to polymorphism. Operations apply the principle of information hiding; we have the interface of an operation—its name and expected data with respect to inputs outputs—, which is what the outside world sees, and its definition in terms of what it actually does (programs at a more concrete computing level). Through operations the outside world derives and instils meaningful outcomes from and into the abstraction. Inheritance’s is-a semantics entails substitutability: a child object can be used whenever a parent object is expected. For instance, a human is suitable whenever a primate is expected. This, in turn, entails a certain uniformity to prevent unwanted divergence, which is enforced at two levels. Interface conformity, the more superficial level, requires that the interfaces of the shared buttons, in sub- and super-class, conform with each other with respect to the data being interchanged (inputs and


outputs).¹ This guarantees that subclasses can be asked to do whatever their superclasses offer, but leaves room for unwanted deviation. For example, a gorilla class may comply with a primate walk button, but be actually defined just as standing upright without any movement. The second, deeper, level of enforcement tackles this issue through behavioural inheritance (BI) [21, 28]: not only must the interfaces conform, the behaviour must conform also, to ensure that subclass objects may stand for superclass objects without any difference in the object's observable behaviour from a superclass viewpoint. In our primates example, standing upright would not meet the motion expectations. Only through proof can the satisfaction of BI be verified.
Deep substitutability is captured by the theory of data refinement [16, 22, 39]. Inheritance relations induce refinement relations between parent and child.² This paper tackles the refinement restrictions, a major obstacle to BI's ethos of correctness already acknowledged by Liskov and Wing [28]. This paper delves into BI's foundations to propose relaxations that tackle refinement's overkills and constrictions. The investigation is in the context of Z [24, 39], a formal modelling language with a mature refinement theory [16, 39]. The work builds on ZOO, the OO style for Z presented in [2, 3, 10], which is the semantic domain of UML + Z [2, 11, 12] and the Visual Contract Language (VCL) for graphical modelling of software designs [6–9].
Contributions. This paper's contributions are as follows:
• The paper presents four relaxations to BI. Three of these relaxations are novel. A fourth unproved relaxation proposed elsewhere is proved here with the aid of the Isabelle proof assistant.
• A thorough examination of the BI relaxations that underpin the design-by-contract languages JML, Eiffel and Spec#.
Paper outline. The remainder of this paper is as follows. Section 2 presents the mathematical model that underpins the paper's BI study. Section 3 introduces the paper's BI setting and derives conjectures for BI. Section 4 presents the running example, which is analysed in Sect. 5 to better understand how BI's restrictions affect inheritance. Section 6 performs a thorough examination of BI in the DbC languages JML, Eiffel and Spec#. Section 7 presents the paper's four relaxations, which are applied to the running example. Finally, the paper concludes by discussing its results (Sect. 8), comparing the results with related work (Sect. 9) and summarising its main findings (Sect. 10). The appendix of Sect. 11 provides several mathematical definitions. The accompanying technical report [4] provides supplementary material not included in the main text.

¹ This involves type-checking, a computationally efficient means of verification which checks that variables hold valid values according to their types (e.g. boolean variables cannot hold integers).
² Whereas in data refinement the refinement relation varies, in BI this relation is always a function from subclass to superclass. BI is a specialisation of data refinement.


2 An Abstract Mathematical Model of OO This section presents the paper’s OO mathematical model, drawn from ZOO [2, 3, 10], our approach to couch OO models in Z. The model rests on abstract data types (ADTs), enabling a connection to data refinement. The sequel refers to mathematical definitions of Sect. 11, which abridge the definitions of the accompanying technical report [4].

2.1 The ADT Foundation ADTs, depicted in Fig. 1, are used to represent state-based abstractions made-up of structural and dynamic parts that capture computing life-forms. They comprise a state definition and a set of operations (Fig. 1a). Figure 1b depicts ADTs’ mathematical underpinnings. There are sets of all possible type states S, all possible environments E, all possible identifiers I, and all possible objects O (Definition 1). An ADT (Definition 3) is a quadruple T = (ST , i, f , os) comprising a set of states ST ⊆ S (the state space), an initialisation i : E ↔ ST (a relation between environment and state space), a finalisation f : ST ↔ E (a relation between state space and environment) and an indexed set of operations ops : I →  ST ↔ ST (a function from operation identifiers to relations between states). Functions sts, ini, fin, and ops (Definition 3) extract the different ADT components (e.g. for T above, sts T = ST ).
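As a lightweight, informal illustration of this quadruple (a Python analogy of ours, with relations represented as sets of pairs; it is not the Z formalisation), consider a one-place buffer:

```python
# Lightweight illustration of the ADT quadruple T = (S_T, i, f, os):
# relations are sets of pairs, and the operation index is a dict.
from dataclasses import dataclass

@dataclass
class ADT:
    states: frozenset          # S_T, the state space
    init: frozenset            # i : E <-> S_T, pairs (environment, state)
    fin: frozenset             # f : S_T <-> E, pairs (state, environment)
    ops: dict                  # operation identifiers -> relations on states

buffer = ADT(
    states=frozenset({"empty", "full"}),
    init=frozenset({("env", "empty")}),
    fin=frozenset({("empty", "env")}),
    ops={"put": frozenset({("empty", "full")}),
         "get": frozenset({("full", "empty")})},
)
print(sorted(buffer.ops))   # ['get', 'put']
```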

Fig. 1 An abstract data type (ADT) (a) comprises state (SA, SB) and operations (OA, OB). Mathematically (b), an ADT T is made-up of a set of states ST , an initialisation i (a relation from environments E to states ST ), a finalisation f (a relation from states ST to environments E) and operations os (an indexed set of relations between states ST )


2.2 Classes ADTs lack an intrinsic identity. In the model of Fig. 1b (Definition 3), two ADT instances with the same state denote a single instance. Classes are populations of individuals, suggesting uniquely identifiable individuals that retain their identity irrespective of the state in which they are in. Z promotion [35, 39] is a modular technique that builds ADTs for a population of individuals by promoting a local ADT in a global state without the need to redefine the encapsulated ADT; promoted operations are framed, as only a portion of the global state changes. Figure 2 depicts classes as promoted ADTs (PADTs), which underpin the OO model presented here. A PADT PT (Fig. 2a), is made up of an inner (or local) type T that is encapsulated and brought into a global space to make the compound (or outer) type PT . The inner and outer type correspond to class intension and extension, respectively. The mathematical underpinnings of a class as PADT are pictured in Fig. 2b. A class (Definition 6) is a 9-tuple C = (ci, t, os, stm, ot, c, d, ms), comprising a class identifier ci : I, an inner type t : ADT , a set of objects os ⊆ O of all possible object identities of the class, a global class state made up of an object to state mapping stm ∈ os →  sts t, a typing mapping ot ∈ os →  I indicating the direct class of the classes living objects, a constructor class operation c ∈ nOps, a destructor class operation d ∈ dOps and class modifiers ms ∈ uOps (see Definition 5 for nOps, dOps and uOps). Functions icl, ity, osu, los, csts, ost, ocl, cop, dop and mops extract information from class compounds (Definition 6) to yield: class identifier (icl), inner type (ity), universe of the class’s possible objects (osu), class’s living objects (los), set of class states (csts), object to state mapping (ost), object to direct class mapping (ocl), and constructor (cop), destructor (dop) and modifier operations (mops).

(a) Depiction of the structure of a class as a promoted ADT (b) Mathematical structure of a promoted ADT

Fig. 2 Promoted abstract data types (PADTs) (a) comprise an inner type T and global operations (GOA, GOB) promoting inner operations (OA, OB); OO classes are PADTs here. Mathematically (b), classes have a global function (stm) mapping class objects os (identities) to their inner states (type T ); this is the basis for constructor (c from inner initialisation), destructor (d, from inner finalisation) and modifier operations (ms from inner operations)


2.3 Inheritance

Inheritance, pictured in Fig. 3, embodies a constructive approach to build specificity on top of commonality (Fig. 3a). Its is-a semantics implies that a child is a parent with possibly something extra (Fig. 3a), which has implications at the level of inner and outer types. The inner type captures how the child inherits the characteristics of the parent and adds something of its own through ADT extension (Definition 8), expressed in Z as schema conjunction (Definition 9): C == A ∧ X. Given inner ADTs C (concrete or child) and A (abstract or parent), C is defined as being A with something extra, X. From a child state space it is possible to derive the parent's (by removing what is extra), as captured in the abstraction function ϑ (Definition 10). The mathematical relation between parent class Cp and child class Cc, depicted in Fig. 3b, rests on this ϑ function that maps inner object states of the child to those of the parent. In all states of the system, the relation between child and parent must preserve the diagram commuting of Fig. 3b (Definition 11), which materialises is-a at the mathematical level of classes: at any system state a child object can be seen as a parent object. Abstract classes (Definition 12) have no direct instances; they lack a direct existence and are used to capture general ancestors in a hierarchy, such as primate, whose existence is indirectly defined by the specimens of its descendants, such as human and gorilla.
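As an informal analogy for the abstraction function ϑ (ours, not the Z definition): a child state is a parent state plus extra components, and ϑ simply forgets the extras. The extra field below extends the earlier Person example and is illustrative only.

```python
# Python analogy of inheritance as extension: a child state is a parent state
# plus extra fields, and the abstraction function theta forgets the extras.
parent_fields = {"name", "address"}   # the Person class's properties

def theta(child_state):
    # Project a child state (a dict of fields) onto the parent's fields.
    return {k: v for k, v in child_state.items() if k in parent_fields}

child = {"name": "MsHakin", "address": "York", "extra_flag": True}
print(theta(child))   # {'name': 'MsHakin', 'address': 'York'} -- a Person state
```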


(a) Depiction of class inheritance (b) Mathematical underpinnings of inheritance
Fig. 3 Class inheritance: descendants inherit their ancestor characteristics and add something of their own. In (a), state and operations of Parent are inherited by Child, which adds SC and OC. Mathematically (b), inheritance involves a pair of functions that preserve the class's object to state mappings (expressed as diagram commutativity); one function is identity (id)—child objects are a subset of their parent objects—, the other is the abstraction function ϑ.


3 Behavioural Inheritance (BI) and Refinement The following investigates BI under the prism of data refinement; it refers to mathematical definitions from Sect. 11.

3.1 Data Refinement Refinement is a stepwise approach to software development, in which abstract models are increasingly refined into more concrete models or programs with each step carrying certain design decisions [38]. Data refinement [22] provides a foundation to this process through a theory that compares ADTs with respect to substitutability and preservation of meaning. Data refinement is founded on total relations [22]; operations are relations over a data type, programs are sequence of operations. Complete programs over a ADT start with an initialisation, carry out operations and end with a finalisation (Definition 13). In this setting, data refinement is set inclusion: given ADTs C and A, then for all complete programs pC and pA , with the same underlying operations over C and A respectively, C refines A (C  A) if and only if pC ⊆ pA (Definition 14). It is difficult to prove refinements through complete programs. In practice, refinement proofs resort to simulations (Fig. 4) where ADTs are compared inductively [22] through a simulation relation (R in Fig. 4). For each operation in the abstract type, there must be a corresponding operation in the concrete type. A refinement is verified by proving conjectures (or simulation rules), given in Definitions 15 for forwards (or downwards) simulation and 16 for backwards (or upwards) simulation and which are related to the three commutings of Fig. 4. The two types of simulations are sufficient for refinement (anything they can prove is a refinement) and together they are necessary (any refinement can be proved using either one of them) [22]. In OO with design by contract [31], operations are described in terms of preand post-conditions. Operations are partial relations applicable only in those ADT states that satisfy the pre-condition (the relation’s domain). The language Z operates in this partial setting. Refinement based on total relations is adapted to Z in [39]

Fig. 4 Data refinement simulation: every step in the concrete type is simulated by a step in the abstract type


Fig. 5 Contractual totalisation of a relation [39]

by deriving simulation rules based on a totalisation of partial relations. There are two Z refinement settings [16]: non-blocking (contractual) refinement interprets an operation as a contract and so outside the precondition anything may happen, while blocking (behavioural) refinement says that outside the precondition an operation is blocked. Figure 5 gives the contractual totalisation of relation r = {a → a, a → b, b → b, b → c} where undefinedness (⊥) and all elements outside the relation’s domain are mapped to every possible element in the target set augmented with ⊥. This paper focuses on the contractual interpretation, the most relevant for our OO context; [4] covers both interpretations. Simulation rules for contractual refinement are given in Facts 1 (forwards) and 2 (backwards).
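The totalisation of this example relation can be reproduced mechanically (a small Python sketch of ours over the same relation, with ⊥ represented by the string "bot"):

```python
# Sketch of contractual totalisation for r = {a->a, a->b, b->b, b->c}:
# outside the domain (and at bottom) anything, including bottom, may happen.
BOT = "bot"

def totalise(r, source, target):
    dom = {x for (x, _) in r}
    outside = (source - dom) | {BOT}
    return set(r) | {(x, y) for x in outside for y in target | {BOT}}

r = {("a", "a"), ("a", "b"), ("b", "b"), ("b", "c")}
print(sorted(totalise(r, {"a", "b", "c"}, {"a", "b", "c"})))
# c and bot become related to every element of {a, b, c, bot};
# a and b keep only their original pairs.
```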

3.2 BI Refinement Although developed to support refinement (from abstract to concrete) or abstraction (other way round), data refinement compares data types with respect to substitutability making it applicable to BI. BI needs to relate the types being compared (R in Fig. 4). Such a relation can be discerned from Fig. 3b by looking into how child and parent are related through inheritance (Definition 11). This gives a basis for the BI class simulation portrayed in Fig. 6, depicting inherited child operations (icop) simulated by the parent operation they inherit from (pcop) and child-only operations being simulated by parent step operations. Figure 6 suggests a refinement relation as illustrated in Fig. 3b made up of a morphism comprising two functions: the ϑ abstraction function (Definition 10) and identity. This hints at a modular approach for BI: we start with inner type refinement (class intension) through function ϑ, followed by outer type refinement (class extension), which is related to Z promotion refinement [16, 29, 39].


Fig. 6 Behavioural inheritance class simulation. An inherited class operation icop is simulated by a corresponding parent class operation pcop from which it inherits. There may be child only class operations (represented as coop), which are simulated by some step operation in the parent (pstepop)

3.3 Inner BI as ADT Extension Refinement

For any classes ClA, ClC : Cl (Definition 6), such that ClC is a child of ClA (ClC inh ClA, Definition 11), we have that inner BI equates to ADT extension refinement of the class's inner types: ity ClC ⊑Ext ity ClA. Two alternative extension refinement settings are considered: one based on the general function ϑ (Definition 17) and a ϑ specific to the Z schema calculus (Definition 18) to cater to the ZOO approach. Simulation rules were derived with the aid of Isabelle (see [4] for details). For backwards and forwards simulation, the rules reduce to a single set (unlike the general case with separate rule sets)—Corollary 1 of Appendix 1.A, a consequence of Fact 3.
Let A, C : ADT be two ADTs—such as the inner ones of classes ClC and ClA above, such that A = ity ClA and C = ity ClC—where C extends A (Definition 8). If C and A are two Z schema ADTs, then their relation is described by the schema calculus formula C == A ∧ X. Let A and C have initialisation schemas AI and CI, operations AO and CO, and finalisation schemas AF and CF.³ As established by Fact 5, C ⊑Ext A if and only if:

1. ∀ C′ • CI ⇒ AI                              (Initialisation)
2. ∀ C; i? : V • pre AO ⇒ pre CO               (Applicability)
3. ∀ C′; C; i?, o! : V • pre AO ∧ CO ⇒ AO      (Correctness)
4. ∀ C • CF ⇒ AF                               (Finalisation)

The first rule allows initialisations to be strengthened. The second rule allows the weakening of the precondition of a concrete operation (CO). The third rule says that the extended operation (CO) must conform to the behaviour of the base operation (AO) whenever the base operation is applicable—the postcondition may be strengthened. The last rule allows finalisation strengthening, but reduces to true if the finalisation is total (the ADTs lack a finalisation condition). Fact 4 captures extension refinement in the more general relational setting.

³ The finalisation condition describes a condition for the deletion of objects; e.g. a bank account may be deleted provided its balance is 0.


3.4 Extra Operations Refinement requires that each execution step in the concrete type is simulated by the abstract. A non-inherited operation in a child class (concrete) needs to simulate something in the parent (abstract). A common approach to this issue involves an abstract operation that does nothing and changes nothing (called a stuttering or a skip operation). The proofs verify that the new concrete operation refines skip: in the abstract type, skip does nothing; in the concrete type, the button executes the new operation. The rules for checking child-extra operations are obtained from the rules above by replacing AO with skip (Ξ A in Z).

3.5 Outer BI The BI simulation rules above (Sect. 3.3) cater to the class’s inner (or local) ADT only. In the class’s outer ADT, the concern is whether the refinement proved locally is preserved globally. Z promotion refinement relies on promotion freeness [16, 29, 39]: a class refines another if there is a refinement between the inner types and the child class is free or unconstrained from the global state [29]. Figure 7 describes freeness as a diagram commuting: a class is free if the set of global object states (function ists) is the same as the set of states of its inner type (function composition sts ◦ ity)—as per Definition 19. This means that the rules of Sect. 3.3 can be carried safely to contexts in which freeness holds (Definition 7)

Fig. 7 Class (or promotion) freeness


4 The ZOO Model of Queues

Figure 8 presents the running example of a hierarchy of queues. Class QueueManager holds an indexed set of queues (HasQueues); the hierarchy is as follows:

• Abstract class Queue holds a sequence of items. It has two operations: join adds an element to the queue, and leave removes the queue's head.
• Class BQueue (bounded queue) bounds the size of the queue.
• Class PBQueue (privileged-bounded queue) reserves the last place in the queue for some privileged item.
• Class RBQueue (resettable-bounded queue) adds operation reset to empty the queue.
• Class JQBQueue (jump-the-bounded-queue) adds an extra behaviour to operation join: the item taking the queue's last place jumps the queue.
• Class RABQueue (resettable-abandonable-bounded-queue) adds operation abandon, enabling any element to leave the queue irrespective of its position.

The following presents excerpts of the ZOO model formalising the class diagram of Fig. 8; an informal Python paraphrase of the hierarchy is sketched below. The complete model is given in [4]. Further information on ZOO can be obtained from [2–4, 10].
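As an informal aid (not part of the ZOO model), the hierarchy can be paraphrased as ordinary Python classes. The bound MAX_Q (standing in for the Z constant maxQ) and the explicit can_join precondition methods are illustrative choices made here so that the precondition differences discussed in Sect. 5 are easy to see.

# An informal Python paraphrase of the queue hierarchy of Fig. 8; it is not
# the ZOO/Z model, and MAX_Q (standing in for maxQ) is an arbitrary choice.
MAX_Q = 10

class Queue:
    def __init__(self):
        self.items = []
    def can_join(self, item):               # pre Queue.join = true
        return True
    def join(self, item):                   # add to the back of the queue
        assert self.can_join(item)
        self.items.append(item)
    def leave(self):                        # pre: queue non-empty
        assert self.items
        return self.items.pop(0)

class BQueue(Queue):                        # bounded queue
    def can_join(self, item):               # strengthens the inherited precondition
        return len(self.items) < MAX_Q

class PBQueue(BQueue):                      # last place reserved for privileged items
    def __init__(self, privileged):
        super().__init__()
        self.privileged = set(privileged)
    def can_join(self, item):               # strengthens the precondition again
        if len(self.items) == MAX_Q - 1:
            return item in self.privileged
        return super().can_join(item)

class RBQueue(BQueue):                      # adds the extra operation reset
    def reset(self):
        self.items = []

class JQBQueue(BQueue):                     # overrides join: last joiner jumps the queue
    def join(self, item):
        assert self.can_join(item)
        if len(self.items) == MAX_Q - 1:
            self.items.insert(0, item)
        else:
            super().join(item)

class RABQueue(RBQueue):                    # adds the extra operation abandon
    def abandon(self, item):                # pre: item is somewhere in the queue
        assert item in self.items
        self.items.remove(item)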

4.1 ZOO Model Excerpt: Inner ADTs

The inner ADT of class Queue holds a sequence of items, which is initially empty. Operation join receives an item and adds it to the back of the sequence. Operation leave removes and outputs the sequence's head.

Fig. 8 A UML class diagram describing an inheritance hierarchy of queues made up of classes Queue, BQueue (bounded-queue), RBQueue (resettable-bounded-queue), RABQueue (resettable-abandonable-bounded-queue)


Queue[Item] == [ items : seq Item ]

QueueInit[Item] == [ Queue[Item] | items = ⟨⟩ ]

QueueJoin[Item] == [ ΔQueue[Item]; item? : Item | items′ = items ⌢ ⟨item?⟩ ]

QueueLeave[Item] == [ ΔQueue[Item]; item! : Item | items ≠ ⟨⟩ ∧ item! = head items ∧ items′ = tail items ]

BQueue extends Queue by bounding the queue with constant maxQ.

maxQ : ℕ₁

BQueue[Item] == [ Queue[Item] | # items ≤ maxQ ]

BQueueInit[Item] == [ BQueue[Item]; QueueInit[Item] ]

BQueueJoin[Item] == [ ΔBQueue[Item]; QueueJoin[Item] ]

BQueueLeave[Item] == [ ΔBQueue[Item]; QueueLeave[Item] ]
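Using the informal Python paraphrase sketched above (an illustration only, not the ZOO model), the precondition strengthening that underlies the applicability failure discussed in Sect. 5.1 can be observed directly:

# Illustration with the informal Python classes above: Queue.join is always
# enabled, but BQueue.join is not once the queue is full, so the inherited
# precondition has been strengthened (the applicability issue of Sect. 5.1).
bq = BQueue()
for x in range(MAX_Q):
    bq.join(x)
print(Queue.can_join(bq, "extra"))   # True: the parent would accept the join
print(bq.can_join("extra"))          # False: the bounded child refuses it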

RBQueue extends BQueue; extra operation reset empties the queue.

RBQueue[Item] == [ BQueue[Item] ]

RBQueueReset[Item] == [ ΔRBQueue[Item] | items′ = ⟨⟩ ]

PBQueue extends BQueue by adding a set of privileged items, set at initialisation, and reserving the last place in the sequence for such an item.

PBQueue[Item] == [ BQueue[Item]; privileged : ℙ₁ Item | # items = maxQ ⇒ last items ∈ privileged ]

PBQueueInit[Item] == [ PBQueue[Item]; BQueueInit[Item]; privileged? : ℙ₁ Item | privileged = privileged? ]


JQBQueue slightly modifies operation join inherited from BQueue: the item taking the queue's last remaining place is placed at the queue's head.

JQBQueue[Item] == [ BQueue[Item] ]

JQBQueueInit[Item] == [ JQBQueue[Item]; BQueueInit[Item] ]

JQBQueueLeave[Item] == [ ΔJQBQueue[Item]; BQueueLeave[Item] ]

JQBQueueJoin[Item] == [ ΔJQBQueue[Item]; item? : Item |
  (# items < maxQ − 1 ⇒ BQueueJoin[Item]) ∧ (# items = maxQ − 1 ⇒ items′ = ⟨item?⟩ ⌢ items) ]
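With the informal Python classes sketched above, the overriding behaviour of JQBQueue.join can be exercised directly; this is the operation whose correctness proof fails in the analysis of Sect. 5 (Table 1).

# Illustration with the informal Python classes above: when only one place is
# left (MAX_Q - 1 items), a JQBQueue joiner goes to the head rather than the
# back, diverging observably from BQueueJoin.
jq = JQBQueue()
for x in range(MAX_Q - 1):
    jq.join(x)
jq.join("late")
print(jq.items[0])   # "late": the last joiner has jumped the queue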

RABQueue extends RBQueue by adding operation abandon, allowing elements to leave the queue no matter their position.

RABQueue[Item] == [ RBQueue[Item] ]

RABQueueInit[Item] == [ RABQueue[Item]; BQueueInit[Item] ]

RABQueueJoin[Item] == [ ΔRABQueue[Item]; RBQueueJoin[Item] ]

RABQueueLeave[Item] == [ ΔRABQueue[Item]; RBQueueLeave[Item] ]

RABQueueReset[Item] == [ ΔRABQueue[Item]; RBQueueReset[Item] ]

RABQueueAbandon[Item] == [ ΔRABQueue[Item]; item? : Item |
  ∃ q1, q2 : seq Item • items = q1 ⌢ ⟨item?⟩ ⌢ q2 ∧ items′ = q1 ⌢ q2 ]
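Again using the informal Python paraphrase, the extra operations reset and abandon visibly change the inherited items component, which is why, as recorded later in Table 1, they cannot refine skip:

# Illustration with the informal Python classes above: reset and abandon
# modify the inherited items state, so an observer expecting only
# Queue/BQueue behaviour can tell that something other than join/leave ran.
rq = RABQueue()
rq.join("a"); rq.join("b")
rq.abandon("a")        # items == ["b"]: inherited state changed without a leave
rq.reset()             # items == []:    inherited state changed without a leave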

4.2 Global Properties

Class extensions are obtained by instantiating the SCl Z generic (see [10]). The state extensions of Queue, BQueue, and RBQueue are:

SQueue[Item] == SCl[O QueueCl, Queue[Item]][stQueue/oSt]
SBQueue[Item] == SCl[O BQueueCl, BQueue[Item]][stBQueue/oSt]
SRBQueue[Item] == SCl[O RBQueueCl, RBQueue[Item]][stRBQueue/oSt]

The extension initialisations say that the classes have no living instances:

SQueueInit[Item] == [ SQueue[Item] | stQueue = ∅ ]
SBQueueInit[Item] == [ SBQueue[Item] | stBQueue = ∅ ]
SRBQueueInit[Item] == [ SRBQueue[Item] | stRBQueue = ∅ ]

Association HasQueues is represented as a function relating QueueManager objects to sets of Queue objects indexed by the set QId of queue identifiers:

AHasQueues == [ rHasQueues : O QueueManagerCl ⇸ (QId ⇸ O QueueCl) ]

The next invariant says that any RBQueue instance held by a QueueManager must contain at most 5 items:

RBQueuesInHasQueuesSizeLeq5[Item] == [ SystemGblSt[Item] |
  ∀ oqm : O QueueManagerCl; rq : O RBQueueCl | oqm ∈ dom rHasQueues •
    rq ∈ (ran (rHasQueues oqm)) ∩ sRBQueue ⇒ # (stRBQueue rq).items ≤ 5 ]
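A hypothetical illustration of the interference discussed next: the global size invariant rules out RBQueue states that are perfectly legal for the inner type (assuming, as in the Python paraphrase above, a bound maxQ greater than 5), so the states of RBQueue objects held by a QueueManager form a strict subset of the inner type's states and freeness is breached.

# Hypothetical illustration of the freeness breach: this RBQueue state is
# legal for the inner type (7 <= MAX_Q), yet the global invariant forbids it
# for any RBQueue held by a QueueManager.
rb = RBQueue()
for x in range(7):
    rb.join(x)                         # locally reachable: 7 items <= MAX_Q
allowed_globally = len(rb.items) <= 5
print(allowed_globally)                # False: excluded by the global invariant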

5 The Refinement Straight-Jacket and Some Loopholes

The BI proof rules derived in Sect. 3 are over-restrictive. Even simple inheritance hierarchies, such as the queues example of Fig. 8, fail to be pure BIs. Furthermore, the rules may be misleading, as inner BI does not entail overall BI when global constraints invalidate what is proved locally (Sect. 3.5). Table 1 summarises the BI analysis for the example of Fig. 8. The next sections discuss the four issues that emerged.

5.1 Applicability

In Fig. 8, class BQueue fails to refine Queue and PBQueue fails to refine BQueue; applicability fails for operation join on both accounts:


Table 1 Results of the BI analysis of the queues example of Fig. 8

                      Relevant outcome                                 Issue
    BQueue.join       Applicability proof fails                        Applicability
    PBQueue.join      Applicability proof fails                        Applicability
    RBQueue.reset     Does not refine skip (Ξ BQueue)                  Refinement of skip
    RABQueue.abandon  Does not refine skip (Ξ BQueue)                  Refinement of skip
    JQBQueue.join     Correctness proof fails                          Operation overriding
    QueueManager      Global invariant breaches freeness assumption    Global interference

• The precondition of Queue.join is true, whilst that of BQueue.join is # items < maxQ. The former does not imply the latter, and so applicability fails.
• The precondition of PBQueue.join includes # items = maxQ − 1 ⇒ item? ∈ privileged, which does not imply the precondition of BQueue.join.

In refinement, the concrete type may weaken the precondition; here, the subclass preconditions are stronger. These failures happen because the concrete operations strengthen the inherited precondition, violating substitutability: the behaviour becomes observably different when the concrete type is used in place of the abstract one. Consider the braking system of a car: the abstract type says "upon brake, slow down" (precondition true), and the concrete type says "upon brake, slow down when speed is less than 160 km per hour" (precondition speed