
COMBINATORICS, WORDS AND SYMBOLIC DYNAMICS

Internationally recognised researchers look at developing trends in combinatorics with applications in the study of words and in symbolic dynamics. They explain the important concepts, providing a clear exposition of some recent results, and emphasise the emerging connections between these different fields. Topics include combinatorics on words, pattern avoidance, graph theory, tilings and theory of computation, multidimensional subshifts, discrete dynamical systems, ergodic theory, numeration systems, dynamical arithmetics, automata theory and synchronised words, analytic combinatorics, continued fractions and probabilistic models. Each topic is presented in a way that links it to the main themes, but then they are also extended to repetitions in words, similarity relations, cellular automata, friezes and Dynkin diagrams. The book will appeal to graduate students, research mathematicians and computer scientists working in combinatorics, theory of computation, number theory, symbolic dynamics, tilings and stringology. It will also interest biologists using text algorithms.

Encyclopedia of Mathematics and Its Applications This series is devoted to significant topics or themes that have wide application in mathematics or mathematical science and for which a detailed development of the abstract theory is less important than a thorough and concrete exploration of the implications and applications. Books in the Encyclopedia of Mathematics and Its Applications cover their subjects comprehensively. Less important results may be summarised as exercises at the ends of chapters. For technicalities, readers can be referred to the bibliography, which is expected to be comprehensive. As a result, volumes are encyclopedic references or manageable guides to major subjects.

Encyclopedia of Mathematics and Its Applications. All the titles listed below (volumes 114–163 of the series) can be obtained from good booksellers or from Cambridge University Press. For a complete series listing visit www.cambridge.org/mathematics.

J. Beck Combinatorial Games L. Barreira and Y. Pesin Nonuniform Hyperbolicity D. Z. Arov and H. Dym J-Contractive Matrix Valued Functions and Related Topics R. Glowinski, J.-L. Lions and J. He Exact and Approximate Controllability for Distributed Parameter Systems A. A. Borovkov and K. A. Borovkov Asymptotic Analysis of Random Walks M. Deza and M. Dutour Sikiri´c Geometry of Chemical Graphs T. Nishiura Absolute Measurable Spaces M. Prest Purity, Spectra and Localisation S. Khrushchev Orthogonal Polynomials and Continued Fractions H. Nagamochi and T. Ibaraki Algorithmic Aspects of Graph Connectivity F. W. King Hilbert Transforms I F. W. King Hilbert Transforms II O. Calin and D.-C. Chang Sub-Riemannian Geometry M. Grabisch et al. Aggregation Functions L. W. Beineke and R. J. Wilson (eds.) with J. L. Gross and T. W. Tucker Topics in Topological Graph Theory J. Berstel, D. Perrin and C. Reutenauer Codes and Automata T. G. Faticoni Modules over Endomorphism Rings H. Morimoto Stochastic Control and Mathematical Modeling G. Schmidt Relational Mathematics P. Kornerup and D. W. Matula Finite Precision Number Systems and Arithmetic Y. Crama and P. L. Hammer (eds.) Boolean Models and Methods in Mathematics, Computer Science, and Engineering V. Berthé and M. Rigo (eds.) Combinatorics, Automata and Number Theory A. Kristály, V. D. R˘adulescu and C. Varga Variational Principles in Mathematical Physics, Geometry, and Economics J. Berstel and C. Reutenauer Noncommutative Rational Series with Applications B. Courcelle and J. Engelfriet Graph Structure and Monadic Second-Order Logic M. Fiedler Matrices and Graphs in Geometry N. Vakil Real Analysis through Modern Infinitesimals R. B. Paris Hadamard Expansions and Hyperasymptotic Evaluation Y. Crama and P. L. Hammer Boolean Functions A. Arapostathis, V. S. Borkar and M. K. Ghosh Ergodic Control of Diffusion Processes N. Caspard, B. Leclerc and B. Monjardet Finite Ordered Sets D. Z. Arov and H. Dym Bitangential Direct and Inverse Problems for Systems of Integral and Differential Equations G. Dassios Ellipsoidal Harmonics L. W. Beineke and R. J. Wilson (eds.) with O. R. Oellermann Topics in Structural Graph Theory L. Berlyand, A. G. Kolpakov and A. Novikov Introduction to the Network Approximation Method for Materials Modeling M. Baake and U. Grimm Aperiodic Order I: A Mathematical Invitation J. Borwein et al. Lattice Sums Then and Now R. Schneider Convex Bodies: The Brunn–Minkowski Theory (Second Edition) G. Da Prato and J. Zabczyk Stochastic Equations in Infinite Dimensions (Second Edition) D. Hofmann, G. J. Seal and W. Tholen (eds.) Monoidal Topology M. Cabrera García and Á. Rodríguez Palacios Non-Associative Normed Algebras I: The Vidav–Palmer and Gelfand–Naimark Theorems C. F. Dunkl and Y. Xu Orthogonal Polynomials of Several Variables (Second Edition) L. W. Beineke and R. J. Wilson (eds.) with B. Toft Topics in Chromatic Graph Theory T. Mora Solving Polynomial Equation Systems III: Algebraic Solving T. Mora Solving Polynomial Equation Systems IV: Buchberger’s Theory and Beyond V. Berthé and M. Rigo (eds.) Combinatorics, Words and Symbolic Dynamics B. Rubin Introduction to Radon Transforms: With Elements of Fractional Calculus and Harmonic Analysis M. Ghergu and S. D. Taliaferro Isolated Singularities in Partial Differential Inequalities G. Molica Bisci, V. Radulescu and R. Servadei Variational Methods for Nonlocal Fractional Problems S. Wagon The Banach–Tarski Paradox (Second Edition)

Encyclopedia of Mathematics and Its Applications

Combinatorics, Words and Symbolic Dynamics Edited by VALÉRIE BERTHÉ Université Paris Diderot - Paris 7, France

MICHEL RIGO Université de Liège, Belgium

University Printing House, Cambridge CB2 8BS, United Kingdom Cambridge University Press is part of the University of Cambridge. It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence. www.cambridge.org Information on this title: www.cambridge.org/9781107077027 © Cambridge University Press 2016 This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published 2016 A catalogue record for this publication is available from the British Library Library of Congress Cataloguing in Publication data Combinatorics, words and symbolic dynamics / [edited by] Valérie Berthé, Université Paris Diderot, Paris 7, Michel Rigo, Université de Liège, Belgium. pages cm. – (Encyclopedia of mathematics and its applications ; 159) Includes bibliographical references and index. ISBN 978-1-107-07702-7 (Hardback) 1. Combinatorial analysis. 2. Symbolic dynamics. 3. Computer science. I. Berthé, V. (Valérie), 1957– II. Rigo, Michel. QA164.C666 2015 511 .6–dc23 2015024873 ISBN 978-1-107-07702-7 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Contents

List of contributors
Preface
Acknowledgments

1 Preliminaries V. Berthé, M. Rigo
1.1 Conventions
1.2 Words
1.3 Morphisms
1.4 Languages and machines
1.5 Symbolic dynamics

2 Expansions in non-integer bases M. de Vries, V. Komornik
2.1 Introduction
2.2 Greedy and lazy expansions
2.3 On the cardinality of the sets Eβ(x)
2.4 The random map Kβ and infinite Bernoulli convolutions
2.5 Lexicographic characterisations
2.6 Univoque bases
2.7 Univoque sets
2.8 A two-dimensional univoque set
2.9 Final remarks
2.10 Exercises

3 Medieties, end-first algorithms, and the case of Rosen continued fractions B. Rittaud
3.1 Introduction
3.2 Generalities
3.3 Examples
3.4 End-first algorithms
3.5 Medieties with k letters
3.6 An end-first algorithm for k-medieties
3.7 Exercises
3.8 Open problems

4 Repetitions in words N. Rampersad, J. Shallit
4.1 Introduction
4.2 Avoidability
4.3 Dejean's theorem
4.4 Avoiding repetitions in arithmetic progressions
4.5 Patterns
4.6 Abelian repetitions
4.7 Enumeration
4.8 Decidability for automatic sequences
4.9 Exercises
4.10 Notes

5 Text redundancies G. Badkobeh, M. Crochemore, C. S. Iliopoulos, M. Kubica
5.1 Redundancy: a versatile notion
5.2 Avoiding repetitions and repeats
5.3 Finding repetitions and runs
5.4 Finding repeats
5.5 Finding covers and seeds
5.6 Palindromes

6 Similarity relations on words V. Halava, T. Harju, T. Kärki
6.1 Introduction
6.2 Preliminaries
6.3 Coding
6.4 Relational periods
6.5 Repetitions in relational words
6.6 Exercises and problems

7 Synchronised automata M.-P. Béal, D. Perrin
7.1 Introduction
7.2 Definitions
7.3 Černý's conjecture
7.4 Road colouring

8 Cellular automata, tilings and (un)computability J. Kari
8.1 Cellular automata
8.2 Tilings and undecidability
8.3 Undecidability concerning cellular automata
8.4 Conclusion
8.5 Exercises

9 Multidimensional shifts of finite type and sofic shifts M. Hochman
9.1 Introduction
9.2 Shifts of finite type and sofic shifts
9.3 Basic constructions and undecidability
9.4 Degrees of computability
9.5 Slices and subdynamics of sofic shifts
9.6 Frequencies, word growth and periodic points

10 Linearly recursive sequences and Dynkin diagrams C. Reutenauer
10.1 Introduction
10.2 SL2-tilings of the plane
10.3 SL2-tiling associated with a bi-infinite discrete path
10.4 Proof of Theorem 10.3.1
10.5 N-rational sequences
10.6 N-rationality of the rays in SL2-tilings
10.7 Friezes
10.8 Dynkin diagrams
10.9 Rational frieze implies Dynkin diagram
10.10 Rationality for Dynkin diagrams of type A and Ã
10.11 Further properties of SL2-tilings
10.12 The other extended Dynkin diagrams
10.13 Problems and conjectures
10.14 Exercises

11 Pseudo-randomness of a random Kronecker sequence. An instance of dynamical analysis E. Cesaratto, B. Vallée
11.1 Introduction
11.2 Five parameters for Kronecker sequences
11.3 Probabilistic models
11.4 Statements of the main results
11.5 Dynamical analysis
11.6 Balanced costs
11.7 Unbalanced costs
11.8 Summary of functional analysis
11.9 Conclusion and open problems

Bibliography
Notation index
General index

Contributors

Valérie Berthé, CNRS, LIAFA, Bât. Sophie Germain, Université Paris Diderot, Paris 7 - Case 7014, F-75205 Paris Cedex 13, France. [email protected]
Michel Rigo, University of Liège, Department of Mathematics, B37 Quartier Polytech 1, Allée de la Découverte 12, B-4000 Liège, Belgium. [email protected]
Martijn de Vries, Tussen de Grachten 213, 1381 DZ Weesp, The Netherlands. [email protected]
Vilmos Komornik, Département de mathématique, Université de Strasbourg, 7 rue René Descartes, 67084 Strasbourg Cedex, France. [email protected]
Benoît Rittaud, Université Paris-13, Sorbonne Paris Cité, LAGA, CNRS, UMR 7539, F-93430 Villetaneuse, France. [email protected]
Narad Rampersad, Department of Mathematics and Statistics, University of Winnipeg, 515 Portage Ave., Winnipeg MB, R3B 2E9, Canada. [email protected]
Jeffrey Shallit, School of Computer Science, University of Waterloo, Waterloo ON, N2L 3G1, Canada. [email protected]
Golnaz Badkobeh, King's College London, London WC2R 2LS, U.K. [email protected]
Maxime Crochemore, King's College London, London WC2R 2LS, U.K., and Université Paris-Est, France. [email protected]
Costas S. Iliopoulos, King's College London, London WC2R 2LS, U.K. [email protected]
Marcin Kubica, Institute of Informatics, University of Warsaw, ul. Banacha 2, 02-097 Warszawa, Poland. [email protected]
Vesa Halava, Department of Mathematics and Statistics, University of Turku, FI-20014 Turku, Finland. [email protected]
Tero Harju, Department of Mathematics and Statistics, University of Turku, FI-20014 Turku, Finland. [email protected]
Tomi Kärki, Department of Teacher Education, University of Turku, P.O. Box 175, FI-26101 Rauma, Finland. [email protected]
Marie-Pierre Béal, Université Paris-Est Marne-la-Vallée, Laboratoire d'informatique Gaspard-Monge, UMR 8049 CNRS, 5 Bd Descartes, Champs-sur-Marne, F-77454 Marne-la-Vallée Cedex 2, France. [email protected]
Dominique Perrin, Université Paris-Est Marne-la-Vallée, Laboratoire d'informatique Gaspard-Monge, UMR 8049 CNRS, 5 Bd Descartes, Champs-sur-Marne, F-77454 Marne-la-Vallée Cedex 2, France. [email protected]
Jarkko Kari, Department of Mathematics and Statistics, University of Turku, FI-20014 Turku, Finland. [email protected]
Michael Hochman, Einstein Institute of Mathematics, Edmond J. Safra Campus, Givat Ram, The Hebrew University of Jerusalem, Jerusalem 91904, Israel. [email protected]
Christophe Reutenauer, Département de mathématiques, Université du Québec à Montréal, C.P. 8888, Succursale Centre-Ville, Montréal, Québec H3C 3P8, Canada. [email protected]
Eda Cesaratto, Conicet and Univ. Nac. of Gral. Sarmiento, Instituto de Desarrollo Humano, Universidad Nacional de General Sarmiento, J. M. Gutierrez 1150 (B1613GSX), Los Polvorines, Prov. de Buenos Aires, Argentina. [email protected]
Brigitte Vallée, CNRS and Univ. de Caen, Informatique, GREYC, Université de Caen, Bd Maréchal Juin, F-14032 Caen Cedex, France. [email protected]

Preface

Inspired by the celebrated Lothaire series (Lothaire, 1983, 2002, 2005) and animated by the same spirit as the book (Berthé and Rigo, 2010), this collaborative volume aims at presenting and developing recent trends in combinatorics with applications in the study of words and in symbolic dynamics. On the one hand, some of the newest results in these areas have been selected for this volume and benefit here from a synthetic exposition. On the other hand, emphasis is put on the connections existing between the main topics of the book. These connections arise, for instance, from numeration systems that can be associated with algorithms or dynamical systems and their corresponding expansions, from cellular automata and the computation or the realisation of a given entropy, or even from the study of friezes or from the analysis of algorithms. This book is primarily intended for graduate students or research mathematicians and computer scientists interested in combinatorics on words, pattern avoidance, graph theory, quivers and frieze patterns, automata theory and synchronised words, tilings and theory of computation, multidimensional subshifts, discrete dynamical systems, ergodic theory and transfer operators, numeration systems, dynamical arithmetics, analytic combinatorics, continued fractions, and probabilistic models. We hope that some of the chapters can serve as useful material for lecturing at master/graduate level. Some chapters of the book can also be of interest to biologists and researchers interested in text algorithms or bio-informatics. Let us succinctly sketch the general landscape of the volume. Short abstracts of each chapter can be found below. The book can roughly be divided into four general blocks. The first one, made of Chapters 2 and 3, is devoted to numeration systems. The second block, made of Chapters 4 to 6, pertains to combinatorics of words. The third block is concerned with symbolic dynamics: in the one-dimensional setting with Chapter 7, and in the multidimensional one with Chapters 8 and 9. The last block, made of Chapters 10 and 11, has again a combinatorial nature. Words, i.e., finite or infinite sequences of symbols taking values in a finite set, are ubiquitous in the sciences.

This is because of their strong representation power: they arise as a natural way to code elements of an infinite set using finitely many symbols. So let us start our general description with combinatorics on words. Powers, repetitions and periods have been at the core of this field since its birth with the work of Thue (1906a, 1912). Thue's work has been fruitfully extended in several important research directions. Let us mention the notion of abelian repetition (introduced by Paul Erdős), and the notion of fractional repetition (introduced by Françoise Dejean) leading to a famous conjecture on the repetition threshold that was finally proved in 2009. We have chosen to focus on these fundamental notions in Chapters 4, 5 and 6, devoted to words. Both application-driven and theoretical viewpoints are presented. Note that the systematic study of repetitions covers a wide field of applications ranging from number theory to bio-informatics. As striking examples, let us quote the work of Novikov and Adian (1968) on the Burnside problem for groups and the work of Adamczewski and Bugeaud (2007) on the transcendence of real numbers. Chapter 4 focuses on avoidable regularities in words, i.e., on avoiding certain types of repetitions. This chapter also covers the use of non-effective probabilistic methods like the Lovász local lemma and introduces some decision problems about automatic sequences. Interestingly, Büchi's theorem from 1960 and first-order logic are important tools leading to decision procedures for instances of combinatorial problems that can be expressed in an extension of Presburger arithmetic. As an example, one can get an automated certification that the Thue–Morse word is overlap-free. Chapter 5 deals with redundancies in textual data and is also built around the analysis of periodicity in words, but aimed towards applications, in particular considering text algorithms, text compression, algorithms for bio-informatics, and the analysis of biological sequences. It presents several methods used to detect periodicity, like the one used for compression in the Lempel–Ziv factorisation. Similarity relations on words are considered in Chapter 6 from the two perspectives of periodicity and repetition freeness. A similarity relation on words is induced by a compatibility relation on letters assumed to be a reflexive and symmetric relation. Two words are similar if they are of the same length and their corresponding letters are pairwise compatible. Similarity relations generalise the notion of partial words, i.e., words where a do-not-know symbol ⋄ may be used. As an example, the partial word a⋄ba⋄ (obtained from abbab by replacing two letters with ⋄) is compatible with the word abbab. These relations can be seen as a model for inaccurate information on words. Their study is motivated here again both by theoretical issues and by applications arising from computer science (e.g., string matching) and molecular biology. Combinatorial problems do not only occur in this book in the framework of words, but also in the more general setting of algebraic and analytic combinatorics, in particular with Chapters 10 and 11, which are combinatorial in nature (of course, general combinatorial tools such as formal power series occur in many chapters). Chapter 10 belongs to algebraic combinatorics through the study of a class of sequences of natural numbers associated with certain quivers (directed graphs). Quivers are commonly used in representation theory. It is possible to associate a numerical frieze, an integer-valued sequence, with a quiver. These friezes are an extension to the whole plane of the frieze patterns introduced by Coxeter (1971). The underlying recursions, although highly non-linear, sometimes produce rational, or even N-rational, sequences. The rationality of the friezes is the central question answered in this chapter: if friezes are rational, they must be N-rational. This chapter also introduces the notion of SL2-tilings and their applications. An SL2-tiling of the plane is a filling of Z² by numbers in such a way that each adjacent two by two minor is equal to 1. Chapter 11 belongs to the framework of dynamical analysis of algorithms. It focuses on the study of random Kronecker sequences through their discrepancy and their Arnold constant. It thus provides an illustration of the use of probabilistic methods in combinatorics by applying to Kronecker sequences the dynamical analysis methodology, which is a mixture of analysis of algorithms and dynamical systems relying on spectral properties of transfer operators. Recall that the use of combinatorics in the analysis of algorithms, initiated by D. E. Knuth, greatly relies on number theory, asymptotic methods and computer use. Let us also mention recent successful applications of the dynamical approach in the analysis of algorithms in connection with number theory through the analysis of the Gauss map, as illustrated, e.g., in the work of Baladi and Vallée (2005). Combinatorics on words and symbolic dynamics are intimately related. Indeed, the coding of orbits and trajectories by words over a (finite) alphabet constitutes the basis of symbolic dynamical systems. Recall that a discrete dynamical system is a continuous map defined on a compact metric space X onto itself. It is therefore natural to code trajectories of points in the state space using a (finite) partition of X. One thus gets infinite words as codings, and the corresponding dynamical systems are said to be symbolic. The study of symbolic dynamical systems in a multidimensional setting has recently given rise to striking results intertwining computational complexity, entropy, ergodic theory and topological dynamics, as, e.g., in the work of Hochman and Meyerovitch (2010). Recursion theory appears to be a key tool in the study of finite type or sofic multidimensional shifts: many properties of such dynamical systems can be described in terms of recursion theory. Classical symbolic dynamics is linked with graph theory and automata theory. This is the heart of Chapter 7, with the study of the celebrated 'Road Colouring Problem'. This latter problem is a classical question about synchronisation in an automaton (or a graph). A synchronising word maps every state of an automaton to the same state. An automaton is synchronised if it has a synchronising word. The Road Colouring Theorem states that every complete deterministic automaton with an aperiodic directed underlying graph has the same graph as a synchronised automaton. This result was first conjectured by Adler et al. (1977) and was recently proved by Trahtman (2009).

The aims of Chapter 7 are to present a proof of and an efficient algorithm for this problem, and also to consider the links with another famous conjecture in automata theory, namely the Černý Conjecture. This conjecture asserts that a synchronised deterministic automaton with n states has a synchronising word of length at most (n − 1)². Chapters 8 and 9 are articulated around multidimensional symbolic dynamics, stressing the striking fundamental differences with the one-dimensional case. One of the main features of the multidimensional case is that computations can be implemented in subshifts of finite type. Indeed, it is possible to construct, for a given Turing machine T, a shift of finite type in which every configuration represents arbitrarily long computations. This yields undecidability for problems that were clearly decidable in the one-dimensional case. Striking connections with complexity, computability and decidability issues are presented. Chapter 8 focuses on cellular automata whereas Chapter 9 focuses on shifts of finite type. Both chapters are complementary and intertwined. Chapter 8 is organised around the notion of tiling linked with cellular automata. Again, fundamental differences exist between one-dimensional cellular automata and multidimensional ones. These differences may be explained by the theory of tilings, like the existence of aperiodic tilings or the fact that the so-called domino problem is undecidable. Chapter 9 is more precisely aimed at connections between combinatorial dynamical systems and effective systems and, in particular, at aspects of multidimensional symbolic dynamics and cellular automata, including realisation theorems. Thus will be presented the characterisation of the entropy of multidimensional shifts of finite type in terms of computable real numbers proved by Hochman and Meyerovitch (2010). Let us conclude our brief presentation of the book with numeration systems. In a generic way, a numeration system allows the expansion of numbers as words over an alphabet of digits. A numeration system is usually either defined by an algorithm providing expansions, or by an iterative process associated with a dynamical system. So again, words are demonstrating their representation power. Amongst the various questions related to the expansions of numbers, we have chosen to develop two focused viewpoints on numeration systems with non-integer bases. Chapter 2 deals with the possible expansions of a number that can occur when the base and the alphabet are fixed. It concentrates on the cases where such an expansion is unique. The viewpoint on numeration provided by Chapter 3 is of an arithmetic nature and relies on the notion of mediety. A mediety is a binary operation that allows us to split an interval into two smaller ones and to repeat the process. Assuming that any infinite sequence of such successive intervals decreases to a single number, we get a coding of any element of an initial interval by an infinite word over a binary alphabet. Chapter 3 revisits, under the viewpoint of medieties, various classical codings and representations such as the numeration in base 2, or continued fractions, classic ones as well as Rosen ones with k-medieties.

Parts of the material presented in this book were presented during the CANT school that was organised at the Centre International de Rencontres Mathématiques (CIRM) from 21st to 25th May 2012 in Marseille. We thank the CIRM for supporting this event, which gathered more than one hundred participants from eighteen countries. We now give a short abstract of every chapter of the book. Chapter 1 is a general introduction where the main notions that will occur in this book are presented. The reader may skip this chapter on a first reading and use it as a reference if needed. Let us now move to the main contributions of this book, listed in order of appearance.

Chapter 2 by M. de Vries and V. Komornik Expansions in non-integer bases The familiar integer base expansions were extended to non-integer bases in a seminal paper of Rényi in 1957. Since then, many surprising phenomena have been discovered, and a great number of papers have been devoted to unexpected connections with probability and ergodic theory, combinatorics, symbolic dynamics, measure theory, topology and number theory. For example, although a number cannot have more than two expansions in integer bases, in non-integer bases a number generically has a continuum of expansions. Despite this generic situation, Erdős et al. discovered in 1990 some unexpected uniqueness phenomena which gave a new impetus to this research field. The purpose of Chapter 2 is to give an overview of parts of this rich theory. The authors present a number of elementary but powerful proofs and give many examples. Some proofs presented here are new.

Chapter 3 by B. Rittaud Medieties, end-first algorithms, and the case of Rosen continued fractions A mediety is any rule that splits a given initial interval I into two subintervals, then each of these intervals again into two subintervals, and so on, in such a way that any decreasing sequence of such subintervals reduces to a single element. Examples of medieties are the arithmetic mean, which gives rise to the base-2 numeration system, and the mediant, from which the theory of continued fractions can be recovered. Engel continued fractions provide another example. Chapter 3 investigates some general properties of medieties, such as the question of which numbers are approximated most slowly by elements of the set F of bounds of intervals defined by the mediety. For example, it is well known that the Golden Ratio is the number most slowly approximated by rational numbers, and this property corresponds to the case of the mediant, for which F = Q+. This chapter also introduces some end-first algorithms, that is, algorithms that provide the coding of any element of F by the mediety starting from its end instead of its beginning. In the case of the mediant, these algorithms are related to random Fibonacci sequences.

Last, Chapter 3 presents k-medieties, that is, medieties that split intervals into k subintervals. In particular, it shows that λk-Rosen continued fractions, i.e., continued fractions in which the partial quotients are integral multiples of λk := 2 cos(π/k) (for an integer k ≥ 3), derive from a mediety that generalises the mediant in the same way as base k generalises the binary numeration system.

Chapter 4 by N. Rampersad and J. Shallit Repetitions in words Avoidable repetitions in words are discussed. Chapter 4 begins with a brief overview of the avoidability of the classical patterns, such as squares, cubes and overlaps. The authors describe the most common technique used to construct infinite words avoiding these kinds of patterns, namely, the use of iterated morphisms. They also describe a probabilistic approach to avoidability based on the Lovász local lemma. Next, they consider generalisations of the classical patterns, such as fractional powers (which lead naturally to a discussion of Dejean's theorem), repetitions in arithmetic progressions and abelian repetitions. Some methods for counting, or at least estimating, the number of words of a given length avoiding a pattern or set of patterns are also presented in Chapter 4. Finally, the authors briefly explain an algorithmic method for obtaining computer-assisted proofs of certain types of results on automatic sequences.

Chapter 5 by G. Badkobeh, M. Crochemore, C. S. Iliopoulos and M. Kubica Text redundancies In relation to the previous chapter, Chapter 5 deals with several types of redundancies occurring in textual data. Detecting them in texts is essential in applications like pattern matching and text compression, and further to extract patterns for data mining. The redundancies considered include repetitions, word powers, maximal periodicities, repeats, palindromes, and their extension to the notions of covers and seeds. Main results, such as lower and upper bounds as well as detection algorithms for some patterns, are reported.

Chapter 6 by V. Halava, T. Harju and T. Kärki Similarity relations on words The authors of Chapter 6 consider similarity relations on words, which were originally introduced in order to generalise partial words, i.e., words with a do-not-know symbol. A similarity relation on words is induced by a compatibility relation on letters assumed to be a reflexive and symmetric relation. In connection with the previous two chapters, similarity of words is studied from two perspectives that are central in combinatorics on words: periodicity and repetition freeness.

In particular, variations of Fine and Wilf's theorem are stated to witness interaction properties between the extended notions of periodicity induced by a similarity relation. Also, squarefreeness is defined for relational words and tight bounds for the repetition thresholds are given.

Chapter 7 by M.-P. Béal and D. Perrin Synchronised automata A survey of results concerning synchronised automata is presented in Chapter 7. The authors first discuss the state of the art concerning the Černý Conjecture on the minimal length of synchronising words. They next describe the case of circular automata and, more generally, one-cluster automata. A proof of the Road Colouring Theorem is also presented in Chapter 7.

Chapter 8 by J. Kari Cellular automata, tilings and (un)computability Chapter 8 reviews some basic concepts and results on the theory of cellular automata. Algorithmic questions concerning cellular automata and tilings are also discussed. Covered topics include injectivity and surjectivity properties, the Garden of Eden and the Curtis–Hedlund–Lyndon theorems, as well as the balance property of surjective cellular automata. The domino problem is a classical undecidable decision problem whose variants are described in the chapter. Reductions from tiling problems to questions concerning cellular automata are also covered.

Chapter 9 by M. Hochman Multidimensional shifts of finite type and sofic shifts In Chapter 9, multidimensional shifts of finite type (SFTs) are examined from the language-theoretic and recursion-theoretic points of view, by specifically discussing the recent results of Hochman–Meyerovitch on the characterisation of entropies, Simpson's realisation theorem for degrees of computability, Hochman's characterisation of 'slices' of SFTs (i.e., restrictions of their language to lower-dimensional lattices), and the Jeandel–Vanier characterisation of sets of periods of SFTs; a variety of other related developments are also mentioned. A self-contained presentation of the basic definitions and results needed from symbolic dynamics and recursion theory is also included, but this chapter also relies on Robinson's aperiodic tile set, which is presented in Chapter 8. Modulo this, complete proofs for many of the main results above are given.


Chapter 10 by C. Reutenauer Linearly recursive sequences and Dynkin diagrams Following an idea of Caldero, in the realm of the cluster algebras of Fomin and Zelevinsky, each acyclic quiver (i.e., directed graph) defines a sequence of integers through a highly non-linear recursion. The Laurent phenomenon of Fomin and Zelevinsky implies that the numbers in the sequence are indeed integers. It turns out that for certain quivers the sequence also satisfies, besides its non-linear defining recursion, a linear recursion. The corresponding quivers are completely classified here: they are obtained by providing an acyclic orientation to a Dynkin graph or an extended Dynkin graph. An important tool in the proofs is the concept of SL2-tilings; this is a filling of the discrete plane by numbers in such a way that each connected two by two submatrix has determinant 1.

Chapter 11 by E. Cesaratto and B. Vallée Pseudo-randomness of a random Kronecker sequence. An instance of dynamical analysis In the last chapter of this volume, the focus is put on probabilistic features of the celebrated Kronecker sequence K(α) formed of the fractional parts of the multiples of a real α, when α is randomly chosen in the unit interval. The authors are interested in measures of pseudo-randomness of the sequence, via five parameters (two distances, covered space, discrepancy and Arnold constant), and they then perform a probabilistic study of pseudo-randomness in four different probabilistic settings. Indeed, the authors first deal with two 'unconstrained' probabilistic settings, where α is a random real, or α is a random rational. It is well known that the behaviour of the sequence K(α) heavily depends on the size of the digits in the continued fraction expansion of α. This is why the authors also consider two 'constrained' probabilistic settings, where α is randomly chosen among the reals (or rationals) whose digits in the continued fraction expansion are bounded (by some M). The corresponding probabilistic studies are performed, exhibiting a great similarity between the rational and real settings, and the transition from the constrained model to the unconstrained model is studied when M → ∞.

Acknowledgments The editors would like to express their gratitude to Michelangelo Bucci, Émilie Charlier, Pierre Lecomte, Julien Leroy, Milton Minervino, Eric Rowland, Manon Sutera and Élise Vandomme, who were kind enough to read drafts of this book and who suggested many improvements. They also would like to thank their editor Clare Dennison, whose constant support has been a precious help throughout this project. The editors also thank Olivier Bodini and Thomas Fernique for the illustration on the cover of the book. This is a cropped view of a tiling by squares with two adjacent edges connected by a red wire (a kind of 'half Truchet tiles'). The presence/absence of wires is governed on the top row by the Thue–Morse sequence (from left to right) and on the left column by the Fibonacci sequence (from top to bottom). These two sequences completely determine the tiling on the whole bottom-right quarter of the plane, which can thus be seen as a deterministic crisscross pattern of the Thue–Morse and Fibonacci sequences.

1 Preliminaries Valérie Berthé and Michel Rigo

1.1 Conventions Let us briefly start with some basic notation used throughout this book. The set of non-negative integers (respectively integers, rational numbers, real numbers, complex numbers) is N (respectively Z, Q, R, C). In particular, the set N is {0, 1, 2, . . .}. We use the notation [[i, j]] for the set of integers {i, i + 1, . . . , j}. The floor of a real number x is ⌊x⌋ = sup{z ∈ Z | z ≤ x}, whereas {x} = x − ⌊x⌋ stands for the fractional part of x.

1.2 Words This section is only intended to give basic definitions about words. For material not covered in this book, classical textbooks on finite or infinite words and their properties are (Lothaire, 1983, 2002, 2005), (Allouche and Shallit, 2003), and (Queffélec, 1987). See also the chapter (Choffrut and Karhumäki, 1997) or the tutorial (Berstel and Karhumäki, 2003). The book (Rigo, 2014) can also serve as introductory lecture notes on the subject.

1.2.1 Finite words An alphabet is a finite, non-empty set. Its elements are referred to as symbols or letters. In this book, depending on the specific context or conventions of a given chapter, alphabets will be denoted by capital letters like Σ or A. Definition 1.2.1 A (finite) word over Σ is a finite sequence of letters from Σ. The empty sequence is called the empty word and it is denoted by ε. The sets of all finite words, finite non-empty words and infinite words over Σ are denoted by Σ∗, Σ+ and Σω, respectively. A word w = w0 w1 · · · wn where wi ∈ Σ, 0 ≤ i ≤ n, can be seen as a function w : {0, 1, . . . , n} → Σ in which w(i) = wi for all i.


Definition 1.2.2 Let S be a set equipped with a single binary operation · : S × S → S. It is convenient to call this operation a multiplication over S, and the product of x, y ∈ S is usually denoted by xy. If this multiplication is associative, i.e., for all x, y, z ∈ S, (xy)z = x(yz), then the algebraic structure given by the pair (S, ·) is a semigroup. If, moreover, the multiplication has an identity element, i.e., there exists some element 1 ∈ S such that, for all x ∈ S, x1 = x = 1x, then (S, ·) is a monoid. If, in addition, every element x ∈ S has an inverse, i.e., there exists y ∈ S such that xy = 1 = yx, then (S, ·) is a group. Let u = u0 · · · um−1 and v = v0 · · · vn−1 be two words over Σ. The concatenation of u and v is the word w = w0 · · · wm+n−1 defined by wi = ui if 0 ≤ i < m, and wi = vi−m otherwise. We write u · v or simply uv to express the concatenation of u and v. The concatenation (or catenation) of words is an associative operation, i.e., given three words u, v and w, (uv)w = u(vw). Hence, parentheses can be omitted. In particular, the set Σ∗ (respectively, Σ+) equipped with the concatenation product is a monoid (respectively, a semigroup). Concatenating a word w with itself k times is abbreviated by w^k. In particular, w^0 = ε. Furthermore, for an integer m and a word w = w1 w2 · · · wn, where wi ∈ Σ for 1 ≤ i ≤ n, the rational power w^{m/n} is w^q w1 w2 · · · wr, where m = qn + r for 0 ≤ r < n. For instance, we have

(abbab)^{9/5} = abbababba.    (1.1)
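The rational power is easy to compute mechanically. The following short Python sketch (ours, not part of the book; the function name rational_power is only an illustration) implements w^{m/n} as just defined and reproduces (1.1).

def rational_power(w, m, n):
    # w^{m/n} = w^q followed by the prefix w_1 ... w_r of w, where m = q*n + r, 0 <= r < n;
    # the definition above takes n = |w|.
    assert n == len(w)
    q, r = divmod(m, n)
    return w * q + w[:r]

print(rational_power("abbab", 9, 5))  # prints 'abbababba', as in (1.1)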

The length of a word w, denoted by |w|, is the number of occurrences of the letters in w. In other words, if w = w0 w1 · · · wn−1 with wi ∈ Σ, 0 ≤ i < n, then |w| = n. In particular, the length of the empty word is zero. For a ∈ Σ and w ∈ Σ∗, we write |w|a for the number of occurrences of a in w. Therefore, we have

|w| = ∑_{a∈Σ} |w|a.

A word u is a factor of a word v (respectively, a prefix, or a suffix), if there exist words x and y such that v = xuy (respectively, v = uy, or v = xu). A factor (respectively, prefix, suffix) u of a word v is called proper if u ≠ v and u ≠ ε. Thus, for example, if w = concatenation, then con is a prefix, ate is a factor, and nation is a suffix. The mirror (sometimes called reversal) of a word u = u0 · · · um−1 is the word ũ = um−1 · · · u0. It can be defined inductively on the length of the word by ε̃ = ε and by letting the mirror of au be ũa for a ∈ Σ and u ∈ Σ∗. Notice that for u, v ∈ Σ∗, the mirror of uv is ṽũ. A palindrome is a word u such that ũ = u. For instance, the palindromes of length at most 3 in {0, 1}∗ are ε, 0, 1, 00, 11, 000, 010, 101, 111.


1.2.2 Infinite words Definition 1.2.3 A (one-sided right) infinite word is a map from N to Σ. If w is an infinite word, we often write w = a0 a1 a2 · · · , where each ai ∈ Σ. The set of all infinite words over Σ is denoted Σω (one can also find the notation ΣN). The notions of factor, prefix or suffix introduced for finite words can be extended to infinite words. Factors and prefixes are finite words, but a suffix of an infinite word is also infinite. Definition 1.2.4 A two-sided or bi-infinite word is a map from Z to Σ. The set of all bi-infinite words is denoted ωΣω (one can also find the notation ΣZ). Example 1.2.5 Consider the infinite word x = x0 x1 x2 · · · where the letters xi ∈ {0, . . . , 9} are given by the digits appearing in the usual decimal expansion of π − 3,

π − 3 = ∑_{i=0}^{+∞} xi 10^{−i−1},

i.e., x = 14159265358979323846264338327950288419 · · · is an infinite word.

Definition 1.2.6 An infinite word x = x0 x1 · · · is (purely) periodic if there exists a finite word u = u0 · · · uk−1 ≠ ε such that x = uω, i.e., for all n ≥ 0, we have xn = ur where n = dk + r with r ∈ {0, . . . , k − 1}. An infinite word x is eventually periodic (or, ultimately periodic) if there exist two finite words u, v ∈ Σ∗, with v ≠ ε, such that x = uvvv · · · = uvω. Notice that purely periodic words are special cases of eventually periodic words. For any eventually periodic word x, there exist words u, v of shortest length such that x = uvω; the integer |u| (respectively |v|) is then referred to as the preperiod (respectively period) of x. An infinite word is said to be non-periodic if it is not ultimately periodic.

Definition 1.2.7 The language of the infinite word x is the set of all its factors. It is denoted by L(x). The set of factors of length n occurring in x is denoted by Ln(x).

Definition 1.2.8 An infinite word x is recurrent if all its factors occur infinitely often in x. It is uniformly recurrent (also called minimal), if it is recurrent and, for every factor u of x, if Tx(u) = {i_1^{(u)} < i_2^{(u)} < i_3^{(u)} < · · ·} is the infinite set of positions where u occurs in x, then there exists a constant Cu such that, for all j ≥ 1, i_{j+1}^{(u)} − i_j^{(u)} ≤ Cu.

Definition 1.2.9 One can endow Σω with a distance d defined as follows. Let x, y be two infinite words over Σ. Let x ∧ y denote the longest common prefix of x and y. Then the distance d is given by d(x, y) := 0 if x = y, and d(x, y) := 2^{−|x∧y|} otherwise.


It is easy to see that, for all x, y, z ∈ Σω, d(x, y) = d(y, x), d(x, z) ≤ d(x, y) + d(y, z) and d(x, y) ≤ max(d(x, z), d(y, z)). This last property is not required for d to be a distance, but when it holds, the distance is said to be ultrametric. Note that we obtain an equivalent distance if we replace 2 with any real number r > 1. This notion of distance extends to ΣZ. Notice that the topology on Σω is the product topology (of the discrete topology on Σ). The space Σω is a compact Cantor set, that is, a totally disconnected compact space without isolated points. Since Σω is a (complete) metric space, it is therefore relevant to speak of convergent sequences of infinite words. The sequence (zn)n≥0 of infinite words over Σ converges to x ∈ Σω if, for all ε > 0, there exists N ∈ N such that, for all n ≥ N, d(zn, x) < ε. To express the fact that a sequence of finite words (wn)n≥0 over Σ converges to an infinite word y, it is assumed that Σ is extended with an extra letter c ∉ Σ. Any finite word wn is replaced with the infinite word wn ccc · · · and if the sequence of infinite words (wn ccc · · ·)n≥0 converges to y, then the sequence (wn)n≥0 is said to converge to y. Let (un)n≥0 be a sequence of non-empty finite words. If we define, for all ℓ ≥ 0, the finite word vℓ as the concatenation u0 u1 · · · uℓ, then the sequence (vℓ)ℓ≥0 of finite words converges to an infinite word. This latter word is said to be the concatenation of the elements in the infinite sequence of finite words (un)n≥0. In particular, for a constant sequence un = u for all n ≥ 0, vℓ = u^{ℓ+1} and the concatenation of an infinite number of copies of the finite word u is denoted by uω.
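As a small illustration (ours, not from the book), the distance d of Definition 1.2.9 can be evaluated on sufficiently long finite prefixes of two infinite words; the helper names below are hypothetical.

def common_prefix_length(x, y):
    # |x ∧ y|: length of the longest common prefix of x and y
    n = 0
    while n < min(len(x), len(y)) and x[n] == y[n]:
        n += 1
    return n

def d(x, y):
    # d(x, y) = 0 if x = y, and 2^{-|x ∧ y|} otherwise
    if x == y:
        return 0.0
    return 2.0 ** (-common_prefix_length(x, y))

print(d("0110100110", "0110101001"))  # common prefix 011010 of length 6, so 2^{-6} = 0.015625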

1.3 Morphisms Particular infinite words of interest can be obtained by iterating morphisms (or homomorphisms of free monoids). Morphisms are also called substitutions. A map h : Σ∗ → Δ∗, where Σ and Δ are alphabets, is called a morphism if h satisfies h(xy) = h(x)h(y) for all x, y ∈ Σ∗. A morphism may be specified by providing the values h(a) for all a ∈ Σ. For example, we may define a morphism h : {0, 1, 2}∗ → {0, 1, 2}∗ by

0 → 01201, 1 → 020121, 2 → 0212021.    (1.2)

The domain of a morphism is easily extended to (one-sided) infinite words. A morphism h : Σ∗ → Σ∗ such that h(a) = ax for some a ∈ Σ and x ∈ Σ∗ with h^i(x) ≠ ε for all i is said to be prolongable on a; we may then repeatedly iterate h to obtain the infinite fixed point

h^ω(a) = a x h(x) h^2(x) h^3(x) · · · .

This infinite word is said to be purely morphic. The morphism h given by (1.2) above is prolongable on 0, so we have the fixed point h^ω(0) = 01201020121021202101201020121 · · · .


A morphism h is non-erasing if h(a) ≠ ε for all a ∈ Σ. Otherwise it is erasing. A morphism is k-uniform if |h(a)| = k for all a ∈ Σ; it is uniform if it is k-uniform for some k.

Example 1.3.1 (Thue–Morse word) For example, if the morphism μ : {0, 1}∗ → {0, 1}∗ is defined by

0 → 01, 1 → 10,

then μ is 2-uniform. This morphism is often referred to as the Thue–Morse morphism. The fixed point t = μ^ω(0) = 0110100110010110 · · · is known as the Thue–Morse word.

Example 1.3.2 (Fibonacci word) Another significant example of a purely morphic word is the Fibonacci word. It is obtained from the non-uniform morphism defined over the alphabet {0, 1} by σ : 0 → 01, 1 → 0,

σ^ω(0) = (xn)n≥0 = 0100101001001010010100100101001001010010100 · · · .

It is a Sturmian word and can be obtained as follows. Let φ = (1 + √5)/2 be the Golden Ratio. For all n ≥ 1, if ⌊(n + 1)φ⌋ − ⌊nφ⌋ = 2, then xn−1 = 0, otherwise xn−1 = 1.
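Both fixed points can be generated by iterating the corresponding morphism on its prolongable letter. Here is a minimal Python sketch (ours, not part of the book) doing so for the Thue–Morse and Fibonacci morphisms.

def iterate_morphism(h, start, steps):
    # apply the morphism h letter by letter, 'steps' times, starting from 'start'
    w = start
    for _ in range(steps):
        w = "".join(h[c] for c in w)
    return w

mu    = {"0": "01", "1": "10"}   # Thue-Morse morphism
sigma = {"0": "01", "1": "0"}    # Fibonacci morphism

print(iterate_morphism(mu, "0", 4))     # 0110100110010110, a prefix of the Thue-Morse word
print(iterate_morphism(sigma, "0", 5))  # 0100101001001, a prefix of the Fibonacci word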

1.4 Languages and machines Formal languages theory is mostly concerned with the study of the mathematical properties of sets of words. For an exhaustive exposition on regular languages and automata theory, see (Sakarovitch, 2003) and (Perrin and Pin, 2004) for the connections with infinite words. Also see the chapter (Yu, 1997), or (Sudkamp, 1997), (Hopcroft and Ullman, 1979) and the updated revision (Hopcroft et al., 2006) for general introductory books on formal languages theory.

1.4.1 Languages of finite words Let Σ be an alphabet. A subset L of Σ∗ is said to be a language. Note for instance that this terminology is consistent with the one of Definition 1.2.7. Since a language is a set of words, we can apply all the usual set operations like union, intersection or set difference: ∪, ∩ or \. The concatenation of words can be extended to define an operation on languages. If L, M are languages, LM is the language of the words obtained by concatenation of a word in L and a word in M, i.e., LM = {uv | u ∈ L, v ∈ M}.


We can of course define the concatenation of a language with itself, which permits us to introduce the powers of a language. Let n ∈ N, Σ be an alphabet and L ⊆ Σ∗ be a language. The language L^n is the set of words obtained by concatenating n words in L. We set L^0 := {ε}. In particular, we recall that Σ^n denotes the set of words of length n over Σ, i.e., concatenations of n letters in Σ. The (Kleene) star of the language L is defined as

L∗ = ⋃_{i≥0} L^i.

Otherwise stated, L∗ contains the words that are obtained as the concatenation of an arbitrary number of words in L. Notice that the definition of the Kleene star is compatible with the notation Σ∗ introduced to denote the set of finite words over Σ. We also write L^{≤n} as a shorthand for

L^{≤n} = ⋃_{i=0}^{n} L^i.

Note that if the empty word belongs to L, then L^{≤n} = L^n. We recall that Σ^{≤n} is the set of words over Σ of length at most n. More can be found in Section 6.3.1 where the notion of code is introduced. Example 1.4.1 Let L = {a, ab, aab} and M = {a, ab, ba} be two finite languages. We have L^2 = {aa, aab, aaab, aba, abab, abaab, aaba, aabab, aabaab} and M^2 = {aa, aab, aba, abab, abba, baa, baab, baba}. One can notice that Card(L^2) = (Card L)^2 but Card(M^2) < (Card M)^2. This is due to the fact that all words in L^2 have a unique factorisation as a concatenation of two elements of L, but this is not the case for M, where (ab)a = a(ba). We can notice that L∗ = {a}∗ ∪ {a^{i_1} b a^{i_2} b · · · a^{i_n} b a^{i_{n+1}} | n ≥ 1, i_1, . . . , i_n ≥ 1, i_{n+1} ≥ 0}. Since languages are sets of (finite) words, a language can be either finite or infinite. For instance, a language L differs from ∅ or {ε} if, and only if, the language L∗ is infinite. Let L be a language; we set L+ = LL∗. The mirror operation can also be extended from words to languages: L̃ = {ũ | u ∈ L}. Definition 1.4.2 A language is prefix-closed (respectively suffix-closed) if it contains all prefixes (respectively suffixes) of any of its elements. A language is factorial if it contains all factors of any of its elements. Obviously, any factorial language is prefix-closed and suffix-closed. The converse does not hold. For instance, the language {a^n b | n > 0} is suffix-closed but not factorial.
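The cardinality observation of Example 1.4.1 is easy to check by brute force; the sketch below (ours, not from the book) computes L^2 and M^2 directly.

from itertools import product

def concat(L, M):
    # the language LM = { uv | u in L, v in M }
    return {u + v for u, v in product(L, M)}

L = {"a", "ab", "aab"}
M = {"a", "ab", "ba"}

print(len(concat(L, L)), len(L) ** 2)  # 9 9: Card(L^2) = (Card L)^2
print(len(concat(M, M)), len(M) ** 2)  # 8 9: Card(M^2) < (Card M)^2, since (ab)a = a(ba)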


Example 1.4.3 The set of words over {0, 1} containing an even number of 1s is the language E = {w ∈ {0, 1}∗ | |w|1 ≡ 0 (mod 2)} = {ε, 0, 00, 11, 000, 011, 101, 110, 0000, 0011, . . .}. This language is closed under mirror, i.e., Ẽ = E. Notice that the concatenation E{1}E is the language of words containing an odd number of 1s and E ∪ E{1}E = E({ε} ∪ {1}E) = {0, 1}∗. Notice that E is neither prefix-closed, since 1001 ∈ E but 100 ∉ E, nor suffix-closed. See also Example 8.1.3 and Example 9.2.9. If a language L over Σ can be obtained by applying to some finite languages a finite number of operations of union, concatenation and Kleene star, then this language is said to be a regular language. This generation process leads to regular expressions, which are well-formed expressions used to describe how a regular language is built in terms of these operations. From the definition of a regular language, the following result is immediate.

Theorem 1.4.4 The class of regular languages over Σ is the smallest subset of 2^{Σ∗} (for inclusion) containing the languages ∅, {a} for all a ∈ Σ and closed under union, concatenation and Kleene star. Example 1.4.5 For instance, the language L over {0, 1} whose words do not contain the factor 11 is regular. It is called the Golden mean shift; see also Example 9.2.1. This language can be described by the regular expression L = {0}∗{1}{0, 01}∗ ∪ {0}∗. Otherwise stated, it is generated from the finite languages {0}, {0, 01} and {1} by applying union, concatenation and star operations. Its complement in Σ∗ is also regular and is described by the regular expression Σ∗{11}Σ∗. The language E from Example 1.4.3 is also regular; it is described by the regular expression {0}∗({1}{0}∗{1}{0}∗)∗.

1.4.2 Automata As we shall briefly explain in this section, the regular languages are exactly the languages recognised by finite automata. Definition 1.4.6 A finite automaton is a labelled graph given by a 5-tuple A = (Q, Σ, E, I, T) where Q is the (finite) set of states, E ⊆ Q × Σ∗ × Q is the finite set of edges defining the transition relation, I ⊆ Q is the set of initial states and T ⊆ Q is the set of terminal (or final) states. A path in the automaton is a sequence (q0, u0, q1, u1, . . . , qk−1, uk−1, qk) such that, for all i ∈ {0, . . . , k − 1}, (qi, ui, qi+1) ∈ E; the word u0 · · · uk−1 is the label of the path. Such a path is successful if q0 ∈ I and qk ∈ T. The language L(A) recognised (or accepted) by A is the set of labels of all successful paths in A.


Any finite automaton A gives a partition of Σ∗ into L(A) and Σ∗ \ L(A). When depicting an automaton, initial states are marked with an incoming arrow and terminal states are marked with an outgoing arrow. A transition like (q, u, r) is represented by a directed edge from q to r with label u, written q −u→ r.

Example 1.4.7 In Figure 1.1 the automaton has two initial states p and r, and three terminal states q, r and s. For instance, the word ba is recognised by the automaton. There are two successful paths corresponding to the label ba: (p, b, q, a, s) and (p, b, p, a, s). For this latter path, we can write p −b→ p −a→ s. On the other hand, the word baab is not recognised by the automaton.

Figure 1.1 A finite automaton.

Example 1.4.8 The automaton in Figure 1.2 recognises exactly the language E of the words having an even number of 1s from Example 1.4.3.

Figure 1.2 An automaton recognising words with an even number of 1s.
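As an illustration (ours, not from the book), the two-state automaton of Figure 1.2 is deterministic and can be simulated directly; reading the figure, we take p as the initial and terminal state and q as the state reached after an odd number of 1s, which is an assumption about the picture rather than something stated in the text.

delta = {
    ("p", "0"): "p", ("p", "1"): "q",   # reading 1 switches the parity state
    ("q", "0"): "q", ("q", "1"): "p",
}
initial, terminal = "p", {"p"}

def accepts(word):
    state = initial
    for a in word:
        state = delta[(state, a)]
    return state in terminal

print(accepts("0110"))  # True: two occurrences of 1, so 0110 is in E
print(accepts("100"))   # False: an odd number of 1s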

Definition 1.4.9 Let A = (Q, Σ, E, I, T) be a finite automaton. A state q ∈ Q is accessible (respectively co-accessible) if there exists a path from an initial state to q (respectively from q to some terminal state). If all states of A are both accessible and co-accessible, then A is said to be trim.

Definition 1.4.10 A finite automaton A = (Q, Σ, E, I, T) is said to be deterministic (DFA) if it has only one initial state q0, if E is a subset of Q × Σ × Q and for each (q, a) ∈ Q × Σ there is at most one state r ∈ Q such that (q, a, r) ∈ E.

In that case, E defines a partial function δA : Q × Σ → Q that is called the transition function of A. The adjective partial means that the domain of δA can be a strict subset of Q × Σ. To express that the partial transition function is total, the DFA can be said to be complete. To get a total function, one can add to Q a new 'sink state' s and, for all (q, a) ∈ Q × Σ such that δA(q, a) is not defined, set δA(q, a) := s. This operation does not alter the language recognised by A. We can extend δA to be defined on Q × Σ∗ by δA(q, ε) = q and, for all q ∈ Q, a ∈ Σ and u ∈ Σ∗, δA(q, au) = δA(δA(q, a), u). Otherwise stated, the language recognised by A is L(A) = {u ∈ Σ∗ | δA(q0, u) ∈ T} where q0 is the initial state of A. If the automaton is deterministic, it is sometimes convenient to refer to the 5-tuple A = (Q, Σ, δA, I, T). As explained by the following result, for languages of finite words, finite automata and deterministic finite automata recognise exactly the same languages.

Theorem 1.4.11 (Rabin and Scott (1959)) If L is recognised by a finite automaton A, then there exists a DFA, which can be effectively computed from A, recognising the same language L.

A proof and more details about classical results in automata theory can be found in textbooks like (Hopcroft et al., 2006), (Sakarovitch, 2003) or (Shallit, 2008). For standard material in automata theory we shall not refer again to these references below. One important result is that the set of regular languages coincides with the set of languages recognised by finite automata.

Theorem 1.4.12 (Kleene (1956)) A language is regular if, and only if, it is recognised by a (deterministic) finite automaton.

Observe that if L, M are two regular languages over Σ, then L ∩ M, L ∪ M, LM and L \ M are also regular languages. In particular, a language over Σ is regular if, and only if, its complement in Σ∗ is regular.

Example 1.4.13 The regular language L = {0}∗{1}{0, 01}∗ ∪ {0}∗ from Example 1.4.5 is recognised by the DFA depicted in Figure 1.3. Notice that the state s is a sink: a non-terminal state all of whose outgoing transitions remain in s.

Figure 1.3 A DFA accepting words without factor 11.
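To make the notion of a DFA concrete, here is a small sketch (ours, not part of the original text) of a complete deterministic automaton for the words over {0, 1} that do not contain the factor 11, in the spirit of Figure 1.3; the state names a, b and s are our own choice and need not coincide with those of the figure.

# A complete DFA for the 0-1 words without factor 11 (illustration only).
# 'a' = last letter read was not 1, 'b' = last letter read was 1,
# 's' = sink state reached once a factor 11 has been seen.
DELTA = {
    ('a', '0'): 'a', ('a', '1'): 'b',
    ('b', '0'): 'a', ('b', '1'): 's',
    ('s', '0'): 's', ('s', '1'): 's',
}
INITIAL, TERMINAL = 'a', {'a', 'b'}

def accepts(word):
    """Run the DFA on a finite word over {0, 1}."""
    state = INITIAL
    for letter in word:
        state = DELTA[(state, letter)]
    return state in TERMINAL

print(accepts('010010'))   # True: no factor 11
print(accepts('0110'))     # False: contains the factor 11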


1.5 Symbolic dynamics Let us introduce some basic notions in symbolic dynamics. For expository books on the subject, see (Cornfeld et al., 1982), (Kitchens, 1998), (Lind and Marcus, 1995), (Perrin, 1995) and (Queff´elec, 1987). For references on ergodic theory, also see, e.g., (Walters, 1982).

1.5.1 Codings of dynamical systems

A (discrete) dynamical system is a pair (X, T ) where T : X → X is a map acting on a convenient space X (e.g., X is a topological space or a metric space; in the usual setting, X is generally compact and T is continuous). We are interested in iterating the map T and we look at orbits (T^n(x))_{n≥0} of points in X under the action T . The trajectory of x ∈ X is the sequence (T^n(x))_{n≥0} . Roughly speaking, infinite words appear naturally as a convenient coding (with a priori some loss of information) of these trajectories (T^n(x))_{n≥0} . So one can gain insight about the dynamical system by studying these words, with an interplay between combinatorics on words and dynamics. In that setting, the space X is discretised, i.e., it is partitioned into finitely many sets X1 , . . . , Xk and the trajectory of x is then coded by the corresponding sequence of visited subsets, as illustrated in Figure 1.4. Precisely, the coding of (T^n(x))_{n≥0} is the word wx = w0 w1 w2 · · · over the alphabet {1, . . . , k} where wi = j if and only if T^i(x) ∈ X_j . Even though the infinite word wx contains less information than the original trajectory (T^n(x))_{n≥0} , this discretised and simplified version of the original system can help us to understand the dynamics of the original system.

Figure 1.4 Trajectory of x in a space X = X1 ∪ X2 ∪ X3 ∪ X4 .

Example 1.5.1 (Rotation words) One of the simplest dynamical systems can be obtained from the coding of a rotation on a circle identified with the interval [0, 2π ). Instead of working modulo 2π , it is convenient to normalise the interval [0, 2π ) and

consider instead the interval [0, 1). Hence we shall consider the map1

Rα : [0, 1) → [0, 1), x → {x + α }

where α is a fixed real number in [0, 1). To get a coding of this system, we consider a partition of [0, 1). For instance, take two real numbers γ1 , γ2 such that γ0 := 0 < γ1 < γ2 < 1 =: γ3 and define X0 = [0, γ1 ), X1 = [γ1 , γ2 ) and X2 = [γ2 , 1). In Figure 1.5, we have chosen α = 3/(8π ) ≈ 0.119, γ1 = 1.8/(2π ) ≈ 0.286, γ2 = 4.3/(2π ) ≈ 0.684 and x = 0.08. For a given x ∈ [0, 1), the coding of the trajectory is the word wx = w0 w1 · · · where wi = j if and only if Rα^i(x) belongs to [γ_j , γ_{j+1} ). In our example represented in Figure 1.5, we get wx = 001111220001112 · · · . With such a setting, we get interesting words when the angle α of rotation is irrational. Indeed, a rational number would only produce periodic orbits.

Figure 1.5 The first few points of a trajectory under a rotation R of angle α .
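The coding just described is easy to experiment with. The following sketch (ours, not part of the original text; the function name and the floating-point arithmetic are our own) reproduces the word wx = 001111220001112 · · · obtained above for α = 3/(8π), γ1 = 1.8/(2π), γ2 = 4.3/(2π) and x = 0.08.

from math import pi

def rotation_word(x, alpha, cuts, length):
    """Code the trajectory of x under R_alpha by the partition [0,g1), [g1,g2), ..., [g_last,1)."""
    word = []
    for _ in range(length):
        j = sum(1 for g in cuts if x >= g)   # index of the interval containing x
        word.append(str(j))
        x = (x + alpha) % 1.0                # one step of the rotation R_alpha
    return ''.join(word)

alpha = 3 / (8 * pi)
gamma1, gamma2 = 1.8 / (2 * pi), 4.3 / (2 * pi)
print(rotation_word(0.08, alpha, [gamma1, gamma2], 15))   # 001111220001112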

1.5.2 Beta-expansions

In this section, we consider two important examples of codings of systems connected to numeration. Let us first consider the base-b expansion of real numbers. Given a real number x ∈ [0, 1), the algorithm in Table 1.1 provides a sequence (ci )_{i≥0} of digits in {0, . . . , b − 1} such that

x = ∑_{i≥0} ci b^{−i−1} .

1 The interval [0, 1) is identified with the quotient set R/Z whose elements r + Z are in one-to-one correspondence with real numbers in this interval. It is more convenient to work with R/Z in mind to avoid discontinuity problems. In the literature, the map Rα is sometimes referred to as a translation on the one-dimensional torus.

Table 1.1 An algorithm for computing the base-b expansion of x ∈ [0, 1).

i ← 0
y ← x
REPEAT FOREVER
    ci ← ⌊by⌋
    y ← {by}
    INCREMENT i
END-REPEAT.
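A direct transcription of the algorithm of Table 1.1 (our illustration, with a finite number of iterations in place of REPEAT FOREVER) is given below; run on x = 3/10 and b = 3 it produces the periodic digit pattern 0220 0220 · · · worked out later in this section. Exact rational arithmetic (Python's fractions module) avoids rounding issues.

from fractions import Fraction

def base_b_digits(x, b, n):
    """First n digits c_0 c_1 ... of the base-b expansion of x in [0, 1)."""
    digits = []
    y = Fraction(x)
    for _ in range(n):
        digits.append(int(b * y))      # c_i = floor(b * y)
        y = b * y - int(b * y)         # y   = {b * y}
    return digits

print(base_b_digits(Fraction(3, 10), 3, 8))   # [0, 2, 2, 0, 0, 2, 2, 0]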

In this algorithm, we iterate a map from the interval [0, 1) onto itself, i.e., Tb : [0, 1) → [0, 1), y → {by}

(1.3)

and the value taken by the image determines the next digit in the expansion. The interval [0, 1) is thus split into b subintervals [ j/b, ( j + 1)/b), for j = 0, . . . , b − 1. For all i ≥ 0, if Tb^i(x) belongs to the subinterval [ j/b, ( j + 1)/b), then the digit ci occurring in repb (x) is equal to j. It is indeed natural to consider such subintervals. If y belongs to [ j/b, ( j + 1)/b), then by has integer part equal to j, and the map Tb is continuous and increasing on every subinterval [ j/b, ( j + 1)/b). Note also that the range of Tb on any of these subintervals is [0, 1). So applying Tb to a point in one of these subintervals can lead to a point belonging to any of these subintervals (later on, we shall introduce other transformations, such as the β-transformations, where a restriction appears on the intervals that can be reached). So to speak, the base-b expansion of x can be derived from the trajectory of x under Tb , i.e., from the sequence (Tb^n(x))_{n≥0} .

As an example, consider the base b = 3 and the expansion of x = 3/10. The point lies in the interval [0, 1/3); thus the first digit of the expansion is 0. Then T3 (3/10) = 9/10 lies in the interval [2/3, 1); thus the second digit is 2. If we apply T3 again, we get T3^2(3/10) = {27/10} = 7/10, which belongs again to [2/3, 1), giving the digit 2. Then T3^3(3/10) = 1/10, giving the digit 0, and finally T3^4(3/10) = 3/10. So rep3 (3/10) = (0220)^ω . The map T3 is depicted in Figure 1.6 on the three intervals [0, 1/3), [1/3, 2/3) and [2/3, 1), and we make use of the diagonal to apply the map T3 iteratively.

A natural generalisation of base-b expansion is to replace the base b with a real number β > 1. In particular, the transformation Tb will be replaced by the so-called β-transformation. Note that we shall be concerned with expansions of numbers in [0, 1). If x ≥ 1, then there exists a smallest d such that x/β^d belongs to [0, 1). It is therefore enough2 to concentrate on [0, 1).

Definition 1.5.2 (β-expansions) We will only represent real numbers in the interval [0, 1). Let β > 1 be a real number. The representations discussed here are a direct

2 If the β-expansion of x/β^d is d0 d1 · · · , then, using an extra decimal point, the expansion of x is conveniently written d0 · · · d_{d−1} • d_d d_{d+1} · · · . Note that the presentation in Chapter 2 is not entirely consistent with our present treatment if x belongs to [0, 1/(β − 1)] \ [0, 1).


Figure 1.6 The dynamics behind the transformation T3 .

generalisation of the base-b expansions. Every real number x ∈ [0, 1) can be written as a series

x = ∑_{i=0}^{+∞} ci β^{−i−1}    (1.4)

where the ci belong to {0, . . . , ⌈β⌉ − 1}. Recall that ⌈·⌉ denotes the ceiling function, i.e., ⌈x⌉ = inf{z ∈ Z | z ≥ x}. Note that if β is an integer, then ⌈β⌉ − 1 = β − 1. For integer base-b expansions, a number may have more than one representation, namely those ending with 0^ω or (b − 1)^ω . For a real base β , we obtain many more representations. Consider the Golden Ratio φ , which satisfies φ^2 − φ − 1 = 0 and thus

1/φ^n = 1/φ^{n+1} + 1/φ^{n+2} ,    ∀ n ≥ 0 .

As an example, the number 1/φ has thus infinitely many representations as a power series with negative powers of φ and coefficients 0 and 1:

1/φ = 1/φ^2 + 1/φ^3 = 1/φ^2 + 1/φ^4 + 1/φ^5 = 1/φ^2 + 1/φ^4 + 1/φ^6 + 1/φ^7 = · · · .

To get a canonical expansion for a real x ∈ [0, 1), we just have to replace the integer base b with β and consider the so-called β-transformation

Tβ : [0, 1) → [0, 1), x → {β x}

in the algorithm from Table 1.1. For i = 0, 1, . . ., the idea is to remove the largest integer multiple ci of β^{−i−1} , and then repeat the process with the remainder and the


next negative power of β to get (1.4). Note that ci is less than ⌈β⌉ because of the greediness of the process. Otherwise, one could have removed a larger multiple of a power of β at a previous step. The corresponding infinite word c0 c1 · · · is called the β-expansion of x and is usually denoted by dβ (x). Any word d0 d1 · · · over a finite alphabet of non-negative integers satisfying

x = ∑_{i=0}^{+∞} di β^{−i−1}

is said to be a β-representation of x. Thus, the β-expansion of x is the lexicographically maximal word amongst the β-representations of x. The greediness of the algorithm can be reformulated as follows.

Lemma 1.5.3 A word d0 d1 · · · over {0, . . . , ⌈β⌉ − 1} is the β-expansion of a real number x ∈ [0, 1) if and only if, for all j ≥ 0,

∑_{i=j}^{+∞} di β^{−i−1} < β^{−j} .

Proposition 1.5.4 Let x, y be real numbers in [0, 1). We have x < y if and only if dβ (x) is lexicographically less than dβ (y).

Remark 1.5.5 The map Tβ provides a classical example of a fibred system: as defined in (Schweiger, 1995), a fibred system is a pair (T, B) where T is a transformation on the set B, where B = ∪_{j∈J} B_j is a partition of B with J finite or countable such that T|_{B_j} is injective for any j. For more on fibred systems, see also Remark 3.2.4. Note that a further example of a fibred dynamical system is provided in Chapter 11 with the Gauss map that produces partial quotients in the continued fraction expansion.
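The same algorithm as in Table 1.1, with the β-transformation Tβ in place of Tb, computes dβ(x) numerically. The sketch below is ours (function name and sample values included); it uses floating-point arithmetic, so only the first digits should be trusted, but it is enough to check small instances of Lemma 1.5.3 and Proposition 1.5.4.

import math

def beta_expansion(x, beta, n):
    """First n digits of the greedy beta-expansion d_beta(x) of x in [0, 1)."""
    digits = []
    for _ in range(n):
        d = math.floor(beta * x)       # largest admissible digit at this step
        digits.append(d)
        x = beta * x - d               # T_beta(x) = {beta * x}
    return digits

phi = (1 + 5 ** 0.5) / 2
print(beta_expansion(0.7, phi, 12))    # first greedy digits of 0.7 in base phi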

1.5.3 Subshifts

The sets of infinite sequences obtained as codings of dynamical systems produce themselves dynamical systems, with the map acting on them being the shift. They are called symbolic since they are defined on words. Let S denote the following map3 defined on Σ^ω , called the one-sided shift: S((xn )_{n≥0} ) = (xn+1 )_{n≥0} . In particular, if x = x0 x1 x2 · · · is an infinite word over Σ, then, for all n ≥ 0, its suffix xn xn+1 · · · is simply S^n(x). The map S is uniformly continuous, onto but not one-to-one on Σ^ω . This notion extends in a natural way to Σ^Z . In this latter case, the shift S is one-to-one. We thus get symbolic dynamical systems. The definitions given below correspond to the one-sided shift, but they extend to the two-sided shift.

3 We have tried to keep notation as uniform as possible. However, in Chapter 2, the notation σ will also be used for the shift, and τ in Chapter 8.


Definition 1.5.6 Let x be an infinite word over the alphabet Σ. The orbit of x under the action of the (left) shift S is defined as the set O(x) = {S^n x | n ∈ N}. The symbolic dynamical system associated with x is then defined as (O(x), S), where O(x) ⊆ Σ^ω is the closure of the orbit of x. In the case of bi-infinite words we similarly define O(x) = {S^n x | n ∈ Z} where the (two-sided) shift map is defined on Σ^Z . The set Xx := O(x) is a closed subset of the compact set Σ^ω , hence it is a compact space and S is a continuous map acting on it. One checks that, for every infinite word y ∈ Σ^ω , the word y belongs to Xx if, and only if, L(y) ⊆ L(x). For a proof, see (Queffélec, 1987) or Chapter 1 of (Pytheas Fogg, 2002). Note that O(x) is finite if, and only if, x is eventually periodic.

Generic examples of symbolic dynamical systems are provided by subshifts (also called shifts for short). Let Y be a closed subset of Σ^ω that is stable under the action of the shift S. The system (Y, S) is called a subshift. The full shift is defined as (Σ^ω , S). If Y is a subshift, there exists a set F ⊂ Σ∗ of finite words such that an infinite word y belongs to Y if, and only if, none of its factors belongs to F . A subshift Y is called a subshift of finite type if one can choose the set F to be finite. A subshift is said to be sofic if the set F is a regular language. A subshift (X, S) is said to be periodic if there exist x ∈ X and an integer k such that X = {x, Sx, . . . , S^k x = x}. Otherwise it is said to be aperiodic. Subshifts are discussed in more detail in Chapters 8 and 9 (see in particular Section 9.2.2). In particular, for more on shifts of finite type, see Chapter 9, and see also Chapter 8.

Example 1.5.7 The set of infinite words over {0, 1} of Example 1.4.5 which do not contain the factor 11 is a subshift of finite type, whereas the set of infinite words over {0, 1} having an even number of 1 between two occurrences of the letter 0 is a sofic subshift which is not of finite type. See also Example 8.1.3 and Example 9.2.9.

Example 1.5.8 We let Dβ denote the set of β-expansions of reals in [0, 1). From Lemma 1.5.3, we know that this set is shift-invariant, i.e., if w belongs to Dβ , then S(w) also belongs to Dβ . We easily get the following commutative diagram.

    x ∈ [0, 1)      −−Tβ−−→    Tβ (x) ∈ [0, 1)
       | β-expansion               | β-expansion
       ↓                           ↓
    dβ (x) ∈ Dβ     −−−S−−→    dβ (Tβ (x)) ∈ Dβ

Therefore it is natural to consider the closure of Dβ . This set, denoted by Sβ , is called the β-shift and we can consider the dynamical system (Sβ , S).

Definition 1.5.9 Let Y be a subshift. For a word w = w0 · · · wr , the cylinder set [w] is the set {y ∈ Y | y0 = w0 , . . . , yr = wr }.


The cylinder sets are clopen (open and closed) sets and form a basis of open sets for the topology of Y . Furthermore, one checks that a clopen set is a finite union of cylinders. In the bi-infinite case the cylinders are the sets [u.v]Y = {y ∈ Y | yi = ui , y j = v j , −|u| ≤ i ≤ −1, 0 ≤ j ≤ |v| − 1} and the same remark holds.

1.5.4 Topological and measure-theoretic dynamical systems

There are two main types of dynamical systems, namely topological ones and measure-theoretic ones.

Definition 1.5.10 A topological dynamical system (X, T ) is defined as a compact metric space X together with a continuous map T defined on the set X . Subshifts are examples of topological dynamical systems. A topological dynamical system (X, T ) is minimal if, for all x in X , the orbit of x, i.e., the set {T^n x | n ∈ N}, is dense in X . Let us note that if (X, S) is a subshift, and if X is furthermore assumed to be minimal, then X is periodic if, and only if, X is finite. Moreover, if x is an infinite word, (Xx , S) is minimal if, and only if, x is uniformly recurrent. Indeed, if w is a factor of x, we write

O(x) = ⋃_{n∈N} S^{−n} [w],

and we conclude by a compactness argument. Two dynamical systems (X1 , T1 ) and (X2 , T2 ) are said to be topologically conjugate (or topologically isomorphic) if there exists a homeomorphism f from X1 onto X2 which conjugates T1 and T2 , that is, f ◦ T1 = T2 ◦ f . If f is only onto, then (X1 , T1 ) is said to factor onto (X2 , T2 ), (X2 , T2 ) is a factor of (X1 , T1 ), and f is called a factor map. For more on topological conjugacy, also see Section 9.2.3 in the higher-dimensional setting. See also Theorem 9.2.7. We have considered here the notion of dynamical system, that is, a map acting on a given set, in a topological context. This notion can be extended to measurable spaces: we thus get measure-theoretic dynamical systems. For more details, one can refer for instance to (Walters, 1982).

Definition 1.5.11 A measure-theoretic dynamical system is defined as a system (X, B, μ , T ), where B is a σ-algebra, μ a probability measure defined on B, and T : X → X is a measurable map which preserves the measure μ , i.e., for all B ∈ B, μ (T^{−1}(B)) = μ (B). Such a measure is said to be T -invariant and the map T is said to preserve the measure μ . The transformation T (or the system (X, B, μ , T )) is ergodic if every B ∈ B such that T^{−1}(B) = B has either zero measure or full measure.


Let (X, T ) be a topological dynamical system. A topological system (X, T ) always has an invariant probability measure. The case where there exists only one T -invariant measure is of particular interest. A topological dynamical system (X, T ) is said to be uniquely ergodic if there exists one and only one T -invariant Borel probability measure over X. In particular, a uniquely ergodic topological dynamical system yields an ergodic measure-theoretic dynamical system. A measure-theoretic ergodic dynamical system satisfies the Birkhoff ergodic theorem, also called the individual ergodic theorem. Let us recall that the abbreviation a.e. stands for ‘almost everywhere’: a property holds almost everywhere if the set of elements for which the property does not hold is contained in a set of zero measure.

Theorem 1.5.12 (Birkhoff Ergodic Theorem) Let (X, B, μ , T ) be a measure-theoretic dynamical system. Let f ∈ L^1(X, R). Then the sequence ((1/n) ∑_{k=0}^{n−1} f ◦ T^k)_{n≥0} converges a.e. to a function f^* ∈ L^1(X, R). One has f^* ◦ T = f^* a.e. and ∫_X f^* dμ = ∫_X f dμ . Furthermore, if T is ergodic, since f^* is a.e. constant, one has, for all f ∈ L^1(X, R),

(1/n) ∑_{k=0}^{n−1} f ◦ T^k  −→  ∫_X f dμ    μ-a.e. as n → ∞.

Note that the notions of conjugacy and factor between two topological dynamical systems extend in a natural way to the measure-theoretic context.

2 Expansions in non-integer bases
Martijn de Vries and Vilmos Komornik

2.1 Introduction

The familiar integer base expansions were extended to non-integer bases in a seminal paper of Rényi (1957). Since then many surprising phenomena were discovered and a great number of papers were devoted to unexpected connections with probability and ergodic theory, combinatorics, symbolic dynamics, measure theory, topology and number theory. It was generally believed for a long time that for any given q ∈ (1, 2) there are infinitely many expansions of the form

1 = c1/q + c2/q^2 + c3/q^3 + · · ·

with digits ci ∈ {0, 1}. Erdős et al. (1991) made the startling discovery that for a continuum of bases q ∈ (1, 2) there is only one such expansion. This gave a new impetus to this research field. The purpose of this chapter is to give an overview of parts of this rich theory. We present a number of elementary but powerful proofs and we give many examples. Some proofs presented here are new. In most papers dedicated to ergodic and probabilistic questions, following Rényi (1957) the base was denoted by β . On the other hand, following Erdős and his collaborators, most papers dealing with combinatorial and topological aspects use the letter q for the base. We keep to this tradition here: the base is denoted by β in the second, third and fourth sections of this chapter, and by q from Section 2.5 on. For some other important aspects of the theory, not discussed here, we refer the reader to the surveys Sidorov (2003b) and Komornik (2011).



2.2 Greedy and lazy expansions

Following the pioneering works of Rényi (1957) and Parry (1960), many works during the past fifty years were devoted to the study of expansions in non-integer bases. Given a real number β ∈ (1, 2], a β-expansion or an expansion in base β (if the base β is understood from the context, we simply speak of expansions) of a real number x is a sequence (ci ) = c1 c2 · · · of digits ci ∈ {0, 1} such that

x = ∑_{i=1}^{∞} ci/β^i .

Note that x must belong to Jβ := [0, 1/(β − 1)] ⊇ [0, 1]. If β = 2, each x ∈ Jβ = [0, 1] has only one β-expansion, except for the dyadic rationals in (0, 1) (numbers of the form x = i/2^n with n ≥ 1 and 0 < i < 2^n ), which have exactly two expansions. However, the situation is much more complicated if β ∈ (1, 2). For instance, it was proved by Sidorov (2003a) that almost every1 x ∈ Jβ has a continuum of expansions (see also Dajani and de Vries (2007)), and for each number x in the remaining null set the number of possible expansions may be countably infinite or may be any finite cardinal, depending on β (see Erdős and Joó (1992), Erdős et al. (1990)). Each number x ∈ Jβ has at least one β-expansion, namely the greedy β-expansion b(x, β ) = (bi ) = (bi (x)) = (bi (x, β )) introduced by Rényi (1957), which can be defined recursively by the greedy algorithm: if the digits b1 , . . . , bn−1 have already been determined for some positive integer n (no condition for n = 1), then bn = 1 if, and only if,

∑_{i=1}^{n−1} bi/β^i + 1/β^n ≤ x.

Loosely speaking, the greedy algorithm chooses in each step the largest possible digit. Let us show that (bi ) is an expansion of x for each x belonging to Jβ . If x = 1/(β − 1), then bi = 1 for each i ≥ 1, whence (bi ) is indeed an expansion of x. If 0 ≤ x < 1/(β − 1), then bn = 0 for some index n and for each such n we have

0 ≤ x − ∑_{i=1}^{n} bi/β^i < 1/β^n ≤ ∑_{i=n+1}^{∞} 1/β^i .

Hence there cannot be a last n such that bn = 0. Letting n → ∞ along the indices n for which bn = 0, we see that (bi ) is indeed an expansion of x. We equip the set Eβ (x) consisting of all possible β-expansions of x with the lexicographic order. The greedy expansion of a number x ∈ Jβ is obviously the largest element of Eβ (x). The lazy β-expansion (ei (x)) = (ei ) of a number x ∈ Jβ is the smallest element of Eβ (x). Note that (ci ) is an expansion of x if, and only if, (1 − ci ) = (1 − c1 )(1 − c2 ) · · · is an expansion of 1/(β − 1) − x. This means in particular that (ei (x)) = (1 − bi (1/(β − 1) − x)) for x ∈ Jβ . The lazy expansion of x ∈ Jβ

1 When there is no reference to a measure where there should be, we always mean the Lebesgue measure.


can be obtained by applying the lazy algorithm: if e1 , . . . , en−1 are already defined for some positive integer n, then en = 0 if, and only if,

x ≤ ∑_{i=1}^{n−1} ei/β^i + ∑_{i=n+1}^{∞} 1/β^i .
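For a base β ∈ (1, 2] the two algorithms are easily put side by side. The following sketch is ours (function names, sample base and floating-point arithmetic are our own choices); it computes the first digits of the greedy and of the lazy expansion of an x ∈ Jβ directly from the two defining inequalities above.

def greedy_digits(x, beta, n):
    """b_1 ... b_n: pick the digit 1 whenever the partial sum stays below x."""
    digits, s = [], 0.0
    for i in range(1, n + 1):
        if s + beta ** (-i) <= x:
            digits.append(1)
            s += beta ** (-i)
        else:
            digits.append(0)
    return digits

def lazy_digits(x, beta, n):
    """e_1 ... e_n: pick the digit 0 only when the remaining 1s still suffice to reach x."""
    digits, s = [], 0.0
    for i in range(1, n + 1):
        tail = beta ** (-i) / (beta - 1)   # maximal value of the digits after position i
        if x <= s + tail:
            digits.append(0)
        else:
            digits.append(1)
            s += beta ** (-i)
    return digits

beta, x = 1.8, 1.0
print(greedy_digits(x, beta, 12), lazy_digits(x, beta, 12))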

Until Section 2.5, the base β is a fixed but arbitrary number strictly between 1 and 2, unless stated otherwise. Let the greedy map Gβ : Jβ → Jβ be given by

Gβ (x) := β x        if x ∈ G(0) := [0, 1/β ),
Gβ (x) := β x − 1    if x ∈ G(1) := [1/β , 1/(β − 1)].

One easily verifies that the greedy map generates the greedy expansion in the sense that bn (x) = j if, and only if, Gβ^{n−1}(x) ∈ G( j), j = 0, 1. Similarly, the lazy expansion is generated by the lazy map Lβ : Jβ → Jβ , defined by

Lβ (x) := β x        if x ∈ L(0) := [0, 1/(β (β − 1))),
Lβ (x) := β x − 1    if x ∈ L(1) := [1/(β (β − 1)), 1/(β − 1)].

We denote by μgβ the Gβ -invariant greedy measure (see Gel'fond (1959); Parry (1960)) on Jβ which is absolutely continuous with density

gβ (x) = (1/F(β )) ∑_{n=0}^{∞} (1/β^n) 1_{[0, Gβ^n(1))}(x),    x ∈ Jβ ,

where F(β ) is a normalising constant so that μgβ (Jβ ) = 1. Equip the interval Jβ with the Borel σ-algebra B. It is well known that the dynamical system (Jβ , B, μgβ , Gβ ) is ergodic. Define the lazy measure μℓβ on Jβ by setting μℓβ = μgβ ◦ rβ , where rβ : Jβ → Jβ is given by

rβ (x) = 1/(β − 1) − x.

Notice that rβ ◦ Gβ = Lβ ◦ rβ . Hence rβ is a continuous isomorphism between the dynamical systems (Jβ , B, μgβ , Gβ ) and (Jβ , B, μℓβ , Lβ ), and ℓβ := gβ ◦ rβ is a density of μℓβ . The normalised errors of an expansion (ci ) of x are defined by

θn ((ci )) = β^n ( x − ∑_{i=1}^{n} ci/β^i ),    n ∈ N.

A straightforward application of Birkhoff's ergodic theorem, see Theorem 1.5.12, yields that for μgβ -almost all x ∈ Jβ the limit

lim_{n→+∞} (1/n) ∑_{j=0}^{n−1} Gβ^j(x)

exists and equals

Mg := lim_{n→+∞} (1/n) ∑_{j=0}^{n−1} θj ((bi (x))) = ∫ y gβ (y) dy.

Observe that the density gβ is positive on the interval [0, 1). Moreover, for each x ∈ [1, 1/(β − 1)), Gβ^n(x) ∈ [0, 1) for sufficiently large n. Since the map Gβ is non-singular2, we may conclude that for almost every x ∈ Jβ the limit above exists and equals Mg . Similarly, we may define, for almost every x ∈ Jβ ,

Mℓ := lim_{n→+∞} (1/n) ∑_{j=0}^{n−1} θj ((ei (x))) = lim_{n→+∞} (1/n) ∑_{j=0}^{n−1} Lβ^j(x) = ∫ y ℓβ (y) dy.

The following result, from Dajani and Kraaikamp (2002), shows that “on average” the greedy convergents ∑_{i=1}^{n} bi (x)β^{−i} approximate x better than the lazy convergents ∑_{i=1}^{n} ei (x)β^{−i} for almost every x ∈ Jβ .

Theorem 2.2.1 One has Mg + Mℓ = 1/(β − 1) and Mg < Mℓ .

Proof The first statement follows directly from the relation between gβ and ℓβ :

Mℓ = ∫_0^{1/(β−1)} x ℓβ (x) dx = ∫_0^{1/(β−1)} x gβ (1/(β − 1) − x) dx
   = ∫_0^{1/(β−1)} (1/(β − 1) − y) gβ (y) dy = 1/(β − 1) − Mg .

For the second statement, notice that by definition of Mg and the monotone convergence theorem, one has that

Mg = (1/F(β )) ∑_{n=0}^{∞} ∫_0^{Gβ^n(1)} (x/β^n) dx = (1/F(β )) ∑_{n=0}^{∞} (Gβ^n(1))^2 / (2β^n) .

Furthermore,

Mℓ = ∫_0^{1/(β−1)} x ℓβ (x) dx = (1/F(β )) ∑_{n=0}^{∞} ∫_0^{Gβ^n(1)} rβ (x)/β^n dx
   = (1/F(β )) ∑_{n=0}^{∞} ( (1/(β − 1))^2 − (rβ (Gβ^n(1)))^2 ) / (2β^n) .

The second statement now follows from the observation that for every n ≥ 0 one has that

(Gβ^n(1))^2 ≤ (1/(β − 1))^2 − (rβ (Gβ^n(1)))^2 ,

where the ≤ sign can be replaced by a < sign for those n ≥ 0 for which Gβ^n(1) ≠ 0, such as n = 0 or n = 1.

2 Non-singularity of a map T : Jβ → Jβ means that T^{−1}(E) is a null set whenever E ⊆ Jβ is a null set.
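Theorem 2.2.1 is easy to observe numerically. The sketch below is ours (the function names and the starting point 0.3 are our own choices); it iterates the greedy and the lazy map from a typical starting point and compares the resulting Birkhoff averages with 1/(β − 1). By the discussion above, for almost every starting point these averages converge to Mg and Mℓ.

def greedy_map(x, beta):
    return beta * x if x < 1 / beta else beta * x - 1

def lazy_map(x, beta):
    return beta * x if x < 1 / (beta * (beta - 1)) else beta * x - 1

def birkhoff_average(f, x, beta, n):
    """Average of x, f(x), ..., f^{n-1}(x)."""
    total = 0.0
    for _ in range(n):
        total += x
        x = f(x, beta)
    return total / n

beta, x0, n = 1.8, 0.3, 10 ** 6
Mg = birkhoff_average(greedy_map, x0, beta, n)
Ml = birkhoff_average(lazy_map, x0, beta, n)
print(Mg, Ml, Mg + Ml, 1 / (beta - 1))   # expect Mg < Ml and Mg + Ml close to 1/(beta - 1)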


Motivated by Theorem 2.2.1, we call an expansion (di ) of x optimal if θn ((di )) ≤ θn ((ci )) for each n = 1, 2, . . . and for each expansion (ci ) of x. Since the greedy expansion is the lexicographically largest expansion of x, it is clear that only the greedy expansion can be optimal. The following example shows that the greedy expansion of a number x ∈ Jβ is not always optimal. Other examples can be found in Dajani and Kraaikamp (2002).

Example 2.2.2 Let φ := (1 + √5)/2 be the Golden Ratio and consider a base β ∈ (1, φ ). Note that 1 < β^{−1} + β^{−2} . The sequence (ci ) := 0110^∞ is clearly an expansion of x := β^{−2} + β^{−3} . Applying the greedy algorithm we find that the first three digits of the greedy expansion (bi ) of x equal 100. Hence θ3 ((bi )) > θ3 ((ci )) = 0.

Let P be the countable set consisting of the Golden Ratio φ = φ2 and the Multinacci numbers φ3 , φ4 , . . . ∈ (1, 2), defined by the equations

1 = 1/φk + · · · + 1/φk^k ,    k = 2, 3, . . . .
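For concreteness, the Multinacci numbers are easy to approximate numerically: φk is the unique root in (1, 2) of 1 = x^{−1} + · · · + x^{−k}, and the left-hand side of this equation is decreasing in x, so bisection applies. The following sketch is ours.

def multinacci(k, tol=1e-12):
    """The number phi_k in (1, 2) satisfying 1 = phi_k^-1 + ... + phi_k^-k."""
    lo, hi = 1.0, 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(mid ** (-i) for i in range(1, k + 1)) > 1:
            lo = mid          # the digit sum is too large, so phi_k is larger than mid
        else:
            hi = mid
    return (lo + hi) / 2

print([round(multinacci(k), 6) for k in (2, 3, 4, 5)])
# phi_2 is the Golden Ratio 1.618034...; the phi_k increase towards 2 as k grows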

Theorem 2.2.3 (Dajani et al. (2012)) We have the following dichotomy.
• If β ∈ P, then each x ∈ Jβ has an optimal expansion.
• If β ∈ (1, 2) \ P, then the set of numbers x ∈ Jβ with an optimal expansion is a nowhere dense null set.

Remark 2.2.4 Let β ∈ (1, 2) \ P. The proof of Theorem 2.2.3 in (Dajani et al., 2012) shows that there exists a (non-degenerate) interval I contained in [0, 1), such that no number x ∈ I has an optimal expansion. Clearly a number belongs to I if its greedy expansion starts with b := b1 (x′) · · · bn (x′) for some x′ in the interior of I and for some large enough n. It was shown in (de Vries and Komornik, 2011) that such a block b occurs in the greedy expansion of every x ∈ Jβ , except for a set of Hausdorff dimension less than one. Since each tail of an optimal expansion must be optimal, it follows that the set of numbers x ∈ Jβ with an optimal expansion has in fact Hausdorff dimension less than one.

2.3 On the cardinality of the sets Eβ (x)

We recall from the preceding section that Eβ (x) denotes the set of all possible β-expansions of a number x ∈ Jβ . Let us first show that one does not need the continuum hypothesis in order to determine in general the cardinality of Eβ (x).

Theorem 2.3.1 A number x ∈ Jβ with uncountably many expansions necessarily has a continuum of expansions.

Proof Give each coordinate {0, 1} the discrete topology and endow the set D := {0, 1}^∞ with the Tychonoff product topology. One easily verifies that Eβ (x) is a


closed subset of the Polish space D3 . Hence Eβ (x) is a Polish space, too. The assertion follows from the well-known fact that uncountable Polish spaces have the cardinality of the continuum. We already mentioned that Eβ (x) has the cardinality of the continuum for almost every x ∈ Jβ . This provides a sharp contrast with the fact that almost all numbers have a unique binary expansion. We will now present a simple proof of this result using the dynamical properties of the greedy and the lazy map. Theorem 2.3.2 Almost every x ∈ Jβ has a continuum of β -expansions. Proof Let Uβ be the set of numbers x ∈ Jβ with a unique β -expansion. For ease of notation we denote in this proof the lazy and the greedy map by T0 and T1 , respectively. We also set Tu1 ···un = Tun ◦ · · · ◦ Tu1 for u1 , . . . , un ∈ {0, 1} and n = 1, 2, . . .. Since the greedy and the lazy map are non-singular, the set Nβ :=

⋃_{n=1}^{∞} { x ∈ Jβ | T_{u1 ···un}(x) ∈ Uβ for some u1 , . . . , un ∈ {0, 1} }

is a null set, because Uβ is a null set, as follows for instance from Theorem 2.2.1. One may verify that if (ci ) is an expansion of a number x ∈ Jβ , then (cn+i ) = cn+1 cn+2 · · · is an expansion of the number T_{c1 ···cn}(x) for each n ≥ 1. It follows that none of the expansions of a number x ∈ Jβ \ Nβ has a unique tail. Hence if (ci ) is an expansion of a number x ∈ Jβ \ Nβ , then there exists an index n such that x has expansions (εi ) and (ρi ) starting with c1 · · · cn−1 0 and c1 · · · cn−1 1, respectively. Similarly, there exists an index m > n such that x has expansions starting with ε1 · · · εm−1 0 and ε1 · · · εm−1 1, respectively. Continuing in this manner, we can construct recursively a full binary tree of possible expansions of x, whence Card Eβ (x) = 2^ℵ0 for all x ∈ Jβ \ Nβ .

Remark 2.3.3
• The set Uβ has in fact Hausdorff dimension less than one, see for instance Glendinning and Sidorov (2001); de Vries and Komornik (2011). Since the branches of the greedy and the lazy map are similarities, we have dimH(E) = dimH(Gβ^{−1}(E)) = dimH(Lβ^{−1}(E)) for E ⊆ Jβ , where dimH denotes the Hausdorff dimension. The countable stability of the Hausdorff dimension yields that dimH(Uβ ) = dimH(Nβ ), whence every x ∈ Jβ has a continuum of expansions, except for a set of Hausdorff dimension less than one. A more general version of this argument can be found in Sidorov (2007), Proposition 3.8.
• For each E ⊆ Jβ , we have

Gβ^{−1}(E) ∪ Lβ^{−1}(E) = (1/β)·E ∪ (1/β)·(E + 1).

It follows that Gβ^{−1}(E) ∪ Lβ^{−1}(E) is a closed null set whenever the set E is. The results in Section 2.7 together with Exercise 2.10.8 imply that the closure of

3 A Polish space is a topological space that is homeomorphic to a complete separable metric space.


the set Uβ is a null set. Since a closed null set is nowhere dense, we conclude that the set Nβ is of the first category and therefore the set

{ x ∈ Jβ | Card Eβ (x) = 2^ℵ0 }

is also residual4 in Jβ . Let Eβn (x) be the set of all possible prefixes of length n of sequences belonging to Eβ (x). More precisely, we set

Eβn (x) := { (c1 , . . . , cn ) ∈ {0, 1}^n | ∃ (cn+1 , cn+2 , . . .) ∈ {0, 1}^∞ so that (ci ) ∈ Eβ (x) }

and Nn (x, β ) := Card Eβn (x). The last result of this section strengthens Theorem 2.3.2 for β strictly between 1 and the Golden Ratio φ . Parts (i) and (ii) of Theorem 2.3.4 below are established in Erdős et al. (1990) and Feng and Sidorov (2011), respectively. Our proof of part (ii) is primarily based on the proof of part (i) and differs in this respect from the proof in (Feng and Sidorov, 2011). Note that the result is sharp in the sense that Card Eφ (1) = ℵ0 . One may check that

Eφ (1) = {(10)^n 110^∞ | n = 0, 1, . . .} ∪ {(10)^∞} ∪ {(10)^n 01^∞ | n = 0, 1, . . .} .

Theorem 2.3.4 is also sharp in a more interesting sense (see Proposition 5.5 in (Feng and Sidorov, 2011)): there exists a continuum of numbers x ∈ Jφ such that Card Eφ (x) = 2^ℵ0 , yet

lim_{n→+∞} log2 (Nn (x, φ )) / n = 0.
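The numbers Nn(x, β) can be computed by a simple branch-and-bound: a prefix (c1, ..., cj) extends to an expansion of x exactly when the rescaled remainder β^j (x − ∑_{i≤j} ci β^{−i}) still lies in Jβ = [0, 1/(β − 1)], and once this remainder leaves the interval it can never return. The following sketch is ours (function name and sample parameters included).

def count_prefixes(x, beta, n):
    """N_n(x, beta): number of 0-1 prefixes of length n extendable to an expansion of x."""
    upper = 1 / (beta - 1)

    def count(remainder, depth):
        if remainder < 0 or remainder > upper:
            return 0                       # no continuation can repair this prefix
        if depth == 0:
            return 1
        r = beta * remainder
        return count(r, depth - 1) + count(r - 1, depth - 1)

    return count(x, n)

beta = 1.5                                  # a base below the Golden Ratio
for n in (5, 10, 15, 20):
    print(n, count_prefixes(1.0, beta, n))  # exponential growth, cf. Theorem 2.3.4(ii)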

Theorem 2.3.4 If 1 < β < φ , then
(i) Card Eβ (x) = 2^ℵ0 for each x in the interior of Jβ ;
(ii) there exists a positive constant c = c(β ) such that the inequality

lim inf_{n→+∞} log2 (Nn (x, β )) / n ≥ c

holds for each x in the interior of Jβ .

Proof (i) Since 1 < β < φ , there exists an index k such that

1 < β^{−2} + · · · + β^{−k} .    (2.1)

Let (mi ) = m1 m2 · · · be the strictly increasing sequence consisting of all positive integers which are not multiples of k. Then for each ℓ = 1, 2, . . ., we have

β^{−m_ℓ} · · ·

· · · > M, then

|∏_{n=1}^{N−1} cos(2πβ^n)| = |∏_{n=1}^{N−1} cos(2π d(β^n))| ≥ ∏_{n=1}^{M−1} |cos(2π d(β^n))| · ∏_{n=M}^{N−1} (1 − 2πkα^n)

> 0. The assertion follows at once from (2.7), (2.8) and (2.9). Kempton (2013) recently characterised the measure theoretical nature (i.e., absolutely continuous or singular) of the infinite Bernoulli convolutions νβ in terms of the numbers Nn (x, β ). This enabled him to determine √ the growth rate of the numbers Nn (x, β ) as n → ∞ for almost every β ∈ (1, 2) and for almost every x ∈ Jβ . n In order to present his main  results, we note that (x1 , . . . ,xn ) ∈ {0, 1} belongs to Eβn (x) if, and only if, x ∈ ∑ni=1 xi β −i , ∑ni=1 xi β −i + β n (β1−1) . Hence, if we define for n = 1, 2, . . . the functions fn : Jβ → R by fn (x) =

β n (β − 1) · Nn (x, β ), 2n

x ∈ Jβ ,

34 then

M. de Vries and V. Komornik Jβ

fn (x)dx = 1,

n = 1, 2, . . .

Let f := lim supn→+∞ fn and f := lim infn→+∞ fn . Theorem 2.4.9 (Kempton, 2013) • The infinite Bernoulli convolution νβ is absolutely continuous if, and only if, 0
∑ i qn i>n,c q i =0

whenever cn = 1.

It follows that if d is a sequence different from c, then9 ∞

di



ci

if d > c;

ci

if d < c.

∑ qi > ∑ qi

i=1 ∞

di

i=1 ∞

∑ qi < ∑ qi

i=1

i=1

• The Golden Ratio φ = φ2 and the Multinacci numbers φ3 , φ4 , . . . do not belong to U : both 1k 0∞ and (1k−1 0)∞ are expansions for q = φk , k = 2, 3, . . . . • More generally, if there is a finite expansion c for some base q, then q ∈ / U . Indeed, there are infinitely many other expansions: if cn is the last one digit of c, then the formulas (c1 · · · cn−1 0)k c1 · · · cn 0∞ ,

k = 1, 2, . . .

define infinitely many other finite expansions, and letting k → ∞ we obtain an infinite expansion, too: (c1 · · · cn−1 0)∞ . The set U is ‘large’ or ‘small’, depending on the point of view. Theorem 2.6.2 (i) (ii) (iii) (iv)

(Erd˝os et al., 1991) U has 2ℵ0 elements. (Erd˝os et al., 1991) U is a null set. U is nowhere dense. (Dar´oczy and K´atai, 1995) U has Hausdorff dimension one. First we establish the following lexicographic characterisation.

9

We look at the first digit where they differ.


Expansions in non-integer bases

Theorem 2.6.3 (Erd˝os et al. (1990)) A sequence c = (ci ) belongs to U  if, and only if, the following two conditions are satisfied: cn+1 cn+2 · · · < c1 c2 · · ·

whenever cn = 0

(2.23)

cn+1 cn+2 · · · < c1 c2 · · ·

whenever cn = 1.

(2.24)

and Moreover, the formula c → q, where q ∈ (1, 2] is defined by (2.22), defines an increasing bijection between U  and U . Proof If (ci ) ∈ U  , then (ci ) = α (q) for some q ∈ (1, 2], and (2.23), (2.24) follow from Theorem 2.5.7. Conversely, assume that a sequence (ci ) satisfies (2.23) and (2.24). Then (ci ) > 0∞ by (2.23), and (ci ) cannot have a last one digit cn = 1 by (2.24). Hence (ci ) has infinitely many one digits, and the equation ∞

ci

∑ qi = 1

i=1

defines a number q ∈ (1, 2]. Applying Corollary 2.5.2 we deduce from (2.23) that (ci ) = α (q). Therefore we may apply Theorem 2.5.7 to conclude that (ci ) ∈ U  . Example 2.6.4

Let us reconsider the preceding examples.

• One has 1∞ ∈ U  . • We have also 1(10)∞ ∈ U  . Indeed, (2.23) and (2.24) take the forms (01)∞ < 1(10)∞ or (10)∞ < 1(10)∞ , and both lexicographic inequalities are obvious. Now we provide a proof of Theorem 2.6.2.10 Proof of Theorem 2.6.2 (i) It follows from Theorem 2.6.3 that 13 {01, 001}∞ ⊆ U  . Hence the set U  has 2ℵ0 elements. (ii) It follows from Theorem 2.6.3 that the expansion for each q ∈ U \ {2} has the form 1t1 0t2 1t3 0t4 · · · with a unique sequence of integers (t j ) j≥1 satisfying the following conditions: t1 ≥ 2,

and 1 ≤ t j ≤ t1

for all

j = 2, 3, . . . .

Since U \ {2} is the union of the countably many sets UN := {q ∈ U | t1 = N} ,

N = 2, 3, . . . ,

it is sufficient to prove that each UN is a null set. 10

Our proof of (iii) is new; it also shows directly that even U is a null set. Both properties were first deduced in (Komornik and Loreti, 2007) from the countability of U \ U .


M. de Vries and V. Komornik

Henceforth we fix N ≥ 2 arbitrarily, and we define r = rN ∈ (1, 2) by the formula β (r) := 1N 0N 10∞ , i.e., 1 1 1 + · · · + N + 2N+1 = 1. r r r

(2.25)

Since the map q → β (q) is increasing, we have r ≤ inf UN . We claim that if the expansions for two bases p, q ∈ UN start with the same word of length k, then C (2.26) |p − q| ≤ k r with some constant C, depending only on r. For the proof we may assume by symmetry that p < q. Then the corresponding expansions (ci ) := α (p) and (di ) := α (q) satisfy c1 · · · cn = d1 · · · dn and cn+1 = 0 < 1 = dn+1 for some n ≥ k. Since d1 = 1, we have ∞ ci di −∑ i i p i=1 i=1 q ∞

0=∑

  ∞ ci − d i 1 1 = ∑ + ∑ di − pi pi qi i=n+1 i=1   ∞ −1 1 1 − + ≥ ∑ i p q i=n+1 p ∞

1 q− p − n pq p (p − 1) 1 q− p − k . ≥ pq p (p − 1) =

Since r ≤ p < q ≤ 2, hence (2.26) follows with C = 2r/(r − 1): 0 < q− p ≤

q 2 ≤ . pk−1 (p − 1) rk−1 (r − 1)

Now we choose an even positive integer R, and we denote by Ut1 ···tR the set of bases q ∈ U whose expansions start with 1t1 0t2 · · · 1tR−1 0tR . Then we have the finite partition UN = ∪tN2 =1 · · · ∪tNR =1 UNt2 ···tR , and using (2.25) and (2.26) we obtain the following estimate: N

N

t2 =1

tR =1

N

N

C N+t2 +···+tR r t2 =1 tR =1   1 R−1 C 1 + ···+ N = N r r r  R−1 1 C = N 1 − 2N+1 . r r

∑ · · · ∑ diam UNt2 ···tR ≤ ∑ · · · ∑

Expansions in non-integer bases


Since the last expression tends to zero as R → ∞, we conclude that UN is a null set. (iii) In the previous proof we had a finite partition of UN for each R. Hence UN = ∪tN2 =1 · · · ∪tNR =1 UNt2 ···tR , and the preceding proof shows that even the closure UN of UN is a null set. This implies that U = ∪∞ N=2 UN ∪ {2} is of the first category. Moreover, since sup UN ≤ inf UN+1 for all N ≥ 2 by the fact that the map c → q is increasing, we have U = ∪∞ N=2 UN ∪ {2} , and hence the closure U of U is a null set as well. This implies that U is nowhere dense. (iv) See Dar´oczy and K´atai (1995). Before we proceed with the study of the topological structure of U , we recall the following result. Theorem 2.6.5 (Komornik and Loreti (1998)) The set U has a smallest element q∗ ≈ 1.787. The corresponding expansion is the truncated Thue–Morse sequence (τi )∞ i=1 = 1101 0011 0010 1101 · · · , where (τi )∞ i=0 is defined by the formulas τ0 := 0 and

τ2N +i = 1 − τi

for i = 0, . . . , 2N − 1,

N = 0, 1, 2, . . . .
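The truncated Thue–Morse sequence and the base q∗ are easy to generate and approximate numerically. The sketch below is ours (function names and tolerances are our own choices); it builds (τi) by the doubling rule of Theorem 2.6.5 and then approximates q∗ by solving ∑_{i≥1} τi q^{−i} = 1 with bisection.

def thue_morse(length):
    """tau_0 tau_1 ... with tau_0 = 0 and tau_{2^N + i} = 1 - tau_i."""
    t = [0]
    while len(t) < length:
        t += [1 - bit for bit in t]        # one doubling step
    return t[:length]

tau = thue_morse(64)
print(''.join(map(str, tau[1:17])))        # 1101001100101101, as in Theorem 2.6.5

def smallest_univoque(tau, tol=1e-12):
    """Approximate q* by solving sum_{i>=1} tau_i q^{-i} = 1 (bisection on (1, 2])."""
    lo, hi = 1.0, 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        s = sum(t * mid ** (-i) for i, t in enumerate(tau) if i >= 1)
        lo, hi = (mid, hi) if s > 1 else (lo, mid)
    return (lo + hi) / 2

print(smallest_univoque(tau))              # approximately 1.787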

Remark 2.6.6 Allouche and Cosnard (2000) proved that q∗ is a transcendental number. (They called it the Komornik–Loreti constant.) Example 2.6.7 • We have seen that the Golden Ratio φ ≈ 1.618 does not belong to U . It does not belong to its closure U either, because it is smaller than q∗ = min U . • We have seen that the Multinacci numbers φk do not belong to U either. But they belong to U . Indeed, for any fixed k ≥ 3 and n ≥ 1, the formula α (qk,n ) := (1k−1 0)n (10)∞ defines a number (qk,n ) ∈ U by Theorem 2.6.3, and qk,n → φk as n → ∞, because α (qk,n ) → α (φk ) coordinate-wise as n → ∞. Since U is not closed, it is an interesting question how to characterise its closure U . In order to solve this problem, it is convenient to introduce the following set. Definition 2.6.8 A number q ∈ (1, 2] belongs to V if, and only if, the sequence (αi ) := α (q) satisfies the following conditions:

αn+1 αn+2 · · · ≤ α1 α2 · · ·

whenever αn = 1.

(2.27)


M. de Vries and V. Komornik

Example 2.6.9 • We have U ⊆ V because α (q) = β (q) for q ∈ U , and β (q) satisfies even the strict inequalities (2.23), (2.24), not only (2.13) and (2.27). • The Golden Ratio and the Multinacci numbers belong to V because α (q) = (1k 0)∞ satisfies (2.13) and (2.27) for all k ≥ 1. One easily verifies that the Golden Ratio is the smallest element of V . Throughout the proof of the following lemmas we shall apply frequently without explicit mention Corollaries 2.5.2 and 2.5.5. The first property that we derive is not surprising because the inequalities defining V allow equalities. Lemma 2.6.10 The set V is a closed set. Proof We prove that the complement of V in (1, 2] is open. Let q ∈ (1, 2] \ V and set (αi ) := α (q). Choose two integers k, m ≥ 1 such that

αk = 1 and αk+1 · · · αk+m > α1 · · · αm .

(2.28)

If q < q is close enough to q, then α (q) and α (q ) start with the same word of /V. length k + m, and (2.28) implies that q ∈ If q > q is close enough to q, then β (q) and α (q ) start with the same word of / V again. length k + m. In case of α (q) = β (q) the inequality (2.28) implies q ∈ If α (q) = β (q), then β (q) has a last one digit βk . If q > q is close enough to q, then (αi ) := α (q ) starts with β1 · · · βk 0k+1 . Hence    αk = 1 and αk+1 · · · α2k+1 = 1k+1 > β1 · · · βk 0 = α1 · · · αk+1 ,

and therefore q ∈ /V. On the other hand, the following property is surprising because U is defined by strict inequalities. Lemma 2.6.11 The set U is closed from above.11 Proof Fix a number q ∈ / U and set (βi ) := β (q). We have to show that q ∈ / U for  all q > q, sufficiently close to q. Since q ∈ / U , there exists by Theorem 2.6.3 an index k such that βk = 1 and

βk+1 βk+2 · · · ≥ β1 β2 · · · . Since we cannot have equality here (this would imply an equality in (2.16) for n = 2k), there exists an index m such that

βk+1 · · · βk+m > β1 · · · βm .

(2.29)

If q > q is close enough to q, then β (q) and β (q ) start with the same word of /U. length k + m, and (2.29) implies that q ∈ 11

We call a set E ⊆ [1,∞) closed from above if the limit of a decreasing sequence of elements in E belongs to E.

Expansions in non-integer bases


Lemma 2.6.12 Fix q ∈ V and set (αi ) := α (q). If for some k ≥ 1,

αk = 1 and αk+1 · · · α2k = α1 · · · αk , then (αi ) = (α1 · · · αk α1 · · · αk )∞ . Proof Let r := α1 · · · αk . We must show that if (αi ) starts with (rr)N for some positive integer N, then (αi ) also starts with (rr)N+1 . Let s := α2kN+1 · · · α2kN+k

and t := α2kN+k+1 · · · α2k(N+1) .

Applying (2.13) for n = 2kN and (2.27) for n = 2kN − k, we obtain that s≤r

and rs ≤ rr;

they imply that s = r. Applying (2.13) and (2.27) again with the same choices of n, we obtain that rt ≤ rr

and rrt ≤ rrr;

hence t = r. Since V is closed, we have U ⊆ U ⊆ V , and a characterisation of U may be expected by strengthening somewhat the condition (2.27). Let us therefore investigate the case of equality in (2.27). Lemma 2.6.13 Fix q ∈ V and set (αi ) := α (q). If not all inequalities (2.27) are strict, then q ∈ / U , and q is an isolated point of V . Hence q ∈ /U. Sketch of proof By Theorem 2.6.3, the number q does not belong to U . By assumption, there exists a smallest index n such that

α (q) = (α1 · · · αn α1 · · · αn )∞

with αn = 1;

then

β (q) = α1 · · · αn α1 · · · αn−1 10∞ . If q > q is close enough to q, then α (q ) begins with

α1 · · · αn α1 · · · αn−1 102n+1, and this sequence does not satisfy (2.27) for α2n (q ) = 1. If q ≤ q is close enough to q, then α (q ) begins with α1 · · · αn α1 · · · αn . If q ∈ V , then q = q by the preceding lemma. It follows from the above property that if q ∈ U , then the inequalities (2.27) are strict. The converse also holds true. Lemma 2.6.14 Fix q ∈ V and let (αi ) := α (q). Suppose that all inequalities (2.27) are strict:

αn+1 αn+2 · · · < α1 α2 · · ·

whenever αn = 1.

(2.30)


M. de Vries and V. Komornik

(i) There exist arbitrarily large positive integers m such that

αk+1 · · · αm < α1 · · · αm−k

whenever 0 ≤ k < m.

(2.31)

(ii) One has q ∈ U . Note that (2.31) for k = m − 1 implies that α1 = αm = 1. Sketch of proof (i) It follows from Theorem 2.5.4 that (αi ) is the greedy expansion of 1/(q − 1) − 1. Applying Lemma 2.5.6 we conclude that there exist arbitrarily large indices m with αm = 1 such that

αk+1 · · · αm < α1 · · · αm−k

whenever 1 ≤ k < m

and αk = 1.

If αk = 0 and  := max {i ∈ {1, . . . , k − 1} | αi = 1} (note that α1 = 1), then

αk+1 · · · αm < α+1 · · · αm−k+ ≤ α1 · · · αm−k . (ii) We shall construct a sequence (p∗m ) of numbers belonging to U that converges to q. For each m satisfying (2.31) we define a sequence (ci ) by recursion as follows. First set c1 · · · cm := α1 · · · αm . Then, if c1 · · · c2N m is already defined for some non-negative integer N, set c2N m+1 · · · c2N+1 m := c1 · · · c2N m−1 1. It can be shown that the sequence (ci ) satisfies the conditions (2.23) and (2.24), so that it is the unique expansion for some base p∗m . Since α (q) and α (p∗m ) start with the same block of length m, letting m → ∞ we conclude that p∗m → q. Remark 2.6.15 • It can be shown that the sequence (ci ) constructed in the above proof satisfies (ci ) ≤ (αi ). This shows that the smallest element q∗ of U belongs to U and by taking m = 1 in the above construction we see that its unique expansion is the truncated Thue–Morse sequence. • Let us denote by U ∗ the set of Thue–Morse type univoque bases p∗m obtained by the above construction. Kong and Li (2014) proved that all elements of U ∗ are transcendental. (They called them the de Vries–Komornik constants.) This extended some earlier results in (Allouche and Cosnard, 2000; Komornik and Loreti, 2002). • The above proof shows that U ∗ ⊂ U is already dense in U . Hence the transcendental univoque bases are dense in U ; de Vries (2008) proved that the algebraic univoque bases are also dense in U . The last two lemmas above yield the following characterisation of U .

Expansions in non-integer bases


Theorem 2.6.16 A number q ∈ (1, 2] belongs to U if, and only if, the sequence (αi ) := α (q) satisfies the conditions (2.30). Example 2.6.17 The sequences α (φk ) = (1k−1 0)∞ satisfy (2.30) for k = 3, 4, . . . , but not for k = 2. This shows again that the Golden Ratio belongs to V \ U , while the Multinacci numbers belong to U . Next we state the basic properties of U , U and V . Theorem 2.6.18 (i) The set U is closed from above, but not from below. (ii) The set U \ U is a countable dense set in U . (iii) The set U is a Cantor set. We recall that a Cantor set is a non-empty closed set having neither isolated, nor interior points. Sketch of proof (i) Already proved above. (ii) The elements of U \ U are algebraic numbers because they have a periodic expansion as follows from Theorems 2.6.3 and 2.6.16. Hence U \ U is countable. It remains to show that each fixed q ∈ U can be approximated arbitrarily closely by numbers belonging to U \ U . By Lemma 2.6.14 (i) there exist arbitrarily large positive integers m satisfying (2.31) with (αi ) := α (q). It can be shown that the corresponding periodic sequences (α1 · · · αm−1 1 α1 · · · αm−1 1 α1 · · · αm−1 0)∞ are quasi-greedy expansions for some bases rm ∈ U \ U . Since α (rm ) and α (q) start with the same block of length m, letting m → ∞ we conclude that rm → q. (iii) It follows from (ii) that U (and hence U ) has no isolated points. Since U is a null set, it does not have interior points either. Remark 2.6.19 Since U is a non-empty perfect set,12 each neighbourhood of each point of U contains uncountably many elements of U . Since there are only countably many algebraic numbers and since U \ U is countable, this implies again that the transcendental univoque bases are dense in U . Theorem 2.6.20 (i) The set V is closed. (ii) The set V \ U is a discrete set, dense in V . (iii) The four properties of Theorem 2.6.2 hold for V in place of U . Sketch of proof 12

A perfect set is a closed set with no isolated points.


M. de Vries and V. Komornik

(i) Already proved above. (ii) We already know that U is characterised by Theorem 2.6.16. Therefore Lemma 2.6.13 shows that V \ U is discrete. It remains to show that each fixed q ∈ U can be approximated arbitrarily closely by numbers belonging to V \ U . By Lemma 2.6.14 (i) there exist arbitrarily large positive integers m satisfying (2.31) with (αi ) := α (q). It may be checked that the sequences (α1 · · · αm α1 · · · αm )∞ are quasi-greedy expansions for some bases tm ∈ V \ U . Since α (tm ) and α (q) start with the same block of length m, letting m → ∞ we conclude that tm → q. (iii) It follows from (ii) and Theorem 2.6.18 (ii) that V \ U is countable. This implies the corresponding properties (i), (ii) and (iv). Since V is a closed null set, it is nowhere dense. Since U and V are closed, their complements in (1, 2] are open (we recall that 2 ∈ U ), hence unions of disjoint open intervals. In order to determine the endpoints of these components, we take a closer look at the construction of the sequence (ci ) in the proof of Lemma 2.6.14 (ii). Given q0 ∈ {1} ∪ (U \ U ), the greedy expansion (βi ) := β (q0 ) has a last one digit βm . Starting with the block s1 := β1 · · · βm , we define a sequence of blocks by the recursive formula sn+1 := sn sn + ,

n = 1, 2, . . . .

Note that the last digit of sn sn is zero for each n = 1, 2, . . .. The superscript + means that we replace the last digit of sn sn by one. It can be shown that the sequences (sn sn )∞ are quasi-greedy expansions for some bases qn ∈ V \ U . Then we have q0 < q1 < q2 < · · ·

and qn → q∗0 ∈ U ∗ .

(2.32)
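The construction is easy to carry out by machine. The following sketch is ours: it performs the doubling step that appends the conjugate of the current block and then turns the last digit into 1, and, started from s1 = 1 (the case q0 = 1), it reproduces the blocks listed in Example 2.6.21 below.

def conjugate(block):
    """Replace each digit c by 1 - c."""
    return ''.join('1' if c == '0' else '0' for c in block)

def next_block(s):
    """s_{n+1}: concatenate s_n with its conjugate and turn the last digit into 1."""
    doubled = s + conjugate(s)
    return doubled[:-1] + '1'

s = '1'                       # q_0 = 1: beta(1) = 1 0^infinity, so s_1 = 1
for n in range(1, 5):
    print('s_%d = %s' % (n, s))
    s = next_block(s)
# prints s_1 = 1, s_2 = 11, s_3 = 1101, s_4 = 11010011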

Example 2.6.21 For q0 = 1 we have β (1) = 10∞ , s1 = 1,

s2 = 11,

s3 = 11 01,

s4 = 1101 0011, . . .,

and q∗0 = min U ≈ 1.787. For example, α (q1 ) := (10)∞ , whence q1 = φ is the Golden Ratio. It turns out that the above intervals (q0 , q∗0 ) are exactly the connected components of (1, 2] \ U . The following theorem gives a precise description of the sets U ∗ , U , U and V , and clarifies an analogy with Cantor’s ternary set. Theorem 2.6.22 (de Vries and Komornik (2009)) • The set (1, 2] \ U is the union of countably many open intervals (q0 , q∗0 ) as constructed above. The left and right endpoints q0 and q∗0 run respectively over {1} ∪ (U \ U ) and U ∗ . Since U \ U and U ∗ are disjoint, even the closed intervals [q0 , q∗0 ] are disjoint.

Expansions in non-integer bases


• For each connected component (q0 , q∗0 ) of (1, 2] \ U , we have V ∩ (q0 , q∗0 ) = {q1 , q2 , . . .} with the sequence (qn ) in (2.32). Since for q ∈ V \U the expansion is not unique, we may wonder about the number of expansions. It turns out that the set of expansions is always countably infinite, and we may even give explicitly the complete list of expansions: see Komornik and Loreti (2007). Example 2.6.23 • The Golden Ratio φ belongs to V \ U . The corresponding list of expansions is given by (10)∞ , and the sequences (10)k 110∞

and (10)k 01∞ ,

k = 0, 1, . . . .

There are many infinite expansions, but only the quasi-greedy expansion is doubly infinite in the sense that its conjugate is also infinite (as defined in Section 2.5). • The Multinacci numbers φn , n = 3, 4, . . . belong to U \ U . The corresponding list of expansions for q = φn is given by (1n−1 0)∞ and the sequences (1n−1 0)k 1n 0∞ ,

k = 0, 1, . . . .

There is only one infinite expansion. We may in fact characterise U , U and V by these properties. Proposition 2.6.24 (de Vries and Komornik, 2009; Komornik, 2012) Given q ∈ (1, 2], we have the following equivalences:13 • q ∈ U ⇐⇒ x = 1 has a unique expansion in base q; • q ∈ U ⇐⇒ x = 1 has a unique infinite expansion in base q; • q ∈ V ⇐⇒ x = 1 has a unique doubly infinite expansion in base q. We end this section with two remarks. Remark 2.6.25 • If q ∈ V \ U , then α (q) is periodic, and hence β (q) is finite. It follows from some theorems of Parry (1960), Solomyak (1994) and Flatto et al. (1994) that q is an algebraic integer, and all its Galois conjugates are less than φ in modulus. • Fix a real number x and denote by U (x) the set of bases 1 < q < 2 in which x has a unique expansion. If x = 1, then the lexicographic technique cannot be applied any more. Nevertheless, L¨u et al. (2014) proved recently that if x ∈ (0, 1), then U (x) is a nowhere dense null set of Hausdorff dimension one, and that sup U (x) = 2. 13

The first one is just the definition of univoque bases; we include it here to stress the analogy.


M. de Vries and V. Komornik

2.7 Univoque sets Most results of this section are taken from de Vries and Komornik (2009). We indicate the references only for those proved elsewhere. In this section we fix a base 1 < q ≤ 2, and we set α (q) = (αi ) := a(1, q) and β (q) = (βi ) := b(1, q) for brevity. As before, we denote by a(x, q) = (ai (x, q)) and b(x, q) = (bi (x, q)) the quasigreedy and the greedy expansion of x in base q, respectively. We are going to investigate unique expansions of real numbers x in base q with respect to the alphabet {0, 1}. Thus, as before, an expansion or a sequence is always a sequence of zeros and ones. We denote by Uq the set of real numbers x having a unique expansion, and by Uq the set of the corresponding expansions. We will discover many analogies between Uq and the univoque set U of the preceding section, but also many differences. It will turn out that the situation depends critically on the choice of the base q, and that the spaces U , U , V of the preceding section play a crucial role in the description of the precise results. We recall that U2 is the set of all real numbers x ∈ [0, 1], except the dyadic rational numbers 0 < x < 1 of the form m/2n for some positive integers m, n. Hence U2 is not closed. The following surprising result shows that the generic case is the opposite: Uq is closed for almost every q. Theorem 2.7.1 The following equivalence holds: Uq

is closed

⇐⇒

q∈ /U.

This theorem follows from a more general description of Uq that we give now. As in the preceding section, our main tool is a lexicographic characterisation of Uq .14 In order to formulate our results, it is convenient to introduce, by analogy with the set V of the preceding section, the set   Vq := x ∈ Jq | an+1 (x, q)an+2 (x, q) · · · ≤ α1 α2 · · · whenever an (x, q) = 1 . We recall that V is closed, and U ⊆ U ⊆ V . Similarly, we have the following result. Proposition 2.7.2

The sets Vq are closed. The inclusions Uq ⊆ Uq ⊆ Vq hold true.

Remark 2.7.3 Let us mention the following equivalences: q ∈ U ⇐⇒ 1 ∈ Uq ; q ∈ U ⇐⇒ 1 ∈ Uq ; q ∈ V ⇐⇒ 1 ∈ Vq . The first and third properties follow from the definitions, while the second one was proved in (Komornik, 2012). 14

A lexicographic characterisation of Uq has already been given in Theorem 2.5.7.

Expansions in non-integer bases


While the inclusions U ⊂ U ⊂ V are strict, it turns out that the two inclusions Uq ⊆ Uq ⊆ Vq are never strict at the same time. Surprisingly, the complete picture depends on whether q belongs to U , V or not. Theorem 2.7.4 Let q ∈ (1, 2]. • q∈U • q ∈ V \U • q∈ /V

=⇒ =⇒ =⇒

Uq  Uq = Vq ; Uq = Uq  Vq ; Uq = Uq = Vq .

Since V is a null set, the last one is the generic case. We recall that • U \ U is dense in U , hence U \ U has no isolated points; • V \ U is dense in V , and all points of V \ U are isolated. By analogy we may expect that • if Uq  Uq , then Uq \ Uq is dense in Uq , hence Uq \ Uq has no isolated points; • if Uq  Vq , then Vq \ Uq is dense in Vq , and all points of Vq \ Uq are isolated. This is indeed the case. In view of Theorem 2.7.4 the following proposition is a reformulation of these properties. Proposition 2.7.5 Let q ∈ V , so that Uq  Vq . One has • Vq \ Uq is dense in Vq ; =⇒ Vq \ Uq has no isolated points; • q∈U • q ∈ V \ U =⇒ all points of Vq \ Uq are isolated. Since U is a Cantor set, we may wonder whether Uq is also a Cantor set for all q. This is not true for q = 2 because U2 = [0, 1] has interior points. For q ∈ (1, 2) the Cantor property is rather subtle. Proposition 2.7.6

Let q0 ∈ U \ U and q1 = min {q ∈ V | q > q0 }.

• If q ∈ (q0 , q1 ], then Uq is a Cantor set. • If q ∈ U \ {2}, then Uq is a Cantor set. • Otherwise neither Uq , nor Uq is a Cantor set. Let us denote by Uq and Vq the sets of quasi-greedy expansions of the numbers in Uq and Vq , respectively. Many results of this section are based on the careful study of these sets of sequences. It follows from Corollary 2.5.2 and Theorem 2.5.7 that p < q =⇒ U p ⊆ Uq . It is natural to ask whether these inclusions are strict or not. An interval I ⊆ (1, 2] is called a stability interval if U p = Uq for all q, s ∈ I. Example 2.7.7 We recall from Glendinning and Sidorov (2001) that15 15

The number q∗ is defined in Theorem 2.6.5.


M. de Vries and V. Komornik

• 1 < q ≤ φ =⇒ Uq has only two elements: 0∞ and 1∞ ; • φ < q < q∗ =⇒ Uq has ℵ0 elements; • q∗ ≤ q ≤ 2 =⇒ Uq has 2ℵ0 elements. Hence (1, φ ] is a maximal stability interval. The following result completes an investigation started by Dar´oczy and K´atai (1993, 1995). Theorem 2.7.8 The maximal stability intervals are given by the singletons {q} where q ∈ U , and the intervals (r, s] where (r, s) is a connected component of (1, 2] \ V . Moreover, if r = 1, then Uq = Vr for each q ∈ (r, s]. The next result, due to de Vries (2009), shows that Uq changes the most when we cross a univoque base. It also provides new characterisations of the set U . Theorem 2.7.9 Given a base q ∈ (1, 2], we have the following equivalences: • q ∈ U ⇐⇒ Ur \ Uq is uncountable whenever q < r ≤ 2; • q ∈ U ⇐⇒ Uq \ Ur is uncountable whenever 1 < r < q; • q ∈ U ⇐⇒ Uq \ ∪r∈(1,q) Ur is uncountable. We recall that a set X ⊆ {0, 1}∞ is called a subshift if there exists a set F = k ∞ F (X ) ⊆ ∪∞ k=1 {0, 1} such that a sequence (ci ) ∈ {0, 1} belongs to X if, and only if, none of the blocks ci+1 · · · ci+ j (i ≥ 0, j ≥ 1) belongs to F . In this case we write X = X (F ). A subshift X is called a subshift of finite type if one can choose the set F to be finite. Theorem 2.7.10 Given a base q ∈ (1, 2], the following statements are equivalent: (i) q ∈ /U; (ii) Uq is a subshift of finite type of {0, 1}∞ ; (iii) Uq is a closed subset of {0, 1}∞ . For the proof of Theorem 2.7.10 we need the following preliminary lemma. Lemma 2.7.11 Fix q ∈ (1, 2] \ U and let k = k(q) be the largest positive integer such that the quasi-greedy expansion (αi ) of 1 in base q satisfies

αi+1 · · · αk < α1 · · · αk−i

whenever 0 ≤ i < k.

(2.33)

Then (αi ) ≤ (α1 · · · αk α1 · · · αk )∞ . Proof Note that (2.33) holds true for k = 1. If there were infinitely many k satisfying (2.33), then we would have q ∈ U by Theorem 2.6.16. Hence the integer k(q) is well-defined. Suppose we had (αi ) > (α1 · · · αk α1 · · · αk )∞ .

Expansions in non-integer bases

53

By (2.14) we must have

α1 · · · α2k > α1 · · · αk α1 · · · αk ; hence there exists a smallest j with k < j ≤ 2k such that

αk+1 · · · α j > α1 · · · α j−k . This implies that for each r with k ≤ r < j, we have

αr+1 · · · α j > αr−k+1 · · · α j−k .

(2.34)

Since k is the largest positive integer satisfying (2.33), there exists a number r with 0 ≤ r < j such that

αr+1 · · · α j ≥ α1 · · · α j−r . Note that r ≥ k; otherwise it would follow from (2.33) that

αr+1 · · · α j = αr+1 · · · αk αk+1 · · · α j < α1 · · · αk−r 0 j−k ≤ α1 · · · α j−r . Using (2.14) again we may conclude that

αr+1 · · · α j ≤ α1 · · · α j−r ≤ αr−k+1 · · · α j−k , which contradicts (2.34). Proof of Theorem 2.7.10 (i)⇒(ii) Suppose that q ∈ / U and let k be the largest positive integer satisfying (2.33) with (αi ) := α (q). Set   Aq := 0a1 · · · ak ∈ {0, 1}k+1 | a1 · · · ak ≥ α1 · · · αk , and

  Bq := 1a1 · · · ak ∈ {0, 1}k+1 | a1 · · · ak ≥ α1 · · · αk .

Finally, let Fq := Aq ∪ Bq . We claim that X (Fq ) = Uq . It follows at once from Theorem 2.5.7 that X (Fq ) ⊆ Uq . Now suppose that X(Fq ) ⊂ Uq and let (ci ) be a sequence belonging to Uq \ X(Fq ). Since the conjugate sequence (ci ) also belongs to Uq \ X (Fq ), we may assume that (ci ) contains a block in Aq , i.e., there exists an index j with c j = 0 and c j+1 · · · c j+k ≥ α1 · · · αk . By (2.20) we have c j+1 · · · c j+k = α1 · · · αk , and c j+k+1 c j+k+2 · · · < αk+1 αk+2 · · · ;

54

M. de Vries and V. Komornik

hence by Lemma 2.7.11 we have c j+k+1 c j+k+2 · · · < (α1 · · · αk α1 · · · αk )∞ . Since c j+k = αk = 1, we conclude from (2.21) and Lemma 2.7.11 that c j+k+1 c j+k+2 · · · > α1 α2 · · · ≥ (α1 · · · αk α1 · · · αk )∞ , which yields a contradiction. (ii)⇒(iii) Note that any subshift of {0, 1}∞ is closed. (iii)⇒(i) We prove the contraposition. Fix q ∈ U . According to Lemma 2.6.14, there exist arbitrarily large positive integers m such that

αk+1 · · · αm < α1 · · · αm−k

whenever 0 ≤ k < m.

(2.35)

This implies in particular that αm = 1. From Lemmas 2.6.12 and 2.6.13 we may conclude that

αm+1 · · · α2m < α1 · · · αm , or, equivalently,

α1 · · · αm < αm+1 · · · α2m .

(2.36)

A straightforward application of the inequalities (2.14), (2.35) and (2.36) yields that the sequence ∞ (cm i ) := 0(α1 · · · αm α1 · · · αm )

belongs to Uq . If m → ∞ along integers m satisfying (2.35), then (cm i ) converges coordinate-wise to the sequence 0α1 α2 · · · which is an expansion of q−1 . Since the sequence 10∞ is another expansion of q−1 , the set Uq is not closed. Let us now consider the Hausdorff dimension of the sets Uq . It has been determined for large classes of bases in (Dar´oczy and K´atai, 1995; Kall´os, 1999, 2001; K´atai and Kall´os, 2001; Glendinning and Sidorov, 2001; de Vries and Komornik, 2011) and (Baatz and Komornik, 2011). In particular, it was shown in (Glendinning and Sidorov, 2001) that dimH Uq < 1 for all 1 < q < 2 and dimH Uq > 0 if, and only if, q ∈ (q∗ , 2]. Recently Kong and Li (2014) improved all earlier results by establishing an explicit formula for dimH Uq , valid for almost every q ∈ (1, 2]. More precisely, they showed that h(Uq1 ) dimH Uq = log q in each connected component (q0 , q∗0 ) of (1, 2] \ U , where q1 is the smallest element of V ∩ (q0 , q∗0 ) and h(Uq1 ) denotes the topological entropy of the subshift Uq1 . Subsequently, based on the works de Vries (2009), and de Vries and Komornik (2009, 2011), Komornik et al. (2014) proved the following result. Theorem 2.7.12 The dimension function q → dimH Uq is continuous on (1, 2], and its derivative is strictly negative for almost every q ∈ (q∗ , 2].

Expansions in non-integer bases

55

Remark 2.7.13 Since dimH Uq → 1 as q → 2, the dimension function is a natural example of a ‘Devil’s staircase’. Moreover, contrary to the Cantor–Lebesgue–Vitali function, we have even a strictly negative derivative almost everywhere. If x ∈ Vq \ Uq (this can only happen if q ∈ V ), then x has more than one expansion. It turns out that for q ∈ U there are two expansions, while for q ∈ V \ U the set of expansions is countably infinite. We refer to de Vries and Komornik (2009) for the complete list of expansions in each case. Finally, the following variant of Proposition 2.6.24 was obtained in (Komornik, 2012). Proposition 2.7.14 Fix a base q ∈ (1, 2). One has • x ∈ Uq • x ∈ Uq • x ∈ Vq

⇐⇒ ⇐⇒ ⇐⇒

x has a unique expansion; x or 1/(q − 1) − x (or both) has a unique infinite expansion; x has a unique doubly infinite expansion.

2.8 A two-dimensional univoque set The results of this section are taken from de Vries and Komornik (2011). Let J be the set of all pairs (x, q) ∈ R2 such that x has an expansion in base q for some q ∈ (1, 2], i.e.,   J := (x, q) ∈ R2 | x ∈ Jq and q ∈ (1, 2] . We consider the set J as a subspace of R2 , i.e., we equip the set J with the topology inherited from the Euclidean space R2 . In the preceding two sections we have investigated unique expansions with either x or q fixed. The corresponding sets U and Uq are vertical and horizontal sections of the two-dimensional set   U := (x, q) ∈ J | x ∈ Uq . Motivated by the results in the one-dimensional case, it is natural to introduce the set   V := (x, q) ∈ J | x ∈ Vq , and to study the relationship between U, its closure U and V. The following result answers this question and clarifies the structure of these sets. Theorem 2.8.1 • • • •

We have U  U = V. U is a Cantor set. U is a two dimensional (Lebesgue) null set. U has Hausdorff dimension two.

56

M. de Vries and V. Komornik

2.9 Final remarks (1) Erd˝os and Jo´o (1992) have shown that for each positive integer N and for N = ℵ0 there exist 2ℵ0 bases q ∈ (1, 2] for which x = 1 has exactly N expansions in base q. Various results of this type have been obtained in (Erd˝os et al., 1994; Sidorov, 2009; Komornik and Loreti, 1999; Baker and Sidorov, 2013). (2) More general regular digit sets. For simplicity we have investigated only the case of two-letter alphabets in this paper. However, almost all results remain valid with small modifications if we consider an arbitrary positive integer m and we investigate expansions in bases q ∈ (1, m + 1] over the alphabet {0, 1, . . . , m}. See, e.g., de Vries (2008, 2009), de Vries and Komornik (2009, 2011), Komornik and Loreti (1999, 2007), Komornik et al. (2014), Kong and Li (2014). (3) General ternary alphabets. For general finite alphabets few results are available, see, e.g., Pedicini (2005). We have seen that the Golden Ratio plays a special role for two-letter alphabets: while in bases q ∈ (1, φ ] we have only two trivial unique expansions, in bases q ∈ (φ , 2] there exist non-trivial unique expansions as well. This was generalised for all ternary alphabets in (Komornik et al., 2011) and Lai (2011). Since an affine transformation of the alphabet does not change the nature of the expansions, we may restrict ourselves to the ternary alphabets {0, 1, m} with m ≥ 2. The following result has been proved in (Komornik et al., 2011). Theorem 2.9.1 satisfying

There exists a continuous function p : [2, ∞) → R, m → pm " 2 ≤ pm ≤ Pm := 1 +

m m−1

for all m such that the following properties hold true: • for each m ≥ 2, there exist non-trivial unique expansions if q > pm and there are no such expansions if q < pm ; • we have pm = 2 if, and only if, m = 2k for some positive integer k; • the set C := {m ≥ 2 | pm = Pm } is a Cantor set, i.e., a non-empty closed set having neither interior nor isolated points; its smallest element is 1 + x ≈ 2.3247 where x is the smallest Pisot number, i.e., the positive root of the equation x3 = x + 1; • each connected component (md , Md ) of [2, ∞) \ C has a point μd such that p is decreasing on [md , μd ] and increasing on [μd , Md ]. (4) Spectra. It was shown in (Erd˝os et al., 1990, 1998; Erd˝os and Komornik, 1998) that there is an intimate relationship between universal expansions in a base q ∈ (1, 2] and the spectrum of q defined by the formula ! Y (q) =

n

∑ si qi | si ∈ {−1, 0, 1} , n = 0, 1, . . .

i=0

.

57

Expansions in non-integer bases

The following theorem is a combination of some results of Drobot (1973), Drobot and McDonald (1980), Akiyama and Komornik (2013) and Feng (2011). Theorem 2.9.2 The following dichotomy holds: either Y (q) is uniformly discrete, or it is dense in R. This answered an old open problem in (Erd˝os et al., 1990). We may also introduce generalised spectra by the formula Ym (q) =

n

∑ si q

i

!

| si ∈ {−m, . . . , 0, . . . , m} , n = 0, 1, . . .

i=0

for each positive integer m. It follows from a lemma of Garsia (1962) that each Ym (q) is uniformly discrete if q is a Pisot number. Concerning the behaviour of the sequence m (q) := inf {x ∈ Ym (q) | x > 0} we cite the following result from Komornik et al. (2000). Theorem 2.9.3 Let q = φ be the Golden Ratio. If m is a positive integer, then m (φ ) = |Fk φ − Fk+1 | , where k is the smallest integer satisfying qk−1 ≥ m, and F0 = 0,

F1 = 1,

F2 = 1,

F3 = 2, . . .

is the Fibonacci sequence. This result has been extended to other quadratic Pisot numbers by Borwein and Hare (2003) and Komatsu (2002). For more details on spectra we refer to the review paper Komornik (2011) and its references.

2.10 Exercises Exercise 2.10.1 Consider a quasi-greedy expansion (ai ) = a(x, q). (1) Prove that if am = 0 for some m, then the inequalities (2.11) hold for all n ≥ m. (2) Prove that for x ∈ [0, 1] the inequalities (2.11) hold for all n ≥ 0. Exercise 2.10.2 Consider a greedy expansion (bi ) = b(x, q). (1) Prove that if bm = 0 for some m, then the inequalities (2.15) hold for all n ≥ m. (2) Prove that for x ∈ [0, 1) the inequalities (2.15) hold for all n ≥ 0. Exercise 2.10.3 Prove Theorem 2.5.4 and Corollary 2.5.5. Exercise 2.10.4 Consider a greedy expansion (βi ) = β (q) of 1 in base q. Show that the inequalities (2.16) hold for all n ≥ 1, except for q = 2.

58

M. de Vries and V. Komornik

Exercise 2.10.5 2.6.14.

Fill in the missing details in the proof of Lemmas 2.6.13 and

Exercise 2.10.6 Prove the following results. (1) If xn x and qn q, then a(xn , q) → a(x, q). (2) If xn ! x and qn ! q, then b(xn , q) → b(x, q). Exercise 2.10.7 Prove the following results. (1) If q ∈ V , then the inequalities (2.27) are satisfied for all n ≥ 0. (2) If q ∈ U , then the inequalities (2.30) are satisfied for all n ≥ 0. Exercise 2.10.8 Prove the following results. (1) If q ∈ U , then Vq \ Uq is countably infinite, and the greedy expansion of each number x ∈ Vq \ Uq is finite or ends with α1 α2 · · ·. (2) If q ∈ V \ U , then Vq \ Uq is countably infinite, and the greedy expansion of each number x ∈ Vq \ Uq is finite. Exercise 2.10.9 (1) Consider the sequence (qi )i≥0 of (2.32) with q0 = 1. Use Exercise 2.10.8 and Theorem 2.7.8 to prove the the first and second assertions of Example 2.7.7. (2) Use Exercise 2.10.8, Theorem 2.7.4 and Proposition 2.7.5 for q = q∗ to prove the third assertion of Example 2.7.7. Exercise 2.10.10 Fix q ∈ (1, 2] \ U and set (αi ) := α (q). Then q ∈ (r, s], where (r, s) is a connected component of (1, 2] \ V . Prove that

α (r) = (α1 · · · αk−1 0)∞

and α (s) = (α1 · · · αk α1 · · · αk )∞ ,

where k is the largest positive integer satisfying (2.33).

3 Medieties, end-first algorithms, and the case of Rosen continued fractions Benoˆıt Rittaud

3.1 Introduction The numeration system in base 2 derives from the binary operation that maps the pair of numbers (r, s) to the value (r + s)/2, allowing us to split any interval into two subintervals. Iterating this dichotomy process, we get a coding of any element of I = [0, 1) by an infinite word over the alphabet {0, 1}, the letter 0 (respectively, 1) corresponding to the choice of the left (respectively, right) subinterval. The use of alternative binary operations provides other ways to code a number by an infinite word over the alphabet {0, 1}. Consider an initial interval I ⊆ [−∞, +∞] and a binary operation, here written generically ⊕ and called a mediety, that allows to split I into two smaller intervals, then allows to split these two smaller intervals into two even smaller ones, etc. Assuming that any infinite sequence of such successive intervals decreases to a single number, we get a coding of any element of I by an infinite word over the alphabet {0, 1}. The denomination of mediety goes back to Pythagoras’ school (around 500 BC), to which the definition of several different medieties is attributed, the most famous ones being the arithmetic, the geometric and the harmonic ones, which correspond √ respectively to the expressions r ⊕ s = (r + s)/2, r ⊕ s = rs and r ⊕ s = 2/(r−1 + s−1 ). For the Pythagoreans, the main interest of medieties was the elaboration of a musical scale. In modern language, the basic idea, attributed to Pythagoras himself, consists in building a set of notes with simple ratio frequencies. Choose a first note (for example, C4) and normalise its frequency to the value 1. The note in the upper octave (C5, in our example) has the frequency 2. The harmonic and arithmetic means of 1 and 2, respectively equal to 4/3 and 3/2, define harmonious musical intervals. In the example, 4/3 is the frequency corresponding to F4 and 3/2 to G4; C4–F4 is a perfect fourth, and C4–G4 a perfect fifth. Since the most important concern for the Pythagoreans was the musical scale, they had no reason to be interested in coding of numbers. Indeed, it would have been meaningless to iterate the mediety to get infinitely many notes between two given notes – not even to mention that the Combinatorics, Words and Symbolic Dynamics, ed. Val´erie Berth´e and Michel Rigo. Published by c Cambridge University Press. Cambridge University Press 2016.

60

B. Rittaud

Pythagoreans dealt only with integers and did not consider anything corresponding to our modern fractions. Hence, as far as we know, the Pythagoreans did not consider coding of numbers, even if one of their medieties can be quite straightforwardly related to the numeration system in the base defined by the Fibonacci sequence. The Pythagoreans considered no less than ten different medieties, of various usefulness. For us, a criterion of interest for a given mediety is the possibility of deducing an interesting expression from the coding of numbers it defines. For example, if the word w1 w2 · · · wn · · · over the alphabet {0, 1} is the coding of x ∈ I := [0, 1] given by the arithmetic mediety, then we have x = ∑n wn /2n . Now, replace the arithmetic mediety by the mediant, defined on I := [0, +∞] = [0/1, 1/0] by (p/q) ⊕ (p/q ) = (p + p )/(q + q) for any pair of fractions p/q and p /q such that pq − p q = −1. Such a mediety (without the restriction on pq − q p) appeared first in N. Chuquet’s Triparty en la science des nombres (Chuquet, 1881) written in 1484. Chuquet states that he invented this mediety and that it was intended to provide a way, two rational numbers being given, to get an intermediate one. The loss of Chuquet’s book, rediscovered only in the nineteenth century, explains for a part why Chuquet is not credited to the idea, eventually proposed independently by the mathematician M. Stern (Stern, 1858) and by the watchmaker A. Brocot (Brocot, 1861). Stern was looking for a construction of the set of real numbers, Brocot a way to find best approximations of real numbers (in practice, these numbers were astronomical ratios) by rational numbers – continued fractions were too raw for a tool for this purpose since Brocot was concerned with ‘simple’ approximations, that is, p/q with p and q not too big and with only small divisors. From the coding of numbers defined by the mediant (starting from the interval I := [0/1, 1/0] = [0, +∞]), we get the coding of real numbers provided by continued fractions. Indeed, write the coding of x in its multiplicative form, 1a0 0a1 1a2 0a3 · · · , where a0 ≥ 0 and an > 0 for any n > 0. We then have x = a0 +

1 a1 +

1 a2 +

=: [a0 , a1 , a2 , a3 , . . .]1 ,

1 a3 + · · ·

that is, Stern–Brocot’s mediety leads to standard continued fraction expansion. Classical references for the mediant and/or the theory of continued fractions are (Perron, 1913), (Hardy and Wright, 1985), (Khinchin, 1964), (Rademacher, 1964) and (Hensley, 2006). For historical aspects, see (Brezinski, 1991). Several notions linked to continued fractions can be ‘exported’ to any mediety, as the Golden Ratio, the Hurwitz constant (Section 3.2.4) or the Gauss map (Exercise 3.7.9). Classical works on algorithms of expansions of real numbers obtained by the iteration of a given function are (Bissinger, 1944), (Everett, 1946), (R´enyi, 1957) and (Parry, 1964). Conversely, an analytic expression of numbers being given, we may ask for a corresponding mediety. Here, we will consider the example of Engel–Sierpi´nski expansion of numbers, that allows us to write any x > 0 in the form

61

Rosen continued fractions

x=

1 1 1 + + + ··· = q1 q1 q2 q1 q2 q3

1+

1+

1 + ··· q3 q2

q1

,

where (qn )n>0 is an non-decreasing sequence of integers (see Section 3.3.3). References on these are (Borel, 1948), (Perron, 1960). The most common name attached to this kind of expansion is the one of F. Engel who wrote on it in 1913 (Engel, 1913). Nevertheless, as coined in particular by (Kraaikamp and Wu, 2004), W. Sierpi´nski already investigated this kind of expansion before in (Sierpi´nski, 1911). Also, the latter mentions in his paper that expansions of this kind already appeared in a work by J. Puzyna (Puzyna, 1898, p. 47). We may also look for medieties that split an interval into k subintervals, hence defining a coding of numbers by words over the alphabet [[0, k − 1]]. Once again, such an idea of k-mediety goes back to Antiquity. Only implicit in the Babylonian sexagesimal numeration system, it appears in an explicit form with Hippocrates of Chios (fifth and fourth centuries BC), who considered the two mean proportionals, which generalises the geometric mediety to a 3-mediety that splits any given interval [r, s] in three subintervals bounded by r ⊕1 s := r2/3 s1/3 and r ⊕2 s := r1/3 s2/3 . The question he had in mind was that of the duplication √ of the cube, the lowest of the two mean proportionals of 1 and 2 being the value 3 2, which is the factor to apply to the edge of a cube to get a cube two times bigger in volume. Back to musical considerations, in 1691, to address the question of the equal division of the octave, A. Werckmeister defined the 12-geometric mediety, that splits an interval [r, s] into subintervals limited by the values r ⊕i s = r(12−i)/12 si/12 for 0 ≤ i ≤ 12. Each of them defines a frequency, and the ratio between two consecutive ones is constant. As for the mediant, a natural generalisation to k-medieties is linked to λ -continued fractions, that is, continued fractions of the form x = a0 λ +

1 a1 λ +

1 a2 λ + · · ·

,

where an ∈ Z\{0} and λ = λk+1 := 2 cos(π /(k + 1)). For such values of λ (and some additional constraints on the sequence (an )n ), these are Rosen continued fractions. D. Rosen introduced these in 1954 (Rosen, 1954) in the context of hyperbolic geometry: the values λk are the only ones in the interval [1, 2] for which the group of transformations of the hyperbolic upper-plane H generated by z → −1/z and z → z − λ is discrete, hence defines a fundamental surface (it is the usual modular surface for λ = λ3 = 1) on which usual geometrical questions can be investigated, like coding of geodesics. Rosen continued fractions have been studied in their dynamical, metrical and geometrical aspects noticeably in (Burton et al., 2000), (Dajani et al., 2009) and (Arnoux and Schmidt, 2009, 2014). Generalisations of results on standard contin-

62

B. Rittaud

ued fractions, as approximation, Hurwitz-like theorem or Liouville-like construction of transcendental numbers are to be found in particular in (Haas and Series, 1986), (Kraaikamp et al., 2007, 2010) and (Bugeaud et al., 2013). Here, we will primarily focus on simple combinatorial aspects, briefly mentioned in (Rittaud, 2008); the notion of mediant applied to Rosen continued fractions was investigated in a much more formal and deeper way in (Kraaikamp et al., 2009). Let F be the set of the bounds of subintervals defined by the mediety. Its elements are the only ones that admit two codings, the first one ending with 0ω and the second one by (k − 1)ω (as in the classical equality 0.9999 · · · = 1.0000 · · · in the usual decimal numeration system). Finding a coding of an element of F is easily done by the dichotomy algorithm defined by the mediety, that provides each letter in the natural order. An end-first algorithm is an alternative way to get a coding of an element of F that provides the end of the coding first. For the mediant as well as its generalisation to the k-Rosen mediety, end-first algorithms are linked to random Fibonacci sequences, which are sequences defined by f0 = f1 = 1 and fn = |λk fn−1 ± fn−2 | for any n ≥ 2, where the ± sign is randomly chosen for each n (Viswanath, 2000), (Makover and McGowan, 2006), (Rittaud, 2007), (Janvresse et al., 2008, 2009, 2010). End-first algorithms are conveniently represented by binary trees with a specific structure given by a substitution (see Section 1.3).

3.2 Generalities 3.2.1 Definition and representation Definition 3.2.1 Let I := [α , ω ] be a closed interval of [−∞, +∞]. A mediety on I is a binary operation ⊕ defined on the smallest possible set M ⊂ I2 that satisfies the following. • The pair (α , ω ) is in M. • If (r, s) ∈ M, then r < r ⊕ s < s and (r, r ⊕ s), (r ⊕ s, s) ∈ M. • If (In )n≥0 is a decreasing sequence of intervals with bounds in M, then the set ∩n In is reduced to a single element. We let F denote the set of elements r ∈ I such that either (r, s) ∈ M or (s, r) ∈ M for some s. We also put F˚ := F\{α , ω }. Remark 3.2.2 In the sequel, whenever we write something like r ⊕ s, it will be implicitly assumed that this expression makes sense, that is, (r, s) ∈ M (even if a lot of medieties derive from formulas that could be applied to any pair (r, s) ∈ I2 ). Also, since the set M defines a set of subintervals of I, we may consider its elements themselves as intervals instead of pairs of numbers. The following easy result is a way to ensure that a binary operation is a mediety.

63

Rosen continued fractions

Theorem 3.2.3 Let ⊕ be a binary operation satisfying the two first axiom of medieties on some I. It is indeed a mediety if, and only if, the set F is dense in I. Remark 3.2.4 The notion of mediety provides a class of particular cases of fibred systems. As defined in (Schweiger, 1995), a fibred system is a pair (T, B) where T is a transformation on the set B, where B = ∪ j∈J B j is a partition of B with J finite or denumerable such that T|B j is injective for any j. (Keller (1989) proposed the same definition, but restricted to a topological context.) Take for B an interval I = [α , ω ] and assume that the partition is made of exactly two intervals, B0 and B1 , with common limit point β . Also, assume that T|B j is monotonic and bijective on B. Therefore, each B j can be split in two subintervals, B j0 and B j1 (of common limit point β j ) such that TB j j : B j j → B j is bijective. Iterating the operation defines intervals B j1 ··· jk that can be split in two subintervals B j1 ··· jk 0 and B j1 ··· jk 1 with common boundary point β j1 ··· jk . The mediety ⊕ that corresponds to T is defined by ⊕(B j1 ···ik ) = β j1 ···ik . The set M can be seen as a full binary tree (see Figure 3.1) with root I and in which the left child (respectively, right child) of an interval I is [min(I), ⊕(I)] (respectively, [⊕(I), max(I)]).

α,ω]

[

α,αω]

[

α,α(αω)]

[

α,α(α(αω))]

[

[

α αω),αω]

αω,(αω)ω]

[ (

α α αω)),α(αω)] [α(αω),(α(αω))(αω)]

[ ( (

α αω))(αω),αω]

[( (

αω,ω]

αω)ω,ω]

[

αω,(αω)((αω)ω)]

[

αω)((αω)ω),(αω)ω]

[(

[(

[(

αω)ω,((αω)ω)ω]

[((

αω)ω)ω,ω]

Figure 3.1 The general tree of intervals of M. For brevity, r ⊕ s is written rs.

To avoid cumbrous notation, we may replace each interval I of a node by ⊕(I) and get a tree which contains the same information (see Figure 3.2), since, given some ˚ there is only one interval I such that ⊕(I) = r. For some properties of this r ∈ F, tree and its links with Christoffel words and the Stern–Brocot mediant, see Exercise 3.7.1, and also (Berstel et al., 2009, chapters 3.2 and 7.3.), (Pytheas Fogg, 2002, chapter 6), (Rauzy, 1984), (Borel and Laubie, 1993). Another representation of M is the edges of the smallest diagram (see Figure 3.3) such that:

64

B. Rittaud αω

α(αω)

α(α(αω))

α(α(α(αω)))

(

α αω))(αω)

( (

(

αω)((αω)ω)

αω)ω

αω)ω)ω

((

α α αω)))(α(αω)) (α(αω))((α(αω))(αω)) ((α(αω))(αω))(αω) (αω)((αω)((αω)ω)) ((αω)((αω)ω))((αω)ω) ((αω)ω)(((αω)ω)ω)

( ( (

((((

αω)ω))ω)ω

Figure 3.2 The tree of Figure 3.1 where each interval I is replaced by ⊕(I).

• it contains an edge of vertices α and ω , with level(α ) = 0 and level(ω ) = −1, • for each edge (r, s) (with r < s) there exists the edges (r,t) and (s,t), where t is the vertex r ⊕ s, of level 1 + max(level(r), level(s)).

Figure 3.3 The general diagram for a mediety (here, α and ω are put on the same row to get a figure that is more symmetric).

The nodes of the diagram define the set F. We let FN denote the set of elements of F with level at most N.

3.2.2 Coding Definition 3.2.5 Let Iε := I, where ε is the empty word. For any finite word w on the alphabet {0, 1} for which Iw = [r, s] is already defined, we put Iw0 := [r, r ⊕ s] and Iw1 := [r ⊕ s, s]. A coding of a number x ∈ I by the mediety ⊕ is an infinite word w over the alphabet {0, 1} such that x ∈ Iw for any finite prefix w of w.

65

Rosen continued fractions

Here are some immediate properties derived from this definition (see also Figure 3.4). Property 3.2.6 For any finite word w, the interval Iw is made of all elements of I which admit a coding with w as a prefix. Property 3.2.7 We have the equality M = {Iw | w ∈ {0, 1}∗}. Property 3.2.8 For any N ≥ 0, the set FN is made of the bounds of all intervals Iw such that |w| ≤ N. Property 3.2.9 For any n ≥ 0, we have

#

|w|=n Iw

= I.

Property 3.2.10 Let n > 0. The set {0, 1}n being equipped with the lexicographical order, the function from {0, 1}n to M defined by w → Iw is increasing. Moreover, if w is the successor of w in {0, 1}n , then Iw ∩ Iw = {max(Iw )} = {min(Iw )}. I

α

ω I1

I0

α

ω

α ⊕ω I00

α

I01

α ⊕ (α ⊕ ω )

I10

α ⊕ω

I11 (α ⊕ ω ) ⊕ ω

ω

Figure 3.4 The first partitions of I by the intervals of M.

Theorem 3.2.11 Any infinite word is the coding of some element of I. Different elements of I cannot have a common coding. Any x ∈ I\F˚ admits only one coding, any x ∈ F˚ admits exactly two, of the form w01ω and w10ω where w is a finite word. Theorem 3.2.11 allows us to define, for any infinite word w, the unique value ˚ v(w) ∈ I which admits w as a coding. The function v is invertible in v−1 (I\F), −1  ω  ω ˚ whereas v (x) has two elements for all x ∈ F, of the form w 10 and w 01 . Let ≡ be the equivalence relation on infinite words defined by w01ω ≡ w10ω for any finite word w. We can then define the coding function c : I → {0, 1}N / ≡ by c := v−1 . The three following properties are immediate. Property 3.2.12 c is increasing.

The set {0, 1}N/ ≡ being lexicographically ordered, the function

66

B. Rittaud

Property 3.2.13 For any finite word w, c(Iw ) is the set of (classes of) infinite words that admits w as a prefix (for at least one of its representatives). Property 3.2.14 For any Iw ∈ M, we have c(⊕(Iw )) = {w10ω , w01ω }. Theorem 3.2.15 Let μ be the Lebesgue measure. For any bounded subinterval I of I\{±∞}, we have limn (sup|w|=n (μ (I ∩ Iw ))) = 0. Proof Let ε > 0 and let (wn )n be a sequence of words of increasing length such that μ (I ∩ Iwn ) > ε for any n. By compactness of the adherence I  of I, we extract a subsequence, again written (wn )n , such that the lower bound of I  ∩ Iwn converges to x ∈ I  . Since μ (I ∩ Iwn ) > ε , for all n ≥ n0 we have [x, x + ε ] ⊆ I  ∩ Iwn . Intervals of M that are not included one into the other are of null intersection, so (Iwn )n is nonincreasing, and [x, x + ε ] ⊆ ∩n>n0 Iwn contradicts the last axiom of medieties.

3.2.3 Conjugacy, multiplicative form Definition 3.2.16 We put 0 := 1 and 1 := 0. For any word w = w1 w2 · · · finite or infinite, w := w1 w2 · · · is the conjugate of w. For x ∈ I, the conjugate of x is the value x := v(c(x)). Theorem 3.2.17 The function x → x defined on I is a continuous and decreasing involution. For any (r, s) ∈ M, we have (s, r) ∈ M and r ⊕ s = s ⊕ r. Definition 3.2.18 The shift on infinite words is defined by S(w1 w2 w3 · · · ) = w2 w3 · · · . Definition 3.2.19 Let w be a non-empty word, finite or infinite. The multiplicative form of w is the unique sequence (a j ) j∈J in N ∪ {∞} for which w = 1a0 0a1 1a2 · · · , where J is finite or infinite and a j > 0 for any j > 0. The empty sequence as well as the sequence reduced to a0 = 0 are conventionally both multiplicative forms of ε . For any finite word w of multiplicative form (a j )0≤ j0 for i > 0. Alternative continued fraction expansions are obtained by asking for the remainder to lie in [(α − 1)b, α b) (or, almost equivalently, ((α − 1)b, α b]) where α ∈ [0, 1] is a fixed real number. These are α -continued fractions, with partial quotients in Z\{0}. We will consider them in Section 3.5.3. Consider now the following additive form of the Euclidean algorithm. • Initialisation: a := p; b := q; w := ε . • Step 1: if a ≥ b, then put (a, b) := (a − b, b) and w := w1; otherwise, put (a, b) := (a, b − a) and w := w0.

Rosen continued fractions

71

• Step 2: if a > 0, then go back to Step 1; otherwise, return w. In rough terms, a and b being given with a > b, instead of subtracting from a the biggest possible multiple of b, we subtract only the value b. The word w returned by the algorithm satisfies that c(p/q) = w0ω . Remark 3.3.5 Therefore, we have that F = Q≥0 so, by Theorem 3.2.3, this provides an alternative proof of the fact that the Stern–Brocot binary operation is a mediety. From the additive algorithm we also obtain the converse of Property 3.3.1. Theorem 3.3.6 The interval I := [a/b, c/d] belongs to M if, and only if, ad − bc = −1.

(3.1)

Proof Let I := [a/b, c/d] be an interval that satisfies equality (3.1), with the integers a, b, c and d all greater than 1. The main property needed here is the following: a > c ⇐⇒ b > d.

(3.2)

To prove it, we can assume a and c different, as well as b and d, otherwise the equality (3.1) would give a = c = 1 (or b = d = 1). If a > c, then since ad < bc we have b ≥ d, so b > d. Now, if a < c and b > d, then bc > ad + 1, contradicting (3.1). The meaning of equivalence (3.2) is that the additive Euclidean algorithm (and therefore also the classical Euclidean algorithm) applied to (a, c) and applied to (b, d) produces the same word w (and therefore the same sequence of partial quotients) until the value 1 is reached. Now, observe that, by definition, I0 := [a/b, c/d] is in M if, and only if, either I1 := [(a − c)/(b − d), c/d] or I1 := [a/b, (c − a)/(d − b)] (depending on the sign of a − c) is in M. Hence, since I0 satisfies (3.1) if, and only if, I1 also satisfies it, the simultaneous iteration of the additive Euclidean algorithm to (a, c) and (b, d) defines a finite sequence of intervals In , all satisfying (3.1) and such that In ∈ M if, and only if, In−1 ∈ M. Eventually, an interval In = [an /bn , cn /dn ] is reached such that an = cn (or, similarly, bn = dn ), so (3.1) gives In = [1/m, 1/(m− 1)] for some m ∈ N>0 , which indeed belongs to M. For the Stern–Brocot mediety, we have v(1a0 0a1 1a2 · · · ) = [a0 , a1 , a2 , . . .]1 . Moreover, a simple induction gives the following. Theorem 3.3.7 For any N ≥ 1, the set of elements of FN \FN−1 is made of all numbers of the form [a0 , . . . , an ]1 such that ∑i ai = N. Theorem 3.3.6 implies that [a/b, c/d] ∈ M if, and only if, [a/c, b/d] ∈ M. We may wonder if the codings of such similar intervals have something in common. The following classical result will be of crucial importance for the definition of a natural k-mediety generalising the mediant (see Section 3.5.2). − for the word Theorem 3.3.8 Let Iw = [a/b, c/d], where w = w1 · · · wn . Writing ← w − = [a/c, b/d]. wn · · · w1 , we have I← w

72

B. Rittaud

Proof This is mainly a consequence of the following immediate lemma. Lemma 3.3.9 For any finite word w, define v(w) as v(w(1 − )ω ), where  is the last letter of w. We have v(1w) = 1 + v(w) and v(0w) = 1/(1 + 1/v(w)). Let us assume that the theorem is true for any word of length at most n, and consider a word of length n + 1, either of the form w0 or w1. Writing Iw = [a/b, c/d], we have Iw0 = [a/b, (a + c)/(b + d)] and Iw1 = [(a + c)/(b + d), c/d]. By induc− = [1/(1 + − = [a/c, b/d], so, by Lemma 3.3.9, I0← tion hypothesis, we also have I← w w ← − c/a), 1/(1 + d/b)] = [a/(a + c), b/(b + d)] and I1 w = [1 + (a/c), 1 + (b/d)] = (a + c)/c, (b + d)/d], and the theorem is proven. The golden ratios are the numbers of √ the form [a0 , . . . , an , 1, . . . , 1, . . .]1 . Among 5)/2 = [1, 1, 1, . . .]1 . For any golden ratio them, we find the real Golden Ratio, (1 + √ x, we have C(x) = ( 5 − 1)/2. A standard characterisation of golden ratios for continued fractions makes use of the following result (Hurwitz, 1894): for any x > 0, there are infinitely many coprime integers p and q such that $ $ $ 1 p$ q2 · $$x − $$ < √ , q 5 and the right-hand side can be decreased if, and only if, x is not a golden ratio. Let us investigate the links between this characterisation and our definition. Property 3.3.10 If q2 · |x − p/q| < 1, then x belongs to some interval of M that admits p/q as a limit point. Proof Let I = [a/b, c/d] be the interval of M such that ⊕(I) = p/q. Without loss of generality, we assume x > p/q, so our aim is to prove that x ∈ [p/q, c/d] =: J. Since q = b + d ≥ d, we have μ (J) = 1/(qd) ≥ 1/q2 > x − p/q, so x ∈ J. Take x > 0 irrational and p/q such that q2 · |x − p/q| < 1. Assume that x > p/q (the case x < p/q would be similar), and let p /q be the fraction such that the interval [p/q, p /q ] is the biggest in M containing x (that is, we choose p /q to get x ∈ [p/q, p /q ] ∈ M with level(p /q ) minimal; hence, level(p /q ) < level(p/q)). Let w be the word such that [p/q, p /q ] = Iw . Our assumptions imply that the last letter of w is 1. Let (ai )i≤2m be the multiplicative form of w. By Theorem 3.2.20, we have c(p/q) = 1a0 0a1 · · · 0a2m−1 1a2m 0ω and c(p /q ) = 1a0 0a1 · · · 0a2m−1 1ω . By Property 3.2.13, c(x) is of the form ww with w infinite. Put y := v(w ) = v(S|w| (c(x))). With the help of Property 3.3.3, we have x = [a0 , . . . , a2m−1 , a2m + y]1 = [a0 , . . . , a2m−1 , a2m , y−1 ]1 = so, with a little calculation:

  1 p = −1 . q2 · x − q y + q /q

y−1 p + p , y−1 q + q

Rosen continued fractions

73

−1ω . Hence, the denominator of w Note that, by Theorem 3.3.8, we have c(q /q) = ← −1ω ). Therefore, an intuitive interw the right side corresponds to v(S|w| (c(x))) + v(← pretation of this denominator is the following: put z := c(x) and write it as zn zn , with zn the prefix containing n := |w| letters, where w is the word such that ⊕(Iw ) = p/q. Sitting at the ‘frontier’ of zn and zn , we sum the values defined by the words that appear when looking to the right with a conjugacy (zn , which corresponds to y−1 by  Property 3.3.2) and looking backward to the left (← z− n , which corresponds to c(q /q)).

´ 3.3.3 Engel–Sierpinski expansions As recalled in the introduction, the Engel–Sierpi´nski expansion of a number x > 0 is an expression of the form

x=

1+

1+

1 + ··· q3 q2

q1

=

1 1 1 + + + · · · =:]q1 , q2 , q3 , . . . [ , q1 q1 q2 q1 q2 q3

where (qn )n is an non-decreasing sequence of positive integers. Originally, the underlying algorithm came from a question of fair division that goes back to Egyptian antiquity. Consider a set of a circular cakes to be divided into b equal parts (see Figure 3.7). Any cake or any piece of cake can be divided in n equal angular pieces (for all n). At each step, all remaining pieces of cake must be cut at the same time (hence all of these remaining pieces are of the same size). Engel–Sierpi´nski expansion corresponds to a greedy algorithm for such a fair division. Consider for example a = 3 cakes to be divided into b = 7 equal parts (for seven people). The smallest n for which 3n ≥ 7 is n = 3, so we divide each cake in three equal pieces. Each people takes one of these nine pieces, so there are two remaining pieces. The smallest integer n for which 2n ≥ 7 is n = 4, so each of the two remaining pieces are cut into four smaller pieces. There are eight of these smaller pieces, so each of the seven people takes one, letting only one small piece remaining. Cutting this latter into seven tiny pieces ends the division. The algorithm is summed up by the equality: 1 1 3 1 = + + . 7 3 3·4 3·4·7 Let us give a more formal definition. For any p and q ∈ N with p/q > 0, an Engel– Sierpi´nski expansion ]q1 , . . . , qn , . . . [ of p/q is given by the following algorithm. • Initialisation: p0 := p; n := 1. • Step 1: put qn := q/pn−1 and pn := qn pn−1 − q. • Step 2: if pn > 0, then n := n+1 and go back to Step 1; otherwise, return ]q1 , . . . , qn [. Since the sequence (pn )n is decreasing, the algorithm ends for any choice of p and q. Moreover, (qn )n is non-decreasing.

74

B. Rittaud

Figure 3.7 Illustration of the algorithm for the Engel–Sierpi´nski expansion.

The approximations ri /si :=]q1 , . . . , qi [ of p/q provided by the Engel–Sierpi´nski expansion are given by r1 := 1, s1 := q1 and, for any i ≥ 2: ri := 1 + qiri−1

and

si := qi si−1 .

Note that, contrarily to what happens in the usual theory of continued fractions, the fraction ri /si may not be irreducible (see for example the case 4/6 =]2, 3[). Now, here is an additive algorithm. • Initialisation: p0 := p; n := 0; s := 1; w := ε . • Step 1: pn+1 := pn s − q; – if pn+1 = 0 then return w; – if pn+1 < 0 then w := w0 and s := s + 1; – if pn+1 > 0 then w := w1 and n := n + 1. • Step 2: go back to Step 1. Such an additive algorithm defines what we will call here the Engel–Sierpi´nski mediety on I := [0, +∞] = [0/1, 1/0] (see Figures 3.8 and 3.9). Writing 0/1 =]ε [ and 1/0 =]0[, this mediety is inductively defined, for any m and n ∈ N, by the formula: m

% &' ( ]q1 , . . . , qn−1 , qn + 1, . . ., qn + 1[⊕]q1 , . . . , qn−1 , qn [ m+1

% &' ( :=]q1 , . . . , qn−1 , qn + 1, . . . , qn + 1[. A proof that ⊕ is indeed a mediety goes as in Remark 3.3.5, with the help of the first of the following easy theorems. Theorem 3.3.11 For the Engel–Sierpi´nski mediety, we have F = Q≥0 ∪ {∞}. The two Engel–Sierpi´nski expansions of an element of F˚ are of the form (qn )1≤n≤N and (qn )n≥1 , where qn = qn for n < N and qn = qn + 1 for all n ≥ N.

75

Rosen continued fractions ∅

0 1 2

11

3

22 33

4 5

34

44

12

23 333

13

222 233

24

223

111

2222

122 133

14

123

112

1222

113

1111

1122 1112 11111

Figure 3.8 The diagram of the Engel–Sierpi´nski mediety for I = [0, +∞]. (The node ]q1 , q2 , . . . , qn [ is written q1 q2 · · · qn .) 0/1

1/0 1/1 1/2

2/1

1/3 1/4 1/5

3/4 4/9

4/6

3/2 7/8

3/1

4/3

5/16 5/12 13/27 5/8 13/18 10/12 15/16 5/4

7/4

13/9 10/6 15/8

5/2 7/3

11/4

4/1 7/2

5/1

Figure 3.9 The diagram of the Engel–Sierpi´nski mediety for I = [0, +∞], in with each node in written in the form rn /sn .

Theorem 3.3.12 Let w be a finite word and write Iw = [r, s]. Let (a j )0≤ j≤2m be the multiplicative form of w with, possibly, a2m = 0. Put A−1 := 1 and, for any 1 ≤ i ≤ m, A2i−1 := A2i−3 + a2i−1. If a2m > 0, then write (qi )1≤i≤n for the finite sequence: A , . . . , A , A , . . . , A , . . . , A2m−1 , . . . , A2m−1 . ' −1 (% −1& ' 1 (% &1 ' (% & a2

a0

a2m

We have then r =]q1 , . . . , qn−1 , qn [ and s =]q1, . . . , qn−1 , qn − 1[. In particular, n = |W |1 . If a2m = 0, then write (qi )1≤i≤n for the finite sequence: A , . . . , A , A , . . . , A , . . . , A2m−3 , . . . , A2m−3 , A2m−1 . ' −1 (% −1& ' 1 (% &1 ' (% & a0

a2

a2m−2

We have then r =]q1 , . . . , qn−1 [ and s =]q1 , . . . , qn−1 , qn − 1[. In particular, n − 1 = |w|1 . Theorem 3.3.13 For any N ≥ 1, the set of elements of FN \FN−1 is made of all numbers of the form ]q1 , . . . , qn [ such that n + qn = N.

76

B. Rittaud

Theorem 3.2.22 gives that the golden ratios are the numbers ]q1 , q2 , . . . [ such that the infinite sequence (qn )n ultimately satisfies the relation qn = 1 + qn−1. Hence, the golden ratios are of the form r + e, where r is a rational number and e is the usual constant e = ∑n≥0 1/n! =]1, 1, 2, 3, 4, 5, . . .[. For any Engel–Sierpi´nski golden ratio θ , we have C(θ ) = e − 2.

3.4 End-first algorithms Let x ∈ I. The dichotomy algorithm defined by the mediety provides successive approximations to x that correspond to the prefix of w := w1 · · · wn · · · = c(x), that is, after the nth step, the algorithm of partition has provided w1 · · · wn . By an end-first algorithm we mean an algorithm that provides first the end of the non-trivial part of c(x). (The alternative denomination of backward algorithms that could have been chosen was avoided because of the existing notion of backward continued fractions, that are the α -continued fractions with α = −1.) A natural end-first algorithm is defined by a binary tree whose nodes are labelled by intervals Iw , with the characteristic that if m is the word that codes the finite walk from the root Iε to the node Iw , then the first letters of the word m contain the information that allows us to know the last letters of the word w. We present here an example of such an algorithm.

3.4.1 The end-first tree On the set of words (finite or infinite) on {0, 1}, define, for w = w1 w2 w3 · · · : R(w) := 1w L(w) := 0w2 w3 · · ·

for w such that w1 = 1.

An alternative definition is R(w) := 1w and/or L(w) := 0w2 w3 · · ·. We keep ours because of its more immediate use in the case of the Stern–Brocot mediety (as well as the link that is to be made with random Fibonacci sequences – see below). Most of the sequel would apply in the same way for any other definition of R and L (see Exercise 3.7.16). Definition 3.4.1 The end-first tree R is the binary tree whose nodes are labelled by intervals of M, in the following way: • the root of R is labelled by Iε = I, • any node of R labelled Iw has a right child labelled IR(w) , • a node Iw has a left child, labelled IL(w) , if, and only if, the first letter of w is 1. After the root, the tree R is therefore the biggest binary tree in which we can never go twice to the left; see Figure 3.10. Property 3.4.2 Let ρn be the set of edges at the nth row of R (ρ0 being the unit set

77

Rosen continued fractions

∅ 1 0

10

00

11 01

110

010

100

000

101

001

111 1011

011

1000

1110

1010

0010

10101

1001 1111 0011 10100 1100 0000 10111 0110 10001 11101 00101 101010

Figure 3.10 The end-first tree R. To make it more easy to read, edges from left children are going down instead of going right, and Iw is written w.

containing the edge from the root to I1 ). For any n, we have Card(ρn ) = Fn , where (Fn )n is the Fibonacci sequence F0 = 1, F1 = 2 and, for any n ≥ 2, Fn = Fn−1 + Fn−2. − Proof For any n ≥ 0, let cn := Card(ρn ) and c+ n (respectively, cn ) be the cardinality of the set of edges in the nth row starting from a right child (respectively, from a left child). Since any node has a right child, we have c+ n = cn−1 . Also, since a node has + a left child if, and only if, it is itself a right child, we have c− n = cn−1 = cn−2 . Hence, + − since cn = cn + cn , the sequence (cn )n satisfies the Fibonacci induction formula. Eventually, the fact that c0 = 1 and c1 = 2 gives the desired result.

A walk in R is therefore a finite word over the alphabet {R, L} that does not contain LL as a factor. Any walk m is naturally associated with the finite word w such that Iw is the interval reached by the walk m starting at the root. Put differently, we have w = m(ε ); see Figure 3.11.

3.4.2 The end-first algorithm Theorem 3.4.3 Let w be a non-empty finite word. The walk m that goes from the root to the node Iw is provided by the following algorithm. • Initialisation: m := ε ; x := w. • Step 1:  being the first letter of x, put m := RL1− m. • Step 2: put x := S(x); if x is not empty, go back to Step 1, otherwise return m. Step 1 shows that the beginning of w defines the end of m.

78

B. Rittaud ∅ 1

10

00

111

011

1100

Figure 3.11 The walk m := RRLRLR in R, that reaches the interval I1100 . Note that the usual notation for walks, that we endorse here, is somehow in conflict with the usual notation for composition of functions. In particular, m(ε ) is not equal to R(R(L(R(L(R(ε )))))) but to R(L(R(L(R(R(ε )))))).

Proof Assume that the algorithm works for any word w with |w| ≤ n, let w = w1 w2 · · · wn+1 , and put w := S(w). By the induction hypothesis, applying the algorithm to the initial word w defines the walk m that leads to the interval Iw . Now, apply the algorithm to w. The first iteration of the loop leads to m := RL1−w1 and x := w , so at the end of the algorithm we get m = RL1−w1 m . The word reached by m is m(ε ) = (RL1−w1 m )(ε ) = (L1−w1 m )(1) = m (w1 ) = w1 w = w, so the theorem is proven. Let m be a walk. We can write it in a unique way as (RLm1 ) · · · (RLm j ), where the mi s all belong to {0, 1}. Define, then, m as the walk (RL1−m1 ) · · · (RL1−m j ). A simple calculation1 then gives that m(ε ) = m(ε ). With such a definition of the conjugate of a walk, we can give an algorithm that provides, for any m, the word w = m(ε ) such that Iw is the interval in R reached by the walk m. Theorem 3.4.4 Let m be a walk in R, leading to the interval Iw . The word w is provided by the following algorithm. • Initialisation: w := ε ; z := m; write z = (RLz1 ) · · · (RLz j ), with zi ∈ {0, 1} for any j. • Step 1: put w := w, where  is such that RL is a prefix of z but not RL+1 . • Step 2: put z := S1+(z); if z is not empty, go back to Step 1, otherwise return w. Proof Immediate by Theorem 3.4.3. The notion of ⊕-golden ratio plays a specific role in R as stated by the following result. 1

It is an immediate consequence of Step 1 in the algorithm.

79

Rosen continued fractions

Theorem 3.4.5 For m an infinite walk in R, put In := Imn (ε ) for any n (where mn is the prefix of m with |mn | = n), and say that m converges if, and only if, the sequence (In )n is ultimately decreasing. The walk m converges if, and only if, it is of the form m Rω for some finite walk m . Moreover, for any converging walk m, ∩n In reduces to the golden ratio v((01)ω ). Proof Let z be a finite walk ending with an R, and let Iw be the interval in R reached by this walk. By definition of L, the words w and L(w) have the same length but are not consecutive in the lexicographical order (apart for the case z = RL, for which w = 1 and L(w) = 0). Therefore, by Property 3.2.10, Iz(ε ) ∩ IL(z(ε )) is empty. Hence, a walk can converge only if it is of the form m Rω as stated in the theorem. In this case, write w := m (ε ). For any n ≥ 0, we have R2n (w) (w) R 2n+1

= =

(10)n w; 1(01)nw.

Hence, in the case m = m Rω , the sequence (In )n is indeed decreasing to a limit point x such that c(x) = (10)ω , and we are done.

3.4.3 Examples For the arithmetic mediety (see Figures 3.12 and 3.13), we have R(x) = 1 −

x 2

and

L(x) = x −

1 (for x ≥ 1/2). 2

[0,1]

[1/2,1]

[0,1/2]

[1/2,3/4]

[3/4,1]

[1/4,1/2]

[3/4,7/8]

[0,1/4]

[1/2,5/8]

[0,1/8]

[7/8,1]

[5/8,3/4]

[1/8,1/4]

[5/8,11/16]

[11/16,3/4] [3/8,1/2] [1/2,9/16] [7/8,15/16] [1/8,3/16][21/32,11/16]

Figure 3.12 The tree R for the arithmetic mediety on I = [0, 1].

80

B. Rittaud 1/2

3/4

1/4

5/8

7/8

3/8

13/16

1/8

9/16

1/16

23/32

15/16

7/16

17/32

11/16

3/16

29/32

21/32

5/32

43/64

Figure 3.13 The tree R for the arithmetic mediety on I = [0, 1], with ⊕(I) at each node.

For the harmonic mediety, we have 2 and L(x) = (for x ≥ 2). 1 2 1− 1+ x x The example of the Stern–Brocot mediety is especially interesting (see Figure 3.14). It is easily proven that, in this case: R(x) =

2

1 1 and L(x) = 1 − (for x ≥ 1). x x We can deduce of this an alternative presentation of the proof that, for the Stern– Brocot mediety, F = Q≥0 ∪ {1/0}. Indeed, a node x being given, its parent P(x) is equal to 1/|x − 1|. In particular, for x = p/q, we have P(x) = q/|p − q|. For an irreducible fraction p/q, write h(p/q) for the pair (min(p, q), max(p, q)). In an obvious sense, the sequence (h(Pn (p/q)))n is decreasing, hence the sequence (Pn (x))n eventually attains a trivial fraction. This proves that any irreducible fraction p/q belongs to R, so F = Q≥0 ∪ {1/0}. Also, in the case x = φ , we have 1/|x − 1| = x, so φ is irrational. An interesting fact about the tree of Figure 3.15 is given by the numerators. As can be seen, each right child is the sum of its two direct ancestors, whereas each left child is the difference of its two direct ancestors. The tree R is therefore linked to random Fibonacci sequences, i.e., sequences ( fn )n defined for example by f0 = 1, f1 = 2 and, for any n ≥ 2, fn = | fn−1 ± fn−2 |, where the ± sign is randomly chosen for each value of n. Indeed, add to the top of the tree R a node 0 that has a node 1 as a right child, the latter node having the root 1 of R as a left child. To each sequence ( fn )n then R(x) = 1 +

81

Rosen continued fractions [0/1,1/0]

[1/1,1/0]

[0/1,1/1]

[1/1,2/1]

[2/1,1/0]

[1/2,1/1]

[0/1,1/2]

[1/1,3/2]

[3/1,1/0]

[3/2,2/1]

[1/3,1/2]

[3/2,5/3]

[2/1,3/1] [0/1,1/3] [5/3,2/1] [2/3,1/1] [1/1,4/3] [3/1,4/1] [1/3,2/5] [8/5,5/3]

Figure 3.14 The tree R for the Stern–Brocot mediety with I = [0/1, 1/0].

1/1

2/1

1/2

3/2

3/1

2/3

5/2

1/3

4/3

1/4

5/3

4/1

7/4

3/4

2/5

5/4

7/2

8/5

3/8

13/8

Figure 3.15 The tree R for the Stern–Brocot mediety with I = [0/1, 1/0] and ⊕(I) at each node.

corresponds the following way to go in this modified tree R: when at the node fn , if fn+1 = fn + fn−1 then go to the right (or to the left if fn−1 = 0). If fn+1 = | fn − fn−1 |, then go to the left if it is possible (it is not difficult to prove that fn has a left child if, and only if, fn − fn−1 is non-negative), otherwise go to the parent of fn−1 . The tree corresponding to the continued fraction expansion can also be obtained in the

82

B. Rittaud 

 1 0 following way: label the root by the identity matrix , and, for any node 0 1   0 1 labelled by some matrix A, left-multiply A by to get the right child and 1 1   0 1 by to get the left child. The obtained tree correspond to what is called −1 1   b d the ‘SL(2, N)-tree’ in (Rittaud, 2007, Fig. 5, p. 21). Mapping each matrix a c to the Stern–Brocot interval whose bounds are a/b and c/d provides the tree R (see Figure 3.16). 1

2

3

1

3

2

5

5

1

4

1

4

7

3

8

2

5

7

3

13

Figure 3.16 The tree R for the Stern–Brocot mediety with I = [0/1, 1/0], with only the numerator of each node.

For Engel–Sierpi´nski expansions, the two variants for the tree R are more difficult to study, since there seems to be no simple expression for R and L (see Figures 3.17 and 3.18).

3.5 Medieties with k letters Here we consider the more general case of medieties defined by an alphabet made of k letters, namely A := [[0, k − 1]], where k ≥ 2 is some integer. This corresponds to a numeration system in base k > 1 (k integer). For such numeration systems, we define a k-mediety of two numbers r and s as a set of k + 1 numbers, written r ⊕i s for 0 ≤ i ≤ k, that split the interval [r, s] into k subintervals (with the assumption r ⊕0 s = r and r ⊕k s = s). For simplicity of notation, we put k := k − 1.

83

Rosen continued fractions [0/1,1/0]

[1/1,1/0]

[0/1,1/1]

[1/1,2/1]

[2/1,1/0]

[1/2,1/1]

[0/1,1/2]

[1/1,3/2]

[3/1,1/0]

[3/2,2/1]

[1/3,1/2]

[3/2,7/4]

[2/1,3/1] [0/1,1/3] [7/4,2/1] [3/4,1/1] [1/1,4/3] [3/1,4/1] [1/3,4/9] [10/6,7/4]

Figure 3.17 The tree R for the Engel–Sierpi´nski mediety with I = [0/1, 1/0].

1/1

2/1

1/2

3/2

3/1

3/4

5/2

1/3

4/3

1/4

7/4

4/1

15/8

7/8

4/9

5/4

7/2

10/6

5/12

31/18

Figure 3.18 The tree R for the Engel–Sierpi´nski mediety with I = [0/1, 1/0], and ⊕(I) at each node.

There are several ways to generalise a 2-mediety into a k-mediety. For example, to generalise the arithmetic mediety to a 3-mediety, we may define the standard trichotomy, but also something like x ⊕1 y = (x + y)/2 and x ⊕2 y = (3x + y)/4 (which corresponds to the following recoding of the usual binary expansion: 00 → 0, 01 → 1, 10 → 2 and 11 → 2). In Sections 3.5.2 and 3.5.3, we give a quite natural generali-

84

B. Rittaud

sation of the examples given in Section 3.3, without considering the case of Engel– Sierpi´nski expansions for which the question seems to open.

3.5.1 Generalities The definition and general properties of k-medieties are essentially the same as for 2medieties, the only technical difference being that it also considers two trivial binary operations, ⊕0 and ⊕k (that are of no use in the context of Definition 3.2.1); see Figure 3.19. Definition 3.5.1 Let I := [α , ω ] be a non-trivial closed interval of [0, +∞]. A kmediety on I is a set of k + 1 binary operations, written ⊕i for i = 0,. . . k, defined on the smallest possible set M ⊂ I2 such that • the pair (α , ω ) belongs to M, • if (r, s) ∈ M, then r =: r ⊕0 s < r ⊕1 s < · · · < r ⊕k s < r ⊕k s := s and (r ⊕i s, r ⊕i+1 s) ∈ M for any i < k, • if (In )n≥0 is a decreasing sequence of intervals with bounds in M, then the set ∩n In is reduced to a single element. Most of the results presented in the previous sections for two letters are straightforwardly generalised to k letters, so we mention them without providing all the details.

Figure 3.19 The general structure of the diagram for 3-medieties.

The set F of all bounds of all intervals in M is dense in I; any number x can be coded by an infinite word c(x) = w1 w2 w3 · · · over the alphabet A , each letter being chosen in such a way that the sequence of closed intervals Iw1 ···wn = [rn , sn ] defined by Iε = I and, for all n ≥ 0, wn+1 =  if, and only if, Iw1 ···wn+1 = [rn ⊕ sn , rn ⊕+1 sn ] satisfies ∩n Iw1 ···wn = {x}. For any finite word w, we have Iw = [v(w0ω ), v(wkω )]. The set Iw is the set of numbers which admit a coding of the form wz, where z is some infinite word over A . The elements of F˚ := F\{α , ω } are the only ones that admit two equivalent codings, given by the relation w0kω = wk 0ω where w is a finite word. We have c(α ) = 0ω and c(ω ) = kω . The function c is increasing and, for any (r, s) ∈ M and any  ∈ A \{0}, denoting by w the unique finite word such that c(r) = w0ω and c(s) = wkω , we have c(r ⊕ s) = {w0ω , w( − 1)kω }.

Rosen continued fractions

85

The conjugate of a letter  is the letter  := k − . Defining x := v(c(x)) for any x ∈ I leads to the following relation for any pair (r, s) ∈ M and any  ∈ A \{0}: r ⊕ s = s ⊕k− r. For any x ∈ I, define C(x) and golden ratios as in Definition 3.2.21. Theorem 3.5.2 The set of golden ratios of a given k-mediety is the set of numbers x ∈ I for which there exists an integer n ≥ 0 such that  Sn (c(x)) =

if k is odd; (k /2)ω , ((k − 2)/2 k/2)ω , if k is even.

The proof is a simple extension of Theorem 3.2.22. The previous expression can be summarised in: Sn (c(x)) = (k /2k /2)ω (for any k).

3.5.2 Simple examples The base k numeration system is a natural generalisation of the base-2 numeration system. Its corresponding k-mediety is defined by x ⊕i y := ((k − i)x + iy)/k, where M is the set of pairs of the form (p/kn , (p + 1)/kn), where n ≥ 0 and 0 ≤ p < kn . The golden ratios are the numbers of the form (2p + 1)/(2kn) if k is odd (since, then, the set of golden ratios is the set of the middles of all intervals of M), p/kn + (k + 2)/(2kn (k + 1)) if k is even (with p and n integers – see Exercise 3.7.17). The k-arithmetic mediety provides a first general way to define a k-mediety from any given 2-mediety ⊕ defined on I. Indeed, consider the question mark ? from the 2-arithmetic mediety ⊕A on IA := [0, 1] to ⊕. A k-mediety generalising ⊕ is therefore obtained by the relation r ⊕i s :=?((ir + (k − i)s)/k). The usual generalisations of Pythagorean medieties mentioned in introduction are obtained in this way. For the harmonic mediety, we get the formula x ⊕i y := k/(((k − i)/x) + (i/y)), where (x, y) ∈ M if, and only if, x and y are of the form x = kn /(kn − p) and y = kn /(kn − (p + 1)) for some integer n ≥ 0 and 0 ≤ p < kn . For the geometric mediety, we get x ⊕i y := x(k−i)/k yi/k , where (x, y) ∈ M if, and only if, (x, y) ∈ [1, 2] is of the n n form (2 p/3 , 2(p+1)/3 ), where n ≥ 0 and 0 ≤ p < 3n .

3.5.3 The k-Rosen mediety For the Stern–Brocot mediety, the use of the question mark to get a k-mediety does not seem to have been investigated, possibly because the result is not very natural (see Exercise 3.7.23). better attempt is to ask for a formula that preserves the linearity.  A a c Write M := for the interval [a/b, c/d]. We are looking for k matrices, say b d   r t Ai = i i , such that the matrices MAi provide a partition of [a/b, c/d] as in si ui the Stern–Brocot case. Having in mind the case of ⊕0 and ⊕k , we choose r0 = 1, s0 = 0, tk = 0 and uk = 1. Since the upper bound of a subinterval has to be equal to the lower bound of the next one, we also choose ri+1 = ti and si+1 = ui for all i, so

86

B. Rittaud   r r Ai = i i+1 , with rk+1 = 0 and sk+1 = 1. Considering Theorem 3.3.6 and asking si si+1 its statement to remain true for a corresponding k-mediety also gives that det(Ai ) = 1 for any i, hence s1 = rk−1 = 1. Unfortunately, this is not sufficient to characterise a k-mediety. A possibility is to put some more constraints on determinants. In particular, we may ask for det( ba ⊕i c c a d , b ⊕ j d ) to depend only on i − j (where det(a/b, c/d) stands for ad − bc) when a, b, c and d are fixed (see Exercise 3.7.21). Such an assumption provides the desired k-mediety, but forces to consider determinants of pairs (r, s) that do not belong to M. A strictly combinatorial way to extend the mediant is suggested by Theorem 3.3.8, which is the root of the following statement. Theorem 3.5.3 Let k ≥ 2 be fixed, and let λ := λk := 2 cos(π /(k + 1)). Put r0 = 1, r1 = λ and, for any 2 ≤ i ≤ k + 1, ri := λ ri−1 − ri−2 . The k-Rosen mediety on I := [0/1, 1/0] defined by c ri+1 a + ri c a ⊕i = b d ri+1 b + rid

(3.3)

is the unique linear k-mediety satisfying the following property: for any finite word − = [a/c, b/d]. w, writing Iw = [a/b, c/d], we have ad − bc = −1 and I← w Proof Consider a general linear formula satisfying the hypotheses of the theorem: c ri a + sic a ⊕i := . b d ri b + si d The subintervals of the form Ii , where i ∈ [[0, k − 1]], satisfy Ii = [si /ri , si+1 /ri+1 ]. Since a word made of a single letter is palindromic, Ii is also equal to [si /si+1 , ri /ri+1 ], so ri = si+1 for any i. Hence, for any i, we have Ii = [ri−1 /ri , ri /ri+1 ]. Put λ := r1 . By induction, the condition on the determinant ofIi implies  that 0 1 and the sequence (ri )i satisfies (3.3) for some value of λ . Put H := −1 λ   ri Ri := : we have HRi = Ri+1 for any i, so a simple calculation gives that ri+1 H k+1 = −Id, so H is conjugate to a rotation of angle π /(k + 1). This implies that λ = 2 cos(π /(k + 1)). The fact that this indeed defines a k-mediety is checked in the same way as the Stern–Brocot case (see Property 3.3.1), since the condition on the determinant is the same; see Figure 3.20. Eventually, assume the final required property true for any word of length at most n, and consider a word of length n + 1 of the form w with  ∈ [[0, k ]]. Write Iw := [a/b, c/d]. The equivalent of Lemma 3.3.9 is the following. Lemma 3.5.4 Define inductively Λi (x) by Λ0 (x) = λ + x and Λi (x) = (λ Λi−1 (x) − 1)/Λi−1(x). For any 0 ≤ i < k, we have Iiw = [Λi (a/b), Λi (c/d)].

Rosen continued fractions

87

This lemma is easily proven by observing that, for any i ≥ 1, ri+1 /ri is of the form

λ−

1

λ−

1 ..

,

1 λ where λ appears exactly i times in this expression. The end of the proof then goes as the proof of Theorem 3.3.8. .−

Note that, as for the Stern–Brocot mediety, the fractions involved in the calculations should not be ‘simplified’ in any way. A convenient representation is, as for the Stern–Brocot case, to consider vectors in R2 .

Figure 3.20 Geometric representation of the k-mediety applied to u = (a, b) and v = (c, d) for k = 3 (left) and k = 4 (right).

To describe the k-Rosen mediety in terms of intervals of M, we need some preliminary notation. Let (an )n≥0 be a sequence of non-negative real numbers. The notation 1 . Let [(an )n ]λ and [(a j )mj=0 , (a j ) j>m ] both stand for [a0 , a1 , . . .]λ := a0 λ + a1 λ + · · · furthermore (en )n>0 be a sequence in {−1, 1}. We write e1 [a0 , e1 : a1 , e2 : a2 , . . .]λ := a0 λ + . e2 a1 λ + a2 λ + · · · We will sometimes mix this notation with the usual one. In particular, for i < k − 2, we have λ (i) := ri+1 /ri = [1, (−1 : 1), . . . , (−1 : 1)]λ , which will be also written [1, (−1 : 1)i−1 ]λ . We also have λ (k−2) = [1, (−1 : 1)k−3 ]λ = [0, 1]λ . A convenient convention is to assume both (−1 : 1)0 and (an )−1 n=0 to be empty sequences. Theorem 3.5.5 The intervals of M are exactly those whose bounds can be written on the form [(a j ) j 1 and ∏i qi = q. Show that the number of integers p ∈ [1, q − 1] such that p/q appears in the Engel–Sierpi´nski diagram is equal to the number of increasing factorisations of q. Exercise 3.7.12 Find the Engel–Sierpi´nski expansion of e1/r for any integer r. Exercise 3.7.13 Let x =](qn )n≥1 [. Put x0 := x and for any n > 0:   m an := max m ∈ N | xn−1 − ≥ 0 n!

xn := xn−1 −

and

an . n!

Prove that, for any n: n! = qi . an ∏ i≤n Exercise 3.7.14 Construction of a 2-mediety diagram. We define the function Prefix on the set of finite words containing both the letters 0 and 1 by Pr(w) := w|w|−1 , and the function BothLettersPrefix in the following way: BLPr(w) is the biggest prefix x of w such that, defining y as w = xy, the word y contains both letters 0 and 1. We enumerate the nodes of a 2-mediety diagram by the way of numeration in base 2, the jth node of the ith row being labelled by the word of length i that corresponds to the base-2 expansion of j, as in the following picture. ε

ε ε

0

1

00 000

01 001

010

10 011

100

11 101

110

111

0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111

Prove that, for any finite word w of length at least 2 and containing both letters 0 and 1, the node labelled by w is the child of Pr(w) and BLPr(w). Exercise 3.7.15 Consider the tree R of the mediant shown in Figure 3.15. Let m be a walk in R. The nodes of R successively reached by the walk m define a sequence of rational numbers whose set of limit points is written E. (1) Assume m ultimately periodic. (1) Prove that E depends only on the periodic part p of m. ) (2) Prove that E ⊆ Q( β )\Q, where β is explicit in p. (2) Conversely, assume E finite. Prove that m is ultimately periodic.

Rosen continued fractions

97

(3) We write m on the form Ra0 LRa1 LRa2 L · · · . Give a necessary and sufficient condition on the an s that ensures that E contains no number x with unbounded partial quotients. Exercise 3.7.16 On the generic diagram for a mediety (Figure 3.3), a caterpillar placed at the node α crosses the edge from α to ω , then walks further on the edges of the diagram, following two rules: • the caterpillar never crosses the same edge twice; • having crossed the edge e from x to y, the caterpillar choose one of the two edges adjacent to e to be crossed, i.e., one of the two edges of the form yz such that the edge xz exists (avoiding xz if that edge has already been crossed). (1) Find the end-first algorithm that corresponds to the set of possible walks. (2) Generalise to the diagram for k-medieties. Exercise 3.7.17 The aim of the exercise is to prove that, for even integers k, the golden ratios of the k-arithmetic mediety ⊕ on I := [0, 1] are the real numbers of the form (2kp + k + p)/(kn(2k + 1)) for m ≥ 0 and 0 ≤ p < kn (see Section 3.5.2). Let x be a golden ratio of the k-mediety. • Prove that there exist intervals n = {x}, where   In := [an, bn ] ∈ M such that ∩n I 1/2 − 1/k 1/2 + 1/k a . Un+1 = N 2Un with Un = n and N = bn 1/2 1/2 • Find an expression of the limit of the sequence of vectors (Un )n in terms of a0 , b0 and k. • Prove the desired result. √ Exercise 3.7.18 We consider Rosen continued fractions in the case λ = m with m = 2 or m = 3 (i.e., λ ∈ {λ4 , λ6 }). √ (1) Prove that F = mQ≥0 ∪ {∞} and that the form in which an element of F appears √ √ / mZ odd or p/q m with p ∈ / mZ. in the diagram is either p m/q with q ∈ (2) Prove that for any x > 0 there exists a sequence (finite or infinite) of integers (qn )n such that x = [mq0 , q1 , mq2 , q3 , mq4 , . . .]1 . Exercise 3.7.19 Define an i-random Fibonacci sequence as a sequence ( fn )n≥0 such that f0 = 1, f1 = i and, for any n ≥ 2, fn = i fn−1 ± fn−2 , where the ± sign is chosen for each n by a fair coin. Let Ti be the binary tree whose root, labelled by f0 , has f1 as unique child, and then in which any node b of parent a has bi − a as left child and bi + a as right child. (1) Say that two edges (a, b) and (a , b ) of Ti are equivalent if, and only if, (a , b ) = (±a, ±b). Cut all the edges of Ti for which there is an equivalent edge closer to the root, to get a subtree Ri . Show that the combinatorial structures of Ri and R5 are the same.

98

B. Rittaud

(2) Define similarly the subtree Ri with the equivalence relation between edges (a, b) and (a , b ) defined by b /a = b/a. Show that the combinatorial structures of Ri and R3 are the same. (3) In (2), replace b /a = b/a by |b /a | = |b/a|, to define a third subtree Ri . Show that the combinatorial structures of Ri and R2 are the same. Exercise 3.7.20 Consider the diagram of the k-Rosen mediety. Prove that the nodes of FN \FN−1 are those for which either the left, or the right form [a0 , (e j : a j ) j≤n ]λ satisfies

∑ a j − Card( j | e j = −1) = N. j

Exercise 3.7.21 Let k ≥ 2 be a fixed integer. The aim of the exercise is to provide a characterisation of the k-Rosen mediety alternative to Theorem 3.5.3. We still assume that any interval [a/b, c/d] ∈ M satisfies ad − bc = −1. We also assume the existence of matrices Ai as presented in Section 3.5.2. Hence, by linearity, it is enough to consider the bounds defined by the first partition of I: 0 1

1 r1

s2 r2

···

sk−2 rk−2

sk−1 1

1 . 0

We put r0 = s1 = rk−1 = sk = 1 and s0 = rk = 0. (1) Ask for the conjugation function to be given by x = 1/x. Show that such an hypothesis characterises the k-mediety for k = 3 but not for k ≥ 4. (2) Instead of the previous assumption, assume that for any i < j, det(si /ri , s j /r j ) = det(s0 /r0 , s j−i /r j−i ), where det(a/b, c/d) stands for ad − bc. Prove that this assumption characterises the k-Rosen mediety. Exercise 3.7.22 (See (Janvresse et al., 2013).) Put λ := λk for some k ≥ 2 and consider the same notation as in Section 3.5.3. (1) Let C (R) be the circle of radius R centred at the origin O, let M0 be a point on this circle, put M1 := gθ (M0 ) and M2 := gθ (M1 ), where gθ is the rotation of angle θ and centre O. Let t0 , t1 and t2 be the abscissas of M0 , M1 and M2 . Prove that t2 = λ t1 − t0 . (2) Take 1/ cos(θ − π /2) for R and (0, −R) for the coordinates of M0 . Put Mi = gθ (Mi−1 ) for 1 ≤ i ≤ k. Prove that for any i ≥ 1 the abscissa of Mi is equal to ri−1 . Exercise 3.7.23 For k ≥ 2, define the k-Stern–Brocot mediety as the image of the k-arithmetic mediety by ?−1 , where ? stands for the standard Minkowski’s question mark (from the Stern–Brocot mediety to the 2-arithmetic mediety). (1) Write F(k) the set F of the k-Stern–Brocot mediety. Prove that F(k) ⊆ F(d) for any d divisible by k. (2) Prove that if k is odd, then F(k) contains only quadratic irrational numbers.

99

Rosen continued fractions

M2 M1 M0

θ

θ

O t2

t1

t0

(3) We consider the case k = 3. (1) Prove that the minimal periodic part P of 1/3k written in base 2 contains 2 · 3k−1 digits. (2) Prove that any fraction p/3k with p ∈ / 3Z admits P as a minimal periodic part. (3) Prove that, for any n, if x and y both belong to Fn \Fn−1 , then Q(x) = Q(y). Exercise 3.7.24 For a given k-mediety, consider the balanced end-first algorithm defined by the following transformations R and L on words (where W = w1 w2 · · · ): R(w) := k /2w ,

L(w) := (w1 − 1 mod k)w2 w3 · · · ,

the transformation L being defined for words w = w1 w2 · · · such that w1 is not equal to 1 + k/2 (mod k). (1) Prove that the results of Section 3.6 remain true for this new algorithm, and write the new algorithms corresponding to Theorems 3.6.5 and 3.6.6. (2) Let m be an infinite walk in R and, for any n, In := Imn (ε ) , where mn is the prefix of m such that |mn | = n. Prove that (In )n is ultimately decreasing if, and only if, it converges to the golden ratio v((k /2k /2)ω ).

100

B. Rittaud

3.8 Open problems √ Problem 3.8.1 Provide a description of the Engel–Sierpi´nski expansion of 2. √ Remark 3.8.2 We have 2 =]1, 3, 5, 5, 16, 18, 78, 102, 120, 144, 251, . . .[. (See the sequence A028254 of Neil Sloane’s On-Line Encyclopedia of Integer Sequences.) Problem 3.8.3 In the tree of Figure 3.16, remove all the edges reached by a walk containing either the factor (L2 G)3 or (LGL)3 . Prove that the remaining subtree contains all positive integers. Remark 3.8.4 This problem is equivalent to Zaremba’s conjecture (Zaremba, 1972): for any integer q there exists an integer p, prime with q, such that the partial quotients of p/q are upper-bounded by K = 5. The most recent result in the field (Bourgain and Kontorovich, 2014) is that, for K = 50, the conjecture is true for a set of denominators of density 1. Problem 3.8.5 Study the properties of the question mark from k-Rosen mediety to k-arithmetic mediety. Problem 3.8.6 Find a k-mediety that generalises Engel–Sierpi´nski expansions.

4 Repetitions in words Narad Rampersad and Jeffrey Shallit

4.1 Introduction The topic of this chapter is the study of avoidable repetitions in words (finite or infinite sequences). The study of regularities in mathematical structures is a basic one in mathematics. The area of Ramsey theory is devoted to the study of unavoidable regularities in mathematical structures. Here we are principally concerned with the study of certain kinds of avoidable regularities in words. In particular, we are interested in the question, ‘What kinds of repetitions can be avoided by infinite words?’ The first work relating to this question dates back at least to the beginning of the twentieth century and the work of Thue (1906a, 1912) on repetitions in words. Thue’s results were independently rediscovered from time to time in the first half of the twentieth century; however, a systematic study of what is now known as combinatorics on words was not initiated until perhaps the late 1970s or early 1980s. The study of repetitions in words has as its principal motivation nothing other than its inherent mathematical appeal (indeed, this was Thue’s reason for studying it). Nevertheless, the study of repetitions in words has several applications, the most famous of which is perhaps the work of Novikov and Adian (1968) in solving the Burnside problem for groups. Other applications have been found in cryptography and bioinformatics, such as discussed in Section 5.6 and also in Section 6.2.4. Recently, combinatorial results regarding repetitions in words have been used to prove deep results in transcendental number theory. A survey of these recent developments has been given by Adamczewski and Bugeaud (2010). Note that repetitions are also the main object of study of the next chapter (Chapter 5) where detection algorithms are also provided.

Combinatorics, Words and Symbolic Dynamics, ed. Val´erie Berth´e and Michel Rigo. Published by c Cambridge University Press. Cambridge University Press 2016.

102

N. Rampersad and J. Shallit

4.2 Avoidability The notions of alphabet, letter, finite or infinite word, factor, prefix, suffix and morphism have been defined in Section 1.2. We recall in particular that the Thue–Morse morphism μ : {0, 1}∗ → {0, 1}∗ is defined by 0 → 01 1 → 10. Frequently we shall deduce the existence of an infinite word with a certain property from the existence of arbitrarily large finite words with the desired property. To pass from the finite to the infinite, we shall often rely (implicitly) on the following form of K¨onig’s infinity lemma. Theorem 4.2.1 Let Σ be a finite alphabet, and let A be an infinite subset of Σ∗ . Then there exists an infinite word w such that every prefix of w is a prefix of at least one word in A.

4.2.1 Squares, cubes and k-powers The most basic type of repetition is the square, that is, a non-empty word of the form xx, where x ∈ Σ∗ . An example of a square in English is the word murmur. We say a word w is squarefree (or avoids squares) if no factor of w is a square. It is easy to see that every word of length at least four over the alphabet {0, 1} contains a square; it is therefore impossible to avoid squares in infinite binary words. Thue (1906a) proved the following fundamental result. Theorem 4.2.2 three.

There exists an infinite squarefree word over an alphabet of size

Indeed, Thue showed that the word hω (0) is squarefree, where h : {0, 1, 2}∗ → {0, 1, 2}∗ is the morphism defined by (see also (1.2)) 0 → 01201 1 → 020121 2 → 0212021. We shall obtain this result in Section 4.2.5, as a consequence of a more general result. By analogy with the definition of a square, a cube is a non-empty word of the form xxx, where x ∈ Σ∗ . A word w is cubefree if no factor of w is a cube. The Thue–Morse word t (see Definition 1.3.1) is cubefree, but we shall see presently that it is possible to prove something even stronger. k

% &' ( For any positive integer k ≥ 2, a k-power is a non-empty word of the form xx · · · x, written for convenience as xk . Thus a 2-power is a square, and a 3-power is a cube.

Repetitions in words

103

A non-empty word that is not a k-power for any k ≥ 2 is primitive. A word is kpowerfree (or avoids k-powers) if none of its factors is a k-power.

4.2.2 Fractional powers We can extend the notion of integer powers in words to fractional powers. Let α be a real number larger than 1. A word y is said to be an α -power of x if y is the shortest prefix of xω such that |y| ≥ α |x|. Similarly, y is said to be an α + -power of x if y is the shortest prefix of xω such that√|y| > α |x|. For example, the French word entente is both a 73 -power of ent, and a 5 = 2.236 · · ·-power of ent. If we can write y = xn x , where n ≥ 1 is an integer and x is a prefix of x, then we say that |y|/|x| is an exponent of y. The largest such exponent is called the exponent of y. Lemma 4.2.3 Let h be a uniform morphism, and let α = β (respectively, β + ) for a real number β ≥ 1. If w contains an α -power then so does h(w). Proof Suppose w contains an α -power. Then there exist words s, s ∈ Σ+ and r,t ∈ Σ∗ such that w = rsn st, where s is a non-empty prefix of s and n + |s |/|s| ≥ β (respectively, > β ). Then h(w) = h(r)h(s)n h(s )h(t). Then h(w) contains h(s)n h(s ), which is of exponent greater or equal to α . We now examine how exponents behave under application of the Thue–Morse morphism μ . First, we need two lemmas. We write 0 = 1 and 1 = 0. Lemma 4.2.4 Let t, v ∈ {0, 1}∗. Suppose there exist letters c, d ∈ {0, 1} such that cμ (t) = μ (v)d. Then c = d and t = cn and v = cn , where n = |t| = |v|. Proof By induction on n. The base case is n = 0, so t = ε . Hence v = ε and c = d. For the induction step, assume the result is true for all words of length less than n; we prove it for n. If cμ (t) = μ (v)d, then by comparing prefixes we see that v = cv for some word v , and by comparing suffixes we see that t = t  d. Substituting, we get c μ (t  d) = μ (cv ) d. Hence c μ (t  ) d d = c c μ (v ) d. Cancelling c on the left and d on the right, we get μ (t  ) d = c μ (v ). Induction then gives c = d (and hence c = d) and t  = cn−1 and v = cn−1 . From this the desired result follows. Lemma 4.2.5 Suppose y, z ∈ {0, 1}∗ and μ (y) = zz. Then there exists x ∈ {0, 1}∗ such that z = μ (x). Proof If |z| is even, the result is clear, since then |μ (y)| ≡ 0 (mod 4), and hence |y| is even. Hence z = μ (w), where w is the prefix of y of length |y|/2. Let us show that n = |z| cannot be odd. If it were, let z = au = vb, where a, b ∈ {0, 1} and u and v are words of even length. Then μ (y) = zz = vbau; hence there exist words r, s such that u = μ (r) and v = μ (s). Hence zz = μ (s)ba μ (r), and b = a. But z = aμ (r) = μ (s)b; hence by Lemma 4.2.4 we have z = a (a a)(n−1)/2 . Then the last letter of z equals a and a, a contradiction.

104

N. Rampersad and J. Shallit

Theorem 4.2.6 Let α = β for a real number β > 2 or α = β + for a real number β ≥ 2. Then w is α -powerfree if, and only if, μ (w) is α -powerfree. Proof One direction follows from Lemma 4.2.3. For the other direction we handle the case where α = β > 2. Suppose μ (w) = xyn y z, where n ≥ 2, n + |y |/|y| = γ , for γ ≥ α (respectively, > α ) and y is a prefix of y. There are four cases to consider, based on the parity of |x| and |y|. Case 1: |x| is even and |y| is even. There are two subcases, depending on the parity of |y |. Case 1a: |y | is even. Then |z| is even. Then there exist words r, s, s ,t, with s a prefix of s, such that μ (r) = x, μ (s) = y, μ (s ) = y , and μ (t) = z. Then w = rsn s t, and so w contains the γ -power sn s . Case 1b: |y | is odd. Then |z| is odd. Then there exist words r, s, s ,t, with s a prefix of s, and a letter c such that μ (r) = x, μ (s) = y, μ (s )c = y , and cμ (t) = z. Since |y | is odd, |y| is even, and y is a prefix of y, it follows that y c is also a prefix of y. Hence s c is a prefix of s. Then w contains a power sn s c with exponent n+

2|s | + 2 |y | + 1 |y | |s c| = n+ = n+ > n+ = γ. |s| 2|s| |y| |y|

Case 2: |x| is even and |y| is odd. Since |x| is even and μ (w) = xyn y z, there exists a word t such that μ (t) = yy. From Lemma 4.2.5 there exists v such that y = μ (v). But then |y| is even, a contradiction. Thus this case cannot occur. Case 3: |x| is odd and |y| is even. There are two subcases, depending on the parity of |y |. Case 3a: |y | is even. Then |z| is odd. Then there exist words r, s, s ,t and letters c, d, e such that x = μ (r)c, y = cμ (s)d, y = d μ (s )e, and z = eμ (t). Consideration of the factor yy shows that dc must be the image of a letter under μ , so c = d. Hence μ (w) = μ (r(cs)n cs et) and so w = r(cs)n cs et. Since s is a prefix of s, it follows that cs is a prefix of cs. Thus w contains the γ -power (cs)n cs . Case 3b: |y | is odd. Then |z| is even. Then we are in the mirror image of case 3a, and the same proof works. Case 4: |x| is odd and |y| is odd. Then from length considerations we see that there exist words t, v and letters c, d such that y = cμ (t) = μ (v)d. By Lemma 4.2.4, we have d = c, t = cr , and v = cr for some r ≥ 0. Thus y = c(cc)r . Since y is a nonempty prefix of y and n ≥ 2, we may write μ (w) = xy2 ct for some word t. Since |x| and |y| are odd, and y ends in c, we must have that cc is the image of a single letter under μ , a contradiction. Thus this case cannot occur.

Repetitions in words

105

Corollary 4.2.7 Suppose x is a binary word. Then the largest exponent of a factor of x is exactly the same as the largest exponent of a factor of μ (x) if, and only if, x ∈ {01, 10, 010, 101}. Proof Let a ∈ {0, 1}. One direction is easy, since (a) μ (a a) = a a a a contains a square, while a a has largest power 1; (b) μ (a a a) contains a square, while a a a is a 32 -power. For the other direction, if x or μ (x) contains a power larger than 2, the result follows from Theorem 4.2.6. Hence assume that the largest power in x and μ (x) is less or equal to 2. If the largest power in x is less than 2, then |x| ≤ 3. Since x ∈ {01, 10, 010, 101} this means x ∈ {ε , 0, 1}, and the result follows. So the largest power in x is 2. By Lemma 4.2.3 we know that μ (x) contains a power greater or equal to 2. From above we have μ (x) contains only powers less or equal to 2. So the largest power in μ (x) is 2.

4.2.3 Overlaps An overlap is a word of the form axaxa, where a ∈ Σ and x ∈ Σ∗ . In the terminology of the previous section, an overlap can also be defined as a 2+ -power. An example of an overlap in English is the word alfalfa. A finite or infinite word is overlapfree if it contains no factor that is an overlap. Thue (1912) was the first to show the existence of infinite overlapfree binary words. Theorem 4.2.8 The Thue–Morse word t is overlapfree. Proof The word 0 is clearly overlapfree. By Theorem 4.2.6 we know that if x is overlapfree, then μ (x) is also. Repeatedly applying μ to 0 gives longer and longer prefixes of t, each of which is overlapfree. There is a beautiful and useful characterisation of overlapfree words due to Restivo and Salemi (1985). Basically, the theorem says that, up to the edges, overlapfree words can be factorised as an image under μ of shorter overlapfree words. Theorem 4.2.9 Let x ∈ {0, 1}∗ be overlapfree. Then there exist u, v ∈ {ε , 0, 1, 00, 11} and an overlapfree word y such that x = u μ (y)v. First, we need a lemma. Lemma 4.2.10 Let a ∈ {0, 1} and y ∈ {0, 1}∗. Suppose a a a y is overlapfree. Then at least one of the following holds: (a) |y| ≤ 3; (b) y begins with aa; (c) y begins with aaaa.

106

N. Rampersad and J. Shallit

Proof If y begins with aa or |y| ≤ 3, we are done. So assume y does not begin with aa and |y| ≥ 4. If y begins with a, then a a a y begins with a a a a, which has an overlap. So y begins with a, but not aa. So it begins with a a. If y = a a a z for some z with |z| ≥ 1, then by hypothesis a a ay = a a a a a a z is overlapfree. But whatever the first character of z is, the result has an overlap. So y = a a a z for some z. If z = a z , then a a a y = a a a a a a a z has an overlap. So z = az . Thus y begins with a a a a as desired. Now we can prove Theorem 4.2.9. Proof The proof is by induction on |x|. For |x| = k ≤ 2, the result is easy, for either x = aa for some a ∈ {0, 1}, in which case u = v = ε and y = a, or x is in {ε , 0, 1, 00, 11}. Now assume the result is true for |x| < k; we prove it for |x| = k. Let x be overlapfree of length k greater or equal to 3. Write x = az, where a ∈ {0, 1}. Since |z| < k, we can apply induction to it to get the factorisation z = uμ (y)v. If u = ε or u = a then x = (au)μ (y)v. If u = a, then x = a a μ (y)v = μ (ay)v. Since x is overlapfree, so is μ (ay), and hence ay is overlapfree by Theorem 4.2.6. If u = aa, then x begins with aaa, an overlap. If u = a a, then x = a a a μ (y)v. If y = ε , then x = a a a v. So v ∈ {ε , a, aa}. If v = ε , we get the factorisation x = μ (a)a. If v = a, we get the factorisation μ (a a). If v = aa, we get the factorisation μ (a a)a. Hence |y| ≥ 1. If y = a then x = a a a a av. If v = ε , then x = μ (a a) a. If v = a, then x = μ (a a a). If v = a, then x = μ (a a)a a. If v = aa, then x = μ (a a a)a. If v = aa, then x = a a a a a a a, a contradiction, since x contains an overlap. Finally, if |y| ≥ 2, then |μ (y)| ≥ 4. But the word μ (y) cannot begin with aa, so by Lemma 4.2.10, μ (y) begins with aaaa. This is impossible, so this case cannot occur. We can use this result to prove a similar result for (one-sided) infinite overlapfree words. Theorem 4.2.11 Let x ∈ {0, 1}ω be an overlapfree infinite word. Then there exist u ∈ {ε , 0, 1, 00, 11} and an overlapfree y ∈ {0, 1}ω such that x = uμ (y). Furthermore, u and the first two letters of y are completely determined by a prefix of length 4 of x, unless x begins with 0010 or 1101, in which case a prefix of length 5 suffices. Proof Consider applying Theorem 4.2.9 to longer and longer prefixes of x. For each such prefix xn of length n, we get a factorisation of the form un μ (yn )vn . But there are only finitely many possibilities for un , so among this infinite list of factorisations there must be a single u such that un = u for infinitely many n; say, for n = n1 , n2 , . . .. Let u = u . Then yn1 is a prefix of yn2 , which is a prefix of yn3 , etc., so let y be the unique infinite word such that each yni is a prefix of y. Then x = uμ (y). We now prove the claim about u being determined by a short prefix. Without loss

Repetitions in words

107

Table 4.1 Decompositions of x. Prefix

Decomposition

00100 00101 0011 0100 0101 0110

00 μ (10 · · · ) 0μ (00 · · · ) 0μ (01 · · · ) 0μ (10 · · · ) ε μ (00 · · · ) ε μ (01 · · · )

of generality, assume that x begins with 0. The reader can now check that Table 4.1 provides the only possible decomposition of x.

4.2.4 Fife’s theorem Fife’s theorem says that there is an encoding of all overlapfree words in terms of a finite automaton. In this section we give a version of this theorem. We let O denote the set of (right-) infinite binary overlapfree words. Define p0 = ε , p1 = 0, p2 = 00, p3 = 1, and p4 = 11, and let P = {p0 , p1 , p2 , p3 , p4 }. We can now iterate Theorem 4.2.11 to get the following. Corollary 4.2.12 Every infinite overlapfree word x can be written uniquely in the form x = pi1 μ (pi2 μ (pi3 μ (· · · )))

(4.1)

with i j ∈ {0, 1, 2, 3, 4} for j ≥ 1, subject to the understanding that if there exists c such that i j = 0 for j ≥ c, then we also need to specify whether the ‘tail’ of the expansion represents μ ω (0) = t or μ ω (1) = t. Furthermore, every truncated expansion pi1 μ (pi2 μ (pi3 μ (· · · pin−1 μ (pin ) · · · ))) is a prefix of x, with the understanding that if in = 0, then we need to replace 0 with either 1 (if the ‘tail’ represents t) or 3 (if the ‘tail’ represents t). Proof The form (4.1) is unique, since each pi is uniquely determined by the first five characters of the associated word. Thus, we can associate each infinite binary overlapfree word x with the essentially unique infinite sequence of indices i := (i j ) j≥0 coding elements in P, as specified by (4.1). If i ends in 0ω , then we need an additional element (either 1 or 3) to disambiguate between t and t as the ‘tail’. In our notation, we separate this additional element with a semicolon so that, for example, the word 000 · · · ; 1 represents t and 000 · · · ; 3 represents t.

108

N. Rampersad and J. Shallit

Other sequences of interest include 203000 · · · ; 1, which codes 001001t, the lexicographically least infinite overlapfree binary word (see Theorem 4.2.16 below), and 2(31)ω , which codes the word having, in the ith position, the number of 0s in the binary expansion of i. Of course, not every possible sequence of (i j ) j≥1 of indices corresponds to an infinite overlapfree word. For example, every infinite word coded by 21 · · · represents 00μ (0 μ (. . .)) and hence begins with 000 and has an overlap. Our goal is to characterise precisely, using a finite automaton, those infinite sequences corresponding to overlapfree words. We recall some basic facts about overlapfree words. Lemma 4.2.13 Let a ∈ Σ. Then (a) x ∈ O ⇐⇒ μ (x) ∈ O; (b) a μ (x) ∈ O ⇐⇒ a x ∈ O; (c) a a μ (x) ∈ O ⇐⇒ a x ∈ O and x begins with a a a.

Proof See, for example, (Allouche et al., 1998). We now define 11 subsets of O: A = O, B = {x ∈ Σω : 1x ∈ O}, C = {x ∈ Σω : 1x ∈ O and x begins with 101}, D = {x ∈ Σω : 0x ∈ O}, E = {x ∈ Σω : 0x ∈ O and x begins with 010}, F = {x ∈ Σω : 0x ∈ O and x begins with 11}, G = {x ∈ Σω : 0x ∈ O and x begins with 1}, H = {x ∈ Σω : 1x ∈ O and x begins with 1}, I = {x ∈ Σω : 1x ∈ O and x begins with 00}, J = {x ∈ Σω : 1x ∈ O and x begins with 0}, K = {x ∈ Σω : 0x ∈ O and x begins with 0}.

Next, we describe the relationships between these classes. Lemma 4.2.14 Let x be an infinite binary word. Then we have the following.

109

Repetitions in words x ∈ A ⇐⇒ μ (x) ∈ A

(4.2)

x ∈ B ⇐⇒ 0 μ (x) ∈ A

(4.3)

x ∈ C ⇐⇒ 00 μ (x) ∈ A (4.4) x ∈ D ⇐⇒ 1 μ (x) ∈ A

(4.5)

x ∈ E ⇐⇒ 11 μ (x) ∈ A (4.6) x ∈ D ⇐⇒ μ (x) ∈ B

(4.7)

x ∈ B ⇐⇒ 0 μ (x) ∈ B

(4.8)

x ∈ E ⇐⇒ 1 μ (x) ∈ B

(4.9)

x ∈ B ⇐⇒ μ (x) ∈ D

(4.10)

x ∈ D ⇐⇒ 1 μ (x) ∈ D (4.11) x ∈ C ⇐⇒ 0 μ (x) ∈ D (4.12) x ∈ I ⇐⇒ μ (x) ∈ E

(4.13)

x ∈ C ⇐⇒ 0 μ (x) ∈ E (4.14)

x ∈ F ⇐⇒ μ (x) ∈ C

(4.15)

x ∈ E ⇐⇒ 1 μ (x) ∈ C (4.16) x ∈ J ⇐⇒ 0 μ (x) ∈ I (4.17) x ∈ G ⇐⇒ 1 μ (x) ∈ F (4.18) x ∈ K ⇐⇒ μ (x) ∈ J

(4.19)

x ∈ J ⇐⇒ μ (x) ∈ K

(4.20)

x ∈ B ⇐⇒ 0 μ (x) ∈ J (4.21) x ∈ C ⇐⇒ 0 μ (x) ∈ K (4.22) x ∈ H ⇐⇒ μ (x) ∈ G (4.23) x ∈ G ⇐⇒ μ (x) ∈ H (4.24) x ∈ D ⇐⇒ 1 μ (x) ∈ G (4.25) x ∈ E ⇐⇒ 1 μ (x) ∈ H (4.26)

Proof Assertions (4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.9, 4.10, 4.12) follow from Lemma 4.2.13. The remaining ones can be proved as follows: (4.8, 4.11): a μ (x) ∈ B ⇐⇒ a aμ (x) = μ (a x) ∈ O ⇐⇒ a x ∈ O. (4.13, 4.15): μ (x) ∈ E ⇐⇒ (a μ (x) ∈ O and μ (x) begins with a a a) ⇐⇒ (ax ∈ O and x begins with aa). (4.14, 4.16): a μ (x) ∈ E ⇐⇒ (aa μ (x) ∈ O and a μ (x) begins with a a a) ⇐⇒ (1x ∈ O and x begins with a a a). (4.17, 4.18): a μ (x) ∈ I ⇐⇒ (a aμ (x) ∈ O and aμ (x) begins with aa) ⇐⇒ (μ (ax) ∈ O and x begins with a) ⇐⇒ (ax ∈ O and x begins with a). (4.19, 4.23, 4.20, 4.24): μ (x) ∈ J ⇐⇒ (a μ (x) ∈ O and μ (x) begins with a) ⇐⇒ (ax ∈ O and x begins with a). (4.21, 4.25): a μ (x) ∈ J ⇐⇒ (a aμ (x) ∈ O and a μ (x) begins with a) ⇐⇒ μ (ax) ∈ O ⇐⇒ ax ∈ O. (4.22, 4.26): aμ (x) ∈ K ⇐⇒ (aa μ (x) ∈ O and a μ (x) begins with a) ⇐⇒ (ax ∈ O and x begins with a a a). We can now use the result of the previous lemma to create an 11-state automaton (Figure 4.1) that accepts all infinite sequences (i j ) j≥1 over Δ := {0, 1, 2, 3, 4} such that pi1 μ (pi2 μ (pi3 μ (· · · ))) is overlapfree. Each state represents one of the sets A, B, . . . , K defined above, and the transitions are given by Lemma 4.2.14. Of course, we also need to verify that transitions not shown correspond to the empty set of infinite words. For example, a transition out of B on the symbol 2 would correspond to the set {x : 100 μ (x) ∈ O}. But if x begins with 0, then 100μ (x) = 10001 · · · contains the overlap 000 as a factor, whereas if x begins with 10, then 100 μ (x) = 1001001 · · · contains the overlap 1001001 as a factor, and if x begins with 11, then 100 μ (x) = 1001010 · · · contains 01010 as a factor. Similarly, we can

110

N. Rampersad and J. Shallit

verify that all other transitions not given in Figure 4.1 correspond to the empty set. This is left to the reader.

0

K 0

J 1

0

H 0 1

3

1

G 3

3

I

F

1

0

0 3

E 4 3 B 1

C

0 1

A 0 0

2 3

1 D 3

Figure 4.1 Automaton coding infinite binary overlapfree words.

From Lemma 4.2.14 and the results above, we get the following result. Theorem 4.2.15 Every infinite binary overlapfree word x is encoded by an infinite path, starting in A, through the automaton in Figure 4.1. Every infinite path through the automaton not ending in 0ω codes a unique infinite binary overlapfree word x. If a path i ends in 0ω and this suffix corresponds to a cycle on state A or a cycle between states B and D, then x is coded by either i; 1 or i; 3. If a path i ends in 0ω and this suffix corresponds to a cycle between states J and K, then x is coded by i; 1. If a path i ends in 0ω and this suffix corresponds to a cycle between states G and H, then x is coded by i; 3. As an application, we recover the following result (Allouche et al., 1998). Theorem 4.2.16 001001t.

The lexicographically least infinite binary overlapfree word is

Proof Let x be the lexicographically least infinite overlapfree binary word, and let y be its code. Then y[1] must be 2, since any other choice codes a word that starts with 01 or something lexicographically greater. Once y[1] = 2 is chosen, the

Repetitions in words

111

next two symbols must be y[2..3] = 03. Now we are in state G. We argue that the lexicographically least word that follows causes us to alternate between states G and H on 0, producing 100 · · ·. For otherwise our only choices are 30, 31, or (if we are in G) 33 as the next two symbols, and all of these code a word lexicographically greater than 100. Hence y = 203 0ω ; 1 is the code for the lexicographically least sequence, and this codes 001001t.

4.2.5 Powerfree morphisms A morphism h is squarefree (respectively α -powerfree) if h(w) is squarefree (respectively α -powerfree) for every squarefree (respectively α -powerfree) word w. We have already seen in Theorem 4.2.6 that the morphism μ is α -powerfree for all α > 2. We say a morphism is infix if no image of a letter is a factor of the image of another letter. Theorem 4.2.17 Let f : Σ∗ → Δ∗ be an infix morphism. Then f is squarefree if, and only if, f (w) is squarefree for all words w of length at most 3. Proof One direction is trivial. For the other, we first establish the following claim. Let w = w0 w1 · · · wn be a squarefree word, where each wi ∈ Σ. If f (w) = x f (a)y for some x, y ∈ Σ∗ and a ∈ Σ, then there exists j such that x = f (w1 · · · w j−1 ), a = w j , and y = f (w j+1 · · · wn ).

Suppose to the contrary that the claim is false. Since f is infix, the word f (a) is then a factor of f (w j w j+1 ) for some j such that f (a) crosses the boundary between f (w j ) and f (w j+1 ). So we can write f (w j ) = pq, f (w j+1 ) = rs, and f (a) = qr, where q and r are non-empty words. Now f (aw j a) = qrpqqr contains a square, so aw j a must contain a square as well. It follows that a = w j . Similarly, we obtain a = w j+1 . Thus w j = w j+1 , contradicting the squarefreeness of w. This establishes the claim. Let w = w0 w1 · · · wn be a shortest squarefree word such that f (w) contains a square. Write f (w) = xyyz and let Wi denote f (wi ); that is, f (w) = W0W1 · · ·Wn = xyyz. By the minimality of w we may suppose that x is a proper prefix of W0 and z is a proper suffix of Wn . Write W0 = xW0 and Wn = Wn z, so that yy = W0W1 · · ·Wn−1Wn . Now, there exists j such that y = W0W1 · · ·W j = W jW j+1 · · ·Wn , where W j = W jW j . Since f preserves squarefreeness on words of length at most 3, we must have n ≥ 3. If j = 0 then W1W2 is a factor of W0 , contradicting the infix property of f . Similarly, we cannot have j = n. By the claim shown above, we have W0 = W j ,W1 = W j+1 , . . . ,W j = Wn . It follows that f (w0 w j wn ) = xW0W jW jWn z = xW0W jW0W j z

112

N. Rampersad and J. Shallit

contains the square W0W jW0W j . It must therefore be the case that w0 w j wn contains a square: i.e., that either w0 = w j or that w j = wn . Without loss of generality, suppose that w0 = w j . The images of f must certainly be distinct, so from W1 = W j+1 , . . . ,W j−1 = W2 j−1 we have w1 = w j+1 , . . . , w j−1 = w2 j−1 . Then w = w0 w1 · · · w j−1 w0 w1 · · · w j−1 wn contains a square, a contradiction. Theorem 4.2.2 now follows upon verifying that the morphism h defined in (1.2) satisfies the conditions of Theorem 4.2.17. The following result characterises the uniform squarefree morphisms. Theorem 4.2.18 Let h be a uniform morphism over an alphabet Σ. Then h is squarefree if, and only if, h(w) is squarefree for all squarefree w of length 3. Proof One direction is trivial. For the other note that if f preserves squarefreeness on words of length at most 3, then the images of f must be distinct. Since f is uniform, this implies that f is infix and we can apply the criterion of Theorem 4.2.17.

The following is a criterion for squarefreeness that imposes no restrictions on the morphism. Theorem 4.2.19 Let f : Σ∗ → Δ∗ be a morphism. Then f is squarefree if, and only if, f (w) is squarefree for all words w of length at most +,  * M( f ) − 3 , max 3, 1 + m( f ) where M( f ) = max | f (a)| and m( f ) = min | f (a)|. a∈Σ

a∈Σ

The following is a criterion for k-powerfreeness similar to that of Theorem 4.2.17, but imposes an additional restriction on the morphism. Theorem 4.2.20 Let k ≥ 2 and let f : Σ∗ → Δ∗ be a morphism. Suppose that f satisfies the following: (a) f is infix, and (b) if a, b, c ∈ Σ and x f (a)y = f (bc) then either x is empty and a = b or y is empty and b = c. Then f is k-powerfree if, and only if, f (w) is k-powerfree for all words w of length at most k + 1. Bean et al. (1979) proved the following result. Theorem 4.2.21 For any alphabet Σ of size at least three, there exists a squarefree morphism h : Σ∗ → {0, 1, 2}∗. Further, for any alphabet Δ of size at least two, there exists a cubefree morphism g : Δ∗ → {0, 1}∗.

113

Repetitions in words

4.2.6 The probabilistic method Most of our previous results concerning the existence of infinite words avoiding repetitions have been explicit, in the sense that we exhibited the desired word, usually by specifying a morphism that generates it by iteration. Next we examine a probabilistic technique for showing the avoidability of repetitions. One of the first results on words to use the probabilistic method is the following theorem due to Beck (1984). Theorem 4.2.22 For any real ε > 0, there exist an integer Nε and an infinite binary word w such that for every factor x of w of length n > Nε , all occurrences of x in w are separated by a distance at least (2 − ε )n. The main tool used to proved this result is a lemma from probabilistic combinatorics known as the Lov´asz local lemma. Given a set S of probability events, we can construct a dependency digraph D = (S, E), where the event X is mutually independent of the events {Y : (X ,Y ) ∈ E}. The Lov´asz local lemma is the following. Lemma 4.2.23 Let A1 , A2 , . . . , At be events in a probability space, with a dependency digraph D = (S, E). Suppose there exist real numbers x1 , x2 , . . . , xt with 0 ≤ xi < 1 for 1 ≤ i ≤ t such that Pr(Ai ) ≤ xi



(1 − x j )

(4.27)

(i, j)∈E

for 1 ≤ i ≤ t. Then the probability that none of the events A1 , A2 , . . . , At occurs is at least

∏ (1 − xi).

1≤i≤t

Let us now apply this lemma to prove the existence of an infinite squarefree word over a finite alphabet. Let Ai,r be the event that there exists a square of length 2r beginning at position i of a word of length n, i.e., that ai ai+1 · · · ai+r−1 = ai+r ai+r+1 · · · ai+2r−1 . Then the event Ai,r is mutually independent of the set of all events A j,s when i + 2r − 1 < j or i > j + 2s − 1. Thus in our dependency digraph, (i, r) is connected to ( j, s) by an edge in each direction if i + 2r − 1 ≥ j and i ≤ j + 2s − 1. As in the statement of the lemma, we now associate a real number xi,r with each event Ai,r . We then have



((i,r),( j,s))∈E



(1 − x j,s) =

(1 − x j,s)

i−2s+1≤ j≤i+2r−1 0≤ j≤n−2s 1≤s≤n/2

≥ ∏(1 − x j,s)2r+2s−1 . s≥1

Note that we get this inequality by throwing away the requirement that 0 ≤ j ≤ n/2 (which adds more factors, each of which is less than 1), and by extending the product on s to infinity, instead of just to n/2.

114

N. Rampersad and J. Shallit

Now taking logs will turn this product into a sum, and we get



log(1 − x j,s) ≥

((i,r),( j,s))∈E

∑ (2r + 2s + 1) log(1 − x j,s).

s≥1

We now have to choose the x j,s . This is somewhat of a black art, but it turns out that choosing x j,s = α −s for some α often works. We now have to estimate log(1 − x j,s ). Since log(1 − x) = −x − x2 /2 − x3 /3 − · · ·, it seems reasonable that we can bound log(1 − x) from below with −x, minus a little bit more. Suppose 0 ≤ x < d for some real number d < 1. We claim there exists a real number c < −1 (depending on d) such that log(1 − x) ≥ cx. To see this, note that if f (x) = log(1 − x) − cx, then f  (x) = 1/(x − 1) − c. Setting f  (x) = 0, we see that f has a local maximum at m = (c + 1)/c, and f (x) is increasing and positive on the interval (0, m) and decreasing on the interval (m, 1). Thus, if we find y < z such that f (y) > 0 and f (z) < 0, and y > m, then we know that log(1 − x) ≥ cx for x ≤ y. To apply this idea, we need to choose y such that α −s ≤ y for all s. Clearly it suffices to ensure this for s = 1. With the appropriate choice of c and α , we then get

∑ (2r + 2s − 1) log(1 − x j,s) ≥ ∑ (2r + 2s − 1)cα −s

s≥1

s≥1

= (2r − 1)c ∑ α −s + 2c ∑ sα −s s≥1

s≥1

2cα (2r − 1)c + = . α −1 (α − 1)2 Now if our events are taking place over an alphabet of cardinality k, then Pr(Ai,r ) = k−r , so if log Pr(Ai,r ) = −r log k ≤ −r log α +

2cα (2r − 1)c + , α −1 (α − 1)2

then equation (4.27) of the local lemma will be satisfied, and we can conclude that there is a word w of length n such that none of the events Ai,r occurs, or in other words, that w is squarefree. It now suffices to take α = 6.23, c = −1.091, y = 0.162, and k ≥ 13. Thus we have shown that squarefree words of all lengths exist over an alphabet of cardinality greater or equal to 13, and hence by Lemma 4.2.1, infinite squarefree words exist over an alphabet of cardinality greater or equal to 13.

4.3 Dejean’s theorem 4.3.1 The repetition threshold Given an alphabet Σ of k symbols, we can ask ’What is the infimum of real numbers α such that we can avoid α -powers over Σ?’. This number is sometimes called the repetition threshold, and is denoted by RT (k). From Theorem 4.2.8 we already know that RT (2) ≤ 2. Since every word of length greater or equal to 4 contains a square,

Repetitions in words

115

it follows that RT (2) = 2. Similarly, from Theorem 4.2.2 we know that RT (3) ≤ 2, since we can avoid squares on 3 letters. Dejean (1972) proved that RT (3) = 7/4. She also conjectured that RT (4) = 7/5 and for k ≥ 5, RT (k) = k/(k − 1). This conjecture has recently been resolved, due to the combined efforts of several researchers (see endnotes). We start with a formal definition. Definition 4.3.1 The quantity RT (k) is defined to be the infimum, over all extended reals α , of the exponent α such that there exists an infinite α -powerfree word over an alphabet of size k. It is not hard to see, using the Lov´asz local lemma, that RT (k) exists. Note that, a priori, RT (k) could be (a) rational, (b) irrational, or (c) of the form α + where α is rational. However, it turns out that case (c) holds for all k. Theorem 4.3.2 Let k, the alphabet size, be fixed. If for all ε > 0 there exists an infinite word avoiding (α + ε )-powers, then (a) it is possible to avoid α -powers, if α is irrational; (b) it is possible to avoid α + -powers, if α is rational. Proof For each n ≥ 1, let xn be an infinite word avoiding (α + 1/n)-powers. Then there must be a letter a1 such that xn starts with a1 for infinitely many n. Similarly, there must be a letter a2 such that among the xn starting with a1 there are infinitely many that begin with a1 a2 . Continuing in this manner, one defines an infinite word w = a1 a2 · · · such that every prefix of w avoids (α + 1/n)-powers for arbitrarily large n. Now suppose that α is irrational and suppose that w contains a factor y with exponent β ≥ α . Since y is a finite word, the exponent β is rational, but α is irrational, so in fact we have β > α . Indeed, for large enough n we have β > α + 1/n. Thus w contains an (α + 1/n)-power, which is a contradiction. If α is rational and w contains an α + -power, then again it contains a factor y with exponent β where β > α , and the same argument as in the previous paragraph leads to a contradiction. Theorem 4.3.3 One has RT (3) > 7/4 and RT (4) > 7/5. Proof By computer search. Theorem 4.3.4 One has RT (4) = (7/4)+. Proof Consider the following morphism. h(0) = 0120212012102120210 h(1) = 1201020120210201021 h(2) = 2012101201021012102

116

N. Rampersad and J. Shallit

We claim that h is a (7/4)+ -avoiding morphism, in that it preserves words with this property. By iterating h on 0, we obtain an infinite word with the desired property. Let us now prove that h avoids (7/4)+ -powers. Assume that x is the shortest word without (7/4)+ -powers such that h(x) contains a (7/4)+ -power, that is, a word of the form uu with |u | > 34 |u| and u a prefix of u. When we speak of a block we will mean the image, under h, of a single letter. There are three cases to consider. Case 1: |x| < 4. We can simply enumerate all such words x avoiding (7/4)+ powers and check that h(x) also has no (7/4)+ -powers. Case 2: |x| ≥ 4. Since x is the shortest such word, we can assume that u begins within h(a), where a is the first letter of x and u ends within h(b) where b is the last letter of x; otherwise we could find a shorter x. Thus |uu | ≥ 40 and hence |u | ≥ 18. Case 2a: 19  |u|. Since |u | ≥ 18, either the first 7 letters or the last 7 letters of a block lie entirely within u , say at a position j letters from the start of u . Now look at the 7 letters that occur j letters from the start of u. Since 19  |u|, these 7 letters must straddle two blocks in u. But a short computation shows that neither the first 7 letters nor the last 7 letters of any block can straddle the boundary between two blocks. Case 2b: 19 | |u|. Now the first letter of any block uniquely determines the block, and the same is true of the last letter of any block. Thus the (7/4)+ -power uu leads to at least a (7/4)+ -power within x, a contradiction.

4.3.2 Restrictions on morphic constructions The proof of Theorem 4.3.4 was accomplished by the exhibition of a (7/4)+ -avoiding morphism. One might then suppose that Dejean’s theorem could be established for each alphabet size by producing morphisms of this type. We shall now see that for alphabets of size larger than three, this is not possible. In this section and the next we assume that we are working over the alphabet Σk = {1, 2, . . . , k}. First, for each integer k ≥ 2, define the quantity ⎧ ⎪ ⎪ ⎨7/4, if k = 3; αk = 7/5, if k = 4; ⎪ ⎪ ⎩ k , if k = 3, 4. k−1

Dejean’s theorem is that RT (k) = αk . We first show that if αk+ -free morphisms exist over a k-letter alphabet, they must be uniform. Theorem 4.3.5 form.

Let k ≥ 3 and let h : Σk → Σk be an αk+ morphism. Then h is uni-

Repetitions in words

117

Proof Without loss of generality suppose that |h(1)| = max{|h(a)| : a ∈ Σk } and |h(2)| = min{|h(a)| : a ∈ Σk }. Suppose further that h is not uniform, so that |h(1)| > |h(2)|. If k = 3, then 1232123 is (7/4)+ -free but h(1232123) is a (7/4)+-power. If k = 4, then 12342132431234 is (7/5)+ -free but h(12342132431234) is a (7/5)+ power. If k > 4, then 12 · · · (k − 1)1 is (k/(k − 1))+ -free but h(12 · · · (k − 1)1) is a (k/(k − 1))+-power. The next result gives a further restriction on αk+ -free morphisms. Theorem 4.3.6 Let k ≥ 2 and let h : Σk → Σk be an αk+ morphism. Then the first letters of the words in h(Σk ) are distinct. Similarly, the last letters of the words in h(Σk ) are also distinct. Proof Suppose to the contrary, and without loss of generality, that h(1) = au and h(2) = av for some letter a and words u, v. If k = 2, then 112 is overlapfree, but h(112) = auauav contains the overlap auaua. If k > 2, then define the word ⎧ ⎪ if k = 3; ⎪ ⎨1231231, wk = 123421324312341, if k = 4; ⎪ ⎪ ⎩k23 · · · (k − 1)k1, if k > 4; and write wk = x2zx1, where |x2zx|/|x2z| = αk . Note that wk is αk+ -free. By Theorem 4.3.5, the morphism h is uniform. It follows that h(wk ) = h(x)avh(z)h(x)au contains the αk+ -power h(x)avh(z)h(x)a. We can now show that Dejean’s argument does not generalise to alphabets of size larger than 3. In the statement of the following theorem, the term growing morphism refers to a morphism h such that h(a) = ε for all a ∈ Σ and |h(a)| > 1 for at least one letter a ∈ Σ. Theorem 4.3.7 Σk .

Let k ≥ 4. There exists no growing αk+ -free morphism from Σk to

Proof Suppose there exists such a morphism h. Without loss of generality, we may assume that there exists a ∈ Σk such that h(a) = 12u for some word u. By Theorem 4.3.6, the k words in h(Σk ) all end differently, so there exists b ∈ Σk such that h(b) = v2 for some word v. We first show that a = b. Again, by Theorem 4.3.6, there exists c ∈ Σk , c = a, such that h(c) = 2w for some word w. If a = b, then h(ac) = v22w contains the square 22, so we must have a = b. Now h(ba) = v212u contains the 3/2power 212. This is a contradiction, since for k ≥ 4 we have αk < 3/2. This result shows that we cannot hope to prove Dejean’s theorem by producing αk+ -free morphisms. This does not completely rule out the possibility of generating αk+ -free words with a morphism. It could be the case that there exist morphisms h that are not αk+ -free but still have an infinite αk+ -free fixed point. However, the previous result is strong evidence that a new idea was needed in order to prove Dejean’s theorem for larger alphabets. This new idea was provided by Pansiot.

118

N. Rampersad and J. Shallit

4.3.3 Pansiot recoding Dejean conjectured that RT (k) = (k/(k − 1))+ . We now show that RT (k) ≥ (k/(k − 1))+ . Proposition 4.3.8 (a) Every word w over a k-letter alphabet of length greater or equal to k + 2 contains a k/(k − 1)-power. (b) Every word x over a k-letter alphabet avoiding (k/(k − 1))+ -powers has the property that every factor of length k − 1 consists of k − 1 different symbols. Proof (a) Let w be of length greater or equal to k + 2. Then w contains at least 3 factors of length k. If any one of these factors contains some symbol of the alphabet at least twice, then w contains a power greater or equal to k/(k − 1), and we are done. Otherwise, up to a permutation of the symbols, we can assume w = 123 · · · k12. But then w contains a (k + 2)/k-power, and (k + 2)/k ≥ k/(k − 1) for k ≥ 2. (b) Suppose x does not have the stated property. Then there is a factor of length k − 1 containing some symbol twice, and hence containing a power greater or equal to (k − 1)/(k − 2). But (k − 1)/(k − 2) ≥ k/(k − 1) for k ≥ 3. This result suggests the following. Suppose a word w over a k-letter alphabet avoids (k/(k − 1))+ -powers, and suppose y is a factor of length k − 1. Then the letter following y is either • the first letter of y; or • the unique symbol of Σk missing from y. We can code these two choices with a 0 in the first case and a 1 in the second case. If we then know the sequence of codes and the first k − 1 letters of w, we can uniquely reconstitute w. Such a coding is called the Pansiot recoding. Example 4.3.9 Let us recode 12341532145 ∈ Σ5 : the series of codes is 0101011. Given 1234 and 0101011 we can reconstitute the original word. With this coding, we can also interpret a word over Σk of length k − 1, with no symbol occurring twice, as a permutation of Σk . Namely, if w = a1 a2 · · · ak−1 , the associated permutation is   1 2 3 ··· k − 1 k , a1 a2 a3 · · · ak−1 b where b is the unique letter in Σk − {a1 , a2 , . . . , ak }. Going from a block of k − 1 letters to the next block of k − 1 letters, obtained by a ‘0’ code, amounts to right multiplication by the permutation   1 2 3 ··· k − 1 k σ0 = , 2 3 4 ··· 1 k

Repetitions in words

119

and going from one block to the next via a ‘1’ code amounts to right multiplication by the permutation   1 2 3 ··· k − 1 k σ1 = . 2 3 4 ··· k 1 Example 4.3.10 Suppose k = 5 and the last four symbols seen were 4135. Then a code of ‘0’ produces the next symbol 4 and a code of ‘1’ produces the next symbol 2. The original block, 4135, corresponds to the permutation   1 2 3 4 5 p= . 4 1 3 5 2 The new block 1354, following a ‘0’ code, corresponds to the permutation   1 2 3 4 5 q= 1 3 5 4 2 while the new block 1352, following a ‘1’ code, corresponds to the permutation   1 2 3 4 5 r= . 1 3 5 2 4 The reader can now check that q = pσ0 and r = qσ1 . For the remainder of this section we are interested in repetitions of exponent less than 2, and we write them in a special way. If w = pe with e a prefix of pe, p = ε , then we write w = (p, e). Here p is called the period and e the excess. Note that w = pe = ep and such a word is a |pe|/|p|th power. A repetition (p, e) with |e| < k − 1 is called a short repetition, while a repetition with |e| ≥ k − 1 is called a kernel repetition. A repetition of exponent (|e| + |p| + k − 1)/|p| in a word corresponds to a kernel repetition (p, e) in the Pansiot code for that word. The term ‘kernel repetition’ comes from the fact that under the morphism ψ mapping 0 to the permutation σ0 and 1 to σ1 , the period p is mapped to the identity permutation; i.e., the period p is in the kernel of ψ . Example 4.3.11 Let k = 4 and consider the word w = 1234134123413. The Pansiot encoding of w is x = 1100011110. Note that x = (p, e) = (1100011, 110). We see that w has exponent (|pe| + k − 1)/|p| = (10 + 4 − 1)/7 = 13/7, and that ψ (p) = ψ (1100011) is the identity permutation on {1, 2, 3, 4}. To construct a word w avoiding RT (k)+ -powers we construct its Pansiot code v such that v avoids both the forbidden short repetitions and the forbidden kernel repetitions. The short repetitions have a bounded length, and one can check by computer that v does not contain any of them. To avoid kernel repetitions, we generate the word v by iterating a morphism. If the morphism satisfies certain technical conditions, one can ensure that long kernel repetitions are the image under the morphism of shorter

120

N. Rampersad and J. Shallit

kernel repetitions. One then verifies by computer that no such kernel repetitions are present in v. Theorem 4.3.12 There is an infinite word over a 4-letter alphabet avoiding (7/5)+ powers. Proof sketch Consider the infinite word generated by the morphism h, where h(1) = 10 and h(0) = 101101. Then hω (1) codes an infinite word over {1, 2, 3, 4} avoiding (7/5)+-powers. The full proof of Dejean’s theorem is long and technical. The most important piece of the proof is a result of Carpi (2007), which establishes Dejean’s theorem for all alphabets of size greater or equal to 33.

4.4 Avoiding repetitions in arithmetic progressions In this section we consider the question of the existence of infinite words that avoid squares in certain subsequences indexed by arithmetic progressions. Of course, by the classical theorem of van der Waerden, it is not possible to avoid repetitions in all arithmetic subsequences. Theorem 4.4.1 (van der Waerden’s theorem) Let the natural numbers be partitioned into finitely many disjoint sets C1 ,C2 , . . . ,Ck . Then some Ci contains arbitrarily long arithmetic progressions. A subsequence of w is a word of the form wi0 wi1 · · · , where 0 ≤ i0 < i1 < · · · . An arithmetic subsequence of difference j of w is a word of the form wi wi+ j wi+2 j · · · , where i ≥ 0 and j ≥ 1. We also define finite subsequences in the obvious way. If a word w has the property that no arithmetic subsequence of difference j contains a square (respectively cube, r-power, r+ -power), we say that w contains no squares (respectively cubes, r-powers, r+ -powers) in arithmetic progressions of difference j. Theorem 4.4.2 Let p be a prime and let m be a non-negative integer. There exists an infinite word over a finite alphabet that avoids (1 + 1/pm )-powers in arithmetic progressions of all differences, except those differences that are a multiple of p. Proof Let q = pm+1 . We construct an infinite word with the desired properties over the alphabet Σ = {n : 0 < n < 2q2 and q  n}.

Repetitions in words

121

Define w = a1 a2 · · · as follows. For n ≥ 1, write n = qt n , where q  n , and define an =

n mod q2 , q2 + (n

mod

if t = 0; q2 ),

if t > 0.

Suppose that w contains a (1 + 1/pm)-power in an arithmetic progression of difference k, where k is not a multiple of p. That is, ai ai+k · · · ai+(s−1)k = ai+rk ai+(r+1)k · · · ai+(r+s−1)k for some integers i, r, s satisfying s/r ≥ 1/pm . Suppose that ai = ai+rk > q2 . Then by the definition of w, q divides both i and i+rk and hence divides rk. If instead ai = ai+rk < q2 , then i mod q2 = (i + rk) mod q2 , so that q2 divides rk. In either case, since p does not divide k, it must be the case that q divides r. We can therefore write r = q r for some positive integers , r with r not divisible by q. Recall that s/r ≥ 1/pm , so that s ≥ q r /pm = pq−1r ≥ pq−1. It follows that the set {i, i+ k, . . . , i+ (s− 1)k} forms a complete set of residue classes modulo pq−1. Thus there exists j ∈ {i, i + k, . . ., i + (s − 1)k} such that j ≡ q−1

(mod pq−1 ).

Let us write then j = apq−1 + q−1 = q−1(ap + 1), for some non-negative integer a. We also have j + rk = q−1 (ap + 1) + qr k = q−1 (ap + 1 + qrk). Furthermore, since a j = a j+rk , from the definition of w, we obtain ap + 1 ≡ ap + 1 + qrk

(mod q2 ),

so that qr k ≡ 0 (mod q2 ). We must therefore have r k ≡ 0 (mod q). However, p does not divide k, and q does not divide r , so this congruence cannot be satisfied. This contradiction completes the proof. Corollary 4.4.3 There exists an infinite word over a 4-letter alphabet that contains no squares in arithmetic progressions of odd difference.

122

N. Rampersad and J. Shallit

Proof This is a special case of Theorem 4.4.2, obtained by taking p = 2 and m = 0 (so that q = 2) in that theorem. We then have Σ = {1, 3, 5, 7} and, writing n = 2t n , an =

n mod 4, 4 + (n

if n is odd;

mod 4), if n is even.

It follows that w = 1535173515371735153 · · · contains no squares in arithmetic progressions of odd difference. The word w defined in Corollary 4.4.3 is closely related to the well-studied paperfolding word. Indeed, if one applies the map 1, 5 → 0, 3, 7 → 1 to the word w, one obtains the paperfolding word f = 0010011000110110001 · · · . We now show how to apply the previous construction to define non-repetitive labellings of the integer lattice. Consider a map w from N2 to A, where we write wm,n for w(m, n). We call such a w a 2-dimensional word. A word x is a line of w if there exists i1 , i2 , j1 , j2 such that gcd( j1 , j2 ) = 1, and for t ≥ 0, xt = wi1 + j1t,i2 + j2t . Theorem 4.4.4 There exists a 2-dimensional word w over a 16-letter alphabet such that every line of w is squarefree. Proof Let u = u0 u1 u2 · · · and v = v0 v1 v2 · · · be any infinite words over the alphabet A = {1, 2, 3, 4} that avoid squares in all arithmetic progressions of odd difference. We define w over the alphabet A × A by wm,n = (um , vn ). Consider an arbitrary line x = (wi1 + j1t,i2 + j2t )t≥0 , = (ui1 + j1t , vi2 + j2 t )t≥0 , for some i1 , i2 , j1 , j2 , with gcd( j1 , j2 ) = 1. Without loss of generality, we may assume j1 is odd. Then the word (ui1 + j1t )t≥0 is an arithmetic subsequence of odd difference of u and hence is squarefree. The line x is therefore also squarefree. A computer search shows that there are no 2-dimensional words w over a 7-letter alphabet, such that every line of w is squarefree. It remains an open problem to determine if the alphabet size of 16 in Theorem 4.4.4 is best possible.

Repetitions in words

123

4.5 Patterns We have seen previously that a square is a word of the form xx. We now consider more general patterns. Let Σ and Δ be alphabets: the alphabet Δ is the pattern alphabet and its elements are variables. A pattern p is a non-empty word over Δ. A word w over Σ is an instance of p if there exists a non-erasing morphism h : Δ∗ → Σ∗ such that h(p) = w. Let x1 , x2 , . . . be variables. Define Z1 = x1 and for all n > 1 define Zn = Zn−1 xn Zn−1 . The words Zn are called the Zimin patterns. Theorem 4.5.1 The Zimin patterns Zn are unavoidable. Proof The proof is by induction on n. Let Σ be an arbitrary alphabet and let k = |Σ|. Clearly Z1 is unavoidable on Σ. Suppose that Zn is unavoidable on Σ. Then there is an integer N such that every word of length N contains an instance of Zn . There are kN such words. Now consider a word w ∈ Σ∗ of length M = kN (N + 1) + N. Write w = W0 a0W1 a1 · · ·WkN −1 akN −1WkN , where for 0 ≤ i ≤ kN , |Wi | = N and |ai | = 1. By the pigeonhole principle there exists i < j such that Wi = W j . Moreover, the word Wi contains an instance of Zn . Write Wi = W  yW  , where y is an instance of Zn . Thus, the word yW  aiWi+1 ai+1 · · ·W j−1 a j−1W  y begins and ends with an instance of Zn , and hence is an instance of Zn+1 . It follows that any word of length M over Σ contains an instance of Zn+1 . Since Σ was arbitrary, we conclude that Zn+1 is unavoidable. The following classification of the binary patterns is due to the combined work of several authors. Theorem 4.5.2 We have the following classification of binary patterns. • The patterns x, xy, and xyx are unavoidable. • The patterns xx, xxy, xxyx, xyyx, xyxy, xxyxx, and xxyxy are avoidable over a ternary alphabet but not over a binary alphabet. • All other patterns not equal to the reverse or complement of the above listed patterns are avoidable over a binary alphabet.

4.6 Abelian repetitions An abelian square is a word of the form xx with |x| = |x | and x a permutation of x. Examples in English include reappear and intestines. An abelian cube is a word of the form xx x with |x| = |x | = |x | and x and x both permutations of x. An example in English is deeded. Similarly, we can speak about abelian kth powers for any integer k ≥ 2. Erd˝os (1961) posed the problem of the existence of an infinite word over a finite alphabet that avoids abelian squares. Evdokimov (1968) gave an example over a

124

N. Rampersad and J. Shallit

25-letter alphabet, and this was improved to 5 letters by Pleasants (1970). Finally, Ker¨anen (1992) gave an example over a 4-letter alphabet. Theorem 4.6.1 Abelian squares cannot be avoided over an alphabet of size 3. Abelian cubes cannot be avoided over an alphabet of size 2.

4.6.1 The adjacency matrix associated with a morphism Given a morphism ϕ : Σ∗ → Σ∗ for some finite set Σ = {a1 , a2 , . . . , ad }, we define the adjacency matrix M = M(ϕ ) as follows: M = (mi, j )1≤i, j≤d , where mi, j is the number of occurrences of ai in ϕ (a j ), i.e., mi, j = |ϕ (a j )|ai . Example 4.6.2 Consider the morphism ϕ defined by

ϕ : a → ab b → cc c → bb. Then a ⎛ a 1 M(ϕ ) = b ⎝ 1 c 0

b

c ⎞ 0 0 0 2⎠ 2 0

Let us also define the map ψ : Σ∗ → Zd by ψ (w) = [|w|a1 , |w|a2 , . . . , |w|ad ]T . The matrix M(ϕ ) is useful because of the following proposition. Proposition 4.6.3 One has

ψ (ϕ (w)) = M(ϕ )ψ (w). Proof Clearly we have |ϕ (w)|ai =



|ϕ (a j )|ai |w|a j .

1≤ j≤d

From this, the desired equation easily follows. Now an easy induction gives M(ϕ )n = M(ϕ n ), and hence Corollary 4.6.4 One has

ψ (ϕ n (w)) = (M(ϕ ))n ψ (w). Hence we find Corollary 4.6.5 One has |ϕ n (w)| =

1

1 1

1 ···

1

2

(M(ϕ ))n ψ (w).

125

Repetitions in words Table 4.2 Progression-freeness of A. a

g

i

a + ig

a

g

i

a + ig

0 0 0 0 0 0

1 2 3 4 5 6

3 3 1 3 1 1

3 6 3 5 5 6

1 1 1 1 1 1

1 2 3 4 5 6

2 1 3 1 1 2

3 3 3 5 6 6

a

g

i

a + ig

a

g

i

a + ig

2 2 2 2 2 2

1 2 3 4 5 6

1 2 1 1 2 3

3 6 5 6 5 6

4 4 4 4 4 4

1 2 3 4 5 6

1 1 2 2 3 1

5 6 3 5 5 3

4.6.2 Dekking’s construction In this section we explore a construction due to Dekking (1979) that gives optimal results for abelian-powerfree words over alphabets of size 2 and 3. We start with some definitions about morphisms and groups. Let ϕ : Σ∗ → Σ∗ be a morphism. If w = ϕ (a) is the image of a single letter a ∈ Σ, we call it a block. If ϕ (a) = vv , v = ε , then we call v a left subblock and v its corresponding right subblock. Let G be a finite abelian group (written additively). We say that a subset A ⊆ G is progression-free of order n if for all a ∈ A, a, a + g, a + 2g, . . . a + (n − 1)g ∈ A implies that g = 0. Example 4.6.6 Let G = Z/(7), the integers modulo 7, and let A = {0, 1, 2, 4}. Then A is progression-free of order 4. For example, for each a ∈ A, Table 4.2 shows that for each g = 0 there exists i, 0 ≤ i ≤ 3, such that a + ig ∈ A. Let f : Σ∗ → G be a morphism, so that f (ε ) = 0, the identity element of G, and f (a1 a2 · · · ai ) = ∑1≤ j≤i f (a j ). We call f ϕ -injective if for any collection v1 , v2 , . . . , vn of left subblocks and v1 , v2 , . . . , vn the corresponding right subblocks, the equality f (v1 ) = f (v2 ) = · · · = f (vn ) implies that either v1 = v2 = · · · = vn or v1 = v2 = · · · = vn . Lemma 4.6.7 Let n be a positive integer, let ϕ : Σ∗ → Σ∗ be a morphism, let G be a finite abelian group, and let f : Σ∗ → G be a morphism such that (a) the adjacency matrix of ϕ has non-zero determinant; (b) f (ϕ (a)) = 0 for all a ∈ Σ;

126

N. Rampersad and J. Shallit

(c) the set A = {g ∈ G : g = f (v), v a left subblock of ϕ } is progression-free of order n + 1; (d) f is ϕ -injective. If ϕ is prolongable on a, and ϕ ω (a) avoids abelian nth powers x1 x2 · · · xn where |xi | ≤ maxa∈Σ |ϕ (a)|, then ϕ ω (a) is abelian n-powerfree. Proof Let x = ϕ ω (a). By the hypothesis, x avoids ‘short’ abelian nth powers, that is, factors of the form x1 x2 · · · xn where each xi is a permutation of x1 and |xi | ≤ maxa∈Σ |ϕ (a)|. Suppose B1 B2 · · · Bn is an abelian nth-power occurring in x, with |B1 | = |B2 | = · · · = |Bn |, and each Bi a permutation of B1 , and |Bi | is minimal. Since we have ruled out short powers, we must have |Bi | > maxa∈Σ |ϕ (a)|. Consider the factorisation of x into blocks, each an image of a letter under ϕ . Then each Bi starts inside some block ϕ (a); let vi be the corresponding left subblock and vi the corresponding right subblock, so that ϕ (a) = vi vi , and Bi occurs starting at the same position where vi starts, and Bn ends at the same position where vn+1 ends. (We take vi = ε if a Bi occurs starting at the same position as the beginning of a block.) By the length condition on the Bi , each Bi starts in a distinct block. See Figure 4.2, where this is illustrated for n = 3. C1 x=

C2

v1 v1

C3

v2 v2 B1

v3 v3 B2

v4 v4 B3

Figure 4.2 An abelian cube and the corresponding blocks.

Since each Bi is a permutation of every other Bi , it follows that each Bi contains exactly the same number of every letter, and so f (B1 ) = f (B2 ) = · · · = f (Bn ).

(4.28)

On the other hand, from condition (b) of Lemma 4.6.7, we know that f (ϕ (a)) = 0 for every a ∈ Σ. Writing Bi = vi yi vi+1 for some yi that is the image of a word under ϕ , we get f (Bi ) = f (vi ) + f (vi+1 ). Since f (vi vi ) = 0, we get f (Bi ) = − f (vi ) + f (vi+1 ). From (4.28) we get that the f (vi ) form an (n + 1)-term arithmetic progression with difference f (Bi ). But then by hypothesis (c), we get f (v1 ) = f (v2 ) = · · · = f (vn+1 ). Hence by hypothesis (d), it follows that either v1 = v2 = · · · = vn+1 or v1 = v2 = · · · = vn+1 . In the former case, we can ‘slide’ the abelian nth power to the left by |v1 | symbols and still get an abelian nth power; in the latter case we can slide it to the right by |v1 | symbols and still get an abelian nth power. Now our abelian nth power is aligned at both ends with blocks of ϕ , so there is an abelian nth power C1C2 · · ·Cn where each Ci is composed of blocks; again see Figure 4.2. Let Di be such that Ci = ϕ (Di ). Since x = ϕ (x), it follows that D1 D2 · · · Dn occurs in x. Now

Repetitions in words

127

ψ (Ci ) = M ψ (Di ), where M is the matrix of ϕ . Since M is invertible, there is only one possibility for ψ (Di ). Since ψ (C1 ) = ψ (C2 ) = · · · = ψ (Cn ), it follows that ψ (D1 ) = ψ (D2 ) = · · · = ψ (Dn ). Hence D1 · · · Dn is a shorter abelian nth power, contradicting the minimality of B1 B2 · · · Bn . Corollary 4.6.8 powers.

There is an infinite word on two symbols that avoids abelian 4th

Proof Let Σ = {0, 1} and define ϕ (0) = 011, ϕ (1) = 0001. We can check that there are no abelian 4th powers x1 x2 x3 x4 in ϕ ω (0) for |x1 | ≤ 4 by enumerating all subwords of length lessor equalto 16. 1 3 , which has determinant −5. Choose G = Z/(5) The matrix of ϕ is 2 1 and define f by f (0) = 1, f (1) = 2. Then f (ϕ (a)) = 0 for a ∈ {0, 1}. Furthermore A = {0, 1, 2, 3}, which is progression free of order 5. Thus ϕ ω (0) is abelian 4-powerfree. Corollary 4.6.9 There is an infinite word on 3 symbols that avoids abelian cubes. Proof Let Σ = {0, 1, 2} and define ϕ by ϕ (0) = 0012, ϕ (1) = 112, and ϕ (2) = 022. ⎛ ⎞ 2 0 1 Then the matrix of ϕ is ⎝ 1 2 0 ⎠ , which has determinant 7. Let G = Z/(7), 1 1 2 and define f (0) = 1, f (1) = 2, and f (2) = 3. Then f (ϕ (a)) = 0 for each a ∈ Σ. Then A = {0, 1, 2, 4} which is progression-free of order 4, as we saw in Example 4.2.

4.6.3 The template method Under some circumstances it is possible to decide if the fixed point of a morphism avoids an abelian k-power. The method of Dekking presented in Section 4.6.2 applies to a certain class of morphisms. We now describe another approach to this problem. In particular, we show the following. Theorem 4.6.10 Let μ be a morphism on Σ = {1, 2, . . ., m} and let M be the adjacency matrix of μ . Suppose that

μ (1) = 1x, for some x ∈ Σ+ , |μ (a)| > 1, for all a ∈ Σ, #M −1 # < 1 and M is non-singular. It is decidable whether μ ω (1) is abelian k-power free. Here, #M# denotes the norm on Rm×m induced by the Euclidean norm on Rm , i.e., |Mv| , v∈Rm |v|

#M# = sup v =0

128

N. Rampersad and J. Shallit

where |v| is the usual Euclidean length of the vector v. Let k be a positive integer. A k-template is a (2k)-tuple t = [a1 , a2 , . . . ak+1 , d1 , d2 , . . . , dk−1 ] , where the ai ∈ {ε , 1, 2, . . . , m} and the di ∈ Zm . We say that a word w realises the k-template t if a non-empty factor u of w has the form u = a1 X1 a2 X2 a3 · · · ak Xk ak+1 , where ψ (Xi+1 ) − ψ (Xi ) = di , i = 1, 2, . . . , k − 1. We call u an instance of t (note that a word u may be an instance of more than one template). The particular k-template Tk = [ε , ε , . . . , ε ,0,0, . . . ,0] will be of interest. The word u is an instance of Tk if, and only if, u has the form u = X1 X2 · · · Xk , where ψ (Xi+1 ) = ψ (Xi ), i = 1, 2, . . . , k − 1, in other words, if, and only if, u is an abelian k-power. Let t1 = [a1 , a2 , . . . , ak+1 , d1 , d2 , . . . , dk−1 ] and t2 = [A1 , A2 , . . . , Ak+1 , D1 , D2 , . . . , Dk−1 ] be k-templates. We say that t2 is a parent of t1 if

μ (Ai ) = ai ai ai for some words ai , ai while

ψ (ai+1 ai+2 ) − ψ (ai ai+1 ) + MDi = di , Lemma 4.6.11 (Parent Lemma) alises t1 .

1 ≤ i ≤ k.

(4.29)

Suppose that w ∈ Σ∗ realises t2 . Then μ (w) re-

Proof Let w contain the factor u = A1Y1 A2Y2 · · · AkYk Ak+1 where ψ (Yi+1 )− ψ (Yi ) = Di . For each i, write μ (Ai ) = ai ai ai and let Xi = ai μ (Yi )ai+1 . Then

μ (u) = a1 a1 X1 a2 X2 · · · Xk−1 ak Xk ak+1 ak+1

129

Repetitions in words and for each i,

ψ (Xi+1 ) − ψ (Xi ) = ψ (ai+1 μ (Yi+1 )ai+2 ) − ψ (ai μ (Yi )ai+1 ) = ψ (ai+1 ai+2 ) − ψ (ai ai+1 ) + ψ (μ (Yi+1)) − ψ (μ (Yi )) = ψ (ai+1 ai+2 ) − ψ (ai ai+1 ) + M ψ (Yi+1 ) − M ψ (Yi )   = ψ (ai+1 ai+2 ) − ψ (ai ai+1 ) + M ψ (Yi+1 ) − ψ (Yi ) = ψ (ai+1 ai+2 ) − ψ (ai ai+1 ) + MDi = di , by (4.29)

and μ (w) contains the instance a1 X1 a2 X2 · · · Xk ak+1 of t1 .

Observe that given a k-template t1 , we may calculate all of its parents. Indeed, the set of candidates for the Ai in a parent, and hence for the ai , ai is finite, and may be searched exhaustively. Since M is non-singular, a choice of values for ai , ai , together with given values di , determines the Di by (4.29). However, not all computed values for Di may be in Zm ; some k-templates may have no parents. Let ancestor be the transitive closure of the parent relation.

Lemma 4.6.12 The template Tk has finitely many ancestors.

Proof Rewriting (4.29), we obtain   Di = M −1 di + ψ (ai ai+1 ) − ψ (ai+1ai+2 ) .

Since the ai , ai are factors of words of μ (Σ), there are finitely many possibilities for v = ψ (ai ai+1 ) − ψ (ai+1 ai+2 ). Let V be the (finite) set of possible values for v. Iterating (4.29), we see that the Di vectors in any ancestor of k-template Tk will have the form

Di = M −q vq + M q−1 vq−1 + · · · + M −1 v1 + v0,

v j ∈ V , j = 0, . . . , q.

130

N. Rampersad and J. Shallit

Let N = max{|v| : v ∈ V } and let r = N/(1 − #M −1#). We have |Di | = |M −q vq + M q−1 vq−1 + · · · + M −1 v1 + v0 | ≤ |M −q vq | + |M q−1vq−1 | + · · · + |M −1 v1 | + |v0 | (triangle inequality) ≤ #M −q #|vq | + #M q−1#|vq−1 | + · · · + #M −1#|v1 | + |v0| (property of the induced norm) ≤ #M −1 #q |vq | + #M −1 #q−1|vq−1 | + · · · + #M −1 #|v1 | + |v0| (submultiplicativity) ≤ #M −1 #q N + #M −1#q−1 N + · · · + #M −1 #N + N ∞

≤ N ∑ #M −1 #i

(since #M −1 # < 1)

i=0

=

N 1 − #M −1#

= r. Thus, the Di lie within a ball of radius r in Rm . It follows that there are only finitely many Di s in Zm . Since there are finitely many choices for the Ai ∈ {ε , 1, 2, . . . , m} and the Di in any ancestor, it follows that Tk has only finitely many ancestors. Let W = maxa∈Σ |μ (a)|. Lemma 4.6.13 Let L be the set of factors of μ ω (1). If w ∈ L and |w| ≥ W − 1, then there exists a, c ∈ Σ and y ∈ Σ∗ such that we can write w = A μ (y)C , where A is a (possibly empty) suffix of μ (a) and C is a (possibly empty) prefix of μ (c). Proof The alternative is that w is an interior factor of some word of μ (Σ), which implies |w| ≤ W − 2. Lemma 4.6.14 (Inverse Parent Lemma) Suppose that u is a factor of μ ω (1) and that u is an instance of t1 = [a1 , a2 , . . . , ak+1 , d1 , d2 , . . . , dk−1 ]. Suppose further that |u| > k(W − 1) + mΔ(k − 1)2 + 1, where Δ = maxi |di |. Then for some parent t2 of t1 , μ ω (1) contains a factor v which is an instance of t2 , and such that |v| < |u|. Proof Since u is an instance of t1 , we have u = a1 X1 a2 X2 a3 · · · ak Xk ak+1 , where ψ (Xi+1 ) − ψ (Xi ) = di , i = 1, 2, . . . k − 1.

131

Repetitions in words If i > j, we have |Xi | − |X j | =

m

∑ (|Xi |n − |X j |n )

n=1

=

m i− j

∑∑



|X j+q |n − |X j+q−1|n



n=1 q=1

=

m i− j

∑∑

  ψ (X j+q )n − ψ (X j+q−1)n

n=1 q=1



m i− j

∑ ∑Δ

n=1 q=1

≤ m(i − j)Δ. Symmetrically, we see that if i < j then |X j | − |Xi | ≤ m( j − i)Δ. Consequently, we have ||Xi | − |X j || ≤ m|i − j|Δ. If for some i we have |Xi | ≤ W − 2, then for 1 ≤ j ≤ k we have |X j | ≤ W − 2 + m(k − 1)Δ. We then have |u| = |a1 X1 a2 X2 a3 · · · ak Xk ak+1 | k+1

= |Xi | + ∑ |a j | + ∑ |X j | j=1

j =i

≤ W − 2 + k + 1 + (k − 1)(W − 2 + m(k − 1)Δ) = k(W − 1) + mΔ(k − 1)2 + 1. Therefore, if |u| > k(W − 1) + mΔ(k − 1)2 + 1, then for each i we have |Xi | > W − 2 and by Lemma 4.6.13 we can then parse u as u = a1 X1 a2 X2 a3 · · · ak Xk ak+1 = a1 a1 μ (Y1 )a2 a2 a2 μ (Y2 )a3 · · · ak μ (Yk )ak+1 ak+1 , where v = A1Y1 · · · AkYk Ak+1 is a factor of μ ω (1), μ (Ai ) = ai ai ai for each i and Xi = ai μ (Yi )ai+1 . It follows that the parent t2 of t1 is realised by a factor of μ ω (1). Furthermore, since |μ (a)| > 1 for all a ∈ Σ, the instance v of t2 satisfies |v| < |u|/2. The algorithm that completes the proof of Theorem 4.6.10 proceeds as follows. Calculate the set T of ancestors of Tk . By Lemma 4.6.12 this set is finite. The word μ ω (1) contains an abelian k-power if, and only if, an instance of one of these ancestors is a factor of μ ω (1). For each t = [a1 , a2 , . . . , ak+1 , d1 , d2 , . . . , dk−1 ] ∈ T , let Dt = {d1, d2 , . . . , dk−1 }. Let D = ∪t∈T Dt , and let Δ = maxd∈D |d|. As per Lemma 4.6.14,

132

N. Rampersad and J. Shallit

the shortest instance (if any) in μ ω (1) of a template of T has length at most k(W − 1) + mΔ(k − 1)2 + 1. We therefore generate all the factors of μ ω (1) of this length, and test whether any contains an instance of one of these ancestors.

4.6.4 Abelian repetitions in balanced words Next we show that any word that avoids abelian k-powers is ‘unbalanced’. We say that a word w is M-balanced if for every pair of factors u, v of w with |u| = |v|, we have | |u|a − |v|a | ≤ M for all letters a. Theorem 4.6.15 Let w be an infinite word. If w is M-balanced for some M, then for every k ≥ 2, the word w contains an abelian k-power. The proof is an application of van der Waerden’s theorem (Theorem 4.4.1). We first need a lemma. Lemma 4.6.16 Let M and r be positive integers. There exist positive integers α1 , . . . , αr and N such that whenever r

∑ ci αi ≡ 0

(mod N)

i=1

for integers ci with |ci | ≤ M, then c1 = · · · = cr = 0. Proof The αi are defined inductively. Set α1 = 1 and for i ≥ 1, choose αi+1 to be any integer satisfying i

αi+1 > M ∑ α j . j=1

Now choose N to be any integer satisfying N > M ∑rj=1 α j . Suppose that r

∑ ci αi ≡ 0

(mod N)

i=1

for some ci satisfying |ci | ≤ M. Then $ $ $ r $ r r $ $ $ ∑ ci αi $ ≤ ∑ |ci |αi ≤ M ∑ αi < N, $i=1 $ i=1 i=1 which implies that r

∑ ci αi = 0.

i=1

To show that the ci are necessarily zero, suppose to the contrary that some ci is non-zero and let t be the largest index such that ct = 0. We cannot have t = 1, for otherwise we should have c1 = c1 α1 = 0, so let us assume that t > 1 and observe that $ $ $ t−1 $ t−1 t−1 $ $ |ct αt | = $− ∑ ci αi $ ≤ ∑ |ci |αi ≤ M ∑ αi < αt ≤ |ct αt |, $ i=1 $ i=1 i=1

Repetitions in words

133

which gives us our desired contradiction and completes the proof. We now give the proof of Theorem 4.6.15. Proof of Theorem 4.6.15 Let w = w1 w2 · · · be an infinite word over an alphabet of size r and suppose further that w is M-balanced for some positive integer M. Let α1 , . . . , αr and N be as in Lemma 4.6.16. By recoding the alphabet of w, we may suppose without loss of generality that w is a word over the alphabet {α1 , . . . , αr }. For any word x = x1 · · · xn over this alphabet, we define the function f (x) := x1 + · · · + xn . We now define a map τ : {1, 2, . . .} → {0, 1, . . ., N − 1} by

τ (n) = f (w[1..n]) mod N. By Theorem 4.4.1, for every positive integer k, there exist positive integers n0 and d such that

τ (n0 ) = τ (n0 + d) = · · · = τ (n0 + kd).

(4.30)

For j = 1, . . . , k, define w( j) = w[n0 + ( j − 1)d..n0 + jd]. We claim that w(1) · · · w(k) is an abelian k-power. We first note that (4.30) implies that f (w( j) ) ≡ 0 (mod N) for each j = 1, . . . , k. ( j) Next, for i = 1, . . . , r, let ai = |w( j) |αi . Since r

( j)

f (w( j) ) = ∑ ai αi i=1

for j = 1, . . . , k, we must have r

( j)

∑ ai

αi ≡ 0 (mod N),

i=1

and in particular, r

( j)

∑ ai

i=1

r

(1)

αi ≡ ∑ ai αi

(mod N),

i=1

which we may rearrange to obtain r

( j)

∑ (ai

(1)

− ai )αi ≡ 0

(mod N).

i=1

( j)

(1)

Since w is M-balanced, we have |ai − ai | ≤ M, and so by Lemma 4.6.16 we must ( j) (1) ( j) (1) have ai − ai = 0, or in other words, ai = ai for all j = 1, . . . , k. It follows that w(1) · · · w(k) is an abelian k-power, as claimed.

134

N. Rampersad and J. Shallit

4.7 Enumeration 4.7.1 Enumerating squarefree words Here we examine the question: ‘How many squarefree words of length n do we have over a 3-letter alphabet?’. We do not have a precise characterisation of the squarefree words, so the best we can hope for are some good asymptotic upper and lower bounds. We use a generalisation of morphism called replacement rule. A replacement rule ∗ s is a map from Σ∗ to 2Δ satisfying s(xy) = s(x)s(y) for all x, y ∈ Σ∗ and s(ε ) = {ε }. Consider the replacement rule h defined by 0 → {210201202120102012, 210201021202102012} 1 → {021012010201210120, 021012102010210120} 2 → {102120121012021201, 102120210121021201}. The set h(w) consists entirely of squarefree words if w is squarefree. This shows that there are at least 2n/17 ≈ 1.0416n ternary squarefree words of length n. The best bounds currently known are due to Shur (2012). Theorem 4.7.1 The number of squarefree words of length n over a 3-letter alphabet grows like ρ n , where ρ ∈ [1.3017579, 1.3017619].

4.7.2 Enumerating overlapfree words Unlike the squarefree ternary words, the overlapfree binary words have a well understood structure, thanks to Theorem 4.2.9. This makes it much easier for us to count the number of overlapfree binary words. Let x = x0 be a non-empty overlapfree binary word. Then by Theorem 4.2.9 we can write x0 = u1 μ (x1 )v1 with |u1 |, |v1 | ≤ 2. If |x1 | ≥ 1, we can repeat the process, writing x1 = u2 μ (x2 )v2 . Continuing in this fashion, we obtain the decomposition xi−1 = ui μ (xi )vi for i = 1, 2, . . . until |xt+1 | = 0 for some t. Then x0 = u1 μ (u2 ) · · · μ t−1 (ut )μ t (xt )μ t−1 (vt ) · · · μ (v2 )v1 . Then from the inequalities 1 ≤ |xt | ≤ 4 and 2|xi | ≤ |xi−1 | ≤ 2|xi | + 4, 1 ≤ i ≤ t, an easy induction gives 2t ≤ |x| ≤ 2t+3 − 4. Thus t ≤ log2 |x| < t + 3, and so log2 |x| − 3 < t ≤ log2 |x|.

(4.31)

There are at most five possibilities for each ui and vi , and there are at most 22 possibilities for xt (since 1 ≤ |xt | ≤ 4 and xt is overlapfree). Inequality (4.31) shows there are at most three possibilities for t. Letting n = |x|, we see there are at most 3 · 22 · 52 log2 n = 66nlog2 25 overlapfree words of length n. We have therefore proved Theorem 4.7.2 There are O(nlog2 25 ) = O(n4.644 ) binary words of length n that are overlapfree.

Repetitions in words

135

In fact, somewhat more is known. Let un denote the number of overlapfree binary words of length n. Define the following quantities:

α = sup{r : ∃C > 0, un ≥ Cnr }, β = inf{r : ∃C > 0, un ≤ Cnr }. Guglielmi and Protasov (2013) computed the values:

α = 1.273 553 265 · · · β = 1.332 240 491 · · ·

4.7.3 The Goulden–Jackson cluster method Let S be a finite set of words to be avoided over an alphabet Σ of size k. We say that S is reduced if S satisfies the following property: if x is a word in S, then no other word in S is a factor of x. For example, the set {001, 111, 1101} is reduced, but the set {001, 111, 1001} is not reduced, since 001 is a factor of 1001. Clearly, if x, y ∈ S and x is a factor of y, then S is avoided by exactly the same set of words as avoids S \ {y}. We therefore only consider reduced sets S. Let S be a reduced set of words. For n ≥ 0, let an denote the number of words of length n over Σ that avoid S and let A(x) := ∑n≥0 an xn be the corresponding generating function. A marked word is a word w over Σ where we have distinguished (i.e., ‘marked’) some number of occurrences of factors of w that are in the set S. We will indicate these marked factors by underlining. For example, if S := {00, 010} and w = 0101010100, then w := 0101010100 is one possible way of marking w. We will use w to denote a particular marked version of w. We do not require that all occurrences in w of words in S necessarily be marked, so in general there are many different ways to mark a given word w. For any given marking w of a word w, define the weight of w, denoted m(w), by m(w) := (−1)t , where t is the number of factors that are marked in w. Suppose that in total there r are r occurrences of words of S as factors of w. Then for t = 0, . . . , r, there are t possible marked versions w of w containing t marked factors. The sum of the weights of all marked versions of w is therefore equal to   r t r ∑ (−1) t = (1 − 1)r, t=0 which is 1 when r is 0 (i.e., when w avoids S) and 0 when r > 0 (i.e., when w contains a word in S as a factor). This may perhaps seem somewhat mysterious, but it is nothing more than the principle of Inclusion/Exclusion. Summing weights over

136

N. Rampersad and J. Shallit

all marked words of length n, we thus obtain



m(w) = an ,

(4.32)

|w|=n

which is the number of words of length n avoiding S. We define a cluster to be a marked word w such that every position of w is marked in w and w cannot be written as the concatenation of two smaller marked words. For example, 01010 is a cluster, but 010 010 is not a cluster, since it is the concatenation of two copies of the marked word 010. We can use clusters to enumerate marked words. Let M denote the set of all marked words and let T denote the set of all clusters. Let C(x) be the weighted cluster generating function for S defined by C(x) :=

∑ m(v)x|v| = ∑ cn xn .

v∈T

(4.33)

n≥0

Consider a marked word w of length n. We have two cases. Case 1: the last position of w is unmarked. In this case we write w = w a for some marked word w and some unmarked letter a. Moreover, we have m(w) = m(w ). Case 2: the marked word w ends with a cluster. In this case we write w = u v, where u is a marked word of length n − j and v is a cluster of length j. Moreover, we have m(w) = m(u)m(v). Since any marked word can be uniquely written as a concatenation of clusters and unmarked letters, it follows that for n ≥ 1, ⎞⎛ ⎞⎤ ⎡⎛



w∈M |w|=n

m(w) = k

∑ 

w ∈M |w |=n−1

⎢⎜ ⎜ m(w ) + ∑ ⎢ ⎣⎝ j



u∈M |u|=n− j

⎟⎜ ⎜ m(u)⎟ ⎠⎝

⎟⎥

⎥ ∑ m(v)⎟ ⎠⎦ .

v∈T |v|= j

Applying (4.32) and (4.33), we rewrite (4.34) as an = kan−1 + ∑ an− j c j , j

i.e., an − kan−1 − ∑ an− j c j = 0. j

Recall that A(x) := ∑n≥0 an xn . Since for n ≥ 1, [xn ]A(x)(1 − kx − C(x)) = an − kan−1 − ∑ an− j c j = 0, j

(4.34)

Repetitions in words

137

and [x0 ]A(x) = 1, we must have A(x)(1 − kx − C(x)) = 1, so that A(x) =

1 . 1 − kx − C(x)

We summarise our discussion as follows. Theorem 4.7.3 Let S be a reduced set of words over a k-letter alphabet and let C(x) be the weighted cluster generating function for S. The generating function A(x) for the number of words of length n over a k-letter alphabet that avoid S is A(x) :=

1 . 1 − kx − C(x)

In practice, to be able to apply Theorem 4.7.3 one must be able to calculate the generating function C(x). For simple sets S this can be done by hand, as illustrated in the example below; for more complicated sets S one generally resorts to computer calculations.1 Example 4.7.4 Let S := {00, 010} and let C(x) := ∑n≥0 cn xn be the weighted cluster generating function for S. Consider a cluster v of length n. If n ≥ 4, we have either v = 00 '

··· (%

&

··· (%

&

v

where v is a cluster of length n − 1, or v = 01 0 '

v

where v is a cluster of length n − 2. In either case, v has precisely one more marked factor than v , so that m(v) = −m(v ). It follows that for n ≥ 4, cn = −cn−1 − cn−2.

(4.35)

We check that there is one cluster of length 2 (00, which has weight −1) and 2 clusters of length 3 (010, which has weight −1, and 000, which has weight (−1)2 = 1). This gives the initial values c2 = −1 and c3 = −1 + 1 = 0. From these initial values and the recurrence (4.35) one derives C(x) = 1

−x2 − x3 . 1 + x + x2

John Noonan and Doron Zeilberger have prepared several MAPLE packages implementing this method. These can be downloaded from http://www.math.rutgers.edu/~ zeilberg/gj.html .

138

N. Rampersad and J. Shallit

The generating function for the binary words avoiding S is thus A(x) =

1 −x −x 1 − 2x − 1+x+x 2 2

3

1 + x + x2 1 − x − x3 = 1 + 2x + 3x2 + 4x3 + 6x4 + 9x5 + 13x6 + · · · . =

Indeed one verifies, for instance, that the following six words are exactly the binary words of length 4 that avoid 00 and 010: 0110 0111 1011 1101 1110 1111.

4.7.4 A power series method for lower bounds The following theorem gives a method to bound from below the number of words of length n over an m-letter alphabet that avoid a given set of words S. Unlike the Goulden–Jackson cluster method, this method does not give an exact enumeration; however, here the set S of words to be avoided may now be infinite. Theorem 4.7.5 Let S be a set of words over an m-letter alphabet, each word of length at least 2. Suppose that for each i ≥ 2, the set S contains at most ci words of length i. If the power series expansion of −1  G(x) :=

1 − mx + ∑ ci xi i≥2

has non-negative coefficients, then there are least [xn ]G(x) words of length n over an m-letter alphabet that avoid S. Proof For two power series f (x) = ∑i≥0 ai xi and g(x) = ∑i≥0 bi xi , we write f ≥ g to mean that ai ≥ bi for all i ≥ 0. Let F(x) := ∑i≥0 ai xi , where ai is the number of words of length i over an m-letter alphabet that avoid S. Let G(x) = ∑i≥0 bi xi be the power series expansion of G defined above. We wish to show F ≥ G. For k ≥ 1, there are mk − ak words w of length k over an m-letter alphabet that contain a word in S as a factor. On the other hand, for any such w either (a) w = w a, where a is a single letter and w is a word of length k − 1 containing a word in S as a factor; or (b) w = xy, where x is a word of length k − j that avoids S and y ∈ S is a word of length j. There are at most (mk−1 − ak−1 )m words w of the form (a), and there are at most ∑ j ak− j c j words w of the form (b). We thus have the inequality mk − ak ≤ (mk−1 − ak−1 )m + ∑ ak− j c j . j

Rearranging, we have ak − ak−1m + ∑ ak− j c j ≥ 0, j

(4.36)

139

Repetitions in words for k ≥ 1. Consider the function 

 H(x) := F(x) 1 − mx + ∑ c j x j =

j≥2





∑ ai x

i

i≥0



1 − mx + ∑ c j x

.

j

j≥2

Observe that for k ≥ 1, we have [xk ]H(x) = ak − ak−1 m + ∑ j ak− j c j . By (4.36), we have [xk ]H(x) ≥ 0 for k ≥ 1. Since [x0 ]H(x) = 1, the inequality H ≥ 1 holds, and in particular, H − 1 has non-negative coefficients. We conclude that F = HG = (H − 1)G + G ≥ G, as required. We now apply Theorem 4.7.5 to prove the following result concerning the number of words of length n over an m-letter alphabet that avoid instances of a pattern p. Theorem 4.7.6 Let k ≥ 2 and m ≥ 4 be integers with (k, m) = (2, 4). Let p be a pattern containing k distinct variables, each occurring at least twice in p. Then for n ≥ 0, there are at least λ n words of length n over an m-letter alphabet that avoid the pattern p, where  λ = λ (k, m) := m 1 +

1 (m − 2)k

−1

.

To prove the theorem we need some lemmas. Lemma 4.7.7 Let k ≥ 1 be a integer and let p be a pattern over the set of variables Δ = {x1 , . . . , xk }. Suppose that for 1 ≤ i ≤ k, the variable xi occurs ai ≥ 1 times in p. Let m ≥ 2 be an integer and let Σ be an m-letter alphabet. Then for n ≥ 1, the number of words of length n over Σ that are instances of the pattern p is at most [xn ]C(x), where C(x) :=

∑ · · · ∑ mi1 +···+ik xa1 i1 +···+ak ik .

i1 ≥1

ik ≥1

Proof For n ≥ 1, let tn denote the number of words of length n over Σ that are instances of the pattern p. We wish to show that for n ≥ 1, the inequality tn ≤ [xn ]C(x) holds. Given any k-tuple of non-empty words (W1 , . . . ,Wk ) we obtain a word of length a1 |W1 | + · · · + ak |Wk | matching p by substituting Wi for each occurrence of xi in p. Furthermore, all words matching p can be obtained by such a substitution. It

140

N. Rampersad and J. Shallit

follows that

∑ tn xn ≤ ∑ + · · · ∑ + xa1 |W1 |+···+ak |Wk | W1 ∈Σ

n≥1

=

Wk ∈Σ

∑+x

a1 |W1 |

···

W1 ∈Σ

= =



i1 ≥1

Wk ∈Σ

mi1 xa1 i1 · · ·

∑ ··· ∑ m

i1 ≥1

∑ + xak |Wk |

∑ mik xak ik

ik ≥1

i1 +···+ik a1 i1 +···+ak ik

x

ik ≥1

,

so that tn ≤ [xn ]C(x), as required. In what follows, let k ≥ 2 and m ≥ 4 be integers with (k, m) = (2, 4). Also let −1  1 λ = λ (k, m) := m 1 + . (m − 2)k Lemma 4.7.8 We have λ ≥ m − 1/2. Proof We have

−1  1 λ = m 1+ (m − 2)k   =m

∑ (−1)i (m − 2)−ki

i≥0

  ≥ m 1 − 1/(m − 2)k . When k ≥ 3,

    m 1 − 1/(m − 2)k ≥ m 1 − 1/(m − 2)3 = m − m/(m − 2)3 ≥ m − 4/23

(since m ≥ 4)

≥ m − 1/2. When k = 2 and m = 5 we have λ = m − 1/2, so suppose k = 2 and m ≥ 6. Then     m 1 − 1/(m − 2)k = m 1 − 1/(m − 2)2 = m − m/(m − 2)2 ≥ m − 6/42

(since m ≥ 6)

≥ m − 1/2. Lemma 4.7.9 Let a1 , . . . , ak be integers, each at least 2. Let C(x) :=

∑ · · · ∑ mi1 +···+ik xa1 i1 +···+ak ik ,

i1 ≥1

ik ≥1

141

Repetitions in words and let B(x) := ∑ bi xi = (1 − mx + C(x))−1. i≥0

Then bn ≥ λ bn−1 for all n ≥ 0. Proof The proof is by induction on n. When n = 0, we have b0 = 1 and b1 = m. Since m > λ , the inequality b1 ≥ λ b0 holds, as required. Suppose that for all j < n, we have b j ≥ λ b j−1 . Since B = (1 − mx +C)−1, we have B(1 − mx +C) = 1. Hence [xn ]B(1 − mx + C) = 0 for n ≥ 1. However,   

∑ b i xi

B(1 − mx + C) =

1 − mx +

i≥0

∑ · · · ∑ mi1 +···+ik xa1i1 +···+ak ik

i1≥1

ik ≥1

,

so [xn ]B(1 − mx + C) = bn − bn−1m +

∑ · · · ∑ mi1 +···+ik bn−(a1i1 +···+ak ik ) = 0.

i1 ≥1

ik ≥1

Rearranging, we obtain bn = λ bn−1 + (m − λ )bn−1 −

∑ · · · ∑ mi1 +···+ik bn−(a1i1 +···+ak ik ) .

i1 ≥1

ik ≥1

To show bn ≥ λ bn−1 it therefore suffices to show (m − λ )bn−1 −

∑ · · · ∑ mi1 +···+ik bn−(a1i1 +···+ak ik ) ≥ 0.

i1 ≥1

(4.37)

ik ≥1

Since b j ≥ λ b j−1 for all j < n, we have bn−i ≤ bn−1 /λ i−1 for 1 ≤ i ≤ n. Hence

∑ · · · ∑ mi1 +···+ik bn−(a1i1 +···+ak ik )

i1 ≥1



ik ≥1

λ bn−1

∑ · · · ∑ mi1 +···+ik λ a1 i1 +···+ak ik

i1 ≥1

ik ≥1

= λ bn−1



= λ bn−1

i1 ≥1

···



ik ≥1 λ

mi1 +···+ik a1 i1 +···+ak ik

mi1 mik ··· ∑ a i . a i 1 1 k k i1 ≥1 λ ik ≥1 λ



Since ai ≥ 2 for 1 ≤ i ≤ k, we have

λ bn−1

mi1 mik ··· ∑ a i a i 1 1 k k i1 ≥1 λ ik ≥1 λ



mi1 mik · · · ∑ 2i 2i 1 k i1 ≥1 λ ik ≥1 λ  k mi = λ bn−1 ∑ 2i . i≥1 λ ≤ λ bn−1



142

N. Rampersad and J. Shallit

By Lemma 4.7.8, we have λ ≥ m − 1/2, whence m/λ 2 ≤ m/(m − 1/2)2 < 1. Thus  k k   k mi m/λ 2 m λ bn−1 ∑ 2i λ b . = λ bn−1 = n−1 1 − m/λ 2 λ2 −m i≥1 λ We have thus shown

∑ ··· ∑ m

i1 ≥1

i1 +···+ik

ik ≥1

 bn−(a1i1 +···+ak ik ) ≤ λ bn−1

m λ2 −m

In order to show that (4.37) holds, it suffices to show that k  m m−λ ≥ λ . λ2 −m

k .

(4.38)

Again, by Lemma 4.7.8, we have λ ≥ m − 1/2, whence k k   m m λ ≤ λ λ2 −m (m − 1/2)2 − m k  m =λ m2 − 2m + 1/4 k  m ≤λ m2 − 2m = λ /(m − 2)k . On the other hand,

 λ = m 1+

whence

 λ 1+

1 (m − 2)k

1 (m − 2)k

−1

(4.39)

,

 = m,

and so

λ /(m − 2)k = m − λ .

(4.40)

Applying (4.39) and (4.40) establishes (4.38) and hence (4.37), completing the proof. We are now ready to prove Theorem 4.7.6. Proof of Theorem 4.7.6 Let Σ be an m-letter alphabet and let S be the set of all words of length at least 2 over Σ that are instances of the pattern p. By Lemma 4.7.7, the number of words of length n in S is at most [xn ]C(x), where C(x) :=

∑ · · · ∑ mi1 +···+ik xa1 i1 +···+ak ik .

i1 ≥1

ik ≥1

Repetitions in words

143

Define B(x) := ∑ bi xi = (1 − mx + C(x))−1. i≥0

By Lemma 4.7.9, we have bn ≥ λ bn−1 for all n ≥ 0. In particular, the coefficients of B are non-negative. By Theorem 4.7.5, there are at least bn words of length n over Σ that avoid S. Since bn ≥ λ n , there are at least λ n words of length n that avoid the pattern p. From Theorems 4.5.2 and 4.7.6 we have, a fortiori, the following. Corollary 4.7.10 Let p be a pattern in which every variable occurs at least twice. There is an infinite word over a 4-letter alphabet that avoids p. By Exercise 4.9.9, any pattern with k variables and length at least 2k contains a factor x where every variable that occurs in x occurs at least twice in x. We therefore have the following. Corollary 4.7.11 All patterns with k variables and length at least 2k are avoidable over a 4-letter alphabet.

4.8 Decidability for automatic sequences A sequence (an )n≥0 over a finite alphabet Δ is said to be k-automatic for some integer k ≥ 2 if, roughly speaking, there exists an automaton that, on input n in base k, reaches a state with the output an . This class of sequences, also called k-recognisable in the literature, has been studied extensively (Allouche and Shallit, 2003) and has several different characterisations, the most famous being images of fixed points of k-uniform morphisms. The archetypal example of a k-automatic sequence is the Thue–Morse sequence t = (tn )n≥0 = 0110100110010110· · · , where tn is the sum (modulo 2) of the bits in the base-2 expansion of n (Allouche and Shallit, 1999). See Figure 4.3. As noted in Section 1.3, it can also be obtained as the fixed point of the morphism μ where 0 → 01 and 1 → 10. 0

0 1 0

1 1

Figure 4.3 Automaton generating the Thue–Morse sequence.

Given a k-automatic sequence, one might reasonably inquire as to whether the sequence is ultimately periodic. More precisely, we would like to know if the problem

144

N. Rampersad and J. Shallit

Given a k-automatic sequence, is it ultimately periodic? is decidable (i.e., recursively solvable). This problem was solved by Honkala (1986), who gave a rather complicated decision procedure. This problem, as well as many similar problems, can be solved by recognising that a statement of the desired property can be expressed in the logical theory $N, +,Vk %, where Vk (n) is the largest power of k dividing n. For example, for a k-automatic sequence w = a0 a1 a2 · · · , the property of being ultimately periodic is equivalent to the assertion that ∃p ≥ 1, N ≥ 0 ∀ j ≥ N a j+p = a j .

(4.41)

Using the techniques in (Allouche et al., 2009; Charlier et al., 2012), given an automaton M generating the sequence (ai )i≥0 , we can construct another automaton M  with the property that it accepts all pairs (p, N), expressed in base k, such that (4.41) holds. From M  we can easily decide if the sequence is ultimately periodic. Here are a few more of the problems that, using this technique, have recursive solutions for automatic sequences x and y. • Given a rational number p/q, determining if x has infinitely many distinct p/qpowers. • Computing the critical exponent. The critical exponent of the word x is the supremum, over all finite factors f of x, of the exponent of f . • Given a rational number p/q, determining if the universal critical exponent = p/q (respectively, ≤ p/q). The universal critical exponent is the infimum, over all positions i ≥ 0 of the supremum of the exponents of all factors f of x beginning at position i. • Determining whether x is a shift of y. • Given an automatic sequence x, a rational number ρ , and an integer m ≥ 1, determining whether x is (ρ , m)-repetitive (Karhum¨aki et al., 2002): that is, whether all sufficiently long prefixes of x have a suffix of the form vσ , where |v| ≤ m and σ ≥ ρ. To see this, note that x is (ρ , m)-repetitive if, and only if, ∃N ≥ 1 ∀i ≥ N ∃ j,  with 0 ≤ j < i, 1 ≤ l < i − j and (i − j)/ ≥ ρ and l ≤ m such that x[ j . . . i −  − 1] = x[ j +  . . . i − 1]. Furthermore, although the worst-case complexity of the method is prohibitively large (with a running time bounded by an expression of the form 22

··

·2

p(n)

,

where the number of 2s equals the number of quantifiers in the logical formula, p is a polynomial, and n the number of states in the DFA defining the automatic sequence), in many cases it can be implemented and runs in reasonable time for sequences of interest. For example, Allouche et al. (2009) used the method to reprove, purely

Repetitions in words

145

mechanically, that the Thue–Morse sequence is overlapfree. A survey of recent applications of this method can be found in (Shallit, 2013). Although this method is widely applicable to many combinatorial properties of automatic sequences, certain properties cannot be proved using these techniques. For example, the property of being abelian squarefree cannot be expressed, in general, by a formula of the theory $N, +,Vk %. To see this let f = f1 f2 f3 · · · = 00100110001011000101110011011 · · · be the ordinary paperfolding word. The word f is 2-automatic. Now suppose to the contrary that the abelian squarefreeness of f were expressible as a formula of $N, +,V2 %. Then the language L = {(n, i)2 : f[i . . . i + n − 1] is a permutation of f[i + n . . .i + 2n − 1]} would be regular, but one can show that this is not the case.

4.9 Exercises Exercise 4.9.1 Prove that any infinite overlapfree binary word must contain arbitrarily large squares. Exercise 4.9.2 Prove that for any position i, there is at most one square in the Thue–Morse word beginning at position i. Exercise 4.9.3 Give a characterisation of the squares that can occur in an infinite overlapfree binary word. Exercise 4.9.4 Prove that the morphism 0 → 001, 1 → 011 is cubefree. Exercise 4.9.5 Use the Lov´asz local lemma to prove that there is an infinite word over an alphabet of size 9 that avoids all squares xx where |x| ≥ 12. Exercise 4.9.6 Use van der Waerden’s theorem (Theorem 4.4.1) to show that if w is an infinite word over some finite alphabet Σ ⊆ N, then for every positive  there exists  consecutive factors of w whose entries sum to the same value. Exercise 4.9.7 Show that any finite set of avoidable patterns can be simultaneously avoided. Exercise 4.9.8 Use the cluster method to compute the generating function for the number of binary words of length n avoiding 01101. Exercise 4.9.9 Show that every pattern with k variables and length at least 2k contains a factor x where every variable that occurs in x occurs at least twice.

146

N. Rampersad and J. Shallit

4.10 Notes Notes for Section 1.3 The Thue–Morse word is named after Thue (1906a) and Morse (1921), who rediscovered it in the 1920s. However, the Thue–Morse word occurs implicitly in a much earlier communication of Prouhet (1851) to the French Academy of Sciences. Prouhet actually gave a construction of a wider family of words known as the Prouhet words. For a survey concerning properties of the Thue–Morse word see (Allouche and Shallit, 1999).

Notes for Section 4.2.2 An important notion concerning fractional powers is that of the critical exponent of an infinite word w, that is, the supremum of all real numbers α such that w contains an α -power (this notion has been already mentioned in Section 4.8). The definitive study is Krieger’s Ph.D. thesis (Krieger, 2008). Thue (1912) showed that the critical exponent of the Thue–Morse word is 2. Mignosi and Pirillo (1992) showed that the critical exponent of the Fibonacci word is 2 + ϕ , where φ is the golden ratio. Damanik and Lenz (2002) gave a formula for the critical exponent of general Sturmian words. For slightly weaker versions of Corollary 4.2.7, see (Brandenburg, 1983) and (Shur, 2000).

Notes for Section 4.2.3 Aberkane et al. (2006) characterised the set of all rational numbers α such that the Thue–Morse word contains an α -power. Saari (2007) proved that a 5/3-power begins at every position of the Thue–Morse word, and the constant 5/3 is optimal. Using results of Mignosi et al. (2002), along with the work of de Luca and Mione (1994), one can describe the set of minimal forbidden factors of t. Dekking (1976) showed that any infinite overlapfree binary word contains arbitrarily large squares. Allouche et al. (1998) determined the lexicographically least overlapfree word. Brown et al. (2006) proved that modifying any finite number of bits of the Thue– Morse word results in a word containing an overlap.

Notes for Section 4.2.4 Fife (1980, 1983) showed that the set of all infinite binary overlapfree words can be concisely represented using a finite automaton. His proof was later simplified by Berstel (1994). The approach presented here based on the Restivo–Salemi factorisation theorem (Restivo and Salemi, 1985) is due to Shallit (2011). Rampersad et al.

Repetitions in words

147

(2011) applied the same method to obtain an automaton encoding the 73 -powerfree words. Thue (1912) (see also (Gottschalk and Hedlund, 1964)) characterised the bi-infinite overlapfree words. Shur (2000) generalised this result.

Notes for Section 4.2.5 Theorem 4.2.17 is due to Thue (1912); also see (Bean et al., 1979). Theorem 4.2.18 is due to Crochemore (1982a); also see (Brandenburg, 1983). Theorem 4.2.19 is due to Crochemore (1982a). Theorems 4.2.20 and 4.2.21 are due to Bean et al. (1979).

Notes for Section 4.2.6 For more on the Lov´asz Local Lemma, see the book by Alon and Spencer (2000). Beck (1984) gave one of the earliest applications of the local lemma to combinatorics on words. Currie (2005) applied it to avoiding fractional powers. Grytczuk (2002) applied it to the avoidance of repetitions in arithmetic progressions. Alon et al. (2002) applied it to non-repetitive colourings of graphs. Ochem et al. (2008) made use of it to study the avoidance of ‘approximate’ squares. Pegden (2011) used a ‘one-sided’ variant of the local lemma to proved results on ‘game’ versions of non-repetitive colourings. In 2010, Moser and Tardos (2010) produced an ‘algorithmic version’ of the local lemma. This approach inspired several new results in combinatorics on words by Grytczuk et al. (2011a,b, 2013).

Notes for Section 4.3.1 The notion of repetition threshold was first formulated by Brandenburg (1983); however, there are some problems with his definition. We have attempted to provide a more precise treatment here. The proof of Dejean’s Conjecture consists of the following works: (Dejean, 1972), (Pansiot, 1984), (Moulin-Ollagnier, 1992), (Mohammad-Noori and Currie, 2007), (Carpi, 2007), (Rao, 2011), and (Currie and Rampersad, 2009a,b, 2011).

Notes for Section 4.3.2 The material in the section is from (Brandenburg, 1983).

Notes for Section 4.3.3 The Pansiot encoding, as well as Theorem 4.3.12, is due to Pansiot (1984).

148

N. Rampersad and J. Shallit

Notes for Section 4.4 The original paper of van der Waerden is hard to find; for a proof of the theorem the reader may consult the classic textbook of Graham et al. (1990). The problem of avoiding repetitions in arithmetic progressions seems to have first been studied by Carpi (1988a) and subsequently by Currie and Simpson (2002), Grytczuk (2002) and Grytczuk et al. (2011b). Downarowicz (1999) studied a related problem. Theorem 4.4.2 is due to Carpi (1988a). Kao et al. (2008) proved that there exists an infinite word over a binary alphabet that contains no squares xx with |x| ≥ 3 in any arithmetic progression of odd difference. This improves upon a result of Entringer et al. (1974). Theorem 4.4.4 is due to Carpi (1988a). Dumitrescu and Radoiˇci´c (2004) proved that every 2-dimensional word over a 2-letter alphabet must contain a line containing a cube. Grytczuk (2006) presented the problem of determining the Thue threshold of N2 , namely, the smallest integer t such that there exists an integer k ≥ 2 and a 2dimensional word w over a t-letter alphabet such that every line of w is k-powerfree. Carpi’s result showed that t ≤ 16. It is possible to show that there is a 2-dimensional word over a 4-letter alphabet such that every line is 3+ -powerfree, so t ≤ 4.

Notes for Section 4.5 Bean et al. (1979) and Zimin (1982) independently developed the study of patterns. Zimin (1982) gave an algorithm to determine if a given pattern is avoidable over some finite alphabet. It remains an open problem to determine if there is an algorithm that decides, given a pattern p and a natural number k, whether p is avoidable on a k-letter alphabet. A pattern p is scrambled if all variables occur at least twice in p, and for each pair of distinct variables x, y, the pattern p contains occurrences of both xy and yx. Bean et al. (1979) proved that there is an infinite word over a 20-letter alphabet that simultaneously avoids all scrambled patterns. Petrov (1988) later improved this to a 4-letter alphabet. The classification of the binary patterns avoidable on the binary alphabet is due to several authors and was completed by Cassaigne (1993b) and independently by Vaniˇcek (see Goralcik and Vanicek (1991)).

Notes for Section 4.6.2 Currie and Linek (2001) and Currie and Visentin (2007, 2008) proved results on avoidability in the abelian sense of more general patterns.

Repetitions in words

149

Notes for Section 4.6.3 The material in this section is based on Currie and Rampersad (2012), which attempts to present a more systematic treatment of the somewhat ad hoc arguments of Aberkane et al. (2004) and Aberkane and Currie (2009).

Notes for Section 4.6.4 Theorem 4.6.15 is due to Richomme et al. (2011).

Notes for Section 4.7.1 Brandenburg (1983) established that there are exponentially many ternary squarefree words of length n and binary cubefree words of length n. His bounds were subsequently improved by several authors. The replacement rule h given in this section was found by Ekhad and Zeilberger (1998). Shur (2012) has calculated growth rates for β -powerfree words over k-letter alphabets for several values of β and k.

Notes for Section 4.7.2 The method presented here for estimating the number of binary overlapfree words is due to Restivo and Salemi (1985). A more refined approach was given by Cassaigne (1993a) and Carpi (1993). The current best bounds are due to Jungers et al. (2009). The same method was subsequently applied to enumerate the words avoiding α -powers with 2 < α ≤ 7/3 (Blondel et al., 2009). Karhum¨aki and Shallit (2004) established that the exponent 7/3 is a ‘threshold’ for polynomial vs. exponential growth over the binary alphabet.

Notes for Section 4.7.3 For the Goulden–Jackson cluster method see Goulden and Jackson (2004). Noonan and Zeilberger (1999) and Berstel (2005) also give expositions of the cluster method.

Notes for Section 4.7.4 The material in this section is based on Bell and Goh (2007). Rampersad (2011) derived some additional results using the same method. Theorem 4.7.5 is a special case of a result originally presented by Golod (see Rowen (1988), Lemma 6.2.7 therein) in an algebraic setting. We have reformulated the theorem and proof using combinatorial terminology more congenial for the applications considered here. Bell (2005) applied Theorem 4.7.5 to give enumerative results concerning the avoidability of finite sets of words. Theorem 4.7.6 is due to Bell and Goh (2007). The avoidability of the patterns treated in Corollaries 4.7.10 and 4.7.11 was originally established by

150

N. Rampersad and J. Shallit

Bean et al. (1979). The significance of Corollaries 4.7.10 and 4.7.11 is that they establish the avoidability of these patterns over a 4-letter alphabet.

Notes for Section 4.8 The material in this section is based on Allouche et al. (2009) and Charlier et al. (2012). The method described here is very similar to techniques previously developed by B¨uchi, Bruy`ere, Michaux, Villemaire, and others (see Bruy`ere et al. (1994)), involving formal logic. More recently the method has been used to resolve a conjecture on the lengths of unbordered factors in the Thue–Morse word (Go˘c et al., 2012). For the use of this method in computing the critical exponent of automatic sequences see Schaeffer and Shallit (2012). The method has also been generalised to numeration systems other than the standard integer base systems by Du et al. (2014) and Mousavi and Shallit (2014). The inexpressibility of abelian squarefreeness in this logical framework is due to Schaeffer (2013).

5 Text redundancies Golnaz Badkobeh, Maxime Crochemore, Costas S. Iliopoulos and Marcin Kubica

5.1 Redundancy: a versatile notion The notion of redundancies in texts, regarded as sequences of symbols, appear under various concepts in the literature of combinatorics on words and of algorithms on strings: repetitions, repeats, runs, covers, seeds and palindromes, for example. Fundamental elements along these concepts are spread in books by Lothaire (1997, 2002, 2005). This has been also illustrated in Chapter 4. Squares and cubes (concatenations of 2 or 3 copies of the same non-empty word) are instances of repetitions whose exponent is at least 2 and have been studied for more than a century after the seminal work of Thue (1906b) who described infinite words containing none of them. Repeats refer to less repetitive segments in text, that is, those segments having a rational exponent smaller than 2 (but larger than 1). The frontier between repetitions and repeats is indeed rather blurred in literature. The famous conjecture of Dejean (1972) refers mainly to repeats and provides the repetitive threshold of each alphabet size: it is the minimum of maximal exponents of factors occurring in infinite words drawn from the alphabet (also see Section 4.3). After several partial results, including the result of Dejean on the 3-letter alphabet, the conjecture has eventually been solved recently see (Carpi, 2007), (Rao, 2011) and (Currie and Rampersad, 2011). Also see Section 4.3. Further works show that maximal-exponent repeats or repetitions occurring in infinite words complying with the threshold can also have a bounded length (Shallit, 2004). Their minimal number is known for alphabets of size 2 and 3 (Badkobeh, 2011; Badkobeh and Crochemore, 2010), introducing the notion of finite-repetition threshold attached to each alphabet size. The latter threshold is proved to be Dejean’s threshold for larger alphabet sizes and the minimum number of Dejean’s factors is studied in (Badkobeh et al., 2014). The design of methods for computing all the occurrences of repetitions in a word has lead to several optimal algorithms by Crochemore (1981), by Apostolico and Combinatorics, Words and Symbolic Dynamics, ed. Val´erie Berth´e and Michel Rigo. Published by c Cambridge University Press. Cambridge University Press 2016.

152

G. Badkobeh, M. Crochemore, C. S. Iliopoulos and M. Kubica

Preparata (1983) and by Main and Lorentz (1984), all running in O(n log n) time on a word of length n. They have been extended to algorithms producing certain types of repetitions with rational exponent by Main (1989), or a unique occurrence for each repetition by Gusfield and Stoye (2004). These latter variants run in O(n log a) time on an alphabet of size a. What seems to be the right concept on repetitions and repeats is the notion of a run, maximal occurrence of a repetition of maximal periodicity, whose computation can be done in linear time (Kolpakov and Kucherov, 1999), (Crochemore and Ilie, 2008a), (Bannai et al., 2014a). The above computation time relies on bounds on the number of runs occurring in a word. After several approximations on the maximal number of runs, the conjecture by Kolpakov and Kucherov (1999) stating that it is less than the word length has been proved by Bannai et al. (2014b). The best lower bound is of 0.944n by Matsubara et al. (2008) and by Simpson (2010). A repeat is a factor of exponent less than 2, that is, it is a word of the form uvu where u is its longest border. 1 The study of repeats in a word has to do with longdistance interactions between separated occurrences of the same segment (the u part) in the word. Although occurrences may be far away from each other, they may interact when the word is folded as it is the case for genomic sequences. A very close problem to locating repeats is that of computing maximal pairs (positions of the two occurrences of u) with gaps constraints as described by Gusfield (1997) and later improved by Brodal et al. (1999). Local periodicities in words associated with roots of repetitions are softened by the notion of covers whose consecutive occurrences are at least adjacent in the word but may overlap. A seed instead covers a superstring w of the initial word x, i.e., x is a factor of w. Covers and seeds reveal redundancies inside words that are more difficult to discover. Finding the shortest cover of a word can be done in linear time, while its shortest seed is done in O(n log n) time by Apostolico and Breslauer (1997). Palindromes are yet another kind of redundancies. They can be generalised to palindromes having a bounded-length centre and called gapped palindromes. Testing if a word is a palindrome is straightforward but finding palindromic segments is more complex. Under some constraints it can be done in time O(n log a) (Kolpakov and Kucherov, 2008), (Chairungsee and Crochemore, 2009) and even in linear-time using more sophisticated data structures (Crochemore et al., 2010a). Beyond the theoretical aspect of problems related to redundancies, repetitions and repeats are often the base for string modelling adapted to compression coding. They appear in run-length compression and in Ziv–Lempel compression, see for example (Bell et al., 1990). But more is to be done on this to account for all the notions described here. Repetitions and palindromes receive intensive attentions for the analysis of genetic sequences. Repetitions are then called tandem repeats, satellites or SRS and should 1

A border of a word w is a proper factor of w that is both a prefix and a suffix of w.

Text redundancies

153

accept some notion of approximation. The existence of some types of palindromes is crucial for the prediction of the secondary structure of RNA molecules influencing their biological functions, see (B¨ockenhauer and Bongartz, 2007).

5.2 Avoiding repetitions and repeats In this section we show how some infinite word can avoid repeats. The property depends on the alphabet size.

5.2.1 Avoidability in binary words In 1906 A.Thue established that squares are avoidable on a 3-letter alphabet (see Theorem 4.2.2) and cubes are avoidable on a 2-letter alphabet (Thue, 1906b); also see Section 4.2.1. Iterating the Thue–Morse morphism μ defined in Example 1.3.1 by

μ (0) = 01, μ (1) = 10 gives the Thue–Morse word t = μ ω (0) = 011010011001011010010110 · · · which is overlapfree (see Theorem 4.2.8). Pansiot (1981) observed that the only morphisms generating the Thue–Morse word are powers of μ . This was extended by S´ee´ bold in the following statement. Theorem 5.2.1 (S´ee´ bold (1983)) Let x be an infinite overlapfree word over the alphabet {0, 1} that is generated by iterating some morphism h. Then h is a power of the Thue–Morse morphism μ . Although it has been shown that overlaps are avoidable in infinite binary words, squares on the contrary are not avoidable in any binary words of length at least 4. In line with these findings, Fraenkel and Simpson (1995) proved that there are infinite words that can contain only three of them. All factors of exponent at least 2 should be considered, which adds two cubes to the three squares. Their proof uses a pair of morphisms, one morphism to generate an infinite word by iteration, the other morphism to produce the final translation to the binary alphabet. Their result has been proved with different pairs of morphisms by Rampersad et al. (2005) (their first morphism is uniform), by Harju and Nowotka (2006) (their second morphism accepts any infinite squarefree word), and eventually by Badkobeh and Crochemore (2010) with the two morphisms f0 : f0 (a) = abc, f0 (b) = ac, f0 (c) = b,

154

G. Badkobeh, M. Crochemore, C. S. Iliopoulos and M. Kubica

and g0 : g0 (a) = 01001110001101, g0 (b) = 0011, g0 (c) = 000111. The infinite binary word g0 = g0 ( f0 ∞ (a)) contains only the three squares 00, 11 and 1010, and the two cubes 000 and 111. The study of the avoidability of large squares in infinite binary words with constraints has been initiated by Shallit (2004) who considered extreme cases of infinite binary words under two types of constraints, namely, maximal exponent and period length. In his paper, Shallit shows that for all t, t ≥ 1, no infinite binary word simultaneously avoids all squares yy with period |y| ≥ t and 7/3-powers. This implies that the number of squares in an infinite binary word is unbounded if it is 7/3-power free. He considers also the period length of the avoided squares when the maximal exponent increases to 5/2 and to 3. Table 5.1 summarises his results. Table 5.1 Period of avoided squares. Period of avoided squares

Avoidable power

Unavoidable power

2 3 4, 5, 6 ≥7

none 3+ 5/2+ 7/3+

all 3 5/2 7/3

Furthermore, the avoidability of some period lengths and maximal exponent have been studied by Dekking (1976) and in more detail by Ochem (2006). Badkobeh and Crochemore (2010) show that the two types of constraints for the binary alphabet can be combined: they produce indeed an infinite word whose maximal exponent of its factors is the smallest possible while containing the smallest (finite) number of squares. With maximal exponent 7/3 the smallest number of squares is 12 to which is to be added 7/3-powers. It is known (Karhum¨aki and Shallit, 2004) that if an infinite binary word avoids 7/3-powers it then contains an infinite number of squares. Proving that it contains more than 12 squares is indeed a matter of simple computation. Shallit (2004) has built an infinite binary word avoiding 7/3+ -powers and all squares of period at least 7 as mentioned above. However, his word contains 18 squares. The infinite binary word in (Badkobeh and Crochemore, 2010) avoids the same powers but contains only 12 squares, the largest having period 8. As before the proof relies on a pair of morphisms satisfying suitable properties. Both morphisms are almost uniform (up to one unit). We will use the following notation. The letter B stands for the alphabet {0, 1}, and we let denote by A6 the alphabet {a, b, c, d, e, f}.

Text redundancies

155

The first morphism f0 is weakly squarefree defined from A∗6 to itself by: f1 (a) = abac, f1 (c) = eabdf, f1 (e) = bace,

f1 (b) = babd, f1 (d) = fbace, f1 (f) = abdf.

The second morphism g1 from A∗6 to B∗ , does not even correspond to a uniquely decipherable code but admits a unique decoding on the words produced by the first morphism. It is defined by: g1 (a) = 10011, g1 (c) = 01001, g1 (e) = 0110,

g1 (b) = 01100, g1 (d) = 10110, g1 (f) = 1001.

Then the infinite word g1 = g1 ( f1 ∞ (a)) has the desired property: it contains the 12 squares 02 , 12 , (01)2 , (10)2 , (001)2 , (010)2 , (011)2 , (100)2 , (101)2 , (110)2 , (01101001)2 , (10010110)2 only. Words 0110110 and 1001001 are the only factors with an exponent larger than 2, that is, 7/3. Looking at repetitions in words on larger alphabets, the subject introduces a new type of threshold, that we call the finite-repetition threshold. For the alphabet of a letters, it is defined to be the smallest rational number FRt(a) for which there exists an infinite word avoiding FRt(a)+ -powers and containing a finite number of rpowers, where r is Dejean’s repetition threshold introduced in Section 4.3.1. Results from Karhum¨aki and Shallit (2004) as well as Badkobeh and Crochemore (2010) show that FRt(2) = 7/3, where the associated number of squares is 12.

5.2.2 Fewest repetitions vs. maximal-exponent powers in infinite binary words In this section we provide extra results that deepen the question of avoidable patterns in infinite binary words by introducing another point of view. We analyse the tradeoff between the number of (distinct) squares and the number of maximal-exponent repetitions in infinite binary words when the maximal exponent is constant. We focus on the behaviour of infinite binary words when the maximal exponent varies between 3 to 7/3. The value 7/3 is the finite-repetition threshold. And the value 3 of the maximal exponent is where the number of squares is the minimum. Table 5.2 summarises the results. The infinite binary word g2 = g2 ( f2∞ (a)) contains 14 squares, no factor of exponent larger than 7/3 and only one 7/3-power (Badkobeh, 2011). The morphism f2 is defined from A∗5 to itself, with A5 = {a, b, c, d, e, f}, by:

156

G. Badkobeh, M. Crochemore, C. S. Iliopoulos and M. Kubica Table 5.2 Maximal exponent and powers. Maximal exponent e

Allowed number of e-powers

Minimum number of squares

7/3

2 1

12 14

g1 = g1 ( f 1∞ (a)) g2 = g2 ( f 2∞ (a))

5/2

2 1

8 11

g3 = g3 ( f 2∞ (a)) g4 = g4 ( f 0∞ (a))

3

2 1

3 4

g0 = g0 ( f 0∞ (a)) g5 = g5 (w)

f2 (a) = adcbebc, f2 (b) = adcbedc, f2 (c) = aebc, f2 (d) = aebedc, f2 (e) = aebedcbebc. Then we translate f2∞ (a) to a binary alphabet using the second morphism g2 defined by:

g2 (a) = 101001100101, g2 (b) = 1010011001001, g2 (c) = 101001011001, g2 (d) = 101001011001001, g2 (e) = 101001011001001100101, The word g2 = g2 ( f2 ∞ (a)) satisfies the property, contains the 14 squares 02 , 12 , (01)2 , (10)2 , (001)2 , (010)2 , (100)2 , (101)2 , (0110)2 , (1001)2 , (100110)2 , (0100110)2 , (0110010)2 , (10010110)2 , and only one 7/3-power, 1001001. Proving that it is impossible to have less than 12 squares when avoiding 5/2 powers of binary words needs a simple computation, but there exist infinite 5/2+ -free binary words with less than 12 squares (Badkobeh, 2011). Additionally the number of squares varies according to the number of maximal-exponent powers: if there is only one 5/2-power the minimum number of squares is 11, and if there are two 5/2-powers, it becomes 8. The infinite binary word g3 = g3 ( f2∞ (a)) contains 8 squares, no factor of exponent larger than 5/2, and only two 5/2-powers (Badkobeh, 2011). The morphism g3 is

Text redundancies

157

defined by: g3 (a) = 001100101, g3 (b) = 0011001011, g3 (c) = 001101, g3 (d) = 001101011, g3 (e) = 00110101100101. The 8 squares g3 contains are 02 , 12 , (01)2 , (10)2 , (0110)2 , (1001)2 , (011001)2 , (100110)2 , and the two 5/2-powers are 01010, 10101. The infinite binary word g4 = g4 ( f0∞ (a)) is a 5/2+ -free with only one 5/2-power and no more than 11 squares (Badkobeh, 2011), where g4 , from A∗3 to B∗ , with A3 = {a, b, c}, is defined by: g4 (a) = 1001001101011001101001011001001101100 101101001101100100110100101100110101, g4 (b) = 100100110100101, g4 (c) = 1001001101100101101001101. The 11 squares of g4 are 02 , 12 , (01)2 , (10)2 , (001)2 , (010)2 , (011)2 , (100)2 , (101)2 , (110)2 , (0110)2 , and its 5/2-power is 10101. Finally, proving that it is impossible to have less than 8 squares when avoiding cubes needs another mere computation, but by recalling Fraenkel and Simpson’s result (Fraenkel and Simpson, 1995), the word g0 = g0 ( f0∞ (a)) shows the existence of an infinite binary word containing only 3 squares and 2 cubes. It has been shown that the number of squares increases to 4 if only one cube is allowed in the infinite binary word. Let us consider the morphism g5 defined by g5 (a) = 1100010110010100, g5 (b) = 1101000110010100, g5 (c) = 0110101100010100. Then the infinite word g5 = g5 (w), where w is any infinite 7/4+ -free ternary word, is 3+ -free and contains the 4 squares 00, 11, 0101 and 1010, and the only cube 000 (Badkobeh, 2011).

5.3 Finding repetitions and runs We have already seen that repetitions and periods in words constitute one of the most fundamental areas of word combinatorics. They have been studied already in the papers of Thue (1906b), considered as having founded combinatorics of words. While Thue was interested in finding long sequences with few repetitions, in recent times a lot of attention has been devoted to the algorithmic side of the problem.

158

G. Badkobeh, M. Crochemore, C. S. Iliopoulos and M. Kubica

Detecting repetitions in words is an important element of several questions: pattern matching, text compression and computational biology, to quote a few. Patternmatching algorithms have to cope with repetitions to be efficient as these are likely to slow down the process; the large family of dictionary-based text compression methods, see (Witten et al., 1999), use a weaker notion of repeats (like the software gzip); repetitions in genomes, called satellites or simple sequence repeats, are intensively studied because, for example, some over-repeated short segments are related to genetic diseases (MacDonald and Ambrose, 1993); some satellites are also used in forensic crime investigations. In this section, we recall some of the most significant achievements in the area. We focus on algorithms for finding repetitions and on their analysis that relies on counting various types of repetitions. The main result concerns the linear-time computation of runs in a word as well as combinatorial estimations for their number. Initially people investigated mostly occurrences of squares, but their number can be as high as Θ(n log n) (Crochemore, 1981), hence algorithms computing all of them cannot run in linear time, due to the potential size of the output. Indeed the same result holds for any type of repetition having an integer exponent greater than 1 (Crochemore et al., 2010b). The optimal algorithms reporting all positioned squares or just a single square were designed by Crochemore (1981), by Apostolico and Preparata (1983), by Main and Lorentz (1984), see also (Crochemore, 1986). Theorem 5.3.1 (Crochemore (1981), Apostolico and Preparata (1983), Main and Lorentz (1984)) There exists an O(n log n) worst-case time algorithm for computing all the occurrences of primitively rooted squares in a word of length n. Techniques used to design the algorithms designed in the three given references are based on partitioning, suffix trees, and naming segments, respectively. A similar result using suffix arrays has been obtained by Franek et al. (2003). Testing the existence of some repetitions can, however, be done faster, for instance in the case of squares. Theorem 5.3.2 (Crochemore (1986), Main and Lorentz (1984)) Testing if a word of length n is squarefree can be done in worst-case time O(n log a), where a is the alphabet size of the word. Looking at (distinct) factors that are repetitions but not at their occurrences as discussed above, it is known that only O(n) (distinct) squares can appear in a word of length n. Corollary 5.3.3 (Fraenkel and Simpson (1998)) Any word of length n contains at most 2n (distinct) squares. The proof is a direct consequence of a useful lemma known as the Three-square Lemma (Crochemore and Rytter, 1995), but a simple direct proof is due to Ilie (2005). Based on numerical evidence, it has been conjectured that the number of

159

Text redundancies 0

1

2

3

4

5

6

7

8

9

10

11

12

a

b

a

a

b

a

b

b

a

b

a

b

b

Figure 5.1 Dotted lines show runs in abaababbababb. For example, [7 . . . 11] is the run of period 2 and length 5 associated with factor babab.

squares is at most n. The best upper bounds to date are 2n − Θ(logn) by Ilie (2007) and 11n/6 by Deza et al. (2014). The structure of all squares and of repetitions has been also computed within the running time O(n log a) by Main (1989) and by Gusfield and Stoye (2004).

5.3.1 Runs The concept of maximal periodicity or maximal occurrence of repetitions, coined runs by Iliopoulos et al. (1997) when analysing Fibonacci words, has been introduced to represent all repetitions in a succinct manner. The crucial property of runs is that there are only O(n) many of them in a word of length n (see (Kolpakov and Kucherov, 1999), (Rytter, 2006), (Crochemore and Ilie, 2007), (Puglisi et al., 2008) and (Bannai et al., 2014a,b)). Formally, a run in a word w is an interval [i . . . j] of positions for which both the associated factor w[i . . . j] is periodic (i.e., it has period p ≤ ( j − i + 1)/2), and the periodicity cannot be extended to the right nor to the left: w[i − 1 . . . j] and w[i . . . j + 1] have larger periods when these words are defined (see Figure 5.1). A run of period p is also called a p-run. As a consequence of the algorithms and of the estimation on the number of squares, the most important result related to repetitions in words can be formulated as follows. Theorem 5.3.4 (Kolpakov and Kucherov (1999), Rytter (2006), Crochemore and Ilie (2007)) (i) All runs in a word of length n over an alphabet of size a can be computed in time O(n log a). (ii) The number of all runs is linear in the length of the word. Point (ii) of the statement is very intricate and of purely combinatorial nature. The algorithm for (i) executes in time proportional to the number of runs on a fixedsize alphabet which, by (ii), is linear. Indeed, with a less restrictive assumption but a reasonable hypothesis on the alphabet, the running time of (i) can be reduced to O(n) as stated in Theorem 5.3.6 below.

5.3.2 Computing runs Next, we sketch the basic components of the proof of point (i) of Theorem 5.3.4.

160

G. Badkobeh, M. Crochemore, C. S. Iliopoulos and M. Kubica −1

0

1

2

3

4

5

6

7

8

9

10

11

12

#

a

b

a

a

b

a

b

b

a

b

a

b

b

Figure 5.2 L-roots of runs in #abaababbababb (# < a < b).

The first main idea is to use, as for Theorem 5.3.2, the f-factorisation of the input word (Crochemore, 1986): a word w is factorised into factors u1 , u2 , . . . , uk , that is, w = u1 u2 . . . uk , where ui is the longest segment that appears before its position in w, i.e., in u1 u2 . . . ui A−1 , possibly overlapping its present occurrence of ui ; if the segment is empty ui is set to letter u[i], which does not appear before. The factorisation is an analogue to the Ziv–Lempel factorisation that plays an important role in data compression algorithms based on dictionaries. Runs that fit inside a single factor of the f-factorisation are called internal runs, other runs are called here overlapping runs. There are three crucial facts: • all overlapping runs can be computed in linear time, • each internal run is a copy of an earlier overlapping run, • the f-factorisation can be computed in linear time under some hypothesis on the alphabet of the word (see Theorem 5.3.5 below). It follows easily from the definition of the f-factorisation that if a run overlaps two consecutive factors uk−1 and uk then its size is at most twice the total size of these two factors. The f-factorisation of a word is commonly computed with the suffix tree or the suffix automaton of the word. When the alphabet of the word has a fixed size, thanks to the efficient algorithms for building these data structures, the whole process can be carried on in linear time. Two recent algorithms by Crochemore and Ilie (2008a) and by Chen et al. (2008) (see also (Crochemore et al., 2008)), use the suffix array of the word to provide linear-time algorithms for integer alphabets, i.e., for alphabets whose sequences of letters can be sorted in linear time. This is done through a useful table called LPF, which stands for longest previous factor, which provides for each position on the word the longest factor occurring both at that position and before it. Its computation is done in linear time with the suffix array of the word (see (Crochemore and Ilie, 2008a), (Crochemore et al., 2008, 2009)). Theorem 5.3.5 (Crochemore and Ilie (2008a), Chen et al. (2008)) On an integer alphabet, the f-factorisation of a word can be computed in linear time. The second main idea introduced by Bannai et al. (2014a) is to use the Lyndon tree (or Lyndon forest) of the input word. A run is entirely determined by any occurrence of a conjugate of its word period. Noticing that all of them appear in a run since it starts with a square, we can consider the smallest lexicographic conjugate of the word period, which is a Lyndon word (word that is the smallest among its non-empty suffixes). It is called the Lroot of the

161

Text redundancies

−1

0

1

2

3

4

5

6

7

8

9

10

11

12

#

a

b

a

a

b

a

b

b

a

b

a

b

b

Figure 5.3 Lyndon trees of #abaababbababb. The top tree is related to the alphabet ordering # < a < b. The bottom tree is for the ordering # < b < a. Black nodes show an assignment of runs Lroots longer than 1 to internal nodes. The bottom tree is necessary to get a direct assignment of Lroot baa.

run. Then, we can choose the first occurrence of its Lroot. This follows an idea used in (Crochemore et al., 2012) to bound the number of cubic runs (runs starting with a cube) in a word. Figure 5.2 displays Lroots of runs in #abaababbababb (a small letter is appended at the beginning of the word of Figure 5.1 to get a unique Lyndon word instead of a Lyndon factorisation). The Lyndon tree reflects inductively the standard factorisation of a Lyndon word. It is known that any Lyndon word w is either a single letter or uniquely factorises into uv, where u and v are Lyndon words, u < v, and v is the longest proper suffix of w that is a Lyndon word. Nodes of the Lyndon tree are associated with Lyndon factors of the word. Some internal nodes correspond to Lroots of length more than 1 but not all of these Lroots correspond to internal node (see Figure 5.3). To get the inverse association Bannai et al. (2014a) showed that it suffices to inverse the alphabet ordering (see the example in Figure 5.3). Runs with period 1 are easily computed in linear time. Other runs of a word can be computed from its Lyndon trees (one for each of the two alphabet orderings) as shown by Bannai et al. (2014a,b). It is done by testing if the Lyndon factor associated with each node is a Lroot using the technique for Longest Common Extension Queries (see, for example, Fischer and Heun (2006)). Testing is then done in constant time per node and then in linear time since there are less than 2n internal nodes. In addition, the Lyndon tree can be computed with the Suffix Array of the word since its structure is essentially that of the Cartesian tree of the sequence of lexicographic

162

G. Badkobeh, M. Crochemore, C. S. Iliopoulos and M. Kubica

ranks of the suffixes of the word as proved by Hohlweg and Reutenauer (2003). There, as with the first approach, all computations take linear time on an integer alphabet. Theorem 5.3.6 (Crochemore and Ilie (2008a), Chen et al. (2008), Bannai et al. (2014a,b)) On an integer alphabet, runs of a word can be computed in linear time according to its length. Finally, note that on a general alphabet runs can be computed in O(n log n) time (Crochemore et al., 2014), and that the computation is optimal due to the lower bound of O(n log n) for testing squarefreeness of a word proved by Main and Lorentz (1984).

5.3.3 Counting runs Let ρ (n) be the maximal number of runs in a word of length n. By item (ii) in Theorem 5.3.4 we have ρ (n) < cn for some constant c. Based on experiments, Kolpakov and Kucherov (1999) conjectured that c = 1 for binary alphabets. A stronger conjecture was proposed by Franek and Yang (2006) where a family of words is given with the number of runs equal to 23φ = 0.927 . . . (φ is the Golden Ratio), thus proving c ≥ 0.927 . . .. Authors conjectured that this bound is optimal, but the best known lower bound for c has been shown to be 0.944 by Matsubara et al. (2008) and by Simpson (2010). After Kolpakov and Kucherov (1999) result, the first explicit upper bound of 5 n on ρ (n) for general words was given by Rytter (2006) and improved in a structural and intricate manner by Rytter (2007) to 3.44 n. It was done by introducing a sparseneighbour technique. Another improvement on the ideas of Rytter (2006) was done by Puglisi et al. (2008) where the bound 3.48 n is obtained. The neighbours are runs for which both the distance between their starting positions is small and the difference between their periods is also proportionally small according to some fixed coefficient of proportionality. The occurrences of neighbours satisfy certain sparsity properties which imply the linear upper bound. Several variations on the definition of neighbours and sparsity are possible. Considering runs having close centres (the beginning position of the second period) the bound has been lowered to 1.6 n by Crochemore and Ilie (2007, 2008b). Using computer verification, bounds have been improved to 1.52 n and to 1.29n for binary words by Giraud (2009), and further to 1.029 n as a result of the computation of a tight approximation of the maximal number of 60-runs in general words (Crochemore et al., 2011). In a variant of their publication, Bannai et al. (2014a) found an assignment of Lroots to right children nodes of the Lyndon trees of the word and proved that the total number of implied nodes is less than the word length, leading to the proof of the runs conjecture. Bannai et al. (2014b) provide a simple direct proof of the result, not using Lyndon

Text redundancies

163

trees. It is also derived from a careful choice of Lroots occurrences that is described now. Let [i . . . j − 1] be a run of period p and length j − i in the word y. The first step is to choose the alphabet ordering to define the Lroot of the run. If the run is a suffix of y or if y[ j] < y[ j − p] then we keep the ordinary ordering, else we choose the inverse alphabet ordering. Then the Lroot of the run is defined with the chosen ordering. It is simple to see that with this choice, any occurrence of the Lroot starting at a position k, i < k ≤ j − p, is the longest Lyndon factor of y starting at k. The rest of the proof consists in showing that two occurrences of Lroots of two different runs do not start at the same position. Assume on the contrary that [k . . . k + p − 1] and [k . . . k + p − 1] are occurrences of Lroots of two different runs. Owing to condition i < k above, these occurrences are not prefixes of their respective runs. We can also assume that p > p since the runs cannot have the same period and be different. On the one hand, since p > 1, we get y[k − 1] = y[k + p − 1] = y[k]. On the other hand, since [k . . . k + p − 1] is the longest Lyndon word starting at position k, [k . . . k + p − 1] can only be the occurrence of a Lroot according to the inverse ordering. This implies that its period is p = 1 and then y[k − 1] = y[k], a contradiction. Eventually noting that the first position of word y is not the starting of any chosen Lroot occurrence leads to ρ (n) < n. Theorem 5.3.7 (Bannai et al. (2014b)) A word contains less runs than its length. Among other questions left open is the next conjecture. It bounds the number of Lyndon roots in each interval of positions on the input word. Looking at the example word in Figure 5.2 we can see that interval [2 . . . 4] of length 3 hosts 3 Lroots and that this number is 6 for the interval [2 . . . 7]. No other interval contains as many Lroots as its length, except of course intervals of length 1. Conjecture 5.3.8 (Crochemore (2014)) Each word interval contains no more Lyndon roots than its length.

5.4 Finding repeats Locating repeats in a word, that is, factors of exponent between 1 and 2, appears to be more complex than finding runs. The main reason is that there are many more repeats, since any word longer than the alphabet size contains at least one factor of exponent larger than 1. In this section we focus on the search for factors whose exponent is at most 2 inside an overlapfree word. Indeed, if the word contains overlaps, it contains runs, which are more significant from the point of view of redundancy. We further restrict the question to factors having the maximal exponent among all factors occurring in the input word in order to get an efficient searching algorithm. We show that the number of considered factors becomes linear according to the length of the word, while it can be quadratic for ordinary repeats, that we can

164 z1

G. Badkobeh, M. Crochemore, C. S. Iliopoulos and M. Kubica zi−1

z2

zi u1

u2

(i)

u4

(ii) (ii) (iii) (iii)

u2 u3

u3 u4

u5

u1

u5

(iv)

Figure 5.4 The only four possible locations of a repeat uvu involving phrase zi of the factorisation of the word: (i) internal to zi , this can be discarded due to the fact that zi has been processed before; (ii) the first occurrence of u is internal to zi−1 ; (iii) the second occurrence of u is internal to zi ; (iv) the second occurrence of u is internal to zi−1 zi .

locate in linear time on a fixed size alphabet (Badkobeh and Crochemore, 2014). We first describe the algorithm and then give an upper bound on the number of maximalexponent factors, bound on which depends the algorithm running time.

5.4.1 Computing repeats The technique to get an efficient algorithm for locating repeats is based on the use of the f-factorisation of the word and of the Suffix Automaton of some factors. The Suffix Automaton (Crochemore et al., 2007) is used as a pattern-matching tool to search for maximal repeats in a product of two words due to its ability to locate occurrences of all factors of a pattern. Using the Suffix Automaton alone in a balanced divide-and-conquer manner produces a O(n log n)-running time algorithm. To remove the log factor, the algorithm additionally exploits the f-factorisation (see (Crochemore et al., 2007)), similarly as can be done for locating runs. The running time of the proposed algorithm depends additionally on the repetitive threshold of the underlying alphabet size of the word. The threshold restricts the context of the search for the second occurrence of u corresponding to a repeat uvu. Let y = z1 z2 · · · zk where z1 , z2 , . . . , zk are the nonempty factors of the f-factorisation of the word y. The divide-and-conquer approach based on the f-factorisation divides the search for occurrences of maximal-exponent factors into the four cases displayed in Figure 5.4 during the processing of the factor zi . There is no other case because a second occurrence of a searched u cannot contain any z-factor without contradicting the definition of the factorisation. This leads to the following algorithm producing the maximal exponent of all factors of y. It can be readily enhanced to output all the corresponding factors.

Text redundancies

165

M AX E XP FAC(y) 1 (z1 , z2 , . . . , zk ) ← f-factorisation of y 2  z1 is the longest prefix of y in which no letter repeats 3 e←1 4 for i ← 2 to k 5 do e ← M AX E XP(zi−1 , zi , e) zi , z; 6 e ← M AX E XP( i−1 , e) 7 if i > 2 · · · zi−2 , e) 8 then e ← M AX E XP(z i−1 zi , z1 9 return e In the algorithm, M AX E XP(z, w, e) deals with repeats occurring in the product zw. It outputs the maximal exponent of factors of zw having an occurrence that starts in z and ends in w. In fact, it outputs e itself if this exponent is smaller than e. The complete pseudo-code for the function M AX E XP is given in (Badkobeh and Crochemore, 2014). It is a straight application of the pattern-matching technique based on the use of a Suffix Automaton or Suffix Tree (Crochemore et al., 2007). After the Suffix Automaton of z is built, it is used to scan the word w letter by letter following the arcs of the automaton and its suffix links. The automaton gives access to the rightmost occurrences of the searched u in z. Meanwhile the exponents of repeats are computed. A few properties avoid too many exponent calculations. Theorem 5.4.1 Applied to an overlapfree word of length n from an alphabet of size a, M AX E XP FAC computes the maximal exponent of factors occurring in the word. It runs on O(n log a) and can be enhanced to output all maximal-exponent factors of the word.

5.4.2 Counting repeats The time analysis of algorithm M AX E XP FAC depends on the number of maximalexponent factors occurring in a word. We report the linear bound on their maximal number. Note that on the alphabet {a, a1 , . . . , an } the word aa1 aa2 a . . . aan a of length 2n + 1 has a quadratic number of maximal (i.e., non-extensible) repeats. Indeed all occurrences of repeats of the form awa for a non-empty word w are non extensible. But only the n repeats of the form aca for a letter c have the maximal exponent 3/2. The following theorem is proved in (Badkobeh and Crochemore, 2014). Theorem 5.4.2 There are less than 2.25 n occurrences of maximal-exponent factors in a word of length n. We present a sketch of the proof. Repeats are partitioned by their border length as follows. A maximal-exponent factor uvu is called a δ -MEF if its border length b = |u| = |uvu| − period(uvu) satisfies 2δ < b ≤ 4δ . Any maximal-exponent factor is a δ -MEF for some δ ∈ Δ, where Δ = {1/4, 1/2, 1, 2, 22, 23 , . . . }.

166

G. Badkobeh, M. Crochemore, C. S. Iliopoulos and M. Kubica

Table 5.3 Maximal numbers for overlapfree words of lengths n = 5, 6, . . . , 20 and for alphabet sizes 2, 3 and 4. n binary ternary 4−ary 13 8 (8, 2) (8, 1.5)

5 2 (2, 1.5) (2, 1.5) 14 9 (9, 2) (9, 1.5)

6 3 (3, 1.5) (3, 1.5)

7 4 (4, 2) (4, 2)

15 9 (9, 2) (10, 1.5)

8 5 (5, 2) (5, 2)

16 11 (11, 2) (11, 2)

9 5 (5, 2) (5, 2)

10 6 (6, 1.5) (6, 1.5)

11 6 (6, 2) (7, 1.5)

17 11 (11, 2) (12, 1.5)

18 12 (12, 2) (12, 1.5)

19 12 (12, 2) (13, 1.5)

12 8 (8, 2) (8, 2) 20 14 (14, 2) (14, 1.5)

Since values of δ ∈ Δ cover all border lengths, the total number of occurrences of maximal-exponent factors is bounded by    2 1 n 1 ∑ δ = n 4 + 2 + 1 + 2 + 2 + · · · < 8 n. δ ∈Δ Furthermore, the upper bound is refined to 2.25n by separately counting the maximal-exponent factors whose border length is less than 6, because maximal-exponent factors whose border length is b cannot have starting positions within b positions from each other. Experiments show that the maximal number of occurrences of maximal-exponent factors is in fact smaller than n and not even close to n, at least for small values of n. Table 5.3 displays the maximal numbers for overlapfree words of lengths n = 5, 6, . . . , 20 and for alphabet sizes 2, 3 and 4. It also displays (second element of pairs) the corresponding maximal exponent. In the binary case we already know that it is 2 since squares are unavoidable in words whose length is greater than 3. We conclude the section by showing words containing many maximal-exponent factors. Let g be the morphism defined by g(c) = ec, for any letter c ∈ {a, b, c, d}. Let D4 be an infinite 4-ary word whose maximal exponent of factors is 7/5, Dejean’s threshold for a 4-letter alphabet. The infinite image q of D4 by g looks like: q = g(D4 ) = eaebecedeaebecedeaebecedeaebecedeaebecedeaebeced· · · . The maximal exponent of its factors is 3/2 and the number of occurrences of maximalexponent factors in its prefix of length n is close to 2n/3. Properties of D4 have been well studied: its maximal repeats uvu have period at least 10. Therefore, the induced factor g(uvu)e has exponent at most 29/20 which is smaller than 3/2. Thus we know the only repeats we need to consider are the 3/2-powers. These factors are either induced by g(c)e = ece, or by g(bcdb)e = ebecedebe, for some letters b, c and d pairwise distinct. Enumerating these 3/2powers is a matter of knowing the frequency of the letter 0 in the Pansiot coding of

Text redundancies

167

D4 because this letter corresponds to a factor of exponent 4/3, then translated into a 3/2-power in the image by g. Summation of these two types of maximal-exponent factors leads to the next result. Theorem 5.4.3 The number of occurrences of maximal-exponent factors in prefixes of q tends to 2n/3 with the prefix length n.

5.5 Finding covers and seeds Covers and seeds are generalisations of repetitions and periodicities. They reflect more subtle redundancies in words, that do not show up as repetitions. They have potential applications in the analysis of nucleotide sequences (DNA, RNA). Quasiperiodicities also have possible applications in text compression. The notion of cover was first introduced by Apostolico and Ehrenfeucht (1993) in 1990, together with a notion of quasi-periodicity. A factor u of y is a cover if, and only if, every letter of y is within some occurrence of u in y. This concept generalises the notion of periods of a string. If a word is its only cover, then it is called superprimitive. A word has a quasi-period q if, and only if, it has a superprimitive cover of length q. The notion of quasi-periodicity can be seen as a relaxed version of repetitions with integer exponents. In both cases, the primitive root and the superprimitive cover are borders of the given word. However, in periodicities, occurrences of the root do not overlap, while occurrences of a cover, on the other hand, can overlap. Dually, notion of superprimitivity is stronger than primitivity – a superprimitive word is also primitive. Example 5.5.1 Let us consider a word babbababbab. Its shortest period equals 5. Its shortest cover is bab and its longest proper cover is babbab. It is primitive, but not superprimitive. Natural problems related to covers are to find the shortest and the longest proper cover of a given word. A linear-time algorithm solving the first problem was proposed by Apostolico et al. (1991). Theorem 5.5.2 (Apostolico et al. (1991)) The shortest quasi-period and the corresponding cover of a word of n symbols can be computed in O(n) time and space. The problem of finding the longest proper cover was solved three years later (Moore and Smyth, 1994). Theorem 5.5.3 (Moore and Smyth (1994)) All the covers of a given word of length n can be computed in time Θ(n) and space Θ(n). Proofs of both theorems exploit the relations between covers and borders. Clearly, each cover of a word is also its border. Hence, when looking for covers, we can limit

168

G. Badkobeh, M. Crochemore, C. S. Iliopoulos and M. Kubica

our considerations just to borders. All borders of a given word (and of all its prefixes) can be easily enumerated using a border array, which contains the longest proper borders of all the prefixes of the given word. The border array can be computed in a linear time (Knuth et al., 1977), (Crochemore et al., 2007), (Crochemore and Rytter, 2003), (Smyth, 2003). Enumeration of all the borders of some word w is based on the fact that all the borders of w shorter than its longest border u are also borders of u, and conversely. A similar property holds for covers. A cover of a cover of a given word w is also a cover of w. A converse property is even stronger: if w has a border u and a cover v such that |u| > |v|, then v is also a cover of u. Using this property, we can limit our search for covers just to check if the longest proper border u of some word w is its cover. If |u| ≥ |w| 2 , then it clearly does. Otherwise we have to search (in linear time) for all occurrences of u in w. However, each sequential search is smaller by a factor of at least 2. Hence, the overall time complexity remains linear. The problem of finding covers can be further extended to problems of computing the shortest/longest cover arrays, that is, arrays of respective covers of all the prefixes of a given word. Breslauer (1992) gave an on-line linear-time algorithm computing the shortest cover array of a word. Theorem 5.5.4 (Breslauer (1992)) The shortest cover array of a word of length n can be computed in O(n) time and auxiliary memory, performing at most 2n comparisons of input symbols. Additionally to the properties of covers and borders presented above, this algorithm exploits the fact that the algorithm computing the border array (Knuth et al., 1977), (Crochemore et al., 2007), (Crochemore and Rytter, 2003), (Smyth, 2003) is on-line. The algorithm creates two auxiliary arrays, both indexed by lengths of prefixes of the given word. One contains quasi-periods of prefixes of the given word. The other one, for superprimitive prefixes of the given word, contains the longest prefix of the given word covered by the superprimitive prefix. Li and Smyth (2002) provided a linear-time algorithm for computing the longest (proper) cover array of a word. Note that the longest cover array enables us to compute all proper covers of all prefixes of the word. Similarly, as for a border array, all the covers can be enumerated by iterating the longest cover array, as shown in the following example. Example 5.5.5 Let us consider a word bababbababbabab of length 15. Its longest proper cover array equals C = [0, 0, 0, 2, 3, 0, 0, 3, 0, 5, 6, 7, 8, 9, 10]. It has three proper covers: bababbabab, bab and babab of lengths, respectively: C[15] = 10, C[10] = 5 and C[5] = 3. Since C[3] = 0, there are no more proper covers. The notion of a seed can be seen as an extension of the notion of a cover. A factor u of the given word w is a seed, if w is a factor of some word covered by u. In other words, u covers w, but its leading and trailing occurrences can extend on either side

Text redundancies

169

of w. Note that a word of length n can have Θ(n2 ) seeds, hence if one wants to find all the seeds in a shorter time, they have to be reported in groups, not just one by one. Example 5.5.6 Let us consider a word of the form w = ak bak , n = |w| = 2k + 1. For any 0 ≤ i, j ≤ k ≤ i+ j, word of the form ai ba j is a seed of w. There is O(k2 ) = O(n2 ) such seeds. On the other hand, there is Θ(n2 ) factors of w. Seeds were first defined and studied by Iliopoulos et al. (1996), who gave an O(n log n) time algorithm computing all the seeds of a given word u of length n, and in particular the shortest seed of u. It turns out that finding all the seeds of a given word, or even just the shortest one, is much more complicated than finding covers. Mainly because there is no such connection between borders and seeds, as there is between borders and covers. Also, seeds cannot be enumerated by looking for the seeds of seeds. Example 5.5.7 Consider a word baaababaaab. Among its seeds are words abaaab and aababaa. In turn, baaa is a seed of abaaab and aba is a seed of aababaa, but they are not seeds of baaababaaab. Theorem 5.5.8 (Iliopoulos et al. (1996)) All the seeds of a given word w of length n can be found in O(n log n) time. Seeds are split there into two groups: easy and hard. If the given word w is periodic, then all the factors that are not shorter than its minimal period are its easy seeds. The easy seeds are found in O(n) time. All other seeds are hard. Finding hard seeds dominates and takes O(n log n) time. First, using Crochemore partitioning (Crochemore, 1981), all the factors u that cover w except possibly some prefix and suffix shorter than |u| are found. Finally, they are filtered using a border table of w (and of reversal of w). This result was improved very recently (Kociumaka et al., 2012). Theorem 5.5.9 (Kociumaka et al. (2012)) A linear representation of all the seeds of a given word can be computed in a linear time. The algorithm is complex. Finding all the seeds of some word w is first reduced to a problem of finding all quasi-seeds of w, that is, all the factors u that cover w except possibly some prefix and suffix shorter than |u|. Similarly to the work of Iliopoulos et al. (1996), computation of quasi-seeds involves suffix trees. Quasi-seeds correspond to lower fragments of some (compressed) edges in the suffix tree. Quasi-seeds are split into two groups: longer and shorter than some threshold, called respectively long and short. The number of vertices of the suffix tree that have to be considered to find long quasi-seeds is limited, resulting in a linear running time needed to find them. The algorithm finding short quasi-seeds is recursive and involves finding short quasi-seeds starting in a given fragment of w, by considering a reduced and compacted suffix tree of positions in this fragment. Additionally, the LZ factorisation of w (Ziv and Lempel, 1977), (Crochemore et al., 2008), (K¨arkk¨ainen et al., 2013), is

170

G. Badkobeh, M. Crochemore, C. S. Iliopoulos and M. Kubica

used to reduce the amount of work to be done. If the considered fragment belongs to a factor that has appeared before, results computed for this factor are reused. Before the linear algorithm for finding seeds has been proposed, there have been efforts to obtain better results for some variants of the problem. Christou et al. (2013) considered left-seeds, that is, seeds that are prefixes of the given word. In a way, leftseeds are somewhere between seeds and covers. It allows not only to find them in linear time, but also to compute arrays of the shortest and longest left-seeds for all prefixes of the given word. Theorem 5.5.10 (Christou et al. (2013)) For any word of length n, its minimum length and maximum length left-seed arrays can be computed in O(n) time. Using the border array, one can efficiently identify occurrences of all the prefixes of the given word. Then, for each prefix, one can compute the longest prefix covered by it. Computing left-seeds arrays is then straightforward. Another approach to the problem of seeds presented by Christou et al. (2013) is to consider the following decision problem: whether a given word has a seed of a given length. Theorem 5.5.11 (Christou et al. (2013)) It can be checked whether a given word of length n has a seed of a given length in O(n) time. This decision problem is solved using a suffix array together with a longest common prefix array (Crochemore et al., 2007), (K¨arkk¨ainen and Sanders, 2003), (Ko and Aluru, 2005). Each seed of the given length corresponds to some segment of the suffix array, containing all its starting positions. This decision procedure is also applied to compute the shortest seed array of all the prefixes. Theorem 5.5.12 (Christou et al. (2013)) The shortest seed array of a word of length n can be computed in O(n2 ) time. Notion of a right-seed is dual to the left-seed, it is a seed that is also a suffix of the given word. A problem of computing a minimum (maximum) right-seed array is to compute minimum (maximum) length seeds of all prefixes of the given word. It is complementary (but not equivalent) to the problem of computing minimum and maximum left-seed arrays of the given word. These two problems have also been solved by Christou et al. (2011). Theorem 5.5.13 (Christou et al. (2011)) Computing the minimal right-seed array of a given word y can be done in O(n log n) time, where |y| = n. Computing the maximal right-seed array of a given word y can be done in Θ(n) time, where |y| = n.

Text redundancies

171

5.6 Palindromes A palindrome is a word or phrase that reads the same backwards as forwards, for instance ‘refer’, ‘level’ and ‘stats’. In a biological context, we encounter a more general concept of palindrome2 as presented below. Words with palindromic structure are important elements of DNA and RNA sequences, as they reflect the capacity of molecules to form double-stranded stems and loops, which insures a stable state of those molecules with low free energy. Restriction endonucleases usually recognise palindromic sequences of DNA as they are useful for gene isolation and cloning, genetic recombination, examining chromosome structure, and sequencing long DNA fragments. When an RNA is sequenced (digitised) it appears in the form of a mere word on the alphabet of nucleotides A, C, G and U that stand for adenine, cytosine, guanine and uracil. Meaningful palindromic genetic sequences include a gap (spacer) between left and right parts. Those palindromes correspond to hairpin structures of RNA molecules and they are significant in DNA sequences. These structures are widespread in the natural plasmids, viral and bacterial genomes, eukaryotic chromosomes and cell organelles. The reverse part of a palindrome is to be combined with the complementarity relation on nucleotides, where A and U are complements as well as C and G. For example, the (exact) complimentary strand of the sequence CGAATGGCTCTT is GCTTACCGAGAA. And CGAATGGCTCTTsAAGAGCCATTCG is a gapped palindrome with spacer s. Also see Section 6.2.4 for more on the connections between biological sequences and combinatorics on words.

5.6.1 Using LPrF to locate palindromes In this section, we introduce an efficient technique to find palindromes in a given word. This technique is based on the notion of longest previous reverse factor occurrence LPrF. The LPrF table is a concept close to the LPF table for which the previous occurrence in not reversed (Crochemore et al., 2009). This latter table extends the Ziv–Lempel factorisation of a text (Ziv and Lempel, 1977) intensively used for conservative text compression (Bell et al., 1990). The LPrF table also generalises a factorisation of words used by Kolpakov and Kucherov (2008) to extract certain types of palindromes in molecular sequences. These palindromes play an important role in RNA secondary structure prediction because they signal potential hairpin loops in RNA folding (B¨ockenhauer and Bongartz, 2007). In addition, the reverse complement of a factor has to be considered up to some degree of approximation. One of the problems is to compute efficiently, for a given word y, the LPrF table that stores at each index i the maximal length of factors that both start at position i on y and occur reversed at a smaller position. Table 5.4 is the example table for the word y = aababaabab. 2

See the concept of pseudo-palindrome or theta-palindrome discussed in de Luca and De Luca (2006).

172

G. Badkobeh, M. Crochemore, C. S. Iliopoulos and M. Kubica Table 5.4 Example table for the word y = aababaabab. position i

0

1

2

3

4

5

6

7

8

9

y[i] LPrF[i]

a 0

a 1

b 0

a 1

b 3

a 2

a 4

b 3

a 2

b 1

Table 5.5 Finding palindromes. position i

0

1

2

3

4

5

6

7

8

9

y[i] LPrF[i] Pal[i]

a 0 -

a 1 0

b 0 -

a 1 1

b 3 0

a 2 0

a 4 2

b 3 2

a 2 1

b 1 2

From Table 5.4, we can find the palindromes of this word y by storing the starting position of the LPrF value as shown in Table 5.5. One efficient technique to compute the LPrF table is to use a Suffix Automaton. The Suffix Automaton of a word w, noted S(w), is the minimal deterministic automaton that accepts the set of suffixes of w (Crochemore et al., 2007). Its construction makes use of a table defined on its states, noted F, and known as the failure link of S(w). We also consider the length function L. They are informally defined as follows. If state q is associated with the nonempty word u, F[q] is the state associated with the longest suffix of u leading from the initial state to a state different from q, and we let denote by L[q] the maximal length of labels of paths from the initial state to state q.

5.6.2 Using a Suffix Automaton to compute LPrF A solution to compute the LPrF table of the input word y in linear time is designed with the Suffix Automaton S (yR ). The structure includes the failure link F and the table L recalled above as well as an additional attribute on states called SC for shortest context and described below. Figure 5.5 displays the Suffix Automaton of babaababaa used for computing the LPrF table of the word aababaabab. Table 5.6 gives the attributes (F, L and SC) of the states of the automaton displayed in Figure 5.5. Here is how the computation is carried on according to the algorithm described below. At a given step, i is a position on y,  is the length of the current match, and a 0

b

1

a

2

a

b

11 b

a

3 a

4

a

5

b

6

a

7

b

8

a

9

a

10

b

Figure 5.5 The Suffix Automaton of babaababaa, reverse of aababaabab.

173

Text redundancies Table 5.6 Attributes and states of the automaton displayed in Figure 5.5. state q

0

1

2

3

F[q] L[q] SC[q]

0 0 0

0 11 1 1 2 3 2 1 2

4

5

6

7

8

9 10 11

2 11 3 4 5 6 1 0 4

4 7 3

3 8 2

4 5 0 9 10 1 1 0 0

q is the current state of the automaton. The principal invariant of the computation is the equality δ (initial, y[i −  . . .i − 1]) = q, where δ denotes the transition function of the automaton and initial its initial state. The condition to extend the match by the letter a = y[i] is that δ (q, a) is defined and that y[i −  . . . i − 1]a occurs in yR at a position at least as large as n − i + . This can be tested efficiently on the automaton if the table SC is available. For a state r, SC[r] is the minimal length of labels of paths from r to a terminal state. The table can be pre-processed via a mere traversal of the automaton. The test becomes i −  ≤  + 1 + SC[δ (q, a)] where the first member is the length of y[0 . . . i −  − 1] and the second member is the minimal length of suffixes of yR starting with the next match. When the test is negative, the failure link F is applied to shorten the match whose length is given by L. None of the suffixes of the match of length larger than L[F[q]], which all correspond to the same state q, is able to change the value of the test in line 4. Then, a batch of LPrF values are computed in lines 8–10. In the code below we assume that F[initial] = initial. The value of F[initial] is usually left undefined for Suffix Automata but the assumption simplifies the presentation of the algorithm. LPrF- AUTOMATON(y, n) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

(q, ) ← (initial, 0) i←0 repeat a ← y[i] while (i < n) and (δ (q, a) = NULL) and ((i − ) ≥  + 1 + SC[δ (q, a)]) do (q, ) ← (δ (q, a),  + 1) i ← i+1 a ← y[i] repeat LPrF[i − ] ←   ← max{0,  − 1} until  = L[F[q]] if q = initial then q ← F[q] else i ← i + 1 until (i = n) and ( = 0) return LPrF

174

G. Badkobeh, M. Crochemore, C. S. Iliopoulos and M. Kubica

Theorem 5.6.1 The algorithm LPrF- AUTOMATON computes the LPrF table of a word of length n in time O(n) on a fixed-size alphabet. For the computation of genetic palindromes in which the left part in not only reverse but also complemented, a variant of the LPrF table is introduced accordingly (Chairungsee and Crochemore, 2009) The corresponding computation uses the Suffix Automaton S(z), where z is the reverse complement of the input y. Note that if the automaton is implemented in linear space on a potentially infinite alphabet, the overall algorithm runs in O(n log a) time. But the running time can be reduced to O(n) on an integer alphabet using the Suffix Array of the input y and Range Minimum Queries data structures (Crochemore et al., 2010a).

6 Similarity relations on words Vesa Halava, Tero Harju and Tomi K¨arki

6.1 Introduction In this chapter we consider similarity relations on words. These relations are induced by compatibility relations on symbols, i.e., by reflexive and symmetric relations on alphabets. Originally, similarity relations on words were introduced in Halava et al. (2007a) in order to generalise the notion of a partial word as presented in (Berstel and Boasson, 1999). Let Σ be a (finite) alphabet. We study similarity of words mainly from two perspectives that are central in combinatorics on words: periodicity and repetition freeness as discussed in Chapters 4 and 5. One of the often studied similarity relation is that of partial words. In these one has a special symbol  ∈ Σ, and the compatibility relation is R = {(a, a), (, a), (a, ) | a ∈ Σ}. In partial words the letter  is sometimes interpreted as a joker symbol or a symbol for ‘do not know’. It is compatible with all letters of the alphabet (including itself). Partial words were studied in (Berstel and Boasson, 1999) with respect to periodicity. Combinatorial properties of partial words have since been extensively studied; for this we refer to, e.g., the monograph (Blanchet-Sadri, 2007) and the references therein. A period of a finite or infinite word is a positive integer p such that all letters in the word occurring in positions congruent modulo p are equal. Also see Definition 1.2.6. A basic theorem on periodicity was proved by Fine and Wilf (1965). According to this result if a word w has two periods p and q, and the length of w is at least p + q − gcd(p, q), then w also has the period gcd(p, q). Berstel and Boasson (1999) considered a generalisation of this result for partial words that have a restricted number of holes. Our aim is to prove variations of Fine and Wilf’s theorem to witness interaction properties between ‘pure’ periods and relational periods induced by a similarity relation on words. Indeed, we define three types of period Combinatorics, Words and Symbolic Dynamics, ed. Val´erie Berth´e and Michel Rigo. Published by c Cambridge University Press. Cambridge University Press 2016.

176

V. Halava, T. Harju and T. K¨arki

in Section 6.4: global, external and local relational periods. These variants are then analysed in different interaction cases. It has already been mentioned in the previous chapters that the study of repetition freeness of words goes back a hundred years to two papers (Thue, 1906a, 1912). These two articles can be considered to be the starting point of combinatorics on words. In particular, Thue showed that there exists an infinite word w over a ternary alphabet that does not contain any squares, i.e., factors of the form xx where x is a non-empty word (see Theorem 4.2.2). Moreover, Thue constructed an infinite binary word t, the so-called Thue–Morse word, that does not contain any overlaps xyxyx for any words x and y where x is not empty (see Theorem 4.2.8). Squarefreeness and related notions can be defined for relational words. This topic is studied in Section 6.5 where we give tight bounds for the repetition thresholds. The basic concepts and theorems on words needed for subsequent sections are introduced in Section 6.2. We define there the main object of this chapter, i.e., similarity of words. In Section 6.3 we discuss shortly an application of similarity relations in coding theory. Sections 6.4 and 6.5 cover the main themes of the chapter, periodicity and repetition freeness.

6.2 Preliminaries In this section we introduce the basic notions and notation for the rest of the chapter. We mostly concentrate on two classical topics of combinatorics on word, namely periodicity and repetition freeness. Short introductions to these topics are provided in this sections. For further knowledge on combinatorics of words, we refer to the book (Lothaire, 1983), the tutorial (Berstel and Karhum¨aki, 2003) and the survey (Choffrut and Karhum¨aki, 1997). Also, Chapter 8 in Lothaire’s second book (Lothaire, 2002) is devoted to aspects of periodicity.

6.2.1 Periodicity Let w = w1 w2 · · · wn , with wi ∈ Σ for all i, be a word over the alphabet Σ. An integer p ≥ 1 is a period of w if wi = wi+p for 1 ≤ i ≤ |w| − p. In this case, the word w is called p-periodic. The smallest integer which is a period of w is called the (minimal) period of w and it is denoted by π (w). For each w, w is a rational power of the prefix of length p of w. See (1.1). We shall consider interaction properties of periods. These are properties of words where two different periods occurring in a finite word implies a third period. The most fundamental theorem of this type is the theorem of Fine and Wilf, which was originally proved in connection with real continuous functions (Fine and Wilf, 1965). In its discrete form on words it is as follows. Theorem 6.2.1 (Fine and Wilf) If a word w has periods p and q, and |w| ≥ p + q − gcd(p, q), then w also has the period gcd(p, q).

Similarity relations on words

177

Example 6.2.2 The word w = aabaabaaabaabaa has periods 7, 10, 13 and 14, and, trivially, all integers n ≥ |w| = 15. Hence, the minimal period is π (w) = 7. Indeed, in this example w = (aabaaba)2a. Note that |w| = 15 = 7 + 10 − gcd(7, 10) − 1. By Theorem 6.2.1, there are no binary words u of length |u| ≥ 15 with periods 7 and 10 apart from the unary words, since gcd(7, 10) = 1. An integer p is a period of an infinite word w = w0 w1 · · · if, for all non-negative integers i, it holds wi = wi+p , i.e., wi = w j whenever i ≡ j (mod p), and in this case w = vω = vvv · · · , where v is the prefix of w of length p.

6.2.2 Similarity relations of words Relations of symbols in several matching problems are known in the literature. In non-standard string matching problem a given pattern p is matched with a text τ up to some relation; see (Muthukrishnan and Ramesh, 1995). In an instance of this problem, we are given a many-to-many matching relation between symbols together with the pattern p. Then we search for positions in a text τ at which the pattern occurs under the relation. Also, the string matching problem with symbols for ‘don’t care’ symbols was introduced by Fischer and Paterson (1974). There a special symbol is present that matches all other symbols. In distance matching problems, a distance function d is given between pairs of symbols. Symbols a and b match if d(a, b) ≤ k for some specified constant k. In the latter two cases the relation on the letters is a compatibility relation inducing a similarity relation on words. Thus, a pattern which matches the text is, using our terminology, similar to the text. We consider binary relations R ⊆ X × X on a set X . We often write x R y instead of (x, y) ∈ R. The restriction of R on a subset Y ⊆ X , i.e., R ∩ (Y × Y ) is denoted by RY . A relation R on X is an equivalence relation if it is reflexive, symmetric and transitive, i.e., it satisfies the following conditions (E1) (E2) (E3)

∀x ∈ X : ∀ x, y ∈ X : ∀ x, y, z ∈ X :

x R x, x Ry =⇒ y R x, x R y, y R z =⇒ x R z.

A relation satisfying (E1) and (E2) is called a compatibility relation. If R ⊆ X × X is a compatibility relation and x R y for x, y ∈ X, we say that x is R-compatible with y, or simply, that x and y are compatible if R is clear from the context. Example 6.2.3 As an example of a compatibility relation on integers Z, consider the relation Dn for a positive integer n: x Dn y ⇐⇒ |x − y| ≤ n. This relation is clearly reflexive and symmetric but it is not transitive. Indeed, for instance, 0 Dn 1 and −n Dn 0 whereas 1 and −n are not related.

178

V. Halava, T. Harju and T. K¨arki

The identity relation ιX = {(x, x) | x ∈ X } on a set X is an equivalence relation. For an alphabet Σ, let ΩΣ = Σ × Σ. The universal (similarity) relation on Σ∗ is then ΩΣ∗ = {(x, y) ∈ Σ∗ × Σ∗ | |x| = |y|}. Also this relation is an equivalence relation. The subscripts X and Σ∗ are often omitted when they are clear from the context. By definition, R ⊆ Σ∗ × Σ∗ is a similarity relation if, and only if, it is a compatibility relation that satisfies u1 u2 · · · um Rv1 v2 · · · vn ⇐⇒ m = n and ui R vi for all i = 1, 2, . . . , m

(6.1)

for all ui , v j ∈ Σ. In particular, if u R v then |u| = |v|. It also follows that a similarity relation satisfies the simplifiability property uu R vv with |u| = |v| =⇒ u R v and u R v

(6.2)

as well as the multiplicativity property u R v and u R v =⇒ uu R vv .

(6.3)

Remark 6.2.4 A similarity relation R ⊆ Σ∗ × Σ∗ does not need to be an equivalence relation, and thus it may fail to be a congruence relation on the monoid Σ∗ . In fact, the most interesting similarity relations of this chapter are not transitive. A similarity relation R on Σ is generated by its restriction RΣ on letters, and therefore R can be represented by the list of the pairs {a, b}, a = b, such that (a, b) ∈ RΣ . Henceforth we use the notation R = $r1 , r2 , . . . , rn %, where ri = (ai , bi ) ∈ Σ × Σ for i = 1, 2, . . . , n, to denote that R is the similarity relation generated by the reflexive and symmetric closure of {r1 , r2 , . . . , rn }. Put differently, R can be represented as a graph GR = (Σ, E), where Σ is the vertex set and E = {{a, b} | a R b with a, b ∈ Σ, a = b} is the set of the edges. Example 6.2.5 Consider the similarity relation R = $(a, c), (b, c), (a, d)%. Then the corresponding graph is given in Figure 6.1.

d

b

a

c

Figure 6.1 The graph GR representing R = $(a, c), (b, c), (a, d)%.

A similarity relation R is clearly a compatibility relation, since the generating relation is reflexive and symmetric. Hence, it is justified to use expressions R-compatible and R-similar side by side.

Similarity relations on words

179

6.2.3 Partial words Partial words were introduced in the present context in Berstel and Boasson (1999). Definition 6.2.6 Let Σ be an alphabet. A partial function w : {1, 2, . . . , n} → Σ is called a partial word of length n. The domain D(w) of w is the set of positions p ∈ {1, 2, . . . , n} for which w(p) is defined. The set H(w) = {1, 2, . . . , n} \ D(w) is the set of holes of w. With each partial word we may associate a companion word w over the extended alphabet Σ = Σ ∪ {} as follows:  w(p), if p ∈ D(w); w (p) = , if p ∈ H(w). Clearly, partial words are in one-to-one correspondence with the words over Σ , and we will also call the words over Σ partial words. Let x and y be partial words of equal length. Then y is said to contain x if D(x) ⊆ D(y) and x(i) = y(i) for all i ∈ D(x). Two partial words x and y are compatible, denoted by x ↑ y, if there exists a partial word z containing both x and y. Example 6.2.7 Let Σ = {a, b, c, d}, and choose the partial words x = abbc and y = abca. We align these words letter-to-letter to obtain a a

b 

 

b b

c c

 a.

This shows that x ↑ y. Indeed, here z = abbca is a partial word containing both x and y. The compatibility relation on partial words is a similarity relation. Indeed, partial words with compatibility relation ↑ can be seen as (total) words over the alphabet Σ with the similarity relation R↑ = ${(, a) | a ∈ Σ}%.

(6.4)

6.2.4 Example: similarity relations in biology Motivation for the research on partial words comes partly from the study of biological sequences such as DNA (deoxyribonucleic acid), RNA (ribonucleic acid) and proteins; see, e.g., (Blanchet-Sadri, 2004a). Partial words are a special case of words with a similarity relation, it is evident that bioinformatics gives a suitable background for applications of similarity relations as well. We briefly describe the molecular biological setting and some basic operations where relations can be employed in theoretical models. See also Section 5.6. Proteins are macromolecules made of amino acids playing an essential role in living organisms by participating in all central processes within cells. The 20 amino

180

V. Halava, T. Harju and T. K¨arki Table 6.1 The 20 amino acids and their three-letter abbreviations. Name

Abbreviation Name

Abbreviation

Alanine

Ala

Leucine

Leu

Arginine

Arg

Lysine

Lys

Asparagine

Asn

Methionine

Met

Aspartic acid

Asp

Phenylalanine

Phe

Cysteine

Cys

Proline

Pro

Glutamine

Gln

Serine

Ser

Glutamic acid

Glu

Threonine

Thr

Glycine

Gly

Tryptophan

Trp

Histidine

His

Tyrosine

Tyr

Isoleucine

Ile

Valine

Val

acids and their three-letter abbreviations are given in Table 6.1. The length of the genes depends upon the length of the amino acid chain it encodes. In the cytoplasm, the messenger RNA (mRNA) binds to ribosomes for translation. Ribosomes read the coding of mRNA by moving from one codon to the next one, interpreting the present codon as one of the 20 amino acids until the corresponding protein is formed. Every mRNA sequence begins with trinucleotide AUG implying that all amino acid chains begin with methionine. The subsequent amino acids of the protein are coded according to Table 6.2. Finally, at the end of the mRNA, the ribosome translates UGA, UAA or UAG as a ‘stop’ symbol. As an example of the use of similarity relations within the framework of protein synthesis, we consider codons as letters forming a 64-letter alphabet. Codons that encode the same amino acid are then similar. For instance, both GAA and GAG correspond to glutamic acid (Glu) whereas UUA, CUA and CUU are code for leucine (Leu). Thus, the following two codon strings stand for the same amino acid sequence (Glu-Leu-Leu) despite the mutations (underlined changes) in the genetic code and therefore the sequences are similar.

GAA ' (% & CUU ' (% & ' (% & UUA Glu

Leu

Leu

mutation



GAG CUA CUA ' (% & ' (% & ' (% & Glu

Leu

Leu

181

Similarity relations on words Table 6.2 The genetic code table (5 to 3 direction).

First position

Second position A

A

C

G

U

C

G

Third position U

Lys Asn Lys Asn

Thr Arg Ile Thr Ser Ile Thr Arg Met Thr Ser Ile

A C G U

Gln His Gln His

Pro Pro Pro Pro

Arg Arg Arg Arg

Leu Leu Leu Leu

A C G U

Glu Asp Glu Asp

Ala Ala Ala Ala

Gly Gly Gly Gly

Val Val Val Val

A C G U

Stop Tyr Stop Tyr

Ser Stop Leu Ser Cys Phe Ser Trp Leu Ser Cys Phe

A C G U

6.3 Coding We shall now consider coding properties from the point of view of similarity relations. Our objective is to generalise the basic theorem (Sardinas and Patterson, 1953) on variable length codes of words for similarity relations. The corresponding problem for partial words was proved to be decidable using a domino technique introduced in (Head and Weber, 1995); see (Blanchet-Sadri, 2004a). We give here a solution for the more general problem of deciding whether a given set X is an (R, S)-code or not. Most of the material of this section is from (Halava et al., 2007a) and (K¨arki, 2008).

182

V. Halava, T. Harju and T. K¨arki

6.3.1 Some language theory For languages L, K ⊆ Σ∗ of words and for a word u ∈ Σ∗ , we define LK = {uv | u ∈ L, v ∈ K}, L+ =



Li = {u1 u2 · · · un | n ≥ 1, u j ∈ L},

i≥1

L∗ = L+ ∪ {ε }, u−1 K = {v | uv ∈ K}, Ku−1 = {v | vu ∈ K}, L−1 K =



u−1 K = {v | ∃ u ∈ L : uv ∈ K}.

u∈L

Let 2X denote the power set of X, i.e., the family of all subsets of X . For a relation R ⊆ X × X , let the corresponding function R : 2X → 2X be defined by R(Y ) = {x ∈ X | ∃ y ∈ Y : y R x}.

(6.5)

A sequence x1 , x2 , . . . , xm of words from X ⊆ Σ∗ is an X -factorisation (or simply a factorisation) of a word w ∈ Σ∗ if w = x1 x2 · · · xn . Figure 6.2 illustrates a word having two different factorisations, x1 , x2 , . . . , xm and y1 , y2 , . . . , yn .

% '

x1 &'

x2 &'

(%

(% y1

&'

(% y2

( &

%

··· ···

'

xm &'

(

(% yn

&

Figure 6.2 Two distinct factorisations of a word.

A subset X ⊆ Σ∗ is a code if every word w ∈ X ∗ has a unique X -factorisation, i.e., for all n, m ≥ 1 and x1 , x2 , . . . , xm ∈ X and y1 , y2 , . . . , yn ∈ X , x1 x2 · · · xm = y1 y2 · · · yn =⇒ n = m and xi = yi for i = 1, 2, . . . , m.

(6.6)

More precisely, such a code is a variable length code as opposed to an error correcting code; see (Baylis, 1998). (For more on codes, see also Section 7.4.5.)

6.3.2 Sardinas–Patterson theorem ∗



We begin with a multiplicative property of the function R : 2Σ → 2Σ of a similarity relation R on Σ. Theorem 6.3.1 Let R be a similarity relation on Σ∗ . Then R(X )R(Y ) = R(XY ) for all X ,Y ⊆ Σ∗ . Especially, R(X)∗ = R(X ∗ ) for all X ⊆ Σ∗ . Proof Let w ∈ R(X )R(Y ). Then w = uv for some words u ∈ R(X ) and v ∈ R(Y ).

Similarity relations on words

183

Now, there exist x ∈ X and y ∈ Y such that x R u and y R v. By multiplicativity of the relation R, see (6.3), we have xy R uv, and thus w ∈ R(XY ) as required. Conversely, let w ∈ R(XY ), i.e., xy R w for some x ∈ X and y ∈ Y , where |w| = |x| + |y|. Thus, w = uv such that |u| = |x| and |v| = |y|. By simplifiability (6.2) of R, it holds x R u and y R v. Hence, w = uv ∈ R(X)R(Y ). The second claim follows from R(X)n = R(X n ), for all n ≥ 1, by induction on n. Also, R(X )0 = ε = R({ε }) = R(X 0 ). Definition 6.3.2 Let R and S be similarity relations over the alphabet Σ. A subset X ⊆ Σ∗ is called an (R, S)-code if for all n, m ≥ 1 and x1 x2 , . . . , xm , y1 , y2 , . . . , yn ∈ X , the following condition holds x1 x2 · · · xm R y1 y2 · · · yn =⇒ n = m and xi S yi for i = 1, 2, . . . , m.

(6.7)

The relations R and S are called the alteration and fidelity relations, respectively. The alteration relation R may be seen as describing errors or differences in a coded message, and the fidelity relation S can be thought of as describing how well messages can be decoded. If S = ι , the identity relation, then an (R, S)-code is called an R-code In an R-code X different elements are necessarily pairwise non-similar. An (R, R)-code is called a weak R-code. An (ι , ι )-code is simply a code. Indeed, the definition coincides with the usual definition of a variable length code. Figure 6.3 illustrates the following results. Theorem 6.3.3 (i) Let R1 , R2 and S be similarity relations on Σ∗ with R1 ⊆ R2 . Then each (R2 , S)-code is an (R1 , S)-code. (ii) Let R, S1 and S2 be similarity relations on Σ∗ with S1 ⊆ S2 . Then each (R, S1 )code is an (R, S2 )-code. Proof For (i), let X is an (R2 , S)-code, and let x R1 y for some x = x1 x2 · · · xm and y = y1 y2 · · · yn , where xi , yi ∈ X for each i. Now, also x R2 y because R1 ⊆ R2 . Since X is an (R2 , S)-code, m = n and xi S yi holds for all i = 1, 2, . . . , n. This proves (i). For (ii), suppose that X is an (R, S1 )-code and let x R y for some x = x1 x2 · · · xm and y = y1 y2 · · · yn where xi , yi ∈ X for each i. Then m = n and xi S1 yi for all i = 1, 2, . . . , m. Because S1 ⊆ S2 , this implies that xi S2 yi for all i = 1, 2, . . . , n proving the second claim. When we consider unions and intersections of similarity relations the previous result implies the following corollary. Corollary 6.3.4 Let X be an (R1 , S1 )-code and let R2 and S2 be two similarity relations on Σ∗ . Then X is an (R1 ∩ R2 , S1 ∪ S2 )-code.

V. Halava, T. Harju and T. K¨arki



S2

R1

S1 ⊆

R2 ⊆



Ω



Ω



184

ι

ι

Figure 6.3 An (R2 , S1 )-code is also an (R1 , S1 )-code, an (R2 , S2 )-code and an (ι , ι )code.

For sets that are both (R, S1 )-codes and (R, S2 )-codes, the coding property can also be preserved when the fidelity relation is restricted to the intersection of S1 and S2 relations. Corollary 6.3.5 Let X be both an (R, S1 )-code and an (R, S2 )-code. Then it is an (R, S1 ∩ S2 )-code. Proof Again, let x R y for some x = x1 x2 · · · xm and y = y1 y2 · · · yn , where xi , yi ∈ X for all i. Then m = n and xi S j yi for j = 1, 2 and for all i = 1, 2, . . . , n. Thus, xi (S1 ∩ S2 ) yi for all i = 1, 2, . . . , n, and, consequently, X is an (R, S1 ∩ S2 )-code. The next theorem characterises general (R, S)-codes in terms of weak R-codes. Theorem 6.3.6 A subset X ⊆ Σ∗ is an (R, S)-code if, and only if, X is an (R, R)-code and RX ⊆ SX for the restrictions of R and S on X. Proof Suppose first that X is an (R, S)-code. By Theorem 6.3.3(ii), X is an (R, Ω)code, where Ω is the universal relation. Therefore if x1 x2 · · · xm R y1 y2 · · · yn with xi , y j ∈ X, then m = n and |xi | = |yi | for all i, and by the simplifiability property of the similarity relations, xi R yi for all i. Hence, X is an (R, R)-code. By choosing n = 1 = m in the definition of an (R, S)-code we see that RX ⊆ SX . Conversely, let X be an (R, R)-code such that RX ⊆ SX , and consider the words x = x1 x2 · · · xm and y = y1 y2 · · · yn ∈ X with x R y, and xi R yi for all i and j. Hence, m = n and also xi S yi for all i = 1, 2, . . . , n by the assumption RX ⊆ SX . As a corollary to Theorem 6.3.6, we show that the (R, S)-codes are always codes in the usual sense. Corollary 6.3.7 Every (R, S)-code is a code. Proof An (R, S)-code is an (ι , S)-code by Theorem 6.3.3(i). Thus, it is an (ι , ι )code by Theorem 6.3.6. We now introduce a modification of the Sardinas–Patterson theorem.

Similarity relations on words

185

Theorem 6.3.8 (Modified Sardinas–Patterson) Let R be a similarity relation on Σ∗ and let X ⊆ Σ+ . Set U1 = R(X )−1 X \ {ε }, and define Un+1 = R(X )−1Un ∪ R(Un )−1 X for n ≥ 1. The set X is a weak R-code if, and only if, ε ∈ / Un for all n. We need the following lemma. Lemma 6.3.9 Let X ⊆ Σ+ . For all n and k with 1 ≤ k ≤ n, we have ε ∈ Un if, and only if, there exist u ∈ Uk and i, j ≥ 0 such that uX i ∩ R(X j ) = 0/ and i + j + k = n.

(6.8)

Proof We prove the statement by descending induction on k. Assume first that k = n. If ε ∈ Un , then the condition (6.8) is satisfied with u = ε and i = j = 0. Conversely, / Thus, u = ε and consequently ε ∈ Un . if (6.8) holds, then i = j = 0 and {u}∩{ε } = 0. Now, let n > k ≥ 1 and suppose that the claim holds for n, n − 1, . . . , k + 1. If ε ∈ Un , then by the induction hypothesis, there exist a word u ∈ Uk+1 and integers i, j ≥ 0 such that uX i ∩ R(X j ) = 0/ and i + j + (k + 1) = n. Thus, there exist words x1 , x2 , . . . xi ,y1 , y2 , . . . , y j ∈ X such that y1 y2 · · · y j R ux1 x2 · · · xi . Since u ∈ Uk+1 , there are two cases (1) or (2). (1) There exists y ∈ R(X) with yu ∈ Uk . In this case, y R y for some y ∈ X and, by the multiplicativity of R, y y1 y2 · · · y j R yux1 x2 · · · xi . Consequently, there exists a word yu ∈ Uk such that yuX i ∩ R(X j+1 ) = 0/ and i + ( j + 1) + k = n. (2) There exists v ∈ R(Uk ) with vu ∈ X . Now, there exists v ∈ Uk such that v R v and by the multiplicativity and symmetry of R, vux1 x2 · · · xi R v y1 y2 · · · y j . Hence, there is a word v ∈ Uk such that v X j ∩ R(X i+1 ) = 0/ and j + (i + 1) + k = n as required. Conversely, assume that there exist a word u ∈ Uk and integers i, j ≥ 0 such that uX i ∩R(X j ) = 0/ and i+ j + k = n. Then y1 y2 · · · y j R ux1 x2 · · · xi for some x1 , x2 , . . . , xi , y1 , y2 , . . . , y j ∈ X. If j = 0, then i = 0, k = n and we are in the case considered in the beginning of the proof. Let us assume that j > 0. There are two cases to consider. Case 1. Assume that |u| ≥ |y1 |. By the simplifiability property of R, we may write u = y1 v, where y1 R y1 and v ∈ Σ∗ . Then v ∈ Uk+1 and y2 · · · y j R vx1 x2 · · · xi . Thus, vX i ∩ R(X j−1 ) = 0/ and i + ( j − 1) + (k + 1) = n. By the induction hypothesis, ε ∈ Un . Case 2. Assume that |u| < |y1 |. We write y1 = u v, where u R u and v ∈ Σ+ . Then, by the symmetry of R, v ∈ Uk+1 and x1 x2 · · · xi R vy2 · · · y j . Thus, vX j−1 ∩ R(X i ) = 0/ and ( j − 1) + i + (k + 1) = n. Again ε ∈ Un by the induction hypothesis. We shall now to prove the modified Sardinas–Patterson theorem. Proof of Theorem 6.3.8. If X is not a weak R-code, then there are positive integers m and n and words x1 , x2 , . . . , xm , y1 , y2 , . . . , yn ∈ X such that x1 x 2 · · · x m R y 1 y 2 · · · y n

and (x1 , y1 ) ∈ / R.

186

V. Halava, T. Harju and T. K¨arki

By the simplifiability property of R, |x1 | = |y1 |. By symmetry, we may assume that |x1 | < |y1 |. Hence y1 = x1 u for some u ∈ Σ+ with x1 R x1 . Also x2 · · · xm R uy2 · · · yn . / By Lemma 6.3.9, ε ∈ Um+n−1 . Thus, u ∈ U1 and uX n−1 ∩ R(X m−1 ) = 0. Conversely, if ε ∈ Un , then choose k = 1 in Lemma 6.3.9. Therefore, there exist / Hence a word u ∈ U1 and integers i, j ≥ 0 such that i+ j = n − 1 and uX i ∩R(X j ) = 0. there exist words x1 , x2 , . . . , xi , y1 , y2 , . . . , y j ∈ X such that y1 y2 · · · y j R ux1 x2 · · · xi . Since u ∈ U1 , necessarily x = yu for some y ∈ R(X ) and x ∈ X . Furthermore, |x| = |y|, since u = ε by the definition of U1 . Since y ∈ R(X ), there exists y ∈ X such that y R y. By the multiplicativity property, y y1 · · · y j R yux1 · · · xi , which therefore gives / R. This means that X y y1 · · · y j R xx1 · · · xi on X + . Since |y| = |y | = |x|, also (y , x) ∈ is not a weak R-code. Corollary 6.3.10 Given a similarity relation R and a finite set X it is decidable whether or not X is a weak R-code. Proof Note that if X is finite, there exist only finitely many different sets Un . Indeed, the elements of Un are suffixes of words in X , and hence the lengths of the elements are less than max{|x| | x ∈ X }. Consequently the sequence of the sets Ui is ultimately periodic, i.e., U j = U j+1 for some index j, and therefore it can be effectively determined whether a finite set of words is a relational code or not.

6.4 Relational periods We consider in this section variations of the theorem of Fine and Wilf; see Theorem 6.2.1. Halava et al. (2008a) considered three types of relational period: global, local and external. Of these, global and local cases generalise the corresponding concepts of partial words. We shall concentrate mostly on these two types of relational periods. In Section 6.4.2 we prove interaction theorems for relational periods. Different interaction types are treated in separate subsections. Typically, we consider words with one ‘pure’ period and one relational period. The question is whether these two periods induce a third period for long enough words. A summary of the exact length bounds in different interaction cases is given at the end of the section. So called extremal relational Fine and Wilf words are discussed in Section 6.4.2. These are words of maximal length which do not express period interaction behaviour.

6.4.1 Types of relational periods Recall that an integer p ≥ 1 is a period of a word x = x1 x2 · · · xn ∈ Σ∗ (where each xi ∈ Σ) if, for all positions i and j with i ≡ j (mod p), we have xi = x j . The minimal period of x is denoted by π (x). In the sequel these periods are called pure as distinction from relational periods defined as follows. Definition 6.4.1 Let R be a similarity relation on an alphabet Σ. For a word x = x1 x2 · · · xn , where xi ∈ Σ, an integer p ≥ 1 is

Similarity relations on words

187

(i) a global R-period of x if, for all 1 ≤ i, j ≤ n, i≡ j

(mod p) =⇒ xi R x j ;

(ii) an external R-period of x if there exists a word y = y1 y2 · · · y p such that, for all 1 ≤ i ≤ n and 1 ≤ j ≤ p, i≡ j

(mod p) =⇒ xi R y j

(in this case, the word y is called an external word of x); (iii) a local R-period of x if, for all 1 ≤ i ≤ n − p, it holds xi R xi+p . These definitions generalise naturally to infinite words. For a word x, the minimal global, local and external R-periods are denoted by πR,g (x), πR,l (x), and πR,e (x), respectively. In the sequel, we may omit the subscript R or the argument x if these are clear from the context. Similarly, if R is understood from the context, then we talk about global and local periods. Example 6.4.2 Let Σ = {a, b, c, d} and choose R = $(a, b), (b, c), (c, d), (d, a)%. Hence, the graph of R forms a cycle. Consider the word x = babbbcbd ∈ Σ∗ . The minimal pure period of x is π (x) = |x| = 8. In this case πR,l (x) = 2 for the local / R.) The external period of x is πR,e (x) = 3. R-period. (Note that (x7 , x8 ) = (b, d) ∈ Indeed, for the external word we can choose y = bab. Neither 1 nor 2 can do for the external R-period, because in both cases there would be a letter related to every letter of Σ. Finally, the global period of x equals πR,g (x) = 6. Indeed, since (b, d) ∈ / R, we have πR,g (x) > 5. But πR,g (x) = 6 will do because ba R bd. Hence, for the word x, we obtain

π = 8 > πg = 6 > πe = 3 > πl = 2. As another example of relational periods we consider periods of partial words. Example 6.4.3 In (Berstel and Boasson, 1999) two types of period were defined for partial words. A partial word w has a (partial) period p if, for all i, j ∈ D(w), i≡ j

(mod p) =⇒ w(i) = w( j).

A partial word w has a local (partial) period p if i, i + p ∈ D(w) =⇒ w(i) = w(i + p). Applying the similarity relation R↑ = ${(, a) | a ∈ Σ}%, we see that each period of a partial word w corresponds to the global R↑ -period of the companion w . Similarly, the local partial period of w corresponds to the local R↑ -period. On the other hand, the external period is not meaningful for partial words. Namely, we always have πe (w) = 1 after choosing y =  as the external word, since |w| is compatible with every partial word w. In the next theorem we show how different types of period are related to each other.

188

V. Halava, T. Harju and T. K¨arki

Theorem 6.4.4 Let x ∈ Σ∗ be a word. (i) Every pure period of x is a relational (global, external and local) R-period for any similarity relation R on Σ. (ii) Every global R-period of x is an external R-period and a local R-period of x. Hence π (x) ≥ πg (x) ≥ max(πe (x), πl (x)). Proof Let R be a similarity relation. Hence ι ⊆ R, and (i) follows from this. If x = x1 x2 · · · xn has a global R-period p, then we can choose x1 x2 · · · x p as an external word of x. Hence a global R-period is an external R-period. Clearly, a global period satisfies the definition of a local period. For the minimal periods, these considerations imply the inequalities of the statement. Note that an external period need not be a local period and a local period need not be an external period. For instance, in Example 6.4.2 the minimal local R-period πl (x) is not an external R-period, and πe (x) is not a local R-period. There it was the case πe (x) > πl (x). Next we give an example where πl (x) > πe (x). Example 6.4.5 Let Σ = {a, b, c, d}. Let R = $(a, b), (b, c), (c, d), (d, a)% and choose x = adcbccccbd. We determine first the minimal local R-period of x. Since (x9 , x10 ) = / R and 3 is a local R-period, πl (x) = 3. Since x1 = a, x4 = b, x7 = c (x4 , x2 ) = (b, d) ∈ and x10 = d, there does not exist any external word y = y1 y2 y3 of length 3. Otherwise, y1 would be compatible with all letters of Σ. Hence, 3 is not an external R-period. For the same reason, 1 is not an external R-period, but by choosing y = bc, we see that πe = 2. As noted above, 2 is not a local period. Since (a, c) ∈ / R, the minimal global R-period satisfies πg > 7. Actually, πg = 8 since a R b. Clearly, π = 10, and therefore

π = 10 > πg = 8 > πl = 3 > πe = 2. We state as an exercise the result that if the similarity relation R is an equivalence relation then the definitions of the relational periods coincide. In particular, we have the following result. Theorem 6.4.6 If R is a transitive similarity relation on Σ then

πg (x) = πe (x) = πl (x). Proof See Exercise 6.6.4. If R is not transitive, local R-periods differ from global and relational periods by the following property. Lemma 6.4.7 Let x ∈ Σ∗ be a word and R a similarity relation on Σ. If p is a global R-period (an external R-period, respectively) of x, then any multiple of p is a global R-period (an external R-period, respectively) of x. A multiple of a local R-period of x need not be a local R-period. Proof If p is a global R-period of x and i ≡ j (mod kp) for a non-negative integer

Similarity relations on words

189

k then i ≡ j (mod p) and, by the assumption, xi R x j . Hence kp is a global R-period. The proof is similar for the external R-periods. Consider then x = abc ∈ {a, b, c}∗, and choose R = $(a, b), (b, c)%. The word x has 1 as a local R-period, but 2 is not a local R-period. This proves the last claim.

6.4.2 Variants of the theorem of Fine and Wilf The theorem of Fine and Wilf, Theorem 6.2.1, is one of the cornerstones in combinatorics on words. In this theorem the derived period is the greatest common divisor of the original periods. This phenomenon was also the starting point of the study of partial words in the seminal paper (Berstel and Boasson, 1999). They proved the following variant for partial words with one hole. Theorem 6.4.8 Let Σ be an alphabet with a hole symbol. (i) Let w ∈ Σ∗ be a partial word of length n with at most one hole. If w has local periods p and q such that n ≥ p + q then w is purely gcd(p, q)-periodic. (ii) The bound p + q on the length of the word is sharp, i.e., there are partial words with one hole with local periods p and q and n = p + q − 1 such that w is not purely gcd(p, q)-periodic. Example 6.4.9 For the sharpness in (ii), consider the word wk+4 = abaak of length k + 4 with k ≥ 1. Then wk+4 has local periods 2 and |wk+4 | − 1 = k + 3. However, if k is even, gcd(k + 3, 2) = 1 is not a period of wk+4 . Generalisations of Theorem 6.4.8 for several holes has been studied in (BlanchetSadri, 2004b); see also (Blanchet-Sadri and Hegstrom, 2002), where it was shown that local partial periods p and q force a sufficiently long word to have a (global) partial period gcd(p, q) when certain unavoidable special cases are excluded. The bound on the length depends on the number of holes in the word. On the other hand, in (Shur and Gamzova, 2004) there are bounds for the length of a word with k holes such that (global) partial periods p and q imply a (global) partial period gcd(p, q). These results indicate that finding simple formulations for the interaction of periods in general relational periods is not feasible except for equivalence relations. The following example shows that without any additional assumption we cannot find a general bound for the interaction of relational periods. Example 6.4.10 Let R = $(a, b), (b, c)% and let i1 , i2 , . . . be a sequence of integers. Define an infinite word w as follows: w = w1 w2 w3 · · · = acb6i1 −2 acb6i2 −2 · · ·

with wi ∈ {a, b, c}.

The word w has global R-periods 2 and 3, since w1 , w3 , w5 , . . . ∈ {a, b} and w2 , w4 , w6 , . . . ∈ {b, c},

190

V. Halava, T. Harju and T. K¨arki

and w1 , w4 , w7 , . . . ∈ {a, b}, w2 , w5 , w8 , . . . ∈ {b, c} and w3 , w6 , w9 , . . . ∈ {b}. However, gcd(2, 3) = 1 is not a global R-period of w. Example 6.4.11 For the ultimately periodic word w = acbω = acbb · · ·, all integers p ≥ 2 are local and global R-periods with respect to R = $(a, b)(b, c)%, but 1 is not. Nonetheless, some interaction results can be obtained. If the relation R is an equivalence relation, the situation reduces to Theorem 6.2.1. Recall that the global, local and external periods coincide for the equivalence relations; see Exercise 6.6.5. Theorem 6.4.12 Let R be an equivalence relation. If a word x has R-periods p and q and |x| ≥ p + q − gcd(p, q), then gcd(p, q) is an R-period of x. The following example shows that there are infinite words having a pure period q and a local R-period p but that do not have a local R-period gcd(p, q). Example 6.4.13 Let R = $(a, b), (b, c)%. Every non-transitive similarity relation must have a subrelation isomorphic to R such that a and c are not compatible. Consider an infinite periodic word x = (bbcab)ω . Clearly, w has a pure period q = 5. It also has a local R-period p = 3, since the distance of the letters a and c in x is / R, gcd(p, q) = 1 is neither a local R-period different from 3. Since (x3 , x4 ) = (a, c) ∈ nor a global R-period. Bounds on interaction We concentrate on global and local rational periods. For the external period, we refer to (Halava et al., 2008a). In Example 6.4.13 the local relational period p is too weak to imply the desired interaction result. However, in the sequel we obtain several results that depend on the type of the relational period p. In the following we adopt the short-hand notation t ∈ {g, l} for the types of relational periods: global and local, respectively. Definition 6.4.14 Let P ≥ 2 and Q ≥ 3 be positive integers, and let t1 and t2 be two types of relational periods. A positive integer B = Bt1 ,t2 (P, Q) is called the bound of t1 -t2 interaction for P and Q, if it satisfies (i) and (ii). (i) The bound B is sufficient, i.e., if R is a similarity relation and w is a word with |w| ≥ B that has a pure period Q and a t1 -type R-period P, then w has a t2 -type R-period gcd(P, Q). (ii) The bound is strict, i.e., there exist a similarity relation R and a word w with

Similarity relations on words

191

|w| = B − 1 having a pure period Q and a t1 -type R-period P such that gcd(P, Q) is not a t2 -type R-period of w. In the definition we excluded the special cases where P = 1 or Q ≤ 2. Indeed, if Q ≤ 2 is a pure period, then the word contains at most two letters, and this case is covered by Theorem 6.4.12. By the following lemma, it suffices to consider cases where gcd(P, Q) = 1. Lemma 6.4.15 Let P and Q be positive integers with gcd(P, Q) = d, and let B = Bt1 ,t2 (P/d, Q/d). Then Bd = Bt1 ,t2 (P, Q). Proof Let q = Q/d and p = P/d. Hence gcd(p, q) = 1. Suppose that w = w1 w2 · · · wn has a pure period Q and a relational t1 -type period P. Assume that n ≥ dB. Consider the word w(i) = wi wi+d · · · wi+ki d

where 1 ≤ i ≤ d and ki = (n − i)/d.

Then w(i) has a pure period q and a t1 -type relational period p. Since |w(i) | ≥ B and 1 = gcd(p, q), w(i) has a t2 -type period 1 by the definition of B = Bt1 ,t2 (p, q). Since this is true for all i = 1, 2, . . . , d, we conclude that d is a t2 -type relational period of w. In order to prove that the bound Bd is strict, we give an example of a word u of length Bd − 1 such that it has a period Q and an R-period P but no R-period d. By the definition of B, there exists a word v = v1 v2 · · · vB−1 that has a pure period q and a t1 -type period p, but gcd(p, q) = 1 is not a t2 -type relational period of v. Let a be a letter and define a word u as follows: u = ad−1 v1 ad−1 v2 · · · ad−1 vB−1 ad−1 . Now u has a pure period Q = qd and a t1 -type period P = pd, but by the property of v, gcd(P, Q) = d is not a t2 -type R-period of u. Global–global interaction We consider the case where one pure period and one global relational period imply a derived global relational period. The bounds of global–global interaction Bg,g (p, q) for coprime integers p and q are given in Table 6.3. Theorem 6.4.16 Let p and q be positive integers with gcd(p, q) = 1. The bound of the global–global interaction for p and q is Bg,g (p, q) given in Table 6.3. We divide the proof into two parts. In the proof [n]q denotes the least positive residue of n (mod q). Lemma 6.4.17 The bound Bg,g (p, q) defined in Theorem 6.4.16 is sufficient. Proof Denote B = Bg,g (p, g). Let R be a similarity relation, and w be a word with a pure period q and a global R-period p such that |w| ≥ B. We show that gcd(p, q) = 1 is a global R-period of w. Since w has pure period q, it has at most q different letters.

192

V. Halava, T. Harju and T. K¨arki Table 6.3 Table of the bounds Bg,g (p, q), where gcd(p, q) = 1. Bg,g (p, q)

pq

p, q odd

(p + 1)q/2

q + (q − 1)p/2

p odd, q even

(p + 1)q/2

(p + 1)q/2

p even, q odd

q + (q − 1)p/2

q + (q − 1)p/2

Hence, it suffices to show that a letter wn in an arbitrary position 1 ≤ n ≤ q is Rcompatible with the other letters of w. For a position 1 ≤ n ≤ q, let

τ (n) = max{m | 1 ≤ m ≤ |w|, m ≡ n (mod q)}. Note that if w has exactly q different letters, then τ (n) is the last occurrence of the letter wn in w. Since w has the global relational period p, it follows that wn is related to all letters in the positions S(n) = {n + ip | i = 0, 1, . . . , (|w| − n)/p } and T (n) = {τ (n) − ip | i = 1, 2, . . . , (τ (n) − 1)/p } . We prove that S(n) ∪ T (n) contains a complete residue system modulo q. It follows then that each letter wn is R-compatible with all letters wi for i = 1, 2, . . . , q. Then, by the q-periodicity, 1 is an external R-period of w. Assume first that τ (n) ≡ n (mod p), and hence also τ (n) ≡ n (mod pq), since τ (n) ≡ n (mod q) and gcd(p, q) = 1. Now, τ (n) ≥ n + pq and hence also |w| ≥ n + pq. Therefore S(n) contains the complete residue system {n + ip | i = 0, 1, . . . , q − 1} modulo q. / Suppose then that τ (n) ≡ n (mod p). In this case, S(n) ∩ T (n) = 0. Let M(n) = τ (n) − (q − (B − n)/p − 1) p. Claim 1. One has M(n) > 0. Since B − n < pq, we have q − (B − n)/p − 1 ≥ q − (q − 1) − 1 = 0. For the proof of Claim 1, we observe first that, since B ≥ q,  if 1 ≤ n ≤ [B]q ; B − [B]q + n, τ (n) ≥ B − [B]q − q + n, if [B]q + 1 ≤ n ≤ q.

Similarity relations on words

193

We divide our considerations of Claim 1 into two cases according to the form of B. Case 1. Let B = (p + 1)q/2. In this case p is odd, and hence [B]q = q, and M(n) > B − [B]q + n − (q − 1 − ((B − n)/p − 1)) p = B − q + n − qp + B − n = 2B − (p + 1)q = (p + 1)q − (p + 1)q = 0. Case 2. Let B = q + (q − 1)p/2. We note that (B − n)/p = (q − 1)/2 + (q − n)/p ≥ (q − 1)/2, since q is odd and q ≥ n. If 1 ≤ n ≤ [B]q }, then M(n) ≥ B − [B]q + n − (q − (q − 1)/2 − 1) p = q + (q − 1)p/2 − [B]q + n − qp + (q − 1)p/2 + p = q − [B]q + n ≥ n > 0. On the other hand, if [B]q + 1 ≤ n ≤ q, then M(n) ≥ B − [B]q − q + n − (q − (q − 1)/2 − 1) p = q + (q − 1)p/2 − [B]q − q + n − qp + (q − 1)p/2 + p = n − [B]q > 0. Claim 2. One has |S(n) ∪ T (n)| ≥ q. By Claim 1 we have that τ (n) − 1 ≥ (q − (B − n)/p − 1)p, since M(n) is an integer. Therefore also (τ (n) − 1)/p ≥ q − (B − n)/p − 1. Now, since |w| ≥ B, |S(n) ∪ T (n)| = 1 + (|w| − n)/p + (τ (n) − 1)/p ≥ q.

(6.9)

This shows Claim 2. To conclude the proof of the lemma, we observe that, by Claim 2, there exist an integer k ∈ {0, q − 2} and sets S (n) = {n + ip | i = 0, 1, . . . , k} and T  (n) = {τ (n) − jp | j = 1, 2, . . . , q − k − 1} such that S (n) ∪ T  (n) is a complete residue system modulo q. Indeed, note that the elements of S (n) are pairwise incongruent modulo q, since gcd(p, q) = 1 and k < q. The same holds for T  (n). Assume next that for some 0 ≤ i ≤ k and 1 ≤ j ≤ q − k − 1, n + ip ≡ τ (n) − jp (mod q).

(6.10)

This holds if, and only if, (i + j)p ≡ τ (n) − n ≡ 0 (mod q), where the second congruence follows from the definition of τ (n). Since gcd(p, q) = 1, we conclude that (6.10) holds if, and only if, i + j ≡ 0 (mod q). However, this is a contradiction,

194

V. Halava, T. Harju and T. K¨arki

since 0 < i + j ≤ k + (q − k − 1) = q − 1 < q. Therefore, the set S (n) ∪ T  (n) contains exactly (k + 1) + (q − k − 1) = q pairwise incongruent elements, which proves the lemma. We prove then that the bound is strict. Lemma 6.4.18 The bound Bg,g (p, q) defined in Theorem 6.4.16 is strict. Proof We follow the notation of the previous proof. In particular, B = Bg,g (p, q). Fix n = [B]q , and define the critical positions m = m(p, q) ∈ {1, 2, . . ., q} according to Table 6.4. We show that there exists a word v of length B − 1 with a pure period q and a global R-period p such that the letter in the critical position is not related to the letter in position n. In the sequel we denote critical positions succinctly by m. Table 6.4 Table of critical positions m = m(p, q). m(p, q)

pq

p, q odd

(q − p)/2

q

p odd, q even

q/2

q/2

p even, q odd

q

q

Consider solutions (i, j) in non-negative integers of the equation m + iq ≡ n + jq (mod p).

(6.11)

By a minimal solution we mean a solution where max(n + iq, m + jq) is as small as possible. Note that if i > j in a solution, then m + (i − j)q ≡ n (mod p) is a smaller solution. Similarly, if j > i, then m ≡ n + ( j − i)q (mod p) is a smaller solution. Thus, in a minimal solution either i = 0 or j = 0. Also, such a solution is unique. Namely, let (i, j) and (i , j ) be distinct minimal solutions, say with j = 0, i = 0 and m + iq = n + j q. Then m ≡ n (mod q); a contradiction, since 1 ≤ m ≤ q and n = m as can be verified from Tables 6.3 and 6.4. Since gcd(p, q) = 1, the sets {m + iq | i = 0, 1, . . . , p − 1} and {n + jq | j = 0, 1, . . . , p − 1} are complete residue systems modulo p. Hence there exist exactly one integer j with 0 ≤ j ≤ p − 1 that satisfies m ≡ n + jq (mod p), and exactly one integer i with 0 ≤ i ≤ p − 1 such that m + iq ≡ n (mod p). Furthermore, for 1 ≤ j ≤ p − 1, we have m ≡ n + jq (mod p) =⇒ m + (p − j)q = m + pq − jq ≡ n (mod p),

Similarity relations on words

195

and 1 ≤ p − j ≤ p − 1. Hence, the minimal solution of (6.11) is either of the form (0, j) or (p − j, 0). We show that regardless of the parity of p and q and which of them is greater, the minimal solution is i = 0 and j = (B − n)/q. Since B < pq, it is 1 ≤ (B − n)/q ≤ p − 1 in all cases. Consider first those cases of Table 6.3 where B = (p + 1)q/2 and, consequently, n = [B]q = q. Let j = (B − n)/q. Case 1. Let p and q be both odd and p < q. In Table 6.4, we have m = (q − p)/2. Now n + jq = B and, since q is odd, it follows that (n + jq) − m = (p + 1)q/2 − (q − p)/2 = (q + 1)p/2 ≡ 0 (mod p). Hence, (0, (B − n)/q) is a solution. Also, jq = B − n = (p + 1)q/2 − q = (p − 1)q/2 and m + (p − j)q = m + pq − (p − 1)q/2 = m + (p + 1)q/2 = m + B > B. Hence, in the solution (p − (B − n)/q, 0) it is the case max(m + iq, n + jq) > B, whereas in the solution (0, (B−n)/q) we have max(m+iq, n+ jq) = B. Thus, (0, (B− n)/q) is the minimal solution. Case 2. Suppose that p is odd and q is even. By the parity of q, m = q/2 is an integer and (n + jq) − m = (p + 1)q/2 − q/2 = pq/2 ≡ 0 (mod p). Hence, (0, (B − n)/2) is a solution. As in Case 1, we have m + (p − j)q = m + B > B, and therefore (0, (B − n)/q) is the minimal solution also in this case. Consider then the cases where B = q + (q − 1)p/2. According to Table 6.3 and Table 6.4, we have m = q and q is odd. Clearly, (i, j) = (0, (B − n)/q) is a solution, since (n + jq) − m = q + (q − 1)p/2 − q = (q − 1)p/2 ≡ 0 (mod p). As in the above, m + (p − j)q = m + pq − B + n. By substituting m and B we get m + (p − j)q = q + (2 · (q − 1)p/2 + p)− (q + (q − 1)p/2)+ n = B + (p − q) + n. Case 3. Assume that p > q. Then p − q is positive and m + (p − j)q > B. Thus, (0, (B − n)/q) is the smallest solution. Case 4. Assume that p is even, q is odd and p < q. Then n = q − p/2, and m + (p − j)q = B + (p − q) + q − p/2 = B + p/2 > B. Hence (0, (B − n)/q) is the smallest solution also in this final case. Define now a word w over the {a, b, c} by the rule w=

p+1 2 B−n q

(bm−1 abq−m−1c) (bn−1 cbq−n−1a)

bn−1 c

if B = (p + 1)q/2, if B = q + (q − 1)p/2,

(6.12)

where m = m(p, q) is given by Table 6.4 and n = [B]q . Also, let w = vc and choose R = $(a, b), (b, c)%. Now q is a pure period of v and p is a global R-period. Namely,

196

V. Halava, T. Harju and T. K¨arki

b is related to each letter in v, and the first occasion where the distance between the letters a and c in w is a multiple of p is the case where a is in the position m and c is in the position B. This does not happen in v, since v is one letter shorter. Since (a, c) ∈ / R, 1 is not a global R-period of v. Example 6.4.19 For p = 5 and q = 7, the bound for global–global interaction is Bg (p, q) = (p + 1)q/2 = 21. Hence, any word w with a global R-period 5 and a pure period 7 and no relational period gcd(p, q) satisfies |w| ≤ 20. The situation is illustrated in Figure 6.4. The table with 20 entries represents a word w = (w1 w2 · · · w7 )2 w1 w2 · · · w6 that is a rational power of the word w1 w2 · · · wq , written into p columns. Letters in each column are R-related, since p is a global period. The graph with vertices w1 , w2 , . . . , wq represents all necessary relations in the word. If two vertices occur in the same column of the table, then there is an edge between those vertices. We notice that the graph is almost complete, only the edges {w1 , w7 } and {w6 , w7 } are missing. Hence, we conclude that w = (abbbbbc)2abbbbb and w = (abbbbac)2abbbba with relation R = $(a, b), (b, c)% are words such that they have global R-period 5 and pure period 7, but 1 is not a global R-period. Note that w satisfies the formula given by (6.12). Namely, m(5, 7) = 1 by Table 6.4. Moreover, from the figure we see that increasing the length of the word w by 1 is not possible. The next letter is indicated in the table by \w7. We notice that increasing the length causes w1 , w6 and w7 to occur in the same column. Hence, the graph would become perfect (dashed lines) implying that all the letters would be related to each other and, therefore, 1 would be a global R-period.

w1 w6 w4 w2 w \7

w2 w7 w5 w3

w3 w1 w6 w4

w4 w2 w7 w5

w5 w3 w1 w6

w3

w2

w4 w1 w5 w6

w7

Figure 6.4 Global–global interaction for p = 5 and q = 7.

Global–local interaction Instead of attaining a global period gcd(p, q) we loosen our requirements and consider the case where gcd(p, q) becomes a local relational period. Theorem 6.4.20

Let p and q be positive integers with gcd(p, q) = 1. Let k be the

197

Similarity relations on words

smallest integer satisfying kp ≡ ±1 (mod q). The bound of global–local interaction for p and q is  q + kp − 1, if q ≡ 2 (mod p) and kp ≡ +1 (mod q); Bg,l (p, q) = q + kp, otherwise. We again divide the proof into two parts. Lemma 6.4.21 The bound Bg,l (p, q) defined in Theorem 6.4.20 is sufficient. Proof Denote B = Bg,l (p, q), and let w have a pure period q and a global R-period p. We show that 1 is a local R-period of w if |w| ≥ B. As in the proof of Lemma 6.4.17 we conclude that there are at most q different letters in w. Hence, w has a local R-period 1 if, and only if, for all n = 1, 2, . . . , q, w[n]q R w[n+1]q .

(6.13)

We show that, for each 1 ≤ n ≤ q, there are non-negative in and jn such that [n]q + in q ≡ [n + 1]q + jn q

(mod p)

(6.14)

and both sides of the congruence belong to {1, 2, . . ., B}. From this it follows together with the global period p that (6.13) holds if |w| ≥ B. Case 1. Assume first that kp ≡ 1 (mod q). For each 1 ≤ n ≤ q − 1, choose jn = (kp − 1)/q and in = 0. Here jn is an integer by the definition of k. Then (n + 1) + jnq = n + 1 + kp − 1 = n + kp ≡ n

(mod p).

Clearly, both sides of the congruence belong to {1, 2, . . . , B}. Also, let iq = 0 and jq = (kp − 1)/q + 1. Now 1 + jqq = 1 + kp − 1 + q = q + kp ≡ q

(mod p).

The left-hand side is less than or equal to B only if q ≡ 2 (mod p). However, in the special case where 1 ≡ q − 1 (mod p), we can choose iq = (kp − 1)/q and jq = 0 so that q + iqq = q + kp − 1 ≡ q − 1 ≡ 1

(mod p).

Now the left-hand side is exactly B = q + kp − 1. Case 2. Assume that kp ≡ −1 (mod q). For 1 ≤ n ≤ q − 1, let in = (kp + 1)/q and jn = 0. Note that in is an integer by the definition of k. Hence, n + inq = n + kp + 1 ≡ n + 1 (mod p). Choose furthermore iq = (kp + 1)/q − 1 and jq = 0. Then q + iqq = q + kp + 1 − q ≡ 1

(mod p).

Note that both sides of both congruences belong to the set {1, 2, . . . , B}. Hence, we have shown that (6.13) is satisfied for all n = 1, 2, . . . , q, if |w| ≥ B. Therefore, w has gcd(p, q) = 1 as its local relational period.

198

V. Halava, T. Harju and T. K¨arki

Lemma 6.4.22 The bound Bg,l (p, q) defined in Theorem 6.4.20 is strict. Proof Again, let B = Bg,l (p, q), where gcd(p, q) = 1. We show that there is a word w of length B − 1 with a global period p and a pure period q but that w does not have local relational period gcd(p, q) = 1. We show that, at least for one position n with 1 ≤ n ≤ q, there is no solution in , jn of (6.14) where both sides of the equation belong to the set {1, 2, . . . , B − 1}. Without contradicting the assumption that p / R and therefore is a global period of w we may then assume that (w[n]q , w[n+1]q ) ∈ gcd(p, q) = 1 is not a local R-period of w. Let again k be the smallest integer satisfying kp ≡ ±1 (mod q). If k > q/2, then also (q − k)p ≡ −kp ≡ ∓1 (mod q). Now q − k < k, and this is a contradiction. On the other hand, if k = q/2, then, by the definition of k, 0 ≡ qp = 2kp ≡ ±2 (mod q); a contradiction, since q ≥ 3 by the definition of the interaction bound. Hence k < q/2. Next we will consider minimal solutions of (6.14). As in the proof of Lemma 6.4.18, by a minimal solution we mean a solution (in , jn ) where M(in , jn ) = max([n]q + in q, [n + 1]q + jn q) is as small as possible. Recall that the minimal solution is unique, and if 1 ≤ j ≤ p − 1, then the minimal solution has the form (0, j) or (p − j, 0). We divide the considerations into three cases. In each case there exists 1 ≤ n ≤ q such that the minimal solution (in , jn ) of (6.14) satisfies M(in , jn ) ≥ B. Case 1. Assume first that kp ≡ 1 (mod q) and q ≡ 2 (mod p). We consider (6.14) with n = q, i.e., the congruence q + iqq ≡ 1 + jq q (mod p). Note that in the solution (iq , jq ) = (0, (kp − 1)/q + 1), it is 1 + jq q = q + kp = B. Now assume that the solution (p − (kp − 1)/q − 1, 0) is smaller, i.e., q + (p − (kp − 1)/q − 1)q < B. Thus, 0 < q + kp − (q + (p − (kp − 1)/q − 1)q) = 2kp − qp + q − 1 < q − 1,

(6.15)

where the last inequality follows, since by the above k < q/2. Since kp ≡ 1 (mod q), we have 2kp − qp + q − 1 ≡ 1 (mod q).

(6.16)

Combining (6.15) and (6.16), we obtain 2kp − qp + q − 1 = 1. On the other hand, 1 = 2kp − qp + q − 1 ≡ q − 1 (mod p), which contradicts our assumption. Therefore, for the minimal solution, max(q + iqq, 1 + jqq) ≥ B, and thus M(iq , jq ) ≥ B. Define then a rational power in the ternary alphabet {a, b, c} with length B − 1: w = (abq−2c)(B−1)/q . Let R = $(a, b), (b, c)%. By the above, w has a period q and a global R-period p. However, 1 is not a local R-period of w, since a and c are unrelated.

Similarity relations on words

199

Case 2. Assume next that kp ≡ 1 (mod q) and q ≡ 2 (mod p). Consider the congruence (q − 1) + iq−1q ≡ q + jq−1q (mod p). In the solution (iq−1 , jq−1 ) = (0, (kp − 1)/q), we have q + jq−1 q = q + kp − 1 = B. Also, in the solution (p − (kp − 1)/q, 0), q − 1 + (p − (kp − 1)/q)q = q − 1 + qp − kp + 1 > q + kp > B, where the second to last inequality is due to k < q/2. Hence, the minimal solution satisfies max(q − 1 + iq−1q, q + jq−1q) ≥ B, and hence M(iq−1 , jq−1 ) ≥ B. In this case, the rational power w = (bq−2 ac)(B−1)/q and the relation R = $(a, b), (b, c)% together with the above calculations show that the bound B is strict. Case 3. Finally, assume that kp ≡ −1 (mod q). Consider the same congruence as in Case 2. However, note that now B = q + kp. Now (iq−1 , jq−1 ) = ((kp + 1)/q, 0) is one solution, where q − 1 + ((kp + 1)/q)q = q + kp = B. In the other solution (0, p − (kp + 1)/q), we have q + (p − (kp + 1)/q)q = q + (q − k)p + 1 > q + kp + 1 > B by using the fact k < q/2. Hence, the word w = (bq−2 ac)(B−1)/q with the relation R = $(a, b), (b, c)% shows that the bound B is strict also in this case. Theorem 6.4.20 follows from Lemma 6.4.21 and Lemma 6.4.22. Note that the value of k can be calculated easily using an elementary theorem by Fermat and Euler. Namely, the smallest solution k of the equation k p ≡ 1 (mod q) is called the reciprocal of p modulo q and, by the theorem, k = [pϕ (q)−1 ]q , where ϕ is Euler’s totient function.1 Thus k = min(k , q − k ), since (q − k )p ≡ −1 (mod q). Example 6.4.23 Let p = 5 and q = 7. Then k = [5ϕ (7)−1 ]7 = 3 = k and kp = 15 ≡ 1 (mod 7). Since q = 7 ≡ 2 (mod 5), the bound of global–local interaction is equal to Bg,l (p, q) = q + kp − 1 = 21. Since Bg,l (5, 7) = Bg,g (5, 7), a word w of length Bg,l (p, q) − 1 = 20 with local R-period p and pure period q but not having 1 as a local R-period can be represented in a table form exactly as in Figure 6.4. 1

Recall that ϕ (n) is the number of non-negative integers in the set {m ≤ n | gcd(m,n) = 1}.

200

V. Halava, T. Harju and T. K¨arki

Local interactions Despite the negative result in Example 6.4.13 there exist interaction bounds for some integers p and q also in the case where p is a local period. In the following, if no bound B(p, q) of interaction for p and q exists, then we set B(p, q) = ∞. Theorem 6.4.24 Let p and q be positive integers with gcd(p, q) = 1. Then the bound of local–local interaction for p and q is  p + q, if p − 1 ≡ 0 (mod q) or p + 1 ≡ 0 (mod q); Bl,l (p, q) = ∞, otherwise. Proof Let w be a word of length Bl,l = Bl,l (p, q) with a pure period q and a local R-period p. Suppose that gcd(p, q) = 1. Assume first that p + 1 ≡ 0 (mod q). By the periodicity assumption, we then have wi R wi+p = wi−1 for all i = 2, 3, . . . , q, and w1 R w1+p = wq . Since q is a period of w, 1 is a local R-period of w. On the other hand, if we set R = $(a, c), (b, c)%, the word w = (cq−2 ab)(p+q−1)/q has a pure period q and a local R-period p. However, gcd(p, q) = 1 is not a local / R. Note that in order to check that w has a local R-period of w, since (wq−1 , wq ) ∈ period p, it suffices to ensure that the distance from any occurrence of a to any occurrence of b is not p. By the length of w, this holds. Namely, the only position i such that wi ∈ {a, b} and i + p ≤ |w| is the first occurrence of the letter a. We have a = wq−1 R wq−1+p = wq−2 = c. Moreover, let kq = p + 1 for some positive k. Then the only position i such that wi ∈ {a, b} and i − p > 0 is the last occurrence of b. Hence, this position is kq and b = wkq R wkq−p = w1 = c. Assume next that p − 1 ≡ 0 (mod q). Now wi R wi+p = wi+1 for all i = 1, 2, . . . , q. As above, this means that w has a local R-period 1. Our bound is strict, since setting again R = $(a, c), (b, c)%, the word w = (acq−2 b)(p+q−1)/q has a pure period q and a local R-period p. However, (wq , wq+1 ) ∈ / R and 1 is not a local R-period. Again the length of w ensures that a and b do not have to be related. As above, we need to check the first occurrence of a and the last occurrence of b. Assume that p − 1 = kq for some positive k. Then a = w1 R w1+p = w2 = c and b = w(k+1)q R w(k+1)q−p = wq−1 = c. Finally, assume that q does not divide p − 1 nor p + 1. Then i + p ≡ i + 1 (mod q) and i + p ≡ i − 1 (mod q). Thus, if R = $(a, c), (b, c)%, then the infinite word w = (abcq−2 )ω has a pure period q and a local R-period p, but clearly 1 is not a local R-period of w.

Similarity relations on words

201

The next theorem shows that local R-periods are weak when related to global interaction. Theorem 6.4.25 Let p and q be such that gcd(p, q) = 1. The bound Bl,g (p, q) of local–global interaction exists only for q = 3, in which case Bl,g (p, q) = p + 3. Proof Let B = Bl,g (p, q), and consider first the case q = 3. Assume that a word w has a pure period 3 and a local R-period p. If |w| ≥ p + 2, then w1 R w[1+p]3 and w2 R w[2+p]3 .

(6.17)

Since gcd(p, q) = 1, we have w[1+p]3 = w2 and w[2+p]3 = w3 , or w[1+p]3 = w3 and w[2+p]3 = w1 . If |w| ≥ p + 3, then in addition to (6.17), it holds w3 R w[3+p]3 where w[3+p]3 is equal to either w1 or w2 . Hence, all letters are R-related, and 1 is a global R-period of w. On the other hand, v = (abc)(p+2)/3, that has length p + 2, with S = $(a, w[1+p]3 ), (b, w[2+p]3 )% show that the bound Bl,g is strict for q = 3. Suppose then that q ≥ 4, and consider the four-letter alphabet {a, b, c, d}. Choose R = $(a, b), (b, c), (c, d), (d, a)%. Define an infinite word w = (w1 w2 · · · wq )ω as follows: w1 = a, w[1+p]q = b, w[1+2p]q = c and w[1+ip]q = d for i = 3, 4, . . . , q − 1. Now, wi R wi+p for all i = 1, 2, . . . , q, and hence p is a local R-period of w. However, 1 is not a global R-period, since no letter is compatible with all other letters. Hence, B = ∞. Extremal words We shall now investigate words that demonstrate that the bound of global–global interaction is strict as given in (Halava et al., 2007b). The study of these words originates from the standard Fine and Wilf case, where for coprime periods p and q, the non-constant words of maximal length p + q − 2 have very interesting properties. Such words are called extremal Fine and Wilf words. In 1994 the following result was proved in (de Luca and Mignosi, 1994). Theorem 6.4.26 (de Luca and Mignosi) The extremal Fine and Wilf words are palindromes, and the factors of the extremal Fine and Wilf words are exactly the factors of the Sturmian words. Definition 6.4.27 Let p ≥ 2 and q ≥ 3 be integers satisfying gcd(p, q) = 1. A word is an extremal relational Fine and Wilf word if |w| = Bg,g (p, q) − 1 and there exists a similarity relation R such that w has a global R-period p and a pure period q but gcd(p, q) = 1 is not an R-period of w. Let FW (p, q) denote the set of all extremal relational Fine and Wilf words. Denote by Rw the similarity relation with a minimum number of pairs of letters such that w ∈ FW (p, q) has an Rw -period p. We leave it as an exercise to show that Rw is well defined; see Exercise 6.6.8.

202

V. Halava, T. Harju and T. K¨arki

Lemma 6.4.28 The number of different letters N occurring in w ∈ FW (p, q) satisfies 3 ≤ N ≤ q. Proof The inequality N ≤ q follows directly from the requirement of q-periodicity. Suppose then that N = 2, say w is over the binary alphabet {a, b} together with a similarity relation R such that w has global period p and a pure period q and |w| = Bg,g (p, q) − 1. If aRb then the letters are R-similar and gcd(p, q) = 1 is a global Rperiod of w. In (a, b) ∈ / R, then R = ι and the theorem of Fine and Wilf implies that gcd(p, q) = 1 is a period of w, since both Bg,g (p, q) = (p + 1)q/2 and Bg,g (p, q) = q + (q − 1)p/2 of Table 6.3 are greater than p + q − 1. Example 6.4.29 Consider the set FW (3, 7). Then Bg,g (3, 7) = (p + 1)q/2 = 14. Hence |w| = 13 for each w ∈ FW (3, 7). Choose Σ = {a, b, c} and R = $(a, b), (b, c)%. Then u = babbabcbabbab ∈ FW (3, 7). But also, w = babbbbcbabbbb ∈ FW (3, 7), and therefore, even if we restrict our considerations to words having the smallest possible number of different letters, we do not have uniqueness. Also, let Δ = {a, b, c, d} and Rv = $(a, b), (a, c), (a, d), (b, c), (c, d)%. Then v = abcacadabcaca ∈ FW (3, 7). However, we can show that the words in FW (p, q) do share a unique structure in the sense described below. Definition 6.4.30 Let R be a similarity relation on Σ∗ . We say that two letters a and b are R-isomorphic if, for all x ∈ Σ, we have a R x ⇐⇒ b R x. Also, a letter a is said to be relationally universal, or more precisely, R-universal if a R x for all x ∈ Σ. In the sequel we consider words in FW (p, q) that do not have any distinct Risomorphic letters and the number of occurrences of an R-universal letter is minimal. This restriction is justified, since all words in FW (p, q) can be obtained, up to renaming of letters, from the word w described in the next theorem by two operations: (1) changing some symbols to universal symbols and (2) replacing a letter with a new letter Rw -isomorphic to the original one. In this respect, w ∈ FW (p, q) with no distinct Rw -isomorphic letters and with minimal number of occurrences of an Rw universal letter can be called minimal. As above, we use the notation [n]q for the least positive residue of an integer n (mod q). For simplicity, denote B = Bg,g (p, q). The proof of the following result follows the techniques of Lemma 6.4.17. For a detailed proof, we refer to (Halava et al., 2007b).

Similarity relations on words

203

Theorem 6.4.31 Let w ∈ FW (p, q) with no distinct Rw -isomorphic letters and with minimal number of occurrences of an Rw -universal letter. This word is unique up to renaming of letters. Furthermore, w is of the form uc−1 , where ⎧    qp  q−1− q p  p+1 2 ⎪ ⎪ p [B] −1 p−[B] ⎪ p b p ab b c if B = p+1 ⎪ ⎪ 2 q and p < q, ⎨   p+1 u= 2 [B] p −1 abq−1−[B] p c ⎪ b if B = p+1 ⎪ 2 q and p > q, ⎪ ⎪ ⎪ q ⎩ [B]q −1 q−[B]q −1 B−[B] cb a) q b[B]q −1 c otherwise, (b and the relation is Rw = $(a, b), (b, c)%. Note that the relation Rw = $(a, b), (b, c)% in Theorem 6.4.31 which was used in defining the minimal extremal words in FW (p, q) corresponds to the compatibility relation of partial words. As in the case of normal extremal Fine and Wilf words (de Luca and Mignosi, 1994; Tijdeman and Zamboni, 2003), the minimal extremal relational Fine and Wilf words given in Theorem 6.4.31 have nice palindromic properties. Recall that a word  where w  = wn wn−1 · · · w1 . A generalisation w = w1 w2 · · · wn is a palindrome if w = w, of palindromic words are so called pseudo-palindromic words. Definition 6.4.32 Let σ : Σ → Σ be a morphism satisfying σ 2 = ι . A word w =  for w  = wn wn−1 · · · w1 . w1 w2 · · · wn is a σ -pseudo-palindrome if w = σ (w) For more information on palindromes and pseudo-palindromes, see (Anne et al., 2005; de Luca and De Luca, 2006). As a final result of extremal Fine and Wilf words we prove the following palindromic properties. Theorem 6.4.33 Let w ∈ Σ∗ , where Σ = {a, b, c}, belong to FW (p, q) with no distinct Rw -isomorphic letters and with minimal number of occurrences of an Rw universal letter. Let Rw = $(a, b), (b, c)%. If Bg,g (p, q) = (p + 1)q/2, then w is a palindrome. Otherwise, it is a σ -pseudo-palindrome, where σ : Σ → Σ is defined by σ (a) = c and σ (b) = b. Proof Let Bg = Bg,g (p, q). The word w is given by the formula of Theorem 6.4.31. Consider first w ∈ FW (p, q) such that B = (p + 1)q/2. Suppose that wm = a. By Theorem 6.4.31, we have m = n + iq for some i and 1 ≤ n < q satisfying n ≡ B (mod p). Since B ≡ 0 (mod q), wB−n−iq = wq−n by the period q. Now q − n ≡ q − B + pq = (p + 1)q/2 = B (mod p) since n ≡ B (mod p), and thus wB−m = wq−n = a. Consider then the occurrences of c in w. Suppose that wm = c. By Theorem 6.4.31, we have m ≡ 0 (mod q). Since B = (p + 1)q/2, also B − m ≡ 0 (mod q). This implies that wB−m = c and therefore wm = wB−m = w|w|+1−m if wm = a or wm = c. Hence, this is true also for wm = b and so the word w is a palindrome. Next consider w ∈ FW (p, q) such that B(p, q) = q + (q − 1)p/2. We apply again Theorem 6.4.31 to obtain: wm = a if, and only if, m ≡ 0 (mod q) and wm = c if, and only if, m ≡ B (mod q). Hence B − m ≡ B (mod q), for wm = a, and therefore

204

V. Halava, T. Harju and T. K¨arki

wB−m = c. On the other hand, if wm = c, then B − m ≡ 0 (mod q) and wB−m = a. Thus, wm = σ (wB−m ) = σ (w|w|+1−m ), i.e., w is a σ -pseudo-palindrome. We close this section by giving some examples of relational extremal Fine and Wilf words demonstrating also the palindromic properties discussed above. Example 6.4.34 We showed in Example 6.4.19 that the word w = (abbbbac)2abbbba together with the relation R = $(a, b), (b, c)% is a word of maximal length such that w has a global R-period 5 and a pure period 7, but 1 is not a global R-period. Note that in w1 w2 · · · w7 the letter c occurs in the position [B]7 = 7 and the letter a occurs exactly in positions 1 and 6, which are the positions congruent to B = 21 modulo 5. Hence, w is of the form uc−1 given in Theorem 6.4.31. Moreover, this word is clearly a palindrome. The word w is minimal and therefore acts as a template for other words in FW (5, 7). For example, replace the letter w2 by d and w6 by e. From Figure 6.4 we clearly see that w2 = d must be Rw -isomorphic to b. In other words, it must be R-universal. Similarly, w6 = e must be Rw -isomorphic to a. Hence, v = (adbbbec)2adbbbe ∈ FW (5, 7) with the relation Rv = $ΩΣ \ {(a, c), (c, a), (e, c), (c, e)}%. Note that this word is neither a palindrome nor a pseudo-palindrome. As an example of a pseudo-palindrome, consider a minimal word w ∈ FW (7, 5). Now, B(7, 5) = q + (q − 1)p/2 = 19, [B(7, 5)]5 = 4 and (B(7, 5) − [Bg(7, 5)]5 )/q = 3. By the formula of Theorem 6.4.31, w = (bbbca)3 bbb, which is a σ -pseudopalindrome for the morphism σ : {a, b, c}∗ → {a, b, c}∗ such that σ (a) = c and σ (b) = b.

6.5 Repetitions in relational words Repetitions and repetition freeness have been one of the main subjects in combinatorics on words since the seminal papers of Axel Thue (Thue, 1906a, 1912). Let us recall that he showed that there exist infinite words over a ternary alphabet that do not have any squares as factors (see Theorem 4.2.2). Thue also constructed an infinite binary word t that avoids all overlaps xyxyx for any word y and non-empty x (see Theorem 4.2.8). This celebrated word is called the Thue–Morse word and it has many remarkable properties; see (Allouche and Shallit, 1999) or Example 1.3.1. A generalisation of repetition freeness has been studied in connection with partial words. In a partial word w a factor uv is a square if the words u and v are compatible. Obviously squares cannot be avoided in partial words since every word containing a hole contains a square a or a for some letter a. However, we can avoid larger squares. Namely, over a ternary alphabet there exist uncountably many partial words with an infinite number of holes such that the only square factors are the trivial ones; see (Halava et al., 2008b) and (Blanchet-Sadri et al., 2009). Overlapfreeness

Similarity relations on words

205

of partial words was considered in (Halava et al., 2009). They showed that an infinite overlapfree binary partial word is either full or of the form w or a  w, where w is an infinite full word and a is a letter. There are infinitely many overlapfree words of each type; see also (Blanchet-Sadri et al., 2009). Here we have chosen to investigate squarefree and overlapfreeness with respect to relational words. The section relies on the article (Harju and K¨arki, 2014).

6.5.1 Relations, graphs and squares Denote by Γ = {0, 1, 2} a special ternary alphabet in this section. Definition 6.5.1 Let R be a similarity relation on Σ. (i) A non-empty word w ∈ Σ∗ has an R-square uu if uRu and uu is a factor of w. If w has no R-squares then it is R-squarefree. A word w that is ι -squarefree, for the identity relation ι , is simply squarefree. (ii) We say that words x and y R-overlap if x = uv and y = v u such that vRv or uRu for some non-empty words u, v, u , v . Clearly every R-squarefree word is squarefree. The following result is a criterion of squarefreeness preserving morphisms for ordinary words; see (Crochemore, 1982b). Theorem 6.5.2 Let α : Γ∗ → Σ∗ be a morphism for which every image α (w) is squarefree for |w| ≤ 5. Then α is a squarefree morphism, i.e., α (w) is squarefree for all squarefree words w ∈ Γ∗ . Recall that a similarity relation R ⊆ Σ∗ × Σ∗ can be represented as a graph GR = (Σ, E) such that the edges correspond to the pairs (a, b) ∈ R with a = b. Also, the converse is true, i.e., if G is a finite graph there is a unique similarity relation R such that G = GR . In the following if G = (Σ, E) is a graph, its order n(G) is the cardinality of the vertex set Σ and its size e(G) is the number of its edges. For a vertex a ∈ Σ, let NG (a) = {b | {a, b} ∈ E} denote its neighbourhood. We denote by Pn a graph that is isomorphic to a path on n vertices, i.e., Pn consists of edges {vi , vi+1 }, i = 1, 2, . . . , n − 1, for different vertices v1 , v2 , . . . , vn . Similarly, Cn denotes a cycle on n different vertices. Definition 6.5.3 We say that a connected graph G on Σ admits a squarefree word, if for the corresponding similarity relation R ⊆ Σ × Σ there exists an infinite Rsquarefree word w ∈ Σω where all letters a ∈ Σ occur infinitely many times. In this case we also say that the relation R admits a squarefree word.

206

V. Halava, T. Harju and T. K¨arki

Remark 6.5.4 The requirements of connectivity and infinity of occurrences of letters are essential to our considerations. Indeed, e.g., if R = ι is on Γ = {0, 1, 2}, then GR has no edges, and by Thue’s result there exists an infinite squarefree word on Γ (see Theorem 4.2.2). On the other hand, let R = $(a, b), (b, c)%. By Thue’s result, there exists an infinite overlapfree word over the binary alphabet {a, c} (see Theorem 4.2.8). However, if we require that there are infinitely many occurrences of b, then no such word exists; see (Halava et al., 2008b). Next we prove three general results on the number of edges in the graphs for the similarity relations. Recall that a subgraph H of a graph G is a spanning subgraph if H has the same vertex set (but a subset of edges). Lemma 6.5.5 If G = GR admits a squarefree word, then so do its connected spanning subgraphs. Proof If H is a spanning subgraph of G, then the corresponding similarity relation R of H is a subrelation of R. Therefore, if w is an R-squarefree word, then w also is R -squarefree. Lemma 6.5.6 Let G = (Σ, E) be a graph that admits a squarefree word, and let / Σ be a new vertex, a ∈ Σ be a vertex with the neighbourhood NG (a), and let b ∈ and let A ⊆ NG (a) ∪ {a} be a non-empty subset. Then, for EA = {{b, c} | c ∈ A}, the graph G = (Σ ∪ {b}, E ∪ EA ) of order n(G ) = n(G) + 1 admits a squarefree word. Proof We denote R the similarity relation according to G . Let w be an infinite Rsquarefree word admitted by G = GR , and let w be an infinite word that is obtained from w by replacing infinitely many occurrences of a by the new letter b so that there remains infinitely many occurrences of a. Assume that u and v are factors of w such that u R v , and let u and v be obtained from u and v , respectively, by replacing all occurrences of b by a. Then also uRv, since NG (b) ⊆ NG (a) ∪ {a}. It follows that u v cannot be factor in w . As a corollary to Lemma 6.5.5 and Lemma 6.5.6, we obtain the following result. Corollary 6.5.7 Suppose there exists a graph G of order n that admits a squarefree word, and let m ≥ n. Then there is a tree T , any spanning tree of G, of order m that admits a squarefree word. It remains to find the lower bound for the order of graphs admitting squarefree words. Below we shall show that the bound is six.

6.5.2 The lower bound Theorem 6.5.8 There are no graphs of order 5 that admit squarefree words. Proof By the previous corollary, we need to show only that no such trees exist.

207

Similarity relations on words

First we list all trees of order 5 up to isomorphism. There are only three of these (see Figure 6.5). We let the vertex set be Σ = {0, 1, 2, 3, 4}. 0

4

0

4

1

3

1

3

2

2

(1)

(2)

1

2

3

4

0 (3)

Figure 6.5 The trees of order 5.

In each of the cases, suppose the tree T admits a squarefree word w. Also, let R be the similarity relation corresponding to T . Consider first the path P5 in (1). Here the factor 20, if it exists in w, can be followed only by 2, 402 and 42. For instance, in 203a the letter a cannot be related to 0 or 3. But this exhausts all possibilities. Therefore between two occurrences of 2, there does not occur any letters 1 or 3. The case for the factor 24 is symmetric, and hence 1 and 3 cannot occur in w infinitely many times. This contradicts the assumption that w should contain infinitely many of each letter. Let T be the tree in (2). Now, an occurrence of 3 is preceded and followed by 0, since 3 is related to the other letters. Hence 030 is necessarily followed by 2 or 4, but 03R02 and 03R04 which is a contradiction. Finally, if T is the tree in (3), the letter 0 is related to the rest of the letters, and hence 0 cannot occur in w at all. This is again a contradiction. Example 6.5.9 Any graph where one of the vertices is adjacent to all other vertices cannot admit a squarefree word. There are also trees of order n ≥ 6 that do not admit squarefree words. We only consider one of these of order 6. Let T be the first tree in Figure 6.6. Suppose again that w is a squarefree word admitted by T . The letter 3 is necessarily preceded and followed by 0 or 1, but the factors 03 and 13 cannot be followed in w by any word of length two. After Theorem 6.5.10 we learn that the three trees of Figure 6.6 are the only trees of order six that do not admit squarefree words. 5 0

1

2

3

5 0

4

1

2

2

3 4

5

0 1

3 4

Figure 6.6 Trees of order 6 that do not admit a squarefree word.

The following theorem gives an optimal bound for squarefreeness from (K¨arki, 2012) where it was shown that the path P6 of six vertices admits a squarefree word.

208

V. Halava, T. Harju and T. K¨arki

The proof in (K¨arki, 2012) relies on the Leech morphism of words which is modified for the similarity relation corresponding to P6 . Below we give another proof of the result using a different graph. Theorem 6.5.10 Let n ≥ 6. There exists a graph G of order n that admits a squarefree word. Proof The claim follows from Lemma 6.5.6 after we prove that the graph G of Figure 6.7 on the vertex set Σ = {0, 1, . . ., 5} admits a squarefree word. To this end, let R be the similarity relation with G = GR . 0

5

1

4

2

3

Figure 6.7 A graph G of order 6.

Consider the morphism α : Γ∗ → Σ∗ defined by

α (0) = 130502420535, α (1) = 135305020535, α (2) = 153050240535. We show that if w ∈ Γ∗ is a squarefree word, then α (w) does not contain R-squares. Assume on the contrary that w ∈ Γ∗ is a squarefree word such that α (w) has a factor uv with uRv. A straightforward computer check shows that |w| > 5, and hence α (w) is squarefree as an ordinary word by Theorem 6.5.2. Also, it follows that u = x2 α (x)y1 and v = y2 α (y)z1 for some non-empty words x, y ∈ Γ∗ such that α (a) = x1 x2 , α (b) = y1 y2 and α (c) = z1 z2 for some a, b, c ∈ Γ; see Figure 6.8. x1

x2

α (x) u

y1

y2

α (y)

z1

z2

v

Figure 6.8 The factor ww in a squarefree word.

We can easily check that none of the images α (d) with d ∈ Γ is similar to a factor of α (s), where |s| = 2, except when s has d as a prefix or a suffix. By the form of the images α (i) and the relation R, necessarily x2 = y2 and y1 = z1 , and thus also x = y. Hence α (b) = y1 y2 = z1 x2 , where z1 is a prefix of α (c) and x2 is a suffix of α (a). However, the length of the longest R-similar prefixes of the images of different letters is two (α (0) and α (1)) and the length of the longest R-similar suffixes is five (α (0) and α (1)). Since the images have length 12, this gives a contradiction and proves the claim.

209

Similarity relations on words

The graph of Figure 6.7 has three spanning trees as given in Figure 6.9. These trees are obtained by removing one of the edges from the cycle 1 − 2 − 3 − 4. Hence, by Lemma 6.5.5, the three trees of Figure 6.9 admit a squarefree word. Since, up to isomorphism, there are six trees of order six and the trees in Figure 6.6 do not admit squarefree words, Figure 6.9 lists all the trees admitting a squarefree word. 0

5

0

5

0

5

1

4

1

4

1

4

2

3

2

3

2

3

Figure 6.9 The trees of order 6 admitting a squarefree word.

By Lemma 6.5.6, we have the following corollary. Corollary 6.5.11 Let n ≥ 6. There exists a tree T of order n that admits a squarefree word. Each tree T of order n has exactly n − 1 edges. Therefore if a graph of order n admits a squarefree word, then e(G) ≥ n − 1. A trivial upper bound for the size is e(G) ≤ n(n − 1)/2, but this bound is clearly too high; see Exercise 6.6.12. The following theorem is due to (K¨arki, 2012). Theorem 6.5.12 The (cordless) cycle C6 of six vertices does not admit a squarefree word.

6.5.3 Local and global overlapfreeness With respect to overlapfreeness in conjunction with similarity relations, there are again two types of case. Definition 6.5.13 Let R ⊆ Σ∗ × Σ∗ be a similarity relation. (i) A local R-overlap is a word of the form uu vv w, where u R v, u R v and v R w. In this case, the R-similar words uu v and vv w are R-overlapping. (ii) A global R-overlap is a local R-overlap uu vv w where also u R w. Definition 6.5.14 We say that a similarity relation R ⊆ Σ× Σ admits a local (global) overlapfree word, if its graph GR is connected and there exists an infinite local (global, resp.) R-overlapfree word w ∈ Σω where all letters a ∈ Σ occur infinitely many times. In this case we also say that the graph GR admits local (global, resp.) overlapfree word. Example 6.5.15 Let R correspond to the path P5 of five edges; see Figure 6.5(1). The word w = 411132202 = (4111)(3220)2 is a local R-overlap, but it is not a global R-overlap, since u = 4 is not R-compatible w = 2.

210

V. Halava, T. Harju and T. K¨arki

We show that the smallest order of a graph GR admitting a local and a global R-overlapfree word is 4. Theorem 6.5.16 The similarity relation R corresponding to the path P4 on four vertices admits a local, and thus also global, R-overlapfree word. No similarity relation on three letters admits a global, and thus local, R-overlapfree word. Proof We first prove the negative case on Γ. Let R be the similarity relation on Γ corresponding to the path P3 of three vertices such that 0R1 and 2R1. Consider any infinite word w over Γ where 1 occurs infinitely many times. Let a1b be a factor in w, where a, b ∈ Γ. Since aR1 and 1Rb hold, the factor a1b is a local R-overlap. Assume then that w is globally R-overlapfree. Hence in a factor a1b, always (a, b) ∈ / R, and so {a, b} = {0, 2}. Suppose a1b occurs infinitely many times in w. It is necessarily part of v = ba1ba, which cannot be followed by 1, since ba1ba1a, ba1ba11 and ba1ba1b all have global R-overlaps. Also, v cannot be followed by a, since ba1baaa, ba1baa1 and ba1baab all have an R-overlap. Hence, v is followed by b. Now, ba1baba contains a global R-overlap 1baba, and ba1bab1 contains a global R-overlap 1bab1 and ba1babb is a global R-overlap. In conclusion, w contains global P3 -overlaps. / R. For the first claim, let R correspond to P4 on Σ = {0, 1, 2, 3}. Now (0, 3) ∈ Consider the Thue–Morse word t = τ ω (0) over {0, 3}, where τ : {0, 3}∗ → {0, 3}∗ is defined by 0 → 03, 3 → 30. The word t = 033030033003 · · · is a fixed point of τ , and it is overlapfree; see Theorem 4.2.8 and (Lothaire, 1983). Let t  be a word obtained from t by arbitrarily replacing infinitely many occurrences of 303 by 313 and infinitely many occurrences of 030 by 020. We claim that t  is locally and, consequently, also globally R-overlapfree. Suppose on the contrary that t  has a local R-overlap. We can assume that this factor is of the form axbyc where a, b, c ∈ Σ and axbRbyc. We conclude that there is an index i in axb = u1 u2 · · · un and in byc = v1 v2 · · · vn such that either ui = 1 and vi = 2 or vice versa. Indeed, otherwise, we could change all modified letters 1 and 2 in axbyc back to the original letters of t and obtain a global P4 -overlap in t contradicting overlapfreeness of t. By symmetry, we can choose ui = 1 and vi = 2. If i > 1, then ui−1 ui = 31 and vi−1 vh = 02. These factors are not R-compatible; a contradiction. If i < |u|, then ui ui+1 = 13 and vi vi+1 = 20. As above, we obtain a contradiction. Hence, there are no local R-overlaps in t  and the claim follows. The following result on cycles was proven in (K¨arki, 2012). We leave the proof as an exercise. Theorem 6.5.17 The (cordless) cycle C5 of five vertices admits both globally and locally an overlapfree word.

Similarity relations on words

211

6.6 Exercises and problems Section 6.3 Exercise 6.6.1 Prove Corollary 6.3.4: if X is an (R1 , S1 )-code and R2 and S2 are similarity relations on Σ∗ , then X is an (R1 ∩ R2 , S1 ∪ S2 )-code. Exercise 6.6.2 Let R1 and R2 be similarity relations on Σ∗ , and let X ⊆ Σ∗ . Show that X is not necessarily an (R1 ∪ R2 , S)-code even when if it is both an (R1 , S)-code and an (R2 , S)-code. Exercise 6.6.3 Show that the converse of Corollary 6.3.7 does not hold in general. Section 6.4 Exercise 6.6.4 Prove Theorem 6.4.6: if a similarity relation R is an equivalence relation, then for the tree types of periods we have πg (x) = πe (x) = πl (x). Exercise 6.6.5 Consider an equivalence relation R that is a similarity relation. Let a word x have two R-periods p and q and suppose the length of x is at least p + q − gcd(p, q). Show that gcd(p, q) is an R-period of x. Exercise 6.6.6 Note that the roles of the relations R and S are not symmetric in Theorem 6.3.6, i.e., show that not all (R, S)-codes are (S, S)-codes. Exercise 6.6.7 Show that, in general, it is not the case that Bg,g (p, q) = Bg,l (p, q). Exercise 6.6.8 Let Rw be the similarity relation with a minimum number of pairs of letters such that w ∈ FW (p, q) has an Rw -period p. Show that the relation Rw is well defined. Problem 6.6.9 How do the periodicity properties of words change if we consider, instead of similarity relations, the relations R ⊆ Σ∗ × Σ∗ that need not be symmetric, but still induced by a relation on the set of the letters Σ, i.e., if (u, v) ∈ R then |u| = |v|? An integer p is then a global R-period of a word w = w1 w2 · · · wn ∈ Σ∗ , if for all indices i and j, i ≡ j and i < j =⇒ (wi , w j ) ∈ R, and p is a local R-period, if (wi , wi+p ) ∈ R for all indices i and j. Section 6.5 Exercise 6.6.10 Prove Theorem 6.5.12: the (cordless) cycle C6 of six vertices does not admit squarefree words. Exercise 6.6.11 Prove Theorem 6.5.17: the (cordless) cycle C5 of five vertices admits both globally and locally an overlapfree word. Exercise 6.6.12 Show that for each n ≥ 6, there exists a graph G of order n(G) = n and size (n + 1)(n − 6) + 6. (6.18) n − 1 ≤ e(G) ≤ 2

212

V. Halava, T. Harju and T. K¨arki

Problem 6.6.13 Give an optimal upper bound for the number of edges in graphs of order n that admit an overlapfree word. Problem 6.6.14 Characterise the trees (or graphs in general) of order n that admit a squarefree word. Problem 6.6.15 Give an optimal upper bound for the number of edges in graphs of order n that admit a squarefree word. Problem 6.6.16 As in Problem 6.6.9 consider the generalisations of the relations where R ⊆ Σ∗ × Σ∗ need not be symmetric, and say that a factor uv of a word is a R-square if (u, v) ∈ R holds. Now, the graph corresponding to R is directed, i.e., the edges are ordered pairs of letters (vertices). Determine the minimum order of Σ for a relation R for which there exists an infinite word avoiding directed R-squares, and containing all letters infinitely often. Problem 6.6.17 Theorem 6.5.2 gives an efficient criterion for a morphism to be squarefree. Is there such a criterion for morphisms that preserve R-squarefree words? Does there exists an integer N for which any morphism α : Γ∗ → Σ∗ preserves Rsquarefree words if it does so for R-squarefree words of length N?

7 Synchronised automata Marie-Pierre B´eal and Dominique Perrin

7.1 Introduction The notions of synchronising word and synchronised automaton are simple to define and occur in many applications of automata. A synchronising word maps every state of an automaton to the same state. It is remarkable that this simple notion is linked with difficult combinatorial problems. ˇ y Conjecture, have been open for a long time. This Some of them, like the Cern´ conjecture asserts that a synchronised deterministic automaton with n states has a synchronising word of length at most (n − 1)2. Another longstanding open problem, the Road Colouring Problem, was solved by Trahtman in 2009. The problem asks whether any complete deterministic automaton has the same graph as a synchronised automaton. The choice of the labels of the edges defines a colouring to which the term Road Colouring Problem refers. A reason to explain the difficulty of these questions may be the fact that there is no simple description of the class of automata that are not synchronised. It is relatively simple to verify whether an automaton is synchronised since it can be checked with a polynomial algorithm. On the contrary, it was proved by Eppstein (1990) that finding a synchronising word of minimal length is NP-hard. Recently, some results have been obtained on the synchronising properties of random automata showing that a random automaton is synchronised with high probability (see Berlinkov (2013), Nicaud (2014)). In this chapter, we present a survey of results concerning synchronised automata. We first define the notions of synchronising words and synchronised automata for deterministic automata. We extend these notions to the more general class of unambiguous automata. ˇ y’s conjecture and discuss several particular cases In Section 7.3, we present Cern´ where a positive answer is known. This includes the important case of aperiodic automata solved by Trahtman (2007). We also describe the case of circular automata Combinatorics, Words and Symbolic Dynamics, ed. Val´erie Berth´e and Michel Rigo. Published by c Cambridge University Press. Cambridge University Press 2016.

214

M.-P. B´eal and D. Perrin

Dubuc (1998), the case of one-cluster automata introduced by B´eal et al. (2011), and more generally the strongly transitive automata of Carpi and D’Alessandro (2013). In Section 7.4, we present the Road Colouring Theorem (i.e., the solution of Trahtman to the Road Colouring Problem). We present here Trahtman’s cubic-time algorithm. A quadratic-time algorithm is given in B´eal and Perrin (2014). We also present a generalisation of the Road Colouring Theorem to automata with a periodic graph.

7.2 Definitions We consider here a finite set A as the alphabet. A (finite) automaton A over the alphabet A is composed of a finite set Q of states and a finite set E of edges, which are triples (p, a, q) where p, q are states and a is a a → q. symbol from A called the label of the edge. An edge (p, a, q) is also denoted p − Also see Section 1.4.2. Note that neither initial nor terminal states are specified. The automaton is denoted (Q, E), the alphabet being understood. An automaton is irreducible if its underlying graph is strongly connected. An automaton is deterministic if, for each state p and each letter a, there is at most one edge starting at p and labelled with a. It is complete deterministic if, for each state p and each letter a, there is exactly one edge starting in p and labelled with a. This implies that, for each state p and each word w, there is exactly one path starting in p and labelled with w. If this unique path ends in a state q, we denote by p · w the state q. This notation is extended to sets of states by S · w = {p · w | p ∈ S}. A synchronising word of a deterministic automaton is a word w such that there is at least one path labelled by w and all paths labelled by w have the same terminal state. A synchronising word is also called a reset word (see Eppstein (1990)) or a magic word (see Lind and Marcus (1995)). An automaton which has a synchronising word is called synchronised. A pair of states (p, q) is synchronisable if there is a word w such that p · w = q · w. More generally, a set of states S is synchronisable if there is a word w such that Card(S · w) = 1. The word w is said to synchronise S. b

b

b

a 1

b

a

a 3

4 a

b

a

b

a 1

2

2 a

b 3

b

4 a

Figure 7.1 Two complete deterministic automata over the alphabet A = {a, b}. The automaton on the left is not synchronised. The one on the right is synchronised. For instance, the word aaa is a synchronising word.

Proposition 7.2.1 A complete deterministic automaton is synchronised if, and only if, every pair of states is synchronisable.

Synchronised automata

215

Proof Let A = (Q, E) be a complete deterministic automaton. If A is synchronised, the condition is obviously satisfied. Conversely, let S ⊂ Q be a set of the form Q · u of minimal cardinality for all words u in A∗ . If S has more than one element, let p, q be two distinct elements of S. By assumption there is a word v which synchronises the pair (p, q). Then Card(Q · uv) = Card(S · v) < Card(S), a contradiction. Thus Card(S) = 1 and u is a synchronising word. This statement gives an algorithm to check whether an automaton is synchronised. One computes the action of the letters on the pairs of states. The automaton is synchronised if, and only if, for every pair of distinct states, a pair of equal states can be reached. Consider for example the automaton on the left of Figure 7.1. The action on the pairs (1, 4) and (2, 3) is represented in Figure 7.2. No other pair is accessible from these pairs and thus the automaton is not synchronised. b

b a 1, 4

2, 3 a

Figure 7.2 The action on pairs of states.

ˇ 7.3 Cern´ y’s conjecture ˇ y (1964) constructed synchronised n-state deterministic complete automata for Cern´ which the length of a shortest synchronising word is (n − 1)2 . He formulated the ˇ y’s conjecture, which asserts the existence of a synchroconjecture, now called Cern´ nising word of length at most (n − 1)2 for any synchronised n-state deterministic complete automaton (see Figure 7.3). b

b a 1

2

b a

a 4

b

3 a

ˇ y automaton with four states over the alphabet A = {a, b}. The Figure 7.3 The Cern´ word ba3 ba3 b is a synchronising word.

The best upper bound known for the length (n) of a shortest synchronising word in an n-state deterministic complete automaton obtained so far is cubic (the bound

216

M.-P. B´eal and D. Perrin

(n) = (n3 − n)/6 is obtained in Pin (1983); see Kari and Volkov (2014) for more references). A simple proof of the existence of a synchronising word of length at most cubic in a synchronised n-state deterministic complete automaton is the following. Proposition 7.3.1 A synchronised n-state deterministic complete automaton has a synchronising word of length at most n(n − 1)2/2. Proof Let A = (Q, E) be a synchronised n-state deterministic complete automaton. Since A is synchronised, every pair of states is synchronisable by Proposition 7.2.1. Let P be a subset of Q. If Card(P) > 1, let p, q in P and u be such that p · u = q · u. One can choose a word u satisfying this equality of minimal length. Let us show that the length of u is at most n(n − 1)/2. Indeed, if |u| > n(n − 1)/2, let u = u1 · · · uk , with k > n(n − 1)/2. Then the pairs (p · u1 · · · ui , q · u1 · · · ui ), for 0 ≤ i < k, are k pairs of distinct states. As a consequence, two of them are equal, which contradicts the fact that u is as short as possible. Hence there is a word u of length at most n(n − 1)/2 such that Card(P · u) < Card(P). Starting with P = Q and iterating the construction, we get a word v = u1 u2 · · · un−1 of length at most n(n − 1)2 /2 such that Card(P · v) = 1. ˇ y’s conjecture is still open in general, it has been settled for important Though Cern´ and large particular classes of automata.

7.3.1 Aperiodic automata A deterministic automaton is aperiodic if there is a non-negative integer k such that for any word w and any state q, one has q · wk = q · wk+1 . Note that, since the automaton is finite, there always exists a non-negative integer k and a positive integer p such that for any word w and any state q, one has q · wk = q · wk+p. The condition expresses the fact that one can choose p = 1 in the above condition. Aperiodic automata can also be defined by a property of their transition monoid. Let A = (Q, E) be an automaton on the alphabet A. We denote by ϕA the morphism from A∗ into the monoid of binary relations on Q which associates with the word w ∈ A∗ the set of pairs (p, q) for which there is a path from p to q labelled by w. The monoid ϕA (A∗ ) is the transition monoid of A . Aperiodic automata can be defined equivalently by the fact that their transition monoid does not contain any non-trivial group, see Eilenberg (1976). Still another equivalent definition is the following. For a subset S of the set of states, let Stab(S) = {u ∈ A∗ | S · u = S} and let G(S) denote the set of restrictions to S of the elements of Stab(S). The set G(S) is a permutation group on S. Indeed, the restriction to S of an element of Stab(S) is a permutation on S.

217

Synchronised automata

Then the automaton is aperiodic if, and only if, for each S, the permutation group G(S) is reduced to the identity. This condition can be used in practice to check that an automaton is aperiodic as in the following example. Example 7.3.2 Consider the automaton on the set of states Q = {1, 2, 3, 4} represented in Figure 7.4. To check that the automaton is aperiodic, we compute in Figure 7.5 the action of the alphabet on the subsets of Q of the form Q · u for u ∈ A∗ with more than one element. Then ab ∈ Stab({1, 2, 3}) and the restriction of ab to {1, 2, 3} is the identity. It implies that the restriction of ab to {1, 2} and {2, 3} is also the identity and thus that the automaton is aperiodic.

a b

a

1

a

2 b

3

a

4

b

b

Figure 7.4 An aperiodic automaton.

3, 4

a

a

1, 2, 3, 4 b

a

2, 3, 4

a

b 2, 3

b 1, 2, 3

a b

b 1, 2

Figure 7.5 The action on subsets of Q.

Note that any strongly connected aperiodic automaton is synchronised. Indeed, assume the A = (Q, E) is a strongly connected aperiodic automaton. Let S be a set of the form Q · u for u ∈ A∗ of minimal cardinality. Since A is strongly connected, the group G(S) is a transitive permutation group. Since A is aperiodic, it is reduced to the identity. This forces Card(S) = 1 and thus A is synchronised. ˇ y’s ConThe following result was proved by Trahtman (2007). It shows that Cern´ jecture is true for aperiodic automata. Theorem 7.3.3 Let A be a synchronised n-state deterministic complete automaton. If A is aperiodic, then it has a synchronising word of length at most n(n − 1)/2. For an automaton A = (Q, E), we denote by A 2 the automaton on the set of pairs a → (p · a, q · a) for a ∈ A whenever (p, q) of distinct states whose edges are the (p, q) − p · a = q · a. Let C be a set of pairs of states, which we may consider as a binary relation on Q. We denote by r be a row of an element of M. Let q be such that (r − r)q = 1. Let m ∈ M be such that s = m p∗ is a maximal row and let n ∈ M be such that nqp = 1. Then r nm ≥ s and thus r = s. This forces rnm = 0 and thus 0 ∈ rM. Note that Lemma 7.3.18 implies in particular that, in a transitive unambiguous monoid of relations, the set of maximal rows is closed by multiplication on the right by an element of M. Indeed, this is clearly true for the set of row vectors satisfying condition (iii). Lemma 7.3.19 Let M be a transitive unambiguous monoid of relations not containing zero. For two elements m, m of M, if m ≤ m then m = m . Proof Suppose that mpq = 1 for some p, q ∈ Q. Since M is transitive and does not contain zero, there exists a maximal row r such that r p = 1. Let us assume that r = ns∗ for some n ∈ M. Then nm ≤ nm and (nm)s∗ = ns∗ m is a maximal row. Thus (nm)s∗ = (nm )s∗ . This forces m pq = 1 since m ≤ m . Lemma 7.3.20 For a state p ∈ Q and a word u ∈ A∗ , if ϕ (u) p∗ is not a maximal row, there is a state q and a word v of length at most n(n − 1)/2 such that ϕ (u) p∗ < ϕ (vu)q∗ . Proof Let p ∈ Q and u ∈ A∗ be such that ϕ (u) p∗ is not a maximal row. Since A is strongly connected, there exists a maximal row r such that r p = 1. There is at least a state p distinct of p such that r p = 1 and ϕ (u) p ∗ = 0 since otherwise rϕ (u) is not maximal. Hence there is a state q ∈ Q and a word v of length at most n(n − 1)/2 such v v → p and q − → p . Then ϕ (u) p∗ < ϕ (vu)q∗ . This proves the claim. that q −

228

M.-P. B´eal and D. Perrin

Lemma 7.3.21 Let M be a transitive unambiguous monoid of relations of minimal rank 1 not containing zero. Then for any maximal row r and any maximal column c, one has cr ∈ M and rc = 1. Proof The minimal number of non-zero distinct rows of an element of M is 1 and the same holds for columns. By Lemma 7.3.18, there exists an m ∈ M with all its non-zero rows equal to r. Then m = tr with t a maximal column. Similarly, there is an n ∈ M such that n = cs for some maximal row s. Since M is unambiguous, one has st ∈ {0, 1}. Since 0 ∈ / M one has st = 1. Thus nm = cstr = cr and cr ∈ M. Finally, rc ∈ {0, 1} because M is unambiguous and rc = 0 since 0 ∈ / M. Thus rc = 1. Proof of Proposition 7.3.17 By Lemma 7.3.20 and its symmetric form, there exist pairs (p1 , u1 ), (p2 , u2 ), . . . , (ps , us ) in Q × A∗ and (v1 , q1 ), (v2 , q2 ), . . . , (vt , qt ) in A∗ × Q such that, with xi = ϕ (ui · · · u1 ) pi ∗ and y j = ϕ (v1 · · · v j )∗qi , (i) u1 = v1 = ε and p1 = q1 , (ii) for 2 ≤ i ≤ s, the word ui has length at most n(n − 1)/2 and xi > xi−1 , (iii) for 2 ≤ j ≤ t, the word v j has length at most n(n − 1)/2 and y j > y j−1 , (iv) xs is a maximal row and yt is a maximal column. Let u = us · · · u1 and v = v1 · · · vt . We have |u| ≤ (s − 1)n(n − 1)/2 and |v| ≤ (t − 1)n(n − 1)/2. Thus |uv| ≤ (s + t − 2)n(n − 1)/2. z → ps with |z| ≤ n − 1. Then w = vzu is such Finally, let z ∈ A∗ be such that qt − that yt xs ≤ ϕ (w). By Lemma 7.3.21, yt xs ∈ M. Thus by Lemma 7.3.19, this implies ϕ (w) = yt xs . By Lemma 7.3.21, we have xs yt = 1. Thus s + t ≤ ∑q∈Q (xs )q + ∑q∈Q (yt )q ≤ n + 1 since xs has at least s coefficients 1 and yt has at least t coefficients 1. This shows that w is a synchronising word with 1 |w| ≤ (s + t − 2)n(n − 1)/2 + n − 1 ≤ n(n − 1)2 + n − 1 2 1 2 ≤ (n − n + 2)(n − 1) 2 and this concludes the proof. Example 7.3.22 We illustrate the proof of Proposition 7.3.17 on the automaton of 1 p1 = q12= 1. The row ϕ (ε )1∗ = 1 Example 2 7.3.15. We start with u1 = v1 = ε and 1 0 0 is not maximal but ϕ (ε )1∗ < ϕ (a)3∗ = 1 1 0 . Thus we choose u2 = a 1 2t and p2 = 3. Symmetrically, the column ϕ (ε )∗1 = 1 0 0 is not maximal but 1 2t ϕ (ε )∗1 < ϕ (b)∗3 = 1 1 0 . Thus we choose v2 = b and q2 = 3. Then ab is a synchronising word as already observed (Example 7.3.16).

229

Synchronised automata

7.3.4 The generalised conjecture The rank of a word w in a deterministic automaton A = (Q, E) is the size of its image Q · w. A synchronising word has rank 1. It has been conjectured by Pin (1978) that if an automaton admits a word of rank at most k, then there exists such a word of length at most (n − k)2 . This generalises ˇ y Conjecture. Pin’s conjecture was proved to be false by Kari (2001). The the Cern´ counterexample corresponds to n = 6 and k = 2. It is represented in Figure 7.9. b b

a

1

3

a

4

6

b

b a

a 2

a b

a 5

b Figure 7.9 Kari’s automaton.

The shortest words of rank 2 are the two words x = baabababaabbabaab and y = baababaabaababaab of length 17. The shortest synchronising word is xaababaab of length 24 = 52 − 1. A new conjecture proposed by Volkov states that an automaton of rank k admits a word of rank k of length at most (n − k)2 .

7.4 Road colouring Imagine a map with roads which are coloured in such a way that a fixed sequence of colours, called a homing sequence, leads the traveller to a fixed place whatever the starting point. Such a colouring of the roads is called synchronised and finding a synchronised colouring is called the Road Colouring Problem. In terms of automata, the problem asks whether any complete deterministic automaton has the same graph as a synchronised automaton. The choice of the labels of the edges defines a colouring to which the term Road Colouring Problem refers.

7.4.1 The Road Colouring Theorem The Road Colouring Theorem states that every complete deterministic automaton with an aperiodic directed graph has the same graph as a synchronised automaton. It has been conjectured under the name of the Road Colouring Problem by Adler et al. (1977), and solved for many particular types of automata (see, for instance, Adler

230

M.-P. B´eal and D. Perrin

et al. (1977), O’Brien (1981), Carbone (2001), Kari (2003), Friedman (1990), Perrin and Sch¨utzenberger (1992)). Trahtman (2009) settled the conjecture. In this section we present a version of his proof (another presentation is given in Berstel et al. (2007)). In the domain of coding, automata with outputs (i.e., transducers) can be used either as encoders or as decoders. When they are synchronised, the behaviour of the coder (or of the decoder) is improved in the presence of noise or errors. For instance, the well known Huffman compression scheme leads to a synchronised decoder provided the lengths of the codewords of the Huffman code are mutually prime. It is also a consequence of the Road Colouring Theorem that coding schemes for constrained channels can have sliding block decoders and synchronised encoders (see Adler et al. (1983) and Lind and Marcus (1995)). Trahtman’s proof is constructive and leads to an algorithm that finds a synchronised labelling with a cubic-time complexity (see Trahtman (2009)). The algorithm starts with an initial complete deterministic automaton which may not be synchronised. It consists in a sequence of flips of edges going out of some state so that the resulting automaton is synchronised. One first searches a sequence of flips leading to an automaton which has a so-called stable pair of states (i.e., with good synchronising properties). One then computes the quotient of the automaton by the congruence generated by the stable pairs. The process is then iterated on this smaller automaton. Trahtman’s method for finding the sequence of flips leading to a stable pair has a quadratic-time complexity, which makes his algorithm cubic. The period of a directed graph is the gcd of its cycles. A strongly connected graph is aperiodic if its period is 1.1 Two automata which have isomorphic underlying graphs are called equivalent. Hence two equivalent automata differ only by the labelling of their edges. In this section, we shall consider only complete deterministic automata. Proposition 7.4.1 The graph of a synchronised irreducible complete deterministic automaton is aperiodic. Proof Let d be the period of the graph of an irreducible complete deterministic automaton A . Let us consider an edge of A going from a state p to a state q. Let w be a synchronising word. There is a state r and two paths from p to r and from q to r labelled by w. Since A is irreducible, there is path from r to p of length m. Then d divides |w| + m and 1 + |w| + m. It divides then also 1 and thus d = 1. The Road Colouring Theorem can be stated as follows. Theorem 7.4.2 (Trahtman (2009)) Any complete deterministic automaton with an aperiodic graph is equivalent to a synchronised one. An example of two equivalent automata is shown in Figure 7.1. 1

The notion of aperiodic graph is independent of the notion of aperiodic automata used in Section 7.3.1.

Synchronised automata

231

A trivial case for solving the Road Colouring Theorem is the case where the automaton has a loop edge around some state r like in Figure 7.1. Indeed, since the graph of the automaton is strongly connected, there is a spanning tree rooted at r (with the edges of the tree oriented towards the root). Let us label the edges of this tree and the loop by the letter a. This colouring is synchronised by the word an−1 , where n is the number of states.

7.4.2 An algorithm for finding a synchronised colouring Trahtman’s proof of Theorem 7.4.2 is constructive and gives an algorithm for finding a labelling which makes the automaton synchronised provided its graph is aperiodic. In the sequel A denotes an n-state deterministic and complete automaton over an alphabet A. We fix a particular letter a ∈ A. Edges labelled by a are called a-edges, the other ones being called b-edges. An a-path is sequence of consecutive edges labelled by a. A pair (p, q) of states in an automaton is said stable if, for any word u, the pair (p · u, q · u) is synchronisable. In a synchronised automaton, any pair of states is stable. Note that if (p, q) is a stable pair, then for any word u, (p · u, q · u) also is a stable pair, hence the terminology. Note also that, if (p, q) and (q, r) are stable pairs, then (p, r) also is a stable pair. It follows that the relation defined on the set of states by p ≡ q if (p, q) is a stable pair, is an equivalence relation. It is actually a congruence (i.e., p · u ≡ q · u whenever p ≡ q) called the stable pair congruence. More generally, a congruence is stable if any pair of states in the same class is stable. The stable pair congruence is thus the coarsest congruence among the stable ones. The congruence generated by a stable pair (p, q) is the least congruence such that p and q belong to the same class. It is a stable congruence. The set of outgoing edges of a state is called a bunch if these edges all end in a same state. Note that if a state has two incoming bunches from two states p, q, then (p, q) is a stable pair. If A = (Q, E) is an automaton, the quotient of A by a stable pair congruence is the automaton B whose states are the classes of Q under the congruence. If p is a state of A , we denote by p¯ the congruence class of p. The edges of B are the triples ( p, ¯ c, q) ¯ where (p, c, q) is an edge of A . The automaton B is complete deterministic when A is complete deterministic. The graph of B is strongly connected (respectively aperiodic) when the graph of A is strongly connected (respectively aperiodic). The following lemma was obtained by Culik et al. (2002). Lemma 7.4.3 If the quotient of an automaton A by a stable congruence is equivalent to a synchronised automaton, then A is equivalent to a synchronised automaton. Proof Let B be the quotient of A by a stable congruence and let B  be a synchronised automaton equivalent to B. We define an automaton A  equivalent to A as follows. The number of edges of A going out of a state p and ending in states belonging to a same class q¯ is equal to the number of edges of B (and thus B  ) going

232

M.-P. B´eal and D. Perrin

out of p¯ and ending in q. ¯ We define A  by labelling these edges according to the labelling of corresponding edges in B  . The automaton B  is a quotient of A  . Let us show that A  is synchronised. Let w be a synchronising word of B  and let r be the state ending each path labelled by w in B  . Let p, q be two states of A  . Then p · w and q · w belong to the same congruence class r¯. Hence (p · w, q · w) is a stable pair of A  . Therefore (p, q) is a synchronisable pair of A  since the congruence is stable. All pairs of A  being synchronisable, A  is synchronised. Trahtman’s algorithm for finding a synchronised colouring of an automaton A can be described in a recursive way as follows. We first find an equivalent automaton A  of A which has at least one stable pair (p, q) and compute the quotient of A  by the congruence generated by (p, q). By induction on the number of states of A , we then find a synchronised colouring B  of this quotient. We finally lift up this colouring to the automaton A as follows. If there is an edge (p, c, q) in A but no edge ( p, ¯ c, q) ¯ ¯ d, q) ¯ in B  with c = d. Then we flip the labels of the in B  , then there is an edge ( p, two edges labelled c and d going out of p in A . The algorithm is described in the following pseudocode. The procedure F IND S TABLE PAIR, which finds an equivalent automaton which has a stable pair of states, is described in the next section. The procedure M ERGE computes the quotient of an automaton by the stable congruence generated by a stable pair of states. F IND C OLOURING (automaton with an aperiodic graph A ) 1 B←A 2 while (size(B) > 1) 3 do B, (s,t) ← F IND S TABLE PAIR (B) 4 lift the colouring up from B to the automaton A 5 B ← M ERGE (B, (s,t)) 6 return A

7.4.3 Finding a stable pair In this section, we consider a complete deterministic automaton A with an aperiodic graph. We present Trahtman’s cubic-time algorithm for finding an equivalent automaton which has a stable pair. In order to describe the algorithm, we give some definitions and notation. The subgraph of the graph of A made of the a-edges is a disjoint union of aclusters. Since each state has exactly one outgoing edge in this subgraph, each cluster contains a unique a-cycle with trees attached to the a-cycle at their root. If r is the root of such a tree, it belongs to the a-cycle of the cluster and its children are the states p such that p is not on the a-cycle and (p, a, r) is an edge. If p, q belong to a same tree, p is an ascendant of q in the tree if there is an a-path from q to p. If q belongs to some a-cycle, its predecessor is the unique state p belonging to the same a-cycle such that (p, a, q) is an edge. Note that it is q itself if the length of the a-cycle is 1. We recall that the level of a state p is the distance between p and the root of the

Synchronised automata

233

tree containing p. If p belongs to the a-cycle of the cluster, its level is thus null. The level of an automaton is the maximal level of its states. A maximal state is a state of maximal level. A maximal tree is a tree rooted at a state of level 0 containing at least one maximal state. The algorithm for finding a colouring which has a stable pair relies on the following key lemma due to Trahtman (2009). It uses the notion of minimal images in an automaton. An image in an automaton A = (Q, E) is a set of states I = Q · w, where w is a word and Q · w = {q · w | q ∈ Q}. A minimal image in an automaton is an image that is a minimal element of the set of images for set inclusion. In an irreducible automaton two minimal images have the same cardinality which is called the rank of A . Also, if I is a minimal image and u is a word, then I · u is again a minimal image and the map p → p · u is one-to-one from I onto I · u. Lemma 7.4.4 (Trahtman (2009)) Let A be an irreducible complete deterministic automaton with a positive level. If all maximal states in A belong to the same tree, then A has a stable pair. Proof Since A is irreducible, there is a minimal image I containing a maximal state p. Let  > 0 be the level of p (i.e., the distance between p and the root r of the unique maximal tree). Let us assume that there is a state q = p in I of level . Then the cardinality of I · a is strictly less than the cardinality of I, which contradicts the minimality of I. Thus all states but p in I have a level strictly less than . Let m be a common multiple of the lengths of all a-cycles. Let C be the a-cycle containing r. Let s0 be the predecessor of r in C and s1 the child of r containing p in its subtree. Since  > 0, we have s0 = s1 . Let J = I · a−1 and K = J · am . Since the level of all states of I but p is less than or equal to  − 1, the set J is equal to {s1 } ∪ R, where R is a set of states belonging to the a-cycles. Since for any state q in an a-cycle, q · am = q, we get K = {s0 } ∪ R. Let w be a word of minimal rank. For any word v, the minimal images J · vw and K · vw have the same cardinality equal to the cardinality of I. We claim that the set (J ∪ K) · vw is a minimal image. Indeed, J · vw ⊆ (J ∪ K) · vw ⊆ Q · vw, hence all three are equal. But (J ∪ K) · vw = R · vw ∪ s0 · vw ∪ s1 · vw. This forces s0 · vw = s1 · vw since the cardinality of R · vw cannot be less than the cardinality of R. As a consequence (s0 · v, s1 · v) is synchronisable and thus (s0 , s1 ) is a stable pair. In the sequel, we call Condition C1 the hypothesis of Lemma 7.4.4: all maximal states belong to the same tree. We now describe sequences of flips such that the resulting equivalent automaton either has a stable pair, or has strictly more states of null level. We consider several cases corresponding to the geometry of the automaton. • Case 1. We assume that the level of the automaton is  = 0. The subgraph of the automaton made of a-edges is a union of disjoint a-cycles. If the set of outgoing edges of each state is a bunch, then there is only one a-cycle, and the graph of the automaton is not aperiodic unless the trivial case where the length of this a-cycle

234

M.-P. B´eal and D. Perrin

is 1. We can thus assume that there is a state p whose set of outgoing edges is not a bunch. There exist b = a and q = r such that (p, a, q) and (p, b, r) are edges. We flip these two edges and obtain an automaton of positive level which has a unique maximal tree and thus satisfies Condition C1 . Let s be the state which is the predecessor of r in its a-cycle in the new automaton. It follows from the proof of Lemma 7.4.4 that the pair (p, s) is a stable pair (see Figure 7.10).

b

b

a q a a

b

a q

p

s

b r

a

b

p b a

b s

a

b

r a

Figure 7.10 The picture on the left illustrates Case 1. The a-cycle edges are drawn with larger thickness. After flipping the edges (p, a, q) and (p, b, r), we get the automaton on the right of the figure. It has a unique maximal tree (rooted at r).

• Case 2. We assume that the level of the automaton is  > 0. Let r be the root of a maximal tree and p a maximal state of this tree. We consider a b-edge (t, b, p) ending in p. Note that, since p is a maximal state and since the automaton is irreducible, such an edge always exists. We denote u = t · a. – Case 2.1. If t is not in the same cluster as r, or if t has a positive level and does not belong to the a-path from p to r, we flip the edges (t, b, p) and (t, a, u) and get an automaton which has a unique maximal tree. – Case 2.2. If t belongs to the a-path from p to r, we flip the edges (t, b, p) and (t, a, u) and get an automaton which has strictly more states of null level. – Case 2.3. We assume that t belongs to the a-cycle containing r. Let k1 be the length of the simple a-path from r to t and k2 the length of the simple a-path from u to r (see Figure 7.11). ◦ Case 2.3.1. If k2 > , we flip the edges (t, b, p) and (t, a, u) and get an automaton which has a unique maximal tree (see Figure 7.11). ◦ Case 2.3.2. If k2 < , we flip the edges (t, b, p) and (t, a, u) and get an automaton which has strictly more states of null level since k1 +  + 1 > k1 + k2 + 1 (see Figure 7.12). ◦ Case 2.3.3. Let us now assume that k2 = . Let q be the predecessor of r on the a-cycle and let s be the child of r ascendant of p in the maximal tree rooted at r.  Case 2.3.3.1. If the set of edges going out of q is not a bunch. There is a b-letter c such that q · c = v = r. Let us flip the edges (q, a, r) and (q, c, v). If r belongs to the new a-cycle, then the number of null level states has increased. In the other case, the level of r in the new automaton is at least

235

Synchronised automata p

p

11

11

r

r 6

8

6

8

k2 u

15

1

9

u

15

1

9

k1 t

t

2

2

3

3

18

18

Figure 7.11 The picture on the left illustrates Case 2.3.1 (k2 = 2 >  = 1). After flipping the edges (t, b, p) and (t, a, u), we get the automaton on the right of the figure. It has a unique maximal tree (rooted at r). Maximal states are filled in grey. Plain lines are used for a-edges and dashed ones for b-edges. Not all b-edges and clusters are represented.

p

p

12

s

s

11

u

8

k2 t

15

11

r

r u 16

12

1

9

10

16

8

t

15

1

9

10

k1 6

2

6

3 17

2 3

17 18

18

19

19

Figure 7.12 The picture on the left illustrates Case 2.3.2. We have k2 = 1 <  = 2. After flipping the edges (t, b, p) and (t, a, u), we get the automaton which has a larger number of null level states.

one and thus the new automaton has a unique maximal tree (see Figure 7.13).  Case 2.3.3.2. If the set of outgoing edges of q and s are bunches, then (q, s) is a stable pair.  Case 2.3.3.3. If the set of outgoing edges of q is a bunch and the set of outgoing edges of s is not a bunch, there is a letter c = a such that v = s · c = v = r. If there is an a-path from v to s, we flip the edges (s, a, r) and (s, c, v), creating a new a-cycle, which increases the number of states of level zero. If there is no a-path from v to s and the level of v is positive, we flip the edges (s, a, r) and (s, c, v) and get an automaton which has a unique maximal tree. If v has a null level and belongs to a cluster distinct

236

M.-P. B´eal and D. Perrin p

p

12

s

12

s

11

11

r

r q

q

8

8

k2 16

u

15

v t

9

10

16

u

15

v t

2

9

10

2

3

3

17

17 18

18

19

19

Figure 7.13 The picture on the left illustrates Case 2.3.3.1. We have k2 = 2. The state q is not a bunch. After flipping the edges (q, b, v) and (q, a, r), we get an automaton which has a unique maximal tree.

from the cluster of r, we flip the edges (s, a, r) and (s, c, v) and also the edges (t, a, u) and (t, b, p). We again get an automaton which has a unique maximal tree.  Case 2.3.3.4. We assume that the set of outgoing edges of q is a bunch and the set of outgoing edges of s is not a bunch. If there is a letter c = a such that v = s · c = v = r belongs to the a-cycle containing r. Let us denote by k3 the length of the simple a-path from u to v (see Figure 7.14). Since v = r, we have k3 = k2 . Hence k3 <  or k3 > . We flip the edges (s, a, r) and (s, c, v) and proceed as in Case 2.3.1 or 2.3.2, respectively. p

p

12

s

12

s

11

11

r

r q

q

v

16

u

15

v k3

k2 1 t

9

10

16

u

15

t

2 3

17

1

9

2 3

17 18

18

19

19

Figure 7.14 The picture on the left illustrates Case 2.3.3.3. We have k2 = 2. After flipping the edges (s, b, v) and (s, a, r), we have k3 > k2 = . We are back to Case 2.3.1.


The sequence of flips performed to transform the automaton into an equivalent one which has a stable pair has a quadratic-time complexity. This makes Trahtman's algorithm have a worst-case cubic-time complexity. A quadratic-time algorithm for the Road Colouring Problem is presented in Béal and Perrin (2014). The price to pay for decreasing the time complexity is a more involved choice of the flips.

7.4.4 Periodic Road Colouring In this section, we extend the Road Colouring Theorem to periodic graphs by showing that Trahtman's algorithm provides a colouring of minimal rank. Another proof of this result using semigroup tools, obtained independently, is given in Budzban and Feinsilver (2011). If the graph of an automaton is periodic, the automaton is not equivalent to a synchronised one. Nevertheless, the previous algorithm can be modified as follows for finding an equivalent automaton with the minimal possible rank.

PERIODICFINDCOLOURING(automaton A)
1  B ← A
2  while size(B) > 1
3      do B, (s, t) ← FINDSTABLEPAIR(B)
4         lift the colouring up from B to the automaton A
5         if there is a stable pair (s, t)
6            then B ← MERGE(B, (s, t))
7            else return A
8  return A

It may happen that FINDSTABLEPAIR returns an automaton B which has no stable pair (it is made of a cycle where the set of outgoing edges of any state is a bunch). Lifting up this colouring to the initial automaton A leads to a colouring of the initial automaton whose rank is equal to the period of its graph. This result can be stated as the following theorem, which extends the Road Colouring Theorem to the case of periodic graphs. Theorem 7.4.5 Any irreducible automaton A is equivalent to an automaton whose rank is the period of the graph of A. Proof Let us assume that A is equivalent to an automaton A′ which has a stable pair (s, t). Let B′ be the quotient of A′ by the congruence generated by (s, t). Let d be the period of the graph of A′ (equal to the period of the graph of A) and d′ the period of the graph of B′. Let us show that d = d′. It is clear that d′ divides d (which we denote d′ | d). Let ℓ be the length of a path from s to s′ in A′, where s′ is a state equivalent to s. Since (s, s′) is stable, it is synchronisable. Thus there is a word w such that s · w = s′ · w. Since the automaton A′ is irreducible, there is a path labelled by some word u from s · w to s. Hence d | (ℓ + |w| + |u|)


and d | (|w| + |u|), implying d | ℓ. Let s̄ be the class of s and z be the label of a cycle around s̄ in B′. Then there is a path in A′ labelled by z from s to x, where x is equivalent to s. Thus d | |z|. It follows that d | d′ and d = d′. Suppose that B′ has rank r. Let us show that A′ also has rank r. Let I be a minimal image of A′ and J be the set of classes of the states of I in B′. Two states of I cannot belong to the same class, since otherwise I would not be minimal. As a consequence I has the same cardinality as J. The set J is a minimal image of B′. Indeed, for any word v, the set J · v is the set of classes of I · v, which is a minimal image of A′. Hence |J · v| = |J|. As a consequence, A′ has rank r. Let us now assume that A has no equivalent automaton that has a stable pair. In this case, we know that A is made of one cycle where the set of edges going out of any state is a bunch. The rank of this automaton is equal to the period of its graph, which is the length of the a-cycle. Hence the procedure PERIODICFINDCOLOURING returns an automaton equivalent to the input automaton whose rank is equal to the period of its graph. The algorithm thus allows one to find a colouring of minimal rank of an irreducible automaton. It has the same time and space complexity as the Road Colouring algorithm for automata with aperiodic graph.

7.4.5 Applications In this section we show how the previous results can be applied to automata associated with finite prefix codes. (See also Section 6.3.1 for more on codes, and Berstel et al. (2007) for a general introduction.) A prefix code on the alphabet A is a set X of words on A such that no element of X is a prefix of another word of X. Elements of the code are called codewords. A prefix code is maximal if it is not contained in another prefix code on the same alphabet. As an equivalent definition, a prefix code X is maximal if any word u in A∗ has a prefix in X or is a prefix of a word of X. In this section, we consider deterministic automata where an initial state, denoted as i, is specified. For a deterministic automaton A with an initial state i, the set XA of labels of first return paths from i to i is a prefix code. If the automaton is complete, the prefix code is maximal. Conversely, for any finite prefix code X, there exists a deterministic automaton A such that X = XA. Moreover, the automaton A can be supposed to be irreducible. If X is a maximal prefix code, the automaton A is complete. The automaton A can be chosen as follows. The set of states is the set Q of prefixes of the words of X. The transitions are defined for p ∈ Q and a ∈ A by p · a = pa if pa is a prefix of a word of X, and by p · a = ε if pa ∈ X. This automaton, denoted AX, is a decoder of X. Let indeed α be a one-to-one map from a source alphabet B onto X. Let us add an output label to each edge of AX in the following way. The output label of (p, a, q) is ε if q ≠ ε and is equal to α⁻¹(pa) if q = ε.
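This construction is easy to carry out mechanically from the word list of a finite prefix code. The following Python sketch is ours, not the chapter's; as a simplification it takes the states to be the proper prefixes of the codewords (the empty word playing the role of ε), since the transition p · a = ε when pa ∈ X identifies each completed codeword with the root. The function name build_decoder and the dictionary representation of the transition function are illustrative assumptions.

    # A minimal sketch (not from the chapter): build the decoder automaton A_X
    # of a finite prefix code X, given as a set of codewords.
    def build_decoder(codewords):
        prefixes = {w[:i] for w in codewords for i in range(len(w))}   # proper prefixes, incl. ''
        alphabet = {a for w in codewords for a in w}
        delta = {}                       # transition function: (state, letter) -> state
        for p in prefixes:
            for a in alphabet:
                if p + a in codewords:
                    delta[(p, a)] = ''   # a codeword is completed: back to the root
                elif p + a in prefixes:
                    delta[(p, a)] = p + a
                # otherwise p+a is not a prefix of any codeword; this can only
                # happen when X is not maximal, and the automaton is not complete there
        return prefixes, delta

    # Example: the maximal prefix code X = {00, 01, 1} over the alphabet {0, 1}.
    states, delta = build_decoder({'00', '01', '1'})
    assert states == {'', '0'}
    assert delta[('', '1')] == '' and delta[('0', '1')] == ''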


With this definition, for any word x ∈ X∗, the output label of the path from i to i labelled x is the word α⁻¹(x). Let us show that, as a consequence of the fact that X is finite, the automaton A is additionally one-cluster with respect to any letter. Indeed, let a be a letter and let C be the set of states of the form i · a^j. For any state q, there exists a path from i to q labelled by some word u and a path from q to i labelled by some word v. We may suppose that i does not occur elsewhere on this path. Thus uv ∈ X. Since X is a finite maximal code, there is an integer j such that ua^j ∈ X. Then q · a^j = i belongs to C. This shows that A is one-cluster with respect to a. A maximal prefix code X is synchronised if there is a word x ∈ X∗ such that, for any word w ∈ A∗, one has wx ∈ X∗. Such a word x is called a synchronising word for X. Let X be a synchronised prefix code. Let A be an irreducible deterministic automaton with an initial state i such that XA = X. The automaton A is synchronised. Indeed, let x be a synchronising word for X. Let q be a state of A. Since A is irreducible, there is a path from i to q labelled by some u ∈ A∗. Since x is synchronising for X, we have ux ∈ X∗, and thus q · x = i. This shows that x is a synchronising word for A. Conversely, let A be an irreducible complete deterministic automaton. If A is a synchronised automaton, the prefix code XA is synchronised. Indeed, let x be a synchronising word for A. We may assume that q · x = i for any state q. Then x is a synchronising word for X. Proposition 7.4.6 Let X be a synchronised maximal prefix code with n codewords on an alphabet of size k. The decoder of X has a synchronising word of length at most O((n/k)²). Proof The automaton AX is one-cluster. The number m of its states is the number of prefixes of the words of X. Thus m = (n − 1)/(k − 1) since a complete k-ary tree with n leaves has (n − 1)/(k − 1) internal nodes. By Proposition 7.3.14, the automaton has a synchronising word of length O(m²), whence O((n/k)²). The following example deals with a Huffman code (see, for instance, Béal et al. (2010) for a definition). Example 7.4.7 Let us consider the following Huffman code X = (00 + 01 + 1)(0 + 10 + 11) corresponding to a source alphabet B = {a, b, c, d, e, f, g, h, i} with a probability distribution (1/16, 1/16, 1/8, 1/16, 1/16, 1/8, 1/8, 1/8, 1/4). The Huffman tree is pictured in the left part of Figure 7.15 while the decoder automaton AX is given in its right part. The word 010 is a synchronising word of AX. When the lengths of the codewords in X are not relatively prime, the automaton AX is never synchronised (see Figure 7.16). When the lengths of the codewords in X are relatively prime, the code X is not necessarily synchronised. However, there is always another Huffman code Y corresponding to the same length distribution which is synchronised, by a result of Schützenberger (1967). One can even choose Y such that the underlying graphs of AX and AY are the same.
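The defining property of a synchronising word of the decoder, namely that reading it from every state leads back to the initial state, is straightforward to verify by computer. The small sketch below is ours, not the chapter's; it uses the same dictionary representation of the transition function as the previous sketch and a smaller prefix code than the one of Example 7.4.7, whose decoder is not reproduced here.

    # A minimal sketch (not from the chapter): test whether a word w is a
    # synchronising word of a complete deterministic automaton, i.e. whether
    # reading w from every state focuses all states onto a single state.
    def is_synchronising(states, delta, w):
        images = set(states)
        for a in w:
            images = {delta[(q, a)] for q in images}
        return len(images) == 1

    # The decoder of the maximal prefix code X = {00, 01, 1}: states are the
    # proper prefixes '' (the root) and '0'; completed codewords return to ''.
    states = {'', '0'}
    delta = {('', '0'): '0', ('', '1'): '', ('0', '0'): '', ('0', '1'): ''}
    assert is_synchronising(states, delta, '1')       # reading 1 sends both states to ''
    assert not is_synchronising(states, delta, '0')   # reading 0 does not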


Figure 7.15 A synchronised Huffman code X on the left and its decoder AX on the right. Plain edges are 1-edges while dashed edges are 0-edges.

This is a particular case of the Road Colouring Theorem. The particular case corresponding to finite prefix codes was proved before in Perrin and Schützenberger (1992). Proposition 7.3.14 guarantees that the Huffman decoder has a synchronising word of length at most quadratic in the number of nodes of the Huffman tree.


Figure 7.16 A non-synchronised Huffman code X on the left and its decoder on the right. The automaton on the right is not synchronised. Indeed, for any word w, the set of states reachable by w is either {1, 3}, {2, 4}, {1, 5}, or {1, 6}.

The following problem, called the Hybrid Černý–Road Colouring Problem, was raised by Volkov (2008): what is the minimum length of a synchronising word for a synchronised colouring of an automaton with aperiodic graph? We conjecture that a synchronised colouring such that the automaton is moreover one-cluster can be obtained. This would guarantee a synchronising word of length at most quadratic.

8 Cellular automata, tilings and (un)computability Jarkko Kari

Cellular automata are discrete dynamical systems based on local, synchronous and parallel updates of symbols written on an infinite array of cells. Such systems were conceived in the early 1950s by John von Neumann and Stanislaw Ulam in the context of machine self-reproduction, while one-dimensional variants were studied independently in symbolic dynamics as block maps between sequences of symbols. In the early 1970s, John Conway introduced Game-Of-Life, a particularly attractive two-dimensional cellular automaton that became widely known, in particular once popularised by Martin Gardner in Scientific American. In physics and other natural sciences, cellular automata are used as models for various phenomena. Cellular automata are the simplest imaginable devices that operate under the nature-inspired constraints of massive parallelism, locality of interactions and uniformity in time and space. They can also exhibit the physically relevant properties of time reversibility and conservation laws if the local update rule is chosen appropriately. Today cellular automata are studied from a number of perspectives in physics, mathematics and computer science. The simple and elegant concept makes them also objects of study in their own right. An advanced mathematical theory has been developed that uses tools of computability theory, discrete dynamical systems and ergodic theory. Closely related notions in symbolic dynamics are subshifts (of finite type). These are sets of infinite arrays of symbols defined by forbidding the appearance anywhere in the array of some (finitely many) local patterns. In comparison to cellular automata, the dynamic local update function has been replaced by a static local matching relation. Two-dimensional subshifts of finite type are conveniently represented as tiling spaces by Wang tiles. There are significant differences in the theories of one- and two-dimensional subshifts. One can implant computations in tilings, which leads to the appearance of undecidability in decision problems that are trivially decidable in the one-dimensional setting. Most notably, the undecidability of the tiling problem — asking whether a given two-dimensional subshift of finite type is non-empty


— implies deep differences in the theories of one- and two-dimensional cellular automata. In this chapter we present classical results about cellular automata, discuss algorithmic questions concerning tiling spaces and relate these questions to decision problems about cellular automata, observing some fundamental differences between the one- and two-dimensional cases. The material is mostly collected from the lecture notes by the author at the University of Turku, and from the writings (Kari, 2005, 2009, 2012). Note that this chapter is also closely related to the next chapter, namely Chapter 9, which focuses on shifts of finite type.

8.1 Cellular automata We begin this section on cellular automata with definitions and basic concepts in Section 8.1.1, followed by some notable classical results from the early years in Section 8.1.2. In Section 8.1.3 we analyse injectivity and surjectivity properties and discuss the de Bruijn representation of one-dimensional cellular automata.

8.1.1 Preliminaries We only consider here the basic model: deterministic, synchronous cellular automata, where the underlying grid is infinite and rectilinear. The cells are hence the squares of an infinite d-dimensional checker-board, addressed by Z^d. The one-, two- and three-dimensional cases are most common and, as we see below, one-dimensional cellular automata behave in many respects quite differently from the higher-dimensional ones. The states of the automaton come from a finite state set A. At any given time, the configuration of the automaton is a mapping c : Z^d → A that specifies the states of all cells. We denote by A^{Z^d} the set of all d-dimensional configurations over state set A. We first start with the definition of a pattern. A pattern is an assignment p : D → A of states to the elements of some finite domain D ⊆ Z^d. For any configuration c ∈ A^{Z^d} and position i ∈ Z^d, we denote by ci,D the pattern with domain D extracted from position i in c. More precisely, ci,D ∈ A^D is such that, for all d ∈ D, we have ci,D(d) = c(i + d). We then say that configuration c contains in position i ∈ Z^d the pattern ci,D. We abbreviate c0,D as cD; it is the restriction of c to domain D. For any state a ∈ A, the constant pattern aD has a in each position. For instance, if c ∈ {0, 1}^{Z^2} is the configuration that takes, for every non-zero index, the value 0 and such that c(0, 0) = 1, and if D = {(0, 0), (1, 0), (0, 1), (1, 1)}, then c0,D = cD is the 2 × 2 pattern with value 1 in position (0, 0) and value 0 in the three other positions, while the pattern extracted at position (1, 0), namely c(1,0),D, is the all-zero 2 × 2 pattern.
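To make the notation ci,D concrete, the following small Python sketch (ours, not from the chapter) represents a two-dimensional configuration as a dictionary from positions to states, with a default value for the unspecified cells, and extracts the pattern of a finite domain seen at a given position.

    # A minimal sketch (not from the chapter): extract the pattern c_{i,D}
    # from a two-dimensional configuration over the states {0, 1}.
    def extract(config, i, D):
        """Return the pattern with domain D seen at position i in config."""
        (x, y) = i
        return {(dx, dy): config.get((x + dx, y + dy), 0) for (dx, dy) in D}

    # The configuration of the running example: value 1 at the origin, 0 elsewhere.
    c = {(0, 0): 1}                      # unspecified cells default to 0
    D = {(0, 0), (1, 0), (0, 1), (1, 1)}
    assert extract(c, (0, 0), D)[(0, 0)] == 1
    assert all(v == 0 for v in extract(c, (1, 0), D).values())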


A cellular automaton (CA) is a transformation F : A^{Z^d} → A^{Z^d} that changes configurations by synchronous updates of states at all cells. The next state of each cell depends on the current states of the neighbouring cells according to a local update rule. All cells use the same rule, and the rule is applied to all cells at the same time. The neighbouring cells may be the nearest cells surrounding the cell, but more general neighbourhoods can be specified by giving the relative offsets of the neighbours. Let N ⊆ Z^d be a finite set, the neighbourhood. Then the neighbours of the cell in a position i ∈ Z^d are the |N| cells in the positions i + N. A local rule is a function f : A^N → A that specifies the next state of a cell based on the pattern in its neighbours: configuration c becomes in one time step the configuration e = F(c) where, for all i ∈ Z^d,

e(i) = f(ci,N) .     (8.1)

244

J. Kari

(a)

(b)

(c)

Figure 8.1 Examples of space-time diagrams of some elementary cellular automata: (a) rule 110, (b) xor CA and (c) traffic CA. Black colour represents state 1, white is for state 0. In (a) and (c) the cellular automata were started from uniformly random initial configurations, while in (b) there was initially a single black cell.

time

Figure 8.2 Dependencies in one-dimensional, radius- 12 cellular automata.

Periodicity d d The translation by vector t ∈ Zd is the function τt : AZ → AZ that (−t)-shifts the configuration: for all i ∈ Zd one has τt (c)(i) = c(i + t). Notice that τt is a cellular automaton with singleton neighbourhood N = {t}. Notice also that all cellular automata F commute with all translations τt so that always

τt ◦ F = F ◦ τt .

(8.2)

This is a direct consequence of the fact that all cells use the same local update rule. In the one-dimensional setting, the translation τ1 by one position to the left is called the (left) shift. Configuration c is called spatially periodic, or simply periodic, if there exists a non-zero vector t such that τt (c) = c. More precisely, configuration c is then t-periodic. If a d-dimensional configuration is t-periodic for d linearly independent vectors t then we say that the configuration is fully periodic. It is easy to see that if a configuration c is fully periodic then there is a number p such that c is (0, . . . 0, p, 0 . . . , 0)-periodic, where p can be in any of the d positions. There are countably many fully periodic configurations over any alphabet since the states in positions {0, 1, . . ., p − 1}d determine c uniquely. By (8.2) cellular automata preserve

Cellular automata, tilings and (un)computability

245

periods: if c is t-periodic then

τt (F(c)) = F(τt (c)) = F(c) , so that also F(c) is t-periodic. In particular, fully periodic configurations remain fully periodic. The restriction of F on fully periodic configurations, denoted by FP , is analogous to using ‘periodic boundary conditions’ in CA simulations. Configuration c is temporally periodic for F if F n (c) = c for some n ≥ 1, and it is a fixed point if F(c) = c. Configuration c is eventually temporally periodic if F m (c) is temporally periodic for some m ≥ 0. Note that every fully periodic configuration is eventually temporally periodic since there are only finitely many fully periodic configurations for any fixed periods. Cylinders and subshifts d In symbolic dynamics, the set AZ is endowed with the standard compact product topology in which the convergence of a sequence c1 , c2 , . . . of configurations to d c ∈ AZ means that for all positions i ∈ Zd we have that ct (i) = c(i) for all sufficiently large t. The topology is easily seen to be compact, so that any sequence of configurations has a converging subsequence. Also see Chapter 1 for the case d = 1 and Section 9.2. Example 8.1.2 Let w1 , w2 , . . . be an enumeration of all finite words over alphabet A = {a, b}, and let ci = . . . aa.wi aa . . . be the one-dimensional configuration with word wi written starting in position 0, and symbols a everywhere else. Here and elsewhere, a dot is used to indicate the position 0 in one-dimensional configurations. The sequence c1 , c2 , . . . does not converge since, for example, the content of cell 0 does not stabilise but both symbols a and b appear at the cell arbitrarily late in the sequence. However, for any α ∈ AN there is a subsequence ci1 , ci2 , . . . that converges to . . . aa.α . Equivalently, the topology is generated by the cylinder sets [p] = {c ∈ AZ | cD = p} d

of all configurations that have fixed pattern p ∈ AD in finite domain D. Each cylinder is clopen (i.e., closed and open), and the collection of all cylinders forms a basis of the topology. The topology is also metric, defined by the (ultra)metric d(c1 , c2 ) = 2− min{#i# | c1 (i) =c2 (i)} , which is inversely related to the distance #i# to the origin from the closest cell i where two configurations differ. Let P be a set of patterns over A. The subshift obtained by forbidding P is the set of all configurations that do not contain any pattern of P: X(P) = {c ∈ AZ | ∀p ∈ P : c does not contain p in any position } . d

Such a set is translation invariant and closed in the topology and, conversely, any

246

J. Kari

translation invariant closed set can be defined in this way (see Proposition 9.2.4 for more details). If the set P of forbidden patterns is finite, the corresponding subshift X(P) is of finite type (SFT). Two-dimensional subshifts of finite type are also called tiling spaces since they are defined by a local matching condition. The full configud ration space AZ is the full shift over A. Example 8.1.3 The one-dimensional even shift over the alphabet {0, 1} is defined by forbidding words 102n+11 for all n. The shift contains configurations like . . . 100111100001001111000000001100 . . . where each finite block of consecutive 0s has even length. The even shift is not of finite type since, for arbitrarily large n, the configurations . . . 1102n11 . . . and . . . 1102n+111 . . . contain exactly the same patterns of length ≤ 2n while the first configuration is in the subshift but the second one is not. Hence the even shift cannot be defined by forbidding some patterns of length at most 2n, for any n, so an infinite number of forbidden patterns is needed. On the other hand, consider the following two-dimensional subshift over the fourletter alphabet ,  . , , , Configurations can be viewed as grids of horizontal and vertical arrows – an example is shown in Figure 8.3. Let the forbidden patterns be closed loops that contain different numbers of clockwise and counterclockwise oriented arrows. For example, the loop highlighted in Figure 8.3 is forbidden since, among its 14 arrows, six are in the clockwise direction and eight are in the counterclockwise direction.

Figure 8.3 Part of a configuration that is not in the two-dimensional subshift of Example 8.1.3. The highlighted loop is a forbidden pattern. A pole (forbidden loop of length four) is found inside the longer loop.

Cellular automata, tilings and (un)computability

247

Clearly, valid configurations are precisely those where one can assign integer heights on the squares in such a way that neighbours differ by one in height and the arrow points to the lower height. At first sight this subshift may seem not to be of finite type since we forbid infinitely many patterns, but a closer look reveals that the same subshift is obtained if we simply forbid the incorrect cycles of length four. Such a pole is namely found inside every invalid loop. The Curtis–Hedlund–Lyndon theorem d Cellular automata are continuous transformations on the compact metric space AZ . Indeed, let F be a CA with neighbourhood N. If c1 , c2 , . . . is any converging sequence of configurations, with limit c, then F(c1 ), F(c2 ), . . . also converges since for every cell i ∈ Zd we have that ct , for all sufficiently large t, coincides with c in domain i + N. It follows that F(ct ), for all sufficiently large t, coincides with F(c) in cell i, proving that limt→∞ F(ct ) = F(c). As we have seen, cellular automata are translation commuting continuous functions. The Curtis–Hedlund–Lyndon theorem from 1969 states the converse. Theorem 8.1.4 (Hedlund (1969)) Function F : AZ → AZ is a cellular automaton if, and only if, it is continuous and commutes with translations. d

d

Intuitively, continuity guarantees that there is a local update rule at every cell, and translation invariance states that all cells use the same rule. See also Theorem 9.2.7 for a reformulation of Theorem 8.1.4 in terms of factor maps and sliding block codes. Fully periodic configurations provide a countable set where cellular automata are typically simulated, that is, periodic boundary conditions are used. Another possibility is to consider finite configurations where all but a finite number of cells are in some null state. The null state ⊥ is expected to be stable, that is, to satisfy f (⊥N ) =⊥ , so that the ⊥-homogeneous configuration ⊥Z is a fixed point of the CA. A configuration c is ⊥-finite if the support d

supp⊥ (c) = {i ∈ Zd | c(i) =⊥} is finite. By stability then, F(c) is also ⊥-finite. We denote by F⊥ the restriction of F on the countable set of ⊥-finite configurations. Notice that the sets of fully periodic d configurations and ⊥-finite configurations are both dense in AZ . However, as seen below, there are significant differences between F and its restrictions FP and F⊥ . Reversible cellular automata A cellular automaton F is called injective if F is one-to-one, surjective if F is onto, and bijective if it is both injective and surjective. We call F reversible if there is a cellular automaton F −1 , the inverse of F, such that F ◦ F −1 = F −1 ◦ F = id, the identity.

248

J. Kari

From the Curtis–Hedlund–Lyndon theorem we immediately see that all bijective CA are automatically also reversible. Corollary 8.1.5 Cellular automaton F is bijective if, and only if, it is reversible. Proof Reversibility clearly requires bijectivity as otherwise no inverse function would exist. For the non-trivial direction, suppose that F is a bijective CA function. Then F has an inverse function F −1 . To prove that F −1 is a cellular automaton we show that it is continuous and commutes with translations. Then the result follows d from Theorem 8.1.4. The fact that AZ is a compact metric space implies that the inverse of a continuous bijection F is continuous. Translation invariance is easy: if τ is any translation then also its inverse τ −1 is a translation so that F ◦ τ −1 = τ −1 ◦ F. Inverting both sides gives τ ◦ F −1 = F −1 ◦ τ , as required. The point of the theorem is that, due to compactness of AZ , one needs only to know states in a bounded neighbourhood around each cell to be able to determine the state of the cell one time step earlier. Note, however, that the neighbourhood needed in the inverse direction may be much larger than the neighbourhood N of F. It follows from the undecidability results in Section 8.3.1 that in two- and higherdimensional cases there is no computable upper bound on the extent of the inverse neighbourhood. d

Example 8.1.6 The three elementary cellular automata in Example 8.1.1 are noninjective, and hence they are not reversible. For a reversible example, consider the Ising model named Q2R (see Vichniac, 1984). In this two-dimensional system every cell stores a spin variable with two possible values: ↑ (‘spin up’) or ↓ (‘spin down’). Spins are updated in two phases: first the spins of all even cells (positions (x, y) with x + y even) and then the spins of the odd cells (x + y odd). Strictly speaking, the partitioning of the space and time according to parities is not consistent with our formal definition of cellular automata, but if we consider 2 × 2 blocks, and single time steps that involve both phases, we obtain a cellular automaton with 16 states that is compatible with our definition. The rule to update a spin is simple: swap the spin if, and only if, the four immediate neighbours contain two ↑s and two ↓s. Owing to the checker-board partitioning of the space no neighbours apply the rule simultaneously. This guarantees reversibility: the same rule applied on the same partitioning reproduces the original spin values. Only the alternation of phases make the system evolve non-trivially. Figure 8.4 shows the results after 1000 time steps from two different initial configurations. In both cases the initial configuration was chosen randomly according to a Bernoulli distribution, that is, by independent (possibly biased) coin tosses at all cells. In (a) the coin was fair so the initial distribution was drawn uniformly, while in (b) the coin was biased with probabilities 0.93 and 0.07 for spins up and down, respectively. In (a) the configuration remains uniformly random after any number of iterations. This is due to invariance of the uniform probability distribution under

Cellular automata, tilings and (un)computability

(a)

249

(b)

Figure 8.4 Configurations obtained after 1000 steps of Q2R cellular automaton from initial configurations drawn randomly according to (a) the uniform and (b) a non-uniform Bernoulli distribution.

every reversible (and, more generally, every surjective) cellular automaton. The balance property (Theorem 8.1.9) states this invariance fact. In (b) the configuration after 1000 steps is clearly no longer drawn according to a Bernoulli distribution since the spins have clustered and the neighbouring pixels are no longer independent of each other. The picture shows an evolution of the non-uniform Bernoulli distribution into a non-Bernoulli one by Q2R. The Q2R rule also illustrates nicely the important concept of conservation laws in some cellular automata. If we define the ‘energy’ of a configuration to be the number of neighbouring cells with opposite spins, then this energy remains invariant under Q2R. Indeed, if a spin is flipped then this spin had to be opposite to two of its neighbours before the flip but also after the flip. The energy of a configuration is visible as the total length of the boundary between regions of up and down spins, so the boundary length remains invariant under iterations of the Q2R rule. Such conservation laws are important in applications of cellular automata in physical modeling.

8.1.2 Classical properties d AZ

A configuration c ∈ that has no pre-image under F is a Garden of Eden configuration. A pattern p ∈ AD is an orphan if it has no pre-image, meaning that / Surjective cellular automata have no orphans or Garden of Eden conF −1 ([p]) = 0. figurations. Non-surjective ones have both. Lemma 8.1.7 Configuration c is a Garden of Eden if, and only if, it contains a finite pattern that is an orphan. Proof Any configuration that contains an orphan is by definition a Garden of Eden. d To prove the other direction, notice that F(AZ ) is a closed set, so every Garden d of Eden configuration c has an open neighbourhood in the complement of F(AZ ).

250

J. Kari

Cylinders form a basis of the topology, so there is a pattern p such that c ∈ [p] and d / Pattern p is an orphan. [p] ∩ F(AZ ) = 0. Balance property of surjective CA The following example illustrates how an imbalance in the local update rule of a cellular automaton implies that the cellular automaton is not surjective. Example 8.1.8 Consider the elementary rule 110 from Example 8.1.1. Among the eight possible neighbourhood patterns there are three that are mapped to state 0 and five that are mapped to state 1. 111

110

101

100

011

010

001

000

















0

1

1

0

1

1

1

0

This imbalance implies that rule 110 has a Garden of Eden. We can argue as follows. Let k be an arbitrary positive integer, and consider a configuration c in which c(3) = c(6) = · · · = c(3k) = 0 . See Figure 8.5 for an illustration. There are 22(k−1) = 4k−1 possible choices for the missing states between 0s in c (shown as ‘*’ in Figure 8.5). e c

x x x x x x x x x

0

**

0

**

0

*

x x x x x x

*

0

**

0

Figure 8.5 Illustration of the configuration c and its pre-image e in Example 8.1.8.

In a pre-image e of c the three state segments e(3i − 1), e(3i), e(3i + 1) are mapped into state 0 by the local rule f , for every i = 1, 2, . . . , k. Since | f −1 (0)| = 3 there are exactly 3k choices of these segments. If k is sufficiently large then 3k < 4k−1 . This means that that some choice of c does not have a corresponding pre-image e. Therefore the CA is not surjective. Alternatively, one could show the non-surjectivity of rule 110 by directly verifying that pattern 01010 is an orphan. The method of the example can be generalised to other rules, higher dimensions and imbalances among bigger patterns. Consider a cellular automaton F with local rule f : AN → A over the neighbourhood N ⊆ Zd . Consider any finite domains D, E ⊆ Zd such that D + N ⊆ E. As the neighbours of all cells in D belong to E, the local rule determines a function AE → AD between patterns by formula (8.1) applied to all i ∈ D. We denote this function by F (E→D) , or simply by F when the domains E and D are clear from the context.

251

Cellular automata, tilings and (un)computability

The following balance theorem generalises Example 8.1.8. The result was proved in the one-dimensional case already in (Hedlund, 1969), but it holds also in the higher-dimensional spaces (Maruoka and Kimura, 1976). The theorem states that in surjective CA all finite patterns of the same domain D must have the same number of pre-image patterns in domain D + N. Any imbalance immediately implies nonsurjectivity. As a special case we see that the local rule of a surjective CA must be balanced: each state appears in the rule table equally many times. Theorem 8.1.9 (Hedlund (1969); Maruoka and Kimura (1976)) Let F : AZ → AZ be a surjective CA with neighbourhood N, and let E, D ⊆ Zd be finite domains such that D + N ⊆ E. Then, for every pattern p ∈ AD , the number of patterns q ∈ AE such that d

d

F (E→D) (q) = p is s|E|−|D| where s = |A| is the number of states. Proof sketch The theorem can be proved analogously to Example 8.1.8. Suppose there is a pattern with fewer than s|E|−|D| pre-images. By adding cells to E and D, we can assume without loss of generality that E is a d-dimensional hypercube of size nd and D is a co-centric hypercube of size (n − 2r)d , for some n and r. Considering an arrangement of kd such hypercubes as shown in Figure 8.6, one can deduce an orphan analogously to Example 8.1.8, using the following technical lemma. n

n


Lemma 8.1.10 For all d, n, s, r ∈ Z≥1 the inequality  d kd d sn − 1 < s(kn−2r)

252

J. Kari

holds for all sufficiently large k. Theorem 8.1.9 actually states the fact that was observed in Example 8.1.6 for Q2R cellular automaton: the uniform Bernoulli measure on the configuration space is invariant under applications of surjective CA. If all patterns with domain E appear with equal probabilities in c, then all patterns with domain D appear with equal probabilities in F(c). Example 8.1.11 The balance condition of the local rule table is necessary but not sufficient for surjectivity. Consider, for example, the elementary CA number 232. It is the majority CA: f (a, b, c) = 1 if, and only if, a + b + c ≥ 2. Its rule table is balanced because 000, 001, 010, 100 give 0 and 111, 110, 101, 011 give 1. However, the majority CA is not balanced on longer patterns and hence it is not surjective: any word of length 4 that contains at most one state 1 is mapped to 00, so 00 has at least five pre-images of length 4. Balancedness would require this number of pre-images to be four. Pattern 01001 is a particular example of an orphan. The Garden of Eden theorem Two configurations c and d are asymptotic if they differ only in finitely many cells, that is, if diff (c, d) = {i ∈ Zd | c(i) = d(i)} is finite. Cellular automaton F is called preinjective if F(c) = F(d) for any asymptotic c, d such that c = d. Preinjectivity is clearly equivalent to injectivity of F⊥ , the restriction of F on ⊥-finite configurations, for all states ⊥. Among the earliest discovered properties of cellular automata is the Garden of Eden theorem by Moore and Myhill from 1962 and 1963, respectively, which proves that preinjectivity is equivalent to surjectivity. Example 8.1.12 As above, we start by illustrating the proof of the Garden of Eden theorem using elementary rule 110. Let us first show how non-surjectivity of rule 110 implies that it is not preinjective. We know from Example 8.1.8 that rule 110 is not surjective. In fact, finite pattern 01010 is an orphan. Let us demonstrate that there must exist different 0-finite configurations c and e such that G(c) = G(e). Let k ∈ Z≥1 be arbitrary. Consider 0-finite configurations c whose supports are included in a fixed segment of length 5k − 2, see Figure 8.7. There are 25k−2 = 32k /4 such configurations. The 0-support of F(c) is included in a segment of length 5k. Partition this segment in k subsegments of length 5. We know that pattern 01010 cannot appear anywhere in F(c), so there are at most 25 − 1 = 31 different patterns that can appear in the length 5 subsegments. Hence there are at most 31k possible configurations F(c). For all sufficiently large values of k we have 32k /4 > 31k

Cellular automata, tilings and (un)computability

253

so there must be two 0-finite configurations with the same image. c F(c)

0 0 0 x x x 0 0


Theorem 8.1.13 (Myhill (1963)) If F is preinjective then F is surjective. Proof sketch The theorem can be proved along the lines of Example 8.1.12. If F is not surjective there is an orphan whose domain is a d-dimensional hypercube of size nd , for some n. Arranging kd copies of the hypercube as in Figure 8.6, we come d d up with a domain in which at most (sn − 1)k different non-orphans exist. Using Lemma 8.1.10 one can now easily conclude the existence of two different asymptotic configurations with the same image. Injectivity trivially implies preinjectivity, and therefore we have the following. Corollary 8.1.14 Every injective CA is also surjective. Injectivity is hence equivalent to bijectivity and reversibility. Example 8.1.15 Injectivity implies surjectivity but the converse is not true. A simple counterexample is the xor cellular automaton (elementary CA 102 of Example 8.1.1). Every configuration has exactly two pre-images, related by complementing each state. The local rule of xor is left permutive which means that changing the value of the left neighbour also changes the next state. The rule is also right permutive. To find a pre-image to a configuration one can choose one bit of the pre-image freely and, using the permutivity properties, complete it in a unique way cell-by-cell to the left and to the right. Let us turn next to the other direction of the Garden of Eden theorem. Again, we start with a one-dimensional example that indicates the proof idea. Example 8.1.16 Consider again rule 110. The asymptotic configurations c1 = . . . 000011010000 . . . c2 = . . . 000010110000 . . . have the same image. Let us demonstrate how this implies that rule 110 is not surjective. (Of course we already know this fact from the prior examples.) Extract patterns p1 = 011010 and p2 = 010110 of length six from c1 and c2 , respectively. Both patterns are mapped into the same pattern 1111 of length four. Moreover, p1 and p2 have a boundary of width 2 on both sides where they are identical with each other. Since rule 110 uses radius-1 neighbourhood, one can replace in any configuration c pattern p1 by p2 or vice versa without affecting F(c).

254

J. Kari

Let k ∈ Z≥1 , and consider a segment of 6k cells. It consists of k segments of length 6. Any pattern of length 6k − 2 that has a pre-image of length 6k also has a pre-image where none of the k subsegments of length 6 contains pattern p2 . Namely, all such p2 can be replaced by p1 . This means that at most (26 − 1)k = 63k patterns of length 6k − 2 can have pre-images. On the other hand there are 26k−2 = 64k /4 such patterns, and for large values of k 64k /4 > 63k , so some patterns do not have a pre-image. Theorem 8.1.17 (Moore (1962)) If F is surjective then F is preinjective. Proof sketch The proof of the theorem is as in Example 8.1.16. If d-dimensional CA F is not preinjective there are two different patterns p1 and p2 with the same image such that the domain of the patterns is the same hypercube of size nd , and the patterns are identical on the boundary of width r of the hypercube, where r is chosen sufficiently big so that no cell can have neighbours on both sides of the boundary. In any configuration then, copies of pattern p2 can be replaced by p1 without affecting the image under F. Arranging kd copies of the hypercubes as in Figure 8.6, we come d d up with a domain in which at most (sn − 1)k patterns have different images. Using again Lemma 8.1.10 one sees that this number is not large enough to provide all possible interior patterns, and an orphan must exist. Example 8.1.18 The Garden of Eden theorem enables simple proofs of non-surjectivity in some cases where finding an explicit orphan is not as easy. The majority CA (ECA 232 of Example 8.1.11) has asymptotic configurations . . . 0000000 . . . . . . 0001000 . . . with identical image, so by Theorem 8.1.17 it is not surjective. Consider next the traffic CA of Example 8.1.1 (ECA 184). The rule can be interpreted as a simulation of cars on a road. State 1 represents a car that tries to move to the right, but it makes the move if, and only if, there is an empty slot (state 0) in the right neighbour. Since the following asymptotic configurations . . . 0001100 . . . . . . 0010100 . . . have the same image, ECA 184 is not surjective. Pattern 1100 is the shortest orphan. A remarkable two-dimensional example is the Game-Of-Life, introduced by J. Conway in the 1970s. This CA has two states (‘alive’ and ‘dead’) and the local rule counts the number of alive cells among the eight surrounding neighbours and determines the next state as follows. • If the cell itself is alive then it remains alive if, and only if, two or three of its eight neighbours are alive. (The cell is said to die of loneliness or overcrowding if the number of alive neighbours is less than two or more than three, respectively.)

Cellular automata, tilings and (un)computability

255

• If the cell itself is dead then it becomes alive if, and only if, it has exactly three alive neighbours. By Theorem 8.1.17 there are orphan patterns because the asymptotic configurations that contain at most one alive cell all have the same image. Yet it is not obvious to construct an orphan. The smallest known orphan for Game-Of-Life, in terms of the size of its domain, is the pattern shown in Figure 8.8.

Figure 8.8 Size 92 orphan for Game-Of-Life, discovered by M. Heule, C. Hartman, K. Kwekkeboom and A. Noels in 2011. Light cells are alive, dark ones dead.

8.1.3 Injectivity and surjectivity properties The Garden of Eden theorem links surjectivity of a cellular automaton F to its injectivity on finite configurations. This is because preinjectivity of F is equivalent to the injectivity of F⊥ . Let us next consider other combinations of injectivity and surjectivity of F, FP and F⊥ . On periodic configurations the situation is slightly different in the one-dimensional case than in the higher-dimensional cases. But the following implications are valid regardless of the dimension. • F is surjective if, and only if, F⊥ is injective. (Garden of Eden theorem.) • If F is injective then also F⊥ and FP are injective. (Trivial.) • If F⊥ or FP is surjective then also F is surjective. (Owing to denseness of ⊥-finite d and fully periodic configurations in AZ .) • If FP is injective then FP is surjective. (Because the number of fully periodic configurations with given fixed periods is finite and the periods are preserved by F.) • If F is injective then F⊥ is surjective. (Owing to reversibility and the stability of state ⊥.) Observe that we have two different implication chains to prove that all injective cellular automata are surjective: F injective =⇒ F preinjective =⇒ F surjective, F injective =⇒ FP injective =⇒ FP surjective =⇒ F surjective. It is good to be aware of both chains as they provide different generalisations to cellular automata whose underlying grid is not Zd but some other finitely generated

256

J. Kari

group. A group is called surjunctive if every injective cellular automaton on the group is also surjective. Our first implication chain that is based on the Garden of Eden theorem can be generalised to all amenable groups, proving their surjunctivity. The second chain through periodic configurations can be generalised to residually finite groups. It is not known whether, in fact, all groups are surjunctive. The next two examples provide two non-implications. Example 8.1.19 We saw in Example 8.1.15 that the xor CA is surjective. However, it is not surjective on ⊥-finite configurations for ⊥= 0. The only two pre-images of configuration . . . 001000 . . . are the non-finite configurations . . . 000111 . . . and . . . 111000 . . .. Example 8.1.20 Controlled xor is a one-dimensional cellular automaton with four states A0, A1, I0 and I1. The first symbol of each state is a control symbol that does not change. If the control symbol of a cell is I then the cell is inactive and does not change its state. If the control symbol is A then the cell is active and applies the xor CA on its bit component on its bit component with the bit component of its right neighbour. State I0 is the quiescent state ⊥. Controlled xor is easily seen to be surjective on finite configurations. It is not injective on unrestricted configurations as two configurations, all of whose cells are active, have the same image if their bits are complements of each other. The one-dimensional case The behaviour of cellular automata on periodic configurations depends on the dimension d of the space. Some aspects of two- and higher-dimensional cellular automata are lost under periodic boundary conditions. This is due to the existence of aperiodic tile sets, discussed below in Section 8.2. The one-dimensional setup is simpler. The balance theorem of surjective CA implies the following result that is valid in the one-dimensional case only. Theorem 8.1.21 (Hedlund (1969)) For every one-dimensional surjective CA there is a constant m such that every configuration has at most m pre-images. Proof Let F be a one-dimensional surjective CA function, defined by radius-r neighbourhood N. Let s = |A| be the number of states. In the following we prove that every configuration has at most s2r different pre-images. Suppose the contrary: there is a configuration c with s2r + 1 different pre-images e1 , e2 , . . . , es2r +1 . For some sufficiently large number k > r, each pair ei and e j of pre-images contain a difference inside the fixed interval E = {−k, −k + 1, . . ., k − 1, k}.

Cellular automata, tilings and (un)computability

257

But this contradicts Theorem 8.1.9 since we now have the pattern cD with domain D = {−k + r, −k + r + 1, . . ., k − r} that has at least s2r + 1 > s|E|−|D| different pre-images in domain E. Corollary 8.1.22 (Hedlund (1969)) Let F be a one-dimensional surjective CA. The pre-images of spatially periodic configurations are all spatially periodic. In particular, FP is surjective. Proof Suppose F(c) is periodic, so that τt (F(c)) = F(c) for some t = 0. Then for every i ∈ Z F(τti (c)) = τti (F(c)) = F(c) , so τti (c) is a pre-image of F(c). By Theorem 8.1.21 configuration F(c) has a finite number of pre-images so

τti1 (c) = τti2 (c) for some i1 < i2 . But then τti2 −i1 (c) = c, so that c is periodic with period (i2 − i1 )t. It follows that all pre-images of periodic configurations are periodic. A convenient representation of one-dimensional cellular automata uses edge labelled de Bruijn graphs (see Sutner, 1991). Let F be defined by a contiguous neighbourhood of n + 1 consecutive cells and a local rule f : An+1 → A. Since we are interested in surjectivity and injectivity properties, and since translations τ are bijections that preserve spatial periods, we may consider τ ◦ F instead of F. By making a suitable translation, we may hence assume that F has neighbourhood N = {0, 1, . . ., n}. The de Bruijn graph of order n and over alphabet A is the directed graph G = (V, E) with the vertex set V = An and the edge set E = An+1 . The edge (a0 , . . . an ) leads from vertex (a0 , . . . , an−1 ) to vertex (a1 , . . . , an ). Let us label each edge by two letters of A as follows: edge e = (a0 , . . . an ) has input label i(e) = a0 and output label o(e) = f (a0 , . . . an ). Now it should be clear that the input labels provide a oneto-one correspondence between AZ and the bi-infinite paths through G. Path p = . . . e−1 e0 e1 e2 . . . represents the configuration i(p) = . . . i(e−1 )i(e0 )i(e1 )i(e2 ) . . . Path p is obtained from i(p) by sliding a widow of size n + 1 across i(p) and recording the words in the window. This is the higher block presentation of AZ in the terminology of symbolic dynamics (see Lind and Marcus, 1995). Reading the output labels of path p yields o(p) = . . . o(e−1 )o(e0 )o(e1 )o(e2 ) . . . which is the image of i(p) under the CA, that is, o(p) = F(i(p)). Example 8.1.23 The de Bruijn graph for rule 110 is shown in Figure 8.9. The path p corresponding to configuration c = . . . 00.100 . . . follows the vertices . . . 00, 00, 01, 10, 00, 00, . . ., or edges . . . 000, 000, 001, 010, 100, 000, 000, . . .. The output labels of

258

J. Kari

this path read . . . 0011.000 . . . which is the (shifted) image of c under rule 110. The shift is caused by the fact that the neighbourhood of rule 110 is {−1, 0, 1} rather than {0, 1, 2} expected by our labelling of the de Bruijn graph. The shift does not affect injectivity and surjectivity properties of the CA. 00 0

1

01 1

0 10

1

1 1

0 11

Figure 8.9 The de Bruijn representation of rule 110. Only the output label of each edge is shown. The input label is always the first bit of the tail vertex of the edge.

Let us consider the output labelled de Bruijn graph for F, such as the one shown in Figure 8.9. We obviously have the following characterisations of injectivity, surjectivity and orphans. • F is injective if, and only if, different bi-infinite paths have always different labels. • F is surjective if, and only if, for every c ∈ AZ there is a path whose labels read c. • Word w ∈ A∗ is an orphan for F if, and only if, there is no finite path labelled w. The first two points will be used below to infer efficient algorithms to determine if a given one-dimensional CA is injective or surjective. The third point directly implies that the words that are orphans for F form a regular language: if we make all the vertices of the graph initial and final states, the output labelled de Bruijn graph becomes a non-deterministic finite automaton (NFA) that recognises exactly the words that are not orphans. Since the complement of a regular language is a regular language, the set of orphan words is a regular language as well. Example 8.1.24 Consider the NFA for rule 110 in Figure 8.9 where each state is both initial and final. The standard way to construct a finite automaton for the complement language is to first determinise the NFA using the well-known subset construction (see, e.g., Hopcroft and Ullman, 1979), and then swap the role of final and non-final states. Doing this for rule 110 yields the deterministic finite automaton for its orphans shown in Figure 8.10. One can read from the automaton, for example, that the unique shortest orphan is 01010. To test injectivity and surjectivity it is convenient to construct the pair graph from the output labelled de Bruijn graph G = (V, E). The vertex set of the pair graph is V × V , and there is an edge labelled by a ∈ A from (u1 , u2 ) to (v1 , v2 ) if, and only if, there are edges u1 −→ v1 and u2 −→ v2 with label a in G. Any bi-infinite path in the pair graph corresponds to a pair of configurations having the same image under F. These two configurations are obtained as the input labelling of the two corresponding

259

Cellular automata, tilings and (un)computability 0 00 01 10 11

0 00 11

0 1

1 1

0 01 10 11

1

0

01 10

0

1

00

1

1

10 11

0,1

01 0

Figure 8.10 A deterministic finite automaton recognising the orphans for rule 110. State {00, 01, 10, 11} is the initial state and state 0/ is the only final state.

paths in G. For an example of a pair graph, see Figure 8.11 where the pair graph for rule 110 has been constructed from the de Bruijn representation in Figure 8.9.

0 00 10

10 00 1

0 0

Δ

00 00

0 1 1 1

0 10 10

10 11

1 1

01 10

1

1

1 1

1 11 10

1 11 11

1

1

11 01

01 1 00

1

01 01 1

1

0

00 11 1

1 0 11 1 00

0

10 01

00 1 01

01 11

1 1

1

0

Figure 8.11 The pair graph for rule 110. The dashed line encloses the diagonal Δ.

The diagonal Δ of a pair graph consists of the vertices (u, u) where both components are equal. The induced subgraph on the diagonal is isomorphic to the de Bruijn graph G. The bi-infinite paths inside Δ are uninteresting since the two configurations they represent are identical to each other. Bi-infinite paths that involve vertices outside of Δ on the other hand represent different configurations with the same image under F. Such a path exists if, and only if, F is not injective. Since the graph is finite, any bi-infinite path returns to some vertex multiple times, providing periodic configurations with the same image. Theorem 8.1.25 A one-dimensional CA is (a) not injective if, and only if, its pair graph has a cycle that contains a vertex outside of Δ, (b) not surjective if, and only if, its pair graph has a cycle that contains both a vertex in Δ and a vertex outside of the diagonal Δ.

260

J. Kari

Proof (a) If CA F is not injective then there is a bi-infinite path p in the pair graph through a vertex x ∈ Δ. Some vertex y gets repeated twice on this path before x and some vertex z gets repeated twice after the same occurrence of x: p = ...y...y...x...z...z.... If y ∈ Δ or z ∈ Δ then the cycle y . . . y or z . . . z is the required cycle that contains a vertex outside of Δ. If y, z ∈ Δ then we use the fact that the induced subgraph in Δ is strongly connected so that there is a path from z to y. Now we have the required cycle y . . . x . . . z . . . y, where x ∈ Δ. The converse direction is trivial. (b) By the Garden of Eden theorem, F is not surjective if, and only if, it is not preinjective. Non-preinjectivity means that the de Bruijn graph representation of F contains a diamond, a pair of identically output labelled but non-identical finite paths that start in the same vertex and end in the same vertex. In the pair graph, a diamond corresponds to a finite path that starts and ends in Δ but contains a vertex outside of Δ. A cycle that contains both vertices both inside and outside of Δ is obtained by using the fact that the induced graph in Δ is strongly connected. The cycle in (a) shows that non-injective one-dimensional CA are not injective on periodic configurations. Corollary 8.1.26 If a one-dimensional CA is injective on periodic configurations then it is injective. Conditions (a) and (b) in Theorem 8.1.25 provide efficient algorithms to test if a given one-dimensional cellular automaton is injective or surjective. Standard graph algorithms provide polynomial time methods to test if required cycles exist in the pair graph. Corollary 8.1.27 One can decide in polynomial time (with respect to the size of the local rule table for a contiguous neighbourhood) whether a given one-dimensional cellular automaton is injective or surjective. Figure 8.12 summarises the relations between the injectivity and surjectivity properties of F, FP and F⊥ in the one-dimensional setting. In two- and higher-dimensional cases the situation is different on periodic configurations (see Figure 8.34).

8.2 Tilings and undecidability At the end of Section 8.1 above it was observed that the decision problems CA INJECTIVITY Input: Cellular automaton F. Question: Is F injective?

Cellular automata, tilings and (un)computability F reversible

F bijective FP injective

F⊥ surjective

F⊥ injective

261

F injective FP bijective

F⊥bijective

F surjective

FP surjective

Figure 8.12 Implications between injectivity and surjectivity properties in onedimensional CA.

and

CA SURJECTIVITY
Input: Cellular automaton F.
Question: Is F surjective?

can be solved efficiently for one-dimensional cellular automata F. This is in stark contrast to two- and higher-dimensional cases, where both questions are known to be undecidable, i.e., no algorithms exist to solve the problems (Kari, 1994).

When proving such undecidability results, it turns out that algorithmic questions concerning tiling spaces are a convenient intermediate step in reductions from the Turing machine halting problem. Tiling spaces are two-dimensional subshifts of finite type. A handy representation of these objects is in terms of Wang tiles, unit square tiles with coloured edges where the colours provide the local matching condition.

In this section we explore algorithmic questions about tilings. We start with basic definitions in Section 8.2.1. In particular, we introduce Wang tiles and the fundamental TILING PROBLEM that asks if a given Wang tile set admits a tiling of the infinite plane. In Section 8.2.1 we see how Wang tiles can represent computations by Turing machines, and this formalism is used in Section 8.2.2 to show that a simple variant of the TILING PROBLEM is undecidable. Next we turn to the general TILING PROBLEM. It is closely related to aperiodic tile sets, so we start by describing Robinson’s aperiodic set of Wang tiles in Section 8.2.3, and continue in Section 8.2.4 to complete the proof that the TILING PROBLEM is undecidable. We follow the proof given in (Robinson, 1971).


8.2.1 Preliminaries

A Wang tile set is a finite set T where each Wang tile a ∈ T has four labels a↑, a→, a↓ and a← associated with it. The four labels are the colours of the four sides of a, where a is pictured as a unit square. Formally, the Wang tile set is then the 5-tuple (T, ↑, →, ↓, ←) where ↑, →, ↓ and ← are the labelling functions with domain T. In practice we, less formally, assume the labelling and call T alone a Wang tile set.

We say that a configuration c ∈ T^{Z^2} is correctly tiled at position (i, j) ∈ Z^2 if c(i, j) matches with its four neighbours on the abutting edges, so that

c(i, j)↑ = c(i, j + 1)↓,
c(i, j)↓ = c(i, j − 1)↑,
c(i, j)→ = c(i + 1, j)←,
c(i, j)← = c(i − 1, j)→.

A configuration c ∈ T^{Z^2} is a tiling if it is correctly tiled at all positions (i, j) ∈ Z^2 of the plane. We say that the tile set admits tiling c. Note that the set of tilings admitted by T is a subshift of finite type, defined by forbidding the dominoes formed by non-matching pairs of tiles. Conversely, every two-dimensional SFT can be effectively converted into an ‘equivalent’ Wang tile set. By equivalence here we mean conjugacy of subshifts (also called isomorphism, see Section 9.2.3): there is a continuous translation-commuting bijection between the SFT and the set of valid tilings. The conversion uses a two-dimensional higher block presentation analogous to the one-dimensional de Bruijn representation discussed in Section 8.1.3. Also see Chapter 9. In particular, for more on Wang tiles, see Example 9.2.12.
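The matching condition is straightforward to check mechanically, which is what the semi-algorithms later in this section rely on. The following is a minimal sketch, not from the chapter: tiles are represented as (north, east, south, west) label tuples and configurations as dictionaries from positions to tiles.

```python
from typing import Dict, Tuple

Tile = Tuple[str, str, str, str]            # (north, east, south, west) labels
Config = Dict[Tuple[int, int], Tile]

def correctly_tiled_at(c: Config, i: int, j: int) -> bool:
    """The four matching conditions at position (i, j), as in the definition above."""
    north, east, south, west = c[(i, j)]
    return (north == c[(i, j + 1)][2] and   # my north edge meets the upper tile's south edge
            south == c[(i, j - 1)][0] and
            east == c[(i + 1, j)][3] and
            west == c[(i - 1, j)][1])
```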

Example 8.2.1 Consider the six Wang tiles in Figure 8.13. The labels are represented as arrows, and neighbouring tiles must have identically oriented arrows on the abutting edges to match each other. The six tiles in the set are exactly given by the possible assignments of two arrows in the clockwise direction and two arrows in the counterclockwise direction around the tile. In a tiling then, along any closed cycle there are equally many arrows in the clockwise and in the counterclockwise directions. The set of tilings is conjugate to the two-dimensional subshift of finite type discussed in Example 8.1.3. Note that the arrows used above as labelling functions intuitively ‘point’ to the side of the tile whose label is under consideration, whereas the arrows on the tiles in this example are the labels.

A straightforward application of compactness yields the following simple lemma (also see Lemma 9.2.3).

Lemma 8.2.2 If T admits a correctly tiled n × n square, for all n, then T also admits a tiling of the full plane.

Proof For every n, let c_n ∈ T^{Z^2} be a configuration that is correctly tiled in all positions in the (2n + 1) × (2n + 1) square centred at the origin.


Figure 8.13 A Wang tile set with six tiles, and a piece of a valid tiling.

By compactness, the sequence c1, c2, . . . has a converging subsequence. The limit is correctly tiled everywhere.

The lemma provides a semi-algorithm to test if a given T does not admit any valid tilings. The (non-deterministic) semi-algorithm is based on guessing a number n and verifying that the n × n square cannot be tiled.

A Wang tile set is called aperiodic if (1) it admits a tiling but (2) it does not admit a periodic tiling. It is a non-trivial fact that aperiodic tile sets exist. This was first proved in (Berger, 1966), refuting a conjecture by H. Wang. Until recently the smallest known aperiodic Wang tile set contained 13 tiles (Kari, 1996; Culik II, 1996). See also the preprint Jeandel and Rao (2015) for a set of 11 tiles.

The existence of aperiodic tile sets is a source of differences between one- and higher-dimensional cellular automata. In the one-dimensional setting, every SFT that is non-empty necessarily contains a periodic configuration, so that non-periodicity cannot be forced by local matching rules. This is reflected in the two-dimensional case by the fact that a two-dimensional SFT that contains a (not necessarily fully) periodic configuration necessarily also contains a fully periodic configuration. This in turn provides a semi-algorithm to test if a given Wang tile set admits a periodic tiling: guess a fully periodic configuration and verify that it is indeed correctly tiled everywhere.

The fundamental decision problem concerning tilings asks whether a given Wang tile set admits at least one valid tiling. Equivalently, the question asks whether a given two-dimensional subshift of finite type is non-empty.

TILING PROBLEM
Input: Wang tile set T.
Question: Does T admit a valid tiling?

Notice that without aperiodic tile sets the TILING PROBLEM (also known as the DOMINO PROBLEM) could be decided by running the semi-algorithms for non-tilability and periodic tilability in parallel. Indeed, negative and positive instances


of the TILING PROBLEM would then be correctly identified by the first and the second semi-algorithm, respectively. Only the aperiodic tile sets ‘fall through the cracks’ between the two cases, and both semi-algorithms fail to halt on aperiodic instances. In fact, the TILING PROBLEM is known to be undecidable, proved by R. Berger in the same work where he introduced aperiodic tile sets (Berger, 1966). We discuss Berger’s theorem in more detail in Section 8.2.4. In Section 8.2.2 we first consider a variant of the TILING PROBLEM whose undecidability is easier to establish. Also see the discussion in Section 9.3 in the framework of multidimensional subshifts of finite type.

Computations and tilings

Turing machines provide us with the most basic undecidable decision problems. For more on Turing machines, also see Chapter 9. A Turing machine consists of a finite state control unit that moves along an infinite tape. The tape has symbols written in cells indexed by Z. Depending on the state of the control unit and the symbol currently scanned on the tape, the machine may overwrite the tape symbol, change the internal state and move along the tape one cell to the left or right.

We formally define a Turing machine as a 6-tuple M = (Q, Γ, δ, q0, qh, b) where Q and Γ are finite sets (the state alphabet and the tape alphabet, respectively), q0, qh ∈ Q are the initial and the halting states, respectively, b ∈ Γ is the blank symbol and δ : Q × Γ → Q × Γ × {−1, 1} is the transition function that specifies the moves of the machine. A configuration (or instantaneous description) of the machine is a triplet (q, i, t) where q ∈ Q is the current state, i ∈ Z is the position of the machine on the tape and t ∈ Γ^Z describes the content of the tape. In one time step configuration (q, i, t) becomes (q′, i + d, t′) if δ(q, t(i)) = (q′, a, d) and t′(i) = a and t′(j) = t(j) for all j ≠ i. We denote this move by (q, i, t) ⊢ (q′, i + d, t′). The reflexive, transitive closure of ⊢ is denoted by ⊢*.

One is interested in determining whether a given Turing machine eventually enters its halting state qh when started in the initial state q0 on a totally blank tape, i.e., initially every tape location has the blank symbol b.

TURING MACHINE HALTING ON BLANK TAPE
Input: Turing machine M = (Q, Γ, δ, q0, qh, b)
Question: Does (q0, 0, . . . bbb . . . ) ⊢* (qh, i, t) for some i ∈ Z and t ∈ Γ^Z ?

TURING MACHINE HALTING ON BLANK TAPE is undecidable, i.e., no algorithm exists that solves the problem. This result, due to A. Turing, forms the foundation of computability theory.

Theorem 8.2.3 (Turing (1936)) TURING MACHINE HALTING ON BLANK TAPE is undecidable. The positive instances are semi-decidable.
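The positive instances are semi-decidable simply because the machine can be simulated step by step. A minimal sketch of the move relation ⊢ and of this semi-decision procedure, under the assumption that the transition function is given as a Python dictionary (all names here are illustrative, not from the chapter):

```python
from typing import Dict, Tuple

State, Symbol = str, str
Delta = Dict[Tuple[State, Symbol], Tuple[State, Symbol, int]]   # delta(q, a) = (q', a', d)

def step(delta: Delta, blank: Symbol, q: State, i: int, tape: Dict[int, Symbol]):
    """One move (q, i, t) |- (q', i + d, t'); cells not stored hold the blank symbol."""
    a = tape.get(i, blank)
    q2, a2, d = delta[(q, a)]
    new_tape = dict(tape)
    new_tape[i] = a2                      # t'(i) = a', t'(j) = t(j) for j != i
    return q2, i + d, new_tape

def halts_within(delta: Delta, q0: State, qh: State, blank: Symbol, limit: int) -> bool:
    """Semi-decision of halting from the blank tape, truncated to `limit` steps."""
    q, i, tape = q0, 0, {}
    for _ in range(limit):
        if q == qh:
            return True
        q, i, tape = step(delta, blank, q, i, tape)
    return q == qh
```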


The basic observation in establishing undecidability results concerning tilings is the fact that tilings can be forced to contain a complete simulation of a computation by a given Turing machine. This allows one to reduce the decision problem TURING MACHINE HALTING ON BLANK TAPE into the TILING PROBLEM and its various variants.

With any given Turing machine M = (Q, Γ, δ, q0, qh, b) we associate the Wang tiles TM shown in Figure 8.14, and we call these tiles the machine tiles of M. In the illustrations, instead of colours, we use labelled arrows on the sides of the tiles. Two adjacent tiles match if, and only if, an arrow head meets an arrow tail with the same label. Such an arrow representation can be converted into the usual colouring representation of Wang tiles by identifying each arrow direction and label with a unique colour.


Figure 8.14 Machine tiles associated with a Turing machine. (a) A tape tile for each a ∈ Γ, (b) an action tile corresponding to each left-moving instruction, and (c) to each right-moving instruction, (d) two merging tiles for each a ∈ Γ and each non-halting state q.

The machine tiles of M contain the following tiles.
• For every tape letter a ∈ Γ, a tape tile of Figure 8.14a.
• For every tape letter a ∈ Γ and every state q ∈ Q, an action tile of Figure 8.14b or 8.14c. Tile (b) is used if δ(q, a) = (q′, a′, −1), and tile (c) is used if δ(q, a) = (q′, a′, +1).
• For every tape letter a ∈ Γ and non-halting state q ∈ Q \ {qh}, two merging tiles shown in Figure 8.14d.

The idea of the tiles is that a configuration of the Turing machine M is represented as a row of tiles in such a way that the cell currently scanned by M is represented by an action tile, the neighbour into which the machine moves has a merging tile, and all other tiles on the row are tape tiles. See Figure 8.15 for an illustration. If this row is part of a valid tiling then it is clear that the rows above must be similar representations of subsequent configurations in the Turing machine computation, until the machine halts. There is no merging tile specified for the halting state qh. The machine tiles above are the basic tiles associated with Turing machine M.
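Since the exact arrow labels of Figure 8.14 are a matter of convention, the construction of TM is easy to sketch in the (north, east, south, west) representation used earlier. The encoding below is one possible choice and is not claimed to reproduce the figure exactly: south edges carry the previous configuration, north edges the next one, and a horizontal label transports the new state towards the cell the machine moves into.

```python
def machine_tiles(Q, Gamma, delta, qh, neutral="."):
    """One possible encoding of the machine tiles T_M as (north, east, south, west)
    tuples; `neutral` is a label for sides that carry no information."""
    tiles = set()
    for a in Gamma:                                  # tape tiles: symbol passes upwards
        tiles.add((a, neutral, a, neutral))
    for (q, a), (q2, a2, d) in delta.items():        # action tiles
        if d == -1:
            tiles.add((a2, neutral, (q, a), q2))     # state signal leaves to the west
        else:
            tiles.add((a2, q2, (q, a), neutral))     # state signal leaves to the east
    for a in Gamma:                                  # merging tiles (none for the halting state)
        for q in Q:
            if q != qh:
                tiles.add(((q, a), q, a, neutral))   # state signal arrives from the east
                tiles.add(((q, a), neutral, a, q))   # state signal arrives from the west
    return tiles
```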


Figure 8.15 A row of tiles representing a left move (q, 0, t) ⊢ (q′, −1, t′) allowed by instruction δ(q, t_0) = (q′, t_0′, −1).
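In the encoding sketched above, the south-edge labels of such a row simply spell out the instantaneous description of the machine; a small illustrative helper (again with hypothetical names) that produces them:

```python
def south_labels(q, i, tape, blank, lo, hi):
    """South-edge labels of the tile row encoding configuration (q, i, t) on cells
    lo..hi: the scanned cell carries the pair (q, symbol), all others their symbol."""
    return [(q, tape.get(j, blank)) if j == i else tape.get(j, blank)
            for j in range(lo, hi + 1)]
```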

Additional tiles will be added depending on the actual variant of the tiling problem we are considering.

8.2.2 The tiling problem with a seed tile

We start by considering a variant of the tiling problem where we are asking if a tiling exists that contains a particular seed tile.

TILING PROBLEM WITH A SEED TILE
Input: Tile set T and one tile s ∈ T.
Question: Does T admit a valid tiling such that tile s is used at least once?

The seeded version was shown undecidable in (Wang, 1961). We present the proof here because the proof is quite simple and shows the general idea of how the Turing machine halting problem can be reduced to problems concerning tiles. Also, the TILING PROBLEM WITH A SEED TILE is used as an intermediate step in the general case.

Theorem 8.2.4 (Wang (1961)) The TILING PROBLEM WITH A SEED TILE is undecidable for Wang tile sets. The negative instances are semi-decidable.

Proof The semi-decidability of the complement problem follows from the following semi-algorithm: guess a number n and verify that there is no tiling of the (2n + 1) × (2n + 1) square with the seed tile s at the centre of the square.

Consider then undecidability. A many–one reduction is a reduction which converts instances of one decision problem into instances of another decision problem. We many–one reduce the decision problem TURING MACHINE HALTING ON BLANK TAPE. For any given Turing machine M we can effectively construct a tile set and a seed tile in such a way that they form a positive instance of the TILING PROBLEM WITH A SEED TILE if, and only if, M is a negative instance of TURING MACHINE HALTING ON BLANK TAPE.

For the given Turing machine M we construct the machine tiles TM of Figure 8.14 as well as the four tiles shown in Figure 8.16. These are the blank tile and three initialisation tiles. They initialise all tape symbols to be equal to the blank symbol b, and the Turing machine to be in the initial state q0. The middle initialisation tile is chosen as the seed tile s. Let us prove that a valid tiling containing a copy of the seed tile exists if, and only if, the Turing machine M does not halt when started on the blank tape.

Cellular automata, tilings and (un)computability b

(a)

q b 0

267

b

(b)

Figure 8.16 (a) The blank tile, and (b) three initialisation tiles.

(if) Suppose that the Turing machine M does not halt from the blank tape. Then a valid tiling exists where one horizontal row is formed with the initialisation tiles, all tiles below this row are blank, and the rows above the initialisation row contain consecutive configurations of the Turing machine.

(only if) Suppose that a valid tiling containing the middle initialisation tile exists. The seed tile forces its row to be formed by the initialisation tiles, representing the initial configuration of the Turing machine on the blank tape. The machine tiles force the following horizontal rows above the seed row to contain the consecutive configurations of the Turing machine. There is no merging tile containing a halting state, so the Turing machine does not halt: otherwise a valid tiling could not be formed.

Conclusion: Suppose we had an algorithm that solves the TILING PROBLEM WITH A SEED TILE. Then we also have an algorithm (which simply constructs the tile set as above and determines if a tiling with the seed tile exists) that solves TURING MACHINE HALTING ON BLANK TAPE. This contradicts the fact that this problem is known to be undecidable (Theorem 8.2.3).

Another variant of the tiling problem that can as easily be proved undecidable is the FINITE TILING PROBLEM, discussed in Exercise 8.5.6 at the end of this chapter.

8.2.3 Robinson’s aperiodic Wang tile set

The proof of Theorem 8.2.4 is an easy reduction from the halting problem, based on a straightforward simulation of a Turing machine computation by Wang tiles. Aperiodic tile sets play no role in the proof. Let us consider next the general case, that is, the TILING PROBLEM. As pointed out in the beginning of the section, undecidability of this problem relies on the existence of aperiodic tile sets, so such sets must somehow be present in the proof of its undecidability. Owing to the importance of the problem, a number of different proofs have been proposed over the years (Berger, 1966; Robinson, 1971; Kari, 2007; Durand et al., 2012). Here we sketch the approach from (Robinson, 1971), starting with the description of the aperiodic tile set used in the proof. Also see Section 9.3.

Robinson’s aperiodic tile set contains 56 Wang tiles in total. Instead of colours on the edges, the matching condition is described by arrows. As above, arrow heads and tails in neighbouring tiles must match. Robinson’s tile set consists of crosses, shown


in Figure 8.17a, and arms, shown in Figure 8.17b. All tiles may be rotated so each tile comes in four orientations. Hence the total number of basic cross and arm tiles is 28.


Figure 8.17 Robinson’s (a) cross (facing north and east), (b) arms (oriented southward). Rotating the tiles is allowed to provide other orientations.

The following terminology will be used.
• Every tile has central arrows at the centres of all four sides, and possibly some side arrows.
• A cross is said to face the directions of its two side arrows.
• The unique central arrow that runs through an arm is called the principal arrow of the arm, and the direction of the principal arrow is called the orientation of the arm.

An important fact about arms is that if there are side arrows perpendicular to the principal arrow then these side arrows are never towards the tail but always towards the head of the principal arrow, as shown in the lower three tiles of Figure 8.17b. Otherwise, all combinations of side arrows are allowed.

Figure 8.18 Parity tiles that force identification of the row/column parities.

We want to enforce a cross in the intersections of every other row and column. This can be established by forming the Cartesian product (‘sandwich tiles’) with the parity tiles of Figure 8.18, and by allowing the first parity tile to be coupled only with a cross. Since the only way the parity tiles tile the plane is by alternating the tiles on even and odd rows and columns, the first parity tile is forced at the intersections of every other row and column, and hence a cross is forced to appear in those locations. By numbering the rows and columns suitably we can assume from now on that all


odd–odd positions of the plane contain a cross, and correspondingly we call the first parity tile the odd–odd parity tile.

Note that between two crosses only an arm can appear, and the orientation of the arm has just two possible choices as it cannot point towards either cross. This means that the second parity tile only needs to be paired with north-to-south or south-to-north oriented arms, and the third parity tile is only paired with east-to-west or west-to-east oriented arms. The fourth parity tile is paired with any of the 28 tiles. So the final set contains 4 + 12 + 12 + 28 = 56 different tiles.

Next we investigate valid tilings admitted by Robinson’s tiles, and we show that the tile set is aperiodic. Specific patterns called 1-, 3-, 7-, 15-, . . . , (2^n − 1)-patches are defined recursively as follows.
• A cross with an odd–odd parity tile is a 1-patch. The patch faces the same directions as the cross.
• A (2^{n+1} − 1)-patch consists of a cross in the middle (in an even–even position), sequences of arms radiating out of the centre and four copies of (2^n − 1)-patches facing each other at the four quadrants. See Figure 8.19 for an illustration. We say the patch faces the same directions as the cross in the middle.


Figure 8.19 The recursive construction of a (2^{n+1} − 1)-patch.

For every n there are actually four different (2^n − 1)-patches as the cross at the centre may be in any of the four possible orientations. For an example, Figure 8.20 shows a 3-patch and a 7-patch, both facing north and east. Inductively one easily proves the following properties of (2^n − 1)-patches.
(1) The patch is correctly tiled,
(2) all edges on the border of the patch have arrow heads pointing out of the patch, so all edge neighbours of (2^n − 1)-patches are forced to be arms,
(3) the only side arrows pointing out of the patch are at the centres of the border in the two directions where the patch faces.



Figure 8.20 (a) A 3-patch, and (b) a 7-patch facing north and east. For clarity, in (b) the arms are shown without the arrows perpendicular to the principal arrow – these are uniquely determined by the neighbours.

Consider an arbitrary valid tiling of the plane by Robinson’s tiles. Let us show, using mathematical induction on n, that every cross with odd–odd parity belongs to a unique (2^n − 1)-patch, for every n = 1, 2, . . . The case n = 1 is trivial, as by definition the crosses with odd–odd parity are themselves the 1-patches. Suppose then that the claim is true for n and let C be an arbitrary cross with odd–odd parity. By the inductive hypothesis C belongs to a unique (2^n − 1)-patch s. There are four possibilities for the orientation of this patch, but they are all symmetric. Let us assume without loss of generality that s faces north and east. In the following discussion we refer to positions indicated by the symbols marked in Figure 8.21.


Figure 8.21 Positions and regions used in the proof that (2^n − 1)-patches are forced in valid tilings.


First we prove that tile X, outside the north-east corner of patch s, must be a cross. Suppose the opposite: X is an arm. Then it has an incoming arrow on all but one side, so one of its edge neighbours in regions a or b must be an arm directed towards X. By continuing this reasoning we see that all tiles in one of the regions a or b must be arms directed towards X. But this means that the tile at the centre of region a or b is an arm with an incoming side arrow at the wrong end of the principal arrow: side arrows are only possible towards the head of the principal arrow. Hence the assumption that X is an arm must be incorrect, and X must be a cross.

Consider then tile Y that is a corner-wise neighbour of X. It is in an odd–odd position and therefore Y is a cross. According to the inductive hypothesis Y belongs to a (2^n − 1)-patch sY. This patch cannot overlap with patch s because then the tiles in the overlap region would belong to two different (2^n − 1)-patches, which contradicts the uniqueness property. Also the tile north of X cannot belong to sY because X is a cross. Hence Y has to be at the south-east corner of sY. Analogously, tiles Z and U are corners of disjoint (2^n − 1)-patches sZ and sU, respectively. Tiles between these (2^n − 1)-patches are forced to be arms radiating out from X. The side arrows at the middle of a and b force the centre crosses of sY and sZ to face patches s and sU, so the patches s, sY, sZ, sU and the tiles between them form a (2^{n+1} − 1)-patch that contains tile C.

We have proved the existence of a (2^{n+1} − 1)-patch that contains C. The uniqueness is obvious as the orientation of the (unique) (2^n − 1)-patch s that contains C determines the location of the centre of the (2^{n+1} − 1)-patch that contains C.

Theorem 8.2.5 (Robinson (1971)) Robinson’s Wang tile set is aperiodic.

Proof The (2^n − 1)-patches are valid tilings of arbitrarily large squares, so a valid tiling of the plane exists (Lemma 8.2.2). On the other hand, we have shown above that every tiling contains a (2^n − 1)-patch for every n. Its centre cross has (2^{n−1} − 1) horizontal arms to its left, so there cannot be a horizontal period.

Enhanced Robinson’s tiles

Although valid tilings by Robinson’s tiles are non-periodic, they have simple hierarchical structures formed by nested (2^n − 1)-patches everywhere on the plane. In particular, there are tilings where, for every fixed n, the (2^n − 1)-patches repeat periodically with horizontal and vertical periods 2^n. As proved above, for all n, each (2^n − 1)-patch is one of the four quadrants of a unique (2^{n+1} − 1)-patch, as determined by the orientation of the cross in the middle of the patch. Let us associate with a patch the horizontal label E or W based on whether it is oriented to the east or to the west, respectively. Analogously we associate the vertical label N (north) or S (south). So, for example, a patch with horizontal label E and vertical label N faces north and east, and hence it is located at the south-west quadrant of the next bigger patch.

Now we can assign every cross C of odd–odd parity the infinite sequences h(C) = h1 h2 h3 . . . and v(C) = v1 v2 v3 . . . where, for every n, symbols hn ∈ {W, E} and vn ∈


{N, S} record the horizontal and the vertical labels of the unique (2^n − 1)-patch containing C. Sequences h(C) and v(C) are the horizontal and the vertical addresses of cross C in the present tiling.

If h(C) is not ultimately constant, that is, if h(C) contains infinitely many letters E and W, then, as n grows, the patches containing C expand horizontally to the left and right to cover all columns of the plane. If also v(C) is not ultimately constant then every cell belongs to a common patch with C and the tiling is uniquely determined by the position of C and its addresses h(C) and v(C). In this case, for all n, (2^n − 1)-patches repeat fully periodically in the tiling with horizontal and vertical periods 2^n.

If h(C) or v(C) is ultimately constant then the patches containing C only extend to cover a half plane (or a quarter plane in case both sequences are ultimately constant). More specifically, if h(C) is ultimately constant for some cross C then, assuming v(C) is not ultimately constant, there is a vertical fault line and all crosses to the east (west) of the fault line have their horizontal addresses ultimately W (ultimately E, respectively). Analogously, if v(C) is ultimately constant, but h(C) is not, then the plane is divided into north and south half planes by a horizontal fault line. If both addresses of some cross are ultimately constant, then the plane is partitioned into four quadrants, one for each combination of E/W and N/S. The quadrants do not necessarily meet at a common cell: Figure 8.22 shows an example where a horizontal fault line splits the plane and both halves are further divided by vertical fault lines. Informally, each quadrant is an ‘infinite’ patch obtained as the limit of (2^n − 1)-patches with a common corner as n goes to infinity.

Figure 8.22 Part of a valid tiling by Robinson’s tiles where a horizontal fault line divides the plane into two half planes that are further divided by vertical fault lines. Fault lines are shown in grey. The upper and lower half planes are misaligned by six cells so the periodic structure of the patches does not extend across the horizontal fault.

Fault lines may break the periodic structure of (2^n − 1)-patches. Patches on the same side of the fault line still repeat 2^n-periodically, but the synchronisation of the


periods may fail across the fault line. For example, in Figure 8.22 the 3-patches are misaligned across the horizontal fault. While this is not usually a problem, in some applications of Robinson’s tiles (for example in Chapter 9 of this volume) it would be desirable if the patches were known to repeat periodically in all valid tilings, even in the presence of fault lines.

Fortunately it is fairly easy to modify the tiles to guarantee synchronisation also across the fault lines. We simply add to each tile a horizontal signal with label S or N, and a vertical signal with label E or W. The signals continue in the neighbouring tile with the same label, so that in each valid tiling each horizontal row has a unique label S or N, and each vertical column has its unique label E or W. In each Robinson cross we only allow the combination of labels that correctly identifies the orientation of the cross. For example, the signals at a cross that faces north and east must have labels N and E. On arms, no restrictions are imposed. We obtain the enhanced Robinson’s tiles shown in Figure 8.23.


Figure 8.23 The enhanced Robinson’s tiles where horizontal and vertical labels synchronise crosses horizontally and vertically. In crosses (a) the orientation of the cross determines the synchronising labels uniquely, while in arms (b) the labels are arbitrary X ∈ {N, S} and Y ∈ {E,W }.

The enhanced Robinson’s tiles are still an aperiodic tile set. Valid tilings exist as (2^n − 1)-patches can be assigned the synchronisation signals so that the tiling remains valid: no vertical column of the patch contains both east and west oriented crosses and, symmetrically, no horizontal row contains both north and south oriented crosses. Clearly no periodic tilings exist because ignoring the synchronisation signals leaves a tiling by the original Robinson’s tiles.

Moreover, we obtain the desired property that in every valid tiling, (2^n − 1)-patches repeat periodically with horizontal and vertical periodicity 2^n. This follows from the fact that the synchronisation signals prevent misalignments across fault lines. Suppose the contrary: for some n, while (2^n − 1)-patches repeat with horizontal and vertical period 2^n, some (2^{n+1} − 1)-patches are misaligned across a fault line. But then the (2^n − 1)-patches in their quadrants are aligned so that their east and west (or north and south) oriented centre crosses are on the same column (same


row, respectively), contradicting the synchronisation signals. Note that 1-patches are automatically aligned by the parity tiles.

Another observation is that the synchronising signals prevent the staggering of the quarter planes as in Figure 8.22, because such a situation forces misalignment of patches. Fault lines must now run uninterrupted horizontally and/or vertically across the entire plane. It easily follows that any pattern contained in any valid tiling by the enhanced Robinson’s tiles is in fact a subpattern of (2^n − 1)-patches for sufficiently large n. In particular this means that it must appear in every valid tiling. The SFT of valid tilings is minimal, that is, none of its non-empty proper subsets is a subshift.

Lemma 8.2.6 In every valid tiling by the enhanced Robinson’s tiles all (2^n − 1)-patches repeat periodically with horizontal and vertical periodicity 2^n. The set of valid tilings is a minimal SFT.

8.2.4 The tiling problem

Now we turn to the problem of testing if a given Wang tile set admits a valid tiling. We follow (Robinson, 1971) to prove its undecidability.

Theorem 8.2.7 (Berger (1966)) The TILING PROBLEM is undecidable. The negative instances are semi-decidable.

Proof Semi-decidability follows from Lemma 8.2.2 and was discussed below it. We prove the undecidability of the TILING PROBLEM by many–one reducing the TILING PROBLEM WITH A SEED TILE, shown to be undecidable in Theorem 8.2.4. For that purpose, let P be any given Wang tile set with a seed tile s ∈ P. We construct a Wang tile set T that admits a tiling if, and only if, P admits a tiling that contains s.

In the TILING PROBLEM we have no specified seed tile required to be used, so a main problem is to force the presence of the seed (that is, the initial state of the Turing machine) in every valid tiling. Note that if it were possible to have arbitrarily large squares without the seed, then it would also be possible to make the entire tiling without the seed. This is a consequence of the compactness of the tiling space (Lemma 8.2.2). Therefore the seed must be enforced inside all n × n squares for some n. This, on the other hand, would seem to contradict the possibility of having a tiling with a single seed. A solution is to partition the space, using Robinson’s aperiodic tile set, into ‘nested boards’ containing larger and larger pieces of valid tilings by P around seed tiles.

This takes us back to Robinson’s tile set. Recall the special (2^n − 1)-patches that necessarily exist in every valid tiling. We define nested boards using the side arrows of Robinson’s tiles. Figure 8.24 shows only the side arrows in a 15-patch. Notice how the side arrows form intersecting squares: the side arrows emitted from the crosses form squares whose centres contain crosses, which in turn are corners of bigger squares. The


Figure 8.24 Side arrows of a 15-patch.

smallest squares have their corners at the odd–odd positions. They are of size 2 × 2, and they only intersect one 4 × 4 square whose corner is at the centre. Any bigger square S is of size 2^n × 2^n, for some n ≥ 2, and it intersects one bigger square of size 2^{n+1} × 2^{n+1} whose corner is at the centre of S, and four smaller squares of size 2^{n−1} × 2^{n−1} whose centres are the corners of S.

In order to pick non-intersecting squares we colour the side arrows red or green according to the following rules.
• The side arrows of each cross are both red or both green. The crosses at odd–odd positions have green side arrows.
• In each arm the horizontal side arrows have the same colour and the vertical side arrows have the same colour. In this way the colour is transmitted unchanged through the arm. If the arm contains both horizontal and vertical side arrows then these side arrows must have different colours. This guarantees that intersecting squares have different colours.
• In neighbouring tiles the matching rule is that the meeting arrow heads and tails must have the same colour.

Following these rules, each square will be coloured completely red or green, and intersecting squares have opposite colours. The smallest squares are green, so the colouring of the squares is completely determined. Notice that red squares do not intersect each other and green squares do not intersect each other. The red and green squares are of sizes 2^{2n} × 2^{2n} and 2^{2n−1} × 2^{2n−1}, for n = 1, 2, . . . , respectively. In Figure 8.25 the green squares are depicted in light and the red ones in dark colour.

A piece of a Wang tiling with a seed tile will be enforced inside each red square. Small red squares nested within a larger red square contain their own copies. We call a region within a red border but outside all nested red borders within it a board. See Figure 8.26a for an example.

In each board we want to identify those rows and columns that run completely across the board without intersecting a smaller board inside. Let us call these free rows and free columns. Tiles at the intersections of free rows and columns are called


Figure 8.25 Colouring of the squares formed by the side arrows in the 31-patch that faces south and east. Green squares are light, red ones dark.


Figure 8.26 (a) A board in a 64 × 64 red square, and (b) the free rows and columns of the board. The scattered 9 × 9 square – shown dark in (b) – is formed by the free cells that are both on a free row and a free column.

free and they form a scattered square whose pieces are joined by the free rows and columns. The idea is that the scattered pieces contain tiles of the simulated tile set P, with the seed tile forced at the centre. The connecting free rows and columns transmit the colour information between the pieces so that the scattered pieces fit together. See Figure 8.27 for an illustration.

Let us start by counting the number F_n of free rows (and columns) in a board of side 2^{2n}. The pattern of free rows in a 2^{2n}-board repeats in the middle of the 2^{2(n+1)}-board, and halves of it (without the centre row) repeat at both ends. So we have


Figure 8.27 Scattered pieces and their assembly together.

F_{n+1} = 2F_n − 1. Since F_1 = 3 and F_{n+1} − 1 = 2(F_n − 1), we have F_n − 1 = 2^n, that is, F_n = 2^n + 1. Hence a valid tiling necessarily contains boards with arbitrarily large numbers of free rows and columns.

To identify tiles of the board that are on a free row and/or a free column, we use a new set of labelled arrows, called obstruction signals, that run horizontally on all rows from west to east, and vertically on all columns from north to south. Each arrow is labelled to identify the orientation of the closest red boundaries on its row/column.

Let us consider the horizontal obstruction signals first. There are four possible labels [ [, [ ], ] [ and ] ]. The label is transmitted unchanged through tiles without vertical red side arrows. But if a tile contains a vertical red side arrow (i.e., if the tile is on the left or right border of a red square, including corners) then the label changes to identify the interior and the exterior side of the red square. More precisely, as the arrow crosses a red border from left to right, the label changes from x[ to [y if the border is a left border, and from x] to ]y if the border is a right border, where x, y ∈ {[, ]} are arbitrary. Note that the position of the red side arrow identifies whether the border is a left or a right border. It follows that the horizontal segment between two vertical red boundaries has a unique possibility for its label, and this unique label is [ ] if, and only if, the row is free (or is a horizontal boundary of a red square). See Figure 8.28 for an example.


Figure 8.28 Label change of a horizontal obstruction signal crossing a tile (a) on the left and (b) on the right boundary of a red square. (c) The unique labelling of the obstruction signals on a sample horizontal row through a 16 × 16 board.

Vertical obstruction signals are defined analogously. Based on the labels of horizontal/vertical obstruction signals, tiles inside a board can be classified into four classes:

(1) tiles not on a free row or column,
(2) tiles on a free row but not on a free column,
(3) tiles on a free column but not on a free row,
(4) tiles on a free row and a free column.

Tiles of type (4) are free, and they form the scattered F_n × F_n square whose disjoint parts are connected by tiles of types (2) and (3).

Let R be the enhanced Robinson’s tile set constructed above. Now we are ready to reduce the tiling problem with the seed tile into the tiling problem without a seed tile. Let P be a given set of Wang tiles, and let s ∈ P be the given seed tile. Let C be the set of colours used in P. To determine if P admits a tiling that contains s we construct a set T of ‘sandwich tiles’ (r, p), whose first component r ∈ R, and the second component p is a Wang tile over the colour set C ∪ {b}, where b ∉ C is a new ‘blank colour’. The first components tile according to the local matching constraints of R described above. The second components tile under the colour constraints, as in Wang tiles. The set T contains all the pairs (r, p) that satisfy the following conditions.
(a) If r is a free tile (on a free row and on a free column) then p ∈ P.
(b) If r is on a free column but not on a free row then p is a tile whose north and south sides carry the same colour x ∈ C and whose east and west sides carry the blank colour b.
(c) If r is on a free row but not on a free column then p is a tile whose east and west sides carry the same colour x ∈ C and whose north and south sides carry the blank colour b.
(d) If r is not on a free row or column then p is arbitrary: any colouring of its sides by C ∪ {b} is acceptable.
(e) If r is a corner of a green square of size > 2 × 2 (that is, a cross with green side arrows with even–even parity), then p = s, the seed tile.

Notice that the corners of green squares that are in even–even positions are exactly at the centres of red squares. These centre positions are always free. Condition (e) guarantees that the centre of each board is paired with the seed tile s. Properties (b) and (c) mean that colour information is transmitted along free rows and columns between disjoint parts of the board, while (a) guarantees that on free areas a tiling by P is formed. These conditions make the board behave as if the free rows and columns were contiguous, and the board then is like a square board of size F_n × F_n. Condition (d) simply allows different boards to coexist arbitrarily.

Let us prove that our sandwich tiles T admit a tiling if, and only if, P admits a tiling that contains the seed tile s.

(if) Suppose first that P admits a tiling that contains s. Then we can properly tile


any board by placing s at the centre and scattering a tiling containing s in the free areas. Smaller nested boards can be tiled in the same way. Different boards are not immediate neighbours of each other since there are at least the red boundary tiles between them. So different boards can be tiled independently of each other. As we can tile arbitrarily large squares in this way, the whole plane can be tiled.

(only if) For the converse direction, suppose that the sandwich tiles admit a tiling. The underlying Robinson tiling necessarily contains (2^n − 1)-patches for arbitrarily large numbers n. Hence there are red squares of size 2^{2n} × 2^{2n}, for every n, and consequently there are arbitrarily large boards. The centre of each board is paired with s, and the free areas of the board necessarily contain a piece of a valid tiling by P. As the free area is arbitrarily large, and its centre contains s, we conclude that P admits a tiling that contains a copy of tile s.

The periodic tiling problem

Each Wang tile set is of exactly one of the following types:
(1) sets that do not admit any tilings,
(2) sets that admit some periodic tilings,
(3) aperiodic Wang tile sets.

Membership in the first two classes is semi-decidable, so membership in the third class cannot be semi-decidable: otherwise all three classes would be decidable. The remaining algorithmic question concerns membership in the second class.

PERIODIC TILING PROBLEM
Input: Wang tile set T.
Question: Does T admit a periodic tiling?

Theorem 8.2.8 (Gurevich and Koryakov (1972a,b)) The PERIODIC TILING PROBLEM is undecidable. Positive instances are semi-decidable.

For a proof see, for example, (Gurevich and Koryakov, 1972a,b). In fact, (Gurevich and Koryakov, 1972a,b) show that the classes of Wang tile sets that (1) do not admit any tiling and (2) admit a periodic tiling are recursively inseparable from each other.
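Both semi-decision procedures mentioned above boil down to finite searches. As an illustration of the positive side, the following sketch (not from the chapter) looks for a valid tiling of a p × p torus, which wraps around to a fully periodic tiling of the plane; tiles are (north, east, south, west) tuples as in the earlier sketches, and the search is brute force, hence exponential.

```python
from itertools import product

def admits_periodic_tiling_upto(tiles, max_period):
    """Semi-decision sketch for the PERIODIC TILING PROBLEM: try all tilings of a
    p x p torus for p = 1, ..., max_period and return True on the first valid one."""
    for p in range(1, max_period + 1):
        for choice in product(tiles, repeat=p * p):
            c = {(i, j): choice[i * p + j] for i in range(p) for j in range(p)}
            if all(c[(i, j)][0] == c[(i, (j + 1) % p)][2] and    # north meets south above
                   c[(i, j)][1] == c[((i + 1) % p, j)][3]        # east meets west to the right
                   for i in range(p) for j in range(p)):
                return True
    return False        # no periodic tiling with period at most max_period was found
```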

8.3 Undecidability concerning cellular automata

Undecidability results of Section 8.2 can be used to show that a number of questions concerning cellular automata are undecidable. We start by discussing decision problems concerning two- and higher-dimensional CA in Section 8.3.1, and continue in Section 8.3.2 with questions about one-dimensional CA. We start by observing some simple connections between tilings and two-dimensional cellular automata. We then prove that CA INJECTIVITY is undecidable for two-dimensional CA. To prove this


fact we construct a set of Wang tiles with a plane-filling property that allows us to check the correctness of a tiling along a snaking path. This construction is based on Robinson’s Wang tile set. In Section 8.3.2 we relate NW-deterministic Wang tilings to space-time diagrams of one-dimensional cellular automata, and use this correspondence to prove that asymptotic behaviours of one-dimensional CA are beyond algorithmic techniques, i.e., are undecidable.

8.3.1 Two-dimensional CA

It is hardly surprising that decision problems on Wang tiles can be converted into questions concerning two-dimensional cellular automata. Tilings are static variants of CA dynamics where the local update rule has been replaced by a local matching relation. We start by demonstrating this with a simple example.

Example 8.3.1 Consider the following decision problem.

FIXED POINT EXISTENCE
Input: Cellular automaton F.
Question: Does there exist a configuration c such that F(c) = c?

The problem must be undecidable among two-dimensional CA since otherwise we could solve the TILING PROBLEM as follows: for a given Wang tile set T we construct a cellular automaton with state set T and a local rule that keeps a state unchanged if, and only if, the state matches (as a tile) its neighbours. It is obvious that fixed points are exactly the valid tilings.

In contrast, among one-dimensional cellular automata the FIXED POINT EXISTENCE is easily decidable, as there are algorithms to determine if a given one-dimensional subshift of finite type is empty. Note that the set of fixed points is always an SFT, and it can be effectively constructed for any given cellular automaton.

A cellular automaton F : A^{Z^d} → A^{Z^d} is nilpotent if F^n(A^{Z^d}) is a singleton set for some n ∈ N. Clearly then the unique configuration in F^n(A^{Z^d}) is the ⊥-homogeneous configuration ⊥^{Z^d}, for the unique stable state ⊥ of F. Nilpotent CA have the simplest possible long-term behaviour since all configurations lead to the same fixed point. Consider the following decision problem.

CA NILPOTENCY
Input: Cellular automaton F.
Question: Is F nilpotent?

Cellular automata, tilings and (un)computability

281

Proof For semi-decidability notice that, for any n, we can effectively construct a local rule of the CA F^n and check whether the local rule maps everything into the same state. If that happens for some n then we report that the CA is nilpotent.

To prove undecidability we many–one reduce the TILING PROBLEM. For any given Wang tile set T we construct a cellular automaton whose state set is T ∪ {⊥}, where ⊥ ∉ T is a new symbol. The local rule turns a cell into state ⊥ except if the cell and its four neighbours are tiles that match in colour, in which case the state is not changed. Let us prove that this CA is not nilpotent if, and only if, T admits a valid tiling.

(if) Suppose a valid tiling exists. This tiling, as a configuration of the CA, is a fixed point so it never becomes the uniform configuration ⊥^{Z^2}. The CA is not nilpotent.

(only if) Suppose no valid tiling exists. Then there is a number n such that no valid tiling of an n × n square exists. This means that after the first application of the CA, regardless of the initial configuration, state ⊥ appears in every n × n square. Since state ⊥ spreads, it is clear that the configuration eventually becomes ⊥^{Z^2}. Hence the CA is nilpotent.

Unlike the FIXED POINT EXISTENCE from Example 8.3.1, the decision problem CA NILPOTENCY is undecidable even among one-dimensional cellular automata. We consider the one-dimensional variant in Section 8.3.2.

Example 8.3.3 Let T be some aperiodic Wang tile set, say the Robinson tile set of Section 8.2.3. If we do the construction from the proof of Theorem 8.3.2 on this tile set we obtain a cellular automaton with the following non-trivial behaviour: the CA has non-periodic fixed points while all periodic configurations eventually become the ⊥-homogeneous configuration ⊥^{Z^2}.

Plane-filling paths

In the following we use two-dimensional cellular automata that execute the one-dimensional xor rule along paths that snake on the plane. The shape of the snaking path is controlled by Wang tiles, and as long as there are no tiling errors on the path, the path is forced into the shape of a plane-filling curve. This motivates the following definition of directed tiles.

A directed Wang tile is a Wang tile with a follower arrow that points to one of its four neighbours. More formally, we have a follower assignment function F that assigns to each tile a a follower vector F(a) ∈ {(±1, 0), (0, ±1)}. In a configuration c, the follower of the position i ∈ Z^2 is i + F(c(i)). Starting in any position i_0 ∈ Z^2 we can then follow the arrows to obtain the path i_0, i_1, . . . of positions i_j ∈ Z^2 where i_{j+1} is the follower of i_j, for all j = 0, 1, . . . . We say that the path is locally valid if in c the tiling is valid at all positions i_j of the path. We say it is locally valid with neighbourhood N ⊆ Z^2 if the tiling is valid at all neighbouring positions i_j + N of the path.
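The ‘device’ implicit in this definition, which walks along follower arrows and keeps checking the tiling condition, can be sketched directly. The helpers valid_at (the Wang-tile check from Section 8.2.1, extended to whatever neighbourhood is required) and follower (the assignment F) are assumed here; the chapter does not define them in this form.

```python
def locally_valid_path(c, start, steps, valid_at, follower):
    """Follow the follower arrows from `start` for at most `steps` moves and return
    the visited positions, stopping as soon as the tiling condition fails on the path."""
    i, j = start
    path = [(i, j)]
    for _ in range(steps):
        if not valid_at(c, (i, j)):       # a tiling error ends the locally valid path
            break
        di, dj = follower(c[(i, j)])
        i, j = i + di, j + dj
        path.append((i, j))
    return path
```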


A set of directed tiles is said to have the plane-filling property (with neighbourhood N) if it satisfies the following two conditions.
(a) There exists a valid tiling of the plane.
(b) In every configuration, every locally valid path (with neighbourhood N) visits all tiles of arbitrarily large squares. In other words, for every n ∈ N there is an n × n square such that all cells of the square are on the path.

Intuitively the plane-filling property means that the simple device that moves over configuration c, repeatedly verifying the tiling condition in its neighbourhood and moving on to the follower tile, necessarily eventually either finds a tiling error or covers arbitrarily large squares. Note that the plane-filling property does not assume that the configuration c is correctly tiled everywhere on the plane. As long as the tiling condition is valid along a path, the path must snake through larger and larger squares.

Note that conditions (a) and (b) imply that the underlying Wang tile set is aperiodic. Indeed, a fully periodic tiling necessarily only has periodically repeating paths, which hence cannot be plane-filling.

Figure 8.29 Segments of the Hilbert plane-filling curve through 16 × 16 squares.

In the following we describe a directed Wang tile set that has the plane-filling property with the Moore neighbourhood N = {(x, y) | − 1 ≤ x, y ≤ 1}. The construction is built on the Robinson aperiodic tile set. The set provides a recursive platform on which to enforce the well-known Hilbert curve shown in Figure 8.29. The curve is defined by recursively subdividing a square into four quadrants and joining the four curves through the quadrants into a single curve. This process is described by the substitution rule shown in Figure 8.30. The rule is over 12 symbols that describe possible orientations and entry/exit directions of the Hilbert curve in the square. The path comes in four orientations (corresponding to the four columns in Figure 8.30) and in each orientation we have three combinations of entry/exit directions (the three


rows in the figure). When the substitution is iterated n times one obtains a Hilbert curve of size 2^n × 2^n.

Figure 8.30 Substitution rules that recursively generate the Hilbert plane-filling curve. The four rules on each row are reflected and/or half-turned versions of each other.
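The 12-symbol substitution of Figure 8.30 is not reproduced here, but the curve it generates is the standard Hilbert curve, which can also be produced by the classic two-symbol L-system A → +BF−AFA−FB+, B → −AF+BFB+FA−. The sketch below uses that L-system, so it illustrates the curve itself rather than the tile labelling:

```python
def hilbert_points(n):
    """Lattice points of a 2^n x 2^n Hilbert curve, generated from the classic
    L-system with axiom A and rules A -> +BF-AFA-FB+, B -> -AF+BFB+FA-."""
    rules = {"A": "+BF-AFA-FB+", "B": "-AF+BFB+FA-"}
    s = "A"
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    x, y, dx, dy = 0, 0, 1, 0
    points = [(x, y)]
    for ch in s:
        if ch == "F":                     # move one step forward
            x, y = x + dx, y + dy
            points.append((x, y))
        elif ch == "+":                   # turn left 90 degrees
            dx, dy = -dy, dx
        elif ch == "-":                   # turn right 90 degrees
            dx, dy = dy, -dx
    return points

# The curve visits every cell of the square exactly once:
assert sorted(hilbert_points(2)) == [(x, y) for x in range(4) for y in range(4)]
```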

Let us label the central arrows of all Robinson tiles by the 12 symbols of Figure 8.30. All four arrows of each cross must have identical labels, and we impose the usual tiling rule that meeting arrow heads and tails must have the same label. Recall the recursively defined (2^n − 1)-patches that necessarily appear in valid Robinson tilings. In each such patch the label of the central cross gets transmitted to the midpoints of the four sides of the patch. We call this the label of the patch. The idea is that the label gives the orientation of the Hilbert curve through the patch. We want to guarantee that the labels of the four quadrants (all of which are (2^{n−1} − 1)-patches) form the image of the central label under the substitution rule of Figure 8.30. This can be enforced by constraining the label combinations in tiles

(and their rotated variants) that appear where the labels of the quadrants meet the central label (i.e., the circled positions in Figure 8.31a). The label of the principal arrow forces the labels of the side arrows as determined by the substitution rule. Figure 8.32 shows the combinations of labels (b) that a rule (a) admits. With this constraint it is clear that in any correctly tiled (2^n − 1)-patch the labels of its k-th level descendants (which are (2^{n−k} − 1)-patches) implement the Hilbert curve of size 2^k × 2^k having the orientation specified by the centre of the patch.

In particular, one is interested in the curve at the level k = n − 1, implemented by the labels of the 1-patches. These are precisely the crosses with the odd–odd parity tile. Our paths will hence follow the directions given by the labels of the crosses with odd–odd parity. The final directed tiles are 2 × 2 blocks of Robinson tiles so that each directed tile contains exactly one odd–odd cross. The label of this cross (its exit



Figure 8.31 (a) Example of a recursive labelling of the (2^n − 1)-patch to force the Hilbert curve. The substitution rule iterated on the central cross label gives the shape of the curve. The encircled tiles force the first level of the substitution so that the central crosses of the four quadrants have the correct labels. (b) The second level of the substitution.


Figure 8.32 Generic substitution rule in (a) permits the combinations of labels in (b). The possible side arrow next to the principal arrow is not shown in the tiles. The tiles in (b) are used in the encircled positions in Figure 8.31(a) to connect the central label of a patch with its quadrants.

direction) provides the direction of the tile. We also require that the entry and exit points of consecutive tiles on the curve match.

From the construction it is clear that every infinite path on a valid tiling of the plane covers arbitrarily large squares. However, the plane-filling property requires more: the tiling is not required to be valid outside of the path, and still the path should cover squares of all sizes. To see that the tiles have this stronger property, we basically repeat the reasoning about (2^n − 1)-patches from Section 8.2.3.

Theorem 8.3.4 (Kari (1994)) There exists a set SNAKES of directed Wang tiles that has the plane-filling property with some finite neighbourhood N.

Proof The tile set was constructed above, and we use the Moore neighbourhood


N = {(x, y) | −1 ≤ x, y ≤ 1}. It is clear that (2^n − 1)-patches can be labelled properly: for any chosen label at the central cross, the labels of smaller patches are determined by iterating the substitution. So there exist valid tilings of the plane, and condition (a) of the plane-filling property is satisfied.

To prove the second condition (b), consider an arbitrary configuration c and an infinite path P that is locally valid with neighbourhood N. Let us show, using mathematical induction on n, that every tile on the path, with the possible exception of the first 4^{n−1} tiles, belongs to a unique (2^n − 1)-patch. The case n = 1 is trivial. Suppose then that the claim is true for n and let C be an odd–odd cross on path P with at least 4^n predecessors (and successors) on P. By the inductive hypothesis C belongs to a unique (2^n − 1)-patch s. There are four possibilities for the orientation of this patch. Due to symmetry, we may assume that s faces north and east. In what follows we refer to the positions indicated in Figure 8.33.


Figure 8.33 Positions and regions used in the proof that a locally valid path must have the shape of the Hilbert curve.

Path P clearly follows the size 2^{n−1} × 2^{n−1} Hilbert curve through s oriented according to its label. Because we assumed the tiling is valid on the Moore neighbourhood of P, the tiling is correct around s. In particular, we can reason as in Section 8.2.3 to conclude that X is a cross, with arms radiating out of it in regions a and b. Let L(X) be the label of cross X. The labelling of s is determined by the substitution rule for label L(X). There are 12 cases depending on L(X), but we only need to consider two situations, as the others are symmetric to these.

(1) The path will continue from s into tile U or V. Assume it goes up to U, the other case being symmetric. By the inductive hypothesis, U belongs to a unique (2^n − 1)-patch sU, and the path follows the Hilbert curve through sU. By uniqueness, patches s and sU cannot intersect, so U is located at the bottom row of sU. But U is the entry tile to sU, and the only tile on the bottom row that can be the entry to the patch from below is located at the lower left corner of the patch, so sU must be positioned directly above s.

(2) The path may enter s from tiles Y or Z. The cases being symmetric, we assume


the entry is from tile Z. By the inductive hypothesis, Z belongs to a unique (2^n − 1)-patch sZ. This patch must be located on the right of s, and the path through it must be the Hilbert curve, oriented as instructed by L(X).

The reasoning can be repeated on sU and/or sZ in place of s. This way we are forced to build the Hilbert curve through a (2^{n+1} − 1)-patch with central cross X. Uniqueness follows easily from the fact that the location of X was uniquely determined by the orientation of patch s.

Example 8.3.5 The tile set SNAKES suggests the following two-dimensional cellular automaton Snake xor that is injective on fully periodic configurations but not injective in general. This demonstrates that the first corollary of Theorem 8.1.25 (Corollary 8.1.26) does not hold for two-dimensional cellular automata. The state set is SNAKES × {0, 1}, that is, each state contains a tile and a binary value. The local rule is as follows: the cell checks if the tiling is correct around its Moore neighbourhood.
• If the tiling is not valid then the cell does not change its state.
• If the tiling is valid then the cell executes xor on its bit and the bit on the follower tile.
The tile component of the state is never changed. The rule is analogous to the one-dimensional Controlled xor of Example 8.1.20, except that now the tiling determines which cells are active.

This CA F is not injective: let c be a valid tiling of the plane by SNAKES, and let c0 and c1 be the two configurations whose tile components form the tiling c and whose bits are all 0 and 1, respectively. The tiling is everywhere valid so all cells perform xor with the follower. We have F(c0) = F(c1) = c0.

But F is injective on fully periodic configurations. Suppose, on the contrary, that there are different fully periodic configurations p0 and p1 such that F(p0) = F(p1). As the tile components never change, the tile components of p0 and p1 form the same fully periodic configuration p. Because p0 ≠ p1, there is a position i_0 ∈ Z^2 where the bit components in p0 and p1 differ. As the bits become identical at the next time step, the tiling must be correct in position i_0, and p0 and p1 must have opposite bits in the follower position i_1. The reasoning can be repeated in position i_1, providing its follower i_2, etc. We obtain an infinite path i_0, i_1, i_2, . . . that is locally valid with the Moore neighbourhood. By the plane-filling property of SNAKES, this path visits arbitrarily large squares, but due to the periodicity of p the path is periodically repeating, a contradiction.

Figure 8.34 summarises the relations between the injectivity and surjectivity properties of F, FP and F⊥ in the two- and higher-dimensional setting. Notice the difference to the one-dimensional picture in Figure 8.12 concerning the periodic configurations, observed in Example 8.3.5 above.


Figure 8.34 Implications between injectivity and surjectivity properties in two- and higher-dimensional CA. Question marks indicate open problems.
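To make the xor mechanism of Example 8.3.5 concrete, here is a minimal Python sketch of such a rule on a finite torus. It is only schematic: the actual SNAKES tiles, their matching rules and the Moore-neighbourhood validity test are not reproduced here; valid_around and follower are placeholders to be supplied, and the toy instantiation at the end (one tile type pointing east, validity always granted) merely illustrates why the all-0 and all-1 bit layers collapse to the same image.

```python
def snake_xor_step(config, valid_around, follower):
    """One step of a Snake-xor-style rule on an h-by-w torus.

    config: dict mapping (i, j) -> (tile, bit).
    valid_around(config, pos): True if the tile layer is locally valid
        around pos (in the real construction: on its Moore neighbourhood).
    follower(tile): offset (di, dj) of the follower tile.
    The tile layer is never changed; only the bits are updated.
    """
    h = 1 + max(i for i, _ in config)
    w = 1 + max(j for _, j in config)
    new = {}
    for (i, j), (tile, bit) in config.items():
        if valid_around(config, (i, j)):
            di, dj = follower(tile)
            _, follower_bit = config[((i + di) % h, (j + dj) % w)]
            new[(i, j)] = (tile, bit ^ follower_bit)
        else:
            new[(i, j)] = (tile, bit)
    return new

# Toy instantiation: every tile points east and validity is always granted.
torus = {(i, j): ('east', 1) for i in range(4) for j in range(4)}
image = snake_xor_step(torus, lambda c, p: True, lambda t: (0, 1))
assert all(bit == 0 for _, bit in image.values())   # all-1 maps to all-0, just like all-0
```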

Reversibility of two-dimensional CA

Now we are ready to prove that there is no algorithm to determine if a given two-dimensional CA is reversible.

Theorem 8.3.6 CA INJECTIVITY is undecidable among two-dimensional CA. In any dimension, positive instances are semi-decidable.

Proof The semi-decidability follows from the fact that injective CA have an inverse CA. One can guess the inverse CA and verify that the composition with F gives the identity. Let us next prove that CA INJECTIVITY is undecidable by many–one reducing the TILING PROBLEM to it. In the reduction we use any directed Wang tile set D that has the plane-filling property with some neighbourhood N, such as, for example, SNAKES of Theorem 8.3.4.

Let T be any given set of Wang tiles. In the following we construct a cellular automaton that is not injective if, and only if, T admits a tiling. The state set of the CA is D × T × {0, 1}. So each state has a D-component, a T-component and a binary value. The local rule is analogous to the Snake xor rule of Example 8.3.5: each cell checks if the tiling by the D-components is valid in its N-neighbourhood, and if the tiling by the T-components is valid at the cell.

• If the tiling is not valid in one of the two components then the cell does not change its state.
• If the tiling is valid on both components then the cell executes xor on its bit and the bit on the follower tile.

The follower is provided by the D-component of the state. So the tile components never change.


(if ) Suppose T admits a tiling. Construct two configurations c0 and c1 where the T - and D-components form the same valid tilings. In c0 all bits are 0 while in c1 they are all 1. Since the tilings are everywhere valid, every cell performs xor with its neighbour, which means that every bit becomes 0. Hence F(c0 ) = F(c1 ) = c0 , and F is not injective. (only if ) Suppose then that F is not injective. There are two different configurations c0 and c1 such that F(c0 ) = F(c1 ). Tile components are not modified by the CA so they are identical in c0 and c1 . There is a position i0 ∈ Z2 where the bit components in c0 and c1 differ. The tilings in the D and T must be locally correct at position i0 and c0 and c1 must have opposite bits in the follower position i1 . We repeatedly apply this reasoning to obtain an infinite path i0 , i1 , i2 , . . . with locally valid D- and T -tilings. The plane-filling property of D forces the path to cover arbitrarily large squares. The tiling by T is valid on these squares which implies that T admits a tiling of the plane (Lemma 8.2.2). The theorem above has the following interesting consequence on the neighbourhood sizes of a reversible CA and its inverse. By topological arguments we know that every bijective CA F has an inverse automaton so that each cell can determine its previous state based on the present states in some bounded neighbourhood around it. Yet, by Theorem 8.3.6 this neighbourhood of the inverse automaton cannot be algorithmically bounded for given F. Otherwise we could simply enumerate all (finitely many) CA with such bounded neighbourhood and check if any one of them is the inverse of F, obtaining an algorithm that contradicts Theorem 8.3.6. Notice also that if the cellular automaton F constructed in the proof above is reversible then it is automatically periodic, i.e., F n is the identity function for some n. This follows from the fact that the chains of active cells that perform xor have bounded lengths if the given tile set does not admit a valid tiling. Such bounded chains of xors repeat their states periodically, so F is periodic. If, on the other hand, F is not reversible then it of course cannot be periodic either, so we have that the following decision problem is undecidable. CA PERIODICITY Input: Cellular automaton F. Question: Is F n the identity for some n? The problem is clearly semi-decidable since we can guess n and verify that F n is the identity. Corollary 8.3.7 CA PERIODICITY is undecidable among two-dimensional CA. In any dimension, positive instances are semi-decidable. Without proof we also mention the following stronger result. Theorem 8.3.8 (Kari and Ollinger (2008)) CA PERIODICITY is undecidable among one-dimensional CA.
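The verification step in the semi-decision procedure for CA PERIODICITY (guess n, check that F^n is the identity) is a finite computation. For a one-dimensional CA of radius r, F^n is the identity exactly when the n-fold composition of the local rule sends every word of length 2nr + 1 to its centre symbol. A minimal Python sketch, assuming the local rule is given as a function on (2r + 1)-tuples (the names and representation are ours, not from the text):

```python
from itertools import product

def step_word(word, f, r):
    # Apply the radius-r local rule f to every full window of a finite word;
    # the result is shorter by 2*r symbols.
    return tuple(f(word[i:i + 2 * r + 1]) for i in range(len(word) - 2 * r))

def is_identity_power(f, alphabet, r, n):
    # F^n is the identity iff the composed rule sends every (2nr+1)-word
    # to its centre symbol.
    for w in product(alphabet, repeat=2 * n * r + 1):
        v = w
        for _ in range(n):
            v = step_word(v, f, r)
        if v[0] != w[n * r]:
            return False
    return True

# Example: the identity CA (radius 1) passes the test, the shift CA does not.
assert is_identity_power(lambda w: w[1], alphabet=(0, 1), r=1, n=3)
assert not is_identity_power(lambda w: w[2], alphabet=(0, 1), r=1, n=3)
```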


Surjectivity of two-dimensional CA

Let us briefly discuss the decision problem CA SURJECTIVITY. It can be proved undecidable among two-dimensional cellular automata without using the concept of aperiodic Wang tile sets. The idea is to apply the Garden of Eden theorem (see Section 8.1.2) and consider preinjectivity. The FINITE TILING PROBLEM of Exercise 8.5.6 can be reduced to preinjectivity using a suitable set of directed Wang tiles, which satisfies a finite version of the plane-filling property. The reduction is analogous to the proof of Theorem 8.3.6: we apply the xor CA along paths determined by the directed tiles. See, for example, (Kari, 2009) for a short proof. Note that negative instances of CA SURJECTIVITY can be semi-decided: simply guess an orphan pattern and verify that it has no pre-images. Alternatively, using the Garden of Eden theorem, one may guess two ⊥-finite configurations for some state ⊥ and verify that they have the same image.

Theorem 8.3.9 (Kari (1994); see also Durand (1994)) CA SURJECTIVITY is undecidable among two-dimensional CA. Negative instances are semi-decidable.

With a more careful analysis one can prove that the classes of two-dimensional CA that (1) are periodic and (2) are not surjective, are recursively inseparable (Kari, 2011). This directly implies the undecidability of any property that separates periodic and non-surjective CA. Such properties include, for example, injectivity and surjectivity of F among fully periodic configurations.
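The 'guess an orphan and verify' step is again a finite check once a candidate pattern is fixed. In one dimension, where the check is small enough to illustrate, a word is an orphan precisely when none of its finitely many potential pre-images maps onto it; a brute-force Python sketch (representation of the local rule assumed, not from the text):

```python
from itertools import product

def is_orphan(word, f, alphabet, r):
    """True if no word of length len(word) + 2r maps onto `word`
    under the radius-r local rule f of a one-dimensional CA."""
    n = len(word)
    for pre in product(alphabet, repeat=n + 2 * r):
        if all(f(pre[i:i + 2 * r + 1]) == word[i] for i in range(n)):
            return False          # pre-image found, so not an orphan
    return True

# Example: the xor CA (radius 1, next state = left xor right) is surjective,
# so no word is an orphan for it.
xor_rule = lambda w: w[0] ^ w[2]
assert not is_orphan((1, 0, 1, 1), xor_rule, alphabet=(0, 1), r=1)
```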

8.3.2 One-dimensional CA

We have seen in Section 8.3.1 that tiling problems and problems concerning two-dimensional cellular automata are closely related. But it is also possible to relate the long-term evolution of one-dimensional cellular automata to tiling spaces by considering the space-time diagrams as particular types of tilings. Indeed, the set of all bi-infinite space-time diagrams of a one-dimensional cellular automaton is a two-dimensional subshift of finite type, where the local update rule of the CA gives the local matching rule of the subshift. The associated tile sets must have a local determinism – or expansivity – property in the positive time direction, corresponding to the local update rule.

NW-deterministic Wang tile sets

Consider a set T of Wang tiles. We say that the set is NW-deterministic if for all tiles a, b

a ≠ b ⟹ a← ≠ b← or a↑ ≠ b↑.

In other words, the colours on the upper and left sides uniquely determine the tile. See Figure 8.35a for an illustration of the concept. Consider now a valid tiling of the plane by NW-deterministic tiles. Each tile is uniquely determined by its left and upper neighbour. Then the tiles on each diagonal in


the NE-SW direction locally determine the tiles on the next diagonal below it. If we interpret these diagonals as configurations of a CA then there is a radius-1/2 local rule such that valid tilings are space-time diagrams of the CA; see Figure 8.35b.


Figure 8.35 NW-deterministic sets of Wang tiles: (a) there is at most one matching tile z for any x and y, (b) diagonals of NW-deterministic tilings interpreted as configurations of one-dimensional CA.
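Whether a concrete tile set satisfies the NW-determinism condition is immediate to check. A small Python sketch, with Wang tiles represented as (north, east, south, west) colour tuples (a convention of ours, not fixed by the text):

```python
def is_nw_deterministic(tiles):
    """True if the (north, west) colour pair determines the tile uniquely."""
    seen = {}
    for tile in tiles:
        north, east, south, west = tile
        key = (north, west)
        if seen.setdefault(key, tile) != tile:
            return False
    return True

# Two tiles agreeing on both their upper and left colours break the condition.
assert is_nw_deterministic({(0, 1, 2, 3), (1, 1, 2, 3)})
assert not is_nw_deterministic({(0, 1, 2, 3), (0, 9, 9, 3)})
```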

Robinson's aperiodic Wang tile set of Section 8.2.3 is not quite NW-deterministic, but with a small modification determinism can be obtained. Let us see where the determinism fails. The parity tiles are deterministic, even determined by any single side of the tile. Crosses can be distinguished from arms NW-deterministically since a tile is an arm if, and only if, there is an incoming arrow on its north or on its west side. In the case that a tile is a cross, its orientation is uniquely determined by the side arrows on its north and its west sides. And if we know whether an arm is horizontal or vertical then its orientation and side arrows are also uniquely NW-determined. So the only source of uncertainty is in determining if an arm is horizontally or vertically oriented. This uncertainty only applies to tiles with even–even parity, since even–odd and odd–even positions only have vertical and horizontal arms, respectively, and odd–odd positions only contain crosses. The following simple observation helps to distinguish horizontal and vertical arms, and hence to resolve the only NW-non-determinism in the Robinson tiles: in the special (2^n − 1)-patches defined recursively in Section 8.2.3, the horizontal and vertical arms alternate on each diagonal. There can be, of course, a number of crosses between consecutive arms, but we can nevertheless add to the tiles binary signals along NW-to-SE diagonals that specify the orientation of the next arm. We implement the diagonal signals as zigzag lines because the Wang tile matching rules do


not allow direct diagonal dependencies. More precisely, we add to each tile a zigzag layer as detailed in Figure 8.36.

Figure 8.36 The zigzag layer added to Robinson's tiles to make the tile set NW-deterministic. The figure shows the zigzag arrows, labelled with h or v, attached to (a) all crosses (with odd–odd and even–even parities), (b) all tiles with even–odd or odd–even parities, (c) horizontal arms with even–even parity and (d) vertical arms with even–even parity. Label x can be either h or v.

It is easy to see that the obtained NW-deterministic tile set is still aperiodic: it admits a tiling since all (2^n − 1)-patches can be decorated with matching zigzag arrows, and it does not admit a valid periodic tiling since already without the new zigzag constraints the underlying tile set does not admit a periodic tiling. We have proved the following lemma.

Lemma 8.3.10 The Robinson tile set enhanced with the zigzag arrows is an NW-deterministic aperiodic Wang tile set.

There are even smaller NW-deterministic and aperiodic tile sets. Ammann's aperiodic Wang tile set from 1977, shown in Figure 8.37, contains 16 tiles and one can easily verify that it is NW-deterministic. In fact, the tile set is also deterministic in the opposite SE-direction. See (Grünbaum and Shephard, 1986) for a proof that Ammann's tile set is aperiodic.


Figure 8.37 Ammann's NW- and SE-deterministic aperiodic Wang tile set.

Example 8.3.11 Any NW-deterministic and aperiodic Wang tile set T can be converted into a one-dimensional cellular automaton F with an interesting property. The state set of the CA is T ∪ {⊥} where ⊥ ∉ T is a new spreading state. The CA uses the radius-1/2 neighbourhood N = {0, 1} and the local rule f that turns a cell into state ⊥ except that f(x, y) = z if x, y, z ∈ T are three tiles that match each other as in Figure 8.35a. If this CA is started with any periodic initial configuration c then it evolves into


the ⊥-uniform configuration. Indeed, if no state ⊥ is ever created then the space-time diagram – drawn diagonally as in Figure 8.35b – provides a periodic tiling by T, which is not possible. So ⊥ appears in some F^n(c). As state ⊥ spreads and since F^n(c) is periodic, state ⊥ occupies every cell after a finite time. On the other hand, some non-periodic initial configurations c have orbits where state ⊥ never appears. We can choose as c a diagonal of any valid tiling by T.

Our interest in the following is the TILING PROBLEM in the restricted setup where the given tile set is NW-deterministic. It turns out that this decision problem remains undecidable. We skip the details of the proof, but the idea is to 'determinise' the proof of Theorem 8.2.7. We have seen above that the underlying Robinson aperiodic tile set can be transformed into an NW-deterministic form. The red/green colouring of boards (illustrated in Figure 8.25) can be done in an NW-deterministic way by assigning the colour to each infinite row and column. This works since the boards are aligned so that the corners of boards of different sizes are located on separate rows and columns. The obstruction signals in Figure 8.28 are non-deterministic, but the technique can be replaced by an NW-deterministic method to recognise the free rows and columns in any board that is at the upper right corner of a bigger board (see Kari, 1992). Finally, it is an easy matter to make the Turing machine simulation by Wang tiles NW-deterministic. The tape is represented along NE-to-SW diagonals and the time increases to the southeast. Putting it all together one can show the following.

Theorem 8.3.12 (Kari (1992)) The decision problem TILING PROBLEM is undecidable among NW-deterministic sets of Wang tiles.

One defines analogously NE-, SW- and SE-deterministic tile sets. Finally, we call a tile set 4-way deterministic if it is deterministic in all four directions simultaneously. It turns out that Robinson's tiles can be turned into a 4-way deterministic aperiodic tile set (see Kari and Papasoglu, 1999) and that the TILING PROBLEM is undecidable even among 4-way deterministic tile sets (see Lukkarila, 2009).

Nilpotency of one-dimensional CA

Theorem 8.3.12 together with the construction in Example 8.3.11 gives the following undecidability result for one-dimensional cellular automata.

Theorem 8.3.13 (Kari (1992)) CA NILPOTENCY is undecidable among one-dimensional CA.

Proof We many–one reduce the T ILING PROBLEM for NW-deterministic Wang tile sets. Let T be a given NW-deterministic tile set. Construct a one-dimensional CA F whose state set is T ∪ {⊥} and the local rule turns a cell into the quiescent state ⊥ except in the case that the cell and its right neighbour are in states x, y ∈ T , respectively, and tile z ∈ T exists so that tiles x, y, z match as in Figure 8.35a. In this case z is the new state of the cell. Note that state ⊥ is quiescent, and even more strongly, a spreading state so that a cell goes to state ⊥ if it has a neighbour in state ⊥.


The CA F is not nilpotent if, and only if, T admits a valid tiling.

(if) Suppose a valid tiling exists. The diagonals of the tiling have orbits that do not contain state ⊥ in any cell, so the CA is not nilpotent.

(only if) Suppose no valid tiling exists. Then there is a number n such that no valid tiling of an n × n square exists. This means that for every initial configuration c, in the configuration F^{2n}(c) every cell is in state ⊥. Otherwise a valid tiling of an n × n square can be read from the space-time diagram of the configurations c, F(c), . . . , F^{2n}(c). We conclude that the CA is nilpotent.

Based on this result a number of dynamical properties can be proved undecidable, including equicontinuity and sensitivity to initial conditions (see Durand et al., 2003; Kari, 2008). Also it follows that the topological entropy of one-dimensional cellular automata cannot be algorithmically computed (see Hurd et al., 1992).
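The local rule used in this proof (and in Example 8.3.11) is easy to realise from a concrete tile set. A minimal Python sketch, again with tiles as (north, east, south, west) tuples; we assume, following Figure 8.35a, that the left-hand tile x constrains the west edge of z and the upper tile y constrains its north edge. The orientation convention, like the names, is ours, and may need swapping for a different drawing of the diagonals.

```python
SPREAD = None   # the quiescent, spreading state, playing the role of ⊥

def rule_from_tiles(tiles):
    """Radius-1/2 local rule: f(x, y) is the unique tile matching x on the
    left and y above (if such a tile exists), and the spreading state otherwise."""
    # NW-determinism makes (north colour, west colour) a valid lookup key.
    lookup = {(n, w): (n, e, s, w) for (n, e, s, w) in tiles}

    def f(x, y):
        if x is SPREAD or y is SPREAD:
            return SPREAD
        north_needed = y[2]     # must equal the south colour of the tile above
        west_needed = x[1]      # must equal the east colour of the tile on the left
        return lookup.get((north_needed, west_needed), SPREAD)

    return f

def step(config, f):
    # One synchronous update of a finite cyclic configuration with
    # neighbourhood {0, 1}: the cell and its right neighbour.
    n = len(config)
    return [f(config[i], config[(i + 1) % n]) for i in range(n)]
```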

8.4 Conclusion We started this chapter by reviewing several classical results in the theory of cellular automata. Then we discussed a number of algorithmic questions concerning tilings and cellular automata. We discovered that it is possible to algorithmically test one-dimensional cellular automata for ‘single step’ properties such as injectivity and surjectivity, while the same properties are undecidable in the two-dimensional case. The difference stems from the fact that two-dimensional subshifts of finite type (i.e., tiling spaces) are capable of simulating arbitrary computation, while the onedimensional variants are computationally simple. This difference is manifested, most notably, in the fact that one can test whether a given one-dimensional subshift of finite type is empty or not, while this emptiness question in the two-dimensional case (i.e., the T ILING PROBLEM) is well-known to be undecidable (Berger, 1966). Long-term properties, or ‘asymptotic properties’, such as nilpotency or periodicity are undecidable even among one-dimensional cellular automata. The space-time diagrams of one-dimensional cellular automata are tilings with the additional constraint of determinism in the direction that represents forward time. In the case of reversible cellular automata also the backward time direction is deterministic. The new constraints of determinism do not hinder the computational power of tiling spaces, which is reflected as various undecidability results concerning the infinite space-time of one-dimensional cellular automata.

8.5 Exercises Section 8.1 Exercise 8.5.1 Let F be the majority CA of Example 8.1.11.


(a) Determine all fixed points of F.
(b) Determine all configurations c whose orbit c, F(c), F^2(c), . . . converges in the product topology.

Exercise 8.5.2 Fill in the details in the proof sketch given for the balance theorem (Theorem 8.1.9).

Exercise 8.5.3 Consider the Game-of-Life CA from Example 8.1.18.
(a) Calculate how much imbalance there is in the local rule. More precisely, count how many 3 × 3 patterns are mapped to states alive and dead.
(b) Using the result of (a) and the proof technique from Exercise 8.5.2, show that Game-of-Life has an orphan of size 40 × 40.

Exercise 8.5.4 Let us call a configuration c ∈ A^{Z^d} rich if it contains every finite pattern over the state set A. Let F : A^{Z^d} → A^{Z^d} be a surjective cellular automaton. Prove that c is rich if, and only if, F(c) is rich.
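The count asked for in Exercise 8.5.3(a) can also be obtained by enumerating all 512 neighbourhoods; a short Python script using the standard birth/survival rule (140 of the patterns come out mapped to 'alive', 372 to 'dead'):

```python
from itertools import product

def life_local_rule(block):
    # block: the 9 cells of a 3x3 neighbourhood, row by row; block[4] is the centre.
    live_neighbours = sum(block) - block[4]
    return 1 if live_neighbours == 3 or (block[4] == 1 and live_neighbours == 2) else 0

alive = sum(life_local_rule(b) for b in product((0, 1), repeat=9))
print(alive, 512 - alive)   # patterns mapped to 'alive' vs 'dead'
```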

Section 8.2

Exercise 8.5.5 Show that the 28 Robinson tiles without the parity constraint are not an aperiodic tile set.

Exercise 8.5.6 Let T be a Wang tile set, and b ∈ T a specified blank tile. A finite tiling is a tiling where only a finite number of tiles are non-blank. A tiling where all tiles are blank is trivial. Prove that the following decision problem is undecidable.

FINITE TILING PROBLEM
Input: A finite set T of Wang tiles and a blank tile b ∈ T.
Question: Does there exist a finite tiling that is not trivial?

Exercise 8.5.7 Prove that there exists a fixed Wang tile set T for which the following COMPLETION PROBLEM is undecidable.

COMPLETION PROBLEM FOR WANG TILE SET T
Input: A finite pattern p ∈ T^D.
Question: Does there exist a tiling that contains pattern p?

Hint. Use the fact that there exists a universal Turing machine M = (Q, Γ, δ, q0, qh, b) whose HALTING PROBLEM is undecidable: there is no algorithm that would determine, for a given b-finite t ∈ Γ^Z, whether (q0, 0, t) ⊢* (qh, i, t′) for some i ∈ Z and t′ ∈ Γ^Z.


Section 8.3

Exercise 8.5.8 Recall that a cellular automaton F is nilpotent if there is a stable state ⊥ and a number n such that F^n(c) = ⊥^{Z^d} holds for all configurations c. A cellular automaton F is periodic if F^n is the identity for some n. Prove the following facts.
(a) CA F is nilpotent if, and only if, for all configurations c there is a time n such that F^n(c) = ⊥^{Z^d}. (In other words, show that a common n can be chosen for all configurations.)
(b) CA F is periodic if, and only if, all configurations are temporally periodic. (In other words, show that the configurations must have a common temporal period.)

Exercise 8.5.9 Prove that there exists a fixed cellular automaton F with a null state ⊥ such that the following decision problem associated with F is undecidable.

GARDEN OF EDEN PROBLEM FOR CA F
Input: A ⊥-finite configuration c.
Question: Is c a Garden of Eden for CA F?

Hint. Use Exercise 8.5.7.

9 Multidimensional shifts of finite type and sofic shifts Michael Hochman

9.1 Introduction

In this chapter we discuss (multidimensional) shifts of finite type, or SFTs for short. These are the sets X(E) ⊆ Σ^{Z^d} of d-dimensional configurations that arise by forbidding the occurrence of a finite list E of finite patterns. This local, combinatorial and perhaps naive definition leads to surprisingly complex behaviour, and therein lies some of the interest in these objects. SFTs also arise independently in a variety of contexts, including language theory, statistical physics, dynamical systems theory and the theory of cellular automata. We shall touch upon some of these connections later on.

The answer to the question 'what do SFTs look like' turns out to be profoundly different depending on whether one works in dimension one or higher. In dimension one, the configurations in an SFT X(E) can be described constructively: there is a certain finite directed graph G, derived explicitly from E, such that X(E) is (isomorphic to) the set of bi-infinite vertex paths in G. Most of the elementary questions, and many of the hard ones, can then be answered by combinatorial and algebraic analysis of G and its adjacency matrix. For a brief account of these matters, see Section 9.2.5.

In dimension d ≥ 2, which is our main focus here, things are entirely different. First and foremost, given E, one cannot generally give a constructive description of the elements of X(E). In fact, Berger showed in 1966 that it is impossible even to decide, given E, whether X(E) = ∅. Consequently, nearly any other property or quantity associated with X(E) is undecidable or uncomputable. What this tells us is that, in general, we cannot hope to know how a particular SFT behaves. But one can still hope to classify the types of behaviours that can occur in the class of SFTs, and, quite surprisingly, it has emerged in recent years that for many properties of SFTs such a classification exists. The development of this circle of ideas began with the characterisation of the possible rates of 'word-growth' (topological entropy) of SFTs, and was soon followed by a characterisation of the degrees of computability (in the sense of Medvedev and Muchnik) of SFTs, and of



the languages that arise by restricting configurations of SFTs to lower-dimensional lattices. Recently a characterisation of the possible sets of periods of periodic points in an SFT has been found, at least for some definitions of ‘period’. These are the main results we shall discuss here. After defining the objects of study in Section 9.2, we present the basic language, examples and results of symbolic dynamics. In Section 9.3 we give an account of the classical undecidability theorem of Berger, and present some basic constructions. Then in Section 9.4 we discuss recursive degrees, in Section 9.5 the languages induced by SFTs on lower-dimensional lattices, and in Section 9.6 the densities of patterns in SFTs, the rates of word growth, and, briefly, sets of periods. Although this chapter is self-contained, the first two sections have some overlap with Chapter 8. The reader may wish to consult that chapter for further examples and background.

9.2 Shifts of finite type and sofic shifts

In this section we give the main definitions and discuss them in the context of symbolic dynamics. More complete sources are Lind and Marcus (1995) and Kitchens (1998).

9.2.1 Main definitions and illustrations

A d-dimensional pattern over a finite alphabet Σ is a colouring a ∈ Σ^D of a finite set D ⊆ Z^d. The set D is called the support or the shape of a and is denoted supp(a). We write Σ∗d for the set of d-dimensional patterns. Note that Σ∗1 is not the same as Σ∗ = ∪_{n≥0} Σ^n: the latter consists of all patterns whose shape is [[1, n − 1]] for some n ∈ N, while the former allows all shapes. If a ∈ Σ^D and E ⊆ D then a|E is a subpattern of a, and a pattern b ∈ Σ^{D+u} is called a translate of a (by u) if bv+u = av for all v ∈ D; in this case we write a ≈ b. A pattern c appears (or occurs) in a at u if the translate of c by u is a subpattern of a.

A d-dimensional configuration is an element x ∈ Σ^{Z^d}. Given a set E ⊆ Σ∗d of patterns, we say that a pattern a or a configuration x ∈ Σ^{Z^d} is E-admissible if no pattern from E appears in it, and we define¹

X(E) = {x ∈ Σ^{Z^d} | x is E-admissible}.

A subset of ΣZ is called a shift of finite type (abbreviated SFT) if there exists a finite set E such that X = X(E ) (an SFT can have multiple presentations of this kind, including some with E infinite). Note that, up to re-naming of the alphabet, there are only countably many SFTs, because each is specified by a finite set of patterns over its alphabet, and there are countably many such sets. d
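These definitions translate directly into a finite check for finite patterns. A minimal Python sketch, representing a pattern as a dictionary from positions in Z^d to symbols (our representation, not the text's):

```python
from itertools import product

def occurs(b, a, u):
    """True if the translate of pattern b by u is a sub-pattern of a.
    Patterns are dicts mapping positions (d-tuples of ints) to symbols."""
    return all(a.get(tuple(p + q for p, q in zip(pos, u))) == sym
               for pos, sym in b.items())

def is_admissible(a, forbidden):
    """True if no pattern of `forbidden` appears anywhere in the finite pattern a."""
    d = len(next(iter(a)))
    lo = [min(p[i] for p in a) for i in range(d)]
    hi = [max(p[i] for p in a) for i in range(d)]
    for b in forbidden:
        blo = [min(p[i] for p in b) for i in range(d)]
        bhi = [max(p[i] for p in b) for i in range(d)]
        # an occurrence must land inside a's bounding box
        window = [range(lo[i] - bhi[i], hi[i] - blo[i] + 1) for i in range(d)]
        if any(occurs(b, a, u) for u in product(*window)):
            return False
    return True

# One-dimensional example: the word 101 avoids the forbidden factor 11.
word = {(0,): 1, (1,): 0, (2,): 1}
assert is_admissible(word, [{(0,): 1, (1,): 1}])
```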

1

It would be more precise to indicate the alphabet in the notation, as in XΣ (E ), but in practice Σ is always clear from the context.



Example 9.2.1 (Golden mean shift) Let X ⊆ {0, 1}Z denote the set of configurations x without consecutive 1s, i.e., xi · xi+1 = 0 for all i. Then X = X({11}). It is a nice exercise to verify that the number of E -admissible words of length n is given by the (n + 2)-nd Fibonacci number. For this reason X is called the Golden mean shift. Interestingly, for the analogous construction in dimension 2, which is known as the hard core model, there is no known closed-form expression for the number of admissible n × n patterns or even their asymptotics (see, e.g., Baxter (1999), also Pavlov (2012)). Example 9.2.2 (Matchings and dominos) Let Σ ⊆ Zd denote a finite symmetric set d (u ∈ Σ ⇐⇒ −u ∈ Σ) and let X ⊆ ΣZ be the set of configurations x such that if u ∈ Zd and if v = u + xu then u = v+ xv . Interpreting xu as an arrow pointing from u to u + xu , we have required that if u points to v then v points to u. Thus the configurations in X are in one-to-one correspondence with perfect matchings of Zd with differences in Σ. Clearly X = X(E ) for E = {a ∈ Σ{0,s} | s ∈ Σ , a0 = s , as = −s}. For d = 2 and Σ = {±(1, 0), ±(0, 1)}, configurations can be interpreted as domino tilings of the plane, i.e., tilings by 2 × 1 and 1 × 2 rectangles formed by the matched pairs of sites. The combinatorics of this system very interesting and non-trivial; see, e.g., (Kasteleyn, 1963) and (Kenyon, 2000). It is crucial to note that E -admissibility of a pattern a ∈ ΣD does not imply that a can be extended to a configuration in X(E ). The next lemma shows, however, that if it cannot be so extended, then there must already be a finite region to which the pattern does not extend admissibly; and this is true also when E is infinite. See also Lemma 8.2.2. Lemma 9.2.3 Let E ⊆ Σ∗d and a ∈ ΣD with D ⊆ [[ − n0, n0 ]]d . Then a extends to d an E -admissible configuration x ∈ ΣZ if, and only if, for all n ≥ n0 it extends to an d E -admissible pattern an ∈ Σ[[−n,n]] . Proof If a extends to an E -admissible x ∈ ΣZ then an = x|[[−n,n]]d are certainly E -admissible for all n ≥ n0 , and extend a. Conversely, fix an as in the statement. d We construct an ‘accumulation point’ x ∈ ΣZ of (an ) by induction. First, enumerate Zd = {u1 , u2 , . . .}. Since there are finitely many symbols, there is an infinite subsequence of (an ) whose elements agree at u1 ; set xu1 to be this symbol. For the induction step, assume we have defined xu1 , . . . , xu−1 and have arrived at an infinite subsequence of (an ) whose elements agree with x at u1 , . . . , uk−1 . Pass to a further subsequence whose elements all agree at uk , and set xuk to this value. By construction, for any k there is an n such that x|{u1 ,...,uk } = an |{u1 ,...,uk } , so for any finite E ⊆ Zd there is an / E (since an is E -admissible), hence x is E -admissible. Taking n with x|E = an |E ∈ E = D this also shows that x|D = an |D = a, so x extends a, as claimed. d
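Returning to Example 9.2.1, the Fibonacci count of admissible words is easy to confirm by brute force; a throwaway script (not part of the text):

```python
from itertools import product

def count_admissible(n, alphabet=(0, 1), forbidden=((1, 1),)):
    """Number of length-n words over `alphabet` avoiding the given factors."""
    def admissible(w):
        return not any(w[i:i + len(f)] == f
                       for f in forbidden
                       for i in range(len(w) - len(f) + 1))
    return sum(1 for w in product(alphabet, repeat=n) if admissible(w))

print([count_admissible(n) for n in range(1, 9)])
# [2, 3, 5, 8, 13, 21, 34, 55]: the (n + 2)-nd Fibonacci numbers, as claimed.
```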

The definition of SFTs can be modified to several equivalent forms. One can require that the patterns in E have a common shape. Indeed, given E = {a1 , . . . , an }



with ai ∈ Σ^{Di}, let D ⊆ Z^d be any finite set such that D1 ∪ · · · ∪ Dn ⊆ D. Enumerate the extensions of ai to D as {ai,j}, 1 ≤ j ≤ m(i), and let E′ = {ai,j | 1 ≤ i ≤ n, 1 ≤ j ≤ m(i)}. Then for x ∈ Σ^{Z^d} and u ∈ Z^d, the pattern x|Di+u is a translate of ai if, and only if, x|D+u = ai,j for some j. This shows that X(E) = X(E′). In particular, if N is large enough we can always assume that E ⊆ Σ^{[[−N,N]]^d}. Alternatively, since translating the patterns of E does not affect X(E), for sufficiently large N we can assume that E ⊆ Σ^{[[0,N]]^d}.

Another equivalent definition constrains the allowed patterns rather than the forbidden ones. Given a finite set D ⊆ Z^d and F ⊆ Σ^D, let

X+ (F ) = {x ∈ ΣZ | ∀u ∈ Zd x|D+u is a translate of a pattern in F }. d

Then clearly X+ (F ) = X(ΣD \ F ), so X+ (F ) is an SFT, and by the previous paragraph, given an arbitrary SFT X(E ) we can find a set D and E  ⊆ ΣD such that X(E ) = X(E  ) = X+ (ΣD \ E  ). So the definitions are equivalent.
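The first of these equivalences is entirely mechanical; a small sketch that re-expresses a forbidden set over the union of the original supports (our helper, not from the text):

```python
from itertools import product

def common_shape(forbidden, alphabet):
    """Replace forbidden patterns (dicts from positions to symbols) by an
    equivalent forbidden set in which every pattern has the same support D."""
    D = sorted(set().union(*(set(b) for b in forbidden)))
    new_forbidden = []
    for b in forbidden:
        free = [p for p in D if p not in b]
        # every extension of b to D is forbidden as well
        for fill in product(alphabet, repeat=len(free)):
            ext = dict(b)
            ext.update(zip(free, fill))
            new_forbidden.append(ext)
    return D, new_forbidden
```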

9.2.2 Subshifts and languages Many questions about SFTs are most naturally studied in the framework of symbolic dynamics (although we shall see later that a recursive–theoretic setting is also natd ural). Endow ΣZ with the product topology, which is compact and metrisable. A subbasis for this topology is given by the cylinder sets [a] = {x ∈ ΣZ | x|supp(a) = a} d

as a ranges over Σ∗d . This is the topology of pointwise convergence: xn → x if, and only if, for every u ∈ Zd , the sequence ((xn )u )∞ n=1 eventually stabilises to xu , i.e., (xn )u = xu for all large enough n; equivalently for every finite D ⊆ Zd we have (xn )|D = x|D for all large enough n. The cylinders are both closed and open (clopen). d The space ΣZ comes equipped with a natural translation operation. For u ∈ Zd , d d let Su : ΣZ → ΣZ be given by: (Su x)v = xu+v . Each Su is a homeomorphism and S = {Su }u∈Zd is an action of Zd on ΣZ (that is, Su+v = Su ◦ Sv and S0 is the identity map) called the shift action. For a finite pattern a ∈ ΣD we define Su a ∈ ΣD−u by the formula above, but note that Su a is then the translate of a by −u. d A subset X ⊆ ΣZ is said to be shift-invariant if Su X = X for every u ∈ Zd (equivalently for all u in a generator set of Zd ). A closed, non-empty shift-invariant set is d called a subshift. The language of a subshift X ⊆ ΣZ is d

L(X) = {x|D | x ∈ X and D ⊆ Zd finite} ⊆ Σ∗d and its complementary language is Lc (X ) = Σ∗d \ L(X ).



Each element of L(X) and Lc(X) comes together with a specified 'position'. But since X is shift-invariant, L(X) is closed under translations, and it is closed under taking subpatterns. It also has the property that for every a ∈ L(X) ∩ Σ^D and every finite set E ⊆ Z^d with D ⊆ E, there is a b ∈ L(X) ∩ Σ^E extending a. It is not hard to see that these three properties characterise the languages of subshifts; as we do not use this, we leave the details to the reader. But we shall use the following repeatedly.

Proposition 9.2.4 A set ∅ ≠ X ⊆ Σ^{Z^d} is a subshift if, and only if, X = X(E) for some E ⊆ Σ∗d, in which case we can take E = Lc(X).

Proof Suppose X ⊆ Σ^{Z^d} is a subshift and let E = Lc(X). Clearly X ⊆ X(E). For the other inclusion, let x ∈ X(E). For each n ∈ N, certainly x|[[−n,n]]^d ∉ E, so there is some xn ∈ X such that (xn)|[[−n,n]]^d = x|[[−n,n]]^d. Then xn → x, and, since X is closed, this implies x ∈ X.

Conversely, fix E ⊆ Σ∗d and X = X(E) ≠ ∅. Then X is shift-invariant because clearly x ∈ Σ^{Z^d} contains a pattern a ∈ E if, and only if, some (equivalently every) translate of x does. To see that it is closed, let xn ∈ X with xn → x. For finite D ⊆ Z^d and u ∈ Z^d, we must show that (Su x)|D ∉ E. By continuity of the shift action, limn→∞ Su(xn) = Su x, hence (Su(xn))|D = (Su x)|D for all large enough n. By shift-invariance of X we have Su(xn) ∈ X, so (Su xn)|D ∉ E, whence also (Su x)|D ∉ E.

In particular, SFTs are subshifts.

9.2.3 Factors, isomorphisms and sliding block codes Given a subshift X ⊆ ΣZ , a map π : X → ΔZ is said to be equivariant if it ‘commutes’ with the shift action, in the sense that π ◦ Su = Su ◦ π for all u ∈ Zd . Under these circumstances π (X ) is subshift: it is closed since it is the continuous image of a compact set, and if y ∈ π (X ) then y = π (x) for some x ∈ X , so Su y = Su π (x) = π (Sux) ∈ π (X) because X is shift-invariant. A continuous, equivariant and surjective map between subshifts is called a factor map; its image is called a symbolic factor of its domain, and its domain a symbolic extension of the image. An injective factor map π : X → Y is called an isomorphism (or a conjugacy). Note that this implies that π −1 : Y → X is a bijective factor map as well: indeed, π is a continuous bijection between compact metric spaces, so it is a homeomorphism and π −1 is continuous. To see that π −1 is equivariant, fix y ∈ Y , write x = π −1 (y) (so that y = π (x)), and note that by equivariance of π , d

d

π −1 (Su y) = π −1(Su π (x)) = π −1(π (Su x)) = Su (x) = Su (π −1 (y)). Example 9.2.5 Let X ⊆ {0, 1}Z denote the subshift in which every finite run of 0s or 1s has even length. Then the map π : X → X that inverts all symbols is an isomorphism. Example 9.2.6 Let X = {0, 1}Z and π (x)n = xn + xn+1 mod 2. Then, with addition



understood modulo 2, we have

π (Sx)n = (Sx)n + (Sx)n+1 = xn+1 + xn+2 mod 2 = S(π (x))n so π is equivariant and clearly continuous. It is also onto, because given y ∈ {0, 1}Z we can set x0 = 0 and inductively for n = 1, 2, . . . define xn = xn−1 + yn−1 mod 2, and for n = −1, −2, . . . define xn = xn+1 + yn mod 2. Then clearly π (x) = y. Thus π : {0, 1}Z → {0, 1}Z is a factor map. It is not an isomorphism because we can obtain a second pre-image of y by setting x0 = 1 and extending to x ∈ {0, 1}Z using the same procedure as above. One way to define a factor map is by a local rule that generates the symbol at each position in the ‘output’ based on the pattern seen at the same position in the ‘input’. More precisely, given a finite set D ⊆ Zd and a function π0 : ΣD → Δ, we can extend d d π0 to a map ΣZ → ΔZ by (π (x))u = π0 ((Su x)|D ). Then π is continuous and equivariant. Indeed, first, if xn → x then, for every u ∈ Zd , we have Su xn → Su x, so Su xn |D = Su x|D for all large enough n; hence π0 ((Su xn )|D ) = π0 ((Su x)|D ) for all large enough n, showing that π (xn )u → π (x)u , and since u was arbitrary, π (xn ) → π (x). Second, for every v ∈ Zd we have for all u ∈ Zd (π (Sv x))u = π0 ((Su (Sv (x)))|D ) = π0 ((Su+v (x))|D ) = π (x)u+v = (Sv (π (x)))u , so Sv π = π Sv , which is equivariance. A map π as above is said to be defined by the local rule π0 and is called a sliding block code, because in dimension 1 one can imagine that the sequence (π (x)n )∞ n=−∞ is obtained by sliding the set D along the sequence x and at each stop applying π0 . One of the fundamental results in symbolic dynamics is that every continuous equivariant map, and in particular every factor map, arises in this way (see also Theorem 8.1.4). Theorem 9.2.7 (Curtis–Hedlund–Lyndon, Hedlund (1969)) given by a sliding block code.

Every factor map is

Proof Let X ⊆ ΣZ be a subshift and π : X → ΔZ an equivariant and continuous map. By continuity, there is an N ∈ N so that π (x)0 is determined by x|[[−N,N]]d . Let d

d

π0 : Σ[[−N,N]] → Δ be given by π0 (a) = b if π (x)0 = b for some (all) x ∈ [a] ∩ X . Then by equivariance, for all x ∈ X and u ∈ Zd , d

(π (x))u = (Su (π (x)))0 = π (Su (x))0 = π0 ((Su (x))|[[−N,N]]d ) and π is the sliding block code defined by π0 . We note two consequences. First, given a subshift X ⊆ ΣZ , every factor Y ⊆ ΔZ of X is determined by a function ΣD → Δ for some finite D ⊆ Zd , and there are countably many of these, so up to renaming of symbols, X has countably many symbolic d

d

302

M. Hochman

factors. Second, since every isomorphism between subshifts is given by a sliding block code, the isomorphism X → Y becomes a combinatorial, almost language– theoretic relation: it means that there is a local, automatic procedure that transforms sequences in X to sequences in Y , and vice versa. The following proposition gives further expression to this point of view. Proposition 9.2.8 Let X ⊆ ΣZ be an SFT and π : X → Y ⊆ ΔZ an isomorphism. Then Y is an SFT. d

d

Proof Let X = X(E ) be an SFT, E ⊆ ΣD1 . Then π is a sliding block code, defined by a local rule π0 : ΣD2 → Δ. Since σ = π −1 : Y → X is a factor map it is also a sliding block code, defined by some local rule σ0 : ΔD3 → Σ. Let N be large enough that D1 ∪ D2 ∪ D3 ⊆ [[ − N, N]]d , let E = [[ − 2N, 2N]]d , let F ⊆ ΔE denote the set of patterns that do not appear in any y ∈ Y , and let Z = X(F ). Clearly Y ⊆ Z, and we claim that the two are equal. First, observe that by definition of F , and since D3 ⊆ E, every D3 -pattern appearing in Z also appears in Y , so it is in the domain of d σ0 , and we can define a continuous, equivariant map σ : Z → ΣZ from σ0 extending d σ . Let z ∈ Z and fix u ∈ Z . Then z|u+E is not a translate of a pattern in F , so there exists y ∈ Y with z|u+E = y|u+E . For every v ∈ D1 we have u + v + D3 ⊆ u + E so z|u+v+D3 = y|u+v+D3 , and since σ (z)u+v depends only on z|u+v+D3 we have σ (z)u+v = σ (y)|u+v . This holds for every v ∈ D1 so σ (z)|u+D1 = σ (y)|u+D1 , and the latter is not a translate of any pattern from E because σ (y) ∈ X . This shows that  (z) ∈ X. By the same reasoning, σ (z)|u+D2 = σ (y)|u+D2 , and π (σ (z)) depends only σ on σ (z)|u+D2 , so π (σ (z))u = π (σ (y))u = yu = zu . This shows that z = π (σ (z)), so z ∈ Y , hence Y = Z = X(F ).

9.2.4 Sofic shifts A symbolic factor of an SFT is called a sofic shift.2 The composition of factor maps is a factor map, so the sofic shifts form the smallest class of subshifts containing the SFTs and closed to taking factors. Sofic shifts are combinatorially defined objects, being specified by local constraints (a finite set of forbidding patterns) followed by the application of a ‘local’ transformation (a sliding block code). Up to re-naming of the alphabet there are only countably many SFTs and each has countably many symbolic factors, so, up to isomorphism, there are countably many sofic shifts. Every SFT is sofic, being a factor of itself via the identity map, but not every sofic shift is an SFT, as the following example shows. Example 9.2.9 (A sofic shift that is not an SFT) Let Y ⊆ {0, 1}Z denote the set of sequences in which at most a single 1 appears. This is not an SFT, because if we had Y = X(E ) for some E ⊆ {0, 1}N , then we would have . . . 0001000 . . . ∈ Y , but this configuration has exactly the same subwords of length N as . . . 00010N+11000 . . ., 2

The word sofic is derived from sofi, the Hebrew word for finite.

Multidimensional shifts of finite type and sofic shifts

303

implying that the latter configuration is E -admissible and belongs to Y , a contradiction. On the other hand, if X ⊆ {a, b}Z is the SFT defined by forbidding the word ba, then Y is the image of X under the sliding block code defined by the local rule π0 : {a, b}2 → {a, b},

π0 (aa) = π0 (bb) = 0

π0 (ab) = 1.

We leave the simple verification to the reader. See also Example 8.1.3. Although we will not use it, we note that sofic shifts also have an elegant language– theoretic description. Theorem 9.2.10 (Weiss (1973)) A one-dimensional subshift X is sofic if, and only if, its language L(X ) is regular. With suitable definition of multidimensional regular languages, the theorem can be generalised to higher dimensions, but this approach has had limited success. We return to the languages of sofic shifts in Section 9.5.

9.2.5 One-step models and higher block presentation Let e1 , . . . , ed denote the standard basis of Zd . We say that u, v ∈ Zd are adjacent if u = v ± ei for some i, or equivalently, #u − v#1 = 1. A one-step SFT (or nearest-neighbour SFT) is an SFT that can be defined using a set E of forbidden patterns whose supports are pairs of adjacent vectors. Thus it is obtained by placing restrictions only on neighbouring symbols. Example 9.2.11 (Paths in directed graphs) Let G = (V, E) be a directed graph and P(G) ⊆ V Z the set of bi-infinite vertex paths in G, i.e., (vn )∞ n=−∞ ∈ P(G) if, and only if, (vi , vi+1 ) ∈ E for all i. This is a one-step SFT over the alphabet V , given by taking E for the set of allowed adjacent symbols. Example 9.2.12 (Wang tiles (see also Section 8.2.1)) Let T be a finite set of 1 × 1 tiles, with colours (i.e., labels) assigned to the edges and, possibly, to the interior of each tile. A tiling is any arrangement of translated copies of the elements of T , centred at integer points. A tiling is admissible if every pair of tiles with a shared edge agree about the colour of that edge. The tiling space of T is the set of all 2 admissible tilings of R2 , and is represented by the one-step SFT X ⊆ T Z defined by excluding all adjacent pairs of tiles in which the colours of the shared edge do not agree. This model was introduced by Wang (1961). For example, the tiles in Figure 9.1 implement the ‘domino’ system from Example 9.2.2 (using Σ = {±(1, 0), ±(0, 1)}). The colour of an edge is either ‘blank’ or ‘arrow’ and the admissible tilings depict a matching of adjacent squares. This example is easily generalised to higher dimensions by considering d-dimensional cubes with coloured faces.

304

M. Hochman





Figure 9.1 (a) Tiles implementing a matching of Z2 . Blank edges match to blank edges, arrows match to arrows. (b) An admissible tiling of a 4 × 5 region.

Given an alphabet Σ and a finite subset 0/ = D ⊆ Zd , the higher block presentation d d map (with shape D) πD : ΣZ → (ΣD )Z is the map defined by (πD (x))u = (Su x)|D . It is clearly continuous and equivariant, and it is injective since, with v0 ∈ D fixed, for any u ∈ Zd we have xu = ((πD (x))u−v0 )v0 , which shows that πD (x) determines d x. The image πD (X ) of a subshift X ⊆ ΣZ is a subshift isomorphic to X , called the higher block presentation of X (with shape D). Proposition 9.2.13 Every one-dimensional SFT is isomorphic to the space of biinfinite vertex paths through a finite directed graph. Every d-dimensional SFT is isomorphic to a one-step SFT. Proof For simplicity we start with dimension 1. Let X be an SFT and assume X = X(E ) ⊆ ΣZ with E ⊆ Σ[[0,N]] for some N, which can always be arranged. We claim that the higher block presentation Y = π[[0,N]] (X ) of X , which is isomorphic to X , is a one-step SFT, and furthermore that it is equal to the space Z of paths through the graph G = (V, E) with vertex set V = Σ[[0,N]] \ E and edge set E ⊆ Σ[[0,N]] × Σ[[0,N]] of pairs (a, b) such that the last N symbols of a equal the first N symbols of b. Indeed, if y ∈ Y then y = π[[0,N]] (x) for some x ∈ X . Then for any i ∈ Z we have yi = (π[[0,N]] (x))i = xi . . . xi+N and yi+1 = (π[[0,N]] (x))i+1 = xi+1 . . . xi+1+N , which agree on the subword xi+1 . . . xi+N , showing that (yi , yi+1 ) ∈ E. Therefore y ∈ Z, and since y ∈ Y was arbitrary, Y ⊆ Z. Conversely, let z ∈ Z. An easy induction shows that for 0 ≤ n ≤ N, the final N + 1 − n symbols of zi ∈ Σ[[0,N]] agree with the first N + 1 − n symbols of zi+n ∈ Σ[[0,N]] ; therefore if we define x ∈ ΣZ by xi = (zi )0 , then / E , that x ∈ X ; xi . . . xi+N = zi , which both shows that π[[0,N]] (X ) = z and, since zi ∈ so z = π[[0,N]] (X) ∈ Y , and since z ∈ Z was arbitrary, Z ⊆ Y . Taken together, we have proved Z = Y . d In dimension d the proof is essentially the same. For X = X(E ) with E ⊆ Σ[[0,N]] , let Y = π[[0,N]]d (X). One shows that Y is equal to the one-step SFT Z defined by requiring that z ∈ Z if, and only if, for every pair of adjacent symbols zu , zu+ei , the pattern induced by zu on [[0, N]]d ∩ ([[0, N]]d + ei ) and the pattern induced by zu+ei on

Multidimensional shifts of finite type and sofic shifts

305

([[0, N]]d − ei ) ∩ [[0, N]]d , agree up to a translation. The verification is similar to the one-dimensional case. An important side effect of the proof of Proposition 9.2.13 is the explicit construction of the graph G = (V, E) whose path space is isomorphic to a given onedimensional SFT X . From this much further information can be gained. For example, it is elementary to see that the set V∞ ⊆ V of vertices that belong to a bi-infinite path is just the set of vertices that lie on a path connecting two cycles, and this set is easily computed, so we can decide whether X(E ) = 0/ by computing V∞ and checking / Writing Y for the space of bi-infinite paths in G, a word a ∈ V ∗ belongs if V∞ = 0. to L(Y ) if, and only if, it is a vertex path with elements in V∞ , so L(Y ) is a regular language (this is a special case of Theorem 9.2.10), so we can enumerate its elements by listing the finite paths in V∞ . Since X and Y are isomorphic, similar statements can be made about L(X(E )). We say that a factor map π is a symbol code if D = {0}, i.e., π (x)u is determined by xu alone. Proposition 9.2.14 If σ : X → Y is a factor map then there is a subshift X  and an isomorphism π : X → X  such that σ ◦ π −1 : X  → Y is a symbol code. If X is an SFT then X  can be chosen to be a one-step SFT. Proof Assume σ is a sliding block code defined by the local rule σ0 : ΣD → Δ. d Let X  = πD (X ) ⊆ (ΣD )Z be the higher block presentation of X with shape D. Then πD : X → X  is an isomorphism, and if y = π (x) then yu = x|u+D , so yu determines σ0 (x|u+D ) = σ (x)u = σ (π −1 (y))u . Thus σ ◦ π −1 is a symbol code. For the second statement, note that the construction works using a higher block code πE based on any finite E ⊇ D, and in the proof of the previous proposition we saw that for E large enough the subshift X  will be a one-step SFT.

9.3 Basic constructions and undecidability In this section we describe some of the classical (un)decidability results for SFTs. In the course of this we present some basic constructions that, besides their role in explaining the undecidability results, will be used extensively in later sections. This section has large overlap with Chapter 8, specifically the undecidability in tiling systems is discussed in Section 8.2 (see in particular Section 8.2.2 and 8.2.4); simulation of Turing machines appears, with minor differences, in Section 8.2.1; and Robinson’s system is presented in Section 8.2.3.

9.3.1 Wang’s problem and Berger’s theorem The classical theory of multidimensional SFTs was motivated largely by the following decision problems.

306

M. Hochman

Problem 9.3.1 (The emptiness problem) Given a finite set E of d-dimensional Σpatterns, decide whether X(E ) = 0. / Problem 9.3.2 (The extension problem) Given a finite set E of d-dimensional Σpatterns and a pattern a, decide whether a can be extended to an E -admissible configuration x ∈ X(E ). We saw above that in dimension 1 both problems can be solved (see Section 9.2.5). Historically, the multidimensional problems were first considered in the work of Wang (1961), who, as part of a study in symbolic logic, was interested in tilings of the plane by the tiles that now bear his name. Wang made two main observations. First, he noted that the extension problem (and hence the emptiness problem) can be semi-decided using the following procedure: given a pattern a and a finite E ⊆ Σ∗d , d iterate over n = 1, 2, 3, . . ., and for each n, enumerate all patterns b ∈ Σ[[0,n]] and halt if none is E -admissible (note that admissibility of a pattern involves checking finitely many subpatterns, and hence is decidable). By Lemma 9.2.3 this procedure will terminate if, and only if, X(E ) = 0. / Wang’s second observation was that the emptiness problem is related to the exisd tence of periodic points. A configuration x ∈ ΣZ is said to be periodic if there are d linearly independent vectors u1 , . . . , ud ∈ Zd such that Su1 x = . . . , Sud x = x (equivalently, the stabiliser of x in the group {Su }u∈Zd has finite index). By elementary arithmetic considerations, x is periodic if, and only if, there is an n such that x = Sv x for all v ∈ nZd , in which case we say that x has square period n. See Section 8.2 for more on this equivalence. If for some E ⊆ Σ∗d it is known that X(E ) is either empty or contains a periodic point, then the following procedure decides whether X(E ) = 0. / First, we can assume that E consists of adjacent pairs, for otherwise we can arrange this by passing to a higher block presentation (Proposition 9.2.13). Next, iterate over n = 1, 2, . . . For d each n enumerate all patterns b ∈ Σ[[0,n]] . If none is E -admissible, we conclude that d X(E ) = 0. / On the other hand, if there exists an E -admissible pattern b ∈ Σ[[0,n]] so d that opposing faces of the cube [[0, n]] are coloured in b in the same way, we can d construct an E -admissible configuration x ∈ ΣZ by repeating b with square period n (formally, by xu = bu mod n for u ∈ Zd ); admissibility is due to the fact that, since we ‘glued’ translates of b along faces of [[0, n]]d , every pair of adjacent symbols in x already appears as an adjacent pair in b. Of the two alternatives above, the first will occur if X(E ) = 0, / and the second if X(E ) contains a periodic point. Thus if every non-empty SFT contained periodic points, then the procedure above solves Problem 9.3.1. Wang himself showed that the extension problem is undecidable (see Section 8.2.4), but he conjectured that periodic points do always exist, and that the emptiness problem is decidable. This was finally shown to be false by Berger in 1966. Theorem 9.3.3 (Berger (1966)) cidable.

For every d ≥ 2, the emptiness problem is unde-

Multidimensional shifts of finite type and sofic shifts

307

By the previous discussion, Berger’s theorem implies that there exist non-empty SFTs without periodic points. It is interesting to note that Berger found it necessary to construct such an SFT explicitly as a part of his proof. This is also the case for the various other proofs of Berger’s theorem that have been found over the years. Berger’s theorem can be used to show that many other properties of SFTs are undecidable. Let us demonstrate this by showing that it is impossible to decide 2 whether an SFT consists of exactly a single periodic orbit. Indeed, let Z = {0Z }, 2 which contains a single fixed point of the shift, and, given E ⊆ ΣZ consider the SFT 2 Q = Q(E ) = Z ∪ ({1, 2}Z × X(E )). It is clear that Q is an SFT and consists of a single periodic orbit if, and only if, X(E ) = 0. / Hence, by Berger’s theorem, one cannot decide if Q is a single periodic orbit.

9.3.2 Symbols vs tiles Most of the phenomena we are interested in occur already in dimension d = 2, and we will restrict our attention to this case. One of the most convenient representations of such SFTs is via Wang tiles (Example 9.2.12), but this has a disadvantage, namely, that such systems are by definition one-step, and it is sometimes useful to allow longer-range constraints. We take a hybrid approach: we work in the abstract symbolic setting, but often think of the symbols as square tiles with markings on them. Thus the word symbol and tile are used interchangeably, patterns correspond to arrangements of tiles centred at integer points, and the markings on tiles may form a picture with some geometric interpretation. We shall see this at work starting in the next section.

9.3.3 Drawing paths By a path tile we shall mean a 1 × 1 square on which there are drawn finitely many directed polygonal paths, called path segments, that either connect two points on the boundary of the square, or an interior point to a boundary point. Path segments may be labelled. An arrangement of path tiles with centres in Z2 is admissible if whenever two tiles share an edge, every path segment in one of the tiles that terminates on the common edge continues on the other tile from the same point, with the same direction and, if relevant, the same label. This is a one-step constraint, so if finitely many types of path tiles are used, they define an SFT. Every admissible tiling depicts a collection of paths obtained by concatenating adjacent path segments in the obvious way. Example 9.3.4 (Rectangular paths) In every admissible configuration of the tiles in Figure 9.2, every bounded path is rectangular. Unbounded paths can be lines, infinite ‘L’ shapes, and infinite ‘U’ shapes (possibly rotated). Example 9.3.5 (Counting along a path) Admissible configurations of the tiles in Figure 9.3 consist of horizontal paths (finite or infinite in one or both sides). The labels are integer-valued and non-decreasing along paths, increase by at most one on

308

M. Hochman





Figure 9.2 (a) The basic tiles; (b) a valid tiling by the tiles. Any valid tiling consists of rectangular paths, straight lines, and bi-infinite U- and L-shaped paths. 









Figure 9.3 k ∈ {0, 1, 2, 3, 4}. On every finite path the label increases from 0 to 4, giving four dark dots.

each tile and, at the initial and terminal sites of a path, if these exist, are equal to 0 and 4 respectively. Thus on a finite path exactly four increments occur, marked by dark dots on the tiles. This example is easily generalised, for instance to force finite paths to make a specified sequence of turns, or even for the sequence of tiles to come from a given regular language.

9.3.4 Extension–reduction SFTs are often constructed iteratively. The basic step consists of two parts: starting from a given SFT, one first augments the existing symbols (or tiles) with a new ‘layer’ of information; after which one places new restrictions on the compound 2 patterns. More formally, fix E ⊆ Σ[[0,N]] and an initial SFT X = X(E ). Let Δ be another alphabet, and identify a pattern a ∈ (Σ × Δ)D with the pair of patterns a ∈ ΣD and a ∈ ΔD , calling a the first layer of a, and a the second layer. We do the same for configurations over Σ × Δ. Perform the following two operations. Extension: Form the product X × ΔZ . This is the SFT X(E  ) where E  ⊆ (Σ × Δ)∗d is the set of pairs (a, a ) such that a ∈ E . Reduction: Optionally, enlarge E  to E  by adding finitely many Σ × Δ-patterns to it. d

Then the SFT Y = X(E  ) is called an extension–reduction of X . Let π : (Σ × Δ)Z → d ΣZ denote the projection to the first layer, and note that any E  -admissible pattern d

Multidimensional shifts of finite type and sofic shifts 



309







Figure 9.4 Type (a) is defined for every u ∈ V , and (b) for every (u, v) ∈ E. Then in every row, the sequence of labels on horizontal paths is a valid vertex path in G.

is E  -admissible, hence its first layer is E -admissible, so π (Y ) ⊆ X . Since π is equivariant, π (Y ) is a subshift which, being a factor of the SFT Y , is sofic. In general π (Y ) is not an SFT. In the special case that π (Y ) = X, we say that Y is an extension of X . If x ∈ X and there is a point z = (x, y) ∈ Y then z is called an extension of x. Example 9.3.6 (Enforcing long-distance constraints) Let G = (V, E) be a finite directed graph and ∗ a new ‘blank’ symbol. Each row of a configuration x ∈ (V ∪ 2 {∗})Z gives rise to a finite or infinite sequence over V by removing all occurrences d of ∗. Let X ⊆ (V ∪ {∗})Z consist of configurations whose rows give rise in this way to a vertex path in G. We now show that X is sofic by constructing an SFT extension of it (the reader may wish to verify, for some G, that X itself is not an SFT). Begin with the full shift 2 (V ∪ {∗})Z , which is an SFT. Extend–reduce by the path tiles in Figure 9.4. At the reduction stage, we first require that the second layer obey the usual adjacency rules for path tiles. Furthermore, we require that tiles of type (a) in Figure 9.4 are paired with ∗ in the first layer, and that tiles of type (b) with incoming label u and outgoing label v are paired with v in the first layer. Denote the resulting SFT by Y . 2 Let y ∈ Y with first layer x ∈ (V ∪ {∗})Z . Clearly if u ∗ ∗ · · · ∗ ∗v appears as a horizontal pattern in x then the corresponding pattern in the second layer of y is a path labelled u, starting at the u symbol and ending at the v symbol, and since the tile over v is of type (b), we must have (u, v) ∈ E. Thus the V -symbols in each row of x form a path in G. Conversely, each x ∈ X can be lifted to Y by initiating a path labelled u above each occurrence of u in x, and extending it to the right, terminating at the next vertex symbol, or not at all (if there are no more vertex symbols in the row). This determines the lift uniquely except in rows without vertex symbols, in which case the path label can be any vertex in V , or when u is the leftmost vertex symbol in its row, in which case the incoming path to u can be labelled with any t ∈ V satisfying (t, u) ∈ E. The formal process of extension–reduction is often tedious, and in the interest of readability we shall take some shortcuts. d For example, starting from X ⊆ ΣZ , we often want to augment only some of the symbols, say those in the subset Σ0 ⊆ Σ , with new information drawn from an alphabet Δ0 . Formally this is achieved by adding a new ‘blank’ symbol ∗ to Δ0 , to form the alphabet Δ = Δ0 ∪ {∗}, and then extending all symbols, requiring in the


reduction stage that symbols from Δ0 be paired with symbols from Σ0 , and that ∗ be paired with symbols from Σ \ Σ0 . Similarly, we often extend by an alphabet or set of tiles Δ that already has its own adjacency rules, e.g., when Δ is a set of path tiles. We then automatically and implicitly apply these rules to the second layer, unless stated otherwise. Finally, we shall satisfy ourselves with brief (but complete) descriptions of a construction. For instance, we could describe Example 9.3.6, as follows: extend–reduce 2 (V ∪ {∗})Z by path tiles. Emit a path moving to the right from each occurrence of a vertex u ∈ V . These paths run across instances of ∗, but cannot cross a vertex symbol, and can terminate at a vertex symbol v ∈ V only if (u, v) ∈ E. Later on, we shall leave it to the reader to fill in the details of such an outline.

9.3.5 Turing machines
For background on computation and Turing machines, see Rogers (1967) or Arora and Barak (2009). Turing machines come in many variants; we work with machines that run on a one-sided tape that extends infinitely to the right, with cells indexed 1, 2, 3, . . .. At the start of a computation the machine is located in the leftmost cell, indexed 1, and all but finitely many cells contain a ‘blank’ data symbol. It is a straightforward matter to program machines so that they never try to move left from the leftmost cell, for instance by marking the first cell in a way that can later be detected when the machine returns to it. We shall assume henceforth that all machines take such precautions.

9.3.6 Representing computations via local constraints Let T be a Turing machine with alphabet A, state space Q, and transition function f : Q × A → Q × A × {±1}. Let * be a new symbol and form the alphabet B = A × (Q ∪ {∗}) whose symbols (a, q) ∈ B we interpret as representing the contents of a cell of the machine tape: if q ∈ Q then (a, q) is a cell with data symbol a and the machine in state q, and if q = ∗ then (a, q) = (a, ∗) represents a cell with data symbol a (the * indicates the absence of the machine). Suppose that T runs for n steps without halting on an input a ∈ An . Then for each 0 ≤ i ≤ n − 1 the contents of the first n cells of the tape after i computation steps can be encoded by a sequence bi ∈ Bn . The n × n square in which the ith row is bi depicts the history of the computation up to time n − 1, with time increasing by one as we move up one row. In the next few paragraphs we describe local rules (over an enlarged alphabet) such that every admissible n × n pattern satisfying mild assumptions represents in this way n − 1 steps of the computation, started from the tape depicted in the first row. First, we want to make sure that data symbols evolve upwards as dictated by the program. To this end, we require that if a site contains the data symbol a ∈ A but no machine, then the site directly above it also contains a, but that if a machine is






Figure 9.5 The labels q, q′ are machine states. Tiles in which the incoming label q turns into the outgoing label q′ are allowed only when this transition is consistent with the machine program and the data symbol at the site (which belongs to a different layer).

present and in state q and f (q, a) = (q , a , e), then the site above it contains the data symbol a . Second, we want a machine to ‘move’ and change states according to its program. To this end, add a layer consisting of path tiles and blank tiles as in Figure 9.5. Require that tiles of type (a) and labels q, q occur precisely at sites labelled (a, q) such that f (q, a) = (q , a , e), in which case the outgoing path leaves from the right if e = 1 and from the left if e = −1. Clearly, in an admissible tiling of a rectangle, every occurrence of the machine belongs to a path which moves alternately one cell to the left or right, and one row up, and enforces the transition rule of the machine. Third, we need to make sure that each row contains only the machine coming from the previous row (as it stands, a machine could ‘enter’ from one of the sides of the rectangle). To this end, add another layer of path tiles in which every site containing a machine emits two horizontal paths, moving to the right and left. Paths run horizontally across data symbols and never terminate. Thus, if two sites in the same row contained machines, they would emit paths moving towards each other and these would meet ‘head-on’, which is impossible. Hence there is at most one machine per row. Finally, we forbid the halting state from appearing on any tile. This construction gives an alphabet ΣT extending B, and a set ET of one-step constraints (i.e., a set of 1 × 2 and 2 × 1 patterns), such that the following holds. Proposition 9.3.7 In the notation above, given an ET -admissible pattern a ∈ ΣnT , with the machine represented in the leftmost position and in its initial state, the following are equivalent. (1) There is an extension of a to an ET -admissible n × n pattern, with a in the bottom row. (2) When T is run with input equal to the sequence of data symbols represented by a, it does not halt in the first n − 1 steps. Note that when the conditions are met, the extension in (1) is unique. Indeed, T runs on a one-sided tape and can move only one cell per step, so if the machine does not halt in the first n − 1 steps it must remain in the first n cells of the tape. Therefore in the extension in (1) every row contains a machine, which determines uniquely the location and state of the machine in the next row up. An induction then proves uniqueness.
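To make the encoding above concrete, here is a minimal Python sketch (not from the text; the dictionary-based transition format, the symbol '*' for an absent machine, and all names are assumptions made purely for illustration). It simulates a machine for n steps and produces the rows b_i over B = A × (Q ∪ {∗}); stacking them bottom-to-top gives the square pattern whose consistency the local rules enforce.

```python
# Illustrative sketch: record the history of a Turing machine computation as rows
# over B = A x (Q ∪ {'*'}), one row per time step; the bottom row is the input tape.
def history(f, q0, blank, tape, n):
    """f: dict (q, a) -> (q2, a2, e) with e in {+1, -1}; q0: the initial state."""
    cells = list(tape) + [blank] * n       # enough tape: the head moves <= 1 cell per step
    pos, q = 0, q0                          # cells indexed from 0 in this sketch
    rows = []
    for _ in range(n):
        rows.append([(a, q if i == pos else '*') for i, a in enumerate(cells[:n])])
        if (q, cells[pos]) not in f:        # halting configuration: stop recording
            break
        q2, a2, e = f[(q, cells[pos])]
        cells[pos] = a2                     # write the new data symbol
        pos, q = max(pos + e, 0), q2        # move the head and change state
    return rows

# Toy machine: walk right over 1s and halt at the first blank (0).
f = {('scan', 1): ('scan', 1, +1)}
for row in reversed(history(f, 'scan', 0, [1, 1, 1], 5)):
    print(row)                              # top row printed first; time increases upward
```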


Figure 9.6 A grid with 12 × 12 bounding square, five rows and five columns. The blank squares on the outer boundary are not part of the grid.

9.3.7 Grids Our eventual objective is to construct, for a given Turing machine T , an SFT in which every configuration represents arbitrarily long computations of T . We saw above how to represent computations in square patterns, but, when the square is large, there necessarily are large subsquares that contain only data symbols and no machine (e.g., the bottom-right quadrant). Therefore, if a configuration in an SFT contains arbitrarily large squares representing computations, then by shifting it appropriately and taking a limit, we will obtain a configuration in which the machine doesn’t appear at all. The solution to this problem is to have short computations represented periodically, with longer computations represented in the spaces between the shorter ones. To implement this computation using ‘sparse’ patterns we introduce the notion of a grid. By a grid with bounding rectangle R = [[a, b]] × [[c, d]] we mean a subset G ⊆ R which, for some k, is a union of k rows and k columns running the length and width of R, among them the top and bottom rows and leftmost and rightmost columns. See Figure 9.6. More formally, G has the form G = ([[a, b]] × I) ∪ (J × [[c, d]]) for subsets I ⊆ [[c, d]] and J ⊆ [[a, b]] of cardinality k which satisfy a, b ∈ J and c, d ∈ I. A set [[a, b]] × { j} with j ∈ J is called a row of G, and a set {i} × [[c, d]] with i ∈ I is called a column of G (so a row or column in G is a row or column of R, respectively, that is contained in the grid). The boundary of G is the union of its top and bottom rows and its leftmost and rightmost columns. We refer to the set R \ G as the interior of G. The set J × I ⊆ G consists of the k2 sites that lie in both a row and a column of G. These are called the cells of G. Given u ∈ G, if we know which of its neighbours in Z2 belong to G, we can determine whether it is a cell that is not on the boundary (it has four neighbours in G), a corner cell (it has two neighbours that do not lie on the same row or column in Z2 ), a cell on the boundary of G that is not a corner (it


has precisely three neighbours in G), or a non-cell (it has two neighbours lying on a common row or column of Z2 ). By an infinite grid we mean an infinite subset G ⊆ Z2 that is the limit of grids in the sense that there are grids Gn for which u ∈ G if, and only if, u ∈ Gn for all large 2 enough n (equivalently, 1Gn → 1G in {0, 1}Z ). A cell in G is a site that is a cell in Gn for all large enough n, and these are identified via their neighbours as in the finite case. 2 Let x ∈ ΣZ and Σ0 ⊆ Σ, and let G be a grid or infinite grid. We say that x defines G (using the symbols in Σ0 ) if G is a connected component in the graph obtained from the vertex set V = {u ∈ Z2 | xu ∈ Σ0 } by connecting adjacent sites. We say that x defines grids if every connected component of this graph is a grid or an infinite grid, 2 and that X ⊆ ΣZ defines grids if every x ∈ X defines grids. Note that, since grids are defined as connected components of the graph induced by the Σ0 -symbols, different grids defined by x cannot contain adjacent sites. Also note that, by compactness, if the number of columns in grids defined in X is unbounded, then some points in X necessarily define infinite grids. We often say that a configuration defines grids without mentioning Σ0 explicitly. Remark 9.3.8 Note that if X ⊆ ΣZ is an SFT defining grids via Σ0 ⊆ Σ , and we 2 extend–reduce to Y ⊆ X × ΔZ , then Y also defines grids via Σ0 × Δ. Also note that if X contains arbitrarily large grids, then in order to verify that Y = 0, / by compactness it is enough to show that for every N there is an x ∈ X and a grid G in x with bounding rectangle R of dimensions at least N × N, such that x|R can be admissibly extended to a pattern b ∈ (Σ × Δ)R . 2
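As a concrete rendering of the neighbour-counting rule above, the following Python sketch (an illustration only; the representation of a grid as a set of integer pairs and the returned labels are assumptions) transcribes the case analysis.

```python
# Illustrative sketch: classify a site u of a grid G by which of its four
# Z^2-neighbours lie in G, mirroring the case analysis in the text.
def classify(G, u):
    x, y = u
    h = sum(v in G for v in [(x - 1, y), (x + 1, y)])   # neighbours on the same row
    w = sum(v in G for v in [(x, y - 1), (x, y + 1)])   # neighbours on the same column
    if h + w == 4:
        return 'interior cell'                # on a row and a column of G, not on the boundary
    if h + w == 3:
        return 'boundary cell (non-corner)'
    if h == 1 and w == 1:
        return 'corner cell'                  # two neighbours not on a common line
    return 'non-cell'                         # two neighbours on a common row or column

# Tiny example: the grid whose rows and columns are {0, 2} in a 3x3 bounding square.
G = {(i, j) for i in range(3) for j in range(3) if i in (0, 2) or j in (0, 2)}
print(classify(G, (0, 0)), classify(G, (1, 0)), classify(G, (2, 2)))
```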

9.3.8 Running Turing machines on grids Fix an SFT X that defines grids and a Turing machine T . By running T on the grids in X we mean the construction below, which results in an extension–reduction Y of X, in which each grid represents a computation of T . Let ΣT and ET be associated with T as in the Section 9.3.6. Extend–reduce X so that, in every x ∈ X , (i) a ΣT -symbol is superimposed over every grid cell; (ii) if a grid cell is in the lower left corner of its grid, the superimposed ΣT -symbol represents a machine in its initial state; (iii) nothing (i.e., a ‘blank’ symbol) is superimposed at sites that are not cells in a grid; (iv) we do not yet impose any adjacency rules (including those related to paths depicted in ΣT ). The result is an SFT because, as noted earlier, it is possible to determine whether a site is a grid cell in x or a corner cell by examining its immediate neighbours. Next, proceed as in Example 9.3.6 to force the constraints in ET between ΣT symbols belonging to ‘adjacent’ cells in a grid. In more detail, for every grid in x and


every cell in the grid except those in the top row, emit a path directed upward, labelled with the ΣT -symbol at its starting point, moving upward through grid sites that   are not cells, and terminating at the first cell that it reaches. For every vertical pair aa ∈ ET , we forbid vertical paths labelled a from terminating at a cell labelled a . This enforces the vertical adjacency rules. A similar procedure enforces the horizontal ones. Let Y denote the resulting extension–reduction of X . If y ∈ Y extends x ∈ X , then the cells of every grid G in x are labelled by symbols from ΣT and, after deleting the spaces between columns and rows of the grid and keeping only the cells, give a square ΣT -pattern which we call the computation represented in the grid. The initial data of the computation is the sequence of data symbols of T appearing in the bottom row of this square. Proposition 9.3.7 now implies the following. Proposition 9.3.9 In the notation above fix x ∈ X, and for each grid G in x with mG columns fix a sequence aG of length mG over the data alphabet of T . Assume mG is unbounded as G ranges over the grids in x. Then the following are equivalent. (1) There exists an extension y ∈ Y of x such that for every grid G in x, the initial data of the grid G in y is aG . (2) For every grid G in x, T runs at least mG − 1 steps without halting when started on input aG . We note that so far we have ignored infinite grids in our discussion, but it is easy to see, by a compactness argument, that extensibility of all infinite grids is equivalent to extensibility of arbitrarily large finite grids (since infinite grids are limits of finite ones).

9.3.9 Robinson’s system The basis for nearly all our later constructions will be a certain SFT developed by Robinson (1971) as part of his proof of Berger’s theorem. A detailed construction of it may be found in Section 8.2.3, and we recommend the reader to consult it before proceeding further. Here we summarise those properties of Robinson’s system that we use later. We note, however, that our later constructions are quite robust, and could rely on many other systems instead; Robinson’s system is simply a convenient starting point. Robinson’s grids For each n = 1, 2, 3, . . . there is defined a grid Gn ⊆ Z2 with the following properties. G1 The grid Gn has 2n + 1 rows and 2n + 1 columns. G2 The grid Gn has bounding square of dimensions (4n − 1) × (4n − 1). G3 The grid Gn is symmetric around the central row and column, and around the diagonals, of its bounding square. Also, the pattern of rows below the central row is a translate of the pattern of rows above the central row, and similarly for columns.


Robinson’s system Robinson’s system is a (non-empty) SFT, implemented as a set of Wang tiles, in which every configuration defines grids that are translates of the grids Gn . We refer to such a translate of Gn as an n-grid. Every configuration in Robinson’s system has the following properties. R1 The centres of n-grids form a lattice un + 2 · 4n · Z2 with un+1 = un ± (4n , 4n ) (here un ∈ Z2 depends on the configuration). Consequently, • n-grids appear with square period 2 · 4n, • for every m < n every n-grid contains 4n−m m-grids in its interior. R2 For every m < n and every n-grid, the extension of its columns to Z2 -column are disjoint from all m-grids. Consequently, for all m = n, • the columns of m-grids and n-grids have distinct x-coordinates, • the rows of m-grids and n-grids have distinct y-coordinates. R3 Every infinite grid in Robinson’s system arises as a limit of finite grids from Robinson’s system. See Figure 9.7. These properties are derived explicitly from, or are immediate consequences of, the construction in Section 8.2.3. For our applications we will need to augment Robinson’s system with two additional features. First, we shall want the symbols on the boundaries of grids to indicate the side on which lies the interior of the grid. This can be achieved by adding a directed path to the boundaries, oriented clockwise, so that the interior is always on their ‘right’. To this end, simply extend Robinson’s system by the path tiles in Figure 9.2, requiring that path tiles appear only on the boundary sites of grids. Second, we want to identify certain distinguished grids from among those that lie in the interior of a given larger grid. By (R1), for m < n and a configuration in Robinson’s system, every n-grid contains a 2n−m × 2n−m array of m-grids in its interior, the center of the array coinciding with that of the surrounding n-grid. We call the grids in this array the sub-m-grids of the given n-grid. In particular, every (n + 1)grid contains a 2 × 2 array of sub-n-grids which we call the principal subgrids. Note that not every n-grid is a principal subgrid of some (n + 1)-grid. For example, in Figure 9.7, one finds many 1-grids that are not contained in a 2-grid, and many 2grids that are not contained in a 3-grid. We want to be able to recognize those n-grids that are the upper-right principal subgrid of some (n + 1)-grid by coloring them blue, and coloring all other grids white. There are many ways to implement this, one of which is the following. Extend– reduce Robinson’s system by paths. By giving the path-segments appropriate ‘states’, and introducing appropriate rules at the reduction stage, we can force each path to have the following course: it begins in the upper-right corner of a grid and initially moves to the left along a grid boundary; at some point it turns down, and moves downward through non-grid sites, until it reaches the lower boundary of a grid; it then turns left and moves to the left along the grid boundary; at some point it turns


Figure 9.7 Grids in a configuration of Robinson’s system. In the image we see 1-grids (the 3 × 3 grey squares), 2-grids (with 5 rows and 5 columns) and a 3-grid (with 9 rows and 9 columns).

up, and moves upward through non-grid sites until reaching the upper boundary of the grid; finally, it turns left again, and moves to the left along the grid boundary, until it reaches the upper-left corner of a grid, where it terminates. All these events are required to occur and in this sequence, ensuring that the path ends in the corner of the grid it began in, and traverses the bounding square of the grid vertically twice, once top-to-bottom and once bottom-to-top, without entering any subgrids. Next, add a counter to the path, taking values {0, 1, . . ., 4}. The counter is initially 0, is required to be 2 when the path reaches the bottom row of the grid, and must be 4 when it terminates. The counter is constant along the path except when the path visits a site that is adjacent to the bottom-right corner of a grid, in which case the counter is incremented by 1. The result of this is that on each of the vertical portions of the path it must pass next to two subgrids of the grid is started in, and this is possible only if it passes adjacent to the principal subgrids, because smaller grids are arranged in an array containing more than two grids on each column. It is also clear that the upper-


right principal subgrid is the first one the path encounters, i.e., the one to the left of the path at the site where the counter is incremented from 0 to 1. To complete the construction, color grids monochromatically blue or white, requiring that the color of the bottom-right corner is blue if it is next to a path with counter incrementing from 0 to 1, and white otherwise. The effect is that upper-right principal subgrids are now colored blue, and all others, white.

9.3.10 Proof of Berger’s theorem
We reduce the halting problem for Turing machines to the emptiness problem for SFTs (Problem 9.3.1). Given a Turing machine T, start from Robinson’s system, which we denote X, and run T on grids of X, obtaining an extension–reduction Y of X. Further reduce by requiring that the input sequence to each grid consists of ‘blank’ symbols (from T’s input alphabet). Call the result Z.
If x ∈ X represents arbitrarily large grids, then by Proposition 9.3.9 x extends to some z ∈ Z if, and only if, T does not halt on blank input; so Z ≠ ∅ if, and only if, T does not halt on blank input. Reviewing the previous sections, clearly the alphabet and the set of forbidden patterns defining Z can be computed from Robinson’s tiles and T. Thus, if we could decide from this data whether Z = ∅, we could decide whether T halts on blank input. Since the latter is undecidable, so is the former.

9.4 Degrees of computability As a direct continuation of the classical undecidability results for SFTs one may consider the recursive properties of non-empty SFTs. An early result of this type, dating from the 1970s, is Myers’s construction of SFTs that do not contain computable points (see (Myers, 1974)), and recently, it has emerged that one can get more refined information about the degrees of computability of SFTs (see (Simpson, 2014)). This section is dedicated to an exposition of these results.

9.4.1 Recursive functions and sets
A function f : ℕ → ℕ is recursive, or computable, if there is a Turing machine which on input n outputs f(n). A set A ⊆ ℕ is recursive (R) if its indicator function 1_A is computable (i.e., there is a Turing machine that on input n outputs 1 if n ∈ A and 0 if n ∉ A). A set A ⊆ ℕ is recursively enumerable (RE) if there is a Turing machine T that on input n ∈ A returns 1, and on input n ∉ A does not halt. We then say that T semi-decides membership to A. A recursive set is recursively enumerable but not vice versa, as shown by the set of halting Turing machines.
Recursion theory classically deals with natural numbers and with sets and functions of natural numbers. We formulate the definitions below in these terms, but they extend to other sets (and functions on them) as long as the sets can be computably


identified with ℕ. For example, the set of finite subsets of ℕ forms one such family, as does the set of finite words Σ^∗ over a finite alphabet Σ, so we may speak of recursive sets and functions of these objects. Below we make these identifications implicitly, applying the definitions wherever reasonable.
Example 9.4.1 Let S = {E ⊆ Σ^{∗_d} | X(E) = ∅}. After Σ^{∗_d} is identified computably with ℕ, the set S is RE, since, as we saw in Section 9.3.1, membership to S is semi-decidable.

Lemma 9.4.2 A set A ⊆ ℕ is RE if, and only if, there is an algorithm that on input n computes a finite (possibly empty) set A_n ⊆ ℕ such that A = ⋃_{i∈ℕ} A_i.
Proof Given a sequence (A_i) as in the statement, and given n, iterate over i = 1, 2, . . . and halt if n ∈ A_i. This algorithm halts on input n if, and only if, n ∈ A. Conversely, given such an algorithm and given n, simulate the algorithm for n steps on each of the inputs 1, . . . , n, and output the set A_n of inputs on which it halted. This is the desired sequence (A_n).
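The two directions of Lemma 9.4.2 are easy to express as code. The following Python sketch (the names and the step-bounded simulation interface are assumptions made for illustration, not part of the text) dovetails through the stages A_1, A_2, . . . in one direction and truncates the simulation in the other.

```python
# Illustrative sketch of the two directions of Lemma 9.4.2.
def semi_decide(n, stages):
    """Given a computable i -> A_i (finite sets with union A), halt iff n is in A.
    Loops forever when n is not in A, exactly as an RE semi-decision procedure."""
    i = 1
    while True:
        if n in stages(i):
            return True
        i += 1

def stage(halts_within, n):
    """Converse direction: from a (step-bounded) semi-decision procedure, output the
    finite set A_n of inputs 1..n on which the procedure halts within n steps."""
    return {m for m in range(1, n + 1) if halts_within(m, n)}

# Toy example: A = even numbers, enumerated in stages A_i = {2, 4, ..., 2i}.
stages = lambda i: {2 * j for j in range(1, i + 1)}
print(semi_decide(6, stages))                    # halts and returns True
print(stage(lambda m, steps: m % 2 == 0, 5))     # {2, 4}
```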

9.4.2 Effective subsets of {0, 1}^ℕ
Recursive sets are by definition countable, and the notion does not apply to sets X ⊆ {0, 1}^ℕ that are uncountable. However, as a topological space, {0, 1}^ℕ has a nicely parametrised subbasis, given by the cylinder sets, and when X ⊆ {0, 1}^ℕ is closed, it is characterised by the family of cylinder sets in its complement. We shall say that a subset X ⊆ {0, 1}^ℕ is effectively closed³ (or just effective) if there is a recursively enumerable set of patterns L such that X = {0, 1}^ℕ \ ⋃_{a∈L} [a] (this is similar to the definition of X(L), but here we do not allow patterns to be translated, as we did in the definition of X(L)). Define the language L(X) and complementary language L^c(X) of a set X ⊆ {0, 1}^ℕ in the same way as for subshifts (Section 9.2.2). We have the following characterisation.
Lemma 9.4.3 A closed set X ⊆ {0, 1}^ℕ is effective if, and only if, L^c(X) is RE.

Proof If L^c(X) is RE, then X is effective by the identity X = {0, 1}^ℕ \ ⋃_{a∈L^c(X)} [a]. Conversely, suppose that X = {0, 1}^ℕ \ ⋃_{a∈L} [a] for an RE set L of patterns, and let L = ⋃_{n=1}^∞ L_n where the L_n are finite and n → L_n is computable (Lemma 9.4.2). Then b ∈ L^c(X) if, and only if, every extension of b to a configuration y ∈ {0, 1}^ℕ has a subpattern in L. By Koenig’s lemma there exists an N ∈ ℕ such that supp(b) ⊆ [[1, N]], and every extension c ∈ {0, 1}^N of b has a subpattern in L; furthermore, since there are only finitely many extensions of length N, if N′ is large enough, each such extension c must contain a subpattern from L_{N′}. Clearly we may replace N, N′ by

3 In the recursion-theory literature, effectively closed sets are also called Π^0_1-classes, and are part of a larger hierarchy that we shall not discuss; see Rogers (1967). Also, it is sometimes assumed that L is a set of words a_1 . . . a_n, and X is the set of sequences without prefixes in L. This results in an equivalent definition.


max{N, N′} and the statement remains true. Thus, the following algorithm semi-decides membership to L^c(X): on input b ∈ {0, 1}^{∗_1}, iterate over all N = 1, 2, . . ., and for each N such that supp(b) ⊆ [[1, N]], halt if every extension c ∈ {0, 1}^N of b contains an element of L_N as a subpattern.
The notion of an effectively closed set extends in the usual manner to closed subsets of Σ^{Z^d} whenever Σ is finite.
Corollary 9.4.4 If E ⊆ Σ^{∗_d} is RE then X(E) is effectively closed. In particular SFTs are effective.
Proof If E is RE then F = (S^u a)_{u∈Z^d, a∈E} is RE, and since X(E) = Σ^{Z^d} \ ⋃_{a∈F} [a], it is effective.

9.4.3 Computable functions between sequence spaces Classically, a Turing machine accepts a finite string as input, but we can relax this to allow an infinite word as input, written on the tape and accessed in the usual manner, one symbol at a time (the machine may either write on the tape that the input is on, or we may make the input read-only and provide a second tape for ‘work space’; computation-theoretically the result is the same). An alternative model imagines a ‘black box’ (or ‘oracle’) which the machine can query to obtain the sequence of input symbols. These models are computationally equivalent and give meaning to computations whose input is an infinite sequence. Example 9.4.5 The closed set X ⊆ {0, 1}N is effective if, and only if, there is a / X . Indeed, if Turing machine T that on input x ∈ {0, 1}N halts if, and only if, x ∈ # L = Lc (X ) is RE, let L = n∈N Ln where Ln are finite sets and n → Ln is computable. Then the desired Turing machine is the one that on input x ∈ {0, 1}N checks for each n = 1, 2, . . . whether x1 . . . xn has a subpattern from Ln , and if so, halts. Conversely, given T , let L ⊆ {0, 1}∗ denote the set of words on which T halts. By definition it is RE, and if T halts on x ∈ {0, 1}N then during the computation it reads only finitely many bits of x so in fact T halts on some prefix of x, hence x has a prefix in L. It # follows that the set of x on which T does not halt is just {0, 1}N \ a∈L [a], which is effective by definition. We can now define computable maps between sequence spaces. For X ⊆ {0, 1}N , a function f : X → {0, 1}N is computable or recursive if there is a Turing machine T that, on input (n, x) ∈ N × X, outputs f (x)n . We then say that T computes f . Since the machine T in the definition reads only finitely many bits of x when it runs on each (n, x), for each  ∈ N there is some N = N(x, ) such that only the prefix x1 . . . xN is used in computing f (x)|[[1,]] . In this case, we say that T computes  bits of f from x1 . . . xN . It follows that if y ∈ X and y|[[1,N]] = x|[[1,N]] then f (y)|[[1,]] = f (x)|[[1,]] .


Consequently, f is continuous on its domain.4 Example 9.4.6 Factor maps between symbolic dynamical systems are computable, since they are given by sliding block codes (Theorem 9.2.7). Proposition 9.4.7 If f : X → Y and g : Y → Z are computable functions (X,Y, Z ⊆ {0, 1}N) then g ◦ f : X → Z is computable. Proof Let T, S be Turing machines computing f , g, respectively. Fix x ∈ X and write y = f (x). Consider the following algorithm whose input is n ∈ N. For each k = 1, 2, 3, . . . compute y1 . . . yk by running T with inputs (i, x), 1 ≤ i ≤ k, then run S for k steps on input (n, y1 . . . yk ). This simulates the first k steps of S running on input (n, y) because in k steps S reads at most k symbols of y. If the machine halts and outputs b ∈ {0, 1}, then output b. Evidently, this algorithm outputs the nth digit of g(y) = g( f (x)) on the kth iteration, where k is the number of steps of computation of S on input (n, y). Proposition 9.4.8 Let X ⊆ {0, 1}N be effective. If f : X → {0, 1}N is computable then f (X) is effective. Proof Since Lc (X ) is RE, there is a computable sequence of finite sets Ln ⊆ {0, 1}∗ # such that Lc (X ) = n∈N Ln . Let T be the Turing machine computing f . Since X is compact and f continuous on its domain, f (X ) is closed, so it suffices to show that Lc ( f (X)) is RE. We claim that the following algorithm semi-decides membership to Lc ( f (X)). On input c ∈ {0, 1}∗ of length m, for n = 1, 2, . . . do the following. First, compute the set Bn ⊆ {0, 1}n of words b that do not have any a ∈ Ln as a prefix. Second, for each b ∈ Bn , check (by simulation) whether T computes m bits of f from b in n steps or less; if yes, let cb ∈ {0, 1}m be the resulting word, otherwise cb is undefined. Third, if cb is defined for all b ∈ Bm and satisfies cb = c, halt. Let us show that this works. For every x ∈ X and n ∈ N we have x|[[1,n]] ∈ Bn . Therefore, if on input c our algorithm halts at stage n, then for every x ∈ X the machine T computes a word cx1 ...xn of length m from x1 . . . xn and cx1 ...xn = c. Thus c is not a prefix of f (x) for any x ∈ X , so c ∈ Lc ( f (X )). Conversely, suppose c ∈ Lc ( f (X )). For every x ∈ X there is an N1 = N(x) such that T computes f (x)|[[1,m]] in N1 (x) steps or less, and by compactness we can choose N1 independent of x ∈ X . By Koenig’s lemma, there is an N2 ∈ N such that for n > N2 every b ∈ Bn is a prefix of some x(b) ∈ X. Now if n > max{N1 , N2 } then T computes m bits of f from b ∈ Bn in at most n steps and the output coincides with f (x(b))|[[1,m]] , which is = c because x(b) ∈ X and c ∈ Lc (X ). Thus the algorithm eventually halts. 4

In the recursion-theory literature, one often takes the domain of a recursive function to be the largest set for which it can be defined using a given Turing machine. Specifically, suppose a Turing machine T is given. A point x ∈ {0,1}N may have the property that for every n ∈ N the machine T halts on input (n,x) with an output bit yn . The map x → y = (yn )∞ n=1 defines a computable function f T whose domain is the set of such x. By the discussion above for each  ∈ N there is an open set of sequences x such that T computes  bits from x, so the domain of fT is a Gδ set. We prefer to be explicit about the domains of our functions.


Corollary 9.4.9 Sofic shifts are effective. Proof SFTs are effective, factor maps are computable, and sofic shifts are the image of an SFT by a factor map.
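Example 9.4.6 and Corollary 9.4.9 rest on the fact that a sliding block code reads only finitely many input symbols to produce each output symbol. A minimal Python sketch of this point (the oracle-as-callable model and the particular window size are assumptions made for illustration, not the text's definition):

```python
# Illustrative sketch: a sliding block code computed as an 'oracle' machine that
# reads only finitely many bits of the input sequence per output bit.
def sliding_block(block_map, radius):
    """Return f(x, n): the nth output symbol, computed from the finite window
    x_{n-radius}, ..., x_{n+radius} only (indices below 1 are clamped to 1,
    a convenience for this one-sided sketch)."""
    def f(x, n):
        window = tuple(x(max(i, 1)) for i in range(n - radius, n + radius + 1))
        return block_map(window)
    return f

# Example: output bit n is the XOR of the three input bits around position n.
f = sliding_block(lambda w: w[0] ^ w[1] ^ w[2], radius=1)
x = lambda i: i % 2                     # oracle for the sequence 1, 0, 1, 0, ...
print([f(x, n) for n in range(1, 8)])   # only 2*radius + 1 oracle queries per bit
```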

9.4.4 Degrees of computability
Any two non-empty topological spaces admit plenty of continuous maps taking one into the other, e.g., the maps sending the whole space to a point. If the spaces are effective, however, and one adds the requirement that the maps be computable, the situation is completely different, and this embedding relation becomes non-trivial and interesting.
Given effective sets⁵ X, Y ⊆ {0, 1}^ℕ, we say that X is strongly reducible (or Medvedev reducible) to Y, denoted X ≤_s Y, if there exists a computable function f : Y → X; and we say that X is weakly reducible (or Muchnik reducible) to Y, denoted X ≤_w Y, if for every y ∈ Y there exists a computable function f such that f(y) ∈ X. By Proposition 9.4.7 these are transitive relations, and reflexive since the identity map is computable. Note that strong reducibility is the uniform version of weak reducibility, the map being independent of the input, so the former implies the latter. The two forms of reducibility are not equivalent, see Hinman (2012).
One should interpret these relations as follows. If we want to show that X is not empty, then we must produce a point x ∈ X. If X ≤_w Y or X ≤_s Y, and if we are able to produce a point y ∈ Y, then we can, by applying a suitable computable function f, obtain the point x = f(y) ∈ X. Thus demonstrating that X ≠ ∅ is at least as hard as demonstrating that Y ≠ ∅, and in this sense, X is at least as complicated as Y. Notice that, counter-intuitively, X ⊆ Y =⇒ Y ≤_s X because the inclusion map is computable.
Effective sets X, Y are said to be strongly (respectively weakly) equivalent if X ≤_s Y ≤_s X (respectively X ≤_w Y ≤_w X). The equivalence class of X is called the strong (or Medvedev) degree of X (respectively the weak (or Muchnik) degree of X). We denote by ≤_s and ≤_w the partial orders induced by the corresponding reduction relations on the set of strong and weak degrees, respectively.
Lemma 9.4.10 Let X ⊆ {0, 1}^ℕ be an effective set. If x ∈ X is computable (as a function x : ℕ → {0, 1}) then X ≤_s Y for all effective sets Y. Conversely, if X ≤_w Y for all effective sets Y then X contains a computable point. In particular, there is a minimal degree (weak and strong) consisting of those effective sets that contain computable points.
Proof If x ∈ X and Y is effective then the constant function with value x is computable from Y to X, hence X ≤_s Y. Conversely if X ≤_w Y for all effective Y, then

5 The definition makes sense also when X, Y are not effective.


this holds for Y = {0^ℕ}. Then there is a Turing machine computing x ∈ X from 0^ℕ, and since 0^ℕ is computable, so is x.
Lemma 9.4.11 There exists a non-trivial weak (and hence strong) degree.
Proof This is a diagonal argument. Let {T_n}_{n=1}^∞ be an enumeration of the Turing machines. If T_n halts on input n with output u_n ∈ {0, 1} define the cylinder set C_n = {x ∈ {0, 1}^ℕ | x_n = u_n}. The set I of n for which C_n is defined is not recursive but it is clearly RE, so X = {0, 1}^ℕ \ ⋃_{n∈I} C_n is effective. It is non-empty because it contains the point x defined by x_n = 1 − u_n for n ∈ I and x_n = 0 for n ∉ I. Finally, if some x ∈ X were computable, x would be computed by T_k for some k. Then x_k = u_k (both are the output of T_k on input k), but x ∈ X implies x ∉ C_k, hence x_k ≠ u_k, a contradiction.
The structure of the partial orders on degrees has been extensively studied but much about it remains a mystery. In any event, the order relation itself is highly non-computable. For more information see Hinman (2012).

9.4.5 Realising degrees as SFTs
SFTs are effective sets (Corollary 9.4.4), so with each SFT there is associated its weak and strong degree. The following theorem tells us which degrees arise in this way.
Theorem 9.4.12 (Simpson (2014)) Every degree (weak or strong) of an effective set occurs as the degree of a two-dimensional SFT.
Since effective sets of non-trivial degree contain no computable points, we obtain as a corollary a theorem of Myers (1974) (see also Hanf (1974)), which pre-dates Theorem 9.4.12 by several decades. We remark that Simpson’s proof of Theorem 9.4.12 in fact relies on Myers’s construction.
Theorem 9.4.13 (Myers (1974)) There exists a non-empty SFT X which does not contain a computable point.
For the proof of Theorem 9.4.12, begin with an effective set W ⊆ {0, 1}^ℕ, and a Turing machine T that on input w ∈ {0, 1}^ℕ halts if, and only if, w ∉ W (see Example 9.4.5). Let X denote Robinson’s system, and run T on the grids in X, obtaining an SFT Y. For every x ∈ X there are extensions to Y for any assignment of input symbols to grids, as long as for each input to a grid, T does not halt on the given input in less than the number of rows of the grid.
We next construct an extension–reduction Z of Y whose purpose is to synchronise between the inputs of grids. Specifically, we will have that y ∈ Y extends to Z if, and only if, the following holds.
• All n-grids in y have the same input word, denoted a_n(y) ∈ {0, 1}^{2^n + 1}.
• a_n(y) is the prefix of a_{n+1}(y).






Figure 9.8 Diagram to show that u takes values 0, 1.

We give the details of the construction shortly, but first explain how this finishes the proof that W and Z are strongly equivalent. Observe that if z ∈ Z extends y ∈ Y , then z determines a unique infinite word a(z) ∈ {0, 1}N whose prefixes are the words an (y), and a configuration w ∈ {0, 1}N occurs as a(z) for some z ∈ Z if, and only if, for all k ∈ N, the machine T does not halt in k steps on the prefix w1 . . . wk of w. Thus, by choice of T , the image of Z under z → a(z) is just W . Conversely, fixing a computable point x of the Robinson system (one can construct such a point explicitly using the recursive construction of ‘patches’ in Section 8.2.3), and fixing w ∈ W , we can extend x to y so that the inputs to grids are the initial segments of w, and extend further to Z (since the construction is explicit, this is a computable procedure). We obtain a computable map from W into Z. Thus the SFT Z is strongly equivalent (and hence weakly equivalent) to W . We turn to the details of the extension from Y to Z. Synchronising grids of the same level Let A denote the data alphabet of T . Extend Y by the alphabet A, and reduce by requiring that, first, each A-symbol is identical to the ones vertically adjacent to it, so the A-layer is constant on columns, and, second, at every input cell of a grid in the Y -layer, the data symbol represented in the cell agrees with the symbol in the second layer. Since by (R2) two n-grids either share no Z2 -columns or share all of them, and the cells of n-grids are located in different Z2 -columns from the cells of m-grids when m = n, we have forced synchronisation of inputs of n-grids that are vertical translates of each other. Next, extend–reduce by the tiles in Figure 9.8. Require that path segments appear only at grid sites (in particular, this means a path stays within a single grid), that paths begin at input cells of grids excluding the lower-right corner of grids, that their labels agree with the data symbols at their initial point, and that they terminate at cells in the rightmost column of a grid. Evidently, the result is to connect every input cell of a grid, except the lower right corner, to a cell in the right side of the same grid. There is enough space for all these paths to coexist because grids have the same number of rows and columns (and since the extension is possible for finite grids, it is possible for infinite ones, since they are limits of finite ones). Thus the rightmost cells of each grid are now labelled with a sequence of data symbols, given either by the labels of the new paths terminating in them, or, in the case of the lower-right cell, the input symbol there. Clearly this word is the same as the input word, when read top-to-bottom. Using a horizontal variant of the vertical synchronisation performed earlier, we



























Figure 9.9 Diagram to show that u takes values {0, 1}: (a) path begins and moves down; (b)+(c) turns sideways and continues; (d)+(e) turns down, continues and terminates.

can force the data symbols in the rightmost cells of grids that are horizontal translates of each other to be equal. Since for every two n-grids there is a third that is a horizontal translate of one and a vertical translate of the other, for each n we have forced all n-grids to have the same input sequence.
Synchronising between n- and (n + 1)-grids
The basic idea is to connect the first 2^n + 1 symbols of input in every (n + 1)-grid by paths to the input cells of its upper-right principal subgrid, and to force the input symbols to agree with the path label, thus synchronising them. For this we use the tiles in Figure 9.9, so paths move down from their starting point, turn sideways once and then down again, before terminating. We require the following.
• Paths begin at input cells of upper-right principal subgrids, which can be identified by their blue colour, see Section 9.3.9.
• Paths can only terminate in input cells (but we do not require each input cell to be the terminal point of a path).
• Each path is labelled by a data symbol that is constant along the path, and is required at the initial and terminal sites of the path to agree with the corresponding symbol in the A-layer.
• Paths may not cross the grid boundaries.
We claim that every configuration in Y can be extended so that these conditions are satisfied. Note that by construction the paths stay in the space between the boundary of an (n + 1)-grid and its subgrids, so there is no interaction between paths associated with grids of different sizes, and we only need to consider a single grid.
Fix an (n + 1)-grid. We first claim that it has 2^n + 1 rows in the space between its upper and lower principal subgrids. Indeed, there are 2^{n+1} + 1 rows all together, so by the symmetries (G3), 2^n of them lie in the bottom half of the bounding square, 2^n in the top, and the middle row belongs to the grid. From the last part of (G3) it follows that exactly half of the 2^n rows in the bottom half of the bounding square lie above the lower principal subgrid, and exactly half of the 2^n rows in the upper half lie below the upper principal subgrid. Thus we have 2^n/2 + 2^n/2 = 2^n rows between the upper


and lower principal subgrids, excluding the middle row, together with which we have 2^n + 1 rows, as claimed. Now, this is also the number of input cells in the upper-right principal subgrid. Thus the desired paths can be arranged as follows: from the i-th input cell of the upper-right principal subgrid, a path goes down, passing i − 1 rows of the bounding (n + 1)-grid and turning left at the i-th one. It continues left until it reaches the i-th column of the bounding grid, and turns down once more, terminating at the first input cell it reaches. Observe that, because each path travels in a row of its own, no collisions occur.
As it stands, there are many ways for the paths above to connect the principal sub-n-grid to the outer (n + 1)-grid. We are interested in the extension in which the terminal sites of the paths are precisely the leftmost 2^n + 1 input cells of the (n + 1)-grid. For this, introduce adjacency conditions on the cells of the bottom row of grids, requiring that if a cell is not a terminal site of one of the paths then neither is the neighbouring cell on its right. Cells are not adjacent, but such ‘long-range’ constraints can be implemented as in Example 9.3.6.
This completes the construction, and the proof of Theorem 9.4.12.

9.4.6 Implications for the factorisation relation
One of the central problems in symbolic dynamics is to understand the factor relation between subshifts, and to a large extent this has been achieved for one-dimensional SFTs and sofic shifts (Adler and Marcus (1979); Boyle (1983)). The results of this section point to the extreme difficulty in obtaining similar results in higher dimensions. Indeed, if X, Y are effective subshifts, and π : X → Y a factor map, then, since π is computable, Y ≤_s X (and also Y ≤_w X). Thus the factor relation between SFTs is at least as complex as the order relation between their strong (or weak) degrees, and since every degree occurs as an SFT, the relation is highly complex.
We remark that there are various other results with similar conclusions. For example, Clemens (2009) proved that the isomorphism relation between subshifts, as a subset of all pairs of subshifts in the appropriate topology, is a universal countable Borel equivalence relation. Such results are encountered in other branches of dynamics (Foreman et al. (2011)).

9.4.7 Further developments
Another interesting direction to take is to study the complexity of individual points in a subshift. One interesting measure of uncomputability of a sequence is the growth of the Kolmogorov complexity. Fixing a universal Turing machine U, the Kolmogorov complexity κ(a) of a ∈ Σ^{[[−n,n]]^d} is the length of the smallest program p such that U computes a_u from (p, u) for every u ∈ [[−n, n]]^d. The function a → κ(a) is not computable, and the exact value of κ(a) depends on the choice of U, but, using the fact that any universal Turing machine can simulate any other, one can show that changing U affects κ(a) only by an additive constant. If x ∈ Σ^{Z^d} is a computable


point then there is a single Turing machine T computing the function u → x_u, so κ(x|_{[[−n,n]]^d}) = O(1). The following result can thus be viewed as a strengthening of Myers’s theorem (Theorem 9.4.13).
Theorem 9.4.14 (Durand et al. (2008)) Every SFT X ⊆ Σ^{Z^d} contains a point x such that κ(a) = O(n^{d−1}) for every a ∈ Σ^{[[−n,n]]^d} occurring in x, and there exist SFTs Y ⊆ Σ^{Z^d} such that κ(a) = Ω(n^{d−1}) for every y ∈ Y and every a ∈ Σ^{[[−n,n]]^d} appearing in y.

Another measure of the complexity of a configuration is its Turing degree, defined as the equivalence class of x under the relation x ≈_T y if, and only if, there is a Turing machine computing x from y, and vice versa. Some results on the sets of Turing degrees arising in SFTs appeared in Jeandel and Vanier (2011).
Finally, we mention that the dynamical properties of an SFT (or subshift) can have non-trivial interaction with its recursive properties. For instance, if an effective subshift is minimal or strongly irreducible, then its language is recursive, it contains computable points, and it has minimal degree; see, e.g., Hochman (2009) and Kůrka (1999). Whether there is any relation between dynamics and higher degrees remains unclear.

9.5 Slices and subdynamics of sofic shifts
In the previous section we described the strong and weak degrees of SFTs. The results indicate that the classes of SFTs and sofic shifts are computationally very rich, but give limited information about what SFTs ‘look like’. Roughly speaking, this classifies systems only up to the operation of taking images by computable functions, and these can destroy nearly all statistical, combinatorial or geometric properties of a configuration. For example, both the full shift Σ^{Z^d} and the shift containing the single fixed point 0^{Z^d} have minimal degree, and so does Robinson’s system. One would ideally like to have some usable description of the class of SFTs and their languages. This is currently out of reach (and may be impossible), but as we shall see in the present section, it is possible to fully understand the sets of configurations arising from sofic shifts by sampling them along lower dimensional lattices.

9.5.1 Effective subshifts
An effective subshift is a subshift X ⊆ Σ^{Z^d} that is an effectively closed set (since the cylinders can be effectively enumerated, effective subsets of Σ^{Z^d} can be defined as in Section 9.4.2). The idea of making the static notion of an effective set dynamic in this way first appeared in (Cenzer et al., 2008).
Note that X ⊆ Σ^{Z^d} is an effective subshift if, and only if, X = X(E) for an RE set of Σ-patterns E (the ‘if’ direction follows from Corollary 9.4.4, the ‘only if’ from Lemma 9.4.3, once they are adapted to the configuration space Σ^{Z^d}). In particular,


if the language of a subshift is recursive, then so is its complementary language, hence the subshift is effective. This shows that many ‘natural’ examples of subshifts are effective, e.g., the Thue–Morse shift, one-dimensional substitution subshifts, the Chacon subshift, etc. By Corollary 9.4.4 and Corollary 9.4.9, SFTs and sofic shifts are effective subshifts, but the class of effective shifts is much broader than this. Example 9.5.1 (A non-sofic effective shift) In dimension 1 every sofic shift has periodic points, and there are many effective subshifts (e.g., Thue–Morse) without periodic points, hence not every effective subshift is sofic. We present an example in dimension 2, but it can easily be generalised to any d ≥ 2. Let AN denote the set of square N × N patterns over the symbols {0, 1}. For 2 each N let XN ⊆ {0, 1, ∗}Z consist of configurations x with the following properties: (a) the set of u such that xu = ∗ is the union of two sets, one made of a column repeating horizontally with period N, the other of a row repeating vertically with period N; (b) each pattern from AN−1 occurs in x; 2 (c) x has horizontal and vertical period N · 2N . Observe that XN is not empty. Indeed, begin with an arrangement of ∗s as in (a), and note that the complement is an array of (N − 1) × (N − 1) squares. Choose a 2 2 2 2N × 2N subarray of these squares, and place each of the 2(N−1) patterns a ∈ AN−1 in one of them, filling in the remaining squares in the array arbitrarily; this ensures (b). Now repeat with the desired period, ensuring (c). # # Let X = N∈N XN . Then L(X) = L(XN ), which is easily seen to be recursive, so X is effective. Also, observe that since the gaps between the columns and rows of ∗s # in XN tend to infinity as N → ∞, every configurations in X \ N∈N XN has at most a single row and a single column of ∗s. 2 We claim that X is not sofic. Indeed, suppose there existed an SFT Y ⊆ ΔZ and a factor map π : Y → X. By Propositions 9.2.13 and 9.2.14 we may assume that Y is one-step and that π is a symbol-code (π (x)u depends only on xu ). Let BN denote the 2 set of square N × N patterns over Δ. Let N be large enough that |Δ|4(N+1) < 2(N−1) and consider y ∈ Y with x = π (y) ∈ XN . For each a ∈ AN−1 let a∗ denote the (N + 1) × (N + 1) pattern obtained by surrounding a by a border of ∗s, so by assumption a∗ occurs in x and there exists a pattern b(a) ∈ BN+1 appearing in y which under π maps to a∗ . Now, the number of Δ-patterns whose shape is the boundary of an (N + 1) × (N + 1) square is less than |Δ|4(N+1) (because the boundary has at less than 2 4(N + 1) sites). Since |AN−1 | = 2(N−1) > |Δ|4(N+1) , there must exist distinct a, a ∈ AN−1 such that b(a), b(a ) induce the same pattern on their boundaries. Choose a single occurrence of b(a) in y, and replace it with b(a ) to obtain a configuration y . Since the boundaries of b(a), b(a ) agree, all adjacent pairs of symbols in y already appeared in y, and since Y is a one-step SFT and y ∈ Y also y ∈ Y . Now, the point x = π (y ) ∈ X is obtained from x by replacing the corresponding occurrence of a


by a′. Thus the pattern of ∗s in x and x′ is identical, so, since x ∈ X_N, we must have x′ ∈ X_N. But x is periodic and x′ differs from it at finitely many sites, so x′ is not periodic, a contradiction.
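The pigeonhole step above needs |Δ|^{4(N+1)} < 2^{(N−1)^2} (these are the exponents used in the example, with the superscripts restored). A quick, purely illustrative numeric check that such an N always exists:

```python
# Small illustrative check: for any alphabet size D = |Delta| there is an N with
# D**(4*(N+1)) < 2**((N-1)**2), since the right-hand exponent grows quadratically.
def smallest_N(D):
    N = 2
    while D ** (4 * (N + 1)) >= 2 ** ((N - 1) ** 2):
        N += 1
    return N

for D in (2, 4, 16, 256):
    print(D, smallest_N(D))
```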

9.5.2 Slices
Let X ⊆ Σ^{Z^d}. For k < d the k-slice⁶ of X is

X_{Z^k} = {x|_{Z^k} | x ∈ X},

where we identify Z^k with the subgroup Z^k × {0}^{d−k} ≤ Z^d. Let us say that a k-dimensional pattern a ∈ Σ^D, D ⊆ Z^k, appears in a d-dimensional configuration x if k ≤ d and there is some u ∈ Z^d such that x_{u+v} = a_v for all v ∈ D, where we identify v = (v_1, . . . , v_k) ∈ Z^k with (v_1, . . . , v_k, 0, . . . , 0) ∈ Z^d, as in the definition. Then

L(X_{Z^k}) = {a | a is a k-dimensional pattern appearing in some x ∈ X}.

Lemma 9.5.2 If X ⊆ Σ^{Z^d} is a subshift then X_{Z^k} is a subshift for every k < d.

Proof XZk is the image of X under π (x)u = x(u1 ,...,uk ,0,...,0) . This is a continuous map, so XZk is closed, and clearly S(u1 ,...,uk ) ◦ π = π ◦ S(u1 ,...,uk ,0,...,0) , which by shiftinvariance of X implies that π (X ) is shift-invariant. Example 9.5.3 (Every subshift is a slice) For a subshift Y ⊆ ΣZ let Y (d) = {z ∈ ΣZ | ∃y ∈ Y ∀(u1 , . . . , ud ) ∈ Zd z(u1 ,...,ud ) = yu1 }. d

This is the unique subshift Z such that ZZ = Y and whose configurations are constant on each ‘hyperplane’ {i} × Zd−1 . Thus every one-dimensional subshift occurs as a slice of higher-dimensional subshifts, and it is not hard to see that if Y is an SFT or k sofic shift then so is Y (d) . The construction is easily generalised to Y ⊆ ΣZ . Example 9.5.4 (XZ does not determine X) If X = {0, 1}Z then XZ = {0, 1}Z . But 2 for the SFT X  ⊆ {0, 1}Z in which vertically adjacent symbols must agree, we again have XZ = {0, 1}Z . Thus X |Z does not determine X , even for SFTs. 2

Example 9.5.5 (SFTs can have non-SFT slices) Let Y ⊆ {0, 1}Z denote the set of configurations y such that if for some n ≥ 2 the sequence 01n0 appears in row j, then in row j + 1 the sequence 001n−200 must appear above it, and in row j − 1 the sequence 01n+2 0 must appear below it, with the same centre. Also, the word 010 may not appear. It is easy to see how to enforce this using local constraints (the details are left to the reader) so Y is an SFT. If y ∈ Y , then each occurrence of 1 belongs to an infinite ‘triangle’ which grows wider as one moves downward. No row in y can contain more than a single consecutive block of 1s, since if it did then somewhere below it the 2

6 Note that k-slices are also called the k-dimensional projective subdynamics in (Pavlov and Schraudner, 2015a) and (Schraudner, 2015). We prefer the shorter terminology.


two corresponding triangles would have to merge, and this is not provided for in the definition. Thus Y_Z consists of the sequences 0^∞, 1^∞, . . . 000111 . . . and . . . 111000 . . ., together with the countably many sequences of the form . . . 0001^{2n}000 . . ., n ∈ ℕ. This is not an SFT for reasons similar to those in Example 9.2.9.
Example 9.5.6 (SFTs can have non-sofic slices) One can show that, for the SFTs constructed in the proof of Berger’s theorem (Theorem 9.3.3), the 1-slices can have an undecidable language, and hence are not one-dimensional sofic shifts (recall Theorem 9.2.10).
One can broaden the notion of a slice by allowing restrictions of X to a general subgroup {0} ≠ Λ ≤ Z^d, but for our purposes this is not really more general. Indeed, let X ⊆ Σ^{Z^d} be an SFT or sofic shift and v_1, . . . , v_k ∈ Z^d linearly independent vectors defining a subgroup Λ = span_Z{v_1, . . . , v_k}. Extend the sequence to d linearly independent vectors v_1, . . . , v_d ∈ Z^d, and let

Y = {y ∈ Σ^{Z^d} | ∃x ∈ X ∀(s_1, . . . , s_d) ∈ Z^d : y_{(s_1,...,s_d)} = x_{∑ s_i v_i}}.

One can show that Y is a sofic shift, and YZk = XΛ in the obvious sense, so every slice of a sofic shift in this broader sense is already a slice in the original one.

9.5.3 Characterisation of sofic slices
Theorem 9.5.7 (Hochman (2009)) A subshift Y ⊆ Σ^{Z^k} is a k-slice of a sofic subshift if, and only if, it is effective.

One direction is immediate. If X is sofic then it is effective, and X_{Z^k} is the image of X under the restriction map x → x|_{Z^k}, which is computable. By Proposition 9.4.8, X_{Z^k} is effective.
The other direction of Theorem 9.5.7 requires a construction realising a given k-dimensional effective subshift as the slice of a d-dimensional SFT. In the original proof the construction used d = k + 2. This was later reduced to d = k + 1, independently by Aubrun and Sablik (2010) and Durand et al. (2009). This is optimal: one cannot have d = k because not every effective subshift is sofic (Example 9.5.1). The construction given here uses d = k + 1, but is new and is more in the spirit of the classical constructions of Robinson and Myers (see Sections 9.3.9 and 9.4.5). For simplicity we give the proof with k = 1 (and d = 2), but it can be adapted to any k < d.
Fix an effective subshift Y ⊆ Σ^Z; our goal is to construct a sofic shift X ⊆ Σ^{Z^2} such that X_Z = Y. As usual, the construction proceeds by a sequence of extension–reductions.
Grid layer
For the proof we rely on a certain SFT X_0 that will play a role similar to that of Robinson’s system in our earlier constructions. We postpone the dirty details to Section 9.5.4, and here only summarise its relevant properties.


As usual we think of the symbols of X0 as square tiles. Some of the tiles are marked with path lines that we call connector paths; between 0 and 34 path segments can coexist on each tile, and they may cross each other, but they must have distinct initial and terminal sites. Every tile is also marked with one of the symbols ◦, •. Together with X0 we define increasing integer sequences Nn , Pn , with Pn a power of 2, and satisfying Nn+1 > 2 · Pn2.

(9.1)

Every x ∈ X0 defines grids which we call supergrids. An n-supergrid is a supergrid with Nn columns and rows. As with grids, the intersections of the columns of a supergrid with its bottom row will be called input cells. All supergrids in x are nsupergrids for some n, and the following hold. S1 For all n, the centres of n-supergrids in x form a translate of Pn Z2 (in particular, they occur with square period Pn ). S2 The extension of a row in an n-supergrid to a Z2 -row is disjoint from all msupergrids for m < n, and similarly for columns. In particular, for m = n the rows of m- and n-supergrids have different heights, and similarly for columns. S3 Each Z2 -row is marked entirely by ◦ or entirely by •. If a Z2 -row intersects a supergrid, then it is marked • if, and only if, it meets the bottom row of some supergrid. S4 For every n-supergrid G in x, there are connector paths starting at the leftmost Nn sites in the bottom row of G, ending at the Nn input cells, and inducing an order-preserving bijection between the initial and terminal sites. Data layer 2 Extend X0 to X0 × ΣZ (recall that Σ is the alphabet of the target subshift Y ), and 2 reduce by requiring for every (x, y) ∈ X0 × ΣZ and every (i, j) ∈ Z2 that (i) if x(i, j) is marked with a ◦, then y(i, j+1) = y(i, j) ; (ii) If x(i, j) is marked with a •, then y(i+1, j+1) = y(i, j) . Call the resulting system X1 . Thus (x, y) ∈ X1 if, and only if, x ∈ X0 , y ∈ ΣZ , and the configuration y is constant along ‘stepped lines’ that continue vertically up from rows where x is marked with a ◦, but go diagonally up-and-right from rows marked with a •. This is shown schematically in Figure 9.10. 2 Given x ∈ X0 and any w ∈ ΣZ we can define y ∈ ΣZ by setting y(i,0) = wi and extending the definition of y upward row by row using (i) and (ii) above, and similarly downward, ensuring that (x, y) ∈ X1 . Thus the projection of X1 to the second layer is a sofic shift whose 1-slice is ΣZ . 2 We say that a word w ∈ Σ occurs in y ∈ ΣZ (as a horizontal block) at (i, j) ∈ Z2 if yi, j yi+1, j . . . yi+−1, j = w1 . . . w . If (x, y) ∈ X1 and w ∈ Σ∗ appears in y as a horizontal block at (i, j), then for any k the word w appears also in y at (i + n(x, j, k), j + k), where n(x, j, k) is the number of rows in x marked • with height in the range [[ j, j + k − 1]]. Such occurrences of w are called translates of the original one. 2
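The bookkeeping behind ‘translates’ is just counting •-rows. A small Python sketch (the row-marking input format is an assumption made for illustration) computes where a horizontal block reappears k rows higher, following the stepped lines of Figure 9.10:

```python
# Illustrative sketch: a block at (i, j) in the data layer reappears at
# (i + n(x, j, k), j + k), where n(x, j, k) counts the rows marked '•' in [[j, j+k-1]].
def translate_offset(marks, j, k):
    """marks: dict height -> '•' or '◦' (the marking of the rows of the grid layer)."""
    return sum(1 for h in range(j, j + k) if marks.get(h, '◦') == '•')

# Example: rows 0 and 3 are marked '•', all other rows '◦'.
marks = {0: '•', 3: '•'}
i, j = 10, 0
for k in range(1, 6):
    print(k, (i + translate_offset(marks, j, k), j + k))
```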


Figure 9.10 Schematic drawing of the ‘stepped line’ along which symbols in the data layer are constant.

Lemma 9.5.8 Let (x, y) ∈ X1 . If w ∈ Σ∗ occurs as a horizontal block in y, then it occurs at the lower-left corner of supergrids of arbitrarily large size in x. Proof Fix any occurrence of w in y, fix n ∈ N, and fix an (n + 1)-supergrid in x with bounding square I × J. Write J = [[ j− , j+ ]]. By (S2), for every i ≥ 2 the bottom rows of (n + i)-supergrids do not lie at heights in [[ j− , j+ ]], so by (S3), a • appears in x at height j ∈ [[ j− , j+ ]] if, and only if, for some  ≤ n + 1 it is the height of the bottom row of an -supergrid. Let { j1 , . . . , jm } ⊆ [[ j− , j+ ]] denote the set of heights of bottom rows of n-supergrids, so by (S1) we have jk = j1 + kPn , hence by (9.1), < = 1 1 ( j+ − j− ) ≥ Nn+1 − 1 > Pn . m = |[[ j− , j+ ]] ∩ ( j1 + PnZ)| ≥ Pn Pn For each k there is a translate of the original occurrence of w at height jk , so w appears in y at (ik , jk ) for some ik , and we have ik+1 = ik + nk

(9.2)

where nk is the number of rows marked in x by • with heights in the range [[ jk , jk+1 − 1]]. Now, for each  < n, the -supergrids in x appear with vertical period P , and |[[ jk , jk+1 − 1]]| = Pn is a multiple of P (both being powers of 2 and n > ), so the number of rows with height in [[ jk , jk+1 − 1]] that intersect the bottom row of an supergrid is Pn /P . In addition the row at height jk itself is marked •, being the bottom of an n-supergrid, and has not been counted yet (because supergrids of different sizes have rows at different heights). It follows that n1 = n2 = . . . = nm = 1 + ∑ 0 of the plane. It follows that inside a given n-grid only a (1 − c)nfraction of sites lie outside of its subgrids, and, roughly speaking, for large n this means that there simply won’t be room in which to put the connector paths without them overlapping connector paths of many smaller grids. To avoid this issue we construct a system in which grids occur far more sparsely. The situation should be compared with the construction of Cantor sets in real analysis. The classical Cantor set C ⊆ [0, 1] is constructed by removing a fixed fraction (1/3) of the length at each step, so in the end one is left with a set of length 0. On the other hand, if the fraction removed at each stage decays sufficiently fast the resulting set may have positive length – so-called ‘fat’ Cantor sets. The classical Cantor set is analogous to Robinson’s system, in which all but a 0-fraction of sites are contained in the interior of grids; the system we shall construct is analogous to ‘fat’ Cantor sets. The starting point for our construction will be Robinson’s system. We shall select a sparse subsequence mk and ‘delete’ all but the mk -grids. Then we delete all but a


spatially sparse subset of the mk -grids. From what is left we construct new grids in a manner similar to that in the construction of Robinson’s system, namely, as the rows and columns that run the entire length of one of the remaining mk -grids without entering one of the smaller remaining grids. The (rather messy) details are given in the following subsections; the reader who is interested in the bird’s eye view may wish to skip to Section 9.5.6. Starting out We begin from Robinson’s system, as described in Section 9.3.9. Level counters The goal of this stage is to encode the number n in the input cells of every n-grid. Numbers are represented as sequences over the alphabet {0, 1, blank }, with the longest prefix of 0, 1s giving the binary expansion of the number, least significant digit first. Digits that appear after the first blank are ignored. Extend the input cells of grids with these symbols. Each grid now encodes a number that we call the level counter. There are 2n + 1 input cells in an n-grid, so the level counter of an n-grid is capable of representing both n and n + 1. At this point in the construction, any combination of representable numbers can occur as level counters in a configuration. Require level counters of 1-grids to be (1, blank, blamk), representing the number 1 (this is a local constraint because 1grids are 3 × 3 squares). Let T be a Turing machine with data alphabet {0, 1, blank } that is programmed to increment its input by 1. This can be implemented so that a number 0 ≤ k < 2n is incremented in n computation steps, after which the machine enters a loop and does nothing more. Run T on the grids, using level counters as input. If an n-grid has level counter k ≤ n then, since it has 2n + 1 rows, the machine will complete the incrementation before reaching the top of the grid. Finally, synchronise grids so that the level counter of each (n + 1)-grid is equal to the output of the machine running on its upper-right principal subgrid, followed by blanks (this can be achieved similarly to the synchronisation in Section 9.4.5). An induction using the discussion in the previous paragraph shows that the level counter of n-grids represents n, and furthermore that all symbols after the first blank are blanks as well. Choosing a sparse sequence of levels (coloured bronze) Let mk = 24k . In this stage our goal is to colour the mk -grids bronze, and all other grids – white. Extend the previous system by colouring the grid sites white and bronze, and require neighbouring sites to be the same colour, so grids are monochromatic. At this


point in the construction, every configuration from the previous step can be extended using any combination of colourings of its grids. Choose a Turing machine with two distinguished states, white and bronze, which on input n ∈ N, determines whether n = mk for some k. If yes, it enters and remains in the bronze state, otherwise it enters and remains in the white state. This computation can easily be implemented so that the number of steps in the computation is of order log n. Run this machine on grids, using the level counter as input. Then when a machine runs on an n-grid it runs with input n, so the computation requires O(log n), and the grid has 2n + 1 rows, so for all large enough n the machine will enter the white or bronze state before reaching the top of the grid. Finally, forbid configurations in which a machine in a white state is running on a bronze grid, or vice versa. Then for all large enough n, the n-grids are coloured bronze if, and only if, n = mk for some k, and otherwise coloured white. For the finitely many n such that the machine did not have time to complete its program before reaching the end of the n-grid, we can force the colouring scheme directly, since it involves patterns of bounded size. A sparse array of grids at level mk (coloured silver) Let Pk = 4mk 24k+1 . Our goal now is to select, for each k, a subset of the mk -grids whose centres form a translate of Pk Z2 , and colour these grids silver. We also want that for every admissible configuration x from the previous stage of the construction, and every sequence G1 , G2 , G3 , . . . with Gk an mk -grid in x, there is an admissible extension in the present stage with the Gk s coloured silver. Currently, in every configuration the mk -grids are coloured bronze (others are coloured white), and have the number mk = 24k written in binary in the input cells, followed by blanks. Thus the leftmost 4k input cells of an mk -grid are identified as those not containing blanks. We call these counter cells. We first create horizontal periodicity. Extend the counter cells of bronze grids with a new layer of binary digits, and extend other bronze input cells to a ‘blank’ symbol. In this new layer, the input cells of each bronze grid represent a number between 0 and 24k −1, which we call the horizontal counter. Proceeding as in Section 9.4.5, synchronise the bronze grids so that bronze grids that are vertical translates of each other have the same horizontal counters. Next, on bronze grids we run a Turing machine that increments the horizontal counter by 1 modulo 24k (the machine proceeds the same as if it were just incrementing the input by 1, but when it reaches a blank cell it enters a loop, discarding the ‘carry’ bit without writing anything further, thus reducing modulo 24k ). The program can be implemented so that the ith bit is updated at step i + 1, and since an mk -grid has 2mk +1 rows the machine certainly completes the increment before reaching the top row of the grid. Finally, proceeding


again as in Section 9.4.5, synchronise bronze grids so that the input of the machine running on a grid G agrees with the output of the machine running on the grid G that is the first translate of G to its left. Summarising, we have forced horizontal counters to agree on grids that are vertical translates of each other, and increase by one (modulo mk ) upon passing from an mk -grid to the next mk -grid to its right. We do the same thing again in the vertical direction, adding a vertical counter with 4k bits to mk -grids, forcing the counter to be constant under horizontal translation and to increase by one (modulo mk ) under vertical translation. Run a Turing machine on bronze grids that enters a silver state if both the horizontal and vertical counters are 0, and otherwise enters a grey state. Colour bronze grids silver or grey monochromatically, and forbid a machine in a grey state from existing in a silver grid and vice versa. By (R1), the centres of the mk -grids form an array of square period 2 · 4mk , and the silver grids form a subarray with relative period 24k . Thus the centres of the silver mk -grids form a translate of 4mk · 24k+1 · Z2 . It is clear from the construction that given a configuration from the previous stage, and any choice G1 , G2 , . . . with Gk an mk -grid, an extension exists in which the horizontal and vertical counters of the Gk are 0, i.e., the Gk are coloured silver. How much room is there? The sparseness of the silver grids will come into play through the following estimates. Lemma 9.5.9 Let x be a configuration from the SFT constructed above, R ⊆ Z2 a rectangular region of height and width greater than 4mk , and for  ≥ k assume that R does not intersect any silver m -grids in x. Then more than three-quarters of the rows and three-quarters of the columns in R do not intersect a silver grid. Proof We count columns, rows behave similarly. Let w denote the width of R, so w > 4mk . For each  < k, the silver m -grids occur periodically with period P = 4m 24+1 , so the fraction of columns in R containing the centre of a silver m -grid is at most w/P . Therefore, since m -grids have width 4m − 1, the number of columns in R intersecting a silver m -grid is at most (4m − 1)(1 +

w/P_ℓ) ≤ 4^{m_ℓ}(1/w + 1/P_ℓ) · w ≤ 4^{m_ℓ}(1/4^{m_k} + 1/(4^{m_ℓ} 2^{4ℓ+1})) · w ≤ 2^{−4ℓ} · w,

where in the last inequality we used 4^{m_ℓ}/4^{m_k} = 4^{2^{4ℓ} − 2^{4k}} ≤ 4^{−2^{4k−1}} < 2^{−4k}. Since R does not intersect silver m_ℓ-grids for ℓ ≥ k, the number of columns of R that meet a silver grid is at most the sum of the expression above over ℓ = 1, . . . , k − 1, which is < (1/4)w.

Corollary 9.5.10 Let x be a configuration from the SFT constructed above, R ⊆ Z2


a rectangular region of height and width at least 4 · (4mk + 1), and for  > k assume that R does not intersect any silver m -grids in x. Then more than half of the rows and half of the columns in R do not intersect a silver grid. Proof By the previous lemma at most a quarter of the rows and columns intersect silver m -grids for  < k. At most another quarter of rows and columns can intersect silver mk -grids, because the width and height of R is at least four times the width of an mk -grid and these appear with period at least four times their width. The conclusion follows. Lemma 9.5.11 Let x be a configuration from the stage prior to adding silver grids. Then x can be extended to a configuration y in such a way that for every silver mk grid G in y, every  < k, and every silver m -grid G in y in the interior of G, the bottom row of G is at least 8(4m + 1) rows above the bottom row of G. Proof Fix x, fix k, and choose an mk -grid Gk in x. The mk−1 -grids in the interior of Gk are arranged in an array of period Pk = 4mk−1 · 24(k−1) + 1, so, since the height of Gk is 4mk + 1 - 4mk−1 · 24(k−1)+1, this array has at least eight rows, and we can choose an mk−1 -grid Gk−1 in Gk belonging to the eighth row from the bottom. Similarly, the Gk−2 -grids contained in Gk−1 form an array with at least eight rows, from which array we can choose an mk−2 -grid Gk−2 in the eighth row from the bottom. Continue in this way until Gk , Gk−1 , . . . , G1 have been chosen, and consider an extension yk of x in which these are silver grids. Fix any 1 < i ≤ j ≤ k, and note that by (R1), the distance between the centres of m j -grids and mi -grids is a multiple of 4mi ; since the distances of the centres of the grids from their bottom rows are 2mi and 2m j , respectively, the distance between the bottom rows of Gi and G j is a multiple of 2mi , which is greater than Pi−1 and an integer multiple of it. Since the silver mi−1 -grids are arranged periodically with period Pi−1 , it follows that the distance between the bottom row of Gi and the first silver mi−1 -grid above it is the same as the distance between the bottom row of G j and the first silver mi−1 -grid above it, so by construction both are at least 8(4mi−1 + 1). Furthermore, since Pj−1 is a common period of the silver mi−1 and m j grids, the same holds for every silver m j -grid in y. For each k, we have ensured that yk has the desired properties for silver m -grids,  ≤ k. Let y be an accumulation point of yk as k → ∞. Then y extends x, and the interior pattern on the bounding square of each silver m -grid in y agrees with yk for some k ≥ , so the desired property holds for it. Thus y is the claimed extension. Supergrids (coloured gold) and proofs of (S1) and (S2) The supergrids, coloured gold, arise as the sites lying in rows and columns that run completely across the bounding square of some silver mk -grid, excluding the boundary, without meeting a smaller silver grid. To identify these rows and columns, add directed paths that carry black and gold labels, except at their initial and terminal sites. Paths begin at the left boundary sites of silver grids, excluding corners; move to the right; and may not cross boundaries of silver grids. Black paths are required to


terminate at the left boundary of a silver grid, and gold paths are required to terminate at the right boundary of a silver grid. It is clear that in any configuration, for each silver grid, a row in the bounding square has a gold path running its length if, and only if, the row does not intersect any silver smaller grids and is neither the top nor bottom row of the grid. In the same manner introduce vertical paths identifying columns that run across the bounding square of silver grids without entering its silver subgrids. The sites marked by gold in one of the two new layers form the gold supergrids. A supergrid built from a silver mk -grid is called a k-supergrid. Note that the exact form of a k-supergrid depends on the positions of the smaller silver mk -grids inside it, and these vary from configuration to configuration. However, the number Nk of row and columns in it depends only on k (and not on the positions of subgrids inside it), due to the periodicity properties of grids; we omit this calculation, but note that by Corollary 9.5.10, at least half of the columns in the bounding square of a supergrid belong to the supergrid. Because the silver mk -grids are periodic with square period Pk , the same holds for the k-supergrids, giving (S1). Next, observe that a row R in a k-supergrid is by definition a row crossing the interior of a bounding square of a silver mk -grid such that R does not meet any silver m -grids for  < k. Because of (R1) and (R2), this implies (and is equivalent to) the statement that the Z2 -row R containing R is disjoint from all silver m -grids for  = k. This implies (S2) for rows, and a similar argument holds for columns.

Marking the rows with ◦, • and proof of (S3) Extend by the symbols ◦, •, requiring horizontally adjacent symbols to be equal. We furthermore require the following. – A site in the bottom row of a gold supergrid is labelled •. – A site in any other row of a gold supergrid is labelled ◦. – A site belonging to the bottom or top row of a silver grid is labelled ◦. It is easy to see that, given a configuration and a silver k-grid G in it, each row in the bounding square of G either meets the top or bottom row of G or one of its subgrids, in which case the last option holds, or else crosses G or one of its subgrids without meeting a smaller silver grid, in which case one of the first two options hold. Since silver grids of different sizes have rows at different heights in the plane, and the silver grids appear in periodic arrays, it follows that in fact precisely one of the three options holds. Thus, if a Z2 row intersects some silver grid, then it is labelled • if, and only if, it intersects the bottom row of a gold grid (it can happen that a row does not intersect any silver grids, in which case its label is undetermined). This proves (S3).


Connector paths: bottom to top It remains to build connector paths in the gold grids satisfying (S4). We do this in two stages, the first (and more important one) being to connect the leftmost Nn sites in the bottom row of a gold n-supergrid to cells in its top row. To this end we add a layer of connector paths, requiring that one path be emitted upwards from every site in the bottom row of a gold supergrid, forbidding paths from crossing boundaries of gold supergrids, and requiring that the paths terminate at cells in the top row of a gold supergrid, with at most two paths per cell. We allow between 0 and 4 non-intersecting paths at each tile. Note that the paths emitted from a given gold supergrid remain entirely inside it, so there is no interaction between paths in different supergrids. The crucial and non-trivial point is that we end up with a non-empty SFT, i.e., that at least one configuration from the SFT constructed in the previous stages allows paths to be added admissibly to every supergrid. To verify this, we shall show that given a configuration from the previous step, connector paths can be added with the properties above to each grid. In fact, this will follow from consideration of the following more general situation. Suppose we are given a rectangular region R ⊆ R2 of dimensions 2n × 2n+1 for some n ∈ N, tiled by 1 × 1 tiles, and a set E ⊆ R which is a union of these tiles, and which we interpret as the set of ‘blocked’ tiles. Given finite sets A, B ⊆ R \ E we say that there is an admissible connection from A to B if we can tile R\ E by path tiles depicting paths whose initial points are in A, and whose terminal points are in B. We say the tiling has multiplicity m if no tile has more than m paths on it. Enumerate the rows of R bottom to top, with the bottom row numbered 1. A maximal column or row of tiles in R is called free if it contains no blocked tiles (no elements of E). If R ⊆ R is a subrectangle then a row or column of R is relatively free if it does not intersect E (thus a free row or column in R that intersects R intersects it in a relatively free row or column, but relatively free rows and columns in R need not extend to free rows or columns in R). We say that E is sparse near the bottom of R if either n = 1 and E = 0, / or n > 1  and the following holds: for every k ∈ N and every rectangle R ⊆ R of dimensions 2k × 2k+1 whose bottom row lies in the bottom row of R, there are at least 2k−1 relatively free rows in the top half of R , and at least 2k−2 free columns in each of the left and right halves of R. Observe that if E is sparse near the bottom of R and R ⊆ R is a rectangle whose bottom row is contained in the bottom row of R, then E is sparse near the bottom of R . Proposition 9.5.12 Let R ⊆ R2 be a 2n × 2n+1 rectangle as above. Let E ⊆ R be sparse near the bottom of R. Let A ⊆ R contain a representative point from each tile in the bottom row of R, so |A| = 2n , and let B ⊆ R be a set of multiplicity at most 2 contained in the intersection of the top for of R and the union of its free columns. Then there is an admissible connection from A to B with multiplicity at most 2.


Proof The proof is by induction on n (i.e., the size of R). For n = 1 by definition E = 0/ and R has dimensions 1 × 2, and the statement is trivial. Suppose the claim has been proved for n − 1. Let F denote the set of free columns in R. Since E is sparse near the bottom of R, the left and right halves of R each intersect F in at least 2n−2 columns. Partition R into four subrectangles of dimensions 2n−1 × 2n . Let RL , RR be the lower left and lower right of these. Choose a set of points BL of size 2n−1 belonging to columns in F and to the top row of RL , and having multiplicity at most 2, which can be done because |RL ∩ F| ≥ 2n−2 . Note that every column in F is relatively free in RL . Let AL = A ∩ RL so that |AL | = |BL | = 2n−1 . Apply the induction hypothesis to RL to obtain a tiling connecting AL to BL . Do the same in RR to get a tiling connecting AR = A ∩ RR to a multiplicity-2 set if points BR contained in the top row of RR and in columns of F. Consider now the upper half of R, which we denote R+ . Because E is sparse near the bottom of R, there are at least 2n−1 free rows in R+ . Let x1 , . . . , x2n be an enumeration of BL ∪ BR from left to right, and y1 , . . . , y2n an enumeration of B from left to right. For each i construct a path from xi to yi as follows: move up from xi until reaching the [i/2]th free row, then move horizontally until directly under yi , then move up to yi . The multiplicity of these paths is at most 2: this is true in the bottom half by the induction hypothesis, and in the upper half note that the multiplicity begins at two, but paths separate onto their own row and recombine only when they reach their terminal columns; and the terminal points (the yi s) have multiplicty at most two by assumption. We now return to our SFT. Using Lemma 9.5.11, fix a configuration x in which for each gold mk -supergrid G and  < k, every gold m -supergrid in the interior of G is at least 8 · (4m + 1) rows above the bottom row of G. Fix such a G, let G denote the bounding square of G, and let E ⊆ G denote the union of the gold m -supergrids for  < k. Consider a rectangle R ⊆ G of dimensions 2n × 2n+1 whose bottom row is contained in the bottom row of G and choose  < k such that 4m +1 ≤ 2n < 4m+1 +1. If 2n ≤ 8 · (4m + 1) then R does not intersect any gold m+1 -supergrids, and by Corollary 9.5.10 at least half of its columns are free, and furthermore, applying the same lemma to the upper half R+ of R, we find that at least half of the rows in R+ are free. On the other hand if 2n > 8 · (4m + 1) then Corollary 9.5.10 applies to R and R+ to give the same conclusion. It follows that E is sparse near the bottom of G. To conclude the proof, note that by (R3), G is built from a silver grid of dimensions  (4mk − 1) × (4mk − 1), so G is a square of dimensions (4mk − 3) × (4mk − 3). Let G denote the 4mk × 4mk square obtained by adjoining to G three columns on the right  and three rows at the top. It is clear that E is also sparse near the bottom of G . Split  1 mk G vertically into two halves of dimensions ( 2 4 ) × 4mk . By Proposition 9.5.12 we can connect each site in the bottom row of each of these rectangular regions to the free columns in their top row using an admissible system of paths of multiplicity at most 2. We can transform this system of paths to the desired one in G \ E, simply by  deleting the path emitted in the three rightmost columns of G , shifting all remaining


paths from these columns to the rightmost column of G (thus possibly increasing the multiplicity in that column to 8), and similarly moving paths in the top three row down to the top row of G (thus raising the multiplicity potentially to 32). The resulting paths can clearly be realised using the connector-path tiles. Connector paths: top to bottom and proof of (S4) It remains to continue the paths from the top row of each supergrid back down to the bottom row. Since there are more paths than cells in the bottom row, we must first select an appropriate subset of them to continue. To this end, colour the grid sites in the bottom row of each supergrid purple or pink, with the left corner purple, the right corner pink, and the rule that purple sites have purple neighbours on their left and pink sites have pink neighbours on their right. This forces the bottom row of each supergrid to be partitioned into two blocks, the left one purple, and the right one pink. Colour the connector path segments constructed in the previous section purple or pink monochromatically, requiring that the colour of a path agree with the colour at its initial site. Now ‘half’ the paths are purple, and the other ‘half’ pink, though we have not restricted the proportion of pink and purple paths in any grid. Now add a second set of non-crossing connector paths. Their initial sites are the terminal sites of the purple paths constructed above. The new paths can move down and sideways (but not up), they are required to stay within supergrid sites, and to terminate at a cell in the bottom row of a supergrid. The multiplicity of the new paths is allowed to be 2, but we require that every cell in the bottom row is the terminal site of exactly one of these paths. It is clear that this forces the number of purple paths to be equal to the number of cells in the bottom row, and that the ‘concatenation’ of the old and new purple paths gives the desired connector paths, verifying (S4).

9.5.5 A variation
In some applications it is convenient to have the following version, which is slightly stronger. Recall the notation Y^(d) from Example 9.5.3.
Theorem 9.5.13 Let Y ⊆ Σ^Z be an effective subshift. Then the subshift Y^(2) ⊆ Σ^{Z²} is sofic.

Proof Let X be the SFT constructed from Y in the proof of Theorem 9.5.7. Pass to a one-step SFT X  isomorphic to X via a symbol-factor X  → X (Proposition 9.2.14), so we can imagine that each symbol in X  is marked with ◦ or • according to the label of the grid layer of the image symbol, and marked with a Σ-symbol according to the data layer of the image symbol. Define an SFT X  using the same alphabet as X  . We introduce the following restrictions on patterns in X  . – Horizontal adjacency rules are the same as in X  .

– For symbols a, b with a marked ◦, the vertical pair with a on the bottom and b on top is admissible in X′′ if, and only if, it is admissible in X′.
– For symbols a, c with a marked •, c is allowed to appear above and to the left of a if, and only if, the vertical pair with a on the bottom and c on top is admissible in X′.
The first condition ensures that every row in X′′ is monochromatically marked ◦ or •. It is clear, by the definition of the data layer in X′, that the Σ-symbols coming from the ‘data layer’ are now constant along columns of X′′. On the other hand, every x′′ ∈ X′′ gives rise to a configuration x′ ∈ X′ by shifting the ith row horizontally: if i > 0 then shift it right by a distance equal to the number of rows with heights in [[0, i) that are marked with •, and if i < 0, left by a distance equal to the number of rows with heights in [[i, 0) marked •. Conversely, every x′ ∈ X′ gives an x′′ ∈ X′′ by shifting rows in the other direction. It follows that the projection of X′′ to the data layer is Y^(2). The analogous statement holds for higher-dimensional effective subshifts Y.

9.5.6 Further developments Instead of the slice operation, which is syntactic, one can instead consider the dynamical operation of restricting the shift action to a sublattice. For simplicity we focus on the one-dimensional lattice Z ∼ = Z × {0}d−1 ≤ Zd . Thus, given a d-dimensional d Z subshift X ⊆ Σ , writing S = S(1,0,...,0) for the generator of the group {Su}u∈Z acting on X, we obtain a dynamical system (X , S) called the Z-subaction of X . This system factors onto the slice XZ (by x → x|Z×{0}d−1 ), but in general is not isomorphic to it or to any subshift (for example, for X = {0, 1}Z there are uncountably many fixed points of the subaction, while a subshift can only have finitely many). We note one advantage over slices: isomorphic subshifts need not have isomorphic slices, but they do have isomorphic Z-subactions. k Fix k ∈ N. Define effective subsets of (ΣZ )Z is the usual way (with respect to the natural countable set of cylinder sets), define the shift action (Sn )n∈Z in the usual k k way (i.e., (Sn x)i = xi+n ∈ ΣZ ), and define a subsystem of (ΣZ )Z to be a non-empty, closed shift-invariant subset. An effective dynamical system is an effective subsystem k of (ΣZ )Z . d Now, to each point x in a subshift X ⊆ ΣZ we can assign a sequence x = (xi )i∈Z ∈ d−1 (ΣZ )Z by (xi )(u1 ,...,ud−1 ) = x(i,u1 ,...,ud−1 ) , and set X  = {x | x ∈ X }. The map X → X  given by x → x is bijective, computable, and intertwines the Z-subaction on X and the shift on X  , so if X is effective, also X  is. In particular the Z-subaction of an SFT or sofic shift is identified with an effective dynamical system. 2

Theorem 9.5.14 (Hochman (2009)) The class of Z-subactions of three-dimensional sofic shifts is precisely the class of effective dynamical subsystems of ({0, 1}Z )Z . Furthermore, a given effective subsystem of ({0, 1}Z )Z is isomorphic to the subdynamics of a three-dimensional sofic shift.


An example of Jeandel (unpublished) shows that it really is necessary to go up to dimension 3 to realise some effective systems, i.e., dimension 2 is not enough. The theorem above holds also for higher-rank subactions; see Hochman (2009). One interesting implication of this, and the theory of recursive degrees, is that in the space of all Zd dynamical systems (with suitable parametrisation), there are subshifts which cannot be approximated by effective subshifts, i.e., much of the dynamical world is inaccessible to us (Hochman (2012)). Another implication of this theorem is to the theory of cellular automata, that is, d d continuous equivariant maps f : ΣZ → ΣZ (see also Chapter 8). The reason is that CA can essentially be realised as subdynamics of SFTs (for this one must replace the CA with a version of it that is bijective), and vice versa, a subaction of an SFT is essentially isomorphic to a CA (more precisely, to the action of a CA on its attractor after a fixed point has been removed from it). For details we refer again to Hochman (2009). We have so far discussed slices and subdynamics of sofic shifts. Much less is known about slices of SFTs. They are, of course, effective subshifts, but describing them seems to be a difficult task. Some results on this problem were recently obtains by Pavlov and Schraudner (2015a) and Schraudner (2015), who characterised those one-dimensional sofic shifts that can occur as slices of two-dimensional SFTs. Also see Hochman (2009).

9.6 Frequencies, word growth and periodic points In this chapter we discuss three dynamical and combinatorial characteristics of SFTs (and subshifts in general): the frequencies with which patterns appear, the rate of growth of the number of patterns on cubes, and the set of periods of periodic points. The quantities that arise in this way are in general uncomputable, but they can be characterised by their computational properties.

9.6.1 A hierarchy of real numbers
Let 0.t_1 t_2 . . . denote the binary expansion of t ∈ [0, 1), using the expansion terminating in 0s if there is more than one expansion, and write D_k(t) = 0.t_1 t_2 . . . t_k for the kth binary approximation. Considering what it might mean for t to be computable, several definitions suggest themselves (for t ∈ R the definitions may be applied to the fractional part of t).
(1) The sequence of digits (t_n)_{n=1}^∞ (equivalently, the sequence (D_n)_{n=1}^∞) is computable.
(1′) There exists a recursive sequence (t^{(n)})_{n=1}^∞ of rational numbers with |t − t^{(n)}| ≤ 2^{−n}. (An equivalent class of numbers arises if we use any other computable sequence a_n → 0 instead of 2^{−n}.)


(2) t is the limit of a computable sequence of numbers.
(2↓) t is the limit of a non-increasing computable sequence (equivalently, the infimum of a computable sequence).
(2↑) t is the limit of a non-decreasing computable sequence (equivalently, the supremum of a computable sequence).
Let us explain the relationship between these notions.
(1) ⟺ (1′): If t satisfies (1) then (1′) holds with t^{(n)} = D_n(t). Conversely, note that if s = 0.s_1 s_2 . . . ∈ [0, 1) and s_{k+1} s_{k+2} = 01, then D_k(s) = D_k(s + ε) for all −2^{−(k+2)} ≤ ε ≤ 2^{−(k+2)}. It follows that D_n(t) = D_n(t^{(k)}) as soon as t^{(k)} contains the word 01 to the right of the nth digit. Assuming as we may that t is irrational, this holds for infinitely many k, giving an algorithm to compute D_n(t) and implying (1).
(1) ⟺ (2↓)∧(2↑): Assuming (1), the sequence t^{(n)} = D_n(t) is computable and satisfies t^{(n)} ↗ t and t^{(n)} + 2^{−n} ↘ t, giving (2↑) and (2↓). Conversely, let (t_+^{(n)}) and (t_−^{(n)}) denote non-increasing and non-decreasing computable sequences, respectively, converging to t. Given n, we can compute t_+^{(i)}, t_−^{(i)} for i = 1, 2, . . . until the first i for which t_+^{(i)} − t_−^{(i)} < 2^{−n} (this must eventually occur because lim(t_+^{(i)} − t_−^{(i)}) = t − t = 0). Since t_−^{(i)} ≤ t ≤ t_+^{(i)} we have |t − t_+^{(i)}| ≤ 2^{−n}, so (1′), and hence (1), holds.
(2↓) ⟹ (2) and (2↑) ⟹ (2): Immediate.
(2) ⇏ (1): Let h_n = 0 if the nth Turing machine halts, h_n = 1 otherwise. Let h = 0.h_1 h_2 . . . in binary and h^{(n)} = 0.h_1^{(n)} . . . h_n^{(n)} + 2^{−n}, where h_i^{(n)} = 0 if the ith machine halts in n steps, otherwise h_i^{(n)} = 1. Clearly h_i^{(n)} → h_i from above, so h^{(n)} → h, but (D_n(h))_{n=1}^∞ is not computable.
(2↓) and (2↑) are not equivalent: The number h from the last argument satisfies (2↓), and if it satisfied also (2↑) it would satisfy (1), which it does not. Hence (2↓) ⇏ (2↑). For the other direction, consider 1 − h.
(2) ⇏ (2↓) and (2) ⇏ (2↑): If (2) implied (2↓) they would be equivalent. Then by symmetry also (2↑) would be equivalent to (2). But then (2) would imply (2↓)∧(2↑), which implies (1), which is false.
Properties (1) (and (1′)), (2), (2↑) and (2↓) form the base of a hierarchy of computability types of real numbers. Let C(N^k, Q) denote the set of computable functions f : N^k → Q, let Θ_i denote the operator sup if i is odd and inf if i is even, and let Θ′_i = Θ_{i+1} denote the complementary operator. The following is taken from Zheng and Weihrauch (2001). For n ∈ N, define Σ_n, Π_n, Δ_n ⊆ R by
Σ_n = { x ∈ R | x = sup_{i_1} inf_{i_2} · · · (Θ_n)_{i_n} f(i_1, . . . , i_n) for some f ∈ C(N^n, Q) },
Π_n = { x ∈ R | x = inf_{i_1} sup_{i_2} · · · (Θ′_n)_{i_n} f(i_1, . . . , i_n) for some f ∈ C(N^n, Q) },
Δ_n = Σ_n ∩ Π_n.


For any n ≥ 0 the inclusions Πn ⊆ Σn+1 , Σn ⊆ Πn+1 (and Δn ⊆ Δn+1 ) are proper, and we have the following characterisations of several of these classes (for details see Zheng and Weihrauch (2001)): (1) x ∈ Σ1 if, and only if, x = sup xn for a recursive sequence (xn ); (2) x ∈ Π1 if, and only if, x = inf xn for a recursive sequence (xn ); (3) x ∈ Δ1 if, and only if, x = lim xn for a recursive sequence (xn ), and |x − xn | is bounded by a recursive sequence tending to 0; (4) x ∈ Σ2 if, and only if, x = lim inf xn for a recursive sequence (xn ); (5) x ∈ Π2 if, and only if, x = lim sup xn for a recursive sequence (xn ); (6) x ∈ Δ2 if, and only if, x = lim xn for a recursive sequence (xn ).
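The argument given above for (1) ⟺ (2↓)∧(2↑) is effective, and the following small Python sketch makes it concrete. It is only an illustration, not part of the text: the functions upper, lower and approx are hypothetical stand-ins for arbitrary recursive monotone sequences converging to t (here t = 1/3).

from fractions import Fraction

def upper(i):   # non-increasing recursive sequence, tends to 1/3 from above
    return Fraction(1, 3) + Fraction(1, 2 ** i)

def lower(i):   # non-decreasing recursive sequence, tends to 1/3 from below
    return Fraction(1, 3) - Fraction(1, 2 ** i)

def approx(n):
    """Return a rational q with |t - q| <= 2**-n, i.e. a witness for (1')."""
    i = 1
    while upper(i) - lower(i) >= Fraction(1, 2 ** n):
        i += 1
    return upper(i)

q = approx(10)
assert abs(q - Fraction(1, 3)) <= Fraction(1, 2 ** 10)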

9.6.2 Frequencies of patterns in subshifts
Given patterns σ, a ∈ Σ^{∗d}, the number and the frequency with which σ occurs in a are defined, respectively, by
|a|_σ = Card{ u ∈ Z^d | σ appears in a at u },
f_σ(a) = |a|_σ / |supp(a)|.
For a subshift X ⊆ Σ^{Z^d} let
f_σ(X, n) = max{ f_σ(a) | a ∈ Σ^{[[0,n−1]]^d} appears in X }
and f_σ(X) = lim_{n→∞} f_σ(X, n).
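The following one-dimensional Python sketch is only an illustration of the quantities just defined (the "golden mean" toy subshift used here, with forbidden word 11, is a hypothetical example and not taken from the text).

from itertools import product

def occurrences(sigma, a):
    """Number of positions at which the pattern sigma appears in the word a."""
    return sum(a[u:u + len(sigma)] == sigma
               for u in range(len(a) - len(sigma) + 1))

def frequency(sigma, a):
    return occurrences(sigma, a) / len(a)        # |a|_sigma / |supp(a)|

def f_sigma_X_n(sigma, n, admissible):
    """Maximum of f_sigma over the length-n words accepted by `admissible`."""
    words = (''.join(w) for w in product('01', repeat=n))
    return max(frequency(sigma, w) for w in words if admissible(w))

no_11 = lambda w: '11' not in w                  # toy subshift: forbid 11
print(f_sigma_X_n('1', 6, no_11))                # 0.5, attained by 101010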

The proof that the limit exists uses a variant of Fekete's lemma on subadditive sequences.
Lemma 9.6.1 Let (s_n)_{n=1}^∞ be a real-valued sequence. Suppose that for all m, n ∈ N we have
s_{mn} ≤ s_n   (9.4)
s_{m+n} ≤ s_n + c_m/n   (9.5)

for a constant cm > 0. Then (sn ) converges to infn∈N sn as n → ∞. Proof Fix n ∈ N. Given an integer k ≥ n write k = mk n + rk with mk , rk ∈ N and 0 ≤ rk < n, and let cn = max0≤r t + 2−n. Define $  , $ Z $ x|J is 2-Toeplitz and f 1 (x|J ) ≤ tn X = x ∈ {0, 1} $ . (9.6) for all intervals J ⊆ Z of length 2n We claim that X is a subshift. Shift invariance is clear. To see that X is closed, first observe that by definition, every x ∈ X is locally 2-Toeplitz, so by Lemma 9.6.8, x is 2-Toeplitz. Now suppose that xn ∈ X and lim xn = x. Then x is 2-Toeplitz by Lemma 9.6.7, and hence locally 2-Toeplitz; and if J ⊆ Z is an interval of length 2n , then x|J = xk |J for some k, whence f1 (x|J ) = f1 (xk |J ) ≤ tn . It follows that x ∈ X . We claim that f1 (X ) = t. By definition if x ∈ X then f1 (x|[[0,2n −1]] ) ≤ tn , so f1 (X, 2n ) ≤ tn , and therefore f1 (X ) ≤ limtn = t. For the reverse inequality, choose a 2-Toeplitz family (In ) and let x ∈ {0, 1}Z be the 2-Toeplitz sequence which on In is equal to the nth binary digit an of t. For every interval J of length 2n clearly x|J is 2-Toeplitz and, by Lemma 9.6.9, f1 (x|J ) ≤

∑_{k=1}^{n−1} a_k 2^{−k} + 2^{−n} ≤ t + 2^{−n} ≤ t_n.

Hence x ∈ X. But the same lemma also tells us that lim f_1(x|_{[[0,2^n−1]]}) = ∑_{k=1}^∞ a_k 2^{−k} = t, hence f_1(X) ≥ t, and combined with the previous inequality, we have shown that f_1(X) = t.

9.6.4 Frequencies in effective subshifts and SFTs
Theorem 9.6.10 For every d ≥ 1, a number t ∈ [0, 1] is of class Π1 if, and only if, t = f_σ(X) for some d-dimensional effective subshift X, and some symbol σ.
Proof of ‘if’
Fix σ ∈ Σ^{∗d} and let X = X(E) for an RE set E ⊆ Σ^{∗d}. We aim to show that f_σ(X) ∈ Π1. Write E = ∪_{n=1}^∞ E_n with (E_n)_{n=1}^∞ a recursive sequence and assume that E_n ⊆ E_{n+1} (if not, replace E_n by ∪_{k≤n} E_k). For N ≥ n set
s_{n,N} = max{ f_σ(a|_{[[0,n]]^d}) | a ∈ Σ^{[[−N,N]]^d} is E_N-admissible }.
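As a toy, one-dimensional illustration of this quantity (not part of the proof, and with a hypothetical forbidden list E), the following Python sketch computes s_{n,N} by brute force; the later computability argument rests on exactly this kind of finite search.

from itertools import product

def E(N):
    """A computable stage of the forbidden list; here simply {'11'} for all N."""
    return {'11'}

def admissible(word, forbidden):
    return not any(f in word for f in forbidden)

def s(n, N, sigma='1', alphabet='01'):
    """max of f_sigma(a|[[0,n]]) over E_N-admissible words a on [[-N, N]]."""
    best = 0.0
    centre = N                                   # index of coordinate 0 in a
    for a in (''.join(w) for w in product(alphabet, repeat=2 * N + 1)):
        if admissible(a, E(N)):
            window = a[centre:centre + n + 1]    # restriction to [[0, n]]
            best = max(best, window.count(sigma) / len(window))
    return best

print(s(3, 4))   # an upper bound for f_sigma(X, 3), non-increasing in N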

Lemma 9.6.11 For all n, the sequence (s_{n,N})_{N=1}^∞ is non-increasing and
f_σ(X, n) = inf_{N∈N} s_{n,N} = lim_{N→∞} s_{n,N}.

Proof If N ≥ n and a ∈ Σ[[−(N+1),N+1]] is EN+1 -admissible then it is also EN admissible (since EN ⊆ EN+1 ), so a|[[−N,N]]d is EN -admissible. This shows that the maximum defining sn,N is taken over a set at least as large as the set whose maximum is sn,N+1 , so sn,N+1 ≤ sn,N . d


It remains to show that lim_{N→∞} s_{n,N} = f_σ(X, n). First note that if x ∈ X then x|_{[[−N,N]]^d} is E-admissible and therefore E_N-admissible, so, for all N ≥ n and all x ∈ X,
s_{n,N} ≥ f_σ((x|_{[[−N,N]]^d})|_{[[0,n]]^d}) = f_σ(x|_{[[0,n]]^d}),
hence s_{n,N} ≥ f_σ(X, n), and the same holds in the limit as N → ∞. For the other inequality, fix a pattern a_N realising the maximum in the definition of s_{n,N}, and extend a_N to a (not necessarily admissible) configuration x_N ∈ Σ^{Z^d}. Using compactness, there is an accumulation point x = lim x_{N_k}. Every subpattern b of x is a subpattern of a_{N_k} for all large enough k, so b is E_{N_k}-admissible for all large enough k, and since the E_{N_k} increase to E, b is E-admissible, hence x ∈ X. Also, x|_{[[0,n]]^d} = (x_{N_k})|_{[[0,n]]^d} = (a_{N_k})|_{[[0,n]]^d} for all large enough k, hence
f_σ(x|_{[[0,n]]^d}) = lim_{k→∞} f_σ(a_{N_k}|_{[[0,n]]^d}) = lim_{k→∞} s_{n,N_k} = lim_{N→∞} s_{n,N}.

(The last equality holds because, by monotonicity, the limits exist.) Thus f_σ(X, n) ≥ lim_{N→∞} s_{n,N}, and the proof is complete.
Observe that
f_σ(X) = inf_{n∈N} f_σ(X, n) = inf_{n∈N} inf_{N≥n} s_{n,N}.

En = {a ∈ {0, 1}2 | f1 (a) > tn or a is not 2-Toeplitz} #

and let E = ∞ n=1 En . This is an RE set, so X = X(E ) is effective. But this is just the subshift defined in (9.6), where it was shown that f1 (X ) = t. This completes the proof. The class of densities achieved by SFTs is no different. We remark that, historically, the result below pre-dates Theorem 9.6.10. Theorem 9.6.12 (Hochman and Meyerovitch (2010)) For every d ≥ 2, a number t ∈ [0, 1] is of class Π1 if, and only if, t = f (X) for some d-dimensional SFT X. Proof We already know that SFTs are effective, so fσ (X ) ∈ Π1 by Theorem 9.6.10. Now let t ∈ Π1 ∩ [0, 1]. By Theorem 9.6.10 there exists an effective subshift Y ⊆ 2 {0, 1}Z such that f1 (Y ) = t. Let Y (2) ⊆ {0, 1}Z be the subshift that is constant on

350

M. Hochman

columns and satisfies (Y (2) )Z = Y . Clearly f1 (Y (2) ) = f1 (Y ) = t. By Theorem 9.5.13 d there is a two-dimensional SFT X ⊆ ΣZ factoring onto Y (2) , and we can assume the factor map is defined by a symbol code π : X → Y (2) (Lemma 9.2.14 and Proposition 9.2.8). Writing S ⊆ Σ for the set of symbols mapped under π to 1, we have fS (X ) = f1 (Y (2) ) = t. By Lemma 9.6.3 there is a subshift X  isomorphic to X and a symbol σ with fσ (X  ) = fS (X ) = t. Finally, X  is an SFT by Proposition 9.2.8. This completes the proof for d = 2; for d ≥ 3, start with X  and extend it to be constant in the new directions.

9.6.5 Pattern growth in subshifts (topological entropy) One of the most important characteristics of a language is its word growth; in symbolic dynamics this exponential rate of growth is known as entropy, and is defined d as follows. Given a subshift X ⊆ ΣZ , let Nn (X ) denote the number of patterns in X whose shape is an n-cube, i.e., Nn (X ) = Card(L(X) ∩ Σ[[0,n−1]] ). d

The (topological) entropy of a subshift X ⊆ ΣZ is d

h(X) = lim

1

n→∞ nd

log Nn (X )

with the logarithm is to base 2; in other words, Nn (X ) = 2n (h(X)+o(1)). To justify the limit, and show that it is equal to the infimum of the sequence, let us check that n1d log Nn (X ) satisfies the hypotheses of Lemma 9.6.1. First, a cube of side d

mn is a disjoint union of md cubes Q1 , . . . , Qmd of side n, and a ∈ L(X ) ∩ Σ[[0,mn−1]] implies a|Qi ∈ L(X) ∩ ΣQi , so

d

md

Nmn (X ) ≤ ∏ Nn (X ) = Nn (X )m

d

i=1

which, after taking logarithms and dividing by (mn)d , is (9.4). Second, [[0, n + m − 1]]d = Q ∪ E for Q = [[0, n − 1]]d and |E| = Om (nd−1 ), we have a similar bound Nm+n (X ) ≤ Nn (X ) · |L(X) ∩ ΣE | ≤ Nn (X ) · |Σ|Om (n

d−1 )

.

After taking logarithms and normalising, this is (9.5). The importance of entropy to symbolic dynamics stems in part from its good functorial properties. Clearly for subshifts X ,Y , if X ⊆ Y then Nn (Y ) ≤ Nn (X ), so h(Y ) ≤ h(X). Furthermore, the following holds. Proposition 9.6.13 Let X ⊆ ΣZ and Y ⊆ ΔZ be subshifts. If Y is a factor of X then h(Y ) ≤ h(X). In particular if X ,Y are isomorphic then h(X ) = h(Y ). d

d

Proof Let π : X → Y be a sliding block code defined by the local rule π0 : ΣD → Δ.


Let k denote the diameter of D. Then for every y ∈ Y there is an x ∈ X with y = π(x), and y|_{[[0,n]]^d} is determined by x|_{[[−k,n+k]]^d}, so
N_n(Y) = |Δ^{[[0,n−1]]^d} ∩ L(Y)| ≤ |Σ^{[[−k,n+k−1]]^d} ∩ L(X)| = N_{n+2k}(X),
and
h(Y) = lim_{n→∞} (1/n^d) log N_n(Y) ≤ lim_{n→∞} ((n+2k)^d/n^d) · (1/(n+2k)^d) log N_{n+2k}(X) = h(X).

One has Nn (ΣZ ) = |Σ|n so h(ΣZ ) = log |Σ|. d

d

d

Example 9.6.15 Let X ⊆ ΣZ be a subshift, then h(X (d) ) = 0 for d ≥ 2. Indeed, d every a ∈ L(X (d) ) ∩ Σ[[0,n−1]] is determined by a|[[0,n−1]]×{0}d−1 ∈ Σn , so Nn (X (d) ) ≤ |Σ|n = |Σ|o(n ) . This gives h(X (d) ) = 0. d

Example 9.6.16 (Periodic orbits) If x ∈ ΣZ is periodic with period p then X = {x, Sx, . . . , S p−1x} is closed and shift invariant, and hence a subshift. Every word appearing in X is determined by its length and position modulo p in x, so Nn (X ) = p for all n ≥ p, hence h(X) = 0. Example 9.6.17 (Realising rational entropies) Fix m, s ∈ N. Let Xm,s ⊆ {0, 1, . . ., s}Z denote the set of all concatenations of the words 0m−1t for t ∈ {1, . . . , s}. This is an SFT. For every k ∈ N, a word a ∈ Lkm (Xm,s ) is determined by (a) the position of its non-zero symbols (these form an arithmetic progression with gap m, so there are m choices for the alignment), and (b) the values of the non-zero symbols (there are k symbols, hence sk choices). Thus Nkm (Xm,s ) = m · sk , and 1 log s log(m · sk ) = . k→∞ km m

h(Xm,s ) = lim

For s = 2 the entropy is /m, thus giving all rational numbers. Example 9.6.18 (Entropy of one-dimensional SFTs) Let G = (V, E) be a directed graph and X ⊆ V Z the associated SFT (Example 9.2.11). Assume for simplicity that G is strongly connected (every pair of vertices in G is connected by a path), so that every finite path in G extends to a bi-infinite path. Then Nn (X ) is the number of paths of length n in G. Letting A denote the |V | × |V | adjacency matrix of G (i.e., Au,v = 1 if (u, v) ∈ E, and Au,v = 0 otherwise), a standard induction shows that Nn (X ) =

∑ (An )u,v = #An#1 .

u,v∈V

Thus h(X) = lim

n→∞

1 log #An #1 . n

This is the spectral radius of A, and it is a logarithm of an algebraic number.

352

M. Hochman

Every h ≥ 0 occurs as the entropy of some subshift (see Remark 9.6.24 below). On the other hand, the class of SFTs is countable up to isomorphism, so the class numbers that arise as entropies of SFTs is countable. One may ask which numbers arise in this way. The last few examples showed that certain logarithms of algebraic numbers occur as entropies of one-dimensional SFTs. These were special cases of the following neat characterisation. Theorem 9.6.19 (Lind (1984)) A real number α ≥ 0 is the entropy of a onedimensional SFT if, and only if, α = log λ for a Peron number λ , i.e., λ is an algebraic integer strictly larger than the modulus of its conjugates.

9.6.6 Some entropy calculations We give some elementary arguments to control the entropy of extensions. Lemma 9.6.20 Let X ⊆ ΣZ and Y ⊆ X × ΔZ be subshifts and π : Y → X the d projection to the first coordinate. Suppose that for every a ∈ L(X ) ∩ Σ[[0,n−1]] there d d d are at most 2n (t+o(1)) patterns b ∈ Δ[[0,n−1]] such that (a, b) ∈ L(Y )∩(Σ× Δ)[[0,n−1]] . Then h(Y ) ≤ h(X) + t. d

d

Proof This follows by taking logarithms in the inequality below, dividing by nd , and letting n → ∞: Nn (Y ) =



Card{b ∈ Δ[[0,n−1]] | (a, b) ∈ Ln (Y )} d

a∈L(X)∩Σ[[0,n−1]]

≤ Nn (X ) · 2n

d

d (t+o(1))

.

A variant of this argument gives the following. Lemma 9.6.21 Let X ⊆ ΣZ be a subshift with h(X ) = 0. Let σ ∈ Σ and r ∈ N, and consider the extension Y of X given by d

Y = {(x, z) ∈ X × {1, . . ., r}Z | ∀u ∈ Zd xu = σ =⇒ zu = 1}. d

Then h(Y ) = fσ (X) · logr. Proof Write Δ = {1, . . . , r}. A word (a, b) ∈ (Σ × Δ)[[0,n−1]] belongs to L(Y ) if, and only if, a ∈ L(X) and bu = 0 whenever au = σ . Thus, for given a ∈ L(X ), the number of b such that (a, b) ∈ L(Y ) is r|a|σ = 2|a|σ log r . Since |a|σ ≤ nd ( f1 (X )+o(1)), the hypothesis of the previous lemma is satisfied with t = f1 (X ), giving h(Y ) ≤ d f1 (X) · log r. For the other direction, there is a sequence an ∈ L(X ) ∩ Σ[[0,n−1]] such that fσ (an ) → f1 (X ), or equivalently, |an |σ = nd ( f1 (X ) + o(1)). For each an there d d are thus 2n ( f1 (X)+o(1)) different b ∈ Δ[[0,n−1]] such that (a, b) ∈ L(Y ), which gives d

Nn (Y ) ≥ rn

d( f

1 (X)+o(1))

= 2n

d( f

1 (X) log r+o(1))

.

Multidimensional shifts of finite type and sofic shifts

353

This implies that h(Y ) ≥ f1 (X ) log r and, together with the previously established inequality, proves the claim. Finally, we will need the following entropy computation. Lemma 9.6.22 If X is a subshift consisting of 2-Toeplitz sequences then h(X ) = 0. n

Proof If a ∈ {0, 1}2 is 2-Toeplitz, then for k = 1, . . . , n − 1 there are sets Ik = ik + 2k and bits bk such that all symbols in a|Ik equal bk . By Lemma 9.6.5 a is completely determined by I1 , . . . , In−1 , b1 , . . . , bn−1 , and the symbol b ∈ {0, 1} appearing at the # k unique site in a not covered by n−1 k=1 Ik . Since we can assume 0 ≤ ik < 2 , there 2) n−1 k O(n n are ∏k=1 2 = 2 possible choices for I1 , . . . , Ik , and there are 2 choices for 2 n b1 , . . . , bn−1 and b. Thus, there exist at most 2O(n ) 2-Toeplitz sequences a ∈ {0, 1}2 , so 1 1 O(n2 ) O(n2 ) n (X ) ≤ lim log N log 2 = lim = 0. 2 n→∞ 2n n→∞ 2n n→∞ 2n

h(X) = lim

9.6.7 Entropies of effective subshifts and SFTs Theorem 9.6.23 For every d ≥ 1, a number t ≥ 0 is of class Π1 if, and only if, t = h(X) for some d-dimensional effective subshift X. Proof of ‘if’ d Let X ⊆ ΣZ be effective, represented as X = X(E ) for an RE set E ⊆ Σ∗d , and write # (En )∞ E = ∞ n=1 En with n=1 a recursive sequence. Assume that En ⊆ En+1 (otherwise # replace En by k≤n En ). For N ≥ n set sn,N = Card{a|[[0,n−1]]d | a ∈ Σ[[−N,N]] is EN -admissible}. d

Clearly sn,N ≥ Nn (X ), since for every x ∈ X the restriction a = x|[[−N,N]] is E -admissible, hence EN -admissible, and so is certainly counted in the definition of ss,N . A similar argument, using EN+1 ⊇ EN , shows that sn,N+1 ≤ sn,N , and a compactness argument similar to that in the proof of the ‘if’ direction of Theorem 9.6.10, shows that Nn (X ) = lim sn,N = inf sn,N . N→∞

N≥n

From this we find that h(X) = inf

1

n∈N nd

log Nn (x) = inf inf

n∈N N≥n

1 log sn,N  . nd

Since log(·) is a computable function, and (n, N) → sn,N is computable, it follows that h(X) is the infimum of a recursive sequence and hence of class Π1 .

354

M. Hochman

Proof of ‘only if’ Let t ∈ Π1 ∩ [0, ∞) and set s = t/ t ∈ Π1 ∩ [0, 1). Let X ⊆ {0, 1}d be the subshift constructed for s at the end of Section 9.6.3. By the arguing in the proof of Theorem 9.6.10 (“only if” part), X is effective, f1 (X ) = s, and X contains only 2-Toeplitz sequences. Thus also X (d) is effective, and h(X (d) ) = 0 either by Lemma 9.6.22 when d = 1, or by Example 9.6.15 when d > 1. It is also clear that f1 (X (d) ) = f1 (X ) = s. Let Y be as in Lemma 9.6.21 with r = 2t ; it is also effective, and by that lemma, h(Y ) = s log 2t = t, as desired. Remark 9.6.24 For arbitrary t ≥ 0 the construction in the ‘only if’ part of the proof caries through to give a subshift with entropy t, although, of course, usually not effective. Theorem 9.6.25 (Hochman and Meyerovitch (2010)) For every d ≥ 2, a number t ≥ 0 is of class Π1 if, and only if, t = h(X) for some d-dimensional SFT X. One direction follows from the previous theorem, since SFTs are effective, and the proof of the other direction is quite similar to the proof of that theorem as well. Let d > 1 and t ∈ Π1 ∩ [0, ∞), and define s = t/ t; let X ⊆ {0, 1}Z be an effective subshift with f1 (X ) = s. By Theorem 9.5.7, X (d) is a sofic shift, so there exists an d SFT Y ⊆ ΔZ factoring onto X (d) , and we can assume the factor map is a symbol code. Let S ⊆ Δ denote the set of symbol factoring to 1, so fS (Y ) = fσ (X ) = s. By Lemma 9.6.3 we can pass to a system Z isomorphic to Y (hence also an SFT) with a symbol τ such that fτ (Z) = fS (Y ) = s. If h(Z) = 0, we would apply Lemma 9.6.21 to Z and r = 2t , and the result would be an SFT with entropy t, as desired. Now, h(Z) = h(Y ) because Y, Z are isomorphic, Y is an extension–reduction of X (d) , and h(X (d) ) = 0. Thus we would be done if we could verify that each stage in the construction of Y did not increase the entropy. This is, in fact, not the case, but the construction can be modified to have this property. Rather than give a lengthy formal proof we content ourselves with a brief account of the changes to each of the stages of the construction. We restrict ourselves to the case d = 2, as we did with the earlier construction. Robinson’s system Robinson’s system has zero entropy. One can prove this using the recursive definition of patches (Section 8.2.3). One shows that the number of n × n subpatterns of a patch is at most polynomial in n (this is similar to the proof of Lemma 9.6.22) and a short argument then shows that Nn (X ) is polynomial, hence the entropy is 0. One can directly check, using Lemma 9.6.20, that the later modifications (elimination of faults, orienting grid boundaries, identifying upper-right principal subgrids) did not increase entropy. Extension by paths Most of the extensions we performed using paths were deterministic in the sense that they were completely determined on each square region by the first layer and

Multidimensional shifts of finite type and sofic shifts

355

the path segments on the boundary of the square (e.g., in Example 9.3.6), or at least determined within the bounding square of every grid (e.g., in orienting the boundaries, Section 9.3.9). In such cases Lemma 9.6.20 can be used to show that there is no increase in entropy. However, in some parts of the construction we extend by paths that are not deterministic in this sense, specifically, when we synchronised between grids of different levels (Section 9.4.5), and in the construction of connector paths in Section 9.5.4. In both cases the initial and terminal points of the paths were determined by the first layer, but there are multiple ways of making the connection. In these cases we can modify the construction, making it deterministic by requiring that paths always ‘keep to the left’ as long as it is allowed, taking every allowable left turn and backtracking when necessary (possibly going both ways on the same tile). The entropy of running Turing machines on grids When Y is obtained by running a Turing machine T on grids in a subshift X , only symbols belonging to grids are extended. By Proposition 9.3.9 the data symbol and machine state represented in cells are determined by the data in the input cells, and if they are determined by the X -layer (e.g., the data layer of our construction), or by some other external means (e.g., the level counters used), then the states of cells are completely determined. The rest of the implementation used paths to enforce adjacency rules on cells, and is deterministic in the strict sense. From this, one can deduce that running Turing machines on grids does not increase entropy when the input symbols are determined by the first layer.

9.6.8 Periodic points In the theory of one-dimensional SFTs (and dynamical systems in general), periodic points play an important role: they give natural invariants and obstruction to factoring, since the image of a periodic point under an equivariant map is periodic, and the growth rate of the number of points of period n is often related to the entropy of the system. As we discussed in Section 9.3.1, in higher dimensions periodic points have played an important historic role due to their connection with decidability of emptiness of an SFT. The following is also classical. Theorem 9.6.26 (Gureviˇc and Korjakov, (1972a)) In dimension d ≥ 2 there is no algorithm which decides whether X(E ) contains periodic points. It turns out that more refined information about periodic points in SFTs can be obtained, but it involves notions from complexity theory, rather than the recursion theory we have encountered so far. A language L ⊆ {0, 1}∗ has non-deterministic polynomial complexity, or is of complexity class NP, if there is a Turing machine and a c > 0 such that: (1) for every a, b ∈ {0, 1}∗, on input (a, b) the machine halts with output 1 if a ∈ L and 0 otherwise;

356

M. Hochman

(2) for every a ∈ L of length n, there exists a b ∈ {0, 1}∗, such that on input (a, b) the machine halts with output 1 after at most c · nc computation steps. One may think of the word b in Condition (2) as a ‘proof’ or ‘verification’ that a ∈ L; thus L is of class NP if it is possible to efficiently verify that a ∈ L, given the appropriate input. This definition is equivalent to L being semi-accepted in polynomial time by a non-deterministic Turing machine. For more information consult (Arora and Barak, 2009). Example 9.6.27 We claim that the language L ⊆ {0, 1}∗ of binary representations of composite (non-prime) numbers is in NP.8 For this, observe that given n ∈ N and a pair (k, m) ∈ N, one can verify quickly (i.e., polynomially in the length in bits of the input (k, m)) whether km = n or not. Thus we have the following algorithm: on input (a, b) with a representing n and b representing (k, m) ∈ N × N, check if n = km and if yes return 1. Otherwise, exhaustively check all pairs (k , m ) ∈ {1, . . . , n}2 , and return 1 if some pair satisfies m k = n, otherwise return 0 (this requires at least n2 operations, which is exponential in the number of bits representing n). This algorithm satisfies Conditions (1) and (2) above. In the definition of the class NP, polynomiality is relative to the length of the input, which, for integer input k, is its length in binary, i.e., log2 k. For the statement that follows we will prefer to represent it in unary: a set A ⊆ N is of non-deterministic polynomial complexity in unary, or of class NPunary , if {1n | n ∈ A} is of class NP. d For a subshift X ⊆ ΣZ , the set of square periods Πsquare (X ) of X is defined by Πsquare (X ) = {k ∈ N | ∃x ∈ X stab(x) = kZd } where stab(x) = {u ∈ Zd | Su x = x} is the stabiliser of x. Theorem 9.6.28 (Jeandel and Vanier (2015)) The set Π ⊆ N is the set of square periods of an SFT if, and only if, Π is of class NPunary . It is relatively straightforward to show that Πsquare (X ) is of class NPunary when X is an SFT. The converse requires a more delicate construction which we do not present here. An important feature of this theorem, however, is that the equivalence is only valid if we allow SFTs of all dimensions: in fact the dimension of the SFT whose square period set is a given Π ⊆ N is related to the degree of the polynomial bounding the length of the non-deterministic accepting algorithm for Π. It remains an open problem to characterise the actual sets of periods, i.e., the set of lattices Π(X) = {stab{x} | x ∈ X}. It is also unknown how many periodic points of each period can occur. For some results using notions of periodicity, see (Jeandel and Vanier, 2015). We note that a related connection between SFTs and the standard complexity class NP appears in (Borchert, 2008). 8

In fact, L can be decided in polynomial time, see Agrawal et al. (2004).

Multidimensional shifts of finite type and sofic shifts

357

9.6.9 Further developments It is natural to ask about the entropies of restricted classes of SFTs. It is not difficult to see that minimal SFTs must have entropy 0. For strongly irreducible SFTs, it was shown in Hochman (2009) that the entropies are computable (i.e., of class Δ0 ). This condition is not sufficient: Pavlov and Schraudner (2015b) showed that if h ≥ 0 is the entropy of a strongly irreducible two-dimensional SFT then its binary expansion 2 can be computed to precision ε in time 2−O(1/ε ) , which is a non-trivial restriction. Thus only a subclass of computable numbers can be realised as entropies of strongly irreducible SFTs. A characterisation of this class of entropies is still lacking. There is also great interest in obtaining closed-form expressions for the entropies of SFTs, when possible; as we remarked in Example 9.2.1, this is very difficult even in very simple cases. One class where this is possible is when the SFT is of algebraic origin, in which case there is an explicit expression for the entropy. This theory is due to (Lind et al., 1990). Another interesting case where the asymptotic entropy is computed for so-called axial powers was recently studied by Meyerovitch and Pavlov (2014). Another direction of research concerns subexponential growth scales: one can ask α for the stretched-exponential rates, i.e., the rates 2n , and polynomial rates, i.e., nα , which best describe the growth of Nn (X ). More precisely, the upper and lower end tropy dimensions of a subshift X ⊆ ΣZ are given respectively by log(log Nn (X )) log n log(log Nn (X )) . D(X ) = lim inf n→∞ log n D(X ) = lim sup n→∞

If these quantities are equal their common value is called the entropy dimension and is denoted log(log Nn (X )) D(X) = lim . n→∞ log n Unlike entropy the limits above need not exist. Theorem 9.6.29 (Meyerovitch (2010)) For any d ≥ 2, (1) {D(X ) | X is a d-dimensional SFT} = [0, d] ∩ Π3; (2) {D(X) | X is a d-dimensional SFT} = [0, d] ∩ Σ2 ; (3) {D(X) | X is a d-dimensional SFT} = [0, d] ∩ Δ2 . Finally, one can define entropy for cellular automata (and, more generally, for arbitrary continuous maps of compact metric spaces). Using the connection between SFTs and cellular automata one can prove the following. Theorem 9.6.30 (Hochman (2009), Guillon and Zinoviadis (2013)) A number h ≥ 0 is the topological entropy of a 1-dimensional cellular automaton if and only if h ∈ [0, ∞) ∩ Π1 , i.e. h = infhn for a non-negative recursive sequence (hn )∞ n=1 .

358

M. Hochman

For d ≥ 2, a number h ≥ 0 is the topological entropy of a d-dimensional cellular automaton if and only if h ∈ [0, ∞) ∩ Π2 , i.e. h = lim sup hn for a non-negative recursive sequence (hn )∞ n=1 .

10 Linearly recursive sequences and Dynkin diagrams Christophe Reutenauer

10.1 Introduction Sequences of natural numbers have always been the subject of all kind of investigations. Among them is the famous Fibonacci sequence, related to the well-known Golden Ratio. This sequence is the paradigmatic example of a sequence satisfying a linear recursion. Latter sequences are of fundamental importance in algebra, combinatorics, and number theory, as well as in automata theory, since they count languages associated with finite automata. The first scope of the present chapter is to prove a classification theorem: it characterises a class of sequences associated with certain quivers (directed graphs); in general these sequences satisfy non-linear recursions; it turns out that these sequences satisfy also a linear recursion exactly when the undirected underlying graph is of Dynkin, or extended Dynkin, type. This is perhaps the first example of a classification theorem, that uses Dynkin diagrams, in the realm of integer sequences. Note that, for the sake of simplicity, we have dealt here with diagrams and quivers, whose underlying graphs are simple graphs; the corresponding Dynkin diagrams are sometimes called simply laced. The whole theory may be done with more general diagrams, that is, using Cartan matrices, see (Assem et al., 2010). The sequences we consider are called friezes: they were introduced by Philippe Caldero and they satisfy non-linear recursions associated with quivers. Such a recursion is a particular case of a mutation, an operation introduced by Fomin and Zelevinsky in their theory of cluster algebras. The recursion does not show at all that these sequences are integer-valued. This must be proved separately and for this we follow Fomin and Zelevinsky’s Laurent phenomenon (Fomin and Zelevinsky, 2002a,b), for which a proof is given. We shall explain all details about Dynkin and extended Dynkin diagrams. These diagrams have been used to classify several mathematical objects, starting with the Cartan–Killing classification of simple Lie algebras. There is a simple combinatorial characterisation of these diagrams, due to (Vinberg, 1971) for Dynkin diagrams, and Combinatorics, Words and Symbolic Dynamics, ed. Val´erie Berth´e and Michel Rigo. Published by c Cambridge University Press. Cambridge University Press 2016.

360

C. Reutenauer

to (Berman et al., 1971/72) for extended Dynkin diagrams: they are characterised by the existence of a certain function on the graph, which must be subadditive or additive. See Section 10.8 for more detail. The secondary scope of the chapter is to introduce the reader to the notion of SL2 -tilings and their applications. An SL2 -tiling of the plane is a filling of the discrete plane by numbers, or elements of a commutative ring, in such a way that each adjacent two by two minor is equal to 1. Similar objects were considered by Coxeter (Coxeter, 1971), Conway and Coxeter (Conway and Coxeter, 1973a,b) and Di Francesco (Di Francesco, 2010). A remarkable subclass is obtained by prescribing the value 1 in a two-sided infinite discrete path of the discrete plane; then it extends uniquely to a SL2 -tiling, that may be easily computed by a well-known matrix representation. If the path is periodic, then the sequences on any discrete half-line satisfy linear recursions, and are even N-rational. This allows to prove that the sequences associated with a frieze of type  are N-rational. A The present chapter rests on the author’s article (Assem et al., 2010), jointly with Ibrahim Assem and David Smith. There have been recent new developments in the theory of SL2 -tilings, see (Morier-Genoud, 2012; Holm and Jørgensen, 2013; Bessenrodt et al., 2014): in the latter article, SL2 -tilings filled with positive natural numbers are completely characterised by a construction using triangulations of a circle with four accumulation points. The author is grateful to Pierre Auger, Val´erie Berth´e, Ibrahim Assem, Ralf Schiffler and Bernhard Keller for useful mail exchanges.

10.2 SL2 -tilings of the plane Following (Assem et al., 2010), we call SL2 -tiling of the plane a mapping t : Z2 → K, for some field K, such that for any x, y in Z, $ $ $ $ t(x, y) t(x + 1, y) $ $ $ t(x, y + 1) t(x + 1, y + 1) $ = 1. Here we represent the discrete plane Z2 , so that the y-axis points downwards, and the x-axis points to the right: see Figure 10.1. An example with K = Q is given in Figure 10.2; it has values in N. Another example, with K the field of fractions over Q in the variables a, b, c, d, e, f , . . ., is given in Figure 10.3. These objects are an extension to the whole plane of the frieze patterns introduced by Coxeter (Coxeter, 1971) and studied by Conway and Coxeter (Conway and Coxeter, 1973a,b). In Figure 10.4 is such a frieze pattern, partially represented (it extends diagonally infinitely in both directions north-west and south-east). Note that Conway and Coxeter represent them horizontally, instead of diagonally as here. An SL2 -tiling of the plane, viewed as an infinite matrix, has necessarily rank at

361

Linearly recursive sequences and Dynkin diagrams x

y

-

P = (x, y)

? Figure 10.1 Coordinate convention.

...

1 1

1 2

1 3

1 1 4

1 2 9

3 1 4 19

1 3 14

3 2 1 5 24

2 1 1 1 1 6 29

1 1 1 1 2 3 4 52 121 ...

1 1 2 3 22 9 14 19 119 242

1 2 5 8 11 52 39 53 332 1607

1 3 8 13 18 41 82 87 545 2368

1 4 11 18 25 57 89 112 758 3669

...

Figure 10.2 An SL2 -tiling over N.

least 2. Following (Bergeron and Reutenauer, 2010), we say that the tiling is tame if its rank is 2.

10.3 SL2 -tiling associated with a bi-infinite discrete path Roughly speaking, a frontier is a discrete path (with steps that go from west to east, and south to north), which is infinite in both directions, and such that each vertex is

···

d+b+bce cd

1+ce d

b a

c

··· e d

1+ac b

b+d+acd bc

···

f 1+d f e bce+b+bd f +dac+d 2 ac f +d+d 2 f bcde

Figure 10.3 An SL2 -tiling over N[a, b, c, . . . , a−1 , b−1 , c−1 , . . .].

···

362

C. Reutenauer ... 1 1 1 1

1 2 3 4 1

1 3 5 7 2 1

1 4 7 10 3 2 1

1 1 1 1 5 9 13 4 3 2 1

1 2 3 4 21 38 55 17 13 9 5 1

1 2 3 16 29 42 13 10 7 4 1

1 2 11 20 29 9 7 5 3 1

1 6 11 16 5 4 3 2 1

1 2 3 1 1 1 1 1

1 2 1

1 1 ...

Figure 10.4 A piece of a frieze pattern.

labelled with a non-zero element of the ground field; see Figure 10.5. In most cases considered here, these elements will be all equal to 1, so that the frontier is simply a bi-infinite discrete path. The formal definition goes as follows. A frontier is a bi-infinite sequence . . . x−2 a−2 x−1 a−1 x0 a0 x1 a1 x2 a2 x3 a3 . . .

(10.1)

where xi ∈ {x, y} and ai are elements of K ∗ , for any i ∈ Z. It is called admissible if none of the two sequences (xn )n≥0 and (xn )n≤0 is ultimately constant. The ai s are called the variables of the frontier. Each frontier may be embedded into the plane: the variables label integer points in the plane, and the x (respectively y) determine a biinfinite discrete path, in such a way that x (respectively y) corresponds to a segment of the form [(a, b), (a + 1, b)] (resp [(a, b), (a, b − 1)]). Note the b − 1: this is because of our coordinate conventions, see Figure 10.1. An example is given in Figure 10.2, with all the variables equal to 1, and where the embedding is represented by the 1s. In Figure 10.3, the frontier is coded by the word ...aybxcxdyex f .... Given an admissible frontier, embedded in the plane as explained previously, let (u, v) ∈ Z2 . Then we obtain a finite word, which is a factor of the frontier, by projecting the point (u, v) horizontally and vertically onto the frontier. We call this word the word of (u, v). It is illustrated in Figure 10.3: the word associated with the point is aybxcxd. Another example is shown in Figure 10.5, where the labelled b+d+acd bc word associated with P is a−3 ya−2 ya−1 ya0 xa1 xa2 ya3 xa4 . Theorem 10.3.1 Given an admissible frontier, there exists a unique tame SL2 tiling t of the plane over K, extending the embedding of the frontier into the plane. It is defined, for any point (u, v) that is below the frontier, with associated word a0 x1 a1 x2 · · · xn+1 an+1, where n ≥ 1 and xi ∈ {x, y}, by the formula t(u, v) =

1 (1, a0 )μ (a1 , x2 , a2 ) · · · μ (an−1 , xn , an )(1, an+1 )t . a1 a2 · · · an

(10.2)

Linearly recursive sequences and Dynkin diagrams ..

. ..

a−4

a0 a−1 a−2 a−3

a1

a3 a2

a4

363

.

a5

P

Figure 10.5 A frontier.

Here we have used the following notation:    a 1 b μ (a, x, b) = , μ (a, y, b) = 0 b 1

0 a

 .

10.4 Proof of Theorem 10.3.1 We prove the theorem in the particular case where all variables on the frontier are equal to 1; we do not use more than this special case in the sequel of this chapter. The general case is proved in (Assem et al., 2010), Theorem 4, and tameness and uniqueness follows from (Bergeron and Reutenauer, 2010), Proposition 7 and Proposition 12. In the special case we prove, one may omit the variables on the frontier, since they are all equal to 1; hence a frontier is simply a bi-infinite word on the alphabet {x, y}. Likewise, the word associated with each point is a finite word on this alphabet. In this case, the theorem takes the following simpler form: let μ denote the homomorphism from the free monoid {x, y}∗ into the multiplicative group SL2 (Z) such that     1 1 1 0 μ (x) = and μ (y) = . 0 1 1 1 Moreover, let S(A) denote the sum of the coefficients of any matrix A. Then we have to prove that: given an admissible frontier, with only 1s as variables, there exists a unique SL2 -tiling of the plane t extending the embedding of the frontier into the plane. It is defined, for any point (u, v) below the frontier, with associated word ywx, by the formula t(u, v) = S(μ (w)). Note that equivalently, if (u, v) is a point with associated word w, one has t(u, v) = μ (w)22 .

(10.3)

364

C. Reutenauer

.. . .. 1 1

...

...

1

1 .. . .. . 1

1

(u, v) (u, v + 1)

(u + 1, v) (u + 1, v + 1)

.

w

Figure 10.6 The words associated with the four points (u, v), (u + 1, v), (u, v + 1) and (u + 1, v + 1).

Indeed, we have w = yw x, so that 





S(μ (w )) = (1, 1)μ (w )  1 =( 1

0 1





μ (w )



1 1 0 1

1 1



 )22 = (μ (y)μ (w )μ (x))22 = μ (w)22 .

We prove below that formula (10.3) defines an SL2 -tiling t. Then, clearly t(u, v) > 0 for any (u, v) ∈ Z2 . Then it is easily deduced, by induction on the length of the word associated with (u, v), that t(u, v) is uniquely defined by the SL2 -condition. This proves that the tiling is unique; note that in this special case tameness is not used to prove uniqueness. Now, we show that the function t given by (10.3) is an SL2 -tiling of the plane. It is enough to show that for any (u, v) ∈ Z2 , the determinant of the matrix   t(u, v) t(u + 1, v) t(u, v + 1) t(u + 1, v + 1) is equal to 1. By inspection of Figure 10.6, where w = x1 · · · xn , n ≥ 0 and xi ∈ {x, y}, it is seen that the words associated with the four points (u, v), (u + 1, v), (u, v + 1) and (u + 1, v + 1) are respectively of the form w, wyl x, yxk w and yxk wyl x, with k, l ∈ N. Let M = μ (w). Then     0 0 l t(u, v) = M22 = (0, 1)M , t(u + 1, v) = (0, 1)M μ (y) μ (x) , 1 1  t(u, v + 1) = (0, 1)μ (y)μ (x) M k

0 1

 ,

365

Linearly recursive sequences and Dynkin diagrams and moreover

 t(u + 1, v + 1) = (0, 1)μ (y)μ (x)k M μ (y)l μ (x)

0 1

 .

We have to show that t(u, v)t(u + 1, v + 1) −t(u, v + 1)t(u + 1, v) = 1. Equivalently that the matrix   λ Mγ λ Mγ  , λ  Mγ λ  Mγ  where λ = (0, 1), λ  = (0, 1)μ (yxk ) and similarly for γ , γ  , has determinant 1. Now, this matrix is equal to the product   λ M(γ , γ  ). λ The matrix M = μ (w) has determinant 1. Moreover,     1 0 1 k λ  = (0, 1)μ (y)μ (xk ) = (0, 1) μ (x)k = (1, 1) = (1, k + 1). 1 1 0 1     λ 0 1 = , which has determinant −1; similarly (γ , γ  ) has Thus λ 1 k+1 determinant is −1, which ends the proof, except for tameness, which will be proved in Section 10.11.1. We have also proved the following. Corollary 10.4.1 Let t be the SL2 -tiling associated with some frontier with variables all equal to 1. Then for each point P with associated word w, one has t(P) = μ (w)22 .

10.5 N-rational sequences 10.5.1 Equivalent definitions A series S = ∑n∈N an xn ∈ N[[x]] is called N-rational if it satisfies one of the two equivalent conditions (this equivalence is a particular case of the theorem of Kleene– Sch¨utzenberger, see (Berstel and Reutenauer, 2011), Theorem I.7.1): (i) for some matrices λ ∈ N1×d , M ∈ Nd×d , γ ∈ Nd×1 , one has ∀n ∈ N, an = λ M n γ ; (ii) S belongs to the smallest subsemiring of N[[x]] containing N[x] and closed under the operation T → T ∗ = ∑n∈N T n (which is defined if T has zero constant term). We then say that the sequence (an ) is N-rational. We do not prove this equivalence, since this is somewhat out of the scope of the present chapter. We shall use only form (i). A third equivalence is the following:

366

C. Reutenauer

(iii) there exists a rational, or equivalently (by Kleene’s theorem), recognisable language L, such that an is for each n the number of words of length n in L (see (Berstel and Reutenauer, 2011), Proposition 3.2.1). Or equivalently: (iv) there exist a finite directed graph G, a distinguished vertex v0 and a subset V f of the set of vertices of G such that for any n, an is the number of paths in G which go from v0 to some vertex in V f . A simple consequence of (i) is that each N-rational sequence (an ) satisfies a linear recursion with integer coefficients. Indeed, let t d − α1 t d−1 − · · · − αd be the characteristic polynomial of M. Then by the Cayley–Hamilton theorem, we have M d = α1 M d−1 + · · · + αd ; multiplying at the left by λ M n and at the right by γ , we therefore obtain that for any natural number n, an+d = α1 an+d−1 + · · ·+ αd an . Hence, (an ) satisfies the linear recursion associated with the characteristic polynomial of M. Recall the well-known result that a sequence (an ) satisfies a linear recursion with constant coefficients if, and only if, the associated series S = ∑n∈N an xn is rational, that is, is the expansion of some rational function. We call such sequences rational. We therefore deduce that each N-rational sequence is rational and has coefficients in N. The converse is, however, not true: see Section 10.5.3. It is generally believed that if a rational sequence is the counting sequence of some mathematical object, then it is actually N-rational; this metamathematical principle, that goes back to Sch¨utzenberger, is illustrated by many examples, as Hilbert series of rings or monoids, and generating series of combinatorial structures; see the introduction of (Reutenauer, 1997) and (Bousquet-M´elou, 2005, Section 2). An example within the present chapter are the friezes: if they are rational, they must be N-rational.

10.5.2 Linear-algebraic characterisation and closure properties The following result goes back to Fliess and Jacob. Proposition 10.5.1 The following conditions are equivalent. (1) The sequence (an ) is N-rational. (2) There exists a finitely generated submodule of the N-module of all sequences over N which contains (an ) and which is closed under the shift, which maps each sequence (bn ) onto the sequence (bn+1 ). (3) There exists a finitely generated N-module M, an element V of M, an N-linear endomorphism T of M and an N-linear mapping φ : M → N such that for any n, an = φ ◦ T n (V ). Recall that a module M over a semiring (here N) is a commutative monoid M with a left action of the semiring on M, and with the same axioms as those for modules over rings, except that one has to add the axiom 0.V = 0, for any V ∈ M (see (Berstel and Reutenauer, 2011, Section 1.5) for some details). (i)

Proof (1) implies (2): take (an ) in the form (i) above. Then define an = ei M n γ ,

Linearly recursive sequences and Dynkin diagrams

367

where (ei ) is the canonical basis of N1×d . Let M be the N-submodule of the module (i) of all sequences over N generated by the sequences an , i = 1, . . . , d. Then an = (i) λ M n γ = ∑1≤i≤d λi ei M n γ = ∑1≤i≤d λi an . Thus (an ) is in M. Moreover, a similar (i) (i) calculation shows that the shifted sequence of an , which is an+1 = λ  M n γ , with λ  = ei M, is also in M. This proves that M is closed under the shift, since the latter is N-linear. (2) implies (3): we take as module M the submodule given in (2), as V the sequence (an ) itself, as T the shift and as φ the mapping which sends each sequence onto its term of rank 0. Then clearly an = φ ◦ T n (V ). (3) implies (1): let V j be the d generators of M; define the d by d matrix M by the formula T (Vi ) = ∑ j Mi jV j . Then we have for any natural number n, T n (Vi ) = ∑ j MinjV j . Indeed, this is true for n = 0, 1 and we admit it for n. Then T n+1 (Vi ) = T (T n (Vi )) = T (∑ MinjV j ) = ∑ Minj T (V j ) j

j

= ∑ Minj ∑ M jkVk = ∑(∑ Minj M jk )Vk = ∑ Mikn+1Vk . j

k

k

j

k

Define λ ∈ N1×d by V = ∑i λiVi and γ ∈ Nd×1 by γ j = φ (V j ). Then an = φ ◦ T n (V ) = φ (∑ λi T n (Vi )) = φ (∑ λi ∑ MinjV j ) i

i

j

= ∑ λi Minj φ (V j ) = ∑ λi Minj γ j = λ M n γ . ij

ij

Thus (1) holds. Corollary 10.5.2 The following holds. (i) Let (an ) be a sequence of natural numbers and k a natural number. Then (an ) is N-rational if, and only if, the sequence (an+k ) is N-rational. (i)

(ii) If for some natural number p, the p sequences (an ), i = 0, . . . , p − 1, are N(i) rational, then so is the sequence an defined by ai+np = an , for any n and i, i = 0, . . . , p − 1. (iii) If (an ) and (bn ) are two N-rational sequences, then so is the sequence (an bn ). The latter sequence is called the Hadamard product of the two sequences. Proof (i) and (ii) are easy consequences of condition (2) in the proposition. For (ii), one uses (2) also, by taking the Hadamard product of the two submodules. In case (ii) of the corollary, we say that (an ) is the merge (also called interlacing) (i) of the sequences (an ), i = 0, . . . , p − 1.

368

C. Reutenauer

10.5.3 Exponential polynomial and theorem of Berstel–Soittola Recall that each rational sequence (an ) may be uniquely expressed as an exponential polynomial, of the form k

an = ∑ Pi (n)λin ,

(10.4)

i=1

for n large enough, where Pi (n) is a non-zero polynomial in n and the λi are distinct non-zero complex numbers; see e.g., (Berstel and Reutenauer, 2011, Section 6.2). We call the λi the eigenvalues of the sequence an . We say that an has a dominating eigenvalue if among the λi , there is a unique one of maximum modulus; for convenience, we include in this class the sequences ultimately equal to 0. Theorem 10.5.3 A sequence is N-rational if, and only if, it is a merge of rational sequences over N having a dominating eigenvalue. This result is due to Berstel for the ‘only if’ part, and to Soittola for the converse. A complete proof may found in (Berstel and Reutenauer, 2011), Chapter 8. Here we need only Berstel’s theorem. It is a consequence of the theorem of Berstel that there exist rational sequences over N which are not N-rational: see (Eilenberg, 1974) Theorem VI.6.1 and Example VI.6.1, or (Berstel and Reutenauer, 2011) Theorem 8.1.1 and Exercise 8.1.2.

10.5.4 An asymptotic lemma Given two sequences of positive real numbers (an ) and (bn ), we shall write as usually an ∼ bn if the quotient an /bn tends to 1 when n tends to ∞. We also write an ≈ bn to express the fact that for some positive constant C, one has limk→∞ an /bn = C. Clearly an ∼ bn implies an ≈ bn . Corollary 10.5.4 Let a(v, n) be a family, indexed by the finite set V , of unbounded N-rational sequences of positive integers. There exist an integer p ≥ 1, real numbers λ (v, i) ≥ 1 and integers e(v, i) ≥ 0, for v ∈ V and i = 0, ..., p, such that: (i) for every v ∈ V and every i = 0, ..., p, a(v, pn + i) ≈ λ (v, i)n ne(v,i) ; (ii) for every v ∈ V , there exists i = 0, ..., p such that λ (v, i) > 1 or e(v, i) ≥ 1; (iii) for every v ∈ V , λ (v, 0) = λ (v, p) and e(v, 0) = e(v, p). Proof Let (an ) be an N-rational sequence having a dominating eigenvalue, with an > 0. We have then relation (10.4) with | λ1 |>| λ2 |, . . . , | λk |, and non-zero Pi s. Let α be the coefficient of highest degree of P1 , e the degree of the latter and λ = λ1 . Then an ∼ α ne λ n . Since the an are positive natural numbers, and since an+1/an ∼ λ , we must have λ ∈ R+ . We cannot have λ < 1, since otherwise, an being an integer, an = 0 for n large enough. Thus λ ≥ 1. If an is unbounded, then either e ≥ 1 or λ > 1. Note that if a rational sequence is the merge of p sequences having a dominating eigenvalue, this is true also for each multiple of p. Therefore, using Theorem 10.5.3,

Linearly recursive sequences and Dynkin diagrams

369

we see that there exists some p such that each series a(v, n + pi), v ∈ V , i = 0, . . . , p − 1 is N-rational and has a dominating eigenvalue. By the first part of the proof, we see that (i) holds for i = 0, . . . , p−1. Now, we have a(v, pn + p) = a(v, p(n + 1)) ≈ λ (v, 0)n+1 (n + 1)e(v,0) by the i = 0 case. Therefore a(v, pn + p) ≈ λ (v, 0)n ne(v,0) : thus (i) holds also for i = p and moreover (iii) holds. The sequence a(v, n) is unbounded; hence for some i = 0, . . . , p − 1, the sequence a(pn + i) is unbounded and therefore either λ (v, i) > 1 or e(v, i) ≥ 1. Thus (ii) holds.

10.6 N-rationality of the rays in SL2 -tilings Given a mapping t : Z2 → K, a point M ∈ Z2 and a non-zero vector V ∈ Z2 , we consider the sequence an = t(M + nV ). Such a sequence will be called a ray associated with t. We call M the origin of the ray and V its directing vector. The ray is horizontal if V = (1, 0), vertical if V = (0, 1) and diagonal if V = (1, 1) (see Figure 10.1). Theorem 10.6.1 Suppose that the frontier in Theorem 10.3.1 is ultimately periodic and that each variable is equal to 1. Then each ray associated with t is N-rational. Proof We prove the theorem in the case where the directing vector V = (a, b) satisfies a, b ≥ 0. The other cases are left to the reader and will not be used in the sequel. (1) The points M + nV are, for n large enough, all above or all below the frontier, since the frontier is admissible and by the hypothesis on the directing vector. Since, by Lemma 10.5.2, N-rationality is not affected by changing a finite number of values, we may assume that they are all below. (2) Let wn be the word associated with the point M + nv. By ultimate periodicity of the frontier, there exists an integer q ≥ 1 and words v0 , ..., vq−1 , u0 , ..., uq−1 , u0 , ..., uq−1 such that for any i = 0, ..., q − 1 and for n large enough, wi+nq = u ni vi uni . This is seen by inspection. (3) It follows from Corollary 10.4.1 that for some 2 × 2 matrices Mi , Ni , Mi over N, one has for any i = 0, . . . , q − 1 and n large enough, ai+nq = λi Mi n Ni Min γi , where λi ∈ N1×2 , γi ∈ N2×1 . (4) Consequently, by Corollary 10.5.2, the sequence (an ) is N-rational. Indeed, a sequence of the form λ M  n NM n γ is a N-linear combination of sequences, each of which is a Hadamard product of two N-rational sequences. In the case of a purely periodic frontier (this is the case of the SL2 -tilings as n such as defined in Section 10.7.1), there is a sociated with the friezes of type A sharpening of this result; it explains why the linear recursions in these tilings are essentially of length two. Proposition 10.6.2 Let t be the SL2 -tiling associated with a purely periodic frontier, with variables equal to 1, associated with the admissible frontier ∞ w∞ , for some word

370

C. Reutenauer

... 1 1

1 2

1 1 3

1 2 7

1 1 3 11

1 1 3 11 41

1 2 7 26

1 2 7 26 97

1 1 3 11 41 153

1 2 7 26 97 362 ...

1 1 3 11 41 153 571

1 2 7 26 97 362 1351

1 1 3 11 41 153 571 2131

1 2 7 26 97 362 1351 5042

Figure 10.7 An SL2 -tiling.

w ∈ {x, y}∗ . Let a (respectively b) be the number of xs (respectively of ys) in w, and let p be the lowest common multiple of a, b. Then each diagonal ray is a merge of p rational sequences, each of which satisfies the linear recursion associated with the characteristic polynomial of the matrix μ (w) p/a+p/b. This result is illustrated below. We have w = xxy, a = 2, b = 1, p = 2 thus p/a + p/b = 3,     2  1 0 3 2 1 1 = , μ (w) = 1 1 1 1 0 1 

μ (w)3 =

3 1

2 1

3

 =

41 30 15 11

 .

The characteristic polynomial of the latter matrix is t 2 − 52t + 1 and the associated linear recursion is un+2 = 52un+1 − 1. We see indeed in Figure 10.7 that 1351 = 52.26 − 1, 2131 = 52.41 − 1 and 5042 = 52.97 − 2. In order to prove the proposition, one has to mimic with more precision part (2) of the proof of the theorem. Details are left to the reader.

10.7 Friezes 10.7.1 Numerical friezes Let Q be a quiver (a directed graph) with set of vertices V and set of arrows E. We assume that Q is acyclic. Associate with each vertex v the sequence a(v, n) of numbers defined by the recursion: a(v, 0) = 1 and a(v, n + 1) =

1 + ∏w→v a(w, n + 1) ∏v→w a(w, n) . a(v, n)

(10.5)

The family of sequences a(v, n), one for each vertex v, is called the frieze associated with Q. The fact that the quiver is acyclic guarantees the existence and the uniqueness of the sequences. A priori these numbers are positive rational numbers.

371

Linearly recursive sequences and Dynkin diagrams

A remarkable fact, due to Fomin and Zelevinski, is a consequence of their Laurent phenomenon (Fomin and Zelevinsky, 2002a,b): the sequences are natural integers, that is, in the above fraction, the denominator always divides the numerator. This will be proved below. The terminology ‘frieze’ comes from the fact that one may take infinitely many copies of the quiver Q, one copy Qn for each natural integer, with corresponding vertices vn ; one adds arrows vn → wn+1 if in Q one has w → v; then the labelling a(v, n) of vertex vn is obtained recursively by the previous formula. It means that the label of a vertex vn+1 is obtained as follows: take the product of the labels of the sources of the ingoing arrows, add 1, and divide by the label of vn . For example, in Figure 10.8, the value 26 is 7 × 11 + 1 divided by 3. These recursions, although highly non-linear, produce sometimes rational, or even N-rational, sequences. In this case, we say that the frieze is rational, or N-rational. This is the case, for example, when the quiver has two vertices with two edges: then one obtains the Fibonacci numbers of even rank. The question of rationality of the frieze is the central question which will be answered in this chapter. Let G be an undirected graph without loops. We say that a frieze is of type G if the quiver Q on which it is constructed is obtained from G by some acyclic orientation of its edges. The friezes shown in the Figures 10.8, 10.9 and 10.10 are of the type indicated, which is a graph defined in Section 10.8. 7

1 1

2

97

11 3

1

2131

362

41 26

571 153

1351

 2. Figure 10.8 A frieze of type A

1

2

5

1

2

5 9

1 1

2

8 39

1

87

...

53

14 19

3

1

8

332 119

4

1607

1

5

24

1

5

24

7 . Figure 10.9 A frieze of type D

...

372

C. Reutenauer 1

2

1

1

1

2

3

28

2

19

2

2

13 129

245

10

129

19

3

1 1

19

3

1

10

2

883 8762

13

883

129 10

...

78574 883

...

13

6. Figure 10.10 A frieze of type E

10.7.2 Quiver mutations We consider a finite set V , the ring of Laurent polynomials L = Z[v±1 , v ∈ V ], and the field of rational function F = Q(V ). Note that L is a factorial ring. Call basis a family X = (xv )v∈V of rational functions which freely generates the field F, that is, v → xv is an automorphism of F. An example of basis is V itself. The quivers we consider here are without loops and without 2-cycles; they may have multiple arrows and are not assumed to be acyclic as in Section 10.7.1. Consider an initial quiver Q0 with vertex set V , and label each vertex v of it by v itself, considered as an element of the field F = K(V ). We consider a quiver Q with same vertex set V , with each v ∈ V labelled by xv , where X = (xv )v∈V is a basis as defined above; given some vertex u of Q, we define the mutation of Q at u, denoted as μu : it defines a new quiver Q , with same vertex set V , with basis Y such that yv = xv if v = u and with yu xu = ∏v→u xv + ∏u→v xv , where the arrows are in Q and are of course taken with multiplicities. Moreover Q is obtained from Q by the following algorithm. (1) For each arrows v → u → w, add an arrow v → w (transitive closure at u). (2) Reverse each arrow incident at u (reversion). (3) Remove each pair of opposite arrows until no such pair exists (removal of 2cycles). The polynomial ∏v→u v + ∏u→v v ∈ Z[V ] (where the arrows are in Q) is called the mutation polynomial of Q at u; note that this polynomial does not depend on u, since there is no loop. Let Q = μu (Q). Note that Q is without loops, since Q has no loops nor 2-cycles. Moreover, Q has no 2-cycles by construction. Observe also that the mutation polynomial of Q at u is equal to the mutation polynomial of Q at u: this is because the arrows incident to u in Q are the reverse of those incident to u in Q, since step (1) and step (3) do not change arrows incident to u. Proposition 10.7.1 The following hold. (i) (Involution) Mutation at u is an involution. (ii) (Commutation) If u, w are distinct vertices in Q without arrow between them, then μu μw (Q) = μw μu (Q).

Linearly recursive sequences and Dynkin diagrams

373

(iii) (Braid) Suppose that u, w are distinct vertices of Q. Let A (respectively B,C) be the mutation polynomial of Q at u (respectively of μu (Q) at w, respectively of μw μu (Q) at u). If there is no arrow in Q between u and w, then A = C and they do not depend on w. If there is an arrow, then C = m(A |w←B0 /w ) for some Laurent monomial m, not depending on u, where B0 is the monomial B0 = B |u←0 . It follows that C |w←B0 /w = mA. Each quiver Q is completely determined by the antisymmetric matrix M ∈ ZV ×V defined by: muv is the number of arrows u → v minus the number of arrows v → u; note that at most one of these two numbers is non-zero, since Q has no 2-cycles. If Q is obtained from Q by mutation at u, then its matrix M  satisfies, when v = w: mvw = −mvw if v = u or w = u, and mvw = mvw + sgn(mvu )max(mvu muw , 0) if v = u and w = u (if mvu = 0, then the value of sgn(mvu ) does not matter, and the second term vanishes). We leave the verification of this fact to the reader. Proof (i) If, with the notation above, we apply a second time the mutation at u, we obtain a matrix M  . Suppose that v = u or w = u; then mvw = −mvw = mvw . If v = u and w = u, then mvw = mvw + sgn(mvu )max(mvu muw , 0) = mvw + sgn(mvu)max(mvu muw , 0) +sgn(−mvu )max((−mvu )(−muw ), 0) = mvw . Thus M  = M and the mutation is involutive. (ii) Suppose that u, w are not linked by an arrow in the quiver Q. Then, take the previous notation M, M  ; moreover, define the matrices M  , P and P : M  is obtained from M  by a mutation at w, P is obtained from M by a mutation at w and P is obtained from P by a mutation at u. We verify that P = M  , which will prove the commutation. Since u, w are unconnected in Q, we have muw = 0 = mwu , thus also muw = 0 = mwu since M  is obtained from M by a mutation at u. The same relations holds for M  , P and P , since they are obtained by mutations at u or w. We show first that muv = puv . If v = w, these numbers are 0, so we may assume that v = w. We have (mutation at u): muv = −muv and puv = −puv . Moreover, since u = w and v = w, mutating at w, we have muv = muv + sgn(muw )max(muw mwv , 0) = muv = −muv . Similarly, puv = muv + sgn(muw )max(muw mwv , 0) = muv . Thus muv = −muv = −puv = puv . The proofs that mvu = pvu , mwv = pwv and mvw = pvw are similar.  = p . We have (mutation We assume now that v,t = u, w and show that mtv tv    + sgn(p )max(p p , 0). at u): mtv = mtv + sgn(mtu )max(mtu muv , 0) and ptv = ptv tu tu uv  = m + sgn(m )max(m m , 0) and p = Moreover, we have (mutation at w): mtv tv tv tw tw wv mtv + sgn(mtw )max(mtw mwv , 0). We have also (mutation at w):  ptu = mtu + sgn(mtw )max(mtw mwu , 0) = mtu  = m and m = m . Thus m = m + since mwu = 0. Similarly, puv = muv , mtw tw wv tv wv tv

374

C. Reutenauer

sgn(mtu )max(mtu muv , 0) + sgn(mtw )max(mtw mwv , 0) and  ptv = mtv + sgn(mtw )max(mtw mwv , 0) + sgn(mtu)max(mtu muv , 0).  = p . Thus mtv tv (iii) We prove it first in the case where there is no arrow between u and w in Q. Then A is independent of w. Moreover C is the mutation polynomial of μw μu (Q) at u, so that, by a previous remark, it is equal to that of μu μw μu (Q) at u. But by (i) and (ii), this is the mutation polynomial of μw (Q) at u. Since u and w are not incident in Q, the mutation mutation of Q at w does not change the arrows incident to u (a fact that we leave to the reader to verify). It follows that the mutation polynomial of μw (Q) at u is equal to the mutation polynomial of Q at u. Thus we have C = A and (iii) holds in this case. Suppose now that there is an arrow between u and w in Q. We may by symmetry assume that there is an arrow u → w with multiplicity a in μu (Q) (beware: not in Q!). Let M be the antisymmetric matrix associated with μu (Q). In the next formulas, v is always assumed to be = u, w and the arrows are taken, without multiplicities, in the quiver μu (Q). We have (since A is also the mutation polynomial of μu (Q) at u)

A=

∏ vmvu + wa ∏ vmuv .

v→u

u→v

Now, since muw = a > 0, B depends on u and therefore B0 is equal to B0 = ∏w→v vmwv . We claim that C is equal to C divided by some monomial m1 , not depending on u, with C = wa ∏ vmvu + ∏ vmuv v→u

u→v

∏ vamwv .

w→v

Thus A |w←B0 /w = Therefore C

wa × A |

∏ vmvu +

v→u

∏w→v vamwv wa

= C /m

∏ vmuv .

u→v

wa m1

= × A |w←B0 /w . 1= w←B0 /w , thus C The verification of the claim is left to the reader, who may note that it corresponds to apply to the quiver μu (Q) the first two steps of the mutation at w, without the removal of 2-cycles; then the removal of 2-cycles amounts to divide the mutation polynomial by some monomial. Since the first monomial in C above is not divisible by u (because there are no loops), m1 does not depend on u. The last assertion follows since B, hence also B0 , does not depend on w, so that the substitution of w by B0 /w is an involution. Theorem 10.7.2 Each element of the basis of each quiver Q obtained from Q0 by a sequence of mutations is a Laurent polynomial in the variables v ∈ V . This theorem will be proved in Section 10.7.3. Corollary 10.7.3 Each frieze is integer-valued.

Linearly recursive sequences and Dynkin diagrams

375

Proof Replace each initial value a(v, 0) = 1 by the variable v. It is enough to show that the sequences a(v, n) are Laurent polynomials (that is, linear combinations of positive and negative powers of the variable), since one may recover the frieze by specialising these variables to 1. Now, one verifies that computing the sequences using the recursion (10.5) amounts to mutate only at vertices which are sources of the quiver. Hence, the corollary is a particular case of Theorem 10.7.2.

10.7.3 The Laurent phenomenon for quiver mutation Consider the mutation graph of Q0 : its vertices are the quivers Q obtained by a sequence of mutations from Q0 , with an edge (Q , Q ) labelled v if there is mutation at v between Q and Q . We prove the theorem by induction on the distance from Q0 to Q in this graph. We start by considering a special case of distance 3. Proposition 10.7.4 Consider four quivers Q0 , Q1 , Q2 , Q3 , which are linked by mutations Q0 − Q1 − Q2 − Q3 , with corresponding bases V, X ,Y, Z, with respective mutations at u, w, u, and with respective mutations polynomials A, B,C. We assume that u = w, that there is an arrow between u and w in Q0 , and that A and B0 = B |u←0 are relatively prime in L. Then the elements of these bases are Laurent polynomials and one has: gcd(xu , yw ) = 1 = gcd(xu , zu ) in L. We shall need an easy lemma. Lemma 10.7.5 Let R be a factorial ring and u a variable. Let au + b be a polynomial of degree 1 in R[u] and P ∈ R be such that a and P are relatively prime in R. Then au + b and P are relatively prime in R[u±1 ]. Proof Let p ∈ R[u±1 ] be an irreducible divisor of au + b and P. Since u is invertible in R[u±1 ], we may assume that p is a polynomial in R[u] with non-zero constant term. Since p divides au + b in R[u±1 ], we see that p is of degree 0 or 1. If p is of degree 0, then p divides a and b in R[u], and also P, a contradiction. Now, p cannot be of degree 1, since it divides P, which is in R. Proof (of Proposition 10.7.4) If v is distinct from u, w, then the mutations of the proposition do not change the corresponding element, so that v = xv = yv = zv . For a similar reason, since u = w, we have w = xw , xu = yu and yw = zw . We use the notation Bx = B(xv , v ∈ V ). Since xv = v if v = u, w, and since B does not depend on w, we have Bx = B |u→xu and we write B(u) for B and B(xu ) for Bx , by a slight abuse of notation. Similarly we write A(w) and C(w). We have xu = A/u, so that xu is a Laurent polynomial. Moreover, yw = Bx /xw = B(xu )/w, since w = xw and since the only variable changed in the first mutation is xu ; thus yw = B(A/u)/w, so that yw is a Laurent polynomial, too. Thus, regarding Laurentness in the proposition, it remains only to show that zu is a Laurent polynomial. We have zu = Cy /yu = C(yw )/yu . Thus zu = C(B(xu )/w))/yu .

376

C. Reutenauer

We have B0 = B(0). Then, since yu = xu , zu = (C(B(xu )/w) − C(B0/w))/xu + C(B0 /w)/xu . Now, we have B(xu )/w ≡ B0 /w modulo xu , since B(xu ) is a polynomial in xu with constant term B0 . Thus (C(B(xu )/w) −C(B0 /w))/xu is a polynomial in xu and therefore a Laurent polynomial. Moreover, C |w←B0 /w = mA, for some Laurent monomial m, by Proposition 10.7.1 (iii), so that C(B0 /w)/xu = mA/xu = mu, which is a Laurent monomial. Thus, zu is a Laurent polynomial. By the above congruence, we have yw ≡ B0 /w modulo xu . Thus, since u and w are invertible, gcd(xu , yw ) = gcd(A, B0 ) = 1 in L, since A, B0 are relatively prime by hypothesis. Let f (u) = C(B(u)/w). We have seen that zu = ( f (xu ) − f (0))/xu + mu. Modulo xu , we have ( f (xu ) − f (0))/xu ≡ f  (0) = C (B0 /w)B (0)/w. Thus zu ≡ C (B0 /w)B (0)/w + mu. Note that C (B0 /w)B (0)/w and m do not depend on u by Proposition 10.7.1. Now, we have gcd(A, m) = 1, since m is a Laurent monomial. Thus gcd(A,C (B0 /w)B (0)/w + mu) = 1, as follows from Lemma 10.7.5, applied to R equal to the ring of Laurent polynomials over Z in the variables different from u, using the fact that A is independent of u. Thus gcd(xu , zu ) = 1 in L. Another result that will be used in the proof of the theorem is the following. Proposition 10.7.6 Let Q0 , . . . , Qn be a sequence of quivers, with vertex set V , with a mutation from Qi to Qi+1 for each i = 0, . . . , n − 1. Add a new vertex h to Q0 and arrows from and towards h, and apply to the new quiver Q0 the same sequence of mutations. Let Q0 , . . . , Qn the new sequence of quivers. Let X (respectively Y ) the basis associated with Qn (respectively Qn ). Then for each v ∈ V , xv = yv |h→1 . Proof One has to prove first that for each i, Qi is obtained from Qi by adding the vertex h and arrows from and towards h, and nothing else. For this note that one never mutates at h. This being admitted, one proves by induction on n, the final assertion. We leave the two verifications to the reader. Proof (of Theorem 10.7.2) Let Q be some quiver in the mutation class of Q0 , with associated basis X. If the distance of Q to Q0 in the mutation graph is at most two, then the result is easy and left to the reader: note that since mutations are involutions, one has then Q = μu (Q0 ) or Q = μw μu (Q0 ) with u = w. Suppose now that the distance is at least three and let u, w respectively be the vertices at which are performed the two first mutations. We may assume that u = w. Let Q0 , Q1 , . . . , Qn = Q, n ≥ 3, be the successive quivers on a shortest path from Q0 to Q in the mutation graph.

Linearly recursive sequences and Dynkin diagrams

377

(1) We assume first that u, w are linked by an arrow in Q0 . Let A, B be the mutations polynomials respectively of Q0 at u and of Q1 at w. Then A and B0 = B | u → 0 are relatively prime, since B0 is a monomial by Proposition 10.7.1 (iii). Let X,Y be the bases attached to Q1 , Q2 respectively. Let Q3 be the quiver obtained by mutation at u from Q2 , with its basis denoted Z. Then the quivers Q0 , Q1 , Q2 , Q3 satisfy the hypothesis of Proposition 10.7.4. By induction, since the distances in the mutation graph from Q1 to Q and from Q3 to Q are shorter than the distance from Q0 to Q, we have that each element p of the basis attached to Q belongs to Z[x±1 v ] i ]. This implies, since x = v for any v =

u, that p ∈ L/x for some and also to Z[z±1 v v u natural number i. Similarly, p is in L/zuj zkw . We have zw = yw and gcd(xiu , zuj zkw ) = 1 by Proposition 10.7.4. Thus p is in L. (2) Assume now that there is no arrow in Q0 between u and w. Then we add to Q0 a new vertex h, with an arrow h → u. Apply to this new quiver Q0 the same sequence of mutations as the one applied to Q0 in order to obtain Q. Let us denote by Qi the new sequence. If we show that the basis elements of Qn are Laurent polynomials in the variables V ∪ {h}, then the basis elements of Q = Qn will be Laurent polynomials in V , since the latter are obtained by putting h = 1 in the former, by Proposition 10.7.6. Let G be the quiver obtained by mutation at u from Q2 ; thus G is obtained from  Q0 by the sequence of mutations u, w, u. Since there is no arrow between u and w in Q0 , G is also obtained by a single mutation at w from Q0 , by Proposition 10.7.1 (ii) and (i) (commutation and involution). Let X ,Y be the the bases attached to Q1 , G respectively. The distances from Q1 to Qn and from G to Qn in the mutation graph are shorter than the distance from Q0 to Q. By induction we deduce that each element p of the basis attached to Qn is a Laurent polynomial in the xv s, and also a Laurent polynomial in the yv s. Hence, since xv = v for v = u and yv = v for v = w, p is in j L/xiu and in L/yw . Note now that xu is a polynomial of degree 1 in h, with the coefficient a1 of h being a monomial in the other variables, and yw is independent of h. Hence xu and yw are relatively prime in L, by Lemma 10.7.5, applied to R equal to the ring of Laurent polynomials over Z in the variables different from h (note that a1 is therefore invertible in R). Thus p ∈ L.

10.8 Dynkin diagrams These diagrams are well known in many classification theorems, including the first one (historically): the classification of simple Lie algebras by Cartan and Killing. The Dynkin diagrams and the extended Dynkin diagrams are shown in Figure 10.11 and Figure 10.12. As noted in the introduction of this chapter, we consider only simply laced diagrams, that is, diagrams which are simple graphs. Note that the index of a Dynkin diagram is its number of nodes, whereas the number of nodes of an extended Dynkin diagram is one more than its index. Note also that an ex-

378

C. Reutenauer

tended Dynkin diagram is obtained by adding a node and one or two edges to the corresponding Dynkin diagram.

An

Dn

b

b

b Q

Q Qb   b

b

b

b

...

b

b

...

b

b

n≥1

b

E6

b

b

b b

b

b

E7

b

b

b b

b

b

b

E8

b

b

b

b

b

b

n≥4

b

Figure 10.11 The Dynkin diagrams.

They have the very simple combinatorial characterisation below, due to Vinberg (for Dynkin diagrams) (Vinberg, 1971) and Berman–Moody–Wonenburger (for extended Dynkin diagrams) (Berman et al., 1971/72). Let G be a graph without loops. An additive (respectively subadditive) function is a function from the set V of vertices of G into the set of positive real numbers such that for any vertex v, 2 f (v) is equal (respectively is greater than or equal ) to ∑{v,w}∈E f (w). Theorem 10.8.1 A simple and connected graph G is a Dynkin diagram (respectively an extended Dynkin diagram) if, and only if, it has a subbadditive function which is not additive (respectively it has an additive function). The additive functions for extended Dynkin diagram are shown in Figure 10.14; it may be shown that they are unique, up to a constant factor. Subadditive but not additive functions for Dynkin diagrams are obtained by removing a node (see Proposition 10.8.4). For the proof of the theorem, we shall follow (Happel et al., 1980) (the ‘if’ part of their proof assumes that the function is integer-valued, but it extends without change to real-valued functions). Proposition 10.8.2 Let G be a finite connected graph. Then either G is a Dynkin diagram or it contains an extended Dynkin diagram as subgraph.

Linearly recursive sequences and Dynkin diagrams b

b

b n A

`

`

b b

n D

`

`

`

b Q

Q Qb   b

`

`

b

`

`

b

`

`

`

` `

...

b

379

n≥1

b   b Q Q Qb

b

n≥4

b 6 E

b

b

b

b b

b

7 E

b

b

b b

b

b

b

b

8 E

b

b

b

b

b

b

b

b

Figure 10.12 The extended Dynkin diagrams.

We say that G is a subgraph of G if the set of vertices of G is contained in that of G, and likewise for the set of edges. Proof We show that if G does not contain any extended Dynkin diagram, then it  n implies that G is acyclic, is a Dynkin diagram. The fact that G does not contain A  4 , no vertex has hence is a tree (because it is connected). Since G does not contain D  n for n ≥ 5, at most one vertex more than 3 neighbours. Since G does not contain D in G has 3 neighbours. Thus G is of the form shown in Figure 10.13, with r ≤ s ≤ t, where these numbers denote the numbers of edges on each branch. Since G does not  6 , we must have r ≤ 1. If r = 0, then G = An . Suppose now that r = 1, contain E  7 , we must have 1 ≤ s ≤ 2. If s = 1, then thus also 1 ≤ s. Since G does not contain E  8 , we G = Dn . Suppose now that s = 2, hence also 2 ≤ t. Since G does not contain E must have 2 ≤ t ≤ 4; in these 3 cases, we have G = E6 , E7 or E8 .

380

C. Reutenauer

.. .

r

...

... s

t

Figure 10.13 The graphs G with at most one vertex having three neighbours of the proof of Proposition 10.8.2.

Proposition 10.8.3 Let G be an extended Dynkin diagram and f be a subadditive function for G. Then f is additive. Proof Let C be the Cartan matrix of G, that is, the V × V matrix with 2s on the diagonal, a −1 at entry (v, w) if {v, w} is an edge and 0 elsewhere. Let also F denote the row vector ( f (v))v∈V . Then the fact that f is subadditive means that FC ≥ 0 component-wise. Let h be an additive function for G: it exists, see Figure 10.14. Since C is symmetric, we have by additivity of h, CH = 0, where H is the row vector (h(v))v∈V . Thus we have FCH = 0. Now, the components of H are positive and those of FC are ≥ 0. Thus we must have FC = 0 and f is an additive function. Proposition 10.8.4 Suppose that G is a subgraph of G. Then the restriction to G of any subadditive function f of G is a subadditive function of G. Moreover, if G is a proper subgraph, then the restriction is not additive . Proof Let v be a vertex of G . Then 2 f (v) ≥



{v,w}∈E

f (w) ≥



{v,w}∈E 

f (w),

so that f is a subadditive function for G . Suppose now that G is a proper subgraph. Arguing inductively, we may assume that G has the same vertices as G and one edge less, or that G has one vertex less. In the first case, let {v, u} be this edge; then, since the values of f are positive, 2 f (v) > ∑{v,w}∈E  f (w) and f | G is not additive. In the second case, let u be this vertex; u is not isolated in G, since G is connected; let v be a neighbour of u in G; then v is in G , and 2 f (v) > ∑{v,w}∈E  f (w), so that f is not additive on G . Proof (of Theorem 10.8.1) If G is an extended Dynkin diagram, then G has an additive function: see Figure 10.14. If G is a Dynkin diagram, then it is a proper

Linearly recursive sequences and Dynkin diagrams 1

381

1

1 n A

n≥1



1

1

1 n D

1 2

2

2

2

n≥4

2

1

1 1 2

6 E

1

2

3

2

1

2 7 E

1

2

3

4

3

2

1

5

4

3

2

3 8 E

2

4

6

1

Figure 10.14 The additive functions of extended Dynkin diagrams.

subgraph of some extended Dynkin diagram, so that it has a subadditive function, which is not additive by Proposition 10.8.4. Conversely, suppose that G has an additive function f . If G is not a Dynkin diagram, then by Proposition 10.8.2, it contains an extended Dynkin diagram G . If G is a proper subgraph, then f | G is not additive by Proposition 10.8.4, which contradicts Proposition 10.8.3. Thus, G = G and G is an extended Dynkin diagram. Suppose now that G has a subadditive function f which is not additive. If G is not a Dynkin diagram, then by Proposition 10.8.2, it contains an extended Dynkin diagram G ; then f | G is subadditive but not additive by Proposition 10.8.4 (indeed, either G = G, or G is a proper subgraph of G); this contradicts Proposition 10.8.3.

382

C. Reutenauer

10.9 Rational frieze implies Dynkin diagram Theorem 10.9.1 Let Q be a quiver such that the underlying undirected graph is connected. The sequences on the frieze associated with Q are simultaneously bounded or unbounded. Suppose that the frieze is rational. If the sequences are all bounded (respectively all unbounded), then Q is a Dynkin diagram (respectively an extended Dynkin diagram ) with some acyclic orientation.

The theorem is proved by studying the asymptotics of the sequences; these are computed by the exponential polynomial. Then the additive or subadditive function is obtained by taking the logarithm. It appears that the recursion formula (10.5) is some multiplicative analogue of the additivity or subadditivity formula. We only prove the theorem under the stronger assumption that the frieze is Nrational. The complete result is proved in (Assem et al., 2010).

Proof (1) Formula (10.5) may be rewritten as

a(v, n + 1)a(v, n) = 1 +

∏ a(w, n) ∏ a(w, n + 1).

v→w

(10.6)

w→v

Recall that by the Laurent phenomenon, these numbers are all positive natural numbers. Hence if a(v, n) is bounded, then a(w, n) is bounded for each neighbour w of v. Since the graph is connected, if one sequence is bounded, then each sequence is bounded. (2) We now assume that all the sequences are bounded. Since they are integervalued, they take only finitely many values. Since they satisfy linear recursions, they are ultimately periodic. Let p be a common period and let n0 be such that each sequence is purely periodic for n ≥ n0 . Let b(v) = ∏n0 ≤n 1. Indeed, if a(v, n) = 1, then a(v, n + 1) > 1 by (10.6); moreover, each a(v, n) is a positive integer. We have, since a(v, n0 ) = a(v, n0 + p), b(v) = ∏n0 ≤n 0 and β + < 1 such that (i) K factorises as K = F(P), (ii) for any integer p ≥ p0 , one has F(p) ∈ [β − p, β + p]. We fix an admissible function K = F(P) of the depth. Then, when α is fixed, the index k of the truncation is fixed and equal to F(P(α )). If the position μ is fixed, this $μ % $μ % defines, for each α ∈ Ω, a unique truncation TK and a unique value XK . Unconstrained rational model. The rational model is parametrised by an integer N > 0 which is an upper bound on the denominator v, and the set of interest is the subset ΩN of Ω formed of rational numbers u/v whose denominator v is at most N, that is, u  u  ΩN = ∈ Ω; v ≤ N = ∈ Q; 1 ≤ u ≤ v ≤ N, gcd(u, v) = 1 . v v It is equipped with the uniform probability. We study the asymptotics of the mean $μ % value EN [XK ] when α is a random rational of ΩN and N → ∞. Constrained rational model. Here, the set of interest is parametrised by a pair (M, N) of integers, where N is the upper bound for the denominator v and M is the upper bound for the digits mk which appear in the continued fraction. Then, for any pair (N, M) of integers with N ≥ 1, M ≥ 2, the set of interest is u  [M] ΩN = ∈ Ω; v ≤ N; mk (u/v) ≤ M, ∀k ∈ [[1, P(u/v)]] . v It is equipped with the uniform probability. As previously, we study the asymptotics [M] $ μ % [M] of the mean value EN [XK ] when M is fixed, α is a random rational of ΩN and N → ∞.

Pseudo-randomness of a random Kronecker sequence

417

11.3.3 Rational probabilistic models for unbalanced costs Here, we expect results which involve the index k of the truncation. In the rational model, it then proves convenient to adopt a more restrictive definition for the index k, and to only consider particular admissible functions of the depth. For each real5 δ ∈]0, 1[, we consider a particular admissible function, which is called the δ -fraction of the depth and is defined as Kδ : Ω → N,

Kδ (x) = δ P(x).

This notion has been already introduced in previous works on the subject (Daireaux and Vall´ee, 2004; Lhote and Vall´ee, 2008).

11.3.4 Transition from the constrained to the unconstrained models When the constraint M becomes large, it is natural to expect that the results in the constrained case have ‘a limit’ which coincides with the result in the unconstrained case. It is also natural to study the constrained problem in the rational model where the digit-bound M = M(N) is a function of the denominator-bound N of rationals. Since the equality M ≤ N holds, we recover the unconstrained case when M(N) = N. [M(N)] has been precisely studied in the paper (Cesaratto and Vall´ee, 2011) The set ΩN for a general function M = M(N).

11.4 Statements of the main results All along our analysis, the transfer operator Hs of the Euclidean dynamical system plays a fundamental rˆole. This operator depends on a complex parameter s and can be viewed as an extension of the Riemann zeta function s → ζ (2s). It acts on the functional space C 1 (I ), and transforms a function g ∈ C 1 (I ) into a function   1 1 . (11.7) Hs [g](x) := ∑ g 2s m+x m≥1 (m + x) This operator is precisely defined in Section 11.5.2. Some of its extensions are introduced in Section 11.5.5, and Section 11.8 summarises the main properties that are useful here. This operator Hs is used here as a generating operator for the characteristics of the continued fraction expansion, namely, the digits mk , and the continuants qk , θk . In the constrained framework, the constrained transfer operator HM,s , with   1 1 , (11.8) HM,s [g](x) := ∑ · g 2s m+x m≤M (m + x) replaces the operator Hs . 5




These transfer operators (constrained or unconstrained) play a crucial rôle in all the proofs. However, they do not explicitly appear in the statements of the first three theorems, with three exceptions: the factor 1/log 2 (equal to the value of the Gauss density at zero), the subdominant factor ρ (related to the subdominant spectral radius of the operator H_1), and finally the exponent γ (which is equal to the width of a vertical strip around ℜs = 1 where the operator (I − H_s)^{−1} has polynomial growth for large |ℑs|). The situation differs in Theorem 11.4.4, where the dominant eigenvalue λ_M(s) of the operator H_{M,s} plays an explicit rôle, for (possibly infinite) integers M.

11.4.1 Balanced parameters for a boundary truncation

Our first result deals with balanced parameters and truncations at a boundary position, namely μ = 0 (case −) or μ = 1 (case +). This is the simplest case: here, the mean values of the three parameters (covered space, discrepancy and Arnold measure) share the same behaviour in the four probabilistic models (real versus rational, unconstrained versus constrained): they admit a finite limit. The limit is the same in the real and the rational frameworks, and only depends on the constraint M. Moreover, there is a clear transition to the unconstrained model. Finally, there are explicit values for the limits in the unconstrained model.

Theorem 11.4.1 (Covered space, discrepancy and Arnold measure for boundary truncations T)   Consider a two-distance truncation T at a boundary position μ = 0 (case −) or μ = 1 (case +). In the rational model, consider an admissible function K of the depth. Then, the following holds.
(a) There exist ρ < 1, γ > 0, and, for each constraint M (possibly infinite), and each parameter X ∈ {V, Δ, A}, there exist two constants x_M^±, for which the associated sequences X_k^± satisfy
    [Real model]       for k → ∞,   E^[M][X_k^±] = x_M^± + O(ρ^k),
    [Rational model]   for N → ∞,   E_N^[M][X_K^±] = x_M^± + O(N^{−γ}).

The hidden constants are uniform with respect to M.
(b) For M = ∞, the numerical values of the limits x_∞^± are displayed in Figure 11.6.
(c) When M → ∞, there is a transition to the unconstrained model, and
    x_M^± = x_∞^± + O(1/M).
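The limits x_∞^± of part (b) have the closed forms displayed in Figure 11.6; as a quick numerical illustration (a short Python sketch added for the reader, not part of the original text), they can be evaluated directly:

    import math

    log2 = math.log(2)
    limits = {
        ("V", "-"): 1/2 + 1/(4*log2), ("V", "+"): 1/2,
        ("Delta", "-"): 1 + 1/(2*log2), ("Delta", "+"): 1 + 1/(4*log2),
        ("A", "-"): 2/3 + 1/(3*log2), ("A", "+"): 2/3 + 1/(4*log2),
    }
    for (param, sign), value in limits.items():
        print(f"x_inf^{sign} for {param}: {value:.3f}")   # 0.861, 0.500, 1.721, ...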

11.4.2 Covered space for a generic truncation

Our second result deals with the covered space in the case when the truncation is 'generic', and defined by its position μ ∈ ]0, 1[. The situation is similar to what happens when the truncation is at a boundary position (Theorem 11.4.1). For any constraint M (possibly infinite), there is a finite limit for the mean values in the rational

           Covered space V               Discrepancy Δ                 Arnold measure A
x_∞^−      1/2 + 1/(4 log 2) ∼ 0.861     1 + 1/(2 log 2) ∼ 1.721       2/3 + 1/(3 log 2) ∼ 1.147
x_∞^+      1/2                           1 + 1/(4 log 2) ∼ 1.360       2/3 + 1/(4 log 2) ∼ 1.027

Figure 11.6 Values of the limits s_∞^±, d_∞^±, a_∞^± of the mean values of V, Δ, A in the unconstrained models.

model, equal to the limit of the real model. And the limits s_M(μ) of the constrained models tend to the limit s_∞(μ) of the unconstrained model.

Theorem 11.4.2 (Covered space for a generic truncation T)   Consider, for μ ∈ ]0, 1[, a two-distance truncation at position μ. In the rational model, consider an admissible function K of the depth. Then, the following holds for the sequence V_k⟨μ⟩.
(a) There exist ρ < 1, γ > 0, and, for each constraint M (possibly infinite), and each μ ∈ ]0, 1[, there exists a constant s_M(μ), for which one has
    [Real model]       for k → ∞,   E^[M][V_k⟨μ⟩] = s_M(μ) + O(ρ^k),
    [Rational model]   for N → ∞,   E_N^[M][V_K⟨μ⟩] = s_M(μ) + O(N^{−γ}).

The hidden constants are uniform with respect to M and μ.
(b) When μ → 0^+ or μ → 1^−, there is a transition to the boundary cases:
    s_M(μ) → s_M^− as μ → 0^+,   s_M(μ) → s_M^+ as μ → 1^−.

(c) When M → ∞, there is a transition to the unconstrained models,
    s_M(μ) = s_∞(μ) + O(1/M).
The hidden constant is uniform with respect to μ.

11.4.3 Discrepancy and Arnold measure for a generic truncation

The situation deeply differs from the two previous results (Theorem 11.4.1 and Theorem 11.4.2). Previously, the parameter had a similar behaviour in the four probabilistic models, and its mean values admitted finite limits. The behaviour of the discrepancy and Arnold measure is now completely different from what happens when the truncation is at a boundary position (Theorem 11.4.1). Their mean values are infinite in the unconstrained real model. Their mean values in the three other models are all of logarithmic order, with respect to N in the rational unconstrained model, or with respect to M in the two constrained models. The constant which appears in the dominant logarithmic term is of the same type for the discrepancy and the Arnold measure: it depends on the position μ as a simple



polynomial which equals 0 at μ = 0 and μ = 1. It is of degree two and equals μ(1 − μ) for the discrepancy; it is of degree three and equals μ(1 − μ)² for the Arnold measure. This entails an interesting result: in the three probabilistic models where the mean values are finite, the asymptotic mean value of the discrepancy is maximal at a position μ = 1/2 whereas the asymptotic mean value of the Arnold measure is maximal at a position μ = 1/3.

Theorem 11.4.3 (Discrepancy and Arnold measure for a generic truncation T)   Consider, for μ ∈ ]0, 1[, a two-distance truncation at position μ. In the rational model, consider an admissible function K of the depth. Then, there exist ρ < 1, γ > 0, θ > 0 such that, for any μ ∈ ]0, 1[, for each parameter X ∈ {Δ, A}, the following holds for the sequence X_k⟨μ⟩.
(a) [Unconstrained real model] The mean values E[X_k⟨μ⟩] are infinite.
(b) [Unconstrained rational model] There exist constants a(μ), d(μ) for which
    E_N[Δ_K⟨μ⟩] = μ(1 − μ) log_2 N + d(μ) + O(N^{−γ}),
    E_N[A_K⟨μ⟩] = μ(1 − μ)² log_2 N + a(μ) + O(N^{−γ}).
The hidden constants are uniform with respect to μ.
(c) [Constrained models] For any finite constraint M, there exist constants x_M(μ) for which
    [Real model]       for k → ∞,   E^[M][X_k⟨μ⟩] = x_M(μ) + O(ρ^k),
    [Rational model]   for N → ∞,   E_N^[M][X_K⟨μ⟩] = x_M(μ) + O(N^{−θ/log M}).
The hidden constants are uniform with respect to M.
(d) [Transition to the unconstrained model] There exist constants d̃(μ), ã(μ) for which one has, when M → ∞,
    d_M(μ) − μ(1 − μ) log_2 M → d̃(μ),   a_M(μ) − μ(1 − μ)² log_2 M → ã(μ),

with a speed of convergence of order O(log_2 M / M).
In the constrained rational model, it is possible to choose the digit bound M as a function of the denominator bound N. There are two interesting choices.
Choice M = N. We recover the main behaviour of the parameters in the unconstrained rational model, with a remainder term of order O(1). This is why the constants d(μ) and d̃(μ) (or a(μ) and ã(μ)) are not (a priori) equal for μ ∈ ]0, 1[.
Choice M = Θ(log N). Indeed, in the paper (Cesaratto and Vallée, 2011), it is proven that there is a threshold phenomenon on the cardinality of the subset Ω_N^[M], depending on the relative order of the digit-bound M with respect to the length n := log N of the denominator. In particular, for M = an, one has
    |Ω_N^[an]| / |Ω_N| → exp(−12/(a π²))   for N → ∞.



On this subset Ω_N^[an], the mean value of the discrepancy or the Arnold measure at a generic position is of order Θ(log log N).

11.4.4 Distances and products q_k^b θ_k^c

We now wish to study the two distances, the small distance Γ̂⟨T⟩ and the large distance Γ̃⟨T⟩, in the four probabilistic models. The expression of distances mainly involves the random variables θ_k; more precisely, for any truncation T of index k (in the three-distance framework, this result holds for each of the three distances), the distances Γ⟨T⟩ belong to the interval [θ_{k+1}, θ_k].
A more general study. It may be of independent interest to perform a probabilistic study of products of the general form q_k^b θ_k^c, related to any pair (b, c) of real numbers. We first remark that the case b = c leads to an (ordinary) balanced cost, and this type of cost was already studied in the previous section. When the imbalance υ := c − b is close to zero, see (Lhote and Vallée, 2008; Vallée, 1997) for the study of the variable q_k^b θ_k^c. Note that this was a first step to prove that the variables log q_k or log θ_k asymptotically follow a Gaussian law, both in the real and rational models. We are here mainly interested in other situations, with two main motivations. First, the study of distances introduces the pair (b = 0, c = 1). Second, the approximation of a number α by its kth convergent p_k/q_k leads to the pair (b = −1, c = 1). In the real case, the mean value is of exponential type, with a ratio which involves the value λ(2) ∼ 0.1994 of the dominant eigenvalue of the transfer operator. This last value λ(2) (discovered in 1994; Flajolet called it the Vallée constant) plays a central rôle in the analysis of the Gauss algorithm (Daudé et al., 1997), and its occurrence in this approximation context was noticed for the first time in (Flajolet and Vallée, 1998).
Products q_k^b θ_k^c. Theorem 11.4.4 below proves that the probabilistic behaviour of this product mainly depends on the imbalance, equal here to υ := c − b. It will be directly applied to the study of distances, where υ equals 1. Here, the spectral objects of the transfer operator H_{M,s} explicitly appear in the statements of the result, via the dominant eigenvalue λ_M(s), the Hausdorff dimension σ_M, solution of the equation λ_M(s) = 1, or a 'perturbation' σ_M(υ, δ) of σ_M. In the real models, the mean value E^[M][q_k^b θ_k^c] has an exponential behaviour with respect to the index k (decreasing or increasing, according to the sign of υ), and the exponential rate depends on the constraint M and the imbalance υ: it is equal to λ_M(σ_M + υ/2). In the rational models, we consider truncations whose index is defined by the δ-fraction of the depth, already described in Section 11.3.3. In this case, the mean value E_N^[M][q_k^b θ_k^c] has a polynomial behaviour with respect to the denominator N (decreasing or increasing, according to the sign of υ), and the exponent of N depends on the imbalance υ, the constraint M and the fraction δ: it is equal to the difference 2(σ_M(υ, δ) − σ_M), where the function σ_M(υ, δ) is defined in (11.9).




Theorem 11.4.4 (Parameters q_k^b θ_k^c)   Consider an integer M ≥ 2, possibly infinite, and denote by λ_M(s) the dominant eigenvalue of the (possibly) constrained transfer operator H_{M,s} defined in equation (11.8). Denote by σ_M the Hausdorff dimension of the set I^[M], and by t_M the real in R̄ defined by t_M = −∞ for M < ∞ and t_M = −1 for M = ∞. The following holds and strongly involves the imbalance υ := c − b.
[Real model] (i) If υ ≤ t_M, the mean value E^[M][q_k^b θ_k^c] is infinite for any integer k.
(ii) If υ > t_M, the mean value E^[M][q_k^b θ_k^c] is finite, and satisfies
    E^[M][q_k^b θ_k^c] = D_M(b, c) λ_M^k(σ_M + υ/2) (1 + O(ρ(b, c)^k))   [k → ∞],
for some positive constants D_M(b, c) > 0, and ρ(b, c) < 1. According to the sign of υ, the following holds for the mean value E^[M][q_k^b θ_k^c] when k → ∞. (a) For υ > 0, the mean value tends to 0. (b) For t_M < υ < 0, the mean value tends to +∞. (c) For υ = 0, the mean value tends to the constant D_M(b, b).
[Rational model] For any δ ∈ [0, 1], and any integer M ∈ [[2, ∞]], there is a unique solution, denoted by σ_M(υ, δ), of the equation
    λ_M^{1−δ}(σ) · λ_M^δ(σ + υ/2) = 1,   (11.9)
with σ_M(0, δ) = σ_M,   σ_M(υ, δ) < σ_M (for υ > 0),   σ_M(υ, δ) > σ_M (for υ < 0).

(i) For any triple (δ, b, c), with a rational δ ∈ ]0, 1[, the mean value of the product q_k^b θ_k^c on Ω_N^[M], when the index k = δP is the δ-fraction of the depth P, satisfies
    E_N^[M][q_k^b θ_k^c] = D_M(δ, b, c) N^{2(σ_M(υ,δ) − σ_M)} (1 + ε_M(N)),
for some positive constant D_M(δ, b, c) and ε_M(N) = o(1).
(ii) According to the sign of υ, the following holds for the mean value E_N^[M][q_k^b θ_k^c], when N → ∞. (a) For υ > 0, the mean value tends to 0. (b) For υ < 0, the mean value tends to +∞. (c) For υ = 0, and any δ ∈ ]0, 1[, the mean value tends to D_M(b, b), the constant of the real case.
(iii) When M is large and υ is near 0, the error term ε_M(N) is of order O(N^{−γ}) for some positive γ, the quantity γ and the hidden constants in the O-term being uniform with respect to M, b and c.
Application to distances. As we already said, the study of distances is obtained with the choice υ := c − b = 1. Then, for any two-distance truncation T whose index k tends to ∞, the mean value E^[M][Γ⟨T⟩] is exponentially decreasing, with a rate λ_M(σ_M + 1/2).



When M = ∞, one has σ_∞ = 1, and the exponential rate is λ(3/2) ∼ 0.3964. In the rational models, for any two-distance truncation whose index is the δ-fraction of the depth, with δ ∈ Q ∩ ]0, 1[, the mean value E_N^[M][Γ⟨T⟩] is polynomially decreasing with respect to the denominator N, and the exponent of N equals 2(σ_M(δ) − σ_M), where σ_M(δ) is the solution of the equation
    λ_M^{1−δ}(σ) · λ_M^δ(σ + 1/2) = 1.

11.5 Dynamical analysis

We now describe in an informal way the dynamical analysis methodology which mixes analysis of algorithms and the theory of dynamical systems. The random variables of interest are generated by transfer operators (plain or extended) of the Euclidean dynamical system, and their asymptotic probabilistic behaviour is dictated by the dominant spectral objects of these transfer operators. The analysis in the real model only requires a precise knowledge of the operators when their complex parameter s is close to the real axis, whereas the study in the rational model also requires their precise behaviour when the parameter s is far from the real axis.

11.5.1 Continued fraction expansion

We recall that the Gauss map S : I → I is defined by
    S(x) = 1/x − ⌊1/x⌋ = {1/x}   for x ≠ 0,   S(0) = 0,
where ⌊·⌋ denotes the integer part, and {·} denotes the fractional part. The pair (I, S) defines a dynamical system, which is called in the sequel the Euclidean dynamical system. The restriction of S to the interval I_m := [1/(m + 1), 1/m] is the mapping S_[m] : I_m → I defined by S_[m](x) = (1/x) − m, whose inverse mapping h_[m] : I → I_m is defined by h_[m](x) = 1/(m + x). The trajectory (x, S(x), S²(x), . . . , S^k(x), . . .) of the real x reaches 0 if, and only if, x is rational. For a rational x, the first index k for which S^k(x) = 0 is called the depth P(x) of x. The sequence of the digits is defined as
    (m_1(x), m_2(x), . . . , m_k(x), . . .)   where m(x) := ⌊1/x⌋ and m_{k+1}(x) = m(S^k(x)),
and x admits a continued fraction expansion (CFE) of the form
    x = 1/(m_1 + 1/(m_2 + 1/(⋯ + 1/(m_k + ⋯)))) = [m_1, m_2, . . . , m_k, . . .].



The continued fraction expansion is finite if, and only if, x is rational. In any case (rational or irrational), a truncation of the continued fraction expansion at depth k ≤ P(x) produces two continued fraction expansions: the beginning part [m_1, m_2, . . . , m_k] and the ending part [m_{k+1}, m_{k+2}, . . . , m_{k+ℓ}, . . .]. The beginning part defines the linear fractional transformation (LFT), that is,
    g_k := h_[m_1] ∘ h_[m_2] ∘ ⋯ ∘ h_[m_k],   with   g_k(y) = (p_{k−1} y + p_k)/(q_{k−1} y + q_k),
together with the rational p_k/q_k = g_k(0), which is often called the kth approximant of x. The ending part defines the real x_k := S^k(x) = [m_{k+1}, m_{k+2}, . . .] via the equality
    x = g_k(x_k),   or   x_k = θ_{k+1}(x)/θ_k(x)   with   θ_k(x) := |q_{k−1} x − p_{k−1}|.

The beginning continuant q_k and the distance θ_k (also called here the ending continuant) are expressed with the derivative of g_k, i.e.,
    1/q_k² = |g_k′(0)|,   θ_k² = |g_k′(x_k)|.   (11.10)
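One way to see these identities (a short justification added here; it is the standard determinant argument): the LFT g_k is given by an integer matrix of determinant ±1, so that |g_k′(y)| = (q_{k−1} y + q_k)^{−2}. Taking y = 0 gives |g_k′(0)| = 1/q_k², while the relation x = g_k(x_k) yields q_{k−1} x − p_{k−1} = ±(q_{k−1} x_k + q_k)^{−1}, whence θ_k = (q_{k−1} x_k + q_k)^{−1} and |g_k′(x_k)| = θ_k².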

When x is rational, of the form x = u/v with a pair (u, v) of coprime integers, the sequence θ_k(x) is closely related to the sequence v_k of remainders that occurs in the execution of the Euclid algorithm on the pair (u, v), and the equality θ_k(u/v) = v_k/v holds. This is why the remainder v_k is also called the ending continuant. The decomposition
    h = g_k ∘ ℓ_k   with   g_k = h_[m_1] ∘ h_[m_2] ∘ ⋯ ∘ h_[m_k],   ℓ_k = h_[m_{k+1}] ∘ ⋯ ∘ h_[m_p]

entails the following expressions for the beginning and ending continuants, as derivatives of LFTs:
    1/v² = |h′(0)| = |g_k′(ℓ_k(0))| · |ℓ_k′(0)|,   1/q_k² = |g_k′(0)|,   1/v_k² = |ℓ_k′(0)|.   (11.11)
If we let q_{−1} = 0, q_0 = 1, θ_0 = 1, θ_1 = x, the two sequences (q_k), (θ_k) satisfy the recursion formulae, for k ∈ [[0, P(x) − 1]],
    q_{k+1} = m_{k+1} q_k + q_{k−1},   θ_{k+2} = θ_k − m_{k+1} θ_{k+1}.   (11.12)
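For a rational x = u/v, the recursions (11.12) can be run alongside the Euclid algorithm. The following short Python sketch (added for the reader, not part of the original text; the helper name is ours) produces the digits m_k, the continuants q_k and the distances θ_k together, and checks the identity θ_k(u/v) = v_k/v recalled above:

    from fractions import Fraction

    def cf_data(u, v):
        # digits m_k and Euclid remainders v_k of x = u/v (v_0 = v, v_1 = u)
        digits, remainders = [], [v, u]
        a, b = u, v
        while a:
            digits.append(b // a)
            b, a = a, b % a
            remainders.append(a)
        # recursions (11.12): q_{k+1} = m_{k+1} q_k + q_{k-1},
        #                     theta_{k+2} = theta_k - m_{k+1} theta_{k+1}
        q, theta = [0, 1], [Fraction(1), Fraction(u, v)]
        for m in digits:
            q.append(m * q[-1] + q[-2])
            theta.append(theta[-2] - m * theta[-1])
        return digits, q, theta, remainders

    digits, q, theta, rem = cf_data(13, 30)      # 13/30 = [2, 3, 4], depth P = 3
    print(digits, q[-1])                          # q_P equals the denominator 30
    print(all(Fraction(r, 30) == t for r, t in zip(rem, theta)))   # theta_k = v_k / v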

11.5.2 The dynamical system and the plain transfer operator

As the pair (I, S) defines a dynamical system, we strongly use the main tools of the theory of dynamical systems. The transfer operator, together with some of its extensions, plays a central rôle in our study. It was introduced in the 1970s in Ruelle (1978) as a main tool for studying statistical properties of trajectories of dynamical systems. In this section we give its definition and introduce some of its variants. For a detailed account of the theory of transfer operators we refer the reader to Baladi (2000). Section 11.8.2 describes the spectral properties of these operators that are useful in this chapter.



The main issue in dynamical systems is the study of the interplay between properties of the transformation S and properties of trajectories under iteration of the transformation. The behaviour of typical trajectories of dynamical systems is more easily explained by examining the flow of densities. We consider that the interval I is endowed with some density g = g_0. The time evolution governed by the map S modifies the density, and the successive densities g_0, g_1, g_2, . . . , g_n, . . . describe the global evolution of the system at time t = 0, 1, 2, . . .. For each inverse branch h ∈ H, with
    H = { h_[m] : x ↦ 1/(m + x) ; m ≥ 1 },
the component operator H_[h], defined as H_[h][g](x) = |h′(x)| · g ∘ h(x), expresses the part of the new density which is brought when one uses the branch h. Then, the operator
    H := ∑_{h∈H} H_[h]   (11.13)

is the density transformer (or the Perron–Frobenius operator) which expresses the new density g_1 as a function of the old density g_0 via the relation g_1 = H[g_0]. It proves convenient to add a (complex) parameter s, as Ruelle proposed it (Ruelle, 1978), when he introduced the transfer operator H_s defined as
    H_s := ∑_{h∈H} H_{s,[h]},   with   H_{s,[h]}[g](x) = |h′(x)|^s · g ∘ h(x).

As the equality H_1 = H holds, the operator H_s provides an extension of the density transformer and it admits the general form
    H_s[g](x) := ∑_{h∈H} |h′(x)|^s · g ∘ h(x) = ∑_{m≥1} 1/(m + x)^{2s} · g(1/(m + x)).
In the constrained case, the Cantor set I^[M] is invariant by all the LFTs h_[m] of the set H_M := { h_[m] ; m ≤ M }, and the constrained transfer operator is defined as
    H_{M,s}[g](x) := ∑_{h∈H_M} |h′(x)|^s · g ∘ h(x) = ∑_{m≤M} 1/(m + x)^{2s} · g(1/(m + x)).   (11.14)

For M = ∞, this is the ‘unconstrained’ transfer operator Hs , and the index M is omitted.
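For intuition, here is a small numerical sketch (Python, added for the reader and not part of the original text; the truncation bound is an arbitrary choice of ours): it evaluates H_{M,s}[g](x) directly from the series in (11.14) and checks that the Gauss density 1/((1 + x) log 2), recalled in Section 11.8.3, is numerically left invariant by H_1 when M is large.

    import math

    def H(g, x, s=1.0, M=10**6):
        # truncated transfer operator H_{M,s}[g](x) = sum_{m<=M} g(1/(m+x)) / (m+x)^(2s)
        return sum(g(1.0 / (m + x)) / (m + x) ** (2 * s) for m in range(1, M + 1))

    gauss = lambda x: 1.0 / ((1.0 + x) * math.log(2))   # Gauss density, fixed by H_1

    for x in (0.0, 0.3, 0.7, 1.0):
        print(f"{x:.1f}  {gauss(x):.6f}  {H(gauss, x):.6f}")   # columns agree up to the O(1/M) tail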

11.5.3 The transfer operator viewed as a generating operator

The set H_M^k is defined as the kth power of the initial set H_M, that is, H_M^k = { h = h_1 ∘ ⋯ ∘ h_k ; h_i ∈ H_M, i ∈ [[1, k]] }.



When M = ∞, this is the set of the inverse branches of S^k. For any M ≤ ∞, and due to the multiplicative properties of derivatives, the kth iterate of the operator H_{M,s} has exactly the same expression as the operator itself in equation (11.14), except that the sum is now taken over H_M^k:
    H_{M,s}^k[g](x) := ∑_{h∈H_M^k} |h′(x)|^s · g ∘ h(x).

In a very general sense, the kth iterate of the transfer operator describes the data after k iterations. The evolution of the data during all the possible finite executions of the process involves the semigroup generated by H_M. This is why, for rational studies, we are led to work with the quasi-inverse (I − H_{M,s})^{−1} of the transfer operator, which will play a central rôle, as we will see in Section 11.5.7.
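To make the link explicit (a standard identity, added here for convenience and consistent with equation (11.17) below): when the spectral radius of H_{M,s} is smaller than 1, the quasi-inverse expands as the Neumann series
    (I − H_{M,s})^{−1} = I + H_{M,s} + H_{M,s}² + ⋯ = ∑_{k≥0} H_{M,s}^k,
and the term of index k accounts precisely for the executions of depth k, so that the quasi-inverse gathers all finite executions at once.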

11.5.4 Principles for our analysis

Any parameter of interest is written as a sum of costs of the form
    R_k := τ(m_{k+1}) · q_{k−1}^a q_k^b θ_k^c θ_{k+1}^d.

The continuants q_k, the distances θ_k, and also the digits m_{k+1}, are all defined as denominators of LFTs, as appears in equation (11.10). And, for LFTs, there is a close connection between denominators and derivatives. This explains why the (usual) transfer operator is able to generate them in a separate way. However, the cost R_k, interpreted as a random variable, involves products of the variables m_{k+1}, q_j, θ_ℓ, and these variables are not independent. It is then necessary to (slightly) extend the (usual) transfer operator in order to generate such products. Our study is then based on two main facts.
Fact 1. There exist (extended) transfer operators for generating such products.
Fact 2. There exists a close relation between the transfer operators which are used for the study of such a cost in the real model and those used for the study of the same cost in the rational model.

11.5.5 Extended transfer operators

We first make precise Fact 1. For unconstrained models, we use the various transfer operators whose component operators are described in Figure 11.7. The extended operator H_{(s,t)} was already introduced and used in many works of dynamical analysis (see for instance (Vallée, 1998)), and it is well adapted for generating q_k and θ_k together. Here, we also deal with the digit-cost τ(m_{k+1}), and we introduce the weighted operator H_s^{(τ)}, already used in (Lhote and Vallée, 2008). For constrained models, the generating operators have the same component operators as in the unconstrained case, but the sum runs over the set H_M := { h_[m] ; m ≤ M }. To stress the dependence on M, we write
    H_{M,s},   H_{M,(s,t)},   H_{M,s}^{(τ)}

Name           First definition of the component operator        Second definition of the component operator
H_s            |h′(x)|^s · g ∘ h(x)                              1/(m + x)^{2s} · g(1/(m + x))
H_(s,t)        |h′(x)|^s |h′(y)|^t · G(h(x), h(y))               1/((m + x)^{2s} (m + y)^{2t}) · G(1/(m + x), 1/(m + y))
H_s^(τ)        |h′(x)|^s τ(1/h(0)) · g ∘ h(x)                    τ(m)/(m + x)^{2s} · g(1/(m + x))

Figure 11.7 Definition of operators via their component operators. Remark that g denotes a function of one variable, and G a function of two variables. The second column is relative to the LFT h, while the third one is relative to h = h_[m].

for the various transfer operators which are written as the sum over H_M of component operators that are defined in Figure 11.7. The main analytic properties of these operators (when they act on functions of class C¹) are described, e.g., in (Vallée, 2006). The properties which are useful here are summarised in Section 11.8. We now describe, in Sections 11.5.6 and 11.5.7, the main steps of dynamical analysis in each model (real or rational).

11.5.6 Dynamical analysis in the real model

The mean value of a cost R_k on the set I^[M] is equal to
    E^[M][R_k] = ∫_{I^[M]} R_k(x) dν_M(x),

where ν_M is the Hausdorff measure of I^[M] defined in Section 11.3.1. To estimate the asymptotic behaviour of this mean value, we proceed with three main steps.
Step 1. We look for alternative forms of the cost R_k which involve the transfer operators of the underlying dynamical system.
Step 2. Using dominant spectral properties of transfer operators (see Section 11.8.2) near the value σ_M (equal to the Hausdorff dimension of I^[M]) leads to asymptotic estimates for the mean value E^[M][R_k] in terms of dominant spectral objects.
Step 3. The speed of convergence is related to subdominant spectral properties (see Section 11.8.2) and we obtain the remainder estimates (exponential with respect to k) of our four theorems in the real models.

11.5.7 Dynamical analysis in the rational case

We wish to evaluate the mean value of a cost R_k(u/v) which depends on parameters of the continued fraction expansion of the rational u/v at depth k. When the index k depends on the depth P(u/v) via an admissible function K, this random variable



depends only on u/v, and we denote it by R(u/v), or simply by R. We shall consider the following two Dirichlet series for the rational model, respectively related to this cost R or to the unitary cost R = 1:
    S_R^[M](s) := ∑_{(u/v) ∈ Ω^[M]} R(u/v)/v^{2s} = ∑_{n≥1} a_n/n^{2s},   S_1^[M](s) := ∑_{(u/v) ∈ Ω^[M]} 1/v^{2s} = ∑_{n≥1} b_n/n^{2s}.   (11.15)
As the coefficients a_n and b_n are respectively equal to
    a_n := ∑_{(u/n) ∈ Ω^[M]} R(u/n),   b_n := ∑_{(u/n) ∈ Ω^[M]} 1,

the expectation EN [R] involves partial sums of an , bn under the form [M]

EN [R] =

[M]

ΦR (N) [M] Φ1 (N)

,

with

[M]

ΦR (N) :=

∑ an ,

[M]

Φ1 (N) :=

n≤N

∑ bn .

(11.16)

n≤N

We then proceed with three main steps, which define the general method of the dynamical analysis method described for instance in (Vall´ee, 2006). [M]

Step 1. We first describe an alternative form for the generating function S1 (s) of the set Ω[M] : with the first relation given in equation (11.11), one gets [M]

S1 (s) = (I − HM,s )−1 [1](0).

(11.17) [M]

We also look for an alternative form of the Dirichlet series SR (s), which involves the (various) transfer operators of the underlying dynamical system. Step 2. Using dominant spectral properties of transfer operators (see Section 11.8.2), [M] we isolate a ‘dominant’ part in the series SR (s), and thus a dominant pole, located at s = πM . For the cost R = 1, with equation (11.17), πM equals the Hausdorff dimension σM of I [M] , and this is also the case for any balanced cost, whereas for unbalanced costs, with imbalance υ , πM equals σM (υ , δ ) (see Theorem 11.4.4). Now, there is a [M] close relation between the residue of SR (s) at s = πM and the asymptotic behaviour of its coefficients, given by the estimate   [M] SR (s) 2s [M] N ; s = πM ΦR (N) ∼ Res 2s [M]

which involves the sum ΦR (N) defined in equation (11.16). Then, the denominator in equation (11.16) is of order Θ(N 2σM ), whereas the estimate for the numerator differs for the balanced and unbalanced costs. (a) For balanced costs, according to the order of the pole πM = σM (simple or double), the numerator is of order Θ(N 2σM ) or Θ(N 2σM log N) and the expectation is of constant order or of logarithmic order. (b) For unbalanced costs, the pole πM is always simple, and the numerator is of order Θ(N 2σM (υ ,δ ) ) and the expectation is of order Θ(N 2(σM (υ ,δ )−σM ) ).

Pseudo-randomness of a random Kronecker sequence

429

Step 3. The previous holds as soon as we have some extra knowledge about the [M] Dirichlet series SR (s) when s is close to the vertical line ℜs = πM , with large |ℑs|. There are now two cases for the remainder terms (see Section 11.8.4). (a) For balanced costs, when M becomes large, the dominant singularity πM = σM is close to 1, and, with results a` la Dolgopyat (Dolgopyat, 1998) which are only [M] proven to hold when ℜs is close to 1, we show that the Dirichlet series SR (s) is of polynomial growth for |ℑs| → ∞ on a convenient vertical strip near ℜs = πM . We then use a refined version of the classical Landau theorem (see, e.g., (Cesaratto and Vall´ee, 2014)), and this provides precise remainder terms of order N −γ where γ is related to the width of this vertical strip. (b) For unbalanced costs, the dominant singularity πM is no longer close to 1. How[M] ever, the Dirichlet series SR (s) is analytic on the vertical line ℜs = πM (except at s = πM ). We then use a Tauberian theorem, in the same vein as in (Flajolet and Vall´ee, 1998) or (Vall´ee, 1998), which does not provide any explicit remainder term.

11.5.8 Generating operators for a cost We expect similar results in the two probabilistic models (real model and rational model). Such a similarity is due to Fact 2, stated in Section 11.5.4, that we make now more precise. Definition 11.5.1 Consider a cost R that is studied in the real model via a sequence of costs (Rk ) or in the rational model via an admissible function of the depth p → [M] [k] F(p) and the Dirichlet generating function SR (s). The sequence of functions RM,s forms a basic sequence for the cost R if one has: in the real case

E[M] [Rk ] =

in the rational case

SR (s) =

[M]



[k]

I [M]

RM,σM (u) dνM (u)

∑ HM,s

p−F(p)−1



[F(p)]

RM,s



for any k ∈ N

(0).

p≥1

The existence of a basic sequence is the beginning part of our analysis. We exhibit such a basic sequence in Lemma 11.6.1, for balanced costs, and in Lemma 11.7.1, in the unbalanced framework.

11.5.9 Plan of the next two sections The next two sections are devoted to outline the proofs of our four theorems. Section 11.6 deals with the three balanced parameters, i.e., covered space, discrepancy and Arnold measure. The balance f equals 1 for discrepancy and covered space and equals 2 for Arnold measure. Section 11.7 is devoted to the study of parameter qbk θkc . This is an unbalanced cost except in the particular case when b = c. The distances,



whose expression is recalled in Figure 11.5, are closely related to the sequence (θk ) and define unbalanced costs, of imbalance equal to 1. In each section, we first describe the basic sequence, then, we follow the general scheme that is described in Sections 11.5.6 (for the real models) and 11.5.7 (for the rational models). In this short version, we mainly focus on Steps 1 and 2 of the general scheme. The details for Step 3 are provided in the paper (Cesaratto and Vall´ee, 2014).

11.6 Balanced costs First, we provide in Section 11.6.1 the expression of the basic sequence which generates an elementary cost. It involves ‘beginning’ operators (only in the rational model), ‘ending’ operators and a ‘middle’ operator (in both models). Then, as the beginning and ending operators intervene with large powers, Section 11.6.2 first uses their dominant spectral properties. Then, Section 11.6.3 focuses on the middle operator, and leads to a further classification of costs into two subclasses (extremal costs, or ordinary costs) which will be of crucial importance in the sequel, and explains the difference between the first two theorems and the third one.

11.6.1 Expression of the basic sequence We first show the existence of a basic sequence for each balanced cost, that is,     qk−1 a θk+1 d Rk = τ (mk+1 ) (qk θk ) f . (11.18) qk θk First, we observe that the four variables qk−1 , qk , θk , θk+1 are related via the Bezout identity which expresses qk−1 as a function of qk , θk , θk+1 , namely qk−1 = (1 − θk qk )/θk+1 . We then obtained an alternative expression for the balanced cost described in (11.18) as     a    θk+1 d−a 1 f , (11.19) · τ (mk+1 ) Rk = (qk θk ) −1 qk θk θk which decomposes into two factors, the first one which is a function of the balanced product qk θk , and the second one which highly depends on the digit mk+1 directly, and via the ratio θk+1 /θk which is of order Θ(1/mk+1 ) (see Section 11.5.1). The following result is mainly based on a refinement of the expressions provided in (11.10) and (11.11). Lemma 11.6.1 Consider a balanced cost defined with a balance f and a triple (τ , a, d), and written as in equation (11.18). The following holds. (τ )

(i) This cost admits a basic sequence which involves the weighted operator HM,(s,t)

Pseudo-randomness of a random Kronecker sequence


defined in Figure 11.7 under the form

  [k] (τ ) [k] RM,s := HM,(s+(d−a)/2) LM,(s,a, f ) ,

with

[k] LM,(s,a, f ) (x)

:=



g∈HMk

a $  $ f /2 $  $−1/2 $ $ $ $ g g (x) (x) $ $ |g (x)|s $$  $$ −1 . $ g (0) $ g (0)

[k]

(ii) The function LM,(s,a, f ) is O(xa ) for x → 0, and the basic sequence admits another useful expression   1 [k] [k] (τ ) > [k] > [k] , with L (x) := a LM,(s,a, f ) (x). RM,s := HM,(s+d/2) L M,(s,a, f ) M,(s,a, f ) x (iii) There is also another expression for the function L[k] which involves kth iterates of transfer operators, together with the ‘section’ Π : C 1 (I 2 ) → C 1 (I ) for which Π[ f ](x) = f (x, 0), that is,     a a [k] a− j k LM,(s,a, f ) := Π ∑ (−1) HM,(s+( f − j)/2,−( f − j)/2)[1] . (11.20) j=0 j Remark. The similarity between equation (11.19) and the expression of Assertion (i) (τ ) is striking. The weighted operator HM,(s+(d−a)/2) generates the second factor while [k]

the function LM,(s,a, f ) generates the first factor. Assertion (ii) is interesting because it splits the rˆoles of the triple (a, d, τ ) into two groups: the pair (τ , d) for the middle operator, and the integer a for the ending part. Assertion (iii) describes the behaviour of the ending part in terms of kth iterates of operators. Proof Assertion (i). We begin with the real case, then we study the rational case.

Real case. We use a refined decomposition. On each fundamental interval of depth k + 1, of the form g ◦ h(I ), with g ∈ HMk , h ∈ HM , the real x is written x = g ◦ h(u), and the measure νM is defined via the characteristic equality d νM (g ◦ h(u)) = |(g ◦ h)(u)|σM d νM (u) = |g (h(u))|σM · |h (u)|σM d νM (u) when g, h ∈ HM . Furthermore, the relations (11.10) entail expressions of qk , θk , θk+1 , mk+1 which involve the derivatives of g and h, namely 1 = |g (0)|, q2k

θk2 = |g (h(u))|,

2 θk+1 = |(g ◦ h)(u)| = |g (h(u))| · |h (u)|,

1 = |h (0)|. m2k+1 (τ )

Taking the sum over h ∈ HM gives rise to the ‘middle’ operator HM,(σM +(d−a)/2). [k]

Taking the sum over LFTs g ∈ HMk gives rise to the function LM,(σ ,a, f ) . Finally, we M obtain the expression given in Assertion (i).



Rational case. The similar refined decomposition of the LFT h1 ◦ · · · ◦ h p with g = h1 ◦ h2 ◦ · · · ◦ hk ,

h := hk+1 ,

 = hk+2 ◦ hk+3 ◦ · · · ◦ h p

now entails the following relations 1 1 1 = 2 = |g (h ◦ (0))| · |h((0))| · |(0)|, = |g (0)|, v2 v0 q2k 1 1 1 = |(h ◦ )(0)| = |h ((0))| · |(0)|, = | (0)|, = |h (0)|. v2k v2k+1 m2k+1 (τ )

Taking the sum over LFTs f gives rise to the ‘middle’ operator HM,(s+(d−a)/2). Tak[k]

ing the sum over LFTs g ∈ HMk gives rise to the function LM,(s,a, f ) . Finally, the sum p−k−1

over LFTs  ∈ HM

gives the ‘beginning’ operator of the basic sequence.

Assertion (ii). As the function g is non-zero on I , the function x → |g (x)|−1/2 belongs to C 1 (I ) and $  $−1/2 $ g (x) $  −1/2  −1/2 $ $ |g (x)| = |g (0)| + O(x), − 1 = O(x). $ g (0) $ 1 > [k] Then the function L M,(s,a, f ) belongs to the space C (I ), and equation (11.6.1) is a

consequence of the equality |h(x)|1/2 = |h (x)| that holds for all the LFTs h ∈ H . [k]

Assertion (iii). Equation (11.20) follows from the expression LM,(s,a, f ) given in Assertion (i), with the binomial formula, the definitions of HM,(s+t,−t) and section Π. We have performed step 1 of the dynamical analysis, as it is described in Sections 11.5.6 and 11.5.7. We have now8 to perform step 2. The expressions provided by Lemma 11.6.1 involve ending and beginning operators which appear with large powers and we first use in Section 11.6.2 their dominant spectral properties near the real axis, described in Section 11.8.2. In Section 11.6.3, we deal with the middle operator.

11.6.2 Using dominant spectral properties With Assertion (iii) of Lemma 11.6.1, and for s near the real axis, the dominant part (see Section 11.8.2) of ‘ending’ operators involved in the basic sequence provides the estimate [k]

LM,(s,a, f ) ∼ λMk (s) ΦM,(s,a, f ) , where λM (s) is the dominant eigenvalue, and the function ΦM,(s,a, f ) is expressed with the section Π of eigenfunctions φM,(s+t,−t) . Moreover, with Assertion (ii) of 8

We recall that step 3 is not performed here. See the paper (Cesaratto and Vall´ee, 2014) for step 3.


Pseudo-randomness of a random Kronecker sequence Lemma 11.6.1, the function ΦM,(s,a, f ) is O(xa ) for x → 0 and > M,(s,a, f ) (x) := (1/xa ) ΦM,(s,a, f ) (x) Φ a   a (−1)a− j Π[φM,(s+( f − j)/2,−( f − j)/2)](x). = (1/xa ) ∑ j j=0

(11.21) (11.22)

This entails, with Assertion (ii) of Lemma 11.6.1, asymptotic estimates for the func[k] > M,(s,a, f ) , namely tion RM,s which now involve the function Φ [k]

RM,s ∼ λMk (s) JM,s ,

(τ ) > M,(s,a, f ) ]. with JM,s := HM,(s+d/2) [Φ

(11.23)

Real model. In the real case, using the equality λM (σM ) = 1, the previous estimate (11.23) at s = σM gives rise to the asymptotic estimate: E[M] [Rk ] ∼



I [M]

JM,σM (u) dνM (u).

(11.24)

Rational model. In the rational case, we also use dominant spectral properties for the ‘beginning’ operator, for s near the real axis (see Section 11.8.2): HkM,s [g](0) ∼ λMk (s) φM,s (0) QM,s [g].

(11.25)

Collecting all the leading estimates obtained in (11.23) and (11.25) gives rise to a [M] ‘leading’ term for the Dirichlet series SR (s), of the form

φM,s (0) · ΣM (s) · ΨM (s) with

ΣM (s) :=

λM (s)

∑ λMp (s) = 1 − λM (s)

and

ΨM (s) := QM,s [JM,s ]. (11.26)

p≥1

[M]

The other part HM (s) of the Dirichlet series SR (s), which gathers all the other terms, will be actually a remainder term, due to the fact that K is an admissible function of the depth and it remains analytic when the series ΣM (s) becomes singular, namely at s = σM for which the denominator 1 − λM (s) is zero. Then, the value AM := ΨM (σM )

(11.27) [M]

which involves the function ΨM defined in (11.26) intervenes in the residue of SR (s) at s = σM and plays a central rˆole in the analysis in the rational model. Comparing the leading terms in the two models. We remark that the term ΨM (σM ) which intervenes in the rational case, also intervenes in the real case. Indeed, with Sections 11.8.2 and 11.8.3, the projector QM,σM satisfies QM,σM [g] =



I [M]

g(t) dνM (t) so that ΨM (σM ) =



I [M]

JM,σM (u) dνM (u)

coincides with the constant defined in (11.24). This explains the strong similarity between the analyses in the two models and the central rˆole played by the constant AM defined in (11.27).



11.6.3 Rˆole of the middle operator: classification of pairs (τ , d) (τ )

It remains for us to deal with the middle operator HM,(s+d/2) . We first focus on the case when the digit-cost τ satisfies τ (m) ∼ λ me when m → ∞. Then, the middle operator is closely related to the (possibly) truncated hybrid ζ function ζM (2s + d, −e; x) where ζM (s,t; x) is defined as

ζM (s,t; x) :=

M

1 1 . s t m=1 (m + x) m



More precisely, since the inequality 1 + d − e ≥ 0 holds, the difference (τ )

HM,(s+d/2) [g](x) − λ ζM (2s + d, −e; x)g(0) defines an operator which is bounded (uniformly with respect to M) when ℜs is close to 1 (even though each term is itself not uniformly bounded). Then, the middle operator is ‘closely related’ to the truncated hybrid function ζM (2s + d, −e; x) itself ‘closely related’ to the truncated zeta function ζM (2s + d − e). When ℜs is close to 1, there are two cases for the pair (e, d) and this leads to the following definition. Definition 11.6.2 Consider an elementary cost Rk defined as in equation (11.18). (i) When the digit-cost τ (m) is O(me ) with e ≤ d, the cost Rk is said to be ordinary. (ii) When the digit-cost τ (m) is Θ(me ) with e = 1 + d, the cost Rk is said to be extremal. This is only possible when a = 0. (iii) A digit-cost of the form τ (m) = me is said to be standard. We now explain the specificities of the two cases. Ordinary cost. For any constraint M (possibly infinite), the truncated zeta functions ζM (2σ + d − e) are uniformly bounded with respect to M when σ is close to 1. The middle operator is bounded (uniformly with respect to M). The sequence AM tends to A∞ := Ψ(1) when M → ∞ with a speed of order O(1/M). The Dirichlet series S[M] (s) has a simple pole at s = σM for any M (finite or infinite). Extremal cost. For M = ∞, the function ζ (2σ + d − e) is singular at σ = 1, and, for any M ≤ ∞, the middle operator has the same behaviour as the truncated zeta function ζM (2s + d − e) (see Section 11.8.5). It then creates a pole at s = 1 for M = ∞ and is no longer uniformly bounded with respect to M when M → ∞. Furthermore, [M] A∞ := Ψ(1) = ∞, and AM := ΨM (σM ) is of order log M. The Dirichlet series SR (s) has a simple pole at s = σM for finite M (with a residue of order log M) and a double pole at s = 1 for M = ∞ which gives rise to a logarithmic term for the mean value.

11.6.4 The basic result for a balanced cost These statements will be made precise in the following theorem, which is a central step of the proof of the first three theorems. This result is completely proven in (Cesaratto and Vall´ee, 2014). Here, we have only explained the dominant terms.

Pseudo-randomness of a random Kronecker sequence


Theorem 11.6.3 There are three positive real numbers γ , θ , ρ < 1, such that, for any balanced cost Rk defined in (11.18), the following holds and involves the constant AM = AM [R] defined in (11.27). (a) If the cost Rk is ordinary, then, for any integer M ≤ ∞, the mean values satisfy E[M] [Rk ] = AM + O(ρ k ),

[M]

EN [R] = AM + O(N −γ ).

Furthermore, when M → ∞, the sequence AM tends to A∞ and satisfies   1 . AM = A∞ + O M (b) If the cost Rk is extremal, then, for any finite integer M, the mean values satisfy E[M] [Rk ] = AM + O(ρ k ),

[M]

EN [R] = AM + O(N −θ / log M ).

Furthermore, when M → ∞, the sequence AM is of logarithmic order, and, if the r for which cost Rk is standard, there exists a constant >   1 . AM = log2 M + > r+O M (c) If the cost (Rk ) is extremal and M = ∞, the mean value E[Rk ] is infinite, the mean value EN [R] is of logarithmic order and, if the cost Rk is standard, there exists a constant r for which the mean value EN [R] satisfies EN [R] = log2 N + r + O(N −γ ).

11.6.5 End of the proofs for balanced parameters We explain how Theorem 11.6.3 is applied to the proof of our first three theorems. For Theorem 11.4.1, when the truncation T is at a boundary position, the digit mk+1 does not intervene, the integer e equals 0, and parameters Xk± only involve ordinary (and standard) costs Rk (see Figure 11.5). We are in the case (a) of Theorem 11.6.3, and it remains to compute the exact values of the limit x± ∞ , which will be done in the next section. For Theorem 11.4.2, we consider a truncation at position μ ∈]0, 1[ and the two types of costs may a priori appear. However, for the covered space, all the costs are ordinary; we directly deal with the costs τ (m) := 1 + μ (m − 1)e and we use case (a) of Theorem 11.6.3. We now obtain constants AM which depend on μ , and the sum of all these constants gives rise to the constant sM (μ ). This leads to the proof. For Theorem 11.4.3, we deal with a truncation at position μ ∈]0, 1[, and the other two parameters (discrepancy and Arnold measure). There now exist both ordinary and extremal costs Rk . For ordinary costs, we directly deal with the costs τ (m) := 1 + μ (m − 1)e , as previously, and we apply case (a) of Theorem 11.6.3. For extremal costs, we decompose the digit-cost 1 + μ (m − 1)e = μ e me + ρ (m)

with

ρ (m) = Θ(me−1 ).

(11.28)



The first part gives rise to the leading extremal part of the cost Rk (which is colinear to a standard extremal cost), and the second part gives rise to a remainder cost which is now ordinary. We apply case (a) of Theorem 11.6.3 to the remainder cost, and cases (b) and (c) of the same theorem to the leading extremal cost. The polynomial $μ % $μ % X k which collects all the leading extremal parts of cost Rk equals in each case $μ %

Δk $μ % Ak

= =

μ mk+1 qk θk μ mk+1 q2k θk2

− μ 2 m2k+1 qk θk+1 2 . −2μ 2 m2k+1 q2k θk θk+1 + μ 3 m3k+1 q2k θk+1

These expressions, together with cases (b) and (c) of Theorem 11.6.3, give rise to the leading polynomials (with respect to μ ) which occur in Theorem 11.4.3.

11.6.6 Computation of constants x± ∞ in Theorem 11.4.1 The constants which occur in the statement of Theorem 11.4.1 for M = ∞ are obtained by summing each constant Ψ(1) brought by each elementary cost Rk which appears in the decomposition of each parameter (discrepancy, Arnold constant and > (1,a, f ) for some covered space). Such a constant Ψ(1) is the integral of a function Φ > (1,a, f ) is expressed pairs (a, f ) with 0 ≤ a ≤ f and f = 1, 2. Any such function Φ in (11.21) with the dominant eigenfunction φ1 and the sections Π of the dominant eigenfunctions φ(3/2,−1/2), φ(2,−2) . Section 11.8.3 provides an explicit expression for the functions φ1 , Π[φ(3/2,−1/2)] and Π[φ(2,−2) ]. This yields an explicit expression for > (1,a, f ) (u) as a linear combination of the functions (1 + u)−b with the function u → Φ b := 1, 2, 3. The integral on such functions leads to the constants of Figure 11.6.

11.7 Unbalanced costs We here outline the proof of Theorem 11.4.4, which deals with costs qbk θkc . Remark that the case b = c gives rise to balanced costs that have been already studied in the previous section. Section 11.2.8 explains why we may expect an asymptotic exponential behaviour for the expectation E[qbk θkc ] in the case when c and b are not equal. We show here that the imbalance υ = c − b plays a central rˆole.

11.7.1 Expression of the basic sequence The following lemma describes the basic sequence of functions relative to the general cost qbk θkc . As the equalities a = d = e = 0 hold, this expression is simpler than the previous one of Lemma 11.6.1. Lemma 11.7.1 Each cost Rk = qbk θkc admits a basic sequence [k]

RM,s (x) = HkM,(s+c/2,−b/2) [1](x, 0)

Pseudo-randomness of a random Kronecker sequence and in the real model

E[M] [Rk ] =

in the rational model

SR (s) =

[M]

I [M]

437

HkM,(σM +c/2,−b/2) [1](u, 0) dνM (u)

∑ HM,s

p−F(p)

p≥1

F(p)

◦ HM,(s+c/2,−b/2)[1](0, 0).

The proof follows the same lines as in Lemma 11.6.1. For the real case, one deals with g ∈ HMk , and, in the rational case, with the decomposition h = g ◦ .

11.7.2 Step 2 in the real model We recall that we let denote the imbalance c − b by υ . When the inequality υ > tM holds, the operator HM,(s+c/2,−b/2) is well-defined at s = σM , and the spectral decomposition at s = σM leads to the estimate HkM,(σM +c/2,−b/2)[1](x) ∼ λMk (σM + υ /2) φM,(σM +c/2,−b/2)(x), which gives rise to the estimate [M]

E

[Rk ] ∼

λMk (σM

+ υ /2)

1 0

φM,(σM +c/2,−b/2) (u) dνM (u).

11.7.3 Step 2 in the rational model [M]

The Dirichlet series SR (s) is convergent in the half-plane ℜs > sM . For any complex number s with ℜs > sM , and close enough to the real axis, we use, as in (11.25), the dominant spectral behaviour of the operator HkM,s . This gives rises to a dominant part [M]

for the Dirichlet series SR (s), in the same vein as in (11.26). We obtain [M]

SR (s) = φM,s (0) · ΣM (s) · ΨM (s) + HM (s) where the first term collects all the dominant terms ΣM (s) := ∑ λM

p−F(p)

F(p)

(s) λM

(s + υ /2),

ΨM (s) := QM,s [x → φM,(s+c/2,−b/2) (x)],

p

and the ‘remainder’ term HM (s) collects the three other terms, each of them containing at least one subdominant term. In the present case, the admissible function intervenes in the leading term, and this is why we consider, as in (Daireaux and Vall´ee, 2004; Lhote and Vall´ee, 2008), particular admissible functions F(p) of the form F(p) = δ p, with δ ∈ Q ∩ [0, 1], already defined in Section 11.3.3. Consider indeed a rational δ := B/C defined with two coprime integers B,C which satisfy B ≤ C. Then, the series ΣM is written as ΣM (s) = M (s, δ ) ∑ LCp M (s, δ ) = p≥1

M (s, δ , υ ) , 1 − LCM (s, δ , υ )

with

438 M (s, δ , υ ) =

E. Cesaratto and B. Vall´ee C−1

j−δ j

∑ λM

δ j

(s)λM

(s + υ /2),

LM (s, δ , υ ) = λM1−δ (s)λMδ (s + υ /2).

j=0

The poles of the Dirichlet series ΣM (s) are brought by the zeroes of the map s → LCM (s, δ , υ ) − 1 where C is a positive integer. The map s → LM (s, δ , υ ) is an extension of a map which has been already studied in (Lhote and Vall´ee, 2008). And the long version of this chapter (Cesaratto and Vall´ee, 2014) provides a proof of all the properties which are stated in Theorem 11.4.4 for the unique real zero σM (υ , δ ) of the equation LM (s, δ , υ ) − 1.

11.8 Summary of functional analysis We now recall some basic facts from functional analysis concerning the operators under study. We consider a (possibly infinite) integer M, and we denote by GM,s a generic element of the set GM,s , defined as GM,s := {HM,s }



{HM,(s+t,−t) ; t ∈ R}.

11.8.1 Generalities Operators in GM,s may act on functions of one or two variables. The integer q denotes the number of variables. It always equals 2 for the operator GM,s except for the (plain) operator HM,s , where it equals 1. More precisely, we consider the Banach spaces C 1 (I q ), endowed with the following norms defined from the sup-norm on I q , denoted by ||.||0 : || f ||1 = || f ||0 + || f  ||0

for q = 1,

||F||1 = ||F||0 + ||DF||0

for q = 2.

We will also use another norm, the (1, τ ) norm, defined later in (11.31). There are three cases for the constraint M: the two possible cases M < ∞, M = ∞, but we are also interested in the ‘transition’ when M → ∞. There are two important real numbers which depend on the integer M. (i) The convergence abscissa sM defines the (half)-plane ℜs > sM where the operator GM,s is well-defined: this abscissa sM equals −∞ for M < ∞ and equals 1/2 for M = ∞. (ii) The real σM is the real s for which the dominant eigenvalue λM (s) of the operator GM,s equals 1. The operators GM,s ∈ GM,s share many properties with the plain operator HM,s . We describe their main analytic properties, in particular their spectral properties, and focus on the behaviour of these operators when the parameter s equals σM .

439

Pseudo-randomness of a random Kronecker sequence

11.8.2 Dominant spectral properties and spectral decomposition Each operator HM,s (when s is close to the real axis) or HM,(s,t) (when s + t is close to the real axis) is quasi-compact and has spectral dominant properties. The operator HM,s has a unique dominant eigenvalue λM (s), which is simple. This is also the dominant eigenvalue of the adjoint operator H∗M,s . The dominant eigenvector of HM,s is denoted by φM,s , the dominant eigenmeasure of the adjoint H∗M,s is denoted by QM,s , and there is a normalisation condition QM,s [φM,s ] = 1. For s close to the real axis, one has HkM,s [g](x) ∼ λMk (s)φM,s (x)QM,s [g].

(11.29)

The operator HM,(s+t,−t) has a unique dominant eigenvalue λM (s +t, −t), which is simple and equal to λM (s). This is also the dominant eigenvalue of the adjoint operator H∗M,(s+t,−t) . The dominant eigenvector of HM,(s+t,−t) is denoted by φM,(s+t,−t) , the dominant eigenmeasure of the adjoint H∗M,(s,t) is denoted by QM,(s+t,−t) , and there is a normalisation condition QM,(s+t,−t) [φM,(s+t,−t) ] = 1. For s close to the real axis, one has HkM,(s+t,−t) [G](x, y) ∼ λMk (s)φM,(s+t,−t) (x, y)QM,(s+t,−t) [G].

(11.30)

In equations (11.29) or (11.30), the quasi-compactness of operators entails that the remainder term is of order |λM (s)|k O(ρ k ), with ρ < 1 and a hidden constant that is uniform with respect to s, for s close enough to the real axis, and uniform with respect to M, for M large enough. Proofs of the previous assertions on the space C 1 (I q ) closely follow Baladi (2000), given for the (constrained and unconstrained) plain transfer operator on the space of H¨older continuous functions. The ‘quasi-compacity’ property is the first key step in the proof of the spectral decomposition. It is obtained with Hennion’s theorem. In Baladi and Vall´ee (2005), the central hypothesis of Hennion’s theorem (also known as Lasota–Yorke bounds) is proven to hold (for the plain transfer operator) in the space C 1 (I ). The paper Vall´ee (1997) establishes the spectral decomposition for the operator H(s,t) in the space of analytic functions and the paper Cesaratto et al. (2006) in the space C 1 (I 2 ). The proofs can be easily adapted to the constrained operator HM,(s,t) .

11.8.3 Special values at s = σM For any M possibly infinite, the equation λM (s) − 1 has a unique solution on the real axis, located at s = σM . The value σM is the Hausdorff dimension of the constrained set I [M] . The dominant eigenmeasure of the adjoint operator HM,s at s = σM coincides with the Hausdorff measure νM of the set I [M] . For M = ∞, the eigenfunction φ1 is the Gauss density and −λ  (1) is the entropy   π2 1 1 , Q1 [ f ] = . φ1 (x) = f (w)dw, and λ  (1) = − log 2 1 + x 6 log 2 I

440

E. Cesaratto and B. Vall´ee

The sections of the eigenfunctions φ(2,−1) and φ(3/2,−1/2) satisfy 3 log 2 · Π[φ(2,−1)](x) = (1 + x)−1 + (1 + x)−2 + (1 + x)−3, 2 log 2 · Π[φ(3/2,−1/2)](x) = (1 + x)−1 + (1 + x)−2. For M → ∞, the dominant spectral objects of the constrained operators HM,s tend to the spectral objects of the unconstrained one Hs with a speed of order O(1/M). Gauss himself proved that φ1 is a fixed point of H1 (see e.g., Knuth (1998)). The description of eigenfunctions for the operator H(s,t) is given in (Vall´ee, 1997, Theorem 5). The eigenfunctions φ(2,−1) and φ(3/2,−1/2) are computed in Cesaratto et al. (2006).

11.8.4 When s is far from the real axis There are two cases, according as σ := ℜs simply satisfies σ > sM or is furthermore close to 1. For any M ≥ 2, for any σ > sM , on the vertical line ℜs = σ , with s = σ , one has, according to Vall´ee (1998) ||GM,s [F]||0 < λM (σ )||F||0 ,

for any F ∈ C 1 (I q ).

We now consider Dolgopyat bounds when σ is close to 1. These bounds deal with the (1, τ ) norm, defined as ||F||(1,τ ) := ||F||0 +

1 ||F||1 . |τ |

(11.31)

For any τ0 > 0, there exist α , β > 0, M0 > 0, K > 0, for which, when s belongs to the part of the vertical strip |ℜs − 1| ≤ β , with τ := ℑs satisfying |τ | > τ0 > 0, the norm (1, τ ) of the nth iterate of any operator GM,s ∈ GM,s satisfies, for M ≥ M0 ||GnM,s ||(1,τ ) ≤ K · γ n · |τ |α ,

for n ≥ 1.

This was first proved in Dolgopyat (1998) for plain transfer operators (associated with subshifts of finite type) whose adjoints fix the Lebesgue measure for s = 1. Then Baladi and Vall´ee (2005) extended the result to the case when the dynamical system has an infinite number of branches (the case of the Gauss map). The paper Cesaratto and Vall´ee (2011) provides a detailed proof of Dolgopyat-type estimates in the constrained case for large enough M and with uniform parameters (with respect to M). This result is obtained via perturbation techniques (for M → ∞) and is not a priori valid for small values of M, where sM is not close to 1. In Cesaratto et al. (2006), techniques from the paper Baladi and Vall´ee (2005) are adapted in order to extend Dolgopyat’s estimates to the case q = 2.

Pseudo-randomness of a random Kronecker sequence

441

11.8.5 Basic properties of the ζ function

The weighted operator H_{M,s}^{(τ)}, used to study the 'middle' digit, is quite close (when τ(m) = m^e) to the (possibly truncated) Riemann zeta function ζ_M(s − e). The plain Riemann ζ function is analytic on ℜs > 1, has a pole at s = 1, with a residue equal to 1. It is of polynomial growth (with respect to ℑs) when ℜs is close to 1. For M < ∞ and for real s < 1 with |s − 1| log M ≤ 1, the truncated ζ function satisfies (see Edwards (2001))

ζM (s) = log M + γ + O(|1 − s| log2 M).
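As a quick numerical illustration of the first two terms (an addition for the reader, not in the original text): at s = 1 the truncated value ζ_M(1) is the harmonic sum, which indeed behaves like log M + γ.

    import math

    EULER_GAMMA = 0.5772156649015329          # Euler-Mascheroni constant
    for M in (10**2, 10**4, 10**6):
        zeta_M_at_1 = sum(1.0 / m for m in range(1, M + 1))
        print(M, zeta_M_at_1, math.log(M) + EULER_GAMMA)   # the two values agree up to O(1/M)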

11.9 Conclusion and open problems

Our results precisely describe the pseudo-randomness of a random Kronecker sequence K⟨T⟩(α) with five parameters, for two-distance truncations and in various probabilistic models (real versus rational, constrained or not constrained).
Extension to three-distance truncations. We deal here only with two-distance truncations. For any three-distance truncation, and for all the four parameters, except the discrepancy, there exist general formulae (of the same type as the present ones) which express the parameter X⟨T⟩ as a function of the truncation T and the four continuants q_k, q_{k−1}, θ_k, θ_{k+1} (for a truncation of index k). Any three-distance truncation may be described with two positions: the principal position μ, already used here, and another (auxiliary) position. This is why a similar study can be conducted in this case, with similar expected results (and heavier computations).
The special case of the discrepancy. The situation is completely different for the discrepancy. For three-distance truncations, there does not exist a formula which expresses Δ⟨T⟩ for a truncation of index k solely with T, q_k, q_{k−1}, θ_k, θ_{k+1}. However, the paper (Baxa and Schoissengeier, 1994) provides estimates that relate the discrepancy Δ⟨T⟩(α) for an index k to the average A_k := (1/k) ∑_{i=1}^{k} m_i of the first k digits in the continued fraction expansion of α. We are thus led to consider constrained models of another type, which deal with the sets I_[M] of real numbers for which each average A_k is bounded by some constant M. We may also consider their rational counterparts Ω̃_N^[M]. In these models, we may expect a logarithmic behaviour for the mean discrepancy. In the paper (Cesaratto and Vallée, 2006), we already studied the set I_[M] and provided estimates for its Hausdorff dimension, with tools of dynamical analysis.
Quadratic numbers. Other particular Kronecker sequences K(α), associated with quadratic irrational numbers α, are also interesting to study from our probabilistic point of view. Such sequences are usually classified as the most random, because of their 'low' worst-case discrepancy. For dynamical analysis, there is a strong parallelism between rational and irrational quadratic numbers, as is shown for instance in (Vallée, 1998): we just replace the quasi-inverse (I − H_s)^{−1} by the zeta function of

442

E. Cesaratto and B. Vall´ee

the dynamical system. It is then surely possible to conduct similar analyses among quadratic irrational numbers, with similar expected results.
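To make these objects concrete, the following sketch (an illustration of ours, not part of the chapter's analysis; the helper names and the choice α = √2 − 1 are arbitrary) computes the continued fraction digits m_i of α, their running averages A_k (the quantities entering the constrained sets I[M]), the continuant denominators q_k, and the distinct gap lengths of the truncated Kronecker sequence {iα mod 1 : 1 ≤ i ≤ T}. By the three-distance theorem the gaps take at most three distinct lengths for every truncation T; truncations T = q_k typically exhibit only two, which is the two-distance case treated in this chapter.

from math import sqrt

def cf_digits(alpha, n):
    # First n continued fraction digits m_1, ..., m_n of alpha in (0, 1).
    digits, x = [], alpha
    for _ in range(n):
        x = 1.0 / x
        m = int(x)
        digits.append(m)
        x -= m
    return digits

def continuant_denominators(digits):
    # Denominators q_1, q_2, ... of the convergents built from the digits.
    q_prev, q, qs = 0, 1, []
    for m in digits:
        q_prev, q = q, m * q + q_prev
        qs.append(q)
    return qs

def distinct_gaps(alpha, T, ndigits=9):
    # Distinct gap lengths between consecutive points of {i*alpha mod 1 : 1 <= i <= T} on the circle.
    pts = sorted((i * alpha) % 1.0 for i in range(1, T + 1))
    gaps = [b - a for a, b in zip(pts, pts[1:])] + [1.0 - pts[-1] + pts[0]]
    # rounding merges gaps that differ only by floating-point noise
    return sorted(set(round(g, ndigits) for g in gaps))

alpha = sqrt(2) - 1                                      # quadratic irrational with digits 2, 2, 2, ...
digits = cf_digits(alpha, 10)
averages = [sum(digits[:k]) / k for k in range(1, 11)]   # the averages A_k of the first k digits
qs = continuant_denominators(digits)
print("digits m_i :", digits)
print("averages A_k:", averages)                         # here every A_k = 2, so alpha lies in I[M] for M >= 2
for T in (qs[3], qs[3] + 5, 100):                        # a continuant denominator and two generic truncations
    print(T, distinct_gaps(alpha, T))                    # never more than three distinct gap lengths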

Bibliography

Aberkane, A., and Currie, J. 2009. A cyclic binary morphism avoiding Abelian fourth powers. Theoret. Comput. Sci., 410, 44–52. Aberkane, A., Currie, J. D., and Rampersad, N. 2004. The number of ternary words avoiding abelian cubes grows exponentially. J. Integer Seq., 7(2), Article 04.2.7, 13. Aberkane, A., Linek, V., and Mor, S. J. 2006. On the powers in the Thue–Morse word. Australasian J. Combinat., 35, 41–49. Adamczewski, B., and Bugeaud, Y. 2007. On the complexity of algebraic numbers. I. Expansions in integer bases. Ann. of Math. (2), 165, 547–565. Adamczewski, B., and Bugeaud, Y. 2010. Transcendence and Diophantine approximation. In: Berth´e, V., and Rigo, M. (eds.), Combinatorics, Automata, and Number Theory. Encyclopedia of Mathematics and its Applications, vol. 135. Cambridge University Press. Adler, R. L., and Marcus, B. 1979. Topological entropy and equivalence of dynamical systems. Mem. Amer. Math. Soc., 20(219), iv+84. Adler, R. L., Goodwyn, L. W., and Weiss, B. 1977. Equivalence of topological Markov shifts. Israel J. Math., 27(1), 48–63. Adler, R. L., Coppersmith, D., and Hassner, M. 1983. Algorithms for sliding block codes. IEEE Trans. Inform. Theory, IT-29, 5–22. Agrawal, M., Kayal, N., and Saxena, N. 2004. PRIMES is in P. Ann. of Math. (2), 160, 781–793. Akiyama, S., and Komornik, V. 2013. Discrete spectra and Pisot numbers. J. Number Theory, 133(2), 375–390. Allouche, J.-P., and Cosnard, M. 2000. The Komornik–Loreti constant is transcendental. Amer. Math. Monthly, 107(5), 448–449. Allouche, J.-P., and Shallit, J. O. 1999. The ubiquitous Prouhet–Thue–Morse sequence. Pages 1–16 of: Ding, C., Helleseth, T., and Niederreiter, H. (eds.), Sequences and Their Applications, Proceedings of SETA ’98. Springer-Verlag. Allouche, J.-P., and Shallit, J. 2003. Automatic Sequences. (Theory, applications, generalizations.) Cambridge University Press. Allouche, J.-P., Currie, J. D., and Shallit, J. 1998. Extremal infinite overlap-free binary words. Electronic J. Combinatorics, 5. Allouche, J.-P., Rampersad, N., and Shallit, J. 2009. Periodicity, repetitions, and orbits of an automatic sequence. Theoret. Comput. Sci., 410, 2795–2803. Alon, N., and Spencer, J. 2000. The Probabilistic Method. Wiley.

Alon, N., Grytczuk, J., Haluszczak, M., and Riordan, O. 2002. Nonrepetitive colorings of graphs. Random Structures and Algorithms, 21, 336–346. Anne, V., Zamboni, L. Q., and Zorca, I. 2005. Palindromes and pseudo-palindromes in episturmian and pseudo-episturmian infinite words. Pages 91–100 of: Brlek, S., and Reutenauer, C. (eds.), Proceedings of Words 2005. Publications du LACIM, vol. 36. Apostolico, A., and Breslauer, D. 1997. Of periods, quasiperiods, repetitions and covers. Pages 236–248 of: Structures in Logic and Computer Science. Apostolico, A., and Ehrenfeucht, A. 1993. Efficient detection of quasiperiodicities in strings. Theoret. Comput. Sci., 119(2), 247–265. Apostolico, A., and Preparata, F. P. 1983. Optimal off-line detection of repetitions in a string. Theoret. Comput. Sci., 22(3), 297–315. Apostolico, A., Farach, M., and Iliopoulos, C. S. 1991. Optimal superprimitivity testing for strings. Inf. Process. Lett., 39(1), 17–20. Arnold, V. I. 2003. Topology and statistics of formulae of arithmetics. Russian Math. Surveys, 58, 637–664. Arnold, V. I. 2004. Arnold’s Problems. Springer Phasis. Arnoux, P., and Schmidt, T. 2009. Veech surfaces with nonperiodic directions in the trace field. Journal of Modern Dynamics (JMD), 4, 611–629. Arnoux, P., and Schmidt, T. 2014. Commensurable continued fractions. Discrete and Continuous Dynamical Systems, 34(11), 4389–4418. Arora, S., and Barak, B. 2009. Computational Complexity. (A modern approach.) Cambridge University Press. Assem, I., and Dupont, G. 2011. Friezes and a construction of the Euclidean cluster variables. J. Pure Appl. Algebra, 215(10), 2322–2340. ˜ Ann. Math. Blaise Pascal, Assem, I., and Reutenauer, C. 2012. Mutating seeds: types A and A. 19(1), 29–73. Assem, I., Reutenauer, C., and Smith, D. 2010. Friezes. Adv. Math., 225(6), 3134–3165. Aubrun, N., and Sablik, M. 2010. Simulation of effective subshifts by two-dimensional subshifts. Preprint. Baatz, M., and Komornik, V. 2011. Unique expansions in integer bases with extended alphabets. Publ. Math. Debrecen, 79(3–4), 251–267. Badkobeh, G. 2011. Fewest repetitions versus maximal-exponent powers in infinite binary words. Theoret. Comput. Sci., 412(48), 6625–6633. Badkobeh, G., and Crochemore, M. 2010. Bounded number of squares in infinite repetitionconstrained binary words. Pages 161–166 of: Holub, J., and Zd’´arek, J. (eds.), Prague Stringology Conference. Czech Technical University in Prague. Badkobeh, G., and Crochemore, M. 2014. Maximal-exponent factors in strings. J. Comput. Syst. Sci. In press. Badkobeh, G., Crochemore, M., and Rao, M. 2014. Finite repetition threshold for large alphabets. Theor. Inform. Appl., eFirst(8). Baiocchi, C., and Komornik, V. 2007. Greedy and quasi-greedy expansions in non-integer bases. Manuscript available electronically at arXiv:0710.3001. Baker, S., and Sidorov, N. 2013. Expansions in non-integer bases: lower order revisited. Manuscript available electronically at arXiv:1302.4302. Baladi, V. 2000. Positive Transfer Operators and Decay of Correlations. Advanced Series in Nonlinear Dynamics, vol. 16. World Scientific Publishing Co., Inc., River Edge, NJ. Baladi, V., and Vall´ee, B. 2005. Euclidean algorithms are Gaussian. J. Number Theory, 110, 331–386.

Bannai, H., I, T., Inenaga, S., Nakashima, Y., Takeda, M., and Tsuruta, K. 2014a. A new characterization of maximal repetitions by Lyndon trees. Manuscript available electronically at arXiv:1406.0263. To appear in SODA 2015. Bannai, H., I, T., Inenaga, S., Nakashima, Y., Takeda, M., and Tsuruta, K. 2014b. The “runs” theorem. Manuscript available electronically at arXiv:1406.0263. Baxa, C., and Schoissengeier, J. 1994. Minimum and maximum order of magnitude of the discrepancy of (nα ). Acta Arith., 68, 281–290. Baxter, R. J. 1999. Planar lattice gases with nearest-neighbor exclusion. Ann. Comb., 3(2–4), 191–203. On combinatorics and statistical mechanics. Baylis, J. 1998. Error-Correcting Codes: A Mathematical Introduction. Chapman and Hall Mathematics Series. London: Chapman & Hall. B´eal, M., and Perrin, D. 2014. A quadratic algorithm for road coloring. Discrete Applied Mathematics, 169, 15–29. B´eal, M.-P., Berstel, J., Marcus, B., et al.. 2010. Variable-length codes and finite automata. Chap. 14, pages 505–584 of: Issac Woungang, Sudip Misra, S. C. M. (eds.), Selected Topics in Information and Coding Theory. World Scientific Publishing Company. B´eal, M., Berlinkov, M. V., and Perrin, D. 2011. A Quadratic upper bound on the size of a synchronizing word in one-cluster automata. Int. J. Found. Comput. Sci., 22(2), 277– 288. Bean, D. A., Ehrenfeucht, A., and McNulty, G. 1979. Avoidable patterns in strings of symbols. Pacific J. Math., 85, 261–294. Beck, J. 1984. An application of Lov´asz local lemma: there exists an infinite 01-sequence containing no near identical intervals. Pages 103–107 of: Hajnal, A., et al. (eds.), Finite and Infinite Sets, Vol. I, II (Eger, 1981). Colloq. Math. Soc. J´anos Bolyai, vol. 37. NorthHolland. Behnke, H. 1922. Uber die Verteilung von Irrationalzahlen mod 1. Hamb. abh, 1, 252–267. Behnke, H. 1924. Theorie der Diophantischen Approximationen. Hamb. abh, 3, 261–318. Bell, J. P. 2005. On the values attained by a k-regular sequence. Adv. in Appl. Math., 34(3), 634–643. Bell, J. P., and Goh, T. L. 2007. Exponential lower bounds for the number of words of uniform length avoiding a pattern. Inform. and Comput., 205(9), 1295–1306. Bell, T. C., Clearly, J. G., and Witten, I. H. 1990. Text Compression. New Jersey: Prentice Hall Inc. Berger, R. 1966. The undecidability of the domino problem. Mem. Amer. Math. Soc. No., 66, 72. Bergeron, F., and Reutenauer, C. 2010. SLk -tilings of the plane. Illinois J. Math., 54(1), 263–300. Berlinkov, M. V. 2013. On the probability to be synchronizable. Manuscript available electronically at arXiv:1304.5774. Berman, S., Moody, R., and Wonenburger, M. 1971/72. Cartan matrices with null roots and finite Cartan matrices. Indiana Univ. Math. J., 21, 1091–1099. Berstel, J. 1994. A rewriting of Fife’s theorem about overlap-free words. Pages 19–29 of: Karhum¨aki, J., Maurer, H., and Rozenberg, G. (eds.), Results and Trends in Theoretical Computer Science. Lecture Notes in Computer Science, vol. 812. Springer-Verlag. Berstel, J. 2005. Growth of repetition-free words – a review. Theoret. Comput. Sci., 340(2), 280–290. Berstel, J., and Boasson, L. 1999. Partial words and a theorem of Fine and Wilf. Theoret. Comput. Sci., 218, 135–141. Berstel, J., and Karhum¨aki, J. 2003. Combinatorics on words – a tutorial. Bull. European Assoc. Theor. Comput. Sci., 79, 178–228.

Berstel, J., and Reutenauer, C. 2011. Noncommutative rational series with applications. Encyclopedia of Mathematics and its Applications, vol. 137. Cambridge University Press. Berstel, J., Perrin, D., and Reutenauer, C. 2007. Codes and Automata. Cambridge University Press. Berstel, J., Lauve, A., Reutenauer, C., and Saliola, F. V. 2009. Combinatorics on Words. (Christoffel words and repetitions in words.) CRM Monograph Series, vol. 27. American Mathematical Society, Providence, RI. Berth´e, V., and Rigo, M. (eds.). 2010. Combinatorics, Automata and Number Theory. Encyclopedia Math. Appl., vol. 135. Cambridge University Press. Bessenrodt, C., Holm, T., and Jørgensen, P. 2014. All SL2 -tilings come from triangulations. Research report. Bissinger, B. H. 1944. A generalization of continued fractions. Bull. Amer. Math. Soc., 50, 868–876. Blanchet-Sadri, F. 2004a. Codes, orderings, and partial words. Theoret. Comput. Sci., 329(13), 177–202. Blanchet-Sadri, F. 2004b. Periodicity on partial words. Comput. Math. Appl., 47(1), 71–82. Blanchet-Sadri, F. 2007. Algorithmic Combinatorics on Partial Words. Boca Raton, FL: Chapman & Hall/CRC Press. Blanchet-Sadri, F., and Hegstrom, R. A. 2002. Partial words and a theorem of Fine and Wilf revisited. Theoret. Comput. Sci., 270, 401–419. Blanchet-Sadri, F., Mercas¸, R., and Scott, G. 2009. A generalization of Thue freeness for partial words. Theoret. Comput. Sci., 410(8-10), 793–800. Blondel, V. D., Cassaigne, J., and Jungers, R. M. 2009. On the number of α -power-free binary words for 2 < α ≤ 7/3. Theoret. Comput. Sci., 410(30–32), 2823–2833. B¨ockenhauer, H.-J., and Bongartz, D. 2007. Algorithmic Aspects of Bioinformatics. Berlin: Springer. Borchert, B. 2008. Formal language characterizations of P, NP, and PSPACE. J. Autom. Lang. Comb., 13(3-4), 161–183. Borel, E. 1948. Sur les d´eveloppements unitaires normaux. Ann. Soc. Pol. Math, 21, 74–79. Borel, J.-P., and Laubie, F. 1993. Quelques mots sur la droite projective r´eelle. J. Th´eorie Nombres Bordeaux, 5, 23–51. Borho, W., and Rosenberger, G. 1973. Eine Bemerkung zur Hecke-Gruppe G(λ ). Abhandlungen aus dem Mathematischen Seminar der Universit¨at Hamburg, 39, 83–87. Borwein, P., and Hare, K. G. 2003. General forms for minimal spectral values for a class of quadratic Pisot numbers. Bull. London Math. Soc., 35(1), 47–54. Bourgain, J., and Kontorovich, A. 2014. On Zaremba’s conjecture. Ann. Math., 180, 137–196. Bousquet-M´elou, M. 2005. Algebraic generating functions in enumerative combinatorics and context-free languages. Pages 18–35 of: STACS 2005. Lecture Notes in Comput. Sci., vol. 3404. Berlin: Springer. Boyle, M. 1983. Lower entropy factors of sofic systems. Ergodic Theory Dynam. Systems, 3(4), 541–557. Brandenburg, F.-J. 1983. Uniformly growing k-th power-free homomorphisms. Theoret. Comput. Sci., 23, 69–82. Breslauer, D. 1992. An on-line string superprimitivity test. Inf. Process. Lett., 44(6), 345–347. Brezinski, C. 1991. History of Continued fractions and Pad´e Approximants. Springer-Verlag. Brocot, A. 1861. Calcul des rouages par approcimiation, nouvelle m´ethode. Revue chronom´etrique, 3, 186–194. Brodal, G. S., Lyngsø, R. B., Pedersen, C. N. S., and Stoye, J. 1999. Finding Maximal Pairs with Bounded Gap. Pages 134–149 of: Crochemore, M., and Paterson, M. (eds.), Combinatorial Pattern Matching. Lecture Notes in Computer Science, vol. 1645. Springer.

Brown, S., Rampersad, N., Shallit, J., and Vasiga, T. 2006. Squares and overlaps in the Thue– Morse sequence and some variants. RAIRO Inform. Th´eor. App., 40, 473–484. Bruy`ere, V., Hansel, G., Michaux, C., and Villemaire, R. 1994. Logic and p-recognizable sets of integers. Bull. Belg. Math. Soc., 1, 191–238. Corrigendum, Bull. Belg. Math. Soc. 1 (1994), 577. Budzban, G., and Feinsilver, P. 2011. The generalized road coloring problem and periodic digraphs. Appl. Algebra Eng. Commun. Comput., 22(1), 21–35. Bugeaud, Y., Hubert, P., and Schmidt, T. 2013. Transcendence with Rosen continued fractions. J. Europ. Math. Soc., 15, 39–51. Burton, R., Kraaikamp, C., and Schmidt, T. 2000. Natural extensions for the Rosen fractions. Trans. Amer. Math. Soc., 352(3), 1277–1298. Caldero, P., and Chapoton, F. 2006. Cluster algebras as Hall algebras of quiver representations. Comment. Math. Helv., 81(3), 595–616. Carbone, A. 2001. Cycles of relatively prime length and the road coloring problem. Israel J. Math., 123, 303–316. Carpi, A. 1988a. Multidimensional unrepetitive configurations. Theoret. Comput. Sci., 56, 233–241. Carpi, A. 1988b. On synchronizing unambiguous automata. Theoret. Comput. Sci., 60(3), 285–296. Carpi, A. 1993. Overlap-free words and finite automata. Theoret. Comput. Sci., 115, 243–260. Carpi, A. 2007. On Dejean’s conjecture over large alphabets. Theoret. Comput. Sci., 385(1-3), 137–151. Carpi, A., and D’Alessandro, F. 2013. Independent sets of words and the synchronization problem. Adv. in Appl. Math., 50(3), 339–355. Cassaigne, J. 1993a. Counting overlap-free binary words. Pages 216–225 of: Enjalbert, P., Finkel, A., and Wagner, K. W. (eds.), STACS 93, Proc. 10th Symp. Theoretical Aspects of Comp. Sci. Lecture Notes in Computer Science, vol. 665. Springer-Verlag. Cassaigne, J. 1993b. Unavoidable binary patterns. Acta Informatica, 30, 385–395. Cenzer, D., Dashti, S. A., and King, J. L. F. 2008. Computable symbolic dynamics. MLQ Math. Log. Q., 54(5), 460–469. ˇ y, J. 1964. Pozn´amka k homog´ennym experimentom s koneˇcn´ymi automatmi. Mat. fyz. Cern´ cˇ as., SAV 14, 208–215. Cesaratto, E., and Vall´ee, B. 2006. Hausdorff dimension of real numbers with bounded digits averages. Acta Arith., 125, 115–162. Cesaratto, E., and Vall´ee, B. 2011. Small quotients in Euclidean algorithms. Ramanujan J., 24(2), 183–218. Cesaratto, E., and Vall´ee, B. 2014. Metrical versions of the two distances theorem. To be submitted. Cesaratto, E., Plagne, A., and Vall´ee, B. 2006. On the non randomness of modular arithmetic progressions. Discrete Math. Theor. Comput. Sci. Proc., AG, 271– 288. Chairungsee, S., and Crochemore, M. 2009. Efficient computing of longest previous reverse factors. Pages 27–30 of: Shoukourian, Y. (ed.), Seventh International Conference on Computer Science and Information Technologies (CSIT 2009). The National Academy of Sciences of Armenia Publishers, Yerevan, Armenia. ´ Rampersad, N., and Shallit, J. 2012. Enumeration and decidable properties of Charlier, E., automatic sequences. Internat. J. Found. Comput. Sci., 23(5), 1035–1066. Chen, G., Puglisi, S. J., and Smyth, W. F. 2008. Lempel–Ziv factorization using less time & space. Mathematics in Computer Science, 1(4), 605–623. Choffrut, C., and Karhum¨aki, J. 1997. Combinatorics of words. Pages 329–438 of: Rozenberg, G., and Salomaa, A. (eds.), Handbook of Formal Languages, vol. 1. Springer-Verlag.

Christou, M., Crochemore, M., Guth, O., Iliopoulos, C. S., and Pissis, S. P. 2011. On the right-seed array of a string. Pages 492–502 of: Fu, B., and Du, D.-Z. (eds.), Proceedings of the 17th Annual International Conference Computing and Combinatorics, COCOON 2011. Lecture Notes in Computer Science, vol. 6842. Springer. Christou, M., Crochemore, M., Iliopoulos, C. S., et al.. 2013. Efficient seed computation revisited. Theoret. Comput. Sci., 483, 171–181. Chuquet, N. 1881. Triparty en la science des nombres. Imprimerie des sciences math´ematiques et physiques. Clemens, J. D. 2009. Isomorphism of subshifts is a universal countable Borel equivalence relation. Israel J. Math., 170, 113–123. Conway, J. H., and Coxeter, H. S. M. 1973a. Triangulated polygons and frieze patterns. Math. Gaz., 57(400), 87–94. Conway, J. H., and Coxeter, H. S. M. 1973b. Triangulated polygons and frieze patterns. Math. Gaz., 57(401), 175–183. Cornfeld, I. P., Fomin, S. V., and Sina˘ı, Y. G. 1982. Ergodic theory. New York: SpringerVerlag. Translated from the Russian by A. B. Sosinski˘ı. Coxeter, H. S. M. 1971. Frieze patterns. Acta Arith., 18, 297–310. Crochemore, M. 1981. An optimal algorithm for computing the repetitions in a word. Inform. Process. Lett., 12(5), 244–250. Crochemore, M. 1982a. A solution to Berstel’s problem no. P3. Bull. European Assoc. Theor. Comput. Sci., 18, 9–11. Crochemore, M. 1982b. Sharp characterizations of squarefree morphisms. Theoret. Comput. Sci., 18(2), 221–226. Crochemore, M. 1986. Transducers and repetitions. Theoret. Comput. Sci., 45(1), 63–86. Crochemore, M. 2014 (June). Repeats in Strings. Keynote talk at CPM 2014. https://tech.yandex.com/events/cpm/2014/talks/168/. Crochemore, M., and Ilie, L. 2007. Analysis of maximal repetitions in strings. Pages 465–476 of: Kucera, L., and Kucera, A. (eds.), Mathematical Foundations of Computer Science 2007, 32nd International Symposium, MFCS 2007. Lecture Notes in Computer Science, vol. 4708. Springer. Crochemore, M., and Ilie, L. 2008a. Computing longest previous factors in linear time and applications. Information Processing Letters, 106(2), 75–80. Crochemore, M., and Ilie, L. 2008b. Maximal repetitions in strings. Journal of Computer and System Sciences, 74, 796–807. Crochemore, M., and Rytter, W. 1995. Squares, cubes and time-space efficient stringsearching. Algorithmica, 13(5), 405–425. Crochemore, M., and Rytter, W. 2003. Jewels of Stringology. Singapore: World Scientific Publishing Co. Pte. Ltd. Crochemore, M., Hancart, C., and Lecroq, T. 2007. Algorithms on Strings. Cambridge University Press. Crochemore, M., Ilie, L., and Smyth, W. F. 2008. A simple algorithm for computing the Lempel–Ziv factorization. Pages 482–488 of: Storer, J. A., and Marcellin, M. W. (eds.), 18th Data Compression Conference. IEEE Computer Society, Los Alamitos, CA. Snowbird, UT, USA, 25–27 March 2008. Crochemore, M., Ilie, L., Iliopoulos, C., et al.. 2009. LPF Computation Revisited. Pages 158–169 of: Fiala, J., Kratochv´ıl, J., and Miller, M. (eds.), IWOCA. LNCS, vol. 5874. Berlin: Springer. Crochemore, M., Iliopoulos, C., Kubica, M., Rytter, W., and Wale´n, T. 2010a. Efficient algorithms for two extensions of the LPF table: the power of suffix arrays. Pages 296–307 of: van Leeuwen, J., Muscholl, A., Peleg, D., Pokorn´y, J., and Rumpe, B. (eds.), SOFSEM

2010: Theory and Practice of Computer Science, 36th Conference on Current Trends in Theory and Practice of Computer Science, Spindleruv Ml´yn, Czech Republic. Lecture Notes in Computer Science, vol. 5901. Berlin: Springer. Crochemore, M., Fazekas, S. Z., Iliopoulos, C., and Jayasekera, I. 2010b. Number of occurrences of powers in strings. International Journal of Foundations of Computer Science, 21(4), 535–547. Crochemore, M., Ilie, L., and Tinta, L. 2011. The “runs” conjecture. Theoret. Comput. Sci., 412(27), 2931–2941. Crochemore, M., Iliopoulos, C. S., Kubica, M., et al.. 2012. The maximal number of cubic runs in a word. Journal Computer System Science, 78(6), 1828–1836. Crochemore, M., Iliopoulos, C. S., Kubica, M., et al.. 2014. New simple efficient algorithms computing powers and runs in strings. Discrete Applied Mathematics, 163(3), 258–267. Culik II, K. 1996. An aperiodic set of 13 Wang tiles. Discrete Math., 160, 245–251. Culik II, K., Pachl, J., and Yu, S. 1989. On the limit sets of cellular automata. SIAM Journal on Computing, 18(4), 831–842. Culik, II, K., Karhum¨aki, J., and Kari, J. 2002. A note on synchronized automata and road coloring problem. Pages 175–185 of: Developments in Language Theory (Vienna, 2001). Lecture Notes in Comput. Sci., vol. 2295. Berlin: Springer. Currie, J. 2005. Pattern avoidance: themes and variations. Theoret. Comput. Sci., 339, 7–18. Currie, J., and Linek, V. 2001. Avoiding patterns in the abelian sense. Canadian J. Math., 53, 696–714. Currie, J. D., and Rampersad, N. 2009a. Dejean’s conjecture holds for n ≥ 27. RAIRO Inform. Th´eor. App., 43, 775–778. Currie, J. D., and Rampersad, N. 2009b. Dejean’s conjecture holds for n ≥ 30. Theoret. Comput. Sci., 410, 2885–2888. Currie, J. D., and Rampersad, N. 2011. A proof of Dejean’s conjecture. Math. Comp., 80, 1063–1070. Currie, J. D., and Rampersad, N. 2012. Fixed points avoiding abelian k-powers. J. Combin. Theory Ser. A, 119(5), 942–948. Currie, J., and Simpson, J. 2002. Non-repetitive tilings. Electronic J. Combinatorics, 9, #R28. Currie, J., and Visentin, T. 2007. On abelian 2-avoidable binary patterns. Acta Informatica, 43, 521–533. Currie, J., and Visentin, T. 2008. Long binary patterns are abelian 2-avoidable. Theoret. Comput. Sci., 409, 432–437. Daireaux, B., and Vall´ee, B. 2004. Dynamical analysis of the parameterized Lehmer–Euclid Algorithm. Combinatorics, Probability, Computing, 13(4/5), 499–536. Dajani, K., and de Vries, M. 2005. Measures of maximal entropy for random β -expansions. J. Eur. Math. Soc. (JEMS), 7(1), 51–68. Dajani, K., and de Vries, M. 2007. Invariant densities for random β -expansions. J. Eur. Math. Soc. (JEMS), 9(1), 157–176. Dajani, K., and Kraaikamp, C. 2002. From greedy to lazy expansions and their driving dynamics. Expo. Math., 20(4), 315–327. Dajani, K., and Kraaikamp, C. 2003. Random β -expansions. Ergodic Theory Dynam. Systems, 23(2), 461–479. Dajani, K., Kraaikamp, C., and Steiner, W. 2009. Metrical theory for α -continued fractions. J. Eur. Math. Soc. (JEMS), 11, 1259–1283. Dajani, K., de Vries, M., Komornik, V., and Loreti, P. 2012. Optimal expansions in non-integer bases. Proc. Amer. Math. Soc., 140(2), 437–447. Damanik, D., and Lenz, D. 2002. The index of Sturmian sequences. European J. Combinatorics, 23, 23–29.

Dar´oczy, Z., and K´atai, I. 1993. Univoque sequences. Publ. Math. Debrecen, 42(3–4), 397– 407. Dar´oczy, Z., and K´atai, I. 1995. On the structure of univoque numbers. Publ. Math. Debrecen, 46(3-4), 385–408. Daud´e, H., Flajolet, P., and Vall´ee, B. 1997. An average-case analysis of the Gaussian algorithm for lattice reduction. Combinatorics, Probability and Computing, 6, 397–433. de Luca, A., and De Luca, A. 2006. Pseudopalindrome closure operators in free monoids. Theoret. Comput. Sci., 362(1–3), 282–300. de Luca, A., and Mignosi, F. 1994. Some combinatorial properties of Sturmian words. Theoret. Comput. Sci., 136, 361–385. de Luca, A., and Mione, L. 1994. On bispecial factors of the Thue–Morse word. Inform. Process. Lett., 49, 179–183. de Vries, M. 2008. A property of algebraic univoque numbers. Acta Math. Hungar., 119(1–2), 57–62. de Vries, M. 2009. On the number of unique expansions in non-integer bases. Topology Appl., 156(3), 652–657. de Vries, M., and Komornik, V. 2009. Unique expansions of real numbers. Adv. Math., 221(2), 390–427. de Vries, M., and Komornik, V. 2011. A two-dimensional univoque set. Fund. Math., 212(2), 175–189. Dejean, F. 1972. Sur un th´eor`eme de Thue. J. Combin. Theory. Ser. A, 13, 90–99. Dekking, F. M. 1976. On repetitions of blocks in binary sequences. J. Combin. Theory. Ser. A, 20(3), 292–299. Dekking, F. M. 1979. Strongly non-repetitive sequences and progression-free sets. J. Combin. Theory. Ser. A, 27, 181–185. Deza, A., Franek, F., and Thierry, A. 2014. How many double squares can a string contain? Discrete Applied Mathematics. In press. Di Francesco, P. 2010. The solution of the Ar T -system for arbitrary boundary. Electron. J. Combin., 17(1), Research Paper 89, 43. Dolgopyat, D. 1998. On decay of correlations in Anosov flows. Ann. of Math., 147, 357–390. Downarowicz, T. 1999. Reading along arithmetic progressions. Colloq. Math., 80, 293–296. Drmota, M., and Tichy, R. F. 1997. Sequences, Discrepancies, and Applications. Lecture Notes in Mathematics, vol. 1651. Springer-Verlag. Drobot, V. 1973. On sums of powers of a number. Amer. Math. Monthly, 80, 42–44. Drobot, V., and McDonald, S. 1980. Approximation properties of polynomials with bounded integer coefficients. Pacific J. Math., 86(2), 447–450. Du, C. F., Mousavi, H., Schaeffer, L., and Shallit, J. 2014. Decision Algorithms for FibonacciAutomatic Words, with Applications to Pattern Avoidance. Available electronically at arXiv:1406.0670. ˇ y. RAIRO Inform. Th´eor. Dubuc, L. 1998. Sur les automates circulaires et la conjecture de Cern´ Appl., 32(1–3), 21–34. Dumitrescu, A., and Radoiˇci´c, R. 2004. On a coloring problem for the integer grid. Pages 67–74 of: Towards a theory of geometric graphs. Contemp. Math., vol. 342. Amer. Math. Soc. Durand, B. 1994. The surjectivity problem for 2D cellular automata. Journal of Computer and System Sciences, 49(3), 718–725. Durand, B., Formenti, E., and Varouchas, G. 2003. On undecidability of equicontinuity clas´ (eds.), sification for cellular automata. Pages 117–128 of: Morvan, M., and R´emila, E. Discrete Models for Complex Systems, DMCS’03. DMTCS Proceedings, vol. AB. Discrete Mathematics & Theoretical Computer Science.

Durand, B., Levin, L. A., and Shen, A. 2008. Complex tilings. J. Symbolic Logic, 73(2), 593–613. Durand, B., Romashchenko, A., and Shen, A. 2009. Fixed-point tile sets and their applications. preprint. Durand, B., Romashchenko, A., and Shen, A. 2012. Fixed-point tile sets and their applications. Journal of Computer and System Sciences, 78(3), 731–764. Edwards, H. 2001. Riemann’s Zeta Function. Dover Publications. Eilenberg, S. 1974. Automata, Languages, and Machines. Vol. A. Academic Press. Eilenberg, S. 1976. Automata, Languages, and Machines. Vol. B. Academic Press. Ekhad, S. B., and Zeilberger, D. 1998. There are more than 2(n/17) n-letter ternary square-free words. J. Integer Sequences, 1, 98.1.9. Engel, F. 1913. Entwicklung der Zahlen nach Stammbruechen. Pages 190–191 of: Verhandlungen der 52. Versammlung deutscher Philologen und Schulmaenner in Marburg. Entringer, R. C., Jackson, D. E., and Schatz, J. A. 1974. On nonrepetitive sequences. J. Combin. Theory. Ser. A, 16, 159–164. Eppstein, D. 1990. Reset sequences for monotonic automata. SIAM J. Comput., 19(3), 500– 510. Erd˝os, P. 1939. On a family of symmetric Bernoulli convolutions. Amer. J. Math., 61, 974– 976. Erd˝os, P. 1940. On the smoothness properties of a family of Bernoulli convolutions. Amer. J. Math., 62, 180–186. Erd˝os, P. 1961. Some unsolved problems. Magyar Tud. Akad. Mat. Kutat´o Int. K¨ozl., 6, 221–254. Erd˝os, P., and Jo´o, I. 1992. On the number of expansions 1 = ∑ q−ni . Ann. Univ. Sci. Budapest. E¨otv¨os Sect. Math., 35, 129–132. Erd˝os, P., and Komornik, V. 1998. Developments in non-integer bases. Acta Math. Hungar., 79(1–2), 57–83. Erd˝os, P., Jo´o, I., and Komornik, V. 1990. Characterization of the unique expansions 1 = −ni and related problems. Bull. Soc. Math. France, 118(3), 377–390. ∑∞ i=1 q Erd˝os, P., Horv´ath, M., and Jo´o, I. 1991. On the uniqueness of the expansions 1 = ∑ q−ni . Acta Math. Hungar., 58(3-4), 333–342. Erd˝os, P., Jo´o, I., and Komornik, V. 1994. On the number of q-expansions. Ann. Univ. Sci. Budapest. E¨otv¨os Sect. Math., 37, 109–118. Erd˝os, P., Jo´o, I., and Komornik, V. 1998. On the sequence of numbers of the form ε0 + ε1 q + · · · + εn qn , εi ∈ {0, 1}. Acta Arith., 83(3), 201–210. Evdokimov, A. A. 1968. Strongly asymmetric sequences generated by a finite number of symbols. Dokl. Akad. Nauk SSSR, 179, 1268–1271. In Russian. English translation in Soviet Math. Dokl. 9 (1968), 536–539. Everett, C. J. 1946. Representations for real numbers. Bull. Amer. Math. Soc., 52, 861–869. Falconer, K. 2014. Fractal Geometry, 3rd edition. (Mathematical foundations and applications.) John Wiley & Sons, Ltd., Chichester. Feng, D.-J. 2011. On the topology of polynomials with bounded integer coefficients. Manuscript available electronically at arXiv:1109.1407. Feng, D.-J., and Sidorov, N. 2011. Growth rate for beta-expansions. Monatsh. Math., 162(1), 41–60. Fife, E. D. 1980. Binary sequences which contain no BBb. Trans. Amer. Math. Soc., 261, 115–136. Fife, E. D. 1983. Irreducible binary sequences. Pages 91–100 of: Cummings, L. J. (ed.), Combinatorics on Words: Progress and Perspectives. Academic Press.

Fine, N. J., and Wilf, H. S. 1965. Uniqueness theorems for periodic functions. Proc. Amer. Math. Soc., 16, 109–114. Fischer, J., and Heun, V. 2006. Theoretical and practical improvements on the RMQ-problem, with applications to LCA and LCE. Pages 36–48 of: Lewenstein, M., and Valiente, G. (eds.), Combinatorial Pattern Matching. Lecture Notes in Computer Science, vol. 4009. Springer. Fischer, M. J., and Paterson, M. S. 1974. String-matching and other products. Pages 113– 125. SIAM–AMS Proc., Vol. VII of: Karp, R. (ed.), Complexity of Computation (Proc. SIAM-AMS Appl. Math. Sympos., New York, 1973). Providence, R. I.: Amer. Math. Soc. Flajolet, P., and Vall´ee, B. 1998. Continued fraction algorithms,functional operators, and structure constants. Theoret. Comput. Sci., 194(1–2), 1–34. Flatto, L., Lagarias, J. C., and Poonen, B. 1994. The zeta function of the beta transformation. Ergodic Theory Dynam. Systems, 14(2), 237–266. Fomin, S., and Zelevinsky, A. 2002a. Cluster algebras. I. Foundations. J. Amer. Math. Soc., 15(2), 497–529 (electronic). Fomin, S., and Zelevinsky, A. 2002b. The Laurent phenomenon. Adv. in Appl. Math., 28(2), 119–144. Fomin, S., and Zelevinsky, A. 2003. Cluster algebras. II. Finite type classification. Invent. Math., 154(1), 63–121. Foreman, M., Rudolph, D. J., and Weiss, B. 2011. The conjugacy problem in ergodic theory. Ann. of Math. (2), 173(3), 1529–1586. Fraenkel, A. S., and Simpson, J. 1995. How many squares must a binary sequence contain? Electronic J. Combinatorics, 2, #R2. Fraenkel, A. S., and Simpson, J. 1998. How many squares can a string contain? J. Combin. Theory. Ser. A, 82(1), 112–120. Franek, F., and Yang, Q. 2006. An asymptotic lower bound for the maximal-number-of-runs function. Pages 3–8 of: Holub, J., and Zd´arek, J. (eds.), Proceedings of the Prague Stringology Conference. Department of Computer Science and Engineering, Faculty of Electrical Engineering, Czech Technical University. Franek, F., Smyth, W. F., and Tang, Y. 2003. Computing all repeats using suffix arrays. Journal of Automata, Languages and Combinatorics, 8(4), 579–591. Friedman, J. 1990. On the road coloring problem. Proc. Amer. Math. Soc., 110(4), 1133–1135. Garsia, A. M. 1962. Arithmetic properties of Bernoulli convolutions. Trans. Amer. Math. Soc., 102, 409–432. Gel’fond, A. O. 1959. A common property of number systems. Izv. Akad. Nauk SSSR. Ser. Mat., 23, 809–814. Giraud, M. 2009. Asymptotic behavior of the numbers of runs and microruns. Inf. Comput., 207(11), 1221–1228. Glendinning, P., and Sidorov, N. 2001. Unique representations of real numbers in non-integer bases. Math. Res. Lett., 8(4), 535–543. Goralcik, P., and Vanicek, T. 1991. Binary patterns in binary words. Internat. J. Algebra Comput., 1, 387–391. Gottschalk, W. H., and Hedlund, G. A. 1964. A characterization of the Morse minimal set. Proc. Amer. Math. Soc., 15, 70–74. Go˘c, D., Henshall, D., and Shallit, J. 2012. Automatic theorem-proving in combinatorics on words. Pages 180–191 of: Moreira, N., and Reis, R. (eds.), CIAA 2012. Lecture Notes in Computer Science, vol. 7381. Springer-Verlag. Goulden, I., and Jackson, D. 2004. Combinatorial Enumeration. Dover. Graham, R., Rothschild, B., and Spencer, J. 1990. Ramsey Theory, 2nd edition. Wiley.

Gr¨unbaum, B., and Shephard, G. C. 1986. Tilings and Patterns. New York: W. H. Freeman & Co. Grytczuk, J. 2002. Thue-like sequences and rainbow arithmetic progressions. Electronic J. Combinatorics, 9, #R44. Grytczuk, J. 2006. Nonrepetitive graph coloring. Pages 209–218 of: Graph Theory in Paris. Trends in Mathematics. Basel: Birkh¨auser. Grytczuk, J., Kozik, J., and Micek, P. 2011a. Nonrepetitive games. Manuscript available electronically at arXiv:1103.3810. Grytczuk, J., Kozik, J., and Witkowski, M. 2011b. Nonrepetitive sequences on arithmetic progressions. Electronic J. Combinatorics, 18, #P209. Grytczuk, J., Kozik, J., and Micek, P. 2013. A new approach to nonrepetitive sequences. Random Structures and Algorithms, 42. Guglielmi, N., and Protasov, V. 2013. Exact computation of joint spectral characteristics of linear operators. Found. Comput. Math., 13, 37–97. Guillon, P., and Zinoviadis, C. 2012. Densities and entropies in cellular automata. Pages 253–263 of: How the world computes. Lecture Notes in Computer Science, vol. 7318. Springer, Heidelberg. ˇ and Korjakov, I. O. 1972a. A remark on R. Berger’s work on the domino Gureviˇc, J. S., ˇ 13, 459–463. problem. Sibirsk. Mat. Z., Gurevich, Y. S., and Koryakov, I. O. 1972b. Remarks on Berger’s paper on the domino problem. Siberian Mathematical Journal, 13, 319–321. Gusfield, D. 1997. Algorithms on Strings, Trees, and Sequences – Computer Science and Computational Biology. Cambridge University Press. Gusfield, D., and Stoye, J. 2004. Linear time algorithms for finding and representing all the tandem repeats in a string. J. Comput. Syst. Sci., 69(4), 525–546. Haas, A., and Series, C. 1986. The Hurwitz constant and Diophantine approximation on Hecke groups. J. London Math. Soc., 34(2), 219–234. Halava, V., Harju, T., and K¨arki, T. 2007a. Relational codes of words. Theoret. Comput. Sci., 389(1–2), 237–249. Halava, V., Harju, T., K¨arki, T., and Zamboni, L. Q. 2007b. Relational Fine and Wilf words. Pages 159–167 of: Arnoux, P., B´edaride, N., and Cassaigne, J. (eds.), Proceedings of WORDS 2007. IML. Also available at http://www.tucs.fi/research/series/ as a technical report 839 of TUCS Turku Centre for Computer Science. Halava, V., Harju, T., and K¨arki, T. 2008a. Interaction properties of relational periods. Discrete Math. & Theor. Comput. Sci., 10(1), 87–111. Halava, V., Harju, T., and K¨arki, T. 2008b. Square-free partial words. Inform. Process. Lett., 108(5), 290–292. Halava, V., Harju, T., K¨arki, T., and S´ee´ bold, P. 2009. Overlap-freeness in infinite partial words. Theoret. Comput. Sci., 410(8–10), 943–948. Hanf, W. 1974. Nonrecursive tilings of the plane. I. J. Symbolic Logic, 39, 283–285. Happel, D., Preiser, U., and Ringel, C. M. 1980. Vinberg’s characterization of Dynkin diagrams using subadditive functions with application to DTr-periodic modules. Pages 280–294 of: Representation Theory, II (Proc. Second Internat. Conf., Carleton Univ., Ottawa, Ont., 1979). Lecture Notes in Math., vol. 832. Berlin: Springer. Hardy, G. H., and Wright, E. M. 1985. An Introduction to the Theory of Numbers, 5th edition. Oxford University Press. Harju, T., and K¨arki, T. 2014 (April). Minimal similarity relations for square-free words, Manuscript 2014. Unpublished manuscript. Harju, T., and Nowotka, D. 2006. Binary words with few squares. Bulletin of the EATCS, 89, 164–166.

Head, T., and Weber, A. 1995. Deciding multiset decipherability. IEEE Trans. Inform. Theory, 41(1), 291–297. Hedlund, G. A. 1969. Endomorphisms and automorphisms of the shift dynamical system. Math. Systems Theory, 3, 320–375. Heeffer, A. 2014. Cardano’s favorite problem: the Proportio Reflexa. Math. Intelligencer. Hensley, D. 1992. Continued fraction Cantor sets, Hausdorff dimension, and functional analysis. J. Number Theory, 40, 336–358. Hensley, D. 2006. Continued Fractions. World Scientific. Hinman, P. G. 2012. A survey of Muˇcnik and Medvedev degrees. Bull. Symbolic Logic, 18(2), 161–229. Hochman, M. 2009. On the dynamics and recursive properties of multidimensional symbolic systems. Invent. Math., 176(1), 131–167. Hochman, M. 2012. Rohlin properties for Zd actions on the Cantor set. Trans. Amer. Math. Soc., 364(3), 1127–1143. Hochman, M., and Meyerovitch, T. 2010. A characterization of the entropies of multidimensional shifts of finite type. Ann. of Math. (2), 171(3), 2011–2038. Hohlweg, C., and Reutenauer, C. 2003. Lyndon words, permutations and trees. Theoret. Comput. Sci., 307(1), 173–178. Holm, T., and Jørgensen, P. 2013. SL2 -tilings and triangulations of the strip. J. Combin. Theory Ser. A, 120(7), 1817–1834. Honkala, J. 1986. A decision method for the recognizability of sets defined by number systems. RAIRO Inform. Th´eor. App., 20, 395–403. Hopcroft, J. E., and Ullman, J. D. 1979. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley. Hopcroft, J. E., Motwani, R., and Ullman, J. D. 2006. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley. 3rd Edition. Hurd, L. P., Kari, J., and Culik II, K. 1992. The topological entropy of cellular automata is uncomputable. Ergodic Theory and Dynamical Systems, 12(6), 255–265. ¨ Hurwitz, A. 1894. Uber die angen¨aherte Darstellung der Zahlen durch rationale Br¨uche. Math. Ann., 44(2), 417–436. Ilie, L. 2005. A simple proof that a word of length has at most 2 distinct squares. J. Comb. Theory, Ser. A, 112(1), 163–164. Ilie, L. 2007. A note on the number of squares in a word. Theoret. Comput. Sci., 380(3), 373–376. Iliopoulos, C. S., Moore, D., and Smyth, W. F. 1997. A characterization of the squares in a Fibonacci string. Theoret. Comput. Sci., 172(1–2), 281–291. Iliopoulos, C. S., Moore, D., and Park, K. 1996. Covering a string. Algorithmica, 16(3), 288–297. Jacobs, K., and Keane, M. 1969. 0 − 1-sequences of Toeplitz type. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 13, 123–131. Janvresse, E., Rittaud, B., and de la Rue, T. 2008. How do random Fibonacci sequences grow? Probability Theory and Related Fields, 142(3), 619–648. Janvresse, E., Rittaud, B., and de la Rue, T. 2009. Growth rate for the expected value of a generalized random Fibonacci sequence. Journal of Physics A, 42(8), 085005. Janvresse, E., Rittaud, B., and de la Rue, T. 2010. Almost-sure growth rate of generalized random Fibonacci sequences. Ann. Inst. Henri Poincar´e, 46(1), 135–158. Janvresse, E., Rittaud, B., and de la Rue, T. 2013. Dynamics of λ -continued fractions and β -shifts. Discrete and Continuous Dynamical Systems Series A, 33(4), 1477–1498.

Jeandel, E., and Vanier, P. 2011. Π01 sets and tilings. Pages 230–239 of: Theory and Applications of Models of Computation. Lecture Notes in Comput. Sci., vol. 6648. Springer, Heidelberg. Jeandel, E., and Rao, M. 2015. An aperiodic set of 11 Wang tiles. Preprint available electronically at https://hal.inria.fr/hal-01166053v2. Jeandel, E., and Vanier, P. 2015. Characterizations of periods of multi-dimensional shifts. Ergodic Theory and Dynamical Systems, 35(4), 431–460. Jessen, B., and Wintner, A. 1935. Distribution functions and the Riemann zeta function. Trans. Amer. Math. Soc., 38(1), 48–88. Jungers, R., Protasov, V., and Blondel, V. 2009. Overlap-free words and spectra of matrices. Theoret. Comput. Sci., 410, 3670–3684. Kall´os, G. 1999. The structure of the univoque set in the small case. Publ. Math. Debrecen, 54(1–2), 153–164. Kall´os, G. 2001. The structure of the univoque set in the big case. Publ. Math. Debrecen, 59(3–4), 471–489. Kao, J.-Y., Rampersad, N., Shallit, J., and Silva, M. 2008. Words avoiding repetitions in arithmetic progressions. Theoret. Comput. Sci., 391, 126–137. Karhum¨aki, J., Lepist¨o, A., and Plandowski, W. 2002. Locally periodic versus globally periodic infinite words. J. Combin. Theory. Ser. A, 100, 250–264. Karhum¨aki, J., and Shallit, J. 2004. Polynomial versus exponential growth in repetition-free binary words. J. Combin. Theory Ser. A, 105(2), 335–347. Kari, J. 1992. The nilpotency problem of one-dimensional cellular automata. SIAM Journal on Computing, 21(3), 571–586. Kari, J. 1994. Reversibility and surjectivity problems of cellular automata. Journal of Computer and System Sciences, 48(1), 149–182. Kari, J. 1996. A small aperiodic set of Wang tiles. Discrete Math., 160, 259–264. Kari, J. 2005. Theory of cellular automata: a survey. Theoret. Comput. Sci., 334(1–3), 3–33. Kari, J. 2007. The tiling problem revisited (Extended Abstract). Pages 72–79 of: Durand-Lose, J., and Margenstern, M. (eds.), Machines, Computations, and Universality, 5th International Conference, MCU 2007, Orl´eans, France, September 10-13, 2007, Proceedings. Lecture Notes in Computer Science, vol. 4664. Springer. Kari, J. 2008. Undecidable properties on the dynamics of reversible one-dimensional cellular automata. Pages 3–14 of: Durand, B. (ed.), First Symposium on Cellular Automata ”Journ´ees Automates Cellulaires” (JAC 2008), Uz`es, France, April 21–25, 2008. Proceedings. MCCME Publishing House, Moscow. Kari, J. 2009. Tiling problem and undecidability in cellular automata. Pages 9158–9172 of: Meyers, R. A. (ed.), Encyclopedia of Complexity and Systems Science. Springer. Kari, J. 2011. Snakes and cellular automata: reductions and inseparability results. Pages 223– 232 of: Kulikov, A. S., and Vereshchagin, N. K. (eds.), Computer Science - Theory and Applications - 6th International Computer Science Symposium in Russia, CSR 2011, St. Petersburg, Russia, June 14–18, 2011. Proceedings. Lecture Notes in Computer Science, vol. 6651. Springer. Kari, J. 2012. Basic concepts of cellular automata. Pages 3–24 of: Rozenberg, G., B¨ack, T., and Kok, J. N. (eds.), Handbook of Natural Computing. Springer. Kari, J., and Ollinger, N. 2008. Periodicity and immortality in reversible computing. Pages 419–430 of: Ochmanski, E., and Tyszkiewicz, J. (eds.), Mathematical Foundations of Computer Science 2008, 33rd International Symposium, MFCS 2008, Torun, Poland, August 25–29, 2008, Proceedings. Lecture Notes in Computer Science, vol. 5162. Springer.

Kari, J., and Papasoglu, P. 1999. Deterministic aperiodic tile sets. Geometric and Functional Analysis GAFA, 9(2), 353–369. Kari, J. 2001. A counter example to a conjecture concerning synchronizing words in finite automata. EATCS Bulletin, 73, 146. Kari, J. 2003. Synchronizing finite automata on Eulerian digraphs. Theoret. Comput. Sci., 295(1-3), 223–232. ˇ y conjecture and the road coloring problem. In: Handbook Kari, J., and Volkov, M. 2014. Cern´ of Automata Theory. European Mathematical Society. to appear. K¨arki, T. 2008. Relational Words: Periodicity and Repetition Freeness. Ph.D. thesis, TUCS Volume 98, University of Turku. K¨arki, T. 2012. Repetition-freeness with cyclic relations and chain relations. Fund. Inform., 116, 157–174. K¨arkk¨ainen, J., Kempa, D., and Puglisi, S. J. 2013. Linear time Lempel–Ziv factorization: simple, fast, small. Pages 189–200 of: Combinatorial Pattern Matching. Lecture Notes in Computer Science, vol. 7922. Springer. K¨arkk¨ainen, J., and Sanders, P. 2003. Simple linear work suffix array construction. Pages 943–955 of: Baeten, J. C. M., Lenstra, J. K., Parrow, J., and Woeginger, G. J. (eds.), Proceedings of the 30th International Colloquium: Automata, Languages and Programming, ICALP 2003. Lecture Notes in Computer Science, vol. 2719. Springer. Kasteleyn, P. W. 1963. Dimer statistics and phase transitions. J. Mathematical Phys., 4, 287–293. K´atai, I., and Kall´os, G. 2001. On the set for which 1 is univoque. Publ. Math. Debrecen, 58(4), 743–750. Keller, B., and Scherotzke, S. 2011. Linear recurrence relations for cluster variables of affine quivers. Adv. Math., 228(3), 1842–1862. Keller, G. 1989. Markov extensions, zeta functions, and Fredholm theory for piecewise invertible dynamical systems. Trans. Amer. Math. Soc., 314, 433–497. Kempton, T. 2013. Counting β -expansions and the absolute continuity of Bernoulli convolutions. Monatsh. Math., 171(2), 189–203. Kempton, T. 2014. On the invariant density of the random β -transformation. Acta Math. Hungar., 142(2), 403–419. Kenyon, R. 2000. The planar dimer model with boundary: a survey. Pages 307–328 of: Directions in Mathematical Quasicrystals. CRM Monogr. Ser., vol. 13. Amer. Math. Soc., Providence, RI. Ker¨anen, V. 1992. Abelian squares are avoidable on 4 letters. Pages 41–52 of: Kuich, W. (ed.), Proc. 19th Intl. Conf. on Automata, Languages, and Programming (ICALP). Lecture Notes in Computer Science, vol. 623. Springer-Verlag. Khinchin, A. Y. 1964. Continued Fractions. The University of Chicago Press. Kitchens, B. P. 1998. Symbolic Dynamics. (One-sided, two-sided and countable state Markov shifts.) Universitext. Berlin: Springer-Verlag. Kleene, S. C. 1956. Representation of events in nerve nets and finite automata. Pages 3–42 of: Automata Studies. Princeton University Press. Knuth, D. 1998. The Art of Computer Programming, 3rd edition. Addison Wesley. Knuth, D. E., Morris, J., and Pratt, V. 1977. Fast pattern matching in strings. SIAM J. Comput., 6(2), 323–350. Ko, P., and Aluru, S. 2005. Space efficient linear time construction of suffix arrays. J. Discrete Algorithms, 3(2-4), 143–156. Kociumaka, T., Kubica, M., Radoszewski, J., Rytter, W., and Wale´n, T. 2012. A linear time algorithm for seeds computation. Pages 1095–1112 of: Rabani, Y. (ed.), Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA’2012.

Kolpakov, R., and Kucherov, G. 1999. Finding Maximal Repetitions in a Word in Linear Time. Pages 596–604 of: Proceedings of the 40th IEEE Annual Symposium on Foundations of Computer Science. New York: IEEE Computer Society Press. Kolpakov, R., and Kucherov, G. 2008. Searching for Gapped Palindromes. Pages 18–30 of: Ferragina, P., and Landau, G. M. (eds.), Combinatorial Pattern Matching, 19th Annual Symposium, Pisa, Italy, June 18–20, 2008. Lecture Notes in Computer Science, vol. 5029. Berlin: Springer. Komatsu, T. 2002. An approximation property of quadratic irrationals. Bull. Soc. Math. France, 130(1), 35–48. Komornik, V. 2011. Expansions in noninteger bases. Integers, 11B, Paper No. A9, 30. Komornik, V. 2012. Unique infinite expansions in noninteger bases. Acta Math. Hungar., 134(3), 344–355. Komornik, V., and Loreti, P. 1998. Unique developments in non-integer bases. Amer. Math. Monthly, 105(7), 636–639. Komornik, V., and Loreti, P. 1999. On the expansions in non-integer bases. Rend. Mat. Appl. (7), 19(4), 615–634 (2000). Komornik, V., and Loreti, P. 2002. Subexpansions, superexpansions and uniqueness properties in non-integer bases. Period. Math. Hungar., 44(2), 197–218. Komornik, V., and Loreti, P. 2007. On the topological structure of univoque sets. J. Number Theory, 122(1), 157–183. Komornik, V., Loreti, P., and Pedicini, M. 2000. An approximation property of Pisot numbers. J. Number Theory, 80(2), 218–237. Komornik, V., Lai, A. C., and Pedicini, M. 2011. Generalized golden ratios of ternary alphabets. J. Eur. Math. Soc. (JEMS), 13(4), 1113–1146. Komornik, V., Kong, D., and Li, W. 2014. Hausdorff dimension of univoque sets and Devil’s staircase. In preparation. Kong, D., and Li, W. 2014. On the Hausdorff dimension of unique beta expansions. Manuscript available electronically at arXiv:1401.6473. Kraaikamp, C., and Wu, J. 2004. On a new continued fraction expansion with non-decreasing partial quotients. Monatsh. Math., 143, 285–298. Kraaikamp, C., Schmidt, T., and Smeets, I. 2007. Tong’s spectrum for Rosen continued fractions. J. Th´eorie Nombres Bordeaux, 19(3), 641–661. Kraaikamp, C., Nakada, H., and Schmidt, T. 2009. Metric and arithmetic properties of mediant-Rosen maps. Acta Arith., 137(4), 295–324. Kraaikamp, C., Schmidt, T., and Smeets, I. 2010. Natural extensions for α -Rosen continued fractions. J. Math. Soc Japan, 62(2), 649–671. Krieger, D. 2008. Critical Exponents and Stabilizers of Infinite Words. Ph.D. thesis, University of Waterloo. Kuipers, L., and Niederreiter, H. 1974. Uniform Distribution of Sequences. Wiley. K˚urka, P. 1999. Zero-dimensional dynamical systems, formal languages, and universality. Theory Comput. Syst., 32(4), 423–433. Lai, A. C. 2011. Minimal unique expansions with digits in ternary alphabets. Indag. Math. (N.S.), 21(1–2), 1–15. Lee, K., and Schiffler, R. 2013. Positivity for cluster algebras. Manuscript available electronically at arXiv:1306.2415. ¨ Leutbecher, A. 1967. Uber die Heckeschen Gruppen G(λ ). Abhandlungen aus dem Mathematischen Seminar der Universit¨at Hamburg, 31, 199–205. ¨ Leutbecher, A. 1974. Uber die Heckeschen Gruppen G(λ ) II. Math. Annalen, 211, 63–86. Lhote, L., and Vall´ee, B. 2008. Gaussian laws for the main parameters of the Euclid algorithm. Algorithmica, 50(4), 497–554.

Li, Y., and Smyth, W. F. 2002. Computing the cover array in linear time. Algorithmica, 32(1), 95–106. Liardet, P., and Stambul, P. 2000. S´eries de Engel et fractions continu´ees. J. Th´eorie Nombres Bordeaux, 12(1), 37–68. Lind, D. A. 1984. The entropies of topological Markov shifts and a related class of algebraic integers. Ergodic Theory Dynam. Systems, 4(2), 283–300. Lind, D., and Marcus, B. 1995. An Introduction to Symbolic Dynamics and Coding. Cambridge University Press. Lind, D., Schmidt, K., and Ward, T. 1990. Mahler measure and entropy for commuting automorphisms of compact groups. Invent. Math., 101(3), 593–629. Lothaire, M. 1983. Combinatorics on Words. Encyclopedia of Mathematics and Its Applications, vol. 17. Addison-Wesley. Lothaire, M. 1997. Combinatorics on Words, 2nd edition. Cambridge University Press. Lothaire, M. 2002. Algebraic Combinatorics on Words. Encyclopedia of Mathematics and Its Applications, vol. 90. Cambridge University Press. Lothaire, M. 2005. Applied Combinatorics on Words. Encyclopedia of Mathematics and Its Applications, vol. 105. Cambridge University Press. L¨u, F., Tan, B., and Wu, J. 2014. Univoque sets for real numbers. Fund. Math., 227(1), 69–83. Lucas, E. 1891. Th´eorie des nombres. Gauthier-Villars. Lukkarila, V. 2009. The 4-way deterministic tiling problem is undecidable. Theoret. Comput. Sci., 410(16), 1516–1533. MacDonald, M., and Ambrose, C. M. 1993. A novel gene containing a trinucleotide repeat that is expanded and unstable on Huntington’s disease chromosomes. Cell, 72(6), 971–983. Main, M. G. 1989. Detecting leftmost maximal periodicities. Discrete Appl. Math., 25, 145– 153. Main, M. G., and Lorentz, R. J. 1984. An O(n log n) algorithm for finding all repetitions in a string. J. Algorithms, 5(3), 422–432. Makover, E., and McGowan, J. 2006. An elementary proof that random Fibonacci sequences grow exponentially. J. Number Theory, 121, 40–44. Maruoka, A., and Kimura, M. 1976. Condition for injectivity of global maps for tessellation automata. Information and Control, 32(2), 158–162. Matsubara, W., Kusano, K., Ishino, A., Bannai, H., and Shinohara, A. 2008. New lower bounds for the maximum number of runs in a string. Pages 140–145 of: Holub, J., and Zd´arek, J. (eds.), Proceedings of the Prague Stringology Conference. Prague Stringology Club, Department of Computer Science and Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague. Mauldin, R. D., and Simon, K. 1998. The equivalence of some Bernoulli convolutions to Lebesgue measure. Proc. Amer. Math. Soc., 126(9), 2733–2736. Meyerovitch, T. 2010. Growth-type invariants for Z d subshifts of finite type and arithmetical classes of real numbers. To appear in Inventiones Mathematica. Meyerovitch, T., and Pavlov, R. 2014. On independence and entropy for high-dimensional isotropic subshifts. Proc. Lond. Math. Soc. (3), 109(4), 921–945. Mignosi, F., and Pirillo, G. 1992. Repetitions in the Fibonacci infinite word. RAIRO Inform. Th´eor. App., 26, 199–204. Mignosi, F., Restivo, A., and Sciortino, M. 2002. Words and forbidden factors. Theoret. Comput. Sci., 273, 99–117. Minkowski, H. 1904. Zur Geometrie der Zahlen. Pages 164–173 of: Verhandlungen des III. internationalen Mathematiker-Kongresses in Heidelberg. Mohammad-Noori, M., and Currie, J. D. 2007. Dejean’s conjecture and Sturmian words. European J. Combinatorics, 28, 876–890.

Moore, D., and Smyth, W. F. 1994. Computing the covers of a string in linear time. Pages 511–515 of: Sleator, D. D. (ed.), Proceedings of the Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA’94. Moore, E. F. 1962. Machine models of self-reproduction. Pages 17–33 of: Bellman, R. (ed.), Mathematical Problems in Biological Sciences (Proceedings of Symposia in Applied Mathematics). American Mathematical Society. Morier-Genoud, S. 2012. Arithmetics of 2-friezes. J. Algebraic Combin., 36(4), 515–539. Morse, M. 1921. Recurrent geodesics on a surface of negative curvature. Trans. Amer. Math. Soc., 22, 84–100. Moser, R., and Tardos, G. 2010. A constructive proof of the general Lov´asz local lemma. J. Assoc. Comput. Mach., 57, Article 11. Moulin-Ollagnier, J. 1992. Proof of Dejean’s conjecture for alphabets with 5, 6, 7, 8, 9, 10 and 11 letters. Theoret. Comput. Sci., 95, 187–205. Mousavi, H., and Shallit, J. 2014. Mechanical proofs of properties of the Tribonacci word. Available electronically at arXiv:1407.5841. Muthukrishnan, S., and Ramesh, H. 1995. String matching under a general matching relation. Inform. and Comput., 122(1), 140–148. Myers, D. 1974. Nonrecursive tilings of the plane. II. J. Symbolic Logic, 39, 286–294. Myhill, J. 1963. The converse of Moore’s Garden-of-Eden theorem. Proceedings of The American Mathematical Society, 14, 685–685. Nicaud, C. 2014. Fast synchronization of random automata. Manuscript available electronically at arXiv:1404.6962. Noonan, J., and Zeilberger, D. 1999. The Goulden–Jackson cluster method: extensions, applications and implementations. J. Diff. Equations. Appl., 5, 355–377. Novikov, P. S., and Adian, S. I. 1968. Infinite periodic groups I, II, III. Izv. Akad. Nauk. SSSR Ser. Mat., 32, 212–244, 251–524, 709–731. O’Brien, G. L. 1981. The road-colouring problem. Israel J. Math., 39(1–2), 145–154. Ochem, P., Rampersad, N., and Shallit, J. 2008. Avoiding approximate squares. Internat. J. Found. Comp. Sci., 19, 633–648. Ochem, P. 2006. A generator of morphisms for infinite words. ITA, 40(3), 427–441. Pansiot, J.-J. 1981. The Morse sequence and iterated morphisms. Inf. Process. Lett., 12(2), 68–70. Pansiot, J.-J. 1984. A propos d’une conjecture de F. Dejean sur les r´ep´etitions dans les mots. Discrete Appl. Math., 7, 297–311. Parry, W. 1960. On the β -expansions of real numbers. Acta Math. Acad. Sci. Hungar., 11, 401–416. Parry, W. 1964. Representations for real numbers. Acta Math. Acad. Sci. Hung., 15, 95–105. Pavlov, R. 2012. Approximating the hard square entropy constant with probabilistic methods. Ann. Probab., 40(6), 2362–2399. Pavlov, R., and Schraudner, M. 2015a. Classification of sofic projective subdynamics of multidimensional shifts of finite type. Trans. Amer. Math. Soc., 367(5), 3371–3421. Pavlov, R., and Schraudner, M. 2015b. Entropies realizable by block gluing Z d shifts of finite type. J. Anal. Math., 126, 113–174. Pedicini, M. 2005. Greedy expansions and sets with deleted digits. Theoret. Comput. Sci., 332(1–3), 313–336. Pegden, W. 2011. Highly nonrepetitive sequences: winning strategies from the Local Lemma. Random Structures and Algorithms, 38, 140–161. Perrin, D. 1995. Symbolic dynamics and finite automata. Pages 94–104 of: Wiedermann, J., and H´ajek, P. (eds.), Proc. 20th Symposium, Mathematical Foundations of Computer Science 1995. Lecture Notes in Computer Science, vol. 969. Springer-Verlag.

Perrin, D., and Pin, J.-E. 2004. Infinite words. Automata, semigroups, logic and games. Amsterdam: Elsevier/Academic Press. Perrin, D., and Sch¨utzenberger, M.-P. 1992. Synchronizing prefix codes and automata and the road coloring problem. Pages 295–318 of: Symbolic Dynamics and its Applications. Contemp. Math., vol. 135. Amer. Math. Soc. Perron, O. 1913. Die Lehre von den Kettenbr¨uchen. Leipzig: B. G. Teubner. Perron, O. 1960. Irrationalzahlen. Walter de Gruyter. Petrov, A. N. 1988. A sequence that avoids every complete word. Mat. Zametki, 44, 517–522. Translation in Math. Notes 44 (1988), 764–767. ˇ y. Th`ese de 3`eme Pin, J.-E. 1978. Le probl`eme de la synchronisation et la conjecture de Cern´ cycle, Universit´e Paris VI. Pin, J.-E. 1983. On two combinatorial problems arising from automata theory. Pages 535–548 of: Combinatorial mathematics (Marseille-Luminy, 1981). North-Holland Math. Stud., vol. 75. North-Holland, Amsterdam. Pleasants, P. A. B. 1970. Non-repetitive sequences. Proc. Cambridge Phil. Soc., 68, 267–274. Prouhet, E. 1851. M´emoire sur quelques relations entre les puissances des nombres. C. R. Acad. Sci. Paris, 33, 225. Puglisi, S. J., Simpson, J., and Smyth, W. F. 2008. How many runs can a string contain? Theoret. Comput. Sci., 401(1-3), 165–171. Puzyna, J. 1898. Teorya funkcyj analitycznych, t. I. Pytheas Fogg, N. 2002. Substitutions in Dynamics, Arithmetics and Combinatorics. Lecture Notes in Mathematics, vol. 1794. Springer-Verlag. Ed. by V. Berth´e and S. Ferenczi and C. Mauduit and A. Siegel. Queff´elec, M. 1987. Substitution Dynamical Systems – Spectral Analysis. Lecture Notes in Mathematics, vol. 1294. Springer-Verlag. Rabin, M. O., and Scott, D. 1959. Finite automata and their decision problems. IBM J. Res. Develop., 3, 115–125. Rademacher, H. 1964. Lectures on Elementary Number Theory. Blaisdell Publishing Company. Rampersad, N. 2011. Further applications of a power series method for pattern avoidance. Electronic J. Combinatorics, 18, P134. Rampersad, N., Shallit, J., and Shur, A. 2011. Fife’s theorem for 73 -powers. Pages 189–198 of: Ambroz, P., Holub, S., and Masakova, Z. (eds.), WORDS 2011. Rampersad, N., Shallit, J., and Wang, M.-w. 2005. Avoiding large squares in infinite binary words. Theoret. Comput. Sci., 339(1), 19–34. Rao, M. 2011. Last cases of Dejean’s conjecture. Theoret. Comput. Sci., 412(27), 3010–3018. Rauzy, G. 1984. Des mots en arithm´etique. Pages 103–113 of: Journ´ees d’Avignon: Th´eorie des Langages et Complexit´e des Algorithmes. Universit´e Claude Bernard (Lyon), Publications du D´epartement de Math´ematiques. R´enyi, A. 1957. Representations for real numbers and their ergodic properties. Acta Math. Acad. Sci. Hung., 8, 477–493. Restivo, A., and Salemi, S. 1985. Overlap free words on two symbols. Pages 198–206 of: Nivat, M., and Perrin, D. (eds.), Automata on Infinite Words. Lecture Notes in Computer Science, vol. 192. Springer-Verlag. Reutenauer, C. 1997. N-rationality of zeta functions. Adv. in Appl. Math., 18(1), 1–17. Richomme, G., Saari, K., and Zamboni, L. Q. 2011. Abelian complexity of minimal subshifts. J. Lond. Math. Soc. (2), 83(1), 79–95. Rigo, M. 2014. Formal Languages, Automata and Numeration Systems, Introduction to Combinatorics on Words. Vol. 1. ISTE, Wiley.

Rittaud, B. 2007. On the average growth of random Fibonacci sequences. J. Integer Sequences, 10(07.2.4).
Rittaud, B. 2008. Suites de Fibonacci aléatoires et fractions continues de Rosen. Habilitation à diriger des recherches.
Robinson, R. M. 1971. Undecidability and nonperiodicity for tilings of the plane. Invent. Math., 12, 177–209.
Rogers, Jr., H. 1967. Theory of Recursive Functions and Effective Computability. New York: McGraw-Hill Book Co.
Rosen, D. 1954. A class of continued fractions associated with certain properly discontinuous groups. Duke Math. J., 21, 549–563.
Rowen, L. 1988. Ring Theory. Vol. II. Boston: Academic Press.
Ruelle, D. 1978. Thermodynamic Formalism. Addison-Wesley.
Rytter, W. 2006. The number of runs in a string: improved analysis of the linear upper bound. Pages 184–195 of: STACS.
Rytter, W. 2007. The number of runs in a string. Inf. Comput., 205(9), 1459–1469.
Saari, K. 2007. Everywhere α-repetitive sequences and Sturmian words. Pages 362–372 of: Diekert, V., Volkov, M., and Voronkov, A. (eds.), CSR 2007. Lecture Notes in Computer Science, vol. 4649. Springer-Verlag.
Sakarovitch, J. 2003. Éléments de théorie des automates. Vuibert. English corrected edition: Elements of Automata Theory, Cambridge University Press, 2009.
Sakarovitch, J. 2009. Elements of Automata Theory. Cambridge University Press.
Sardinas, A. A., and Patterson, C. W. 1953. A necessary and sufficient condition for the unique decomposition of coded messages. IRE Internat. Conv. Rec., 8, 104–108.
Schaeffer, L. 2013. Abelian powers in automatic sequences are not always automatic. Talk at CanaDAM 2013 Conference, St. John's, Newfoundland (June 2013).
Schaeffer, L., and Shallit, J. 2012. The critical exponent is computable for automatic sequences. Internat. J. Found. Comput. Sci., 23(8), 1611–1626.
Schmidt, W. M. 1972. Irregularities of distribution VII. Acta Arith., 21, 45–50.
Schraudner, M. H. 2015. One-dimensional projective subdynamics of uniformly mixing Zd shifts of finite type. Ergodic Theory and Dynamical Systems, FirstView(3), 1–38.
Schützenberger, M.-P. 1967. On synchronizing prefix codes. Inform. and Control, 11, 396–401.
Schweiger, F. 1995. Ergodic Theory of Fibred Systems and Metric Number Theory. Clarendon Press.
Séébold, P. 1983. Sur les morphismes qui engendrent des mots infinis ayant des facteurs prescrits. Pages 301–311 of: Cremers, A. B., and Kriegel, H. (eds.), Theoretical Computer Science, 6th GI-Conference, Dortmund, Germany, January 5–7, 1983, Proceedings. Lecture Notes in Computer Science, vol. 145. Springer.
Shallit, J. 2004. Simultaneous avoidance of large squares and fractional powers in infinite binary words. Int. J. Found. Comput. Sci., 15, 317–327.
Shallit, J. 2008. A Second Course in Formal Languages and Automata Theory. Cambridge University Press.
Shallit, J. 2011. Fife's theorem revisited. Pages 397–405 of: Mauri, G., and Leporati, A. (eds.), DLT 2011. Lecture Notes in Computer Science, vol. 6795. Springer-Verlag.
Shallit, J. 2013. Decidability and enumeration for automatic sequences: a survey. Pages 49–63 of: Bulatov, A. A., and Shur, A. M. (eds.), CSR 2013. Lecture Notes in Computer Science, vol. 7913. Springer-Verlag.
Shmerkin, P. 2014. On the exceptional set for absolute continuity of Bernoulli convolutions. Geom. Funct. Anal., 24(3), 946–958.

Shur, A. M. 2000. The structure of the set of cube-free Z-words in a two-letter alphabet (Russian). Izv. Ross. Akad. Nauk Ser. Mat., 64, 201–224. English translation in Izv. Math. 64 (2000), 847–871.
Shur, A. 2012. Numerical values of the growth rates of power-free languages. Available electronically at arXiv:1009.4415.
Shur, A. M., and Gamzova, Y. V. 2004. Partial words and the period interaction property. Izv. Ross. Akad. Nauk Ser. Mat., 68(2), 191–214.
Sidorov, N. 2003a. Almost every number has a continuum of β-expansions. Amer. Math. Monthly, 110(9), 838–842.
Sidorov, N. 2003b. Arithmetic dynamics. Pages 145–189 of: Topics in Dynamics and Ergodic Theory. London Math. Soc. Lecture Note Ser., vol. 310. Cambridge University Press.
Sidorov, N. 2007. Combinatorics of linear iterated function systems with overlaps. Nonlinearity, 20(5), 1299–1312.
Sidorov, N. 2009. Expansions in non-integer bases: lower, middle and top orders. J. Number Theory, 129(4), 741–754.
Sierpiński, W. 1911. Sur quelques algorithmes pour développer les nombres réels en séries. C. R. Soc. Sci. Varsovie, 4, 56–77.
Simpson, J. 2010. Modified Padovan words and the maximum number of runs in a word. Australasian J. of Comb., 46, 129–145.
Simpson, S. G. 2014. Medvedev degrees of two-dimensional subshifts of finite type. Ergodic Theory Dynam. Systems, 34(2), 679–688.
Smyth, W. F. 2003. Computing Patterns in Strings. Pearson Education.
Solomyak, B. 1994. Conjugates of beta-numbers and the zero-free domain for a class of analytic functions. Proc. London Math. Soc. (3), 68(3), 477–498.
Solomyak, B. 1995. On the random series ∑ ±λ^n (an Erdős problem). Ann. of Math. (2), 142(3), 611–625.
Sós, V. T. 1958. On the distribution mod 1 of the sequence nα. Ann. Univ. Sci. Budapest Eötvös Sect. Math., 1, 127–134.
Steinberg, B. 2011. The averaging trick and the Černý conjecture. Int. J. Found. Comput. Sci., 22(7), 1697–1706.
Stern, M. 1858. Ueber eine zahlentheoretische Funktion. J. Reine Angew. Math., 55, 193–220.
Sudkamp, T. A. 1997. Languages and Machines, An Introduction to the Theory of Computer Science. Addison-Wesley.
Surányi, J. 1958. On the distribution mod 1 of the sequence nα. Ann. Univ. Sci. Budapest Eötvös Sect. Math., 1, 107–111.
Sutner, K. 1991. De Bruijn graphs and linear cellular automata. Complex Systems, 5(1), 19–30.
Świerczkowski, S. 1959. On successive settings of an arc on the circumference of a circle. Fund. Math., 46, 187–189.
Tamura, J.-I. 1985. Explicit formulae for Cantor series representing quadratic irrationals. Pages 369–381 of: Number Theory and Combinatorics. Japan 1984 (Tokyo, Okayama and Kyoto, 1984). World Scientific.
Thue, A. 1906a. Über unendliche Zeichenreihen. Norske vid. Selsk. Skr. Mat. Nat. Kl., 7, 1–22. Reprinted in Selected Mathematical Papers of Axel Thue, T. Nagell (ed.), Universitetsforlaget, Oslo, 1977, pp. 139–158.
Thue, A. 1906b. Über unendliche Zeichenreihen. Norske vid. Selsk. Skr. I. Mat. Nat. Kl. Christiana, 7, 1–22.
Thue, A. 1912. Über die gegenseitige Lage gleicher Teile gewisser Zeichenreihen. Norske vid. Selsk. Skr. Mat. Nat. Kl., 1, 1–67. Reprinted in Selected Mathematical Papers of Axel Thue, T. Nagell, editor, Universitetsforlaget, Oslo, 1977, pp. 413–478.

Tijdeman, R., and Zamboni, L. Q. 2003. Fine and Wilf words for any periods. Indag. Math. (N.S.), 14(1), 135–147.
Trahtman, A. 2007. The Černý conjecture for aperiodic automata. Discrete Math. & Theor. Comput. Sci., 9(2).
Trahtman, A. N. 2008. Synchronizing road coloring. Pages 43–53 of: Fifth IFIP International Conference on Theoretical Computer Science – TCS 2008. IFIP Int. Fed. Inf. Process., vol. 273. New York: Springer.
Trahtman, A. 2009. The road coloring problem. Israel J. Math., 172, 51–60.
Turing, A. M. 1936. On computable numbers, with an application to the Entscheidungsproblem. Proc. Lond. Math. Soc., 42, 230–265.
Vallée, B. 1997. Opérateurs de Ruelle-Mayer généralisés et analyse en moyenne des algorithmes de Gauss et d'Euclide. Acta Arith., 81, 101–144.
Vallée, B. 1998. Dynamique des fractions continues à contraintes périodiques. J. Number Theory, 2, 183–235.
Vallée, B. 2006. Euclidean dynamics. Disc. and Cont. Dyn. Syst., 15(1), 281–352.
van Ravenstein, T. 1985. On the discrepancy of the sequence formed from multiples of an irrational number. Bull. Austral. Math. Soc., 329–338.
Vichniac, G. Y. 1984. Simulating physics with cellular automata. Physica D: Nonlinear Phenomena, 10(1–2), 96–116.
Vinberg, È. B. 1971. Discrete linear groups that are generated by reflections. Izv. Akad. Nauk SSSR Ser. Mat., 35, 1072–1112.
Viswanath, D. 2000. Random Fibonacci sequences and the number 1.13198824.... Math. Comp., 69(231), 1131–1155.
Volkov, M. V. 2008. Synchronizing strongly connected digraphs. In: Around the Černý Conjecture, International Workshop. University of Wroclaw.
Volkov, M. V. 2009. Synchronizing automata preserving a chain of partial orders. Theoret. Comput. Sci., 410(37), 3513–3519.
Walters, P. 1982. An Introduction to Ergodic Theory. New York: Springer-Verlag.
Wang, H. 1961. Proving theorems by pattern recognition II. Bell System Technical Journal, 40, 1–42.
Weiss, B. 1973. Subshifts of finite type and sofic systems. Monatsh. Math., 77, 462–474.
Witten, I. H., Moffat, A., and Bell, T. C. 1999. Managing Gigabytes: Compressing and Indexing Documents and Images, 2nd Edition. Morgan Kaufmann.
Yu, S. 1997. Regular languages. Pages 41–110 of: Rozenberg, G., and Salomaa, A. (eds.), Handbook of Formal Languages, vol. 1. Springer-Verlag.
Zaremba, S. K. 1972. La méthode des "bons treillis" pour le calcul des intégrales multiples. Pages 39–119 of: Applications of Number Theory to Numerical Analysis (Proc. Sympos., Univ. Montreal, Montreal, Que., 1971). Academic Press.
Zheng, X., and Weihrauch, K. 2001. The arithmetical hierarchy of real numbers. MLQ Math. Log. Q., 47(1), 51–65.
Zimin, A. I. 1982. Blocking sets of terms. Mat. Sbornik, 119, 363–375, 447. In Russian. English translation in Math. USSR Sbornik, 47 (1984), 353–364.
Ziv, J., and Lempel, A. 1977. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory, 23(3), 337–343.

Notation index

2X (power set), 182 X ,s Y (X is strongly reducible to Y ), 321 X ,w Y (X is Weakly reducible to Y ), 321 [n]q (least positive residue), 191 [w] (cylinder), 15, 245, 299 x ↑ y (compatible partial words), 179 ? (question mark function), 67 [1,(−1 : 1)i−1 ]λ , 87 [a0 ,... ,an ]1 , 70 [a0 ,a1 ,...]λ , 87 [a0 ,e1 : a1 ,e2 : a2 ,...]λ , 87 [(a j )mj=0 ,(a j ) j>m ], 87 [(an )n ]λ , 87 ]q1 ,q2 ,q3 ,... [, 73 $r1 ,... ,rn % (generated similarity relation), 178 A$T % (Arnold measure), 411 Bt1 ,t2 (p,q) (bound of interaction), 190 c (coding function), 65 cD (restriction of configuration c in domain D), 242 ci,D (pattern of domain D extracted from position i in configuration c), 242 C(x), 67 d (distance on words), 3 Δn (complexity class of real numbers), 343 D$T % , Δ$T % (discrepancy), 409 D(w) (domain of a partial word), 179

ε (the empty word), 1 ECA (elementary cellular automaton), 243 F⊥ (restriction of CA F on ⊥-finite configurations), 247 F, 62 ˚ 62 F, FN , 64 FP (restriction of CA F on periodic configurations), 245 fσ (X) (upper frequency of σ in X), 344 fσ (X,n) (upper frequency of σ in n-cubes in X), 344

fσ (a) (frequency of σ in a), 344 FW (p,q) (extremal relational Fine and Wilf words), 201 Γ$T % (distances in a Kronecker sequence), 408 Gb (greedy map), 20 h(X) (topological entropy of X), 350 HM,s (constrained transfer operator), 417 Hs (plain transfer operator), 417 Hs (transfer operator), 403 H(s,t) (extended transfer operator), 426 (τ )

Hs (weighted transfer operator), 426 H(w) (set of holes), 179 I (closed interval), 62 Iw , 64 [[i, j]] (interval of integers), 1 ι (identity relation), 178 k (= k − 1, for k-medieties), 82 κ (a) (Kolmogorov complexity of a), 325 K (α ) (Kronecker sequence), 401, 404 K$T % (α ) (truncated Kronecker sequence), 405 LFT (linear fractional transformation), 424 L(.) (language), 3, 299, 318 Lc (.) (complementary language), 299, 318 L≤n (concatenation of at most n words in L), 6 Ln (x) (factors of length n), 3 L∗ (Kleene star), 6 Ln (power of a language), 6 LPF (longest previous factor occurrence), 171 LPrF (longest previous reverse factor occurrence), 171 λk (k-Rosen continued fractions), 86 M, 62 u˜ (mirror image), 2 L˜ (mirror image), 6 μβ (lazy measure), 20 NFA (non-deterministic finite automaton), 258 Nn (X) (number of words of shape [0,n − 1]d in X), 350

NPunary (complexity class NP in unary), 356 νM (Hausdorff measure), 415 Ω (universal relation), 178 uω (concatenation), 4 ⊕, 62 ⊕i, 84

π (w) (minimal period), 176 πD (higher block presentation with shape D), 304 πR,e (x) (minimal external R-period), 187 πR,g (x) (minimal global R-period), 187 πR,l (x) (minimal local R-period), 187 Πsquare (set of square periods of a subshift), 356 Πn (complexity class of real numbers), 343 R (end-first tree), 76 ρ (n) (maximal number of runs in a word of length n), 162 ri (k-Rosen mediety), 86 Rk , 89 RY (restriction of R on Y ), 177 S (shift), 14 Su (shift), 299 Σ∗d (set of d-dimensional patterns), 297 σM (Hausdorff dimension), 415 Σn (complexity class of real numbers), 343 stab(x) (stabiliser of x), 356

τt (translation), 244 T (w) (w word), 66 v (value function), 65 V$T % (covered space), 408 ← − (w finite word), 71 w w (w word), 66 w (companion word), 179 x (x number), 66 X(.) (subshift defined by forbidden patterns), 245 X(·) (subshift defined by forbidden patterns), 297 X + (·) (subshift defined by allowing patterns), 299


General index

2-Toeplitz family, 346 sequence, 347 a.e., see almost everywhere abelian cube, 123 square, 123 Aberkane, A., 146, 149 accessible state, 8 additive form (of the Euclidean algorithm), 70 function, 378 Adian, S. I., 101 adjacency matrix, 124 Adler, R. L., 229 admissible frontier, 362 pattern, 297 Akiyama, S., 57 Allouche, J.-P., 43, 108, 110, 144, 146, 150 almost everywhere, 17 Alon, N., 147 alphabet, 1 integer, 160 alteration relation, 183 aperiodic, 230 automaton, 216 graph, 230 tile set, 263, 307 Apostolico, A., 158, 167 appearance (of a pattern), 297 approximate square, 147 arithmetic progression, 120, 147, 148 arithmetic subsequence, 120 Arnold measure, 402, 410, 411, 418, 420, 435 Arnold, V. I., 410 Arnoux, P., 62 Assem, I., 360 Aubrin, N., 329 augmenting word, 223

automatic sequence, 143 automaton, 107, 146, 214 aperiodic, 216 circular, 221 complete, 9 complete deterministic, 214 deterministic, 8, 214 equivalent, 230 irreducible, 214 level, 221, 233 one cluster, 221 strongly transitive, 221 synchronised, 214 trim, 8 B´eal, M.-P., 214, 237 Baatz, M., 54 Badkobeh, G., 154, 155 Baiocchi, C., 35 Baker, S., 56 Baladi, V., 403, 439 balanced, 132 basic sequence, 429, 430, 436 basis, 372 Bean, D. A., 112, 147 Beck, J., 113, 147 Bell, J. P. , 149 Berger’s theorem, 306, 317 Berger, R., 263, 267, 306 Bergeron, F., 361 Berlinkov, M. V., 213, 214 Berman, S., 360, 378 Bernoulli measure, 248, 252 Berstel, J., 146, 175, 179, 225, 365 β -expansion, 14, 19 greedy, 19 lazy, 19 β -representation, 14 β -shift, 15 β -transformation, 13 Birkhoff ergodic theorem, 17, 20

General index Blanchet-Sadri, F., 175 block, 116, 125 Blondel, V., 149 Boasson, L., 175, 179 Bodini, O., xvii border, 152 Borwein, P., 57 bound of t1 -t2 interaction, 190 Bousquet-M´elou, M., 366 Brandenburg, F.-J., 146, 147 Breslauer, D., 168 Brocot, A., 60 Brown, S., 146 Bruy`ere, V., 150 B¨uchi, J. R., 150 Bugeaud, Y., 62 bunch, 231 Burnside problem, 101 Burton, R., 62 CA (cellular automaton), 243 Caldero, P., 359 Cantor set, 4, 47, 415, 425 Cardano, G., 93 Carpi, A., 147, 148, 214, 226 Cartan matrix, 380 ceiling function, 13 cellular automaton, 243, 342, 357 balance theorem, 251, 256 bijective, 247 CA rule controlled xor, 256, 286 Game-Of-Life, 241, 254, 255, 294 majority, 252, 254 Q2R, 248, 252 rule 110, 243, 250, 252, 253, 257–259 Snake xor, 286 traffic, 243, 254 xor, 243, 253, 256 configuration, 242 asymptotic, 252 finite, 247 fixed point, 245 fully periodic, 244 Garden of Eden, 249, 295 periodic, 244 temporally periodic, 245, 295 conservation law, 249 elementary, 243 entropy, 293 equicontinuous, 293 Garden of Eden theorem, 252 injective, 247, 253, 259 invariant measure, 252 inverse automaton, 247 local rule, 243

neighbourhood, 243 Moore, 243 radius- 12 , 243 radius-r, 243 von Neumann, 243 nilpotent, 280, 295 orphan, 249 periodic, 288, 295 permutive, 253 preinjective, 252, 254 reversible, 247, 253 sensitive, 293 space-time diagram, 243, 289 stable state, 247 state, 242 surjective, 247, 253, 254, 256, 259 Wolfram number, 243 ˇ y’s conjecture, 215 Cern´ ˇ y, J., 215 Cern´ Cesaratto, E., 402 Chairungsee, S., 174 Chapoton, F., 397 ´ 144, 150 Charlier, E., Chen, A., 329 Chen, G., 160 child ith left, 89 Christoffel word, 92 standard factorisation, 93 Chuquet, N., 60 clopen set, 16 cluster, 136, 221, 232 generating function, 136 method, 135, 138, 149 co-accessible state, 8 code, 182, 238 Huffman, 239 maximal, 238 prefix, 238 R-, 183 (R,S)-, 183 codewords, 238 coding, 64 codon, 180 compactness, 245 companion word, 179 compatibility relation, 177 compatible, 177 complementary language of a set, 318 of a subshift, 299 complete automaton, 9 computable function, 317, 319 concatenation, 2

configuration, 242, 297 finite, 247 fully periodic, 244 periodic, 244, 306 square period of, 306 temporally periodic, 245, 295 congruence stable, 231 stable pair, 231 conjugacy, 262 conjugate digit, 39 expansion, 39 letter, 85 of a word, 66 continued fraction expansion, 60, 70, 401, 405, 423 α -continued fraction, 70 backward, 76 beginning continuants, 405, 424, 431 depth, 405, 416, 423 digits, 402, 405, 423 ending continuants, 424, 431 partial quotients, 402 remainders, 424 Rosen, 61 Conway, J., 254, 360 Cosnard, M., 43 cost balance, 414 balanced, 414, 428, 430, 434 digit-cost, 413, 426, 434 elementary, 413 extremal, 434 imbalance, 414, 421, 436, 437 ordinary, 434 standard, 434 unbalanced, 414, 417, 428, 436 cover, 151, 167 covered space, 402, 408, 418, 419, 435 Coxeter, H. S. M., 360 critical exponent, 144, 146 position, 194 Crochemore, M., 147, 154, 158, 160, 174, 205 cube, 102 abelian, 123 cubefree word, 102 Culik, II, K., 231, 263 Currie, J. D., 108, 110, 146–149, 151 Curtis, M, 301 Curtis–Hedlund–Lyndon theorem, 247 cylinder, 15, 245, 299 D’Alessandro, F., 214

Daireaux, B., 408 Dajani, K., 19, 21, 22, 62 Damanik, D., 146 Dar´oczy, Z., 35, 54 Daud´e, H., 421 de Bruijn graph, 257 diagonal, 259 diamond, 260 pair graph, 258 de la Rue, T., 62 de Luca, A., 146, 201, 203 de Vries, M., 19, 22, 46 de Vries–Komornik constant, 46 decidability, 143 decision problem CA INJECTIVITY, 260, 287 CA NILPOTENCY, 280, 292 CA PERIODICITY, 288 CA SURJECTIVITY, 261, 289 C OMPLETION PROBLEM, 294 D OMINO PROBLEM, 263, 305 E MPTINESS, 261, 305 F INITE TILING PROBLEM, 294 G ARDEN OF E DEN P ROBLEM, 295 P ERIODIC TILING PROBLEM, 279 T ILING PROBLEM WITH A SEED TILE, 266, 305 T ILING PROBLEM, 261, 263, 274, 292 T URING MACHINE HALTING ON BLANK TAPE, 264 decoder, 238 of a prefix code, 238 Dejan, F., x Dejean’s conjecture, 147 Dejean’s theorem, 114, 116 Dejean, F., 115, 147, 151 Dekking, F. M., 125, 127, 146 dependency digraph, 113 DFA, 8 Di Francesco, P., 360 diagonal, 259 diagonal ray, 369 diamond, 260 directing vector of a ray, 369 Dirichlet series, 403, 428, 437 discrepancy, 401, 402, 409, 418, 420, 435 discriminant, 391 distance, 3 ultrametric, 4 distances, 401, 402, 404, 405, 408, 421, 422 Dolgopyat bounds, 429, 440 Dolgopyat, D., 429 dominating eigenvalue, 368 domino tilings, 298 Downarowicz, T., 148

General index Dubuc, L., 214, 221 Durand, B., 289, 329 dynamical analysis, 403, 408, 423, 427 dynamical system conjugacy, 16 effective, 325 measure-theoretic, 16 subshift, 15, 245, 299 symbolic, 15 topological isomorphism, 16 E -admissible, 297 edge, 214 a-edge, 231 label of, 214 effective dynamical system, 341 set, 318 subset, 341 subshift, 326 effectively closed set, 318 Ehrenfeucht, A., 112, 147, 167 eigenvalues of a rational sequence, 368 Eilenberg, S., 216, 368 emptiness problem (tilings and SFTs), 261, 306, 317 empty word, 1 end-first algorithm, 76 tree, 76 Engel, F., 61 Engel–Sierpi´nski mediety, 74 Entringer, R. C., 148 entropy dimension, upper and lower, 357 entropy (of a subshift), 350 enumeration overlapfree words, 134 squarefree words, 134 Eppstein,D., 213 equivalence relation, 177 equivariant, 300 Erd˝os, P., 18, 31, 56, 123 ergodic, 16 Birkhoff theorem, 17 individual ergodic theorem, 17 Euclidean algorithm, 70 additive form, 70 Euclidean dynamical system, 403, 417, 423 Evdokimov, A. A., 123 even shift, 246 eventually periodic word, 3 excess, 119 expansion, 19, 73 β -, 14 Engel–Sierpi´nski, 73

greedy, 19 lazy, 19 exponent of a word, 103 exponential polynomial, 368 extension (of a subshift), 300 extension problem (tilings and SFTs), 306 extension–reduction, 308 external R-period, 187 word, 187 extremal Fine and Wilf word, 201 relational Fine and Wilf word, 201 f-factorisation, 160 factor forbidden, 146 of a word, 2, 3 proper, 2 subshift, 300 factor map, 300 factorial language, 6 factorisation X-factorisation, 182 f-factorisation, 160 standard, 161 Ziv–Lempel, 160 Fekete’s Lemma, 344 Feng, D.-J., 22 Fibonacci word, 5, 146 fibred system, 14, 63 fidelity relation, 183 Fife’s theorem, 107 Fife, E. D., 107, 146 final state, 7 Fine, N. J., 175 finite automaton, 8, 258 finite tiling, 294 finite-repetition threshold, 155 Flajolet, P., 403, 421 Fomin, S., 359 fractional power, 103, 147 Fraenkel, A. S., 153 frieze, 370 of type G, 371 patterns, 360 fringe, 390 frontier, 362 full shift, 15, 246 Game-Of-Life, 241, 254, 255, 294 Garden of Eden, 249, 295 Garden of Eden theorem, 252 Garsia number, 32 Garsia, A. M., 32 Gauss map, 95, 403, 423 generating function, 135

469

470 Glendinning, P., 23, 54 global R-period, 187 Goh, T. L., 149 Golden mean shift, 7, 15, 298 Golden Ratio, 5, 22, 66 Golod, E., 149 Goodwyn, L.W., 229 Gottschalk, W. H., 147 Goulden, I., 135 Graham, R., 148 graph aperiodic, 230 period of, 230 greedy expansion, 19 grid, 312 group, 2 growing morphism, 117 Grytczuk, J., 147, 148 Guillon, P., 357 ˇ 355 Gurevich, Ju. S., Haas, A., 62 Hadamard product, 367 Halava, V., 175, 202 Haluszczak, M., 147 Hancart, C., 164 hard core model, 298 Hare, K. G., 57 Harju, T., 175, 202 Hausdorff dimension, 415, 421, 439 measure, 415, 439 Hedlund, G., 147, 301 higher block presentation, 257, 304 Hilbert curve, 282 Hippocrates of Chios, 61 Hochman, M., 329, 341, 349, 354, 357 Hohlweg, C., 162 hole, 179 Hopcroft, J. E., 258 horizontal ray, 369 Horv´ath, M., 18 Hubert, P., 62 Huffman code, 239 Ilie, L., 158–160 Iliopoulos, C. S., 169, 174 image, 233 in an automaton, 233 minimal, 233 independent set, 221 individual ergodic theorem, 17 infix, 111 initial quiver, 372 integer alphabet, 160 interlacing, 367 interval

General index stability, 51 invariant measure, 16 inverse CA, 247 isomorphism subshifts, 300 topological, 16 Jackson, D. E., 148 Jackson, D. M., 135 Janvresse, E., 62 Jeandel, E., 342, 356 Jo´o, I., 18, 56 Jungers, R., 149 k-automatic sequence, 143 k-power, 102 k-powerfree, 103 k-recognisable sequence, 143 k-slice (of a subshift), 328 K¨arki, T., 175, 202, 209 Kall´os, G., 54 Kao, J.-Y., 148 Karhum¨aki, J., 154, 231 Kari, J., 216, 229, 231, 263, 267, 288 Keller, B., 397 Kempton, T., 28 Ker¨anen, V., 124 kernel repetition, 119 Kleene star, 6 Kleene, S. C., 9 Kolmogorov complexity, 325 Kolpakov, R., 159 Komatsu, T., 57 Komornik, V., 18, 35, 39, 56 Korjakov, I. O., 355 Kozik, J., 147, 148 Kraaikamp, C., 21, 22, 62 Krieger, D., 146 Kubica, M., 174 Kucherov, G., 159 K´atai, I., 35, 54 label, 214 Lai, A. C., 56 Landau theorem, 403, 429 language, 5 complementary, 299, 318 factorial, 6 finite, 6 infinite, 6 of a closed set, 318 of a subshift, 299 prefix-closed, 6 suffix-closed, 6 Laurent phenomenon, 371

General index polynomial, 372 lazy expansion, 19 measure, 20 Lecroq, T., 164 left form (of a bound of a k-Rosen interval), 88 Lenz, D., 146 letter, 1 level of automaton, 221, 233 of state, 221, 232 Lhote, L., 408 Lind, D., 352, 357 linearisation coefficients, 387 Linek, V., 146 linked set, 219 local R-period, 187 partial period, 187 locally strongly transitive, 221 longest previous reverse factor occurrence, 171 Lorentz, R. J., 158 Loreti, P., 22, 39 Lovasz local lemma, 113, 145, 147 lower entropy dimension, 357 Lyndon tree, 161 word, 160 Lyndon, R., 301 M-balanced, 132 Main, M. G., 158 marked word, 135 matrix adjacency, 124 Mauldin, R. D., 32 maximal code, 238 McNulty, G., 112, 147 measure invariant, 16 measure-theoretic dynamical system, 16 mediant, 68 mediety, 62 k-Rosen, 86 k-mediety, 84 Engel–Sierpi´nski, 74 Stern–Brocot, 68 Medvedev degree, 321 reducible, 321 merge, 367 Meyerovitch, T., 349, 354, 357 Micek, P., 147 Michaux, C., 150 Mignosi, F., 146, 201, 203 minimal

471

dynamical system, 16 image, 233 period, 176 subshift, 326, 357 word, 3 Minkowski, H., 67 Mione, L., 146 mirror, 2 Mohammad-Noori, M., 147 monoid, 2 of binary relations, 216 Moody, R., 360, 378 Moore, D., 169 Moore, E. F., 252 Mor, S. J., 146 morphism, 127 growing, 117 powerfree, 111 squarefree, 111 uniform, 116, 143 Morse, M., 146 Moser, R., 147 Moulin-Ollagnier, J., 147 Muchnik degree, 321 reducible, 321 Multinacci number, 22 multiplicative form (of a word), 66 mutation, 372 graph, 375 polynomial, 372 Myers, D., 322 Myhill, J., 252 Nakada, H., 62 nearest-neighbour SFT, 303 Nicaud, C., 213 non-deterministic polynomial complexity, 355 in unary, 356 non-periodic word, 3 non-repetitive colouring, 147 normalised error, 20 Novikov, P. S., 101 NP (complexity class), 355 N-rational frieze, 371 sequence, 365 series, 365 number Garsia, 32 Multinacci, 22 Pisot, 32 occurrence (of a pattern), 297 Ochem, P., 147 Ollinger, N., 288 one-sided shift, 14

472 one-step SFT, 303 orbit of an infinite word, 15 origin of a ray, 369 orphan, 249, 255, 259 overlap, 105 overlapfree, 145, 146 word, 105 pair graph, 258 palindrome, 2, 151, 171, 203 gapped, 152 Pansiot recoding, 118 Pansiot, J.-J., 117, 147, 153 paperfolding word, 122 Park, K., 169 Parry, W., 19, 60 partial period, 187 word, 179 partial quotients, 70 Paterson, C. W., 181 path, 7 label, 7 successful, 7 pattern, 123, 139, 143, 148, 242, 297 alphabet, 123 appears in, 297 instance, 123 occurs in, 297 orphan, 249, 255, 259 shape, 297 subpattern, 297 support, 297 variable, 123 Zimin, 123 Pavlov, R., 342, 357 Pedicini, M., 56 Pegden, W., 147 perfect set, 47 period, 3, 119, 176, 230 periodic, 143 configuration, 306 fully, 244 spatially, 244 temporally, 245 word, 3 Perrin, D., 214, 225, 237 Petrov, A. N., 148 Pin, J. -E., 216 Pirillo, G., 146 Pisot number, 32 Plagne, A., 402 plane-filling property, 282 Pleasants, P. A. B., 124 positivity conjecture, 397

General index power series method, 138 power set, 182 powerfree morphism, 111 prefix, 2 -closed language, 6 prefix code, 238 synchronised, 239 Preparata, F. P., 158 preperiod, 3 primitive word, 103 principal subgrid, 315 probabilistic method, 113 probabilistic model, 402, 408, 415 constrained probabilistic model, 402, 415, 416 rational probabilistic model, 402, 416, 428, 433 real probabilistic model, 402, 415, 427, 433 product topology, 245 progression-free set, 125 Protasov, V., 149 Prouhet, E., 146 Puglisi, S. J., 160, 162 pure period, 186 Puzyna, J., 61 Pythagoras, 59 quasi -period, 167 -periodicity, 167 -seed, 169 question mark function, 67 quiver, 359, 370 R´enyi, A., 18 Rabin, 9 Rampersad, N., 144, 146–151 Ramsey theory, 101 random Fibonacci sequences, 80 rank, 226, 229, 233 of a relation, 226 of a word, 229 of an automaton, 233 Rao, M., 147, 151 rational frieze, 371 power, 2 sequence, 366 ray, 369 R-code, 183 reciprocal, 199 recurrent word, 3 recursive function, 317, 319 set, 317 recursively enumerable set, 317 reduced set, 135 reflected ratio, 93 reflexive, 177

General index regular language, 7 sofic shifts, 303 relational period, 186 relationally universal, 202 repeat, 151, 152, 163 repetition, 151, 157 threshold, 114, 147, 155 repetitions, 101 representation β -, 14 Restivo, A., 105, 146 Reutenauer, C., 162, 225, 365 reversal, 2 right form (of a bound of a k-Rosen interval), 88 Riordan, O., 147 R-isomorphic, 202 Rittaud, B., 62 Robinson’s system, 314 Robinson’s tiles, 267, 291 minimal, 274 patch, 269 Robinson, R., 261, 267 Robinson, R. M., 314 Romashchenko, A., 329 Rosen continued fractions, 61 Rosen, D., 61 Rothschild, B., 148 R-overlap, 205 (R,S)-code, 183 R-square, 205 R-squarefree, 205 Ruelle, D., 425 run, 151, 159 R-universal, 202 running Turing machines on grids, 313 Rytter, W., 162, 174 R´enyi, A., 60 S´ee´ bold, P., 153 Saari, K., 146 Sablik, M., 329 Salemi, S., 105, 146 Sardinas, A. A., 181 Sch¨utzenberger, M.-P., 230 Schatz, J. A., 148 Scherotzke, S., 397 Schmidt, K., 357 Schmidt, T., 62 Schraudner, M., 342, 357 Sciortino, M., 146 Scott, 9 scrambled pattern, 148 seed, 151, 167 semi-adjacent minor, 388 semi-algorithm, 263 semi-decision, 317

semigroup, 2 sequence Kronecker sequence, 401, 404 parameters for Kronecker sequences, 404 pseudo-randomness of a sequence, 401 truncated Kronecker sequence, 401, 405 Series, C., 62 set Cantor, 4, 47 clopen, 16 SFT (subshift of finite type), 246 Shallit, J., 108, 110, 144–148, 150, 154 shape (of a pattern), 297 shift, 14, 66, 244 β -shift, 15 action, 244, 299 full, 15 one-sided, 14 two-sided, 14 shift of finite type, 15, 246, 297 nearest-neighbour, 303 one-step, 303 shift-invariant, 299 Shmerkin, P., 32 short repetition, 119 Shur, A., 134, 146, 147 Sidorov, N., 18, 22, 23, 54, 56 Sierpi´nski, W., 61 signed continuant polynomial, 389 Silva, M., 148 Simon, K., 32 Simpson, J., 148, 153, 162 Simpson, S., 322 sink, 9 SL2 -tiling of the plane, 360 slice (of a subshift), 328 sliding block code, 16, 247 Smeets, I., 62 Smith, D., 360 Smyth, W. F., 160, 168 sofic shift, 302 Solomyak, B., 32 spanning subgraph, 206 Spencer, J., 147, 148 square, 102 abelian, 123 period, 306, 356 squarefree morphism, 111 word, 102 stability interval, 51 stable pair, 231 relation, 218 standard factorisation, 161

473

474 (of a Christoffel word), 93 state accessible, 8 co-accessible, 8 final, 7 level, 221, 232 terminal, 7 Steiner, W., 62 Stern, M., 60 Stern–Brocot mediety, 68 strong degree, 321 equivalence, 321 reducibility, 321 strongly irreducible subshift, 326, 357 strongly transitive automaton, 221 Sturmian word, 146 subaction (of a subshift), 341 subadditive function, 378 subpattern, 297 subsequence, 120 subshift, 15, 52, 245, 299 aperiodic, 15 conjugacy, 16, 262, 300 effective, 326 entropy, 350 even shift, 246 extension, 300 factor, 300 finite type, 15, 52, 246 isomorphism, 300 lower entropy dimension, 357 minimal, 274, 326 periodic, 15 set of square periods, 356 slice of, 328 sofic, 15 strongly irreducible, 326 upper entropy dimension, 357 substitution, 4 suffix, 2 -closed language, 6 superprimitive, 167 superstring, 152 support (of a pattern), 297 surjunctive group, 256 symbol, 1 symbol code, 305 symbolic dynamical system, 15 symbolic factor (of a subshift), 300 symmetric, 177 synchronisable pair, 214 set, 214 synchronised, 239

General index prefix code, 239 synchronising word, 214, 225, 239 for a code, 239 Tardos, G., 147 Tauberian theorem, 403, 429 template, 128 ancestor, 129 instance, 128 parent, 128 terminal state, 7 three-distance theorem, 401, 406 Thue, A., x, 101, 102, 105, 146, 147, 151, 153 Thue–Morse morphism, 5, 103, 153 word, 5, 102, 105, 143, 145, 146, 150 Tijdeman, R., 203 tiling, 262, 303 space, 246, 303 Toeplitz family, 346 sequence, 347 subshift, 346 topological conjugacy, 16 dynamical system, 16 entropy, 293, 350 isomorphism, 16 topological entropy, 357, 358 Trahtman, A., 213, 217, 230 trajectory, 10 transfer operator, 403, 421, 430, 436–438 constrained, 415, 417, 425 extended, 426 middle, 434 plain, 417, 425 weighted, 426, 430, 441 transition function, 9 monoid, 216 relation, 7 transitive, 177 translation (by a vector), 244, 297 translation (via RNA), 180 transpose, 391 trim, 8 truncation boundary position, 402, 407, 410, 418, 435 boundary truncation, 407, 418 generic position, 402, 407, 410, 419, 420, 435 generic truncation, 407, 419, 420 position of the truncation, 402, 407, 413 two-distance truncation, 401, 406, 407, 410, 418 Turing degree, 326 Turing machine, 264, 310

General index Turing, A., 264 two-distance phenomenon, 401, 404 Ullman, J. D., 258 ultimately periodic, 3, 143, 144 uniform morphism, 116, 143 uniformly recurrent word, 3 unique ergodicity, 17 universal critical exponent, 144 univoque base, 40 upper entropy dimension, 357 Vall´ee, B., 402, 403 van der Waerden’s theorem, 120, 132, 145 van der Waerden, B. L., 148 Vanier, P., 356 variables of a frontier, 362 Vasiga, T., 146 vertical ray, 369 Vichniak, G. Y., 248 Villemaire, R., 150 ` B., 359, 378 Vinberg, E. Volkov, M. V., 216, 220 Wale´n, T., 174 Wang tile, 262 4-way deterministic, 292 aperiodic tile set, 263 directed, 281 NW-deterministic, 289 plane-filling, 282 Robinson’s, 267, 291 S NAKES , 284 Wang tiles, 303 Wang, H., 263, 303, 306 Ward, T., 357 weak degree, 321 equivalence, 321 R-code, 183 reducibility, 321 Weihrauch, K., 343 Weiss, B., 229, 303 Werckmeister, A., 61 Wilf, H. S., 175 Witkowski, M., 147, 148 Wonenburger, M., 360, 378 word, 1, 101 associated with a point, 362 augmenting, 223 bi-infinite, 3 concatenation, 2 distance, 3 eventually periodic, 3 factor, 3 Fibonacci, 5 infinite, 3

Lyndon, 160 marked, 135 minimal, 3 mirror, 2 non-periodic, 3 overlapfree, 105 partial, 179 periodic, 3 preperiod, 3 primitive, 103 purely morphic, 4 recurrent, 3 reversal, 2 Sturmian, 201 synchronising, 214, 225 two-dimensional, 122, 148 uniformly recurrent, 3 X-factorisation, 182 Zamboni, L. Q., 202, 203 Zaremba’s conjecture, 100 Zelevinsky, A., 359 Zheng, X., 343 Zimin pattern, 123 Zinoviadis, C., 357 Ziv–Lempel factorisation, 160, 171
