This book is about the dynamics of neural systems and should be suitable for those with a background in mathematics, physics, or engineering.


English · Pages 525 [513] · Year 2023

Authors: Stephen Coombes, Kyle C. A. Wedgwood



Texts in Applied Mathematics 75

Stephen Coombes Kyle C. A. Wedgwood

Neurodynamics An Applied Mathematics Perspective

Texts in Applied Mathematics Volume 75

Editors-in-Chief
Anthony Bloch, University of Michigan, Ann Arbor, MI, USA
Charles L. Epstein, University of Pennsylvania, Philadelphia, PA, USA
Alain Goriely, University of Oxford, Oxford, UK
Leslie Greengard, New York University, New York, NY, USA

Series Editors
J. Bell, Lawrence Berkeley National Laboratory, Berkeley, CA, USA
R. Kohn, New York University, New York, NY, USA
P. Newton, University of Southern California, Los Angeles, CA, USA
C. Peskin, New York University, New York, NY, USA
R. Pego, Carnegie Mellon University, Pittsburgh, PA, USA
L. Ryzhik, Stanford University, Stanford, CA, USA
A. Singer, Princeton University, Princeton, NJ, USA
A. Stevens, University of Münster, Münster, Germany
A. Stuart, University of Warwick, Coventry, UK
T. Witelski, Duke University, Durham, NC, USA
S. Wright, University of Wisconsin, Madison, WI, USA

The mathematization of all sciences, the fading of traditional scientific boundaries, the impact of computer technology, the growing importance of computer modelling and the necessity of scientific planning all create the need both in education and research for books that are introductory to and abreast of these developments. The aim of this series is to provide such textbooks in applied mathematics for the student scientist. Books should be well illustrated and have clear exposition and sound pedagogy. A large number of examples and exercises at varying levels are recommended. TAM publishes textbooks suitable for advanced undergraduate and beginning graduate courses, and complements the Applied Mathematical Sciences (AMS) series, which focuses on advanced textbooks and research-level monographs.

Stephen Coombes · Kyle C. A. Wedgwood

Neurodynamics An Applied Mathematics Perspective

Stephen Coombes School of Mathematical Sciences University of Nottingham Nottingham, UK

Kyle C. A. Wedgwood Living Systems Institute University of Exeter Exeter, UK

ISSN 0939-2475   ISSN 2196-9949 (electronic)
Texts in Applied Mathematics
ISBN 978-3-031-21915-3   ISBN 978-3-031-21916-0 (eBook)
https://doi.org/10.1007/978-3-031-21916-0
Mathematics Subject Classification: 00A69, 37N25, 92C20

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

For Patricia and George Stephen Coombes

For Iris and Roy Kyle C. A. Wedgwood

Preface

This is a book about ‘Neurodynamics’. What we mean is that this is a book about how ideas from dynamical systems theory have been developed and employed in recent years to give a complementary perspective on neuroscience to the vast set of experimental results that has been accrued over tens of decades. Overall, mathematical neuroscience is now a well-recognised area of cross-disciplinary research that brings to bear the power of the Queen of Sciences to help elucidate the fundamental mechanisms responsible for experimentally observed behaviours in neuroscience at all relevant scales, from the microscopic molecular world to the ethereal domain of cognition. It employs a wide range of techniques from across the broad spectrum of the mathematical sciences, as nicely exemplified by activities of the International Conference on Mathematical Neuroscience since 2015, with applications to the whole gamut of neuroscience. Indeed, there are now several books that have appeared under the umbrella of mathematical neuroscience, often overlapping with the sister discipline of computational neuroscience (e.g., [283, 327, 433, 460, 943]).

The purpose of this book is to expand the use of mathematics to probe neural dynamics further still, with an emphasis on gaining insight through calculation. It is a theory book and is aimed primarily at those with a bent for the quantitative sciences who have historically not worked in neuroscience, but see exciting future opportunities ahead. This is most readily achieved with a focus on idealised models that maintain a direct link to biological reality, either as reductions of more detailed models or designed to capture the essential phenomenology. Model simplicity allows for a ‘shut up and calculate!’ philosophy that has paid huge dividends in theoretical physics, though it must be tempered by the need for a careful mathematical approach to treat models that can often be non-standard.
A nice case in point is the leaky integrate-and-fire model of a spiking neuron. At first sight, its description as a linear first-order differential equation suggests that it can be tackled with elementary methods. However, the realisation that its harsh reset upon firing leads to a discontinuous dynamical system reminds us that we should not naively apply the standard tools for tackling smooth systems. The aim of this book is to reach out to a mathematical sciences audience curious about neural systems and show how to augment familiar mathematical tools to understand and tackle a multitude of problems in neurodynamics, ranging from the cell to the brain.

The material covered here has been developed over a number of years and merges the experience of delivering a final year undergraduate course on theoretical neuroscience (over more than a 10-year period) with the authors’ research activity. The idea for the book originally came when Coombes (Steve) took a sabbatical in 2014 and Wedgwood (Kyle) stepped in to deliver the course, after having previously audited it as part of his Ph.D. training. Hardly any of the original course material has made its way into this book. Instead, we have written something that much better represents our aspirations for a comprehensive course on neurodynamics and touches more closely upon recent methodologies developed in applied mathematics that are well suited to further research in the subject (including non-smooth dynamics and network science).

This book is suitable for those with a background in mathematics, physics, or engineering who want to explore how their skill sets can be applied to models arising in neuroscience. It could be used to support a Masters-level course, or be taken up by postgraduate researchers starting in the field of mathematical neuroscience. The sub-title of the book (An applied mathematics perspective) reflects the major role that we believe applied mathematics, especially nonlinear dynamics and pattern formation, can play in shaping how scientists think not just about neurodynamics, but about many of the subjects not covered in this book, including learning, behaviour, natural computation, brain-development, -physiology, -function, and -pathology.
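The integrate-and-fire caveat above is easy to make concrete. The sketch below (purely illustrative; the parameter values are arbitrary choices of this editor, not taken from the book) integrates the linear subthreshold dynamics of a leaky integrate-and-fire neuron, tau dv/dt = -v + I, with a forward-Euler step, and applies the discontinuous reset by hand:

```python
# Minimal leaky integrate-and-fire (LIF) sketch: between spikes the voltage
# obeys the linear ODE  tau dv/dt = -v + I,  but the hard reset at threshold
# makes the system as a whole non-smooth.
def simulate_lif(I=1.5, tau=10.0, v_th=1.0, v_reset=0.0, dt=0.01, t_max=100.0):
    """Forward-Euler integration; returns the voltage trace and spike times."""
    v, t = v_reset, 0.0
    trace, spikes = [], []
    while t < t_max:
        v += dt * (-v + I) / tau   # smooth linear flow between firing events
        if v >= v_th:              # threshold crossing: record a spike...
            spikes.append(t)
            v = v_reset            # ...and apply the discontinuous reset
        trace.append(v)
        t += dt
    return trace, spikes

trace, spikes = simulate_lif()
# For constant drive I > v_th the neuron fires periodically; the exact
# interspike interval is tau * ln(I / (I - v_th)), roughly 10.99 here.
print(f"{len(spikes)} spikes, first interval {spikes[1] - spikes[0]:.2f}")
```

Note that the voltage trace alone looks deceptively simple; it is the reset map, invisible in the ODE, that breaks smoothness and motivates the non-smooth Floquet theory and saltation matrices developed in Chapter 3.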
This is an optimistic response to the question posed by Jacques Hadamard in 1945 [387]: ‘Will it ever happen that mathematicians will know enough of the subject of the physiology of the brain and that neurologists know enough of mathematical discovery for efficient cooperation to be possible?’

Nottingham, UK
Exeter, UK
October 2022

Stephen Coombes
Kyle C. A. Wedgwood

Acknowledgements

Family comes first. SC thanks his ‘girls’, Nanette, Charlotte, Stephanie, and Alice, for the balance in his life that has allowed him to get this far and to write a book. KCAW thanks Jen for all of her support throughout the book-writing endeavour and for all of the continuing joyful time we share together.

SC has been very lucky to have worked with and learned from Paul Bressloff. There are many references to his work throughout this book, and in no small way his style of doing research, merging theoretical physics and applied mathematics with an eye on biological realism, has had a strong positive influence. Thank you, Paul!

Both of us have also been lucky to work with excellent students and colleagues over the years at both Nottingham and Exeter. Some of these sat through early versions of the final year undergraduate course on Theoretical Neuroscience as part of their Ph.D. training, and special thanks go to Nikola Venkov and Margarita Zachariou for being early test subjects. Invaluable feedback on the book has been received from many friends (a superset of our collaborators), and in particular we would like to acknowledge: Peter Ashwin, Daniele Avitabile, Chris Bick, Áine Byrne, Oliver Cattell, Charlotte Coombes, Michael Forrester, Daniel Galvis, Aytül Gökçe, Carlo Laing, Bonnie Liefting, Sunil Modhara, Rachel Nicks, Mustafa Sayli, Yulia Timofeeva, Rüdiger Thul, and Mark van Rossum. Comments from four anonymous referees are also greatly appreciated.

To the many mathematicians, physicists, and life-scientists whose work we have built upon to create this book, we hope that we have given you proper credit with our referencing. We apologise in advance if we have inadvertently excluded you or did not do your work justice. No doubt there will be some errors and omissions in the book. We very much hope that they are minor and do not lead the reader astray.


Contents

1 Overview . . . . . . . . . . 1
1.1 The brain and a first look at neurodynamics . . . . . . . . . . 1
1.2 Tools of the (mathematical) trade . . . . . . . . . . 5
Remarks . . . . . . . . . . 6

2 Single neuron models . . . . . . . . . . 7
2.1 Introduction . . . . . . . . . . 7
2.2 Neuronal membranes . . . . . . . . . . 7
2.3 The Hodgkin–Huxley model . . . . . . . . . . 8
2.3.1 Batteries and the Nernst potential . . . . . . . . . . 10
2.3.2 Voltage-gated ion channels . . . . . . . . . . 12
2.3.3 Mathematical formulation . . . . . . . . . . 14
2.4 Reduction of the Hodgkin–Huxley model . . . . . . . . . . 19
2.5 The Morris–Lecar model . . . . . . . . . . 24
2.5.1 Hopf instability of a steady state . . . . . . . . . . 26
2.5.2 Saddle-node bifurcations . . . . . . . . . . 28
2.6 Other single neuron models . . . . . . . . . . 34
2.6.1 A plethora of conductance-based models . . . . . . . . . . 36
2.7 Quasi-active membranes . . . . . . . . . . 42
2.8 Channel models . . . . . . . . . . 46
2.8.1 A two-state channel . . . . . . . . . . 46
2.8.2 Multiple two-state channels . . . . . . . . . . 47
2.8.3 Large numbers of channels . . . . . . . . . . 50
2.8.4 Channels with more than two states . . . . . . . . . . 53
Remarks . . . . . . . . . . 54
Problems . . . . . . . . . . 55

3 Phenomenological models and their analysis . . . . . . . . . . 61
3.1 Introduction . . . . . . . . . . 61
3.2 The FitzHugh–Nagumo model . . . . . . . . . . 61
3.2.1 The mirrored FitzHugh–Nagumo model . . . . . . . . . . 66
3.3 Threshold models . . . . . . . . . . 68
3.4 Integrate-and-fire neurons . . . . . . . . . . 74
3.4.1 The leaky integrate-and-fire model . . . . . . . . . . 75
3.4.2 The quadratic integrate-and-fire model . . . . . . . . . . 77
3.4.3 Other nonlinear integrate-and-fire models . . . . . . . . . . 80
3.4.4 Spike response models . . . . . . . . . . 81
3.4.5 Dynamic thresholds . . . . . . . . . . 82
3.4.6 Planar integrate-and-fire models . . . . . . . . . . 86
3.4.7 Analysis of a piecewise linear integrate-and-fire model . . . . . . . . . . 89
3.5 Non-smooth Floquet theory . . . . . . . . . . 95
3.5.1 Poincaré maps . . . . . . . . . . 98
3.6 Lyapunov exponents . . . . . . . . . . 102
3.7 McKean models . . . . . . . . . . 106
3.7.1 A recipe for duck . . . . . . . . . . 109
Remarks . . . . . . . . . . 117
Problems . . . . . . . . . . 119

4 Axons, dendrites, and synapses . . . . . . . . . . 125
4.1 Introduction . . . . . . . . . . 125
4.2 Axons . . . . . . . . . . 125
4.2.1 Smooth nerve fibre models . . . . . . . . . . 126
4.2.2 A kinematic analysis of spike train propagation . . . . . . . . . . 138
4.2.3 Myelinated nerve fibre models . . . . . . . . . . 141
4.2.4 A Fire-Diffuse-Fire model . . . . . . . . . . 141
4.3 Dendrites . . . . . . . . . . 145
4.3.1 Cable modelling . . . . . . . . . . 146
4.3.2 Sum-over-trips . . . . . . . . . . 148
4.3.3 Compartmental modelling . . . . . . . . . . 152
4.4 Synapses . . . . . . . . . . 156
4.4.1 Chemical synapses . . . . . . . . . . 156
4.4.2 Electrical synapses . . . . . . . . . . 162
4.5 Plasticity . . . . . . . . . . 163
4.5.1 Short-term plasticity . . . . . . . . . . 164
4.5.2 Long-term plasticity . . . . . . . . . . 165
Remarks . . . . . . . . . . 167
Problems . . . . . . . . . . 168

5 Response properties of single neurons . . . . . . . . . . 175
5.1 Introduction . . . . . . . . . . 175
5.2 Mode-locking . . . . . . . . . . 176
5.3 Isochrons . . . . . . . . . . 181
5.4 Phase response curves . . . . . . . . . . 186
5.4.1 Characterising PRCs . . . . . . . . . . 188
5.5 The infinitesimal phase response curve . . . . . . . . . . 189
5.6 Characterising iPRCs . . . . . . . . . . 192
5.7 Phase response curves for non-smooth models . . . . . . . . . . 195
5.7.1 PRCs for integrate-and-fire neurons . . . . . . . . . . 195
5.7.2 iPRC for piecewise linear systems . . . . . . . . . . 196
5.8 Phase and amplitude response . . . . . . . . . . 198
5.8.1 Excitable systems . . . . . . . . . . 203
5.9 Stochastically forced oscillators . . . . . . . . . . 206
5.9.1 Phase equations for general noise . . . . . . . . . . 210
5.10 Noise-induced transitions . . . . . . . . . . 211
Remarks . . . . . . . . . . 215
Problems . . . . . . . . . . 216

6 Weakly coupled oscillator networks . . . . . . . . . . 227
6.1 Introduction . . . . . . . . . . 227
6.2 Phase equations for networks of oscillators . . . . . . . . . . 231
6.2.1 Two synaptically coupled nodes . . . . . . . . . . 235
6.2.2 Gap-junction coupled integrate-and-fire neurons . . . . . . . . . . 238
6.3 Stability of network phase-locked states . . . . . . . . . . 239
6.3.1 Synchrony . . . . . . . . . . 240
6.3.2 Coupled piecewise linear oscillators . . . . . . . . . . 242
6.3.3 The splay state . . . . . . . . . . 243
6.4 Small networks . . . . . . . . . . 246
6.5 Clustered states . . . . . . . . . . 248
6.5.1 Balanced (M, q) states . . . . . . . . . . 251
6.5.2 The unbalanced (N, q) cluster state . . . . . . . . . . 255
6.6 Remote synchronisation . . . . . . . . . . 257
6.7 Central pattern generators . . . . . . . . . . 261
6.8 Solutions in networks with permutation symmetry . . . . . . . . . . 266
6.8.1 A biharmonic example . . . . . . . . . . 273
6.8.2 Canonical invariant regions . . . . . . . . . . 274
6.9 Phase waves . . . . . . . . . . 278
6.10 Beyond weak coupling . . . . . . . . . . 281
Remarks . . . . . . . . . . 282
Problems . . . . . . . . . . 284

7 Strongly coupled spiking networks . . . . . . . . . . 293
7.1 Introduction . . . . . . . . . . 293
7.2 Simple neuron network models . . . . . . . . . . 294
7.3 A network of binary neurons . . . . . . . . . . 296
7.3.1 Release generated rhythms . . . . . . . . . . 299
7.4 The master stability function . . . . . . . . . . 302
7.4.1 MSF for synaptically interacting LIF networks . . . . . . . . . . 305
7.5 Analysis of the asynchronous state . . . . . . . . . . 310
7.6 Rate-based reduction of a spiking network . . . . . . . . . . 315
7.7 Synaptic travelling waves . . . . . . . . . . 317
7.7.1 Travelling wave analysis . . . . . . . . . . 318
7.7.2 Wave stability . . . . . . . . . . 321
Remarks . . . . . . . . . . 325
Problems . . . . . . . . . . 326


8 Population models . . . . . . . . . . 335
8.1 Introduction . . . . . . . . . . 335
8.2 Neural mass modelling: phenomenology . . . . . . . . . . 336
8.3 The Wilson–Cowan model . . . . . . . . . . 337
8.3.1 A piecewise linear Wilson–Cowan model . . . . . . . . . . 340
8.3.2 The Wilson–Cowan model with delays . . . . . . . . . . 343
8.3.3 The Curtu–Ermentrout model . . . . . . . . . . 345
8.4 The Jansen–Rit model . . . . . . . . . . 346
8.5 The Liley model . . . . . . . . . . 348
8.6 The Phenomenor model . . . . . . . . . . 348
8.7 The Tabak–Rinzel model . . . . . . . . . . 351
8.8 A spike density model . . . . . . . . . . 352
8.9 A next-generation neural mass model . . . . . . . . . . 356
8.9.1 Mapping between phase and voltage descriptions . . . . . . . . . . 363
8.10 Neural mass networks . . . . . . . . . . 369
8.10.1 Functional connectivity in a Wilson–Cowan network . . . . . . . . . . 370
Remarks . . . . . . . . . . 372
Problems . . . . . . . . . . 373

9 Firing rate tissue models . . . . . . . . . . 381
9.1 Introduction . . . . . . . . . . 381
9.2 Neural field models . . . . . . . . . . 384
9.3 The continuum Wilson–Cowan model . . . . . . . . . . 385
9.3.1 Power spectrum . . . . . . . . . . 386
9.3.2 Single effective population model . . . . . . . . . . 389
9.4 The brain wave equation . . . . . . . . . . 393
9.5 Travelling waves . . . . . . . . . . 395
9.5.1 Front construction . . . . . . . . . . 397
9.5.2 Front stability (Evans function) . . . . . . . . . . 399
9.6 Hallucinations . . . . . . . . . . 401
9.7 Amplitude equations . . . . . . . . . . 406
9.8 Interface dynamics . . . . . . . . . . 411
9.8.1 One spatial dimension . . . . . . . . . . 412
9.8.2 Two spatial dimensions . . . . . . . . . . 416
Remarks . . . . . . . . . . 424
Problems . . . . . . . . . . 426

Appendix A: Stochastic calculus . . . . . . . . . . 439
A.1 Modelling noise . . . . . . . . . . 439
A.2 Random processes and sample paths . . . . . . . . . . 440
A.3 The Wiener process . . . . . . . . . . 440
A.4 Langevin equations . . . . . . . . . . 441
A.5 Stochastic integrals . . . . . . . . . . 442
A.6 Comparison of the Itô and Stratonovich integrals . . . . . . . . . . 443
A.7 Itô’s formula . . . . . . . . . . 444
A.8 Coloured noise . . . . . . . . . . 445
A.9 Simulating stochastic processes . . . . . . . . . . 447
A.10 The Fokker–Planck equation . . . . . . . . . . 451
A.10.1 The backward Kolmogorov equation . . . . . . . . . . 452
A.11 Transforming probability distributions . . . . . . . . . . 453

Appendix B: Model details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.1 The Connor–Stevens model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.2 The Wang–Buzsáki model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.3 The Golomb–Amitai model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.4 The Wang thalamic relay neuron model . . . . . . . . . . . . . . . . . . . . . . . B.5 The Pinsky–Rinzel model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

455 455 456 456 457 458

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501

List of Boxes

Box 2.1  The Nernst–Planck equation
Box 2.2  Linearisation
Box 2.3  Linear systems in R²
Box 2.4  Hopf bifurcation theorem
Box 2.5  The saddle-node bifurcation
Box 2.6  Global bifurcations
Box 2.7  Common ion channels
Box 2.8  Gillespie algorithm
Box 2.9  Properties of the Dirac delta function
Box 3.1  Slow–fast systems
Box 3.2  Poisson processes
Box 3.3  Solutions to linear systems
Box 3.4  Poincaré maps
Box 3.5  Bifurcation of maps
Box 3.6  Grazing bifurcations
Box 3.7  Floquet theory
Box 3.8  Saltation matrices
Box 3.9  Lyapunov exponents
Box 3.10 Canards
Box 3.11 Curves of inflection
Box 4.1  Travelling waves in PDE models
Box 4.2  Poles and residues
Box 4.3  The Cauchy Residue Theorem
Box 4.4  Green’s function for the infinite cable equation
Box 4.5  Synaptic filters
Box 5.1  Isochrons
Box 5.2  Winfree’s isochrons
Box 5.3  The adjoint equation for the iPRC
Box 5.4  Evolving adjoints across switching manifolds
Box 6.1  Phase-reduced network
Box 6.2  Averaging theory
Box 6.3  Phase-locked solutions
Box 6.4  Graph Laplacian
Box 6.5  The matrix determinant lemma
Box 6.6  Symmetries
Box 6.7  Permutation symmetries
Box 6.8  Canonical invariant region
Box 7.1  Master stability function
Box 7.2  Tensor (Kronecker) product
Box 7.3  Continuity equation
Box 8.1  Ott–Antonsen ansatz
Box 8.2  Mean field Kuramoto model with a Lorentzian distribution of natural frequencies
Box 8.3  Möbius transformation
Box 9.1  Calculation of Ĝab(k, ω)
Box 9.2  Turing instability
Box 9.3  Patterns at a Turing instability
Box 9.4  Equivalent PDE description of a neural field
Box 9.5  Fredholm alternative and solvability conditions
Box 9.6  Line integral representation for non-local interaction ψ
Box 9.7  Integral evaluation using geometry

Chapter 1

Overview

The book is organised in chapters that progress from the scale of a single cell up to an entire brain. The use of mathematical methodologies does not follow a natural hierarchy, and instead we introduce these as and when needed. On this note, we rely on the reader having some familiarity with techniques from applied mathematics. However, for pragmatic purposes, we have also included a number of Boxes to act as self-contained expositions or reminders of useful pieces of mathematical theory. Nonetheless, we do not expect a typical reader to come armed with the perfect mathematical repertoire to absorb the story in this book. Indeed, we hope that some of the topics covered, such as those associated with non-smooth dynamical systems, will be of interest in their own right. In a similar vein, the reader need not have any prior background in neuroscience to understand this book, and at the risk of reducing the whole of this field to a few glib paragraphs, we first mention some of the more common facts and terminology that you will come across in the pages ahead.

1.1 The brain and a first look at neurodynamics

Neurodynamics explores the principles by which single neurons generate action potentials (spikes) and synaptic networks generate electrical waves and patterns fundamental for neurobiological function. To make sense of this definition, it is instructive to consider the relevant biological context. Referring to the standard reference text in the field by Kandel et al. [488], which spans more than 1,000 pages, is a somewhat daunting though ultimately rewarding task. Instead, we give here a (very!) brief summary of pertinent details on neuroscience.¹

¹ A free primer on the brain and nervous system, published by the Society for Neuroscience, can be found at https://www.brainfacts.org/the-brain-facts-book.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. Coombes and K. C. A. Wedgwood, Neurodynamics, Texts in Applied Mathematics 75, https://doi.org/10.1007/978-3-031-21916-0_1


At the highest level of organisation, the mammalian central nervous system has a broad structure comprising the cerebral cortex, the thalamus, and various other sub-cortical structures. The cerebral cortex is the main site of animal intelligence and, in humans, is the most highly developed part of the brain. Viewed superficially, the human cortex consists of a thin sheet about 0.2 m² in area and 2–3 mm thick. It is strongly convoluted and forms the exterior of both brain hemispheres. Around 80 distinct cortical areas have been identified, each of which represents a highly parallel module specialised for a specific task. For example, in the visual cortex, one can identify areas for the analysis of edge orientation, of colour shades, and of velocity, whilst other cortical areas contain modules for speech comprehension, face recognition, touch (somatosensory cortex), and the planning and execution of movements (frontal and motor cortices). Additional association areas integrate information relayed by different sensory areas. The thalamus plays the role of a sensory gateway to the cortex; sensory information (apart from that associated with olfaction) that flows to and from the cortex is first processed by the thalamus. Various other structures in the brain play ancillary roles that are not yet fully understood. Examples of these include the hypothalamus (regulates the internal environment of the body such as temperature, heart rate, food intake, etc.), the reticular formation (coordinates and executes programs from the hypothalamus), the cerebellum (involved in the storage and retrieval of precise adjustments to sequences of coordinated movement), and the hippocampus (plays a key role in the storage of long-term memories). Proper brain functioning requires communication between these modularised regions. Forward connections from one area of the brain to another are generally matched by recurrent connections back to the area of origin.
The fundamental processing unit of the central nervous system is the neuron. The total number of neurons in the human brain is around 10¹². In 1 mm³ of cortical tissue, there are about 10⁵ neurons. Three main structures can be identified in a typical neuron: the dendritic tree, the cell body or soma, and the axon. These roughly correspond to the input, processing, and output functions, respectively. The dendritic tree is a branched structure that forms the main input pathway of a neuron. It sums the output signals sent by surrounding neurons in the form of electrical potentials, which diffuse along the tree towards the soma. If the membrane potential at the soma exceeds a certain threshold value, the neuron produces a short electrical spike or action potential. Thus, the total input to a neuron is continuous valued (the electrical potential at the soma), whereas the output is discrete (either the neuron spikes or it does not). The action potentials (spikes) last about 2 ms, during which the voltage across the neuron’s membrane makes an excursion of around +100 mV from its resting value of around −65 mV. After an action potential has been generated, it is conducted along the axon, which itself branches out so that the spike pulse may be transmitted to several thousand target neurons. The contacts of the axon to target neurons are either located on the dendritic tree or directly on the soma, and are known as synapses. Most synapses are chemical in nature, that is, the arrival of an action potential at the synapse induces the secretion of a neurotransmitter that in turn leads to a change in the membrane potential of the downstream neuron. Overall, synaptic transmission can last from a few to a

few hundred milliseconds, whilst the changes in synaptic response induced by the arrival of an action potential can last from 1 ms to many minutes. A single neuron may have thousands, tens of thousands, or even hundreds of thousands of synapses. However, the brain as a whole is sparsely connected since any given neuron will only be directly connected to a tiny fraction of the overall number of neurons in the brain. Depending on the type of synapse, the membrane potential of a downstream neuron may either increase (excitatory synapse) or decrease (inhibitory synapse) in response to an action potential in an upstream neuron. Generally speaking, the outgoing synapses of a given neuron are all of the same type, so that neurons may be grouped according to whether they are excitatory or inhibitory. Overall, there are two main classes of cortical neurons that are distinguished by their shape and functional role. Pyramidal cells (which comprise about 80% of total neurons) have long-range axons and stimulate excitatory synapses of target cells, whereas star-shaped stellate cells have short-range axons and are usually inhibitory. It is believed that the important information is coded in the activity of the pyramidal cells, whereas the stellate cells act as a stabiliser and modulator of the system by inhibiting activity in excited regions. The theory of nonlinear dynamical systems can be used to understand and explain neural phenomena at many different levels, including the ionic currents that generate action potentials, short- and long-term memory, visual hallucinations, neural synchronisation, and motor control, to list just a few. Figure 1.1 illustrates the topics covered in this book over increasing spatiotemporal scales along with their corresponding chapter number(s).

Fig. 1.1 An illustration of the topics covered in this book at increasing scales and their corresponding chapter number(s).

Fig. 1.2 An illustration of the core mathematical techniques relied upon in this book.

1.2 Tools of the (mathematical) trade

The book begins by examining dynamics at the small scales corresponding to ion channels, neuronal membranes, and single neuron electrophysiological models. The stochastic nature of ion channel opening and closing is naturally treated with the theory of stochastic processes. However, compared to standard calculus, this is often learnt later in life, and to partly mitigate this we have included a brief stand-alone appendix (Appendix A) on the topic. The modelling and analysis of deterministic single neuron models is more readily achieved with standard tools from the theory of smooth dynamical systems, and these feature heavily throughout the book, notably linear stability and bifurcation theory. However, when considering phenomenological models, of both single cells and neuronal populations, hard switches in dynamic behaviour are often encountered. Here, the theory of non-smooth dynamical systems is more pertinent, and we introduce elements of this to study both single cells and networks. We also take advantage of piecewise linear caricatures where appropriate, allowing for even more detailed mathematical study by exploiting linearity and making use of matrix exponentials. At the network level, we often consider reductions to either firing rate or phase oscillator descriptions, with synaptic interactions mediated by both state-dependent


and event-driven coupling. The description of the latter is aided by the use of Green’s functions. Various other methods from physics and engineering are also brought into play throughout the book, most notably integral transforms and Fourier series. The use of symmetries is further helpful for the analysis of emergent collective patterns of network activity, such as synchrony and phase-locked states. The book also considers these ideas in the continuum limit with generalisations of the network models to cover neuronal tissue, and we tap into tools such as travelling wave analysis, Turing instability analysis, and multi-scale analysis for describing waves and pattern formation. The core mathematical techniques leveraged throughout this book are illustrated in Fig. 1.2. Throughout the book, we illustrate how these tools are applied using figures created in Python in the style of the xkcd comics [3, 649] to emphasise the use of pen-and-paper calculations over simulations, though the latter are still used to demonstrate how mathematical results can be applied. At the end of each chapter, a set of exercises allows the reader to test their understanding.
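The matrix exponential machinery mentioned above can be previewed in a few lines: for a linear system dx/dt = Ax, the flow over a time t is x(t) = e^{At} x(0), and piecewise linear models are evolved by chaining such maps between switching events. The sketch below is purely illustrative (the matrix and initial condition are our own, not from the text):

```python
import numpy as np
from scipy.linalg import expm

# A pure rotation: solutions of dx/dt = A x trace circles in the plane.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
x0 = np.array([1.0, 0.0])

# Propagate the initial condition by a quarter period using e^{At}.
t = np.pi / 2.0
x_t = expm(A * t) @ x0
```

Since A² = −I here, e^{At} = I cos t + A sin t, so the quarter turn maps (1, 0) to (0, −1), which the code reproduces.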

Remarks To cover the multitude of topics relevant to neurodynamics would take many times as much space as this book already uses. As such, there are many topics that have regrettably not been included in this book. To signpost some of these we have offered ‘Remarks’ at the end of each chapter. One notable omission is the topic of mathematical models of neuronal development. The book by van Ooyen [892] gives a nice introduction to this, and more recent reviews on theoretical models of neural development can be found in [365, 893]. For a recent discussion of mathematical models of neuronal growth, we also recommend the article by Oliveri and Goriely [680]. Another topic that we did not find space to address is that of topographic maps between sensory structures and the cortex. The learning of such maps in both biological and artificial neural networks is nicely discussed in the book by Ritter et al. [752], and for a more recent theoretical physics style exploration of orientation preference maps in primary visual cortex, see [497]. Indeed, for more on mathematical models of the primary visual cortex and the internal neural origin of external spatial representations, we recommend the book on ‘Neurogeometry’ by Petitot [708].

Chapter 2

Single neuron models

2.1 Introduction

Few phenomena in neuroscience are as evocative as the pulses of electrical activity produced by individual cells. Indeed, the discovery by Galvani of electrical activity in nervous tissue began a centuries-long fascination with the origin and purpose of this behaviour, and motivated the initial design of the battery by Volta. Half a century later, du Bois-Reymond demonstrated that these pulses follow a prototypical temporal pattern, now referred to as an action potential. The pioneering work of Ramón y Cajal highlighted that, rather than being a syncytium, the brain is a large network of individual cells acting in concert to produce the diverse range of behaviours seen at the organism level. This shift in perspective gave new meaning to action potentials as units of information that could be exchanged between cells. In this chapter, some important biophysical properties of neuron membranes are reviewed before introducing the Nobel prize-winning work of Hodgkin and Huxley, which established the mathematical framework for analysing action potential generation in neurons. Simplifications of their original model are discussed, and techniques from dynamical systems theory are used to probe the conditions under which action potentials are expected. Extensions of the original model to incorporate different types of ion channels are presented, and their effect on action potential generation and firing patterns is discussed. The chapter ends with a section describing the inherent stochasticity in ion channel gating.

2.2 Neuronal membranes

The neuron plasma membrane acts as a boundary separating the intracellular fluid from the extracellular fluid. It is selectively permeable, allowing, for example, the passage of water but not large macromolecules. Charged ions, such as sodium (Na+),


potassium (K+), and chloride (Cl−) ions, can pass through ion channels in the cell membrane, driven by diffusion and electrical forces, and this movement of ions underlies the generation and propagation of electrical signals along neurons. Differences in the ionic concentrations of the intra/extracellular fluids, maintained by pumps, create an electrical potential difference across the cell, simply referred to as the membrane potential, see Fig. 2.1. In the absence of any external inputs, the cell membrane potential is typically around −65 mV for neurons. During an action potential, the membrane potential increases rapidly to around 20 mV, then decreases more slowly to around −75 mV (on a roughly 1 ms timescale) and then slowly relaxes to the resting potential. The rapid membrane depolarisation corresponds to an influx of Na+ ions across the membrane, facilitated by the opening of Na+-specific ion channels. The downswing to −75 mV corresponds to the transfer of K+ out of the cell, again facilitated by the opening of ion channels, this time specific to K+. The final recovery stage back to the resting potential is brought about by the closure of the previously opened ion channels. Further action potentials cannot be generated during an absolute refractory period (lasting a few ms) and, over a slightly longer timescale called the relative refractory period, a larger than typical stimulus is required to generate an action potential. Between and during action potentials, ATP-dependent sodium and potassium pumps work to re-establish the original ion distribution across the membrane so that subsequent action potentials can occur. The activity of these pumps is estimated to account for roughly 20% of the brain’s total energy consumption.
The work of Hodgkin and Huxley in elucidating the mechanism of action potentials in the squid giant axon is one of the major breakthroughs of dynamical modelling in electrophysiology [425] (and see [749] for a wonderful historical perspective). Their work forms the basis for all modern electrophysiological models, exploiting the observation that cell membranes behave much like electrical circuits, and that the electrical behaviour of cells is based upon the transfer and storage of ions such as K+ and Na+. However, cortical neurons in vertebrates exhibit a much more diverse repertoire of firing patterns than the squid axon studied by Hodgkin and Huxley, largely due to a plethora of different ion channels [420]. For an introduction to the modelling of elements in the so-called zoo of ionic currents, see the small volume by Huguenard and McCormick [442]. Here we illustrate, by exploiting specific models of excitable membrane, some of the concepts and techniques which can be used to understand, predict, and interpret the excitable and oscillatory behaviours that are commonly observed in single cell electrophysiological recordings. We begin with the description of the Hodgkin–Huxley model.

2.3 The Hodgkin–Huxley model

The 1952 work of Alan Hodgkin and Andrew Huxley led to the award of the 1963 Nobel Prize in Physiology or Medicine (together with Sir John Eccles) for their study on nerve action potentials in the squid giant axon. The experimental measurements


Fig. 2.1 Ionic concentration difference across a cellular membrane leads to a voltage difference, which for a neuron at rest is typically ∼ −65 mV. There is a high concentration of K+ inside and a low concentration outside the cell. There may also be membrane-impermeable anions, denoted by A−, inside the cell. By contrast, Na+ and Cl− are at high concentrations in the extracellular region, and low concentrations in the intracellular region (at rest).

on which the pair based their action potential theory represent one of the earliest applications of the voltage clamp technique [667]. This electrophysiological method is used to measure the ionic currents through biological membranes whilst holding the membrane voltage at a fixed level. In practice, voltage-dependent membrane conductances are characterised by clamping a cell’s membrane to different command potentials. The clamp is achieved using an electrical circuit that quickly detects deviations from the command potential and injects back into the cell a current that eliminates this deviation. This injected current is equal in magnitude and opposite in sign to the current induced by ionic flow across the membrane and provides critical information about the dynamics of membrane processes associated with action potentials. The second critical element of their research was the so-called giant axon of the Atlantic squid (Loligo pealei). The large diameter of the axon (up to 1 mm, though typically around 0.5 mm) provided a great experimental advantage for Hodgkin and Huxley as it allowed them to insert voltage clamp electrodes inside the lumen of the axon. They showed that the outer layer of a nerve fibre is not equally permeable to all ions. Whilst a resting cell has low sodium and high potassium permeability, during excitation, sodium ions rapidly flood into the axon, which changes the net charge in the cell cytoplasm from negative to positive. It is this sudden change that constitutes a nerve impulse, known formally as an action potential and informally as a spike. The sodium ions then continue to flow through the membrane until the axon is so highly charged that the sodium becomes electrically repelled. The stream


of sodium ions then stops, which causes the membrane to become permeable once again to potassium ions. The pair published their theory in 1952 [425], describing one of the earliest computational models in biochemistry. Their model of an axon is now widely recognised as a model of excitable media, and at every point along an axon a patch of membrane is described by a set of four nonlinear ordinary differential equations (ODEs). Activity spreads in a regenerative way along the axon via diffusive coupling between neighbouring patches of excitable tissue. The form of the experimentally observed action potential is predicted extremely well by the model. For this, integration of the differential equations was required, and Huxley devised an algorithm for use on a ‘Millionaire’ mechanical hand calculator. Each run of the algorithm producing a 5 ms theoretical voltage trace took about 8 hours of effort. The wonderful quote from Hodgkin, ‘The propagated action potential took about 3 weeks to complete and must have been an enormous labour for Andrew [Huxley]’ [423], contrasts strongly with the ease with which numerical simulations of systems of ODEs can now be performed. Hodgkin and Huxley’s electrical description of the cell assumes that the basic circuit elements are 1) the phospholipid cell bilayer, which is analogous to a capacitor in that it accumulates ionic charge as the electrical potential difference (voltage) across the membrane changes; 2) the ionic permeabilities of the membrane, which are analogous to resistors in an electronic circuit; and 3) the electrochemical driving forces, which are analogous to batteries driving the ionic currents. When there is a voltage across an insulator (in this case, the phospholipid cell bilayer), charge will build up at the interface because current cannot flow directly across the insulator. The dynamic behaviour of the voltage across the phospholipid bilayer is driven by the flow of ionic currents into and out of the cell. 
This flow is regulated by ion channels that open and close, so that the ionic permeability of the membrane evolves over time. Before presenting the mathematical details of the Hodgkin–Huxley model, it is instructive to discuss some important properties regarding ion channels and their dynamics.
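The contrast with Huxley’s weeks of hand-cranked integration is easy to make concrete. The sketch below integrates the four Hodgkin–Huxley equations with the commonly quoted parameter set, stated here in the modern sign convention (rest near −65 mV); the model itself is presented properly in the next sections, so treat this as a preview rather than the book’s exact formulation. The full 20 ms of membrane dynamics runs in a fraction of a second:

```python
import math

# Standard Hodgkin-Huxley parameters (modern convention; illustrative).
C = 1.0                                   # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3         # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.387     # reversal potentials (mV)

# Voltage-dependent gating rate functions (1/ms).
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * math.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * math.exp(-(V + 65.0) / 80.0)

# Forward Euler over 20 ms with a constant drive of 15 uA/cm^2.
V, m, h, n = -65.0, 0.05, 0.6, 0.32       # approximate resting state
dt, I_app, V_peak = 0.01, 15.0, -65.0
for _ in range(2000):
    I_ion = (g_Na * m**3 * h * (V - E_Na) + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V += dt * (I_app - I_ion) / C
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    V_peak = max(V_peak, V)

print(f"peak voltage over 20 ms: {V_peak:.1f} mV")
```

The drive is well above the spiking threshold, so the voltage makes its characteristic excursion past 0 mV; a fixed-step Euler scheme is used only for transparency, and a higher-order adaptive solver would be the usual choice.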

2.3.1 Batteries and the Nernst potential

Ion channels selectively facilitate the movement of specific ions into and out of the cell, which helps maintain the potential difference across the cell membrane. The rate at which these ions flow is governed by the conformational state of the channel, which is typically sensitive to the voltage across the membrane. The reversal or Nernst potential of an ionic species is the membrane potential at which there is no net (overall) flow of that ionic species from one side of the membrane to the other. Balancing the electrical and osmotic forces across the membrane yields the Nernst potential for an ionic species as


\[ V_{\text{ion}} = \frac{RT}{zF}\,\ln\frac{[\text{ion}]_{\text{out}}}{[\text{ion}]_{\text{in}}}, \tag{2.1} \]

where [ion]out/in denotes the ionic concentration on the outside/inside of the membrane, R is the universal gas constant (∼ 8.314 J K⁻¹ mol⁻¹), T is the absolute temperature in Kelvin, F is the Faraday constant (∼ 96,485 C mol⁻¹), and z is the valence of the ionic species (e.g., z = +1 for Na+, z = +2 for Ca2+, and z = −1 for Cl−). The derivation of the Nernst potential (2.1) relies on some prior knowledge of electro-diffusion processes, as briefly summarised in Box 2.1. The reversal potential provides an electromotive force (or battery) that drives the movement of each ionic species in the electrical circuit shown in Fig. 2.3. The resting potential is a weighted sum of individual ionic Nernst potentials. When a membrane has significant permeabilities to more than one ion, the cell potential can be calculated from the linearised Goldman–Hodgkin–Katz voltage equation rather than the Nernst equation as \(\sum_j g_j V_j / \sum_j g_j\), where the sum is over the ionic types, g_j is the individual conductance of the jth ionic species, and V_j its Nernst potential. For a further discussion of the biophysics underlying resting potentials, see, for example, [208].

Box 2.1: The Nernst–Planck equation

The movement of ions through channels in the cell membrane is mainly determined by electrochemical gradients. The current due to a concentration gradient takes the form I_C = A J_C z F, where A is the membrane area, J_C is the flux of the ion down the concentration gradient (number of ions per second per unit area of membrane), z is the valence of the ion, and F is the Faraday constant. The product zF translates the flux of ions into a flux of charge and hence into an electrical current. Modelling an ion channel as a quasi-one-dimensional domain with spatial coordinate x, the flux J_C at time t is given by Fick’s law of diffusion as

\[ J_C = -D \frac{\partial c}{\partial x}, \tag{2.2} \]

where c = c(x, t) denotes the ion concentration and D is the diffusion coefficient (assuming Brownian motion of the ion). The current due to electrophoretic effects (ion movement under the influence of an electric field) takes the form I_E = A J_E z F, where the flux J_E is given by the gradient of the electric potential V as

\[ J_E = -u c \frac{\partial V}{\partial x}, \tag{2.3} \]

where u is the mobility of the ion within the membrane. The product z F c(x, t) is the concentration of charge at position x at time t. The continuity equation (current balance) for the ionic current can be written as

\[ z F \frac{\partial c}{\partial t} = -z F \frac{\partial J}{\partial x}, \qquad J = J_C + J_E, \tag{2.4} \]

and making use of the Einstein relation D = u R T/(z F) yields the Nernst–Planck equation

\[ \frac{\partial c}{\partial t} = D\, \frac{\partial}{\partial x}\left( \frac{\partial c}{\partial x} + \frac{zF}{RT}\, c\, \frac{\partial V}{\partial x} \right). \tag{2.5} \]

At equilibrium, there will be no net movement of the ion across the membrane. In the presence of both concentration and electrical gradients, this means that the rate of movement of the ion down the concentration gradient is equal and opposite to the rate of movement of the ion down the electrical gradient. Thus, at equilibrium I_C + I_E = 0, and setting ∂c/∂t = 0 in (2.5) gives

\[ \frac{\mathrm{d}V}{\mathrm{d}x} = -\frac{RT}{zF}\, \frac{1}{c}\, \frac{\mathrm{d}c}{\mathrm{d}x}. \tag{2.6} \]

Integrating the above with respect to x across the width of the membrane (from inside (in) to outside (out)) gives the Nernst equilibrium expression V_in − V_out = V_ion, where

\[ V_{\text{ion}} = \frac{RT}{zF} \ln \frac{[c]_{\text{out}}}{[c]_{\text{in}}}. \tag{2.7} \]

Typical values of the Nernst potential for the common ion species are V_K ≈ −75 mV, V_Na ≈ 50 mV, V_Ca ≈ 150 mV, and V_Cl ≈ −60 mV. The Nernst–Planck equation for multiple ionic species can also be formulated, and when solved with a uniform field assumption in the membrane gives rise to the Goldman–Hodgkin–Katz (GHK) voltage and current equations. The GHK voltage equation gives the membrane potential expected to result from multiple ion species at differing concentrations on either side of the membrane. The GHK current equation gives the transmembrane current of an ionic species expected at a given membrane potential for a given concentration of the ion on either side of the membrane. For further discussion, see the book by Malmivuo and Plonsey on bioelectromagnetism [602].
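As a quick numerical sanity check on (2.1), the Nernst potential is a one-line computation. The concentration values below are illustrative mammalian numbers chosen for demonstration, not taken from the text:

```python
import math

R = 8.314       # universal gas constant, J K^-1 mol^-1
F = 96485.0     # Faraday constant, C mol^-1

def nernst(c_out, c_in, z, T=310.0):
    """Nernst potential (in mV) for an ion of valence z, Eq. (2.1)."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Illustrative concentrations (mM); actual values vary by cell type.
print(f"V_K  = {nernst(5.0, 140.0, +1):6.1f} mV")   # potassium
print(f"V_Na = {nernst(145.0, 12.0, +1):6.1f} mV")  # sodium
```

Potassium, concentrated inside the cell, yields a strongly negative reversal potential, whilst sodium, concentrated outside, yields a positive one, in keeping with the typical values quoted in Box 2.1.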

2.3.2 Voltage-gated ion channels

Ion channels are believed to have gates that regulate the permeability of the pore to ions. At the most simplistic level, these gates can be described with a simple two-state (closed/open) Markov process

C ⇌ O,   (2.8)

denoting the transition rate from closed to open by k+ and that from open to closed by k−. Using a mass action approach [291] (treating a very large number of independent channels; scenarios away from this limit are discussed in Sect. 2.8), the evolution of

the fraction of open channels f_o can be written as

df_o/dt = −k− f_o + k+ (1 − f_o) = (f_∞ − f_o)/τ,   (2.9)

where

f_∞ = k+/(k+ + k−),   τ = 1/(k+ + k−).   (2.10)

Because ionic channels are composed of proteins with charged amino acid side chains, the potential difference across the membrane can influence the rate at which the transitions between the open and closed states occur. In this case, the ion channels are described as being voltage-gated. Arguments from thermodynamics suggest that the transition rates depend exponentially on the free-energy barrier between the two states and that this in turn can have a linear dependence on the potential difference [882]. This gives rise to the so-called linear thermodynamic model [243], in which the rate constants have the (Arrhenius) form

k+ = k0+ e^(−αV),   k− = k0− e^(−βV),   α, β > 0,   (2.11)

so that f_∞ = f_∞(V), with

f_∞(V) = 1/(1 + (k0−/k0+) e^((α−β)V)) = 1/(1 + e^(−(V−V0)/S0)).   (2.12)

Here,

V0 = (1/(β − α)) ln(k0−/k0+),   S0 = 1/(β − α),   (2.13)

and V0 can be thought of as the threshold and S0 the sensitivity of channel opening. In general, the parameters α and β may also have some temperature dependence that can be determined using Eyring rate theory [420]. For a relatively recent discussion of channel modelling and fitting, see [145]. From the sigmoidal form of f_∞, it can be seen that gates can be either activating (S0 > 0) or inactivating (S0 < 0). Typical shapes for these curves are sketched in Fig. 2.2. In practice, voltage-gated channels are often considered to have at least three main conformational states: closed (C), open (O), and inactivated (I), as depicted in the diagrammatic equation (2.14). The forward and backward transitions between closed and open states are termed activation (a) and deactivation (d), respectively. Those between inactivated and open are termed reactivation (r) and inactivation (i), respectively. The transitions between inactivated and closed are called the recovery from inactivation (rfi) and closed-state inactivation (csi), respectively. Closed and inactivated states are ion impermeable.

C ⇌ O (forward a, backward d),   O ⇌ I (forward i, backward r),   C ⇌ I (forward csi, backward rfi).   (2.14)
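The equivalence of the two forms in (2.12), with V0 and S0 as in (2.13), can be checked numerically; the rate parameters below are arbitrary illustrative choices with β > α, giving an activating gate:

```python
import math

# Illustrative Arrhenius parameters for the two-state channel (2.11)
k0p, k0m = 1.0, 2.0       # k0+ and k0-
alpha, beta = 0.01, 0.05  # beta > alpha gives an activating gate (S0 > 0)

V0 = math.log(k0m / k0p) / (beta - alpha)  # threshold, from (2.13)
S0 = 1.0 / (beta - alpha)                  # sensitivity, from (2.13)

def f_inf_rates(V):
    """First form of (2.12), written in terms of the rates."""
    return 1.0 / (1.0 + (k0m / k0p) * math.exp((alpha - beta) * V))

def f_inf_sigmoid(V):
    """Second (Boltzmann) form of (2.12)."""
    return 1.0 / (1.0 + math.exp(-(V - V0) / S0))

# The two forms agree to machine precision on a voltage grid
err = max(abs(f_inf_rates(V) - f_inf_sigmoid(V)) for V in range(-100, 101))
print(f"V0 = {V0:.2f}, S0 = {S0:.2f}, max difference = {err:.2e}")
```

Swapping the choice to β < α flips the sign of S0 and produces an inactivating curve of the kind sketched in Fig. 2.2.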


Fig. 2.2 The shapes of activating (S0 > 0) and inactivating (S0 < 0) functions given by Equation (2.12).

2.3.3 Mathematical formulation

Having briefly reviewed some properties of ion channels, we are now ready to discuss Hodgkin and Huxley's seminal model, which describes the action of three ionic currents: one involving sodium ions, one involving potassium ions, and a leak current. These currents are arranged in a parallel circuit, as shown in Fig. 2.3. The constant that describes the relationship between the voltage (V) and the charge (Q) that builds up is called the capacitance (C). The membrane capacitance is proportional to the cell surface area, and the specific membrane capacitance is often expressed in μF cm⁻², where F is the SI unit of capacitance (the Farad). From the relationship Q = C V, one may form the capacitive current Q̇ = C V̇, showing that the cell's capacitance determines how quickly the membrane potential can respond to a change in current. Focusing on a patch of membrane that is isopotential, the equation for the membrane voltage evolution is based upon conservation of electric charge (current balance), so that

C dV/dt = −I_ion + I.   (2.15)

Here, I is an applied current (as might be injected during an experimental protocol) and I_ion represents the sum of the individual ionic currents:

I_ion = g_K (V − V_K) + g_Na (V − V_Na) + g_L (V − V_L),   (2.16)

where the electromotive driving force for each term comes from the difference between the cell voltage and the reversal potential for each ionic species, as presented in Sect. 2.3.1. Equation (2.15) is an expression of Kirchhoff's first law (the algebraic sum of currents in a network of conductors meeting at a point is zero).

Fig. 2.3 The equivalent electrical circuit for the Hodgkin–Huxley model of a squid giant axon. The capacitance is due to the phospholipid bilayer separating the ions on the inside and the outside of the cell, see Fig. 2.1. The three ionic currents, one for Na+, one for K+, and one for a non-specific leak, are indicated with resistances. The conductances of the Na+ and K+ currents are voltage dependent, as indicated by the variable resistances. The driving force for the ions is indicated by the symbol for the electromotive force, which is given in the model by the difference between the membrane potential, V = V_in − V_out, and the reversal potential.

Kirchhoff's second law ensures conservation of energy and says that the directed sum of the electrical potential differences (voltages) around any closed network is zero. Voltage-dependent membrane conductances, such as those discovered by Hodgkin and Huxley, and the ion channels they hypothesised, give rise to membrane currents primarily through the conduction of sodium and potassium ions. The contribution from other ionic currents is assumed to obey Ohm's law and is called the leak current. Ohm's law states that electric current is proportional to voltage and inversely proportional to resistance. Resistance is measured in the SI unit of Ohms (Ω), and its reciprocal, termed 'conductance', in Siemens (S). The Siemens is often referred to as a mho (ohm spelled backwards). The g_K, g_Na, and g_L are conductances (reciprocal resistances). The leak conductance, g_L, is time independent, whilst g_K and g_Na are dynamic and are constructed in terms of so-called gating variables. The potassium conductance depends upon four independent activation gates: g_K = ḡ_K n⁴, whereas the sodium conductance depends upon three independent activation gates and one inactivation gate: g_Na = ḡ_Na m³ h (for constants ḡ_K and ḡ_Na representing maximal conductances). The gating variables all obey equations of the type presented in Sect. 2.3.2, namely,


dX/dt = (X_∞(V) − X)/τ_X(V),   X ∈ {m, n, h},   (2.17)

which is recognised as having the same form as (2.9), where X_∞ and τ_X are voltage dependent, as in (2.12). The gating variables X take values between 0 and 1 and approach the asymptotic values X_∞(V) with time constants τ_X(V). These six functions are obtained from fits to experimental data (and see Prob. 2.1). It is common practice to write

τ_X(V) = 1/(α_X(V) + β_X(V)),   X_∞(V) = α_X(V) τ_X(V),   (2.18)

where α_X and β_X have the modern interpretation of channel opening and closing transition rates, respectively. The details of the Hodgkin–Huxley model are completed with the following functions:

α_m(V) = 0.1(V + 40)/(1 − e^(−0.1(V+40))),   β_m(V) = 4.0 e^(−0.0556(V+65)),
α_n(V) = 0.01(V + 55)/(1 − e^(−0.1(V+55))),   β_n(V) = 0.125 e^(−0.0125(V+65)),
α_h(V) = 0.07 e^(−0.05(V+65)),   β_h(V) = 1/(1 + e^(−0.1(V+35))),   (2.19)

where V is measured in units of mV, and the functions α_X(V) and β_X(V) return values in units of ms⁻¹. Here, C = 1 μF cm⁻², g_L = 0.3 mmho cm⁻², ḡ_K = 36 mmho cm⁻², ḡ_Na = 120 mmho cm⁻², V_L = −54.402 mV, V_K = −77 mV, and V_Na = 50 mV. To summarise, (2.15)–(2.17) provide a set of four nonlinear ODEs that can be succinctly written as

C dV/dt = F(V, m, n, h) + I,   dX/dt = (X_∞(V) − X)/τ_X(V),   X ∈ {m, n, h},   (2.20)
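As a sanity check, (2.19)–(2.20) can be integrated directly. The sketch below uses a plain forward-Euler step for transparency (a higher-order or adaptive integrator would be the more careful choice) and follows the protocol of Fig. 2.4: rest near −65 mV, voltage perturbed to −50 mV, with I = 0:

```python
import math

C, gL, gK, gNa = 1.0, 0.3, 36.0, 120.0      # uF/cm^2 and mmho/cm^2
VL, VK, VNa = -54.402, -77.0, 50.0           # mV

def rates(V):
    """The six transition rates of (2.19), in ms^-1."""
    am = 0.1 * (V + 40.0) / (1.0 - math.exp(-0.1 * (V + 40.0)))
    bm = 4.0 * math.exp(-0.0556 * (V + 65.0))
    an = 0.01 * (V + 55.0) / (1.0 - math.exp(-0.1 * (V + 55.0)))
    bn = 0.125 * math.exp(-0.0125 * (V + 65.0))
    ah = 0.07 * math.exp(-0.05 * (V + 65.0))
    bh = 1.0 / (1.0 + math.exp(-0.1 * (V + 35.0)))
    return am, bm, an, bn, ah, bh

def simulate(V0=-50.0, I=0.0, T=25.0, dt=0.01):
    # gating variables start at their resting (V = -65 mV) values
    am, bm, an, bn, ah, bh = rates(-65.0)
    m, n, h = am/(am+bm), an/(an+bn), ah/(ah+bh)
    V, trace = V0, []
    for _ in range(int(T / dt)):
        am, bm, an, bn, ah, bh = rates(V)
        Iion = gK*n**4*(V - VK) + gNa*m**3*h*(V - VNa) + gL*(V - VL)
        V += dt * (-Iion + I) / C            # the voltage equation of (2.20)
        m += dt * (am*(1.0 - m) - bm*m)       # gating kinetics (2.17)
        n += dt * (an*(1.0 - n) - bn*n)
        h += dt * (ah*(1.0 - h) - bh*h)
        trace.append(V)
    return trace

V = simulate()
print(f"peak voltage: {max(V):.1f} mV")  # should overshoot 0 mV during the spike
```

The voltage trace shows the upstroke towards V_Na, a downstroke towards V_K, and an after-hyperpolarisation before the return to rest, matching the qualitative walk-through below.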

with F given by −I_ion using (2.16). The mathematical forms chosen by Hodgkin and Huxley for the functions τ_X and X_∞, X ∈ {m, n, h}, are all transcendental functions. From a modern perspective, it is natural to interpret these as functions of channel transition rates in a mass action model, generalising the simple discussion surrounding Equation (2.9). Despite the complexity of the Hodgkin–Huxley model, it does allow a walk-through of an action potential from a qualitative perspective. Given an initial (and sufficiently large) depolarisation of the resting membrane, the gating variable m reacts relatively quickly, and since the shape of m_∞ describes that of an activating channel, the conductance for Na+ ions increases. From the form of the ionic current (2.16), it can be seen that the result is to drive the voltage variable, governed by (2.15), towards the reversal potential for Na+ (with a value of ∼50 mV), which is positive with respect to the resting membrane voltage. Subsequently, on a slightly slower timescale, the gating variables h and n react, the former inactivating the Na+ current and the latter activating the K+ current. The voltage is then pushed towards the reversal potential for K+ (with a value of ∼−80 mV), before


Fig. 2.4 A plot of an action potential generated in the Hodgkin–Huxley model with zero external drive (I = 0) starting from rest with a perturbation in the initial voltage to raise it to −50 mV. The gating variables m, n, and h take values on [0, 1], and initially m reacts quickest. The n and h variables are slaved to a common slower timescale, with n activating the K+ current and h inactivating the Na+ current. The duration of the action potential or spike is roughly 3 ms.

equilibrium can be re-established. This basic mechanism is the one that forms the action potential, giving rise to the shape shown in Fig. 2.4. The nonlinearity of the Hodgkin–Huxley model makes analysis difficult. Indeed, even the linear stability analysis of the steady state is relatively cumbersome. See Box 2.2 on page 18 for a brief summary of linearisation and linear stability in a dynamical systems context. The steady-state voltage satisfies F(V, m_∞(V), n_∞(V), h_∞(V)) + I = 0, and a numerical check for the standard Hodgkin–Huxley parameters shows that this function has only one solution for a wide range of I. The construction of the Jacobian of (2.20) around this solution can then be used to infer stability. This is an unwieldy task that is best abandoned in favour of a numerical bifurcation analysis, as illustrated in Fig. 2.5, using freely available software such as XPPAUT [275] (or by writing bespoke codes). This shows that in response to constant current injection, oscillations can emerge via a Hopf bifurcation (see Box 2.4 on page 25) with increasing drive. With a further increase in drive, the frequency of oscillations increases and their amplitude decreases, until ultimately there is a second Hopf bifurcation and a stable rest state is recovered. This rest state has a high voltage value relative to that with no applied current, and the mechanism by which this cessation of oscillations is achieved is referred to as depolarisation block.


Fig. 2.5 Bifurcation diagram of the Hodgkin–Huxley model as a function of the external drive I . Solid black lines show the amplitude of a stable limit cycle, and dashed black lines indicate unstable limit cycle behaviour. The solid grey line indicates a stable fixed point, and the thin dotted grey line indicates unstable fixed point behaviour. The Hopf bifurcation points are indicated by the filled black circles.

Box 2.2: Linearisation
Consider a smooth dynamical system ẋ = f(x), where x = (x1, …, xn) ∈ Rⁿ and f: Rⁿ → Rⁿ. Suppose that x̄ is a fixed point, given by f(x̄) = 0, and expand f locally as a Taylor series in u = x − x̄:

u̇_i = f_i(u + x̄) = f_i(x̄) + Σ_j (∂f_i/∂x_j)(x̄) u_j + O(u²),   (2.21)

for i = 1, …, n, where the subscript i denotes the ith component of the indicated vector quantity. Since f_i(x̄) = 0 by assumption,

u̇ = Au + O(u²),   A_ij = [Df(x̄)]_ij ≡ (∂f_i/∂x_j)(x̄).   (2.22)

Here, Df(x̄) is recognised as the Jacobian of f evaluated at x = x̄.

Theorem (linear stability): Suppose that ẋ = f(x) has an equilibrium at x = x̄, with linearisation u̇ = Au. If A has no zero or purely imaginary eigenvalues, then the local stability of the (hyperbolic) fixed point is determined by the linear system. In particular, if all eigenvalues of A have a negative real part, i.e.,


Re λ_i < 0 for all i = 1, …, n, then the fixed point is asymptotically stable. The fixed point is said to be nonhyperbolic if at least one eigenvalue has zero real part.

Despite the excellent agreement of numerical simulations of the Hodgkin–Huxley model with a variety of experiments, there are at least two predictions from the model that are at odds with experimental observations. This is perhaps not a surprise given the arbitrariness of some of the steps in the construction of the model [215]. Firstly, the squid giant axon produces only a finite train of up to four impulses in response to a step current, whereas, for constant I, the model fires indefinitely. Secondly, if a ramp current, whereby the injected current increases linearly with time, is used as the stimulation protocol, the rate of this increase influences the resulting neuron response. Specifically, if the rate of increase is below a certain threshold, then no action potential is elicited. Conversely, if the rate of increase is above the threshold, then only one action potential occurs. By contrast, the Hodgkin–Huxley model yields a periodic solution for all values of I shown in Fig. 2.5 between the two Hopf bifurcations. In essence, the Hodgkin–Huxley model fails to incorporate a form of accommodation, often regarded as an increase in the threshold of an excitable membrane when the membrane is subjected to a sustained sub-threshold depolarising stimulus or to a stimulus that increases very slowly [18]. To give more meaning to the notion of a threshold, it is useful to consider a reduction of the Hodgkin–Huxley model to the plane.
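The recipe of Box 2.2 is easy to automate: approximate the Jacobian by central differences and inspect the real parts of its eigenvalues. The system below is a generic illustrative example (a damped nonlinear oscillator with a fixed point at the origin), not any particular neuron model:

```python
# Numerical linearisation of x' = f(x) about a fixed point (Box 2.2):
# build the Jacobian by central differences, then test the sign of the
# real parts of its eigenvalues.

def jacobian(f, x, h=1e-6):
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2.0 * h)
    return J

def eigen_real_parts_2d(J):
    # For a 2x2 matrix, Re(lambda) follows from the trace and discriminant
    tr = J[0][0] + J[1][1]
    det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
    disc = tr*tr - 4.0*det
    if disc >= 0.0:
        s = disc ** 0.5
        return [(tr + s)/2.0, (tr - s)/2.0]
    return [tr/2.0, tr/2.0]  # complex pair: real part is tr/2

# Illustrative system: damped nonlinear oscillator, fixed point at the origin
def f(x):
    return [x[1], -x[0] - 0.5*x[1] + x[0]**3]

J = jacobian(f, [0.0, 0.0])
re = eigen_real_parts_2d(J)
stable = all(r < 0 for r in re)
print(f"Re(lambda) = {re}, asymptotically stable: {stable}")
```

For this example the analytic Jacobian at the origin is [[0, 1], [−1, −0.5]], a stable focus, and the finite-difference result agrees to rounding error; the same routine applied to (2.20) gives the stability information summarised in Fig. 2.5.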

2.4 Reduction of the Hodgkin–Huxley model

The nonlinearity of the functions τ_X(V) and X_∞(V), X ∈ {m, n, h}, and the high dimensionality of the Hodgkin–Huxley model make analysis difficult. However, considerable simplification can be attained with the following observations: i) τ_m(V) is small for all V, so that the variable m rapidly approaches its equilibrium value m_∞(V), and ii) the equations for h and n have similar time courses (the Na+ channel closing occurs at the same rate, but in the opposite direction, to the K+ channel opening) and may be slaved together. Numerical simulations suggest that during an action potential, h is linearly related to n, so that one may set h = an + b, with (a, b) determined by a fit to data [748]. Hence, the dynamics is reduced from four dimensions to two, with

C dV/dt = f(V, n) + I,   dn/dt = (n_∞(V) − n)/τ_n(V),   (2.23)

with f(V, n) = F(V, m_∞(V), n, an + b). This reduced model is amenable to analysis in the phase plane, whereas the original Hodgkin–Huxley model cannot be pictured in this way (being four dimensional).
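A sketch of the reduced system (2.23), assuming the commonly used approximation h ≈ 0.8 − n (i.e., a = −1, b = 0.8; these are illustrative values only — the fitted constants in [748] differ slightly), confirms that the planar model still generates spikes under constant drive:

```python
import math

C, gL, gK, gNa = 1.0, 0.3, 36.0, 120.0
VL, VK, VNa = -54.402, -77.0, 50.0
a, b = -1.0, 0.8   # assumed linear fit h = a*n + b (illustrative values)

def alpha_beta(V):
    """The m and n transition rates of (2.19)."""
    am = 0.1*(V + 40.0)/(1.0 - math.exp(-0.1*(V + 40.0)))
    bm = 4.0*math.exp(-0.0556*(V + 65.0))
    an = 0.01*(V + 55.0)/(1.0 - math.exp(-0.1*(V + 55.0)))
    bn = 0.125*math.exp(-0.0125*(V + 65.0))
    return am, bm, an, bn

def f(V, n):
    """f(V, n) = F(V, m_inf(V), n, a n + b): minus the ionic current."""
    am, bm, _, _ = alpha_beta(V)
    m_inf = am/(am + bm)
    h = a*n + b
    return -(gK*n**4*(V - VK) + gNa*m_inf**3*h*(V - VNa) + gL*(V - VL))

# 50 ms of constant drive from near rest (forward Euler for simplicity)
V, n, I, dt = -65.0, 0.318, 30.0, 0.01
trace = []
for _ in range(int(50.0/dt)):
    _, _, an, bn = alpha_beta(V)
    V += dt*(f(V, n) + I)/C
    n += dt*(an*(1.0 - n) - bn*n)
    trace.append(V)
print(f"peak voltage: {max(trace):.1f} mV")
```

Because m is slaved to m_∞(V), the upstroke is, if anything, faster than in the full model; the qualitative shape of the spike is preserved.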


This slaving principle has been put on a more formal footing by Abbott and Kepler [8] and, in general, does not require recourse to numerical observations (to suggest relations between h and n). To obtain a reduced planar model from the full Hodgkin–Huxley model in this approach, one considers the replacement m → m_∞(V) and X = X_∞(U) for X ∈ {n, h}, so that both n and h are slaved to a single variable U. This equation can always be solved exactly for U, since the functions X_∞ are monotonic and hence invertible. The reduced model is two dimensional, with a membrane current f(V, U) = F(V, m_∞(V), n_∞(U), h_∞(U)). By demanding that the time dependence of f in the reduced model mimics the time dependence of F in the full model at constant V, then

(∂F/∂h)(dh/dt) + (∂F/∂n)(dn/dt) = [ (∂f/∂h_∞)(dh_∞/dU) + (∂f/∂n_∞)(dn_∞/dU) ] dU/dt.   (2.24)

Under the approximation that h ≈ h_∞(U) and n ≈ n_∞(U), the time evolution of U may be solved for, yielding

C dV/dt = f(V, U) + I,   dU/dt = g(V, U),   (2.25)

where

g(V, U) = [ (∂F/∂h) (h_∞(V) − h_∞(U))/τ_h(V) + (∂F/∂n) (n_∞(V) − n_∞(U))/τ_n(V) ] / [ (∂f/∂h_∞) dh_∞(U)/dU + (∂f/∂n_∞) dn_∞(U)/dU ],   (2.26)

and ∂F/∂h and ∂F/∂n are evaluated at h = h_∞(U) and n = n_∞(U). The variable V describes the capacitive nature of the cell, whilst U describes the time dependence of the membrane conductance. In fact, U may be regarded as a variable responsible for the refractory period of a neuron (the time during which another stimulus will not lead to a second action potential). The reduction to a two-dimensional system allows a direct visualisation of the dynamics by plotting the flow in the (V, U)-plane. A plot of the nullclines V̇ = 0 and U̇ = 0 allows the straightforward identification of the fixed point (defined by the intersection of the two nullclines), as well as the determination of the flow. For a review of the classification of fixed points in planar systems, see Box 2.3 on page 20. In the reduced model, g(V, V) = 0, and the U-nullcline is the straight line U = V. The phase plane and nullclines for this system are shown in Fig. 2.6 for a value of the external input I > 0 such that the fixed point lies on the middle branch of the cubic-like V-nullcline.

Box 2.3: Linear systems in R²
Consider the planar dynamical system

ẋ1 = a x1 + b x2,   ẋ2 = c x1 + d x2.   (2.27)

Introduce the vector x = (x1, x2)^T; then


Fig. 2.6 Phase plane of the reduced Hodgkin–Huxley neuron model with I = 100 (in units of μ A cm−2 ), in a regime that supports the generation of a periodic spike train. The solid grey cubic-like curve is the V -nullcline, and the straight dashed grey line is the U -nullcline. The periodic orbit is shown in black, and see the inset for the corresponding voltage trace, showing a periodic train of action potentials.

ẋ = Ax,   A = ( a  b ; c  d ).   (2.28)

Assume a solution of the form x = e^(λt) x0. This leads to the linear homogeneous equation (A − λI2)x0 = 0, where I2 is the 2 × 2 identity matrix. For this system to have a non-trivial solution, the matrix A − λI2 must be non-invertible, which requires

Det(A − λI2) = 0,   (2.29)

which is called the characteristic equation. Substituting the components of A into the characteristic equation gives

λ² − (a + d)λ + (ad − bc) = 0,   (2.30)

so that

λ_± = (1/2)[ Tr A ± √((Tr A)² − 4 Det A) ].   (2.31)


The different types of behaviour can be classified according to the values of Tr A and Det A:
• λ_± are real if (Tr A)² > 4 Det A.
• Real eigenvalues have the same sign if Det A > 0 and are positive if Tr A > 0 (negative if Tr A < 0) — stable and unstable nodes.
• Real eigenvalues have opposite signs if Det A < 0 — a saddle.
• Eigenvalues are complex if (Tr A)² < 4 Det A — a focus.

For zero external input, the fixed point is stable and the neuron is said to be excitable (the steady state lies on the left branch of the cubic-like V-nullcline). When a positive external current is applied, the low-voltage portion of the V-nullcline moves up, whilst the high-voltage part remains relatively unchanged. For sufficiently large constant external input, the intersection of the two nullclines falls within the portion of the V-nullcline with positive slope (the middle branch of the cubic-like V-nullcline). In this case, the fixed point is unstable, and the system may support a limit cycle. The system is then said to be oscillatory, as it produces a train of action potentials. Referring to Fig. 2.6, the action potential is naturally broken into four distinct phases. In phase 1 (upstroke), the system moves quickly from the left branch of the cubic-like V-nullcline to the right branch (rapid activation of Na+ channels); in phase 2 (plateau), the trajectory tracks along the V-nullcline after V approaches V_Na (and the slow dynamics for the recovery variable U comes into play); in phase 3 (downstroke), the system falls off the V-nullcline and V moves rapidly towards V_K; and in phase 4 (recovery), V is hyperpolarised, tracking along the left-hand branch of the V-nullcline. Note that, in the excitable regime, an action potential may be induced for current stimuli of sufficient strength and duration. Furthermore, as for the Hodgkin–Huxley model, the reduced model can fire an action potential when released from hyperpolarisation.
This is referred to as anode break excitation or, more commonly, as post-inhibitory rebound. Post-inhibitory rebound can easily be described geometrically in the reduced Hodgkin–Huxley model using the phase plane shown in Fig. 2.7. Consider the case in which the applied current I is stepped from I = 0 to I < 0 for a prolonged period of time. This effectively pulls the V-nullcline down and to the left (i.e., it becomes more


Fig. 2.7 Phase plane of the reduced Hodgkin–Huxley neuron model with I = 0 and I = −4 (in units of μA cm⁻²). Upon release from a hyperpolarising current, namely, a rapid switch from I = −4 to I = 0, the system re-equilibrates via the generation of a single action potential. When I = −4, the steady state is stable. However, when using this as initial data for the system with I = 0, a trajectory must first move rightward in the phase plane, since V̇ > 0 and U̇ > 0. Ultimately, the trajectory will approach the stable steady state (for I = 0) via a large amplitude excursion (an action potential).

hyperpolarised and with K+ further deactivated). Ultimately, the solution moves from one stable steady state to another, more hyperpolarised, one. If the current is stepped back to I = 0, then the original equilibrium will be recovered. However, if V̇ > 0 upon release of the hyperpolarising current, namely, the more hyperpolarised steady state is below the V-nullcline, then the system may achieve this equilibrium by first generating a single action potential, as shown in Fig. 2.7. The simple planar model described above captures many of the essential features of the original Hodgkin–Huxley model, yet is much easier to understand from a geometric perspective. Indeed, the model is highly reminiscent of the famous FitzHugh–Nagumo model, in which the voltage nullcline is taken to be a cubic function (discussed in Chap. 3). Both models show the onset of repetitive firing at a non-zero frequency, as observed in the Hodgkin–Huxley model (when an excitable state loses stability via a subcritical Hopf bifurcation). However, unlike real cortical neurons, they cannot fire at arbitrarily low frequency. This brings us to consider modifications of the original Hodgkin–Huxley formalism to accommodate bifurcation mechanisms from excitable to oscillatory behaviour that can respect this experimental observation. One natural way to achieve this is through the inclusion of extra currents, or the adaptation of existing ones, so that F(V, m_∞(V), n_∞(V), h_∞(V)) + I becomes non-monotonic, allowing the possibility of further steady states. Oscillatory orbits that come near these, or even their so-called ghosts, may be slowed down, giving rise to low-frequency oscillations. In the next section, we discuss a prototypical model,


built on the Hodgkin–Huxley formalism, that supports such behaviour and is another common choice for exploring single cell neural dynamics. Moreover, we now begin to introduce some of the mathematical machinery from nonlinear dynamical systems that has been tacitly assumed above, say in discussing notions of fixed point stability and bifurcation, in the spirit of the review article by Borisyuk and Rinzel [91].

2.5 The Morris–Lecar model

Under constant current injection, barnacle muscle fibres respond with a host of oscillatory voltage waveforms. To describe this system, Morris and Lecar [643] introduced a set of coupled ODEs incorporating two ionic currents: an outward non-inactivating potassium current (a delayed-rectifier potassium current similar to the Hodgkin–Huxley I_K) and a fast inward non-inactivating calcium current (depolarising, and regenerative, like the Hodgkin–Huxley I_Na). Assuming that the calcium current operates on a much faster timescale than the potassium current, they formulated the following two-dimensional system:

C dV/dt = I − g_L (V − V_L) − g_K w (V − V_K) − g_Ca m_∞(V) (V − V_Ca),   (2.32)
dw/dt = φ (w_∞(V) − w)/τ_w(V),   (2.33)

where w represents the fraction of K+ channels open, and the Ca2+ channels respond to V so rapidly that instantaneous activation can be assumed. Here, g_L is the leakage conductance, g_K, g_Ca are the maximal potassium and calcium conductances, V_L, V_K, V_Ca are the corresponding reversal potentials, m_∞(V), w_∞(V) are voltage-dependent gating functions, τ_w(V) is a voltage-dependent timescale, and φ is a temperature-dependent parameter. The details of the model are completed with

m_∞(V) = 0.5(1 + tanh[(V − V1)/V2]),
w_∞(V) = 0.5(1 + tanh[(V − V3)/V4]),   (2.34)
1/τ_w(V) = cosh[(V − V3)/(2V4)],

where V1, V2, V3, V4, and φ are constants given by V1 = −1.2 mV, V2 = 18 mV, V3 = 2 mV, V4 = 30 mV, and φ = 0.04 ms⁻¹, with V_K = −84 mV, V_L = −60 mV, V_Ca = 120 mV, g_K = 8 mmho cm⁻², g_L = 2 mmho cm⁻², g_Ca = 4.4 mmho cm⁻², and C = 20 μF cm⁻².

It is convenient to introduce the function F(V, w) = −g_L (V − V_L) − g_K w (V − V_K) − g_Ca m_∞(V)(V − V_Ca), so that the V-nullcline is defined implicitly by the equation F(V, w) + I = 0 and the w-nullcline by w = w_∞(V). Hence, the steady-state voltage must satisfy the relation F(V, w_∞(V)) + I = 0. If the left-hand side of this equation is monotonic in V, then there will be just one fixed point; otherwise there can be more. In particular, if it is 'N' shaped, then there can be up to three fixed points.
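Simulating (2.32)–(2.34) with the parameter values above (forward Euler, for simplicity) confirms the oscillatory response at I = 160 μA cm⁻² reported in Fig. 2.8:

```python
import math

# Morris-Lecar parameters from Sect. 2.5
C, gL, gK, gCa = 20.0, 2.0, 8.0, 4.4
VL, VK, VCa = -60.0, -84.0, 120.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

m_inf = lambda V: 0.5*(1.0 + math.tanh((V - V1)/V2))
w_inf = lambda V: 0.5*(1.0 + math.tanh((V - V3)/V4))
inv_tau_w = lambda V: math.cosh((V - V3)/(2.0*V4))

def simulate(I, T=2000.0, dt=0.05, V=-40.0, w=0.0):
    trace = []
    for _ in range(int(T/dt)):
        F = -gL*(V-VL) - gK*w*(V-VK) - gCa*m_inf(V)*(V-VCa)
        V += dt*(F + I)/C                       # (2.32)
        w += dt*phi*(w_inf(V) - w)*inv_tau_w(V)  # (2.33)
        trace.append(V)
    return trace

V = simulate(I=160.0)
late = V[len(V)//2:]                  # discard the transient
amplitude = max(late) - min(late)
print(f"late-time peak-to-peak amplitude: {amplitude:.1f} mV")
```

A persistent peak-to-peak amplitude at late times indicates a limit cycle; repeating the run with I = 60 or I = 260 instead relaxes towards the excitable and depolarisation-block steady states of Fig. 2.8.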


Box 2.4: Hopf bifurcation theorem
The Hopf bifurcation involves a nonhyperbolic fixed point with linearised eigenvalues ±iω, a two-dimensional centre manifold, and bifurcating solutions that are periodic rather than stationary. Consider the system

ẋ = μx − ωy − (x² + y²)x,   ẏ = ωx + μy − (x² + y²)y.   (2.35)

Linearising about the origin shows that it is a stable focus if μ < 0 and an unstable focus if μ > 0 (since the eigenvalues of the linearised flow are μ ± iω). Hence, the origin is nonhyperbolic when μ = 0, and a bifurcation is expected as μ passes through zero. This is easily analysed by writing the system in polar coordinates:

ṙ = μr − r³,   θ̇ = ω.   (2.36)

The fixed point at r = 0 loses stability to a stable limit cycle with r = √μ, with uniform rate of angular rotation ω, as μ increases through zero.
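The claim r → √μ is easy to verify by direct integration of (2.35); the values of μ, ω, step size, and horizon below are arbitrary illustrative choices:

```python
import math

mu, omega = 0.25, 1.0   # illustrative parameters; expect r -> sqrt(mu) = 0.5

# Integrate (2.35) by forward Euler from a point near the origin
x, y, dt = 0.01, 0.0, 1e-3
for _ in range(int(100.0/dt)):   # run to t = 100
    r2 = x*x + y*y
    dx = mu*x - omega*y - r2*x
    dy = omega*x + mu*y - r2*y
    x, y = x + dt*dx, y + dt*dy

r = math.hypot(x, y)
print(f"final radius r = {r:.4f} (sqrt(mu) = {math.sqrt(mu):.4f})")
```

Starting instead from a point outside the circle r = √μ gives convergence onto the same limit cycle from the outside, consistent with its stability.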

Now consider a more general planar dynamical system of the form

ẋ = f(x, y, μ),   ẏ = g(x, y, μ),   (2.37)

with f(0, 0, μ) = g(0, 0, μ) = 0. Suppose that the following conditions hold:

A. The Jacobian matrix

J(μ) = ( ∂_x f   ∂_y f ; ∂_x g   ∂_y g ) |_((x,y)=(0,0))   (2.38)

has a pair of eigenvalues λ_±(μ) = α(μ) ± iβ(μ). Furthermore, suppose that at a certain value μ = μc, α(μc) = 0 and β(μc) = ω ≠ 0, which is equivalent to the conditions

Tr J(μc) = 0,   Det J(μc) > 0.   (2.39)


B. The transversality condition

dα(μ)/dμ |_(μ=μc) ≠ 0.   (2.40)

C. The first Lyapunov coefficient σ does not vanish, where

σ = (1/16)[ f_xxx + g_xxy + f_xyy + g_yyy ]
  + (1/(16ω))[ f_xy (f_xx + f_yy) − g_xy (g_xx + g_yy) − f_xx g_xx + f_yy g_yy ].   (2.41)

Then a limit cycle bifurcates from the origin with an amplitude that grows as |μ − μc|^(1/2), whilst its period tends to 2π/ω as μ → μc. The bifurcation is supercritical (bifurcating periodic solutions are stable) if σ < 0 and subcritical (bifurcating periodic solutions are unstable) if σ > 0. When the first Lyapunov coefficient vanishes and the Jacobian has a pair of purely imaginary eigenvalues, the bifurcation is referred to as a generalised Hopf or Bautin bifurcation, which separates branches of subcritical and supercritical Hopf bifurcations in the parameter plane. For nearby parameter values, the system has two limit cycles, which collide and disappear via a saddle-node bifurcation of periodic orbits.

2.5.1 Hopf instability of a steady state

Steady states (V̄, w̄) of (2.32)–(2.33) are determined by the simultaneous solution of F(V̄, w̄) + I = 0 and w̄ = w_∞(V̄), or equivalently by the intersection of the V- and w-nullclines (which can be determined graphically). This gives the steady states as a function of the applied current, which can be used to determine a bifurcation diagram for (V̄, w̄) = (V̄(I), w̄(I)). For simplicity, assume that τ_w(V) varies only slowly. In this case, the Jacobian at the steady state is given by

J = ( (1/C) ∂F(V, w)/∂V   (1/C) ∂F(V, w)/∂w ; (φ/τ_w(V)) ∂w_∞(V)/∂V   −φ/τ_w(V) ) |_((V̄, w̄)).   (2.42)

In order for a fixed point to lose stability, one of the following two things must happen: i) Det J = 0, or ii) Tr J = 0 with Det J > 0. Case i) corresponds to a saddle-node bifurcation and is discussed in Box 2.5 on page 27. Case ii) corresponds to a Hopf bifurcation that would lead to the onset of repetitive firing. Hopf bifurcations are discussed in greater detail in Box 2.4 on page 25. A saddle-node bifurcation can only happen if V̄ is at a knee of F(V̄, w_∞(V̄)) + I = 0, since Det J = −φ (dF/dV)/(C τ_w) |_((V̄, w̄)), where dF/dV denotes the total derivative along w = w_∞(V). Therefore, if F(V̄, w_∞(V̄)) + I = 0 is monotonic in V̄, then the loss of stability can only occur via a Hopf bifurcation,


giving rise to a periodic solution. Now, Tr J = 0 when

(1/C) ∂F(V, w)/∂V |_((V̄, w̄)) = φ/τ_w(V̄).   (2.43)

For typical parameters, such as those described underneath (2.34), F(V̄, w_∞(V̄)) has a roughly cubic shape, as shown in Fig. 2.8 (top), so that its derivative has a corresponding quadratic shape. This suggests that equation (2.43) could have either two solutions or none as a function of I. A bifurcation diagram showing the possibility of two Hopf bifurcations is also shown in Fig. 2.8 (bottom). Here, the numerically computed branches of emerging periodic orbits are also shown. Since the generation of a periodic orbit occurs through a Hopf bifurcation, the frequency of the emerging orbit is non-zero.

Box 2.5: The saddle-node bifurcation
Consider the one-dimensional system

ẋ = a − x²,   a ∈ R,   (2.44)

for which there is a pair of branches of fixed point solutions x_± = ±√a for a > 0 and no solution branches for a < 0. The upper (lower) branch is stable (unstable), and the pair meet at a = 0 in a saddle-node bifurcation.

In general, a one-dimensional dynamical system ẋ = f(x; λ) has a saddle-node bifurcation point at (x, λ) = (0, 0) if the following hold:
• f(0, 0) = 0, f_x(0, 0) = 0,
• f_xx(0, 0) ≠ 0,
• f_λ(0, 0) ≠ 0,
where subscripts denote differentiation. If f_λ(0, 0) f_xx(0, 0) < 0, a stable/unstable pair of fixed points exists for λ > 0, whereas if f_λ f_xx > 0, then such a pair exists for λ < 0. In higher dimensions, with x ∈ R^m and λ ∈ R^l, a saddle-node bifurcation occurs when an equilibrium (x, λ) = (x̄, λ̄), defined by f(x̄, λ̄) = 0, is nonhyperbolic, with precisely one eigenvalue, say μ1, with zero real part. This is


guaranteed if

D_x f(x̄, λ̄) v1 = 0,   (2.45)

and Re μ_i ≠ 0 for the remaining eigenvalues μ_i, i = 2, …, m, of D_x f(x̄, λ̄). Here, v1 ∈ C^m and w1 ∈ C^m are, respectively, the right and left eigenvectors of the Jacobian D_x f corresponding to the eigenvalue μ1. The function f(x, λ) must have non-vanishing quadratic terms in the neighbourhood of x̄ along the eigenvector v1, and the system must also satisfy the l-dimensional transversality condition

w1 D_λ f(x̄, λ̄) ≠ 0.   (2.46)

Here, D_x and D_λ denote the matrices of partial derivatives of f with respect to x and λ, respectively.
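Returning to the Morris–Lecar model of Sect. 2.5.1, the two Hopf points visible in Fig. 2.8 can be bracketed by a simple parameter scan: for each I, solve F(V̄, w_∞(V̄)) + I = 0 by bisection (the left-hand side is monotonic for the Hopf-regime parameters, so there is a single root and Det J > 0), and monitor the sign of Tr J from (2.42). A sketch, with derivatives taken by finite differences:

```python
import math

# Morris-Lecar parameters (Hopf regime, as in Sect. 2.5)
C, gL, gK, gCa = 20.0, 2.0, 8.0, 4.4
VL, VK, VCa = -60.0, -84.0, 120.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

m_inf = lambda V: 0.5*(1.0 + math.tanh((V - V1)/V2))
w_inf = lambda V: 0.5*(1.0 + math.tanh((V - V3)/V4))
tau_w = lambda V: 1.0/math.cosh((V - V3)/(2.0*V4))

def F(V, w):
    return -gL*(V-VL) - gK*w*(V-VK) - gCa*m_inf(V)*(V-VCa)

def steady_V(I, lo=-80.0, hi=60.0):
    """Bisection on F(V, w_inf(V)) + I = 0 (monotonic in this regime)."""
    g = lambda V: F(V, w_inf(V)) + I
    for _ in range(80):
        mid = 0.5*(lo + hi)
        if g(lo)*g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

def trace_J(I, h=1e-6):
    """Tr J from (2.42): (1/C) dF/dV at fixed w, minus phi/tau_w."""
    V = steady_V(I)
    w = w_inf(V)
    dFdV = (F(V+h, w) - F(V-h, w))/(2.0*h)
    return dFdV/C - phi/tau_w(V)

# Hopf points: sign changes of Tr J as I varies (Det J > 0 throughout here)
Is = [float(i) for i in range(0, 301)]
signs = [trace_J(I) > 0.0 for I in Is]
hopf = [Is[k] for k in range(1, len(Is)) if signs[k] != signs[k-1]]
print("Tr J changes sign near I =", hopf)
```

The scan returns two sign changes, bracketing the two Hopf bifurcation points that terminate the branch of periodic orbits in Fig. 2.8 (bottom); a bisection refinement of each bracket would locate them to arbitrary accuracy.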

2.5.2 Saddle-node bifurcations

If F(V̄, w_∞(V̄)) + I = 0 is non-monotonic in V̄, the system may support a number of fixed points simultaneously. Indeed, this can occur when V_K is increased (which can be achieved by increasing the extracellular K+ concentration). The number and location of steady states do not depend on the temperature parameter φ, though their stability will. As such, it is interesting to consider the dynamics at different values of φ. For large φ, w becomes a fast variable and can be assumed to be at quasi-steady state, namely w = w_∞(V). The model is then reduced to a one-dimensional dynamical system C V̇ = F(V, w_∞(V)) + I, for which oscillations are not possible. For small φ, it can occur that there are three fixed points, as shown in Fig. 2.9. The middle fixed point is a saddle, whose unstable manifold has one branch that connects directly to the stable fixed point, and another that goes around the unstable spiral and also connects to the stable fixed point. These two unstable manifolds are heteroclinic orbits. They effectively form (topologically) a circle that has two fixed points on it. As I is increased, the V-nullcline moves up, and the stable fixed point and the saddle annihilate one another and then disappear. As the two points coalesce and then disappear, the orbits connecting them form a single limit cycle, and at this critical value, denoted by I = Ic, the limit cycle has infinite period, i.e., the closed trajectory is a homoclinic orbit. In this scenario, the limit cycle ceases to exist at exactly the same point in parameter space at which a degenerate pair of fixed points comes into existence; this is often referred to as a saddle node on an invariant circle (SNIC) bifurcation. This is an example of a global bifurcation, and see Box 2.6 on page 32. For I just above Ic, the frequency of the periodic orbit scales as √(I − Ic) [270, 279]. The corresponding bifurcation diagram is shown in Fig. 2.10, which also shows that the periodic orbits terminate via a subcritical Hopf bifurcation, generating a range of bistability between an oscillation and a stable fixed point. For intermediate values of φ, it is possible that both the lower and upper steady states can be stable, with the upper steady state surrounded by an unstable periodic


Fig. 2.8 Top: Nullclines for different values of I (60, 160, and 260 μ A cm−2 ), corresponding to excitable, oscillatory, and depolarisation block states of the Morris–Lecar model. The solid grey cubic-like curve is the V -nullcline, and the straight dashed grey line is the w-nullcline. Trajectories are shown with black solid lines. Bottom: Bifurcation diagram as a function of I , showing a branch of periodic orbits with termini at two distinct Hopf bifurcation points.

orbit, as shown in Fig. 2.11. This figure also shows bistability between the low-voltage steady state and periodic firing. The corresponding bifurcation diagram is shown in Fig. 2.12. Here it can be seen that the lower steady-state branch is stable, the middle steady-state branch is unstable and of saddle type, and the upper branch can lose stability through a subcritical Hopf bifurcation (with decreasing I ). The emergent branch of unstable periodic orbits is annihilated in a collision with a branch of stable periodic orbits, and for some window of I there are three steady states and two periodic states, as seen in Fig. 2.11. With a further decrease in I , the stable periodic orbit can intercept the saddle, terminating the periodic branch at a value I = Ic , in what is termed a homoclinic bifurcation. When this occurs, the unstable manifold of the saddle leaves the fixed point along the cycle and then returns back along the


Fig. 2.9 (V, w) phase plane for the Morris–Lecar model with φ small, showing steady states (filled circle is stable, open circle unstable, and the half-filled circle is a saddle), nullclines (V in solid grey and w in dashed grey), and sample trajectories (black curves). The parameters are as shown in Fig. 2.8 with V3 = 12 mV, V4 = 17 mV, φ = 0.06666667, and I = 0 μ A cm−2 .

Fig. 2.10 Left: Bifurcation diagram for the Morris–Lecar model with small φ corresponding to Fig. 2.9, showing both a Hopf and an SNIC bifurcation. The black solid (dashed) line denotes the maximum and minimum amplitude of a stable (unstable) periodic orbit. The stable (unstable) fixed point is the branch of solutions shown in grey solid (dotted). Right: The corresponding frequency of the stable periodic orbit showing a square root scaling in the neighbourhood of the SNIC bifurcation. The inset shows the voltage time course for I = 40 μ A cm−2 .


Fig. 2.11 (V, w) phase plane with steady states (filled circles are stable and the half-filled circle is a saddle), nullclines (V in solid grey and w in dashed grey), and stable (unstable) periodic orbits in solid (dashed) black curves for the Morris–Lecar model with an intermediate value of φ = 0.25 and I = 30 μ A cm−2 . The black dotted line shows the separatrix (stable manifold of the saddle) between the stable periodic orbit and the stable fixed point with lowest V value.

cycle as the stable manifold. This homoclinic loop is called a ‘saddle loop homoclinic orbit’, and the frequency of oscillations scales as 1/|ln(I − Ic)| (see Prob. 5.6 on page 219). In summary, the Morris–Lecar model can switch from silence to repetitive firing in three different ways: (i) through a Hopf bifurcation, (ii) through an SNIC bifurcation, and (iii) through a homoclinic bifurcation. In the Hopf case, the oscillations appear with a finite frequency that is bounded away from zero. If the bifurcation is subcritical, then (for some range of drive) there is bistability between a periodic and a resting state. This behaviour is also known to occur in the squid axon, as well as in the Hodgkin–Huxley model, as seen in Fig. 2.5. The appearance of repetitive firing at a non-zero frequency is often referred to as Type II firing and is rarely seen in recordings from cortex. More usual is the observation of low firing frequencies, as is the case for the SNIC and homoclinic mechanisms for generating periodic behaviour. The f–I curves shown in the right-hand panels of Fig. 2.10 and Fig. 2.12 are both referred to as Type I, though from an experimental point of view it is hard to distinguish between them.
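The square-root frequency scaling at an SNIC can be checked against the one-dimensional saddle-node normal form ẋ = μ + x², with μ playing the role of I − Ic: the passage time through the bottleneck at x = 0 approaches T = π/√μ, so the frequency 1/T vanishes as √μ. A minimal numerical sketch (of the normal form, not of the Morris–Lecar model itself):

```python
import math

def passage_time(mu, span=100.0, n=200_000):
    """Time for x' = mu + x**2 to travel from -span to span,
    T = integral of dx / (mu + x**2), via the trapezoid rule."""
    h = 2.0 * span / n
    fs = [1.0 / (mu + (-span + i * h) ** 2) for i in range(n + 1)]
    return h * (sum(fs) - 0.5 * (fs[0] + fs[-1]))

for mu in (0.04, 0.01):
    # numeric passage time vs the theoretical bottleneck time pi / sqrt(mu)
    print(mu, passage_time(mu), math.pi / math.sqrt(mu))
```

As μ → 0⁺ the period diverges and the firing rate 1/T ∝ √μ, reproducing the scaling quoted above; the logarithmic scaling of the homoclinic case can be probed in the same spirit.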


Fig. 2.12 Left: Bifurcation diagram for the Morris–Lecar model with intermediate φ corresponding to Fig. 2.11, showing both a Hopf and a homoclinic bifurcation. The black solid (dashed) line denotes the maximum and minimum amplitudes of a stable (unstable) periodic orbit. The stable (unstable) fixed point is the branch of solutions shown in solid (dotted) grey. Right: The corresponding frequency of the stable periodic orbit.

Box 2.6: Global bifurcations

It is important to distinguish between local and global bifurcations. Loss of stability or disappearance of an equilibrium is a local bifurcation: when such a bifurcation takes place, the phase portrait of a dynamical system changes only in a small neighbourhood of the nonhyperbolic point, and outside this neighbourhood the vector field is qualitatively unchanged. Global bifurcations, in contrast, affect large regions of phase space. Some exemplars are the saddle-node bifurcation of limit cycles, the saddle node on an invariant circle bifurcation, and the homoclinic bifurcation.

Saddle-node bifurcation of limit cycles: occurs when two limit cycles come together and annihilate. By way of illustration, consider the plane polar model:

ṙ = μr + r³ − r⁵,   θ̇ = ω.   (2.47)

The radial dynamics undergoes a saddle-node bifurcation at μ = μc = −1/4, as depicted in the bifurcation diagram below showing the steady radial variable r = r(μ). As μ increases through μc, a stable and unstable pair of limit cycles emerges for μc < μ < 0, with amplitudes r²± = (1 ± √(1 + 4μ))/2.
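The pair of cycle amplitudes quoted here can be confirmed directly from the radial equation; a short numerical check:

```python
import math

def rdot(r, mu):
    # radial part of the polar model (2.47)
    return mu * r + r**3 - r**5

mu = -0.1                               # mu_c = -1/4 < mu < 0
s = math.sqrt(1.0 + 4.0 * mu)
r_plus = math.sqrt((1.0 + s) / 2.0)     # outer (stable) limit cycle
r_minus = math.sqrt((1.0 - s) / 2.0)    # inner (unstable) limit cycle

# central-difference slope of rdot at each radius gives the stability
eps = 1e-6
slope_plus = (rdot(r_plus + eps, mu) - rdot(r_plus - eps, mu)) / (2 * eps)
slope_minus = (rdot(r_minus + eps, mu) - rdot(r_minus - eps, mu)) / (2 * eps)
print(r_minus, r_plus, slope_minus, slope_plus)
```

Both radii are zeros of ṙ; the outer cycle is attracting (negative slope), the inner one repelling (positive slope), and the two merge as μ decreases to μc.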


Saddle node on an invariant circle bifurcation: occurs when stable and unstable fixed points annihilate, leaving behind a periodic orbit of infinite period. By way of illustration, consider the plane polar model:

ṙ = r(1 − r²),   θ̇ = μ − sin θ,   μ ≥ 0.   (2.48)

The radial dynamics has a stable fixed point with r = 1, describing a limit cycle in the plane if θ̇ ≠ 0 and a fixed point in the plane otherwise. In the latter case, there are two possible fixed points with angular values defined by μ = sin θ± for μ < 1. Thus, there is a bifurcation at μ = μc = 1, where θ+ and θ− coalesce. Here θ̇ = 0, giving rise to an infinite period orbit. The period of the emergent orbit beyond bifurcation is given by Δ = ∫₀^{2π} dθ/(μ − sin θ) = 2π/√(μ² − μc²) for μ > μc, and as μ approaches μc the period diverges as (μ − μc)^(−1/2).
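The period integral is easy to verify numerically; the trapezoid rule is a natural choice here because the integrand is smooth and 2π-periodic:

```python
import math

def period(mu, n=20_000):
    """Delta = integral over [0, 2*pi] of d(theta) / (mu - sin(theta)), mu > 1."""
    h = 2.0 * math.pi / n
    return h * sum(1.0 / (mu - math.sin(i * h)) for i in range(n))

for mu in (1.5, 1.1, 1.01):
    # numerical quadrature vs the closed form 2*pi / sqrt(mu**2 - 1)
    print(mu, period(mu), 2.0 * math.pi / math.sqrt(mu**2 - 1.0))
```

The period visibly diverges as μ approaches μc = 1 from above, consistent with the (μ − μc)^(−1/2) scaling.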

Homoclinic bifurcation: occurs when a limit cycle approaches a saddle point. Upon touching the saddle, the cycle becomes a homoclinic orbit with infinite period. By way of illustration, consider the planar model discussed by Sandstede in [776], with ẋ = f(x), where x = (u, v) and

u̇ = −u + 2v + u²,   v̇ = (2 − α)u − v − 3u² + (3/2)uv.   (2.49)

The origin (0, 0) is a saddle for small |α|. For α = 0, the orbits are given by the Cartesian leaf H(u, v) = u²(1 − u) − v² = 0. One of these orbits is homoclinic to (0, 0). To check this, one has to verify that the vector field of the dynamical system for α = 0 is tangent to the curve H(u, v) = 0, or equivalently that it is orthogonal to the normal of the curve. The normal is proportional to ∇H = (2u − 3u², −2v)ᵀ. A direct calculation shows that ⟨f, ∇H⟩ = −2u² + 5u³ − 3u⁴ + v²(2 − 3u). Using v² = u²(1 − u) on H(u, v) = 0, it follows that ⟨f, ∇H⟩ = 0. A unique stable limit cycle bifurcates from the homoclinic orbit under variation of α, as illustrated below.
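The tangency calculation can be spot-checked numerically: sample points on the leaf v² = u²(1 − u) and confirm that ⟨f, ∇H⟩ vanishes at α = 0 (it does not for α ≠ 0):

```python
import math

def f(u, v, alpha=0.0):
    # the planar vector field (2.49)
    du = -u + 2.0 * v + u**2
    dv = (2.0 - alpha) * u - v - 3.0 * u**2 + 1.5 * u * v
    return du, dv

def grad_H(u, v):
    # gradient of H(u, v) = u**2 * (1 - u) - v**2
    return 2.0 * u - 3.0 * u**2, -2.0 * v

for u in (0.1, 0.5, 0.9):
    for sign in (1.0, -1.0):
        v = sign * u * math.sqrt(1.0 - u)   # a point on H(u, v) = 0
        du, dv = f(u, v)
        gu, gv = grad_H(u, v)
        print(u, sign, du * gu + dv * gv)   # zero: flow is tangent to the leaf
```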

2.6 Other single neuron models

There is now a plethora of single neuron models that build upon the formalism of the Hodgkin–Huxley model to incorporate the many different ionic currents that give rise to the patterns of voltage activity seen in electrophysiological studies, and in particular in patch clamp recordings. These provide access to currents and channels in electrically compact neurons or small membrane patches, down to the single channel level, so that opening/closing statistics can sometimes be obtained. For example, Purkinje cells show sodium-dependent action potentials in the soma and calcium-based action potentials in the dendritic tree, whilst thalamic relay cells can produce a burst of action potentials if the cell is released from a hyperpolarised state. In many cases, the underlying ionic currents are given names like those described in Box 2.7 on page 35, where some of their main electrophysiological effects are summarised.


Box 2.7: Common ion channels

A brief description of some common ionic currents.

• INa,t: The ‘t’ is for transient. This sodium current (often found in axons and cell bodies) is involved in action potential generation and rapidly activates and inactivates. In vertebrates, these channels are somewhat faster than those of the squid.
• INa,p: The ‘p’ is for persistent. This non-inactivating sodium current is much smaller in amplitude than INa,t. It can enhance the response to excitation and keep the cell moderately depolarised for extended periods (making it easier for the cell to fire).
• IT: A low-threshold transient calcium current that rapidly inactivates, with an activation threshold more negative than −65 mV. It is responsible for rhythmic burst firing (of a packet of spikes) in response to hyperpolarisation. Depolarisation above −60 mV inactivates this current and eliminates the bursting.
• IL: A high-threshold long-lasting calcium current that slowly inactivates, with a threshold of about −20 mV. It can generate calcium spikes in dendrites and is involved in synaptic transmission.
• IK: A potassium current activated by strong depolarisation (above −40 mV), often referred to as the ‘delayed rectifier’. Its role is to repolarise the membrane after an action potential. It slowly inactivates, and recovery from inactivation takes seconds.
• IK,Ca: A potassium current activated by increases in intracellular calcium concentration (say from IL) that is also sensitive to membrane depolarisation. It can affect interspike interval duration and inactivates quickly upon repolarisation. There are two primary types of ion channel that carry the IK,Ca current: big potassium (BK) channels are typically few in number but have a high single channel conductance, whereas small potassium (SK) channels are abundant but have small individual conductances.
• IAHP: A slow after-hyperpolarisation potassium current that is sensitive to cellular calcium concentration, but insensitive to membrane potential.
• IA: A transient, inactivating potassium current that can delay the onset of firing. It activates in response to membrane potentials greater than −60 mV, but then inactivates rapidly.
• IM: A muscarinic potassium current that is activated by depolarisation to about −65 mV. It is non-inactivating and gives rise to a form of ‘spike-frequency adaptation’, in which the cell is quietened after an initial spike (raising the threshold for firing an action potential).
• Ih: A depolarising mixed cation (Na+ and K+) current activated by hyperpolarisation, sometimes called a ‘sag’ current. It can generate slow rhythms and has cAMP receptors that modulate its voltage dependence.

For a comprehensive discussion of the ionic channels of excitable membranes, see the book by Hille [420].


All of the nonlinear ionic currents in Box 2.7 on page 35 have mathematical descriptions of the form

Iy = gy my^p hy^q (V − Vy),   (2.50)

where gy is a constant conductance, Vy is the reversal potential, p and q are positive integers, and my and hy are activation and inactivation variables, respectively (taking values in the range [0, 1]). Non-inactivating currents are described with the choice hy = 1, and passive currents with the choice my = hy = 1. The voltage equation for a single point neuron model is then given by (2.15) with Iion = ∑y Iy. Next, we discuss a few of the more common single neuron models that incorporate some of these currents.
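Encoded directly, (2.50) gives a single reusable function. The parameter values below are illustrative (loosely echoing the squid delayed-rectifier and leak numbers used earlier in the chapter), not tied to any particular current in Box 2.7:

```python
def ionic_current(V, m, h, g, V_rev, p=1, q=1):
    """Generic current (2.50): I_y = g * m**p * h**q * (V - V_y).
    Set h = 1 for a non-inactivating current, m = h = 1 for a passive one."""
    return g * m**p * h**q * (V - V_rev)

# a delayed-rectifier-like potassium current (p = 4, non-inactivating)
I_K = ionic_current(V=-20.0, m=0.6, h=1.0, g=36.0, V_rev=-77.0, p=4)
# a passive leak current (m = h = 1)
I_L = ionic_current(V=-20.0, m=1.0, h=1.0, g=0.3, V_rev=-54.4)
print(I_K, I_L)
```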

2.6.1 A plethora of conductance-based models

Here we briefly describe some currently popular model choices for describing neural cells for which a certain type of firing pattern is expected. Full model details can be found in Appendix B.

The Connor–Stevens model

This model contains a fast sodium current, a delayed-rectifier potassium current, and a leak current, as in the Hodgkin–Huxley model, together with an additional transient potassium current, the so-called A-current. This current was first described by Connor and Stevens [177]; it is activated under depolarisation and can act to slow the rate of action potential generation, giving rise to low firing rates. Indeed, the model was originally designed to capture the low firing rates of a crab motor axon. A numerical bifurcation analysis as a function of constant current injection, with parameters from [178], shows that the mechanism for this slowdown is an SNIC bifurcation. See Fig. 2.13 for a comparison of the firing properties of a Hodgkin–Huxley neuron with and without the inclusion of an A-current. However, it is well to note that currents classically thought to permit low firing rates can paradoxically cause a jump to a high minimum firing rate when expressed at higher levels. Consequently, achieving and maintaining a low firing rate can be surprisingly difficult and fragile in a biological context [261]. A reduction of the Connor–Stevens model from six variables to two, using the method of equivalent potentials described in Section 2.4, was performed by Kepler et al. [501], and more recently by Drion et al. [261] using timescale separation and a slaving principle. This latter approach allows an interpretation of the model in terms of the (V, n) variables of the original Hodgkin–Huxley model upon which the Connor–Stevens model is based (with two fast variables given by


Fig. 2.13 A numerical plot of the f − I curve for the Connor–Stevens model showing the change from Type II (Hopf) to Type I (SNIC) firing, as the conductance for the A-current is switched from g A = 0 to g A = 60 mmho cm−2 . The insets show firing patterns for the choice I = 15 μ A cm−2 . Other parameters are as in [178].

their quasi-steady forms as functions of V, and two slow variables functionally slaved to n). This approach, illustrated in Fig. 2.14, shows the emergence of a lower branch to the V-nullcline, giving rise to an hourglass shape. The physiological meaning of this branch has only recently been studied, using singularity theory to prove its existence [260, 316, 319]. Also shown in Fig. 2.14 is a trajectory initiated from a hyperpolarised state. This trajectory must crawl over a new obstacle, not present in the reduced Hodgkin–Huxley model, and the lower it starts, the farther it has to climb. This provides a geometric interpretation of the latency to fire caused by the inclusion of an A-current.

The Wang–Buzsáki model

The Wang–Buzsáki model [919] has the form of the Hodgkin–Huxley model, but the gating variable m is replaced by its steady-state value (i.e., activation is assumed to be fast). Thus, it possesses a transient sodium current, a leak current, and a delayed-rectifier potassium current. It displays two salient properties of hippocampal and neocortical fast-spiking interneurons. Firstly, the action potential in these cells is followed by a brief after-hyperpolarisation. Secondly, the model can fire repetitive spikes at high frequencies. It is a common modelling choice for a fast-spiking inhibitory cell. Such cells are important in models of the cerebral cortex, particularly in preventing runaway excitation. A typical fast-spiking voltage waveform under constant current injection


Fig. 2.14 Phase plane of the two-dimensional reduced Connor–Stevens model [261]. The solid grey curve is the V -nullcline, and the sigmoidal dashed grey line is the n-nullcline. Note that in comparison to the reduced Hodgkin–Huxley model, illustrated in Fig. 2.6, the V -nullcline has an hourglass shape rather than an inverted N (cubic) shape. A trajectory, initiated from a hyperpolarised state, is shown by the solid black line. The parameters are as shown in Fig. 2.13 with g A = 20 mmho cm−2 and I = −1 μ A cm−2 .

is shown in Fig. 2.15. If the leak conductance of the model is low, then the model can support an SNIC bifurcation, whilst for a high leak conductance it can support a Hopf bifurcation (under constant current injection). With the inclusion of a muscarinic M-current IM, the Wang–Buzsáki model can also support a Bogdanov–Takens bifurcation (where the critical equilibrium has a zero eigenvalue of algebraic multiplicity two), changing the excitability type of the model [21]. At the network level, the Wang–Buzsáki model has been used to study gamma rhythms created by mutual synaptic inhibition in the hippocampus [919].

The Golomb–Amitai model

The Golomb–Amitai model [356] describes an excitatory cortical neuron and incorporates sodium, persistent sodium, delayed-rectifier potassium, slow (muscarinic) potassium, A-type potassium, and leak currents. The slow potassium current IK−slow (with a timescale of ∼100 ms) can interact with the fast-spiking currents of the model to produce a form of adaptation. The neuronal behaviour with the slow potassium


Fig. 2.15 Firing pattern of the Wang–Buzsáki model under constant current injection showing a firing pattern consistent with that of a fast-spiking interneuron. The parameters are as in [919] with I = 1 μ A cm−2 .

Fig. 2.16 Firing patterns in the Golomb and Amitai model of a repetitive spiking cortical neuron. Top: With I K −slow blocked the model can show fast tonic spiking. Bottom: With I K −slow intact the model exhibits a form of adaptation, whereby interspike intervals are lengthened. Here the injected current is I = 2.5 μ A cm−2 , and other parameters are as in [356].

current blocked and unblocked is shown in Fig. 2.16. With IK−slow blocked (i.e., gK−slow set to 0), a sufficiently large constant current injection will cause the cell to fire tonically. If the applied current is too high, the neuron membrane potential may approach a depolarised plateau following a period of damped oscillatory firing. In contrast, when IK−slow is intact, a strong enough current injection causes the model neuron to fire with the interspike interval increasing in time until it reaches a steady-state value. This behaviour, which mimics the adaptation observed experimentally in repetitive spiking neurons, occurs because IK−slow builds up over successive action potentials, hyperpolarising the neuron and reducing its firing rate.


Fig. 2.17 Firing patterns in the Wang thalamic relay neuron model. From top to bottom I = −0.45 μ A cm−2 , I = −0.55 μ A cm−2 , I = −0.8 μ A cm−2 , and I = −1.3 μ A cm−2 , respectively. For I = 0 μ A cm−2 , the system has a stable steady-state value of −60.5 mV and for I = −2 μ A cm−2 the system has a stable steady-state value of −76 mV.

Both the first interspike interval and the long-time interspike interval decrease with the injected current amplitude.
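The mechanism just described, a slow outward current that accumulates over spikes and stretches the interspike intervals, can be caricatured with a far simpler model than Golomb–Amitai: a leaky integrate-and-fire neuron (treated properly in Chap. 3) with a spike-triggered adaptation variable. All parameter values below are hypothetical and dimensionless; the sketch illustrates only the qualitative effect, not the Golomb–Amitai dynamics:

```python
# v' = I - v - a, a' = -a / tau_a; spike when v >= 1: v -> 0, a -> a + b
dt, t_end = 1e-3, 60.0
I, tau_a, b = 2.0, 100.0, 0.2
v, a = 0.0, 0.0
last_spike, isis = None, []

t = 0.0
while t < t_end:
    v += dt * (I - v - a)       # forward-Euler step for the voltage
    a -= dt * a / tau_a         # slow decay of the adaptation variable
    t += dt
    if v >= 1.0:                # threshold crossing: emit spike and reset
        v = 0.0
        a += b                  # adaptation builds up with each spike
        if last_spike is not None:
            isis.append(t - last_spike)
        last_spike = t

print([round(x, 2) for x in isis])   # interspike intervals lengthen
```

As a accumulates, the effective drive I − a falls and successive intervals stretch, mirroring the lengthening of interspike intervals in Fig. 2.16.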

Wang thalamic relay neuron model

In [917], Wang developed a model of bursting oscillations in a thalamocortical relay neuron. This four-dimensional model includes three fast currents: a transient sodium current, a persistent sodium current, and a potassium current. Both sodium currents are assumed to activate instantaneously. The sodium inactivation and potassium activation are assumed to be slaved together, as described in Sect. 2.4, with h = 0.85 − n. The model also includes a passive leak current and two slow currents: a T-type calcium current, IT, and a sag current, Ih. When the IT current is evoked, Ca2+ entering the neuron via T-type Ca2+ channels causes a large voltage depolarisation known as a low-threshold Ca2+ spike (LTS). Conventional action potentials, mediated by fast Na+ and K+ (delayed-rectifier) currents, often ride on the crest of an LTS, resulting in a burst response (i.e., a tight cluster of spikes). The calcium inactivation and sag current activation are slow variables (although the sag current activation is rapid at high voltages, so that it changes value substantially during an action potential). The model provides a cellular basis for the 7–14 Hz (spindle) oscillation and the slower 0.5–4 Hz delta rhythm, seen in the early and deep stages of sleep, respectively


Fig. 2.18 Left: A plot of m∞³(V) and h∞(V) using the IT model of Williams et al. [935]. Here m∞(V) = 1/(1 + exp(−(V + 63)/7.8)), h∞(V) = 1/(1 + exp((V + 83.5)/6.3)), IT = gT m³h(V − VT), IL = gL(V − VL), with VT = 180 mV, gT = 40 nS, VL = −95 mV, and gL = 1 nS. The region of ‘overlap’ defining the voltage range over which the window current is significant is emphasised by the shaded region. Right: A plot of the steady-state window and leak currents. Equilibria occur where the two curves cross; in this example there are three fixed points. For standard physiological parameters, the low- and high-voltage states are stable and the intermediate one is unstable, so that the system is bistable.

[917]. Moreover, it shows rhythmic bursting, as illustrated in Fig. 2.17, which can bifurcate from a sub-threshold oscillation. Near the bifurcation, chaotic discharge patterns are observed, in which spikes occur intermittently at seemingly random cycles of a mostly sub-threshold slow oscillation. Both slow currents allow the model to respond to the release of hyperpolarising inhibition with a so-called post-inhibitory rebound response, which results in a burst of action potentials [268]. See Box 3.1 on page 67 for a discussion of fast–slow analysis and [381] for its use in dissecting the thalamic model of Wang. As well as playing a role in burst firing, the IT current can lead to membrane potential bistability and play a role in the generation of slow (< 1 Hz) thalamic rhythms [219, 439, 440, 935]. This membrane potential bistability is due to an imbalance between a hyperpolarising leak current and a depolarising steady calcium current. The steady current arises when there is an overlap, or window, in the voltage range of activation and inactivation of T-channels, so that a fraction of the channels are always open and do not inactivate. An illustration of this phenomenon is given in Fig. 2.18. Since the amplitude of this sustained current can diminish significantly as the membrane voltage moves outside of the window, driving signals that move the membrane potential in and out of the window can create complex behaviours. Indeed, this current can underlie a chaotic response to constant cellular current injection; see [560] for further discussion.


Fig. 2.19 Left panel: Typical output from the Pinsky–Rinzel model under constant current injection showing bursting behaviour in the soma. Right panel: Structure of a burst pattern for the voltage in both the soma (solid black line) and dendritic (dotted black line) compartment. The parameters are as in [719] with I = 0.75 μ A cm−2 .

The Pinsky–Rinzel model

Repetitive bursting can also be generated by a ‘ping-pong’ interplay between action potentials in the soma and slow active currents in the dendrites. A fuller discussion of dendrites is given in Chap. 4; here we briefly review the Pinsky–Rinzel model of a hippocampal CA3 pyramidal cell [719]. This is an eight-variable model consisting of an electrically coupled soma and dendritic compartment with active currents, and is itself a reduction of a more complex 19-compartment cable model [881]. The model segregates the fast currents for sodium spiking into a set of equations representing activity at the soma, and the slower calcium and calcium-mediated currents into another set representing activity in the dendrites. An example of the model output in response to constant current injection is shown in Fig. 2.19; for a more comprehensive mathematical discussion see [93]. The inset shows a zoom of both the somatic and dendritic voltages, making it clear that the burst is initiated by a sodium spike in the soma compartment that then invades and depolarises the dendritic compartment. This, combined with the electrical coupling back to the soma, can generate a partial dendritic calcium spike. As a result, the soma is overdriven and the sodium spiking mechanism is shut down by depolarisation block. Eventually, the burst ends when the calcium spike is shut off by an outward voltage-activated potassium current (which builds up during the burst). A similar ‘ping-pong’ mechanism (combined with an after-depolarisation generated by a persistent dendritic sodium current and a somatic sodium window current) has been used in [918] to model chattering in neocortical neurons, which can fire fast rhythmic bursts in the gamma frequency range (∼40 Hz).

2.7 Quasi-active membranes

Many neurons exhibit resonances, whereby sub-threshold oscillatory behaviour is amplified for inputs at preferred frequencies. A nice example is the sub-threshold frequency preference seen in neurons of rat sensorimotor cortex [443]. In response to supra-threshold inputs, this frequency preference leads to an increased likelihood of firing for stimulation near the resonant frequency. It is known that the


Fig. 2.20 The electrical diagram for an ‘LRC’ circuit, showing an ‘RC’ circuit, with capacitance C and resistance R in parallel with an inductive circuit. The latter consists of a resistance r in series with an inductor L. In the limit r → ∞, no current can flow through the inductor and the circuit behaves as an ‘RC’ circuit.

nonlinear ionic current Ih is partly responsible for this resonance. From a mathematical perspective, Mauro et al. [610] have shown that a linearisation of such channel kinetics about rest may adequately describe the observed resonant dynamics. Indeed, this story has its roots in the original work of Hodgkin and Huxley, who, in their 1952 paper [425], also showed that the sub-threshold response to weak external currents can be explained by a linearisation of their model. Moreover, they interpreted their linearised model in terms of an inductance, as originally proposed by Cole [176]. In the terminology of electrical engineering, the resulting linear system has a membrane impedance that displays resonant-like behaviour due to the additional presence of inductances. This extends the more usual ‘RC’ circuit description of passive membrane to the so-called quasi-active or ‘LRC’ case, with electrical components as shown in Fig. 2.20. Here an ‘RC’ circuit is shown with another current pathway in parallel, described by a resistor of resistance r in series with an inductor of inductance L. The I–V relationship for an inductor is V = −L dI/dt, where L is measured in henries (H). Here we describe the theory of quasi-active membrane, following the approach in [523], and show how it may be interpreted in the language of ‘LRC’ circuits, i.e., circuits with a resistor, capacitor, and inductor in parallel. Although linear, these models are highly relevant to understanding the dynamic properties of neurons in the sensory periphery and have recently been used in an auditory context to show how


sub-threshold resonance properties can contribute to the efficient coding of auditory spatial cues [742]. To start with, consider a generic ionic membrane current of the form

Iion = Iion(V, w1, . . . , wN),   (2.51)

where V is a voltage and the wk are gating variables that satisfy (cf. Sec. 2.3.2)

τk(V) dwk/dt = wk,∞(V) − wk,   k = 1, . . . , N.   (2.52)

It is traditional to write τk(V) = (αk(V) + βk(V))^−1, where wk,∞(V) = αk(V)τk(V), as was done in Sec. 2.3. Now consider variations around some fixed point (Vss, w1,∞(Vss), . . . , wN,∞(Vss)), so that (V, wk) = (Vss, wk,∞(Vss)) + (δV, δwk), which gives

δIion = δV/R + ∑_{k=1}^{N} [∂Iion/∂wk]ss δwk,   (2.53)

where the resistance R is defined by R^−1 = [∂Iion/∂V]ss, and the subscript ‘ss’ denotes that quantities are evaluated at steady state. Using (2.52), the evolution of the perturbations in the gating variables around the steady state can be written as

[d/dt + αk + βk] δwk = [dαk/dV − d(αk + βk)/dV wk,∞] δV.   (2.54)

We may now write (2.53) in the form

δIion = δV/R + ∑_{k=1}^{N} δIk,   (2.55)

where

[rk + Lk d/dt] δIk = δV.   (2.56)

Here

rk^−1 = [∂Iion/∂wk dwk,∞/dV]ss,   Lk = τk rk.   (2.57)

Hence, for a small perturbation around the steady state, the current Iion responds as though the resistance R is in parallel with N impedance lines. Each of these is a resistance rk that is itself in series with an inductance Lk. Such inductive terms account for the oscillatory overshoot commonly seen in response to depolarising current steps, or even after the firing of an action potential. This form of equivalent linear membrane circuit is called quasi-active to distinguish it from a truly active (i.e., nonlinear) membrane. Now consider a general current balance equation in the form

C dV/dt = −gL(V − VL) − Iion + I.   (2.58)


The linearised equations are

C dδV/dt = −δV/R̄ − ∑_{k=1}^{N} δIk,   1/R̄ = gL + 1/R,   (2.59)

Lk dδIk/dt = −rk δIk + δV.   (2.60)

The steady-state voltage, for a constant drive I, satisfies

I = gL(Vss − VL) + Iion(Vss, w1,∞(Vss), . . . , wN,∞(Vss)).   (2.61)

If a time-dependent drive A(t) is introduced on the right-hand side of (2.59), then the resulting equations can be readily solved using integral transforms. Introducing the Laplace transform (with spectral parameter λ ∈ ℂ)

η̃(λ) = ∫₀^∞ dt e^(−λt) η(t),   (2.62)

and assuming zero initial data, then δṼ(λ) = K(λ)Ã(λ), where K(λ) is a complex impedance given by

K(λ) = [Cλ + 1/R̄ + ∑_{k=1}^{N} 1/(rk + λLk)]^−1.   (2.63)

Since ‘LRC’ circuits are expected to show oscillatory behaviour in the presence of one or more resonances, it is natural to look for conditions where the imaginary part of K(iω) vanishes, with ω ∈ ℝ, or where |K(iω)| has a maximum. To illustrate the application of the theory above, it is enough to consider a simple model of Ih, due to Magee [599], with a single gating variable f, so that Ih = gh(V − Vh)f. In this case, the impedance (2.63) reduces to

K(iω) = [r(1 + r/R̄) + ω²L²/R̄ + iω(L(1 − ω²LC) − Cr²)] / [ω²(Cr + L/R̄)² + (1 + r/R̄ − ω²LC)²].   (2.64)
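The rationalisation leading to (2.64), and the resonance condition that follows from it, can be checked numerically against the defining form (2.63) with N = 1; the component values below are arbitrary test values (chosen so that r² < L/C):

```python
import math

def K_def(w, C, Rbar, r, L):
    # impedance (2.63) with a single inductive pathway, at lambda = i*w
    s = 1j * w
    return 1.0 / (C * s + 1.0 / Rbar + 1.0 / (r + s * L))

def K_explicit(w, C, Rbar, r, L):
    # the rationalised form (2.64)
    num = r * (1.0 + r / Rbar) + w**2 * L**2 / Rbar \
          + 1j * w * (L * (1.0 - w**2 * L * C) - C * r**2)
    den = w**2 * (C * r + L / Rbar)**2 + (1.0 + r / Rbar - w**2 * L * C)**2
    return num / den

C, Rbar, r, L = 1.0, 2.0, 0.5, 4.0
for w in (0.1, 0.3, 1.0):
    assert abs(K_def(w, C, Rbar, r, L) - K_explicit(w, C, Rbar, r, L)) < 1e-12

# resonance (2.65): Im K(i*w_c) = 0, independent of the choice of Rbar
wc = math.sqrt(1.0 / (L * C) - r**2 / L**2)
print(wc, K_def(wc, C, Rbar, r, L).imag, K_def(wc, C, 7.0, r, L).imag)
```

Changing Rbar in the last line leaves the resonant frequency untouched, as the formula for ωc predicts.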

The resonant frequency, defined by Im K(iωc) = 0, is thus given by

ωc² = 1/(LC) − r²/L²,   r² < L/C,   (2.65)

which is independent of R̄. The values for r and L can be determined from (2.57) and the detailed form of the Magee model of Ih [599], for which the potential Vh = −16 mV and the conductance gh = 0.09 mmho cm−2. The functions that appear in the gating dynamics (for a temperature of 27°C) are

τf(V) = 224.22 e^(0.03326(V+80)) / (1 + e^(0.08316(V+80))),   (2.66)


and f∞(V) = 1/(1 + e^((V+92)/8)). From (2.57),

r^−1 = −gh(Vss − Vh) f∞(Vss)(1 − f∞(Vss))/8,   (2.67)

and L = τf(Vss)r. For a resting state of −65 mV, this gives ωc ≈ 26 Hz, decreasing to zero with increasing Vss. For the linearisation of the Hodgkin–Huxley model (with N = 3), the impedance function describes a bandpass filter with optimal response around 67 Hz [610], and so the Hodgkin–Huxley model is expected to selectively amplify its response to inputs at these frequencies. The range of validity of the reductive process is limited to a few millivolts around the resting potential.

2.8 Channel models

In the discussion of voltage-gated channels in Sec. 2.3.2, a large number of independent channels were assumed so that fluctuations could safely be ignored. However, fluctuating currents are readily observed in patch clamp experiments, and it is well known that action potentials can in fact arise spontaneously due to channel fluctuations; see [932] for a nice review. Channel noise, say in excitable dendritic spines or the axon hillock, is undoubtedly important in controlling the fidelity of action potential timing and is likely to be a major factor that limits the degree to which the brain’s wiring can be miniaturised [290]. Interestingly, a theoretical study by O’Donnell and van Rossum has shown that the fluctuations from the stochastic gating of potassium channels are the dominant source of noise in a stochastic version of the Hodgkin–Huxley model [674]. Here we discuss stochastic models of ion channels making use of mathematical techniques from the theory of stochastic processes. A brief overview of these techniques is presented in Appendix A.

2.8.1 A two-state channel

The mathematical description of the simple two-state model given by (2.8) is that of a continuous-time Markov process, and it is illuminating to follow the treatment of Smith [821] to show how this can give rise to a stochastic correction to (2.9), for a large though finite number of channels. First, consider a probabilistic description of a single ion channel and denote the probability that the channel is in the open or closed state at time t by P_o(t) and P_c(t), respectively, with P_o(t) + P_c(t) = 1 (from conservation of probability). In a small time Δt, the probability that the channel will transition from closed to open is

\[ \mathrm{Prob}(\text{channel open at time } t+\Delta t \mid \text{channel closed at time } t) = k^+ \Delta t. \tag{2.68} \]

(2.68)

Multiplying this by P_c(t), the probability that the channel is actually in the closed state, gives k⁺P_c(t)Δt as the probability that the transition from closed to open actually occurs. Arguing similarly, the probability that a channel transitions from open to closed is k⁻P_o(t)Δt. Hence,

\[ P_c(t+\Delta t) = (1 - k^+ \Delta t)P_c(t) + k^- P_o(t)\Delta t, \tag{2.69} \]

which, upon taking the limit Δt → 0, yields an equation of the form (2.9) with f_o replaced by P_o, namely,

\[ \frac{\mathrm{d} P_o}{\mathrm{d} t} = -k^- P_o + k^+(1 - P_o). \tag{2.70} \]

However, whilst (2.9) governs the proportion of channels across the whole membrane that are open, (2.70) governs the changes in opening probabilities for a single two-state channel.
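The relaxation described by (2.70) is exponential, with P_o(t) = p_∞ + (P_o(0) − p_∞)e^{−t/τ}, where p_∞ = k⁺/(k⁺ + k⁻) and τ = 1/(k⁺ + k⁻). A minimal sketch (with arbitrary rate values) comparing a forward-Euler integration of (2.70) against this exact solution:

```python
import math

# Forward-Euler integration of (2.70) versus the exact solution
# P_o(t) = p_inf + (P_o(0) - p_inf) exp(-t/tau). Rates are arbitrary.

kp, km = 2.0, 1.0
p_inf, tau = kp / (kp + km), 1.0 / (kp + km)

def euler(p0, T, dt):
    p = p0
    for _ in range(int(T / dt)):
        p += dt * (-km * p + kp * (1.0 - p))   # right-hand side of (2.70)
    return p

T = 2.0
exact = p_inf + (0.0 - p_inf) * math.exp(-T / tau)
assert abs(euler(0.0, T, 1e-4) - exact) < 1e-3
```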

2.8.2 Multiple two-state channels

To treat N independent channels, note that there are N + 1 possibilities for the number of open channels, namely, 0, 1, 2, …, N − 2, N − 1, N, and thus N + 1 distinguishable states for the ensemble. Labelling these states S₀, S₁, S₂, …, S_{N−2}, S_{N−1}, S_N, the transition state diagram, generalising (2.8), can be written as

\[ S_0 \;\underset{k^-}{\overset{Nk^+}{\rightleftharpoons}}\; S_1 \;\underset{2k^-}{\overset{(N-1)k^+}{\rightleftharpoons}}\; S_2 \;\underset{3k^-}{\overset{(N-2)k^+}{\rightleftharpoons}}\; \cdots \;\underset{(N-2)k^-}{\overset{3k^+}{\rightleftharpoons}}\; S_{N-2} \;\underset{(N-1)k^-}{\overset{2k^+}{\rightleftharpoons}}\; S_{N-1} \;\underset{Nk^-}{\overset{k^+}{\rightleftharpoons}}\; S_N. \tag{2.71} \]

To explain the combinatoric factors, it is enough to consider the transition rate from state S₀ to S₁. In this case, any one of N closed channels can open at rate k⁺, resulting in one open channel labelled by the ensemble state S₁. More generally, denote the probability that there are n open channels (i.e., the system is in state S_n), where 0 ≤ n ≤ N, and N − n closed channels by P_n(t). Consider a sufficiently small time interval Δt so that at most one channel transitions from an open to a closed state (or vice versa). In this case, there are only four possible events that affect P_n(t) during this interval: i) there are n open channels and a closed one opens with probability (N − n)k⁺Δt, ii) there are n open channels and an open one closes with probability nk⁻Δt, iii) there are n − 1 open channels and a closed one opens with probability (N − n + 1)k⁺Δt, and iv) there are n + 1 open channels and one closes with probability (n + 1)k⁻Δt. Combining these allows one to write (for n ≠ 0, N)

\[ P_n(t+\Delta t) = k^+(N-n+1)P_{n-1}(t)\Delta t + k^-(n+1)P_{n+1}(t)\Delta t + \left[1 - k^+(N-n)\Delta t - k^- n \Delta t\right] P_n(t), \tag{2.72} \]

where events i) and ii) have been combined in the final term. Taking the limit Δt → 0 yields the master equation

\[ \frac{\mathrm{d}P_n}{\mathrm{d}t} = k^+(N-n+1)P_{n-1} + k^-(n+1)P_{n+1} - k^+(N-n)P_n - k^- n P_n. \tag{2.73} \]


Upon introducing the vector representation of the time-dependent probabilities, P(t) = (P₀(t), …, P_N(t)), equation (2.72) can be written in the succinct form P(t + Δt) = QP(t)Δt, which is the vector analogue of (2.69), where Q is the tridiagonal matrix

\[ Q = \begin{bmatrix} D_0 & k^- & & & 0 \\ Nk^+ & D_1 & 2k^- & & \\ & \ddots & \ddots & \ddots & \\ & & 2k^+ & D_{N-1} & Nk^- \\ 0 & & & k^+ & D_N \end{bmatrix}. \tag{2.74} \]

The diagonal terms of Q are such that each of the columns of QΔt sums to one (for probability to be conserved), giving rise to the condition D_i = (Δt)⁻¹ − Σ_{j≠i} Q_{ji}. In a similar vein, the master equation (2.73) can be written as Ṗ = Q̃P, where Q̃_{ij} = Q_{ij} for i ≠ j and −Q̃_{ii} = Σ_{j≠i} Q_{ji}, where the latter represents the total escape rate from state i. The general time-dependent solution can be written using matrix exponentials as P(t) = exp(Q̃t)P(0), and the steady-state distribution, denoted P_∞, is a solution of Q̃P_∞ = 0, namely, an eigenvector of Q̃ with eigenvalue zero. The off-diagonal elements of QΔt represent the probability of making a transition from state j to state i in a time step of duration Δt, and this description is well suited to numerical investigation using a Monte Carlo simulation. However, as N increases, the Monte Carlo approach can be prohibitively slow, and it is then convenient to use the Gillespie algorithm as presented in Box 2.8, which takes advantage of knowledge of the distribution of time spent in a given state. The mean number of open channels at time t is

\[ \bar{n}(t) = \sum_{n=0}^{N} n P_n(t). \tag{2.75} \]

By differentiating the above with respect to t and using the master equation, the mass action model given by (2.9) with f_o = n̄/N is recovered. It is convenient to introduce P_∞(n) as the nth element of the steady-state distribution, P_∞, of (2.73), assuming it exists. The steady state obeys the relationship J(n + 1) = J(n), where

\[ J(n) = \omega_-(n)P_\infty(n) - \omega_+(n-1)P_\infty(n-1), \tag{2.76} \]

with ω₊(n) = (N − n)k⁺ and ω₋(n) = nk⁻. By iteration, this generates

\[ P_\infty(n) = P_\infty(0) \prod_{m=1}^{n} \frac{\omega_+(m-1)}{\omega_-(m)} = P_\infty(0)\left(\frac{k^+}{k^-}\right)^{n} \binom{N}{n}, \tag{2.77} \]

with P_∞(0) determined via the normalisation Σ_{n=0}^{N} P_∞(n) = 1 as

\[ P_\infty(0) = \left[ 1 + \sum_{n=1}^{N} \prod_{m=1}^{n} \frac{\omega_+(m-1)}{\omega_-(m)} \right]^{-1} = \frac{(k^-)^N}{(k^- + k^+)^N}, \tag{2.78} \]


where in the last line the binomial expansion formula has been used. Hence, remembering the definition of f_∞ from equation (2.10), P_∞(n) is given by a binomial distribution with

\[ P_\infty(n) = \binom{N}{n} f_\infty^n (1-f_\infty)^{N-n}. \tag{2.79} \]

At steady state, the mean number of open channels is n̄_∞ = Σ_{n=0}^{N} n P_∞(n) = N f_∞, with variance σ_∞² = Σ_{n=0}^{N} (n − n̄_∞)² P_∞(n) = N f_∞(1 − f_∞) (see Prob. 2.6). Thus, at equilibrium, the coefficient of variation, CV, is given by

\[ \mathrm{CV} = \frac{\sigma_\infty}{\bar{n}_\infty} = \sqrt{\frac{1-f_\infty}{N f_\infty}}, \tag{2.80} \]

which is inversely proportional to the square root of the number of channels.

Box 2.8: Gillespie algorithm
Consider a continuous-time Markov process with discrete states S₁, …, S_N and transition rates between states given by the N × N matrix Q. Denote the state of the system at time t by S(t), and suppose that S(t) = S_i. The Gillespie algorithm [339] proceeds by noting that the time to the first transition from S_i (commonly referred to as the dwell time, waiting time, or holding time), here denoted τ_i, follows an exponential distribution:

\[ P_i(\tau_i) = \lambda_i e^{-\lambda_i \tau_i}, \qquad \lambda_i = \sum_{j \neq i} Q_{ij}, \tag{2.81} \]

where λ_i is the sum of the ith row of Q excluding the diagonal element. To see this, note that the probability of no transition taking place from state S_i over a short interval δτ ≪ 1 is equal to 1 − λ_i δτ. Thus,

\[ P_i(\tau_i + \delta\tau) = P_i(\tau_i)\left(1 - \lambda_i \delta\tau\right), \tag{2.82} \]

where δτ is taken to be sufficiently small that the probability of more than one transition occurring within the timestep is negligible. The distribution (2.81) is then recovered by Taylor expanding the left-hand side of (2.82), solving the resulting ODE, and enforcing the normalisation condition ∫₀^∞ P_i(τ) dτ = 1. The observation that the distribution of τ_i is given by (2.81) may also be deduced by noting that the Markov process possesses the memoryless property:

\[ \mathrm{Prob}(\tau_i > t+s \mid \tau_i > s) = \mathrm{Prob}(\tau_i > t), \tag{2.83} \]

and that the exponential distribution is the only distribution for real-valued continuous random variables that satisfies this condition. Given that a transition from S_i occurs, the new state can be computed by sampling from the probability mass function:


\[ \mathrm{Prob}(S(t+\tau_i) = S_j \mid S([t, t+\tau_i)) = S_i,\; S(t+\tau_i) \neq S_i) = \frac{Q_{ij}}{\lambda_i}, \tag{2.84} \]

where the denominator ensures that the distribution is normalised and where j ≠ i. Thus, the overall probability that a transition from S_i to S_j for j ≠ i occurs immediately after a time τ_i is given by

\[ \mathrm{Prob}(S(t+\tau_i) = S_j,\; S([t, t+\tau_i)) = S_i) = Q_{ij} e^{-\lambda_i \tau_i}. \tag{2.85} \]

The Gillespie algorithm uses the distributions (2.81) and (2.84) to efficiently and exactly sample paths of the Markov process. For this reason, it is sometimes referred to as the exact stochastic simulation algorithm. Each update in the algorithm takes the following steps:
1. Two independent random numbers are drawn from the uniform distribution defined over the unit interval, i.e., r₁, r₂ ∼ U(0, 1).
2. The timestep is defined via τ_i = (1/λ_i) ln(1/r₁). This samples from distribution (2.81).
3. The transition is then chosen as the smallest j such that r₂ < Σ_{k=1}^{j} Q_{ik}(1 − δ_{ik})/λ_i. This samples from distribution (2.84).
4. The state of the system is updated to S(t + τ_i) = S_j.
These steps are repeated as many times as necessary to complete the simulation. Note that because the timestep for the simulation is selected automatically, one must take care to ensure the transition probabilities are constant over the interval [t, t + τ_i). For a relatively recent review of the algorithm and advances in numerical methods for its implementation, see the paper by Gillespie [340].
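The steps of Box 2.8 can be sketched for the birth–death chain (2.71): with n channels open, the only two transitions are opening at rate (N − n)k⁺ and closing at rate nk⁻. The rate values and run length below are arbitrary choices; the long-run time average of n is compared with the steady-state mean N f_∞ from (2.79):

```python
import math
import random

# Gillespie simulation of the two-state channel ensemble (2.71).
# With n open channels: opening rate (N-n)k+, closing rate n k-.

def gillespie_two_state(N, kp, km, T, n0=0, seed=0):
    rng = random.Random(seed)
    t, n = 0.0, n0
    weighted = 0.0                          # time-weighted sum of n
    while t < T:
        w_open, w_close = (N - n) * kp, n * km
        lam = w_open + w_close              # total escape rate from S_n
        dwell = -math.log(1.0 - rng.random()) / lam   # exponential dwell time
        dwell = min(dwell, T - t)           # truncate the final interval
        weighted += n * dwell
        t += dwell
        if rng.random() < w_open / lam:     # pick the transition: prob = rate/lam
            n += 1
        else:
            n -= 1
    return weighted / T                     # time average of n

N, kp, km = 50, 1.0, 1.0
avg = gillespie_two_state(N, kp, km, T=2000.0)
assert abs(avg - N * kp / (kp + km)) < 1.0  # close to N f_inf = 25
```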

2.8.3 Large numbers of channels

For large N, the master equation (2.73) can be approximated using a Kramers–Moyal expansion, along the lines developed by Fox and Lu [313] for the sodium and potassium channels of the Hodgkin–Huxley model. Following a more recent exposition of these techniques by Bressloff [105], it is convenient to introduce a rescaled variable f = n/N and transition rates NΩ_±(f) = ω_±(Nf) so that (2.73) can be written in the form

\[ \frac{\mathrm{d}}{\mathrm{d}t} p(f,t) = N\left[\Omega_+(f - 1/N)\, p(f-1/N, t) + \Omega_-(f+1/N)\, p(f+1/N, t) - \left(\Omega_+(f) + \Omega_-(f)\right) p(f,t)\right], \tag{2.86} \]

where p(f, t) = P_{Nf}(t). Treating f ∈ [0, 1] as a continuous variable and Taylor expanding terms on the right of (2.86) to second order in N⁻¹ leads to the Fokker–Planck equation (see Appendix A, Sec. A.10 for further discussion of the Fokker–Planck equation)


\[ \frac{\partial p(f,t)}{\partial t} = -\frac{\partial J}{\partial f}, \qquad J(f,t) = A(f)\,p(f,t) - \frac{1}{2N}\frac{\partial}{\partial f}\left[B(f)\,p(f,t)\right], \tag{2.87} \]

with A(f) = k⁺ − (k⁺ + k⁻)f and B(f) = k⁺ + (k⁻ − k⁺)f, and J identified as a probability flux. The Fokker–Planck equation (2.87) is supplemented with no-flux conditions J(0, t) = 0 = J(1, t), and a normalisation condition ∫₀¹ p(f, t) df = 1. From the general theory of stochastic processes [331], the solution of the Fokker–Planck equation (2.87) determines the probability density for a corresponding stochastic process, F(t), which evolves according to the stochastic differential equation (SDE) or Langevin equation

\[ \mathrm{d}F = A(F)\,\mathrm{d}t + \frac{1}{\sqrt{N}}\, b(F)\,\mathrm{d}W(t), \qquad b(f) = \sqrt{B(f)}. \tag{2.88} \]

Box 2.9: Properties of the Dirac delta function
The Dirac delta function is a distribution with the following properties:

Normalisation:
\[ \int_{-\infty}^{\infty} \delta(x)\, \mathrm{d}x = 1. \tag{2.89} \]

Integral:
\[ \int_{-\infty}^{\kappa} \delta(x)\, \mathrm{d}x = \Theta(\kappa), \tag{2.90} \]
where Θ is the Heaviside step function.

Sifting property:
\[ \int_{-\infty}^{\infty} f(x)\,\delta(x-a)\, \mathrm{d}x = f(a). \tag{2.91} \]

Scaling and symmetry:
\[ \int_{-\infty}^{\infty} \delta(\alpha x)\, \mathrm{d}x = \int_{-\infty}^{\infty} \delta(u)\, \frac{\mathrm{d}u}{|\alpha|} = \frac{1}{|\alpha|}. \tag{2.92} \]

Composition with a function:
\[ \int_{-\infty}^{\infty} f(x)\,\delta(g(x))\, \mathrm{d}x = \sum_i \frac{f(x_i)}{|g'(x_i)|}, \tag{2.93} \]
where the sum extends over all roots of g(x), denoted x_i (which are assumed to be simple).

Integral representation:
\[ \delta(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{\mathrm{i}kx}\, \mathrm{d}k. \tag{2.94} \]


Dirac comb:
\[ \sum_{n \in \mathbb{Z}} \delta(x - 2\pi n) = \frac{1}{2\pi} \sum_{p \in \mathbb{Z}} e^{\mathrm{i}px}. \tag{2.95} \]

Distributional derivative:
\[ \int_{-\infty}^{\infty} f(x)\, \delta^{(k)}(x-a)\, \mathrm{d}x = (-1)^k f^{(k)}(a), \tag{2.96} \]

where the bracketed superscript denotes the kth derivative with respect to x.

Here W(t) denotes a Wiener process (see Appendix A) with dW(t) distributed according to a Gaussian process with mean and covariance

\[ \langle \mathrm{d}W(t) \rangle = 0, \qquad \langle \mathrm{d}W(t)\,\mathrm{d}W(s) \rangle = \delta(t-s)\,\mathrm{d}t\,\mathrm{d}s, \tag{2.97} \]

and the brackets indicate an average over realisations of the stochastic process. Here, δ is a Dirac delta function (see Box 2.9). In the limit N → ∞, the variable F evolves according to the mass action equation (2.9) as expected, and so it can be interpreted as the fraction of open channels (in the large N limit). Hence, at steady state and for large N, ⟨F⟩ = f_∞. Thus, the SDE (2.88) describes a stochastic path in phase space with Gaussian fluctuations of order 1/√N about a deterministic trajectory. One is thus led to a noisy form of the deterministic equation (2.9) that can be written in the physicist’s form (ignoring the fact that a Wiener process is nowhere differentiable) as

\[ \frac{\mathrm{d}f_o}{\mathrm{d}t} = \frac{f_\infty - f_o}{\tau} + \xi(t), \tag{2.98} \]

where f_o is a random variable and ξ is a random process defined by ξ = √(γ(f_o, V)) Ẇ, with ⟨ξ⟩ = 0, ⟨ξ(t)ξ(s)⟩ = γ(f_o, V)δ(t − s), and

\[ \gamma(f_o, V) = \frac{k^+(V)(1-f_o) + k^-(V)f_o}{N}, \tag{2.99} \]

where the V argument of the transition rates has been explicitly included to emphasise their dependence on voltage. This provides a natural way to see the effects of channel noise in Hodgkin–Huxley style models, say by modifying the gating dynamics in (2.20) according to Ẋ → Ẋ + ξ_X, with ξ_X a zero mean random process with autocorrelation γ(X, V) = [X(1 − 2X_∞(V)) + X_∞(V)]/(Nτ_X(V)) (using X_∞ = k⁺τ_X and τ_X = 1/(k⁺ + k⁻)). The numerical simulation of the resulting SDE model can be performed using any one of a number of extensions of well-known integration methods for deterministic systems to compute stochastic integrals; see Appendix A.9. Example stochastic simulations for various values of N in the stochastic Morris–Lecar model are shown in Fig. 2.21, where it can be seen that as the number of channels increases, the spontaneous action potentials induced by stochastic gating are eliminated. Of course, it is worth remembering that when N is not large, one should return to a study of the underlying Markov model rather than a Langevin description.
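A minimal Euler–Maruyama sketch of (2.98)–(2.99) at fixed voltage (an assumption made here so that f_∞, τ and γ are simple constants; the rate values are arbitrary). The stationary mean and variance should be close to f_∞ and f_∞(1 − f_∞)/N:

```python
import math
import random

# Euler-Maruyama integration of the noisy gating equation (2.98) with
# noise strength gamma from (2.99), at fixed (clamped) voltage.

def simulate(N_ch, kp, km, T, dt, seed=1):
    rng = random.Random(seed)
    f_inf, tau = kp / (kp + km), 1.0 / (kp + km)
    f = f_inf
    vals = []
    for _ in range(int(T / dt)):
        gamma = (kp * (1.0 - f) + km * f) / N_ch        # (2.99)
        f += dt * (f_inf - f) / tau \
            + math.sqrt(max(gamma, 0.0) * dt) * rng.gauss(0.0, 1.0)
        f = min(max(f, 0.0), 1.0)                       # keep fraction in [0, 1]
        vals.append(f)
    return vals

vals = simulate(N_ch=100, kp=1.0, km=1.0, T=500.0, dt=0.01)
mean = sum(vals) / len(vals)
var = sum((v - mean) ** 2 for v in vals) / len(vals)
assert abs(mean - 0.5) < 0.02     # mean ~ f_inf = 1/2
assert 0.001 < var < 0.005        # var ~ f_inf(1 - f_inf)/N = 0.0025
```

For k⁺ = k⁻ the noise strength γ is in fact constant, so this is an Ornstein–Uhlenbeck process and the stationary variance γτ/2 = f_∞(1 − f_∞)/N is exact.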


Fig. 2.21 Simulations of the stochastic Morris–Lecar model using the Langevin approximation for the fraction of open potassium channels. As the number of channels is increased (N = 25, 50, 100, 500, 1000 from top to bottom), spontaneous action potentials induced by stochastic gating are eliminated. The parameters are as in Sec. 2.5 with I = 50 μA cm⁻².

2.8.4 Channels with more than two states

Given the importance of the sodium and potassium channels in the Hodgkin–Huxley model, we describe here their Markov models [420], though for a more comprehensive survey of channel models we refer the reader to the IonChannelGenealogy project [723]. Assuming a simple Markov process for four identical gates with an opening rate α_n and a closing rate β_n, the kinetic scheme for the potassium channel is given by

\[ n_0 \;\underset{\beta_n}{\overset{4\alpha_n}{\rightleftharpoons}}\; n_1 \;\underset{2\beta_n}{\overset{3\alpha_n}{\rightleftharpoons}}\; n_2 \;\underset{3\beta_n}{\overset{2\alpha_n}{\rightleftharpoons}}\; n_3 \;\underset{4\beta_n}{\overset{\alpha_n}{\rightleftharpoons}}\; n_4, \tag{2.100} \]

so that each channel has five possible states. A channel is open when it is in the state n₄. The Markov kinetic scheme for the sodium channel is given by

\[ \begin{array}{ccccccc}
m_0 h_1 & \underset{\beta_m}{\overset{3\alpha_m}{\rightleftharpoons}} & m_1 h_1 & \underset{2\beta_m}{\overset{2\alpha_m}{\rightleftharpoons}} & m_2 h_1 & \underset{3\beta_m}{\overset{\alpha_m}{\rightleftharpoons}} & m_3 h_1 \\
\alpha_h \uparrow \downarrow \beta_h & & \alpha_h \uparrow \downarrow \beta_h & & \alpha_h \uparrow \downarrow \beta_h & & \alpha_h \uparrow \downarrow \beta_h \\
m_0 h_0 & \underset{\beta_m}{\overset{3\alpha_m}{\rightleftharpoons}} & m_1 h_0 & \underset{2\beta_m}{\overset{2\alpha_m}{\rightleftharpoons}} & m_2 h_0 & \underset{3\beta_m}{\overset{\alpha_m}{\rightleftharpoons}} & m_3 h_0
\end{array} \tag{2.101} \]

In this description, each channel has eight possible states, and m_i h_j denotes the number of channels that are in the state m_i h_j. An individual channel is conducting when it is in the state m₃h₁ but non-conducting in all other states. All of the voltage-dependent rate constants α_X and β_X, X ∈ {m, n, h}, described above are those for the deterministic Hodgkin–Huxley model given in Sec. 2.3.3. The Markov process for these channels may be numerically evolved using the Gillespie method, as described in Box 2.8. It is interesting to note that the assumption of channel independence has been questioned as a valid starting point for the construction of gating variables in the Hodgkin–Huxley model, and that the cooperative activation of voltage-gated sodium channels may provide a better model for describing action potential initiation [659]. This is easily modelled by assuming that each channel is coupled to K neighbouring channels. The opening of any of these neighbours is then assumed to cause a shift of the instantaneous activation curve of the channel by −a towards lower membrane potentials. The activation and deactivation rates of channel i are then given by α_i^A(V) = α^A(V + Σ_j a_{ij}χ_j) and β_i^A(V) = β^A(V + Σ_j a_{ij}χ_j), where a_{ij} = a if channels i and j are coupled and a_{ij} = 0 otherwise, and χ_j is a binary single channel state variable labelling channels in the open state, such that χ_j = 1 if channel j is open and χ_j = 0 otherwise. This simple form of cooperativity leads to a more rapid initiation and a variable onset potential of the action potential, although it has been questioned whether this actually occurs in cortical neurons [616].
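The independent-gate structure of (2.100) implies a binomial steady state: with constant rates the probability of the open state n₄ is (α_n/(α_n + β_n))⁴. A sketch that builds the generator matrix Q̃ for the five-state chain and checks this against its null eigenvector (the rate values below are arbitrary; in practice α_n and β_n depend on V):

```python
import numpy as np

# Generator matrix for the five-state potassium scheme (2.100), with the
# column convention dP/dt = Q~ P used in Sec. 2.8.2.

def k_channel_generator(a, b):
    Q = np.zeros((5, 5))
    for j in range(4):
        Q[j + 1, j] = (4 - j) * a       # j open gates -> j+1, rate (4-j)*alpha
        Q[j, j + 1] = (j + 1) * b       # j+1 open gates -> j, rate (j+1)*beta
    np.fill_diagonal(Q, -Q.sum(axis=0)) # columns sum to zero
    return Q

a, b = 0.3, 0.7
Q = k_channel_generator(a, b)

# Solve Q~ P = 0 with sum(P) = 1: replace one balance equation
# with the normalisation condition.
A = np.vstack([Q[:-1, :], np.ones(5)])
P = np.linalg.solve(A, np.array([0.0, 0.0, 0.0, 0.0, 1.0]))

assert abs(P[4] - (a / (a + b)) ** 4) < 1e-9   # open-state probability
```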

Remarks

This chapter treats the small scale of neuronal membrane and ion channels, covering both deterministic and stochastic electrophysiological models. By way of ‘Boxes’ we have given pointers to some tools from dynamical systems theory that are used here and in later chapters. However, for those new to this field we can do no better than direct you to the book by Strogatz [836], with its emphasis on real-world applications, and the book by Kuznetsov [541] for a systematic overview of bifurcation theory. Although we have a preference for pen-and-paper calculations, this chapter is relatively light in this regard, focusing more on modelling and insight via numerical bifurcation analysis of the deterministic models. A very nice free tool for exploring the latter is XPPAUT [275], and those with a MATLAB license might be interested in MatCont [248] and CoCo [226]. One tool that we have side-lined here is geometric singular perturbation theory. This can be particularly useful in decomposing and understanding systems that have a separation of timescales, as is often the case for nonlinear ODE models of excitable neurons. The main reason for this is that in later chapters we emphasise models that have exact solutions, so that a fast–slow timescale separation is not required. For those wishing to learn more about this topic, we can highly recommend the books by Kuehn [535] and Wechselberger [925]. Regarding the stochastic models that we have covered, we feel that many of those in our intended audience will be less familiar with stochastic, as opposed to real, analysis. Appendix A is an attempt to redress this balance, though it is no substitute for a more in-depth study. In a neuroscience context, the book by Tuckwell [885] is a good starting point before tackling the edited volume of Laing


and Lord [548], and for a more recent and general overview of stochastic processes in cell biology we recommend the book by Bressloff [105]. One might well ask what to do with the list of single neuron models provided in this chapter. Our answer is that we should build and study networks from them and look at the consequences of ionic currents on network states. Subsequent chapters will help develop the tools to do this, and by way of a somewhat biased pair of examples let us mention the application of this methodology to understanding the role of the rebound currents I_T and I_h (from Box 2.7) in shaping tissue-level patterns of synaptic activity, as described in [90, 639]. On a final note, the presentation in this chapter is minimal with regard to fundamental biophysics, and the reader may benefit from studying a text such as the one by Koch [524] to gain a better appreciation of our starting point.

Problems

2.1. Consider the Hodgkin–Huxley model given by (2.20).
(i) Show that physiologically significant solutions remain in the closed bounded set

\[ \{(V, m, n, h) \mid V_- - r \le V \le V_+ + r,\; 0 \le m, n, h \le 1\}, \tag{2.102} \]

for some positive number r, where V₋ = min(V_{Na}, V_K, V_L) and V₊ = max(V_{Na}, V_K, V_L).
(ii) Show that under voltage clamp with m(0) and h_∞ small then

\[ g_K(t) = \bar{g}_K \left[ n(0) - (n(0) - n_\infty)\left(1 - e^{-t/\tau_n}\right) \right]^4, \tag{2.103} \]
\[ g_{Na}(t) = \bar{g}_{Na}\, m_\infty^3\, h(0)\left(1 - e^{-t/\tau_m}\right)^3 e^{-t/\tau_h}. \tag{2.104} \]

(iii) Consider an abrupt positive change in the voltage clamp at t = t₀ and show that

\[ g_K(t) = \left[ (g_K^\infty)^{1/4} - \left( (g_K^\infty)^{1/4} - (g_K^0)^{1/4} \right) e^{-t/\tau_n} \right]^4, \qquad t > t_0, \tag{2.105} \]

where g_K⁰ and g_K^∞ are, respectively, the values that g_K takes before and after the voltage step. Note that (2.105) was used by Hodgkin and Huxley to fit n_∞ and τ_n to data, and a similar approach using (2.104) was used to fit X_∞ and τ_X for X ∈ {m, h}.

2.2. Consider the Hodgkin–Huxley model given by (2.20) with a membrane potential that is initially at V₀, and then abruptly switched to the value V₁. Show that the fraction of sodium channels in the conducting state is given by

\[ m^3 h = \left[ m_\infty(V_1) - (m_\infty(V_1) - m_\infty(V_0))\, e^{-t/\tau_m(V_1)} \right]^3 \left[ h_\infty(V_1) - (h_\infty(V_1) - h_\infty(V_0))\, e^{-t/\tau_h(V_1)} \right]. \tag{2.106} \]


Hence, show that the time dependence of m³h is a sum of seven exponentially decaying terms with associated time constants τ_m, τ_m/2, τ_m/3, τ_h, τ_mτ_h/(τ_m + τ_h), τ_mτ_h/(τ_m + 2τ_h), and τ_mτ_h/(τ_m + 3τ_h).

2.3. Consider the Hodgkin–Huxley model in a regime close to the initiation of an action potential where cell excitability is dominated by voltage-gated sodium channels. Assume a fit of the sodium activation function such that m³_∞(V) = P_∞(V), where P_∞(V) = (1 + exp(−(V − V_{1/2})/k))⁻¹ is a Boltzmann function with half activation V_{1/2} (i.e., P_∞(V_{1/2}) = 1/2) and activation slope k (i.e., P′_∞(V_{1/2}) = 1/(4k)).
(i) Under the assumption that sodium activation is instantaneous, show that

\[ I_{Na} = g_{Na}\, h\, \frac{V - V_{Na}}{1 + e^{-(V - V_{1/2})/k}}. \tag{2.107} \]

(ii) Further assuming that e^{−(V−V_{1/2})/k} ≫ 1, and that V − V_{Na} has small variation before the initiation of an action potential, show that

\[ I_{Na} = g_{Na}\, e^{(V-\theta)/k}, \tag{2.108} \]

where θ is the threshold function

\[ \theta = V_{1/2} - k \ln\left[h\,(V_{1/2} - V_{Na})\right]. \tag{2.109} \]

(iii) Using the dynamics for the inactivation variable h, show that

\[ \frac{\mathrm{d}\theta}{\mathrm{d}t} = -\frac{k}{h}\, \frac{h_\infty(V) - h}{\tau_h(V)} = \frac{k\left(1 - e^{(\theta - \theta_\infty(V))/k}\right)}{\tau_h(V)}, \tag{2.110} \]

where θ_∞(V) = V_{1/2} − k ln h_∞(V) − k ln(V_{1/2} − V_{Na}).
(iv) Assuming that θ remains close to its steady-state value (|θ − θ_∞(V)| ≪ k), show that the threshold dynamics simplifies to

\[ \tau_\theta(V)\, \frac{\mathrm{d}\theta}{\mathrm{d}t} = \theta_\infty(V) - \theta, \qquad \tau_\theta(V) = \tau_h(V). \tag{2.111} \]

For a further discussion of threshold equations for action potential initiation and their use in developing simple integrate-and-fire models (as discussed in Chap. 3), see [722].

2.4. Consider the circuit diagram shown in Fig. 2.20 with r = 0 and an injected current I₀ sin(Ωt).
(i) Show that the current balance equations yield a pair of coupled linear ODEs for the voltage V and inductive current I as

\[ C\frac{\mathrm{d}V}{\mathrm{d}t} = -\frac{V}{R} - I + I_0\sin(\Omega t), \qquad L\frac{\mathrm{d}I}{\mathrm{d}t} = V. \tag{2.112} \]


(ii) Show that the system may be written in terms of a vector X = (V, I) and the membrane time constant τ = RC as Ẋ = AX + b(t), where

\[ A = \begin{bmatrix} -1/\tau & -1/C \\ 1/L & 0 \end{bmatrix}, \qquad b(t) = \begin{bmatrix} I_0\sin(\Omega t) \\ 0 \end{bmatrix}. \tag{2.113} \]

(iii) Show that the general solution may be written in terms of a matrix exponential G(t) = exp(At) as

\[ X(t) = G(t)X(0) + \int_0^t \mathrm{d}s\, G(t-s)\, b(s). \tag{2.114} \]

(iv) Show that G(t) can be written in the form G(t) = P exp(Λt)P⁻¹, where Λ = diag(λ₊, λ₋), with

\[ \lambda_\pm = \frac{-1/\tau \pm \sqrt{1/\tau^2 - 4/(CL)}}{2}, \qquad P = \begin{bmatrix} L\lambda_+ & L\lambda_- \\ 1 & 1 \end{bmatrix}. \tag{2.115} \]

(v) For initial data (V(0), I(0)) = (0, 0), show that

\[ V(t) = I_0 \int_0^t \mathrm{d}s \left[ c_+ e^{\lambda_+(t-s)} + c_- e^{\lambda_-(t-s)} \right] \sin(\Omega s), \qquad c_\pm = \pm\frac{\lambda_\pm}{\lambda_+ - \lambda_-}. \tag{2.116} \]

For the case that 1/τ² − 4/(CL) < 0, decompose λ± = μ ± iω and show that (2.116) can be written as

\[ V(t) = 2I_0 \int_0^t \mathrm{d}s\, e^{\mu(t-s)} \left[ c_R \cos(\omega(t-s)) - c_I \sin(\omega(t-s)) \right] \sin(\Omega s), \tag{2.117} \]

where c_R = Re c₊ and c_I = Im c₊. For large time, show that this reduces to

\[ V(t) = 2I_0 \sqrt{\frac{(\mu c_R + \omega c_I)^2 + c_R^2 \omega^2}{\left[\mu^2 + (\omega+\Omega)^2\right]\left[\mu^2 + (\omega-\Omega)^2\right]}}\; \sin(\Omega t + \phi), \tag{2.118} \]

for some constant phase-shift φ.
(vi) Show that as μ → 0 the voltage has a maximum amplitude when Ω = 1/√(LC).

2.5. Consider the continuous-time Markov model from Sec. 2.8 described by the system of ODEs:

\[ \frac{\mathrm{d}P}{\mathrm{d}t} = \tilde{Q}P, \qquad \tilde{Q} \in \mathbb{R}^{(N+1)\times(N+1)}, \quad P \in \mathbb{R}^{N+1}, \tag{2.119} \]

where Q̃_{ij} = Q_{ij} for i ≠ j and −Q̃_{ii} = Σ_{j≠i} Q_{ji}.
(i) Show that all the eigenvalues of Q̃ have non-positive real parts.

(2.119)

58

2 Single neuron models

have pairwise different eigenvalues λi , i = 0, . . . , N . Show that the (ii) Let Q solution of the ODE system converges towards the equilibrium solution P∞ as

P(t) = P∞ +

N

∑ vi wi eλ (t−t ) i

0

P(t0 ),

(2.120)

i=1

respectively. where vi and wi are left and right eigenvectors of Q, 2.6. Consider the master equation for an ensemble of two-state ion channels given by equation (2.73). N (i) Show that the mean number of open channels, n(t) = ∑n=0 n P(n, t), satisfies the differential equation

dn = k + (N − n) − k − n. dt

(2.121)

N (ii) Show that the variance of open channels, σ 2 = ∑n=0 (n − n)2 P(n, t), satisfies the differential equation

dσ 2 = −2(k + + k − )σ 2 + k + (N − n) + k − n. dt (iii) Show that the steady-state coefficient of variation is given by k− 1 . CV = √ N k+

(2.122)

(2.123)

2.7. Consider a model of a voltage-gated sodium channel with six states (two closed, two open, two inactivated) and transitions as described by the state transition diagram:

Using the law of mass action show that the dynamics of the different states of the channels are described by the following set of coupled ordinary differential equations:

\[ \begin{aligned}
\frac{\mathrm{d}C_1}{\mathrm{d}t} &= \alpha_{I_1 C_1} I_1 + \alpha_{C_2 C_1} C_2 - (\alpha_{C_1 C_2} + \alpha_{C_1 I_1}) C_1, \\
\frac{\mathrm{d}C_2}{\mathrm{d}t} &= \alpha_{C_1 C_2} C_1 + \alpha_{O_1 C_2} O_1 + \alpha_{O_2 C_2} O_2 - (\alpha_{C_2 C_1} + \alpha_{C_2 O_1} + \alpha_{C_2 O_2}) C_2, \\
\frac{\mathrm{d}O_1}{\mathrm{d}t} &= \alpha_{C_2 O_1} C_2 + \alpha_{I_1 O_1} I_1 - (\alpha_{O_1 C_2} + \alpha_{O_1 I_1}) O_1, \\
\frac{\mathrm{d}O_2}{\mathrm{d}t} &= \alpha_{C_2 O_2} C_2 - \alpha_{O_2 C_2} O_2, \\
\frac{\mathrm{d}I_1}{\mathrm{d}t} &= \alpha_{I_2 I_1} I_2 + \alpha_{C_1 I_1} C_1 + \alpha_{O_1 I_1} O_1 - (\alpha_{I_1 C_1} + \alpha_{I_1 I_2} + \alpha_{I_1 O_1}) I_1, \\
\frac{\mathrm{d}I_2}{\mathrm{d}t} &= \alpha_{I_1 I_2} I_1 - \alpha_{I_2 I_1} I_2,
\end{aligned} \tag{2.124} \]

and further reduce these equations using the law of mass conservation (O₁ + O₂ + I₁ + I₂ + C₁ + C₂ = 1).
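Mass-action systems such as (2.124) conserve total probability by construction: every term α_{XY}X appears once as a loss from X and once as a gain to Y. A short forward-Euler sketch (with hypothetical rate values, since the diagram's rates are not specified numerically) that checks this conservation:

```python
# Forward-Euler integration of the six-state system (2.124) with
# hypothetical rate values; the total O1+O2+I1+I2+C1+C2 is conserved.

rates = {  # (source, destination): rate -- hypothetical values
    ("C1", "C2"): 2.0, ("C2", "C1"): 1.0,
    ("C2", "O1"): 3.0, ("O1", "C2"): 0.5,
    ("C2", "O2"): 0.2, ("O2", "C2"): 0.4,
    ("C1", "I1"): 0.1, ("I1", "C1"): 0.3,
    ("O1", "I1"): 1.5, ("I1", "O1"): 0.2,
    ("I1", "I2"): 0.6, ("I2", "I1"): 0.1,
}
state = {"C1": 1.0, "C2": 0.0, "O1": 0.0, "O2": 0.0, "I1": 0.0, "I2": 0.0}

dt = 1e-3
for _ in range(20000):                      # integrate to t = 20
    flux = {s: 0.0 for s in state}
    for (src, dst), a in rates.items():     # mass action: rate * occupancy
        flux[src] -= a * state[src]
        flux[dst] += a * state[src]
    for s in state:
        state[s] += dt * flux[s]

assert abs(sum(state.values()) - 1.0) < 1e-9   # mass conservation
assert all(v >= 0.0 for v in state.values())
```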

Chapter 3

Phenomenological models and their analysis

3.1 Introduction

The role of mathematical neuroscience, of which neurodynamics is a subset, is to explore the mechanisms underlying the behaviour observed in real neural tissues using mathematics as the primary tool. Mechanistic models build from the ground up, aiming to describe specific biophysical processes in such a way that the emergent behaviour matches that seen in experiments. For example, the Hodgkin–Huxley model highlights the contribution of ionic currents to action potential generation. Such models are useful when trying to identify precise pathways that underlie specific activity patterns. An alternative approach is to identify mathematical structures which can give rise to the desired neural response in a phenomenological sense, and then use these to direct further studies of potential biophysical routes to the same results. Phenomenological models aim only to capture the essence of the behaviour and, as such, are typically more mathematically tractable than their mechanistic counterparts. Their reduced complexity often sees them being used to explore new phenomena in a mathematical sense even when the underlying biophysics are known. In spite of this, these models often present interesting challenges of their own, as demonstrated throughout this chapter. The previous chapter showed how complex models can be simplified by making certain assumptions, or taking certain limits, as done in the reduction of the Hodgkin–Huxley model to a planar system in Sec. 2.4. When following this approach, the simplified model still arises from one derived from the known biophysics of the cell, in contrast to the phenomenological approach described in this chapter, which forgoes such derivation.

3.2 The FitzHugh–Nagumo model

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. Coombes and K. C. A. Wedgwood, Neurodynamics, Texts in Applied Mathematics 75, https://doi.org/10.1007/978-3-031-21916-0_3

The reduction of the Hodgkin–Huxley model to a planar system in Sec. 2.4 showed that the simplified version was still able to capture much of the behaviour produced by the original, but is more amenable to analysis since the full phase-space structure can be studied. Richard FitzHugh took a different approach in an attempt to isolate specific phenomena observed in the Hodgkin–Huxley model [461]. In particular, FitzHugh modified the van der Pol equations, which themselves were developed to describe oscillations in electrical circuits involving vacuum tubes. FitzHugh referred to these equations as the Bonhoeffer–van der Pol equations [305]. With the invention of the tunnel diode by Esaki in 1957 [284], constructing van der Pol oscillators from electrical circuits became much simpler than with vacuum tubes, and in the following year Nagumo constructed the electrical circuit equivalent to FitzHugh’s equations [652]. The equations then became known as the FitzHugh–Nagumo equations. One of the great advantages of the FitzHugh–Nagumo model over the Hodgkin–Huxley model was the immense reduction in the computational power required to integrate it numerically, which was particularly important given the computational resources available at the time. To this day, the FitzHugh–Nagumo model remains the prototypical example of a simple action potential generating model and serves as a testing ground for mathematical studies of the nervous system. The equations of the FitzHugh–Nagumo model are given by

\[ \varepsilon \dot{v} = f(v) - w + I \equiv f_1(v,w), \qquad \dot{w} = \beta v - w \equiv f_2(v,w), \qquad f(v) = v(a-v)(v-1), \tag{3.1} \]

where 0 < a < 1, with β, ε > 0 and I ∈ ℝ. Here, v is a proxy for membrane potential, and w plays a role analogous to, but simpler than, the gating variables in the Hodgkin–Huxley model (2.20) (and for mathematical convenience is allowed to take negative values, though it can be shifted back to the physiological range [0, 1]). Since the model is planar, geometric arguments can be used to explain observed phenomena. The phase plane for the FitzHugh–Nagumo model for typical parameters is shown in Fig. 3.1. Like the Hodgkin–Huxley model, the FitzHugh–Nagumo model does not possess a well-defined firing threshold that can be specified in terms of v alone. The model also provides explanations for many of the phenomena presented in Sec. 2.6, such as depolarisation block, post-inhibitory rebound, spike-frequency adaptation, and sub-threshold responses. As in FitzHugh’s original analysis, this discussion will be restricted to the case where β is such that only one fixed point exists, which is guaranteed provided β > (a² − a + 1)/3. The fixed point (v̄, w̄) is given by the real solution of the cubic equation

\[ f(\bar{v}) - \beta \bar{v} + I = 0, \qquad \bar{w} = \beta \bar{v}. \tag{3.2} \]

The Jacobian evaluated at the fixed point is given by

\[ L = \begin{bmatrix} f'(\bar{v})/\varepsilon & -1/\varepsilon \\ \beta & -1 \end{bmatrix}, \qquad f'(v) = -a + 2(1+a)v - 3v^2. \tag{3.3} \]
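The oscillation shown in Fig. 3.1 can be reproduced with a few lines of forward-Euler code (a sketch, using the figure's parameters a = 1, β = 2, ε = 0.01, I = 1.1; the small dt reflects the fast timescale ε):

```python
# Forward-Euler simulation of the FitzHugh-Nagumo model (3.1) with the
# parameters of Fig. 3.1, for which the unique fixed point is unstable
# and a stable limit cycle exists. The relaxation oscillation carries v
# between the two outer branches of the cubic nullcline.

a, beta, eps, I = 1.0, 2.0, 0.01, 1.1

def f(v):
    return v * (a - v) * (v - 1.0)

v, w = 0.0, 0.0
dt, T = 1e-4, 40.0
vmax, vmin = -10.0, 10.0
for k in range(int(T / dt)):
    v += dt * (f(v) - w + I) / eps
    w += dt * (beta * v - w)
    if k * dt > 10.0:                      # discard the transient
        vmax, vmin = max(vmax, v), min(vmin, v)

assert vmax > 1.0 and vmin < 0.3           # large-amplitude oscillation in v
```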

For I = 0, (v, w) = (0, 0) and the fixed point is stable as can be determined using the results in Box 2.3 on page 20 noting that Tr L = f (0)/ε − 1 = −a/ε − 1 < 0


Fig. 3.1 Phase portrait of the FitzHugh–Nagumo model with parameters a = 1, β = 2 and I = 1.1. The nullclines are plotted as dashed grey lines. There is a single fixed point, indicated by the grey marker, which is unstable. The Poincaré–Bendixson theorem predicts the existence of a stable periodic orbit, depicted by the black solid curve.
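The qualitative picture of Fig. 3.1 can be checked numerically. The sketch below is our own construction, not the book's: it integrates the model with a classical RK4 scheme and counts upward crossings of an arbitrary level v = 0.7, one per relaxation oscillation.

```python
a, beta, eps, I = 1.0, 2.0, 0.01, 1.1   # parameter values of Fig. 3.1

def rhs(v, w):
    """Right-hand side of (3.1): eps*v' = f(v) - w + I, w' = beta*v - w."""
    f = v * (a - v) * (v - 1.0)
    return (f - w + I) / eps, beta * v - w

def count_crossings(v=0.0, w=0.0, dt=2e-4, T=3.0, level=0.7):
    """Classical RK4 integration; count upward crossings of v = level."""
    crossings, t = 0, 0.0
    while t < T:
        k1v, k1w = rhs(v, w)
        k2v, k2w = rhs(v + 0.5 * dt * k1v, w + 0.5 * dt * k1w)
        k3v, k3w = rhs(v + 0.5 * dt * k2v, w + 0.5 * dt * k2w)
        k4v, k4w = rhs(v + dt * k3v, w + dt * k3w)
        v_new = v + dt * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
        w_new = w + dt * (k1w + 2.0 * k2w + 2.0 * k3w + k4w) / 6.0
        if v < level <= v_new:
            crossings += 1
        v, w, t = v_new, w_new, t + dt
    return crossings

n_cross = count_crossings()
print(n_cross)   # repeated crossings: the attractor is a periodic orbit
```

The repeated threshold crossings are consistent with the stable periodic orbit predicted by the Poincaré–Bendixson theorem.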

and det L = (a + β)/ε > 0 and (Tr L)² − 4 det L = (a/ε + 1)² − 4(a + β)/ε > 0 if ε ≪ 1. Oscillatory behaviour in the FitzHugh–Nagumo model is generated via a Hopf bifurcation. With non-zero I, Hopf bifurcations are defined by Tr L = 0 and det L > 0, which gives

f′(v) = ε.   (3.4)

A plot of the quadratic f′(v) can be seen in Fig. 3.2, showing the possibility of two solutions to (3.4), denoted v±. Given that v = v(I), this suggests that there are two possible values of I that give rise to Hopf bifurcations. Since the fixed point is stable for I = 0, it is expected to destabilise as I is increased and then restabilise for some even larger value of I, as shown in the right panel of Fig. 3.2. The Poincaré–Bendixson theorem can be used to prove the existence of a stable limit cycle when the fixed point is unstable. This theorem, valid for planar systems, states that if a trapping region containing no fixed points can be found such that the flow on the boundary of this region points inwards, then the region must contain a limit cycle. A trapping region for the FitzHugh–Nagumo model can be constructed from two closed curves. The first of these is a ball centred on the fixed point with an arbitrarily small radius. Since the fixed point is unstable, trajectories on this ball flow outwards. The second curve is formed by taking a box whose size diverges to infinity. As v → ±∞, it is straightforward


Fig. 3.2 The Hopf bifurcation in the FitzHugh–Nagumo model with a = 1, β = 2 and ε = 0.01. Left: Conditions for the Hopf bifurcation (f′(v) = ε, with solutions denoted by v±). The bifurcation occurs at the intersection of the solid black curve and the horizontal dashed grey line. Right: Bifurcation diagram under variation of I. The solid black curve represents stable fixed points, the dashed grey line is a branch of unstable fixed points, and the dashed black line is a family of stable periodic orbits emanating from the Hopf bifurcation.
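The two Hopf currents can be computed explicitly from the condition f′(v) = ε together with the fixed point relation I = βv − f(v). A minimal sketch (our own, assuming the parameter values quoted in Fig. 3.2):

```python
import math

a, beta, eps = 1.0, 2.0, 0.01     # parameter values of Fig. 3.2

def f(v):
    return v * (a - v) * (v - 1.0)

# f'(v) = eps  <=>  3 v^2 - 2 (1 + a) v + (a + eps) = 0
disc = (1.0 + a) ** 2 - 3.0 * (a + eps)
v_lo = ((1.0 + a) - math.sqrt(disc)) / 3.0
v_hi = ((1.0 + a) + math.sqrt(disc)) / 3.0

# At a fixed point, f(v) - beta*v + I = 0, so the drive there is I = beta*v - f(v).
I1, I2 = beta * v_lo - f(v_lo), beta * v_hi - f(v_hi)
print(I1, I2)    # the fixed point is unstable for I1 < I < I2
```

For these parameters, the drive I = 1.1 used in Fig. 3.1 lies between the two Hopf points, consistent with the unstable fixed point and stable limit cycle shown there.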

to see that f₁(v, w) → ∓∞ for any finite value of w. Similarly, for w → ±∞, f₂(v, w) → ∓∞ for any finite v. The combination of these observations shows that trajectories on the box point inwards. Finally, since the fixed point is unique, the region bounded by the ball and the box cannot contain a fixed point and must therefore contain a limit cycle. See Prob. 3.1 and Prob. 3.2 for other examples of demonstrating the existence of limit cycles in planar systems. The FitzHugh–Nagumo model is said to be excitable when the fixed point is stable and lies on the left branch of the cubic v-nullcline, and oscillatory when the fixed point is unstable and lies on the middle branch, which is the case depicted in Fig. 3.1. The location of the fixed point can be parametrised by the applied current, I. It is also possible for the fixed point to lie on the right branch of the v-nullcline. This scenario corresponds to a depolarisation block state, in which the neuron is unable to spike but has a voltage that is significantly higher than that of the resting state. When ε ≪ 1, the planar model admits a slow–fast decomposition, as described in Box 3.1 on page 67. In the layer problem, v adjusts rapidly and maintains a quasi-equilibrium along the stable branches of f₁(v, w) = 0, which are given by the left and right branches of the v-nullcline. Along these branches, the dynamics of w are governed by the reduced dynamics

ẇ = f₂(v±(w), w) = G±(w),   (3.5)

where v±(w) are roots of the equation f(v±) + I − w = 0. Away from the left and right branches of the v-nullcline, it is not possible for v to be in quasi-equilibrium, and here the motion is governed by


Fig. 3.3 Phase portrait of the FitzHugh–Nagumo model with parameters a = 1, β = 2, I = 1.1 as ε → 0. The nullclines and unstable fixed point are plotted as in Fig. 3.1. In the singular limit and with respect to the slow timescale, trajectories spend infinitesimal time away from the left and right branches of the v-nullcline, as indicated by the dashed black lines. When trajectories are close to the v-nullcline, v can be approximated by v = v±(w) (which are guaranteed to exist by the implicit function theorem). Since v can be expressed in terms of w, the dynamics for the recovery variable become one-dimensional. The jump points at the knees of the v-nullcline, where the quasi-equilibrium approximation breaks down, are indicated on the figure as (vL, wL) and (vR, wR).

dv/dτ = f₁(v, w),   dw/dτ = 0,   (3.6)

where τ = t/ε is the fast timescale. On this timescale, w is a constant whilst v equilibrates to a solution of f₁(v, w) = 0. Thus, if the applied current is such that the system is oscillatory, the periodic orbit with respect to the slow timescale is formed of two orbit segments that follow the v-nullcline until its 'knees', given by the turning points of f(v) + I. At these points, trajectories 'jump off' and land immediately on the opposite branch of the v-nullcline, with no change in w. This scenario is illustrated in Fig. 3.3. In this case, the period of the oscillation, Δ, may be approximated by the time spent on the slow branches. Denoting the value of w at the lower knee (i.e., the local minimum at the transition from the left to the middle branch) of the cubic nullcline by wL and at the upper knee by wR, the period of the oscillation is approximated by

Δ = ∫_{wL}^{wR} [ 1/G₊(w) − 1/G₋(w) ] dw.   (3.7)
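The singular period can be evaluated numerically. The sketch below is our own construction (using the parameter values of Fig. 3.3, which are an assumption here): it finds the outer-branch roots v±(w) of the cubic by bisection and applies a midpoint quadrature rule.

```python
a, beta, I = 1.0, 2.0, 1.1   # parameter values of Fig. 3.3

def f(v):
    return v * (a - v) * (v - 1.0)

def branch_root(w, lo, hi):
    """Bisect f(v) + I - w = 0 on [lo, hi]; f is monotone on each outer branch."""
    g = lambda v: f(v) + I - w
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def singular_period(n=2000):
    """Midpoint-rule approximation of the integral in (3.7)."""
    w_L, w_R = I + f(1.0 / 3.0), I + f(1.0)   # knees of w = f(v) + I
    h = (w_R - w_L) / n
    total = 0.0
    for j in range(n):
        w = w_L + (j + 0.5) * h
        v_minus = branch_root(w, -2.0, 1.0 / 3.0)   # left branch
        v_plus = branch_root(w, 1.0, 3.0)           # right branch
        total += (1.0 / (beta * v_plus - w) - 1.0 / (beta * v_minus - w)) * h
    return total

delta_s = singular_period()
print(delta_s)   # singular estimate of the period
```

Note that G₊ > 0 and G₋ < 0 on the respective branches for these parameters, so both terms of the integrand contribute positively to the period.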


In the case where the system is excitable and ε ≪ 1, the middle branch of the v-nullcline is a threshold curve. Perturbations away from the fixed point that cross that threshold will induce a spike, whereas perturbations that remain on the left side of the threshold will not.

3.2.1 The mirrored FitzHugh–Nagumo model

In much the same way that the FitzHugh–Nagumo model captures the dynamics of the reduced Hodgkin–Huxley model described in Sec. 2.4, the mirrored FitzHugh–Nagumo model captures some of the essential features of models with more exotic ionic currents, and especially those with calcium channels [260]. The model is a variant of the FitzHugh–Nagumo model with a quadratic, as opposed to linear, dependence on the recovery variable and takes the form

ε v̇ = v − v³/3 − w² + I,   ẇ = w∞(v) − w,   (3.8)

where w∞(v) = 2(1 + exp(−5v))⁻¹ is a sigmoidal activation function. The v-nullcline mirrors the classical cubic-shaped nullcline of the FitzHugh–Nagumo model along the v-axis, as illustrated in Fig. 3.4, giving rise to an hourglass shape. Also shown here is a trajectory initiated from a hyperpolarised state. In a similar fashion to the reduced Connor–Stevens model described in Sec. 2.6.1, the trajectory has a latency to periodic firing that can be understood in terms of the time to crawl over the hump in the lower branch of the v-nullcline. Further discussion of the mirrored FitzHugh–Nagumo model can be found in [316]. A bursting model is naturally obtained by augmenting the planar model (3.8) with a form of ultraslow adaptation [317]. This has been used to great effect to understand the robustness and tunability of bursting [318], whereby epochs of spiking are interspersed with epochs of quiescence during which the cell is hyperpolarised. Much insight has been gained by exploring the geometry of the (v, w) phase plane (treating the ultraslow variable as a parameter), highlighting the important role that slow negative conductances have in shaping cellular rhythms [246, 258, 259]. Indeed, decomposing dynamical systems into their fast and slow subsystems, as covered in Box 3.1 on page 67, is a common approach to the study of bursting and mixed-mode oscillations. Bursting in smooth systems, such as the popular Hindmarsh–Rose model [421], requires at least three dynamical variables. The Hindmarsh–Rose model comprises a planar fast subsystem and a scalar slow variable. Transitions between active spiking phases and quiescent phases can be understood by treating the slow variable as a slowly varying parameter of the fast subsystem. In particular, these transitions are given by bifurcations of the latter under variation of the former.
For the Hindmarsh–Rose model, the transition to spiking occurs via a fold, whilst the transition back to quiescence is instigated via a homoclinic bifurcation. Knowledge of the sequence of bifurcations of the fast subsystem associated with bursting trajectories provides important information about the expected frequency of spiking during the active


Fig. 3.4 Phase plane of the mirrored FitzHugh–Nagumo model with parameters ε = 0.1 and I = 0.7. The dashed grey curves depict the nullclines. Note that in comparison to the FitzHugh–Nagumo model, illustrated in Fig. 3.1, the v-nullcline has an hourglass rather than cubic shape. A trajectory, initiated from a hyperpolarised state, is shown by the solid black line.

phase. It should therefore come as no surprise that bursting dynamics in systems that can be separated into fast and slow subsystems have been characterised by their bifurcation sequences [456, 462]. Further analysis of bursting dynamics in a non-smooth phenomenological model is presented in Sec. 3.4.6 and in Prob. 3.3.

Box 3.1: Slow–fast systems

Consider a dynamical system of the form

ε ẋ = f(x, y),   x ∈ ℝⁿ,   (3.9)
ẏ = g(x, y),   y ∈ ℝᵐ.   (3.10)

When ε ≪ 1, system (3.9)–(3.10) is said to be a slow–fast system, since the dynamics for x are significantly faster than those for y. In particular, (3.9) is referred to as the fast subsystem and (3.10) as the slow subsystem. By introducing the fast timescale τ = t/ε, (3.9)–(3.10) can be rewritten as the equivalent system


x′ = f(x, y),   (3.11)
y′ = ε g(x, y),   (3.12)

where the prime denotes differentiation with respect to τ. In the singular limit, in which ε = 0, the slow variable y remains constant over the fast timescale. The variables y can thus be thought of as being parameters of the fast subsystem (3.11). This scenario is known as the layer problem for a slow–fast system. The dynamics on the slow timescale t follow a differential algebraic equation given by

0 = f(x, y),   (3.13)
ẏ = g(h(y), y),   (3.14)

where (3.13) defines a critical manifold to which the dynamics of x are restricted. Here, h is a map from the domain of the slow variable y to this manifold. This scenario is referred to as the reduced problem. Note that h is not invertible for every value of y; however, the implicit function theorem guarantees that there exist connected sets over which it is locally invertible. In the case that x, y ∈ ℝ, the critical manifold is given by the nullcline of the fast subsystem. Trajectories of the full system (3.9)–(3.10) can be computed by integrating the reduced problem (3.13)–(3.14) until orbits reach non-hyperbolic points of this restricted system, whereupon the reduced problem ceases to be a good approximation to the overall dynamics. From here, orbits are evolved according to the layer problem (3.11)–(3.12), holding y fixed, until the orbit re-enters the critical manifold, and the dynamics are considered according to the reduced problem once more. As such, full system trajectories are found by piecing together solutions of the reduced and layer problems. As ε is increased away from zero, Fenichel theory [299] guarantees the existence of perturbed manifolds that are ε-close to the critical manifold and share its hyperbolicity properties, so that perturbation expansions of the layer and reduced problems may be used to analyse dynamics near to the singular limit. For further details on the analysis of general slow–fast systems, see [482, 902], as well as [73] for the treatment of noisy slow–fast systems and [456, 460] for specific analysis of slow–fast systems in neuroscience.

3.3 Threshold models

The precise timing of action potentials in cortex is highly irregular [740, 837], giving rise to the notion of action potentials as random events. The models considered thus far in this chapter have not exhibited this irregularity. Deterministic models can produce irregular solutions through deterministic chaos. However, such behaviour only appears to be random; it is still governed by a deterministic system, whereas the observed randomness in neural firing patterns arises, at least in part, from the inherent


stochasticity at the channel level. In turn, randomness in channel opening and closing is reflected in probabilistic changes to the neuron's membrane potential, as discussed in Sec. 2.8. The dynamics of these random events can be succinctly captured in Markov chain models of a neuron receiving stochastic external inputs [127, 545, 885]. This approach foregoes describing detailed channel dynamics, instead opting for a phenomenological model that captures the impact that such inputs have on the neuron's voltage and the times at which it fires an action potential. Neurons are typically exposed to both excitatory and inhibitory inputs. In an oversimplified representation of a neuron, the excitatory and inhibitory inputs can be modelled as independent Poisson processes with different rates, as described in Box 3.2 on page 69. Indeed, a variety of extensions of the Poisson process assumption have been used to describe neural firing across a range of contexts [369, 600, 638, 642], and they can be fitted well to experimental data. In the case where both kinds of input change the membrane potential, v, by a fixed amount, this becomes a birth-and-death process, a class often studied in epidemic modelling. This treatment of neurons was considered by Feller [298] in 1966, and a nice discussion of it is presented by Tuckwell in [886], which is followed here. Although this overly simplified model bears little resemblance to true physiology, significant progress in understanding the dynamics of neuronal membrane potentials can be achieved. This insight can then be transferred to more complicated models, as will be seen later in the chapter, to investigate the role of thresholds in action potential generation. Without loss of generality, begin by assuming that v(0) = 0 and consider how v evolves over an interval [0, t], under the assumption that excitatory inputs increase v by 1, whereas inhibitory inputs reduce v by the same amount.
The discussion of the Hodgkin–Huxley model (Sec. 2.3) and of the FitzHugh–Nagumo model (Sec. 3.2) highlighted the notion of a threshold voltage, beyond which the neuron will spike. In those models, the threshold was not well defined, in the sense that its specific value depended on the other state variables describing the gating dynamics. Some simplified neuronal models omit descriptions of ion channels and gating dynamics. In such models, a hard threshold vth > 0 can be defined such that the neuron spikes if v exceeds this value. The tractability of the simplified description allows for the quantification of firing rates, or expected firing times, given knowledge of the input arrival rates.

Box 3.2: Poisson processes

The probability mass function of a random variable, X, following a Poisson distribution with parameter μ is given by

P(X = k; μ) = e^{−μ} μᵏ/k! for k ≥ 0, and 0 otherwise.   (3.15)

The probability density function for a random variable, X, following an exponential distribution is given by

f(x; λ) = λ e^{−λx} for x ≥ 0, and 0 otherwise.   (3.16)

Consider a sequence of events in which the times between consecutive events are random, independent, and all follow an exponential distribution with parameter λ. Such a sequence is known as a Poisson process with rate λ. The number of events, N(t), that occur in this sequence over a given time, t, follows a Poisson distribution (3.15) with parameter μ = λt. Note that N(0) = 0 and that, since the time intervals between events are independent, so too are increments of N(t). More information about Poisson processes can be found in the books [483] and [761].

Suppose that excitatory and inhibitory inputs arrive following independent Poisson processes with rates λE and λI, respectively. Over an interval (t, t + Δt) with Δt ≪ 1, the probability of either kind of input arriving is given by (λE + λI)Δt + O(Δt²), whilst the probability of neither occurring is (1 − λE Δt)(1 − λI Δt) + O(Δt²) = 1 − (λE + λI)Δt + O(Δt²). Since inputs are independent of one another, the arrival of any input, regardless of type, follows a Poisson process with rate λ = λE + λI. Given that an input arrives, the probability that it was excitatory is given by p = λE/λ and the probability that it was inhibitory is given by q = λI/λ, so that p + q = 1.

Let P(v(t) = m) for m ∈ ℤ denote the probability distribution of v(t), that is, the probability that v has value m at time t. If v(t) = m > 0, then there must have been at least n ≥ m inputs over [0, t]. If n inputs have arrived over the interval [0, t], then the number of these inputs that are excitatory follows a binomial distribution with n trials and success probability p (since inputs are independent). Define nE to be the number of excitatory inputs and nI to be the number of inhibitory ones, so that n = nE + nI and m = nE − nI. Note that nE = (n + m)/2 and nI = (n − m)/2, so that

P(v(t) = m | n inputs over [0, t]) = P(nE excitatory inputs over [0, t] | n inputs over [0, t]).   (3.17)

Using the probability mass function for the binomial distribution and the earlier expressions for nE and nI, this can be expressed as

P(v(t) = m | n inputs over [0, t]) = n!/[(n − nE)! nE!] p^{nE} q^{nI} = n!/[((n − m)/2)! ((n + m)/2)!] p^{(n+m)/2} q^{(n−m)/2}.   (3.18)

The law of total probability implies that the probability that v(t) = m is the sum of these probabilities over all n ≥ m, i.e.,

P(v(t) = m) = Σ_{n≥m} P(v(t) = m | n inputs arrived over [0, t]) × P(n inputs in [0, t]).   (3.19)


Since the input arrivals are defined by a Poisson process with rate λ, the probability of n inputs occurring in the required interval is e^{−λt}(λt)ⁿ/n!. Combining this with (3.18) gives

P(v(t) = m) = e^{−λt} Σ_{n=m}^{∞} (λt)ⁿ/[((n − m)/2)! ((n + m)/2)!] p^{(n+m)/2} q^{(n−m)/2},   (3.20)

where the sum is taken over even n if m is even and odd n if m is odd. Since n = m + 2nI, where nI = 0, 1, 2, . . ., the sum in (3.20) can be replaced to give

P(v(t) = m) = e^{−λt} Σ_{nI=0}^{∞} (λt)^{m+2nI}/[(m + nI)! nI!] p^{m+nI} q^{nI}.   (3.21)

Equation (3.21) can be written more succinctly by expressing it in terms of the modified Bessel function

I_r(x) = Σ_{k=0}^{∞} 1/[k! Γ(k + r + 1)] (x/2)^{2k+r},   (3.22)

where Γ(n) = (n − 1)! is the Gamma function. Using this formulation, the probability mass function for v(t) is given by

P(v(t) = m) = (λE/λI)^{m/2} e^{−λt} I_m(2t√(λE λI)).   (3.23)
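The closed form P(v(t) = m) = (λE/λI)^{m/2} e^{−λt} I_m(2t√(λE λI)) can be sanity-checked numerically: the probabilities over all m should sum to one, and the first moment should equal the net drift (λE − λI)t. A short sketch (our own), using the series (3.22) for the Bessel function and arbitrary illustrative rates:

```python
import math

def bessel_I(m, x, kmax=150):
    """Modified Bessel function I_m(x), integer order, via the series (3.22)."""
    m = abs(m)                        # I_{-m} = I_m for integer m
    term = (0.5 * x) ** m / math.factorial(m)
    total = term
    for k in range(kmax):
        term *= (0.5 * x) ** 2 / ((k + 1.0) * (k + 1.0 + m))
        total += term
    return total

lamE, lamI, t = 3.0, 2.0, 1.0         # arbitrary illustrative rates
lam = lamE + lamI

def prob(m):
    """P(v(t) = m) from (3.23)."""
    return ((lamE / lamI) ** (0.5 * m) * math.exp(-lam * t)
            * bessel_I(m, 2.0 * t * math.sqrt(lamE * lamI)))

total = sum(prob(m) for m in range(-80, 81))
mean = sum(m * prob(m) for m in range(-80, 81))
print(total, mean)   # ~1 and ~(lamE - lamI)*t
```

The truncation range |m| ≤ 80 is more than sufficient here, since the tail probabilities decay factorially.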

The above arguments facilitate the computation of passage times of v(t) through the threshold vth. In the case where λE = λI, which may be thought of as a balanced state, the system is symmetric with respect to the direction of changes in v as inputs arrive, since p = q = 1/2. In this case, the system can be solved using the method of images. It is convenient to introduce the notation Pm(t) = P(v(t) = m) and to define by P*_m(t) the probability that v(t) = m and v(s) < vth for all s ∈ [0, t], that is, the probability that a trajectory ends at m without reaching threshold over the specified interval. An example of such a path is shown in the left panel in Fig. 3.5. Progress can now be made by taking advantage of the symmetry in the system. For every path starting from v(0) = 0 and ending at v(t) = m that touches or crosses vth, there is an equivalent path that starts at v(0) = 2vth and ends at v(t) = m. Suppose that the former path first reaches vth at t′ ∈ [0, t]. Then these two paths are identical for s ∈ [t′, t] and, for s < t′, the latter is the reflection of the former about v = vth, as depicted in the middle panel of Fig. 3.5. The symmetry of the system implies that the latter path is itself equivalent to one starting at v(0) = 0 and ending at v(t) = 2vth − m, that being a reflection of the whole path about v = vth, as illustrated in the right panel of Fig. 3.5. Since these paths have v(t′) = vth for some t′ ∈ [0, t], they do not contribute towards P*_m(t). The probability of such a path occurring is equal to P_{2vth−m}(t). Taking these facts together means that P*_m(t) can be expressed as

P*_m(t) = Pm(t) − P_{2vth−m}(t).   (3.24)


Fig. 3.5 Sample paths of a neuron whose excitatory and inhibitory inputs follow independent Poisson processes with equal rates λE = λI. Each segment of the paths has constant v over a half-open interval, closed at the left edge. Jumps in v occur as inputs arrive, such that v immediately takes on a new value, as indicated by the markers. In all panels, the dashed line shows the threshold v = vth, whilst the dotted line is an arbitrary value v = m. Of interest are sample paths that start at v(0) = 0 and end at v(t) = m, where t = 10 in this example. Examples of such paths are here plotted in black. Left: Sample path that remains below vth in our interval of interest. Middle: The black path touches vth at some time. This path has an equivalent path starting from v(0) = 2vth that also ends at v(t) = m, plotted in grey. Right: By symmetry, the grey path in the middle panel is equivalent to one reflected about the line v = vth, plotted in black.

Denote the probability density function of the first passage time to vth by F(t). A first passage occurs in (t, t + Δt] if v(t) = vth − 1, with v(s) < vth for s ∈ [0, t], and an excitatory input arrives in (t, t + Δt). By symmetry, the probability of such an input occurring is (λ/2)Δt, and so

F(t)Δt = P*_{vth−1}(t) (λ/2)Δt.   (3.25)

Using (3.22), (3.23) and (3.24), together with the fact that λE = λI = λ/2, yields

F(t) = (λ/2) e^{−λt} [I_{vth−1}(λt) − I_{vth+1}(λt)].   (3.26)

By taking advantage of the recurrence relation for the modified Bessel function, I_{vth−1}(x) − I_{vth+1}(x) = (2vth/x) I_{vth}(x), (3.26) can be written as

F(t) = (vth/t) e^{−λt} I_{vth}(λt).   (3.27)
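The first passage density can be checked against direct simulation. The sketch below is our own construction: with λ = λE + λI and λE = λI, it takes the density to be F(t) = (vth/t) e^{−λt} I_{vth}(λt), and compares its integral over a finite window with the fraction of Monte Carlo paths that reach threshold; the rates, threshold, and window are arbitrary choices.

```python
import math, random

def bessel_I(m, x, kmax=200):
    """I_m(x) for integer m >= 0 via the series (3.22)."""
    term = (0.5 * x) ** m / math.factorial(m)
    total = term
    for k in range(kmax):
        term *= (0.5 * x) ** 2 / ((k + 1.0) * (k + 1.0 + m))
        total += term
    return total

lam, vth, T = 2.0, 3, 20.0     # lam = lamE + lamI with lamE = lamI

def hit_prob_theory(n=4000):
    """Trapezoidal integral over (0, T] of F(t) = (vth/t) e^{-lam t} I_vth(lam t)."""
    h = T / n
    vals = [(vth / ((j + 1) * h)) * math.exp(-lam * (j + 1) * h)
            * bessel_I(vth, lam * (j + 1) * h) for j in range(n)]
    return h * (sum(vals) - 0.5 * vals[-1])   # F(0+) = 0 for vth > 1

def hit_prob_mc(paths=4000, seed=1):
    """Fraction of simulated walks reaching vth before time T."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(paths):
        t, v = 0.0, 0
        while True:
            t += rng.expovariate(lam)          # next input of either kind
            if t > T:
                break
            v += 1 if rng.random() < 0.5 else -1
            if v >= vth:
                hits += 1
                break
    return hits / paths

p_theory, p_mc = hit_prob_theory(), hit_prob_mc()
print(p_theory, p_mc)    # agree to within Monte Carlo error
```

This mirrors the comparison between histogram and predicted density shown in Fig. 3.6.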

Sample paths of the system in this case, along with the first passage time distribution, are shown in Fig. 3.6. In the case where λE ≠ λI, another method is needed for calculating the first passage times. If a path starting at v(0) = 0 ends at v(t) = m > vth, then the path must have passed through vth in the interval [0, t]. Denoting the first such crossing by t′ < t and integrating over all such paths gives

Pm(t) = ∫_0^t F(t′) P_{m−vth}(t − t′) dt′.   (3.28)

This convolution is called a renewal equation and can be solved by taking Laplace transforms, since under this transformation, convolutions become products. We refer the reader to [886] for further details and simply choose to state the result


Fig. 3.6 Left: Sample paths of the voltages of 5 neurons receiving excitatory and inhibitory input defined by independent Poisson processes with equal rates λE = λI = 5. Here, vth = 20, as indicated by the dashed grey line. In this example, only one neuron reaches threshold over the time window simulated. Right: Distribution of first passage times of the voltages through threshold. The histogram shows the empirical distribution over 1000 sampled neurons; the black curve is the predicted distribution from (3.27).

F(t) = (vth/t) (λE/λI)^{vth/2} e^{−λt} I_{vth}(2t√(λE λI)),   t > 0.   (3.29)

Other moments, such as the variance of the first passage time, can be calculated in a similar fashion. In more complicated models, the parameters λE and λI may vary over time, leading to an inhomogeneous Poisson process; such processes have enjoyed widespread use in describing the statistics of neuronal firing [74, 492, 493]. The assumption that v takes only integer values can be relaxed by assuming that inputs are of strength a, which results in mean first passage times being scaled by a factor of (1 + vth/a). If the inputs are scaled in such a way that a → 0 whilst, simultaneously, the input arrival rates become infinitely fast, the resulting process for v(t) is described by a Wiener process (see Appendix A for more details on Wiener processes). The paths of this process are then continuous trajectories (though they are nowhere differentiable). The Wiener process, also commonly known as Brownian motion, belongs to the general class of Markov processes called diffusion processes. Compared with the previously considered case, these processes are better able to capture the fluctuations observed in real voltage traces. We refer the reader to Appendix A and to [331] for a wider discussion of the use of Wiener processes in modelling. Their use in neuroscience led to the development of an important new class of phenomenological model, known as the integrate-and-fire model, though its roots lie even further back in history.
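The diffusion limit can be illustrated numerically: shrinking the step size a while speeding up the per-direction arrival rate as σ²/(2a²) keeps Var[v(t)] = σ²t fixed, so the walk's second-order statistics approach those of a Wiener process. A sketch with arbitrary parameter choices (not from the book):

```python
import random

def sample_v(a, rate, t_end, rng):
    """Endpoint v(t_end) of a walk with steps +/-a, arrivals at total rate 2*rate."""
    v, time = 0.0, 0.0
    while True:
        time += rng.expovariate(2.0 * rate)
        if time > t_end:
            return v
        v += a if rng.random() < 0.5 else -a

sigma, t_end, n = 1.0, 1.0, 2000
rng = random.Random(7)
variances = {}
for a in (1.0, 0.5, 0.1):
    rate = sigma ** 2 / (2.0 * a ** 2)   # per-direction arrival rate
    xs = [sample_v(a, rate, t_end, rng) for _ in range(n)]
    mu = sum(xs) / n
    variances[a] = sum((x - mu) ** 2 for x in xs) / n
    print(a, variances[a])   # each close to sigma^2 * t_end = 1
```

Since Var[v(t)] = a² × (mean number of arrivals) = a² · 2 · rate · t = σ²t, the empirical variance is (statistically) independent of the step size under this scaling.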


3.4 Integrate-and-fire neurons

Hodgkin and Huxley's celebrated equations paved the way for understanding electrical behaviour at the level of the ion channel; at the turn of the twentieth century, however, little was known about the electrophysiology of nerve cells. This was primarily due to limited measuring capabilities, and as a result, theories were often based on speculation rather than on observed phenomena. A paper by Weiss [928] was the first to investigate electrostimulation quantitatively in a general form [453]. Weiss posited that the threshold quantity, that being the product of the minimal current required to induce an action potential and the duration of its application, was a linear function of time. Whilst this study was overlooked by most other prominent electrophysiologists at the time, it was picked up by Lapicque in 1907, who used the same method for delivering current stimulation developed by Weiss to study threshold quantities in frogs' legs. Weiss had studied stimulation pulses up to 3 ms, and his results showed good agreement with the postulated linear time dependence. Lapicque was able to extend this stimulation and demonstrated that as the duration of the pulse increased, the response became increasingly less linear. By writing down an equivalent circuit for the experiment, Lapicque derived a logarithmic form for the threshold quantity that better described the long pulse response. Importantly, this facilitated the relation of time constants to membrane properties, which was a key step in understanding the excitability of tissue.

In the 1960s, a flurry of papers considered neuron models with a hard threshold for the membrane potential. Action potentials were deemed to have occurred whenever the membrane potential exceeded threshold, after which the potential was instantaneously set to a reset value away from threshold. This notion of all-or-nothing response differs from the soft threshold considered in Chap. 2, for which it is impossible to determine for certain whether a neuron will fire or not from knowledge of its membrane potential alone. The first of these papers was due to Gerstein and Mandelbrot [334], who modelled neurons as a random walk, described by a Wiener process, so that the interspike intervals (ISIs) were distributed as the first passage times of that process. The following year, Stein [832] considered the same model, but added exponential decay to a resting value when the neuron was sub-threshold. Both of these works showed remarkable agreement between theory and experimental data, which were taken primarily from the cochlear nucleus of the cat. In 1972, Bruce Knight introduced the term 'integrate-and-fire' (IF) to describe these models in his works, which analysed their firing rates [520] and showed how they can be used to model the visual system of the Limulus [521]. The name stuck, and IF models have enjoyed much analysis, both in theoretical studies as well as in modelling works. A number of variants of the original leaky (linear) integrate-and-fire model have since been introduced to try to bring these phenomenological models closer to biology, but the basic principle of a hard threshold remains. The general form of IF models is

C v̇ = f(v) + I,   Tm < t < Tm+1,   m ∈ ℤ,   (3.30)


where C is the membrane capacitance and I represents the, potentially time-dependent, external current that is applied to the cell. Without loss of generality, it can be (and often is) assumed that the membrane capacitance has unit value (i.e., C = 1), and this convention is adopted for the remainder of this chapter. The function f defines the intrinsic neuronal properties. Analysis for both a linear and a quadratic form of f will be presented in this chapter. Suppose that the system has a threshold, vth, and reset, vr, such that vr < vth, and that the drive is sufficiently strong to cause trajectories to reach threshold and hence induce spiking activity. Under these conditions, the mth firing time is defined by

Tm = inf{t | v(t) ≥ vth; v̇ > 0; t > Tm−1}.   (3.31)

To (3.30) is appended the reset condition

lim_{ε→0⁺} v(Tm + ε) = vr,   (3.32)

where the inclusion of ε is used to make the direction of the limit precise. For notational convenience, the reset condition can be incorporated into (3.30) by writing v˙ = f (v) + I − (vth − vr )

∑ δ (t − Tm ).

(3.33)

m∈Z

where δ represents the Dirac delta function (see Box 2.9 on page 51). For smooth f, the state space of one-dimensional IF models is restricted to the interval v ∈ (−∞, vth]. Formally, (3.30) is a hybrid dynamical system, comprising a smooth flow generated by f together with a 'jump' given by the reset condition. The interaction between the smooth and non-smooth dynamics can result in a rich range of behaviour. For example, smooth scalar dynamical systems do not support oscillations; however, the non-smoothness of the reset process can induce periodic solutions in IF models. The following sections cover some concrete examples of IF models.

3.4.1 The leaky integrate-and-fire model

Originally described as the forgetful IF model, the leaky integrate-and-fire (LIF) model takes f(v) = −v/τ, where τ is the membrane time constant. Since f is linear, (3.30) can be integrated from t0 using variation of parameters (see Box 3.3 on page 76) to give

v(t) = v(t0) e^{(t0−t)/τ} + ∫_{t0}^{t} I(s) e^{(s−t)/τ} ds − (vth − vr) Σ_m e^{−(t−Tm)/τ}.   (3.34)

Suppose that the external current is held constant, that is, I (t) = I ∈ R. The fixed point for the LIF model is given by v = I τ . If I < vth /τ ≡ Ic , trajectories will simply tend to the sub-threshold fixed point and no firing events will occur. In


Fig. 3.7 Oscillatory behaviour in the LIF model with I = 1.3, τ = 1, vr = 0 and vth = 1. In contrast to the case for smooth systems, scalar non-smooth systems can support periodic solutions. Moreover, trajectories reach the attracting periodic orbit in finite time. Left: Trajectories of the LIF model are shown as solid black curves. The dashed grey line indicates the position of the threshold. Right: Frequency of firing in the LIF model as the constant drive, I , is varied.
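The period of the oscillation shown in Fig. 3.7 can be verified by direct simulation. A sketch (our own) using forward Euler with the parameters of Fig. 3.7, checked against the closed-form period τ ln[(vr − Iτ)/(vth − Iτ)]; the step size is an arbitrary choice:

```python
import math

tau, I, v_r, v_th = 1.0, 1.3, 0.0, 1.0   # parameters of Fig. 3.7

def simulate_isis(dt=1e-4, n_spikes=6):
    """Forward-Euler LIF with hard threshold and reset; returns inter-spike intervals."""
    v, t, last, isis = v_r, 0.0, None, []
    while len(isis) < n_spikes:
        v += dt * (-v / tau + I)
        t += dt
        if v >= v_th:            # spike: record time and reset
            if last is not None:
                isis.append(t - last)
            last = t
            v = v_r
    return isis

delta = tau * math.log((v_r - I * tau) / (v_th - I * tau))   # closed-form period
isi = simulate_isis()[-1]
print(delta, isi)   # closed form vs simulated period
```

As expected for a non-smooth system, the trajectory lands on the periodic orbit after the very first reset, so every simulated inter-spike interval matches the closed form up to discretisation error.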

this case, the firing times, Tm, are undefined. As I increases through Ic, a border collision bifurcation occurs as the fixed point makes contact with the discontinuity surface defined by the set v = vth. Now, trajectories will reach threshold and will subsequently be reset, giving rise to a periodic solution, as illustrated in Fig. 3.7. This behaviour will repeat since the model parameters are fixed, and hence oscillatory behaviour will be observed. This highlights another difference between the smooth models previously discussed and IF models: in smooth models, trajectories only reach attracting states in infinite time. By contrast, owing to the action of the reset, trajectories will lie on the periodic orbit following the first firing event. After defining the period of the oscillation by Δ = Tm+1 − Tm, (3.30) with f(v) = −v/τ can be integrated between Tm and Tm+1 to find

Δ = τ ln[(vr − Iτ)/(vth − Iτ)] Θ(Iτ − vth),   (3.35)

where Θ is the Heaviside function. This equation can be used to define the frequency response curve for the LIF neuron, as plotted in Fig. 3.7, in which a sharp rise in the frequency following the border collision bifurcation is observed. Trajectories for the oscillatory state, which showcase the non-smooth nature of the orbits, are also plotted in this figure.

Box 3.3: Solutions to linear systems

Consider a constant coefficient non-autonomous linear system of ordinary differential equations of the form:

3.4 Integrate-and-fire neurons


\[
\dot{x} = Ax + b(t), \qquad x, b(t) \in \mathbb{R}^n, \quad A \in \mathbb{R}^{n \times n}. \qquad (3.36)
\]

The solution to (3.36) from an initial vector x(t_0) = x_0 is given by the variation of parameters formula:

\[
x(t) = G(t - t_0)x_0 + K(t, t_0), \qquad (3.37)
\]

where G(t) = e^{At} is a matrix exponential and K is given by the formula

\[
K(t, t_0) = e^{At} \int_{t_0}^{t} e^{-As} b(s)\, \mathrm{d}s. \qquad (3.38)
\]

In the case where b is constant, (3.37) can be simplified to

\[
x(t) = G(t - t_0)x_0 + L(t - t_0)b, \qquad (3.39)
\]

where

\[
L(t) = \int_0^t e^{As}\, \mathrm{d}s = A^{-1}\left(e^{At} - I\right) = A^{-1}\left[G(t) - I\right], \qquad (3.40)
\]

and I is the n × n identity matrix. If A is diagonalisable, it may be written as A = PΛP^{-1}, with its corresponding matrix exponential given by G(t) = Pe^{Λt}P^{-1}. For planar systems, if the diagonalisable matrix A has a pair of real eigenvalues, λ_+, λ_-, then Λ and P can be expressed as Λ = diag(λ_+, λ_-) and P = [q_+, q_-], where q_± = ((λ_± − a_{22})/a_{21}, 1)^T and a_{ij} is the (i, j)th element of A. If instead A is diagonalisable but has a pair of complex conjugate eigenvalues, λ_± = ρ ± iω, with associated eigenvector q ∈ C² such that Aq = (ρ + iω)q, then G(t) = e^{ρt} P R_{ωt} P^{-1}, with

\[
R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \qquad P = [\operatorname{Im}(q), \operatorname{Re}(q)] = \begin{pmatrix} 0 & 1 \\ \tilde{\omega} & \tilde{\rho} \end{pmatrix}, \qquad (3.41)
\]

where \tilde{\omega} = ω/a_{12} and \tilde{\rho} = (ρ − a_{11})/a_{12}. Note that ρ and ω can be written as ρ = (a_{11} + a_{22})/2 and ω² = a_{11}a_{22} − a_{12}a_{21} − ρ². Explicit expressions for G and L in this form may be constructed (see, for example, [185]).
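The formulas in this box are easily checked numerically. The sketch below (plain numpy, with an arbitrarily chosen planar matrix A and constant forcing b) builds G(t) by diagonalisation and L(t) from (3.40), and compares the variation of parameters solution (3.39) against an independent forward-Euler integration:

```python
import numpy as np

def G(A, t):
    # G(t) = e^{At}, built by diagonalisation A = P Lambda P^{-1}
    lam, P = np.linalg.eig(A)
    return (P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)).real

def L(A, t):
    # L(t) = A^{-1}[G(t) - I], as in (3.40); requires A to be invertible
    return np.linalg.inv(A) @ (G(A, t) - np.eye(A.shape[0]))

# Arbitrary example with a complex conjugate pair rho +/- i omega = -1 +/- 2i
A = np.array([[-1.0, -2.0], [2.0, -1.0]])
b = np.array([1.0, 0.0])
x0 = np.array([0.5, -0.5])
t = 1.5

x_exact = G(A, t) @ x0 + L(A, t) @ b        # variation of parameters, (3.39)

# Crude forward-Euler integration of xdot = Ax + b as a cross-check
x, dt = x0.copy(), 1e-5
for _ in range(int(t / dt)):
    x = x + dt * (A @ x + b)
print(x, x_exact)
```

The two results agree to within the Euler discretisation error; the eigendecomposition route is exactly the construction described above for a diagonalisable A.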

3.4.2 The quadratic integrate-and-fire model

Trajectories from the LIF model are concave down, whereas action potentials are typically characterised by an initial rapid upswing, which would be better captured by a concave up function. In spite of this, LIF neurons have enjoyed a fruitful life in modelling studies, in which it has been demonstrated that they can adequately


capture the firing times from experimental data [203, 648, 855]. Indeed, it is often presumed that it is the firing times, or firing rates, rather than the spike shape, that convey the majority of information associated with neural processing. In particular, the majority of communication between neurons occurs via chemical synapses, as will be discussed in Sec. 4.4.1. For such coupling, one neuron signals to another by inducing a current in the receiving cell. Although this current is triggered by the signalling cell, it is typically independent of the membrane potential of that cell. For other forms of coupling, such as that facilitated via gap junctions as discussed in Sec. 4.4.2, the current induced in the receiving cell may be directly influenced by the signalling cell's membrane potential, and so the shape of the action potential is an important factor in the induced response. Improvements to IF models can be made to better capture the shape of a typical action potential by making f nonlinear. One important such example is the quadratic integrate-and-fire (QIF) model [401, 559], which sets f(v) = v² in (3.30). This model can be shown to represent the normal form for a saddle node on an invariant circle (SNIC) bifurcation, as presented in Sec. 2.5.2, which will be discussed at the end of this section. Consider a constant current injection to the QIF model with I ∈ R. For I < 0, steady states are given by v_± = ±√|I|. An inspection of f′(v_±) shows that v_− is stable, whilst v_+ is unstable. Increasing I brings the two fixed points closer to one another. At I = 0, the two fixed points coalesce in a saddle-node bifurcation, and for I > 0, the system possesses no fixed points. In this regime, (3.30) with f(v) = v² can be integrated from t_0 to find

\[
v(t) = \sqrt{I}\, \tan\!\left( \arctan\!\left( \frac{v(t_0)}{\sqrt{I}} \right) + \sqrt{I}\,(t - t_0) \right). \qquad (3.42)
\]

Since tan(u) → ∞ as u → π/2⁻, it can be seen that solutions blow up in finite time, even with a threshold at infinity, in contrast to the LIF neuron. In particular, solutions to the QIF model blow up as

\[
\arctan\!\left( \frac{v(t_0)}{\sqrt{I}} \right) + \sqrt{I}\,(t - t_0) \to \frac{\pi}{2}^{-}. \qquad (3.43)
\]

These observations mean that it is not necessary to define a finite threshold or reset value for the QIF model. Instead, these are often taken to have infinite magnitude: vth → +∞ and vr → −∞. Under these choices, the period of an oscillation is given by

\[
\Delta = \frac{\pi}{\sqrt{I}}, \qquad I > 0. \qquad (3.44)
\]

From a computational perspective, it is not possible to set an infinite threshold or an infinite reset. Instead, large in magnitude, but finite, values can be used for these. The period of oscillation in this case with I > 0 is given by

\[
\Delta = \frac{1}{\sqrt{I}} \left[ \arctan\!\left( \frac{v_{th}}{\sqrt{I}} \right) - \arctan\!\left( \frac{v_r}{\sqrt{I}} \right) \right]. \qquad (3.45)
\]
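The finite-threshold period formula can be checked numerically. The following sketch (using the parameter values of Fig. 3.8) integrates the QIF equation with forward Euler from reset to threshold and compares the crossing time with (3.45):

```python
import numpy as np

# QIF parameters, as in Fig. 3.8
I, v_r, v_th = 0.8, -1.0, 10.0

# Closed-form period with finite threshold and reset, (3.45)
sq = np.sqrt(I)
delta_exact = (np.arctan(v_th / sq) - np.arctan(v_r / sq)) / sq

# Forward-Euler integration of vdot = v^2 + I from reset to threshold
v, t, dt = v_r, 0.0, 1e-5
while v < v_th:
    v += dt * (v * v + I)
    t += dt
print(t, delta_exact)
```

The crossing time agrees with (3.45) up to the Euler discretisation error, which is small for this step size.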


Fig. 3.8 Oscillatory behaviour in the QIF model with I = 0.8, vr = −1 and vth = 10. Left: Oscillatory trajectories of the QIF model. In contrast to the LIF model, solutions have a rapid upswing, which mimics the rapid depolarisation phase of an action potential. Right: Frequency of the QIF model as the constant drive is varied. The QIF model is observed to support periodic activity with arbitrarily low frequency.

Trajectories of the QIF model for I > 0 are shown in Fig. 3.8. The solutions are much closer to true action potentials, and so the QIF model may be thought of as being a true spiking model, in contrast to the LIF model, in which spike shapes are often appended to the dynamics artificially. Additionally, oscillations in the QIF model can possess arbitrarily low frequency, which captures the behaviour of certain cortical cells [851]. Allowing for a finite reset value generates the possibility of bistability for I < 0. For vr < v_+, only the fixed point v_− is stable. However, for v_+ < vr < vth, there is bistability between this fixed point and an oscillatory solution. The solution to the QIF model for I < 0 is given by

\[
v(t) = \sqrt{|I|}\, \tanh\!\left( \tanh^{-1}\!\left( \frac{v(t_0)}{\sqrt{|I|}} \right) - \sqrt{|I|}\,(t - t_0) \right). \qquad (3.46)
\]

For an initial condition v(t_0) > v_+, this solution also blows up in finite time. If the reset value is such that vr > v_+ and the threshold value is finite, there is an oscillatory solution with period

\[
\Delta = \frac{1}{2\sqrt{|I|}} \ln\left( \frac{(v_{th} - \sqrt{|I|})(v_r + \sqrt{|I|})}{(v_r - \sqrt{|I|})(v_{th} + \sqrt{|I|})} \right). \qquad (3.47)
\]

The expression for the period of the equivalent system with infinite threshold may be obtained by taking the limit as vth → ∞ in (3.47), yielding

\[
\Delta = \frac{1}{2\sqrt{|I|}} \ln\left( \frac{v_r + \sqrt{|I|}}{v_r - \sqrt{|I|}} \right). \qquad (3.48)
\]


Fig. 3.9 Schematic of the dynamics of the θ -neuron model showing that it exhibits a SNIC bifurcation. For I < 0, two fixed points exist on the unit circle; one stable, the other unstable. At I = 0, the two come together and annihilate one another in a saddle-node bifurcation. For I > 0, there are no fixed points and so θ moves around the unit circle. The ghost of the saddle node causes the evolution of θ to slow down near to where the unstable fixed point previously existed.

At vr = v_+ = √|I|, there is a homoclinic bifurcation at which the oscillatory solution collides with the unstable fixed point, and after which v_− is the only stable solution. This line thus delineates the regions of the (I, vr) parameter plane that support bistability from those that do not. By applying the transformation v = tan(θ/2), the QIF model can be written in the equivalent form:

\[
\dot{\theta} = 1 - \cos\theta + (1 + \cos\theta)\, I, \qquad \theta \in \mathbb{S}^1 \equiv [0, 2\pi). \qquad (3.49)
\]

This is the Ermentrout–Kopell model, more commonly known as the θ-neuron model [279]. The neuron is said to spike when θ passes through π from below. Since the phase space for the θ-neuron is the unit circle, it can be visualised easily, as shown in Fig. 3.9. For I < 0, there exist two fixed points, one of which is stable, the other unstable. At I = 0, these two come together and annihilate one another. For I > 0, there are no fixed points and θ simply makes revolutions around the unit circle. In this way, it can be seen that the θ-neuron model supports a SNIC bifurcation.
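The equivalence with the QIF model can be verified numerically: integrating (3.49) over one full revolution of the circle should reproduce the infinite-threshold QIF period π/√I from (3.44). A minimal Euler sketch:

```python
import numpy as np

I = 0.8                      # constant drive; for I > 0 there are no fixed points
dt = 1e-5

# Integrate the theta-neuron (3.49), unwrapped, over one revolution
theta, t = 0.0, 0.0
while theta < 2 * np.pi:
    theta += dt * (1 - np.cos(theta) + (1 + np.cos(theta)) * I)
    t += dt

# Under v = tan(theta/2) this is the QIF model with infinite threshold,
# so the revolution time should match (3.44): pi / sqrt(I)
print(t, np.pi / np.sqrt(I))
```

Reducing I towards zero lengthens the revolution time without bound, which is the arbitrarily low firing frequency characteristic of the SNIC scenario.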

3.4.3 Other nonlinear integrate-and-fire models

Choices for the function f are not limited to polynomial functions. Indeed, a variety of functions, including those of non-polynomial type, have been incorporated into the IF framework to capture features of neuronal dynamics. A reasonable approximation of a conductance-based model is afforded by the linear exponential IF (LEIF) model as developed by Fourcaud-Trocmé [312], an example trace of which is shown in Fig. 3.10. This uses a shifted, scaled exponential for f of the form

\[
f(v) = -g_L (v - v_L) + g_L \kappa\, e^{(v - v_\kappa)/\kappa}. \qquad (3.50)
\]

Each of the new parameters in the LEIF model has a biologically interpretable meaning: vL is the approximate resting membrane potential, vκ is the maximum voltage


Fig. 3.10 Sample trajectory of the LEIF model (3.50) with vL = −65, vr = −60, vκ = −50, vth = 10, κ = 10, gL = 10, and I = 50 + ξ (t), where ξ is an Ornstein–Uhlenbeck process with parameters τ = 0.1 and σ = 0.01 (see equation (A.19) in Appendix A for details regarding the Ornstein–Uhlenbeck process). For reference, the evolution of ξ is displayed in grey in the lower trace.

that can be reached without triggering a spike, κ sets the sharpness of the spike, and gL is the leak conductance. These parameters can all be estimated from data using the technique of dynamic I–V curves, and the resulting traces have been shown to account for more than 95% of the variability in spike timing when the model is fitted to experimental recordings [50]. In a similar vein to the QIF model, the LEIF model possesses two fixed points that coalesce in a saddle-node bifurcation, and hence supports oscillatory behaviour at arbitrarily low frequencies. It thus appears that the behaviour of real neurons can be captured to a reasonable degree by very simple phenomenological models. In fact, it is possible to formulate models for single neuron dynamics directly from observed response properties. One prominent example of a class of such models is the spike response model.
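A deterministic LEIF trajectory is straightforward to simulate. The sketch below uses the parameters of Fig. 3.10 but drops the noise and raises the drive slightly to I = 60 (an assumption made for this sketch: for these parameter values the rheobase of f(v) + I is exactly I = 50, so the noiseless model with I = 50 sits at the saddle-node and would stall):

```python
import numpy as np

# LEIF right-hand side (3.50); parameters as in Fig. 3.10, noise dropped
gL, vL, vk, kappa = 10.0, -65.0, -50.0, 10.0
v_r, v_th, I = -60.0, 10.0, 60.0     # I = 60 is an assumption, just above rheobase

def f(v):
    return -gL * (v - vL) + gL * kappa * np.exp((v - vk) / kappa)

dt, T = 1e-4, 20.0
v, spikes = v_r, []
for n in range(int(T / dt)):
    v += dt * (f(v) + I)
    if v >= v_th:                    # threshold-and-reset rule of the IF framework
        spikes.append(n * dt)
        v = v_r
print(len(spikes))
```

The exponential term produces the sharp upswing of each spike, while the passage past v ≈ vκ (the ghost of the saddle-node) dominates the interspike interval.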

3.4.4 Spike response models

Instead of defining the dynamics for neuronal behaviour via differential equations, the time-dependent voltage can be specified directly in the form of a filtered version of an input signal I(t):

\[
v(t) = \eta(t - t^*) + \int_0^{\infty} \kappa(t - t^*, s)\, I(t - s)\, \mathrm{d}s. \qquad (3.51)
\]

This formalism is known as the spike response model (SRM) [335, 337]. Just as for IF-type models, the SRM includes the notion of a threshold, vth , such that a firing


event occurs when v passes through vth from below. The time of the most recent of these firing events is denoted t*, so that the present value of v is dependent on the last firing time. The name 'spike response model' is derived from the fact that η represents the neuron's response to its own spikes, whereas κ represents the linear response to a short incoming pulse, which could either be from other neurons or applied as part of an experimental protocol. The threshold value is not fixed, but also depends on the last firing time; typically, it is higher immediately after the neuron fires and subsequently decays back to some resting value. In this way, refractoriness can be built into the model through the dynamic threshold. The kernel, η, essentially represents the spike shape and is dependent only on the previous spike time. Similarly, κ also depends on the previous spike time, since the effective membrane time constant is shorter just after a spike due to the opening of many ion channels and the corresponding increase in whole cell conductance. Both η and κ can be approximated from data or through simplifications of a conductance-based model by using short stimulation protocols for a variety of t*. For example, if this is done via a reduction of the Hodgkin–Huxley model, κ has a resonant behaviour in the form of a delayed oscillation. Moreover, comparing (3.34) with (3.51) shows that the LIF model is a special case of the SRM with κ(t, s) = Θ(t)Θ(t − s) exp(−s/τ). The freedom of fitting η and κ affords the SRM great flexibility in capturing variability in real data, and it has been shown to be capable of capturing a large fraction of spike times in neocortical pyramidal neurons to within ±2 ms [50, 479, 480, 515]. It should be noted, however, that despite its accuracy in predicting spike times, the kernel used in the SRM captures only linear responses, whereas real neurons are inherently nonlinear in nature.
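The stated LIF limit of the SRM can be checked directly: taking t* = 0, η(x) = vr e^{−x/τ} and κ(t, s) = Θ(t)Θ(t − s)e^{−s/τ}, the integral in (3.51) should reproduce the sub-threshold LIF voltage. A discretised sketch (the sinusoidal drive is an arbitrary choice, and the threshold is ignored for the comparison):

```python
import numpy as np

tau, v_r = 1.0, 0.0
Iext = lambda t: 1.3 + 0.5 * np.sin(2.0 * t)   # arbitrary drive for the check

# SRM evaluation of (3.51) at time t, with last spike at t* = 0:
# Theta(t - s) truncates the integral at s = t
t, ds = 2.0, 1e-4
s = np.arange(0.0, t, ds)
v_srm = v_r * np.exp(-t / tau) + np.sum(np.exp(-s / tau) * Iext(t - s)) * ds

# Direct Euler integration of the LIF equation vdot = -v/tau + I(t)
v, dt = v_r, 1e-4
for n in range(int(t / dt)):
    v += dt * (-v / tau + Iext(n * dt))
print(v, v_srm)
```

The two values coincide up to discretisation error, confirming that the SRM kernel formulation contains the LIF model as a special case.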
The SRM and the IF models presented thus far strike a balance between the ability to capture dynamical features of neuronal response and tractability. Their one-dimensional nature, however, means that they do not support certain types of dynamics, such as the bursting phenomena described at the end of Sec. 3.2. To describe these kinds of behaviour, additional variables must be added to the SRM and IF models. Common choices for this include making the threshold a dynamical variable [96, 161, 583, 843] (see Sec. 3.4.5), or introducing a recovery or adaptation variable (see Sec. 3.4.6).

3.4.5 Dynamic thresholds

In contrast to the IF models thus far presented, which have a fixed value for the threshold, the conductance-based models in Chap. 2 all possessed a notion of a 'soft' threshold, which may be regarded as being state-dependent. In this way, the history of firing times played a role in defining the next firing time, in contrast to scalar IF models, which are essentially renewal processes. This dependence on the history of firing events can be introduced into the IF models by making the threshold evolve over time, that is, by imbuing the threshold with dynamics. One way to achieve this is to assume that the threshold obeys linear dynamics, which leads to the planar LIF


model

\[
\frac{\mathrm{d}v}{\mathrm{d}t} = -\frac{v}{\tau} + I(t), \qquad \tau_{th} \frac{\mathrm{d}v_{th}}{\mathrm{d}t} = \theta - v_{th}. \qquad (3.52)
\]

A common use of such models is to incorporate refractoriness by applying a reset map vth → vth + k to the threshold following a spike. Since the threshold is increased, it is harder for the neuron to fire repetitive spikes over short periods. Over time, the threshold returns to its resting value θ at a rate determined by τth. In addition to refractoriness, the model exhibits spike-frequency adaptation under constant current injection, as the system possesses a stable limit cycle to which trajectories are attracted [161, 583]. An alternative way of introducing variable thresholds is to draw them from a random process (over time), rather than introducing dynamics explicitly. In such an approach, an autocorrelation function is specified and paths with this autocorrelation are sampled to describe the threshold, as done in [203]. Marrying these two ideas together, it is also possible to form a dynamical system for the threshold that is itself driven by a stochastic process. This can be done, for example, by changing the threshold dynamics in (3.52) to follow that of an Ornstein–Uhlenbeck process:

\[
\tau_{th}\, \mathrm{d}v_{th} = (\theta - v_{th})\, \mathrm{d}t + \sigma\, \mathrm{d}W(t), \qquad (3.53)
\]

where dW(t) are increments of a Wiener process (see Appendix A for more details on Wiener processes). The effect of a noisy threshold on neuronal firing can be illustrated by forcing an LIF neuron with threshold dynamics specified by (3.53) with an oscillatory input of the form I(t) = I_0 + a sin(ωt). Here, it is assumed that I_0 τ > vth, so that the neuron spikes even without the oscillatory component of the drive. Defining ω_0 to be the natural frequency of the neuron when a = 0 and setting the forcing frequency to be ω = 2πbω_0 allows the firing rate of the forced neuron to be studied relative to its natural firing rate. Examples of such responses can be seen in Fig. 3.11. The neuron here exhibits a variety of mode-locked patterns, which are a direct result of the dynamics of the threshold. The action of the threshold noise causes 'jitter' in the timing of the spikes, so that mode-locked states are represented by clouds of points in the map between interspike intervals. Studies using more general forms of threshold noise have shown that these clouds can be well matched to responses from stellate neurons in the ventral cochlear nucleus [203]. Neuronal response properties of these types will be covered in more depth in Chap. 5. In spite of the complications in simulating noisy systems that exhibit non-smooth dynamics, the stochastic firing times can be found with accuracy down to machine precision using sophisticated numerical routines [843]. Moreover, explicit expressions for the firing rate and its variability are available for the QIF model [123, 576].

Box 3.4: Poincaré maps

For a system ẋ = f(x), x ∈ R^d, define a (d − 1)-dimensional section, Σ, that is transverse to the vector field f, implying that trajectories pass through it (rather than being tangent to it). Consider a map that evolves a trajectory starting at an arbitrary


Fig. 3.11 Interspike interval (ISI) distributions for the LIF model with a dynamic threshold following an Ornstein–Uhlenbeck process and periodic forcing with I (t) = I0 + a sin(2π bω0 t) where ω0 is the frequency of the unforced, noise-free model. The dynamics obey (3.52) with the threshold dynamics replaced by (3.53). The top row shows the histogram of ISIs, whilst the bottom row shows the map of ISIs between successive firing events. Here, the neuron displays a variety of firing patterns, in which it is entrained to the oscillatory drive in different mode-locked states. The parameters are τ = 6, vth = 20, vr = 0, I0 = 3.5, a = 2, σ = 3, and τth = 10 with b = 1, b = 2, and b = 4 in the left, middle, and right columns, respectively.

point x0 ∈ Σ under the flow ϕ(t, x0 ) induced by f until it intersects Σ again at a point P(x0 ), as shown below. The map P : Σ → Σ is known as the Poincaré map, which is also referred to as the first return map.


More generally, a Poincaré map defines a relationship between consecutive entries in the sequence of crossing points, {x_n}_{n∈Z}, of trajectories (in the same direction) through Σ, so that x_{n+1} = P(x_n). A fixed point, x̄, of a Poincaré map is defined by the condition x̄ = P(x̄). Fixed points are stable if the eigenvalues of DP(x̄), that is, the linearisation of P evaluated at the fixed point, all lie within the unit disc. Fixed points can destabilise in one of three ways, as described in Box 3.5 on page 85. A tangent bifurcation, at which point two fixed points coalesce, occurs when one of the eigenvalues leaves the unit disc through +1. A period-doubling bifurcation, involving the creation of a period 2 orbit, occurs when an eigenvalue crosses −1. Finally, a Neimark–Sacker bifurcation, which generates a closed invariant curve of points around the fixed point, occurs when a pair of complex conjugate eigenvalues crosses the unit circle at e^{±iθ}, where θ ∉ {0, π}. Fixed points of P correspond to limit cycle solutions of the flow induced by f, and moreover, the stability of these fixed points under P matches that of the corresponding limit cycles under the flow. Note that the state space of P has one fewer dimension than the original system. Moreover, P is defined as a discrete-time evolution operator, whereas the vector field operates in continuous time. The time-of-flight between successive intersections of trajectories with Σ is not provided by the map, and in particular, must be found in order to define it. This observation means that, in general, Poincaré maps for nonlinear systems must be found numerically, since these times are not known a priori. More information about Poincaré maps for smooth systems can be found in [380, 836], whilst specific considerations for non-smooth systems are presented in [249].
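A first return map is easy to construct numerically. The sketch below uses the planar Hopf normal form ẋ = x − y − x(x² + y²), ẏ = x + y − y(x² + y²) (an illustrative system chosen for this sketch, not a model from the text), whose attracting limit cycle on the unit circle appears as a stable fixed point x̄ = 1 of the return map on the section {y = 0, x > 0}:

```python
# First return map to the section {y = 0, x > 0}; the system has a stable
# limit cycle of radius 1, so iterates of the map converge to x = 1
def P(x0, dt=1e-4):
    x, y, y_prev = x0, 0.0, 0.0
    while True:
        r2 = x * x + y * y
        dx = x - y - x * r2
        dy = x + y - y * r2
        x, y_prev, y = x + dt * dx, y, y + dt * dy
        # detect an upward crossing of y = 0 with x > 0 (one full revolution)
        if y_prev < 0.0 <= y and x > 0.0:
            return x

x = 0.5
for _ in range(3):
    x = P(x)           # iterate the Poincare map
print(x)               # approaches the fixed point corresponding to the cycle
```

The fixed point of P inherits the strong stability of the limit cycle (its multiplier is e^{−4π} for this system), so a single iterate already lands essentially on the cycle up to the Euler discretisation error.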

Box 3.5: Bifurcation of maps

Fixed points, x̄, of a map of the form

\[
x_{n+1} = P(x_n), \qquad x \in \mathbb{R}^d, \qquad (3.54)
\]

satisfy the condition x̄ = P(x̄). Stability is then examined by extracting the eigenvalues of the Jacobian matrix DP evaluated at x = x̄. If all of these eigenvalues lie inside the unit circle, x̄ is stable; otherwise, it is unstable. For general maps, three generic types of bifurcation can occur. When a single eigenvalue, λ, crosses the unit circle at +1, a tangent bifurcation occurs, during which two fixed points come together and annihilate one another. If λ crosses at −1, a period-doubling bifurcation occurs, in which a new orbit with twice the original period is generated. Finally, if a pair of complex conjugate eigenvalues crosses the unit circle away from the points −1 and +1, a Neimark–Sacker bifurcation occurs, giving rise to a closed invariant curve of the map (3.54).


The possible bifurcations for a one-dimensional map (i.e., with d = 1) are depicted below: the tangent bifurcation in the left panel and the period-doubling bifurcation in the right panel. In both cases, the grey and black lines depict graphs of the map before and at the bifurcation point, respectively. In the left panel, the white marker corresponds to an unstable fixed point, the black marker to a stable fixed point, and the grey marker illustrates where they come together at the tangent bifurcation. In the right panel, the white dotted line displays a stable period 2 orbit that is generated at the period-doubling bifurcation.

A full overview of bifurcations in maps can be found in [541].
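These bifurcations are easy to observe numerically. A minimal sketch using the logistic map x ↦ rx(1 − x) (a standard illustrative example, not a model from the text): its fixed point x̄ = 1 − 1/r has multiplier P′(x̄) = 2 − r, which crosses −1 at r = 3, where a stable period 2 orbit is born:

```python
# Attractor of the logistic map after discarding transients
def orbit(r, n_transient=2000, n_keep=4):
    x = 0.5
    for _ in range(n_transient):
        x = r * x * (1 - x)
    out = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        out.append(round(x, 6))
    return sorted(set(out))

print(orbit(2.8))   # one value: fixed point, multiplier 2 - 2.8 = -0.8 (stable)
print(orbit(3.2))   # two values: period 2 orbit after the period doubling at r = 3
```

Increasing r further produces the familiar cascade of repeated period doublings, each corresponding to an eigenvalue of the iterated map crossing −1.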

3.4.6 Planar integrate-and-fire models

Bursting behaviour, which is typically indicative of slow processes (e.g., those mediated by Ca²⁺ dynamics), is a neuronal firing pattern commonly seen in many brain areas. For example, bursting dynamics are prevalent in the pre-Bötzinger complex in the brainstem, which regulates breathing. It is hypothesised that bursting is a more reliable means of neuronal communication than tonic firing, as it can overcome information loss associated with missed spikes [580, 804] and may carry greater information content [741]. In general, bursting dynamics in smooth models require a state space with at least three dimensions. However, the reset condition allows non-smooth models to support bursting behaviour with only two state variables. This can be understood by noting that oscillations can be produced by a scalar non-smooth model, so it remains to append to such a system a slowly evolving recovery variable to provide a timescale over which bursting can take place. This recovery variable, here denoted w, can be taken to be the same as that for the FitzHugh–Nagumo model and hence will be treated as a linear process. This leads to the definition of the following family of planar IF models:

\[
\dot{v} = f(v) - w + I, \qquad \tau \dot{w} = \beta v - w. \qquad (3.55)
\]

Upon reaching threshold, the membrane potential is reset to vr, whilst w is adjusted according to w → w + k, where k ∈ R. Loosely, this reset process can be thought of as describing the influx of Ca²⁺ into the


cell, which typically occurs during an action potential due to the opening of voltage-gated calcium channels. It is also natural to consider a version of the above whereby the linear coupling to w (in the equation for v̇) is replaced by a quadratic term w², reminiscent of the mirrored FitzHugh–Nagumo model described in Sec. 3.2.1 (to better capture the dynamics of calcium conductances) [894]. The form of (3.55) is similar to the scalar IF models with dynamic thresholds defined by (3.52). The latter models are a way of naturally incorporating refractoriness in spiking dynamics, whilst systems of the form (3.55) essentially capture the effect of adaptive currents. These currents have similar effects on neuronal response to refractoriness, slowing down firing rates and causing spike-frequency adaptation. Given the similarity between dynamic threshold and planar IF models, it should come as no surprise that the two can be analysed in analogous ways, though it is worth noting that the mathematical treatment of (3.52) is often simpler than that of (3.55), since the equations for v and vth decouple. In terms of neuronal computation, however, the two model types are not equivalent. In particular, it can be shown that although both adaptation and dynamic thresholds reduce firing rates, adaptation does so in a subtractive manner, whereas the reduction in models with dynamic thresholds arises in a divisive manner [66]. The best known of the planar IF models was introduced independently by Izhikevich [458] and by Gröbler et al. [377]. This model, commonly referred to as the Izhikevich model, takes f(v) = 0.04v² + 5v + 140 and can capture a range of dynamic behaviour, including tonic and burst firing, as shown in Fig. 3.12. In spite of the insensitivity of the QIF model to the threshold value, owing to solutions blowing up in finite time, Touboul has shown that the dynamical behaviour of the Izhikevich model is in fact tightly coupled to the choice of threshold [876].
This is because the recovery variable can also blow up when v does. There exist other planar models that do not exhibit this behaviour, for example, the adaptive exponential IF model [117], which has been successfully fitted to recordings from pyramidal cells [479]. Other such models include the quartic model with f(v) = v⁴ + 2av, which can reproduce all of the behaviours of the Izhikevich model and additionally supports self-sustained sub-threshold oscillations [876]. Sufficient conditions to guarantee regular spiking in generic planar IF models are provided in [315]. In spite of the computational advantages of the planar IF models described thus far, their nonlinear nature means solutions may be unavailable in closed form. Another tractable one-dimensional IF model is obtained by taking f(v) = |v| [490], which shall from here on be referred to as the absolute integrate-and-fire (AIF) model. The AIF model may be thought of as loosely caricaturing the LEIF model through a linearisation of the latter's nonlinearity [658]. The piecewise linear nature of the AIF model means that solutions can be found in closed form. Substituting the AIF form of f into (3.55) creates a piecewise linear planar IF model that has been used to analyse rhythms in networks coupled via gap junctions [207]. Since the dynamics for the recovery variable are linear, the resulting system is still solvable analytically. A slightly more general piecewise linear model is obtained through the choice f(v) = vΘ(v) − svΘ(−v), where s > 0 and, as before, Θ is the Heaviside function. This piecewise linear integrate-and-fire (PWLIF) model exhibits all of the solution types of


Fig. 3.12 Phase planes and time series of different firing modes of the Izhikevich model with I = 10, β = 0.2, and vth = 30. In each main panel, the solid black curve is a representative trajectory, the dashed grey curves are the indicated nullclines, and the dotted grey lines show the location of the reset and threshold. The insets show the voltage traces associated with each of the solution types. The top-left panel shows tonic firing with parameters τ = 50, vr = −65, k = 8; the top-right panel presents another tonic spiking scenario with τ = 50, vr = −55, k = 4; the bottom-left panel depicts bursting behaviour with τ = 50, vr = −50, k = 2; whilst the bottom-right panel illustrates fast spiking with τ = 10, vr = −65, k = 2.

the quartic model, including sub-threshold oscillations. The exposition in Sec. 3.4.7 presents an analysis of the PWLIF model; note that the geometric arguments invoked are the same as those that would be used in the analysis of any planar IF model.
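The firing modes of Fig. 3.12 can be reproduced with a few lines of simulation. A forward-Euler sketch of the Izhikevich model in the tonic-firing regime of the top-left panel (the step size and simulation window are arbitrary choices for this sketch):

```python
# Izhikevich model, (3.55) with f(v) = 0.04 v^2 + 5v + 140; tonic-firing
# parameters from Fig. 3.12: I = 10, beta = 0.2, tau = 50, v_r = -65, k = 8
I, beta, tau = 10.0, 0.2, 50.0
v_r, v_th, k = -65.0, 30.0, 8.0

v, w = v_r, beta * v_r
dt, T, spikes = 0.05, 400.0, []
for n in range(int(T / dt)):
    dv = 0.04 * v * v + 5.0 * v + 140.0 - w + I
    dw = (beta * v - w) / tau
    v, w = v + dt * dv, w + dt * dw
    if v >= v_th:                    # threshold: reset v and increment w
        spikes.append(n * dt)
        v, w = v_r, w + k
print(len(spikes))
```

Swapping in the other parameter sets listed in the caption of Fig. 3.12 switches the output between the tonic, bursting, and fast-spiking modes.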


3.4.7 Analysis of a piecewise linear integrate-and-fire model

In order to characterise where in parameter space different types of solution exist, it is useful to consider the different types of bifurcation that can occur. As seen in Fig. 3.13, the v-nullcline has a characteristic 'V' shape, whilst the w-nullcline is a straight line with slope β. Under different parameter regimes, the PWLIF model is seen to support one, two, or no fixed points. There is a slight subtlety here, in that the nullclines may intersect where v > vth, generating a virtual fixed point. From here on, the branch of the v-nullcline with v < 0 (v > 0) is referred to as the left (right) v-branch. Since the system is piecewise linear (PWL), the linearisation around a fixed point (v̄, w̄) is equivalent to the homogeneous part of the vector field (i.e., the vector field when I = 0) evaluated there. Thus, the eigenvalues of fixed points, where they exist, may be evaluated as

\[
2\lambda_\pm = \begin{cases} 1 - \tau^{-1} \pm \sqrt{(1 - \tau^{-1})^2 - 4(\beta - 1)/\tau}, & \bar{v} > 0, \\ -s - \tau^{-1} \pm \sqrt{(s + \tau^{-1})^2 - 4(\beta + s)/\tau}, & \bar{v} < 0. \end{cases} \qquad (3.56)
\]

Note that these eigenvalues do not depend on the exact location of the fixed point, only on which side of the line v = 0 it lies. The exact nature of the fixed points is determined by the sign of the expression under the square root in (3.56). Since β must be less than 1 to have two fixed points, it may be concluded that the fixed point on the right v-branch is a saddle. The sub-threshold dynamics are described by a continuous but non-differentiable system, so that the Jacobian matrix is not defined along the border separating the linear subsystems. This border, given by Σs = {(v, w) ∈ R² | v = 0}, is referred to as a switching manifold, as crossing it leads to a discontinuous change in the Jacobian. Non-smooth bifurcations, which are comprehensively reviewed in [249], can occur as fixed points or limit cycles touch the switching manifold under parameter variation. The dynamics for the recovery variable imply that w̄ = βv̄ at a fixed point. Consider the case where I < 0 and β > 1. In this parameter regime, there exists a solitary fixed point on the left v-branch, given by v̄ = I(β + s)⁻¹, which is stable. At I = 0, this fixed point collides with the switching manifold at a border collision bifurcation, whilst, for I > 0, the fixed point v̄ = I(β − 1)⁻¹ is now on the right v-branch. Since each of the subsystems is linear, the Jacobian matrix is constant in each. This means that across the switching manifold, there is a jump in the values of the eigenvalues. If τ > 1, the fixed point for I > 0 is unstable, and the real part of the eigenvalues changes sign from negative to positive as the fixed point crosses the switching manifold. As discussed in Box 2.4 on page 25, this is a hallmark of a Hopf bifurcation. For the PWLIF model, the manner in which the bifurcation occurs is discontinuous, reflecting the non-smooth nature of the system.
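Formula (3.56) can be sanity-checked against a direct numerical eigenvalue computation for the two constant Jacobians (here with the bursting-regime parameters of Fig. 3.13, an arbitrary but representative choice):

```python
import numpy as np

tau, beta, s = 4.0, 1.1, 0.35           # parameters from Fig. 3.13

A_R = np.array([[1.0, -1.0], [beta / tau, -1.0 / tau]])   # v > 0 subsystem
A_L = np.array([[-s, -1.0], [beta / tau, -1.0 / tau]])    # v < 0 subsystem

# The '+' eigenvalue of each branch of (3.56); complex() lets sqrt handle
# a negative discriminant (a focus) as well as a positive one (a node)
lam_R = (1 - 1/tau + np.sqrt(complex((1 - 1/tau)**2 - 4*(beta - 1)/tau))) / 2
lam_L = (-s - 1/tau + np.sqrt(complex((s + 1/tau)**2 - 4*(beta + s)/tau))) / 2

print(np.linalg.eigvals(A_R), lam_R)
print(np.linalg.eigvals(A_L), lam_L)
```

For these values the right-branch eigenvalues are real (an unstable node for τ > 1) whilst the left-branch pair is complex with negative real part (a stable focus), consistent with the discussion above.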
For a range of values of I > 0, the PWLIF model supports sub-threshold orbits, that is, periodic orbits that do not reach threshold. To find these, it is first useful to divide phase space into the regions:


Fig. 3.13 Phase plane of the PWLIF model in a bursting parameter regime. The v-nullcline has a characteristic V shape, whilst the w-nullcline is a straight line; both of these nullclines are plotted as dashed grey lines. The trajectory, plotted as a solid black curve, is one of bursting type, the voltage trace of which can be seen at the top of the figure. For this solution, the active portion of the burst terminates when the trajectory is reset above the v-nullcline, so that v˙ < 0. The parameter values are I = 4, τ = 4, β = 1.1, s = 0.35, k = 0.4, vr = 20, and vth = 60.

\[
S_L = \{(v, w) \in \mathbb{R}^2 \mid v < 0\}, \qquad S_R = \{(v, w) \in \mathbb{R}^2 \mid 0 < v < v_{th}\}, \qquad (3.57)
\]

and to recast the dynamics in matrix form so that

\[
\dot{z} = \begin{cases} A_L z + b_L \equiv F_L(z), & z \in S_L, \\ A_R z + b_R \equiv F_R(z), & z \in S_R, \end{cases} \qquad (3.58)
\]

where z = (v, w) and

\[
A_L = \begin{pmatrix} -s & -1 \\ \beta/\tau & -1/\tau \end{pmatrix}, \qquad A_R = \begin{pmatrix} 1 & -1 \\ \beta/\tau & -1/\tau \end{pmatrix}, \qquad b_L = b_R = \begin{pmatrix} I \\ 0 \end{pmatrix}. \qquad (3.59)
\]

3.4 Integrate-and-fire neurons

91

Note that the vector field is not defined on the switching manifold, Σs. In Sec. 3.7, a method for defining a vector field on this set that is consistent with those in S_L and S_R will be discussed. For now, it is sufficient to note that, for a given value of w, the limits of the vector fields F_L and F_R as v approaches 0 from the left and right, respectively, agree with one another, and hence there is a natural extension of (3.58) into Σs. Box 3.3 on page 76 then shows that the solution to the equation ż = Az + b from an initial condition z(0) can be written using matrix exponentials in the form

\[
z(t) = G(t)z(0) + L(t)b, \qquad (3.60)
\]

where G and L are given in Box 3.3 on page 76. Hereafter, G μ , L μ will represent the above expressions with the respective matrix A = Aμ with μ ∈ {L , R}. Assuming that trajectories remain bounded away from the threshold v = vth , they will be continuous. Since the dynamics on both sides of the switching manifold are known, construction of model trajectories can be completed by enforcing continuity at the switching manifold itself. Noting that v = 0 at the switching manifold, a periodic solution may be found by picking initial data z(0) = z 0 = (0, w0 ) for some unknown w0 and then demanding that w be Δ-periodic so that w(0) = w(Δ) = w0 . Denote by Δ L and Δ R the times-of-flight over SL ,R , that is, the time taken for an orbit entering SL ,R to cross the switching manifold again. The period of the full orbit is then given by Δ = Δ L + Δ R . Thus, specification of a sub-threshold orbit requires a simultaneous solution of the equations: v(Δ L ) = 0, v(Δ L + Δ R ) = 0, w(Δ L + Δ R ) = w0 .

(3.61)
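The conditions (3.61) are transcendental in the times-of-flight but are straightforward to handle numerically. The sketch below (assuming NumPy/SciPy; parameter values as in the left panel of Fig. 3.14) propagates the flow with matrix exponentials and, rather than solving the three conditions simultaneously, exploits the stability of the orbit in this regime by iterating the return map from one upward crossing of v = 0 to the next:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq

# Sub-threshold PWLIF orbit via matrix exponentials and root finding.
# Parameter values follow the left panel of Fig. 3.14, with I = 0.5.
beta, tau, s, I = 1.2, 1.11, 0.35, 0.5
A_L = np.array([[-s, -1.0], [beta / tau, -1.0 / tau]])
A_R = np.array([[1.0, -1.0], [beta / tau, -1.0 / tau]])
b = np.array([I, 0.0])

def flow(A, z0, t):
    """z(t) = e^{At} z0 + A^{-1}(e^{At} - I) b, valid for invertible A (cf. Box 3.3)."""
    E = expm(A * t)
    return E @ z0 + np.linalg.solve(A, (E - np.eye(2)) @ b)

def time_of_flight(A, z0, t_max=30.0, n=300):
    """First t > 0 at which the v-component returns to the switching manifold v = 0."""
    ts = np.linspace(1e-6, t_max, n)
    vs = np.array([flow(A, z0, t)[0] for t in ts])
    i = np.where(np.sign(vs[:-1]) != np.sign(vs[1:]))[0][0]
    return brentq(lambda t: flow(A, z0, t)[0], ts[i], ts[i + 1])

def half_return(w0):
    """Send w0 on {v = 0, entering S_R} through S_R and then S_L back to {v = 0}."""
    z0 = np.array([0.0, w0])
    dR = time_of_flight(A_R, z0)
    z1 = flow(A_R, z0, dR)
    z1[0] = 0.0                      # project back onto the switching manifold
    dL = time_of_flight(A_L, z1)
    return flow(A_L, z1, dL)[1], dR + dL

# The orbit is stable here, so iterating the return map solves (3.61).
w0, period = 0.0, 0.0
for _ in range(15):
    w0, period = half_return(w0)
```

The converged pair (w0, period) specifies the sub-threshold orbit; the same flow routine can be reused for the matrices G_μ that appear in the stability analysis of Sec. 3.5.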

The times-of-flight are solutions to transcendental equations and must typically be found using numerical root finding. There are some interesting consequences of the piecewise linear nature of the system. Whilst virtual fixed points are not true fixed points of the system, they still play a role in organising its behaviour. For I > 0, a virtual fixed point for the system ż = A_L z + b_L is given by v = I(β + s)⁻¹ ∈ S_R. This is clearly not a true fixed point of the full system, since the dynamics in S_R are given by a different equation. Suppose, however, that the vector field of S_L were extended into S_R. The virtual fixed point created by this extension would be stable, as seen earlier, and, in particular, would be of focus type. By contrast, the true fixed point in S_R of the vector field ż = A_R z + b_R is an unstable focus. The periodic orbit, as plotted in the left panel of Fig. 3.14, may be thought of as being composed of two distinct spiral segments: one in S_L that is spiralling in towards the virtual fixed point, and one in S_R that is spiralling away from the true fixed point. A consequence of this is that the amplitude of the orbit varies linearly with I, as opposed to the √I scaling expected for a smooth Hopf bifurcation, whilst the period of the orbit is independent of I. Typical sub-threshold periodic orbits, together with their amplitude and frequency, are plotted in the right panel of Fig. 3.14. For further examples of the analysis of planar IF models, including considerations of the criticality of the discontinuous Hopf bifurcation, see Prob. 3.5 and Prob. 3.6.

As I increases from zero in the PWLIF model considered in Sec. 3.4.7, the sub-threshold oscillations continue to grow in amplitude. The presence of a hard threshold


Fig. 3.14 Typical periodic sub-threshold solutions to the PWLIF model with β = 1.2, τ = 1.11, s = 0.35, and k = 0.4. Left: Periodic orbits may be thought of as a composition of two spiral segments: one spiralling away from the unstable focus of S R (solid black curve), the other spiralling inwards towards the virtual fixed point of SL (solid grey curve). Here, I = 0.5. The nullclines are depicted as grey lines. The extension of the v-nullcline of subsystem FL into S R is plotted as a dashed line to highlight the existence of a virtual fixed point. Also plotted is the switching manifold at v = 0. Right: Sub-threshold orbits plotted for different values of I with I = 1, 2, 3, 4. The amplitude of the orbits scales linearly with I , whilst the period of the solutions is independent of I .

prevents them from growing indefinitely, and eventually the orbit will come into contact with the threshold. This occurs at a grazing bifurcation, as discussed in Box 3.6 on page 93, which involves an intersection of a periodic orbit and either a switching manifold or, in this case, the impact manifold defined by Σ = {(v, w) ∈ R² | v = v_th} (the name arises from the fact that the mathematical machinery for discontinuous dynamical systems is commonly used to describe impacting mechanical oscillators). The grazing bifurcation destroys the sub-threshold periodic orbit, and, due to the absence of any sub-threshold attracting sets, trajectories are forced up to threshold, resulting in spiking solutions. Mathematically, it is important to make a distinction between spiking solutions that cross the switching manifold and those that do not, since this determines the number of continuity conditions that must be enforced along the orbit. For simplicity, the following analysis focusses on those orbits that are wholly contained in S_R. After reaching threshold, trajectories are mapped from Σ onto the reset manifold given by Σr = {(v, w) ∈ R² | v = v_r}. The value of w on Σr for a tonic spiking orbit must differ from that on Σ by an amount k for periodicity to hold. Using the same terminology as in the previous section, a spiking solution with period Δ can thus be found by simultaneously solving the equations

$$v(\Delta) = v_{\text{th}}, \qquad w(\Delta) = w_0 - k, \tag{3.62}$$

where v(0) = v_r and w(0) = w_0. Once again, the time-of-flight Δ must typically be found numerically.
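A tonic spiking solution can be computed by treating (3.62) as a scalar root-finding problem in w_0, with the time-of-flight obtained from event detection. A minimal sketch (assuming NumPy/SciPy; parameter values as in Fig. 3.17; the integrator simulates the full piecewise system, so orbits need not be wholly contained in S_R):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# PWLIF model with f(v) = v*Theta(v) - s*v*Theta(-v); parameters as in Fig. 3.17.
I, k, vr, vth = 4.0, 4.0, 20.0, 60.0
beta, s, tau = 0.9, 0.35, 0.9

def rhs(t, z):
    v, w = z
    fv = v if v > 0 else -s * v
    return [fv - w + I, (beta * v - w) / tau]

def hit(t, z):                       # threshold-crossing event, v = v_th
    return z[0] - vth
hit.terminal, hit.direction = True, 1

def w_at_threshold(w0):
    """w and the time-of-flight Delta when a trajectory from (v_r, w0) reaches v_th."""
    sol = solve_ivp(rhs, [0.0, 500.0], [vr, w0], events=hit, rtol=1e-9, atol=1e-9)
    return sol.y_events[0][0][1], sol.t_events[0][0]

# Periodicity (3.62) demands w(Delta) = w0 - k; solve the scalar residual for w0.
def residual(w0):
    return w_at_threshold(w0)[0] + k - w0

w0 = brentq(residual, -100.0, 150.0)
w_hit, Delta = w_at_threshold(w0)
```

The bracket passed to `brentq` is generous: for very negative w_0 trajectories fire almost immediately (so the residual is positive), whilst w on the threshold is bounded above by v_th + I (so the residual is eventually negative).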


Box 3.6: Grazing bifurcations

Consider a piecewise smooth vector field given by

$$\dot x = f(x), \quad x \in \Omega = S_1 \cup S_2 \subset \mathbb{R}^n, \qquad f(x) = \begin{cases} f_1(x), & x \in S_1, \\ f_2(x), & x \in S_2, \end{cases} \tag{3.63}$$

and suppose that the boundary between S_1 and S_2 is given by an (n − 1)-dimensional manifold Σ. The figure below shows a generic example of such a system for n = 2. Consider trajectories starting in S_1. Some of these trajectories will remain bounded away from Σ, such as the grey trajectory z_0. Others, such as the black trajectory z_1, will hit Σ at some finite time, where they may be subject to a reset map g : Σ → Ω, so that z → g(z) at the switching time. In the example below, it is assumed that the codomain of g is restricted to S_1. Observe that z_1 intersects Σ tangentially. That is, the trajectory (and thus the vector field f_1) is tangent to the impact manifold Σ at the point of impact. The impact point z* is known as a grazing point, with the associated trajectory z_1 referred to as a grazing trajectory. A grazing periodic orbit is a periodic trajectory that contains a grazing point.

To make this more precise, define an (at least once differentiable) indicator function h : Ω → ℝ such that

$$h(x) < 0, \quad x \in S_1, \tag{3.64}$$
$$h(x) = 0, \quad x \in \Sigma, \tag{3.65}$$
$$h(x) > 0, \quad x \in S_2. \tag{3.66}$$

A grazing point, z*, is a point along a trajectory that satisfies h(z*) = 0 and dh(x)/dt |_{x=z*} = ∇h(z*) · f_1(z*) = 0. Further details about grazing points, grazing orbits, and grazing bifurcations can be found in [249].
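The indicator-function conditions can be checked mechanically; a small sketch (assuming NumPy), with h, f_1, and the parameter values chosen for the PWLIF impact manifold purely as an illustration:

```python
import numpy as np

# Indicator-function test for grazing at the PWLIF impact manifold
# Sigma = {v = v_th}: take h(z) = v - v_th and the S_R field
# f1(z) = (v - w + I, (beta*v - w)/tau). Parameter values are illustrative.
I, vth, beta, tau = 4.0, 60.0, 1.1, 4.0

def h(z):
    return z[0] - vth

def f1(z):
    v, w = z
    return np.array([v - w + I, (beta * v - w) / tau])

grad_h = np.array([1.0, 0.0])

def is_grazing(z, tol=1e-10):
    """Grazing point: h(z) = 0 together with the tangency dh/dt = grad_h . f1(z) = 0."""
    return abs(h(z)) < tol and abs(grad_h @ f1(z)) < tol

# On Sigma the tangency condition v - w + I = 0 picks out w = v_th + I.
z_star = np.array([vth, vth + I])
```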


Box 3.7: Floquet theory

Suppose that a system of smooth ordinary differential equations of the form

$$\dot x = f(x), \quad x \in \mathbb{R}^n, \tag{3.67}$$

has a Δ-periodic solution z(t). Consider a perturbed solution z̃(t) = z(t) + δz(t) with |δz| ≪ 1. By linearising around the limit cycle, it can be shown that the perturbation vector δz obeys the equation

$$\frac{\mathrm{d}\delta z}{\mathrm{d}t} = A(t)\,\delta z, \qquad A(t) = Df(z(t)). \tag{3.68}$$

Note that this holds for any perturbation vector and that, since z(t) is periodic, so too is A(t). Corresponding to (3.68) is a fundamental matrix solution, Φ(t) ∈ ℝ^{n×n}, whose columns are linearly independent solutions of (3.68). This matrix solves the variational equation

$$\dot\Phi = A(t)\Phi, \qquad \Phi(0) = I_n. \tag{3.69}$$

All solutions of the linearised system can then be written in the form δz(t) = Φ(t)c, where c ∈ ℝⁿ is a vector of constants; moreover, δz(t + t_0) = Φ(t)δz(t_0). The matrix Φ is thus a state transition matrix that describes how perturbations evolve over time. Floquet theory states that the fundamental matrix solution can be written in the Floquet normal form:

$$\Phi(t) = P(t)e^{Bt}, \tag{3.70}$$

where P is a Δ-periodic function and B is a matrix of constants. Since P is periodic and Φ(0) = I_n, the initial condition for P satisfies P(0) = P(Δ) = I_n. Thus, the evolution of perturbations over one period of the underlying orbit can be expressed in terms of the monodromy matrix Φ(Δ) = e^{BΔ}. The behaviour of these solutions is governed by the eigenvalues ρ_i, i = 1, …, n, of the monodromy matrix. These eigenvalues are called the characteristic (Floquet) multipliers. For autonomous systems, one of these multipliers will be equal to unity, and will have a corresponding eigenvector that points in the direction of the flow (since perturbations along the orbit neither grow nor decay). If the remaining n − 1 multipliers are inside the unit circle, the periodic orbit z(t) is linearly stable; otherwise, it is linearly unstable. Finally, it is useful to note that the product of the Floquet multipliers satisfies

$$\rho_1 \times \rho_2 \times \cdots \times \rho_n = \exp\left(\int_0^{\Delta} \mathrm{Tr}\,A(s)\,\mathrm{d}s\right). \tag{3.71}$$

See [167] for further details regarding Floquet theory.
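The construction in Box 3.7 can be verified on a system whose limit cycle and multipliers are known in closed form. A sketch (assuming NumPy/SciPy) for the planar normal form ẋ = x(1 − x² − y²) − y, ẏ = y(1 − x² − y²) + x, whose periodic orbit is the unit circle with Δ = 2π and whose nontrivial multiplier is e^(−2Δ):

```python
import numpy as np
from scipy.integrate import solve_ivp

# The normal form xdot = x(1 - x^2 - y^2) - y, ydot = y(1 - x^2 - y^2) + x has
# the unit circle z(t) = (cos t, sin t) as a Delta = 2*pi periodic orbit.
def A_of_t(t):                               # A(t) = Df(z(t)) evaluated on the cycle
    x, y = np.cos(t), np.sin(t)
    return np.array([[1 - 3 * x**2 - y**2, -2 * x * y - 1],
                     [-2 * x * y + 1, 1 - x**2 - 3 * y**2]])

def variational(t, phi):                     # (3.69): Phi' = A(t) Phi, Phi(0) = I
    return (A_of_t(t) @ phi.reshape(2, 2)).ravel()

Delta = 2 * np.pi
sol = solve_ivp(variational, [0.0, Delta], np.eye(2).ravel(),
                rtol=1e-11, atol=1e-12)
monodromy = sol.y[:, -1].reshape(2, 2)       # Phi(Delta) = e^{B Delta}
rho = np.linalg.eigvals(monodromy)           # multipliers: {1, exp(-2*Delta)}
```

One multiplier is unity (the direction of the flow), and the product of the multipliers equals exp of the integrated trace, Tr A(t) = −2 on the cycle, in line with (3.71).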


3.5 Non-smooth Floquet theory

A question that has as yet gone unanswered is: how can the stability of the computed periodic solutions be inferred? For smooth systems with a periodic orbit, Floquet theory, as described in Box 3.7 on page 94, can be used to infer the stability of such orbits. For non-smooth systems, smooth Floquet theory must be amended with machinery to handle discontinuities and jumps in the vector field. The use of saltation matrices, as presented in Box 3.8 on page 95, facilitates the analysis of perturbations across switching and impact manifolds, and can be used, in conjunction with the corresponding smooth Floquet theory, to compute orbital stability.

First, consider the sub-threshold orbits of the PWLIF system. These must necessarily cross the switching manifold Σs. Across this manifold, the vector field changes continuously, that is, F_L(z) → F_R(z) as z_1 ≡ v → 0, and there is no reset process applied as trajectories cross. The corresponding saltation matrix for this case, as specified in Box 3.8 on page 95, is the identity matrix. Thus, stability of these orbits can be computed using the variational equation, exactly as in the case of smooth systems. The piecewise linear nature of the model means that the monodromy matrix can be written as Φ(Δ) = G_R(Δ_R)G_L(Δ_L). For a given Floquet multiplier ρ, its Floquet exponent is given by its natural logarithm scaled by the period of the orbit: σ = ln(ρ)/Δ. Since periodic orbits of autonomous systems always have one Floquet multiplier equal to unity, its corresponding Floquet exponent will be zero. This fact, taken together with (3.71), implies that (σ_1, σ_2) = (0, σ), where

$$\sigma = \frac{1}{\Delta} \sum_{\mu \in \{L, R\}} \Delta_\mu \, \mathrm{Tr}(A_\mu). \tag{3.72}$$

Substituting the values of A_{L,R} defined by (3.59) into (3.72) yields σ = 1 − τ⁻¹ − Δ_L(1 + s)/Δ. Periodic orbits are stable if σ < 0 and unstable if σ > 0.

Box 3.8: Saltation matrices

Consider a system of the form (3.63) and suppose, without loss of generality, that a trajectory z(t) starts in S_1 and hits Σ transversally at time T, whereupon it is subject to a reset map g : Σ → S_2. The following calculations explain how to evolve perturbations, z̃(t) = z(t) + δz(t), with |δz| ≪ 1, across Σ. The hitting time, T̃, of z̃, defined by the condition h(z̃(T̃⁻)) = 0, will differ from T. This difference must be accounted for when considering the evolution of the perturbation. An expression for δT = T̃ − T can be found by perturbing the indicator function (see Box 3.6 on page 93) around the hitting point of the perturbed trajectory:


$$
\begin{aligned}
h(\tilde z(\tilde T^-)) &= h(z(T^- + \delta T) + \delta z(T^- + \delta T)) \\
&\simeq h(z(T^-) + \dot z(T^-)\,\delta T + \delta z(T^-)) \\
&\simeq h(z(T^-)) + \nabla_z h(z(T^-)) \cdot \left[ \dot z(T^-)\,\delta T + \delta z(T^-) \right] \\
&= h(z(T^-)) + \nabla_z h(z(T^-)) \cdot \left[ f_1(z(T^-))\,\delta T + \delta z(T^-) \right],
\end{aligned} \tag{3.73}
$$

where the equality in the last line follows from the fact that the trajectory hits Σ from S_1. Using the fact that h(z̃(T̃⁻)) = 0 = h(z(T⁻)), the difference in hitting times can be expressed as

$$\delta T = -\frac{\nabla_z h(z(T^-)) \cdot \delta z(T^-)}{\nabla_z h(z(T^-)) \cdot f_1(z(T^-))}. \tag{3.74}$$

Note that

$$\tilde z(\tilde T^-) = z(\tilde T^-) + \delta z(\tilde T^-) = z(T^- + \delta T) + \delta z(T^- + \delta T) \simeq z(T^-) + \dot z(T^-)\,\delta T + \delta z(T^-) = z(T^-) + f_1(z(T^-))\,\delta T + \delta z(T^-). \tag{3.75}$$

After recalling that z(T⁺) ∈ S_2 following reset, a similar calculation shows that

$$\tilde z(T^+) = \tilde z(\tilde T^+ - \delta T) \simeq g(\tilde z(\tilde T^-)) - f_2(z(T^+))\,\delta T. \tag{3.76}$$

Substituting (3.75) into (3.76) and Taylor expanding yields

$$\tilde z(T^+) \simeq g(z(T^-)) + \nabla_z g(z(T^-))\left[ f_1(z(T^-))\,\delta T + \delta z(T^-) \right] - f_2(z(T^+))\,\delta T. \tag{3.77}$$

Recalling that g(z(T⁻)) = z(T⁺), and using (3.74), shows that δz(T⁺) = K(z(T)) δz(T⁻), where

$$K(z(T)) = \nabla_z g(z(T^-)) + \frac{\left[ f_2(z(T^+)) - \nabla_z g(z(T^-)) f_1(z(T^-)) \right] \nabla_z h(z(T^-))^{\mathsf T}}{\nabla_z h(z(T^-)) \cdot f_1(z(T^-))}. \tag{3.78}$$

The matrix K is known as the saltation matrix and specifies how perturbations evolve through a switching or impact manifold. Overall, (3.68) can be used to evolve perturbations up to the hitting time, upon which the perturbation vector is multiplied on the left by the saltation matrix. Thereafter, the variational equation can once again be used to evolve the perturbation. In the case where no resets occur, g(z) = z and so ∇_z g is the identity matrix. If, additionally, the vector field is continuous across Σ, then f_1(z(T⁻)) = f_2(z(T⁺)), and so K(z(T)) reduces to the identity matrix, as expected. Further details about the construction and application of saltation matrices can be found in [249] and [562].
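Equation (3.78) translates directly into a small routine; the sketch below (assuming NumPy, with illustrative field values) also confirms the identity-reduction property noted at the end of the box:

```python
import numpy as np

# Generic saltation matrix of (3.78):
# K = Dg + (f2(z+) - Dg f1(z-)) grad_h^T / (grad_h . f1(z-)).
def saltation(Dg, f1_minus, f2_plus, grad_h):
    return Dg + np.outer(f2_plus - Dg @ f1_minus, grad_h) / (grad_h @ f1_minus)

# Sanity check from Box 3.8: with no reset (g = id, so Dg = I) and a vector
# field that is continuous across Sigma (f1(z-) = f2(z+)), K is the identity.
f = np.array([1.5, -0.3])                  # illustrative field value at the crossing
K_id = saltation(np.eye(2), f, f, np.array([1.0, 0.0]))
```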


Computing the stability of spiking orbits requires consideration of trajectories that interact with the impact manifold. The assumption that spiking orbits remain in S_R implies that v_r > 0. The reset map is defined by g(z) = (v_r, w + k), meaning that

$$\nabla_z g(z) = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. \tag{3.79}$$

It may seem at first that, since trajectories do not cross the switching manifold, the saltation matrix should be the identity, as in the case for sub-threshold orbits. However, it must be remembered that the crossing point and its image under the reset map are distinct from one another, and so the numerator of the fraction in (3.78) does not vanish. The saltation matrix in this case, recalling that z = (v, w), is given by

$$K(z) = \begin{pmatrix} k_1(w) & 0 \\ k_2(w) & 1 \end{pmatrix}, \tag{3.80}$$

where

$$k_1(w) = \frac{v_r + I - w - k}{v_{\text{th}} + I - w}, \qquad k_2(w) = \frac{\beta(v_r - v_{\text{th}}) - k}{\tau(v_{\text{th}} + I - w)}. \tag{3.81}$$

Fig. 3.15 shows a one-dimensional example illustrating what happens when a trajectory and a nearby perturbed trajectory reach an impact manifold. Since the oscillatory solution has period Δ and initial condition z_0 = (v_r, w_0), its monodromy matrix is given by Φ(Δ) = K(w(Δ))G_R(Δ), where w(Δ) = w_0 − k. Using the definition of the saltation matrix, the nontrivial Floquet exponent for a solution of the PWLIF model comprising m crossings of the switching (v = 0) and impact (v = v_th) manifolds may be evaluated as [202]

$$\sigma = \frac{1}{\Delta} \sum_{i=1}^{m} \left[ \Delta_i \, \mathrm{Tr}(A_{\chi_i}) + \ln\left| \frac{\dot v(t_i^+)}{\dot v(t_i^-)} \right| \right], \tag{3.82}$$

where {χ_i}_{i=1}^{m} is a sequence of labels indicating in which region of phase space the system is to be solved, ordered such that z_{χ_{i+1}}(0) = z_{χ_i}(Δ_i). The event times, which may correspond to either switching or spiking events, are given by t_i = Σ_{j=1}^{i} Δ_j. Compared to (3.72), equation (3.82) contains an additional term that captures the impact of the discontinuous reset mechanism on perturbations. Note that, as the trajectory crosses the switching manifold, where the vector field is continuous and no reset occurs, v̇(t⁺) = v̇(t⁻), and this additional term vanishes, consistent with (3.72). In addition, note that, for spiking events, k_1(w) = v̇(t⁺)/v̇(t⁻) when w is evaluated at the intersection of the trajectory and the threshold v = v_th. It can be shown (see Prob. 3.4) that the sole Floquet multiplier for periodic spiking orbits in one-dimensional IF models subject to constant drive has unit value, as expected.
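The entries (3.81) can be cross-checked against the generic construction (3.78); a sketch assuming NumPy, with an illustrative value for w at the threshold crossing:

```python
import numpy as np

# Check the PWLIF saltation entries (3.81) against the generic formula (3.78).
I, k, vr, vth = 4.0, 4.0, 20.0, 60.0
beta, s, tau = 0.9, 0.35, 0.9
w = 30.0                                   # w at the threshold crossing (illustrative)

grad_h = np.array([1.0, 0.0])              # h(z) = v - v_th
Dg = np.array([[0.0, 0.0], [0.0, 1.0]])    # reset g(v, w) = (v_r, w + k), cf. (3.79)
f_minus = np.array([vth - w + I, (beta * vth - w) / tau])           # S_R field on Sigma
f_plus = np.array([vr - (w + k) + I, (beta * vr - (w + k)) / tau])  # field after reset

K_generic = Dg + np.outer(f_plus - Dg @ f_minus, grad_h) / (grad_h @ f_minus)

k1 = (vr + I - w - k) / (vth + I - w)                   # (3.81)
k2 = (beta * (vr - vth) - k) / (tau * (vth + I - w))
K_explicit = np.array([[k1, 0.0], [k2, 1.0]])
```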


Fig. 3.15 Evolution of perturbed and unperturbed trajectories, denoted z̃ and z, respectively, as they collide with a discontinuity manifold Σ across which the vector field is non-smooth. The difference between the two trajectories prior to the hitting time is given by δz⁻ = δz(T⁻), whilst after both trajectories have passed through the manifold or undergone reset, it is given by δz⁺ = δz(T⁺). The saltation matrix conveys how δz⁻ is mapped to δz⁺ as the perturbed and unperturbed trajectories pass through the manifold. The left panel shows an example in which the trajectories are continuous, i.e., g(z) = z. This corresponds to a switching scenario. Conversely, the right panel shows an example where trajectories are reset to locations away from Σ and hence are discontinuous. This is an example of an impacting scenario.

3.5.1 Poincaré maps

A common alternative approach to studying periodic orbits in dynamical systems is through the use of Poincaré maps, as presented in Box 3.4 on page 83. The key aspect in defining a Poincaré map is choosing an appropriate section with which to work. The ideal section is one through which all trajectories will pass, though reductions can be made, dependent on the problem at hand. Certain choices of section, for example, those aligned with impact manifolds, may lead to a smooth Poincaré map even if the underlying vector field is non-smooth. Such a scenario can be shown to occur for the PWLIF model, given by (3.55) with f(v) = vΘ(v) − svΘ(−v). Since spiking solutions are those that cross the impact manifold, Σ = {(v, w) ∈ R² | v = v_th}, it is sensible to choose this as the section. This section is somewhat non-standard, since trajectories starting from it will immediately be reset, and their evolution will instead start from the reset manifold Σr = {(v, w) ∈ R² | v = v_r}. The full map can thus be decomposed as P = P_sub ∘ g, where P_sub : Σr → Σ is the map from the reset manifold to the impact manifold under the continuous, sub-threshold flow of the system, and, by construction, g : Σ → Σr is the reset map. Since v is fixed on the sections Σ and Σr, P is a function of w only. In particular, P_sub is defined by evolving a trajectory of the full system from an initial point z_0 = (v_r, w_0). After composing this with the reset map g, the full Poincaré map is defined as

$$P(w) = [\varphi_\Delta(z_0)]_2, \qquad z_0 = (v_r, w + k), \tag{3.83}$$


Fig. 3.16 Construction of a Poincaré map for the PWLIF model with Σ = {(v, w) | v = vth } and Σr = {(v, w) | v = vr }. The figure shows the map decomposition for two initial points z i ∈ Σ, i = 0, 1. Each point is first mapped to Σr by the action of g and trajectories are then evolved from g(z i ) until they intersect Σ. Note that points on Σr and Σ are specified by their w coordinate alone so that the Poincaré map is specified by P(wi ) = Psub (g(z i )) where z i = (vth , wi ).

where φ_t = φ(t, ·) is the flow operator over a time t induced by the sub-threshold system dynamics as given by (3.60) (here, the outer subscript denotes that only the w component is considered), and Δ satisfies

$$[\varphi_\Delta(z_0)]_1 = v_{\text{th}}, \tag{3.84}$$

and must typically be found by root finding, except in special cases, for example, when the original system possesses a conserved quantity [963] (and see Prob. 3.8). This latter possibility can be exploited to remove the dependency of Poincaré maps on times-of-flight in piecewise linear systems [151]. A plot demonstrating the construction of the Poincaré map for the PWLIF model is shown in Fig. 3.16. The map P may be either continuous or discontinuous, depending on the value of β. In the present case, P is seen to be continuous. For discussion of discontinuous maps, or so-called ‘maps with gaps’, see [52, 470] and Sec. 5.2. The derivative of P may be computed by applying the saltation matrix at the initial reset and then solving the variational equation, as presented in (3.68), for the sub-threshold part of the orbit (these approaches are valid for examining the evolution of perturbations for general solutions, not just periodic ones).

An example Poincaré map and its first derivative for the PWLIF model are plotted in Fig. 3.17. The map may be split into three parts: the first is a monotonically


Fig. 3.17 The Poincaré map, P, for the PWLIF model for parameters I = 4, k = 4, v_r = 20, v_th = 60, β = 0.9, s = 0.35, and τ = 0.9 (left), together with its derivative J(w) = P′(w) (right). The map possesses a unique fixed point w, which is stable, since |J(w)| < 1. This fixed point corresponds to a spiking orbit in the full PWLIF system.

increasing part with approximately unit slope; the second displays a rapid downswing, with a large negative gradient, and is followed by a relatively flat third segment. These parts of the map can be referenced back to the phase plane of the original dynamical system. The first part of the map corresponds to initial conditions well below the right v-branch; trajectories starting here are forced up to threshold quickly. Increases in w are essentially matched by an equivalent change in P. The flat segment corresponds to initial conditions well above the right v-branch. Solutions here first move leftwards into S_L and track down the left v-branch until they fall close to the knee of the v-nullcline. The contraction of trajectories onto the left v-branch means that they will all reach threshold at similar values of w. As τ is increased, this contraction becomes more prominent, and in the limit as τ → ∞, this section of P will have zero gradient. The middle portion of the map connects these two regions and indicates potential sensitivity for this range of w. The map in Fig. 3.17 has a single fixed point, w, with |P′(w)| < 1, indicating that this solution is stable (see Box 3.4 on page 83).

Bifurcations of P have equivalent bifurcations in the original system, and so the Poincaré map can be used to identify dynamical transitions. One such bifurcation results in transitions from tonic spiking to burst firing. In general, as I increases, the shape of P remains qualitatively similar, but the fixed point moves to the left, closer to the middle portion of the map. As the fixed point continues to move to the left, P′(w) passes through −1, indicating that a period-doubling bifurcation occurs, whereupon the fixed point is no longer stable. Period-doubling bifurcations can result in new stable solutions with twice the period of the original periodic orbit. Inspection of the second return map P⁽²⁾ = P ∘ P and its derivative J⁽²⁾, as depicted in Fig. 3.18, shows that this is indeed the case here. In terms of the original system, there is now a solution with two spikes per oscillation, corresponding to a doublet, which may be thought of as a simple bursting solution.
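The map P and its derivative can be built numerically along the lines of (3.83)–(3.84); a sketch (assuming NumPy/SciPy; Fig. 3.17 parameter values), using a central finite difference for J in place of the saltation-plus-variational computation described above:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

I, k, vr, vth = 4.0, 4.0, 20.0, 60.0     # Fig. 3.17 parameter set
beta, s, tau = 0.9, 0.35, 0.9

def rhs(t, z):
    v, w = z
    fv = v if v > 0 else -s * v
    return [fv - w + I, (beta * v - w) / tau]

def hit(t, z):
    return z[0] - vth
hit.terminal, hit.direction = True, 1

def P(w):
    """Poincare map (3.83): reset w on Sigma to (v_r, w + k), then flow back to Sigma."""
    sol = solve_ivp(rhs, [0.0, 500.0], [vr, w + k], events=hit, rtol=1e-9, atol=1e-9)
    return sol.y_events[0][0][1]

w_fix = brentq(lambda w: P(w) - w, -100.0, 150.0)       # fixed point of P
h = 1e-4
J_fix = (P(w_fix + h) - P(w_fix - h)) / (2 * h)         # J = P'(w), central difference
```

A value |J_fix| < 1 confirms the stability of the spiking orbit read off from Fig. 3.17.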


Fig. 3.18 The Poincaré map for the PWLIF model following a period-doubling bifurcation, with parameters I = 10, τ = 1.11, β = 1.2, k = 0.04, s = 0.35, v_r = 20 and v_th = 60. The top row shows the value of the map P(w), whilst the bottom row depicts J⁽ⁿ⁾ = dP⁽ⁿ⁾/dw. The first return map, shown in the left column, has a single unstable fixed point (depicted in white), whilst the second return map has three fixed points. One of these fixed points is unstable, whilst the other two are stable (shown in black) and correspond to a doublet solution.

In general, a bursting solution with n spikes can be studied by examining the nth return map P⁽ⁿ⁾. As such, complex bursting solutions can be probed by looking at a one-dimensional map, highlighting the usefulness of Poincaré maps in the study of planar IF models. For other examples of how to construct Poincaré maps for IF models, see Prob. 3.7 and Prob. 3.8. Poincaré maps may also be used to identify regions of parameter space that lead to chaotic solutions. For example, Fig. 3.19 shows an example of a snap-back repeller [605], which is an unstable fixed point w of a map, P, whose neighbourhood contains another point, w_M, that is mapped back onto w after M iterations and satisfies

$$|P'(w)| > 1, \qquad w_M \neq w, \qquad P'(w_k) \neq 0, \tag{3.85}$$

where w_k = P^k(w_M) for k = 1, …, M ∈ ℕ. Snap-back repellers were shown in [605] to be a sufficient condition for chaos in this class of model. Intuitively, trajectories are repelled away from w into another, contracting region of phase space. However,


Fig. 3.19 A snap-back repeller in the PWLIF model, with parameters I = 4.5, β = 0.9, τ = 2.5, s = 0.35, k = 0.4, v_r = 20 and v_th = 60. The top-left panel shows the Poincaré map, with its derivative, J(w) = P′(w), displayed in the bottom-left panel. Taken together, these plots show that the map possesses a snap-back repeller as defined in [605] (see text for further details). In these panels, the white marker displays the location of the unstable fixed point of the map, whilst the grey marker represents the point w_M, where M = 2. The right column shows the corresponding chaotic orbit in phase space (top) and as a voltage trace (bottom).

the action of the map forces trajectories back towards the repelling neighbourhood of w. The repeated action of this process means that small perturbations in initial conditions grow and thus the system is chaotic. Identifying under what situations snap-back repellers exist can aid in understanding the variability observed in neural firing patterns.

3.6 Lyapunov exponents

In general, the presence of chaotic solutions can be identified through the computation of Lyapunov exponents. Briefly, Lyapunov exponents measure the rate of growth of small perturbations between trajectories, as described in Box 3.9 on page 104. A strictly positive Lyapunov exponent (3.89) is an indication that the system under


Fig. 3.20 Period doubling route to chaos in the LIF model given by (3.30) with f (v) = −v/τ and C = 1. The plots are generated for a constant threshold vth = 1 and a time-dependent reset of the form vr (t) = K sin(2π t). Left: Inter-spike intervals as K is varied. Right: Lyapunov exponent as specified by (3.86). A period-doubling cascade from the 2:1 mode-locked state begins around K ≈ 0.55 and leads to chaos around K ≈ 0.7. Here, τ = 1 and I = 1.2
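The period-doubling route of Fig. 3.20 can be reproduced from the exact firing-time map of the LIF model, with the exponent assembled from the per-spike jump factors of the reset mechanism (cf. (3.86) below). A sketch assuming NumPy, with the caption's parameter values and illustrative spike counts:

```python
import numpy as np

# LIF model vdot = -v/tau + I with v_th = 1 and time-dependent reset
# v_r(t) = K*sin(2*pi*t); parameter values as in Fig. 3.20 (tau = 1, I = 1.2).
tau, I, vth = 1.0, 1.2, 1.0

def lyapunov(K, n_spikes=4000, n_skip=500):
    """Evaluate the exponent over n_spikes firing events, after discarding transients."""
    T, T0, S = 0.0, 0.0, 0.0
    for j in range(n_skip + n_spikes):
        v0 = K * np.sin(2 * np.pi * T)                          # reset value at time T
        # Exact next firing time of v(t) = I*tau + (v0 - I*tau) e^{-(t-T)/tau}:
        T += tau * np.log((I * tau - v0) / (I * tau - vth))
        if j == n_skip - 1:
            T0, S = T, 0.0
        if j >= n_skip:
            num = -K * np.sin(2 * np.pi * T) / tau + I - 2 * np.pi * K * np.cos(2 * np.pi * T)
            den = -vth / tau + I                                # threshold is constant here
            S += np.log(abs(num / den))
    return -1.0 / tau + S / (T - T0)

lam_locked = lyapunov(0.3)     # 2:1 mode-locked regime, exponent negative
lam_large = lyapunov(0.75)     # beyond the cascade, where chaos is reported
```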

investigation exhibits chaotic behaviour. As for Floquet exponents, the evaluation of Lyapunov exponents for non-smooth systems involves evolving the perturbation vector representing the separation between nearby trajectories over switching and impacting manifolds. As before, this is achieved using the saltation operator defined in Box 3.8 on page 95. As an example, consider the LIF model, given by (3.30) with f(v) = −v/τ and C = 1, with potentially time-varying threshold and reset voltages. This one-dimensional system possesses only one Lyapunov exponent, which is given by [187]

$$\lambda = -\frac{1}{\tau} + \lim_{m \to \infty} \frac{1}{T_m - T_0} \sum_{j=1}^{m} \ln\left| \frac{f(v_r(T_j^+)) + I(T_j^+) - \dot v_r(T_j^+)}{f(v_{\text{th}}(T_j^-)) + I(T_j^-) - \dot v_{\text{th}}(T_j^-)} \right|. \tag{3.86}$$

There are two contributions to λ. The first is given by the smooth flow between successive firings. This contribution is always negative, reflecting the contracting nature of the sub-threshold LIF dynamics. The second term in (3.86) arises due to the discontinuous nature of the reset mechanism. Its sign depends on the dynamics of the time-varying threshold and reset across all firing events. Fig. 3.20 shows an example taking v_th(t) = 1 and v_r(t) = K sin(2πt). As the value of K is increased, the system goes through a period-doubling sequence from the 2:1 mode-locked state. This period-doubling cascade starts around K ≈ 0.55 and gives rise to a chaotic solution around K ≈ 0.7.

As a second example, consider the PWLIF model given by (3.55) with f(v) = vΘ(v) − svΘ(−v) and constant drive. The maximal Lyapunov exponent for this system is shown in Fig. 3.21, showing that the system supports bands of chaos in two distinct orientations. The vertical sweeping arc is close to the transition in which bursting solutions are lost and are replaced by tonic spiking solutions. The horizontal bands indicate transitions between bursting solutions with different numbers of


spikes. Planar IF models are known to exhibit sensitivity close to both of these transition types [859, 860]. In particular, snap-back repellers, as discussed in Sec. 3.5, are expected to exist in regions of parameter space close to period adding bifurcations [859]. More generally, these transitions can be understood as a slow passage through a folded saddle [240].

Box 3.9: Lyapunov exponents

Consider a system of the form ẋ = f(x), where x ∈ D ⊂ ℝ^d. Let x_0 ∈ D be an arbitrary point and x(t) = φ_t(x_0) be the trajectory of the system from this point. Suppose that the initial condition is perturbed by an amount δx_0, where ‖δx_0‖ ≪ 1, and let x̃(t) = φ_t(x_0 + δx_0) be a perturbed trajectory starting from this point. Lyapunov exponents measure the exponential rate of divergence of the separation vector δx(t) = x̃(t) − x(t). Since δx_0 is small, this rate can be approximated, to first order, through the linearised system δẋ = Df(x(t))δx [596]. In general, the system will possess d Lyapunov exponents, representing rates of separation in different linear combinations of phase space variables. These exponents are typically ordered from largest to smallest.

Define M(t) to be the solution to

$$\dot M = Df(x(t))M, \qquad M(0) = I_d. \tag{3.87}$$

The d Lyapunov exponents, λ_i, i = 1, …, d, are then given by the eigenvalues of the matrix

$$\Lambda = \limsup_{t \to \infty} \frac{\ln\left( M(t)M^{\mathsf T}(t) \right)}{2t}, \tag{3.88}$$

where the lim sup is used to account for fluctuations in the evaluation of the exponent over finite t. Oseledets' multiplicative ergodic theorem implies that Lyapunov exponents are independent of the specific initial condition, x_0, used to generate x(t) [685].
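For a linear system ẋ = Ax, the matrix in (3.87) is simply M(t) = e^{At}, and the eigenvalues of (3.88) recover the real parts of the eigenvalues of A, which gives a convenient test case. A sketch (assuming NumPy/SciPy); the matrix is illustrative and chosen to be normal, so the finite-t evaluation is already exact:

```python
import numpy as np
from scipy.linalg import expm, logm

# For a linear system xdot = A x, the matrix in (3.87) is M(t) = e^{At}, and the
# eigenvalues of (3.88) recover the real parts of the eigenvalues of A.
A = np.array([[-1.0, 2.0], [-2.0, -1.0]])   # normal matrix, eigenvalues -1 +/- 2i
t = 5.0
M = expm(A * t)
Lam = logm(M @ M.T).real / (2 * t)          # finite-t evaluation of (3.88)
exponents = np.linalg.eigvalsh((Lam + Lam.T) / 2)
```

Both Lyapunov exponents equal −1, the common real part of the eigenvalues of A, confirming the contracting (spiral) dynamics.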


Fig. 3.21 Maximal Lyapunov exponents of the PWLIF model given by (3.55) with f (v) = vΘ(v) − svΘ(−v). Here, the model is seen to exhibit chaotic dynamics in windows of parameter space close to transitions between bursts with different numbers of spikes and between bursts and tonic spiking. The parameter values are k = 0.4, vth = 60, vr = 20, β = 0.8, and s = 0.35. Also shown are representative voltage traces from the indicated regions of parameter space.

Lyapunov exponents are often used to identify the presence of deterministic chaos. A system is often deemed to exhibit deterministic chaos if the maximal Lyapunov exponent is strictly positive. Conversely, if the maximal Lyapunov exponent is strictly negative, the system typically settles to a stable fixed point, and if the maximal Lyapunov exponent vanishes, then the system will exhibit periodic dynamics. As the system evolves, the perturbation vector δx(t) will align along the direction of maximal separation. As such, the maximal Lyapunov exponent can be computed as

$$\lambda_1 = \lim_{t \to \infty} \lim_{\|\delta x_0\| \to 0} \frac{1}{t} \ln\frac{\|\delta x(t)\|}{\|\delta x_0\|}. \tag{3.89}$$

To find the full spectrum of Lyapunov exponents, it is typically required to (periodically or continuously) orthonormalise the separation vectors to avoid them aligning along the same axis. Details of how to do this via the Gram–Schmidt procedure, or, equivalently, QR decomposition methods, may be found in [67, 806, 913, 933].


3.7 McKean models

The notion of constructing piecewise caricatures of nonlinear models is not restricted to IF-type models. Any given model can be approximated by a piecewise linear system and thus solved using matrix exponentials. The resulting linear system can be shown to approximate the nonlinear system in most parameter regimes, although care must be taken to respect any non-smooth phenomena that arise as a result of this caricature. The hallmark of smooth excitable systems is an N-shaped nullcline, which can be linearised in a number of ways. Two of the simplest descriptions are due to McKean [620] and were intended as a reduction of the FitzHugh–Nagumo model, which, as discussed in Sec. 3.2, is itself intended to capture the core dynamical features of the Hodgkin–Huxley model. Both of the McKean models can be written in the form

\[
\dot{v} = f(v) - w, \qquad \tau \dot{w} = \beta v - \gamma w. \tag{3.90}
\]

The first variant, which is the primary focus of this section, takes f(v) = −λv + μΘ(v − a), where a is a threshold and, using the terminology defined earlier, marks the location of the switching manifold, Σ = {(v, w) ∈ ℝ² | v = a}. In this section, it is assumed that the evolution of v and w takes place over a common timescale, so that τ = 1. The McKean model has two planar domains, which can be labelled S_L = {(v, w) ∈ ℝ² | v < a} and S_R = {(v, w) ∈ ℝ² | v > a}, and which are separated by Σ. In Sec. 3.4.7, it was shown that the PWLIF model supports both sub-threshold oscillations and spiking orbits. The sub-threshold orbits and their first derivative are continuous, and so these orbits can be analysed using the same techniques as for smooth systems. The spiking orbits are not continuous, and so the reset map in conjunction with the saltation matrix is required to study them. In the McKean model, the orbits themselves are continuous, but their first derivative is not: there is a jump in its value across the switching manifold. In order to characterise solutions in this model, a combination of the ideas presented in this chapter must be used. In order to write solutions of the McKean model in terms of matrix exponentials, its dynamics can be recast as

\[
\dot{z} = \begin{cases} F_L(z) \equiv A z, & z \in S_L, \\ F_R(z) \equiv A z + c, & z \in S_R, \end{cases} \tag{3.91}
\]

where

\[
A = \begin{pmatrix} -\lambda & -1 \\ \beta & -\gamma \end{pmatrix}, \qquad c = \begin{pmatrix} \mu \\ 0 \end{pmatrix}. \tag{3.92}
\]

The dynamics on the switching manifold Σ are not yet defined, and there is some freedom in how to extend the vector field into this manifold. The widely accepted way to do this is to consider a set-valued function


\[
\dot{z} \in F(z) = \begin{cases} F_L(z), & z \in S_L, \\ \mathrm{co}\{F_L(z), F_R(z)\}, & z \in \Sigma, \\ F_R(z), & z \in S_R, \end{cases} \tag{3.93}
\]

where co(S) is the convex hull of S, that is, the smallest closed convex set containing S. In the present case,

\[
\mathrm{co}\{F_L(z), F_R(z)\} = \left\{ F_S(z) \in \mathbb{R}^2 \mid F_S(z) = (1 - \kappa) F_L(z) + \kappa F_R(z), \ \kappa \in [0, 1] \right\}. \tag{3.94}
\]
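As a concrete check of (3.94), the sketch below forms the convex combination on Σ and verifies that choosing κ = (λa + w)/μ, the value derived in the text for the McKean model, removes the v-component of the sliding vector field. The parameter values are those of Fig. 3.22 and are illustrative.

```python
# Parameter values of Fig. 3.22 (illustrative): lam, mu, a, beta, gamma
lam, mu, a, beta, gamma = 1.0, 3.0, 0.3, 2.0, 0.0

def F_L(v, w):
    """Vector field in S_L, eq. (3.91) with A, c from (3.92)."""
    return (-lam * v - w, beta * v - gamma * w)

def F_R(v, w):
    """Vector field in S_R."""
    return (-lam * v + mu - w, beta * v - gamma * w)

def kappa(w):
    """Filippov weight on Sigma, chosen so that the convex combination
    (1 - kappa) F_L + kappa F_R has vanishing v-component."""
    return (lam * a + w) / mu

def sliding_field(w):
    """Filippov (sliding) vector field at the point (a, w) on Sigma."""
    k = kappa(w)
    fl, fr = F_L(a, w), F_R(a, w)
    return tuple((1.0 - k) * l + k * r for l, r in zip(fl, fr))
```

Points on Σ with κ ∈ [0, 1] belong to the sliding segment; outside this range trajectories cross Σ transversally.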

The extension of the discontinuous system (3.91) into the differential inclusion (3.93) is known as the Filippov convex method [304]. For this system, the function κ is chosen to ensure that v̇ = 0 on Σ, giving κ = (λa + w)/μ. For γ = 0, the w-nullcline is a vertical line at v = a and, for a > 0, there exists a fixed point at the origin with eigenvalues (−λ ± √(λ² − 4β))/2. Hence, for β, λ > 0, the fixed point is stable. For λ² − 4β > 0, it is globally attracting, whilst for λ² − 4β < 0, it is of focus type and periodic behaviour may be possible in a similar vein to the sub-threshold orbits in the PWLIF model displayed in Fig. 3.14. As a trajectory spirals in towards the origin, it may hit Σ. In contrast to the examples in Sec. 3.4.7, in which trajectories crossed through the switching manifold or jumped to a different location after hitting an impact manifold, trajectories in the McKean model may slide along Σ, so that orbits can contain segments lying within the switching manifold. Periodic orbits that cross Σ, i.e., that do not slide, may be constructed and their stability examined in the same way as in Sec. 3.4.7. Applying this approach shows that the non-zero Floquet exponent is given by [202]

\[
\sigma = -\lambda + \frac{1}{\Delta} \ln\!\left[\frac{-\lambda a - w(\Delta_R)}{-\lambda a + \mu - w(\Delta_R)} \, \frac{-\lambda a + \mu - w(\Delta)}{-\lambda a - w(\Delta)}\right], \tag{3.95}
\]

where w(Δ_R) is the hitting point of the orbit on Σ from S_R and w(Δ) is the hitting point of the orbit from S_L, as shown in Fig. 3.22, and, as in Sec. 3.4.7, Δ = Δ_L + Δ_R is the period of the oscillatory solution. Interestingly, stable fixed points and stable limit-cycles can co-exist, separated by an unstable sliding orbit. This unstable orbit is the union of an unstable sliding trajectory on Σ together with a trajectory in S_L. One of the issues faced by these kinds of differential inclusion is that solutions are not unique. Given a point on Σ, it is impossible to uniquely identify a trajectory leading into Σ, meaning that the α-limit of such a point is set-valued.
This issue can be resolved by introducing a thin boundary layer around the switching manifold, and then by using matched asymptotic expansions to connect solutions as they pass through Σ [666]. These approaches are often used during numerical simulation to circumvent the non-uniqueness problem. By considering (3.91) in backward time, that is, making the substitution t → −t, the unstable limit-cycle can be constructed. The signs of v˙ and w˙ are now reversed and the sliding motion on Σ becomes stable. On Σ, with time reversed, w˙ = −β a and a trajectory starting on this manifold will thus slide down Σ until it hits the point where κ = 0. After this, the trajectory will exit Σ and enter SL where it will be governed by the flow under FL . Since the origin is an unstable focus in the time-reversed system,


Fig. 3.22 Bistability in the McKean model (3.90) between a stable fixed point and an attracting limit-cycle with λ = 1, μ = 3, a = 0.3, β = 2 and γ = 0. The dotted grey line represents the unstable limit cycle, which possesses a sliding segment and acts as a separatrix between the stable fixed point, depicted as a black marker, and the stable limit cycle, which is shown as a solid black curve. Also plotted are the nullclines for the system as indicated, the switching manifold as a dotted vertical line, the intersection points of the stable limit cycle with the switching manifold as white markers, and the point marking the start of the sliding segment of the unstable periodic orbit as a grey marker.

the trajectory will then spiral outwards until it hits Σ once more. Specifically, the trajectory will hit Σ at the point (v*, w*) = (a, w(Δ_L)), where Δ_L > 0 satisfies v(Δ_L) = a with (v(0), w(0)) = (a, −λa). If the trajectory in S_L intersects Σ again below the v-nullcline in S_R, it will once again slide down the switching manifold and, repeating the previous argument, the unstable orbit will have been constructed. If the hitting point is above the v-nullcline, no sliding will occur and there will be no unstable periodic orbit. Thus, the existence of an unstable orbit requires that the trajectory hits Σ below the point (a, −λa + μ). The time-of-flight on the sliding segment, Δ_S, is obtained by integrating the equation for ẇ on v = a, leading to Δ_S = (λa + w*)/(βa). Thus, the period of the unstable orbit is given by Δ = Δ_L + Δ_S. It remains to demonstrate that the constructed orbit is unstable. If a periodic orbit is stable in the time-reversed system, then it is unstable in the original (forward-time) system. Since the underlying dynamics in the McKean model are non-smooth, saltation matrices must be used in the computation of the orbit's Floquet multipliers ρ_{1,2}. Trajectories remain continuous as they hit the switching manifold, and so the reset map is g(z) = z, with ∇_z g(z) being the identity. On Σ, the choice of κ enforces that v̇ = 0, leading to ẇ = −βa in backward time. Supposing that a trajectory hits Σ at time T, these two observations mean that the numerator in (3.78)


is evaluated as

\[
F_S(z(T^+)) - \nabla_z g(z(T^-))\, F_L(z(T^-)) = (-\lambda a - w, 0)^T. \tag{3.96}
\]

As in Sec. 3.5, the Jacobian of the indicator function h is given by ∇_z h(z) = (1, 0), so that ∇_z h(z(T⁻)) · F_L(z(T⁻)) = λa + w. Overall, the saltation matrix for the entry to the sliding mode from S_L is given by

\[
K = I_2 - \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. \tag{3.97}
\]

This is a singular matrix and implies that one of the Floquet multipliers vanishes. Since the McKean model is autonomous and the orbit is periodic, the other Floquet multiplier has a unit value. Periodic orbits for which all nontrivial Floquet multipliers vanish are superstable in the sense that any infinitesimal perturbation that is transverse to the flow along the orbit decays entirely over one period. The complete decay of perturbations to the periodic orbit in finite time is a direct consequence of the sliding motion along Σ. To see this, consider a trajectory that is close to the sliding periodic orbit. By assumption of closeness, this trajectory will, at some time, hit Σ and slide down it. Thus, there exists some portion of Σ that is common to both solutions. This implies that general differences between the solutions away from Σ are mapped to differences solely in their w components when the trajectories slide along Σ, since v is fixed here. The flow along Σ occurs only in the w direction, and so perturbations here are necessarily in the direction of the flow and not transverse to it. Thus, any perturbation transverse to the flow is lost as soon as trajectories hit Σ. The above arguments mean that it can be assumed that ρ₁ = 1 and ρ₂ = 0. The Floquet multipliers of the periodic orbit in forward time are the reciprocals of those in backward time. Hence, the nontrivial multiplier of the orbit in forward time is 1/ρ₂ → ∞, indicating that the orbit is ‘infinitely’ unstable.
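The computation in (3.96)–(3.97) can be verified numerically. The sketch below assumes the standard saltation formula referred to in the text, K = ∇_z g + (F⁺ − ∇_z g F⁻) ∇_z h / (∇_z h · F⁻), with the identity reset map, evaluated in backward time at an arbitrary point of the sliding segment; the parameter values are those of Fig. 3.22.

```python
# Parameters of Fig. 3.22; w is an arbitrary point on the sliding segment.
lam, a, beta = 1.0, 0.3, 2.0
w = 1.0

# Backward-time vector fields at z = (a, w) on Sigma
F_minus = (lam * a + w, -beta * a)   # time-reversed F_L just before the hit
F_plus = (0.0, -beta * a)            # time-reversed sliding field on Sigma
grad_h = (1.0, 0.0)                  # gradient of the indicator h(v, w) = v - a

denom = grad_h[0] * F_minus[0] + grad_h[1] * F_minus[1]    # = lam a + w
diff = (F_plus[0] - F_minus[0], F_plus[1] - F_minus[1])    # numerator, eq. (3.96)

# K = I_2 + diff grad_h^T / denom (reset map is the identity, so grad g = I_2)
K = [[1.0 + diff[0] * grad_h[0] / denom, diff[0] * grad_h[1] / denom],
     [diff[1] * grad_h[0] / denom, 1.0 + diff[1] * grad_h[1] / denom]]
```

The result is the singular matrix of (3.97), independent of the value of w chosen on the sliding segment.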

3.7.1 A recipe for duck

The notion of a threshold in the FitzHugh–Nagumo model in Sec. 3.2 arises when the system is perturbed from the excitable steady state. For small and sustained current injections, the system exhibits small amplitude oscillations, whilst, for larger stimulation, large excursion orbits corresponding to action potentials are observed. The FitzHugh–Nagumo model supports two Hopf bifurcations under variation of I. The present discussion concerns dynamics close to the left-most of these two, so that the fixed point is close to the left knee of the v-nullcline. For I close to the critical value I_HB, the periodic orbits are small in amplitude. As I increases away from the bifurcation point, the amplitude of the orbit grows. If the timescales of the v and w variables are sufficiently well separated, there is a rapid transition from these small amplitude oscillations to much larger amplitude ones near a critical value of I, which occurs over an exponentially small window of parameter space.


This transition is known as a canard phenomenon due to its dependence on a particular type of solution termed a canard [69]. They are sometimes referred to as ‘false bifurcations’, since there is a qualitative change in oscillatory behaviour, but no associated topological change in the system [702]. A key component required for canards is a separation of timescales. In the singular limit as ε → 0, the dynamics of the FitzHugh–Nagumo model on the slow timescale occurs only on the critical (slow) manifold, which in this case is the v-nullcline. Transitions between the left and right branches of this manifold are instantaneous on this slow timescale. The critical manifold has one other part, namely, the middle branch. Dynamics on this branch are not observed in the singular limit as it is a repelling manifold and essentially acts as the threshold between trajectories which move towards the left branch and those that move towards the right branch. Along the critical manifold, the knees of the v-nullcline are fold points that separate attracting and repelling manifolds. As the timescale separation is relaxed, the critical manifold persists and trajectories can now follow along the repelling parts of this manifold to give rise to canards, as discussed in Box 3.10 on page 111. Canards are interesting solution types since trajectories are not typically expected to stay close to repelling branches for extended durations but, due to the separation of timescales, this is possible for systems close to the singular limit. For the FitzHugh–Nagumo model, a canard solution must therefore stay close to the middle branch of the v-nullcline as the timescale separation moves away from the singular limit. Fig. 3.23 displays solutions near to the canard explosion for this model. With some artistic licence, one might be able to see the duck-like shape from which the name ‘canard’ is derived. 
The connection between such solutions and their animal counterparts is made more obvious in Box 3.10 on page 111. As well as their relation to threshold responses, canards also have relevance to bursting solutions, which are a class of mixed-mode oscillations. Here, the bursting behaviour is described by canards of folded nodes in a way analogous to that for the FitzHugh–Nagumo model. In this case, bursting trajectories initially start as small amplitude oscillations and slowly diverge towards a large amplitude relaxation oscillation [237, 239, 241, 924]. During the relaxation oscillation, the trajectories move back towards the small amplitude oscillation state, and hence complete the burst. This parallels the earlier discussion of bursting in fast–slow systems in Sec. 3.2.1. A nice property of the analysis of bursting in this framework is that the number of small amplitude oscillations, corresponding to spikes, can be calculated explicitly through knowledge of the eigenvalues of the folded node [119]. Canard transitions of torus type may also play an important role in the transition between bursting and tonic spiking solutions [242]. Since the FitzHugh–Nagumo model is planar, it cannot support bursting solutions, and so the discussion below focusses only on canard explosions from small amplitude to large amplitude oscillations.


Fig. 3.23 The canard explosion phenomenon in the FitzHugh–Nagumo model (3.1) with parameters a = 0, β = 4 and ε = 0.02 under variation of I. The left panel shows the bifurcation diagram, in which a branch of stable fixed points (solid black curve) is seen to destabilise via a Hopf bifurcation at I_HB, giving rise to a branch of unstable fixed points (black dashed curve) and a branch of stable periodic orbits (solid grey). At I_C ≈ 0.014, the canard phenomenon occurs, in which small amplitude oscillations grow into large amplitude oscillations over a small window in I. The middle panel displays the limit cycle solution just prior to the canard phenomenon, whilst the right panel depicts the limit cycle solution just after it. Also plotted are the v- and w-nullclines as dashed (v̇ = 0) and dotted (ẇ = 0) grey lines.

Box 3.10: Canards

Consider a smooth system with a large timescale separation

\[
\varepsilon \dot{x} = f(x, y, \varepsilon), \qquad \dot{y} = g(x, y, \varepsilon), \tag{3.98}
\]

where ε ≪ 1. The critical manifold of (3.98) is defined as S = {(x, y) | f(x, y, 0) = 0}. Generically, this manifold can be partitioned into attracting, repelling, and non-hyperbolic components, which may be labelled S_a, S_r and S_n, respectively, so that S = S_a ∪ S_n ∪ S_r. A canard solution is a periodic orbit which follows an attracting sub-manifold S_a, passes near to a bifurcation point p ∈ S_n and then follows a repelling sub-manifold S_r for an extended period of time. Canards occur at parameter values at which the system transitions between small and large amplitude orbits, leading to a trajectory shape resembling a duck's body and head as shown below.


Fenichel theory [299] implies the persistence of the critical manifolds (and their associated hyperbolicity) as ε is increased from zero. A maximal canard is one which contains orbit segments along the intersection S_{a,ε} ∩ S_{r,ε} near p ∈ S_n, where S_{a,ε} and S_{r,ε} are the perturbed attracting and repelling sub-manifolds. More details about the influence of canards on oscillatory solutions can be found in [119], with special attention paid to piecewise linear systems in [237]. Canards are typically studied using either non-standard analysis [69, 251], matched asymptotics [265, 637] or the blow-up method [264, 534, 840], which extends standard Fenichel theory by expanding the non-hyperbolic sets and defining dynamical systems within them. These methods typically involve heavy mathematical machinery along with care to ensure that timescales are matched appropriately. For PWL systems, non-standard approaches are not required, which greatly simplifies the requisite analysis. The goal here is not to precisely reproduce the canards in the fully nonlinear FitzHugh–Nagumo model. Instead, it is to capture the same qualitative behaviour with respect to the canard phenomenon. In particular, consider model (3.90) with a continuous piecewise linear (PWL) function given by

\[
f(v) = \begin{cases} -v, & v < 0, \\ \eta_1 v, & 0 \le v < a, \\ \eta_2 (v - a) + a \eta_1, & a \le v < 1, \\ 2 - v, & v \ge 1, \end{cases} \tag{3.99}
\]

for a, η₁ ∈ (0, 1), η₂ ∈ (1, ∞), γ = 1 and τ ≫ 1. Demanding continuity of f means that once two out of the triple (a, η₁, η₂) are specified, the third follows directly through the relation η₂(1 − a) = 1 − aη₁. The function f splits the phase plane into four distinct strips, namely,


Fig. 3.24 The canard explosion phenomenon in the PWL caricature of the FitzHugh–Nagumo model with f specified by (3.99) with parameters a = 0.4, η₁ = 0.1, β = 2 and τ = 20. The figure construction is as in Fig. 3.23. A rapid increase in the amplitude of the periodic orbit is observed across an exponentially small window around I_C ≈ 0.217.

\[
\begin{aligned}
S_1 &= \{(v, w) \in \mathbb{R}^2 \mid v < 0\}, & S_2 &= \{(v, w) \in \mathbb{R}^2 \mid 0 \le v < a\}, \\
S_3 &= \{(v, w) \in \mathbb{R}^2 \mid a \le v < 1\}, & S_4 &= \{(v, w) \in \mathbb{R}^2 \mid v \ge 1\}.
\end{aligned} \tag{3.100}
\]

Since f is continuous, the differential inclusions used for the McKean model are not needed for this system. The following analysis is restricted to the case where β > 1. For I ∈ (0, a(β − η₁)), there exists a solitary fixed point (v, w) = (I/(β − η₁), Iβ/(β − η₁)) ∈ S₂. The trace of the linearised system is given by η₁ − τ⁻¹, which is positive since τ ≫ 1, and so the fixed point is unstable. The Poincaré–Bendixson theorem can then be used to show that there must be a limit cycle solution for this parameter range. As expected for systems of this form with significant timescale separation, a canard explosion is observed as I varies between 0 and a(β − η₁), as shown in Fig. 3.24. Note that continuous three-piece caricatures of the FitzHugh–Nagumo model do not support canard solutions [238] (see Prob. 3.9). After demonstrating that a system supports canards, the next logical question concerns finding the parameter values at which they appear. The fixed point in S₂ is an unstable node, as may be checked via calculation of its eigenvalues. This means that trajectories must exit S₂, moving either leftwards towards S₁ or rightwards towards S₃. Trajectories entering S₁ from S₂ will cross the v-nullcline and will return to S₂. Trajectories crossing into S₃ will either cross the v-nullcline and return to S₂ or will move rightwards and enter S₄. An inspection of trajectories near the canard phenomenon shows that they tend to bend leftwards or rightwards rapidly upon entering S₃, suggesting the existence of geometrical objects to explain this phenomenon.
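The explosion of Fig. 3.24 can be reproduced qualitatively with a direct simulation. The following is a minimal explicit-Euler sketch, not the continuation approach used to produce the figure: the step size, horizon, and initial condition are illustrative choices, and the current is assumed to enter the voltage equation as v̇ = f(v) − w + I. It compares the peak-to-peak amplitude of v for a value of I below I_C ≈ 0.217 with one above it.

```python
# Parameters of Fig. 3.24 (assumed): a, eta1, beta, tau, with gamma = 1
a, eta1, beta, tau = 0.4, 0.1, 2.0, 20.0
eta2 = (1.0 - a * eta1) / (1.0 - a)      # continuity: eta2 (1 - a) = 1 - a eta1

def f(v):
    """Four-piece caricature (3.99) of the cubic nullcline."""
    if v < 0.0:
        return -v
    if v < a:
        return eta1 * v
    if v < 1.0:
        return eta2 * (v - a) + a * eta1
    return 2.0 - v

def amplitude(I, dt=2e-3, t_end=400.0, t_skip=200.0):
    """Peak-to-peak amplitude of v on the attractor via explicit Euler."""
    v, w, t = 0.05, 0.0, 0.0
    vmin, vmax = float("inf"), float("-inf")
    while t < t_end:
        v, w = v + dt * (f(v) - w + I), w + dt * (beta * v - w) / tau
        if t > t_skip:                   # discard the transient
            vmin, vmax = min(vmin, v), max(vmax, v)
        t += dt
    return vmax - vmin
```

Comparing `amplitude(0.1)` with `amplitude(0.4)` shows the jump from small sub-threshold cycles to a large relaxation oscillation as I passes through the exponentially narrow explosion window.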


Box 3.11: Curves of inflection

Consider a planar system of the form

\[
\dot{v} = F(v, w), \qquad \dot{w} = G(v, w), \qquad v, w \in \mathbb{R}. \tag{3.101}
\]

A curve of inflection, given by the condition d²w/dv² = 0, separates trajectories that are concave from those that are convex. Above this curve, d²w/dv² > 0 and trajectories bend leftwards, whilst below it, d²w/dv² < 0 and trajectories bend towards the right (or vice versa). For (3.101), this inflection condition can be written as

\[
\frac{d}{dv}\left(\frac{dw}{dv}\right) = \frac{F\,\dfrac{dG}{dv} - G\,\dfrac{dF}{dv}}{F^2} = 0. \tag{3.102}
\]

Hence, an inflection point on a trajectory defined by w = w(v), assuming it exists, is determined by

\[
F\left(\frac{\partial G}{\partial v} + \frac{\partial G}{\partial w}\frac{dw}{dv}\right) - G\left(\frac{\partial F}{\partial v} + \frac{\partial F}{\partial w}\frac{dw}{dv}\right) = 0. \tag{3.103}
\]

For a linear system ż = Az + b, where z = (v, w) and

\[
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}, \tag{3.104}
\]

the components of the vector field can be written as

\[
F(v, w) = a_{11} v + a_{12} w + b_1, \qquad G(v, w) = a_{21} v + a_{22} w + b_2. \tag{3.105}
\]

In this case, (3.103) defines two straight lines:

\[
w_\pm(v) = \frac{(a_{21}\sigma_\pm - a_{11})\, v + b_2 \sigma_\pm - b_1}{a_{12} - a_{22}\sigma_\pm}, \tag{3.106}
\]

\[
\sigma_\pm = \frac{a_{11} - a_{22} \pm \sqrt{\mathrm{Tr}(A)^2 - 4\det A}}{2 a_{21}}. \tag{3.107}
\]

Note that the lines of inflection, w± (v), exist if and only if the eigenvalues of A are real. Further details about inflection curves and their role in determining dynamics, particularly, in piecewise linear systems, are given in [763].
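A quick numerical check of (3.106)–(3.107): since the inflection lines of a linear system are its invariant (eigen)directions, the vector field evaluated on w_±(v) must be tangent to the line. The matrix A and vector b below are arbitrary illustrative data, chosen so that the eigenvalues are real.

```python
import math

# An arbitrary linear system z' = A z + b with real eigenvalues (assumed data)
a11, a12, a21, a22 = 1.0, -1.0, 2.0, -3.0
b1, b2 = 0.5, -0.2

disc = math.sqrt((a11 + a22) ** 2 - 4.0 * (a11 * a22 - a12 * a21))

def sigma(sign):
    """sigma_pm of eq. (3.107)."""
    return (a11 - a22 + sign * disc) / (2.0 * a21)

def w_line(v, sign):
    """Inflection line w_pm(v) of eq. (3.106)."""
    s = sigma(sign)
    return ((a21 * s - a11) * v + b2 * s - b1) / (a12 - a22 * s)

def field(v, w):
    """(F, G) for the linear system, eq. (3.105)."""
    return (a11 * v + a12 * w + b1, a21 * v + a22 * w + b2)
```

Evaluating the field at any point of either line and comparing G/F with the line's slope confirms the tangency, and hence the invariance noted in the text.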

The geometrical objects of interest are known as curves of inflection and are discussed in Box 3.11 on page 114. In piecewise linear systems, these curves are lines that are defined within each of the relevant subregions. Since the important behaviour leading to canards occurs when trajectories enter S3 , it is pertinent to study the inflection lines in this region. Here, the specific values for the McKean system (3.90) with f (v) = η2 (v − a) + a η1 may be substituted into (3.107) to find


Fig. 3.25 Maximal canard in the McKean model with f defined by (3.99) and parameters as in Fig. 3.24 with I = I_C. Both panels show the phase portrait, with the right panel being a blow-up of the left panel close to where trajectories enter S₃. Trajectories (including the maximal canard) are shown as solid black curves. The inflection line in S₃ is depicted as a solid grey line and is seen to form part of the canard orbit. In the right panel, two trajectories entering S₃ are displayed. The trajectory starting above the inflection line bends leftwards, back towards S₂, whilst the trajectory starting below the inflection line bends rightwards and moves towards S₄. It can thus be seen how the inflection line organises dynamic behaviour near the canard. Also plotted are the nullclines as grey, dashed (v̇ = 0), and dotted (ẇ = 0) lines, as well as the switching manifolds as black dotted lines.

the inflection lines as

\[
w_\pm(v) = \frac{\left(\beta\sigma_\pm/\tau - \eta_2\right) v - \left(I + 1 - \eta_2\right)}{\sigma_\pm/\tau - 1} = \frac{\beta\left[\left(\lambda_\pm + \tau^{-1} - \eta_2\right) v - \left(I + 1 - \eta_2\right)\right]}{\lambda_\pm + \tau^{-1} - \beta}, \tag{3.108}
\]

where λ± are the eigenvalues of the (constant) Jacobian matrix in S3 as given in Box 2.3 on page 20. In the limit as τ → ∞, both lines defined by (3.108) converge to the v-nullcline as it is defined in S3 . In fact, the inflection lines are the unstable manifold of a virtual fixed point obtained by extending the vector field from S3 into S2 [763]. It may be seen by inspection that the relevant inflection line to consider for finding canards is w− . A canard solution in the present system is one possessing an inflection point, and is thus one that intersects the line w− (v). Since w− (v) is an unstable manifold in S3 , it must also define a trajectory of the system. Thus, trajectories cannot cross w− (v) and only the point at which they enter S3 needs to be considered. The system possesses a trajectory starting at (a, w− (a)) that tracks along w− (v), exiting at (1, w− (1)). The desired canard solution is a periodic orbit that passes through both the points (a, w− (a)) and (1, w− (1)). Note that if a solution passes through one of the points, it must necessarily pass through the other. This allows for a two point boundary value problem to be established which can be


Fig. 3.26 Two-parameter continuation of canard solutions in the McKean model with f defined by (3.99) under variation of (ε, I), where ε = τ⁻¹, with other parameters as in Fig. 3.24. The solid black line shows maximal small amplitude oscillations, determined as being periodic orbits that pass through the point (a, η₁a + I), i.e., the point along the v-nullcline on the boundary separating S₂ and S₃. The solid grey curve represents maximal canards as described in the text and computed in Fig. 3.25. Below the black curve, only large amplitude oscillations are observed, whilst, above the grey curve, there exist only small amplitude oscillations. The narrow tongue between the two curves is a region of bistability in which small amplitude and large amplitude oscillations both exist and are stable.

solved numerically to construct the canard solution, which is plotted in Fig. 3.25. Canard solutions can also be found by seeking fixed points of a Poincaré map through a section with v = a and by varying parameters so that the fixed point exists at w₋(a). Once a canard solution has been found, it may be continued to explore its dependence on parameter values, as shown in Fig. 3.26 for variation in τ. This figure highlights that the canard solutions separate large and small amplitude responses and, further, that bistability between these solutions is possible for sufficiently high I. Solutions displaying similar properties to canards, termed phantom ducks [95], have a role in probing the response of excitable systems, since inflection curves in these act as a threshold for spiking. In this way, the state dependence of the ‘soft’ threshold on the recovery variable can be quantified. Interestingly, FitzHugh seemed to be aware of the presence of inflection curves in his original work [305], referring to the region of state space bounded by the inflection curves, as shown in Fig. 3.27, as ‘no-mans land’. This name arose from the fact that, since the system is so sensitive here, FitzHugh found it difficult to solve the FitzHugh–Nagumo system numerically subject to the tolerances required to compute such dynamically sensitive behaviour.


Fig. 3.27 Just as in the McKean model, inflection curves in the full FitzHugh–Nagumo model can be used to explain the canard phenomenon. Here, the parameters are as in Fig. 3.23. The left panel displays a small amplitude oscillation, whilst the right panel depicts a large amplitude oscillation, both plotted as solid black curves. The solid grey lines are the curves of inflection, determined by (3.103), and are plotted only where they are real. The canard explosion occurs when the orbit intersects the inflection curve. Also plotted are the system nullclines as grey, dashed (v̇ = 0), and dotted (ẇ = 0) lines.

Just as in the McKean model case, when a small amplitude periodic orbit intersects the inflection curve as I is increased, it transitions to a large amplitude periodic orbit. Modern computers have facilitated the probing of dynamics in this ‘no-mans land’, but it is instructive to note that PWL caricatures can be used to simplify the analysis of this transition, allowing for straightforward construction of complicated solutions even in the absence of highly accurate numerical integrators for tonic spiking [763] and bursting orbits [237].

Remarks

The models presented in this chapter complement their conductance-based counterparts discussed in Chap. 2. The replacement of transcendental functions with polynomials makes the models in this chapter significantly more amenable to analysis compared with those of the previous chapter, and we present a variety of tools to carry out such investigations. This chapter also introduces the integrate-and-fire class of models, which has enjoyed a fruitful life in the modelling of neuronal systems, but brings with it additional complexities in the form of non-smooth dynamics. Finally, the chapter showcases how piecewise linear caricatures can be leveraged to probe dynamics in fully nonlinear models through the construction of explicit solutions. Just as for the conductance-based models of the previous chapter, phenomenological models can be combined to form and study networks. This approach provides a means to study the general single neuron mechanisms that promote or suppress


dynamics at the network level. Moreover, phenomenological models can be used to describe higher-level processes such as circadian rhythms [309], sleep–wake cycles [819], and the coordination of human movement [820]. The models presented in this chapter are not intended to describe the entirety of the phenomenological approaches available to study neuronal dynamics. Indeed, there are a variety of other approaches for describing such behaviour. One of the simplest of these involves describing the neuron as a Markov chain whereby the neuron is either active or inactive [341]. Such a description can be extended to incorporate a third state that captures the refractoriness present in real neurons [47, 363]. These states can be realised as potential wells of an appropriately defined gradient system [499], which allows the transition rates between states to be shaped by the potential landscape, affording greater control over them. Such simple descriptions of the single cell facilitate analysis of network dynamics, as covered in the above references and in [212]. Similarly, potential landscapes have also been used to study transitions in large-scale neuronal networks, taking into account the form of coupling between individual cells [378, 954]. Another prominent approach is to treat either the spiking rate of the neuron, or the spike times themselves, as stochastic processes [127, 144, 163]. Often, the spiking rate is treated as an inhomogeneous Poisson process, as discussed in detail in [885]; such processes have the flexibility to describe a large range of firing dynamics and can also be adjusted to incorporate refractoriness [74]. For discussion of other important aspects of stochastic processes in neuroscience, see [548], and for those wishing to combine ideas from stochastic processes with slow–fast systems, see the book [73], which covers noisy relaxation oscillators.
Certain classes of phenomenological model are posed in discrete, rather than continuous, time. Prominent examples of this include the Chialvo map [165], the Rulkov map [771, 772] and the more recently defined KT map [342]. Since these models require only one function evaluation per time step, they can achieve considerable computational gains compared to continuous-time systems that display similar dynamics [450]. Moreover, the specification as a map rather than a flow allows even low dimensional models to exhibit more ‘exotic’ behaviours such as bursting [772]. Transitions between different solution types in these models can be identified using the theory of bifurcations in maps, which is comprehensively reviewed in [541]. Although this chapter focusses on one- and two-dimensional models, the general ideas of using polynomial functions for describing neuronal dynamics can be extended to higher dimensions, so that bursting behaviour can be captured. One of the seminal models in this regard is the Hindmarsh–Rose model [421] (and see Prob. 3.3), for which a piecewise linear caricature has recently been developed [237]. As mentioned in Sec. 3.7.1, canard solutions are integral in explaining the features of these mixed-mode oscillations. Although not the focus of this present chapter, canards can also be used to explain transitions between different oscillatory rhythms across entire neural tissues [46]. In spite of the tractability of the Hindmarsh–Rose model and the map-based models, many modelling studies instead opt to use variations of the Izhikevich model [457], the dynamics of which are explored in [460] and [876], to describe complex


firing patterns. This popularity is partly due to the flexibility of this model in capturing firing properties, such as chattering and fast-spiking, across a range of cell types [459]. In addition, software packages such as the python-based BRIAN [366, 367, 835] and PyGeNN [522] provide an easy-to-use interface for simulating large networks of IF neurons. This software leverages the hybrid nature of IF dynamics to explicitly find spike times as solutions to an appropriately defined nonlinear problem [872]. In turn, this can lead to high-efficiency simulators for networks of IF neurons that circumvent the need for explicit time-stepping. Such integrators thus avoid some of the issues surrounding the accuracy of explicit ordinary differential equation solvers when dealing with threshold-crossing events [403], and see [90] for an example that exploits parallel computing capabilities to achieve this.
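The idea of finding spike times as solutions to a nonlinear problem, rather than by explicit time-stepping, is simplest for the leaky integrate-and-fire model, where v̇ = I − v admits the closed-form crossing time t* = ln((I − v₀)/(I − v_th)) for I > v_th. The sketch below is a hedged illustration of this event-driven strategy, not the interface of any of the packages cited above; the threshold, reset, and drive values are arbitrary.

```python
import math

v_th, v_r = 1.0, 0.0   # threshold and reset (illustrative values)

def next_spike_time(v0, I):
    """Exact threshold-crossing time for v' = I - v with v(0) = v0 < v_th;
    requires I > v_th so that threshold is actually reached."""
    return math.log((I - v0) / (I - v_th))

def spike_train(I, n_spikes, v0=0.0):
    """Event-driven simulation: jump from spike to spike with no time grid."""
    times, t, v = [], 0.0, v0
    for _ in range(n_spikes):
        t += next_spike_time(v, I)
        times.append(t)
        v = v_r                       # instantaneous reset after each spike
    return times

train = spike_train(I=2.0, n_spikes=5)
```

For constant drive the inter-spike interval is exactly ln((I − v_r)/(I − v_th)), so no threshold-crossing event is ever missed, in contrast to a fixed-step ODE solver.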

Problems

3.1. Consider the van der Pol equation

\[
\ddot{v} + \mu(v^2 - 1)\dot{v} + v = 0. \tag{3.109}
\]

(i) Using the transformation w = v − v³/3 − v̇/μ, show that (3.109) can be rewritten in the form

\[
\dot{v} = \mu\left(v - \frac{v^3}{3} - w\right), \qquad \dot{w} = \frac{v}{\mu}. \tag{3.110}
\]

(ii) In the limit as μ → ∞, show that this model supports oscillatory solutions with period (to first order in μ)

\[
\Delta = \mu(3 - 2\ln 2). \tag{3.111}
\]

(iii) Use the transformation u = μw to show that (3.109) can also be written in the form

\[
\dot{v} = \mu\left(v - \frac{v^3}{3}\right) - u, \qquad \dot{u} = v. \tag{3.112}
\]

(iv) Let x = v cos(t) + u sin(t) and y = −v sin(t) + u cos(t) in (iii). Show that x and y obey the equations

\[
\begin{aligned}
\dot{x} &= \mu\left[\left(x\cos(t) - y\sin(t)\right) - \tfrac{1}{3}\left(x\cos(t) - y\sin(t)\right)^3\right]\cos(t), \\
\dot{y} &= -\mu\left[\left(x\cos(t) - y\sin(t)\right) - \tfrac{1}{3}\left(x\cos(t) - y\sin(t)\right)^3\right]\sin(t).
\end{aligned} \tag{3.113}
\]

(v) For a 2π-periodic function f(x), define its average over one period as ⟨f⟩ = (1/2π)∫₀^{2π} f(x) dx. Show that the averaged equations corresponding to (3.113) are given by

\[
\dot{x} = \frac{\mu x}{8}\left(4 - (x^2 + y^2)\right), \qquad \dot{y} = \frac{\mu y}{8}\left(4 - (x^2 + y^2)\right). \tag{3.114}
\]


(vi) By introducing the variable $r = \sqrt{x^2 + y^2}$, show that (3.114) supports a limit cycle solution with $r = 2$.
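A quick numerical sanity check of this averaging result (parameter values here are illustrative, not from the text): integrating the van der Pol system in the form (3.112) with a small $\mu$, the attracting limit cycle should have amplitude close to $r = 2$.

```python
def vdp_rhs(state, mu):
    """Van der Pol in the form (3.112): dv/dt = mu*(v - v^3/3) - u, du/dt = v."""
    v, u = state
    return (mu * (v - v**3 / 3.0) - u, v)

def rk4_step(state, mu, dt):
    """One classical fourth-order Runge-Kutta step."""
    def add(s, k, h):
        return (s[0] + h * k[0], s[1] + h * k[1])
    k1 = vdp_rhs(state, mu)
    k2 = vdp_rhs(add(state, k1, 0.5 * dt), mu)
    k3 = vdp_rhs(add(state, k2, 0.5 * dt), mu)
    k4 = vdp_rhs(add(state, k3, dt), mu)
    return (state[0] + dt * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6.0,
            state[1] + dt * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6.0)

mu, dt = 0.1, 0.01                 # weakly nonlinear regime (illustrative)
s = (0.5, 0.0)
for _ in range(100000):            # discard the transient
    s = rk4_step(s, mu, dt)
amp = 0.0
for _ in range(70000):             # track max |v| over many cycles
    s = rk4_step(s, mu, dt)
    amp = max(amp, abs(s[0]))
```

For small $\mu$ the measured amplitude agrees with the averaged prediction to within corrections that vanish as $\mu \to 0$.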

3.2. Consider the system
$$\dot{x} = x - y - x(x^2 + 5y^2), \qquad \dot{y} = x + y - y(x^2 + y^2). \tag{3.115}$$
(i) Classify the stability type of the fixed point at the origin.
(ii) Rewrite the system in polar coordinates.
(iii) Determine the circle of maximum radius $r_1$ centred on the origin such that all trajectories have a radially outward component on it.
(iv) Determine the circle of minimum radius $r_2$ centred on the origin such that all trajectories have a radially inward component on it.
(v) Prove that the system has a limit cycle somewhere in the trapping region $r_1 \leq r \leq r_2$.

3.3. Consider the Hindmarsh–Rose model to describe burst firing in the brain of the pond snail Lymnaea when depolarised by a short current pulse [58, 421]:
$$\dot{x} = y - x^3 + bx^2 + I - z, \qquad \dot{y} = 1 - 5x^2 - y, \qquad \dot{z} = \varepsilon\left(4(x - x_0) - z\right). \tag{3.116}$$
Here, the (dimensionless) variable $x$ describes the membrane potential, $y$ the fast transport of sodium ions, and $z$ the slower transport of potassium ions across a membrane. Consider the choice $x_0 = -1.6$ with the other parameters $(b, I, \varepsilon)$ all being positive.
(i) Show that the fixed points of the model are given by $y = 1 - 5x^2$ and $z = 4(x - x_0)$, where $x$ is any real root of
$$P(x) = I + 1 + 4x_0 - 4x + (b - 5)x^2 - x^3. \tag{3.117}$$

(ii) Show that the eigenvalues of the Jacobian are zeros of the characteristic equation
$$Q(\lambda) = \lambda^3 + q_2(x, b, \varepsilon)\lambda^2 + q_1(x, b, \varepsilon)\lambda + q_0(x, b, \varepsilon), \tag{3.118}$$
where
$$q_2(x, b, \varepsilon) = 3x^2 - 2bx + 1 + \varepsilon,$$
$$q_1(x, b, \varepsilon) = 3x^2 + (10 - 2b)x + \varepsilon(3x^2 - 2bx + 5), \tag{3.119}$$
$$q_0(x, b, \varepsilon) = \varepsilon\left(3x^2 + (10 - 2b)x + 4\right).$$
(iii) Show that the necessary conditions for a Hopf bifurcation are
$$P(x) = 0, \qquad C(x, b, \varepsilon) = q_2(x, b, \varepsilon)q_1(x, b, \varepsilon) - q_0(x, b, \varepsilon) = 0, \qquad q_1(x, b, \varepsilon) > 0. \tag{3.120}$$


(iv) In the limit $\varepsilon \to 0$, show that the Hopf bifurcation points are defined by the set of four curves in the $(I, b)$ plane given by $I = I_i(b)$ for $i = 1, \ldots, 4$, where $I_i(b) = Q(x_i(b))$ for $i = 1, 2, 3$ and $I_4 = Q(x_4(b))$ for $x_4(b) > (2b - 10)/3$, with
$$Q(x) = -1 - 4x_0 + 4x(b) - (b - 5)x(b)^2 + x(b)^3, \tag{3.121}$$
and $x_1(b) = 0$, $x_2(b) = (2b - 10)/3$, $x_3(b) = (2b + \sqrt{4b^2 - 12})/6$ and $x_4(b) = (2b - \sqrt{4b^2 - 12})/6$ for $4b^2 \geq 12$.
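The coefficients (3.119) can be checked numerically. The stdlib-only sketch below (illustrative, not part of the text) compares $\det(\lambda I - J)$, evaluated directly from the Jacobian of (3.116), with the closed-form characteristic polynomial.

```python
def hr_jacobian(x, b, eps):
    """Jacobian of the Hindmarsh-Rose model (3.116), evaluated at a point
    with voltage coordinate x (the y and z values do not enter)."""
    return [[-3*x**2 + 2*b*x, 1.0, -1.0],
            [-10*x, -1.0, 0.0],
            [4*eps, 0.0, -eps]]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def char_poly_direct(lam, x, b, eps):
    """det(lam*I - J), evaluated without expanding the coefficients."""
    J = hr_jacobian(x, b, eps)
    m = [[lam*(i == j) - J[i][j] for j in range(3)] for i in range(3)]
    return det3(m)

def char_poly_closed(lam, x, b, eps):
    """lam^3 + q2*lam^2 + q1*lam + q0 with the coefficients of (3.119)."""
    q2 = 3*x**2 - 2*b*x + 1 + eps
    q1 = 3*x**2 + (10 - 2*b)*x + eps*(3*x**2 - 2*b*x + 5)
    q0 = eps*(3*x**2 + (10 - 2*b)*x + 4)
    return lam**3 + q2*lam**2 + q1*lam + q0
```

The two evaluations agree to rounding error at arbitrary test points, confirming the algebra behind (3.119).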

3.4. Consider a one-dimensional IF neuron given by
$$\dot{v} = f(v) + I, \qquad v \to v_r \text{ when } v = v_{th}, \tag{3.122}$$
where $f$ is a smooth function and $v_r < v_{th}$. Suppose that $I$ is such that the model exhibits an oscillatory solution with period $\Delta$. Show that the Floquet multiplier of this orbit is equal to 1.

3.5. Consider the adapted absolute IF (AAIF) model given by
$$\dot{v} = |v| + I - w, \qquad \tau\dot{w} = \beta v - w, \tag{3.123}$$
with $v \to v_r$ and $w \to w + k$ when $v = v_{th}$. Suppose that $I < 0$, $\beta < 1$ and $\tau = 1$.

(i) Show that there exist two fixed points, one of which is a stable focus with the other being of saddle type.
(ii) Show that the stable and unstable manifolds of the saddle can be written as straight lines of the form
$$w_s(v) = \beta\,\frac{(\beta - 1)v - I\sqrt{1 - \beta}}{(\beta - 1)(1 - \sqrt{1 - \beta})}, \qquad w_u(v) = \beta\,\frac{(\beta - 1)v + I\sqrt{1 - \beta}}{(\beta - 1)(1 + \sqrt{1 - \beta})}. \tag{3.124}$$
(iii) Show that a homoclinic spiking orbit (i.e., one that interacts with the impact manifold $\Sigma = \{(v, w) \mid v = v_{th}\}$) occurs when
$$k = v_r - v_{th} + \sqrt{1 - \beta}\,(v_r + v_{th}) + \frac{2I}{\sqrt{1 - \beta}}. \tag{3.125}$$

(iv) Prove that a spiking homoclinic orbit cannot occur when $k > 0$ and $v_r < I/(\beta - 1)$.
(v) Prove that the AAIF model cannot support a homoclinic orbit that encloses the stable fixed point.

3.6. Consider again the AAIF model (3.123).
(i) Find a pair $(I, \beta)$ at which a Bogdanov–Takens bifurcation occurs (i.e., when the linearised system around an equilibrium has a pair of zero eigenvalues).


(ii) Show that the AAIF model supports sub-threshold orbits when $\beta > 1$ and $\tau > 1$ with $I$ sufficiently small and positive.
(iii) Show that the period of these sub-threshold orbits is independent of $I$.
(iv) Demonstrate that a discontinuous Hopf bifurcation can occur for $I > 0$ and $\beta > 1$ when $\tau = 1$, and show that the sub-threshold orbits emerging at this bifurcation have an amplitude that is bounded away from zero.
(v) The criticality of a planar discontinuous Hopf bifurcation can be determined by the sign of the parameter
$$\Lambda = \frac{a_L}{b_L} - \frac{a_R}{b_R}, \tag{3.126}$$
where the eigenvalues immediately prior to the bifurcation are given by $\lambda_L = a_L \pm \mathrm{i}b_L$ and those immediately following it are given by $\lambda_R = a_R \pm \mathrm{i}b_R$. In particular, the bifurcation is supercritical if $\Lambda < 0$ and is subcritical if $\Lambda > 0$, with a generalised Hopf bifurcation occurring when $\Lambda = 0$. Show that discontinuous Hopf bifurcations in the AAIF model are supercritical.

3.7. Consider the AAIF model (3.123) with $\beta = 0$ and $\tau = 1$.
(i) Find the value $w_c$ such that orbits starting at $(v_r, w_c + k)$ do not cross $v = 0$ before reaching $v = v_{th}$.
(ii) Show that the Poincaré map $P : \Sigma \to \Sigma$, where $\Sigma = \{(v, w) \mid v = v_{th}\}$, and its derivative in the interval $(-\infty, w_c)$ are given by
$$P(w) = v_{th} + I - \sqrt{(v_{th} + I)^2 + (w + k)\left[(w + k) - 2(v_r + I)\right]}, \tag{3.127}$$
$$P'(w) = \frac{v_r + I - w - k}{\sqrt{(v_{th} + I)^2 + (w + k)\left[(w + k) - 2(v_r + I)\right]}}. \tag{3.128}$$
(iii) Determine whether the Poincaré map is continuous or discontinuous (i.e., has a gap) at $w = w_c$.
(iv) Show that a fixed point of this map with $w < w_c$ is given by
$$w = \frac{k\left[2(v_r + I) - k\right]}{2(v_{th} - v_r + k)}, \tag{3.129}$$
and is stable provided $I > -v_r$.
(v) Hence, or otherwise, show that a bifurcation of the tonic spiking solution occurs when
$$k = -(v_{th} - v_r) + \sqrt{v_r(v_r + 2I)} + \sqrt{v_{th}(v_{th} + 2I)}. \tag{3.130}$$
(vi) Justify why this model does not support bursting solutions when $v_r < 0$.
(vii) In the singular limit as $\tau \to \infty$, show that, where they exist, the number of spikes in bursting orbits is independent of $I$, and that the period, $\Delta$, of these bursting orbits, to first order in $\tau$, is given by
$$\Delta = \tau\ln\left(\frac{I + kn}{I}\right), \qquad n = \left\lceil\frac{v_r}{k}\right\rceil, \tag{3.131}$$
where $\lceil\cdot\rceil$ denotes the ceiling function.
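The map (3.127) can be checked against direct simulation of (3.123) with $\beta = 0$ and $\tau = 1$. In the sketch below the parameter values are illustrative, chosen so that the orbit from reset stays in $v > 0$ until it hits threshold.

```python
import math

def aaif_next_w(w, v_r=0.2, v_th=1.0, I=0.1, k=0.05, dt=1e-4):
    """Integrate dv/dt = |v| + I - w, dw/dt = -w (beta = 0, tau = 1) from the
    reset point (v_r, w + k) until v reaches v_th; return w at the crossing."""
    def f(v, W):
        return (abs(v) + I - W, -W)
    v, W = v_r, w + k
    while True:
        k1 = f(v, W)
        k2 = f(v + 0.5*dt*k1[0], W + 0.5*dt*k1[1])
        k3 = f(v + 0.5*dt*k2[0], W + 0.5*dt*k2[1])
        k4 = f(v + dt*k3[0], W + dt*k3[1])
        v_new = v + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0
        W_new = W + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0
        if v_new >= v_th:
            frac = (v_th - v) / (v_new - v)   # interpolate the crossing
            return W + frac * (W_new - W)
        v, W = v_new, W_new

def aaif_map(w, v_r=0.2, v_th=1.0, I=0.1, k=0.05):
    """The closed-form Poincare map (3.127)."""
    s = w + k
    return v_th + I - math.sqrt((v_th + I)**2 + s*(s - 2.0*(v_r + I)))
```

The simulated return value agrees with (3.127), and the fixed point (3.129) satisfies $P(w^*) = w^*$ to rounding error.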


3.8. Consider the model [963]
$$\dot{v} = v^2 + I - w, \qquad \dot{w} = v(\beta - 2w), \tag{3.132}$$
with $v \to v_r$, $w \to sw + k$ when $v = v_{th}$.

(i) Show that the following quantity is conserved between impacts (i.e., whilst $v < v_{th}$):
$$E(v, w) = \frac{1}{2}w^2 - w(I + v^2) + \frac{\beta}{2}v^2. \tag{3.133}$$
(ii) Show that the Poincaré map defined on the section $\Sigma = \{(v, w) \in \mathbb{R}^2 \mid v = v_{th}\}$ and its derivative for $w < I + v_{th}^2$ are given by
$$P(w) = A - \sqrt{(sw + B)^2 + C}, \qquad P'(w) = -\frac{s(sw + B)}{\sqrt{(sw + B)^2 + C}}, \tag{3.134}$$
where
$$A = I + v_{th}^2, \qquad B = k - I - v_r^2, \qquad C = (2I + v_{th}^2 + v_r^2 - \beta)(v_{th}^2 - v_r^2).$$
(iii) For $(sB + A)^2 - (B^2 - A^2 + C)(s^2 - 1) \geq 0$, find the fixed points of $P$ and classify their stability.

3.9. Consider the FitzHugh–Nagumo model
$$\mu\dot{v} = v(v - a)(1 - v) - w + I, \qquad \dot{w} = \beta v - w. \tag{3.135}$$
(i) Write down a three-region piecewise linear caricature in which both the $v$ and $w$ nullclines are continuous and which preserves the $(v, w)$ locations of the extrema of the $v$-nullcline.
(ii) Find conditions for a discontinuous Hopf bifurcation to occur.
(iii) Demonstrate that this three-piece caricature does not support canard solutions.
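Returning to Prob. 3.8, the conserved quantity (3.133) provides a useful test of any numerical integration of (3.132). In the sketch below (illustrative parameter values), the drift in $E$ along an RK4 trajectory should sit at the level of the integration error.

```python
def rhs(v, w, I=0.3, beta=0.5):
    """Model (3.132) between impacts (illustrative parameter values)."""
    return (v*v + I - w, v*(beta - 2.0*w))

def energy(v, w, I=0.3, beta=0.5):
    """The conserved quantity (3.133)."""
    return 0.5*w*w - w*(I + v*v) + 0.5*beta*v*v

v, w, dt = 0.0, 0.0, 1e-3
E0 = energy(v, w)
while v < 1.0:                          # integrate up to a nominal v_th = 1
    k1 = rhs(v, w)
    k2 = rhs(v + 0.5*dt*k1[0], w + 0.5*dt*k1[1])
    k3 = rhs(v + 0.5*dt*k2[0], w + 0.5*dt*k2[1])
    k4 = rhs(v + dt*k3[0], w + dt*k3[1])
    v += dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0
    w += dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0
drift = abs(energy(v, w) - E0)
```

Since $E$ is exactly conserved by the flow, any appreciable drift signals a stepping error rather than a property of the model.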

Chapter 4

Axons, dendrites, and synapses

4.1 Introduction

Chapter 2 and Chapter 3 considered single neuron models that idealised the nerve cell as a point or patch of cell membrane in which the voltage is the same everywhere. However, the classical notion of a neuron is of a specialised cell with a body, an axon, and dendrites [802]. The cell body, or soma, contains the nucleus and cytoplasm; the axon extends from the soma and ultimately branches before ending at nerve terminals; and dendrites are branched structures connected to the soma that receive signals from other neurons. Axons and dendrites make contact with each other at axo-dendritic synapses, and dendro-dendritic synapses are also possible. These structures allow neurons to communicate in different ways, and to transmit information to other nerve cells, muscle, or gland cells. In this chapter, we consider idealised mathematical models of these processes and methods for their analysis, which now include a spatial aspect. In later chapters, we make use of these structures and their dynamics to construct model neuronal networks.

4.2 Axons

Axons are the fibres that allow neurons to send action potential signals, usually initiated at the axon hillock, over large distances (ranging from the millimetre to the metre scale). They typically have a diameter of around one micrometre. The largest mammalian axons can reach a diameter of up to 20 μm, and the unmyelinated squid giant axon, which is specialised to conduct signals very rapidly, is closer to 1 mm in diameter [488]. However, rapid conduction comes at a high price with respect to brain volume. For example, in monkeys, the fastest axons are around 20 μm in diameter and conduct at about 120 m/s. By contrast, the slowest axons are around 0.1 μm in diameter and conduct at about 0.3 m/s. Thus, the fastest axons occupy roughly 40,000 times the volume of the slowest axons, per unit length, and conduct about 400 times faster. The diameter of the axon and the presence of myelin (a fatty insulating substance) are the most powerful structural factors that control the conduction velocity of mammalian axons. Conduction velocity increases with both axon diameter and with myelination, and, in the central nervous system, the great majority of axons that are larger than 0.3 μm in diameter are myelinated. For a recent perspective on the structure and function of axons, see [297].

The propagation of the action potential along an axon in a regenerative fashion arises from a mixture of ions diffusing along the axon together with the nonlinear flow of ions across the axonal membrane. This is naturally described by generalising the Hodgkin–Huxley model of Sec. 2.3 from an ordinary to a partial differential equation (PDE) of reaction–diffusion type. An action potential is then described by a travelling wave solution; namely, one that moves with a constant speed and a time-independent shape. Much of the initial analysis for the existence, uniqueness, and stability of travelling wave solutions to the spatially extended Hodgkin–Huxley model was performed in the 1970s [153, 285–288]. Here, we shall revisit some of these notions for travelling waves, though with a specific focus on idealised models for which the mathematical analysis simplifies considerably compared to that for the Hodgkin–Huxley model.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. Coombes and K. C. A. Wedgwood, Neurodynamics, Texts in Applied Mathematics 75, https://doi.org/10.1007/978-3-031-21916-0_4

4.2.1 Smooth nerve fibre models

We first consider models for nerve impulse propagation appropriate for fibres with relatively little spatial variation of biophysical properties, such as the squid giant axon. The natural extension of the Hodgkin–Huxley model of Sec. 2.3 to include a longitudinal current (along the fibre) can be constructed by considering the fibre to be a set of identical Hodgkin–Huxley models connected in a ladder structure, as shown in Fig. 4.1, with nearest neighbour interactions mediated by Ohmic (linear) resistances, and then taking the continuum limit to obtain a PDE for $V(x, t)$ with $x \in (-L, L)$ and $t > 0$. For a cable of diameter $d$, the $n$th compartment in this idealised setup is considered to have a spatial location $x_n = nh$, with $h = 2L/N$ and $n \in \mathbb{Z}$ with $N$ being the number of nodes. Each cylindrical compartment has surface area $A = \pi dh$ and cross-sectional area $S = \pi d^2/4$. This spatially extended scenario can be modelled via the lattice equation
$$AC\frac{\mathrm{d}V_n}{\mathrm{d}t} = -AI_{\text{ion}} + AI + \frac{S}{hR}\left(V_{n+1} - 2V_n + V_{n-1}\right), \tag{4.1}$$
where $V_n(t) = V(nh, t)$, and $C$ (in units of capacitance per unit area of the cell membrane) and $R$ (in units of resistance × length) denote the specific membrane capacitance and the resistivity of the intracellular fluid, respectively. Dividing through by $A$ and taking the limit $h \to 0$ (equivalently $N \to \infty$), and then considering the limit $L \to \infty$, gives


Fig. 4.1 Ladder structure for a model of the axon. The axial (longitudinal) resistance to current is described using the resistances with values R. The nonlinear ionic currents in the Hodgkin–Huxley model have an impedance symbolised by the electrical component with value Z .

$$C\frac{\partial V}{\partial t} = -I_{\text{ion}} + I + \frac{d}{4R}\frac{\partial^2 V}{\partial x^2}, \qquad x \in \mathbb{R},\ t \geq 0, \tag{4.2}$$
where $I_{\text{ion}}$ is obtained from the usual Hodgkin–Huxley description of gating variables under the replacement $X(t) \to X(x, t)$, $X \in \{m, n, h\}$, with
$$\frac{\partial X}{\partial t} = \frac{X_\infty(V) - X}{\tau_X(V)}. \tag{4.3}$$
Here, the following result has been employed:
$$\lim_{h \to 0}\frac{V_{n+1} - 2V_n + V_{n-1}}{h^2} = \frac{\partial^2 V}{\partial x^2}, \tag{4.4}$$
and note that boundary conditions are unimportant upon taking the limit as $L \to \infty$. Using a value of $R = 35.4\ \Omega\,\mathrm{cm}$, Hodgkin and Huxley were able to show numerically that the velocity of the model action potential scales as $\sqrt{d}$. Moreover, using $d = 0.476$ mm, they obtained a shape and amplitude in good agreement with experiments, and could obtain a propagation velocity of 18.8 m/s, which is within 12% of the measured experimental value of 21.2 m/s [425].

The spatially extended Hodgkin–Huxley model can, of course, be studied using numerical simulations. However, it can also be analysed from a mathematical perspective using travelling wave analysis. For descriptive purposes, we pursue this
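The limit (4.4) can be probed numerically: for a smooth test profile, the second difference converges to the second derivative at rate $h^2$, as the illustrative sketch below shows.

```python
import math

def second_difference(V, x, h):
    """(V(x+h) - 2 V(x) + V(x-h)) / h^2, the bracket appearing in (4.4)."""
    return (V(x + h) - 2.0*V(x) + V(x - h)) / (h*h)

x0 = 0.7                                    # illustrative evaluation point
# Test profile V = sin, whose exact second derivative is -sin:
errors = [abs(second_difference(math.sin, x0, h) - (-math.sin(x0)))
          for h in (0.1, 0.05, 0.025)]
```

Each halving of $h$ reduces the error by roughly a factor of four, consistent with the $O(h^2)$ truncation error of the centred difference.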


here by first replacing the Hodgkin–Huxley model by the FitzHugh–Nagumo model (as presented in Sec. 3.2, though we stress that the methodology has full generality). The following system of PDEs is a FitzHugh–Nagumo caricature of the Hodgkin–Huxley equations modelling nerve impulse propagation along an axon:
$$\frac{\partial v}{\partial t} = D\frac{\partial^2 v}{\partial x^2} + f(v) - w, \qquad \frac{\partial w}{\partial t} = \beta v, \qquad x \in \mathbb{R},\ t \geq 0, \tag{4.5}$$

where $v(x, t)$ represents the membrane potential, $w(x, t)$ is a phenomenological recovery variable, $f(v) = v(a - v)(v - 1)$, $0 < a < 1$, $\beta > 0$, and $D$ is a diffusion coefficient that, without loss of generality, is now set to unity. Now, transform to a frame of reference that translates at a speed $c$ along the $x$-axis. In this way, one moves from $(x, t)$ coordinates to $(\xi, t)$ coordinates, where $\xi = x - ct$. The dynamics in this co-moving frame may be easily constructed using the fact that the derivative operators are related by $\partial_x = \partial_\xi$ and $\partial_t \to -c\partial_\xi + \partial_t$. The equations in the co-moving frame are given by
$$-c\frac{\partial v}{\partial \xi} + \frac{\partial v}{\partial t} = \frac{\partial^2 v}{\partial \xi^2} + f(v) - w, \qquad -c\frac{\partial w}{\partial \xi} + \frac{\partial w}{\partial t} = \beta v. \tag{4.6}$$
A travelling wave is one that appears stationary in the co-moving frame, namely, there is some $c$ for which the solution of (4.6) is independent of time, so that $(v(\xi, t), w(\xi, t)) = (V(\xi), W(\xi))$. In this case, the profile is described by the travelling wave ordinary differential equation (ODE) obtained from (4.6) after dropping derivatives in $t$, and see Box 4.1 on page 130 for further discussion. The equations obtained from (4.6) can be written as a system of three first-order ODEs in the form
$$\frac{\mathrm{d}V}{\mathrm{d}\xi} = U, \qquad \frac{\mathrm{d}U}{\mathrm{d}\xi} = -cU - f(V) + W, \qquad \frac{\mathrm{d}W}{\mathrm{d}\xi} = -\frac{\beta}{c}V. \tag{4.7}$$

Any bounded orbit corresponds to a travelling wave such that $c = c(a, \beta)$. For all $c \neq 0$, the wave system has a unique equilibrium at $(V, U, W) = (0, 0, 0)$ with one positive eigenvalue $\lambda_1$ and two eigenvalues $\lambda_{2,3}$ with negative real parts. (To show this, first verify the assertion assuming the eigenvalues are real. Next, show that the characteristic equation cannot have roots on the imaginary axis, and finally, use the fact that eigenvalues exhibit continuous dependence on system parameters.) The equilibrium can either be a saddle or a saddle-focus with a 1D unstable and a 2D stable manifold. The transition between saddle and saddle-focus is caused by the presence of a real, negative eigenvalue with multiplicity two (and see Prob. 4.1). A travelling pulse is described by a homoclinic orbit to the equilibrium, and see Box 4.1 on page 130. Figure 4.2 considers a homoclinic orbit to a saddle equilibrium and shows the corresponding wave profile in the travelling wave coordinate. For a homoclinic orbit to


Fig. 4.2 Travelling pulse shapes in axonal fibres. Upper-top: A pulse that is homoclinic to a fixed point of saddle type showing a monotone tail. Upper-middle: A pulse that is homoclinic to a fixed point of saddle-focus type showing an oscillating tail. Upper-bottom: A double pulse that is homoclinic to a fixed point of saddle-focus type. Lower: Indicative schema showing the configuration of the stable manifold of the equilibrium at the origin and an example trajectory for the pulse with a monotone tail (left) and with an oscillatory tail (right).

a saddle-focus equilibrium the trajectory spirals toward the equilibrium generating an oscillatory tail in the wake of the pulse. It is also possible for smooth nerve fibre models to support trains of action potentials with more than one spike, such as the double pulse solution shown in Fig. 4.2 (upper-bottom), where a homoclinic orbit has two large excursions from the equilibrium.


Box 4.1: Travelling waves in PDE models
Consider a simple multi-component reaction–diffusion PDE model for the evolution of a vector field $u(x, t) \in \mathbb{R}^n$, $x \in \mathbb{R}$ and $t \in \mathbb{R}^+$, given by
$$\frac{\partial u}{\partial t} = D\frac{\partial^2 u}{\partial x^2} + f(u), \tag{4.8}$$
for some nonlinear vector field $f$ and diffusion matrix $D$ (i.e., a diagonal $n \times n$ matrix with strictly positive entries). A travelling wave solution is one that satisfies the ODE system
$$-c\frac{\mathrm{d}q}{\mathrm{d}\xi} = D\frac{\mathrm{d}^2 q}{\mathrm{d}\xi^2} + f(q), \tag{4.9}$$

where $q = q(\xi) \in \mathbb{R}^n$ and $\xi = x - ct$. Typically, one is interested in waves that are $\Delta$-periodic, so that $q(\xi) = q(\xi + \Delta)$, or have a travelling wave form that is either a front or a pulse. These two latter types of orbits are naturally defined in terms of global connections.
Global connection. Consider a continuous-time dynamical system defined by
$$\dot{u} = F(u), \qquad u \in \mathbb{R}^n, \tag{4.10}$$

with (multiple) equilibria $u_0, u_1, \ldots$. An orbit, $\Gamma$, starting at a point $u \in \mathbb{R}^n$ is called homoclinic to the equilibrium point $u_0$ if $u(t) \to u_0$ as $t \to \pm\infty$. An orbit, $\Gamma$, starting at a point $u \in \mathbb{R}^n$ is called heteroclinic to the equilibrium points $u_1$ and $u_2$ if $u(t) \to u_1$ as $t \to -\infty$ and $u(t) \to u_2$ as $t \to +\infty$. For a travelling wave ODE system, a pulse solution corresponds to a homoclinic connection and a front solution to a heteroclinic connection.
Wave stability. Linearising around a travelling wave in the co-moving frame with a perturbation of the form $\mathrm{e}^{\lambda t}v(\xi)$ gives rise to an eigenvalue problem $\mathcal{L}v = \lambda v$, where $\mathcal{L}$ is the linear operator
$$\mathcal{L} = cI_n\frac{\mathrm{d}}{\mathrm{d}\xi} + D\frac{\mathrm{d}^2}{\mathrm{d}\xi^2} + Df(q), \tag{4.11}$$

and $I_n$ is the $n \times n$ identity matrix. Differentiating (4.9) with respect to $\xi$ gives
$$-c\frac{\mathrm{d}}{\mathrm{d}\xi}\frac{\mathrm{d}q}{\mathrm{d}\xi} = D\frac{\mathrm{d}^2}{\mathrm{d}\xi^2}\frac{\mathrm{d}q}{\mathrm{d}\xi} + Df(q)\frac{\mathrm{d}q}{\mathrm{d}\xi}, \tag{4.12}$$

which shows that p(ξ ) = qξ (ξ ), where the subscript denotes differentiation, is an eigenfunction of the operator L with eigenvalue zero: L p = 0. The eigenfunction p is often referred to as the Goldstone mode. The existence of the Goldstone mode corresponds to the invariance of the model with respect


to spatial translations. To see this, consider an infinitesimal translation of $q$: $q(\xi + \varepsilon) = q(\xi) + \varepsilon q_\xi(\xi) + O(\varepsilon^2)$. Hence, such a perturbation only results in a phase shift of the original wave, and is discounted when considering linear stability. The travelling wave $q(\xi)$ is linearly stable if the spectrum of $\mathcal{L}$ (all non-regular points of the resolvent $(\mathcal{L} - \lambda I_n)^{-1}$) is strictly to the left of the imaginary axis. See the book by Kapitula and Promislow [489] for a modern treatise on the spectral and dynamical stability of nonlinear waves.

Generically, one would not expect the stable and unstable manifolds of the equilibrium to intersect. However, by varying $c$, such a scenario may be possible. Once a homoclinic orbit is found (typically by numerical methods such as shooting), it still remains to determine if it is a stable solution of (4.6). This has led to the development of a general mathematical framework for determining wave stability in PDEs in terms of a complex analytic Evans function [288]. After linearising (4.6) around a travelling wave, the eigenvalues that determine stability are given by the zeros of the Evans function. However, even for models of FitzHugh–Nagumo type, the Evans function is not available in closed form (since the travelling pulse cannot generally be obtained without recourse to numerics), and it is appealing to instead consider other reduced models of the type described in Chap. 3 for which explicit analysis is possible. Nonetheless, it is important to note that, close to a certain singular limit, Jones has been able to prove the stability of the fast pulse in the FitzHugh–Nagumo model using geometric singular perturbation theory to compute the Evans function [481] (and see also related later work by Yanagida [955]).

4.2.1.1 McKean model without recovery: fronts

Section 3.7 considered the McKean model of an excitable system and showed that, as a piecewise linear system, it was amenable to explicit analysis for periodic solutions. The same is true when the model is spatially extended, and to emphasise this, we consider the construction of travelling fronts in the model without recovery, that is, the model with only the $v$ component. In this case, the model takes the form $v = v(x, t)$, $x \in \mathbb{R}$, $t \in \mathbb{R}^+$, with
$$\frac{\partial v}{\partial t} = -\frac{v}{\tau} + D\frac{\partial^2 v}{\partial x^2} + \Theta(v - h). \tag{4.13}$$

Here, Θ is a Heaviside step function, and h is a constant threshold. Note that when the Heaviside function in (4.13) is replaced by a smooth function then a theorem by McLeod and Fife [303] gives the existence of a front using geometric arguments. After introducing the travelling wave coordinate ξ = x − ct, then in a co-moving frame, (4.13) becomes


$$-c\frac{\partial v}{\partial \xi} + \frac{\partial v}{\partial t} = -\frac{v}{\tau} + D\frac{\partial^2 v}{\partial \xi^2} + \Theta(v - h), \tag{4.14}$$
where $v = v(\xi, t)$. A travelling wave solution $q(\xi)$ is obtained upon letting $\partial_t v \to 0$, so that
$$D\frac{\mathrm{d}^2 q}{\mathrm{d}\xi^2} + c\frac{\mathrm{d}q}{\mathrm{d}\xi} - \frac{q}{\tau} = -\Theta(q - h). \tag{4.15}$$
Now, consider a front solution where $q(\xi) \geq h$ for $\xi \geq 0$, and $q(\xi) < h$ for $\xi < 0$, and $c > 0$. In this case, the solution to (4.15) takes the form
$$q(\xi) = \begin{cases} \tau + A\mathrm{e}^{m_-\xi}, & \xi \geq 0 \\ B\mathrm{e}^{m_+\xi}, & \xi < 0 \end{cases}, \tag{4.16}$$
with $m_\pm = [-c \pm \sqrt{c^2 + 4D/\tau}]/(2D)$, so that $q$ remains bounded as $\xi \to \pm\infty$. Matching conditions at $\xi = 0$ determine the amplitudes $A$ and $B$ and the wave speed $c$. Linearising (4.14) about the front and writing perturbations in the form $u(\xi)\mathrm{e}^{\lambda t}$ leads to a Green's function representation for $u$, in terms of a Green's function $\eta$ whose Fourier representation has one pole in the upper half complex plane for $\xi > 0$ and one in the lower half complex plane for $\xi < 0$. Using Cauchy's residue theorem, as presented in Box 4.3 on page 134, gives
$$\eta(\xi) = \frac{1}{k_+(\lambda) - k_-(\lambda)}\begin{cases} \mathrm{e}^{k_-(\lambda)\xi}, & \xi \geq 0 \\ \mathrm{e}^{k_+(\lambda)\xi}, & \xi < 0 \end{cases}, \tag{4.24}$$
where

$$k_\pm(\lambda) = \frac{-c \pm \sqrt{c^2 + 4(1/\tau + \lambda)}}{2}. \tag{4.25}$$
Here, it is assumed that $\lambda + 1/\tau > 0$ (i.e., attention is restricted to the right of the essential spectrum). Equation (4.22) can now be solved in the form
$$u = \eta * [\delta(q - h)u], \tag{4.26}$$

where $*$ denotes convolution (a convolution is defined as $f * g(x) = \int_{\mathbb{R}} f(y)g(y - x)\,\mathrm{d}y$). Using the result that $\delta(q(\xi) - h) = \delta(\xi)/|q'(0)|$ (see Box 2.9 on page 51), then
$$u(\xi) = \mathcal{A}(\lambda, \xi)u(0), \qquad \mathcal{A}(\lambda, \xi) = \frac{\eta(\xi)}{|q'(0)|}. \tag{4.27}$$
Demanding a non-trivial solution at $\xi = 0$ gives the condition $\mathcal{E}(\lambda) = 0$, where
$$\mathcal{E}(\lambda) = 1 - \mathcal{A}(\lambda, 0) = 1 - \frac{m_+ - m_-}{k_+(\lambda) - k_-(\lambda)} = 1 - \frac{\sqrt{c^2 + 4/\tau}}{\sqrt{c^2 + 4(1/\tau + \lambda)}}. \tag{4.28}$$
The function $\mathcal{E}(\lambda)$ is identified as the Evans function of the front. Note that $\mathcal{E}(0) = 0$, as expected for a system with translation invariance. Moreover, for $\lambda + 1/\tau > 0$, there are no solutions of $\mathcal{E}(\lambda) = 0$ in the right-hand complex plane. Since $\mathcal{E}'(0) > 0$, the zero eigenvalue is simple, and so the travelling front solution is stable.

Box 4.2: Poles and residues
A complex function $f : \mathbb{C} \to \mathbb{C}$ has an isolated singularity at $z_0$ if $f$ is analytic in the annulus $D = \{z \in \mathbb{C} \mid 0 < |z - z_0| < \delta\}$ for some $\delta > 0$, but is not analytic at $z = z_0$. The singularity is said to be a pole of order $k$ for some positive integer, $k$, if
$$\lim_{z \to z_0}(z - z_0)^k f(z) \neq 0, \qquad \text{and} \qquad \lim_{z \to z_0}(z - z_0)^{k+1} f(z) = 0. \tag{4.29}$$


A pole of order 1 is typically referred to as a simple pole. The function $f$ can be expressed as an infinite Laurent series:
$$f(z) = \sum_{n=-\infty}^{\infty} a_n(z - z_0)^n, \qquad a_n = \frac{1}{2\pi\mathrm{i}}\oint_C \frac{f(z)}{(z - z_0)^{n+1}}\,\mathrm{d}z, \qquad z \in D, \tag{4.30}$$
where $C$ is a circle centred at $z_0$ with radius $r < \delta$. The coefficient $a_{-1}$ in the Laurent series expansion is known as the residue of $f$ at $z_0$. It is common to use the notation $a_{-1} = \mathrm{Res}(f; z_0)$. If $z_0$ is a simple pole, the residue is
$$\mathrm{Res}(f; z_0) = \lim_{z \to z_0}(z - z_0)f(z). \tag{4.31}$$

Moreover, if $f$ can be written as $f(z) = A(z)/B(z)$, where $A$ and $B$ are analytic at $z_0$, $A(z_0) \neq 0$, and $B$ has a simple zero at $z_0$, then the residue can be expressed as $\mathrm{Res}(f; z_0) = A(z_0)/B'(z_0)$, which can be seen via application of L'Hôpital's rule. If $z_0$ is a pole of order $k$, the residue is instead expressed as
$$\mathrm{Res}(f; z_0) = \frac{1}{(k - 1)!}\lim_{z \to z_0}\frac{\mathrm{d}^{k-1}}{\mathrm{d}z^{k-1}}\left[(z - z_0)^k f(z)\right]. \tag{4.32}$$

A good introductory book on complex variables and applications in applied mathematics is that of Ablowitz and Fokas [13].

Box 4.3: The Cauchy Residue Theorem
Suppose that $\gamma \subset \mathbb{C}$ is a closed curve and that $a \notin \gamma$. Then
$$\rho_\gamma(a) = \frac{1}{2\pi\mathrm{i}}\oint_\gamma \frac{\mathrm{d}z}{z - a} \tag{4.33}$$

is called the winding number of $\gamma$ around $a$. Note that $\rho_\gamma$ will be integer valued for any $a$. The winding number essentially counts the number of times that the path traced out by $\gamma$ winds around the point $a$. As such, if $\gamma$ does not enclose $a$, then $\rho_\gamma(a) = 0$. Now suppose that $f : \mathbb{C} \to \mathbb{C}$ is analytic in a simply connected region $D$ except at isolated singularities $z_1, z_2, \ldots, z_m$ (see Box 4.2 on page 133). The Cauchy Residue Theorem tells us that if $\gamma \subset \mathbb{C}$ is a closed curve such that $z_i \notin \gamma$ for all $i = 1, \ldots, m$, then
$$\oint_\gamma f\,\mathrm{d}z = 2\pi\mathrm{i}\sum_{i=1}^{m}\rho_\gamma(z_i)\,\mathrm{Res}(f; z_i). \tag{4.34}$$


This theorem offers a powerful way to evaluate contour integrals in the complex plane, showing that the desired integral can be computed by summing the residues of $f$ at isolated singularities, weighted by the winding numbers around these points. If the singularities are all poles, then the formulas in Box 4.2 can be used to directly evaluate the residues. Moreover, if $\gamma$ is a positively orientated simple closed curve, such as a circle, then (4.34) simplifies to
$$\oint_\gamma f\,\mathrm{d}z = 2\pi\mathrm{i}\sum_{i \in B}\mathrm{Res}(f; z_i), \tag{4.35}$$
where $B$ is the set of indices such that $z_i$ is inside $\gamma$. The residue theorem can be used as an alternative approach to compute integrals over the real line. In this case, the integrand is extended into the complex plane and a closed contour is constructed comprising an interval of the real line together with a semi-circular arc in the upper (or lower) half complex plane. In the limit as the radius of the arc tends to infinity, the interval of the real line included in the contour also grows to cover the whole real line (else the curve would not be closed). After construction of this contour, the residue theorem can be applied directly, summing over all poles in the upper (lower) half complex plane.
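The residue theorem lends itself to a direct numerical check (the function below is an illustrative choice, not taken from the text): the trapezoidal rule applied to a closed contour converges rapidly for smooth periodic integrands, and the result can be compared with $2\pi\mathrm{i}$ times the sum of residues computed via $\mathrm{Res}(f; z_0) = A(z_0)/B'(z_0)$ from Box 4.2.

```python
import cmath, math

def contour_integral(f, radius=2.0, n=2000):
    """Trapezoidal rule for a contour integral around |z| = radius;
    spectrally accurate for smooth periodic integrands."""
    total = 0.0 + 0.0j
    for j in range(n):
        theta = 2.0 * math.pi * j / n
        z = radius * cmath.exp(1j * theta)
        total += f(z) * 1j * z            # f(z) dz, with dz = i z dtheta
    return total * (2.0 * math.pi / n)

f = lambda z: cmath.exp(z) / (z*z + 1.0)  # simple poles at z = i and z = -i
numeric = contour_integral(f)
# Residues via Res(f; z0) = A(z0)/B'(z0), with A(z) = e^z and B(z) = z^2 + 1:
residue_sum = cmath.exp(1j)/(2j) + cmath.exp(-1j)/(-2j)   # = sin(1)
exact = 2j * math.pi * residue_sum
```

Both poles sit inside the contour $|z| = 2$, so the numerically evaluated integral matches $2\pi\mathrm{i}\sin(1)$ to high precision.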

4.2.1.2 A leaky integrate-and-fire model: Pulses and periodics

To make a fair comparison of travelling wave solutions of the McKean model with either the full Hodgkin–Huxley model or experimental data, we further need to consider the recovery variable that acts to bring down the activity in the wake of a travelling front and turn it into a pulse (and see Prob. 4.2). Another way is to consider the shape of an action potential as universal and explore how this shape can be propagated along a spatially extended axon. We do this here by considering a spatially extended leaky integrate-and-fire (LIF) model. A simple model of an excitable fibre is one that feels an effect whenever the local dynamics reaches threshold. The fibre itself may have a simple description as a linear PDE with a source term (action potential shape) $I$ as:
$$\frac{\partial v}{\partial t} = -\frac{v}{\tau} + D\frac{\partial^2 v}{\partial x^2} + I(x, t), \qquad x \in \mathbb{R},\ t \in \mathbb{R}^+, \tag{4.36}$$

whilst the ‘effect’ could take the form I (x, t) = ∑m η (t − Tm (x)), for some universal shape η (t), mimicking that of the action potential, centred around a set of firing times Tm (x) at position x with m ∈ Z. Here, the generation of the firing times is modelled according to a simple LIF process for a dynamic variable u = u(x, t) that resets to zero whenever it reaches a threshold h:

$$\tau_u\frac{\partial u}{\partial t} = -u + v, \qquad T_m(x) < t < T_{m+1}(x), \tag{4.37}$$


with $u(x, T_m(x)) = h$ and $u(x, T_m^+(x)) = 0$. The firing times, $T_m$, are determined iteratively according to
$$T_m(x) = \inf\{t \mid u(x, t) \geq h;\ u_t(x, t) > 0;\ t \geq T_{m-1}(x) + \tau_r\}, \tag{4.38}$$

where $\tau_r$ represents a refractory timescale. To model the refractory dynamics following reset, a clamp is employed such that $u(x, t) = 0$ for $T_m(x) < t < T_m(x) + \tau_r$. A travelling solitary wave is described with an ansatz of the form $T(x) = x/c$, where $c$ denotes the speed of the wave, and each LIF process fires only once (and so we suppress the index $m$ from now on). In the travelling wave frame $\xi = ct - x$, the wave, with time-independent profile $q(\xi)$, is described by the second-order ODE:
$$D\frac{\mathrm{d}^2 q}{\mathrm{d}\xi^2} - c\frac{\mathrm{d}q}{\mathrm{d}\xi} - \frac{q}{\tau} = -\eta(\xi/c). \tag{4.39}$$

For the choice of an idealised, square-wave action potential shape $\eta(t) = \sigma\Theta(t)\Theta(\tau_\eta - t)/\tau_\eta$, with amplitude $\sigma$ and duration $\tau_\eta$, a travelling pulse solution with $\lim_{\xi \to \pm\infty} v(\xi) = 0$ and $c > 0$ takes the form
$$q(\xi) = \begin{cases} \alpha_1\mathrm{e}^{m_+\xi}, & -\infty < \xi < 0 \\ \alpha_2\mathrm{e}^{m_+\xi} + \alpha_3\mathrm{e}^{m_-\xi} + \tau\sigma/\tau_\eta, & 0 < \xi < c\tau_\eta \\ \alpha_4\mathrm{e}^{m_-\xi}, & \xi > c\tau_\eta \end{cases} \tag{4.40}$$
with $m_\pm = [c \pm \sqrt{c^2 + 4D/\tau}]/(2D)$. By ensuring the continuity of the solution and its first derivative at $\xi = 0$ and $\xi = c\tau_\eta$, one may solve for the unknowns $\alpha_1, \ldots, \alpha_4$ as
$$\alpha_1 = \frac{m_-}{m_+}\alpha_3\left[1 - \exp(-m_+c\tau_\eta)\right], \qquad \alpha_2 = -\frac{m_-}{m_+}\alpha_3\exp(-m_+c\tau_\eta),$$
$$\alpha_3 = \frac{\tau\sigma m_+}{\tau_\eta(m_- - m_+)}, \qquad \alpha_4 = \alpha_3\left[1 - \exp(-m_-c\tau_\eta)\right]. \tag{4.41}$$
The self-consistent speed of the travelling wave may be determined by demanding that $u(0) = h$. In the travelling wave frame, $c\tau_u\,\mathrm{d}u/\mathrm{d}\xi = -u + q$, which may be solved using an integrating factor to give:
$$u(\xi) = \frac{1}{c\tau_u}\int_{-\infty}^{\xi}\mathrm{e}^{-(\xi - \xi')/(c\tau_u)}q(\xi')\,\mathrm{d}\xi'. \tag{4.42}$$
Hence, the speed of the travelling pulse satisfies
$$h = \frac{\alpha_1}{m_+c\tau_u + 1}. \tag{4.43}$$

This is an explicit equation for h = h(c) that can be easily recast to give c = c(h), as shown in Fig. 4.3, from which it is apparent that there are two branches of solutions (just as in the McKean model with recovery, and see Prob. 4.2). Moreover, a linear


Fig. 4.3 Speed of the travelling pulse solution in the LIF fibre model. A linear stability analysis shows that the fast wave is stable (upper solid curve) and the slow wave is unstable (lower dashed curve). Here τη = τu = τ = D = σ = 1.

stability analysis shows that it is the faster of the two branches that is stable (see Prob. 4.3). It is also possible to construct periodic wave solutions of the simple LIF model above. A periodic travelling wave is one for which the firing times satisfy $T_m(x) = m\Delta + x/c = (m + kx)\Delta$. Here, $k$ is the wavenumber and $c = 1/(k\Delta)$ is the wave velocity. Hence, all LIF processes fire at regular intervals, $\Delta$, but at different times depending on their position along the fibre. For a periodic travelling wave, $I(x, t) = F(t - x/c)$, where $F(t) = \sum_m \eta(t - m\Delta)$ is a $\Delta$-periodic function. The travelling wave ODE is given by (4.39) under the replacement of the right-hand side by $-F(\xi/c)$ instead of $-\eta(\xi/c)$. In this case, a solution to (4.39) has the general form
$$q(\xi) = \begin{cases} \beta_1\mathrm{e}^{m_+\xi} + \beta_2\mathrm{e}^{m_-\xi} + \tau\sigma/\tau_\eta, & 0 < \xi < c\tau_\eta \\ \beta_3\mathrm{e}^{m_+\xi} + \beta_4\mathrm{e}^{m_-\xi}, & c\tau_\eta < \xi < c\Delta \end{cases}. \tag{4.44}$$
By demanding periodicity of the solution, continuity of the solution, and continuity of its derivative, four conditions are generated that may be solved for the unknowns $\beta_1, \ldots, \beta_4$. These four coefficients are given by

\[
\beta_1 = \kappa m_- \frac{1 - \mathrm{e}^{m_+ c(\Delta-\tau_\eta)}}{\mathrm{e}^{m_+ c\Delta} - 1}, \quad
\beta_2 = -\kappa m_+ \frac{1 - \mathrm{e}^{m_- c(\Delta-\tau_\eta)}}{\mathrm{e}^{m_- c\Delta} - 1}, \quad
\beta_3 = \kappa m_- \frac{1 - \mathrm{e}^{-m_+ c\tau_\eta}}{\mathrm{e}^{m_+ c\Delta} - 1}, \quad
\beta_4 = -\kappa m_+ \frac{1 - \mathrm{e}^{-m_- c\tau_\eta}}{\mathrm{e}^{m_- c\Delta} - 1}, \tag{4.45}
\]

where κ = τσ/(τ_η(m_- − m_+)). The speed of the wave is then determined in a self-consistent manner by demanding that the LIF process at position x reaches threshold at times T_m(x). The firing condition, u(cΔ) = h, provides an implicit equation for c = c(Δ) in the form (remembering that u = 0 when the system is refractory)

\[
h = \frac{1}{c\tau_u}\int_{c\tau_r}^{c\Delta} \mathrm{e}^{-(c\Delta-\xi)/(c\tau_u)}\, q(\xi)\,\mathrm{d}\xi. \tag{4.46}
\]
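As a sanity check on the coefficients (4.45), the following Python sketch (with arbitrarily chosen illustrative values of c, Δ, and τ_η, which are not from the text) verifies numerically that they enforce continuity of q and q′ at ξ = cτ_η together with periodicity between ξ = 0 and ξ = cΔ.

```python
import math

# Illustrative parameter choices (not from the text)
tau = tau_eta = D = sigma = 1.0
c, Delta = 1.0, 2.0  # trial speed and period

s = math.sqrt(c * c + 4.0 * D / tau)
mp, mm = (c + s) / (2.0 * D), (c - s) / (2.0 * D)
kappa = tau * sigma / (tau_eta * (mm - mp))

# Coefficients (4.45)
b1 = kappa * mm * (1 - math.exp(mp * c * (Delta - tau_eta))) / (math.exp(mp * c * Delta) - 1)
b2 = -kappa * mp * (1 - math.exp(mm * c * (Delta - tau_eta))) / (math.exp(mm * c * Delta) - 1)
b3 = kappa * mm * (1 - math.exp(-mp * c * tau_eta)) / (math.exp(mp * c * Delta) - 1)
b4 = -kappa * mp * (1 - math.exp(-mm * c * tau_eta)) / (math.exp(mm * c * Delta) - 1)

def q_in(x):   # solution on 0 < xi < c*tau_eta, cf. (4.44)
    return b1 * math.exp(mp * x) + b2 * math.exp(mm * x) + tau * sigma / tau_eta

def dq_in(x):
    return mp * b1 * math.exp(mp * x) + mm * b2 * math.exp(mm * x)

def q_out(x):  # solution on c*tau_eta < xi < c*Delta
    return b3 * math.exp(mp * x) + b4 * math.exp(mm * x)

def dq_out(x):
    return mp * b3 * math.exp(mp * x) + mm * b4 * math.exp(mm * x)

# Continuity at xi = c*tau_eta, and periodicity between xi = 0 and xi = c*Delta
checks = [q_in(c * tau_eta) - q_out(c * tau_eta),
          dq_in(c * tau_eta) - dq_out(c * tau_eta),
          q_in(0.0) - q_out(c * Delta),
          dq_in(0.0) - dq_out(c * Delta)]
print(max(abs(v) for v in checks))
```

All four residuals vanish up to round-off, confirming the algebra behind (4.45).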

If, for simplicity, one fixes τ_η = τ_r, then the speed of a periodic travelling wave is completely determined by the equation

\[
h = \beta_3\, \frac{\mathrm{e}^{m_+ c\Delta} - \mathrm{e}^{(\tau_r-\Delta)/\tau_u}\,\mathrm{e}^{m_+ c\tau_r}}{m_+ c\tau_u + 1}
  + \beta_4\, \frac{\mathrm{e}^{m_- c\Delta} - \mathrm{e}^{(\tau_r-\Delta)/\tau_u}\,\mathrm{e}^{m_- c\tau_r}}{m_- c\tau_u + 1}. \tag{4.47}
\]

Note that, as Δ → ∞, the formula for the speed of a solitary pulse given by (4.43) is recovered, as expected. A typical dispersion curve is shown in Fig. 4.4, where it is clear that the minimum wave period is set by the refractory timescale. Moreover, as Δ → τ_r from above, the speed of the periodic wave can be greater than that of the solitary pulse. In this case, the wave is often referred to as having a supernormal speed. This is also observed in dispersion curves of the full Hodgkin–Huxley axon model [630]. The above approach can also be extended to cover the case of quasi-active membrane (see Chapter 2 Sec. 2.7), in which case the dispersion curve can have multiple stationary points [466]. The shape of the dispersion curve as it increases from c = 0 (just above Δ = τ_r) is such that the reciprocal dispersion curve c^{−1} = c^{−1}(Δ) adopts an approximately exponential form. This shape can be directly attributed to the inclusion of an absolute refractory period within the model, and in the next section, we will see that it plays an important role in the generation of solutions that connect periodic spike trains.

4.2.2 A kinematic analysis of spike train propagation

As well as supporting solitary and periodic travelling waves, smooth nerve fibre models can also support more exotic spike train patterns. For example, the Hodgkin–Huxley model of an axon also supports action potentials that are irregularly spaced and that travel with different speeds [153]. A kinematic theory of wave propagation is one attempt to follow the progress of action potentials at the expense of a detailed description of the pulse shape [750]. Supposing that a pulse has a well-defined arrival time at some position x, the arrival of the mth pulse at position x is denoted by T_m(x).


Fig. 4.4 Dispersion curve c = c(Δ) for a periodic travelling wave in the LIF fibre model. Periodic waves are not possible for periods below that of the refractory timescale, shown by the vertical dotted line. Note that for large Δ, the speed of a solitary pulse is recovered (dashed grey line). The parameters are D = 1, τr = 1, τu = 0.5, τ = 0.02, h = 0.05, and σ = 20.

A periodic wave, of period Δ, is then completely specified by the set of ordinary differential equations

\[
\frac{\mathrm{d}T_m(x)}{\mathrm{d}x} = \frac{1}{c(\Delta)}, \quad m \in \mathbb{Z}, \tag{4.48}
\]

with solution T_m(x) = mΔ + x/c, where c(Δ) is the dispersion curve for the given model. The kinematic formalism assumes that there is a description of irregular spike trains in the above form such that

\[
\frac{\mathrm{d}T_m(x)}{\mathrm{d}x} = F(T_m(x), T_{m-1}(x), T_{m-2}(x), \ldots), \tag{4.49}
\]

which must reduce to (4.48) for periodic waves. Assuming that the most recent spike in the train has the strongest influence, it is further assumed that

\[
F(T_m(x), T_{m-1}(x), T_{m-2}(x), \ldots) \approx F(T_m(x), T_{m-1}(x)), \tag{4.50}
\]

which can only reduce to (4.48) if F(Tm (x), Tm−1 (x)) = F(Tm (x) − Tm−1 (x)). The function F(Δ) is then chosen as c−1 (Δ), the reciprocal dispersion curve for the periodic wave. Hence, within the kinematic framework, the dynamics of irregular travelling wave trains are described by (4.48) with the replacement c(Δ) → c(Δm (x)), where Δm (x) = Tm (x) − Tm−1 (x) is recognised as the instantaneous period of the wave train at position x, often termed the interspike interval (ISI).

A steadily propagating wave train is stable if, under the perturbation T_m(x) → T_m(x) + u_m(x), the system converges to the unperturbed solution during propagation, i.e., u_m(x) → 0 as x → ∞. For the case of uniformly propagating periodic travelling waves of period Δ (same interval between all successive pairs of spikes), substitution of the perturbed solution in (4.48) and working to first order in u_m yields

\[
\frac{\mathrm{d}u_m}{\mathrm{d}x} = -\frac{c'(\Delta)}{c^2(\Delta)}\left[u_m - u_{m-1}\right]. \tag{4.51}
\]

To solve this linear differential-difference equation, it is enough to recognise the similarity to a linear differential equation for a vector of states (u_m(x), u_{m-1}(x), …) and then use matrix exponentials, as discussed in Box 3.3 on page 76, to write the general solution for x > 0 as

\[
u_m(x) = \sum_p G_{mp}(x)\, u_p(0), \quad G_{mp}(x) = \mathrm{e}^{-\gamma x}\left[\mathrm{e}^{Kx}\right]_{mp}, \quad K_{mp} = \gamma\,\delta_{m,p+1}, \tag{4.52}
\]

where γ = c′(Δ)/c²(Δ) and δ is the Kronecker delta. Using the series expansion for the matrix exponential and noting that [K^q]_{mp} = γ^q δ_{m,p+q}, we may write

\[
G_{mp}(x) = \mathrm{e}^{-\gamma x}\sum_{q=0}^{\infty} \frac{x^q}{q!}\left[K^q\right]_{mp} = \mathrm{e}^{-\gamma x}\,\frac{(\gamma x)^{m-p}}{(m-p)!}. \tag{4.53}
\]
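The closed form (4.53) can be checked against direct numerical integration of (4.51). The Python sketch below (illustrative value of γ, step size, and an initial perturbation of the leading spike only — all our own choices) compares an Euler solve with the analytical Green's function.

```python
import math

gamma = 0.8          # gamma = c'(Delta)/c^2(Delta) > 0 (illustrative value)
M, X, dx = 8, 5.0, 1e-4
steps = int(X / dx)

# Perturb only the leading spike (index 0); there is no u_{-1} for the leader,
# so the leading perturbation simply decays at rate gamma.
u = [1.0] + [0.0] * M
for _ in range(steps):
    # du_m/dx = -gamma (u_m - u_{m-1}), cf. (4.51)
    new = [u[0] - dx * gamma * u[0]]
    for m in range(1, M + 1):
        new.append(u[m] - dx * gamma * (u[m] - u[m - 1]))
    u = new

# Analytical solution (4.52)-(4.53) with u_p(0) = delta_{p,0}:
# u_m(X) = exp(-gamma X) (gamma X)^m / m!
analytic = [math.exp(-gamma * X) * (gamma * X) ** m / math.factorial(m)
            for m in range(M + 1)]
err = max(abs(a - b) for a, b in zip(u, analytic))
print(err)
```

The perturbation spreads backwards through the train as a decaying Poisson-like profile, and every u_m tends to zero, consistent with stability for c′(Δ) > 0.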

Thus, a uniformly spaced, infinite wave train with period Δ is stable (within the kinematic approximation) if and only if c′(Δ) > 0 (assuming that c > 0). For a finite non-uniform spike train, a similar analysis shows that it will be stable if and only if c′(Δ_m) > 0 for each m (ignoring the zero eigenvalue associated with translations of the leading pulse) [267]. This analysis also predicts that wave bifurcations will occur at stationary points in the dispersion curve, and this can be shown to lead to period-doubling bifurcations (and see [592] for a further discussion). For large periods, the slope of the dispersion curve in Fig. 4.4 is essentially flat and the speed of the periodic waves approximates that of the solitary pulses. For smaller values of the period, where one does not encounter supernormal waves, the stable branch of periodic waves has an exponential shape, which may be fitted with an equation of the form c^{−1}(Δ) = K + A exp(−BΔ) for some constants K, A, and B. After the rescalings T_m(x) → B(T_m(x) − mΔ_r + x/c(Δ_r)) and x → AB exp(−BΔ_r)x for some arbitrary Δ_r, the kinematic equations become

\[
\frac{\mathrm{d}T_m}{\mathrm{d}x} = \exp\left(-T_m(x) + T_{m-1}(x)\right) - 1, \tag{4.54}
\]

where we choose Δr such that [K − c−1 (Δr )] exp(BΔr )/A = −1. The general solution of this system has previously been given by Horikawa [434] (and see Prob. 4.6). Importantly, for initial data in the form of a step change in the ISIs of the form

\[
T_m(0) = \begin{cases}
m\Delta_L, & m \le m^*, \\
m^*\Delta_L + (m - m^*)\Delta_R, & m > m^*,
\end{cases} \qquad m^* = \text{constant}, \tag{4.55}
\]

where Δ_{L,R} ∈ ℝ⁺, the general solution shows that the ISIs may be written as a sequence Δ_m(x) = Δ(κx − ωm), for ω = Δ_R − Δ_L and κ = exp(−Δ_L) − exp(−Δ_R) (assuming Δ_R > Δ_L), where

\[
\Delta(z) = -\ln\left[\exp(-\Delta_L) + \frac{\exp(-\Delta_R) - \exp(-\Delta_L)}{1 + \exp(z)}\right]. \tag{4.56}
\]

It is clear that Δ(z) → Δ_L as z → ∞ and Δ(z) → Δ_R as z → −∞. Hence, solutions may be regarded as connections between periodic spike trains with interspike intervals Δ_L and Δ_R. Moreover, the position of the front connecting the two periodic orbits moves with a constant group velocity κ/ω. The front moves backwards for Δ_R > Δ_L and forwards for Δ_L > Δ_R. In Fig. 4.5, a plot of the sequence of ISIs given by (4.56) is shown. Since the solutions describing connections between periodic orbits are constructed from a dispersion curve with c′(Δ) > 0 for all realisable Δ, they are expected to be stable.
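The front profile (4.56) is easy to probe numerically. The short Python sketch below confirms the limiting ISIs and the monotone transition between them for the values Δ_L = 1 and Δ_R = 2 used in Fig. 4.5 (the evaluation points are illustrative choices).

```python
import math

DL, DR = 1.0, 2.0  # interspike intervals, as in Fig. 4.5

def front(z):
    """ISI profile (4.56) connecting spike trains with ISIs DL and DR."""
    return -math.log(math.exp(-DL) + (math.exp(-DR) - math.exp(-DL)) / (1.0 + math.exp(z)))

# Limits: Delta(z) -> DL as z -> +infinity and Delta(z) -> DR as z -> -infinity
print(front(30.0), front(-30.0))

# Group velocity kappa/omega of the front position
omega = DR - DL
kappa = math.exp(-DL) - math.exp(-DR)
print(kappa / omega)
```

Evaluating `front` along z = κx − ωm reproduces the ISI sequence Δ_m(x) plotted in Fig. 4.5.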

4.2.3 Myelinated nerve fibre models

There are now many mathematical studies of models of smooth nerve fibres, as discussed above in Sec. 4.2.1, though relatively few for myelinated nerves. Many vertebrate nerves, including that of the frog studied by Galvani, and axons found in mammalian brains, are bundles of discrete, periodic structures comprising active nodes (areas where action potentials can be generated). These ‘nodes of Ranvier’ are separated by relatively long fibre segments that are insulated by myelin. Myelin is a layer of a fatty insulating substance formed by two types of glial cells: Schwann cells ensheathing peripheral neurons and oligodendrocytes insulating those of the central nervous system. In myelinated nerve fibres, a wave of activity jumps from one node to the next, enabling an especially rapid mode of electrical impulse propagation called saltatory conduction (much faster than even the fastest unmyelinated axon can sustain). To describe this process, we are led naturally to the consideration of hybrid continuum–lattice models.

4.2.4 A Fire-Diffuse-Fire model

Here, a simple phenomenological model for a myelinated nerve fibre is considered. This is modelled with LIF threshold units embedded at a set of points along a continuous passive fibre and is described by (4.36) with the further choice


Fig. 4.5 An illustration of the travelling front (connecting two spike trains with different ISIs) obtained analytically from the kinematic description of the LIF axonal fibre model. The initial data at x = 0 comprise a step sequence in the ISIs with Δ L = 1 and Δ R = 2. Top: Plot showing how the ISI at specified locations changes with the number of spikes experienced at that point. Bottom: Instantaneous profile of the travelling wave solution at distinct, increasing time points.

\[
I(x,t) = \sum_{n \in \mathbb{Z}} \delta(x - x_n) \sum_{m \in \mathbb{Z}} \eta(t - T_m(x)). \tag{4.57}
\]

Here, the LIF units, modelling the dynamics at the nodes of Ranvier, are positioned at points x = x_n. Since (4.36) is a linear PDE, it may be solved using a Green’s function approach (see Box 4.4 on page 143) as

\[
v(x,t) = \int_{-\infty}^{\infty} \mathrm{d}y \int_{-\infty}^{t} \mathrm{d}s\, G(x-y, t-s)\, I(y,s) + \int_{-\infty}^{\infty} \mathrm{d}y\, G(x-y, t)\, v(y,0). \tag{4.62}
\]

Box 4.4: Green’s function for the infinite cable equation

Consider the linear PDE

\[
G_t = -\frac{G}{\tau} + D G_{xx}, \quad G(x,0) = \delta(x). \tag{4.58}
\]

Introduce the Fourier transform

\[
\widehat{G}(k,t) = \int_{-\infty}^{\infty} \mathrm{e}^{-ikx}\, G(x,t)\,\mathrm{d}x, \quad
G(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{e}^{ikx}\, \widehat{G}(k,t)\,\mathrm{d}k, \tag{4.59}
\]

to give

\[
\widehat{G}_t(k,t) = -\varepsilon(k)\,\widehat{G}(k,t), \quad \widehat{G}(k,0) = 1, \quad \varepsilon(k) = \frac{1}{\tau} + Dk^2, \tag{4.60}
\]

with solution \(\widehat{G}(k,t) = \widehat{G}(k,0)\exp(-\varepsilon(k)t)\). Inverting the Fourier transformation yields

\[
G(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{e}^{ikx}\,\mathrm{e}^{-\varepsilon(k)t}\,\mathrm{d}k
= \mathrm{e}^{-t/\tau}\,\mathrm{e}^{-x^2/(4Dt)}\,\frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{e}^{-Dt[k + ix/(2Dt)]^2}\,\mathrm{d}k
= \frac{1}{\sqrt{4\pi Dt}}\,\mathrm{e}^{-t/\tau}\,\mathrm{e}^{-x^2/(4Dt)}, \tag{4.61}
\]

where the last result is obtained by completing the square and using the fact that \(\int_{-\infty}^{\infty}\exp(-x^2)\,\mathrm{d}x = \sqrt{\pi}\).

As well as introducing the concept of the Green’s function, George Green provided many other useful mathematical tools in his 1828 essay [373], including that of a potential function and Green’s theorem (relating a line integral around a simple closed curve to a double integral over the plane region bounded by that curve). He is a son of Nottingham, and his family mill can be seen from the University of Nottingham Park campus on a clear day.

Here, the Green’s function is given explicitly by

\[
G(x,t) = \frac{1}{\sqrt{4\pi Dt}}\,\mathrm{e}^{-t/\tau}\,\mathrm{e}^{-x^2/(4Dt)}\,\Theta(t). \tag{4.63}
\]
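A quick numerical check of (4.63): diffusion merely redistributes membrane potential, so integrating G(x, t) over all x should give exactly e^{−t/τ} (only the leak removes 'mass'). The Python sketch below uses an illustrative trapezoidal quadrature on a truncated domain (our own choices of domain and resolution).

```python
import math

def G(x, t, D=1.0, tau=1.0):
    """Green's function (4.63) of the infinite cable (t > 0)."""
    return math.exp(-t / tau) * math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

def total(t, L=20.0, n=4000):
    """Trapezoidal approximation of the integral of G(., t) over [-L, L]."""
    dx = 2 * L / n
    xs = [-L + i * dx for i in range(n + 1)]
    w = [0.5 if i in (0, n) else 1.0 for i in range(n + 1)]
    return dx * sum(wi * G(x, t) for wi, x in zip(w, xs))

for t in (0.1, 0.5, 2.0):
    print(t, total(t), math.exp(-t))  # quadrature tracks exp(-t/tau) with tau = 1
```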

For simplicity, assume that the LIF process is fast (τ_u → 0) so that firing events are triggered whenever v(x,t) crosses the threshold h from below. Moreover, focussing on the propagation of a solitary wave, with a saltatory pulse jumping from one node of Ranvier to another, one need only consider a single firing event at an arbitrary cell position x = x_n. In this case, and after dropping transients, (4.62) becomes

\[
v(x,t) = \sum_{n \in \mathbb{Z}} \int_{-\infty}^{t} \mathrm{d}s\, G(x - x_n, t - s)\,\eta(s - T(x_n)), \tag{4.64}
\]

and we recover the Fire–Diffuse–Fire (FDF) model of Keizer et al. [498], originally developed to describe the saltatory spread of calcium waves in cardiac myocytes.

and we recover the Fire–Diffuse–Fire (FDF) model of Keizer et al. [498], originally developed to describe the saltatory spread of calcium waves in cardiac myocytes. When the nodes of Ranvier are placed on a regular lattice of spacing d, so that xn = nd, it is natural to look for a saltatory wave solution with the sole firing time given by T (nd) = nΔ, for some Δ > 0, so that the speed of threshold crossing events is given by c = d/Δ. The, as yet, undetermined parameter Δ will be referred to as the period of a wave as it measures the time between successive firing events that make up a saltatory travelling pulse. Assuming an idealised square-wave action potential shape η (t) = σ Θ(t)Θ(τη − t)/τη , and that only the sites with index up to N have crossed threshold, then v(x, t) =

σ τη

N

∑

min(t−nΔ,τη )

n=−∞ 0

dsG(x − nd, t − s − nΔ), t > N d.

(4.65)

Long-time solutions that cause sites with increasing n to cross threshold are obtained by taking the large N limit in (4.65) and neglecting all terms in the sum with n ≤ 0, leading to the equation

\[
v(Nd, N\Delta) = \sigma \sum_{n=1}^{N} \mathcal{G}(nd, n\Delta), \quad
\mathcal{G}(x,t) = \frac{1}{\tau_\eta}\int_{0}^{\tau_\eta} G(x, t - s)\,\mathrm{d}s. \tag{4.66}
\]

Hence, one may determine the speed of the travelling wave in a self-consistent manner by demanding

\[
\lim_{N \to \infty} v(Nd, N\Delta) = h. \tag{4.67}
\]

Note from (4.65) that waves do not propagate with an invariant shape, even though the threshold crossing times occur on a regularly spaced temporal lattice, as illustrated in Fig. 4.6. The issue of wave stability is dealt with in [181]. In the special case τ_η → 0, it can be seen from (4.66) that 𝒢(x,t) → G(x,t). Moreover, in the limit τ → ∞, the implicit equation for the wave speed given by (4.67) can be written in the form h̄ = g(Δ; τ_D), where τ_D = d²/D is the intersite diffusion time scale, h̄ = hd/σ is a rescaled threshold, and g(Δ; τ_D) is monotonic in Δ and given by

\[
g(\Delta; \tau_D) = \sum_{n=1}^{\infty} \sqrt{\frac{\tau_D}{4\pi n\Delta}}\; \mathrm{e}^{-n\tau_D/(4\Delta)}. \tag{4.68}
\]

Fig. 4.6 An example of a saltatory travelling wave analytically determined by equation (4.65), with the period Δ of the wave determined self-consistently according to (4.67). The solid curves show the state of the system at t = 5Δ and t = 6Δ, respectively, illustrating the property that v(x + d, t + Δ) = v(x, t). The dashed curve shows the state of the system at an intermediate time t = 5Δ + Δ/10, which illustrates that the wave does not propagate with a constant profile, and instead jumps between nodes at x = nd. The parameters are τ_η = 0, τ = 1/2, d = D = σ = 1, and h = 0.1.
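With τ_D = 1, the period follows from the scalar equation h̄ = g(Δ; 1), which can be solved by bisection since g increases monotonically from 0 towards 1. The Python sketch below uses an illustrative rescaled threshold h̄ = 0.1, series truncation, and bracketing interval (all our own choices, not from the text).

```python
import math

def g(Delta, N=4000):
    """Series (4.68) with tau_D = 1, truncated at N terms."""
    return sum(math.sqrt(1.0 / (4 * math.pi * n * Delta)) * math.exp(-n / (4 * Delta))
               for n in range(1, N + 1))

def period(h_bar, lo=1e-3, hi=50.0):
    """Bisection for Delta solving h_bar = g(Delta; 1)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid) < h_bar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Delta = period(0.1)
d = D = 1.0
print(Delta, D / (d * Delta))  # period, and wave speed c = D/(d*Delta)
```

Since Δ is independent of D once time is rescaled, the printed speed scales linearly with D, as discussed below.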

If time is rescaled such that τ_D = 1, then the speed of the wave is given by c = d/(Δτ_D) = D/(dΔ), where Δ is the solution to h̄ = g(Δ; 1). Since this latter equation is independent of D, c scales linearly with D. Thus, in comparison to a continuous model, where c is expected to scale with √D, the simple FDF model provides a nice mechanistic description of the fast waves that can be seen in myelinated tissue. For the effects of demyelination on axonal transmission in an FDF model, see [657].

4.3 Dendrites

Although experimental and theoretical work on signal processing in neurons began more than a century ago, the role of an individual neuron in neural computation has long been debated. Traditionally, relatively simple computational properties have been attributed to the individual cell and complex computations to networks of these simple elements. However, this assumption is oversimplified in view of the properties of real neurons and the computations they perform [588]. In particular, neurons are spatially extended and can have elaborate dendritic structures, such as that of a Purkinje cell shown in Fig. 4.7.

Fig. 4.7 Left: Camera Lucida drawing of a Purkinje cell by Ramón y Cajal, showing a space-filling dendritic tree (public domain figure from Wikimedia Commons). Right: Digitally reconstructed Purkinje cell from Neuromorpho.org rendered using the HBP Neuron Morphology Viewer [4]

Dendrites are involved in receiving and integrating thousands of inputs, via both chemical and electrical synapses (see Sec. 4.4.1 and Sec. 4.4.2 for further discussion), from other cells, as well as in determining the extent to which action potentials are produced by neurons. The theoretical work of Rall [794] during the 1950s and 1960s revolutionised the mathematical treatment of dendrites through the introduction of a new framework, cable theory and compartmentalisation, for modelling these complex structures. He demonstrated that the structural and electrical properties of dendrites play a critical role in the way a neuron processes its synaptic inputs. For an excellent overview of the work of Rall, see the book by Segev et al. [795].

4.3.1 Cable modelling

In cable modelling, a dendritic segment is regarded as a passive electrical conductor. As such, its mathematical description is very similar to that of an excitable axon, though without the need to model any nonlinear ionic currents. Referring back to the discussion of smooth nerve fibres in Sec. 4.2.1, we are led naturally to a model for a uniform cable in the form of a second-order linear PDE:

\[
C\frac{\partial V}{\partial t} = -g_L V + \frac{d}{4R}\frac{\partial^2 V}{\partial x^2} + I, \quad x \in \mathbb{R},\; t > 0. \tag{4.69}
\]

Here, V(x,t) represents the membrane potential at position x along a cable at time t, measured relative to the resting potential of the membrane (with a leak current −g_L V), in response to a current injection I(x,t). The longitudinal current at position x is given by the term

\[
-\frac{\pi d^2}{4R}\frac{\partial V}{\partial x}. \tag{4.70}
\]

Let τ = C/g_L be the cell membrane time constant, λ = √(d/(4Rg_L)) the membrane length constant, and D = λ²/τ the diffusion coefficient, to generate the standard form of the cable equation:

\[
\frac{\partial V}{\partial t} = -\frac{V}{\tau} + D\frac{\partial^2 V}{\partial x^2} + A, \quad x \in \mathbb{R},\; t > 0, \tag{4.71}
\]

where A = A(x,t) is the input signal I(x,t)/C. The cable equation may be solved using Green’s function techniques, and for the case of the infinitely long cable (where boundary conditions are of no consequence), the solution is given by

\[
V(x,t) = \int_{-\infty}^{\infty} \mathrm{d}y \int_{-\infty}^{t} \mathrm{d}s\, G(x-y, t-s)\, A(y,s) + \int_{-\infty}^{\infty} \mathrm{d}y\, G(x-y, t)\, V(y,0), \tag{4.72}
\]

where the Green’s function G(x,t) is given by (4.63) (see also the discussion in Box 4.4 on page 143). For a simple pulsatile input current of the form A(x,t) = δ(x)δ(t), the response for vanishing initial data is simply V(x,t) = G(x,t). For a time-independent input signal, the steady state of (4.71) satisfies the ODE

\[
\left(1 - \lambda^2\frac{\mathrm{d}^2}{\mathrm{d}x^2}\right)V(x) = \tau A(x), \tag{4.73}
\]

which may be readily solved using Fourier transforms as

\[
V(x) = \tau\int_{-\infty}^{\infty} \frac{\mathrm{d}k}{2\pi}\, \frac{\widehat{A}(k)\,\mathrm{e}^{ikx}}{1 + \lambda^2 k^2}, \tag{4.74}
\]

where \(\widehat{A}(k) = \int_{-\infty}^{\infty} \mathrm{d}x\, A(x)\,\mathrm{e}^{-ikx}\). For the choice of a point source at the origin with A(x) = A₀δ(x), then \(\widehat{A}(k) = A_0\), and (4.74) can be evaluated using a semi-circular contour in the upper (lower) half complex plane for x < 0 (x > 0) to yield (see Box 4.3 on page 134)

\[
V(x) = \frac{A_0\tau}{2\lambda}\exp\left(-|x|/\lambda\right). \tag{4.75}
\]

For a semi-infinite cable (x ≥ 0), the steady-state solution of the homogeneous cable equation (with A = 0) is simply V(x) = α e^{−x/λ} + β e^{x/λ}, and for a bounded solution, the choice β = 0 is required. For an injected current I₀ at x = 0, this must be balanced by the longitudinal current given by (4.70), and hence we may determine the amplitude α as

\[
\alpha = \frac{4\lambda I_0 R}{\pi d^2}. \tag{4.76}
\]

The input resistance, R_in, is defined as the steady-state voltage response at x = 0 divided by the injected current, which gives

\[
R_{\mathrm{in}} = \frac{4\lambda R}{\pi d^2} = \frac{2}{\pi}\sqrt{\frac{R}{g_L}}\; d^{-3/2}, \tag{4.77}
\]

and note the −3/2 power law dependence on the diameter d.
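The point-source solution (4.75) can be verified with a crude finite-difference solve of (4.73) on a truncated domain. In the Python sketch below, the grid spacing, domain size, and tolerance are illustrative choices; the tridiagonal system is solved with the standard Thomas algorithm.

```python
import math

# Solve (1 - lam^2 d^2/dx^2) V = tau * A0 * delta(x) on [-X, X], with V(+-X) = 0
lam, tau, A0 = 1.0, 1.0, 1.0
X, n = 12.0, 2400                      # domain half-width and number of grid intervals
dx = 2 * X / n
rhs = [0.0] * (n + 1)
rhs[n // 2] = tau * A0 / dx            # discrete delta function at x = 0

# Tridiagonal stencil: (1 + 2 lam^2/dx^2) V_i - (lam^2/dx^2)(V_{i-1} + V_{i+1}) = rhs_i
a = -lam * lam / dx ** 2               # off-diagonal entry
b = 1.0 + 2.0 * lam * lam / dx ** 2    # diagonal entry

# Thomas algorithm on the n-1 interior points (Dirichlet V = 0 at the ends)
m = n - 1
cp, dp = [0.0] * m, [0.0] * m
cp[0], dp[0] = a / b, rhs[1] / b
for i in range(1, m):
    denom = b - a * cp[i - 1]
    cp[i] = a / denom
    dp[i] = (rhs[i + 1] - a * dp[i - 1]) / denom
V = [0.0] * m
V[-1] = dp[-1]
for i in range(m - 2, -1, -1):
    V[i] = dp[i] - cp[i] * V[i + 1]

V0 = V[n // 2 - 1]                     # numerical value at x = 0
exact = A0 * tau / (2 * lam)           # peak value from (4.75)
print(V0, exact)
```

The numerical peak matches A₀τ/(2λ), and the profile decays as e^{−|x|/λ} away from the source.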

4.3.2 Sum-over-trips

To model more realistic dendritic geometries than that considered in Sec. 4.3.1, one must pose the cable equation on a graph with appropriate boundary conditions. The Green’s function must now be constructed by solving a PDE with these boundary conditions. This is typically achieved using classical techniques from applied mathematics, such as separation of variables, integral transforms, and series solutions, which are nicely described in the books by Tuckwell [884] and Johnston and Wu [478]. However, an alternative approach exists that has been developed by Abbott and collaborators [6, 7, 148], borrowing heavily from the path-integral formalism for describing Brownian motion. This allows for an elegant algorithmic approach to determining the Green’s function for an arbitrary branched dendritic geometry, based solely on the Green’s function for an infinite cable and a set of so-called trip coefficients. To describe this approach, we first define a node as a point where branch segments touch (i.e., the vertices of the graph describing the tree). Nodes that are connected to only one other node are called terminal nodes. Assuming, for simplicity, a dendritic tree with homogeneous electrical properties, differing only in the diameters and lengths of its dendrites, a finite segment labelled by i has a voltage V_i = V_i(x,t) satisfying

\[
\frac{\partial V_i}{\partial t} = -V_i + \frac{\partial^2 V_i}{\partial x^2} + I_i, \quad 0 \le x \le \mathcal{L}_i. \tag{4.78}
\]

Here, L_i is the physical length of the ith segment, and ℒ_i = L_i/λ_i is recognised as the electrotonic length, typically in the range 0.1–1. Note that time has been rescaled according to t → t/τ, space (on each segment) according to x → x/λ_i, where λ_i is the space constant of each segment, and a factor of g_L has been absorbed within the injected current I_i. The dynamics of a tree may then be specified by ensuring the appropriate boundary conditions at all nodes and terminals. These are i) continuity of potential, and ii) conservation of current.
It is convenient to choose the coordinates on all of the radiating branches so that the node is at the point x = 0. With this choice, continuity of potential requires that

\[
V_i(0,t) = V_j(0,t), \tag{4.79}
\]

for all values of i and j corresponding to segments radiating from the node. Conservation of longitudinal current gives

\[
\sum_j \frac{1}{\lambda_j}\frac{\pi d_j^2}{4R}\left.\frac{\partial V_j}{\partial x}\right|_{x=0} = 0. \tag{4.80}
\]

Here, the sum is over all j values corresponding to segments radiating from the node in question, and the factor of λ_j comes from the spatial rescaling. After observing that λ_i ∝ √d_i, this latter boundary condition takes the form

\[
\sum_j d_j^{3/2}\left.\frac{\partial V_j}{\partial x}\right|_{x=0} = 0. \tag{4.81}
\]

At an open terminal, the condition V_i(ℒ_i, t) = 0 is imposed, and at a closed end ∂V_i(x,t)/∂x|_{x=ℒ_i} = 0. The potential at any point on any segment of a general tree is given by

\[
V_i(x,t) = \sum_j \left[\int_0^t \mathrm{d}s \int_0^{\mathcal{L}_j} \mathrm{d}y\, G_{ij}(x,y,t-s)\, I_j(y,s) + \int_0^{\mathcal{L}_j} \mathrm{d}y\, G_{ij}(x,y,t)\, V_j(y,0)\right], \tag{4.82}
\]

where the sum on j is over all segments of the tree. A closed form expression for the Green’s function G_{ij}(x,y,t) of the whole tree can be constructed in terms of the known response of the infinite cable as

\[
G_{ij}(x,y,t) = \sum_{\text{trips}} A_{\text{trip}}\, G_\infty(\mathcal{L}_{\text{trip}}, t), \quad
G_\infty(x,t) = \frac{1}{\sqrt{4\pi t}}\,\mathrm{e}^{-t}\,\mathrm{e}^{-x^2/(4t)}\,\Theta(t). \tag{4.83}
\]

Here, ℒ_trip = ℒ_trip(i, j, x, y) is the length of a path along the tree that starts at point x on branch i and ends at the point y on branch j. The allowed trips are constructed according to the following rules:

1. A trip may start out from x by travelling in either direction along segment i, but it may subsequently change direction only at a node or a terminal. A trip may pass through the points x on segment i and y on segment j but must begin at x on segment i and end at y on segment j.
2. When a trip arrives at a node, it may pass through the node to any other segment radiating from the node, or it may reflect from the node back along the same segment on which it entered.

3. When it reaches any terminal node, a trip always reflects back, reversing its direction.

Every trip generates a term in (4.83), with ℒ_trip given by summing the lengths of all the steps taken along the course of the trip. For example, the four primary trips on a simple dendritic tree consisting of two segments have lengths ℒ_i − x + y, ℒ_i + x + y, ℒ_i − x + 2ℒ_j − y, and ℒ_i + x + 2ℒ_j − y, respectively, as depicted in Fig. 4.8. Note that all longer trips, even in a larger branched network, would consist only of constant additions to these four basic lengths. Although the sum in (4.83) has an infinite number of terms, it is naturally truncated due to the decay properties of G_∞(x,t) to include only those terms whose trip lengths are shorter than K√t for some constant K. The boundary conditions can be satisfied by choosing the trip coefficients A_trip according to the following rules:

1. From any starting point, A_trip = 1.
2. For every node at which the trip passes from an initial segment k to a different segment m (m ≠ k), A_trip is multiplied by a factor 2p_m.
3. For every node at which the trip enters along segment k and then reflects off the node back along segment k, A_trip is multiplied by a factor 2p_k − 1.
4. For every closed (open) terminal node, A_trip is multiplied by a factor +1 (−1).

Here, the p_k are given in terms of the segment diameters as

\[
p_k = \frac{d_k^{3/2}}{\sum_m d_m^{3/2}}, \tag{4.84}
\]

Fig. 4.8 The four basic trips and their indicated lengths between a point x on segment i and point y on segment j.

where the sum is over all segments connected to the node of interest. For example, an application of the sum-over-trips rules to a semi-infinite cable (on the positive real axis x ≥ 0) with either an open or closed terminal at the origin (x = 0) yields

\[
G^{\pm}(x,y,t) = G_\infty(x - y, t) \pm G_\infty(x + y, t), \tag{4.85}
\]

where the minus sign is for an open end and the plus sign is for a closed end. For a finite cable with x ∈ [−L, L] and closed-end (no flux) boundary conditions, it is found that

\[
G(x,y,t) = \sum_{n=-\infty}^{\infty} \left[G_\infty(x - y - 2nL,\, t) + G_\infty(x + y - 2nL,\, t)\right]. \tag{4.86}
\]

Note from equation (4.84) that if p_k = 1/2, then the trip multiplier at a node is either zero or one, and the rules reduce to those describing a segment without branching. In this case, equation (4.84) can be rewritten as

\[
d_k^{3/2} = \sum_{m \ne k} d_m^{3/2}. \tag{4.87}
\]

This recovers Rall’s famous ‘3/2’ power law, which shows the conditions under which a branching dendrite can be collapsed to an equivalent cylinder [736]. The sum-over-trips approach can also be extended to tackle electrical heterogeneity across the tree, coupling to a linear soma model, and the treatment of quasi-active membrane [205].


4.3.3 Compartmental modelling

Compartmental modelling represents a finite-difference approximation of the linear cable equation in which the dendritic system is divided into sufficiently small regions such that spatial variations of the electrical properties within a region are negligible. The PDEs of cable theory then simplify to a system of first-order ODEs. In practice, a combination of matrix algebra and numerical methods is used to solve for realistic neuronal geometries [793]. In the compartmental modelling approach, an unbranched cylindrical region of a passive dendrite is represented as a linked chain of equivalent circuits, as shown in Fig. 4.9. Each compartment consists of a membrane leakage resistor R_α in parallel with a capacitor C_α, with the ground representing the extracellular medium (assumed to be isopotential). The electrical potential V_α(t) across the membrane is measured with respect to some resting potential. The compartment is joined to its immediate neighbours in the chain by the junctional resistors R_{α,α−1} and R_{α,α+1}. All parameters are equivalent to those of the cable equation, but restricted to individual compartments. The parameters C_α, R_α, and R_{α,β} can be related to the underlying membrane properties of the dendritic cylinder as follows. Suppose that the cylinder has uniform diameter d, and denote the length of the αth compartment by l_α. Then

\[
C_\alpha = c_\alpha l_\alpha \pi d, \quad
R_\alpha = \frac{1}{g_\alpha l_\alpha \pi d}, \quad
R_{\alpha,\beta} = \frac{2 r_\alpha l_\alpha + 2 r_\beta l_\beta}{\pi d^2}, \tag{4.88}
\]

where g_α and c_α are, respectively, the membrane conductance and capacitance per unit area, and r_α is the longitudinal resistivity. An application of Kirchhoff’s law to a compartment shows that the total current through the membrane is equal to the difference between the longitudinal currents entering and leaving that compartment. Thus (cf. equation (4.1)),

\[
C_\alpha \frac{\mathrm{d}V_\alpha}{\mathrm{d}t} = -\frac{V_\alpha}{R_\alpha} + \sum_{\beta;\alpha} \frac{V_\beta - V_\alpha}{R_{\alpha,\beta}} + I_\alpha(t), \quad t > 0, \tag{4.89}
\]

where I_α(t) represents the net external input current into the compartment, and β;α indicates that the sum over β is restricted to the immediate neighbours of α. Dividing through by C_α (and absorbing this factor within I_α(t)), equation (4.89) may be written as a linear matrix equation:

\[
\frac{\mathrm{d}V}{\mathrm{d}t} = QV + I(t), \quad
[Q]_{\alpha\beta} = -\frac{\delta_{\alpha,\beta}}{\tau_\alpha} + \sum_{\beta';\alpha} \frac{\delta_{\beta',\beta}}{\tau_{\alpha,\beta'}}, \tag{4.90}
\]

where δ is the Kronecker delta, and the membrane time constant τ_α and junctional time constant τ_{α,β} satisfy

\[
\frac{1}{\tau_\alpha} = \frac{1}{C_\alpha}\left[\sum_{\beta;\alpha} \frac{1}{R_{\alpha,\beta}} + \frac{1}{R_\alpha}\right], \quad
\frac{1}{\tau_{\alpha,\beta}} = \frac{1}{C_\alpha R_{\alpha,\beta}}. \tag{4.91}
\]
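For a uniform chain, (4.89) implies a simple global check: summing over α, the junctional terms cancel pairwise, so Σ_α V_α decays at the pure leak rate 1/(RC). The Python sketch below (hypothetical parameter values, our own) builds Q as in (4.90)–(4.91), integrates dV/dt = QV with a classic fourth-order Runge–Kutta step, and verifies this invariant.

```python
import math

N = 21                     # number of compartments (illustrative)
R, C, Rj = 1.0, 1.0, 0.5   # leak resistance, capacitance, junctional resistance

def build_Q():
    """Assemble Q for a uniform open-ended chain, cf. (4.90)-(4.91)."""
    Q = [[0.0] * N for _ in range(N)]
    for a in range(N):
        nbrs = [b for b in (a - 1, a + 1) if 0 <= b < N]
        Q[a][a] = -(1.0 / (R * C) + len(nbrs) / (Rj * C))   # -1/tau_alpha
        for b in nbrs:
            Q[a][b] = 1.0 / (Rj * C)                         # 1/tau_{alpha,beta}
    return Q

def deriv(Q, V):
    return [sum(Q[a][b] * V[b] for b in range(N)) for a in range(N)]

Q = build_Q()
V = [0.0] * N
V[N // 2] = 1.0            # impulse delivered to the central compartment
dt, T = 0.01, 2.0
for _ in range(int(T / dt)):                    # RK4 step for dV/dt = QV
    k1 = deriv(Q, V)
    k2 = deriv(Q, [v + 0.5 * dt * k for v, k in zip(V, k1)])
    k3 = deriv(Q, [v + 0.5 * dt * k for v, k in zip(V, k2)])
    k4 = deriv(Q, [v + dt * k for v, k in zip(V, k3)])
    V = [v + dt * (a + 2 * b + 2 * c + d) / 6
         for v, a, b, c, d in zip(V, k1, k2, k3, k4)]

print(sum(V), math.exp(-T / (R * C)))  # total potential decays at the leak rate
```

The cancellation follows because every column of Q sums to −1/(RC) for this uniform chain.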

Fig. 4.9 Equivalent circuit for a compartmental model of a chain of successive cylindrical segments of passive dendritic membrane.

Equation (4.90) may be solved using matrix exponentials and variation of parameters as

\[
V_\alpha(t) = \sum_\beta \int_0^t \mathrm{d}s\, G_{\alpha\beta}(t-s)\, I_\beta(s) + \sum_\beta G_{\alpha\beta}(t)\, V_\beta(0), \quad t > 0, \tag{4.92}
\]

with

\[
G_{\alpha\beta}(t) = \left[\mathrm{e}^{Qt}\right]_{\alpha\beta}. \tag{4.93}
\]

The response function, or matrix Green’s function, G_{αβ}(t) determines the membrane potential of compartment α at time t in response to a unit impulse stimulation of compartment β at time zero. The matrix Q has real, negative, non-degenerate eigenvalues, reflecting the fact that the dendritic system is described in terms of a passive ‘RC’ circuit, recognised as a dissipative system. From (4.93), it is useful to note that the response function satisfies

\[
\frac{\mathrm{d}G_{\alpha\beta}}{\mathrm{d}t} = \sum_\gamma Q_{\alpha\gamma}\, G_{\gamma\beta}, \quad G_{\alpha\beta}(0) = \delta_{\alpha,\beta}. \tag{4.94}
\]

An infinite uniform chain of linked compartments is obtained for the choice R_α = R, C_α = C for all α, and R_{α,β} = R_{β,α} = R′ for all α = β ± 1. In this case, introducing τ = RC and γ = R′C,

\[
Q_{\alpha\beta} = -\frac{\delta_{\alpha,\beta}}{\widehat{\tau}} + \frac{K_{\alpha\beta}}{\gamma}, \quad
\frac{1}{\widehat{\tau}} = \frac{1}{\tau} + \frac{2}{\gamma}. \tag{4.95}
\]

The matrix K generates paths along the tree and in this example is given by

\[
K_{\alpha\beta} = \delta_{\alpha-1,\beta} + \delta_{\alpha+1,\beta}. \tag{4.96}
\]

The form (4.95) of the matrix Q carries over to dendritic trees of arbitrary topology, provided that each branch of the tree is uniform and certain conditions are imposed on the membrane properties of compartments at the branching nodes and terminals of the tree [115]. In particular, modulo additional constant factors arising from the boundary conditions at terminals and branching nodes, [K^m]_{αβ} is equal to the number of possible paths consisting of m steps between compartments α and β (with possible reversals of direction) on the tree, where a step is a single jump between neighbouring compartments. Thus, calculation of G_{αβ}(t) for an arbitrary branching geometry reduces to (i) determining the sum over paths [K^m]_{αβ}, and then (ii) evaluating the series expansion of G(t) = e^{Qt} = e^{−t/τ̂} e^{Kt/γ} to give

\[
G_{\alpha\beta}(t) = \mathrm{e}^{-t/\widehat{\tau}} \sum_{m \ge 0} \frac{1}{m!}\left(\frac{t}{\gamma}\right)^m \left[K^m\right]_{\alpha\beta}, \tag{4.97}
\]

which may be evaluated in terms of modified Bessel functions using (3.22). Alternatively, G(t) may be found by solving (4.94). Since, for the infinite chain, G_{αβ}(t) depends only upon |α − β| (translation invariance), Fourier transforms may be used to find

\[
G_{\alpha\beta}(t) = \int_{-\pi}^{\pi} \frac{\mathrm{d}k}{2\pi}\, \mathrm{e}^{ik|\alpha-\beta|}\, \mathrm{e}^{-\varepsilon(k)t}, \tag{4.98}
\]

where ε(k) = τ̂^{−1} − 2γ^{−1} cos k. Using the integral representation for the modified Bessel function of integer order n (namely, I_n(z) = π^{−1}∫_0^π e^{z cos θ} cos(nθ) dθ), equation (4.98) can be evaluated as

\[
G_{\alpha\beta}(t) = \mathrm{e}^{-t/\widehat{\tau}}\, I_{|\beta-\alpha|}(2t/\gamma). \tag{4.99}
\]
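The equivalence of the path expansion (4.97) and the Bessel form (4.99) can be confirmed numerically for the infinite chain, where the path count [K^m]_{αβ} is a binomial coefficient. The Python sketch below (our own truncation levels; τ̂ written as in the reconstructed (4.95), with τ = 10γ as in Fig. 4.10) compares a truncated series against a power-series evaluation of I_n.

```python
import math

tau, gamma = 10.0, 1.0           # membrane and junctional time constants (as in Fig. 4.10)
tau_hat = 1.0 / (1.0 / tau + 2.0 / gamma)

def paths(m, n):
    """[K^m]_{alpha,beta} for the infinite chain: m-step walks on Z displacing by n."""
    if (m + n) % 2 or abs(n) > m:
        return 0
    return math.comb(m, (m + n) // 2)

def G_series(n, t, M=80):
    """Path expansion (4.97), truncated at M steps."""
    return math.exp(-t / tau_hat) * sum((t / gamma) ** m / math.factorial(m) * paths(m, n)
                                        for m in range(M + 1))

def bessel_I(n, z, K=60):
    """Modified Bessel function I_n(z) by its power series."""
    return sum((z / 2) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(K + 1))

t = 2.0
for n in (0, 1, 3, 6):
    # Closed form (4.99): G_n(t) = exp(-t/tau_hat) I_n(2t/gamma)
    print(n, G_series(n, t), math.exp(-t / tau_hat) * bessel_I(n, 2 * t / gamma))
```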

The response function (4.99) is plotted in Fig. 4.10 for a range of separations |α − β|. The sharp rise to a large peak, followed by a rapid early decay in the case of small separations, is also seen in simulations of more detailed model neurons, highlighting the usefulness of the simple analytical expression (4.99). Re-labelling each compartment by its position along the dendrite as x = lα, x′ = lβ, for α, β = 0, ±1, ±2, …, with l the length of an individual compartment, facilitates taking the continuum limit of the above model. Making a change of variable k → k/l on the right-hand side of (4.98) and taking the continuum limit l → 0 gives

\[
G(x - x', t) = \mathrm{e}^{-t/\tau} \lim_{l \to 0} \int_{-\pi/l}^{\pi/l} \frac{\mathrm{d}k}{2\pi}\, \mathrm{e}^{ik|x-x'|}\, \mathrm{e}^{-k^2 l^2 t/\gamma + \ldots}, \tag{4.100}
\]

where terms smaller than l 2 have been disregarded. This equation reproduces the fundamental result (4.63) for the standard cable equation upon taking D = liml→0 l 2 /γ , as described in Box 4.4 on page 143. Although analytical progress is possible for a passive dendritic compartmental model, the compartmental approach is typically augmented to include complex membrane properties associated with nonlinear ionic currents, and subsequently used as


Fig. 4.10 Response function of an infinite chain as a function of t (in units of γ ) with τ = 10γ for various values of the separation distance |α − β |.

a framework for numerical simulation. In particular, the widely used software package NEURON [152] allows user-defined biophysical properties of the cell membrane (e.g., ion channels, pumps) and cytoplasm (e.g., buffers and second messengers) to be described in terms of differential equations, kinetic schemes, and sets of simultaneous equations. This approach is a key part of the Human Brain Project [30] for simulating a human brain. Indeed, part of this project is to combine models of neural activity with biophysical properties of brain tissue to generate physiologically realistic signals, such as the local field potential (LFP). This signal can be recorded using an extracellular microelectrode placed sufficiently far from any one neuron and then retaining only the low-frequency voltage component (less than ∼ 300 Hz). The LFP is a population measure reflecting how dendrites integrate synaptic inputs, and is created by the combination of a large number of spatially distributed transmembrane currents, with contributions depending on their magnitude, sign, and distance from the recording site. In cortex, the LFP is mainly thought to arise from correlated synaptic input to populations of geometrically aligned pyramidal cells (with their main dendritic shafts arranged in parallel). From a modelling perspective, the LFP can be constructed from cable or compartmental models in a two-step process. First, the transmembrane currents are computed, and then the extracellular potential is derived as a weighted sum over contributions from the transmembrane currents using volume-conductor theory [430]. If the extracellular medium is assumed to be homogeneous, purely resistive, and infinite in extent with conductivity σ , then the quasi-steady approximation to Maxwell’s equation gives the potential from a point current source with magnitude I as

Φ(ζ) = (1/4πσ) I/ζ,   (4.101)

where ζ is the distance from the point source. When several point sources are present, the linearity of Maxwell's equations means that the extracellular potential can be obtained by simple summation. For example, for a simple infinite cable model the transmembrane current I_mem (including both the leak and capacitive currents) at a point z on the cable is equal to (1/r) ∂²V/∂z², where r is the (constant) resistance per unit length for currents flowing along the dendrite (and r = 4R/(πd²), where d is the diameter of the dendrite and R its axial resistivity). In this case, for a dendrite aligned along the z-axis,

Φ(z, ρ) = (1/4πσ) ∫_{−∞}^{∞} I_mem(z′) dz′ / √((z − z′)² + ρ²),   (4.102)

where ρ is the radial distance from the recording point to the cable [710]. The practical numerical simulation of LFP signals from reconstructed neurons has recently been implemented in the software package LFPy 2.0 [388].
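The two-step recipe above — compute the transmembrane currents, then sum their weighted contributions as in (4.101) and (4.102) — can be sketched numerically. In the sketch below the current distribution, conductivity, and geometry are illustrative assumptions, not values taken from the text:

```python
# Sketch of volume-conductor LFP estimation, (4.101)-(4.102): the potential at
# a recording point is the sum of point-source contributions I_n/(4*pi*sigma*zeta_n).
# The cable is discretised along the z-axis; all values are illustrative.
import numpy as np

sigma = 0.3                              # extracellular conductivity (S/m)
z = np.linspace(-1e-3, 1e-3, 201)        # compartment positions (m)

# A crude current dipole: a sink at one end and a source at the other, so the
# total membrane current sums to zero (charge conservation)
I = np.zeros_like(z)
I[20], I[-21] = -1e-9, 1e-9              # amperes

def lfp(z_rec, rho):
    """Potential (V) at height z_rec and radial distance rho from the cable."""
    zeta = np.sqrt((z_rec - z) ** 2 + rho ** 2)
    return np.sum(I / zeta) / (4 * np.pi * sigma)

# The potential falls off with radial distance from the cable
near = lfp(0.8e-3, 0.1e-3)
far = lfp(0.8e-3, 1e-3)
assert abs(far) < abs(near)
```

A realistic calculation would obtain I_mem from a cable or compartmental simulation rather than prescribing it by hand; this is essentially what LFPy automates.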

4.4 Synapses

Neurons have only one axon that leaves the cell body (soma), though this may branch either in the form of collaterals along the axonal length or at a terminal arborisation known as the telodendrion (distal dendrite). These terminals are usually the site of pre-synaptic boutons that establish synaptic contacts with other neurons, muscles, or glands. A single axon, with all its branches taken together, can innervate multiple parts of the brain and generate thousands of synaptic terminals. The contact point between an axon and the membrane of another neuron occurs at a junction called a synapse. Here, special molecular structures serve to transmit electrical or electrochemical signals across the gap. Some synaptic junctions appear partway along an axon as it extends, and are referred to as en passant (‘in passing’) synapses. Other synapses appear as terminals at the ends of axonal branches.

4.4.1 Chemical synapses

A sketch of the structure of a typical chemical synapse is shown in Fig. 4.11. Neurotransmitters, enclosed in small membrane-bound sacs called synaptic vesicles, are contained within the synaptic bouton (pre-synaptic axon terminal). Receptors can be found on the post-synaptic cell, and behind the post-synaptic membrane is an elaborate complex of interlinked proteins called the post-synaptic density (PSD). The synaptic cleft is a small gap between the pre- and post-synaptic cells (about


Fig. 4.11 Structure of a typical chemical synapse.

20 nm wide). Proteins in the PSD are involved in anchoring and trafficking neurotransmitter receptors and in modulating the activity of these receptors. At a synapse, pre-synaptic firing results in the release of neurotransmitter that causes a change in the membrane conductance of the post-synaptic neuron (after binding to receptors). This post-synaptic current may be written

I_s = g(V_s − V),   (4.103)

where V is the voltage of the post-synaptic neuron, V_s is the synaptic reversal potential, and g is a conductance, proportional to the probability that a synaptic receptor channel is in an open-conducting state. This probability depends on the presence and concentration of neurotransmitter released by the pre-synaptic neuron. The sign of V_s relative to the resting potential (assumed to be zero) determines whether the synapse is excitatory (V_s > 0) or inhibitory (V_s < 0). One of the most common neurotransmitters underlying an excitatory response is glutamate. This can activate fast AMPA and kainate receptors as well as NMDA receptors (both with reversal potentials around 0 mV). GABA is the main inhibitory neurotransmitter, for which there are two main receptors. GABA_A is responsible for fast inhibition (with a reversal potential around −75 mV), whilst GABA_B is responsible for slow inhibition (with a


reversal potential around −100 mV). Here, fast implies a 1–10 ms timescale, whilst slow is considered to be on the timescale of 100 ms. For the AMPA, kainate, NMDA, and GABA_A synapses, the ion channel and the receptor are the same protein, so that the effect of the neurotransmitter is direct. An indirect synapse, such as GABA_B, is one for which a cascade of intracellular processes ultimately leads to a post-synaptic response. In the neuroscience literature, direct synapses are often referred to as being ionotropic and indirect synapses as metabotropic, reflective of their method of action.

Box 4.5: Synaptic filters

Common choices for η(t) include the exponential function:

η(t) = α e^{−αt} Θ(t),   (4.104)

the difference of exponentials:

η(t) = (1/α − 1/β)^{−1} [e^{−αt} − e^{−βt}] Θ(t),   (4.105)

and the α-function:

η(t) = α² t e^{−αt} Θ(t),   (4.106)

where Θ is the Heaviside step function. The inclusion of this step function ensures that synapses act causally, in that activity at the synapse can only affect dynamics after the synapse is triggered. Note that the synaptic filters are normalised so that

∫_0^∞ η(t) dt = 1.   (4.107)

This means that η describes only the time-dependent shape of the post-synaptic response. Its amplitude can then be captured by a global scaling parameter. In general, η(t) can be written as the Green's function of a linear differential operator, so that

Q η(t) = δ(t).   (4.108)

To construct Q for a given η, introduce the Laplace transform

η̂(λ) = ∫_0^∞ e^{−λt} η(t) dt,   (4.109)

and write Q in the general form

Q = ∑_{n=0}^∞ a_n dⁿ/dtⁿ.   (4.110)

Taking the Laplace transform of (4.108) gives the algebraic equation ∑_n a_n λⁿ η̂(λ) = 1, or equivalently,

∑_n a_n λⁿ = 1/η̂(λ).   (4.111)

Powers of λ can be balanced on either side of the above equation to determine the coefficients a_n for Q. For the simple exponential (4.104), the Laplace transform is

η̂(λ) = ∫_0^∞ α e^{−αt} e^{−λt} dt = 1/(1 + λ/α).   (4.112)

Balancing terms gives a_0 = 1, a_1 = 1/α, a_{n>1} = 0, and hence

Q = 1 + (1/α) d/dt.   (4.113)

Similarly, for (4.105), we obtain

η̂(λ) = αβ / ((α + λ)(β + λ)).   (4.114)

Equating powers of λ in this case yields a_0 = 1, a_1 = (α + β)/(αβ), a_2 = 1/(αβ), and a_{n>2} = 0, meaning that Q may be written as

Q = (1 + (1/α) d/dt)(1 + (1/β) d/dt).   (4.115)

Taking the limit β → α gives the response for the α-function defined by (4.106).

The effect of some direct synapses can be described with a function that fits the shape of the post-synaptic response due to the arrival of an action potential at the pre-synaptic release site. A post-synaptic conductance change g(t) would then be given by

g(t) = ḡ η(t − T),   t ≥ T,   (4.116)

where T is the arrival time of a pre-synaptic action potential, η(t) fits the shape of a realistic post-synaptic conductance, and ḡ is the maximal conductance. Common choices for η are presented in Box 4.5 on page 158. Similarly, the conductance change arising from a train of action potentials, with firing times T_m, is given by

g(t) = ḡ ∑_{m∈Z} η(t − T_m).   (4.117)
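The filters of Box 4.5 and the conductance train (4.117) are straightforward to evaluate numerically. The sketch below (with illustrative rate constants, not fitted values) also checks the normalisation (4.107):

```python
# Sketch of the synaptic filters (4.104)-(4.106) and the conductance train
# (4.117), checking the normalisation (4.107) by simple quadrature.
# Rate constants alpha, beta and the spike times are illustrative.
import numpy as np

alpha, beta = 1.0, 0.5                  # inverse time constants
t = np.linspace(0.0, 60.0, 60001)       # long enough for the tails to vanish
dt = t[1] - t[0]

eta_exp = alpha * np.exp(-alpha * t)                                    # (4.104)
eta_diff = (1/alpha - 1/beta) ** -1 * (np.exp(-alpha*t) - np.exp(-beta*t))  # (4.105)
eta_alpha = alpha ** 2 * t * np.exp(-alpha * t)                         # (4.106)

for eta in (eta_exp, eta_diff, eta_alpha):
    assert abs(eta.sum() * dt - 1.0) < 2e-3   # normalisation (4.107)

# Conductance change from a train of pre-synaptic spikes with alpha-function
# shape, as in (4.117); np.maximum enforces the Heaviside step (causality)
g_bar, spikes = 0.5, [5.0, 10.0, 12.0]
g = g_bar * sum(alpha**2 * np.maximum(t - T, 0) * np.exp(-alpha * np.maximum(t - T, 0))
                for T in spikes)
assert g[0] == 0.0 and g.max() > 0.0
```

Since each filter integrates to one, the peak conductance is controlled entirely by the scale factor ḡ, as stated in the box.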

For post-synaptic currents on a dendritic cable with voltage V(x, t), one would write I_s = I_s(x, t) = g_s(x, t)(V_s(x, t) − V(x, t)). In this case, the form of equation (4.72) may be expanded as a Neumann series (by repeated substitution of (4.72) into


itself), and if g_s is small, then the resulting series may be naturally truncated. For example, dropping dependence upon initial data and writing (4.72) in the symbolic form V = G ⊗ I_s, then V ≈ G ⊗ g_s[V_s − G ⊗ g_s V_s]. The first term in this expression is a simple convolution of the synaptic conductance with the Green's function of the dendrite. However, the second term highlights the fact that dendrites can mix inputs in interesting ways, since the full description of this term is

G ⊗ g_s[G ⊗ g_s V_s] = ∫_0^t ds ∫_0^s ds′ ∫_{−∞}^∞ dy ∫_{−∞}^∞ dy′ G(x − y, t − s) G(y − y′, s − s′) J(y, s) J(y′, s′),   (4.118)

where J(x, t) = g_s(x, t)V_s(x, t). Thus, the dendrite can effectively mix inputs impinging on the tree at different positions (y and y′) and different times (s and s′). For a further discussion of the ability of dendrites to generate multiplicative inputs see [724], and for an application to auditory coincidence detection by bipolar dendrites (each receiving input from only one ear) see [191]. The idealisation of synapses on dendrites above assumes that the synapse makes contact with the dendritic shaft. However, it is well known that around 80% of all excitatory synapses are made onto so-called spines. These are small mushroom-like appendages with a bulbous head (with a surface area of order 1 μm²) and a tenuous stem (of length around 1 μm), which may be found in their hundreds of thousands on the dendritic tree of a single cortical pyramidal cell. It is widely believed [491] that the molecular control of synaptic strength is likely to encompass: i) the genesis, motility, enlargement, and elimination of small spines (for establishing memory), ii) the stability of large spines (memory maintenance), iii) the expression of AMPA receptors (determining the amplitude of the synaptic response), and iv) the expression of NMDA receptors (controlling synaptic re-organisation via Ca²⁺ currents). Given their role in learning and memory [406, 408, 958], logical computations [803], and pattern matching tasks [622, 623], they have been described, rather appropriately, as forming the ‘backbone of dendritic computation’ [707]. For a detailed model of a Hebbian synapse, which facilitates the famous Hebbian learning rules (and see Sec. 4.5.2), with receptors located on a dendritic spine, see [959], and for an electro-diffusion model of a spine, to cope with the breakdown of cable modelling for such small structures, see [544]. It is well to note that some receptors are also sensitive to voltage.
For example, the NMDA receptor is partially blocked by magnesium ions under normal physiological operating conditions. However, this block can be removed by depolarisation (or by bathing in a low-concentration magnesium solution), giving rise to a longer lasting response. In this case, a simple extension of (4.103) is to multiply the whole term by a sigmoidal function of voltage given by B(v) = [1 + exp(−(v − h)/k)]^{−1}, where k ≈ 16 and h = k ln([Mg²⁺]/3.57). Although the simplicity of using fixed temporal shapes for the post-synaptic conductance change, such as an α-function, is appealing from a mathematical perspective, it does not always capture well the underlying biophysics of voltage- and ligand-gated channels. A better model is to assume a first-order kinetic scheme, in which a closed


receptor, in the presence of a sufficiently high concentration of neurotransmitter [ρ], transitions to the open receptor state. In this case, consider the Markov scheme [244]:

C ⇌ O, with forward rate r₁(V, [ρ]) and backward rate r₂(V),   (4.119)

where C and O represent the closed and open states of the channel, and r₁(V, [ρ]) and r₂(V) are the associated rate constants, noting that the transition from the open to the closed state is independent of the neurotransmitter concentration. A conductance change g(t) is then modelled by g(t) = ḡ s(t), where s is the fraction of receptors in the open state O and ḡ is the conductance of a single open channel. In many cases, indirect synaptic channels are found to have time-dependent properties that are more accurately modelled with a second-order kinetic scheme. In fact, the presence of one or more receptor sites on a channel allows the possibility of transitions to desensitised states. Such states are equivalent to the inactivated states of voltage-dependent channels. The addition of such a desensitised state to the first-order process generates a second-order scheme:

ds/dt = r₁(V, [ρ])(1 − s − z) − [r₂(V) + r₃(V)] s + r₄(V) z,
dz/dt = r₆(V, [ρ])(1 − s − z) − [r₄(V) + r₅(V)] z + r₃(V) s,   (4.120)

where z is the fraction of channels in the desensitised state. All neurotransmitter-dependent rate constants are assumed to have the variables-separable form r_i(V, [ρ]) = r_i(V)[ρ]. It is common for detailed Markov models of voltage-gated channels to assume that the voltage dependence of all rates takes a simple exponential form. However, it has been shown that the number of states needed by a model to more accurately reproduce the behaviour of a channel may be reduced by adopting sigmoidal functions for the voltage-dependent transition rates (see [244] for a discussion):

r_i(V) = a_i / (1 + exp[−(V − c_i)/b_i]).   (4.121)

The a_i set the maximum transition rate, b_i the steepness of the voltage dependence, and c_i the voltage at which the half-maximal rate is reached. Furthermore, the concentration of neurotransmitter can also often be successfully approximated by a sigmoidal function of the pre-synaptic potential V_pre:

[ρ](V_pre) = ρ_max / (1 + exp[−(V_pre − V_Δ)/Δ]).   (4.122)

Here, ρ_max is the maximal concentration of transmitter in the synaptic cleft, V_pre is the pre-synaptic voltage, Δ gives the steepness, and V_Δ sets the value at which the function is half-activated. It is common to take Δ = 5 mV and V_Δ = 2 mV. One of the main advantages of using an expression such as (4.122) is that it provides a smooth transformation between pre-synaptic voltage and transmitter concentration.
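A minimal numerical sketch of the first-order scheme (4.119), with the transmitter concentration given by the sigmoid (4.122) and driven by a square pre-synaptic voltage pulse, is given below. The rate constants are illustrative placeholders, not fitted values:

```python
# Sketch of the first-order kinetic synapse (4.119): a closed receptor opens at
# rate r1(V,[rho]) = r1_bar*[rho](V_pre) and closes at rate r2, with [rho] the
# sigmoid (4.122).  All parameter values are illustrative.
import numpy as np

rho_max, V_Delta, Delta = 1.0, 2.0, 5.0   # mM, mV, mV (common choices in the text)
r1_bar, r2 = 2.0, 0.5                     # /(mM ms) and /ms, illustrative rates

def rho(V_pre):
    """Transmitter concentration as a sigmoid of the pre-synaptic voltage."""
    return rho_max / (1.0 + np.exp(-(V_pre - V_Delta) / Delta))

dt, T = 0.01, 40.0                        # ms
ts = np.arange(0.0, T, dt)
V_pre = np.where((ts > 5.0) & (ts < 7.0), 30.0, -65.0)   # 2 ms depolarisation

s = np.zeros_like(ts)                     # fraction of receptors in the open state
for n in range(len(ts) - 1):
    ds = r1_bar * rho(V_pre[n]) * (1.0 - s[n]) - r2 * s[n]
    s[n + 1] = s[n] + dt * ds             # forward Euler step

assert 0.0 <= s.max() <= 1.0              # s remains a well-defined fraction
assert s.max() > 0.5 and s[-1] < 0.1      # opens during the pulse, then decays
```

The conductance change would then be g(t) = ḡ s(t); adding a desensitised state as in (4.120) requires only a second state variable z integrated in the same loop.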


Finally, it is also important to note that the release of neurotransmitter is both quantised and probabilistic. This is often modelled with a binomial model of size n, where n is the maximum possible number of vesicles released. For the peripheral nervous system n ≈ 10³–10⁴, whilst n = 1–10 for the central nervous system. For a further detailed discussion of the statistics of neurotransmitter release, see [478].

4.4.2 Electrical synapses

An electrical synapse is a mechanical and electrically conductive link between two adjacent nerve cells, formed at a fine gap between the pre- and post-synaptic membranes known as a gap junction, and permits a direct electrical connection between the two cells. Each gap junction contains numerous connexin hemichannels which cross the membranes of both cells. With a lumen diameter of about 1.2 to 2.0 nm, the pore of a gap junction channel is wide enough to allow ions, and even medium-sized molecules such as signalling molecules, to flow from one cell to the next, thereby connecting the cytoplasm of the two cells. A sketch of the structure of a gap junction is shown in Fig. 4.12. Their discovery was surprising; they were first demonstrated between escape-related giant nerve cells in crayfish in the late 1950s. They are now known to be abundant in the retina and cerebral cortex of vertebrates and have been directly demonstrated between inhibitory neurons in the neocortex [330] (particularly between fast-spiking cells and low-threshold spiking cells). In fact, it would appear that they are ubiquitous throughout the human brain, being found in the hippocampus [326], the

Fig. 4.12 Structure of a typical gap junction.


inferior olivary nucleus in the brain stem [826], the spinal cord [738], and the thalamus [441], and have been shown to form axo-axonic connections between excitatory cells in the hippocampus (on mossy fibres) [396]. Without the need for receptors to recognise chemical messengers, gap junctions are much faster than chemical synapses at relaying signals. The synaptic delay for a chemical synapse is typically in the range 1–100 ms, whilst the delay for an electrical synapse may be only about 0.2 ms. Relatively little is known about the functional aspects of gap junctions, but they are thought to be involved in the synchronisation of neurons [23, 68] and to contribute to both normal [435] and abnormal physiological brain rhythms, including epilepsy [901]. It is common to view the gap junction as nothing more than a channel that conducts current according to a simple Ohmic model. For two neurons with voltages v_i and v_j, the current flowing into cell i from cell j is given by

I_gap(v_i, v_j) = g_gap(v_j − v_i).   (4.123)

Here, ggap is the constant gap junction conductance. From a biological perspective, it is important to emphasise that gap junctions are dynamic and are in fact influenced by the voltage across the membrane, and as such can be described by Ohmic models with time and state dependent conductances, as in [51]. Moreover, the potentiation of gap junction coupling by cannabinoids has recently been reported [139], and as such a gap junction model should be sufficiently general as to allow coupling to neuromodulators. It has also been hypothesised that activity-dependent gap junction plasticity can act as a mechanism for regulating oscillations in the cortex [705]. The presence of gap junctional coupling in a neuronal network necessarily means that neurons directly ‘feel’ the shape of action potentials from other neurons to which they are connected. From a modelling point of view, one must, therefore, be careful to work with excitable membrane models that give rise to an accurate representation of the action potential shape at the soma. Moreover, because network dynamics is known to be tuned by the location of the gap junction on the dendritic tree [779, 880], it is important to also have a biologically realistic model of this.
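The Ohmic model (4.123) can be illustrated with two passive, leak-free cells: with a constant g_gap the voltage difference decays exponentially while the total charge is conserved. All values below are illustrative:

```python
# Sketch of Ohmic gap junction coupling (4.123): two cells with different
# initial voltages relax to a common potential (leak currents omitted for
# clarity).  Conductance, capacitance, and time step are illustrative.
import numpy as np

g_gap, C = 0.1, 1.0            # gap conductance and capacitance (arbitrary units)
dt, steps = 0.01, 6000
v = np.array([10.0, -10.0])    # initial voltages of cells 1 and 2

for _ in range(steps):
    I12 = g_gap * (v[1] - v[0])            # current into cell 1 from cell 2
    v = v + (dt / C) * np.array([I12, -I12])

assert abs(v[0] - v[1]) < 1e-3             # voltages equalise
assert abs(v[0] + v[1]) < 1e-9             # total charge is conserved
```

The difference v₁ − v₂ decays as e^{−2 g_gap t/C}, which is the simplest quantitative sense in which gap junctions promote synchrony.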

4.5 Plasticity

The astonishing ability of a brain to learn and adapt in an ever-changing environment is one of the key features that sets it apart from other forms of organised matter. The connection strengths, or weights, between neurons that are mediated by synaptic interactions are believed to be the main physiological correlates of learning and memory. The change in their value over time, on scales from milliseconds to decades, is referred to as synaptic plasticity. Other forms of brain plasticity are, of course, possible, such as myelin plasticity [691, 910], though we shall only consider the two main types of short- and long-term synaptic plasticity here. The former involves the change in connection strength in a use-dependent fashion that depends only on the pre-synaptic


activity. The latter depends on both pre- and post-synaptic activity, and is exemplified by Hebbian learning. This is the process whereby long-term synaptic modifications organise neurons into functional networks called cell assemblies [411]. These can perform as associative memory engrams, with memory retrieval occurring via the activation of a subset of the neurons within a given assembly. The (learnt) connections facilitate a network dynamics that can then spread activity to re-activate the whole assembly. More generally, the term ‘cell assembly’ is used to describe a group of neurons that perform a given action or represent a given percept or concept in the brain.

4.5.1 Short-term plasticity

For some synapses, the simple phenomenological model given by (4.117) does not hold. Processes such as synaptic facilitation and short-term depression cause the amplitude of the response to depend on the recent history of pre-synaptic firing. To describe post-synaptic potentials accurately, one needs a formalism that accounts for these history-dependent effects [9]. A natural way to incorporate such effects is to write

g(t) = ḡ ∑_m A(T_m) η(t − T_m),   (4.124)

where the factor A(T_m) scales the response evoked by an action potential by an amount that depends upon the details of the previous spike train data. A common model of facilitation or depression is to assume that between spikes A(t) relaxes, with a time constant τ_A (on a timescale slower than fast neural signalling though faster than experience-induced learning, so between milliseconds and minutes), to its steady-state value of one, but that directly after the arrival of a spike it changes discontinuously. For facilitation, A is increased, whilst for depression, it is decreased. For example, after the arrival of a spike one could make the replacement A → κA (multiplicative) or A → A + (κ − 1) (additive), with κ > 1 (κ < 1) for facilitation (depression). A more biophysical model of short-term plasticity has been developed by Tsodyks and Markram [883] that is capable of describing short-term depression (caused by depletion of neurotransmitters consumed during the pre-synaptic signalling process at the axon terminal) and short-term facilitation (caused by the influx of calcium into the axon terminal after spike generation, which increases the release probability of neurotransmitters). The model is written in terms of three coupled ODEs as

dx/dt = z/τ_rec − U x δ(t − T),
dy/dt = −y/τ_in + U x δ(t − T),
dz/dt = y/τ_in − z/τ_rec.   (4.125)


Here, y is the fraction of neurotransmitter released into the synaptic cleft after the arrival of a single spike at time T, x is the fraction of neurotransmitter recovered after the previous arrival of a spike, δ is the Dirac delta function, and z is the fraction of inactive neurotransmitter, subject to the constraint x + y + z = 1. The released neurotransmitter inactivates with time constant τ_in and the inactive neurotransmitter recovers with time constant τ_rec. The synaptic current received by a post-synaptic neuron is proportional to y. When the release probability U is constant, the model describes synaptic depression; to treat synaptic facilitation, U is considered to evolve according to the ODE

dU/dt = (Ū − U)/τ_fac + Ū(1 − U) δ(t − T),   (4.126)

where Ū denotes the maximal value of U, and τ_fac denotes the timescale of facilitation. For a discussion of the effect of such a dynamical synapse on the activity in feed-forward and recurrent neural networks, see [55, 874].
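A sketch of the depression-only version of (4.125) (constant U; parameters illustrative, not fitted to data) shows the successive decline of the released fraction y for a regular pre-synaptic train. Between spikes the ODEs are integrated by forward Euler, and the δ-function terms are applied as jumps at spike times:

```python
# Sketch of the Tsodyks-Markram model (4.125) with constant release
# probability U (pure depression).  Spike-triggered jumps implement the
# delta-function terms.  Parameter values are illustrative.
import numpy as np

tau_rec, tau_in, U = 800.0, 3.0, 0.5          # ms, ms, dimensionless
dt, T_total = 0.1, 500.0
spike_times = np.arange(50.0, T_total, 50.0)  # regular 20 Hz train

x, y, z = 1.0, 0.0, 0.0   # recovered, released, inactive fractions
peaks = []                # released fraction just after each spike
next_spike = 0
for t in np.arange(0.0, T_total, dt):
    if next_spike < len(spike_times) and t >= spike_times[next_spike]:
        release = U * x                        # a fraction U of resources is used
        x, y = x - release, y + release
        peaks.append(y)
        next_spike += 1
    dx = z / tau_rec
    dy = -y / tau_in
    dz = y / tau_in - z / tau_rec
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz

assert abs(x + y + z - 1.0) < 1e-9   # resource conservation x + y + z = 1
assert peaks[0] > peaks[-1]          # successive responses depress
```

Because τ_rec is much longer than the interspike interval, x cannot fully recover between spikes and the post-synaptic response (proportional to y) depresses; adding the dynamics (4.126) for U would produce facilitation in the same loop.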

4.5.2 Long-term plasticity

Long-term plasticity is exemplified by Hebb's rule, which can be loosely summarised as ‘neurons wire together if they fire together’, though with the caveat that for two cells A and B, cell A needs to ‘take part in firing’ cell B, and such causality can occur only if cell A fires just before, not at the same time as, cell B [411]. This aspect of temporal order has now been refined in spike-timing-dependent plasticity (STDP) rules [149], though these will not be considered here. If w_AB is the weight of the connection from neuron B to A, and r_A and r_B are the firing rates of the two neurons, then a continuous-time Hebbian model for the evolution of w_AB might take the form

ẇ_AB = −w_AB + ε r_A r_B,   ε > 0,   (4.127)

where the decay term −w_AB helps to keep the weights bounded. Biological models of Hebbian learning rules have also inspired counterparts in artificial neural networks. One example is the Oja rule, which can extract the first principal component, or feature, of a data set [677]. Principal component analysis (PCA) is often used as a dimensionality reduction technique in domains like computer vision and image compression. Consider a simple model of an artificial neuron with output y that returns a linear combination of its inputs x ∈ Rⁿ using pre-synaptic weights w ∈ Rⁿ according to y = wᵀx (where ᵀ denotes transpose). The Oja learning rule is given by

ẇ = η y (x − y w),   (4.128)

where η is the learning rate (chosen to decrease over time as t⁻¹). The first term involves the Hebbian product y x and the second term ensures that the weights do not blow up (and by design are approximately normalised to one). Denoting the


covariance matrix of the (zero-mean) data set by C = ⟨x xᵀ⟩, where ⟨·⟩ denotes an average over inputs, the steady state of (4.128) after learning (with repeated exposure to input data) gives

C w = λ w,   λ = wᵀ C w,   (4.129)

which is the eigensystem equation for the covariance matrix C. Thus, the weight vector becomes one of the eigenvectors of the input covariance matrix and the output of the neuron becomes the corresponding principal component. The proof that it is the first principal component that the neuron will find, and that the norm of the weight vector tends to one, is given in [677]. Generalisations of Oja's rule, such as that of Sanger [777], can extract further principal components. Thus, unsupervised Hebbian learning rules can underlie useful neural computations such as PCA.

The Bienenstock–Cooper–Munro (BCM) learning rule also combines a Hebbian product rule with a mechanism to keep the weights bounded [84, 452]. This mechanism is regarded as homeostatic, and the model uses a sliding threshold for long-term potentiation (LTP) or long-term depression (LTD) induction, whereby synaptic plasticity is stabilised by a dynamic adaptation of the time-averaged post-synaptic activity. In the BCM model, when a pre-synaptic neuron fires, a post-synaptic neuron will tend to undergo LTP if it is in a high-activity state, or LTD if it is in a low-activity state. The rule is thus a candidate model for explaining how cortical neurons can undergo both LTP and LTD depending on the conditioning stimulus protocol applied to pre-synaptic neurons (e.g., low-frequency stimulation at ∼ 1 Hz induces LTD, whereas synapses undergo LTP after high-frequency stimulation at ∼ 100 Hz). Although the BCM rule was developed to model the selectivity of visual cortical neurons [809], it has been more widely used, and recently a link between it and certain STDP rules (using triplets of spikes) has been made [343]. Denoting the weighted input to a neuron with input x ∈ Rⁿ by v = wᵀx, with w ∈ Rⁿ, the BCM rule can be written

τ_w ẇ = φ(v; θ) x,   θ = E[v]^p,   p > 1,   (4.130)

where E[v] denotes a running temporal average of v, and φ is a nonlinear function that depends on the sliding threshold θ. The function φ is chosen so that for low values of the post-synaptic activity (v < θ) φ is negative, and for high values (v > θ) it is positive. A natural choice is φ(v; θ) = v(v − θ), and for p = 2 the approximation E[v]² ≈ E[v²] is useful. A further replacement of the temporal average by a first-order low-pass temporal filter, E[v](t) → [ζ ∗ v](t), where ∗ denotes temporal convolution and ζ(t) = exp(−t/τ_θ)Θ(t)/τ_θ, gives a differential form of the learning rule as

τ_w ẇ = v x (v − θ),   τ_θ θ̇ = v² − θ.   (4.131)

A study of these equations when the homeostatic timescale τ_θ is close to the synaptic modification timescale τ_w has shown that instabilities can arise, leading to oscillations and chaos [888].
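Returning to the Oja rule (4.128), a short simulation illustrates its fixed point (4.129): the weight vector converges to the unit-norm leading eigenvector of the input covariance matrix. The data distribution and learning-rate schedule below are illustrative choices, not taken from [677]:

```python
# Sketch of the Oja rule (4.128) on zero-mean 2D Gaussian data: w converges to
# the leading eigenvector of the input covariance C with unit norm, as in
# (4.129).  Data distribution and learning-rate schedule are illustrative.
import numpy as np

rng = np.random.default_rng(0)
# Covariance with eigenvalues 3.5 and 0.5; leading eigenvector is (1, 1)/sqrt(2)
C = np.array([[2.0, 1.5], [1.5, 2.0]])
X = rng.multivariate_normal(np.zeros(2), C, size=20000)

w = rng.standard_normal(2)
for i, x in enumerate(X):
    eta = 1.0 / (100.0 + i)            # learning rate decreasing roughly as 1/t
    y = w @ x                          # neuron output y = w^T x
    w = w + eta * y * (x - y * w)      # discrete-time Oja update (4.128)

v1 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # leading eigenvector of C
assert abs(np.linalg.norm(w) - 1.0) < 0.1  # weights self-normalise
assert abs(abs(w @ v1) - 1.0) < 0.05       # w aligns with the first PC
```

The same scaffolding, with the update replaced by the pair (4.131), gives a minimal BCM simulation with a sliding threshold θ.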


Remarks

This chapter has examined simple models of some of the basic cellular components needed to build networks, covering the axon, dendrite, and synapse. This is a very neuron-centric view, and we have not attempted to address more realistic scenarios incorporating sub-cellular processes for metabolism, nutrient transport, the tri-partite synapse (incorporating the pre- and post-synaptic membrane with glia), coupling to cerebral blood flow, and intercellular signalling for vessel dilation. These are all necessary components for building forward models for the generation of the physiologically realistic blood oxygen level-dependent signals seen in functional neuroimaging studies. Although beyond the scope of this text, it is well to mention that such challenges are being met with the advent of new journals, such as Brain Multiphysics [1], and for a recent book on the importance of astrocytes (a type of cortical glial cell) in brain processing and the storage of information, see [230]. Furthermore, we have viewed the dendrite and synapse as essentially passive structures that can be described with appropriate Green's functions, and have only considered the axon as truly excitable and capable of supporting travelling pulses. We now know that the historical perspective of the neuron as simply receiving and relaying information (spikes) is far from true, and that the backpropagation of action potentials into dendritic trees and their within-tree generation is an important part of single neuron dynamics, see, e.g., [601, 923], that can depend sensitively on dendritic morphology [907]. Theoretical work on this topic has largely relied on compartmental modelling, and it is an open challenge to develop more mathematical results in this arena, though see [188, 868] for some analysis of solitary waves in a model of dendritic cable with active spines.
The treatment of plasticity presented in this chapter is extremely light (especially compared to the activity in the fields of experimental neuroscience and machine learning), although we did briefly mention the importance of dendritic spines in Sec. 4.4.1. Apart from the work of Baer and colleagues [216, 766, 905, 906], there seems to be surprisingly little on the modelling of activity-dependent changes in dendritic spine morphology and density, and this is a topic worthy of further mathematical investigation. Furthermore, we did not discuss STDP in any detail, mainly because it is so well treated elsewhere, as in the book by Gerstner et al. [336] or the paper by van Rossum et al. [895], and because the consequences of such rules are mainly only known through simulation studies. Nonetheless, it is interesting that STDP rules can sometimes be derived by optimising the likelihood of post-synaptic firing at one or several desired firing times [711]. We have also not touched upon reinforcement learning rules, such as those developed by Sutton and Barto [838]. These have led to numerous successes in understanding how biological networks could produce observed behaviours [320] and solve spatial navigation tasks, as recently reviewed in [861]. An open challenge for the theoretical neuroscience community is an explanation of so-called one-shot learning, where the experience of just a single event can cause long-lasting memories [97].

With the growing success of machine learning paradigms, such as deep learning, there is now an increasing cross-over and cross-fertilisation of ideas between neuroscience and artificial intelligence regarding learning rules [725, 745]. Moreover, there is increased interest in opening the ‘black box’ of artificial neural networks using ideas from dynamical systems theory, such as recently discussed by Ceni et al. [157] for reservoir computing. One should also note the growing number of conferences and workshops in this area, as exemplified by those coordinated by the Machine Learning and Dynamical Systems group at the Alan Turing Institute [2]. Once again, all of these interesting topics are beyond the scope of this book, though they do provide a nice set of mathematical challenges.

Problems

4.1. Consider the FitzHugh–Nagumo system of PDEs modelling nerve impulse propagation along an axon:
$$
\frac{\partial v}{\partial t} = \frac{\partial^2 v}{\partial x^2} + f(v) - u, \qquad \frac{\partial u}{\partial t} = \beta v,
\tag{4.132}
$$
where v(x, t) represents the membrane potential and u(x, t) is a phenomenological recovery variable; f(v) = v(α − v)(v − 1), 1 > α > 0, β > 0, x ∈ ℝ, t > 0. Travelling waves are solutions of the form
$$
v(x, t) = V(\xi), \qquad u(x, t) = U(\xi), \qquad \xi = x + ct,
\tag{4.133}
$$

for some unknown speed c.
(i) Derive a system of three ODEs for the travelling wave profiles.
(ii) Check that the system has a unique equilibrium with one positive eigenvalue and two eigenvalues with negative real parts.
(iii) Show that the equilibrium can be either a saddle or a saddle-focus with a one-dimensional unstable and a two-dimensional stable invariant manifold, and show that the boundary between these two cases is given by the curve
$$
D = \{(\alpha, c) : c^4(4\beta - \alpha^2) + 2\alpha c^2(9\beta - 2\alpha^2) + 27\beta^2 = 0\}.
\tag{4.134}
$$
[Hint: at the boundary, the characteristic polynomial has a double root.]
(iv) Plot the boundary obtained above in the (α, c) plane for β = 0.01, 0.005, and β = 0.0025. Specify the region corresponding to a saddle-focus.
(v) Sketch possible profiles of travelling impulses in both regions.
(vi) Compute the number of saddle limit cycles present in the neighbourhood of the saddle-focus equilibrium.

4.2. Consider the FitzHugh–Nagumo system in the form

$$
\varepsilon \frac{\partial v}{\partial t} = \varepsilon^2 \frac{\partial^2 v}{\partial x^2} + f(v, w), \qquad \frac{\partial w}{\partial t} = g(v, w).
\tag{4.135}
$$
(i) Show that, in general, travelling waves are specified by the ODEs:
$$
\varepsilon^2 v_{\xi\xi} + c\varepsilon v_\xi + f(v, w) = 0, \qquad c w_\xi + g(v, w) = 0,
\tag{4.136}
$$
where ξ = x − ct.

(ii) Consider the piecewise linear dynamics (McKean model with recovery)
$$
f(v, w) = \Theta(v - \alpha) - v - w, \qquad g(v, w) = v,
\tag{4.137}
$$
where Θ(x) is a Heaviside step function such that Θ(x) = 1 if x ≥ 0 and is zero otherwise. Consider solutions of the form shown in the figure with v(ξ) → 0 as ξ → ±∞. Regions I, II, and III are defined respectively by ξ < 0, 0 < ξ < ξ₁, and ξ₁ < ξ, and in the travelling coordinate frame v(0) = v(ξ₁) = α. Note that ξ₁ and c are undetermined. In regions I and III, look for solutions of the form v = A exp(λξ) and w = B exp(λξ) and show that λ satisfies the characteristic equation
$$
\varepsilon^2 p(\lambda) \equiv \varepsilon^2 \lambda^3 + \varepsilon c \lambda^2 - \lambda + 1/c = 0.
\tag{4.138}
$$
(iii) Write down a general solution of the system in region II.
(iv) Discuss why one might look for travelling waves of the form
$$
w(\xi) =
\begin{cases}
A e^{\lambda_1 \xi}, & \xi \geq \xi_1, \\
1 + \sum_{i=1}^{3} B_i e^{\lambda_i \xi}, & 0 \leq \xi \leq \xi_1, \\
\sum_{i=2}^{3} C_i e^{\lambda_i \xi}, & \xi \leq 0,
\end{cases}
\tag{4.139}
$$
with v = −c w_ξ. You may assume that there is exactly one negative root (λ₁) of the characteristic equation p(λ) = 0 and that the real parts of the other two roots (λ₂ and λ₃) are positive.
(v) From continuity of the solution and its first derivative, and using the fact that v(0) = v(ξ₁) = α, show that the two unknowns (c and ξ₁) are completely specified by the following system:

$$
e^{\lambda_1 \xi_1} + \varepsilon^2 p'(\lambda_1)\alpha - 1 = 0,
\tag{4.140}
$$
$$
\frac{e^{-\lambda_2 \xi_1}}{p'(\lambda_2)} + \frac{e^{-\lambda_3 \xi_1}}{p'(\lambda_3)} + \frac{1}{p'(\lambda_1)} + \varepsilon^2 \alpha = 0.
\tag{4.141}
$$

(vi) By numerically solving the equations of part (v) simultaneously, obtain a plot of the wave speed as a function of α for ε = 0.01, 0.1, and ε = 0.5. Discuss your results and comment on how one may determine the stability of solutions.
(vii) Discuss the extension of this approach to treat pulses with an oscillatory tail. See [960] for further discussion.
(viii) Discuss the extension of this approach to periodic wave trains and outline a method to calculate the speed of a wave as a function of the period of the wave (the dispersion relation). See [376] for further discussion.

4.3. Consider the continuum Fire–Diffuse–Fire model for v = v(x, t) ∈ ℝ with x ∈ ℝ and t > 0:
$$
\frac{\partial v}{\partial t} = -\frac{v}{\tau} + D \frac{\partial^2 v}{\partial x^2} + \sum_{m \in \mathbb{Z}} \eta(t - T^m(x)).
\tag{4.142}
$$
Here, T^m(x) denotes the mth time that a threshold h is crossed from below, and the release profile is such that η(t) = 0 for t < 0. Consider the case that η(t) may be written in terms of a Fourier transform as
$$
\eta(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \mathrm{d}k \, \widehat{\eta}(k)\, e^{ikt}, \qquad \widehat{\eta}(k) = \int_{0}^{\infty} \mathrm{d}t \, \eta(t)\, e^{-ikt},
\tag{4.143}
$$

where any poles of η̂(k) must lie in the upper-half complex plane.
(i) Show that the general solution to (4.142) may be written as
$$
v(x, t) = \sum_{m \in \mathbb{Z}} \int_{-\infty}^{\infty} \mathrm{d}y \int_{-\infty}^{t} \mathrm{d}s \, G(x - y, t - s)\, \eta(s - T^m(y)) + \int_{-\infty}^{\infty} \mathrm{d}y \, G(x - y, t)\, v(y, 0),
\tag{4.144}
$$
where
$$
G(x, t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \mathrm{d}k \, \widehat{G}(k, t)\, e^{ikx}, \qquad \widehat{G}(k, t) = e^{-\varepsilon(k) t}, \qquad \varepsilon(k) = \tau^{-1} + D k^2.
\tag{4.145}
$$

(ii) Show that a travelling pulse solution with speed c may be written
$$
v(x, t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \mathrm{d}k \, e^{ik(t - x/c)} \frac{\widehat{\eta}(k)}{\varepsilon(k/c) + ik},
\tag{4.146}
$$
with the speed determined implicitly by
$$
h = \frac{c\, \widehat{\eta}(-i c m_+)}{D(m_+ - m_-)}, \qquad m_\pm = \frac{1}{2D}\left(c \pm \sqrt{c^2 + 4D/\tau}\right).
\tag{4.147}
$$
(iii) Show that for the choice η(t) = σ H(t)H(τ_R − t)/τ_R, this gives
$$
h = \sigma \frac{\tau}{\tau_R} \frac{m_-}{m_- - m_+} \left[1 - e^{-m_+ c \tau_R}\right].
\tag{4.148}
$$

(iv) Show that when a pair of pulses coexists, the fast branch of solutions is stable and the slow branch unstable.

4.4. Consider the simple lattice model
$$
\frac{\mathrm{d}v_n}{\mathrm{d}t} = f(v_n) + D(v_{n+1} - 2v_n + v_{n-1}), \qquad n \in \mathbb{Z}.
\tag{4.149}
$$
(i) For the choice f(v) = −v + Θ(v − a), where Θ is the Heaviside step function, show that a stationary front solution (with v_n → 1 as n → ∞ and v_n → 0 as n → −∞) is given by
$$
v_n =
\begin{cases}
1 - (1 + \lambda)^{-1} \lambda^{n+1}, & n \geq 0, \\
(1 + \lambda)^{-1} \lambda^{-n}, & n < 0,
\end{cases}
\tag{4.150}
$$

and τ ≥ 0.
(i) Show that there is a homogeneous network steady state z_k = 0 for all k.


(ii) Linearise the network equations around the steady state with z_k(t) = 0 + u_k(t) for u_k ∈ ℂ and |u_k| small to obtain the dynamics for X_k = (u_k, u_k^*) ∈ ℂ²:
$$
\dot{X}_k = DF\, X_k + \frac{\kappa \beta}{2} \sum_{j=1}^{N} w_{kj}\, \mathbb{1}_2\, X_j(t - \tau),
\tag{7.130}
$$
where 1₂ is a 2 × 2 matrix with all entries equal to 1 and DF is the Jacobian:
$$
DF = \begin{pmatrix} \lambda_+ & 0 \\ 0 & \lambda_- \end{pmatrix}, \qquad \lambda_\pm = \lambda \pm i\omega.
\tag{7.131}
$$
(iii) Show that the network steady state is stable if Re γ < 0, where E_μ(γ) = 0:
$$
E_\mu(\gamma) = \det\left[\gamma I_2 - DF - \gamma_\mu \frac{\kappa \beta}{2} e^{-\gamma \tau}\, \mathbb{1}_2\right], \qquad \mu = 1, \ldots, N,
\tag{7.132}
$$
and γ_μ ∈ ℝ are the eigenvalues of w.
(iv) For τ = 0, show that γ = γ_±(μ), where γ_+(μ) = λ + iω + γ_μ κβ and γ_−(μ) = λ − iω. For κ > 0 and λ < −κβ, determine the value of κ that leads to an instability.
(v) For τ > 0, show that an instability can occur if there is a positive solution to the quadratic equation x² + bx + c = 0, where
$$
b = 2(\lambda^2 - \omega^2) - 4\bar{\gamma}_\mu^2, \qquad c = (\lambda^2 + \omega^2)^2 - 4\bar{\gamma}_\mu^2 \lambda^2, \qquad \bar{\gamma}_\mu = \gamma_\mu \kappa \beta / 2.
\tag{7.133}
$$
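For part (iv), the τ = 0 case can be sanity-checked numerically: for a symmetric w, the full 2N × 2N linearisation block-diagonalises over the eigenvectors of w, so its spectrum is the union of the spectra of DF + γ_μ(κβ/2)1₂. A sketch (the parameter values and the coupling matrix are arbitrary choices made for illustration):

```python
import numpy as np

lam, om, kappa, beta = -0.5, 1.0, 0.3, 1.0
N = 4
rng = np.random.default_rng(0)
w = rng.standard_normal((N, N))
w = (w + w.T) / 2                             # symmetric, so real eigenvalues gamma_mu

DF = np.diag([lam + 1j * om, lam - 1j * om])  # Jacobian of a single uncoupled node
ones2 = np.ones((2, 2))                       # the matrix 1_2 of (7.130)

# full linearisation at tau = 0: X' = (I_N kron DF + (kappa*beta/2) w kron 1_2) X
M = np.kron(np.eye(N), DF) + (kappa * beta / 2) * np.kron(w, ones2)
full = np.linalg.eigvals(M)

# blockwise prediction: eigenvalues of DF + gamma_mu*(kappa*beta/2)*1_2 for each gamma_mu
block = []
for g in np.linalg.eigvals(w):
    block.extend(np.linalg.eigvals(DF + g * (kappa * beta / 2) * ones2))

spectra_match = all(np.min(np.abs(full - b)) < 1e-8 for b in block)
print(spectra_match)
```

Stability at τ = 0 is then read off from the maximal real part of either spectrum.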

7.8. Consider a synaptic network of PWLIF neurons described in Sec. 3.4.6 that can be written in the form [662]
$$
\dot{v}_i = I - w_i + \sigma \sum_{j=1}^{N} w_{ij} s_j(t) +
\begin{cases}
a_L v_i, & v_i \leq 0, \\
a_R v_i, & v_i > 0,
\end{cases}
\qquad
\tau \dot{w}_i = a_w v_i + b_w w_i,
\tag{7.134}
$$
for i = 1, ..., N. Here I > 0, a_L < 0, a_R > 0, s_i(t) = Σ_{m∈ℤ} η(t − T_i^m), η(t) = α²te^{−αt}Θ(t), and T_i^m denotes the mth firing time of the ith neuron. Firing events occur whenever v_i increases through the threshold v_th from below, at which time v_i → v_r and w_i → w_i + κ/τ with κ > 0. The weights w_ij are balanced in the sense that Σ_{j=1}^N w_ij = 0 for all i.
(i) Find a pair of self-consistent equations for the period Δ and initial value w₀ of a periodic tonic orbit (v(t), w(t)) = (v(t + Δ), w(t + Δ)) with v(t) > 0 and (v(0), w(0)) = (v_th, w₀).
(ii) Show that for an uncoupled neuron (σ = 0) undergoing a Δ-periodic tonic orbit, the saltation matrix for propagating perturbations through a firing event is given by K(Δ), where
$$
K(t) = \begin{pmatrix}
\dot{v}(t^+)/\dot{v}(t^-) & 0 \\
(\dot{w}(t^+) - \dot{w}(t^-))/\dot{v}(t^-) & 1
\end{pmatrix}.
\tag{7.135}
$$


Use this to show that the orbit is stable if Re r < 0, where
$$
r = a_R + \frac{b_w}{\tau} + \frac{1}{\Delta} \ln \left( \frac{a_R v_r - w(0) + I}{a_R v_{\mathrm{th}} - w(\Delta) + I} \right).
\tag{7.136}
$$

(iii) Show that the saltation matrix for the synchronous orbit in the coupled network is
$$
K(t) = \begin{pmatrix}
\dot{v}(t^+)/\dot{v}(t^-) & 0 & 0 & 0 \\
(\dot{w}(t^+) - \dot{w}(t^-))/\dot{v}(t^-) & 1 & 0 & 0 \\
(\dot{s}(t^+) - \dot{s}(t^-))/\dot{v}(t^-) & 0 & 1 & 0 \\
(\dot{u}(t^+) - \dot{u}(t^-))/\dot{v}(t^-) & 0 & 0 & 1
\end{pmatrix},
\tag{7.137}
$$
where
$$
\left(1 + \frac{1}{\alpha}\frac{\mathrm{d}}{\mathrm{d}t}\right) s = u, \qquad \left(1 + \frac{1}{\alpha}\frac{\mathrm{d}}{\mathrm{d}t}\right) u = \sum_{m \in \mathbb{Z}} \delta(t - m\Delta).
\tag{7.138}
$$

(iv) Show that the MSF is the largest number in the set Re(ln γ(β))/Δ, where γ(β) is an eigenvalue of K(Δ) exp{(A_R + β DH)Δ}, β ∈ ℂ, where
$$
A_R = \begin{pmatrix}
a_R & -1 & 0 & 0 \\
a_w/\tau & b_w/\tau & 0 & 0 \\
0 & 0 & -\alpha & \alpha \\
0 & 0 & 0 & -\alpha
\end{pmatrix},
\tag{7.139}
$$
and DH is a constant 4 × 4 matrix with [DH]_{ij} = 1 if i = 1 and j = 3, and 0 otherwise.
(v) Consider a balanced ring network with N odd and w_ij = w(|i − j|), with distances calculated modulo (N − 1)/2. Show that the circulant structure leads to normalised eigenvectors of w given by e_l = (1, ω_l, ω_l², ..., ω_l^{N−1})/√N, where l = 0, ..., N − 1, and ω_l = exp(2πil/N) are the Nth roots of unity. Show that the eigenvalues of the (symmetric) connectivity matrix are real and given by
$$
\lambda_l = \sum_{j=0}^{N-1} w(|j|)\, \omega_l^{\,j}.
\tag{7.140}
$$

(vi) Plot the MSF for α = 0.4, v_th = 1, v_r = 0.2, a_w = 0, b_w = −1, a_R = 1, I = 0.1, κ = 0.75, and τ = 3. On top of this, plot the spectra for a ring network with N = 31 for various values of σ with the choice w(x) = (1 − a|x|/3)e^{−|x|/3}, and with a chosen to ensure Σ_{j=1}^N w_ij = 0. Compare your predictions of the stability of the synchronous tonic orbit with direct numerical simulations.

7.9. Show that for the choice of an exponential weight kernel w(x) = e^{−|x|}/2 and an α-function synapse η(t) = α²te^{−αt}Θ(t), the synaptic drive in the travelling wave frame given by (7.74) takes the explicit form
$$
\Psi(\xi) =
\begin{cases}
\dfrac{c\alpha^2}{2(\alpha + c)^2}\, e^{c\xi}, & \xi \leq 0, \\[2mm]
\dfrac{c\alpha^2}{2(\alpha + c)^2} \left[(\alpha + c)\xi + 1\right] e^{-\alpha\xi} + \dfrac{c\alpha^2}{2(\alpha - c)^2} \left[(c - \alpha)\xi\, e^{-\alpha\xi} + e^{-\alpha\xi} - e^{-c\xi}\right], & \xi > 0.
\end{cases}
\tag{7.141}
$$
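The circulant structure invoked in part (v) of Prob. 7.8 is easy to verify numerically: build a ring-distance matrix, apply any kernel w(|x|), and compare the eigenvalues predicted by (7.140) with a direct computation. A sketch with an arbitrary kernel choice:

```python
import numpy as np

N = 7                                  # odd ring size
w_fn = lambda d: np.exp(-d / 3.0)      # an arbitrary distance-dependent kernel

# ring distances d(i, j) = min(|i - j|, N - |i - j|)
idx = np.arange(N)
D = np.abs(idx[:, None] - idx[None, :])
D = np.minimum(D, N - D)
W = w_fn(D)                            # circulant and symmetric

# predicted eigenvalues: lambda_l = sum_j W[0, j] * omega_l**j, omega_l = exp(2*pi*i*l/N)
omega = np.exp(2j * np.pi * np.arange(N) / N)
pred = np.array([np.sum(W[0, :] * omega[l] ** np.arange(N)) for l in range(N)])

direct = np.linalg.eigvals(W)
eigs_match = all(np.min(np.abs(direct - p)) < 1e-8 for p in pred)
print(eigs_match)
```

Since W is symmetric, the predicted eigenvalues come out real to machine precision, as claimed.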


7.10. Consider the continuum model of a synaptically coupled network as described in Sec. 7.7. Replace the synaptic response with that of pulsatile input to an infinite dendrite at a fixed position along the cable, so that η(t) → G(d, t), with an IF soma process at the cable origin. Here, G(d, t) is the Green's function of the infinite cable (given by equation (4.63)) with membrane time constant τ and diffusion coefficient D, and d is the distance (from the soma) along the cable at which the pulsatile input arrives.
(i) Show that the Fourier transform of η is given by
$$
\widehat{G}(d, k) = \frac{e^{-\gamma(k) d}}{2 D \gamma(k)}, \qquad \gamma^2(k) = \frac{1 + ik\tau}{D\tau},
\tag{7.142}
$$
and hence show that for D = 1 = τ, a solitary spiking wave has a speed c that satisfies
$$
v_{\mathrm{th}} = \sigma \frac{c\, e^{-\sqrt{1 + c}\, d}}{4(\tau_m^{-1} + c)\sqrt{1 + c}}.
\tag{7.143}
$$
(ii) Show that for synapses proximal to the soma (i.e., those with small d), the speed of the wave scales as σ².

7.11. Consider the continuum model of a synaptically coupled LIF network as described in Sec. 7.7 in the oscillatory regime with I > 0. Further, include an absolute refractory process so that whenever there is a firing event at position x, the tissue at that point is held at the reset level for a time τ_R. Consider a periodic wave train of speed c and interspike interval Δ, so that firing times are given by T^m(x) = x/c + mΔ.
(i) By integrating the equations of motion and using the threshold condition, show that
$$
h = \sigma \int_{-\infty}^{\infty} \mathrm{d}y \, w(|y|)\, K(y),
\tag{7.144}
$$

where h = v_th − Iτ + e^{−Δ/τ}(Iτ − v_r) and
$$
K(y) = e^{-\Delta/\tau} \int_{\tau_R}^{\Delta} \mathrm{d}t \, e^{t/\tau} \sum_{m \in \mathbb{Z}} \eta(t - y/c - m\Delta).
\tag{7.145}
$$

(ii) By exploiting the Δ-periodicity of P(t) = Σ_{m∈ℤ} η(t − mΔ), show that P(t) has a Fourier series Σ_{p∈ℤ} η_p e^{2πipt/Δ} with η_p = η̂(2πp/Δ)/Δ, where η̂ is the Fourier transform of η. Hence, show that
$$
h = \frac{\sigma}{\Delta} \sum_{p \in \mathbb{Z}} \frac{\widehat{\eta}(2\pi p/\Delta)\, \widehat{w}(2\pi p/(c\Delta))}{\tau_m^{-1} + 2\pi i p/\Delta} \left[1 - e^{-(\Delta - \tau_R)/\tau_m}\, e^{2\pi i p \tau_R/\Delta}\right].
\tag{7.146}
$$

(iii) For the choice of an exponential weight kernel w(x) = e^{−|x|}/2 and an α-function synapse η(t) = α²te^{−αt}Θ(t), show that
$$
\widehat{w}(k) = \frac{1}{1 + k^2}, \qquad \widehat{\eta}(k) = \frac{\alpha^2}{(\alpha + ik)^2}.
\tag{7.147}
$$


(iv) Use the computationally useful form of the implicit dispersion relationship above for c = c(Δ) to determine the solution branches, and determine their stability using kinematic theory (see Sec. 4.2.2). For α = 1/2, σ = 60, and τ_R = 10, show that a periodic wave with period Δ ≈ 23 is predicted to undergo a period-doubling instability.

7.12. Consider the continuum LIF model (7.68) with the inclusion of distance-dependent axonal delays and action potentials of uniform speed c₀, such that (7.70) takes the form
$$
\psi(x, t) = \int_{-\infty}^{\infty} \mathrm{d}y \, w(|y|)\, s(x - y, t - |y|/c_0).
\tag{7.148}
$$

(i) Show that for a wave with speed 0 < c < c₀,
$$
v_{\mathrm{th}} = \sigma \int_{-\infty}^{\infty} \frac{\mathrm{d}k}{2\pi} \frac{\widehat{\eta}(k)}{\tau_m^{-1} + ik} \left[\widehat{w}(k/c_+) + \widehat{w}(k/c_-)\right],
\tag{7.149}
$$
where
$$
\widehat{w}(k) = \int_{0}^{\infty} \mathrm{d}y \, w(y)\, e^{-iky}, \qquad \frac{1}{c_\pm} = \frac{1}{c} \pm \frac{1}{c_0}.
\tag{7.150}
$$

(ii) For an exponential kernel w(x) = e^{−|x|}/2, show that
$$
v_{\mathrm{th}} = \sigma \frac{c_-}{2(\tau_m^{-1} + c_-)}\, \widehat{\eta}(-i c_-).
\tag{7.151}
$$
(iii) Determine the stability of solution branches for the choice of an α-function synapse by constructing the appropriate Evans function.

Chapter 8

Population models

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. Coombes and K. C. A. Wedgwood, Neurodynamics, Texts in Applied Mathematics 75, https://doi.org/10.1007/978-3-031-21916-0_8

8.1 Introduction

Ever since the first recordings of the human electroencephalogram (EEG) in 1924 by Hans Berger [72], electrophysiological brain recordings have been shown to be dominated by oscillations (rhythmic activity in cell assemblies) across a wide range of temporal scales. From a behavioural perspective, these oscillations can be grouped into five main frequency bands: delta (1–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–30 Hz), and gamma (30–200 Hz). Alpha rhythms are associated with awake resting states and REM sleep, delta with deep sleep, and theta with drowsiness, whilst beta and gamma rhythms are associated with task-specific responses [32, 59, 133, 796]. It is also believed that coupling across these frequency bands may have a functional role in neural computation and, in particular, that different kinds of information can be transmitted in different frequency bands so that they work synergistically [34, 146, 471]. Whilst there is presently little known about the role of the theta rhythm in humans [463], its importance for spatial navigation in rodent hippocampi has attracted significant research. Here, it is postulated that oscillations in the theta band provide a reference oscillation for cells to tune responses to encode locations and features of the environment. Such a rhythmic response allows information to be encoded in the timing of events relative to the reference oscillation, such as the often-studied theta phase precession [678, 816].

It is widely believed that the rhythmic EEG signals that can be recorded from a single scalp electrode arise from the coordinated activity of ∼ 10⁸ pyramidal cells in cortex [813]. There are now many mathematical models to describe the coarse-grained activity of large populations of neurons and synapses using low-dimensional models. These are commonly referred to as neural mass models, and are an integral part of the Virtual Brain project that aims to deliver the first open simulation of the human brain based on individual large-scale connectivity [778]. They are typically cast as systems of ordinary differential equations (ODEs) and in their modern incarnations are exemplified by variants of the two-dimensional Wilson–Cowan model [944]. This model tracks the activity of an excitatory population of neurons coupled to an inhibitory population. With the augmentation of such models by more realistic forms of synaptic and network interaction, they have proved especially successful in providing fits to neuroimaging data [890]. One of the first neural mass models is that of Zetterberg et al. [962] for the EEG rhythm. Since its introduction, this model has been popularised more widely by the work of Jansen and Rit [467] and used to explain epileptic brain dynamics [929]. Another well-known neural mass model is that of Liley [568], which pays particular attention to the role of synaptic reversal potentials. Indeed, there is now a plethora of neural mass models that accommodate other features of known biological reality, including spike-frequency adaptation [44, 65], threshold accommodation [199], depressing synapses [55, 586, 841], and synaptic plasticity [914]. It is important to remember that, at heart, all neural mass models to date are essentially phenomenological, with state variables that track coarse-grained measures of the average membrane potential, population firing rate, or synaptic activity. At best, they are expected to provide appropriate levels of description for many thousands of near-identical interconnected neurons with a preference to operate in synchrony. This latter assumption is especially important for the generation of a sufficiently strong physiological signal that can be detected non-invasively. In this chapter, we shall review a variety of neural mass models from a mathematical perspective, as well as discuss the derivation of an exact mean-field model for a network of synaptically coupled quadratic integrate-and-fire (QIF) neurons. This derived model has a firing rate that depends on the degree of population synchrony and has a richer dynamical repertoire than standard neural mass models.

8.2 Neural mass modelling: phenomenology

Neural mass models generate brain rhythms using the notion of population firing rates, aiming to sidestep the need for large-scale simulations of more realistic networks of spiking neurons. Although they are not derived from detailed conductance-based models, they can be motivated by a number of phenomenological arguments. The one described here is based on the premise of slow synaptic interactions. We have seen in Chap. 4 that an event-based model for a post-synaptic current can be written in the form I = g(v_syn − v), where v is the voltage of the post-synaptic neuron, v_syn is the reversal potential of the synapse, and g is a conductance. This latter dynamic variable is often considered to evolve according to a linear differential equation of the form
$$
Q g = k \sum_{m \in \mathbb{Z}} \delta(t - T^m),
\tag{8.1}
$$
where k is some positive amplitude, T^m denotes the arrival time of the mth pre-synaptic action potential, and Q is a linear differential operator such as in (4.115). However, for neural mass models, it is assumed that the interactions are mediated by firing rates rather than action potentials (spikes) per se. To see how this might arise, consider a short-time average of (8.1) over some timescale τ, as done in Sec. 7.6, and assume that g is sufficiently slow so that ⟨Qg⟩_t is approximately constant, where
$$
\langle x \rangle_t = \frac{1}{\tau} \int_{t-\tau}^{t} x(t')\, \mathrm{d}t'.
\tag{8.2}
$$
In this case, Qg = k f, where f is the instantaneous firing rate (number of spikes per unit time). For a single neuron (real or synthetic) experiencing a constant drive, it is natural to assume that this firing rate is a function of the drive alone. If, for the moment, it is assumed that a neuron spends most of its time close to rest such that v_syn − v ≈ v_syn, and a factor v_syn is absorbed into k, then for synaptically interacting neurons, this drive is directly proportional to the conductance state of the pre-synaptic neuron. Thus, for a single population of identically and globally coupled neurons operating synchronously, one is led naturally to equations of the form
$$
Q g = \kappa f(g),
\tag{8.3}
$$
for some strength of coupling κ. A common choice for the population firing rate function is the sigmoid
$$
f(g) = \frac{f_0}{1 + e^{-r(g - g_0)}},
\tag{8.4}
$$
which saturates to f₀ for large g. This functional form, with threshold g₀ and steepness parameter r, is not derived from a biophysical model; rather, it is seen as a physiologically consistent choice.
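At equilibrium the operator Q reduces to multiplication by its static gain; assuming this gain is normalised to one, steady states of (8.3) satisfy g = κf(g), with f the sigmoid (8.4). The bisection sketch below locates these states; the parameter values are arbitrary, chosen so that the steep sigmoid yields bistability:

```python
import numpy as np

f0, r, g0, kappa = 1.0, 8.0, 0.5, 1.2

def f(g):
    """Sigmoidal population firing rate, Eq. (8.4)."""
    return f0 / (1.0 + np.exp(-r * (g - g0)))

def G(g):
    """Steady states of Qg = kappa*f(g) (unit static gain) solve G(g) = 0."""
    return kappa * f(g) - g

# scan for sign changes, then refine each bracket by bisection
gs = np.linspace(0.0, 2.0, 400)
vals = G(gs)
roots = []
for i in range(len(gs) - 1):
    if vals[i] * vals[i + 1] < 0:
        a, b = gs[i], gs[i + 1]
        for _ in range(100):
            m = 0.5 * (a + b)
            if G(a) * G(m) <= 0:
                b = m
            else:
                a = m
        roots.append(0.5 * (a + b))
print(roots)   # three crossings: two stable states separated by an unstable one
```

The graphical picture is the familiar one of a sigmoid intersecting the diagonal: for steep gain and suitable κ there are three intersections, the outer two stable.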

8.3 The Wilson–Cowan model

For their activity-based model, Wilson and Cowan [944, 945] distinguished between excitatory and inhibitory sub-populations, as well as accounting for refractoriness. This seminal (space-clamped) model can be written succinctly in terms of the pair of coupled ODEs:
$$
\tau_E \frac{\mathrm{d}E}{\mathrm{d}t} = -E + (1 - r_E E)\, f_E\!\left[W_{EE} E - W_{EI} I + p_E\right], \qquad
\tau_I \frac{\mathrm{d}I}{\mathrm{d}t} = -I + (1 - r_I I)\, f_I\!\left[W_{IE} E - W_{II} I + p_I\right].
\tag{8.5}
$$
Here, E = E(t) is a temporally coarse-grained variable describing the proportion of excitatory cells firing per unit time at the instant t. Similarly, the variable I represents the activity of an inhibitory population of cells. The constants W_ab > 0 describe the weight of all synapses to population a from cells of population b (where a, b ∈ {E, I}), and r_a is proportional to the refractory period of the ath population. The nonlinear function f_a describes the expected proportion of neurons in population a receiving at least threshold excitation per unit time, and is often taken to have the sigmoidal form specified by (8.4). Here, the terms p_a represent external inputs that could be time varying. For a historical perspective on the Wilson–Cowan model, see [245], and for a more recent reflection by Cowan et al., see [211].

In many modern uses of the Wilson–Cowan equations, the refractory terms are often dropped (by setting r_a = 0). When chosen to have strong positive feedback from the excitatory population to itself balanced by strong negative feedback from the inhibitory population (as for models of the hippocampus and the primary visual cortex), the model is often referred to as an inhibition-stabilised network (ISN) [464]. In their working regime, ISNs produce damped oscillations in the gamma frequency range in response to inputs to the inhibitory population. The system possesses a resonant frequency, which can be determined by linearisation around the fixed point, and under variation of parameters (in the absence of forcing) may undergo a Hopf bifurcation (for more about Hopf bifurcations, see Box 2.4 on page 25). In the absence of self-sustained oscillations, one would expect an ISN to respond to periodic forcing by showing an enhanced response when the resonant frequency is rationally related to the forcing frequency, and nonlinear theory predicts the largest responses when these frequency ratios are 1:2, 1:1, and 2:1. A general understanding of such resonant responses is challenging, though some progress has been made for systems near a Hopf bifurcation [903]. By contrast, when forcing a system that is intrinsically oscillating, one might expect more exotic quasi-periodic responses, organised around an Arnol'd tongue structure [717], as discussed in Sec. 5.2. An example of an ISN phase space (with r_E = 0 = r_I) is shown in the inset of Fig. 8.1, together with an example of a sustained oscillation (at a different parameter set).
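The two regimes are easy to explore by direct simulation of (8.5) with r_E = r_I = 0. The sketch below uses the parameter values quoted in the caption of Fig. 8.1, with (p_E, p_I) = (0, −1); the initial condition and step size are arbitrary choices:

```python
import numpy as np

# parameters from the Fig. 8.1 caption, with (pE, pI) = (0, -1)
tauE, tauI = 3.0, 8.0
WEE, WEI, WIE, WII = 10.0, 12.0, 10.0, 10.0
pE, pI = 0.0, -1.0

f = lambda x: 1.0 / (1.0 + np.exp(-x))

def rhs(y):
    E, I = y
    dE = (-E + f(WEE * E - WEI * I + pE)) / tauE
    dI = (-I + f(WIE * E - WII * I + pI)) / tauI
    return np.array([dE, dI])

def rk4(y, dt, n):
    """Fixed-step fourth-order Runge-Kutta integration."""
    traj = [y]
    for _ in range(n):
        k1 = rhs(y); k2 = rhs(y + dt * k1 / 2)
        k3 = rhs(y + dt * k2 / 2); k4 = rhs(y + dt * k3)
        y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        traj.append(y)
    return np.array(traj)

traj = rk4(np.array([0.4, 0.2]), 0.05, 20000)
E = traj[-10000:, 0]            # discard the transient
print(E.min(), E.max())         # the range of E in the long-time behaviour
```

Since f maps into (0, 1), the activities remain confined to the unit square, as the printed range confirms.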
The main body of Fig. 8.1 shows a two-parameter bifurcation diagram in the (p_E, p_I) parameter space, tracking both Hopf and saddle-node bifurcations (where the number of fixed points changes from one to three). To switch between an ISN and a network with sustained oscillations, one need only cross the locus of super-critical Hopf bifurcations. The simple structure of the Wilson–Cowan equations allows the two-parameter bifurcation diagram shown in Fig. 8.1 to be computed analytically [92]. For the choice f_a(x) = f(x) = 1/(1 + e^{−βx}), with β > 0, the point (E, I) is an equilibrium if there is a solution to the pair of equations
$$
p_E = f^{-1}(E) - W_{EE} E + W_{EI} I, \qquad p_I = f^{-1}(I) - W_{IE} E + W_{II} I,
\tag{8.6}
$$
where f^{-1}(z) = β^{-1} ln(z/(1 − z)). The Jacobian matrix, evaluated at (E, I), is therefore
$$
J = \begin{pmatrix}
(-1 + \beta W_{EE} E(1 - E))/\tau_E & -\beta W_{EI} E(1 - E)/\tau_E \\
\beta W_{IE} I(1 - I)/\tau_I & (-1 - \beta W_{II} I(1 - I))/\tau_I
\end{pmatrix}.
\tag{8.7}
$$
Thus, the conditions for a Hopf bifurcation are
$$
\operatorname{Tr} J = -(1/\tau_E + 1/\tau_I) + \beta W_{EE} E(1 - E)/\tau_E - \beta W_{II} I(1 - I)/\tau_I = 0,
\tag{8.8}
$$
for det J > 0. Eliminating I using I = I_±(E), where
$$
I_\pm(x) = \frac{1 \pm \sqrt{1 - 4\tau_I \left[(-1 + \beta W_{EE}\, x(1 - x))/\tau_E - 1/\tau_I\right]/(\beta W_{II})}}{2},
\tag{8.9}
$$
means that the fixed point equation can be parametrically plotted in the (p_E, p_I) plane. A similar procedure can be used to determine the locus of saddle-node bifurcations defined by det J = 0, as well as the Bogdanov–Takens bifurcation defined by det J = 0 and Tr J = 0 occurring simultaneously (when the saddle-node and Hopf curves intersect).

Fig. 8.1 Hopf (HB) and saddle-node (SN) bifurcation curves in the Wilson–Cowan network with a mixture of excitatory and inhibitory connections. Here, f_a(x) = 1/(1 + e^{−x}), τ_E = 3, τ_I = 8, W_EE = 10, W_EI = 12, W_II = 10, and W_IE = 10. The insets show the phase plane (E-nullcline in grey and I-nullcline in dashed grey) for a parameter set supporting an ISN with (p_E, p_I) = (0, −0.647) and a sustained oscillation with (p_E, p_I) = (0, −1). Black lines denote numerically determined trajectories.

The Wilson–Cowan model also supports a saddle node on an invariant circle (SNIC) bifurcation (when the saddle-node curve lies between the two Hopf curves), and can also support a saddle-separatrix loop and a double limit cycle. See Hoppensteadt & Izhikevich [433, Chapter 2] for a detailed discussion. It is also straightforward to set up the Wilson–Cowan model in a scenario with two stable fixed points separated by a saddle, as shown in Fig. 8.2. The stable manifold of the saddle point acts as a threshold, meaning that in the presence of stochastic forcing the system can exhibit bistable switching. This has been suggested by Ermentrout and Terman [283, Chapter 11] as a simple model for up–down state transitions observed in extracellular and intracellular recordings of neurons. Here, networks of neurons can sustain elevated membrane potentials (the up state) for around 4 s, followed by periods of inactivity (the down state). A similar model of up–down transitions based on a noisy bistable model has also been proposed by Holcman and Tsodyks [426] using recurrent excitation and synaptic depression (without inhibition). For a recent perspective on how randomness and bistability can underlie up–down state transitions in the brain, see [775].
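The parametric construction of the Hopf locus via (8.6)–(8.9) above can be implemented directly: sweep E, form I_±(E) from (8.9), keep points with det J > 0, and map to (p_E, p_I) via (8.6). A sketch using the parameter values from the caption of Fig. 8.1 (with β = 1); by construction Tr J vanishes along the computed locus, which the code checks:

```python
import numpy as np

beta = 1.0
tauE, tauI = 3.0, 8.0
WEE, WEI, WIE, WII = 10.0, 12.0, 10.0, 10.0

finv = lambda z: np.log(z / (1.0 - z)) / beta          # inverse of f(x) = 1/(1+exp(-beta*x))

def hopf_locus(branch=+1, n=2000):
    E = np.linspace(1e-3, 1.0 - 1e-3, n)
    disc = 1.0 - 4.0 * tauI * ((-1.0 + beta * WEE * E * (1.0 - E)) / tauE
                               - 1.0 / tauI) / (beta * WII)
    keep = disc >= 0.0
    E = E[keep]
    I = (1.0 + branch * np.sqrt(disc[keep])) / 2.0      # Eq. (8.9)
    keep = (I > 0.0) & (I < 1.0)
    E, I = E[keep], I[keep]
    # Jacobian entries, Eq. (8.7)
    a = (-1.0 + beta * WEE * E * (1.0 - E)) / tauE
    b = -beta * WEI * E * (1.0 - E) / tauE
    c = beta * WIE * I * (1.0 - I) / tauI
    d = (-1.0 - beta * WII * I * (1.0 - I)) / tauI
    tr_err = float(np.max(np.abs(a + d))) if len(E) else 0.0
    keep = a * d - b * c > 0.0                          # Hopf requires det J > 0
    pE = finv(E[keep]) - WEE * E[keep] + WEI * I[keep]  # Eq. (8.6)
    pI = finv(I[keep]) - WIE * E[keep] + WII * I[keep]
    return pE, pI, tr_err

pE, pI, tr_err = hopf_locus(+1)
print(len(pE), tr_err)
```

Plotting (pE, pI) for both branches (branch = ±1) reproduces the Hopf curves of the two-parameter diagram; the saddle-node locus follows by the same recipe with det J = 0 in place of Tr J = 0.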

Fig. 8.2 Bistability in the Wilson–Cowan model (E-nullcline in grey and I -nullcline in dashed grey). The dotted black line shows the stable manifold of the saddle point. With the addition of coloured noise, the system can switch back and forth between the up and down states as shown in the inset. Here f a (x) = 1/(1 + e−x ), τ E = 5, τ I = 3, W E E = 16, W E I = 10, W I I = 6, and W I E = 24, p E = −3.7, and p I = −6.7.

8.3.1 A piecewise linear Wilson–Cowan model

In the limit of high gain, it is natural to replace the sigmoidal firing rate function in the Wilson–Cowan model with a Heaviside function and write f_a(x) = Θ(x). In this case, the model becomes non-smooth and can be analysed with the Filippov technique described in Sec. 3.7. This has recently been done by Harris and Ermentrout [407], allowing for a study of periodic orbits. It is also possible to analyse the stability of these orbits by exploiting the machinery described in Chap. 3 and constructing the appropriate saltation matrices. To give meaning to Θ(0), the Heaviside function is treated as set-valued, so that it returns a whole interval of possible solutions with Θ(0) = [0, 1]. To simplify analysis, it is first convenient to introduce new variables (U, V) such that (U, V) = (W_EE E − W_EI I + p_E, W_IE E − W_II I + p_I). The transformed Wilson–Cowan model (with no refractoriness, i.e., r_a = 0) can be written in matrix form as
$$
\frac{\mathrm{d}}{\mathrm{d}t}
\begin{pmatrix} U \\ V \end{pmatrix}
= A \begin{pmatrix} U - p_E \\ V - p_I \end{pmatrix}
+ W J \begin{pmatrix} \Theta(U) \\ \Theta(V) \end{pmatrix}.
\tag{8.10}
$$
Here,
$$
W = \begin{pmatrix} W_{EE} & -W_{EI} \\ W_{IE} & -W_{II} \end{pmatrix}, \qquad
J = \begin{pmatrix} 1/\tau_E & 0 \\ 0 & 1/\tau_I \end{pmatrix}, \qquad
A = -W J W^{-1}.
\tag{8.11}
$$
By exploiting the fact that the right-hand side of (8.10) is piecewise linear, solutions may be obtained using matrix exponentials and written in the form
$$
\begin{pmatrix} U(t) \\ V(t) \end{pmatrix}
= e^{At} \begin{pmatrix} U(0) \\ V(0) \end{pmatrix}
+ (I_2 - e^{At}) \left[ \begin{pmatrix} p_E \\ p_I \end{pmatrix} - A^{-1} W J \begin{pmatrix} \Theta(U) \\ \Theta(V) \end{pmatrix} \right], \qquad t \geq 0,
\tag{8.12}
$$
where I₂ is the 2 × 2 identity matrix. In the representation (8.10), there are two switching manifolds, defined by U = 0 and V = 0. After introducing the indicator functions h₁(U, V) = U and h₂(U, V) = V, these manifolds (which are lines in this case) can be defined as
$$
\Sigma_i = \left\{ (U, V) \in \mathbb{R}^2 \mid h_i(U, V) = 0 \right\}.
\tag{8.13}
$$
These switching manifolds divide the plane into four sets that can be denoted by D₊₊ = {(U, V) | U > 0, V > 0}, D₊₋ = {(U, V) | U > 0, V < 0}, D₋₋ = {(U, V) | U < 0, V < 0}, and D₋₊ = {(U, V) | U < 0, V > 0}. It is possible for the nullclines of the piecewise linear Wilson–Cowan model to intersect and create a fixed point (U_ss, V_ss) = (p_E, p_I). Linear stability analysis shows that this is a stable node (with eigenvalues −1/τ_E and −1/τ_I). Moreover, this system also supports pseudo-equilibria, where either a nullcline intersects a switching manifold or two switching manifolds intersect. A thorough exploration of the pseudo-equilibria of (8.10) can be found in [407]. Here, we shall simply focus on the pseudo-equilibrium at (0, 0). A non-sliding periodic orbit around (0, 0) can be constructed in terms of the times-of-flight in each region D_αβ, α, β ∈ {+, −}. Denoting these four times by the symbols Δ_αβ, the period of the orbit is given by Δ = Δ₊₊ + Δ₋₊ + Δ₋₋ + Δ₊₋. Solutions may then be patched together using (8.12), setting the origin of time in each region such that initial data in one region is equal to final data from the trajectory in the neighbouring region. The periodic orbit is denoted by (u, v), such that (u(t), v(t)) = (u(t + Δ), v(t + Δ)). To indicate which region a given trajectory is in, αβ subscripts are added to the variables in (8.12).
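A minimal numerical sketch of this patching step: evolve (8.12) in a single region and locate the time-of-flight to a switching manifold by bisection. The parameter values are those of Fig. 8.3, but the initial condition and the region treated are arbitrary choices:

```python
import numpy as np

tauE, tauI = 1.0, 0.6
pE, pI = -0.05, -0.3
W = np.array([[1.0, -2.0], [1.0, -0.25]])   # [[WEE, -WEI], [WIE, -WII]]
J = np.diag([1.0 / tauE, 1.0 / tauI])
A = -W @ J @ np.linalg.inv(W)
p = np.array([pE, pI])

def expmA(t):
    """exp(A t) via eigendecomposition (A is generically diagonalisable)."""
    vals, vecs = np.linalg.eig(A)
    return (vecs @ np.diag(np.exp(vals * t)) @ np.linalg.inv(vecs)).real

def flow(X0, t, theta):
    """Solution (8.12) in a fixed region, with theta = (Theta(U), Theta(V))."""
    b = p - np.linalg.inv(A) @ (W @ J @ np.asarray(theta, float))
    E = expmA(t)
    return E @ X0 + (np.eye(2) - E) @ b

# start inside D++ (so theta = (1, 1)) and find the first crossing of U = 0
X0 = np.array([0.3, 0.2])
theta = (1.0, 1.0)
Ufun = lambda t: flow(X0, t, theta)[0]
ts = np.linspace(1e-6, 20.0, 4000)
U = np.array([Ufun(t) for t in ts])
i = int(np.where(U[:-1] * U[1:] < 0)[0][0])
lo, hi = ts[i], ts[i + 1]
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if Ufun(lo) * Ufun(mid) <= 0:
        hi = mid
    else:
        lo = mid
tof = 0.5 * (lo + hi)
print("time of flight to the switching manifold U = 0:", tof)
```

Repeating this step region by region, with the exit state of one region as the entry state of the next, is exactly the self-consistent construction of the periodic orbit described in the text.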
In this way, a periodic orbit that visits all four regions in turn can be parameterised by the five unknowns u₊₊(0) and Δ_αβ. These are determined self-consistently by the five equations u₊₊(Δ₊₊) = 0, v₋₊(Δ₋₊) = 0, u₋₋(Δ₋₋) = 0, v₊₋(Δ₊₋) = 0, and u₊₋(Δ₊₋) = u₊₊(0). To determine the stability of such an orbit, one may use the non-smooth Floquet theory described in Sec. 3.5. In essence, this treats the propagation of perturbations through a switching manifold using a saltation matrix.

Fig. 8.3 Phase plane for the transformed Wilson–Cowan network, showing a stable periodic orbit (solid black) and an unstable periodic sliding orbit (dotted grey). The parameters are τ_E = 1, τ_I = 0.6, p_E = −0.05, p_I = −0.3, W_EE = 1, W_EI = 2, W_IE = 1, and W_II = 0.25.

Given that the system has two switches defined by h_i = 0, the general rule (3.78) generates two saltation matrices given by

$$
K_1(T) = \begin{pmatrix}
\dot{u}(T^+)/\dot{u}(T^-) & 0 \\
(\dot{v}(T^+) - \dot{v}(T^-))/\dot{u}(T^-) & 1
\end{pmatrix}, \qquad
K_2(T) = \begin{pmatrix}
1 & (\dot{u}(T^+) - \dot{u}(T^-))/\dot{v}(T^-) \\
0 & \dot{v}(T^+)/\dot{v}(T^-)
\end{pmatrix}.
\tag{8.14}
$$
In this case, a perturbed trajectory may be evolved over one period according to (δU(Δ), δV(Δ)) = Γ(δU(0), δV(0)), where
$$
\Gamma = K_2(\Delta_4)\, e^{A \Delta_{+-}} K_1(\Delta_3)\, e^{A \Delta_{--}} K_2(\Delta_2)\, e^{A \Delta_{-+}} K_1(\Delta_1)\, e^{A \Delta_{++}},
\tag{8.15}
$$
and Δ₁ = Δ₊₊, Δ₂ = Δ₊₊ + Δ₋₊, Δ₃ = Δ₊₊ + Δ₋₊ + Δ₋₋, and Δ₄ = Δ. For a planar system, the non-zero Floquet exponent is given by σ = ln(det Γ)/Δ, yielding (and see Prob. 8.2)

σ=−

1 1 + τE τI

1 + ln Δ

u(Δ ˙ + ˙ (Δ+ ˙ + ˙ (Δ+ 1)v 2 ) u(Δ 3)v 4) − − − u(Δ ˙ 1 ) v˙ (Δ2 ) u(Δ ˙ 3 ) v˙ (Δ− 4)

.

(8.16)

A periodic orbit will be stable provided σ < 0. The pseudo-equilibrium at (0, 0) is unstable (stable) if it is enclosed by a stable (unstable) periodic orbit of arbitrarily small amplitude. There is a pseudo-Hopf bifurcation at (0, 0) when the pseudo-equilibrium changes stability, namely, when σ = 0. An example of a stable periodic orbit constructed with the above approach is shown in Fig. 8.3, together with an unstable sliding orbit.

In summary, taking the high-gain limit of a sigmoidal firing rate and replacing it by a Heaviside function can lead to highly tractable models for which substantial analytical results can be obtained. A case in point is the work of Laing and Chow [552] on understanding binocular rivalry. These authors considered a neural mass network model with recurrent excitation, cross-inhibition, adaptation, and synaptic depression, and showed that the use of a Heaviside nonlinearity allowed the explicit calculation of the dominance durations of perceptions. A more recent use of the Heaviside limit is by McCleney and Kilpatrick [614], who study neural activity models with spike rate adaptation to understand the dynamics of up–down states.

8.3.2 The Wilson–Cowan model with delays

The inclusion of fixed time delays in dynamical systems is a well-known mechanism for generating oscillations [598]. In the context of brain rhythms, the inclusion of delays within the Wilson–Cowan model has been used as a model of beta oscillations within the basal ganglia with applications to Parkinson’s disease [428, 697]. For an excellent overview of the mathematical techniques for studying delayed neural models, see [143]. Here, we consider a Wilson–Cowan model with two delays such that the dynamics evolves according to [193]:

$$
\tau_E \frac{dE}{dt} = -E + f\left( W_{EE} E(t-\tau_1) - W_{EI} I(t-\tau_2) + p_E \right), \qquad
\tau_I \frac{dI}{dt} = -I + f\left( W_{IE} E(t-\tau_2) - W_{II} I(t-\tau_1) + p_I \right),
\tag{8.17}
$$

with $f(x) = 1/(1 + e^{-\beta x})$. The fixed delays $\tau_1$ and $\tau_2$ distinguish between delayed self-interactions and delayed cross-interactions. Extending the analysis of Sec. 8.3 (no delays) to the case with delays, the linearised equations of motion have solutions of the form $(E, I) = (E_0, I_0) e^{\lambda t}$. Demanding that the amplitudes $(E_0, I_0)$ be non-trivial gives a condition on λ that may be written in the form $\mathcal{E}(\lambda) = 0$, where $\mathcal{E}(\lambda) = \det[J(\lambda) - \lambda I_2]$, with $I_2$ the 2 × 2 identity matrix,

$$
J(\lambda) = \begin{pmatrix}
\left(-1 + \beta W_{EE} E(1-E) e^{-\lambda \tau_1}\right)/\tau_E & -\beta W_{EI} E(1-E) e^{-\lambda \tau_2}/\tau_E \\
\beta W_{IE} I(1-I) e^{-\lambda \tau_2}/\tau_I & \left(-1 - \beta W_{II} I(1-I) e^{-\lambda \tau_1}\right)/\tau_I
\end{pmatrix},
\tag{8.18}
$$

and the equilibrium $(E, I)$ given by the simultaneous solution of (8.6). For $\lambda \in \mathbb{R}$, $\lambda = 0$ when

$$
(1 - \kappa_1)(1 - \kappa_2) - \kappa_3 = 0,
\tag{8.19}
$$


Fig. 8.4 A bifurcation diagram showing the stability of the equilibrium in the Wilson–Cowan model with two delays. The solid black lines denote the locus of Hopf bifurcations. Parameters are as in Fig. 8.1 with ( p E , p I ) = (0, 1). Note that the stability of the steady state is enhanced around the region where τ1 = τ2 . In the regimes where the steady state is unstable (grey regions), the system robustly supports stable limit cycle behaviour.

where $\kappa_1 = \beta W_{EE} E(1-E)$, $\kappa_2 = -\beta W_{II} I(1-I)$, and $\kappa_3 = -\beta^2 W_{EI} W_{IE} E I (1-E)(1-I)$. Thus a real instability of a fixed point is defined by (8.19) and is independent of $(\tau_1, \tau_2)$. Referring back to the analysis of Sec. 8.3, this is identical to the condition for a saddle-node bifurcation. By contrast, a dynamic instability will occur whenever $\lambda = \pm i\omega$ for $\omega \neq 0$, where $\omega \in \mathbb{R}$. The bifurcation condition in this case is defined by the simultaneous solution of the equations $\operatorname{Re} \mathcal{E}(i\omega) = 0$ and $\operatorname{Im} \mathcal{E}(i\omega) = 0$, namely,

$$
0 = (1 - \kappa_1 \cos(\omega \tau_1))(1 - \kappa_2 \cos(\omega \tau_1)) - (\omega \tau_E + \kappa_1 \sin(\omega \tau_1))(\omega \tau_I + \kappa_2 \sin(\omega \tau_1)) - \kappa_3 \cos(2 \omega \tau_2),
\tag{8.20}
$$
$$
0 = (1 - \kappa_1 \cos(\omega \tau_1))(\omega \tau_I + \kappa_2 \sin(\omega \tau_1)) + (\omega \tau_E + \kappa_1 \sin(\omega \tau_1))(1 - \kappa_2 \cos(\omega \tau_1)) + \kappa_3 \sin(2 \omega \tau_2).
\tag{8.21}
$$

For parameters that ensure $\omega \neq 0$, the simultaneous solution of equations (8.20) and (8.21) defines the condition for a Hopf bifurcation at $(\tau_1, \tau_2) = (\tau_1^c, \tau_2^c)$. More correctly, one should also ensure that as the delays pass through this critical point, the rate of change of $\operatorname{Re}(\lambda)$ is non-zero (transversality) and that no other eigenvalues have zero real part (non-degeneracy). Interestingly, models with two delays can lead to an interference effect: although either delay, if long enough, can bring about instability, there is a window of $(\tau_1, \tau_2)$ where solutions are stable to Hopf bifurcations [598]. An example of this effect, obtained by computing the locus of Hopf bifurcations according to the above prescription, is shown in Fig. 8.4.
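The delayed dynamics (8.17) can also be explored by direct simulation. The sketch below uses a forward-Euler scheme with a stored history buffer and a constant history for t < 0; the parameter values are illustrative assumptions (they are not those of Fig. 8.1), chosen only to demonstrate the method.

```python
import numpy as np

def f(x, beta=10.0):
    """Sigmoidal firing rate of the form used in (8.17)."""
    return 1.0 / (1.0 + np.exp(-beta * x))

def wilson_cowan_two_delays(tau1=2.0, tau2=1.0, T=200.0, dt=0.01,
                            tauE=1.0, tauI=1.0,
                            WEE=3.0, WEI=2.5, WIE=3.5, WII=0.5,
                            pE=-0.5, pI=-2.0):
    """Forward-Euler integration of the two-delay Wilson-Cowan model (8.17).

    Delayed terms are read back from the stored trajectory; a constant
    history E = I = 0.1 is assumed for t < 0 (an arbitrary choice).
    """
    n = int(T / dt)
    d1, d2 = int(tau1 / dt), int(tau2 / dt)
    E = np.full(n, 0.1)
    I = np.full(n, 0.1)
    for k in range(max(d1, d2), n - 1):
        E[k + 1] = E[k] + dt / tauE * (-E[k] + f(WEE * E[k - d1] - WEI * I[k - d2] + pE))
        I[k + 1] = I[k] + dt / tauI * (-I[k] + f(WIE * E[k - d2] - WII * I[k - d1] + pI))
    return E, I

E, I = wilson_cowan_two_delays()
```

Since f takes values in (0, 1) and each Euler step is a convex combination, the activities remain confined to the unit interval, mirroring the boundedness of the firing rate model.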

8.3.3 The Curtu–Ermentrout model

The original refractory terms in the Wilson–Cowan model come from more elaborate forms that acknowledge that refractoriness is a history-dependent process. For example, for the dynamics of the E variable, the original work of Wilson and Cowan considered the term

$$
1 - \int_{t-r_E}^{t} E(s) \, ds,
\tag{8.22}
$$

which, for small $r_E$, recovers the form in (8.5), namely, the term $(1 - r_E E(t))$. Curtu and Ermentrout [221] were the first to realise that the inclusion of this more general refractory process gives rise to a functional differential equation with novel oscillatory properties. To appreciate this, it is enough to consider the scalar model

$$
\frac{1}{\alpha} \frac{du}{dt} = -u + \left( 1 - \frac{1}{\tau_R} \int_{t-\tau_R}^{t} u(s) \, ds \right) f(u),
\tag{8.23}
$$

where $\tau_R$ is the absolute refractory period of the neurons in the population. The fixed point $u_{ss}$ satisfies the equation $-u_{ss} + (1 - u_{ss}) f(u_{ss}) = 0$. For a sigmoid with $0 < f < 1$, there is at least one solution of this equation for $u_{ss} \in (0, 1/2)$. The characteristic equation determining the linear stability of the steady state is calculated via $\mathcal{E}(\lambda) = 0$, with

$$
\mathcal{E}(\lambda) = \frac{\lambda}{\alpha} + A + f(u_{ss}) \, \frac{1 - e^{-\lambda \tau_R}}{\lambda \tau_R},
\tag{8.24}
$$

where $A = 1 - (1 - u_{ss}) f'(u_{ss})$. This transcendental equation allows for the possibility of complex roots and, not surprisingly, it is possible to choose values for the slope and threshold of the sigmoid such that a dynamic bifurcation defined by $\mathcal{E}(i\omega) = 0$ can occur [221]. Interestingly, periodic solutions that emerge beyond such a bifurcation can be analysed explicitly in a singular limit. To see this, it is useful to write (8.23) as a two-dimensional delay differential equation:

$$
\varepsilon \dot{u} = -u + (1 - z) f(u), \qquad \dot{z} = u(t) - u(t-1),
\tag{8.25}
$$

where time has been rescaled according to $t \to t/\tau_R$ and $\varepsilon = (\alpha \tau_R)^{-1}$. Formally setting $\varepsilon = 0$ gives the graph $z = 1 - u/f(u)$, an example of which is shown in Fig. 8.5. Note the ‘cubic’ shape of this curve, reminiscent of the nullcline for the fast variable seen in many excitable systems, and particularly those of FitzHugh–Nagumo type [305]. In the limit $\varepsilon \to 0$, the dynamics consists of slow evolution along the left and right branches of the ‘cubic’ with fast transitions from one branch to another. In this case, it can be shown that the period of oscillation satisfies $T < 2\tau_R$ (and numerical experiments show that the actual oscillation period scales linearly with $\tau_R$). Hence, the inclusion of refractoriness in activity-based models is a natural way to induce oscillations with an emergent period that is largely set by the refractory timescale.
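The relaxation oscillations of Fig. 8.5 can be reproduced directly from (8.25). The sketch below uses a forward-Euler scheme in which z is maintained as the running integral of u over the (rescaled) unit delay window; the constant initial history u = 0.1 is an arbitrary choice, while the sigmoid parameters follow the Fig. 8.5 caption.

```python
import numpy as np

def curtu_ermentrout(eps=0.1, beta=8.0, theta=1.0/3.0, T=30.0, dt=1e-3):
    """Euler scheme for the delay system (8.25):
    eps*u' = -u + (1 - z) f(u),  z' = u(t) - u(t - 1),
    with f(u) = 1/(1 + exp(-beta (u - theta))) as in Fig. 8.5."""
    f = lambda u: 1.0 / (1.0 + np.exp(-beta * (u - theta)))
    m = int(1.0 / dt)          # one rescaled delay unit
    n = int(T / dt) + m
    u = np.full(n, 0.1)        # constant history for t < 0
    z = 0.1                    # integral of the constant history
    zs = np.full(n, 0.1)
    for k in range(m, n - 1):
        u[k + 1] = u[k] + dt / eps * (-u[k] + (1.0 - z) * f(u[k]))
        z += dt * (u[k] - u[k - m])   # updates the moving integral
        zs[k + 1] = z
    return u[m:], zs[m:]

u, z = curtu_ermentrout()
```

With ε = 0.1 the trajectory rapidly jumps to the right branch of the ‘cubic’ and then alternates between the two slow branches, in line with the period T ∼ 2.24 quoted in the figure caption.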

Fig. 8.5 Relaxation oscillations in the Curtu–Ermentrout model of an activity-based single population with self-excitation and refractoriness, with parameters ε = 0.001, T ∼ 1.29; ε = 0.01, T ∼ 1.43; and ε = 0.1, T ∼ 2.24. Here f(u) = (1 + exp(−β(u − θ)))⁻¹ with β = 8 and θ = 1/3. The (solid black) cubic curve represents the u nullcline: z = 1 − u/f(u).

8.4 The Jansen–Rit model

The Jansen–Rit model [467] describes the activity of cortex using three neuronal sub-populations, and has been used to model both normal and epileptic patterns of cortical activity [877]. It is constructed as a network of pyramidal neurons (P) that receives inputs from inhibitory (I) and excitatory (E) populations of interneurons, with corresponding activities $s_P$, $s_I$, and $s_E$. These in turn are driven by inputs from the pyramidal cells. The equations for the network dynamics are a realisation of the structure suggested by (8.3), with the choice

$$
Q_E s_P = f(s_E - s_I), \qquad Q_E s_E = C_2 f(C_1 s_P) + A, \qquad Q_I s_I = C_4 f(C_3 s_P),
\tag{8.26}
$$

where the firing rate is given by $f(u) = f_0/(1 + \exp(-r(u - u_0)))$, $Q_a = (1 + \beta_a^{-1} \, d/dt)^2 \beta_a / A_a$ for $a \in \{E, I\}$, and $\beta_a$, $A_a$, and $C_1, \ldots, C_4$ are constants. Here, A is an external input. When this is constant, one obtains the bifurcation diagram shown in Fig. 8.6. Oscillations emerge via Hopf bifurcations and it is possible for a pair of stable periodic orbits to coexist. One of these has a frequency in the alpha band and the other is characterised by a lower frequency and higher amplitude.

Fig. 8.6 Top-left: Wiring diagram for the Jansen–Rit model, representing the interaction of three neuronal populations: pyramidal (P), excitatory (E), and inhibitory (I). Top-right: Bifurcation diagram for the Jansen–Rit model with $f_0 = 5$ s⁻¹, $u_0 = 6$ mV, $r = 0.56$ mV⁻¹, $\beta_E = 100$ s⁻¹, $\beta_I = 50$ s⁻¹, $A_E = 3.25$ mV, $A_I = 22$ mV, $C_1 = 135$, $C_2 = 0.8 C_1$, $C_3 = 0.25 C_1 = C_4$. Solid black (dotted grey) lines represent stable (unstable) fixed points. Thick black (grey) lines denote the extrema of stable (unstable) periodic orbits that emerge via Hopf bifurcations. Note that a SNIC bifurcation occurs at $A \simeq 110$ Hz. Bottom: Time series of two co-existing stable periodic orbits at $A = 125$ Hz.

As well as applications to epilepsy modelling, networks of interacting Jansen–Rit nodes have recently been used to understand cross-frequency coupling between brain areas [469].
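To simulate (8.26) numerically, each synaptic operator can be inverted as a second-order ODE, since $Q_a s = u$ is equivalent to $\ddot{s} = A_a \beta_a u - 2 \beta_a \dot{s} - \beta_a^2 s$. The sketch below applies a forward-Euler scheme with the Fig. 8.6 caption parameters; the time step, initial conditions, and the treatment of A as a constant drive of 125 are illustrative assumptions.

```python
import numpy as np

def jansen_rit(A_drive=125.0, T=3.0, dt=1e-4):
    """Euler integration of the Jansen-Rit model (8.26), with each operator
    Q_a = (1 + beta_a^{-1} d/dt)^2 beta_a / A_a inverted as
    s'' = A_a * beta_a * u - 2 * beta_a * s' - beta_a**2 * s."""
    f0, u0, r = 5.0, 6.0, 0.56
    bE, bI = 100.0, 50.0            # beta_E, beta_I (1/s)
    AE, AI = 3.25, 22.0             # A_E, A_I (mV)
    C1 = 135.0
    C2, C3, C4 = 0.8 * C1, 0.25 * C1, 0.25 * C1
    f = lambda u: f0 / (1.0 + np.exp(-r * (u - u0)))
    n = int(T / dt)
    sP = sE = sI = 0.0
    dP = dE = dI = 0.0
    out = np.empty(n)
    for k in range(n):
        ddP = AE * bE * f(sE - sI) - 2.0 * bE * dP - bE**2 * sP
        ddE = AE * bE * (C2 * f(C1 * sP) + A_drive) - 2.0 * bE * dE - bE**2 * sE
        ddI = AI * bI * C4 * f(C3 * sP) - 2.0 * bI * dI - bI**2 * sI
        sP += dt * dP
        dP += dt * ddP
        sE += dt * dE
        dE += dt * ddE
        sI += dt * dI
        dI += dt * ddI
        out[k] = sE - sI            # the EEG-like output s_E - s_I
    return out

x = jansen_rit()
```

Because the firing rate f is bounded by $f_0$ and each Q-filter is critically damped with finite DC gain, the output remains bounded for any drive, which makes the explicit scheme well behaved at this step size.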


8.5 The Liley model

One of the more successful population models for generating rhythms consistent with those found in the human EEG power spectrum is that of Liley et al. [567, 568]. In this mesoscopic model, cortical activity is locally described by the mean soma membrane potentials of interacting excitatory and inhibitory populations. The interaction is through a model of the synapse that treats both shunting currents and a realistic time course for post-synaptic conductance changes. The model can be written as

$$
\tau_E \dot{E} = E_R - E + W_{EE}(A_{EE} - E) + W_{EI}(A_{EI} - E), \qquad
\tau_I \dot{I} = I_R - I + W_{II}(A_{II} - I) + W_{IE}(A_{IE} - I),
\tag{8.27}
$$

and see Fig. 8.7 for a schematic. Here E (I) is the mean membrane potential in the excitatory (inhibitory) population, with leak reversal potential $E_R$ ($I_R$). The relaxation time constants for the populations are given by $\tau_{E,I}$, whilst $A_{ab}$ describes a reversal potential such that $A_{aE} > 0$ and $A_{aI} < 0$ with respect to the resting state. The weights, $W_{ab}$, are the product of a static strength factor and a dynamic conductance, $W_{ab} = \overline{W}_{ab} g_{ab}$, where

$$
Q_{ab} g_{ab} = f_b(b) + P_{ab}, \qquad Q_{ab} = \left( 1 + \frac{1}{\alpha_{ab}} \frac{d}{dt} \right)^2, \qquad a, b \in \{E, I\},
\tag{8.28}
$$

and the conductances are considered to be driven by a combination of firing from the populations to which they are connected and some external drive. The former is modelled using a sigmoidal firing rate function $f_a$ of the form (8.4) and the latter, $P_{ab}$, is considered constant. The Liley model is known to support a rich repertoire of dynamical states and has been used in particular to model the human EEG alpha rhythm. As with the Jansen–Rit model, it can support the co-existence of stable large and small amplitude oscillations. Moreover, it can support a novel Shilnikov saddle-node route to chaos [896]. Indeed, compared to the Wilson–Cowan or Jansen–Rit model, the Liley model can support a more exotic bifurcation structure altogether [223], and see Fig. 8.7 for an example.

Fig. 8.7 Left: Wiring diagram for the Liley model, representing the interaction of two neuronal populations, one excitatory (E) and the other inhibitory (I). Right: Bifurcation diagram for the Liley population model, with $f_a(z) = [1 + e^{-\beta_a (z - \theta_a)}]^{-1}$, showing the absolute maximum of E in terms of $P_{EE}$ for steady-state (solid black for stable and dashed grey for unstable), small amplitude periodic (dotted line) and large amplitude periodic (filled circles) orbits. Chaotic solutions are found after the period-doubling cascade. The parameters are $P_{IE} = 0.005763$ ms⁻¹, $\alpha_{EE} = \alpha_{IE} = 1.01$ ms⁻¹, $\alpha_{II} = \alpha_{EI} = 0.142$ ms⁻¹, $W_{EE} = W_{IE} = 43.31$, $W_{II} = W_{EI} = 925.80$, $\beta_E = 0.3$, $\beta_I = 0.27$, $\theta_E = 21.0$ mV, $\theta_I = 29.0$ mV, $h_E = 115.0$ mV, and $h_I = -20.0$ mV.

8.6 The Phenomenor model

A neural mass model for describing the dynamics of seizures, refractory status epilepticus (RSE, seizures that last more than an hour without returning to baseline), and depolarisation block (DB, where the neuronal membrane is depolarised, but neurons stop firing), termed the Epileptor model, has been introduced by Jirsa et al. [266, 476]. This relatively recent phenomenological model comprises two two-dimensional subsystems, one slow variable, and a linear filter, and hence is a six-dimensional model. Whilst the high dimensionality of the Epileptor model makes it more suited

to a numerical rather than analytical bifurcation analysis, a recent paper has developed a simpler phenomenological neural mass model, named the Phenomenor model, to describe the repeated transitions between seizure and interictal states. In the Phenomenor model [162], the transition to seizures is also modelled as an interaction between slow and fast variables, where the slow variable represents changes in population excitability a and the fast variable represents the population firing rate v. In general, the seizure onset and offset mechanisms in the Phenomenor model can be mapped to those in the Epileptor model, meaning that the former captures many of the essential features of the latter. In both models, the seizure onset is marked by a saddle-node bifurcation. The Phenomenor model is given by a pair of coupled ODEs:

$$
\frac{dv}{dt} = a - v^2 - v^3, \qquad \tau_a \frac{da}{dt} = \tanh\left[ c(h(a) - v) - 1/2 \right],
\tag{8.29}
$$

where $h(a) = -0.86 + 1.6a$ is a state-dependent threshold. For values of the population firing rate above the threshold (that is, during seizures), the excitability decreases, whilst for values below the threshold (that is, in the interictal state), the excitability increases. As long as the threshold h lies between the branches corresponding to the


low firing and high firing states, the equations give rise to a slow periodic switching between the two states. For a recent study of how perturbations may promote or delay seizure emergence in the Phenomenor model (utilising isochronal coordinates), see [703].
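The slow periodic switching between the two states can be reproduced with a direct Euler integration of (8.29). The sketch below uses the parameter values quoted in the Fig. 8.8 caption ($\tau_a = 1000$ s, c = 1000); the initial conditions and integration horizon are illustrative choices.

```python
import numpy as np

def phenomenor(tau_a=1000.0, c=1000.0, T=1500.0, dt=0.01, v0=-0.9, a0=0.05):
    """Forward-Euler integration of the Phenomenor model (8.29):
    dv/dt = a - v^2 - v^3,  tau_a da/dt = tanh[c(h(a) - v) - 1/2],
    with the state-dependent threshold h(a) = -0.86 + 1.6 a."""
    n = int(T / dt)
    v = np.empty(n)
    a = np.empty(n)
    v[0], a[0] = v0, a0
    for k in range(n - 1):
        h = -0.86 + 1.6 * a[k]
        v[k + 1] = v[k] + dt * (a[k] - v[k]**2 - v[k]**3)
        a[k + 1] = a[k] + dt / tau_a * np.tanh(c * (h - v[k]) - 0.5)
    return v, a

v, a = phenomenor()
```

Starting on the low firing branch, excitability a slowly ramps up until the low state is lost in a saddle-node bifurcation (seizure onset, v jumping to the high branch), after which a declines until the high branch disappears (seizure offset), giving the slow alternation described above.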

Fig. 8.8 The effect of interictal perturbations on the transition to seizure in the Phenomenor model. Perturbations representing interictal epileptiform discharges are modelled as periodic Dirac delta kicks in v of size 0.3 every 50 seconds. The plot shows a trajectory (black) in the (v, a)-plane, together with the nullclines for the model. The middle branch of the v-nullcline (solid grey) is an unstable (repelling) manifold. The arrow indicates the premature initiation of a seizure. The parameters are τa = 1000 s and c = 1000. Also plotted is the a-nullcline as a dashed grey line.

To examine the effect of interictal epileptiform discharges (IEDs) on the transition to seizure, these may be modelled as periodic perturbations with a brief transient increase in the population firing rate. If these arrive far from the unstable manifold (middle branch of cubic v-nullcline), and see Fig. 8.8, this will lead to a transient increase in firing followed by a shift in the system’s dynamics back towards the more stable (less excitable) branch. This has an anti-seizure property and prolongs the interictal period. By contrast, if an unperturbed trajectory is close to the unstable manifold then a tipping point is reached where even a small perturbation can shift the system over the unstable manifold and prematurely initiate a seizure. Thus, the system response to an IED depends on the dynamical state of the population at the moment of the discharge occurrence.


8.7 The Tabak–Rinzel model

Bursts of episodic neuronal activity, characterised by episodes of discharge when many cells are firing, separated by silent phases during which few cells fire, are a hallmark of neuronal development in the retina, hippocampus, and spinal cord [675] and appear to be important for determining the structure of neuronal networks [563]. In the chick spinal cord, this manifests as spontaneous episodes of rhythmic discharge (duration 5–90 s; cycle rate 0.1–2 Hz) that recur every 2–30 min. This is known not to depend on any specialised connectivity or intrinsic bursting neurons and is generated by a network of functionally excitatory connections. Tabak et al. [841] have developed a neural mass model of this process, describing only the average firing rate of the population in a purely excitatory recurrent network that is susceptible to both short- and long-term activity-dependent depression. The model is described by three coupled ODEs:

$$
\tau_a \dot{a} = a_\infty(n s d a) - a, \qquad \tau_d \dot{d} = d_\infty(a) - d, \qquad \tau_s \dot{s} = s_\infty(a) - s,
\tag{8.30}
$$

where

$$
a_\infty(x) = \frac{1}{1 + e^{-(x - \theta_a)/k_a}}, \qquad d_\infty(x) = \frac{1}{1 + e^{(x - \theta_d)/k_d}}, \qquad s_\infty(x) = \frac{1}{1 + e^{(x - \theta_s)/k_s}}.
\tag{8.31}
$$

Here, a is a population activity or mean firing rate, and d is a fast depression variable, representing the fraction of synapses not affected by the fast synaptic depression. These two variables generate the cycles within an episode. The slow modulation that underlies onset and termination of episodes, and the long silent phases, is described by s, which represents the fraction of synapses not affected by the slow synaptic depression. This variable decreases slowly during an episode, causes the termination of said episode, and subsequently recovers between episodes. The fraction of non-depressed synapses at any given time is given by the product sd. Note that $a_\infty$ is an increasing sigmoidal function whilst $d_\infty$ and $s_\infty$ are decreasing sigmoidal functions (with thresholds $\theta_{a,d,s}$ and steepnesses controlled by $k_{a,d,s}$). The parameter n measures the strength of recurrent excitation. To match experimental data from chick embryos, the depression timescale $\tau_d$ is of the order of 200–400 ms, with $\tau_a$ chosen to be about half that so as to generate oscillations. Given the separation of timescales between (a, d) and s, it is natural to analyse the model with a fast-slow analysis (see Box 3.1 on page 67), treating s as a parameter of the fast (a, d) subsystem. The associated bifurcation diagram is shown in Fig. 8.9. For intermediate values of s, the system is bistable between a steady state and a periodic orbit. Thus, to evoke an episode with an external stimulus, the stimulus must be strong enough to bring activity above the middle branch of unstable steady states (the separatrix between the upper and lower branches). Otherwise, activity returns immediately to the low steady state. Because the vertical distance between the unstable steady state and the low-activity state is highest just after an episode and progressively declines, the shorter the time after an episode, the harder it is to successfully evoke a new episode, in agreement with experiments. Also shown in Fig. 8.9 is a bursting trajectory from the full


Fig. 8.9 Bursting dynamics in the Tabak–Rinzel model. The diagram shows a trajectory in the (a, s) plane, superimposed on the bifurcation diagram for the (a, d) model with s as a parameter, together with the s-nullcline. The dashed grey line is where $\dot{s} = 0$ (given by $a = s_\infty^{-1}(s)$). The inset shows the time course of a and a distinct bursting pattern, where each burst contains five spikes. The steady state of the (a, d) model is shown in solid black (stable) and dotted black (unstable), and the extrema of the stable periodic orbits emerging from a Hopf bifurcation are shown with solid black filled circles. The parameters are n = 1, τa = 0.1 s, τd = 0.2 s, τs = 50 s, θa = 0.18, ka = 0.05, θd = 0.5, kd = 0.2, θs = 0.14, and ks = 0.02.

(a, d, s) dynamical model. Below the line where $\dot{s} = 0$, s increases towards a saddle node of steady states, after which the trajectory begins to track the stable periodic orbit of the fast system (the burst onset). However, this trajectory is above the line where $\dot{s} = 0$ and so s decreases until the periodic orbit terminates at a homoclinic bifurcation (the burst offset). There is increasing agreement between theory and simulations as $\tau_s \to \infty$. A more refined mean field model, motivated by the study of a network of heterogeneous spiking (integrate-and-fire) neurons with slow synaptic depression, is presented in [911], and also shows that dynamical bistability is key for modelling episodic rhythmogenesis.
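The episodic rhythm of Fig. 8.9 can be sketched numerically from (8.30)-(8.31). The forward-Euler scheme below uses the parameters from the figure caption; the initial conditions are arbitrary, and the sign in $s_\infty$ is taken so that it is a decreasing sigmoid, as stated above.

```python
import numpy as np

def tabak_rinzel(T=200.0, dt=1e-3, n_exc=1.0):
    """Euler integration of the Tabak-Rinzel model (8.30)-(8.31) with the
    Fig. 8.9 parameter values; n_exc is the recurrent excitation strength n."""
    th_a, k_a = 0.18, 0.05
    th_d, k_d = 0.5, 0.2
    th_s, k_s = 0.14, 0.02
    tau_a, tau_d, tau_s = 0.1, 0.2, 50.0
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    a_inf = lambda x: sig((x - th_a) / k_a)     # increasing
    d_inf = lambda x: sig(-(x - th_d) / k_d)    # decreasing (fast depression)
    s_inf = lambda x: sig(-(x - th_s) / k_s)    # decreasing (slow depression)
    n = int(T / dt)
    a = np.empty(n)
    d = np.empty(n)
    s = np.empty(n)
    a[0], d[0], s[0] = 0.1, 0.9, 0.9
    for k in range(n - 1):
        a[k + 1] = a[k] + dt / tau_a * (a_inf(n_exc * s[k] * d[k] * a[k]) - a[k])
        d[k + 1] = d[k] + dt / tau_d * (d_inf(a[k]) - d[k])
        s[k + 1] = s[k] + dt / tau_s * (s_inf(a[k]) - s[k])
    return a, d, s

a, d, s = tabak_rinzel()
```

Each variable relaxes to a sigmoidal target in (0, 1), so all three states remain in the unit interval; the fast (a, d) pair produces the within-episode cycles while the slow decline and recovery of s switches episodes on and off.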

8.8 A spike density model

One successful method for reducing networks of spiking neurons to a lower dimensional population firing rate description is the spike density method. This has mainly been developed for integrate-and-fire neurons with global coupling and pulsatile interactions with a stochastic rate of arrival. An excellent exposition of this approach


can be found in [336], and see also the nice review paper by Deco et al. [232]. The method is now widely in use [10, 122, 140, 141, 519, 595, 672, 681]. Other reduction methods include those based on a refractory density approach by Chizhov et al. [168] for Hodgkin–Huxley networks, by Schwalger and colleagues (building on work of Gerstner) [714, 787, 791] for many point single neuron models used throughout the computational neuroscience community, and by Visser and van Gils [909], who developed a firing rate description for bursting Izhikevich neuronal populations. Here we describe the spike density approach for a simple leaky integrate-and-fire (LIF) model with only one voltage state variable and excitatory pulsatile inputs. First, consider a single LIF neuron receiving a train of spiking inputs with dynamics:

$$
\frac{dv}{dt} = -\frac{v}{\tau_m} + I + \sigma \sum_{m \in \mathbb{Z}} \delta(t - T^m),
\tag{8.32}
$$

where $v(t)$ represents the voltage at time t and I is a constant drive. Here the effect of a spike arriving at time $T^m$ is to cause a jump in the voltage variable v of size σ > 0. If this jump ever causes the voltage to increase above $v_{th}$, then the voltage is instantaneously reset to $v_r$ and the neuron is considered to emit a spike of its own. For simplicity, we set $v_{th} = 1$ and $v_r = 0$. Note that for $I \tau_m > 1$, the voltage can increase through threshold independently of jumps from incoming spikes. Now consider a large network of identical neurons described by a population density $\rho(v, t)$, such that

$$
\int_{v_a}^{v_b} \rho(v, t) \, dv = \text{proportion of neurons with potential } v \in [v_a, v_b) \text{ at time } t.
\tag{8.33}
$$

The equation for the density obeys a conservation law that respects three fluxes: i) a drift term describing the voltage evolution of the LIF neuron, ii) a flux due to the population jumping when receiving excitatory pulses, and iii) a flux due to voltage reset. For the case that the flux of spikes impinging on the population is deterministic with rate $r(t)$, then $\partial \rho / \partial t + D = 0$, where

$$
D = \underbrace{\frac{\partial}{\partial v} \left[ \left( -\frac{v}{\tau_m} + I \right) \rho(v, t) \right]}_{\text{drift}}
+ \underbrace{r(t) \left[ \rho(v, t) - \rho(v - \sigma, t) \right]}_{\text{excitatory input}}
- \underbrace{\delta(v) \, r(t) \int_{1-\sigma}^{1} \rho(v, t) \, dv}_{\text{reset}}.
\tag{8.34}
$$

At threshold, the absorbing boundary condition $\rho(1, t) = 0$ is imposed. This ensures the conservation property

$$
\frac{d}{dt} \int_{-\infty}^{1} \rho(v, t) \, dv = 0,
\tag{8.35}
$$


which is easily established using the result that the excitatory input can be written as a divergence:

$$
r(t) \left[ \rho(v, t) - \rho(v - \sigma, t) \right] = r(t) \frac{\partial}{\partial v} \int_{v-\sigma}^{v} \rho(v', t) \, dv',
\tag{8.36}
$$

and assuming that ρ decays to zero as $v \to -\infty$. Hence, if the initial condition satisfies $\int_{-\infty}^{1} \rho(v, 0) \, dv = 1$, then the solution of the nonlinear equation $\partial \rho / \partial t + D = 0$ satisfies the normalisation condition $\int_{-\infty}^{1} \rho(v, t) \, dv = 1$. The arrival rate per neuron $r(t)$ can be considered to be the result of a background drive $r_0(t)$ plus excitatory input from other neurons in the population. Let $R(t)$ be the firing rate of the population, that is, the flux through threshold, and κ the average number of presynaptic connections per neuron. In this case, we write $r(t) = r_0(t) + \kappa R(t)$, where the population firing rate is given by

$$
R(t) = r(t) \int_{1-\sigma}^{1} \rho(v, t) \, dv.
\tag{8.37}
$$

This completes the model description and gives a closed system of equations with appropriate boundary conditions. Although the spike density approach relies on numerical machinery to evolve the partial differential equation (PDE) describing the evolution of the population density, this can be done quite efficiently [487]. If the jump amplitude, σ, is sufficiently small, then one can formally Taylor expand (8.34), corresponding to the so-called Kramers–Moyal expansion [331], and obtain the Fokker–Planck equation

$$
\frac{\partial}{\partial t} \rho(v, t) = -\frac{\partial}{\partial v} \left[ \left( -\frac{v}{\tau_m} + I + r(t) \sigma \right) \rho(v, t) \right] + \frac{r(t) \sigma^2}{2} \frac{\partial^2}{\partial v^2} \rho(v, t), \qquad v \neq 0.
\tag{8.38}
$$

This in turn gives the time evolution of the probability density for a system evolving according to the equivalent stochastic differential Langevin equation:

$$
dv = -\frac{v}{\tau_m} \, dt + \mu(t) \, dt + s(t) \, dW(t).
\tag{8.39}
$$

Here $\mu(t) = I + r(t)\sigma$ is the mean synaptic input, $s^2(t) = r(t)\sigma^2$ determines the size of the voltage fluctuations, and $W(t)$ is a standard Wiener process as discussed in Appendix A. Note that if $r(t) = r$ is a constant, then the Langevin equation (8.39) reduces to an Ornstein–Uhlenbeck process. In this case, the steady-state Fokker–Planck equation implies that the flux

$$
J(v) = \left( -\frac{v}{\tau_m} + \mu_0 \right) \rho_0(v) - \frac{s_0^2}{2} \frac{\partial}{\partial v} \rho_0(v),
\tag{8.40}
$$

where $\rho_0(v)$ is the steady-state distribution, $\mu_0 = I + r\sigma$, and $s_0^2 = r\sigma^2$, is constant except at v = 0. Here it jumps by a constant amount $R_0$ (equal to the steady-state population firing rate), and one may take J = 0 for v < 0. The solution for $\rho_0(v)$ is given as [122]


$$
\rho_0(v) =
\begin{cases}
c_1 \exp\left( -\dfrac{(v - \mu_0 \tau_m)^2}{s_0^2 \tau_m} \right), & v \leq 0, \\[2ex]
\dfrac{c_2}{s_0^2} \exp\left( -\dfrac{(v - \mu_0 \tau_m)^2}{s_0^2 \tau_m} \right) \displaystyle\int_v^1 \exp\left( \dfrac{(v' - \mu_0 \tau_m)^2}{s_0^2 \tau_m} \right) dv', & 0 < v \leq 1,
\end{cases}
\tag{8.41}
$$

which can be verified by direct substitution. The coefficients $c_{1,2}$ are determined by demanding continuity of $\rho_0(v)$ at v = 0 and enforcing the normalisation $\int_{-\infty}^{1} \rho_0(v) \, dv = 1$. Using the fact that $R_0 = J(1) = (-1/\tau_m + \mu_0) \rho_0(1) - s_0^2 \rho_0'(1)/2 = c_2/2$, it can also be shown that

$$
R_0 = \left[ \tau_m \sqrt{\pi} \int_{-\sqrt{\tau_m}\, \mu_0 / s_0}^{(1 - \mu_0 \tau_m)/(\sqrt{\tau_m}\, s_0)} e^{v^2} \left( 1 + \operatorname{erf}(v) \right) dv \right]^{-1}.
\tag{8.42}
$$

A plot of the rate R0 as a function of μ0 is shown in Fig. 8.10, where it can be seen that the population response is a smoothed version of the single neuron response. For a recent mathematical treatment of the existence, uniqueness, and positivity of solutions to various spike density models, see [263].
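The rate formula (8.42) can be checked in two independent ways: by direct quadrature, and by Euler–Maruyama simulation of the Langevin equation (8.39) with threshold at 1 and reset to 0. Both routines below are sketches; the trapezoidal rule, the step sizes, and the simulation horizon are illustrative choices (note that the $e^{v^2}$ factor makes naive quadrature unreliable for very small $s_0$).

```python
import math
import numpy as np

def lif_rate_theory(mu0, s0, tau_m=1.0, n=4000):
    """R0 from equation (8.42), evaluated with a simple trapezoidal rule."""
    g = lambda x: math.exp(x * x) * (1.0 + math.erf(x))
    lo = -math.sqrt(tau_m) * mu0 / s0
    hi = (1.0 - mu0 * tau_m) / (math.sqrt(tau_m) * s0)
    h = (hi - lo) / n
    acc = 0.5 * (g(lo) + g(hi)) + sum(g(lo + i * h) for i in range(1, n))
    return 1.0 / (tau_m * math.sqrt(math.pi) * h * acc)

def lif_rate_sim(mu0, s0, tau_m=1.0, T=500.0, dt=1e-3, seed=1):
    """Euler-Maruyama simulation of the Langevin equation (8.39) with
    v_th = 1 and v_r = 0; returns the empirical firing rate."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, 0
    noise = rng.standard_normal(int(T / dt))
    for z in noise:
        v += dt * (-v / tau_m + mu0) + s0 * math.sqrt(dt) * z
        if v >= 1.0:            # threshold crossing: reset and count a spike
            v, spikes = 0.0, spikes + 1
    return spikes / T
```

As in Fig. 8.10, the quadrature result increases monotonically with $\mu_0$ and smooths out the hard single-neuron threshold, and the stochastic simulation should agree with it up to discretisation and sampling error.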

Fig. 8.10 A plot of the population firing rate $R_0$ given by equation (8.42) for a population of LIF neurons with different noise levels and $\tau_m = 1$. The dotted curve shows the response with $s_0 = 1$ and the dashed curve shows the response with $s_0 = 0.5$. The solid curve shows the firing rate response of an isolated neuron receiving only a constant drive $\mu_0 = I$.


8.9 A next-generation neural mass model

Despite the usefulness of neural mass models for describing certain large-scale brain rhythms, they do have some limitations. These mainly arise from the phenomenological nature of the models, and the fact that they are not derived from more realistic conductance-based spiking models. One particular failing is their inability to track any notion of within-population synchrony. Rather, it is assumed that neural mass models provide descriptions of spiking populations operating in a near-synchronous regime so as to generate a macroscopic signal that can be detected using electro- and magneto-encephalography (EEG and MEG). However, this assumption means that single population neural mass models are unable to model the important phenomenon of event-related synchronisation/desynchronisation (ERS/ERD), which are believed to underlie the changes in power seen in brain spectrograms [712]. The self-organisation of large networks of coupled oscillators into macroscopic coherent states, such as observed in phase locking, has inspired the search for equivalent low-dimensional dynamical descriptions. However, the mathematical step from microscopic to macroscopic dynamics has proved elusive in all but a few special cases. The Kuramoto model is one such special case for which solutions can be found on a reduced invariant manifold using the Ott–Antonsen (OA) reduction [689] as described in Box 8.1 on page 358. Here we describe a recent approach for the reduction of θ-neuron networks (or equivalently QIF networks) that utilises this framework and is exact for infinitely large networks. This approach yields a neural mass equation of the form (8.3), where the population firing rate is itself a function of the within-population synchrony. The latter is given by an explicit evolution equation that is nonlinearly coupled to the dynamics of the synaptic conductance.
The θ-neuron model, or Ermentrout–Kopell canonical model, is now widely known throughout computational neuroscience as a parsimonious model for capturing the firing and response properties of a cortical cell [279] (and see Sec. 3.4.2). It is described by a one-dimensional dynamical system evolving on a circle according to equation (3.49). A network of θ-neurons can be described with the introduction of an index $i = 1, \ldots, N$ and the replacement $I \to \mu_i + I_i$, where $I_i$ describes the synaptic input current to neuron i and $\mu_i$ is a constant drive. For a globally coupled network, the synaptic current can be written in the form $I_i = g(t)(v_{\text{syn}} - v_i)$ for some global conductance g and local voltage $v_i$. As a model for the conductance, consider the form (and cf. equation (8.3))

$$
Q g(t) = \frac{k}{N} \sum_{j=1}^{N} \sum_{m \in \mathbb{Z}} \delta(t - T_j^m),
\tag{8.43}
$$

where $T_j^m$ is the mth firing time of the jth neuron. These firing events are defined to happen every time $\theta_j$ increases through π. It is well known that the θ-neuron model is formally equivalent to a quadratic integrate-and-fire model for voltage dynamics [559] under the transformation $v_i = \tan(\theta_i/2)$ (so that $\cos \theta_i = (1 - v_i^2)/(1 + v_i^2)$ and $\sin \theta_i = 2 v_i/(1 + v_i^2)$). This voltage relationship allows one to write the network dynamics as

$$
\frac{d\theta_i}{dt} = (1 - \cos \theta_i) + (1 + \cos \theta_i)(\mu_i + g v_{\text{syn}}) - g \sin \theta_i,
\tag{8.44}
$$
$$
Q g = \frac{2k}{N} \sum_{j=1}^{N} P(\theta_j).
\tag{8.45}
$$

Here $P(\theta) = \delta(\theta - \pi)$ and is periodically extended so that $P(\theta) = P(\theta + 2\pi)$, and the result that $\delta(t - T_j^m) = \delta(\theta_j(t) - \pi) |\dot{\theta}_j(T_j^m)|$ has been used (and see Box 2.9 on page 51). Upon setting $v_{\text{syn}} \gg v_i$, Q = 1, and $P(\theta) = (1 - \cos(\theta))^n$ for some positive integer n in (8.45), the model of Luke et al. [594] is recovered. In this work, the authors show how to obtain an exact mean field reduction by making use of the OA ansatz, originally used to study the Kuramoto model [539]. The same style of reduction has also been used by Pazó and Montbrió to study pulse-coupled Winfree networks [699]. The OA ansatz essentially assumes that the distribution of phases as N → ∞ has a simple unimodal shape, capable of describing synchronous (peaked) and asynchronous (flat) distributions, as shown in Fig. 8.11.

Fig. 8.11 The OA ansatz gives the shape of the phase distribution ρ (θ ) = (2π )−1 (1 − |Z |2 )/|ei θ − Z |2 as shown. For |Z | ∼ 1 the system becomes synchronous, with a single preferred phase (so that the phase distribution is tightly peaked). For |Z | ∼ 0 the system becomes asynchronous, producing a flat phase distribution.
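The density in Fig. 8.11 is a wrapped Cauchy distribution, so its total mass is one and its first circular moment recovers the order parameter Z. Both facts can be verified numerically with a rectangle rule on a uniform periodic grid (the grid size and the choice of Z below are arbitrary, and |Z| < 1 is required):

```python
import numpy as np

def oa_density(theta, Z):
    """OA phase density of Fig. 8.11 (a wrapped Cauchy distribution):
    rho(theta) = (1/2pi) (1 - |Z|^2) / |exp(i theta) - Z|^2, for |Z| < 1."""
    return (1.0 - abs(Z) ** 2) / (2.0 * np.pi * np.abs(np.exp(1j * theta) - Z) ** 2)

theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
dtheta = theta[1] - theta[0]
Z = 0.6 * np.exp(0.5j)
rho = oa_density(theta, Z)
norm = rho.sum() * dtheta                            # total probability
moment = (rho * np.exp(1j * theta)).sum() * dtheta   # first circular moment
```

For a smooth periodic integrand, the rectangle rule converges extremely fast, so both checks hold essentially to machine precision.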


Box 8.1: Ott–Antonsen ansatz

Consider the Kuramoto model:

$$
\dot{\theta}_i = \omega_i + \frac{\varepsilon}{N} \sum_{j=1}^{N} \sin(\theta_j - \theta_i), \qquad \theta_i \in [0, 2\pi), \qquad i = 1, \ldots, N.
\tag{8.46}
$$

Using the complex order parameter

$$
Z = \frac{1}{N} \sum_{j=1}^{N} e^{i \theta_j}, \qquad |Z| \leq 1,
\tag{8.47}
$$

the right-hand side of (8.46) can be rewritten using

$$
\frac{1}{N} \sum_{j=1}^{N} \sin(\theta_j - \theta_i) = \operatorname{Im}\left[ e^{-i\theta_i} \frac{1}{N} \sum_{j=1}^{N} e^{i\theta_j} \right] = \operatorname{Im}\left[ Z e^{-i\theta_i} \right].
\tag{8.48}
$$

In the large N (thermodynamic) limit, let ρ (θ |ω , t)dθ be the fraction of oscillators with phases between θ and θ + dθ and natural frequency ω at time t. The frequencies are assumed to be random variables drawn from a distribution L(ω ). The dynamics of the density, ρ , is governed by the continuity equation (expressing the conservation of oscillators), as presented in Box 7.3 on page 311:

$$
\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial \theta} (\rho v) = 0, \qquad v(\theta | \omega, t) = \omega + \frac{\varepsilon}{2i} \left( Z e^{-i\theta} - Z^* e^{i\theta} \right),
\tag{8.49}
$$

where $Z^*$ denotes the complex conjugate of Z, which is given by

$$
Z(t) = \int_0^{2\pi} d\theta \int_{-\infty}^{\infty} d\omega \, \rho(\theta | \omega, t) \, e^{i\theta}.
\tag{8.50}
$$

Since ρ is 2π-periodic in θ (admitting a Fourier series representation), it is natural to assume the product structure

$$
\rho(\theta | \omega, t) = \frac{L(\omega)}{2\pi} \left[ 1 + \sum_{n=1}^{\infty} \left( f_n(\omega, t) e^{i n \theta} + \text{cc} \right) \right],
\tag{8.51}
$$

where cc stands for the complex conjugate of the previous term. The OA ansatz considers the restriction that the Fourier coefficients are all slaved to a single function $a(\omega, t)$ such that

$$
f_n(\omega, t) = a(\omega, t)^n, \qquad |a(\omega, t)| \leq 1,
\tag{8.52}
$$


where the inequality is required to ensure that f n does not diverge as n increases. There is also a further requirement that a(ω , t) can be analytically continued to the complex plane, that this continuation has no singularities in the lower half complex plane, and that |a(ω , t)| → 0 as Im ω → −∞. Substitution into the continuity equation (8.49) and balancing powers of ei θ yields

\[
\frac{\partial a}{\partial t} + \frac{\varepsilon}{2}\left(Z a^2 - Z^*\right) + i a \omega = 0. \tag{8.53}
\]

Using the orthogonality properties of complex exponential functions, namely $\int_0^{2\pi} \mathrm{d}\theta\, e^{i(n-m)\theta} = 2\pi \delta_{n,m}$, $Z^*$ can be constructed as
\[
Z^*(t) = \int_0^{2\pi} \mathrm{d}\theta \int_{-\infty}^{\infty} \mathrm{d}\omega \, \frac{L(\omega)}{2\pi}\left[1 + \sum_{n=1}^{\infty} \left(a^n(\omega, t)\, e^{in\theta} + (a^*)^n(\omega, t)\, e^{-in\theta}\right)\right] e^{-i\theta}
= \int_{-\infty}^{\infty} \mathrm{d}\omega \, a(\omega, t) L(\omega). \tag{8.54}
\]

Note that for identical oscillators the OA ansatz does not give an attracting distribution. In this case, a reduced dynamics can be obtained using the Watanabe–Strogatz ansatz [716, 922].

Box 8.2: Mean field Kuramoto model with a Lorentzian distribution of natural frequencies

Suppose that the natural frequencies, $\omega_i$, in (8.46) are drawn from the Lorentzian distribution
\[
L(\omega) = \frac{1}{\pi} \frac{\gamma}{(\omega - \bar{\omega})^2 + \gamma^2} = \frac{1}{2\pi i}\left[\frac{1}{\omega - \omega_+} - \frac{1}{\omega - \omega_-}\right], \tag{8.55}
\]
where $\omega_\pm = \bar{\omega} \pm i\gamma$. By noting that the Lorentzian $L(\omega)$ has simple poles at $\omega_\pm$, the integral in (8.54) may be performed by choosing a large semi-circle contour in the lower half $\omega$-plane (assuming that $a$ is analytic and has no poles there), as discussed in Box 4.3 on page 134. This yields $Z(t) = a^*(\omega_-, t)$. Taking the complex conjugate of (8.53) and then setting $\omega = \omega_-$ (or equivalently $\omega^* = \omega_+$) yields the evolution of the complex order parameter $Z$ as
\[
\frac{\mathrm{d}Z}{\mathrm{d}t} + \frac{\varepsilon}{2}\left(Z|Z|^2 - Z\right) - i Z \omega_+ = 0. \tag{8.56}
\]

Writing the complex order parameter as Z = |Z |eiΨ results in the equivalent system


\[
\frac{\mathrm{d}|Z|}{\mathrm{d}t} = |Z|\left[\frac{\varepsilon}{2}\left(1 - |Z|^2\right) - \gamma\right], \qquad \frac{\mathrm{d}\Psi}{\mathrm{d}t} = \bar{\omega}. \tag{8.57}
\]

A simple bifurcation analysis shows that $|Z| = 0$ is a stable solution that becomes unstable as $\varepsilon$ increases through $2\gamma$, leading to a new stable branch of solutions with $|Z| = [1 - 2\gamma/\varepsilon]^{1/2}$. This bifurcation result was originally obtained by Kuramoto (by other means) [538].

The background drives, $\mu_i$, are chosen to be random variables drawn from a Lorentzian distribution $L(\mu)$ with
\[
L(\mu) = \frac{1}{\pi} \frac{\Delta}{(\mu - \mu_0)^2 + \Delta^2}, \tag{8.58}
\]
where $\mu_0$ is the centre of the distribution and $\Delta$ its width at half maximum. In the limit $N \to \infty$, the state of the network at time $t$ can be described by a continuous probability distribution function, $\rho(\theta|\mu, t)$, which satisfies the continuity equation (arising from the conservation of oscillators; see Box 7.3 on page 311):
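The stable branch $|Z| = [1 - 2\gamma/\varepsilon]^{1/2}$ can be confirmed by integrating the radial equation in (8.57) numerically. A minimal sketch (integration scheme and parameter values are illustrative assumptions):

```python
import numpy as np

def order_parameter_magnitude(eps, gamma, r0=0.3, t_max=200.0, dt=0.01):
    """Euler integration of d|Z|/dt = |Z| [eps (1 - |Z|^2)/2 - gamma], eq. (8.57)."""
    r = r0
    for _ in range(int(t_max / dt)):
        r += dt * r * (0.5 * eps * (1.0 - r * r) - gamma)
    return r

# below the bifurcation (eps < 2*gamma) the incoherent state |Z| = 0 attracts;
# above it, |Z| settles onto the branch [1 - 2*gamma/eps]^(1/2)
r_below = order_parameter_magnitude(eps=0.8, gamma=0.5)
r_above = order_parameter_magnitude(eps=4.0, gamma=0.5)
```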

\[
\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial \theta}(\rho c) = 0. \tag{8.59}
\]

The global drive to the network given by the right-hand side of (8.45) can be constructed using
\[
\lim_{N \to \infty} \frac{1}{N} \sum_{j=1}^{N} P(\theta_j) = \int_0^{2\pi} \mathrm{d}\theta \int_{-\infty}^{\infty} \mathrm{d}\mu \, \rho(\theta|\mu, t) P(\theta). \tag{8.60}
\]
Hence,
\[
c = (1 - \cos\theta) + (1 + \cos\theta)\left(\mu + g v_{\mathrm{syn}}\right) - g \sin\theta, \tag{8.61}
\]
\[
Q g = \frac{k}{\pi} \sum_{m \in \mathbb{Z}} \int_0^{2\pi} \mathrm{d}\theta \int_{-\infty}^{\infty} \mathrm{d}\mu \, \rho(\theta|\mu, t)\, e^{im(\theta - \pi)}, \tag{8.62}
\]
where the result that $2\pi P(\theta) = \sum_{m \in \mathbb{Z}} e^{im(\theta - \pi)}$ has been used (see Box 2.9 on page 51) and $c$ is a realisation of $\dot{\theta}_i$. The formula for $c$ above may be written conveniently in terms of $e^{\pm i\theta}$ as
\[
c = l e^{i\theta} + h + l^* e^{-i\theta}, \tag{8.63}
\]

where $l = ((\mu - 1) + v_{\mathrm{syn}} g + ig)/2$, $h = (\mu + 1) + v_{\mathrm{syn}} g$, and $l^*$ denotes the complex conjugate of $l$. Substitution of (8.63) into the continuity equation (8.59), using the OA ansatz (see Box 8.1 on page 358)
\[
\rho(\theta|\mu, t) = \frac{L(\mu)}{2\pi}\left[1 + \sum_{n=1}^{\infty} a^n(\mu, t)\, e^{in\theta} + \mathrm{cc}\right], \tag{8.64}
\]
and balancing terms in $e^{i\theta}$ yields an evolution equation for $a(\mu, t)$ as


\[
\frac{\partial a}{\partial t} - i a^2 l - i a h - i l^* = 0. \tag{8.65}
\]

It is now convenient to introduce the Kuramoto order parameter
\[
Z(t) = \int_0^{2\pi} \mathrm{d}\theta \int_{-\infty}^{\infty} \mathrm{d}\mu \, \rho(\theta|\mu, t)\, e^{i\theta}, \tag{8.66}
\]

where $|Z| \leq 1$. Using the OA ansatz (and the orthogonality properties of $e^{i\theta}$, namely $\int_0^{2\pi} e^{ip\theta} e^{iq\theta}\, \mathrm{d}\theta = 2\pi \delta_{p+q,0}$), one finds that
\[
Z^*(t) = \int_{-\infty}^{\infty} \mathrm{d}\mu \, L(\mu)\, a(\mu, t). \tag{8.67}
\]

As described in Box 8.2 on page 359, the integral in (8.67) may be performed by choosing a large semi-circle contour in the lower half complex plane. This yields $Z^*(t) = a(\mu_-, t)$, giving $Z(t) = a(\mu_+, t)$. Hence, the dynamics for $g$ given by (8.62) can be written as
\[
Q g = k f(Z), \tag{8.68}
\]
and

\[
\pi f(Z) = 1 + \sum_{m > 0}\left[(-Z)^m + \mathrm{cc}\right] = \operatorname{Re}\left[\frac{1 - Z^*}{1 + Z^*}\right] = \frac{1 - |Z|^2}{1 + Z + Z^* + |Z|^2}, \qquad |Z| < 1. \tag{8.69}
\]
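The equality of the geometric sum and the closed form in (8.69) can be checked numerically. A small sketch (function names and the test point are illustrative):

```python
import numpy as np

def f_closed(Z):
    """Population firing rate f(Z) from the closed form in (8.69)."""
    Z = complex(Z)
    return (1.0 - abs(Z) ** 2) / (np.pi * (1.0 + 2.0 * Z.real + abs(Z) ** 2))

def f_series(Z, terms=200):
    """The same quantity from pi*f = 1 + sum_{m>0}[(-Z)^m + cc], truncated."""
    Z = complex(Z)
    s = 1.0
    for m in range(1, terms + 1):
        s += 2.0 * ((-Z) ** m).real  # (-Z)^m + cc = 2 Re[(-Z)^m]
    return s / np.pi

Z = 0.4 * np.exp(2.0j)
```

Since $|Z| < 1$, the truncated series converges geometrically, and $f$ is largest near $Z = -1$ (cf. Fig. 8.12).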

From equations (8.43), (8.68)–(8.69), the physical interpretation of $f(Z)$ is the population firing rate. A plot of $f(Z)$ is shown in Fig. 8.12. The dynamics for $Z$ is obtained from (8.65) as
\[
\frac{\mathrm{d}Z}{\mathrm{d}t} = \mathcal{F}(Z; \mu_0, \Delta) + \mathcal{G}(Z, g; v_{\mathrm{syn}}), \tag{8.70}
\]

where
\[
\mathcal{F}(Z; \mu_0, \Delta) = -i \frac{(Z - 1)^2}{2} + \frac{(Z + 1)^2}{2}\left(-\Delta + i \mu_0\right), \tag{8.71}
\]
\[
\mathcal{G}(Z, g; v_{\mathrm{syn}}) = i \frac{(Z + 1)^2}{2} v_{\mathrm{syn}}\, g - \frac{Z^2 - 1}{2}\, g. \tag{8.72}
\]

Here, (8.71) can be viewed as describing the intrinsic population dynamics and (8.72) the dynamics induced by synaptic coupling. Thus, the form of the mean field model is precisely that of a neural mass model as given by equation (8.3). Importantly, the firing rate, $f$, is a derived quantity that is a real-valued function of the complex Kuramoto order parameter for synchrony. This in turn is described by a complex ODE with parameters from the underlying microscopic model. A comparison of $(g, \dot{g})$ for the mean field model and a network of $N = 500$ $\theta$-neurons is shown in Fig. 8.13.


Fig. 8.12 Heat map showing $f$ as a function of the complex number $Z$ using equation (8.69). $f$ is the flux through $\theta = \pi$, which is proportional to the density, $\rho(\theta|\mu, t)$, at $\theta = \pi$. The peak of the density occurs at $\arg(Z)$ and the sharpness of the density depends on $|Z|$ (see Fig. 8.11). The density plot above illustrates that $f$ takes a high value near $Z = e^{i\pi} = -1$.

It is clear that the two realisations agree well. Also shown is the corresponding plot for the complex order parameter W = π R + i V , where R = f (Z ) is the average population firing rate, and V is the average population voltage (more of which in the next section). The fluctuations of the finite size network, seen in the figure, reduce to zero in the large N limit. For details on how to treat networks of identical θ -neurons (using the Watanabe–Strogatz ansatz), see [551]. As well as capturing the behaviour of the spiking network, the mean field model is also capable of supporting ERS/ERD in response to a global time-dependent input. Indeed, the model has recently been matched to experiments exhibiting so-called beta rebound [135]. Modulations in the beta power of human brain rhythms are known to occur during and after movement. A sharp decrease in neural oscillatory power is observed during movement followed by an increase above baseline on movement cessation. Both are captured in the response of the mean field model to simple square wave forcing, as illustrated in Fig. 8.14. For a further discussion of this, as well as a bifurcation analysis, see [135, 190], and for the further application of the model to other challenges in large-scale brain modelling, see [136, 137]. For the inclusion of gap junctions within the mean field reduction, see [138].


Fig. 8.13 Left: Comparison between the next-generation neural mass model (8.69)–(8.72) and a simulation of a network of $N = 500$ $\theta$-neurons for the choice $Q = (1 + \alpha^{-1} \mathrm{d}/\mathrm{d}t)^2$. The figure shows a projection into the $(g, \dot{g})$-plane. The grey trajectory arises from the direct simulation of the underlying microscopic model and the black trajectory shows the mean field prediction. The small discrepancy between the two is the result of finite size fluctuations (that reduce to zero in the large $N$ limit). Right: Time evolution of the complex order parameter $W = \pi R + i V$. Top-right: Average population voltage $V = \operatorname{Im} W$ as a function of $t$. Bottom-right: Average population firing rate $R = \operatorname{Re} W/\pi$ as a function of $t$. Here $\mu_0 = 20$, $\Delta = 0.5$, $v_{\mathrm{syn}} = -10$ mV, $k = \pi$, and $\alpha^{-1} = 1.05$ ms.

8.9.1 Mapping between phase and voltage descriptions

Montbrió et al. have recently shown how to move between order parameters for the phase and voltage descriptions (for $\theta$-neuron and QIF networks) with the use of a conformal transformation [641]. The equivalent dynamics for (8.44) in voltage form is
\[
\dot{v}_i = \mu_i + v_i^2 + g\left(v_{\mathrm{syn}} - v_i\right). \tag{8.73}
\]
For consistency with the $\theta$-neuron model, the reset and threshold of the underlying QIF neurons are chosen such that $v_r \to -\infty$ and $v_{\mathrm{th}} \to \infty$. In the thermodynamic limit ($N \to \infty$),
\[
Q g = k R, \qquad R(t) = \lim_{N \to \infty} \frac{1}{N} \sum_{j=1}^{N} \sum_{m \in \mathbb{Z}} \delta\left(t - T_j^m\right), \tag{8.74}
\]

where R is the population firing rate. The continuity equation associated with (8.73) in this limit is


Fig. 8.14 Response of the next-generation neural mass model to an external drive $A(t)$. This is modelled under the replacement $\mu_0 \to \mu_0 + A$ where $(1 + \alpha_\Omega^{-1} \mathrm{d}/\mathrm{d}t)^2 A = \Omega_0 \Theta(t)\Theta(\tau - t)$ (namely, a smoothed rectangular pulse). Note that the 0.4 s pulse is not applied until time $t = 3$ s, after transients have dropped off. Right: Power spectrogram and time series of the synaptic current demonstrating the rebound of the system, i.e., an increase in amplitude (and hence power) after the drive is switched off. The white contour line in the spectrogram is the level set with value 0.5, highlighting that the power in the beta band (∼15 Hz) drops off significantly during the drive and comes back stronger at the termination of the drive. Left: The corresponding trajectory of the Kuramoto order parameter $Z$. As the system relaxes back to its original oscillatory behaviour, the trajectory loops close to the border $|Z| = 1$. It is this enhanced synchrony that causes the rebound. Lines in blue indicate pre-stimulus activity, lines in red are for the response during stimulation, and lines in green are post-stimulus. The parameters are $\mu_0 = 21.5$, $\Delta = 0.5$, $v_{\mathrm{syn}} = -10$ mV, $k = 3.2$, $\alpha^{-1} = 33$ ms, $\alpha_\Omega^{-1} = 10$ ms, $\tau = 400$ ms, and $\Omega_0 = 30$.

\[
\frac{\partial \tilde{\rho}}{\partial t} + \frac{\partial}{\partial v}\left(\tilde{\rho}\, \dot{v}\right) = 0, \tag{8.75}
\]
where $\tilde{\rho}(v|\mu, t)$ is the density of neuronal voltages for fixed $\mu$ at time $t$ and $\dot{v} = \mu + v^2 + g(v_{\mathrm{syn}} - v)$. The total voltage density at time $t$ is given by $\int_{-\infty}^{\infty} \tilde{\rho}(v|\mu, t) L(\mu)\, \mathrm{d}\mu$, with $L(\mu)$ being the distribution of external drives as given by (8.58). The insight of Montbrió et al. was to assume that solutions of (8.75) generically converge to a Lorentzian:
\[
\tilde{\rho}(v|\mu, t) = \frac{1}{\pi} \frac{x(\mu, t)}{\left(v - y(\mu, t)\right)^2 + x(\mu, t)^2}. \tag{8.76}
\]

The form of (8.76) can be motivated by noting that the time-independent solution of (8.75) satisfies $\tilde{\rho}\, \dot{v} = \text{const}$, suggesting that $1/\tilde{\rho}$ is quadratic in $v$. As in (8.58), $y$ represents the (time-dependent) centre of the distribution and $x$ is the width at half maximum. Substitution of (8.76) into the continuity equation (8.75) and balancing powers of $v$ shows that, for fixed $\mu$, $x$ and $y$ obey the two coupled dynamical equations:
\[
\frac{\partial x}{\partial t} = -g x + 2 x y, \qquad \frac{\partial y}{\partial t} = \mu + y^2 - x^2 + g\left(v_{\mathrm{syn}} - y\right). \tag{8.77}
\]

In terms of the complex variable $w(\mu, t) = x(\mu, t) + i y(\mu, t)$, this gives the dynamics
\[
\frac{\partial w}{\partial t} = -g w + i\left(\mu + g v_{\mathrm{syn}} - w^2\right). \tag{8.78}
\]
To provide a physical meaning to $(x, y)$, first note that the firing rate (number of spikes per unit time) is computed as the probability flux at threshold, namely,
\[
r(\mu, t) = \tilde{\rho}(v = v_{\mathrm{th}}|\mu, t)\, \dot{v}(v = v_{\mathrm{th}}|\mu, t) = \frac{1}{\pi} x(\mu, t), \tag{8.79}
\]
remembering that $v_{\mathrm{th}} \to \infty$. Hence, $x(\mu, t) = \pi r(\mu, t) \geq 0$. It may be further shown that
\[
y(\mu, t) = \mathrm{PV} \int_{-\infty}^{\infty} v\, \tilde{\rho}(v|\mu, t)\, \mathrm{d}v, \tag{8.80}
\]

where PV denotes the Cauchy principal value (see Prob. 8.7). An average over the distribution of $\mu$ gives the population averages for the firing rate $R(t)$ and the mean voltage $V(t) = \lim_{N \to \infty} N^{-1} \sum_{j=1}^{N} v_j(t)$ as
\[
R(t) = \int_{-\infty}^{\infty} \mathrm{d}\mu \, r(\mu, t) L(\mu) = \frac{1}{\pi} \int_{-\infty}^{\infty} \mathrm{d}\mu \, L(\mu) x(\mu, t), \tag{8.81}
\]
\[
V(t) = \int_{-\infty}^{\infty} \mathrm{d}\mu \left[\mathrm{PV} \int_{-\infty}^{\infty} \mathrm{d}v \, v\, \tilde{\rho}(v|\mu, t)\right] L(\mu) = \int_{-\infty}^{\infty} \mathrm{d}\mu \, L(\mu) y(\mu, t). \tag{8.82}
\]

Exploiting the pole structure of $L(\mu)$ shows that the integrals in (8.81) and (8.82) evaluate to $R(t) = \pi^{-1} x(\mu_-, t)$ and $V(t) = y(\mu_-, t)$, where $\mu_\pm = \mu_0 \pm i\Delta$. Using (8.78), the dynamics for the complex order parameter $W(t) = \pi R(t) + i V(t) = w(\mu_-, t)$ obey
\[
\dot{W} = \Delta - g W + i\left(\mu_0 + g v_{\mathrm{syn}} - W^2\right). \tag{8.83}
\]
Separating the above into real and imaginary parts yields a two-dimensional system for the macroscopic variables $(R, V)$:
\[
\dot{R} = \frac{\Delta}{\pi} + R(2V - g), \tag{8.84}
\]
\[
\dot{V} = \mu_0 + V^2 - \pi^2 R^2 + g\left(v_{\mathrm{syn}} - V\right). \tag{8.85}
\]
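The equivalence between the complex form (8.83) and the real pair (8.84)–(8.85) is easy to verify numerically at an arbitrary state. A minimal sketch (the parameter values and test state are illustrative assumptions):

```python
import numpy as np

MU0, DELTA, VSYN = 20.0, 0.5, -10.0  # illustrative parameter values

def dW(W, g):
    """Right-hand side of (8.83) for the complex order parameter W = pi R + i V."""
    return DELTA - g * W + 1j * (MU0 + g * VSYN - W * W)

def dRV(R, V, g):
    """Right-hand sides of the real system (8.84)-(8.85)."""
    dR = DELTA / np.pi + R * (2.0 * V - g)
    dV = MU0 + V * V - (np.pi * R) ** 2 + g * (VSYN - V)
    return dR, dV

R, V, g = 0.7, -1.3, 2.0
w_dot = dW(np.pi * R + 1j * V, g)
r_dot, v_dot = dRV(R, V, g)
```

One should find $\operatorname{Re}\dot{W} = \pi \dot{R}$ and $\operatorname{Im}\dot{W} = \dot{V}$ at any state, confirming the separation into real and imaginary parts.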

Recall that v and θ are related to one another via v = F(θ ) = tan(θ /2) = i M(ei θ ; −1, 1, 1, 1), where M(z; a, b, c, d) is a Möbius transformation as described in Box 8.3 on page 366. This transformation maps the probability density from the voltage to the phase description according to (see Appendix A, equation (A.40))


\[
\rho(\theta) = \tilde{\rho}(F(\theta))\left|F'(\theta)\right|. \tag{8.86}
\]

It is convenient to introduce the complex parameter $\zeta = y + i x$ and write (8.76) in the form $\tilde{\rho} = \tilde{\rho}(v|\zeta)$:
\[
\tilde{\rho}(v|\zeta) = \frac{1}{2\pi i}\left[\frac{1}{v - \zeta} - \frac{1}{v - \zeta^*}\right] = \frac{1}{2\pi i} \frac{\zeta - \zeta^*}{|v - \zeta|^2} = \frac{1}{\pi} \frac{\operatorname{Im}(\zeta)}{|v - \zeta|^2}. \tag{8.87}
\]
Using (8.86) and the derivative properties of the Möbius transformation described in Box 8.3 on page 366 allows one to rewrite (8.86) as
\[
\rho(\theta|\zeta) = \frac{2 \operatorname{Im}(\zeta)}{\pi |i + \zeta|^2} \frac{1}{\left|e^{i\theta} - \tilde{\zeta}\right|^2} = \frac{1}{2\pi} \frac{1 - |\tilde{\zeta}|^2}{\left|e^{i\theta} - \tilde{\zeta}\right|^2}, \qquad \tilde{\zeta} = \frac{i - \zeta}{i + \zeta}. \tag{8.88}
\]
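The wrapped-Lorentzian form (8.88) can be probed numerically: it should integrate to one over $[0, 2\pi)$ and peak at $\arg \tilde{\zeta}$. A small sketch (the parameter value and grid are illustrative):

```python
import numpy as np

def rho_theta(theta, zt):
    """Phase density (8.88) with complex parameter zt, |zt| < 1."""
    return (1.0 - abs(zt) ** 2) / (2.0 * np.pi * np.abs(np.exp(1j * theta) - zt) ** 2)

zt = 0.6 * np.exp(1.0j)
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
vals = rho_theta(theta, zt)
mass = vals.sum() * (2.0 * np.pi / len(theta))  # rectangle rule on a periodic grid
theta_peak = theta[np.argmax(vals)]
```

The rectangle rule converges spectrally fast here because the integrand is smooth and periodic.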

Hence, $\rho(\theta|\zeta)$ has a comparable form to $\tilde{\rho}(v|\zeta)$ and is a Lorentzian distribution in $e^{i\theta}$ with complex parameter $\tilde{\zeta}$. Moreover, after performing the sum in (the geometric progression) for the distribution in (8.64) (factoring out $L(\mu)$), it can be seen that they are identical upon setting $\tilde{\zeta}(\mu, t) = a^*(\mu, t)$. Note that the original Lorentzian distribution (8.87) and complex parameter transform according to the same Möbius transformation, namely, $v = i M(e^{i\theta}; -1, 1, 1, 1)$ and $\zeta = i M(\tilde{\zeta}; -1, 1, 1, 1)$. This is consistent with the general observation of McCullagh that if the random variable $X$ has a Lorentzian distribution with complex parameter $\zeta$, then the random variable $Y = M(X; a, b, c, d)$ has a Lorentzian distribution with parameter $M(\zeta; a, b, c, d)$ [617].

Box 8.3: Möbius transformation

A mapping $M : \mathbb{C} \to \mathbb{C}$ of the form
\[
M(z; a, b, c, d) = \frac{a z + b}{c z + d}, \tag{8.89}
\]

for a, b, c, d ∈ C where ad − bc = 0 is called a Möbius transformation. This transformation is part of a broader classification of conformal mappings that preserve local angles. An analytic function is conformal at any point where it has a non-zero derivative. Each Möbius transformation is composed of four elementary transformations describing translation, dilation, rotation, and inversion. In particular, M = f 4 ◦ f 3 ◦ f 2 ◦ f 1 , where f i : C → C are defined by

(i) $f_1(z) = z + d/c$, &nbsp; (translation by $d/c$) (8.90)
(ii) $f_2(z) = 1/z$, &nbsp; (inversion and reflection in the real axis) (8.91)
(iii) $f_3(z) = \dfrac{bc - ad}{c^2}\, z$, &nbsp; (dilation and rotation) (8.92)
(iv) $f_4(z) = z + a/c$, &nbsp; (translation by $a/c$) (8.93)

for $c \neq 0$. The inverse of the Möbius transformation (8.89) is given by
\[
M^{-1}(z; a, b, c, d) = M(z; d, -b, -c, a), \tag{8.94}
\]
which can be seen by directly inverting (8.89) or by composing the inverses of the $f_i$, that is, $M^{-1} = f_1^{-1} \circ f_2^{-1} \circ f_3^{-1} \circ f_4^{-1}$. The derivative of the transformation is given by
\[
M'(z; a, b, c, d) = \frac{ad - bc}{(cz + d)^2}. \tag{8.95}
\]
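The inverse (8.94) and derivative (8.95) are simple to sanity-check numerically. A minimal sketch with arbitrary (illustrative) coefficients:

```python
import numpy as np

def mobius(z, a, b, c, d):
    """M(z; a, b, c, d) = (az + b)/(cz + d), eq. (8.89); requires ad - bc != 0."""
    return (a * z + b) / (c * z + d)

a, b, c, d = 2.0 + 1.0j, -1.0 + 0.0j, 1.0 + 0.5j, 3.0 + 0.0j
z = 0.7 - 0.2j

w = mobius(z, a, b, c, d)
z_back = mobius(w, d, -b, -c, a)            # inverse via (8.94)
deriv = (a * d - b * c) / (c * z + d) ** 2  # derivative via (8.95)
h = 1e-6
fd = (mobius(z + h, a, b, c, d) - mobius(z - h, a, b, c, d)) / (2.0 * h)
```

The central finite difference `fd` should agree with the analytic derivative to well within its $O(h^2)$ truncation error.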

The Fourier coefficients for the density $\rho(\theta|\tilde{\zeta}) = \sum_{n \in \mathbb{Z}} \rho_n e^{in\theta}$, $\rho_n = \rho_{-n}^*$, can be written in the form
\[
\rho_{-n} = \frac{1}{2\pi} \int_0^{2\pi} \mathrm{d}\theta \, e^{in\theta} \rho(\theta|\tilde{\zeta}) = \frac{1}{2\pi i} \oint \mathrm{d}z \, z^{n-1} \frac{1}{2\pi} \frac{1 - |\tilde{\zeta}|^2}{|z - \tilde{\zeta}|^2}, \tag{8.96}
\]
where $z = e^{i\theta}$ and $n \geq 0$. Noting that $|z|^2 = 1$, then $|z - \tilde{\zeta}|^2 = (z - \tilde{\zeta})(z^* - \tilde{\zeta}^*) = (z - \tilde{\zeta})(1 - z\tilde{\zeta}^*)/z$, allowing the pole structure of (8.96) to be exposed as
\[
\rho_{-n} = \frac{1}{(2\pi)^2 i} \oint \mathrm{d}z \, z^n \left[\frac{1}{z - \tilde{\zeta}} + \frac{\tilde{\zeta}^*}{1 - z\tilde{\zeta}^*}\right]. \tag{8.97}
\]
Without loss of generality, one may take $|\tilde{\zeta}| < 1$ so that only the pole at $z = \tilde{\zeta}$ contributes to the residue. Use of the Cauchy residue theorem (see Box 4.3 on page 134) generates the elegant result that
\[
\rho_{-n} = \frac{1}{2\pi} \tilde{\zeta}^n. \tag{8.98}
\]
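The residue result (8.98) can be confirmed by direct quadrature of the Fourier integral. A small sketch (parameter value and grid size are illustrative):

```python
import numpy as np

def rho_theta(theta, zt):
    """Phase density (8.88): a wrapped Lorentzian with parameter zt, |zt| < 1."""
    return (1.0 - abs(zt) ** 2) / (2.0 * np.pi * np.abs(np.exp(1j * theta) - zt) ** 2)

zt = 0.5 * np.exp(0.8j)
M = 4096
theta = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
vals = rho_theta(theta, zt)

def coeff(n):
    """rho_{-n} = (1/2pi) * integral of e^{i n theta} rho(theta) over [0, 2pi)."""
    return np.sum(np.exp(1j * n * theta) * vals) * (1.0 / M)
```

Each computed coefficient should match $\tilde{\zeta}^n/(2\pi)$ to near machine precision.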

Computing the Kuramoto order parameter according to (8.66) (and exploiting the orthogonality properties of ei θ ) gives


\[
Z(t) = \frac{1}{2\pi} \int_0^{2\pi} \mathrm{d}\theta \int_{-\infty}^{\infty} \mathrm{d}\mu \, L(\mu)\left[1 + \sum_{n > 0} \left(\left(\tilde{\zeta}^*(\mu, t)\right)^n e^{in\theta} + \mathrm{cc}\right)\right] e^{i\theta}
= \int_{-\infty}^{\infty} \mathrm{d}\mu \, L(\mu)\, \tilde{\zeta}(\mu, t) = \tilde{\zeta}(\mu_-, t), \tag{8.99}
\]

using the observation that the only contribution to the last integral comes from the pole at $\mu = \mu_-$. Finally, using $\zeta = i M(\tilde{\zeta}; -1, 1, 1, 1)$ and noting that $\zeta = i w^*$ yields the conformal relation
\[
Z(t) = M(i \zeta(\mu_-, t); -1, -1, 1, -1) = M(-w^*(\mu_-, t); -1, -1, 1, -1) = \frac{1 - W^*(t)}{1 + W^*(t)}. \tag{8.100}
\]
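Relation (8.100) and its inverse can be exercised numerically: physically admissible states (with firing rate $R = \operatorname{Re} W/\pi > 0$) should land strictly inside the unit disc. A minimal sketch (sampling ranges are illustrative):

```python
import numpy as np

def z_from_w(W):
    """Kuramoto order parameter from W = pi R + i V, eq. (8.100)."""
    return (1.0 - np.conj(W)) / (1.0 + np.conj(W))

def w_from_z(Z):
    """Inverse map; it has the same Mobius form, W = (1 - Z*)/(1 + Z*)."""
    return (1.0 - np.conj(Z)) / (1.0 + np.conj(Z))

rng = np.random.default_rng(1)
# sample macroscopic states with positive firing rate R = Re(W)/pi > 0
W = rng.uniform(0.01, 5.0, 100) + 1j * rng.uniform(-5.0, 5.0, 100)
Z = z_from_w(W)
```

That $|Z| < 1$ whenever $\operatorname{Re} W > 0$ follows from $|1 + W^*|^2 - |1 - W^*|^2 = 4 \operatorname{Re} W$.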

This Möbius transformation takes the right half-plane onto the unit disc, as shown in Fig. 8.15 and together with its inverse defines a one-to-one mapping between the Kuramoto order parameter Z ∈ C and the macroscopic quantities of a population of QIF neurons: R ∈ R+ , the population firing rate, and V ∈ R, the mean membrane potential. Such a transformation is useful in classifying the degree of synchrony in a network given its mean voltage and firing rate. Thus, for problems where ERD/ERS is prevalent, say in EEG/MEG neuroimaging studies, it may be appropriate to work with the dynamics for Z given by (8.70), whilst for experiments with voltage-sensitive dyes it may be better to switch to the W framework using the conformal transformation W = (1 − Z ∗ )/(1 + Z ∗ ) or by evolving (8.83). From W , the voltage signal can be extracted as V = Im W .

Fig. 8.15 The conformal transformation defined by (8.100) maps the right half-plane onto the unit disc (and its inverse maps the disc back onto the half-plane). This allows the Kuramoto order parameter $Z \in \mathbb{C}$, which quantifies the degree of synchrony in a network, to be defined in terms of the physical macroscopic quantities of population firing rate $R \in \mathbb{R}^+$ and mean voltage $V \in \mathbb{R}$.

8.10 Neural mass networks

369

As well as being useful for shedding light on localised cortical dynamics, networks of neural mass models can be used to gain insight into whole-brain dynamics, including functional connectivity (FC), which characterises the patterns of correlation and coherence between brain regions based on temporal similarity [87], especially as measured with neuroimaging modalities such as functional magnetic resonance imaging during the resting state. Changes in FC are believed to reflect higher brain functions [694, 964] and have been extensively studied in the context of changes in cognitive processing during ageing (see, e.g., [300, 410]) and neurological disease (see, e.g., [220, 252, 801]). Computational modelling has proven an invaluable tool for gaining insight into the potential mechanisms that can give rise to whole-brain network dynamics. Activity in this area of computational neuroscience and neuroinformatics is exemplified by that of the Virtual Brain project, which combines connectome data, such as those available from the Human Connectome Project [891], with neural mass modelling and can map onto a wide range of neuroimaging modalities [473].

One way in which FC may arise in a network of neural mass models is via a dynamic (Hopf) instability of a network steady state. To see how this may occur in a network of phenomenological neural mass models of Wilson–Cowan type, consider a node with a state variable given by $x(t) = (x_1(t), \ldots, x_m(t)) \in \mathbb{R}^m$, $t \geq 0$, with local dynamics
\[
\frac{\mathrm{d}x}{\mathrm{d}t} = F(x) + G\left(W^{\mathrm{loc}} x + p\right), \tag{8.101}
\]
where $F, G : \mathbb{R}^m \to \mathbb{R}^m$, $p \in \mathbb{R}^m$, and $W^{\mathrm{loc}} \in \mathbb{R}^{m \times m}$ (used to represent the local connectivity between excitatory and inhibitory sub-populations). A network of $N$ identical interconnected nodes can then be constructed according to

\[
\frac{\mathrm{d}x_i}{\mathrm{d}t} = F(x_i) + G\left(W^{\mathrm{loc}} x_i + p + s_i\right), \qquad s_i = \sigma \sum_{j=1}^{N} w_{ij} H(x_j), \tag{8.102}
\]

where $i = 1, \ldots, N$, and $H : \mathbb{R}^m \to \mathbb{R}^m$ selects the local component that mediates interactions. For example, if interactions are only mediated by the first component of the local dynamics, then one would choose $H(x) = (x_1, 0, \ldots, 0)$. Here the connection strength from node $j$ to node $i$ is given by $w_{ij}$ (and $\sigma$ represents a global strength of coupling); in human connectome data, this typically captures the long-range interactions between excitatory sub-populations. For simplicity, the discussion here is restricted to the choice of a connectivity matrix that is row sum normalised, that is, $\sum_{j=1}^{N} w_{ij} = 1$. This simplifies the construction of the steady state and its linear stability analysis. In this case, at steady state, each node is described by an identical time-independent vector with components $x_i = \bar{x}$ for all $i$. The network steady state is given by $0 = F(\bar{x}) + G(W^{\mathrm{loc}} \bar{x} + \bar{s})$, with $\bar{s} = p + \sigma H(\bar{x})$. Linearising the network equations around the steady state with $x_i(t) = \bar{x} + e^{\lambda t} u_i$ for $\lambda \in \mathbb{C}$ and $u_i$ small gives


\[
\lambda u_i = \left[DF(\bar{x}) + DG\left(W^{\mathrm{loc}} \bar{x} + \bar{s}\right) W^{\mathrm{loc}}\right] u_i + \sigma \sum_{j=1}^{N} DG\left(W^{\mathrm{loc}} \bar{x} + \bar{s}\right) DH(\bar{x})\, w_{ij} u_j, \tag{8.103}
\]
where $DF$, $DG$, and $DH$ are the Jacobians of $F$, $G$, and $H$, respectively. Introducing the matrices $\widehat{DF} = DF(\bar{x}) + DG(W^{\mathrm{loc}} \bar{x} + \bar{s}) W^{\mathrm{loc}}$ and $\widehat{DG} = DG(W^{\mathrm{loc}} \bar{x} + \bar{s}) DH(\bar{x})$ and the vector $U = (u_1, \ldots, u_N)$, system (8.103) can be written in tensor notation (see Box 7.2 on page 304) as
\[
\left[\lambda I_N \otimes I_m - I_N \otimes \widehat{DF} - \sigma\, w \otimes \widehat{DG}\right] U = 0, \tag{8.104}
\]

where $\otimes$ is the tensor product. By introducing a new variable $Y$ according to the linear transformation $Y = (P \otimes I_m)^{-1} U$, where $P$ is the matrix of normalised eigenvectors of $w$, a block diagonal system for $Y$ can be generated (see Sec. 7.4 for a similar diagonalisation). This is in the form of (8.104) under the replacement $w \to \mathrm{diag}(\gamma_1, \gamma_2, \ldots, \gamma_N)$, where $\gamma_\mu$ are the eigenvalues of $w$. Thus, the eigenvalues of the linearised network system are given by the set of spectral equations
\[
\det\left[\lambda I_m - \widehat{DF} - \sigma \gamma_\mu \widehat{DG}\right] = 0, \qquad \mu = 1, \ldots, N. \tag{8.105}
\]

The network steady state is stable if $\operatorname{Re}\lambda < 0$ for all $\mu$. Should this stability condition be violated for some unique value of $\mu = \mu_c$, then we would expect the excitation of the structural eigenmode $v_{\mu_c}$ (the eigenvector of $w$ with eigenvalue $\gamma_{\mu_c}$).
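The reduction from the full $Nm \times Nm$ linearisation (8.104) to the $N$ small spectral problems (8.105) can be verified directly. A minimal sketch in which the Jacobians $\widehat{DF}$ and $\widehat{DG}$ are placeholder matrices (illustrative values, not derived from a particular steady state):

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma = 6, 1.5
w = rng.random((N, N))
w /= w.sum(axis=1, keepdims=True)            # row-sum normalised connectivity
DF = np.array([[-1.0, -0.5], [0.8, -1.2]])   # placeholder local Jacobian DF-hat
DG = np.array([[0.6, 0.0], [0.3, 0.0]])      # placeholder coupling Jacobian DG-hat

# spectrum of the full operator I_N (x) DF-hat + sigma * w (x) DG-hat, cf. (8.104)
full_spec = np.linalg.eigvals(np.kron(np.eye(N), DF) + sigma * np.kron(w, DG))

# block route (8.105): one small eigenproblem per eigenvalue gamma_mu of w
block_spec = np.concatenate(
    [np.linalg.eigvals(DF + sigma * g * DG) for g in np.linalg.eigvals(w)])
```

The two collections of eigenvalues should coincide (up to ordering and floating-point error), which is precisely the content of the block diagonalisation.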

8.10.1 Functional connectivity in a Wilson–Cowan network

Consider a Wilson–Cowan network consisting of an excitatory population $E_i$ and an inhibitory population $I_i$, for $i = 1, \ldots, N$, with dynamics given by
\[
\begin{bmatrix} \tau_E & 0 \\ 0 & \tau_I \end{bmatrix} \frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} E_i \\ I_i \end{bmatrix} = -\begin{bmatrix} E_i \\ I_i \end{bmatrix} + f\left(\begin{bmatrix} W_{EE} E_i - W_{EI} I_i + p_E + \sigma \sum_{j=1}^{N} w_{ij} E_j \\ W_{IE} E_i - W_{II} I_i + p_I \end{bmatrix}\right). \tag{8.106}
\]
This may be written in the form (8.102) with the identification $x_i = (E_i, I_i)$, $H(x) = (E, 0)$, $F(x) = -\Gamma^{-1} x$, $G(x) = \Gamma^{-1} f(x)$, and $p = (p_E, p_I)$, where
\[
\Gamma = \begin{bmatrix} \tau_E & 0 \\ 0 & \tau_I \end{bmatrix}, \qquad W^{\mathrm{loc}} = \begin{bmatrix} W_{EE} & -W_{EI} \\ W_{IE} & -W_{II} \end{bmatrix}. \tag{8.107}
\]
In this case, the spectral equation (8.105) can be written in the form $\mathcal{E}_\mu(\lambda) = \det\left[\lambda I_2 - A_\mu\right] = 0$, where
\[
A_\mu = -\begin{bmatrix} \tau_E^{-1} & 0 \\ 0 & \tau_I^{-1} \end{bmatrix} \left\{ I_2 - Df\left(W^{\mathrm{loc}} \bar{x} + \bar{s}\right) \left[ W^{\mathrm{loc}} + \sigma \begin{bmatrix} \gamma_\mu & 0 \\ 0 & 0 \end{bmatrix} \right] \right\}. \tag{8.108}
\]


Here $[Df(x)]_{ij} = f'(x_i)\delta_{ij}$. Hence, the complete set of eigenvalues that determine the stability of the network steady state is given by $\lambda = \lambda_\pm(\mu)$, where
\[
\lambda_\pm(\mu) = \frac{1}{2}\left[\operatorname{Tr} A_\mu \pm \sqrt{\left(\operatorname{Tr} A_\mu\right)^2 - 4 \det A_\mu}\right], \qquad \mu = 1, \ldots, N. \tag{8.109}
\]
For a specific choice of network connectivity (structural connectivity), equation (8.109) can be used to probe the conditions under which a spatially patterned network state may emerge. For example, upon choosing each node in an otherwise unconnected network to be stable (say in an ISN state, as described in Sec. 8.3), one may then determine the overall value of coupling strength $\sigma$ such that $\lambda$ crosses the imaginary axis of the complex plane from left to right. If this occurs for $\operatorname{Im}\lambda \neq 0$, then a dynamic (Hopf) instability will occur and a simple proxy for the expected pattern of FC is the matrix $v_{\mu_c} v_{\mu_c}^{\mathsf{T}}$ (formed from the outer product of $v_{\mu_c}$ with itself). In Fig. 8.16 (left), a plot of a typical human structural connectivity matrix $w$, parcellated to a 68-node network and obtained from diffusion magnetic resonance imaging (MRI) data made available through the Human Connectome Project [891], is shown. In Fig. 8.16 (middle), the spectrum computed from (8.109) with a non-zero value of $\sigma$ is shown, signalling a dynamic instability (since some of the eigenvalues of $A_\mu$ are found in the right-hand complex plane). In Fig. 8.16 (right), the corresponding pattern of FC predicted from the linear stability analysis is shown (for the value of $\mu$ such that $\lambda_+(\mu)$ first crosses into the right-hand complex plane). With the inclusion

Fig. 8.16 A Wilson–Cowan network constructed with human connectome data. Left: A structural connectivity matrix for a 68-node network obtained from diffusion MRI data made available through the Human Connectome Project [891]. Middle: A plot of the network spectrum obtained from (8.109) with σ = 2.2. Right: The predicted pattern of FC at the onset of a dynamic instability. The parameters are as shown in Fig. 8.1 with ( p E , p I ) = (−0.25, 0).

of delays into the model, it is possible to preferentially excite a set of structural eigenmodes that provide a good fit to FC resting state data, ranging from delta to the high gamma band [862] (and see Prob. 7.7 for how to include delays into the spectral equation (8.105)). For details on how this approach would work for a network of next-generation neural mass models, see Prob. 8.9. If network oscillations occur for


weak coupling with $|\sigma| \ll 1$, then it is also possible to use the techniques of Sec. 6.2 to develop a weakly coupled oscillator network description. This has been done for the Wilson–Cowan model in [422] and the Jansen–Rit model in [310] to explore how the composition of structural connectivity and node dynamics can shape emergent patterns of FC.

Remarks

This chapter has covered some of the more historically well-known neural mass models, as exemplified by that of Wilson and Cowan, and also advocated for the use of a certain next-generation model. One question that we did not explore is how well the behaviour of large networks of interacting neurons is captured by either phenomenological neural mass models [566] or the next-generation one and its set of perhaps unrealistic assumptions (e.g., global coupling) [372]. It remains an open challenge to develop exact mean field models for general classes of interacting spiking neurons, though one such approach for describing the activity of large ensembles of Hodgkin–Huxley elements with certain assumptions on the stochastic dynamics of maximal conductances is that of Baladron et al. [53, 94]. The resulting mean field equations are nonstandard stochastic differential equations with a probability density function that can be expressed in terms of a nonlinear and non-local PDE of McKean–Vlasov–Fokker–Planck type. It is also a challenge to understand finite size effects, though some progress in this regard has been made for θ-neuron networks using a perturbation expansion in the reciprocal of the number of neurons in the network and a moment hierarchy truncation [128]. We also touched upon whole-brain modelling by considering networks of interacting neural masses and how this is relevant to predicting patterns of FC. However, it is important to emphasise that FC patterns can evolve over tens of seconds, with essentially discontinuous shifts from one short-term state to another [444]. The maintenance of even the relatively short-term static patterns of FC is still relatively poorly understood from a mechanistic perspective. This is despite the widespread use of FC in distinguishing between healthy and pathological brain states [171].
Thus, there is a clear challenge in 'Network Neuroscience' where further neural mass network modelling and analysis can play a major role, for example, in understanding abnormalities in brain activity patterns associated with diseases and dysfunction. Moreover, with the inclusion of models of cellular damage, this can form the starting point for investigations of degradation of cognitive function and the slow evolution of structural connectivity in neurodegenerative diseases, such as Alzheimer's or Parkinson's [368]. For a more comprehensive introduction to Network Neuroscience, with applications to psychiatry and neurology, we recommend the book by Flavio Frohlich [324], and for a recent discussion of 'Whole-Brain Modelling: Past, Present, and Future', see [375]. One topic that we have neglected at the neuronal population level is that of plasticity. This has been discussed in the context of changes induced by transcranial

Problems

373

magnetic stimulation by Wilson et al. [946], and a recent analysis of the emergence of bursting in a next-generation neural mass model that includes short-term synaptic plasticity can be found in [842] (using a slow–fast approach). As well as incorporating synaptic plasticity, it is also possible to augment neural mass models to incorporate homeostatic plasticity that can dynamically rebalance brain networks, as described in [414, 664, 912]. For the implementation of this in a Wilson–Cowan network model, see [11].

8.1. Consider the Wilson–Cowan model given by (8.5) with $r_E = 0 = r_I$ (i.e., no refractoriness). Show that the locus of saddle-node bifurcations is given in terms of the parameter $E$ by $(p_E, p_I) = (p_E(E), p_I(I_\pm(E)))$, where
\[
I_\pm(E) = \frac{A(E) \pm \sqrt{A^2(E) - 4 A(E) B(E)}}{2 A(E)}, \tag{8.110}
\]
and
\[
A(E) = \beta W_{II}\left(-1 + \beta W_{EE} E(1 - E)\right) - \beta^2 W_{IE} W_{EI} E(1 - E), \qquad B(E) = 1 - \beta W_{EE} E(1 - E). \tag{8.111}
\]

8.2. Consider the piecewise linear Wilson–Cowan model:
\[
\tau_E \frac{\mathrm{d}E}{\mathrm{d}t} = -E + \Theta\left[W_{EE} E - W_{EI} I + p_E\right], \qquad
\tau_I \frac{\mathrm{d}I}{\mathrm{d}t} = -I + \Theta\left[W_{IE} E - W_{II} I + p_I\right], \tag{8.112}
\]

where $\Theta$ is a Heaviside step function.
(i) Rewrite the model in terms of the new variables $(U, V) = (W_{EE} E - W_{EI} I + p_E, W_{IE} E - W_{II} I + p_I)$.
(ii) Show that the switching manifolds naturally divide the plane into four sets denoted by $D_{++} = \{(U, V) \,|\, U > 0, V > 0\}$, $D_{+-} = \{(U, V) \,|\, U > 0, V < 0\}$, $D_{--} = \{(U, V) \,|\, U < 0, V < 0\}$, and $D_{-+} = \{(U, V) \,|\, U < 0, V > 0\}$.
(iii) Show that the $U$-nullclines are given by
\[
V = p_I - \frac{A_{11}(U - p_E)}{A_{12}} + \frac{1}{A_{12}}
\begin{cases}
-W_{EE}/\tau_E + W_{EI}/\tau_I & (U, V) \in D_{++} \\
-W_{EE}/\tau_E & (U, V) \in D_{+-} \\
0 & (U, V) \in D_{--} \\
W_{EI}/\tau_I & (U, V) \in D_{-+}
\end{cases} \tag{8.113}
\]
and the $V$-nullclines are given by
\[
V = p_I - \frac{A_{21}(U - p_E)}{A_{22}} + \frac{1}{A_{22}}
\begin{cases}
-W_{IE}/\tau_E + W_{II}/\tau_I & (U, V) \in D_{++} \\
-W_{IE}/\tau_E & (U, V) \in D_{+-} \\
0 & (U, V) \in D_{--} \\
W_{II}/\tau_I & (U, V) \in D_{-+}
\end{cases} \tag{8.114}
\]

where $A_{ij}$, $i, j = 1, 2$, denote the elements of the matrix
\[
A = -\frac{1}{|W|}
\begin{bmatrix}
W_{EI} W_{IE}/\tau_I - W_{EE} W_{II}/\tau_E & W_{EE} W_{EI}\left(1/\tau_E - 1/\tau_I\right) \\
W_{II} W_{IE}\left(1/\tau_I - 1/\tau_E\right) & W_{IE} W_{EI}/\tau_E - W_{EE} W_{II}/\tau_I
\end{bmatrix}, \tag{8.115}
\]
and $|W| = W_{EI} W_{IE} - W_{EE} W_{II}$.
(iv) Derive the saltation matrices given by (8.14) and use these to determine that the Floquet exponent for a non-sliding periodic orbit visiting each of the four regions $D_{\alpha\beta}$, $\alpha, \beta \in \{+, -\}$, is given by (8.16).
(v) Construct an unstable sliding periodic orbit for the parameter values of Fig. 8.3.

8.3. Consider the Kilpatrick–Bressloff model for binocular rivalry [503] that consists of two neuronal populations, one responding to the left eye and the other to the right eye. Each eye's local and cross-population synapses experience synaptic depression, so that the model is given by
\[
\frac{\mathrm{d}u_L}{\mathrm{d}t} = -u_L + w_l q_L \Theta(u_L - \kappa) + w_c q_R \Theta(u_R - \kappa) + I_L,
\]
\[
\frac{\mathrm{d}u_R}{\mathrm{d}t} = -u_R + w_l q_R \Theta(u_R - \kappa) + w_c q_L \Theta(u_L - \kappa) + I_R,
\]
\[
\frac{\mathrm{d}q_j}{\mathrm{d}t} = \frac{1 - q_j}{\alpha} - \beta q_j \Theta(u_j - \kappa), \qquad j = L, R, \tag{8.116}
\]


\[
\kappa = w_c \left( \frac{1}{1 + \alpha\beta} + \left[ 1 - e^{-\Delta_R/\alpha} + \frac{1 - e^{-(1+\alpha\beta)\Delta_L/\alpha}}{1 + \alpha\beta}\, e^{-\Delta_R/\alpha} \right] \left[ 1 - e^{-(1+\alpha\beta)\Delta_L/\alpha} e^{-\Delta_R/\alpha} \right]^{-1} e^{-(1+\alpha\beta)\Delta_L/\alpha} - \frac{e^{-(1+\alpha\beta)\Delta_L/\alpha}}{1 + \alpha\beta} \right) + I_R, \tag{8.117}
\]
\[
\kappa = w_c \left( \frac{1}{1 + \alpha\beta} + \left[ 1 - e^{-\Delta_L/\alpha} + \frac{1 - e^{-(1+\alpha\beta)\Delta_R/\alpha}}{1 + \alpha\beta}\, e^{-\Delta_L/\alpha} \right] \left[ 1 - e^{-(1+\alpha\beta)\Delta_R/\alpha} e^{-\Delta_L/\alpha} \right]^{-1} e^{-(1+\alpha\beta)\Delta_R/\alpha} - \frac{e^{-(1+\alpha\beta)\Delta_R/\alpha}}{1 + \alpha\beta} \right) + I_L. \tag{8.118}
\]

8.4. Consider the functional-differential equation (Curtu–Ermentrout model)
\[
\frac{\mathrm{d}u}{\mathrm{d}t} = r\left[-u + \left(1 - \int_{t-1}^{t} u(s)\, \mathrm{d}s\right) f(u)\right]. \tag{8.119}
\]
Here $f : \mathbb{R} \to [0, 1]$ is a smooth monotonically increasing function and $r > 0$.
(i) Show that there is at least one fixed point $\bar{u} \in (0, 1/2)$.
(ii) Show that the dynamics linearised around the fixed point ($u(t) = \bar{u} + y(t)$ with $y$ small) satisfies
\[
\frac{\mathrm{d}y}{\mathrm{d}t} = r\left[-A y - b \int_{t-1}^{t} y(s)\, \mathrm{d}s\right], \tag{8.120}
\]
with $A = 1 - (1 - \bar{u}) f'(\bar{u}) < 1$ and $b = f(\bar{u}) \in (0, 1)$.
(iii) Show that solutions of the form $y(t) = e^{\lambda t}$ yield the characteristic equation

λ + Ar + br

1 − e−λ = 0. λ

(8.121)

(iv) Assume −b < A < bM where M = − sin(ξ )/ξ with ξ being the smallest root of tan(ξ ) = ξ greater than π . Then show that the fixed point becomes unstable at sin ωc ωc2 A 1 =− , . (8.122) r= b 1 − cos ωc b ωc (v) By introducing ε = 1/r and z(t) =

t

u(s)ds,

(8.123)

t−1

show that the model is equivalent to the two-dimensional delay differential equation

ε

du = −u + (1 − z) f (u), dt

dz = u(t) − u(t − 1). dt

(8.124)

(vi) Consider the limit ε → 0 and demonstrate how to construct a singular periodic orbit when f (u) has a sigmoidal shape.


(vii) Show that the period, $\Delta$, of such a singular orbit must satisfy $\Delta < 2$.
(viii) Develop a numerical code to simulate this model with $f(u) = 1/(1+\exp(-8(u-1/3)))$, with $r = 300$, and plot the slow manifold and a typical trajectory.

8.5. Consider the Liley model of Sec. 8.5 with firing rate function

$$f_a(z) = \frac{1}{1 + e^{-\beta_a(z-\theta_a)}}, \qquad \beta_a > 0. \quad (8.125)$$

(i) Show that

$$f_a' = \beta_a f_a(1 - f_a). \quad (8.126)$$
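Part (viii) of Prob. 8.4 asks for a numerical code. A minimal Euler sketch (the step size, run time, and constant initial history are pragmatic assumptions) stores $u$ over one delay interval in a ring buffer and updates $z$ through $\mathrm{d}z/\mathrm{d}t = u(t) - u(t-1)$:

```python
import numpy as np

r = 300.0
f = lambda u: 1.0 / (1.0 + np.exp(-8.0 * (u - 1.0 / 3.0)))

dt = 1e-4
N = int(round(1.0 / dt))         # samples per delay interval
steps = int(round(8.0 / dt))     # integrate up to t = 8

hist = np.full(N, 0.2)           # constant initial history on [-1, 0]
u = 0.2
z = hist.sum() * dt              # z(0) = integral of u over the history
us = np.empty(steps)
for n in range(steps):
    u_delay = hist[n % N]        # approximately u(t - 1)
    u_new = u + dt * r * (-u + (1.0 - z) * f(u))
    z += dt * (u_new - u_delay)  # dz/dt = u(t) - u(t - 1)
    hist[n % N] = u_new          # overwrite the oldest sample
    u = u_new
    us[n] = u

late = us[steps // 2:]           # discard the transient
amplitude = late.max() - late.min()
```

With $r = 300$ the fixed point near $\bar{u} \approx 0.33$ is unstable and a relaxation oscillation (of period less than 2, cf. part (vii)) emerges; plotting $(z, u)$ over the late window reveals the slow manifold.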

(ii) Show that the steady state is the solution to a pair of simultaneous equations for $(E^{ss}, I^{ss})$:

$$0 = a^R - a^{ss} + \sum_b W^{ab}\left(E^{ab} - a^{ss}\right)\left(f_b(b^{ss}) + P^{ab}\right), \qquad a, b \in \{E, I\}, \quad (8.127)$$

with the fixed point for $g^{ab}$ given by $g^{ab}_{ss} = f_b(b^{ss}) + P^{ab}$.
(iii) Linearise around the steady state and show that there are solutions of the form $e^{\lambda t}$ with a characteristic equation $\det \mathcal{E}(\lambda) = 0$, where the entries of the $2\times 2$ matrix $\mathcal{E}(\lambda)$ are

$$[\mathcal{E}(\lambda)]_{ab} = (\lambda\tau_a + \kappa_a)\,\delta_{ab} - \frac{\widetilde{W}^{ab}}{(1+\lambda/\alpha_{ab})^2}. \quad (8.128)$$

Here $\widetilde{W}^{ab} = W^{ab}(E^{ab} - a^{ss})f_b'(b^{ss})$ and $\kappa_a = 1 + \sum_b W^{ab} f_b(b^{ss})$.
(iv) Consider two new scale parameters $(r, k)$ according to $W^{aI} \to rW^{aI}$ and $W^{II} \to kW^{II}$, and construct the saddle-node and Hopf bifurcation points for the stationary state in the $(k, r)$ plane [Hint: use the approach of Sec. 8.3 and parametrically plot $(k, r) = (k(E^{ss}), r(E^{ss}))$ subject to the constraint $\mathrm{Re}\,\lambda = 0$].

8.6. Consider a real $2\pi$-periodic function $\rho(\theta)$.
(i) Show that the Ott–Antonsen ansatz means that $\rho(\theta)$ can be written in the form

$$\rho(\theta) = \frac{1}{2\pi}\,\frac{1 - |a|^2}{|e^{\mathrm{i}\theta} - a^*|^2}, \quad (8.129)$$

where $\rho(\theta) = (2\pi)^{-1}\sum_{n=-\infty}^{\infty} \rho_n e^{\mathrm{i}n\theta}$ with $\rho_n = a^n$ for $n \geq 0$ (and $\rho_{-n} = \rho_n^*$ since $\rho$ is real).
(ii) Construct $\rho(\theta)$ for the limiting cases $|a| \to 0$ and $|a| \to 1$, and sketch the graph of $\rho(\theta)$ for $|a| = 1/2$.
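Identity (8.129) is a Poisson kernel in disguise; it can be verified numerically against a truncated Fourier series (the value of $a$ below is an arbitrary point in the unit disc).

```python
import numpy as np

a = 0.3 + 0.2j                                  # any |a| < 1
theta = np.linspace(0.0, 2.0 * np.pi, 9)

# closed form (8.129)
rho = (1.0 - abs(a) ** 2) / (2.0 * np.pi * np.abs(np.exp(1j * theta) - np.conj(a)) ** 2)

# truncated series with rho_n = a^n (n >= 0) and rho_{-n} = conj(rho_n)
series = np.ones_like(theta, dtype=complex)
for n in range(1, 60):
    series += a ** n * np.exp(1j * n * theta) + np.conj(a) ** n * np.exp(-1j * n * theta)
series = series.real / (2.0 * np.pi)

err = np.max(np.abs(rho - series))
```

Since the coefficients decay like $|a|^n$, sixty terms are ample for this choice of $a$.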

8.7. Consider the mean field QIF model discussed in Sec. 8.9.1.
(i) Show that the distribution (8.76) may be written as

$$\rho(v|\mu, t) = \frac{1}{2\pi \mathrm{i}}\left[\frac{1}{v - v_+} - \frac{1}{v - v_-}\right], \qquad v_\pm = y(\mu, t) \pm \mathrm{i}\,x(\mu, t). \quad (8.130)$$

(ii) Using contour integration in the lower and upper half-plane, show that

$$\pm\frac{1}{2\pi \mathrm{i}} \int_{-\infty}^{\infty} \frac{v}{v - v_\pm}\,\mathrm{d}v = \frac{1}{2}\,y(\mu, t). \quad (8.131)$$

Hence, or otherwise, show that

$$\mathrm{PV}\int_{-\infty}^{\infty} v\,\rho(v|\mu, t)\,\mathrm{d}v = y(\mu, t). \quad (8.132)$$
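The principal value result (8.132) can be sanity-checked by integrating the Lorentzian density over a window placed symmetrically about $v = y$, so that the odd part cancels; the values of $x$ and $y$ below are arbitrary.

```python
import numpy as np

x, y = 0.5, 1.3                 # half-width and centre of the Lorentzian
L = 2000.0                      # symmetric truncation of the PV integral
v = np.linspace(y - L, y + L, 2_000_001)
dv = v[1] - v[0]
rho = x / (np.pi * ((v - y) ** 2 + x ** 2))   # real form of (8.130)

mass = rho.sum() * dv           # should be close to 1
pv = (v * rho).sum() * dv       # should be close to y
```

The truncation error decays only like $x/L$, which is why a large window is needed despite the symmetric cancellation.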

(iii) Apply the conformal transformation (8.100) to (8.83) and show that this recovers equation (8.70).

8.8. Consider the Winfree network model given by

$$\frac{\mathrm{d}\theta_i}{\mathrm{d}t} = \omega_i + \varepsilon\, Q(\theta_i)\,\frac{1}{N}\sum_{j=1}^{N} P(\theta_j), \qquad i = 1, \ldots, N, \quad (8.133)$$

where

$$Q(\theta) = 1 - \cos\theta, \qquad P(\theta) = 2\pi\sum_{n \in \mathbb{Z}} \delta(\theta - 2\pi n), \quad (8.134)$$

and $\omega_i$ is drawn from a Lorentzian $g(\omega)$ with

$$g(\omega) = \frac{1}{\pi}\,\frac{\Delta}{(\omega - \omega_0)^2 + \Delta^2}. \quad (8.135)$$

(i) Use the Ott–Antonsen reduction to show that in the large $N$ limit, the complex Kuramoto order parameter $Z = N^{-1}\sum_{j=1}^{N} e^{\mathrm{i}\theta_j}$ evolves according to

$$\frac{\mathrm{d}Z}{\mathrm{d}t} = -\mathrm{i}(\omega_0 - \mathrm{i}\Delta + \varepsilon h)Z + \mathrm{i}\varepsilon\frac{h}{2}(1 + Z^2), \quad (8.136)$$

where $h = h(Z)$ is given by

$$h(Z) = \mathrm{Re}\left[\frac{1+Z}{1-Z}\right]. \quad (8.137)$$

(ii) Show that by writing $Z = Re^{-\mathrm{i}\Psi}$, the Kuramoto order parameter dynamics can be written in plane polar coordinates as

$$\frac{\mathrm{d}R}{\mathrm{d}t} = -\Delta R - \varepsilon\frac{h}{2}(1 - R^2)\sin\Psi, \quad (8.138)$$
$$\frac{\mathrm{d}\Psi}{\mathrm{d}t} = \omega_0 + \varepsilon h\left(1 - \frac{1+R^2}{2R}\cos\Psi\right), \quad (8.139)$$

with $h = h(R, \Psi)$ given by

$$h(R, \Psi) = \frac{1 - R^2}{1 - 2R\cos\Psi + R^2}, \qquad 0 \leq R < 1. \quad (8.140)$$

(iii) Compare simulations of the Winfree network for fixed $N$ with those of the Ott–Antonsen reduction.
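Expressions (8.137) and (8.140) should agree once $Z = Re^{-\mathrm{i}\Psi}$ is substituted; a quick numerical cross-check:

```python
import numpy as np

Rg, Pg = np.meshgrid(np.linspace(0.05, 0.95, 10),
                     np.linspace(0.0, 2.0 * np.pi, 13))

Z = Rg * np.exp(-1j * Pg)
h_complex = np.real((1.0 + Z) / (1.0 - Z))                          # (8.137)
h_polar = (1.0 - Rg**2) / (1.0 - 2.0 * Rg * np.cos(Pg) + Rg**2)     # (8.140)

err = np.max(np.abs(h_complex - h_polar))
```

Both expressions are the Poisson kernel $(1-|Z|^2)/|1-Z|^2$ evaluated on the grid, so the discrepancy is pure floating-point noise.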

8.9. Consider the extension of (8.84)–(8.85) to describe interacting excitatory and inhibitory sub-populations of next-generation neural mass models, with both reciprocal and self-connections. With the introduction of indices $E$ and $I$ and the introduction of four distinct synaptic reversal potentials, with $v_{\mathrm{syn}}^{a,E} > 0$ and $v_{\mathrm{syn}}^{a,I} < 0$ for $a \in \{E, I\}$, and for $\alpha$-function synapses, this gives rise to $M = 12$ first-order differential equations of an excitatory–inhibitory population in the form

$$\tau_a \dot{R}_a = \frac{\Delta_a}{\pi\tau_a} + 2R_a V_a - R_a \sum_b g_{ab}, \qquad a, b \in \{E, I\}, \quad (8.141)$$
$$\tau_a \dot{V}_a = \mu_0^a + V_a^2 - \pi^2\tau_a^2 R_a^2 + \sum_b g_{ab}\left(v_{\mathrm{syn}}^{ab} - V_a\right), \quad (8.142)$$
$$\dot{g}_{ab} = \alpha_{ab}(s_{ab} - g_{ab}), \qquad \dot{s}_{ab} = \alpha_{ab}(W_{ab}R_b - s_{ab}). \quad (8.143)$$

Here, the local interactions are described by $W_{ab} \geq 0$. A network of $N$ such nodes, each built from an $(E, I)$ pair, can be constructed with a further index $i = 1, \ldots, N$, with the inclusion of a term $\sum_{j=1}^{N} g_{ij}(t)(v_{\mathrm{syn}}^{ij} - V_E^i)$ in the right-hand side of the equation for $\dot{V}_E$ and a term $-R_E\sum_{j=1}^{N} g_{ij}$ in the right-hand side of $\dot{R}_E$ at each node $i$, where

$$Q_{ij}\, g_{ij}(t) = w_{ij}\, R_E^j(t - T_{ij}), \qquad Q_{ij} = \left(1 + \frac{1}{\alpha_{ij}}\frac{\mathrm{d}}{\mathrm{d}t}\right)^2. \quad (8.144)$$

Here, $T_{ij} \geq 0$ is the axonal delay between nodes $i$ and $j$ and $w_{ij}$ are the components of the structural connectivity matrix, and long-range interactions are assumed to be between excitatory sub-populations only.
(i) Show that the full network is described by $(M + 2N)N$ delay differential equations.
(ii) Assume that the connectivity matrix is row-sum normalised, that $v_{\mathrm{syn}}^{ij} \gg V_E^i$, and absorb a factor of $v_{\mathrm{syn}}^{ij}$ within $w_{ij}$. Show that the network equations can be written in the form

$$\dot{x}_i = f(x_i) + \sum_{j=1}^{N} w_{ij}\,\eta_{ij} * H(x_j(t - T_{ij})), \qquad i = 1, \ldots, N, \quad (8.145)$$

for some $f \in \mathbb{R}^M$, where $x_i = (R_E, R_I, V_E, V_I, g_{EE}, g_{EI}, g_{IE}, g_{II}, s_{EE}, s_{EI}, s_{IE}, s_{II})$, $H(x) = (0, 0, R_E/\tau_E, 0, 0, 0, 0, 0, 0, 0, 0, 0)$, and $*$ represents temporal convolution.
(iii) Show that the network steady state is given by $x_i = \bar{x}$ for all $i$ with $\bar{x} = (\bar{R}_E, \bar{R}_I, \bar{V}_E, \bar{V}_I, \bar{g}_{EE}, \bar{g}_{EI}, \bar{g}_{IE}, \bar{g}_{II}, \bar{s}_{EE}, \bar{s}_{EI}, \bar{s}_{IE}, \bar{s}_{II})$, where $\bar{g}_{ab} = W_{ab}\bar{R}_b = \bar{s}_{ab}$ and $(\bar{R}_a, \bar{V}_a)$ are given by the simultaneous solution of


$$0 = \frac{\Delta_a}{\pi\tau_a} + 2\bar{R}_a\bar{V}_a - \bar{R}_a\sum_b W_{ab}\bar{R}_b, \quad (8.146)$$
$$0 = \mu_0^a + \bar{V}_a^2 - \pi^2\tau_a^2\bar{R}_a^2 + \sum_b W_{ab}\bar{R}_b\left(v_{\mathrm{syn}}^{ab} - \bar{V}_a\right) + \delta_{a,E}\,\bar{R}_a. \quad (8.147)$$

λ u i = D f (x)u i +

N

∑ wij ηij (λ )e−λ T

ij

DH (x)u j ,

(8.148)

j=1

ij (λ ) = (1 + λ /αij )−2 , and D f and DH are the Jacobians of f and where η H , respectively. (v) Introduce U = (u 1 , . . . , u N ) and show that (λ ) ⊗ DH (x) − λ I N ⊗ I M U = 0, (8.149) I N ⊗ D f (x) + w ij (λ )e−λ Tij . (λ ) has components w (λ )ij = wi j η where w (vi) Show that the eigenvalues of the linearised network system are given by the set of spectral equations det[λ Im − D f (x) − γμ (λ )DH (x)] = 0, μ μ

μ = 1, . . . , N ,

(8.150)

(λ )ij z i v j , and vμ and z μ are the right and left eigenwhere γμ (λ ) = ∑i,N j=1 w vectors, respectively, of the structural connectivity matrix w.
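The factor $\widetilde{\eta}_{ij}(\lambda) = (1 + \lambda/\alpha_{ij})^{-2}$ in (8.148) is the Laplace transform of the $\alpha$-function Green's function of $Q_{ij}$, namely $\eta(t) = \alpha_{ij}^2\, t\, e^{-\alpha_{ij} t}$ for $t \geq 0$. A quadrature sketch confirming this (the values of $\alpha$ and $\lambda$ are arbitrary):

```python
import numpy as np

alpha, lam = 2.0, 0.7 + 0.3j

t = np.linspace(0.0, 40.0, 400_001)
dt = t[1] - t[0]
eta = alpha ** 2 * t * np.exp(-alpha * t)        # Green's function of Q = (1 + d/dt / alpha)^2

laplace = (eta * np.exp(-lam * t)).sum() * dt    # int_0^infty eta(t) e^{-lam t} dt
closed = (1.0 + lam / alpha) ** -2
err = abs(laplace - closed)
```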

Chapter 9

Firing rate tissue models

9.1 Introduction

Despite the success of population models, such as the neural mass models described in Chap. 8, in describing brain rhythms that can be measured, e.g., with localised electroencephalogram (EEG) scalp electrodes, they do not provide large-scale models of brain activity on their own. Rather, they can be seen as building blocks for this larger endeavour. Through their axons and dendrites, which may span many hundreds of microns, neural populations are able to extend their influence over many times the scale of the cell soma. For directly adjacent neurons, connection probabilities are known to range from 50 to 80%, falling to 0–15% for neurons 500 μm apart [413]. Moreover, axons can extend over the scale of the whole brain. This allows not only for local interactions, but also long-range ones that often span different brain areas and hemispheres. Physical connections between brain regions can be found through noninvasive magnetic resonance imaging (MRI) procedures by using diffusion tensor imaging (DTI) to track the locations of myelinated fibres using a process called white matter tractography. An example of the superior longitudinal fasciculus fibre tract constructed this way is shown in Fig. 9.1. This is one of the major association tracts via which posterior parietal cortical areas are linked with different frontal cortical regions, and illustrates the complexity of possible interactions between brain regions. Loosely speaking, the brain may be thought of as being comprised of the cortex and sub-cortical regions. The cortex can then be subdivided into six layers, labelled inward from the cortical surface as layers I–VI, as illustrated in Fig. 9.2. Signals relayed through the thalamus often project into the granular layer, layer IV, whereupon these signals are subsequently transmitted upwards or downwards to the supragranular layers (layers I–III) or infragranular layers (layers V and VI), respectively [644].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. Coombes and K. C. A. Wedgwood, Neurodynamics, Texts in Applied Mathematics 75, https://doi.org/10.1007/978-3-031-21916-0_9

Fig. 9.1 A sagittal perspective of the superior longitudinal fasciculus (comprising three specific fibre sub-bundles) covering a distance of roughly 10 cm. This is one of the major association tracts via which posterior parietal cortical areas are linked with different frontal cortical regions. Tractography streamline coordinates were provided by Dr S. Sotiropoulos, University of Nottingham, following the protocols defined in [921] using diffusion MRI data from the Human Connectome Project [827] and probabilistic tractography in FSL (a comprehensive library of analysis tools for functional MRI, MRI and DTI brain imaging data) [822]. The mean fibre orientation per voxel used for a particular tract was used to visualise streamlines.

Within layers, connections are dense, whereas direct interlayer connections are much sparser. In addition, pyramidal cells, which comprise the majority of some layers, share a similar orientation, perpendicular to the cortical surface. From an experimental point of view, one can take advantage of the layered structure by taking thin slices of brain tissue, preserving the dense within-slice connections, losing only the sparse inter-slice ones. Following this, the response properties of the slice can be probed using multi-electrode arrays, voltage-sensitive dyes, plasmon imaging or traditional and contemporary microscopy. The structure of the neocortex is known to have a columnar organisation, built from macrocolumns of $\sim 10^6$ neurons with similar response properties, and these tend to be vertically aligned into columnar arrangements of roughly 1–3 mm in diameter [644]. Intracortical connections can range over 1–15 cm, allowing communication between distal cortical areas. Thus, it is natural to view the human cortex as a dense reciprocally interconnected network of roughly $10^{10}$ corticocortical axonal pathways that make connections within the roughly 3 mm thick outer layer of the cerebrum [389]. Given the shallow depth of this wrinkled and folded cortical structure (with outward folds known as gyri and inward pits known as sulci) and its high neuronal density, it is common from a modelling perspective to use a neural field description. This is essentially a coarse-grained description of neural tissue that describes the evolution of neuronal activity on a two-dimensional surface, although theoretical analyses of such models are often carried out considering just one spatial dimension for simplicity. These models can incorporate large-scale anatomical knowledge, including the fact that most long-range synaptic interactions are excitatory, with excitatory pyramidal cells sending their myelinated axons to other parts


Fig. 9.2 The layered structure of cortex. Different types of pyramidal neuron are found in each of the six layers and share a similar orientation. Afferent inputs from other regions of cortex typically arrive to layers II, IV, and V, whilst afferent inputs from the thalamus (the sensory gateway to the cortex) arrive in layer IV. Afferent inputs from the brainstem typically arrive in all layers except layer I.

of the cortex. Inhibitory interactions, on the other hand, tend to be much more short range. For excitatory connections, it is known that the weight of connection between two areas decays exponentially with their wiring distance, with a characteristic distance of ∼ 11 mm [920]. It is the combination of local synaptic activity (seen in the rise and decay of post-synaptic potentials with timescales from 1 to 100 ms) and non-local delayed interactions within the cortex (of up to 30 ms in humans) that is believed to be the major source of large-scale EEG and magnetoencephalogram (MEG) signals recorded at (or near) the scalp. The latter delays vary according to the path of the axon through the brain as well as the speed of the action potential along the fibre. This speed can range from around 0.5 m/s in unmyelinated axons to 150 m/s in myelinated axons (in the peripheral nervous system). Typical values for corticocortical axonal speeds in humans are distributed, and appear to peak in the 5 − 10 m/s range [671] though speeds in callosal fibres (connecting the two hemispheres of the brain) can range from 7 to 19 m/s [89].


9.2 Neural field models

Since their inception in the 1970s by Wilson and Cowan [944, 945], Nunez [668], and Amari [24], neural field models have seen use in a variety of different contexts. These have included: EEG/MEG rhythms [475, 477, 568, 669], geometric visual hallucinations [101, 276, 295, 848], mechanisms for short-term memory [552, 553], feature selectivity in the visual cortex [64], motion perception [333], binocular rivalry [504], anaesthesia [569], autonomous robotic behaviour [269], embodied cognition [788], and dynamic causal modelling [228]. A relatively recent overview of neural fields, covering theory, applications, and associated inverse problems [729], can be found in the book [186] and other reviews can be found in [104, 180, 184, 273]. All of the models discussed in this chapter are natural extensions of the neural mass model exemplified by equation (8.3) in Chap. 8, and hence are also phenomenological. In their simplest form, they are cast as integro-differential equations of the type $Qu = \psi$ (cf. equation (8.3)), albeit with $u = u(\mathbf{r}, t)$ representing some notion of neural activity at a position $\mathbf{r} \in \Omega \subseteq \mathbb{R}^2$ in the cortical surface. The input $\psi$ is now non-local with a typical integral representation as

$$\psi(\mathbf{r}, t) = \int_\Omega \mathrm{d}\mathbf{r}'\int_0^\infty \mathrm{d}s\; \rho(s)\, w(\mathbf{r}, \mathbf{r}')\, f \circ u(\mathbf{r}', t - \tau(\mathbf{r}, \mathbf{r}'; s)), \quad (9.1)$$

where the operation $\circ$ denotes function composition ($f \circ u(\mathbf{r}, t) = f(u(\mathbf{r}, t))$). Here $w(\mathbf{r}, \mathbf{r}')$ describes the anatomical interaction (wiring) between points at $\mathbf{r}$ and $\mathbf{r}'$, $\tau(\mathbf{r}, \mathbf{r}'; s)$ the associated time delay arising from axonal communication mediated by action potentials of speed $s$, and $\rho(s)$ the distribution of axonal speeds. The forms for $Q$ (describing synaptic processing) and $f$ (the function that converts activity to firing rate) are the same as those used in neural mass models (see Chap. 8). To facilitate mathematical analysis, the anatomical interaction is often taken to be a function of distance alone so that $w(\mathbf{r}, \mathbf{r}') = w(|\mathbf{r} - \mathbf{r}'|)$, and similarly the axonal delay term is often simplified to $\tau(\mathbf{r}, \mathbf{r}'; s) = |\mathbf{r} - \mathbf{r}'|/s$ for some axonal speed $s$. Moreover, although the distribution of propagation speeds in myelinated corticocortical fibres can be fit to a gamma-distribution [669], it is often simplified to that of a Dirac delta distribution that recognises only a representative speed $v$ with $\rho(s) = \delta(s - v)$. We shall only consider these idealisations, though it is well to mention that perturbation theory can be used to treat weakly heterogeneous systems [101, 194, 195, 786], and that the effect of distributed axonal delays can be treated in a relatively straightforward manner [43, 447]. Due to the high metabolic demand of spiking activity, neural tissue cannot fire at high rates for long periods of time. To account for this effect, it is common to augment neural field models with a form of threshold accommodation or spike-frequency adaptation. The former can be modelled by endowing the threshold in a sigmoidal choice for the firing rate function $f$ with a slow dynamics that elevates the threshold with increasing levels of activity (and allows it to decay back to some base level otherwise) [198]. Spike frequency adaptation is more often modelled with the use of a negative feedback term such that $\psi \to \psi - ga$ for some strength $g > 0$


that couples to a field $a$ with dynamics of the simple linear form $\tau_a a_t = u - a$ for some slow timescale $\tau_a$. Other forms of adaptation, such as short-term plasticity, have also been incorporated into neural field models by several authors, mainly through a simple facilitation/depression description as in [502, 504, 628]. In this chapter, we will cover some of the more well-known results about waves, bumps, and patterns that have been developed for firing rate tissue models of cortex. Rigorous results are relatively few in number, the exception perhaps being the work of Faugeras et al. for the existence and uniqueness of solutions (in cases with no delay) [293]. Here, we shall focus on the explicit results that can be obtained under the idealisation that the sigmoidal firing rate function is replaced by a Heaviside, allowing the explicit construction of travelling fronts and pulses, as well as localised states (bumps and spots), and globally periodic patterns that emerge beyond a Turing instability of a homogeneous steady state (for a smooth firing rate). We begin with a discussion of the spatially extended Wilson–Cowan model.

9.3 The continuum Wilson–Cowan model

The spatially continuous version of the Wilson–Cowan model is obtained by generalising the population model (8.5) from a point model to a field model. A similar generalisation of the next-generation mass model described in Sec. 8.9 has recently been developed and has been shown to be well suited for analysing problems in which spiking behaviour (as opposed to firing rate behaviour) is important [134, 137, 138]. For simplicity, refractory terms are dropped and the model is posed on the infinite plane. Denoting the activity of the excitatory and inhibitory populations by $u_e$ and $u_i$, respectively, the model is written as

$$Q_e u_e(\mathbf{r}, t) = \int_{\mathbb{R}^2} w_{ee}(\mathbf{r}-\mathbf{r}')\, f_e \circ u_e(\mathbf{r}', t - |\mathbf{r}-\mathbf{r}'|/v_{ee})\,\mathrm{d}\mathbf{r}' + \int_{\mathbb{R}^2} w_{ei}(\mathbf{r}-\mathbf{r}')\, f_i \circ u_i(\mathbf{r}', t - |\mathbf{r}-\mathbf{r}'|/v_{ei})\,\mathrm{d}\mathbf{r}', \quad (9.2)$$
$$Q_i u_i(\mathbf{r}, t) = \int_{\mathbb{R}^2} w_{ie}(\mathbf{r}-\mathbf{r}')\, f_e \circ u_e(\mathbf{r}', t - |\mathbf{r}-\mathbf{r}'|/v_{ie})\,\mathrm{d}\mathbf{r}' + \int_{\mathbb{R}^2} w_{ii}(\mathbf{r}-\mathbf{r}')\, f_i \circ u_i(\mathbf{r}', t - |\mathbf{r}-\mathbf{r}'|/v_{ii})\,\mathrm{d}\mathbf{r}', \quad (9.3)$$

where $w_{ab}$ describes the connection strength from population $b \in \{e, i\}$ to population $a \in \{e, i\}$. Here, $Q_a = (1 + \tau_a \partial/\partial t)^2$ is a second-order differential operator; the more traditional version of the Wilson–Cowan model is recovered with the first-order choice $Q_a = 1 + \tau_a \partial/\partial t$. The firing rate functions $f_a$ can differ between the two populations, and here are chosen to be $f_a(u) = (1 + \tanh(\beta_a(u - h_a)))/2$. One can also allow for different axonal speeds $v_{ab}$ connecting distinct populations. However, given that long-range connections are predominantly excitatory and inhibitory connections are short range, a more parsimonious model would be to drop axonal delays on all connection paths except those between excitatory populations. For exponential or Gaussian choices of the connectivity functions $w_{ab}$, the Wilson–Cowan model is known to support a wide variety of solutions, including spatially and temporally periodic patterns (beyond a Turing instability), localised regions of activity (bumps and multi-bumps) and travelling waves (fronts, pulses, target waves and spirals), as reviewed in [184]. The precise form of the decaying connectivity has very little effect on the qualitative nature of solutions, and for later convenience, it is useful to consider the following exponential weight function for the planar model:

$$w_{ab}(r) = \frac{w_{ab}^0}{2\pi\sigma_{ab}^2}\, e^{-r/\sigma_{ab}}. \quad (9.4)$$

We begin by looking at the linearised response of the model with this natural choice of connectivity.

9.3.1 Power spectrum

The power spectrum, as recorded in EEG, represents a statistical measure of a system's linear response to spatiotemporal input and provides a measure of how prominent oscillations in certain frequency bands are in a given recording. A simple dimensional argument shows that the Wilson–Cowan model with axonal delays (9.2)–(9.3) has a set of natural frequencies that include $\omega_{ab} = v_{ab}/\sigma_{ab}$. It is thus natural to ask how these contribute to resonant peaks in the power spectrum. The power of a signal can be computed as the Fourier transform of its autocorrelation function. This in turn can be computed by examining the Fourier transform of the linear response to fluctuations. The resting state, in which tasks are not being performed, is often considered to be driven by fluctuations that follow a Gaussian process. With this in mind, one may construct the linear response to such stochastic input by linearising around a steady state. The model (9.2)–(9.3) can be written in a more compact form as

$$Q_a u_a(\mathbf{r}, t) = \sum_{b \in \{e,i\}} \psi_{ab}(\mathbf{r}, t), \qquad a \in \{e, i\}, \quad (9.5)$$

where

$$\psi_{ab}(\mathbf{r}, t) = \int_{\mathbb{R}^2} w_{ab}(\mathbf{r}-\mathbf{r}')\, f_b \circ u_b(\mathbf{r}', t - |\mathbf{r}-\mathbf{r}'|/v_{ab})\,\mathrm{d}\mathbf{r}'. \quad (9.6)$$

The function $\psi_{ab}(\mathbf{r}, t)$ can be written as

$$\psi_{ab}(\mathbf{r}, t) = \int_{-\infty}^{\infty}\int_{\mathbb{R}^2} G_{ab}(\mathbf{r}-\mathbf{r}', t-t')\,\rho_b(\mathbf{r}', t')\,\mathrm{d}\mathbf{r}'\,\mathrm{d}t', \quad (9.7)$$

where

$$G_{ab}(\mathbf{r}, t) = \delta(t - |\mathbf{r}|/v_{ab})\,w_{ab}(\mathbf{r}), \qquad \rho_b(\mathbf{r}, t) = f_b \circ u_b(\mathbf{r}, t). \quad (9.8)$$


The driven model takes the form

$$Q_a u_a(\mathbf{r}, t) = \sum_{b \in \{e,i\}} \psi_{ab}(\mathbf{r}, t) + \xi_a(t), \qquad a \in \{e, i\}, \quad (9.9)$$

with $\psi_{ab}$ as in (9.7) and where $\xi_a$ are independent Gaussian zero-mean white-noise processes. Assume in the absence of drive that there is a steady state $(\bar{u}_e, \bar{u}_i)$ satisfying

$$\bar{u}_a = \sum_{b \in \{e,i\}} f_b(\bar{u}_b) \int_{\mathbb{R}^2} w_{ab}(\mathbf{r}')\,\mathrm{d}\mathbf{r}'. \quad (9.10)$$

One can linearise around this steady state by considering perturbations of the form $u_a(\mathbf{r}, t) = \bar{u}_a + \varepsilon v_a(\mathbf{r}, t)$, where $\varepsilon \ll 1$, and substitute this into (9.2)–(9.3) to obtain

$$Q_a v_a(\mathbf{r}, t) = \sum_{b \in \{e,i\}} \gamma_b \int_{-\infty}^{\infty}\int_{\mathbb{R}^2} G_{ab}(\mathbf{r}-\mathbf{r}', t-t')\, v_b(\mathbf{r}', t')\,\mathrm{d}\mathbf{r}'\,\mathrm{d}t' + \xi_a(t), \quad (9.11)$$

where $\gamma_a = f_a'(\bar{u}_a)$. It is now convenient to introduce a three-dimensional Fourier transform with spectral parameters $(\mathbf{k}, \omega)$, $\mathbf{k} \in \mathbb{R}^2$, $\omega \in \mathbb{R}$, according to

$$v(\mathbf{r}, t) = \frac{1}{(2\pi)^3}\int_{-\infty}^{\infty}\int_{\mathbb{R}^2} e^{\mathrm{i}(\mathbf{k}\cdot\mathbf{r} + \omega t)}\, \widetilde{v}(\mathbf{k}, \omega)\,\mathrm{d}\mathbf{k}\,\mathrm{d}\omega. \quad (9.12)$$

Taking the Fourier transform of (9.11) yields

$$(1 + \mathrm{i}\omega\tau_a)^2\, \widetilde{v}_a = \sum_{b \in \{e,i\}} \gamma_b\, \widetilde{G}_{ab}\, \widetilde{v}_b + C_a, \quad (9.13)$$

where $C_a$ is the Fourier transform of the noisy input $\xi_a$. Since the Fourier transform of a white noise process is flat, the $C_a$ are constant real values. One can rewrite (9.13) in matrix form as

$$\mathcal{M}\,\widetilde{v} = c, \quad (9.14)$$

where $\widetilde{v} = (\widetilde{v}_e, \widetilde{v}_i)$, $c = (C_e, C_i)$, and

$$\mathcal{M}(\mathbf{k}, \omega) = \begin{pmatrix} (\mathrm{i}\omega\tau_e + 1)^2 - \gamma_e\widetilde{G}_{ee}(\mathbf{k},\omega) & -\gamma_i\widetilde{G}_{ei}(\mathbf{k},\omega) \\ -\gamma_e\widetilde{G}_{ie}(\mathbf{k},\omega) & (\mathrm{i}\omega\tau_i + 1)^2 - \gamma_i\widetilde{G}_{ii}(\mathbf{k},\omega) \end{pmatrix}. \quad (9.15)$$

If one makes the choice (9.4), then a closed-form expression for $\widetilde{v}$ can be obtained using the result (see Box 9.1 on page 388) that

$$\widetilde{G}_{ab}(\mathbf{k}, \omega) = \frac{w_{ab}^0}{\sigma_{ab}^2}\,\frac{A_{ab}(\omega)}{\left(A_{ab}(\omega)^2 + k^2\right)^{3/2}}, \qquad k = |\mathbf{k}|, \quad (9.16)$$

where $A_{ab}(\omega) = \sigma_{ab}^{-1} + \mathrm{i}\omega/v_{ab}$.


Box 9.2 style aside: Box 9.1: Calculation of $\widetilde{G}_{ab}(\mathbf{k}, \omega)$

Consider the choice $w_{ab}(r) = w_{ab}^0 \exp(-r/\sigma_{ab})/(2\pi\sigma_{ab}^2)$. In this case,

$$\widetilde{G}_{ab}(\mathbf{k}, \omega) = \int_{-\infty}^{\infty}\mathrm{d}t\int_{\mathbb{R}^2}\mathrm{d}\mathbf{r}\; e^{-\mathrm{i}(\mathbf{k}\cdot\mathbf{r} + \omega t)}\, G_{ab}(\mathbf{r}, t)$$
$$= \frac{w_{ab}^0}{2\pi\sigma_{ab}^2}\int_{-\infty}^{\infty}\mathrm{d}t\int_{\mathbb{R}^2}\mathrm{d}\mathbf{r}\; e^{-|\mathbf{r}|/\sigma_{ab}}\,\delta(t - |\mathbf{r}|/v_{ab})\, e^{-\mathrm{i}(\mathbf{k}\cdot\mathbf{r} + \omega t)}$$
$$= \frac{w_{ab}^0}{2\pi\sigma_{ab}^2}\int_0^\infty\int_0^{2\pi} e^{-\mathrm{i}kr\cos\theta}\, e^{-A_{ab}r}\, r\,\mathrm{d}\theta\,\mathrm{d}r$$
$$= -\frac{w_{ab}^0}{2\pi\sigma_{ab}^2}\,\frac{\partial}{\partial A_{ab}}\int_0^{2\pi}\frac{\mathrm{d}\theta}{A_{ab} + \mathrm{i}k\cos\theta}, \qquad A_{ab}(\omega) = \frac{1}{\sigma_{ab}} + \mathrm{i}\frac{\omega}{v_{ab}}. \quad (9.17)$$

This may be evaluated using a (circular) contour integral in the complex plane (see Box 4.3 on page 134) to give

$$\widetilde{G}_{ab}(\mathbf{k}, \omega) = \frac{w_{ab}^0}{\sigma_{ab}^2}\,\frac{A_{ab}(\omega)}{\left(A_{ab}(\omega)^2 + k^2\right)^{3/2}}. \quad (9.18)$$
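The closed form (9.18) can be cross-checked numerically. After the angular integration in (9.17), the transform reduces to the Hankel-type integral $\widetilde{G}_{ab} = (w_{ab}^0/\sigma_{ab}^2)\int_0^\infty e^{-A_{ab}r}J_0(kr)\,r\,\mathrm{d}r$, which is evaluated below by quadrature (the parameter values are arbitrary; scipy supplies $J_0$).

```python
import numpy as np
from scipy.special import j0

w0, sigma, v = 1.0, 1.0, 10.0
omega, k = 20.0, 3.0
A = 1.0 / sigma + 1j * omega / v

r = np.linspace(0.0, 60.0, 600_001)
dr = r[1] - r[0]
G_numeric = (w0 / sigma ** 2) * (np.exp(-A * r) * j0(k * r) * r).sum() * dr

G_closed = (w0 / sigma ** 2) * A / (A ** 2 + k ** 2) ** 1.5   # (9.18)
err = abs(G_numeric - G_closed)
```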

Finally, the inversion of (9.14) yields $\widetilde{v} = \mathcal{M}^{-1}c$. To construct a power spectrum, it is natural to consider an integration over a fixed spatial region, in analogy to the way EEG records the summed activity of a patch of cortex. The simplest assumption is that contributions come only from a two-dimensional disk of radius $R$ (on the order of 1 cm) with equal weight [88], and so it is convenient to introduce a lead field $h_a$:

$$h_a(\mathbf{r}, t) = \int_{\mathbb{R}^2}\varphi(\mathbf{r}-\mathbf{r}')\,v_a(\mathbf{r}', t)\,\mathrm{d}\mathbf{r}', \qquad \varphi(\mathbf{r}) = \Theta(R - |\mathbf{r}|), \quad (9.19)$$

with three-dimensional Fourier transform $\widetilde{h}_a(\mathbf{k}, \omega) = \widetilde{\varphi}(\mathbf{k})\,\widetilde{v}_a(\mathbf{k}, \omega)$. The two-dimensional Fourier transform of $\varphi$ can be calculated as

$$\widetilde{\varphi}(\mathbf{k}) = \int_0^{2\pi}\int_0^{\infty} e^{\mathrm{i}kr\cos\theta}\,\Theta(R - r)\, r\,\mathrm{d}r\,\mathrm{d}\theta = 2\pi\int_0^R J_0(kr)\, r\,\mathrm{d}r = \frac{2\pi R}{k}J_1(kR), \quad (9.20)$$

where $J_\nu$ is the Bessel function of the first kind of order $\nu$ (with integral representation $J_0(z) = (2\pi)^{-1}\int_0^{2\pi} e^{\mathrm{i}z\cos\theta}\,\mathrm{d}\theta$). The power at wave-vector $\mathbf{k}$ is proportional to $\|\widetilde{h}(\mathbf{k}, \omega)\|^2$, where $\widetilde{h} = (\widetilde{h}_e, \widetilde{h}_i)$, and the power spectrum $P(\omega)$ is obtained by integrating over all such $\mathbf{k}$, so that

$$P(\omega) = \frac{1}{(2\pi)^2}\int_{\mathbb{R}^2}\|\widetilde{h}(\mathbf{k}, \omega)\|^2\,\mathrm{d}\mathbf{k} = 2\pi R^2\int_0^{\infty}\frac{J_1^2(kR)}{k}\,\|\widetilde{v}(\mathbf{k}, \omega)\|^2\,\mathrm{d}k. \quad (9.21)$$
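The Bessel identity used in (9.20), $\int_0^R J_0(kr)\, r\,\mathrm{d}r = (R/k)J_1(kR)$, is easy to confirm by quadrature (the test wavenumbers are arbitrary; scipy supplies $J_0$ and $J_1$):

```python
import numpy as np
from scipy.special import j0, j1

R = 1.0
ks = np.array([0.5, 2.0, 7.3])

r = np.linspace(0.0, R, 200_001)
dr = r[1] - r[0]
quad = np.array([(j0(k * r) * r).sum() * dr for k in ks])
closed = R * j1(ks * R) / ks          # the radial factor in (9.20)
err = np.max(np.abs(quad - closed))
```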


Fig. 9.3 Power spectrum (normalised to peak) $P(f)$ as a function of frequency $f = \omega/(2\pi)$. There is a resonant response around 10 Hz, corresponding to the alpha-band rhythm, present in awake, relaxed humans as well as in REM sleep. The parameters are $\tau_e = \tau_i = 100$ ms, $w_{ee} = w_{ie} = 2$, $w_{ei} = -2$, $w_{ii} = -2.1$, $\sigma_{ee} = \sigma_{ie} = \sigma_{ii} = \sigma_{ei} = 1$ mm, $R = 1$ cm, $C_e = C_i = 1$, $h_a = 0$, $\beta_e = 1$, $\beta_i = 40$, $v_{ee} = 10$ m/s, and $v_{ei}^{-1} = v_{ie}^{-1} = v_{ii}^{-1} = 0$.

An example of the (linearised) power spectrum that can be generated using this approach is shown in Fig. 9.3. This shows a resonant response around 10 Hz, corresponding to the alpha-band rhythm. Interestingly, anaesthesia is thought to shift power away from the alpha band and into lower bands, and neural field models that account for drug action on synapses have been shown to provide a mechanistic explanation for this [88, 311, 446].
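Equations (9.15), (9.16) and (9.21) assemble into a short numerical recipe for the linearised power spectrum. The slopes $\gamma_e$, $\gamma_i$ below are assumed placeholder values (for Fig. 9.3 they follow from linearising about the computed steady state), so this sketch reproduces the shape of the calculation rather than the figure itself.

```python
import numpy as np
from scipy.special import j1

tau_e = tau_i = 100.0                       # ms
w0 = {'ee': 2.0, 'ie': 2.0, 'ei': -2.0, 'ii': -2.1}
sigma = 1.0                                 # mm (all populations)
v_ee = 10.0                                 # mm/ms (= 10 m/s); delays on e-e paths only
gamma_e, gamma_i = 0.25, 0.25               # assumed linearisation slopes
Rdisk = 10.0                                # lead-field disk radius, mm
c = np.array([1.0, 1.0])                    # C_e = C_i = 1

def G(ab, k, omega):
    """Transform (9.16) of the delayed connectivity kernel."""
    A = 1.0 / sigma + (1j * omega / v_ee if ab == 'ee' else 0.0)
    return w0[ab] / sigma**2 * A / (A**2 + k**2) ** 1.5

def P(omega, ks=np.linspace(1e-3, 20.0, 1000)):
    """Power spectrum (9.21) from the linear response (9.15)."""
    dk = ks[1] - ks[0]
    total = 0.0
    for k in ks:
        M = np.array([
            [(1j*omega*tau_e + 1)**2 - gamma_e*G('ee', k, omega), -gamma_i*G('ei', k, omega)],
            [-gamma_e*G('ie', k, omega), (1j*omega*tau_i + 1)**2 - gamma_i*G('ii', k, omega)],
        ])
        vtil = np.linalg.solve(M, c)
        total += j1(k * Rdisk)**2 / k * (np.abs(vtil)**2).sum() * dk
    return 2.0 * np.pi * Rdisk**2 * total

freqs = np.linspace(1.0, 30.0, 30)                                 # Hz
spec = np.array([P(2.0 * np.pi * fq / 1000.0) for fq in freqs])    # omega in rad/ms
```

Normalising `spec` to its peak and plotting against `freqs` gives a curve of the same type as Fig. 9.3.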

9.3.2 Single effective population model

To simplify mathematical analysis, it can be convenient to consider a reduction of a two-population model, with four separate connectivity functions, to a model with fewer parameters and possibly fewer variables. A quite drastic reduction in model complexity can first be achieved by dropping axonal delays by taking the limit $v_{ab} \to \infty$. Given the lack of self-connections in inhibitory populations, a further reduction can be achieved by setting $w_{ii} = 0$. If inhibition is fast compared to excitation, namely, $\tau_e = 1$ and $\tau_i \ll 1$, then $u_i$ can be approximated by its quasi-steady state value. Finally, if the firing rate function $f_i$ in (9.2) is replaced by a linear function, then the Wilson–Cowan model simplifies to a single effective population model for the dynamics of $u = u_e$ given by

$$\frac{\partial u}{\partial t} = -u + w \otimes f(u), \quad (9.22)$$

where the $e$ label on $f_e$ is now suppressed, and

$$w = w_{ee} + w_{ei} \otimes w_{ie}. \quad (9.23)$$


Here, the operator $\otimes$ denotes a spatial convolution:

$$[a \otimes b](\mathbf{r}) = \int_{\mathbb{R}^2} a(\mathbf{r}-\mathbf{r}')\, b(\mathbf{r}')\,\mathrm{d}\mathbf{r}'. \quad (9.24)$$
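On a periodic 1D grid the convolution $w \otimes f(u)$ in (9.22) can be evaluated with an FFT. As a sketch and sanity check (a linear $f$ is an assumption made only for the test), a spatially uniform state under the balanced wizard hat kernel $w(x) = (1-|x|)e^{-|x|}$ discussed below simply decays as $e^{-t}$, since the convolution term integrates to zero:

```python
import numpy as np

L, n = 40.0, 1024
dx = L / n
x = (np.arange(n) - n // 2) * dx

w = (1.0 - np.abs(x)) * np.exp(-np.abs(x))       # balanced wizard hat
w_hat = np.fft.fft(np.fft.ifftshift(w)) * dx     # kernel spectrum for circular convolution

f = lambda u: u                                  # linear rate (test assumption)

def rhs(u):
    conv = np.real(np.fft.ifft(w_hat * np.fft.fft(f(u))))
    return -u + conv                             # right-hand side of (9.22)

dt, T = 1e-3, 2.0
u = np.full(n, 0.7)                              # homogeneous initial state
for _ in range(int(T / dt)):
    u = u + dt * rhs(u)

decay_err = np.max(np.abs(u - 0.7 * np.exp(-T)))
```

Replacing `f` with a sigmoid and perturbing the initial state turns the same loop into a pattern-forming simulation.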

Choosing a linear firing rate function for $f_i$ does impose a restriction on the model, but is supported by experiments comparing firing rates of excitatory and inhibitory cells [615]. The effective connectivity, $w$, can easily be calculated. Consider, for example, a simple model with exponential decay in one spatial dimension such that $w_{ab}(x) = A_{ab}\exp(-|x|/\sigma_{ab})/(2\sigma_{ab})$. In this case, a short calculation yields

$$w(x) = \frac{A_{ee}}{2\sigma_{ee}}\,e^{-|x|/\sigma_{ee}} + \frac{A_{ei}A_{ie}}{2(\sigma_{ei}^2 - \sigma_{ie}^2)}\left[\sigma_{ei}\,e^{-|x|/\sigma_{ei}} - \sigma_{ie}\,e^{-|x|/\sigma_{ie}}\right]. \quad (9.25)$$
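The closed form (9.25) can be confirmed by direct numerical convolution of the two exponential footprints (the amplitudes and widths below are arbitrary):

```python
import numpy as np

A_ei, A_ie = 1.3, 0.8
s_ei, s_ie = 1.5, 0.5

L, n = 60.0, 12_001
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

w_ei = A_ei * np.exp(-np.abs(x) / s_ei) / (2.0 * s_ei)
w_ie = A_ie * np.exp(-np.abs(x) / s_ie) / (2.0 * s_ie)

conv = np.convolve(w_ei, w_ie, mode='same') * dx     # numerical (w_ei  convolved with  w_ie)(x)

closed = A_ei * A_ie * (s_ei * np.exp(-np.abs(x) / s_ei)
                        - s_ie * np.exp(-np.abs(x) / s_ie)) / (2.0 * (s_ei**2 - s_ie**2))

core = slice(n // 4, 3 * n // 4)                     # avoid truncation at the domain edges
err = np.max(np.abs(conv[core] - closed[core]))
```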

In the limit $\sigma_{ab} \to 1$ and with the choice $A_{ee} = -A_{ei}A_{ie} = 4$, this reduces to a balanced wizard hat shape $w(x) = (1 - |x|)e^{-|x|}$, with $\int_{\mathbb{R}} w(x)\,\mathrm{d}x = 0$, describing an effective system with short-range excitation and long-range inhibition. A similar shape, which is differentiable at the origin, can be obtained with the planar Gaussian choice $w_{ab}(r) = A_{ab}\exp(-r^2/\sigma_{ab}^2)/(\pi\sigma_{ab}^2)$, and is referred to as a Mexican hat:

$$w(r) = \frac{A_{ee}}{\pi\sigma_{ee}^2}\,e^{-r^2/\sigma_{ee}^2} - \frac{A}{\pi\sigma^2}\,e^{-r^2/\sigma^2}, \qquad A = -A_{ei}A_{ie}, \quad \sigma^2 = \sigma_{ei}^2 + \sigma_{ie}^2, \quad (9.26)$$

with $A_{ee}/\sigma_{ee}^2 > A/\sigma^2 > 0$ and $\sigma_{ee} < \sigma$. Note that in this case, the balance condition $\int_{\mathbb{R}^2} w(\mathbf{r})\,\mathrm{d}\mathbf{r} = 0$ is met when $A_{ee} = A$. It is thought that neural networks have optimal response properties when they are in balance, and the enforcement of such a condition ensures that there will be no runaway activity [797, 798, 899]. Examples of balanced wizard hat and Mexican hat shapes are shown in Fig. 9.4. The effective model is especially useful for gaining insight into pattern formation via a Turing instability, the concept of which was introduced by Alan Turing in 1952 [887]. His foundational work describes the way in which patterns in nature, such as stripes and spots, can arise

Fig. 9.4 Effective anatomical interactions w = wee + wei ⊗ wie in the plane. Left: Balanced wizard hat with w(r ) = (1 − 2r ) exp(−r ). Right: Balanced Mexican hat with w(r ) = exp(−r 2 ) − exp(−r 2 /4)/4.


naturally from an instability of a homogeneous uniform state in coupled reaction-diffusion equations. The mathematical techniques apply just as well to the study of pattern formation in neural field models, as is discussed further in Box 9.2 on page 391, and see Prob. 9.1 to Prob. 9.9.

Box 9.2: Turing instability

Consider an effective single population model (without axonal delays) defined on the plane and written as

$$\frac{\partial}{\partial t}u(\mathbf{r}, t) = -u(\mathbf{r}, t) + \int_{\mathbb{R}^2} w(\mathbf{r}-\mathbf{r}')\, f \circ u(\mathbf{r}', t)\,\mathrm{d}\mathbf{r}'. \quad (9.27)$$

Assuming it exists, a homogeneous steady state $u(\mathbf{r}, t) = u_0$ is a solution of $u_0 = \widetilde{w}(0)f(u_0)$, where $\widetilde{w}$ is the two-dimensional Fourier transform defined by

$$\widetilde{w}(\mathbf{k}) = \int_{\mathbb{R}^2} e^{-\mathrm{i}\mathbf{k}\cdot\mathbf{r}}\, w(\mathbf{r})\,\mathrm{d}\mathbf{r}. \quad (9.28)$$

Linearising around the homogeneous steady state by writing $u(\mathbf{r}, t) = u_0 + v(\mathbf{r}, t)$ for some small perturbation $v$ yields (after substitution into (9.27) and expanding to first order in $v$)

$$v_t = -v + \gamma\, w \otimes v, \qquad \gamma = f'(u_0). \quad (9.29)$$

This linear integro-differential equation has separable solutions of the form $v(\mathbf{r}, t) = e^{\lambda t}e^{\mathrm{i}\mathbf{k}\cdot\mathbf{r}}$ for $\lambda \in \mathbb{C}$ and $\mathbf{k} \in \mathbb{R}^2$. The resulting characteristic equation for $\lambda$ takes the simple form

$$\lambda = -1 + \gamma\,\widetilde{w}(\mathbf{k}). \quad (9.30)$$

The homogeneous steady state $u_0$ is stable if $\mathrm{Re}\,\lambda(\mathbf{k}) < 0$ for all $\mathbf{k}$. Note that if $w(\mathbf{r}) = w(|\mathbf{r}|)$, then $\widetilde{w}(\mathbf{k}) = \widetilde{w}(|\mathbf{k}|) \in \mathbb{R}$. In this case, there is a bifurcation to a pattern state when

$$\max_k \widetilde{w}(k) = \frac{1}{\gamma}, \qquad k = |\mathbf{k}|. \quad (9.31)$$

For a discussion of the behaviour expected following a Turing instability, see Box 9.3 on page 392, and for a recent perspective on Turing's theory of morphogenesis (in reaction-diffusion systems), see [532].


Box 9.3: Patterns at a Turing instability Consider the effective neural field defined by (9.27) in Box 9.2 on 391. As a motivating example, suppose the coupling kernel is chosen to be a Mexican hat 2 2 2 shape given by a difference of Gaussians: w(r ) = [e−r − e−r /σ /σ 2 ]/π with σ > 1. The Fourier transform of this kernel is given by (k) = e−k w

2

/4

− e−σ

k /4

2 2

,

(9.32)

and has a maximum away from the origin at kc = 8 ln σ /(σ 2 − 1). The homogeneous steady state is u 0 = 0 since the kernel is balanced (i.e., R2 w(r)dr = 0).

Near to bifurcation, spatially periodic solutions of the form $\mathrm{e}^{i\mathbf{k}_c\cdot\mathbf{r}}$, $|\mathbf{k}_c| = k_c$, are expected. For a given $k_c$, there are an infinite number of choices for the direction of $\mathbf{k}_c$. It is, therefore, convenient to restrict attention to doubly periodic solutions that tessellate the plane. These can be expressed in terms of the basic symmetry groups of the hexagon, square, and rhombus. Solutions of (9.29) can then be constructed from combinations of the basic functions $\mathrm{e}^{i k_c \mathbf{R}\cdot\mathbf{r}}$, for appropriate choices of the basis vectors $\mathbf{R}$. If $\varphi$ is the angle between two basis vectors $\mathbf{R}_1$ and $\mathbf{R}_2$, one can distinguish three types of lattice according to the value of $\varphi$: square lattice ($\varphi = \pi/2$), rhombic lattice ($0 < \varphi < \pi/2$), and hexagonal ($\varphi = \pi/3$). Hence, all doubly periodic functions may be written as a linear combination of plane waves
$$v(\mathbf{r},t) = \sum_j A_j \mathrm{e}^{i k_c \mathbf{R}_j\cdot\mathbf{r}} + \mathrm{cc}, \qquad |\mathbf{R}_j| = 1. \tag{9.33}$$
For hexagonal lattices: $\mathbf{R}_1 = (1,0)$, $\mathbf{R}_2 = (-1,\sqrt{3})/2$, and $\mathbf{R}_3 = (1,\sqrt{3})/2$. For square lattices: $\mathbf{R}_1 = (1,0)$, $\mathbf{R}_2 = (0,1)$, while for the rhombus tessellations: $\mathbf{R}_1 = (1,0)$, $\mathbf{R}_2 = (\cos\varphi, \sin\varphi)$.
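The hexagonal combination in (9.33) can be realised concretely: a 60-degree rotation permutes the set $\{\pm\mathbf{R}_1, \pm\mathbf{R}_2, \pm\mathbf{R}_3\}$, so the resulting real pattern is invariant under rotation by $\pi/3$. A small sketch (variable names ours):

```python
import numpy as np

kc = 1.0
# hexagonal basis vectors R_1, R_2, R_3 from (9.33)
R = np.array([[1.0, 0.0],
              [-0.5, np.sqrt(3) / 2],
              [0.5, np.sqrt(3) / 2]])

def pattern(r):
    """Real hexagonal pattern: sum over j of e^{i kc R_j . r} + cc (A_j = 1)."""
    return sum(2 * np.cos(kc * Rj @ np.asarray(r)) for Rj in R)

# rotation by pi/3 permutes {±R_1, ±R_2, ±R_3}, so the pattern is invariant
rot = np.array([[0.5, -np.sqrt(3) / 2],
                [np.sqrt(3) / 2, 0.5]])
r = np.array([1.7, -2.3])
print(pattern(r), pattern(rot @ r))
```

Evaluating the pattern on a grid and plotting it produces the familiar honeycomb arrangement of activity maxima.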


If two distinct wavenumbers are simultaneously excited, say $k = k_c$ and $k = q k_c$ with $q \notin \mathbb{Q}$, then it is also possible that a quasicrystal pattern can be excited [347]. The one shown is a decagonal (10-fold) quasicrystal with $q = 2\cos(\pi/5)$. Such patterns do not have spatial periodicity, though they do possess rotational symmetry. For a detailed review of the methods used to describe patterns emerging from a Turing instability in neural field models, see [102].

9.4 The brain wave equation

Nunez has emphasised the important role that delays arising from action potential propagation along corticocortical fibres play in generating the brain rhythms seen in the 1–15 Hz range [669]. Moreover, he has proposed a damped inhomogeneous wave equation describing the evolution of neural activity at the tissue level that has played an important role in our understanding of waves and patterns seen using EEG sensors [668, 669]. This brain wave equation, and variants thereof, has played a major role in the interpretation of EEG signals since the 1970s [12, 228, 475, 669, 670]. The local nature of such models means that they are amenable to analysis with standard numerical techniques for partial differential equations (PDEs), circumventing the challenges of evolving delayed integro-differential models. For a judicious choice of the anatomical connectivity function, it is sometimes possible to obtain an equivalent PDE from a neural field equation. The basic idea is to take advantage of the convolution structure of the homogeneous neural field equation. To illustrate this approach, it is sufficient to consider a single population effective model without delay that can be written in the form $Qu = \psi$ (for some temporal differential operator $Q$), as presented in Box 9.4 on page 394 (and see also [206, 474, 550, 553] for further discussion).


Box 9.4: Equivalent PDE description of a neural field
Consider a single population effective homogeneous neural field model, obtained from (9.1) with $w(\mathbf{r},\mathbf{r}') = w(|\mathbf{r}-\mathbf{r}'|)$ and $\tau(\mathbf{r},\mathbf{r}') = 0$, in one spatial dimension over a finite domain $[-1,1]$ with
$$\psi(x,t) = \int_{-1}^{1} w(x-y)\, f \circ u(y,t)\,\mathrm{d}y, \qquad x \in [-1,1]. \tag{9.34}$$
Let $w(x) = \exp(-|x|)/2$, which is the Green's function of $\mathcal{R} = (1 - \mathrm{d}_{xx})$ (namely $\mathcal{R}w = \delta$), and apply $\mathcal{R}$ to (9.34) to obtain
$$[\mathcal{R}\psi](x,t) = \int_{-1}^{1} \delta(x-y)\, f \circ u(y,t)\,\mathrm{d}y = f \circ u(x,t). \tag{9.35}$$
To generate boundary conditions, one can differentiate (9.34) and use $w'(x) = -\mathrm{sgn}(x)\, w(x)$, giving $\psi'(\pm 1) \pm \psi(\pm 1) = 0$. Hence, the integro-differential model can now be written as a coupled PDE: $Qu = \psi$, $\mathcal{R}\psi = f \circ u$, with the specified boundary conditions.
A similar approach can be adopted for systems in two spatial dimensions. Consider a single population effective model on a finite domain $\Omega \subseteq \mathbb{R}^2$ with
$$\psi(\mathbf{r},t) = \int_{\Omega} w(\mathbf{r}-\mathbf{r}')\, f \circ u(\mathbf{r}',t)\,\mathrm{d}\mathbf{r}', \qquad \mathbf{r} \in \Omega. \tag{9.36}$$
Let $w(\mathbf{r}) = K_0(r)/(2\pi)$ ($r = |\mathbf{r}|$), where $K_0$ is the modified Bessel function of the second kind of order zero. This is the Green's function of $\mathcal{R} = 1 - \nabla^2$. Applying $\mathcal{R}$ to (9.36) yields
$$[\mathcal{R}\psi](\mathbf{r},t) = \int_{\Omega} \delta(\mathbf{r}-\mathbf{r}')\, f \circ u(\mathbf{r}',t)\,\mathrm{d}\mathbf{r}' = f \circ u(\mathbf{r},t). \tag{9.37}$$
Boundary conditions can be obtained by differentiating (9.36) and evaluating the result on $\partial\Omega$.

To obtain the brain wave equation, consider equation (9.7), take its Fourier transform, and make use of (9.16) to give
$$\widehat{\psi}_{ab}(k,\omega) = \widehat{G}_{ab}(k,\omega)\,\widehat{\rho}_b(k,\omega) = \frac{w^0_{ab}}{\sigma^2_{ab}}\, \frac{A_{ab}(\omega)}{\left(A_{ab}(\omega)^2 + k^2\right)^{3/2}}\, \widehat{\rho}_b(k,\omega). \tag{9.38}$$
Cross multiplication yields
$$\left(A_{ab}(\omega)^2 + k^2\right)^{3/2} \widehat{\psi}_{ab}(k,\omega) = \frac{w^0_{ab}}{\sigma^2_{ab}}\, A_{ab}(\omega)\, \widehat{\rho}_b(k,\omega). \tag{9.39}$$
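The one-dimensional equivalence in Box 9.4 is easy to verify numerically: convolve an arbitrary profile against the exponential kernel and check that $\psi - \psi_{xx} = f \circ u$ holds in the interior, together with the stated boundary conditions. A minimal NumPy sketch, with an illustrative choice of $u$ and $f$ (not fixed by the text):

```python
import numpy as np

# Discretise [-1, 1]; u and f are illustrative choices.
N = 2001
x = np.linspace(-1, 1, N)
h = x[1] - x[0]
g = 1 / (1 + np.exp(-5 * np.cos(np.pi * x)))        # g = f o u

def trapz(vals):
    """Trapezoidal rule on the uniform grid."""
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

w = lambda z: 0.5 * np.exp(-np.abs(z))              # Green's function of 1 - d_xx

# psi(x) = int_{-1}^{1} w(x - y) g(y) dy
psi = np.array([trapz(w(xi - x) * g) for xi in x])

# interior check: psi - psi'' = g (second-order central differences)
psi_xx = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / h**2
residual = np.max(np.abs(psi[1:-1] - psi_xx - g[1:-1]))

# boundary checks: psi'(1) + psi(1) = 0 and psi'(-1) - psi(-1) = 0
bc_right = (psi[-1] - psi[-2]) / h + psi[-1]
bc_left = (psi[1] - psi[0]) / h - psi[0]
print(residual, bc_right, bc_left)
```

All three residuals vanish up to discretisation error, confirming that the nonlocal model and the PDE system carry the same information.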


One may now make the long-wavelength approximation and expand around $k = 0$ to obtain
$$\left(A_{ab}(\omega)^2 + \frac{3}{2} k^2\right) \widehat{\psi}_{ab}(k,\omega) = \frac{w^0_{ab}}{\sigma^2_{ab}}\, \widehat{\rho}_b(k,\omega). \tag{9.40}$$
Taking the inverse Fourier transform to real space and time coordinates using the identification $k^2 \leftrightarrow -\nabla^2$ and $i\omega \leftrightarrow \partial/\partial t$ yields the brain wave equation:
$$\left[\left(\frac{1}{\sigma_{ab}} + \frac{1}{v_{ab}}\frac{\partial}{\partial t}\right)^2 - \frac{3}{2}\nabla^2\right] \psi_{ab} = \frac{w^0_{ab}}{\sigma^2_{ab}}\, f_b \circ u_b. \tag{9.41}$$
This is a damped inhomogeneous wave equation and as such is expected to have travelling wave solutions. These types of equations have previously been derived by Jirsa and Haken [474], studied with regard to brain behaviour experiments by Kelso et al. [325, 500], and used extensively in EEG modelling [88, 754, 755, 834]. Interestingly, in one spatial dimension, an exact brain wave equation can be found for an exponentially decaying connectivity [474], and see Prob. 9.10. For a recent extension of the brain wave equation to include dendritic processing, see [760]. In the next section, we will show how waves can be constructed in closed form for the choice that $f_a$ is a Heaviside function.

9.5 Travelling waves

Travelling waves at the scale of the whole brain have been studied ever since the advent of EEG, with more recent studies progressing with the use of electrocorticography (ECoG), in which arrays of electrodes are placed directly on the cortical surface. Both EEG and ECoG indicate that wave speeds are typically in the 1–10 m/s range (consistent with the axonal conduction speeds of myelinated cortical white matter fibres). The development of multi-electrode array and voltage-sensitive dye imaging techniques has yielded even more information about their spatio-temporal properties and has shown that they are present during almost every type of cortical processing [952]. Waves can occur during both awake and sleep states, and can range over both small and large cortical spatial scales. Moreover, they can also occur during pathological states, such as seizures and spreading depression. For a recent discussion of the mechanisms underlying cortical wave propagation, as well as their role in computation, see [647].

It is also possible to image waves of neural activity in in vitro slice preparations using a combination of multi-electrode array recordings, calcium imaging, and voltage-sensitive dyes, typically after bathing in a pharmacological medium that blocks inhibition. Such experiments have been performed in a variety of preparations, including from cortex [164, 356, 386, 951], hippocampus [629], and thalamus [511], as well as spinal sections from vertebrates [753]. Usually, the cortex is cut along the columnar structure to preserve maximum local connectivity. A slice normal to the cortical surface and thus containing all cortical layers, but only 400 μm thick (on


the scale of macrocolumn diameter), can be well represented by a one-dimensional spatial model. Two-dimensional spatial patterns can be studied in tangential slices (with areas ranging from mm² to cm²). In these preparations, a local stimulation by a short current pulse from an electrode can be observed to amplify and propagate with speeds of 20–100 mm/s without spread or loss of shape, although neither its speed nor its amplitude is absolutely uniform, due to cortical inhomogeneity. The mechanism for this spread appears to be synaptic, rather than diffusive. Experiments by Pinto et al. have also established that the initiation, propagation, and termination of travelling pulses involve distinct network mechanisms [721]. The analytical construction of such pulses has also been studied using neural field models with adaptation by Pinto and Ermentrout (under the assumption that adaptation is slow compared to neural activity and then using singular perturbation theory) [720]. As well as travelling pulses, more complicated dynamics can also occur spontaneously in suitably treated slices, including initial epileptiform spikes followed by lower intensity periodic wavetrains [951]. Later work using tangential slices found that these wavetrains may be part of a rotating spiral wave [438]. Moreover, the same authors showed that these spiral waves could be modelled with a planar neural field model with realistic boundary conditions.

Propagating activity in brain regions has been implicated in both physiological and pathological conditions. Sensory processing in both vertebrate and invertebrate olfactory bulbs exhibits propagating waves in response to odour stimuli [234, 556], whilst visual stimuli can evoke similar responses in the visual cortex [70, 397, 730, 953]. Stimulating rat whiskers can trigger a wave in rat barrel cortex [706].
Travelling waves can also be used as a mechanism for perceptual switching in ambiguous stimuli tasks, as well as for phase-locking to moving stimuli [154]. In vertebrates, waves of activity propagating down the spinal column can serve as generators for locomotor patterns [720, 769]. Neurological conditions such as epilepsy are characterised by travelling waves of neural activity (following focal oscillations that mark the onset of a seizure) [179, 561]. Once again, neural field models have been put forward and analysed to study this phenomenon [364, 810].

There are surprisingly few results concerning travelling waves in neural field models with a smooth firing rate, the exceptions being the work of Ermentrout and McLeod [282] on the existence and uniqueness of travelling fronts, the recent follow-up in [292] to treat a (fixed) delayed interaction, and that in [296] to prove the existence of fast-travelling pulse solutions (in models with adaptation). However, for the choice that the firing rate is a non-smooth Heaviside function, the number of results undergoes a relative explosion [154, 364, 445, 503, 547, 746, 810]. Here, we emphasise the key methodologies that allow for the determination of wave speed and stability in this idealised limit of a steep sigmoid. To do so, we focus on a simple single population neural field model in one spatial dimension and treat travelling fronts; extensions to more novel settings are possible without any major change in methodology, and see, for example, [197]. Consider a one-dimensional neural field model with purely excitatory connections and axonal delays written in the form

$$\tau \frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int_{\mathbb{R}} \mathrm{d}y\, w(y)\, \Theta\big(u(x-y, t-|y|/v) - h\big), \tag{9.42}$$

for some threshold $h$, and $w(x) = \exp(-|x|/\sigma)/(2\sigma)$. Following the standard approach for constructing travelling wave solutions, and see Box 4.1 on page 130, we introduce the coordinate $\xi = x - ct$ and seek functions $U(\xi,t) = u(x-ct,t)$ that satisfy (9.42). In the $(\xi,t)$ coordinates, the integral equation (9.42) reads
$$-c\tau \frac{\partial}{\partial \xi} U(\xi,t) + \tau \frac{\partial}{\partial t} U(\xi,t) = -U(\xi,t) + \int_{\mathbb{R}} \mathrm{d}y\, w(y)\, \Theta\big(U(\xi - y + c|y|/v,\, t - |y|/v) - h\big). \tag{9.43}$$

The travelling wave is a stationary solution $U(\xi,t) = q(\xi)$ (independent of $t$) that satisfies
$$-c\tau \frac{\mathrm{d}q}{\mathrm{d}\xi} = -q + \psi, \tag{9.44}$$
$$\psi(\xi) = \int_{\mathbb{R}} \mathrm{d}y\, w(y)\, \Theta\big(q(\xi - y + c|y|/v) - h\big). \tag{9.45}$$

From a physical perspective, it is natural to take c < v, since emergent cortical waves cannot go faster than the speed of the action potential.
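The evaluation of (9.45) that underpins the front construction of the next subsection can be checked by direct quadrature. Under the front assumption that $q(\xi) > h$ precisely for $\xi < 0$, the Heaviside function reduces to an indicator on $\xi - y + c|y|/v < 0$, and the quadrature can be compared against the closed form (9.46). A NumPy sketch with illustrative parameter values ($c < v$):

```python
import numpy as np

sigma, c, v = 1.0, 0.5, 1.0                      # illustrative values, c < v
y = np.linspace(-40, 40, 400001)
dy = y[1] - y[0]
wy = np.exp(-np.abs(y) / sigma) / (2 * sigma)

def psi_quad(xi):
    """Direct quadrature of (9.45): Theta(q(.) - h) = 1 iff xi - y + c|y|/v < 0."""
    f = wy * ((xi - y + c * np.abs(y) / v) < 0)
    return dy * (f.sum() - 0.5 * (f[0] + f[-1]))

def psi_closed(xi):
    """Closed form (9.46), with m_pm = (1 -+ c/v)^(-1)."""
    if xi >= 0:
        return 0.5 * np.exp(-xi / (sigma * (1 - c / v)))
    return 1 - 0.5 * np.exp(xi / (sigma * (1 + c / v)))

errs = [abs(psi_quad(xi) - psi_closed(xi)) for xi in (-2.0, -0.5, 0.0, 0.5, 2.0)]
print(max(errs))
```

The two evaluations agree to within quadrature error, with $\psi(0) = 1/2$ at the threshold crossing.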

9.5.1 Front construction

Travelling fronts connect regions of low activity with regions of high activity and may be thought of as heteroclinic connections between these states, and see Box 4.1 on page 130. Assume a travelling front solution such that $q(\xi) > h$ for $\xi < 0$ and $q(\xi) < h$ for $\xi > 0$, with $c > 0$. It is then possible to show that
$$\psi(\xi) = \begin{cases} \displaystyle\int_{\xi/(1-c/v)}^{\infty} w(y)\,\mathrm{d}y = \tfrac{1}{2}\, \mathrm{e}^{-m_+ \xi/\sigma}, & \xi \geq 0, \\[2ex] \displaystyle\int_{\xi/(1+c/v)}^{\infty} w(y)\,\mathrm{d}y = 1 - \tfrac{1}{2}\, \mathrm{e}^{m_- \xi/\sigma}, & \xi < 0, \end{cases} \tag{9.46}$$
where $m_\pm = (1 \mp c/v)^{-1}$. The linearised equation for perturbations around the front, (9.51), can be integrated from $\xi$ to $\infty$, yielding the eigenvalue problem $p = \mathcal{L}p$:
$$p(\xi) = \frac{1}{c\tau} \int_0^{\infty} \mathrm{d}y\, w(y) \int_{\xi - y + c|y|/v}^{\infty} \mathrm{d}s\, \exp\big(-(s - \xi + y - cy/v)/(c\tau)\big)\, \exp\big(-\lambda (s - \xi + y)/c\big)\, \delta(q(s) - h)\, p(s). \tag{9.53}$$

The cut-off in the integral over $y$ arises because of causality. In particular, perturbations at $\xi$ can only be constructed from perturbations on the interval $[\xi - y + c|y|/v, \infty)$ if $\xi > \xi - y + c|y|/v$, and since $c < v$, this gives the condition $y \geq 0$. Let $\sigma(\mathcal{L})$ be the spectrum of $\mathcal{L}$. A travelling wave is linearly stable if
$$\max\{\mathrm{Re}(\lambda) : \lambda \in \sigma(\mathcal{L}),\ \lambda \neq 0\} \leq -K, \tag{9.54}$$


for some $K > 0$, and $\lambda = 0$ is a simple eigenvalue of $\mathcal{L}$. In general, the normal spectrum of the operator obtained by linearising a system about its travelling wave solution may be associated with the zeros of a complex analytic function, the so-called Evans function. This was originally formulated by Evans [288] in the context of a stability theorem about excitable nerve axon equations of Hodgkin–Huxley type, and see the discussion in Sec. 4.2.1.
The construction of the Evans function begins with an evaluation of (9.53). Under the change of variables $z = q(s)$, this equation may be written
$$p(\xi) = \frac{1}{c\tau} \int_0^{\infty} \mathrm{d}y\, w(y) \int_{q(\infty)}^{q(\xi - y + cy/v)} \mathrm{d}z\, \mathrm{e}^{-(q^{-1}(z) - \xi + y - cy/v)/(c\tau)}\, \mathrm{e}^{-\lambda (q^{-1}(z) - \xi + y)/c}\, \frac{\delta(z - h)\, p(q^{-1}(z))}{|q'(q^{-1}(z))|}. \tag{9.55}$$

For the travelling front of choice, note that when $z = h$, $q^{-1}(h) = 0$, and (9.55) reduces to
$$p(\xi) = \frac{p(0)}{c\tau |q'(0)|} \int_0^{\infty} \mathrm{d}y\, w(y)\, \mathrm{e}^{-(y - \xi - cy/v)/(c\tau)}\, \mathrm{e}^{-\lambda (y - \xi)/c}. \tag{9.56}$$

From this equation, one may generate a self-consistent equation for the value of the perturbation at $\xi = 0$ by setting $\xi = 0$ in (9.56). This self-consistent condition reads
$$p(0) = \frac{p(0)}{c\tau |q'(0)|} \int_0^{\infty} \mathrm{d}y\, w(y)\, \mathrm{e}^{-(y - cy/v)/(c\tau)}\, \mathrm{e}^{-\lambda y/c}. \tag{9.57}$$

Importantly, there are only nontrivial solutions if $\mathcal{E}(\lambda) = 0$, where
$$\mathcal{E}(\lambda) = 1 - \frac{1}{c\tau |q'(0)|} \int_0^{\infty} \mathrm{d}y\, w(y)\, \mathrm{e}^{-(y - cy/v)/(c\tau)}\, \mathrm{e}^{-\lambda y/c}. \tag{9.58}$$

The expression (9.58) is identified as the Evans function for the travelling front solution of (9.42). The Evans function is real valued if $\lambda$ is real. Furthermore: (i) the complex number $\lambda$ is an eigenvalue of the operator $\mathcal{L}$ if and only if $\mathcal{E}(\lambda) = 0$, and (ii) the algebraic multiplicity of an eigenvalue is equal to the order of the zero of the Evans function. Making use of the result from (9.44) that $q'(0) = (h - 1/2)/(c\tau)$ and eliminating $h$ using (9.49) to give $q'(0) = (v/2)(\sigma(c-v) - c\tau v)^{-1}$, the Evans function for the present example can be calculated as
$$\mathcal{E}(\lambda) = \frac{\lambda}{\lambda + c/\sigma + (1 - c/v)/\tau}. \tag{9.59}$$
The equation $\mathcal{E}(\lambda) = 0$ only has the solution $\lambda = 0$. Since $\mathcal{E}'(0) > 0$, $\lambda = 0$ is a simple eigenvalue. Hence, the travelling wave front is linearly stable.
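The properties just stated are immediate to confirm numerically from (9.59): the only real zero sits at $\lambda = 0$, and the slope there is positive, so the zero is simple. A short sketch with illustrative parameter values (any $\tau, \sigma > 0$ and $0 < c < v$ would do):

```python
import numpy as np

tau, sigma, v, c = 1.0, 1.0, 1.0, 0.4

def evans(lam):
    """Evans function (9.59) for the travelling front."""
    return lam / (lam + c / sigma + (1 - c / v) / tau)

eps = 1e-6
slope = (evans(eps) - evans(-eps)) / (2 * eps)     # finite-difference E'(0)
print(evans(0.0), slope)
```

For complex $\lambda$ one would instead track the winding of $\mathcal{E}$ along a contour in the right half-plane; for this example the rational form makes it clear no such zeros exist.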


9.6 Hallucinations

The first systematic studies of visual hallucinations were conducted by Klüver in 1926 [517, 518] with the hallucinogenic drug mescaline. He identified four types of simple imagery that repeated in all his subjects: tunnels; spirals; cobwebs; and lattices, honeycombs, fretworks, and checkerboards. Since then, these basic forms have been found to be an almost ubiquitous part of drug-induced hallucinations, including those induced by marijuana, LSD, ketamine, or cocaine [812]. One of the best-known success stories in mathematical neuroscience is the theory of geometric visual hallucinations developed by Ermentrout and Cowan [276]. Their key observation was that the Klüver forms could all be described in terms of periodic patterns of activity occurring in primary visual cortex (V1). They considered a planar neural field model of V1 and attributed the emergence of such patterns to a Turing instability (and see Box 9.2 on page 391) arising through the variation of a control parameter representing the effect of a hallucinogenic drug. Beyond bifurcation, the emergence of striped, square, or hexagonal patterns of neural activity in V1 induces visual percepts that can be understood with further information about how retinal activity is mapped to cortical activity. As a first approximation (away from the fovea), this map is conformal and takes the form of a complex logarithm [792]. When applied to oblique stripes of neural activity in V1, the inverse retino-cortical map generates hallucinations comprising spirals, circles, and rays, whilst lattices like honeycombs or chequerboards correspond to hexagonal activity patterns in V1. An important property of their neural field model is that excitatory interactions are short range and inhibitory connections are long range.
To understand why this is necessary, we now go through the Turing instability analysis of the Wilson–Cowan model given by (9.2)–(9.3) with $Q_a = 1 + \tau_a \partial/\partial t$, and show how both static and travelling patterns arise. Since axonal delays are not necessary for these behaviours, we drop them by setting $v_{ab}^{-1} = 0$.
The Turing analysis is essentially identical to the linear analysis developed for the power spectrum in Sec. 9.3.1, under the replacement $\widehat{G}_{ab}(k,\omega) = \widehat{w}_{ab}(k)$ (since axonal delays have been dropped), where $\widehat{w}_{ab}$ is the two-dimensional Fourier transform of $w_{ab}$ defined by
$$\widehat{w}_{ab}(\mathbf{k}) = \int_{\mathbb{R}^2} \mathrm{e}^{-i\mathbf{k}\cdot\mathbf{r}}\, w_{ab}(\mathbf{r})\,\mathrm{d}\mathbf{r} = \frac{w^0_{ab}}{(1 + \sigma^2_{ab} k^2)^{3/2}}, \qquad k = |\mathbf{k}|. \tag{9.60}$$

The homogeneous steady state $(u_e, u_i)$ satisfies
$$u_a = \sum_{b \in \{e,i\}} f_b(u_b)\, w^0_{ab}. \tag{9.61}$$

Linearising around the steady state and considering solutions of the form $u_a(\mathbf{r},t) = u_a + v_a \mathrm{e}^{\lambda t}\mathrm{e}^{i\mathbf{k}\cdot\mathbf{r}}$, $|v_a| \ll 1$, yields
$$(1 + \lambda \tau_a)\, v_a = \sum_{b \in \{e,i\}} \gamma_b\, \widehat{w}_{ab}(k)\, v_b, \qquad \gamma_a = f_a'(u_a), \tag{9.62}$$

402

9 Firing rate tissue models

remembering that $\widehat{w}_{ab}(\mathbf{k}) = \widehat{w}_{ab}(k)$ with $k = |\mathbf{k}|$. Thus, $\lambda$ is an eigenvalue of the $2 \times 2$ matrix $A(k)$ with elements
$$A_{ab}(k) = -\frac{1}{\tau_a}\delta_{a,b} + \frac{1}{\tau_a}\gamma_b\, \widehat{w}_{ab}(k), \tag{9.63}$$
where $\delta_{a,b}$ is the Kronecker delta, so that $\lambda = \lambda_\pm(k)$, where
$$\lambda_\pm(k) = \frac{\mathrm{Tr}\, A(k) \pm \sqrt{(\mathrm{Tr}\, A(k))^2 - 4\, \mathrm{Det}\, A(k)}}{2}. \tag{9.64}$$

Now decompose $\lambda$ into real and imaginary parts as $\lambda = \mu + i\omega$. For positive $\mu$, perturbations will grow, whilst, for negative $\mu$, they will decay. Since $\mu = \mu(k)$, it is possible for some perturbation modes to decay whilst others grow. Although linear stability analysis can tell us which modes are unstable, the full pattern to emerge beyond an instability will depend upon the distance from bifurcation and the particular form of the nonlinear firing rate function. A Turing instability will occur when $\mu(k_c) = 0$, assuming that $\mu(k) < 0$ for all other $k$, and two different types of instability are possible.

Fig. 9.6 The onset of a static Turing instability as $\beta_a = \beta$ is increased, with $a \in \{e,i\}$. Here $\beta_a$ is the steepness parameter in the firing rate function $f_a(u) = (1 + \tanh(\beta_a(u - h_a)))/2$. In this figure, $\lambda$ is real for all $k$ and only the larger of the two eigenvalues, $\lambda = \lambda_+(k)$, is considered for each $k$. The eigenvalue $\lambda$ has a peak away from $k = 0$ and an instability to patterned solutions is expected to occur at a critical value $\beta_c \simeq 5.4$. Beyond this critical $\beta$, there exists a range of unstable wavenumbers (indicated by the shaded region). The parameters are $\tau_a = 1$, $w^0_{ee} = w^0_{ie} = 1$, $w^0_{ii} = w^0_{ei} = -1$, $\sigma_{ee} = \sigma_{ie} = 1$, $\sigma_{ii} = \sigma_{ei} = 2$, $h_e = 0$, and $h_i = 0$.
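The dispersion curve of Fig. 9.6 can be reproduced directly from (9.63)–(9.64). For the stated parameters the steady state is $u_e = u_i = 0$ with $\gamma = f'(0) = \beta/2$, and a brute-force computation recovers both the critical steepness and the critical wavenumber:

```python
import numpy as np

# Fig. 9.6 set-up: tau_a = 1, w0_ee = w0_ie = 1, w0_ii = w0_ei = -1,
# sigma_ee = sigma_ie = 1, sigma_ii = sigma_ei = 2, h_a = 0.
def lam_plus(k, beta):
    g = beta / 2                            # gamma = f'(0) at u = 0
    wE = (1 + k**2) ** -1.5                 # hat{w}_ee(k) = hat{w}_ie(k)
    wI = -(1 + 4 * k**2) ** -1.5            # hat{w}_ei(k) = hat{w}_ii(k)
    A = np.array([[-1 + g * wE, g * wI],
                  [g * wE, -1 + g * wI]])
    return np.linalg.eigvals(A).real.max()

k = np.linspace(0, 3, 3001)
diff = (1 + k**2) ** -1.5 - (1 + 4 * k**2) ** -1.5
beta_c = 2 / diff.max()                     # instability threshold
curve = np.array([lam_plus(kk, beta_c) for kk in k])
kc = k[curve.argmax()]
print(beta_c, kc)                           # beta_c ~ 5.4, kc ~ 0.57
```

At $\beta = \beta_c$ the curve touches zero tangentially at $k_c$, away from the origin, exactly as sketched in the figure.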


First, consider the case that $\lambda \in \mathbb{R}$, so that $\omega = 0$. The condition for an instability can be found by the simultaneous solution of $\mathrm{d}\lambda/\mathrm{d}k = 0$ and $\lambda = 0$. This pair of equations defines a pair $(k_c, p_c)$, where $k_c$ is the critical wavenumber and $p_c$ is a critical value for one of the parameters in the model (such as $\beta_e$). Beyond the point of instability, the emergence of spatially periodic (asymptotically time-independent) patterns with wavenumber $k_c$ is expected, and this will be referred to as a static bifurcation. This scenario is illustrated in Fig. 9.6 using $\beta_e = \beta_i = \beta$ as a bifurcation parameter. The maximum of $\lambda$ occurs away from zero at $k = k_c$, meaning that the bifurcation will give rise to spatially patterned solutions. For $\beta < \beta_c$, $\lambda(k) < 0$ for all $k$, and so the spatially homogeneous solution is stable. At a critical $\beta_c$, there is a tangential crossing of the dispersion curve with the $k$-axis at $k_c$, signalling a bifurcation. Beyond this, a range of wavenumbers is expected to be unstable, as shown by the shaded region. Importantly, the shape of the dispersion curve in Fig. 9.6 comes about because of a choice of short-range excitation and long-range inhibition, namely, that the sizes of $\sigma_{ai}$ are greater than those of $\sigma_{ae}$.
In the second case, $\lambda \in \mathbb{C}$, so that $\omega \neq 0$. The (continuous) spectrum is now best visualised using a parametric plot $(\mu, \omega) = (\mu(k), \omega(k))$ for $k \in \mathbb{R}$. A dynamic bifurcation is said to occur if this curve tangentially touches the imaginary axis. The condition for an instability can be found by the simultaneous solution of $\mathrm{d}\mu/\mathrm{d}\omega = 0$ and $\mu = 0$. This is equivalent to solving $\mathrm{d}\mu/\mathrm{d}k = 0$ and $\mu = 0$ (with $\mathrm{d}\omega/\mathrm{d}k \neq 0$), and defines a pair $(k_c, p_c)$ where $p_c$ is a critical value of some model parameter. From (9.64), this instability is equivalently defined by $\mathrm{d}\,\mathrm{Tr}\, A(k)/\mathrm{d}k = 0$ and $\mathrm{Tr}\, A(k) = 0$, with emergent temporal frequency $\omega_c = \sqrt{\mathrm{Det}\, A(k_c)}$.
Beyond a dynamic instability, periodic travelling plane waves will emerge with a speed equal to $\omega_c/k_c$ (note that standing waves are also possible). The spectrum for a neural field model supporting a dynamic bifurcation is shown in Fig. 9.7. Three separate spectral curves are shown for differing values of $\tau_i$, and as $\tau_i$ is increased the spectrum can cross over into the right-hand complex plane away from the real axis, signalling a dynamic instability.
For the reinstatement of axonal delays, a similar analysis would be obtained with the replacement $A(k) \rightarrow A(k,\lambda)$, where the elements of $A(k,\lambda)$ are given by (9.63) under the replacement $\widehat{w}_{ab}(k) \rightarrow \widehat{G}_{ab}(k, -i\lambda)$. In this case, the spectral equation $\mathcal{E}(\lambda,k) = 0$, where $\mathcal{E}(\lambda,k) = \mathrm{Det}(A(k,\lambda) - \lambda I_2)$ (and $I_2$ is the $2 \times 2$ identity matrix), is no longer quadratic in $\lambda$. In this situation, it is natural to find the zeros of $\mathcal{E}(\lambda,k)$ by the simultaneous solution of $G(\mu,\omega,k) = \mathrm{Re}\,\mathcal{E}(\mu + i\omega, k) = 0$ and $H(\mu,\omega,k) = \mathrm{Im}\,\mathcal{E}(\mu + i\omega, k) = 0$. The solution of these equations defines a continuous curve in the $(\mu,\omega)$ space parametrised by $k$. For a fixed $\mu$, the implicit function theorem guarantees solutions provided that the condition
$$\det \frac{\partial(G,H)}{\partial(\omega,k)} \neq 0 \tag{9.65}$$
is met. Thus, points of instability can be found by fixing $\mu = 0$, varying parameters until (9.65) is violated, and checking that transversality (i.e., non-zero derivative of eigenvalues with respect to parameters) holds.


Fig. 9.7 The onset of a dynamic Turing instability as $\tau_i$ is increased. For a critical value of $\tau_i = \tau_c \simeq 8.7$, the spectral curve tangentially touches the imaginary axis, signalling a bifurcation to a travelling pattern with emergent temporal frequency $\omega_c \simeq 0.6$, critical wavenumber $k_c \simeq 0.16$, and speed $\omega_c/k_c$. The parameters are $\tau_e = 1$, $w^0_{ee} = w^0_{ie} = 1.5$, $w^0_{ii} = w^0_{ei} = -4$, $\sigma_{ee} = \sigma_{ie} = 1$, $\sigma_{ii} = \sigma_{ei} = 2$, $\beta_a = 10$, and $h_a = 0$.

We now turn to the perception of patterns of activity in V1. One of the main structures of the visual cortex is that of retinotopy, a neurophysiological projection of the retina to the visual cortex. The log-polar mapping [792] is perhaps the most common representation of the mapping of points from the retina to the visual cortex and is key to understanding some of the visual hallucinations that can be induced by hallucinogenic drugs, and in particular their geometry. It was the great insight of Ermentrout and Cowan [276] that the Klüver forms could be recovered after an application of the inverse retino-cortical map to spatially periodic activity arising from a Turing instability in V1. The action of the retino-cortical map turns a circle of radius r in the visual field into a vertical stripe at x = ln(r ) in the cortex, and also turns a ray emanating from the origin with an angle θ into a horizontal stripe at y = θ . Simply put, if a point on the visual field is described by (r, θ ) in polar coordinates, the corresponding point in V1 has Cartesian coordinates (x, y) = (ln(r ), θ ). Thus to answer how a pattern would be perceived, one need only apply the inverse (conformal) log-polar mapping. Some example patterns of time-independent cortical activity and their perception are shown in Fig. 9.8. For travelling wave solutions beyond a dynamic Turing instability, the percept would be a dynamical visual hallucination, such as a rotating spiral, a tunnel hallucination (expanding or shrinking rings), or a funnel hallucination (rotating radial arms), with corresponding blinking (phasing in and out) versions for standing waves [849].
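The geometry of this map is easy to exercise in code. The following sketch (function name ours) applies the inverse log-polar map $(x,y) \mapsto (r,\theta) = (\mathrm{e}^x, y)$ and checks that a vertical stripe in cortex comes back as a circle in the visual field, while a horizontal stripe comes back as a ray:

```python
import numpy as np

def cortex_to_retina(x, y):
    """Inverse retino-cortical map: (x, y) -> visual-field Cartesian point."""
    return np.exp(x) * np.cos(y), np.exp(x) * np.sin(y)

# a vertical stripe in cortex (x fixed) maps back to a circle of radius e^x
x0 = 1.0
radii = [np.hypot(*cortex_to_retina(x0, th))
         for th in np.linspace(0, 2 * np.pi, 50)]

# a horizontal stripe in cortex (y fixed) maps back to a ray at angle y
y0 = np.pi / 4
angles = [np.arctan2(Y, X)
          for X, Y in (cortex_to_retina(xx, y0) for xx in np.linspace(-1, 1, 50))]
print(max(radii) - min(radii), max(angles) - min(angles))
```

Applying the same transformation to a full stripe or hexagonal activity pattern produces the tunnel, funnel, and lattice percepts of Fig. 9.8.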


Fig. 9.8 Plots of activity in cortex and their corresponding perception making use of the conformal retino-cortical map: (x, y) = (ln(r ), θ ).

It is well to mention an extension of the original work of Ermentrout and Cowan by Bressloff et al. [110] to describe the dynamics of orientation selective cells (and see Prob. 9.5). This work uses a more biologically realistic neural field model that includes anisotropic lateral connections that only connect distal elements that share the same orientation preference and are aligned along that same (common) orientation preference. Interestingly, this model (using an extension of the retino-cortical map to include its action on local edges and contours) can generate representatives of all the Klüver form constants, including contoured honeycomb, checkerboard, and cobweb patterns. Nonetheless, both this and the original model of Ermentrout and Cowan focus on spontaneous pattern formation that is induced by changes of parameters intrinsic to the models, rather than by external drive. However, it is particularly important to address external drive when trying to understand the mechanisms of sensory-induced illusions and hallucinations. A nice example can be found in the psychophysical experiments of Billock and Tsou [85]. These authors tried to induce certain geometric hallucinations by biasing them with appropriate visual stimuli from a flickering monitor. For example, a set of centrally presented concentric rings was expected to bias a hallucination to circles in the surround. Instead, and to their surprise, they found that fan-shaped patterns were perceived in the surround (and vice versa for the presentation of radial arm patterns, with a perception of concentric rings). A recent study of this phenomenon using a spatially forced neural field can be found in [663]. For theoretical work on flicker-induced hallucinations in neural fields with time periodic forcing, see the work of Rule et al. [770].
Up until now, we have not discussed the precise form of patterns that can be excited in the planar neural field, and referred only to their wavenumber $k_c$.
For a given kc , there are an infinite number of choices for the direction of the wavevector kc , and it is common practice to restrict attention to doubly periodic solutions that tessellate the plane. These can be expressed in terms of the basic symmetry groups of


the hexagon, square and rhombus, and see Box 9.3 on page 392. Thus, the question arises as to which of these is seen robustly beyond an instability. This leads us to the notion of pattern selection, which is best addressed using a weakly nonlinear analysis, as exemplified in [102] to derive so-called amplitude equations.

9.7 Amplitude equations

To introduce the notion of envelope or amplitude equations for patterns emerging beyond the point of an instability for a neural field model, it is useful to focus on an effective model, as described in Sec. 9.3.2. Here, we consider the choice
$$u_t = -u + w \otimes f \circ u, \tag{9.66}$$

where $w(\mathbf{r}) = w(|\mathbf{r}|)$, $u = u(\mathbf{r},t)$, $\mathbf{r} = (x,y) \in \mathbb{R}^2$, $t \geq 0$. The conditions for a static Turing instability are given in Box 9.2 on page 391. Just beyond the bifurcation point, emergent patterns can be written as linear combinations of (spatial) plane waves
$$\sum_{n \in \mathbb{Z}} A_n \mathrm{e}^{i\mathbf{k}_n\cdot\mathbf{r}} + \mathrm{cc}, \tag{9.67}$$

where the sum is taken over all wavevectors with $|\mathbf{k}_n| = k_c$. The rotational symmetry of $w$ means that there exist infinitely many $\mathbf{k}_n$ that satisfy this condition. However, Turing patterns found in nature often form regular tilings of the plane, that is, they are doubly periodic with respect to some regular (square, rhomboid, or hexagonal) lattice [218]. Working in the weakly nonlinear regime, one can study the symmetry breaking bifurcations that lead to pattern formation to analyse the stability and selection of patterns beyond the critical parameter value [110, 276, 848]. Close to the bifurcation point, the dominant mode $\mathrm{e}^{i\mathbf{k}_m\cdot\mathbf{r}}\mathrm{e}^{\lambda t}$, for some $m \in \mathbb{Z}$, grows slowly (since $\lambda \in \mathbb{R}$ is small), giving rise to a separation of scales. This observation is key in deriving the envelope or amplitude equations. In this approach, information about the short-term behaviour of the system is discarded in favour of a description on some appropriately identified slow timescale, and makes use of a multiple-scales perturbation analysis. This may be thought of as an extension of linear stability analysis to higher-order terms and is often used where singular perturbation expansions break down. Consider the power series expansion
$$u(\mathbf{r},t) = u_0(\mathbf{r},t) + \varepsilon u_1(\mathbf{r},t) + \varepsilon^2 u_2(\mathbf{r},t) + \varepsilon^3 u_3(\mathbf{r},t) + \ldots, \tag{9.68}$$

for some small parameter ε . As for many asymptotic methods, one can construct solutions at each order of ε by recursively solving equations at the previous order. However, in some systems, the nth and (n − 1)th term in the expansion have the same power of ε . This is caused by resonance between terms. This secular problem can be circumvented by introducing new scaled variables on long spatial and temporal scales. The new and old variables are considered to be independent of each other.


Whilst this can be seen as a limitation of this approach, it allows one to bridge the gap across scales via homogenisation or averaging [698]. To construct the amplitude equations, first Taylor expand the nonlinear firing rate around the steady state $u_0$:
$$f(u) = f(u_0) + \gamma_c (u - u_0) + \gamma_2 (u - u_0)^2 + \gamma_3 (u - u_0)^3 + \ldots, \tag{9.69}$$
where $\gamma_c = f'(u_0) = 1/\widehat{w}(k_c)$, $\gamma_2 = f''(u_0)/2$, and $\gamma_3 = f'''(u_0)/6$. To capture the long time behaviour of the system, introduce the scaled time variable $t = \varepsilon^2 \tau$ (and for simplicity, the discussion of any variation on a large spatial scale is dropped). Assume that one is at a certain distance, $\delta$, beyond the bifurcation and set $\gamma_c \rightarrow \gamma_c + \varepsilon^2 \delta$ to reflect this. Upon substitution into (9.66), this yields
$$\varepsilon^3 \frac{\partial u_1}{\partial \tau} = -\sum_{m=0}^{3} \varepsilon^m u_m + w \otimes \left[ f(u_0) + \gamma_c \left( \varepsilon u_1 + \varepsilon^2 u_2 + \varepsilon^3 u_3 \right) + \varepsilon^3 \delta u_1 + \gamma_2 \left( \varepsilon^2 u_1^2 + 2\varepsilon^3 u_1 u_2 \right) + \gamma_3 \varepsilon^3 u_1^3 \right]. \tag{9.70}$$

After balancing terms in powers of $\varepsilon$ and introducing the operator $\mathcal{L}u = -u + \gamma_c\, w \otimes u$, a hierarchy of equations is generated:
$$u_0 = \widehat{w}(0) f(u_0), \tag{9.71}$$
$$0 = \mathcal{L} u_1, \tag{9.72}$$
$$0 = \mathcal{L} u_2 + \gamma_2\, w \otimes u_1^2, \tag{9.73}$$
$$\frac{\partial u_1}{\partial \tau} = \mathcal{L} u_3 + \delta\, w \otimes u_1 + 2\gamma_2\, w \otimes (u_1 u_2) + \gamma_3\, w \otimes u_1^3. \tag{9.74}$$

Equation (9.71) is recognised as the equation defining the homogeneous steady state. One composite pattern that solves the linear equation (9.72) is
$$u_1(x,y,\tau) = A_1(\tau)\mathrm{e}^{i k_c x} + A_2(\tau)\mathrm{e}^{i k_c y} + \mathrm{cc}. \tag{9.75}$$

The form of (9.75) is a sum of periodic patterns corresponding to those from linear stability analysis, multiplied by amplitudes that vary on the slow timescale. Hence, the nullspace of $\mathcal{L}$ is spanned by $\{\mathrm{e}^{\pm i k_c x}, \mathrm{e}^{\pm i k_c y}\}$. The amplitude $A_1$ describes slow variation of a stripe pattern along the $x$ direction and $A_2$ the corresponding variation along the $y$ direction. For $A_1 \neq 0$ and $A_2 = 0$ (or vice versa), the pattern is striped, while if both $A_1$ and $A_2$ are non-zero, and in particular, are equal, the pattern is spotted. A dynamical equation for the complex amplitudes $A_{1,2}(\tau)$ can be obtained by deriving solvability conditions for the higher-order equations (9.73)–(9.74), a method known as the Fredholm alternative. This approach is discussed further in Box 9.5 on page 410. These equations have the general form $\mathcal{L}u_n = g_n(u_0, u_1, \ldots, u_{n-1})$ (with $\mathcal{L}u_1 = 0$). The inner product between two periodic complex functions, having period $2\pi/k_c$, on the plane is defined as


$$\langle U, V \rangle = \frac{1}{|\Omega|} \int_{\Omega} U^*(\mathbf{r}) V(\mathbf{r})\,\mathrm{d}\mathbf{r}, \qquad \Omega = [0, 2\pi/k_c) \times [0, 2\pi/k_c), \tag{9.76}$$

where $*$ denotes complex conjugation. The operator $\mathcal{L}$ is self-adjoint with respect to this inner product, so that
$$\langle u_1, \mathcal{L} u_n \rangle = \langle \mathcal{L} u_1, u_n \rangle. \tag{9.77}$$
Hence, the following set of solvability conditions is obtained:
$$\langle \mathrm{e}^{\pm i k_c x}, g_n \rangle = 0, \qquad \langle \mathrm{e}^{\pm i k_c y}, g_n \rangle = 0, \qquad n \geq 2. \tag{9.78}$$

It is useful to note that \(w \otimes \mathrm{e}^{\mathrm{i}\mathbf{k}\cdot\mathbf{r}} = \widehat{w}(|\mathbf{k}|)\,\mathrm{e}^{\mathrm{i}\mathbf{k}\cdot\mathbf{r}}\) and \(\langle \mathrm{e}^{\mathrm{i}mk_c x}, \mathrm{e}^{\mathrm{i}nk_c x}\rangle = \delta_{m,n}\). The solvability condition for \(n = 2\) is automatically satisfied. Assuming a representation for \(u_2\) as
\[
u_2 = \alpha_0 + \alpha_1 \mathrm{e}^{2\mathrm{i}k_c x} + \alpha_2 \mathrm{e}^{-2\mathrm{i}k_c x} + \alpha_3 \mathrm{e}^{2\mathrm{i}k_c y} + \alpha_4 \mathrm{e}^{-2\mathrm{i}k_c y} + \alpha_5 \mathrm{e}^{\mathrm{i}k_c(x+y)} + \alpha_6 \mathrm{e}^{-\mathrm{i}k_c(x+y)} + \alpha_7 \mathrm{e}^{\mathrm{i}k_c(x-y)} + \alpha_8 \mathrm{e}^{-\mathrm{i}k_c(x-y)},
\]
and balancing \(\mathrm{e}^{\mathrm{i}k_c x}\) terms in (9.73) gives
\[
\alpha_0 = \frac{2\gamma_2\left(|A_1|^2 + |A_2|^2\right)\widehat{w}(0)}{1 - \gamma_c \widehat{w}(0)}, \qquad \alpha_1 = \frac{\gamma_2 A_1^2\, \widehat{w}(2k_c)}{1 - \gamma_c \widehat{w}(2k_c)}, \tag{9.79}
\]
\[
\alpha_5 = \frac{2\gamma_2 A_1 A_2\, \widehat{w}(\sqrt{2}k_c)}{1 - \gamma_c \widehat{w}(\sqrt{2}k_c)}, \qquad \alpha_7 = \frac{2\gamma_2 A_1 A_2^{*}\, \widehat{w}(\sqrt{2}k_c)}{1 - \gamma_c \widehat{w}(\sqrt{2}k_c)}. \tag{9.80}
\]

The solvability condition for (9.74) is
\[
\left\langle \mathrm{e}^{\mathrm{i}k_c x}, \frac{\mathrm{d}u_1}{\mathrm{d}\tau} - \delta\, w \otimes u_1 \right\rangle = \gamma_3 \left\langle \mathrm{e}^{\mathrm{i}k_c x}, w \otimes u_1^3 \right\rangle + 2\gamma_2 \left\langle \mathrm{e}^{\mathrm{i}k_c x}, w \otimes u_1 u_2 \right\rangle. \tag{9.81}
\]
The left-hand side of (9.81) can be calculated using
\[
\frac{\mathrm{d}u_1}{\mathrm{d}\tau} = \frac{\mathrm{d}A_1}{\mathrm{d}\tau}\mathrm{e}^{\mathrm{i}k_c x} + \frac{\mathrm{d}A_2}{\mathrm{d}\tau}\mathrm{e}^{\mathrm{i}k_c y} + \mathrm{cc}, \qquad w \otimes u_1 = \widehat{w}(k_c)\left(A_1\mathrm{e}^{\mathrm{i}k_c x} + A_2\mathrm{e}^{\mathrm{i}k_c y}\right) + \mathrm{cc}, \tag{9.82}
\]
and so
\[
\left\langle \mathrm{e}^{\mathrm{i}k_c x}, \frac{\mathrm{d}u_1}{\mathrm{d}\tau} - \delta\, w \otimes u_1 \right\rangle = \frac{\mathrm{d}A_1}{\mathrm{d}\tau} - \delta A_1 \widehat{w}(k_c) = \frac{\mathrm{d}A_1}{\mathrm{d}\tau} - \frac{\delta A_1}{\gamma_c}. \tag{9.83}
\]
The right-hand side of (9.81) can be calculated using
\[
\left\langle \mathrm{e}^{\mathrm{i}k_c x}, w \otimes u_1^3 \right\rangle = \left\langle w \otimes \mathrm{e}^{\mathrm{i}k_c x}, u_1^3 \right\rangle = 3\gamma_c^{-1} A_1\left(|A_1|^2 + 2|A_2|^2\right). \tag{9.84}
\]
Finally, using the form for \(u_2\), it is found that
\[
\left\langle \mathrm{e}^{\mathrm{i}k_c x}, w \otimes u_1 u_2 \right\rangle = \left\langle w \otimes \mathrm{e}^{\mathrm{i}k_c x}, u_1 u_2 \right\rangle = \gamma_c^{-1}\left(\alpha_0 A_1 + \alpha_1 A_1^{*} + \alpha_5 A_2^{*} + \alpha_7 A_2\right). \tag{9.85}
\]

9.7 Amplitude equations


Fig. 9.9 Pattern selection as determined from a linear stability analysis of the amplitude equations for a balanced Mexican hat kernel, \(w(r) = A\exp(-r^2) - \exp(-r^2/\sigma^2)/\sigma^2\), and a sigmoidal firing rate \(f(u) = (1 + \tanh(\beta(u - h)))/2\). Here, \(A\) is adjusted for each pair \((h, \beta)\) so that \(\widehat{w}(k_c) = 1/f'(0)\). In the regime where \(\Phi, \Psi < 0\), the bifurcation is subcritical (giving rise to a so-called hard excitation); determining the winner of the pattern selection process there requires the treatment of higher-order terms in the amplitude equation analysis. The parameters are \(\sigma = 2\), giving \(k_c = \sqrt{(8\ln 2)/3}\).
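The critical wavenumber quoted in the caption of Fig. 9.9 can be recovered numerically. The sketch below assumes the balanced case \(A = 1\) and uses the known two-dimensional Fourier transform \(\widehat{w}(k) = \pi(\mathrm{e}^{-k^2/4} - \mathrm{e}^{-\sigma^2 k^2/4})\) of the Gaussian-difference kernel:

```python
import numpy as np

# Balanced Mexican hat w(r) = exp(-r^2) - exp(-r^2/sigma^2)/sigma^2 (A = 1).
# Its 2D Fourier transform is w_hat(k) = pi*(exp(-k^2/4) - exp(-sigma^2*k^2/4)).
sigma = 2.0
k = np.linspace(0.01, 5.0, 200001)
w_hat = np.pi * (np.exp(-k**2 / 4) - np.exp(-sigma**2 * k**2 / 4))

kc_numeric = k[np.argmax(w_hat)]        # wavenumber maximising w_hat
kc_exact = np.sqrt(8 * np.log(2) / 3)   # value quoted in the caption of Fig. 9.9
assert abs(kc_numeric - kc_exact) < 1e-3
print(f"kc = {kc_numeric:.4f} (exact sqrt(8 ln 2 / 3) = {kc_exact:.4f})")
```

Setting the \(k\)-derivative of \(\widehat{w}\) to zero gives \(k_c^2 = (8/3)\ln 2\) for \(\sigma = 2\), in agreement with the grid search.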

Note that by projecting onto \(\mathrm{e}^{\mathrm{i}k_c y}\) instead of \(\mathrm{e}^{\mathrm{i}k_c x}\) in the above analysis, one can obtain a governing equation for \(A_2\). However, by symmetry, the equation for \(A_2\) can be obtained simply by interchanging \(A_1\) and \(A_2\). Putting (9.83), (9.84), and (9.85) together, one arrives at the coupled amplitude equations
\[
\gamma_c \frac{\mathrm{d}A_1}{\mathrm{d}\tau} = A_1\left(\delta - \Phi|A_1|^2 - \Psi|A_2|^2\right), \qquad \gamma_c \frac{\mathrm{d}A_2}{\mathrm{d}\tau} = A_2\left(\delta - \Phi|A_2|^2 - \Psi|A_1|^2\right), \tag{9.86}
\]
where
\[
\Phi = -3\gamma_3 - 2\gamma_2^2\left[\frac{2\widehat{w}(0)}{1 - \gamma_c\widehat{w}(0)} + \frac{\widehat{w}(2k_c)}{1 - \gamma_c\widehat{w}(2k_c)}\right], \qquad
\Psi = -6\gamma_3 - 4\gamma_2^2\left[\frac{\widehat{w}(0)}{1 - \gamma_c\widehat{w}(0)} + \frac{2\widehat{w}(\sqrt{2}k_c)}{1 - \gamma_c\widehat{w}(\sqrt{2}k_c)}\right]. \tag{9.87}
\]
Equation (9.86) describes a pair of coupled Stuart–Landau equations. Without loss of generality, assume that \(A_1\) is non-zero. A stripe corresponds to \(A_2 = 0\), with \(A_1 = \sqrt{\delta/\Phi}\), and is stable if and only if \(\Psi > \Phi > 0\). A spot solution is given by \(A_1 = A_2 = \sqrt{\delta/(\Phi + \Psi)}\), and is stable if and only if \(\Phi > \Psi > 0\). Hence, stripes and spots are mutually exclusive as stable patterns. If the firing rate function \(f\) has no quadratic terms (\(\gamma_2 = 0\)), then \(\Phi = -3\gamma_3\) and \(\Psi = -6\gamma_3\). For example, for a firing rate function like \(f(u) = \tanh u \approx u - u^3/3\), one has \(\gamma_3 < 0\) and so \(\Psi > \Phi\). This means that stripes will be selected over spots; the key to observing spots is the presence of quadratic terms in the firing rate function [271]. For a Mexican hat connectivity function, \(\widehat{w}(\sqrt{2}k_c) > \widehat{w}(2k_c)\), and the quadratic term of \(\Psi\) is larger than that of \(\Phi\), so that as \(|\gamma_2|\) increases, spots will be selected in favour of stripes. Figure 9.9 shows a plot of the winner of the pattern selection process in the two-parameter \((h, \beta)\) plane for the sigmoidal firing rate function \(f(u) = (1 + \tanh(\beta(u - h)))/2\) with a balanced Mexican hat kernel.

Box 9.5: Fredholm alternative and solvability conditions

Consider a linear operator \(\mathcal{L}\) with adjoint \(\mathcal{L}^{\dagger}\) acting over a (Banach) space of functions \(u(x)\), \(x \in \mathbb{R}^n\). The Fredholm alternative says that exactly one of the following statements is true: (i) the inhomogeneous problem \(\mathcal{L}u = g\) has a unique solution for all \(g(x)\); or (ii) the homogeneous adjoint problem \(\mathcal{L}^{\dagger}w = 0\) has a non-trivial solution. From this, one can conclude that if (i) has a unique solution, then \(\lambda = 0\) is not an eigenvalue of the adjoint. Conversely, if \(\lambda = 0\) is an eigenvalue of \(\mathcal{L}^{\dagger}\), then (i) either has no solutions or infinitely many solutions. A corollary of the Fredholm alternative states that (i) has a solution if and only if \(g\) is orthogonal to the nullspace of \(\mathcal{L}^{\dagger}\). This solvability condition can be written as \(\langle g, w\rangle = 0\), where \(\mathcal{L}^{\dagger}w = 0\) and the inner product is defined by
\[
\langle U, V\rangle = \frac{1}{|\Omega|}\int_{\Omega} U^{*}(x)\,V(x)\,\mathrm{d}x, \tag{9.88}
\]
where \(U^{*}\) denotes the complex conjugate of \(U\) and the normalisation factor is the Lebesgue measure of the domain \(\Omega \subset \mathbb{R}^n\).
If the operator \(\mathcal{L}\) is self-adjoint, then the solvability condition becomes \(\langle g, v\rangle = 0\), where \(\mathcal{L}v = 0\). Thus, rather than solving the system directly, one can first identify the nullspace of \(\mathcal{L}\) and then apply the solvability condition.

This result can also be applied to the sequence of equations that typically arises when performing perturbative expansions of nonlinear operators. In general, suppose that a solution of a system \(\mathcal{N}u = 0\) is sought. Let \(\mathcal{L}\) be the linear part of \(\mathcal{N}\) and assume that it is self-adjoint. Assume a solution of the form \(u = u_0 + \varepsilon u_1 + \varepsilon^2 u_2 + \ldots\), where \(\varepsilon \ll 1\). Such an expansion typically results in a system of equations of the form


\[
\mathcal{L}u_0 = 0, \qquad \mathcal{L}u_1 = g_1(u_0), \qquad \mathcal{L}u_2 = g_2(u_0, u_1), \qquad \ldots \tag{9.89}
\]
The solvability conditions state that \(\langle g_n, u_0\rangle = 0\) for each \(n \geq 1\). Since the right-hand side of each equation depends only on solutions of the equations above it, the set of equations can be solved sequentially to find expressions for the \(u_n\), and hence for \(u\), up to arbitrary order. For a more comprehensive discussion of the Fredholm alternative, see [727].

Although, for ease of presentation, the focus here has been on stripes and spots, the same techniques can be used to generate amplitude equations for patterns built from larger combinations of plane waves, namely, with a larger set of amplitudes \(A_1(\tau), A_2(\tau), A_3(\tau), \ldots\) in equation (9.67). For the treatment of slow spatial variation, so that the resulting amplitude equations are of Ginzburg–Landau form, as well as how to treat dynamic Turing instabilities (in the presence of axonal delays), see [904].
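Returning to the amplitude equations (9.86), the stripe/spot competition is easy to see numerically. The sketch below uses illustrative values \(\Phi = 1\), \(\Psi = 2\) (so \(\Psi > \Phi > 0\), favouring stripes) and nearly symmetric real initial amplitudes:

```python
# Euler integration of the coupled Stuart-Landau amplitude equations (9.86):
#   gamma_c dA1/dtau = A1*(delta - Phi*|A1|^2 - Psi*|A2|^2)
#   gamma_c dA2/dtau = A2*(delta - Phi*|A2|^2 - Psi*|A1|^2)
gamma_c, delta, Phi, Psi = 1.0, 1.0, 1.0, 2.0   # Psi > Phi > 0: stripes win
A1, A2 = 0.10, 0.09                             # nearly symmetric initial data
dt, T = 0.01, 200.0
for _ in range(int(T / dt)):
    dA1 = A1 * (delta - Phi * A1**2 - Psi * A2**2) / gamma_c
    dA2 = A2 * (delta - Phi * A2**2 - Psi * A1**2) / gamma_c
    A1, A2 = A1 + dt * dA1, A2 + dt * dA2

# Stripe branch: A2 -> 0 and A1 -> sqrt(delta/Phi) = 1
assert abs(A1 - (delta / Phi) ** 0.5) < 1e-6
assert A2 < 1e-6
print(f"A1 = {A1:.6f}, A2 = {A2:.2e} (stripe selected)")
```

The slightly larger amplitude wins the competition and saturates at \(\sqrt{\delta/\Phi}\); swapping to \(\Phi > \Psi > 0\) instead drives the system to the spot branch \(A_1 = A_2\).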

9.8 Interface dynamics

The persistence of localised activity in brain regions has been linked to working memory [175, 349], that is, the ability of the brain to retain information for a short period of time so that it can be used to complete a given task. Localised activity has also been implicated in sensory processing, such as feature selection in the visual system [64]. In a neural field model, localised states (with a region of high activity, the remainder of the tissue being in a low-activity state) are easily induced in models with lateral-inhibition-type interaction kernels and steep sigmoidal firing rates. These are often referred to as bumps in one spatial dimension and spots in two-dimensional systems. Wilson and Cowan established the existence of bumps numerically [944, 945], and Kishimoto and Amari later developed existence and uniqueness results [514]. Major progress on understanding bump dynamics was made by Amari, who realised that explicit results could be obtained for a Heaviside firing rate function \(f(u) = \Theta(u - h)\) with fixed firing threshold \(h\). The high-activity region in this case is defined as that for which \(u\) is above the threshold \(h\) [24]. He also showed how the stability of the bump depends on whether or not perturbations of the bump boundary (the threshold crossing points where \(u = h\)) grow or decay. Interestingly, Amari also developed an analysis of spots; however, this work only appeared in a 1978 book that was never translated from the Japanese [25], and has only recently been summarised in English [26]. Taylor generalised Amari's original one-dimensional analysis by deriving conditions for the existence of radially symmetric spots and determining their stability to radially symmetric perturbations [853]; see also [930]. This analysis was later extended to azimuthal perturbations (essentially recovering the untranslated results of Amari) [201, 307, 308, 505, 690]. In related work, Laing and Troy [553] introduced PDE methods to study symmetry breaking of rotationally symmetric bumps and the formation of multiple bump solutions.

As well as being useful for describing time-independent localised states and their stability, the Amari approach can be used to develop a so-called interface dynamics for the propagation of moving interfaces (separating low- and high-activity regions). This gives a reduced, yet exact, description of the dynamics solely in terms of points (in one-dimensional models) or closed curves (in two-dimensional models) on the level set where \(u = h\). We describe this reduction below and, further, use it to show how linear adaptation can lead to novel breathing and drifting instabilities in neural field models of the form
\[
u_t = -u + w \otimes \Theta(u - h) - g a + I, \qquad a_t = \alpha(-a + u), \tag{9.90}
\]
where \(g\) is the strength of adaptation, described by the variable \(a\), and \(\alpha^{-1}\) its timescale relative to the evolution of \(u\). Here, \(I\) denotes a time-independent spatial drive. Equation (9.90) can also be rewritten as a second-order differential equation:
\[
u_{tt} + (1 + \alpha)u_t + \alpha(1 + g)u = \alpha(\psi + I) + \psi_t, \qquad \psi = w \otimes \Theta(u - h). \tag{9.91}
\]
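For a frozen (constant) drive \(\psi + I = c\), the linear \((u, a)\) dynamics of (9.90) relax to \(u = c/(1+g)\) with \(a = u\), the same \(1/(1+g)\) factor that reappears in the steady-state relation \(u = \psi/(1+g)\) later in this section. A minimal pointwise sketch (parameter values illustrative):

```python
# Pointwise dynamics of (9.90) with a frozen drive psi + I = c:
#   u' = -u - g*a + c,   a' = alpha*(-a + u)
alpha, g, c = 0.5, 1.0, 1.0
u, a = 0.0, 0.0
dt = 0.01
for _ in range(int(200.0 / dt)):
    du = -u - g * a + c
    da = alpha * (-a + u)
    u, a = u + dt * du, a + dt * da

# Steady state: a = u and (1 + g)*u = c
assert abs(u - c / (1 + g)) < 1e-6
assert abs(a - u) < 1e-6
print(f"u = {u:.6f} -> c/(1+g) = {c/(1+g):.6f}")
```

The eigenvalues of the linear system have negative real part, so the forward Euler iteration with a small step converges to the exact fixed point.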

9.8.1 One spatial dimension

Consider a model with one spatial dimension so that \((u, a) = (u(x,t), a(x,t))\) with \(x \in \mathbb{R}\) and \(t \geq 0\). A bump, or set of bumps, is defined by the excited region \(D(t) = \{x \mid u(x,t) \geq h\}\), so that
\[
\psi(x,t) = \int_{D(t)} w(|x - y|)\,\mathrm{d}y. \tag{9.92}
\]
A single bump solution, by definition, has two threshold crossing points, labelled \(x_L\) and \(x_R\). These interfaces are defined by the condition \(u(x_L(t), t) = h = u(x_R(t), t)\), and are shown schematically in Fig. 9.10. Thus, for a single bump, equation (9.92) reduces to
\[
\psi(x,t) = \int_{x_L(t)-x}^{x_R(t)-x} w(|y|)\,\mathrm{d}y. \tag{9.93}
\]

Fig. 9.10 Interface variables \((x_L, x_R)\) for a bump with threshold \(h\).

Introducing the two-component vectors \(U = (u, a)\) and \(\Psi = (\psi + I, 0)\), and realising that \(\psi\) no longer depends explicitly on \(u\), one can exploit linearity to write the solution of (9.90) using matrix exponentials as
\[
U(x,t) = \mathrm{e}^{A(t-t_0)}U(x,t_0) + \int_{t_0}^{t} \mathrm{d}s\, \mathrm{e}^{A(t-s)}\Psi(x,s), \qquad A = \begin{pmatrix} -1 & -g \\ \alpha & -\alpha \end{pmatrix}. \tag{9.94}
\]

Thus, if the pair \((x_L(t), x_R(t))\) is known, then the solution over the whole domain may be reconstructed using (9.93) and (9.94). The equations of motion for \(x_i = x_i(t)\), with \(i \in \{L, R\}\), may be obtained by differentiating the level set (threshold) condition \(u(x_i(t), t) = h\), giving the speed of the interface as
\[
\frac{\mathrm{d}x_i}{\mathrm{d}t} = -\left.\frac{\partial u(x,t)/\partial t}{\partial u(x,t)/\partial x}\right|_{x = x_i(t)}, \qquad i \in \{L, R\}. \tag{9.95}
\]
Using a diagonal representation for \(A\), the solution for \(u(x,t)\) may be evaluated from \(U(x,t)\) (dropping initial data) as
\[
u(\cdot, t) = \int_{-\infty}^{t} \mathrm{d}s\, \eta(t-s)\left[\psi(\cdot, s) + I(\cdot)\right], \tag{9.96}
\]
where the dot \((\cdot)\) notation is used to emphasise that this form of solution also holds in two spatial dimensions. Here,
\[
\eta(t) = \alpha\,\frac{(1 - \lambda_+)\mathrm{e}^{-\lambda_+ t} - (1 - \lambda_-)\mathrm{e}^{-\lambda_- t}}{\lambda_- - \lambda_+}\,\Theta(t), \tag{9.97}
\]
and \(\lambda_\pm\) are the distinct eigenvalues of \(A\):
\[
\lambda_\pm = \frac{-1 - \alpha \pm \sqrt{(1 - \alpha)^2 - 4\alpha g}}{2}. \tag{9.98}
\]
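The roots (9.98) can be cross-checked against the matrix \(A\) of (9.94) directly; a quick sketch (parameter values are those used later in Fig. 9.11):

```python
import numpy as np

alpha, g = 0.025, 1.0   # adaptation parameters (values used in Fig. 9.11)
A = np.array([[-1.0, -g], [alpha, -alpha]])

# Closed-form roots (9.98); complex sqrt covers the oscillatory regime too
disc = (1 - alpha) ** 2 - 4 * alpha * g
lam_plus = (-1 - alpha + np.sqrt(disc + 0j)) / 2
lam_minus = (-1 - alpha - np.sqrt(disc + 0j)) / 2

eigs = np.sort_complex(np.linalg.eigvals(A).astype(complex))
formula = np.sort_complex(np.array([lam_plus, lam_minus]))
assert np.allclose(eigs, formula)
print("lambda_pm from (9.98) match eigvals(A):", formula)
```

Note that \(\lambda_+ + \lambda_- = -(1+\alpha) = \operatorname{tr} A\) and \(\lambda_+\lambda_- = \alpha(1+g) = \det A\), so the formula is consistent with the characteristic polynomial of \(A\).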

Thus, the temporal and spatial derivatives in (9.95) can be obtained from (9.96) exactly. However, a convenient approximation is to consider only slowly moving interfaces. Upon setting \(v_i = \partial u/\partial t|_{x=x_i}\) and \(z_i = \partial u/\partial x|_{x=x_i}\), and using (9.91) and its derivative with respect to \(x\), one obtains the ordinary differential equations (ODEs)
\[
\frac{\mathrm{d}v_i}{\mathrm{d}t} + (1+\alpha)v_i + \alpha(1+g)h = \left.\left[\alpha(\psi + I) + \psi_t\right]\right|_{x=x_i(t)}, \tag{9.99}
\]
\[
\frac{\mathrm{d}^2 z_i}{\mathrm{d}t^2} + (1+\alpha)\frac{\mathrm{d}z_i}{\mathrm{d}t} + \alpha(1+g)z_i = \left.\left[\alpha(\psi_x + I_x) + \psi_{xt}\right]\right|_{x=x_i(t)}, \tag{9.100}
\]
where the following approximation has been used:
\[
\frac{\mathrm{d}z_i}{\mathrm{d}t} = \frac{\mathrm{d}x_i}{\mathrm{d}t}\frac{\partial z_i}{\partial x_i} + \frac{\partial z_i}{\partial t} \simeq \frac{\partial z_i}{\partial t}. \tag{9.101}
\]
These coupled second- and first-order ODEs can be transformed into eight first-order ODEs for the variables \((x_i, v_i, z_i, y_i)\):
\[
\frac{\mathrm{d}x_i}{\mathrm{d}t} = -\frac{v_i}{z_i}, \tag{9.102}
\]
\[
\frac{\mathrm{d}v_i}{\mathrm{d}t} = -(1+\alpha)v_i - \alpha(1+g)h + \alpha\left[\psi(x_i) + I(x_i)\right] + w(x_i - x_L)\frac{v_L}{z_L} - w(x_i - x_R)\frac{v_R}{z_R}, \tag{9.103}
\]
\[
\frac{\mathrm{d}z_i}{\mathrm{d}t} = y_i, \tag{9.104}
\]
\[
\frac{\mathrm{d}y_i}{\mathrm{d}t} = -(1+\alpha)y_i - \alpha(1+g)z_i + \alpha\left[w(x_i - x_L) - w(x_i - x_R) + I_x(x_i)\right] + w'(x_i - x_L)\frac{v_L}{z_L} - w'(x_i - x_R)\frac{v_R}{z_R}. \tag{9.105}
\]
For a drive, \(I(x)\), that is symmetric about the origin, a time-independent bump solution will also be symmetric about the origin and is given by a steady state of (9.102)–(9.105) with \(x_L = -\Delta/2 = -x_R\), where \(\Delta\) is the bump width (see Fig. 9.10), namely \((x_L, v_L, z_L, y_L, x_R, v_R, z_R, y_R) = (-\Delta/2, 0, -z_0, 0, \Delta/2, 0, z_0, 0)\), with
\[
\psi(\Delta/2) + I(\Delta/2) = (1+g)h, \qquad z_0 = \frac{w(\Delta) - w(0) + I_x(\Delta/2)}{1+g}. \tag{9.106}
\]
Here, \(I_x\) denotes the partial derivative of \(I\) with respect to \(x\). The Jacobian for the full system is an \(8 \times 8\) matrix, though, taking advantage of symmetry about the spatial origin, one can reduce the associated eigenvalue problem by considering specific classes of solutions. In particular, consider perturbations of the form \((a, b, c, d, \pm a, \mp b, \pm c, \pm d)\), which are respectively the sum and difference modes considered in [306]. These perturbations correspond to sloshing (side-to-side motion) and breathing (growing and shrinking motion) instabilities, respectively. For these modes of instability, the eigenvalue problem splits into two sub-problems, each of dimension four. Four of these eigenvalues are given by \((\lambda_+, \lambda_-, \lambda_+, \lambda_-)\). The eigenvalues corresponding to the sloshing solution are given by

Fig. 9.11 Steady-state bump interface point \(x_R\) under variation of the amplitude \(I_0\) of the drive current, for \(w(x)\) and \(I(x)\) given by (9.109). In this case, the bump is given by \(q(x) = [I(x) + \psi(x)]/(1+g)\), where \(\psi(x) = P(1, x_L - x, x_R - x) - w_i P(\sigma_i, x_L - x, x_R - x)\) and \(P(s, a, b) = (\mathrm{erf}(b/s) - \mathrm{erf}(a/s))/2\). As \(I_0\) is reduced, the stable fixed point (bump) first undergoes a sloshing instability and then a breathing instability. As the control parameter is decreased further, a saddle-node bifurcation is met, so that for a range of \(I_0\) there are two bump solutions, both of which are unstable. The point of sloshing instability is denoted HS, that of breathing instability HB, and that of the saddle-node bifurcation SN. The space-time plots next to the bifurcation points indicate the type of orbit expected beyond bifurcation (with the grey region indicating where activity is above threshold). The parameters are \(\sigma = 1.2\), \(\sigma_i = 2\), \(w_i = 0.4\), \(\alpha = 0.025\), \(h = 0.3\), and \(g = 1\).

\[
\lambda_{\pm}^{S} = \frac{-\kappa_+ \pm \sqrt{\kappa_+^2 - 4\alpha I_x(\Delta/2)/z_0}}{2}, \tag{9.107}
\]
where \(\kappa_{\pm} = 1 + \alpha + \alpha(w(0) \pm w(\Delta))/z_0\). The eigenvalues for the breathing instability are given by
\[
\lambda_{\pm}^{B} = \frac{-\kappa_- \pm \sqrt{\kappa_-^2 - 4\alpha\left(I_x(\Delta/2) + 2w(\Delta)\right)/z_0}}{2}. \tag{9.108}
\]
Thus, there exist Hopf bifurcations when \(\kappa_{\pm} = 0\), provided that \((I_x(\Delta/2) + w(\Delta) \pm w(\Delta))/z_0 > 0\). Figure 9.11 shows the bifurcation diagram for the choice
\[
w(x) = \frac{1}{\sqrt{\pi}}\mathrm{e}^{-x^2} - \frac{w_i}{\sqrt{\pi}\sigma_i}\mathrm{e}^{-(x/\sigma_i)^2}, \qquad I(x) = I_0\,\mathrm{e}^{-(x/\sigma)^2}, \tag{9.109}
\]
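For the kernel in (9.109), the non-local drive has the closed form quoted in the caption of Fig. 9.11, \(\psi(x) = P(1, x_L - x, x_R - x) - w_i P(\sigma_i, x_L - x, x_R - x)\) with \(P(s,a,b) = (\mathrm{erf}(b/s) - \mathrm{erf}(a/s))/2\). A quick consistency check against direct quadrature (the bump edges are chosen arbitrarily here):

```python
import math

w_i, sigma_i = 0.4, 2.0     # kernel parameters from Fig. 9.11

def w(x):
    # Difference-of-Gaussians kernel (9.109)
    return (math.exp(-x**2) / math.sqrt(math.pi)
            - w_i * math.exp(-(x / sigma_i) ** 2) / (math.sqrt(math.pi) * sigma_i))

def P(s, a, b):
    return (math.erf(b / s) - math.erf(a / s)) / 2

def psi_erf(x, xL, xR):
    # Closed-form psi from the caption of Fig. 9.11
    return P(1.0, xL - x, xR - x) - w_i * P(sigma_i, xL - x, xR - x)

def psi_quad(x, xL, xR, n=20000):
    # midpoint rule for int_{xL}^{xR} w(x - y) dy
    h = (xR - xL) / n
    return sum(w(x - (xL + (j + 0.5) * h)) for j in range(n)) * h

xL, xR = -1.0, 1.0          # illustrative bump edges
for x in (-0.7, 0.0, 0.4, 1.5):
    assert abs(psi_erf(x, xL, xR) - psi_quad(x, xL, xR)) < 1e-7
print("closed-form psi agrees with direct quadrature")
```

The agreement follows from the substitution \(y \mapsto y - x\) and the evenness of \(w\), which turns each Gaussian piece of the kernel into an error-function difference.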


under variation of the amplitude \(I_0\) of the external current. For \(I_0 = 1\), there is a unique stable fixed point corresponding to a stable bump. As \(I_0\) is reduced, the fixed point undergoes two Hopf instabilities in turn, the first of which leads to a sloshing mode and the second to a breathing mode. However, the assumption of slowly moving interfaces means that the ODEs (9.102)–(9.105) should not be used to describe the dynamics of the emergent sloshing and breathing solutions, nor to determine whether the Hopf bifurcations are sub- or super-critical. This issue is treated further by Folias [306] using an amplitude equation approach; alternatively, one can abandon the convenient slowly moving interface approximation and make use of (9.96), as we do in the next section. For a study of travelling fronts in the model (9.90) without adaptation, see Prob. 9.11.

9.8.2 Two spatial dimensions

In two spatial dimensions, consider again the neural field model given by (9.90), or equivalently by (9.96), though for simplicity drop discussion of external drive and set \(I = 0\). The domain \(\mathbb{R}^2\) is decomposed as \(\mathbb{R}^2 = \Omega_+ \cup \partial\Omega \cup \Omega_-\), where \(\partial\Omega\) represents the level set separating the excited region \(\Omega_+\) from the quiescent region \(\Omega_-\). These domains are given explicitly by \(\Omega_+(t) = \{\mathbf{r} \in \mathbb{R}^2 \mid u(\mathbf{r},t) > h\}\), \(\Omega_-(t) = \{\mathbf{r} \in \mathbb{R}^2 \mid u(\mathbf{r},t) < h\}\), and \(\partial\Omega(t) = \{\mathbf{r} \in \mathbb{R}^2 \mid u(\mathbf{r},t) = h\}\). Assume that \(\partial\Omega(t)\) is a closed contour (or a finite set of disconnected closed contours). Differentiation of the level set condition \(u(\partial\Omega(t), t) = h\) gives the normal velocity rule:
\[
c_n \equiv \mathbf{n}\cdot\frac{\mathrm{d}}{\mathrm{d}t}\partial\Omega = \left.\frac{u_t(\mathbf{r},t)}{|\nabla_{\mathbf{r}} u(\mathbf{r},t)|}\right|_{\mathbf{r}=\partial\Omega(t)}, \tag{9.110}
\]
where \(\mathbf{n} = -\nabla_{\mathbf{r}}u/|\nabla_{\mathbf{r}}u|\) is the normal vector along \(\partial\Omega(t)\). The analogue of (9.93) for a localised state in two spatial dimensions is
\[
\psi(\mathbf{r},t) = \int_{\Omega_+(t)} \mathrm{d}\mathbf{r}'\, w(|\mathbf{r}-\mathbf{r}'|). \tag{9.111}
\]
By differentiating (9.96), explicit expressions for \(u_t\) and \(\mathbf{z} = \nabla_{\mathbf{r}}u\) can be generated as
\[
u_t(\mathbf{r},t) = \eta(0)\psi(\mathbf{r},t) + \int_{-\infty}^{t} \mathrm{d}s\, \dot{\eta}(t-s)\,\psi(\mathbf{r},s), \tag{9.112}
\]
\[
\mathbf{z}(\mathbf{r},t) = \int_{-\infty}^{t} \mathrm{d}s\, \eta(t-s)\,\nabla_{\mathbf{r}}\psi(\mathbf{r},s), \tag{9.113}
\]
where \(\eta(t)\) is given by (9.97). In the following, it will be shown that \(c_n\) can be expressed solely in terms of integrals along \(\partial\Omega(t)\). First consider the denominator in (9.110). The term \(\nabla_{\mathbf{r}}\psi\) in (9.113) can be constructed as a line integral using an integral vector identity:

\[
\nabla_{\mathbf{r}}\psi(\mathbf{r},t) = \int_{\Omega_+(t)} \mathrm{d}\mathbf{r}'\, \nabla_{\mathbf{r}} w(|\mathbf{r}-\mathbf{r}'|) = -\oint_{\partial\Omega(t)} \mathrm{d}s\, \mathbf{n}(s)\, w(|\mathbf{r}-\mathbf{r}'(s)|). \tag{9.114}
\]

The representation of \(\psi(\mathbf{r},t)\) in terms of a line integral rather than a double integral can also be achieved using the result (see Box 9.6 on page 417)
\[
\psi(\mathbf{r},t) = \oint_{\partial\Omega(t)} \mathrm{d}s\, \varphi(|\mathbf{r}-\gamma(s)|)\, \frac{\mathbf{r}-\gamma(s)}{|\mathbf{r}-\gamma(s)|}\cdot\mathbf{n}(s) + KC, \tag{9.115}
\]
where \(s \in S^1\) is a continuous parametrisation of points around the contour produced by the mapping \(\gamma : S^1 \to \partial\Omega\). Here,
\[
\varphi(r) = \frac{1}{r}\int_r^{\infty} s\, w(s)\,\mathrm{d}s, \qquad K = \int_{\mathbb{R}^2} w(\mathbf{r})\,\mathrm{d}\mathbf{r}, \qquad C = \begin{cases} 1, & \mathbf{r}\in\Omega_+ \\ 1/2, & \mathbf{r}\in\partial\Omega \\ 0, & \mathbf{r}\in\Omega_- \end{cases}. \tag{9.116}
\]
Hence, the normal velocity rule (9.110) can be expressed solely in terms of one-dimensional line integrals involving the shape of the active region (which is prescribed by \(\partial\Omega\)). This is a substantial reduction in description compared to the full space-time model, yet it is exact.

Box 9.6: Line integral representation for the non-local interaction \(\psi\)

Recall the divergence theorem for a generic vector field \(\mathbf{F}\) on a domain \(B\) with boundary \(\partial B\),

\[
\int_B (\nabla\cdot\mathbf{F})\,\mathrm{d}\mathbf{r} = \oint_{\partial B} \mathbf{F}\cdot\mathbf{n}\,\mathrm{d}s, \tag{9.117}
\]
where \(\mathbf{n}\) is the unit normal vector on \(\partial B\). Consider a rotationally symmetric two-dimensional synaptic weight kernel \(w(\mathbf{r}) = w(r)\) that satisfies \(\int_{\mathbb{R}^2} \mathrm{d}\mathbf{r}\, w(\mathbf{r}) = K\), for some finite constant \(K\), and introduce a function \(g(\mathbf{r}) : \mathbb{R}^2 \to \mathbb{R}\) such that
\[
w(\mathbf{r}) = (\nabla\cdot\mathbf{F})(\mathbf{r}) + g(\mathbf{r}). \tag{9.118}
\]
Now considering a function \(\varphi(r) : \mathbb{R}_+ \to \mathbb{R}\) which satisfies the condition \(\lim_{r\to\infty} r\varphi(r) = 0\), the vector field can be written using polar coordinates, that is, \(\mathbf{F} = \varphi(r)(\cos\theta, \sin\theta) = \varphi(r)\,\mathbf{r}/|\mathbf{r}|\) with \(\mathbf{r} = r(\cos\theta, \sin\theta)\). Transforming the expressions \(K\) and \(g\) into polar coordinates, integrating equation (9.118), and using the divergence theorem yields


\[
K = \int_{\mathbb{R}^2} \left[(\nabla\cdot\mathbf{F})(\mathbf{r}) + g(\mathbf{r})\right]\mathrm{d}\mathbf{r} = \lim_{R\to\infty}\oint_{C_R} \mathbf{F}\cdot\mathbf{n}\,\mathrm{d}s + \int_{\mathbb{R}^2} g(\mathbf{r})\,\mathrm{d}\mathbf{r} = 2\pi\lim_{R\to\infty} R\,\varphi(R) + \int_{\mathbb{R}^2} g(\mathbf{r})\,\mathrm{d}\mathbf{r}, \tag{9.119}
\]
where \(C_R\) is a circle of radius \(R\) centred on the origin. Since the first term vanishes, one may set \(g(\mathbf{r}) = K\delta(\mathbf{r})\). The equation for \(\varphi(r)\) can be deduced by writing
\[
w(r) = \frac{\partial\varphi}{\partial r}(r) + \frac{1}{r}\varphi(r), \qquad r > 0. \tag{9.120}
\]
The integration of (9.120) yields
\[
\varphi(r) = \frac{1}{r}\int_r^{\infty} s\,w(s)\,\mathrm{d}s. \tag{9.121}
\]

Using the above results means that (9.111) can be evaluated as
\[
\psi(\mathbf{r},t) = \int_{\Omega_+(t)} \mathrm{d}\mathbf{r}'\, w(|\mathbf{r}-\mathbf{r}'|)
= \oint_{\partial\Omega(t)} \mathrm{d}s\, \mathbf{F}(\mathbf{r}-\gamma(s))\cdot\mathbf{n}(s) + K\int_{\Omega_+(t)} \mathrm{d}\mathbf{r}'\, \delta(\mathbf{r}-\mathbf{r}')
= \oint_{\partial\Omega(t)} \mathrm{d}s\, \varphi(|\mathbf{r}-\gamma(s)|)\,\frac{\mathbf{r}-\gamma(s)}{|\mathbf{r}-\gamma(s)|}\cdot\mathbf{n}(s) + KC. \tag{9.122}
\]

Here γ : S1 → ∂ Ω is a continuous parametrisation of the boundary, and the integration over the Dirac delta function gives C = 1 if r ∈ Ω+ , C = 0 if r ∈ Ω− , and C = 1/2 if r ∈ ∂ Ω. For a further discussion of the use of integral representations of the type (9.122) in describing the evolution of closed planar shapes, see [355].
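The line-integral representation (9.115)–(9.116) can be sanity-checked for a concrete case. The sketch below takes a Gaussian kernel \(w(r) = \mathrm{e}^{-r^2}/\pi\) (so \(K = 1\) and \(\varphi(r) = \mathrm{e}^{-r^2}/(2\pi r)\)), a circular excited region, and an arbitrarily chosen interior field point, and compares the boundary integral with the direct double integral (9.111):

```python
import math

R = 1.2                      # radius of circular excited region
rx, ry = 0.3, 0.0            # interior field point (so C = 1)

def w(r):
    return math.exp(-r * r) / math.pi     # K = int of w over R^2 = 1

def phi(r):
    # phi(r) = (1/r) * int_r^inf s w(s) ds = exp(-r^2)/(2*pi*r)
    return math.exp(-r * r) / (2 * math.pi * r)

# Line integral (9.115) over gamma(s) = R(cos s, sin s), outward normal n = gamma/R
N = 4000
line = 0.0
for j in range(N):
    t = 2 * math.pi * (j + 0.5) / N
    gx, gy = R * math.cos(t), R * math.sin(t)
    dx, dy = rx - gx, ry - gy
    d = math.hypot(dx, dy)
    dot = (dx * gx + dy * gy) / (d * R)   # (r - gamma)/|r - gamma| . n
    line += phi(d) * dot * (2 * math.pi * R / N)
psi_line = line + 1.0                      # + K*C with K = 1, C = 1

# Direct double integral (9.111) in polar coordinates about the origin
M = 600
psi_area = 0.0
for i in range(M):
    r = R * (i + 0.5) / M
    for j in range(M):
        t = 2 * math.pi * (j + 0.5) / M
        px, py = r * math.cos(t), r * math.sin(t)
        psi_area += w(math.hypot(rx - px, ry - py)) * r * (R / M) * (2 * math.pi / M)

assert abs(psi_line - psi_area) < 1e-3
print(f"line integral {psi_line:.6f} vs area integral {psi_area:.6f}")
```

At the centre of the disc both sides reduce analytically to \(1 - \mathrm{e}^{-R^2}\), which is a useful special case for debugging.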

After substitution into (9.110), a time-independent localised state with \(\psi(\mathbf{r},t) \to \psi(\mathbf{r})\) gives a self-consistent normal velocity \(c_n = 0\). From (9.96), this generates the time-independent activity \(u(\mathbf{r}) = \widehat{\eta}(0)\psi(\mathbf{r})\), where
\[
\widehat{\eta}(\lambda) = \int_0^{\infty} \mathrm{d}s\, \eta(s)\,\mathrm{e}^{-\lambda s} = \frac{\alpha(1+\lambda)}{(\lambda + \lambda_+)(\lambda + \lambda_-)}, \tag{9.123}
\]

and \(\lambda_\pm\) are given by (9.98). Hence, \(u(\mathbf{r}) = \psi(\mathbf{r})/(1+g)\). Consider now the case of a spot solution with an active region defined by a circle of radius \(R\), so that \(\psi(\mathbf{r}) = \psi(r)\), where \(r = |\mathbf{r}|\). In this case, it is easy to construct \(\psi\) from (9.111) as
\[
\psi(r) = \int_{|\mathbf{r}'| \leq R} \mathrm{d}\mathbf{r}'\, w(|\mathbf{r}-\mathbf{r}'|), \tag{9.124}
\]
where the kernel is taken to be a piecewise constant Mexican hat:
\[
w(r) = \begin{cases} w_+ > 0, & r \leq \sigma_1 \\ w_- < 0, & \sigma_1 < r \leq \sigma_2 \\ 0, & r > \sigma_2 \end{cases}, \qquad \sigma_2 > \sigma_1. \tag{9.125}
\]
The integral in (9.124) can be split as

\[
\psi(r) = w_+ \int_{\mathcal{R}_{\sigma_1}(\mathbf{r})} \mathrm{d}\mathbf{r}' + w_- \int_{\mathcal{R}_{\sigma_2}(\mathbf{r})} \mathrm{d}\mathbf{r}' - w_- \int_{\mathcal{R}_{\sigma_1}(\mathbf{r})} \mathrm{d}\mathbf{r}', \tag{9.126}
\]
where
\[
\mathcal{R}_{\sigma}(\mathbf{r}) = \{\mathbf{r}' \in \mathbb{R}^2 \mid |\mathbf{r}'| < R,\ |\mathbf{r}-\mathbf{r}'| < \sigma\}. \tag{9.127}
\]

Introducing the area \(A_+(r, \sigma)\) as
\[
A_+(r, \sigma) = \int_{\mathcal{R}_{\sigma}(\mathbf{r})} \mathrm{d}\mathbf{r}', \tag{9.128}
\]
gives
\[
\psi(r) = (w_+ - w_-)A_+(r, \sigma_1) + w_- A_+(r, \sigma_2), \tag{9.129}
\]

and the self-consistent equation for a stationary radially symmetric spot, \(u(R) = h\), takes the form
\[
h(1+g) = (w_+ - w_-)A_+(R, \sigma_1) + w_- A_+(R, \sigma_2). \tag{9.130}
\]

The integral (9.128) can be evaluated using a geometric approach [416], as described in Box 9.7 on page 420, giving
\[
A_+(r, \sigma) = A(R, \phi_0(r, \sigma)) + A(\sigma, \phi_1(r, \sigma)), \tag{9.131}
\]
where
\[
A(r, \phi) = r^2(\phi - \sin\phi)/2, \qquad \phi_0(r, \sigma) = 2\cos^{-1}\!\left[\frac{R^2 - \sigma^2 + r^2}{2Rr}\right], \qquad \phi_1(r, \sigma) = 2\cos^{-1}\!\left[\frac{\sigma^2 - R^2 + r^2}{2\sigma r}\right]. \tag{9.132}
\]


Box 9.7: Integral evaluation using geometry

The area \(A_+(r, \sigma)\) given by (9.128) can be calculated in terms of the area of overlap of two circles, one with centre \((0,0)\) and radius \(R\), and the other with centre \(\mathbf{r}\) and radius \(\sigma\) [416]. Consider a portion of a disk, with radius \(r_0\), whose upper boundary is a circular arc and whose lower boundary is a chord making a central angle \(\phi_0 < \pi\), shown as the shaded region in (a). The area \(A = A(r_0, \phi_0)\) of the (shaded) segment is given by the area of the circular sector (the entire wedge-shaped portion) minus the area of an isosceles triangle, namely,
\[
A(r_0, \phi_0) = \frac{\phi_0}{2\pi}\pi r_0^2 - r_0\sin(\phi_0/2)\, r_0\cos(\phi_0/2) = \frac{1}{2}r_0^2\left(\phi_0 - \sin\phi_0\right). \tag{9.133}
\]

The area of the overlap of two circles, as illustrated in (b), can be constructed as the total area \(A(r_0, \phi_0) + A(r_1, \phi_1)\). To determine the angles \(\phi_{0,1}\) in terms of the centres, \((x_0, y_0)\) and \((x_1, y_1)\), and radii, \(r_0\) and \(r_1\), of the two circles, one may use the cosine formula relating the lengths of the three sides of the triangle formed by joining the centres of the circles to a point of intersection. Denoting the distance between the two centres by \(d\), where \(d^2 = (x_0 - x_1)^2 + (y_0 - y_1)^2\), then \(\phi_0(r, \sigma) = 2\cos^{-1}[(R^2 - \sigma^2 + r^2)/(2Rr)]\) and \(\phi_1(r, \sigma) = 2\cos^{-1}[(\sigma^2 - R^2 + r^2)/(2\sigma r)]\).

To determine solution stability, consider a perturbation of the interface, with a corresponding perturbation of the field away from the interface [201]. For the case of a spot, write the unperturbed interface as \(\mathbf{r} = R(\cos\theta, \sin\theta)\) and the perturbed interface as \(\tilde{\mathbf{r}}(t)\) (a small deviation from \(\mathbf{r}\)). The corresponding fields on the boundary are written as \(u(\mathbf{r}) = h\) and \(u(\tilde{\mathbf{r}}(t), t) = h\) for the unperturbed and perturbed fields, respectively. Given that both fields take the threshold value on their respective interfaces, \(\delta u(t) = u(\tilde{\mathbf{r}}(t), t) - u(\mathbf{r}) = 0\). This can be calculated as


\[
\delta u(t) = u(\tilde{\mathbf{r}}(t), t) - u(\mathbf{r}) = \int_0^{\infty} \mathrm{d}s\, \eta(s)\left[\int_{\tilde{\Omega}_+(t-s)} \mathrm{d}\mathbf{r}'\, w(|\tilde{\mathbf{r}}(t)-\mathbf{r}'|) - \int_{\Omega_+} \mathrm{d}\mathbf{r}'\, w(|\mathbf{r}-\mathbf{r}'|)\right] = 0.
\]
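The geometric construction of Box 9.7, used in (9.131)–(9.132), can be validated against the standard closed form for the overlap of two equal circles. A sketch:

```python
import math

def A_seg(r, phi):
    # Circular segment area (9.133): A(r, phi) = r^2 (phi - sin phi)/2
    return r * r * (phi - math.sin(phi)) / 2

def A_plus(r, sigma, R):
    # Overlap area of a circle of radius R at the origin and a circle of
    # radius sigma at distance r, per (9.131)-(9.132); valid for
    # |R - sigma| < r < R + sigma.
    phi0 = 2 * math.acos((R**2 - sigma**2 + r**2) / (2 * R * r))
    phi1 = 2 * math.acos((sigma**2 - R**2 + r**2) / (2 * sigma * r))
    return A_seg(R, phi0) + A_seg(sigma, phi1)

# Check against the standard result for two unit circles a distance d apart:
# overlap = 2*acos(d/2) - (d/2)*sqrt(4 - d^2)
for d in (0.5, 1.0, 1.5):
    exact = 2 * math.acos(d / 2) - (d / 2) * math.sqrt(4 - d * d)
    assert abs(A_plus(d, 1.0, 1.0) - exact) < 1e-12
print("A_plus matches the equal-circle overlap formula")
```

With `A_plus` in hand, the spot condition (9.130) becomes a scalar root-finding problem in \(R\) for given \(w_\pm\), \(\sigma_{1,2}\), \(h\), and \(g\).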

9.2. Consider a single population model in one spatial dimension given by 1 u t = −u + α

R

dyw(x − y) f (u(y, t)) − ga,

at = u − a,

(9.144)

where \(x \in \mathbb{R}\) and \(t \in \mathbb{R}_+\).
(i) Show that the spectrum for a Turing instability is defined by
\[
\mathcal{E}(\lambda, k) \equiv (\alpha + \lambda)(1 + \lambda) - \alpha\gamma\,\widehat{w}(k)(1 + \lambda) + \alpha g = 0,
\]

(9.145)

where \(\gamma = f'(u)\) and \((1 + g)u = \widehat{w}(0)f(u)\), and \(\widehat{w}(k) = \int_{\mathbb{R}} \mathrm{d}y\, \mathrm{e}^{-\mathrm{i}ky}\,w(y)\).
(ii) Show that for \(w(x) = w(|x|)\), the spectrum lies on the curve defined by

μ 2 + ω 2 + 2μ = α g − 1,

λ = μ + iω.

(9.146)

(iii) Show that the spectrum lies to the left of the line \(\mu = -(1 + \alpha - \alpha\gamma\widehat{w}(k_c))/2\), where \(\widehat{w}(k_c) = \max_k \widehat{w}(k)\).

(kc ) − 1 = 1/α , γw

α g − 1 > 0, √ and that the emergent frequency of patterns is ωc = α g − 1.

(9.147)

9.3. Using the same notation as in Prob. 9.2, consider a scalar neural field with a Mexican hat kernel and a single discrete delay τ : u t = −u +

R

dyw(x − y) f ◦ u(y, t − τ ),

x ∈ R, t ∈ R+ .

(9.148)

Problems

427

(i) Show that the spectral problem takes the form (k)e−λ τ = 0. E (λ , k) ≡ 1 + λ − γ w

(9.149)

(ii) Show that this defines a τ -dependent curve (1 + μ ) tan(ωτ ) + ω = 0.

(9.150)

(iii) Show that at bifurcation, the emergent frequency satisfies tan(ωc τ ) = −ωc , and that there is a non-zero solution with π /2 < ωc τ < π . Assuming that f is monotonically increasing, show that dynamic patterns can only occur if (k0 ) < 0 for some k0 = 0. [Hint: use the result that cos(ωc τ ) < 0]. w (iv) Calculate k0 at the point of instability for the choice of an inverted wizard hat function: w(x) = (|x − 1|)e−|x| . 9.4. Using the same notation as in Prob. 9.3, consider a scalar neural field with a refractory term: 1 1 t u t = −u + 1 − u(s)ds f w(x − y)u(y, t) dy + I . (9.151) α R t−R R (i) Show that the spectral problem takes the form E (λ , k) ≡ (1 + λ /r )λ − λ A(k) + g(1 − e−λ ) = 0,

(9.152)

(0)u + I , and where r = α R, A(k) = (1 − u) f (z) w(k), g = f (z), z = w u/(1 − u) = f (z). (ii) Show that the spectrum lies on the curve

ω (μ 2 + ω 2 ) = rg[ω (1 − e−μ cos ω ) + e−μ μ sin ω )].

(9.153)

(iii) Show that there is a dynamic instability defined by 1 − A(kc ) − g cos ωc = 0,

ωc2 = rg(1 − cos ωc ),

A(kc ) > 1. (9.154)

(iv) Show that a necessary condition for ωc > 0 is rg > 2. 9.5. Consider the ring model for orientation preference in a cortical hypercolumn:

∂ 1 u(θ , t) = −u(θ , t) + ∂t π

0

π

w(θ , θ ) f (u(θ , t))dθ + h(θ ),

(9.155)

where u(θ , t) denotes the activity at time t of a local population of cells with orientation preference θ ∈ [0, π ), w(θ , θ ) is the strength of synaptic weights between cells with orientation preference θ and θ and h(θ ) is the feedforward input from the lateral geniculate nucleus expressed as a function of θ [64].

428

9 Firing rate tissue models

(i) Show that if w(θ , θ ) is chosen to have the symmetry of a circle (the O(2) group) with w(θ , θ ) = w(θ − θ ) and w(·) is an even π -periodic function, then it can be written in the form: w(θ ) = W0 + 2 ∑ Wn cos(2n θ ), Wn ∈ R.

(9.156)

n≥0

(ii) In the case of a constant input h(θ ) = h 0 , show that there exists at least one equilibrium solution of equation (9.155), which satisfies the algebraic equation u 0 = W0 f (u 0 ) + h 0 .

(9.157)

(iii) Linearise the model around an equilibrium and show that there are solutions of the form v(θ , t) = v(θ )eλ t where v(θ ) = An e2in θ + cc,

n ∈ Z,

(9.158)

and λ = λn , where λn = −1 + γ Wn ∈ R and γ = f (u 0 ). (iv) Consider the case that W1 = maxm Wm and show that there is a Turing instability when γ > 1/W1 and that the excited pattern describes a tuning curve for orientation preference centred about the arbitrary point θ0 with v(θ ) = |A| cos(2(θ − θ0 )),

(9.159)

and complex amplitude A = |A|e−2i θ0 . (v) With the inclusion of an additional small amplitude input so that h(θ ) = h 0 + ε cos(2(θ − θ )), show that the O(2) symmetry of (9.159) is broken, and locks the peak of the tuning curve to the stimulus orientation, that is, θ0 = θ . 9.6. Consider a scalar neural field model in one spatial dimension with a spacedependent axonal delay: ∂ 2 u(x, t) = w(|x − y|) f ◦ u(y, t − |x − y|/v)dy, (9.160) 1+ ∂t R with x ∈ R, t ∈ R+ , and v > 0. (i) Show that a Turing analysis of the homogeneous steady state u(x, t) = u leads to a spectral equation of the form E (λ , k) = 0 where (k, λ ), E (λ , k)=(1 + λ )2 − f (u)W

(k, λ )= W

R

w(x)e−ikx e−λ |x|/v dx.

(9.161) (ii) For an inverted wizard hat w(x) = (|x| − 1)e−|x| , show that u = 0 and 2 2 2 (k, λ ) = −2λ /v[(1 + λ /v) − k ] − 4k (1 + λ /v) . W 2 2 2 [(1 + λ /v) + k ]

(9.162)

(iii) Use the above to plot the continuous spectrum in the complex λ -plane and find values of (v, γ ) that lead to a dynamic Turing instability, where γ = f (u).

Problems

429

9.7. Consider a next-generation neural field model in one spatial dimension (no shunts), with population firing rate R(x, t) and average membrane voltage V (x, t), with x ∈ R and t ∈ R+ , where Qu = w ⊗ R,

τ

(w ⊗ R)(x, t) =

∂R Δ , = 2RV + ∂t πτ

τ

∞

−∞

w(x − y)R(y, t) dy

∂V = μ0 + V 2 − π 2 τ 2 R 2 + κ u. ∂t

(9.163) (9.164)

Here, κ ∈ R is a global coupling strength, w(x) = w(|x|), and Q is a linear temporal differential operator with normalised Green’s function η (t). (i) Show that the steady state (R(x, t), V (x, t)) = (R0 , V0 ) satisfies 0 = 2R0 V0 +

Δ (0)R0 , , 0 = μ0 + V02 − π 2 τ 2 R02 + κ w πτ

(9.165)

∞ (k) = −∞ where w w(x)e−ikx dx. (ii) For the choice w(x) = e−|x|/σ [2 cos(ρ x/σ ) − 1], show that

2σ . 1 + ( ρ − k σ )2 (9.166) (iii) Show that the steady state is linearly stable if Re λ (k) < 0 for all k ∈ R, where E (λ , k) = 0 and det A(λ ) (k). E (λ , k) = − 2κ R 0 w (9.167) (λ ) η (k)=a(k, σ , ρ ) + a(k, σ , −ρ )−a(k, σ , 0), a(k, σ , ρ ) = w

(λ ) = Here, η

∞ 0

η (t)e−λ t dt, and A(λ ) = τλ I2 − J , with 2V0 2R0 J= . −2π 2 τ 2 R0 2V0

(9.168)

(iv) Show that a static Turing instability occurs when ρ = ρc , defined implicitly by (kc ) = w

det J , 2κ R 0

(9.169)

(kc ) = maxk w (k). where w Consider a parameter variation away from the neutral curve (where λ = 0) of the form ρ = ρc + ε 2 δ for small ε and δ ∼ O(1). Consider a multiple scales analysis with the introduction of T = ε 2 t and X = ε x and write (R(x, t), V (x, t)) = (R0 , V0 ) + n ∑∞ n=1 ε (Rn (x, X, t, T ), Vn (x, X, t, T )). (v) Introduce (η ∗ R)(x, t) = R(x, X, t, T ),

∞ 0

η (s)R(x, t − s) ds and show that for R(x, t) →

430

9 Firing rate tissue models

∞

∞

dy η (s)w(y)R(x − y, ε (x − y), t − s, ε 2 (t − s)) ∂R ∂R 1 ∂2R x 2 t xx η ∗ w ⊗ R − εη ∗ w ⊗ , −ε η ∗w⊗ − η ∗w ⊗ ∂X ∂T 2 ∂ X2 (9.170) ds

0

−∞

where η t (t) = t η (t), w x (x) = xw(x), w x x = x 2 w(x), ∗ denotes temporal convolution, and ⊗ denotes spatial convolution. (vi) Obtain a hierarchy of equations for Wn = (Rn , Vn ) given by (9.165) and

∂ W1 (9.171) = L W1 , ∂t ∂ W2 2R1 V1 τ , (9.172) = L W2 + V12 − π 2 τ 2 R12 − κη ∗ w x ⊗ ∂∂RX1 ∂t ∂ W3 ∂ W1 R1 V2 + R2 V1 +τ = L W3 + 2 τ V1 V2 − π 2 τ 2 R1 R2 ∂t ∂T 0 +κ , 2 δ η ∗ wρ ⊗ R1 − η t ∗ w ⊗ ∂∂RT1 + 21 η ∗ w x x ⊗ ∂∂ XR21 − η ∗ w x ⊗ ∂∂RX2 (9.173) where wρ = ∂ w/∂ ρ ρ =ρc , and L is the linear operator τ

L =J+

0 0 . κη ∗ w⊗ 0

(vii) Show that the solution for W1 is of the form W1 (x, X, t, T ) = Z (X, T )eλ t eikx + cc wc , wc ∈ R2 ,

(9.174)

(9.175)

where λ satisfies det[A(λ ) − G(k, λ )] = 0 (and cf equation (9.167)) and 0 0 . (9.176) G(k, λ ) = (λ ) κη w(k) 0 (viii) At bifurcation, show that wc = (Rc , Vc ) is the null eigenvector of H (kc , 0) where H (k, λ ) = J + G(k, λ ). (ix) At bifurcation, show that the solution for W2 is of the form W2 = w+ e2ikc x + ∗ and H (2kc , 0)w+ = −Z 2 P, H (0, 0) w− e−2ikc x + w0 + χ W1 , where w+ = w− 2 2 w0 = −2|Z | P, where P = (2Rc Vc , Vc − π 2 τ 2 Rc2 ) (x) Using the Fredholm alternative, show that the amplitude Z evolves according to the Ginzburg–Landau equation (0) w(kc )β ) (τ − η ρ (kc ), where α = δ β w

∂Z β ∂2Z (kc ) , = Z (α − γ |Z |2 ) − w ∂T 2 ∂ X2

(9.177)

Problems

431

0 Vc Rc −1 H (2kc , 0) + 2H −1 (0, 0) P, , γ = 2bc · 2 2 Rc −π τ Rc Vc (9.178) where bc is the null eigenvector of H T (kc , 0), and · denotes the inner product between two vectors. √ (xi) Show that if α , γ > 0, then a stable branch of solutions with amplitude ρ − ρc emerges for ρ > ρc .

β = κ bc ·

9.8. Consider a two-dimensional neural field model

    u_t(r, t) = −u(r, t) + ∫_{ℝ²} w(r − r′) f ∘ u(r′, t) dr′,   r ∈ ℝ², t ∈ ℝ⁺,   (9.179)

with rotationally symmetric kernel w and sigmoidal firing rate f.
(i) Show that there is a Turing instability to a spatially periodic pattern with wavenumber k_c when

    ŵ(k_c) = 1/f′(u),   ŵ(k) = ∫_{ℝ²} w(r) e^{−ik·r} dr,   k = |k|,   (9.180)

where u is a homogeneous steady state given by u = ŵ(0) f(u).
(ii) Show that spatial hexagonal patterns can be written in the form

    Σ_{n=1}^{3} A_n e^{i k_n · r} + c.c.,   (9.181)

where k_n = k_c (cos φ_n, sin φ_n) with φ₁ = 0, φ₂ = 2π/3, φ₃ = −2π/3.
(iii) Show that the equations governing the time evolution of the amplitudes A_n ∈ ℂ just beyond a Turing instability take the form

    dA_n/dt = A_n [1 − Γ₁ |A_n|² − Γ₂ (|A_{n+1}|² + |A_{n−1}|²)] + Γ₃ A*_{n+1} A*_{n−1},   (9.182)

where n = 1, 2, 3 (mod 3), * denotes the complex conjugate, and the Γ_n are to be determined.

9.9. Consider a scalar neural field model on the surface of a unit sphere

    ∂u(r, t)/∂t = −u(r, t) + ∫_Ω w(r, r′) f ∘ u(r′, t) dr′,   (9.183)

with r = r(θ, φ) ∈ Ω, t ∈ ℝ⁺, where Ω = S² is the surface of the unit sphere in ℝ³ and the corresponding metric is the great-circle distance. Here θ ∈ [0, π] is the polar angle and φ ∈ [0, 2π) is the azimuthal angle.
(i) Consider a homogeneous system where interactions are a function of distance along the surface. Show that w(r, r′) = w(r · r′) in this case.

(ii) Assume that w has an expansion in spherical harmonic functions Y_n^m(r) of the form [908]

    w(r, r′) = Σ_{n=0}^{∞} w_n Σ_{m=−n}^{n} Y̅_n^m(r) Y_n^m(r′),   (9.184)

where the overline denotes complex conjugation. Show that the coefficients w_n are given by

    w_n = 2π ∫_{−1}^{1} w(s) P_n(s) ds,   (9.185)

where P_n is the Legendre polynomial of degree n.
(iii) Consider the explicit choice of kernel

    w(s) = J₁ exp(−cos⁻¹(s)/σ₁) + J₂ exp(−cos⁻¹(s)/σ₂),   σ₁ > σ₂ > 0.   (9.186)

Show that for J₁ J₂ < 0 and J₁ + J₂ > 0, this describes a wizard hat shape and that the kernel is balanced when w₀ = 0.
(iv) For a balanced kernel, linearise the model around the steady state and show that there are solutions of the form e^{λ_n t} Y_n^m(r), m = −n, …, n, where

    λ_n = −1 + f′(0) w_n.   (9.187)

[Hint: Use the orthogonality property ∫_Ω Y̅_n^m(r) Y_{n′}^{m′}(r) dr = δ_{n,n′} δ_{m,m′}.]
(v) Show that a dynamic instability is not possible and that a static instability to a pattern with shape Y_n^m(r) occurs when max_n w_n = 1/f′(0).
(vi) For a Heaviside firing rate function f(u) = Θ(u − h), show that a time-independent spot solution with u(θ, φ) ≥ h for θ ≤ θ₀ and u(θ, φ) < h otherwise can be written

    q(θ) = ∫₀^{2π} dφ′ ∫₀^{θ₀} dθ′ sin θ′ w(sin θ sin θ′ cos φ′ + cos θ cos θ′),   (9.188)

and obtain a self-consistent solution for θ₀.

9.10. Consider a single population model in one spatial dimension given by u_t = −u + ψ, with

    ψ(x, t) = ∫_ℝ w(x − y) f ∘ u(y, t − |x − y|/v) dy,   (9.189)

where x ∈ ℝ, t ∈ ℝ⁺, and w is a translationally invariant kernel given by w(x) = e^{−|x|/σ}/(2σ). Let G(x, t) = δ(t − |x|/v) w(x).
(i) Show that the two-dimensional Fourier transform of G is given by

    Ĝ(k, ω) = (1/(2σ)) [ 1/(1/σ + iω/v + ik) + 1/(1/σ + iω/v − ik) ].   (9.190)

(ii) By performing an inverse Fourier transform, show that (9.189) can be rewritten as the exact brain wave equation u_t = −u + ψ, where

    [ (1/σ + (1/v) ∂_t)² − ∂²/∂x² ] ψ = (1/σ) [ 1/σ + (1/v) ∂_t ] f ∘ u.   (9.191)

9.11. Consider a single population neural field model in one spatial dimension with a Heaviside firing rate:

    u_t = −u + ∫_ℝ w(x − y) Θ(u(y, t) − h) dy.   (9.192)

A travelling front has one interface, x₀(t), given by the threshold condition u(x₀(t), t) = h.
(i) By differentiating the threshold condition with respect to t and using the properties of the Heaviside function, show that the speed c = ẋ₀ of a front with c > 0 is given implicitly by the equation

    h = ŵ(0) − ŵ(1/c),   (9.193)

where

    ŵ(λ) = ∫₀^∞ e^{−λs} w(s) ds.   (9.194)

(ii) For the case w(x) = exp(−|x|/σ)/(2σ), show that the front speed is given by

    c = σ (1 − 2h)/(2h),   0 < h ≤ 1/2.   (9.195)

(iii) Repeat the analysis above for c < 0 to show that in this case

    c = σ (1 − 2h)/(2(1 − h)),   1/2 ≤ h < 1.   (9.196)
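The implicit relation (9.193) is easy to check numerically for the exponential kernel of part (ii). The sketch below (a hypothetical illustration; the values σ = 1.3 and h = 0.2 are arbitrary choices, not from the text) solves h = ŵ(0) − ŵ(1/c) by bisection and compares the result with the closed form (9.195).

```python
import math

def w_hat(lam, sigma):
    # Laplace transform (9.194) of w(x) = exp(-|x|/sigma)/(2*sigma):
    # integral_0^infinity exp(-lam*s) w(s) ds = 1/(2*(sigma*lam + 1)).
    return 1.0 / (2.0 * (sigma * lam + 1.0))

def front_speed(h, sigma):
    """Solve h = w_hat(0) - w_hat(1/c) for c > 0 by bisection, cf. (9.193)."""
    f = lambda c: w_hat(0.0, sigma) - w_hat(1.0 / c, sigma) - h
    lo, hi = 1e-9, 1e9  # f(lo) = 1/2 - h > 0 and f(hi) ~ -h < 0 for 0 < h < 1/2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid  # the sign change lies in [lo, mid]
        else:
            lo = mid
    return 0.5 * (lo + hi)

sigma, h = 1.3, 0.2
c_numeric = front_speed(h, sigma)
c_closed = sigma * (1 - 2 * h) / (2 * h)  # the closed form (9.195)
```

For these values both routes give c = 1.95, confirming that the closed form follows from the implicit speed condition.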

(iv) Consider a small perturbation to the interface such that h = û(x̂₀(t), t), and introduce the small differences δu = u|_{x=ct} − û|_{x=x̂₀(t)} and δx₀(t) = x̂₀(t) − ct. Show that, to first order in δx₀(t),

    δu(t) = (1/c) ∫₀^∞ e^{−s/c} w(s) [δx₀(t) − δx₀(t − s/c)] ds.   (9.197)

(v) Use the above to construct the Evans function for the front as

    E(λ) = 1 − ŵ((1 + λ)/c)/ŵ(1/c).   (9.198)

(vi) For the w in part (ii), show that the travelling wave front is neutrally stable.


9.12. Consider a one-dimensional neural field with a Heaviside firing rate

    u_t = −u + ∫_{−∞}^{∞} w(|x − y|) Θ(u(y, t − |x − y|/v) − h) dy,   v > 0, x ∈ ℝ, t ∈ ℝ⁺.   (9.199)

(i) Show that a time-independent spatially localised bump solution with q(x) ≥ h for x ∈ [x₁, x₂] and q(x) < h for x ∉ [x₁, x₂] can be written

    q(x) = ∫_{x₁−x}^{x₂−x} w(y) dy.   (9.200)

(ii) Show that the width Δ = x₂ − x₁ satisfies

    h = ∫₀^Δ w(y) dy.   (9.201)

(iii) After linearising around q(x), show that the resulting equation admits solutions of the form u(x) e^{λt}, where E(λ) = 0 and

    E(λ) = [ 1 + λ − w(0)/|w(0) − w(Δ)| ]² − [ w(Δ) e^{−λΔ/v}/|w(0) − w(Δ)| ]².   (9.202)

(iv) For λ ∈ ℝ, show that E(0) = 0, lim_{λ→∞} E(λ) > 0, and E′(λ) > 0 if h′(Δ) < 0.
(v) For the choice w(x) = (1 − |x|) e^{−|x|}, use the results above to show that the width is given by

    Δ e^{−Δ} = h,   h ≤ 1/e,   (9.203)

and that the solution with Δ > 1 (Δ < 1) is stable (unstable).
(vi) For the choice w(x) = e^{−|x|}/2, use the results above to show that a stable bump does not exist.
(vii) For the wizard hat kernel in (v), and a smooth firing rate function Θ → f, show that a general time-independent solution q(x) satisfies the fourth-order ODE

    (1 − d_{xx})² q = −[f ∘ q]_{xx},   (9.204)

and that this is reversible with a first integral

    −(1/2)(q_{xx})² + q_x q_{xxx} − (q_x)² + ∫₀^q [s + g(s)] ds,   g(q) = [f ∘ q]_{xx}.   (9.205)
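The width condition in part (v) can be explored numerically. The sketch below (an illustrative aside, with h = 0.25 chosen arbitrarily below the maximum value 1/e) brackets the two roots of Δe^{−Δ} = h on either side of Δ = 1, corresponding to the unstable narrow and stable wide bumps.

```python
import math

def bump_widths(h):
    """For the wizard hat kernel w(x) = (1-|x|)exp(-|x|), equation (9.203)
    reads Delta*exp(-Delta) = h. For 0 < h < 1/e there are two roots:
    an unstable narrow bump (Delta < 1) and a stable wide bump (Delta > 1)."""
    f = lambda d: d * math.exp(-d) - h
    def bisect(lo, hi):
        # Assumes f changes sign on [lo, hi]; plain bisection.
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)
    narrow = bisect(0.0, 1.0)   # f(0) = -h < 0, f(1) = 1/e - h > 0
    wide = bisect(1.0, 50.0)    # f decreases back through zero beyond Delta = 1
    return narrow, wide

narrow, wide = bump_widths(h=0.25)
```

The two widths coalesce as h approaches 1/e, where the stable and unstable branches annihilate in a saddle-node bifurcation.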


9.13. Consider a one-dimensional neural field in the integral form

    u(x, t) = ∫₀^∞ ∫_{−∞}^{∞} w(|y|) η(s) Θ(u(x − y, t − s) − h) dy ds
              − g ∫₀^∞ ∫₀^∞ η(s′) η_a(s − s′) u(x, t − s) ds′ ds,   x ∈ ℝ, t ∈ ℝ⁺,   (9.206)

where η(t) = α e^{−αt} Θ(t), η_a(t) = e^{−t} Θ(t), and w(x) = e^{−|x|}/2.
(i) Show that the model supports a travelling front with speed

    c = −(1/2)(1 + α − α/(2h)) ± (1/2) √[ (1 + α − α/(2h))² − 4α(1 + g − 1/(2h)) ],   c > 0,   (9.207)

    c = (1/2)(1 + α − α/(2h*)) ± (1/2) √[ (1 + α − α/(2h*))² − 4α(1 + g − 1/(2h*)) ],   c < 0,   (9.208)

where h* = 1/(1 + g) − h.
(ii) Show that if g = g_c = 1/(2h) − 1, then a front exists for all α with wavespeed c = 0, i.e., the front is stationary.
(iii) Show that at a critical value of α, this stationary front undergoes a pitchfork bifurcation leading to a pair of fronts travelling in opposite directions. Further, show that if this critical condition is not met, then the pitchfork bifurcation is broken.

9.14. Consider a neural field model in one spatial dimension with Heaviside firing rate and linear adaptation:

    (1/α) ∂u(x, t)/∂t = −u(x, t) + ∫_{−∞}^{∞} w(x − y) Θ(u(y, t) − h) dy − g a(x, t),
    ∂a(x, t)/∂t = u(x, t),   x ∈ ℝ, t ∈ ℝ⁺,   (9.209)

where w(x) = e^{−|x|}/2.
(i) Show that the speed c of a travelling pulse of width Δ (defined by the localised region of the wave above the threshold h) is given by the simultaneous solution of

    h = α c (1 − e^{−Δ}) / (2(c² + αc + αg)),   (9.210)

    h = (α/(k₊ − k₋)) { e^{−k₋Δ/c} + (k₋/2) [ (e^{−Δ} − e^{−k₋Δ/c})/(k₋ − c) + (1 − e^{−k₋Δ/c})/(k₋ + c) ]
        − e^{−k₊Δ/c} − (k₊/2) [ (e^{−Δ} − e^{−k₊Δ/c})/(k₊ − c) + (1 − e^{−k₊Δ/c})/(k₊ + c) ] },   (9.211)

where

    k± = (α ± √(α² − 4αg))/2.   (9.212)

(ii) Use an Evans function approach to show that, of the two solution branches, the one describing a fast, wide pulse is stable and that the slower, narrower pulse is unstable [197].

9.15. Consider a neural field model in two spatial dimensions with Heaviside firing rate and adaptation given by

    τ ∂u(r, t)/∂t = −u(r, t) + ∫_{ℝ²} w(r − r′) Θ(u(r′, t) − h) dr′ − g a(r, t),
    ∂a(r, t)/∂t = u(r, t) − a(r, t),   r ∈ ℝ², t ∈ ℝ⁺,   (9.213)

where w(r) = w(r) with r = |r|.
(i) Show that the equation of a radially symmetric spot with q(r) ≥ h for r ≤ a and q(r) < h for r > a is given by

    q(r) = (1/(1 + g)) ∫₀^{2π} dθ ∫₀^{a} r′ dr′ w(√(r² + r′² − 2rr′ cos θ)).   (9.214)

(ii) Show that the kernel w may be written in terms of its two-dimensional Fourier transform ŵ as

    w(r) = (1/(2π)²) ∫_{ℝ²} e^{ik·r} ŵ(k) dk = (1/(2π)) ∫₀^∞ ŵ(k) J₀(rk) k dk,   (9.215)

where J_ν is the Bessel function of the first kind of order ν.
(iii) Use the above to write q(r) in the form

    q(r) = (a/(1 + g)) ∫₀^∞ ŵ(k) J₀(rk) J₁(ak) dk.   (9.216)

[Hint: make use of Graf's addition formula

    J₀(√(r² + r′² − 2rr′ cos θ)) = Σ_{m=0}^{∞} ε_m J_m(r) J_m(r′) cos(mθ),   (9.217)

where ε₀ = 1 and ε_m = 2 for m ≥ 1, and the result that ∫₀^z r J₀(r) dr = z J₁(z).]
(iv) Consider the kernel representation

    w(r) = Σ_{i=1}^{N} A_i K₀(μ_i r),   (9.218)

where K_ν is the modified Bessel function of the second kind of order ν. Show that q(r) can now be written

    q(r) = (2πa/(1 + g)) Σ_{i=1}^{N} A_i ∫₀^∞ J₀(rk) J₁(ak)/(μ_i² + k²) dk
         = (2πa/(1 + g)) Σ_{i=1}^{N} (A_i/μ_i) × { I₁(μ_i a) K₀(μ_i r),  r ≥ a;  1/(aμ_i) − I₀(μ_i r) K₁(μ_i a),  r < a },   (9.219)

where I_ν is the modified Bessel function of the first kind of order ν. [Hint: make use of the result that the two-dimensional Fourier transform of K₀(zr) is 2π/(z² + k²).]
(v) Use an interface dynamics approach to show that a necessary condition for a spot to destabilise to a breather is g > τ [200].

Appendix A

Stochastic calculus

A.1 Modelling noise

Biological processes, such as the opening and closing of ion channels discussed in Sec. 2.3.2, may be modelled as Markov chains, which describe the probability of moving between different states via a state transition matrix [331]. The effect of these transitions on the neuron's membrane potential can then be derived by analysing the dynamics associated with the transition matrix. One common approach for achieving this is to first formulate the master equation, which describes the rate of change of transition probabilities as the state of the system changes. From here, sample paths of the system can be computed directly, for example via the Gillespie algorithm [339] (and see Box 2.8 on page 49), by sampling the probability distribution of the system using Markov chain Monte Carlo methods [120]. In certain cases, stochastic shielding methods may additionally be used to decrease the overall computational cost [783]. Considering the size and complexity of neuron models, Markov chain descriptions tend to be too costly to solve in practice, even when taking advantage of methods to speed up the Gillespie algorithm, such as τ-leaping [739].

An alternative to using full Markov chain models is to assume that, rather than simply being averaged out completely as in the deterministic ordinary differential equation (ODE) models studied in Chap. 2, the effects of stochasticity give rise to additional terms in the model equations. These terms describe the combined overall effect of the individual stochastic processes. This gives rise to a stochastic differential equation (SDE), which is computationally cheaper to solve than the original Markov chain. For an example of how to do this for the Hodgkin–Huxley model, see [313, 314]. The following sections review the key aspects associated with these processes in a general setting.
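As a concrete illustration of the Gillespie algorithm mentioned above, the following sketch simulates a population of independent two-state channels (closed ⇌ open). It is a minimal illustration, not the implementation referenced in Box 2.8, and the rate values are arbitrary.

```python
import math
import random

def gillespie_two_state(alpha, beta, n_channels, t_end, seed=0):
    """Sample a path of the number of open channels in a population of
    independent two-state channels: closed -[alpha]-> open, open -[beta]-> closed."""
    rng = random.Random(seed)
    t, n_open = 0.0, 0
    times, opens = [t], [n_open]
    while t < t_end:
        # Propensities: each closed channel may open, each open one may close.
        a_open = alpha * (n_channels - n_open)
        a_close = beta * n_open
        a_total = a_open + a_close
        if a_total == 0.0:
            break
        # The waiting time to the next event is exponentially distributed.
        t += -math.log(rng.random()) / a_total
        # Choose which reaction fires, with probability proportional to its propensity.
        if rng.random() * a_total < a_open:
            n_open += 1
        else:
            n_open -= 1
        times.append(t)
        opens.append(n_open)
    return times, opens

times, opens = gillespie_two_state(alpha=1.0, beta=0.5, n_channels=100, t_end=50.0)
```

After a brief transient, the path fluctuates around the deterministic steady state, a fraction α/(α + β) of channels open.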

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. Coombes and K. C. A. Wedgwood, Neurodynamics, Texts in Applied Mathematics 75, https://doi.org/10.1007/978-3-031-21916-0


A.2 Random processes and sample paths

In many neural models, random fluctuations are assumed to primarily affect the membrane potential, so it is natural to treat this potential as a random variable, that is, one whose value is subject to chance. Repeated sampling of a random variable X over time gives rise to the notion of a random process X(t) = {X_t}_{t∈ℝ}, where the subscript denotes the time dependence of the sampling. As an example of a continuous-time random process, consider the movement of ionic species in a well-mixed environment. The ions move around by diffusive processes but, since the environment is well mixed, there is no directed movement down a concentration gradient. Instead, each ion follows what manifests as an unbiased random walk. This unbiased process is known as Brownian motion. Each realisation of a random process produces an outcome at each time step that differs from other realisations at the same time step, so no single observed trajectory characterises the whole process. This observation can be accounted for by introducing the notion of a sample path ω ∈ Ω, where Ω is the sample space of all possible variable states across all times. The state of the system at a given time can then be defined in terms of a given sample path via the mapping (t, ω) ↦ X_t(ω).

A.3 The Wiener process

One of the key random components in the construction of SDEs is the Wiener process. This might be more commonly thought of, outside the mathematical and physical sciences, as describing Brownian motion. The standard one-dimensional Wiener process, W(t) for t ≥ 0, is a real-valued process obeying the following properties:

P1. W(0) = 0,
P2. the function t ↦ W(t) is almost surely continuous,
P3. the process W(t) has stationary, independent increments, and
P4. the increment W(t + s) − W(s) has the distribution N(0, t),

where N(0, t) is a normal distribution with zero mean and variance t. A standard d-dimensional Wiener process can be obtained as a vector of one-dimensional Wiener processes

    W(t) = (W₁(t), W₂(t), …, W_d(t))ᵀ.   (A.1)

The Wiener process has zero mean and a variance that scales linearly with time. This variance diverges as t → ∞, leading to high irregularity of different sample paths from the same Wiener process. This can be seen in Fig. A.1, which shows ten sample paths of the standard Wiener process in one dimension. These paths possess a zero ensemble mean, but an ensemble variance that increases linearly with t.


Fig. A.1 Ten sample paths of the Wiener process. In each case, a different trajectory for W(t) is observed. The trajectories do not converge to one another, and the variance of the ensemble of paths increases with t, whilst its mean stays close to 0.
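Properties P1–P4 translate directly into a simulation recipe: a path is a cumulative sum of independent N(0, dt) increments. A minimal sketch reproducing the behaviour shown in Fig. A.1 (the path count and step size are arbitrary choices):

```python
import random

def wiener_paths(n_paths, n_steps, dt, seed=0):
    """Generate sample paths of the standard Wiener process by summing
    independent N(0, dt) increments (property P4)."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        w, path = 0.0, [0.0]  # P1: W(0) = 0
        for _ in range(n_steps):
            w += rng.gauss(0.0, dt ** 0.5)
            path.append(w)
        paths.append(path)
    return paths

paths = wiener_paths(n_paths=2000, n_steps=100, dt=0.01)

# Ensemble statistics at the final time t = 1: mean near 0, variance near t.
final = [p[-1] for p in paths]
mean = sum(final) / len(final)
var = sum((x - mean) ** 2 for x in final) / len(final)
```

With 2000 paths, the ensemble mean at t = 1 is close to 0 and the ensemble variance close to 1, in line with the discussion above.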

A.4 Langevin equations

A deterministic system of the form dX = f(X) dt, X ∈ ℝ^m, can be appended with a random component to capture the effects of stochasticity, giving

    dX = f(X) dt + Σ_{j=1}^{d} g_j(X) dW_j(t).   (A.2)

In the above, f (X ) represents the underlying deterministic system. This is referred to as the drift term. The remaining terms describe the role of the noise in the system. These terms are referred to as the diffusion terms. The dW j terms represent increments of a d-dimensional Wiener process, as described in Sec. A.3, whilst g ∈ Rm×d captures how the multidimensional noise is processed by the system. In a neural context, background noise is often considered to be independent of the gating variables and the membrane potential, in which case g would be constant. Other modelling approaches consider channel noise, which affect and are affected by the proportion of open channels. In this case, the g j depend explicitly on the gating variables. The first of these cases corresponds to additive noise, whereby the noise is simply added to the deterministic term, whilst the second is referred to as multiplicative noise, since the noise effects are multiplied by some function of the state variables.


Equations of the type (A.2) are known as Langevin equations. Langevin equations can be formed from the master equation associated with a stochastic system, and have the advantage of being easier to work with, since the effect of the stochastic processes appears as a product of increments of Wiener processes and some function of the state variables. In rare cases, an explicit solution to an SDE can be found, and here the advantage over the master equation is evident. In most cases, numerical techniques must be used to integrate the equations. Even when having to resort to numerical approaches, the computational cost of integrating SDEs is still expected to be smaller than that required to simulate the corresponding master equation. It is common to see (A.2) written in the form

    dX/dt = f(X) + Σ_{j=1}^{d} g_j(X) ξ_j(t),   (A.3)

where the ξ_j are random processes such that ⟨ξ_j⟩ = 0 and ⟨ξ(t) ξ(s)⟩ = δ(t − s), where δ(t) is the Dirac delta function, ⟨·⟩ denotes taking the ensemble average, and ⟨xy⟩ is the correlation between x and y. In (A.3), ξ_j is a white noise process. To get from (A.2) to (A.3), one simply has to divide through by dt, so that the two are formally equivalent. However, dividing dW_j(t) by dt involves forming the derivative of the Wiener process. Mathematicians tend to avoid this form of the equation, since the Wiener process is almost surely nowhere differentiable (as can be anticipated by noting that increments of the Wiener process are independent of one another), and so this notation for the Langevin equation is more commonly used by physicists. For a one-dimensional Wiener process, (A.2) can be integrated with respect to t to give

    X(t) − X(t₀) = ∫_{t₀}^{t} f(X(s)) ds + ∫_{t₀}^{t} g(X(s)) dW(s).   (A.4)

The first term on the right-hand side is easily recognised as the solution to the deterministic part of the system, whilst the second term is a stochastic integral with respect to a sample function W(t).

A.5 Stochastic integrals

In order to complete the details of the solution (A.4), the stochastic integral must be evaluated. Consider the integral ∫_{t₀}^{t} a(X, s) dW(s), where a(X, t) is an arbitrary function of the random variable X and of time, and W(t) is the one-dimensional Wiener process. In a similar fashion to numerical integration of ODEs, the interval [t₀, t] can be divided into n subintervals via a partitioning t₀ < t₁ < t₂ < ⋯ < t_{n−1} < t_n = t, with intermediate points defined such that t_{i−1} ≤ τ_i ≤ t_i. The stochastic integral is then defined as the limit as n → ∞ of the partial sums

    S_n = Σ_{i=1}^{n} a(X(τ_i), τ_i) [W(t_i) − W(t_{i−1})].   (A.5)


In general, the integral written in this way depends on the particular choice of the intermediate points τi = α ti + (1 − α )ti−1 , where α ∈ [0, 1], and thus, there are an infinite number of possible interpretations of the stochastic integral, of which two have become the most popular.

Itô integral

The first interpretation sets α = 0, so that τ_i = t_{i−1}. This defines the Itô stochastic integral of a(X, t) as

    ∫_{t₀}^{t} a(X(s), s) dW(s) = ms-lim_{n→∞} Σ_{i=1}^{n} a(X(t_{i−1}), t_{i−1}) [W(t_i) − W(t_{i−1})],   (A.6)

where ms-lim refers to convergence with respect to the mean square limit. Two important consequences of the Itô definition of a stochastic integral are that dW(t)² = dt and dW(t)^{2+n} = 0 for n ∈ ℕ.

Stratonovich integral

A different interpretation sets α = 1/2, so that τ_i = (t_{i−1} + t_i)/2. In this case, the Stratonovich stochastic integral is defined to be

    ∫_{t₀}^{t} a(X(s), s) ∘ dW(s) = ms-lim_{n→∞} Σ_{i=1}^{n} a( (X(t_i) + X(t_{i−1}))/2, τ_i ) [W(t_i) − W(t_{i−1})].   (A.7)

It is worth noting that, although the choice of τ_i indicated above leads to (A.7), the same definition can also be reached by averaging X across the time points t_i and t_{i−1}. In fact, it is only the dependence of X on t that is averaged in this way, rather than the explicit dependence of a on t. Furthermore, if a(X, t) is differentiable in t, then the integral may be shown to be independent of the particular choice of τ_i ∈ [t_{i−1}, t_i].

A.6 Comparison of the Itô and Stratonovich integrals

To examine the effect of the interpretation of the stochastic integral, consider the case where a(X, t) = W(t). In the Itô interpretation,

    ∫_{t₀}^{t} W(s) dW(s) = (1/2) [W(t)² − W(t₀)² − (t − t₀)],   (A.8)

whereas, in the Stratonovich interpretation,

    ∫_{t₀}^{t} W(s) ∘ dW(s) = (1/2) [W(t)² − W(t₀)²].   (A.9)

The difference between the two can be accounted for by the fact that, in the Itô sense, dW (t)2 = dt and so terms of second order in dW (t) do not vanish on taking the limit. Note that the Stratonovich integral is precisely that expected under the normal rules of calculus, ignoring the stochastic nature of the integral, and that the extra term (t − t0 ) appears only in the Itô sense. In fact, using the Itô interpretation of integrals requires using a different kind of calculus, involving Itô’s formula, which will be presented shortly in Sec. A.7. In the context of mathematical biology, the Itô interpretation may seem the most logical choice, since biological processes are non-anticipatory, that is, they depend only on values in the past or present, and not in the future. However, the Stratonovich interpretation may be more suited if the assumption is that the noise is actually part of some other, non-observed biologically relevant process [603]. In practice, both interpretations are used, and there exist methods of moving between the two interpretations freely, which will be discussed in Sec. A.7.
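The difference between (A.8) and (A.9) can be seen directly by forming the two partial sums (A.6) and (A.7) over the same Wiener increments. In the sketch below (step size and horizon are arbitrary choices), the left-point sum recovers (W(T)² − T)/2, while the midpoint-averaged sum telescopes exactly to W(T)²/2.

```python
import random

def w_dw_both(n_steps, dt, seed=1):
    """Approximate the integral of W dW over [0, T] with the left-point rule
    (Ito, cf. (A.6)) and the midpoint-average rule (Stratonovich, cf. (A.7)),
    using the same Wiener increments for both."""
    rng = random.Random(seed)
    w = ito = strat = 0.0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)
        ito += w * dw                  # integrand at the left endpoint W(t_{i-1})
        strat += (w + 0.5 * dw) * dw   # integrand at (W(t_{i-1}) + W(t_i))/2
        w += dw
    return ito, strat, w

n_steps, dt = 20000, 1e-4
T = n_steps * dt
ito, strat, w_T = w_dw_both(n_steps, dt)
# (A.9): the Stratonovich sum telescopes to W(T)^2/2 exactly;
# (A.8): the Ito sum falls short of it by approximately T/2.
```

The gap between the two sums is the accumulated dW² ≈ dt contribution, which is precisely the extra −(t − t₀)/2 term in (A.8).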

A.7 Itô's formula

Itô’s formula is essentially the chain rule for Itô calculus. This formula defines the stochastic differential equation obeyed by a function function h(x(t)), where x ∈ R, and x varies in time. If dx(t) = f (x(t))dt + g(x(t))dW (t),

(A.10)

then expanding h(x(t)) to second order in dW (t) yields dh(x(t)) = h(x(t) + dx(t)) − h(x(t)), 1 = h (x(t))dx(t) + h (x(t))dx(t)2 + . . . , 2 1 = h (x(t)){ f (x(t))dt + g(x(t))dW (t)} + h (x(t))g(x(t))2 dW (t)2 , 2

1 2 = f (x(t))h (x(t)) + g(x(t)) h (x(t)) dt + g(x(t))h (x(t))dW (t), 2 (A.11) where terms of order higher than dW (t)2 are discarded in the penultimate step and the last step uses the fact that dW (t)2 = dt. In the Stratonovich case, the respective chain rule is the same as would be expected from ordinary calculus dh(x(t)) = h (x(t)){ f (x(t))dt + g(x(t))dW (t)}.

(A.12)


The following multivariable version of Itô's formula may be used for multidimensional systems in which X ∈ ℝ^m:

    dh(X(t)) = [ Σ_{i=1}^{m} f_i(X(t)) ∂_i h(X(t)) + (1/2) Σ_{i,j=1}^{m} (g(X(t)) gᵀ(X(t)))_{ij} ∂_i ∂_j h(X(t)) ] dt
               + Σ_{i=1}^{m} Σ_{j=1}^{d} g_{ij}(X(t)) ∂_i h(X(t)) dW_j(t).   (A.13)

For each of the respective representations of a given stochastic integral, an equivalent SDE may be constructed in the other representation, taking advantage of Itô's formula. Upon writing the Itô SDE as

    dX = f^I(X(t)) dt + g^I(X(t)) dW(t),   (A.14)

and the Stratonovich SDE as

    dX = f^S(X(t)) dt + g^S(X(t)) ∘ dW(t),   (A.15)

the connection between the solutions to the Itô and Stratonovich SDEs is given by

    ∫_{t₀}^{t} g^S(X(s)) ∘ dW(s) = ∫_{t₀}^{t} g^I(X(s)) dW(s) + (1/2) ∫_{t₀}^{t} g^I(X(s)) ∂_X g^I(X(s)) ds.   (A.16)

Using this integral, the equivalent Stratonovich SDE corresponding to (A.14) is obtained via the replacement

    f_i^I → f_i^S = f_i^I − (1/2) Σ_{j,k=1}^{n} g_{jk}^S ∂_k g_{ij}^S.   (A.17)

Similarly, the Itô SDE corresponding to (A.15) is obtained through the replacement

    f_i^S → f_i^I = f_i^S + (1/2) Σ_{j,k=1}^{n} g_{jk}^I ∂_k g_{ij}^I.   (A.18)

These results provide a convenient way of converting between equivalent representations of the same SDE. This can prove useful when analysing the probability distributions of the solution to SDEs or when simulating SDEs numerically.
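For a scalar SDE, the replacement (A.17) reduces to subtracting (1/2) g g′ from the drift. A small sketch (the drift and diffusion choices below are illustrative, not from the text):

```python
def stratonovich_drift(f_ito, g, dg, x):
    """Scalar form of (A.17): the Stratonovich-equivalent drift of the Ito SDE
    dX = f_ito(X) dt + g(X) dW is f_S(x) = f_ito(x) - 0.5*g(x)*g'(x);
    dg is the derivative of g."""
    return f_ito(x) - 0.5 * g(x) * dg(x)

# Multiplicative-noise example: dX = mu*X dt + sigma*X dW (Ito sense),
# for which the Stratonovich drift should be (mu - sigma**2/2)*x.
mu, sigma = 0.1, 0.4
f_S = stratonovich_drift(lambda x: mu * x, lambda x: sigma * x,
                         lambda x: sigma, 2.0)
# At x = 2.0: (0.1 - 0.4**2/2) * 2.0 = 0.04
```

The correction vanishes when g is constant, recovering the statement that additive noise is interpretation-independent.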

A.8 Coloured noise

The appendix has thus far only considered the case in which the noise sources, x(t), are white. A white noise process is one that has no temporal correlations (other than at zero lag), so that its autocorrelation R_x(τ) = ⟨x(t) x(t + τ)⟩ = δ(τ). White noise represents all temporal frequencies equally. This means that white noise processes have a flat power spectral density (PSD), where the PSD describes how the power of a signal is distributed across different frequencies and is given by the Fourier transform of R_x(τ). Noise processes in the real world are likely to have finite temporal correlations, and so a white noise description may not be the most appropriate for modelling biological processes. In a neural context, it has been shown that correlations between noisy inputs may impact interspike intervals [575]. A noise process that is not white is referred to as a coloured noise process. A prominent example of a simple, mean-reverting noise process that has finite temporal correlations at non-zero lag is the Ornstein–Uhlenbeck process [889]. The one-dimensional Ornstein–Uhlenbeck process is given by

    dη = −γ η dt + σ dW(t),   (A.19)

where W(t) is the Wiener process as defined in Sec. A.3. The Ornstein–Uhlenbeck process is mean-reverting, since sample paths will always tend to the mean ⟨η⟩ = 0. Equation (A.19) can be solved analytically as

    η(t) = η(t₀) e^{−γ(t−t₀)} + σ ∫_{t₀}^{t} e^{−γ(t−s)} dW(s).   (A.20)

If the initial condition is deterministic, or comes from a Gaussian distribution, then η(t) is Gaussian. The temporal correlation function for η may then be calculated as

    ⟨η(t) η(s)⟩ = [var(η(t₀)) − σ²/(2γ)] e^{−γ(t+s)} + (σ²/(2γ)) e^{−γ|t−s|}.   (A.21)

This expression shows that the temporal correlation of the Ornstein–Uhlenbeck process decays exponentially as the lag increases and hence is non-zero at non-zero lag. If γ > 0, then as t, s → ∞ with finite |t − s|, the correlation function becomes stationary. In fact, if the initial time is specified at t₀ → −∞, rather than at finite t₀, the solution (A.20) becomes

    η(t) = σ ∫_{−∞}^{t} e^{−γ(t−s)} dW(s),   (A.22)

in which the correlation function and mean assume their stationary values. At the expense of increasing the size of a system, Ornstein–Uhlenbeck processes may be used as additional input variables to incorporate noise terms with temporal structure. For example, (A.2) can be amended to include an Ornstein–Uhlenbeck process as

    dX = [f(X(t)) + g(X(t)) η] dt,
    dη = −γ η dt + σ dW(t),   (A.23)

where the parameters γ, σ > 0 are tuned to reflect the appropriate properties of the modelled noise source.

The time evolution of histograms of 10,000 sample paths of an Ornstein–Uhlenbeck process with γ = σ = 0.1 is shown in Fig. A.2. The initial condition at t = 0 is chosen to be the standard normal distribution N(0, 1), and so the resulting distribution of η is Gaussian for all t. The transparent surface shown on the right is the solution specified by (A.20) and is well matched to the numerically simulated solution shown on the left.

Fig. A.2 Approximate and exact solutions for the probability density function of (A.19) with γ = σ = 0.1. The left panel shows the approximate density found by discretising time into N = 1000 bins and plotting the histogram of η_n for each time bin, n = 1, …, N. The right panel shows the solution to (A.20). The approximate density from the histograms matches the analytical solution well.
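Because (A.20) is linear in the noise, the OU process can be simulated without discretisation error by drawing each step from the exact Gaussian transition density. The sketch below (time step, horizon, and ensemble size are arbitrary choices) checks that the ensemble settles to the stationary variance σ²/(2γ) implied by (A.21).

```python
import math
import random

def ou_exact_paths(gamma, sigma, dt, n_steps, n_paths, seed=0):
    """Simulate the OU process (A.19) with the exact one-step update implied
    by (A.20): eta -> eta*exp(-gamma*dt) plus a Gaussian kick whose variance
    is (sigma**2/(2*gamma)) * (1 - exp(-2*gamma*dt))."""
    rng = random.Random(seed)
    decay = math.exp(-gamma * dt)
    kick_sd = sigma * math.sqrt((1.0 - decay ** 2) / (2.0 * gamma))
    finals = []
    for _ in range(n_paths):
        eta = 0.0
        for _ in range(n_steps):
            eta = eta * decay + kick_sd * rng.gauss(0.0, 1.0)
        finals.append(eta)
    return finals

# gamma = sigma = 0.1 as in Fig. A.2; run long enough to reach stationarity.
finals = ou_exact_paths(gamma=0.1, sigma=0.1, dt=0.5, n_steps=200, n_paths=4000)
mean = sum(finals) / len(finals)
var = sum((x - mean) ** 2 for x in finals) / len(finals)
# Stationary variance from (A.21): sigma**2/(2*gamma) = 0.05.
```

The same exact-update idea underlies efficient simulation of the coupled system (A.23), since the η equation can be advanced independently of X.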

A.9 Simulating stochastic processes

In most cases, analytical solutions to SDEs are unavailable, and developing numerical schemes to efficiently and accurately simulate sample paths of SDEs is an active area of research. In all numerical schemes, sample paths are generated by representing Wiener increments as samples from normal distributions. Clearly, each run of a numerical scheme produces only one sample path, and this may deviate from the 'true' solution. To circumvent this problem, many runs of the same SDE using different samples for the Wiener increments are typically performed to produce different sample paths. Provided that the numerical scheme converges, the ensemble average of many sample paths should result in a solution representing the mean path of the system. Monte Carlo methods may be used to approximate the probability distribution of the state variables by generating histograms of many sample paths.

As for numerical algorithms for ODEs, different SDE integrators have different rates of convergence to the exact solution, and different regions of stability, that is, regions in which errors in the solution are attenuated as the numerical routine evolves. Integration schemes can be either explicit or implicit in nature. Explicit schemes are generally faster, since implicit schemes involve root finding at each timestep, but implicit schemes tend to have larger regions of stability. Moreover, the numerical routines may differ depending on whether the SDE to be simulated is of Itô or Stratonovich type. For additive noise there will be no difference between the two, but for multiplicative noise they will differ due to the handling of terms of order dW(t)². Note, however, that an Itô equation may be written in its equivalent Stratonovich form, as discussed in Sec. A.7, and integrated using a Stratonovich algorithm, or vice versa. In certain cases, such as when studying averages over ensembles of trajectories suffices (so that only convergence in distribution, rather than mean-square convergence, is required), or when the noise strength is weak compared to the deterministic dynamics, computationally cheap algorithms are available. Details of these and many other numerical integration routines for SDEs can be found in the book [633].

Convergence of numerical schemes

Numerical routines have different rates of convergence, corresponding to which notion of convergence is considered (see [331] and [633] for a review of the different types of convergence). Intuitively, strong convergence (mean square convergence) measures the deviation of numerically computed sample paths from the exact solution, whilst weak convergence (convergence in distribution) measures how closely the average of a smooth function of the variable approaches its exact value. Weak convergence is typically more rapid than strong convergence and, in general, the weak and strong orders of convergence for a given scheme are not the same. Moreover, some schemes will give a better estimate of actual paths of an SDE, but may not necessarily improve estimates of averages. If a particular problem requires analysis only of averages rather than sample paths, simulation of the SDE may be made faster by approximating the Wiener increments with simpler random variables whose lowest moments match those of a normal distribution. Thus, the choice of algorithm depends not only on the SDE in question, but also on what quantities are to be analysed.

Multiple integrals

When considering the case in which a model contains multiple noise sources, care must be taken to represent the combined effect of the noise terms correctly. Generally speaking, for an SDE written in integral form as

    X(t) = X(t₀) + ∫_{t₀}^{t} f(X(s)) ds + ∫_{t₀}^{t} g(X(s)) dW(s),   (A.24)

where X ∈ ℝ^m and W(t) ∈ ℝ^d, higher-order expansions of the solution contain terms involving the general Itô stochastic multiple integrals

    I_{i₁,i₂,…,i_n}(t, t₀) = ∫_{t₀}^{t} dW_{i₁}(s₁) ∫_{t₀}^{s₁} dW_{i₂}(s₂) ⋯ ∫_{t₀}^{s_{n−1}} dW_{i_n}(s_n).   (A.25)


There also exist similarly defined Stratonovich multiple integrals which are more convenient for the development of higher-order algorithms. In general, these integrals cannot be expressed in terms of Wiener increments.

Euler–Maruyama scheme

Suppose that (A.2) is to be integrated over the interval [0, T]. This interval may be divided into N subintervals of size h = T/N at points τ_n = nh, so that the function X(t) is to be evaluated at the points τ₀, τ₁, …, τ_{N−1}, τ_N. The corresponding Wiener increments are

    ΔW_{j,n} = W_j(τ_{n+1}) − W_j(τ_n) ∼ √h N(0, 1),   (A.26)

where N(0, 1) is the standard one-dimensional normal distribution. Perhaps the simplest scheme to simulate SDEs is the Euler–Maruyama scheme, which extends the forward Euler scheme for integrating ODEs and gives the approximation y_n to X_n as

    y_{n+1} = y_n + f(y_n) h + Σ_{j=1}^{d} g_j(y_n) ΔW_{j,n},   (A.27)

where the ΔW_{j,n} are as defined in (A.26). It may be shown that this scheme has a weak order of convergence of h, but a strong order of convergence of only h^{1/2}, due to the variance of the Wiener increments [844]. A nice implementation of (A.27) is presented in [418]. Although the scheme is efficient, the low order of convergence means that a small h must be used to obtain accurate approximate solutions [516].
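A minimal scalar implementation of (A.27) is sketched below (this is an illustrative sketch, not the implementation of [418]; the parameter choices are arbitrary):

```python
import math
import random

def euler_maruyama(f, g, x0, T, n_steps, seed=0):
    """Integrate the scalar SDE dX = f(X) dt + g(X) dW over [0, T]
    with the Euler-Maruyama scheme (A.27)."""
    rng = random.Random(seed)
    h = T / n_steps
    x = x0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(h))  # increment ~ sqrt(h) N(0,1), cf. (A.26)
        x = x + f(x) * h + g(x) * dW
    return x

# Sanity check: with g = 0 the scheme reduces to forward Euler, so
# dX = -X dt from x0 = 1 should land near exp(-1) at T = 1.
x_det = euler_maruyama(lambda x: -x, lambda x: 0.0, 1.0, 1.0, 100000)

# A run with additive noise of strength 0.3 (an OU-type equation).
x_noisy = euler_maruyama(lambda x: -x, lambda x: 0.3, 1.0, 1.0, 1000)
```

Since the noise here is additive, the Itô and Stratonovich interpretations coincide and no drift correction is needed.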

Milstein scheme

The Milstein algorithm extends the Euler–Maruyama scheme by using one more term in the expansion in $h$ of $X(t)$, and has both a weak and a strong order of convergence of $h$ [631]. For a one-dimensional Wiener process, the Milstein scheme approximates $X_n$ via $y_n$ as

$$
y_{n+1} = y_n + f(y_n) h + g(y_n) \Delta W_n + \frac{1}{2} g(y_n) g'(y_n) \left( (\Delta W_n)^2 - h \right). \qquad (A.28)
$$

The approximation of $X_n$ for systems involving multidimensional noise processes is complicated by the need, except in special cases, to evaluate stochastic double integrals of the form presented earlier. For details on how to form approximations in these cases, see [631, 633].
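A one-dimensional sketch of (A.28), with the derivative $g'$ supplied analytically, is below; the geometric Brownian motion test problem and all names are our own, not from the text.

```python
import numpy as np

def milstein(f, g, dg, y0, T, N, rng=None):
    """One-dimensional Milstein scheme (A.28); dg is the derivative g'(y)."""
    rng = np.random.default_rng(rng)
    h = T / N
    y = np.empty(N + 1)
    y[0] = y0
    for n in range(N):
        dW = np.sqrt(h) * rng.standard_normal()
        y[n + 1] = (y[n] + f(y[n]) * h + g(y[n]) * dW
                    + 0.5 * g(y[n]) * dg(y[n]) * (dW**2 - h))
    return y

# Geometric Brownian motion dX = mu X dt + sigma X dW, so g'(y) = sigma
mu, sigma = 0.1, 0.2
y = milstein(lambda x: mu * x, lambda x: sigma * x, lambda x: sigma,
             y0=1.0, T=1.0, N=1000, rng=1)
```

Note that the correction term vanishes in expectation, since $\mathbb{E}[(\Delta W_n)^2] = h$, so the scheme improves the strong (pathwise) order without biasing the drift.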


Other schemes

The implementation of numerical routines for approximating solutions to SDEs is an active area of research, and as such, there now exists a suite of algorithms for this purpose, including higher-order stochastic Runge–Kutta methods. The general form of an $s$-stage stochastic Runge–Kutta method with $d = 1$ gives $y_n$ as

$$
y_{n+1} = y_n + \sum_{i=1}^{s} a_i f(Y_i) h + \sum_{i=1}^{s} b_i g(Y_i)\, \Delta W_n, \qquad (A.29)
$$

$$
Y_i = y_n + \sum_{j=1}^{i-1} \alpha_{ij} f(Y_j) h + \sum_{j=1}^{i-1} \beta_{ij} g(Y_j)\, \Delta W_n, \qquad i = 1, \ldots, s, \qquad (A.30)
$$

where $\alpha$ and $\beta$ are $s \times s$ constant matrices of coefficients, and $a$ and $b$ are constant row vectors of coefficients in $\mathbb{R}^s$. For further details on stochastic Runge–Kutta schemes for multidimensional noise processes, see [131]. Other routines extend linear multistep methods, such as the Adams–Bashforth or Adams–Moulton methods, as discussed in [125, 126]. Two-step schemes following this philosophy give $y_n$ as

$$
\sum_{r=0}^{2} \alpha_{2-r}\, y_{n-r} = h \sum_{r=0}^{2} \beta_{2-r}\, f(y_{n-r}) + \sum_{r=1}^{2} \sum_{j=1}^{d} \gamma_{2-r}\, g_j(y_{n-r})\, \Delta W_{j,n-r}, \qquad (A.31)
$$

where $\alpha$, $\beta$, and $\gamma$ are vectors with constant coefficients. When $\beta_2 = 0$, these schemes are explicit, whereas, when $\beta_2 \neq 0$, the schemes are implicit. Since these schemes require starting at step $n = 2$, initial data need to be specified for $y_1$ as well as $y_0$. In general, $y_1$ may be taken from data, or may be approximated by advancing from initial data at $(t_0, y_0)$ using a one-step Euler–Maruyama scheme.
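As an illustration of the multistep idea, here is an explicit two-step scheme that treats the drift with the classical Adams–Bashforth weights (3/2, −1/2) and the noise with the current increment, using a single Euler–Maruyama step to supply $y_1$. This is a sketch in the spirit of, not a transcription of, the schemes in [125, 126]; all names are our own.

```python
import numpy as np

def two_step_ab(f, g, y0, T, N, rng=None):
    """Explicit two-step (Adams-Bashforth-type) scheme for dX = f dt + g dW,
    with an Euler-Maruyama start-up step providing y_1."""
    rng = np.random.default_rng(rng)
    h = T / N
    dW = np.sqrt(h) * rng.standard_normal(N)
    y = np.empty(N + 1)
    y[0] = y0
    # start-up: one Euler-Maruyama step supplies the second piece of initial data
    y[1] = y[0] + f(y[0]) * h + g(y[0]) * dW[0]
    for n in range(1, N):
        y[n + 1] = (y[n] + h * (1.5 * f(y[n]) - 0.5 * f(y[n - 1]))
                    + g(y[n]) * dW[n])
    return y

# Ornstein-Uhlenbeck test problem with additive noise
y = two_step_ab(lambda x: -x, lambda x: 0.1, y0=1.0, T=1.0, N=1000, rng=2)
```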

Moment matching

If only averages (or some other functional) of the solution are needed, rather than the sample paths themselves, integration algorithms may be sped up through the use of moment matching. Instead of using (A.26), the Wiener increments may be approximated by $\Delta W_{j,n} = \sqrt{h}\, \xi_{j,n}$, where the $\xi_{j,n}$ are i.i.d. according to [633]

$$
P(\xi_{j,n} = 0) = \frac{2}{3}, \qquad P(\xi_{j,n} = \pm\sqrt{3}) = \frac{1}{6}. \qquad (A.32)
$$

Thus, instead of having to sample from a normal distribution, this approach only requires sampling from a uniform distribution (a single uniform variate per increment, mapped to the three values above), which provides a great speed increase to any numerical integration algorithm.
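The three-point distribution (A.32) matches the first five moments of $\mathcal{N}(0,1)$, which is what weak convergence requires. A minimal sampler (names our own) thresholds a single uniform draw per increment:

```python
import numpy as np

def moment_matched_increments(n, h, rng=None):
    """Draw n increments sqrt(h)*xi with xi in {0, +sqrt(3), -sqrt(3)} and
    probabilities 2/3, 1/6, 1/6, as in (A.32); only uniform variates are used."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    xi = np.where(u < 2/3, 0.0,
                  np.where(u < 5/6, np.sqrt(3.0), -np.sqrt(3.0)))
    return np.sqrt(h) * xi

dW = moment_matched_increments(200000, h=1.0, rng=3)
# empirically: mean ~ 0, variance ~ h, fourth moment ~ 3 h^2, as for N(0, h)
```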


Weak noise

Suppose that the multiplicative terms in (A.2) can be written as $g_j = \sigma \tilde{g}_j$, where $\sigma > 0$ is a global noise strength. In cases in which the contribution of the noise is small, that is $\sigma \ll 1$, alternative integration algorithms can be used to provide efficient schemes with higher accuracy. For example, the standard fourth-order Runge–Kutta scheme for ODEs, appended with the term

$$
\sigma \sum_{j=1}^{d} \tilde{g}_j(y_n)\, \Delta W_{j,n}, \qquad (A.33)
$$

has a strong order of convergence of $h^4 + \sigma h + \sigma^2 h^{1/2}$, which allows larger step sizes to be taken during integration without incurring significant accuracy loss, providing a reduction in computational cost [632].
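A sketch of this hybrid (deterministic RK4 for the drift, with the noise term (A.33) appended each step) follows; the function name and test problem are our own.

```python
import numpy as np

def rk4_weak_noise(f, g, sigma, y0, T, N, rng=None):
    """RK4 for the drift of dX = f(X) dt + sigma*g(X) dW, with the noise
    term (A.33) appended; appropriate when sigma is small [632]."""
    rng = np.random.default_rng(rng)
    h = T / N
    y = np.empty(N + 1)
    y[0] = y0
    for n in range(N):
        # classical RK4 stages for the drift
        k1 = f(y[n])
        k2 = f(y[n] + 0.5 * h * k1)
        k3 = f(y[n] + 0.5 * h * k2)
        k4 = f(y[n] + h * k3)
        dW = np.sqrt(h) * rng.standard_normal()
        y[n + 1] = y[n] + h * (k1 + 2*k2 + 2*k3 + k4) / 6 + sigma * g(y[n]) * dW
    return y

# dX = -X dt + 0.05 dW: the path should hug the deterministic decay exp(-t)
y = rk4_weak_noise(lambda x: -x, lambda x: 1.0, sigma=0.05,
                   y0=1.0, T=1.0, N=100, rng=4)
```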

A.10 The Fokker–Planck equation

The Fokker–Planck equation, also known as the forward Kolmogorov equation, which is constructed using the terms in (A.2), can be used to derive many quantities of interest for a stochastic dynamical system [331, 751]. In particular, the Fokker–Planck equation governs the evolution of the probability density of the location of sample paths, given some initial distribution. The large-time solution of this partial differential equation (PDE) gives the steady-state distribution of the system. From here, the ensemble average of quantities of interest can be computed by taking the integral of the product of the quantity with the steady-state distribution over the entire domain. The Fokker–Planck equation (associated with the Itô interpretation) is written as

$$
\frac{\partial P(X, t \mid Y, t')}{\partial t} = -\sum_{i=1}^{n} \frac{\partial}{\partial X_i} \left[ f_i(X) P(X, t \mid Y, t') \right] + \frac{1}{2} \sum_{i,j=1}^{n} \frac{\partial^2}{\partial X_i \partial X_j} \left[ G_{ij}(X) P(X, t \mid Y, t') \right], \qquad (A.34)
$$

where

$$
G_{ij}(X) = \left[ g(X) g(X)^{\mathsf{T}} \right]_{ij} = \sum_{k=1}^{d} g_{ik}(X) g_{jk}(X), \qquad (A.35)
$$

is the outer product of g with itself. The Fokker–Planck equation comprises a drift term, given by the vector f , and a diffusion term, given by the tensor G (which will be a matrix if the Wiener process is one-dimensional). Since (A.34) is a PDE obeying the standard rules of calculus, there is no ambiguity over the correct interpretation of integrals in its solution. Except in simple cases, analytical solutions to (A.34) will not be available in closed form, and so using Fokker–Planck equations to study


stochastic systems exchanges the difficulties associated with solving SDEs for those involved in solving PDEs. In cases where $X$ exists in a high-dimensional space, it is often easier to use numerical techniques to approximate sample paths of the Langevin equation, and so estimate the probability distributions, rather than to solve the high-dimensional PDE. Before (A.34) can be solved, initial and boundary conditions must be specified. Suppose that the state variables evolve over a domain $X \in S \subseteq \mathbb{R}^n$, with boundary $\partial S$. Enforcing that $P$ is a probability distribution leads to the normalisation condition

$$
\int_{S} P(X, t)\, \mathrm{d}X = 1, \qquad \forall t. \qquad (A.36)
$$

This condition implies that the initial condition must be a probability distribution. The probability current, $J \in \mathbb{R}^n$, associated with (A.34) is defined to be

$$
J_i = - f_i P + \frac{1}{2} \sum_{j=1}^{n} G_{ij} \frac{\partial P}{\partial X_j}. \qquad (A.37)
$$

To ensure that the normalisation condition is always met, a reflecting boundary condition is imposed on $\partial S$, so that $r \cdot J = 0$ for all $X \in \partial S$, where $r$ is the outward-facing normal of $\partial S$ at $X$. If the domain $S$ is large enough, and it is known that $P$ and its spatial derivatives vanish at the boundary, it may be sufficient to impose the absorbing boundary condition, given by the Dirichlet condition $P(X) = 0$ for all $X \in \partial S$. Steady-state probability distributions can be found by setting $\partial P/\partial t = 0$ in (A.34) and solving the resulting homogeneous PDE. Note that since the time-independent Fokker–Planck equation is a homogeneous PDE with zero-flux boundary conditions, it is solved by the trivial solution, which cannot represent a probability density function since its integral vanishes. Care must therefore be taken to ensure that the non-trivial steady-state probability solution is found.
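As a concrete illustration (our own example, not from the text), consider the one-dimensional Ornstein–Uhlenbeck process $\mathrm{d}X = -X\,\mathrm{d}t + \sigma\,\mathrm{d}W$. Setting the probability current to zero gives the non-trivial steady state $P(x) \propto \exp(-x^2/\sigma^2)$, which can be normalised numerically and used to compute ensemble averages:

```python
import numpy as np

# Stationary Fokker-Planck solution for dX = -X dt + sigma dW:
# zero current (A.37) gives x P + (sigma^2/2) P' = 0, so P ∝ exp(-x^2/sigma^2).
sigma = 0.7
x = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]
P = np.exp(-x**2 / sigma**2)
P /= P.sum() * dx                        # impose the normalisation (A.36)
second_moment = (x**2 * P).sum() * dx    # ensemble average <X^2>
# this should match the known OU stationary variance sigma^2 / 2
```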

A.10.1 The backward Kolmogorov equation

Instead of propagating information forwards, it is often useful to work backwards and ask how probability distributions depend on the state at earlier times. This gives rise to the backward Kolmogorov equation (associated with the Itô interpretation) [331, 751]

$$
\frac{\partial P(X, t \mid Y, t')}{\partial t} = \sum_{i=1}^{n} f_i(Y) \frac{\partial}{\partial Y_i} \left[ P(X, t \mid Y, t') \right] + \frac{1}{2} \sum_{i,j=1}^{n} G_{ij}(Y) \frac{\partial^2}{\partial Y_i \partial Y_j} \left[ P(X, t \mid Y, t') \right]. \qquad (A.38)
$$

The absorbing boundary condition is the same as for the Fokker–Planck equation, so that $P(X) = 0$ for $X \in \partial S$. The reflecting boundary condition takes a slightly different form and is given by the condition

$$
\sum_{i,j=1}^{n} r_i G_{ij}(Y) \frac{\partial P}{\partial Y_j} = 0, \qquad X \in \partial S. \qquad (A.39)
$$


By using (A.38) and variations thereof along with appropriate boundary conditions, the spatial distribution of moments and distributions of first passage times from subsets of S can be found. This can be useful to make observations about transitions in bistable systems such as those discussed in Sec. 5.10. For further details on such calculations, see [331, 633].

A.11 Transforming probability distributions

In some cases, it is useful to transform probability distributions between different coordinate systems, for example, in situations where it is easier to compute the distribution in one coordinate system than another. A probability distribution $P$, given in terms of variables $x \in X$, can be expressed as a distribution $\tilde{P}$ over the variables $y \in Y$ using the transformation [331]

$$
\tilde{P}(y) = \left| \det \frac{\partial x}{\partial y} \right| P(x). \qquad (A.40)
$$
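For example (our own illustration, not from the text), transforming a standard normal density on $x$ to the density of $y = \mathrm{e}^x$ via (A.40): with $x = \log y$, the Jacobian factor is $|\mathrm{d}x/\mathrm{d}y| = 1/y$, which yields the lognormal density. Probability mass is preserved, as a quick numerical check confirms:

```python
import numpy as np

# density of y = exp(x) when x ~ N(0,1), via (A.40)
y = np.linspace(1e-4, 60.0, 400001)
dy = y[1] - y[0]
x = np.log(y)
P_x = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)   # standard normal on x
P_y = P_x / y                                      # multiply by |dx/dy| = 1/y
mass = P_y.sum() * dy                              # total mass is conserved (~1)
```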

Appendix B

Model details

B.1 The Connor–Stevens model

The Connor–Stevens model [178] can be written in the form

$$
C \frac{\mathrm{d}V}{\mathrm{d}t} = g_L (V_L - V) + g_{Na} m^3 h (V_{Na} - V) + g_K n^4 (V_K - V) + g_A a^3 b (V_A - V) + I,
$$
$$
\frac{\mathrm{d}X}{\mathrm{d}t} = \frac{X_\infty(V) - X}{\tau_X(V)}, \qquad X \in \{m, n, h, a, b\}, \qquad (B.1)
$$

with

$$
\alpha_m(V) = \frac{-0.1(V + 35 + m_s)}{\exp(-(V + 35 + m_s)/10) - 1}, \qquad \beta_m(V) = 4 \exp(-(V + 60 + m_s)/18),
$$
$$
m_\infty(V) = \frac{\alpha_m(V)}{\alpha_m(V) + \beta_m(V)}, \qquad \tau_m(V) = \frac{1}{3.8} \frac{1}{\alpha_m(V) + \beta_m(V)}, \qquad (B.2)
$$

and

$$
\alpha_h(V) = 0.07 \exp(-(V + 60 + h_s)/20), \qquad \beta_h(V) = \frac{1}{1 + \exp(-(V + 30 + h_s)/10)},
$$
$$
h_\infty(V) = \frac{\alpha_h(V)}{\alpha_h(V) + \beta_h(V)}, \qquad \tau_h(V) = \frac{1}{3.8} \frac{1}{\alpha_h(V) + \beta_h(V)}, \qquad (B.3)
$$

and

$$
\alpha_n(V) = \frac{-0.01(V + 50 + n_s)}{\exp(-(V + 50 + n_s)/10) - 1}, \qquad \beta_n(V) = 0.125 \exp(-(V + 60 + n_s)/80),
$$
$$
n_\infty(V) = \frac{\alpha_n(V)}{\alpha_n(V) + \beta_n(V)}, \qquad \tau_n(V) = \frac{1}{3.8} \frac{2}{\alpha_n(V) + \beta_n(V)}, \qquad (B.4)
$$

and

$$
a_\infty(V) = 0.0761 \left[ \frac{\exp((V + 94.22)/31.84)}{1 + \exp((V + 1.17)/28.93)} \right]^{1/3}, \qquad b_\infty(V) = \frac{1}{\{1 + \exp((V + 53.3)/14.54)\}^4},
$$
$$
\tau_a(V) = 0.3632 + \frac{1.158}{1 + \exp((V + 55.96)/20.12)}, \qquad \tau_b(V) = 1.24 + \frac{2.678}{1 + \exp((V + 50)/16.027)}. \qquad (B.5)
$$

The other parameters of the model are C = 1 μF cm⁻², g_L = 0.3 mmho cm⁻², g_Na = 120 mmho cm⁻², g_K = 20 mmho cm⁻², g_A = 47.7 mmho cm⁻², V_L = −17 mV, V_Na = 55 mV, V_K = −72 mV, V_A = −75 mV, m_s = −5.3 mV, h_s = −12 mV, and n_s = −4.3 mV.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. Coombes and K. C. A. Wedgwood, Neurodynamics, Texts in Applied Mathematics 75, https://doi.org/10.1007/978-3-031-21916-0
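The rate functions in (B.2) translate directly into code. A minimal sketch for the $m$ gate follows (function names are our own); in practice the removable singularity of $\alpha_m$ at $V + 35 + m_s = 0$ should be guarded against.

```python
import numpy as np

# Connor-Stevens m-gate functions, following (B.2) with m_s = -5.3 mV.
m_s = -5.3

def alpha_m(V):
    u = V + 35.0 + m_s
    return -0.1 * u / (np.exp(-u / 10.0) - 1.0)   # singular only at u = 0

def beta_m(V):
    return 4.0 * np.exp(-(V + 60.0 + m_s) / 18.0)

def m_inf(V):
    return alpha_m(V) / (alpha_m(V) + beta_m(V))

def tau_m(V):
    return (1.0 / 3.8) / (alpha_m(V) + beta_m(V))
```

The sigmoidal shape of $m_\infty$ can be checked directly: it is small at rest and approaches one at depolarised voltages.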

B.2 The Wang–Buzsáki model

The Wang–Buzsáki model [919] can be written in the form

$$
C \frac{\mathrm{d}V}{\mathrm{d}t} = g_L (V_L - V) + g_{Na} m_\infty^3(V) h (V_{Na} - V) + g_K n^4 (V_K - V) + I,
$$
$$
\frac{\mathrm{d}X}{\mathrm{d}t} = \phi \left[ \alpha_X(V) (1 - X) - \beta_X(V) X \right], \qquad X \in \{n, h\}, \qquad (B.6)
$$

with $\phi = 5$, $m_\infty(V) = \alpha_m(V)/(\alpha_m(V) + \beta_m(V))$ and

$$
\alpha_m(V) = \frac{0.1(V + 35)}{1 - \exp(-(V + 35)/10)}, \qquad \beta_m(V) = 4 \exp(-(V + 60)/18),
$$
$$
\alpha_h(V) = 0.07 \exp(-(V + 58)/20), \qquad \beta_h(V) = \frac{1}{1 + \exp(-(V + 28)/10)},
$$
$$
\alpha_n(V) = \frac{0.01(V + 34)}{1 - \exp(-(V + 34)/10)}, \qquad \beta_n(V) = 0.125 \exp(-(V + 44)/80). \qquad (B.7)
$$

The other parameters of the model are C = 1 μF cm⁻², g_L = 0.1 mmho cm⁻², g_Na = 35 mmho cm⁻², g_K = 9 mmho cm⁻², V_L = −65 mV, V_Na = 55 mV, and V_K = −90 mV.
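A bare-bones forward-Euler integration of (B.6)–(B.7) is sketched below; the parameter values follow the text, while the solver, step size, drive I = 1 μA cm⁻², and initial conditions are our own choices. With this drive the model fires repetitively.

```python
import numpy as np

# Wang-Buzsaki model (B.6)-(B.7), forward Euler
gL, gNa, gK = 0.1, 35.0, 9.0        # mmho/cm^2
VL, VNa, VK = -65.0, 55.0, -90.0    # mV
C, phi, I = 1.0, 5.0, 1.0

a_m = lambda V: 0.1 * (V + 35.0) / (1.0 - np.exp(-(V + 35.0) / 10.0))
b_m = lambda V: 4.0 * np.exp(-(V + 60.0) / 18.0)
a_h = lambda V: 0.07 * np.exp(-(V + 58.0) / 20.0)
b_h = lambda V: 1.0 / (1.0 + np.exp(-(V + 28.0) / 10.0))
a_n = lambda V: 0.01 * (V + 34.0) / (1.0 - np.exp(-(V + 34.0) / 10.0))
b_n = lambda V: 0.125 * np.exp(-(V + 44.0) / 80.0)
m_inf = lambda V: a_m(V) / (a_m(V) + b_m(V))

dt, T = 0.01, 100.0                 # ms
steps = int(T / dt)
V, h, n = -64.0, 0.78, 0.09         # near-rest initial data
Vtrace = np.empty(steps)
for k in range(steps):
    dV = (gL * (VL - V) + gNa * m_inf(V)**3 * h * (VNa - V)
          + gK * n**4 * (VK - V) + I) / C
    dh = phi * (a_h(V) * (1.0 - h) - b_h(V) * h)
    dn = phi * (a_n(V) * (1.0 - n) - b_n(V) * n)
    V, h, n = V + dt * dV, h + dt * dh, n + dt * dn
    Vtrace[k] = V
```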

B.3 The Golomb–Amitai model

The Golomb–Amitai model [356] can be written in the form


$$
C \frac{\mathrm{d}V}{\mathrm{d}t} = g_L (V_L - V) + g_{Na} m_\infty^3(V) h (V_{Na} - V) + g_{NaP}\, p_\infty(V) (V_{Na} - V) + g_{Kdr} n^4 (V_K - V) + g_A a_\infty^3(V) b (V_K - V) + g_Z z (V_K - V) + I,
$$
$$
\frac{\mathrm{d}h}{\mathrm{d}t} = \phi \left( \gamma(V, \theta_h, \sigma_h) - h \right) / \left( 1 + 7.5\, \gamma(V, \theta_{ht}, \sigma_{ht}) \right),
$$
$$
\frac{\mathrm{d}n}{\mathrm{d}t} = \phi \left( \gamma(V, \theta_n, \sigma_n) - n \right) / \left( 1 + 5\, \gamma(V, \theta_{nt}, \sigma_{nt}) \right),
$$
$$
\frac{\mathrm{d}b}{\mathrm{d}t} = \left( \gamma(V, \theta_b, \sigma_b) - b \right) / \tau_b, \qquad \frac{\mathrm{d}z}{\mathrm{d}t} = \left( \gamma(V, \theta_z, \sigma_z) - z \right) / \tau_z, \qquad (B.8)
$$

with $\phi = 2.7$, $m_\infty(V) = \gamma(V, \theta_m, \sigma_m)$, $p_\infty(V) = \gamma(V, \theta_p, \sigma_p)$, and $a_\infty(V) = \gamma(V, \theta_a, \sigma_a)$, where

$$
\gamma(V, \theta, \sigma) = \frac{1}{1 + \exp(-(V - \theta)/\sigma)}. \qquad (B.9)
$$

The other parameters of the model are θ_m = −30 mV, σ_m = 9.5 mV, θ_p = −40 mV, σ_p = 5 mV, θ_a = −50 mV, σ_a = 20 mV, θ_h = −53 mV, σ_h = −7 mV, θ_n = −30 mV, σ_n = 10 mV, θ_ht = −40.5 mV, σ_ht = −6 mV, θ_nt = −27 mV, σ_nt = −15 mV, θ_b = −80 mV, σ_b = −6 mV, τ_b = 15 ms, θ_z = −39 mV, σ_z = 5 mV, τ_z = 75 ms, C = 1 μF cm⁻², g_L = 0.02 mmho cm⁻², g_Na = 24 mmho cm⁻², g_NaP = 0.07 mmho cm⁻², g_Kdr = 3 mmho cm⁻², g_A = 1.4 mmho cm⁻², g_Z = 1 mmho cm⁻², V_L = −70 mV, V_Na = 55 mV, and V_K = −90 mV.

B.4 The Wang thalamic relay neuron model

The Wang thalamic relay neuron model [917] can be written in the form

$$
C \frac{\mathrm{d}V}{\mathrm{d}t} = -I_T - I_{sag} - I_{Na} - I_K - I_{NaP} - I_L + I, \qquad (B.10)
$$

with

$$
I_T = g_T s_\infty^3(V) h (V - V_{Ca}), \qquad \frac{\mathrm{d}h}{\mathrm{d}t} = 2\, \frac{h_\infty(V) - h}{\tau_h(V)},
$$
$$
s_\infty(V) = \frac{1}{1 + \exp(-(V + 65)/7.8)}, \qquad h_\infty(V) = \frac{1}{1 + \exp((V + 79)/5)},
$$
$$
\tau_h(V) = h_\infty(V) \exp((V + 162.3)/17.8) + 20, \qquad (B.11)
$$

$$
I_{sag} = g_{sag}\, \mathrm{sag}^2 (V - V_{sag}), \qquad \frac{\mathrm{d}\,\mathrm{sag}}{\mathrm{d}t} = \frac{\mathrm{sag}_\infty(V) - \mathrm{sag}}{\tau_{sag}(V)},
$$
$$
\mathrm{sag}_\infty(V) = \frac{1}{1 + \exp((V + 69)/7.1)}, \qquad \tau_{sag}(V) = \frac{1000}{\exp((V + 66.4)/9.3) + \exp(-(V + 81.6)/13)}, \qquad (B.12)
$$

$$
I_{Na} = g_{Na} m_\infty^3(\sigma_{Na}, V)(0.85 - n)(V - V_{Na}), \qquad m_\infty(\sigma, V) = \frac{\alpha_m(\sigma, V)}{\alpha_m(\sigma, V) + \beta_m(\sigma, V)},
$$
$$
\alpha_m(\sigma, V) = \frac{-0.1(V + 29.7 - \sigma)}{\exp(-0.1(V + 29.7 - \sigma)) - 1}, \qquad \beta_m(\sigma, V) = 4 \exp(-(V + 54.7 - \sigma)/18), \qquad (B.13)
$$

$$
I_K = g_K n^4 (V - V_K), \qquad \frac{\mathrm{d}n}{\mathrm{d}t} = \phi_n\, \frac{n_\infty(V) - n}{\tau_n(V)},
$$
$$
n_\infty(V) = \frac{\alpha_n(\sigma_K, V)}{\alpha_n(\sigma_K, V) + \beta_n(\sigma_K, V)}, \qquad \tau_n(V) = \frac{1}{\alpha_n(\sigma_K, V) + \beta_n(\sigma_K, V)},
$$
$$
\alpha_n(\sigma, V) = \frac{-0.01(V + 45.7 - \sigma)}{\exp(-0.1(V + 45.7 - \sigma)) - 1}, \qquad \beta_n(\sigma, V) = 0.125 \exp(-(V + 55.7 - \sigma)/80), \qquad (B.14)
$$

$$
I_{NaP} = g_{NaP} m_\infty^3(\sigma_{NaP}, V)(V - V_{Na}), \qquad I_L = g_L (V - V_L). \qquad (B.15)
$$

The other parameters of the model are C = 1 μF cm⁻², g_T = 1.0 mmho cm⁻², V_Ca = 120 mV, g_sag = 0.04 mmho cm⁻², V_sag = −40 mV, g_Na = 42 mmho cm⁻², V_Na = 55 mV, σ_Na = 6 mV, σ_NaP = −5 mV, σ_K = 10 mV, φ_n = 28.57, g_K = 30 mmho cm⁻², V_K = −80 mV, g_NaP = 9 mmho cm⁻², g_L = 0.12 mmho cm⁻², and V_L = −70 mV.

B.5 The Pinsky–Rinzel model

The Pinsky–Rinzel model [719] can be written in the form

$$
C \frac{\mathrm{d}V_s}{\mathrm{d}t} = g_L (V_L - V_s) + g_{Na} m_\infty^2(V_s) h (V_{Na} - V_s) + g_{Kdr} n (V_K - V_s) + g_c (V_d - V_s)/p + I/p,
$$
$$
C \frac{\mathrm{d}V_d}{\mathrm{d}t} = g_L (V_L - V_d) + g_{Ca} s^2 (V_{Ca} - V_d) + g_{K-ahp} q (V_K - V_d) + g_{K-C} c \chi (V_K - V_d) + g_c (V_s - V_d)/(1 - p),
$$
$$
\frac{\mathrm{d}Ca}{\mathrm{d}t} = -0.13 I_{Ca} - 0.075\, Ca,
$$
$$
\frac{\mathrm{d}X}{\mathrm{d}t} = \alpha_X(U) - \left( \alpha_X(U) + \beta_X(U) \right) X, \qquad X \in \{h, n, s, c, q\}, \qquad (B.16)
$$

where $U = V_s$ when $X = h, n$; $U = V_d$ when $X = s, c$; and $U = Ca$ when $X = q$. Here,

$$
\alpha_m(V) = \frac{0.32(-46.9 - V)}{\exp((-46.9 - V)/4) - 1}, \qquad \beta_m(V) = \frac{0.28(V + 19.9)}{\exp((V + 19.9)/5) - 1}, \qquad m_\infty(V) = \frac{\alpha_m(V)}{\alpha_m(V) + \beta_m(V)},
$$
$$
\alpha_n(V) = \frac{0.016(-24.9 - V)}{\exp((-24.9 - V)/5) - 1}, \qquad \beta_n(V) = 0.25 \exp(-1 - 0.025 V),
$$
$$
\alpha_h(V) = 0.128 \exp((-43 - V)/18), \qquad \beta_h(V) = \frac{4}{1 + \exp((-20 - V)/5)},
$$
$$
\alpha_s(V) = \frac{1.6}{1 + \exp(-0.072(V - 5))}, \qquad \beta_s(V) = \frac{0.02(V + 8.9)}{\exp((V + 8.9)/5) - 1},
$$
$$
\alpha_c(V) = (1 - \Theta(V + 10)) \exp\!\left( (V + 50)/11 - (V + 53.5)/27 \right)/18.975 + \Theta(V + 10)\, 2 \exp((-53.5 - V)/27),
$$
$$
\beta_c(V) = (1 - \Theta(V + 10)) \left( 2 \exp((-53.5 - V)/27) - \alpha_c(V) \right),
$$
$$
\alpha_q = \min(0.00002\, Ca, 0.01), \qquad \beta_q = 0.001, \qquad \chi = \min(Ca/250, 1), \qquad (B.17)
$$

where Θ is the Heaviside step function. The other parameters of the model are C = 3 μF cm⁻², g_L = 0.1 mmho cm⁻², g_Na = 30 mmho cm⁻², g_Kdr = 15 mmho cm⁻², g_Ca = 10 mmho cm⁻², g_K−ahp = 0.8 mmho cm⁻², g_K−C = 15 mmho cm⁻², V_Na = 60 mV, V_Ca = 80 mV, V_K = −75 mV, V_L = −60 mV, g_c = 2.1 mmho cm⁻², and p = 1/2.

References

1. Brain Multiphysics. https://www.sciencedirect.com/journal/brain-multiphysics/ 2. Machine learning and dynamical systems, The Alan Turing Institute. https://www.turing.ac. uk/research/interest-groups/machine-learning-and-dynamical-systems 3. matplotlib.pyplot.xkcd. https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.xkcd. html 4. https://neuroinformatics.nl/HBP/morphology-viewer/ 5. Abbott, L.F.: A network of oscillators. J. Phys. A 23, 3835–3859 (1990) 6. Abbott, L.F.: Simple diagrammatic rules for solving dendritic cable problems. Physica A 185, 343–356 (1992) 7. Abbott, L.F., Fahri, E., Gutmann, S.: The path integral for dendritic trees. Biol. Cybern. 66, 49–60 (1991) 8. Abbott, L.F., Kepler, T.B.: Model neurons: from Hodgkin–Huxley to Hopfield. In: Garrido, L. (ed.) Statistical Mechanics of Neural Networks, no. 368 in Lecture notes in Physics, pp. 5–18. Springer (1990) 9. Abbott, L.F., Marder, E.: Modelling small networks. In: Kock, C., Segev, I. (eds.) Methods in Neuronal Modelling, pp. 361–410. MIT Press (1998) 10. Abbott, L.F., Van Vresswijk, C.: Asynchronous states in networks of pulse-coupled oscillators. Phys. Rev. E 48, 1483–1490 (1993) 11. Abeysuriya, R.G., Hadida, J., Sotiropoulos, S.N., Jbabdi, S., Becker, R., Hunt, B.A., Brookes, M.J., Woolrich, M.W.: A biophysical model of dynamic balancing of excitation and inhibition in fast oscillatory large-scale networks. PLoS Comput. Biol. 14, e1006007 (2018) 12. Abeysuriya, R.G., Robinson, P.A.: Real-time automated EEG tracking of brain states using neural field theory. J. Neurosci. Methods 258, 28–45 (2016) 13. Ablowitz, M.J., Fokas, A.S.: Introduction to Complex Variables and Applications. Cambridge Texts in Applied Mathematics. Cambridge University Press (2021) 14. Abrams, D.M., Mirollo, R., Strogatz, S.H., Wiley, D.A.: Solvable model for chimera states of coupled oscillators. Phys. Rev. Lett. 101, 084103 (2008) 15. Abrams, D.M., Strogatz, S.H.: Chimera states for coupled oscillators. Phys. Rev. 
Lett. 93, 174102 (2004) 16. Achuthan, S., Canavier, C.C.: Phase-resetting curves determine synchronization, phase locking, and clustering in networks of neural oscillators. J. Neurosci. 29, 5218–5233 (2009) 17. Acker, C.D., Kopell, N., White, J.A.: Synchronization of strongly coupled excitatory neurons: relating network behavior to biophysics. J. Comput. Neurosci. 15, 71–90 (2003)


18. Adelman, W.J., Fitzhugh, R.: Solutions of the Hodgkin-Huxley equations modified for potassium accumulation in a periaxonal space. Fed. Am. Soc. Exp. Biol. Proc. 34, 1322–1329 (1975) 19. Ahmadizadeh, S., Shames, I., Martin, S., Neši´c, D.: On eigenvalues of Laplacian matrix for a class of directed signed graphs. Linear Algebra Appl. 523, 281–306 (2017) 20. Aihara, K., Matsumoto, G., Ichikawa, M.: An alternating periodic-chaotic sequence observed in neural oscillators. Phys. Lett. 111, 251–255 (1985) 21. Al-Darabsah, I., Campbell, S.A.: M-current induced Bogdanov-Takens bifurcation and switching of neuron excitability class. J. Math. Neurosci. 11(1) (2021) 22. Albert, R., Jeong, H., Barabási, A.L.: Error and attack tolerance of complex networks. Nature 406, 378–382 (2000) 23. Alvarez, A.V., Chow, C.C., Van Bockstaele, E.J., Williams, J.T.: Frequency-dependent synchrony in locus ceruleus: role of electrotonic coupling. Proc. Natl. Acad. Sci. USA 99, 4032– 4036 (2002) 24. Amari, S.I.: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern. 27, 77–87 (1977) 25. Amari, S.I.: Mathematical Theory of Nerve Nets. Sangyotosho (1978) 26. Amari, S.I.: Neural Field Theory, chap. Excitation and Self-Organization of Neural Fields. Springer, Heaviside World (2013) 27. Aminzare, Z., Srivastava, V., Holmes, P.: Gait transitions in a phase oscillator model of an insect central pattern generator. SIAM J. Appl. Dyn. Syst. 17, 626–671 (2018) 28. Amit, D.J.: Modelling Brain Function. Cambridge University Press (1989) 29. Amit, D.J., Gutfreund, H., Sompolinsky, H.: Storing infinite numbers of patterns in a spinglass model of neural networks. Phys. Rev. Lett. 55, 1530–1533 (1985) 30. Amunts, K., Knoll, A.C., Lippert, T., Pennartz, C.M.A., Ryvlin, P., Destexhe, A., Jirsa, V.K., D’Angelo, E., Bjaalie, J.G.: The Human Brain Project-Synergy between neuroscience, computing, informatics, and brain-inspired technologies. PLoS Biol. 17, e3000344 (2019) 31. 
Arenas, A., Díaz-Guilera, A., Kurths, J., Moreno, Y., Zhou, C.: Synchronization in complex networks. Phys. Rep. 469, 93–153 (2008) 32. Arnal, L.H., Giraud, A.L.: Cortical oscillations and sensory predictions. Trends Cogn. Sci. 16, 390–398 (2012) 33. Arnold, L.: Random Dynamical Systems, 1st edn. Springer (2003) 34. Aru, J., Aru, J., Priesemann, V., Wibral, M., Lana, L., Pipa, G., Singer, W., Vicente, R.: Untangling cross-frequency coupling in neuroscience. Curr. Opin. Neurobiol. 31C, 51–61 (2014) 35. Ashwin, P., Bick, C., Burylko, O.: Identical phase oscillator networks: bifurcations, symmetry and reversibility for generalized coupling. Front. Appl. Math. Stat. 2(7) (2016) 36. Ashwin, P., Burylko, O., Maistrenko, Y.: Bifurcation to heteroclinic cycles and sensitivity in three and four coupled phase oscillators. Physica D 237, 454–466 (2008) 37. Ashwin, P., Orosz, G., Wordsworth, J., Townley, S.: Dynamics on networks of cluster states for globally coupled phase oscillators. SIAM J. Appl. Dyn. Syst. 6, 728–758 (2007) 38. Ashwin, P., Postlethwaite, C.: Sensitive finite-state computations using a distributed network with a noisy network attractor. IEEE Trans. Neural Netw. Learn. Syst. 29, 5847–5858 (2018) 39. Ashwin, P., Postlethwaite, C.: Excitable networks for finite state computation with continuous time recurrent neural networks. Biol. Cybern. 115, 519–538 (2021) 40. Ashwin, P., Rodrigues, A.: Hopf normal form with S N symmetry and reduction to systems of nonlinearly coupled phase oscillators. Physica D 325, 14–24 (2016) 41. Ashwin, P., Swift, J.W.: The dynamics of n weakly coupled identical oscillators. J. Nonlinear Sci. 2, 69–108 (1992) 42. Atay, F.M., Biyikoˇglu, T., Jost, J.: Network synchronization: spectral versus statistical properties. Physica D 224, 35–41 (2006) 43. Atay, F.M., Hutt, A.: Neural fields with distributed transmission speeds and long-range feedback delays. SIAM J. Appl. Dyn. Syst. 5, 670–698 (2006)


44. Augustin, M., Ladenbauer, J., Obermayer, K.: How adaptation shapes spike rate oscillations in recurrent neuronal networks. Front. Comput. Neurosci. 7(9) (2013) 45. Avitabile, D., Davis, J., Wedgwood, K.C.A.: Bump attractors and waves in networks of leaky integrate-and-fire neurons. SIAM Rev. 65, 147–182 (2022) 46. Avitabile, D., Desroches, M., Knobloch, E.: Spatiotemporal canards in neural field equations. Phys. Rev. E 95, 042205 (2017) 47. Avitabile, D., Wedgwood, K.C.A.: Macroscopic coherent structures in a stochastic neural network: from interface dynamics to coarse-grained bifurcation analysis. J. Math. Biol. 75, 885–928 (2017) 48. Azad, A.K.A., Ashwin, P.: Within-burst synchrony changes for coupled elliptic bursters. SIAM J. Appl. Dyn. Syst. 9, 261–281 (2010) 49. Bacak, B.J., Kim, T., Smith, J.C., Rubin, J.E., Rybak, I.A.: Mixed-mode oscillations and population bursting in the pre-Bötzinger complex. eLife 5, e13403 (2016) 50. Badel, L., Lefort, S., Brette, R., Petersen, C.C.H., Gerstner, W., Richardson, M.J.E.: Dynamic I-V curves are reliable predictors of naturalistic pyramidal-neuron voltage traces. J. Neurophysiol. 99, 656–666 (2008) 51. Baigent, S., Stark, J., Warner, A.: Modelling the effect of gap junction nonlinearities in systems of coupled cells. J. Theor. Biol. 186, 223–239 (1997) 52. Bailey, M.P., Derks, G., Skeldon, A.C.: Circle maps with gaps: Understanding the dynamics of the two-process model for sleep-wake regulation. Eur. J. Appl. Math. 29, 845–868 (2018) 53. Baladron, J., Fasoli, D., Faugeras, O., Touboul, J.: Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons. J. Math. Neurosci. 2(1) (2012) 54. Barabási, A.L.: Network Science, 1st edn. Cambridge University Press (2016) 55. Barak, O., Tsodyks, M.: Persistent activity in neural networks with dynamic synapses. PLoS Comput. Biol. 3, e104 (2007) 56. 
Bard Ermentrout, G., Glass, L., Oldeman, B.E.: The shape of phase-resetting curves in oscillators with a saddle node on an invariant circle bifurcation. Neural Comput. 24, 3111–3125 (2012) 57. Barlow, B.Y.H.B.: Summation and inhibition in the frog’s retina. J. Physiol. 119, 69–88 (1953) 58. Barrio, R., Ibán˜ez, S., Pérez, L.: Hindmarsh-Rose model: close and far to the singular limit. Phys. Lett. A 381, 597–603 (2017) 59. Basar, E., Basar-Eroglu, C., Karakas, S., Schürmann, M.: Gamma, alpha, delta, and theta oscillations govern cognitive processes. Int. J. Psychophysiol. 39, 241–248 (2001) 60. Bassett, D.S., Bullmore, E.T.: Small-world brain networks revisited. Neuroscientist 23, 499– 516 (2017) 61. Bassett, D.S., Sporns, O.: Network neuroscience. Nat. Neurosci. 20, 353–364 (2017) 62. Bassett, D.S., Wymbs, N.F., Porter, M.A., Mucha, P.J., Carlson, J.M., Grafton, S.T.: Dynamic reconfiguration of human brain networks during learning. Proc. Natl. Acad. Sci. USA 108, 7641–7646 (2011) 63. Bastos, A.M., Schoffelen, J.M.: A tutorial review of functional connectivity analysis methods and their interpretational pitfalls. Front. Syst. Neurosci. 9(175) (2016) 64. Ben-Yishai, R., Lev Bar-Or, R., Sompolinsky, H.: Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. USA 92, 3844–3848 (1995) 65. Benda, J., Herz, A.V.M.: A universal model for spike-frequency adaptation. Neural Comput. 15, 2523–2564 (2003) 66. Benda, J., Maler, L., Longtin, A.: Linear versus nonlinear signal transmission in neuron models with adaptation currents or dynamic thresholds. J. Neurophysiol. 104, 2806–2820 (2010) 67. Benettin, G., Galgani, L., Giorgilli, A., Strelcyn, J.M.: Lyapunov characteristic exponents for smooth dynamical systems and for Hamiltonian systems; a method for computing all of them. Part 1: theory. Meccanica 15, 9–20 (1980) 68. Bennet, M.V.L., Zukin, R.S.: Electrical coupling and neuronal synchronization in the mammalian brain. Neuron 41, 495–511 (2004)


69. Benoit, E., Callot, J.L., Diener, F., Diener, M.: Chasse au canard. Collectanea Mathematica 31–32, 37–119 (1981) 70. Benucci, A., Frazor, R.A., Carandini, M.: Standing waves and traveling waves distinguish two circuits in visual cortex. Neuron 55, 103–117 (2007) 71. Benzit, R., Sutera, A., Vulpiani, A.: The mechanism of stochastic resonance. J. Phys. A 14, 453–457 (1981) 72. Berger, H.: Über das Elektrenkephalogramm des Menschen. Archiv für Psychiatrie und Nervenkrankheiten 87, 527–570 (1929) 73. Berglund, N., Gentz, B.: Noise-Induced Phenomena in Slow-Fast Dynamical Systems: A Sample-Paths Approach. Springer (2006) 74. Berry, M.J., Meister, M.: Refractoriness and neural precision. Adv. Neural Inf. Process. Syst. 18, 110–116 (1998) 75. Berry, M.J., Tkaˇcik, G.: Clustering of neural activity: a design principle for population codes. Front. Comput. Neurosci. 14, 1–19 (2020) 76. Betzel, R.F., Medaglia, J.D., Papadopoulos, L., Baum, G.L., Gur, R.C., Gur, R.E., Roalf, D., Satterwaite, T.D., Bassett, D.S.: The modular organization of human anatomical brain networks: accounting for the cost of wiring. Netw. Neurosci. 1, 42–68 (2017) 77. Bialek, W.: Biophysics. Princeton University Press (2005) 78. Bick, C., Ashwin, P.: Chaotic weak chimeras and their persistence in coupled populations of phase oscillators. Nonlinearity 29, 1468–1486 (2016) 79. Bick, C., Ashwin, P., Rodrigues, A.: Chaos in generically coupled phase oscillator networks with nonpairwise interactions. Chaos 26, 094814 (2016) 80. Bick, C., Gross, E., Harrington, H.A., Schaub, M.T.: What are higher-order networks? SIAM Rev. (2021) 81. Bick, C., Martens, E.A.: Controlling chimeras. New J. Phys. 17, 033030 (2015) 82. Bick, C., Panaggio, M.J., Martens, E.A.: Chaos in Kuramoto oscillator networks. Chaos 28, 071102 (2018) 83. Bick, C., Timme, M., Paulikat, D., Rathlev, D., Ashwin, P.: Chaos in symmetric phase oscillator networks. Phys. Rev. Lett. 107, 244101 (2011) 84. 
Bienenstock, E.L., Cooper, L.N., Munro, P.W.: Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci. 2, 32–48 (1982) 85. Billock, V.A., Tsou, B.H.: Neural interactions between flicker-induced self-organized visual hallucinations and physical stimuli. Proc. Natl. Acad. Sci. USA 104, 8490–8495 (2007) 86. Birg, T., Ortolano, F., Wiegers, E.J., Smielewski, P., Savchenko, Y., Ianosi, B.A., Helbok, R., Rossi, S., Carbonara, M., Zoerle, T., Stocchetti, N., Anke, A., Beer, R., Bellander, B.M., Beqiri, E., Buki, A., Cabeleira, M., Chieregato, A., Citerio, G., Clusmann, H., Czeiter, E., Czosnyka, M., Depreitere, B., Ercole, A., Frisvold, S., Jankowski, S., Kondziella, D., Koskinen, L.O., Kowark, A., Menon, D.K., Meyfroidt, G., Moeller, K., Nelson, D., Piippo-Karjalainen, A., Radoi, A., Ragauskas, A., Raj, R., Rhodes, J., Rocka, S., Rossaint, R., Sahuquillo, J., Sakowitz, O., Sundström, N., Takala, R., Tamosuitis, T., Tenovuo, O., Vajkoczy, P., Vargiolu, A., Vilcinis, R., Wolf, S., Younsi, A., Zeiler, F.A.: Brain temperature influences intracranial pressure and cerebral perfusion pressure after traumatic brain injury: a CENTER-TBI study. Neurocrit. Care 35, 651–661 (2021) 87. Biswal, B., Yetkinand, F.Z., Haughton, V.M., Hyde, J.S.: Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magn. Reson. Med. 34, 537–541 (1995) 88. Bojak, I., Liley, D.: Modeling the effects of anesthesia on the electroencephalogram. Phys. Rev. E 71, 041902 (2005) 89. Bojak, I., Liley, D.T.J.: Axonal velocity distributions in neural field equations. PLoS Comput. Biol. 6, e1000653 (2010) 90. Bonilla-Quintana, M., Wedgwood, K.C.A., O’Dea, R.D., Coombes, S.: An analysis of waves underlying grid cell firing in the medial enthorinal cortex. J. Math. Neurosci. 7(9) (2017) 91. Borisyuk, A., Rinzel, J.: Understanding neuronal dynamics by geometrical dissection of minimal models. 
In: Chow, C., Gutkin, B., Hansel, D., Meunier, C., Dalibard, J. (eds.) Methods and Models in Neurophysics: Lecture Notes of the Les Houches Summer School 2003. Elsevier (2004) 92. Borisyuk, R., Kirillov, A.B.: Bifurcation analysis of a neural network model. Biol. Cybern. 66, 319–325 (1992) 93. Bose, A., Booth, V.: Bursting: The Genesis of Rhythm in the Nervous System, chap. Bursting in 2-compartment neurons: a case study of the Pinsky-Rinzel model, pp. 123–144. World Scientific (2005) 94. Bossy, M., Faugeras, O., Talay, D.: Clarification and complement to "Mean-field description and propagation of chaos in networks of Hodgkin–Huxley and FitzHugh–Nagumo neurons". J. Math. Neurosci. 5(1) (2015) 95. Braaksma, B.: Phantom ducks and models of excitability. J. Dyn. Differ. Equ. 4, 485–513 (1992) 96. Braun, W., Matthews, P.C., Thul, R.: First-passage times in integrate-and-fire neurons with stochastic thresholds. Phys. Rev. E 91, 052701 (2015) 97. Brea, J., Gerstner, W.: Does computational neuroscience need new synaptic learning paradigms? Curr. Opin. Behav. Sci. 11, 61–66 (2016) 98. Breakspear, M., Heitmann, S., Daffertshofer, A.: Generative models of cortical oscillations: neurobiological implications of the Kuramoto model. Front. Hum. Neurosci. 4(190) (2010) 99. Bressloff, P.C.: Stochastic dynamics of time-summating binary neural networks. Phys. Rev. A 44, 4005–4016 (1991) 100. Bressloff, P.C.: Traveling waves and pulses in a one-dimensional network of integrate-and-fire neurons. J. Math. Biol. 40, 169–183 (2000) 101. Bressloff, P.C.: Traveling fronts and wave propagation failure in an inhomogeneous neural network. Physica D 155, 83–100 (2001) 102. Bressloff, P.C.: Les Houches Lectures in Neurophysics, chap. Pattern formation in visual cortex. Springer (2004) 103. Bressloff, P.C.: Stochastic neural field theory and the system size expansion. SIAM 70, 1488–1521 (2009) 104. Bressloff, P.C.: Spatiotemporal dynamics of continuum neural fields. J. Phys. A 45, 033001 (2012) 105. Bressloff, P.C.: Stochastic Processes in Cell Biology. Springer (2014) 106. Bressloff, P.C.: Waves in Neural Media: From Single Neurons to Neural Fields. Springer (2014) 107. Bressloff, P.C., Coombes, S.: Symmetry and phase-locking in a ring of pulse-coupled oscillators with distributed delays. Physica D 126, 99–122 (1999) 108. Bressloff, P.C., Coombes, S.: Dynamics of strongly-coupled spiking neurons. Neural Comput. 12, 91–129 (2000) 109. Bressloff, P.C., Coombes, S.: Neural 'bubble' dynamics revisited. Cogn. Comput. 5, 281–294 (2013) 110. Bressloff, P.C., Cowan, J.D., Golubitsky, M., Thomas, P.J., Wiener, M.: Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Philos. Trans. R. Soc. B 40, 299–330 (2001) 111. Bressloff, P.C., MacLaurin, J.: Synchronization of stochastic hybrid oscillators driven by a common switching environment. Chaos 28, 123123 (2018) 112. Bressloff, P.C., MacLaurin, J.N.: A variational method for analyzing stochastic limit cycle oscillators. SIAM J. Appl. Dyn. Syst. 17, 2205–2233 (2018) 113. Bressloff, P.C., Rowlands, G.: Exact travelling wave solutions of an "integrable" discrete reaction-diffusion equation. Physica D 106, 255–269 (1997) 114. Bressloff, P.C., Taylor, J.G.: Discrete time leaky integrator network with synaptic noise. Neural Netw. 4, 789–801 (1991) 115. Bressloff, P.C., Taylor, J.G.: Compartmental-model response function for dendritic trees. Biol. Cybern. 70, 199–207 (1993) 116. Bressloff, P.C., Webber, M.A.: Front propagation in stochastic neural fields. SIAM J. Appl. Dyn. Syst. 11, 708–740 (2012)

466

References

117. Brette, R., Gerstner, W.: Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. J. Neurophysiol. 94, 3637–42 (2005) 118. Breuer, D., Timme, M., Memmesheimer, R.M.: Statistical physics of neural systems with nonadditive dendritic coupling. Phys. Rev. X 4, 011053 (2014) 119. Brøns, M., Krupa, M., Wechselberger, M.: Mixed mode oscillations due to the generalized canard phenomenon. Fields Inst. Commun. 49, 36–63 (2006) 120. Brooks, S.: Markov chain Monte Carlo method and its application. J. R. Stat. Soc. Ser. D 47, 69–100 (1998) 121. Brown, T.G.: On the nature of the fundamental activity of the nervous centres; together with an analysis of the conditioning of rhythmic activity in progression, and a theory of the evolution of function in the nervous system. J. Physiol. 48, 18–46 (1914) 122. Brunel, N., Hakim, V.: Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput. 11, 1621–1671 (1999) 123. Brunel, N., Latham, P.E.: Firing rate of the noisy quadratic integrate-and-fire neuron. Neural Comput. 15, 2281–2306 (2003) 124. Bruns, A.: Fourier-, Hilbert- and wavelet-based signal analysis: are they really different approaches? J. Neurosci. Methods 137, 321–332 (2004) 125. Buckwar, E., Horváth-Bokor, R., Winkler, R.: Asymptotic mean-square stability of two-step methods for stochastic ordinary differential equations. BIT Numer. Math. 46, 261–282 (2006) 126. Buckwar, E., Winkler, R.: Multi-step methods for SDEs and their application to problems with small noise. SIAM J. Numer. Anal. 44, 779–803 (2006) 127. Buesing, L., Bill, J., Nessler, B., Maass, W.: Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons. PLoS Comput. Biol. 7, e1002211 (2011) 128. Buice, M.A., Chow, C.C.: Dynamic finite size effects in spiking neural networks. PLoS Comput. Biol. 9, e1002872 (2013) 129. 
Buice, M.A., Cowan, J.D.: Field-theoretic approach to fluctuation effects in neural networks. Phys. Rev. E 75, 051919 (2007) 130. Bullmore, E., Sporns, O.: Complex brain networks: graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 10, 186–198 (2009) 131. Burrage, K., Burrage, P.M.: General order conditions for stochastic Runge-Kutta methods for both commuting and non-commuting stochastic ordinary differential equation systems. Appl. Numer. Math. 28, 161–177 (1998) 132. Butera, R.J., Rinzel, J., Smith, J.C.: Models of respiratory rhythm generation in the preBotzinger complex. I. Bursting pacemaker neurons. J. Neurophysiol. 82, 382–397 (1999) 133. Buzsáki, G.: Theta oscillations in the hippocampus. Neuron 33, 325–340 (2002) 134. Byrne, A., Avitabile, D., Coombes, S.: A next generation neural field model: the evolution of synchrony within patterns and waves. Phys. Rev. E 99, 012313 (2019) 135. Byrne, A., Brookes, M.J., Coombes, S.: A mean field model for movement induced changes in the beta rhythm. J. Comput. Neurosci. 43, 143–158 (2017) 136. Byrne, A., Coombes, S., Liddle, P.F.: Handbook of Multi-scale Models of Brain Disorders, chap. A neural mass model for abnormal beta-rebound in schizophrenia. Springer (2019) 137. Byrne, A., Dea, R.O., Forrester, M., Ross, J., Coombes, S.: Next generation neural mass and field modelling. J. Neurophysiol. 123, 726–742 (2020) 138. Byrne, A., Ross, J., Nicks, R., Coombes, S.: Mean-field models for EEG/MEG: from oscillations to waves. Brain Topogr. 35, 36–53 (2022) 139. Cachope, R., Mackie, K., Triller, A., O’Brien, J., Pereda, A.E.: Potentiation of electrical and chemical synaptic transmission mediated by endocannabinoids. Neuron 56, 1034–1047 (2007) 140. Cai, D., Tao, L., Rangan, A.V., McLaughlin, D.W.: Kinetic theory for neuronal network dynamics. Commun. Math. Phys. 4, 97–127 (2006) 141. 
Cai, D., Tao, L., Shelley, M., McLaughlin, D.W.: An effective kinetic representation of fluctuation-driven neuronal networks with application to simple and complex cells in visual cortex. Proc. Natl. Acad. Sci. USA 101, 7757–7762 (2004)

References

467

142. Caianiello, E.R.: Outline of a theory of thought processes and thinking machines. J. Theor. Biol. 1, 204–235 (1961) 143. Campbell, S.A.: Handbook of Brain Connectivity, chap. Time delays in neural systems, pp. 65–90. Springer (2007) 144. Camproux, A.C., Saunier, F., Chouvet, G., Thalabard, J.C., Thomas, G.: A hidden Markov model approach to neuron firing patterns. Biophys. J. 71, 2404–2412 (1996) 145. Cannon, R.C., D’Alessandro, G.: The ion channel inverse problem: neuroinformatics meets biophysics. PLoS Comput. Biol. 2, e91 (2006) 146. Canolty, R.T., Knight, R.T.: The functional role of cross-frequency coupling. Trends Cogn. Sci. 14, 506–515 (2010) 147. Cao, A., Lindner, B., Thomas, P.J.: A partial differential equation for the mean-return-time phase of planar stochastic oscillators. SIAM J. Appl. Math. 80, 442–447 (2020) 148. Cao, B.J., Abbott, L.F.: New computational method for cable theory problems. Biophys. J. 64, 303–313 (1993) 149. Caporale, N., Dan, Y.: Spike timing-dependent plasticity: a Hebbian learning rule. Annu. Rev. Neurosci. 31, 25–46 (2008) 150. Carletti, T., Fanelli, D., Nicoletti, S.: Dynamical systems on hypergraphs. J. Phys. Complex. 1, 035006 (2020) 151. Carmona, V., Fernández-Sánchez, F.: Integral characterization for Poincaré half-maps in planar linear systems. J. Differ. Equ. 305, 319–346 (2021) 152. Carnevale, N.T., Hines, M.L.: The NEURON Book. Cambridge University Press (2006) 153. Carpenter, G.: A geometric approach to singular perturbation problems with applications to the nerve impulse equation. J. Differ. Equ. 23, 335–367 (1977) 154. Carroll, S.R., Bressloff, P.C.: Binocular rivalry waves in a directionally selective neural field model. Physica D 285, 8–17 (2014) 155. Casadiego, J., Nitzan, M., Hallerberg, S., Timme, M.: Model-free inference of direct network interactions from nonlinear collective dynamics. Nat. Commun. 8(2192) (2017) 156. 
Castejón, O., Guillamon, A., Huguet, G.: Phase-amplitude response functions for transientstate stimuli. J. Math. Neurosci. 3(1), 13 (2013) 157. Ceni, A., Ashwin, P., Livi, L.: Interpreting recurrent neural networks behaviour via excitable network attractors. Cogn. Comput. 12, 330–356 (2020) 158. Cestnik, R., Mau, E.T.K., Rosenblum, M.: Inferring oscillator’s phase and amplitude response from a scalar signal exploiting test stimulation (2022) 159. Cestnik, R., Rosenblum, M.: Inferring the phase response curve from observation of a continuously perturbed oscillator. Sci. Rep. 8, 13606 (2018) 160. Chacron, M.J., Longtin, A., Pakdaman, K.: Chaotic firing in the sinusoidally forced leaky integrate-and-fire model with threshold fatigue. Physica D 192, 138–160 (2004) 161. Chacron, M.J., Longtin, A., St-Hilaire, M., Maler, L.: Suprathreshold stochastic firing dynamics with memory in P-type electroreceptors. Phys. Rev. Lett. 85, 1576–1579 (2000) 162. Chang, W.C., Kudlacek, J., Hlinka, J., Chvojka, J., Hadrava, M., Kumpost, V., Powell, A.D., Janca, R., Maturana, M.I., Karoly, P.J., Freestone, D.R., Cook, M.J., Palus, M., Otahal, J., Jefferys, J.G.R., Jiruska, P.: Loss of neuronal network resilience precedes seizures and determines the ictogenic nature of interictal synaptic perturbations. Nat. Neurosci. 21, 1742–1752 (2018) 163. Chen, Z., Vijayan, S., Barbieri, R., Wilson, M.A., Brown, E.N.: Discrete- and continuoustime probabilistic models and algorithms for inferring neuronal UP and DOWN states. Neural Comput. 21, 1797–1862 (2009) 164. Chervin, R.D., Pierce, P.A., Connors, B.W.: Periodicity and directionality in the propagation of epileptiform discharges across neocortex. J. Neurophysiol. 60, 1695–1713 (1988) 165. Chialvo, D.R.: Generic excitable dynamics on a two-dimensional map. Chaos, Solitons Fractals 5, 461–479 (1995) 166. Chialvo, D.R., Cecchi, G.A., Magnasco, M.O.: Noise-induced memory in extended excitable systems. Phys. Rev. E 61, 5654–5657 (2000) 167. 
Chicone, C.: Ordinary Differential Equations with Applications, 2nd edn. Springer (2006)

468

References

168. Chizhov, A.V., Graham, L.J., Turbin, A.A.: Simulation of neural population dynamics with refractory density approach and a conductance-based threshold neuron model. Neurocomputing 70, 252–262 (2006) 169. Chorev, E., Yarom, Y., Lampl, I.: Rhythmic episodes of subthreshold membrane potential oscillations in the rat inferior olive nuclei in vivo. J. Neurosci. 27, 5043–5052 (2007) 170. Chowdhury, F.A., Woldman, W., FitzGerald, T.H., Elwes, R.D., Nashef, L., Terry, J.R., Richardson, M.P.: Revealing a brain network endophenotype in families with idiopathic generalised epilepsy. PLoS ONE 9, e110136 (2014) 171. Cier, F., Cera, N., Griffa, A., Mantini, D., Esposito, R.: Editorial: dynamic functioning of resting state networks in physiological and pathological conditions. Front. Neurosci. 14, 624401 (2020) 172. Clay, J.R., Paydarfar, D., Forger, D.B.: A simple modification of the Hodgkin and Huxley equations explains type 3 excitability in squid giant axons. J. R. Soc. Interface 5, 1421–1428 (2008) 173. Clewley, R., Sherwood, E., LaMar, D., Guckenheimer, J.M.: PyDSTool (2007). https:// pydstool.sourceforge.net 174. Cohen, A.H., Holmes, P.J., Rand, R.H.: The nature of the coupling between segmental oscillators of the lamprey spinal generator for locomotion: a mathematical model. J. Math. Biol. 13, 345–369 (1982) 175. Colby, C.L., Duhamel, J.R., Goldberg, M.E.: Oculocentric spatial representation in parietal cortex. Cereb. Cortex 5, 470–481 (1995) 176. Cole, K.S.: Rectification and inductance in the squid giant axon. J. Gen. Physiol. 25, 29–51 (1941) 177. Connor, J.A., Stevens, C.F.: Prediction of repetitive firing behaviour from voltage clamp data on an isolated neurone soma. J. Physiol. 213, 31–53 (1971) 178. Connor, J.A., Walter, D., McKown, R.: Neural repetitive firing: modifications of the HodgkinHuxley axon suggested by experimental results from crustacean axons. Biophys. J. 18, 81–102 (1977) 179. 
Connors, B.W., Amitai, Y.: Generation of epileptiform discharge by local circuits of neocortex. In: Schwartkroin, P.A. (ed.) Epilepsy: Models, Mechanisms, and Concepts, pp. 388–423. Cambridge University Press (1993) 180. Cook, B.J., Peterson, A.D.H., Woldman, W., Terry, J.R.: Neural field models: a mathematical overview and unifying framework. Math. Neurosci. Appl. 2(7284) (2022) 181. Coombes, S.: The effect of ion pumps on the speed of travelling waves in the fire-diffuse-fire model of Ca2+ . Bull. Math. Biol. 63, 1–20 (2001) 182. Coombes, S.: Phase-locking in networks of pulse-coupled McKean relaxation oscillators. Physica D 160, 173–188 (2001) 183. Coombes, S.: Dynamics of synaptically coupled integrate-and-fire-or-burst neurons. Phys. Rev. E 67, 041910 (2003) 184. Coombes, S.: Waves, bumps, and patterns in neural field theories. Biol. Cybern. 93, 91–108 (2005) 185. Coombes, S.: Neuronal networks with gap junctions: a study of piece-wise linear planar neuron models. SIAM J. Appl. Dyn. Syst. 7, 1101–1129 (2008) 186. Coombes, S., Beim Graben, P.P., Potthast, R., Wright, J.: Neural Fields: Theory and Applications. Springer (2014) 187. Coombes, S., Bressloff, P.C.: Mode locking and Arnold tongues in integrate-and-fire neural oscillators. Phys. Rev. E 60, 2086–2096 (1999) 188. Coombes, S., Bressloff, P.C.: Solitary waves in a model of dendritic cable with active spines. SIAM J. Appl. Math. 61, 432–453 (2000) 189. Coombes, S., Bressloff, P.C.: Erratum: mode locking and Arnold tongues in integrate-and-fire neural oscillators [Phys. Rev. E 60, 2086 (1999)]. Phys. Rev. E 63, 059901 (2001) 190. Coombes, S., Byrne, A.: Lecture Notes in Nonlinear Dynamics in Computational Neuroscience: from Physics and Biology to ICT, chap. Next generation neural mass models, pp. 1–16. Springer (2019)

References

469

191. Coombes, S., Doiron, B., Josi´c, K., Shea-Brown, E.: Toward blueprints for network architecture, biophysical dynamics, and signal transduction. Philos. Trans. R. Soc. A 364, 3301–3318 (2006) 192. Coombes, S., Lai, Y.M., Sayli, M., Thul, R.: Networks of piecewise linear neural mass models. Eur. J. Appl. Math. 29, 869–890 (2018) 193. Coombes, S., Laing, C.: Delays in activity-based neural networks. Philos. Trans. R. Soc. A 367, 1117–1129 (2009) 194. Coombes, S., Laing, C.R.: Pulsating fronts in periodically modulated neural field models. Phys. Rev. E 83, 011912 (2011) 195. Coombes, S., Laing, C.R., Schmidt, H., Svanstedt, N., Wyller, J.A.: Waves in random neural media. Discrete Contin. Dyn. Syst. Ser. A 32, 2951–2970 (2012) 196. Coombes, S., Osbaldestin, A.H.: Period adding bifurcations and chaos in a periodically stimulated excitable neural relaxation oscillator. Phys. Rev. E 62, 4057–4066 (2000) 197. Coombes, S., Owen, M.R.: Evans functions for integral neural field equations with Heaviside firing rate function. SIAM J. Appl. Dyn. Syst. 34, 574–600 (2004) 198. Coombes, S., Owen, M.R.: Bumps, breathers, and waves in a neural network with spike frequency adaptation. Phys. Rev. Lett. 94, 148102 (2005) 199. Coombes, S., Owen, M.R.: Exotic dynamics in a firing rate model of neural tissue with threshold accommodation. In: Fluids and Waves: Recent Trends in Applied Analysis, vol. 440, pp. 123–144. American Mathematical Society (2007) 200. Coombes, S., Schmidt, H., Avitabile, D.: Neural Fields: Theory and Applications, chap. Spots: Breathing, Drifting and Scattering in a Neural Field Model, pp. 187–211. Springer (2014) 201. Coombes, S., Schmidt, H., Bojak, I.: Interface dynamics in planar neural field models. J. Math. Neurosci. 2(1) (2012) 202. Coombes, S., Thul, R.: Synchrony in networks of coupled nonsmooth dynamical systems: extending the master stability function. Eur. J. Appl. Math. 27, 904–922 (2016) 203. 
Coombes, S., Thul, R., Laudanski, J., Palmer, A.R., Sumner, C.J.: Neuronal spike-train responses in the presence of threshold noise. Front. Life Sci. 5, 91–105 (2011) 204. Coombes, S., Thul, R., Wedgwood, K.C.A.: Nonsmooth dynamics in spiking neuron models. Physica D 241, 2042–2057 (2012) 205. Coombes, S., Timofeeva, Y., Svensson, C.M., Lord, G.J., Josi´c, K., Cox, S.J., Colbert, C.M.: Branching dendrites with resonant membrane: a “sum-over-trips” approach. Biol. Cybern. 97, 137–149 (2007) 206. Coombes, S., Venkov, N., Shiau, L., Bojak, I., Liley, D., Laing, C.: Modeling electrocortical activity through improved local approximations of integral neural field equations. Phys. Rev. E 76, 051901 (2007) 207. Coombes, S., Zachariou, M.: Gap junctions and emergent rhythms. Springer Series in Computational Neuroscience, vol. 3, pp. 77–94. Springer (2009) 208. Cotterill, R.: Biophysics: An introduction. Wiley (2002) 209. Couzin-Fuchs, E., Kiemel, T., Gal, O., Ayali, A., Holmes, P.: Intersegmental coupling and recovery from perturbations in freely running cockroaches. J. Exp. Biol. 218, 285–297 (2015) 210. Cover, T.M.: Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans. Electron. Comput. EC-14, 326–334 (1965) 211. Cowan, J.D., Neuman, J., Van Drongelen, W.: Wilson-Cowan equations for neocortical dynamics. J. Math. Neurosci. 6(1) (2016) 212. Creaser, J., Ashwin, P., Tsaneva-Atanasova, K.: Sequential escapes and synchrony breaking for networks of bistable oscillatory nodes. SIAM J. Appl. Dyn. Syst. 19, 2829–2846 (2020) 213. Creaser, J.L., Diekman, C.O., Wedgwood, K.C.A.: Entrainment dynamics organised by global manifolds in a circadian pacemaker model. Front. Appl. Math. Stat. 7(703359) (2021) 214. Crimi, A., Dodero, L., Sambataro, F., Murino, V., Sona, D.: Structurally constrained effective brain connectivity. NeuroImage 239, 118288 (2021) 215. 
Cronin, J.: Mathematical Aspects of Hodgkin-Huxley Neural Theory. Cambridge University Press (1987)

470

References

216. Crook, S.M., Dur-e-Ahmad, M., Baer, S.M.: A model of activity-dependent changes in dendritic spine density and spine structure. Math. Biosci. Eng. 4, 617–631 (2007) 217. Crook, S.M., Ermentrout, G.B., Vanier, M.C., Bower, J.M.: The role of axonal delay in the synchronization of networks of coupled cortical oscillators. J. Comput. Neurosci. 4, 161–172 (1997) 218. Cross, M.C., Hohenberg, P.C.: Pattern formation outside of equilibrium. Rev. Modern Phys. 65, 851–1111 (2003) 219. Crunelli, V., T’oth, T.I., Cope, D.W., Blethyn, K., Hughes, S.W.: The ‘window’ T-type calcium current in brain dynamics of different behavioural states. J. Physiol. 562, 121–129 (2005) 220. Cruz-Gómez, Á.J., Ventura-Campos, N., Belenguer, A., Ávila, C., Forn, C.: The link between resting-state functional connectivity and cognition in MS patients. Multiple Scler. J. 20, 338– 348 (2014) 221. Curtu, R., Ermentrout, B.: Oscillations in a refractory neural net. J. Math. Biol. 43, 81–100 (2001) 222. Czeisler, C.A., Gooley, J.J.: Sleep and circadian rhythms in humans. Cold Spring Harb. Symp. Quant. Biol. 72, 579–597 (2007) 223. Dafilis, M.P., Frascoli, F., Cadusch, P.J., Liley, D.T.J.: Chaos and generalised multistability in a mesoscopic model of the electroencephalogram. Physica D 238, 1056–1060 (2009) 224. Dale, H.H.: Pharmacology and nerve-endings (Walter Ernest Dixon memorial lecture. Section of Therapeutics and Pharmacology). Proc. R. Soc. Med. 28, 319–30 (1934) 225. Daniel, E., Meindertsma, T., Arazi, A., Donner, T.H., Dinstein, I.: The relationship between trial-by-trial variability and oscillations of cortical population activity. Sci. Rep. 9, 16901 (2019) 226. Dankowicz, H., Schilder, F.: Recipes for continuation. https://doi.org/10.1137/1. 9781611972573 (2013) 227. Danzl, P., Hespanha, J., Moehlis, J.: Event-based minimum-time control of oscillatory neuron models: phase randomization, maximal spike rate increase, and desynchronization. Biol. Cybern. 101, 387–399 (2009) 228. 
Daunizeau, J., Kiebel, S.J., Friston, K.J.: Dynamic causal modelling of distributed electromagnetic responses. NeuroImage 47, 590–601 (2009) 229. Dayan, P., Abbott, L.F.: Theoretical Neuroscience. MIT Press (2001) 230. De Pittà, M., Berry, H. (eds.): Computational Glioscience. Springer Series in Computational Neuroscience. Springer (2019) 231. Deco, G., Buehlmann, A., Masquelier, T., Hugues, E.: The role of rhythmic neural synchronization in rest and task conditions. Front. Hum. Neurosci. 5, 1–6 (2011) 232. Deco, G., Jirsa, V.K., Robinson, P.A., Breakspear, M., Friston, K.: The dynamic brain: from spiking neurons to neural masses and cortical fields. PLoS Comput. Biol. 4, e1000092 (2008) 233. del Molino, L.C.G., Pakdaman, K., Touboul, J., Wainrib, G.: Synchronization in random balanced networks. Phys. Rev. E 88, 042824 (2013) 234. Delaney, K.R., Gelperin, A., Fee, M.S., Flores, J.A., Gervais, R., Tank, D.W., Kleinfeld, D.: Waves and stimulus-modulated dynamics in an oscillating olfactory network. Proc. Natl. Acad. Sci. USA 91, 669–673 (1994) 235. Denève, S., Machens, C.K.: Efficient codes and balanced networks. Nat. Neurosci. 19, 375– 382 (2016) 236. Deng, F., Jiang, X., Zhu, D., Zhang, T., Li, K., Guo, L., Liu, T.: A functional model of cortical gyri and sulci. Brain Struct. Funct. 219, 1473–1491 (2014) 237. Desroches, M., Fernández-García, S., Krupa, M.: Canards in a minimal piecewise-linear square-wave burster. Chaos 26, 073111 (2016) 238. Desroches, M., Freire, E., Hogan, S.J., Ponce, E., Thota, P.: Canards in piecewise-linear systems: explosions and super-explosions. Proc. R. Soc. A Math. Phys. Eng. Sci. 469, 2154 (2013) 239. Desroches, M., Guckenheimer, J., Kuehn, C., Osinga, H.M., Wechselberger, M.: Mixed-mode oscillations with multiple time scales. SIAM Rev. 54, 211–288 (2012)

References

471

240. Desroches, M., Kaper, T.J., Krupa, M.: Mixed-mode bursting oscillations: dynamics created by a slow passage through spike-adding canard explosion in a square-wave burster. Chaos 23, 046106 (2013) 241. Desroches, M., Kowalczyk, P., Rodrigues, S.: Spike-adding and reset-induced canard cycles in adaptive integrate and fire models. Nonlinear Dyn. 104, 2451–2470 (2021) 242. Desroches, M., Rinzel, J., Rodrigues, S.: Classification of bursting patterns: a tale of two ducks. PLoS Comput. Biol. 18, e1009752 (2022) 243. Destexhe, A., Huguenard, J.R.: Computational Modeling Methods for Neuroscientists, chap. Modeling voltage-dependent channels, pp. 107–138. MIT Press (2009) 244. Destexhe, A., Mainen, Z.F., Sejnowski, T.J.: Synthesis of models for excitable membranes synaptic transmission and neuromodulation using a common kinetic formalism. J. Comput. Neurosci. 1, 195–231 (1994) 245. Destexhe, A., Sejnowski, T.J.: The Wilson-Cowan model, 36 years later. Biol. Cybern. 101, 1–2 (2009) 246. Dethier, J., Drion, G., Franci, A., Sepulchre, R.: A positive feedback at the cellular level promotes robustness and modulation at the circuit level. J. Neurophysiol. 114, 2472–2484 (2015) 247. Detrixhe, M., Doubeck, M., Moehlis, J., Gibou, F.: A fast Eulerian approach for computation of global isochrons in high dimensions. SIAM J. Appl. Dyn. Syst. 15, 1501–1527 (2016) 248. Dhooge, A., Govaerts, W., Kuznetsov, Y.A., Meijer, H.G.E., Sautois, B.: New features of the software MatCont for bifurcation analysis of dynamical systems. Math. Comput. Model. Dyn. Syst. 14, 147–175 (2008) 249. di Bernardo, M., Budd, C.J., Champneys, A.R., Kowalczyk, P.: Piecewise-Smooth Dynamical Systems, vol. 50, 1st edn. Springer (2008) 250. Diekman, C.O., Bose, A.: Reentrainment of the circadian pacemaker during jet lag: East-west asymmetry and the effects of north-south travel. J. Theor. Biol. 437, 261–285 (2018) 251. Diener, M.: The canard unchained or how fast/slow dynamical systems bifurcate. Math. Intell. 
6, 38–49 (1984) 252. Díez-Cirarda, M., Strafella, A.P., Kim, J., Peña, J., Ojeda, N., Cabrera-Zubizarreta, A., Ibarretxe-Bilbao, N.: Dynamic functional connectivity in Parkinson’s disease patients with mild cognitive impairment and normal cognition. NeuroImage: Clin. 17, 847–855 (2018) 253. Ding, J., Zhou, A.: Eigenvalues of rank-one updated matrices with some applications. Appl. Math. Lett. 20, 1223–1226 (2007) 254. Doiron, B., Litwin-Kumar, A.: Balanced neural architecture and the idling brain. Front. Comput. Neurosci. 8(56) (2014) 255. Dolmatova, A.V., Goldobin, D.S., Pikovsky, A.: Synchronization of coupled active rotators by common noise. Phys. Rev. E 96, 062204 (2017) 256. Dorval 2nd, A.D., Bettencourt, J., Netoff, T.I., White, J.A.: Hybrid neuronal network studies under dynamic clamp. In: Molnar, P., Hickman, J. (eds.) Patch-Clamp Methods and Protocols. Methods in Molecular Biology, 403 edn. Humana Press (2007) 257. Dowling, J.E.: The Retina: An Approachable Part of the Brain. Harvard University Press (1987) 258. Drion, G., Dethier, J., Franci, A., Sepulchre, R.: Switchable slow cellular conductances determine robustness and tunability of network states. PLoS Comput. Biol. 14, e1006125 (2018) 259. Drion, G., Franci, A., Sepulchre, R.: Cellular switches orchestrate rhythmic circuits. Biol. Cybern. 113, 71–82 (2019) 260. Drion, G., Franci, A., Seutin, V., Sepulchre, R.: A novel phase portrait for neuronal excitability. PLoS ONE 7, e41806 (2012) 261. Drion, G., Leary, T.O., Marder, E.: Ion channel degeneracy enables robust and tunable neuronal firing rates. Proc. Natl. Acad. Sci. USA 112, E5361–E5370 (2015) 262. Dumont, G., Gutkin, B.: Macroscopic phase resetting-curves determine oscillatory coherence and signal transfer in inter-coupled neural circuits. PLoS Comput. Biol. 15, e1007019 (2019) 263. Dumont, G., Henry, J.: Population density models of integrate-and-fire neurons with jumps: well-posedness. J. Math. Biol. 67, 453–481 (2013)

472

References

264. Dumortier, F., Rousarrie, R.H.: Canard Cycles and Center Manifolds. American Mathematical Society (1996) 265. Eckhaus, W.: Relaxation oscillations including a standard chase on French duck. In: Asymptotic Analysis II, Lecture Notes in Mathematics, pp. 449–494. Springer (1983) 266. El Houssaini, K., Ivanov, A.I., Bernard, C., Jirsa, V.K.: Seizures, refractory status epilepticus, and depolarization block as endogenous brain activities. Phys. Rev. E 91, 010701 (2015) 267. Elphick, C., Meron, E., Rinzel, J., Spiegel, E.A.: Impulse patterning and relaxational propagation in excitable media. J. Theor. Biol. 146, 249–268 (1990) 268. Engbers, J.D.T., Anderson, D., Tadayonnejad, R., Mehaffey, W.H., Molineux, M.L., Turner, R.W.: Distinct roles for IT and I H in controlling the frequency and timing of rebound spike responses. J. Physiol. 589, 5391–5413 (2011) 269. Erlhagen, W., Bicho, E.: The dynamic neural field approach to cognitive robotics. J. Neural Eng. 3, R36-54 (2006) 270. Ermentrout, B.: Type I membranes, phase resetting curves, and synchrony. Neural Comput. 8, 979–1001 (1986) 271. Ermentrout, B.: Stripes or spots? Nonlinear effects in bifurcation of reaction-diffusion equations on the square. Proc. R. Soc. A 434, 413–417 (1991) 272. Ermentrout, B.: The analysis of synaptically generated traveling waves. J. Comput. Neurosci. 5, 191–208 (1998) 273. Ermentrout, B.: Neural networks as spatio-temporal pattern-forming systems. Rep. Prog. Phys. 61, 353–430 (1998) 274. Ermentrout, B., Park, Y., Wilson, D.: Recent advances in coupled oscillator theory. Philos. Trans. R. Soc. A 377, 20190092 (2019) 275. Ermentrout, G.B.: Simulating, Analyzing, and Animating Dynamical Systems: A Guide to XPPAUT for Researchers and Students. SIAM Books (2002) 276. Ermentrout, G.B., Cowan, J.D.: A mathematical theory of visual hallucination patterns. Biol. Cybern. 34, 137–150 (1979) 277. Ermentrout, G.B., Galán, R.F., Urban, N.N.: Relating neural dynamics to neural coding. Phys. Rev. Lett. 
99, 248103 (2007) 278. Ermentrout, G.B., Kleinfeld, D.: Traveling electrical waves in cortex: insights from phase dynamics and speculation on a computational role. Neuron 29, 33–44 (2001) 279. Ermentrout, G.B., Kopell, N.: Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J. Appl. Math. 46, 233–253 (1986) 280. Ermentrout, G.B., Kopell, N.: Multiple pulse interactions and averaging in systems of coupled neural oscillators. J. Math. Biol. 29, 195–217 (1991) 281. Ermentrout, G.B., Kopell, N.J.: Frequency plateaus in a chain of weakly coupled oscillators. SIAM J. Appl. Math. 15, 215–237 (1984) 282. Ermentrout, G.B., McLeod, J.B.: Existence and uniqueness of travelling waves for a neural network. Proc. R. Soc. Edinb. 123A, 461–478 (1993) 283. Ermentrout, G.B., Terman, D.H.: Mathematical Foundations of Neuroscience. Springer (2010) 284. Esaki, L.: Discovery of the tunnel diode. IEEE Trans. Electron Devices 23, 644–647 (1976) 285. Evans, J.: Nerve axon equations: I linear approximations. Indiana Univ. Math. J. 21, 877–885 (1972) 286. Evans, J.: Nerve axon equations: II stability at rest. Indiana Univ. Math. J. 22, 75–90 (1972) 287. Evans, J.: Nerve axon equations: III stability of the nerve impulse. Indiana Univ. Math. J. 22, 577–593 (1972) 288. Evans, J.: Nerve axon equations: IV the stable and unstable impulse. Indiana Univ. Math. J. 24, 1169–1190 (1975) 289. Faisal, A.A., Selen, L.P.J., Wolpert, D.M.: Noise in the nervous system. Nat. Rev. Neurosci. 9, 292–303 (2008) 290. Faisal, A.A., White, J.A., Laughlin, S.B.: Ion-channel noise places limits on the miniaturization of the brain’s wiring. Curr. Biol. 15, 1143–1149 (2005) 291. Fall, C.P., Keizer, J.E.: Computational Cell Biology, chap. Voltage gated ionic currents, pp. 21–52. Springer (2002)

References

473

292. Fang, J., Faye, G.: Monotone traveling waves for delayed neural field equations. Math. Models Methods Appl. Sci. 26, 1919–1954 (2016) 293. Faugeras, O., Grimbert, F., Slotine, J.J.: Absolute stability and complete synchronization in a class of neural field models. SIAM J. Appl. Math. 69, 205–250 (2008) 294. Faugeras, O., Inglis, J.: Stochastic neural field equations: a rigorous footing. J. Math. Biol. 71, 259–300 (2015) 295. Faugeras, O.D., Song, A., Veltz, R.: Spatial and color hallucinations in a mathematical model of primary visual cortex. Comptes Rendus Mathématique 360, 59–87 (2022) 296. Faye, G., Scheel, A.: Existence of pulses in excitable media with nonlocal coupling. Adv. Math. 270, 400–456 (2015) 297. Feldmeyer, D., Lubke, J.H.R. (eds.): New Aspects of Axonal Structure and Function. Springer (2010) 298. Feller, W.: An Introduction to Probability Theory and its Applications, vol. 2. Wiley (1966) 299. Fenichel, N.: Geometric singular perturbation theory for ordinary differential equations. J. Differ. Equ. 31, 53–98 (1979) 300. Ferreira, L.K., Busatto, G.F.: Resting-state functional connectivity in normal brain aging. Neurosci. Biobehav. Rev. 37, 384–400 (2013) 301. Fields, R.D.: White matter in learning, cognition and psychiatric disorders. Trends Neurosci. 31, 361–370 (2008) 302. Fields, R.D., Stevens-Graham, B.: New insights into neuron-glia communication. Science 298, 556–562 (2002) 303. Fife, P.C., McLeod, J.B.: A phase plane discussion of convergence to travelling fronts for nonlinear diffusion. Arch. Ration. Mech. Anal. 75, 281–314 (1980) 304. Filippov, A.F.: Differential Equations with Discontinuous Righthand Sides, 1st edn. Springer (1960) 305. Fitzhugh, R.: Impulses and physiological states in theoretical models of nerve membrane. Biophys. J. 1, 445–466 (1961) 306. Folias, S.E.: Nonlinear analysis of breathing pulses in a synaptically coupled neural network. SIAM J. Appl. Dyn. Syst. 10, 744–787 (2011) 307. 
Folias, S.E., Bressloff, P.C.: Breathing pulses in an excitatory neural network. SIAM J. Appl. Dyn. Syst. 3, 378–407 (2004) 308. Folias, S.E., Bressloff, P.C.: Breathers in two-dimensional neural media. Phys. Rev. Lett. 95, 208107 (2005) 309. Forger, D.B., Jewett, M.E., Kronauer, R.E.: A simpler model of the human circadian pacemaker. J. Biol. Rhythms 14, 533–538 (1999) 310. Forrester, M., Coombes, S., Crofts, J.J., Sotiropoulos, S.N., O’Dea, R.D.: The role of node dynamics in shaping emergent functional connectivity patterns in the brain. Netw. Neurosci. 4, 467–483 (2020) 311. Foster, B.L., Bojak, I., Liley, D.T.J.: Population based models of cortical drug response: insights from anaesthesia. Cogn. Neurodyn. 2, 283–296 (2008) 312. Fourcaud-Trocme, N., Hansel, D., Van Vreeswijk, C., Brunel, N.: How spike generation mechanisms determine the neuronal response to fluctuating inputs. J. Neurosci. 23, 11628– 11640 (2003) 313. Fox, R., Lu, Y.: Emergent collective behavior in large numbers of globally coupled independently stochastic ion channels. Phys. Rev. E 49, 3421–3431 (1994) 314. Fox, R.F.: Stochastic versions of the Hodgkin-Huxley equations. Biophys. J. 72, 2068–2074 (1997) 315. Foxall, E., Edwards, R., Ibrahim, S., Van Den Driessche, P.: A contraction argument for two-dimensional spiking neuron models. SIAM J. Appl. Dyn. Syst. 11, 540–566 (2012) 316. Franci, A., Drion, G., Sepulchre, R.: An organizing center in a planar model of neuronal excitability. SIAM J. Appl. Dyn. Syst. 11, 1698–1722 (2012) 317. Franci, A., Drion, G., Sepulchre, R.: Modeling the modulation of neuronal bursting: a singularity theory approach. SIAM J. Appl. Dyn. Syst. 13, 798–829 (2014)

474

References

318. Franci, A., Drion, G., Sepulchre, R.: Robust and tunable bursting requires slow positive feedback. J. Neurophysiol. 119, 1222–1234 (2018)
319. Franci, A., Drion, G., Seutin, V., Sepulchre, R.: A balance equation determines a switch in neuronal excitability. PLoS Comput. Biol. 9, e1003040 (2013)
320. Frankenhuis, W.E., Panchanathan, K., Barto, A.G.: Enriching behavioral ecology with reinforcement learning methods. Behav. Process. 161, 94–100 (2019)
321. Freyer, F., Roberts, J.A., Ritter, P., Breakspear, M.: A canonical model of multistability and scale-invariance in biological systems. PLoS Comput. Biol. 8, e1002634 (2012)
322. Fries, P.: A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends Cogn. Sci. 9, 474–480 (2005)
323. Fries, P.: Communication through coherence (CTC 2.0). Neuron 88, 220–235 (2015)
324. Fröhlich, F.: Network Neuroscience. Academic Press (2016)
325. Fuchs, A., Jirsa, V.K., Kelso, J.A.S.: Theory of the relation between human brain activity (MEG) and hand movements. NeuroImage 11, 359–369 (2000)
326. Fukuda, T., Kosaka, T.: Gap junctions linking the dendritic network of GABAergic interneurons in the hippocampus. J. Neurosci. 20, 1519–1528 (2000)
327. Gabbiani, F., Cox, S.J.: Mathematics for Neuroscientists, 2nd edn. Academic Press (2017)
328. Galán, R., Ermentrout, G., Urban, N.: Efficient estimation of phase-resetting curves in real neurons and its significance for neural-network modeling. Phys. Rev. Lett. 94, 158101 (2005)
329. Galán, R.F., Ermentrout, G.B., Urban, N.N.: Optimal time scale for spike-time reliability: theory, simulations, and experiments. J. Neurophysiol. 99, 277–283 (2008)
330. Galarreta, M., Hestrin, S.: A network of fast-spiking cells in the neocortex connected by electrical synapses. Nature 402, 72–75 (1999)
331. Gardiner, C.: Stochastic Methods: A Handbook for the Natural and Social Sciences, 4th edn. Springer (2009)
332. Gardner, E., Derrida, B.: Optimal storage properties of neural network models. J. Phys. A 21, 271 (1988)
333. Giese, M.A.: Dynamic Neural Field Theory for Motion Perception. Kluwer Academic Publishers (1999)
334. Gerstein, G.L., Mandelbrot, B.: Random walk models for the spike activity of a single neuron. Biophys. J. 4, 41–68 (1964)
335. Gerstner, W.: Time structure of the activity in neural network models. Phys. Rev. E 51, 738–758 (1995)
336. Gerstner, W., Kistler, W.M., Naud, R., Paninski, L.: Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press (2014)
337. Gerstner, W., Ritz, R., Van Hemmen, J.L.: Why spikes? Hebbian learning and retrieval of time-resolved excitation patterns. Biol. Cybern. 69, 503–515 (1993)
338. Gigandet, X., Hagmann, P., Kurant, M., Cammoun, L., Meuli, R., Thiran, J.P.: Estimating the confidence level of white matter connections obtained with MRI tractography. PLoS ONE 3, e4006 (2008)
339. Gillespie, D.T.: Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem. 81, 2340–2361 (1977)
340. Gillespie, D.T.: Stochastic simulation of chemical kinetics. Annu. Rev. Phys. Chem. 58, 35–55 (2007)
341. Ginzburg, I., Sompolinsky, H.: Theory of correlations in stochastic neural networks. Phys. Rev. E 50, 3171–3191 (1994)
342. Girardi-Schappo, M., Bortolotto, G.S., Stenzinger, R.V., Gonsalves, J.J., Tragtenberg, M.H.: Phase diagrams and dynamics of a computationally efficient map-based neuron model. PLoS ONE 12, e0174621 (2017)
343. Gjorgjieva, J., Clopath, C., Audet, J., Pfister, J.P.: A triplet spike-timing-dependent plasticity model generalizes the Bienenstock-Cooper-Munro rule to higher-order spatiotemporal correlations. Proc. Natl. Acad. Sci. USA 108, 19383–19388 (2011)
344. Glass, L., Mackey, M.C.: From Clocks to Chaos. Princeton University Press (1988)

345. Glowatzki, E., Fuchs, P.A.: Transmitter release at the hair cell ribbon synapse. Nat. Neurosci. 5, 147–154 (2002)
346. Gökçe, A., Avitabile, D., Coombes, S.: The Art of Theoretical Biology, chap. Labyrinths: exotic patterns of cortical activity, pp. 32–33. Springer (2020)
347. Gökçe, A., Coombes, S., Avitabile, D.: Quasicrystal patterns in a neural field model. Phys. Rev. Res. 2, 013234 (2020)
348. Gökçe, A., Avitabile, D., Coombes, S.: The dynamics of neural fields on bounded domains: an interface approach for Dirichlet boundary conditions. J. Math. Neurosci. 7(12) (2017)
349. Goldman-Rakic, P.: Cellular basis of working memory. Neuron 14, 477–485 (1995)
350. Goldobin, D., Rosenblum, M., Pikovsky, A.: Controlling oscillator coherence by delayed feedback. Phys. Rev. E 67, 061119 (2003)
351. Goldobin, D.S.: Coherence versus reliability of stochastic oscillators with delayed feedback. Phys. Rev. E 78, 1–4 (2008)
352. Goldobin, D.S., Pikovsky, A.: Synchronization and desynchronization of self-sustained oscillators by common noise. Phys. Rev. E 71, 15–18 (2005)
353. Goldobin, D.S., Pimenova, A.V., Rosenblum, M., Pikovsky, A.: Competing influence of common noise and desynchronizing coupling on synchronization in the Kuramoto-Sakaguchi ensemble. Eur. Phys. J. Spec. Top. 226, 1921–1937 (2017)
354. Goldobin, D.S., Teramae, J., Nakao, H., Ermentrout, G.B.: Dynamics of limit-cycle oscillators subject to general noise. Phys. Rev. Lett. 105, 154101 (2010)
355. Goldstein, R.E.: Pattern Formation in the Physical and Biological Sciences, chap. Nonlinear dynamics of pattern formation in physics and biology, pp. 65–91. CRC Press (1997)
356. Golomb, D., Amitai, Y.: Propagating neuronal discharges in neocortical slices: computational and experimental study. J. Neurophysiol. 78, 1199–1211 (1997)
357. Golomb, D., Ermentrout, G.B.: Effects of delay on the type and velocity of travelling pulses in neuronal networks with spatially decaying connectivity. Network 11, 221–246 (2000)
358. Golubitsky, M., Josic, K., Shea-Brown, E.: Winding numbers and average frequencies in phase oscillator networks. J. Nonlinear Sci. 16, 201–231 (2006)
359. Golubitsky, M., Schaeffer, D.G.: Singularities and Groups in Bifurcation Theory: Vol. I. Springer (1985)
360. Golubitsky, M., Stewart, I.: The Symmetry Perspective: From Equilibrium to Chaos in Phase Space and Physical Space. Springer (2002)
361. Golubitsky, M., Stewart, I., Buono, P.L., Collins, J.J.: Symmetry in locomotor central pattern generators and animal gaits. Nature 401, 693–695 (1999)
362. Golubitsky, M., Stewart, I.N., Schaeffer, D.G.: Singularities and Groups in Bifurcation Theory: Vol. II. Springer (1988)
363. Gong, P., Steel, H., Robinson, P., Qi, Y.: Dynamic patterns and their interactions in networks of excitable elements. Phys. Rev. E 88, 042821 (2013)
364. González-Ramírez, L.R., Ahmed, O.J., Cash, S.S., Wayne, C.E., Kramer, M.A.: A biologically constrained, mathematical model of cortical wave propagation preceding seizure termination. PLoS Comput. Biol. 11, 1–34 (2015)
365. Goodhill, G.J.: Theoretical models of neural development. iScience 8, 183–199 (2018)
366. Goodman, D.: Brian: a simulator for spiking neural networks in Python. Front. Neuroinformatics 2(5) (2008)
367. Goodman, D.F., Brette, R.: The Brian simulator. Front. Neurosci. 3, 192–197 (2009)
368. Goriely, A., Kuhl, E., Bick, C.: Neuronal oscillations on evolving networks: dynamics, damage, degradation, decline, dementia, and death. Phys. Rev. Lett. 125, 128102 (2020)
369. Goris, R.L.T., Movshon, J.A., Simoncelli, E.P.: Partitioning neuronal variability. Nat. Neurosci. 17, 858–865 (2014)
370. Gosak, M., Markovič, R., Dolenšek, J., Slak Rupnik, M., Marhl, M., Stožer, A., Perc, M.: Network science of biological systems at different scales: a review. Phys. Life Rev. 24, 118–135 (2018)
371. Gouras, P.: Graded potential of bream retina. J. Physiol. 152, 487–505 (1960)

372. Grabska-Barwińska, A., Latham, P.E.: How well do mean field theories of spiking quadratic integrate-and-fire networks work in realistic parameter regimes? J. Comput. Neurosci. 36, 469–481 (2014)
373. Green, G.: An essay on the application of mathematical analysis to the theories of electricity and magnetism. Printed for the author, by T. Wheelhouse (1828)
374. Grewe, J., Kruscha, A., Lindner, B., Benda, J.: Synchronous spikes are necessary but not sufficient for a synchrony code in populations of spiking neurons. Proc. Natl. Acad. Sci. USA 114, E1977–E1985 (2017)
375. Griffiths, J.D., Bastiaens, S.P., Kaboodvand, N.: Computational Modelling of the Brain: Modelling Approaches to Cells, Circuits and Networks. Advances in Experimental Medicine and Biology, vol. 1359, chap. Whole-brain modelling: past, present, and future, pp. 313–355. Springer (2022)
376. Grindrod, P.: The Theory and Applications of Reaction-Diffusion Equations, 2nd edn. Oxford Applied Mathematics and Computing Science Series. Oxford University Press (1996)
377. Gróbler, T., Barna, G., Erdi, P.: Statistical model of the hippocampal CA3 region I. The single cell module: bursting model of the pyramidal cell. Biol. Cybern. 79, 309–321 (1998)
378. Gu, S., Cieslak, M., Baird, B., Muldoon, S.F., Grafton, S.T., Pasqualetti, F., Bassett, D.S.: The energy landscape of neurophysiological activity implicit in brain network structure. Sci. Rep. 8, 2507 (2018)
379. Guckenheimer, J.: Isochrons and phaseless sets. J. Math. Biol. 1, 259–273 (1975)
380. Guckenheimer, J., Holmes, P.: Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Applied Mathematical Sciences, vol. 42. Springer (1990)
381. Guckenheimer, J., Tien, J.H., Willms, A.R.: Bursting: The Genesis of Rhythm in the Nervous System, chap. Bifurcations in the fast dynamics of neurons: Implications for bursting, pp. 89–122. World Scientific (2005)
382. Guevara, M.R., Shrier, A., Glass, L.: Phase-locked rhythms in periodically stimulated heart cell aggregates. Am. J. Physiol. 254, H1–H10 (1988)
383. Guillamon, A., Huguet, G.: A computational and geometric approach to phase resetting curves and surfaces. SIAM J. Appl. Dyn. Syst. 8, 1005–1042 (2009)
384. Gunawan, R., Doyle, F.J.: Isochron-based phase response analysis of circadian rhythms. Biophys. J. 91, 2131–2141 (2006)
385. Gutkin, B.S., Ermentrout, G.B., Reyes, A.D.: Phase-response curves give the responses of neurons to transient inputs. J. Neurophysiol. 94, 1623–1635 (2005)
386. Gutnick, M.J., Connors, B.W., Prince, D.A.: Mechanisms of neocortical epileptogenesis in vitro. J. Neurophysiol. 48, 1321–1335 (1982)
387. Hadamard, J.: The Mathematician’s Mind: The Psychology of Invention in the Mathematical Field. Princeton University Press (1945)
388. Hagen, E., Næss, S., Ness, T.V., Einevoll, G.T.: Multimodal modeling of neural network activity: computing LFP, ECoG, EEG, and MEG signals with LFPy 2.0. Front. Neuroinformatics 12(92) (2018)
389. Hagmann, P., Cammoun, L., Gigandet, X., Meuli, R., Honey, C.J., Wedeen, V.J.: Mapping the structural core of human cerebral cortex. PLoS Biol. 6, e159 (2008)
390. Hahn, G., Bujan, A.F., Frégnac, Y., Aertsen, A., Kumar, A.: Communication through resonance in spiking neuronal networks. PLoS Comput. Biol. 10, e1003811 (2014)
391. Hahn, G., Skeide, M.A., Mantini, D., Ganzetti, M., Destexhe, A., Friederici, A.D., Deco, G.: A new computational approach to estimate whole-brain effective connectivity from functional and structural MRI, applied to language development. Sci. Rep. 9, 8479 (2019)
392. Haken, H.: Phase locking in the lighthouse model of a neural net with several delay times. Prog. Theor. Phys. 139, 96–111 (2000)
393. Haken, H.: Quasi-discrete dynamics of a neural net: the lighthouse model. Discrete Dyn. Nat. Soc. 4, 187–201 (2000)
394. Haken, H.: Brain Dynamics: Synchronization and Activity Patterns in Pulse-coupled Neural Nets with Delays and Noise. Springer (2002)

395. Hale, J.K.: Ordinary Differential Equations. Dover Books on Mathematics, Courier Corporation (2009)
396. Hamzei-Sichani, F., Kamasawa, N., Janssen, W.G.M., Yasumura, T., Davidson, K.G.V., Hof, P.R., Wearne, S.L., Stewart, M.G., Young, S.R., Whittington, M.A., Rash, J.E., Traub, R.D.: Gap junctions on hippocampal mossy fiber axons demonstrated by thin-section electron microscopy and freeze-fracture replica immunogold labeling. Proc. Natl. Acad. Sci. USA 104, 12548–12553 (2007)
397. Han, F., Caporale, N., Dan, Y.: Reverberation of recent visual experience in spontaneous cortical waves. Neuron 60, 321–327 (2008)
398. Han, S.K., Kurrer, C., Kuramoto, Y.: Dephasing and bursting in coupled neural oscillators. Phys. Rev. Lett. 75, 3190–3193 (1995)
399. Hannay, K.M., Booth, V., Forger, D.B.: Collective phase response curves for heterogeneous coupled oscillators. Phys. Rev. E 92, 022923 (2015)
400. Hannay, K.M., Forger, D.B., Booth, V.: Seasonality and light phase-resetting in the mammalian circadian rhythm. Sci. Rep. 10, 19506 (2020)
401. Hansel, D., Mato, G.: Existence and stability of persistent states in large neuronal networks. Phys. Rev. Lett. 86, 4175–4178 (2001)
402. Hansel, D., Mato, G., Meunier, C.: Synchrony in excitatory neural networks. Neural Comput. 7, 307–337 (1995)
403. Hansel, D., Mato, G., Meunier, C., Neltner, L.: On numerical simulations of integrate-and-fire neural networks. Neural Comput. 10, 467–483 (1998)
404. Hansel, D., Sompolinsky, H.: Synchronization in a chaotic neural network. Phys. Rev. Lett. 68, 718–721 (1992)
405. Harish, O., Hansel, D.: Asynchronous rate chaos in spiking neuronal circuits. PLoS Comput. Biol. 11, e1004266 (2015)
406. Harris, K.M.: Calcium from internal stores modifies dendritic spine shape. Proc. Natl. Acad. Sci. USA 96, 12213–12215 (1999)
407. Harris, J., Ermentrout, B.: Bifurcations in the Wilson-Cowan equations with nonsmooth firing rate. SIAM J. Appl. Dyn. Syst. 14, 43–72 (2015)
408. Harris, K.M., Kater, S.B.: Dendritic spines: cellular specializations imparting both stability and flexibility to synaptic function. Annu. Rev. Neurosci. 17, 341–371 (1994)
409. Hata, M.: Neurons: A Mathematical Ignition. Number Theory and Its Applications 9. World Scientific (2014)
410. Hausman, H.K., O’Shea, A., Kraft, J.N., Boutzoukas, E.M., Evangelista, N.D., Van Etten, E.J., Bharadwaj, P.K., Smith, S.G., Porges, E., Hishaw, G.A., Wu, S., DeKosky, S., Alexander, G.E., Marsiske, M., Cohen, R., Woods, A.J.: The role of resting-state network functional connectivity in cognitive aging. Front. Aging Neurosci. 12, 177 (2020)
411. Hebb, D.O.: The Organization of Behavior: A Neuropsychological Theory. Wiley (1949)
412. Heine, M., Ciuraszkiewicz, A., Voigt, A., Heck, J., Bikbaev, A.: Surface dynamics of voltage-gated ion channels. Channels 10, 267–281 (2016)
413. Hellwig, B.: A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex. Biol. Cybern. 82, 111–121 (2000)
414. Hellyer, P.J., Jachs, B., Clopath, C., Leech, R.: Local inhibitory plasticity tunes macroscopic brain dynamics and allows the emergence of functional brain networks. NeuroImage 124, 85–95 (2016)
415. Hernández-Navarro, L., Faci-Lázaro, S., Orlandi, J.G., Feudel, U., Gómez-Gardeñes, J., Soriano, J.: Noise-driven amplification mechanisms governing the emergence of coherent extreme events in excitable systems. Phys. Rev. Res. 3, 023133 (2021)
416. Herrmann, J.M., Schrobsdorff, H., Geisel, T.: Localized activations in a simple neural field model. Neurocomputing 65–66, 679–684 (2005)
417. Hesse, J., Schleimer, J.H., Maier, N., Schreiber, S.: Temperature elevations can induce switches to homoclinic action potentials that alter neural encoding and synchronization. Nat. Commun. 13, 3934 (2022)

418. Higham, D.J.: An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Rev. 43, 525–546 (2001)
419. Hilgetag, C.C., Goulas, A.: Is the brain really a small-world network? Brain Struct. Funct. 221, 2361–2366 (2016)
420. Hille, B.: Ionic Channels of Excitable Membranes, 3rd edn. Sinauer Associates (2001)
421. Hindmarsh, J.L., Rose, R.M.: A model of neuronal bursting using three coupled first order differential equations. Proc. R. Soc. B 221, 87–102 (1984)
422. Hlinka, J., Coombes, S.: Using computational models to relate structural and functional brain connectivity. Eur. J. Neurosci. 36, 2137–2145 (2012)
423. Hodgkin, A.: Chance and design in electrophysiology: an informal account of certain experiments on nerve carried out between 1934 and 1952. J. Physiol. 263, 1–21 (1976)
424. Hodgkin, A.L.: The local electric changes associated with repetitive action in a non-medullated axon. J. Physiol. 107, 165–181 (1948)
425. Hodgkin, A.L., Huxley, A.F.: A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500–544 (1952)
426. Holcman, D., Tsodyks, M.: The emergence of up and down states in cortical networks. PLoS Comput. Biol. 2, 0174–0181 (2006)
427. Holden, A.V.: The response of excitable membrane models to a cyclic input. Biol. Cybern. 21, 1–7 (1976)
428. Holgado, A.J.N., Terry, J.R., Bogacz, R.: Conditions for the generation of beta oscillations in the subthalamic nucleus–globus pallidus network. J. Neurosci. 30, 12340–12352 (2010)
429. Holme, P., Saramäki, J.: Temporal Networks. Springer, Berlin (2013)
430. Holt, G.R., Koch, C.: Electrical interactions via the extracellular potential near cell bodies. J. Comput. Neurosci. 6, 169–184 (1999)
431. Holzhausen, K., Ramlow, L., Pu, S., Thomas, P.J., Lindner, B.: Mean-return-time phase of a stochastic oscillator provides an approximate renewal description for the associated point process. Biol. Cybern. 116, 235–251 (2022)
432. Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79, 2554–2558 (1982)
433. Hoppensteadt, F.C., Izhikevich, E.M.: Weakly Connected Neural Networks. Springer (1997)
434. Horikawa, Y.: A spike train with a step change in the interspike intervals in the FitzHugh–Nagumo model. Physica D 82, 365–370 (1995)
435. Hormuzdi, S.G., Filippov, M.A., Mitropoulou, G., Monyer, H., Bruzzone, R.: Electrical synapses: a dynamic signaling system that shapes the activity of neuronal networks. Biochimica et Biophysica Acta 1662, 113–137 (2004)
436. Hoyle, R.: Pattern Formation: An Introduction to Methods. Cambridge University Press (2006)
437. Huang, H., Ding, M.: Linking functional connectivity and structural connectivity quantitatively: a comparison of methods. Brain Connectivity 6, 99–108 (2016)
438. Huang, X., Troy, W.C., Yang, Q., Ma, H., Laing, C.R., Schiff, S.J., Wu, J.Y.: Spiral waves in disinhibited mammalian neocortex. J. Neurosci. 24, 9897–9902 (2004)
439. Hughes, S.W., Cope, D.W., Blethyn, K.L., Crunelli, V.: Cellular mechanisms of the slow (