Modeling Biomolecular Networks in Cells: Structures and Dynamics 1849962138, 9781849962131

Modeling Biomolecular Networks in Cells shows how the interactions between the molecular components of basic living organisms …


English · 355 pages · 2010


Table of contents:
Preface
Contents
1 Introduction
1.1 Biological Processes and Networks in Cellular Systems
1.1.1 Gene Regulation: Gene Regulatory Networks
1.1.2 Signal Transduction: Signal Transduction Networks
1.1.3 Protein Interactions: Protein Interaction Networks
1.1.4 Metabolism: Metabolic Networks
1.1.5 Cell Cycles and Cellular Rhythms: Nonlinear Network Dynamics
1.2 A Primer to Networks
1.2.1 Basic Concepts of Networks
1.2.2 Topological Properties of Networks
1.3 A Primer to Dynamics
1.3.1 Dynamics and Collective Behavior
1.3.2 System States
1.3.3 Structures and Functions
1.3.4 Cellular Noise
1.3.5 Time Delays
1.3.6 Multiple Time Scales
1.3.7 Robustness and Sensitivity
1.4 Network Systems Biology and Synthetic Systems Biology
1.5 Outline of the Book
2 Dynamical Representations of Molecular Networks
2.1 Biochemical Reactions
2.2 Molecular Networks
2.3 Graphical Representation
2.3.1 Example of Interaction Graphs
2.3.2 Example of Incidence Graphs
2.3.3 Example of Species-reaction Graphs
2.4 Biochemical Kinetics
2.5 Stochastic Representation
2.5.1 Master Equations for a General Molecular Network
2.5.2 Stochastic Simulation
2.5.3 Analysis of Sensitivity and Robustness of Master Equations
2.5.4 Langevin Equations
2.5.5 Fokker–Planck Equations
2.5.6 Cumulant Equations
2.6 Deterministic Representation
2.6.1 Basic Kinetics
2.6.2 Deterministic Representation of a General Molecular System
2.6.3 Michaelis–Menten and Hill Equations
2.6.4 Total Quasi-steady-state Approximation
2.6.5 Deriving Rate Equations
2.6.6 Modeling Transcription and Translation Processes
2.7 Hybrid Representation and Reducing Molecular Networks
2.7.1 Decomposition of Biomolecular Networks
2.7.2 Approximation of Continuous Variables in Molecular Networks
2.7.3 Gaussian Approximation in Molecular Networks
2.7.4 Deterministic Approximation in Molecular Networks
2.7.5 Prefactor Approximation of Deterministic Representation
2.7.6 Stochastic Simulation of Hybrid Systems
2.8 Stochastic versus Deterministic Representation
3 Deterministic Structures of Biomolecular Networks
3.1 A General Structure of Molecular Networks
3.1.1 Basic Definitions
3.1.2 A General Structure for Gene Regulatory Networks
3.2 Gene Regulatory Networks with Cell Cycles
3.2.1 Gene Regulatory Networks for Eukaryotes
3.2.2 Gene Regulatory Networks for Prokaryotes
3.3 Interaction Graphs and Logic Gates
3.3.1 Interaction Graphs and Types of Interactions
3.3.2 Logic Gates
4 Qualitative Analysis of Deterministic Dynamical Networks
4.1 Stability Analysis
4.2 Bifurcation Analysis
4.3 Examples for Analyzing Stability and Bifurcations
4.3.1 A Simplified Gene Network
4.3.2 A Two-gene Network
4.3.3 A Three-gene Network
4.4 Robustness and Sensitivity Analysis
4.4.1 Robustness Measures
4.4.2 Sensitivity Analysis
4.5 Control Analysis
4.5.1 Control Coefficients of Metabolic Systems
4.5.2 Metabolic Control Theorems
4.6 Monotone Dynamical Systems
4.6.1 Notation
4.6.2 Decomposition of Monotone Systems
5 Stability Analysis of Genetic Networks in Lur’e Form
5.1 A Genetic Network Model
5.2 Stability Analysis of Genetic Networks Without Noise
5.3 Stochastic Stability of Gene Regulatory Networks
5.3.1 Mean-square Stability
5.3.2 Stochastic Stability with Disturbance Attenuation
5.4 Examples
6 Design of Synthetic Switching Networks
6.1 Types of Switches
6.2 Simple Switching Networks
6.2.1 Bistability in a Single Gene Network
6.2.2 The Toggle Switch
6.2.3 The MAPK Cascade Model
6.3 Design of Switching Networks with Positive Loops
6.4 Detection of Multistability
6.5 Enzyme-driven Switching Networks
7 Design of Synthetic Oscillating Networks
7.1 Simple Oscillatory Networks
7.1.1 Delayed Autoinhibition Networks
7.1.2 Goldbeter’s Models
7.1.3 Relaxation Oscillators
7.1.4 Stochastic Oscillators
7.2 Design of Oscillating Networks with Negative Loops
7.2.1 Theoretical Model of Cyclic Feedback Networks
7.2.2 A Special Cyclic Feedback Network
7.2.3 A General Cyclic Feedback Network
7.3 Construction of Oscillators by Non-monotone Dynamical Systems
7.4 Design of Molecular Oscillators with Hybrid Networks: General Formalism
8 Multicellular Networks and Synchronization
8.1 A General Multicellular Network for Deterministic Models
8.2 Deterministic Synchronization of Cellular Oscillators
8.2.1 Complete Synchronization
8.2.2 Other Types of Synchronization
8.3 Spontaneous Synchronization of Deterministic Models
8.4 Entrained Synchronization for Deterministic Models
8.5 Noise-driven Synchronization for Stochastic Models Without Coupling
8.6 A General Multicellular Network for Stochastic Models with Coupling
8.6.1 A Model
8.6.2 Example of a Gene Regulatory Network
8.6.3 Theoretical Analysis
8.6.4 Algorithm for Stochastic Simulation
8.6.5 Numerical Simulation
8.7 Deterministic Synchronization of Genetic Networks in Lur’e Form
8.8 Stochastic Synchronization of Genetic Networks in Lur’e Form
8.9 Transient Resetting for Synchronization Without Coupling
References
Index



Luonan Chen · Ruiqi Wang · Chunguang Li Kazuyuki Aihara

Modeling Biomolecular Networks in Cells Structures and Dynamics


Prof. Luonan Chen, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, 200233 Shanghai, China

Prof. Ruiqi Wang, Institute of Systems Biology, Shanghai University, 200444 Shanghai, China

Prof. Chunguang Li, Department of Information Science and Electronic Engineering, Zhejiang University, 310027 Hangzhou, China

Prof. Kazuyuki Aihara, University of Tokyo, 153-8505 Tokyo, Japan

ISBN 978-1-84996-213-1
e-ISBN 978-1-84996-214-8
DOI 10.1007/978-1-84996-214-8
Springer London Dordrecht Heidelberg New York

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Control Number: 2010930911

© Springer-Verlag London Limited 2010

MATLAB® and Simulink® are registered trademarks of The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098, U.S.A. http://www.mathworks.com

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.

The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use. The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

Cover design: eStudioCalamar, Figueres/Berlin
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)

This book is dedicated to our colleagues and our families

Preface

One of the major challenges of post-genomic biology is to understand how genes, proteins, and small molecules interact to form cellular systems. It is now recognized that a complicated living organism cannot be understood by merely analyzing individual components: the interactions between these components, i.e., biomolecular networks, are, through their structures and dynamics, ultimately responsible for an organism's form and functions. To elucidate the essential principles and fundamental mechanisms of cellular systems, the study of the structures and dynamics of biomolecular networks in cells has attracted increasing attention from the biology, mathematics, and engineering communities. In particular, many complicated but interesting phenomena, such as switching behavior and the collective rhythms that are ubiquitous in living organisms, arise from biomolecular networks through their specific structures and nonlinear dynamics. This book covers the modeling of biomolecular networks and the analysis of their nonlinear dynamics in a comprehensive manner, stressing the viewpoints of systems science and engineering. Attention is focused on deriving general theoretical results and revealing the essential principles of biological systems on the basis of nonlinear dynamical and control theories.
In particular, we describe how to model a general molecular network in a single cell; how to construct a molecular network with specific functions or structures, such as gene switching networks and gene oscillating networks, in individual cells at the molecular level; how to model a general multicellular system taking into account external fluctuations and intercellular coupling by signal molecules; how to design a synthetic molecular network from the viewpoint of forward engineering; and how to analyze and further control the nonlinear phenomena of living organisms at the molecular level, such as switching behavior, cooperative dynamics, and synchronization of biological oscillators in multicellular systems. This book is intended for upper-level undergraduate students, graduate students, and researchers with a computational or theoretical background in mathematics, engineering, computer science, or biology, in both academia and industry, e.g., in the fields of systems biology, bioinformatics, and synthetic biology. The book assumes little knowledge of molecular biology, with each chapter covering the necessary material. Biologists will find the book useful if they have a strong computational background or training in systems biology or computational biology. Readers are assumed to have undergraduate-level backgrounds in mathematics, engineering, and basic biology. This book introduces readers to central challenges in the life sciences, from the understanding of individual molecules to system-level analysis, and from static interactions between molecules to dynamical networks, in the hope that readers will build on these ideas to make new discoveries of their own. Designing and constructing synthetic molecular networks from the perspectives of both synthetic biology and engineering is also described. Unlike traditional books on systems biology and bioinformatics, this book aims to show engineers and biologists the essentials of biomolecular networks, with emphasis on structures and dynamics, by presenting cutting-edge research topics and methodologies that will be vital for their future careers.

The contents of this book are based mainly on collaborative studies and discussions with many researchers. Collectively and individually, we express our gratitude to these people for their collaboration. In particular, the authors thank Yong Wang, Xing-Ming Zhao, Rui-Sheng Wang, Tetsuya J. Kobayashi, Zhi-Ping Liu, Tianshou Zhou, Zhujun Jing, Mario di Bernardo, Masaaki Takada, Rui Liu, and Shinji Hara for their cooperation and valuable comments in bringing this book to completion.
The studies forming the basis of this book were partially supported by the ERATO Aihara Complexity Modeling Project, Japan Science and Technology Agency, Japan; FIRST, Aihara Innovative Mathematical Modeling Project, JSPS; Grant-in-Aid for Scientific Research on Priority Areas 17022012 from MEXT of Japan; the Chief Scientist Program of Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, under Grant 2009CSP002; the National Natural Science Foundation of China under Youth Research Grants 10701052, 10832006, and 60502009; the Shanghai Pujiang Program; the Distinguished Youth Foundation of Sichuan Province under Grant 07ZQ026-019; the Program for New Century Excellent Talents in University under Grant NCET-05-0801; and the JSPS-NSFC Collaboration Project.

Shanghai, Shanghai, Hangzhou, Tokyo, June 2010

Luonan Chen Ruiqi Wang Chunguang Li Kazuyuki Aihara


1 Introduction

Modern molecular biology has led to remarkable progress in understanding individual cellular components. One of the next major challenges is to elucidate, at the system level, the biological networks formed by the components that reductionist molecular biology has revealed. Because the behavior of living organisms can seldom be attributed to individual components, these components must be assembled into networks in order to understand complex biological processes. Many scientists are therefore increasingly interested in a system-level understanding of living organisms, especially the topological structures, system dynamics, and biological functions of the various molecular and cellular networks within them. Biological functions are carried out through the concurrent interactions and robust regulation of thousands of cellular components such as genes, ribonucleic acids (RNAs), proteins, and metabolites. Because of the large number of components involved in a molecular network, it is almost impossible to understand intuitively how the network executes its various complex cellular functions. Understanding complex biological systems requires the integration of experimental and theoretical research. Mathematical modeling is therefore a prerequisite for revealing the biological implications of molecular networks, including gene regulatory networks, transcription regulatory networks, RNA interaction networks, protein-RNA interaction networks, protein interaction networks, metabolic networks, and signal transduction networks. With an appropriate model, qualitative or quantitative analysis can provide deep insight into the essential mechanisms of various biological functions and processes, which is crucial for experimentally verifiable predictions and for the further advancement of biological science.
Both the structures and the dynamics underlie the functionality of molecular networks, from transcriptional regulation to cell signaling. Network structures can be inferred from the topological features of the networks. For example, network modules can be identified on the basis of topological distances, and network motifs can be detected by their recurrent topological patterns. Different dynamics may correspond to different functions of a specified molecular network. For example, periodic oscillations in nonlinear dynamics correspond to various biological rhythmic phenomena with periods ranging from seconds to years, while multistability corresponds to the capacity of cellular systems to reach multiple distinct stable steady states in response to a set of external stimuli. Moreover, relationships may exist between network structures and system dynamics; for example, network topology sometimes determines network dynamics (Muller et al. 2008), and analysis of network dynamics may in turn reveal topological changes (Luscombe et al. 2004). Innovations in theoretical and computational methods may therefore significantly advance our understanding of the functionality of molecular networks.

A cellular system comprises signaling, metabolic, and regulatory processes in a hierarchical structure. Indeed, a living organism can be viewed as a huge biochemical reaction network, with each chemical species (e.g., an RNA, protein, or metabolite) as a node and each reaction as an edge, constantly affected by both internal and external stochastic fluctuations. Thousands of components interact with each other and participate in these complicated nonlinear processes. From the theoretical viewpoint, besides significant time delays and specific diffusion processes, there are three major difficulties in modeling such a system: (1) nonlinearity, (2) large scale, and (3) stochasticity. It is therefore necessary to model a biological system both by exploiting its special properties and by developing dedicated theories that make such a molecular network tractable for theoretical analysis.
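As a concrete illustration of how nonlinearity produces multistability, the following minimal sketch (not taken from the book; the model form and all parameter values are illustrative) considers a single self-activating gene, dx/dt = b + v x^2/(K + x^2) - x, and locates its steady states numerically. With the chosen parameters the system is bistable: two stable steady states separated by one unstable state.

```python
def f(x):
    # dx/dt for a self-activating gene: basal synthesis (b = 0.2) plus
    # Hill-type positive feedback (n = 2, v = 4, K = 4) minus
    # first-order degradation.  All values are illustrative.
    return 0.2 + 4.0 * x**2 / (4.0 + x**2) - x

def steady_states(lo=0.0, hi=6.0, steps=600):
    """Bracket the zeros of f on a uniform grid, then refine by bisection."""
    h = (hi - lo) / steps
    roots = []
    for i in range(steps):
        a, b = lo + i * h, lo + (i + 1) * h
        fa, fb = f(a), f(b)
        if fa == 0.0:                # grid point happens to be a root
            roots.append(a)
        elif fa * fb < 0.0:          # sign change brackets a root
            for _ in range(60):      # bisection refinement
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

roots = steady_states()
# A steady state is stable iff the slope of f there is negative.
stable = [x for x in roots if f(x + 1e-5) < f(x - 1e-5)]
# Three steady states; the middle one (x = 1) is unstable, so a single
# nonlinear positive-feedback loop already yields bistability.
```

Such root-finding plus linear stability classification is exactly the kind of qualitative analysis developed systematically in Chapter 4.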
In this book, we present and further develop various mathematical theories for modeling and analyzing a variety of molecular networks, including stochastic processes (e.g., master equations), nonlinear dynamics (e.g., monotone dynamical systems), and control theory (e.g., Lur’e systems and linear matrix inequalities (LMIs)), chosen for their universality and their ability to capture cellular networks comprising many dynamically interacting components. In particular, we aim to provide a general framework for modeling and analyzing dynamical networks at the molecular level from the viewpoints of systems science and engineering. Three graph representations, i.e., the interaction graph, the incidence graph, and the species-reaction graph, are also adopted to model biomolecular network structures efficiently. In this chapter, we provide a brief introduction to the essential regulatory processes of cellular systems and to basic mathematical concepts in networks and modeling. For more detailed and systematic treatments of these areas, readers may refer to specialized texts.
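The stochastic representation via master equations can be previewed with a minimal example (a sketch, not code from the book): an exact Gillespie simulation of the simplest birth-death model of gene expression, whose chemical master equation has a Poisson stationary distribution with mean k/d. The rate constants and horizon below are illustrative.

```python
import math
import random

def gillespie_birth_death(k=10.0, d=1.0, t_end=500.0, seed=1):
    """Exact stochastic simulation (Gillespie's direct method) of the
    birth-death scheme:  0 --k--> mRNA,  mRNA --d*n--> 0."""
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0       # area accumulates the time-weighted copy number
    while t < t_end:
        a1, a2 = k, d * n          # reaction propensities
        a0 = a1 + a2
        tau = -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        area += n * min(tau, t_end - t)
        t += tau
        if rng.random() * a0 < a1:
            n += 1                 # synthesis event
        else:
            n -= 1                 # degradation event
    return area / t_end            # long-run time-average copy number

mean_n = gillespie_birth_death()
# The master equation predicts a Poisson stationary distribution with
# mean k/d = 10, so the long-run time average should lie close to 10.
```

This direct method, and its hybrid and approximate variants, are treated in detail in Sections 2.5 and 2.7.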

1.1 Biological Processes and Networks in Cellular Systems

A living organism can be regarded as a huge nonlinear biochemical reaction system represented by the interactions of biomolecules including genes, RNAs, proteins, and metabolites, which form various types of biomolecular networks (Chen et al. 2009). These complex networks are indispensable to cellular systems and play fundamental and essential roles in giving rise to life and maintaining homeostasis in living organisms. In particular, biological processes are governed by complex networks ranging from gene regulation to signal transduction, and these processes must be modeled at the molecular level to accurately reflect their essential properties. In this section, we provide a brief review of the best-understood of these processes. Note that this is a very general and brief introduction intended mainly for mathematicians and computer scientists who are not familiar with molecular biology; biology-oriented researchers can skip the details of this section.

1.1.1 Gene Regulation: Gene Regulatory Networks

Genes are the fundamental units of biology. Genes encode proteins, which carry out the various functions required for the maintenance of life. Gene regulation is a complex process that begins with the deoxyribonucleic acid (DNA) sequence of a given gene. The path from DNA through many intermediates to functional proteins involves transcription, translation, transport, degradation, biochemical modification, and many other mechanisms. The central dogma of molecular biology, i.e., DNA encodes RNA, which in turn encodes protein molecules, as shown in Figure 1.1, provides a framework for understanding the flow of sequence information from DNA via RNAs to proteins. As indicated in Figure 1.1, the central dogma involves three main processes: transcription, translation, and replication.

Figure 1.1 The central dogma of molecular biology (replication: DNA to DNA; transcription: DNA to RNA; translation: RNA to protein; both RNA and protein are subject to degradation)

Transcription of a gene is the process by which RNA polymerase (RNAP) produces the messenger RNA (mRNA) corresponding to the gene's coding sequence, as shown in Figure 1.2. A gene consists of a coding region and a regulatory region. The coding region is the part of the gene that encodes a certain protein. The regulatory region is the part of the DNA that contributes to the regulation of the gene; in particular, it contains binding sites for proteins known as transcription factors (TFs). The binding sites are also called the promoter of a particular gene, i.e., the regulatory region preceding the gene. In eukaryotes, every gene has its own promoter, whereas in prokaryotes a group of genes, called an operon, can be transcribed as a single mRNA molecule and hence regulated by a single promoter. TFs act by binding to the DNA, directly or in complex with other TFs or cofactors, to regulate the rate at which a specific target gene is read. When bound to the DNA, the TFs change the probability per unit time that an RNAP binds the promoter and thus affect the rate at which the RNAP initiates transcription. Once bound to the DNA, the TFs or their complexes recruit or allow RNAP to bind to a specific site at the promoter. RNAP forms a transcriptional complex which separates the two strands of the DNA in a step-wise manner and transcribes the coding region into mRNA. TFs can act as activators or repressors, depending on whether they increase or decrease the transcriptional rate. Transcription regulation is one of the most important and essential processes underlying gene activity, and it is generally nonlinear with large stochastic fluctuations. Clearly, regulation of the transcriptional process is facilitated mainly by pairwise interactions between TFs and DNA; therefore, a transcription regulatory network is mainly formed by TF-DNA interactions. Besides TFs, there are many other proteins, known as transcriptional cofactors or co-regulators, which do not bind to the DNA themselves but bind to TFs to regulate transcription by linking TFs and RNAP. In addition, many epigenetic factors, such as DNA methylation and histone modification, also affect the transcriptional process. For both prokaryotes and eukaryotes, translation occurs in the cytosol. Between transcription and translation, in eukaryotes, the mRNA must be translocated from the nucleus to the cytosol, where it binds to ribosomes.
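The codon-to-amino-acid step of translation, described next, amounts to a table lookup by the ribosome. A minimal sketch (only a small, illustrative subset of the 64-entry standard genetic code is shown):

```python
# A small subset of the standard genetic code (mRNA codons);
# the full table has 64 entries.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UUC": "Phe",
    "GGU": "Gly", "GCU": "Ala", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read an mRNA coding sequence three bases at a time,
    stopping at the first stop codon, as a ribosome would."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

# translate("AUGUUUGGUUAA") → ["Met", "Phe", "Gly"]
```

The lookup itself is deterministic; the stochasticity of gene expression discussed later in this chapter arises from the timing and copy numbers of the molecules involved, not from the code.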
During translation, a ribosome moves along the mRNA, three bases at a time, and each three-base combination, or codon, is translated into one of the 20 amino acids. The function of the ribosome is thus to convert the one-dimensional sequence of the mRNA into a one-dimensional sequence of amino acids, which folds into a three-dimensional protein structure, thereby enabling different kinds of functions.

The replication of DNA is the basis for biological inheritance and is a fundamental process occurring in all living organisms to copy their DNA. During replication, each strand of the original double-stranded DNA molecule serves as a template for reproduction of the complementary strand. Hence, following DNA replication, two identical DNA molecules are produced from a single double-stranded DNA molecule. Cellular proofreading and error-checking mechanisms ensure nearly perfect fidelity for DNA replication.

Degradation of mRNA and proteins constantly occurs via the cellular machinery. Specifically, mRNA is degraded by a ribonuclease which competes with ribosomes to bind to mRNA: if a ribosome binds, the mRNA is translated; otherwise, it is degraded. Proteins, on the other hand, are degraded by cellular machineries such as proteasomes activated by ubiquitin tagging; this process is regulated by specific enzymes.

Figure 1.2 Processes involved in the transcription regulation of a gene: (a) binding sites of TFs; (b) formation of a transcriptional complex; (c) RNAP binding; and (d) transcription initiation by RNAP

In addition to the above processes, many other processes also play important roles in gene regulation. In prokaryotes, the coding region is contiguous, while in eukaryotes it is typically split into several parts. Each of these parts is called an exon, and the parts between the exons are called introns. In eukaryotes, the introns need to be removed, i.e., spliced out. In some cases, alternative splicing occurs, which allows a cell to edit an mRNA molecule in different ways and thus produce many different proteins from the same gene. Other processes, such as diffusion, cell growth, microRNA (or non-coding RNA) regulation (Barrandon et al. 2008, Keene 2007), and electrical properties, may also affect gene regulation. For instance, it has been found that microRNAs may play critical roles in the regulation of gene expression, e.g., in cell fate decisions (Johnston et al. 2005), circadian rhythms (Nandi et al. 2009), cancer network regulation (Aguda et al. 2008), and robustness (Tsang et al. 2007). A quantitative comparison of non-coding-RNA-based and protein-based gene regulation and a review of the quantitative roles of non-coding RNAs can be found in (Mehta et al. 2008) and (Shimoni et al. 2007), respectively. In addition to genetic factors (e.g., single nucleotide polymorphisms (SNPs) or TFs), it has recently been found that epigenetic factors, such as DNA/histone methylation, histone acetylation, post-transcriptional gene silencing in
plants, genomic imprinting in mammals, and double-stranded RNA (dsRNA) or RNA interference (RNAi), also play important roles in the regulation of genes (Steensel 2005). Therefore, a real gene regulatory network should include both genetic and epigenetic regulatory factors in order to build a realistic model for understanding the essential mechanisms of cellular systems.

As mentioned above, a TF may act as an activator or repressor to influence the expression of other genes. The TF in turn is itself a gene product. Such regulatory mechanisms form complex networks of transcriptional and regulatory interactions. For example, gene A can activate genes B and C but repress gene D, which in turn activates A, and so on. It is believed that the complex behavior of living cells is created not only by the properties of individual components but also by the manner in which they are connected. In general, all processes in Figure 1.1 are nonlinear and result from a variety of pairwise interactions (e.g., TF–DNA, microRNA–mRNA, RNA–RNA, RNA–protein, and protein–protein) with large stochastic fluctuations. Therefore, a living organism can be considered to be formed by a huge nonlinear biochemical reaction network or molecular network, which generates the complicated phenomena of life.

1.1.2 Signal Transduction: Signal Transduction Networks

Cells do not live in isolation. They sense, transmit, and process signals originating in other cells and their environment. As a result, a wealth of sensor systems allows cells to monitor their external and internal states. The sensing of external stimuli at the cell membrane demands the transfer of a signal to the place of action, i.e., signal transduction. Typical signals are hormones, pheromones, heat, cold, light, osmotic pressure, and the appearance or concentration change of substances such as glucose, K+, Ca2+, or cAMP.
Such a process is carried out via receptors, or membrane proteins, which serve as the interface between cells and their external environment and can bind specific ligands. The monitoring of external conditions is important for securing survival and for communicating with other cells. Correct signal processing is necessary to ensure the optimal response to the external and internal states and to facilitate the triggering of various biological responses.

Receptors can be roughly divided into two major classes: intracellular receptors and cell-surface receptors. Accordingly, cells have developed two modes of importing a signal. First, the stimulus may penetrate the cell membrane and bind to a respective receptor in the cell interior. Another possibility is that the signal is perceived by a transmembrane receptor. Once the extracellular signaling molecules, e.g., the ligands, bind to the receptors, arrays of intracellular proteins form signal transduction pathways which transmit the signal from the extracellular compartment to the nucleus by a cascade of biochemical reactions, thereby triggering various biological responses, including the transcription of genes, protein–protein interactions, and metabolism, as shown in Figure 1.3.


Figure 1.3 Simplified scheme for general signal transduction (from (Saez-Rodriguez et al. 2004))

The external signal is sensed at a specific point and is propagated to modulate the activities of other components or processes. The target may be either enzymes or DNA. In particular, enzymes can be modified, for example by phosphorylation, so that their catalytic activities are increased or decreased in response to the extracellular signal. For DNA, on the other hand, the signal transduction process targets TFs, the proteins regulating gene expression. In the simplest case, the signaling system consists of two components: a sensor that detects environmental changes and a regulatory element that influences the transcription of selected genes. In addition to the sensing and transduction of a signal, the term “signal transduction” traditionally includes the processing of signals.

Information in cellular signaling processes is generally transferred by modifications of proteins that lead to changes in their activities; for example, phosphorylation produces a conformational change in a protein and alters its activity. The activation and inactivation of a protein is the result of signal transduction processes. In addition, the interactions between proteins are also important for signal transduction: signals from the exterior of a cell are mediated to its inside by a series of protein–protein interactions (PPIs) of the signaling molecules, which play a fundamental role in many biological processes and in many diseases.
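As a minimal quantitative illustration of this idea, consider a push-pull phosphorylation cycle in which a kinase, driven by a signal S, phosphorylates a target protein and a phosphatase removes the phosphate. The rate law and parameter values below are illustrative assumptions, not taken from this book:

```python
def phospho_steady_state(signal, x_total=1.0, k_kin=1.0, k_phos=0.5):
    """Steady-state phosphorylated fraction of a push-pull cycle.

    Mass-action model (illustrative parameters):
        dXp/dt = k_kin * S * (Xtot - Xp) - k_phos * Xp
    Setting dXp/dt = 0 gives
        Xp* = Xtot * k_kin * S / (k_kin * S + k_phos)
    so the active fraction rises hyperbolically with the signal strength S.
    """
    return x_total * k_kin * signal / (k_kin * signal + k_phos)
```

With no signal the target is fully dephosphorylated (`phospho_steady_state(0.0)` is 0), and as the signal grows, the response saturates toward `x_total`, the total amount of the target protein.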


Many signaling pathways have been extensively investigated, e.g., the general mechanism of G protein signaling pathways, mitogen-activated protein kinase (MAPK) cascades, and Jak-Stat pathways. The dynamics of signaling pathways, such as the propagation of noise and stochastic fluctuations, the role of various feedbacks, the origin of multistability and oscillations, and the consequences of multiple phosphorylations, have been the subject of active research. Unlike the homogeneous components in a protein interaction network, a signal transduction network is a heterogeneous one, comprising not only proteins but also metabolites and small molecules.

1.1.3 Protein Interactions: Protein Interaction Networks

A typical mechanism used to transfer information or signals in a cell is physical interaction between proteins. Indeed, PPIs are of central importance for virtually every process in a living cell and can be viewed as an essential part of the signal transduction process. Many proteins involved in signaling processes contain amino acid sequences, known as domains, which can bind to the domains of other proteins, an interaction known as a domain–domain interaction (DDI), leading to the association of molecules. Figure 1.4 shows schematically an example of a protein interaction due to the interactions of two domain pairs. Hence, one of the major molecular networks, the protein interaction network or PPI network, is formed by such PPIs. Analyzing and further unraveling PPI networks and interactions will not only facilitate a better understanding of complex cellular processes, but also enable inferences regarding the functions of proteins. Biochemically, PPIs involve not only the direct-contact association of protein molecules but also long-range interactions through the electrolyte, the aqueous solution medium surrounding neighboring hydrated proteins, over distances from less than one nanometer to several tens of nanometers.
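The domain-based view of Figure 1.4 suggests a simple computational rule: predict that two proteins interact if they carry at least one pair of interacting domains. The sketch below reuses the names from Figure 1.4 and adds a hypothetical Protein3; the domain assignments and DDI list are purely illustrative:

```python
# Hypothetical domain compositions (Protein1 and Protein2 follow Figure 1.4;
# Protein3 is invented) and a list of known domain-domain interactions (DDIs).
protein_domains = {
    "Protein1": {"D1", "D2"},
    "Protein2": {"D3", "D4", "D5"},
    "Protein3": {"D5"},
}
ddis = {frozenset({"D1", "D3"}), frozenset({"D2", "D4"})}

def may_interact(p, q):
    """Predict a PPI if any domain pair across the two proteins is a known DDI."""
    return any(frozenset({a, b}) in ddis
               for a in protein_domains[p] for b in protein_domains[q])
```

Here `may_interact("Protein1", "Protein2")` is True because of the D1-D3 pair, whereas `may_interact("Protein1", "Protein3")` is False, since D5 has no known partner among Protein1's domains.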
With respect to the types of protein interactions, proteins may interact over a longer duration, integrating into a protein complex (e.g., ribosomes and polymerases), which is called a permanent interaction; or they may interact briefly with other proteins just to modify them, forming a functional module, which is called a transient interaction (e.g., a protein kinase adding a phosphate to a target protein, or ligand–receptor interactions). Such modification of proteins can itself change PPIs. In addition, a protein may also interact with another protein for transportation, e.g., from the cytoplasm to the nucleus or vice versa. Information about these interactions improves our understanding of diseases and can provide the basis for developing new therapeutic approaches.

Figure 1.4 Domain-based protein interactions. There are two domains D1 and D2 for Protein1, and three domains D3, D4, and D5 for Protein2. Domain pairs D1-D3 and D2-D4 interact, which facilitates the interaction between Protein1 and Protein2

1.1.4 Metabolism: Metabolic Networks

All the processes that occur within a living cell are ultimately driven by energy. Green plants and some bacteria obtain energy directly from sunlight. Other
organisms utilize compounds made using sunlight and break them down to release energy through a process called catabolism, as shown in Figure 1.5. The most common method of breaking down these food compounds is to oxidize them, that is, to burn them, but in a well-controlled way. The energy trapped in energy currencies can then be used for the regeneration, repair, and homeostatic processes termed anabolism. Metabolism is the general term for these two kinds of reactions: catabolism and anabolism. Catabolism is the set of metabolic pathways that break down molecules into smaller units, releasing energy, while anabolism is the set of metabolic pathways that construct molecules from smaller units, generally consuming energy. Metabolism is a highly organized process and generally involves thousands of reactions that are catalyzed by enzymes.

The major components in a metabolic network are enzymes and substrates. Therefore, a metabolic network can be considered a result of enzyme–substrate interactions, and is determined by the set of catalyzing enzymes, the possible metabolic fluxes, and the intrinsic modes of regulation.

Figure 1.5 Schematic representation of metabolism: catabolic reactions transfer energy from complex molecules and polymers (such as glycogen, proteins, and triglycerides) to ATP, whereas anabolic reactions transfer energy from ATP to build complex molecules from simple molecules and monomers (such as glucose, amino acids, glycerol, and fatty acids); heat is released in both directions

When
modeling a metabolic system, the concentrations of the molecules and their rates of change are of special interest: e.g., enzyme kinetics, which is used to investigate the dynamical properties of metabolic networks, and metabolic control analysis (MCA), which quantifies the effect of perturbations in the network. On the other hand, the network feature of metabolism is studied with stoichiometric analysis, which considers the balance of compound production and degradation, e.g., flux balance analysis (FBA) or MCA.

Metabolic networks are one of many types of molecular networks, which comprise a variety of networks at different levels, e.g., interlocked genetic regulatory networks, transcription regulatory networks, protein interaction networks, signal transduction networks, and metabolic networks, although these have been introduced independently. For example, a signaling network is triggered by the presence of extracellular stimuli and often results in the activation of TFs, which function in gene regulatory networks by regulating the transcription of associated genes and the synthesis of various proteins used in protein interaction networks and metabolic networks. Metabolism is responsible for the production of the energy needed for all biological processes. The interconnection of these different kinds of networks yields highly integrated intracellular processes. Note that besides molecular networks, there are many other types of networks widely studied in computational biology, such as disease networks, functional linkage networks, and structural similarity networks (Chen et al. 2009).
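The stoichiometric analysis just mentioned rests on the steady-state mass-balance condition S·v = 0, where S is the stoichiometric matrix and v the vector of reaction fluxes. The toy pathway below (an uptake reaction, a conversion, and an export reaction) is a made-up illustration of this constraint, not an example from the book:

```python
# Stoichiometric matrix S for a toy linear pathway with two internal
# metabolites (rows: A, B) and three reactions (columns):
#   v1: -> A      (uptake)
#   v2: A -> B    (conversion)
#   v3: B ->      (export)
S = [
    [1, -1,  0],   # net production of A by each reaction
    [0,  1, -1],   # net production of B by each reaction
]

def is_steady_state(S, v, tol=1e-9):
    """Check the flux-balance condition S @ v = 0, i.e., for every internal
    metabolite the total production rate equals the total consumption rate."""
    return all(
        abs(sum(row[j] * v[j] for j in range(len(v)))) < tol
        for row in S
    )

# Equal fluxes along the chain satisfy the balance; unequal ones do not:
# is_steady_state(S, [2.0, 2.0, 2.0]) -> True
# is_steady_state(S, [2.0, 1.0, 2.0]) -> False
```

FBA proper then optimizes some objective (e.g., biomass flux) over all v satisfying this balance together with capacity bounds; the balance check above is only the core feasibility condition.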


1.1.5 Cell Cycles and Cellular Rhythms: Nonlinear Network Dynamics

The cell cycle or cell-division cycle is the series of events that allows the division or duplication of cells. Such a phenomenon results from dynamical interactions among the related biomolecules, i.e., the corresponding molecular network, which can be represented as a nonlinear dynamical system. A typical cycle for both eukaryotic and prokaryotic cells is growth and division. Growth implies the formation of new molecules in a cell and the associated increase in its mass and volume, while division means the separation of two almost equally sized daughter cells, which is usually a much faster process than growth. While preparing for cell division, the genome together with its associated proteins must be doubled with extraordinary precision.

A eukaryotic cell cycle comprises four stages: the G1 (gap) phase, in which the size of the cell is increased by constantly producing RNA and synthesizing protein; the S (synthesis) phase, in which DNA synthesis and duplication occur; the G2 (gap) phase, in which the cell continues to produce new proteins and grows in size; and the M (mitosis) phase, in which the chromosomes segregate and cell division occurs. In particular, the genome is kept constant in the G1, G2, and M phases, but duplicated in the S phase, which lasts a shorter time than the cell volume growth process and much longer than the cell division time, as shown in Figure 1.6. Mammalian cells require around 12–24 h to complete one cell cycle, whereas bacteria may divide every 20–30 min, and yeast cells or other protozoans may require 6–8 h.
Since the cell volume and DNA content must increase by a factor of 2 between successive divisions in order to ensure that the mass of the two daughter cells remains nearly equal to that of the mother cell, the concentrations or the numbers of molecules in the cell inevitably depend on the dynamics of the cell cycle, which in turn have a significant effect on the dynamics of molecular networks.

The cell cycle is a vital process by which a single-celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. The major events of the cell cycle, i.e., DNA synthesis, mitosis, and cell division, are regulated by a complex network of regulatory proteins known as the cell cycle control system. The core of this system is an ordered series of biochemical switches that control the main events of the cycle. The control system monitors the conditions inside and outside of the cell; when the system malfunctions, excessive cell division can result in cancer. In the past decade, many researchers have developed various models of the cell cycle and its control system, which have greatly improved our understanding of cell cycle dynamics (Battogtokh et al. 2006, Chen et al. 2000, Tyson et al. 2001, Tyson and Novak 2001, Tyson et al. 2002).

Besides the cell cycle, rhythmic phenomena exist at all levels in living organisms, with periods ranging from less than a second to years, which may allow living organisms to adapt their behaviors to a periodically varying environment.


Figure 1.6 Schematic representation of the cell cycle. To produce two similar daughter cells, the complete DNA must be duplicated. DNA replication occurs during the S phase. At the end of the S phase, each chromosome consists of a pair of sister chromatids held together by tethering proteins. After a gap (G2 phase), the cell enters mitosis (M phase), when the replicated chromosomes are aligned on the metaphase spindle, with sister chromatids attached by microtubules to opposite poles of the spindle. Finally, the tethering proteins are removed so that the sister chromatids can be segregated to opposite sides of the cell (anaphase). Shortly thereafter, the cell divides to produce two daughter cells in G1 phase (from (Collins et al. 1997))

In particular, a circadian rhythm is a roughly 24-h cycle in the biochemical, physiological, or behavioral processes of living beings, including plants, animals, fungi, and cyanobacteria. Recently, experiments on regenerating murine liver have also revealed a functional pathway linking cell division and the circadian clock. Circadian rhythms are biological clocks which are endogenously generated and can be entrained by external cues called Zeitgebers. The primary Zeitgeber is daylight, but other factors, e.g., temperature, also affect the rhythms. For instance, there is some evidence that liver cells appear to respond to feeding rather than to light. These rhythms allow organisms to anticipate and prepare for precise and regular environmental changes.

The primary circadian clock or master clock in mammals is located in the suprachiasmatic nucleus, a pair of distinct groups of neurons located in the hypothalamus. More-or-less independent circadian rhythms are also found in many organs and cells outside the suprachiasmatic nuclei. These clocks, called peripheral oscillators, are found in the oesophagus, lung, liver, pancreas, spleen, thymus, and skin. Generally, the rhythms are linked to the light–dark cycle. Animals, including humans, kept in total darkness for extended periods eventually function with a free-running rhythm: each day, their sleep cycle is pushed back or forward, depending on whether their endogenous period is longer or shorter than 24 h. Under normal conditions, the rhythms are reset each day by environmental cues, i.e., Zeitgebers. Every rhythmic phenomenon in living organisms is believed to result from the internal nonlinear interactions of molecules, or molecular networks, which are coupled with external stimuli and are usually represented as a nonlinear dynamical system.

1.2 A Primer to Networks

The quantifiable tools of network theory and nonlinear dynamics theory offer unexplored possibilities to understand the structures and dynamics of molecular networks. Network theory has been applied to many disciplines, including particle physics, computer science, biology, economics, operations research, and sociology. There are many types of networks, examples of which include logistical networks, the world wide web, gene regulatory networks, metabolic networks, social networks, and epistemological networks. With the recent explosion of publicly available high-throughput biological data, the analysis of molecular networks has gained significant interest. The study of molecular networks now covers not only local patterns in the network, e.g., network hubs, network motifs, pathways, feedback loops, modules, and communities, but also global features of networks, such as feedback structure, scale-free degree distributions, small-world properties, and robustness.

Here, we introduce some basic concepts of network theory that allow us to characterize different molecular networks, e.g., gene regulatory networks, transcription regulatory networks, protein interaction networks, metabolic networks, and signal transduction networks. A network is composed of a set of interconnected vertices or nodes. For example, in a transcription regulatory network, the nodes are genes, and edges represent the transcriptional regulation of one gene by the protein products of other genes. On the other hand, in a protein network, a node is a protein, and an edge represents a physical or genetic interaction between two proteins. If each edge has a specific direction, then the network is called a directed graph; otherwise, it is called an undirected graph.
Generally, transcription regulatory networks and metabolic networks are modeled as directed graphs, whereas signal transduction networks can be represented as directed graphs or as hybrid graphs with both directed and undirected edges, depending on the reactions and interactions in the network (e.g., PPIs). For instance, a transcription regulatory network (or transcriptional regulatory network) is a directed graph because, if gene A regulates gene B, there is a natural direction associated with the edge between the corresponding nodes, starting at A and finishing at B. A directed graph may also include so-called self-loops, i.e., edges from a vertex to itself. On the other hand, PPI networks describe physical interactions between proteins in an organism, and there is no direction associated with the interactions in such networks; hence, they are typically modeled as undirected graphs. The number of vertices n in a directed or undirected graph is the size or order of the graph. Generally, there is a weight coefficient, a positive or negative number, on each edge to represent the strength of the interaction. A network can be represented as a static graph or as a dynamical system, depending on the setting of the problem.

1.2.1 Basic Concepts of Networks

A set of vertices connected by edges is only the simplest type of network or graph. There are many ways in which a network may be more complex than this structure. For example, there may be more than one type of vertex, more than one type of edge, or both. Consider the example of a gene regulatory network, in which the vertices represent genes (mRNAs) or proteins, and the edges represent transcription, translation, or other kinds of gene regulation. The edges can carry weights representing, say, how strongly a TF directly controls the transcription rate of its target gene. The edges can be of two types. Activation, or positive regulation, with a positive weight on the edge, occurs when an increase in the concentration or number of molecules at one vertex enhances the concentration or number at another vertex; otherwise, the regulation is negative, or called repression, with a negative weight on the edge. Similarly to edges, a node can also carry a weight to quantitatively or qualitatively represent its contribution to a certain property. As described above, a molecular network is formed by a variety of pairwise interactions between molecules, each of which can be directed in only one direction.
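Such a signed, weighted, directed network can be encoded directly as a set of labeled edges. The sketch below uses the four-gene example given earlier (gene A activates genes B and C but represses gene D, which in turn activates A); the weight magnitudes are arbitrary illustrations:

```python
# Signed, weighted, directed edges: (source, target) -> weight,
# where a positive weight means activation and a negative one repression.
# The genes and weight values are illustrative, not measured.
edges = {
    ("A", "B"):  1.0,   # A activates B
    ("A", "C"):  0.8,   # A activates C
    ("A", "D"): -1.0,   # A represses D
    ("D", "A"):  0.5,   # D activates A, closing a feedback loop
}

def out_degree(node):
    """Number of genes directly regulated by `node`."""
    return sum(1 for (src, _tgt) in edges if src == node)

def regulators_of(node):
    """Map each direct regulator of `node` to the sign of its action."""
    return {src: ("activation" if w > 0 else "repression")
            for (src, tgt), w in edges.items() if tgt == node}
```

Here `out_degree("A")` is 3 and `regulators_of("D")` reports that A represses D; richer analyses of such graphs would typically use a dedicated library such as NetworkX.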
A gene regulatory network is directed because each message is transmitted in only one direction. Directed networks can be either cyclic, meaning that they contain closed loops of edges, or acyclic, meaning that they do not. In directed networks, feedback plays an important role in understanding many aspects of the network, such as stability and robustness. Feedback can be defined as the ability of a system to adjust its output in response to monitoring itself. Taking a gene regulation process as an example, a protein A may inhibit or enhance the transcription of a gene which encodes some other protein B, while B may in turn influence the production of A, thereby forming a closed feedback loop.

Feedback is a ubiquitous control mechanism in molecular networks. It has become clear that the principles of feedback – both positive and negative loops – are indeed used to produce signaling properties. With negative feedback, the response counteracts the effect of the stimulus; it can therefore be used to create homeostasis, where the steady-state concentration of the response is confined to a narrow window for a broad range of signal strengths (Tyson et al. 2003). Negative feedback can also be used to create an oscillatory response, such as circadian rhythms, and to suppress noise (Kim et al. 2006). Positive feedback, by contrast, amplifies initial differences, making it possible to create amplification, decision, and memory. For instance, under positive feedback, a slightly elevated temperature would lead to maximum heating and a slightly lowered temperature to maximum cooling, eventually resulting in two stable states – very hot and very cold. Therefore, positive feedback can be used to create a switch, in which the cellular response changes abruptly as the signal magnitude crosses a critical value. Moreover, such a feature can also be used as a sensor to detect small perturbations. Molecular networks may have multiple positive and negative feedback loops, in particular in the form of coupled positive and negative feedback loops, which have been proposed as a basis for rapidly turning on a reaction in response to a proper stimulus, robustly maintaining its status, and immediately turning off the reaction when the stimulus disappears. Therefore, coupled positive and negative feedback loops form essential signal transduction motifs in cellular signaling systems (Kim et al. 2006). It has also been shown that a signaling system with multiple feedback loops is more robust than one with a single feedback loop (Venkatesh et al. 2004). Excellent reviews on the roles of positive and negative feedback in cellular systems can be found in (Tyson et al. 2003, Mitrophanov and Groisman 2008, Ferrell 2002, Freeman 2000).

Molecular networks also evolve over time, with vertices or edges appearing or disappearing (i.e., dynamics of structure evolution), or with the values defined on those vertices and edges changing (i.e., dynamics of state evolution). Nodes are added to a gene regulatory network when genes duplicate. Duplicated genes, however, quickly change their interactions and thereby rapidly specialize their interacting partners.
This results in a link dynamics model which explains network evolution through interaction loss and preferential interaction gain (Wagner 2003). Generally, the core components of a network tend to be conserved, whereas components at the periphery, or false interactions, do not. Although the conservation of network nodes and edges is extremely valuable for mapping conserved interactions and common features among organisms, it is likely that many regulatory interactions are not conserved, thereby contributing to species diversity and allowing organisms to occupy distinct ecological niches (Zhu et al. 2007).

1.2.2 Topological Properties of Networks

Cellular functions cannot be attributed to isolated components. Rather, they arise from characteristics of molecular networks, which represent the connections between cellular components. A molecular network is a typical complex network, which is generally characterized not only by global topological properties, such as small-world character and scale-free degree distribution (Albert et al. 2002, Barabasi and Oltvai 2004, Watts et al. 1998), but also by local patterns or structures, such as motifs and modules (Alon 2006), as shown in Figure 1.7.


In other words, unlike random networks, molecular networks contain characteristic topological patterns that enable their functionality. A node with a high degree, i.e., one connecting to many other nodes, is called a hub, which is considered to play a crucial role in the qualitative behavior of the network. Simple patterns of interconnection occurring in complex networks significantly more frequently than in randomized networks are defined as network motifs (Milo et al. 2002); being the basic building blocks of molecular networks, they imply structural design principles.

Figure 1.7 Ingredients of molecular networks: molecules, interactions, local structures, and networks, where local structures include hubs, pathways, feedback loops, modules, communities, network motifs, and subnetworks. Clearly, the basic units or components in a network are individual molecules, which affect each other through their local or pairwise interactions. A chain or cascade of those local interactions is a linear pathway or local structure, which transforms local perturbations into a functional response. All of these linear pathways and local structures are assembled into a global molecular network, which eventually generates global behaviors and is responsible for the complicated life of a living organism (Chen et al. 2009)

Some functionally related components often interact with one another, forming modules in molecular networks (Hartwell et al. 1999, Qi and Ge 2006). Motifs represent recurrent topological patterns, while modules are bigger building blocks that can carry out certain cellular functions such as signal transmission. Modules may contain motifs as their structural components and maintain certain properties, such as robustness to environmental perturbations and evolutionary conservation, because such a modular design can prevent damage from spreading limitlessly and can ease the evolutionary upgrading of some components. In addition to hubs, motifs and modules can be found in PPI, metabolic, and transcriptional networks. For PPI and metabolic networks, modules can be defined as subnetworks whose component entities are more likely to be connected to each other than to entities outside the subnetwork, while for transcriptional networks, modules can be defined as sets of genes controlled by the same set of TFs under certain conditions. The modular organization of molecular networks provides testable hypotheses that yield biological insights, such as functional annotation, because the components in a given module are hypothesized to be functionally coherent; the module structures can also supply key regulatory information (Qi and Ge 2006). In addition to motifs and modules, other topological and statistical properties of networks, such as the degree distribution, centrality, clustering coefficient, and average path length, are important factors for revealing the complex nature of molecular networks.
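Several of these quantities are straightforward to compute for small graphs. The sketch below, on a made-up five-node undirected network, computes a node's degree, its clustering coefficient (the fraction of its neighbour pairs that are themselves connected), and shortest-path lengths by breadth-first search:

```python
from collections import deque

# Toy undirected network as an adjacency dict (the topology is illustrative)
adj = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A", "E"},
    "E": {"D"},
}

def degree(v):
    return len(adj[v])

def clustering(v):
    """Fraction of pairs of v's neighbours that are themselves connected."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def shortest_path_len(src, dst):
    """Breadth-first-search distance between two nodes (None if disconnected)."""
    dist, frontier = {src: 0}, deque([src])
    while frontier:
        u = frontier.popleft()
        if u == dst:
            return dist[u]
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                frontier.append(w)
    return None
```

In this example "A" is the hub (degree 3), its clustering coefficient is 1/3 (only the B-C pair of its neighbours is connected), and the path from "E" to "B" has length 3; averaging such distances over all node pairs gives the average path length used to diagnose small-world behavior.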

1.3 A Primer to Dynamics

Having described some cellular processes and network basics, we proceed to basic concepts of modeling and dynamics. The enterprise of modeling research requires both breadth and depth in understanding various aspects of molecular networks, including biological, computational, mathematical, and even engineering issues. When modeling a specific problem of a system, appropriate models must be selected in order to reflect the essential properties of the system, because different models may highlight different aspects of the same system. Further, even after an appropriate model has been selected, there are still many different aspects which need to be understood. This section introduces some fundamental concepts and aspects of modeling a molecular network as a dynamical system. Different approaches for modeling molecular networks are being investigated. For instance, various kinds of network models, such as Bayesian networks, Boolean networks, and linear feedback networks, have been adopted to infer the static or dynamic structure of regulatory networks from experimental data. There are also other mathematical models which attempt to capture more complex phenomena such as spatiotemporal fluctuations and diffusion. All these methods can be categorized as static or dynamic, discrete or continuous, qualitative or quantitative (Jong 2002). Clearly, these studies help us to understand, qualitatively or quantitatively, the structures and functions of various molecular networks and their regulatory mechanisms in living cells.

1.3.1 Dynamics and Collective Behavior

Dynamics exist in living organisms at all levels. From both theoretical and experimental viewpoints, it is a major challenge to model, analyze, and further predict various kinds of dynamical behavior in biological systems.
One of the best-studied dynamical phenomena thus far is circadian oscillation, which is assumed to be produced by limit cycle oscillators arising at the molecular level from gene regulatory feedback loops. With rapid advances in mathematics and in experiments concerning the underlying regulatory mechanisms, more sophisticated theoretical models and general techniques are increasingly required to elucidate various features of dynamical behavior at a system-wide level.
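As a rough illustration of such a limit cycle oscillator, the sketch below integrates a Goodwin-type negative feedback loop (mRNA x, intermediate protein y, repressor z) by forward Euler. All parameter values here are illustrative rather than fitted to any real circadian system; the Hill coefficient is deliberately large so that the feedback is steep enough to destabilize the steady state and sustain a rhythm.

```python
# Goodwin-type oscillator: repressor z feeds back on the transcription of x.
# Illustrative parameters: equal decay rates b; Hill coefficient n = 12.
a, b, n = 1.0, 0.1, 12
dt, t_end = 0.01, 600.0

x = y = z = 0.0
late_z = []                          # samples of z after transients die out
t = 0.0
while t < t_end:
    dx = a / (1.0 + z**n) - b * x    # transcription with Hill repression
    dy = b * (x - y)                 # translation step of the chain
    dz = b * (y - z)                 # conversion to the active repressor
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    t += dt
    if t > 400.0:
        late_z.append(z)

amplitude = max(late_z) - min(late_z)   # nonzero for a sustained rhythm
```

With a stiffer (implicit or adaptive) integrator the same model can be explored over wide parameter ranges; the point here is only that a single delayed negative feedback loop suffices for limit cycle oscillation.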


Another common dynamical phenomenon in biology is collective behavior, that is, well-coordinated responses resulting from an integrated exchange of information by cell communication in both prokaryotes and eukaryotes. Collective behavior is widespread in living organisms. The ability of cells to cooperate or communicate is an absolute requisite to ensure appropriate and robust coordination of cell activities at all levels of an organism under an uncertain environment. Understanding the mechanisms of cooperative behavior for molecules (such as chemotaxis and quorum sensing) is an essential topic in systems biology, which requires both mathematical and biological knowledge and insight. Generally, for a coupled system, cooperative behavior such as intercellular communication is accomplished by transmitting individual cell reactions via intercellular signals to neighboring cells and further integrating them to generate a global cellular response at the level of molecules, tissues, organs, and bodies. Recently, many studies have indicated that an uncoupled system may also exhibit collective behavior or synchrony provided that all subsystems or individual components are subjected to a common fluctuating environment (Teramae and Tanaka 2004).

1.3.2 System States

An important notion in modeling a dynamical molecular network is the system state. It is a snapshot of the network at a given time and contains sufficient information to predict the behavior of the system at all future instants. A system state is described by a set of variables that must be monitored in a model. Different representations of the state can be used in different modeling approaches. The states can be discrete or continuous, deterministic or stochastic. For instance, in a Boolean network model for a gene regulatory network, each gene is assumed to be in one of two states: ‘expressed’ or ‘not expressed’.
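A minimal synchronous Boolean network can be sketched as follows; the three-gene wiring is hypothetical, chosen only to illustrate how discrete states are updated, and is not taken from any published network:

```python
# Synchronous Boolean network: each gene is 1 ('expressed') or 0 ('not').
# The three-gene rules below are hypothetical, for illustration only.
rules = {
    "A": lambda s: 1 - s["C"],            # C represses A
    "B": lambda s: s["A"],                # A activates B
    "C": lambda s: s["A"] and s["B"],     # C needs both A and B
}

def step(state):
    # Every gene reads the same snapshot and all update simultaneously.
    return {g: int(rule(state)) for g, rule in rules.items()}

state = {"A": 1, "B": 0, "C": 0}
trajectory = [state]
for _ in range(5):
    state = step(state)
    trajectory.append(state)
# This toy network returns to its initial state after five steps,
# i.e., its trajectory is a length-5 attractor cycle.
```

Enumerating all 2^3 initial states in the same way reveals the full attractor landscape, which is how fixed points and cycles of Boolean models are usually found.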
Similarly, each protein can be assumed to be in either an active or an inactive state (Li et al. 2004). For simplicity, the expressed and not expressed states are denoted as 1 and 0, respectively. The state of the model is simply a list of which genes are expressed and which are not; therefore, the system states are discrete and deterministic in a Boolean network model. In a differential equation model for a genetic regulatory network, the state, a list of concentrations of its cellular components, is continuous and deterministic. On the other hand, in the corresponding stochastic model, such as the master equations, the state is a list of the current numbers of all species, and therefore the state is discrete and stochastic. With the approximation of discrete variables by continuous variables (e.g., concentrations), a stochastic model can also be continuous and stochastic, as in stochastic differential equations.

1.3.3 Structures and Functions

The analysis of topological structures and functions in molecular networks is an important issue in systems biology. Increasing numbers of scientists have


been attracted to this research topic. It is therefore essential to establish theoretical methodologies and computational techniques that enable us to understand how components dynamically interact to form molecular networks which facilitate sophisticated biological functions such as robustness and adaptation. Inferring a network structure is a complicated problem which generally cannot be solved automatically on the basis of a few principles or universal rules applied to experimental data. To identify a network structure, two kinds of approaches, i.e., bottom-up (knowledge-driven) and top-down (data-driven) approaches, can be utilized depending on the available experimental data. The bottom-up approach constructs a molecular network by compiling independent experimental data, mostly through literature searches and database queries, such as the Kyoto Encyclopedia of Genes and Genomes, together with specific experiments that probe particular aspects of the network. This approach is suitable when most of the molecular mechanisms and their regulatory relationships are relatively well understood. On the other hand, the top-down approach utilizes high-throughput data obtained via new measurement technologies, e.g., microarrays for gene expression and mass spectrometry for protein expression. Although the top-down approach does not require prior knowledge, its drawback is the computational cost. Hence, when prior knowledge is abundantly available, a hybrid method that combines the bottom-up and top-down approaches is preferred in order to reduce the search space of network structures. The theory of complex networks is a powerful tool for elucidating the structures and functions of molecular networks because of its universality and its ability to mimic systems of many interacting parts.
Global and local properties such as degree distribution, motifs and modules, and hierarchies play important roles in understanding the functional implications of molecular networks. For example, metabolic networks have been found to have small-world and scale-free properties (Barabasi and Oltvai 2004, Jeong et al. 2000). Such properties not only ensure network robustness against random failures of nodes but also guarantee efficient transport and flow processing by avoiding congestion. Many protein interaction networks also have a scale-free degree distribution, which confers tolerance to random errors coupled with fragility against the removal of the most connected nodes. Irrespective of these advances, revealing the structures and functions of various complex molecular networks is still a key challenge in the field of systems biology. Besides theoretical results on complex networks, nonlinear dynamical theory and control theory are also widely used in analyzing the structures and functions of networks, e.g., stability theory, bifurcation theory, Lur’e systems, and linear matrix inequality (LMI) techniques. For example, the dynamics of cell cycle regulation has been analyzed by using bifurcation theory, which reveals how the generic properties of a dynamical molecular network depend on parameter perturbations (Tyson et al. 2002). On the basis of theoretical analysis, bifurcations have been shown to


underlie cell cycle transitions. A network with only positive feedback loops can be used to construct a molecular switch, because the dynamics of such a network can be guaranteed to converge only to stable equilibria on the basis of the theory of monotone dynamical systems (Kobayashi et al. 2003). On the other hand, a network with interlocked positive and negative feedback loops can be used to construct a circadian clock (Glossop et al. 1999), which ensures a stable periodic oscillation.

1.3.4 Cellular Noise

Cellular processes at the molecular level are inherently stochastic. The origin of stochasticity in a cell can be attributed to random transitions among discrete biochemical states, which are the source of inherent fluctuations for the cell. There are two sources of noise. First, the inherent stochasticity of biochemical processes such as binding, transcription, and translation generates intrinsic noise due to random encounters, whose relative magnitude is proportional to the inverse of the system size. Second, variations in the amounts or states of cellular components due to discrete numbers or the external environment generate extrinsic noise. Such noise processes are believed to play especially important roles when species are present at low copy numbers. Systematic treatment of noise is essential for understanding biologically relevant system properties. It has become clear that the role of noise in cellular functions may be complex and cannot always be treated as a small perturbation to the deterministic behavior. Moreover, stochastic fluctuations may play constructive roles and are not always the cause of systematic worsening of system properties. For instance, molecular fluctuations may enhance the sensitivity of intracellular regulation (Paulsson et al. 2000), induce new phenomena such as noise-based switches and amplifiers for gene expression (Hasty et al.
2000), and mediate collective behavior or stochastic synchronization (Chen et al. 2005). When species are present at low copy numbers, the stochastic description is more reasonable, although it may be solvable neither analytically nor with high computational efficiency. On the other hand, when the species numbers are high and the system is operating far from its critical points, the deterministic description is more reasonable, owing to its simple representation and high computational efficiency. Recently, it has been shown that noise is generated at the microscopic level of discrete variables and transmitted to the mesoscopic level of continuous variables (Crudu et al. 2009).

1.3.5 Time Delays

Biological regulation typically occurs via direct or indirect interactions between cellular components. During the regulation, there are always time delays associated with the biosynthesis and transport of regulatory molecules to reach


the site of action. Delayed feedbacks are ubiquitous in many cellular systems such as the regulatory networks of circadian rhythms. Time delays are usually introduced in cellular models to represent the time taken for transcription, translation, phosphorylation, protein degradation, translocation, and post-translational modification, and they may significantly influence the stability and dynamics of the overall system, especially in eukaryotes (Chen and Aihara 2002a). Quantitative measurements of the delays are few, but see (Audibert et al. 2002) for recent progress in measuring RNA splicing delays. Few studies have focused on signal transduction delays. Time delays may also play important roles in many other respects, such as entrainment and the occurrence of certain physiological disorders like sleep phase syndromes (Sriram et al. 2006). Besides the roles played in deterministic systems, time delays may also cause new phenomena in stochastic systems, such as delay-induced stochastic oscillations (Bratsun et al. 2005). It has been shown that when the time delays in biochemical reactions are of the order of the other significant time scales characterizing the cellular system or longer, and the feedback loops associated with these delays are strong, the delays can be crucial for the description of transient processes; this implies that when delay times are significant, both analytical and numerical modeling should take the effects of time delays into account. Several useful algorithms and computational tools have been developed for this purpose. For instance, a generalized Gillespie algorithm that accounts for the non-Markovian properties of random biochemical events with delays has been widely used to simulate the dynamical behavior of molecular networks (Bratsun et al. 2005).

1.3.6 Multiple Time Scales

Each cell can be viewed as an integrated device made of several thousand types of interacting cellular components.
Explicitly considering all the components in individual cells is unrealistic from the modeling, analysis, and computing viewpoints. However, the many different time scales that characterize various cellular processes can be exploited to reduce the complexity of the mathematical models. Generally, in gene regulatory networks, the transcription and translation processes evolve on a time scale that is much slower than that of other biochemical reactions, such as phosphorylation, dimerization, or binding reactions of proteins. For instance, the times to transcribe a DNA sequence and to translate an mRNA into a protein in Escherichia coli are about one and two minutes, respectively, whereas the time for a TF to bind a DNA promoter in Escherichia coli is about one second. The wide range of time scales on which cellular processes may occur is also an important feature of metabolism. Some modifications may happen within seconds, while other processes take minutes, hours, or even longer. For example, in Michaelis–Menten rate equations, it is assumed that enzyme production and degradation occur on a much longer time scale than the catalyzed reaction. In addition, protein production and


degradation occur on a different time scale than signal transduction. Dynamics are generally intertwined between the gene regulatory, metabolic, and signal transduction networks, and different time scales also co-exist in these networks. For example, enzymatic reactions occur on the order of milliseconds, while gene regulatory events occur on the order of minutes or hours. Because multiple time scales exist in a molecular network, we need to consider which time scale is relevant for the specific problem under study. When modeling, we can choose an appropriate time scale while neglecting faster or slower processes in order to reduce the system, provided that the simplified system is guaranteed to behave similarly to the original one. For example, in a process that occurs within seconds or less, the details of binding and unbinding and protein transitions between active and inactive states have to be modeled. At longer time scales, on the other hand, these can be considered to be at quasi-steady-states. At very long time scales, processes such as cell division, which can be ignored at shorter time scales, may be very important. An example of neglecting slow processes in metabolic modeling is to assume that enzyme concentrations are constant, since their production and degradation are much slower than the reactions that they catalyze (Ciliberto et al. 2007). Similarly, an example of neglecting fast processes in modeling a gene regulatory network is to assume that the fast reactions rapidly approach quasi-steady-states; in mathematical terms, the differential equations for the concentration changes of the fast variables can then be replaced by the corresponding algebraic equations (Wang et al. 2004).

1.3.7 Robustness and Sensitivity

Robustness, which characterizes the ability to maintain performance in the face of perturbations and uncertainty, is one of the essential features of cellular systems (Stelling et al. 2004a).
For example, it has been shown that the yeast cell cycle network is robust with respect to small perturbations to the network (Li et al. 2004). It is essential for cells to protect their genetic information and their mode of living against perturbations. Robustness in biological systems is often achieved by a high degree of complexity involving feedback, modularity, redundancy, and structural stability (Kitano 2002). The phenomenological properties exhibited by robust systems can be classified into three areas: adaptation, which denotes the ability to cope with environmental changes; parameter insensitivity, which indicates a system’s relative insensitivity to specific kinetic parameters; and graceful degradation, which reflects the characteristic slow degradation of a system’s functions after damage, rather than catastrophic failure (Kitano 2002). Global properties such as scale-free network structures are also helpful for robustness against random failure of molecules (Barabasi and Oltvai 2004). Different robustness measures have been proposed, such as the degree of robustness (DOR), which measures the minimal distance from a reference point in the parameter space to a bifurcation point; multiparametric robustness


by structural singular values (Ma and Iglesias 2002); robustness with respect to random multiple parameter variation, defined via the “total parameter variation” (Bluthgen and Herzel 2003); and a Monte Carlo-based robustness measure (Eissing et al. 2005), in which random parameter sets are drawn from predefined ranges and the relative frequency of occurrence provides an estimate of the volume of the parameter space allowing bistable behavior. At the same time, cellular systems are able to adapt to changes, sense and process internal and external signals, and react precisely depending on the type or strength of a perturbation. Sensitivity or fragility characterizes the ability of living organisms to react adequately to a certain stimulus. Moreover, robustness is usually quantified by calculating sensitivities, such as the period and amplitude sensitivities used to quantify the robustness of circadian rhythms; relatively small sensitivity indicates high robustness. Understanding the mechanisms behind robustness and sensitivity is particularly important because it provides in-depth understanding of how a system not only maintains its functional properties against various disturbances but also adapts to environmental changes and reacts precisely. Because both robustness and sensitivity are required, there must exist a tradeoff between them. Some of the mechanisms underlying this tradeoff have been revealed, for example, by module-based analysis of robustness tradeoffs in the heat shock response system (Kurata et al. 2006) and by steady-state analysis of yeast cell polarization (Chou et al. 2008).
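A Monte Carlo-style robustness estimate of this kind can be sketched on a symmetric genetic toggle switch. The model, the parameter ranges, and the bistability test below are all illustrative assumptions, not the setup of the cited study:

```python
import random

# Symmetric toggle switch (illustrative): two mutually repressing genes,
#   dx/dt = a/(1 + y**n) - x,   dy/dt = a/(1 + x**n) - y.
def settle(a, n, x, y, dt=0.05, steps=4000):
    """Integrate to (near) steady state by forward Euler."""
    for _ in range(steps):
        dx = a / (1.0 + y**n) - x
        dy = a / (1.0 + x**n) - y
        x, y = x + dt * dx, y + dt * dy
    return x, y

def is_bistable(a, n):
    # Start from two mirrored initial conditions; clearly distinct
    # endpoints indicate two coexisting stable states.
    x1, _ = settle(a, n, 5.0, 0.0)
    x2, _ = settle(a, n, 0.0, 5.0)
    return abs(x1 - x2) > 0.1

random.seed(0)
trials = 200
hits = sum(is_bistable(random.uniform(0.5, 5.0), random.choice([1, 2, 3, 4]))
           for _ in range(trials))
fraction_bistable = hits / trials   # crude estimate of the bistable volume
```

The fraction of sampled parameter sets that pass the test estimates the relative volume of the bistable region, in the spirit of the Monte Carlo-based measure described above.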

1.4 Network Systems Biology and Synthetic Systems Biology

Systems biology is an emergent area in biology that focuses on the systematic study of complex interactions in biological systems by integrating biology, mathematics, chemistry, physics, informatics, engineering, and other fields (Kitano 2002, Chen et al. 2009). It has been found that a complicated living organism cannot be completely understood by merely analyzing individual components, such as genes and metabolites. It is the interactions among components, or the networks, that are responsible for the functions and behavior of biological systems. Facing the challenge of understanding the complexities of living organisms, instead of analyzing individual components or interactions of an organism with the so-called reductionist approach, systems biology studies an organism by considering all components and interactions together, treating the organism as a dynamical and interacting network of genes, proteins, and biochemical reactions which contribute to life (Barabasi and Oltvai 2004, Chen et al. 2009). In recent years, with the rapid progress of various measurement technologies and experimental methods, many high-throughput technologies have been developed for systematically studying interactions or networks of molecules, such as microarrays, next-generation sequencing, the two-hybrid assay, co-immunoprecipitation,


and the ChIP-chip approach, which can be used to screen for PPIs or to infer gene regulatory networks. With the increasing accumulation of data from high-throughput technologies, molecular networks and their dynamics have been studied extensively from various aspects of living organisms. These studies help biologists not only to understand complicated biochemical phenomena but also to elucidate the essential principles and fundamental mechanisms of cellular systems at a system-wide level. One of the biggest challenges in network systems biology is to build a complete and high-resolution description of molecular topography and to connect molecular interactions or molecular networks with physiological responses. By studying the relationships and interactions between various parts of a biological system, e.g., regulation modules, functional pathways, organelles, cells, physiological systems, and organisms, we aim to eventually develop an understandable model or a molecular network of the whole system, which is key both for understanding life and for applications in human medicine, in particular from the theoretical and engineering perspectives. Figure 1.8 shows schematically the major research topics of network systems biology. Closely related to systems biology, synthetic biology is also a new area of research that combines biological science and engineering in order to design and build novel artificial biological functions and systems, and it requires techniques from systems biology. In other words, synthetic biology, in particular synthetic systems biology, involves the design and construction of new biological parts, devices, and systems, or the re-design of natural biological systems for useful purposes. Research in synthetic biology aims at combining knowledge from various disciplines, including molecular biology, engineering, and mathematics, to design functional networks and implement new cellular behavior.
Recent progress in genetic engineering has made the design and implementation of artificial synthetic gene networks realistic from both theoretical and experimental viewpoints. Indeed, starting from theoretical predictions, several simple gene networks have been experimentally constructed, e.g., the genetic toggle switch (Gardner et al. 2000) and the repressilator (Elowitz and Leibler 2000). Such simple models clearly represent a first step towards logical cellular control by manipulating and monitoring biological processes at the DNA level; they not only can be used as building blocks to synthesize artificial biological systems but also have great potential for biotechnological and therapeutic applications. To demonstrate the similarities and differences, Table 1.1 shows a simple comparison between a silicon digital computing system and a synthetic biological system.
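A protein-only caricature of the repressilator (three genes repressing one another in a ring) can be simulated in a few lines. The parameters are illustrative, and the mRNA stages of the published model are omitted; this is a sketch of the feedback topology, not a reproduction of the original system:

```python
# Protein-only repressilator caricature: three genes in a repression ring,
#   dp[i]/dt = a/(1 + p[i-1]**n) - p[i].
# Illustrative parameters; the published model also tracks the mRNAs.
a, n = 10.0, 4
dt, t_end = 0.01, 300.0

p = [1.0, 1.5, 2.0]              # asymmetric start to break the symmetry
trace = []                       # record p[0] after the transient
t = 0.0
while t < t_end:
    dp = [a / (1.0 + p[i - 1]**n) - p[i] for i in range(3)]
    p = [p[i] + dt * dp[i] for i in range(3)]
    t += dt
    if t > 200.0:
        trace.append(p[0])

swing = max(trace) - min(trace)  # sustained oscillation gives a large swing
```

Note that `p[i - 1]` wraps around in Python (`p[-1]` is the last element), which is exactly the ring of repressions; an odd number of repressors is what prevents the system from settling into a stable switch-like state.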

1.5 Outline of the Book

This book provides new theoretical tools and computational models for modeling and analyzing dynamical networks at the molecular level, from systems and engineering viewpoints. In particular, we provide a general theoretical


Figure 1.8 Major research topics of network systems biology. Omics data include high-throughput data from genomics, transcriptomics, proteomics, metabolomics, and phenomics

Table 1.1 Silicon computing system and synthetic biological system

                     In-silico computing system                            Synthetic biological system
Building blocks      silicon switch, clock, silicon sensor                 gene switch, gene oscillator, gene sensor
‘Hardware’           gate, memory, CPU, computer                           bio-gate, bio-sensor, bio-memory, bio-computer
‘Software’ (codes)   C++, Fortran, Basic                                   genetic code (A, C, G, T)
Networks             computing system                                      living organism
Applications         computation, control, artificial intelligence, etc.   bio-tech, logical cellular control, drug, etc.


framework to model building blocks of biomolecular systems and to analyze nonlinear dynamical phenomena, which readers can apply to design function-oriented molecular networks and solve novel biological problems on the basis of their knowledge and skills. Specifically, the new features of the book include:

1. modeling a general molecular network with either time-invariant or time-varying parameters;
2. nonlinear analysis of molecular networks, such as stability and bifurcation analysis;
3. designing synthetic switching networks;
4. designing synthetic oscillating networks;
5. a quantitative simulation scheme for molecular networks with stochastic fluctuations;
6. synchronizing bio-oscillators without noise;
7. synchronizing bio-oscillators with noise;
8. noise-induced collective behavior of molecular networks;
9. graphical representations for molecular networks, i.e., the interaction graph, incidence graph, and species-reaction (SR) graph.

The engineering, network, and dynamics approaches of this book have major strengths. For instance, many engineering areas are featured in this book, including forward engineering design, signal processing, and control systems. The text is designed to match ideas that engineering students are familiar with, and these approaches are demonstrated and explored within the context of the functionality of living systems. On the other hand, the nonlinear dynamical analysis of molecular networks with respect to collective behavior and switching dynamics, which are ubiquitous in living organisms, is presented in a comprehensive and systematic manner. The topic is treated in depth and is related to other emerging areas, such as network motifs, molecular communication, and stochastic resonance in living organisms.
The intended readers are systems biology and computational biology specialists in academia and industry, including pharmaceutical researchers, engineers, postgraduates, molecular biologists who rely on computers, and mathematical scientists with an interest in biology. In addition, three graphical representations are introduced in this book to analyze molecular networks, i.e., the interaction graph, the incidence graph, and the SR graph. Figure 1.9 shows the applications of these graphical representations to cellular systems. Table 1.2 lists the major theoretical tools, i.e., monotone dynamical systems, Lur’e systems, and LMI techniques, which are adopted to analyze the dynamical behavior of cellular systems, in particular the stability, bifurcation, and synchronization of molecular networks. The book includes new developments from cutting-edge research topics and methodologies in the area of molecular network design and nonlinear analysis, which are difficult to cover fully owing to a dearth of experts in the related fields. However, this is an area where applied mathematicians and engineers can make a big impact. The main contents of the book are as follows:


[Figure 1.9 is a schematic diagram relating network types (gene regulatory, metabolic/enzyme-related, switching, and oscillating networks, of both monotone and non-monotone type) to the three graphical representations (interaction graph, incidence graph, and SR graph), to multistability analysis, and to MIMO control systems.]
Figure 1.9 Graphical representations for analyzing molecular networks. The interaction graph emphasizes regulations, which can be direct or indirect interactions. The incidence graph is similar to the interaction graph but stresses the relation between input and output. In contrast, the SR graph mainly describes the direct interactions or chemical reactions. A solid line means a directly induced relation between two graphs, and a dotted line implies an indirectly induced relation between two graphs

Table 1.2 Special theoretical tools adopted in the book, i.e., monotone dynamical systems, Lur’e systems, and LMI techniques, etc.

Special theoretical tools     Major topics
Monotone dynamical systems    switching behavior, oscillating behavior, multistability
Master equations              stochastic simulation, stochastic synchronization
Cumulant equations            stability, bifurcation
LMI                           stability analysis, synchronization
Lur’e systems                 gene regulation, stochastic stability


1. Chapter 1 covers the problems and topics of biomolecular networks from both biological and theoretical viewpoints to provide the context and an impetus for the following chapters; it also provides the fundamental concepts of molecular biology and network theory used in the book. In this chapter, we also include perspectives and challenges in modeling and analyzing molecular networks in cellular systems.

2. In Chapter 2, we present a mathematical description of stochastic molecular networks in a single cell in a multiscale manner. Specifically, we show how the master equations, stochastic differential equations, cumulant equations, and then deterministic equations are obtained to model cellular dynamics at the molecular level, depending on the required accuracy. There are three representations of molecular networks, i.e., the stochastic, deterministic, and hybrid representations. Special structures and properties of biochemical reactions are exploited via monotone dynamical systems to reduce the complexity of the cellular systems.

3. Chapter 3 provides a general framework to represent a deterministic molecular network with either time-invariant structures or time-varying structures, the latter to account for the cell division process.

4. In Chapter 4, we discuss qualitative analysis of molecular networks, including stability, bifurcation, sensitivity, and robustness analysis.

5. Chapter 5 describes qualitative analysis of genetic networks based on the Lur’e model, using LMI techniques and Lyapunov functions.

6. In Chapter 6, we illustrate how to construct or design a synthetic molecular network based on interaction graphs and SR graphs, in particular a gene regulatory network with specific functions: switching dynamics with feedback or interlocked feedback networks (i.e., positive-loop networks), by exploiting the special structures of biological systems, e.g., the monotone dynamics of biochemical reactions.
On the basis of synthetic biology, detailed examples of gene regulation are also provided for such networks in bacteria.

7. In Chapter 7, we illustrate how to construct or design a gene regulatory network with rhythmic dynamics in feedback or interlocked feedback networks (i.e., negative-loop networks and hybrid networks), by exploiting the special structures of biological systems, e.g., the monotone dynamics of biochemical reactions. Incidence graphs are also adopted to transform a non-monotone system into a discrete map, which can be analyzed in a much simpler way. On the basis of synthetic systems biology, detailed examples of gene regulation are also provided for such synthetic networks in bacteria.

8. Chapter 8 describes a general model for coupled molecular networks in a multicellular system, i.e., we formulate a molecular network with coupling in stochastic and deterministic forms in a multicell system, which is similar to star-like coupling but has a different structure. Theoretical results on the synchronization of multiple bio-oscillators without noise
and with noise, obtained by nonlinear dynamical theory and control theory, are described. As an illustrative example, a synthetic multicellular system is constructed to show how synchronization is achieved and how the dynamics of individual cells are controlled. The content flow of this book is shown in Figure 1.10. The major topics and theoretical methods are also summarized in Figure 1.11.

Figure 1.10 shows the content flow of the book: modeling molecular networks (Chapters 2 and 3) leads both to quantitatively simulating molecular networks (Chapter 2) and to qualitatively analyzing molecular networks (Chapters 4 and 5); these in turn lead to designing molecular networks (Chapters 6 and 7) and, finally, to applying the methods to multicellular systems and collective dynamics (Chapter 8).

Figure 1.10 Content flow in the book


Figure 1.11 pairs the major topics of the book with the corresponding theoretical methods:

- Modeling general molecular networks, stochastic form (Chapter 2): biochemical equations; stochastic processes; master equations; stochastic differential equations.
- Modeling general molecular networks, deterministic form (Chapter 3): Fokker–Planck equations; cumulant equations; ordinary differential equations.
- Modeling hybrid molecular networks and decomposition (Chapter 2): ordinary differential equations; probability theory; singular perturbation theory; stochastic processes.
- Nonlinear analysis of molecular networks (Chapters 4 and 5): Lyapunov stability; stochastic stability; bifurcation theory; Lur'e systems; LMI.
- Quantitatively simulating networks (Chapters 2 and 8): stochastic simulation; Gillespie algorithm; stochastic differential equations; Monte Carlo methods.
- Designing switching networks (Chapter 6): monotone dynamical systems; interaction graphs; SR graphs; nonlinear dynamical theory.
- Designing oscillating networks (Chapter 7): monotone dynamical systems; interaction and incidence graphs; Poincaré–Bendixson theorem; control theory.
- Synchronizing deterministic molecular networks (Chapter 8): ordinary differential equations; Lyapunov stability theory; Lur'e systems; LMI; Winfree theory on oscillation.
- Stochastic synchronization of molecular networks (Chapter 8): stochastic differential equations; cumulant equations; stochastic resonance; phase synchronization.

Figure 1.11 Major topics and theoretical methods in the book

2 Dynamical Representations of Molecular Networks

A living cell can be viewed as a huge dynamical system or molecular network with stochastic fluctuations at the molecular level, where cellular components interact dynamically, both temporally and spatially. Dynamical representations of the molecular network are necessary for an accurate understanding of the temporal and spatial evolution of a cellular system and can also provide deep insight into the dynamical interactions among the cellular components. This chapter describes a general theoretical framework for modeling molecular networks that takes discrete state transitions and stochastic fluctuations into consideration. In particular, we provide both stochastic and deterministic formulations for modeling networks of biochemical reactions in a cell.

2.1 Biochemical Reactions

Biochemical reactions, which specify how the states of a system change and how fast those changes occur, are very important in modeling molecular networks. Many complex networks can be expressed by such reactions, either qualitatively or quantitatively. Biochemical reactions are so fundamental that the same list of biochemical reactions can lead to different models, e.g., a graphical representation, a stochastic process, or a deterministic model. In this sense, representing processes by biochemical reactions is more basic than using a mathematical model of either stochastic or deterministic dynamics. Writing down a list of biochemical reactions corresponding to a cellular system or a molecular network, together with the rate of every reaction and the initial condition of each species, is a powerful and flexible way to specify the system. However, the reactions themselves specify only the qualitative structure of the system and must be augmented with additional tools or formalisms before they can be used to analyze and simulate the dynamics of the network and further make predictions. One such tool is the dynamical representations of cellular systems or molecular networks.


In this section, we introduce some basic biochemical reactions involved in cellular systems. A cellular system consists of a network of coupled biochemical reactions. These reactions can be transcription, translation, dimerization, protein or mRNA degradation, enzyme-catalyzed reactions, transportation, diffusion, binding or unbinding, DNA or histone methylation, or histone acetylation or phosphorylation. These biochemical reactions constitute various biomolecular networks, e.g., metabolic, genetic, and signaling networks. In an elementary biochemical reaction, one or more biochemical species react directly to form products in a single reaction step with a single transition state. More complicated reactions can be decomposed into a sequence of elementary reactions. A general elementary biochemical reaction can be represented as follows:

    r1 R1 + r2 R2 + · · · + rm Rm →[k] p1 P1 + p2 P2 + · · · + pn Pn,    (2.1)

where m is the number of reactants, and n is the number of products. The terms to the left of the arrow are called reactants, and those on the right are called products. Thus, Ri is the ith reactant, and Pj is the jth product. ri and pj are the numbers of molecules of reactant Ri consumed and of product Pj produced in one reaction step, respectively, and k is a positive constant representing the rate of the reaction, written on the arrow. The coefficients ri and pj are known as stoichiometries and are generally small positive integers. The reactants and products in a cell can be DNA, RNA, proteins, or other chemicals. The total reaction order is defined as the sum of the ri, i.e., r1 + r2 + · · · + rm.
The general reaction (2.1) can express any biochemical reaction in a cell. However, in order to clearly represent concrete biochemical processes, we next describe typical reactions in cellular systems one by one. First, we consider the dimerization of a protein P with reaction rate kd:

    2P →[kd] P2,    (2.2)

where 2P means P + P. (2.2) is a reaction for a protein–protein interaction, which results in a homodimer P2 composed of two protein monomers P. The reaction has one reactant P and one product P2 with stoichiometries of 2 and 1, respectively. Stoichiometries of 1 are not usually written explicitly. Similarly, the reaction for the dissociation of the dimer P2 with reaction rate k−d is written as follows:

    P2 →[k−d] 2P.    (2.3)

A reaction that can happen in both directions is termed reversible. Reversible reactions are quite common in biological systems and can be written briefly as one reaction equation. For instance, (2.2)–(2.3) can be represented as

    2P ⇌[kd, k−d] P2    (2.4)


or sometimes as

    2P ⇌[Keq] P2.    (2.5)

Here, Keq = kd/k−d is called the equilibrium constant. It is important to remember that the notation for a reversible reaction is simply a convenient way to represent two separate reactions. When modeling, we still need to decompose a reversible reaction, or more complex reactions, into a sequence of elementary reactions in order to analyze and simulate their dynamics. Before introducing different modeling approaches to systems of coupled biochemical reactions in the next section, we first describe in detail some basic biochemical processes and show how their essential features can be captured with fairly simple systems of coupled biochemical reactions. Generally, a complex molecular network intertwines various processes, including gene regulation, protein interactions, metabolism, and signal transduction. Reactions within each process are relatively strongly correlated, while reactions between these processes are relatively independent; therefore, we only need to couple the reactions involved in the system or process of interest when modeling it. Transcription is a key cellular process for gene regulation, and control of transcription is a fundamental regulation mechanism in a living organism. The crucial stage of the transcription process is the binding of an RNAP to the promoter of a gene, which is regulated by various TFs, to initiate transcription and produce mRNA. An RNAP is an enzyme that copies the genetic sequence of a gene and synthesizes the mRNA by attaching to the DNA strand. A TF, or sequence-specific DNA binding factor, is a protein that binds to specific DNA sequences and thereby controls the transcription of genetic information from DNA to RNA by promoting or blocking the recruitment of RNAP to specific genes as an activator or a repressor.
Note that a particular feature of TFs is that they contain one or more DNA binding domains, which attach to specific DNA sequences adjacent to the genes that they regulate. Other proteins or chemicals without DNA binding domains also play crucial roles in gene regulation, e.g., by binding to a TF to form a transcriptional complex, and are called cofactors. In this book, we consider a TF as one protein or one complex unless otherwise specified. Consider the case of one TF P, which can be a monomer or a complex, and one free DNA binding site D in the promoter region of a gene, which produces mRNA. The transcription process as a system of coupled biochemical reactions can be represented as follows:


Figure 2.1 Transcription regulation of two TFs (P and Q) and two binding sites on the promoter of a gene. There are eight cases for the transcription regulation: (a) no binding, (b) RNAP binding, (c) P bound to the site, (d) Q bound to the site, (e) Q bound to the site with RNAP binding, (f) P bound to the site with RNAP binding, (g) P and Q bound to the sites with RNAP binding, and (h) P and Q bound to the sites, among which RNAP binds to the promoter to produce mRNA for four cases, i.e., (b), (e), (f), and (g)
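The case count in Figure 2.1 can be checked mechanically. A minimal sketch (in Python; the state encoding is our own, for illustration only):

```python
from itertools import product

# Each state: is P bound, is Q bound, is RNAP bound? 2*2*2 = 8 cases (a)-(h).
states = [{"P": p, "Q": q, "RNAP": r}
          for p, q, r in product([False, True], repeat=3)]

# mRNA is produced exactly in the states where RNAP is bound to the promoter.
transcribing = [s for s in states if s["RNAP"]]

print(len(states))        # 8
print(len(transcribing))  # 4, matching cases (b), (e), (f), (g)
```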

    P + D ⇌[ka, k−a] P·D,    (2.6)
    D →[kd] mRNA + D,    (2.7)
    P·D →[kdc] mRNA + P·D,    (2.8)

where ka, k−a, kd, and kdc are rate constants for the respective reactions, and P·D is a protein–DNA complex denoting TF P bound to the binding site D in the promoter. Note that we do not explicitly include RNAP in the model, to simplify the expression, although RNAP is necessary to initiate the transcription process. Transcriptional reactions (2.7) and (2.8) indicate


that the transcription rates for producing mRNA, kd without the TF bound and kdc with the TF bound, are different. If the TF is an activator that enhances transcription (i.e., the TF recruits RNAP to the promoter to initiate transcription), kdc will clearly be larger than kd. On the other hand, if the TF is a repressor that inhibits transcription (i.e., the TF prevents RNAP from binding to the promoter and initiating transcription), kdc will be smaller than kd. Unlike the single binding reaction (2.6), transcription reactions (2.7) and (2.8) are actually chains of reactions, due to the synthesis of an RNA sequence from a number of nucleotides, and are generally slow. Sometimes, multiple TFs can bind to different binding sites to regulate the expression of the same gene. Such a case can be modeled in a similar way. For example, consider the case where two TFs, P and Q, can bind to two respective binding sites on the DNA, as shown in Figure 2.1. Using similar notation P·D to denote P bound to the binding site D, and Q·P·D to denote Q bound to P·D or P bound to Q·D, such processes can be formulated by a set of reactions as follows:

    P + D ⇌[k1, k−1] P·D,    (2.9)
    Q + D ⇌[k2, k−2] Q·D,    (2.10)
    P + Q·D ⇌[k3, k−3] P·Q·D,    (2.11)
    Q + P·D ⇌[k4, k−4] P·Q·D,    (2.12)
    D →[k5] mRNA + D,    (2.13)
    P·D →[k6] mRNA + P·D,    (2.14)
    Q·D →[k7] mRNA + Q·D,    (2.15)
    P·Q·D →[k8] mRNA + P·Q·D.    (2.16)

Clearly, the transcription reactions (2.13), (2.14), (2.15), and (2.16) correspond to cases (b), (f), (e), and (g) of Figure 2.1, respectively. Transcription of a gene occurs when RNAP is bound to the promoter of DNA regulated by the TF P (or multiple TFs). Thus, each DNA binding site is either free, D, or bound, P·D, resulting in a conservation equation

    D + P·D = nx    (2.17)

for the system (2.7)–(2.8), or

    D + P·D + Q·D + P·Q·D = nx    (2.18)

for the system (2.13)–(2.16), where nx is the total number of binding sites of the promoter. Note that we also use the same symbols D, P, Q, Q·D, P·D, and


P·Q·D in (2.17)–(2.18) to represent the concentrations of the respective molecules. Similarly to the transcription process, translation is another complicated process, which can involve several hundred reactions to produce a single protein from a single mRNA. The key stages involved in the translation process are the binding of a ribosome to the mRNA, the translation of the mRNA into a polypeptide chain, and then into a functional protein. For instance, the production of a protein and a complex of multiple proteins can be formulated as a set of reactions as follows:

    mRNA →[kp] P + mRNA,    (2.19)
    P + P ⇌[kb, k−b] P2,    (2.20)
    P2 + P2 ⇌[kc, k−c] P4,    (2.21)

where P, P2, and P4 are a protein monomer, a dimer, and a tetramer, respectively. Similarly to the transcription reactions, the translation process (2.19) is also a chain of reactions. Note that we do not model every aspect of the translation and protein-complex production processes, but express only the prominent stages of translation and the features of interest. In particular, we do not consider the binding of ribosomes to the mRNA or the folding of the polypeptide chain into a functional protein. Degradation of mRNA and protein can be modeled in a similar way:

    mRNA →[dm] 0,    (2.22)
    P →[dp] 0,    (2.23)

which mean that the mRNA or the protein is transformed or degraded into nothing, where dm and dp are the degradation rates. In fact, mRNA and protein degradation is a rather complex process. Note that the transcription processes (2.7)–(2.8) and (2.13)–(2.16), the translation process (2.19), and the degradation processes (2.22)–(2.23) are all chains of reactions, which are generally much slower than the binding and unbinding processes (2.9)–(2.12) and (2.20)–(2.21). The transportation process is also important, especially in eukaryotes. Owing to the existence of a nucleus, mRNA must be transported out of the cell nucleus before the translation process occurs. Conversely, proteins (e.g., TFs) must be transported into the nucleus from the cytoplasm to regulate gene expression. Transduction has also been shown to be important, especially in higher eukaryotes and multicellular organisms, because individual cells need to communicate with each other using chemical signaling molecules, which can dissolve in the cytosol and diffuse between individual cells and their extracellular medium. By appropriately modeling this process, it may become possible to elucidate the fundamental mechanisms underlying many biological phenomena, e.g., collective behavior through intercellular signaling and


signaling pathways. A model for this process can be as simple as

    I →[k] A,    (2.24)

where I denotes an mRNA or a protein within the nucleus, and A corresponds to the mRNA or the protein in the cytoplasm, or vice versa, for the transportation process. On the other hand, for the transduction process, I and A denote intracellular and extracellular signaling molecules, respectively. Such a reaction can also be used to model the transition between different states of a protein, such as its inactive and active states. Metabolism is a general term for catabolic and anabolic reactions. It is a highly organized process and often involves thousands of reactions that are catalyzed by enzymes. Phosphorylation in signaling cascades is also an enzyme kinetic reaction, with the kinase facilitating the phosphorylation of a substrate. Consider the simple enzyme-catalyzed biochemical reactions

    E + S ⇌[k1, k−1] ES →[k2] E + P.    (2.25)

The reactions comprise a reversible formation of an enzyme–substrate complex ES from the free enzyme E and the substrate S, and an irreversible release of the product P. Generally, the duration of the state ES is very short. In these reactions, the enzyme is neither produced nor consumed; therefore, its total concentration remains constant, whether it is free or involved in the complex. Note that in this book, where no confusion arises, we use both XY and X·Y to express the binding or complex of X and Y. Besides the reactions mentioned above, there are many other reactions involved in cellular systems that we have omitted, such as post-transcriptional regulation, post-translational modification, and microRNA regulation. Although involved in different processes, many reactions can be represented in similar forms. For example, the reaction form for the formation of a heterodimer is the same as that for the formation of an enzyme–substrate complex in an enzyme-catalyzed reaction. Moreover, small RNA-mediated post-transcriptional regulation can be described similarly to the formation of a heterodimer, because the small RNA (sRNA) itself is consumed via the interaction between sRNAs and their mRNA targets. Although there are many kinds of biochemical processes and reactions, after they are decomposed into elementary reactions, most of them can be classified into monomolecular (first-order) reactions, bimolecular (second-order) reactions, or trimolecular (third-order) reactions according to the number of molecules involved. Monomolecular reactions have the form of degradation (2.22)–(2.23) or transportation and transduction (2.24). When the reactant concentration in a monomolecular reaction is considered to be constant, the first-order reaction becomes a zero-order one. Bimolecular reactions are probably the most common reactions in cellular systems, e.g., (2.9)–(2.12) and (2.20)–(2.21). There are two different


ways by which two reactants combine to form one product. One is the binding of two identical molecules, such as the formation of a homodimer; the other is the binding of two different molecules to form a heterodimer, e.g., the formation of an enzyme–substrate complex, the binding of a TF to a promoter, and the binding of a small RNA to its mRNA target. Trimolecular or higher-order reactions are rare, because the probability of the simultaneous collision of three or more molecules is very small. A more general reaction can be represented as (2.1), which is a reaction of order r1 + r2 + · · · + rm.
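This bookkeeping is easily mechanized. As a minimal sketch (in Python; the numerical rate values are arbitrary illustrations, not from the text), a reaction of the form (2.1) can be stored as reactant and product stoichiometries together with a rate constant, from which the reaction order and the equilibrium constant of a reversible pair follow directly:

```python
from dataclasses import dataclass

@dataclass
class Reaction:
    reactants: dict   # species -> stoichiometry ri
    products: dict    # species -> stoichiometry pj
    rate: float       # rate constant k

    def order(self) -> int:
        # total reaction order: r1 + r2 + ... + rm
        return sum(self.reactants.values())

# The reversible dimerization (2.2)-(2.3) as two elementary reactions.
kd, k_minus_d = 1.0, 0.25
dimerization = Reaction({"P": 2}, {"P2": 1}, kd)
dissociation = Reaction({"P2": 1}, {"P": 2}, k_minus_d)

print(dimerization.order())                   # 2: bimolecular (second order)
print(dissociation.order())                   # 1: monomolecular (first order)
print(dimerization.rate / dissociation.rate)  # equilibrium constant Keq = kd/k-d
```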

2.2 Molecular Networks

Although biochemical reactions are fundamental to understanding a cellular system at the molecular level, owing to the detailed information that these reactions provide, how these reactions form various molecular networks to facilitate specific cellular functions is unclear. Hence, it is important to elucidate not only the function of each individual reaction but also that of the associated molecular network as a whole. Note that we study biological networks specifically at the molecular level; therefore, a biological network simply means a biomolecular network or a molecular network in this book. Actually, a living organism can be viewed as a huge biochemical reaction network, which is nonlinear with both intrinsic and extrinsic stochastic fluctuations. A molecular network is assumed to organize a list of biochemical reactions in an accurate, complete, and comprehensive manner. Modeling and analyzing molecular networks may lead not only to the elucidation of the essential mechanisms by which biochemical reactions generate particular cellular functions but also to the revelation of the regulatory roles of individual reactions in network functions. Many types of molecular networks, such as gene regulatory networks, transcription regulatory networks, protein interaction networks, metabolic networks, and signal transduction networks, exist in a cell (Cao and Liang 2008, Chen et al. 2009, Jeong et al. 2000, Tyson et al. 2001). However, few such networks are known in their complete structures, even in the simplest bacteria. Still less is known about how the networks interact at different levels in a cell, and about how to predict the complete state of a eukaryotic cell or a bacterial organism at a given point in the future. In this sense, the study of molecular networks from a systems biology perspective is still in its infancy.
Next, we show how a molecular network is represented by a graph, a stochastic system, a deterministic system or a hybrid system, depending on the requirements to model and analyze a specific cellular system.

2.3 Graphical Representation

One way to understand biochemical reactions is via graphical representation, which can provide more information and is easier to analyze than the reaction list, especially for cases with a large number of reactions. For example, global properties of molecular networks, such as degree distribution and betweenness centrality, can be obtained more easily from graphical representations than from the reaction list. Moreover, many theoretical results, e.g., from graph theory and complex network theory, can be adopted to investigate these networks in order to obtain qualitative or quantitative predictions. In this book, we use three graphical representations to describe biochemical reactions or molecular interactions, depending on the model: interaction graphs, incidence graphs, and species-reaction (SR) graphs. Detailed descriptions of the three representations are provided in Chapter 6.

2.3.1 Example of Interaction Graphs

First, consider the simple enzyme kinetic reaction (2.25) as an example of the interaction graph. The graphical representation is shown in Figure 2.2. Such a diagram represents the relationships or interactions between the components in a biomolecular system and is known as an interaction graph. As described in detail in Chapters 3 and 6, in an interaction graph, each node represents the concentration or the number of a chemical. Each edge or connection can be a linear or nonlinear function of the connected node. A positive value of the edge, e.g., A → B, implies that chemical A enhances the synthesis of chemical B, while a negative value, represented by ⊣ in contrast to →, implies that chemical A represses the synthesis of chemical B. Consider another example of a gene regulatory system with two genes in a eukaryotic cell. The biochemical reactions are shown below, where Dx and Dy denote the promoter regions of genes x and y, respectively. The multimerization (2.26), (2.30), (2.31); transportation (from cytoplasm to nucleus) (2.27), (2.32); and binding reactions of TFs to promoter regions (2.28), (2.29), (2.33) are described as follows:

    Px + Px ⇌[k1, k−1] P2x,    (2.26)
    P2x ⇌[k2, k−2] P2x',    (2.27)
    P2x' + Dy ⇌[k3, k−3] P2x'·Dy,    (2.28)
    P2x' + P2x'·Dy ⇌[k4, k−4] P2x'P2x'·Dy,    (2.29)

    Py + Py ⇌[k5, k−5] P2y,    (2.30)
    P2y + P2y ⇌[k6, k−6] P4y,    (2.31)
    P4y ⇌[k7, k−7] P4y',    (2.32)
    P4y' + Dx ⇌[k8, k−8] P4y'·Dx,    (2.33)

where P2x, P4y and P2x', P4y' denote protein dimers and tetramers in the cytoplasm and the nucleus, respectively. P2x'·Dy, P4y'·Dx, and P2x'P2x'·Dy are all complexes. All these reactions are generally fast and occur within less than a few seconds.
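For later use in both the stochastic and deterministic representations, each reaction contributes a net state change given by its product minus reactant stoichiometries. A minimal sketch (in Python; the species names and the restriction to forward directions are our own simplification):

```python
def change_vector(reactants, products, species):
    # nu[s] = copies of s produced minus copies of s consumed in one firing
    return {s: products.get(s, 0) - reactants.get(s, 0) for s in species}

species = ["Px", "P2x", "P2x_nuc"]           # P2x_nuc stands in for P2x'
dimerization = ({"Px": 2}, {"P2x": 1})       # forward direction of (2.26)
transport    = ({"P2x": 1}, {"P2x_nuc": 1})  # forward direction of (2.27)

print(change_vector(*dimerization, species))  # {'Px': -2, 'P2x': 1, 'P2x_nuc': 0}
print(change_vector(*transport, species))     # {'Px': 0, 'P2x': -1, 'P2x_nuc': 1}
```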


Figure 2.2 Graphical representation of the enzyme kinetic reaction via an interaction graph. All interactions or edges are positive

The reactions for the transcription of mRNAs (mx, my), the translation of proteins (Px, Py), and the degradation of proteins and mRNAs are represented as follows:

    Dx →[kmx0] mx + Dx,    (2.34)
    P4y'·Dx →[kmx1] mx + P4y'·Dx,    (2.35)
    mx →[kPx] Px + mx,    (2.36)
    Dy →[kmy0] my + Dy,    (2.37)
    P2x'·Dy →[kmy1] my + P2x'·Dy,    (2.38)
    P2x'P2x'·Dy →[kmy2] my + P2x'P2x'·Dy,    (2.39)
    my →[kPy] Py + my,    (2.40)
    mx →[dmx] 0,    (2.41)
    Px →[dPx] 0,    (2.42)
    my →[dmy] 0,    (2.43)
    Py →[dPy] 0.    (2.44)


Figure 2.3 Schematic illustration of the graphical representation of a gene network by an interaction graph. '⊣' means a negative regulation, i.e., repression, and '→' implies a positive regulation, i.e., activation

Transcription and translation reactions are chains of reactions, which are considerably slower than the other reactions and generally require minutes or more to complete. The schematic representation of this gene network or gene regulatory network as an interaction graph is shown in Figure 2.3, where '⊣' means a negative regulation, i.e., repression, and '→' implies a positive regulation, i.e., activation. In this book, when there is no specific note on an edge of a network or a graph, a positive regulation (or interaction) and a negative regulation (or interaction) are represented by '→' and '⊣', respectively. However, when the edge of a regulation is specifically indicated by '+' or '−', the regulation between the two chemicals is positive or negative accordingly. The regulatory interactions between all the components are explicitly expressed in the associated diagram. When there are multiple feedback loops in a molecular network, it becomes difficult to gain direct insight into the regulatory relations from the reactions alone. Even for this simple network, the interactions from P2y to P2x are not straightforward. However, the repression relation from P2y to P2x can easily be obtained from the graphical representation, as schematically illustrated in Figure 2.3.
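The sign of such an indirect regulation can be read off an interaction graph by multiplying the signs of the edges along a path. A minimal sketch (in Python; the edge list is a simplification of Figure 2.3 that collapses the complex P4y'·Dx into a single repressive edge from P4y' to mx):

```python
# Signed edges along one path of (a simplified) Figure 2.3.
edges = {
    ("P2y", "P4y"):  +1,   # tetramerization
    ("P4y", "P4y'"): +1,   # transport into the nucleus
    ("P4y'", "mx"):  -1,   # repression of transcription of gene x
    ("mx", "Px"):    +1,   # translation
    ("Px", "P2x"):   +1,   # dimerization
}

def path_sign(path):
    # net sign of an indirect regulation = product of edge signs on the path
    sign = 1
    for a, b in zip(path, path[1:]):
        sign *= edges[(a, b)]
    return sign

print(path_sign(["P2y", "P4y", "P4y'", "mx", "Px", "P2x"]))  # -1: repression
```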


2.3.2 Example of Incidence Graphs

To analyze biomolecular networks with control inputs, it is convenient to use incidence graphs, which are the same as interaction graphs except for two additional input and output nodes. Hence, compared with an interaction graph with n nodes, an incidence graph has n + 2 nodes. All definitions in an incidence graph are the same as those of the corresponding interaction graph, and when the input and output nodes are viewed in the same way as the other nodes, it becomes an interaction graph. A detailed description of the incidence graph is provided in Chapter 6. For the example of (2.25), taking the substrate S and the product P as the input and output of the system, respectively, the incidence graph can be simply represented by Figure 2.4, which has two additional nodes (input and output nodes) compared with the interaction graph of Figure 2.2. The incidence graph is used to study the relation between input and output.


Figure 2.4 Graphical representation of the enzyme kinetic reaction by an incidence graph. All the definitions are the same as those of Figure 2.2, except input and output nodes. I and O represent the input and output nodes, respectively
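The construction of an incidence graph from an interaction graph can be sketched as follows (in Python; the interior edge list is an assumed reading of Figure 2.2, with the substrate S fed by the input I and the product P feeding the output O, as in Figure 2.4):

```python
# Interior edges: an assumed reading of the interaction graph of Figure 2.2
# (all positive): E and S feed the complex ES, which yields P and releases E.
interaction_nodes = ["E", "S", "ES", "P"]
interaction_edges = [("E", "ES"), ("S", "ES"), ("ES", "P"), ("ES", "E")]

# Incidence graph: the same graph plus input node I (driving S) and output
# node O (read from P), giving n + 2 nodes in total.
incidence_nodes = interaction_nodes + ["I", "O"]
incidence_edges = interaction_edges + [("I", "S"), ("P", "O")]

print(len(incidence_nodes))  # 6 = n + 2 with n = 4
```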

2.3.3 Example of Species-reaction Graphs

In addition to interaction graphs and incidence graphs, which are used for the analysis of gene regulatory networks, SR graphs are mainly used for metabolic networks. An SR graph is composed of chemicals and reactions and provides a general technique for studying the relationship between reaction network structure and the capacity for multistability. Unlike interaction graphs and incidence graphs, which are directed graphs, an SR graph is an undirected graph. A detailed description of incidence graphs and SR graphs is provided in Chapter 6. For the example of the enzyme reaction (2.25), the SR graph is illustrated in Figure 2.5, where each circle represents a chemical (or species) and each box stands for a reaction. Each edge or arc is labeled with the name of the


complex in which the species appears. An SR graph is mainly used to study the multistability in a molecular network.
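A minimal sketch of this structure (in Python; the node names and edge labels are our own encoding of Figure 2.5):

```python
from collections import defaultdict

# Species nodes and reaction nodes of the SR graph for (2.25); each undirected
# edge carries the name of the complex in which the species appears.
edges = [
    ("E",  "E+S <-> ES", "E+S"),
    ("S",  "E+S <-> ES", "E+S"),
    ("ES", "E+S <-> ES", "ES"),
    ("ES", "ES -> E+P",  "ES"),
    ("E",  "ES -> E+P",  "E+P"),
    ("P",  "ES -> E+P",  "E+P"),
]

adj = defaultdict(set)   # undirected adjacency: species <-> reaction nodes
for species, reaction, _label in edges:
    adj[species].add(reaction)
    adj[reaction].add(species)

print(sorted(adj["ES"]))  # ES touches both reaction nodes
```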


Figure 2.5 Graphical representation of an enzyme kinetic reaction by an SR graph

2.4 Biochemical Kinetics

Molecular networks can be modeled and analyzed as stochastic or deterministic systems. Different approaches may highlight different aspects of the same list of reactions. Moreover, even for a given model, many different aspects can be revealed by theoretical and numerical analysis of the network. In dynamical modeling, the reaction rates, which govern the change of the participating species over time, need to be defined, e.g., in terms of the number or the concentration of each molecule. In order to obtain the temporal behavior of the species, the functional relation of the change in number or concentration needs to be specified, depending on the approach chosen. Specifically, for each reaction, we identify its rate constant, which depends on temperature and specifies the time scale of the reaction, and an associated rate law, which specifies the amount of state change, or the probability that the reaction occurs, in a small time interval. For example, consider a simple reaction of the form

    A →[k] B.    (2.45)

This reaction means that A is transformed to B at a rate of k. In the deterministic formulation, the reaction specifies the state changes in which the


concentration of A decreases concomitantly with the increase in the concentration of B. The amount of state change in a small time interval dt is given by k[A]dt, where [A] is the concentration of A. On the other hand, in the stochastic formulation, the reaction specifies the state changes in which the total number of A decreases by one and the total number of B increases by one. The probability of this reaction occurring in dt is given by k{#A}dt, where {#A} is the number of A molecules present. Note that in this book, [A] represents the concentration of species A, but sometimes A also stands for the concentration of species A to simplify the notation. For the same reason, we also occasionally drop # and represent the number of molecules of species A directly by A. When modeling biochemical kinetics, both the stochastic and deterministic approaches follow the law of mass action, which states that the reaction rate is proportional to the probability of a collision of the reactants. This probability is in turn proportional to the concentrations of the reactants raised to the power of their molecularity, i.e., the number in which they enter the specific reaction. There are some minor differences between stochastic and deterministic rate constants, and in order to conduct a stochastic simulation, the deterministic rate constants must be converted in an appropriate way to stochastic rate constants. Next, we provide a general mathematical framework of both stochastic and deterministic approaches for modeling molecular networks. In particular, we show how the master equations, Langevin (stochastic differential) equations, cumulant equations, and then deterministic differential equations can be obtained to model biochemical kinetics, and we further provide a brief comparison among them.
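The two rate laws for (2.45), together with the usual conversion of a deterministic bimolecular rate constant k (in 1/(M·s)) to a stochastic one, c = k/(NA·V), can be sketched as follows (in Python; the numerical values are illustrative):

```python
N_A = 6.02214076e23  # Avogadro's number, 1/mol

def deterministic_change(k, conc_A, dt):
    # deterministic rate law for A -> B: decrease of [A] over a small dt
    return k * conc_A * dt

def firing_probability(k, num_A, dt):
    # stochastic rate law: probability that A -> B fires once in a small dt
    return k * num_A * dt

def bimolecular_stochastic_rate(k, volume):
    # usual conversion for a bimolecular reaction A + B: c = k / (N_A * V)
    return k / (N_A * volume)

print(deterministic_change(0.5, 2.0, 0.01))  # 0.01 (concentration units)
print(firing_probability(0.5, 50, 0.01))     # 0.25
```

For a first-order reaction such as (2.45), the deterministic and stochastic rate constants coincide; only the unit of the state (concentration versus molecule count) differs.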

2.5 Stochastic Representation

A cellular system is well regulated at the molecular level, but it is inherently noisy: transcriptional control, alternative splicing, translation, diffusion, and chemical modification reactions all involve stochastic fluctuations. Such stochastic noise may not only affect the dynamics of biological systems but may also be exploited by living organisms to actively facilitate certain functions. From an evolutionary viewpoint, noise is also assumed to be used to control cellular and population variability. Because many species are present at very low copy numbers in living cells, the origins of stochasticity can be traced to random transitions among discrete chemical states, which implies that a model of a molecular network should be able to represent this discrete nature of small numbers both qualitatively and quantitatively. To capture the discrete and stochastic nature of biochemical kinetics at low concentrations or small numbers of molecules, a stochastic modeling formulation can be adopted. In stochastic modeling, one can estimate the time of each reaction from statistical properties. For example, stochastic simulation provides the time of a reaction from


a probability distribution. The result of a simulation run is one possible realization of the temporal evolution of the system. The stochastic framework considers the exact numbers of molecules present, which are discrete quantities; such a strategy identifies how many molecules of each component are present in the system. It grasps the essence of the stochastic collisions of biochemical components: the components change discretely, but which change occurs, and when it occurs, are probabilistic. Indeed, a cellular system at the molecular level can be considered to be governed by stochastic processes with many random events, or by a continuous-time, discrete-space Markov chain. Consider, for example, a DNA binding protein P, whose binding to and unbinding from DNA follow the elementary reactions

P + DNA →(k+) P·DNA,    (2.46)
P·DNA →(k−) P + DNA.    (2.47)

This list of reactions involves chemicals of three types: P, DNA, and P·DNA, a complex of P and DNA. Multiple reactions are possible in (2.46)–(2.47), and the state changes discretely when any one of them occurs. Assume that the system state at time t is ({#P}, {#DNA}, {#P·DNA}). After a short time dt, the state will change to ({#P − 1}, {#DNA − 1}, {#P·DNA + 1}) with probability k+{#P}{#DNA}dt, to ({#P + 1}, {#DNA + 1}, {#P·DNA − 1}) with probability k−{#P·DNA}dt, or stay in the same state with probability 1 − k+{#P}{#DNA}dt − k−{#P·DNA}dt, where {#A} indicates the number of molecules of A (Gibson and Mjolsness 2001).

In stochastic modeling, one can also deal with probabilities rather than numbers of molecules. In other words, the state variable is the probability distribution over all configurations, where a configuration is a list of numbers of molecules. The dynamics of the probabilities obey a master equation, a linear differential equation which describes the temporal evolution of the probability distribution (Van Kampen 1992) for the discrete transitions of molecules.

2.5.1 Master Equations for a General Molecular Network

The master equation description accounts for the probabilistic nature of cellular biochemical processes and can be viewed as a continuous-time discrete-space Markov model. It describes the time evolution of the probability of having a certain number of molecules, and its result is usually taken as a gold standard for numerical simulation in computational biology, owing to its detailed representation and also to the lack of experimental data. In the master equation, reaction rates are transformed into probability transition rates. Suppose that the number of molecules of reactant X can be any non-negative integer {#X} = n. Let Pn(t) denote the probability that there are


n molecules of X at time t. Then, we want to know the temporal evolution of this probability. In other words, we want to express Pn(t + dt) in terms of Pm(t) for all m, i.e., to express the probability of having n molecules at time t + dt in terms of the probabilities of all possible values m at time t. The occurrence of the event {#X(t + dt)} = n can be thought of as the occurrence of the mutually exclusive events ({#X(t + dt)} = n, {#X(t)} = m) for all possible m. Taking the conditional probabilities into account, we have

P({#X(t + dt)} = n) = Σ_m P({#X(t + dt)} = n, {#X(t)} = m)
                    = Σ_m P_{m,n} P({#X(t)} = m),    (2.48)

where P_{m,n} = P({#X(t + dt)} = n | {#X(t)} = m) is the transition probability of changing from m to n molecules in the time interval dt, for m, n = 0, 1, 2, .... The summation in (2.48) is over all possible m. The stochastic representation (2.48) obeys a Markov process because the transition probability neither explicitly depends on the time at which the transition occurs, nor on the path along which the change occurred. Taking the limit as dt → 0 in the difference P({#X(t + dt)} = n) − P({#X(t)} = n) leads to a differential equation in the probabilities, i.e., the master equation.

An Illustrative Example of the Master Equation

In order to explain the master equation clearly, consider the reactions (2.46)–(2.47). Let X = (X1, X2, X3) = ({#P}, {#DNA}, {#P·DNA}) be the numbers of the three species, and define P(X; t) to be the probability of the state X at time t. The probability of being in state X = (X1, X2, X3) at time t + dt is the sum of terms describing all possible previous states multiplied by their respective transition probabilities:

P(X; t + dt) = P_{(X1+1, X2+1, X3−1),(X1, X2, X3)} P(X1+1, X2+1, X3−1; t)
             + P_{(X1−1, X2−1, X3+1),(X1, X2, X3)} P(X1−1, X2−1, X3+1; t)
             − P_{(X1, X2, X3),(X1−1, X2−1, X3+1)} P(X1, X2, X3; t)
             − P_{(X1, X2, X3),(X1+1, X2+1, X3−1)} P(X1, X2, X3; t)
             + P(X1, X2, X3; t).    (2.49)

Each transition probability is assumed to be proportional to the numbers of reactant molecules and to the time interval dt:


P_{(X1+1, X2+1, X3−1),(X1, X2, X3)} = k+ (X1+1)(X2+1) dt,    (2.50)
P_{(X1−1, X2−1, X3+1),(X1, X2, X3)} = k− (X3+1) dt,    (2.51)
P_{(X1, X2, X3),(X1−1, X2−1, X3+1)} = k+ X1 X2 dt,    (2.52)
P_{(X1, X2, X3),(X1+1, X2+1, X3−1)} = k− X3 dt.    (2.53)

Substituting (2.50)–(2.53) into (2.49), moving P(X1, X2, X3; t) to the left-hand side, dividing by dt, and then taking the limit dt → 0, we obtain a differential equation for the probability:

dP(X; t)/dt = lim_{dt→0} [P(X1, X2, X3; t + dt) − P(X1, X2, X3; t)]/dt
            = k+ (X1+1)(X2+1) P(X1+1, X2+1, X3−1; t)
            + k− (X3+1) P(X1−1, X2−1, X3+1; t)
            − (k+ X1 X2 + k− X3) P(X1, X2, X3; t).    (2.54)

An equation of the type (2.54) is called a master equation; it describes a continuous-time discrete-space Markov chain, because the probability of the next transition depends only on the current state and not on the history of states. The first two terms are the gain of state X = (X1, X2, X3) due to transitions from other states, and the last term, (k+ X1 X2 + k− X3) P(X1, X2, X3; t), is the loss due to transitions from X = (X1, X2, X3) to other states.

A General Molecular Network

Consider a general molecular network with N molecular species {S1, ..., SN} that react through M reaction channels {R1, ..., RM}. Let X = (X1, ..., XN) be the state of the molecules at time t, i.e., Xi is the number of molecules of the ith species at time t, and define P(X; t) to be the probability of the state X at time t. (Here and below we occasionally drop t to simplify expressions, e.g., X = X(t), where no confusion arises.) The temporal evolution of the system is governed by a set of transitions X → X′ and X′ → X. The probability P(X; t) that the system is in state X at time t evolves according to the master equation (Van Kampen 1992)

∂P(X; t)/∂t = Σ_{X′} [ P_{X′X} P(X′; t) − P_{XX′} P(X; t) ],    (2.55)

where P_{X′X} is the transition probability from state X′ to state X, and P_{XX′} is the transition probability from state X to state X′. The master equation is nothing but a gain–loss equation for the probability distribution P(X; t), i.e., a stochastic birth-and-death process. The summation in (2.55) runs over all possible states X′. The first term is the gain of state X due to transitions from other states X′ to X, and the second term is the loss due to transitions from X to other states.


Define aj(X)dt to be the probability, given X, that one Rj reaction occurs in the next infinitesimal time interval [t, t + dt), and νji the change in the number of Si molecules produced by one Rj reaction (j = 1, ..., M and i = 1, ..., N). The master equation (2.55) for a general molecular network can then be rewritten as

∂P(X; t)/∂t = Σ_{j=1}^{M} [ aj(X − νj) P(X − νj; t) − aj(X) P(X; t) ],    (2.56)

where νj = (νj1, ..., νjN). Note that the state X is discrete, but the probability distribution P is continuous. Generally, the master equation is not analytically or numerically solvable except in the simplest cases. Therefore, one has to resort to Monte Carlo simulations that produce a random walk through the possible states of the system: instead of calculating the probability distribution P(X; t), one simulates the time evolution of a particular trajectory starting at an initial state X0. Various methods have been developed, such as the Gillespie stochastic simulation algorithm (SSA) (Gillespie 1976, Gillespie 1977). More computationally efficient algorithms have since been developed, such as the Gibson–Bruck algorithm (Gibson and Bruck 2000), approximate simulation strategies (Gillespie 2001, Gillespie and Petzold 2003), hybrid simulation strategies (Kiehl et al. 2004, Rao and Arkin 2003, Puchalka and Kierzek 2004), and parallel simulation strategies (Chen et al. 2005). However, making stochastic simulation algorithms more efficient and applying them to larger models are still subjects of active research. For example, a perfect sampling algorithm was proposed to recast Gillespie's SSA in the light of Markov chain Monte Carlo methods and combine it with dominated coupling-from-the-past in order to guarantee sampling from the stationary distribution (Hemberg and Barahona 2007).
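When the state space is small enough to enumerate, the master equation (2.56) is simply a large linear ODE system and can be integrated directly rather than simulated. A minimal sketch for a birth–death process, 0 →(k) X and X →(d) 0, on a truncated state space (the truncation bound N and the parameter values are illustrative choices, not from the text); the stationary distribution of this process is Poisson with mean k/d, which the integration approaches:

```python
# Direct Euler integration of the master equation for a birth-death process:
#   dP_n/dt = k P_{n-1} - k P_n + d (n+1) P_{n+1} - d n P_n
# on the truncated state space n = 0, ..., N (births out of state N are dropped,
# which keeps total probability exactly conserved on the truncated chain).

k, d = 10.0, 1.0       # birth and death rate constants (illustrative)
N = 60                 # truncation bound; P_n for n > N is neglected
dt, T = 2e-3, 15.0     # Euler step and final time

P = [0.0] * (N + 1)
P[0] = 1.0             # start with zero molecules with certainty

for _ in range(int(T / dt)):
    dP = [0.0] * (N + 1)
    for n in range(N + 1):
        gain = (k * P[n - 1] if n > 0 else 0.0) \
             + (d * (n + 1) * P[n + 1] if n < N else 0.0)
        loss = (k * P[n] if n < N else 0.0) + d * n * P[n]
        dP[n] = gain - loss
    P = [p + dt * dp for p, dp in zip(P, dP)]

mean = sum(n * P[n] for n in range(N + 1))
print(mean)  # relaxes toward the stationary mean k/d
```

This brute-force approach is exactly what becomes infeasible for larger networks, since the number of states grows combinatorially with the number of species.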
Equilibrium Probability Distribution

In the stochastic framework, it is difficult to define an equilibrium configuration in terms of the numbers of molecules, as one does in a deterministic framework, since any reaction that changes the numbers of molecules takes place with some probability. In deterministic differential equations, an equilibrium means that there is no net change in the numbers or concentrations of molecules. This concept of equilibrium suffices for differential equation models, but not for stochastic models. For stochastic models there is instead an equilibrium probability distribution, i.e., a set of probabilities that the system has certain numbers of molecules, even though the numbers themselves may continue to change. Generally, it is difficult, if not impossible, to obtain the equilibrium probability distribution of a master equation analytically, although some statistical quantities such as means and variances can be obtained from it. However, for some


simple cases, it can be obtained by recursion relations. For example, consider the set of coupled reactions called the 'Lotka–Volterra' system,

X →(k1) 2X,    (2.57)
X + Y →(β) 2Y,    (2.58)
Y →(k2) 0.    (2.59)

The time evolution of the joint probability P(X, Y; t) obeys the following master equation:

∂P(X, Y; t)/∂t = k1 (X−1) P(X−1, Y; t) − k1 X P(X, Y; t)
              + k2 (Y+1) P(X, Y+1; t) − k2 Y P(X, Y; t)
              + β (X+1)(Y−1) P(X+1, Y−1; t) − β X Y P(X, Y; t).    (2.60)

To obtain its equilibrium distribution, we define

A(X; t) = Σ_{Y=0}^{∞} P(X, Y; t)   and   B(Y; t) = Σ_{X=0}^{∞} P(X, Y; t),    (2.61)

where A(X; t) is the marginal probability of X at time t, irrespective of the number of Y; the probability B(Y; t) is defined analogously. Supposing that an equilibrium probability distribution exists, one has

Ȧ(X; t) = 0   for X = 0, 1, 2, ...,    (2.62)
Ḃ(Y; t) = 0   for Y = 0, 1, 2, ...,    (2.63)

where Ȧ = dA/dt and Ḃ = dB/dt. Then, using recursion relations, we can obtain the equilibrium distribution

P(X, Y; t) = 0   for all X = 0, 1, ... and Y = 1, 2, ....    (2.64)

To derive (2.64) in more detail, on the basis of the definition of B(Y; t) in (2.61), summing (2.60) over all X yields

Ḃ(Y; t) = k2 (Y+1) B(Y+1; t) − k2 Y B(Y; t) − β Y Σ_{X=0}^{∞} X P(X, Y; t)
        + β (Y−1) Σ_{X=0}^{∞} X P(X, Y−1; t).    (2.65)

Then, we have


0 = Ḃ(0; t) = k2 (0+1) B(1; t) − k2 · 0 · B(0; t)
            + β (0−1) Σ_{X=0}^{∞} X P(X, −1; t) − β · 0 · Σ_{X=0}^{∞} X P(X, 0; t)
  = k2 B(1; t).    (2.66)

Note that P(X, −1; t) = 0 because all X, Y are non-negative. Hence, we obtain

B(1; t) = Σ_{X=0}^{∞} P(X, 1; t) = 0.    (2.67)

Because all the probabilities are non-negative, we have

P(X, 1; t) = 0   for X = 0, 1, 2, ....    (2.68)

Proceeding in this way, we can obtain (2.64). Similarly, by using

Ȧ(X; t) = k1 (X−1) A(X−1; t) − k1 X A(X; t) − β X Σ_{Y=0}^{∞} Y P(X, Y; t)
        + β (X+1) Σ_{Y=0}^{∞} Y P(X+1, Y; t),    (2.69)

we obtain

P(X, Y; t) = 0   for all X = 1, 2, ... and Y = 0, 1, ....    (2.70)

Combining (2.64) and (2.70), it follows that the only non-zero probability left in the equilibrium distribution is

P(0, 0; t) = 1.    (2.71)

Therefore, the stochastic 'Lotka–Volterra' system has no equilibrium distribution other than the one in which both species X and Y are extinct (Reddy 1975), i.e., X = Y = 0. In contrast, on the basis of (2.57)–(2.59), the deterministic 'Lotka–Volterra' system can be written as

dX/dt = k1 X − β X Y,    (2.72)
dY/dt = β X Y − k2 Y.    (2.73)

(In this book, we occasionally use Ẋ to represent dX/dt.) Clearly, this system has an equilibrium X̄ = k2/β, Ȳ = k1/β, which is generally non-zero. The difference arises from the different possibilities for identifying birth and death terms in the deterministic equations. Another example where the equilibrium probability distribution can be solved analytically is the model of a signaling cycle (Levine et al. 2007).
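The deterministic equilibrium can be checked numerically. A short sketch (the parameter values are illustrative) that verifies the fixed point X̄ = k2/β, Ȳ = k1/β of (2.72)–(2.73) and then integrates a nearby trajectory with a classical Runge–Kutta step; the deterministic orbit circles the fixed point instead of dying out, in contrast to the stochastic extinction result above:

```python
# Deterministic Lotka-Volterra system (2.72)-(2.73):
#   dX/dt = k1*X - beta*X*Y,   dY/dt = beta*X*Y - k2*Y

k1, k2, beta = 1.0, 0.5, 0.01   # illustrative rate constants

def rhs(X, Y):
    return k1 * X - beta * X * Y, beta * X * Y - k2 * Y

# Fixed point: X_bar = k2/beta = 50, Y_bar = k1/beta = 100
X_bar, Y_bar = k2 / beta, k1 / beta
print(rhs(X_bar, Y_bar))  # both components vanish at the equilibrium

def rk4_step(X, Y, h):
    """One classical fourth-order Runge-Kutta step."""
    a1 = rhs(X, Y)
    a2 = rhs(X + h / 2 * a1[0], Y + h / 2 * a1[1])
    a3 = rhs(X + h / 2 * a2[0], Y + h / 2 * a2[1])
    a4 = rhs(X + h * a3[0], Y + h * a3[1])
    return (X + h / 6 * (a1[0] + 2 * a2[0] + 2 * a3[0] + a4[0]),
            Y + h / 6 * (a1[1] + 2 * a2[1] + 2 * a3[1] + a4[1]))

X, Y = 60.0, 110.0   # near, but not at, the fixed point (50, 100)
for _ in range(20000):
    X, Y = rk4_step(X, Y, 0.005)
print(X, Y)          # the orbit stays positive, cycling around the fixed point
```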


2.5.2 Stochastic Simulation

Although the analytical solution of the master equation is rarely available, the density function can be constructed numerically using the SSA (Gillespie 1976). Generally, the SSA first constructs numerical realizations of X(t) and then averages the results of many realizations. Specifically, we compute the reaction probability density function P(Δt, μ | X; t), the joint probability density, given X, of two random variables: the time to the next reaction, Δt, and the index of the next reaction, μ. For the general molecular network (2.56) it takes the form

P(Δt, μ | X; t) = aμ(X) exp(−a0(X) Δt)    (Δt ≥ 0; μ = 1, ..., M),    (2.74)

with

a0(X) = Σ_{j=1}^{M} aj(X).    (2.75)

The reaction probability density function provides the basis for the SSA. According to the joint density function (2.74), the next reaction and the time of its occurrence can be generated by the direct method as follows. Draw two random numbers r1 and r2 from a uniform distribution on the unit interval [0, 1). The time to the next reaction Δt and the index of the next reaction μ, given X, are then taken as

Δt = (1/a0(X)) ln(1/r1),    (2.76)

μ = the smallest integer satisfying Σ_{j′=1}^{μ} aj′(X) > r2 a0(X).    (2.77)

The Gillespie direct method for exact simulation of the master equation (2.56) is shown in Table 2.1. Despite its simplicity, the computational cost of the Gillespie algorithm increases drastically with the numbers of reactions and molecules. The increase in cost is primarily due to the generation of the two random numbers and the enumeration of reactions needed to determine the next reaction. For example, when the number of molecules is of the order of 10^6, the time step Δt becomes excessively small, i.e., of the order of 10^{−6}, which means that many more time steps or iterations are needed. For illustrative purposes, we use the example for the Gillespie algorithm presented in (Gillespie 2001). There are three molecular species (S1, S2, S3) and four reaction channels:

S1 →(c1) 0,    (2.78)
S1 + S1 →(c2) S2,    (2.79)
S2 →(c3) S1 + S1,    (2.80)
S2 →(c4) S3.    (2.81)


Table 2.1 Gillespie direct method for exact simulation of (2.56) (Gillespie 1977)
Step 1. Initialization: set t = 0 and fix the initial numbers of molecules X0.
Step 2. Calculate the propensity functions aj, j = 1, ..., M.
Step 3. Generate two random numbers r1 and r2 in [0, 1).
Step 4. Determine Δt and μ according to (2.76)–(2.77).
Step 5. Execute reaction μ and advance time by Δt, i.e., t ← t + Δt. If t reaches Tmax, terminate the computation. Otherwise, go to Step 2.
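The steps of Table 2.1 translate directly into code. A minimal Python sketch of the direct method applied to the example system (2.78)–(2.81) with standard mass-action propensities; the rate constants match (2.86) below, but the initial number of S1 molecules is taken much smaller than in (2.87) so that a demonstration run finishes quickly:

```python
import math
import random

# Direct-method SSA (Table 2.1) for reactions (2.78)-(2.81):
#   S1 -> 0,  S1 + S1 -> S2,  S2 -> S1 + S1,  S2 -> S3
c1, c2, c3, c4 = 1.0, 0.002, 0.5, 0.04   # rate constants as in (2.86)
X1, X2, X3 = 1000, 0, 0                   # scaled-down initial condition
t, T_max = 0.0, 5.0
rng = random.Random(1)

# state-change vectors nu_j for the four channels
nu = [(-1, 0, 0), (-2, +1, 0), (+2, -1, 0), (0, -1, +1)]

while t < T_max:
    # Step 2: mass-action propensities
    a = [c1 * X1, c2 * X1 * (X1 - 1) / 2.0, c3 * X2, c4 * X2]
    a0 = sum(a)
    if a0 == 0.0:
        break                              # no reaction can fire any more
    # Steps 3-4: sample Delta t and the channel mu via (2.76)-(2.77)
    r1, r2 = rng.random(), rng.random()
    dt = math.log(1.0 / r1) / a0
    threshold, mu, acc = r2 * a0, 0, 0.0
    for j, aj in enumerate(a):
        acc += aj
        if acc > threshold:
            mu = j
            break
    # Step 5: execute reaction mu and advance time
    X1 += nu[mu][0]; X2 += nu[mu][1]; X3 += nu[mu][2]
    t += dt

print(t, X1, X2, X3)
```

Each run is one realization of the Markov chain; repeating the loop with different seeds and averaging recovers statistics of P(X; t).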

For these reactions, the reaction propensities, which describe the probability per unit time of the corresponding molecular collisions, take the forms

a1 = c1 X1,    (2.82)
a2 = c2 X1 (X1 − 1)/2,    (2.83)
a3 = c3 X2,    (2.84)
a4 = c4 X2,    (2.85)

where X1 and X2 are the numbers of molecules S1 and S2, respectively. The simulation results are shown in Figure 2.6 when using the rate constant values

c1 = 1, c2 = 0.002, c3 = 0.5, c4 = 0.04    (2.86)

and the initial conditions

X1 = 10^5, X2 = X3 = 0,    (2.87)

where X3 is the number of molecules of S3.

Time delays exist in many cellular processes, and in a mathematical model the effect of a delay may be drastic. When delay times are significant, both analytical and numerical modeling should take their influence into account. To explore the combined effects of time delay and intrinsic noise on cellular dynamics, a modified Gillespie method was developed (Bratsun et al. 2005), which allows the incorporation of delayed reactions and accounts for the non-Markovian properties of random biochemical events with time delay. The formal steps of the modified method are shown in Table 2.2. To illustrate the modified method more clearly, we use a simple model presented in (Bratsun et al. 2005) as an example. Suppose that a protein can exist both as monomers X1 and as dimers X2, and that protein production occurs with a time lag τ and only if the operator D is unoccupied at time t. The reaction channels are as follows:


Figure 2.6 Stochastic simulation of the system (2.78)–(2.81) for the rate constants (2.86) and the initial condition (2.87) (from (Gillespie 2001))

D0 →(k0) D0 + X1,    (2.88)
X1 + X1 →(k2) X2,    (2.89)
X2 →(k−2) X1 + X1,    (2.90)
D0 + X2 →(k1) D1,    (2.91)
D1 →(k−1) D0 + X2,    (2.92)
X1 →(d) 0,    (2.93)


Table 2.2 Modified Gillespie method to incorporate delayed reactions (Bratsun et al. 2005)
Step 1. Initialization: set t = 0 and fix the initial numbers of molecules X0 and the reaction counter i.
Step 2. Calculate the propensity functions aj, j = 1, ..., M.
Step 3. Generate two random numbers r1 and r2 in [0, 1).
Step 4. Compute the time interval until the next reaction according to (2.76).
Step 5. Check whether any delayed reaction is scheduled within the time interval [t, t + Δt]. If so, advance time to the time td = t + τ of the next scheduled delayed reaction, update the states X according to the delayed reaction channel, increase the counter, i ← i + 1, and proceed to Step 2. Otherwise, go to Step 6.
Step 6. Find the channel μ of the next reaction according to (2.77).
Step 7. If the selected reaction μ is not delayed, update X by executing reaction μ, update time t ← t + Δt, and increase the counter, i ← i + 1. If the selected reaction is delayed, its update is postponed until time td. If t reaches Tmax, terminate the computation. Otherwise, go to Step 2.
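The bookkeeping in Table 2.2 amounts to keeping a queue of scheduled delayed updates alongside the usual direct method. A minimal sketch for a toy system with one delayed channel, 0 →(k0) X completing after a delay τ, and one ordinary channel, X →(d) 0; these reactions and parameter values are illustrative simplifications, not the full system (2.88)–(2.93):

```python
import math
import random
import heapq

# Delayed-SSA sketch (cf. Table 2.2): production completes tau time units
# after it is initiated; degradation is an ordinary, non-delayed channel.
k0, d, tau = 20.0, 1.0, 2.0     # illustrative parameters
X, t, T_max = 0, 0.0, 50.0
pending = []                     # min-heap of completion times of delayed events
rng = random.Random(7)

while t < T_max:
    a = [k0, d * X]              # propensities: delayed initiation, degradation
    a0 = sum(a)
    dt = math.log(1.0 / rng.random()) / a0
    # Step 5: if a delayed update is scheduled before t + dt, execute it first
    # (the tentative dt is discarded and resampled, as in Table 2.2).
    if pending and pending[0] <= t + dt:
        t = heapq.heappop(pending)
        X += 1                   # the delayed production now completes
        continue
    if t + dt > T_max:
        break
    t += dt
    # Step 6: pick the channel for the reaction at time t
    if rng.random() * a0 < a[0]:
        heapq.heappush(pending, t + tau)   # schedule completion at t + tau
    else:
        X -= 1                   # degradation fires immediately

print(t, X, len(pending))
```

For this linear toy system the delay does not change the stationary mean (k0/d), but it does reshape the temporal correlations, which is exactly the effect the modified method is designed to capture.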

where D0 and D1 are the operator unoccupied and occupied by the repressor X2, respectively. Utilizing the inherent separation of time scales and eliminating the fast variables X2 and D1 corresponding to reactions (2.89)–(2.92), the deterministic equation for the number of monomers X1 takes the form

[1 + 4δX1/(1 + δX1²)²] dX1/dt = k0/(1 + εδX1²(t − τ)) − d X1,    (2.94)

where ε = k1/k−1 and δ = k2/k−2. Equation (2.94) is an approximation of the original system based on the quasi-steady-state equilibrium of (2.89)–(2.92), but deterministic simulation of (2.94) is clearly much simpler. The stochastic simulation results are shown in Figure 2.7. See (Bratsun et al. 2005) for more details on the modified Gillespie method.

Although the equilibrium probability distribution of the master equation (2.55) is difficult to derive analytically, it can be calculated numerically on the basis of the following algebraic equation:

Σ_{X′} [ P_{X′X} P(X′; t) − P_{XX′} P(X; t) ] = 0.    (2.95)

For instance, by enumerating all possible (feasible) configurations or states of X and then substituting them into (2.95), we obtain simultaneous equations


for all feasible states. Then, exploiting the sparse structure of these linear equations, e.g., using the Arnoldi method (Cao and Liang 2008), we can estimate the equilibrium probability distribution P for all states or configurations of X. The exact stochastic simulation algorithms are computationally expensive and thereby only realistic for small-scale systems. To simulate a large-scale biomolecular system, approximation schemes are usually adopted, e.g., the finite state projection approach, parallel computation schemes, adaptive time-step procedures, and model reduction (or aggregation) methods based on time-scale separation or continuous–discrete variable separation, which exploit the special dynamical or topological properties of the system concerned.
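For a chain with one-step transitions, the stationary condition (2.95) collapses to the detailed-balance recursion mentioned earlier. A sketch for the birth–death process 0 →(k) X, X →(d) 0 (an illustrative choice, not from the text), whose exact stationary law is Poisson with mean k/d:

```python
import math

# Stationary distribution of the birth-death master equation via recursion:
# the balance condition k * p_n = d * (n+1) * p_{n+1} gives
#   p_{n+1} = p_n * k / (d * (n+1)),
# i.e., a Poisson law with mean k/d after normalization.

k, d, N = 10.0, 1.0, 100     # illustrative rates and truncation bound

p = [1.0]
for n in range(N):
    p.append(p[-1] * k / (d * (n + 1)))
Z = sum(p)
p = [x / Z for x in p]        # normalize over the truncated state space

mean = sum(n * pn for n, pn in enumerate(p))
poisson_10 = math.exp(-10.0) * 10.0 ** 10 / math.factorial(10)
print(mean, p[10] - poisson_10)  # mean ~ k/d; p[10] ~ Poisson(10) pmf at 10
```

For multi-species networks without detailed balance, the same idea becomes a genuine sparse linear solve over all enumerated states, which is where methods such as Arnoldi iteration come in.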

Figure 2.7 Stochastic simulation of the system (2.88)–(2.93). The parameter values are  = 0.1, δ = 0.2, k1 = 100, k−1 = 1000, k2 = 200, k−2 = 1000, d = 4, τ = 20, k0 = 20 (a), and k0 = 70 (b) (from (Bratsun et al. 2005))


2.5.3 Analysis of Sensitivity and Robustness of Master Equations

Sensitivity characterizes the ability of living organisms to react adequately to a given stimulus; it quantifies the dependence of system behavior on parameters. In deterministic modeling, robustness is usually quantified by calculating sensitivities, e.g., period and amplitude sensitivities in quantifying the robustness of circadian rhythms. Sensitivity analysis of stochastic systems has recently become popular due to its relevance to the simulation of cellular processes. Using an analogue of classical sensitivity analysis, parameter sensitivity can also be applied to discrete stochastic dynamical systems. In stochastic systems, the state is a probability distribution, and parameters affect it indirectly through the master equation. The parameter sensitivity applied to the probability distribution is given by (Gunawan et al. 2005)

Sj(X; t) = ∂P(X(p); t)/∂pj,    (2.96)

where p denotes the parameter vector. In traditional sensitivity analysis, one implicit assumption is that the outputs are continuous with respect to the parameters; in stochastic systems, however, the output takes random values with some probability. A sensitivity measure for discrete stochastic systems is defined by (Gunawan et al. 2005)

sj(t) = E[ |∂P(X; t)/∂pj| ] = ∫_X |∂P(X; t)/∂pj| P(X; t) dX,    (2.97)

where E[·] denotes the expected value over X. The absolute value is used because only the magnitude of the sensitivity coefficient is of concern. Unlike the sensitivity coefficient of a single system state with respect to a parameter, the dependence of the system state X on the parameter p is implicitly assumed. The evolution of the sensitivity coefficients can be derived from (2.55) by taking derivatives with respect to the parameters:

∂Sj(X; t)/∂t = Σ_{X′} [ P_{X′X} Sj(X′; t) − P_{XX′} Sj(X; t)
             + (∂P_{X′X}/∂pj) P(X′; t) − (∂P_{XX′}/∂pj) P(X; t) ].    (2.98)

The evolution equation (2.98) must be solved simultaneously with the master equation (2.55). Generally, its analytical solution is difficult to construct, and the SSA cannot be applied directly to solve the sensitivity equations. Techniques for estimating the sensitivity coefficients have therefore been developed on the basis of a black-box approach such as finite differences. Specifically, the probability density function is constructed through a cumulative distribution function from the SSA. The cumulative distribution function is given by

F(X; t) = ∫_{−∞}^{X} P(X̃; t) dX̃,    (2.99)

from which the density function P(X; t) can be obtained as

P(X; t) = dF(X; t)/dX.    (2.100)

Then, the sensitivity coefficients can be computed numerically by a difference approximation:

Sj(X; t) = ∂P(X; t)/∂pj = [P(X(pj + Δpj); t) − P(X(pj − Δpj); t)] / (2Δpj)
         = [F(P(X(pj + Δpj)); t) − 2F(P(X(pj)); t) + F(P(X(pj − Δpj)); t)] / (Δpj)²,    (2.101)

where Δpj is the perturbation to pj; it should be small enough to minimize truncation error but large enough that the effect of the parameter change can be distinguished from the internal noise (Gunawan et al. 2005). This balance in choosing parameter step sizes may hinder the application of finite differences to the probability density function. Recently, spectral methods for parameter sensitivity in stochastic dynamical systems were developed (Kim et al. 2007). The authors used spectral polynomial chaos expansions to represent statistics of the system dynamics as polynomial functions of the system parameters. Such an approach allows an accurate and robust evaluation of sensitivities, even for large-magnitude parametric perturbations. In addition, it enables studies of the predictability of the system under uncertainty, variability, or external noise in the model parameters, and estimation of the corresponding uncertainty in the predicted output statistics.

2.5.4 Langevin Equations

The Langevin equational description of a cellular system explicitly incorporates the effects of noise. It takes the form of continuous differential equations augmented with additive or multiplicative stochastic terms, called stochastic differential equations. Owing to the explicit incorporation of noise, the Langevin approach is well suited to describing the constructive effects of stochastic fluctuations in cellular systems. Two typical examples of Langevin equations, with additive and multiplicative stochastic terms, take the forms

Ẋ = f(X) + Γ(t)    (2.102)

and

Ẋ = h(X) + g(X)Γ(t),    (2.103)

where Γ(t) can be a Gaussian white noise or another fluctuating random term (Hasty et al. 2000). In (2.102), the noise term does not depend on the current state, whereas in (2.103) it does; here, g(X) is a deterministic function of X that determines the scaling of the noise. Note that, unlike in a master equation, the state X in the Langevin equations takes continuous values, and hence the Langevin equations can be viewed as an approximation to the corresponding master equation: they approximate the discrete state variable X of the master equation by the continuous variable X of (2.103). To describe the fluctuations when g(X) and Γ are not explicitly given, one can generally proceed by the following steps (Van Kampen 1992):
1. Write the deterministic macroscopic equations of the system.
2. Add the Langevin force, i.e., the second term of (2.102) or (2.103).
3. Adjust the constants related to the Langevin force so that the stationary solution reproduces the correct mean-square fluctuations as known from statistical mechanics or other considerations.
The master equation and the various stochastic simulation algorithms account for the stochastic and discrete nature of biochemical reactions and have been widely used to investigate the properties and effects of internal noise as the gold standard of numerical computation. However, their computational efficiency degrades rapidly as the complexity of a system increases. In addition, a master equation does not provide clear perspectives on the origins and magnitude of the internal noise. Starting from a master equation, when a system possesses a macroscopically infinitesimal time scale, in the sense that during any time increment dt on that scale all the reaction channels fire many times and yet none of the propensity functions changes appreciably, its dynamics can be well approximated by Langevin equations (Gillespie 2000).
In particular, when the number of each species is large (e.g., hundreds or more per cell), Langevin equations describe the dynamics of cellular systems well. The Langevin equations of a general molecular network can be derived straightforwardly from (2.56) and take the form

dXi(t)/dt = Σ_{j=1}^{M} νji aj(X(t)) + Σ_{j=1}^{M} νji √(aj(X(t))) Γj(t),    (2.104)

where all parameters and variables are defined as in (2.56). The Γj(t) are temporally uncorrelated, statistically independent Gaussian white noise signals, formally defined by

Γj(t) = lim_{dt→0} N(0, 1/dt),    (2.105)


where N(m, σ²) denotes the normal random variable with mean m and variance σ². Together with the properties of temporal and statistical independence, this definition implies

⟨Γj(t)Γj′(t′)⟩ = δjj′ δ(t − t′),    (2.106)

where the first delta is Kronecker's function, δjj′ = 1 if j = j′ and 0 otherwise, and the second is Dirac's delta function, which vanishes for t ≠ t′ and integrates to one. In what follows, we occasionally write δ(j, j′) for δjj′.

Although the Langevin approach is approximate, owing to its continuous approximation of the discrete state X, which loses validity when the number of molecules is small, it can often be analyzed with much greater ease than other representations, e.g., the master equation approach. For example, on the basis of the transfer function around a feedback loop and the equivalent noise bandwidth of a system, a frequency-domain technique for the analysis of intrinsic noise in negatively auto-regulated gene circuits has been proposed (Simpson Siegal-Gaskins 2003). Working with Langevin equations is especially beneficial because one can see clearly how the internal noise involved in biochemical reactions is related to the parameter values, the system size, and the state variables that evolve with time. Because internal noise is inherent in biochemical reactions and cannot be switched off, the Langevin equations have been used extensively to model intrinsic noise in cellular systems and to study its constructive roles, such as internal noise stochastic resonance (INSR) (Hou and Xin 2003) and coherence resonance (Steuer et al. 2003).

As an example, take a biochemical clock model which incorporates three species: mRNA (M) and clock proteins in the cytosol (PC) and in the nucleus (PN) (Hou and Xin 2003). The deterministic equations that describe the evolution of the three species are

d[M]/dt = νs k1ⁿ/(k1ⁿ + [PN]ⁿ) − νm [M]/(km + [M]) ≡ a1 − a2,    (2.107)
d[PC]/dt = ks [M] − νd [PC]/(kd + [PC]) − k1 [PC] + k2 [PN] ≡ a3 − a4 − a5 + a6,    (2.108)
d[PN]/dt = k1 [PC] − k2 [PN] ≡ a5 − a6.    (2.109)

According to (2.104), the Langevin equations for this system can be written directly as

d[M]/dt = (a1 − a2) + (1/√V)[√a1 Γ1(t) − √a2 Γ2(t)],   (2.110)
d[PC]/dt = (a3 − a4 − a5 + a6) + (1/√V)[√a3 Γ3(t) − √a4 Γ4(t) − √a5 Γ5(t) + √a6 Γ6(t)],   (2.111)
d[PN]/dt = (a5 − a6) + (1/√V)[√a5 Γ5(t) − √a6 Γ6(t)],   (2.112)

where Γi(t), i = 1, ..., 6, are Gaussian white noise signals with ⟨Γi(t)⟩ = 0 and ⟨Γi(t)Γj(t′)⟩ = δij δ(t − t′), and V is the system size, e.g., the cell volume.

Figure 2.8 Bifurcation diagrams for the deterministic equations (2.107)–(2.109) and the chemical Langevin equations (CLE) (2.110)–(2.112) (from (Hou and Xin 2003))

When the second terms in the brackets on the right-hand sides of (2.110)–(2.112) are removed, the equations reduce to the deterministic ones (2.107)–(2.109); the second terms therefore represent the internal noise. Clearly, the magnitude of the internal noise scales as 1/√V and depends not only on the control parameters but also on the system state, i.e., the concentrations [M], [PC], and [PN].

Because the noise appears explicitly in the Langevin equations, the constructive roles of intrinsic and extrinsic noise can be studied relatively easily. For example, as shown in Figure 2.8, when the system size is very large and the deterministic kinetics applies, the system does not sustain oscillations if the control parameter is below the threshold. When the system size is small, however, stochastic oscillations occur in the Langevin equations in the same parameter region. Such stochastic oscillations are distinct from random


noise in that there is a clear peak in their power spectra. These oscillations are induced by the internal noise. In addition, the best performance at an optimal noise level demonstrates the occurrence of INSR (Hou and Xin 2003).
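As a concrete illustration, the chemical Langevin equations (2.110)–(2.112) can be integrated with the Euler–Maruyama scheme, in which each channel contributes a drift term aj dt and a noise term √aj dWj/√V. The sketch below uses illustrative placeholder rate constants, not the values of Hou and Xin (2003); the transport rates are written K1, K2 to avoid the double use of k1, k2 in (2.107)–(2.109).

```python
import numpy as np

# Euler-Maruyama integration of the CLE (2.110)-(2.112).
# All rate constants are illustrative placeholders.
def propensities(M, PC, PN, p):
    # a1..a6 as defined beneath (2.107)-(2.109)
    a1 = p['vs'] * p['k1']**p['n'] / (p['k1']**p['n'] + PN**p['n'])
    a2 = p['vm'] * M / (p['km'] + M)
    a3 = p['ks'] * M
    a4 = p['vd'] * PC / (p['kd'] + PC)
    a5 = p['K1'] * PC   # cytosol -> nucleus transport (hypothetical K1)
    a6 = p['K2'] * PN   # nucleus -> cytosol transport (hypothetical K2)
    return np.array([a1, a2, a3, a4, a5, a6])

def simulate(V, T=100.0, dt=0.001, seed=0):
    rng = np.random.default_rng(seed)
    p = dict(vs=0.5, k1=2.0, n=4, vm=0.3, km=0.2,
             ks=2.0, vd=1.5, kd=0.1, K1=0.2, K2=0.2)
    # stoichiometric signs of the six channels for ([M], [PC], [PN])
    S = np.array([[1, -1, 0,  0,  0,  0],
                  [0,  0, 1, -1, -1,  1],
                  [0,  0, 0,  0,  1, -1]])
    x = np.array([0.1, 0.1, 0.1])
    traj = np.empty((int(round(T / dt)), 3))
    for i in range(traj.shape[0]):
        a = np.clip(propensities(*x, p), 0.0, None)
        dW = rng.normal(size=6) * np.sqrt(dt)       # Wiener increments
        x = x + S @ (a * dt) + (S @ (np.sqrt(a) * dW)) / np.sqrt(V)
        x = np.clip(x, 0.0, None)   # keep concentrations non-negative
        traj[i] = x
    return traj
```

Comparing the fluctuation amplitude of trajectories generated for small and large V exhibits the 1/√V scaling of the internal noise discussed above.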

Figure 2.9 Dependence of the effective SNR on the system size at νs = 0.25 (from (Hou and Xin 2003))

To quantify the relative performance of the stochastic oscillations, a signal-to-noise ratio (SNR) can be defined as β = S/N, where S and N are the signal and noise strengths, respectively. The dependence of the SNR on the system size V is plotted in Figure 2.9. There is a clear maximum at a system size of V ∼ 10⁴, which demonstrates the existence of a resonance region. Since this resonance effect is induced purely by the internal noise, it is called INSR. See (Hou and Xin 2003) for an algorithm to calculate the effective SNR.

Recently, an increasing amount of experimental and theoretical evidence has indicated that noise can play a very important role in cellular systems. For example, noise-based switches and amplifiers were studied for gene expression in a single network derived from the bacteriophage λ (Hasty et al. 2000). Fluctuation-enhanced sensitivity of intracellular regulation in a single cell has also been reported (Paulsson et al. 2000). Internal noise can induce circadian oscillations, and the performance of the noise-induced circadian oscillation reaches a maximum as the internal noise level is varied, indicating the occurrence of INSR (Hou and Xin 2003, Li and Lang 2008). When combined with delays, a noisy system may sustain oscillations for parameter values at which its deterministic counterpart settles to a steady state (Bratsun et al. 2005, Lewis 2003). Noise-induced collective behavior was also discovered


for multicellular systems (Chen et al. 2005, Zhou et al. 2005). All these phenomena show that noise can add substantial richness to cellular dynamics.

2.5.5 Fokker–Planck Equations

Fokker–Planck equations are a special type of master equation and are often used as an approximation to the actual master equation, or as a model for more general Markov processes. By the Taylor expansion of aj(X(t) − νj)P(X(t) − νj) to second order in (2.56), we obtain the Fokker–Planck equation for the general molecular network as follows:

∂P(X; t)/∂t = − Σ_{i=1}^{N} ∂/∂Xi [ ( Σ_{j=1}^{M} νji aj(X) ) P(X; t) ]
  + (1/2) Σ_{i=1}^{N} ∂²/∂Xi² [ ( Σ_{j=1}^{M} νji² aj(X) ) P(X; t) ],   (2.113)

where all parameters and variables are defined as in (2.56). This equation can be rewritten in the more compact form

∂P(X; t)/∂t = − Σ_{i=1}^{N} ∂/∂Xi [Ai(X) P(X; t)] + (1/2) Σ_{i=1}^{N} ∂²/∂Xi² [Di(X) P(X; t)],   (2.114)

or in vector form

∂P(X; t)/∂t = − ∂/∂X [A(X) P(X; t)] + (1/2) ∂²/∂X² [B(X) P(X; t)],   (2.115)

where

A(X) = (A1, ..., AN),   (2.116)
D(X) = (D1, ..., DN),   (2.117)
B(X) = [Bij]N×M,   (2.118)
Ai(X) = Σ_{j=1}^{M} νji aj(X),   (2.119)
Bij(X) = νji aj(X)^{1/2},   (2.120)
Di(X) = Σ_{j=1}^{M} Bij²(X).   (2.121)


Note that here the master equation is expanded with respect to the variables X and the discrete jumps νji. If the variables are rescaled as x = X/V with discrete jumps νji/V, where V is the system size (e.g., the cellular volume), the master equation can instead be expanded with respect to x and νji/V by the Taylor series; such an approximation is called the Kramers–Moyal expansion. The first-order term of the Taylor or Kramers–Moyal expansion of the master equation gives the deterministic kinetics of the molecular network, while keeping terms up to second order yields the Langevin dynamics. Clearly, the Fokker–Planck equation (2.113) is a continuous-state approximation of the master equation (2.56) for the discrete state X, and it can be proved to be equivalent to the Langevin equations (2.104).

The Fokker–Planck equation is useful in that some theoretical analysis can be carried out directly. For example, in the one-variable case the equilibrium probability distribution can be obtained from (2.113) as

Peq(X) = C/B(X) exp( 2 ∫₀^X A(X′)/B(X′) dX′ )   and   ⟨dX/dt⟩ = ⟨A(X)⟩,   (2.122)

where C is a normalization constant and ⟨·⟩ denotes the mean over all X. Note that Peq is the equilibrium probability distribution of the fully stochastic system, not an equilibrium of the deterministic one. From the equilibrium probability distribution, the equilibrium mean concentration or molecule number can be obtained.

The Fokker–Planck equation is an approximation to, but similar in spirit to, the master equation: both describe the evolution of the probability distribution of the state X(t). One main difference lies in the representation of the species: in the Fokker–Planck equation the description is continuous, while in the master equation it is discrete. If the biochemical system contains only a few molecules, the discrete representation is more accurate than the continuous one.
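The last point can be made concrete with the simplest network, a birth–death process: its master equation has a Poisson stationary law, while the continuous (Fokker–Planck) approximation yields a Gaussian of the same mean and variance. The sketch below, with illustrative parameters, compares the two and shows that the gap is large only when the mean molecule number is small.

```python
import math

# Stationary law of the birth-death process 0 -> X -> 0 with constant
# production and linear degradation: the master equation gives a
# Poisson distribution with mean lam, while the Fokker-Planck
# approximation gives a Gaussian with the same mean and variance.
def poisson_pmf(lam, n):
    # computed in log space to avoid overflow for large lam or n
    return math.exp(n * math.log(lam) - lam - math.lgamma(n + 1))

def gaussian_pdf(mean, var, x):
    return math.exp(-(x - mean)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def max_discrepancy(lam, nmax=200):
    # largest pointwise gap between the two stationary laws
    return max(abs(poisson_pmf(lam, n) - gaussian_pdf(lam, lam, n))
               for n in range(nmax))

small = max_discrepancy(2.0)    # ~2 molecules on average: visibly different
large = max_discrepancy(100.0)  # ~100 molecules: nearly indistinguishable
```

The discrepancy at a mean of about two molecules is roughly two orders of magnitude larger than at a mean of one hundred, which is exactly why the discrete description matters for low copy numbers.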
As an example, consider the effect of a randomly varying external field or environment on the biochemical reactions. We show how to transform a Langevin equation into an equivalent Fokker–Planck equation, or to transform the master equation approximately into a Langevin equation. Reconsider the example (2.102) with additive noise Γ(t), a rapidly fluctuating random term with zero mean, ⟨Γ(t)⟩ = 0, that is δ-correlated, i.e., ⟨Γ(t)Γ(t′)⟩ = Dδ(t − t′), with D proportional to the strength of the perturbation. Introducing the probability distribution P(X; t), i.e., the probability that the system has concentration X at time t, the corresponding Fokker–Planck equation for P(X; t) can be constructed as follows:

∂P(X; t)/∂t = − ∂(f(X)P(X; t))/∂X + (D/2) ∂²P(X; t)/∂X² = − ∂J(X; t)/∂X,   (2.123)


where J(X; t) is the probability flux

J(X; t) = f(X)P(X; t) − (D/2) ∂P(X; t)/∂X.   (2.124)

Compared with (2.115), clearly A(X) = f(X) and B(X) = D in (2.124). At equilibrium, J(X; t) = 0. Integrating (2.124) over X, the equilibrium distribution for one-dimensional X is therefore given analytically as

Peq(X) = K e^{−2φ(X)/D}   (2.125)

with

∂φ(X)/∂X = − f(X),   (2.126)

i.e., φ(X) can be viewed as an energy landscape, and K is a normalization constant determined by requiring the integral of Peq(X) over all X to be unity (Hasty et al. 2000). Using (2.125), the equilibrium mean value is given by

⟨X⟩eq = ∫₀^∞ X Peq(X) dX.   (2.127)
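The relations (2.125)–(2.127) can be checked by direct sampling: long Euler–Maruyama trajectories of dX/dt = f(X) + Γ(t) should be distributed as Peq(X) ∝ exp(−2φ(X)/D). The sketch below uses the illustrative choice f(X) = −(X − 1), so that φ(X) = (X − 1)²/2 and Peq is Gaussian with mean 1 and variance D/2; this is not a model from the text.

```python
import numpy as np

# Sampling check of (2.125): simulate dX/dt = f(X) + Gamma(t) with
# <Gamma(t)Gamma(t')> = D delta(t-t') and compare empirical moments
# with the analytic equilibrium law.  f(X) = -(X - 1) is illustrative.
def sample_equilibrium(D=0.5, dt=0.01, steps=200_000, seed=1):
    rng = np.random.default_rng(seed)
    x = 1.0
    xs = np.empty(steps)
    for i in range(steps):
        # Euler-Maruyama step: drift f(x) dt plus noise sqrt(D dt) eta
        x += -(x - 1.0) * dt + np.sqrt(D * dt) * rng.normal()
        xs[i] = x
    return xs[steps // 10:]          # discard the initial transient

xs = sample_equilibrium()
# For this phi, P_eq is Gaussian with mean 1 and variance D/2 = 0.25.
mean_err = abs(xs.mean() - 1.0)
var_err = abs(xs.var() - 0.25)
```

The empirical mean and variance agree with those of Peq to within sampling error, confirming the potential picture behind (2.126).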

For multiplicative noise described by (2.103), the equilibrium probability distribution is obtained by transforming (2.103) into the equivalent Fokker–Planck equation

∂P(X; t)/∂t = − ∂[(h(X) + (D/2) g(X)g′(X)) P(X; t)]/∂X + (D/2) ∂²[g²(X) P(X; t)]/∂X²,   (2.128)

where the prime denotes the derivative of g(X) with respect to X. Compared with (2.115), A(X) = h(X) + Dg(X)g′(X)/2 and B(X) = Dg²(X). The equilibrium probability distribution for one-dimensional X is obtained similarly as

Peq(X) = L e^{−2φm(X)/D}   (2.129)

with

∂φm(X)/∂X = − [ h(X) + (D/2) g(X)g′(X) ],   (2.130)

where the function φm (X) can also be viewed as a potential (Hasty et al. 2000), and L is a normalization constant. Clearly, from the master equation (2.56), we can derive the corresponding Langevin equations (2.104) and Fokker–Planck equation (2.115) for a general molecular network. Note that the Fokker–Planck equation (2.115) and the Langevin equations (2.104) are equivalent.


2.5.6 Cumulant Equations

It can be proven that when all state variables X follow Gaussian distributions, the Langevin equations (2.104) or the Fokker–Planck equation (2.113) can be equivalently expressed by evolution equations for the first and second cumulants. This means that we can examine the dynamics through deterministic cumulant equations instead of the more complicated stochastic representations, such as the master equation or the Langevin equations. Below, we first describe a procedure to derive cumulant equations up to any order for a general cellular system, and then give a closed-form expression of the cumulant equations for a system in which all variables follow Gaussian distributions.

From the master equation (2.56) for a general molecular network, define

K̄i(X(t)) = Σ_{j=1}^{M} νji aj(X(t)),   (2.131)
K̄ik(X(t)) = Σ_{j=1}^{M} νji νjk aj(X(t)),   (2.132)

for i = 1, ..., N and k = 1, ..., N. Let the concentration corresponding to X(t) be x(t), i.e., x(t) = X(t)/V, where V is the system size or the individual cellular volume. Working with concentrations and letting

fi(x(t)) = K̄i(V x(t))/V,   Kik(x(t)) = K̄ik(V x(t))/V,   (2.133)

the Langevin equations (2.104), which have molecule numbers as states, can be rewritten in the following form with concentrations as states (Van Kampen 1992, Chen et al. 2005):

dxi(t)/dt = fi(x(t)) + ξi(t),   (2.134)

where the ξi are Gaussian white noise signals with zero means and covariances Kik(x(t)), i.e., ⟨ξi(t)⟩ = 0 and ⟨ξi(t)ξk(t′)⟩ = Kik(x(t)) δ(t − t′), with [Kik] an N × N matrix. Here ⟨·⟩ denotes the mean, i.e., integration over the probability distribution,

⟨f(x)⟩ = ∫ₓ f(x) P(V x; t) dx,   (2.135)

where X = V x. Clearly, the Langevin equations (2.104) and (2.134) are equivalent up to the change of variables, and both are derived from (2.56). The first and second cumulants of any two random variables xi and xj are simply their means and covariances, i.e., ⟨xi⟩, ⟨xj⟩, and ⟨xi xj⟩ − ⟨xi⟩⟨xj⟩. Letting

g(x(t), s) = Π_{i=1}^{N} xi^{si}(t),   (2.136)


then for each integer-valued vector s = (s1, ..., sN), the moment evolution equations corresponding to (2.134) are given as follows (Kawai et al. 2004, Wojtkiewicz et al. 1996):

d⟨g(x(t), s)⟩/dt = Σ_{i=1}^{N} ⟨ fi ∂g/∂xi ⟩ + (1/2) Σ_{i=1}^{N} Σ_{k=1}^{N} ⟨ Kik ∂²g/(∂xi ∂xk) ⟩,   (2.137)

which can be used directly to derive the cumulant evolution equations. Therefore, from (2.134) and (2.137), we can derive the cumulant equations up to any order for a general molecular network, which, however, may not be in closed form. On the other hand, if every variable xi of the system follows a Gaussian distribution, the system can be expressed exactly by the first- and second-order cumulant equations in closed form. For such a system, all odd central moments vanish and any even central moment can be expressed as products of the second central moments. For instance,

⟨xi xj xk xl⟩c = ⟨xi xj⟩c ⟨xk xl⟩c + ⟨xi xk⟩c ⟨xj xl⟩c + ⟨xi xl⟩c ⟨xj xk⟩c,   (2.138)
⟨xi xi xi xj xj xk⟩c = 6⟨xi xi⟩c ⟨xi xj⟩c ⟨xj xk⟩c + 6⟨xi xj⟩²c ⟨xi xk⟩c + 3⟨xi xi⟩c ⟨xi xk⟩c ⟨xj xj⟩c,   (2.139)

where all moments are central moments (Chen et al. 2005), e.g., ⟨xi xj⟩c = ⟨(xi − ⟨xi⟩)(xj − ⟨xj⟩)⟩ = ⟨xi xj⟩ − ⟨xi⟩⟨xj⟩. Note that cumulants are identical to central moments at first, second, and third order, and any differentiable function f(x) can be expanded around ⟨x⟩ in terms of central moments; for a Gaussian distribution of x, since all odd central moments are zero, ⟨f(x)⟩ can be expanded as

⟨f(x)⟩ = f(⟨x⟩) + (1/2!) ∂²f(⟨x⟩)/∂x² ⟨xx⟩c + (1/4!) ∂⁴f(⟨x⟩)/∂x⁴ ⟨xxxx⟩c + · · · .   (2.140)

The cumulant evolution equations of (2.134) in closed form can be written in terms of the first-order cumulants mi and the second-order cumulants cik as follows:

dmi(t)/dt = Fi(m(t), c(t)),   (2.141)
dcik(t)/dt = Gik(m(t), c(t)),   (2.142)

with

Fi(m(t), c(t)) = ⟨fi(x(t))⟩,   (2.143)
Gik(m(t), c(t)) = ⟨(xi(t) − mi(t)) fk(x(t))⟩ + ⟨(xk(t) − mk(t)) fi(x(t))⟩ + ⟨Kik(x(t))⟩,   (2.144)

2.5 Stochastic Representation

67

where i, k = 1, ..., N. In other words, the molecular network can be expressed exactly by the first- and second-order cumulant equations. Here, the first cumulants or means of x are m = (m1, ..., mN) with elements mi = ⟨xi⟩, and the second cumulants or covariances of x are c = [cik]N×N with elements cik = ⟨(xi − mi)(xk − mk)⟩. Equation (2.141) is obtained by directly integrating (2.134) over all xi, whereas (2.142) is derived by integrating d⟨(xi − mi)(xk − mk)⟩/dt over all xi and xk with the substitution of (2.134), mi, and mk. All Fi and Gik are the results of these integrations. The vector m clearly has N elements. The covariance matrix c has at most N(N + 1)/2 distinct non-zero elements, but generally more than N of them, because any two molecular species in a cell are generally not independent of each other. A detailed example showing how to derive the explicit expressions of the Langevin equations (2.134) and the cumulant equations (2.141)–(2.142) for a gene regulatory network can be found in (Chen et al. 2005).

The system (2.141)–(2.142) is a closed-form expression in m and c, and can hence be viewed as a deterministic representation of the cellular system, although it contains higher-order statistical terms, i.e., the covariances cij in addition to the means mi. By examining the deterministic cumulant equations, we can approximately determine the qualitative dynamics of the original stochastic system described by the master or Langevin equations, e.g., stochastic stability and stochastic synchronization from the viewpoint of the probability distribution. For instance, if each xi follows a Gaussian distribution, switching dynamics, oscillatory behavior, or synchrony in (2.141)–(2.142) correspond to the same behaviors in (2.134), because the original stochastic dynamics of x can be entirely reconstructed from the deterministic means m and covariances c (Chen et al. 2005).
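For the simplest case, a birth–death process 0 → x → 0 with constant production and linear degradation, the cumulant equations (2.141)–(2.142) can be written out and integrated directly: f(x) = k − dx and K11(x) = (k + dx)/V, so dm/dt = k − dm and dc/dt = −2dc + (k + dm)/V. Since f is linear, the closure is exact here, and the stationary values recover the Poisson relation c* = m*/V. Rates below are illustrative.

```python
# Cumulant equations (2.141)-(2.142) for the birth-death process
# 0 ->(k) x ->(d) 0 in a volume V:
#   dm/dt = k - d*m,   dc/dt = -2*d*c + (k + d*m)/V.
# Stationary solution: m* = k/d, c* = m*/V (Poisson variance of the
# concentration).  Integrated with forward Euler, illustrative rates.
def integrate_cumulants(k=2.0, d=1.0, V=100.0, dt=1e-3, T=20.0):
    m, c = 0.0, 0.0
    for _ in range(int(round(T / dt))):
        dm = k - d * m
        dc = -2.0 * d * c + (k + d * m) / V
        m, c = m + dm * dt, c + dc * dt
    return m, c

m_star, c_star = integrate_cumulants()
```

The computed stationary mean is k/d and the stationary concentration variance is (k/d)/V, i.e., the number variance equals the number mean, as expected for this linear network.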
The fundamental importance of intrinsic and extrinsic noise within molecular networks has been increasingly appreciated in both the mathematical and biological communities in recent years. These studies include techniques to predict (Mettetal et al. 2006), estimate (Orrell et al. 2005), and control (Orrell and Bolouri 2004) stochastic noise, the effects of negative and positive feedback on intrinsic and external noise (Yi 2004), noise-induced qualitative changes in dynamics (Hasty et al. 2000, Chen et al. 2005, J. Wang et al. 2007), enhancement of cellular memory by reducing stochastic transitions (Acar et al. 2005), robustness properties with respect to noise (Gonze et al. 2002a), and other stochastic cellular models that quantitatively or qualitatively investigate the different roles played by noise (Tian and Burrage 2006, Tsimring et al. 2006). Elucidating the constructive roles of stochastic noise and developing new techniques to deal with these stochastic fluctuations will remain an important and attractive subject of active research.


2.6 Deterministic Representation

Generally, there are two mathematical forms for modeling dynamical molecular networks: a stochastic formulation that explicitly includes the discrete and probabilistic changes in reactant molecule numbers as reactions occur, and a deterministic formulation in which reactant concentrations vary continuously in time, governed by a system of rate equations. The advantage of the deterministic representation is that qualitative behavior can be obtained relatively easily using theoretical methods from complex networks, nonlinear dynamics, and control. Therefore, with the further development of sophisticated analytical and computational techniques, the deterministic representation is expected not only to be a powerful tool providing testable predictions and foundations for designing synthetic molecular networks and controlling cellular processes, but also to have great potential for biotechnological and therapeutic applications.

2.6.1 Basic Kinetics

In the deterministic formulation, a cellular system or molecular network is considered to be a series of elementary biochemical reactions whose kinetics can be described by rate equations according to the mass action law. Consider the simplest monomolecular reaction, called a first-order reaction because it has one reactant:

A →(k) A′.   (2.145)

This states that species A is converted into species A′ with rate constant k. As the reaction proceeds, [A] decreases and [A′] increases. In this book, [X] denotes the concentration of species X, but we also use X directly for its concentration when no confusion arises. Many biochemical reactions, such as transcription from DNA to mRNA, translation from mRNA to protein, and transport of a protein from the cytoplasm into the nucleus, can be written in this form. The reaction rate depends only on the reactants, not on the products, and is a positive quantity identical for all participating species in an elementary reaction. In particular, for the reaction (2.145), the rate equation has the form

r = d[A′]/dt = − d[A]/dt = k[A].   (2.146)

Bimolecular reactions are probably the most common reactions in cellular systems. The two reacting molecules can be the same or different. If the reactants are two different species, the reaction takes the following form, called a second-order reaction because it has two reactants:

A + B →(k) AB.   (2.147)


Many biochemical reactions, e.g., binding of a TF to DNA, formation of a heterodimer through the binding of two different proteins, and conversion of a substrate into a complex catalyzed by an enzyme, take this form. The reaction rate is

r = d[AB]/dt = − d[A]/dt = − d[B]/dt = k[A][B].   (2.148)

When both reactants are the same species, the reaction becomes

2A →(k) A₂.   (2.149)

Formation of a homodimer through the binding of two monomers and formation of a tetramer through the binding of two identical homodimers are reactions of this type. The reaction rate can be written as

r = d[A₂]/dt = − (1/2) d[A]/dt = k[A]².   (2.150)

The prefactor 1/2 ensures that the same rate is obtained for reactants and products.

DNA is a relatively stable molecule; mRNAs and proteins, in contrast, are constantly degraded by the cellular machinery and recycled. The degradation of mRNAs and proteins takes the simple form of another first-order reaction:

A →(k) 0,   (2.151)

where A can be an mRNA or a protein. The reaction rate can be written as

r = − d[A]/dt = k[A].   (2.152)

Besides monomolecular and bimolecular reactions, there are other, more complex reaction schemes, such as reversible reactions, parallel reactions, and consecutive reactions, each of which can be decomposed into multiple elementary reactions. There are also reaction types that occur only rarely, such as trimolecular reactions.

We now develop a formalism for a general biochemical reaction. A general elementary biochemical reaction takes the form (see (2.1))

r1R1 + r2R2 + · · · + rmRm →(k) p1P1 + p2P2 + · · · + pnPn,   (2.153)

where m is the number of reactant species and n is the number of product species. With these m reactants, (2.153) is a reaction of order Σ_{i=1}^{m} ri. Ri is the ith reactant and Pj is the jth product; ri and pj are the numbers of molecules of reactant Ri consumed and of product Pj produced in one reaction step, respectively. The coefficients ri and pj are known as stoichiometries. The reaction rate based on the mass action law is


r = − (1/ri) d[Ri]/dt = (1/pj) d[Pj]/dt = k Π_{l=1}^{m} [Rl]^{rl},   (2.154)

where i = 1, ..., m and j = 1, ..., n. When more than one reaction is considered, the differential equation for a given chemical species is simply the sum of the contributions of each reaction. For example, based on rate equations, the reactions

A + B →(k1) AB,   (2.155)
AB →(k−1) A + B   (2.156)

lead to the system of differential equations

d[AB]/dt = k1[A][B] − k−1[AB],   (2.157)
d[A]/dt = d[B]/dt = k−1[AB] − k1[A][B].   (2.158)
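As a sketch (with illustrative rate constants), the system (2.157)–(2.158) can be integrated with a fixed-step Runge–Kutta scheme; the conserved totals [A] + [AB] and [B] + [AB] provide a built-in correctness check on any implementation.

```python
# Mass-action rate equations (2.157)-(2.158) for A + B <-> AB,
# integrated with fixed-step RK4.  Rate constants are illustrative.
def rhs(state, k1=1.0, km1=0.5):
    A, B, AB = state
    r = k1 * A * B - km1 * AB        # net forward rate
    return (-r, -r, r)               # d[A]/dt, d[B]/dt, d[AB]/dt

def rk4(state, dt, steps):
    for _ in range(steps):
        s = state
        f1 = rhs(s)
        f2 = rhs(tuple(x + 0.5 * dt * v for x, v in zip(s, f1)))
        f3 = rhs(tuple(x + 0.5 * dt * v for x, v in zip(s, f2)))
        f4 = rhs(tuple(x + dt * v for x, v in zip(s, f3)))
        state = tuple(x + dt * (a + 2 * b + 2 * c + d) / 6
                      for x, a, b, c, d in zip(s, f1, f2, f3, f4))
    return state

# start from [A] = 2, [B] = 1, [AB] = 0 and integrate to t = 20
A, B, AB = rk4((2.0, 1.0, 0.0), dt=0.01, steps=2000)
```

At the end of the run the detailed-balance condition k1[A][B] = k−1[AB] holds to numerical precision, and the totals [A] + [AB] and [B] + [AB] remain at their initial values.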

The net rate of change is simply the sum of the rates of the two reactions. Generally, a cellular system consists of a network of coupled biochemical reactions. These reactions may be of the transcription, translation, dimerization, protein or mRNA degradation, enzyme-catalyzed, transport, or transduction type, and together they constitute various metabolic, genetic, and signaling networks. Moreover, one species may participate in multiple reactions, being a reactant in one reaction and a product in another. Thus, the change of one species is the sum of the changes due to all reactions in which it participates.

2.6.2 Deterministic Representation of a General Molecular System

Recalling the general molecular network represented by the master equation (2.56), we can also derive the deterministic representation as differential equations with the numbers of molecules as variables, by simply dropping the noise terms of (2.104):

dXi(t)/dt = Σ_{j=1}^{M} νji aj(X(t)),   (2.159)

where all variables and parameters are defined as in (2.104). The same expression, but with molecular concentrations as variables, is obtained from (2.134) by eliminating ξi(t). In fact, the deterministic representation (2.159) of a general molecular network is identical to (2.141) when Xi/V = mi and all cij = 0. In other words, when we ignore the stochastic noise (or let the means represent the whole dynamical state), (2.159) approximately captures the main dynamical behavior of the general molecular network described by (2.56).
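The representation (2.159) translates directly into code: given the stoichiometric matrix ν (species × reactions) and the propensity functions aj, the right-hand side is a single matrix–vector product. The small production–conversion–degradation network below is illustrative, not a model from the text.

```python
import numpy as np

# Deterministic representation (2.159): dX_i/dt = sum_j nu_ji a_j(X),
# built from a stoichiometric matrix and a list of propensities.
# The three-reaction network here is purely illustrative.
def make_rhs(nu, propensities):
    def rhs(X):
        a = np.array([aj(X) for aj in propensities])
        return nu @ a                 # one matrix-vector product
    return rhs

nu = np.array([[1, -1,  0],    # X1: produced, then converted to X2
               [0,  1, -1]])   # X2: produced from X1, then degraded
props = [lambda X: 5.0,         # constitutive production of X1
         lambda X: 1.0 * X[0],  # conversion X1 -> X2
         lambda X: 0.5 * X[1]]  # degradation of X2
rhs = make_rhs(nu, props)

# forward-Euler integration to (near) steady state
X = np.zeros(2)
for _ in range(20000):
    X = X + 0.001 * rhs(X)
```

Setting the right-hand side to zero gives the steady state directly: X1 = 5.0/1.0 and X2 = 5.0/0.5, which the integration approaches.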


2.6.3 Michaelis–Menten and Hill Equations

By exploiting multiple time scales, e.g., fast–slow dynamics, the kinetics (2.159) can be expressed approximately in a simpler form. A commonly used model for enzymatic reactions is the Michaelis–Menten (MM) equation, which is an approximation of the original dynamics. In this reaction mechanism it is assumed that the enzyme is neither consumed nor produced, so the total enzyme concentration remains constant. The enzyme interacts directly only with the substrate, forming an enzyme–substrate complex that leads to synthesis of the product and release of the enzyme:

E + S ⇌(k1, k−1) ES →(k2) E + P,   (2.160)

where E, S, and P denote enzyme, substrate, and product, respectively. The rate equations now form a system of differential equations:

d[S]/dt = − k1[E][S] + k−1[ES],   (2.161)
d[E]/dt = − k1[E][S] + (k−1 + k2)[ES],   (2.162)
d[ES]/dt = k1[E][S] − (k−1 + k2)[ES],   (2.163)
d[P]/dt = k2[ES].   (2.164)

The quasi-steady-state assumption for the enzyme–substrate complex ES, justified by its fast dynamics, i.e., d[ES]/dt = 0, leads to

[ES] = [E][S]/KM   (2.165)

with the MM constant

KM = (k−1 + k2)/k1.   (2.166)

Combining the quasi-steady-state complex concentration with the conservation law for the enzyme, [E]T = [E] + [ES], where [E]T is the total enzyme concentration, we obtain

[ES] = [E]T [S]/(KM + [S])   (2.167)

for the complex. This leads to the well-known MM equation

d[P]/dt = Vmax [S]/(KM + [S]),   (2.168)


where Vmax = k2[E]T is the maximum reaction rate and [E]T is assumed constant.

In a molecular network, when there are no cooperative interactions among molecules, such as dimerization, binding of TFs to their cofactors, or binding of TFs to an operator site of a promoter on DNA, a cellular process (e.g., gene regulation) can be described approximately by MM-type equations. Generally, gene expression can be regulated at multiple levels through interactions among inducers (or cofactors), repressors, activators, and operator sites (binding sites) on DNA. On the one hand, a repressor (or activator) protein, i.e., a TF, binds to an operator site at the beginning of a gene to prevent (or promote) RNAP attaching to the DNA to synthesize mRNA. On the other hand, an inducer, i.e., a cofactor, binds to a repressor (or activator), causing it to change shape and preventing (or enhancing) its binding to the DNA strand. The interactions among TFs, cofactors, and operator sites therefore lead to two categories of transcriptional regulation: activation and repression.

Take the binding of a repressor protein to an inducer as an example. The repressor protein P binds to a small inducer I to form a complex PI:

P + I ⇌(k1, k−1) PI.   (2.169)

The repressor is therefore found in either the free (P) or the bound (PI) form. Assume that the amount of P is relatively stable in contrast to the small inducer I, i.e., the conservation law states that the total repressor concentration remains constant at [P]T:

[P]T = [P] + [PI].   (2.170)

The kinetic rate equation is

d[PI]/dt = k1[P][I] − k−1[PI].   (2.171)

A system is said to be at equilibrium when its state ceases to change. Assume the system has reached equilibrium, so that there are no further net changes. Thus,

d[PI]/dt = k1[P][I] − k−1[PI] = 0,   (2.172)

which leads to

Keq [PI] = [P][I],   (2.173)

where Keq = k−1/k1 is called the equilibrium constant. Combining this with the conservation of the total repressor concentration, we derive the MM equation as follows:

[PI] = [P]T [I]/([I] + Keq).   (2.174)
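The equilibrium form (2.174) can be verified against the kinetics (2.171): integrating d[PI]/dt with [P] = [P]T − [PI] and a fixed inducer concentration (an excess-inducer assumption) should relax to [P]T[I]/([I] + Keq). The numbers below are illustrative.

```python
# Numerical check of (2.174): integrate the kinetic equation (2.171)
# with the conservation law [P] = [P]_T - [PI] and a fixed inducer
# concentration [I], and compare the steady state against the MM form
# [P]_T [I] / ([I] + Keq).  All values are illustrative.
def bound_repressor(PT, I, k1=2.0, km1=1.0, dt=1e-3, T=20.0):
    PI = 0.0
    for _ in range(int(round(T / dt))):
        # forward Euler on d[PI]/dt = k1 [P][I] - k_-1 [PI]
        PI += (k1 * (PT - PI) * I - km1 * PI) * dt
    return PI

PT, I = 1.0, 0.25
Keq = 1.0 / 2.0                  # Keq = k_-1 / k_1
numeric = bound_repressor(PT, I)
analytic = PT * I / (I + Keq)    # the MM form (2.174)
```

The two values agree to numerical precision, confirming that (2.174) is exactly the fixed point of (2.171) under the conservation law (2.170).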

The MM function has notable features (Alon 2006): (1) it saturates at high [I]; (2) it has a regime in which [PI] increases approximately linearly with [I] when [I] ≪ Keq; by contrast, cooperative binding follows a Hill function with a larger (n > 1) coefficient.

There are five slow reactions, represented by (2.227)–(2.228). Let z = (x, y) = (x1, x2, y1, y2), where x = (x1, x2) and y = (y1, y2). Therefore, n = 4, m = 9, m0 = 4, nx = 2, and ny = 2. The terms for k = 1, ..., 4 in Table 2.3 correspond to the four reactions in (2.226), whereas the terms for k = 5, ..., 9 are derived from the five reactions in (2.227)–(2.228). The master equation (2.224) for this simple molecular network is then obtained directly from Table 2.3. Clearly, φk for k = m0 + 1, ..., m and θk for k = 1, ..., m0 are zero vectors. The total numbers of the direct gene products (i.e., y1 and p) are affected only by the transcription, translation, and degradation reactions, which all belong to the gene network, whereas the other molecule numbers vary only in the protein network, although interactions exist between the gene and protein networks. The protein network can then be described as


Table 2.3 rk and wk for the simple molecular network, where V stands for the cell volume and rk = (φk, θk) = (φk,1, φk,2, θk,1, θk,2). Strictly, w1 should be w1 = k1 p(p − 1)/V rather than k1 p²/V. u = d + x2 is the total number of binding sites for the gene, which is a constant. θk for all k = 1, ..., 4 and φk for all k = 5, ..., 9 are zero vectors.

k   (φk,1, φk,2, θk,1, θk,2)   wk(x, y)
1   ( 1,  0,  0,  0)           k1 p²/V = k1 (y2 − 2x1 − 2x2)²/V
2   (−1,  0,  0,  0)           k−1 x1
3   (−1,  1,  0,  0)           k2 x1 d/V = k2 x1 (u − x2)/V
4   ( 1, −1,  0,  0)           k−2 x2
5   ( 0,  0,  1,  0)           α k3 d = α k3 (u − x2)
6   ( 0,  0,  1,  0)           k3 x2
7   ( 0,  0,  0,  1)           k4 y1
8   ( 0,  0, −1,  0)           d1 y1
9   ( 0,  0,  0, −1)           d2 p = d2 (y2 − 2x1 − 2x2)

∂P(x|y)/∂t = Σ_{k=1}^{m0} [wk(x − φk, y) P(x − φk|y) − wk(x, y) P(x|y)].   (2.229)

Substituting (2.225) into (2.224) and summing over all x, we have the following evolution equation for the marginal distribution of the gene network:

∂P(y)/∂t = Σ_{k=m0+1}^{m} [w̄k(y − θk) P(y − θk) − w̄k(y) P(y)],   (2.230)

where

w̄k(y) = Σₓ wk(x, y) P(x|y)   (2.231)

is an average value conditional on y. Since wk(x, y) is generally a polynomial in x and y, w̄k(y) can be expressed through the conditional moments or cumulants of x. According to (2.225), we can clearly obtain the dynamics of the biomolecular network from (2.229) and (2.230), which are much simpler than the original (2.224) and thus constitute a reduced biomolecular network. From the viewpoint of computational complexity, the decoupled master equations (2.229)–(2.230) demand considerably less CPU time than the original master equation (2.224).

2.7.2 Approximation of Continuous Variables in Molecular Networks

When we approximate x by continuous variables, (2.229) can be expressed approximately by the Fokker–Planck equation

∂P(x|y)/∂t = − ∂/∂x [A(x) P(x|y)] + (1/2) ∂²/∂x² [B(x) P(x|y)],   (2.232)

where

A(x) = (A1, ..., Anx),   (2.233)
D(x) = (D1, ..., Dnx),   (2.234)
B(x) = [Bij]nx×m0,   (2.235)

with

Ai(x) = Σ_{j=1}^{m0} φji wj(x, y),   (2.236)
Bij(x) = φji wj(x, y)^{1/2},   (2.237)
Di(x) = Σ_{j=1}^{m0} Bij²(x).   (2.238)

Therefore, (2.232) together with (2.230) represents the original (2.224). Generally, compared with the master equation (2.229), P(x|y) can be calculated more efficiently from (2.232) for a given y, thereby yielding the w̄k of (2.231) efficiently. Note that y is still treated as discrete when solved by the reduced master equation (2.230). Clearly, (2.229)–(2.232) constitute a hybrid system with both discrete and continuous variables.

2.7.3 Gaussian Approximation in Molecular Networks

When focusing on the gene network, we can easily examine its dynamics once the conditional moments or cumulants of x are provided, according to (2.230). In other words, provided that the necessary moments or cumulants in w̄k(y) and w̄k(y − θk) are available, the dynamics of the gene network simply follow (2.230), with a much smaller number of variables. The required moments or cumulants can be calculated from (2.229) by multiplying by xi for appropriate integers i and summing over all x.

Next, we approximate the model (2.224) by (2.230) under the assumption that all variables approximately follow Gaussian distributions. Let N be the conditional mean vector with elements Ni = ⟨xi|y⟩, and let M be the conditional correlation matrix with elements Mij = ⟨(xi − Ni)(xj − Nj)⟩ for x. Then, from (2.231), expanding wk at N, the transition rate w̄k(y) of (2.230) is given as

w̄k(y) = Σₓ [ wk(N, y) + ∂wk(N, y)/∂x (x − N) + · · · ] P(x|y)
       = wk(N, y) + (1/2) ∂²wk(N, y)/∂x² M + · · ·
       ≜ hk(N, M, y),   (2.239)


which requires the conditional moments of x. Notice that all odd central moments vanish and the even central moments can be expressed by the second central moment M in the Gaussian distribution. Therefore, w̄k(y) can be expressed by N and M with y, i.e., hk(N, M, y), and we only need to derive the means and correlations of x when assuming the Gaussian distribution, as indicated in (2.239). On the other hand, by multiplying (2.229) by xi and summing over x, we have the following evolution equation for the means:

dNi/dt = Σ_{k=1}^{m0} Σ_x φki wk(x − φk, y) P(x − φk|y)
       = Σ_{k=1}^{m0} Σ_x φki wk(x, y) P(x|y)
       = Σ_{k=1}^{m0} φki w̄k(y)
       ≜ fi(N, M, y),   (2.240)

where wk(x, y) is expanded at N in (2.240). According to (2.239), it is obvious that fi(N, M, y) = Σ_{k=1}^{m0} φki hk(N, M, y). Similarly, we can derive the evolution equations for the correlations by multiplying (2.229) by (xi − Ni)(xj − Nj) and summing over x as follows:

dMij/dt = Σ_{k=1}^{m0} Σ_x [φki φkj + φki(xj − Nj) + φkj(xi − Ni)] wk(x, y) P(x|y)
        = Σ_{k=1}^{m0} [φki φkj w̄k(y) + φki (∂wk(N, y)/∂x) Mj + φkj (∂wk(N, y)/∂x) Mi + ···]
        ≜ gij(N, M, y),   (2.241)

where Mi is the ith column of M. Note that M is a symmetric matrix, i.e., Mij = Mji. Therefore, the evolution equations can be expressed in terms of N and M for given y, as indicated in (2.240) and (2.241). For the simple molecular network shown in Figure 2.13, we can clearly derive from (2.239) the w̄1(y) shown in Table 2.4, where V is the cellular volume. The evolution equations for the means and correlations can easily be derived from (2.240) and (2.241). Therefore, the master equation (2.224) is reduced to the reduced master equation (2.230) together with a set of ordinary differential equations (ODEs), (2.240) and (2.241), which can easily be simulated numerically, e.g., using the Gillespie algorithm (Chen et al. 2005), owing to the small numbers of variables and reactions in the reduced master equation. Therefore, we have a decomposed system that is composed of a master equation (2.230) for the gene network and time-varying ODEs (2.240) and


Table 2.4 Each term for the simple molecular network

w̄1(y) = k1(y2 − 2N1 − 2N2)²/V + 4k1(2M12 + M11 + M22)/V
w̄2(y) = k−1 N1
w̄3(y) = k2 N1(u − N2)/V − k2 M12/V
w̄4(y) = k−2 N2
w̄5(y) = αk3(u − N2)
w̄6(y) = k3 N2
w̄7(y) = k4 y1
w̄8(y) = d1 y1
w̄9(y) = d2(y2 − 2N1 − 2N2)

f1 = w̄1(y) − w̄2(y) − w̄3(y) + w̄4(y)
f2 = w̄3(y) − w̄4(y)

g11 = w̄1(y) + w̄2(y) + w̄3(y) + w̄4(y) − 8k1(y2 − 2N1 − 2N2)(M11 + M12)/V − 2k1 M11 − 2k2(u − N2)M11/V + 2k2 N1 M12/V + 2k2 M12
g22 = w̄3(y) + w̄4(y) + 2k2(u − N2)M12/V − 2k2 N1 M22/V − 2k2 M22
g12 = −w̄3(y) − w̄4(y) − k2(u − N2)M12/V + k2 N1 M22/V + k2 M22 + k2(u − N2)M11/V − k2 N1 M12/V − k2 M12

(2.241) for the protein network. For an extreme case, assuming that the dynamics of the protein network is much faster than that of the gene network, the moments can be numerically or even analytically calculated by setting (2.240) to zero. Note that we do not require the quasi-steady-state equilibrium of the fast–slow dynamics in this section for the Gaussian approximation. In other words, under the assumption of the Gaussian distribution, (2.230) with (2.240)–(2.241) is an exact representation of the original master equation (2.224).

2.7.4 Deterministic Approximation in Molecular Networks

We further approximate the model (2.230) with (2.240) and (2.241) by a deterministic scheme in this section. The dynamics of the protein network is relatively fast, and its number of variables is usually substantially bigger than that of the gene network. Let Vp be the size of the protein system, e.g., the cellular volume. Since the stochastic deviation is approximately proportional to 1/√Vp, we assume that the noise of the protein network is almost averaged out. In other words, we assume that the reactions in the protein network are deterministic. Therefore, when y is given, according to (2.240), ⟨x|y⟩ = x and the protein network for x is a deterministic system, i.e.,

x˙i = Σ_{k=1}^{m0} φk,i wk(x, y),   (2.242)


where y is given. For given y and x(0), we have

ψ˙i(y) = Σ_{k=1}^{m0} φk,i wk(ψ(y), y)   with   ψ(y, 0) = x(0),   (2.243)

where ψ is the flow of the dynamics. (2.243) is a time-varying ODE due to the time-dependent y(t). Hence, the conditional probability of x is

P(x|y) = δ(x − ψ(y))   (2.244)

and

P(x, y) = P(x|y)P(y) = δ(x − ψ(y))P(y).   (2.245)

Therefore, we have the following master equation (2.246) with the ODE (2.243), which together form a hybrid system with both stochastic and deterministic processes:

∂P(y)/∂t = Σ_{k=m0+1}^{m} [wk(ψ(y − θk), y − θk) P(y − θk) − wk(ψ(y), y) P(y)].   (2.246)

For the simple molecular network, the master equation is (2.246) with m0 = 4 and m = 9, and the ODEs are

dψ2/dt = w1(ψ, y) − w2(ψ, y) − w3(ψ, y) + w4(ψ, y),   (2.247)
dψ3/dt = w3(ψ, y) − w4(ψ, y),   (2.248)

where x1(t) = ψ2(t) and x2(t) = ψ3(t). Moreover, approximating (2.246) by the Fokker–Planck equation, we can equivalently transform (2.246) into the form of the Langevin equations

dyi/dt = Ki(y) + ηi,   (2.249)

where ηi are Gaussian noise signals with zero mean ⟨ηi(t)⟩ = 0 and covariances ⟨ηi(t)ηj(t′)⟩ = Kij δ(t − t′). Ki and Kij are defined by

Ki(y) = Σ_{k=m0+1}^{m} θk,i wk,   (2.250)
Kij(y) = Σ_{k=m0+1}^{m} θk,i θk,j wk,   (2.251)

for all i and j. When there is no noise, it is easy to check that dyi/dt = Ki(y) is identical to the rate equation of the deterministic system.
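As a concrete illustration of how Langevin equations of the form (2.249)–(2.251) are integrated in practice, the following sketch applies the Euler–Maruyama scheme to a hypothetical single-species birth–death module with propensities a (production) and b·y (degradation), so that K(y) = a − by and K11(y) = a + by. The rate values, the non-negativity clipping, and the function name are illustrative assumptions, not part of the original model.

```python
import math
import random

def euler_maruyama(a=20.0, b=1.0, y0=0.0, dt=1e-3, t_end=10.0,
                   noise=True, seed=1):
    """Integrate dy/dt = K(y) + eta for a birth-death species, where
    K(y) = a - b*y is the drift (2.250) and K11(y) = a + b*y is the
    noise intensity (2.251), via the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    y, t = y0, 0.0
    while t < t_end:
        drift = a - b * y                 # K(y) = sum_k theta_k w_k
        diff = (a + b * y) if noise else 0.0   # K11(y) = sum_k theta_k^2 w_k
        y += drift * dt + math.sqrt(max(diff, 0.0) * dt) * rng.gauss(0.0, 1.0)
        y = max(y, 0.0)                   # copy numbers stay non-negative
        t += dt
    return y
```

Setting noise=False recovers the deterministic rate equation dy/dt = K(y), whose steady state is a/b.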


Thus, for the simple molecular network, we have the corresponding Langevin equations of (2.249) as follows:

dy1/dt = K1(ψ, y) + η1,   (2.252)
dy2/dt = K2(ψ, y) + η2,   (2.253)

where ηi are Gaussian noise signals with zero mean ⟨ηi(t)⟩ = 0 and covariances ⟨ηi(t)ηj(t′)⟩ = Kij δ(t − t′), with K1(ψ, y) = w5 + w6 − w8, K2(ψ, y) = w7 − w9, K11(ψ, y) = w5 + w6 + w8, K22(ψ, y) = w7 + w9, and K12 = K21 = 0. Hence, the gene–protein network can be simplified as (2.247)–(2.248) and (2.252)–(2.253).

As another decomposition method for the general molecular network (2.224), we can also partition the variables z as z = (x, y), where x represents the species present in large numbers and y the species present in small numbers. Then, by the partial Taylor expansion or the Kramers–Moyal expansion for x (Crudu et al. 2009), we can express the master equation approximately with x as continuous variables while keeping y as discrete variables. In such a way, the computational efficiency can be significantly improved due to the reduced number of discrete variables, which follow the master equation.

2.7.5 Prefactor Approximation of Deterministic Representation

A continuous approximation scheme, which reduces the number of dimensions of a system while predicting the dynamics of the entire system more accurately than the classic quasi-steady-state (QSS) approximation, was developed for the deterministic model (Kepler and Elston 2001, Bundschuh et al. 2003, Bennett et al. 2007). By correctly applying multiple time scale analysis, we find that the resulting reduced systems are similar to the QSS approximations, but with a prefactor in front of the time derivatives of the concentrations. Consider the example of the synthetic repressilator shown in Figure 2.14 (Elowitz and Leibler 2000). The reactions in the repressilator are as follows (Bennett et al. 2007):

xi + xi ⇌ yi   (forward rate κ+, reverse rate κ−),   (2.254)
di,0 + yk ⇌ dr,i   (forward rate k+, reverse rate k−),   (2.255)
di,0 → di,0 + mi   (rate α),   (2.256)
mi → mi + xi   (rate σ),   (2.257)
mi → 0   (rate γm),   (2.258)
xi → 0   (rate γp),   (2.259)


where di,0 and dr,i are the free and repressed promoter sites, respectively. If the promoter of gene Gi is free, its associated mRNA mi can be transcribed, which in turn is translated into its associated protein.
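The reaction scheme (2.254)–(2.259) can be written down directly as data. The sketch below is a hypothetical encoding of the reactions of one gene as (reactants, products, rate-constant name) tuples; the function name and tuple layout are our own conventions, not from the source.

```python
def repressilator_reactions(i, k):
    """Return the reactions (2.254)-(2.259) of gene i, repressed by
    dimer y_k, as (reactants, products, rate-constant name) tuples;
    reversible reactions are listed as two irreversible ones."""
    return [
        (("x%d" % i, "x%d" % i), ("y%d" % i,), "kappa+"),    # dimerization
        (("y%d" % i,), ("x%d" % i, "x%d" % i), "kappa-"),    # dissociation
        (("d%d,0" % i, "y%d" % k), ("dr,%d" % i,), "k+"),    # repressor binding
        (("dr,%d" % i,), ("d%d,0" % i, "y%d" % k), "k-"),    # unbinding
        (("d%d,0" % i,), ("d%d,0" % i, "m%d" % i), "alpha"), # transcription
        (("m%d" % i,), ("m%d" % i, "x%d" % i), "sigma"),     # translation
        (("m%d" % i,), (), "gamma_m"),                       # mRNA decay
        (("x%d" % i,), (), "gamma_p"),                       # protein decay
    ]
```

Such a list makes the stoichiometric vectors φk and θk of the general representation (2.224) explicit for each reaction.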

Figure 2.14 Schematic representation of the repressilator. Gene G1 produces protein x1 , whose dimer y1 inhibits transcription of gene G2 . Similarly, the protein dimer y2 represses the gene G3 , whose protein dimer y3 represses transcription of G1 (from (Bennett et al. 2007))

According to the mass action law, the differential equations are given by

x˙i = −2κ+ xi² + 2κ− yi + σ mi − γp xi,   (2.260)
y˙i = κ+ xi² − κ− yi − k+ yi dj,0 + k− dr,j,   (2.261)
d˙i,0 = −k+ di,0 yk + k− dr,i,   (2.262)
d˙r,i = k+ di,0 yk − k− dr,i,   (2.263)
m˙i = α di,0 − γm mi,   (2.264)

where i ∈ {1, 2, 3}, j ∈ {2, 3, 1}, and k ∈ {3, 1, 2}. Assume that the dimerization and dissociation processes are faster than the other processes and that these reactions are in equilibrium. Solving the resulting algebraic equations, we obtain

yi = cp xi²,   (2.265)
di,0 = d/(1 + cd cp xk²),   (2.266)
dr,i = d cd cp xk²/(1 + cd cp xk²),   (2.267)


where d = di,0 + dr,i, cp = κ+/κ−, and cd = k+/k−. Substituting these expressions into (2.260)–(2.261), we obtain the following differential equations:

n˙i = x˙i ∂ni/∂xi = x˙i p(xi),   (2.268)
m˙i = αd/(1 + cd cp xk²) − γm mi,   (2.269)

where ni = xi + 2yi + 2dr,j ≈ xi + 2cp xi² + 2cd cp d xi²(1 + cd cp xi²)⁻¹ and

p(xi) = 1 + 4cp xi + 4cd cp d xi/(1 + cd cp xi²)².   (2.270)

Clearly, ni is the total number or concentration of protein i, including the molecules in complexes. Defining dimensionless variables by rescaling γm t → t, √(cd cp) xi → xi, and (σ√(cd cp))/(γm β) mi → mi, we obtain the system

p(xi) x˙i = β(mi − xi),   (2.271)
m˙i = κd/(1 + xk²) − mi,   (2.272)

where β = γp/γm, κ = ασ/(γm γp), d is rescaled as cd cp d, and p(xi) x˙i = n˙i. Note that p(xi) in (2.271) is also the rescaled one, i.e.,

p(xi) = 1 + 4r xi + 4d xi/(1 + xi²)²,   (2.273)

where r = √(cp/cd). The difference between (2.271)–(2.272) and the model under the QSS as used in (Elowitz and Leibler 2000) is the presence of the prefactor p(xi). This difference results from different reduction approaches, i.e., treating xi and ni = xi + 2yi + 2dr,j as the slow variables under the QSS and the prefactor approximation, respectively. It is true that xi depends on the slow reactions, but it also depends on the fast ones. Therefore, xi is not a slow variable but a mixture of both slow and fast variables. Here, the true slow variable is ni, representing the total concentration of protein molecules. It has been shown that the prefactor approximation predicts the dynamics of the entire system more accurately than the classic QSS approximation, including both transient dynamics and other properties such as relaxation times of equilibria and the periods and amplitudes of oscillations. For example, a comparison of the periods and amplitudes predicted by the entire system (2.260)–(2.264), the prefactor approximation (2.271)–(2.272), and the QSS approximation (i.e., p(xi) = 1 in (2.271)–(2.272)) is shown in Figure 2.15. It can be seen that the prefactor approximation predicts the dynamics of the entire system more accurately than the QSS approximation with respect to both the periods and the amplitudes. See (Bennett et al. 2007) for more theoretical analysis and examples.
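The reduced models (2.271)–(2.272) can be compared numerically as follows. This sketch integrates both the prefactor model and the QSS variant (p(xi) = 1) with a simple Euler scheme; all parameter values, initial conditions, and the function name are illustrative assumptions, not those used in (Bennett et al. 2007).

```python
def repressilator_reduced(prefactor=True, beta=5.0, kappa=25.0,
                          d=1.0, r=1.0, dt=1e-3, t_end=50.0):
    """Euler integration of the reduced repressilator (2.271)-(2.272).
    With prefactor=False, p(x) = 1 and the model is the classic QSS one.
    Returns the three rescaled protein concentrations at t_end."""
    x = [1.0, 2.0, 3.0]   # rescaled protein concentrations
    m = [0.0, 0.0, 0.0]   # rescaled mRNA concentrations
    rep = [2, 0, 1]       # gene i is repressed by the dimer of gene rep[i]
    t = 0.0
    while t < t_end:
        xn, mn = x[:], m[:]
        for i in range(3):
            xk = x[rep[i]]
            if prefactor:  # p(x) as in (2.273)
                p = 1.0 + 4.0 * r * x[i] + 4.0 * d * x[i] / (1.0 + x[i] ** 2) ** 2
            else:
                p = 1.0
            xn[i] = x[i] + dt * beta * (m[i] - x[i]) / p           # (2.271)
            mn[i] = m[i] + dt * (kappa * d / (1.0 + xk ** 2) - m[i])  # (2.272)
        x, m, t = xn, mn, t + dt
    return x
```

Running both variants from identical initial conditions makes the effect of the prefactor on the trajectories directly visible.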


Figure 2.15 A comparison of the periods (a) and the amplitudes (b) of the oscillations as functions of γp for the entire system (black curves), the prefactor approximation (circles), and the QSS approximation (dashed curves). The parameters are γp = 6, γm = 1, κ+ = k+ = 5, κ− = k− = 100, α = 10, σ = 20, and d = 20 (from (Bennett et al. 2007))

2.7.6 Stochastic Simulation of Hybrid Systems

Define (x, y) and (X, Y) to be concentrations and numbers of molecules, respectively. Reconsider the general system (2.224) by setting z = (x, y) and rk = (φk, θk) with x = (x1, ..., xnx), y = (y1, ..., yny), and nx + ny = n, where xi and yi are the concentrations of molecules, i.e., the numbers divided by the system size or volume V. Then, the dynamics of the system is described by the master equation

∂P(x, y; t)/∂t = Σ_{k=1}^{m} [wk(x − φk/V, y − θk/V) P(x − φk/V, y − θk/V; t) − wk(x, y) P(x, y; t)],   (2.274)

where (φk, θk) is the state-change vector, i.e., rk,j is the change in the number of the jth molecular species caused by the kth reaction, and wk(x, y) is the transition rate (≥ 0) from state (x, y) to state (x + φk/V, y + θk/V) by the kth reaction. Note that (x, y) represent concentrations in (2.274), in contrast to the numbers z in (2.224).

Hybrid System with Deterministic Process

Assuming that the number of X is much bigger than that of Y, we can approximate X by continuous variables x, i.e., x = X/V, while keeping y = Y/V as discrete variables. Therefore, by the partial Kramers–Moyal expansion of (2.274) with respect to x and φk/V up to the first order (i.e., the zeroth order and the first order), we have the following hybrid representation


∂P(x, y; t)/∂t = Σ_{k=1}^{m} [wk(x, y − θk/V) P(x, y − θk/V; t) − wk(x, y) P(x, y; t)]
  − Σ_{j=1}^{nx} ∂/∂xj [Σ_{k=1}^{m} φkj Wk(x, y) P(x, y; t)] + O(1/V),   (2.275)

where wk = Wk V are the rates of the reactions, which are proportional to the volume V on the basis of the mass action law, and O(1/V) denotes terms of order 1/V or higher. Clearly, the term of the Taylor zeroth-order expansion, i.e., the first term Σ_{k=1}^{m}[·] in (2.275), is the discrete dynamics or the master equation for the discrete variables y, and the term of the Taylor first-order expansion, i.e., the second term Σ_{j=1}^{nx} ∂/∂xj[·] in (2.275), is the deterministic kinetic dynamics, corresponding to the Langevin equation, for the continuous variables x. The term of the Taylor second-order expansion, i.e., O(1/V), corresponds to the diffusion process and approaches zero as V → ∞. Therefore, (2.275) is a hybrid system with both discrete and continuous dynamics, or with both stochastic and deterministic processes. Specifically, the stochastic system for the discrete variables Y is

∂P(x, y; t)/∂t = Σ_{k=1}^{m} [wk(x, y − θk/V) P(x, y − θk/V; t) − wk(x, y) P(x, y; t)],   (2.276)

and the deterministic system with the continuous variables x is, for j = 1, ..., nx,

dxj(t)/dt = Σ_{k=1}^{m} φkj Wk(x(t), y).   (2.277)

Let δ(θk) = 0 if θk is a zero vector; otherwise, δ(θk) = 1. Therefore, defining the jump intensity w0 = Σ_{k=1}^{m} wk(x, y)δ(θk), we have the algorithm of stochastic simulation based on the piecewise deterministic Markov process (PDMP) (Davis 1993, Zeiser et al. 2008, Crudu et al. 2009) shown in Table 2.5. Clearly, the continuous variables x are governed by the deterministic system and change continuously during each time interval [ti, ti + Δti), while the discrete variables y remain constant. Therefore, x are piecewise continuous variables and may change discretely at ti + Δti. On the other hand, y evolve discretely with stochastic motion punctuated by a sequence of random waiting times Δti according to the master equation (2.276). Such dynamics is schematically shown in Figure 2.16.

Hybrid System with Diffusion Process

Moreover, we can expand (2.274) to the second order of V to consider the diffusion effect, i.e.,

Table 2.5 Algorithm of stochastic simulation for (2.275) based on PDMP

Step 1. Initialization: set t0 = 0 and fix the initial concentrations of molecules (X0/V, Y0/V).
Step 2. Calculate the propensity functions wk, k = 1, ..., m.
Step 3. Generate a random number r1 uniformly distributed in [0, 1).
Step 4. Integrate the following differential equations between ti and ti + Δti:
   dxj(t)/dt = Σ_{k=1}^{m} φkj Wk(x(t), y) for j = 1, ..., nx,
   dq(t)/dt = −w0(x(t), y) q(t),
with x(ti) = xi and q(ti) = 1, under the stopping condition q(ti + Δti) = r1. Then we obtain Δti.
Step 5. Generate a second random number r2 uniformly distributed in [0, 1). Choose μi to be the smallest integer satisfying Σ_{j′=1}^{μi} wj′(x, y) > r2 w0(x, y).
Step 6. Execute the reaction μi, i.e., update (x, y). If ti > Tmax, terminate the computation. Otherwise, go to Step 2.
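The steps of Table 2.5 can be sketched for a minimal hypothetical model: a two-state promoter y ∈ {0, 1} that switches stochastically (rates k_on, k_off) while the protein level x evolves deterministically between jumps according to dx/dt = αy − γx. The model, all rate values, and the function name are assumptions for illustration; with only one possible jump per promoter state, the reaction selection of Step 5 is trivial.

```python
import math
import random

def pdmp_gene(alpha=10.0, gamma=1.0, k_on=0.5, k_off=1.0,
              t_max=100.0, dt=1e-3, seed=2):
    """Piecewise deterministic Markov process for a two-state gene.
    Steps 3-4 of Table 2.5: integrate x and the survival variable q
    until q reaches the random threshold r1, then execute the jump."""
    rng = random.Random(seed)
    t, x, y = 0.0, 0.0, 0
    while t < t_max:
        w0 = k_on if y == 0 else k_off        # total jump intensity (constant here)
        r1 = rng.random()                      # Step 3
        q = 1.0
        while q > r1 and t < t_max:            # Step 4: dq/dt = -w0*q
            x += dt * (alpha * y - gamma * x)  # deterministic flow between jumps
            q *= math.exp(-w0 * dt)
            t += dt
        if q <= r1:
            y = 1 - y                          # Step 6: execute the jump
    return x, y
```

Since w0 is constant for a fixed promoter state, the waiting time here could also be drawn directly as Δt = −ln(r1)/w0; the explicit integration of q mirrors the general algorithm, where w0 depends on x(t).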

∂P(x, y; t)/∂t = Σ_{k=1}^{m} [wk(x, y − θk/V) P(x, y − θk/V; t)   (2.278)
  − wk(x, y) P(x, y; t)]   (2.279)
  − Σ_{j=1}^{nx} ∂/∂xj [Σ_{k=1}^{m} gjk(x, y; t) P(x, y; t)]   (2.280)
  + Σ_{j=1}^{nx} Σ_{l=1}^{nx} ∂²/(∂xj ∂xl) [Σ_{k=1}^{m} (φkj φkl/(2V)) Wk(x, y) P(x, y; t)]   (2.281)
  + O(1/V²),   (2.282)

where



Figure 2.16 Schematic illustration of a piecewise deterministic Markov process

gjk(x, y; t) = φkj Wk(x, y) − Σ_{l=1}^{ny} (φkj θkl)/(V P(x, y; t)) ∂[Wk(x, y) P(x, y; t)]/∂yl
            = φkj Wk(x, y) − Σ_{l=1}^{ny} (φkj θkl/V) [∂Wk(x, y)/∂yl + Wk(x, y) ∂ln P(x, y; t)/∂yl].   (2.283)

Therefore, (2.280)–(2.281) can also be expressed by Langevin equations instead of the differential equations (2.277), i.e., stochastic differential equations with the continuous variables x, for j = 1, ..., nx:

dxj(t)/dt = Σ_{k=1}^{m} gjk(x(t), y; t) + Σ_{k=1}^{m} (φkj/√V) √(Wk(x(t), y)) Γk(t),   (2.284)

where Γk(t) is defined in (2.105). For this case, the hybrid system is the combination of the discrete stochastic system (2.276) and the continuous stochastic system (2.284), which can be simulated similarly to the algorithms shown in Table 2.5 and Table 2.6. In Table 2.6, Vj(t) are independent one-dimensional Wiener processes, and the stochastic differential equation can be calculated by Itô integration. Clearly, P(x(t), y; t) appears in (2.284) through gjk and must be estimated during the integration. There are many ways to approximate P(x(t), y; t), such as the finite state projection approach, a Gaussian distribution assumption for the continuous variables, or the equilibrium probability distribution. Here we consider a scheme to estimate ∂ln P(x, y; t)/∂yl approximately. Since y corresponds to discrete variables which are expected to change the


Table 2.6 Algorithm of stochastic simulation for (2.278)–(2.282) based on PDMP

Step 1. Initialization: set t0 = 0 and fix the initial concentrations of molecules (X0/V, Y0/V).
Step 2. Calculate the propensity functions wk, k = 1, ..., m.
Step 3. Generate a random number r1 uniformly distributed in [0, 1).
Step 4. Integrate the following stochastic differential equations between ti and ti + Δti:
   dxj(t) = Σ_{k=1}^{m} gjk(x(t), y; t) dt + Σ_{k=1}^{m} (φkj/√V) √(Wk(x(t), y)) dVk(t) for j = 1, ..., nx,
   dq(t) = −w0(x(t), y) q(t) dt,
with x(ti) = xi and q(ti) = 1, under the stopping condition q(ti + Δti) = r1. Then we obtain Δti.
Step 5. Generate a second random number r2 uniformly distributed in [0, 1). Choose μi to be the smallest integer satisfying Σ_{j′=1}^{μi} wj′(x, y) > r2 w0(x, y).
Step 6. Execute the reaction μi, i.e., update (x, y). If ti > Tmax, terminate the computation. Otherwise, go to Step 2.

dynamics in a slow manner in contrast to the continuous variables x, we assume ∂P(x(t), y; t)/∂yl ≈ 0, or ∂ln P(x(t), y; t)/∂yl ≈ 0. Specifically, we have

gjk(x, y; t) = φkj Wk(x, y) − Σ_{l=1}^{ny} (φkj θkl/V) ∂Wk(x, y)/∂yl.   (2.285)

2.8 Stochastic versus Deterministic Representation

Stochastic and deterministic approaches both have advantages and disadvantages. The advantage of stochastic approaches is that they exactly capture the discrete and stochastic nature of cellular systems. At low concentrations of molecules, molecular fluctuations are likely to have a marked impact on system dynamics. The predictions of deterministic and stochastic models for circadian rhythms show that robust circadian oscillations can be observed even when the maximum numbers of mRNA and protein molecules are only of the order of tens and hundreds, respectively (Gonze et al. 2002b). To assess the effects of molecular noise exactly, it is necessary to resort to a stochastic approach.


However, almost all analytical methods available for deterministic approaches are no longer applicable, and stochastic simulations are time-consuming; their computational efficiency rapidly degrades as the complexity of a system increases. One should therefore use stochastic approaches only if they are absolutely necessary. Deterministic approaches, which neglect the discrete and stochastic nature, have received considerable attention due to their simplicity for qualitative and quantitative studies. For deterministic formalisms, there are rich techniques, e.g., structural analysis, cellular control analysis, frequency analysis, and bifurcation analysis, that can be used to qualitatively or quantitatively analyze the system dynamics. In many cases, stochastic and deterministic descriptions of a system coincide in the sense that the mean behavior of the system can be accurately captured by the deterministic description. For example, sustained oscillation corresponding to a limit cycle in a deterministic circadian rhythm model can also be obtained in its corresponding stochastic description, i.e., the stochastic oscillations fluctuate around the deterministic limit cycle (Gonze et al. 2006).
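As a minimal illustration of this stochastic-versus-deterministic correspondence, the following Gillespie simulation of a hypothetical birth–death mRNA model (production at rate k, degradation at rate g·n) can be checked against the deterministic steady state k/g; the rate values and function name are illustrative assumptions.

```python
import random

def gillespie_birth_death(k=5.0, g=0.5, t_end=200.0, seed=3):
    """Exact stochastic simulation (Gillespie algorithm) of
    0 -> mRNA (rate k) and mRNA -> 0 (rate g*n). Returns the
    time-averaged copy number, which fluctuates around the
    deterministic steady state k/g."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    acc = 0.0                         # time-weighted copy-number accumulator
    while t < t_end:
        a1, a2 = k, g * n             # propensities of the two reactions
        a0 = a1 + a2
        tau = rng.expovariate(a0)     # exponential waiting time to next event
        acc += n * min(tau, t_end - t)
        t += tau
        if t >= t_end:
            break
        n += 1 if rng.random() * a0 < a1 else -1   # pick birth or death
    return acc / t_end
```

With k = 5 and g = 0.5 the deterministic steady state is k/g = 10, and the time-averaged stochastic copy number stays close to it, illustrating the coincidence of the mean behavior.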

oŒ™ˆ™Š G–G”–‹Œ“•ŽG–™”ˆ“š”š “–ž

n  n™ˆ—š

Ž

i––“Œˆ•GŒ˜œˆ›–•š v™‹•ˆ™ G‹Œ™Œ•›ˆ“GŒ˜œˆ›–•š

hŠŠœ™ˆŠ  j–”—“ŒŸ› 

h‰š›™ˆŠ›–• jœ”œ“ˆ•›GGŒ˜œˆ›–•š

z›–Šˆš›ŠG‹Œ™Œ•›ˆ“GŒ˜œˆ›–•š

i–ŠŒ”Šˆ“GG”ˆš›Œ™GŒ˜œˆ›–•š Ž

yŒˆŠ›–•T‹œš–•G”ˆš›Œ™GŒ˜œˆ›–•š

“–ž

Figure 2.17 Formalisms to model molecular networks

Generally, when a deterministic system is operating near a critical point, stochastic and deterministic processes may be substantially different. In this case, noise can induce some new phenomena or qualitative changes. For example, for some parameters, both the stochastic and deterministic processes in a circadian rhythm model coincide, i.e., they both exhibit periodic oscillations


with only differences in their periods and in concentration fluctuations. However, for other parameters, the deterministic description results in a stable equilibrium, while stable oscillations persist in the stochastic description (Vilar et al. 2002). Other noise-induced phenomena, such as noise-based switches and amplifiers for gene expression (Hasty et al. 2000) and fluctuation-enhanced sensitivity of intracellular regulation (Paulsson et al. 2000) in a single cell, have also been reported. Therefore, when species are present at low copy numbers, the stochastic description is more reasonable, although it is neither analytically solvable nor computationally efficient. On the other hand, when the species numbers are high and the system is operating far from its critical points, the deterministic description is more reasonable due to its simple representation and high computational efficiency. The features of the various formalisms for modeling molecular networks are briefly summarized in Figure 2.17. Depending on the requirement for accuracy, we can choose different modeling approaches for quantitative simulation and qualitative analysis of cellular dynamics.

3 Deterministic Structures of Biomolecular Networks

One of the ultimate goals in molecular biology is to understand the physiology of living cells in terms of the information that is encoded in the genome of a cell. The central dogma of molecular biology, i.e., DNA encodes RNA, which in turn produces protein molecules, provides a framework for understanding the flow of information from DNA through RNA to protein molecules. Individual molecules, such as proteins, perform various functions in complex molecular networks and play key roles in most cellular processes. For example, a protein acting as a transcription factor may affect the production rates of other proteins or of itself through transcriptional regulation. Therefore, understanding how genes, proteins, and small molecules dynamically interact to form molecular networks which realize sophisticated biological functions has become one of the major challenges of post-genomic biology. Recent advances in genomic science have made the quantitative analysis of molecular interactions, e.g., PPIs and DNA–protein interactions, possible because of progress in experimental and measurement techniques, unlike conventional qualitative studies. Nonlinear phenomena in cellular dynamics, such as biochemical oscillations and gene expression multistability, have been extensively investigated through various mathematical models, in particular for molecular networks in simple organisms. Mathematical models can provide testable quantitative predictions despite the complexity of the networks. In addition, general regulatory principles can be found through them, allowing us to manipulate and monitor various biological processes at the molecular level, which has great potential for biotechnological and therapeutic applications.
A biomolecular network can be expressed as a set of vertices representing cellular elements, e.g., genes, proteins, metabolites, and complexes, connected by edges which represent the relations between pairs of elements such as biochemical reactions and intermolecular interactions. Networks enable representation and characterization of biological processes such as signaling, metabolic, and regulatory processes. For example, a metabolic network can be viewed as a directed graph where each vertex represents a metabolite and every edge

102

3 Deterministic Structures of Biomolecular Networks

represents a biochemical reaction that transforms one metabolite into another. On the other hand, in a protein–protein interaction network or protein interaction network, a vertex represents a protein and an edge represents a pairwise interaction between two proteins, that is, two proteins are connected if they interact with each other. A representative gene regulatory network with two genes lac and cI is shown in Figure 3.1, which is actually a schematic representation of a gene oscillator (Hasty et al. 2002a,Hasty et al. 2002b). The two genes synthesize their mRNAs and subsequently proteins X and Y , which in turn activate and repress the two genes. In this directed network, interactions include transcription, translation, and protein–DNA binding. In this chapter we introduce some basic concepts in modeling biomolecular networks to help readers understand the related problems discussed in the later chapters and provide a general structure for molecular networks in the deterministic form. In particular, we provide examples of gene regulatory networks. Other kinds of networks can be similarly discussed.

Figure 3.1 Schematic representation of a gene oscillator (from (Hasty et al. 2002b))

Cellular regulation is highly integrated and consists of signaling, metabolic, and regulatory processes although various analyses often focus independently on one or some of these processes. For example, signaling cascades are triggered by the presence of extracellular stimuli and often result in the activation of the targets, which may be either enzymes within the cytoplasm or transcription factors. The transcription factors which regulate transcription of associated genes and synthesis of various proteins involved in signal transduction and metabolism function in transcription regulatory networks. The enzymes may be modified so that their catalytic activities are increased or decreased in response to extracellular signals. Consequently, biomolecular networks are highly integrated.

3.1 A General Structure of Molecular Networks

103

Understanding how various cellular processes are controlled and what their general structure is remains one of the major challenges in systems biology research. Traditionally, research has focused on the characterization of individual components and their interactions. However, almost all cellular functions cannot be attributed to isolated components. Rather, they are associated with characteristic molecular networks. The structure and dynamics of molecular networks have been the subject of active research using computational modeling coupled with various experimental techniques and methodologies. Theoretical and computational studies of dynamics in molecular networks span a broad range of topics, such as the dynamics of various regulatory processes in transcriptional, signaling, and metabolic networks, as shown in Figure 3.2.

[Figure 3.2 shows biochemical interactions organized into signaling networks, regulatory networks, and metabolic networks (including ATP/ADP-coupled reactions), which together determine network structure and dynamics.]

Figure 3.2 An overview of network structure and dynamics in molecular networks

3.1 A General Structure of Molecular Networks

All living organisms consist of one or more cells, which are the basic structural units of an organism. A cell is an integrated device comprising several thousand or more types of genes, proteins, and other molecules. The set of nodes representing these biochemical components and the set of directed or undirected edges representing the interactions between them constitute a molecular network, whose dynamical properties cannot be understood from the individual components alone. Complex molecular networks can perform various


specific functions, including DNA replication, translation, conversion of glucose to pyruvate, and cell cycle regulation. The physiological functions of cells and organisms actually result from the coordinated or integrated functions of multiple molecular networks. Recent advances in mathematical modeling in biology have demonstrated that molecular networks can be well described by mathematical models. These models shed light on the design principles of molecular networks with specified functions and allow non-trivial predictions, some of which have been verified experimentally. Consequently, to understand how these networks are built, what their general structure is, and how they function, one must develop a conceptual framework, i.e., a precise mathematical description, which can be used to describe and analyze these networks. An appropriate mathematical model allows qualitative or even quantitative predictions that provide guidelines for conducting experiments. Advanced computing devices combined with improved numerical techniques have made it possible to simulate and analyze the dynamical properties of various molecular networks. To build and analyze theoretical models or structures of the various complex molecular networks shown in Figure 3.2, a bottom-up approach can be used: propose a hypothetical network of biochemical reactions among the component species, formulate a set of dynamical equations which describe the temporal and spatial evolution of the network, analyze the equations qualitatively or quantitatively, compare the behavior of the network with that of living cells, and, consequently, better understand the underlying molecular basis of cell physiology. In principle, the governing equations for any chemical reaction network can be formulated by the mass action law.
Therefore, the differential equation formulation, which models the concentrations of cellular components by time-dependent variables, has been widely used to analyze various molecular networks. For instance, regulatory interactions take the form of differential relations among the concentration variables. One example is the ODE models based on rate equations of such forms. Next, we define a general molecular network on the basis of differential equations in a mathematical manner.

3.1.1 Basic Definitions

Let R+ be the set of non-negative real numbers. Assume that a molecular network is composed of n biochemical components, which can be proteins, mRNAs, chemical complexes, different states of the same protein, or proteins at different locations in a cell. The network can be represented by a functional differential equation (FDE)

dx(t)/dt = f(xt),   (3.1)


where x(t) = (x1(t), ..., xn(t)) ∈ X ⊂ R+ⁿ denotes the concentrations of all components at time t ∈ R. Let C+ ≡ C([−r, 0], R+ⁿ), where C([−r, 0], R+ⁿ) is the space of continuous maps from [−r, 0] into R+ⁿ. xt ∈ C+ is defined by xt(θ) ≡ x(t + θ), −r ≤ θ ≤ 0. The reaction rates f = (f1, ..., fn): C+ → R+ⁿ are continuously differentiable and map a bounded subset of C+ to a bounded subset of R+ⁿ. Note that the reaction rates f include both the synthesis and the degradation rates of the components. Let X̂ ⊂ C+ be the induced space of X on [−r, 0] into R+ⁿ, i.e., φ ∈ X̂ means φ(θ) ∈ X for −r ≤ θ ≤ 0. A special form of (3.1), which is widely used, is differential equations with discrete delays represented by

dxi(t)/dt = fi(x1(t − τi1), ..., xn(t − τin)) ≜ fi(xτi),   (3.2)

where τ_ij (i, j = 1, ..., n) denotes the time delay from component j to component i, and x_{τ_i} = (x_1(t − τ_i1), ..., x_n(t − τ_in)). These delays arise from the time required to complete transcription, translation, and diffusion to the places where the RNAs or proteins act. In this book, (3.2) is adopted in most cases to simplify the descriptions, although most theoretical results related to (3.2) also hold for (3.1). When all delays are set to zero, (3.2) takes the form of an ODE. Time delays are often involved in gene regulatory networks. For example, a delay can represent the time taken for a protein to repress or activate the production of its own or other proteins, including the time for translation and processing steps such as multiple phosphorylation, nuclear entry, and complex formation. Few studies have focused on metabolic delays because most metabolic reactions are fast. The temporal behavior of a metabolic network, consisting of n metabolites and r reactions, can often be described by a set of differential equations

dS(t)/dt = N ν(S(t), p),    (3.3)

where S denotes the n-dimensional vector of biochemical reactants and N denotes the n × r stoichiometric matrix. Clearly, the time delays are all zero in this system, i.e., x_τ = x(t). The stoichiometric matrix N contains important information about the structure of the metabolic network. The r-dimensional vector ν(S, p) consists of reaction rates, which depend on the substrate concentrations S as well as on a set of parameters p, e.g., enzyme activities. The reaction rates can be determined on the basis of the enzyme dynamics, which obey the mass action law. The description of the metabolic system (3.3) consists of a vector S = (S_1, S_2, ..., S_n)^T of concentration values, a vector ν = (ν_1, ν_2, ..., ν_r)^T of reaction rates or fluxes, a parameter vector p = (p_1, p_2, ..., p_m)^T, and the stoichiometric matrix N. The equation describes the rate of concentration change of each metabolite, including the consumption and production of a metabolite.


3 Deterministic Structures of Biomolecular Networks

Take the simple metabolic network with four chemicals S_1, ..., S_4 and fluxes ν_1, ..., ν_4 shown in Figure 3.3 as an example. The stoichiometric matrix takes the form

N = [ 1  −1   0   0 ]
    [ 0   1  −1   0 ]
    [ 0  −1   0   1 ]
    [ 0   1   0  −1 ].    (3.4)


Figure 3.3 A simple metabolic system
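The stoichiometric formulation (3.3)–(3.4) lends itself to a direct numerical check. The following sketch (in Python with NumPy; the flux values are arbitrary illustrative numbers) evaluates dS/dt = Nν for the network of Figure 3.3 and confirms that d(S_3 + S_4)/dt = 0, i.e., that S_3 + S_4 is a conserved quantity:

```python
import numpy as np

# Stoichiometric matrix N of (3.4): rows correspond to species S1..S4,
# columns to fluxes v1..v4 of the network in Figure 3.3.
N = np.array([
    [1, -1,  0,  0],
    [0,  1, -1,  0],
    [0, -1,  0,  1],
    [0,  1,  0, -1],
])

# Right-hand side dS/dt = N @ nu of (3.3) for an arbitrary flux vector nu.
nu = np.array([1.0, 0.5, 0.2, 0.3])
rates = N @ nu

print(rates)                # (v1 - v2, v2 - v3, v4 - v2, v2 - v4)
print(rates[2] + rates[3])  # 0.0: S3 + S4 is conserved for any fluxes
```

Because rows 3 and 4 of N sum to zero, (0, 0, 1, 1) is a left null vector of N, which is exactly the conservation condition that allows one equation of the system to be eliminated.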

The system dynamics is described by a set of ODEs

Ṡ_1 = ν_1 − ν_2,    (3.5)
Ṡ_2 = ν_2 − ν_3,    (3.6)
Ṡ_3 = ν_4 − ν_2,    (3.7)
Ṡ_4 = ν_2 − ν_4,    (3.8)

where Ṡ_i = dS_i(t)/dt. The conservation condition is that S_3 + S_4 is constant. Generally, there are some equations for conserved quantities, i.e., the sum of two or more metabolites is a conserved quantity. These conservation conditions can be used to simplify the system; e.g., the fourth equation can be eliminated on the basis of the conservation condition. Clearly, (3.5)–(3.8) form a linear system in the fluxes ν_i, but the system can in general be nonlinear. A function x(t; φ) ∈ R_+^n is said to be a solution of (3.1) if it satisfies (3.1) for all t ≥ t_0 with x(t_0 + θ; φ) = φ(θ), −r ≤ θ ≤ 0, where φ ∈ C_+ is a given initial function. To emphasize the initial function, we define x_t(φ) ≡ x(t + θ; φ) with x_{t_0}(φ) = x(t_0 + θ; φ) = φ(θ), −r ≤ θ ≤ 0. For (3.1), orbits, equilibria, periodic orbits, and omega and alpha limit sets are defined in the following ways. Let x̂ be the constant function equal to x for all values of its argument, i.e., x̂(θ) = x, where −r ≤ θ ≤ 0. In other words, x̂ is the natural inclusion from x ∈ R_+^n to x̂ ∈ C_+ defined by x̂(θ) = x with −r ≤ θ ≤ 0.

Definition 3.1. The set of equilibria for (3.1) is defined by

E ≡ {φ ∈ C_+ : φ = x̂ for some x ∈ R_+^n satisfying f(x̂) = 0}.    (3.9)


Definition 3.2. The orbit of (3.1) for the initial condition φ ∈ C_+ is

O_+(φ) ≡ {x_t(φ) : t ≥ t_0}.    (3.10)

Definition 3.3. The omega limit set is defined by

ω(φ) ≡ ⋂_{s≥0} cl{x_t(φ) : t ≥ s},    (3.11)

whereas the alpha limit set is defined by

α(φ) ≡ ⋂_{s≤0} cl{x_t(φ) : t ≤ s}.    (3.12)

Definition 3.4. The orbit O_+(φ) is said to be a T-periodic orbit if x_{T+t}(φ) = x_t(φ) for all t, with minimal T > 0.

3.1.2 A General Structure for Gene Regulatory Networks

Next, we consider an example of a general gene regulatory network, which emphasizes the structure of feedback effects on transcription, splicing, and translation processes (Chen and Aihara 2002a, Wang et al. 2008). The structure of the network is shown schematically in Figure 3.4. In the network, each node represents one gene with its products (the mRNA and the protein) and the relationships between them, i.e., the transcription, translation, and splicing processes. As shown in Figure 3.4 (a), for any single node i, there are generally many protein inputs, i.e., p_1(t − τ_{pi1}), ..., p_n(t − τ_{pin}), which come from its own or other nodes with time delays τ_{pi1}, ..., τ_{pin}, respectively. τ_{pij} ∈ R_+ is a time delay from p_j to m_i, i.e., from protein j to mRNA i, mainly due to the slow transcription process. The regulation and interaction of the inputs on gene or mRNA i is represented by a nonlinear function r_i(p_1(t − τ_{pi1}), ..., p_n(t − τ_{pin})), which captures the activation and repression effects of the individual proteins. However, there is only one output, i.e., the protein p_i(t), from any single node i, which may activate or repress its own gene or other genes with time delays τ_{pji}. The mRNA i and protein i degrade with degradation rates d_{mi} and d_{pi}, respectively. Couplings and interactions of many such nodes constitute a gene regulatory network, as shown in Figure 3.4 (b). The differential equations of the gene regulatory network can be mathematically represented as (Chen and Aihara 2002a, Wang et al. 2008)

ṁ_i(t) = −d_{mi} m_i(t) + r_i(p(t_{τ_{pi}})),    (3.13)
ṗ_i(t) = −d_{pi} p_i(t) + s_i(m_i(t − τ_{mi})),    (3.14)

where m_i, p_i ∈ R (i = 1, ..., n) represent the concentrations or numbers of mRNAs and proteins with degradation rates d_{mi} and d_{pi}, respectively. The regulatory functions r_i(p(t_{τ_{pi}})) = r_i(p_1(t − τ_{pi1}), ..., p_n(t − τ_{pin})) and s_i(m_i(t − τ_{mi}))

[Figure 3.4: (a) In node i, protein inputs p_1(t − τ_{pi1}), ..., p_n(t − τ_{pin}) from its own or other nodes regulate mRNA m_i(t) through r_i(·); m_i(t) is translated into protein p_i(t) through s_i(m_i(t − τ_{mi})); m_i and p_i degrade at rates d_{mi} and d_{pi}; the output p_i(t) feeds back to its own or other genes, with delays τ_{mi} and τ_{pij}. (b) A gene regulatory network composed of n such nodes.]

Figure 3.4 Illustration for a single node and a gene regulatory network: (a) the detailed structure of Node i. (b) A gene regulatory network composed of n nodes with many feedback loops due to regulations and interactions among them (from (Wang et al. 2008))

are generally nonlinear, with τ_{mi} ∈ R_+ and τ_{pij} ∈ R_+ representing time delays for mRNA i and protein i, respectively. τ_{mi} is a time delay from m_i to p_i, i.e., from mRNA i to protein i, mainly due to the slow translation process. If the detailed binding information among proteins (i.e., TFs) and DNA (i.e., promoters) is available, r_i can be derived analytically from P(RNAP_i) of (2.222). However, the cases in which the TFs and promoters are fully characterized are rare. Hence, r_i and s_i are treated as general nonlinear functions to describe the transcription and translation processes in gene regulatory networks. Typical choices of r_i(x) and s_i(x) are sigmoid functions such as α tanh(x_i/ε) or x_i^k/(α x_i^k + β) with parameters α and β, which exhibit switch-like behavior, where ε is a positive parameter and k is the Hill coefficient denoting the degree of cooperativity. As one approximation scheme, an integration model is represented as follows:

r_i(p(t_{τ_{pi}})) = α_i tanh( Σ_{j=1}^{n} w_{ij} p_j(t − τ_{pij}) / ε ),    (3.15)
s_i(m_i(t − τ_{mi})) = β_i m_i(t − τ_{mi}),    (3.16)

where wij represents the regulation rate from protein j to mRNA i, and βi is the linear synthesis rate of protein i. Clearly, all inputs from other genes are linearly added with weights wij and then their total effect is nonlinearly transformed, whereas the synthesis of protein i is assumed to depend approximately linearly on the concentration of mRNA i (Chen and Aihara 2002a). Note that in molecular regulation, there might exist several different co-regulation mechanisms that can be explained by OR gate logic and AND gate logic. The above case corresponds to the OR gate logic, i.e., any of the inputs is sufficient to activate or repress the mRNA i. In addition, SUM (summation) and PROD (product) forms can also be used to model the regulatory function ri .
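As a concrete illustration of (3.13)–(3.16), the following Python sketch integrates a hypothetical two-gene loop by the Euler method, keeping a history buffer for the delayed protein concentrations. All parameter values, weights w_ij, and the single common delay τ are invented for the example and are not taken from any real network:

```python
import math

# Euler integration of the delayed model (3.13)-(3.16) for two genes:
# protein 2 represses gene 1, protein 1 activates gene 2 (weights w).
n = 2
dm, dp = 1.0, 1.0            # mRNA / protein degradation rates
alpha, beta, eps = 2.0, 1.0, 1.0
w = [[0.0, -1.0],            # w[i][j]: regulation of mRNA i by protein j
     [1.0,  0.0]]
tau = 0.5                    # one common transcriptional delay
h = 0.01                     # Euler step
lag = int(tau / h)

m = [0.1, 0.1]
p = [0.1, 0.1]
hist = [[0.1] * (lag + 1) for _ in range(n)]   # protein history buffers

for step in range(5000):
    p_del = [hist[j][0] for j in range(n)]     # p_j(t - tau)
    for i in range(n):
        # (3.15): weighted sum of delayed inputs through a tanh sigmoid
        r = alpha * math.tanh(sum(w[i][j] * p_del[j] for j in range(n)) / eps)
        m[i] += h * (-dm * m[i] + r)           # (3.13)
        p[i] += h * (-dp * p[i] + beta * m[i]) # (3.14) with (3.16)
    for j in range(n):                         # advance the delay buffers
        hist[j].pop(0)
        hist[j].append(p[j])

print(m, p)   # trajectories remain bounded since |r_i| <= alpha
```

Longer delays require only a longer history buffer; the same scheme extends directly to n genes with a full weight matrix w.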

3.2 Gene Regulatory Networks with Cell Cycles

The cell cycle may significantly change the dynamics of a biomolecular network both qualitatively and quantitatively. The cell cycle of most eukaryotes is composed of four stages: the G1 (gap) phase, in which the size of the cell increases as RNAs are constantly produced and proteins synthesized; the S (synthesis) phase, in which DNA synthesis and duplication occur; the G2 (gap) phase, in which the cell continues to produce new proteins and grows in size; and the M (mitosis) phase, in which chromosomes segregate and cell division takes place. In particular, the genome is constantly maintained in the G1, G2, and M phases but duplicated in the S phase, which is shorter than the cell volume growth process and much longer than the cell division instant. The period of a cell cycle in most mammalian cells is on the order of 12–24 h, whereas bacteria by contrast may divide every 20–30 min, and yeast cells or other protozoans may take 6–8 h. Since the cell volume and DNA number must increase by a factor of 2 between successive divisions in order to ensure that the mass of the two daughter cells remains nearly equal to that of the mother cell, the concentrations or numbers of molecules inevitably depend on the dynamics of the cell cycle, which in turn has significant effects on the dynamics of gene regulatory networks because of the dynamical fluctuations of the cell cycle. Let m_i and p_i represent the numbers of mRNAs and proteins for gene i. To consider the nonlinear dynamics of gene regulatory networks with the cell cycle taken into account, (3.13)–(3.14) can be rewritten as (Chen et al. 2004)


ṁ_i(t) = −d_{mi} m_i(t) + N_i u(t) r_i(p(t_{τ_{pi}})/v(t)),    (3.17)
ṗ_i(t) = −d_{pi} p_i(t) + s_i(m_i(t − τ_{mi})),    (3.18)

where N_i is a positive scalar representing the number of copies of gene i at the beginning of the cell growth phase, while u(t) ∈ R is the DNA number factor, so that N_i u(t) is the number of copies of gene i at time t; accordingly, 1 ≤ u(t) ≤ 2. Define the cell volume factor as v(t) = V(t)/V_0 ∈ R, where V(t) is the host cell volume at time t and V_0 is the host cell volume at the beginning of its growth phase; accordingly, 1 ≤ v(t) ≤ 2. For the period from the beginning of cell growth to cell division, (3.17) represents the transcription reaction, whereas (3.18) describes the translation process. ṁ_i(t) = dm_i(t)/dt and ṗ_i(t) = dp_i(t)/dt hold for all t except at division instants, when the volume and the numbers of chemicals all halve, i.e., v(t) → v(t)/2, u(t) → u(t)/2, m_i(t) → m_i(t)/2, and p_i(t) → p_i(t)/2 at each division instant t (Chen et al. 2004). In a eukaryotic cell, there is usually one copy of each gene at the beginning of a cell growth phase, i.e., N_i = 1. However, in bacteria there may exist multiple DNA plasmids per cell, e.g., as many as 100 plasmids, which implies N_i = 1–100 for a host cell. For the sake of simplicity, it is assumed that genes or DNAs, including plasmid DNAs, are duplicated before cell division. Assuming the period of a cell division cycle to be τ, the following piecewise function can be used to describe v(t):

v(t) = e^{(t/τ − k) ln 2},  kτ ≤ t < (k + 1)τ,
v(t) = 1,  t = (k + 1)τ,    (3.19)

where k = 0, 1, 2, .... The cell volume factor v(t) increases exponentially from 1 to 2 during each cell cycle and is reset to 1 after each cell division. On the other hand, since DNA is duplicated much faster than the cell volume grows, the following sigmoidal function is adopted to describe the DNA number factor u(t) approximately:

u(t) = a g(t/τ − k) + b,  kτ ≤ t < (k + 1)τ,
u(t) = 1,  t = (k + 1)τ,    (3.20)

where k = 0, 1, 2, ... and g(t) = 1/(1 + e^{−γ(t − t_d)}) is centered at t_d for 0 ≤ t_d ≤ 1. The constants a = 1/(g(1) − g(0)) and b = 1 − g(0)/(g(1) − g(0)) are chosen to ensure that u(t) = 1 and u(t) = 2 just after and just before each division, respectively. Clearly, the DNA number factor u(t) is mostly constant, except in the time period near (k + t_d)τ, during which u(t) rapidly increases from 1 to 2; it is reset to 1 after each cell division. The period near (k + t_d)τ corresponds to the S phase in a eukaryotic cell. The temporal changes of the cell volume factor v(t) and the DNA number factor u(t) are shown in Figure 3.5, where the DNA duplication occurs around (k + t_d)τ for each cell cycle period τ with τ = 1.
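The two cell-cycle factors (3.19)–(3.20) can be sketched directly; the following Python snippet uses the same γ = 50 and t_d = 0.6 as Figure 3.5 and checks the boundary values 1 and 2 over one cycle:

```python
import math

tau, gamma, td = 1.0, 50.0, 0.6   # parameters of Figure 3.5

def g(s):
    """Sigmoid of (3.20), centered at td."""
    return 1.0 / (1.0 + math.exp(-gamma * (s - td)))

a = 1.0 / (g(1.0) - g(0.0))
b = 1.0 - g(0.0) / (g(1.0) - g(0.0))   # so that u = 1 just after division

def v(t):
    """Cell volume factor (3.19): exponential growth, reset at division."""
    k = math.floor(t / tau)
    s = t / tau - k
    return 1.0 if (s == 0.0 and k > 0) else 2.0 ** s

def u(t):
    """DNA number factor (3.20): sharp doubling around the S phase."""
    k = math.floor(t / tau)
    s = t / tau - k
    return 1.0 if (s == 0.0 and k > 0) else a * g(s) + b

print(v(0.0), u(0.0))      # both start each cycle at 1
print(v(0.999), u(0.999))  # both approach 2 just before division
print(u(0.2))              # u stays near 1 well before the S phase
```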


Figure 3.5 The cell volume factor v(t) and the DNA number factor u(t) at td = 0.6 and γ = 50 (from (Chen et al. 2004))

Define [m_i](t) = m_i(t)/v(t) and [p_i](t) = p_i(t)/v(t) as the relative concentrations of mRNAs and proteins. Then, by differentiating m_i(t) = v(t)[m_i](t) and p_i(t) = v(t)[p_i](t) with respect to t, and further by substituting (3.19)–(3.20) into (3.17)–(3.18) with consideration of v(t) → v(t)/2, u(t) → u(t)/2, m_i(t) → m_i(t)/2, and p_i(t) → p_i(t)/2 at each division instant, a model of the gene regulatory network in terms of relative concentrations takes the form

[ṁ_i](t) = −(d_{mi} + v̄)[m_i](t) + N_i (u(t)/v(t)) r_i([p](t_{τ_{pi}})),    (3.21)
[ṗ_i](t) = −(d_{pi} + v̄)[p_i](t) + s_i([m_i](t − τ_{mi})),    (3.22)

where v̄ = v̇(t)/v(t) = (ln 2)/τ when v(t) of (3.19) is adopted. Note that when there is no cell division cycle dynamics, the terms v̄[m_i](t) and v̄[p_i](t) in (3.21)–(3.22) disappear and u(t) = v(t) = 1. The gene regulatory network with cell cycle (3.21)–(3.22) is a non-autonomous system due to the presence of u(t)/v(t), which generally is a periodic function. Therefore, the cell cycle may significantly change the dynamics of biomolecular networks. In particular, when a gene network is near a stability boundary, the cell cycle, acting as a degradation factor, may significantly change the dynamics both qualitatively and quantitatively. For example, a cell division cycle can be viewed as an external periodic force on the inherent autonomous dynamics of genetic networks. Depending on the frequencies and coupling of the external and internal oscillations, there may exist periodic, quasi-periodic, resonant, and even chaotic dynamics generated by synchronization of the two oscillators (Chen et al. 2004). As a cell grows, the DNA or gene numbers can


be assumed to change rapidly or smoothly, depending on the cell type and the initial DNA or gene numbers. It has also been shown that the cell cycle may play significant roles in gene regulation, due to the nonlinear relation among the cell volume, the DNA number, and the gene regulatory network, although gene expression is usually tightly controlled by TFs (Chen et al. 2004). Consider two situations for changes in the numbers of genes. The first is rapid change, i.e., u(t) is constant during cell growth, except in the S phase, in which u(t) is doubled; it immediately halves after division in each daughter cell, as indicated in (3.20). Such a case corresponds to the situation of a eukaryotic cell. It may also hold for a system with chromosomal genes in a prokaryotic cell. The second is smooth change, i.e., u(t) increases proportionally with the cell volume until it is doubled at the division instant and immediately halves after division in each daughter cell, i.e., u(t)/v(t) = 1. Such a case can be considered an approximation to a prokaryotic cell with a large number of plasmids, i.e., many copies of a gene in a cell.

3.2.1 Gene Regulatory Networks for Eukaryotes

First consider the dynamics with rapid changes of gene numbers. Such a case corresponds to the situation of a eukaryotic cell, where there are usually one or a few copies of a gene in a cell. Therefore, by (3.17)–(3.20) with consideration of m → m/2, p → p/2, v → v/2, and u → u/2 at the division instant, we can describe the dynamics in terms of the chemical numbers and v, u by impulsive differential equations (IDEs) as follows:

ṁ = N u f(p/v) − K_m m − (m/2) Σ_{k=1}^{∞} δ(t − kτ),    (3.23)
ṗ = S_p m − K_p p − (p/2) Σ_{k=1}^{∞} δ(t − kτ),    (3.24)
v̇ = v̄ v − (v/2) Σ_{k=1}^{∞} δ(t − kτ),    (3.25)
u̇ = (γ/τ)(u − b) − (γ/(aτ))(u − b)^2 − (u/2) Σ_{k=1}^{∞} δ(t − kτ),    (3.26)

where the impulse function δ is defined by δ(t) = 0 for t ≠ 0 and ∫_{−∞}^{+∞} δ(t)dt = 1. Note that v(0) = u(0) = 1. Due to the last impulsive terms of (3.23)–(3.26), the values of m(t), p(t), v(t), and u(t) all halve at t = kτ. Clearly, (3.23)–(3.26) are not ODEs but IDEs with a periodic impulse force. The effects of a cell division cycle on the chemical numbers include two parts, i.e., variable factors (v(t), u(t)) that influence the synthesis of mRNAs, and an impulse term δ(t − kτ) that enhances the degradation or dilution of each chemical.
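The relation between the impulsive number-based description and the relative-concentration description with the extra dilution rate v̄ can be checked numerically. The following sketch takes one unregulated species, assumes the smooth case u(t) = v(t), and uses invented rate constants; it integrates both descriptions and compares [m] = m/v with the dilution-based ODE:

```python
import math

# One unregulated species with u(t) = v(t): compare the number-based
# model with halving at divisions against the relative-concentration
# model carrying the extra dilution rate v_bar = ln(2)/tau.
tau, Ng, r, dm = 1.0, 1.0, 2.0, 0.5   # illustrative constants
vbar = math.log(2.0) / tau
h = 1e-4
steps_per_cycle = int(round(tau / h))

m, v = 0.3, 1.0     # chemical number and volume factor
conc = 0.3          # [m](t) from the dilution-based ODE
for step in range(1, 3 * steps_per_cycle + 1):
    m += h * (Ng * v * r - dm * m)               # number model, u = v
    v += h * vbar * v                            # exponential volume growth
    conc += h * (Ng * r - (dm + vbar) * conc)    # relative-concentration model
    if step % steps_per_cycle == 0:
        m *= 0.5                                 # division: m and v halve,
        v *= 0.5                                 # so [m] = m/v is continuous

print(m / v, conc)   # the two descriptions agree up to O(h)
```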


Consider the stability of a periodic oscillation in (3.23)–(3.26). For the sake of simplicity, (3.23)–(3.26) can be summarized as

Ẋ(t) = F(X(t)) − (1/2) Σ_{k=1}^{∞} δ(t − kτ) X(t),    (3.27)

where X = (m, p, v, u) and F = F(m, p, v, u) = (N u f(p/v) − K_m m, S_p m − K_p p, v̄ v, γ(u − b)/τ − γ(u − b)^2/(aτ)). Let φ(t; X(kτ)) denote the flow of the vector field F starting from φ(0; X(kτ)) = X(kτ) at t = 0, i.e.,

dφ(t; X(kτ))/dt = F(φ(t; X(kτ))),    (3.28)

and define ψ(t) as a fundamental solution satisfying

∂ψ(t)/∂t = (∂F(φ(t; X(kτ)))/∂X) ψ(t)    (3.29)

with ψ(0) = I, where ψ ∈ R^{(2n+2)×(2n+2)} and I is the identity matrix. According to (3.28), integrating (3.27) from kτ^+ to t for kτ < t < (k + 1)τ yields

X(t) − X(kτ^+) = ∫_{kτ^+}^{t} F(X(t'))dt'
               = ∫_{0}^{t−kτ} F(φ(t'; X(kτ)))dt'
               = φ(t − kτ; X(kτ)) − φ(0; X(kτ)).    (3.30)

Note that X(kτ^+) = φ(0; X(kτ)), and the integration range is changed for φ(t; X(kτ)) in (3.30) because its initial state starts from X(kτ). On the other hand, in the same way, by integrating (3.27) from kτ^+ to (k + 1)τ, we have

X((k + 1)τ) − X(kτ^+) = ∫_{0}^{τ} F(φ(t; X(kτ)))dt − (1/2) ∫_{0}^{τ^+} δ(t − τ) φ(t; X(kτ))dt
                      = φ(τ; X(kτ)) − φ(0; X(kτ)) − (1/2) φ(τ; X(kτ)).    (3.31)

Therefore, by using φ of the autonomous system and from (3.30)–(3.31), the orbit of the non-autonomous (3.27) with k = 0, 1, 2, ... can be expressed as

X(t) = φ(t − kτ; X(kτ)),  kτ ≤ t < (k + 1)τ,    (3.32)
X((k + 1)τ) = (1/2) φ(τ; X(kτ)),  t = (k + 1)τ.    (3.33)

Unlike the continuous dynamics of the concentrations, the chemical number X(t) is continuous at t = kτ from the right side, but generally discontinuous at t = kτ from the left side.
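For a single unregulated gene with constant synthesis, the flow φ has a closed form and the map (3.33) can be written down explicitly. The following sketch (with illustrative constants c and K in a hypothetical one-dimensional model m' = c − Km between divisions) iterates the map, recovers the period-τ state analytically, and checks the contraction factor that governs stability:

```python
import math

# One unregulated gene in chemical numbers: m' = c - K*m between
# divisions, m -> m/2 at t = k*tau. The flow has the closed form
# phi(tau; m0) = c/K + (m0 - c/K)*exp(-K*tau), so the map
# P(m0) = phi(tau; m0)/2 (cf. (3.33)) is explicit.
c, K, tau = 2.0, 1.0, 1.0
E = math.exp(-K * tau)

def P(m0):
    """Map of (3.33): flow over one cycle, then halve at division."""
    return 0.5 * (c / K + (m0 - c / K) * E)

m = 0.1
for _ in range(100):      # |dP/dm| = E/2 < 1, so the iteration converges
    m = P(m)

m_star = (c / K) * (1 - E) / (2 - E)   # analytic fixed point of P
print(m, m_star)          # the iteration converges to the period-tau state
print(E / 2)              # contraction factor; < 1 => stable periodic orbit
```

Solving P(m*) = m* by hand gives m* = (c/K)(1 − E)/(2 − E), which the iteration reproduces; the contraction factor E/2 plays the role of the Jacobian eigenvalue in the stability criterion described next.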


It is evident that (3.33) is a Poincaré map of (3.27). Thus, the existence of a period-τ solution for (3.27) is equivalent to the existence of a real solution of the algebraic equation

X(kτ) = (1/2) φ(τ; X(kτ)).    (3.34)

Note that φ is not the flow of the right-hand side of (3.27) but the flow of the vector field F of (3.28). According to (3.33), the stability of the period-τ solution depends on the eigenvalues of the Jacobian matrix at X(kτ):

J = (1/2) ∂φ(τ; X(kτ))/∂X(kτ) ≡ (1/2) ψ(τ).    (3.35)

From dynamical systems theory, if the absolute values of the eigenvalues of J are all less than 1, the periodic solution is asymptotically stable. In a similar manner, we can derive the existence and stability conditions for any period-kτ solution.

3.2.2 Gene Regulatory Networks for Prokaryotes

Next, we consider the dynamics with smooth changes of gene numbers, i.e., u(t) increases proportionally with the cell volume, and u(t)/v(t) = 1 holds approximately. Such an assumption is actually valid only when N is sufficiently large, e.g., for a large number of plasmids in a bacterial cell. Otherwise, u(t) should be considered a time-varying factor whose value changes rapidly, as indicated in the first situation. Therefore, by (3.21)–(3.22), we can describe the dynamics in terms of relative concentrations as follows:

[ṁ] = N f([p]) − (K_m + v̄)[m],    (3.36)
[ṗ] = S_p [m] − (K_p + v̄)[p],    (3.37)

which are actually autonomous ODEs. The effect of a cell division cycle on relative concentrations is an additional degradation rate v̄, which implies that a cell cycle mainly enhances the dilution of chemicals in terms of concentrations, i.e., it affects gene regulation by acting as a degradation factor. In a real cell, there actually exist many perturbations, such as noise, around u(t)/v(t) = 1, which prevent perfect tuning of DNA duplication to cell size or even an equal distribution of DNAs between daughter cells. For the case of deterministic perturbations, i.e., u(t)/v(t) = 1 + σ, where σ is a small real number, the stability analysis of equilibria is relatively easy and can be carried out by perturbing N of (3.36), since N u(t)/v(t) = N + σN according to (3.21)–(3.22). Local stability of the dynamics of (3.36)–(3.37) at an equilibrium point for [m] and [p] can be determined directly by investigating the eigenvalues of the Jacobian matrix J of (3.36)–(3.37), i.e.,

J = [ −K_m − v̄    N df([p])/d[p] ]
    [  S_p        −K_p − v̄       ].    (3.38)
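As a numerical illustration of (3.38), the following sketch assumes a repressive Hill-type f([p]) = 1/(1 + [p]^2) and hypothetical parameter values, locates the equilibrium of (3.36)–(3.37) by fixed-point iteration, and checks the eigenvalues of J:

```python
import numpy as np

Ng, Sp, Km, Kp, tau = 2.0, 1.0, 1.0, 1.0, 1.0   # hypothetical constants
vbar = np.log(2.0) / tau

f = lambda p: 1.0 / (1.0 + p * p)        # repressive Hill function, n = 2
df = lambda p: -2.0 * p / (1.0 + p * p) ** 2

# Equilibrium: [m]* = Ng f([p]*)/(Km + vbar), [p]* = Sp [m]*/(Kp + vbar);
# the composed map is a contraction here, so fixed-point iteration works.
p = 0.5
for _ in range(200):
    m = Ng * f(p) / (Km + vbar)
    p = Sp * m / (Kp + vbar)

J = np.array([[-(Km + vbar), Ng * df(p)],
              [Sp,           -(Kp + vbar)]])
eigs = np.linalg.eigvals(J)
print(p, eigs)
print(all(e.real < 0 for e in eigs))   # True: the equilibrium is stable
```

Note that the dilution rate v̄ enters both diagonal entries, so a faster cell cycle (smaller τ) shifts the eigenvalues further into the left half-plane, consistent with the cell cycle acting as an extra degradation factor.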

Note that the stability of [m] and [p] at an equilibrium point is identical to that of m and p for the corresponding periodic solution, according to the definitions of [m] and [p], i.e., [m_i](t) = m_i(t)/v(t) and [p_i](t) = p_i(t)/v(t). By including nonlinear terms, we can also analyze local bifurcations (Kuznetsov 1995). The effects of stochastic perturbations on cellular dynamics can be examined by using a model with small stochastic noise as follows. Let u(t)/v(t) = 1 + ση(t), where σ is a small real number corresponding to the deviation of the small noise, and η(t) is Gaussian noise with zero mean ⟨η(t)⟩ = 0 and variance ⟨η(t)η(t′)⟩ = δ(t − t′). Then, (3.36)–(3.37) become

[ṁ] = N f([p]) − (K_m + v̄)[m] + σ N f([p]) η(t),    (3.39)
[ṗ] = S_p [m] − (K_p + v̄)[p].    (3.40)

Clearly, fluctuations due to such noise mainly influence the system dynamics through the transcription process, by (3.39). Consider the example of the synthetic genetic network shown in Figure 3.6 with genes lac, tetR, and cI and promoters PLtetO1, P*_RM, and PLtetO1. All three genes are well-characterized prokaryotic transcriptional regulators, which can be found in the bacterium E. coli and λ phage. The protein Lac forms a tetramer to inhibit the gene tetR with promoter P*_RM, and the protein CI as a dimer activates the gene tetR, while the protein TetR forms a homodimer to repress both the lac and cI genes with promoter PLtetO1. All three genes can be engineered on plasmids and then cloned to multiple copies, e.g., by polymerase chain reaction (PCR). The engineered plasmids are further assumed to grow in E. coli. Let x, y, and z be the numbers of protein monomers TetR, Lac, and CI, respectively. Define x_2 to be the number of protein dimers TetR_2. Let d and d_{x2} be the number of free DNA molecules and the number of TetR_2–DNA complexes, i.e., O_R bound by a protein dimer TetR_2, respectively. Then, to regulate gene cI, the multimerization and binding reactions of tetR can be written as the equilibrium reactions

x + x ⇌ x_2  (forward rate k_1, backward rate k_{−1}),    (3.41)
d + x_2 ⇌ d_{x2}  (forward rate k_2, backward rate k_{−2}).    (3.42)

The corresponding slow dynamics, i.e., the transcription and translation processes of cI, are

d → d + m_z  (rate β_{mz}),    (3.43)
m_z → z  (rate s_z),    (3.44)


Figure 3.6 A three-gene model of a synthetic gene regulatory network where the protein Lac forms a tetramer to inhibit the gene tetR, and the protein CI enhances the gene tetR as a dimer, whereas the protein TetR forms a dimer to repress both gene lac and gene cI. P*_RM is a mutated promoter of PRM and has two binding sites OR1 and OR2 for the protein dimer CI2 and one binding site OR3 for the protein tetramer Lac4. Affinities of CI2 for P*_RM are OR1 > OR2. Binding effects of CI2 to OR1 and OR2 for transcription of P*_RM are neutral and positive, respectively, in contrast to a negative binding effect of OR3 by Lac4. On the other hand, there is one binding site, i.e., OR, for protein TetR2, which represses the transcription of the promoter PL tetO1 (from (Chen et al. 2004))

with the degradation processes of m_z and z as

m_z → 0  (rate k_{mz}),    (3.45)
z → 0  (rate k_z).    (3.46)

According to the mass action law, the dynamics of reactions (3.41)–(3.42) can be described by the following differential equations:

ẋ_2 = V (k_1 x^2/V^2 − k_{−1} x_2/V),    (3.47)
ḋ_{x2} = V (k_2 x_2 d/V^2 − k_{−2} d_{x2}/V).    (3.48)

The equilibrium states of the fast dynamics (3.41)–(3.42) can be written as the algebraic equations x_2 = k_1 x^2/(k_{−1} V) = c_1 x^2/V and d_{x2} = k_2 x_2 d/(k_{−2} V) = σ_3 x^2 d/V^2, where σ_3 = c_1 c_2 and c_i = k_i/k_{−i}. Such algebraic equations imply that the numbers of chemicals synthesized in the fast dynamics are inversely proportional to the cell volume. Let the copy number of plasmids with gene cI be n_z u(t). Then we have the conservation condition n_z u(t) = d + d_{x2}, which leads to d = n_z u(t)/(1 + σ_3 x^2/v^2). Therefore, by substituting the equilibrium states of the fast dynamics, the slow dynamics (3.43)–(3.46) for the mRNA and the synthesized protein, representing the transcription and translation processes of gene cI, are

ṁ_z = β_{mz} d − k_{mz} m_z = β_{mz} n_z u(t)/(1 + σ_3 x^2/v^2) − k_{mz} m_z,    (3.49)
ż = s_z m_z − k_z z,    (3.50)

where the synthesis rate of m_z is β_{mz} d due to the repressive effect of TetR on the binding site O_R. At division instants, x → x/2, y → y/2, z → z/2, v → v/2, and u → u/2. Similarly, we can obtain the dynamics for regulating genes lac and tetR. By defining the relative concentrations of proteins as [x] = x/v, [y] = y/v, and [z] = z/v, the dynamical system of the three-gene network can be summarized in terms of the relative concentrations of proteins in the following closed form:

[ṁ_x] = β_{mx} n_x (u(t)/v(t)) f_x([y], [z]) − (k_{mx} + v̄)[m_x],    (3.51)
[ẋ] = s_x [m_x] − (k_x + v̄)[x],    (3.52)
[ṁ_y] = β_{my} n_y (u(t)/v(t)) f_y([x]) − (k_{my} + v̄)[m_y],    (3.53)
[ẏ] = s_y [m_y] − (k_y + v̄)[y],    (3.54)
[ṁ_z] = β_{mz} n_z (u(t)/v(t)) f_z([x]) − (k_{mz} + v̄)[m_z],    (3.55)
[ż] = s_z [m_z] − (k_z + v̄)[z],    (3.56)

where f_x([y], [z]) = (1 + c[z]^2 + ασ_1 c^2 [z]^4)/((1 + c[z]^2 + σ_1 c^2 [z]^4)(1 + σ_4 [y]^4)), f_y([x]) = 1/(1 + σ_2 [x]^2), and f_z([x]) = 1/(1 + σ_3 [x]^2). On the basis of the above theoretical framework, the nonlinear dynamics of gene regulatory networks can be analyzed with consideration of the cell division cycle and the DNA duplication process. In particular, for synthetic switches and oscillators, the cell cycle as a degradation factor may significantly affect cellular dynamics both qualitatively and quantitatively, as follows:

• For a gene switch (or genetic switch), the bistable region may disappear due to the cell cycle even though the autonomous system has a bistable region, and vice versa.
• For a gene oscillator (or genetic oscillator), a cell division cycle functions as an external force that entrains or synchronizes the natural oscillation.
• Usually, a cell cycle entrains the system toward a limit cycle, but depending on the natural oscillation period or the network structure, there may exist quasi-periodic, resonant, and even chaotic dynamics stimulated by the cell cycle.
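A minimal simulation of (3.51)–(3.56), with the periodic forcing u(t)/v(t) assembled from (3.19)–(3.20), can be sketched as follows; every rate constant and binding parameter here is hypothetical, chosen only to exercise the equations:

```python
import math

tau = 1.0
vbar = math.log(2.0) / tau
gamma, td = 50.0, 0.6

def g(s): return 1.0 / (1.0 + math.exp(-gamma * (s - td)))
a = 1.0 / (g(1.0) - g(0.0))
b = 1.0 - g(0.0) / (g(1.0) - g(0.0))

def uv_ratio(t):
    """Periodic forcing u(t)/v(t) built from (3.19)-(3.20)."""
    s = (t / tau) % 1.0
    return (a * g(s) + b) / 2.0 ** s

c, s1, s2, s3, s4, alpha = 1.0, 1.0, 1.0, 1.0, 1.0, 10.0
def fx(y, z):
    num = 1 + c * z**2 + alpha * s1 * (c**2) * z**4
    den = (1 + c * z**2 + s1 * (c**2) * z**4) * (1 + s4 * y**4)
    return num / den
def fy(x): return 1.0 / (1.0 + s2 * x * x)
def fz(x): return 1.0 / (1.0 + s3 * x * x)

beta, n_copy, km, ks, kd = 1.0, 2.0, 1.0, 1.0, 1.0
state = [0.1] * 6        # [mx, x, my, y, mz, z] relative concentrations
h, t = 1e-3, 0.0
for _ in range(int(10 * tau / h)):   # ten cell cycles, Euler steps
    mx, x, my, y, mz, z = state
    r = uv_ratio(t)
    state = [
        mx + h * (beta * n_copy * r * fx(y, z) - (km + vbar) * mx),
        x  + h * (ks * mx - (kd + vbar) * x),
        my + h * (beta * n_copy * r * fy(x) - (km + vbar) * my),
        y  + h * (ks * my - (kd + vbar) * y),
        mz + h * (beta * n_copy * r * fz(x) - (km + vbar) * mz),
        z  + h * (ks * mz - (kd + vbar) * z),
    ]
    t += h
print(state)   # all relative concentrations remain positive and bounded
```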

A genetic network in vivo in a cell and an artificial genetic network in vitro in a cell-free system actually correspond to our model with and without a cell division cycle, respectively. Therefore, such analysis


with and without a cell division cycle may be a theoretical basis to quantitatively predict the essential dynamics and to successfully implement experiments from in vitro to in vivo. See (Chen et al. 2004) for more details on the synthetic network models and simulation results.

3.3 Interaction Graphs and Logic Gates

3.3.1 Interaction Graphs and Types of Interactions

We use three types of graphs to represent deterministic structures of molecular networks, one of which is the interaction graph. Each edge in a biomolecular network represented by an interaction graph corresponds to an interaction between two components. An interaction, or more exactly a pairwise interaction, can be of two types in a directed molecular network: activation and repression. Activation from B to A occurs when the synthesis rate of A increases as the concentration or number of B increases. For example, an activator B can increase the transcription rate of its target gene A; therefore, the interaction from the activator B to the target gene A is activation. On the other hand, repression from B to A occurs when the synthesis rate of A decreases as the concentration or number of B increases. For instance, when a repressor B binds to the promoter of gene A, it can reduce the transcription rate of its target gene A. The types of interaction can be defined more generally as follows in a molecular network with n chemicals, i.e., (3.1) or (3.2). Suppose that the concentration of the jth component at time t − τ_ij affects the synthesis rate of the ith component at time t, where i, j ∈ {1, ..., n}. If the synthesis rate of the ith component at time t increases (or decreases) as the concentration of the jth component at t − τ_ij increases, the type of interaction from the jth component to the ith component is called positive (or negative), and we set s_ij = 1 (or s_ij = −1). If the synthesis rate of the ith component at t is never affected by a change in the concentration of the jth component, i.e., there is no direct interaction between them, we set s_ij = 0. For (3.1) or (3.2), s_ij = 1, s_ij = −1, or s_ij = 0 mathematically means

s_ij = 1  if  ∂f_i(x(t))/∂x_j(t) > 0,    (3.57)
s_ij = 0  if  ∂f_i(x(t))/∂x_j(t) = 0,    (3.58)
s_ij = −1  if  ∂f_i(x(t))/∂x_j(t) < 0,    (3.59)

for all x(t) ∈ X. Thus, sij = 1 (or sij = −1) means that the jth component affects positively (or negatively) the ith component with time delay τij , where


x_{τ_i} = (x_1(t − τ_{i1}), ..., x_n(t − τ_{in})), and τ_ij is the time delay from chemical j to chemical i. For instance, in the following equation,

dP(t)/dt = f(S(t − τ_s)) = V_s/(K_M + S(t − τ_s)^n),    (3.60)

we have s_PS = −1 because ∂f(S(t))/∂S(t) < 0 for all S(t). On the other hand, in the following equation,

dP(t)/dt = f(S(t)) = V_s S(t)/(K_M + S(t)),    (3.61)

we have s_PS = 1 because ∂f(S)/∂S > 0 for all S. Subsequently, we describe the definition of interaction graphs (Kobayashi et al. 2003), which enable an intuitive understanding of the relations among the components. An interaction graph, IG(F), of the biomolecular network defined by (3.1) or (3.2) is a directed graph whose nodes represent the individual components and whose edges e_ij represent the interactions between node i and node j. When s_ij ≠ 0, the graph has an edge e_ij directed from node j to node i. A representative interaction graph is shown in Figure 3.7.


Figure 3.7 A representative interaction graph with feedback loops. Signs + and − on an edge indicate s = 1 and −1, respectively. A feedback loop designated by a solid curve (or dashed curve) is a positive (or negative) feedback loop. In this graph, there is a negative feedback loop composed of the first, second, fourth, fifth, and sixth nodes, a positive feedback loop composed of the first, second, third, and sixth nodes, and a positive self-feedback loop composed of the fifth node
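Given an explicit kinetic model, the edge signs s_ij of (3.57)–(3.59) can be recovered numerically by sampling the partial derivatives of the rate functions over the state space. The following sketch uses a hypothetical two-component system (Hill-type repression of x_1 by x_2 and activation of x_2 by x_1):

```python
# Recover the sign pattern s_ij of (3.57)-(3.59) for a toy model by
# sampling numerical partial derivatives of the full rates f_i.

def f1(x1, x2):            # x2 represses x1 (Hill-type), x1 self-degrades
    return 1.0 / (1.0 + x2 * x2) - 0.5 * x1

def f2(x1, x2):            # x1 activates x2, x2 self-degrades
    return x1 / (1.0 + x1) - 0.5 * x2

def sign_of_interaction(f, var_index, samples):
    """Return +1, -1, or 0 from the sign of df/dx_j over the samples;
    None if the derivative changes sign (non-monotone interaction)."""
    eps = 1e-6
    signs = set()
    for x in samples:
        x_hi = list(x)
        x_hi[var_index] += eps
        d = (f(*x_hi) - f(*x)) / eps
        if abs(d) > 1e-9:
            signs.add(1 if d > 0 else -1)
    if not signs:
        return 0
    return signs.pop() if len(signs) == 1 else None

grid = [(i * 0.5, j * 0.5) for i in range(1, 6) for j in range(1, 6)]
s = [[sign_of_interaction(f, j, grid) for j in range(2)]
     for f in (f1, f2)]
print(s)   # [[-1, -1], [1, -1]]: the edge signs of the interaction graph
```

The diagonal entries come out as −1 here because the rates f_i in (3.57)–(3.59) include the degradation terms; off-diagonal entries give the activation and repression edges of the interaction graph.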

The types of feedback loops are qualitative characteristics of biomolecular networks. If there is a path from the ith node of an interaction graph to itself,


p(i, i) = (i = p_1 → p_2 → · · · → p_{l−1} → p_l = i), then this path is said to be a feedback loop; it is a self-feedback loop when l = 2, where p_i means the ith node in the path. In addition, this feedback loop is said to be positive (or negative) if ∏_{m=1}^{l−1} s_{p_{m+1} p_m} = 1 (or −1). By using positive and negative feedback loops, we can obtain many features of a molecular network. For instance, if all feedback loops are positive, the molecular network has a very simple steady-state behavior, which can be exploited for designing switching networks, as stated in the succeeding chapters. On the other hand, if some of the feedback loops are negative, a periodic oscillation may appear in the molecular network, and such a mechanism is adopted to design oscillating networks.

The dynamics of biomolecular networks are generally complicated due to various types of nonlinear interactions among the components. When the details of the interactions between any two components in a network are known, depending on the requirement for accuracy, one can use stochastic or deterministic equations to model its dynamics at the molecular level on the basis of the law of mass action. Such a technique can also be used to construct synthetic molecular networks from known biological materials, which is a main goal of synthetic biology. However, the details of the functional regulatory relationship between two components are generally unknown. Therefore, when modeling such a relationship irrespective of particular parameter values, some approximations are usually adopted. Assume that for each reaction in a biomolecular network there exists a rate function, which may include kinetic parameters. The rate function is usually an MM or Hill type function and is monotone in its variables. For example, consider two interacting components x and y. If x activates y, when using the Hill function, the dynamics can be described as follows:

    dy/dt = v_x (x/k_xy)^n / (1 + (x/k_xy)^n) − k_d y + k_b,    (3.62)

where x = x(t) and y = y(t). Clearly, the Hill function can be viewed as a special form of a general polynomial function (2.222). On the other hand, if x inhibits y, the dynamics can be described by

    dy/dt = v_x / (1 + (x/k_xy)^n) − k_d y + k_b.    (3.63)

Here, v_x represents the regulatory effect of x, the Hill coefficient n indicates the sensitivity of y with respect to x, k_xy denotes the threshold of x inducing a significant response of y, k_d is the degradation rate, and k_b is the basal synthesis rate (Kim et al. 2008). Setting v_x, k_xy, and k_d to unity leads to the simpler forms

    dy/dt = x^n / (1 + x^n) − y + k_b    (3.64)

and

    dy/dt = 1 / (1 + x^n) − y + k_b    (3.65)

of (3.62) and (3.63).

3.3.2 Logic Gates

Generally, a component may have one or more regulators. For example, a gene can be regulated by multiple transcription factors. Several regulators are combined through a logic block, which merges the different regulations into one by continuous analogues of the Boolean AND or OR gate logic. Different co-regulation mechanisms may produce different dynamics, and even the same logic gate can be realized by different co-regulation mechanisms; for example, there are competitive and non-competitive binding mechanisms for the OR and AND gate logic, as shown in Chapter 2. Here, we consider the case of two regulators based on the Hill function; the case of more regulators can be discussed similarly. Let us first consider the case where both regulators j and k are repressors. If simultaneous binding of the two regulators is required to achieve transcriptional repression of gene i, the co-regulation function or synthesis function can be modeled as follows:

    f_i^S(p_j, p_k) = 1 / (1 + p_j^n p_k^m),    (3.66)

where n and m are the Hill coefficients of the proteins j and k, respectively. The superscript S stands for 'simultaneous'. In other words, the regulatory function f_i^S is chosen as the algebraic equivalent of the Boolean AND function for repressors. On the other hand, if the binding of either of the two repressors is sufficient to inhibit the gene expression, the co-regulation can be expressed as

    f_i^I(p_j, p_k) = 1 / (1 + p_j^n + p_k^m),    (3.67)

where the superscript I stands for 'independent' and the regulatory function f_i^I is chosen as the algebraic equivalent of the Boolean OR function for repressors. The co-regulatory functions of Boolean AND and OR gates for activators can be defined similarly. Next, if one regulator, say j, is an inhibitor and the other, say k, is an activator, the co-regulation function can be modeled as follows:

    f_i^M(p_j, p_k) = (1 + p_k^m) / (1 + p_j^n + p_k^m),    (3.68)

where the superscript M stands for 'mixed' (Goh et al. 2008). (3.66)–(3.68) are defined based on the simpler Hill functions (3.64)–(3.65). Co-regulation functions corresponding to (3.62)–(3.63) for different Boolean logics can be


similarly defined (Kim et al. 2008). For example, if both x and z activate y, the resulting dynamics can be

    dy/dt = v_x ((x/k_xy)^n + (z/k_zy)^m) / (1 + (x/k_xy)^n + (z/k_zy)^m) − k_d y + k_b,    (3.69)

which is clearly a special form of a general transcription regulatory function (2.222). These co-regulation mechanisms for the case of MM type rate functions can be defined similarly. Consider the example of the repressilator, a synthetic gene regulatory network composed of the genes cI, tetR, and lacI (Elowitz and Leibler 2000). Its basic kinetics is described by

    dm_i/dt = −m_i + α/(1 + p_j^n) + α_0,    (3.70)
    dp_i/dt = −β(p_i − m_i),    (3.71)

where i = (lacI, tetR, cI) and j = (cI, lacI, tetR), respectively. The variables m_i = m_i(t) and p_i = p_i(t) are the concentrations of the mRNAs and their protein products, respectively. The parameter α_0 is the basal synthesis rate, α + α_0 is the maximum synthesis rate in the absence of repressors, β is the ratio of the protein decay rate to the mRNA decay rate, and n is the Hill coefficient.
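These kinetics are straightforward to integrate numerically. A sketch using a hand-rolled fourth-order Runge–Kutta step; the parameter values (α = 216, α_0 = 0.216, β = 5, n = 2) are assumed choices in the oscillatory regime for illustration, not quoted from the text:

```python
import numpy as np

# Assumed parameters, chosen so that the symmetric equilibrium is unstable.
alpha, alpha0, beta, n = 216.0, 0.216, 5.0, 2.0

def deriv(y):
    m, p = y[:3], y[3:]
    rep = p[[2, 0, 1]]  # j = (cI, lacI, tetR) represses i = (lacI, tetR, cI)
    dm = -m + alpha / (1.0 + rep ** n) + alpha0
    dp = -beta * (p - m)
    return np.concatenate([dm, dp])

def rk4(y, dt, steps):
    out = [y]
    for _ in range(steps):
        k1 = deriv(y)
        k2 = deriv(y + 0.5 * dt * k1)
        k3 = deriv(y + 0.5 * dt * k2)
        k4 = deriv(y + dt * k3)
        y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y)
    return np.array(out)

y0 = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])  # asymmetric initial state
traj = rk4(y0, dt=0.01, steps=20000)           # t in [0, 200]
late = traj[15000:, 3]                         # lacI protein, late time window
print(late.max() - late.min())                 # sustained oscillation amplitude
```

The asymmetric initial condition matters: starting exactly on the symmetric manifold m_1 = m_2 = m_3, p_1 = p_2 = p_3 would keep the numerical trajectory there.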

Figure 3.8 Eight possible configurations of the repressilator with a new component N4 forming a coupled feedback structure. Open circles represent the elements of the original three-node repressilator, and the solid circles denote the additional elements newly introduced. The arrows denote activation, and the blunted lines indicate repression (from (Goh et al. 2008))
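The co-regulation functions (3.66)–(3.68) used in these configurations are easy to exercise directly. A minimal sketch with assumed Hill coefficients n = m = 2 (the function names are ours):

```python
# Assumed Hill coefficients for both regulators.
n, m = 2, 2

def f_S(pj, pk):
    """(3.66): AND-like; simultaneous binding of both repressors required."""
    return 1.0 / (1.0 + pj ** n * pk ** m)

def f_I(pj, pk):
    """(3.67): OR-like; either repressor alone suffices."""
    return 1.0 / (1.0 + pj ** n + pk ** m)

def f_M(pj, pk):
    """(3.68): mixed; j represses while k activates."""
    return (1.0 + pk ** m) / (1.0 + pj ** n + pk ** m)

# With one repressor absent, the AND gate stays fully on while the OR gate
# shuts the gene off; the mixed gate follows whichever regulator is present.
print(f_S(0.0, 10.0))                   # 1.0
print(f_I(0.0, 10.0))                   # ~0.0099
print(f_M(10.0, 0.0), f_M(0.0, 10.0))   # ~0.0099 (repressed), 1.0 (activated)
```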

To investigate the dynamical consequences of the extension through interlocking of elementary cellular networks, Goh et al. studied the sustained oscillations in the extended repressilator with a new component (Goh et al.


2008). The new component interacts with two existing nodes to form an additional feedback loop. The four-node system is modified into

    dm_i/dt = −m_i + α f_i(p_j, p_k) + α_0,    (3.72)
    dp_i/dt = −β(p_i − m_i),    (3.73)

where i = N_1, N_2, N_3, N_4, and N_4 is a new node in addition to the three nodes of the original repressilator. According to the co-regulation among the fourth node and the other two existing nodes, the function f_i can take one of the forms f_i^S, f_i^I, and f_i^M. From in silico analysis, it can be shown that the capability of sustained oscillation depends on the topology of the extended systems, and the stability of sustained oscillation under the extension also depends on the coupling topology. Clearly, all eight possible configurations have at least one negative feedback loop. Coherent coupling, i.e., Figure 3.8 (a, d, f, g), where the new feedback loop contains an odd number of inhibitory interactions, and homogeneous regulation, i.e., Figure 3.8 (d, g), where the element regulating two targets has the same sign of regulation (N_2 in Figure 3.8 (d) and N_1 in Figure 3.8 (g)), can favor sustained oscillations (Goh et al. 2008). The detailed analysis of such networks on dynamical and topological features will be provided in the succeeding chapters, in particular for switching and oscillating networks.

4 Qualitative Analysis of Deterministic Dynamical Networks

Given a biomolecular network, one can characterize its system behavior by applying various analytical methods, such as stability and bifurcation analysis, robustness and sensitivity analysis, topological analysis, and control analysis. These techniques can provide qualitative and quantitative insights into the system behavior of various networks. Such information is useful for revealing the design principles of biomolecular networks with specified functions such as switching and oscillation and for implementing synthetic biomolecular networks.

4.1 Stability Analysis

Consider system (3.2) with delays for i = 1, ..., n,

    dx_i(t)/dt = f_i(x_{τ_i}),    (4.1)

or, in vector form,

    dx(t)/dt = f(x_τ),    (4.2)

where there are n × n delays τ_ij for i, j = 1, ..., n, and τ_ij is the time delay from x_j to x_i. The inherent nonlinearity in (4.1) often precludes the analytical investigation of its dynamics, but we can gain insight into its behavior by linearizing it around some point, generally an equilibrium. If a system is at an equilibrium, it stays there as long as there is no external perturbation. Depending on the system behavior after a perturbation, an equilibrium is stable if the system returns to this state, or unstable if the system leaves this state after the perturbation. An equilibrium is asymptotically stable if it is stable and the trajectories from nearby initial conditions (called the basin of attraction) tend to this state for


t → ∞. Local stability describes the behavior after small perturbations, and global stability the behavior after any perturbation. Let us first consider the local stability at an equilibrium. Assume f(x̄) ≡ 0, i.e., the constant function x(t) ≡ x̄ is an equilibrium solution of (4.1). Extracting the linear part from (4.1), it can be rewritten as

    dx/dt = L(x_τ) + f_h(x_τ),    (4.3)

where L(·) is a linear functional and |f_h(u)| = o(||u||) (namely, o(||u||)/||u|| → 0 as ||u|| → 0) is the nonlinear term of order higher than ||u||. If all roots λ_k of the characteristic equation

    det[λI − P(λ)] = 0,    (4.4)

with

    P(λ) = L(x_τ → e^{−λτ}),    (4.5)

have negative real parts Re λ_k < 0, then the equilibrium is asymptotically stable, where L(x_τ → e^{−λτ}) represents the replacement of all x(t − τ_ij) in L(x_τ) by e^{−λτ_ij}, i.e., x(t − τ_ij) → e^{−λτ_ij} in L(x_τ) for i, j = 1, ..., n, and I is the n × n identity matrix. If Re λ_k > 0 for at least one k, the equilibrium is unstable. For the case with at least one Re λ_k = 0 and all other Re λ_k < 0, the local stability cannot be determined by the characteristic equation but depends on the higher-order term f_h(x_τ); such a case is the main focus of bifurcation analysis.

Local stability depends on the characteristic values. Global stability, on the other hand, is widely analyzed using Lyapunov functions because of the simplicity and generality of the method. An equilibrium is globally stable if the trajectories from all initial conditions approach it for t → ∞. Assume x̄ to be an equilibrium of (4.1); its stability can be tested with the Lyapunov–Krasovskii functional method as follows:

1. Transfer the equilibrium to the origin by the coordinate transformation x̂ = x − x̄.
2. Find a Lyapunov–Krasovskii functional V(x̂_τ, t) that is positive definite, i.e., V(x̂_τ, t) = 0 for x̂_τ = 0 and V(x̂_τ, t) > 0 for x̂_τ ≠ 0.
3. Calculate the time derivative of V(x̂_τ, t). The equilibrium x̂ = 0 is stable if the time derivative of V(x̂_τ, t) in a certain region around this state has no positive values. The equilibrium is asymptotically stable if the time derivative of V(x̂_τ, t) in this region is negative definite, i.e., dV(x̂_τ, t)/dt = 0 for x̂_τ = 0 and dV(x̂_τ, t)/dt < 0 for x̂_τ ≠ 0. If the


region is the whole state space, the equilibrium is globally and asymptotically stable.

Consider a scalar delay differential equation with constant coefficients a, b, and constant delay h ≥ 0,

    ẋ(t) = −a x(t) − b x(t − h),    t ≥ 0.    (4.6)

Clearly, x̄ = 0 is the equilibrium. Construct a Lyapunov–Krasovskii functional as follows:

    V(x_h, t) = x²(t) + |b| ∫_{t−h}^{t} x²(s) ds.    (4.7)

Then,

    V̇(x_h, t) = −2a x²(t) − 2b x(t) x(t − h) + |b| [x²(t) − x²(t − h)]
              ≤ (−2a + |b|) x²(t) + |b| x²(t − h) + |b| [x²(t) − x²(t − h)]
              = −2(a − |b|) x²(t).    (4.8)
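The bound V̇ ≤ −2(a − |b|)x²(t) suggests decay whenever a > |b|, regardless of the delay. A crude forward-Euler check of this delay-independence (step size and parameter values assumed):

```python
# Forward-Euler integration of x'(t) = -a x(t) - b x(t - h) with a
# constant history; all numerical values are assumed for illustration.
def simulate(a, b, h, T=30.0, dt=0.001, x0=1.0):
    nh = int(round(h / dt))
    xs = [x0] * (nh + 1)           # constant history on [-h, 0]
    for _ in range(int(round(T / dt))):
        x, x_del = xs[-1], xs[-1 - nh]
        xs.append(x + dt * (-a * x - b * x_del))
    return xs[-1]                  # state at time T

for h in (0.5, 2.0, 5.0):
    print(h, simulate(a=2.0, b=1.0, h=h))  # decays toward 0 for every h
```

With a = 2 > |b| = 1 the trajectory decays for every tested delay, consistent with the delay-independent condition derived from the functional above.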

As a result, we obtain V̇(x_h, t) ≤ −2(a − |b|)x²(t). This gives the globally asymptotically stable condition a > |b|, which is independent of the value of the delay h.

Consider the general model of gene regulatory networks (3.13)–(3.14) for the analysis of local stability. Assume that (m̄, p̄) is an equilibrium and consider its local stability. By linearizing r = (r_1, ..., r_n) and s = (s_1, ..., s_n) and by substituting m(t) = a e^{−λτ_m} and p(t) = b e^{−λτ_p} into (3.13)–(3.14), we obtain the characteristic equation at the equilibrium (m̄, p̄) as follows:

    det( ⎡ λI   0 ⎤ − ⎡ −D_m             J_r E^{−λτ_p} ⎤ ) = 0,    (4.9)
         ⎣ 0   λI ⎦   ⎣ J_s E^{−λτ_m}   −D_p           ⎦

where D_m = diag(d_m1, ..., d_mn) and D_p = diag(d_p1, ..., d_pn), and

    J_r = dr(p)/dp   and   J_s = ds(m)/dm    (4.10)

at (m̄, p̄) are n × n matrices, and I is the n × n identity matrix. Note that J_s is a diagonal matrix because s_i depends only on m_i. E^{−λτ_p} = diag(e^{−λτ_p1}, ..., e^{−λτ_pn}) and E^{−λτ_m} = diag(e^{−λτ_m1}, ..., e^{−λτ_mn}), where we assume τ_pij = τ_pi for i, j = 1, ..., n for simplicity. Multiplying (4.9) by diag(E^{λτ_p}, E^{λτ_m}) and using Schur's theorem, we obtain

    det( (λI_n + D_m)(λI_n + D_p) E^{λτ̄} − J_s J_r ) = 0,    (4.11)

where τ̄ = (τ_m1 + τ_p1, ..., τ_mn + τ_pn). The local stability of (m̄, p̄) depends on the characteristic values of (4.11). In particular, if all characteristic roots of (4.11) have negative real parts, the equilibrium (m̄, p̄) will be asymptotically


stable. If there exists a root with a positive real part, on the other hand, it will be unstable.

Now, consider the global stability of the equilibrium (m̄, p̄) for (3.13)–(3.14) by assuming that r can be expressed in the Lur'e system form of p and that s is a linear function of m. Under simple transformations, (3.13)–(3.14) can be rewritten in the following Lur'e system form:

    ẋ(t) = A x(t) + G f(y(t − τ_1(t))),    (4.12)
    ẏ(t) = C y(t) − D x(t − τ_2(t)),    (4.13)

where τ_1(t) > 0 and τ_2(t) > 0 are inter- and intra-node time-varying delays. We assume that τ̇_1(t) ≤ d_1 < 1 and τ̇_2(t) ≤ d_2 < 1. Based on the Lyapunov method and the LMI technique, the global asymptotic stability of the equilibrium is obtained as follows (Li et al. 2006a): suppose there exist matrices P_11, P_22, P_12, Q > 0, R > 0, and Λ = diag(λ_1, ..., λ_n) > 0 such that the following LMIs hold:

    M_2 < 0   and   P = ⎡ P_11     P_12 ⎤ > 0;    (4.14)
                        ⎣ P_12^T   P_22 ⎦

then the unique equilibrium of the genetic network (4.12)–(4.13) is globally asymptotically stable, where M_2 is the following matrix:

    ⎡ 2P_11 A + R           P_12 C + A P_12    P_12 D         0         P_11 G      ⎤
    ⎢ P_12^T A + C P_12^T   2P_22 C            P_22 D         kΛ        P_12^T G    ⎥
    ⎢ D P_12^T              D P_22             −(1 − d_2)R    0         0           ⎥    (4.15)
    ⎢ 0                     kΛ                 0              Q − 2Λ    0           ⎥
    ⎣ G^T P_11              G^T P_12           0              0         −(1 − d_1)Q ⎦

This theoretical result can be proven by constructing a Lyapunov–Krasovskii functional

    V(x(t), y(t), t) = (x(t), y(t))^T P (x(t), y(t)) + ∫_{t−τ_1(t)}^{t} f^T(y(μ)) Q f(y(μ)) dμ + ∫_{t−τ_2(t)}^{t} x^T(μ) R x(μ) dμ    (4.16)

and showing that V˙ (x(t), y(t), t) < 0 holds. These conditions guarantee the global stability of the equilibrium and can be easily verified by using software tools, e.g., the MATLAB LMI Toolbox. Moreover, this numerical approach can be used not only to analyze and understand various gene regulation mechanisms in living organisms but also to design synthetic biomolecular networks in the framework of synthetic biology and analyze, for example, oscillatory phenomena such as synchronization of gene oscillators.


The type of stability mentioned above is Lyapunov stability. Another type of stability is known as structural stability, for which the qualitative behavior of the trajectories is unaffected by small, continuously differentiable perturbations. Examples of such qualitative properties are hyperbolic invariant sets like hyperbolic equilibria and periodic orbits. In contrast to Lyapunov stability, in which perturbations of initial conditions for a fixed system are considered, structural stability deals with perturbations of the system itself. Structural stability plays important roles in robustness and development in biology (Kitano 2002). For example, the fundamental properties of the lambda phage fate decision circuit are not affected even if the sequence of OR binding sites is altered (Kitano 2002). Lambda phage exploits multiple feedback mechanisms to stabilize the committed state and to enable switching of its pathways, rather than relying on specific parametric features of the elements such as binding sites.

4.2 Bifurcation Analysis

Bifurcation analysis focuses on the qualitative changes in system behavior in response to parameter changes. It is performed by varying single or multiple parameters until a qualitative change in dynamics occurs. The value at which the bifurcation occurs is called the bifurcation value; there, the real part of at least one root λ_k of the characteristic equation is zero. Bifurcation analysis can help to provide comprehensive and predictive information for understanding gene expression patterns, regulatory pathways, and functions of various biomolecular networks. To show explicitly the dependence of the system dynamics on the parameters, (4.1) is rewritten as

    dx(t)/dt = f(x_τ; α),    (4.17)

where α is a vector of system parameters. Assume that f(0; α) ≡ 0, i.e., the zero function corresponding to the equilibrium is a solution of (4.17) for all parameter values α. The characteristic equation of the linear approximation to (4.1) has the form of (4.4). In general, bifurcations can be divided into two principal classes: local bifurcations and global bifurcations. Local bifurcations can be analyzed entirely through changes in the local stability properties of equilibria, periodic orbits, or other invariant sets as parameters cross critical thresholds, whereas global bifurcations often occur when larger invariant sets of the system collide with each other or with equilibria of the system. Global bifurcations cannot be detected by stability analysis of the equilibria alone.

Varying the system parameters can create or destroy equilibrium solutions, and the properties of these equilibria can change. At bifurcation points, a stable equilibrium may lose its stability or vice versa. A stable equilibrium


may also bifurcate to other or no equilibria. Local bifurcation of equilibria can be used to analyze such phenomena. For local bifurcations, there are two generic (codimension-one) bifurcation types for a general continuous-time nonlinear dynamical system, i.e., steady-state bifurcations and Hopf bifurcations, at which the real part of a root of the characteristic equation is zero. Most notably:

1. if a root is equal to zero, the bifurcation is a steady-state bifurcation;
2. if a pair of purely imaginary complex conjugate roots exists, the bifurcation is called a Hopf bifurcation.

Clearly, the simplest and most commonly occurring bifurcations, associated with the appearance of one characteristic value λ_1 = 0, are steady-state bifurcations, including saddle-node bifurcations, transcritical bifurcations, and pitchfork bifurcations. In fact, by center manifold theory and an appropriate transformation, a nonlinear system can be qualitatively reduced to a codimension-one system near the bifurcation point. Therefore, we can give the following qualitative descriptions for each steady-state bifurcation.

1. The saddle-node bifurcation, with the prototype function or normal form ẋ(t) = α − x²(t): the saddle-node bifurcation occurs when there are two curves of stable and unstable equilibria on one side of the bifurcation point in the bifurcation diagram and no curves of equilibria on the other side. At the bifurcation point, the stable and unstable equilibria coalesce and form a saddle-node equilibrium.
2. The transcritical bifurcation, with the prototype function or normal form ẋ(t) = αx(t) − x²(t): the transcritical bifurcation occurs when two curves of stable and unstable equilibria exist on either side of the bifurcation point, both curves intersect at the bifurcation point, and the stability of each equilibrium along the curve changes on passing through the bifurcation point in the bifurcation diagram.
3. The pitchfork bifurcation, with the prototype function or normal form ẋ(t) = αx(t) ± x³(t). The negative and positive signs correspond to a supercritical bifurcation and a subcritical bifurcation, respectively. For the supercritical case, the pitchfork bifurcation occurs when three curves of equilibria intersect at the bifurcation point and only one middle curve c exists on both sides of the bifurcation point in the bifurcation diagram; the other curves lie entirely to one side of the bifurcation point and have a stability type opposite to that of the curve c.

Next, we consider the Hopf bifurcation. In addition to equilibrium solutions, another widely observed phenomenon is oscillatory behavior, which is also common in biological systems. The cause of an oscillation may differ: it can be externally imposed or internally generated. An internally caused stable oscillation can be found if the system has a limit cycle in the state space. A limit cycle is an isolated closed trajectory. All trajectories in its


vicinity will wind towards or away from the limit cycle for t → ∞, depending on the stability of the limit cycle. Hopf bifurcations can lead to stable or unstable oscillations. The Hopf bifurcation occurs when there is a pair of purely imaginary roots. Assume that there is α_0 such that for α < α_0, all roots λ_k of the characteristic equation (4.4) belong to the open left half of the complex plane, whereas at α = α_0,

    λ_{1,2}|_{α=α_0} = ±iω_0 (ω_0 > 0),   Re λ_j|_{α=α_0} < 0 (for all j > 2),    (4.18)
    (d Re λ_{1,2}(α)/dα)|_{α=α_0} > 0.    (4.19)

Under these conditions, if α increases and passes through the value α_0, the stable equilibrium becomes unstable, i.e., α = α_0 is the bifurcation value, and a periodic solution bifurcates from the equilibrium. Such a solution is stable if it arises for α > α_0 and unstable if it arises for α < α_0. When a stable cycle emerges, the bifurcation is a supercritical Hopf bifurcation; otherwise, it is subcritical. Local bifurcation has been widely used in the analysis of biomolecular networks. For example, saddle-node bifurcations and pitchfork bifurcations correspond to progression and decision differentiation (Guantes and Poyatos 2008), respectively, and Hopf bifurcations are often used to detect the occurrence of oscillations such as circadian rhythms (Goldbeter 1995, Leloup and Goldbeter 1998, Leloup and Goldbeter 2003). For an extensive treatment of bifurcation analysis, refer to (Guckenheimer and Holmes 1983, Kuznetsov 1995, Kolmanovskii and Myshkis 1999).
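The normal forms above can be explored directly. A minimal sketch for the saddle-node case ẋ = α − x², enumerating equilibria and their stability from the sign of f'(x) = −2x over an assumed parameter grid:

```python
import math

def equilibria(alpha):
    """Equilibria of x' = alpha - x^2 with a stability flag.

    Stable iff f'(x) = -2x < 0; at the bifurcation point alpha = 0 the
    linearization vanishes, so stability is left undetermined (None).
    """
    if alpha < 0:
        return []                        # no equilibria below the bifurcation
    if alpha == 0:
        return [(0.0, None)]             # saddle-node (semi-stable) equilibrium
    r = math.sqrt(alpha)
    return [(r, True), (-r, False)]      # stable and unstable branches

for alpha in (-1.0, 0.0, 1.0):
    print(alpha, equilibria(alpha))
```

As the caption of the list item describes, the two branches exist only on one side of α = 0 and coalesce at the bifurcation point.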

Figure 4.1 Bifurcation diagram for wild-type cell cycle (from (Tyson et al. 2003))

A typical example of bifurcation analysis in biomolecular networks is the cell cycle, as shown in Figure 4.1. The cell cycle can be viewed as a sequence


of bifurcations. A small newborn cell is attracted to the stable G1 steady-state. As it grows, it eventually passes the saddle-node bifurcation (SN3) where the G1 steady-state disappears, and the cell makes an irreversible transition into the S/G2 stage. It stays in the S/G2 stage until it grows so large that the S/G2 steady-state disappears, inducing an infinite-period bifurcation (SN/IP), where a stable steady-state gives way to a large-amplitude periodic solution, and the period of oscillation is very long. Cyclin-B-dependent kinase activity soars, driving the cell into mitosis, and then plummets, as cyclin B is degraded by APC-Cdc20. The drop in Cdk1-cyclin B activity is the signal for the cell to divide, causing the cell size to be halved, and the system returns to its starting point (Tyson et al. 2003). In practice, bifurcation analysis has also been widely used for analyzing multistability (Angeli et al. 2004), resonance in oscillatory cellular processes (Hasty et al. 2002a), production of circadian oscillations and coexistence of multiple attractors (Goldbeter 1997), single parameter robustness (Ma and Iglesias 2002), and synchronization of coupled oscillators (Gonze et al. 2005, Chen et al. 2005). Many bifurcation tools have been developed to study various interesting dynamics such as switching and oscillatory dynamics in biomolecular networks. These software tools, tutorial manuals, and test models can be freely downloaded, e.g., Xppaut at http://www.math.pitt.edu/~bard/xpp/xpp.html, the MATLAB package matcont at http://www.matcont.ugent.be/matcont.html, and DDE-BIFTOOL at http://www.cs.kuleuven.ac.be/~twr/research/software/delay/ddebiftool.shtml. Recently, BunKi, an integrated environment dedicated to bifurcation analysis with a user-friendly interface, was developed; it is freely available at http://bunki.sat.iis.u-tokyo.ac.jp/.

4.3 Examples for Analyzing Stability and Bifurcations

In this section, we present some simple examples to analyze stability and bifurcations of molecular networks.

4.3.1 A Simplified Gene Network

We first study the qualitative behavior of the general gene regulatory networks (3.13)–(3.14), with the following simplification.

Assumption 4.3.1 Assume that the total delay time τ of the transcription and translation processes for each gene product has the same value, i.e., τ ≡ τ_m1 + τ_p1 = · · · = τ_mn + τ_pn. Assume that all mRNAs and all proteins have the same degradation rates k_m and k_p, respectively, i.e., k_m ≡ k_m1 = · · · = k_mn and k_p ≡ k_p1 = · · · = k_pn.


Hence, according to (4.11) with this assumption, we have a much simpler representation of the characteristic equation, namely the transcendental equation

    det( (λ + k_m)(λ + k_p) e^{λτ} I_n − J_s J_r ) = 0.    (4.20)

Thus, we have the following theorem.

Theorem 4.1. Suppose that Assumption 4.3.1 holds. If λ is a root of (4.20), there is an eigenvalue γ_i (i = 1, ..., n) of the matrix J_s J_r for which

    γ_i = (λ + k_m)(λ + k_p) e^{λτ}.    (4.21)

On the other hand, if γ_i ∈ C is an eigenvalue of the matrix J_s J_r, then any solution λ of (4.21) is a characteristic root of (4.20). Note that in general γ_i is a complex number. The proof of the theorem is straightforward from (4.20), which is equivalent to the n equations (4.21) for the n eigenvalues of J_s J_r, respectively. For instance, suppose that (γ_1, ..., γ_n) are the n eigenvalues of the matrix J_s J_r ∈ R^{n×n} at an equilibrium (m̄, p̄). Then, the local stability depends on the characteristic values λ of the n equations (4.21). If all the roots of (4.21) for i = 1, ..., n have negative real parts, then the equilibrium of (3.13)–(3.14) is asymptotically stable; if there exists a root with a positive real part for some i, then the equilibrium of (3.13)–(3.14) is unstable.

Let ψ(λ) = (λ + k_m)(λ + k_p), where k_m and k_p are non-negative numbers. Then, we have the following theorem.

Theorem 4.2. Suppose that Assumption 4.3.1 holds. Assume that each eigenvalue of J_s J_r is expressed as γ_i ∈ C for i = 1, ..., n. All roots of (4.21) have negative real parts at (m̄, p̄) if and only if γ_i for all i = 1, ..., n lies inside the region including the origin bounded by the arcs of the spirals

    ψ(jω) e^{jωτ},    (4.22)

where j = √−1 and ω ∈ R.

Notice that the spirals ψ(jω)e^{jωτ} can be drawn in the following way. As shown in Figure 4.2 (a), first draw ψ(jω) in the complex plane with the x axis and the y axis as x(ω) = Re[ψ(jω)] and y(ω) = Im[ψ(jω)], which is clearly a parabolic curve. Then ψ(jω)e^{jωτ} is obtained by rotating each point of ψ(jω) by ωτ, as shown in Figure 4.2 (b); the resulting curve encloses the origin, i.e., the zero point of the complex plane (Hori et al. 2010). The region including the origin, bounded by the arcs of the spirals ψ(jω)e^{jωτ}, is Ω shown in Figure 4.2 (b). If any square root of any eigenvalue of J_s J_r moves from the inside region to the outside region and passes the spirals indicated in Theorem 4.2, then there exists a bifurcation that destabilizes the system. From Theorem 4.2, we have two limiting cases, i.e., τ = 0 and τ → ∞.
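The correspondence of Theorem 4.1 between eigenvalues of J_s J_r and roots of (4.20) can be spot-checked numerically. A sketch with assumed values, constructing γ from a chosen λ via (4.21) and embedding it as an eigenvalue of a real 2 × 2 matrix:

```python
import numpy as np

# Assumed values for the check; lam plays the role of a characteristic root.
km, kp, tau = 1.0, 2.0, 0.5
lam = -0.4 + 0.9j
gamma = (lam + km) * (lam + kp) * np.exp(lam * tau)   # (4.21)

# Real 2x2 matrix whose eigenvalues are gamma and its conjugate;
# it stands in for JsJr.
a, b = gamma.real, gamma.imag
JsJr = np.array([[a, -b], [b, a]])

# lam should then be a root of (4.20).
char = np.linalg.det(
    (lam + km) * (lam + kp) * np.exp(lam * tau) * np.eye(2) - JsJr
)
print(abs(char))  # ~0
```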


Figure 4.2 The stability region in the complex plane for the simplified gene network: (a) stability region without time delay; (b) stability region with time delay


Corollary 4.3. Suppose that Assumption 4.3.1 holds and that k ≡ k_p = k_m. Assume that γ_i (i = 1, ..., n) are the eigenvalues of J_s J_r. Then, at (m̄, p̄),

1. all roots of (4.21) have negative real parts for τ = 0 if and only if k² > (|γ_i| + Re[γ_i])/2 for all i = 1, ..., n, and
2. all roots of (4.21) have negative real parts for all non-negative τ if and only if k² > |γ_i| for all i = 1, ..., n.

Proof of Corollary 4.3. Let γ_i = R_i e^{jθ_i}, where |γ_i| = R_i ≥ 0 and 0 ≤ θ_i < 2π. According to Theorem 4.2, when τ = 0, the necessary and sufficient condition for all roots of (4.21) to have negative real parts is

    Re[±√R_i e^{jθ_i/2}] < k.    (4.23)

Therefore, by noting |γ_i| = R_i, we have k² > R_i cos²(θ_i/2) = (|γ_i| + Re[γ_i])/2, which proves condition 1 of the corollary. In the same manner, we can show condition 2 of Corollary 4.3. □

From Corollary 4.3, there clearly exists a critical τ̂ ∈ R_+ such that all roots of (4.20) have negative real parts for τ ∈ [0, τ̂) and at least one root of (4.20) has a positive real part for τ > τ̂, provided that

    max{ (|γ_1| + Re[γ_1])/2, · · · , (|γ_n| + Re[γ_n])/2 } < k² < max{ |γ_1|, · · · , |γ_n| }.    (4.24)
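Condition 1 of Corollary 4.3 is easy to verify numerically at τ = 0, where (4.21) reduces to (λ + k)² = γ_i with roots λ = −k ± √γ_i. A sketch over randomly drawn eigenvalues (the sampling scheme is an assumption of ours):

```python
import cmath
import random

# For each random (k, gamma) pair, compare the stability of the roots
# lambda = -k +/- sqrt(gamma) with the inequality of condition 1.
random.seed(1)
for _ in range(1000):
    k = random.uniform(0.1, 3.0)
    gamma = complex(random.uniform(-9, 9), random.uniform(-9, 9))
    roots = (-k + cmath.sqrt(gamma), -k - cmath.sqrt(gamma))
    stable = max(r.real for r in roots) < 0
    criterion = k ** 2 > (abs(gamma) + gamma.real) / 2.0
    assert stable == criterion
print("condition 1 verified on random samples")
```

The check works because Re[√γ]² = (|γ| + Re[γ])/2 for the principal square root, which is exactly the quantity appearing in the corollary.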

When J_s J_r has only real eigenvalues, we can also obtain the conditions for Hopf bifurcations by taking τ̄ as a bifurcation parameter (Chen and Aihara 2002a).

4.3.2 A Two-gene Network

The second example is a simplified two-gene model

    ṗ(t) = −k_p p(t) + k_1/(q(t) + k_2),    (4.25)
    ε q̇(t) = −k_q q(t) + (q²(t) p(t) + k_3)/(q²(t) + k_4),    (4.26)

where p(t) and q(t) are the protein products of genes P and Q, respectively. The protein q(t) enhances transcription of itself but represses that of gene P, whereas the protein p(t) is an activator of gene Q. Figure 4.3 illustrates schematically the two-gene network. To minimize the number of variables, we adopt only the q(t) and p(t) concentrations without explicitly expressing the mRNA or other related chemicals. k_p and k_q/ε are the degradation rates of the two proteins, whereas k_1 is the transcription and translation rate for gene P. k_2 is the MM constant. k_3 and



Figure 4.3 A two-gene model of the genetic regulatory system with an autoregulatory feedback loop (from (Chen and Aihara 2002b))

k_4 are lumped parameters that describe the effects of binding or multimerization of proteins, phosphorylation, and other similar phenomena. ε is a small positive real number expressing the difference of time scales between q(t) and p(t). Assume that E = (p̄, q̄) is an equilibrium of (4.25)–(4.26). The Jacobian matrix J of (4.25)–(4.26) at E is

    J = ⎡ −k_p                     −k_1/(q̄ + k_2)² ⎤ ,    (4.27)
        ⎣ (1/ε) q̄²/(q̄² + k_4)     (1/ε) J_q       ⎦

where

    J_q = −k_q + 2 k_4 q̄ p̄ / (q̄² + k_4)².    (4.28)

For (4.25)–(4.26), the following results hold (Chen and Aihara 2002b). Theorem 4.4. Assume that (4.25)–(4.26) have only one equilibrium E and  is sufficiently small. If Jq > 0, (4.25)–(4.26) have a periodic solution around E. On the other hand, if Jq < 0, (4.25)–(4.26) have a stable equilibrium at E. The stability of the unique equilibrium can be obtained by proving that all eigenvalues of J have negative real parts if Jq < 0. On the other hand, the existence of a periodic solution around E can be proven by showing that there exists a trapping region for (4.25)–(4.26) and there is only one unstable

4.3 Examples for Analyzing Stability and Bifurcations


equilibrium $E$ within it. According to the Poincaré–Bendixson theorem, a periodic solution then exists.

One of the key factors affecting the dynamics of gene–protein networks is time delay, which usually arises in transcription, translation, and translocation processes and may significantly influence the stability of the overall system, particularly in a eukaryotic cell. Next, we assume that there is a time delay $\tau \ge 0$ only for the slow variable $p$ and that the time delay for the fast variable $q$ is small enough to be ignored. Then (4.25)–(4.26) become

$$\epsilon\,\dot p(t) = -k_p p(t) + \frac{k_1}{q(t) + k_2}, \tag{4.29}$$

$$\dot q(t) = -k_q q(t) + \frac{q^2(t)\,p(t-\tau)}{q^2(t) + k_4} + k_3. \tag{4.30}$$

The characteristic equation of (4.29)–(4.30) at the unique equilibrium $E = (\bar p, \bar q)$ takes the form

$$\lambda^2 + \frac{a}{\epsilon}\lambda + \frac{b}{\epsilon} + \frac{c}{\epsilon}\, e^{-\lambda\tau} = 0, \tag{4.31}$$

where $a = k_p - \epsilon J_q$, $b = -J_q k_p$, and $c = k_1 \bar q^2/[(\bar q^2 + k_4)(\bar q + k_2)^2]$ at $E = (\bar p, \bar q)$. The roots of the transcendental equation (4.31) for $\lambda$ determine the stability of the equilibrium $E$. If the real parts of all the roots are negative, $E$ is an asymptotically stable equilibrium and there is no oscillation. On the other hand, if there exists a root with a positive real part, an oscillation exists. In other words, bifurcations occur when $\lambda = jv$ is a root of (4.31), namely

$$\frac{b}{\epsilon} - v^2 + \frac{c}{\epsilon}\cos(v\tau) = 0, \tag{4.32}$$

$$\frac{a}{\epsilon}v - \frac{c}{\epsilon}\sin(v\tau) = 0. \tag{4.33}$$

For $\tau > 0$ and $a \neq 0$, by eliminating the $\sin(v\tau)$ and $\cos(v\tau)$ terms, we have

$$v^4 + \frac{\bar a}{\epsilon^2} v^2 + \frac{\bar b}{\epsilon^2} = 0, \tag{4.34}$$

where $\bar a = a^2 - 2\epsilon b$ and $\bar b = b^2 - c^2$. When $\epsilon$ is sufficiently small and $|c| > |b|$, the real solutions of (4.34) can be written as

$$v = \pm\left[-\frac{\bar a}{2\epsilon^2} + \left(\frac{\bar a^2}{4\epsilon^4} - \frac{\bar b}{\epsilon^2}\right)^{1/2}\right]^{1/2} = \pm\left[-\frac{\bar a}{2\epsilon^2} + \left(\frac{\bar a}{2\epsilon^2} - \frac{\bar b}{\bar a} + O(\epsilon)\right)\right]^{1/2}.$$

Therefore, the critical values $\bar v$ and $\bar\tau_k$ are

$$\bar v = \pm\left[\frac{c^2 - b^2}{a^2} + O(\epsilon)\right]^{1/2} = \pm\frac{\sqrt{c^2 - b^2}}{a} + O(\epsilon) \tag{4.35}$$


and

$$\bar\tau_k = \frac{1}{v}\left[2k\pi + \arccos\!\left(\frac{\epsilon v^2 - b}{c}\right)\right] = \frac{|a|}{\sqrt{c^2 - b^2}}\left[2k\pi + \arccos\!\left(-\frac{b}{c}\right)\right] + O(\epsilon), \tag{4.36}$$

where $k = 0, 1, 2, \ldots$ and the range of $\arccos$ is $[0, \pi]$. When $|c| < |b|$, it is clear that there is no real solution of (4.34), which means that no Hopf bifurcation can occur for any $\tau$.
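The leading-order expressions (4.35)–(4.36) are easy to evaluate numerically. The sketch below (with illustrative values for the coefficients $a$, $b$, and $c$, not ones derived from a particular equilibrium of the model) computes the critical frequency and the $k$th critical delay, then checks them against the bifurcation conditions (4.32)–(4.33) in the $\epsilon \to 0$ limit:

```python
import math

def critical_delay(a, b, c, k=0):
    """Leading-order (epsilon -> 0) Hopf frequency and delay, cf. (4.35)-(4.36).

    Returns None when |c| <= |b|: then (4.34) has no real root and no
    delay-induced Hopf bifurcation occurs for any tau.
    """
    if abs(c) <= abs(b):
        return None
    v = math.sqrt(c * c - b * b) / abs(a)              # critical frequency (4.35)
    tau_k = (2 * k * math.pi + math.acos(-b / c)) / v  # critical delay (4.36)
    return v, tau_k

# Illustrative coefficients with a > 0 and |c| > |b|
v, tau0 = critical_delay(a=1.0, b=-0.5, c=0.9)
# At (v, tau0) the leading-order forms of (4.32)-(4.33) require
# cos(v*tau0) = -b/c and sin(v*tau0) = a*v/c.
assert abs(math.cos(v * tau0) - 0.5 / 0.9) < 1e-12
assert abs(math.sin(v * tau0) - v / 0.9) < 1e-12
```

The identity $\cos^2(v\bar\tau_0) + \sin^2(v\bar\tau_0) = (b^2 + a^2 v^2)/c^2 = 1$ holds automatically at the critical frequency, which is what the two assertions verify.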

Figure 4.4 Limit cycles of (4.29)–(4.30) in the ($p$, $\ln q$) plane for $\epsilon = 0.01$, obtained at $\tau = 0$ and $\tau = 0.5$. The point E is the equilibrium of the overall system (4.29)–(4.30) (from (Chen and Aihara 2002b))
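Trajectories like those in Figure 4.4 can be reproduced by integrating the delayed system (4.29)–(4.30) with an explicit Euler scheme that keeps a history buffer for the delayed variable $p(t-\tau)$. The sketch below uses the parameter values quoted in the text; the step size, horizon, and initial condition are arbitrary choices, and the code is an illustration rather than the authors' original implementation:

```python
from collections import deque

# Parameters of the two-gene model (4.29)-(4.30), as given in the text
kp, kq, k1, k2, k3, k4 = 1.0, 1.0, 15.0, 0.2, 0.1, 10.0
eps, tau, dt = 0.01, 0.5, 1e-3

def simulate(t_end=15.0, p0=1.0, q0=1.0):
    n_delay = int(round(tau / dt))
    # constant initial history p(t) = p0 on [-tau, 0]
    hist = deque([p0] * (n_delay + 1), maxlen=n_delay + 1)
    p, q, traj = p0, q0, []
    for step in range(int(round(t_end / dt))):
        p_del = hist[0]                                   # p(t - tau)
        dp = (-kp * p + k1 / (q + k2)) / eps              # fast variable (4.29)
        dq = -kq * q + q * q * p_del / (q * q + k4) + k3  # slow variable (4.30)
        p, q = p + dt * dp, q + dt * dq
        hist.append(p)                                    # drops the oldest entry
        traj.append((step * dt, p, q))
    return traj

traj = simulate()
```

Because $J_q > 0$ for this parameter set, the trajectory approaches a limit cycle; plotting $\ln q$ against $p$ reproduces the shape of Figure 4.4.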

On the other hand, from (4.31), we have

$$\left(\frac{d\lambda}{d\tau}\right)^{-1} = \frac{2\epsilon\lambda + a}{c\lambda}\, e^{\lambda\tau} - \frac{\tau}{\lambda}. \tag{4.37}$$

Since $\partial(\mathrm{Re}\,\lambda)/\partial\tau$ has the same sign as $|\lambda|^2\,\mathrm{Re}[(d\lambda/d\tau)^{-1}]$, by substituting (4.32)–(4.33) into the real part of (4.37) at $\lambda = jv$, we obtain

$$|\lambda|^2\,\mathrm{Re}\!\left[\left(\frac{d\lambda}{d\tau}\right)^{-1}\right]_{\lambda=jv} = v^2\,\frac{2\epsilon^2 v^2 - 2\epsilon b + a^2}{c^2} = \frac{v^2 a^2}{c^2} + O(\epsilon). \tag{4.38}$$

Thus, when $\epsilon$ is sufficiently small, $\partial(\mathrm{Re}\,\lambda)/\partial\tau|_{\lambda=jv} > 0$, which implies that the real part of any $\lambda$ on the imaginary axis moves to the right half plane as $\tau$ increases. In other words, the time delay can only destabilize the equilibrium $E$. Therefore, the following theorem is obtained by summarizing the above discussion (Chen and Aihara 2002b).

Theorem 4.5. Assume that (4.29)–(4.30) have only one equilibrium $E = (\bar p, \bar q)$ and $\epsilon$ is sufficiently small. If $J_q > 0$, (4.29)–(4.30) have an oscillatory solution around $E$ for any non-negative $\tau$. If $J_q < 0$ and $|c| \ge |b|$, (4.29)–(4.30) have an oscillatory solution around $E$ when $\tau > \bar\tau_0$, where $\bar\tau_0$ is the first bifurcation value in (4.36) at $k = 0$. If $J_q < 0$ and $|c| < |b|$, (4.29)–(4.30) have an asymptotically stable equilibrium at $E$ for any non-negative $\tau$.

For $k_p = 1$, $k_q = 1$, $k_1 = 15$, $k_2 = 0.2$, $k_3 = 0.1$, and $k_4 = 10$, it can be estimated that there is only one equilibrium $(\bar p, \bar q) = (7.6831, 1.4787)$. It is unstable at $\tau = 0$ and $\tau = 0.5$, independent of $\epsilon$, and $J_q = 0.53 > 0$. The phase portrait and time evolutions based on numerical simulation with $\epsilon = 0.01$ are shown in Figure 4.4 and Figure 4.5, respectively.

Figure 4.5 Time evolutions of $q(t)$ and $p(t)$ (plotted as $\log q(t)$ and $p(t)/10$) for $\epsilon = 0.01$, without time delay ($\tau = 0$) and with time delay ($\tau = 0.5$) (from (Chen and Aihara 2002b))

4.3.3 A Three-gene Network

Next, we examine the biologically plausible three-gene model shown in Figure 4.6, where proteins $p_1$ and $p_3$ form a heterodimer to inhibit gene 2, whereas


protein $p_2$ forms a homodimer to activate gene 3 and inhibit gene 1 (Chen and Aihara 2002b). Assume that the production of proteins $p_1$ and $p_2$ is much faster than that of protein $p_3$:

$$\epsilon\,\dot p_1(t) = \frac{k_1}{1 + a_1 p_2^2(t)} - d_1 p_1(t) + b_1, \tag{4.39}$$

$$\epsilon\,\dot p_2(t) = \frac{k_2}{1 + a_2 p_1(t)\, p_3(t-\tau)} - d_2 p_2(t) + b_2, \tag{4.40}$$

$$\dot p_3(t) = \frac{k_3 p_2^2(t)}{1 + a_3 p_2^2(t)} - d_3 p_3(t) + b_3, \tag{4.41}$$

where $\epsilon = 0.01$, $d_1 = d_2 = d_3 = 0.04$, $b_1 = b_2 = b_3 = 0.004$, $k_1 = 4$, $k_2 = 1$, $k_3 = 0.08$, $a_1 = 1$, $a_2 = 1/16$, and $a_3 = 0.05$. All variables are positive, and $\tau$ is the time delay.


Figure 4.6 A three-gene network (from (Chen and Aihara 2002b))

Assume $\tau = 0$. From (4.39)–(4.40), we obtain the nullcline, or the equilibrium locus, of the fast system:

$$p_3 = \frac{d_1 (1 + a_1 p_2^2)(k_2 + b_2 - d_2 p_2)}{a_2 (d_2 p_2 - b_2)\big(k_1 + b_1 (1 + a_1 p_2^2)\big)}. \tag{4.42}$$

On the other hand, the nullcline of the slow system can be derived from (4.41):

$$p_3 = \frac{k_3 p_2^2}{d_3 (1 + a_3 p_2^2)} + \frac{b_3}{d_3}. \tag{4.43}$$

Figure 4.7 shows the limit cycle as well as the nullclines of both the fast system and the slow system obtained by numerical calculation. The relaxation oscillation is mainly due to the time scale difference and the hysteresis of the slow manifold. As with the previous example, we can obtain the time evolutions and the effects of the time delay, which are similar to those in the two-gene model.
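The nullclines (4.42)–(4.43) and their intersection in the $(p_2, p_3)$ plane (the equilibrium E of Figure 4.7) can be computed directly from the parameter values given above. The following sketch locates the intersection by bisection; the bracketing interval is an illustrative choice:

```python
# Parameter values of the three-gene model (4.39)-(4.41), as given in the text
d1 = d2 = d3 = 0.04
b1 = b2 = b3 = 0.004
k1, k2, k3 = 4.0, 1.0, 0.08
a1, a2, a3 = 1.0, 1.0 / 16.0, 0.05

def fast_nullcline(p2):
    """p3 on the fast-system nullcline (4.42); requires d2*p2 > b2."""
    return (d1 * (1 + a1 * p2**2) * (k2 + b2 - d2 * p2)
            / (a2 * (d2 * p2 - b2) * (k1 + b1 * (1 + a1 * p2**2))))

def slow_nullcline(p2):
    """p3 on the slow-system nullcline (4.43)."""
    return k3 * p2**2 / (d3 * (1 + a3 * p2**2)) + b3 / d3

def equilibrium(lo=1.0, hi=5.0, tol=1e-10):
    """Bisection on the nullcline difference to locate the equilibrium E."""
    f = lambda p2: fast_nullcline(p2) - slow_nullcline(p2)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

p2_star = equilibrium()
p3_star = slow_nullcline(p2_star)
```

On the fast nullcline the difference changes sign between $p_2 = 1$ and $p_2 = 5$, so the bisection converges to the unique intersection in that interval.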



Figure 4.7 A limit cycle of the three-gene model with $\epsilon = 0.01$ and $\tau = 0$, plotted in the ($p_2$, $p_3$) plane. The two curves are the nullclines of the fast and slow systems (4.42) and (4.43), respectively. The point E is the equilibrium of the overall system (4.39)–(4.41) (from (Chen and Aihara 2002b))

4.4 Robustness and Sensitivity Analysis

In contrast to bifurcation analysis, sensitivity analysis provides a quantitative measure of the dependence of system behavior on parameters. Complex mathematical models of biomolecular networks are increasingly being used as predictive tools and as aids in understanding the system behavior underlying observed biological phenomena. The parameters appearing in these models, which may include rate constants, initial conditions, operating conditions, and thermodynamic constants, are seldom known to high precision. Quantifying the roles of the parameters in model prediction is the traditional realm of sensitivity analysis. Most research studies are directed at the conceptualization and implementation of numerical techniques for determining parameter sensitivities in various mathematical models, including those with stochastic characteristics and non-constant parameters.

Robustness and sensitivity are two sides of the same coin. In addition to some direct robustness measures, sensitivity analysis has also been widely used to quantify robustness with respect to different parameter perturbations. In a complex system with a large number of parameters, the system behavior may be robust to changes in some parameters but sensitive to changes in others. Accurate identification of the underlying mechanisms for


such robustness is a challenging problem. The system behavior depends on both the parameters and the system architecture. Some analyses show that the tradeoff between robustness and fragility is largely determined by the regulatory structures (Stelling et al. 2004b). Robustness and sensitivity analysis provides a method to characterize the impact of both the parameters and the system structure. It may also serve as a guide not only in designing synthetic biomolecular networks with specified functions and high robustness but also in experimentally realizing desired system behavior. For example, sensitivity analysis can be used to optimize genetic networks (Feng et al. 2004) and to indicate which proteins are likely to be the most significant as drug targets (Ihekwaba et al. 2004). Different parameters may have different influences on the system dynamics, and the degree of influence can be quantified by analysis of robustness and parameter sensitivity.

Next, we consider systems of differential equations. Robustness and sensitivity analysis of the general system (4.1) can be treated similarly. Consider a system described by the ODEs

$$\frac{dx(t)}{dt} = f(x(t); p), \tag{4.44}$$

where $x \in \mathbb{R}^n$ denotes the state, $p \in \mathbb{R}^m$ denotes the parameters, and $f: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ consists of functions of the state and parameters. We will analyze this system for robustness and sensitivity.

4.4.1 Robustness Measures

Different robustness measures have been proposed, such as single-parameter robustness (Ma and Iglesias 2002), multiparametric robustness (Ma and Iglesias 2002, Bluthgen and Herzel 2003), and Monte Carlo-based robustness (Eissing et al. 2005).

Single-parameter robustness measures the minimal distance from a reference point in the parameter space to a bifurcation point. For each parameter $p_i$, bifurcation values can be obtained by using bifurcation analysis tools. Suppose that some kind of bifurcation, e.g., a Hopf bifurcation, occurs at $\bar p_i$ and $\hat p_i$. Both the size of the interval $(\bar p_i, \hat p_i)$ and the proximity of the nominal parameter value to either boundary are measures of the robustness of the system. Therefore, the degree of robustness (DOR) for each parameter $p_i$ can be defined as

$$\mathrm{DOR}_i = 1 - \max\left\{\frac{|\bar p_i|}{|p_i|}, \frac{|p_i|}{|\hat p_i|}\right\}, \tag{4.45}$$

where $p_i$ is the parameter at the reference point or in the reference system. From the definition, it is straightforward to see that the value always lies between zero and one. A robustness measure of 0 indicates that the parameter value is exactly at a bifurcation point, i.e., extreme parameter sensitivity, whereas a robustness measure of 1 implies large insensitivity


or high robustness. Single-parameter robustness has been adopted to quantify the robustness of an oscillatory network, where stable oscillations exist for the range $(\bar p_i, \hat p_i)$ (Ma and Iglesias 2002), and of a bistable network, where bistability is generated for the range $(\bar p_i, \hat p_i)$ by saddle-node or transcritical bifurcations (Eissing et al. 2007). Note that single-parameter robustness depends strongly on the reference parameter set.

Multiparametric robustness with respect to random parameter variations measures the robustness against global variability caused by environmental or cell-to-cell variations. An ensemble of altered systems is obtained from a reference system by random modifications of its parameters. Each alteration of the reference system is characterized by the total parameter variation, $p_T$, which is defined as

$$p_T = \sum_{i=1}^{m} \left|\log_{10} \frac{|\hat p_i|}{|p_i|}\right|, \tag{4.46}$$

where $\hat p_i$ and $p_i$ are the parameters in the altered and reference systems, respectively. The total parameter variation $p_T$ can be interpreted as the total order of magnitude of the parameter variation. Such an approach has been applied to analyses of the robustness of bacterial chemotaxis (Barkai and Leibler 1997) and of intracellular signaling cascades (Bluthgen and Herzel 2003). Similarly, a Monte Carlo-based approach for evaluating the robustness of bistable systems was proposed and applied to apoptosis signaling (Eissing et al. 2005, Eissing et al. 2007). In this approach, random parameter sets are drawn from predefined ranges such that each parameter is uniformly distributed on a logarithmic scale; the relative frequency of occurrence of bistability estimates the corresponding volume in the parameter space and can be used as a robustness measure. Such an approach is also applicable to oscillatory systems, although, to the authors' knowledge, there is no related work to date. Other multiparametric robustness measures have also been proposed. For example, the structural singular value was used as a tool to quantify the robust stability of limit cycles in an oscillatory biomolecular network (Ma and Iglesias 2002).

4.4.2 Sensitivity Analysis

Assuming that the solution of (4.44) exists, the sensitivity matrix of the system, $S(t)$, which describes how variations in the parameters near a point $p_0$ in the parameter space influence the system trajectories, is defined as

$$S(t) = \left.\frac{\partial x}{\partial p}\right|_{x=x(t;p_0),\,p=p_0} = \{s_{ij}\}, \tag{4.47}$$

where $S(t)$ is composed of the individual sensitivities $s_{ij}$ of each state variable to each parameter (Zak et al. 2005). The sensitivity matrix $S(t)$ can be calculated by finite differences as follows. For a single parameter $p_j$,


$$\left.\frac{\partial x(t)}{\partial p_j}\right|_{x=x(t;p_0)} \approx \frac{x(t; p_j + \Delta p_j) - x(t; p_j)}{\Delta p_j}. \tag{4.48}$$

It is computationally tedious, and frequently inaccurate, to estimate the sensitivities using (4.48), and this may lead to numerical instabilities. An alternative approach is to differentiate (4.44) with respect to the parameter $p$, giving

$$\frac{dS(t)}{dt} = A(t, p_0)\, S(t) + B(t, p_0), \tag{4.49}$$

where $A(t, p_0)$ and $B(t, p_0)$ are the Jacobian matrices of $f$ with respect to $x$ and $p$, respectively, i.e.,

$$A(t, p_0) = \left.\frac{\partial f}{\partial x}\right|_{x=x(t;p_0),\,p=p_0}, \tag{4.50}$$

$$B(t, p_0) = \left.\frac{\partial f}{\partial p}\right|_{x=x(t;p_0),\,p=p_0}. \tag{4.51}$$

The sensitivity matrix $S(t)$ is a solution of the linear time-varying system defined by (4.49). By symbolically calculating the state and parameter Jacobian matrices ($A$ and $B$, respectively), it is possible to integrate $x(t)$ and $S(t)$ simultaneously, given the nominal parameter values $p_0$ and the initial conditions for the state, $x_0$ (Zak et al. 2005).

A state-based sensitivity measure describes the change in the state with respect to changes in the parameters. Periodic oscillations are very common in biomolecular networks. For an oscillatory network, a global indicator of robustness should capture aspects including the shape, phase, period, and amplitude of an oscillation. The overall state sensitivity is defined by $S_o(t) = (S_{o1}(t), \ldots, S_{om}(t))^T$, where each $S_{oj}(t)$ ($j = 1, \ldots, m$) is determined by summation over the discrete times $t_0, \ldots, t_{n_T}$ and normalization to the relative (log-gain) sensitivity as follows:

$$S_{oj}(t) = \left[\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{n_T}\left(\frac{p_j}{x_i(t_k, t_0)}\,\frac{\partial x_i(t_k, t_0)}{\partial p_j}\right)^2\right]^{1/2}. \tag{4.52}$$

The overall state sensitivity is normalized with respect to the number of state variables and the parameters in order to enable comparison between models (Stelling et al. 2004b).

In addition to the state sensitivity, the period and amplitude sensitivities are also quantities of primary interest in oscillatory networks. The period sensitivity $S_\tau$ captures the change of the period $\tau$ upon changes in the parameters $p$:

$$S_\tau = \left.\frac{\partial \tau}{\partial p}\right|_{p=p_0}. \tag{4.53}$$

The relative period sensitivity (Wolf et al. 2005) can be similarly quantified by

$$S_j = \frac{\Delta\tau/\tau}{\Delta p_j/p_j}. \tag{4.54}$$
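As a concrete check of definition (4.54), the sketch below estimates the relative period sensitivity of a toy oscillator with a known answer, $\ddot x + p x = 0$, whose period is $\tau = 2\pi/\sqrt{p}$, so that $(\Delta\tau/\tau)/(\Delta p/p) \to -1/2$. The system and all numerical settings are illustrative choices, not taken from the text:

```python
import math

def period(p, dt=1e-4):
    """Period of x'' + p*x = 0, measured between successive upward zero-crossings."""
    x, v, t, prev = 1.0, 0.0, 0.0, 1.0
    crossings = []
    while len(crossings) < 3 and t < 50.0:
        v -= dt * p * x        # semi-implicit Euler keeps the amplitude stable
        x += dt * v
        t += dt
        if prev < 0.0 <= x:    # upward zero-crossing of x(t)
            crossings.append(t)
        prev = x
    return crossings[-1] - crossings[-2]

p0, dp = 2.0, 0.02
tau0 = period(p0)
S_rel = ((period(p0 + dp) - tau0) / tau0) / (dp / p0)  # relative sensitivity (4.54)
```

The finite-difference estimate comes out close to the analytic value $-1/2$; the residual error stems from the finite perturbation $\Delta p$ and the zero-crossing quantization of the step size.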

The period sensitivity measure

$$\sigma = \sqrt{\frac{1}{m}\sum_{j=1}^{m} S_j^2} \tag{4.55}$$

is introduced to quantify the overall sensitivity of the period against changes in parameters (Wolf et al. 2005). Using this measure to compare different oscillatory models, it has been shown that the sensitivity depends on the oscillatory mechanism rather than on the details of the model description. It has also been shown that systems with negative feedback are more robust than the corresponding systems with positive feedback, and that an increase in the length of the reaction chain under regulation can lead to a decrease in sensitivity (Wolf et al. 2005). The amplitude sensitivity is similarly described by

$$S_{A_i} = \left.\frac{\partial A_i}{\partial p}\right|_{p=p_0}, \tag{4.56}$$

where $A_i$ is the amplitude of the $i$th state variable. In addition to the above sensitivity quantities, other quantities, e.g., the sensitivity measure for discrete stochastic systems (Gunawan et al. 2005, Kim et al. 2007) and the phase sensitivity (Bagheri et al. 2007, Taylor et al. 2008), can also be defined to quantify various sensitivity performances. Moreover, various algorithms for solving sensitivity-related problems, e.g., the singular value decomposition approach to period sensitivity (Zak et al. 2005) and Green's function methods for phase sensitivity (Taylor et al. 2008), have been developed.
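To make the sensitivity equations (4.49)–(4.51) concrete, consider the scalar toy model $\dot x = -p x$ (an illustrative choice, not a model from the text). Here $A = \partial f/\partial x = -p$ and $B = \partial f/\partial p = -x$, so (4.49) reads $\dot S = -pS - x$, and the exact sensitivity is $S(t) = \partial x/\partial p = -t\,x_0 e^{-pt}$. The sketch integrates $x(t)$ and $S(t)$ simultaneously and compares them with the exact values:

```python
import math

def integrate(p=0.8, x0=2.0, t_end=1.0, dt=1e-5):
    x, s = x0, 0.0               # S(0) = 0, since x0 does not depend on p
    for _ in range(int(round(t_end / dt))):
        dx = -p * x              # the model ODE (4.44)
        ds = -p * s - x          # the sensitivity ODE (4.49): A*S + B
        x, s = x + dt * dx, s + dt * ds
    return x, s

x_num, s_num = integrate()
x_exact = 2.0 * math.exp(-0.8)   # x0 * exp(-p*t) at t = 1
s_exact = -1.0 * x_exact         # -t * x0 * exp(-p*t) at t = 1
```

Simultaneous integration of the state and sensitivity equations is exactly the strategy described above for the matrix case; here it is reduced to two coupled scalar equations.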

4.5 Control Analysis

4.5.1 Control Coefficients of Metabolic Systems

In this section, we focus on the flux control of metabolic systems. Metabolic control analysis (MCA) was formulated originally for extensive metabolic networks but can be extended to any problem that considers the transformations of elements or, more generally, the fluxes of any elements. It is a powerful quantitative and qualitative framework for studying not only the relationship between the properties of steady states and the properties of individual reactions, but also the control and regulation of biomolecular networks, e.g., metabolic, signaling, and genetic pathways. It quantifies both the control that molecular processes exert on system properties and how system properties result from interactions between individual components. It can also identify the


relative importance of each reaction for setting particular system properties. A more recent study has shown that MCA can be mapped directly onto classical control theory and that the two are equivalent.

The effect of a change in a process or quantity $P$ on a system property $S$ is expressed in terms of the coefficient

$$C_P^S = \left(\frac{P}{S}\frac{\Delta S}{\Delta P}\right)_{\Delta P \to 0}. \tag{4.57}$$

The prefactor $P/S$ is a normalization factor that makes the coefficient independent of units and of the magnitudes of $P$ and $S$. For example, when $P$ is the concentration of an enzyme and $S$ is the metabolic flux, (4.57) gives the ratio of the fractional change in flux $\Delta S$ to the fractional change in the enzyme concentration $\Delta P$. When considering an infinitesimal change in $P$, i.e., $\Delta P \to 0$, (4.57) can be written as

$$C_P^S = \frac{P}{S}\frac{\partial S}{\partial P}, \tag{4.58}$$

which can be further simplified to the logarithmic control coefficient

$$C_P^S = \frac{\partial \ln S}{\partial \ln P}. \tag{4.59}$$

Such coefficients come in several varieties, depending on the system variables involved: elasticity coefficients, control coefficients, and response coefficients. These coefficients can be divided into two distinct types, local and global. Elasticity coefficients are local coefficients pertaining to individual reactions; they can be calculated in any given state. Control coefficients and response coefficients, on the other hand, are global quantities.

An elasticity coefficient quantifies the sensitivity of a reaction rate to a change in a concentration or a parameter. It measures the direct effect of a specific change of a concentration or a parameter on a reaction velocity, while the rest of the network is kept fixed. The sensitivity of the reaction rate $v_k$ to a change in the concentration of a metabolite $S_i$ is given by the $\epsilon$-elasticity

$$\epsilon_{S_i}^{v_k} = \frac{S_i}{v_k}\frac{\partial v_k}{\partial S_i}. \tag{4.60}$$

Any sufficiently small perturbation of an individual reaction rate by a parameter change $p \to p + \Delta p$, i.e., $v_k \to v_k + \Delta v_k$, drives the system from an old steady state to a new one with $J' = J + \Delta J$ and $S' = S + \Delta S$, where $S(p)$ and $J = v(S(p), p)$ denote the steady-state concentration and the steady-state flux, respectively. In general, a flux is mathematically defined as the amount that flows through a unit area per unit time. In particular, metabolic flux refers to the rate of flow of metabolites along a metabolic pathway, or even through a single enzyme.


The calculation of the flux depends on a number of factors, including enzyme concentrations; concentrations of precursors, products, and intermediate metabolites; post-translational modification of enzymes; and the presence of metabolic activators or repressors. MCA and flux balance analysis (FBA) provide frameworks for understanding metabolic fluxes and their constraints. The flux and concentration control coefficients are defined as

$$C_{v_k}^{J_j} = \frac{v_k}{J_j}\frac{\partial J_j}{\partial v_k} \tag{4.61}$$

and

$$C_{v_k}^{S_i} = \frac{v_k}{S_i}\frac{\partial S_i}{\partial v_k}, \tag{4.62}$$

respectively, i.e., they quantify the control that a certain reaction $v_k$ exerts on the steady-state flux $J_j$ and on the steady-state concentration $S_i$, respectively. In addition to the elasticity and control coefficients, a third type of coefficient, the response coefficients, which express the direct dependence of the steady-state flux and concentration on parameters, are similarly defined as

$$R_{p_m}^{J_j} = \frac{p_m}{J_j}\frac{\partial J_j}{\partial p_m} \quad\text{and}\quad R_{p_m}^{S_i} = \frac{p_m}{S_i}\frac{\partial S_i}{\partial p_m}. \tag{4.63}$$
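A minimal numerical illustration of the flux control coefficients (4.61): consider a hypothetical two-step linear pathway $X_0 \to S \to$ with rates $v_1 = e_1(X_0 - S)$ and $v_2 = e_2 S$, whose steady-state flux is $J = e_1 e_2 X_0/(e_1 + e_2)$. The enzyme levels $e_1$ and $e_2$ play the role of the perturbed reaction rates; all names and values below are illustrative:

```python
def steady_flux(e1, e2, X0=1.0):
    """Steady-state flux of the toy pathway: v1 = v2 gives S = e1*X0/(e1+e2)."""
    S = e1 * X0 / (e1 + e2)
    return e2 * S

def control_coefficient(k, e1, e2, h=1e-6):
    """Flux control coefficient C^J_{e_k} = (e_k/J) dJ/de_k, cf. (4.61)."""
    J = steady_flux(e1, e2)
    if k == 1:
        dJ = (steady_flux(e1 + h, e2) - steady_flux(e1 - h, e2)) / (2 * h)
        return e1 * dJ / J
    dJ = (steady_flux(e1, e2 + h) - steady_flux(e1, e2 - h)) / (2 * h)
    return e2 * dJ / J

c1 = control_coefficient(1, e1=2.0, e2=3.0)  # analytically e2/(e1+e2) = 0.6
c2 = control_coefficient(2, e1=2.0, e2=3.0)  # analytically e1/(e1+e2) = 0.4
```

The two coefficients sum to 1, in agreement with the flux summation theorem discussed in the next subsection.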

4.5.2 Metabolic Control Theorems

A set of theorems has been developed for MCA. The first, the summation theorem, makes a statement about the total control over a flux or a steady-state concentration. The second, the connectivity theorem, relates the control coefficients to the elasticity coefficients. Together with the dependency information encoded in the stoichiometric matrix, these theorems contain enough information to calculate all control coefficients as functions of the elasticities.

The summation theorem states that metabolic fluxes are system properties and that the control of metabolic fluxes is shared by all reactions in the system. When a single reaction changes its control of the metabolic flux, this is compensated by changes in the control of the same flux by all other reactions:

$$\sum_{k=1}^{r} C_{v_k}^{J_j} = 1, \tag{4.64}$$

where $r$ is the number of reactions, i.e., the flux control coefficients of a metabolic network for one steady-state flux sum up to 1. This means that all enzymatic reactions share the control over this flux. For the concentration control coefficients, we have

$$\sum_{k=1}^{r} C_{v_k}^{S_i} = 0, \tag{4.65}$$

which means that the control coefficients of a metabolic network for one steady-state concentration are balanced. It follows from these summation theorems that the control coefficients are not independent of each other. If, for example, one coefficient increases, one or more of the other coefficients have to decrease. Thus, control coefficients are system properties and are defined in the context and constraints of the system. Note that the flux summation theorem does not restrict the flux control coefficients to the interval $[0, 1]$; some coefficients may be negative or exceed unity.

The connectivity theorems, on the other hand, characterize specific relations between elasticities and control coefficients. They are useful because they highlight the close relationship between the kinetic properties of individual reactions and the system properties of a pathway. Two basic sets of theorems exist, one for fluxes and another for concentrations. The first connectivity theorem relates the flux control coefficients and the $\epsilon$-elasticities in a sum of products of the two coefficients:

$$\sum_{k=1}^{r} C_{v_k}^{J_j}\, \epsilon_{S_i}^{v_k} = 0. \tag{4.66}$$

Note that the summation runs over all rates $v_k$ for a given metabolite concentration $S_i$ and a given flux $J_j$. An analogous theorem was derived for concentrations:

$$\sum_{k=1}^{r} C_{v_k}^{S_i}\, \epsilon_{S_i}^{v_k} = -1. \tag{4.67}$$

It connects the concentration control coefficients $C_{v_k}^{S_i}$ to the $\epsilon$-elasticities $\epsilon_{S_i}^{v_k}$. With these metabolic control theorems, we are able to investigate metabolic pathways or networks from both their global and local properties. In addition, we can determine control coefficients from elasticity coefficients. During the past decades, MCA, which was originally proposed by Kacser and Burns (Kacser and Burns 1973) and Heinrich and Rapoport (Heinrich and Rapoport 1974), has been used extensively for analyzing the regulatory behavior of various metabolic pathways. Reviews on MCA can be found in (Fell 1992, Wildermuth 2000, Moreno et al. 2008) and also in (Klipp et al. 2005).

4.6 Monotone Dynamical Systems

4.6.1 Notation

In biomolecular networks, consistent interactions are common, i.e., the synthesis rate of the $i$th component at time $t$, namely $f_i(x_\tau)$ in (4.1), monotonously

4.6 Monotone Dynamical Systems 4.6.1 Notation In biomolecular networks, consistent interactions are common, i.e., the synthesis rate of the ith component at t, namely, fi (xτ ) in (4.1), monotonously


increases (or decreases) as the concentration of the $j$th component at $t - \tau_{ij}$ monotonously increases. Here, $\tau_{ij}$ is the time delay from node $j$ to node $i$. Mathematically, the types of such interactions in the interaction graph can be defined as follows:

• An interaction from the $j$th node to the $i$th node is positive if

$$\frac{\partial f_i(x)}{\partial x_j} > 0 \quad\text{for all } x \in X, \tag{4.68}$$

and we set $s_{ij} = 1$.

• An interaction from the $j$th node to the $i$th node is negative if

$$\frac{\partial f_i(x)}{\partial x_j} < 0 \quad\text{for all } x \in X, \tag{4.69}$$

and we set $s_{ij} = -1$.

• There is no interaction from the $j$th node to the $i$th node if

$$\frac{\partial f_i(x)}{\partial x_j} = 0 \quad\text{for all } x \in X, \tag{4.70}$$

and we set $s_{ij} = 0$.

Here $X$ is the feasible region of the variables. Thus, $s_{ij} = 1$ (or $-1$) means that the $j$th component affects the $i$th component positively (or negatively) with time delay $\tau_{ij}$. Most elementary biochemical reactions satisfy this monotone condition. In this section, we assume that these consistent interactions, i.e., $s_{ij} = 1$, $-1$, or $0$, hold for all of the systems under study.

Next, we describe paths and loops in a network or an interaction graph.

•

A path in a molecular network is a sequence of vertices such that from each of its vertices or nodes, there is an edge or an interaction to the next vertex in the sequence.

A finite path always has a first vertex, called its start vertex, and a last vertex, called its end vertex. Both of them are called end or terminal vertices of the path. The other vertices in the path are internal vertices. •

A cycle or a loop is a path such that the starting vertex and the ending vertex are the same.

Note that the choice of starting vertex in a loop is arbitrary. A loop is called a self-feedback or simply a self-loop if an edge connects a vertex to itself. For a molecular network defined as an interaction graph, there are two types of loops or cycles that characterize the dynamics of the system, i.e., positive loops (or positive feedback loops) and negative loops (or negative feedback loops).


• If there is an even (including zero) number of negative interactions in a closed loop, the loop is called positive.

• If there is an odd number of negative interactions in a closed loop, the loop is called negative.

Clearly, the product of $s_{ij}$ over all interactions in a positive loop, i.e., $\prod_{i,j} s_{ij}$ over all $i$–$j$ interactions in the loop, is $+1$, whereas the product of $s_{ij}$ over all interactions in a negative loop is $-1$.

Consider the following input/output (I/O) system:

$$\frac{dx(t)}{dt} = f(x(t), u(t)), \quad y(t) = h(x(t)), \tag{4.71}$$

in which the state $x(t)$ evolves on some subset $X \subset \mathbb{R}^n$, and the input and output values $u(t)$ and $y(t)$ belong to subsets $U \subset \mathbb{R}^m$ and $Y \subset \mathbb{R}^p$, respectively. The maps $f: X \times U \to \mathbb{R}^n$ and $h: X \to Y$ are assumed to be continuously differentiable. An input is a signal $u: [0, \infty) \to U$ that is locally essentially bounded. By an ordered Euclidean space, we mean a Euclidean space $\mathbb{R}^q$, for some positive integer $q$, together with an order $\preceq$ induced by a positivity cone $K$, that is, $K \subseteq \mathbb{R}^q$ is a non-empty, closed, convex, pointed ($K \cap -K = \{0\}$), and solid ($K$ has a non-empty interior) cone. The order is then defined as follows:

• $x_2 \preceq x_1$ means that $x_1 - x_2 \in K$;

• $x_2 \prec x_1$ means that $x_2 \preceq x_1$ and $x_2 \neq x_1$;

• $x_2 \ll x_1$ means that $x_1 - x_2 \in \mathrm{int}(K)$.

Denote the solution of (4.71) with initial value $x(t_0) = x_0$ and input $u$ by $\phi(t; x_0, u)$. Given an order $\preceq$ on $X$, $U$, and $Y$, a monotone I/O system with respect to the order is a system (4.71) in which $h$ is assumed to be a monotone map, solutions do not blow up in finite time for each input (so that $x(t)$ and $y(t)$ are defined for all $t \ge 0$), and for all initial values $x_1, x_2 \in X$ and inputs $u_1, u_2 \in U$, the following property holds:

$$x_1 \preceq x_2 \;\&\; u_1 \preceq u_2 \;\Rightarrow\; \phi(t; x_1, u_1) \preceq \phi(t; x_2, u_2) \quad\text{for all } t \ge 0. \tag{4.72}$$

Furthermore, a monotone system is strongly monotone if the following stronger property holds:

$$x_1 \prec x_2 \;\&\; u_1 \preceq u_2 \;\Rightarrow\; \phi(t; x_1, u_1) \ll \phi(t; x_2, u_2) \quad\text{for all } t > 0. \tag{4.73}$$

The monotone I/O system can be viewed as a nonlinear dynamical system with control variables $u(t)$. Hence, the monotone dynamical system for (4.1) can be defined similarly by eliminating the control variable $u(t)$. Clearly, a monotone dynamical system or a monotone I/O system is an order-preserving system, i.e., its solutions preserve the order. A monotone dynamical system is also called a cooperative system. In contrast, a competitive system is defined based on the corresponding monotone dynamical system; namely, $dx(t)/dt = -f(x_\tau)$ is called a competitive system if $dx(t)/dt = f(x_\tau)$ is a cooperative system. A dynamical system whose structure is represented by an interaction graph is monotone if and only if every loop in the interaction graph has an even (including zero) number of negative interactions:

•

A molecular network is a monotone dynamical system if every loop in the network is positive.

We will show such a theoretical result in Chapters 6 and 7. Examples of monotone and non-monotone dynamical networks are shown in Figure 4.8.


Figure 4.8 Examples of monotone and non-monotone dynamical networks: (a) an example of a non-monotone dynamical network; (b) an example of a monotone dynamical network. Arrows and bar heads indicate positive and negative interactions, respectively
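The loop-sign criterion can be checked mechanically from a signed interaction graph. The sketch below enumerates the simple directed cycles of a small graph by depth-first search and tests whether every cycle sign product is $+1$; the example graphs are illustrative, not the networks of Figure 4.8:

```python
def cycle_signs(edges, n):
    """Sign products of all simple directed cycles; edges maps (j, i) -> s_ij."""
    adj = {}
    for (j, i), s in edges.items():
        adj.setdefault(j, []).append((i, s))
    signs = []
    def dfs(start, node, sign, visited):
        for nxt, s in adj.get(node, []):
            if nxt == start:
                signs.append(sign * s)        # closed a cycle back to its start
            elif nxt > start and nxt not in visited:
                dfs(start, nxt, sign * s, visited | {nxt})
    for v in range(n):                        # each cycle is counted exactly once,
        dfs(v, v, 1, {v})                     # starting from its smallest vertex
    return signs

def is_monotone(edges, n):
    return all(s == 1 for s in cycle_signs(edges, n))

# A three-node loop with two negative interactions is a positive loop:
positive_loop = {(0, 1): 1, (1, 2): -1, (2, 0): -1}
# Making the number of negative interactions odd destroys monotonicity:
negative_loop = {(0, 1): 1, (1, 2): -1, (2, 0): 1}
```

Here `is_monotone(positive_loop, 3)` holds while `is_monotone(negative_loop, 3)` does not, matching the classification of positive and negative loops given above.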

Although consistent interactions are common, monotonicity is still a very strong assumption to satisfy. Monotonicity is usually satisfied on subsystems or modules of a given network. Given the interaction graph of a biomolecular network with monotonicity, it has been proven that its trajectories can only converge to stable equilibria for almost all initial states. In other words, there are neither stable oscillations nor other dynamical attractors, such as quasiperiodic and strange attractors, for monotone dynamical systems (Kobayashi et al. 2003, Angeli and Sontag 2004a). However, it is very difficult to determine the number of equilibria and their stability when there is no other information besides monotonicity.

4.6.2 Decomposition of Monotone Systems

Depending on its structure, a molecular network may be decomposed into two monotone dynamical systems (Angeli et al. 2004). Take the simple monotone nonlinear system shown in Figure 4.9 (a) as an example. The system is described by the following ODEs


$$\dot x_1 = f_1(x_2) - d_1 x_1, \tag{4.74}$$

$$\dot x_2 = f_2(x_3) - d_2 x_2, \tag{4.75}$$

$$\cdots$$

$$\dot x_n = f_n(x_1) - d_n x_n, \tag{4.76}$$

where $df_i/dx_{i+1} > 0$ for $i = 1, \ldots, n-1$ and $df_n/dx_1 > 0$.


Figure 4.9 Decomposition of a closed feedback network into an open network with an input and an output: (a) a closed monotone cyclic network; (b) decomposition of (a) into an open network with input u and output y = x1

Replacing x_1 in f_n(x_1) of (4.76) by an input u forms a new parameterized system

    \dot{x}_1 = f_1(x_2) - d_1 x_1,    (4.77)
    \dot{x}_2 = f_2(x_3) - d_2 x_2,    (4.78)
        \vdots
    \dot{x}_n = f_n(u) - d_n x_n,    (4.79)

with output y = h(x) = x_1, as shown in Figure 4.9 (b). By setting u(t) = y(t) = h(x(t)), (4.77)–(4.79) return to the original system (4.74)–(4.76). Such a decomposition technique can be used to detect multistability and bifurcations in a large class of monotone biological systems, i.e., to derive information such as the number of equilibria and their stability properties (Angeli et al. 2004, Angeli and Sontag 2004a). A system is said to admit a non-degenerate input-to-state (I/S) static characteristic k_X(·) : U → X if for each constant input u ∈ U there exists a unique globally asymptotically stable equilibrium k_X(u) and det D_x f(k_X(u), u) ≠ 0. For systems with a non-degenerate I/S characteristic, the input/output (I/O) static characteristic is defined as the composition k_Y = h ∘ k_X. For example, the I/O characteristic of (4.77)–(4.79) is

    k_Y(u) = \left( \frac{f_1}{d_1} \circ \frac{f_2}{d_2} \circ \cdots \circ \frac{f_n}{d_n} \right)(u).    (4.80)

4.6 Monotone Dynamical Systems


A map k : U → U is said to have non-degenerate fixed points if for every u ∈ U with k(u) = u, k′(u) exists and k′(u) ≠ 1. Here ∘ denotes function composition, i.e., (g ∘ f)(x) = g(f(x)) for functions f and g. The fixed points of the I/O characteristic play a central role in determining the number of equilibria and their stability properties; specifically, the following theorem applies (Angeli et al. 2004, Angeli and Sontag 2004a).

Theorem 4.6. Consider a monotone single-input single-output (SISO) (m = p = 1, with standard order) system endowed with non-degenerate I/S and I/O static characteristics:

    \dot{x} = f(x, u),    (4.81)
    y = h(x).    (4.82)

Consider the unitary positive feedback interconnection u = y. Then the equilibria are in one-to-one correspondence with the fixed points of the I/O characteristic. Moreover, if k_Y has non-degenerate fixed points, the closed-loop system is strongly monotone, and all trajectories are bounded, then, for almost all initial conditions, solutions converge to the set of equilibria of (4.81)–(4.82) corresponding to inputs for which k_Y′(u) < 1.

Figure 4.10 Schematic diagram of Theorem 4.6

The schematic diagram of Theorem 4.6 is shown in Figure 4.10. According to the theorem, the points I and III are stable and II is unstable. Consider the following simple example described in (Angeli and Sontag 2004a):

    \dot{x}_1 = \frac{\alpha_1}{1 + x_2^{\beta}} - x_1,    (4.83)
    \dot{x}_2 = \frac{\alpha_2}{1 + x_1^{\gamma}} - x_2,    (4.84)


where α_1, α_2, β, and γ are positive parameters. It is the unitary feedback closure of

    \dot{x}_1 = \frac{\alpha_1}{1 + u^{\beta}} - x_1,    (4.85)
    \dot{x}_2 = \frac{\alpha_2}{1 + x_1^{\gamma}} - x_2,    (4.86)
    y = x_2.    (4.87)

It is easy to verify that (4.85)–(4.87) define a monotone dynamical system, since the loop is positive. The system is endowed with the static I/S characteristic

    k_X(u) = \left[ \frac{\alpha_1}{1 + u^{\beta}}, \; \frac{\alpha_2 (1 + u^{\beta})^{\gamma}}{(1 + u^{\beta})^{\gamma} + \alpha_1^{\gamma}} \right]^T.    (4.88)

The I/O static characteristic and the phase portrait for α_1 = 1.3, α_2 = 1, β = 3, and γ = 10 are shown in Figure 4.11.

Figure 4.11 I/O characteristic and phase portrait of (4.83)–(4.84). The horizontal axis is u in (a) and x_1 in (b); the vertical axis is x_2 (from (Angeli and Sontag 2004a))
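The fixed-point picture of Figure 4.11 can be reproduced numerically. The sketch below (grid sizes and tolerances are assumptions, not taken from the original paper) locates the fixed points of k_Y for the stated parameter values and classifies each one via the slope test k_Y′(u) < 1 of Theorem 4.6.

```python
import numpy as np

a1, a2, beta, gamma = 1.3, 1.0, 3.0, 10.0

def k_Y(u):
    # I/O characteristic: second component of k_X(u) in (4.88)
    w = (1.0 + u**beta) ** gamma
    return a2 * w / (w + a1**gamma)

# locate fixed points of k_Y(u) = u by sign changes of k_Y(u) - u on a fine grid
u = np.linspace(0.0, 1.0, 200001)
r = k_Y(u) - u
idx = np.where(np.sign(r[:-1]) * np.sign(r[1:]) < 0)[0]
fixed = u[idx]
print(len(fixed))                     # three fixed points -> bistability
h = 1e-6
for uf in fixed:
    slope = (k_Y(uf + h) - k_Y(uf - h)) / (2 * h)   # central difference for k_Y'
    print(round(float(uf), 3), "stable" if slope < 1 else "unstable")
```

The outer two fixed points satisfy k_Y′(u) < 1 (stable closed-loop equilibria) while the middle one has k_Y′(u) > 1 (unstable), matching the points I, II, III of Figure 4.10.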

As indicated above, monotonicity is a very strong assumption, which is generally satisfied only by subsystems of a given network because of the existence of negative loops. Therefore, the general results on monotone systems are limited to a small class of biomolecular systems. However, a non-monotone system can often be decomposed into two or more monotone subsystems or modules and the results for monotone systems can be applied to these monotone subsystems. Then the properties of the non-monotone system can be obtained by combining the properties the monotone subsystems. Consider the stability of the SISO monotone dynamical systems connected in feedback, as shown in Figure 4.12. The resulting system may not be a monotone dynamical system. For such a system, a small–gain theorem for the


feedback interconnection of a system with a monotonically increasing input–output static gain and a system with a monotonically decreasing input–output static gain has been obtained (Angeli and Sontag 2003).

Theorem 4.7. Consider the following interconnection of two SISO dynamical systems:

    \Sigma_1: \dot{x} = f_x(x, w), \quad y = h_x(x),    (4.89)
    \Sigma_2: \dot{z} = f_z(z, y), \quad w = h_z(z),    (4.90)

with U_x = Y_z and U_z = Y_x, where U and Y denote the input and output sets. Suppose that

1. the first subnetwork Σ_1 is monotone when its input w, as well as its output y, is ordered according to the standard order induced by the positive real semi-axis;
2. the second subnetwork Σ_2 is monotone when its input y is ordered according to the standard order induced by the positive real semi-axis and its output w is ordered by the negative real semi-axis;
3. the respective static input-state characteristics k_x(·) and k_z(·) exist, and therefore the static input–output characteristics K_y(·) and K_w(·) also exist and are monotonically increasing and decreasing, respectively;
4. every solution of the feedback closure of (4.89)–(4.90), i.e., the network Σ defined by (4.92), is bounded.

Then (4.92) has a globally attractive equilibrium provided that the following scalar discrete-time dynamical system, evolving in U_x,

    w_{k+1} = (K_w \circ K_y)(w_k)    (4.91)

has a unique globally attractive fixed point.

Figure 4.12 Feedback closure of the two SISO monotone subsystems Σ1 and Σ2 , i.e., the first and second equations of the network Σ defined by (4.92)

Here, the feedback closure network of the two subnetworks Σi (i = 1, 2) has the form


    \Sigma: \quad \dot{x} = f_x(x, h_z(z)), \quad \dot{z} = f_z(z, h_x(x)).    (4.92)

Figure 4.13 Two types of input–output characteristics in the (w, y) plane: (a) convergence to a stable fixed point; (b) convergence to a stable periodic orbit (from (Wang et al. 2006a))

The graphical interpretation of (4.91) of Theorem 4.7 is shown in Figure 4.13 (a). In addition to the convergence to a stable fixed point, (4.91) may also converge to a stable periodic orbit, as shown in Figure 4.13 (b). When an oscillation occurs in (4.91), stable oscillations may also emerge in (4.92) when appropriate time delays are introduced. See (Wang et al. 2006a, Wang et al. 2007, Angeli and Sontag 2004b) for more details. In particular, for a class of cyclic delay systems, a general result was obtained as follows (Enciso 2004): Theorem 4.8. Consider the cyclic nonlinear system

    \dot{x}_i = g_i(x_{i+1}) - d_i x_i, \quad i = 1, ..., n-1,    (4.93)
    \dot{x}_n = g_n(x_1(t - \tau)) - d_n x_n,    (4.94)

where

    \delta_i g_i'(x) \ge 0, \quad \delta_i \in \{-1, 1\}, \quad \delta_1 \cdot \delta_2 \cdots \delta_n = -1,    (4.95)

and g_i(x) has the Hill function form, i.e.,

    g_i(x) = \frac{a x^m}{b + x^m} + c \quad \text{or} \quad g_i(x) = \frac{a}{b + x^m} + c.    (4.96)

Then exactly one of the following statements holds:

1. |g′(ū)| ≤ 1, the discrete-time system (4.97) is globally attracted to a unique fixed point, and the continuous-time system (4.93)–(4.94) is also globally attracted to a unique equilibrium, for all values of the delay τ.
2. |g′(ū)| > 1, the discrete-time system (4.97) has non-constant periodic solutions, and the continuous-time system (4.93)–(4.94) has non-constant periodic solutions for some values of τ.

Here ū is the unique fixed point of the one-dimensional map

    u_{k+1} = g(u_k)    (4.97)

with

    g(u) = \left( \frac{g_1}{d_1} \circ \frac{g_2}{d_2} \circ \cdots \circ \frac{g_n}{d_n} \right)(u).    (4.98)

The assumption δ_1 · δ_2 ⋯ δ_n = −1 means that the system contains a negative feedback loop. In the positive feedback case, i.e., δ_1 · δ_2 ⋯ δ_n = 1, system (4.93)–(4.94) falls into the framework of positive feedback loop systems and is monotone. A large number of results are known for this case; perhaps the most important one is that the generic solution converges towards an equilibrium (Kobayashi et al. 2003, Angeli and Sontag 2004a). The same result holds if, instead of assuming a Hill function form, every nonlinear function g_i(x) is assumed to be of one of the forms ±a tan⁻¹(bx) or ±a tanh(bx), which are often used in neural networks. An example is shown in Figures 4.14 and 4.15.

Figure 4.14 The first case of Theorem 4.8. (a) System (4.93)–(4.94) is globally attracted to a unique equilibrium. (b) System (4.97) is globally attracted to a unique fixed point. (c) The induced decreasing function g(x) and the increasing function g²(x) = g(g(x)). The parameter values and functions are n = 3, g_1 = g_2 = g_3 = tan⁻¹(x), d_1 = 0.11, d_2 = 2.5, d_3 = 4, and τ = 80. Here, g′(ū) = 1/1.1 < 1 (from (Enciso 2004))


Figure 4.15 The second case of Theorem 4.8. (a) System (4.93)–(4.94) converges to non-constant periodic solutions. (b) System (4.97) converges to periodic 2-cycles. (c) The induced decreasing function g(x) and the increasing function g²(x) = g(g(x)). In this case, g′(ū) = 1/0.9 > 1 and g²(x) = x has several solutions. The parameter values and functions are the same as those used in Figure 4.14, except that d_1 = 0.09 (from (Enciso 2004))
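The two regimes of Theorem 4.8 can be reproduced for the one-dimensional map (4.97)–(4.98). In the sketch below the negative sign required by δ_1δ_2δ_3 = −1 is placed, by assumption, on the innermost element of the composition (the figures do not specify where it sits); with d = (0.11, 2.5, 4) the slope at the fixed point ū = 0 is −1/(d_1d_2d_3) = −1/1.1 and iterates contract to ū, while d_1 = 0.09 gives slope −1/0.9 and iterates settle onto a 2-cycle, as in Figures 4.14 and 4.15.

```python
import math

def make_g(d1, d2, d3):
    # g = (1/d1) g1 ∘ (1/d2) g2 ∘ (1/d3) g3 with g1 = g2 = g3 = atan,
    # the loop's negative sign carried (by assumption) by the innermost element
    return lambda u: (1/d1) * math.atan((1/d2) * math.atan((1/d3) * (-math.atan(u))))

def iterate(g, u0=1.0, steps=2000):
    u = u0
    for _ in range(steps):
        u = g(u)
    return u

g_stable = make_g(0.11, 2.5, 4.0)   # |g'(0)| = 1/1.1 < 1: case 1 of Theorem 4.8
u = iterate(g_stable)
assert abs(u) < 1e-8                # iterates contract to the unique fixed point 0

g_cycle = make_g(0.09, 2.5, 4.0)    # |g'(0)| = 1/0.9 > 1: case 2 of Theorem 4.8
u = iterate(g_cycle)
assert abs(u) > 0.1                               # trapped away from the fixed point
assert abs(g_cycle(g_cycle(u)) - u) < 1e-9        # on a period-2 orbit
print(round(u, 4), round(g_cycle(u), 4))          # the cycle is symmetric: g(u) ≈ -u
```

Since each arctan factor has slope at most 1, the stable case is a global contraction; in the unstable case the odd, decreasing map forces the iterates onto a symmetric 2-cycle, mirroring panel (b) of Figure 4.15.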

5 Stability Analysis of Genetic Networks in Lur’e Form

In this chapter, we present a gene regulatory network model with a special structure described by differential equations, and we study its stability. As mentioned in previous chapters, a biomolecular system is generally characterized by significant time delays in gene regulation, particularly in the transcription, translation, diffusion, and translocation processes. All cellular components also exhibit intracellular noise, due to random births and deaths of individual molecules, and extracellular noise, due to environmental fluctuations. Such time delays and stochastic noise may affect the dynamics of the entire biomolecular system both qualitatively and quantitatively. In this chapter, in addition to the basic case, we also consider genetic networks with time delays and stochastic perturbations.

5.1 A Genetic Network Model

To evaluate the role of negative feedback in the stability of genetic networks, a simple genetic network model was studied in (Becskei and Serrano 2000). In (Chen and Aihara 2002a), the authors presented a general gene regulatory network model as follows:

    \dot{m}_i(t) = -a_i m_i(t) + b_i(p_1(t), p_2(t), ..., p_n(t)),    (5.1)
    \dot{p}_i(t) = -c_i p_i(t) + d_i m_i(t),    (5.2)

where m_i(t) ∈ R and p_i(t) ∈ R (i = 1, 2, ..., n) are the concentrations of the mRNA and the protein of the ith node. Note that (5.1)–(5.2) are the same as (3.13)–(3.14), except that the translation process (5.2) is expressed in a simple linear form. In this network, each node or gene has one output but possibly multiple inputs. A directed edge is drawn from node j to node i if the TF or protein j regulates gene i. In (5.1)–(5.2), a_i and c_i are the degradation rates of the mRNA and the protein, d_i is a constant, and b_i is the regulatory function of the ith gene, which is a general nonlinear function of the variables


p_1(t), ..., p_n(t). Based on (4.4), the stability of (5.1)–(5.2) was studied in (Chen and Aihara 2002a) by local stability analysis and characteristic equation analysis. Although characteristic equation analysis can provide an accurate local stability region, it is difficult to verify, especially for large-scale genetic networks with time delays. As is well known, genetic networks (or gene networks) are usually large-scale even in a simple organism. Gene activity is tightly controlled in a cell, and the gene regulation function b_i plays a key role in the nonlinear dynamics. In general, the form of b_i may be very complicated, depending on all the biochemical reactions involved in the regulation. Typical regulatory logics include AND-like gates and OR-like gates (Yuh et al. 1998, Buchler et al. 2003, Setty et al. 2003). Here, we present a model of genetic networks in which different TFs act additively to regulate the same gene (Li et al. 2006a). In other words, the regulatory function has the form b_i = \sum_j b_{ij}(p_j(t)), which is called a SUM regulatory logic (Yuh et al. 1998, Kalir et al. 2005). The function b_{ij}(p_j(t)) is usually expressed as a monotonic function of the Hill form:

    b_{ij}(p_j(t)) = \begin{cases} \alpha_{ij} \dfrac{(p_j(t)/\beta)^H}{1 + (p_j(t)/\beta)^H}, & \text{if TF } j \text{ is an activator of gene } i, \\[2mm] \alpha_{ij} \dfrac{1}{1 + (p_j(t)/\beta)^H}, & \text{if TF } j \text{ is a repressor of gene } i, \end{cases}    (5.3)

where H is the Hill coefficient, β is a positive constant, and α_{ij} is the dimensionless transcriptional rate of TF j to gene i, which is a bounded constant. Note that

    \frac{(p_j(t)/\beta)^H}{1 + (p_j(t)/\beta)^H} = 1 - \frac{1}{1 + (p_j(t)/\beta)^H}.    (5.4)

Therefore, (5.1)–(5.2) can be rewritten as follows:

    \dot{m}_i(t) = -a_i m_i(t) + \sum_j G_{ij} g(p_j(t)) + l_i,    (5.5)
    \dot{p}_i(t) = -c_i p_i(t) + d_i m_i(t),    (5.6)

where

    g(x) = \frac{(x/\beta)^H}{1 + (x/\beta)^H}    (5.7)

is a monotonically increasing function and G = (G_{ij}) ∈ R^{n×n} is the coupling matrix of the genetic network, defined as follows:

• if there is no link from node j to node i, G_{ij} = 0;
• if TF j is an activator of gene i, G_{ij} = α_{ij};
• if TF j is a repressor of gene i, G_{ij} = −α_{ij}.

Thus, the matrix G = (G_{ij}) defines the coupling topology, directions, and transcriptional rates of the genetic network. l_i is defined as a basal rate

    l_i = \sum_{j \in V_i^1} \alpha_{ij},    (5.8)

in which V_i^1 is the set of all the repressors of gene i. In compact matrix form, (5.5)–(5.6) can be rewritten as follows:

    \dot{m}(t) = A m(t) + G g(p(t)) + l,    (5.9)
    \dot{p}(t) = C p(t) + D m(t),    (5.10)

where m(t) = [m_1(t), ..., m_n(t)]^T, p(t) = [p_1(t), ..., p_n(t)]^T, l = [l_1, ..., l_n]^T, A = diag(−a_1, ..., −a_n), C = diag(−c_1, ..., −c_n), D = diag(d_1, ..., d_n), and g(p(t)) = [g_1(p_1(t)), ..., g_n(p_n(t))]^T. Note that in (5.9)–(5.10) we could include multiple nonlinear vector regulatory functions, but for simplicity we consider only one here. (m*, p*) is said to be an equilibrium of (5.9)–(5.10) if it satisfies Am* + Gg(p*) + l = 0 and Cp* + Dm* = 0. For convenience, we always shift the equilibrium (m*, p*) to the origin by letting x(t) = m(t) − m* and y(t) = p(t) − p*. Thus, we have

    \dot{x}(t) = A x(t) + G f(y(t)),    (5.11)
    \dot{y}(t) = C y(t) + D x(t),    (5.12)

where f(y(t)) = g(y(t) + p*) − g(p*). Since g is a monotonically increasing nonlinear function with saturation, for all a, b ∈ R with a ≠ b it satisfies

    0 \le \frac{g(a) - g(b)}{a - b} \le k.    (5.13)

When g is a differentiable function, the above inequality is equivalent to 0 ≤ dg(a)/da ≤ k. From the relationship between f(·) and g(·), f(·) satisfies the sector condition 0 ≤ f(a)/a ≤ k, or equivalently

    f(a)(f(a) - ka) \le 0.    (5.14)
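Both the SUM decomposition (5.5) and the sector bound (5.13) can be spot-checked numerically. The toy two-gene network below uses parameter values chosen purely for illustration (H = 2, β = 1, and hypothetical rates α): it verifies that the Hill-form regulation b_i equals Σ_j G_ij g(p_j) + l_i exactly, and that difference quotients of g stay in the sector [0, k] with k the maximal slope of g.

```python
import numpy as np

H, beta = 2.0, 1.0
def g(x):                      # monotone Hill nonlinearity, as in (5.7)
    return (x / beta)**H / (1.0 + (x / beta)**H)

# toy network: TF 1 activates gene 0 (alpha = 2.0), TF 0 represses gene 1 (alpha = 1.5)
n = 2
G = np.array([[0.0,  2.0],
              [-1.5, 0.0]])    # coupling matrix: +alpha for activators, -alpha for repressors
l = np.array([0.0, 1.5])       # basal rates: sum of alpha over repressors, cf. (5.8)

def b(i, p):                   # SUM regulatory logic built directly from the Hill forms (5.3)
    if i == 0:                 # activation of gene 0 by TF 1
        return 2.0 * (p[1] / beta)**H / (1.0 + (p[1] / beta)**H)
    return 1.5 / (1.0 + (p[0] / beta)**H)   # repression of gene 1 by TF 0

rng = np.random.default_rng(0)
for p in rng.uniform(0.0, 5.0, size=(100, n)):
    for i in range(n):
        assert abs(b(i, p) - (G[i] @ g(p) + l[i])) < 1e-12  # decomposition (5.5)

# sector bound (5.13): difference quotients of g lie in [0, k], k = max slope
xs = np.linspace(0.0, 10.0, 100001)
k = float(np.max(np.gradient(g(xs), xs)))
a, c = rng.uniform(0.0, 10.0, 1000), rng.uniform(0.0, 10.0, 1000)
q = (g(a) - g(c)) / (a - c)
assert np.all(q >= 0.0) and np.all(q <= k + 1e-6)
print(round(k, 4))             # the sector gain used in (5.14)
```

The repressor identity (5.4) is what makes the decomposition exact: the repressive Hill term equals α − α·g(p), so the constant part migrates into l_i and the sign flips into G_ij.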

Note that a Lur'e system is a linear dynamical system interconnected by feedback with a static nonlinearity f(·) that satisfies a sector condition such as (5.14) (Vidyasagar 1993). Hence, the genetic network (5.11)–(5.12) can be viewed as a type of Lur'e system and can be investigated by using the well-developed theory of Lur'e systems. Next, let us return to the motivations for presenting this genetic network model. When modeling genetic networks (or many other systems), "we must accept that there are no unique, exact mathematical descriptions of processes in nature. We have to search for approximations that capture all aspects of interest as accurately as feasible and at the same time allow us to gain insight from their analysis" (Voit 2000). The primary reasons for proposing this model are the following (Li et al. 2006a, Li et al. 2007a).

1. Such a SUM logic does exist in many natural genetic networks.


2. A genetic network with the SUM regulatory function can be implemented experimentally, which is an advantage from the viewpoint of synthetic biology.
3. Many well-known genetic systems in the literature can be described in (or transformed into) the form of the model.
4. Even if the regulatory functions in real genetic networks are not exactly of the form in the model, the model can serve as a good approximation of the real regulatory networks.
5. The model is well suited to analysis within the framework of control theory.

In the following sections, we analyze the stability of the genetic network model. Although system (5.11)–(5.12) was derived from the genetic network (5.1)–(5.2), the derivation is also applicable to genetic networks with time delays and stochastic perturbations. When analyzing the stability of an equilibrium, studying system (5.5)–(5.6) is equivalent to studying (5.11)–(5.12); in the following sections we study (5.11)–(5.12) directly. The stability analysis of the genetic networks is based on the Lyapunov method and the Lur'e system approach, and the results are presented in the form of LMIs (Boyd et al. 1994), which can be verified efficiently by convex optimization techniques, e.g., the interior point method (Boyd et al. 1994), or by software packages, e.g., the MATLAB LMI Toolbox.

5.2 Stability Analysis of Genetic Networks Without Noise

In this section, we analyze the global stability of the genetic network (5.11)–(5.12) by using the Lyapunov function method. The main results are based on (Li et al. 2006a). The sufficient condition is summarized in the following theorem.

Theorem 5.1. If there exist matrices P_{11}, P_{22}, P_{12}, and Λ = diag(λ_1, ..., λ_n) > 0 such that the following LMIs hold:

    M_1 = \begin{bmatrix} 2P_{11}A + P_{12}D + DP_{12}^T & DP_{22} + AP_{12} + P_{12}C & P_{11}G \\ P_{22}D + P_{12}^T A + CP_{12}^T & 2P_{22}C & P_{12}^T G + k\Lambda \\ G^T P_{11} & G^T P_{12} + k\Lambda & -2\Lambda \end{bmatrix} < 0,

    P = \begin{bmatrix} P_{11} & P_{12} \\ P_{12}^T & P_{22} \end{bmatrix} > 0,    (5.15)

then the origin of the genetic network (5.11)–(5.12) is the unique equilibrium point and is globally asymptotically stable.

Proof (Li et al. 2006a): Consider the following Lyapunov function

    V(x(t), y(t)) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}^T P \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}.    (5.16)

By calculating the time derivative of V along (5.11)–(5.12), we obtain

    \dot{V}(x(t), y(t)) = 2x^T(t)P_{11}Ax(t) + 2y^T(t)P_{12}^T Ax(t) + 2x^T(t)P_{11}Gf(y(t)) + 2y^T(t)P_{12}^T Gf(y(t)) + 2x^T(t)P_{12}Dx(t) + 2y^T(t)P_{22}Dx(t) + 2x^T(t)P_{12}Cy(t) + 2y^T(t)P_{22}Cy(t)
    \le 2x^T(t)P_{11}Ax(t) + 2y^T(t)P_{12}^T Ax(t) + 2x^T(t)P_{11}Gf(y(t)) + 2y^T(t)P_{12}^T Gf(y(t)) + 2x^T(t)P_{12}Dx(t) + 2y^T(t)P_{22}Dx(t) + 2x^T(t)P_{12}Cy(t) + 2y^T(t)P_{22}Cy(t) - 2\sum_{i=1}^{n} \lambda_i f(y_i(t))[f(y_i(t)) - k y_i(t)]
    = \xi^T(t) M_1 \xi(t) \le 0,    (5.17)

where ξ(t) = [x^T(t), y^T(t), f^T(y(t))]^T. From the above analysis, \dot{V}(t) = 0 if and only if x(t) = 0 and y(t) = 0, and \dot{V}(t) < 0 for all other (x(t), y(t)). Hence, the origin of the genetic network (5.11)–(5.12) is globally asymptotically stable. Under condition (5.15), the uniqueness of the equilibrium can be proven by contradiction (see, for example, (Arik 2002)). In fact, if there were another equilibrium point (m̄, p̄) different from (m*, p*), we could also shift (m̄, p̄) to the origin, and by the same analysis as above the origin would again be globally asymptotically stable under condition (5.15). Note that condition (5.15) is independent of the equilibrium. Hence, we would have more than one globally asymptotically stable equilibrium, which is impossible. Therefore, (5.9)–(5.10) has a unique equilibrium; in other words, under condition (5.15) the origin is the unique equilibrium of (5.11)–(5.12). □

In (5.15), the constant matrices A, C, D, and G and the constant k come from the model (5.11)–(5.12), and all components of the symmetric matrices M_1 and P in (5.15) are linear functions of the matrix variables P_{11}, P_{22}, P_{12}, and Λ; hence the conditions in (5.15) are LMIs, which can easily be verified numerically.

In genetic networks, time delays generally exist in the transcription, translation, and translocation processes. Next, we consider the genetic network with time delays, described by

    \dot{x}(t) = A x(t) + G f(y(t - \tau_1(t))),    (5.18)
    \dot{y}(t) = C y(t) + D x(t - \tau_2(t)),    (5.19)

where τ_1(t) > 0 and τ_2(t) > 0 are inter- and intra-node time-varying delays. We assume that \dot{\tau}_1(t) \le d_1 < 1 and \dot{\tau}_2(t) \le d_2 < 1. This model can be derived


from a delayed analogue of system (5.1)–(5.2) by the same manipulation as in the previous section. The theoretical result is summarized in the following theorem (Li et al. 2006a):

Theorem 5.2. If there exist matrices P_{11}, P_{22}, P_{12}, Q > 0, R > 0, and Λ = diag(λ_1, ..., λ_n) > 0 such that the following LMIs hold:

    M_2 = \begin{bmatrix} 2P_{11}A + R & P_{12}C + AP_{12} & P_{12}D & 0 & P_{11}G \\ P_{12}^T A + CP_{12}^T & 2P_{22}C & P_{22}D & k\Lambda & P_{12}^T G \\ DP_{12}^T & DP_{22} & -(1-d_2)R & 0 & 0 \\ 0 & k\Lambda & 0 & Q - 2\Lambda & 0 \\ G^T P_{11} & G^T P_{12} & 0 & 0 & -(1-d_1)Q \end{bmatrix} < 0,

    P = \begin{bmatrix} P_{11} & P_{12} \\ P_{12}^T & P_{22} \end{bmatrix} > 0,    (5.20)

then the origin of the genetic network (5.18)–(5.19) is the unique equilibrium point and is globally asymptotically stable.

Proof (Li et al. 2006a): Construct a Lyapunov–Krasovskii functional as follows:

    V(x(t), y(t), t) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}^T P \begin{bmatrix} x(t) \\ y(t) \end{bmatrix} + \int_{t-\tau_1(t)}^{t} f^T(y(\mu)) Q f(y(\mu)) \, d\mu + \int_{t-\tau_2(t)}^{t} x^T(\mu) R x(\mu) \, d\mu.    (5.21)

Calculating the time derivative of V(x(t), y(t), t), we have

    \dot{V}(x(t), y(t), t) = 2x^T(t)P_{11}Ax(t) + 2y^T(t)P_{12}^T Ax(t) + 2x^T(t)P_{11}Gf(y(t-\tau_1(t))) + 2y^T(t)P_{12}^T Gf(y(t-\tau_1(t))) + 2x^T(t)P_{12}Cy(t) + 2y^T(t)P_{22}Cy(t) + 2x^T(t)P_{12}Dx(t-\tau_2(t)) + 2y^T(t)P_{22}Dx(t-\tau_2(t)) + f^T(y(t))Qf(y(t)) - (1-\dot{\tau}_1(t))f^T(y(t-\tau_1(t)))Qf(y(t-\tau_1(t))) + x^T(t)Rx(t) - (1-\dot{\tau}_2(t))x^T(t-\tau_2(t))Rx(t-\tau_2(t)).    (5.22)

Considering \dot{\tau}_1(t) \le d_1 < 1, \dot{\tau}_2(t) \le d_2 < 1, and

    -2\sum_{i=1}^{n} \lambda_i f(y_i(t))[f(y_i(t)) - k y_i(t)] \ge 0,    (5.23)

we have

    \dot{V}(x(t), y(t), t) \le \xi^T(t) M_2 \xi(t) < 0,    (5.24)

where ξ(t) = [x^T(t), y^T(t), x^T(t-\tau_2(t)), f^T(y(t)), f^T(y(t-\tau_1(t)))]^T. It follows from the Lyapunov–Krasovskii theorem (Kolmanovskii and Myshkis 1999)


that the delayed genetic network (5.18)–(5.19) is globally asymptotically stable. By a proof similar to that of Theorem 5.1, the origin can be proven to be the unique equilibrium. □
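Before turning to stochastic perturbations, condition (5.15) of Theorem 5.1 can be checked concretely: for a given candidate certificate, the LMI reduces to an ordinary eigenvalue test. The scalar toy network below (a = c = d = 1, G = 0.1, k = 1, and the candidate P = I, Λ = 1 are all assumed values, not from the book) makes M_1 negative definite, and a forward-Euler run with a sector nonlinearity confirms the certified decay.

```python
import numpy as np

# scalar model (5.11)-(5.12): A = -a, C = -c, D = d, sector gain k
a = c = d = 1.0
Gc = 0.1
k = 1.0

# candidate certificate: P11 = P22 = 1, P12 = 0, Lambda = 1
P11, P22, P12, lam = 1.0, 1.0, 0.0, 1.0
M1 = np.array([
    [2*P11*(-a) + 2*P12*d,        d*P22 + (-a)*P12 + P12*(-c), P11*Gc],
    [d*P22 + (-a)*P12 + P12*(-c), 2*P22*(-c),                  P12*Gc + k*lam],
    [P11*Gc,                      P12*Gc + k*lam,              -2*lam],
])
assert np.max(np.linalg.eigvalsh(M1)) < 0                      # M1 < 0: (5.15) holds
P = np.array([[P11, P12], [P12, P22]])
assert np.min(np.linalg.eigvalsh(P)) > 0                       # P > 0

# forward-Euler check with f in the sector [0, k]
f = lambda y: y / (1.0 + abs(y))
x, y, dt = 2.0, -3.0, 0.01
for _ in range(20000):
    x, y = x + dt*(-a*x + Gc*f(y)), y + dt*(-c*y + d*x)
assert x**2 + y**2 < 1e-10                                     # trajectory decays to the origin
print("LMI certificate feasible; trajectory converged")
```

In higher dimensions one would search for P and Λ with a semidefinite-programming solver rather than guess them, exactly as the remark after Theorem 5.1 suggests.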

5.3 Stochastic Stability of Gene Regulatory Networks

Gene regulation is an intrinsically noisy process, subject to intracellular and extracellular noise perturbations and environmental fluctuations. Such noise can affect the dynamics of genetic networks. Since we know very little about how noise acts on genetic networks, one of the simplest ways to incorporate random effects is to assume that certain fluctuations randomly perturb the genetic network in an additive manner. In this section, we study genetic networks with random perturbations based on stochastic differential equation models. This section is mainly based on parts of (Li et al. 2006a) and (Li et al. 2007a). In the following, λ_max(P) denotes the maximum eigenvalue of a square matrix P, L²[0, ∞) is the space of square-integrable vector functions over [0, ∞), |·| stands for the Euclidean vector norm, and ‖·‖₂ stands for the usual L²[0, ∞) norm.

5.3.1 Mean-square Stability

We consider genetic networks with noise perturbations of the following form:

    \dot{x}(t) = A x(t) + G f(y(t)) + \sigma(y(t)) n(t),    (5.25)
    \dot{y}(t) = C y(t) + D x(t),    (5.26)

where n(t) = [n_1(t), ..., n_l(t)]^T, with each n_i(t) a scalar white Gaussian noise process with zero mean, and n_i(t) independent of n_j(t) for all i ≠ j. σ(y(t)) ∈ R^{n×l} is called the noise intensity matrix. Here we consider only noise perturbations arising from the regulation, i.e., inter-node perturbations. Since the nodes of the genetic network model communicate via the variable y(t), we assume that the noise intensity matrix is a function of y(t), which acts on the dynamics of x(t). As in many studies of stochastic dynamical systems, e.g., (Kolmanovskii and Myshkis 1999, W. Chen et al. 2005), we assume that σ(y(t)) can be estimated by

    \mathrm{trace}[\sigma(y(t))\sigma^T(y(t))] \le y^T(t) H y(t), \quad H \ge 0.    (5.27)

Recall that the time derivative of a Wiener process is a white noise process (Arnold 1974). We have dw(t) = n(t)dt, where w(t) is an l-dimensional Wiener process. Hence, (5.25)–(5.26) can be rewritten as the following stochastic differential equations (SDEs):


    dx(t) = [A x(t) + G f(y(t))] dt + \sigma(y(t)) dw(t),    (5.28)
    dy(t) = [C y(t) + D x(t)] dt.    (5.29)
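A minimal Euler–Maruyama experiment (all parameter values assumed, for a scalar instance of the model) illustrates what asymptotic stability in mean square means for (5.28)–(5.29): with noise intensity proportional to y, the ensemble average E|z(t)|² itself decays toward zero.

```python
import numpy as np

# scalar instance of (5.28)-(5.29): A = -1, C = -1, D = 1, G = 0.1, sigma(y) = 0.1*y
rng = np.random.default_rng(1)
f = np.tanh                                # sector nonlinearity with gain k = 1
paths, dt, steps = 400, 0.01, 1000         # 400 sample paths up to t = 10
x = np.ones(paths)
y = np.ones(paths)
ms0 = float(np.mean(x**2 + y**2))          # E|z(0)|^2 = 2
for _ in range(steps):
    dw = rng.normal(0.0, np.sqrt(dt), paths)
    xn = x + dt * (-x + 0.1 * f(y)) + 0.1 * y * dw   # Euler-Maruyama step for (5.28)
    yn = y + dt * (-y + x)                            # deterministic step for (5.29)
    x, y = xn, yn
ms = float(np.mean(x**2 + y**2))
print(ms0, "->", ms)
assert ms < 0.01 * ms0                     # the mean square has decayed by two orders
```

Because σ here vanishes at the equilibrium (σ(0) = 0), the multiplicative noise does not obstruct convergence of the second moment, which is the situation Theorem 5.3 certifies.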

For this genetic network model, we have the following stability theorem (Li et al. 2006a).

Theorem 5.3. If there exist matrices P_{11}, P_{22}, P_{12}, and Λ = diag(λ_1, ..., λ_n) > 0, and a constant ρ > 0, such that the following LMIs hold:

    M_3 = \begin{bmatrix} 2P_{11}A + P_{12}D + DP_{12}^T & DP_{22} + AP_{12} + P_{12}C & P_{11}G \\ P_{22}D + P_{12}^T A + CP_{12}^T & 2P_{22}C + \rho H & P_{12}^T G + k\Lambda \\ G^T P_{11} & G^T P_{12} + k\Lambda & -2\Lambda \end{bmatrix} < 0,

    P = \begin{bmatrix} P_{11} & P_{12} \\ P_{12}^T & P_{22} \end{bmatrix} > 0, \quad P_{11} \le \rho I,    (5.30)

then the genetic network (5.25)–(5.26) is asymptotically stable in mean square.

Proof (Li et al. 2006a): Consider the same Lyapunov function as in the proof of Theorem 5.1. By Itô's formula (Arnold 1974), we obtain the stochastic differential

    dV(x(t), y(t)) = \mathcal{L}V(x(t), y(t)) \, dt + 2[x^T(t)P_{11} + y^T(t)P_{12}^T] \sigma(y(t)) \, dw(t),    (5.31)

where \mathcal{L} is the diffusion operator and

    \mathcal{L}V(x(t), y(t)) = 2x^T(t)P_{11}Ax(t) + 2y^T(t)P_{12}^T Ax(t) + 2x^T(t)P_{11}Gf(y(t)) + 2y^T(t)P_{12}^T Gf(y(t)) + 2x^T(t)P_{12}Dx(t) + 2y^T(t)P_{22}Dx(t) + 2x^T(t)P_{12}Cy(t) + 2y^T(t)P_{22}Cy(t) + \mathrm{trace}(\sigma(y(t))\sigma^T(y(t))P_{11}).    (5.32)

By using (5.27) and (5.30), we have (W. Chen et al. 2005)

    \mathrm{trace}(\sigma(y(t))\sigma^T(y(t))P_{11}) \le \lambda_{\max}(P_{11}) \, \mathrm{trace}(\sigma(y(t))\sigma^T(y(t))) \le \rho \, y^T(t) H y(t).    (5.33)

By considering -2\sum_{i=1}^{n} \lambda_i f(y_i(t))[f(y_i(t)) - k y_i(t)] \ge 0, we obtain

    \mathcal{L}V(x(t), y(t)) \le 2x^T(t)P_{11}Ax(t) + 2y^T(t)P_{12}^T Ax(t) + 2x^T(t)P_{11}Gf(y(t)) + 2y^T(t)P_{12}^T Gf(y(t)) + 2x^T(t)P_{12}Dx(t) + 2y^T(t)P_{22}Dx(t) + 2x^T(t)P_{12}Cy(t) + 2y^T(t)P_{22}Cy(t) - 2\sum_{i=1}^{n} \lambda_i f(y_i(t))[f(y_i(t)) - k y_i(t)] + \rho y^T(t) H y(t)
    = \xi^T(t) M_3 \xi(t),    (5.34)


where ξ(t) = [x^T(t), y^T(t), f^T(y(t))]^T. Therefore, it follows from M_3 < 0 that E[dV(x(t), y(t))] = E[\mathcal{L}V(x(t), y(t)) dt] < 0, where E is the mathematical expectation operator. It is then easy to show that the genetic network (5.25)–(5.26) is asymptotically stable in mean square. □

Next, we consider genetic networks with both time delays and noise perturbations, represented in the following form:

    \dot{x}(t) = A x(t) + G f(y(t - \tau_1(t))) + \sigma(y(t), y(t - \tau_1(t))) n(t),    (5.35)
    \dot{y}(t) = C y(t) + D x(t - \tau_2(t)),    (5.36)

which can also be rewritten as the following SDEs:

    dx(t) = [A x(t) + G f(y(t - \tau_1(t)))] dt + \sigma(y(t), y(t - \tau_1(t))) \, dw(t),    (5.37)
    dy(t) = [C y(t) + D x(t - \tau_2(t))] dt.    (5.38)

We assume that the noise intensity matrix σ(y(t), y(t − τ_1(t))) can be estimated by

    \mathrm{trace}[\sigma(y(t), y(t-\tau_1(t)))\sigma^T(y(t), y(t-\tau_1(t)))] \le y^T(t) H_1 y(t) + y^T(t-\tau_1(t)) H_2 y(t-\tau_1(t)),    (5.39)

with H_1 ≥ 0 and H_2 ≥ 0. For this genetic network model, the main result is summarized in the following theorem (Li et al. 2006a).

Theorem 5.4. If there exist matrices P_{11}, P_{22}, P_{12}, Q > 0, R > 0, S > 0, and Λ = diag(λ_1, ..., λ_n) > 0, and a constant ρ > 0, such that the following LMIs hold:

    M_4 = \begin{bmatrix} 2P_{11}A + R & P_{12}C + AP_{12} & P_{12}D & 0 & P_{11}G \\ P_{12}^T A + CP_{12}^T & 2P_{22}C + S + \rho H_1 & P_{22}D & k\Lambda & P_{12}^T G \\ DP_{12}^T & DP_{22} & -(1-d_2)R & 0 & 0 \\ 0 & k\Lambda & 0 & Q - 2\Lambda & 0 \\ G^T P_{11} & G^T P_{12} & 0 & 0 & -(1-d_1)Q \end{bmatrix} < 0,

    P = \begin{bmatrix} P_{11} & P_{12} \\ P_{12}^T & P_{22} \end{bmatrix} > 0, \quad P_{11} \le \rho I, \quad \rho H_2 - (1 - d_1) S < 0,    (5.40)

then the genetic network (5.35)–(5.36) is asymptotically stable in mean square.

Proof: Consider a Lyapunov–Krasovskii functional as follows:

    V(x(t), y(t), t) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}^T P \begin{bmatrix} x(t) \\ y(t) \end{bmatrix} + \int_{t-\tau_1(t)}^{t} [f^T(y(\mu)) Q f(y(\mu)) + y^T(\mu) S y(\mu)] \, d\mu + \int_{t-\tau_2(t)}^{t} x^T(\mu) R x(\mu) \, d\mu.    (5.41)


By Itô's formula (Arnold 1974), we obtain the stochastic differential

    dV(x(t), y(t), t) = \mathcal{L}V(x(t), y(t), t) \, dt + 2[x^T(t)P_{11} + y^T(t)P_{12}^T] \sigma(y(t), y(t-\tau_1(t))) \, dw(t),    (5.42)

where

    \mathcal{L}V(x(t), y(t), t) = 2x^T(t)P_{11}Ax(t) + 2y^T(t)P_{12}^T Ax(t) + 2x^T(t)P_{11}Gf(y(t-\tau_1(t))) + 2y^T(t)P_{12}^T Gf(y(t-\tau_1(t))) + 2x^T(t)P_{12}Cy(t) + 2y^T(t)P_{22}Cy(t) + 2x^T(t)P_{12}Dx(t-\tau_2(t)) + 2y^T(t)P_{22}Dx(t-\tau_2(t)) + f^T(y(t))Qf(y(t)) - (1-\dot{\tau}_1(t))f^T(y(t-\tau_1(t)))Qf(y(t-\tau_1(t))) + y^T(t)Sy(t) - (1-\dot{\tau}_1(t))y^T(t-\tau_1(t))Sy(t-\tau_1(t)) + x^T(t)Rx(t) - (1-\dot{\tau}_2(t))x^T(t-\tau_2(t))Rx(t-\tau_2(t)) + \mathrm{trace}(\sigma(y(t), y(t-\tau_1(t)))\sigma^T(y(t), y(t-\tau_1(t)))P_{11}).

By (5.39) and (5.40), we have (W. Chen et al. 2005)

    \mathrm{trace}(\sigma(\cdot)\sigma^T(\cdot)P_{11}) \le \lambda_{\max}(P_{11}) \, \mathrm{trace}(\sigma(\cdot)\sigma^T(\cdot)) \le \rho [y^T(t) H_1 y(t) + y^T(t-\tau_1(t)) H_2 y(t-\tau_1(t))].

Using \dot{\tau}_1(t) \le d_1, \dot{\tau}_2(t) \le d_2, the sector inequality -2\sum_{i=1}^{n} \lambda_i f(y_i(t))[f(y_i(t)) - k y_i(t)] \ge 0, and the trace bound above, we obtain

    \mathcal{L}V(x(t), y(t), t) \le \xi^T(t) \, \mathrm{diag}(M_4, \rho H_2 - (1 - d_1) S) \, \xi(t),


where ξ(t) = [x^T(t), y^T(t), x^T(t-\tau_2(t)), f^T(y(t)), f^T(y(t-\tau_1(t))), y^T(t-\tau_1(t))]^T. Since M_4 < 0 and ρH_2 − (1 − d_1)S < 0, we obtain

    E[dV(x(t), y(t), t)] = E[\mathcal{L}V(x(t), y(t), t) \, dt] < 0

(5.43)

for all x(t) and y(t) except x(t) = y(t) = 0. Therefore, the genetic network (5.35)–(5.36) is asymptotically stable in mean square. □

5.3.2 Stochastic Stability with Disturbance Attenuation

In the above section, we established the mean-square asymptotic stability of the genetic network model. The definition of mean-square asymptotic stability (Arnold 1974) is rather restrictive: it requires that

    \lim_{t \to \infty} E|z(t)|^2 = 0,    (5.44)

where z(t) = [x(t)^T, y(t)^T]^T. If the noise perturbations do not vanish in the steady state, it is highly unlikely that the network will achieve mean-square asymptotic stability. Numerous experimental results also indicate that real biological systems generally cannot achieve mean-square asymptotic stability and that small fluctuations around the steady states generally occur. In this section, we study a more realistic stochastic genetic network model:

    dx(t) = [A x(t) + G f(y(t))] dt + \sigma(x(t), y(t)) \, dw_1(t) + v(t) \, dw_2(t),    (5.45)
    dy(t) = [C y(t) + D x(t)] dt.    (5.46)

For simplicity and convenience, we let σ(x(t), y(t)) ∈ R^n and v(t) ∈ R^n belong to L²[0, ∞); w_1(t) and w_2(t) are two independent one-dimensional Wiener processes. As we will see in the following analysis, the results are independent of the form of v(t): they hold no matter what v(t) is and no matter where it is introduced. We assume that the perturbation terms can represent perturbations from all sources, and even if another noise perturbation term had to be added to (5.46), the procedure of the analysis would not change significantly. For (5.45)–(5.46), when v(t) does not vanish in steady states, the network cannot achieve mean-square asymptotic stability. We therefore give another definition below.

Definition 5.5. For a given scalar γ > 0, the network (5.45)–(5.46) is said to be stochastically stable with disturbance attenuation γ if, when v(t) = 0, the network is asymptotically stable in mean square, and, under zero initial conditions,

    \|z(t)\|_{E2} < \gamma \|v(t)\|_2    (5.47)

for all non-zero v(t), where

5 Stability Analysis of Genetic Networks in Lur’e Form

\[ \|z(t)\|_{E2} = \left( E \int_0^{\infty} |z(t)|^2 \, dt \right)^{1/2} \tag{5.48} \]

for z(t) = [x(t)^T, y(t)^T]^T (Xu and Chen 2002).
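The E2-type norm in (5.48) can be estimated empirically by Monte Carlo simulation. As an illustration (a sketch we add, not part of the book), the code below applies the Euler–Maruyama scheme to a scalar surrogate of (5.45)–(5.46), dz = −az dt + σz dw₁ + v dw₂ with constant v, and estimates the attenuation ratio ‖z‖E2/‖v‖₂; all parameter values here are hypothetical.

```python
import math, random

def attenuation_ratio(a=1.0, sig=0.1, v=0.05, T=20.0, dt=0.01,
                      paths=200, seed=1):
    """Estimate ||z||_E2 / ||v||_2 for the scalar SDE
    dz = -a*z dt + sig*z dw1 + v dw2 with z(0) = 0 (zero initial condition)."""
    rng = random.Random(seed)
    steps = int(T / dt)
    sq = math.sqrt(dt)                       # std. dev. of a Wiener increment
    acc = 0.0                                # sum over paths of int_0^T z^2 dt
    for _ in range(paths):
        z = 0.0
        for _ in range(steps):
            z += -a * z * dt + sig * z * rng.gauss(0.0, sq) + v * rng.gauss(0.0, sq)
            acc += z * z * dt
    z_norm = math.sqrt(acc / paths)          # (E int |z|^2 dt)^(1/2)
    v_norm = math.sqrt(v * v * T)            # (int |v|^2 dt)^(1/2), v constant
    return z_norm / v_norm

ratio = attenuation_ratio()
print(ratio)   # below 1 for these parameters, i.e., gamma = 1 already attenuates v
```

For this strongly damped example the estimated ratio is well below 1, so any γ ≥ 1 certifies attenuation in the sense of Definition 5.5.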

Similarly, we also assume that σ(x(t), y(t)) can be estimated by

\[ \sigma^T(x(t), y(t))\sigma(x(t), y(t)) \le x^T(t)H_1 x(t) + y^T(t)H_2 y(t) \tag{5.49} \]

with H_1 ≥ 0 and H_2 ≥ 0. For this genetic network model, we have the following stability theorem (Li et al. 2007a):

Theorem 5.6. Given a scalar γ > 0, if there exist matrices P_{11}, P_{22}, P_{12}, and Λ = diag(λ_1, ..., λ_n) > 0, and a constant ρ > 0, such that the following LMIs hold:

\[
M_5 = \begin{bmatrix}
(1,1) & (1,2) & P_{11}G \\
(1,2)^T & (2,2) & P_{12}G + k\Lambda \\
G^T P_{11} & G^T P_{12}^T + k\Lambda & -2\Lambda
\end{bmatrix} < 0,
\]
\[
P = \begin{bmatrix} P_{11} & P_{12} \\ P_{12}^T & P_{22} \end{bmatrix} > 0, \qquad P_{11} \le \rho I, \tag{5.50}
\]

where (1,1) = 2P_{11}A + P_{12}D + D^T P_{12}^T + \rho H_1 + (\rho/\gamma^2)I, (1,2) = D^T P_{22} + A^T P_{12} + P_{12}C, and (2,2) = 2P_{22}C + \rho H_2 + (\rho/\gamma^2)I, then the genetic network (5.45)–(5.46) is stochastically stable with disturbance attenuation γ.

Proof (Li et al. 2007a): Consider the following Lyapunov function:

\[ V(x(t), y(t)) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}^T P \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}. \tag{5.51} \]

By Itô's formula, we obtain the following stochastic differential:

\[ dV(x(t), y(t)) = \mathcal{L}V(x(t), y(t))dt + 2[x^T(t)P_{11} + y^T(t)P_{12}^T] \cdot [\sigma(x(t), y(t))dw_1(t) + v(t)dw_2(t)], \tag{5.52} \]

where \mathcal{L} is again the diffusion operator. Assuming that the model (5.45)–(5.46) has zero initial conditions (in accordance with Definition 5.5), we can derive

\[ E(V(x(t), y(t))) = E\left( \int_0^t \mathcal{L}V(x(s), y(s))\,ds \right). \tag{5.53} \]

For γ > 0, we define

\[ J(t) = E\left( \int_0^t [x^T(s)x(s) + y^T(s)y(s) - \gamma^2 v^T(s)v(s)]\,ds \right). \tag{5.54} \]

5.3 Stochastic Stability of Gene Regulatory Networks

Then, from (5.53) and (5.54), it is easy to show that, for α > 0,

\[ J(t) \le E\left( \int_0^t S_1(s)\,ds \right), \tag{5.55} \]

where S_1(s) = \alpha\mathcal{L}V(x(s), y(s)) + x^T(s)x(s) + y^T(s)y(s) - \gamma^2 v^T(s)v(s). Letting α = γ²/ρ, we obtain

\[
\begin{aligned}
S_1(t) = \frac{\gamma^2}{\rho}\Big[& 2x^T(t)P_{11}Ax(t) + 2y^T(t)P_{12}^T Ax(t) \\
&+ 2x^T(t)P_{11}Gf(y(t)) + 2y^T(t)P_{12}^T Gf(y(t)) \\
&+ 2x^T(t)P_{12}Dx(t) + 2y^T(t)P_{22}Dx(t) + 2x^T(t)P_{12}Cy(t) \\
&+ 2y^T(t)P_{22}Cy(t) + \frac{\rho}{\gamma^2}[x^T(t)x(t) + y^T(t)y(t)] \\
&+ \mathrm{trace}(\sigma(x(t), y(t))\sigma^T(x(t), y(t))P_{11}) \\
&- \rho v^T(t)v(t) + \mathrm{trace}(v(t)v^T(t)P_{11})\Big]. 
\end{aligned} \tag{5.56}
\]

Considering P_{11} \le \rho I and -2\sum_{i=1}^n \lambda_i f(y_i(t))[f(y_i(t)) - ky_i(t)] \ge 0, and using (5.49), we obtain

\[
\begin{aligned}
S_1(t) \le \frac{\gamma^2}{\rho}\Big[& 2x^T(t)P_{11}Ax(t) + 2y^T(t)P_{12}^T Ax(t) + 2x^T(t)P_{11}Gf(y(t)) \\
&+ 2y^T(t)P_{12}^T Gf(y(t)) + 2x^T(t)P_{12}Dx(t) \\
&+ 2y^T(t)P_{22}Dx(t) + 2x^T(t)P_{12}Cy(t) + 2y^T(t)P_{22}Cy(t) \\
&- 2\sum_{i=1}^n \lambda_i f(y_i(t))[f(y_i(t)) - ky_i(t)] + \rho x^T(t)H_1x(t) \\
&+ \rho y^T(t)H_2y(t) + \frac{\rho}{\gamma^2}[x^T(t)x(t) + y^T(t)y(t)]\Big] \\
= \frac{\gamma^2}{\rho}&\,\xi^T(t)M_5\xi(t), 
\end{aligned} \tag{5.57}
\]

where ξ(t) = [x^T(t), y^T(t), f^T(y(t))]^T. In view of the LMIs (5.50) and (5.55), we obtain

\[ J(t) \le E\left( \frac{\gamma^2}{\rho} \int_0^t \xi^T(s)M_5\xi(s)\,ds \right) < 0 \tag{5.58} \]

for all x(t) and y(t) except x(t) = y(t) = f(y(t)) = 0. Then (5.47) follows immediately from (5.54) and (5.58); that is, the genetic network (5.45)–(5.46) is stochastically stable with disturbance attenuation γ. ∎

Next, we consider genetic networks with both time delays and noise perturbations of the following form:


\[ dx(t) = [Ax(t) + Gf(y(t - \tau_1(t)))]dt + \sigma(x(t), x(t-\tau_2(t)), y(t), y(t-\tau_1(t)))dw_1(t) + v(t)dw_2(t), \tag{5.59} \]
\[ dy(t) = [Cy(t) + Dx(t - \tau_2(t))]dt, \tag{5.60} \]

where w_1(t) and w_2(t) are defined in the same manner as in the above section, and τ_1(t) > 0 and τ_2(t) > 0 are time-varying delays satisfying \dot{\tau}_1(t) \le d_1 < 1 and \dot{\tau}_2(t) \le d_2 < 1. We also assume that the noise intensity matrix σ(x(t), x(t − τ_2(t)), y(t), y(t − τ_1(t))) can be estimated by

\[ \sigma^T\sigma \le x^T(t)H_1x(t) + x^T(t-\tau_2(t))H_2x(t-\tau_2(t)) + y^T(t)H_3y(t) + y^T(t-\tau_1(t))H_4y(t-\tau_1(t)), \tag{5.61} \]

with H_1 ≥ 0, H_2 ≥ 0, H_3 ≥ 0, and H_4 ≥ 0. For this network, the main result is summarized in the following theorem (Li et al. 2007a).

Theorem 5.7. Given a scalar γ > 0, if there exist matrices P_{11}, P_{22}, P_{12}, Q > 0, R > 0, S > 0, Λ = diag(λ_1, ..., λ_n) > 0, and a constant ρ > 0, such that the following LMIs hold:

\[
M_6 = \begin{bmatrix}
(1,1) & P_{12}C + A^T P_{12} & P_{12}D & 0 & P_{11}G \\
P_{12}^T A + C^T P_{12}^T & (2,2) & P_{22}D & k\Lambda & P_{12}^T G \\
D^T P_{12}^T & D^T P_{22} & (3,3) & 0 & 0 \\
0 & k\Lambda & 0 & Q - 2\Lambda & 0 \\
G^T P_{11} & G^T P_{12} & 0 & 0 & -(1-d_1)Q
\end{bmatrix} < 0,
\]
\[
P = \begin{bmatrix} P_{11} & P_{12} \\ P_{12}^T & P_{22} \end{bmatrix} > 0, \qquad P_{11} \le \rho I, \qquad \rho H_4 - (1-d_1)S < 0, \tag{5.62}
\]

where (1,1) = 2P_{11}A + R + \rho H_1 + (\rho/\gamma^2)I, (2,2) = 2P_{22}C + S + \rho H_3 + (\rho/\gamma^2)I, and (3,3) = -(1 - d_2)R + \rho H_2, then the genetic network (5.59)–(5.60) is stochastically stable with disturbance attenuation γ.

Proof (Li et al. 2007a): Consider the following Lyapunov–Krasovskii functional (Kolmanovskii and Myshkis 1999):

\[
V(x(t), y(t), t) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}^T P \begin{bmatrix} x(t) \\ y(t) \end{bmatrix} + \int_{t-\tau_1(t)}^{t} [f^T(y(\mu))Qf(y(\mu)) + y^T(\mu)Sy(\mu)]\,d\mu + \int_{t-\tau_2(t)}^{t} x^T(\mu)Rx(\mu)\,d\mu. \tag{5.63}
\]

By using Itô's formula, we obtain the stochastic differential

\[ dV(x(t), y(t), t) = \mathcal{L}V(x(t), y(t), t)dt + 2[x^T(t)P_{11} + y^T(t)P_{12}^T] \cdot [\sigma(x(t), x(t-\tau_2(t)), y(t), y(t-\tau_1(t)))dw_1(t) + v(t)dw_2(t)]. \tag{5.64} \]

Following a procedure similar to that in the proof of Theorem 5.6, we obtain

\[ J(t) \le E\left( \int_0^t S_2(s)\,ds \right), \tag{5.65} \]

where S_2(s) = (\gamma^2/\rho)\mathcal{L}V(x(s), y(s), s) + x^T(s)x(s) + y^T(s)y(s) - \gamma^2 v^T(s)v(s). Using the above Lyapunov–Krasovskii functional, we have

\[
\begin{aligned}
S_2(t) = \frac{\gamma^2}{\rho}\Big[& 2x^T(t)P_{11}Ax(t) + 2y^T(t)P_{12}^T Ax(t) \\
&+ 2x^T(t)P_{11}Gf(y(t-\tau_1(t))) + 2y^T(t)P_{12}^T Gf(y(t-\tau_1(t))) \\
&+ 2x^T(t)P_{12}Cy(t) + 2y^T(t)P_{22}Cy(t) \\
&+ 2x^T(t)P_{12}Dx(t-\tau_2(t)) + 2y^T(t)P_{22}Dx(t-\tau_2(t)) \\
&+ f^T(y(t))Qf(y(t)) - (1-\dot{\tau}_1(t))f^T(y(t-\tau_1(t)))Qf(y(t-\tau_1(t))) \\
&+ y^T(t)Sy(t) - (1-\dot{\tau}_1(t))y^T(t-\tau_1(t))Sy(t-\tau_1(t)) \\
&+ x^T(t)Rx(t) - (1-\dot{\tau}_2(t))x^T(t-\tau_2(t))Rx(t-\tau_2(t)) \\
&+ \frac{\rho}{\gamma^2}x^T(t)x(t) + \frac{\rho}{\gamma^2}y^T(t)y(t) - \rho v^T(t)v(t) \\
&+ \mathrm{trace}(\sigma\sigma^T P_{11}) + \mathrm{trace}(v(t)v^T(t)P_{11})\Big] \\
\le \frac{\gamma^2}{\rho}\Big[& 2x^T(t)P_{11}Ax(t) + 2y^T(t)P_{12}^T Ax(t) \\
&+ 2x^T(t)P_{11}Gf(y(t-\tau_1(t))) + 2y^T(t)P_{12}^T Gf(y(t-\tau_1(t))) \\
&+ 2x^T(t)P_{12}Cy(t) + 2y^T(t)P_{22}Cy(t) \\
&+ 2x^T(t)P_{12}Dx(t-\tau_2(t)) + 2y^T(t)P_{22}Dx(t-\tau_2(t)) \\
&+ f^T(y(t))Qf(y(t)) - (1-d_1)f^T(y(t-\tau_1(t)))Qf(y(t-\tau_1(t))) \\
&+ y^T(t)Sy(t) - (1-d_1)y^T(t-\tau_1(t))Sy(t-\tau_1(t)) \\
&+ x^T(t)Rx(t) - (1-d_2)x^T(t-\tau_2(t))Rx(t-\tau_2(t)) \\
&+ \frac{\rho}{\gamma^2}x^T(t)x(t) + \frac{\rho}{\gamma^2}y^T(t)y(t) + \rho x^T(t)H_1x(t) \\
&+ \rho x^T(t-\tau_2(t))H_2x(t-\tau_2(t)) + \rho y^T(t)H_3y(t) \\
&+ \rho y^T(t-\tau_1(t))H_4y(t-\tau_1(t)) \\
&- 2\sum_{i=1}^n \lambda_i f(y_i(t))[f(y_i(t)) - ky_i(t)]\Big]. 
\end{aligned} \tag{5.66}
\]

In the above analysis, we have used the sector condition as well as the inequality (5.61). Letting ξ(t) = [x^T(t), y^T(t), x^T(t − τ_2(t)), f^T(y(t)), f^T(y(t − τ_1(t)))]^T, we have

\[ S_2(t) \le \frac{\gamma^2}{\rho}\left[\xi^T(t)M_6\xi(t) + y^T(t-\tau_1)\big(\rho H_4 - (1-d_1)S\big)y(t-\tau_1)\right]. \tag{5.67} \]

Since M_6 < 0 and ρH_4 − (1 − d_1)S < 0, we obtain

\[ J(t) \le E\left( \int_0^t S_2(s)\,ds \right) < 0. \tag{5.68} \]

We can then easily obtain inequality (5.47). Therefore, the genetic network (5.59)–(5.60) is stochastically stable with disturbance attenuation γ. ∎

As mentioned in the Discussion section of (Li et al. 2006a), the model can be generalized in many ways, and the analysis procedure carries over without significant additional difficulty.

5.4 Examples

In this section, we present three examples to show the effectiveness and correctness of the theoretical results. In order to demonstrate the evaluation of the theoretical results in detail, we consider a small genetic network (but with a complex coupling topology) with five nodes, as shown in Figure 5.1 (Li et al. 2006a). Figure 5.1 shows an interaction graph of a gene regulatory network, where each ellipse represents a node and the lines represent regulatory links, in which "→" and "⊣" denote activation and repression, respectively. We assume that the dimensionless transcriptional rates are all 0.5. According to the definition of links in Section 5.1, we obtain the coupling matrix G of this network as

\[
G = 0.5 \times \begin{bmatrix}
0 & -1 & 1 & 0 & 0 \\
-1 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 0 & 0 \\
1 & -1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0
\end{bmatrix}, \tag{5.69}
\]

and l = 0.5 × [1, 1, 0, 1, 0]^T in (5.9). In the following three examples, all the networks have this topology.


Figure 5.1 The interaction graph of a genetic network model, where "⊣" represents repression and "→" represents activation (from (Li et al. 2006a))

Example 5.1 We consider the genetic network in Figure 5.1 with time delays of the form (5.18)–(5.19). Let A = C = diag(−1, −1, −1, −1, −1),


D = diag(0.8, 0.8, 0.8, 0.8, 0.8), and f(x) = x^2/(1 + x^2); in other words, the Hill coefficient H is 2. It is easy to show that the maximum value of the derivative of f(x) is less than k = 0.65. Assuming that the time delays are τ_1(t) = 1 + 0.1 sin(t) and τ_2(t) = 0.5 + 0.1 sin(t), we have d_1 = d_2 = 0.1 < 1. The unique equilibrium of this network is m* = [0.4302, 0.5126, 0.0742, 0.4816, 0.0657]^T and p* = [0.3459, 0.4109, 0.0651, 0.3860, 0.0553]^T. We first shift the equilibrium to the origin. According to Theorem 5.2, if the LMIs (5.20) hold, then the genetic network is globally asymptotically stable. By using the MATLAB LMI Toolbox, we can easily obtain feasible solutions of the LMIs (5.20); thus, the network is globally asymptotically stable. The trajectories of the variables x(t) are shown in Figure 5.2, which indicates that the network considered in this example is indeed stable.
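The sector bound k = 0.65 on the Hill function can be verified numerically; a short check (ours, not part of the original example) evaluates f′(x) = 2x/(1 + x²)² on a fine grid. The maximum is attained at x = 1/√3, where f′ = 3√3/8 ≈ 0.6495 < 0.65:

```python
# Hill function f(x) = x^2 / (1 + x^2) (Hill coefficient 2) and its derivative.
def fprime(x):
    return 2.0 * x / (1.0 + x * x) ** 2

# f' vanishes at 0 and as x -> infinity, so a grid over [0, 20] suffices.
grid_max = max(fprime(i * 1e-4) for i in range(200001))
print(grid_max)   # about 0.6495 = 3*sqrt(3)/8, so the sector bound k = 0.65 holds
assert grid_max < 0.65
```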


Figure 5.2 Trajectories of x(t) in the genetic network with time delays (from (Li et al. 2006a))
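As a cross-check of this conclusion, the delayed network can also be integrated directly. The sketch below is our reconstruction: we assume the Lur'e form dx/dt = Ax + G f(y(t − τ₁(t))) + l and dy/dt = Cy + D x(t − τ₂(t)) for (5.18)–(5.19), with the matrices and delays of this example, and use a fixed-step Euler scheme with a history buffer; the trajectories settle to a steady state, consistent with global asymptotic stability.

```python
import math

n = 5
# Coupling matrix 0.5*G from (5.69) and basal rates l from (5.9).
G = [[0.0, -0.5, 0.5, 0.0, 0.0],
     [-0.5, 0.0, 0.0, 0.5, 0.5],
     [0.0, 0.5, 0.0, 0.0, 0.0],
     [0.5, -0.5, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.5, 0.0]]
l = [0.5, 0.5, 0.0, 0.5, 0.0]
f = lambda u: u * u / (1.0 + u * u)          # Hill function, H = 2

dt = 0.01
hist = 130                                    # buffer covers the maximum delay 1.1
xh = [[1.0] * n for _ in range(hist)]         # constant initial history
yh = [[1.0] * n for _ in range(hist)]

for k in range(10000):                        # integrate to t = 100
    t = k * dt
    d1 = int((1.0 + 0.1 * math.sin(t)) / dt)  # tau1(t) in steps
    d2 = int((0.5 + 0.1 * math.sin(t)) / dt)  # tau2(t) in steps
    x, y = xh[-1], yh[-1]
    yd, xd = yh[-1 - d1], xh[-1 - d2]         # delayed states
    xn = [x[i] + dt * (-x[i] + sum(G[i][j] * f(yd[j]) for j in range(n)) + l[i])
          for i in range(n)]
    yn = [y[i] + dt * (-y[i] + 0.8 * xd[i]) for i in range(n)]
    xh.append(xn); xh.pop(0)
    yh.append(yn); yh.pop(0)

drift = max(abs(a - b) for a, b in zip(xh[-1], xh[-2]))
print(xh[-1], drift)   # a positive steady state; per-step drift is essentially zero
```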

Example 5.2 In this example, let us consider the genetic network in Figure 5.1 without time delays but with stochastic perturbations of the form (5.25)–(5.26). Let D = diag(1, 1, 1, 1, 1), let n(t) be a scalar white Gaussian noise process with zero mean, and let σ(y(t)) = [σ_1(y(t)), ..., σ_5(y(t))]^T with σ_i(y(t)) = 0.3 \sum_{j=1}^{5} y_j(t) for all i. The other parameters are the same as those in Example 5.1. The unique equilibrium in the absence of perturbations is m* = [0.3955, 0.5208, 0.1164, 0.4493, 0.0803]^T and p* = [0.3905, 0.5296, 0.1295, 0.4408, 0.0850]^T. We first shift the equilibrium to the origin. By using the MATLAB LMI Toolbox, we can easily find feasible solutions of the LMIs (5.30) in Theorem 5.3, which indicates that the network with stochastic perturbations is asymptotically stable in mean square. The trajectories of the variables x(t) are shown in Figure 5.3, which indicates that the network of this example is indeed stable in mean square.

Figure 5.3 Trajectories of x(t) in the genetic network with stochastic perturbations (from (Li et al. 2006a))

Figure 5.4 Trajectories of x(t) in the genetic network with both time delays and stochastic perturbations (from (Li et al. 2007a))

Example 5.3 In this example, we consider a genetic network of the form of (5.59)–(5.60). We set the noise intensity as σ(x(t), y(t − τ_1(t))) = [σ_1(x(t), y(t − τ_1(t))), ..., σ_5(x(t), y(t − τ_1(t)))]^T with σ_i(x(t), y(t − τ_1(t))) = 0.05[x_i(t) + \sum_{j=1}^{5} y_j(t − τ_1(t))] for all i. From σ(x(t), y(t − τ_1(t))), we can easily obtain the matrices H_i, i = 1, 2, 3, 4. Since the theoretical result is independent of the form of v(t), we let v(t) = [0.05, 0.05, 0.05, 0.05, 0.05]^T. The other parameters are the same as those in Example 5.1. By applying Theorem 5.7 and using the MATLAB LMI Toolbox, we can easily obtain feasible solutions of the LMIs (5.62) for γ ≥ 4.5. The trajectories of the variables x(t) are shown in Figure 5.4. Since v(t) is constant in time, the right-hand side of (5.47) equals 0.5031√t when γ = 4.5, and numerical calculations show that the left-hand side of (5.47) remains below this bound. Thus, the network is indeed stochastically stable with disturbance attenuation γ = 4.5.

6 Design of Synthetic Switching Networks

One of the major challenges in post-genomic biology is to understand how genes, proteins, and small molecules dynamically interact to form molecular networks that facilitate sophisticated biological functions. From the engineering viewpoint, there are two major approaches to clarifying network structures and their functions in a cellular system. One is called "reverse engineering," in which biomolecular networks are elucidated by conducting biological experiments on a specific species or organism, collecting data using high-throughput technologies, and finally inferring the underlying network structure with computational and analytical theory; in other words, the process of reverse engineering runs from data or experiments to networks. The other approach is called "forward engineering," in which a biomolecular device or a simple network with a specific function is first designed; this artificial part is then integrated into a living organism, and finally data are collected to confirm its functioning; in other words, the process of forward engineering runs from networks to data. The ability to rationally design complex networks from the bottom up can also help realize useful quantitative model systems for gaining a deeper appreciation of the principles governing the functional characteristics of complex biological systems. In this chapter, we focus on design methods for biomolecular networks, which clearly belong to the category of forward engineering. This topic is closely related to synthetic biology, a new and rapidly emerging discipline aimed at the design and construction of new biological devices and systems, and the re-design of existing or natural biological systems, for desired purposes.
Although the basic concepts of designing biomolecular networks and controlling cellular systems at the DNA level have been in existence for sixty years, it is the recent advances in genetic engineering that have made both theoretical design and experimental implementation realistic. Progress in the theory of networks and dynamics provides mathematical frameworks for designing biologically viable biomolecular networks with specified functions. One such function, which exists ubiquitously in cellular systems, is multistability, i.e., the capacity to achieve multiple alternative internal states in response to different stimuli. Multistability is a defining characteristic of a switch. Cells can switch between multiple internal states to accommodate environmental and intercellular conditions. It is becoming increasingly clear that such multiple discrete and alternative stable states are generated by regulatory interactions among cellular components. Such capacity has been found in both synthetic and natural biomolecular networks, including gene regulatory networks (Gardner et al. 2000), signal transduction networks (Markevich et al. 2004, Rodriguez et al. 2008), and metabolic networks (Ozbudak et al. 2004). Multistability has fundamental biological significance, notably in cell differentiation (Suel et al. 2006, Becskei et al. 2001), cell fate decision (Xiong and Ferrell Jr 2003), adaptive response to environmental changes (Kashiwagi et al. 2006), regulation of cell-cycle oscillations during mitosis (Pomerening et al. 2003), and so on. Many studies have indicated that the multistability of these systems is attributable to positive feedback loops in their regulatory networks.

In the modern information technology age, switch-like structures play important roles. For instance, the silicon switch is an essential building block of a computer: by connecting a few silicon switches we can construct a simple gate, and with additional silicon switches we can develop a logic circuit. When a network is connected by a large number of silicon switches, we can construct a high-capacity memory, a powerful CPU, or even a sophisticated computer. Analogously, a living organism can be viewed as being synthesized from basic building blocks, including gene switches, gene sensors, and gene oscillators, which are the main topics of this book. By connecting a few such building blocks, we can synthesize a module with a more elaborate function. Hence, it is important to gain a deep understanding of these building blocks.
Responses of a complex biomolecular network are often understood to be facilitated by various switches. For example, a cell or an organism often switches off one gene or gene group while simultaneously switching on a different gene or group of genes in response to environmental changes. Recently, it has been shown that genetic switches can turn therapeutic genes on and off, making it possible to regulate the levels of compounds such as insulin in diabetics. Therefore, designing and constructing such a building block represents a first step towards cellular control, by monitoring and manipulating biological processes at the DNA level. This capability can be used not only for building modules to synthesize artificial biological systems but also has great potential for biotechnological and therapeutic applications. The perspectives, as well as simple comparisons between silicon computing systems and synthetic biological systems, are listed in Table 1.1. Here, we will briefly introduce some basic concepts of switching networks and then present a general theoretical framework for designing switching networks from the viewpoints of engineering and synthetic biology.


6.1 Types of Switches

Bistable switches are among the most extensively studied building blocks of biomolecular networks (Ferrell 2002). Much theoretical and experimental research has been carried out to elucidate the structures and functions of various switches in cellular systems. A bistable system is like a toggle that can reside in only one of two stable states. Two simple mechanisms that can exhibit bistability, through a positive or a double-negative feedback loop, are shown in Figure 6.1. The synthetic toggle switch (Gardner et al. 2000) takes the form shown in Figure 6.1 (a), and the bistable model of the natural lactose utilization network (Ozbudak et al. 2004) takes the form shown in Figure 6.1 (b).


Figure 6.1 Two simple networks that can exhibit bistability. (a) A double-negative feedback loop, where protein A inhibits B and protein B inhibits A. Thus there can be a stable steady-state in which A is on and B is off, or one in which B is on and A is off, but there cannot be a stable steady-state in which both A and B are on or off. (b) A positive feedback loop in which A activates B and B activates A. There can be a stable steady-state in which both A and B are off, or one in which both A and B are on, but not one in which A is on and B is off or vice versa. Both types of circuit exhibit persistent, self-perpetuating responses long after the triggering stimulus is removed (from (Ferrell 2002))
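A minimal model of the double-negative loop in Figure 6.1 (a) is the symmetric pair du/dt = α/(1 + vⁿ) − u, dv/dt = α/(1 + uⁿ) − v. The sketch below is ours; α = 4 and n = 2 are illustrative values chosen inside the bistable regime, not taken from the book. Starting in either basin of attraction yields one of the two mirror-image stable states:

```python
def settle(u, v, alpha=4.0, n=2, dt=0.01, steps=30000):
    """Integrate the symmetric double-negative loop
    du/dt = alpha/(1+v^n) - u, dv/dt = alpha/(1+u^n) - v (forward Euler)."""
    for _ in range(steps):
        du = alpha / (1.0 + v ** n) - u
        dv = alpha / (1.0 + u ** n) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

state_a = settle(3.0, 0.1)   # A-high / B-low basin
state_b = settle(0.1, 3.0)   # B-high / A-low basin
print(state_a, state_b)      # two distinct stable states, mirror images
```

There is no stable state with both species high: each high concentration shuts the other gene off, exactly as the caption describes.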

Switching networks can be grouped into two distinct categories: reversible and irreversible switches. One can classify switches by plotting their signal-response curves, which display the qualitative changes in the steady-states as the input signal or a system parameter is smoothly varied. Consider the phosphorylation and dephosphorylation reactions shown in Figure 6.2 (a). The dynamics of these reactions is governed by MM kinetics as follows (Tyson et al. 2003):


\[ \frac{dR_P}{dt} = \frac{k_1 S(R_T - R_P)}{K_{m1} + R_T - R_P} - \frac{k_2 R_P}{K_{m2} + R_P}, \tag{6.1} \]

where S is the signal strength and R_P is the concentration of the phosphorylated form, R is the concentration of the dephosphorylated form, and R_T = R + R_P is the total concentration of the molecule. The steady-state concentration of the phosphorylated form is shown in Figure 6.2 (b). The mechanism for creating a switch-like signal-response curve is called zero-order ultrasensitivity: when the signal strength S is close to the threshold, small signal changes result in large changes in the steady-state response. Ultrasensitivity is the opposite of homeostasis, where the steady-state concentration of the response is confined to a narrow window for a broad range of signal strengths. The steady-state response R_P increases continuously with signal strength, which is called "graded": a slightly stronger signal results in a slightly stronger response. "Reversible" means that if the signal strength is changed from S_initial to S_final, the response at S_final is the same irrespective of whether the signal strength is increased (S_initial < S_final) or decreased (S_initial > S_final). Clearly, the switch in Figure 6.2 is reversible because of the characteristic shown in Figure 6.2 (b). Such a switch is also called a gate, where S is taken as an input and R_P as an output. Note that Figure 6.2 (b) is a map on a parameter space in which S is a parameter and R_P is a state variable; there is only one stable equilibrium for each specific value of the parameter S.

Now, we consider an example of an irreversible switch, a mutual activation network, as shown in Figure 6.3 (a), whose dynamics is governed by (Tyson et al. 2003)

\[ \frac{dR}{dt} = k_0 E_P(R) + k_1 S - k_2 R, \tag{6.2} \]

where

\[ E_P(R) = \frac{2 k_3 R J_4}{K_{JR} + \sqrt{K_{JR}^2 - 4(k_4 - k_3 R) k_3 R J_4}}, \tag{6.3} \]

with K_{JR} = k_4 − k_3 R + k_4 J_3 + k_3 R J_4. In the network, R activates protein E by phosphorylating it into its active form E_P, and E_P in turn enhances the synthesis of R, resulting in a positive feedback loop. In contrast to the graded and reversible sigmoidal signal response, the response of such a mutual activation network can be a discontinuous switch. As the signal magnitude S exceeds a critical value S_crit, the response changes abruptly and irreversibly, as shown in Figure 6.3 (b): as the signal strength S increases, the response stays low until S exceeds S_crit, at which point the response jumps discontinuously to a high value; if S then decreases, the response stays high, i.e., the switch is irreversible (Tyson et al. 2003). Clearly, there are two stable equilibria for each value of the parameter S in the interval 0 ≤ S ≤ S_crit, as shown by the solid lines in Figure 6.3 (b).


Figure 6.2 Sigmoidal signal-response element: a reversible switch or a gate. (a) The network for phosphorylation and dephosphorylation reactions. (b) The sigmoidal response of the steady-state concentration of the phosphorylated form. The parameter values are k1 = k2 = 1, RT = 1, and Km1 = Km2 = 0.05 (from (Tyson et al. 2003))
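With the caption's parameter values, (6.1) can be integrated to steady state on either side of the threshold S = k2/k1 = 1; the small Michaelis constants put the enzymes in the zero-order regime, so the response jumps sharply (a sketch we add for illustration):

```python
def rp_steady(S, k1=1.0, k2=1.0, RT=1.0, Km1=0.05, Km2=0.05,
              dt=0.01, steps=50000):
    """Steady-state phosphorylated fraction of (6.1), by forward Euler."""
    RP = 0.0
    for _ in range(steps):
        dRP = k1 * S * (RT - RP) / (Km1 + RT - RP) - k2 * RP / (Km2 + RP)
        RP += dt * dRP
    return RP

low, high = rp_steady(0.8), rp_steady(1.2)
print(low, high)   # a 20% change in S around the threshold flips the response
```

Despite the discrete-looking jump, sweeping S back down retraces the same curve, which is what makes this switch reversible.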


Figure 6.3 An irreversible switch: a one-way switch: (a) the mutual activation network; (b) the signal-response curve. The parameter values are k0 = 0.4, k1 = 0.01, k2 = k3 = 1, k4 = 0.2, and J3 = J4 = 0.05 (from (Tyson et al. 2003))
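The one-way character of Figure 6.3 can be reproduced numerically. The sketch below is ours: it integrates (6.2)–(6.3) with the caption's parameter values, writing E_P(R) as the Goldbeter–Koshland expression in (6.3). Once a strong stimulus drives R high, removing the stimulus (S back to 0) no longer resets it:

```python
import math

k0, k1, k2, k3, k4, J3, J4 = 0.4, 0.01, 1.0, 1.0, 0.2, 0.05, 0.05

def ep(R):
    """E_P(R) from (6.3), the Goldbeter-Koshland form."""
    K = k4 - k3 * R + k4 * J3 + k3 * R * J4
    return 2.0 * k3 * R * J4 / (K + math.sqrt(K * K - 4.0 * (k4 - k3 * R) * k3 * R * J4))

def settle(R, S, dt=0.01, steps=40000):
    for _ in range(steps):
        R += dt * (k0 * ep(R) + k1 * S - k2 * R)
    return R

never_on = settle(0.0, S=0.0)       # no stimulus: stays in the off-state
switched = settle(0.0, S=20.0)      # strong signal (above S_crit): jumps on
still_on = settle(switched, S=0.0)  # signal removed: the response stays high
print(never_on, switched, still_on)
```

The off-state and the retained on-state coexist at S = 0, which is exactly the bistability underlying the solid branches in Figure 6.3 (b).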


Another example of an irreversible switch is the mutual inhibition network shown in Figure 6.4, where R represses E and E in turn inhibits R, thereby forming a positive feedback loop. Its dynamics is governed by (Tyson et al. 2003)

\[ \frac{dR}{dt} = k_0 + k_1 S - k_2 R - k_2' E_P(R) R, \tag{6.4} \]

where

\[ E_P(R) = \frac{2 k_3 J_4}{K_{RJ} + \sqrt{K_{RJ}^2 - 4(k_4 R - k_3) k_3 J_4}}, \tag{6.5} \]

with K_{RJ} = k_4 R − k_3 + k_4 R J_3 + k_3 J_4. In general, there are two kinds of discontinuous responses: the one-way switch and the toggle switch, shown in Figure 6.3 (b) and Figure 6.4 (b), respectively. One-way switches presumably play major roles in developmental processes characterized by a point of no return, as shown in Figure 6.3 (b); apoptosis is an example of a one-way switch. In a toggle switch, by contrast, if S is sufficiently decreased, the switch can go back to the off-state, as shown in Figure 6.4 (b). In the case of a single parameter, the discontinuous toggle switch is often described in terms of hysteresis or multistability. Examples include the lac operon in bacteria and cell cycle transitions driven by hysteresis (Sha et al. 2003, Pomerening et al. 2003, Han et al. 2005). A hysteresis model based on a positive feedback loop has also been constructed in a synthetic mammalian genetic network (Kramer and Fussenegger 2005). A comprehensive introduction to making continuous processes discontinuous, and reversible processes irreversible, by using graphical displays of the rate equations can be found in (Ferrell and Xiong 2001). In contrast to the case illustrated in Figure 6.2, here there are two stable equilibria in the state space for a specific parameter S. Different definitions of reversible and irreversible switches are also in use; e.g., in (Paladugu et al. 2006), toggle switches are defined to be reversible because it is possible to switch between the two stable states simply by changing the parameter values. That definition is based on bifurcation diagrams: when the two saddle-node bifurcation points appear in the positive region of a parameter, the switch is usually reversible.
Conversely, if one of the two bifurcation points appears in the negative region, or at a biologically infeasible parameter value, it is impossible to switch back to the other steady-state because of physical or biological constraints, in which case the switch is irreversible. It has been shown that cellular systems use bistability as a means to achieve irreversibility (Ferrell 2002). In this chapter, we mainly consider the dynamics and structures of irreversible switches with multiple steady-states, or equilibria, i.e., switches with characteristics similar to those illustrated in Figures 6.3 and 6.4.



Figure 6.4 Another irreversible switch, a toggle switch: (a) the mutual inhibition network; (b) the corresponding signal-response curve. The parameter values are k0 = 0, k1 = 0.05, k2 = 0.1, k2′ = 0.5, k3 = 1, k4 = 0.2, and J3 = J4 = 0.05 (from (Tyson et al. 2003))
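The hysteretic toggle behavior of Figure 6.4, discontinuous jumps combined with the ability to switch back off, can be seen by sweeping S up and down in (6.4)–(6.5). The sketch below is ours; it uses the caption's parameter values, writing k2p for the second rate constant k2′:

```python
import math

k0, k1, k2, k2p, k3, k4, J3, J4 = 0.0, 0.05, 0.1, 0.5, 1.0, 0.2, 0.05, 0.05

def ep(R):
    """E_P(R) from (6.5), the Goldbeter-Koshland form."""
    K = k4 * R - k3 + k4 * R * J3 + k3 * J4
    return 2.0 * k3 * J4 / (K + math.sqrt(K * K - 4.0 * (k4 * R - k3) * k3 * J4))

def settle(R, S, dt=0.01, steps=60000):
    for _ in range(steps):
        R += dt * (k0 + k1 * S - k2 * R - k2p * ep(R) * R)
    return R

low_mid  = settle(0.0, S=30.0)        # reached from below: low branch
up       = settle(low_mid, S=50.0)    # past the upper threshold: jumps high
high_mid = settle(up, S=30.0)         # back to S = 30: stays on the high branch
reset    = settle(high_mid, S=10.0)   # below the lower threshold: switches off
print(low_mid, high_mid, reset)
```

At the intermediate signal level the system is bistable (low_mid and high_mid differ), yet lowering S far enough resets it, which distinguishes a toggle from the one-way switch of Figure 6.3.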

6.2 Simple Switching Networks

6.2.1 Bistability in a Single Gene Network

A simple quantitative model describing the regulation of the PRM operator region of λ phage was developed in (Hasty et al. 2001). The system is a DNA plasmid consisting of the promoter region and the cI gene. The promoter region contains three operator sites known as OR1, OR2, and OR3. The gene cI expresses its protein CI, which in turn dimerizes and binds to the DNA as a TF. The binding can take place at one of the three binding sites. The CI dimer first binds to OR1, then to OR2, and finally to OR3, according to their binding affinities. Positive feedback arises because the downstream transcription is enhanced by the binding at OR2, while the binding at OR3 represses the transcription and constitutes a negative feedback loop (Hasty et al. 2001, Isaacs et al. 2003). However, there is almost no effect on the transcription when the CI dimer binds only to OR1, although OR1 has the highest binding affinity among the three operator sites. Figure 6.5 shows schematically the binding effects and priorities of the CI dimers for the operator sites of the PRM promoter. The operator sites can be fully occupied by three dimers or, equivalently, a total of six monomers. The binding reactions are fast and assumed to be in equilibrium with respect to the other reactions. Letting X1, X2, D, and Di denote the repressor (CI), the repressor dimer, the DNA promoter site, and the dimer binding to


Figure 6.5 The binding effects of the CI dimers on the operator sites of the PRM promoter with priorities OR1 (0)>OR2 (+)>OR3 (−)

the ORi site, respectively, the reactions take the following form:

\[ X_1 + X_1 \overset{K_1}{\rightleftharpoons} X_2, \tag{6.6} \]
\[ D + X_2 \overset{K_2}{\rightleftharpoons} D_1, \tag{6.7} \]
\[ D_1 + X_2 \overset{K_3}{\rightleftharpoons} D_2 D_1, \tag{6.8} \]
\[ D_2 D_1 + X_2 \overset{K_4}{\rightleftharpoons} D_3 D_2 D_1, \tag{6.9} \]

where K_3 = σ_1 K_2 and K_4 = σ_2 K_2; thus, σ_1 and σ_2 represent the binding affinities of OR2 and OR3 relative to the dimer–OR1 affinity. The slow irreversible reactions are the transcription and degradation processes. If no repressor is bound to the operator region, or a single repressor dimer is bound only to OR1, the transcription proceeds at a normal rate. If, however, a repressor dimer is bound to OR2, the transcription is enhanced. Moreover, when a CI dimer is also bound to OR3, the transcription is completely repressed or terminated. The reactions governing these processes are

\[ D \overset{k_t}{\longrightarrow} D + nX_1, \tag{6.10} \]
\[ D_1 \overset{k_t}{\longrightarrow} D_1 + nX_1, \tag{6.11} \]
\[ D_2 D_1 \overset{\alpha k_t}{\longrightarrow} D_2 D_1 + nX_1, \tag{6.12} \]
\[ D_3 D_2 D_1 \overset{0}{\longrightarrow} D_3 D_2 D_1 + nX_1, \tag{6.13} \]
\[ X_1 \overset{k_x}{\longrightarrow} \varnothing, \tag{6.14} \]


where α > 1 is the degree to which the transcription is enhanced by dimer occupation of OR2 and n is the number of proteins per mRNA transcript. Note that the transcription is assumed to be completely inhibited when the CI dimer is bound to OR3 , and thus we set the rate in (6.13) to be zero.

Figure 6.6 The steady-state concentration of the repressor as a function of the parameter γ (from (Hasty et al. 2001))

Under the QSS assumption, by applying the conservation law for DNA promoter sites and using rescaled dimensionless variables (Hasty et al. 2001), we obtain the following rate equation describing the evolution of the CI monomer concentration:

\[ \frac{dx}{dt} = \frac{m(1 + x^2 + \alpha\sigma_1 x^4)}{1 + x^2 + \sigma_1 x^4 + \sigma_1\sigma_2 x^6} - \gamma x, \tag{6.15} \]

where x is the concentration of the CI monomer. The first term on the right-hand side of (6.15) represents the production of repressor CI by transcription. Equation (6.15) contains only even powers of x because of the dimerization of CI and the subsequent binding to the promoter region. Recall that three dimers, or six equivalent monomers, can fully occupy the operator sites of the promoter; they correspond to the x^6 term in (6.15). The system (6.15) has the capacity to create bistability. The steady-state concentration of the repressor as a function of the parameter γ is shown in Figure 6.6. The bistability arises from the competition between the production of x, along with dimerization, and its degradation.

6.2.2 The Toggle Switch

A synthetic toggle switch was constructed on plasmids (Gardner et al. 2000). The network was designed from two promoters and their repressors, where


each promoter can be inhibited by the repressor transcribed by the other promoter, as shown in Figure 2.11. The model can be expressed by (2.180)–(2.181). Intuitively, one might anticipate two possible equilibria. Because LacI production is repressed by the CI protein, an initial high concentration of CI would lead to a state with high CI and low LacI concentrations. Conversely, because CI production is repressed by LacI, if LacI is initially present at a high concentration, a second stable state would entail high LacI and low CI concentrations. One counter-intuitive observation is that not all parameter combinations show bistability. The bistable and monostable situations, the bifurcation set, and the effects of the cooperativity parameters β and γ on the bifurcations are shown in Figure 6.7. The design of an operating toggle switch thus depends on the choice of parameters that can lead to bistability. The criteria include the use of strong and balanced constitutive promoters, effective transcriptional repression, formation of protein multimers, and suitable protein degradation rates. Reliable toggling between the two stable states can be induced experimentally through the transient introduction of either a chemical or a thermal stimulus (Gardner et al. 2000).

6.2.3 The MAPK Cascade Model

Now, we consider a higher-dimensional system, the MAPK cascade. The key features of the cascade are shown schematically in Figure 6.8. Active Mos (x) can activate MEK through phosphorylation of two residues, i.e., conversion of unphosphorylated y1 to monophosphorylated y2 and then to bisphosphorylated y3. Similarly, active MEK (y3) phosphorylates p42 MAPK (z1) on two residues, resulting in monophosphorylated z2 and then bisphosphorylated z3. Active p42 MAPK (z3) can then promote Mos synthesis, completing the closed positive feedback loop (Angeli et al. 2004).
The equations, following the MM expressions, are given by

\dot{x} = -\frac{V_2 x}{K_2 + x} + V_0 z_3 + V_1,  (6.16)

\dot{y}_1 = \frac{V_6 y_2}{K_6 + y_2} - \frac{V_3 x y_1}{K_3 + y_1},  (6.17)

\dot{y}_2 = \frac{V_3 x y_1}{K_3 + y_1} + \frac{V_5 y_3}{K_5 + y_3} - \frac{V_4 x y_2}{K_4 + y_2} - \frac{V_6 y_2}{K_6 + y_2},  (6.18)

\dot{y}_3 = \frac{V_4 x y_2}{K_4 + y_2} - \frac{V_5 y_3}{K_5 + y_3},  (6.19)

\dot{z}_1 = \frac{V_{10} z_2}{K_{10} + z_2} - \frac{V_7 y_3 z_1}{K_7 + z_1},  (6.20)

\dot{z}_2 = \frac{V_7 y_3 z_1}{K_7 + z_1} + \frac{V_9 z_3}{K_9 + z_3} - \frac{V_8 y_3 z_2}{K_8 + z_2} - \frac{V_{10} z_2}{K_{10} + z_2},  (6.21)

\dot{z}_3 = \frac{V_8 y_3 z_2}{K_8 + z_2} - \frac{V_9 z_3}{K_9 + z_3}.  (6.22)
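The cascade model can be integrated directly. The following is a minimal sketch, assuming illustrative rate constants (not the experimentally fitted Xenopus values used by Angeli et al. 2004); it also checks the conservation of total MEK and total p42 MAPK, which follows from the structure of (6.17)–(6.22).

```python
# Forward-Euler simulation of the MAPK cascade model (6.16)-(6.22).
# The rate constants below are illustrative placeholders, not the
# experimentally fitted parameter values of Angeli et al. (2004).

def mm(v, k, s):
    """Michaelis-Menten rate v*s/(k+s)."""
    return v * s / (k + s)

def mapk_rhs(x, y1, y2, y3, z1, z2, z3, p):
    dx = -mm(p["V2"], p["K2"], x) + p["V0"] * z3 + p["V1"]
    dy1 = mm(p["V6"], p["K6"], y2) - p["V3"] * x * y1 / (p["K3"] + y1)
    dy3 = p["V4"] * x * y2 / (p["K4"] + y2) - mm(p["V5"], p["K5"], y3)
    dy2 = -dy1 - dy3                      # y1 + y2 + y3 is conserved
    dz1 = mm(p["V10"], p["K10"], z2) - p["V7"] * y3 * z1 / (p["K7"] + z1)
    dz3 = p["V8"] * y3 * z2 / (p["K8"] + z2) - mm(p["V9"], p["K9"], z3)
    dz2 = -dz1 - dz3                      # z1 + z2 + z3 is conserved
    return dx, dy1, dy2, dy3, dz1, dz2, dz3

params = {k: 1.0 for k in
          ["V3", "V4", "V5", "V6", "V7", "V8", "V9", "V10",
           "K2", "K3", "K4", "K5", "K6", "K7", "K8", "K9", "K10"]}
params.update(V0=0.5, V1=0.05, V2=2.0)    # keeps Mos production bounded

state = [0.5, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0]   # x, y1..y3, z1..z3
dt = 0.01
for _ in range(40000):                    # integrate to t = 400
    derivs = mapk_rhs(*state, params)
    state = [s + dt * d for s, d in zip(state, derivs)]

x, y1, y2, y3, z1, z2, z3 = state
residual = max(abs(d) for d in mapk_rhs(*state, params))
print(state, residual)
```

Consistent with the monotone structure of the cascade, the trajectory settles onto an equilibrium (the residual of the right-hand side decays to nearly zero) while the phosphoform totals remain constant.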


Figure 6.7 Dynamical analysis of the toggle switch: (a) bistability with balanced promoter strength; (b) monostability with imbalanced promoter strength; (c) the bistable region. The lines mark the boundary between bistable and monostable regions. The slopes of the bifurcation lines are determined by the exponents β and γ for large α1 and α2 . (d) Reducing the cooperativity of repression (β and γ) reduces the size of the bistable regions. Bifurcation lines are illustrated for three different values of β and γ. The bistable region lies inside each pair of curves (from (Gardner et al. 2000))
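The dependence of the toggle switch on initial conditions can be illustrated numerically. The sketch below assumes the standard dimensionless toggle model (synthesis of each repressor inhibited by the other through a Hill term, with linear degradation); the parameter values are illustrative, not those underlying Figure 6.7.

```python
# Dimensionless toggle-switch model: each repressor inhibits the
# other's synthesis. All parameter values are illustrative.
ALPHA1 = ALPHA2 = 10.0   # effective synthesis rates
BETA = GAMMA = 2.0       # cooperativity (Hill) exponents

def step(u, v, dt=0.01):
    du = ALPHA1 / (1.0 + v ** BETA) - u
    dv = ALPHA2 / (1.0 + u ** GAMMA) - v
    return u + dt * du, v + dt * dv

def settle(u, v, t_end=50.0, dt=0.01):
    for _ in range(int(t_end / dt)):
        u, v = step(u, v, dt)
    return u, v

hi_u = settle(5.0, 1.0)   # repressor-1-rich start -> high-u state
hi_v = settle(1.0, 5.0)   # repressor-2-rich start -> high-v state
print(hi_u, hi_v)
```

Starting from the two opposite initial conditions, the same equations reach two different stable equilibria, which is precisely the bistability of panel (a).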

For appropriate parameter values, this model has two stable equilibria, which work as a switch. Details on how to detect the bistability and determine the parameter values, which are chosen to reproduce the experimentally determined abundance and kinetic data in Xenopus oocytes, are given in (Angeli et al. 2004). Excellent reviews of other switches can be found in (Ferrell 2002, Wolf and Arkin 2003). The response of individual reactions or biomolecular interactions with respect to the concentrations of the participating components is continuous and usually graded; however, the combination of individual reactions can give rise to sharp, switch-like behavior. Switching behavior can even be realized


Figure 6.8 Schematic depiction of the Mos-MEK-p42 MAPK cascade (from (Angeli et al. 2004))

by a single function, such as one with highly cooperative behavior as demonstrated in (6.15); examples of such functions are Hill-type functions with large exponents. The simple switching networks above indicate that network structure, cooperative interactions, feedback loops, and parameter values all play key roles in realizing switching behavior. A systems approach offers a better way to understand how the complexity of switches arises from the network structure and feedback loops.

6.3 Design of Switching Networks with Positive Loops

The switching mechanisms discussed include cross-repressive feedback with cooperativity, e.g., the toggle switch (Gardner et al. 2000), cooperative autoactivation of gene expression (Hasty et al. 2001), and more complex positive feedback systems, e.g., the MAPK cascade (Angeli et al. 2004). The requirements for multistability include some sort of feedback and some type of nonlinearity within the feedback circuits. In addition, the two aspects of the feedback loops must be properly balanced for the circuit to exhibit bistability, as shown in Figure 6.7. If either aspect is too strong or too weak, the circuit will be monostable rather than bistable (Ferrell 2002). Bistability can arise in circuits with positive loops (i.e., loops with an even number of negative interactions), whereas a circuit with a negative loop (an odd number of negative interactions) is expected to exhibit different properties.


Here, we describe a general theoretical framework for constructing a switching network with positive feedback loops (Kobayashi et al. 2003); this network can act as a toggle switch with multiple switching states. The construction method is based on the use of monotone dynamical systems (Smith 1995). The mathematical description of biomolecular networks comprises a system of differential equations derived by considering the synthesis and degradation of individual components. Assuming that there are n biochemical components in a network, i.e., proteins, mRNAs, and small molecules, we can generally describe a biomolecular model as follows:

\dot{x} = f(x_\tau, p) - Dx,  (6.23)

where x(t) ∈ R_+^n represents the concentrations of all components at time t ∈ R_+, and R_+ is the set of non-negative real numbers. x_τ = x(t − τ) ∈ X ⊂ R_+^n is the concentration of all components at time t − τ. When emphasizing the dependence of a solution on the initial data φ ∈ C_+ ≡ C([−τ, 0], R_+^n), we write x(t, φ) or x_t(φ). D = diag(d_1, ..., d_n) is an n × n diagonal matrix with positive real diagonal entries representing the degradation rates of the individual components, and f = (f_1, ..., f_n) : C_+ → R_+^n, with f_i indicating the synthesis rate of the ith component. In addition, we define N = {1, ..., n}, and τ_ij denotes the time delay from node j to node i.

A general design procedure based on monotone dynamical systems and positive feedback networks (PFNs) was developed (Kobayashi et al. 2003, Smith 1995); this procedure guarantees stable switching states without any non-equilibrium dynamics, thereby making theoretical analysis and design tractable even for large-scale biological systems with time delays. We first define interactions, feedback loops, and interaction graphs (IGs), and then define PFNs by applying the theoretical results of monotone dynamical systems.

Definition 6.1 (Types of interactions). Suppose that the concentration of the jth component affects the synthesis rate of the ith component with i ≠ j. Express f_i as f_i(x_τ) = f_i(x_1(t − τ_i1), ..., x_n(t − τ_in)) with f_i(x) = f_i(x_1(t), ..., x_n(t)), and define the type of interaction between the ith and the jth components, i.e., s_ij, as follows:

s_{ij} = \begin{cases} +1 &: \text{if } \partial f_i(x)/\partial x_j \,|_x > 0 \text{ for all } x \in X,\\ -1 &: \text{if } \partial f_i(x)/\partial x_j \,|_x < 0 \text{ for all } x \in X,\\ 0 &: \text{if } \partial f_i(x)/\partial x_j \,|_x = 0 \text{ for all } x \in X. \end{cases}  (6.24)

If s_ij = 1 (or −1), then the jth component positively (or negatively) affects the ith component with time delay τ_ij; this is called a positive (or negative) interaction.


For example, s_ij = 1 for f_i(x_τ) = x_j(t − τ_ij)/(1 + x_j(t − τ_ij)), and s_ij = −1 for f_i(x_τ) = 1/(1 + x_j(t − τ_ij)). If s_ij = s_ji = 0 for all x ∈ X, there is no interaction between the ith component and the jth component. Next, we define an interaction graph of the model (6.23). This not only enables us to understand the relations among the components intuitively but also gives us an intuitive interpretation of the theoretical results.

Definition 6.2 (Interaction graph). An interaction graph, IG(f), of a biomolecular network defined by (6.23) is a directed graph whose nodes represent the individual components in the network and whose edges, with additional parameter sets, represent the interactions between the nodes. When s_ij ≠ 0 and τ_ij ≥ 0, that is, when the jth component affects the synthesis rate of the ith component with time delay τ_ij, the graph has an edge, e_ij, directed from the jth node to the ith node with the additional parameter set (s_ij, τ_ij). It should be noted that an edge from the jth node to the ith node is subscripted oppositely to the convention in graph theory. In other words, an edge e_ij in an interaction graph of (6.23) is an edge from the jth node to the ith node, which is related to the derivative of f_i with respect to x_j, i.e., ∂f_i(x_τ)/∂x_j(t − τ_ij) or ∂f_i(x)/∂x_j.

Definition 6.3 (Feedback loops and their types). A path from the ith node to itself in an interaction graph, i.e., p(i, i) = (i = p_1 → p_2 → · · · → p_l = i), is said to be a feedback loop or cycle; it is a self-feedback loop when l is 2. Here, p_k represents the kth node in the path p(i, i). This feedback loop is said to be positive (or negative) if \prod_{m=1}^{l-1} s_{p_{m+1} p_m} = 1 (or −1). The network (6.23) is called a positive feedback network if its interaction graph IG(f) has only positive feedback loops, except possibly self-feedback loops. Intuitive descriptions of interactions, paths, and loops of an interaction graph are also given in Chapter 3.
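The interaction type of Definition 6.1 can also be checked numerically. The sketch below estimates the sign of ∂f_i/∂x_j by central finite differences over a sample grid; this is a heuristic stand-in for proving the sign analytically over all of X, and the helper name and grid are my own choices, not from the text.

```python
# Numerically classify the interaction type s_ij of Definition 6.1 by
# sampling the sign of the partial derivative over a grid (a heuristic
# stand-in for proving the sign holds for all x in X).

def interaction_sign(f, samples, h=1e-6):
    signs = set()
    for x in samples:
        d = (f(x + h) - f(x - h)) / (2 * h)   # central difference
        signs.add(+1 if d > 0 else (-1 if d < 0 else 0))
    # a single consistent sign over the grid; None if the sign varies
    return signs.pop() if len(signs) == 1 else None

grid = [0.1 * k for k in range(1, 100)]                   # sample points in X
act = interaction_sign(lambda xj: xj / (1 + xj), grid)    # activation example
rep = interaction_sign(lambda xj: 1 / (1 + xj), grid)     # repression example
print(act, rep)
```

The two example kinetics from the text indeed come out as s_ij = +1 and s_ij = −1, respectively.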
An example of a PFN, expressed by an interaction graph with only positive feedback loops, is shown in Figure 6.9. It is worth noting that a positive feedback loop may include negative interaction edges. Further, for two arbitrary nodes i and j in an interaction graph, there may exist multiple paths from the ith node to the jth node. Although different paths from the ith to the jth node may have different signs in a general interaction graph, all loops in a PFN must have the same sign, i.e., all loops must be positive. For example, in Figure 6.9 there are three positive loops through node 1, i.e., 1 →(+) 2 →(+) 1, 1 →(+) 2 →(+) 4 →(−) 5 →(−) 1, and 1 →(+) 2 →(+) 4 →(−) 6 →(−) 3 →(+) 1. There is no restriction on the self-feedback loops in PFNs, which may be either positive or negative. The associated ODEs of (6.23) are obtained by ignoring all time delays, i.e., by setting τ_ij = 0 for all i and j:

\dot{x} = f(x, p) - Dx.  (6.25)

The following theorems are derived based on monotone dynamical systems (Smith 1995), (Kobayashi et al. 2003).

Figure 6.9 An example of a PFN. The signs + and − on an edge indicate s = 1 and −1, respectively. There are three loops, i.e., the loop connecting nodes (1,2,1), the loop connecting nodes (1, 2, 4, 5, 1), and the loop connecting nodes (1, 2, 4, 6, 3, 1), which are all positive (from (Wang et al. 2008))
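The PFN condition of Definition 6.3 can be verified by enumerating the simple cycles of the signed interaction graph and checking every sign product. A minimal sketch, using the edges of Figure 6.9 (edges are written here as (source, target, sign); recall that the text's e_ij denotes an edge from node j to node i):

```python
# Enumerate the simple cycles of a signed digraph and verify that every
# loop has a positive sign product (the PFN condition of Definition 6.3).
# Keys are (source, target) pairs; the text's e_ij runs from node j to i.

EDGES = {  # the graph of Figure 6.9
    (1, 2): +1, (2, 1): +1, (2, 4): +1, (4, 5): -1,
    (5, 1): -1, (4, 6): -1, (6, 3): -1, (3, 1): +1,
}

def simple_cycles(edges):
    adj = {}
    for (u, v) in edges:
        adj.setdefault(u, []).append(v)
    cycles = []
    def dfs(start, node, path):
        for nxt in adj.get(node, []):
            if nxt == start:
                cycles.append(path[:])
            elif nxt > start and nxt not in path:
                dfs(start, nxt, path + [nxt])
    for s in sorted(adj):   # each cycle is rooted at its smallest node
        dfs(s, s, [s])
    return cycles

def loop_sign(cycle, edges):
    sign = 1
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        sign *= edges[(a, b)]
    return sign

loops = simple_cycles(EDGES)
print(loops, [loop_sign(c, EDGES) for c in loops])
```

For this graph, the search finds exactly the three loops listed in the caption, and each has sign +1, so the network qualifies as a PFN.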

Theorem 6.4. Suppose that a biomolecular network has only positive feedback loops, except self-feedback loops, and that its dynamics are described by (6.23). Then, for almost all initial conditions φ ∈ C_+, the solution x_t(φ) converges to an equilibrium.

This theorem indicates that a biomolecular network with only positive feedback loops has no dynamical attractors other than equilibria. When designing a switching network, it is important to ensure that the designed switch does not show any dynamics other than asymptotic convergence to stable equilibria. However, it is generally not easy to guarantee such stable, convergent behavior, even for a small biomolecular network with only a few components and without time delays, because of the nonlinearity of the system. As indicated by Theorem 6.4, if we design a switching network with only positive feedback loops, the network is guaranteed to converge to stable equilibria in spite of the nonlinearity, size, and delays of the network. Such a property significantly reduces the complexity of designing and analyzing switching networks. It should be noted that this theorem does not exclude the existence of unstable non-equilibrium solutions such as unstable limit cycles. However, such unstable non-equilibrium solutions usually cannot be observed because of intracellular noise. In this sense, the theorem asserts that a biomolecular network composed of only positive feedback loops inevitably converges to stable equilibria. In addition, it is worth noting that this theorem can be extended not only to networks with multiple time delays but also to some networks with non-positive feedback loops.
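The convergence asserted by Theorem 6.4, and its insensitivity to delays, can be illustrated on a small delayed PFN. The sketch below is a two-node mutual-activation loop with Hill-type synthesis and illustrative parameters (my own choice, not an example from the text), integrated as a delay differential equation with a constant pre-history.

```python
# A two-node mutual-activation PFN with a transport delay tau:
#   x1'(t) = V*h(x2(t - tau)) - x1(t),  x2'(t) = V*h(x1(t - tau)) - x2(t),
# with h(s) = s^2/(1 + s^2). Parameter values are illustrative. Consistent
# with Theorems 6.4-6.5, trajectories converge to an equilibrium, and the
# equilibrium reached does not depend on the delay.

V = 3.0

def hill(s):
    return s * s / (1.0 + s * s)

def simulate(tau, x0=2.0, t_end=60.0, dt=0.01):
    lag = int(round(tau / dt))
    h1 = [x0] * (lag + 1)        # constant pre-history x(t) = x0, t <= 0
    h2 = [x0] * (lag + 1)
    for _ in range(int(t_end / dt)):
        x1, x2 = h1[-1], h2[-1]
        x1d, x2d = h1[-1 - lag], h2[-1 - lag]   # delayed values
        h1.append(x1 + dt * (V * hill(x2d) - x1))
        h2.append(x2 + dt * (V * hill(x1d) - x2))
    return h1[-1], h2[-1]

no_delay = simulate(tau=0.0)
delayed = simulate(tau=3.0)
print(no_delay, delayed)
```

For these parameters the symmetric equilibria solve x(x² − 3x + 1) = 0, and from the chosen initial history both the delayed and undelayed systems settle at the stable state x = (3 + √5)/2.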


Theorem 6.5. Suppose that a biomolecular network has only positive feedback loops except self-feedback loops, that its dynamics are described by (6.23), and that there are no time delays in the self-feedback loops. Then, the time delays have no effect on the stabilities of the equilibria.

Note that (6.23) and (6.25) have identical equilibria. Theorem 6.5 indicates that the time delays have no effect on the stabilities of the equilibria. In other words, instead of the complicated FDEs (6.23), we can use the associated ODEs obtained by letting all τ = 0 in (6.23), i.e., (6.25), to design and analyze switching networks with only positive feedback loops. Note that the self-feedback loops may not contain any delays. By using the ODEs instead of the FDEs, we can significantly reduce the complexity of the problem and enable the design of large-scale complex biomolecular networks. The theorem allows us to examine the equilibria and their stability by using the much simpler associated ODEs; however, it is still difficult to analyze the nonlinear ODEs, especially high-dimensional ones. To cope with this problem, a reduction method was developed to further simplify the complex ODEs to lower-dimensional ones with the same equilibria and stability as the original system (Kobayashi et al. 2003). The reduction of dimensionality is carried out by considering the pseudo-steady-state and making some assumptions on the feedback loops. This reduction process differs from the conventional methods that reduce complexity by exploiting multiple time scales and keeping only the slow variables; in other words, no approximations or assumptions about time scales are made. Note that the transient dynamics may be quite different even though (6.23) and (6.25) have the same equilibria with the same stability. Next, we consider only the ODEs (6.25).

Theorem 6.6. Consider (6.25) and its interaction graph IG(f). Assume that the ith node does not have any self-feedback loop, i.e., does not have any edge e_ii.
By removing \dot{x}_i = f_i(x) - d_i x_i and substituting x_i = f_i(x)/d_i into the remaining equations in (6.25), we obtain the (n − 1)-dimensional differential equations

\dot{x}' = f'(x') - D'x',  (6.26)

where

x' = (x_1, ..., x_{i-1}, x_{i+1}, ..., x_n),  (6.27)

f' = (f_1, ..., f_{i-1}, f_{i+1}, ..., f_n),  (6.28)

D' = diag(d_1, ..., d_{i-1}, d_{i+1}, ..., d_n).  (6.29)

Then, there is a one-to-one correspondence between the equilibria of (6.26) and those of (6.25). In addition, their stabilities are also the same. Theorem 6.6 shows a procedure to reduce the dimensionality of a biomolecular network with only positive feedback loops. According to this theorem, the


associated ODEs can be reduced stepwise to a lower-dimensional network until all the remaining nodes in the interaction graph of the reduced network have self-feedback edges. In other words, according to Theorem 6.6, all nodes without any self-feedback loop can be eliminated one by one, as illustrated in Figure 6.10.
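The equilibrium-preserving reduction of Theorem 6.6 can be checked numerically on a small example. The sketch below uses a two-node cross-repression network (illustrative Hill functions and parameters, with d_1 = d_2 = 1) in which node 2 has no self-feedback loop; eliminating it gives a scalar system whose zeros are exactly the equilibria of the full system.

```python
# Theorem 6.6 on a two-node cross-repression network:
#   x1' = f1(x2) - x1,   x2' = f2(x1) - x2   (node 2 has no self-loop).
# Substituting x2 = f2(x1) yields the reduced scalar system (6.26).
# The functions and parameters are illustrative.
A = B = 10.0

def f1(x2):          # synthesis of component 1, repressed by component 2
    return A / (1.0 + x2 * x2)

def f2(x1):          # synthesis of component 2, repressed by component 1
    return B / (1.0 + x1 * x1)

def reduced_rhs(x1):
    return f1(f2(x1)) - x1        # right-hand side of the reduced system

def find_roots(g, lo, hi, n=12000):
    """Bracket roots of g on [lo, hi] by sign changes, then bisect."""
    roots = []
    xs = [lo + (hi - lo) * k / n for k in range(n + 1)]
    for a, b in zip(xs, xs[1:]):
        if g(a) * g(b) <= 0.0:
            for _ in range(60):
                m = 0.5 * (a + b)
                if g(a) * g(m) <= 0.0:
                    b = m
                else:
                    a = m
            r = 0.5 * (a + b)
            if not roots or abs(r - roots[-1]) > 1e-6:
                roots.append(r)
    return roots

eq_reduced = find_roots(reduced_rhs, 0.0, 12.0)
# each reduced equilibrium lifts to an equilibrium of the full system
eq_full = [(r, f2(r)) for r in eq_reduced]
print(eq_reduced)
```

The reduced scalar equation has three zeros (two stable states and the intermediate unstable one), and pairing each with x2 = f2(x1) satisfies both equilibrium conditions of the full two-dimensional system, as the theorem guarantees.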

(a )

(b )

Figure 6.10 Schematic diagram of the reduction procedure. The signs accompanying the arrows indicate the types of interactions: (a) the case that both of the signs s12 and s21 are positive; (b) the case that both of the signs are negative (from (Kobayashi et al. 2003))

The reduction procedure is represented by the following operations on the interaction graph. First, we choose any node that has no self-feedback loop as the target node. In Figures 6.10 (a) and (b), the 2nd node is chosen as the target node. Then, for every node with an edge directed towards the target node, we create new edges from that node to all the nodes reached by edges leaving the target node. The sign of each new edge is the sign of the path connecting the same two nodes through the target node in the original graph. In Figure 6.10 (a) and Figure


6.10 (b), there are two edges entering the removed 2nd node, from the 1st and 5th nodes, and three edges leaving the 2nd node, towards the 1st, 6th, and 7th nodes. Thus, new edges from the 1st node to the 1st, 6th, and 7th nodes, and from the 5th node to the 1st, 6th, and 7th nodes, are created. Here, the edge that begins and terminates at the 1st node is a positive self-feedback loop. In Figure 6.10 (a), because the sign of the edge from the 1st node to the 2nd node in the original graph is positive, the new edges from the 1st node to the 1st, 6th, and 7th nodes are positive, positive, and negative, respectively. On the other hand, in Figure 6.10 (b), because the sign of the edge from the 1st node to the 2nd node is negative, the new edges from the 1st node to the 1st, 6th, and 7th nodes are positive, negative, and positive, respectively. Continuing this process until all the remaining nodes in the interaction graph of the reduced network have self-feedback edges, we eventually obtain low-dimensional ODEs that are easier to analyze than the original high-dimensional ones. By this procedure, a biomolecular network with only positive feedback loops can be reduced to a minimal model in terms of the number of nodes; the number of nodes in this minimal model is at most the number of loops in the original network. It should be noted that the associated ODEs and the nodes in the minimal network can differ depending on the reduction procedure. For example, if we choose the 1st node as the first target node, we obtain a different minimal network with different associated ODEs. However, according to Theorem 6.6, different reduction procedures have no effect on the equilibria and their stabilities, although they can result in different reduced systems with different transient dynamics. A detailed reduction procedure is shown in Figure 6.11.
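The edge-merging step can be written directly as an operation on the signed graph. A minimal sketch reproducing the case of Figure 6.10 (a) follows; the sign of the edge from the 5th node into the target is not stated in the text, so a positive sign is assumed here for illustration.

```python
# Eliminate a target node with no self-loop from a signed interaction
# graph: every path j -> target -> k is replaced by a direct edge
# j -> k whose sign is the product of the two edge signs.

def eliminate(edges, target):
    incoming = [(j, s) for (j, k), s in edges.items() if k == target]
    outgoing = [(k, s) for (t, k), s in edges.items() if t == target]
    reduced = {e: s for e, s in edges.items() if target not in e}
    for j, s_in in incoming:
        for k, s_out in outgoing:
            reduced[(j, k)] = s_in * s_out  # assumes no parallel-edge clash
    return reduced

# Figure 6.10 (a): edges into node 2 from nodes 1 and 5, edges out of
# node 2 towards nodes 1, 6 and 7; the 5 -> 2 sign is an assumed value.
g = {(1, 2): +1, (5, 2): +1, (2, 1): +1, (2, 6): +1, (2, 7): -1}
r = eliminate(g, 2)
print(r)
```

After eliminating node 2, the new edges from node 1 to nodes 1, 6, and 7 carry signs +, +, and −, matching the description in the text, and the edge (1, 1) is the positive self-feedback loop created by the merge.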
By applying this procedure, a node in IG(f) without any self-feedback loop can be eliminated, and the edges entering and leaving this node are merged. We thus finally obtain lower-dimensional ODEs and a corresponding interaction graph with fewer nodes than the original graph. For instance, the four-node network in Figure 6.11 is eventually reduced to a one-node network with two positive self-feedback loops; the network obtained is a minimal network of the four-node network. Theorems 6.4–6.6 are important for the design of switching networks and indicate that a PFN is ideally suited to a switching system. These theoretical results also demonstrate that a PFN, or a switching network satisfying the conditions of Theorems 6.4–6.6, is robust to some uncertainties, e.g., time delays and perturbations, because the stability of the equilibria is not qualitatively affected by delays and there are no attractors other than the stable equilibria that represent the switching states. The above results show how to reduce the dimensionality of a biomolecular network to simplify the analysis and the computation of the associated ODEs. However, when we design a switching network, it is convenient to start with a minimal network satisfying all the requirements of PFNs and then to extend it to a biologically plausible network of higher dimension. In


Figure 6.11 Schematic illustration of the reduction procedure. The original network with four components is reduced stepwise to a minimal network with one component and two self-feedback loops. First, the 4th node is removed and the edges e43 and e14 are merged. Then, the 2nd and the 3rd nodes are successively removed. Finally, we obtain the minimal network with only the 1st node and two positive self-feedback loops (from (Kobayashi et al. 2003))

other words, we need to reverse the previous procedure by increasing the dimension of the network. The following theorem shows how to extend a switching network while preserving its equilibria and their stabilities.

Theorem 6.7. Let the transformation from (6.26) to (6.25) be

x_i = f_i(x')/d_i \;\Rightarrow\; \dot{x}_i = f_i(x) - d_i x_i.  (6.30)

Assume that the networks described by (6.25) and (6.26) have only positive feedback loops except self-feedback loops and that the orbits of (6.25) and (6.26) have compact closure in their state spaces. Then, (6.25) and (6.26) have the same equilibria with identical stabilities.

The proofs of all the theorems above require several conditions to be satisfied; in fact, these conditions can always be satisfied in biological systems (Kobayashi et al. 2003). Based on Theorem 6.7, the procedure to design a switching network by extending a minimal network is as follows:

1. Design a minimal switching network satisfying the requirements for configuration, equilibria, and their stabilities, even if such a network itself may not be plausible from a biological viewpoint.


2. Extend the minimal network by successively adding nodes one at a time that satisfy the assumptions required for the minimal network in order to make the network more plausible and easier to implement in experiments. According to Theorem 6.7, the extended network preserves the static properties of the system in terms of equilibria and their stabilities.

Figure 6.12 Schematic diagram of the extension procedure. The original minimal network with only two nodes is extended by adding nodes to obtain a biologically plausible network. First, the 3rd and the 4th nodes are added, then the 5th and the 6th nodes are added (from (Kobayashi et al. 2003))

The above procedure is illustrated schematically in Figure 6.12 and can be viewed as the reverse of the reduction procedure. Starting with an abstract minimal switching network, we obtain a biologically plausible network by adding nodes and edges to the interaction graph. Note that we do not need to introduce time delays, because the systems with and without them have identical equilibria and stabilities. To demonstrate the above procedure, a genetic switch is designed as follows. First, an abstract minimal switching network with two nodes and three positive feedback loops is constructed, as shown in Figure 6.13 (a). Simple algebraic analysis shows that it can have three or four equilibria. Starting from the minimal network and applying the extension procedure, a realistic network with three nodes can be constructed, as shown in Figure 6.13 (b). Next, three different proteins are selected to represent the three nodes, as shown in Figure 6.13 (c). The extended network is now composed of only three proteins. We extend it further by incorporating the corresponding mRNAs and eventually obtain a biologically plausible network, as shown in Figure 6.13 (d).


Figure 6.13 Synthetic genetic switching network designed by the proposed procedure. (a) An abstract minimal network with two components and three feedback loops. Each node has a positive self-feedback loop, and the interactions among the nodes form a positive feedback loop; (b) an extension of (a); the 3rd node is added in order to replace the positive self-feedback loop of the 1st node in (a) with mutual negative interactions between the 1st and the 3rd nodes. (c) A realization of the extended network (b); proteins LacI, CI, and TetR are adopted to represent the 1st, 2nd, and 3rd nodes in (b), respectively. The broken line indicates the feedback loop with proteins LacI and TetR, which realizes a toggle switch. The bold line indicates a self-feedback loop of CI. (d) A further extension of (c); the extension includes the mRNAs corresponding to the proteins (from (Kobayashi et al. 2003))

The implementation of Figure 6.13 (d) is shown in Figure 6.14, where the genes lacI, tetR, and cI and the promoters PLtetO−1, Ptrc−2, and PRM are adopted. The genes lacI and tetR with the promoters PLtetO−1 and Ptrc−2 are artificially engineered and were used to construct a two-state toggle switch (Gardner et al. 2000). On the other hand, the wild-type PRM promoter has three binding sites, i.e., OR1, OR2, and OR3. In the model, the binding site OR3 of the PRM promoter is assumed to be artificially altered or mutated so that CI proteins cannot bind to it, as shown in Figure 6.15. With such a mutated P∗RM, the


transcription rate is monotone with respect to the CI concentration; thus, the conditions of monotone dynamical systems are satisfied.

Figure 6.14 A model for the implementation of the switching network with two nodes and three feedback loops (Figure 6.13) that includes the genes lacI, tetR, and cI and the promoters PLtetO−1, Ptrc−2, and PRM, where the mRNAs of lacI, tetR, and cI are omitted for simplicity. The signs indicate the types of interactions among the proteins LacI, TetR, and CI. (tetR1, tetR2) and (cI1, cI2) are identical to the tetR and cI genes but have different promoters and ribosome binding sites (from (Kobayashi et al. 2003))

The detailed eight-dimensional functional differential equations for the real network shown in Figure 6.14 can be found in (Kobayashi et al. 2003). By applying the reduction procedure, they can be reduced to two-dimensional ODEs while preserving the equilibria and their stabilities. Numerical simulations show that the switching network can have three or four stable equilibria, depending on the parameter values. Figure 6.16 shows the case in which the network has four stable equilibria, namely, (OFF, OFF), (ON, OFF), (OFF, ON), and (ON, ON); this represents a four-state switch. Note that operons are used to wire the individual genes into a network, as shown in Figure 6.14. An operon is made up of several structural genes arranged under a common promoter and regulated by a common operator. Operons exist primarily in prokaryotes, although they also exist in some eukaryotes, including nematodes. Therefore, when tuning the parameters, we need to consider the inefficiency of polycistronic transcription for the second gene downstream of the promoter (whose expression may be as low as 1/100 that of the first gene). Clearly, the two-state toggle switch is also embedded in this


Figure 6.15 The mutated PRM promoter and its binding sites with binding priorities OR1 (0)>OR2 (+). The binding site OR3 of the PRM promoter is mutated and hence CI proteins cannot bind to it

four-state switch. In fact, we can easily show that the toggle switch, as well as other switches described in this chapter, satisfies the conditions of Theorems 6.4–6.7, and therefore is robust to the uncertain delays and perturbations. See (Kobayashi et al. 2003) for more details on the proof of the theorems, differential equations, and the parameter values.

6.4 Detection of Multistability

Biomolecular networks with only positive feedback loops have no dynamical attractors such as oscillatory behavior, which makes them suitable as switching networks. However, the detection of multistability in such networks is not a trivial problem because of nonlinearity. Recently, a simple graphical method was developed (Angeli et al. 2004, Angeli and Sontag 2004a). For networks with an arbitrary number of nodes and only positive feedback loops, the stability properties can be deduced mathematically by the open-loop approach. When an open-loop network is monotone and possesses a sigmoidal characteristic, the network is guaranteed to be bistable for some range of feedback strengths. Before introducing the general theoretical framework, we first present a simple network with two proteins, the Cdc2-cyclin B complex and Wee1, coupled by a mutually inhibitory feedback loop, as shown in Figure 6.17 (a), and show how to analyze its dynamical behavior. The two mutually inhibitory proteins form a positive feedback loop. The equations for this model (Angeli et al. 2004) are as follows:


Figure 6.16 The switching network has four stable equilibria. The broken and solid lines are the nullclines of the reduced two-dimensional ODEs (not shown) (from (Kobayashi et al. 2003))

\dot{x}_1 = \alpha_1 (1 - x_1) - \frac{\beta_1 x_1 (\nu y_1)^{\gamma_1}}{K_1 + (\nu y_1)^{\gamma_1}},  (6.31)

\dot{y}_1 = \alpha_2 (1 - y_1) - \frac{\beta_2 y_1 x_1^{\gamma_2}}{K_2 + x_1^{\gamma_2}},  (6.32)

where α_{1,2} and β_{1,2} are rate constants, K_{1,2} are the MM constants, γ_{1,2} are the Hill coefficients, and ν is a coefficient indicating the strength of the influence of Wee1 on Cdc2-cyclin B. x_1 and y_1 represent the concentrations of Cdc2-cyclin B and Wee1, respectively. Clearly, the system (6.31)–(6.32) is a monotone dynamical system and also a PFN satisfying the conditions of Theorems 6.4–6.7. Therefore, there are no dynamical attractors except stable equilibria. When appropriate parameter values are chosen, the system exhibits two stable equilibria, as shown by the two small circles in Figure 6.17 (b). The approach is based on viewing (6.31)–(6.32) as the feedback closure of the open-loop system

\dot{x}_1 = \alpha_1 (1 - x_1) - \frac{\beta_1 x_1 \omega^{\gamma_1}}{K_1 + \omega^{\gamma_1}},  (6.33)

\dot{y}_1 = \alpha_2 (1 - y_1) - \frac{\beta_2 y_1 x_1^{\gamma_2}}{K_2 + x_1^{\gamma_2}},  (6.34)

Figure 6.17 Network with two nodes and a mutually inhibitory feedback loop: (a) schematic description of the network; (b) phase plane with bistability. Parameter values are α1 = α2 = 1, β1 = 200, β2 = 10, γ1 = γ2 = 4, K1 = 4, and K2 = 1 (from (Angeli et al. 2004))
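The bistability of Figure 6.17 (b) can be reproduced by direct integration of (6.31)–(6.32) with the parameter values from the caption; the feedback strength is not given there, so ν = 1 is assumed (a value inside the bistable range identified in Figure 6.18).

```python
# Euler integration of the Cdc2-cyclin B / Wee1 model (6.31)-(6.32)
# with the parameter values of Figure 6.17; the feedback strength
# nu = 1 is an assumed value inside the bistable range.
A1 = A2 = 1.0
B1, B2 = 200.0, 10.0
G1 = G2 = 4.0
K1, K2 = 4.0, 1.0
NU = 1.0

def rhs(x, y):
    dx = A1 * (1 - x) - B1 * x * (NU * y) ** G1 / (K1 + (NU * y) ** G1)
    dy = A2 * (1 - y) - B2 * y * x ** G2 / (K2 + x ** G2)
    return dx, dy

def settle(x, y, t_end=100.0, dt=0.005):
    for _ in range(int(t_end / dt)):
        dx, dy = rhs(x, y)
        x, y = x + dt * dx, y + dt * dy
    return x, y

high_cdc2 = settle(1.0, 0.0)   # starts with active Cdc2-cyclin B
high_wee1 = settle(0.0, 1.0)   # starts with active Wee1
print(high_cdc2, high_wee1)
```

The two initial conditions settle onto the two distinct stable equilibria marked by the small circles in the phase plane: one with high Cdc2-cyclin B activity and one with high Wee1 activity.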

where ω is an input. Let η = y_1 be the output of (6.33)–(6.34) with respect to the input ω. By breaking the feedback loop at the inhibition of Cdc2 (x_1) by Wee1 (y_1) and by considering the effect of Wee1 (y_1) on Cdc2 (x_1) as an input signal ω (see Figures 6.18 (a)–(c)), the system behavior can easily be analyzed. For the open-loop system, the input ω can be treated as a parameter rather than a state variable, as it is in the full original system. Therefore, the behavior of the output as a function of the input can be determined. Subsequently, by letting η = ω/ν, the original system can be recovered and its behavior determined. Simple algebraic analysis shows that the open-loop system (6.33)–(6.34) has a monostable steady state for any constant input ω, and thus the system has a well-defined steady-state input/output characteristic. In fact, for any input ω, the steady-state input/output characteristic of (6.33)–(6.34), i.e., y_1 = η = k_η(ω), can easily be obtained as

k_\eta(\omega) = \eta = \frac{\alpha_2 \left( K_2 + \left( \alpha_1 (K_1 + \omega^{\gamma_1}) / (\alpha_1 K_1 + \alpha_1 \omega^{\gamma_1} + \beta_1 \omega^{\gamma_1}) \right)^{\gamma_2} \right)}{\alpha_2 K_2 + \alpha_\beta \left( \alpha_1 (K_1 + \omega^{\gamma_1}) / (\alpha_1 K_1 + \alpha_1 \omega^{\gamma_1} + \beta_1 \omega^{\gamma_1}) \right)^{\gamma_2}},  (6.35)

where α_β = α_2 + β_2. This function has a single value for every ω, i.e., the mapping is single-valued, as shown in Figure 6.18 (d); therefore, the open-loop system has a well-defined steady-state input/output characteristic. The equilibria can be detected by plotting together the characteristic k_η, which represents the steady-state output η as a function of the constant input ω, and the diagonal η = ω. Algebraically, this amounts


Figure 6.18 Mathematical analysis of the Cdc2-cyclin B/Wee1 system carried out by breaking the feedback loop. Schematic change of a feedback system before (a) and after (b) breaking the feedback loop, where ω is the input of the open-loop system and η is the output. (c) The incidence graph. (d) The steady-state input/output characteristic curve kη as a function of ω with the same parameter values as those in Figure 6.17. The solid curve represents η as a function of ω for unitary feedback, i.e., η = ω. There are three intersection points (I, II, and III), which represent two stable equilibria (I and III) and an unstable one (II). The solid line and dashed lines are η = ω/ν for different values of ν. The dashed lines represent the two tangent lines of the characteristic curve at ν ≈ 0.83 and ν ≈ 1.8, which are the two bifurcation values. When ν is between the two bifurcation values, the system shows bistability. (e) Bifurcation diagram showing bistability when the feedback strength ν is between 0.83 and 1.8 (from (Angeli et al. 2004))

to searching for fixed points of the mapping k_η. In other words, the steady-state input/output characteristic represents the equilibrium curve of the open system as a function of ω. By letting the output be the input, i.e., η = ω, we close the open system and recover the original one. The intersection points of the two curves, i.e., k_η(ω) and η = ω, are exactly the equilibria of the closed system. We find that there are three intersection points between these graphs, which we refer to as points I, II, and III, as shown in Figure 6.18 (d).
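The fixed-point search can be carried out numerically: evaluating (6.35) on a grid and counting sign changes of k_η(ω) − ω recovers the three intersection points. A sketch with the parameter values of Figure 6.17 and unitary feedback (ν = 1):

```python
# Count the intersections of the characteristic k_eta(omega) of (6.35)
# with the diagonal eta = omega (unitary feedback, nu = 1), using the
# parameter values of Figure 6.17.
A1 = A2 = 1.0
B1, B2 = 200.0, 10.0
G1 = G2 = 4.0
K1, K2 = 4.0, 1.0
AB = A2 + B2                            # alpha_beta

def k_eta(w):
    # steady-state x1 of (6.33) for constant input w
    xbar = A1 * (K1 + w ** G1) / (A1 * K1 + A1 * w ** G1 + B1 * w ** G1)
    # steady-state y1 of (6.34), i.e., the output eta
    return A2 * (K2 + xbar ** G2) / (A2 * K2 + AB * xbar ** G2)

ws = [0.002 * k for k in range(601)]    # omega in [0, 1.2]
diffs = [k_eta(w) - w for w in ws]
crossings = sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)
print(crossings)
```

The three sign changes correspond to the intersection points I, II, and III of Figure 6.18 (d).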


The stability can be determined from the slope of the curve k_η at each equilibrium, according to Theorem 4.6: when the slope is greater than unity, the equilibrium is unstable; when the slope is less than unity, the equilibrium is stable. We see from the figure that this slope is less than 1 at I and III and greater than 1 at II. Therefore, the system (6.31)–(6.32) has two stable equilibria, I and III, and an unstable equilibrium, II. Now, we describe the general theoretical framework, which holds for a general class of positive feedback biomolecular networks. The approach can be directly applied to detect multistability and bifurcations even for large-scale networks. The results in (Kobayashi et al. 2003) indicate that time delays have no effect on the stability and the number of equilibria in PFNs. Therefore, we consider only the following ODEs without delays:

\dot{x} = f(x, \omega),  (6.36)

which describe the evolution of a set of variables x(t) = (x1(t), ..., xn(t)) with an external input ω; ω is generally a scalar, although the approach can be extended to vector inputs (Angeli and Sontag 2003, Angeli and Sontag 2004a). The output η = h(x) is a function of the state variables. The functions f and h are assumed to be differentiable in all of their arguments; then, by the implicit function theorem, at an equilibrium, i.e., f(x(ω), ω) = 0, x(ω) can be expressed as a function of ω provided that the Jacobian fx is nonsingular at the equilibrium. To describe the methodology, we introduce an incidence graph, which is similar to the interaction graph except for the presence of two extra input and output nodes. An incidence graph has n + 2 nodes, labeled ω, η, and xi, i = 1, ..., n. When the input and output nodes are treated like the other nodes, it reduces to an interaction graph. The sign of a path in an incidence graph is defined analogously to that in an interaction graph. For example, consider the following system,

ẋ1 = −x1 + 1/(1 + ω),   (6.37)
ẋ2 = −x1 − x2 + x3 ± ω,   (6.38)
ẋ3 = −x1 + x2 − x3.   (6.39)

Its output is η = x3 − x1. Its incidence graph is shown in Figure 6.19 (a). Two critical necessary conditions must be satisfied before adopting the framework: positive monotonicity and a well-defined steady-state input/output characteristic. Here, monotonicity implies that no negative feedback loop can arise in the incidence graph or the interaction graph, even when the system is closed under positive feedback. The well-defined input/output characteristic implies that the open-loop system has a monostable steady-state response to constant inputs. Verifying monotonicity amounts to checking that the following conditions are satisfied.

1. Every loop in the incidence graph, either directed or not, is positive.


6 Design of Synthetic Switching Networks


Figure 6.19 Example of an incidence graph: (a) the incidence graph of (6.37)–(6.39); (b) a cascade of subsystems (from (Angeli et al. 2004))

2. All paths from the input node to the output node are positive.
3. There is a directed path from the input node to each node.
4. There is a directed path from each node to the output node.

Note that conditions 1 and 2 are equivalent to the requirement that every possible loop is positive, even after closing under any positive feedback (because a positive path closed under a positive feedback forms a positive loop). Conditions 3 and 4 are technical conditions needed for irreducibility. When an incidence graph is irreducible, it cannot be divided into two or more subnetworks. In fact, if an interaction graph can be divided into several irreducible subnetworks that satisfy conditions 1 and 2, we can examine each subnetwork individually by applying the approach to each irreducible subnetwork. A well-defined steady-state input/output characteristic means that for each constant input ω(t) ≡ ω̄, there exists a globally asymptotically stable equilibrium x̄ of (6.36). We say that (6.36) has a static input-state characteristic

kη(·) : W → X   (6.40)

if for each constant input ω(t) ≡ ω̄ ∈ W there exists a globally asymptotically stable equilibrium x̄(ω̄) = kη(ω̄) with f(x̄(ω̄), ω̄) = 0, where W and X are the input space and the state space, respectively. Provided that the input-state characteristic exists, we also define the static input/output characteristic at the equilibria as η = ky(ω̄) := h(kη(ω̄)). Then, with η = h(kη(ω̄)) and η = ω̄, the analysis of equilibria and their stability becomes relatively simple. Note that x̄ = kη(ω̄) is derived from f(x̄(ω̄), ω̄) = 0 at the equilibrium.
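Conditions 1–4 can be verified algorithmically on a signed incidence graph. The sketch below uses the classical fact that every loop, directed or not, is positive exactly when the signed graph is balanced, i.e., when node spins σ with sign(u → v) = σ(u)σ(v) exist. The test case encodes the sign pattern of (6.37)–(6.39) with output η = x3 − x1, under the assumed choice of the +ω branch in (6.38).

```python
from collections import deque

def check_monotone(nodes, edges, inp, out):
    """Check conditions 1-4 on a signed incidence graph.
    edges: directed signed arcs (u, v, sign), with sign in {+1, -1}."""
    # Condition 1: every loop (directed or not) is positive  <=>  the signed
    # graph is balanced: spins sigma(n) exist with sign(u,v) = sigma(u)*sigma(v).
    und = {n: [] for n in nodes}
    for u, v, s in edges:
        und[u].append((v, s))
        und[v].append((u, s))
    spin, balanced = {}, True
    for root in nodes:
        if root in spin:
            continue
        spin[root] = 1
        q = deque([root])
        while q:
            u = q.popleft()
            for v, s in und[u]:
                if v not in spin:
                    spin[v] = spin[u] * s
                    q.append(v)
                elif spin[v] != spin[u] * s:
                    balanced = False
    # Condition 2: in a balanced graph every input-output path has the sign
    # sigma(inp)*sigma(out), so all such paths are positive iff that product is +1.
    cond2 = balanced and spin[inp] * spin[out] == 1
    # Conditions 3 and 4: directed reachability.
    def reachable(src, forward):
        succ = {n: [] for n in nodes}
        for u, v, _ in edges:
            succ[u if forward else v].append(v if forward else u)
        seen, q = {src}, deque([src])
        while q:
            u = q.popleft()
            for v in succ[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        return seen
    cond3 = reachable(inp, True) == set(nodes)
    cond4 = reachable(out, False) == set(nodes)
    return balanced, cond2, cond3, cond4

# Signed arcs of the incidence graph of (6.37)-(6.39) with output
# eta = x3 - x1, assuming the "+omega" branch in (6.38); the signs are
# read off the partial derivatives of the right-hand sides.
nodes = ["w", "x1", "x2", "x3", "eta"]
edges = [("w", "x1", -1), ("w", "x2", +1),
         ("x1", "x2", -1), ("x1", "x3", -1),
         ("x2", "x3", +1), ("x3", "x2", +1),
         ("x1", "eta", -1), ("x3", "eta", +1)]
print(check_monotone(nodes, edges, "w", "eta"))  # (True, True, True, True)
```

Flipping any single edge sign breaks the balance condition, which is how a hidden negative loop would be detected.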


A very useful fact in the verification of these two critical conditions is that they always hold for cascades of systems. Cascades are systems composed of subsystems in which the output of each subsystem is an input to the next subsystem, as shown in Figure 6.19 (b). Now, we apply the method to a higher-dimensional system, i.e., a three-tier MAPK cascade based on the Mos/MEK1/p42 MAPK cascade present in Xenopus oocytes, as shown in Figure 6.20 (a). The system can be described by the differential equations

ẋ = −V2 x/(K2 + x) + V0 z3 + V1,   (6.41)
ẏ1 = V6 (1200 − y1 − y3)/(K6 + 1200 − y1 − y3) − V3 x y1/(K3 + y1),   (6.42)
ẏ3 = V4 x (1200 − y1 − y3)/(K4 + 1200 − y1 − y3) − V5 y3/(K5 + y3),   (6.43)
ż1 = V10 (300 − z1 − z3)/(K10 + 300 − z1 − z3) − V7 y3 z1/(K7 + z1),   (6.44)
ż3 = V8 y3 (300 − z1 − z3)/(K8 + 300 − z1 − z3) − V9 z3/(K9 + z3).   (6.45)

We now show how the approach can be applied to detect multistability. The first step is to view the system as a cascade of three modular subsystems: the one-dimensional x (Mos) subsystem, whose input is ω and output is η = x; the two-dimensional y1–y3 (MEK) subsystem, whose input is ω = x and output is η = y3; and the two-dimensional z1–z3 (MAPK) subsystem, whose input is ω = y3 and output is η = z3, as shown in Figure 6.20 (c). It is straightforward to see that the Mos subsystem has a well-defined I/O characteristic, and the same can be proven for the MEK and MAPK subsystems. Therefore, the entire cascade has a well-defined I/O characteristic. Next, monotonicity needs to be verified; this is trivial for all the subsystems, and because each subsystem in the cascade is monotone, the entire cascade is guaranteed to be monotone. Thus, the framework can be applied to the system described by (6.41)–(6.45). The closed-loop system is obtained by setting ω = z3 as the input of the one-dimensional x (Mos) subsystem; more generally, a feedback loop of strength ν is closed by taking the input to be ω = νz3. The global stability behavior of the entire five-dimensional system can be deduced from a plot of the characteristic z3 = kη(ω) together with the line ω = νz3. Under unitary feedback, ν = 1, the system has three equilibria, as shown in Figures 6.20 (d) and (f). The theoretical framework shows that the middle equilibrium is unstable and the high and low equilibria are stable. Moreover, every trajectory, except for the unstable equilibrium itself and a zero-measure separatrix corresponding to the stable manifold of the unstable equilibrium, necessarily converges to one of the two stable equilibria. The experimental demonstration of a sigmoidal response of MAPK to Mos based on actual data is shown in Figure 6.20 (e).
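The steady-state I/O characteristic of the open-loop cascade can be computed numerically by replacing the feedback term V0 z3 in (6.41) with the external input V0 ω and integrating each constant-input system to steady state. The parameter values below are illustrative placeholders (the original values of the Vi and Ki are not listed in this section), so the sketch only illustrates the qualitative point that the characteristic of the monotone cascade is nondecreasing in ω:

```python
# Open-loop version of (6.41)-(6.45): the feedback term V0*z3 in the Mos
# equation is replaced by V0*omega.  All Vi, Ki values below are
# illustrative placeholders, so the resulting curve is only qualitative.
V0, V1, V2, K2 = 1.0, 0.1, 5.0, 1.0
V3, K3, V4, K4, V5, K5 = 1.0, 200.0, 1.0, 200.0, 5.0, 200.0
V6, K6 = 5.0, 200.0
V7, K7, V8, K8, V9, K9 = 1.0, 50.0, 1.0, 50.0, 5.0, 50.0
V10, K10 = 5.0, 50.0

def rhs(s, w):
    x, y1, y3, z1, z3 = s
    mek = 1200.0 - y1 - y3    # free MEK pool
    mapk = 300.0 - z1 - z3    # free MAPK pool
    return (-V2 * x / (K2 + x) + V0 * w + V1,
            V6 * mek / (K6 + mek) - V3 * x * y1 / (K3 + y1),
            V4 * x * mek / (K4 + mek) - V5 * y3 / (K5 + y3),
            V10 * mapk / (K10 + mapk) - V7 * y3 * z1 / (K7 + z1),
            V8 * y3 * mapk / (K8 + mapk) - V9 * z3 / (K9 + z3))

def steady_output(w, dt=0.05, t_end=3000.0):
    """Integrate the constant-input system to (approximate) steady state
    with RK4 and return the output eta = z3."""
    s = (0.1, 0.0, 0.0, 0.0, 0.0)
    for _ in range(int(t_end / dt)):
        k1 = rhs(s, w)
        k2 = rhs(tuple(a + 0.5 * dt * b for a, b in zip(s, k1)), w)
        k3 = rhs(tuple(a + 0.5 * dt * b for a, b in zip(s, k2)), w)
        k4 = rhs(tuple(a + dt * b for a, b in zip(s, k3)), w)
        s = tuple(a + dt / 6.0 * (p + 2 * q + 2 * r + u)
                  for a, p, q, r, u in zip(s, k1, k2, k3, k4))
    return s[4]

omegas = [0.0, 0.7, 1.4, 2.0]
char = [steady_output(w) for w in omegas]
# A monotone cascade has a nondecreasing steady-state characteristic.
print([round(z, 3) for z in char])
```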



Figure 6.20 Bistability in a MAPK cascade. Schematic depiction of the Mos-MEK-p42 MAPK cascade, which is a linear cascade of protein kinases embedded in a positive feedback loop (a), together with the corresponding open-loop system (b). (c) Incidence graphs of the three subsystems. (d) The steady-state I/O characteristic (kη as a function of ω) for the MAPK cascade, plotted together with the diagonal that represents η as a function of ω with unity feedback. (e) Experimental demonstration of a sigmoidal response of MAPK to Mos. (f) The bifurcation diagram showing the stable on-state (the upper curve), the stable off-state (the lower curve), and the unstable threshold (the middle curve) as a function of feedback strength ν (from (Angeli et al. 2004))

6.5 Enzyme-driven Switching Networks

Although PFNs are general network structures, they have mainly been used to design and model gene regulatory networks, as indicated in the preceding sections. In this section, by contrast, we discuss switching networks at the level of proteins and metabolites. Besides the use of PFNs, other techniques have also been developed to construct switching networks. Here we provide a rigorous conceptual basis for understanding the relationship between the structures of mass-action biochemical reaction networks and their capacity to exhibit bistability (Craciun et al. 2006). Before introducing this relationship, we first consider a motivating example. The conversion of two substrates S1 and S2 into the product P is catalyzed by an enzyme E through three intermediate complexes ES1, ES2, and ES1S2. The kinetic mechanism can be represented as

E + S1 ⇌ ES1  (forward rate constant k1, reverse k2),   (6.46)
E + S2 ⇌ ES2  (k3, k4),   (6.47)
S2 + ES1 ⇌ ES1S2  (k5, k6),   ES1S2 ⇌ S1 + ES2  (k7, k8),   (6.48)
ES1S2 → E + P  (k9),   (6.49)

where ki (i = 1, ..., 9) denote the rate constants. The corresponding system of coupled differential equations according to the mass-action law is

ċE = −k1 cE cS1 + k2 cES1 − k3 cE cS2 + k4 cES2 + k9 cES1S2,   (6.50)
ċS1 = −k1 cE cS1 + k2 cES1 − k8 cS1 cES2 + k7 cES1S2 + FS1 − dS1 cS1,   (6.51)
ċS2 = −k3 cE cS2 + k4 cES2 − k5 cS2 cES1 + k6 cES1S2 + FS2 − dS2 cS2,   (6.52)
ċES1 = k1 cE cS1 − k2 cES1 − k5 cES1 cS2 + k6 cES1S2,   (6.53)
ċES2 = k3 cE cS2 − k4 cES2 − k8 cES2 cS1 + k7 cES1S2,   (6.54)
ċES1S2 = k5 cS2 cES1 + k8 cS1 cES2 − (k6 + k7 + k9) cES1S2,   (6.55)
ċP = k9 cES1S2 − dP cP,   (6.56)

where dS1, dS2, and dP are the degradation rates, FS1 and FS2 are the supply rates of the substrates, and ci represents the concentration of species i. There are appropriate combinations of parameter values, i.e., appropriate rate constants, mass transfer coefficients, and substrate supply rates, for which the system (6.50)–(6.56) shows bistability. There are two stable equilibria, one characterized by a high productivity of P and the other by a substantially lower one. The trajectories converge to one of the two equilibria, depending on the initial conditions. Switching between the two equilibria could be triggered, for example, by a signal in the form of a temporary disturbance in a substrate supply rate (Craciun et al. 2006), as shown in Figure 6.21. In fact, we can easily show that the network described by (6.46)–(6.49) or (6.50)–(6.56) is also a PFN that satisfies the conditions of Theorems 6.4–6.7. This example shows that the capacity for bistability is already present in the biochemical reactions themselves. Some networks can give rise to bistability, while others cannot show bistability for any set of parameter values. The capacity for bistability refers to the existence of combinations of parameter values for which the governing equations admit at least two distinct stable equilibria. On the basis of the species-reaction (SR) graph, a general technique for determining the relationship between reaction network structures and the capacity for bistability was developed (Craciun et al. 2006). Before describing the technique, some terminology needs to be introduced. The first term is the SR graph, which is constructed as follows: symbols in circles represent species and symbols in boxes represent reactions. Reversible reaction pairs and irreversible reactions are included in the same box (see Figure 6.22). If a species appears within a reaction, then an arc is drawn from


Figure 6.21 Simulations of (6.50)–(6.56) show that trajectories converge to one of the two stable equilibria. The parameter values are k1 = 93.43, k2 = 2539, k3 = 481.6, k4 = 1183, k5 = 1556, k6 = 121192, k7 = 0.02213, k8 = 1689, k9 = 85842, dS1 = dS2 = dP = 1, FS1 = 2500, and FS2 = 1500 (from (Craciun et al. 2006))

the species symbol to the reaction symbol, and the arc is labeled with the name of the complex in which the species appears. For example, assume that species A appears within the reaction A + B ⇌ F. Then, an arc is drawn from A to the reaction A + B ⇌ F and labeled with the complex A + B. The SR graph is completed once all the species nodes are connected to the reaction nodes in this manner. If a species appears in both complexes of a reaction, as in A + B ⇌ 2A, then two arcs are drawn between A and the reaction, each labeled by a different complex, i.e., A + B and A + A. An example of an SR graph and its corresponding reactions is depicted in Figure 6.22. The second term is the complex pair. A complex pair in an SR graph refers to a pair of arcs that are adjacent to the same reaction symbol and carry the same complex label. For example, the two arcs labeled C + E in Figure 6.22 constitute a complex pair because they are adjacent to the same reaction symbol and carry the same complex label C + E. There are a total of four complex pairs in Figure 6.22, i.e., C + E, C + G, C + D, and A + B. Note that each complex pair consists of two arcs. Next, we introduce the types of cycles, which are qualitative characteristics of a reaction network. A cycle is defined similarly to a feedback loop in an interaction graph or an incidence graph, except that arcs in an SR graph carry no direction; a cycle is a closed path in which no arc or vertex is traversed twice. For example, there are

6.5 Enzyme-driven Switching Networks

211

[Figure 6.22 depicts the SR graph of the reactions A + B → F, C + G → A, C + D → B, and C + E → D, with two labeled cycles (cycles 1 and 2); cycles 1 and 2 split the C + D complex pair.]

Figure 6.22 An example of the SR graph and its corresponding reactions (redrawn from (Craciun et al. 2006))

three cycles in Figure 6.22: two labeled cycles (cycle 1 and cycle 2) and a third, unlabeled cycle A − B − D − C − A (cycle 3). Three kinds of cycles need to be distinguished: odd-cycles, even-cycles, and 1-cycles. These classifications are not mutually exclusive; a cycle can, for example, be both an odd-cycle and a 1-cycle. An odd- (or even-) cycle is a cycle containing an odd (or even) number of complex pairs. All three cycles in Figure 6.22 are therefore odd-cycles: cycle 1 contains only the complex pair A + B, cycle 2 contains only the complex pair C + D, and cycle 3 contains only the complex pair A + B. A 1-cycle in an SR graph is a cycle in which the stoichiometric coefficient associated with each of its arcs is one. Clearly, all three cycles in Figure 6.22 are 1-cycles. Finally, we say that two cycles split a complex pair if both arcs of the complex pair are among the arcs of the two cycles and one of the arcs is contained in just one of the cycles. In Figure 6.22, cycles 1 and 2 split the C + D complex pair: both arcs are among the arcs of the two cycles, but cycle 1 contains only one of the arcs labeled C + D. The large outer cycle (cycle 3) and cycle 1 also split the complex pair C + D, as do the large outer cycle and cycle 2. A graph-theoretic technique was developed in order to discriminate complex reaction networks that can admit multiple equilibria from those that cannot (Craciun et al. 2006). It has been shown that if


Table 6.1 Some networks and their capacity for bistability (from (Craciun et al. 2006)). [The table lists, for each entry: the network, a remark, and whether it has the capacity for bistability; the entries themselves are not reproduced here.]

the SR graph of a network satisfies certain conditions, the differential equations corresponding to the network cannot admit multiple equilibria, no matter what values the parameters take. Because these conditions are not stringent, violating at least one of them is a powerful necessary condition that a network must satisfy if it is to have the capacity to produce multiple equilibria.

Theorem 6.8. Consider a reaction network whose SR graph has both of the following properties.

6.5 Enzyme-driven Switching Networks

213

1. Each cycle in the graph is a 1-cycle, an odd-cycle, or both.
2. No complex pair is split by two even-cycles.

For such a reaction network, the corresponding differential equations cannot admit more than one positive equilibrium, no matter what values the parameters take.
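The cycle classification of Theorem 6.8 can be checked mechanically for small SR graphs. The sketch below encodes the SR graph of Figure 6.22 (reactions A + B → F, C + G → A, C + D → B, and C + E → D, all with unit stoichiometric coefficients) and enumerates its simple cycles by XOR-combining the fundamental cycles of a spanning tree; this enumeration strategy is our own illustrative choice, not part of (Craciun et al. 2006).

```python
from collections import Counter
from itertools import combinations

# Arcs of the SR graph in Figure 6.22:
# (species, reaction, complex label, stoichiometric coefficient).
arcs = [
    ("A", "R1", "A+B", 1), ("B", "R1", "A+B", 1), ("F", "R1", "F", 1),
    ("C", "R2", "C+G", 1), ("G", "R2", "C+G", 1), ("A", "R2", "A", 1),
    ("C", "R3", "C+D", 1), ("D", "R3", "C+D", 1), ("B", "R3", "B", 1),
    ("C", "R4", "C+E", 1), ("E", "R4", "C+E", 1), ("D", "R4", "D", 1),
]

def simple_cycles(arcs):
    """All simple cycles of the (undirected) SR graph, as frozensets of arc
    indices, obtained by XOR-combining fundamental cycles of a spanning tree."""
    adj = {}
    for i, (s, r, _, _) in enumerate(arcs):
        adj.setdefault(s, []).append((r, i))
        adj.setdefault(r, []).append((s, i))
    parent, parent_arc = {}, {}
    for root in sorted(adj):
        if root in parent:
            continue
        parent[root] = None
        stack = [root]
        while stack:
            u = stack.pop()
            for v, i in adj[u]:
                if v not in parent:
                    parent[v], parent_arc[v] = u, i
                    stack.append(v)
    def to_root(node):
        path = set()
        while parent[node] is not None:
            path.add(parent_arc[node])
            node = parent[node]
        return path
    tree = set(parent_arc.values())
    fund = [frozenset({i}) ^ to_root(s) ^ to_root(r)
            for i, (s, r, _, _) in enumerate(arcs) if i not in tree]
    cycles = set()
    for k in range(1, len(fund) + 1):
        for combo in combinations(fund, k):
            edges = frozenset()
            for f in combo:
                edges ^= f
            deg = Counter(n for i in edges for n in arcs[i][:2])
            if edges and all(d == 2 for d in deg.values()):
                # keep only connected edge sets (a single closed path)
                seen, stack = set(), [next(iter(deg))]
                while stack:
                    u = stack.pop()
                    if u in seen:
                        continue
                    seen.add(u)
                    stack += [v for v, i in adj[u] if i in edges]
                if seen == set(deg):
                    cycles.add(edges)
    return cycles

def n_pairs(cycle):
    """Number of complex pairs both of whose arcs lie in the cycle."""
    cnt = Counter((arcs[i][1], arcs[i][2]) for i in cycle)
    return sum(1 for v in cnt.values() if v == 2)

cycles = simple_cycles(arcs)
odd = [c for c in cycles if n_pairs(c) % 2 == 1]
ones = [c for c in cycles if all(arcs[i][3] == 1 for i in c)]
cond1 = all(c in odd or c in ones for c in cycles)
evens = [c for c in cycles if n_pairs(c) % 2 == 0]
pair_arcs = [v for v in
             {(r, comp): [i for i, a in enumerate(arcs) if a[1] == r and a[2] == comp]
              for _, r, comp, _ in arcs}.values() if len(v) == 2]
def splits(p, c1, c2):
    a, b = p
    return (a in c1 | c2) and (b in c1 | c2) and \
           ((a in c1) != (a in c2) or (b in c1) != (b in c2))
cond2 = not any(splits(p, c1, c2) for p in pair_arcs
                for c1, c2 in combinations(evens, 2))
print(len(cycles), len(odd), cond1, cond2)  # 3 3 True True
```

Both conditions of Theorem 6.8 hold (all three cycles are odd 1-cycles and there are no even-cycles), so this network has no capacity for multiple positive equilibria.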

[Figure 6.23 shows the SR graph of the single-enzyme mechanism E + S ⇌ ES → E + P, with species nodes E, S, P, and ES.]

Figure 6.23 The SR graph for entry 1 of Table 6.1

Although the theorem does not provide sufficient conditions for the capacity for multistability, it does provide necessary conditions for bistability. In particular, when every stoichiometric coefficient is one, which is very common in biochemical systems, two even-cycles must split a complex pair in order for the network to admit multiple equilibria. According to Theorem 6.8, the networks shown in the first three entries of Table 6.1 cannot admit more than one positive equilibrium, no matter what values are assigned to their parameters. Likewise, there is no capacity for bistability in the system of Figure 6.22 according to Theorem 6.8, because its only cycles are the three odd 1-cycles discussed above. For entry 1 of Table 6.1, the SR graph is shown in Figure 6.23; it contains only one cycle, a 1-cycle containing no complex pair, so the network satisfies the conditions of Theorem 6.8. See (Craciun et al. 2006) for a more detailed analysis of the other networks shown in Table 6.1. Take dihydrofolate reductase (DHFR), a crucial enzyme along the pathway for thymine synthesis, as an example. DHFR promotes the overall


Figure 6.24 Reactions and rate constants for DHFR catalysis: E, DHFR; H2F, dihydrofolate; H4F, tetrahydrofolate; NH, NADPH; N, NADP+ ; and EX, X bound to DHFR (from (Craciun et al. 2006))

reaction, as shown in Figure 6.24. Its SR graph and its bistability are shown in Figure 6.25 and Figure 6.26, respectively. In addition to the techniques mentioned above, i.e., the interaction graph and SR graph techniques, there are other techniques that can be used to determine whether a given network has the capacity to exhibit multiple equilibria, e.g., the Thomas conjecture (Thomas 1981), the Kaufman multistability conditions (Kaufman et al. 2007), the injective polynomial function approach (Craciun and Feinberg 2005, Craciun and Feinberg 2006), and the chemical reaction network theory implemented in the Chemical Reaction Network Toolbox (Siegal-Gaskins et al. 2009). Interested readers may refer to (Thomas 1981, Kaufman et al. 2007, Craciun and Feinberg 2005, Craciun and Feinberg 2006) for more details on the theories and examples. All of the above approaches have been compared and applied to one-component and two-component subnetworks (Siegal-Gaskins et al. 2009). It was demonstrated that the different methods have their own merits and drawbacks, which suggests that a given technique may be of limited use for certain kinds of networks. For example, the Thomas conjecture (Thomas 1981), which states that a necessary condition for the existence of multiple equilibria is the presence of a positive loop in the interaction graph, provides only a necessary condition and cannot guarantee the existence of multiple equilibria for a PFN.

[Figure 6.25 shows the SR graph corresponding to the DHFR reactions of Figure 6.24; it contains the labeled cycles 1 and 2 and a split complex pair.]

Figure 6.25 The SR graph corresponding to the reactions shown in Figure 6.24 (redrawn from (Craciun et al. 2006))

However, not all PFNs have multiple equilibria, and even for the same network, different sets of parameters may yield different numbers of equilibria. The situation is similar for the Kaufman multistability condition, which states that multistationarity requires either the presence of a variable nucleus or the presence of two nuclei of opposite signs; this, too, is only a necessary condition (Kaufman et al. 2007). Here, a nucleus is a union of one or more disjoint loops that includes all vertices of an interaction graph. There have been many pioneering works on detecting the multiple equilibria of a molecular network. Craciun et al. modeled a cascade of chemical reactions in a chemical network based on the law of mass action (Craciun et al. 2006). If the details of each reaction are available, such a system can be expressed exactly as a chemical network with reaction rates given by the law of mass action. For an enzyme-catalyzed metabolic reaction, however, the details of the intermediate process are generally unknown, which limits the application of such theoretical results. On the other hand, for an enzyme-catalyzed metabolic reaction, one can formulate the biochemical reactions based on the MM kinetics or the Hill kinetics even without identifying the details of the intermediate process, thereby avoiding the difficulty of the previous modeling approaches. Based on the Hill kinetics, by exploiting network structures of


Figure 6.26 Bistability and switch-like behavior in the DHFR experiment determined from measured rate constants (from (Craciun et al. 2006))

enzyme-driven reactions, a module-based approach has also been developed recently to analyze the multistability of metabolic networks (Lei et al. 2010). It first decomposes a general metabolic network into four types of elementary modules according to the number of substrates and products, and then derives sufficient conditions for the monostability of the metabolic network.

7 Design of Synthetic Oscillating Networks

Rhythmic behavior represents one of the most striking dynamical phenomena in biological systems. Biological rhythms, including neural, cardiac, glycolytic, mitotic, hormonal, and circadian rhythms, as well as rhythms in ecology and epidemiology, with periods ranging from seconds to years, play important roles in many processes (Goldbeter 1997). Such dynamical phenomena arise from the interplay of cellular components and are typically generated by negative feedback loops (Dunlap 1999). From both theoretical and experimental viewpoints, it remains a great challenge to model, analyze, and further predict oscillatory phenomena in various living organisms. Oscillations, particularly periodic oscillations, are widely used in engineering control systems as central clocks to synchronize various elements with periodic behavior. Many multicellular organisms likewise rely on variations of cellular clocks to coordinate their behavior over the course of the day-night cycle. Models and theoretical approaches are essential for gaining an understanding of the principles underlying these rhythmic or oscillating processes. The most widely studied models of rhythmic phenomena are circadian oscillators (Dunlap 1999), cell cycle oscillators (Tyson et al. 2001, Tyson and Novak 2001), and glycolytic oscillators (Wolf et al. 2000). It has been shown that using simplified systems and focusing on general mechanisms is useful for obtaining a fundamental understanding of oscillatory mechanisms. Theoretical studies have yielded models for cellular oscillators and have even helped predict oscillatory behavior. Many models of sustained biological oscillators use three components, namely, X → Y → R ⊣ X, where X enhances Y, Y enhances R, but R represses X, thereby forming a negative feedback loop. The effect of the variable R can also be accompanied by a time delay in the feedback loop (Tyson et al. 2003).
The simplest case of a feedback oscillator consists of a single gene, its corresponding mRNA X, and its product, i.e., protein Y. If the protein inhibits the transcription of the mRNA, i.e., X → Y ⊣ X, gene expression can become oscillatory once the time between the beginning of transcription and the end of translation is taken into account; that is, the system can be represented as a self-inhibitory, or auto-repressed, system with a time delay.


This mechanism has been proposed as the basis of the circadian rhythm (Schepper et al. 1999) and of the zebrafish somitogenesis oscillator (Lewis 2003). In all these models, time delays are essential to the generation of the oscillations, although a simple negative feedback loop without a time delay can also generate oscillations. A network forming a cellular oscillator in living organisms typically involves more components than just a protein and its mRNA, and these oscillators function faithfully under different environmental conditions. For example, the well-known Goodwin oscillator (Goodwin 1965), a three-dimensional model composed of X, a clock mRNA, Y, a clock protein, and Z, an inhibitor of transcription, describes an oscillatory negative feedback regulation in which the translated protein Y inhibits its own transcription through the inhibitor Z. Subsequently, many other more complex oscillators were proposed, e.g., Goldbeter's models (Goldbeter 1995, Leloup and Goldbeter 1998, Leloup and Goldbeter 2003). In addition to the models mentioned above, some synthetic oscillators have also been constructed, e.g., the repressilator (Elowitz and Leibler 2000), the gene-metabolic oscillator (Fung et al. 2005), and the robust and tunable synthetic gene oscillator (Stricker et al. 2008). Some of them have been implemented experimentally. Excellent reviews can be found elsewhere, e.g., the papers (Dunlap 1999, Goldbeter 2002, Pedersen et al. 2005, Kruse and Jülicher 2005, Hasty et al. 2002b) and the books (Goldbeter 1997, Segel 1984). In general, because of complicated nonlinearities, it is difficult to guarantee that a biomolecular system will converge to a limit cycle, i.e., a sustained oscillation, even for a simply structured dynamical system. Therefore, many important physiological factors, e.g., time delays and multiple time scales, are often simply ignored in order to reduce the dimensionality and complexity of the systems.
It is well known, however, that such factors may play important roles in the dynamics of biomolecular systems. With rapid advances in mathematics and in experiments probing the underlying regulatory mechanisms, more theoretical results and general techniques have been obtained to elucidate oscillatory behavior. In this chapter, we aim to provide a general framework for designing and analyzing oscillating networks by exploiting special dynamical features of biomolecular networks and applying recent theoretical results on monotone dynamical systems. In contrast to the preceding chapter, in which the asymptotic properties of positive feedback networks (PFNs) were exploited to construct switching networks, in this chapter a class of negative feedback networks, namely cyclic feedback networks (CFNs), is used to construct oscillating networks.

7.1 Simple Oscillatory Networks

Cellular oscillations are typically generated by negative feedback loops. To gain insight into how to design an oscillatory network, we first consider several simple oscillatory networks.


7.1.1 Delayed Autoinhibition Networks

An important consideration in modeling oscillatory networks is that individual processes need a certain amount of time to be completed. For example, mature mRNA is not available immediately after the initiation of transcription. One oscillator mechanism involves a negative feedback loop in which the responding protein binds directly to the regulatory DNA of its own gene to inhibit its own transcription, thus forming a direct autoinhibition with transcriptional delays, as depicted in Figure 7.1.


Figure 7.1 An oscillatory network based on a direct autoinhibition with delays. The protein product acts as a homodimer to inhibit the expression of gene her1

For this simple autoregulatory network, let m and p denote the numbers or concentrations of the mRNA and protein molecules, respectively. The dynamics of the autoregulatory network can then be assumed to obey the following differential equations (Lewis 2003):

dm(t)/dt = f(p(t − τm)) − dm m(t),   (7.1)
dp(t)/dt = a m(t − τp) − dp p(t),   (7.2)

where the constants dp and dm are the degradation rates of the protein and mRNA molecules, respectively, a is the production rate of the protein molecules, and f (p) is the production rate of the mRNA molecules, which is assumed to be a decreasing Hill function of the protein p. τm and τp are the time delays involved in the synthesis of mRNA and protein molecules. The amount of regulatory protein p influences the transcription rate, but a significant time, τm , elapses between the initiation of the transcription and the arrival of the mature mRNA molecules at the cytoplasm. Thus, the rate


of increase of the number of mature mRNA molecules at any instant reflects the value of p at a time earlier by an amount τm. Similarly, there is a delay, τp, between the initiation of translation and the emergence of complete, functional protein molecules. The decreasing Hill function f(p) takes the form

f(p) = k / (1 + p^n/p0^n)   (7.3)

with Hill coefficient n, where the constants k and p0 represent the action of an inhibitory protein that acts as a dimer, i.e., n = 2; the behavior is qualitatively similar for any n > 1. It can be shown from Bendixson's negative criterion that (7.1)–(7.2) cannot generate sustained oscillations when the two delays are set to zero. To generate sustained oscillations, three conditions must be satisfied: (1) ak/(dp dm) > 2p0; (2) dp ≳ 1.7/T; and (3) dm ≳ 1.7/T, where T = τp + τm is the total time delay (Lewis 2003). Sustained oscillations are shown in Figure 7.2. The oscillator based on autoinhibition with time delays possesses the remarkable property that its period is mainly determined by the total delay T; the other parameter values can change by orders of magnitude with very little effect on the period. See (Lewis 2003) for more details.

Figure 7.2 Sustained oscillations generated by (7.1)–(7.2). The parameter values are a = 4.5, dm = dp = 0.23, k = 33, and p0 = 40, which are chosen so that the predicted period is close to the observed period. The delays τp and τm are estimated according to the rate at which RNAP II moves along DNA and the time needed for intron splicing and the transfer of mature mRNAs from the nucleus to the cytosol. The estimated delays are τp ≈ 2.8 min and τm ≈ 31.5 min for her1 (from (Lewis 2003))
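A minimal numerical sketch of (7.1)–(7.3) with the parameter values of Figure 7.2, using forward-Euler integration and keeping the past trajectory so the delayed terms can be read off (the zero initial history is an assumption):

```python
# Delay model (7.1)-(7.3) for her1 autoinhibition, integrated by forward
# Euler; the full past trajectory is kept so that the delayed values can be
# looked up.  Parameters follow Figure 7.2; times are in minutes.
a, dm, dp = 4.5, 0.23, 0.23          # production and degradation rates (1/min)
k, p0, n = 33.0, 40.0, 2             # Hill function parameters
tau_m, tau_p = 31.5, 2.8             # transcriptional / translational delays

def f(p):
    """Repressive Hill function (7.3) for the mRNA production rate."""
    return k / (1.0 + (p / p0) ** n)

dt = 0.01
steps = int(2000.0 / dt)
lag_m, lag_p = int(tau_m / dt), int(tau_p / dt)
m_tr, p_tr = [0.0], [0.0]            # assumed zero initial history
for i in range(steps):
    p_del = p_tr[i - lag_m] if i >= lag_m else 0.0   # p(t - tau_m)
    m_del = m_tr[i - lag_p] if i >= lag_p else 0.0   # m(t - tau_p)
    m_tr.append(m_tr[-1] + dt * (f(p_del) - dm * m_tr[-1]))
    p_tr.append(p_tr[-1] + dt * (a * m_del - dp * p_tr[-1]))

# Sustained oscillation: the swing in p does not die out at late times.
tail = p_tr[int(1500.0 / dt):]
print("late-time amplitude of p:", round(max(tail) - min(tail), 1))
```

Consistent with conditions (1)–(3) above, which these parameter values satisfy, the simulated oscillation does not decay at late times.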

In (7.2), the production rate of the protein is linear in the mRNA concentration. Another oscillatory model based on a similar mechanism has been proposed in which a time delay and nonlinearity in protein production, together with cooperativity in the negative feedback, are necessary to generate circadian oscillations (Schepper et al. 1999). This mathematical model of the intracellular circadian rhythm generator is also based on negative feedback regulation by the protein product on the transcription rate of its gene. The model includes the production and


degradation of mRNA and protein molecules, along with negative feedback of protein molecules upon mRNA production, as shown in Figure 7.3.


Figure 7.3 An example of autoinhibition networks. (a) Schematic representation of the circadian rhythm generator. The generator involves the autoinhibition of the protein at the translational or transcriptional level and post-translational processing such as phosphorylation, dimerization, and transport. Protein∗ denotes effective protein molecules in the molecular state that are capable of inhibiting the mRNA production and expressing the circadian rhythm. (b) Model interpretation of (a), emphasizing the delay τ and nonlinearity in the protein production, the nonlinear negative feedback, as well as the production and degradation of mRNA and protein molecules (from (Schepper et al. 1999))

It is assumed that the protein production and the negative feedback are nonlinear processes, whereas the total time involved in protein production and subsequent processing is represented by a single delay. The nonlinearities and the delay are critical for generating oscillations. On the basis of the above assumptions, the model is defined as follows:

dM(t)/dt = rM/(1 + (P(t)/k)^n) − qM M(t),   (7.4)
dP(t)/dt = rP M^m(t − τ) − qP P(t),   (7.5)


where M and P denote the relative concentrations of mRNA and effective protein molecules, respectively, rM is the scaled mRNA production rate constant, rP is the protein production rate constant, qM and qP are the mRNA and protein degradation rates, respectively, n is the Hill coefficient, the exponent m denotes the nonlinearity in protein production, the delay τ is the total duration of protein production from mRNAs, and k represents a scaling constant. Sustained oscillations generated by (7.4)–(7.5) are shown in Figure 7.4.
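The claim that the delay is critical can be checked numerically: for τ = 0, the system (7.4)–(7.5) is two-dimensional with everywhere-negative divergence −qM − qP, so Bendixson's criterion excludes periodic orbits, whereas with τ = 4 h and the parameter values of Figure 7.4 the oscillation is sustained. A forward-Euler sketch (the constant initial history M = P = 0.5 is an assumption):

```python
# Circadian model (7.4)-(7.5) with the parameter values of Figure 7.4.
# With tau = 0 the two-dimensional system has divergence -qM - qP < 0
# everywhere, so Bendixson's criterion rules out oscillations; the delay
# restores them.  The constant initial history is an assumption.
rM, rP, qM, qP = 1.0, 1.0, 0.21, 0.21   # rates in 1/h
n, m_exp, k_sc = 2, 3.0, 1.0            # Hill coefficient, nonlinearity, scale

def simulate(tau, t_end=600.0, dt=0.01, hist=0.5):
    """Forward-Euler integration of (7.4)-(7.5); returns the P trajectory."""
    lag = int(round(tau / dt))
    M_tr, P_tr = [hist], [hist]
    for i in range(int(t_end / dt)):
        M_del = M_tr[i - lag] if i >= lag else hist   # M(t - tau)
        M, P = M_tr[-1], P_tr[-1]
        M_tr.append(M + dt * (rM / (1.0 + (P / k_sc) ** n) - qM * M))
        P_tr.append(P + dt * (rP * M_del ** m_exp - qP * P))
    return P_tr

def late_amplitude(P_tr, dt=0.01, t_from=400.0):
    tail = P_tr[int(t_from / dt):]
    return max(tail) - min(tail)

amp_delay = late_amplitude(simulate(4.0))      # sustained oscillation
amp_no_delay = late_amplitude(simulate(0.0))   # converges to the equilibrium
print(round(amp_delay, 2), round(amp_no_delay, 6))
```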

Figure 7.4 Sustained oscillations generated by (7.4)–(7.5) at rM = 1.0 h−1, rP = 1.0 h−1, qM = 0.21 h−1, qP = 0.21 h−1, n = 2, m = 3.0, τ = 4.0 h, and k = 1. The protein and mRNA concentrations are shown by the continuous and dashed lines, respectively (from (Schepper et al. 1999))

The simple autoinhibition networks with negative feedback on gene expression are relatively easy to analyze theoretically. Many other oscillators based on this mechanism have also been developed, such as the Gro/TLE1-Hes1 repression model (Bernard et al. 2006), the discrete stochastic Hes1 delay model (Barrio et al. 2006), and others (Monk 2003). One disadvantage of these oscillators is that not all processes, e.g., phosphorylation, can be specified explicitly. Therefore, they may be too general to account for various aspects of the dynamics, such as entrainment and robustness. Negative feedback on gene expression has subsequently been used to analyze various periodic phenomena in many biomolecular networks (Goldbeter 1995, Leloup and Goldbeter 1998, Leloup and Goldbeter 2003).

7.1.2 Goldbeter’s Models

In Drosophila, the negative autoregulatory feedback established by the period (per) gene is at the heart of the circadian oscillator, as shown in Figure 7.5.

The gene per is first expressed in the nucleus and transcribed into mRNA. The per mRNA is then transported into the cytosol where it is translated into the protein P0 and degraded. The protein undergoes reversible phosphorylation twice at multiple residues, from P0 into P1 and from P1 into P2 . The fully phosphorylated form of the protein is transported into the nucleus in a reversible manner. The nuclear form of the protein PN represses the transcription of the gene per, resulting in a negative autoregulatory feedback. The phosphorylation, dephosphorylation, and degradation steps are assumed to obey Michaelian kinetics. The repression is supposed to be cooperative according to the Hill kinetics (Goldbeter 1995, Gonze et al. 2004).

Figure 7.5 The Goldbeter minimal model for circadian oscillations in Drosophila; the model is based on negative autoregulation of the per gene by its protein product PER (from (Goldbeter 1995))

The alternating protein production, gene repression, and protein degradation may lead to sustained circadian oscillations even in continuous darkness. The temporal variation of the concentrations of mRNA (M) and of the various protein forms (P0, P1, P2, PN) is governed by the differential equations

dM/dt = νs KI^n/(KI^n + PN^n) − νm M/(Km + M),   (7.6)
dP0/dt = ks M − V1 P0/(K1 + P0) + V2 P1/(K2 + P1),   (7.7)
dP1/dt = V1 P0/(K1 + P0) − V2 P1/(K2 + P1) − V3 P1/(K3 + P1) + V4 P2/(K4 + P2),   (7.8)
dP2/dt = V3 P1/(K3 + P1) − V4 P2/(K4 + P2) − k1 P2 + k2 PN − νd P2/(Kd + P2),   (7.9)
dPN/dt = k1 P2 − k2 PN,   (7.10)

where per mRNA accumulates at a maximum rate νs and is degraded enzymatically at a maximum rate νm with Michaelis–Menten (MM) constant Km. The synthesis rate of the PER protein from M is characterized by a first-order rate constant ks. The parameters Vi and Ki (i = 1, ..., 4) denote the maximum rates and Michaelis constants of the kinases and phosphatases involved in the reversible phosphorylation of P0 into P1 and of P1 into P2, respectively. The fully phosphorylated form P2 is degraded with maximum rate νd and MM constant Kd. P2 is transported into the nucleus with first-order rate constant k1, and the nuclear protein PN returns to the cytosol with first-order rate constant k2; PN inhibits mRNA production in a Hill-type manner with coefficient n. Figure 7.6 shows an example of the sustained oscillations in (7.6)–(7.10).

Figure 7.6 The sustained oscillations generated by the minimal model with νs = 0.76 μM h−1 , νm = 0.65 μM h−1 , Km = 0.5 μM, ks = 0.38 h−1 , νd = 0.95 μM h−1 , k1 = 1.9 h−1 , k2 = 1.3 h−1 , KI = 1 μM, Kd = 0.2 μM, n = 4, K1 = K2 = K3 = K4 = 2 μM, V1 = 3.2 μM h−1 , V2 = 1.58 μM h−1 , V3 = 5 μM h−1 , and V4 = 2.5 μM h−1 (from (Goldbeter 1995))
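For readers who wish to reproduce Figure 7.6 numerically, the following self-contained sketch (an illustration of ours, not from the book) integrates (7.6)–(7.10) with a classical fourth-order Runge–Kutta scheme, using the parameter values listed in the caption; the initial conditions and the names `goldbeter_rhs`, `PARAMS`, and `simulate_goldbeter` are assumptions:

```python
def goldbeter_rhs(y, p):
    """Right-hand side of (7.6)-(7.10); y = (M, P0, P1, P2, PN)."""
    M, P0, P1, P2, PN = y
    dM  = p["vs"] * p["KI"]**p["n"] / (p["KI"]**p["n"] + PN**p["n"]) \
          - p["vm"] * M / (p["Km"] + M)
    dP0 = p["ks"] * M - p["V1"] * P0 / (p["K1"] + P0) + p["V2"] * P1 / (p["K2"] + P1)
    dP1 = p["V1"] * P0 / (p["K1"] + P0) - p["V2"] * P1 / (p["K2"] + P1) \
          - p["V3"] * P1 / (p["K3"] + P1) + p["V4"] * P2 / (p["K4"] + P2)
    dP2 = p["V3"] * P1 / (p["K3"] + P1) - p["V4"] * P2 / (p["K4"] + P2) \
          - p["k1"] * P2 + p["k2"] * PN - p["vd"] * P2 / (p["Kd"] + P2)
    dPN = p["k1"] * P2 - p["k2"] * PN
    return (dM, dP0, dP1, dP2, dPN)

# Parameter values quoted in the caption of Figure 7.6 (muM and h units)
PARAMS = dict(vs=0.76, vm=0.65, Km=0.5, ks=0.38, vd=0.95, k1=1.9, k2=1.3,
              KI=1.0, Kd=0.2, n=4, K1=2.0, K2=2.0, K3=2.0, K4=2.0,
              V1=3.2, V2=1.58, V3=5.0, V4=2.5)

def simulate_goldbeter(t_end=240.0, dt=0.01, p=PARAMS):
    """Fixed-step fourth-order Runge-Kutta integration."""
    y = (0.1, 0.25, 0.25, 0.25, 0.25)   # arbitrary initial condition
    traj, t = [], 0.0
    while t < t_end:
        k1 = goldbeter_rhs(y, p)
        k2 = goldbeter_rhs(tuple(a + 0.5*dt*b for a, b in zip(y, k1)), p)
        k3 = goldbeter_rhs(tuple(a + 0.5*dt*b for a, b in zip(y, k2)), p)
        k4 = goldbeter_rhs(tuple(a + dt*b for a, b in zip(y, k3)), p)
        y = tuple(a + dt/6.0*(b + 2*c + 2*d + e)
                  for a, b, c, d, e in zip(y, k1, k2, k3, k4))
        t += dt
        traj.append((t, y))
    return traj
```

After a short transient, the mRNA variable M oscillates with a roughly 24 h period, as in the figure.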

The minimal model described by (7.6)–(7.10) can be decomposed into 30 elementary steps (Gonze et al. 2004). Stochastic simulations show that the dynamical behavior predicted by the deterministic equations remains valid as long as the maximum numbers of mRNA and protein molecules involved in the circadian oscillator are of the order of a few tens and hundreds, respectively (Gonze et al. 2004, Gonze et al. 2002a). Because of the presence of intrinsic noise, the trajectory in the phase space becomes a cloud of stochastic fluctuations around the deterministic limit cycle. In this regime, the stochastic and deterministic descriptions are essentially equivalent, since the mean behavior of the stochastic system is captured by the deterministic description, as shown in Figure 7.7. Only when the maximum numbers of mRNA and protein molecules fall below a few tens does the noise begin to obliterate the circadian rhythm.

Figure 7.7 Comparison of sustained circadian oscillations and limit cycles predicted by the deterministic and stochastic descriptions. (a) The limit cycle and (b) the oscillations obtained in the deterministic model. (c) The limit cycle and (d) the oscillations obtained in the stochastic simulation using the Gillespie method. The variables are expressed in terms of concentrations and numbers of molecules in the deterministic and stochastic simulations, respectively (from (Gonze et al. 2004))

Beyond this minimal description, a more complex extended model for Drosophila, based on the period (per) and timeless (tim) genes, was established, as shown in Figure 7.8 (Goldbeter 2002). The model describes both branches of negative feedback. After expression of both genes, their respective proteins PER and TIM are phosphorylated at multiple residues. The heterodimer PER-TIM acts as a transcriptional repressor for both genes. The rate of TIM degradation is increased by light, which enables entrainment by the environment. See (Leloup and Goldbeter 1998, Leloup et al. 1999, Leloup and Goldbeter 1999) for more details on the kinetic equations and simulation results. The spontaneous and entrained oscillations are shown in Figure 7.9. The existence of the light–dark (LD) cycle is reflected in the model by a periodic, square-wave variation of the parameter νdT, which represents the maximum rate of TIM degradation. It has been shown that the minimal model based on a single negative autoregulatory feedback loop is sufficient for robust oscillations to emerge (Gonze et al. 2002a). The additional branch enhances system robustness under conditions of continuous darkness, or free running in darkness. In other words, the dual feedback structure contributes to robust fine-tuning of the clock in the case of single-parameter perturbations (Stelling et al. 2004b).
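The stochastic trajectories referred to in this subsection are generated with Gillespie's direct method. As a minimal, self-contained illustration of the algorithm (a toy birth–death process of our own choosing, not the clock model itself), consider a single species produced at a constant rate and degraded by a first-order reaction:

```python
import random

def gillespie_birth_death(k_prod=10.0, k_deg=1.0, x0=0,
                          t_end=2000.0, seed=1):
    """Gillespie direct method for 0 -> X (rate k_prod) and
    X -> 0 (rate k_deg * X); returns the sampled jump chain."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    samples = []
    while t < t_end:
        a1 = k_prod                       # propensity of production
        a2 = k_deg * x                    # propensity of degradation
        a0 = a1 + a2
        t += rng.expovariate(a0)          # exponential waiting time
        if rng.random() * a0 < a1:        # choose the next reaction
            x += 1
        else:
            x -= 1
        samples.append((t, x))
    return samples
```

The stationary distribution of this toy process is Poisson with mean k_prod/k_deg; replacing its two propensities by the 30 elementary steps of the decomposed minimal model yields the kind of simulation shown in Figure 7.7(c)–(d).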

Figure 7.8 The extended network involving negative regulation of genes per and tim. The box delimits the nucleus. The negative feedback is exerted by the nuclear PER-TIM complex on per and tim transcription (from (Goldbeter 2002))

Figure 7.9 Effects of asymmetrical conditions (i.e., different parameter values in the two branches) and of entrainment by an LD cycle. Panels (a)–(c) on the left refer to the case of continuous darkness, whereas panels (d)–(f) on the right pertain to entrainment by a 12:12 LD cycle (from (Leloup and Goldbeter 1998))

7.1.3 Relaxation Oscillators

Oscillators with interlinked positive and negative feedback loops appear frequently in biological systems. Such oscillators may have advantages over pure negative feedback loops in some contexts, e.g., greater robustness to parameter changes and noise (Barkai and Leibler 2000, Vilar et al. 2002). Interlinked positive and negative feedback can produce relaxation oscillations, which exhibit rapid transitions followed by intervals of slow change. In this section, we present a model producing a relaxation oscillation; the oscillation arises from the competition between a strongly self-activating gene A, which also activates a repressor R, and the repression of A by R (Barkai and Leibler 2000). The implementation of such a hysteresis-based activator–repressor relaxation oscillator was also proposed in (Hasty et al. 2001), as shown in Figure 7.10. The relaxation oscillator is constructed as follows: first, both the repressor CI (X) and the activator RcsA (Y) are placed under the control of the same promoter PRM, so that the functional form of the production term f(x) is identical for both proteins. Then, the network is constructed from two plasmids, one for the repressor and one for the activator, and the number of plasmids per cell is controlled for each type. Finally, the interaction of RcsA and CI leads to the degradation of CI (Hasty et al. 2001). Defining the concentrations as the variables, i.e., x = [X] and y = [Y], the equations governing this network are

dx/dt = mx f(x) − γx x − γxy xy,   (7.11)
dy/dt = my f(x) − γy y,   (7.12)

where

f(x) = (1 + x^2 + ασ1 x^4)/(1 + x^2 + σ1 x^4 + σ1σ2 x^6);   (7.13)
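The dynamics of (7.11)–(7.13) can be checked numerically. The sketch below (an illustration of ours, not from the book) integrates the two equations with forward Euler in the dimensionless units of the model, using the parameter values quoted in the caption of Figure 7.10; the initial condition and function names are assumptions:

```python
def f(x, alpha=11.0, sigma1=2.0, sigma2=0.08):
    """Promoter activity (7.13)."""
    num = 1.0 + x**2 + alpha * sigma1 * x**4
    den = 1.0 + x**2 + sigma1 * x**4 + sigma1 * sigma2 * x**6
    return num / den

def simulate_relaxation(mx=10.0, my=1.0, gx=0.1, gy=0.01, gxy=0.1,
                        dt=0.005, t_end=2000.0):
    """Forward-Euler integration of (7.11)-(7.12)."""
    x, y = 1.0, 1.0                     # arbitrary initial condition
    traj = []
    for i in range(int(t_end / dt)):
        dx = mx * f(x) - gx * x - gxy * x * y
        dy = my * f(x) - gy * y
        x += dt * dx
        y += dt * dy
        traj.append((i * dt, x, y))
    return traj
```

Because γy is small, y changes slowly relative to x, which is what produces the fast jumps and slow drifts characteristic of relaxation oscillations.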

here, mx and my are the plasmid copy numbers for the two species, i.e., the numbers of plasmids per cell. The PRM promoter and its binding affinities are shown in Figure 6.5. The production term f(x) can be obtained from the regulation of the PRM operator region of λ phage. The system is a DNA plasmid consisting of the promoter region and the cI gene. The promoter region contains three operator sites known as OR1, OR2, and OR3, as shown in Figure 6.5. The gene cI expresses the repressor CI, which in turn dimerizes and binds to the DNA as a transcription factor. The binding can take place at one of the three binding sites. The binding affinities are such that, typically, the binding proceeds sequentially: the dimer first binds to OR1, then to OR2, and finally to OR3. The biochemical reactions include both fast and slow reactions. Letting X, X2, and D denote the repressor, the repressor dimer, and the DNA promoter site, respectively, we can write the fast reactions as follows:

X + X ⇌ X2 (equilibrium constant K1),   (7.14)
D + X2 ⇌ D1 (K2),   (7.15)
D1 + X2 ⇌ D2D1 (K3),   (7.16)
D2D1 + X2 ⇌ D3D2D1 (K4),   (7.17)

where Di denotes a dimer bound to the ORi site, and Ki = ki/k−i are equilibrium constants. Let K3 = σ1 K2 and K4 = σ2 K2; thus σ1 and σ2 represent the binding affinities relative to the dimer–OR1 affinity. Based on singular perturbation theory, the fast reactions rapidly converge to quasi-equilibrium states. The reactions governing the slow processes are as follows:

D + RNAP → D + RNAP + nX (rate kt),   (7.18)
D1 + RNAP → D1 + RNAP + nX (rate kt),   (7.19)
D2D1 + RNAP → D2D1 + RNAP + nX (rate αkt),   (7.20)
D3D2D1 + RNAP → D3D2D1 + RNAP + nX (rate 0),   (7.21)
X → ∅ (rate kx),   (7.22)

where RNAP denotes the RNA polymerase, n is the number of repressor proteins per mRNA transcript, and α > 1 is the degree to which transcription is enhanced by dimer occupation of OR2. Assuming that D3D2D1 completely terminates transcription, the transcription rate of (7.21) is set to zero. The protein multimers and other complexes can be eliminated by exploiting the inherent separation of time scales (i.e., setting the fast reactions to their equilibrium states). This allows algebraic substitution and yields the equation

dx/dt = mx f(x) − γx,   (7.23)

where γ = kx/(dT n kt p0 √(K1K2)), and dT is the total concentration of DNA promoter sites, which is kept constant, i.e.,

dT = d0 (1 + K1K2 x^2 + σ1 (K1K2)^2 x^4 + σ1σ2 (K1K2)^3 x^6).   (7.24)

The dimensionless variables are defined by t̃ = t kt p0 dT n √(K1K2) and x̃ = √(K1K2) x. For simplicity, we still write x and t in (7.23) in place of x̃ and t̃. Combining (7.23) with the fact that the interaction of RcsA and CI leads to the degradation of CI, we finally obtain (7.11). Similarly, (7.12) can also be obtained.

Figure 7.10 The relaxation oscillator. (a) The schematic of the network. The PRM promoter is used on two plasmids to control the production of the repressor CI (X) and RcsA (Y). After dimerization, the repressor acts to turn on both plasmids through its interaction at PRM. As its promoter is activated, the RcsA concentration increases, leading to an induced reduction of CI. (b) Simulated oscillations arising from the RcsA-induced degradation of the repressor. The parameter values are mx = 10, my = 1, γx = 0.1, γy = 0.01, γxy = 0.1, σ1 = 2, σ2 = 0.08, and α = 11 (from (Hasty et al. 2001))

The network shown in Figure 7.10 is a hysteresis-based oscillator built on interlinked positive and negative feedback; in other words, its construction is based on a two-way discontinuous switch, as shown in Figure 6.6. The positive feedback gives bistability, while the interlinked negative feedback drives the hysteresis, i.e., drives the bistable system back and forth

between its two steady-state regimes. First, consider y to be the signal and x the response, and plot the steady states as a function of y. We obtain an S-shaped signal-response curve, indicating that the network functions as a toggle switch (i.e., replace γ with y in Figure 6.6). For intermediate values of y, the network is bistable. Conversely, plotting y (response) as a function of x, we obtain a simple linear response curve. These curves are referred to as the x-nullcline and the y-nullcline. The intersection point of the two curves represents a steady state, or an equilibrium of the full system, but the system does not settle in this steady state because it is unstable. Instead, the system oscillates in a closed orbit around the steady state. Some other relaxation oscillators have also been constructed (Tyson et al. 2003, Chen and Aihara 2002b, Guantes and Poyatos 2006, Kurosawa et al. 2006). Relaxation oscillators have been shown to have superior noise-rejection properties (Barkai and Leibler 2000) and to facilitate synchronization (McMillen et al. 2002).

7.1.4 Stochastic Oscillators

The constructive roles of noise and disorder in nonlinear biomolecular systems have been extensively studied. The best known example of such a role is the phenomenon of noise-induced oscillations. In addition to oscillations generated by various deterministic mechanisms such as delayed negative feedback, internal rhythms can also be generated by intrinsic or extrinsic noise, even though the corresponding deterministic systems are in steady states. This phenomenon is known as noise-induced coherent motion, which has become an active research topic with a variety of applications. We call such noise-induced oscillators stochastic oscillators. Consider the genetic oscillator shown in Figure 7.11.
The deterministic dynamics of the model are given by the following rate equations:

dDA/dt = θA D′A − γA DA A,   (7.25)
dDR/dt = θR D′R − γR DR A,   (7.26)
dD′A/dt = γA DA A − θA D′A,   (7.27)
dD′R/dt = γR DR A − θR D′R,   (7.28)
dMA/dt = α′A D′A + αA DA − δMA MA,   (7.29)
dA/dt = βA MA + θA D′A + θR D′R − A(γA DA + γR DR + γC R + δA),   (7.30)
dMR/dt = α′R D′R + αR DR − δMR MR,   (7.31)
dR/dt = βR MR − γC AR + δA C − δR R,   (7.32)
dC/dt = γC AR − δA C,   (7.33)

where D′A and DA denote the numbers of activator genes with and without A bound to the promoter, respectively; D′R and DR refer analogously to the repressor promoters. MA and MR denote the numbers of mRNAs of A and R. A and R are the numbers of activator and repressor proteins, and C corresponds to the inactivated complex formed by A and R. The constants α and α′ denote the basal and activated rates of transcription; β, the rates of translation; δ, the rates of spontaneous degradation; γ, the rates of binding of A to the other components; and θ, the rates of unbinding of A from those components. The results obtained by deterministic and stochastic analysis are shown in Figure 7.12. The deterministic result was obtained by numerical integration of (7.25)–(7.33), whereas the stochastic result was obtained by the Gillespie stochastic algorithm. Some parameter values that yield a stable steady state in the deterministic case can produce sustained oscillations in the stochastic case. Therefore, the presence of noise not only changes the behavior of the system by adding more disorder but can also lead to marked qualitative differences (Vilar et al. 2002). To measure the temporal coherence of noise-induced oscillations, we introduce an index, the signal-to-noise ratio (SNR), defined as (Zhou et al. 2008)

SNR = ⟨Tk⟩t / √(Var(Tk)),   (7.34)

where Tk = τk+1 − τk (here τk is the time of the kth firing of the noise-induced oscillator) represents the kth pulse duration, and ⟨·⟩t denotes the average over time. A plot of the SNR versus the noise intensity reveals non-monotonic behavior, which is a signature of stochastic or coherence resonance. The noise intensity at which the SNR attains its maximum gives the amount of noise that plays the best constructive role in the system (Hou and Xin 2003, Li and Lang 2008).

Figure 7.11 A genetic oscillator (from (Vilar et al. 2002))
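As a concrete illustration of (7.34) (the function below is a sketch of ours, not from the cited work), the SNR of a pulse train can be estimated directly from the recorded firing times:

```python
import math

def snr(firing_times):
    """Signal-to-noise ratio (7.34): the mean interspike interval
    divided by the standard deviation of the intervals."""
    T = [t2 - t1 for t1, t2 in zip(firing_times, firing_times[1:])]
    mean = sum(T) / len(T)
    var = sum((Tk - mean) ** 2 for Tk in T) / len(T)
    return mean / math.sqrt(var) if var > 0 else float("inf")
```

A perfectly periodic train has infinite SNR; increasing timing jitter lowers it, and scanning this quantity against the noise intensity produces the non-monotonic resonance curve described above.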

Figure 7.12 Time evolution of R for (a) the deterministic equations (7.25)–(7.33) and (b) the stochastic version of the model. The parameter values are αA = 50 h−1, α′A = 500 h−1, αR = 0.01 h−1, α′R = 50 h−1, βA = 50 h−1, βR = 5 h−1, δMA = 10 h−1, δMR = 0.5 h−1, δA = 1 h−1, δR = 0.05 h−1, γA = 1 mol−1 h−1, γR = 1 mol−1 h−1, γC = 2 mol−1 h−1, θA = 50 h−1, and θR = 100 h−1 (from (Vilar et al. 2002))

7.2 Design of Oscillating Networks with Negative Loops

The relationship between network topology and functionality is an important research topic because the topology of a network plays a major role in determining its functions. For example, networks with only positive feedback loops have no dynamical attractors except stable equilibria (Kobayashi et al. 2003), as explained in Chapter 6. A system-level understanding of topological structures and biological functions requires a set of principles and methodologies that link the behavior of molecules to network characteristics and functions. The aim of this section is to introduce a general framework for designing and analyzing oscillatory networks by using cyclic feedback networks, which are also closely related to synthetic biology. Many oscillatory networks fall within the scope of such network structures, e.g., the Goodwin model (Goodwin 1965), Goldbeter’s single loop model (Goldbeter 1995), and the synthetic oscillator repressilator (Elowitz and Leibler 2000). One desirable property of a cyclic network is that its omega-limit set is composed of only periodic orbits and equilibria. Such a property drastically reduces the difficulty of theoretical analysis and design of oscillators. Negative cyclic networks satisfying certain conditions have no stable equilibria but have stable periodic oscillations. In other words, the asymptotic dynamics of such networks obey the Poincaré–Bendixson theorem, even though the system is high-dimensional. Such a property is clearly ideal for designing or modeling cellular oscillators. Next, we describe the theoretical model of cyclic feedback networks and then present its general representation by relaxing several constraints.

7.2.1 Theoretical Model of Cyclic Feedback Networks

A network with cyclic feedback loops, also known as a cyclic feedback network (CFN), can be represented generally by functional differential equations as follows (Mallet-Paret and Sell 1996a, Mallet-Paret and Sell 1996b):

dxi(t)/dt = fi(t, xi−1(t − αi), xi(t), xi+1(t − βi)), 0 ≤ i ≤ n,   (7.36)

where the index i is taken mod (n + 1), xi ∈ R+, fi : R+^4 → R, αi ∈ R, and βi ∈ R. Clearly, there are n + 1 nodes in the network. Relations are imposed on the time delays, i.e.,

αi = −βi−1 for 1 ≤ i ≤ n,   (7.37)

which means either αi = βi−1 = 0, or αi > 0 without the edge from node i to node i − 1 and with fi independent of the variable xi−1. Feedback conditions are also imposed on the nonlinearities fi in (7.36) as follows:

fi(t, u, 0, v) ≥ 0 if δ^i_− u ≥ 0 and δ^i_+ v ≥ 0,  and  fi(t, u, 0, v) ≤ 0 if δ^i_− u ≤ 0 and δ^i_+ v ≤ 0,   (7.38)

where δ^i_± ∈ {−1, 1} are constants satisfying

δ^i_− = δ^{i−1}_+,   (7.39)

which implies either δ^i_− = δ^{i−1}_+ = 0 or δ^{i−1}_+ > 0 with fi independent of the variable xi−1. A further assumption is that f0 is independent of its second argument,

f0(t, u, w, v) = f0(t, w, v).   (7.40)

Although (7.36) has multiple time delays, it can be reduced to a canonical form by using the transformation

yi(t) = σi xi(rt − γi),   (7.41)

for 0 ≤ i ≤ n, where

σi = (sgn r)^i Π_{j=0}^{i−1} δ^j_+,  r = Σ_{i=0}^{n} βi,  and  γi = Σ_{j=0}^{i−1} βj   (7.42)

with σ0 = 1 and γ0 = 0, and by setting

δ* = (sgn r)^{n+1} Π_{j=0}^{n} δ^j_+.   (7.43)

The canonical form of a cyclic network can be written as follows:

ẏ0(t) = g0(y0(t), y1(t)),   (7.44)
ẏi(t) = gi(yi−1(t), yi(t), yi+1(t)), 1 ≤ i ≤ n − 1,   (7.45)
ẏn(t) = gn(yn−1(t), yn(t), y0(t − 1)),   (7.46)

where the functions gi are given by

gi(t, u, w, v) = σi r fi(rt − γi, σi−1 u, σi w, σi+1 v)   (7.47)

with

g0(t, 0, v) ≥ 0 if v ≥ 0, and ≤ 0 if v ≤ 0,   (7.48)
gi(t, u, 0, v) ≥ 0 if u ≥ 0 and v ≥ 0, and ≤ 0 if u ≤ 0 and v ≤ 0,   (7.49)
gn(t, u, 0, v) ≥ 0 if u ≥ 0 and δ* v ≥ 0, and ≤ 0 if u ≤ 0 and δ* v ≤ 0.   (7.50)

To model a cellular oscillator by a cyclic network, we need to describe the network structure through the interaction graphs introduced in Chapters 2–3 and 6. Let sij = −1, 0, 1 represent a negative, no, and a positive interaction from node j to node i, respectively, and let τij denote the time delay from node j to node i. For any two nodes i and i + 1 (1 ≤ i ≤ n − 1) of a CFN, if si,i+1 = 0 while si+1,i is non-zero, the interaction between these two nodes is said to be one-directional. If both si,i+1 and si+1,i are non-zero, the interaction between these two nodes is said to be bi-directional.

7.2.2 A Special Cyclic Feedback Network

According to the mathematical expression of cyclic feedback systems, the corresponding molecular network can be obtained as follows (Wang et al. 2004, Wang et al. 2005).

Assumption 7.2.1 For i = 1, ..., n − 1,
1. the reaction rate fi of the ith component of the network may depend on the (i − 1)th, the (i + 1)th, and the ith components;
2. if the interaction from the ith component to the (i + 1)th component is positive (or negative), then the interaction from the (i + 1)th component to the ith component is non-negative (or non-positive);
3. the last reaction rate fn depends on the (n − 1)th and the nth chemical components.

In addition, if both si,i+1 and si+1,i are non-zero, then the delays τi,i+1 = τi+1,i = 0; otherwise, τi+1,i can be any finite non-negative real number. Moreover, τii = 0 for 0 ≤ i ≤ n. Note that the 0th and the (n + 1)th nodes represent the nth node and the 1st node, respectively.

Therefore, the CFN satisfying Assumption 7.2.1 takes the following form:

ẋ1(t) = f1(x2(t − τ12), x1(t), xn(t − τ1n)),   (7.51)
ẋi(t) = fi(xi+1(t − τi,i+1), xi(t), xi−1(t − τi,i−1)), 2 ≤ i ≤ n − 1,   (7.52)
ẋn(t) = fn(xn(t), xn−1(t − τn,n−1)),   (7.53)

where τi+1,i = τi,i+1 = 0 if the reaction rates fi and fi+1 depend on xi+1 and xi, respectively, i.e., both si,i+1 and si+1,i are non-zero; this implies that the interaction between the two nodes i and i + 1 is bi-directional. Otherwise, if the interaction between them is one-directional, i.e., si,i+1 = 0, then τi+1,i is a non-negative finite real number. In addition, all self-feedbacks have no time delays, i.e., τii = 0 for all 0 ≤ i ≤ n. Assumption 7.2.1 also requires that

(∂fi(xi+1, xi, xi−1)/∂xi+1)(∂fi+1(xi+2, xi+1, xi)/∂xi) ≥ 0,   (7.54)

where ∂fi+1(xi+2, xi+1, xi)/∂xi ≠ 0 for 1 ≤ i ≤ n and all x, which indicates that for any two neighboring components i and i + 1, the interaction from the (i + 1)th component to the ith component has the same type as that from the ith component to the (i + 1)th component, or is zero. In other words, if si+1,i = 1 (or −1), then si,i+1 = 1 (or −1) or si,i+1 = 0. Because of the one-directional interaction from the nth component to the 1st component, we have sn1 = 0. In addition, there is a further restriction on the model, stated in the following Assumption 7.2.2, which will be relaxed in generalized CFNs.

Assumption 7.2.2 Any two neighboring interactions, i.e., the interaction from node i − 1 to node i and the interaction from node i to node i + 1, have opposite signs for i ≠ n.

Assumption 7.2.2 determines the signs of the neighboring interactions except at the nth chemical component, i.e., si+1,i si,i−1 = −1 for i ≠ n. Such an assumption on the network structure limits the applicability of cyclic networks. For example, Goldbeter’s single loop model does not satisfy this assumption. In the next section, we will extend the CFNs by eliminating Assumption 7.2.2. According to Assumption 7.2.1, it is clear that the interaction between node 1 and node n is one-directional, i.e., sn1 = 0; that is, Assumption 7.2.1 requires that there be at least one one-directional interaction between nodes in a cyclic network. Figure 7.13 schematically illustrates an example of the basic structure with five nodes. The signs + and − on the edges indicate s = 1 and −1, respectively. Note that s can be zero for the interactions represented by the dotted lines, whereas s must be nonzero for the interactions represented by the solid lines.


Figure 7.13 An interaction graph of a CFN with a negative largest loop. A feedback represented by solid lines cannot be zero, and a feedback represented by dashed lines can be zero (i.e., the interaction can be eliminated). Each node may have a linear or nonlinear, positive or negative self-feedback loop, which is not shown. The interactions between any two neighboring nodes have opposite signs, except in the case of node 5

Although there are multiple time delays in (7.51)–(7.53), we can reduce all of them equivalently to a single total time delay τ by using the transformation

yi(t) = xi(t − γi)   (7.55)

for 1 ≤ i ≤ n, where

γi = Σ_{j=i+1}^{n} τj,j−1   (7.56)

for 1 ≤ i ≤ n − 1, with γn = 0. It is easy to show that by the transformation (7.55), the system (7.51)–(7.53) can be equivalently transformed into a canonical form of the CFN with only one total time delay:

ẏ1(t) = f1(y2(t), y1(t), yn(t − τ)),   (7.57)
ẏi(t) = fi(yi+1(t), yi(t), yi−1(t)), 2 ≤ i ≤ n − 1,   (7.58)
ẏn(t) = fn(yn(t), yn−1(t)),   (7.59)

where the total time delay τ = Σ_{i=1}^{n} τi+1,i; note that τn+1,n = τ1n. In contrast to (7.51)–(7.53), the canonical cyclic network (7.57)–(7.59) is much easier to analyze since it involves only a single delay, and there are only phase differences between xi and yi according to the transformation (7.55). When using (7.57)–(7.59) to model cellular oscillators, the time delays can be viewed as a simplified representation of the accumulated time consumed mainly in transcription, translation, signal transduction, translocation, and diffusion. For simplicity, only the case of one delay for each component is considered, although multiple direct interactions from the jth component to the ith component with different delays may exist. For any two nodes with bi-directional interactions, the time delay between them must be zero according to Assumption 7.2.1; for a one-directional interaction between any two nodes, the delay can be any non-negative finite real number.

In addition to the feedback loops connecting two neighboring nodes, which are all positive, there is a unique largest loop that connects all nodes, as shown in Figure 7.13. A cyclic network with a positive largest loop falls into the scope of cooperative dynamical systems, i.e., special structures of PFNs, which have been thoroughly investigated for both FDEs and ODEs. Cooperative systems exhibit very regular behavior; e.g., typical solutions tend to equilibria in the case of autonomous systems, or to periodic solutions in the case of non-autonomous periodic systems. In other words, there are no stable periodic solutions for autonomous monotone dynamical systems with only positive feedback loops (Smith 1995, Kobayashi et al. 2003). When the largest loop is negative, the network (7.51)–(7.53) is said to be a negative cyclic feedback network. Therefore, to ensure the existence of periodic solutions, the following important assumption should be made.

Assumption 7.2.3 The feedback loop connecting all nodes, i.e., the largest loop, is negative.

A CFN with a negative largest loop is said to be a negative cyclic network. An example of a negative CFN is shown in Figure 7.13. The largest loop, i.e., 1 → 2 → 3 → 4 → 5 → 1, is a negative feedback loop because s21 s32 s43 s54 s15 = −1. According to Assumptions 7.2.1–7.2.2, it is clear that a negative cyclic network requires that
1. the largest loop be negative, while all self-feedback loops can have arbitrary signs,
2. all loops excluding the self-feedback loops and the largest loop be positive,
3. except at the nth node, the interactions between neighboring nodes have opposite signs, e.g., if s21 is positive, then s32 must be negative, as shown in Figure 7.13.

The time delays do not change the location of an equilibrium of (7.51)–(7.53) but can change its stability. If all the eigenvalues of the characteristic equation have negative real parts, then the equilibrium is stable and there is no oscillation near the equilibrium.
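A minimal numerical example of a negative cyclic network (an illustration of ours; the Hill-type rates and parameter values are assumptions, not from the text) is a delay-free three-node network in which each component represses the next, so that the largest loop has sign (−1)^3 = −1:

```python
def repress(u, beta=10.0, n=4):
    """Monotone decreasing (repressive) production term."""
    return beta / (1.0 + u ** n)

def simulate_negative_cfn(t_end=100.0, dt=0.001):
    """Forward-Euler integration of a three-node negative cyclic
    network x3 -| x1 -| x2 -| x3 with linear degradation."""
    x = [1.0, 1.2, 1.5]                 # asymmetric initial condition
    traj = []
    for i in range(int(t_end / dt)):
        dx = [repress(x[j - 1]) - x[j] for j in range(3)]
        x = [a + dt * b for a, b in zip(x, dx)]
        traj.append((i * dt, tuple(x)))
    return traj
```

With these values the loop gain at the unique symmetric equilibrium exceeds the Hopf threshold, so the equilibrium is unstable and trajectories approach a periodic orbit, in line with the Poincaré–Bendixson-type behavior of negative cyclic networks; making all interactions positive instead yields convergence to an equilibrium.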
On the other hand, when a parameter value, e.g., τ, changes, if any complex eigenvalue crosses the imaginary axis, then a stable equilibrium loses its stability and an oscillation appears through a typical Hopf bifurcation. Let x̄ = (x̄1, ..., x̄n) be an equilibrium of (7.51)–(7.53). Define

A(λ) =
[ f11              f12 e^(−τ12 λ)    0                ···   f1n e^(−τ1n λ)
  f21 e^(−τ21 λ)   f22               f23 e^(−τ23 λ)   ···   0
  0                f32 e^(−τ32 λ)    f33              ···   0
  ································································
  0                0                 ···   fn,n−1 e^(−τn,n−1 λ)   fnn ],   (7.60)

where

fij = ∂fi/∂xj |x=x̄   (7.61)

for 0 ≤ i, j ≤ n. Clearly, A(0) is the Jacobian matrix of f = (f1, ..., fn) with respect to x. Then, the characteristic equation evaluated at the equilibrium x̄, i.e.,

det(λI − A(λ)) = 0,   (7.62)

has the form

bn λ^n + bn−1 λ^(n−1) + ··· + b0 + (−1)^(n+1) B e^(−λτ) = 0,   (7.63)

where I is the n × n identity matrix, B = f1n Π_{i=1}^{n−1} fi+1,i, bn = (−1)^n, and τ = Σ_i τi+1,i. B represents the total feedback strength. Notice that the coefficients bj for j = 0, ..., n − 1 are functions of fk,k+1 fk+1,k for 1 ≤ k ≤ n and of fii for 1 ≤ i ≤ n, except f1n; this implies that if fk,k+1 is zero, all effects of the interactions between nodes k and k + 1 on the bj disappear, but their effects on B remain. On the basis of the monotone dynamical system theory and the discrete Lyapunov functional, Mallet-Paret and Sell (Mallet-Paret and Sell 1996a, Mallet-Paret and Sell 1996b) obtained the Morse decomposition and established that a Poincaré–Bendixson type theorem holds for (7.51)–(7.53) when Assumptions 7.2.1–7.2.2 are satisfied. Let the natural phase space for (7.51)–(7.53) be C(K), where

K = [0, τ21] ∪ [0, τ32] ∪ ··· ∪ [0, τn,n−1] ∪ [0, τ1n] ∪ N.   (7.64)

Here, N = {0, 1, 2, ...}.

Theorem 7.1. (Poincaré–Bendixson type theorem) Consider the differentiable system (7.51)–(7.53). Assume that Assumptions 7.2.1–7.2.2 and (7.54) hold. Let x(t) be a solution of (7.51)–(7.53) on some time interval [t0, ∞). Let ω(x) ⊆ C(K) denote the omega limit set of this solution in the phase space C(K). Then, either

1. ω(x) is a single non-constant periodic orbit; or
2. for each solution u(t) of (7.51)–(7.53) in ω(x), i.e., for solutions with u(t) ∈ ω(x) for all t ∈ R, we have
$$\alpha(u) \cup \omega(u) \subseteq E, \qquad (7.65)$$
where α(u) and ω(u) denote the alpha and omega limit sets of u, respectively, and E ⊆ C(K) denotes the set of equilibria.

This theorem does not provide sufficient conditions for the existence of periodic solutions, but it indicates that the omega limit sets of cyclic feedback systems are composed of only periodic orbits and equilibria, which is a desirable property for modeling cellular oscillators provided it can be shown that there is no stable equilibrium. Take the total delay τ as a parameter and assume that when τ = 0, there is at least one pair of complex eigenvalues for the characteristic equation (7.63). By changing τ, the asymptotic stability of x̄ can be detected from the roots of (7.63).
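The role of τ can be illustrated on the simplest instance of such a characteristic equation: the scalar delayed negative feedback ẋ(t) = −a x(t) − b x(t − τ) with b > a > 0 (a toy stand-in, not an equation from the text), whose characteristic equation λ + a + b e^{−λτ} = 0 acquires a purely imaginary root λ = iv̄ with v̄ = √(b² − a²) at the critical delay τ̄ = arccos(−a/b)/v̄, in the same spirit as (7.68). A minimal numerical sketch of the crossing:

```python
import cmath
import math

# Scalar delayed negative feedback: x'(t) = -a*x(t) - b*x(t - tau), b > a > 0.
# Characteristic equation: lambda + a + b*exp(-lambda*tau) = 0.
a, b = 1.0, 2.0
v = math.sqrt(b ** 2 - a ** 2)      # imaginary part of the critical root
tau_c = math.acos(-a / b) / v       # critical delay (Hopf threshold), ~1.209

# At tau = tau_c, lambda = i*v satisfies the characteristic equation.
residual = abs(1j * v + a + b * cmath.exp(-1j * v * tau_c))
print(tau_c, residual)              # residual is ~0

def simulate(tau, t_end=100.0, dt=1e-3):
    """Euler integration of the delay equation with constant history x = 0.1."""
    lag = int(round(tau / dt))
    x = [0.1] * (lag + 1)           # history buffer
    traj = []
    for _ in range(int(t_end / dt)):
        x_new = x[-1] + dt * (-a * x[-1] - b * x[-1 - lag])
        x.append(x_new)
        traj.append(x_new)
    return traj

# Below the threshold the equilibrium is stable; above it, oscillations grow.
decaying = simulate(tau=1.0)        # tau < tau_c
growing = simulate(tau=1.4)         # tau > tau_c
n = len(decaying)
print(max(abs(u) for u in decaying[-n // 6:]) < max(abs(u) for u in decaying[:n // 6]))
print(max(abs(u) for u in growing[-n // 6:]) > max(abs(u) for u in growing[:n // 6]))
```

The same scan-over-τ logic applies to (7.63), except that the crossing frequency v̄ must then be obtained from (7.66) rather than in closed form.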


7 Design of Synthetic Oscillating Networks

Since the solution exists and is bounded, the omega limit set is non-empty, which means that all asymptotic solutions are periodic orbits provided that there is no stable equilibrium. Therefore, the basic idea is to destabilize all equilibria, which can be carried out mainly by linear analysis in most cases. For instance, one way to generate a global periodic oscillation is to identify all equilibria and then make all of them unstable by tuning the related parameters. Next, we first state theoretical results for the local analysis with respect to initial values, which guarantee that a negative cyclic network converges to a local periodic orbit by tuning τ as a parameter (Wang et al. 2005).

Theorem 7.2. Suppose that Assumptions 7.2.1–7.2.3 hold for (7.51)–(7.53), that the following equation has one nonzero real root v̄ at x̄:
$$\Big(\sum_{i\in I_e}(-1)^{\frac{i}{2}}b_i v^i\Big)^2 + \Big(\sum_{i\in I_o}(-1)^{\frac{i-1}{2}}b_i v^i\Big)^2 - B^2 = 0, \qquad (7.66)$$
and that, further,
$$\frac{1}{2B^2\bar{v}^2}\left\{\Big(\sum_{k\in I_o}(-1)^{\frac{k-1}{2}}b_k\bar{v}^k\Big)\Big(\sum_{i\in I_o}(-1)^{\frac{i-1}{2}}\,i\,b_i\bar{v}^i\Big) - \Big(\sum_{k\in I_e}(-1)^{\frac{k}{2}}b_k\bar{v}^k\Big)\Big(\sum_{i\in I_e,\,i\ge 2}(-1)^{\frac{i-2}{2}}\,i\,b_i\bar{v}^i\Big)\right\} \neq 0, \qquad (7.67)$$
where Ie = {i : mod(i, 2) = 0, 0 ≤ i ≤ n}, Io = {i : mod(i, 2) = 1, 0 ≤ i ≤ n}, mod(x, y) denotes the remainder after x is divided by y, and x̄ is stable with at least a pair of complex eigenvalues at τ = 0. Then, there exists τ̄ such that
$$\bar{\tau} = \frac{1}{\bar{v}}\arccos\left(\frac{(-1)^n\sum_{i\in I_e}(-1)^{\frac{i}{2}}b_i v^i}{B}\right), \qquad (7.68)$$
where the range of arccos is [0, π], and (7.51)–(7.53) will converge to a stable periodic orbit when τ is near τ̄ and τ > τ̄.

This theorem indicates that if there exists τ̄ at which λ = jv̄, with j = √−1, is a root of (7.63) and the derivative of the real part of the eigenvalues at τ = τ̄ is not zero, then (7.51)–(7.53) will converge to a stable periodic orbit, when τ is near τ̄ and τ > τ̄, for any initial condition near x̄ except x̄ itself. Therefore, a negative cyclic network can have periodic orbits, which can be proved by the Hopf bifurcation theorem for FDEs. Based on the local convergence of Theorem 7.2, global convergence conditions for non-trivial periodic orbits can be derived as follows (Wang et al. 2005):

Theorem 7.3. Assume that Assumptions 7.2.1–7.2.3 hold for (7.51)–(7.53) and the feedback for the total one-directional interaction is sufficiently strong,


i.e., $\frac{\partial f_1}{\partial x_n}\prod_i \frac{\partial f_{i+1}}{\partial x_i}$ for those i with $\frac{\partial f_i}{\partial x_{i+1}} = 0$ at any equilibrium is sufficiently large; then, there exists τ̄ such that
$$\bar{\tau} = \frac{1}{\bar{v}}\arccos\left(\frac{(-1)^n\sum_{i\in I_e}(-1)^{\frac{i}{2}}b_i v^i}{B}\right), \qquad (7.69)$$
where the range of arccos is [0, π], and (7.51)–(7.53) will converge to a stable periodic orbit for almost all initial conditions when τ > τ̄.

The product $\frac{\partial f_1}{\partial x_n}\prod_i \frac{\partial f_{i+1}}{\partial x_i}$ for all those i with $\frac{\partial f_i}{\partial x_{i+1}} = 0$ represents the total strength of the one-directional interaction, i.e., the product of those interactions ∂f_{i+1}/∂x_i with s_{i,i+1} = 0 but s_{i+1,i} ≠ 0. The theorem can be proven by showing that all equilibria are unstable because of the existence of an eigenvalue with a positive real part and that the real part never returns to the negative region for any τ > τ̄. Therefore, there is no stable equilibrium, but there are stable periodic orbits because of the existence of the non-empty omega limit set. Although the conditions appear quite stringent in their expressions, it is significant that certain common mechanisms in cellular systems actually satisfy them.

Theorem 7.4. Assume that Assumptions 7.2.1–7.2.3 hold for (7.51)–(7.53). If det(A(0)) < 0 at all equilibria, then for almost all initial conditions, (7.51)–(7.53) will converge to a stable periodic solution, where A(λ) is defined by (7.60).

The virtue of this theorem is its strength: it ensures that (7.51)–(7.53) converges to a stable periodic orbit regardless of any non-negative time delays. The theorem can be proven by showing that all equilibria are unstable for any non-negative delays (Wang et al. 2005). It is generally not easy to guarantee stable behavior such as equilibria and periodic orbits even for a small network with a few components, due to the nonlinearity of the system. Theorem 7.3 implies that if the feedback for the total one-directional interaction is sufficiently strong, a stable periodic orbit exists. In other words, the omega limit set is non-empty and includes only periodic orbits. On the other hand, Theorem 7.4 indicates that when the determinant of the Jacobian matrix is negative at all equilibria, the system (7.51)–(7.53) will converge to periodic orbits from almost all initial conditions and for any non-negative time delay. When all the feedback loops of a system are positive, its orbits have a strong tendency to converge to equilibria. However, for negative cyclic feedback networks, when the conditions of Theorem 7.3 or Theorem 7.4 hold, only stable periodic orbits exist and constitute the omega limit sets, which is quite different from positive feedback networks. Therefore, if the conditions of Theorem 7.3 or Theorem 7.4 are satisfied, (7.51)–(7.53) will inevitably converge to stable periodic orbits. In other words, negative CFNs have ideal properties for constructing oscillatory networks and therefore can be used to


model and design cellular oscillators. Although Theorem 7.2 is a local convergence theorem, it becomes a global convergence result for almost all initial conditions when there is at most one equilibrium, as stated in Corollary 7.5.

Corollary 7.5. Assume that all conditions of Theorem 7.2 hold. If det(A(0)) ≠ 0 for all x in a convex set X, then when τ is near τ̄ and τ > τ̄, (7.51)–(7.53) will converge to a stable periodic orbit for almost all initial conditions.

By showing that there is at most one unstable equilibrium, we can prove the corollary. As a simple example, we can verify that the following system satisfies the conditions of Corollary 7.5:
$$\dot{x}_i(t) = \frac{1}{1 + x_j^2(t - \tau_j)} - x_i(t), \qquad (7.70)$$
where i and j have the following three pairs of values: (i = 1, j = 2), (i = 2, j = 3), and (i = 3, j = 1). It is clear that det(A(0)) ≠ 0, or more exactly det(A(0)) < 0, for all x > 0.

7.2.3 A General Cyclic Feedback Network

Negative CFNs can be used for modeling and designing cellular oscillators when the feedback for the total one-directional interaction is strong enough. However, the special feedback structure in Assumption 7.2.2 requires that the interactions be opposite for neighboring nodes, except the last one; this is difficult to satisfy for many cellular systems and therefore may limit the potential applications. In fact, such a restriction can be eliminated by a coordinate transformation. In other words, the original CFNs with a special feedback structure can be extended to general ones with any type of interaction between neighboring nodes. Moreover, since there is no limitation on the dimensionality of the general cyclic feedback networks, a cellular oscillator can be modeled and designed even by a large-scale system.

Choose any node i and change the types or signs of all interactions associated with it. Denote the system obtained under such a transformation as
$$\dot{y}(t) = g(y_\tau), \qquad (7.71)$$
where the transformation P for x = Py is defined by the matrix
$$P = \begin{pmatrix} \sigma_1 & & 0 \\ & \ddots & \\ 0 & & \sigma_n \end{pmatrix} \qquad (7.72)$$
with σi = −1 and σj = 1 for all j ≠ i. By substituting x = Py into (7.71), we get
$$\dot{x}(t) = f(x_\tau) \equiv Pg(Px_\tau), \qquad (7.73)$$


where P = P⁻¹ is used. It can be easily proven that (7.73) is qualitatively equivalent to (7.71), since P is a reversible, one-to-one map. Therefore, the following theorem can be obtained (Wang et al. 2005).

Theorem 7.6. Assume that Assumption 7.2.1 holds for the CFN (7.51)–(7.53). The transformation (7.72), which changes the signs of all interactions connected to any node i (1 ≤ i ≤ n), does not change its dynamical properties.

Theorem 7.6 implies that the dynamics of (7.71), in which the signs of all the interactions connected to any node i are changed, is qualitatively equivalent to that of (7.73). Moreover, it is also easy to show that such a transformation does not change the type of any feedback loop, which implies that a negative cyclic network remains a negative cyclic network under this transformation. A simple case of the transformation procedure is shown in Figures 7.13 and 7.14 (a). The difference between them is the types or signs of all interactions connected to node 2. The two cases in Figure 7.14 are dynamically equivalent to the case in Figure 7.13.
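The equivalence under (7.72) can also be verified numerically. The sketch below integrates the three-node system (7.70), with the delays set to zero for simplicity, together with its transformed version g(y) = Pf(Py) obtained by flipping the signs at node 2, and confirms that x(t) = Py(t) along the whole trajectory; the initial condition is an illustrative choice:

```python
# Three-node cyclic system (7.70) with zero delays, and its transformed
# version under P = diag(1, -1, 1) (signs of node 2 flipped), P = P^{-1}.

def f(x):
    return [1.0 / (1.0 + x[1] ** 2) - x[0],
            1.0 / (1.0 + x[2] ** 2) - x[1],
            1.0 / (1.0 + x[0] ** 2) - x[2]]

sigma = [1.0, -1.0, 1.0]

def apply_P(v):
    """Multiply by the diagonal transformation matrix P."""
    return [s * c for s, c in zip(sigma, v)]

def g(y):
    """Transformed vector field: y' = P f(P y)."""
    return apply_P(f(apply_P(y)))

# Euler integration of both systems from consistent initial conditions.
dt, steps = 1e-3, 20000
x = [0.2, 0.9, 0.5]        # illustrative initial condition
y = apply_P(x)             # transformed initial condition
max_err = 0.0
for _ in range(steps):
    x = [xi + dt * di for xi, di in zip(x, f(x))]
    y = [yi + dt * di for yi, di in zip(y, g(y))]
    Py = apply_P(y)
    max_err = max(max_err, max(abs(u - w) for u, w in zip(x, Py)))

print(max_err)  # stays at machine-precision level: the dynamics are equivalent
```

The same check works with delays, at the cost of carrying a history buffer per variable, since the transformation acts only on the signs of the couplings.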

Figure 7.14 A simple example of the transformation procedure. All these networks have dynamical properties that are qualitatively equivalent to those of Figure 7.13. (a) The signs of all interactions connected to node 2 are changed. (b) On the basis of (a), the signs of all interactions connected to node 3 are changed

An important property of the transformation is that any combination of interactions can be obtained under such consecutive transformations, which actually do not change the type of any loop. By performing such a transformation for each node, we can obtain different cyclic networks with different combinations of interactions, which are all qualitatively equivalent. Therefore,


Assumption 7.2.2 can be eliminated to obtain more general cyclic networks by the reversible, one-to-one map.

Corollary 7.7. Assume that Assumptions 7.2.1 and 7.2.3, excluding Assumption 7.2.2, are satisfied; then, Theorems 7.1–7.4 and Corollary 7.5 still hold for the CFN (7.51)–(7.53).

Figure 7.15 The Goldbeter single loop model with a time delay to show the slow diffusion, transportation, and/or signal transduction processes of molecules between the nucleus and the cytosol. It is a general negative cyclic network (from (Chen and Wang 2006))

In contrast to the restricted structure of the special CFNs, Corollary 7.7 indicates that general negative cyclic networks with any type of interactions on each node can be used to design cellular oscillators. Figures 7.13–7.14 show different network structures, which have qualitatively equivalent dynamical properties. An example of the generalized cyclic network is Goldbeter’s single loop model, as shown in Figure 7.15. See (Wang et al. 2005) for all proofs of the theorems and numerical results for the delayed Goldbeter’s single loop model and the synthetic repressilator.

7.3 Construction of Oscillators by Non-monotone Dynamical Systems

Cellular functions, such as circadian rhythms, are carried out by interlocked feedback networks, which are made up of many interacting molecules or modules. Understanding how the networks work requires combining phenomenological analysis with molecular and modular studies. It is thus important to


consider both the functions and structures of each module and then elucidate the complex sets of molecules that interact to form functional networks. Therefore, the general principles that dominate the structure and behavior of the interlocked networks may be discovered by understanding each module and the biochemical connectivity among the modules. In this section, we show how monotone modules with simple dynamics can be used to construct non-monotone interlocked feedback networks functioning as cellular oscillators.

A dynamical system in the standard sense of control theory, with inputs and outputs, is written as
$$\dot{x} = f(x, u), \qquad y = h(x), \qquad (7.74)$$
with a state space X, an input set U, and an output set Y. A dynamical system is said to be monotone if the following property holds with respect to the orders (see partial order or vector order (4.72)) on states and inputs (Smith 1995, Angeli and Sontag 2003):
$$\xi_1 \ge \xi_2 \;\&\; u_1 \ge u_2 \;\Rightarrow\; x(t; u_1, \xi_1) \ge x(t; u_2, \xi_2) \quad \text{for all } t \ge 0, \qquad (7.75)$$
where x(t; u, ξ) ∈ X denotes a solution at time t with initial condition ξ and input u(·), and u1 ≥ u2 means that u1(t) ≥ u2(t) for all t. In a monotone control system, a larger input and/or a larger initial condition will produce a larger output; this is very common in cellular systems. For example, a high mRNA concentration results in a high synthesis rate of its corresponding protein. Monotone systems are one of the most important classes of dynamical systems in theoretical biology (Sontag 2004). Monotone systems with inputs and outputs are important for understanding the interactions between cellular components. Such systems allow the application of the rich theory developed for the classical monotone systems, e.g., theoretical results for PFNs that guarantee convergence of trajectories to equilibria (Kobayashi et al. 2003, Angeli et al. 2004).

The monotone control system (7.74) is said to be endowed with a static input-state characteristic
$$k_x(\cdot) : U \to X \qquad (7.76)$$
if for each constant input u(t) ≡ ū there exists a globally and asymptotically stable equilibrium x̄ = k_x(ū). The static input/output characteristic is defined as K_y(ū) := ȳ = h(k_x(ū)), provided the input-state characteristic exists and h is continuous, as illustrated in Figure 7.16 (Angeli and Sontag 2003, Wang et al. 2006a). The existence of an input–output characteristic implies the uniqueness of the equilibrium; this can be confirmed as follows: view ū ≡ u(t) as a parameter and (7.74) as a feedback closure of an open-loop system with an input and an output; if the open-loop system exhibits a linear response, a Michaelian response, or any response that lacks an inflection point for each ū, the open-loop system is guaranteed to be monostable. By setting the input and output

Figure 7.16 The static input–output characteristic of (7.74): the input-state characteristic x = k_x(u) solves f(x, u) = 0, and the input–output characteristic is K_y(u) = y = h(x) = h(k_x(u))

variables to be equal and thus recovering the original system, it will also be monostable (Angeli et al. 2004). To understand how a large-scale network that is not necessarily monotone is built and how it works, one must develop a precise mathematical description of the network and some intuition about its dynamical properties. Complex networks can often be constructed from simple modules, i.e., sets of interacting components that carry out specific tasks and can be connected together. These simple modules can be used to construct an interlocked feedback network with specific functions, such as cellular oscillators. A monotone module can be represented in the form of the control system (7.74) with the monotonicity condition (7.75). Consider two monotone modules with inputs and outputs, which can be either scalars or vectors:
$$\Sigma_1: \quad \dot{x} = f_x(x, w), \quad y = h_x(x), \qquad (7.77)$$
$$\Sigma_2: \quad \dot{z} = f_z(z, y), \quad w = h_z(z), \qquad (7.78)$$

with U_x = Y_z and U_z = Y_x, where U and Y denote the input and output sets, with the following important assumptions:

1. the module Σ1 is monotone when its input w as well as its output y is ordered according to the standard order induced by the positive real semi-axis;
2. the module Σ2 is monotone when its input y is ordered according to the standard order induced by the positive real semi-axis and its output w is ordered in the opposite order;
3. the static input-state characteristics k_x(·) and k_z(·) and the static input–output characteristics K_y(·) and K_w(·) exist and are monotonically increasing and decreasing, respectively;
4. every solution of the feedback closure of (7.77)–(7.78), i.e., the network Σ defined by (7.79), is bounded.

The first assumption implies that for Σ1, increasing w will cause an increase in y. This assumption can be satisfied if the x-subsystem, i.e., (7.77), is monotone according to (7.75) and h is a monotonically increasing function


of x. The second assumption implies that for Σ2, increasing y will cause a decrease in w. In other words, the static input–output characteristics K_y(·) and K_w(·) are monotonically increasing and decreasing, respectively. The feedback closure network of the two subnetworks Σi (i = 1, 2) is shown in Figure 4.12 and has the form
$$\Sigma: \quad \begin{cases} \dot{x} = f_x(x, h_z(z)), \\ \dot{z} = f_z(z, h_x(x)). \end{cases} \qquad (7.79)$$
The first two conditions imply that the network Σ, which need not be monotone, can be decomposed into two open-loop modules Σ1 and Σ2 with opposite monotonicity, as shown in Figure 7.17. In other words, a non-monotone network functioning as an oscillator can be constructed and designed by integrating two monotone modules Σ1 and Σ2 under delayed negative feedback. The third condition implies that for each constant input and any initial condition, each module will converge asymptotically to a global equilibrium. Although this condition is not trivial to prove rigorously, even for a system of differential equations describing a relatively simple signaling network, it might seem evident from the viewpoint of biochemistry. In addition, it is worth noting that the boundedness of trajectories is generally satisfied in biochemical models because of the conservation of mass and other constraints in a cell.
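For a concrete module, the static characteristics can be evaluated by integrating to steady state under a constant input. The sketch below uses a hypothetical one-dimensional module ẋ = u − x/(1 + x) with output y = h(x) = x (an illustrative choice, not a model from the text), whose input-state characteristic k_x(ū) = ū/(1 − ū) for ū < 1 can be checked against the numerical steady state:

```python
# Numerically evaluate the static input-state characteristic k_x(u_bar) of a
# hypothetical monotone module: x' = u - x/(1+x), output y = h(x) = x.

def k_x_numeric(u_bar, x0=0.0, dt=1e-3, t_end=200.0):
    """Integrate x' = u_bar - x/(1+x) to steady state (Euler)."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (u_bar - x / (1.0 + x))
    return x

def k_x_exact(u_bar):
    """Closed form: u = x/(1+x)  =>  x = u/(1-u), valid for u_bar < 1."""
    return u_bar / (1.0 - u_bar)

print(k_x_numeric(0.5), k_x_exact(0.5))  # both ~1.0

# Monotonicity of the characteristic: a larger constant input yields a larger
# steady state, as condition (7.75) demands for a monotone module.
assert k_x_numeric(0.6) > k_x_numeric(0.5) > k_x_numeric(0.4)
```

Here the characteristic is globally defined only for ū < 1, a reminder that assumption 3 above, the existence of k_x over the whole input set, has to be checked for each module.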

Figure 7.17 Schematic description of I/O monotone modules (7.77)–(7.78) under negative feedback. (a) The incidence graph of Σ1. (b) The incidence graph of Σ2. (c) The feedback closure of Σ1 and Σ2, i.e., the incidence graph of Σ (from (Wang et al. 2006a))


Oscillatory behavior is a strongly nonlinear phenomenon, and thus linear stability theory generally does not work. Although bifurcation analysis is a powerful tool for investigating oscillatory behavior, it can only reveal its local existence and does not provide any qualitative insight into the source of oscillations. Hence, a simple approach was established that can identify the regulation mechanism underlying oscillatory behavior by linear stability analysis alone. It relates the oscillatory behavior of a network to the destabilization of a steady-state in a simple discrete map (Wang et al. 2006a, Angeli and Sontag 2004b). When the source of the instability of the steady-state, rather than the oscillatory behavior itself, is examined, linear stability analysis and feedback control theory can be employed. The presence of time delays is an inevitable feature of biomolecular systems, and time delays generally enrich the possible dynamics and increase the mathematical complexity. By considering the delays in the input and output variables, the correspondence between (7.79) and a discrete map defined by (4.91), i.e.,
$$w_{k+1} = (K_w \circ K_y)(w_k), \qquad (7.80)$$
evolving in U_x can be established. In other words, the map preserves the qualitative characteristics of the network Σ. The non-monotone network Σ is thus composed of the two open-loop monotone modules (7.77) and (7.78) combined with the connectivity (7.80) between them. It has been shown that (7.79) has a globally attractive equilibrium provided that (7.80) has a unique globally attractive steady-state (see Theorem 4.7 in Chapter 4). This provides a sufficient condition for global asymptotic stability of an equilibrium, and hence its violation is a necessary condition for the existence of periodic solutions in (7.79).
It can be shown that the attractors of (7.80) are composed of only periodic orbits or steady-states, as shown in Figure 4.13, and the correspondence between the attractors of (7.79) and (7.80) can be further established, which makes the map ideal as an indicator of when a specific oscillation will occur in (7.79) (Wang et al. 2006a). More precisely, oscillatory behavior in (7.79) can be related to the properties of the two open-loop modules by determining the effects of the interactions on the destabilization of the steady-state in (7.80). The oscillatory behavior in (7.79), which is a strongly nonlinear phenomenon, can thus be traced to the instability of the steady-state in (7.80), which can be easily determined by linear stability analysis. According to Lyapunov stability theory, the stability of the unique steady-state in (7.80) can in most cases be analyzed based on the linearization at the steady-state. Thus, linear system theory can, in principle, predict the mechanisms giving rise to destabilization of the unique steady-state, i.e., the emergence of oscillatory behavior in (7.80), because its attractors are composed of only periodic orbits or steady-states. Using the correspondence between the continuous and discrete systems, the oscillatory behaviors in (7.79) can also be obtained. This property holds if there are no additional bifurcation points between the original bifurcation point and the considered point


in the parameter space. Otherwise, one can choose and change one or more parameters to move the system closer to the original bifurcation point prior to analysis. Accordingly, the mechanisms causing destabilization of the steady-state in (7.80) can be analyzed; i.e., it will be unstable if at least one of the eigenvalues of A has modulus greater than 1, where A = ∂(K_w ∘ K_y)/∂w|_{w_e} and w_e is the unique steady-state of (7.80).
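In the scalar case, this modulus criterion can be checked with a finite difference, and the same instability shows up as the second iterate w2 = F(F(w0)) overshooting an input w0 chosen slightly above the steady-state. A minimal sketch, assuming a hypothetical decreasing composite characteristic F(w) = a/(1 + w²) (an illustrative choice, not one of the models in this chapter), whose steady-state solves w³ + w = a:

```python
# Instability test for the discrete map (7.80): the steady-state w_e is
# unstable when |F'(w_e)| > 1, equivalently when w2 = F(F(w0)) overshoots
# an initial input w0 slightly above w_e.

def make_F(a):
    """Hypothetical decreasing composite characteristic F = K_w o K_y."""
    return lambda w: a / (1.0 + w * w)

def steady_state(a, lo=0.0, hi=10.0):
    """Bisection for w_e with F(w_e) = w_e, i.e., w^3 + w - a = 0."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid ** 3 + mid - a < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def modulus(F, w_e, h=1e-6):
    """|dF/dw| at the steady-state, by a central finite difference."""
    return abs((F(w_e + h) - F(w_e - h)) / (2.0 * h))

def indicator(F, w_e, eps=1e-2):
    """True if w2 > w0 for w0 = w_e + eps (the destabilization indicator)."""
    w0 = w_e + eps
    return F(F(w0)) > w0

for a in (1.5, 3.0):
    F = make_F(a)
    w_e = steady_state(a)
    print(a, modulus(F, w_e) > 1.0, indicator(F, w_e))
# a = 1.5: stable (False, False); a = 3.0: unstable (True, True)
```

Both tests agree here because F is one-dimensional; in the vector case the finite-difference Jacobian and its spectral radius replace the scalar slope.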

Figure 7.18 Schematic depiction of the destabilization mechanism (from (Wang et al. 2006a))

The destabilization of the steady-state in (7.80) can be realized as follows. For any chosen parameter p, denoting the w-coordinate of the steady-state by w_e(p), one can always choose an initial input w_0(p) with 0 < w_0(p) − w_e(p) < ε, where ε is sufficiently small. When w_2(p) > w_0(p), w_e will be unstable and (7.79) will be oscillatory for appropriate delays; otherwise, it will be stable, and so will the equilibrium of (7.79). Moreover, when w_2(p) < w_0(p), one can adjust p until w_2(p) > w_0(p) holds; thus, an oscillation will occur in (7.80) and in (7.79), as shown in Figure 7.18. Therefore, this technique is useful for constructing a cellular oscillator. The technique can also be used to control the amplitude of an oscillation. For w_0(p) − w_e(p) = α > 0, when w_2(p) > w_0(p), the amplitude of the obtained oscillation will be larger than α; otherwise, the map either oscillates with an amplitude smaller than α or converges to a steady-state. The same results hold for (7.79) with appropriate delays. In addition to being an indicator of oscillations, an amplitude robust to changes in delays can also be obtained from (7.80). By understanding the properties of each module and the interactions within the modules that can induce the destabilization in (7.80), a cellular oscillator with networks interlocked by two I/O monotone modules under delayed negative feedback can be constructed. Many well-known models can fall


into the category of interlocked feedback networks, e.g., Goldbeter's minimal model (Goldbeter 1995, Angeli and Sontag 2004b), the repressilator (Elowitz and Leibler 2000, Wang et al. 2006a), and Goldbeter's dual loop model (Leloup and Goldbeter 1998, Wang et al. 2007); these models can thus be analyzed by using such a technique.

Consider the repressilator, a synthetic oscillator with the genes cI, tetR, and lacI, as an example. The construction of the repressilator and the emergence of oscillatory behavior will be illustrated stepwise, although the repressilator can also be analyzed by the general CFNs. The module Σ1 is constructed as follows: the first repressor protein, LacI from E. coli, inhibits the transcription of the second repressor gene, tetR from the tetracycline-resistance transposon Tn10, whose protein product in turn inhibits the expression of the third gene, cI from λ phage. Moreover, there is an input variable w and an output variable y. The regulations from the input w to the first repressor protein LacI and from the protein CI to the output y are assumed to be positive so that the monotonicity conditions are satisfied. The incidence graph of Σ1 with input w and output y is shown in Figure 7.19 (a). The module Σ1 can be described by
$$\dot{p}_1 = \beta w - \beta p_1, \qquad (7.81)$$
$$\dot{m}_3 = \frac{\alpha_3}{1 + p_1^n} - \alpha_0 m_3, \qquad (7.82)$$
$$\dot{p}_3 = \beta m_3 - \beta p_3, \qquad (7.83)$$
$$\dot{m}_2 = \frac{\alpha_2}{1 + p_3^n} - \alpha_0 m_2, \qquad (7.84)$$
$$\dot{p}_2 = \beta m_2 - \beta p_2, \qquad (7.85)$$
with input w and output y = p_2, where p_i (i = 1, 2, 3) denote the proteins LacI, TetR, and CI, respectively, and m_i (i = 2, 3) denote the mRNAs of the genes cI and tetR, respectively. It is easy to show that for any positive parameters β and α_i (i = 0, 2, 3) and constant input w̄, there is a unique globally asymptotically stable equilibrium (w̄, m_3^*, p_3^*, m_2^*, p_2^*) with negative eigenvalues (−β, −β, −β, −α_0, −α_0), where m_3^* = p_3^* = α_3/(α_0(1 + w̄^n)) and m_2^* = p_2^* = α_2/(α_0(1 + (p_3^*)^n)). Hence, its static input–output characteristic can be defined as
$$K_y(\bar{w}) := \alpha_2/(\alpha_0(1 + (p_3^*)^n)). \qquad (7.86)$$

The module Σ2 is composed of only the gene lacI with inhibitory input y and activating output w. It is described by the scalar differential equation
$$\dot{m}_1 = \frac{\alpha_1}{1 + y^n} - \alpha_0 m_1 \qquad (7.87)$$
with input y and output w = m_1. Its monotonicity and the existence of an equilibrium are clear. Its incidence graph is shown in Figure 7.19 (b). Its static input–output characteristic is defined as


Figure 7.19 The construction of the repressilator: (a) monotone open-loop module Σ1; (b) monotone open-loop module Σ2; (c) the repressilator with three mRNAs m_i and three proteins p_i (i = 1, 2, 3). The arrows and bar heads indicate positive and negative regulation, respectively. Time delays τ_{p_j} and τ_{m_i} are omitted (from (Wang et al. 2006a))

$$K_w(\bar{y}) := \alpha_1/(\alpha_0(1 + \bar{y}^n)). \qquad (7.88)$$

According to the regulation between the components in the two modules, i.e., the repressor protein CI inhibits the transcription of the repressor gene lacI, which activates the synthesis of the repressor protein LacI, the repressilator can be constructed by combining (7.81)–(7.85) and (7.87) and closing the feedback loop, as shown in Figure 7.19 (c). The repressilator Σ without delays can thus be described as
$$\dot{m}_i = \frac{\alpha_i}{1 + p_j^n} - \alpha_0 m_i, \qquad (7.89)$$
$$\dot{p}_i = \beta m_i - \beta p_i, \qquad (7.90)$$
where i and j have the following three pairs: (i = 1, j = 2), (i = 2, j = 3), and (i = 3, j = 1). By introducing delays in the input and output components to represent the slow processes of transcription, translation, and transportation of the molecules between the nucleus and the cytoplasm, and by using the transformation developed in (Wang et al. 2005), the correspondence between the repressilator described by the delayed differential equations


$$\dot{m}_i(t) = \frac{\alpha_i}{1 + p_j(t - \tau_{p_j})^n} - \alpha_0 m_i(t), \qquad (7.91)$$
$$\dot{p}_i(t) = \beta m_i(t - \tau_{m_i}) - \beta p_i(t), \qquad (7.92)$$

and the discrete map described by (7.80) can be established according to K_y and K_w defined by (7.86) and (7.88), respectively. The total time delay is defined as $\tau = \sum_{i=1}^3 \tau_{m_i} + \sum_{i=1}^3 \tau_{p_i}$. It is worth noting that the repressilator with delays introduced only in the input and output components is qualitatively equivalent to that with delays introduced in all components, owing to its special structure and the transformation developed in (Wang et al. 2005). Two scenarios of the input–output characteristics in the (w, y) plane are illustrated in Figure 7.20 to show the convergence of (7.80) to a steady-state and to a periodic orbit, respectively. According to the correspondence, an oscillation will also occur with appropriate delays in (7.91)–(7.92). The oscillations corresponding to Figure 7.20 (b), with the amplitude y_A − y_D for protein TetR at different time delays, are shown in Figure 7.21 (a). Therefore, (7.80) can be used not only to indicate the occurrence of oscillations but also to detect the amplitudes that are robust to variation in the delays. The robustness of the amplitude against variation in the delays for different α is shown in Figure 7.21 (b). When τ is small, no oscillation occurs or the amplitude of the oscillations is small. The amplitude of the oscillations increases with increasing τ. Eventually, the amplitude remains almost constant and is robust to variations in τ. Such robust amplitudes are obtained from the iterations of equilibria in the two modules, where the equilibria are robust to variations in the respective delays (Kobayashi et al. 2003). Therefore, oscillations with other amplitudes must be sensitive to variations in τ and thus show poor robustness.

Figure 7.20 The two different asymptotic states in (7.80). Convergence to an asymptotically stable steady-state at α1 = 1.5 (a) and to an asymptotically stable periodic orbit at α1 = 2.5 (b). Other parameters are n = 2, β = 2, α2 = α3 = 2.5, and α0 = 1 (from (Wang et al. 2006a))

Figure 7.21 Oscillations with different delays. (a) Different oscillations with the same amplitudes but different periods for different delays at α = 2.5. (b) Bifurcation diagrams with τ as a bifurcation parameter at α = 2.5 (dotted) and α = 3.5 (solid), respectively, where the maximum and minimum peak values of protein TetR are shown (from (Wang et al. 2006a))

The correspondence of the bifurcation diagrams showing the maximum and minimum peak values of protein TetR as a function of α = α_i (i = 1, 2, 3) for (7.80) and (7.91)–(7.92) at τ = 30, 60, and 90 min is demonstrated in Figure 7.22 (a). The bifurcation diagrams and the maximum and minimum peak values are identical for both cases. Therefore, the dynamical behavior of the repressilator with an appropriate delay is determined by (7.80). Although the delay is important for producing oscillations, it has little effect on the bifurcation diagrams. In other words, the delay affects only the periods and not the amplitude. Because of the opposite monotonicity of the two modules, both of them have only one equilibrium. The destabilization of the steady-state in (7.80) indicates a stable oscillation in (7.91)–(7.92) for an appropriate τ. Note that when τ is small, (7.79) and (7.80) may have different bifurcation diagrams. In this case, both the amplitude and the period are sensitive to delay variations.

The regulation mechanism can also be understood from (7.80) so as to control the system behavior. The parameter values in one module are kept constant, and the effects of parameter variations in the other module are discussed. When the parameters in both modules vary, a similar discussion can be made. The regulation mechanism is illustrated in Figure 7.22 (b) to show why different dynamics can emerge in Figure 7.20, where α_1 in Σ2 is chosen as a parameter; a change in α_1 does not affect the dynamics of Σ1. Two different $K_w^{-1}$ curves, denoted by $K_{wP}^{-1}$ and $K_{wQ}^{-1}$, are represented by solid and dashed lines in Figure 7.22 (b), respectively, at α_{1P} = 2.5 and α_{1Q} = 1.5. For the same initial input w_0, two identical outputs S_y and T_y from Σ1 with S_y = T_y, and two different w_{1P} = S_w and w_{1Q} = T_w with S_w > T_w, are obtained at α_{1P} and α_{1Q}. Finally, two different w_{2P} = P_w and w_{2Q} = Q_w with P_w > w_0 and Q_w < w_0 are derived at α_{1P} and α_{1Q}, based on the monotonicity of $K_y$ and $K_w^{-1}$. w_2 > w_0 implies that (7.80) and (7.91)–(7.92) with


7 Design of Synthetic Oscillating Networks

an appropriate delay τ will converge to a periodic orbit. Although w2 < w0 does not necessarily mean that the map will converge to a steady state, this is true when 0 < w0 − we < ε for sufficiently small ε. In other words, by choosing w0 with 0 < w0 − we < ε and increasing α1 until w2 > w0, the repressilator becomes oscillatory with an appropriate delay.
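The oscillation test above can be sketched numerically. The code below is a toy illustration only: a single decreasing Hill map F stands in for one pass through both modules (the real K_y and K_w^{-1} curves come from (7.80)), and the values α = 2.5 and α = 1.5 merely mirror the example in Figure 7.22 (b).

```python
def bisect(f, lo, hi, tol=1e-12):
    """Root of a sign-changing function f on [lo, hi] by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def oscillates(alpha, n=2, eps=1e-2):
    """The w2 > w0 test: iterate the composite map twice from w0 = we + eps."""
    F = lambda w: alpha / (1.0 + w ** n)          # decreasing toy module map
    we = bisect(lambda w: F(w) - w, 0.0, alpha)   # unique equilibrium we
    w0 = we + eps
    return F(F(w0)) > w0                          # True: departs from we

print(oscillates(2.5), oscillates(1.5))
```

At α = 2.5 the slope |F′(we)| exceeds 1, so the second iterate moves away from the equilibrium (w2 > w0), signaling a periodic orbit for an appropriate delay; at α = 1.5 the iterate moves back toward we.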


Figure 7.22 The bifurcation diagrams and the regulation mechanism. (a) The bifurcation diagrams of (7.80) and (7.91)–(7.92) with α = α1 = α2 = α3 as a bifurcation parameter for different τ. (b) The regulation mechanism derived from the input–output characteristics, which can be used to detect oscillations and control the system dynamics. For α1 = 1.5 and α1 = 2.5, we obtain two identical K_y curves, two different K_w^{-1} curves, denoted by K_wQ^{-1} and K_wP^{-1}, and two different values of w2, namely w2Q and w2P, with w2Q < w0 < w2P. Therefore, an oscillation must occur at α1 = 2.5 for an appropriate τ. On the other hand, for α1 = 1.5, more iterations are needed to determine the convergence (from (Wang et al. 2006a))

Although the approach above is mainly applicable to the case of a single input and a single output, extension to cases of multiple inputs and outputs is possible (Wang et al. 2007). Consider Goldbeter's dual loop model as an example. As shown in Figure 7.8, the first module is the mRNA subsystem, described by

dMP/dt = vsP KIP^n/(KIP^n + y^n) − vmP MP/(KmP + MP) − kd MP,   (7.93)
dMT/dt = vsT KIT^n/(KIT^n + y^n) − vmT MT/(KmT + MT) − kd MT,   (7.94)

with input y and output w = (w^(1), w^(2)) = (hM1(MP), hM2(MT)) = (MP, MT). The second module is the protein subsystem, described by

dP0/dt = ksP w^(1) − V1P P0/(K1P + P0) + V2P P1/(K2P + P1) − kd P0,
dP1/dt = V1P P0/(K1P + P0) − V2P P1/(K2P + P1) − V3P P1/(K3P + P1) + V4P P2/(K4P + P2) − kd P1,
dP2/dt = V3P P1/(K3P + P1) − V4P P2/(K4P + P2) − k3 P2 T2 + k4 C − vdP P2/(KdP + P2) − kd P2,
dT0/dt = ksT w^(2) − V1T T0/(K1T + T0) + V2T T1/(K2T + T1) − kd T0,
dT1/dt = V1T T0/(K1T + T0) − V2T T1/(K2T + T1) − V3T T1/(K3T + T1) + V4T T2/(K4T + T2) − kd T1,
dT2/dt = V3T T1/(K3T + T1) − V4T T2/(K4T + T2) − k3 P2 T2 + k4 C − vdT T2/(KdT + T2) − kd T2,
dC/dt = k3 P2 T2 − k4 C − k1 C + k2 CN − kdC C,
dCN/dt = k1 C − k2 CN − kdN CN,

with input w = (w^(1), w^(2)) and output y = hP(CN) = CN. We focus on the core delayed negative feedback loop established by per and tim; when the two modules are closed by delayed network feedback, the feedback closure takes the following form:

dMP/dt = vsP KIP^n/(KIP^n + CN^n(t − τ)) − vmP MP/(KmP + MP) − kd MP,
dP0/dt = ksP MP − V1P P0/(K1P + P0) + V2P P1/(K2P + P1) − kd P0,
dP1/dt = V1P P0/(K1P + P0) − V2P P1/(K2P + P1) − V3P P1/(K3P + P1) + V4P P2/(K4P + P2) − kd P1,
dP2/dt = V3P P1/(K3P + P1) − V4P P2/(K4P + P2) − k3 P2 T2 + k4 C − vdP P2/(KdP + P2) − kd P2,
dMT/dt = vsT KIT^n/(KIT^n + CN^n(t − τ)) − vmT MT/(KmT + MT) − kd MT,
dT0/dt = ksT MT − V1T T0/(K1T + T0) + V2T T1/(K2T + T1) − kd T0,
dT1/dt = V1T T0/(K1T + T0) − V2T T1/(K2T + T1) − V3T T1/(K3T + T1) + V4T T2/(K4T + T2) − kd T1,
dT2/dt = V3T T1/(K3T + T1) − V4T T2/(K4T + T2) − k3 P2 T2 + k4 C − vdT T2/(KdT + T2) − kd T2,
dC/dt = k3 P2 T2 − k4 C − k1 C + k2 CN − kdC C,
dCN/dt = k1 C − k2 CN − kdN CN.

The delayed dual loop model shows some ideal properties: when delays are introduced into the negative feedback, the coexistence of an equilibrium and a periodic oscillation, the coexistence of two periodic oscillations, and the coexistence of a periodic oscillation and chaos all disappear, and only a periodic oscillation can exist, as shown in Figure 7.23. Some other properties, e.g., the correspondence of bifurcation diagrams between the continuous system and the discrete map and the robustness of the amplitude to delays, can also be found. See (Wang et al. 2007) for more details.
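The organizing role of delayed negative feedback can be reproduced on a much smaller example. The sketch below integrates a hypothetical one-variable delayed-repression equation dx/dt = β/(1 + x(t − τ)^n) − γx by the forward Euler method; it is a caricature of the per/tim loop, not the ten-variable model above, and all parameter values are illustrative.

```python
# Toy delayed negative feedback: production repressed by the delayed state.
beta, gamma, n_h, tau = 4.0, 1.0, 4, 3.0   # illustrative values only
dt, T = 0.01, 100.0
steps, lag = int(T / dt), int(tau / dt)

x = [0.5] * (lag + 1)              # constant history on [-tau, 0]
for _ in range(steps):
    x_tau = x[-lag - 1]            # delayed state x(t - tau)
    x.append(x[-1] + dt * (beta / (1.0 + x_tau ** n_h) - gamma * x[-1]))

tail = x[-int(30.0 / dt):]         # discard the transient
amplitude = max(tail) - min(tail)
print(round(amplitude, 2))
```

For a delay well past the Hopf threshold of the linearization, the trajectory settles onto a sustained large-amplitude oscillation, in line with the global-oscillation behavior described above.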

7.4 Design of Molecular Oscillators with Hybrid Networks: General Formalism

As shown in previous chapters, a general procedure for the design of PFNs guarantees stable switching states without any non-equilibrium dynamics, thereby making the theoretical analysis and design of switching networks tractable even for large-scale systems with time delays. Meanwhile, a CFN with some specified conditions can converge to periodic oscillations (Wang et al. 2005,


Figure 7.23 The global oscillations induced by delayed negative feedback, shown along with the bifurcation diagrams. (a)–(c) Transition from coexistence of an equilibrium and a periodic oscillation, coexistence of two periodic oscillations, and coexistence of a periodic oscillation and chaos, to global oscillations. (d) Correspondence of bifurcation diagrams between the continuous system and the discrete map (from (Wang et al. 2006a))

Chen and Wang 2006). Although the original CFNs have been extended to general ones, the specific structures still considerably limit their applications. Explicitly considering all components and biochemical reactions in a biomolecular network is unrealistic from the viewpoint of modeling, analysis, and computation. However, different time scales characterize the various cellular regulatory processes, which can be exploited to reduce the complexity of mathematical models (Chen and Aihara 2002b, Hasty et al. 2002a, Ciliberto et al. 2007). For example, the transcription and translation processes in genetic networks generally evolve on a time scale that is much slower than that of phosphorylation, dimerization, and binding reactions of TFs in protein networks. In addition, although the dynamics is intertwined between gene networks, signal transduction networks, and metabolic networks, interactions within each network are generally more active than those between them, or


they are relatively independent. Such properties can also be exploited to simplify a biomolecular network, provided the behavior of the simplified network is guaranteed to be qualitatively and quantitatively identical to the behavior of the original network. Based on the convergence properties of PFNs and CFNs and the multiple time scales of the different processes, a methodology to construct and analyze cellular oscillators with time delays was developed (Wang et al. 2004). A multiple time scale network (MTN) is composed of a series of CFNs and multiple PFNs. The PFNs are mainly constituted by fast reactions, whereas the CFNs consist of slow reactions. According to the different convergence properties of positive and cyclic feedback networks, it can be proven that an MTN satisfying certain conditions has no stable equilibria but has stable periodic oscillations, depending on the total time delay of the CFN, although it has a complicated network structure including both positive and negative feedback loops. Such a property is clearly ideal for designing and modeling biological oscillators. Since there is less restriction on the network structure of an MTN, it can be used in several applications for the modeling, analysis, and design of cellular oscillators. A basic MTN consists of a fast PFN and a slow CFN. Assume that there are m fast variables y = (y1, ..., ym) ∈ R+^m and p slow variables x = (x1, ..., xp) ∈ R+^p, representing the concentrations of chemical components at time t ∈ R, where p ≥ 2. Then, (3.1) can be rewritten as

ẋ(t) = f(xτx, yτy),     (7.95)
ε ẏ(t) = g(xτx, yτy),   (7.96)

where ε is a small positive real parameter, xτx = x(t − τx), and yτy = y(t − τy). The system (7.95)–(7.96) is called a singularly perturbed system and is also known as a fast–slow system with slow x and fast y. Such multiple time scale properties are found in many biochemical systems, especially gene regulatory and metabolic systems. Assume that (7.96) is a PFN for a fixed xτx, that (7.95) has a CFN structure except for those parts interacting with yτy, and that there are two neighboring variables in xτx affecting yτy or the PFN. This implies that all loops in (7.96) are positive for fixed xτx, and that (7.95) has the structure or partial structure of a cyclic network, except for those parts interacting with yτy. Figure 7.24 shows a schematic example of an MTN, where all PFNs evolve on a much faster time scale than the other components, and Ci is a CFN or its partial structure. Note that all Ci have to be connected in series, whereas PFNs can be connected in any form, e.g., in series, in parallel, or in hybrid forms. Moreover, owing to the difference in time scales, time delays in the different subnetworks have different effects on the dynamical properties: a PFN is robust to time delays, while time delays in a CFN may significantly affect the dynamics of the network. Such properties actually hold when different time scales are utilized. In other words, we do not need to consider the time delays in the fast positive feedback subnetworks when analyzing and designing molecular oscillators, e.g., gene oscillators, although they may influence the transient dynamics. When ε = 0, (7.95)–(7.96) degenerates to a set of only p functional differential equations, i.e., (7.95) with the following constraint:

0 = g(xτx, yτy).   (7.97)

According to the convergence properties of PFNs, for a fixed xτx, (7.96) converges to a stable equilibrium E0 = {y0(xτx)}. Let K denote the set of solutions of (7.97). Since (7.96) is a PFN that is irreducible, ∂g/∂y is negative definite on K, and hence det(∂g/∂y) ≠ 0, or rank(∂g/∂y) = m, at the point E0 of K. By the implicit function theorem, there exist neighborhoods A° of xE0 and B° of yE0 and a unique smooth mapping h : A° → B° such that g(xτx, h(xτx)) = 0 for all xτx ∈ A°. Therefore, locally around (xE0, yE0), the degenerate system (7.95) and (7.97) is equivalent to a p-dimensional FDE defined on the graph of the mapping h, i.e., on the set

S = {(xτx, yτy) ∈ A° × B° : yτy = h(xτx)}   (7.98)

and represented by the equation

ẋ(t) = f(xτx, h(xτx)) ≜ f̂(xτx).   (7.99)

This system is called the reduced system. Without loss of generality, for the MTN described by (7.95)–(7.96), we assume that the (p − 1)th and the pth nodes are two neighboring slow chemical components, which connect with the fast chemical components. We also assume that the reduced network defined by (7.99) is a CFN.

Theorem 7.8. An orbitally and asymptotically stable periodic solution x = Φ(t) of (7.99) is stable under persistent perturbations. Moreover, assuming that (7.96) is a PFN for a fixed xτx and that the reduced network defined by (7.99) is a CFN, for a sufficiently small ε, x = Φ(t) is a stable periodic solution of (7.95)–(7.96).

The reduction from an MTN to a CFN can be carried out as follows: the protein multimers and their complexes can be eliminated by utilizing the inherent separation of time scales, because the multimerization processes are governed by rate constants that are extremely fast with respect to cellular growth and transcription. In other words, fast reactions are assumed to converge to equilibria rapidly, and thus all fast variables can be eliminated from the MTN. The reduced MTN has the structure of a CFN. According to the conditions for a CFN to converge to periodic oscillations, the oscillatory behavior of the MTN can be approximately analyzed. Although it is generally very difficult to guarantee asymptotic behavior such as equilibria and periodic orbits even for a small network, because of the nonlinearity of the system, it becomes much easier to analyze the dynamical properties if the

Figure 7.24 Schematic illustration of a multiple time scale network, where PFNs evolve on a much faster time scale than the other components, which evolve on a slower time scale and are denoted as Ci. Ci is a CFN or its partial structure, and all Ci are connected in series. After all the PFNs are eliminated, the reduced network is a CFN (from (Chen and Wang 2006))

Figure 7.25 Schematic illustration of a gene regulatory network. Proteins px (CI) and py (Lac) and mRNAs mx (cI) and my (lac) constitute a slow CFN. Other chemicals, such as the CI dimer, the Lac dimer, and the Lac tetramer, constitute two fast PFNs (from (Wang et al. 2004))


reduced MTN has the same structure as that of a CFN, obtained by eliminating all fast PFNs from the original MTN. See (Wang et al. 2004) for more details on the theoretical analysis. Consider as an example a simple two-gene network with genes cI and lac under the control of the promoters PL lacO1 and P*RM, respectively. It consists of two fast PFNs and one slow CFN, as shown in Figure 7.25. The two genes are both well-characterized transcriptional regulators, which can be found in the bacterium E. coli and λ phage. Assume that the network is implemented in a eukaryotic cell, e.g., yeast, so as to examine the effect of time delays on oscillatory behavior. The mRNA of gene cI (mx) is translated into protein CI (px) in the cytoplasm, which in turn forms a homodimer p2x and is transported or diffuses into the nucleus in the form p′2x to enhance the expression of gene lac by binding to the two operator sites of the promoter P*RM. On the other hand, the mRNA of gene lac (my) is translated into protein Lac (py), which forms a homodimer p2y, and further a tetramer p4y, in the cytoplasm. When moved to the nucleus, the tetramer takes the form p′4y, which represses the expression of gene cI by binding to the operator site of the promoter PL lacO1. The promoter PL lacO1 has one binding site OR for the Lac tetramer, but the promoter P*RM has two binding sites, OR1 and OR2, for the CI dimer, which binds first on OR1 and next on OR2. Note that P*RM is a mutated promoter obtained from PRM, which has no binding site for the Lac tetramer. In contrast to the case of prokaryotes, there are time delays (τmx, τmy, τpx, τpy) because of the transportation and diffusion of mRNAs and TFs between the nucleus and the cytoplasm, which may significantly affect the dynamics of the system.
We define the following species in terms of concentrations: mx, the mRNA of cI; px, CI protein; p2x, CI dimer in the cytoplasm; p′2x, CI dimer in the nucleus; Dy, the free DNA binding or operator site in the promoter P*RM; p′2x Dy, the CI dimer bound to operator site OR1 of the promoter P*RM; p′2x p′2x Dy, CI dimers bound to both OR1 and OR2 of P*RM; my, the mRNA of lac; py, Lac protein; p2y, Lac dimer; p4y, Lac tetramer in the cytoplasm; p′4y, Lac tetramer in the nucleus; Dx, the free DNA binding site in the promoter PL lacO1; p′4y Dx, the Lac tetramer bound to the operator site OR of the promoter PL lacO1. The fast reactions are mainly the multimerization and binding reactions of the protein network. As indicated in Figure 7.25, the fast reactions for CI constitute a PFN:

px + px ⇌ p2x   (rates k1, k−1),                    (7.100)
p2x ⇌ p′2x   (rates k2, k−2),                        (7.101)
p′2x + Dy ⇌ p′2x Dy   (rates k3, k−3),               (7.102)
p′2x + p′2x Dy ⇌ p′2x p′2x Dy   (rates k4, k−4).     (7.103)

The fast reactions for Lac also constitute a PFN:

py + py ⇌ p2y   (rates k5, k−5),                    (7.104)
p2y + p2y ⇌ p4y   (rates k6, k−6),                   (7.105)
p4y ⇌ p′4y   (rates k7, k−7),                        (7.106)
p′4y + Dx ⇌ p′4y Dx   (rates k8, k−8).               (7.107)

On the other hand, the slow reactions involve the transcription of mRNAs, the translation of proteins, and the degradation of proteins and mRNAs. The slow reactions for CI are

mx → px + mx   (rate kpx),                          (7.108)
Dy → my + Dy   (rate kmy0),                         (7.109)
p′2x Dy → my + p′2x Dy   (rate kmy1),                (7.110)
p′2x p′2x Dy → my + p′2x p′2x Dy   (rate kmy2),      (7.111)
mx → 0   (rate dmx),                                (7.112)
px → 0   (rate dpx).                                (7.113)

The slow reactions for Lac are

my → py + my   (rate kpy),                          (7.114)
Dx → mx + Dx   (rate kmx0),                         (7.115)
p′4y Dx → mx + p′4y Dx   (rate kmx1),                (7.116)
my → 0   (rate dmy),                                (7.117)
py → 0   (rate dpy).                                (7.118)

There are also conservation conditions for the total binding sites of the two promoters, i.e., Dy + p′2x Dy + p′2x p′2x Dy = ny and Dx + p′4y Dx = nx, where nx and ny are the concentrations of the genes cI and lac, respectively. For convenience, mx is denoted by X1, px by X2, my by X3, py by X4, p2x by Y1, p′2x by Y2, p′2x Dy by Y3, p′2x p′2x Dy by Y4, p2y by Y5, p4y by Y6, p′4y by Y7, and p′4y Dx by Y8. Then the time evolution of the twelve-variable model is governed by the following functional differential equations, in which all parameters and concentrations are defined with respect to the total cell volume:

dX1/dt = kmx0 (nx − Y8) + kmx1 Y8 − dmx X1,                                        (7.119)
dX2/dt = kpx X1(t − τmx) + 2k−1 Y1 − 2k1 X2² − dpx X2,                             (7.120)
dX3/dt = kmy0 (ny − Y3 − Y4) + kmy1 Y3 + kmy2 Y4 − dmy X3,                         (7.121)
dX4/dt = kpy X3(t − τmy) − 2k5 X4² + 2k−5 Y5 − dpy X4,                             (7.122)
dY1/dt = k1 X2² + k−2 Y2(t − τpx) − k−1 Y1 − k2 Y1,                                (7.123)
dY2/dt = k2 Y1(t − τpx) − k−2 Y2 + k−3 Y3 − k3 (ny − Y3 − Y4) Y2 + k−4 Y4 − k4 Y2 Y3,  (7.124)
dY3/dt = k3 (ny − Y3 − Y4) Y2 + k−4 Y4 − k4 Y2 Y3 − k−3 Y3,                        (7.125)
dY4/dt = k4 Y2 Y3 − k−4 Y4,                                                        (7.126)
dY5/dt = k5 X4² − k−5 Y5 − 2k6 Y5² + 2k−6 Y6,                                      (7.127)
dY6/dt = k6 Y5² − k−6 Y6 + k−7 Y7(t − τpy) − k7 Y6,                                (7.128)
dY7/dt = k7 Y6(t − τpy) − k−7 Y7 − k8 Y7 (nx − Y8) + k−8 Y8,                       (7.129)
dY8/dt = k8 (nx − Y8) Y7 − k−8 Y8,                                                 (7.130)

where the Yi are fast variables and ε is not explicitly expressed in (7.119)–(7.130). It is easy to check that the two fast reaction subgroups are the two PFNs for fixed slow variables. By assuming that the fast reactions converge to equilibria rapidly, all fast variables can be eliminated. To demonstrate the example clearly, we explicitly derive the reduced MTN, although this is not necessary in general. In particular, by setting dYi/dt = 0 in (7.123)–(7.130), we eliminate the fast variables as follows: Y1 = Y2 = K1 X2², Y3 = ny K3 K1 X2² / (1 + K3 K1 X2² + K4 K3 K1² X2⁴), Y4 = ny K4 K3 K1² X2⁴ / (1 + K3 K1 X2² + K4 K3 K1² X2⁴), Y5 = K5 X4², Y6 = Y7 = K6 K5² X4⁴, and Y8 = nx K8 K6 K5² X4⁴ / (1 + K8 K6 K5² X4⁴), where Ki = ki/k−i (i = 1, ..., 8) and K2 = K7 = 1. Then, we obtain the reduced equations

dx1/dt = (1/rs) [ k′mx1 nx x4⁴(t) / (1 + x4⁴(t)) − (dmx/Ka) x1(t) + kmx0 nx ],   (7.131)
dx2/dt = (1/rs) [ (kpx Kb/Ka²) x1(t − τ′mx) − (dpx/Ka) x2(t) ],                  (7.132)
dx3/dt = (r/rs) [ (k′my1 ny x2²(t) + k′my2 σ ny x2⁴(t)) / (1 + x2²(t) + σ x2⁴(t)) − (dmy/Kb) x3(t) + kmy0 ny ],   (7.133)
dx4/dt = (1/rs) [ (kpy/Kb) x3(t − τ′my) − (dpy/Ka) x4(t) ].                      (7.134)

The dimensionless variables are scaled as follows: x1 ≡ (K8 K6 K5²)^(1/4) X1, x2 ≡ (K1 K3)^(1/2) X2, x3 ≡ (K1 K3)^(1/2) X3, x4 ≡ (K8 K6 K5²)^(1/4) X4, t′ ≡ rs Ka t, τ′mx ≡ rs Ka τmx, and τ′px ≡ rs Ka τpx, where rs = nx kpx r k′mx1 / dmx, r = Kb/Ka, k′mx1 = kmx1 − kmx0, Ka = (K8 K6 K5²)^(1/4), Kb = (K1 K3)^(1/2), σ = K4/K3, k′my1 = kmy1 − kmy0, and k′my2 = kmy2 − kmy0. The reduced network described by (7.131)–(7.134) is shown in Figure 7.26.
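The elimination step can be checked numerically: the expressions for Y1–Y8 above should make the right-hand sides of (7.123)–(7.130) vanish for any fixed slow state. The rate constants below are arbitrary test values (with k2 = k−2 and k7 = k−7, so that K2 = K7 = 1 as assumed in the text), not the parameters of the model.

```python
# Verify that the quasi-steady-state formulas zero the fast subsystem.
k = {1: 0.7, -1: 1.3, 2: 0.9, -2: 0.9, 3: 1.1, -3: 0.5, 4: 0.8, -4: 0.4,
     5: 1.2, -5: 0.6, 6: 0.3, -6: 0.9, 7: 0.5, -7: 0.5, 8: 1.4, -8: 0.7}
K = {i: k[i] / k[-i] for i in range(1, 9)}     # association constants Ki
nx, ny, X2, X4 = 1.0, 1.0, 0.8, 1.4            # fixed slow state (arbitrary)

D = 1 + K[3] * K[1] * X2**2 + K[4] * K[3] * K[1]**2 * X2**4
Y1 = Y2 = K[1] * X2**2
Y3 = ny * K[3] * K[1] * X2**2 / D
Y4 = ny * K[4] * K[3] * K[1]**2 * X2**4 / D
Y5 = K[5] * X4**2
Y6 = Y7 = K[6] * K[5]**2 * X4**4
Y8 = nx * K[8] * K[6] * K[5]**2 * X4**4 / (1 + K[8] * K[6] * K[5]**2 * X4**4)

rhs = [k[1]*X2**2 + k[-2]*Y2 - k[-1]*Y1 - k[2]*Y1,                    # (7.123)
       k[2]*Y1 - k[-2]*Y2 + k[-3]*Y3 - k[3]*(ny - Y3 - Y4)*Y2
           + k[-4]*Y4 - k[4]*Y2*Y3,                                   # (7.124)
       k[3]*(ny - Y3 - Y4)*Y2 + k[-4]*Y4 - k[4]*Y2*Y3 - k[-3]*Y3,     # (7.125)
       k[4]*Y2*Y3 - k[-4]*Y4,                                         # (7.126)
       k[5]*X4**2 - k[-5]*Y5 - 2*k[6]*Y5**2 + 2*k[-6]*Y6,             # (7.127)
       k[6]*Y5**2 - k[-6]*Y6 + k[-7]*Y7 - k[7]*Y6,                    # (7.128)
       k[7]*Y6 - k[-7]*Y7 - k[8]*Y7*(nx - Y8) + k[-8]*Y8,             # (7.129)
       k[8]*(nx - Y8)*Y7 - k[-8]*Y8]                                  # (7.130)
print(max(abs(v) for v in rhs))   # all residuals at round-off level
```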

Figure 7.26 The reduced MTN with proteins px (CI) and py (Lac) and mRNAs mx (cI) and my (lac). The self-feedback loops are omitted

It is clear that when k′mx1 < 0, k′my1 > 0, and k′my2 > 0, (7.131)–(7.134) is a CFN with a negative cyclic feedback loop. By using a functional transformation, i.e.,

x1(t′ − τ) → x′1(t′),       (7.135)
x2(t′ − τ′my) → x′2(t′),    (7.136)
x3(t′ − τ′my) → x′3(t′),    (7.137)
x4(t′) → x′4(t′),           (7.138)

we can equivalently change all time delays into a single time delay τ = τ′mx + τ′my for (7.131)–(7.134), i.e.,


dx′1/dt′ = (1/rs) [ k′mx1 nx x′4⁴(t′ − τ) / (1 + x′4⁴(t′ − τ)) − (dmx/Ka) x′1(t′) + kmx0 nx ],   (7.139)
dx′2/dt′ = (1/rs) [ (kpx Kb/Ka²) x′1(t′) − (dpx/Ka) x′2(t′) ],                                   (7.140)
dx′3/dt′ = (r/rs) [ (k′my1 ny x′2²(t′) + k′my2 σ ny x′2⁴(t′)) / (1 + x′2²(t′) + σ x′2⁴(t′)) − (dmy/Kb) x′3(t′) + kmy0 ny ],   (7.141)
dx′4/dt′ = (1/rs) [ (kpy/Kb) x′3(t′) − (dpy/Ka) x′4(t′) ].                                       (7.142)

Note that τ does not include τpx and τpy, which are eliminated with the fast PFNs. The parameter values are kmx1 = 0.2 min⁻¹, K8 = 2 × 10¹³ M⁻¹, nx = 1 nM, ny = 1 nM, K6 = 10⁷ M⁻¹, K5 = 10⁸ M⁻¹, kmx0 = 3 min⁻¹, kpx = 4 min⁻¹, kmy1 = 3 min⁻¹, kmy2 = 12 min⁻¹, K1 = 5 × 10⁷ M⁻¹, K3 = 3 × 10⁸ M⁻¹, dmy = 5 min⁻¹, kmy0 = 2 min⁻¹, kpy = 1 min⁻¹, dpy = 2 min⁻¹, and σ = 2. With these parameters, the variables are scaled as X1 (nM) ∼ 0.8x1, X2 (nM) ∼ 8x2, X3 (nM) ∼ 8x3, X4 (nM) ∼ 0.8x4, and t (min) ∼ t′/1.37. Note that τ is also a time delay scaled by 1.37.
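The stated scalings X1 ∼ 0.8x1 and X2 ∼ 8x2 follow directly from Ka and Kb, which a quick arithmetic check confirms (association constants in M⁻¹; 1 nM⁻¹ = 10⁹ M⁻¹):

```python
# Reproduce the concentration scalings from the listed association constants.
K8, K6, K5 = 2e13, 1e7, 1e8
K1, K3 = 5e7, 3e8
Ka = (K8 * K6 * K5**2) ** 0.25    # (K8 K6 K5^2)^(1/4), in M^-1
Kb = (K1 * K3) ** 0.5             # (K1 K3)^(1/2), in M^-1
# X (nM) per unit of dimensionless x is 1e9 / K:
print(round(1e9 / Ka, 2), round(1e9 / Kb, 2))
```

The two printed factors are approximately 0.84 and 8.2, matching the scalings X1 (nM) ∼ 0.8x1 and X2 (nM) ∼ 8x2 quoted above.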

Figure 7.27 The sustained oscillations of x′1, x′2, x′3, and x′4 generated by the reduced network shown in Figure 7.26 (from (Wang et al. 2004))

According to the reduction process, the complex network can easily be reduced to a simple network, as shown in Figure 7.26; this network is a negative cyclic network. The reduced network consists of only four components and is relatively easy to analyze. Moreover, according to the theoretical analysis, the reduced network quantitatively maintains the dynamical properties of the original network. The sustained oscillations of the reduced network are shown in Figure 7.27. Because the fast reactions, acting as perturbations, do not change the period and amplitude over a long time period, the limit cycle oscillations represent a particularly stable mode of periodic behavior. Such stability is consistent with the robust nature of circadian clocks, which have to maintain their amplitude and period in a changing environment.

8 Multicellular Networks and Synchronization

In higher eukaryotes and multicellular organisms, intercellular communication has been shown to be very important. The biosignals received by individual cells, whether originating from other cells or from some change in the organism's physical and chemical surroundings, take various forms. Cells can sense and respond to electromagnetic signals, such as light, and to mechanical signals, such as direct contact. Individual cells usually communicate with each other using chemical signal molecules, which can dissolve in the cytosol and diffuse freely between individual cells and their extracellular medium. The signals that are sent and received by cells throughout their existence may be essential for the harmonious development of tissues, organs, and bodies; they may also influence the movements, information processing, and behavior of individual cells. Normal cellular functions require a precise coordination of the emission and reception of signals, and dysfunction is often associated with pathological conditions. The mechanism by which cells produce, release, detect, and respond to signals is an important aspect of intercellular communication. Besides the signals themselves, the extracellular environment of a multicellular network is also important, because individual cells must sense, respond, and adapt to modifications in their environment. The first recognized diffusible signaling mechanism described in living organisms was autoinduction, which reflects the observation that bacteria themselves were the source of the signal (Novick 2003). Through the diffusive process of the signal molecules, e.g., an autoinducer (AI), all cells are coupled and a multicellular system is formed. The coupling is composed of three main stages: production, release, and subsequent detection of the signal molecules.
Complex patterned structures in multicellular organisms, various kinds of social behavior, and cellular differentiation in bacteria can be attributed to intercellular communication, e.g., quorum sensing (Weiss and Knight 2000). Generally, intercellular communication is accomplished by transmitting one or more intercellular signal molecules, such as acyl-homoserine lactones, hormones, growth factors, and neurotransmitters, to neighboring cells and further integrating the signals to generate a global cellular response at the level of molecules, cells, tissues, organs, and bodies. In the detection and response processes, a signal molecule binds to a receptor protein. The activated protein acts as a TF or an enzyme, thereby triggering specific cellular activities. The ability of cells to communicate is an absolute requisite for ensuring collective behavior, such as synchronization, under an uncertain environment. Depending on the nature of the signals, distinct pathways can be used to enter individual cells. For example, hydrophobic compounds such as steroid hormones can pass through the lipid bilayer of the cells and eventually combine with receptors, which are known to be TFs regulating gene expression. The signals also diffuse through ion channels, which allow ions such as sodium, potassium, and calcium to translocate across the membrane (Höfer 1999, Koenigsberger et al. 2004). Besides communication through signaling molecules, we show that a multicellular system can be synchronized by common environmental perturbations even without any signaling molecules between cells.

8.1 A General Multicellular Network for Deterministic Models

Collective behavior is a phenomenon whereby two or more cells adjust their motions to common behavior due to coupling or forcing. Many researchers have studied such phenomena experimentally, numerically, and theoretically (McMillen et al. 2002, Chen et al. 2005, Zhou et al. 2005, Yamaguchi et al. 2005, Gonze et al. 2005, Teramae and Tanaka 2004, Höfer 1999, Zhou et al. 2008, Garcia-Ojalvo et al. 2004, Kuznetsov and Kopell 2004). Collective behavior is essential for cellular organization and information processing. Quorum-sensing bacteria have revealed a widespread mechanism of collective gene expression. By monitoring the signal molecules produced, individual bacteria can regulate their expression of group-beneficial phenotypes, which guarantees an effective group outcome. Once a particular density threshold is reached, cooperative behavior is established. To describe and analyze collective behavior, a general model based on the intercellular communication mechanism by which cells produce, release, detect, and respond to signals, as shown in Figure 8.1, was constructed (Wang et al. 2008). The model can be represented as

ẋi = −dxi(xi) + fi(xi) + ri(xi, si),   (8.1)
ṡi = −dsi(si) + pi(xi, si) + ci(si, se),   (8.2)
ṡe = −de(se) + ce(si, se),   (8.3)

where xi(t) ∈ R+^m (i = 1, ..., n) indicates the concentrations of all intracellular components of the ith cell, except the signal molecules, with degradation dxi(xi). si(t) and se(t) ∈ R+^p are the concentrations of intracellular and extracellular signal molecules with degradation dsi(si) and de(se), respectively.


The three degradation terms in (8.1)–(8.3) can be either linear or nonlinear. The dynamics of an isolated cell is represented as ẋi(t) = fi(xi(t)) − dxi(xi). The term pi(xi, si) represents the synthesis of the signal molecules, and the term ri(xi, si) shows how individual cells detect and respond to the signal molecules. The coupling terms ci(si, se) and ce(si, se) show how the signal molecules are released and diffused across cell membranes. For different i, the functions fi, ri, dxi, dsi, pi, and ci may be the same or different, depending on whether intrinsic and extrinsic noise and cell-to-cell variations are considered. Unlike in many models of coupled networks, no two cells in (8.1)–(8.3) are directly coupled; rather, they interact indirectly through a diffusive and mixing process in a common extracellular environment, which is more plausible biologically.
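The structure of (8.1)–(8.3) maps directly onto a simulation loop. The sketch below is a minimal Euler integrator for n cells with scalar xi and si; every kinetic function is a placeholder linear example chosen only to keep the toy system stable, not a particular biological model, and the release term into the medium is summed over cells as in the linearly coupled version introduced below.

```python
def step(xs, ss, se, dt, f, r, p, c, ce, dx, ds, de):
    """One Euler step of (8.1)-(8.3) for n cells (scalar xi, si here)."""
    xs2 = [x + dt * (-dx(x) + f(x) + r(x, s)) for x, s in zip(xs, ss)]
    ss2 = [s + dt * (-ds(s) + p(x, s) + c(s, se)) for x, s in zip(xs, ss)]
    se2 = se + dt * (-de(se) + sum(ce(s, se) for s in ss))  # summed release
    return xs2, ss2, se2

# Toy run: two cells with different initial states, identical kinetics.
xs, ss, se = [1.0, 0.2], [0.5, 0.1], 0.0
for _ in range(2000):                       # integrate to t = 20
    xs, ss, se = step(
        xs, ss, se, 0.01,
        f=lambda x: 1.0,                    # constant production
        r=lambda x, s: 0.5 * s,             # response to internal signal
        p=lambda x, s: 0.3 * x,             # signal synthesis
        c=lambda s, e: 0.2 * (e - s),       # diffusion in across membrane
        ce=lambda s, e: 0.2 * (s - e),      # diffusion out into medium
        dx=lambda x: x, ds=lambda s: s, de=lambda e: 0.1 * e)
print(abs(xs[0] - xs[1]), se)
```

Because the two cells share the same kinetics and the common medium, their states approach each other as the stable toy system relaxes to its symmetric equilibrium.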

Figure 8.1 Schematic representation of the quorum-sensing mechanism. The LuxI-type protein catalyzes the synthesis of the signal molecule, the autoinducer (AI). The LuxR-type protein binds to the AI and controls the expression of the target genes (from (Wang et al. 2008))

Beginning with an initial isolated network in individual cells, suggested by knowledge of the regulatory mechanisms and various modeling techniques, a specific multicellular network comprising a number of cells can be constructed once the intercellular coupling is included. The dynamics of the individual cells may be switching or oscillatory. Besides the deterministic expression (8.1)–(8.3), the stochastic formulations presented in previous chapters can also be used to model multicellular networks, especially when the effects of stochastic fluctuations on the collective behavior of cells should be considered. For this purpose, the coupling reactions can be expressed approximately by biochemical reactions in the form of (2.24), but in a reversible form, because of the free diffusion of the signal molecules between the intracellular cytosol and the extracellular medium (Chen et al. 2005). When the QSS approximation is made, i.e., ṡe(t) = 0, the extracellular concentrations of the signal molecules se can be approximated by

−de(se) + ce(si, se) = 0, or se = h(si).   (8.4)

Then, (8.1)–(8.3) becomes

ẋi = −dxi(xi) + fi(xi) + ri(xi, si),   (8.5)
ṡi = −dsi(si) + pi(xi, si) + ci(si, h(si)).   (8.6)

Generally, the coupling between individual cells or subsystems and environments is nonlinear, i.e., ri, ci, and ce are nonlinear functions. When the diffusion process (8.2)–(8.3) takes the linear form, i.e., for the linearly coupled model

ṡi = −dsi(si) + pi(xi, si) + ηint(se − si),   (8.7)
ṡe = −de se + ηext Σ_{j=1}^n (sj − se),   (8.8)

where se is assumed to degrade linearly, the approximated extracellular concentration se has the form (McMillen et al. 2002, Garcia-Ojalvo et al. 2004)

se = [n ηext / (de + n ηext)] (1/n) Σ_{j=1}^n sj ≡ Q s̄,   (8.9)

where ηint and ηext are p × p diagonal matrices, which represent the coupling strength from cell i to the environment and from the environment to cell j, respectively, and s̄ indicates the average over all cells. When the degradation of se is not considered, (8.9) becomes the mean field, as used in (Gonze et al. 2005),

se = s̄ = (1/n) Σ_{j=1}^n sj.   (8.10)

Assuming the linear degradation of se and using (8.7) and (8.9), (8.1)–(8.3) take the form

ẋi = −dxi(xi) + fi(xi) + ri(xi, si),   (8.11)
ṡi = −dsi(si) + pi(xi, si) + ηint[(Q/n) Σ_{j=1}^n sj − si] = −dsi(si) + pi(xi, si) + Σ_{j=1}^n bij Γ sj,   (8.12)

where Γ = ηQ is a diagonal matrix that indicates the linkage of the variables to the coupled system, and B = [bij] is an n × n coupling matrix. For the symmetrical coupling of (8.11)–(8.12),

bij = 1/n            if i ≠ j,
bij = 1/n            if i = j and Qi = 0,
bij = 1/n − 1/Qi     if i = j and Qi ≠ 0.   (8.13)

For diffusive coupling, i.e., Σ_{j=1}^n bij = 0, we have Qi = 1 for all i = 1, ..., n, whereas when some Qi < 1, non-diffusive coupling occurs (Wang and Chen 2005). Such a multicellular network may show rich dynamics, such as synchronization (McMillen et al. 2002, Garcia-Ojalvo et al. 2004), multistability, clustering, and partial synchronization (Zhou et al. 2008, Ullner et al. 2007, Ullner et al. 2008). We mainly focus on synchronization in this chapter.

8.2 Deterministic Synchronization of Cellular Oscillators

Synchronization, and the possible effects of coupling on synchronization through intercellular signaling in a population of cellular oscillators, has been investigated intensively in recent decades because of its biological importance and potential applications (McMillen et al. 2002, Glass 2001, Garcia-Ojalvo et al. 2004, Yamaguchi et al. 2005, Zhou et al. 2008). Synchronization in multicellular systems has received considerable interest both biologically and theoretically, and intercellular signaling has been shown to be essential for the coordinated responses that result from an integrated exchange of information in both prokaryotes and eukaryotes.

8.2.1 Complete Synchronization

The coupled dynamical system (8.11)–(8.12) is said to achieve complete synchronization if

u_1(t) = u_2(t) = \cdots = u_n(t) \to \phi(t), \quad \text{as } t \to \infty. \qquad (8.14)

Here, u_i(t) = (x_i(t), s_i(t)), and φ(t) can be an equilibrium, a periodic orbit, or even a non-periodic orbit such as a chaotic orbit. The synchronization manifold is defined as the hyperplane

\Lambda = \{u_1, u_2, ..., u_n \in R^{m+p} \mid u_i = u_j;\ i, j = 1, 2, ..., n\}. \qquad (8.15)

When the synchronized state is not an equilibrium, complete synchronization is generally expected only for a coupled network with identical subsystems or subnetworks. For a linearly coupled model, two cases for the coupling matrix B, i.e., diffusive coupling and non-diffusive coupling, are considered. Diffusive coupling means

\sum_{j=1}^{n} b_{ij} = 0, \quad i = 1, ..., n. \qquad (8.16)


8 Multicellular Networks and Synchronization

Many studies have particularly examined the synchronization problem of diffusively coupled networks, such as master stability (Pecora and Carroll 1998), global synchronization of coupled neural networks (Cheng et al. 2004), and many other phenomena (Pikovsky et al. 2001). The diffusive coupling condition (8.16) ensures that the synchronization manifold is an invariant manifold of an individual network, namely,

\dot{x}_i = -d_{x_i}(x_i) + f_i(x_i) + r_i(x_i, s_i), \qquad (8.17)
\dot{s}_i = -d_{s_i}(s_i) + p_i(x_i, s_i). \qquad (8.18)

Note that the coupling matrix B is an irreducible matrix. Furthermore, it can be shown that zero is an eigenvalue of B with multiplicity 1 and that all other eigenvalues of B are strictly negative, i.e.,

\lambda_1 = 0 \quad \text{and} \quad 0 > \lambda_2 \geq \cdots \geq \lambda_n. \qquad (8.19)

On the other hand, non-diffusive coupling for a linearly coupled model means

\sum_{j=1}^{n} b_{ij} \neq 0, \quad \text{for some } i \in \{1, ..., n\}. \qquad (8.20)

In this case, the synchronization manifold of (8.11)–(8.12) is not an invariant manifold of (8.17)–(8.18). For non-diffusive coupling, few results have been reported on the characterization of network synchronization because of the difficulties in identifying the synchronization state and analyzing its stability. Moreover, the coupling matrix B may have entirely different properties. To deal with such a situation, rewrite b_ij as

b_{ij} = \hat{b}_{ij} + \bar{b}_{ij}, \qquad (8.21)

where

\sum_{j=1}^{n} \bar{b}_{ij} = 0 \qquad (8.22)

for i = 1, ..., n. Then (8.11)–(8.12) can be rewritten as

\dot{x}_i = -d_{x_i}(x_i) + f_i(x_i) + r_i(x_i, s_i), \qquad (8.23)
\dot{s}_i = -d_{s_i}(s_i) + \bar{p}_i(x_i, s_i) + \sum_{j=1}^{n} \bar{b}_{ij}\, \Gamma s_j, \qquad (8.24)

where

\bar{p}_i(x_i, s_i) = p_i(x_i, s_i) + \sum_{j=1}^{n} \hat{b}_{ij}\, \Gamma s_j(t). \qquad (8.25)
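The decomposition (8.21)–(8.22) can be sketched numerically. The particular split below, which puts the entire row sum on the diagonal of b̂, is one convenient choice among many and is our assumption, not a prescription from the text:

```python
import numpy as np

def split_coupling(B):
    """Split a coupling matrix B into a diffusive part Bbar with zero
    row sums, as in (8.22), and a residual Bhat carrying the row sums,
    as in (8.21). Placing the full row sum on the diagonal of Bhat is
    an assumed, convenient choice."""
    r = B.sum(axis=1)        # row sums, nonzero for non-diffusive rows
    Bhat = np.diag(r)        # residual (non-diffusive) part
    Bbar = B - Bhat          # diffusive part: every row sums to zero
    return Bhat, Bbar

B = np.array([[-1.0, 0.5, 0.2],
              [ 0.3, -0.9, 0.3],
              [ 0.4,  0.4, -0.4]])
Bhat, Bbar = split_coupling(B)
print(np.allclose(Bbar.sum(axis=1), 0.0))   # True: condition (8.22) holds
```

With a diagonal b̂, the extra term in (8.25) reduces on the synchronization manifold to a function of s_i alone, which is what makes the auxiliary individual system well defined.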

Instead of the original individual system (8.17)–(8.18), we discuss an auxiliary individual system, i.e.,

\dot{x}_i = -d_{x_i}(x_i) + f_i(x_i) + r_i(x_i, s_i), \qquad (8.26)
\dot{s}_i = -d_{s_i}(s_i) + \bar{p}_i(x_i, s_i), \qquad (8.27)

which has the properties of diffusive coupling. In other words, the synchronization state φ(t) is not a solution of the original individual system (8.17)–(8.18) but a solution of the auxiliary individual system (8.26)–(8.27), due to the non-diffusive coupling condition (8.20). It is important to note that the dynamics of the original individual system and that of the auxiliary one may differ entirely. For instance, the original individual system may converge to an equilibrium, while the auxiliary one may converge to a periodic or even a chaotic attractor, which means that periodically oscillatory synchronization can be realized even if the original individual system is neither oscillatory nor chaotic. For example, synchronized hysteresis-based oscillators can be obtained by coupling a population of toggle switches via the quorum-sensing mechanism (Kuznetsov and Kopell 2004). Using the auxiliary individual system, for the case of linear degradation, i.e., d_{x_i}(x_i) = d_{x_i} x_i and d_{s_i}(s_i) = d_{s_i} s_i, a sufficient condition for the stability of synchronization was obtained (Wang and Chen 2005).

Theorem 8.1. For (8.23)–(8.24), assume F = (f, \bar{p}) to be globally Lipschitz continuous, i.e., there exist constants L_i such that

|F_i(u_1) - F_i(u_2)| \leq L_i |u_1 - u_2|, \quad i = 1, ..., m + p, \qquad (8.28)

holds for any two different u_1 and u_2 \in R^{m+p}, where |u| is the vector norm defined for a vector u = (u_1, ..., u_n)^T by |u| = \sqrt{\sum_{k=1}^{n} u_k^2}. Let the eigenvalues of the coupling matrix \bar{B} = [\bar{b}_{ij}] be ordered as follows:

0 = \lambda_1 > \lambda_2 \geq \lambda_3 \geq \cdots \geq \lambda_n \qquad (8.29)

with

\lambda(\gamma_i) = \begin{cases} \lambda_2, & \text{if } \gamma_i > 0,\\ 0, & \text{if } \gamma_i = 0,\\ \lambda_n, & \text{if } \gamma_i < 0, \end{cases} \qquad (8.30)

such that for all i = 1, 2, ..., m + p,

L_i - d_i + \gamma_i \lambda(\gamma_i) < 0, \qquad (8.31)

where d_i = d_{x_i} if i ≤ m and d_i = d_{s_{i-m}} otherwise. Then the dynamical system (8.23)–(8.24) achieves strong synchronization (also called complete or identical synchronization), i.e., there is an invariant diagonal hyperplane Λ defined by (8.15) which is not only attractive but also Lyapunov stable. Clearly, complete synchronization for (8.1)–(8.3) means that, as t → ∞, the dynamics asymptotically converges to the synchronization manifold (8.15).
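The sufficient condition of Theorem 8.1 lends itself to a quick numerical check. The following sketch assumes a symmetric all-to-all diffusive coupling matrix and illustrative values for L_i, d_i, and γ_i (the diagonal entries of Γ); none of these numbers come from the book:

```python
import numpy as np

def synchronization_condition(Bbar, L, d, gamma):
    """Check the sufficient condition (8.29)-(8.31) of Theorem 8.1:
    L_i - d_i + gamma_i * lambda(gamma_i) < 0 for all variables i,
    where lambda(gamma_i) selects lambda_2, 0, or lambda_n."""
    lam = np.sort(np.linalg.eigvals(Bbar).real)[::-1]  # 0 = lam[0] > lam[1] >= ...
    lam2, lamn = lam[1], lam[-1]
    lam_of = np.where(gamma > 0, lam2, np.where(gamma < 0, lamn, 0.0))
    return bool(np.all(L - d + gamma * lam_of < 0))

# Symmetric all-to-all diffusive coupling over n cells: zero row sums.
n = 4
Bbar = np.full((n, n), 1.0 / n) - np.eye(n)   # eigenvalues 0, -1, -1, -1
L = np.array([0.5, 0.5])        # assumed Lipschitz constants
d = np.array([1.0, 0.2])        # assumed degradation rates
gamma = np.array([0.0, 1.0])    # only the second variable is coupled
print(synchronization_condition(Bbar, L, d, gamma))   # True
```

Here λ_2 = −1, so the coupled variable satisfies 0.5 − 0.2 + 1·(−1) = −0.7 < 0 and the uncoupled one satisfies 0.5 − 1.0 < 0, meeting (8.31).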


8.2.2 Other Types of Synchronization

Besides complete synchronization, there is a rich variety of other forms of synchronization, e.g., phase synchronization, partial synchronization, and generalized synchronization.

• Generalized synchronization occurs when there are functions φ_1, ..., φ_n such that φ_1(u_1(t)) = φ_2(u_2(t)) = ... = φ_n(u_n(t)) holds for the coupled system (8.1)–(8.3) after a transitory evolution, i.e., t → ∞, from appropriate initial conditions, where u_i(t) = (x_i(t), s_i(t)). Clearly, if the subsystems or oscillators are mutually coupled, each φ_i is an invertible function; otherwise, e.g., if there is a drive–response configuration among the subsystems, φ_i may not be invertible. Complete synchronization is the particular case of generalized synchronization where φ_1(u_1(t)) = φ_2(u_2(t)) = ... = φ_n(u_n(t)) = φ(t) (Rulkov et al. 1995).

• Partial synchronization occurs when a part of the subsystems, e.g., the first m (m ≤ n) subsystems of (8.1)–(8.3), asymptotically evolve in the sense of generalized synchronization (Inoue et al. 1998). In other words, there are functions φ_1, ..., φ_m such that φ_1(u_1(t)) = φ_2(u_2(t)) = ... = φ_m(u_m(t)) holds for the coupled system (8.1)–(8.3) after a transitory evolution, i.e., t → ∞, from appropriate initial conditions. For a coupled system, there may exist multiple groups that are separately synchronized with different periods, which is also called cluster synchronization.

• Phase synchronization occurs when, in the synchronized state, the amplitudes of the subsystems or oscillators remain unsynchronized but their phases evolve in synchrony, with the phase differences kept constant (Rosenblum et al. 1996). In particular, phase synchronization is called in-phase synchronization, or sometimes bulk synchronization, if all the synchronized states have the same phase. For a dynamical system, there are many ways to define the phase of a periodic, quasi-periodic, or even chaotic trajectory.

Actually, the synchronized states for all these forms of synchronization can be asymptotically stable. This means that once the synchronized state has been reached, the effect of a small perturbation that destroys synchronization is rapidly damped, and the synchronization is re-established. For stochastic synchronization, the definition will be provided in the following sections. The various forms of synchronization mentioned above can also be classified as spontaneous or entrained synchronization, depending on whether an external stimulus is needed. Next, we introduce some elementary methods by which synchronization can be obtained. For entrained synchronization, the external stimulus can be deterministic or stochastic. Here, the synchronization induced by a deterministic stimulus, e.g., the light–dark cycle or a periodic external forcing, is referred to as entrained synchronization, whereas the synchronization induced by stochastic effects such as intrinsic or extrinsic noise will be called noise-induced or noise-driven synchronization. All these kinds of synchronization can be regarded as generalized synchronization, in either deterministic or stochastic form.

8.3 Spontaneous Synchronization of Deterministic Models

Some detailed synthetic multicellular network models have been proposed to study spontaneous synchronization, i.e., synchronization without any extracellular stimulus (McMillen et al. 2002, Garcia-Ojalvo et al. 2004, Kuznetsov and Kopell 2004, Gonze et al. 2005). Two of them are the coupled genetic relaxation oscillators and the coupled repressilators with intercellular signaling molecules, as shown in Figure 8.2. Both fall into the scope of the general model (8.1)–(8.3).

Figure 8.2 Coupled genetic relaxation oscillators and repressilators with intercellular signaling molecules. (from (Zhou et al. 2008))

The mathematical equations corresponding to Figure 8.2(a)–(b) can be written as

\frac{dx_i}{dt} = -\beta_1 x_i + \alpha_1 \frac{1 + \rho x_i^n}{1 + x_i^n} - \gamma x_i y_i + \mu \frac{1 + \rho_1 s_i^2}{1 + s_i^2}, \qquad (8.32)
\frac{dy_i}{dt} = -\beta_2 y_i + \alpha_2 \frac{1 + \rho x_i^n}{1 + x_i^n}, \qquad (8.33)
\frac{dl_i}{dt} = -\beta_3 l_i + \alpha_3 \frac{1 + \rho x_i^n}{1 + x_i^n}, \qquad (8.34)
\frac{ds_i}{dt} = -\beta_4 s_i + \alpha_4 l_i + \eta_{int}(s_e - s_i), \qquad (8.35)

and


\frac{dx_i}{dt} = -\beta_1 x_i + \frac{\alpha_1}{1 + z_i^n} + \gamma_1, \qquad (8.36)
\frac{dy_i}{dt} = -\beta_2 y_i + \frac{\alpha_2}{1 + x_i^n} + \gamma_2, \qquad (8.37)
\frac{dz_i}{dt} = -\beta_3 z_i + \frac{\alpha_3}{1 + y_i^n} + \gamma_3 + \mu \frac{s_i}{1 + s_i}, \qquad (8.38)
\frac{ds_i}{dt} = -\beta_4 s_i + \alpha_4 x_i + \eta_{int}(s_e - s_i), \qquad (8.39)

respectively, where the activation by the AI (s_i) is considered to follow standard Michaelis–Menten kinetics, and μ is the maximal contribution to the lacI transcription in the presence of saturating amounts of AI. The final equation in each model describes the dynamical evolution of the intracellular AI concentration, which is affected by degradation, synthesis, and diffusion toward/from the intercellular medium. The dynamics of the signaling molecule AI in the extracellular environment can be described as

\frac{ds_e}{dt} = -\beta_e s_e + \eta_{ext} \sum_{i=1}^{n} (s_i - s_e), \qquad (8.40)

where η_ext = δ/V_ext with V_ext being the total extracellular volume, n^{-1}\sum_{i=1}^{n} s_i indicates the average AI concentration inside the individual cells, and β_e is the degradation rate of AI in the extracellular environment, which is assumed to be a homogeneous culture.

For the coupled genetic relaxation oscillators (8.32)–(8.35) and (8.40), rapid synchronization can be achieved by the intercellular coupling mechanism even for initially randomly distributed phases. The mechanism underlying this phenomenon is that the cells initially in the high-x state produce sufficient AI to saturate the levels of s in the cells initially in the low-x state, thereby strengthening the coupling, causing rapid transitions, and quickly condensing the range of phases in the population (McMillen et al. 2002). The rapid synchronization is shown in Figure 8.3.

Unlike the coupled genetic relaxation oscillators, (8.36)–(8.39) and (8.40) constitute a model of coupled sinusoidal-type oscillators. It has been shown that synchronization can also be obtained even when there is a relatively broad distribution in the frequencies of the individual oscillators due to various noise signals (Garcia-Ojalvo et al. 2004). Diffusion of the signaling molecules across the cell membrane facilitates intercellular coupling. As the cell density increases, partial frequency locking occurs, and finally perfect locking is achieved, i.e., complete synchronization can be observed when the cell density is large enough (Garcia-Ojalvo et al. 2004). To characterize the transition to synchronization, a quantity R that changes abruptly at the transition point is defined as

R = \frac{\mathrm{Var}_t(M)}{\mathrm{Mean}_i(\mathrm{Var}_t(y_i))} = \frac{\langle M^2 \rangle - \langle M \rangle^2}{\frac{1}{N}\sum_{i=1}^{N} (\langle y_i^2 \rangle - \langle y_i \rangle^2)}, \qquad (8.41)
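A minimal sketch of how the coupled repressilators (8.36)–(8.39) might be integrated numerically, with parameter values taken from the Figure 8.5 caption; eliminating s_e through the mean-field relation s_e = Q·s̄ is our simplifying assumption, not the book's procedure:

```python
import numpy as np

def repressilator_rhs(state, Q, alpha=216.0, beta=1.0, alpha4=0.01,
                      beta4=1.0, mu=10.0, eta=1.0, hill=3, gamma=0.5):
    """Right-hand side of the coupled repressilators (8.36)-(8.39),
    with the extracellular AI eliminated via s_e = Q * mean(s_i),
    a quasi-steady-state assumption. Parameters follow the Figure 8.5
    caption; state has shape (4, n_cells)."""
    x, y, z, s = state
    se = Q * s.mean()
    dx = -beta * x + alpha / (1 + z**hill) + gamma
    dy = -beta * y + alpha / (1 + x**hill) + gamma
    dz = -beta * z + alpha / (1 + y**hill) + gamma + mu * s / (1 + s)
    ds = -beta4 * s + alpha4 * x + eta * (se - s)
    return np.array([dx, dy, dz, ds])

rng = np.random.default_rng(0)
state = rng.uniform(0.0, 5.0, size=(4, 20))   # 20 cells, random phases
dt = 0.01
for _ in range(20000):                        # simple Euler integration
    state = state + dt * repressilator_rhs(state, Q=0.8)
print(np.all(np.isfinite(state)))             # trajectories stay bounded
```

The Euler step is crude; in practice an adaptive ODE solver would be a safer choice for this stiff-looking Hill kinetics.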


Figure 8.3 Rapid synchronization for the coupled genetic relaxation oscillators. (from (McMillen et al. 2002))

where M(t) = (1/N)\sum_{i=1}^{N} y_i(t) is the average signal, ⟨·⟩ denotes the time average, and the dynamics of y_i is defined by (8.33) or (8.37). In the unsynchronized regime, R ≈ 0, whereas R ≈ 1 in the synchronized state. Synchronization occurs when the coupling strength is strong enough, i.e., R → 1 as Q → 1, as shown in Figure 8.4, where Q is defined as

Q = \frac{\delta n / V_{ext}}{\beta_e + \delta n / V_{ext}}, \qquad (8.42)

which is linearly proportional to the cell density if δn/V_ext is sufficiently smaller than the extracellular AI degradation rate β_e (Garcia-Ojalvo et al. 2004). To describe the degree of synchronization across a population of cells, a similar order parameter R is defined as

R = \Big| \frac{1}{n} \sum_{k=1}^{n} e^{i\phi_k} \Big|, \qquad (8.43)

where i = \sqrt{-1} and φ_k stands for the phase of the kth oscillator. Then R = 0 corresponds to the unsynchronized state, whereas R = 1 corresponds to complete synchronization. The dependence of the amplitude and the period of the synchronized entire system on Q is shown in Figure 8.5 for the coupled relaxation oscillators and repressilators. An assembly of relaxation oscillators can maintain a more stable period and amplitude in a wider region of the parameter Q


Figure 8.4 Synchronization transition of the coupled repressilators for increasing Q. (from (Garcia-Ojalvo et al. 2004))

than repressilators. Further, the amplitude and period decrease abruptly when the cell density approaches the limit Q = 1. In other words, the cell density can play a role in the rapid damping of the period and amplitude of coupled genetic relaxation oscillators when it is beyond a threshold (Zhou et al. 2008).

Spontaneous synchronization of coupled circadian oscillators in the suprachiasmatic nucleus has also been investigated based on a similar mechanism. The coupling mechanism through the global level of neurotransmitter concentrations is effective in synchronizing a population of oscillators. Moreover, it has been shown that the phases of individual cells are governed by their intrinsic periods and that efficient synchronization can be achieved even when the average neurotransmitter concentration would dampen the individual oscillators (Gonze et al. 2005).

It is worth noting that synchronization can also be established even though cell-to-cell variation exists, as observed in the coupled repressilators (Garcia-Ojalvo et al. 2004) and circadian oscillators (Gonze et al. 2005). To account for such variation, stochastic parameters are used; for example, in both the coupled repressilators and the coupled circadian oscillators, one or more parameters can be selected randomly from a Gaussian distribution. Because not all of the oscillators are identical, perfect synchronization cannot be achieved, and phase differences between some oscillators still persist in the synchronized state, i.e., phase synchronization.
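Both order parameters, (8.41) and (8.43), can be computed directly from simulated time series; the synthetic signals below are illustrative stand-ins for the y_i trajectories:

```python
import numpy as np

def R_variance(Y):
    """Order parameter (8.41): variance over time of the mean signal,
    divided by the mean of the per-oscillator variances.
    Y has shape (timesteps, N)."""
    M = Y.mean(axis=1)
    return M.var() / Y.var(axis=0).mean()

def R_phase(phases):
    """Order parameter (8.43): modulus of the mean phase factor."""
    return np.abs(np.exp(1j * np.asarray(phases)).mean())

t = np.linspace(0.0, 100.0, 2000)
sync = np.sin(t)[:, None].repeat(50, axis=1)               # identical signals
rng = np.random.default_rng(1)
unsync = np.sin(t[:, None] + rng.uniform(0, 2 * np.pi, 50))  # random phases
print(round(R_variance(sync), 2))      # 1.0: fully synchronized
print(R_variance(unsync) < 0.2)        # True: nearly unsynchronized
print(round(R_phase(np.zeros(10)), 2)) # 1.0: all phases equal
```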


Figure 8.5 The effect of parameter Q on the amplitude and the period of oscillations: (a) and (c) for the coupled relaxation oscillators; (b) and (d) for the coupled repressilators. Parameter values for the coupled relaxation oscillators are n = 100, α1 = 10, α2 = 1, β1 = β2 = 0.5, α3 = 50, β3 = 25, α4 = β4 = 0.4, ρ = 200, n = 4, μ = 10, ηint = 120, ρ1 = 10, and γ = 6. Parameter values for the repressilators are n = 100, α1 = α2 = α3 = 216, β1 = β2 = β3 = 1, α4 = 0.01, β4 = 1, μ = 10, ηint = 1, n = 3, and γ ≡ γ1 = γ2 = γ3 = 0.5. (from (Zhou et al. 2008))

8.4 Entrained Synchronization for Deterministic Models

Besides spontaneous synchronization, where no external forcing is required to induce synchronization, it is well known that external forcing can play important roles in the synchronization of cellular oscillators. For example, in the natural environment, circadian clocks are subjected to the alternation of light and dark every day. This external cycle entrains the coupled oscillators precisely to a 24h period (Gonze et al. 2005, Beersma et al. 1999). Generally, cellular oscillators can be synchronized by appropriate external or internal stimuli, i.e., entrained synchronization. Examples of entrained synchronization have been widely demonstrated, for instance, synchronization of electronic genetic networks by an external forcing, i.e., an external voltage (Wagemakers et al. 2006), synchronization induced by periodic stimulation in squid giant axons (Aihara et al. 1984, Matsumoto et al. 1987, Kaplan et al. 1996), and synchronization induced by periodic impulses (Zhou et al. 2008, Wang et al. 2006b) and light–dark cycles (Gonze et al. 2005).

Note that although there is cell-to-cell variation, i.e., the period of each oscillator varies slightly, a light–dark cycle can still induce a systematic change in the periods through the light intensity: all oscillators are synchronized to a single resulting period, i.e., a precise 24h period, which is identical for all oscillators. Consider the example of the coupled circadian oscillators. The light–dark cycle is simulated by using a square-wave function for the light term L, which switches, e.g., from L = 0 in the dark phase to L = 0.01 in the light phase. Such a forcing entrains the circadian oscillators to a 24h period, as shown in Figure 8.6.
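The square-wave light term can be coded in a few lines; the 12h:12h duty cycle below is an assumption for illustration:

```python
import numpy as np

def light(t, period=24.0, L_on=0.01):
    """Square-wave light term L(t) used to entrain the circadian
    oscillators: L = 0 in the dark phase, L = L_on in the light phase.
    A 12h:12h light-dark cycle is assumed here."""
    return np.where((np.asarray(t) % period) < period / 2.0, L_on, 0.0)

print(float(light(6.0)))    # 0.01  (light phase)
print(float(light(18.0)))   # 0.0   (dark phase)
```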


Figure 8.6 Entrainment of 10,000 coupled circadian oscillators by a light–dark cycle. The light–dark cycle is described by a square-wave forcing: L = 0 in the dark phases and L = 0.01 in the light phases. (a) Distribution of individual periods. (b) Oscillations of 10 randomly chosen oscillators among the 10,000. (c) Distribution of the periods in the coupled system. (d) The oscillation of the mean field. In (b) and (d), the white and black bars indicate the light and dark phases, respectively. (from (Gonze et al. 2005))

In addition to the natural light–dark cycle, which entrains coupled oscillators precisely to a 24h period, there are other types of external forcing, which can be used as artificial control strategies, implementable in experiments and possibly in further clinical applications, to control desynchronization and pathological rhythms, especially when the necessary synchronization cannot be achieved spontaneously. Since individual oscillators interact with each other in a fluctuating external environment, disruption of the rhythmic processes beyond normal limits, the emergence of abnormal rhythms, and large fluctuations in the external environment are often associated with the loss of synchronization. Moreover, diseases can also lead to transitions from synchronization to desynchronization. External forcing can thus be used as an artificial control strategy to compensate for coupling inefficiency and further induce entrained synchronization by increasing the production, the release, and the detection of the signaling molecules.

Generally, external forcing can be imposed on one or more components in the individual oscillators (Gonze et al. 2005). Moreover, external forcing can be imposed on the signaling molecules (Zhou et al. 2008, Wang et al. 2006b). The commonly used extracellular medium and the diffusive process provide an artificial control strategy, i.e., the introduction of the diffusive signaling molecules into the extracellular medium at fixed instants. The impulsive control system can be represented as (8.1)–(8.2) and

\dot{s}_e(t) = -d_e(s_e(t)) + c_e(s_i(t), s_e(t)) + I_{ext}(t), \qquad (8.44)

where I_ext(t) takes the form

I_{ext}(t) = \sigma \sum_{k=1}^{\infty} \delta(t - t_k) \qquad (8.45)

with t_k = kT (k ∈ Z_+ = {1, 2, ...}). More specifically, it takes the form

\dot{x}_i(t) = -d_{x_i}(x_i(t)) + f_i(x_i(t)) + r_i(x_i(t), s_i(t)), \qquad (8.46)
\dot{s}_i(t) = -d_{s_i}(s_i(t)) + p_i(x_i(t), s_i(t)) + c_i(s_i(t), s_e(t)), \qquad (8.47)
\dot{s}_e(t) = -d_e(s_e(t)) + c_e(s_i(t), s_e(t)), \quad t \neq kT, \qquad (8.48)
\Delta s_e(t) = \sigma, \quad t = kT, \qquad (8.49)

where Δs_e(t) = s_e(t^+) − s_e(t) = σ is the impulsive input, with constant or random injection amounts of the signals into the common extracellular medium at the control instants t = kT, with impulsive input period T. Clearly, Δs_e(t) represents a sudden change of the signals in the extracellular medium at the instants t = kT. The dynamics of (8.46)–(8.49) is determined by the original system (8.1)–(8.3), the injection amount σ, which can take a deterministic or stochastic value, and the injection period T. In the case σ = 0, (8.46)–(8.49) reverts to (8.1)–(8.3).

The impulsive control strategy has two beneficial effects. One is that it can compensate for coupling inefficiency when spontaneous synchronization cannot be achieved, i.e., the external forcing can be used as a coupling amplifier so as


to induce controlled synchronization. Moreover, the amount and period of the external forcing are independent of the state variables; therefore, when the periodic external forcing is used to increase its effectiveness at control, we need not measure the system states at the control instants, which makes the method biologically plausible and easy to implement in experiments and possibly in medical treatments. The other is that it can reduce the noisiness of the multicellular network, effectively transforming an ensemble of noisy clocks into a very reliable collective oscillator. The findings demonstrate an efficient way to synchronize multicellular networks by an artificial control strategy and also provide a powerful mechanism for noise resistance (Wang et al. 2006b).

Generally, external forcing can profoundly affect the dynamics of the signaling molecules in the extracellular medium, and thereby the dynamics of the individual cells, through the diffusive process. In other words, the dynamics of the impulsive control system may strongly depend on the frequency and amount of the external forcing. Consider the coupled repressilators with periodic injection of the signaling molecule AI into the common extracellular medium. The schematic of the coupled repressilators with impulsive control is shown in Figure 8.7. It is found that external forcing can indeed entrain the intrinsic rhythms or induce collective rhythms even though the natural periods of the individual oscillators are broadly distributed.


Figure 8.7 Schematic of the coupled repressilators with the quorum-sensing mechanism and periodic external forcing for impulsive control. (from (Wang et al. 2006b))
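A sketch of the impulsive scheme (8.46)–(8.49): integrate the ODEs between control instants and add σ to the extracellular variable at t = kT. The toy one-dimensional system in the example is ours, not the book's:

```python
import numpy as np

def integrate_with_impulses(rhs, u0, t_end, dt, T, sigma, se_index):
    """Euler integration of an impulsive system in the style of
    (8.46)-(8.49): between control instants the state follows the ODEs;
    at t = kT the extracellular signal receives a jump se -> se + sigma.
    `rhs` and `se_index` are caller-supplied assumptions."""
    u = np.array(u0, dtype=float)
    t, next_impulse = 0.0, T
    while t < t_end:
        u += dt * rhs(t, u)
        t += dt
        if t >= next_impulse:        # control instant t = kT
            u[se_index] += sigma     # impulsive input (8.49)
            next_impulse += T
    return u

# Toy system: a decaying extracellular signal kept alive by impulses.
rhs = lambda t, u: np.array([-0.5 * u[0]])
u_final = integrate_with_impulses(rhs, [0.0], t_end=100.0, dt=0.01,
                                  T=5.0, sigma=1.0, se_index=0)
print(u_final[0] > 0.0)    # True: impulses sustain the signal
```

Note that the update rule never inspects the state before injecting, which mirrors the point made above that the control is independent of the system states.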

The impulsive control system can easily be synchronized, especially when the impulsive period T is close to the mean period of the oscillators; in that case, a relatively small impulsive amount is required to induce synchronization. In addition, the higher the impulsive amount of the external input, the larger the range of impulsive periods for synchronization. In the presence of appropriate impulsive control, when the impulsive period is close to the mean period of all oscillators, the characteristic oscillations of the controlled synchronization do not change qualitatively, except for the dynamics of AI in the intracellular and extracellular medium, which is changed qualitatively by the external forcing. On the other hand, for the same impulsive period, the minimum impulsive amount required to synchronize a population of oscillators decreases with increasing coupling strength, which means that, for a specific impulsive period and a relatively larger coupling strength, a smaller impulsive amount is required to compensate for coupling inefficiency. Therefore, even if the coupling itself is insufficient to induce spontaneous synchronization, it still plays an important role in entrained synchronization. In other words, entrained synchronization is induced by the coupling together with the external forcing, rather than by the external forcing alone.

Because the rhythms actually arise from a stochastic cellular mechanism interacting with a fluctuating environment, a strictly constant impulsive amount is not plausible. It has been shown that collective dynamics can also emerge under a periodic input with a random impulsive amount σ. In such a case, σ can be chosen randomly from an interval [σ_a, σ_b], and a population of noisy oscillators can be entrained by a periodic source of the coupling substance to yield sustained oscillations with an irregular waveform and a stable period, which is determined by the relative magnitude of the impulsive period. Unlike the period, which remains stable, the amplitude is rendered irregular both by the irregularity of the oscillators induced by the intrinsic and extrinsic noise and by the random impulsive amounts. External stimuli can also induce rich dynamics such as Arnold tongues and resonance (Zhou et al. 2008), and oscillation death and chaos (Wang et al. 2006b).
For example, for the coupled repressilators (8.36)–(8.39) and (8.44), the Arnold tongue and resonance regions as functions of the impulsive amount σ and period T are shown in Figure 8.8. Besides the periodic impulses, other forms of external forcing can also be used, for example, a sinusoidal stimulus. It has been shown that when the sinusoidal stimulus I_ext(t) = λ + σ cos(ωt) is used, periodic intermittent synchronization can be observed due to the time-varying forcing strength.

8.5 Noise-driven Synchronization for Stochastic Models Without Coupling

When examined carefully, biological oscillators are rarely strictly periodic but rather fluctuate irregularly with time. The fluctuations arise from the combined influences of intracellular or intrinsic noise, due to the intrinsically stochastic nature of the biochemical reactions involved, i.e., the random transitions among discrete chemical states such as random births and deaths of individual molecules; extracellular noise, owing to environmental perturbations or stochastic variations of externally set parameters; and biological variance,


Figure 8.8 External forcing induced Arnold tongues and resonance regions. Five different dynamical regions are labeled A-E. In region A, the coupled oscillators can display rich dynamics, in particular, including oscillation death. In the dominant region B, the Arnold tongue is found around the natural period. Within this resonance region, the period of oscillation is entrained to the external period. The regions C, D, and E show 3:2, 2:1, and 3:1 resonance regions, respectively. (from (Zhou et al. 2008))

i.e., different properties among cells. The continual interaction between the environmental fluctuations and the intracellular feedback mechanisms renders the separation of these effects impossible. In most cases, additive and multiplicative stochastic terms, or random parameters, can be used to simulate stochastic fluctuations. Such stochastic noise may not only affect the biological activities of both individual cells and the entire multicellular system but may also be exploited by living organisms to positively facilitate certain collective behavior.

By using the phase reduction method (Kuramoto 1984) and analytically computing the Lyapunov exponent, Teramae and Tanaka (Teramae and Tanaka 2004) showed that uncoupled limit-cycle oscillators can be in-phase synchronized by common weak additive noise, regardless of their intrinsic properties and initial conditions. The population of n oscillators driven by common additive noise is described by

\dot{x}_i(t) = f(x_i) + \xi(t), \qquad (8.50)

where ξ(t) is a vector of Gaussian white noise. The elements of the vector are normalized as ⟨ξ_i(t)⟩ = 0 and ⟨ξ_i(t)ξ_j(s)⟩ = 2D_{ij}δ(t − s), where D = {D_{ij}} is a variance matrix of the noise components with i, j = 1, ..., n. Assume that the unperturbed system has a limit cycle and consider the common noise as a weak perturbation to the deterministic oscillators. Based on the Kuramoto model (Kuramoto 1984), one would attempt to obtain a phase variable φ(x) such that

\frac{d\phi}{dt} = \omega, \qquad (8.51)


where ω is the intrinsic frequency of the unperturbed oscillators. Then, the equation for the phase becomes

\frac{d\phi}{dt} = \frac{\partial\phi}{\partial x} f(x) + \frac{\partial\phi}{\partial x}\xi = \omega + \frac{\partial\phi}{\partial x}\xi. \qquad (8.52)

Because all the oscillators are identical and there is no coupling among them, we can study the phase synchronization of the entire population in a reduced system of two oscillators. This can be converted into the equation

\frac{d\phi_i}{dt} = \omega + Z(\phi_i)\xi, \quad i = 1, 2, \qquad (8.53)

where φ_i is the phase variable of the ith oscillator, Z is the phase-dependent sensitivity defined as Z(φ) = \mathrm{grad}_x \phi \big|_{x = x_0(\phi)}, and x_0(φ) is the unperturbed limit-cycle solution. The phase equation obtained by using Itô's formula is

\frac{d\phi_i}{dt} = \omega + Z'(\phi_i)^T D Z(\phi_i) + Z(\phi_i)\xi, \qquad (8.54)

where the prime denotes differentiation with respect to φ_i. When all the oscillators are identical and there is no interaction between them, the phase synchronization of the entire population can be reduced to a system of two oscillators. To prove that the synchronizing solution φ_1(t) = φ_2(t) is stable, one needs to calculate the Lyapunov exponent λ of the solution. Define the phase difference between the two oscillators as ψ = φ_2 − φ_1. The linearization of (8.54) with respect to ψ yields

\frac{d\psi}{dt} = \big[\big(Z'(\phi_i)^T D Z(\phi_i)\big)' + Z'(\phi_i) \cdot \xi\big]\psi, \qquad (8.55)

where φ_i obeys (8.54). By proving the Lyapunov exponent to be negative, i.e.,

\lambda = -\frac{1}{2\pi} \int_0^{2\pi} Z'^{T} D Z' \, d\phi \leq 0, \qquad (8.56)

it can be shown that the phase synchronization induced by the common weak noise is stable in an arbitrary oscillator system, regardless of the detailed oscillatory dynamics. How weak the noise must be, however, is difficult to determine, because this is not a quantitative condition, and noise that is too strong may also decrease the synchronization probability. For coupled oscillators, on the other hand, the situation may be different due to the combined influences of the intercellular coupling and the common noise. In other words, the coupled oscillators can be synchronized more easily, because both the noise and the intercellular coupling may play active roles in inducing synchronization; that is, the effects of the common noise can be more easily imposed on the individual cells through the coupling, i.e., the diffusive signaling molecules.
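The Lyapunov exponent (8.56) can be evaluated numerically for a given phase sensitivity; the choice Z(φ) = sin φ with a scalar noise variance D, for which λ = −D/2 analytically, is an assumed test case, not an example from the book:

```python
import numpy as np

def noise_lyapunov(Zp, D, n=4096):
    """Numerically evaluate the Lyapunov exponent (8.56),
    lambda = -(1/2pi) * integral_0^{2pi} Z'(phi)^T D Z'(phi) dphi,
    in the scalar case, where Zp is the derivative Z' of the
    phase sensitivity and D is the noise variance."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    integrand = D * Zp(phi) ** 2      # Z'^T D Z' in one dimension
    return -integrand.mean()          # mean over [0, 2pi) = (1/2pi) * integral

# For Z(phi) = sin(phi), Z'(phi) = cos(phi): lambda = -D/2 analytically.
lam = noise_lyapunov(np.cos, D=0.1)
print(round(lam, 4))    # -0.05
```

The result is strictly negative for any non-constant Z, which is the content of the stability claim above.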


8.6 A General Multicellular Network for Stochastic Models with Coupling

8.6.1 A Model

A general method of modeling multicellular networks with stochastic fluctuations, i.e., going from the master equation to the Langevin equations and then to the cumulant evolution equations, including the derivation of intracellular and extracellular noise from the biochemical reactions involved, was developed for synchronization in multicellular networks (Chen et al. 2005, Zhou et al. 2005). Sufficient conditions for synchronization were also provided based on the global Hopf bifurcation theory (Alexander et al. 1978, Alexander et al. 1986). The results are obtained under the assumption that the cell number is sufficiently large that the system, with a Gaussian approximation, can be expressed by the cumulant equations. Therefore, the existence of periodic solutions in the cumulant equations implies that the original n cells show bulk synchronization (Chen et al. 2005).

As described in Chapter 2, for a general molecular network in a single cell or a subsystem with m molecular species {S_1, ..., S_m} that react through M reaction channels {R_1, ..., R_M}, we can express the M intracellular biochemical reactions by the master equation (2.55). Depending on the required accuracy and the available computational power, the master equation can be represented approximately by the Langevin equation (2.104), the Fokker–Planck equation (2.113), and further by the cumulant equation (2.142). For a multicellular system with coupling, the master equation includes not only the M intracellular biochemical reactions but also the diffusion reactions, which can be expressed approximately as

X_i \; \overset{d_{ii}}{\underset{d_{ii} v/V}{\rightleftharpoons}} \; Y_i, \qquad (8.57)

where X_i is the number of the intracellular signaling molecules, d_ii is the diffusion rate of X_i between the cell and the extracellular environment, Y_i is the total number of X_i in the extracellular environment, v = v̄A, and V = V̄A, with v̄, V̄, and A being the individual cell volume, the total culture or environmental volume, and the Avogadro number, respectively. Moreover, the effects of extracellular noise with a Gaussian distribution are also assumed to be associated with the diffusion process of X_i and are incorporated in the master equation; they can be equivalently described in the form of the biochemical reactions

X_i \; \overset{\sigma_{ii}^2 v/(2X_i)}{\underset{\sigma_{ii}^2 v/(2Y_i)}{\rightleftharpoons}} \; Y_i, \qquad (8.58)

where σii² is the extracellular noise intensity or variance, which affects the cell dynamics through the signaling molecules Xi and Yi. Note that when the


master equations are used to model multicellular systems, both the probability function P(X; t) and the transition rates P_{XX′} and P_{X′X} are also considered as functions of the environmental variables Yi if the coupling is included, and dii = 0 if the ith molecule is not a coupling variable. By adding (8.57)-(8.58) to the master equation (2.55), we can analyze and simulate the dynamical behavior of multicellular networks, with both the diffusion process and the extracellular noise taken into account, by the algorithms of stochastic simulation described in Chapter 2. We assume that the system is homogeneous owing to the free diffusion and transportation of the signaling molecules between individual cells and the environment, which means that the signaling molecules are randomly and uniformly distributed throughout the environment. When there is a sufficiently large number of cells, i.e., n → ∞, the concentration of Y approaches the average or mean-field concentration of X, i.e., Y(t)/V = N(t − τ) ≡ ⟨X(t − τ)⟩/v, which represents the time-delayed feedback effects; here N(t) is the mean value of the concentration X(t)/v. Hence, the master equation (2.55) with (8.57)-(8.58) represents a general multicellular network, which can be numerically simulated by the modified Gillespie algorithm with time delay effects and a parallel computation scheme (Chen et al. 2005, Wang et al. 2008). When the coupling by the diffusion reactions is considered in the approximation of the master equation, the Langevin equations for the coupled multicellular networks become

dxi(t)/dt = fi(x(t)) + dii(yi(t) − xi(t)) + ξi(t),          (8.59)

where the ξi(t) are Gaussian white noise signals with zero means ⟨ξi(t)⟩ = 0 and covariances ⟨ξi(t)ξj(t′)⟩ = (Kij(x(t)) + dij(yi(t) + xj(t)) + σij²)δ(t − t′). Note that dij = σij = 0 for i ≠ j, and dii = 0 if the variable xi(t) is not a coupling variable.
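The coupled Langevin form (8.59) can be integrated directly by the Euler-Maruyama scheme once the delayed mean field is kept in a history buffer. The sketch below does this for a deliberately small toy model (one species per cell with linear kinetics f(x) = k − g x and additive noise); the kinetics and all parameter values are illustrative assumptions, not the model of this section.

```python
import math, random

# Euler-Maruyama sketch of the coupled Langevin form (8.59) for a toy model:
# intracellular kinetics f(x) = k - g*x, diffusive coupling d*(y - x) to a
# mean-field environmental signal y(t) = mean_j x_j(t - tau).
# All parameter values (k, g, d, tau, sigma) are illustrative assumptions.

def simulate(n_cells=5, k=10.0, g=1.0, d=0.5, tau=1.0, sigma=0.2,
             dt=0.01, t_end=20.0, seed=1):
    random.seed(seed)
    lag = int(round(tau / dt))               # delay expressed in time steps
    steps = int(round(t_end / dt))
    x = [k / g] * n_cells                    # start at the deterministic fixed point
    history = [x[:] for _ in range(lag + 1)] # buffer holding x(t - tau), ..., x(t)
    for _ in range(steps):
        delayed = history[0]                 # state tau time units ago
        y = sum(delayed) / n_cells           # mean-field environmental signal
        new_x = []
        for j in range(n_cells):
            drift = (k - g * x[j]) + d * (y - x[j])
            noise = sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
            new_x.append(max(0.0, x[j] + drift * dt + noise))
        x = new_x
        history.pop(0)
        history.append(x[:])
    return x

final = simulate()
```

Because the kinetics are linear, the cells fluctuate around the fixed point k/g; replacing f with nonlinear (e.g., Hill-type) kinetics turns the same loop into a simulator for delay-coupled oscillators.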
When the system is sufficiently large, we assume that the stochastic variables obey a Gaussian distribution. Then it can be proved that (8.59) is equivalently expressed by the first and second cumulant evolution equations, which means that we can actually examine the dynamics by deterministic cumulants instead of the complicated stochastic variables. Let us denote the first cumulant or mean of x by N, with elements Ni = ⟨xi⟩, and the second cumulant or covariance of x by M, with elements Mij = ⟨(xi − Ni)(xj − Nj)⟩. Then, by integrating over all x, the cumulant evolution equations can be obtained as follows:

dNi(t)/dt = Fi(N(t), M(t)) + dii(Ni(t − τi) − Ni(t)),          (8.60)

dMij(t)/dt = Gij(N(t), M(t)) − (dii + djj)Mij(t) + dij(Ni(t − τi) + Nj(t)),          (8.61)

where i, j = 1, ..., m, Fi(N(t), M(t)) = ⟨fi(x(t))⟩, and Gij(N(t), M(t)) = ⟨(xi(t) − Ni(t))fj(x(t))⟩ + ⟨(xj(t) − Nj(t))fi(x(t))⟩ + Kij(x(t)) + σij². The vector


N(t) clearly has m elements. On the other hand, the number of non-zero elements of the covariance matrix M(t) is at most m(m + 1)/2, but more than m. Note that the element dii is zero if xi is not a coupling variable with the environment; because not all molecules are coupled with the environment, many dii (i = 1, ..., m) are generally zero. Clearly, (8.61) mainly represents the effect of noise. Although the beneficial roles of noise in synchronization have been extensively studied, a complete understanding of their origin, and of how to exploit them to regulate cellular functions, requires further investigation. The likely mechanism for synchronizing a population of interacting oscillators by common extracellular noise is that the noise is shared by all cells and can be effectively exerted on each cell through the intercellular coupling, i.e., the diffusive process of the signaling molecules. Moreover, intracellular and extracellular noise may play different roles in inducing synchronization. Since the intracellular noise signals of each cell are independent, they generally tend to disturb synchronization among cells. The extracellular noise, however, is nearly common to all cells because of the shared extracellular environment, and it facilitates the synchronization of the dynamics of all cells by exerting the same fluctuations on each cell via signaling molecules.

8.6.2 Example of a Gene Regulatory Network

Consider the example of a coupled genetic network, i.e., a two-gene model using luxI and luxR with the promoter PlacLux0, adopted in (Chen et al. 2005) and shown in Figure 8.9. The genes luxI and luxR, which coordinate the behavior of bacteria, such as quorum sensing, were initially discovered in the marine bacterium Vibrio fischeri. They are constructed as an operon under the control of the promoter PlacLux0.
Cell-to-cell coupling is accomplished by the diffusion of a small signaling molecule, AI, into the extracellular environment; AI plays the major role in cell-to-cell communication. The protein LuxI is an AI synthase that produces AI. The protein LuxR and AI are each first dimerized and then form a complex, i.e., a hetero-tetramer, which inhibits the activity of the promoter PlacLux0. As a signaling molecule, AI freely diffuses into the environment to exchange information with other cells, and then enters individual cells to alter gene expression. Let AI2 and LuxR2 denote the AI and LuxR dimers, and let AL and ALD represent the AI2-LuxR2 and AI2-LuxR2-DNA complexes, respectively. Then the autoinducer synthesis, the multimerization reactions of the proteins, and the binding reaction on the regulatory region of the DNA in individual cells are described as follows:

8.6 A General Multicellular Network for Stochastic Models with Coupling

289

Figure 8.9 A two-gene model of a gene regulatory network. Gene luxR produces the protein LuxR, which is dimerized. Protein LuxI synthesizes AI, which forms a dimer and further a hetero-tetramer by binding to a LuxR dimer. The AI-LuxR tetramer binds to the promoter PlacLux0 to inhibit the transcription of the genes luxR and luxI. Cell communication or synchronization is accomplished by diffusing AIs to the extracellular environment, which further enter the cells as signaling molecules to regulate gene expression (from (Chen et al. 2005))

LuxI --(ka)--> LuxI + AI,          (8.62)

AI + AI <==> AI2   (forward rate k1, reverse rate k−1),          (8.63)

LuxR + LuxR <==> LuxR2   (forward rate k2, reverse rate k−2),          (8.64)

AI2 + LuxR2 <==> AL   (forward rate k3, reverse rate k−3),          (8.65)

AL + DNA <==> ALD   (forward rate k4, reverse rate k−4).          (8.66)

Let the copy number of plasmids carrying the operon luxI and luxR be nD. Then a conservation condition for the DNA binding sites can be obtained: the total number of free DNAs and ALDs should equal nD. On the other hand, the reactions involving transcription, translation, and degradation in a cell are expressed as

DNA --(km)--> mRNA_LuxI + mRNA_LuxR + DNA,          (8.67)

ALD --(α km)--> mRNA_LuxI + mRNA_LuxR + ALD,          (8.68)

mRNA_LuxI --(kpi)--> LuxI + mRNA_LuxI,          (8.69)

mRNA_LuxR --(kpr)--> LuxR + mRNA_LuxR,          (8.70)


where 0 < α < 1 is a repression coefficient. As shown in (8.67)-(8.68), mRNA_LuxI and mRNA_LuxR are produced by the same reactions because of the operon structure. The molecules LuxI, LuxR, mRNA_LuxI, mRNA_LuxR, and AI degrade at rates ei, er, emi, emr, and ea, respectively. Denote the numbers of LuxI, LuxR, AL, ALD, AI2, LuxR2, mRNA_LuxI, mRNA_LuxR, and AI as R1, R2, R3, R4, R5, R6, R7, R8, and R9, respectively. Then we can derive the master equation for the gene network shown in Figure 8.9. For convenience, we define the following molecules: X1, LuxI; X2, LuxR; X3, AL; X4, ALD; X5, AI2; X6, LuxR2; X7, mRNA_LuxI; X8, mRNA_LuxR; X9, AI; and Y, AI in the environment. Define nD as the total number of DNAs and nDNA as the free DNA number; then, by the conservation condition, we have nDNA + X4 = nD. From (2.55) and (8.57)-(8.58), the transition rates and the states corresponding to the reactions (8.62)-(8.70) and (8.57)-(8.58) are listed in Table 8.1, where the last two rows represent the diffusion process and the extracellular noise effect between each cell and the environment for AI according to (8.57)-(8.58). In Table 8.1, the volume factors v and V multiply some of the wk to convert concentrations to numbers of molecules, because the rates k1-k4 belong to second-order reactions and are defined in the given data in terms of concentrations rather than numbers. Then, by appropriate approximations to the master equation as described in Chapter 2, the Langevin equations of (8.59) for a single cell in terms of concentrations can be obtained:

dx1(t)/dt = −ei x1(t) + kpi x7(t) + ξ1,
dx2(t)/dt = −2k2 x2(t)(x2(t) − 1/v) + 2k−2 x6(t) + kpr x8(t) − er x2(t) + ξ2,
dx3(t)/dt = k3 x5(t)x6(t) − x3(t)(k−3 + k4(nD/v − x4(t))) + k−4 x4(t) + ξ3,
dx4(t)/dt = k4 x3(t)(nD/v − x4(t)) − k−4 x4(t) + ξ4,
dx5(t)/dt = k1 x9(t)(x9(t) − 1/v) − k−1 x5(t) − k3 x5(t)x6(t) + k−3 x3(t) + ξ5,
dx6(t)/dt = k2 x2(t)(x2(t) − 1/v) − k−2 x6(t) − k3 x5(t)x6(t) + k−3 x3(t) + ξ6,
dx7(t)/dt = km(nD/v − x4(t)) + α km x4(t) − emi x7(t) + ξ7,
dx8(t)/dt = km(nD/v − x4(t)) + α km x4(t) − emr x8(t) + ξ8,
dx9(t)/dt = −2k1 x9(t)(x9(t) − 1/v) + 2k−1 x5(t) + ka x1(t) − ea x9(t) + d(x9(t − τ) − x9(t)) + ξ9,

where ⟨ξi(t)ξj(t′)⟩ = Kij δ(t − t′) for i ≠ 9 and j ≠ 9, with Kij = Kji,

⟨ξ9(t)ξ9(t′)⟩ = (K99 + d(x9(t − τ) + x9(t)) + σ²)δ(t − t′),

and


Table 8.1 Transition rates and states

  k   θk,1 θk,2 θk,3 θk,4 θk,5 θk,6 θk,7 θk,8 θk,9   wk
  1     0    0    0    0    0    0    0    0    1    ka X1(t)
  2     0    0    0    0    1    0    0    0   -2    k1 X9(t)(X9(t) − 1)/v
  3     0    0    0    0   -1    0    0    0    2    k−1 X5(t)
  4     0   -2    0    0    0    1    0    0    0    k2 X2(t)(X2(t) − 1)/v
  5     0    2    0    0    0   -1    0    0    0    k−2 X6(t)
  6     0    0    1    0   -1   -1    0    0    0    k3 X5(t)X6(t)/v
  7     0    0   -1    0    1    1    0    0    0    k−3 X3(t)
  8     0    0   -1    1    0    0    0    0    0    k4 X3(t)(nD − X4(t))/v
  9     0    0    1   -1    0    0    0    0    0    k−4 X4(t)
 10     0    0    0    0    0    0    1    1    0    km(nD − X4(t))
 11     0    0    0    0    0    0    1    1    0    α km X4(t)
 12     1    0    0    0    0    0    0    0    0    kpi X7(t)
 13     0    1    0    0    0    0    0    0    0    kpr X8(t)
 14     0    0    0    0    0    0   -1    0    0    emi X7(t)
 15     0    0    0    0    0    0    0   -1    0    emr X8(t)
 16    -1    0    0    0    0    0    0    0    0    ei X1(t)
 17     0   -1    0    0    0    0    0    0    0    er X2(t)
 18     0    0    0    0    0    0    0    0   -1    ea X9(t)
 19*    0    0    0    0    0    0    0    0    1    d Y9(t)v/V + σ²v/2
 20     0    0    0    0    0    0    0    0   -1    d X9(t) + σ²v/2

* If n → ∞, then Y9(t) = X9(t − τ)V/v.

The transition rate wk(X(t)) = 0 if wk(X(t)) < 0 or if wk(X(t)) has a variable Xi(t) satisfying Xi(t) + θk,i < 0, because wk and X(t) must remain non-negative. Likewise, wk(X(t)) = 0 if wk(X(t)) has a term nD − X4(t) satisfying either nD − X4(t) < 0 or nD − (X4(t) + θk,4) < 0, owing to the conservation condition on the DNA number.
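This non-negativity rule carries directly into code: each reaction channel is a pair (state-change vector θk, propensity wk), with the propensity clamped at zero. The sketch below encodes four of the twenty rows of Table 8.1 this way; the rate-constant values and the toy state are placeholders, not the parameters used in the book.

```python
# Sketch: a few rows of Table 8.1 as (state-change vector, propensity) pairs.
# State indices 0..8 correspond to X1..X9; rate constants are placeholders.
ka, k1, kminus1, ea, v = 1.0, 0.5, 0.2, 0.1, 10.0

def clamp(w):
    # The table's rule: a propensity is set to zero rather than going negative.
    return max(0.0, w)

REACTIONS = [
    # k = 1: LuxI -> LuxI + AI          (theta: X9 += 1)
    ((0, 0, 0, 0, 0, 0, 0, 0, 1),  lambda X: clamp(ka * X[0])),
    # k = 2: AI + AI -> AI2             (theta: X5 += 1, X9 -= 2)
    ((0, 0, 0, 0, 1, 0, 0, 0, -2), lambda X: clamp(k1 * X[8] * (X[8] - 1) / v)),
    # k = 3: AI2 -> AI + AI             (theta: X5 -= 1, X9 += 2)
    ((0, 0, 0, 0, -1, 0, 0, 0, 2), lambda X: clamp(kminus1 * X[4])),
    # k = 18: AI degradation            (theta: X9 -= 1)
    ((0, 0, 0, 0, 0, 0, 0, 0, -1), lambda X: clamp(ea * X[8])),
]

state = (3, 0, 0, 0, 2, 0, 0, 0, 1)  # X1..X9 copy numbers (toy values)
props = [w(state) for _, w in REACTIONS]
```

Note how the combinatorial factor X9(X9 − 1) in row 2 automatically yields a zero propensity when only one AI molecule is present, so dimerization can never drive a count negative.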

K11(t) = ei x1(t) + kpi x7(t),
K22(t) = 4k2 x2(t)(x2(t) − 1/v) + 4k−2 x6(t) + kpr x8(t) + er x2(t),
K33(t) = k3 x5(t)x6(t) + x3(t)(k−3 + k4(nD/v − x4(t))) + k−4 x4(t),
K44(t) = k4 x3(t)(nD/v − x4(t)) + k−4 x4(t) + σ²,
K55(t) = k1 x9(t)(x9(t) − 1/v) + k−1 x5(t) + k3 x5(t)x6(t) + k−3 x3(t) + σ²,
K66(t) = k2 x2(t)(x2(t) − 1/v) + k−2 x6(t) + k3 x5(t)x6(t) + k−3 x3(t),
K77(t) = km(nD/v − x4(t)) + α km x4(t) + emi x7(t),
K88(t) = km(nD/v − x4(t)) + α km x4(t) + emr x8(t),


K99(t) = 4k1 x9(t)(x9(t) − 1/v) + 4k−1 x5(t) + ka x1(t) + d(x9(t − τ) − x9(t)) + ea x9(t) + σ²,
K26(t) = −2k2 x2(t)(x2(t) − 1/v) − 2k−2 x6(t),
K34(t) = −k4 x3(t)(nD/v − x4(t)) − k−4 x4(t),
K35(t) = −k3 x5(t)x6(t) − k−3 x3(t),
K36(t) = −k3 x5(t)x6(t) − k−3 x3(t),
K56(t) = k3 x5(t)x6(t) + k−3 x3(t),
K59(t) = −2k1 x9(t)(x9(t) − 1/v) − 2k−1 x5(t),

where xi represents the concentration of Ri, i.e., xi = Ri/v, and all other Kij = 0. Note that if a term in fi or Kij is negative, the corresponding term is set to zero, owing to the constraints on wk in the master equation. The intracellular noises ξi are derived directly from the master equation by the second-order approximation in θk,i, and they are additive and white, with identical and independent distributions for each cell. Theoretically, when the individual jumps |θk,i| of the numbers Xi(t) are small, this approximation approaches an accurate result; i.e., additive white noise signals are an adequate representation of the fluctuations in a cell. Otherwise, the Ω-expansion technique or another approximation method should be adopted to approximate the master equation more accurately. In the numerical examples, all the jumps |θk,i| are 1 or 2, which are small compared with Xi(t), but they may still introduce errors into the simulation.

Define Ni to be the first cumulant or mean value of xi in the cell and Mij to be the second cumulant or covariance of xi and xj. Then, according to (8.61) and the Gaussian distribution approximation of Chapter 2, we have the evolution equations for the first and second cumulants, i.e., the means and the covariances, as follows:

dN1/dt = −ei N1 + kpi N7
dN2/dt = −2k2(N2² + M22) + 2k−2 N6 + kpr N8 + (2k2/v − er)N2
dN3/dt = k3(N5 N6 + M56) − N3(k−3 + k4 nD/v − k4 N4) + k4 M34 + k−4 N4
dN4/dt = k4 N3(nD/v − N4) − k−4 N4 − k4 M34
dN5/dt = k1(N9² + M99) − k−1 N5 − k3(N5 N6 + M56) + k−3 N3 − (k1/v)N9
dN6/dt = k2(N2² + M22) − k−2 N6 − k3(N5 N6 + M56) + k−3 N3 − (k2/v)N2
dN7/dt = km(nD/v − N4) + α km N4 − emi N7
dN8/dt = km(nD/v − N4) + α km N4 − emr N8
dN9/dt = −2k1(N9² + M99) + 2k−1 N5 − ea N9 + ka N1 + (2k1/v)N9 + d(N9(t − τ) − N9(t))
(1/2) dM11/dt = (1/2)(ei N1 + kpi N7) − ei M11
(1/2) dM22/dt = 2k2(N2² + M22) + 2k−2 N6 + (1/2)kpr N8 + ((1/2)er − 2k2/v)N2 − (4k2 N2 + er)M22 + 2k−2 M26 + (2k2/v)M22
(1/2) dM33/dt = (1/2)[k3(N5 N6 + M56) + N3(k−3 + k4 nD/v − k4 N4) − k4 M34 + k−4 N4] + k3 N6 M35 + k3 N5 M36 − (k−3 + k4 nD/v − k4 N4)M33 + (k4 N3 + k−4)M34
(1/2) dM44/dt = (1/2)[k4 N3(nD/v − N4) + k−4 N4 − k4 M34] + k4(nD/v − N4)M34 − (k−4 + k4 N3)M44 + (1/2)σ²
(1/2) dM55/dt = (1/2)[k1(N9² + M99) + k−1 N5 + k3(N5 N6 + M56) + k−3 N3 − (k1/v)N9] − (k−1 + k3 N6)M55 + k−3 M35 − k3 N5 M56 + 2k1 N9 M59 − d M55 − (k1/v)M59 + (1/2)σ²
(1/2) dM66/dt = (1/2)[k2(N2² + M22) + k−2 N6 + k3(N5 N6 + M56) + k−3 N3 − (k2/v)N2] − (k−2 + k3 N5)M66 + 2k2 N2 M26 + k−3 M36 − k3 N6 M56 − (k2/v)M26
(1/2) dM77/dt = (1/2)[km(nD/v − N4) + α km N4 + emi N7] − emi M77
(1/2) dM88/dt = (1/2)[km(nD/v − N4) + α km N4 + emr N8] − emr M88
(1/2) dM99/dt = (1/2)[4k1(N9² + M99) + 4k−1 N5 + ea N9 + ka N1 − (4k1/v)N9 + d(N9(t − τ) − N9)] − (ea + 4k1 N9)M99 + ka M19 + 2k−1 M59 + (2k1/v)M99 + (1/2)σ²
dM26/dt = −2k2(N2² + M22) − 2k−2 N6 + (2k2/v)N2 + 2k2 N2 M22 + 2k−2 M66 − (k−2 + er + 4k2 N2 + k3 N5)M26 − (k2/v)M22 + (2k2/v)M26
dM34/dt = −(k4 nD/v)N3 + k4(N3 N4 + M34) − k−4 N4 + k4(nD/v − N4)M33 + (k4 N3 + k−4)M44 − (k4 N3 + k−3 + k−4 + k4 nD/v − k4 N4)M34
dM35/dt = −k3(N5 N6 + M56) − k−3 N3 + k−3 M33 + k3 N6 M55 − (k4(nD/v − N4) + k−1 + k−3 + k3 N6)M35 + k3 N5(M56 − M36)
dM36/dt = −k3(N5 N6 + M56) − k−3 N3 + k−3 M33 + k3 N5 M66 − k3 N6 M35 − (k−2 + k−3 + k4 nD/v + k3 N5 − k4 N4)M36 + k3 N6 M56
dM56/dt = k3(N5 N6 + M56) + k−3 N3 − k3 N6 M55 − k3 N5 M66 + k−3(M35 + M36) − (k−1 + k−2 + k3 N6 + k3 N5)M56
dM59/dt = −2k1(N9² + M99) − 2k−1 N5 + (2k1/v)N9 + 2k−1 M55 + 2k1 N9 M99 − (d + k−1 + ea + k3 N6 + 4k1 N9)M59 − (k1/v)M99 + (2k1/v)M59
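The structure of these evolution equations, means driven by covariances and covariances driven by the diffusion terms Kij, is easiest to see on a minimal example. The sketch below integrates the two cumulant equations of a single birth-death species (constant production b, degradation rate g, so f(x) = b − gx and K(x) = b + gx); for this linear case the Gaussian closure is exact, and the steady state N* = M* = b/g recovers the Poisson statistics of the master equation. The parameter values are illustrative.

```python
# Sketch: cumulant equations of the form (8.60)-(8.61) for a single
# birth-death species with f(x) = b - g*x and K(x) = b + g*x:
#   dN/dt = b - g*N,
#   dM/dt = -2*g*M + b + g*N.
# Steady state: N* = M* = b/g (Poisson). Parameter values are illustrative.
b, g = 50.0, 2.0
N, M = 0.0, 0.0
dt = 1e-3
for _ in range(20000):   # integrate to t = 20, far past the relaxation time 1/g
    dN = b - g * N
    dM = -2.0 * g * M + b + g * N
    N += dt * dN
    M += dt * dM
```

In the nonlinear case above, the mean equations pick up covariance corrections (e.g., the M56 and M99 terms) in exactly the same way that dM/dt here picks up the mean through K.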


8.6.3 Theoretical Analysis

Hopf Bifurcation of Evolution Equations

We first analyze the deterministic model, i.e., the cumulant equations, to derive general conditions for the Hopf bifurcation under which the system (8.60)-(8.61) with time delays converges to a non-trivial periodic solution. Let (N̄, M̄) be an equilibrium of (8.60)-(8.61), and let p be the number of non-zero elements in the covariance vector M. Denote the functions Fi(N(t), M(t)), Gij(N(t), M(t)) − (dii + djj)Mij(t), and dij(Ni(t − τi) + Nj(t)) by an m × 1 vector function F(N(t), M(t)), a p × 1 vector function G(N(t), M(t)), and a p × 1 vector function U(N(t − τ), N(t)), respectively. Define

A(λ) = [ ∂F/∂N + P   ∂F/∂M ]
       [ ∂G/∂N + Q   ∂G/∂M ],          (8.71)

where P = diag(d11(e^{−λτ1} − 1), ..., dmm(e^{−λτm} − 1)) is an m × m diagonal matrix, and Q = e^{−λt}(U_{N1}, ..., U_{Nm}) is a p × m matrix; here U_{Ni} denotes the p × 1 vector function U(N(t − τ), N(t)) in which Nj(t − τj) and Nj(t) are replaced by zeros if j ≠ i, and by e^{λ(t−τi)} and e^{λt} if j = i, respectively. Then the characteristic equation of (8.60)-(8.61) evaluated at the equilibrium (N̄, M̄) is

det(λI − A(λ)) = 0,          (8.72)

where I is the (m + p) × (m + p) identity matrix. Note that A also includes the noise deviation σ, through G. For any parameter α in (8.60)-(8.61), such as a coupling coefficient, a time delay, or a noise deviation, we have the following theorem:

Theorem 8.2. Suppose that the functions F, G, and U depend sufficiently smoothly on the parameter α, and that there is an α0 such that for α < α0 all roots λk, k = 1, 2, ..., m + p, of the characteristic equation belong to the open left half-plane, whereas for α = α0,

1. λ1,2|α=α0 = ±iω0, ω0 > 0;
2. dReλ1,2(α)/dα|α=α0 > 0 and Reλj|α=α0 < 0 for j > 2.

Then a periodic solution of the system (8.60)-(8.61) arises near the solution (N, M) = (N̄, M̄); this solution is stable if it arises for α > α0 and unstable in the opposite case.

In other words, if α increases and passes through the critical bifurcation value α0, the stable equilibrium becomes unstable, and a periodic solution bifurcates from the equilibrium.
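The delay factors e^{−λτi} in (8.71) make the characteristic equation (8.72) transcendental, so in practice its leading roots are tracked numerically as a parameter varies. The sketch below does this for the classical scalar test case dN/dt = −N(t − τ), whose characteristic equation λ + e^{−λτ} = 0 has a Hopf crossing at τ = π/2 with λ = ±i; this toy equation, not the model of this section, is an illustrative assumption.

```python
import cmath

# Sketch: track the leading root of the scalar delay characteristic equation
#   h(lambda) = lambda + exp(-lambda*tau) = 0
# (for dN/dt = -N(t - tau)) by Newton's method in the complex plane.
# The Hopf crossing is at tau = pi/2, where lambda = i exactly.

def leading_root(tau, guess=1j, iters=60):
    lam = complex(guess)
    for _ in range(iters):
        h = lam + cmath.exp(-lam * tau)
        dh = 1 - tau * cmath.exp(-lam * tau)   # derivative of h
        lam = lam - h / dh
    return lam

pre  = leading_root(1.2)            # below pi/2: root in the left half-plane
post = leading_root(1.8)            # above pi/2: root in the right half-plane
crit = leading_root(cmath.pi / 2)   # at the critical delay: purely imaginary
```

Scanning the parameter and watching the real part of the leading root change sign is exactly the numerical counterpart of condition 2 in Theorem 8.2.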


A Sufficient Condition for Synchronization

When the number of cells n is sufficiently large, we assume that the system can be expressed by the deterministic dynamics (8.60)-(8.61) via the Gaussian approximation. To consider the interconnection of the cells directly, we replace Ni(t − τ) in (8.60)-(8.61) by Σ_{k=1}^{n} Ni^k(t − τ)/n, owing to yi in (8.59), where the superscript k indicates cell k. In this case, the existence of periodic solutions in the system (8.60)-(8.61) implies that the original n cells show bulk synchronization; however, partial oscillations among cells are generally expected. For generality, we divide the n cells into n̄ (n̄ ≤ n) different sets or groups, each containing a fraction W^k, k = 1, 2, ..., n̄, of the n oscillators, with

Σ_{k=1}^{n̄} W^k = 1,          (8.73)

where the superscript k of W^k indicates the group or set k. Each W^k is a non-negative scalar, and W^k n is an integer representing the number of cells in the kth group. Because all the cells in each set are equivalent, we further use Ri(t − τ) ≡ Σ_{k=1}^{n̄} W^k Ni^k(t − τ) to replace Σ_{k=1}^{n} Ni^k(t − τ)/n. Thus, (8.60)-(8.61) are rewritten as follows:

dNi^k(t)/dt = Fi(N^k(t), M^k(t)) + dii(Ri(t − τ) − Ni^k(t)),          (8.74)

dMij^k(t)/dt = Gij(N^k(t), M^k(t)) − (dii + djj)Mij^k(t) + dij(Ri(t − τ) + Nj^k(t)),          (8.75)

where 1 ≤ k ≤ n̄, dij = 0 if i ≠ j, and Mij^k(t) = Mji^k(t). Note that bulk oscillation or in-phase synchronization of these n̄ groups corresponds to n̄ = 1, which implies that the bulk oscillation is a special case of solutions of (8.74)-(8.75); phase-locked oscillations (i.e., phase synchronization) among cells are generally expected. For clarity, (8.74)-(8.75) are rewritten as

dNi^k(t)/dt = Fi(N^k(t), M^k(t)) + dii(Ri(t − τ) − Ni^k(t)),          (8.76)

dMii^k(t)/dt = Gii(N^k(t), M^k(t)) − 2dii Mii^k(t) + dii[Ri(t − τ) + Ni^k(t)],          (8.77)

dMij^k(t)/dt = Gij(N^k(t), M^k(t)) − (dii + djj)Mij^k(t),   i ≠ j.          (8.78)

By introducing a vector-valued variable Z combining the variables N and M, (8.76)-(8.78) are further rewritten in compact form as follows:


d(z1^k(t); z2^k(t); z3^k(t))/dt = H(Z^k(t); μ) + ( D Σ_{l=1}^{n̄} W^l z1^l(t − τ) − D z1^k(t);  D Σ_{l=1}^{n̄} W^l z1^l(t − τ) − D z1^k(t);  O ),          (8.79)

where

D = diag(d11, ..., dmm),          (8.80)

Z^k(t) = (z1^k(t); z2^k(t); z3^k(t)),          (8.81)

with z1^k(t) ∈ R^m, z2^k(t) ∈ R^m, and z3^k(t) ∈ R^{m(m−1)/2} representing Ni^k, Mii^k, and Mij^k, respectively, and μ a parameter (see below). Note that there are only m(m − 1)/2 independent variables in z3^k, because Mij^k = Mji^k.

Equation (8.79) may be regarded as a system of n̄ identical groups coupled linearly with time delays. Each group is considered as a system with m(m + 3)/2 distinct deterministic variables (summing all variables of z1^k, z2^k, and z3^k for each k), governed by the dynamical equation in the vector form

dZ/dt = H(Z; μ).          (8.82)

Suppose that its steady state satisfies H(Z̄; μ) = 0. Then a steady state of the coupled system (8.79) is

Ū = (Z̄; Z̄; ...; Z̄).          (8.83)

We now study synchronization solutions of (8.79), i.e., phase-locked solutions with non-zero phase differences. The mathematical analysis of mutual synchronization is still a challenging problem. The pioneering work in this area is due to Winfree (Winfree 1967, Winfree 1980, Winfree 1987) and Kuramoto (Kuramoto 1984), who simplified the problem by assuming that the oscillators are strongly attracted to limit cycles in the phase space, so that amplitude variations can be neglected and only phase variations need to be considered. Winfree and Kuramoto discovered that mutual synchronization is a cooperative phenomenon, a temporal analogue of the phase transitions encountered in statistical physics. Now, suppose that the system (8.79) has a periodic solution of the form

Z^j(t) = P(t − αj T) = (P1(t − αj T); P2(t − αj T); P3(t − αj T)),   1 ≤ j ≤ n̄,          (8.84)

where P(t) is a non-trivial vector-valued function with least period T > 0. Let α1 = 0 without loss of generality. Such a solution is called a phase-locked


solution of (8.79). Essentially, the oscillation in each group is described by the function P(t). Other groups, however, may be out of phase, with phase difference T βj ≡ T(αj+1 − αj). Here and henceforth, we index the groups of cells by j mod(n̄). When (8.84) is a solution of (8.79), certain compatibility conditions must hold. To derive those conditions, consider the behavior of the jth and the (j + 1)th variables at times t and t + βj T, respectively. From (8.84) and (8.79), for 2 ≤ j ≤ n̄,

dP1(t − αj T)/dt = D[ Σ_{k=1}^{n̄} W^k P1(t − τ − (αk + αj)T) − P1(t − αj T) ] + H(P(t − αj T); μ)          (8.85)

and

dP1(t − αj T)/dt = D[ Σ_{k=1}^{n̄} W^k P1(t − τ − αk T) − P1(t − αj T) ] + H(P(t − αj T); μ).          (8.86)

Subtracting the two equations, we have

D Σ_{l=1}^{n̄} W^l [ P1(t − τ − (αl + αj)T) − P1(t − τ − αl T) ] = 0.          (8.87)

Let

P1(t) = Σ_{k=−∞}^{∞} γk e^{2πikt/T}          (8.88)

be the Fourier expansion of P1(t), where i = √(−1). Then γ−k = γ̄k and γk = (1/T) ∫_0^T P1(t) e^{−2πikt/T} dt. Substituting (8.88) into (8.87) and using orthogonality, we find that the basic compatibility conditions are

det[ D (e^{−2πikαj} − 1) Σ_{l=1}^{n̄} W^l e^{−2πikαl} ] = 0          (8.89)

for all k for which γk ≠ 0 and for 2 ≤ j ≤ n̄, where det[·] denotes the determinant. Note that (8.89) has the trivial solution αj = 0, 1 ≤ j ≤ n̄, for all D and arbitrary W^k, which corresponds to the in-phase synchronization solution of (8.79). We are more interested in non-trivial solutions. For this, assume

det(D) = Π_{l=1}^{m} dll ≠ 0.          (8.90)


Then,

Σ_{l=1}^{n̄} W^l e^{−2πikαl} = 0.          (8.91)

One solution of (8.91) is

αj = (j − 1)/n̄,   1 ≤ j ≤ n̄,          (8.92)

when

W^j = 1/n̄,   1 ≤ j ≤ n̄.          (8.93)

Clearly, the solution corresponding to such phases has a uniform phase difference. An interesting case is that of n identical cells coupled in a ring, in which each cell is connected to its nearest neighbors, as depicted in (Alexander et al. 1978, Alexander et al. 1986); in such a case, we have n̄ ≡ n. Next, we examine conditions for the existence of such a periodic solution with period T for (8.79), which describe a synchronization mechanism based on cell-to-cell communication. The required conditions are straightforward and easy to verify for any particular example; they depend strongly on the coupling, the time delays, the variances of the noise, and the kinetics. For the system (8.79), consider the problem of finding a phase-locked solution of the form (8.84). By (8.84) and (8.79), we have

d(P1(t); P2(t); P3(t))/dt = ( D Σ_{l=1}^{n̄} W^l P1(t − τ − αl T) − D P1(t);  D Σ_{l=1}^{n̄} W^l P1(t − τ − αl T) − D P1(t);  O ) + H(P(t); μ),          (8.94)

and the oscillation in the jth (2 ≤ j ≤ n̄) group is given by

Z^j(t) = P(t − αj T).          (8.95)
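Note that with the uniform weights (8.93), the phases (8.92) do satisfy the compatibility condition (8.91): the sum becomes a complete set of n̄-th roots of unity, which vanishes for every harmonic k that is not a multiple of n̄. This is quick to verify numerically:

```python
import cmath

# Sketch: the compatibility sum of (8.91) with uniform weights W^l = 1/nbar
# and phases alpha_l = (l - 1)/nbar; it vanishes unless k is a multiple of nbar.

def compat_sum(nbar, k):
    return sum(cmath.exp(-2j * cmath.pi * k * (l - 1) / nbar)
               for l in range(1, nbar + 1)) / nbar

vals = [abs(compat_sum(5, k)) for k in (1, 2, 3, 4)]  # all essentially zero
whole = abs(compat_sum(5, 5))  # k a multiple of nbar: the sum equals 1, not 0
```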

Thus, the existence of a synchronous solution to (8.79) is converted into finding a periodic solution of the system (8.94). According to the global Hopf bifurcation theorem (Alexander et al. 1978, Alexander et al. 1986), we only need to examine some algebraic conditions. To be specific, let t′ = ω0 t, where ω0 = 2π/T. Then (8.94) can be rewritten as

ω0 d(P1(t′); P2(t′); P3(t′))/dt′ = ( D Σ_{l=1}^{n̄} W^l P1(t′ − 2πτ/T − 2παl) − D P1(t′);  D Σ_{l=1}^{n̄} W^l P1(t′ − 2πτ/T − 2παl) − D P1(t′);  O ) + H(P(t′); μ).          (8.96)

Considering the linearization of (8.96) evaluated at Ū, we have


ω0 d(P1(t); P2(t); P3(t))/dt = ( D Σ_{l=1}^{n̄} W^l P1(t − 2πτ/T − 2παl) − D P1(t);  D Σ_{l=1}^{n̄} W^l P1(t − 2πτ/T − 2παl) − D P1(t);  O ) + A P(t),          (8.97)

where A = ∂P H(Ū; μ). Let

Bk(μ) = [ D(Σ_{l=1}^{n̄} W^l e^{−2πik(τ/T+αl)} − 1)   O   O
          D(Σ_{l=1}^{n̄} W^l e^{−2πik(τ/T+αl)} − 1)   O   O
          O                                           O   O ] + A          (8.98)

for k = 0, ±1, ±2, .... In addition, because our main interest is in the effects of noise on synchronized oscillations, we set μ = σ. Finally, we obtain the following theorem stating existence conditions for phase-locked synchronous solutions (Alexander et al. 1978, Alexander et al. 1986).

Theorem 8.3. Suppose that the function H is differentiable with respect to its arguments and that W^k = 1/n̄ for 1 ≤ k ≤ n̄. If, for some μ = μ0 and αj = (j − 1)/n̄ (mod 1), the following conditions are satisfied:

1. B0(μ0) of (8.98) is non-singular;
2. B1(μ0) of (8.98) has a simple purely imaginary eigenvalue iω0 with corresponding left and right eigenvectors VL and VR, respectively;
3. ikω0 is not an eigenvalue of Bk(μ0) for k ≥ 2;
4. ℜ[VL (dA(μ0)/dμ) VR] ≠ 0, where ℜ is the operator taking the real part of a complex number.

Then there is a global branch of 2π-periodic solutions of (8.97) bifurcating from (Ū, μ0, ω0); equivalently, the original coupled system (8.79) has a phase-locked synchronous solution with a uniform phase difference.

If the conditions of Theorem 8.3 are satisfied, then the system (8.79) definitely has a synchronous solution, and the corresponding synchronization mechanism is based on the "global Hopf bifurcation". Such conditions are easy to use and enable us to predict, for a given set of parameter values, whether or not the intercellular coupling and the noise can synchronize the cells.

8.6.4 Algorithm for Stochastic Simulation

Based on the direct Gillespie method (Gillespie 1976), we give a detailed algorithm for the stochastic simulation of the master equation (2.55), where Y(t) ≡ X(t − τ)V/v with time delays. Let the superscript j in the algorithm indicate the jth cell, and assume that there is a single time delay τ, although multiple delays can be incorporated into the algorithm in a similar manner.

1. Initialization: input the cell number n, the stop time tstop, and the initial states X^j(0) = (X1^j, ..., Xm^j) of the jth cell for j = 1, ..., n. Let X̄(r) = Σ_{j=1}^{n} X^j(0)/n for all −τ ≤ r ≤ 0, and set the evolution times tj = 0.


2. Parallel computation for each cell: if tMX − tmx ≤ τ, proceed with the parallel computation for each cell, i.e., j = 1, ..., n; otherwise, choose only the mxth cell, i.e., j = mx, to proceed with the following computation, where MX and mx are the cells with the maximal and minimal current evolution times among {t1, ..., tn}, respectively.
   a) Mean-field variables: compute X̄(tj − τ) = Σ_{j=1}^{n} X^j(tj − τ)/n, where X^j(tj − τ) is the latest value of X^j at tj − τ. That is, if tj − τ > 0, then X^j(tj − τ) = X^j(tj1) for two consecutive updating times tj1 and tj2 of the jth cell with tj1 ≤ tj − τ < tj2; otherwise X^j(tj − τ) = X^j(0).
   b) Propensities: compute wi(X^j(tj)) for i = 1, ..., n0 according to the state X^j(tj) and X̄(tj − τ).
   c) Uniform random numbers: draw two uniform random numbers r1j and r2j ∈ [0, 1).
   d) Time interval Δτj: compute the time Δτj = −(ln r1j)/Σ_{i=1}^{n0} wi until the next reaction.
   e) Next reaction μj: find the next reaction μj by taking μj to be the integer satisfying Σ_{i=1}^{μj−1} wi ≤ r2j Σ_{i=1}^{n0} wi < Σ_{i=1}^{μj} wi.
   f) Update: set tj ← tj + Δτj and update the state by the state-change vector of reaction μj, i.e., X^j ← X^j + θμj.
3. Termination: if the evolution times exceed tstop, then terminate the computation; otherwise, go to step 2.

The Gillespie algorithm is considered the standard one for the stochastic simulation of biochemical systems. In particular, the algorithm generates an ensemble of sample trajectories of the system with the correct statistics for a set of biochemical reactions, which asymptotically converge to the solution of the corresponding master equation. Clearly, the algorithm requires storing the sampling times and the states in the time interval [tj − τ, tj] because of the time delays.

8.6.5 Numerical Simulation

The noise, the coupling, and the time delays affect the dynamics of the individual oscillators and can lead to cooperative behavior. The effects of the extracellular noise intensity on cooperative behavior among cells are considered first. The bifurcation diagram of the AI mean value as a function of the noise deviation is shown in Figure 8.10 (a); it indicates that the noise can actually induce oscillations through a Hopf bifurcation. Clearly, without extracellular noise, the evolution equations for the cumulants and covariances converge to a stable equilibrium or a stationary probability distribution with constant means
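The algorithm above can be sketched compactly if we simplify in two (clearly labeled) ways: only the least-evolved cell is advanced at each iteration, which is the conservative special case of step 2, and each cell carries a single species X with an assumed birth channel a + c·X̄(t − τ), coupled to the delayed mean field, and a linear degradation channel g·X. The two-channel model, the parameter values, and the always-minimal scheduling are illustrative assumptions; the full model has the twenty channels of Table 8.1 per cell.

```python
import math, random, bisect

# Sketch of the delayed, mean-field Gillespie scheme for a toy system:
#   birth:       rate a + c * Xbar(t - tau)   (coupling via delayed mean field)
#   degradation: rate g * X
# Per-cell sampling times and states are stored so that delayed values can be
# looked up, as the algorithm requires.

def delayed_value(times, values, t):
    """Latest recorded value at or before time t (the step-a) lookup rule)."""
    if t <= times[0]:
        return values[0]
    i = bisect.bisect_right(times, t) - 1
    return values[i]

def simulate(n=4, a=5.0, c=0.5, g=1.0, tau=2.0, t_stop=50.0, seed=7):
    random.seed(seed)
    t = [0.0] * n                       # per-cell evolution times t_j
    times = [[0.0] for _ in range(n)]   # per-cell sampling times
    vals = [[5] for _ in range(n)]      # per-cell state histories
    while min(t) < t_stop:
        j = t.index(min(t))             # advance the least-evolved cell
        # mean-field variable Xbar(t_j - tau) from the stored histories
        xbar = sum(delayed_value(times[k], vals[k], t[j] - tau)
                   for k in range(n)) / n
        x = vals[j][-1]
        w = [a + c * xbar, g * x]       # propensities of the two channels
        wtot = w[0] + w[1]
        t[j] += -math.log(random.random() or 1e-12) / wtot
        x += 1 if random.random() * wtot < w[0] else -1
        times[j].append(t[j])
        vals[j].append(x)
    return [v[-1] for v in vals]

final = simulate()
```

Because the degradation propensity vanishes at X = 0, the counts can never go negative, mirroring the non-negativity rule of Table 8.1.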


and covariances, which corresponds to asynchronously fluctuating behavior of the cells. As the noise intensity increases, the equilibrium becomes unstable and the multicellular system becomes oscillatory, which implies that the synchronized oscillation is induced by the extracellular noise. In other words, for such a system the noise provides extra dynamics, i.e., the dynamics of the second cumulants, which originate from fluctuations or other unknown energy sources beyond the coupled system, and thereby induces cooperative oscillatory behavior among the cells. The cooperative behavior, which is also confirmed by simulation of the stochastic master equation, is shown in Figure 8.10 (b); it corresponds to in-phase synchronization induced by extracellular noise. Clearly, all cells oscillate almost in the same way, with synchronous fluctuations. In the absence of extracellular noise, by contrast, the mean values of the AIs evolve to a steady state.

Figure 8.10 Noise-driven synchronization. (a) The bifurcation diagram of the cumulant equations. The two curves show the stable steady state of the AI concentration, as well as the maximum and minimum concentrations of AI in the course of oscillation. (b) The cooperative behavior of the AI concentrations of three cells with random initial phases, indicating cooperative behavior, i.e., nearly synchronous periodic oscillations, induced by extracellular noise (from (Chen et al. 2005))

Generally, the coupling enhances the synchronization among cells, whereas the time delay τ and the extracellular noise σ tend to induce oscillatory behavior in each cell. When there is no time delay, or the time delay is small, the evolution equations for the cumulants and covariances converge to a stable steady state or a stationary probability distribution with constant means and covariances, so that no cell oscillates, as shown in Figure 8.11 (a). With a large time delay, however, the multicellular system becomes synchronously oscillatory. The extracellular noise σ induces oscillations in a similar way to the time delay. The oscillatory and steady-state regions as functions of the noise deviation σ and the coupling strength d are shown in Figure 8.11 (b). The oscillatory region emerges as the noise deviation increases and also depends on the coupling strength. In particular, for small noise the multicellular system converges to a steady state, which is a trivial synchronous state. Therefore, for such a system, the coupling and the time delay, as well as the noise, can significantly affect the cooperative dynamics.


Figure 8.11 Bifurcation diagrams of the AI mean value for the evolution equations. (a) Bifurcation diagram with respect to the time delay. (b) Oscillatory and steady-state regions as functions of the noise deviation σ and the coupling strength d; the oscillatory region emerges as σ increases (from (Chen et al. 2005))

The cooperative dynamics induced by the time delay is confirmed in Figure 8.12 (a), whereas the convergence of the cumulants to a steady state for a small time delay is shown in Figure 8.12 (b), by both the evolution equations and the master equations. The effect of coupling on cooperative behavior is demonstrated in Figure 8.11 (b): all cells oscillate almost synchronously for a large coupling d, whereas when d is reduced to a small value the multicellular system still oscillates but in an asynchronous manner (not shown). These facts imply that the coupling generally enhances the cooperative dynamics by controlling the stability of the synchronization. All these figures are obtained with randomly distributed initial phases; that is, the simulations of all cells start from asynchronous initial conditions, in order to demonstrate the effect of active synchronization. These results indicate that when the system is oscillatory, the cooperative behavior among the cells can be observed clearly from the evolution equations, as shown in Figure 8.10 (b). Figure 8.12 (a) shows good agreement between the stochastic
and deterministic approaches, except for quantitative differences in the peak levels. These differences possibly arise from the small number of cells in the simulation and are expected to shrink as more cells are considered.

Noise promotes oscillations by introducing extra dynamics, which originate from random fluctuations, i.e., the second cumulants. Under certain conditions these extra dynamics may act as an energy source that excites cooperative behavior or leads to ordered states among cells. Indeed, it is well known that noise plays a significant role in the stochastic resonance of non-autonomous systems with periodic or non-periodic driving forces, and in the coherence resonance of autonomous systems. In particular, coherence resonance phenomena show that noise can enhance the temporal regularity of a dynamical system. Many theoretical and experimental works, including studies of coupled coherence resonance oscillators and coupled chaotic systems, have analyzed various kinds of resonance behavior. All these studies indicate that noise can play a stabilizing role in coupled oscillators and maps and tends to drive stochastic systems toward regular dynamics.

Intracellular and extracellular noises play different roles in cell communication. Since the intracellular noise of each cell is independent, it generally tends to disturb cooperative behavior among cells. The extracellular noise, by contrast, is nearly common to all cells because of the shared environment, and therefore tends to synchronize the cells by exerting the same fluctuations on each of them through the signaling molecules. The effects of coupling on cooperative behavior across a population of noisy oscillators have not been extensively studied, and the roles of noise are not yet well understood. The results above show that noise and time delays can induce cooperative behavior, i.e., order, which seems to contradict intuitions based on the usual negative connotations of the words noise and randomness.
The theoretical and numerical results suggest that this essential and constructive role of noise and coupling may help living organisms organize their various apparatuses harmoniously and actively accomplish mutual communication (Springer and Paulsson 2006, Zhou et al. 2005).

8.7 Deterministic Synchronization of Genetic Networks in Lur’e Form

As described in Chapter 5, a gene regulatory system in a single cell can be represented in the Lur’e form (5.11)–(5.12); therefore, we can study the synchronization of multicellular networks by considering coupled genetic oscillators in Lur’e form. In this section and the next, we present results in this framework; the main contents of this section are based on (Li et al. 2006b). In general, we can study a genetic oscillator model in Lur’e form with multiple nonlinear vector regulatory functions:
Figure 8.12 Dynamics of the AIs under the effects of time delay, by simulation of the stochastic master equation. The AI concentrations generated by the stochastic simulations are shown as dots, and the mean value generated by the evolution equations as a bold line. (a) Cooperative behavior induced by the time delay (a nearly synchronous periodic oscillation). (b) Convergence to a steady state; the dots correspond to the stochastic simulation and the bold line to the evolution equations (from (Chen et al. 2005))

\dot{y}(t) = Ay(t) + \sum_{i=1}^{l} B_i f_i(y(t)),   (8.100)

where y(t) ∈ R^n represents the concentrations of proteins, RNAs, and biochemical complexes, and f_i(y) = [f_{i1}(y_1(t)), ..., f_{in}(y_n(t))]^T, with f_{ij}(y_j(t)) a monotonically increasing or decreasing regulatory function, usually of the MM or Hill form. A and B_i are matrices in R^{n×n}. Many well-known genetic system models can be represented in (or rewritten into) this form, such as the Goodwin model (Goodwin 1965), the repressilator (Elowitz and Leibler 2000), the toggle switch (Gardner et al. 2000), and the circadian oscillators (Goldbeter 1995). In synthetic biology, genetic oscillators of this form can be implemented experimentally (Kalir et al. 2005). To make the method easier to follow and to avoid unnecessarily complicated notation, we consider the following simplified model, in which each equation of the individual genetic oscillator contains only one increasing and one decreasing nonlinear term:

\dot{y}(t) = Ay(t) + B_1 f_1(y(t)) + B_2 f_2(y(t)),   (8.101)

where Ay(t) includes the degradation terms and all other linear terms of the genetic oscillator, and f_1(y(t)) = [f_{11}(y_1(t)), ..., f_{1n}(y_n(t))]^T, with f_{1j}(y_j(t)) a monotonically increasing nonlinear regulatory function of the Hill form:

f_{1j}(y_j(t)) = \frac{(y_j(t)/\beta_{1j})^{H_{1j}}}{1 + (y_j(t)/\beta_{1j})^{H_{1j}}},   (8.102)

and f_2(y(t)) = [f_{21}(y_1(t)), ..., f_{2n}(y_n(t))]^T, with f_{2j}(y_j(t)) a monotonically decreasing nonlinear regulatory function of the form

f_{2j}(y_j(t)) = \frac{1}{1 + (y_j(t)/\beta_{2j})^{H_{2j}}}.   (8.103)

In the above equations, H_{1j} and H_{2j} are Hill coefficients. To avoid confusion, we take the jth column of B_{1,2} to be zero if f_{1j,2j} ≡ 0. Since

f_{2j}(y_j(t)) = \frac{1}{1 + (y_j(t)/\beta_{2j})^{H_{2j}}} = 1 - \frac{(y_j(t)/\beta_{2j})^{H_{2j}}}{1 + (y_j(t)/\beta_{2j})^{H_{2j}}} ≜ 1 - g_j(y_j(t)),   (8.104)

by letting f(·) = f_1(·) we can rewrite (8.101) as

\dot{y}(t) = Ay(t) + B_1 f(y(t)) - B_2 g(y(t)) + B_2 \mathbf{1},   (8.105)

where \mathbf{1} = [1, 1, ..., 1]^T ∈ R^{n×1}. Obviously, for any a, b ∈ R with a ≠ b, the functions f_i and g_i satisfy the following sector conditions:

0 ≤ \frac{f_i(a) - f_i(b)}{a - b} ≤ k_{1i},   (8.106)

0 ≤ \frac{g_i(a) - g_i(b)}{a - b} ≤ k_{2i},   (8.107)

where i = 1, ..., n, or equivalently

(f_i(a) - f_i(b))[(f_i(a) - f_i(b)) - k_{1i}(a - b)] ≤ 0,   (8.108)

(g_i(a) - g_i(b))[(g_i(a) - g_i(b)) - k_{2i}(a - b)] ≤ 0.   (8.109)

According to the mean value theorem, for differentiable functions f_i and g_i the above sector conditions correspond to the requirement that, for all a ∈ R,

0 ≤ \frac{df_i}{da}(a) ≤ k_{1i},   (8.110)

0 ≤ \frac{dg_i}{da}(a) ≤ k_{2i}.   (8.111)

Recall that a Lur’e system is a linear dynamical system interconnected by feedback with a static nonlinearity that satisfies a sector condition (Vidyasagar 1993). Hence, the genetic oscillator (8.105) can be considered a Lur’e system, which can be investigated using the well-developed Lur’e system
method in control theory. In the following, we first consider coupled identical genetic oscillators and then extend the results to non-identical ones.

We first analyze a network of N linearly coupled identical genetic oscillators:

\dot{x}_i(t) = Ax_i(t) + B_1 f(x_i(t)) - B_2 g(x_i(t)) + B_2 \mathbf{1} + \sum_{j=1}^{N} G_{ij} D x_j(t),   (8.112)

where x_i(t) ∈ R^n (i = 1, ..., N) is the state vector of the ith genetic oscillator, corresponding to y(t) in (8.105), and D ∈ R^{n×n} defines the coupling between two genetic oscillators. G = (G_{ij})_{N×N} is the coupling matrix of the network, defined as follows: if there is a link from oscillator j to oscillator i (j ≠ i), then G_{ij} equals a positive constant denoting the coupling strength of this link; otherwise G_{ij} = 0. We define G_{ii} = -\sum_{j=1, j≠i}^{N} G_{ij}, which means that the coupling is diffusive. The matrix G thus defines the coupling topology, the directions, and the coupling strengths of the network.

Since genetic oscillators in biomolecular networks are most likely nonidentical, there are parametric mismatches among the oscillators. We therefore also consider the following network of N coupled nonidentical genetic oscillators:

\dot{x}_i(t) = (A + \Delta A_i(t))x_i(t) + (B_1 + \Delta B_{1i}(t))f(x_i(t)) - (B_2 + \Delta B_{2i}(t))g(x_i(t)) + (B_2 + \Delta B_{2i}(t))\mathbf{1} + \sum_{j=1}^{N} G_{ij} D x_j(t),   (8.113)

where \Delta A_i, \Delta B_{1i}, and \Delta B_{2i} are the mismatch matrices. We assume that the mismatch matrices can be bounded as follows, which is reasonable for general biomolecular systems:

\|\Delta A_i(t)\| ≤ \alpha_1, \quad \|\Delta B_{1i}(t)\| ≤ \alpha_2, \quad \|\Delta B_{2i}(t)\| ≤ \alpha_3, \quad ∀ i.   (8.114)

We also assume that

\|x_i(t)\| ≤ \delta_1, \quad \|f(x_i(t))\| ≤ \delta_2, \quad \|g(x_i(t))\| ≤ \delta_3, \quad ∀ i.   (8.115)
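To make the coupling structure of (8.112) concrete, the following sketch builds an all-to-all diffusive coupling matrix G with zero row sums and Euler-integrates a scalar instance of (8.112), with hypothetical parameters not taken from (Li et al. 2006b): n = 1, A = −1, B_1 = 0, B_2 = D = 1, so that −g(x) + B_2·1 reduces to the repressive Hill term f_2(x). This toy instance is not oscillatory, so the coupled states simply converge to a common equilibrium, i.e., the trivial synchronous state discussed in Section 8.6:

```python
import random

def hill_rep(y, beta=1.0, H=2.0):
    """Decreasing Hill regulation f2j, as in (8.103)."""
    return 1.0 / (1.0 + (y / beta) ** H)

def diffusive_coupling(N, d):
    """All-to-all diffusive coupling: G_ij = d for j != i, G_ii = -(N-1)d,
    so every row of G sums to zero."""
    G = [[d] * N for _ in range(N)]
    for i in range(N):
        G[i][i] = -d * (N - 1)
    return G

def simulate(N=4, d=0.3, dt=0.01, steps=2000, seed=2):
    """Euler sketch of (8.112) with n = 1, A = -1, B1 = 0, B2 = D = 1:
    x_i' = -x_i + f2(x_i) + sum_j G_ij x_j."""
    rng = random.Random(seed)
    G = diffusive_coupling(N, d)
    x = [2.0 * rng.random() for _ in range(N)]     # random initial states
    for _ in range(steps):
        dx = [-x[i] + hill_rep(x[i]) + sum(G[i][j] * x[j] for j in range(N))
              for i in range(N)]
        x = [x[i] + dt * dx[i] for i in range(N)]
    return G, x

G, x = simulate()
assert all(abs(sum(row)) < 1e-12 for row in G)     # zero row sums (diffusive)
assert max(x) - min(x) < 1e-3                      # states reach a synchronous state
```

The zero-row-sum property guarantees that any state of the form x_1 = ... = x_N lies in the kernel of the coupling term, which is what makes exact synchronization possible at all.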

Since x_i(t) in genetic oscillators usually denotes the concentrations of mRNAs, proteins, neurotransmitters, etc., which take bounded values, and f(·) and g(·) are usually monotonic functions with saturating values, the above assumptions are reasonable. The other parameters are defined as in the identical case.

For the two networks (8.112) and (8.113), we mainly use Wu’s method to analyze the synchronization (Wu 2002); this method separates the effects of the inter-node coupling from those of the individual genetic oscillator dynamics. Based on the Lyapunov method, we obtain sufficient conditions for the synchronization of the coupled identical genetic oscillators (8.112)
and the coupled non-identical genetic oscillators (8.113), in Theorems 8.4 and 8.5, respectively. In the following theorems and hereafter, K_1 = diag(k_{11}, ..., k_{1n}), K_2 = diag(k_{21}, ..., k_{2n}), and U ∈ R^{N×N} is an irreducible matrix with zero row sums whose off-diagonal elements are all non-positive. \lambda_{min}(P) and \lambda_{max}(P) denote the minimal and maximal eigenvalues of the matrix P, respectively, \|·\| stands for the usual L_2 norm of a vector or the spectral norm of a square matrix, and ⊗ denotes the Kronecker product. The Kronecker product A ⊗ B of an n × m matrix A and a p × q matrix B is the np × mq matrix defined as

A ⊗ B = \begin{bmatrix} A_{11}B & \cdots & A_{1m}B \\ \vdots & \ddots & \vdots \\ A_{n1}B & \cdots & A_{nm}B \end{bmatrix}.   (8.116)

For the network of coupled identical genetic oscillators, we have the following synchronization condition (Li et al. 2006b):

Theorem 8.4. If there are matrices P > 0, \Lambda_1 = diag(\lambda_{11}, ..., \lambda_{1n}) > 0, \Lambda_2 = diag(\lambda_{21}, ..., \lambda_{2n}) > 0, Q ∈ R^{n×n}, and U ∈ R^{N×N} such that the following matrix inequalities hold:

M_1 = \begin{bmatrix} PA + A^T P - Q - Q^T & PB_1 + K_1\Lambda_1 & -PB_2 + K_2\Lambda_2 \\ B_1^T P + K_1\Lambda_1 & -2\Lambda_1 & 0 \\ -B_2^T P + K_2\Lambda_2 & 0 & -2\Lambda_2 \end{bmatrix} < 0,

(UG ⊗ PD + U ⊗ Q)^T + (UG ⊗ PD + U ⊗ Q) ≤ 0,   (8.117)

then the network (8.112) is asymptotically synchronous.

Proof (Li et al. 2006b): Let x(t) = [x_1^T(t), ..., x_N^T(t)]^T ∈ R^{Nn×1} and define a Lyapunov function of the form

V(x(t)) = x^T(t)(U ⊗ P)x(t).   (8.118)

According to (Wu 2002) (pp. 136–137), V(x(t)) is equivalent to the form

V(x(t)) = \sum_{i<j} (-U_{ij})(x_i(t) - x_j(t))^T P (x_i(t) - x_j(t)).   (8.119)

For the network of coupled non-identical genetic oscillators, we have the following synchronization condition (Li et al. 2006b):

Theorem 8.5. If there are matrices P > 0, \Lambda_1 = diag(\lambda_{11}, ..., \lambda_{1n}) > 0, \Lambda_2 = diag(\lambda_{21}, ..., \lambda_{2n}) > 0, Q ∈ R^{n×n}, U ∈ R^{N×N}, and a positive real constant γ such that the following matrix inequalities hold:

M_2 = \begin{bmatrix} PA + A^T P - Q - Q^T + γI & PB_1 + K_1\Lambda_1 & -PB_2 + K_2\Lambda_2 \\ B_1^T P + K_1\Lambda_1 & -2\Lambda_1 & 0 \\ -B_2^T P + K_2\Lambda_2 & 0 & -2\Lambda_2 \end{bmatrix} < 0,

(UG ⊗ PD + U ⊗ Q)^T + (UG ⊗ PD + U ⊗ Q) ≤ 0,   (8.123)

then the network (8.113) is asymptotically synchronous with error bound