Dataflow and Reactive Programming Systems

Dataflow concepts are the heart of Reactive Programming, Flow-Based Programming (e.g. NoFlo), Unix pipes, Actors and message passing in general.


Table of contents:
Table of Contents
Special Thanks
DSP Robotics
ghostream
Clean Code Developer School
Synthetic Spheres
ANKHOR Software GmbH
vvvv
Code Examples
Introduction
Overview of the Book
Reactive Programming is Dataflow
Von Neumann Architecture
Benefits of Dataflow
History
The Purpose of this Book
Dataflow Explained
Pipeline Dataflow
Nodes
Data
Arcs
Dataflow Graphs
Executing a Graph
Features of Dataflow Systems
Push or Pull Data
Mutable or Immutable Data
Static or Dynamic
Dynamic
Static
Functional or Stateful Nodes
Synchronous or Asynchronous Activation
Asynchronous
Synchronous
Hybrid
Multiple Inputs and/or Outputs
Multiple Inputs
Multiple Outputs
Fire Patterns
Cycles and Feedback
Recursion
Implementation of Recursive Nodes
Compound Nodes
Execution of Compound Nodes
Design of Compound Nodes
Arc Capacity > 1
Arc Joins and/or Splits
Multi-Rate Token Production and Consumption
Common Dataflow Nodes
Switch Node/ Choice Node
Merge Node/ Correlate Node/ Join Node
Distribute Node/ Splitter Node
Gate Node
Terminal Node
Source Node
Sink Node
Miscellaneous Topics
Granularity
When is it Done?
Actor Model
Summary of the Actor Model
Comparison to Object Oriented Programming
Relation to Dataflow
Dataflow Features
Where is the Actor Model Used?
Where is it Not Used?
Flow-Based Programming
Summary of Flow-Based Programming
Dataflow Features
Benefits of Flow-Based Programming
Communicating Sequential Processes
Summary of CSP
Message Passing Channels
Channels as a Concurrency Primitive
Channel Implementations
Implicit Dataflow
Unix Pipes
Sockets
Function
Manager Controlled Communication
Message Passing Channels
Feature Creep
Asynchronous Dataflow Implementation
Architecture Overview
Implementation Walk-Through
Main Data Types
Port Address
Data Token
Execute Token
Node
Node Definition
Arc
Fire Pattern
Token Store
Node Store
Arc Store
Dataflow Program
Implementation Components
IO Unit
Transmit Unit
Enable Unit
Execute Unit
Program Execution Example
Preparing a Program for Execution
Multiple Dataflow Engines
Synchronous Dataflow Implementation
Compilation
How to Build a Schedule
Label Nodes/Arcs and Token Rates
Create a Topology Matrix
Does a Schedule Exist?
Determine Initial Arc Capacities
Execution Simulation
Simulation Process Overview
Simulation Process in Detail
Step 1: Create a new activation matrix:
Step 2: Create an activation vector
Step 3: Create new Token and Fire Count Vectors
Step 4: Stop or Repeat
Analyze for Errors
Search for a Schedule
Test Schedule
Parallel Schedules
Dynamic Dataflow Implementation
Introduction
Overall Design
Features of this Design
Notation Convention
General Types
Nodes
Pipeline Node
PipelineNodeObject Methods
Developer Accessible Nodes
Primitive Node
PrimitiveNodeObject Methods:
Operation of a PrimitiveNodeObject
Compound Nodes
CompoundNodeObject Methods
NodeClass and NodeObject
NodeObject Methods
Limitations:
Implementation Language Requirements:
Appendix
Glossary
Bibliography
Important Books and Papers
General
Hardware
Synchronous Dataflow
Communicating Sequential Processes
Actor Model
Programming Languages


Dataflow and Reactive Programming Systems
A Practical Guide to Developing Dataflow and Reactive Programming Systems
Matt Carkci

This book is for sale at http://leanpub.com/dataflowbook

This version was published on 2014-05-29

This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and many iterations to get reader feedback, pivot until you have the right book and build traction once you do. ©2014 Matt Carkci


Special Thanks
This book was made possible due to the support of 455 Kickstarter backers and all those who pre-ordered the book before publication. I personally say “Thank You!” to each and every one. A special thanks to our corporate sponsors…

DSP Robotics http://www.dsprobotics.com FlowStone is a new type of graphical computer programming tool that allows you to create your own standalone programs quicker and more easily than ever before.

ghostream https://ghostream.com An open source library of self-contained, reusable components for building extensible, speedy, reactive systems.


Clean Code Developer School http://ccd-school.de/en Teaching data flow design and lightweight software architecture since 2010

Synthetic Spheres http://syntheticspheres.com Synthetic Spheres is dedicated to research and innovation, with open partnership with associations, international standards bodies, training companies and academic institutions.

ANKHOR Software GmbH http://www.ankhor.com Experience the future of advanced visual data processing. ANKHOR’s FlowSheet Data-Workbench is a universal and cross-industry tool to quickly and interactively solve the challenges of your data projects.


vvvv http://www.vvvv.org vvvv is a hybrid development environment with a visual dataflow editor and a textual C#/.NET editor.


Code Examples
All of the code in this book can be downloaded from ftp://DataflowBook.com/. Visit DataflowBook.com for more information and blog posts about dataflow and reactive programming. Contact the author at [email protected]

1 Introduction
Dataflow is a method of implementing software that is very different from the prevailing Von Neumann method that the software industry has been based on since its inception. At the lowest level, dataflow is both a programming style and a way to manage parallelism. At the top, dataflow is an overarching architecture that can incorporate and coordinate other computational methods seamlessly. Dataflow is a family of methods that all share one important principle: data is king. The arrival of data causes the system to activate. Dataflow reacts to incoming data without having to be specifically told to do so. In traditional programming languages, the developer specifies exactly what the program will do at any moment.

1.1 Overview of the Book
It is important to understand the concepts of dataflow and not just the specifics of one library, so that you can quickly adapt to any new library you encounter. There are many varieties of dataflow with subtle differences, yet they all can be considered dataflow. Sometimes very slight changes in the dataflow implementation can drastically change how you design programs. This book will explain the whole landscape of dataflow.

You'll learn dataflow from the software perspective: how it is an architecture and a way to think about building programs. We'll start by covering it in its simplest form, Pipeline Dataflow, and then move on to the many features and variations you'll encounter in existing implementations. Three of the most common styles of dataflow are explained in detail using code of a working implementation to bring theory into practice. You should already have a little programming experience under your belt but you don't need to be an expert to understand what this book covers.

1.2 Reactive Programming is Dataflow
“Reactive Programming” is a term that has become popular recently, but its origin stretches back to at least 1985. The paper “On the Development of Reactive Systems” by David Harel and Amir Pnueli was the first to define “reactive systems”: “Reactive systems… are repeatedly prompted by the outside world and their role is to continuously respond to external inputs.”¹ The paper specifies that reactive systems are not restricted to software alone. They were discussing ways to develop any type of reactive system, software or hardware. A few years later, in 1989, Gerard Berry focused on the software aspects in his paper “Real Time Programming: Special Purpose or General Purpose Languages”: “It is convenient to distinguish roughly between three kinds of computer programs. Transformational programs compute results from a given set of inputs; typical examples are compilers or numerical computation programs. Interactive programs interact at their own speed with users or with other programs; from a user point of view a time-sharing system is interactive.
¹Harel, D., & Pnueli, A. (1985). “On the development of reactive systems” (pp. 477-498). Springer Berlin Heidelberg.


Reactive programs also maintain a continuous interaction with their environment, but at a speed which is determined by the environment, not by the program itself. Interactive programs work at their own pace and mostly deal with communications, while reactive programs only work in response to external demands and mostly deal with accurate interrupt handling. Real-time programs are usually reactive. However, there are reactive programs that are not usually considered as being real-time, such as protocols, system drivers or man-machine interface handlers. All reactive programs require a common programming style. Complex applications usually require establishing cooperation between the three kinds of programs. For example, a programmer uses a man-machine interface involving menus, scroll bars and other reactive devices. The reactive interface permits him to tell the interactive operating systems to start transformational computations such as program compilations.”²
²Gerard Berry (1989). “Real Time Programming: Special Purpose or General Purpose Languages” (pp. 11-17). IFIP Congress.

From the preceding quotes we can say that reactive programs…

• Activate in response to external demands
• Mostly deal with handling parallelism
• Operate at the rate of incoming data
• Often work in cooperation with transformational and interactive aspects

The definition of dataflow is a little more vague. Any system where the data moves between code units and triggers execution of the code could be called dataflow, which includes reactive systems. ²Gerard Berry (1989). “Real Time Programming: Special Purpose or General Purpose Languages” (pp.11-17) IFIP Congress


Thus, I consider Reactive Programming to be a subset of dataflow, but a rather large subset. In casual use, Reactive Programming is often a synonym for dataflow.

1.3 Von Neumann Architecture
The reason parallel programming is so hard is directly related to the design of the microprocessors that sit in all of our computers. The Von Neumann architecture is used in the common microprocessors of today. It is often described as an architecture where data does not move. A global memory location is reserved and given a name (the variable name) to store the data. Its contents can be set or changed but the location is always the same. The processor commands, in general, deal with assigning values to memory locations and deciding what command should execute next. A “program-counter” contains the address of the next command to execute and is affected by statements like goto and if. Our programs are simply statements to tell the microprocessor what to do… in excruciating detail. Any part of the program can mutate any memory location at any time.

In contrast, dataflow has the data move from one piece of code to another. There is no program-counter to keep track of what should be executed next; data arrival triggers the code to execute. There is no need to worry about locks because the data is local and can only be accessed by the code it was sent to.

The shared memory design of the Von Neumann architecture poses no problems for sequential, single threaded programs. Parallel programs with multiple components trying to access a shared memory location, on the other hand, have forced us to use locks and other coordination methods with little success. Applications of this style are not scalable and put too much burden on developers to get it right. Unfortunately, we are probably stuck with Von Neumann processors for a long time. There's too much software already written for them and it would be crazy to reproduce that software for a new architecture.

Even our programming languages are influenced by the Von Neumann architecture. Most current programming languages are based directly or indirectly on the C language, which is not much more than a prettier form of assembly language. Since C uses Von Neumann principles, by extension all derivative languages are also Von Neumann languages.

It seems our best hope is to emulate a parallel friendly architecture on top of the Von Neumann base. That's where this book comes in. All dataflow implementations that run on Von Neumann machines must translate dataflow techniques to Von Neumann techniques. I will show you how to build those systems and understand the ones you will encounter.

1.4 Benefits of Dataflow
Some of the benefits of dataflow that we'll cover in this book are…

• Dataflow has an inherent ability for parallelization. It doesn't guarantee parallelism, but makes it much easier.
• Dataflow is responsive to changing data and can be used to automatically propagate GUI events to all observers.
• Dataflow is a fix for “callback hell.”
• Dataflow is a high-level coordination language that assists in combining different programming languages into one architecture. How nodes are programmed is entirely left up to the developer (although implementations may put constraints on it, the definition of dataflow does not). Dataflow can be used to combine code from distant locations, written in different languages, into one application.


“Contrary to what was popularly believed in the early 1980s, dataflow and Von Neumann techniques were not mutually exclusive and irreconcilable concepts, but simply the two extremes of a continuum of possible computer architectures.”³
³Johnston, W. M., Hanna, J. R., & Millar, R. J. (2004). Advances in dataflow programming languages. ACM Computing Surveys (CSUR), 36(1), 1-34.

• For those visual thinkers out there, dataflow graphs lend themselves to graphical representation and manipulation. Yet there's no requirement that they must be displayed graphically. Some dataflow languages are text only, some are visual and the rest allow both views.

1.5 History
The first description of dataflow techniques was in the 1961 paper “A Block Diagram Compiler”⁴, which developed a programming language (BLODI) to describe electronic circuits. The paper established the concepts of signal processing blocks communicating over interblock links, encoded as a textual computer language. “BLODI was written to lighten the programming burden in problems concerning the simulation of signal processing devices”⁵.
⁴John L. Kelly Jr., Carol Lochbaum, V. A. Vyssotsky (1961). “A Block Diagram Compiler”. Bell System Technical Journal, pages 669-678.
⁵ibid

In 1966 William Robert Sutherland wrote “The On-Line Graphical Specification of Computer Procedures”⁶, which heavily influenced the visual representation of dataflow. He proposed a purely visual programming language where the user interacted with the computer using the new technology of video displays and drawing tablets. Objects were drawn and then given a meaning.
⁶Sutherland, W. R. (1966). “On-Line Graphical Specification of Computer Procedures” (No. TR-405). Lincoln Lab, Massachusetts Institute of Technology, Lexington.


In a historical video, Sutherland is shown drawing a square-root block and then defining its operation by drawing its constituent blocks. Jack B. Dennis continued the evolution of dataflow by explaining the exact steps that must be taken to execute a dataflow program in his 1974 paper, “First Version of a Data Flow Procedure Language”⁷. Many consider this paper to be the first definition of how a dataflow implementation should operate.
⁷Dennis, J. B. (1974, January). “First version of a data flow procedure language”. In Programming Symposium (pp. 362-376). Springer Berlin Heidelberg.

Dataflow has always been closely related to hardware. It is essentially the same way electronic engineers think about circuits, just in the form of a programming language. There have been many attempts to design processors based on dataflow as opposed to the common Von Neumann architecture. MIT's Tagged Token architecture, the Manchester Prototype Dataflow Computer, Monsoon and the WaveScalar architecture were all dataflow processor designs. They never gained the popularity that Intel's Von Neumann microprocessors did, not because they wouldn't work, but because it was impossible for them to keep pace with the ever increasing clock speeds that Intel, Zilog and other mass-market manufacturers were able to provide.

From the 1990s until the early 2000s, less research went into dataflow because there was no pressing need. Every 18 months a new, faster microprocessor came out and no one felt a need to change the way things were done. Due to the increasingly graphical capabilities of computers, most of the advances during this period were concentrated in the visual aspects of dataflow. LabVIEW is one of the notable developments of this period.

Then we reached the limits of silicon. Starting around 2005, processor speed stopped increasing and the only option was to just add more cores to the chip. Parallelism became important again. Developers began looking around for solutions and created a resurgence in the 40+ year old concept of dataflow and reactive programming.


1.6 The Purpose of this Book
Dataflow is difficult to learn, not due to inherent complexity but due to the number of variations dataflow can take on and the lack of a standardized language. Take, for example, the most common of all elements of dataflow, the node. Some call it a node while others call it a “block,” a “process,” an “action,” an “actor” or any number of other names. Extend this renaming to other basic elements and sometimes you're not sure what you are reading about. Half of the work in reading about dataflow is learning the author's terminology.

Additionally, dataflow does not have a single set of features and capabilities. It is like ordering from a Chinese restaurant. Mix and match as you want, but some things just don't taste right together. My goal is to describe all of the possible variations in easy to understand terms. Your goal should be to understand the general concepts of dataflow. Then you will be able to apply that knowledge to specific problems with possibly different semantics than those I describe. The purpose of this book is to give you the tools and understanding to work with a multitude of dataflow systems.

2 Dataflow Explained
Dataflow is when the data controls program execution. It eliminates the need for polling to check if data has arrived. Dataflow programs react to changing data, updating whenever new data arrives.

2.1 Pipeline Dataflow
My explanation of dataflow starts with the simplest core of a working dataflow implementation (Pipeline Dataflow). With that understanding, we can then look at different features you find in dataflow systems and how they affect the basic operations of Pipeline Dataflow.

2.2 Nodes
A “node” is a processing element that takes inputs, does some operation and returns the results on its outputs. It is a unit of computation. The actual computation doesn't matter because dataflow is primarily concerned with moving data around.

A dataflow node with one input port and one output port

The only way for a node to send and receive data is through “ports.” A port is the connection point between an arc and a node. Think of a node as a black box where the only way to interact with it is through its inputs and outputs. The ports are the only view you have inside the box. Pipeline Dataflow only has one input and one output port per node, but other models allow for multiple inputs and outputs. When nodes have multiple ports, they are often given names to distinguish one from another.

Nodes are often functional, but it is not required. By “functional” I mean that if you give it the same inputs at two separate times, it will always return the same answer.

Node execution is called “activation” or “firing.” When a node is activated, it will first take the data it needs from the input port. Then, using that data, it processes it and creates an answer. Just before it is done, the node will push its answer to the output port and end.

Port
The connection point between an arc and a node
Node
A processing element, a unit of computation, that has input and output ports to pass data
Activation or Firing
Executing a node… asking it to perform its calculation

2.3 Data
Surprisingly for a method that has “data” in its name, dataflow doesn't deal with the actual values of data. The value is immaterial because all data is treated the same. Often data is just referred to as “tokens.” The Oxford English Dictionary defines “token” as “a thing serving as a visible or tangible representation of a fact.” Tokens in dataflow represent data without worrying about its value or type.


Token
Another name for data without regard to its value or type

Mutability of the data is important because it affects the semantics of the dataflow implementation. The traditional and prudent choice is for data to be immutable, especially if parallelism is important to you. Yet mutable data is not ruled out. When dataflow is combined with a common language like C++, mutable data is often unavoidable. Pipeline Dataflow typically uses immutable data.

2.4 Arcs Data is sent from one node to another through “arcs.” They are paths for tokens to flow between nodes. Data typically flows from the output port to the input port of nodes. Although it is very uncommon, the reverse is also possible (section Push or Pull). For now, just assume that data tokens only flow from output to input. Arc A path or conduit that is connected between two nodes and allow tokens to be transmitted from one node to the other. Also called an edge, wire, link or connection

Think of an arc as a pipe that can only hold so much data at any one time. When a node is activated, it accepts some data from the input port and eventually pushes data to the output port. The requirement for a node to fire is that there is at least one token waiting on the input arc and space available on the output arc. Reading data from the input port frees space on the intervening arc.


The amount of data that an arc can hold is called the “arc capacity.” The Pipeline Dataflow model says that you may only have one token per arc. Some theoretical dataflow models assume arcs have an unbounded capacity, but that only works in theory.

Arc Capacity
The maximum number of tokens that an arc can hold at any one time
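A minimal sketch of a bounded arc, written in Python as an illustration (not the book's code); a full arc here rejects the producer-side put, where a real engine would instead make the sender wait:

    # A bounded arc as a FIFO queue with a fixed capacity.
    from collections import deque

    class Arc:
        def __init__(self, capacity=1):
            self.capacity = capacity
            self.tokens = deque()

        def has_space(self):
            return len(self.tokens) < self.capacity

        def put(self, token):
            if not self.has_space():
                raise RuntimeError("arc is full")  # producer must wait instead
            self.tokens.append(token)

        def get(self):
            return self.tokens.popleft()           # frees space on the arc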

Splitting and joining two arcs together are not allowed in Pipeline Dataflow. We'll look at this very common feature in the section Arc Joins and Splits.

An arc join and split

Arcs
• An arc always connects one output port to one input port
• Typically, data flows from output to input
• An arc may contain zero or more tokens, as long as the count is less than the arc's capacity


2.5 Dataflow Graphs
A dataflow graph is a directed graph containing nodes and the arcs between them. The graph is the dataflow program. There are no hidden connections between nodes. Every connection is explicitly defined by the programmer using arcs. Since Pipeline Dataflow only allows for one input port and one output port per node, and arcs can't be split, the only style of graph we can create is a simple string of nodes connected with arcs.

Dataflow Graph
Explicitly states the connections between nodes. A graph is also called a dataflow program

2.6 Executing a Graph
Where do we start if we want to execute the dataflow program shown in the figure below? In Pipeline Dataflow (and most dataflow models), nodes are activated only when data is available on the input port and are idle at all other times.

A Simple Pipeline Dataflow Program

Assume that initially there are no tokens on any of the arcs in the graph. If a single token appears on the input arc of node A, then it can fire. The result is that it consumes the token from the input, computes a new token and puts it on the output. Now node B has a token available on its input arc, so it can fire. Node A doesn't have any more input tokens waiting, thus it goes into an idle state. Node B consumes the input token, computes a new token and puts it on the output. Node C follows the same execution pattern while nodes A and B are idle. Since no other tokens exist in the graph, none of the nodes can fire; therefore the execution of the graph is done.

This is what is called “data-driven” execution. It is the presence of data that causes nodes to fire. Conversely, the lack of data allows them to go idle.

Now let's evaluate the graph again with one change. Instead of only putting a single token on the input arc to A, this time let's supply it with a steady stream of data. Just as before, node A activates first, passing a new token to B. Node B has data so it can fire just like before, but now node A also has data so it can fire as well. Nodes A and B can both execute at the same time. Since the two nodes don't share any data and are “functional,” we can safely execute them in parallel. Once B is done, it will put a new token on its output for node C to consume. When A is done, it will also put a new token on the output arc for B to consume. Our graph now has a token on C's input, so node C can fire, a token on B's input, so node B can fire, and since node A always has a constant stream of data on its input it can fire too. All three nodes can execute at the same time. If we continue with our example, we would again have all three nodes execute again and again. This continues until we stop supplying data to node A.

So far we have just assumed that there was enough space available on the output arc to push a new token to it. But we have to be sure that two conditions are met before activating a node:

Preconditions for Node Activation
• A token is available on the input arc
• Space for a new token is available on the output arc

So for our example, we have to ensure that node C is fired before node B to make room for a new token on the intervening arc. Similarly, node B has to fire before node A. Executing all three nodes at the same time, so that space on the outputs is freed at the same time that new tokens are pushed to them, would accomplish the same thing. As long as our preconditions are met, we are safe.

In all the examples so far, a node has always pushed a new token to the output arc. A node is not required to push a new token upon activation. It is perfectly acceptable for it to consume a token from the input, process it and then end. The only effect this has is that the downstream node doesn't receive a new token.

Execution Semantics
Assuming the preconditions for node activation are met, node activation always follows the same steps:
1. A token is consumed from the input arc
2. The node executes
3. If needed, a new token is pushed to the output arc
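To ground these semantics, here is a minimal sketch in Python of a data-driven pipeline engine (my illustration, not the book's downloadable code); the three node functions and the arc names are assumptions:

    # Pipeline dataflow engine sketch: single-input, single-output nodes,
    # arc capacity of 1, data-driven activation.
    def node_a(x): return x + 1            # placeholder computations
    def node_b(x): return x * 2
    def node_c(x): print("result:", x)     # sink: consumes without producing

    # Each arc holds at most one token (None means empty).
    arcs = {"in": 5, "a_b": None, "b_c": None}   # one token waiting for A
    graph = [  # (node function, input arc, output arc)
        (node_a, "in", "a_b"),
        (node_b, "a_b", "b_c"),
        (node_c, "b_c", None),
    ]

    fired = True
    while fired:                            # run until no node can activate
        fired = False
        for fn, i, o in graph:
            token_ready = arcs[i] is not None
            space_free = o is None or arcs[o] is None
            if token_ready and space_free:        # activation preconditions
                token, arcs[i] = arcs[i], None    # 1. consume the input token
                result = fn(token)                # 2. execute the node
                if o is not None:
                    arcs[o] = result              # 3. push token downstream
                fired = True

The loop goes idle as soon as no arc holds a token, mirroring the single-token walkthrough above.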

2.7 Features of Dataflow Systems
The previous sections defined a basic dataflow implementation, Pipeline Dataflow. It is the simplest model that is still usable, but it is also very restrictive in the types of programs you can build with it. Unix pipes, Chain of Responsibility, stream processors and filters are all names used to describe Pipeline Dataflow. It is such an obvious solution that it has been reinvented many times over.

In the following sections we are going to look at features that can be added to the Pipeline Dataflow model to make it more powerful. Certain combinations of these features make up well known dataflow models like Synchronous Dataflow or Flow-Based Programming. Some features clash with others and are impossible to combine. Yet these features, along with the definition of Pipeline Dataflow, are all you need to build most other models of dataflow. Pipeline Dataflow can be thought of as the foundation on which to build more complicated dataflow models.

But it is not as easy as randomly combining features to get your own Frankenstein of dataflow systems. Some combinations have bad interactions. For example, using mutable data with arc splits is a bad idea because then you have to worry about making a deep copy of the data at every split. You need to understand what each feature does and its effect on the system as a whole.

Not every dataflow model will neatly align with these features. But these features were distilled from my understanding of many models of dataflow computation. Therefore they tend to work with the majority of the models you will encounter. If you are using these features to understand and classify an existing dataflow implementation, then be aware that there may still be operational differences in the specific implementation that I don't cover here.

2.7.1 Push or Pull Data
Push and pull refer to the way tokens move through the system. With push, nodes send tokens to other nodes whenever they are available. The data producer is in control and initiates transmissions. Clicking on a form's submit button is an example of push. The browser initiates the conversation with the server and sends the form's data.

On the other hand, pull puts the consumer in control. The consumer node must first request data from the producer. If the producer needs data from other upstream nodes, it will also request data from those nodes. This continues until there is a producer node that doesn't need data from any other node. It will transmit its data to the requesting node. That consumer node can now send data to any node that requested data from it, too.

Pull is very rarely used in dataflow systems but can be advantageous when a node's action is costly. It allows a node to lazily produce an output only when needed.
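A minimal sketch of pull-style, demand-driven evaluation in Python; the PullNode class and the three-node chain are illustrative assumptions, not the book's implementation:

    # Pull-style sketch: a consumer requests a value, and the request
    # propagates upstream until a producer with no inputs is reached.
    class PullNode:
        def __init__(self, compute, upstream=None):
            self.compute = compute        # the node's operation
            self.upstream = upstream      # producer node, or None for a source

        def request(self):
            # Lazily evaluate: nothing runs until a downstream node asks.
            if self.upstream is None:
                return self.compute()     # source produces its own data
            return self.compute(self.upstream.request())

    source = PullNode(lambda: 5)                    # no inputs: a source node
    doubler = PullNode(lambda x: x * 2, source)
    consumer = PullNode(lambda x: x + 1, doubler)

    print(consumer.request())   # 11; no node executes before this call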

2.7.2 Mutable or Immutable Data
Mutability means that data can change at any time. Conversely, immutable data will never change. Consider the situation when the same data token is sent to two nodes. If one node changes the data, then what is the value of the data token at the other node? If the data is mutable, then the second node will see the new value of the data token. Now if the second node also changes the data, then the value from the first node is lost.

A key feature of dataflow is its ability to execute multiple nodes in parallel without worrying about locks. Mutable data complicates this feature. Immutable data is preferred any time parallel computations exist. Unless you have a good reason, don't use mutable data. Using immutable data means that you will not modify existing data; you always create a new data token.

Splits (section Arc Joins and Splits) become a problem with mutable data. To ensure safe parallel operation of nodes, data arriving at a split must be copied so an independent version can be sent along the two arcs. Even if you don't need nodes to operate in parallel, mutable data can be a source of nasty bugs. Our mental image of an arc with a split is that the same data value is sent along both paths of the split. If the value can change at any time due to an unrelated node's execution, our mental image is shot and debugging is a nightmare.

Token copying can be expensive but it is the safest way to use mutable data in dataflow. If you are combining dataflow with an imperative programming language, then you may just have to accept the difficulties that come with mutable data.
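As a sketch of the copy-at-split rule, assuming Python, where copy.deepcopy stands in for whatever deep-copy mechanism the host language provides:

    # A split that copies mutable tokens so each branch gets an
    # independent version of the data.
    import copy

    def split(token, arcs):
        for arc in arcs:
            arc.append(copy.deepcopy(token))   # independent copy per branch

    left, right = [], []                       # two outgoing arcs
    split({"value": 1}, [left, right])
    left[0]["value"] = 99                      # mutating one branch...
    print(right[0]["value"])                   # ...leaves the other at 1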

2.7.3 Static or Dynamic
2.7.3.1 Dynamic
Dynamic dataflow allows changes in the graph and/or node definitions to happen at run-time. Many programming languages allow functions to accept other functions as arguments (we call these higher-order functions). This allows you to define general purpose functions that are specialized with other functions. Dynamic dataflow allows us to do the same thing. For example, we could design a generic filter node and specialize its filtering at run-time by passing it another node that has the same interface (input and output ports). The generic node will be completely replaced by the specialized node (we call this “node replacement”). You can view this as either changing the graph or changing the node's definition, but the result is the same.

Node Replacement
When one node is completely replaced by another node with the same interface at run-time
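A minimal sketch of node replacement in Python; the Graph class and the filter example are my illustrations, not the book's API:

    # Node replacement sketch: a generic node is swapped at run-time for a
    # specialized node with the same interface (one input, one output).
    class Graph:
        def __init__(self):
            self.nodes = {}

        def replace(self, name, new_node):
            # The new node must present the same interface as the old one.
            self.nodes[name] = new_node

    graph = Graph()
    graph.nodes["filter"] = lambda x: x           # generic pass-through filter
    graph.replace("filter", lambda x: x if x > 0 else None)  # specialized
    print(graph.nodes["filter"](-3))              # None: new behavior in place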


Modifying the arcs at run-time is another technique of dynamic dataflow that allows the program to reconfigure itself.

Before arc modification

After arc modification

In the above figures, the graph initially had an arc from node A to node B. After the modification, the graph has an arc connecting node A to node C. While this sounds like a powerful concept, self-modifying code has been out of favor for a long time because it makes debugging a nightmare. The actual graph being executed after the arc modification is now different from the one the programmer designed. Your debugging tool must take this into account and show you the current graph. Use with caution.

Node replacement, for the most part, is safe but should be used only when necessary. When there are a few known replacements for a node at design time, it is better to explicitly put those nodes into the graph and use a switch to direct the tokens to the correct specialized node at the appropriate time. Only when the number of possible replacements is large is it better to add dynamism to eliminate the boiler-plate arcs and switches. While boiler-plate code may be boring, using it explicitly details all the possible paths that a token can flow through. Node replacement, on the other hand, requires you to examine the code or documentation of another node to learn what replacement nodes it may give you. And since node replacement only happens at run-time, it may be less obvious where a bug is occurring.

2.7.3.2 Static
A static dataflow system is one in which no changes can be made at run-time. The graph and the nodes are fixed. We experience the same situation with compiled languages like C or C++.

Static Dataflow in Hardware
Much of the available literature on dataflow is written from a hardware perspective. Don't get confused when they say that static systems can only execute one copy of a node at a time. This restriction is for hardware only. In hardware, executing multiple copies of the same node requires the processor to use the same collection of bits that define the node for all activations of the node. Thus, all simultaneous activations of a node would use the same memory locations to store data for that node. Modifications from one instance of a node would be overwritten by another instance of the same node. Software is not bound by the same restrictions.

Static dataflow in hardware is akin to an electronic circuit board. All the parts on it are fixed and unchanging. A circuit cannot just create new copies of a part out of thin air. It is bound to the parts that physically exist on the circuit board.

Therefore, static dataflow in software means that only the layout of the graph and the node definitions are static; it has no effect on the parallelism of the system.

Dynamic Dataflow
Changes in the graph and/or node definitions can be made at run-time.
Static Dataflow
Graph and node definitions cannot be changed at run-time.

2.7.4 Functional or Stateful Nodes
Functional nodes do not carry any state with them from one activation to the next. The result of each activation is due only to the tokens received on the inputs for the current activation. Stateful nodes are allowed to retain local data.

A key point to understand is that any stateful node can be made into a functional node just by having an output arc that loops back to an input of the same node (a self arc, or self loop). Instead of storing state internally, the node sends its state data on the output so that in the next activation it can read it back in. Depending on how the system is designed, the additional arc traffic could reduce performance. If almost every node uses self arcs, then it is better to just allow nodes to be stateful and reduce the extra arc overhead, although functional nodes do simplify the system design by eliminating the need for node-local storage.

Functional nodes do not have to be implemented in a functional language. As long as the node never retains state from one activation to the next, it is functional.


A stateful counter node

A stateless counter node

An example of a stateful node with and without self arcs is the counter in the figures above. The stateful counter stores the current value internally to the node, while the stateless version uses its input value as the current state. Immutable data is not required with functional nodes, but it is a wise combination for safety. In the same way, mutable data is not required for stateful nodes.
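Here is a minimal sketch of the two counters in Python (my illustration, not the book's code); the class and function names are assumptions:

    # A stateful counter keeps its count internally; the stateless version
    # receives the current count as an input token and emits the new count,
    # which the engine routes back to it on a self arc.
    class StatefulCounter:
        def __init__(self):
            self.count = 0                 # node-local state

        def fire(self):
            self.count += 1
            return self.count

    def stateless_counter(count):
        # State arrives as a token; the caller feeds the output back in.
        return count + 1

    c = StatefulCounter()
    print(c.fire(), c.fire())              # 1 2

    state = 0                              # initial token on the self arc
    state = stateless_counter(state)
    state = stateless_counter(state)
    print(state)                           # 2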

2.7.5 Synchronous or Asynchronous Activation
Asynchronous and synchronous activation are just two ways to do the same thing… activate nodes in time. The asynchronous method has no preexisting knowledge of what's going to happen and just manages the now, while the synchronous method is very ordered and structured and plans everything first. A major classification of dataflow systems is whether they are synchronous or asynchronous. This delineation defines what features the system can support. Due to its importance, we will examine implementations of both types of systems in the following chapters.


2.7.5.1 Asynchronous
Asynchronous execution is when a node fires any time that its activation preconditions are met, possibly at the same time as other nodes. Due to the nature of asynchronous activations, it is often combined with an arc capacity > 1 (section Arc Capacity). Nodes may have bursts of activity and thus the token production rate may go up and down. If there is not enough space on the output arcs, the node can't activate. Arcs need a capacity > 1 to buffer these bursts of tokens.

2.7.5.2 Synchronous
Synchronous execution is when nodes fire on a pre-calculated, fixed schedule. This implies that the program has to be compiled first to build the schedule. The schedule determines which nodes have to fire before others, how often they have to fire and which nodes can fire simultaneously.

In some applications, the token rate is always known at design time. So why pay for the overhead of a run-time activation manager when we know at design time what has to fire when? In these situations, synchronous activation allows you to compile the program to gain extra performance.

In other domains, nodes need more flexibility. They need to have different token rates at different times. Pre-calculating schedules for these types of programs is impossible – it has to be done at run-time. The system has to be able to respond to nodes at any time, and asynchronous activations make sense.

Synchronous activations allow us to determine, at design time, the number of tokens on an arc at any one time. Therefore we can find the maximum number of tokens an arc will ever hold and set our arc capacity accordingly. On the other hand, asynchronous activation requires the designer to give a best guess for the arc capacity.


Unlike asynchronous activation, synchronous activation cannot be combined with dynamic program structure (section “Static or Dynamic”) because the activation schedule is pre-calculated.

2.7.5.3 Hybrid
Another option is to have both an asynchronous and a synchronous engine in one package. The synchronous section is isolated; it is the asynchronous engine that communicates data between the separate synchronous programs, asynchronous programs and the outside world. Synchronous dataflow has the benefit that programs will never deadlock, but this comes at a price: some types of programs cannot be expressed in synchronous dataflow. Dividing a program into synchronous and asynchronous sections gives us the best of both worlds.

Asynchronous Activation
When a node fires any time its activation preconditions are met.
Synchronous Activation
When a node fires on a pre-calculated, fixed schedule.
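As a contrast with the run-time checks of asynchronous activation, here is a minimal sketch in Python of replaying a pre-calculated synchronous schedule; the node functions and the schedule contents are illustrative assumptions, since a real schedule would come from the compilation process described in the Synchronous Dataflow Implementation chapter:

    # Synchronous activation: a schedule computed at compile time is
    # simply replayed; no run-time check of token availability is needed.
    def fire_a(): print("A fires")
    def fire_b(): print("B fires")

    schedule = [fire_a, fire_a, fire_b]   # e.g. A must fire twice per B
    for _ in range(2):                    # two iterations of the schedule
        for fire in schedule:
            fire()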

2.7.6 Multiple Inputs and/or Outputs
The Pipeline Dataflow model explained earlier only allowed nodes to have one input and one output. This severely restricts the types of programs you can develop. It is like using a programming language whose functions only allow one parameter to be passed in. Multiple outputs are not as critical but are nice to have.


2.7.6.1 Multiple Inputs
Most processes that we build and encounter need multiple items of data to do their jobs. Without the allowance of multiple input ports, you are not even able to add two numbers. In all but the most basic of dataflow systems, multiple inputs are commonplace.

Adding multiple inputs to the Pipeline Dataflow model complicates node activation. With a single input it was simple: a node couldn't fire until there was data available. With multiple inputs, data can be available on some but not all inputs at any one time. A “firing rule” is the condition that must be met before a node can execute. It specifies what inputs must have tokens available before the node can fire. There are two extremes to consider: a node should only fire when ALL of its inputs have tokens waiting, or a node should fire when ANY of its inputs have tokens waiting. Certain operations can't be performed unless the node is supplied with all of the data at once (for example, addition or subtraction). Other nodes can operate when only some of their inputs are available.

Nodes with Multiple Inputs

The program in this figure implements the simple equation 2a+3. Let's step through executing it like a dataflow engine would. First, let's give “a” the value of 5. A token with the value of 5 is put on the “a” input. Now the “multiply” node has tokens on all of its inputs so it can fire, producing the result of 10 on its output. A token with the value of 10 is on one input of the addition node and a token with the value of 3 is on the other input. The “addition” node has all of its inputs so it can fire, giving us the result of 13.
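A sketch of this walkthrough in Python (the graph encoding is my illustration, not the book's):

    # The 2a+3 graph with the "all" firing rule: a node fires only when
    # every input arc holds a token.
    def try_fire(inputs, op):
        if all(arc for arc in inputs):             # "all" firing rule
            args = [arc.pop(0) for arc in inputs]  # consume one token each
            return op(*args)
        return None                                # not enough tokens yet

    a, two, three = [5], [2], [3]                  # tokens on the input arcs
    product = try_fire([two, a], lambda x, y: x * y)        # 10
    result = try_fire([[product], three], lambda x, y: x + y)
    print(result)                                  # 13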

A “Merge” or “Correlate” Node

The addition operation dictated that we use the “all” firing rule. Other nodes require different firing rules. A common node is called “merge” or “correlate.” It takes two or more input streams and merges them into a single output stream. Tokens can come in on any input, all inputs or just one. If we were restricted to using the “all” firing rule, then the system may deadlock if we never get a token on a certain input to the merge node.

“Any” and “all” firing rules are the most common, but it is possible to have any combination of input requirements as a firing rule. In systems I design, I usually use what I call a “firing pattern” that allows either me or the developer to specify, for each input, one of the following requirements:

• A token must be available on the input
• A token must not be available on the input
• Don't care – a token may or may not be available

For more information see the following section, “Fire Patterns”.


Adding multiple inputs changes the preconditions for node activation that were given for the Pipeline Dataflow model.

Activation Preconditions for Nodes with Multiple Inputs
• A firing rule for the node must match the available input tokens
• Space for new token(s) is available on the outputs

2.7.6.2 Multiple Outputs
Nodes with multiple outputs don't require us to modify the firing rules we have learned about so far. The outputs can be connected to other nodes in the same way. Nodes often have multiple outputs because there are alternate ways of producing an answer. For example, the following figure is a division node that can calculate both the quotient and the modulo.

Nodes with Multiple Outputs

2.7.7 Fire Patterns
A fire pattern is a condition that must exist on the inputs before a node can fire. It is a list of pattern codes that consists of:


• “1” : Exists - A token must be waiting on the port
• “0” : Empty - The port should not have any tokens waiting
• “X” : Don't Care - A token may or may not be available on the port
• “*” : Always - Causes the pattern to match regardless of the other inputs

The firing pattern is written as a comma delimited list of pattern codes (one for each input port), such as [1,1,X], or, to specify the ports too, [1:input a, 1:input b, X:input c]. A pattern such as [1,1,*] would match even if the first two inputs do not have tokens waiting. The pattern could also be written as [X,X,*] or just [*].

A node can have multiple fire patterns. They are tested in order, from first pattern to last, and succeed with the first matching pattern found. An implementation may allow a different fire pattern per node (and allow the programmer to define the pattern) or a single fire pattern for all nodes. The fire patterns for the merge node shown in the above figure are:

• [1,X,X] First input has a token, don't care about the second or third inputs
• [X,1,X] Second input has a token, don't care about the first or third inputs
• [X,X,1] Third input has a token, don't care about the first or second inputs

A node with no inputs (called a “Source Node” and described in section “Common Nodes”) will have a fire pattern of [*]. Thus it should fire upon every test for activation. The “all” fire rule mentioned in section “Multiple Inputs and Outputs” uses a firing pattern like [1,1,1,…] and the “any” fire rule patterns are [1,X,X,…], [X,1,X,…] and [X,X,1,…].
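To make the matching rule concrete, here is a minimal sketch in Python (my illustration, not code from the book's download); the string encoding of the codes follows the list above:

    # Fire-pattern matching: a pattern is a list of codes, one per input
    # port; "1" = token required, "0" = must be empty, "X" = don't care,
    # "*" = always match.
    def matches(pattern, inputs):
        if "*" in pattern:
            return True                      # "always" overrides the rest
        for code, arc in zip(pattern, inputs):
            has_token = len(arc) > 0
            if code == "1" and not has_token:
                return False
            if code == "0" and has_token:
                return False                 # "X" never fails a port
        return True

    # Merge node: fire when ANY input has a token (first match wins).
    merge_patterns = [["1","X","X"], ["X","1","X"], ["X","X","1"]]
    inputs = [[], [42], []]                  # token waiting on second input
    print(any(matches(p, inputs) for p in merge_patterns))   # True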


2.7.8 Cycles and Feedback
Loops, cycles and feedback all mean the same thing. It is when data from one part of the graph is sent back to an upstream node. In graph terms, it is a cyclic graph (Pipeline Dataflow is acyclic… no feedback). In traditional programming languages we often call the use of cycles “iteration.”

Powers of “n” program

A simple program with feedback is shown in this figure. Let's step through the execution of this graph with n = 2, using the “all” firing rule (see section Multiple Inputs and Outputs). We start with data available on one input of the multiply node, but the other input is fed back from the output. We need an output token before we have an input token, but we need an input token before we can generate an output token. The program is deadlocked.

If we put an initial value of 1 on the feedback arc, then the system would run. The multiply node has the value of 2 on one input and the value of 1 on the other. Since all inputs are available the node can fire, producing the output of 2, which is fed back to the input. Now there is a 2 on both inputs, the output is now 4, which is fed back…

An initial value is a token that exists on an arc before the program starts and is very common in synchronous dataflow systems. It is the programmer's responsibility to determine where initial values are needed. Initial values can be used in asynchronous dataflow systems, but there's also another option available. Instead of using an initial value, we could use a type of node called a “one-shot” that will output a token just once, at its first activation, and then never again. In the example above, we would connect the output of the one-shot to the feedback arc. Now when we run the program, the one-shot puts a 1 on the feedback arc, kick-starting the system to run.

A common concern developers have with cycles is that it is very easy to design dataflow programs that never terminate. But this is an unfounded worry based on their experience with single threaded, traditional program design. I cover this topic later (section When is it Done) but in short there are a few ways to deal with this issue:

• Accept that dataflow programs are designed to run continuously and that the output value is simply the output at any one point in time
• Don't use cycles and accept the fact that some programs can't be written without them
• Use cycles but design the program so that they will always terminate

Initial Value: A token that exists on an arc before the program starts.
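To see the effect of the initial value in the powers-of-n program, here is a small sketch in Go, with a buffered channel standing in for the feedback arc (the bounded loop count exists only so the example terminates):

package main

import "fmt"

func main() {
    n := 2.0
    feedback := make(chan float64, 1)
    feedback <- 1.0 // the initial value placed on the feedback arc

    // The multiply node: it fires when both inputs have a token.
    // The constant input n is always available, so without the
    // initial value above the first receive would deadlock.
    for i := 0; i < 5; i++ {
        x := <-feedback  // consume the token on the feedback arc
        out := n * x     // fire
        fmt.Println(out) // 2, 4, 8, 16, 32
        feedback <- out  // feed the output token back
    }
}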

2.7.9 Recursion

Recursion is when a node contains an instance of itself. This assumes that compound nodes (section Compound Nodes) are allowed.


Recursion is similar to cycles except the number of times it will loop is not known until run-time and the condition to stop the recursion is internal to the node. Cycles are external to the node in question while recursion is internal to the node. To try to clarify the difference, let’s look at some code written in a hypothetical, traditional programming language. The following is an example of cycles:

x = 0;
for (count = 0 to 100){
    x = node(x);
}

The node() function accepts an input, x, and then produces a new value that is assigned back to x to be the argument for the next call to itself. Translating this to dataflow, the node() function would be a one input, one output node with an arc from its output to its input, with another node in the middle to stop the feedback when the count reaches 100. Notice that the arc cycle is external to node(). The condition to stop the cycling of data is also external (in this example, when count reaches 100). Now for a recursive example…

function int node(int x){
    if (x < 100) {
        return node(x + 1);
    }
    return x;
}

Here the condition to stop the recursion is internal to node() itself. Implementing recursive nodes requires an arc capacity > 1 because one arc carries the tokens for all the recursion levels.

2.7.10 Compound Nodes

The ability to create new nodes by using other nodes is essential to reducing complexity and maximizing code reuse. Think about how much of a headache it would be to build programs without functions or procedures. A compound node is equivalent to functions and procedures in that it creates new nodes by packaging together a few other nodes to perform a task. A primitive node is one that does not contain other nodes. Primitive nodes are the basic building blocks of the system and are equivalent to keywords in traditional programming languages.

2.7.10.1 Execution of Compound Nodes

While this seems like a very simple concept, it can create problems with execution in dynamic dataflow. First, let’s look at the easy situation: using compound nodes in static dataflow systems. Since static dataflow programs are not allowed to change at run-time, we can view compound nodes as simply a design-time convenience to make building and understanding the program easy for the developer. At compile-time, compound nodes are recursively expanded to primitive nodes and that is what the dataflow engine executes. Dynamic dataflow allows nodes to be modified at run-time so we can’t just throw away compound nodes. I have used two strategies


in systems I have built. The easy way is to have the compound node act as a wrapper that just accepts tokens and passes them on to its internal nodes. With compound nodes containing other compound nodes that contain even more compound nodes, all of these wrappers can add significant overhead. Additionally, wrapper nodes slow down the transmission of tokens because a token must travel through and be processed by each wrapper in turn. To eliminate the overhead I have also designed systems that will internally rewrite the dataflow graph to only use primitive nodes, just like with static dataflow mentioned previously. Essentially this is a just-in-time (JIT) compilation. Any changes to the compound node at run-time require the system to recompile that section. This method has its own problems with overhead, just in a different way. If a dynamic dataflow system is mostly static and nodes rarely change, then JIT compilation should be used. But in systems where nodes are often modified, using a wrapper for compound nodes will make the dataflow engine design much simpler. A sketch of the expansion step appears at the end of this section.

2.7.10.2 Design of Compound Nodes

A compound node is essentially just a smaller dataflow program that does one thing. In keeping with good design principles, the interior nodes should not know of anything outside of the compound node. Some method must be devised to connect the outside world to the nodes inside the compound. The interior nodes must be able to connect to the ports of the parent compound node. I strive for consistency in my designs. One rule I always have is that an arc may only be connected from an output port to an input port. But what is a port on a compound node? It is both an input and output. A compound node’s output port acts like an input (from the interior nodes’ viewpoint) and an output (from the viewpoint of exterior nodes). I find that using a “terminal node” solves this inconsistency. It is simply a node that passes its input to its output unchanged. In the case of an output port on a compound node,


interior nodes connect to the inputs of the terminal and exterior nodes connect to the terminal’s output. The compound’s ports are the exterior facing ports of the terminal node. Sometimes I find a need to disable an entire node at run-time. By changing the terminal node to be a “gate node” (a node that either passes or blocks tokens passing through it based on an input control token) I can block all tokens from entering the compound node, effectively disabling it.
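As promised above, here is a sketch of the JIT-style expansion of compound nodes into primitives. The tree representation is illustrative and ignores the arc rewiring through terminal nodes that a real engine would also have to do:

package main

import "fmt"

// A node is either primitive or a compound of other nodes.
type Node struct {
    Name     string
    Children []*Node // non-empty for compound nodes
}

// flatten recursively expands compound nodes until only primitive
// nodes remain, as a static compiler (or a JIT recompile step) would.
func flatten(n *Node) []*Node {
    if len(n.Children) == 0 {
        return []*Node{n} // primitive: keep as-is
    }
    var prims []*Node
    for _, c := range n.Children {
        prims = append(prims, flatten(c)...)
    }
    return prims
}

func main() {
    prog := &Node{Name: "main", Children: []*Node{
        {Name: "add"},
        {Name: "scale", Children: []*Node{{Name: "mul"}, {Name: "add"}}},
    }}
    for _, p := range flatten(prog) {
        fmt.Println(p.Name) // add, mul, add
    }
}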

2.7.11 Arc Capacity > 1

There are various reasons an arc may need to hold more than one token at a time. In asynchronous systems, one node may run twice as fast as another. Its output tokens would pile up at the input to the slower node. Multi-rate dataflow systems (section Multi-Rate Token Production and Consumption) are designed to emit multiple tokens upon each firing and require arc capacity > 1. Some dataflow programs that deadlock with an arc capacity of 1 will run fine when arcs are allowed to buffer more tokens. This is especially true with asynchronous dataflow because of varying fire rules.

A deadlocked program when arc capacity is 1

Systems may have variable arc capacities or fixed (and larger than one) capacities. The most flexible dynamic, asynchronous dataflow systems also let the developer set capacities per arc.


Unbounded arc capacities can be emulated by using a large block of memory to store tokens and throwing an exception when you run out of storage. In these systems developers need to verify that their dataflow program can run in the allotted amount of memory.

2.7.12 Arc Joins and/or Splits

Splitting is when an arc has one input and two or more outputs. Tokens are duplicated and a copy is sent along each path. This is easy with immutable data but mutable data requires you to make a deep copy of the data.

An arc join and split

Joins are when an arc has two or more inputs and a single output. Their effects on tokens are not as obvious as splits. It depends on how you want to define joining two tokens together. In asynchronous systems joining two tokens typically means merging them, in time, onto the output of the arc. Token A arrives at time 1 and leaves at time 1. Token B comes in at time 2 and leaves at time 2… When two tokens arrive at the exact same time, one is randomly picked to pass first. Some implementations define a join to mean: combine two tokens using an operation that produces a new token. For example, in systems that use only numeric data a join operation could be addition. Some implementations do not allow splits and/or joins. Flow-Based Programming disallows splits and most synchronous dataflow systems disallow joins.


Both joins and splits can also be emulated with nodes. If you can define a join or split with a node using the rules of your system, then offering joins and/or splits in your implementation could be as easy as just defining two new nodes. Representing them as nodes or special arcs is up to you.

2.7.13 Multi-Rate Token Production and Consumption

Multi-rate systems allow nodes to send and receive one or more tokens per activation. The number of tokens can be fixed (as in multi-rate, synchronous dataflow) or variable (as in asynchronous dataflow). Increasing the number of tokens produced and consumed per node activation is an easy way to increase the performance of a dataflow system. It follows that arcs must also have capacities larger than one.

2.8 Common Dataflow Nodes

2.8.1 Switch Node/ Choice Node

A switch or choice node


A switch or choice node has two or more inputs, one control input and a single output port. This node will route one of the inputs to the output depending on the token given to the control port.

2.8.2 Merge Node/ Correlate Node/ Join Node

A merge, correlate or join node

A merge node (also called a correlate node or join node) has two or more inputs and one output port. It will merge the incoming tokens onto the output port. The actual method used to merge the tokens is implementation defined. This node is the same as an arc join (see Arc Joins and Splits).

2.8.3 Distribute Node/ Splitter Node

A distribute or splitter node


A distribute or splitter node has a single input port and multiple output ports. It takes the incoming tokens and distributes them to the outputs. The method used to distribute the tokens is determined by the implementation. It may be as simple as just making a copy of the input and sending one to all outputs. Another option is for the node to choose one output to send the token through. This type of node is very similar to an arc split (see Arc Joins and Splits).

2.8.4 Gate Node

A gate node

A gate node has one input, one output and one control input. Depending on the input to the control port, this node will either pass or block the token traveling from the input port to the output port. Often it is simply the presence or absence of the control token (regardless of data value) that determines if the node will pass or block the input.

2.8.5 Terminal Node

A terminal node


A terminal node has one input and one output and allows the tokens to flow through unchanged. It may also have multiple inputs and outputs with one input port passing tokens to exactly one output port. This can also just be thought of as packaging multiple single input/output terminal nodes together into a compound node.

2.8.6 Source Node

A source node

A source node has no inputs and one or more outputs. It is a generator or source of tokens. It has special firing rules due to the lack of any input port. See the section called Fire Patterns for an explanation.

2.8.7 Sink Node

A sink node

A sink node has one or more inputs and no outputs. Tokens sink into it like a black hole, never to emerge again. Its typical use is to pass data from the dataflow program to locations external to the dataflow engine. For example a sink node can be used to display token data on the screen or to pass data to your C++ program.


2.9 Miscellaneous Topics

2.9.1 Granularity

Parallelism in dataflow is measured by the “granularity” of the system. Small granularity is when the nodes are small, simple, primitive operations (like addition). Large granularity is when nodes are more like subroutines or a whole application… large processes. There are no concrete rules as to what large, medium and small granularity mean. Larger granularity systems tend to use asynchronous activation because their processes are large and they use dataflow more like message passing. Small granularity systems, like electronic circuits, tend to use synchronous activation because of the higher rate of data transfer between nodes as compared to large granularity systems.

2.9.2 When is it Done?

A common misconception about dataflow is that programs should terminate. While that is true for traditional languages it is often the wrong way to think about dataflow. Dataflow programs should run until you explicitly stop them. The result (or answer) is valid only for that moment of time and could change in the next moment. Dataflow operates on continuous streams while most common languages operate on single values. Let’s take the mathematical sin function. Given a single value, we can calculate a single answer… sin(3.14) ≈ 0.0016. But if instead we supply the sin function a stream of values, it should produce a stream of answers, each one being the solution for that moment in time. That seems obvious, no? We would expect to get a stream of answers with a stream of inputs. But when we add loops and cycles


to a dataflow program, even a single input value could create a stream of output values. This is because the output of a node is fed back to an earlier node, creating a new input that has to be processed to create new outputs that are again fed back to start all over again. If you need a dataflow program to compute a single answer then, in general, you should avoid cycles in the graph and use the Pipeline Dataflow model. A more error-prone method, when cycles are required, is to add a node that will stop the tokens from looping back based on some criteria. In most engineering professions, devices are expected to run continuously for long periods of time. The best devices are those that never stop working. It is only under the influence of Von Neumann concepts that developers put so much emphasis on program termination.
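A sketch of the streaming view of sin, in Go, using a channel as the stream; mapStream is a hypothetical helper of my own, not a standard library function:

package main

import (
    "fmt"
    "math"
)

// mapStream applies f to every token on the input stream,
// producing a stream of answers rather than a single value.
func mapStream(in <-chan float64, f func(float64) float64) <-chan float64 {
    out := make(chan float64)
    go func() {
        defer close(out)
        for x := range in {
            out <- f(x)
        }
    }()
    return out
}

func main() {
    in := make(chan float64)
    out := mapStream(in, math.Sin)
    go func() {
        for _, x := range []float64{0, 1.57, 3.14} {
            in <- x
        }
        close(in)
    }()
    for y := range out {
        fmt.Printf("%.4f\n", y) // one answer per input: 0.0000, 1.0000, 0.0016
    }
}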

3 Actor Model

The Actor Model is a very dynamic form of dataflow where nodes and arcs can come and go based on runtime demands. It is a concurrent model of computation developed by Carl Hewitt that is also similar to Object Oriented Programming (OOP). The Actor Model is actually closer to the original definition of OOP than most current OO languages. Alan Kay, the creator of OOP, commented on a blog post… “Many of Carl Hewitt’s Actors ideas… were more in the spirit of OOP than the subsequent Smalltalks.”¹ Mr. Kay defines OOP to be… ”…only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.”²

3.1 Summary of the Actor Model

The Actor Model uses message passing to send data directly to a single actor asynchronously. Every actor has a unique address to determine which actor should receive what message. A mailbox (an ordered queue) stores received messages in the actor.

¹“Moti asks: Objects Never? Well, Hardly Ever!” http://computinged.wordpress.com/2010/09/11/moti-asks-objects-never-well-hardly-ever/
²“Dr. Alan Kay on the Meaning of Object-Oriented Programming” http://www.purl.org/stefan_ram/pub/doc_kay_oop_en


Messages can be taken from the mailbox in any order. In most implementations, actors have a message processing procedure that will pattern match on the messages, similar to a case or switch statement, and pass the messages to the appropriate methods of the actor. If the message is unknown, then it is either silently discarded or creates an error, depending on the implementation. Actors can only send messages to other actors that they know about (i.e. actors they have addresses to). It could be an address that was sent in a message or an address of an actor it created. Message delivery is not guaranteed. Messages are delivered at most once, and possibly never. To ensure that a message reaches its destination you have to use an acknowledgment protocol between the two actors. Like all types of dataflow, receiving a message triggers the activation of an actor. Execution happens concurrently. Actors are referentially transparent (functional) per invocation. They can change their internal state but it does not take effect until the next activation. Essentially they handle state like we talked about in section Functional or Stateful Nodes. They pass their state out and back to themselves so that the next activation uses the updated state. Local data is encapsulated inside actors. The only way to communicate with them is through messages. Internal data cannot leak out unless it is specifically sent to other actors.
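A goroutine with a channel mailbox comes close to the behavior described; a rough sketch (no remote addresses, supervision, or delivery guarantees; just the mailbox, pattern matching and encapsulated state):

package main

import (
    "fmt"
    "time"
)

// Message kinds; the actor pattern-matches on the concrete type,
// much like a case/switch over incoming messages.
type Deposit struct{ Amount int }
type PrintBalance struct{}

// spawn starts an actor and returns its "address": the only way
// to reach it is by sending messages to this channel.
func spawn() chan<- interface{} {
    mailbox := make(chan interface{}, 16) // the ordered message queue
    go func() {
        balance := 0 // local state, invisible to other actors
        for msg := range mailbox {
            switch m := msg.(type) {
            case Deposit:
                balance += m.Amount
            case PrintBalance:
                fmt.Println("balance:", balance)
            default:
                // unknown message: silently discarded
            }
        }
    }()
    return mailbox
}

func main() {
    account := spawn()
    account <- Deposit{Amount: 50}
    account <- Deposit{Amount: 25}
    account <- PrintBalance{} // prints: balance: 75
    time.Sleep(50 * time.Millisecond) // crude wait so the demo's output appears
}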

3.2 Comparison to Object Oriented Programming

The Actor Model is similar to OOP except the methods do not return a value. Instead they send messages to return values. A message is a form of indirect call. An actor never directly calls


methods in other actors. In fact, they couldn’t even if they wanted to. By removing direct calls, actors are able to work concurrently with other actors, essentially becoming asynchronous objects. Like objects, actors retain local state and can be created.

3.3 Relation to Dataflow

The Actor Model is different from most other dataflow models we have discussed so far, where data is transmitted over arcs. We can say that the Actor Model is just dynamic dataflow where the arcs have a capacity of one and are created immediately before sending the data and then disconnected immediately afterwards. It is an asynchronous dataflow model that has functional nodes but passes state from one activation to the next to simulate stateful nodes. Since everything is an actor, compound nodes are easily created by defining actors in terms of other actors.

3.4 Dataflow Features

• Asynchronous activations
• Dynamic program structure
• Pushed data
• Arc joins and splits are not allowed… technically arcs do not exist in this model
• Data (messages) are immutable
• Functional nodes, but state is passed from one activation to the next to simulate stateful nodes
• Multiple inputs are simulated by allowing different messages to be received on one input
• Multiple outputs are simulated by actors directly sending data to multiple receivers. Other actors are not allowed to explicitly connect to these outputs
• Since there is only one input, actors don’t need the concept of fire patterns. They activate whenever there are messages available.
• Cycles are allowed
• Recursion poses no problems since every actor is distinct from all others
• Compound nodes are allowed
• Arc capacity: While there are no arcs in the Actor Model, we can imagine arcs being dynamically created for each message that needs to be transmitted. In this case arcs would have a capacity of one.
• Multi-rate token production and consumption: There is no limit to the number of messages that an actor can send in one activation. Only one message may be received per activation.

3.5 Where is the Actor Model Used?

The Actor Model is a very dynamic and fluid dataflow model. It is beneficial anytime applications need to change due to runtime demands such as:

• Systems where you can’t determine how the nodes should be connected until runtime.
• Systems where nodes need to be created dynamically based on some runtime factor. For example creating a node for every new connection on a web server.
• Systems where services can be added or removed from an application at runtime.


3.6 Where is it Not Used?

In situations where the program is essentially static, the dynamism of the Actor Model gets in the way. For example, a fixed pipeline of transformations or circuit simulations. For some applications, the connections between nodes are as important as the nodes themselves. When you need to ensure that nodes can be freely reused simply by changing the connections to the nodes, the Actor Model is a poor fit.

4 Flow-Based Programming

Flow-Based Programming (FBP) is a version of dataflow created by J. Paul Morrison in the early 1970s. It is a field-tested implementation that has spawned many recent developments such as NoFlo, Pypes, MicroFlow, DSPatch and many others. This is just a short summary of the method; I suggest you read Mr. Morrison’s book, “Flow-Based Programming”, for the whole picture.

4.1 Summary of Flow-Based Programming

FBP is an asynchronous dataflow model that allows for multi-port and compound nodes. In FBP tokens are called Information Packets (IP). An IP can only have a single owner. IPs are explicitly destroyed or passed on; there is no garbage collection of IPs. Unlike most other dataflow implementations, FBP allows IPs to be combined into lists and trees that are then sent as if they were an IP in their own right. There are special IPs called “brackets” that can delineate substreams of IPs. This makes it much easier to keep related data together while being transmitted through the program. For example, you could create a list of IPs that contains all the lines in a text file. Each list is the text from a different file but they do not get intertwined because they are contained within separate lists. A unique feature is the Initial Information Packet (IIP). These act like regular IPs except they are sent to the node first to allow for configuration. A node can be in one of a few states:


• Not yet initiated: The initial state of all nodes
• Terminated: Signifies that the node will never be activated again
• Active: The node is currently activated. This may be further subdivided into active-normal and active-suspended (active but waiting on an external event)
• Inactive: The node is not activated but it is not terminated

A node is terminated when all of its upstream nodes are also terminated and it is impossible for the node to receive any more IPs. Nodes may also choose to terminate themselves. When all the nodes in a program are terminated, the program is done. FBP gives nodes more control over their execution than most other dataflow models. A node may continue to loop and read in multiple IPs with just a single activation or it could read a single IP and stop. It is up to a node to choose when to end. Nodes are activated using the “any” firing rule. While a node is in the inactive state, a token arriving at any port will cause it to be activated. Once activated, a node can only wait on one event at a time, such as a timer, an I/O request, or a suspended send or receive. Every FBP node has an “automatic input port” and an “automatic output port” to coordinate activations with other nodes. A node is activated when an IP is received on an automatic input port. When a node is terminated, and if the automatic output port is connected, it will close the port to alert any other node that has a connection to the port. FBP makes use of both visual and textual representations. The textual form varies from one implementation to the next but is typically defined in the form of a domain specific language for graphs.

4.2 Dataflow Features

• Asynchronous activations
• Static program structure
• Pushed data
• Arc joins are allowed
• Arc splits are not allowed
• Immutable data is encouraged
• Stateful nodes
• Multiple inputs and outputs per node are allowed
• Uses the “any” fire pattern: Only one input port may trigger activation at a time
• Cycles are allowed
• Recursion is not supported
• Compound nodes are allowed
• Arc capacity: Multiple tokens (IPs) can exist on an arc at any one time. The maximum amount is fixed and implementation specific.
• Multi-rate token production and consumption is allowed

4.3 Benefits of Flow-Based Programming

FBP is one of the few field-tested versions of dataflow. A program written using FBP has been in continuous use at a Canadian bank since 1975. There are many implementations, not to mention Mr. Morrison’s own, that have already worked out the kinks. So finding people to answer questions you have about FBP should be easy. J. Paul Morrison has written an accessible book on the topic that will give you a huge head start in implementing and understanding FBP systems. FBP uses a pragmatic approach to dataflow. Instead of relying on theoretical concepts to make dataflow work, it gives power to the developer with its many ways to control nodes. The ability to group


IPs into structures is a powerful concept that is rarely found in dataflow systems. Unless FBP lacks some feature you need in your own system, you should seriously consider it due to the amount of practical, real world experience that the community possesses.

5 Communicating Sequential Processes

Communicating Sequential Processes (CSP) is a simple language to describe concurrent processes communicating over channels using messages. It was developed by Sir Charles Antony Richard Hoare in 1978¹.

5.1 Summary of CSP

CSP uses a textual representation to describe systems. For example:

• keyboard?c Read a value from the process keyboard and call it c
• console!c Send the value of c to the console process
• keyboard?c || mouse?pos In parallel, read from keyboard and read from mouse
• x := y + 1 Assign x the value of y + 1
• [x>0 -> sign:=1 || x<0 -> sign:=-1] If x > 0 then sign := 1 or if x < 0 then sign := -1

There is no concept of a port in CSP. All processes receive messages through a single, unnamed input (as do most message passing

¹Hoare, C. A. R. (1978). “Communicating sequential processes” Communications of the ACM, 21(8), 666-677.


systems). Arcs are called “channels.” They are synchronous (the producer cannot send until the consumer receives) and transmit one value at a time. The single most common artifact found in CSP-related implementations is the channel. Most implementations of CSP solely focus on channels and replace processes with their own versions. I have found channels to be one of the most powerful primitives for building dataflow systems. Their semantics are simple and they already exist in most programming languages.

5.2 Message Passing Channels

Channels come in a few different flavors. Implementations have tweaked Mr. Hoare’s original design but they all conform to the same general principle: channels decouple sender from receiver. In CSP, channels are unbuffered, synchronous communication paths. That means that the sender will block until the receiver reads from the channel. Many implementations change them to be first-in-first-out, asynchronous, fixed-size buffers. Writing to the channel only blocks if it is full and reads will only block until data is available. Once a value is read it is removed from the channel. Another read will get the next value from the channel or block if there is none. One thread writes to the channel and the other reads from it. A channel is equivalent to an arc with no splits. Channel buffer size is usually set at creation time and cannot be changed, although this is implementation specific. There is no reason that the buffer size can’t be modified after creation. Multiple senders can post data to the channel to emulate a time-based arc join like we covered earlier. To have multiple readers all get the same token, you will have to construct a node with one input


and multiple outputs that copies the data and sends it to all the output ports. Since a channel is implemented as a class, datatype, struct or whatever, it can be passed around and stored in collections. With buffered channels we have to watch that the output end is actually connected to a receiver. With no receiver the messages will pile up until the buffer is full and will keep the sender blocked forever. Buffered channels create “back pressure.” Once a channel is full, upstream processes cannot send any more data. This is important to maintain an acceptable resource usage for the program. Unbounded channels are strongly discouraged. This only leads to bad things happening, probably when the system is already in use at a customer’s site. A channel with a buffer size of one can be used for coordination between two processes. The existence of data on the channel can act as a signal or flag that some event occurred.
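Go’s channels are the buffered variant described here; a short demonstration of the blocking-when-full behavior (back pressure):

package main

import "fmt"

func main() {
    // A fixed-size, asynchronous channel: writes block only when
    // the buffer is full, reads block only when it is empty.
    ch := make(chan int, 2)
    ch <- 1
    ch <- 2
    // A third unguarded send would block: that is back pressure.
    select {
    case ch <- 3:
        fmt.Println("sent")
    default:
        fmt.Println("channel full; sender must wait") // printed
    }
    fmt.Println(<-ch, <-ch) // 1 2: reads remove values in FIFO order
}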

5.3 Channels as a Concurrency Primitive

Channels are so common because they can be used to build other standard concurrent operations. A “future” is a write-once variable that may not currently contain a value but will at some time in the future. A “promise” is the process that can assign a value to a future. A future is just a channel with a buffer size of one and a promise is a process that writes to that channel once. A “semaphore” is a device to allow a fixed number of processes to access a shared resource. To implement this with a channel, create a channel with a fixed buffer size (the number of processes that can access the resource at once). To request access to the resource, the


process should write a value to the channel. Any value will do because we are only concerned with the number of values on the channel. If the channel is full, then the write attempt will block, meaning the maximum number of processes are already using the resource. Once the process becomes unblocked, it can access the resource. When done, the process must read a value from the channel to make space for other processes. Another way to implement semaphores, when you only have unbounded channels, is to initially place a certain number of values on the channel – the number of concurrent processes allowed. A process requesting access will attempt to read from the channel. Once completed, the same process must write a value to the channel. A “mutex” or a “lock” allows one process to access a resource at a time. We can use the same method as we did for semaphores to implement mutexes. A “generator” is a function that returns a sequence of values, one for each call to the generator. Typically the operation performed by the generator is expensive and/or it produces many results. If we had to produce all the values before we processed them, it could cause our application to momentarily freeze. A generator lets us produce one value at a time so we can process it at the same time the generator is calculating the next value. As you may have already guessed, a channel with a buffer size of one between the generator function and the processing function will perform the same task. We can come close to an actor style of programming by creating classes that have a single public method with one argument of type channel. The sender would create a new channel and then call the receiver’s method, passing that channel. The sender would then send messages to the receiver over that channel but must somehow communicate when it is done sending. In some channel implementations, the closing of a channel is a signal that the receiver can use to stop reading from the channel. Implementations


without the closing signal could just send a special message to signal when it is done.
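Two of the constructions above, sketched in Go: a future as a one-slot channel filled by a promise, and a semaphore as a fixed-size buffer where sending acquires and receiving releases. The sleeps are only to make the demo observable:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Future: a channel with a buffer size of one. The promise is
    // the goroutine that writes to it exactly once.
    future := make(chan int, 1)
    go func() { future <- 6 * 7 }() // the promise

    // Semaphore for at most 2 concurrent users: writing acquires,
    // reading releases, and the buffer size is the limit.
    sem := make(chan struct{}, 2)
    for i := 0; i < 4; i++ {
        go func(id int) {
            sem <- struct{}{}        // blocks if 2 are already inside
            defer func() { <-sem }() // release
            fmt.Println("worker", id, "in critical section")
            time.Sleep(10 * time.Millisecond)
        }(i)
    }

    fmt.Println("future:", <-future) // blocks until the promise fires
    time.Sleep(100 * time.Millisecond)
}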

5.4 Channel Implementations

The theory presented with CSP is probably the most widely used dataflow model. Even though most implementations only take the concept of channels and leave the rest, it points to the power of decoupling processes by connecting them with a channel. The following are only some of the implementations of channel-like objects that I could find.

• Python:
  – Stackless Python has an implementation of channels that follows the CSP style of channels (a sender is blocked until the receiver reads)
• Go:
  – Go has channels as part of its core language
• Haskell:
  – The Chan datatype has an unbounded buffer size although there are versions available with a fixed buffer size
• C:
  – libthread (initially from the Plan9 OS, now available in Unix-like OSs)
• .Net Platform:
  – BlockingCollection
• Java (JVM):
  – BlockingQueue Interface. While this can be used with any JVM based language, below I have also added libraries that are designed to be used with specific JVM languages.
• Groovy:


  – GPars library (use the SyncDataflowQueue and SyncDataflowBroadcast classes)
• Clojure:
  – core.async

Even if you happen to be working in a language that does not have an implementation of a channel, all you really need is a queue and a mutex to implement one yourself.
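To illustrate the point, here is a bounded blocking channel built from nothing but a queue, a mutex and a condition variable (sketched in Go, which of course already has real channels):

package main

import (
    "fmt"
    "sync"
)

// Chan is a bounded, blocking FIFO channel built from a slice
// queue, a mutex and a condition variable.
type Chan struct {
    mu    sync.Mutex
    cond  *sync.Cond
    queue []interface{}
    cap   int
}

func NewChan(capacity int) *Chan {
    c := &Chan{cap: capacity}
    c.cond = sync.NewCond(&c.mu)
    return c
}

func (c *Chan) Send(v interface{}) {
    c.mu.Lock()
    defer c.mu.Unlock()
    for len(c.queue) == c.cap { // block while full
        c.cond.Wait()
    }
    c.queue = append(c.queue, v)
    c.cond.Broadcast() // wake a blocked receiver
}

func (c *Chan) Recv() interface{} {
    c.mu.Lock()
    defer c.mu.Unlock()
    for len(c.queue) == 0 { // block while empty
        c.cond.Wait()
    }
    v := c.queue[0]
    c.queue = c.queue[1:]
    c.cond.Broadcast() // wake a blocked sender
    return v
}

func main() {
    ch := NewChan(1)
    go ch.Send("hello")
    fmt.Println(ch.Recv())
}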

6 Implicit Dataflow

Dataflow can be thought of as a way to program in almost any language. It doesn’t take an explicit dataflow engine to implement dataflow concepts. Any time you chain actions together by sending data from one action to the next you are programming with dataflow. There are a variety of ways to send data. The most common method is to use message passing channels but it is equally valid to use any other medium. The main idea is to decouple sender from receiver. Compare this style to the traditional call/return style of programming. The sender (or the calling function) knows who it is sending data (arguments) to and knows what they will send back (the return value). The sender and receiver are tightly coupled. Instead of sending the arguments directly to the function, what if we just placed them in a variable that both actions could access? The caller now could place its output in that variable and end. The receiver would grab the data from the variable and process it just as if it had been passed to it in the function arguments. Don’t try this at home because we have the age-old problem of concurrent access to a shared resource. This is just a simplified example of using dataflow concepts without a full blown dataflow engine. The key thing to take from this is that the medium of communication is not as important as the concept of decoupling sender from receiver. If simply using a shared variable works for your situation then go for it. Some possible communication paths are:

• Unix pipes
• Sockets
• Function
• Shared File
• Shared Memory
• Manager Controlled Communication

Some of these methods are difficult when used in a multithreaded environment. Dataflow style does not require multiple threads or processes. It depends on what you want. If you just need actions to be easily composable, then forget about the parallelism.

6.1 Unix Pipes

Unix pipes are probably the most common dataflow implementation in use today. Creating small programs that communicate using strings allows you to construct larger tasks by writing shell scripts.

6.2 Sockets

Sockets can be used to connect multiple actions together on one machine or over a network. The nature of sockets almost requires multiple processes or threads but the actions can be written as though they were single threaded.

6.3 Function

A function that takes a single argument and just returns that same argument unmodified can be used to decouple two actions. The function is acting like a pipe between two actions. This style typically has a top-level module that is only used to define the connections between actions. The actual actions are all coded in


the same style: a function that takes a single argument and returns a single argument. In some module you would define all the pipelines needed. This module then is the single file you have to edit in order to change how the pipelines operate. This method is much more natural in programming languages that allow you to define or redefine infix operators (e.g. +, -, ->). Without that ability, you end up with too many parentheses. Instead of…

action1 -> action2 -> action3

you have…

action3(action2(action1()))

The order of operation is reversed from the way you write it in code.
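One workaround, when infix operators can’t be defined, is a small variadic compose helper that restores the left-to-right reading order; a sketch in Go:

package main

import (
    "fmt"
    "strings"
)

type action func(string) string

// pipeline returns a single action that feeds its input through
// each action in the order written, restoring left-to-right reading.
func pipeline(actions ...action) action {
    return func(s string) string {
        for _, a := range actions {
            s = a(s)
        }
        return s
    }
}

func main() {
    trim := strings.TrimSpace
    upper := strings.ToUpper
    exclaim := func(s string) string { return s + "!" }

    run := pipeline(trim, upper, exclaim) // reads like trim -> upper -> exclaim
    fmt.Println(run("  hello "))          // HELLO!
}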

6.4 Manager Controlled Communication

Shared resources like memory or files are most often combined with some sort of activation manager. In a single threaded environment, we could create a shared storage location that all the actions know about. Upon activation the action would read from that location to get the input arguments and write to it to return values. The activation manager could be as simple as a procedure that defines the order of actions to be called. For example:


manager(){
    action1();
    action2();
    action3();
}

Then to change how the program operates, you would just need to modify the manager procedure…

manager(){
    action1();
    new_action();
    action2();
    action3();
}

6.5 Message Passing Channels

Unless you have a good reason not to, you should always try to use channels to communicate between multiple actions. There are times when channels do not exist in the language you are using or there are other factors that push you to use one of the previous methods. See the chapter on Communicating Sequential Processes for more information on channels.

6.6 Feature Creep

Any one of the methods I present here is useful when you only need the most basic characteristics of Pipeline Dataflow. What I often find is that once you start using dataflow concepts you always want more… nodes that have multiple inputs… nodes that run in parallel… The system starts to expand to include the new features.


As you can see from the few implementations of dataflow that I present in this book, they can be relatively small and still offer 90% of the features you want. Instead of trying to evolve a dataflow engine as the need arises, why not just start out with an existing implementation? I have a few libraries of my own design that I have included in applications. They were coded years ago but now there are many more open source libraries available for you to choose from. Determine what dataflow features you need and see if there is an existing library. If not, implementing one yourself is not hard and it can be reused for years to come.

7 Asynchronous Dataflow Implementation

Asynchronous dataflow is characterized by nodes that fire whenever one of their fire rules is satisfied. Implementations commonly have a few standard components: an activation unit that determines what nodes can fire, an execution unit that controls how the nodes are executed, a token transmission unit that moves tokens from one node to another and, finally, storage for nodes and tokens. This is just the bird’s eye view of asynchronous dataflow. There are many different ways to design the system. In the rest of this chapter we will look at the design of a typical asynchronous implementation that you can use as a reference to understand the details of asynchronous dataflow systems.


7.1 Architecture Overview

Top Level Architecture of the Example Implementation

This figure shows the architecture of our asynchronous dataflow system. It uses a simple pipeline dataflow architecture to define a system that runs other, dynamic, asynchronous dataflow programs. It is modeled after a dataflow processor from the early 1980s, the Manchester Prototype Dataflow Computer. Most asynchronous dataflow processors use some version of this basic architecture. The Manchester Processor’s design was constrained by the physical nature of computer processors. As software doesn’t share those burdens, I have changed the design to be more general purpose. Using the features of dataflow we examined earlier in the book, this implementation can be described as:

• Dynamic
• Asynchronous Activations
• Multiple Inputs and Outputs
• Cycles Allowed
• Functional Nodes
• Uses Per-Node Fire Patterns
• Pushes Data
• Arc Capacity > 1
• Arcs May Join and Split
• Single Token per Arc per Activation

7.2 Implementation Walk-Through

The four main components are:

• IO: Communication with other engines and the world
• Transmit: “Moves” a token by changing its location address to the next port’s address
• Enable: Determines what nodes can fire
• Execute: Executes nodes

Tokens come into the system through the input side of the IO unit. Its job is to keep any tokens with addresses inside this engine and to pass on all other tokens. Tokens then are sent to the Transmit unit. It looks at the token’s address and compares it to a look-up table of connections in the system. If it finds that the token’s address is on an output port, then it will make a copy of the token and give it the address of the connected input port(s), effectively moving the token from an output port to an input port. If the token’s address is already on an input port, then it keeps the address the same. The tokens with the new addresses are sent from the Transmit unit to the output side of the IO unit. Its job is to send on any tokens with external addresses and to keep those with internal addresses.


Our local tokens then move to the Enable unit. It looks at the incoming tokens and compares them to a store of waiting tokens to see if any nodes can now fire due to the new token. If not, it will save the new token in the Token Store for later use. If a node can now be activated, it creates a new token called an Execute Token. It packages together all the data tokens for the inputs of the node and a way to invoke the node itself. The Execute unit receives these Execute Tokens and runs the node. Any output tokens are passed back around to the input IO unit again to start over.

7.3 Main Data Types

Besides the Fire Pattern data type below, all of these can be implemented as classes in an Object Oriented language, a struct in C or the equivalent in your language of choice. The itemized elements under each data type are the members of the type.

7.3.1 Port Address

Defines a unique location (combination of node ID and port ID) of a port within this engine.

• Node Id - Unique to engine
• Port Id - Unique to node only

7.3.2 Data Token

A container for the data value and its location.

• Value - Any data value
• Port Address - Current location (port) of token


7.3.3 Execute Token

Combines everything needed to fire a node.

• Data Tokens - A collection of all tokens on inputs of node
• Node - A means to activate the node

7.3.4 Node

A run-time node combines the Node Definition and a unique Node ID.

• Node Definition - Defines how to activate the node
• Node Id - Engine wide, unique id

7.3.5 Node Definition

A node declaration and definition. A single Node Definition may be used to define many run-time nodes that all act the same as the Node Definition – just with different Node IDs.

• Node’s activation function - Function that does the real work
• List of Ports - All the ports on this node
• Fire Patterns - A collection of Fire Patterns

7.3.6 Arc

An Arc is a connection between two ports.

• Source Port Address - Address of the output port
• Sink Port Address - Address of the input port


7.3.7 Fire Pattern

This is a union type in C/C++, a sum type in Haskell, or, in object oriented languages, a base class of Fire Pattern with one sub-class for Exists, one for Empty and so on. This could also be implemented as an enumeration. A Fire Pattern for a single port is one of the following:

• Exists - A token must exist on this input
• Empty - Input must be empty
• Don’t Care - Doesn’t matter if a token is available or not
• Always - Ignores all other patterns, and forces the whole fire pattern to match

The pattern for the whole node is simply a collection of all the Fire Patterns for that node.
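The same data types sketched as Go declarations; the names follow the descriptions above, while the concrete field types (strings for IDs, a function for activation) are illustrative choices of mine:

package main

type PortAddress struct {
    NodeID string // unique within the engine
    PortID string // unique within the node only
}

type DataToken struct {
    Value interface{} // any data value
    Addr  PortAddress // current location (port) of the token
}

type FirePattern int // one pattern code per input port

const (
    Exists FirePattern = iota
    Empty
    DontCare
    Always
)

type NodeDefinition struct {
    Activate func(inputs []DataToken) []DataToken // does the real work
    Ports    []string                             // all ports on the node
    Patterns [][]FirePattern                      // tried in order, first match wins
}

type Node struct {
    Def NodeDefinition
    ID  string // engine-wide unique id
}

type ExecuteToken struct {
    Inputs []DataToken // all tokens waiting on the node's inputs
    Node   Node        // a means to activate the node
}

type Arc struct {
    Source PortAddress // output port
    Sink   PortAddress // input port
}

func main() {}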

7.3.8 Token Store

A collection of all the Data Tokens in the system. Read and written to by the Enable Unit only. The tokens in here represent the complete state of the program.

7.3.9 Node Store

A collection of all the nodes in the system. Note: changing this at run-time allows for dynamic node redefinitions.

7.3.10 Arc Store

A collection of all the connections in the system. Note: changing this at run-time allows for dynamic graphs.


7.3.11 Dataflow Program

The Node Store and Arc Store together are everything needed to define a dataflow program. This is loaded into the engine before execution.

• Node Store - A collection of all the nodes in the program.
• Arc Store - A collection of all the arcs in the program.

7.4 Implementation Components

7.4.1 IO Unit

Input: Data Token
Local Output: Data Token
External Output: Data Token

The IO Unit is the interface to the engine. Tokens arriving at the input port with internal addresses are directed to the “local” port of the component and those with external addresses are directed to the “external” port.

7.4.2 Transmit Unit

Input: Data Token
Output: Data Token

Token movement along arcs is implemented with this unit. The Transmit Unit looks at the token’s address and compares it to a look-up table of connections in the system (the Arc Store). If it finds that the token’s address is on an output port, then it will make one copy of the token for each connected input port and give it the address of that input port. This action is equivalent to moving the token along the arc and sending a copy down each path.


The look-up table is an encoding of all the connections in the program. Changing the values in this table changes the graph at run-time.
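A sketch of the Transmit Unit’s core operation in Go, with the Arc Store reduced to a map from output port to connected input ports (an arc split produces one copy per sink):

package main

import "fmt"

type PortAddress struct{ Node, Port string }

type Token struct {
    Value interface{}
    Addr  PortAddress
}

// The Arc Store as a look-up table: output port -> input ports.
var arcs = map[PortAddress][]PortAddress{
    {Node: "n1", Port: "out"}: {{Node: "n2", Port: "in1"}, {Node: "n3", Port: "in1"}},
}

// transmit "moves" a token along its arc by rewriting its address,
// copying it once per connected input port.
func transmit(t Token) []Token {
    sinks, onOutput := arcs[t.Addr]
    if !onOutput {
        return []Token{t} // already on an input port: leave it alone
    }
    out := make([]Token, 0, len(sinks))
    for _, sink := range sinks {
        out = append(out, Token{Value: t.Value, Addr: sink})
    }
    return out
}

func main() {
    for _, t := range transmit(Token{Value: 7, Addr: PortAddress{"n1", "out"}}) {
        fmt.Printf("%v -> %+v\n", t.Value, t.Addr)
    }
}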

7.4.3 Enable Unit

Input: Data Token
Output: Execute Token

The Enable Unit looks at the incoming tokens and compares them to a store of waiting tokens. The Token Store holds all the tokens currently moving through the program. By comparing the waiting tokens with the node’s fire pattern (pulled from the Node Store), the Enable Unit can determine if a node can fire. For activated nodes, it creates and sends an Execute Token that packages together all the data tokens for the inputs of the node and a way to invoke the node itself. The tokens are removed from the Token Store and the node definition is copied from the Node Store. If the incoming token does not cause a node to activate, then the Enable Unit will save the new token in the Token Store for later use. This implementation allows for per-node firing patterns. The original Manchester Processor design had one firing pattern for every node… all inputs must have tokens waiting before the node can activate. And since all nodes in the original design only had 1 or 2 inputs, the Manchester architecture’s Enable Unit didn’t need access to the node’s definition. Due to the addition of per-node firing patterns and more than 2 inputs allowed per node, this design requires the Enable Unit to connect to the Node Store and the Token Store while the Manchester design only needed a connection to the Token Store. The Enable Unit is the only component that writes to the Token Store so no special locking is needed for multithreaded operation.


7.4.4 Execute Unit

Input: Execute Token
Output: Data Token

With the contents of the Execute Token, this unit has everything it needs to fire nodes. It takes the collection of data tokens from the Execute Token and passes it to the node definition for evaluation. In this implementation the node definition is simply a function that takes a collection of tokens and returns another (possibly empty) collection of output tokens. The node’s function only deals with what I call “local tokens.” They are just like the regular data tokens (which I refer to in this context as “global tokens”) without the Node ID field. Nodes should be isolated and not have to know anything about the outside world. The node’s ID is external to the definition of the node itself. It doesn’t matter if there are 10 nodes, all with the same definition and different node IDs, they should all operate exactly the same. What the node does know about is its Port IDs. Port IDs are unique to the node definition. The node’s function returns a collection of local tokens with addresses of output ports that exist on the node. The Execute Unit must first convert a global token to a local token. It does this by simply stripping off the node’s ID but retaining the port ID and data value. It calls the function with the locals (input tokens) and gets back a collection of locals (output tokens). The unit converts these to globals by adding the current node’s ID back to the tokens along with the port ID and data value. In the original Manchester design, nodes were defined by an op-code in the same way that assembly language instructions in a typical microprocessor are given numeric identifiers. The Execute Unit knew how to execute an op-code it received so the Execute Token only needed to include an op-code number instead of the full node definition like this implementation requires. In software


it costs the same to pass an integer as it does to pass the full node definition and makes the design more general.
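The global/local token conversion described above, sketched in Go; the node’s function sees only port IDs and values, never its own node ID:

package main

import "fmt"

type GlobalToken struct {
    NodeID string
    PortID string
    Value  interface{}
}

type LocalToken struct {
    PortID string
    Value  interface{}
}

// execute strips the node ID off the inputs, runs the node's
// function on local tokens only, then stamps the node ID back
// onto the outputs.
func execute(nodeID string, fn func([]LocalToken) []LocalToken, inputs []GlobalToken) []GlobalToken {
    locals := make([]LocalToken, len(inputs))
    for i, g := range inputs {
        locals[i] = LocalToken{PortID: g.PortID, Value: g.Value}
    }
    var out []GlobalToken
    for _, l := range fn(locals) {
        out = append(out, GlobalToken{NodeID: nodeID, PortID: l.PortID, Value: l.Value})
    }
    return out
}

func main() {
    double := func(in []LocalToken) []LocalToken {
        return []LocalToken{{PortID: "out", Value: in[0].Value.(int) * 2}}
    }
    fmt.Println(execute("node7", double, []GlobalToken{{"node7", "in", 21}}))
}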

7.5 Program Execution Example

Example Program. The number next to the port is the Port Id

We will assume that the dataflow program is already loaded into the engine. Node #1 is the first node to activate. When it is done, the node pushes a new token to its output port (#1) with the address of (Node #1, Port #1). This states that the token is currently on the output end of the arc between nodes #1 and #2. Then Node #2 fires since it has a token on its input. For that sequence to happen, the engine has to do the following… Immediately after starting the engine with the example dataflow program, there are no tokens in the program. The Enable Unit normally first looks at its incoming tokens to see if a node can be executed due to the new token. In this case, there are no input tokens to the Enable Unit but it finds that Node #1 is a source node, which means it can fire all the time. So the Enable Unit creates a new Execute Token with Node #1’s definition as its contents and sends it to the Execute Unit. The Execute Unit sees that it has a new Execute Token waiting so it consumes the token and fires the Node Definition found


in the Execute Token. As mentioned above, activating Node #1 pushes a new token to its output. The Execute Unit gets the token produced by Node #1 and sends it to the input IO Unit, which immediately sends it to the Transmit Unit. Remember that the token’s address is (Node #1, Port #1)… The Transmit Unit looks in the collection of all arcs in the system to find where the token should be sent. Node #1’s output arc connects to Node #2’s input port #1. So the Transmit Unit changes the address of the token to (Node #2, Port #1), saying that the token has been moved to the input port of Node #2. The token with the updated address is passed to the output IO Unit. Since the address of the token is local to this engine, it sends the token to the Enable Unit. Now the Enable Unit will look at the incoming token and see that Node #2 can now fire because a token is waiting on its input. It takes the token and places it into an Execute Token along with the Node Definition for Node #2. The Execute Unit then activates Node #2 just like it did before with Node #1 and the cycle continues.

7.6 Preparing a Program for Execution

This example implementation does not include any means to convert a human friendly format (program text) to the engine’s representation. Besides parsing and validating the program text, which is the same for every programming language, the only other thing required is to generate a unique ID for every node in the program. This is the run-time identifier that uniquely identifies every node in the engine. The best choice is to use Universally Unique IDs (UUIDs, also called GUIDs). Second best is to use unique integers. UUIDs take up 128 bits each so space could be an issue for some designs.


UUIDs are best because they only need to be defined once and allow us to change the graph at run-time without worrying about generating duplicate integer IDs. They also can be used to refer to a specific version of a node. If you generate a new UUID anytime a breaking change is made to a node, all existing code referring to the old UUID will continue to work as expected. Choose wisely because the type of ID impacts the maximum number of nodes you can have in the engine at any one time and thus restricts maximum program size. The end result of the preparation phase is a filled in Node Store and Arc Store that is passed to the engine to execute.

7.7 Multiple Dataflow Engines

The Manchester architecture was designed to be easy to combine with other processors. Simply connect the IO units of a few of them to a bus so they can communicate. The tokens will be sent anywhere the address specifies. This design does not handle multi-engine configurations as well as it could. The addition of an engine ID to the Port Address type would allow you to easily move nodes around to other engines to balance the load. It only takes changing the addresses on the arcs and a few other minor changes. With multiple engines, some sort of overseer is necessary to balance the load and move code from one engine to another. This implementation was designed as an example of an asynchronous dataflow engine so no effort was spent on external components to make it easy to combine with other engines.

8 Synchronous Dataflow Implementation

When people say “Synchronous Dataflow” they typically are referring to a specific set of features and not just to synchronous activations. In the world of dataflow there are at least two meanings for every word. It’s OK to use synchronous activations with a different set of features than I will describe, it just wouldn’t be what most people call Synchronous Dataflow. Even within the Synchronous Dataflow realm there are variations to the feature set. This overlapping terminology is what makes learning about dataflow so difficult and is the main reason I started writing this book. Synchronous Dataflow can be defined as:

• Synchronous activations
• Multiple input/output ports
• Push data (with a caveat)
• Immutable data
• Static
• Functional nodes
• Cycles are allowed*
• Compound nodes are allowed at design-time
• Arc capacity > 1*
• Arc joins are not allowed
• Arc splits are allowed
• Multi-rate token production and consumption is allowed*

Items marked with “*” were not allowed in the original definition of Synchronous Dataflow but have since been incorporated.


The defining characteristic of Synchronous Dataflow is that nodes are activated on a fixed, pre-calculated, repeating schedule and every port on a node sends and receives a fixed number of tokens upon every activation. Once a schedule is found, you can guarantee that the program will not deadlock. But the downside is that Synchronous Dataflow is not as flexible as asynchronous dataflow and not all programs can be defined in Synchronous Dataflow terms. The process of calculating a schedule tells you…

• If the program will deadlock
• The maximum size of every arc
• The rate of activation of every node

A schedule defines one “period”. In a period every node must fire at least once and possibly many times. Because the period is repeated, if a node did not fire in the period it would never fire. The state of a Synchronous Dataflow program is defined by the number of tokens on every arc. By the end of a period, the state of the program must be identical to the state at the start of the period. This allows us to repeatedly execute the schedule while ensuring the program will never get into a state that can deadlock. A period is subdivided into “instants”. An instant is one set of node activations. A node may fire in more than one instant but must fire in at least one. The process of finding a schedule is called compiling and is similar to compiling a traditional programming language like C++. It is performed once and results in an application that can be run many times without recompiling. The runtime requirements for a Synchronous Dataflow engine are very minimal since most of the hard work has been done already. It must have space for and keep track of tokens, it must contain


the node definitions to execute and it needs to have some process to activate the nodes based on the schedule. The rest of this chapter will deal with one method of compilation and not the runtime system because of its simplicity.
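Although this chapter skips the runtime, it really is small; here is a sketch of the core loop that replays a precomputed schedule, with node activation reduced to a plain function call and the period bounded only so the demo terminates:

package main

import "fmt"

func main() {
    // Node activations, reduced to function calls for the sketch.
    fire := map[string]func(){
        "src":  func() { fmt.Println("src fires") },
        "work": func() { fmt.Println("work fires") },
        "sink": func() { fmt.Println("sink fires") },
    }
    // One period = a list of instants; each instant lists the
    // nodes that fire together (they could run in parallel).
    period := [][]string{
        {"src"},
        {"src", "work"},
        {"sink"},
    }
    // The engine simply replays the period, forever in practice.
    for cycle := 0; cycle < 2; cycle++ {
        for _, instant := range period {
            for _, name := range instant {
                fire[name]()
            }
        }
    }
}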

8.1 Compilation

My method of creating a schedule is to simulate the execution of the program being compiled to find a repeating pattern in the state of the program. Once we have that pattern, all the scheduler has to do is activate the nodes in that same order, again and again and again. Since we know how many tokens are received and sent on every port in the program, we can step through the execution, keeping a tally of the number of tokens on every arc for every step. We’ll use the same method as asynchronous dataflow to determine which nodes can fire at each point in the simulation. By recording our states we can analyze them later to find deadlocks and errors in our program design. If the program is found to execute without deadlocks, we can then look for a repeating pattern in the states for the scheduler. Part of the dataflow programmer’s job is to understand the conditions needed to get the system running. By simulating the dataflow program we can see the reason for a deadlock. Often the fix is as simple as putting an initial token on an arc to get it running. This is not something that a compiler can do well so I view it as just another error that the programmer has to fix. While there are many mathematical ways to create a schedule, from my experience it is better to make the compiler as simple as possible and expect the human to understand what needs to be done to build a schedule. Unfortunately, the amount of practical experience in compiling dataflow programs, compared to popular imperative languages like C++, is rather small. It is better to remove all “magic”

Synchronous Dataflow Implementation

78

from the compiler and give that power to the programmer who may be more up-to-date on dataflow than a 5 year old compiler. I view a compiler for Synchronous Dataflow as more of a set of tools for the developer than a single device to create an executable. Overall I follow the method that Edward A. Lee and David G. Messerschmitt developed¹ but I trade some mathematical wizardry for a more obvious approach needing more human tweaking.

8.2 How to Build a Schedule

The main steps to find a repeating schedule of node activations (if any) for our example program are:

1. Determine if a schedule exists
2. Simulate and record the states for some number of instants
3. Analyze the state history for deadlocks or other error conditions
4. Search the validated history for a repeating pattern of token count vectors to find a period
5. Simulate and analyze the program again using our found period to ensure it doesn’t cause an error condition

We’ll examine each step in detail, but before we do, we have to create a matrix that encodes the dataflow program so that we can use some simple formulas to develop a schedule. It’s called a “Topology Matrix.”

8.2.1 Label Nodes/Arcs and Token Rates

Before anything else, we must first annotate the ports in our graph. Every port must be labeled with the number of tokens it produces or consumes on each activation. If it is an input port, the number should be negative, while outputs should be positive. We also number all the nodes and arcs. The numbering for nodes and arcs is separate (i.e., we will have a node #0 and also an arc #0).

In the following figure, node #0 puts three tokens on its output port on every activation. Node #1 consumes 12 tokens from arc #0 and produces three tokens on arc #1 and 1 token on arc #5 on every activation. Remember that in Synchronous Dataflow, every node produces and consumes the exact same number of tokens upon every activation.

Example Program for Schedule

8.2.2 Create a Topology Matrix

Using our labeled graph, we will create a topology matrix of the token production and consumption rates for the graph. Nodes are along the x axis (one column per node) and arcs are along the y axis (one row per arc). The value at location (n,a) is the number of tokens produced or consumed by node “n” at its port on arc “a.”

              Node
          0    1    2    3    4    5
  Arc 0  +3  -12    0    0    0    0
      1   0   +3   -1    0    0    0
      2   0    0   +1   -1    0    0
      3   0    0    0   +1   -3    0
      4   0    0    0    0   +1   -1
      5   0   +1    0    0    0   -1

Topology Matrix for Example Program

This table is the topology matrix for our example program. Looking at the matrix, the intersection of node #0 and arc #0 contains +3, meaning node #0 produces 3 tokens on arc #0 on every activation. The intersection of node #1 and arc #0 contains -12 because node #1 consumes 12 tokens from arc #0. But the intersection of node #0 and arc #1 contains 0 because node #0 does not connect with arc #1.

With the topology matrix we can make some simple observations that we’ll need later. Every row in the matrix tells us how many tokens are produced and consumed on one arc. By definition, an arc must have only two non-zero entries in the matrix: exactly one positive and exactly one negative entry. This simply states that an arc is connected to exactly one output port and one input port.

If a node has a self-loop (one of its outputs connects to one of its inputs), we do not enter it into the matrix. This is because the only way for such a configuration to run is for the number of tokens produced on the output to be the same as the number consumed on the input port; they cancel each other out. Be careful not to just ignore arcs with self-loops while building the topology matrix: if the input and output rates are not the same, we could go through the whole process of finding a schedule for a program that would not run anyway.

The columns in the matrix show us the token rates for all the ports of a node. A node will have, at the minimum, one entry for every output port, but more are possible if there are multiple arcs connecting to the same output port. A node must have exactly one entry for every input port and never more, because more would mean two arcs joining at the port, and that is not allowed.
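
To make the construction concrete, here is a sketch that builds the topology matrix (arcs as rows, nodes as columns) from a list of arcs and enforces the two-entry rule just described. The arc tuples are my reading of the example figure; all names are illustrative:

# Each arc: (producer node, produce rate, consumer node, consume rate).
# (Self-loops would need the special handling noted above.)
arcs = [(0, 3, 1, 12),   # arc 0: node 0 produces 3, node 1 consumes 12
        (1, 3, 2, 1),    # arc 1
        (2, 1, 3, 1),    # arc 2
        (3, 1, 4, 3),    # arc 3
        (4, 1, 5, 1),    # arc 4
        (1, 1, 5, 1)]    # arc 5: node 1 produces 1, node 5 consumes 1

num_nodes = 6
topology = [[0] * num_nodes for _ in arcs]
for a, (producer, out_rate, consumer, in_rate) in enumerate(arcs):
    topology[a][producer] = +out_rate   # outputs are positive
    topology[a][consumer] = -in_rate    # inputs are negative

# Every arc (row) must have exactly one positive entry (its output
# port) and exactly one negative entry (its input port).
for a, row in enumerate(topology):
    assert sum(1 for r in row if r > 0) == 1, f"arc {a}: bad producer"
    assert sum(1 for r in row if r < 0) == 1, f"arc {a}: bad consumer"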

8.2.3 Does a Schedule Exist?

Before we go through the simulation process, we can check whether a schedule even exists for the program. Using the topology matrix we just created, we will build a set of equations that describe the rate of token flow in the program and the total number of times each node must be activated in one period. An unsolvable system of equations tells us that no schedule exists.

A graph where node A must activate twice for one activation of B, and where C fires half as many times as B

In one period of the schedule, every node must fire at least once and maybe more if needed. In the previous figure, node A produces 2 tokens and node B consumes 4 tokens. Intuitively, we can tell that A has to be activated twice for every activation of B. On the other arc, node B produces 3 tokens and node C consumes 6 tokens. Therefore, node B must be activated twice for every activation of C. From these we can create a set of equations…

activations_A = 2 * activations_B
activations_B = 0.5 * activations_A
activations_B = 2 * activations_C
activations_C = 0.5 * activations_B

These equations describe the activation rate of each node relative to the nodes it connects to. If we can find values for all of the activations so that all the equations are true, then a schedule exists for our example program. If we are unable to solve the system of equations, then no schedule exists.

To begin solving the equations, we first have to pick a node and assign it a value for the number of activations in one period of the schedule. It doesn’t matter which node you choose, as all the other nodes’ activation rates will simply be relative to the one you picked. Let’s say that we want node C to be activated once per schedule period. We replace the equation for C and end up with the following new system of equations…

activations_A = 2 * activations_B
activations_B = 0.5 * activations_A
activations_B = 2 * activations_C
activations_C = 1

Now we are able to solve the rest of the equations by substituting activations_C into the equation for activations_B, then substituting activations_B into the equation for activations_A, giving us…

activations_A = 4
activations_B = 2
activations_C = 1

These values back up our intuitive understanding: node A must fire twice as many times as node B, and node B twice as many times as node C.
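
A few lines of Python can confirm the arithmetic. The rates below are taken from the figure’s description; the firing counts are the solution we just derived:

# Rates for the two arcs of the A -> B -> C example.
produce_AB, consume_AB = 2, 4   # A's output rate, B's input rate
produce_BC, consume_BC = 3, 6   # B's output rate, C's input rate

fires = {"A": 4, "B": 2, "C": 1}   # activations in one period

# Each arc balances when tokens produced equal tokens consumed.
assert fires["A"] * produce_AB == fires["B"] * consume_AB   # 8 == 8
assert fires["B"] * produce_BC == fires["C"] * consume_BC   # 6 == 6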


More formally, the procedure is:

Foreach node "n1"
    Foreach arc connecting "n1" to "n2" (inputs and outputs)
        Add the following equations to the set of equations:
            activations_n1 = activations_n2 * (arc_rate_n2 / arc_rate_n1)
            activations_n2 = activations_n1 * (arc_rate_n1 / arc_rate_n2)
Pick a node and assign it a value for the activation count (usually 1)
Solve the system of equations
If a solution exists then a schedule exists
otherwise no schedule exists

The values for arc rates can be found in the topology matrix.
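
One way to mechanize this procedure is to propagate relative activation rates across the arcs using exact fractions and then scale the results up to whole numbers. The sketch below reuses the (producer, produce rate, consumer, consume rate) arc representation from earlier; it is an illustration of the procedure above, not the book’s code:

from fractions import Fraction
from math import lcm   # requires Python 3.9+

arcs = [("A", 2, "B", 4),    # A -> B: A produces 2, B consumes 4
        ("B", 3, "C", 6)]    # B -> C: B produces 3, C consumes 6

nodes = {n for producer, _, consumer, _ in arcs for n in (producer, consumer)}

# Pick any node, assign it a rate of 1, then walk the arcs enforcing
# produce_rate * f(producer) == consume_rate * f(consumer).
rates = {next(iter(nodes)): Fraction(1)}
changed = True
while changed:
    changed = False
    for p, rp, c, rc in arcs:
        if p in rates and c not in rates:
            rates[c] = rates[p] * rp / rc
            changed = True
        elif c in rates and p not in rates:
            rates[p] = rates[c] * rc / rp
            changed = True
        elif p in rates and rates[p] * rp != rates[c] * rc:
            raise ValueError("inconsistent rates: no schedule exists")

# Scale the fractions to the smallest whole-number firing counts.
scale = lcm(*(r.denominator for r in rates.values()))
firings = {n: int(r * scale) for n, r in rates.items()}
print(firings)   # {'A': 4, 'B': 2, 'C': 1} for this example

Fraction keeps the arithmetic exact, and scaling by the least common multiple of the denominators yields the smallest whole-number activation counts for one period.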

8.2.4 Determine Initial Arc Capacities

An arc’s capacity is the maximum number of tokens on that arc at any one time. During our initial simulation of the program, we want to give every arc a large capacity, effectively emulating an unbounded capacity. The simulation will give us the exact arc capacities that should be used in the compiled program.
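
Extracting those exact capacities from a simulation run is simple. A sketch, assuming history is the list of per-instant token count vectors (indexed by arc) that the simulation below records; the names are illustrative:

def arc_capacities(history):
    """The capacity of an arc is the peak token count it ever held."""
    num_arcs = len(history[0])
    return [max(state[a] for state in history) for a in range(num_arcs)]

# e.g. arc_capacities([[3, 0], [6, 0], [9, 1], [0, 1]]) == [9, 1]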


8.2.5 Execution Simulation

Simulating a program means activating nodes just as if it were really running, but without actually executing any of the nodes. Since the numbers of tokens produced and consumed by each node are constant, we can just keep track of the token counts on the arcs, and that’s all we really care about for now anyway.

To simulate a program we need:

1. A topology matrix of the program to simulate
2. An arc capacity vector
3. The initial state of token counts on all the arcs

For every “instant” the simulation process creates:

1. An activation matrix and vector: to record what runs when
2. A token count vector: to hold the current number of tokens on each arc
3. A fire count vector: to record the number of activations per node

These three form the collective state of the dataflow program at an “instant.” As we step through the program, we’ll keep a record of the state for every “instant”… let’s call it the “history.”

8.2.6 Simulation Process Overview

The simulation method first looks at the token counts from the last state to calculate a new activation vector. Using that, we can adjust the token counts for all the arcs connected to activated nodes, creating a new token count vector for the current “instant.” In the same way, we create a new fire count vector. These three items (activation vector, token counts and fire counts) make up the state of the current “instant” that we use to start all over again: our current state becomes the previous state. Each loop is an “instant.” We run it for some number of instants and record the state of every instant.
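
A sketch of that outer loop follows. The helper new_activation_vector is assumed here and sketched under Step 1 below; the matrix orientation (arcs as rows, nodes as columns) matches the topology matrix built earlier, and all names are illustrative:

def simulate(topology, capacities, initial_tokens, num_instants):
    tokens = list(initial_tokens)      # token count per arc
    fires = [0] * len(topology[0])     # cumulative fire count per node
    history = [list(tokens)]           # recorded states, one per instant

    for _ in range(num_instants):
        active = new_activation_vector(topology, tokens, capacities)
        # Apply every activated node's port rates to its arcs
        # (negative entries consume tokens, positive entries produce).
        for a, row in enumerate(topology):
            for n, rate in enumerate(row):
                if active[n]:
                    tokens[a] += rate
        for n, fired in enumerate(active):
            fires[n] += fired
        history.append(list(tokens))
    return history, fires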

8.2.7 Simulation Process in Detail

To start, we need an initial token count vector. This contains the token counts on all the arcs that exist before the simulation starts. Quite often we need to prime the arcs with initial tokens just to kick-start it running for the first “instant.”

8.2.7.1 Step 1: Create a new activation matrix

Remember that our topology matrix uses negative numbers for input ports and positive numbers for output ports. So…

forevery element in the topology matrix at (n,a)
    if the element is an input (use "Is Input" equation)
        if there are enough tokens on the arc (use "Enough Tokens" equation)
            then the new activation matrix at (n,a) = 1
            else the new activation matrix at (n,a) = 0
    elseif the element is an output (use "Is Output" equation)
        if space is available on the output arc (use "Arc Not Full" equation)
            then the new activation matrix at (n,a) = 1
            else the new activation matrix at (n,a) = 0
    else
        the new activation matrix at (n,a) = 0
next element

Where “n” is the node index and “a” is the arc index.

The equation definitions are:

Is Input?
    topology_matrix(n,a) < 0

Is Output?
    topology_matrix(n,a) > 0

Enough Tokens?
    token_count(a) >= abs(topology_matrix(n,a))

Arc Not Full?
    token_count(a) + topology_matrix(n,a) <= arc_capacity(a)
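
Translating the pseudocode and these four tests directly into Python might look like the sketch below. It assumes the arcs-as-rows, nodes-as-columns matrix built earlier (so topology_matrix[a][n] plays the role of the book’s topology_matrix(n,a)). The vector helper anticipates Step 2, which collapses each node’s column into a single go/no-go flag; all names are illustrative:

def new_activation_matrix(topology_matrix, token_count, arc_capacity):
    num_arcs, num_nodes = len(topology_matrix), len(topology_matrix[0])
    activation = [[0] * num_nodes for _ in range(num_arcs)]
    for a in range(num_arcs):
        for n in range(num_nodes):
            rate = topology_matrix[a][n]
            if rate < 0 and token_count[a] >= abs(rate):
                activation[a][n] = 1   # input port: enough tokens
            elif rate > 0 and token_count[a] + rate <= arc_capacity[a]:
                activation[a][n] = 1   # output port: room on the arc
    return activation

def new_activation_vector(topology_matrix, token_count, arc_capacity):
    m = new_activation_matrix(topology_matrix, token_count, arc_capacity)
    # A node may fire only if every one of its connected ports is enabled.
    return [all(m[a][n] for a in range(len(topology_matrix))
                if topology_matrix[a][n] != 0)
            for n in range(len(topology_matrix[0]))]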