
Computational Analysis of Firms’ Organization and Strategic Behaviour


Routledge Studies in Organizational Behaviour and Strategy

1. R&D Decisions: Strategy, Policy and Disclosure
   Edited by Alice Belcher, John Hassard and Stephen Procter
2. International Strategies in Telecommunications: Model and Applications
   Anders Pehrsson
3. Corporate Strategy: A Feminist Perspective
   Angélique du Toit
4. The Peak Performing Organization
   Edited by Ronald J. Burke and Cary L. Cooper
5. Wisdom and Management in the Knowledge Economy
   David Rooney, Bernard McKenna and Peter Liesch
6. Computational Analysis of Firms’ Organization and Strategic Behaviour
   Edited by Edoardo Mollona

Computational Analysis of Firms’ Organization and Strategic Behaviour

Edited by Edoardo Mollona

New York and London

First published 2010 by Routledge, 270 Madison Avenue, New York, NY 10016
Simultaneously published in the UK by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business.

This edition published in the Taylor & Francis e-Library, 2010. To purchase your own copy of this or any of Taylor & Francis or Routledge’s collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk.

© 2010 Edoardo Mollona

The right of Edoardo Mollona to be identified as the author of the editorial material, and of the authors for their individual chapters, has been asserted by them in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
Computational analysis of firms’ organization and strategic behavior / edited by Edoardo Mollona.
p. cm. — (Routledge research in organizational behaviour and strategy ; 6)
Includes bibliographical references and index.
1. Strategic planning—Computer simulation. 2. Social sciences—Computer simulation. I. Mollona, Edoardo, 1967–
HD30.28.C649 2010
302.3'50113—dc22
2009049922

ISBN10: 0-203-85009-2 (Master e-book ISBN)
ISBN13: 978-0-415-47602-7 (hbk)
ISBN13: 978-0-203-85009-1 (ebk)

Contents

List of Figures and Boxes
List of Tables
Preface

PART I: Why and How: Using Computer Simulation for Theory Development in Social Sciences

1. The Use of Computer Simulation in Strategy and Organization Research
   EDOARDO MOLLONA
2. Computational Modelling and Social Theory—The Dangers of Numerical Representation
   BRUCE EDMONDS
3. Devices for Theory Development: Why Using Computer Simulation If Mathematical Analysis Is Available?
   RITA FIORESI AND EDOARDO MOLLONA
4. Mix, Chain and Replicate: Methodologies for Agent-Based Modeling of Social Systems
   DAVID HALES

PART II: Computer Simulation for Theory Development in Strategy and Organization Theory

5. The Dynamics of Firm Growth and Resource Sharing in Corporate Diversification
   MICHAEL S. GARY
6. Revisiting Porter’s Generic Strategies for Competitive Environments Using System Dynamics
   MARTIN KUNC
7. Rivalry and Learning among Clustered and Isolated Firms
   CRISTINA BOARI, GUIDO FIORETTI AND VINCENZA ODORICI
8. Organization and Strategy in Banks
   ALESSANDRO CAPPELLINI AND ALESSANDRO RAIMONDI
9. Changing Roles in Organizations: An Agent-Based Approach
   MARCO LAMIERI AND DIANA MANGALAGIU
10. Rationality Meets the Tribe: Recent Models of Cultural Group Selection
    DAVID HALES

PART III: How to Build Agent-Based Computer Models of Firms

11. An Agent-Based Methodological Framework to Simulate Organizations or the Quest for the Enterprise: jES and jESOF, Java Enterprise Simulator and Java Enterprise Simulator Open Foundation
    PIETRO TERNA
12. From Petri Nets to ABM: The Analysis of the Enterprise’s Process to Model the Firm
    GIANLUIGI FERRARIS

Contributors
Index

Figures and Boxes

FIGURES

1.1  Logic of inquiry with computer simulation.
1.2  A proposal to associate computer simulation and field research.
1.3  Integration of collected and simulated patterns of behavior.
2.1  The basic modeling relation.
2.2  Modeling steps with a simulation.
2.3  Two stages of abstraction in the modeling process.
2.4  Using a simulation stage abstraction.
2.5  A chain of three levels of model with increasing abstraction concerning tag-based mechanisms of group formation.
2.6  Belief networks of seller and buyer.
3.1  A qualitative model of economic relationships among populations of firms.
3.2  Analysis of behavior in the neighborhood of equilibrium P7.
3.3  Transition of the system from equilibrium P7 to equilibrium P8.
3.4  Transition of the system from equilibrium P7 to equilibrium P6.
3.5  Analysis of behavior in the neighborhood of equilibrium P7.
3.6  Transition of the system from equilibrium P7 to equilibrium P3.
3.7  Data collected in the Val Vibrata Industrial District.
3.8  Simulated demographic dynamics in Val Vibrata Industrial District.
3.9  Simulated demographic dynamics in Val Vibrata Industrial District with empirically grounded calibration of the model.
3.10 Simulated demographic dynamics in Val Vibrata Industrial District with empirically grounded calibration of the model and appropriate simulation length.
4.1  The form of an existence proof.
4.2  Behavior modeling.
4.3  Theory testing.
4.4  Theory building.
4.5  Explanation finding.
4.6  Example of a model chain terminating in peer-to-peer application domain models.
4.7  Diagram outlining a replication process in which two independent replications were made of a previously published model.
4.8  A network visualisation of models and how they relate.
5.1  Revenue of the primary field site company from 1990 to 1998.
5.2  Pre-tax profit of the primary field site company from 1990 to 1998.
5.3  Profit margin of the primary field site company from 1990 to 1998.
5.4  Asset stock of shared resources and net resource investment flow.
5.5  Resource correction management control loop.
5.6  Aspiration adjustment reinforcing feedback loop coupled with the resource correction balancing loop.
5.7  Comparison of profitability for four different simulation experiments.
5.8  Related diversification with overstretching costs experiment.
5.9  Realizing synergy through higher initial slack or fixed targets.
5.10 Dynamics of the fixed target experiment.
5.11 Expanded feedback diagram for the dynamics of firm growth and resource sharing in corporate diversification.
6.1  Model sectors and the concept of value chain.
6.2  Market evolution under the initial conditions—industry equilibrium.
6.3  Cost leader financial resources (line 1) and price (line 3) and differentiation leader financial resources (line 2) and price (line 4).
6.4  Cost leader financial resources (line 1) and price (line 3) and differentiation leader financial resources (line 2) and price (line 4).
6.5  Relationship between technology investment rates and financial performance obtained for a differentiation leader.
6.6  Price and product technology evolution with two cost leaders.
6.7  Price evolution with two differentiation leaders.
6.8  Technology evolution with two differentiation leaders.
7.1  A firm’s knowledge is entailed in knowledge fields, represented by parallelepipeds.
7.2  A network of knowledge fields (solid squares) owned by firms (dashed circles).
7.3  Experiential exploration, experiential exploitation, vicarious exploration and vicarious exploitation.
7.4  The sequence of operations carried out by a firm A and their relationships with the analogous sequence carried out by another firm B.
8.1  Organization of the universal bank.
8.2  Actions to be taken by banks.
8.3  Loans Portfolio Composition.
8.4  Credit quality indicator.
9.1  Example of a simulated organization as the interplay of a formal and informal network.
9.2  Number of changing roles with and without the informal network.
9.3  Average number of passages to complete a task set.
9.4  UML representation of the JavaSwarm model.
10.1 Traditionally, game theory models have focused on agents with unbounded rationality and complete information.
10.2 Cultural group selection models also differ from the traditional game theory approach in their focus on social learning and (often emergent) social utility over individual utility.
10.3 The cultural group selection models represent interactions within dynamic social structures whereas game theory has tended towards static “mean field” structures.
10.4 Schematic of the evolution of groups in the tag model.
10.5 Schematic of the evolution of groups (cliques) in the network-rewiring model.
10.6 Schematic of the evolution of groups in the group-splitting model.
11.1 A simplified view of the jES components; recipes are reported here in a simplified way, without time specifications.
11.2 A dynamic view of the jES components; recipes are reported here in a simplified way, without time specifications.
11.3 Decision dilemma.
11.4 Different layers or strata in jESOF.
11.5 From the left: grass level, rabbit level and fox level; grass, rabbits, and foxes are represented by the dark areas; the medium gray areas are empty zones; the white spaces are the areas of visibility of the units.
11.6 Workers–firms, v. 1.
11.7 Workers–firms, v. 2.
11.8 The parameters of the simulation.
11.9 A UML view of jESlet.
11.10 Production with unitCriterion = 2.
11.11 The same case, adding three complex units operating as lathes.
11.12 The system in a normal situation, with the waiting lists of the units.
11.13 The system with simulated overloads.
11.14 Increasing the number of the evaluation units: the effect.
12.1 Example of a conditions-events net.
12.2 Example of a standard Petri net.
12.3 Standard Petri net with weights on arcs.
12.4 Marking graph of the example net.
12.5 Example of a colored Petri net.
12.6 Place-transition net of the wrong process.
12.7 Place-transition net after the process has been corrected.

BOXES

7.1  Actor Network Theory.

Tables

2.1  Summary of Results from Example 2
6.1  Differences in Decision-Making Styles Using the Porter (1985) Generic Strategies
6.2  Main Decision-Making Processes Existing in Each Model of the Firm
6.3  Initial Conditions for Each of the Two Firms
9.1  Task Set
9.2  Example of Task Set
9.3  Formal Structures Used in Experiments
9.4  Average Agent’s Productivity
12.1 Incidence Matrix of the Conditions-Events Net
12.2 Incidence Matrix of a Place-Transition Net
12.3 Invariants’ Computation of the Example Net
12.4 Incidence Matrix of a Colored Net
12.5 Results of 1,000 Auctions with the Wrong Process
12.6 Results of 1,000 Auctions with the Correct Process
12.7 Stops and Lacks During 500 Production Cycles of Five Different Scenarios

Preface

Management and organization theory has, over the years, developed rich methodological paraphernalia to test hypotheses. Methods and techniques drawing on probability theory and econometrics allow researchers to test hypotheses rigorously and contribute to articulating robust theories. As far as the process of hypothesis generation is concerned, however, researchers are trapped within two main strategies, each of which reveals a specific cultural milieu. On one side, researchers produce hypotheses using mathematical modeling and analysis. On the other, researchers may induce hypotheses by means of grounded field studies and direct observation of phenomena. By adopting the first strategy, researchers are bound by the limits of tractability of mathematical representations of complex phenomena. By adopting the second strategy, researchers face the limits of their capability to stretch and exercise their brains to rigorously articulate chains of cause–effect relationships among variables.

The book addresses possible applications of computer simulation to theory building in social sciences, in particular in management and organizational theory. The key hypothesis is that modeling and computer simulation provide an environment in which to develop, test and articulate theoretical propositions. Since the computational approach is gaining legitimacy in mainstream journals, it is worth understanding how and when computer simulation can be a good research strategy for producing hypotheses. In general, computer simulation provides an experimental environment where researchers are able to play with symbolic representations of phenomena by modifying a model’s structure and activating or deactivating the model’s parameters. This environment allows the generation of hypotheses both to explain observed phenomena ex post and to generate distributions of unrealized events ex ante, thereby envisioning areas for further empirical investigation.

The objective of the book is twofold. First, in this book, authors convey their experiences in adopting computer simulation as a research strategy. Why one should use computer simulation, what the limits of the approach are and how to conduct a simulation study are major themes of the book. From a methodological perspective, in articulating the discourse on a research strategy based on computer simulation, the volume includes studies developed using two different techniques and philosophies: System Dynamics and agent-based modeling (ABM). Within the System Dynamics tradition, modeling is grounded on differential equations and feedback theory, whereas ABM moves from the assumption that the behavior of social systems can be understood as evolving out of the interaction among autonomous learning agents. Regarding ABM, we both explain the fundamentals of the underpinning paradigm and present a number of applications in the area of management and organizational theory. Second, the book describes how computer simulation helps to address specific research issues in strategic management and organizational theory. In this respect, we investigate what makes firms heterogeneous and what the determinants are of firms’ emergent strategic behavior and organizational structure. Thus, the book is organized around two axes: the methodological axis, which focuses on the structure of a computer simulation research design, and the theoretical axis, which deals with specific research issues in strategic management and organization theory.

The volume is organized as follows. In the first part, we explain what we mean by the computational approach, we present a comparison with other approaches to theorizing and we suggest possible ways to integrate computer simulation into more traditional research designs. Contributions that adopt a simulation-based approach to analyze strategic and organizational dynamics are included in the second part. The third part presents two pieces of work: the first chapter illustrates a tool developed to create agent-based computer models of firms; the second expounds a technique to enhance rigor in computer modeling and simulation.

More specifically, in the first part, the chapter by Mollona elucidates the logic that underpins a number of selected simulation-based research works in the organization and strategy fields. In particular, he speculates on how simulation experiments enhance theory development by supporting inductive and deductive inferences. Furthermore, Mollona proposes an integration between simulation-based and field-based studies and delineates the key steps in developing such a mixed research design. The second chapter is written by Edmonds. In the attempt to enhance the production of useful models of observed social phenomena, Edmonds reviews recurring weaknesses of modeling and provides examples that indicate how and when modeling might be useful in a scientific sense. In the first two chapters, both Mollona and Edmonds speak in favor of a more educated use of a research strategy that employs modeling and computer simulation. They also contribute a number of hints for articulating such a strategy of inquiry. In a similar vein, the chapter by Fioresi and Mollona provides a further direction for thinking through the logic that informs computational approaches to social sciences. In particular, they investigate the relationships between mathematical analysis and computer simulation of formal models of socioeconomic processes. In the chapter, the authors unveil the limits and advantages of both approaches and suggest how both ways of analyzing formal models can coexist in a research design. To offer another view on the use of computer modeling and simulation as devices for theory generation, we conclude this first part with the chapter written by Hales. Hales begins his work by coming back to the issue, introduced by Mollona, of the use of computer simulation to support inductive and deductive inferences. He then presents a summary of methods for approaching agent-based modeling. Taken together, the four pieces of work included in the first part of this book ought to offer a rich perspective on the motives, logics and methods that inspire and steer the use of modeling and computer simulation as a research approach in social sciences.

The second part of the book provides examples of theory development through computer modeling and simulation. To begin this second part, the chapter written by Gary investigates related corporate diversification within a firm. In analyzing the diversification strategy implementation process, Gary discusses recent research combining grounded process fieldwork and System Dynamics modeling. From a methodological point of view, the work presented by Gary provides an example of how field research, computer modeling and simulation are integrated in the System Dynamics tradition. On the other hand, through computer simulation, Gary contributes to strategy theory by assessing the long-term performances of different implementation strategies for diversifying into a new, growing business. In the following chapter, Kunc presents a System Dynamics model that simulates competitive dynamics in a duopoly. With a series of simulations, Kunc shows the usefulness of System Dynamics simulation models in complementing traditional strategy analysis to address weaknesses, strengths and blind spots in the pursuit of generic competitive strategies. The lenses provided by a computer simulation elicit the discrepancy that often emerges between ex ante managerial recipes and counterintuitive ex post consequences of managerial choices. The works of Gary and Kunc are both solidly grounded within the System Dynamics tradition and are insightful illustrations of the bridge that this discipline has built with the analysis of corporate and business strategy.

Moving towards the agent-based modeling approach, in their contribution Boari, Fioretti and Odorici explore the relationships between rivalry and geographical proximity at the very level of contacts between individual firms. In particular, they highlight the influence of geographical proximity on rival identification, on the comparison of rivals’ knowledge and on the consequent elaboration of a competitive strategy. A number of simulation experiments highlight the key role that rivalry plays as a powerful engine for knowledge creation and diffusion among geographically co-located firms. Here, the issue of competitive strategy and rivalry is scrutinized in its interconnections with the cognitive processes that underpin mechanisms of imitation and learning. In particular, the use of an agent-based model allows the authors to model carefully the individual behavior of firms and their interaction so as to obtain emergent global patterns of knowledge diffusion.

In the chapter that follows, Cappellini and Raimondi present their agent-based model to analyze banks’ organization and strategy. Taking an angle that emphasizes the social role played by banks, they explore the implications and quandaries that arise from pursuing profits in an extremely institutionalized and socially interconnected context. From this point of view, management strongly interacts with the market and with macroeconomic scenarios. By simulating their model, Cappellini and Raimondi describe how banks assess and carry out credit and risk evaluation activities and investigate how these activities integrate and reflect changes in corporate risk aversion driven either by a change in shareholders’ attitudes or by an exogenous change in the business situation.

Next, Lamieri and Mangalagiu investigate the concept of “organizational role” and its effect on the productivity of organizations. Using an agent-based simulation, Lamieri and Mangalagiu compare the performances of different organizational structures where roles are alternatively fixed or variable according to decisions taken by managers who have local information. Simulation experiments support speculations on how flexibility in the role structure may alleviate the rigidity and inefficiencies of hierarchical structures.

The part closes with the chapter written by Hales, which presents and discusses a number of recent agent-based models of cooperative action. At first sight, the issue seems to diverge from the theme of the part, which touches on firms’ strategy and organization. Actually, however, models of cooperative action may provide an intriguing perspective for analyzing the behavior of organizations in an increasingly knowledge-based economy. Escalating specialization and reliance on the creativity of individual specialists may weaken the effectiveness of authority-based hierarchical mechanisms. Along with formal governance mechanisms and hierarchical controls, reciprocity in exchange behavior within intra-organizational social networks is increasingly responsible for the performance of productive processes. In this light, the contribution presented by Hales is important in that it describes mechanisms that produce the spontaneous formation of cooperative groups of agents that behave altruistically.

In the last part of the book, we have included two chapters that share both a methodological inclination and originality of content. To start the last part, Terna describes the Java Enterprise Simulator (jES). The tool is a programmable agent-based model that works as a flight simulator and allows users to execute commands in a simulated enterprise. The simulator lets users design a business context by defining which tasks are to be accomplished and which structures are able to carry them out. Besides describing the structure and features of the tool, the chapter guides potential users on a journey into the logic that underpins the flight simulator. Thus, combining strictly technical issues, examples and theoretical topics, Terna explains how to use jES both for theory development and as a decision support system. The work presented by Terna achieves two goals at once: it makes available a useful environment in which to build agent-based models, and it provides a guide to the use of this environment that not only encompasses technical issues but also explains to potential users the different meanings that the simulation of a firm’s behavior may presuppose.

Ferraris is the author of the last chapter of the book, which concludes the part. The chapter investigates the possibility of using Petri nets to set up a model and to validate the model’s structure. The author first offers a thorough description of Petri nets—a method for representing complex systems—and then illustrates how this technique can be coupled with agent-based modeling. By indicating Petri nets as a tool to ensure that a computer program correctly performs the required data processing, this chapter contributes in an original way to elaborating a rigorous discipline for computer modeling.

While books dealing with the application of computer simulation to management and organization theory do exist, we believe that the work presented in this volume may contribute to this body of literature in one important regard. Indeed, the aim of this book is to combine chapters that include applications of computer simulation to management and organizational issues with chapters in which both theoretical and technical issues related to a simulation-based research strategy are presented and dealt with. More importantly, methodological topics are specifically embedded within the broader theme of conducting research in social science. If the aim that stimulated the making of this volume is fulfilled, readers will become intrigued and will experiment for themselves with computer modeling and simulation as a research strategy.

Part I

Why and How: Using Computer Simulation for Theory Development in Social Sciences

1 The Use of Computer Simulation in Strategy and Organization Research

Edoardo Mollona

INTRODUCTION

With different fortunes and oscillating enthusiasm, computer simulation has supported theoretical investigation in managerial disciplines since the 1960s. In the attempt to further corroborate the role of computer simulation in the repertoire of research strategies available to social scientists, the aim of the present chapter is twofold. First, I would like to describe the logic underpinning the adoption of computer simulation in management and organization research. Thus, I propose a historical journey into a selection of contributions to speculate on the different logics of inquiry that permeate these studies. Second, I sketch out the framework for a research strategy that combines computer simulation and field-based inquiry.

To begin with, it is important to set out up front a definition of computer simulation. Computer simulation has to do with the manipulation of symbols using a computer code; more specifically, it uses algorithms to derive propositions from the assumptions that come together in a computer model. A computer model is a formal model in which ‘[ . . . ] the implications of the assumptions, that is, the conclusions, are derived by allowing an electronic digital computer to simulate the processes embodied in the assumptions’ (Cohen and Cyert 1961: 115). In this respect, computer models can be regarded as special cases of mathematical models (Cohen and Cyert 1961) in which conclusions are derived from assumptions by using a computer simulation rather than a process of analytical solution. On the other hand, however, computer models do not necessarily have to be stated in mathematical and numerical form (Clarkson and Simon 1960), since they allow the manipulation of symbols that can be words, phrases and sentences. Therefore, computer models make up the subset of mathematical models that are solved numerically rather than analytically, but not all computer models are stated in mathematical terms, since they may incorporate non-mathematical symbols. In this respect, Troitzsch suggests that computer simulation is a third system beside natural language and mathematics (1998: 27).
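To make the distinction between numerical and analytical derivation concrete, consider a minimal sketch in Python (an illustration of my own, not taken from the chapter): a single assumption, a stock growing at a fixed fractional rate, whose conclusion can be derived either by solving the equation analytically or by letting a program iterate the assumption step by step. All names and values are hypothetical.

import math

GROWTH_RATE = 0.05     # assumed fractional growth per time unit (hypothetical)
INITIAL_STOCK = 100.0
HORIZON = 50

# Analytical solution of dS/dt = r*S: S(t) = S0 * exp(r*t)
analytical = INITIAL_STOCK * math.exp(GROWTH_RATE * HORIZON)

# Simulated solution: derive the same conclusion by iterating the assumption.
dt = 0.01                           # small step for Euler integration
stock = INITIAL_STOCK
for _ in range(int(HORIZON / dt)):
    stock += GROWTH_RATE * stock * dt

print(f"analytical: {analytical:.2f}  simulated: {stock:.2f}")

The two results agree up to integration error; the simulated route becomes the only practicable one once non-linearities make the analytical solution intractable, which is precisely the situation that the early adopters discussed below faced.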


In principle, computer simulation is just a technologically aided process of deduction. Yet the crude technology can vary strongly across approaches; and, more importantly, the difference in the adopted technology often unveils profound differences in the philosophy that lies beneath the modeling. For example, computer simulations based on systems of difference equations are inspired by a structuralist stance that sees the behavior of the individuals embedded within a social system as determined by the feedback nature of the causal relationships that characterize the system (Forrester 1958, 1961). Agent-based models or cellular automata, on the other hand, simulate the actions and interactions of autonomous individual entities and build on the hypothesis that the behavior of social systems can be modeled and understood as evolving out of interacting autonomous learning agents (Epstein and Axtell 1996; Axelrod 1997; Axtell 1999). Thus, a crucial feature of these models is the emergence of ordered structures independently of top-down planning. While agent-based models and cellular automata show how interaction among individual decision making and learning may generate complex aggregate behavior, differential equation modeling aims at reducing aggregate and often puzzling behaviors to underlying feedback causal structures. As a consequence, these latter models typically aggregate agents into a relatively small number of states, assuming their perfect mixing and homogeneity (Rahmandad and Sterman 2004), while cellular automata and, especially, agent-based models preserve heterogeneity and individual attributes, thereby sacrificing parsimony. The reader looking for an overview of approaches and techniques may refer to the texts edited by, for example, Liebrand et al. (1998) or Gilbert and Troitzsch (2005).

However, independently of the approach adopted and the inspiring philosophy, research work employing computer simulation has frequently been regarded, in social sciences, as influenced by a logic autonomous with respect to mainstream research. Simulation studies, however, have a long tradition in organizational research. Going back to seminal work in the area of the behavioral theory of the firm and organizational decision theory, some of the most important theoretical pieces are based on a simulation approach. This is true, for example, for the well-known garbage can model (Cohen et al. 1972) and for the work leading to the development of the behavioral theory of the firm (Cyert et al. 1959; Cyert and March 1963). In recent times, computer simulation has regained ground in mainstream management journals. To push further the legitimization of computer simulation in the study of firms’ strategy and organization, this chapter aims to capture the logical underpinnings of successful simulation research.

The chapter is organized as follows: in the next section I briefly pinpoint key milestones in the history of computer simulation applied to strategy and organization research, and in the following section the reasons that motivated early adopters to use computer simulation are summarized. In section 4, I consider a sample of recent works that use simulation and I muse on the differences in the underlying logic of inquiry. In the fifth section, I focus on a specific issue: the association of computer simulation and field research. In the last section of the chapter I draw some conclusions.
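Before moving on, the contrast drawn above between the two modeling philosophies can be illustrated with a hedged toy sketch (my own construction, with hypothetical parameters, not an example from the literature cited): the same diffusion idea rendered first as interacting heterogeneous agents and then as a single aggregate feedback equation.

import random

random.seed(42)

# Agent-based rendering: each agent holds a binary practice and imitates a
# randomly met peer; the aggregate pattern emerges from micro-interaction.
agents = [random.random() < 0.1 for _ in range(200)]   # 10% initial adopters
for _ in range(5000):
    i, j = random.sample(range(len(agents)), 2)
    if agents[j]:                  # imitation spreads the practice on contact
        agents[i] = True
print("agent-based share of adopters:", sum(agents) / len(agents))

# Aggregate (difference equation) rendering: the same idea compressed into a
# feedback equation over the fraction of adopters, assuming perfect mixing.
fraction, contact_rate, dt = 0.1, 0.5, 0.1
for _ in range(500):
    fraction += contact_rate * fraction * (1 - fraction) * dt   # logistic loop
print("aggregate fraction of adopters:", round(fraction, 3))

The first rendering preserves individual attributes and could easily be made heterogeneous; the second buys parsimony by assuming the perfect mixing and homogeneity noted above.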

A HISTORICAL JOURNEY INTO THE ADVANCEMENT OF COMPUTER SIMULATION INTO STRATEGY AND ORGANIZATION THEORY

The use of computer simulation has intrigued social scientists with roots in a variety of cultural territories. Computer models have played a role in sociology and political science. A review conducted by Meinhart (1966) reveals that at the beginning of the 1960s a group of sociologists and political scientists shared an enthusiasm for the use of computer simulation. As reported by Meinhart, computer simulations supported McPhee in studying voting behavior (1961), Coe in investigating conflict in dyadic relationships (1964) and Gullahorn and Gullahorn in examining individual reactions in social interactions (1963). They followed a line of research initiated by Simon (1952), who formalized Homans’ theory of interaction (1950). Building on this strong foothold, the use of computer simulation has cultivated robust roots in sociology. Beginning from Axelrod’s work (1984), for example, a well-established thread of studies has explored the emergence of social order and cooperation from the micro-interaction of boundedly rational individuals (Epstein and Axtell 1996; Lomborg 1996; Nettle and Dunbar 1997; Macy and Skvoretz 1998; Eguiluz et al. 2005; Hanaki et al. 2007). In a different vein, another insightful example of an application of computer simulation to sociology is the analysis of theories of conflict conducted by Hanneman et al. (1995). Furthermore, the cross-fertilization between computational models and social sciences gave rise to a fertile field of studies labeled social simulation (Gilbert and Doran 1994; Gilbert and Conte 1995; Troitzsch et al. 1996; Gilbert and Troitzsch 2005; Edmonds et al. 2007).

As for strategy and organization theory, my analysis starts from economics because, when computer simulation first appeared, studies in strategy and organization were, from a theoretical point of view, growing as branches of the more consolidated field of economics. As Clarkson and Simon suggest (1960), three threads of studies have shared an interest in applying computer simulation in economics: the thread of studies on dynamic macroeconomics, the studies on operations management and the studies dealing with the theory of decision making.

The first thread employed computer simulation in the analysis of business cycles and market dynamics to deal with the non-linearities and growing complexity of dynamic systems of differential equations. As the complexity of mathematical models began to increase, researchers had to rely on numerical analysis and computer simulation to explore the behavior of these systems. Here, computer simulation was employed as a technical device, not as a research approach having its own logic. Indeed, in this area of macroeconomic dynamics, Clarkson and Simon provide a reference to a textbook of econometrics written by Klein (1953). The second area of studies adopted computer simulation to test computational algorithms aimed at finding optimal decision rules in complex business decision-making situations (Bowman and Fetter 1957; Churchman et al. 1957; Vazsonyi 1958; Dorfman 1960).1

To investigate the origin of the application of computer simulation to the study of firms’ strategy and organization, we focus on the third thread, which deals with economic decision making. To start our historical journey, we make use of the hints provided by Cohen in the paper he presented at the annual meeting of the American Economic Association in 1960 (Cohen 1960b) and in the paper published the following year with his colleague Cyert (1961). They indicate a group of researchers as the pioneers of computer simulation in economics. What makes this group of scholars fairly homogeneous is their shared aspiration to explore the implications of realistic representations of decision-making processes, removing the pressure to obtain mathematically tractable formalizations. This research agenda illuminated the advantage of computer simulation in dealing with dynamic models that were only partially amenable to mathematical analysis. With this attitude, Clarkson and Simon (1960), for example, referred to computer simulation as an attractive methodology to be employed in positive research to explain the relationship between decision-making processes and emerging economic behavior. The research tradition described by Cohen and Cyert focuses on this relationship and addresses economic behavior at the level of the firm, the industry and the national economy.

At the firm level, a thread of work mentioned by Cohen and Cyert (1961) is System Dynamics. Under this label goes a repertoire of simulation studies grounded on a specific methodology developed by Forrester (1958, 1961, 1968). Forrester investigated endogenously generated oscillations in the inventory of a manufacturing firm (1968); at the industry level, he looked at the interaction among component parts in a production-distribution system (1961). A fact that needs mention is that Forrester’s industrial dynamics (1961) was also one of the early attempts to state clearly a method to support researchers willing to study the economic behavior of firms using computer simulation. Indeed, Forrester’s major contribution probably lies in the delineation of a rigorous discipline for eliciting real decision-making processes within organizations and for exploring, with computer simulation, the unfolding organizational behavior that follows from the portrayed decision routines.


At the level of the industry, Hoggatt (1957) investigated the sensitivity of firms’ births and deaths to changes in market conditions, such as supply, demand, cost, prices and exit and entry conditions, and Cohen (1960a) modeled the shoe-making industry as a value chain articulated in five segments: consumers, shoe retailers, shoe manufacturers, cattle-hide leather tanners and hide dealers. The aim of the modeling was to explain the monthly values of the selling prices of each of the sectors from 1930 to 1940. Also at the industry level, Cyert et al. (1959) modeled a duopoly formed by an ex-monopolist and a spin-off created by a former member of the incumbent firm. The key decision that each firm makes is an output decision. The decision process is based on a cycle of forecasting, profit goal setting and evaluation of the best alternative. If this best alternative is inconsistent with the profit goal, firms re-examine cost and demand estimates and, eventually, search for a satisfying alternative. Finally, at the national system level, Orcutt et al. (1958) worked on a large demographic model of the United States household sector. As Orcutt explains, the model is a step in ‘. . . demonstrating the feasibility and potential usefulness of simulation techniques in connection with the development and use of models of economies . . .’ (Orcutt 1960: 903).

To bring forth the role that computer simulation played in theory development in strategy and organization, it is important to notice that, for a group of scholars based at Carnegie Mellon University (Cohen, Cyert, Feigenbaum and March), the development of a computational approach was intertwined with the advancement of a theoretical research agenda aimed at a new theory of the firm. As Cohen suggests:

It is only when all of the detailed aspects of entrepreneurial decision making can be programmed and simulated successfully that we will have a behavioral theory of the firm. (Cohen 1960b: 536)

The tight coupling between the emerging discontinuity in theoretical paradigms in economics and methodological innovation is particularly clear in the work leading to the development of the behavioral theory of the firm (Cyert et al. 1959; Cyert and March 1963). Interestingly, Cyert and March, in the introduction to their book, remind us that:

The emphasis on studying actual decision processes implies a description of the firm’s decision in terms of a specific series of steps used to reach that decision. The process is specified by drawing a flow diagram and executing a computer program that simulates the process in some detail. We wanted to study the actual making of decisions and reproduce the behavior as fully as possible within the confines of theoretical manageability. (Cyert and March 1963: 2)


It is interesting to note that this persuasive and powerful association between a theoretical contribution and a methodological approach had a widespread influence on the work of management scholars. In this light, Morecroft (1983) accurately draws attention to the cultural bridges existing between the work pursued at the Carnegie School and System Dynamics, the approach that Forrester was cultivating at the Sloan School of MIT.

COMPUTER SIMULATION FOR THEORY DEVELOPMENT IN STRATEGY AND ORGANIZATION

After this succinct delineation of the milestones that summarize the diffusion of computer simulation in economics, in this section I highlight the motives most frequently mentioned to justify the adoption of this methodology. In examining early contributions, three typical groups of justifications recur.

First, computer simulation, in comparison to formal analytical approaches, allows a greater richness of detail to be retained. Economists who adopted computer simulation operated in a cultural milieu in which research method, and rhetoric, was erected upon the rock-hard plinth provided by mathematical modeling. The typical way of proceeding demanded that the consequences of modeled assumptions be deduced by means of the analytical solution of a mathematical model. The rigor of the approach, however, does not come without costs, since the need to solve a model analytically bounds the complexity that the model can incorporate. The portrayal, for example, of non-linear relationships among variables introduces into a model so considerable an amount of complexity as to possibly impair its analytical solution. From this perspective, we can interpret the candid enthusiasm that permeates the writings of Cohen, for example, who explains that ‘[i]t requires a much more extensive knowledge of mathematics to obtain an analytical solution to a complex mathematical model than it does to formulate the model’ and, thus, computer simulation ‘. . . allows a more flexible and easy approach and preserves richness of details . . .’ (Cohen 1960b: 535). This enthusiasm is shared by Orcutt, who gives an idea of how powerful computer simulation appeared to these pioneers as a tool for dealing with complex systems:

The use of simulation techniques by the authors of this demographic study does not, of course, offer any guarantee in itself that they have produced an acceptable and useful population model. However, by producing a feasible means of solution it permitted them to introduce a variety of interactions, variables, nonlinearities and stochastic considerations into their model which they otherwise would have been forced to leave out despite strong evidence of their importance. (Orcutt 1960: 905)

This characteristic rescues the researcher from a typical dilemma: either to abandon the idea of representing the object of study closely, thereby accepting costly simplifications in order to rigorously generate testable hypotheses through mathematical analysis, or to preserve complex representations of the object of study at the cost of producing appreciative theories of behavior that have to deal with the ambiguity of natural language.

A second motive that is frequently mentioned is that in a computer model the relationship between assumptions and deduced consequences can be easily manipulated to account for a variety of changes and amendments in the model structure. The fact that a computer simulation does not require an analytical solution to derive consequences from assumptions entails that researchers can explore how modifications in a model’s structure affect the unfolding behavior of the model without remaining entrapped in the quandaries of often laborious mathematical analysis. As Cohen explains:

A further advantage of computer models is the ease of modifying the assumptions of the theory. When suitable programming languages become available, relations can be inserted, deleted, or changed in the model, and only local changes, which can be quickly made, will be required in the computer program. Modifications of this kind will have a much smaller effect on the procedures for simulating a formal model than they would on the means used for obtaining analytical solutions to the model. (Cohen 1960b: 536)

Considering the work done by Hoggatt (1957), for example, Shubik noted that ‘[t]he number of cases and conditions worked out by Hoggatt would have been unfeasible without a simulation’ (Shubik 1960: 917). As Cohen and Cyert suggest (1961), the work of Hoggatt is a good example of how computer simulation may help to revive an old model (in this case the neoclassical decision model for determining the output of firms given a market price) by addressing complex questions that were not practicable with other techniques of analysis. The ease of manipulating computer models is also connected to the fact that computer models may be structured in a modular format. Thus, ‘[i]t is extremely convenient to be able to formulate a complex model in terms of several component submodels, to deal with each component separately at first, and then to integrate them into a complete model’ (Cohen 1961: 45). In a similar vein, Gilbert and Troitzsch (2005) suggest that simulation is more appropriate for formalizing social science theories than mathematics, because programming languages are more expressive and less abstract than most mathematical techniques and because computer models are often modular, so that major changes can be made in one part without the need to change other parts of the program.

Finally, computer simulation allows researchers to generate complex hypotheses of a system’s behavior that are testable against the empirical world. This is because deductions obtained with computer simulation, besides being as rigorous and reliable as those obtained through mathematical analysis, may be cast in the form of time series to be directly compared with observed behaviors. Imagine a theory that predicts, in specified circumstances, the emergence of a particular behavior over time of a specified variable. In this case, a verbal description of the behavior has to be compared with empirical paths of behavior. This verbal description may be ambiguous in comparison to the string of quantities collected over time in an empirical time series. Computer simulation, on the other hand, produces hypotheses of behavior in the same language that is used to collect empirical time series: a string of quantities reported at specific intervals of time. In this way, computer simulation improves the capability to generate testable hypotheses of behavior (Meinhart 1966). As Orcutt explains, computer simulation made ‘. . . possible comparison of generated results with observed time series and cross sectional data and thus permitted testing of a sort that would not otherwise have been possible’ (Orcutt 1960: 905).

THE LOGIC OF INQUIRY USING COMPUTER SIMULATION

As Cohen and Cyert suggest (1961), computer models are of two types: synthetic and analytic. In synthetic models, the modeler knows with a high degree of accuracy the behavior of the component units of the phenomenon under scrutiny. In analytic models, on the other hand, the behavior of the phenomenon is known and the problem is to capture the mechanisms that produce the behavior. In this classification, synthetic and analytic models reveal different underpinning logics of enquiry. While synthetic computer models are informed by a pure deductive logic, analytic models are characterized by an inductive logic (Cohen 1961).

To start with, however, a word has to be said to better define what we mean by inductive and deductive processes. More specifically, the associations synthetic/deductive and analytic/inductive may not sound intuitive. The deductive process has been acknowledged as a key component of scientific reasoning since Aristotle. A deductive inference moves from general assumptions to specific consequences; in this respect, consequences drawn from assumptions have a lower degree of universality than their premises. Deductive inferences have two properties: first, the information embodied in the deduced consequences is more or less explicitly included in the assumptions; second, deduced consequences follow necessarily from the assumptions. In other words, if the assumptions are correct, the deduced consequences must be correct as well. Inductive processes, on the other hand, move from particular instances to general conclusions. In this respect, in inductive inferences, derived conclusions are not entirely included in the premises. In other words, the information content of induced conclusions is greater than that crystallized in the premises. Thus, inductive inferences say something new, or different, with respect to their premises; they add information. This property conceals a hazard, because the correctness of the premises does not necessarily imply that the conclusions are correct as well.

As for the distinction between analytic and synthetic, starting from Kant’s Critique of Pure Reason (first published in 1781), an analytic statement is purely explanatory of an existing concept and does not add more information than that already contained in the concept itself. A classic example reported by Kant regards the statement that an entity of matter is extended in space. The fact that an entity of matter is extended in space is already implicit in the definition of an entity of matter; the statement does not add information regarding the concept but rather provides an extension, or further explanation, of it. On the contrary, a synthetic statement is extensive because it adds more information than that contained originally in a concept. For example, the fact that an entity of matter has a weight, explains Kant, is not necessarily included in the concept of an entity of matter (it suffices to think of a state of absence of gravity) but rather stems from a synthesis between an original concept and a quality external to the concept. Given this distinction between synthetic and analytic statements, Peirce, for example, put forward a dichotomy between deductive/analytic and inductive/synthetic inferences (Hartshorne and Weiss 1931/1935). Thus, we have to be very careful in interpreting the distinction proposed by Cohen and Cyert between analytic/inductive and synthetic/deductive, since in their framework the concept of synthesis pertains to the use of simulation to aggregate local, or partial, components of a phenomenon into a global emerging behavior, while analysis concerns the dissection of a behavior of interest into its components, or determinants.

To be clear about the wording we are going to use, and to avoid misunderstanding, we focus on the distinction between computer simulations that adopt a deductive or an inductive logic of inference and we ignore the dichotomy between analytic and synthetic computer models. Within this framework, deductive computer models focus on the specification of a set of mechanisms or processes and explore the unfolding consequences of such specifications, whereas inductive computer models move from the definition of an aggregate behavior and use simulation to test whether candidate mechanisms or processes are able to determine in vitro, and thus explain, the aggregate behavior.

We suspect, however, that simulation studies show a much broader variety of approaches that blend elements of deduction and induction. In addition, in computer simulations induction and deduction are intertwined in a cyclical process of theoretical investigation. Induction works when we introduce into a model a causal mechanism that we deem possibly responsible for an observed behavior. In this case, we run history backward to reproduce the conditions for the behavior under study to emerge. On the other hand, once we have found a candidate causal mechanism that we think may explain the observed behavior, we might be interested in understanding how robust the relationship between the causal structure and the emerging behavior is. Additionally, we may want to understand whether the causal mechanism is connected to other possible behaviors. In other words, we may be interested in the relationship between the causal mechanism, or a class of similar causal mechanisms, and a class of behavioral phenomena. In both cases, we can generate a sensitivity analysis by simulating the model with different calibrations of the model’s parameters, or we can simulate the model with a variety of modifications in the structure of key causal mechanisms. In this way, we can explore near-histories or hypothetical histories (March et al. 1991) in order to articulate our understanding of a phenomenon. When we run a computer model and observe the simulated consequences of changes in parameter calibrations or of amendments in the model’s structure, we are embarking on a deductive inference. Deduction and induction are thus hardly separable in a research design based on computer simulation. We therefore expect differences among simulation studies to be detected in the degree of accuracy of the description of the elements that compose an aggregate phenomenon or of the features that characterize the aggregate phenomenon itself. Simulation studies in which a deductive logic of inference prevails will move from accurate modeling of components, while simulation studies informed by an inductive logic will set forth from the description of an aggregate behavior.

Nonetheless, maintaining two ideal types2 of computer models, deductive and inductive, seems a good strategy, or at least a safe point of departure, to get a picture of what logic of inquiry simulation studies have adopted in the field of strategy and organization. Differently from other typical qualitative and quantitative research strategies, which are more legitimized and disciplined, simulation-based research has been structured in a variety of different guises. Only recently have Davis et al. (2007) convincingly positioned simulation studies among other methods of inquiry within strategy and organization research, developing a road map for rigorous simulation-based research. To carry on along this avenue, we apply the two ideal types to capture the often subtle differences in the logic underlying simulation studies.
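In practice, the sensitivity analysis just described amounts to re-running the same candidate mechanism under systematically varied calibrations. The following minimal sketch (my own construction; the mechanism and all parameter values are hypothetical) illustrates the cycle in Python: a deductive step derives the consequences of each calibration, and an inductive step asks which calibrations reproduce an observed behavior.

def simulate(adjustment_speed, periods=40):
    # Toy causal mechanism: a firm gradually closes the gap to a target.
    performance, target = 50.0, 100.0
    history = []
    for _ in range(periods):
        performance += adjustment_speed * (target - performance)
        history.append(performance)
    return history

# Deductive step: derive the unfolding behavior under each calibration.
near_histories = {speed: simulate(speed) for speed in (0.05, 0.25, 0.75)}

# Inductive step: retain the calibrations whose simulated behavior matches an
# observed pattern (here, hypothetically, reaching 90 within 20 periods).
candidates = [s for s, h in near_histories.items() if h[19] >= 90.0]
print("calibrations consistent with the observed behavior:", candidates)

Each run is a deduction from fixed premises; comparing the family of runs against the observed pattern is the inductive move that selects plausible causal structures.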


Computer Simulation and Deductive Inference To address typical features of deductive inference in simulation studies, we begin from the classic Cohen et al.’s garbage can simulation model (1972). The authors do not specify in details a reference mode of behavior to be explained, beyond the broad idea that they want to address the way in which organized anarchies 3 embark in decision-making activity. Rather, the emphasis is on the modeling of the structural features of decision-making processes in specific types of organizations. The aim is to develop ‘a behavioral theory of organized anarchy’ (1972: 2). To do so, the authors develop a model that describes decision making within organized anarchies and examine ‘. . . the impact of some aspects of organizational structure on the process of choice . . .’ (1972: 2). The structure of the research design encompasses the modeling of organizational decisionmaking processes and the analysis of the behavioral consequences of such modeling. More specifically, the authors adopt a view of an organization as a garbage can in which are collected ‘. . . choices looking for problems, issues and feelings looking for decision situations in which they might be aired, solutions looking for issues to which they might be the answer, and decision makers looking for work’ (1972: 2). Along these lines, they modeled problems that require a specific amount of energy devoted by members of the organization to be solved and depicted two matrix structures that describe organizational features. The fi rst matrix defi nes the access structure that associates choices to problems by determining what choice is accessible to what problem. The second matrix represents the decision structure and associates decision makers to choices by establishing what decision maker is eligible to make what choice. In their experimental design, they portrayed different kinds of organizations with different energy distribution, different problem loads and different organizational structures. Through simulation experiments, the authors derived emerging decision-making behaviors with typical features. For example, they observed that, depending on the different assumptions crystallized into the initial calibration of the model, organizations may show different styles in decision making and problem solving. We defi ne this type of work deductive since the curiosity that triggers the effort of researchers regards the deduction of typical emerging patterns of organizational behavior given the description of organizational structures and decision-making processes. Similarly, in their simulation study of entrepreneurial strategies, Lant and Mezias (1990) formalized an organizational learning model by which fi rms collect performances, set aspiration levels, search alternatives and change organizational features. They designed an experimental setting with a population of 150 fi rms; to each fi rm was assigned 1 out of 16 different organizational features, 1 out of 3 entrepreneurial strategies and 1 out of 2 levels

Similarly, in their simulation study of entrepreneurial strategies, Lant and Mezias (1990) formalized an organizational learning model by which firms collect performances, set aspiration levels, search for alternatives and change organizational features. They designed an experimental setting with a population of 150 firms; each firm was assigned 1 of 16 different organizational features, 1 of 3 entrepreneurial strategies and 1 of 2 levels of entrepreneurial activity. The research design involves the generation of a number of different simulations to explore what kind of firm would successfully survive. Through the simulation experiments, the authors derived longitudinal implications for firms' performances, growth and survival and generated theoretical hypotheses on the relationship between entrepreneurial strategies, levels of entrepreneurial activity and firm performances. In this case, again, the research design proceeds from the description of firms' decision-making processes and investigates the consequences of the latter in terms of unfolding behaviors. The study thus maintains a deductive attitude in its interest in the dynamic consequences of a set of assumptions concerning entrepreneurial strategies as these are built into the specification of the simulation model. As the authors explain, the data generated ' . . . represent implications for organizational performance, growth, and survival of the different entrepreneurial strategies and two levels of entrepreneurship' (1990: 152). In a similar vein, Gavetti and Levinthal (2000) examined the role of, and interaction between, search processes that are forward-looking, based on a cognitive choice, and those that are backward-looking, the consequence of experiential learning. Gavetti and Levinthal represented the environment as a fitness landscape and modeled two decision-making processes that are alternatively informed by a backward-looking experiential learning mechanism or a forward-looking cognitive mechanism. The experimental design devises a set of simulations in which the performances of the two mechanisms are compared. The experiments allowed the authors to ascertain that the two mechanisms may productively interact, with the cognitive mechanism seeding the experiential learning mechanism. More precisely, Gavetti and Levinthal explored the role of the two mechanisms in different experimental conditions. For example, they found that the more complex the environment, the more accentuated the role of the cognitive mechanism in supporting decision making. In this study, as in those mentioned before, a computer model served as a virtual laboratory where researchers deduced consequences from different initial calibrations. The trait shared among these studies is that the value added by simulation is to elicit complex implications that are already hidden in a set of assumptions. In this respect, the term deductive retains its usual sense of describing an inference process in which the consequences are already contained in the premises. However, this inference process is far from being an unimaginative or infertile one; on the contrary, by connecting premises with their often counterintuitive or surprising consequences, researchers discover plausible causal relationships among variables that may contribute to theory development. This active role that simulation can play in theory building motivated Mezias and Glynn to say that '[ . . . ] simulation results do not simply reflect suppositions built in the model, but yield knowledge that adds value beyond its explicit assumptions' (1993: 95).
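The deductive loop these studies share (derive longitudinal consequences from an assumed adaptive process) can be illustrated with a stylized aspiration-adaptation sketch in the spirit of Lant and Mezias's learning model. The payoff landscape, update weights and search rule below are invented for the illustration and are not their published specification.

import random

random.seed(1)

def payoff(s):
    # illustrative single-peaked performance landscape
    return 1.0 - (s - 0.7) ** 2

firms = [{"strategy": random.random(), "aspiration": 0.5} for _ in range(150)]

for t in range(50):
    for f in firms:
        perf = payoff(f["strategy"]) + random.gauss(0, 0.05)
        if perf < f["aspiration"]:
            # performance below aspiration triggers local search
            candidate = f["strategy"] + random.gauss(0, 0.1)
            f["strategy"] = min(max(candidate, 0.0), 1.0)
        # exponentially weighted adaptation of the aspiration level
        f["aspiration"] = 0.75 * f["aspiration"] + 0.25 * perf

mean_perf = sum(payoff(f["strategy"]) for f in firms) / len(firms)
print(f"mean performance after 50 periods: {mean_perf:.3f}")

Tracking which firms survive under different strategy assignments, rather than mean performance, would bring the sketch closer to the survival questions the authors actually ask.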


Computer Simulation and Inductive Inference

As we assumed in this work, researchers adopt an inductive inference when they proceed from a phenomenon—more specifically, from the description of a behavior that unfolds longitudinally over time—and use computer simulation to select plausible determinants of the phenomenon from among alternative causal mechanisms. For example, Adner (2002) studied the emergence of disruptive technologies, and he set up his research design by stating up front the description of the characteristics of the phenomenon he wanted to investigate. After clarifying that his contribution was to explain the emergence of disruptive technologies, Adner modeled consumers' individual preferences and firms' technological strategies to obtain mechanisms that are sufficient to produce the phenomenon. A similar logic inspires the work of Lee et al. (2002), who conceived their research design with the aim of explaining the emergence of strategic groups. They developed a number of theoretical hypotheses that define causal relationships between four explanatory mechanisms and strategic groups' emergence, persistence and differential performances. They modeled a population of 50 firms and a payoff function with two peaks (a global maximum and a local maximum). Adopting an evolutionary framework, they built a genetic algorithm that mimics a process of variation (innovation in strategy), a process of selection (payoff received) and a process of retention (imitation of successful firms by new entrants). They ran experiments varying each of the four mechanisms one at a time and examined under what conditions strategic groups are likely to emerge and persist. Another study with similar features is Abrahamson and Rosenkopf's analysis of the emergence of bandwagon effects in innovation adoption (1993). They defined the phenomenon of interest and used computer simulation to find sufficient conditions for bandwagons to emerge and for innovations to be retained by adopters after bandwagons have displayed their effects. More precisely, they modeled bandwagons and derived behavior with simulation to induce how the causal structures of the model, and the processes that those causal structures represented, contributed to producing the dynamic behavior observed in the simulation experiments. Grounding on the observed cause–effect relationships, they derived propositions about bandwagon occurrence, extent and persistence.
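A minimal threshold-cascade sketch conveys the logic of such a design. The Granovetter-style adoption rule and the uniform threshold distribution below are simplified stand-ins, assumed for the example, for Abrahamson and Rosenkopf's more articulated specification of bandwagon pressure.

import random

random.seed(2)
n = 100
# each firm adopts once the share of prior adopters exceeds its private threshold
thresholds = [random.random() for _ in range(n)]
adopted = [th <= 0.05 for th in thresholds]   # seed adopters: very low thresholds

while True:
    share = sum(adopted) / n
    newly = [i for i in range(n) if not adopted[i] and thresholds[i] <= share]
    if not newly:
        break
    for i in newly:
        adopted[i] = True

print(f"bandwagon extent: {sum(adopted)} of {n} firms adopted")

Sweeping the seed rule and the threshold distribution, and recording when the cascade dies out early, is the simulated counterpart of searching for conditions sufficient for bandwagons to emerge.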

Similar logic of inquiry informs Lant and Mezias's speculation on modes of organizational change (1992). They set off their research design from the definition of a dynamic behavior of interest: Tushman and Romanelli's punctuated model of organizational change (1985). Afterward, they scrutinized candidate causal mechanisms to ferret out determinants of the behavior of interest. In particular, they formalized an organizational learning model by which firms collect performances, set aspiration levels, search for alternatives and change organizational features. Then, they used computer simulation to build a population of firms whose activities are governed by this process of experiential learning and demonstrated that an organizational change process informed by this learning mechanism can unfold displaying the typical pattern of punctuated change. Using computer simulation, they theorized that the same deep theoretical structure, in this case a learning mechanism, underpins both convergence and reorientation processes. The explanation of the punctuated model of organizational change is at the core of Sastry's simulation study as well (1997). Sastry analyzed Tushman and Romanelli's verbal theory of punctuated change to demonstrate that the verbal theory does not contain the causal mechanisms necessary to explain the described behavior. Sastry conducted a textual analysis of the verbal theory and used its qualitative descriptions of behavior to produce a reference dynamic behavior against which to test the theory. Then, she identified the constructs and causal relationships that provided the basis of the formal model. Once a computer model that formalized the key traits of the theory was built, Sastry simulated the model and compared the simulated behaviors with those crystallized in the theory. The discrepancy between theoretical and simulated behaviors guided Sastry to introduce two new mechanisms that were not originally included in the verbal theory but that proved necessary to produce the behavior purported in the theory. The two mechanisms are a routine for monitoring organization-environment consistency and a heuristic that suspends change for a trial period following each reorientation. The work of Sastry provides the opportunity to speculate further on the features of inductive simulation research. As we said in the foregoing, inductive inferences typically bring about additional information that is not necessarily crystallized in the premises. The inductive nature of Sastry's study emerges when we appreciate that in the original premises of the study, captured in Tushman and Romanelli's verbal theory, there was no mention of, or indication pointing at, the causal mechanisms that Sastry included in the theory ex post. To clarify the position taken in this chapter, however, when I suggest that inductive simulations bring into a study information content that is not included in the stated premises, I am not speaking about empirical information. Computer simulation may interact with empirical information and help to investigate real instances, but it does not per se say anything about the empirical world. What I am suggesting is that, given a set of initial premises, a simulation study has an inductive nature when it facilitates the enlargement or modification of this set of premises. Nevertheless, computer simulation studies often maintain a more or less close relationship with empirical data. Malerba et al. (1999), for example, propose a class of computer models that they define as history-friendly because of their adherence to the empirical realm that is the object of exploration. To elucidate their approach, they focused on an appreciative theory that describes the pattern of evolution of the computer industry and developed a formal representation of that theory. Through simulation, they checked the consistency between the appreciative and the formal versions of the theory by examining whether the formal version is able to reproduce the same stylized facts as described in the appreciative theory. The empirical information is the pedestal on which the computer model is built, but the contribution of the simulation study is not one of extending such information. The contribution of the study rests in its corroborating the relationship between causal mechanisms and emerging behaviors as observed in the real world. In this vein, another example of induction is provided by the study of Lomi and Larsen (1996) on population ecology. They focused on the typical model of density-dependent founding and mortality rates and addressed the micro-processes that take place at the level of individual organizations. The authors modeled micro-processes of local interaction and simulated the emerging competitive dynamics of organizational populations. They designed a protocol of simulation experiments through which they tested different specifications of local micro-processes. For example, they varied the strength of the link between the founding decision and local density. Then, they used data generated by the simulations to estimate a model of organizational founding and compared the simulated estimates with existing empirical estimates. They demonstrated the ecological model of density-dependent founding rates to be consistent with a number of micro-assumptions about the patterns and the range of local interaction among individual organizations. Again, in this case, the study maintains an inductive flavor in its use of computer simulation to add plausible premises, a set of behavioral micro-assumptions, to the repertoire of possible explanations of observed aggregate behaviors.
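The aggregate pattern at stake here can itself be sketched compactly. The hazard specifications below are the stylized density-dependence functional forms of ecological theory (founding first rising and then falling with density, mortality the reverse); they are illustrative assumptions and do not reproduce Lomi and Larsen's cellular model of local interaction.

import numpy as np

rng = np.random.default_rng(3)
N, history = 5, []
for t in range(200):
    # legitimation raises founding at low density, competition lowers it later;
    # mortality mirrors the pattern (coefficients are arbitrary)
    founding_rate    = np.exp(1.0 + 0.03 * N - 0.00015 * N ** 2)
    mortality_hazard = min(1.0, np.exp(-3.0 - 0.02 * N + 0.00008 * N ** 2))
    births = rng.poisson(founding_rate)
    deaths = rng.binomial(N, mortality_hazard)
    N = max(N + births - deaths, 0)
    history.append(N)

print("population density every 40 periods:", history[::40])

Lomi and Larsen's move is the reverse of this top-down specification: they generate the series from local interaction rules and then estimate a density-dependence model of this kind on the simulated data.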

The Interplay of Induction and Deduction in Simulation Studies

One consideration is fundamental in order not to misinterpret the distinction between deductive and inductive simulation studies. In most simulation studies in the social sciences, inductive and deductive inferences are intertwined. However, we cannot avoid noting that the logics by which they are inspired often differ more than marginally. For example, in the mentioned study by Sastry, the logic of inquiry is clearly stated and hinges upon two elements. First, the author has a clear image of the dynamic features of the behavior she wants to explain. Second, she uses the comparison between theoretical and simulated behavior as a trigger to import candidate causal mechanisms into her modeling. At the other extreme, consider, for example, Cohen et al.'s garbage can simulation model. The authors described how problems, choices and people meet within an organization, but they started their inquiry without a precise idea about the aggregate decision-making behavior that follows from the premises they designed. The curiosity was exactly to understand what the consequences are of representing an organization as an organized anarchy, and the contribution of the study is indeed to suggest that organized anarchies maintain a peculiar style in their decision-making behavior. Most simulation studies, however, blend the two components. For example, in his study on the emergence of disruptive technologies (2002), Adner proceeded from the description of the phenomenon of the emergence of disruptive technologies. He investigated how the phenomenon had been analyzed before in the literature and noticed that previous explanations had focused on the limits of incumbent technologies. Taking a different angle, Adner focused on the impact of market demand on development strategies. This choice directed his attention to the modeling of the structure of market demand and, more importantly, led him to introduce two new constructs, preference overlap and preference symmetry, to capture features of market demand that are connected to the behavior of interest. However, the deduction, through simulation, of the consequences of the modeled premises led him to produce a repertoire of plausible behaviors depending on changes applied to the calibration of the simulation model. Through this exercise of deduction the author provided an articulated portrait of the phenomenon under study, eliciting different modes of competition among technologies. Thus, inductive simulation, starting from a defined behavior, aided the elicitation of causal mechanisms sufficient to observe the behavior, whereas deductive simulation expanded knowledge of the behavior by producing various simulated scenarios. Beyond the cases in which the inductive or the deductive approach clearly comes into view, most studies incorporate both. A simulation study may incorporate a loosely defined idea of the features of the behavior it aims to explain, and this idea guides the modeling of the premises. The deduction of consequences from premises through computer simulation aids the refinement of the description of the behavior of interest. On the other hand, the materialization of surprising or counterintuitive behaviors induces the search for alternative causal mechanisms to modify the original set of premises. The diagram in Figure 1.1 suggests that induction and deduction are often embedded in a cyclical process of discovery. Deduction generates repertoires of patterns of behavior that represent near-histories proceeding from a common deep causal structure. This exercise contributes to theory building by making available ex ante falsifiable hypotheses that connect causal mechanisms to behaviors. Deduction may also create counterintuitive and surprising behaviors that bring about marginal amendments in the modeling of the premises or may trigger inductive processes of revision of the modeled premises. In this case, the discrepancy between expected and simulated behavior is the incentive to refine, or deeply modify, the modeled set of premises by introducing new causal mechanisms into the model.


For example, in their study on population ecology and competition between structurally different populations of organizations, Carroll and Harrison (1994) built a mathematical model, designed a structurally superior population and simulated competition between two populations (one inferior and one superior). Through the simulation study, they demonstrated, in vitro, that the dominance of structurally superior populations may fail to emerge, depending on their timing of entry into the industry. The contribution of this theoretical falsification is to delineate the hypothesis of historical inefficiency, according to which the explanation of an observed behavior is history-dependent and the time at which events happen modifies their expected consequences. Thus, the diagram in Figure 1.1 conveys one of the key ideas that inspire this chapter. Technically speaking, a computer simulation cannot be anything other than a computer-aided process of deduction. This deduction process both unveils not necessarily intuitive cause–effect relationships that are implicitly hidden in the premises and assists the rigorous articulation of appreciative theories. This facilitates researchers in producing testable hypotheses. On the other hand, when deduced behaviors do not match expectations, the mismatch activates an inductive inference that amends the original set of premises. In this respect, I suggest that embedding a computer-based process of deduction into a richer research perspective provides a powerful environment for using computer simulation for theory development.

[Figure 1.1 Logic of inquiry with computer simulation. The diagram depicts a cycle: premises feed a computer simulation; deductive inferences yield testable ex-ante predictions and the articulation of (1) cause–effect relationships and (2) appreciative theories of behavior; counterintuitive consequences and surprise behaviours trigger inductive inferences and the analysis of additional causal mechanisms, which feed back into the premises.]
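The Carroll and Harrison example lends itself to a toy demonstration of historical inefficiency. The logistic two-population sketch below, in which an 'inferior' and a 'superior' population share one carrying capacity, is a minimal stand-in assumed here for their mathematical model; the parameters are arbitrary.

def simulate(entry_time_superior, horizon=300, K=1000.0):
    # two populations share one carrying capacity; the 'superior' grows faster
    n_inf, n_sup = 10.0, 0.0
    for t in range(horizon):
        if t == entry_time_superior:
            n_sup = 10.0
        crowding = max(1.0 - (n_inf + n_sup) / K, 0.0)
        n_inf += 0.05 * n_inf * crowding   # inferior: lower intrinsic growth
        n_sup += 0.10 * n_sup * crowding   # superior: higher intrinsic growth
    return n_inf, n_sup

for entry in (0, 50, 200):
    n_inf, n_sup = simulate(entry)
    print(f"superior enters at t={entry}: inferior={n_inf:.0f}, superior={n_sup:.0f}")

With simultaneous entry the superior population dominates; when it enters after the inferior population has saturated the niche, crowding chokes its growth and the inferior incumbent persists, which is the history-dependent outcome the hypothesis of historical inefficiency names.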


THE INTERACTION OF COMPUTER MODELING AND FIELD RESEARCH

In this last section, I outline some ideas to inspire the use of computer modeling and simulation as a support for theory building associated with field research. This area of methodological development is under-investigated, and hopefully the few directions offered in this chapter may indicate a possible avenue to explore. Theorizing from field research has an acknowledged tradition in social science since the work of, for example, Glaser and Strauss (1967). The authors described a methodology for systematically discovering theory from empirical data in field research. By comparative analysis, researchers first generate conceptual categories and the conceptual properties of these categories; then, they create hypotheses on the relationships among the categories. For this purpose, researchers need to start their research in an empirical setting without any previously structured conceptual category.4 Generating theory with field research has a recognized tradition in management studies as well. Explanatory case studies, as Yin suggests (1994), are aimed at answering how or why questions by eliciting causal links among variables over time. Interestingly, Yin draws a distinction between the case study approach, which he describes, and grounded theorizing as described by Glaser and Strauss. Yin affirms that the key difference is that in a case study researchers use a previously developed theory as a template, and the design of a case study is tantamount to conceiving a theoretical experiment aimed at further articulating the theory. However, along with Glaser and Strauss we consider 'theory as process' (1967: 32), and we propose that the two approaches may coexist as complementary in the development of a theory of behavior. Researchers may start field research with a general interest in a phenomenon, without specific ideas on whether the phenomenon fits a specific theoretical framework. Once the preliminary theoretical positioning is complete, researchers may redirect, refocus and take a specific angle in their empirical research in order to use the research setting to produce theoretical experiments. Independently of the logic adopted, the quality of case study research, as of other research approaches, needs to be judged on the grounds of four elements: construct validity, internal validity, external validity and reliability (Yin 1994: 32–33). In this chapter, we direct our attention to the use of computer modeling and simulation to corroborate the internal validity of a field study.

The Problem of Internal Validity

Given an explanation that infers a causal relationship between two events, internal validity is a judgment on the robustness of that causal relationship.


Thus, a potential threat to internal validity is the existence of spurious effects. When a researcher makes an inference and connects an event to an earlier occurrence, a spurious effect intervenes if the appearance of the observed event is connected instead to another, unobserved occurrence. Yin describes three techniques to improve the internal validity of a case study: pattern matching, explanation building and time series analysis (1994: 35, 106–118). Pattern matching implies the comparison between the predicted and the actual behavior of a variable. When empirically observed results match those predicted by a theory, the case study represents an experiment that corroborates the propositions embedded in the theory. On the other hand, if patterns do not match, the theory has to be questioned. The more articulated the predicted pattern of dependent variables, the more demanding the test of pattern matching and the stronger the test of theoretical propositions. For example, if a prediction involves not one pattern but a variety of patterns for a variety of dependent variables, the matching of those patterns allows for strong causal inferences. Pattern matching also includes independent variables. Researchers may articulate rival explanations that imply different causal mechanisms, different independent variables and different, mutually exclusive, unfolding patterns for independent and dependent variables. The matching between one specific predicted pattern and the observed empirical behavior supports selection among rival explanations. When a complete explanation for the phenomenon under analysis does not exist at the beginning of the study, Yin suggests that the pattern-matching procedure gives way to a more sophisticated protocol named explanation building. Explanation building consists of an iterative process through which researchers gradually build an explanation by making initial theoretical statements and predictions, comparing the predictions with available empirical patterns and revising the statements. Finally, pattern matching can be applied to time series of variables rather than to a chain of chronologically linked events. This kind of analysis is named time series analysis. At the core of the three techniques is the problem of understanding how a causal structure is able to explain observed patterns of behavior. The key theme here is the ability of a researcher to enact and maintain a dialogue between theoretical behaviors, as predicted by an explanation, and observed empirical patterns.

Internal Validity and Computer Modeling and Simulation

In the following, I delineate a research design in which computer modeling and simulation and field research are associated to support theorizing. The research design proceeds in a sequence of steps in which a researcher begins by theorizing from an exploratory field study, translates this preliminary theorizing into a formal model and, through computer simulation, both strengthens the causal structure of the theory and envisions new research sites in which to locate further field studies that serve as experiments to consolidate the theory.

Building of a Preliminary Theoretical Framework

In the sketched approach, the point of departure is an exploratory field study (step 1 in Figure 1.2). The exploratory field study entails the grounding of theorizing in a specific research site. As described by Glaser and Strauss (1967), the field study leads to the building of a theoretical framework by defining conceptual categories, the conceptual properties of the categories and hypotheses regarding the causal relationships among categories. The sketch of the theoretical framework needs to proceed without any '. . . preconceived theory that dictates, prior to the research, "relevancies" in concepts and hypotheses' (Glaser and Strauss 1967: 33). Researchers should select a research site, at this stage, because they are interested in a specific empirical phenomenon, not because the site is appropriate for conducting a theoretical experiment on an existing theory. Of course, it is naïve to propose that a researcher approaches a research site without any previously crystallized theoretical lens. It is plausible to suspect that the theoretical background of researchers, along with the state of the art of the literature to which they aim to contribute, plays a role in the sedimentation of a more or less tacit cognitive filter that steers attention towards one or another research site. The intellectual curiosity that illuminates a specific research site is motivated by interest in an observed phenomenon, and it is likely that this interest is, at least implicitly, driven by the fact that the phenomenon is an empirical instance that confirms or disconfirms a prior theory. Theorizing can hardly be totally disconnected from the relevant literature, because what captures the attention of a researcher is the observation that a conceptual category is empirically associated with a property different from the one expected, that two conceptual categories are empirically linked by a counterintuitive causal relationship or that an empirical phenomenon escapes previous conceptualizations. It is not the purpose of this chapter to dwell on the delicate dispute regarding the selection of a research site; neither does this chapter give attention to how a researcher extracts a preliminary theoretical framework from a field study. We simply assume that a preliminary, empirically grounded theoretical framework exists, including a number of conceptual categories, the conceptual properties that characterize the categories and a number of tentative hypotheses on the causal relationships among categories.


[Figure 1.2 A proposal to associate computer simulation and field research. The diagram links, in ten numbered steps: (1) an exploratory field study that grounds (2) a theoretical framework and (3) the calibration of parameters for (4) a computer model; (5) pattern matching between simulation runs and field data, with longitudinal and cross-sectional articulation of hypotheses; when runs match history, (6) sensitivity analysis probes the role of specific constructs, (7) generates history-divergent simulation runs and (8) points to further case studies used as experiments; when runs diverge from history, (9) sensitivity analysis with different model calibrations and (10) structural changes to the model follow.]

Building the Computer Model

Once the exploratory field study has generated a preliminary theoretical framework, the second step implies the transformation of appreciative theorizing into a set of formal propositions (step 2). At the end of this second step, a computer model embodies the preliminary theoretical framework. This is a subtle endeavor that requires the transformation of conceptual categories into measurable constructs that reflect their theoretical properties, and the formalization of the causal links among constructs. Causal mechanisms included in a computer model may originate from two sources. They may be formalizations built upon a researcher's interpretation of verbal descriptions collected during the field study, or they may be formalizations that replicate either existing formal theories or descriptions of processes that already exist in a quantitative format. Take, for example, a researcher who conducts a field study to explain the process of strategy formation in large firms. If a firm makes available to the researcher memos, blueprints and manuals with already formalized decision-making routines, then the formalization is likely to adhere more realistically to the empirical setting under scrutiny. More often, however, the researcher needs to translate verbal descriptions of operating organizational routines into formal modeling. Furthermore, let us suppose that the researcher wants to include in the theorizing the behavior of financial markets that respond to the focal organization's financial performances by allowing credit. In this case, the modeler may take advantage of existing theories of financial markets and include in the model the formalizations that the literature provides to capture the expected behavior of financial markets. In this case, the use of an existing theory does not really violate the mandate, stated at the beginning, to initiate exploratory field research without preconceived theory, since the theory employed regards the behavior of financial markets, not the object of study of the research, which is the process of strategy formation. In other words, in this case the researcher borrows elements from the theory of financial markets' behavior to complete the description of the environmental context in which the object of study—the firm—operates. In addition, the field study may be helpful in providing the researcher with information to be used for a provisional calibration of the model's parameters (step 3). The calibration will be useful in the next step of the described research protocol, which requires the simulation of the formal model. As Kaplan (1964) suggests, '. . . [I]n all simulation experiments the fundamental problem is that of "scaling"—that is, the translating of results from a simulation model to the real world'. The grounding and calibration of a simulation model on a case study facilitates the process of translating the abstract insights of a formal model into real-world problems.
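As a hypothetical illustration of this translation step, suppose the field study yields the verbal routine 'the bank widens the credit line when the firm's recent profitability improves and tightens it when profitability deteriorates'. One possible, and deliberately debatable, formalization is a proportional adjustment rule; the function name, the linear form and the coefficients below are modelling choices invented for the example.

def credit_line(previous_line, profit_margin, target_margin=0.08, sensitivity=0.5):
    # hypothetical lender rule: adjust the line in proportion to the gap
    # between observed and target profitability (a verbal routine made formal)
    adjustment = sensitivity * (profit_margin - target_margin) / target_margin
    return max(previous_line * (1.0 + adjustment), 0.0)

line = 100.0
for margin in (0.10, 0.06, 0.02):   # a deteriorating-margin scenario
    line = credit_line(line, margin)
    print(f"margin {margin:.2f} -> credit line {line:.1f}")

Writing the rule down in this form exposes exactly the parameters (the target margin, the sensitivity) whose calibration the field data must inform, and exactly the functional assumptions (linearity, symmetry of response) that later pattern matching can challenge.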

Longitudinal and Cross-Sectional Articulation of Hypotheses with Computer Simulation

The fourth step entails the use of the formal model to produce simulation runs that describe the behavioral implications of the causal relationships that originated from the preliminary theorizing. If the theoretical framework that a researcher has built to explain the observed behaviors is correct, simulation runs tend to replicate the observed behaviors. In this light, computer simulation supports researchers in using data from field studies to detect fallacies in the underpinning logic and to test a theoretical framework. The use of computer simulation as a tool to derive behavioral consequences from stated assumptions brings about a number of advantages. First, in general, computer simulation generates time series. This may be of some help when the time series can be compared directly with real-world quantitative figures, for example financial figures extracted from balance sheets and economic reports. In this case, the availability of real and simulated time series in a similar quantitative format facilitates pattern matching by giving a researcher the possibility of generating a measure of how well predicted events match empirical instances of those events (Sterman 1984). Second, computer simulation allows for a rigorous longitudinal articulation of predicted behaviors. In other words, the computer-aided process of deduction goes far beyond the human capability to appreciate the long-term features of the behavior of selected variables. Thus, computer simulation can support researchers in predicting complex patterns of behavior such as peaks and troughs, oscillations with different characteristics and changes in the rate of growth or decline. Third, by simulating a formal model, researchers can articulate their predictions by contemporaneously producing the behavior of different variables and the interactions among them. In particular, researchers can simulate the interaction of independent and dependent variables in each time step, along a given time horizon. This cross-sectional articulation of predictions increases the points of contact between the theoretical propositions and the empirical world of the case study. As Kaplan suggests, 'What counts in the validation of a theory, so far as fitting the facts are concerned, is the convergence of the data brought to bear upon it [ . . . ]' (1964: 314). I argue that computer simulation expands the terrain where the comparison between theory and empirical setting takes place by generating a rich longitudinal and cross-sectional articulation of predictions. In this light, the convergence of data and the concatenation of events that one must obtain in order to use a case study to confirm a theory become increasingly demanding. In this respect, computer simulation aids researchers in designing case studies that produce demanding experiments in which falsifying the theory is easier because fitting the facts becomes increasingly hard. Of course, on the other hand, had empirically collected facts fit into such a complex web of interwoven simulated behaviors, the experiment would provide stronger evidence to confirm the propositions contained in the theory. In Figure 1.3, we imagine starting a field study with the objective of exploring the increase in profits empirically observed in the time period between t1 and t2. A researcher can build a variety of hypotheses to explain the behavior. These hypotheses can be formalized into a computer model. Yet there might be a large collection of computer models that are able, for different reasons, to produce a behavior similar to the one observed. However, once we use computer simulation to articulate the behavioral implications of the model beyond the observed time span t1–t2 (longitudinal articulation) and for both profits and other variables such as revenues and market share (cross-sectional articulation), then the theory of behavior captured in the model becomes more complex and easier to falsify by further collection of empirical instances regarding market share and revenues. In this respect, in the example of Figure 1.3, longitudinal articulation of the behavior of the variables of interest—that is, the generation of a hypothesis of behavior that extends beyond the originally considered time span t1–t2—and cross-sectional articulation—that is, the generation of hypothetical behavior for a variety of relevant variables—orient further data collection and increase the falsifiability of a theoretical framework. This process of data collection to falsify formalized theoretical hypotheses narrows down the set of candidate explanatory models.

[Figure 1.3 Integration of collected and simulated patterns of behavior. The chart plots simulated trajectories of revenues, market share and profits over time; only the segment of the profit series between t1 and t2 is empirically observed.]
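For the quantitative comparison of simulated and historical time series, Sterman (1984) proposes summary statistics such as Theil's inequality decomposition, which splits the mean square error between the two series into bias, unequal-variance and unequal-covariance components. The sketch below assumes two equal-length series; the profit figures are invented for the example.

import numpy as np

def theil_statistics(simulated, actual):
    # decompose the mean square error between simulated (S) and actual (A)
    S, A = np.asarray(simulated, float), np.asarray(actual, float)
    mse = np.mean((S - A) ** 2)
    bias = (S.mean() - A.mean()) ** 2
    variance = (S.std() - A.std()) ** 2
    r = np.corrcoef(S, A)[0, 1]
    covariance = 2.0 * (1.0 - r) * S.std() * A.std()
    return {"U_bias": bias / mse, "U_variance": variance / mse,
            "U_covariance": covariance / mse}

actual    = [10, 12, 15, 19, 24, 30]   # hypothetical historical profits
simulated = [12, 14, 17, 21, 26, 32]   # tracks the trend, misses the level
print(theil_statistics(simulated, actual))

In this invented example the error loads entirely on the bias component, signalling a systematic level shift rather than a failure to track the trend: the kind of diagnostic that tells a researcher whether a mismatch calls for recalibration or for structural revision.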

Pattern Matching

Now we turn to the process that involves the analysis of the match between simulated and empirically observed patterns of behavior (step 5 in Figure 1.2). In particular, we investigate this process by looking at two cases. The first case is when the field study confirms the predictions made through computer simulation. The second case applies when a researcher reports a mismatch between computer-generated predictions and empirically collected evidence and time series.

Sensitivity Analysis and History-Convergent Runs

When simulated and historical patterns of behavior match, computer simulation can be used as a laboratory to produce sensitivity analysis (step 6). Field cases are retrospective studies. Retrospective studies explain, ex post, how a set of variables interacted to drive an observed behavior of interest. However, it can become troublesome to ascertain the extent to which a theoretical explanatory model and the observed behavior are linked. This difficulty is explained by the fact that retrospective studies are not particularly efficient in connecting causes and effects (Leonard-Barton 1990).


If, for example, we are aware that two variables affect the observed behavior, given the complex web of interactions in which these variables are embedded, it might be hard to determine their relative strengths. It might be the case that the influence of one of these two variables is insignificant and could be omitted from the analysis to satisfy the criterion of parsimony for a good theory (Eisenhardt 1989). To investigate further the importance of that variable, an experiment can be run to detect what happens if the variable is omitted from the model. Thus, sensitivity analysis helps in revising a theoretical explanation by suggesting that specific constructs are not necessary to explain a behavior whereas others are fundamental, since changes in their calibrations produce simulated behavior that diverges from the one observed (step 7 in Figure 1.2). In addition, the intentional generation of history-divergent simulation runs orients further empirical inquiry by indicating new potential research sites. Indeed, in a new site that resembles the simulation settings adopted in the sensitivity analysis, a researcher can test whether, given the characteristics of the new site, a behavior closer to the history-divergent run is observed (step 8 in Figure 1.2). For example, some longitudinal event studies have compared polar cases, that is, cases of organizations that have shown opposite behaviors in responding to an identical exogenous stimulus, and have explained the different unfolding of their histories as the result of different initial conditions (Noda 1994; Noda and Bower 1996). What we suggest is that using sensitivity analysis to generate history-divergent runs may help to illuminate the potential of a research site to become a polar case in which, given a change in some key features of the research context, a behavior divergent from the one observed in the original field study ensues. In general, simulation, by connecting a theoretical structure to a variety of possible emerging, often unexpected, behaviors, activates dormant consequences of a theory that were not observed in the original empirical study. This generation of a distribution of unrealized histories both strengthens the understanding of causal structures and points to areas for further empirical investigation. Field research conducted in these sites represents further theoretical experiments that reinforce the internal validity of a theory. Thus, computer simulation helps to validate a theory by supporting a researcher in demonstrating that a common theoretical engine may explain a repertoire of different behaviors in different empirical contexts. In this vein, the coupling of field study and computer simulation speaks to the problem of learning from samples of one or fewer, as presented by March et al. (1991). If we consider a case study as an experiment, computer simulation allows learning from this experiment by exploring how small changes in some conditions generate different behaviors. These behaviors are near-histories that may materialize and become visible in other empirical contexts.
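The protocol of steps 6 to 8 reduces to a generic skeleton: rerun the model while switching a candidate mechanism off, or rescaling a parameter, and measure each run's divergence from the history-matching baseline. The two-parameter run_model below is a hypothetical stand-in for whatever model the researcher has built; parameter names and values are invented.

def run_model(inertia=0.6, search_intensity=0.3, horizon=40):
    # stand-in for the researcher's calibrated model: returns a time series
    x, series = 1.0, []
    for _ in range(horizon):
        x = inertia * x + search_intensity * (2.0 - x)  # adjust toward a goal
        series.append(x)
    return series

baseline = run_model()

# vary one parameter at a time; a construct whose removal (value 0.0) leaves
# behavior unchanged is a candidate for omission on grounds of parsimony
for name, values in {"inertia": (0.0, 0.3, 0.9),
                     "search_intensity": (0.0, 0.15, 0.6)}.items():
    for value in values:
        series = run_model(**{name: value})
        gap = max(abs(s - b) for s, b in zip(series, baseline))
        print(f"{name}={value}: max divergence from baseline = {gap:.3f}")

Runs whose divergence is large identify the settings that would make a new research site a polar case; runs indistinguishable from the baseline flag constructs that the theory may not need.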


In addition, when a theoretical argument includes the mention of specific parameter calibrations as necessary conditions for a predicted behavior to emerge, changes in the parameters' calibration can represent a further test of the theory encapsulated in the computer model. For example, Malerba et al. modified parameters' calibrations in order to test that model calibrations 'that are counter to those argued as strongly causal in the appreciative theory should obtain history-divergent results' (1999: 35).

Sensitivity Analysis and History-Divergent Runs

Finally, we address the case in which a researcher observes a mismatch between computer-generated and empirically observed events and time series. In this case, the problem is to understand why the behaviors diverge. The idea here is that computer modeling and simulation provide a theoretical laboratory that is relatively easy to manipulate in order to investigate the origins of the discrepancy between simulated predictions and observed behaviors. In this respect, I agree with Malerba et al. (1999) in suggesting that computer simulation provides an appropriate terrain on which to nurture a friendly dialogue between empirical evidence and theory. When history-divergent simulations appear, the researcher tries to explain where the discrepancies come from. Investigators can intervene on the structure of a computer model or on the calibration of the model's parameters and rigorously deduce whether these interventions narrow the gap between predicted and actual behaviors. Pressures for historical and simulated behaviors to diverge arise in two cases. The first arises when the causal structure of the theory captured in the computer model is isomorphic to the causal relationships at work in a specific empirical context and the discrepancy is the consequence of flaws in the specification of the parameters' calibrations. The second pressure for historical and simulated behaviors to diverge arises when the causal structure of the theory and the causal relationships at work in the real world are not isomorphic in some respects. This may be the result either of the fact that a researcher has not properly formalized a theoretical argument arising from a field study or of the fact that the researcher was not able to select the key causal mechanisms at work in the case studied. The first direction to explore is the sensitivity analysis of the model's behaviors to changes in parameters, to check whether simulating the model with a new calibration improves the match between simulated and observed behaviors (step 9). The fact that the fit between simulation and empirical data is improved by manipulating a model's parameters points at two areas of analysis. First, it may suggest that the model is characterized by non-linear causal relationships among variables, so that slightly different calibrations yield very different emerging behaviors. Second, the causal structure at work may include positive feedback among variables, so that the initial calibration of variables has a mounting weight in molding unfolding patterns of behavior.


For example, in the previously mentioned study by Carroll and Harrison (1994), positive feedback among key variables generates history-dependent behavior of organizational populations that emerges as strongly dependent upon the calibration of one model parameter: the time of entry of populations into the simulated environment. Finally, in general, the fact that a computer model produces history-replicating simulation runs only after implausible values are assigned to parameters casts an alarming light on the robustness of the causal structure of the theory. The second avenue by which to explore the discrepancy between predictions and observed behaviors is the analysis of the structure of the model, that is, of the causal relationships among variables that are deemed necessary to produce the behaviors of interest. Different formalizations may exist for specific relationships, and including one or the other in the model may have different behavioral implications. To revise a formalization, researchers need to go back and compare the formal structure of the computer model with the real processes at work in the case study. This further investigation acts as a catalyst to define possible amendments to the theory (step 10). In beginning the analysis of the discrepancy between the structure of the computer model and the structure of the phenomenon under study, those formalizations that are directly obtained from sufficiently clear and less questionable descriptions are not good candidates in the search for the origins of the discrepancies. The researcher ought instead to generate alternative formulations for those descriptions that were originally provided in verbal form and thus required a more dubious and arguable interpretation to be formalized. The fairly intuitive idea here is that formalizations that required a researcher's translation of verbal descriptions into quantitative formulations are more debatable, more prone to conceal misinterpretation and hence good candidates for the analysis of history-divergent simulations. However, such an instinctive expedient ought not to veil another potential source of history-divergent simulation runs, which materializes when firms describe processes on the basis of existing formal procedures whereas everyday activity is grounded in informal and tacit routines that differ from those crystallized in official manuals and blueprints.

CONCLUSION

As Montgomery et al. suggested almost 20 years ago (1989), a serious problem that may compromise the quality of theory development in strategy and organization is looseness and a lack of logical consistency in developing implications from a set of assumptions, where '[s]mall changes in assumptions or parameters can alter dramatically the implications of a model' (Montgomery et al. 1989: 192).


In this chapter I propose that computer modeling and simulation support theory generation in managerial studies and, in general, in the social sciences by helping to amend the critical shortcomings that emerge in theory development when implications are not rigorously derived from assumptions. In particular, computer modeling forces researchers to tease out their theoretical argument unambiguously. A simulation experiment entails the formalization of a theory. Formalization enhances simplicity and parsimony and helps to clarify the morphology and sharpen the discussion of the theory, thereby supporting both its audit trail (Saloner 1994: 170) and its communication. In this respect, I suggest that the formalization and computer simulation of a theory represent devices that support communication among scholars of different disciplines. Furthermore, the discourse articulated in this chapter suggests that computer modeling and simulation offer a helpful tool for enhancing the quality of field-based theory building. Field research and simulation studies, although both have strong roots in management and organization theory research, have not often been used in combination. This chapter contains the sketch of a research protocol that integrates simulation-based research and field study. The idea that motivates this attempt is that computer simulation, by producing artificial time series that are directly comparable with real time series, sets the basis for a fruitful dialogue between the observation of empirical patterns of behavior and the modeling of theoretical hypotheses. As Cohen and Cyert suggest (1961), this dialogue both strengthens the theoretical argument and directs field research:

The requirements of a computer model can provide a theoretical framework for an empirical investigation, and, in return, the empirical information is utilized in developing a flow diagram for the model. Through this process of working back and forth, it is possible to know when enough empirical information has been gathered and whether it is of the proper quality. (Cohen and Cyert 1961: 127)

In the iterated process of pattern matching, structural adjustment and theory refinement delineated in the foregoing, the field study informs the computer model, and the latter puts empirical research on the right track. In addition, in this iteration it no longer matters whether a formalized theory, an exploratory case study or a verbal theory of behavior comes first; existing theoretical propositions and the computer model are interlaced in a cycle of mutual enrichment. Take as an example Sastry's analysis of Tushman and Romanelli's theory of punctuated change. Sastry built her formalization upon an existing verbal theory of behavior. The theory connects a number of causal mechanisms to an emergent behavior with specific features—punctuated organizational change. The point of departure is an existing verbal theory, which was built on previous field studies. Similarly, in Malerba et al. (1999) the point of departure was an appreciative theory of computer industry evolution.


In both cases, a verbal or appreciative theory of behavior exists, and the comparison between simulated and historical patterns of behavior is then the trigger to translate existing theorizing into a formalized and more reliable set of hypotheses. Once the dialogue between computer model and field research kicks off, the critical issue is whether the researcher is able to mediate the dialogue by pursuing two critical processes. First, the researcher has to feed the model with the information extracted from the case study. Second, the researcher needs to understand what information the observed gap between simulated and historical behavior provides that can be utilized both to indicate further research sites and to refine the underlying theoretical argument. Associated with a simulation study, the field study is not merely a retrospective photograph of what has happened; rather, it becomes a living picture illustrating what could have happened in different circumstances. By capturing in a simulation model the rich but static appreciative theorizing grounded on a field study, the researcher can build a laboratory where simulation experiments are used to elicit missing variables and hidden assumptions and to test the emerging theory for internal consistency (Langley 1999). This dialogue between available empirical data, in the form of detailed descriptions of observed behavior, and a theory, or a set of hypotheses, establishes the premises for developing sound theories of behavior.

NOTES

1. Dorfman (1960: 603) recommends computer simulation as particularly useful in the area of general systems analysis and in problems that involve inventory and queuing management. In particular, Dorfman explains that the operations researcher tries to simplify ' . . . his problems as much as he dares (sometimes more than he should dare), applies the most powerful analytic tools at his command and, with luck, just squeaks through. But what if all established methods fail, either because the problem cannot be forced into one of the standard types or because, after all acceptable simplifications, it is still so large or complicated that the equations describing it cannot be solved? When he finds himself in this fix, the operations analyst falls back on "simulation" or "gaming"'.
2. I use the term ideal type to refer to a general abstract rule that crystallizes selected traits and attitudes in the design of simulation studies; more specifically, as Weber suggests, it is a '. . . one-sided accentuation of one or more points of view' (Weber 1904/1949: 90). Though abstract, this stylized and theoretical description retains a heuristic value because it provides a criterion for investigating the heterogeneous panoply of simulation-based research by means of comparative analysis.
3. In Cohen, March and Olsen's garbage can model, organized anarchies are characterized by three general properties: problematic preferences (an inconsistent and ill-defined set of preferences), unclear technology and fluid participation.
4. In their view, '[a]n effective strategy is, at first, literally to ignore the literature of theory and fact on the area under study, in order to assure that the emergence of categories will not be contaminated by concepts more suited to different area' (Glaser and Strauss 1967: 37).


BIBLIOGRAPHY

Abrahamson, E. and Rosenkopf, L. (1993) 'Institutional and competitive bandwagons: using mathematical modelling as a tool to explore innovation diffusion', Academy of Management Review, 18 (3): 487–517.
Adner, R. (2002) 'When are technologies disruptive? A demand-based view of the emergence of competition', Strategic Management Journal, 23 (8): 667–88.
Axelrod, R. (1984) The Evolution of Cooperation, New York: Basic Books.
Axelrod, R. (1997) The Complexity of Cooperation: agent-based models of competition and collaboration, Princeton, NJ: Princeton University Press.
Axtell, R.L. (1999) 'The emergence of firms in a population of agents', working paper 99–03–019, Santa Fe Institute, Santa Fe, New Mexico.
Bowman, H.R. and Fetter, R.B. (1957) Analysis for Production Management, Homewood, IL: Irwin.
Carroll, G.R. and Harrison, J.R. (1994) 'On the historical efficiency of competition between organizational populations', The American Journal of Sociology, 100 (3): 720–49.
Churchman, C.W., Ackoff, R.L. and Arnoff, E.L. (1957) Introduction to Operations Research, New York: Wiley.
Clarkson, G.P.E. and Simon, H.A. (1960) 'Simulation of individual and group behavior', The American Economic Review, 50 (5): 920–32.
Coe, R.M. (1964) 'Conflict, interference, and aggression: computer simulation of a social process', Behavioral Science, 9 (2): 186–97.
Cohen, K.J. (1960a) Computer Models of the Shoe, Leather, Hide Sequence, Englewood Cliffs, NJ: Prentice-Hall.
Cohen, K.J. (1960b) 'Simulation of the firm', The American Economic Review, 50 (2), Papers and Proceedings of the Seventy-second Annual Meeting of the American Economic Association: 534–40.
Cohen, K.J. (1961) 'Two approaches to computer simulation', The Journal of the Academy of Management, 4 (1): 43–9.
Cohen, K. and Cyert, R.M. (1961) 'Computer models in dynamic economics', The Quarterly Journal of Economics, 75 (1): 112–27.
Cohen, M.D., March, J.G. and Olsen, J.P. (1972) 'A garbage can model of organizational choice', Administrative Science Quarterly, 17 (1): 1–25.
Cyert, R.M., Feigenbaum, E.A. and March, J.G. (1959) 'Models in a behavioral theory of the firm', Behavioral Science, 4 (2): 81–95.
Cyert, R.M. and March, J.G. (1963) A Behavioral Theory of the Firm, Englewood Cliffs, NJ: Prentice-Hall.
Davis, J.P., Eisenhardt, K.M. and Bingham, C.B. (2007) 'Developing theory through simulation methods', Academy of Management Review, 32 (2): 480–99.
Dorfman, R. (1960) 'Operations research', American Economic Review, 50 (4): 575–623.
Edmonds, B., Hernandez, C. and Troitzsch, K. (eds) (2007) Social Simulation: technologies, advances and new discoveries, Minneapolis, MN: IGI.
Eguiluz, V., Zimmermann, M.G., Cela-Conde, C.J. and San Miguel, M. (2005) 'Cooperation and the emergence of role differentiation in the dynamics of social networks', American Journal of Sociology, 110 (4): 977–1008.
Eisenhardt, K. (1989) 'Building theories from case study research', Academy of Management Review, 14 (4): 532–50.
Epstein, J.M. and Axtell, R. (1996) Growing Artificial Societies: social science from the bottom up, Cambridge, MA: MIT Press.
Forrester, J.W. (1958) 'Industrial dynamics: a major breakthrough for decision makers', Harvard Business Review, 36 (4): 37–66.
Forrester, J.W. (1961) Industrial Dynamics, Cambridge, MA: MIT Press.


Forrester, J.W. (1968) 'Market growth as influenced by capital investments', Industrial Management Review, 9 (2): 83–105.
Gavetti, G. and Levinthal, D. (2000) 'Looking forward and looking backward: cognitive and experiential learning', Administrative Science Quarterly, 45 (1): 113–37.
Gilbert, G.N. and Conte, R. (eds) (1995) Artificial Societies: the computer simulation of social life, London: UCL Press.
Gilbert, G.N. and Doran, J.E. (eds) (1994) Simulating Societies: the computer simulation of social phenomena, London: UCL Press.
Gilbert, G.N. and Troitzsch, K.G. (eds) (2005) Simulation for the Social Scientist, London: Open University Press.
Glaser, B.G. and Strauss, A.L. (1967) The Discovery of Grounded Theory, Hawthorne, NY: Aldine de Gruyter.
Gullahorn, J.T. and Gullahorn, J.E. (1963) 'A computer model of elementary social behavior', in E.A. Feigenbaum and J. Feldman (eds) Computers and Thought, New York: McGraw-Hill, 375–86.
Hanaki, N., Peterhansl, A., Dodds, P.S. and Watts, D.J. (2007) 'Cooperating in evolving social networks', Management Science, 53 (7): 1243–48.
Hanneman, R.A., Collins, R. and Mordt, G. (1995) 'Discovering theory dynamics by computer: experiments on state legitimacy and imperialist capitalism', Sociological Methodology, 25: 1–46.
Hartshorne, C. and Weiss, P. (eds) (1931/1935) Collected Papers of Charles Sanders Peirce, vol. 2, Cambridge, MA: Harvard University Press, 374.
Hoggatt, A.C. (1957) 'Simulation of the firm', Research Paper RC-16, IBM Research Center, Poughkeepsie, NY.
Homans, G.C. (1950) The Human Group, New York: Harcourt.
Huff, J.O., Huff, A.S. and Thomas, H. (1992) 'Strategic renewal and the interaction of cumulative stress and inertia', Strategic Management Journal, 13 (S1): 55–75.
Kaplan, A. (1964) The Conduct of Inquiry, San Francisco: Chandler.
Klein, L.R. (1953) A Textbook of Econometrics, Evanston, IL.
Langley, A. (1999) 'Strategies for theorizing from process data', Academy of Management Review, 24 (4): 691–710.
Lant, T.K. and Mezias, S.J. (1990) 'Managing discontinuous change: a simulation study of organizational learning and entrepreneurship', Strategic Management Journal, 11 (2): 147–79.
Lant, T.K. and Mezias, S.J. (1992) 'An organizational learning model of convergence and reorientation', Organization Science, 3 (1): 47–71.
Lee, J., Lee, K. and Rho, S. (2002) 'An evolutionary perspective on strategic group emergence: a genetic algorithm-based model', Strategic Management Journal, 23 (8): 727–46.
Leonard-Barton, D. (1990) 'A dual methodology for case studies: synergistic use of a longitudinal single site with replicated multiple sites', Organization Science, 1 (3): 248–66.
Liebrand, W.B.G., Nowak, A. and Hegselmann, R. (eds) (1998) Computer Modeling of Social Processes, London: Sage Publications.
Lomborg, B. (1996) 'Nucleus and shield: the evolution of social structure in the iterated prisoner's dilemma', American Sociological Review, 61 (2): 278–307.
Lomi, A. and Larsen, E.R. (1996) 'Interacting locally and evolving globally: a computational approach to the dynamics of organizational populations', Academy of Management Journal, 39 (5): 1287–1321.
Macy, M. and Skvoretz, J. (1998) 'The evolution of trust and cooperation between strangers: a computational model', American Sociological Review, 63 (5): 638–60.


Malerba, F., Nelson, R., Orsenigo, L. and Winter, S. (1999) ‘“History-friendly” models of industry evolution: the computer industry’, Industrial and Corporate Change, 8 (1): 3–40.
March, J.G., Sproull, L.S. and Tamuz, M. (1991) ‘Learning from a sample of one or fewer’, Organization Science, 2 (1): 1–13.
McPhee, W.N. (1961) ‘A note on a campaign simulator’, Public Opinion Quarterly, 25 (2): 184–93.
Meinhart, W.A. (1966) ‘Artificial intelligence, computer simulation of human cognitive and social processes, and management thought’, The Academy of Management Journal, 9 (4): 294–307.
Mezias, S.J. and Glynn, M.A. (1993) ‘The three faces of corporate renewal: institution, revolution, and evolution’, Strategic Management Journal, 14 (2): 77–101.
Montgomery, C.A., Wernerfelt, B. and Balakrishnan, S. (1989) ‘Strategy content and the research process: a critique and summary’, Strategic Management Journal, 10 (2): 189–97.
Morecroft, J.D.W. (1983) ‘System dynamics: portraying bounded rationality’, Omega, The International Journal of Management Science, 11 (2): 131–42.
Nettle, D. and Dunbar, R.I.M. (1997) ‘Social markers and the evolution of reciprocal exchange’, Current Anthropology, 38 (1): 93–9.
Noda, T. (1994) ‘Intra-organizational strategy process and the evolution of intra-industry firm diversity: a comparative study of wireless communications business development in the seven Bell regional holding companies’, unpublished doctoral dissertation, Harvard University Graduate School of Business Administration.
Noda, T. and Bower, J.L. (1996) ‘Strategy making as iterated processes of resource allocation’, Strategic Management Journal, 17, Special Issue: Evolutionary Perspectives on Strategy: 159–92.
Orcutt, G.H. (1960) ‘Simulation of economic systems’, The American Economic Review, 50 (5): 893–907.
Orcutt, G.H., Greenberger, M. and Rivlin, A.M. (1958) Decision-Unit Models and Simulation of the United States Economy, Mimeo, Harvard University.
Rahmandad, H. and Sterman, J. (2004) ‘Heterogeneity and network structure in the dynamics of diffusion: comparing agent-based and differential equation models’, Massachusetts Institute of Technology Engineering Systems Division, Working Paper Series, ESD-WP-2004–05.
Saloner, G. (1994) ‘Game theory and strategic management: contributions, applications, and limitations’, in R.P. Rumelt, D.E. Schendel and D.J. Teece (eds) Fundamental Issues in Strategy: a research agenda, Boston: Harvard Business School Press.
Sastry, M.A. (1997) ‘Problems and paradoxes in a model of punctuated organizational change’, Administrative Science Quarterly, 42: 237–75.
Shubik, M. (1960) ‘Simulation of the industry and the firm’, The American Economic Review, 50 (5): 908–19.
Simon, H.A. (1952) ‘A formal theory of interaction in social groups’, American Sociological Review, 17 (2): 202–11.
Sterman, J.D. (1984) ‘Appropriate summary statistics for evaluating the historical fit of system dynamics models’, Dynamica, 10: 51–66.
Troitzsch, K.G. (1998) ‘Multilevel process modelling in the social sciences: mathematical analysis and computer simulation’, in W.B.G. Liebrand, A. Nowak and R. Hegselmann (eds) Computer Modeling of Social Processes, London: Sage.
Troitzsch, K.G., Mueller, U., Gilbert, G.N. and Doran, J.E. (eds) (1996) Social Science Microsimulation, Berlin: Springer.


Tushman, M.L. and Romanelli, E. (1985) ‘Organizational evolution: a metamorphosis model of convergence and reorientation’, in L.L. Cummings and B.M. Staw (eds) Research in Organizational Behavior, vol. 7, Greenwich, CT: JAI Press, 171–222.
Vazsonyi, A. (1958) Scientific Programming in Business and Industry, New York: Wiley.
Weber, M. (1904/1949) ‘Objectivity in social science and social policy’, in E.A. Shils and H.A. Finch (eds/trans.) The Methodology of the Social Sciences, New York: Free Press.
Yin, R.K. (1994) Case Study Research, Thousand Oaks, CA: Sage.

2

Computational Modeling and Social Theory—The Dangers of Numerical Representation

Bruce Edmonds

INTRODUCTION

This chapter aims to stop a certain group of people from having so much fun. The group in question are those who produce toy models of social phenomena—that is, models that are really useful only in the sense that they are good for playing with. It aims to do this in two different ways: first, by re-examining the criteria under which such models are judged and, second, by reviewing the weaknesses of numerical representation (including some of the problems it exacerbates). In this way I hope to help dispel the “number blindness” that seems to play a part in allowing such toy models to escape the criticism they deserve. It is fun making models and playing with them; it can help train our intuition. However, if we are to make real progress in producing useful models of observed social phenomena (e.g., that of exchange), then a more critical and careful approach is called for.

This chapter starts by analyzing the modeling process and then looks at some of the ways in which such modeling might be useful in a scientific sense. This analysis leads to a kind of model that fails to satisfy any set of criteria—the floating model. It goes on to examine our use of numerical representation in models, separating out the different kinds of number, then arguing that the inappropriate use of numbers makes some of the more fundamental shortcomings of floating models worse. It ends with some examples indicating that numerical representation is not the only option; structural representations (using computational simulation) may often be more appropriate and effective.

ABOUT MODELING

Modeling can be done for many purposes. Here we are interested in models that help us to understand and/or predict aspects of observed socioeconomic systems. Thus I will not be concerned with those models whose primary goal is, for example, to be neat, illustrative or entertaining, but rather with those whose goal can be broadly called a scientific goal.


Figure 2.1 The basic modeling relation (measurement/encoding from the natural process into the model; inference using the model; prediction/decoding back onto the natural process).

The most basic kind of model is one that is supposed to relate directly to the data, in an almost mechanical way. This is the classic idea of a model illustrated in Figure 2.1, following Hesse (1963), Rosen (1985) and Hughes (1997). In this picture something is a model of something else if the diagram in Figure 2.1 commutes, which is to say that if you follow the two different routes around the rectangle then you get the same result. Frequently in the natural sciences, the model has been encoded as a set of mathematical equations, with numbers used as the basic currency for initial conditions and predictions. It was (and is) basically the job of mathematicians to discover and check the inferential machinery and the job of scientists to discover and check the mappings to and from the mathematical model. In a computational simulation the model is encoded as a program and the inference is performed by executing that program. In a declarative computational simulation the model is encoded as a set of logical statements and relations and the inference is done by the inference engine acting on these statements. In either case, it is the job of computer scientists to design and check the simulation process. Figure 2.2 is essentially Figure 2.1, but with the steps as performed with a simulation.

Figure 2.2 Modeling steps with a simulation (abstraction from the dynamics of the target system into a simulation by design; simulation run; analysis of results and interpretation back onto the target system).


Of course, the aforementioned characterization of the modeling process is simplified, in part because it is rare that such direct modeling is attempted. Typically several other layers of models are involved, which are not always made explicit. For example, in many papers it is not the target system that is modeled but rather an abstraction of that system, which is related to the target system in a vaguer way. In other words, it is the mental model that the modeler has of the phenomena that is represented in the simulation model. Thus we have the picture in Figure 2.3. The simulation is a formal model of the abstraction and the abstraction is an analogical model of the target system (or a class of target systems). In such a case there will be more stages to go through before we are able to infer something useful about the target system. First we abstract from the target system (abstraction); then we use simulations to improve our understanding of our abstraction (design, inference, analysis and interpretation); and finally we apply our knowledge of the abstract system to the target system (application).

Figure 2.3 Two stages of abstraction in the modeling process (abstraction, by analogy, from the dynamics of a class of target systems; abstraction into a simulation by design; simulation run; analysis of results and interpretation back onto the target system).

This sort of “modeling at one remove” can be an aid to decision making: the computational modeling is used to hone one’s intuitions, and this increased understanding is implicitly utilized when making decisions with regard to the target system. This is what Moss et al. (1994) discovered occurred in the development and use of UK macroeconomic models—the direct predictions of the models were almost completely useless on their own, but could be helpful to the experts who used them as an aid for thinking about the issues. However this does not result in transferable scientific understanding of the phenomena but rather in a personal understanding in the mind of the person who plays with the model. It gives that person a richer and more intricate way of thinking about the target phenomena.

The most common error in this sort of modeling is that modelers conflate the abstraction and the target in their minds and attempt to interpret the results of their computer simulation directly in terms of the target system. That is, they confuse their personal understanding with a scientific understanding. This can manifest itself in overly strong conclusions in simulation work: for example, at the end of Chapter 5 of Axelrod (1984) it says: ‘In this chapter Darwin’s emphasis on individual advancement has been formalized in terms of game theory. This formulation establishes conditions under which cooperation based on reciprocity can evolve. . . .’

The determination of the relevant abstraction (the middle layer of Figure 2.3) can itself be analyzed in two parts: deciding on the language/theoretical framework in which the abstraction will be formulated, and formulating the abstraction within that framework.1 The human cost of searching among candidate formal frameworks/languages is sufficiently great that one person usually has effective access to only a limited number of such frameworks. For this reason the framework is rarely changed once it has been chosen; rather, people will go to considerable lengths to reformulate their concepts of some phenomena within a framework they know rather than change the framework. Added to this, the chosen framework affects one’s perceptions of the problem itself, so that people “see” problems in terms of their favorite framework (Kuhn 1962). Both of these effects mean that instead of the domain (i.e., the target systems) determining the most appropriate framework, the formulation is adjusted so that the target systems can be mapped into a known framework even if this means critically distorting the target system. In particular, since people are trained in numerically based frameworks it means that it feels “natural” to see many processes in terms of those frameworks. Clearly, the less appropriate the framework which produces the abstraction, the more divorced the whole modeling process will be from its focus phenomena, resulting in something that is more of a computational analogy than a scientific model. As I will argue, forcing many kinds of phenomena into a numerical framework thus results in less scientific models, because it further divorces the model from its phenomena.

Some Types of Model

Here I distinguish some kinds of simulation model in terms of their purpose and criteria for acceptability.


If one has a model that predicts reliably on unknown data then there is no argument but that one has a useful model, even if that model contains strong assumptions.2 It is not sufficient to check the predictive power of a model on known data, because of the widespread “positive-results” bias in the social sciences. That is, negative results—along the lines of ‘this model did not predict this data’—are not published. Thus only models that do reproduce such data are published. The chances of reproducing the known data (even so-called “out of sample” data) by luck are vanishingly small, so in effect the model is fitted to this data by a process of trial and error. Thus this kind of validation is insufficient to establish a predictive model, because there has not been an independent test against unknown data. Thus for this kind of model the assumptions can be wildly strong, but it needs to be validated against unknown data. In this chapter I will call these predictive models.

Another use of this basic kind of model is to provide explanations of observed data (Moss and Edmonds 2005b). This kind of model does not claim to predict the outcomes of the target system in unknown situations but rather to provide a credible explanation of what happened in the target system’s processes to get a particular result. It is a sort of “reverse engineering” from data (both initial conditions and outcomes) to a credible process (i.e., one that could happen in the target system) that explains how the outcomes might have resulted from the initial conditions. Clearly, if unrealistic assumptions are used in this kind of model, the explanation that results from using such a model is only in terms of those assumptions. If the assumptions are known to be unrealistic then the credibility of any inferred explanation is severely compromised. Thus for this kind of model the assumptions need to be credible, but it can be validated against known data. I will call these explanatory models.3

A third kind of model is one that is not confronted with evidence from the target system at all, but is more of a type of pseudomathematics. That is, the model is used to establish a set of purely formal possibilities between initial model conditions and the resulting outcomes. In other words, the exploration of the model is necessary because it is not possible to deduce the outcomes from the setup in a meaningful way. This kind of model is actually useful (like any pure mathematics) only if the results do turn out to be applicable to a case that relates to an observed target system at some point. In this case the criteria used to judge mathematics are appropriate: soundness, generality and importance. If these criteria are met then there is an acceptable chance that the otherwise abstract results will have a use later on. If a model is fairly specific to a set of particular assumptions, and these assumptions are not likely to be satisfied, then it will not be a useful model of this kind. I will call these inferential models, since their purpose is purely to establish the pattern of inference in a certain class of model.

If a model is shown to be either a good predictive, explanatory or inferential model, then it clearly represents useful progress in understanding the phenomena it relates to. What is puzzling is the apparent acceptance of models that


do not satisfy any of these! Thus models which are vaguely justified on the basis of intuition, but use utterly unrealistic assumptions (or even assumptions known to be wrong), and that are “checked” only against known data (e.g., in-sample data), abound! Such models fail to be predictive models since they are not tested against unknown data, fail to be explanatory models since they use incredible assumptions, and yet also fail to be inferential models because they do not exhibit generally important or applicable results. These are the computational analogies mentioned earlier; they are no more “scientific” than other kinds of analogy, even though they are written in an apparently scientific language. They may be persuasive to the modeler, because they have helped to hone their personal understanding, but they are not useful for the collective scientific project of understanding socioeconomic phenomena. I will call these floating models as they float somewhere between all these types, not satisfying any particular set of criteria for usefulness.

How is it that this plague of floating models has been allowed to flourish? The short answer is that I don’t know, but I will put forward some hypotheses about the roots of this misplaced tolerance, trying to show why these bases for tolerance are misplaced. The chief one that I will deal with here is “number blindness”—an apparent inability to apply normal critical faculties to numerically formulated models. In the latter part of this chapter I will indicate, with an example, how with computational simulation we are no longer limited to numerically based methods, but are free to use the most appropriate representation.

OTHER REASONS FOR ACCEPTING FLOATING MODELS

Before I progress to the issue of numerical representation and “number blindness” I will briefly go over some of the other possible reasons for the tolerance of floating models that are often presented.

Simplicity is often quoted as a reason for introducing unrealistic assumptions into a model (and, in particular, as we shall see, numerical “simplifications”). However, this is usually either a cover for limitations of resources or ideas (e.g., a time limitation) or an excuse for an unrealistic assumption. In no case is there any reason to suppose that a simpler model is more likely to be true or to have greater generality. More likely, such a simple model will be abstract (lacking a relation to any specific or identifiable set of evidence) but not have general applicability. Although there are engineering reasons why starting small and working upwards is a good way to implement any specific model, this is not any reason to suppose that a simpler model is a better guide to any particular phenomena (Edmonds 2007). It is sensible to restrict the space of models over which one searches, but this is better done using the available evidence rather than by simplicity, which is an essentially arbitrary method (Edmonds and Moss 2005).


It may be argued that science is an effective evolutionary process, collectively acting upon the class of formal models (Edmonds 2000b). The formality of the models is important because the faithful replication of models greatly facilitates such a process. Under this picture, it is the process as a whole that matters in the end, not the success of any particular model. In any such evolutionary process it is inevitable that a process of elaboration occurs, especially at the early stages.4 However this does not change the overall argument, since an evolutionary process works well only if there is the correct selective pressure on it. Thus if useless models are not sufficiently selected out then the evolutionary search process will not work well. It is true that an evolutionary process requires there to be considerable variation in order to work, but that variation needs to be biased in the right direction. This is essentially Karl Popper’s argument (Popper 1963).

Another reason seems to be a straightforward wish to emulate one of the more obviously successful sciences, like physics. Quite apart from the question of whether a physics-type approach will work with social phenomena, this emulation suffers due to its very selective view of sciences such as physics. For example, although the kind of model might look similar, the ruthlessness with which empirically unsuccessful models are discarded in physics is not copied! Similarly, although simple models are preferred in physics, they are usually built upon well-validated micro-foundations (so that the models need checking only in terms of the outcomes). However, the approach of applying simple models is often used in the socioeconomic sphere even when the behavior of the parts is unknown; Moss and Edmonds (2005b) aims to separate out those aspects of physics that are and are not appropriate to the social sciences.

Of course tradition also plays a part. Fields of science are not arenas of pure rationality, but have strong cultures and traditions. Thus physics papers have a very definite style that marks them out from, say, social science papers (Edmonds 2000b). Authors are assimilated into their fields and will, consciously or unconsciously, learn the vocabulary, styles, issues, faux pas etc. that are the norm there. This can lead to people copying the surface style of models in their field, without carefully considering what kind of representation or model is best. If the whole field has a tradition of tolerance towards floating models then they may simply accept this, without thinking. There will also be the knowledge that it is far easier to have a paper accepted by one’s peers if it follows the norms of the field.5

NUMERICAL REPRESENTATION

All tools have their advantages and disadvantages and for all tools there are times when they are appropriate and times when they are not. Formal tools are no exception to this, and systems of numbers are examples of such formal tools. Thus there will be occasions where using a number to represent


something is helpful and times where it is not. To use a tool well one needs to understand that tool and, in particular, when it may be inadvisable to use it and what its weaknesses are. It is easy for us to use numbers in our simulations and we frequently do so without considering the consequences of this choice. This chapter can be seen as a reminder about numbers: a call to remember that they are just another (formal) tool. So, to be absolutely clear, I am not against numerical representation per se, merely against its thoughtless and inappropriate use. Numbers are undeniably extremely useful for many purposes, including as a language of thought—just not for every purpose and in every domain. Also, to be absolutely clear, I do not think there is any system of representation that is superior to the others—including logic, programs, natural language or neural networks—but that different representations will be more useful (and less misleading) in different circumstances and for different purposes. In particular, numbers are not good at representing structure.

There are, of course, different ways of using numbers as representations, depending upon what properties of numbers you are using. Some of the possibilities are listed next, going from weaker properties to stronger ones. Each possible property has its uses and limitations.

• Unique labeling—an indefinite supply of unique labels is required (as with players in a football team); the only aspect of numbers that is significant is that each label is distinguishable from the others. You cannot use comparisons or arithmetic.

• Total order—the numbers are used to indicate the ordering of a set of entities (as with ticket numbers in a queuing system). Comparisons can be made at any time to quickly determine which of the labeled items is higher in the order. You cannot use arithmetic without the danger of changing the order in subtle and unexpected ways—that is, any transformation needs to preserve the ordering unless the order of what is being represented has changed.

• Exact value—the value of some properties is given without error (as with money or counting), that is, using whole numbers or fractions. This is using numbers to represent essentially discrete phenomena—numbers of things or multiples of exact shares of things. No measurement can be involved, because measurement processes inevitably introduce some level of error. The conservation of the things that are being represented underpins the possible arithmetic operations that may be performed on the numbers, and comparisons relate to a one-to-one matching exercise with what is represented. Arithmetic operations that break the exactness of the calculations (like square roots) generally have no meaning in terms of what is being represented in this case.

• Approximate value—where the number indicates some measurement which involves error (as with temperature or length). These are typically, but not always, continuous properties which are not exactly representable in any finite form. Thus, as well as errors introduced by measurement and interpretation processes, there are also errors introduced by most arithmetic manipulations. That is, you need to be very careful about the accumulation of error with calculation and comparisons. There are techniques to help with this problem, e.g., the use of interval arithmetic (Polhill et al. 2005).
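These distinctions can be made concrete. The following is a minimal sketch in Python (the values and names are invented for illustration, not taken from any cited study) of how identical-looking stored numbers license different operations depending on which of these kinds they represent:

# A minimal sketch of the kinds of number listed above; only the
# operations that are legitimate differ between them.

shirt_numbers = [7, 10, 23]   # unique labels: only equality is meaningful
ratings = [1, 2, 2, 5, 5]     # total order (e.g., a five-point scale)
coins_in_jar = 137            # exact value: counting, full arithmetic applies
measured_temp = 21.3          # approximate value: carries measurement error

# Legitimate for a total order: comparisons and order statistics.
median_rating = sorted(ratings)[len(ratings) // 2]

# Tempting but unwarranted: the mean treats the gaps between ordinal
# categories as equal amounts of something, which the scale does not say.
mean_rating = sum(ratings) / len(ratings)

print(median_rating, mean_rating)  # 2 versus 3.0: the "average opinion"
# depends on an assumption that the representation cannot justify.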

Many problems with numerical representation are caused by people using one kind of number appropriate to a particular circumstance (e.g., a Likert scale) which is then processed as if it were another kind, e.g., using arithmetic. This is quite a common occurrence, since there is a temptation to use the full range of arithmetic possibilities as soon as properties are represented as numbers, regardless of whether these are warranted by the nature of what they represent. One of the reasons for this is that they are all called numbers and it is not possible to tell what sort of numerical properties are relevant just by looking at them. Thus, for example, numbers might be used to indicate the preferences of an individual (the value of A > the value of B means that the individual simply prefers A to B given the choice); but then these values are used as if they were valid arithmetic entities so that, for example, a calculation is done on them to help decide how much of each to buy. The exact conditions under which numbers can be used in these different ways were formalized as “measurement theory” from the 1930s to the 1970s in the philosophy of science. An introduction to measurement theory is Sarle (1997), and the definitive set of works is considered to be Krantz et al. (1971), Suppes et al. (1989) and Luce et al. (1990). This debate and work seems to have been largely forgotten, and an unthinking numerical culture prevails. To a person immersed in such a culture it may not be obvious what is wrong with numerical representation; this is covered in the next section.

CONSEQUENCES OF INAPPROPRIATE NUMERICAL REPRESENTATION

In this section I review some of the difficulties that can be associated with the careless and/or inappropriate use of numbers. These difficulties are all, to different extents, avoidable through either careful technique, practice or the use of alternatives. In the former case the difficulties can be interpreted as simply being due to a sloppy use of numbers, but in the latter the mistake lies in trying to use numbers rather than a more appropriate alternative. In many circumstances all forms of representation have their own difficulties, but in all cases it is preferable to be able to choose the best representation for a purpose and to be aware of (and check for) the possible difficulties that result from one’s choice.


Distorting the Phenomena

The first and foremost difficulty arises when representing something by a number critically distorts the relevant aspects of the target phenomena. Sometimes this seems unavoidable, but it happens surprisingly often when it is not. The most frequent case seems to be when something that is essentially qualitative or structural in nature is represented by an arithmetic number. This is highlighted by a case where the difference between qualitative and quantitative is very clear: variety. This could be variety of strategy among firms or variety of consumer needs.

To see why variety might not be numerically representable, consider the following case. If one has a set of objects or properties which are satisfactorily representable within a set of numeric dimensions, and one adds a new object or property (that is not well represented by the existing measures), then the new set (which includes the new property) has a greater variety than the old set. Thus variety, in this sense, is not capturable by any (finite) set of measures. Variety is not the same as variation, which is simply a measure of how much another measure varies in a set of measurements. Variety in this sense has real operational meaning—for example, when we say that there is a great deal of variety among the firms in a certain economic area, this means that it is more likely that some will survive an unexpected catastrophe of a novel kind (such as a totally new competitor appearing). If the “variety” were merely a set of different amounts of properties then it would be relatively easy for a single kind of firm to dominate, since a simple dominating set of strategies would be possible. However, if the variety consisted of many different ways of competing, then one would expect a whole “ecology” of firms to survive, which would be more robust (as a whole) to the appearance of a new kind of firm.

Now while it is legitimate to invent post hoc descriptive measures to demonstrate and illustrate an increase in variety that has already occurred, it is another matter entirely to try and implement such an increase in variety via a simple increase along some fixed set of dimensions. The difference is readily apparent if one is co-evolving a set of firms to cope with the change—for an increase within a fixed set of measures is (eventually) learnable and hence predictable, whereas a surprise such as the appearance of a new kind of competitor is not. In the former case it may be possible for some kinds of firm to learn a strategy that will cope with the change, while in the latter case the only thing that will help (in general) is the variety between firms. Variety of this kind is not directly implementable in terms of any numeric measure of variance (despite the fact that variance might be used as an illustration of the amount of variety). This sort of variety is of a fundamentally qualitative kind. This does not mean that it is not formally implementable at all, simply that it is not well represented in general by a numeric measure (however complicated).


This is not due to deep theoretical differences between the qualitative and the quantitative, since all numerical mathematics can be implemented within qualitative set theory and logic (Russell and Whitehead 1962) and vice versa, as Gödel (1930) showed. Rather it is the practical differences between systems of representation that matter. It is so difficult to represent some kinds of properties within some kinds of system that the practicalities will necessitate that the original is distorted in the process (Edmonds 2000).
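The variation/variety distinction can be made concrete with a minimal sketch (an invented illustration with made-up attributes, in the spirit of the co-evolving firms example above):

# A minimal sketch of variation versus variety. Variation is movement
# along fixed numeric dimensions; variety means the dimensions of the
# description themselves have to change.

firms = [
    {"price": 9.0, "quality": 0.4},
    {"price": 7.5, "quality": 0.8},
]

# Variation: a new firm is just another point in the same two dimensions,
# so a dominating strategy over this space is (eventually) learnable.
firms.append({"price": 8.0, "quality": 0.6})

# Variety: this firm competes in a way the existing measures cannot
# express; no re-weighting of price and quality captures it.
firms.append({"price": 8.0, "quality": 0.5, "bundled_service": "free repairs"})

# Any statistic computed over the original fixed dimensions is blind to
# the qualitative novelty of the fourth firm.
print(sum(f["price"] for f in firms) / len(firms))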

Losing the Context

Numbers are very abstract representations; they are the result of abstracting away all the properties except for a single dimension. In particular, they do not encode anything about the context in which the abstraction occurred. This brings considerable advantages—one can learn how to manipulate numbers and equations independently of the contexts in which they are applied, so that one can use the learned formal tool on a wide variety of possible problems. The maintenance of contextual relevance is left up to the humans who use the mathematics—one cannot tell this just from the mathematics. Thus the abstract nature of numbers and mathematics allows for possible confusion.

For example, it may be that in a particular simulation each new individual is given a random float in order to provide it with a unique label (it being extremely unlikely that two individuals will have the same float allocated). Later it may be that someone else modifies the simulation so that if two competing individuals happen to have exactly the same fitness then the one with the numerically greater label wins; this seems equivalent to an arbitrary choice, since the original labels were randomly generated and it is rare that competing individuals do have exactly the same fitness. However, it may be that under certain circumstances life becomes so difficult (or easy) that fitnesses all reach the same minimum (or maximum), in which case instead of a random set of individuals being selected, there will be a bias towards those who happen to have the higher label. Here the original intention of unique labels is forgotten and they are reused as symmetry-breaking mechanisms, causing unforeseen results (a code sketch at the end of this section illustrates this). A real example of this kind of confusion is found in Riolo et al. (2001), where a particular selection mechanism interacts with differences in fitnesses so that the authors misinterpreted a tolerance mechanism as significant when it was not (Edmonds and Hales 2003b).

The semantic complexity of a set of phenomena is the difficulty of formally (i.e., syntactically) modeling those phenomena for a particular purpose (Edmonds and Wallis 2002). The more semantically complex the target phenomena are, the more important it is to preserve the original context of abstraction so the reader may trace back the original meaning of referents in models so as to understand their nature. That does not mean that there is


not any possibility of generalization to other contexts, but it does mean that this needs to be done carefully by knowing the relevant set of properties of what is represented beyond that formally encoded in the model. This issue is discussed more in Edmonds and Hales (2003a).
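The label-reuse confusion described above is easy to reproduce. Here is a minimal sketch (my own invented illustration, not the Riolo et al. model): random floats handed out as unique identifiers are later reused to break fitness ties, and once fitnesses collapse to a common floor the “arbitrary” tie-break becomes a systematic bias.

import random

# A minimal sketch of the label-reuse bug: labels were intended only as
# unique identifiers, but are later (mis)used to break ties in fitness.

class Individual:
    def __init__(self):
        self.label = random.random()  # meant purely as a unique label
        self.fitness = 0.0            # all fitnesses at the same floor

def contest(a, b):
    if a.fitness != b.fitness:
        return a if a.fitness > b.fitness else b
    # Looks like an arbitrary tie-break, but systematically favours
    # whoever happened to draw the numerically larger label.
    return a if a.label > b.label else b

population = [Individual() for _ in range(100)]
winners = [contest(*random.sample(population, 2)) for _ in range(10000)]
print(sum(w.label for w in winners) / len(winners))
# About 2/3 rather than the 0.5 an unbiased random tie-break would give.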

The Accumulation of Error

Approximate numbers are the hardest type of number to use, because the traps involved in using them are not easily avoided. Although there are techniques which are very helpful at avoiding these problems, there is no technical fix that will always avoid them. The nature of the problem is intractable—we are using a finite representation (a float, an interval, an error bound, etc.) for something that cannot be finitely represented (an irrational real). Almost any model that uses such finite representations will eventually “drift apart” from what is being modeled, with suitably extreme parameters or when run for a sufficiently long time. Of course, if the system is chaotic the results may diverge quite sharply. As Polhill et al. (2005) show, it is not simply that the represented value “drifts” away from the target; in many cases there is a systematic bias to the drift. This means that simply performing the same run many times using different random number seeds (or slightly different parameter settings) and averaging the results will not avoid the problem. Devices such as “interval arithmetic” (Polhill et al. 2005) may provide assurance (or conversely warnings) in some cases—it would seem particularly useful when there are calculations using floats followed by a comparison which determines subsequent behavior—but it will not avoid all problems. More subtle are the effects of the additional “noise” caused by the continual errors. This is frequently insignificant, but not always. In systems where there are chaotic processes or where symmetry breaking is important, even a small amount of “extra noise” can make a difference (Edmonds 2005).
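Both points can be seen in a minimal sketch (an invented illustration; proper interval arithmetic packages do this rigorously, here the bounds are bracketed crudely by stepping one float either way):

from math import nextafter  # Python 3.9+

# Ten additions of 0.1 never reach exactly 1.0, so a crisp comparison
# that "should" trigger after ten steps silently fails.
total = 0.0
for _ in range(10):
    total += 0.1
print(total == 1.0, total)   # False 0.9999999999999999

# The error is a systematic rounding bias, not random noise, so averaging
# over runs with different seeds cannot remove it. A crude interval-style
# enclosure brackets the true value instead of trusting a single float:
lo = hi = 0.0
for _ in range(10):
    lo = nextafter(lo + 0.1, float("-inf"))  # push the lower bound down
    hi = nextafter(hi + 0.1, float("inf"))   # push the upper bound up
print(lo <= 1.0 <= hi)       # True: the enclosure admits its uncertainty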

The Creation of Artifacts

The distortion of what is being represented, the loss of the original context and the inevitable “drift” away from the original phenomena due to approximate representation mean that some of the phenomena observed in the resulting process outcomes may well be “artifacts” of the simulation—that is, features of the results that appear qualitatively significant to the modeler but which do not correspond in any way to what is being represented, because the simulation is working in a way that is different from what the modeler intended.

In a way, all complex simulations are full of such artifacts—formal representations (including computer simulations) are always somewhat of a distortion of complex phenomena—and one will rarely understand the complete behavior of such simulations in all circumstances. However, if


the modeler has a clear idea6 of how the relevant aspects of the simulation should be behaving, then this can be compared with the relevant observed behavior of the model and differences detected. If differences are detected in these significant aspects, and there were no bugs at the level of the programming code but rather a subtle interaction due to the underlying nature of the numeric representation, then we may call these effects “artifacts” of the representation.

This is bad news for modelers, because it means that they cannot be satisfied with simply programming at the level of whatever language or system they are using, but have to pay attention to the nature of the representation they are using and its properties. Understandably, many modelers react to such problems by wishing them away or ignoring them—‘these are simply technical problems and don’t affect my results’ seems to be the attitude. This is simply wishful thinking; the modelers have no way of knowing whether they do or not. If they take their results at all seriously (and if they present their results to their peers then presumably this is the case), then they have an obligation to try to ensure that the cause of their results is an intended rather than an unintended interaction in their simulation.

There are a number of possible ways to try to prevent such a situation arising, including the independent replication of simulations on different systems and the use of various kinds of representation (e.g., interval arithmetic) in simulations. However, there is a more fundamental heuristic that may be applied: since the “artifact” presumably does not appear in reality (for this would simply indicate a deficiency of one’s mental model), making one’s simulation more and more like reality will eventually get rid of the problem. For example, it is rare in natural (as opposed to constructed) systems that exact values have a critical and undesirable effect on outcomes, because in that case the system would adapt (or evolve) to avoid this effect. Thus it is not generally plausible that humans determine their actions upon crisp and critical comparisons (e.g., give money only if their income is strictly greater than the exact average of their neighbors’).

Limiting the Language of Representation

Perhaps the most far-reaching (and damaging) result of the use of numbers is that people become used to thinking of the phenomena in these numeric terms—what starts out as a quick fix for getting a model working, or a “proxy” for the phenomena, ends up being mistaken for the truth about those phenomena. New students are trained to think in terms of the existing model and thus find it difficult to step back and rethink their way of thinking about a field. This effect is reinforced in some fields by “fashions” and traditions in modeling frameworks and styles—it can become necessary to utilize certain modeling forms or frameworks in order to get published or gain recognition.


For example, the usefulness of goods (or services) can be represented as a total order among possible goods (and hence given a number and compared as a utility). However this does not mean that this is always a good model, because sometimes the nature of goods is critical to their usefulness. Whether one prefers A to B depends a great deal on how these goods will be helpful, not merely on how much they are helpful in the same way. For example, with food flavorings it is the combination that makes them preferable—there is no underlying utility of salt or sugar separate from the contexts in which they are applied (say chips or strawberries). Similarly they will interact, so that although (within limits of diminishing returns) a person may like more salt or more sugar in things they eat, they may well not like salt and sugar together.7 This is similar to the ranking of tennis players: the ranking gives one a guess as to which of two players will win when they play, but does not determine this—the ranking is only a numerical model of who will win.

Before the advent of cheap computational power it was very difficult to formally model without using equations, because the manipulation of large amounts of data was very difficult by hand. We no longer have this excuse. We can now represent such qualitative and structural processes directly; we do not have to settle for equation-based models that require drastically reformulating the problem to suit the technique (e.g., through the use of unrealistically strong assumptions).
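Returning to the flavorings example above, a minimal sketch (invented names and values) shows why a context-free per-good utility cannot reproduce a judgment that depends on combinations:

# A minimal sketch: per-good utilities versus context-dependent judgment.

def additive_utility(goods):
    # Assumes each good carries a fixed, context-free amount of usefulness.
    per_good = {"salt": 1.0, "sugar": 1.0}
    return sum(per_good[g] for g in goods)

def contextual_judgment(goods, dish):
    # Preference depends on the combination and the context, not on
    # summing separate amounts of usefulness.
    suits = {"chips": {"salt"}, "strawberries": {"sugar"}}
    return "acceptable" if set(goods) <= suits[dish] else "unacceptable"

print(additive_utility(["salt", "sugar"]))                     # 2.0: "best" choice
print(contextual_judgment(["salt", "sugar"], "strawberries"))  # unacceptable
print(contextual_judgment(["sugar"], "strawberries"))          # acceptable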

NUMBER BLINDNESS

We are in an age that is obsessed by numbers. Governments spend large amounts of money training their citizens in how to use numbers and their declarative abstractions (graphs, algebra etc.). We are surrounded by numbers every day in the news, weather forecasts, our speedometers and our bank balances. We are used to using numbers in loose, almost “conversational” ways—as with such concepts as the rate of inflation and our own “IQ”. Numbers have become so familiar that we no more worry about when and why we use them than we do about natural language. We have lost the warning bells in our heads that remind us that we may be using numbers inappropriately—that is, using a numerical representation for something that is fundamentally non-numerical. They have entered (and sometimes dominate) our language of thought.

Computers have exacerbated this trend by making numbers very much easier to store, manipulate and communicate, not to mention making them more seductive by making possible attractive pictures and animations of their patterns. More subtly, when thought of as calculating machines that can play games with us and simulate the detail of physical systems, they suggest that everything comes down to numbers.

It is my hypothesis that such a familiarity with numbers has not only led us to a blindness over their shortcomings as a means of representation,


but has even led their use to act as a signal of rigor and scientific propriety, and hence to help models be accepted by the community that would otherwise be justly criticized. I call this “number blindness”, an inability to see beyond the surface formulation of mathematics to the strength (or otherwise) of the model beneath.

Of course, not all faith in numerical models is unthinking. There may be reasons (good or otherwise) why one might use a numerical representation. The first is, of course, that what is being represented is known to be number-like. Things like money, tonnage of coal, number of indexed publications or speed are obviously well represented by numbers. However in many other cases it may seem obvious, but further reflection would cast doubt on it. For example, in many cases it may be reasonable to assume that a probability for a certain event exists (albeit less often than is commonly assumed); however, this is a mile away from it being useful for us to represent the occurrence of such an event with a probability. For example, the right probability might change sharply with contextual elements we cannot include in our model, or it might be impossible to estimate stably.

Sometimes it is assumed that any data can be meaningfully understood in terms of an identifiable signal plus noise. This is essentially an assumption about the randomness of the residual when a given “signal” is subtracted. However, especially with a subtly coupled set of factors and individuals, this is often not warranted, the “noise” being essentially non-random. For example, in increasingly large samples the noise would not reduce (in proportion to the signal) as the “law of large numbers” might suggest, but rather remain—indicating that the “noise” is in fact globally coupled and non-random (for examples of this, see Kaneko 1990 or Edmonds 1999). In such a case this may indicate that the “signal” is not representable as a stable number. This issue is considered in depth in Edmonds (2005).

A related issue is where it is assumed that the probability distributions of some set of events or data points have well-defined moments. However, for processes similar to self-organized critical systems, it is far from clear that one can assume this. In such cases it is frequent to observe people using statistical techniques that are applicable only if one can assume normality, the existence of second moments etc. See Moss and Edmonds (2005a) for this.
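The coupled-noise point admits a minimal sketch (an invented illustration): independent noise averages away as the sample grows, but a single shock shared by every observation does not, however large the sample.

import random

# A minimal sketch: independent versus globally coupled "noise".

def sample_mean(n, coupled):
    shared = random.gauss(0, 1)  # one shock common to every observation
    xs = [(shared if coupled else random.gauss(0, 1)) + random.gauss(0, 0.1)
          for _ in range(n)]
    return sum(xs) / n

for n in (10, 1000, 100000):
    independent = abs(sample_mean(n, coupled=False))  # shrinks roughly as 1/sqrt(n)
    coupled = abs(sample_mean(n, coupled=True))       # stays the same size
    print(n, round(independent, 4), round(coupled, 4))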

WHAT ARE THE VIABLE ALTERNATIVES?

Let me make it quite clear that I am not at all against formal or rigorous modeling. In fact, formal modeling is often essential for progress to be made (even if all the formal models are eventually discarded). However I am arguing that much of the modeling of social phenomena using numbers (or equations representing numeric relations) is, at best, unhelpful and, at worst, counterproductively misleading. In this chapter I am suggesting


that computational simulations do not have to be so limited to numerically based representations and that analytic models can have a role as abstractions of more descriptive and structural simulation models.

Staging Abstraction Using Descriptive Simulations

A computational simulation is just as formal as an equation-based model. Each technique has its own advantages and limitations: equation-based modeling may allow for the declarative manipulation of the model, so that closed-form solutions and other properties can be derived, but this is at the cost of the difficulty of relating the model to many phenomena; computational simulation can be more directly representational of the processes observed, but is almost never amenable to general solutions or formal manipulation. Thus although a computational simulation is a theory about some domain (if only a conceptual model), it retains much of the nature of an experiment. One can (almost) never know for sure about the nature of a simulation, but one can endlessly perform experiments upon it. We need to have models (formal or otherwise) of what is happening in a simulation in order to be able to use it; each experiment which does not disconfirm our model of it leads us to have more confidence in the simulation. The greater the variety of experiments that the simulation survives (e.g., replication in another language), the more confidence we can have in its veracity.

Using computational simulation requires giving up the illusion that absolute proof or certainty is possible. This is frequently already the case even with analytic models, because the equations are often analytically unsolvable, so that either approximations are required (which breaks the certainty of any proof) or numerical simulations are required (in which case one has no more certainty than with any other simulation). There is no difference in formality between a huge system of equations, with one set of equations describing the state of each agent separately and another set describing all the relations between each pair of interacting agents, and a distributed computational simulation that encodes the same things.

One practical way forward is to formulate and maintain “chains” (or even, as Giere 1988 puts it, “clusters”) of models at different levels of abstraction, starting from the bottom up. Thus the first step is to decide what “data models” of the target phenomena are appropriate (including anecdotal accounts and qualitative observations). Then a “descriptive” simulation can be developed—this is intended to include as much of the relevant detail and processes as possible, in as direct a manner as possible. This is akin to a natural language description, except that it makes the first steps towards abstraction—a simulation is appropriate because it enables processes to be captured as well as states. Such a descriptive simulation has several purposes: it is a tool to inform (and be informed by) observation by helping frame what are the important questions to investigate; it allows for the exploration of the behavior in a detailed and flexible manner


enabling the complex processes to be understood; it opens up the description to criticism by experts and stakeholders so that the model can be more easily improved; and it stages the abstraction, providing a better basis for more abstract models to be tested and formulated.

Thus a simulation can be used to stage the abstraction process. The relationship between the simulation and the observations of the phenomena is relatively transparent and direct. The more abstract models, intended to capture aspects of the behavior, can be informed by and tested against runs of the descriptive simulation. This has some great advantages over the direct modeling of phenomena with abstract models: one can use the simulation to establish whether and under what conditions the more abstract models work (which should correspond with when their assumptions hold); and the simulation gives justification for the details of the abstract model, so that when one is asked why the model is formulated as it is, one can point to the simulation runs that it models. The simulation delays the abstraction by providing an experimental test bed which is as rich a representation of the phenomena as possible. This staging of the abstraction is illustrated in Figure 2.4.

Figure 2.4 Using a simulation to stage abstraction (the phenomena of concern; a simulation of them; more abstract models of the simulation).

The more abstract model could be an equation-based model or a more abstract computational simulation. Here there may be many measurable aspects of the simulation that are representable as numbers (statistics about its outcomes etc.). Such second-stage models can be tested against experimental runs of the simulation model—including runs not used in their formulation—and thus be established as good models of that simulation. In this manner one can build up a whole hierarchy of models at different levels of abstraction (as in Figure 2.5).

Figure 2.5 A chain of three levels of model with increasing abstraction concerning tag-based mechanisms of group formation (observations of social grouping; a descriptive simulation of social grouping; an abstract simulation of tags; equation-based models).

Having staged the abstraction to these more abstract models, the chain of reference is less analogical and more precise, since what is measured in these models has a direct referent in the model “below” in a checkable manner. The “bottom” model is more testable against real observations and data because it does not have to use overly strong assumptions. The mental model shown in Figure 2.3 has been replaced by a precise, testable and criticizable simulation. This combination of computational simulation and the a posteriori formulation of abstract models seems to hold out the hope of achieving the best combination of advantages possible for complex phenomena, albeit at the cost of a lot more work.

Structural Modeling: The Example of Negotiation

It is impossible to show all the ways in which structure can be represented in simulation models, and thus avoid some of the numerical “kludges”8 that are used to circumvent such issues. Often it involves simply putting in far more detail than one might otherwise do, forcing oneself to “dig down” into the processes that occur and not giving in to the temptation of a “quick fix” in the hope that it will allow one to leap-frog to an immediate solution. Here I show a model of negotiation that pivots on the structure of the agents’ beliefs rather than any simplistic weighting of utilities, and thus opens up the examination of the negotiation process.

Many models of negotiation are little more than a process whereby two numbers (bid and offer) slowly converge (or not). The representation of


the cognitive processes typically involves some optimization of utility, with possibly some element of risk taken into account. However, there is no evidence that such a model corresponds to what people do when they haggle. At best such models are either explorations of artificial situations (i.e., it is never claimed that they correspond to what is observed), or it is hoped, despite the evidence, that the results will still work even though the micro-level is known to be wrong. In particular, there is no evidence that any of the following assumptions are in general tenable:

• that the participants necessarily have the same view of the world in terms of the causation therein; in fact they may have different beliefs as to what actions or events cause what results in the target domain;
• that the participants necessarily have any sort of “joint utility” or “social rationality” that drives them to seek agreement or the common good above their own;
• that the participants necessarily have any sort of knowledge about others’ beliefs or goals, except as revealed or implied by the communications of others;
• that the participants necessarily agree upon the description of the current world state;
• that the participants necessarily judge similar resultant states in similar (or even related) ways;
• that the participants are necessarily able to think out all the possible negotiation possibilities or consequences of their own beliefs.

Rather, the simulation to be described follows the lead of those who study what people do when negotiating. In this we follow Van Boven and Thompson (2001), who say: ‘negotiation is best viewed as a problem solving enterprise in which negotiators use mental models to guide them toward a “solution”’, where they define “mental models” as

mental representations of the causal relations within a system that allow people to understand, predict, and solve problems in that system . . . Mental models are cognitive representations that specify the causal relations within a particular system that can be manipulated, inspected, ‘read,’ and ‘run’. . . .

According to this picture, negotiation goes far beyond simple haggling over numerical attributes such as price. It is a search for a mutually acceptable solution (albeit at different levels of satisfaction) that is produced as the result of agents with different beliefs about their world interacting via communication until they discover an agreement over action that all parties think will result in a desired state. The motivation behind this is not to discover how to get artificial agents to negotiate, nor to determine how


a “rational” agent might behave, but to move towards a more descriptive model, which can be meaningfully compared to human negotiation, in order to gain some insights into the processes involved.

In the simulation to be described, the agents represent the domain they are negotiating about using a network of nodes and arcs, representing relevant possible states of the world and actions respectively. These states are judged by the agents when deciding what to do: what offers to make and which to accept. The simulation allows judgment because each node has an associated set of properties. The agent has an algorithm which returns the acceptability of the nodes and allows nodes to be compared with each other. These properties could be numeric (e.g., the price of something) or could be of other types (e.g., color). The goals of the agents are to attain more preferable states than the one that currently holds, and are thus implicit in the results of the judgment algorithm upon the properties of nodes. The structure of the simulation allows for the use of numeric indicators of desirability to be attached to nodes, but does not require them. It also allows for a mixture of numeric and qualitative properties to be used. Suffice it to say that any modeling of cognitive processes which suggests that human actors weigh numeric measures would need to be justified.

Model Description

There follows a brief description of a model of negotiation that illustrates that alternatives to number-centric approaches are not only possible, but can avoid some of the strong assumptions that are otherwise necessary. There is a fixed set of agents (the participants). They negotiate in a series of negotiation rounds, each of which is composed of a subsequence of time instances in which utterances can be made by participants. All utterances are public, that is, accessible to all participants. When a set of actions is agreed on or no utterances are made, the round ceases. Between rounds, actions that have been agreed on are taken—these are public actions, known to all participants. When a round occurs that is identical to the last round and no actions are agreed on, the simulation ceases. The simulation output consists of the utterances made and the actions agreed on and taken. It is a development of Hales (2003).

Each participant has the following structures internal to itself and not accessible to others (unless the agent reveals them in an utterance):

• A set of properties in which possible world states are judged;
• A (possibly complex) algorithm which, given the internal information and current world states, results in an overall judgment on that set of states;
• A network composed of a set of nodes representing what the participant considers possible world states, each node having a label, a set of properties, a list of possible actions for the agent and a set of arcs to other nodes (each arc having a condition in terms of actions that have occurred).

The utterances that agents can make to each other are of the following kinds:

• Can someone please do [these actions] so that we can reach [these states]?
• If [these actions are done] then [I will do these actions].
• I agree to [I will do these actions] if others agree to [these actions are done].

In addition there are the following reports:

• [agent name] has done [action name];
• [agent name] is in state: [state name].

Thus, the input to the simulation is a specification of each agent's beliefs about the world, and the output is a transcript showing the utterances and actions that occurred.

Basically the simulation works as follows. At the start the agents and their beliefs are initialized from a text file which specifies these (called the viewpoint file). Then each agent performs a limited search on its own network for states that it judges are preferable to the current one—if it finds such a state it makes a conditional offer composed of its own and others' actions necessary to reach that state (if there are none of its own actions this is a request; if no others' actions are needed it simply commits itself to that action without communication). If others make a conditional offer it considers possible combinations of offers to see if there are possible agreements, and if there are it signals its potential agreement. If there is an agreement made by another agent that it judges is acceptable to it, it signals its potential agreement to that. If all the necessary parties have indicated their potential acceptance of an agreement then it becomes binding and all agents commit themselves to doing the actions that are their part of it—the simulation now enters an action phase. During action phases all agents do actions as soon as they become possible—actions may have the effect of changing the current state in agents. When actions have finished then negotiation may begin again, and so on.

The simulation can be seen as a minimal constraint upon what negotiations could occur, since it would be possible to get almost any output given enough tinkering with the agents' beliefs. In this sense it is akin to an agent programming language. What it does do is relate the agents' beliefs, as specified in the viewpoint file, to the output in an understandable and vaguely credible manner. The representation of the agents' beliefs with nodes (states of the world), arcs (the actions between states of the world) and judgments along multiple criteria (the judgment dimensions) is designed to be fairly easy to present as blobs-and-arrows pictures, and thus be amenable to participatory input and criticism. This is in contrast to many agent negotiation setups which are couched in pure economic or logical terms. An interesting aspect of this simulation is that (if the values of the judgments are Boolean or string valued) it does not require the use of numbers at all. This is more of a qualitative simulation than a numerical calculation. Hopefully this will make it more amenable to qualitative interpretation and criticism.
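To make these ingredients concrete, here is a minimal sketch in Python of how such a belief network and judgment algorithm might be represented. The class names, the weighted-sum judgment and the example values (borrowed from the car example below) are illustrative assumptions, not the implementation actually used for the simulation.

```python
# A minimal sketch (not the actual implementation) of an agent's belief
# network: nodes are possible world states with (possibly non-numeric)
# properties; arcs are conditioned on actions; a judgment compares states.

class Node:
    def __init__(self, label, properties, own_actions=()):
        self.label = label
        self.properties = properties          # e.g. {"car": 1, "money": 0}
        self.own_actions = list(own_actions)  # actions this agent can take here
        self.arcs = []                        # (set of required actions, target)

    def add_arc(self, required_actions, target):
        self.arcs.append((set(required_actions), target))

class Agent:
    def __init__(self, name, judge):
        self.name = name
        self.judge = judge  # maps a property dict to something comparable

    def preferable_moves(self, current):
        """Return arcs leading to states judged better than the current one."""
        now = self.judge(current.properties)
        return [(acts, node) for acts, node in current.arcs
                if self.judge(node.properties) > now]

# The judgment need not be numeric; a weighted sum is used here only
# because this example's indicators happen to be numbers.
start = Node("Start", {"car": 1, "money": 0})
sold = Node("CarSoldCheaply", {"car": 0, "money": 10000})
start.add_arc(["Pay10000", "GiveCarCheaply"], sold)

seller = Agent("Seller", judge=lambda p: 5000 * p["car"] + p["money"])
print([target.label for _, target in seller.preferable_moves(start)])
# -> ['CarSoldCheaply']
```

Nothing in this structure forces the properties or the judgment to be numeric; replacing the lambda with a qualitative comparison leaves the rest of the sketch unchanged.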

Example Setup

This is an example to illustrate the negotiation in a simple transaction of a purchase. In this simple version there is a low price and a high price that could, in theory, be paid in return for the car. In this version one of the (two) properties of nodes could be a number, corresponding to the amount of (extra) money that changes hands. This is justified because the amount of money is a number. However a fuller model might eliminate this by representing the relevant trade-offs (or opportunities) that that money meant to the actors at the time, since we deal with only two possible prices (cheap and expensive). The basic belief networks are shown in Figure 2.6. There are clearly a number of ways of representing the buyer's and seller's beliefs using this method—we have chosen one. Let us assume that for the seller the states are ordered thus: Start < Car sold cheaply < Car sold expensively < Get little < Get lots; and that for the buyer: Start < Car bought expensively < Car bought cheaply < Get car. There are a number of possible variations here: the seller could mentally rule out the action of Give car cheaply from the state Get little (i.e., only 10,000) or not depending on
whether this was considered as a possible action; likewise the buyer might or might not consider paying 20,000 at the state Get car as possible. Corresponding to these is the existence or absence of arcs in the belief networks of the other agent. So the seller might or might not have an arc from Start to Get lots depending on whether the seller thinks that such an action is possible, and the buyer might or might not have an arc from Get car to Car bought cheaply for the action Pay 10,000 depending on whether the buyer thinks it will be possible to purchase the car for only 10,000.

Figure 2.6 Belief networks of seller and buyer.

When this is run there is some initial exploration concerning whether the seller will give the car for nothing and the buyer give money for nothing—this is because the agents do not know these would not occur (as we would know). Given the aforementioned there are 2 × 2 × 2 × 2 = 16 possibilities:

• Seller does (1st u) or does not (1st c) think buyer would pay 20,000 for car.
• Seller would (2nd u) or would not (2nd c) give the car for 10,000.
• Buyer would (3rd u) or would not (3rd c) pay 20,000 for the car.
• Buyer does (4th u) or does not (4th c) think seller would give car for 10,000.

Thus the viewpoint file labeled example2-cucu is the viewpoint file where the first and third options are commented out (hence the c) and the second and fourth options are left uncommented (hence the u). This corresponds to the case where: the seller does not think the buyer would pay 20,000; the seller would sell the car for 10,000; the buyer would not pay 20,000; and the buyer does think the seller would sell for 10,000. The template for these scripts (with options to comment out the relevant lines) and some example results are listed in Appendix 2. Table 2.1 summarizes the results of the 16 possibilities.

Unsurprisingly, the condition for the car being sold expensively is that the buyer would pay 20,000 and the seller thinks that the buyer would pay 20,000. This is so even if the buyer thinks that the seller would sell for less and the seller would be willing to sell for less. This is because of the asymmetry of the belief networks where the payment happens before the handing over of a car (never the other way around); thus the seller explores whether the buyer is willing to pay money without giving the car, which delays his more credible offers; this has the effect that the buyer comes down to an expensive offer before the seller makes a cheap offer. The condition for a cheap sale is that the seller would sell for 10,000 and the buyer knows this, except for the case discussed immediately preceding. Although this is rather an artificial source of delay in this case, delaying making offers that are less good for oneself is an established negotiation tactic. Most models of bargaining on prices center only on this case (i.e., those represented by the bottom right-hand corner of Table 2.1).
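As an aside, the sixteen variants and their labels can be enumerated mechanically. The following short Python helper is hypothetical (no such script accompanies the chapter) but mirrors the naming scheme just described.

```python
# Hypothetical helper: enumerate the 16 viewpoint-file variants described
# above ('c' = option commented out, 'u' = option left uncommented).
from itertools import product

OPTIONS = [
    "seller thinks buyer would pay 20,000",
    "seller would give car for 10,000",
    "buyer would pay 20,000",
    "buyer thinks seller would sell for 10,000",
]

for flags in product("cu", repeat=4):
    label = "example2-" + "".join(flags)
    held = [o for o, f in zip(OPTIONS, flags) if f == "u"]
    print(label, "->", "; ".join(held) if held else "none")
```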

Table 2.1 Summary of Results from Example 2

Columns give the seller's position: (cc--) does not think buyer would pay 20,000 and would not give car for 10,000; (cu--) does not think buyer would pay 20,000 and would give car for 10,000; (uc--) thinks buyer would pay 20,000 and would not give car for 10,000; (uu--) thinks buyer would pay 20,000 and would give car for 10,000. Rows give the buyer's position: (--cc) would not pay 20,000 and thinks seller would not sell for 10,000; (--cu) would not pay 20,000 and does think seller would sell for 10,000; (--uc) would pay 20,000 and thinks seller would not sell for 10,000; (--uu) would pay 20,000 and does think seller would sell for 10,000.

         (cc--)          (cu--)            (uc--)                 (uu--)
(--cc)   No agreement    No agreement      No agreement           No agreement
(--cu)   No agreement    Car Sold Cheaply  No agreement           Car Sold Cheaply
(--uc)   No agreement    No agreement      Car Sold Expensively   Car Sold Expensively
(--uu)   No agreement    Car Sold Cheaply  Car Sold Expensively   Car Sold Expensively?

It is interesting to note that no agreement can result even when the seller would be willing to sell the car for 10,000 and the buyer would be willing to buy the car for 20,000, because of their beliefs about what the other will do (e.g., case CUUC). In this example it is clear that the beliefs that each has about the possibilities that exist can make a critical difference to the outcomes.

Discussion of Example

The primary means in this model of determining which agreement is reached is dissembling to the other about what is possible, so that the other accepts an agreement that is suboptimal for them. Thus the car salesman might achieve a better sale through convincing the buyer that he would not sell for 10,000 even though he would if there was no other choice. Of course this strategy is risky as the seller might end up with no agreement at all. Thus in this model there are two sorts of negotiation:

1. Where the parties are searching to see if an agreement is possible at all; and
2. Where the parties think more than one agreement is possible and are trying to determine which deal it will be.

When a deal is better than no deal, then in case 1 it is to one's advantage to be honest about what is and is not possible, but in case 2 it can be advantageous to be deceptive about the real possibilities. Case 2 can be dangerous if the deception means that it seems to the parties that no agreement is possible. Case 2 most closely corresponds to what people commonly refer to as "haggling". This touches on the question of trust and is consistent with Moore and Oesch (1997), who observed:

The good news from this study for negotiators is that there is a real cost to being perceived as untrustworthy. Negotiators who negotiate fairly and earn a reputation for honesty may benefit in the future from negotiating against opponents who trust them more. The bad news that comes from this study is that it is the less trusting and more suspicious party who will tend to claim a larger portion of the spoils.

It is also interesting to compare this analysis to the observations of negotiations at the Marseille fruit and vegetable market made in Rouchier and Hales (2003). There, it was observed that there were two kinds of buyer: those who were looking for a long-term relationship with sellers, so as to ensure continuity of supply, and those who were searching for the best price. For the former kind, once a relationship had been formed, it is more important for both that they reach an agreement than that they get the very best price—that is, the negotiation, although it may involve some haggling as to the price, was better characterized as a search for agreement. This involved a series of "favors" to each other (the seller giving excess stock to the buyer and the buyer not negotiating about price). The latter kind searched the market for the best price, swapping between sellers depending upon the best price of the moment. On the whole, the negotiation with each seller was to find the best price among the possible ones, since if the negotiation failed they could always go to another seller. Thus this is better characterized by the second type—finding the best deal among those possible. This is at the cost of sometimes going without obtaining some product when there is a shortage (since the sellers favor their regular customers in these circumstances).

The structure of the simulation is such that these conditions form a set of hypotheses which it is conceivable could be tested using participatory
methods and observations of negotiations. A game could be set up where the subjects have to negotiate their actions via a limited computer-moderated script—a web version of the game Diplomacy in a form similar to that of the online "Zurich Water Game" might be suitable (Hare et al. 2002b). At suitable stages the subjects' views of the game could be elicited in the form of blobs-and-arrows diagrams, possibly using something akin to the hexagon method used in the Zurich Water Game (Hare et al. 2002a). Such an investigation might lead to further developments in the simulation model presented earlier, which might, in turn, prompt more investigations, as is indicated by Hare et al. (2002c).

This simulation framework greatly extends other simulations of bargaining, which usually focus only on the case of haggling over a limited number of numerical indexes (e.g., price and quantity). The model could be easily extended to include belief extension/change, goal reformulation and even some meta-communication mechanisms. However, before this is done more needs to be discovered about how and when this occurs in real negotiations. This model suggests some directions for this research—the simulation framework is relatively well suited for participatory methods of elicitation since the "nodes and arrows" representation of beliefs is commonly used and thus accessible to stakeholders and domain experts.

Clearly, the preceding example is not the only simulation that is starting to explore the potential of representing non-numerical structure when representing socioeconomic phenomena. Other examples include the papers of Morone and Taylor (e.g., 2004a, 2004b) where the structure of knowledge is explicitly represented and used by firms seeking new knowledge and partnerships with other firms.

CONCLUSION

Dispelling the "number blindness" in the field of socioeconomic modeling will help to restore intelligent criticism of the usefulness of models. This would result in a more productive field, if a more difficult one. It may well be that getting good models of such phenomena is just very difficult and that many models have to be rejected. It may be that instead of single "clever" models, more mundane and time-consuming chains of models might be required. It seems evident that simply playing with floating models will not do the trick.

It used to be that the only way to get a formal model was by using numerically based techniques (including abstractions of these). So with many phenomena one had a choice between ingeniously "shoe-horning" phenomena into numerical form or doing without formal models. We are no longer forced into this terrible dilemma; we now have computational models, which are formal but far more expressive. We are no longer confined to such a restricted form of representation but are free to use the most appropriate representation available. It is much easier to translate narrative accounts that people give of their actions and decision-making processes into computational rules. The whole world of structure is now available to explore.

APPENDIX—SIMULATION RUNS FROM THE EXAMPLE

Viewpoint file template

Agent: Seller : The Car Salesman
IndicatorWeights: car 5000 money 1
StateValuationClause: sum (multiply 5000 (indicatorValue car)) (multiply 1 (indicatorValue money))
InitialNodes: Start
Node: Start : the start
Indicators: car 1 money 0
Link: Pay10000 => GetLittle : given 10000 by buyer
# Comment out if seller thinks buyer would not pay 20000 #
Link: Pay20000 => GetLots : given 20000 by buyer #
Node: GetLittle : Seller has 10000 and car
Indicators: car 1 money 10000
# Comment out if seller would not give car for 10000 #
Action: GiveCarCheaply : Seller gives car to buyer for only 10000 #
Link: GiveCarCheaply => CarSoldCheaply #
Node: GetLots : Seller has 20000 and car
Indicators: car 1 money 20000
Action: GiveCarExpensively : Seller gives car to buyer #
Link: GiveCarExpensively => CarSoldExpensively #
Node: CarSoldCheaply : Seller has 10000 #
Indicators: car 0 money 10000 #
Node: CarSoldExpensively : Seller has 20000 #
Indicators: car 0 money 20000 #

Agent: Buyer : The Car Purchaser
IndicatorWeights: car 25000 money 1
StateValuationClause: sum (multiply 25000 (indicatorValue car)) (multiply 1 (indicatorValue money))
InitialNodes: Start
Node: Start : the start
Indicators: car 0 money 20000
Action: Pay10000 : pay 10000
# Comment out if buyer would not pay 20000 #
Action: Pay20000 : pay 20000
Link: Pay10000 => GaveLittle : gave 10000 #
Link: Pay20000 => GaveLots : gave 20000 #
Node: GaveLittle : Seller has 10000 and car
Indicators: car 0 money 10000
# Comment out if seller would not give car for 10000 #
Link: GiveCarCheaply => CarSoldCheaply : seller gives car for 10000 #
Node: GaveLots : Seller has 20000 and car
Indicators: car 0 money 0 #
Link: GiveCarExpensively => CarSoldExpensively : seller gives car for 20000 #
Node: CarSoldCheaply : Seller has car and 10000 #
Indicators: car 1 money 10000 #
Node: CarSoldExpensively : Seller has car and 0 #
Indicators: car 1 money 0

Results

Due to the length of these I include only a few of the results to give their flavor.

CCCC
Seller does not think buyer would pay 20,000; seller would not give car for 10,000; buyer would not pay 20,000; and buyer thinks seller would not sell for 10,000.

Buyer: Can someone please Pay20000 and GiveCarExpensively so we can achieve CarSoldExpensively?
Seller: Can someone please Pay10000 so we can achieve GetLittle?
Buyer: I will Pay10000 if others Pay20000 and GiveCarExpensively.
Seller: Can someone please Pay10000 and GiveCarCheaply so we can achieve CarSoldCheaply?
Buyer: I will Pay10000 if others Pay20000 and GiveCarExpensively.
(State of Buyer) is: Start.
(State of Seller) is: Start.

CUCU
Seller does not think buyer would pay 20,000; seller would give car for 10,000; buyer would not pay 20,000; and buyer thinks seller would sell for 10,000.

Seller: Can someone please Pay10000 so we can achieve GetLittle?
Buyer: I will Pay10000 if others GiveCarCheaply.
Seller: I will GiveCarCheaply if others Pay10000.
Buyer: Can someone please Pay20000 and GiveCarExpensively so we can achieve CarSoldExpensively?
Buyer: I will Pay10000 if others Pay20000 and GiveCarExpensively.
Seller: I agree to GiveCarCheaply if others Pay10000
Buyer: I agree to Pay10000 if others GiveCarCheaply
Buyer has done Pay10000.
Seller has done GiveCarCheaply.
(State of Seller) is: CarSoldCheaply.
(State of Buyer) is: CarSoldCheaply.

CUUC
Seller does not think buyer would pay 20,000; seller would give car for 10,000; buyer would pay 20,000; and buyer does not think seller would sell for 10,000.

Seller: Can someone please Pay10000 so we can achieve GetLittle?
Buyer: I will Pay20000 if others GiveCarExpensively.
Seller: I will GiveCarCheaply if others Pay10000.

(State of Seller) is: Start.
(State of Buyer) is: Start.

UCUC
Seller does think buyer would pay 20,000; seller would not give car for 10,000; buyer would pay 20,000; and buyer does not think seller would sell for 10,000.

Buyer: I will Pay20000 if others GiveCarExpensively.
Seller: Can someone please Pay20000 so we can achieve GetLots?
Seller: I will GiveCarExpensively if others Pay20000.
Seller: Can someone please Pay10000 so we can achieve GetLittle?
Buyer: I agree to Pay20000 if others GiveCarExpensively
Seller: I agree to GiveCarExpensively if others Pay20000
Seller: Can someone please Pay10000 and GiveCarCheaply so we can achieve CarSoldCheaply?
Buyer has done Pay20000.
Seller has done GiveCarExpensively.

(State of Buyer) is: CarSoldExpensively.
(State of Seller) is: CarSoldExpensively.
UUUU
Seller thinks buyer would pay 20,000; seller would give car for 10,000; buyer would pay 20,000; and buyer thinks seller would sell for 10,000.

Buyer: I will Pay10000 if others GiveCarCheaply.
Seller: Can someone please Pay20000 so we can achieve GetLots?
Buyer: I will Pay20000 if others GiveCarExpensively.
Seller: I will GiveCarExpensively if others Pay20000.
Seller: Can someone please Pay10000 so we can achieve GetLittle?
Buyer: I agree to Pay20000 if others GiveCarExpensively
Seller: I agree to GiveCarExpensively if others Pay20000
Seller: I will GiveCarCheaply if others Pay10000.
Buyer has done Pay20000.
Seller has done GiveCarExpensively.
(State of Buyer) is: CarSoldExpensively.
(State of Seller) is: CarSoldExpensively.

NOTES

1. This is an idealization; it is often a lot less clear-cut than this. However it is clear that people do develop ways of thinking about certain phenomena and that these are somewhat persistent.
2. It is the presumption of science (so far a successful one) that there must be some reason why a predictive model does work, and hence that the predictive model must be, in some way, also an explanatory one. However the bridge between a predictive and an explanatory model might not be immediate.
3. Cartwright (1983) distinguishes phenomenological and explanatory laws, which roughly correspond to the predictive and explanatory models here.
4. This may be due to a limitation of how quickly such a process of elaboration can occur and the fact that people find simpler models easier to deal with.
5. This is, unfortunately, especially true in economics, where adherence to its social norms is particularly strongly enforced.
6. This idea can be characterized as the modeler's mental model of their own simulation.
7. At this point the dedicated utility theorists will suggest all sorts of kludges to save their framework, for example: assigning a different utility to all combinations of goods rather than single items; allowing utilities to vary wildly between contexts; or claiming that they exist but are unmeasurable. All these have the effect of destroying the usefulness of modeling preference with a total order—it would be far better to choose a more appropriate way of representing preferences.
8. I do not want in any way to imply that these are stupid; many involve impressive ingenuity!
BIBLIOGRAPHY

Axelrod, R. (1984) The Evolution of Cooperation, New York: Basic Books.
Axelrod, R. (1997) 'Advancing the art of simulation in the social sciences', in R. Conte, R. Hegselmann and P. Terna (eds) Simulating Social Phenomena, Berlin: Springer, 21–40.
Cartwright, N. (1983) How the Laws of Physics Lie, Oxford: Oxford University Press.
Edmonds, B. (1999) 'Modelling bounded rationality in agent-based simulations using the evolution of mental models', in T. Brenner (ed.) Computational Techniques for Modelling Learning in Economics, Dordrecht: Kluwer, 305–32.
Edmonds, B. (2000a) 'Commentary on "A bargaining model to simulate negotiations between water users" by Sophie Thoyer, Sylvie Morardet, Patrick Rio, Leo Simon, Rachel Goodhue and Gordon Rausser', Journal of Artificial Societies and Social Simulation, 4 (2). Available HTTP: (accessed March 7th, 2010).
Edmonds, B. (2000b) 'The purpose and place of formal systems in the development of science', CPM Report 00–75, MMU, UK. Available HTTP: (accessed March 7th, 2010).
Edmonds, B. (2001) 'The use of models—making MABS actually work', in S. Moss and P. Davidsson (eds) Multi Agent Based Simulation, Springer, Lecture Notes in Artificial Intelligence, 1979: 15–32.
Edmonds, B. (2005) 'The nature of noise', CPM Report 05–156, MMU, presented at EPOS 2006, Epistemological Perspectives on Simulation—II Edition, University of Brescia, Italy, 5–6 October 2006. Available HTTP: (accessed March 7th, 2010).
Edmonds, B. (2007) 'Simplicity is not truth-indicative', in C. Gershenson et al. (eds) Philosophy and Complexity, Singapore: World Scientific, 65–80.
Edmonds, B. and Hales, D. (2003a) 'Computational simulation as theoretical experiment', CPM Report 03–106, MMU, 2003. Available HTTP: (accessed March 7th, 2010).
Edmonds, B. and Hales, D. (2003b) 'Replication, replication and replication—some hard lessons from model alignment', Journal of Artificial Societies and Social Simulation, 6 (4). Available HTTP: (accessed March 7th, 2010).
Edmonds, B. and Hales, D. (2004) 'When and why does haggling occur? Some suggestions from a qualitative but computational simulation of negotiation', Journal of Artificial Societies and Social Simulation, 7 (2). Available HTTP: (accessed March 7th, 2010).
Edmonds, B. and Moss, S. (2005) 'From KISS to KIDS—an "anti-simplistic" modelling approach', in P. Davidsson et al. (eds) Multi Agent Based Simulation 2004, Springer, Lecture Notes in Artificial Intelligence, 3415: 130–44.
Edmonds, B. and Wallis, S. (2002) 'Towards an ideal social simulation language', 3rd International Workshop on Multi-Agent Based Simulation (MABS'02), Lecture Notes in Artificial Intelligence, 2581: 104–24.
Giere, R.N. (1988) Explaining Science: a cognitive approach, Chicago: University of Chicago Press.
Gödel, K. (1930) 'Die Vollständigkeit der Axiome des logischen Funktionenkalküls', Monatshefte für Mathematik und Physik, 37: 349–60.
Hales, D. (2003) 'Neg-o-net—a negotiation simulation testbed', CPM Report, CPM, MMU, Manchester, UK. Available HTTP: (accessed March 7th, 2010).
Hare, M.P., Gilbert, N., Maltby, S. and Pahl-Wostl, C. (2002c) 'An Internet-based role playing game for developing stakeholders' strategies for sustainable urban water management: experiences and comparisons with face-to-face gaming', proceedings of ISEE 2002, Sousse, Tunisia.
Hare, M.P., Heeb, J. and Pahl-Wostl, C. (2002b) 'The symbiotic relationship between role playing games and model development: a case study in participatory model building and social learning for sustainable urban water management', proceedings of ISEE 2002, Sousse, Tunisia.
Hare, M.P., Medugno, D., Heeb, J. and Pahl-Wostl, C. (2002a) 'An applied methodology for participatory model building of agent-based models for urban water management', in C. Urban (ed.) Third Workshop on Agent-Based Simulation, SCS Europe Bvba, Ghent, 61–6.
Hesse, M.B. (1963) Models and Analogies in Science, London: Sheed and Ward.
Hughes, R.G. (1997) 'Models and representation', Philosophy of Science, 64 (proc): S325–6.
Kaneko, K. (1990) 'Globally coupled chaos violates the law of large numbers but not the central limit theorem', Physics Review Letters, 65: 1391–4.
Krantz, D.H., Luce, R.D., Suppes, P. and Tversky, A. (1971) Foundations of Measurement, vol. 1: 'Additive and polynomial representations', New York: Academic Press.
Kuhn, T.S. (1962) The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
Luce, R.D., Krantz, D.H., Suppes, P. and Tversky, A. (1990) Foundations of Measurement, vol. 3: 'Representation, axiomatization, and invariance', New York: Academic Press.
Moore, D. and Oesch, J.M. (1997) 'Trust in negotiations: the good news and the bad news', Kellogg working paper 160. Available HTTP: (accessed March 1st, 2004).
Morone, P. and Taylor, R. (2004a) 'Knowledge diffusion dynamics and network properties of face-to-face interactions', Journal of Evolutionary Economics, 14 (3): 327–51.
Morone, P. and Taylor, R. (2004b) 'Small world dynamics and the process of knowledge diffusion: the case of the metropolitan area of greater Santiago De Chile', Journal of Artificial Societies and Social Simulation, 7 (2). Available HTTP: (accessed March 7th, 2010).
Moss, S. (2002) 'Challenges for agent-based social simulation of multilateral negotiation', in K. Dautenhahn, A. Bond, D. Canamero and B. Edmonds (eds) Socially Intelligent Agents—creating relationships with computers and robots, Dordrecht: Kluwer.
Moss, S., Artis, M. and Ormerod, P. (1994) 'A smart macroeconomic forecasting system', The Journal of Forecasting, 13 (3): 299–312.
Moss, S. and Edmonds, B. (2005a) 'Sociology and simulation: statistical and qualitative cross-validation', American Journal of Sociology, 110 (4): 1095–131.
Moss, S. and Edmonds, B. (2005b) 'Towards good social science', Journal of Artificial Societies and Social Simulation, 8 (4). Available HTTP: (accessed March 7th, 2010).
Polhill, J.G., Izquierdo, L.R. and Gotts, N.M. (2005) 'The ghost in the model (and other effects of floating point arithmetic)', Journal of Artificial Societies and Social Simulation, 8 (1). Available HTTP: (accessed March 7th, 2010).
Popper, K. (1963) Conjectures and Refutations, London: Routledge and Kegan Paul.
Riolo, R.L., Cohen, M.D. and Axelrod, R. (2001) 'Evolution of cooperation without reciprocity', Nature, 411: 441–3.
Rosen, R. (1985) Anticipatory Systems, New York: Pergamon.
Rouchier, J. and Hales, D. (2003) 'How to be loyal, rich and have fun too: the fun is yet to come', 1st international conference of the European Social Simulation Association (ESSA 2003), Groningen, Netherlands, September 2003.
Sarle, W.S. (1997) 'Measurement theory: frequently asked questions', version 3, September 14. Available HTTP: (accessed 22 January 2004).
Stevens, S.S. (1946) 'On the theory of scales of measurement', Science, 103: 677–80.
Suppes, P., Krantz, D.H., Luce, R.D. and Tversky, A. (1989) Foundations of Measurement, vol. 2: 'Geometrical, threshold, and probabilistic representations', New York: Academic Press.
Taylor, R.I. (2003) 'Agent-based modelling incorporating qualitative and quantitative methods: a case study investigating the impact of e-commerce upon the value chain', unpublished doctoral thesis, Manchester Metropolitan University, Manchester, UK.
Thoyer, S. et al. (2000) 'A bargaining model to simulate negotiations between water users', Journal of Artificial Societies and Social Simulation, 4 (2). Available HTTP: (accessed March 7th, 2010).
van Boven, L. and Thompson, L. (2001) 'A look into the mind of the negotiator: mental models in negotiation', Kellogg working paper 211. Available HTTP: (accessed March 1st, 2004).
Whitehead, A.N. and Russell, B. (1962) Principia Mathematica to *56, Cambridge: Cambridge University Press (originally published 1913).

3

Devices for Theory Development
Why Using Computer Simulation If Mathematical Analysis Is Available?

Rita Fioresi and Edoardo Mollona

INTRODUCTION

As Cohen suggests (1960a: 82), the formulation of theories in terms of computer models provides scholars who are not mathematicians with opportunities to work with formal models. To be more precise on this point, computer modeling and simulation support researchers in two ways. First, they allow researchers to infer deductions from modeled assumptions when these assumptions are not treatable by means of mathematical analysis. Thus, computer modeling and simulation allow investigators to preserve richness, and complexity, in the portrayal of social processes. For example, when working with a computer model, a researcher can escape the pressures to linearize functional forms that impinge on modeling when an analytical solution is necessary. Second, computer modeling and simulation permit a flexible, yet rigorous, manipulation of modeled assumptions. This feature is valuable in order to produce insightful experiments to investigate the relationships between a causal structure, which is captured in a formal model, and the implied behavior. In the words of Cohen (1960a: 82):

A further advantage of computer models is the ease of modifying the assumptions of the model. When suitable programming languages become available, equations can be inserted, deleted, or changed in the model, and only local changes which can be quickly made will be required in the program. Modifications of this kind will have a much smaller effect on the ease with which the model can be simulated than they would on the difficulty of obtaining analytical solutions.

The advantages of computer modeling and simulation ought not to overshadow the connected risks and shortcomings. Here are two typical laments that arise when scholars express their perplexity towards computer simulation. First, often—not always—the system of symbols, whose behavior is simulated by computers, is not (entirely) represented as a mathematical model. Very often, this system of symbols takes the appearance of a number of strings of programming code. These strings, which may be very numerous,
embed the algorithms that describe both the social behaviors under study and a number of rules that direct the computer in executing the code. Obviously, this program may be hard to communicate to readers who are not necessarily skilled in computer science. Thus, replicating an experiment is not a trivial and obvious endeavor. A second complaint is that a computer simulation produces results by means of numerical rather than analytical solution. Obviously, this is true because often we use computer simulation for the very reason that the phenomenon under study is so complex that it does not allow an analytical treatment. The problem with numerical solutions is that each simulation produces a result that depends on the specific calibration of the parameters that we used for that simulation. We should run an infinite number of simulations (as infinite as the values that we could use to calibrate parameters) to obtain the entire possible repertoire of behaviors that a model could produce (Orcutt 1960: 893). This fact may induce one to think that any conclusion extracted from a simulation experiment is of limited value. However, in this chapter, we will suggest that, on the one hand, it is true that the number of values we can use to calibrate parameters in a simulation is infinite, but the number of plausible values to consider may be, in many circumstances, finite. Furthermore, rather than considering computer simulation and mathematical analysis as mutually exclusive, we address the relationships between mathematical analysis and computer simulation (Troitzsch 1998). In this chapter, we try to explain, using an example, how computer simulation can interact with mathematical analysis. We report segments of our previous studies on an empirical phenomenon: the co-evolution and competition between two clusters of firms. The interest here is not in the specific research context but rather in the methodological approach employed. In the next section, we describe the empirical phenomenon that we address in order to elucidate our methodological argument. In section 3, we report and discuss a mathematical model that captures interaction dynamics among competing species. In section 4, we present an adaptation of the model that captures stylized traits of the empirically observed interaction dynamics among populations of firms. In the same section, we present a full mathematical solution of the equilibrium points of the system, together with the discussion of their stability. In section 5, we simulate the model to acquire further insights and we highlight the kind of contributions that numerical computer solutions of dynamic systems can bring along. In the last section, we draw a number of conclusions.

COMPETITION AMONG GEOGRAPHICAL PRODUCTIVE CLUSTERS

Geographical clusters can be defined as spatially concentrated groups of small entrepreneurial firms, competing in the same or related industries, that are linked through vertical (buyer-supplier) or horizontal (alliance, resource

sharing etc.) relationships. What characterizes geographical clusters is that a complex network of firms is bound together in a social division of labor (Scott 1982). Geographic clusters have historically played a prominent role in the national manufacturing production of countries such as, for example, Italy, where they have also been labeled Industrial Districts (Becattini 1979). However, the cases of the information and communication technology district of Silicon Valley (Saxenian 1994), the Hollywood district (Scott 1998) in the U.S., the Motorsport Valley in South England (Pinch and Henry 1999) and the software district of Bangalore, in India, demonstrate that geographical clusters play a role as key economic engines in many countries. Within the literature on clusters, a number of authors focused on the explanation of why firms benefit from concentrating in geographical space or in a local value chain (Scott and Storper 1992; Cooke et al. 1997; Human and Provan 1997; Porter 1998, 2000; Manskell and Malmberg 1999). In this literature, a line of research has explored the competitive advantage of geographical clusterization within the social embeddedness framework (Granovetter 1985; Manskell 2001). Under this perspective, advantages of clusterization derive from the development of a dense web of social relationships among firms that facilitates the exchange of knowledge—in particular, of the most valuable knowledge that is highly tacit, difficult to replicate and not easily purchased (Keeble and Wilkinson 1999). Thus, within geographical clusters, social relationships are channels that convey information and knowledge to give rise to "innovative milieus" (Maillat 1991) or "learning regions" (Lawson and Lorenz 1998).

Recently, both the globalization of markets and the increasing competitive pressures have put social and economic relationships within geographical clusters under strain. The association between competitive pressures and internationalization tends to trigger a process of knowledge outflow from geographical clusters leading to a phenomenon of de-clusterization (Marafioti et al. 2008). On the other hand, besides pressures towards de-localization and active internationalization, Mollona and Presutti (2006) highlighted the process of passive internationalization in influencing the internal equilibrium of Industrial Districts. Passive internationalization occurs when a high number of co-localized small foreign firms establish themselves in an area geographically close to a pre-existing industrial cluster. In this respect, Mollona and Presutti (2006) documented how in the textile industry in Italy a newly emerged cluster of firms owned by Chinese entrepreneurs competes with firms co-located in a pre-existing cluster.

In both the aforementioned cases, the relevant issue is the competition between geographical clusters. In Marafioti et al. (2008) the focus is on competition among clusters that are located in different geographical areas whereas in Mollona and Presutti (2006) competing clusters are geographically co-located. This theme is fairly delicate to handle. Firms within a cluster may be forced into a dilemma. On the one hand, they may have an incentive to activate economic relationships with firms located outside the cluster. For
example, in the study of Marafioti et al. (2008) suppliers of machinery in the footwear industry that operate within a geographical cluster, facing decreasing demand in domestic markets, tend to increase their exports by selling their goods to firms located outside the cluster; or, as reported by Mollona and Presutti (2006), firms located in a textile geographical cluster buy intermediate goods from Chinese suppliers located nearby in a different cluster. On the other hand, the activation of economic relationships may contribute to undesired long-term consequences. For example, since in the footwear industry productive processes require a tight relation between footwear firms and footwear machinery firms, the co-location of machinery and shoe producers implies the reciprocal transfer of skills and know-how between the two stages of the productive chain (McKendrick et al. 2001; Meyer 2004). Thus, Marafioti et al. (2008) suggest that exports of machinery firms that are located in a geographical cluster to firms outside the cluster weaken the cluster and lead to de-clusterization since exports facilitate knowledge transfer towards machinery users and machinery producers that are co-located in the area where the exports are directed. Since the competitiveness of a geographical cluster depends on productive processes and competencies that are distributed across the boundaries of a number of small firms, which come to share the same destiny, the weaknesses of firms located at a particular stage of a cluster's value chain may endanger the survival of the whole cluster. The consequence is that capturing unfolding dynamics of interactions among populations of firms located in different geographical clusters is a fairly complex matter since intra-cluster relationships are intertwined with inter-cluster competitive and commercial relationships. In addition, the diversity of industrial clusters and the heterogeneity of research methodologies still pose significant barriers to systematically investigating longitudinal clusters' dynamics and to generalizing findings (Staber 1998; Staber and Morrison 2000). For this reason, we use formal modeling to capture stylized traits of the complex interaction between two geographical clusters. We are going to achieve our purpose by modeling our dynamic system with a system of differential equations. The formal model is used as a device to rigorously think through the issue under scrutiny. The theoretical speculation is supported by both mathematical analysis and computer simulation. Since we model competition among a population of firms sharing key features, we use the theoretical framework of the competing species model, which we borrow from biology.

THE COMPETING SPECIES MODEL

Since we model competition among populations of firms using the theoretical framework of the competing species model, we quickly review it in its generality, directing the interested reader to Boyce and DiPrima (1997).

We assume that we are in a closed environment, where two different species live, for example, two species of fish in a pond. Of course, rather than a pond, we assume we are dealing with a geographical area where two clusters compete against each other. We assume the species do not prey on each other, but that they compete for the same resource for survival. In our example of fish in the pond, the survival resource would be the limited amount of food present in the pond. If we have just one of the species in our environment, the equation that rules the dynamic evolution of the system is the following:

$$\frac{dx}{dt} = x(e_1 - s_1 x)$$

where $x = x(t)$ is the variable counting the number of specimens of the species at a given time $t$, $e_1$ is the growth rate and $s_1$ the inhibition of the population on its own growth, while $e_1/s_1$ gives the level of saturation, given the specific carrying capacity of an environment. These are positive constants, whose values can be determined by an empirical observation of the system. In fact, it is very reasonable that, even if just one species of fish is present in a pond, after a while the number of fish will stop growing, reaching an equilibrium, based on the amount of the resources (food) present in the environment (the pond). When both species are present, each species will interfere with and reduce the growth rate of the other. The simplest system of equations describing this phenomenon is the following:

$$\begin{cases}
\dfrac{dx}{dt} = x(e_1 - s_1 x - a_1 y)\\[6pt]
\dfrac{dy}{dt} = y(e_2 - s_2 y - a_2 x)
\end{cases}$$

where the constants $a_1$ and $a_2$ measure the interference of one species on the other and can be determined again by an empirical observation of our system. We can determine the critical points of the dynamic system by solving the homogeneous system:

$$\begin{cases}
\dfrac{dx}{dt} = x(e_1 - s_1 x - a_1 y) = 0\\[6pt]
\dfrac{dy}{dt} = y(e_2 - s_2 y - a_2 x) = 0
\end{cases}$$
This will yield four different equilibrium points with coordinates (x, y) representing the number of specimens of each of the two species:

$$O = (0,0); \quad P = \left(\frac{e_1}{s_1},\, 0\right); \quad Q = \left(0,\, \frac{e_2}{s_2}\right); \quad Z = \left(\frac{e_1 s_2 - a_1 e_2}{s_1 s_2 - a_1 a_2},\ \frac{e_2 s_1 - a_2 e_1}{s_1 s_2 - a_1 a_2}\right) = (X, Y)$$

The first three points represent the extinction of one of the two species (points P and Q) or both of them (point O), while the point Z is the most interesting to study, since it leads to a state where both species coexist. In order to study the stability of the generic equilibrium point $(x_0, y_0)$ we need to linearize the system; in other words we need to study what happens in a neighborhood of the given equilibrium point. So we set $x = x_0 + u$, $y = y_0 + v$. If we substitute into our system and expand in power series we get the equations:

$$\frac{du}{dt} = (e_1 - 2s_1 x_0 - a_1 y_0)\,u - a_1 x_0\, v, \qquad \frac{dv}{dt} = -a_2 y_0\, u + (e_2 - 2s_2 y_0 - a_2 x_0)\,v$$

or, in matrix notation:

$$\frac{d}{dt}\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} e_1 - 2s_1 x_0 - a_1 y_0 & -a_1 x_0 \\ -a_2 y_0 & e_2 - 2s_2 y_0 - a_2 x_0 \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix}$$

We can immediately see that the point $O = (0,0)$ is unstable; in fact in this case the eigenvalues of the matrix of the system are $e_1$ and $e_2$, which are always positive. For the stability of the remaining three points we need to consider different cases, depending on the values of the parameters of the system. Intuitively, if one species has a very strong growth rate with respect to the other and the interaction is strong, we expect this species to overcome the other in due time. On the other hand, if both species have moderate growth rates and they have little influence on each other (that is, they are not too aggressive in their competition), then we expect them to coexist peacefully for a long time. Let us now examine the stability of the equilibrium point $Z = (X, Y)$ calculated previously. If we compute the eigenvalues of the matrix of the linear system described earlier with $(x_0, y_0) = (X, Y)$ we find out they are:

$$\frac{1}{2}\left[-s_1 X - s_2 Y \pm \sqrt{(s_1 X + s_2 Y)^2 - 4(s_1 s_2 - a_1 a_2)XY}\right]$$

The sign of the eigenvalues determines the stability of the critical point. We need to examine two cases.

Case $s_1 s_2 > a_1 a_2$: weak competition. In this case one sees after a small computation that the eigenvalues are real and negative. Hence the critical point Z is stable, and sustained coexistence of both species is possible. In this case, one also sees with another small calculation that the remaining three critical points are unstable. This means that the dynamic evolution of the system is always towards the coexistence of both species.

Case $s_1 s_2 < a_1 a_2$: strong competition. In this case the eigenvalues are real and of opposite signs; hence, the critical point Z is unstable (a saddle point). This implies that the coexistence of the two species is not possible and the system will steer away from it. A small calculation shows that of the remaining three points the origin is unstable, while the other two are stable. This means that one of the two species will overcome the other.
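The two regimes are easy to reproduce numerically. The following is a minimal Python sketch, assuming simple forward-Euler integration; the parameter values and initial conditions are purely illustrative and not taken from the text.

```python
# Sketch: numerical integration of the competing species model by
# forward Euler; parameter values are illustrative only.

def simulate(e1, s1, a1, e2, s2, a2, x0, y0, dt=0.01, steps=20000):
    x, y = x0, y0
    for _ in range(steps):
        dx = x * (e1 - s1 * x - a1 * y)
        dy = y * (e2 - s2 * y - a2 * x)
        x = max(x + dt * dx, 0.0)
        y = max(y + dt * dy, 0.0)
    return x, y

# Weak competition (s1*s2 > a1*a2): trajectories approach Z, where both
# species coexist (here Z = (2/3, 2/3)).
print(simulate(1.0, 1.0, 0.5, 1.0, 1.0, 0.5, x0=0.1, y0=0.2))

# Strong competition (s1*s2 < a1*a2): Z is unstable and one species
# overcomes the other, which one depending on the initial values.
print(simulate(1.0, 1.0, 1.5, 1.0, 1.0, 1.5, x0=0.6, y0=0.4))
```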

DESCRIPTION OF THE MATHEMATICAL MODEL: A FORMAL ANALYSIS OF EQUILIBRIUM POINTS AND THEIR STABILITY

Let's now go back to our original problem, that is, the modeling of the dynamic evolution of two clusters of organizations under their reciprocal competition. The diagram in Figure 3.1 sketches the relationships among four populations. We have two populations, $x_1$ and $x_2$, of suppliers that supply two populations of producers of finished goods, $y_1$ and $y_2$. Populations $x_1$ and $y_1$ are located in the same cluster, and so are populations $x_2$ and $y_2$. However, we assume that suppliers of each cluster can sell to producers in the other cluster. That is, $x_1$ can sell to $y_2$ and $x_2$ can sell to $y_1$. Therefore, the two populations of suppliers $x_1$ and $x_2$ compete for a scarce resource that is the total number of producers of finished goods: $y_1 + y_2$. Similarly, the populations of producers of finished goods, $y_1$ and $y_2$, are in competition trying to reach the greatest number of buyers, which represent the potential market $u$ of consumers.
We investigate long-term survival dynamics that arise as the result of two classes of processes. The first class of processes is the horizontal competitive dynamics, while the second class of processes includes the vertical commercial relationships that connect suppliers and finished goods producers (Figure 3.1). For example, in their study on the textile cluster of Val Vibrata, Mollona and Presutti (2006) addressed the dynamic co-evolution of four populations of firms that are in competition. In the textile cluster of Val Vibrata, a population of suppliers and a population of finished goods producers were originally located. In the last 30 years, two new populations set up their operations in the same geographical cluster. The first population is made of Chinese suppliers whereas the second population is made of Chinese producers of finished goods. Since they have legal status in Italy and operate under Italian law, these firms became Italian firms. However, we place them in a separate cluster because they have sharply distinct traits with respect to the incumbent population of Italian firms and, on the other hand, they show strong internal homogeneity. They are not embedded in the indigenous web of social relationships as are the incumbent Italian firms but, on the contrary, developed their own web of socioeconomic relationships. In addition, Chinese firms generally produce at an average lower cost and sell at an average lower price. In the described cluster, Italian and Chinese suppliers compete to provide both Italian and Chinese producers with intermediate goods, and Italian and Chinese producers of finished goods compete for the market of finished goods. The growth rates of the four populations $x_1$, $x_2$, $y_1$ and $y_2$ are modeled, in analogy with the competing species model, by the four differential equations:
$$\begin{cases}
\dfrac{dx_1}{dt} = \dfrac{g_1 x_1 \,(y_1 + y_2 - c_{21} x_2 - x_1)}{y_1 + y_2}\\[6pt]
\dfrac{dx_2}{dt} = \dfrac{g_2 x_2 \,(y_1 + y_2 - c_{12} x_1 - x_2)}{y_1 + y_2}\\[6pt]
\dfrac{dy_1}{dt} = \dfrac{\tilde g_1 y_1 \,(u - \tilde c_{21} y_2 - y_1)}{u}\\[6pt]
\dfrac{dy_2}{dt} = \dfrac{\tilde g_2 y_2 \,(u - \tilde c_{12} y_1 - y_2)}{u}
\end{cases}$$

where $g_1, g_2, \tilde g_1, \tilde g_2$ are positive constants representing the growth rates of the populations $x_1, x_2, y_1$ and $y_2$, and $c_{12}, c_{21}, \tilde c_{12}, \tilde c_{21}$ are positive constants which represent the competition rates.

Figure 3.1 A qualitative model of economic relationships among populations of firms.
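For readers who want to experiment, here is a minimal Python sketch of this system, again assuming forward-Euler integration; the common growth rate g and the weak-competition calibration are illustrative assumptions, not values used by the authors.

```python
# Sketch: forward-Euler integration of the four-population model.
# The growth rate g and the calibration below are illustrative only.

def step(state, u, g, c12, c21, ct12, ct21, dt):
    x1, x2, y1, y2 = state
    s = y1 + y2  # total number of finished goods producers
    dx1 = g * x1 * (s - c21 * x2 - x1) / s
    dx2 = g * x2 * (s - c12 * x1 - x2) / s
    dy1 = g * y1 * (u - ct21 * y2 - y1) / u
    dy2 = g * y2 * (u - ct12 * y1 - y2) / u
    return tuple(max(v + dt * dv, 0.0)
                 for v, dv in zip(state, (dx1, dx2, dy1, dy2)))

state = (100.0, 100.0, 100.0, 100.0)
for _ in range(100000):  # weak competition: all competition rates < 1
    state = step(state, u=1000.0, g=0.5,
                 c12=0.5, c21=0.5, ct12=0.5, ct21=0.5, dt=0.01)
print([round(v, 1) for v in state])
# With this calibration the trajectory settles at an interior point
# where all four populations coexist.
```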

As we have seen before, we can find the equilibrium points by setting:

$$\begin{cases}
\dfrac{g_1 x_1 \,(y_1 + y_2 - c_{21} x_2 - x_1)}{y_1 + y_2} = 0\\[6pt]
\dfrac{g_2 x_2 \,(y_1 + y_2 - c_{12} x_1 - x_2)}{y_1 + y_2} = 0\\[6pt]
\dfrac{\tilde g_1 y_1 \,(u - \tilde c_{21} y_2 - y_1)}{u} = 0\\[6pt]
\dfrac{\tilde g_2 y_2 \,(u - \tilde c_{12} y_1 - y_2)}{u} = 0
\end{cases}$$

This determines 12 equilibrium points, namely:

$$P_1 = (0,0,0,u), \quad P_2 = (0,u,0,u), \quad P_3 = (u,0,0,u)$$

$$P_4 = \left(\frac{u(c_{21}-1)}{c_{12}c_{21}-1},\ \frac{u(c_{12}-1)}{c_{12}c_{21}-1},\ 0,\ u\right)$$

$$P_5 = (0,0,u,0), \quad P_6 = (0,u,u,0), \quad P_7 = (u,0,u,0)$$

$$P_8 = \left(\frac{u(c_{21}-1)}{c_{12}c_{21}-1},\ \frac{u(c_{12}-1)}{c_{12}c_{21}-1},\ u,\ 0\right)$$

$$P_9 = \left(0,\ 0,\ \frac{u(\tilde c_{21}-1)}{\tilde c_{12}\tilde c_{21}-1},\ \frac{u(\tilde c_{12}-1)}{\tilde c_{12}\tilde c_{21}-1}\right)$$

$$P_{10} = \left(0,\ \frac{u(2-\tilde c_{12}-\tilde c_{21})}{1-\tilde c_{12}\tilde c_{21}},\ \frac{u(\tilde c_{21}-1)}{\tilde c_{12}\tilde c_{21}-1},\ \frac{u(\tilde c_{12}-1)}{\tilde c_{12}\tilde c_{21}-1}\right)$$

$$P_{11} = \left(\frac{u(2-\tilde c_{12}-\tilde c_{21})}{1-\tilde c_{12}\tilde c_{21}},\ 0,\ \frac{u(\tilde c_{21}-1)}{\tilde c_{12}\tilde c_{21}-1},\ \frac{u(\tilde c_{12}-1)}{\tilde c_{12}\tilde c_{21}-1}\right)$$

$$P_{12} = \left(\frac{u(2-\tilde c_{12}-\tilde c_{21})(1-c_{21})}{(1-\tilde c_{12}\tilde c_{21})(1-c_{12}c_{21})},\ \frac{u(2-\tilde c_{12}-\tilde c_{21})(1-c_{12})}{(1-\tilde c_{12}\tilde c_{21})(1-c_{12}c_{21})},\ \frac{u(\tilde c_{21}-1)}{\tilde c_{12}\tilde c_{21}-1},\ \frac{u(\tilde c_{12}-1)}{\tilde c_{12}\tilde c_{21}-1}\right)$$
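These coordinates can be checked mechanically. The short symbolic sketch below (assuming sympy is available; the tilded parameters are written ct here) substitutes P12 into the bracketed factors of the four equations and confirms that each vanishes, which, for nonzero populations, makes all four right-hand sides zero.

```python
# Sketch: verify symbolically that P12 annihilates the right-hand sides
# (it suffices that the bracketed factors vanish). Uses sympy.
import sympy as sp

u, c12, c21, ct12, ct21 = sp.symbols("u c12 c21 ct12 ct21", positive=True)

y1 = u * (ct21 - 1) / (ct12 * ct21 - 1)
y2 = u * (ct12 - 1) / (ct12 * ct21 - 1)
s = sp.simplify(y1 + y2)
x1 = s * (1 - c21) / (1 - c12 * c21)
x2 = s * (1 - c12) / (1 - c12 * c21)

print(sp.simplify(s - c21 * x2 - x1))   # -> 0
print(sp.simplify(s - c12 * x1 - x2))   # -> 0
print(sp.simplify(u - ct21 * y2 - y1))  # -> 0
print(sp.simplify(u - ct12 * y1 - y2))  # -> 0
```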

Their stability is ruled by the eigenvalues of the following Jacobian matrix evaluated at the equilibrium point:

$$J = \begin{pmatrix}
F_{1,x_1} & F_{1,x_2} & F_{1,y_1} & F_{1,y_2}\\
F_{2,x_1} & F_{2,x_2} & F_{2,y_1} & F_{2,y_2}\\
0 & 0 & F_{3,y_1} & F_{3,y_2}\\
0 & 0 & F_{4,y_1} & F_{4,y_2}
\end{pmatrix}$$

where:

$$F_1 = \frac{g_1 x_1 \,(y_1 + y_2 - c_{21} x_2 - x_1)}{y_1 + y_2}, \quad F_2 = \frac{g_2 x_2 \,(y_1 + y_2 - c_{12} x_1 - x_2)}{y_1 + y_2}, \quad F_3 = \frac{\tilde g_1 y_1 \,(u - \tilde c_{21} y_2 - y_1)}{u}, \quad F_4 = \frac{\tilde g_2 y_2 \,(u - \tilde c_{12} y_1 - y_2)}{u}$$

and $F_{i,z}$ denotes the partial derivative of $F_i$ with respect to $z$. Notice that $F_{3,x_1} = F_{3,x_2} = F_{4,x_1} = F_{4,x_2} = 0$.

The study of the stability involves the calculation of the eigenvalues of the matrix. Since it is a block triangular matrix, we can compute the eigenvalues for each diagonal block, which from all points of view behaves as a competing species system. So we have decomposed our system into two separate, but coupled, competing species models. A tedious examination of all possible cases yields the following result:1

• If the point P12 is stable, this implies, by our knowledge of the competing species model, that $\tilde c_{12}\tilde c_{21} < 1$ and $c_{12}c_{21} < 1$. This implies that all the other points P1, . . . , P11 are unstable.
• If the point P12 is unstable, this implies $\tilde c_{12}\tilde c_{21} > 1$ or $c_{12}c_{21} > 1$ and we have to further analyze all the possible cases:
1. If $\tilde c_{12}\tilde c_{21} > 1$ and $c_{12}c_{21} < 1$, then P4 and P8 are stable, while all the other points are unstable.
2. If $\tilde c_{12}\tilde c_{21} > 1$ and $c_{12}c_{21} > 1$, then P4 and P8 are unstable, while all the other points are stable.
3. If $\tilde c_{12}\tilde c_{21} < 1$ and $c_{12}c_{21} > 1$, then the points P10 and P11 are stable, while the remaining points are unstable.

We now want to discuss the concrete implications of our stability analysis for our understanding of competitive dynamics and population co-evolution in the two described clusters. This discussion is useful to test the logical implications of our modeling and to assess the extent to which these implications match with, and contribute to generating further insights on, the empirical realm under scrutiny.

The points P1, P5, P6, P9 are always unstable. These points have the coordinates x1 and x2 equal to zero and this condition creates the instability of the system. This implies that an equilibrium in which populations of suppliers do not exist cannot be maintained. This is because if a final market exists, a population of producers of finished goods will emerge and this latter will nurture a population of suppliers as well.

The points P2, P3 and P7 are characterized by the presence of just one population of suppliers and one population of producers of finished goods. We can have situations in which all of these points are stable. This suggests that the described competitive system can reach at least three different situations of equilibrium in which two populations survive and the other two are driven out from the environment. More specifically, the three equilibrium points represent three different situations. Equilibrium point P2 represents a situation in which the new populations that invade the system take the lead and expel the previously incumbent populations. This is, for example, the case reported by Mollona and Presutti (2006), where populations of Chinese firms implanted their operations in the textile geographical cluster of Val Vibrata, pushing out from the environment a large number of suppliers and, in time, eroding the competitive position of the population of producers of finished goods as well. On the other hand, equilibrium point P7 represents a situation in which the incumbent populations resist the attempt of the new populations to establish their operations in the environment. Finally, a possible equilibrium scenario is described by equilibrium point P3, in which the incumbent population of suppliers survives along with the new population of producers of finished goods. On the other hand, the
populations both of new suppliers and of the originally incumbent producers of fi nished goods are forced out from the competitive environment. The points P4 and P8 correspond to a situation in which we have only one population of producers of fi nished goods; hence there is no competition in the market for fi nished goods. This population is dominating the market. Similarly the points P10 and P11 correspond to a situation of a single population of suppliers, which supplies both producers of fi nished goods. Finally, an interesting point is P12 corresponding to the situation in which all four populations coexist in the competitive environment. What decides the stability of this point is the competition rates of the different populations, that is, the parameters c12 , c21, c~12 , c~21. HOW SIMULATION EXPERIMENTS COMPLEMENT FORMAL ANALYSIS In the foregoing, we presented a mathematical analysis of a formal model. The analysis encourages rigorously thinking through a given situation that is described by a formal model. In our example on competing populations, mathematical analysis led to the defi nition of the equilibrium points of the system represented. The analysis of the equilibrium points of our four-population system is germane to the generation of hypotheses concerning the possible states in which the system is likely to be attracted. The study of the stability of the equilibrium points unveils the rich repertoire of possible path of behaviors that our relatively simple model is able to produce. In addition, the investigation of how the calibration of the model’s parameters determines the dynamic properties of the equilibrium points contributes to work out the role that the concepts, which the model’s parameters represent, play in the theoretical framework we are developing. For example, our analysis suggests that the parameters c12 , c21, ~ c12 , ~ c21, which represent the reciprocal competition rates among populations, have a crucial role in molding the unfolding behavior of our competing species model. On the other hand, we learned that parameters g1, g 2 , ~ g 1, ~ g 2 , which represent the growth rate of populations, are not crucial to define the characteristic of the equilibrium points of the model. For example, grounding on this analysis, we can conjecture that in our theoretical framework the speed at which populations of fi rms grow does not convey information on the variety of possible states in which the environment where they compete will fall into. How, then, can computer analysis add further knowledge to such an inquiry? In the following, we develop our discourse along two avenues. First, we highlight the ease with which computer simulation allows manipulation of a formal model in order to produce insightful theoretical experiments. This issue will force us to address the trade-off between the loss of generality

Second, we describe the advantages that computer simulation offers for interpreting empirically observed longitudinal dynamics of behavior.

Trade-off Between Flexibility and Completeness

Although the mathematical analysis of a formal model can support the study of equilibrium points, of their stability and of their basins of attraction, such analysis may often be very laborious. A researcher, moreover, may need to investigate how a variety of different parameter calibrations influences the unfolding behavior of a system without carrying out a mathematical analysis for each single calibration. Of course, we are aware that, in this respect, using computer simulation may look like a shortcut that operates at a lower level of generality. The problem is that each computer simulation produces results that are incomplete, because they depend on a specific calibration of the model's parameters. We would have to run an infinite number of simulations (as infinite are the values that we could use to calibrate the parameters) to obtain the entire repertoire of behaviors that a model could produce. Here researchers face a trade-off between the completeness of their results and the flexibility and agility of the analysis. We argue that there are a number of circumstances in which incompleteness matters less than the advantages that a simulation study brings about. For example, the trade-off speaks in favor of computer simulation in those experiments designed to demonstrate either that, under plausible circumstances, a phenomenon may occur that has not been empirically observed, or that a phenomenon that has been empirically observed would not have happened had some circumstances been different. In both cases, falsification logic underpins the simulation experiment. We need just one plausible calibration of parameters that leads to the theoretical falsification of a hypothesis. In this case, the incompleteness of the numerical solution of a system is less of a concern. The fact that under a different calibration we would probably have obtained a behavior with different characteristics is not an issue, since we are interested in proving that there is at least one plausible calibration of the model that produces counterintuitive, undesired, unexpected or empirically unobserved behaviors. When falsification logic animates the design of a simulation experiment, the focus on completeness gives way to the flexibility the researcher has in manipulating a model to try out a variety of parameter settings, and the use of computer simulation to explore the behavior of a dynamical system displays its full potential. We use an example to illuminate this point. In the foregoing section, we analyzed the stability of the equilibrium points of the four-population competition model. We noticed that, given specific parameter values, we have a stable equilibrium point P7, representing a situation in which the incumbent populations resist the attempt of new populations to establish their operations in the same environment.


Interestingly, however, our analysis works only "near" the equilibrium points, and this result may not hold if we assign to the populations initial values that are very distant from the equilibrium point (for example, assuming that new populations massively attempt to enter the competitive environment). We might be interested in proving that, under a plausible calibration, the model, when disturbed from equilibrium point P7, rather than going back to that point, reaches another equilibrium point. In this case, the problem is to demonstrate that there is at least one calibration of the initial values of the populations of new entrants that forces the model to reach another equilibrium point. To perform the experiment, we set $\tilde{c}_{12}\tilde{c}_{21} > 1$ and $c_{12}c_{21} > 1$, in order to have the situation in which P7 is a stable equilibrium point. Specifically, we set $\tilde{c}_{12} = \tilde{c}_{21} = c_{12} = c_{21} = 2$ and we assume that the market for finished goods (u) is made of 1,000 buyers; consequently, we have the stable equilibrium point P7 for the following values assigned to the populations: x1 = 1,000, x2 = 0, y1 = 1,000, y2 = 0. We then disturb the model by assigning to the variable x2 the value of 500. That is, we are assuming that a new population of 500 suppliers enters the environment and starts competing with the incumbent suppliers.
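To make the exercise concrete, the experiment can be reproduced with a few lines of code. The following is a minimal sketch of ours in Python, assuming the four-population formulation discussed in this chapter (the supplier equations use the producers' total y1 + y2 as carrying capacity, the producer equations use the market size u); the growth rates are an illustrative assumption, since the text does not fix them for this experiment.

# Minimal sketch of the four-population model, integrated with the
# Euler method, one step per quarter. Growth rates of 0.1 per quarter
# are illustrative assumptions.
def rhs(x1, x2, y1, y2, u, g1=0.1, g2=0.1, gt1=0.1, gt2=0.1,
        c12=2.0, c21=2.0, ct12=2.0, ct21=2.0):
    ytot = y1 + y2
    dx1 = g1 * x1 * (ytot - x1 - c21 * x2) / ytot    # incumbent suppliers
    dx2 = g2 * x2 * (ytot - c12 * x1 - x2) / ytot    # new entrant suppliers
    dy1 = gt1 * y1 * (u - y1 - ct12 * y2) / u        # incumbent producers
    dy2 = gt2 * y2 * (u - ct21 * y1 - y2) / u        # new entrant producers
    return dx1, dx2, dy1, dy2

def simulate(state, u=1000.0, quarters=120, dt=1.0, **params):
    path = [state]
    for _ in range(quarters):
        deriv = rhs(*path[-1], u, **params)
        path.append(tuple(max(v + dt * d, 0.0)       # populations cannot go negative
                          for v, d in zip(path[-1], deriv)))
    return path

# Disturb the stable equilibrium P7 with 500 new entrant suppliers:
path = simulate((1000.0, 500.0, 1000.0, 0.0))
print(path[-1])   # x2 is driven out; x1 returns towards 1,000 firms

Rerunning the same sketch with x2 = 1,000 or x2 = 1,100 in the initial state reproduces the transitions towards P8 and P6 discussed next.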

Figure 3.2 Analysis of behavior in the neighborhood of equilibrium P7. [Time plot, 120 quarters; vertical axis: N° of firms; curves: (1) incumbent suppliers, (2) new entrant suppliers, (3) incumbent producers, (4) new entrant producers.]

We simulate the model for 120 quarters, analyzing a process that unfolds over 30 years. The graph in Figure 3.2 shows that P7 is indeed a stable equilibrium, since the population of new entrants is forced out of the environment (curve 2) and the incumbent population of suppliers, after a temporary decline, returns to its original value of 1,000 firms (curve 1). However, if we set x2 = 1,000, that is, if we assume that the new population of suppliers is as large as the incumbent one, then the system does not go back to equilibrium P7; rather, it reaches equilibrium point P8, in which the incumbent and the new entrant suppliers coexist (see the graph in Figure 3.3). Finally, if we set x2 = 1,100, that is, if we assume that the population of new entrant suppliers is larger than the incumbent one, then the system does not go back to equilibrium P7; rather, it reaches equilibrium point P6, in which the incumbent population of suppliers disappears and the new entrants remain along with the incumbent population of finished goods producers (in the graph in Figure 3.4, we extended the simulation over 300 quarters to observe the complete unfolding of the characteristic behavior). In the described analysis, computer simulation provided a tool for performing a quick, yet rigorous, theoretical experiment. Of course, mathematical analysis would have provided us with more general results. More specifically, for each stable equilibrium point we could mathematically analyze the basin of attraction, that is, the region around the point from which every state evolves towards the given equilibrium point. In fact, the information we have about stability is greatly incomplete if we cannot estimate how far we can move the state away from the equilibrium point and still be sure that the system will evolve back to it in due time. This can be done using Lyapunov functions.

Figure 3.3 Transition of the system from equilibrium P7 to equilibrium P8. [Time plot, 120 quarters; vertical axis: N° of firms; curves: (1) incumbent suppliers, (2) new entrant suppliers, (3) incumbent producers, (4) new entrant producers.]

Figure 3.4 Transition of the system from equilibrium P7 to equilibrium P6. [Time plot, 300 quarters; vertical axis: N° of firms; curves: (1) incumbent suppliers, (2) new entrant suppliers, (3) incumbent producers, (4) new entrant producers.]

The so-called Lyapunov second method provides a very powerful technique to estimate the basin of attraction of a stable isolated critical point. Let us take one of our stable isolated equilibrium points and call it $x_i$, assuming for simplicity that it has been translated to the origin. Let $V$ be a positive definite function defined on some domain $D$ containing $x_i$, that is, $V(x) > 0$ for all $x \neq x_i$ in $D$; let us further assume that $V(x_i) = 0$ and that $V(x) < K$ for a fixed positive number $K$ and for all $x$ in $D$. If $dV/dt$ is negative definite in the domain $D$, the critical point is asymptotically stable; in other words, all the trajectories that start from a point in $D$ must approach the point $x_i$. Vice versa, if $dV/dt$ is positive definite in $D$, the point $x_i$ is an unstable critical point, and all the trajectories starting from a point in $D$ will escape from $x_i$. In both cases, the function $V$ is called a Lyapunov function. In the first case, which is usually the most interesting one, $V$ gives us the domain $D$, also called a region of asymptotic stability. Considering, for example, the previously analyzed equilibrium and the new entrant supplier population x2, the analysis through Lyapunov functions would have generated all the values that, once assigned to x2, would have driven the system back to equilibrium point P7.
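A minimal one-dimensional illustration of the method (our own construction, not part of the four-population model) may help fix ideas. Consider the system

$$\frac{dx}{dt} = -x(1 - x^2),$$

which has an isolated critical point at $x = 0$. Take $V(x) = \frac{1}{2}x^2$ on the domain $D = \{|x| < 1\}$: then $V(0) = 0$, $V(x) > 0$ elsewhere on $D$, and

$$\frac{dV}{dt} = x\,\frac{dx}{dt} = -x^2(1 - x^2) < 0 \quad \text{for } 0 < |x| < 1,$$

so $dV/dt$ is negative definite on $D$. Every trajectory starting in $(-1, 1)$ therefore approaches the origin, and $D$ is a region of asymptotic stability for this critical point.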

What, then, is the rationale for dealing with the completeness/flexibility trade-off here? Should we continue into the analysis of Lyapunov functions, which requires a level of mathematical virtuosity not necessarily possessed by the researcher, or should we rather be content with the specific results that we obtained? We have two answers to this question. First, unfortunately, when we find a region of asymptotic stability, this does not imply that the trajectories starting from points outside D will escape from the critical point: they may still approach it, since another Lyapunov function could contain them inside its own domain, possibly different from D. In other words, each Lyapunov function gives us sufficient conditions for stability, but these conditions are not necessary; other points that do not satisfy them may still lie in the basin of attraction of our critical point. Furthermore, there are no general methods to construct Lyapunov functions, though they have been constructed for certain families of differential equations. Second, how to deal with the completeness/flexibility trade-off depends on the purpose of the model. If the model had the purpose of falsifying the hypothesis that, once a population of incumbent producers of finished goods and a population of suppliers reach a stable equilibrium, they cannot be dislodged from such a state, then the simulation experiment would fulfill its aim: we need just one instance in which the contrary happens to accomplish our objective. Moreover, the simulation experiment would even provide a testable hypothesis by suggesting that an incumbent population is fated to disappear were its size smaller than that of a new entrant population. On the other hand, a researcher may be interested in producing a complete set of hypotheses linking the values assigned to x2 to the unfolding dynamics of the system. In this case, computer simulation would yield incomplete results, since the researcher would have to run an infinite number of simulations, one for each possible value assigned to x2, and report, for each simulation, which equilibrium point the system reaches. Along these lines, falsification logic has inspired a fairly large number of studies in sociology, where the researchers' aim was to demonstrate that a phenomenon, typically described with specific features, actually takes place in a different form under given circumstances. A researcher begins by modeling the circumstances that produce the manifestation previously described and then explains how a change in those circumstances produces a different appearance of the phenomenon under study. In this case, the balance between computer simulation and mathematical analysis speaks in favor of the former for two reasons. First, the design of the circumstances that generate the expected phenomenon may be a rich description that includes a complex set of processes and is thus hardly tractable through mathematical analysis.


Second, again, the researcher needs only to produce the change in the phenomenon; he does not need to explore all the possible behaviors that the model is able to produce given the infinite number of calibrations of parameters. Of course, here the researcher needs to demonstrate that the particular calibration, or the formal structure, of the model that generated the changes in the phenomenon under study is not implausible or so specific as to result in an insignificant contribution to extant accumulated knowledge. For example, Noah (1998) used a computer simulation to address mechanisms leading to the emergence of social differentiation. The contribution he produced with his simulation experiments was to prove that individual differences are not necessary to explain the emergence of social differentiation. Noah formalized assumptions about interaction and communication patterns and the social construction of knowledge and, using computer simulation, demonstrated that these assumptions are sufficient to explain how an originally undifferentiated social system can become differentiated over time. Similarly, Centola et al. (2005) studied the circumstances under which highly unpopular norms can emerge. Using an agent-based computer simulation model, they illustrated that, given particular characteristics of the structure of the social network in which decision makers are embedded, unpopular norms can emerge locally and spread in the social system. Probably one of the most consolidated traditions in the social sciences that developed through the use of simulation studies is the thread of inquiry on emerging cooperation. A number of authors have contributed to understanding under what conditions systems of cooperators could emerge in spite of individual attitudes towards egoism and defection. The general idea was set forth by Axelrod (1984), who simulated the interaction of a number of agents playing a repeated prisoner's dilemma. The intent was to let different individual strategies compete to see which of them is able to produce emerging cooperation. Following this logic, circumstances under which systems of cooperators emerge have been investigated by simulating repeated, multi-person prisoner's dilemma games (Nowak et al. 1994; Lomborg 1996). In general, in this thread of studies, a key issue is the understanding of the mechanisms that facilitate the emergence of systems of cooperators without formal or informal social controls. Macy (1991), for example, concentrated on the role of the learning that is generated by the cues and sanctions resulting from repeated interaction. Another angle to explain cooperation through computer simulation appeals to cultural elements. Boyd et al. (2005), for example, highlighted the role of the attitude to cooperate, and Nettle and Dunbar (1997) dealt with the idea that social markers, such as languages, facilitate cooperation by making it easier to detect free-riders, which typically move to different neighborhoods to exploit cooperators and escape retaliation.
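To give a flavor of the kind of tournament Axelrod describes, the sketch below pits three classic strategies against one another in a round-robin repeated prisoner's dilemma. The strategy set and the payoff values (temptation 5, reward 3, punishment 1, sucker's payoff 0) are standard illustrative choices of ours, not a reconstruction of Axelrod's original tournament.

# Round-robin iterated prisoner's dilemma (illustrative sketch).
# PAYOFF maps a pair of moves to (first player's payoff, second player's payoff).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_defect(own, other):
    return 'D'

def always_cooperate(own, other):
    return 'C'

def tit_for_tat(own, other):
    return other[-1] if other else 'C'    # cooperate first, then mirror

def play(s1, s2, rounds=200):
    h1, h2, tot1, tot2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        tot1 += p1; tot2 += p2
    return tot1, tot2

strategies = {'always_defect': always_defect,
              'always_cooperate': always_cooperate,
              'tit_for_tat': tit_for_tat}
scores = {name: 0 for name in strategies}
for n1 in strategies:
    for n2 in strategies:
        if n1 < n2:                       # each distinct pair plays once
            a, b = play(strategies[n1], strategies[n2])
            scores[n1] += a; scores[n2] += b
print(scores)                             # tournament totals per strategy

In such a small field, with a single exploitable cooperator, defection still scores highest; Axelrod's result, that reciprocating strategies such as tit-for-tat prevail, emerges when the tournament is populated by a richer mix of strategies and, in the evolutionary version, when scores determine reproduction.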

In the same area of study, to address the evolution of trust and cooperation among strangers, Macy and Skvoretz (1998) abandoned the framework of the prisoner's dilemma to build a very rich picture that connects individual decision making and the structure in which social interaction is embedded. Their simulation experiments suggest that for a population of cooperators to emerge, two types of exchange are necessary. The first type of exchange is embedded in social ties and facilitates the creation of effective trust-based conventions. The second type is represented by non-embedded encounters between random strangers; these exchanges contribute to diffusing the conventions across a population. Similarly, in Hanaki et al. (2007), unilateral tie severance and consensual tie creation foster local reinforcement of cooperation, whereas triadic closure, that is, the tendency of an individual to connect to a friend of a friend, hinders the global expansion of cooperation. Thus, the analysis of the features of the social networks in which players are embedded becomes a crucial aspect in explaining whether and how cooperation emerges. Along similar lines, Eguiluz et al. (2005) explicitly modeled social plasticity, that is, the ability of an individual agent to select partners, thereby modifying its neighborhood as time goes by. In their model, individuals and network co-evolve, producing emerging role differentiation and a social structure that, in turn, sustain cooperation. In all the aforementioned studies, just a small sample of the large body of work that has employed computer simulation in sociology, the completeness of the results obtained with simulation is not a concern, since the contribution lies in the analysis of the structural conditions, the causal mechanisms and the processes that are able to explain the phenomenon of emerging cooperation. As long as the portrayal of the individual decision-making routines, of the context in which agents behave and of the mechanisms through which agents interact is plausible, and not so specific as to delineate irrelevant contexts, simulation contributes to the building of complex but rigorous theoretical hypotheses. To bring our discourse to a higher level of generality, a research activity in which the concern for completeness of solution gives way to the flexibility of simulation is one in which a researcher wants to improve her understanding of the non-obvious implications of a theoretical model. A researcher could be interested, for example, in exploring the consequences of differently initializing the variables or of assigning particular values to a model's parameters. In this case, we are not interested in completeness of results; rather, we focus on the analysis of specific situations, and simulation models are employed as virtual laboratories to clarify the behavioral consequences of structural assumptions.

This is a form of computational thought experiment in which we ask what-if questions in an artificial world. By running a simulation under various circumstances, computer models become learning laboratories where it is possible to propitiate the emergence of counterintuitive, apparently paradoxical behaviors. For example, by simulating the model under extreme conditions, it is possible to perform boundary experiments (Kaplan 1964) to establish the robustness of a theory. Extreme conditions might include the assumption of unusual initial values for some variables in the model, or exogenous perturbations of the model mimicking apparently bizarre or extraordinary scenarios. In an insightful piece of work, Mass (1981/1991) illustrated how the explanation of surprising behaviors and the rationalization of cognitive dissonance favor further understanding of a model, thereby sharpening the underpinning theory. Davis (1971) suggests that a hypothesis is interesting if it induces the revision of an established characterization of a single phenomenon or of a relation among phenomena. We believe that the clarification of unexpected behaviors, using simulation, often gives birth to such interesting hypotheses. An example may help to illustrate the use of a computer model as a learning laboratory. From the mathematical analysis of our model on competing populations, we deduced that the parameters $g_1$, $g_2$, $\tilde{g}_1$, $\tilde{g}_2$, which represent the growth rates of the populations, are not crucial to defining the characteristics of the equilibrium points of the model. Intuitively, however, we are prone to suspect that the speed at which a population of firms grows does have important consequences for the destiny of the population. This is a situation in which computer simulation can be very useful, since it offers a laboratory where experiments can easily be conducted to cast some light on the issue. To explore the role played by the parameters $g_1$, $g_2$, $\tilde{g}_1$ and $\tilde{g}_2$, we observe again the behavior of the model around the equilibrium point P7. In this equilibrium, we know that only populations x1 and y1 are in the environment; these populations are linked by commercial relationships. We are also aware that for $\tilde{c}_{12}\tilde{c}_{21} > 1$ and $c_{12}c_{21} > 1$ this equilibrium is stable. In previous experiments we disturbed this equilibrium and showed that for any values assigned to the four populations such that x1 > x2 and y1 > y2, the system goes back to the point P7. That is, new entrants are defeated and the incumbent populations dominate the environment. We now disturb the equilibrium again with two modifications: we assign to the new entrant populations values very close to those of the incumbents. Thus, since x1 = y1 = 1,000, we set x2 = y2 = 950 and u = 1,000. In addition, we assume that $g_2 = 5g_1$ and $\tilde{g}_2 = 5\tilde{g}_1$. The purpose of the experiment is to understand whether assuming that new entrant populations grow faster gives the latter a survival advantage.
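Reusing the rhs/simulate sketch introduced earlier (and, again, an illustrative base growth rate of 0.1 per quarter), this experiment and the follow-up recalibration discussed below can be set up in a few lines:

# Growth-rate experiments around P7, reusing rhs/simulate from the
# earlier sketch; the 0.1 base growth rate remains our assumption.
base = dict(g1=0.1, gt1=0.1, c12=2.0, c21=2.0, ct12=2.0, ct21=2.0)

# Both new entrant populations grow five times faster than the incumbents:
fast = simulate((1000.0, 950.0, 1000.0, 950.0), quarters=300,
                g2=0.5, gt2=0.5, **base)

# Follow-up recalibration discussed below: fast new suppliers (g2 = 5*g1)
# but slow new producers (gt2 = 0.5*gt1):
mixed = simulate((1000.0, 950.0, 1000.0, 950.0), quarters=300,
                 g2=0.5, gt2=0.05, **base)
print(fast[-1])    # both new entrant populations are driven out (Figure 3.5)
print(mixed[-1])   # x1 and y2 survive: the system selects P3 (Figure 3.6)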

We simulate the model for 300 quarters in order to capture the unfolding behavior of the system towards equilibrium. As reported in Figure 3.5, the rates of growth do not change the destiny of the new entrants, which are forced out of the environment. Intrigued by this result, we performed another experiment. We recalibrated the populations' rates of growth so that $g_2 = 5g_1$ and $\tilde{g}_2 = 0.5\tilde{g}_1$; that is, the new entrant population of finished goods producers grows more slowly than the incumbent population of finished goods producers, while the population of incumbent suppliers grows more slowly than the new entrant suppliers. The graph in Figure 3.6 conveys a counterintuitive and fairly surprising message. From equilibrium point P7 the model is pushed towards P3, where the surviving populations are x1 and y2. Why is it that the new entrant population that grows more slowly than the incumbent population is able to overwhelm competition, whereas the fast-growing population of new entrant suppliers is severely defeated? The question requires proceeding with the inquiry in two directions. First, there is a crude explanation that deals with the formulations we adopted in the formal model; second, there is an explanation that refers to the logic underpinning those formulations. That is, either a formulation may contain an inaccuracy or a mistake, which produces an unintended behavioral effect, or it may incorporate a flawed reproduction of the modeled empirical realm. In this case, a counterintuitive behavior propitiates amendments to the formulation under scrutiny. Frequently, however, unexpected emerging behaviors are correct consequences of formulations that appropriately capture distinctive traits of the modeled empirical objects. In this case, the explanation of the discrepancies between simulated and expected behaviors enriches our understanding of a given problem. We start by looking at the formulation we employ:

$$\frac{dx_2}{dt} = \frac{g_2 x_2 \left( y_1 + y_2 - c_{12} x_1 - x_2 \right)}{y_1 + y_2}$$

As the equation shows, when the ratio

$$\frac{y_1 + y_2 - c_{12} x_1 - x_2}{y_1 + y_2} > 0,$$

the higher the rate $g_2$, the stronger the growth of population x2 will be. On the contrary, when

$$\frac{y_1 + y_2 - c_{12} x_1 - x_2}{y_1 + y_2} < 0,$$

a larger rate $g_2$ leads to a faster erosion of the population. Since in the simulated exercise $c_{12} = 2$, we obtain

$$\frac{y_1 + y_2 - c_{12} x_1 - x_2}{y_1 + y_2} < 0,$$

so the population of new entrant suppliers x2 erodes faster than the population of incumbent suppliers x1, while the population of new entrant producers y2, which grows more slowly than the incumbent population of producers y1, outlives the latter. The experiment suggests that while the rates of growth of the populations do not influence the characteristics of the equilibrium points of the system, they do contribute to defining which equilibrium point the system will select. Once we have clarified how the formulation produces the unexpected behavior, we turn to the question of whether the causal mechanism crystallized in the formulation is plausible and allows us to extend our understanding of the phenomenon we are studying. Actually, the behavior generated by our formal model brings about a conjecture that we may consider an empirically testable candidate hypothesis: in an environment where resources are scarce with respect to the needs of the competing populations, populations that grow faster are penalized because they need more resources to survive.

Figure 3.5 Analysis of behavior in the neighborhood of equilibrium P7. [Time plot, 300 quarters; vertical axis: N° of firms; curves: (1) incumbent suppliers, (2) new entrant suppliers, (3) incumbent producers, (4) new entrant producers.]

Figure 3.6 Transition of the system from equilibrium P7 to equilibrium P3. [Time plot, 300 quarters; vertical axis: N° of firms; curves: (1) incumbent suppliers, (2) new entrant suppliers, (3) incumbent producers, (4) new entrant producers.]

Thus, given equal and high reciprocal competition rates among populations, the impact of the rate of growth on a population's chances of survival depends on the size of the market. Alternatively, we might consider this result a flaw in the model and reflect on how the equation that captures population dynamics ought to be amended.

History-friendly Modeling

In this section, we turn our attention to a distinctive feature of computer simulation that may prove very useful for researchers engaged in the inquiry of dynamic phenomena: the delivery of its output in the form of longitudinal patterns of behavior that unfold over time. Before addressing the usefulness of computer simulation from this point of view, we first detail once again what a mathematical analysis can achieve. For a relatively simple dynamical system described by a system of differential equations, like the one we studied previously, we can proceed as follows:

• We can determine the equilibrium points of the system
$$\frac{dx_1}{dt} = f_1(x_1, \ldots, x_n), \quad \ldots, \quad \frac{dx_n}{dt} = f_n(x_1, \ldots, x_n)$$
by solving the system $f_1(x_1, \ldots, x_n) = 0, \ldots, f_n(x_1, \ldots, x_n) = 0$.
• For each equilibrium point we can establish whether it is a stable or an unstable point by studying a linearization of the system near the equilibrium point. This procedure entails computing the eigenvalues of the Jacobian matrix $\left( \frac{\partial f_i}{\partial x_j} \right)$: if one or more eigenvalues are positive, the point is unstable; if the eigenvalues are all negative, or if they all have negative real parts, then the point is stable. In other words, we can determine the behavior of the system for small perturbations from the equilibrium state: if the point is stable, the system will return to the given equilibrium state; if the point is unstable, it will not. (Both steps are illustrated numerically in the sketch after this list.)
• Using Lyapunov functions, we can determine for each stable equilibrium point the basin of attraction, that is, the region around the point in which each state evolves towards the given equilibrium point.
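The following minimal sketch of ours illustrates the first two steps numerically on the four-population model, using SciPy's standard root finder and a finite-difference Jacobian (the single growth rate g and the starting guess are illustrative assumptions; different guesses may converge to different equilibria).

# Find an equilibrium of the four-population model and check its stability.
import numpy as np
from scipy.optimize import fsolve

def f(z, u=1000.0, g=0.1, c12=2.0, c21=2.0, ct12=2.0, ct21=2.0):
    x1, x2, y1, y2 = z
    ytot = y1 + y2
    return np.array([
        g * x1 * (ytot - x1 - c21 * x2) / ytot,
        g * x2 * (ytot - c12 * x1 - x2) / ytot,
        g * y1 * (u - y1 - ct12 * y2) / u,
        g * y2 * (u - ct21 * y1 - y2) / u,
    ])

def jacobian(fun, z, eps=1e-6):
    J = np.zeros((len(z), len(z)))
    for j in range(len(z)):
        dz = np.zeros(len(z))
        dz[j] = eps
        J[:, j] = (fun(z + dz) - fun(z - dz)) / (2 * eps)   # central differences
    return J

z_eq = fsolve(f, np.array([990.0, 1.0, 990.0, 1.0]))   # guess near P7
eigs = np.linalg.eigvals(jacobian(f, z_eq))
print(z_eq)
print(eigs)   # all eigenvalues with negative real part => stable point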

On the other hand, mathematical analysis cannot in general give complete information concerning the trajectories of the system and its overall behavior over time. For example, mathematical analysis can provide only limited insight regarding:

• how long the system takes to reach a stable equilibrium point once it is on a trajectory towards it;
• how fast the system evolves away from an unstable point;
• which trajectories lead to one equilibrium point or another.

Inquiry in social science, however, often focuses on disequilibrium dynamics and on the historical trajectories of social systems rather than on their properties in equilibrium, such as maxima and minima. In this light, social scientists have often adopted computer simulation to develop theory concerning the modes of historical change of a social system. Hanneman et al. (1995), for example, built a computer simulation model to articulate and integrate theorizing on state legitimacy and imperialist capitalism. Using a computer model, they investigated the path of behavior of a state's legitimization as a consequence of the state's attitude towards initiating conflicts with other states. The model was gradually enlarged to include the modeling of the consequences of policy on the dynamics of development of an imperialist capitalism and a state-dependent economy. A number of simulation experiments elicited a complex relationship between the initial level of a state's legitimacy and the unfolding consequences of the policies adopted by the state. Computer simulation made it possible to easily perform a number of experiments, each producing a complete longitudinal description of the unfolding behavioral consequences of the modeled assumptions. The experiments revealed that, given the sensitivity of the model's behavior to its initial calibration, similar policies lead to very different dynamic consequences.

In these experiments, the relevant theoretical insight is the proposition that the initial level of legitimacy is a key variable in designing a state's policies. The generation of this theoretical statement does not need a complete mathematical solution of the formal model. A limited number of selected simulations are sufficient to illustrate the role that the initial calibration of legitimacy plays in molding the consequences of policies. Here, the researcher is not interested in exploring the complete set of political consequences that follow from the infinite possible calibrations of the initial value of a state's legitimacy. Rather, it is sufficient to demonstrate that different plausible calibrations entail diverse political implications. The value added by computer simulation is the generation of a theoretical hypothesis. In addition, in the mentioned study, the simulation approach, by producing complete descriptions of the trajectory of the modeled system in its transient state, provided insight that mathematical analysis could hardly convey. The observation of the trajectories showed that the initial calibration of state legitimacy not only leads to different end states but deeply influences the pattern of behavior in the transient state, possibly producing oscillations. In the same vein, Powers and Hanneman (1983) translated Pareto's theory of social and economic cycles into a formal model to be simulated. To test the internal logic of the theory, they simulated the formal model to obtain a longitudinal behavior to be compared with Pareto's predictions. A similar approach was adopted by Sastry in her study of Tushman and Romanelli's model of punctuated organizational change (1997). She elucidates the advantages that computer simulation brings in terms of flexibility in the analysis of theories of behavior. The theory of punctuated organizational change is a typical theory of behavior that focuses on the pattern of change of organizations: it postulates that organizations undertake occasional dramatic revolutions followed by periods of relative stability. Sastry formalized the theory of Tushman and Romanelli, which was originally stated verbally, in order to analyze its completeness and consistency. She used computer simulation and numerical solution to explore the implications of the formal modeling. First, to retain the richness of the verbal theory, Sastry's model is fairly complex; for example, it includes a number of nonlinear relationships among variables. Thus, it would not have been easily tractable through mathematical analysis. Second, and more importantly, Sastry's aim was the analysis of the longitudinal change behavior of organizations as postulated by the theory. Thus, the structure of her inquiry included a phase in which the verbal model was formalized and a subsequent phase in which the formal model was simulated to test the implications postulated by the verbal model.


The detection of gaps between postulated and simulated behaviors triggered further development of the original theory. However, what is of interest to us is that Sastry's analysis focuses not on the end state that organizations reach but on the features of the rate of change and the trajectory of change behavior. For example, Sastry not only studied under what circumstances an organization responds to environmental transformations with organizational change; she also investigated the length of change periods, the pattern that characterizes the rate of change and the impact of the latter on the trajectories of other variables, such as organizational competence and performance. Another area of research where the computer simulation approach can be extremely useful includes studies that aim to capture the dynamics underpinning empirically observed time series. In this case, the ability of computer simulation to generate output in the form of a pattern of behavior that unfolds over time facilitates the dialogue between the structure of causal relationships underpinning the empirical setting under study and its formal representation. In this respect, Hanneman et al. suggest that '[c]omputer simulation methods help to bridge the gap between theory and history' (1995: 4). An example of this approach is offered by Malerba et al. (1999), who propose a class of computer models that they define as history friendly because of their adherence to the empirical realm that is the object of exploration. They developed a formal representation of an appreciative theory that describes the pattern of evolution of the computer industry. Through simulation, they checked the consistency of the appreciative theory by examining whether the formal version is able to reproduce the same stylized facts as described in the appreciative theory. Underpinning the approach is the idea that computer simulation can be used to corroborate and to explore the logic that informs qualitative explanations of empirically observed dynamic economic phenomena. In this respect, computer simulation scores three goals. First, freed from the impinging constraint of analytical solution, formal modeling can retain the richness of the verbal explanations of qualitative theorizing. Second, the fact that the output of a simulation is presented in the form of time series that unfold over time allows easy comparison with observed historical patterns of behavior. Last, the possibility of easily observing how changes in parameters produce changes in simulated longitudinal patterns of behavior facilitates the friendly dialogue between the formal representation of a phenomenon and its historical behavior. More specifically, in the study of Malerba et al., the idea is to find a plausible calibration of the model's parameters that yields a history-replicating run. The exercise demonstrates that a stylized formal model incorporating a theory, or an explanation, is able to generate the observed historical pattern. The analysis entails as well changing the values of the parameters that activate those causal mechanisms that in the theory are crucial determinants of observed behaviors, in order to obtain history-divergent simulation runs.

A fundamental principle of the described procedure is to compare model-generated and observed behaviors. The latter, however, are available to the researchers as time series characterized by specific dynamic properties; consequently, to carry out the comparison appropriately, the output of the formal model ought to be expressed in the form of a behavior that unfolds over time. To obtain this kind of output, computer simulation is necessary. In particular, the observed behavioral patterns to be replicated are, in this study, the emergence in the computer industry of a dominant firm 'relatively early in the history of the industry' (Malerba et al. 1999: 9), the entrance of new firms in the industry that bring along new technologies and open new market segments, and the reaction of the old leaders to the attempt to enter the new market segment. Let's assume that the industrial system under scrutiny is initially in a state in which a number of identical firms coexist and that, after a while, the industry evolves to stabilize in a state in which only a dominant firm exists. Mathematical analysis can provide support to understand whether the formal model is able to replicate the mentioned behaviors by, for example, finding that the system has an unstable equilibrium point where firms are identical and a stable equilibrium point that attracts the system towards an end state in which a dominant leader exists. It would be more difficult for mathematical analysis, for example, to define how rapidly the system abandons the initial state and through which trajectory the system reaches its end state. Furthermore, questions that can hardly be answered through mathematical analysis concern, for example, how early a leading firm emerges, what pattern of behavior characterizes the features of the industry (e.g., its level of concentration) during the transient state that the system goes through before reaching its final state, and how the performances of the firms that are co-evolving in the industry unfold over time. To capture these patterns, computer simulation is able to produce an artificial history to be compared with the observed history, in order to conduct an analysis of behavior more closely related to the empirical phenomenon under study. Indeed, socioeconomic phenomena are hardly ever observed in equilibrium. Researchers who observe dynamic phenomena, and typically consider time series, usually deal with data collected longitudinally. These time series are to be considered snapshots that capture portions of a much longer history that is flowing. Thus, the snapshots often crystallize an out-of-equilibrium behavior that is a segment of a history that began somewhere in the past. The ability of computer simulation to deliver output in terms of dynamic histories helps the researcher recognize these snapshots as instances of a class of behaviors with specific dynamic properties. By means of computer simulation, a researcher can generate the repertoire of alternative histories that a formal model can produce and treat them as classes of behaviors with homogeneous dynamic properties.

Figure 3.7 Data collected in the Val Vibrata Industrial District. [Time plot, 1961–2001; vertical axis: N° of firms; series: Italian suppliers, Chinese suppliers, Italian producers, Chinese producers.]

The dialogue between computer-generated and real behaviors helps to explore the conditions under which a specific empirical phenomenon is the outcome of the causal explanation contained in the simulated formal model. In generating dynamic theories of behavior, this exercise supports the discernment between groundless assertions and assertions that are true only within certain boundaries and given specific assumptions. In this light, we consider again the Mollona and Presutti study of the textile geographical cluster of Val Vibrata (2006). Figure 3.7 reports four collected time series capturing the evolution, between 1961 and 2001, of the populations of Italian and Chinese suppliers and producers in the Val Vibrata geographical cluster. In 1961, the populations of Chinese firms were very small in comparison to the two populations of Italian firms. Notwithstanding the differences in size, the graph shows that the population of Chinese suppliers has almost reached the size of the population of Italian suppliers. After having worked with our formal model, can we recognize this behavior? Is this behavior in some respect similar to one of those produced by our formal model? Eventually, how does the comparison support our speculation on the phenomenon studied? To start with, the collected time series suggest that, at the beginning of the time span over which the phenomenon unfolds, Chinese producers and suppliers were very few.

We thus perform a first simulation experiment to see whether, once calibrated with the values empirically collected in 1961, the formal model is able to generate a behavior that shares any characteristics with the real time series. We assign to the four populations and to the final market the values that they had in 1961 and we run the model for 500 time steps, each representing one quarter. Thus, we are simulating a time period that is much longer than the actual time span observed, in order to play out the entire behavior of the model until it eventually settles in an equilibrium point. The results are reported in the graph in Figure 3.8. The incumbent Italian populations grow to saturate the entire final market, which represents a sort of carrying capacity of the industry, while the new population of Chinese suppliers surges but does not take off and in the long term is defeated and driven out of the market. On the contrary, we observe in Figure 3.7 that the real history is quite a different one. The new population of Chinese firms grows and challenges the population of Italian incumbent suppliers. This dynamic is very clear among the suppliers, where Italian firms are decreasing dramatically, and much weaker among the producers of finished goods, where Chinese firms are growing imperceptibly and Italian firms are losing small portions of the market. One conclusion is that our formal model does not capture the deep causal structure that underpins the observed behavior. Another is that the model, at least partially, captures the underlying causal engine and is suggesting to the researcher that the observed behaviors manifest themselves only if specific conditions occur.
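Even a crude numerical measure of fit can discipline this dialogue between model and history. The sketch below is our own illustration, and the observed values in it are hypothetical placeholders, not the Val Vibrata data: it samples a quarterly simulated series at census intervals and computes a root-mean-square error against an observed series.

# Rough history-friendly check: sample a quarterly simulated series at
# ten-year intervals and compare it with an observed census series.
import numpy as np

# Hypothetical observed census values (1961, 1971, ..., 2001); placeholders only.
observed = np.array([520.0, 470.0, 390.0, 260.0, 150.0])

# A quarterly simulated series for the same population, e.g. one column of
# the path produced by the simulate() sketch shown earlier (stand-in here).
trajectory = np.linspace(520.0, 140.0, 161)
sampled = trajectory[::40][:5]            # one point every 40 quarters = 10 years
rmse = np.sqrt(np.mean((sampled - observed) ** 2))
print(rmse)   # smaller values indicate a more history-friendly run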

Figure 3.8 Simulated demographic dynamics in Val Vibrata Industrial District. [Time plot, 500 quarters; vertical axis: N° of firms; curves: (1) incumbent suppliers, (2) new entrant suppliers, (3) incumbent producers, (4) new entrant producers.]


Consequently, we try to make good use of such indications and begin searching for those conditions that are likely to produce the observed behavior. The behavior reported in Figure 3.8 suggests that the Chinese populations are not strong enough to emerge, and the system is attracted towards the equilibrium point P7, where only the two incumbent populations survive. If our model is correct, the mismatch between observed and simulated behaviors has to be connected to a problem in the calibration of the simulation model. In calibrating the model, we probably overlooked some key empirical information that explains the strength demonstrated by Chinese firms. A possible explanation is that the populations of Chinese firms have stronger rates of growth. We could then implement a number of simulation experiments, gradually increasing the rates of growth of Chinese firms. However, the mathematical analysis that we conducted before suggests that a very important determinant of the type of behavior produced by our formal model is the value assigned to the parameters $c_{12}$, $c_{21}$, $\tilde{c}_{12}$, $\tilde{c}_{21}$, which represent the reciprocal competition rates among populations, that is, the impact that a population has on the survival of the competing population. Directed by these considerations, we may want to go back to our research field and collect information that could be useful for calibrating those parameters. We then learn that an intermediate good sold by a Chinese supplier costs on average a third of the price of the product sold by Italian suppliers; in addition, Italian intermediate products enjoy very little protection from brand recognition or product differentiation. As for the finished goods producers, the situation is slightly different: the price difference is similar, but the Italian finished good is more recognizable and thus maintains a competitive advantage over the Chinese product. On the basis of this information, we recalibrated our simulation model, amending the values of the parameters $c_{12}$, $c_{21}$, $\tilde{c}_{12}$, $\tilde{c}_{21}$. More specifically, we set $c_{12} = 1$, $c_{21} = 3$, $\tilde{c}_{12} = 1$ and $\tilde{c}_{21} = 2$. Thus, we assumed that Chinese suppliers are three times more competitive than Italian suppliers, and that Chinese producers of finished goods are twice as competitive as their Italian competitors. In the case of finished goods producers, we balanced the price disadvantage of Italian producers against the brand recognition of Italian finished goods. We simulated the model again and obtained the results reported in the graph in Figure 3.9. Here the results are much more similar to those reported in the graph in Figure 3.7. The simulation suggests that the observed empirical behavior is a section of a class of behaviors characterized by their being attracted to the equilibrium point P6, in which the population of incumbent producers of finished goods survives and shifts its procurement from the incumbent population of suppliers to the population of Chinese suppliers, which offers cheaper supplies. The incumbent population of Italian suppliers is forced out of the market, and the new population of Chinese producers is not able to take off.
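In code, the recalibrated experiment amounts to rerunning the earlier simulation sketch with the new competition rates; the initial 1961 populations and the market size u below are illustrative placeholders, since the chapter calibrates them to the collected values.

# Empirically grounded recalibration of the competition rates, reusing the
# simulate sketch from earlier; initial populations and u are placeholders.
calibrated = simulate((500.0, 5.0, 400.0, 5.0), u=1000.0, quarters=500,
                      c12=1.0, c21=3.0, ct12=1.0, ct21=2.0)
print(calibrated[-1])   # Italian suppliers and Chinese producers die out;
                        # Italian producers and Chinese suppliers survive (Figure 3.9)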

Figure 3.9 Simulated demographic dynamics in Val Vibrata Industrial District with empirically grounded calibration of the model. [Time plot, 500 quarters; vertical axis: N° of firms; curves: (1) incumbent suppliers, (2) new entrant suppliers, (3) incumbent producers, (4) new entrant producers.]

In Figure 3.10, we repeat the experiment, stopping the simulation after 160 quarters, which corresponds to the 40 years spanned by the phenomenon empirically observed and reported in Figure 3.7. In this graph, the similarity between the simulated and the empirically observed behaviors suggests that the causal mechanisms described in our formal model may give us some hints for articulating theoretical hypotheses to explain the observed phenomena.

Figure 3.10 Simulated demographic dynamics in Val Vibrata Industrial District with empirically grounded calibration of the model and appropriate simulation length. [Time plot, 160 quarters; vertical axis: N° of firms; curves: (1) incumbent suppliers, (2) new entrant suppliers, (3) incumbent producers, (4) new entrant producers.]

CONCLUSION

The objective of our chapter was to suggest that computer simulation and mathematical analysis, rather than defining two territories separated by ideological fences, ought to be considered complementary approaches to theory development in the social sciences. Mathematical analysis provides a useful method to rigorously deduce conjectures from modeled assumptions; however, the approach loses its power early, as the complexity of the modeled object starts to increase. Computer simulation, in turn, delivers conjectures of limited generality but, if appropriately managed, represents a unique tool for developing theory. In general, computer simulation facilitates theoretical experiments by providing a virtual laboratory where theoretical statements can receive rigorous treatment and manipulation. This is particularly useful for non-mathematical social scientists, who may lack mathematical virtuosity. As Cohen suggests,

People need not be powerful mathematicians in order to build and run computer models. It requires a much more extensive knowledge of mathematics to obtain an analytical solution to a complex mathematical model than it does to formulate the model. (Cohen 1960a: 356–7)

This should not, however, suggest that mathematically inclined researchers do not benefit from computer simulation. On the contrary, they probably benefit the most, since they can couple the two approaches. In this light, we suggest two circumstances in which computer simulation adds value to mathematical analysis: first, when researchers are not willing to renounce a certain level of richness and complexity in their modeling, and second, when researchers work on theories of behavior, or process theorizing, and of particular interest are the characteristics of the behavior of a dynamic system in its transient state. How long the system takes to reach a stable equilibrium point once it is on the trajectory towards it, how fast the system evolves away from an unstable point and which trajectories lead to one equilibrium point or another are questions that computer simulation can deal with agilely. For example, Nelson et al. (1976) used computer simulation to work out their evolutionary theorizing on technical change. Discontented with the orthodox neoclassical approach to the issue, centered as it was on the analysis of equilibrium, they adopted a computer simulation approach to study mechanisms, forces and pressures that operate in disequilibrium.

They used computer simulation because the picture they portrayed in their formal model is complex and lacks an 'adequate mathematical analysis of its dynamic behavior' (Nelson et al. 1976: 93). Indeed, a previous paper included a simpler formalization and was analyzed mathematically (Nelson and Winter 1975). Interestingly, Nelson et al. proved that, under plausible circumstances, the process defined by the model converges towards a conventional competitive equilibrium state. This illustrates how computer simulation may be useful to prove that the same end state can be reached through different transient paths, when the path is the very focus of the analysis. Finally, computer simulation is particularly useful when theorizing is empirically grounded in longitudinal phenomena and the researcher needs to nurture a dialogue (Malerba et al. 1999) with empirically collected descriptions of behaviors. This capacity of computer simulation to support theorizing on the longitudinal evolution of specific phenomena was noticed by those economists who adopted this approach to research early on. For example, Cohen (1960a) simulated the time paths of the endogenous variables of a model that was aimed at reproducing the dynamics of the shoe, leather and hide industries. The simulated behaviors were then compared with the empirically observed time series of the same variables between 1930 and 1940. As Cohen and Cyert explained in a later writing (1961: 125),

While these comparisons do not result in complete agreement between the hypothetical and the actual time series for the endogenous variables, they do indicate that the models may incorporate some of the mechanisms which in fact determined behavior in the shoe, leather, and hide industries.

The same authors together built a model to simulate the behavior of two duopolists' profits and market shares. Data generated by the model were compared with the corresponding actual data collected for the American Can Company and the Continental Can Company over 45 periods. This comparison demonstrated that 'the model as a whole has some reasonable empirical basis' and that the model was capable of 'satisfactorily approximating the observed data' (Cohen and Cyert 1961: 125). Finally, Orcutt, in his study of demographic trends in the U.S. household sector, noted that computer simulation made possible the 'comparison of generated results with observed time series and cross sectional data and thus permitted testing of a sort that would not otherwise have been possible' (Orcutt 1960: 905). Empirically observed longitudinal behaviors are often segments of longer dynamic patterns with specific characteristics. In addition, empirically collected behavior is biased by non-systematic and stochastic disturbances. Researchers willing to capture, in its key stylized traits, the deep causal structure that generated the observed history need to understand how modifications of theoretical assumptions, crystallized into a formal model, lead to modifications of the phenomenon under study.


In this respect, computer simulation provides a unique tool to analyze the connection between a theoretical statement, which incorporates hypotheses regarding causal relationships among variables, and the consequences of this statement in terms of emerging patterns of behavior.

APPENDIX A

The Jacobian matrix at the equilibrium point $(x_1^0, x_2^0, y_1^0, y_2^0)$ has the block-triangular form

$$D = \begin{pmatrix} D_1 & * \\ 0 & D_2 \end{pmatrix}$$

where $D_1$ and $D_2$ are the matrices

$$D_1 = \begin{pmatrix} g_1 - \dfrac{2 g_1 x_1^0}{y_1^0 + y_2^0} - \dfrac{g_1 c_{21} x_2^0}{y_1^0 + y_2^0} & -\dfrac{c_{21} g_1 x_1^0}{y_1^0 + y_2^0} \\ -\dfrac{c_{12} g_2 x_2^0}{y_1^0 + y_2^0} & g_2 - \dfrac{2 g_2 x_2^0}{y_1^0 + y_2^0} - \dfrac{g_2 c_{12} x_1^0}{y_1^0 + y_2^0} \end{pmatrix}$$

$$D_2 = \begin{pmatrix} \dfrac{\tilde{g}_1 \left( u - 2 y_1^0 - \tilde{c}_{12} y_2^0 \right)}{u} & -\dfrac{\tilde{c}_{12} \tilde{g}_1 y_1^0}{u} \\ -\dfrac{\tilde{c}_{21} \tilde{g}_2 y_2^0}{u} & \dfrac{\tilde{g}_2 \left( u - 2 y_2^0 - \tilde{c}_{21} y_1^0 \right)}{u} \end{pmatrix}$$

In order to apply the competing species model described previously, we set

$$e_1 = g_1, \quad e_2 = g_2, \quad s_1 = \frac{g_1}{y_1^0 + y_2^0}, \quad s_2 = \frac{g_2}{y_1^0 + y_2^0}, \quad a_1 = \frac{c_{21} g_1}{y_1^0 + y_2^0}, \quad a_2 = \frac{c_{12} g_2}{y_1^0 + y_2^0},$$

$$\tilde{e}_1 = \tilde{g}_1, \quad \tilde{e}_2 = \tilde{g}_2, \quad \tilde{s}_1 = \frac{\tilde{g}_1}{u}, \quad \tilde{s}_2 = \frac{\tilde{g}_2}{u}, \quad \tilde{a}_1 = \frac{\tilde{c}_{21} \tilde{g}_1}{u}, \quad \tilde{a}_2 = \frac{\tilde{c}_{12} \tilde{g}_2}{u}.$$

If we substitute into the inequalities

$$s_1 s_2 > a_1 a_2, \qquad \tilde{s}_1 \tilde{s}_2 > \tilde{a}_1 \tilde{a}_2,$$

which decide whether we have weak or strong competition, we obtain

$$\frac{g_1 g_2}{(y_1^0 + y_2^0)^2} > \frac{c_{12} c_{21} g_1 g_2}{(y_1^0 + y_2^0)^2}, \qquad \frac{\tilde{g}_1 \tilde{g}_2}{u^2} > \frac{\tilde{c}_{12} \tilde{c}_{21} \tilde{g}_1 \tilde{g}_2}{u^2},$$

which give

$$c_{12} c_{21} < 1 \quad \text{and} \quad \tilde{c}_{12} \tilde{c}_{21} < 1.$$

These two conditions correspond to weak interaction in both of the two competing species models coupled in our dynamical system. There are four possible cases:

$$c_{12} c_{21} < 1 \text{ and } \tilde{c}_{12} \tilde{c}_{21} < 1; \qquad c_{12} c_{21} < 1 \text{ and } \tilde{c}_{12} \tilde{c}_{21} > 1;$$
$$c_{12} c_{21} > 1 \text{ and } \tilde{c}_{12} \tilde{c}_{21} < 1; \qquad c_{12} c_{21} > 1 \text{ and } \tilde{c}_{12} \tilde{c}_{21} > 1.$$

Each has to be analyzed separately. With the knowledge of the behavior of the competing species model, we can then establish the stability of our equilibrium points right away.

NOTES

1. See Appendix A for calculations.

BIBLIOGRAPHY

Axelrod, R. (1984) The Evolution of Cooperation, New York: Basic Books.
Becattini, G. (1979) 'Dal settore industriale al distretto industriale. Alcune considerazioni sull'unità di indagine dell'economia industriale', Rivista di economia e politica industriale, 1.
Boyce, W.E. and DiPrima, R.C. (1997) Elementary Differential Equations and Boundary Value Problems, New York: Wiley.
Boyd, R., Gintis, H., Bowles, S. and Richerson, P.J. (2005) 'The evolution of altruistic punishment', in H. Gintis, S. Bowles, R. Boyd and E. Fehr (eds) Moral Sentiments and Material Interests: The foundations of cooperation in economic life, Cambridge, MA: MIT Press.
Centola, D., Willer, R. and Macy, M. (2005) 'The emperor's dilemma: a computational model of self-enforcing norms', American Journal of Sociology, 110 (4): 1009–40.
Cohen, K.J. (1960a) Computer Models of Shoe, Leather, Hide Sequence, Englewood Cliffs, NJ: Prentice-Hall.
Cohen, K.J. (1960b) 'Simulation of the firm', The American Economic Review, 50 (2), Papers and Proceedings of the Seventy-second Annual Meeting of the American Economic Association: 534–40.
Cohen, K.J. and Cyert, R.M. (1961) 'Computer models in dynamic economics', The Quarterly Journal of Economics, 75 (1): 112–27.
Cooke, P., Gomez Uranga, M. and Etxebarria, G. (1997) 'Regional innovation systems: institutional and organisational dimensions', Research Policy, 26 (4–5): 475–91.
Davis, M.S. (1971) 'That's interesting!', Philosophy of Social Science, 16 (3): 285–301.
Eguiluz, V.M., Zimmermann, M.G., Cela-Conde, C.J. and San Miguel, M. (2005) 'Cooperation and the emergence of role differentiation in the dynamics of social networks', American Journal of Sociology, 110 (4): 977–1008.
Granovetter, M.S. (1985) 'Economic action and social structure: the problem of embeddedness', American Journal of Sociology, 91 (3): 481–510.
Hanaki, N., Peterhansl, A., Dodds, P.S. and Watts, D.J. (2007) 'Cooperation in evolving social networks', Management Science, 53 (7): 1036–50.
Hanneman, R.A., Collins, R. and Mordt, G. (1995) 'Discovering theory dynamics by computer: experiments on state legitimacy and imperialist capitalism', Sociological Methodology, 25: 1–46.
Human, S.E. and Provan, K.G. (1997) 'An emergent theory of structure and outcomes in small-firm strategic manufacturing networks', The Academy of Management Journal, 40 (2): 368–403.
Human, S.E. and Provan, K.G. (2000) 'Legitimacy building in the evolution of small-firm multilateral networks: a comparative study of success and demise', Administrative Science Quarterly, 45 (2): 327–65.
Kaplan, A. (1964) The Conduct of Inquiry, Scranton, PA: Chandler.
Keeble, D. and Wilkinson, F. (1999) 'Collective learning and knowledge development in the evolution of regional clusters of high-technology SMEs in Europe', Regional Studies, 33 (4): 295–303.
Lawson, C. and Lorenz, E. (1998) 'Collective learning, tacit knowledge and regional innovative capacity', Regional Studies, 33 (4): 305–17.
Lomborg, B. (1996) 'Nucleus and shield: the evolution of social structure in the iterated prisoner's dilemma', American Sociological Review, 61 (2): 278–307.
Macy, M. (1991) 'Learning to cooperate: stochastic and tacit collusion in social exchange', The American Journal of Sociology, 97 (3): 808–43.
Macy, M.W. and Skvoretz, J. (1998) 'The evolution of trust and cooperation between strangers: a computational model', American Sociological Review, 63 (5): 638–60.
Maillat, D. (1991) 'The innovation process and the role of the milieu', in E. Bergman, G. Maier and F. Tödtling (eds) Regions Reconsidered: economic networks, innovation and local development in industrialized countries, London: Mansell.
Malerba, F., Nelson, R., Orsenigo, L. and Winter, S. (1999) ''History-friendly' models of industry evolution: the computer industry', Industrial and Corporate Change, 8 (1): 3–40.
Maskell, P. and Malmberg, A. (1999) 'Localised learning and industrial competitiveness', Cambridge Journal of Economics, 23: 167–85.
Marafioti, E., Mollona, E. and Perretti, F. (2008) 'International strategies and declusterization: a dynamic theory of Italian clusters', Proceedings of the 50th Conference of the Academy of International Business, Milan, 2008.
Mass, N.J. (1981/1991) 'Diagnosing surprise model behavior: a tool for evolving behavioral and policy insights', System Dynamics Review, 7 (1): 68–86.
McKendrick, D.G., Doner, R.F. and Haggard, S. (2001) From Silicon Valley to Singapore: location and competitive advantage in the hard disk drive industry, Palo Alto, CA: Stanford University Press.
Meyer, K. (2004) 'Perspectives on multinational enterprises in emerging economies', Journal of International Business Studies, 35 (4): 259–76.
Mollona, E. and Presutti, M. (2006) 'A population ecology approach to capture dynamics of cluster evolution: using computer simulation to guide empirical research', Proceedings of the 24th International System Dynamics Conference, Nijmegen, Netherlands, The System Dynamics Society, 2006. Available HTTP: .
Nelson, R.R. and Winter, S.G. (1975) 'Factor prices changes and factor substitution in an evolutionary model', Bell Journal of Economics and Management Science, 6: 466–86.
Nelson, R.R., Winter, S.G. and Schuette, H.L. (1976) 'Technical change in an evolutionary model', The Quarterly Journal of Economics, 90 (1): 90–118.
Nettle, D. and Dunbar, R.I.M. (1997) 'Social markers and the evolution of reciprocal exchange', Current Anthropology, 38 (1): 93–9.
Noah, M. (1998) 'Beyond individual differences: social differentiation from first principles', American Sociological Review, 63 (3): 309–30.
Nowak, M.A., Bonhoeffer, S. and May, R.M. (1994) 'Spatial games and maintenance of cooperation', Proceedings of the National Academy of Sciences of the United States of America, 91 (11): 4877–81.
Orcutt, G.H. (1960) 'Simulation of economic systems', The American Economic Review, 50 (5): 893–907.
Pinch, S. and Henry, N. (1999) 'Paul Krugman's geographical economics, industrial clustering and the British motor sport industry', Regional Studies, 33 (9): 815–27.
Porter, M. (1998) 'Clusters and the new economics of competition', Harvard Business Review, November–December: 77–90.
Porter, M.E. (2000) 'Location, competition, and economic development: local clusters in a global economy', Economic Development Quarterly, 14 (1): 15–34.
Powers, C.H. and Hanneman, R.A. (1983) 'Pareto's theory of social and economic cycles: a formal model and simulation', Sociological Theory, 1: 59–89.
Sastry, M.A. (1997) 'Problems and paradoxes in a model of punctuated organizational change', Administrative Science Quarterly, 42: 237–75.
Saxenian, A.L. (1994) Regional Advantage: culture and competition in Silicon Valley and Route 128, Cambridge, MA: Harvard University Press.
Scott, A.J. (1982) 'Production system dynamics and metropolitan development', Annals of the Association of American Geographers, 72 (2): 185–200.
Scott, A.J. (1998) Regions and the World Economy: the coming shape of global production, competition and political order, Oxford: Oxford University Press.
Scott, A.J. and Storper, M. (1992) 'Regional development reconsidered', in H. Ernste and V. Meier (eds) Regional Development and Contemporary Industrial Response: extending flexible specialization, London: Bellhaven Press.
Staber, U. (1998) 'Inter-firm co-operation and competition in industrial districts', Organization Studies, 19 (4): 701–24.
Staber, U. and Morrison, C. (2000) 'The empirical foundations of industrial district theory', in D. Wolfe and A. Holbrook (eds) Innovation, Institutions and Territory: regional innovation systems in Canada, Montreal: McGill-Queen's University.
Troitzsch, K.G.
(1998) ‘Multilevel process modelling in the social sciences: mathematical analysis and computer simulation’, in W.B.G. Liebrand, A. Nowak and R. Hegselmann (eds) Computer Modeling of Social Processes, London: Sage.

4

Mix, Chain and Replicate—Methodologies for Agent-Based Modeling of Social Systems1

David Hales

INTRODUCTION

The modeling of social processes and systems using agent-based models (ABM) is increasingly seen as a valid tool over a wide range of disciplines including economics, sociology and anthropology (Gilbert and Troitzsch 2005; Halpin 1999). Although not considered a central tool or method within any one discipline, ABM has attracted loyal followers in each area, bringing together researchers from many disciplines with different methodological backgrounds and approaches. This rich mixture of approaches and backgrounds is the primordial soup from which great and original work can evolve, but it can also lead to misunderstanding, failure to communicate and, perhaps worst of all, the constant re-emergence of stale and entrenched debates that are very well represented in other areas of social science.

ABM is a technique in which models are composed of a number of subunits called "agents" that represent subentities of a social system. Agents may represent individuals, groups, firms or other entities. In computational simulation work the agents are software abstractions. Agents are represented as algorithms (rule-based decision processes) and data (local agent memory). Agents inhabit a shared environment in which they interact with each other. The scenario being modeled dictates the nature of the agents, their interactions and the environment.

Work in ABM is methodologically permissive. There is no single ABM method or methodology.2 ABM is a technique or technology rather than a methodology or a discipline. It is important to understand this since it explains why no single methodology would be appropriate for all ABM work. Methodology is rarely discussed explicitly and in detail in ABM papers because it is assumed that the nature of the investigation, the framing of the research questions and the ABM itself should be sufficient for the reader to understand why the particular approach is being employed. This, in general, is the case with good ABM work. However, it can be confusing for those new to ABM looking for methodological clarity, and it can also lead to confusion between experienced researchers who have used ABM but come from different methodological traditions.
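To make the "algorithms plus data" description concrete, the following is a minimal illustrative skeleton in Python. The class names, the random pairwise matching and the two-strategy rule are assumptions made for the example, not a model drawn from this chapter.

```python
import random

class Agent:
    """An agent: a rule-based decision process plus local memory."""
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.memory = []  # local agent data, e.g. records of past interactions
        self.strategy = random.choice(["cooperate", "defect"])  # decision-rule state

    def act(self, partner):
        # Rule-based decision process: here, simply play the current strategy.
        return self.strategy

class Environment:
    """A shared environment in which agents interact."""
    def __init__(self, n_agents):
        self.agents = [Agent(i) for i in range(n_agents)]

    def step(self):
        # One interaction: two randomly matched agents act and remember it.
        a, b = random.sample(self.agents, 2)
        a.memory.append((b.agent_id, b.act(a)))
        b.memory.append((a.agent_id, a.act(b)))

env = Environment(n_agents=100)
for _ in range(1_000):
    env.step()
```

The scenario being modeled would replace the placeholder decision rule and interaction with domain-specific ones.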

We identify a number of approaches that can be combined in different ways to reflect many of the methodologies found in the literature. We argue that such approaches can be used to allow ABM researchers to build on each other's findings and models. We believe that finding ways of building on, testing, extending and reapplying findings is a necessary condition for approaching the level of rigor required to support what might be termed a "science" of social systems. Additionally, linking models from different disciplines and traditions makes increased communication possible between different researchers. Communication occurs through the ABM models. The models themselves can become a kind of lingua franca.

This chapter is structured in the following way: first we present some quotes from formative researchers in the field concerning methodology, specifically discussing the fact that ABM applied to sociological phenomena incorporates both deduction and induction in interesting and new ways. We then present a "mix and match" approach based loosely on a Popperian (Popper 1968) approach to the analysis of ABM. We then present the idea of "chains of models" and how they relate to ABM. Following this we briefly discuss ABM replication and finally put the pieces together and conclude with some observations on progress in ABM methodology over the last 10 years.

COMBINING DEDUCTION AND INDUCTION

It has been noted by several foundational social simulation researchers that ABM social simulation does not fit neatly into either deductive or inductive methodologies. Consider the following comments:

Simulation is a third way of doing science. Like deduction, it starts with a set of explicit assumptions. But unlike deduction, it does not prove theorems . . . induction can be used to find patterns in data, and deduction can be used to find consequences of assumptions, simulation modelling can be used as an aid to intuition. (Axelrod 1997)

Clearly, agent-based social science does not seem to be either deductive or inductive in the usual senses. But then what is it? We think generative is an appropriate term . . . We consider a given macrostructure to be "explained" by a given micro-specification . . . (Epstein and Axtell 1996)

We can therefore hope to develop an abstract theory of multiple agent systems and then to transfer its insights to human social systems, without a priori commitment to existing particular social theory. (Doran 1998)

Our stress . . . is on a new experimental methodology consisting of observing theoretical models performing on some testbed. Such a new methodology could be defined as "exploratory simulation". (Gilbert and Conte 1995)

In the following section we incorporate these observations into a "mix and match" method combining various components that are found in ABM modeling work, producing many possible kinds of method applicable to ABM depending on the nature of the research questions that are being addressed.

MIX-AND-MATCH METHODOLOGIES

In order to group different ABM approaches into a set of definable methods we have imported some Popperian terminology (Popper 1968). Of course here we apply these terms to an artificial deductive system (ABM models) rather than the real world. To be more precise we examine the ABM as an entity "in the world" which can be empirically examined by applying a kind of Popperian approach. We do not claim the approaches we present are exhaustive and we do not wish our tone to be prescriptive. Rather these sketches should be seen as ways to clarify and classify methods already in use in ABM work.

The methods employed in ABM work can be broken down into a collection of reasonably distinct components. These are: a set of assumptions (A) that are used to specify the agents and their environment, a set of runs (R) comprising executions of a computer program which embodies A, a set of measurements or observations (O) of the runs, a set of explanations (E) which attempt to link A and O in some meaningful way and a set of hypotheses (H) linked to E based on A and O. A, R and O are formalized since A is represented by a computer program, R some set of executions of the program and O some specified measures of R. However, E and H may or may not be formalized. They are often given in a mixture of natural language using qualitative concepts and statistical or mathematical relationships. In either case the explanation aims to illuminate the dynamic processes in R with reference to A and O and possibly via the identification of some emergent properties.3

Connecting the aforementioned components in different ways reveals several methods of inquiry, some of which are now detailed. Perhaps the simplest method is the presentation of an existence proof. An existence proof does not require E or H at all. Here A is shown to be sufficient to produce some O (see Figure 4.1). Much ABM work follows this method, at least in publication presentation, because it is concise and easy to understand. Some assumptions are given and shown to be sufficient to produce some outcome. The evidence is presented based on observations usually shown as charts of individual runs and distributions over multiple runs. In general this kind of method benefits from minimal assumptions (simple agents) and a qualitative, easily identifiable outcome.
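As a hedged illustration of these components, the sketch below encodes a toy set of assumptions (A) as a parameterised program, executes a set of runs (R) and computes observations (O) from them, in the existence-proof pattern of Figure 4.1. The model, its parameters and the 0.7 threshold are placeholders invented for the example.

```python
import random
import statistics

def run_model(assumptions, seed):
    """One run (an element of R): returns a final cooperation level."""
    rng = random.Random(seed)
    coop = assumptions["initial_cooperation"]
    for _ in range(assumptions["steps"]):
        coop += rng.uniform(-0.01, 0.01) + assumptions["bias"]
        coop = min(max(coop, 0.0), 1.0)
    return coop

A = {"initial_cooperation": 0.5, "steps": 500, "bias": 0.0005}  # assumptions
R = [run_model(A, seed) for seed in range(30)]                  # runs
O = {"mean": statistics.mean(R), "stdev": statistics.stdev(R)}  # observations

# Existence proof: A is shown sufficient to produce the outcome of interest.
print("High cooperation emerges:", O["mean"] > 0.7, O)
```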


Figure 4.1 The form of an existence proof. The assumptions (A) are coded into an ABM, and a set of simulation runs (R) support some observations (O).

Behavior modeling (or reverse engineering) again does not require E or H. Here some existing process (R') is observed (O') and compared (possibly visually/qualitatively) against O and, based on divergence, A is revised. This process is continued until a satisfactory level of correspondence is observed (see Figure 4.2).

Theory testing involves the translation/abstraction of some existing theory concerning real social processes T into E, A and H and then the testing of H against O in order to either support or refute H and, by implication, T (see Figure 4.3). An early example of this was presented by Doran et al. (1994), in which a theory of Upper Palaeolithic change was tested.

Figure 4.2 Behavior modeling. Observations are compared to some existing process (R’) producing observations (O’). Assumptions are revised to align behavior.
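As a hedged illustration of this revision loop, the sketch below assumes a one-parameter toy model: the "target" series stands in for the observations (O') of an existing process, and the assumption A (a growth rate) is revised until O and O' correspond within a tolerance. All names and values are invented for the example.

```python
def simulate(growth_rate, steps=20):
    """Toy model: exponential growth under the single assumption A = growth_rate."""
    x, series = 1.0, []
    for _ in range(steps):
        x *= 1.0 + growth_rate
        series.append(x)
    return series

target = simulate(0.05)   # stands in for observations O' of an existing process
growth_rate = 0.01        # initial assumption A

for _ in range(100):      # iterative revision of A based on divergence
    observed = simulate(growth_rate)            # observations O of the model
    divergence = observed[-1] - target[-1]
    if abs(divergence) < 0.01:                  # satisfactory correspondence
        break
    growth_rate += -0.001 if divergence > 0 else 0.001

print(f"Calibrated growth rate: {growth_rate:.3f}")
```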

Figure 4.3 Theory testing. An existing theory (T) is used to specify assumptions (A) and an explanation (E) which explains how A leads to O in the model. From E hypotheses are derived (H) which predict what O should be. These can be tested against O.


Figure 4.4 Theory building. By revising the explanation (E) and assumptions (A) based on finding agreement of hypotheses (H) with observations (O) new theory can potentially be created.

Theory building involves the abstraction from T into E, A and H, comparison between O and H and then possible revision of E and/or A. Given that a state is reached in which E, H and O correspond, E and H can then possibly be "de-abstracted" into T producing a theory testable against real social processes (see Figure 4.4).

Explanation finding involves iterative refinement of E based on comparison of H with O without changing A (see Figure 4.5). This means we fix the assumptions; this might be necessary when the research question involves relatively fixed assumptions which produce O of interest but it is not known how this happens—i.e., some emergent property that the ABM modeler is able to produce but does not understand how R produces. This might be termed "trying to find out what is going on in an ABM by repeatedly applying new hunches and then trying to refute them". Actually this method most closely reflects the spirit of Popper since we cannot change the assumptions (A) and we are looking for explanations through a kind of informed trial-and-error process. It is generally the case in this mode that refutation is the easiest course of action by which to test E. One can look for some observation that will refute H.

Figure 4.5 Explanation finding. Revise the explanation (E) until the derived hypotheses (H) match the observations (O).
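The following sketch illustrates the refutation step with assumptions held fixed: a hypothesis H, derived from a candidate explanation E, is expressed as a predicate over observations, and the runs are searched for a counterexample. The toy model and the particular H are invented for the example.

```python
import random

def run_model(seed, steps=200):
    """Fixed-assumption toy model returning a time series of group sizes."""
    rng = random.Random(seed)
    size, series = 10, []
    for _ in range(steps):
        size = max(1, size + rng.choice([-1, 0, 1]))
        series.append(size)
    return series

def refutes_h(series):
    """H: 'group size never falls below 5 once it has exceeded 20'."""
    exceeded = False
    for s in series:
        exceeded = exceeded or s > 20
        if exceeded and s < 5:
            return True        # counterexample found: H is refuted
    return False

counterexamples = [seed for seed in range(100) if refutes_h(run_model(seed))]
print("H refuted in runs:" if counterexamples else "H survives all runs",
      counterexamples[:5])
```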

Many of these modes combine deduction and induction, often in an iterative way. Such investigation has been termed "ceduction", which is short for "computer experimental induction/deduction" (Hales 1998). The inductive process here is viewed as iterative observation and a revision of E and A. The O is produced deductively (computationally) from A, but the revision of E and A is an inductive process based on observation guided by H.

It should be made clear why we have attached strong caveats to our use of the term "Popperian" approach. Although we can use the mix-and-match methods to refute hypotheses (H) we can also "change the rules of the universe" by changing A to "unrefute" some H. This should not be seen as "cheating" but (as we have labeled previously) a kind of theory building or behavior modeling. This is a constructive enterprise in which we ask the question: What assumptions are sufficient to produce certain kinds of observable behavior from the ABM? However it should also be noted that this does not mean "anything goes" because A will be constrained by the specific research questions being addressed.

CHAINS OF MODELS

It is often desirable to import certain properties from existing models into new models. For example, a highly abstract model of an artificial society which self-organizes high levels of cooperation between egotistical agents might help to explain a specific target social phenomenon if it can be incorporated into a more elaborated and specialized model (by supplementing and possibly changing some assumptions). But since many of the properties of ABM models result from complex and emergent processes it is rarely easy to identify which elements of the set of assumptions (A) are necessary, sufficient or contingent. Hence importing properties from existing models into new models is not a matter of simply selecting known assumptions and combining them with new assumptions.

One way to achieve the import process is to construct chains of models in which the assumptions are varied gradually in each successive model until a sufficient level of detail or abstraction is obtained. The links between models in the chain represent the preservation of some desirable property between models. Essentially what is happening during an iterative chaining process is that theory, in the form of algorithms evidencing some phenomena of interest, is being carried over into a new scenario or context. This is particularly useful when models are to be moved across disciplinary boundaries. For example, a biologically orientated evolutionary model might display properties that can be used to capture a social process by changing some assumptions, or vice-versa.

Chains can also be constructed post hoc, rather than as part of a goal-orientated process. That is, existing models produced for different reasons and at different levels of detail or application may be found to be chainable if a common link can be found between them—i.e., if they can be shown to


share a given property and subset of assumptions that support it. This has been termed model "alignment" or "docking" (Axtell et al. 1996) or, more generally, "model-to-model" analysis (Hales et al. 2003). This approach of finding common phenomena and mechanisms operating in different models constructed in different disciplines offers the possibility of finding general and unified underlying processes expressible at different levels. Essentially, by linking models in this way one attempts to link or unify the theories embodied in the models.

A model chain may terminate when it reaches a target system (real social system) in which it is empirically validated via comparison of the target with the terminal model. We do not discuss in detail how this may be done here but we refer interested readers to the "cross-validation" work of Moss and Edmonds (2005). Essentially cross-validation involves grounding both the assumptions, specifically the micro-behavior of the agents, and the observations of system macro-behavior in real social systems.

Another way that ABM may interact with the real social world is through a construction process that incorporates the stakeholders themselves (the agents being modeled) in the model construction process. This is termed "participatory modeling". Again we do not discuss this here as it is covered in detail elsewhere (for a good overview see Ramanath and Gilbert 2004).

More recently ABM social models have been applied to finding engineering solutions, through chains of models, in distributed self-organizing software systems such as agent-based computing (Brueckner et al. 2006) and more recently peer-to-peer systems (Hales and Arteconi 2006). In this approach chains terminate when they have reached the level of elaboration required to produce an actual deployable implementation. In this sense validation becomes demonstrating that the software system performs the required functions.

Figure 4.6 shows an example of a chain linking several ABM moving from an abstract social model (TagWorld—Hales 2000) towards two peer-to-peer (P2P) applications: Broadcast (Arteconi and Hales 2006) and CacheWorld (Hales, Marcozzi and Cortese 2007). The more abstract models are to the left, the more specific to the right. Although both Broadcast and CacheWorld have a common lineage in TagWorld and NetWorld (Hales 2005) they differ considerably as they are modifications of the intermediate models SLACER (Hales 2006) and SkillWorld (Hales 2006). For each model a brief description plus the scenario used are given in the figure. However, these details are not important; rather this is given as an example of model chaining in action. Note that the more abstract models use the prisoner's dilemma game as a test for the emergence of cooperation, and the more applied models emerge cooperation in specific P2P application domains.

It should be noted that chains can run in either direction; for example recent work has taken P2P applications and chained back to new kinds of


Figure 4.6 Example of a model chain terminating in peer-to-peer application domain models.

social theory models (Mollona and Marcozzi 2009)—these are examples of so-called “peer production” models (Benkler 2006).
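A schematic sketch of checking one desirable property along a chain, with every model reduced to a placeholder function: each successive variant changes assumptions gradually, and a link holds only if the property of interest (here, an assumed cooperation level above an assumed threshold) is preserved.

```python
import random

def abstract_model(rng):
    # Abstract social model: returns an emergent cooperation level.
    return 0.80 + rng.uniform(-0.05, 0.05)

def intermediate_model(rng):
    # Same mechanism with added domain detail (assumptions varied gradually).
    return 0.75 + rng.uniform(-0.05, 0.05)

def applied_model(rng):
    # Most specific variant, e.g. an application-domain scenario.
    return 0.72 + rng.uniform(-0.05, 0.05)

def preserves_property(model, threshold=0.6, runs=30):
    """A link in the chain holds if the property survives in every run."""
    rng = random.Random(42)
    return all(model(rng) > threshold for _ in range(runs))

chain = [abstract_model, intermediate_model, applied_model]
for model in chain:
    assert preserves_property(model), f"property lost at {model.__name__}"
print("Cooperation level preserved along the whole chain.")
```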

REPLICATION

We have argued elsewhere that, since ABM are generally not analytically tractable (i.e., we need to use the empirical approaches described earlier), confidence in results can be obtained only via replication of results by independent researchers (Edmonds and Hales 2003). In exactly the same way that empirical findings in scientific areas such as physics need to be replicated to be trusted, so do ABM results.

Good replications should ideally work only from the assumptions (A) given in the original work. This means ignoring extraneous details such as the specific computational environment, computer languages and tools used, since these should not affect the results obtained. Indeed a good replication should start from scratch using different languages and computational abstractions if possible. Essentially the ABM should be recoded based on the assumptions presented in the original work. These assumptions follow a kind of high-level specification, and by replicating the ABM from the specification two critical questions are answered:

• Is the clarity and level of detail of the presented assumptions (A) sufficient for an ABM programmer to construct, from scratch, a working model?
• If an ABM can be produced, does it replicate the main results and observations (O) presented in the original work?

Interestingly, experience has shown that the first question is rarely answerable in the positive and often requires direct communication between the original researchers and the replicators. This should not be surprising, because original published work needs to follow the space and style constraints of academic publications. In general ABM work is presented as concisely as possible to communicate the general result rather than to give an exhaustive and unambiguous software specification. This can cause serious problems if the original authors of the work cannot be contacted. Several ABM researchers have proposed that published work should be supplemented with appendices containing additional detail in the form of a reasonably standardized pseudocode algorithm or flowchart describing the ABM simulation, in addition to the original source code (Edmonds 2004; Edmonds and Bryson 2004). However this practice is not widespread at present.

The second question, concerning actual reproduction of results, is rarely a simple matter of looking for an exact match between observations (O) of runs (R) in both models. This is because ABM work often involves many runs that produce alternative histories due to stochastic processes (randomness) built into the model. Often then, the issue becomes one of statistical matching of results and/or qualitative matches (i.e., the same emergent phenomena were observed). In fact the issue of randomness (or more specifically pseudorandomness) pervades ABM. By replicating in other environments different pseudorandom generator algorithms are applied. Experience indicates that it is rare for pseudorandom bias to seriously affect outcomes, but it is a possibility. It has also been noted that rounding errors due to real number representations in digital computers can also lead to seriously misleading results (Polhill et al. 2005). Unfortunately, replicated models will often have the same forms of rounding errors since this is a processor or operating system issue rather than an ABM implementation issue. It has been suggested that "interval arithmetic" implementations could be used to eliminate this potential source of error; however, currently this is very rarely done (but see Polhill and Izquierdo 2005).

Replication can be viewed as a simple and short model chain (as discussed previously). The chain contains two models and the phenomena of interest (to be preserved) are the entire set of observations (O) from the preceding model.

It is often claimed that, although desirable, there are few academic incentives to replicate. As we have discussed earlier it is not an easy task and, the argument goes, a positive or negative result does not necessarily lead to quality publications. Reviewers will ask—so what? If you can't reproduce the results perhaps your model is wrong or has a bug,4 and if you can replicate then what have we learned that is new? However, recently this appears to be changing as ABM become more widely cited and understood (see Will and Hegselmann 2008; Galan and Izquierdo 2005).

Methodologies for Agent–Based Modeling of Social Systems 115

Figure 4.7 Diagram outlining a replication process in which two independent replications were made of a previously published model. The process allowed for a detailed examination of the claimed results of the original model (for details see Edmonds and Hales 2003).
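A sketch of the statistical-matching step, assuming two independent recodings of the same specification: because runs are stochastic, the comparison is between distributions of outcomes rather than individual runs. The stand-in "implementations", sample sizes and significance level below are arbitrary; a two-sample Kolmogorov-Smirnov test is one common choice.

```python
import random
from scipy.stats import ks_2samp

def original_implementation(seed):
    rng = random.Random(seed)            # original pseudorandom stream
    return sum(rng.random() for _ in range(100))

def replicated_implementation(seed):
    rng = random.Random(seed + 10_000)   # independent recoding, its own stream
    return sum(rng.random() for _ in range(100))

o1 = [original_implementation(s) for s in range(200)]    # observations O, model 1
o2 = [replicated_implementation(s) for s in range(200)]  # observations O, model 2

stat, p_value = ks_2samp(o1, o2)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
print("Distributions match" if p_value > 0.05 else "Replication diverges")
```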

Another incentive for replication comes from using the chaining method discussed previously. If a researcher wishes to apply, say, an abstract model to some more specific domain, then the initial work should be to replicate the abstract model in an extensible form before modifying and specializing it. In this way the replication work is a by-product of the chaining process rather than the main focus of the work.

PUTTING IT ALL TOGETHER

If we put together the methods of mix and match, chaining and replication we can think of ABM work as a kind of expanding network of linked models. Nodes represent particular model instantiations (generally reported in some publication); links represent relationships between models (chains and/or replications). We can visualize such a network such that nodes on the periphery are more specific and applied and those nearer the core are more abstract and general. That is, nodes at the edge of the network terminate


Figure 4.8 A network visualization of models and how they relate. Nodes are ABM models and links represent chain relationships between models. The nodes at the periphery may be seen as linking to empirical social realities through various methods such as empirical validation, engineering implementations and participatory or descriptive processes. Nodes in the center (here marked with a T) represent abstract or theoretical models.

where they either relate directly to real-world empirical results (based on a real target system) or, from the engineering perspective, represent instantiations of deployed, working software systems.

CONCLUSION

In this chapter we have outlined three broad methods of working with ABM: mix-and-match methodologies, model chains and model replication. We have proposed approaching ABM empirically. We argue that ABM researchers should view their models as aspects of the physical world that can be investigated experimentally like other physical sciences. If analytically tractable and useful models of social behavior can be produced then we do not need to take the ABM route. But it seems evident that ABM researchers should not believe that, because they use computer models (based on automatic logical deductions of a computer program), their results are any sounder than those in the empirical sciences. This is experimental science with all the concomitant caveats, pitfalls, opportunities and possibilities.

With this in mind, what we have presented in this chapter is a loose summary of a set of methods and approaches that, although diverse, can integrate ABM work from diverse disciplines and with diverse goals. Again looking to the physical sciences, we see that it is possible to integrate both highly abstract theory, often based on intuition or mathematical beauty, with empirical experiment and applications. We believe careful use of ABM in social modeling can potentially achieve this through focusing on linking

models in chains, replicating important results and using a rigorous empirical methodology towards the ABMs themselves.

Over the last 10 years or so we have observed ABM maturing in a promising direction. We increasingly see physicists working with ABM applying a physics perspective. We see new replications of important models. Also we are seeing work explicitly linking models through model-to-model analysis (Hales et al. 2003; Rouchier et al. 2008) and cross-fertilization between social models and the engineering of distributed computer systems, because the requirements for such systems become ever more social, complex and self-organizing (Di Marzo Serugendo et al. 2007). Recently we have witnessed an explosion of empirical work based on the new and massive data sets available from Internet applications, mobile phone records and other electronic sources, allowing for levels of detailed social analysis never before possible (Palla et al. 2007). This offers potential for large-scale validation of ABM. We welcome these developments and look forward to the next decade of ABM research.

ACKNOWLEDGMENTS

Many of the ideas and thinking about methodology in this chapter were heavily influenced by Bruce Edmonds and Scott Moss from the Centre for Policy Modelling in Manchester. During my time working there (over four years ago now) extensive methodological discussions were ongoing and I benefited greatly from these. The ideas of replication and chaining are very much directly influenced by Bruce's ideas, work and approach. The earlier "mix and match" ideas were directly influenced by my PhD supervisor (and ABM pioneer), Jim Doran from Essex University (almost a decade ago now). All errors, vagueness and unconvincing arguments are of course my fault.

NOTES

1. This work was partially supported by the Future and Emerging Technologies program FP7-COSI-ICT of the European Commission through project QLectives (Grant no. 231200).
2. We use the words "method" and "methodology" synonymously in this chapter.
3. We do not define or discuss the nature of "emergence" in detail here. The term is used in different ways by different authors. For our purposes it can be considered to mean some observable property that emerges from the runs of a model that is not intuitively expected from (or easily reducible to) the assumptions that comprise the rules coded into the agents.
4. One way to address this is to perform a further independent replication to give three models. One can then use a majority vote to determine which model appears to be misbehaving. If all three models disagree we can at least be sure that the specification is too vague to be used for meaningful replication.


BIBLIOGRAPHY

Arteconi, S. and Hales, D. (2006) 'Broadcasting at the critical threshold', Technical Report UBLCS-2006-22, University of Bologna, Dept. of Computer Science.
Axelrod, R. (1997) 'Advancing the art of simulation in the social sciences', in R. Conte and R. Hegselmann (eds) Simulating Social Phenomena—LNEMS 456, Berlin: Springer.
Axtell, R., Axelrod, R., Epstein, J. and Cohen, M.D. (1996) 'Aligning simulation models: a case study and results', Computational and Mathematical Organization Theory, 1: 123–41.
Benkler, Y. (2006) The Wealth of Networks: how social production transforms markets and freedom, New Haven, CT: Yale University Press.
Brueckner, S., Di Marzo Serugendo, G., Hales, D. and Zambonelli, F. (eds) (2006) 'Engineering self-organising systems', Proceedings of the 3rd Workshop on Engineering Self-Organising Applications (EOSA'05), Lecture Notes in Artificial Intelligence, 3910, Berlin: Springer.
Di Marzo Serugendo, G., Martin-Flatin, J.P., Jelasity, M. and Zambonelli, F. (eds) (2007) 'Proceedings of the First International Conference on Self-Adaptive and Self-Organizing Systems (SASO2007)', July 2007, Boston: MIT, IEEE Press.
Doran, J. (1998) 'Simulating collective misbelief', Journal of Artificial Societies and Social Simulation, 1 (1). Online. Available HTTP: (accessed 22 February 2010).
Doran, J., Palmer, M., Gilbert, N. and Mellars, P. (1994) 'The EOS Project: modelling Upper Palaeolithic social change', in N. Gilbert and J. Doran (eds) Simulating Societies: the computer simulation of social phenomena, London: UCL Press.
Edmonds, B. (2004) Using the Experimental Method to Produce Reliable Self-Organised Systems: engineering self-organising systems, Berlin: Springer.
Edmonds, B. and Bryson, J. (2004) 'The insufficiency of formal design', in 3rd International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2004), 19–23 August 2004, New York: IEEE Computer Society Press.
Edmonds, B. and Hales, D. (2003) 'Replication, replication and replication—some hard lessons from model alignment', Journal of Artificial Societies and Social Simulation, 6 (4). Online. Available HTTP: (accessed 22 February 2010).
Epstein, J. and Axtell, R. (1996) Growing Artificial Societies: social science from the bottom up, London: MIT Press.
Galan, J. and Izquierdo, L. (2005) 'Appearances can be deceiving: lessons learned re-implementing Axelrod's "evolutionary approach to norms"', Journal of Artificial Societies and Social Simulation, 8 (3). Online. Available HTTP: (accessed February 2010).
Gilbert, N. and Conte, R. (eds) (1995) Artificial Societies: the computer simulation of social life, London: UCL Press.
Gilbert, N. and Troitzsch, K. (2005) Simulation for the Social Scientist, London: Open University Press.
Hales, D. (1998) 'Artificial societies, theory building and memetics', Proceedings of the 15th International Conference on Cybernetics, International Association for Cybernetics (IAC), Namur, Belgium.
Hales, D. (2000) 'Cooperation without space or memory: tags, groups and the prisoner's dilemma', in S. Moss and P. Davidsson (eds) Multi-Agent-Based Simulation, LNAI 1979, Berlin: Springer: 157–66.

Hales, D. (2005) 'Self-organising, open and cooperative P2P societies—from tags to networks', in Proceedings of the 2nd Workshop on Engineering Self-Organising Applications, LNCS 3464, Berlin: Springer: 123–37.
Hales, D. (2006) 'Emergent group-level selection in a peer-to-peer network', Complexus, 3: 108–18 (DOI: 10.1159/000094193).
Hales, D. and Arteconi, S. (2006) 'SLACER: a self-organizing protocol for coordination in P2P networks', IEEE Intelligent Systems, 21 (2): 29–35.
Hales, D., Edmonds, B., Norling, E. and Rouchier, J. (eds) (2003) 'Multi-agent based simulation III', Proceedings of the 4th International Workshop, MABS 2003, Melbourne, Australia, July 2003, Lecture Notes in Artificial Intelligence, 2927, Berlin: Springer.
Hales, D., Marcozzi, A. and Cortese, G. (2007) 'Towards cooperative, self-organised replica management', Proceedings of the First International Conference on Self-Adaptive and Self-Organizing Systems (SASO2007), July 2007, Boston: MIT, IEEE Press.
Hales, D., Rouchier, J. and Edmonds, B. (eds) (2003) 'Special issue on model-to-model comparison', Journal of Artificial Societies and Social Simulation, 6 (4). Online. Available HTTP: (accessed February 2010).
Halpin, B. (1999) 'Simulation in sociology', American Behavioral Scientist, 42 (10): 1488–1508.
Mollona, E. and Marcozzi, A. (2009) 'FirmNet: the scope of firms and the allocation of task in a knowledge-based economy', Computational and Mathematical Organization Theory (DOI: 10.1007/s10588-008-9049-8).
Moss, S. and Edmonds, B. (2005) 'Sociology and simulation: statistical and qualitative cross-validation', American Journal of Sociology, 110 (4): 1095–131.
Palla, G., Barabasi, A.L. and Vicsek, T. (2007) 'Quantifying social group evolution', Nature, 446: 664–7.
Polhill, G. and Izquierdo, L. (2005) 'Lessons learned from converting artificial stock market to interval arithmetic', Journal of Artificial Societies and Social Simulation, 8 (2). Online. Available HTTP: (accessed February 2010).
Polhill, G., Izquierdo, L. and Gotts, N. (2004) 'The ghost in the model (and other effects of floating point arithmetic)', Journal of Artificial Societies and Social Simulation, 8 (1). Online. Available HTTP: (accessed February 2010).
Popper, K. (1968) The Logic of Scientific Discovery, London: Hutchinson.
Ramanath, A. and Gilbert, N. (2004) 'The design of participatory agent-based social simulations', Journal of Artificial Societies and Social Simulation, 7 (4). Online. Available HTTP: (accessed February 2010).
Rouchier, J., Cioffi-Revilla, C., Polhill, G. and Takadama, K. (2008) 'Progress in model-to-model analysis', Journal of Artificial Societies and Social Simulation, 11 (2). Online. Available HTTP: (accessed February 2010).
Will, O. and Hegselmann, R. (2008) 'A replication that failed: on the computational model', Journal of Artificial Societies and Social Simulation, 11 (3). Online.

5

Firm Growth and Resource Sharing in Corporate Diversification

Michael S. Gary

$f(s_t) = 1, \quad \{s_t > 0\}$  (5.9)

$f(s_t) = 1 - \tfrac{2}{3}\, s_t, \quad \{0 \ge s_t \ge -0.75\}; \qquad f(s_t) = 1.5, \quad \{s_t < -0.75\}$  (5.10)

Performance of the firm is operationalized in Equation 5.11, where firm profit margin (πt) takes account of the economic implications of the diversification move, including any economies of scope benefits of resource sharing as well as any costs of overstretching shared resources. Total firm revenue is equal to core business revenue plus revenue of the new business. Revenue of the core business (κ) is constant over time, and new business revenue is determined by the number of new business customers (Nt) and
the average revenue per customer each quarter (ε). The first term in the numerator (before the brackets) and the term in the denominator both equal total firm revenue. The term in brackets in the numerator is total firm costs. The cost structure for the firm includes fixed costs (ψ), the costs of shared resources and the variable costs of servicing new business customers. The costs of shared resources are a function of the stock of shared resources (Rt) and the variable cost of each unit of shared resources (ν). The variable costs of serving new business customers are a function of the number of new business customers (Nt) and the variable cost per new business customer each quarter (θ). Economies of scope arise through spreading the existing fixed costs (ψ) over both the established and new businesses and through higher utilization of shared resources. However, if shared resources are overstretched, this will eventually lead to an increase in total firm costs. The impact of overstretching shared resources on costs (Ot) is a multiplier on the total costs of the firm. When the firm maintains slack resources, there is no impact on costs (Ot = 1). When slack drops below zero, the impact of overstretching shared resources on costs can increase the total costs of the firm by as much as 50 per cent. This formulation is consistent with previous research representing the costs of firm growth through the impact on total firm costs (Baumol 1962).

$\pi_t = \dfrac{\kappa + (N_t \cdot \varepsilon) - \left[\psi + (R_t \cdot \nu) + (N_t \cdot \theta)\right] \cdot O_t}{\kappa + (N_t \cdot \varepsilon)}$  (5.11)

With the full model now specified, in the next section we discuss some illustrative simulation results.
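Before turning to the experiments, Equations 5.9 to 5.11 as reconstructed above can be transcribed directly into code. In the minimal sketch below the parameter values (κ, ε, ψ, ν, θ) and the example inputs are illustrative assumptions, not the calibration used in the chapter's experiments.

```python
def overstretching_multiplier(s_t):
    """Equations 5.9-5.10: impact of organizational slack (s_t) on total costs."""
    if s_t > 0:
        return 1.0                      # slack maintained: no cost impact (O_t = 1)
    if s_t >= -0.75:
        return 1.0 - (2.0 / 3.0) * s_t  # costs rise linearly as slack turns negative
    return 1.5                          # impact capped at a 50 per cent cost increase

def profit_margin(N_t, R_t, s_t, kappa=1e6, eps=10.0, psi=2e5, nu=5.0, theta=4.0):
    """Equation 5.11: firm profit margin pi_t (parameter values are assumptions)."""
    O_t = overstretching_multiplier(s_t)
    revenue = kappa + N_t * eps         # core business revenue + new business revenue
    costs = (psi + R_t * nu + N_t * theta) * O_t
    return (revenue - costs) / revenue

# Example: 100,000 new business customers with resources overstretched (s_t = -0.1).
print(f"pi_t = {profit_margin(N_t=100_000, R_t=50_000, s_t=-0.1):.3f}")
```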

SIMULATION EXPERIMENTS

This section presents a handful of simulation experiments that highlight the dynamics of the full model. The model has been extensively analyzed to understand the range of behavior possible and the sensitivity of each parameter. The simulations that follow capture the evolution of a firm with an established core business over a time period of 60 quarters.

Figure 5.7 presents firm profitability results for four different experiments. The core business focus experiment, line 1 in Figure 5.7, represents a single-business firm focused entirely on its core business. The core business is mature and is neither growing nor shrinking over the entire time horizon. Profitability for the core business focus experiment is in a stable equilibrium at just under 20 per cent, and will serve as a benchmark for value creation for all subsequent simulations. In the core business focus experiment, the firm is endowed with excess resources beyond what are required for normal operations in the core business. This organizational slack is maintained throughout the


Figure 5.7 Comparison of profitability for four different simulation experiments.

simulation. In the ideal related diversification experiment, line 2 of Figure 5.7, the firm exploits its excess resources by embarking on a diversification move into a related new business. This diversification move couples the original core business with a new business that grows for several years before reaching equilibrium—typical sigmoidal logistic growth. The ideal related diversification simulation illustrates a scenario in which the full potential value of resource sharing between the two businesses is extracted, and profitability reaches nearly 24 per cent by the end of the simulation.

The related diversification with overstretching costs simulation, line 3 of Figure 5.7, exploits exactly the same potential synergy benefits as the ideal related diversification experiment. In addition, this experiment also includes the costs of overstretching the firm's stock of shared resources if resources are overextended. For the first 12 quarters of the simulation, there is no distinguishable difference between the ideal related diversification and related diversification with overstretching costs experiments. However, after this point the related diversification with overstretching costs experiment shows a dramatic collapse in profitability as the rising costs of overstretching shared resources undermine firm performance. After appearing to create value for the first 20 quarters (five years), by the end of the time horizon the related diversification move results in value destruction of over 2 percentage points compared to the core business focus simulation and almost 6 percentage points less than the ideal related diversification experiment. Figure 5.8 illustrates the underlying dynamics of this experiment for six key variables.


Figure 5.8 Related diversification with overstretching costs experiment. [Two-panel figure: the top panel plots new business customers together with an index of shared resources, total work demands and target resource workload; the bottom panel plots organizational slack and overstretching costs; the horizontal axis is time in quarters (0–60).]

The top part of Figure 5.8 shows that the new business customer base grows to 500,000 customers during the first 30 quarters (under eight years). Total work demands increase as the customer base grows during this period, and in response management invests in shared resources in an attempt to keep workload demands and resources in balance. In addition to this conscious managerial response of resource investment, the norms of the organization, in the form of the target resource productivity, are also evolving. As total work demands increase faster than the stock of shared resources, target resource productivity rises over time. Total work demands, shared resources and target resource productivity are shown in the top part of Figure 5.8 on the right-hand vertical scale as an index. All three variables are indexed relative to their initial values in order to compare them on the same scale.

Organizational slack, shown in the bottom part of Figure 5.8, steadily declines from an initial value of 5 per cent down to –16.5 per cent as total

work demands rise more rapidly than the stock of shared resources. This negative value indicates that resource workloads are 16.5 per cent higher than the efficient workload level; shared resources are considerably overstretched. The firm continues to operate with negative organizational slack over time because there is no signal for the need to invest in additional shared resources. It has become standard operating procedure for resources to cope with higher workloads, and the target resource productivity reflects this established norm. Human actors in the system now have higher expectations for resource productivity.

It takes time for overstretching shared resources to have an impact on costs, but ultimately these costs of overextending shared resources trickle through the organization and depress performance. The bottom part of Figure 5.8 shows that overstretching costs start rising around the end of 12 quarters, and rise gradually to reach 8.6 per cent by the end of the simulation. Overstretching costs are expressed here as a percentage of the operating costs of the firm, so that by quarter 60 overstretching burdens the firm with an additional 8.6 per cent over the ordinary operating costs. Profitability declines rapidly even as the new business continues to grow.

The very related diversification experiment, shown in line 4 of Figure 5.7, represents a scenario in which the new business is even more related to the core business than in the ideal related diversification and related diversification with overstretching costs simulations. In this experiment, the diversifying firm also benefits from leveraging the firm's reputation in the core business to grow the customer base in the new business more rapidly. This revenue-enhancing synergy is in addition to the potential economies of scope benefits captured in the previous experiments. As a result, the new business customer base grows more quickly and is ultimately 50 per cent larger than in the ideal related diversification and related diversification with overstretching costs simulations. The very related diversification experiment, counterintuitively, results in lower profitability than the related diversification with overstretching costs simulation—an experiment that represents a less related diversification move. In this case, revenue-enhancing relatedness in the form of a related reputation was not beneficial for the firm, since leveraging a related reputation resulted in more rapid growth and an ultimately larger new business customer base, which only stretched the stock of shared resources even further. Consequently, the costs of overstretching shared resources were even higher in this more related experiment and undermined the larger potential synergy benefits.

The related diversification with overstretching costs and very related diversification simulations demonstrate how a firm can destroy value in related diversification moves that, by definition, have significant potential economies of scope. These experiments demonstrate that management has an important role to play in coordinating the implementation of resource sharing to avoid undermining potential resource sharing benefits. Figure 5.9 shows two additional simulation experiments exploring how management can successfully tap the benefits of resource sharing. Lines 1 and 2 of Figure 5.9 are the core


business focus and ideal related diversification simulations just discussed, and are included here as performance benchmarks. The higher initial slack experiment, line 3 of Figure 5.9, represents a scenario in which the firm starts with an initial 15 per cent in excess resources, compared with 5 per cent initial slack resources in all previous simulations. This represents a policy where management embarks on a diversification move only when there is substantial slack in the organization to absorb growth. The rationale for such a policy is that the additional organizational slack enables management to maintain the balance between shared resources and total workload demands with the extra buffer of excess resources before the diversification. Simulation allows us to test the impact of this management policy to see if it can turn the related diversification into a value-creating success.

Not surprisingly, the 15 per cent initial slack simulation starts with slightly lower profitability than the previous experiments due to higher shared resource costs. However, performance in this simulation improves as the new business grows and drives up resource utilization. By the end of the simulation, profitability is 1½ percentage points above the core business focus benchmark, resulting in a successful diversification strategy. This experiment demonstrates the value of investing, perhaps quite significantly, in slack shared resources prior to a related diversification move. While the related diversification move in the higher initial slack experiment creates value, it is worth pointing out that this policy is not overly robust. Sensitivity tests indicate that performance under this policy is sensitive to several model parameters including the growth rate of the new business, time delays

Figure 5.9 Realizing synergy through higher initial slack or fixed targets.

associated with investing in and developing shared resources, the costs of holding excess shared resources and time delays associated with adjusting target resource productivity. The differences between creating and destroying value under this policy are relatively small.5 More importantly, it is not obvious that management can always reliably identify ex ante the appropriate level of slack needed before diversifying, and the appropriate level of initial slack is likely to vary considerably across different competitive environments.

The fixed target experiment, line 4 of Figure 5.9, represents a scenario in which management maintains a constant target resource productivity over the entire time horizon. There is no aspiration adjustment whatsoever in this simulation, representing a policy in which managers explicitly set targets for the level of resources required to adequately cope with varying workloads, and then stick to these initial targets. The fixed target experiment results in profitability that is almost 3 percentage points higher than the focused strategy performance benchmark. Profitability approaches but remains a bit below the ideal related diversification scenario, because of the additional shared resources required to maintain target resource workloads in this simulation. Figure 5.10 illustrates the underlying dynamics of this experiment for five key variables.

Lines 1 and 2 of Figure 5.10 illustrate the time path of total work demands and shared resources over the time horizon for the fixed target experiment. Growth in new business customers increases total workload demands, and management invests in shared resources to correct the resource shortfall. As the two lines diverge in Figure 5.10, we can see that total work demands grow more rapidly than shared resources over the first 30 quarters. However, target resource productivity, line 3, remains constant over the entire time horizon. These first three variables are all indexed relative to their initial values in order to compare them on the same left-hand vertical scale. The imbalance between total work demands and shared resources is reflected in declining organizational slack, line 4 of Figure 5.10 on the right-hand vertical scale, during the first 30 quarters. Slack declines from an initial 5 per cent down to a low of –6.5

Figure 5.10 Dynamics of the fixed target experiment. [Figure: an index of total work demands (line 1), shared resources (line 2) and target resource productivity (line 3) on the left-hand scale, with organizational slack (line 4) and overstretching costs (line 5) in per cent on the right-hand scale; the horizontal axis is time in quarters (0–60).]


per cent in quarter 16, indicating resource overstretching. As organizational slack drops below 0 per cent, overstretching costs rise from 0 per cent up to roughly 1.5 per cent, after a time lag, indicating a rise in total firm costs due to overstretching. However, since the target resource productivity remains fixed, the signal for management to continue to invest in expanding the stock of shared resources remains strong over this entire period, and eventually the balance is restored between shared resources and total work demands. When this balance is restored, organizational slack recovers and overstretching costs slowly decay back towards zero. This successful diversification implementation policy demonstrates the debilitating effect aspiration adjustment can have on the organization, and management's role in coordinating resource sharing in related diversification. It is obvious from these simulation experiments that implementation process issues are crucial in determining the success or failure of resource sharing.
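The contrast between a drifting and a fixed target can be sketched as follows. The chapter's actual aspiration-adjustment equations belong to the earlier model specification and are not reproduced here, so the exponential smoothing below is an assumed stand-in for that formulation, with invented numbers.

```python
def simulate_target(actual_productivity, adjustment_time=None, steps=60):
    """Returns the target productivity path; None means a fixed target."""
    target = actual_productivity[0]
    path = []
    for actual in actual_productivity[:steps]:
        if adjustment_time is not None:
            target += (actual - target) / adjustment_time  # aspiration drift
        path.append(target)
    return path

# Actual productivity creeping up as workloads outgrow shared resources.
actual = [1.0 + 0.01 * t for t in range(60)]

drifting = simulate_target(actual, adjustment_time=8)  # norms erode over time
fixed = simulate_target(actual, adjustment_time=None)  # explicit fixed target

# With a drifting target the perceived resource shortfall shrinks, muting the
# investment signal; with a fixed target the full shortfall stays visible.
print(f"final drifting target: {drifting[-1]:.2f}, fixed target: {fixed[-1]:.2f}")
```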

CONCLUSION

Diversifying into new business activities and shifting the allocation of shared resources across the portfolio is fundamentally a dynamic process. Growth in the new business may unintentionally overextend the stock of shared resources while organizational routines are still adapting to balance resources and workload demands. Overstretching shared resources allows the firm to keep up with increasing workloads but also negatively impacts financial performance in the long run. Overstretching shared resources may also lead to aspiration adjustment for target resource productivity that impacts the desired level of resources to cope with work demands. Once this reinforcing process has been activated, the signal for the need to invest in additional shared resources is progressively weakened. In addition, better-before-worse performance dynamics (Repenning and Sterman 2002) associated with increasing the utilization of shared resources contribute to management's belief that they have adopted the right strategy. In the short term, higher resource utilization increases financial performance and management takes satisfaction in a successful diversification move. It is only in the longer term that the costs of overstretching resources negatively impact performance, and by that time management may not be able to make accurate causal inferences about the root cause(s) of poor performance. Ideally, management should invest in shared resources in advance of increasing workloads in a new business. Of course, there is always a tension between making large up-front investments based on uncertain forecasts versus waiting until the expected growth materializes and then trying to invest to catch up.

Our analysis offers three contributions to understanding the performance of firms attempting to exploit resource sharing between related businesses. First, the findings demonstrate that even if significant economies of scope benefits exist for a related diversification move, these benefits may be wiped out if the implementation of resource sharing is not managed properly.

Second, the results illustrate the importance of establishing explicit, fixed targets for workload levels within the firm and monitoring organizational slack as workload demands fluctuate. Firms should consciously plan for slack shared resources to prevent overstretching. Explicit targets can prevent aspiration adjustment and the unintended, long-run costs of overstretching shared resources. This is consistent with previous research that found that adjusting behavior too quickly in response to feedback can be detrimental to organizational survival if it reduces the buffering effect of organizational slack (Levitt and March 1988).

Third, a counterintuitive finding is that a higher degree of relatedness between businesses may negatively impact financial performance. Traditional thinking posits that more related diversifiers should outperform less related firms. Simulation analyses demonstrate that a higher degree of relatedness may actually exacerbate resource overstretching and result in lower profitability compared with a less related case.

Together, these three findings suggest that the management of diversification moves may be an important factor in determining performance, along with other strategy content factors such as the type and mode of diversification. The potential for economies of scope benefits through resource sharing does not imply that related diversified firms will necessarily realize those benefits, regardless of the magnitude of the potential. Empirical evidence indicates that synergy is elusive and related diversifiers often do not reap the full potential benefits from the strategy (Grant et al. 1988). The implementation process plays a crucial role because the time path of net resource investment flows determines whether there are adequate shared resources to cope with changing workload demands. Coordinating investments and allocating shared resources across a related business portfolio place high information-processing demands on management. Our fieldwork with a European utility indicates that boundedly rational implementation policies for related diversification can lead to overstretching shared resources and ultimately undermine potential economies of scope benefits. This is consistent with prior research on rising administrative demands resulting in escalating decision errors and diseconomies of coordination and control (Sutherland 1980). Our findings are also consistent with previous research indicating that problems of coordination and control are more serious during periods of rapid expansion and growth (Penrose 1959).

Managers may believe potential economies of scope can always and automatically be realized. This suggests that organizations are lured into adopting the related diversification strategy by the potential economies of scope without adequate plans for the investment needed to extract these benefits or sufficient consideration of implementation difficulties (Nayyar 1993). As a result, many diversified firms find that expected synergy or business growth does not materialize, and then divest the business (Markides 1995). This refocusing strategy may be successful in improving profitability largely because it reduces resource overstretching, including overextended managers operating beyond their spans of control.
If there really are substantial potential synergy benefits, investing in additional shared resources could unleash those benefits and may create more value for shareholders than divesting businesses.


All models are simplifications of reality, and numerous factors have been omitted from the resource-sharing model presented in this chapter. The model presented here is purposely very simplified, so that we can begin to build on our current understanding with a parsimonious, integrated model. We hope this model provides a starting point for future research that can refine and extend it. There are numerous opportunities for such extensions. Figure 5.11 shows an expanded stock and flow diagram incorporating a number of additional feedback loops identified in the strategy literature as important for understanding the performance implications of corporate diversification. We stress that this is still not an exhaustive representation of the factors that impact the performance of diversification moves. The first two feedback loops have already been discussed previously.

The "3. Economies of Scope" loop captures the reinforcing economic benefits of resource sharing across multiple products or business units. Higher resource utilization results in lower unit costs and higher financial performance, providing funds to invest in additional diversification moves, which increases the breadth of products/business units in the portfolio, leading to additional total work demands and higher required shared resources, resulting in lower organizational slack and higher resource utilization to close a reinforcing feedback loop.

The "4. Overstretching" loop captures the balancing impact of overextending shared resources on financial performance. Decreasing organizational slack into negative values increases overstretching costs, leading to declining financial performance, which inhibits further diversification moves, reducing the breadth of products/business units (relative to what it would have been otherwise), leading to lower total work demands, fewer required shared resources and higher organizational slack, closing a balancing feedback loop.

The "5. Cost Pressure" loop captures the reinforcing impact of constraining investment in shared resources. Declining financial performance increases the performance gap, leading to rising cost pressure, which decreases net resource investment flows, reducing the level of shared resources (relative to what it would have been otherwise), resulting in lower organizational slack, leading to higher overstretching costs and lower financial performance, closing a reinforcing loop.

The "6. New Rivals Respond" loop captures the balancing effects of competitive rivalry on further diversification into new businesses. Rising breadth of products/business units results in increased competitive rivalry from established competitors in the new business, resulting in lower financial performance, limiting the investment in additional diversification moves and reducing the breadth of products/business units (relative to what it would have been otherwise), closing the balancing loop.

The "7. Misfit Resources" loop captures the impact of applying shared resources across businesses in the portfolio for which at least some of those resources are not a good fit. The best use of a core business resource is in the core business. As the firm diversifies further and further from its core business, the resources are less and less suitable for application in the new businesses, and yet diversified firms often apply misfit resources in the pursuit of cost efficiencies.

Figure 5.11 Expanded feedback diagram for the dynamics of firm growth and resource sharing in corporate diversification.

Increasing breadth of products/business units reduces the resource fit across the portfolio, resulting in lower financial performance, limiting investment in further diversification moves and reducing the breadth of products/business units (relative to what it would have been otherwise), closing the balancing feedback loop.

The "8. Diversify to Fuel Growth" loop captures the reinforcing impact of missing growth expectations and targets on motivating firms to pursue diversification strategies. Falling financial performance increases the performance growth gap, which over time leads to pressure for growth, resulting eventually in higher investment in diversification moves, increasing breadth of products/business units, rising total work demands, higher required shared resources, lower organizational slack, increasing overstretching costs and lower financial performance, closing the reinforcing feedback loop. Note that this pathway creates a number of additional reinforcing loops that have not been explicitly identified on the diagram.

The "9. Divest and Focus" loop captures the balancing impact of very prolonged periods of poor performance on triggering large-scale corporate reorganizations involving divesting non-core businesses to focus on the core. Falling financial performance increases the performance growth gap, which over a very long period of time leads to pressure for reorganization (often eventually spearheaded by the board of directors or an acquiring firm and usually resulting in a change in the senior management), leading to rising refocusing by divesting non-core assets, reducing the breadth of products/business units, decreasing the total work demands, lowering required shared resources, increasing organizational slack, reducing overstretching costs and increasing financial performance, closing the balancing feedback loop.

There are many additional feedback loops at work in the dynamics of firm growth and resource sharing in corporate diversification that we have not captured in the diagram. One such feedback mechanism is cumulative learning that drives productivity improvements and frees up slack resources over time (Penrose 1959). Such improvements are certainly important, but would not prevent overstretching on short- and medium-term time scales.

In addition, there are a number of areas of the model that could be further disaggregated to investigate more detailed research questions. All shared resources have been aggregated, and all other "unshared" resources in each business have not been included. Disaggregating resources is an obvious candidate, where the distinction between tangible and intangible resources is likely to be important in some contexts. Disaggregating resources would also enable exploration of resource complementarity and coordination. Further investigation is needed to identify different resource-sharing allocation policies in implementing corporate diversification strategies. Our fieldwork with a diversifying European utility company indicated that entering new businesses distracted managerial attention and other skilled resources away from the core business, which negatively impacted performance in the core business. This is consistent with recent empirical evidence showing that diversification negatively impacts productivity levels in the established business (Schoar 2002). At the same time, organization theory on inertia would suggest that new businesses might be starved of shared resources until organizational routines evolved to support

both businesses. More work is needed to understand these process issues in diversifying firms.

We believe the application of System Dynamics discussed in this chapter highlights a promising path forward for future research investigating variation in resource accumulation and implementation strategies. Over the last three decades, much of the scholarly strategy work has focused on strategy content; that is, identifying what strategy or strategies would provide competitive advantage in a given environmental context. A number of well-defined, "big commitment" strategic choices have attracted the attention of strategy content researchers, including mergers and acquisitions, international expansion, joint ventures, etc. However, as this study demonstrates, research is needed to explore the consequences of different resource accumulation and implementation policies on performance heterogeneity for any of these "big commitment" strategic choices. Theories of what strategies should be adopted also need corresponding theory guiding the implementation process, and such research may help explain the mixed empirical results of cross-sectional strategy studies that have focused only on strategy content issues. Variation in implementation policies across firms is unaccounted for in much of the strategy content research, and holds great promise for increasing our ability to explain performance differences among firms.

NOTES

1. In System Dynamics models, stocks or state variables are represented as rectangles and flow variables are represented as arrow-tipped pipes with a valve controlling the flow rate (Rudolph and Repenning 2002).
2. The investment rate denotes the net investment in shared resources, including the acquisition of new resources and the decay rate of existing resources.
3. In a feedback loop diagram the arrow linking any two variables, x and y, indicates that a causal relationship exists between x and y (Sastry 1997). The sign at the head of each arrow denotes the nature of the relationship: $x \xrightarrow{+} y \Rightarrow \partial y / \partial x > 0$ and $x \xrightarrow{-} y \Rightarrow \partial y / \partial x < 0$.
4. The formulation for new business customers can be considered a test input for growth in a new business. The logistic equation was chosen to represent organic growth. In general, this test input could take on any functional form, including linear, quadratic or a step function to represent an acquisition strategy.
5. Sensitivity runs are available from the author.
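For reference, the logistic growth used as the test input in note 4 is commonly written as

$\dfrac{dC}{dt} = g\,C\left(1 - \dfrac{C}{C_{\max}}\right)$

where $C$ is the stock of new business customers, $g$ the fractional growth rate and $C_{\max}$ the saturation level of the market. These symbols are generic notation for the standard logistic form, not the model's own variable names.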

BIBLIOGRAPHY

Baumol, W.J. (1962) 'On the theory of expansion of the firm', The American Economic Review, 52 (5): 1078–87.
Bettis, R.A. (1981) 'Performance differences in related and unrelated diversified firms', Strategic Management Journal, 2 (4): 379–93.


Bourgeois, L.J. (1981) 'On the measurement of organizational slack', Academy of Management Review, 6 (1): 29–39.
Burgelman, R.A. (1983) 'Corporate entrepreneurship and strategic management: insights from a process study', Management Science, 29 (12): 1349–64.
Christensen, H.K. and Montgomery, C.A. (1981) 'Corporate economic performance: diversification strategy versus market structure', Strategic Management Journal, 2: 327–43.
Coase, R.H. (1952) 'The nature of the firm', in G.J. Stigler and K.E. Boulding (eds) Readings in Price Theory, Homewood, IL: Irwin.
Cohen, M.D., March, J.G. and Olsen, J.P. (1972) 'A garbage can model of organizational choice', Administrative Science Quarterly, 17 (1): 1–25.
Cyert, R.M. and March, J.G. (1963; 2nd edn 1992) A Behavioral Theory of the Firm, Cambridge: Blackwell.
Dierickx, I. and Cool, K. (1989) 'Asset stock accumulation and sustainability of competitive advantage', Management Science, 35 (12): 1504–11.
Gary, M.S. (2002) 'Exploring the impact of organizational growth via diversification', Simulation Modelling Practice and Theory, 10: 369–86.
Gary, M.S. (2005) 'Implementation strategy and performance outcomes in related diversification', Strategic Management Journal, 26: 643–64.
Grant, R.M., Jammine, A.P. and Thomas, H. (1988) 'Diversity, diversification, and profitability among British manufacturing companies 1972–1984', Academy of Management Journal, 31 (4): 771–801.
Hill, C.W.L., Hitt, M.A. and Hoskisson, R.E. (1992) 'Cooperative versus competitive structures in related and unrelated diversified firms', Organization Science, 3 (4): 501–21.
Hill, C.W.L. and Hoskisson, R.E. (1987) 'Strategy and structure in the multiproduct firm', Academy of Management Review, 12 (2): 331–41.
Hoskisson, R.E. and Hitt, M.A. (1990) 'Antecedents and performance outcomes of diversification: a review and critique of theoretical perspectives', Journal of Management, 16 (2): 461–509.
Kazanjian, R.K. and Drazin, R. (1987) 'Implementing internal diversification: contingency factors for organizational design choices', Academy of Management Review, 12 (2): 342–54.
Lant, T.K. (1992) 'Aspiration level adaptation: an empirical exploration', Management Science, 38: 623–44.
Lant, T.K. and Mezias, S.J. (1990) 'Managing discontinuous change: a simulation study of organizational learning and entrepreneurship', Strategic Management Journal, 11: 147–79.
Levitt, B. and March, J.G. (1988) 'Organizational learning', Annual Review of Sociology, 14: 319–40.
Levitt, R.E., Thomsen, J., Christiansen, T.R., Kunz, J.C., Jin, Y. and Nass, C. (1999) 'Simulating project work processes and organizations: toward a micro-contingency theory of organizational design', Management Science, 45 (11): 1479–95.
March, J.G. and Simon, H.A. (1958) Organizations, New York: Wiley.
Markides, C.C. (1995) 'Diversification, restructuring and economic performance', Strategic Management Journal, 16 (2): 101–18.
Markides, C.C. and Williamson, P.J. (1994) 'Related diversification, core competencies and corporate performance', Strategic Management Journal, 15: 149–65.
Markides, C.C. and Williamson, P.J. (1996) 'Corporate diversification and organizational structure: a resource-based view', Academy of Management Journal, 39 (2): 340–67.
Montgomery, C.A. (1985) 'Product-market diversification and market power', Academy of Management Journal, 28 (4): 789–98.
Montgomery, D.B., Silk, A.J. and Zaragoza, C.E. (1971) 'A multiple-product sales force allocation model', Management Science, 18 (4): 3–24.

Morecroft, J.D.W. (1985) 'Rationality in the analysis of behavioral simulation models', Management Science, 31 (7): 900–16.
Nayyar, P.R. (1993) 'Performance effects of information asymmetry and economies of scope in diversified service firms', Academy of Management Journal, 36 (1): 28–57.
Nelson, R.R. and Winter, S.G. (1982) An Evolutionary Theory of Economic Change, Cambridge, MA: Harvard University Press.
Nerlove, M. and Arrow, K.J. (1962) 'Optimal advertising policy under dynamic conditions', Economica, 29: 129–42.
Oliva, R. and Sterman, J.D. (2001) 'Cutting corners and working overtime: quality erosion in the service industry', Management Science, 47 (7): 894–914.
Palich, L.E., Cardinal, L.B. and Miller, C.C. (2000) 'Curvilinearity in the diversification-performance linkage: an examination over three decades of research', Strategic Management Journal, 21: 155–74.
Panzar, J.C. and Willig, R.D. (1981) 'Economies of scope', American Economic Review, 71 (2): 268–72.
Penrose, E. (1959) The Theory of the Growth of the Firm, New York: Wiley.
Pondy, L.R. (1969) 'Effects of size, complexity, and ownership on administrative intensity', Administrative Science Quarterly, 14 (1): 47–60.
Ramanujam, V. and Varadarajan, P. (1989) 'Research on corporate diversification: a synthesis', Strategic Management Journal, 10: 523–51.
Repenning, N.P. and Sterman, J.D. (2002) 'Capability traps and self-confirming attribution errors in the dynamics of process improvement', Administrative Science Quarterly, 47 (2): 265–95.
Rubin, P.H. (1972) 'The expansion of firms', Journal of Political Economy, 81 (4): 936–49.
Rudolph, J.W. and Repenning, N.P. (2002) 'Disaster dynamics: understanding the role of quantity in organizational collapse', Administrative Science Quarterly, 47: 1–30.
Rumelt, R. (1974) Strategy, Structure and Economic Performance, Division of Research, Boston: Harvard Business School Press.
Sastry, A. (1997) 'Problems and paradoxes in a model of punctuated organizational change', Administrative Science Quarterly, 42: 237–75.
Schoar, A. (2002) 'Effects of corporate diversification on productivity', The Journal of Finance, 57 (6): 2379.
Sterman, J.D. (1989) 'Modeling managerial behavior: misperceptions of feedback in a dynamic decision experiment', Management Science, 35 (3): 321–39.
Sutherland, J.W. (1980) 'A quasi-empirical mapping of optimal scale of enterprise', Management Science, 26 (10): 963–81.
Teece, D.J. (1982) 'Towards an economic theory of the multiproduct firm', Journal of Economic Behavior and Organization, 3 (1): 39–63.
Teece, D.J., Pisano, G. and Shuen, A. (1997) 'Dynamic capabilities and strategic management', Strategic Management Journal, 18 (7): 509–33.
Thomke, S. and Kuemmerle, W. (2002) 'Asset accumulation, interdependence and technological change: evidence from pharmaceutical drug discovery', Strategic Management Journal, 23: 619–35.
Verhulst, P.F. (1977) 'A note on the law of population growth' (trans. reprint of original article from 1838), in D. Smith and N. Keyfitz, Mathematical Demography: Selected Papers, Berlin: Springer-Verlag.
Williamson, O.E. (1985) The Economic Institutions of Capitalism: firms, markets and relational contracting, New York: Free Press.
Winter, S.G. (1987) 'Knowledge and competence as strategic assets', in D. Teece, The Competitive Challenge, Cambridge, MA: Ballinger.

6

Revisiting Porter's Generic Strategies for Competitive Environments Using System Dynamics

Martin Kunc

INTRODUCTION

Porter (1998) suggests that not only do investment decisions make it hard to forecast with certainty the equilibrium of industries, but also that industries may evolve along different paths at different speeds depending on these decisions. Managerial decision making significantly affects the dynamics of firms. Management decisions taken to meet strategic goals affect not only managers' own firms but also the system of resources of competing firms, generating reactions that will influence their own resources in the future. The external environment is not completely exogenous but is in part created by managers and their decisions. Consequently, firms have to fit into patterns of resource exchanges and competitive actions with other firms in the industry, forming adaptive systems embedded in feedback processes. Porter analyzed this aspect of competition through the five forces framework (Porter 1998) and suggested generic strategies (cost leadership and differentiation) as a recipe for competing effectively in industries.

This chapter explores the effects of business policies based on Porter's generic strategies on the performance of the firm in a competitive environment. The model portrays managerial decision-making processes using the generic strategies described in Porter's (1985) competitive strategy: cost leadership and differentiation. The model formalizes managerial decision-making processes, identifying constructs and relationships existing in each generic strategy and transforming them into equations in a process similar to the research methodology employed by Sastry (1997).

A FEEDBACK VIEW OF COMPETITIVE INDUSTRIES

The concept of industry describes an environment where firms develop their business supplying similar products or services to customers. Basically, an industry is a feedback system comprised of firms and a market. On the one hand, firms provide services or products to satisfy customers' requirements. On the other hand, customers have requirements that they try to satisfy with the most convenient product at the best possible price.

Firms and customers interact over time through a process of adjustment between consumers' requirements and firms' products. While successful firms grow when an increasing number of customers accept and adopt their products because they are either different from or cheaper than competing products, less successful firms have to abandon the industry or react by improving their products or reducing the price of their existing products.

The physical structure of any business is important as it imposes operating constraints (practical rules for how resources work and combine to deliver products and services) on managers. However, the effect of operating policies (managers' decision-making processes related to the level of coordination and development of activities related to the value chain of the firm) is more relevant to the dynamic behavior of industries, because operating policies regulate the competitive behavior of firms and, through the interconnections existing in the market, of the industry. Porter's generic strategies are intended as proposals for managing operating policies in a coherent way; e.g., if a firm follows a cost leadership strategy, operating policies aimed at minimizing costs are key for this firm to achieve profitability.

In System Dynamics, we can analyze firms in two areas: managerial decision making, and operating policies to control the system of resources. Management, and managerial decision making, is viewed as the process of converting information into action. This conversion process is decision making. As Forrester (1994) notes,

[I]f management is the process of converting information into action, then management success depends primarily on what information is chosen and how the conversion is executed. The difference between a good manager and a poor manager lies at this point between information and action.

Therefore, the difference between firms' performance and, as a consequence of the feedback structure of the industry, the level of competition in an industry depends on the managerial decision-making processes. However, we cannot deny that this process is influenced by the lens that managers employ to see their firms. In this respect, Porter's generic strategies are widespread lenses used by many managers to see the positioning of their firms in a competitive environment. In Porter's words: 'Every firm operates on a set of assumptions about its own situation. These assumptions about its own situation will guide the way the firm behaves and the way it reacts to events' (1998: 58).

In System Dynamics, operating policies are normally represented as purposive adjustment of resources through goal-seeking information feedback (Sterman 2000; Morecroft 2002). It is the essence of the feedback view of the firm (Morecroft 2002).


This process of resource building is the cornerstone for the activities grouped into a value chain. Decisions stemming from operating policies lead to corrective actions intended to close observed gaps between desired and actual resources necessary to implement generic strategies. Defining and monitoring the gaps (shortages or excesses) in a firm's portfolio of resources is essentially an information-processing activity subject to the practical constraints of bounded rationality (Morecroft 1985a). While every manager has available a large number of information sources to determine an operating policy, each manager selects and uses only a small fraction of all available information that is coherent with the generic strategy selected.
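This goal-seeking adjustment of resources can be sketched as a simple anchoring-and-correction rule. The Python sketch below is a generic System Dynamics stock-adjustment formulation, not the chapter's actual equations; the names and parameter values are assumptions.

    # Generic goal-seeking resource adjustment (illustrative, assumed parameters).
    resource = 50.0           # current stock of a resource
    desired_resource = 100.0  # level required by the chosen generic strategy
    adjustment_time = 8.0     # quarters needed to close the gap
    dt = 1.0                  # simulation time step (quarters)

    for t in range(40):
        gap = desired_resource - resource       # shortage (+) or excess (-)
        net_investment = gap / adjustment_time  # corrective flow closing the gap
        resource += net_investment * dt         # the stock accumulates the flow

    print(round(resource, 1))  # approaches the desired level of 100

This first-order correction produces the gradual, goal-seeking accumulation of resources described in the text: large gaps trigger strong corrective flows, and the flow dies away as the stock approaches its target.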

Describing Managerial Decision-Making Styles Based on Porter's Generic Strategies

While the decision-making styles of managers in an industry vary and there is no clear typology, they can be grouped using Porter's (1985) generic strategies into two main styles: cost leadership and differentiation. The sources of cost leadership are varied and depend on the structure of the industry, but they are generally economies of scale or highly productive operational processes. If a firm can achieve and sustain overall cost leadership, then it will achieve above-average profits provided it can charge prices at or near the industry average (Porter 1985). However, a cost leader must also achieve parity or proximity relative to its competitors in their bases of differentiation to sustain above-average performance. Product parity means that the price discount necessary to achieve an acceptable market share will not erode the cost advantage (Porter 1985).

The second generic strategy is differentiation. Firms that employ this strategy seek to be unique along some dimensions that are widely valued by buyers (Porter 1985). Management selects some attributes considered important by potential and actual consumers and tries to position the firm to meet their needs. Firms in this position may be able to charge a premium price. Differentiation can be based on the product itself, the marketing approach or other resources and attributes valued by consumers. An above-average performer using a differentiation strategy must keep the extra costs incurred for being unique well below the price premium charged. Consequently, a firm achieving differentiation must also aim for cost parity or proximity relative to its competitors.

Table 6.1 displays the expected differences in four key issues (market size, customers' requirements, asset stocks that need to be developed and competitors' reactions) faced by firms following the two generic strategies, based on an analysis of Porter's generic strategies. These key issues are translated into differences in the managerial decision-making styles employed in the simulation model to explore firm performance.

Table 6.1 Differences in Decision-Making Styles Using Porter's (1985) Generic Strategies

What is the expected market size?
• Cost leader: the expected market size is based on extrapolations of the past market growth rate.
• Differentiation leader: market size is based on the number of consumers that the managers expect to attract with the product.

What are the requirements of potential customers?
• Cost leader: broad requirements in terms of product characteristics, but highly sensitive to price.
• Differentiation leader: the consumers are highly demanding in terms of product characteristics and less sensitive to price.

What is the set of resources necessary to satisfy customers' requirements and maintain a competitive advantage?
• Cost leader: management expects to build their competitive advantage by improving the efficiency of the existing operations. Thus, they allocate most of their investment to increasing the effectiveness of their operational resources as a means to reduce costs. Market share is a key goal for the achievement of economies of scale. However, they try to maintain close product parity with the differentiation leader.
• Differentiation leader: management believes that customers' requirements are mostly related to better products rather than lower prices. Consequently, management allocates most of the investment to the development of new product technology as a means to achieve a competitive advantage.

How will the firms react to competitors' actions?
• Cost leader: management will increase their efforts to reduce costs without increasing the gap with their competitors' product.
• Differentiation leader: management will tend to further differentiate the product from competitors if they face competitive pressures.

Model Formalization

The model formalization process follows the steps suggested in Sastry (1997). The first step in formalizing the theory was to identify constructs and relationships that provided the basis for the formal model through a textual analysis of Porter's (1985, 1998) books, and to identify and code statements into categories relevant for the simulation. The next step was to relate variables to each other through stock and flow networks that represent the performance of the organizational processes mentioned in the value chain. Then formulations reflecting managerial decision-making processes were represented. Finally, real-world observations, such as examples described in Porter's (1985, 1998) books, informed the modeling process as much as possible. The decision-making processes simulated describe decision functions as simple rules of thumb, similar to behavioral simulation models (Sterman 1987; Morecroft 1985b).


Figure 6.1 represents a simplified view of the structure of the simulated firms using both the value chain and sector map concepts. Management focuses its attention on sources of information related to the performance of the firm, such as profits or market share, to coordinate the sectors of the firm. There are four sectors that represent the main resources of the simulated firm: financial, technology, operational and market (equations can be requested from the author). Financial resources are basically accumulated profits, which are later invested in resources related to the competitive positioning of the firm. For example, the level of profits and the existence of financial resources determine the investment in technology. Technology resources are employed to increase the attractiveness of the product. Higher product attractiveness and competitive actions (promotions and advertising) lead to an increasing number of customers and revenues and increasing profits. This is a reinforcing feedback process. Increasing the number of customers leads to requests for more products, which are produced using operational resources. However, operational resources increase operating costs, which reduce the level of profits. This is a balancing feedback process.

Figure 6.1 Model sectors and the concept of value chain.


Financial Resources

The goal of the operating policy in the financial sector is to maintain the rate of operating income over time, which can be associated with a ratio between actual operating income and expected operating income. Management will allocate more financial resources to changing the configuration of the source of competitive advantage (technology or the productivity of operational resources) if operating income falls below its expected level, for example a ratio lower than one. If the rate of profits is considered satisfactory (a ratio equal to or higher than one), management will reduce the allocation of resources to product technology because they believe there is no need to change it. This behavior follows the concept of satisficing rather than maximizing, as occurs in economic models (Winter 2000). In other words, management tries to maximize the level of financial resources by finding the right level of product technology.
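A minimal sketch of this satisficing rule might look as follows. The threshold behavior matches the description above, but the function name, allocation fractions and the linear pressure term are assumptions for illustration, not calibrated values from the model.

    # Satisficing allocation of financial resources (illustrative only).
    def technology_allocation(actual_income, expected_income,
                              base_fraction=0.05, pressure_fraction=0.30):
        """Fraction of financial resources allocated to changing the source
        of competitive advantage, driven by the income ratio (assumed form)."""
        ratio = actual_income / expected_income
        if ratio >= 1.0:
            # Performance is satisfactory: keep allocation low, since
            # management sees no need to change the product technology.
            return base_fraction
        # Performance below aspiration: allocate more to technology or
        # operational efficiency, scaled by the size of the shortfall.
        return base_fraction + pressure_fraction * (1.0 - ratio)

    print(technology_allocation(80.0, 100.0))   # shortfall -> higher allocation
    print(technology_allocation(110.0, 100.0))  # satisfactory -> base allocation

The asymmetry is the point: pressure only appears when performance falls below the aspiration, which is the satisficing behavior contrasted with optimization in the text.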

Technology Resources

This set of resources is responsible for the competitive advantage and superior performance. Technology resources comprise two resources: product technology and operational efficiency. Product technology, which describes the technological level of the product portfolio, is a key resource for firms following a differentiation strategy, and operational efficiency, which indicates the level of productivity of the operational resources, is a key resource for firms following a cost leadership strategy.

Product technology represents an index of the level of the product characteristics. The product characteristics can be directly associated with the level of potential customers' requirements; for example, a product technology level of 100 is fairly close to covering all the possible customers' requirements, and, consequently, the firm may be able to attract a huge number of customers from the total available market. Moreover, a higher product technology level relative to competitors' level will attract not only potential customers but also customers from existing competitors. Product technology is related to the concept of product innovation (Chapter 8 in Porter 1998). Cost leaders will allocate few resources to changing the technology of their products because that would erode the gains obtained from investing in operating efficiency. The negative effect of inefficiency from innovations is related to the concept of disruptive innovation suggested by Sastry (1997), where too much innovation erodes the coordination of the organization, increasing operating costs. However, if there is a widening gap in technology between a differentiation-based firm and a cost leader firm, the cost leader will allocate more financial resources to promptly reduce the existing gap (Chapter 3 in Porter 1985). Management's decisions to change the level of the product technology are implemented through the allocation of financial resources to product development projects.


Management can also invest financial resources to increase the efficiency of operational resources. Operational efficiency represents the cumulative efforts of the firm to refine the actual operating technology of the existing products. Operational efficiency reflects management's efforts to reduce costs independently of the economies of scale achieved through the level of operational resources. Operational efficiency is related to the concept of process innovation (Chapter 8 in Porter 1998). A firm following a differentiation strategy will invest financial resources in operational efficiency only when its management perceives that the actual product technology level is accepted in the market (Chapter 8 in Porter 1998). On the other hand, a management team following a cost leadership strategy believes that its main competitive advantage is having the lowest cost. Consequently, management will invest most of its financial resources in increasing the efficiency of the firm's operational resources and less in the technology of the product.

Market Sector

Porter (1998: 24) says, '[B]uyers compete with the industry by forcing down prices, bargaining for higher quality or more services, and playing competitors against each other—all at the expense of industry profitability'. Porter describes the industry equilibrium as a result of the bargaining power of buyers. Additionally, Porter mentions that one of the forces driving the evolution of an industry is related to changes in demand growth (Porter 1998, Chapter 8). The following paragraphs describe the implementation in the model of these two processes: consumers' decision making based on pricing, quality and advertising, leading to different market equilibria, and changes in demand growth.

Competitive Actions and Industry Equilibrium

Management usually takes short-term competitive actions related to the process of attracting competitors' customers. The model represents two short-term actions: price adjustments and advertising expenditure. The pricing policy is based on "cost plus margin". Price adjustments, as a result of the non-attainment of the market size goal, are implemented through the adjustment of a gross margin. While a reduction of the gross margin affects profitability in the short term, the model considers some benefits from this action: a reduction in price will attract more customers and improve the operating income in the long term (if the discount does not exceed the benefit of higher unit sales) as the firm achieves economies of scale, which will be used to invest in cost reduction through operational efficiency. Advertising expenditure is usually used to attract actual customers from competitors or to increase brand loyalty of actual customers.

In the model, advertising is considered only to pull customers towards the company, but it does not change the long-term loyalty of customers. Advertising is a short-term action that improves the long-term perspective not only of the firm but also of the industry, since it helps to draw potential customers to the industry, expanding the total market.
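The cost-plus pricing policy with goal-driven margin adjustment can be sketched as follows. The adjustment speed, bounds and linear response below are assumptions chosen for illustration rather than the chapter's calibrated formulation.

    # Cost-plus pricing with goal-driven margin adjustment (illustrative).
    def adjust_margin(margin, actual_share, target_share,
                      sensitivity=0.5, floor=0.05, cap=1.0):
        """Gross margin is cut when market share falls short of the goal and
        recovers quickly once the goal is met; bounds are assumed values."""
        shortfall = (target_share - actual_share) / target_share
        new_margin = margin * (1.0 - sensitivity * shortfall)
        return min(cap, max(floor, new_margin))

    unit_cost = 1.20
    margin = 0.40
    margin = adjust_margin(margin, actual_share=0.30, target_share=0.50)
    price = unit_cost * (1.0 + margin)  # the "cost plus margin" pricing rule
    print(round(margin, 3), round(price, 2))

Note that when the actual share exceeds the goal the shortfall turns negative and the margin rises, mirroring the differentiation leader's tendency to raise price quickly once its market size expectations are fulfilled.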

Factors Affecting Demand Growth: Product Attractiveness and Market Structure

The simplest model of the evolution of markets over time is the Bass diffusion model (Bass 1969). The Bass diffusion model has been extensively used in System Dynamics to describe the diffusion of innovations (Sterman 2000). I include two modifications to the Bass diffusion model in this simulated market.

First, the stock of potential customers, fixed in the traditional Bass model (Sterman 2000), may change over time as product functionality changes, attracting other segments of people who have not yet been interested in the product. The function represents the process of attracting different segments of the total population as product technology evolves. When the industry improves its product technology (or product functionality), the proportion of the total population interested increases (and the fractional rate of attraction per time period increases). However, the rate of growth of the industry diminishes over time, as few members of the population remain without using the products of the industry.

Second, I include behavioral variables to represent a basic consumer decision-making process at the adoption of a new product. A weighted value is obtained for each alternative (cost leader or differentiation leader) as a function of the relative weights that heterogeneous customers (price-sensitive or product functionality-sensitive adopters) place on each dimension (price, product functionality or advertising) and the relative strength of each alternative in these dimensions with respect to the existing alternatives in the market (e.g., the cost leader's product functionality compared to average product functionality). The components are then combined into an overall value for that alternative in terms of the share of potential customers that adopt any of the existing alternatives in the market (the cost leader alternative or the differentiation leader alternative).

To summarize, consumers employ information about price, product functionality and advertising to define the best alternative to adopt, as first-time buyers, and, later on, to replace the actual product as repetitive buyers. Customers may change the product as competitors offer better products for the same price or a lower price for the same product technology. The movement of customers between firms in the industry is regulated by a perception of the relative position of each alternative (cost leader or differentiation leader) in each dimension (price or product technology).


For example, customers may perceive a situation where the price of the differentiation leader is 28 per cent higher than the cost leader's as "natural", but only if the differentiation leader's product is at the same time 28 per cent better than the product of the cost leader. Whenever firms in the industry change these perceived relationships, customers will respond by switching to the firm that offers the best combination of price and product technology. If competitors do not react promptly, they may find themselves out of the market. For simplicity, I do not model the effect of learning processes at the consumer level that may change the relationships between alternatives in a certain dimension.
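The weighted-value adoption choice can be sketched as a share-of-attractiveness rule, a common System Dynamics formulation for splitting adopters between alternatives. The weights and reference values below are assumptions; the chapter's own functional form and parameters are not reproduced here.

    # Share-of-attractiveness choice between two alternatives (illustrative).
    def attractiveness(price, functionality, advertising,
                       ref_price, ref_functionality, ref_advertising,
                       w_price, w_func, w_adv):
        """Weighted value of an alternative relative to the market average:
        lower relative price and higher relative functionality or
        advertising raise the alternative's attractiveness."""
        return (w_price * (ref_price / price)
                + w_func * (functionality / ref_functionality)
                + w_adv * (advertising / ref_advertising))

    # Price-sensitive segment weights (assumed): price dominates the choice.
    a_cost = attractiveness(1.0, 50, 10, 1.25, 60, 15, 0.6, 0.3, 0.1)
    a_diff = attractiveness(1.5, 70, 20, 1.25, 60, 15, 0.6, 0.3, 0.1)
    share_cost = a_cost / (a_cost + a_diff)  # fraction of adopters choosing the cost leader
    print(round(share_cost, 2))

A functionality-sensitive segment would reuse the same function with different weights (e.g., w_func dominant), so heterogeneous customers are represented simply by segment-specific weight vectors.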

Operational Resources

The concept of operational resources captures the physical and human asset stocks that are necessary to provide the products requested by customers. Firms start with an initial endowment of resources that reflects their initial investments. The development of these resources depends on the expectations that managers have about the evolution of the market. In the model, if demand is higher than the level of operational resources, there will not be backlogs or any effect of product delivery on consumer behavior, unlike other System Dynamics models, e.g., the market growth model (Sterman 2000). Consequently, the potential sales revenue (the units sold to actual customers, repetitive buyers, and the units sold to new customers, first-time buyers) is limited by the availability of products (the level of units of operational resources multiplied by the productivity per unit of operational resource). There are a series of additional assumptions related to operational resources:

• The level of operational resources determines the basic cost per unit of product, which decreases under the effect of economies of scale.
• Operational efficiency determines the productivity per unit of operational resource. Higher productivity reduces the cost per unit of product in addition to the economies of scale obtained from the level of the operational resource.
• Operational resources are subject to a normal depreciation rate; however, when the firm changes its technology, the depreciation rate increases due to technological obsolescence.
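The capacity constraint and the scale-economies assumption in the list above can be sketched as follows. The power-law exponent and the reference values are assumptions for illustration; the model's actual cost curves are not shown in the chapter.

    # Capacity-constrained sales and unit cost with scale economies (illustrative).
    def shipments(demand, operational_resources, productivity):
        # Sales are capped by available capacity; no backlog is carried,
        # consistent with the model's no-backlog assumption.
        capacity = operational_resources * productivity
        return min(demand, capacity)

    def unit_cost(base_cost, operational_resources, ref_resources,
                  scale_exponent=-0.2):
        # Assumed power-law: larger resource levels lower the unit cost
        # (economies of scale); higher operational efficiency would lower
        # it further through the productivity term above.
        return base_cost * (operational_resources / ref_resources) ** scale_exponent

    resources = 120.0
    print(shipments(demand=1500.0, operational_resources=resources, productivity=10.0))
    print(round(unit_cost(2.0, resources, ref_resources=100.0), 3))

Here excess demand is simply lost rather than backlogged, and a 20 per cent larger resource base yields roughly a 4 per cent lower unit cost under the assumed exponent.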

Summary of the Model

The initial set of resources, e.g., technology, financial and operational resources, is not similar among the participants in the industry. There are two reasons for this assumption. First, I assumed that strategically heterogeneous resources drive the initial selection of a generic strategy. Second, I want to explore in these simulations the result of dissimilar managerial decision-making styles in a competitive industry. Table 6.2 presents a summary of the main decisions in each sector existing in the model of the firm.

Table 6.2 Main Decision-Making Processes Existing in Each Model of the Firm

Financial resources
• Cost-oriented firm: the objective is to maintain a stable operating income. If the actual profit rate is lower than the past profit rate, more financial resources will be allocated to the development of technology or operational efficiency in order to increase revenues.
• Differentiation-oriented firm: the objective is to maintain a stable operating income. The evolution of profits determines the intensity of the resources allocated to technology development.

Technology resources
• Cost-oriented firm: the configuration of technology resources is principally oriented to reducing costs by increasing operating efficiency. However, if there is an important gap with the competitor's product, resources allocated to technology are mostly used to reduce the gap with the competitor's technology.
• Differentiation-oriented firm: the source of competitive advantage is believed to be the development of new products. Consequently, resources are mostly allocated to developing the product technology.

Operational resources
• Cost-oriented firm: the expected size of the market, which is adjusted by an extrapolation of the actual market growth rate, determines the expansion rate of this asset stock.
• Differentiation-oriented firm: operational resources are developed over time based on the management's expected size of the market. Managers have defined a priori a certain market size.

Market sector / competitive actions
• Cost-oriented firm: price aims to be the lowest in the market by reducing the cost of goods sold and, later on, the gross margin. Advertising intensity: lower budget than the differentiation leader.
• Differentiation-oriented firm: price is higher than the market average, but the firm will tend to cut its gross margin aggressively if the expected market size is not achieved, and will increase price very fast when expectations about market size are fulfilled. Advertising intensity: highly intensive.

Results of the Simulation

The model consists of a duopoly. The model was calibrated so as to have an initial run with the industry in equilibrium given a certain set of initial conditions (see Table 6.3) and no changes in the conditions over time. The rest of the chapter presents different simulations to show the impact of Porter's generic strategies on firm performance under diverse industry settings.


Table 6.3 Initial Conditions for Each of the Two Firms

Scenario 1. The Industry in Equilibrium Considering a Similar Distribution of Buyers Sensitive to Price and Functionality

The industry has two firms; one follows the cost leadership strategy and the other the differentiation leadership strategy. The total population is equally divided between people willing to buy products according to their functionality given a certain price, which is not necessarily the lowest (functionality-sensitive adopters), and those willing to buy products at the lowest price given a certain level of product functionality (price-sensitive adopters). In the first simulation, I left the initial conditions (price and product functionality) fixed throughout the simulation. Figure 6.2 presents the evolution of the market during the simulation. Lines 1 and 2 represent the evolution of the number of customers captured by each firm. Since the proportion of people sensitive to price and to product functionality is equal, both leaders captured the same number of customers, achieving equilibrium. Therefore, the model was started in equilibrium in aspects such as market share and profits.

Figure 6.2 Market evolution under the initial conditions—industry equilibrium.

Figure 6.3 Cost leader financial resources (line 1) and price (line 3) and differentiation leader financial resources (line 2) and price (line 4).

Figure 6.3 presents the evolution of price and financial resources for the two firms during the period of the simulation. The price of the differentiation leader was higher than the cost leader's price, but the operating income of both companies was equal because the premium price for a better product was offset by costs higher than the cost leader's.

Scenario 2. Industry with a Cost Leader and a Differentiation Leader

Now, I simulated the industry with a cost leader and a differentiation leader. They followed operating policies based on the generic strategies' assumptions in order to achieve their goals. The market reaches equilibrium after quarter 180 without any significant difference between the market shares of the two firms. The number of customers who chose to switch based on product functionality is similar to the number who switched based on price. Consequently, the net movement of customers between firms is close to zero. I will analyze the simulation from quarter 1 to quarter 90 to identify the dynamics existing between the differentiation and cost firms. Figure 6.4 shows the evolution of price and financial resources for both leaders. The differentiation leader reduced its price aggressively at the beginning of the industry, trying to achieve its expected market size. Most of the initial adopters were attracted by the differentiation leader's lower price (although still higher than the cost leader's) and better product.

Figure 6.4 Cost leader financial resources (line 1) and price (line 3) and differentiation leader financial resources (line 2) and price (line 4).

When the differentiation leader raised its price, trying to obtain a premium price for its better product as it achieved the expected number of customers, the cost leader reacted by reducing its prices due to its previously low market share. A similar pattern occurred again before both firms started decreasing their prices as the operational efficiency process set in and they tried to achieve a higher market share.

The results show that the level of financial resources was only a third of the level obtained in the previous simulation. This result clearly shows the adverse effect of competition on the financial performance of both firms. The intensity of competition led to the development of product technology by the differentiated firm so as to command a premium price. However, the decreasing trend in price led by the cost leader meant that the differentiated firm also had to decrease its price to maintain its competitiveness. Therefore, the industry entered into hypercompetition.

Bogner and Barr (2000) suggest that hypercompetitive industries are characterized by rapid changes in technology and price and ambiguous consumer demands, which implies that firms cannot earn above-average profits for a meaningful period of time based on a single established innovation or advantage in these environments. Bogner and Barr (2000) also add that the most significant competitive threat is the steady pace of competence-destroying change that occurs and the limited ability of managers to foresee the nature of these changes. Therefore, firms like the differentiation leader concentrate on both doing what they already know (increasing their technology) and matching competitors' movements (reducing prices). The decrease in profitability occurs naturally even in a simulation model based on behavioral and endogenously generated decision-making processes.

In conclusion, the differentiation leader's aggressive price reduction helped it to achieve a higher market share at the beginning of the industry.

However, management's propensity to raise price eroded this initial advantage: as the cost leader reduced its price, customers' long-term perception of the price relationship between the alternatives drove them to the cost leader's product. The cost leader's continuous price reduction exploited the huge difference in prices to attract customers, which led to higher profits, as Figure 6.4 shows.

Can a differentiation leader improve its results? One of the key decisions simulated in this model is the rate of investment in product technology development. I simulated different levels of investment in product technology as a percentage of the operating income and the resulting level in terms of customers. With no investment in product technology, the results present important oscillations because both firms compete only on price: a price decrease generates a decrease in the price of the competitor in the following period, reducing the stock of customers to its previous level. A high level of investment in product technology, such as 50 per cent of the operating income, generates a superior performance over time in terms of customers. Even though moderate investment rates in technology development help to achieve higher cumulative financial performance, as Figure 6.5 shows, there seem to be two effects. On the one hand, there are trade-offs between the level of investment, which determines the speed at which market dominance is achieved, and the financial performance of the firm; the effectiveness of the investment in technology is decreasing. On the other hand, non-linear effects coupled with feedback processes imply that small changes in the investment rate can generate huge increases in financial performance, as the data between 0 and 0.20 in Figure 6.5 show.

Figure 6.5 Relationship between technology investment rates and financial performance obtained for a differentiation leader.


Scenario 3. Industry with Two Cost Leaders

The industry now has two cost leaders. The market did not split equally: one firm obtained a higher share even though both firms followed a similar strategy. Figure 6.6 shows the main difference between the two cost leaders. A small change in price after the market reached the peak of its introduction stage was responsible for the different paths of similar competitors. Unfortunately, the main strategy of both firms implies that product technology declined, as they did not invest in it (lines 3 and 4 in Figure 6.6). Therefore, industries with firms engaged in cost leadership strategies will tend to focus on process innovation rather than product innovation in order to drive their prices down and achieve a higher market share.

Another interesting observation is related to prices. In this industry, the price level is relatively higher than in the previous simulation with two firms following different strategies. Porter suggests that 'the more diverse or asymmetrical are competitors' goal and perspectives … the harder it will be to properly interpret each other's moves and sustain a cooperative outcome' (1998: 90). Maybe similar strategies pay off more than different ones?

Figure 6.6 Price and product technology evolution with two cost leaders.

Scenario 4. Industry with Two Differentiation Leaders

The simulated industry has two differentiation leaders. The market did not split equally: one firm obtained a higher share even though both firms have similar decision-making processes and initial resources. The second differentiation leader tried to exploit the market by raising its price as it achieved a higher market share.

However, this increase in price was not supported by an improvement in the performance of the product. As the second differentiation leader could not achieve its expected market size, it started decreasing its price, as Figure 6.7 displays. Porter (1998) suggested that goals are important drivers of the competitive behavior of firms. When the actual performance of the firm is different from its goal, the firm will react using short-term competitive actions like price and advertising. This reduction in price deprived one of the differentiation firms of the financial resources needed to sustain product technology development, which led to its demise.

Figure 6.7 Price evolution with two differentiation leaders.

Figure 6.8 Technology evolution with two differentiation leaders.


Figure 6.8 presents the evolution of product technology in an industry with two differentiated firms. We can appreciate that product technology increases over time, which is the opposite of the trend in an industry with two cost leaders. Only the second differentiation leader, which had negative operating profits, did not improve its product technology. Interestingly, the firm that could provide higher product functionality (see Figure 6.8) started increasing its price, which helped to finance more product development. This is an example of the reinforcing process existing in highly successful firms in many markets, e.g., Intel against its competitors in the semiconductor industry.

Key Findings Related to Porter's Generic Strategies

While I cannot suggest that managers' mental models are cost or differentiation oriented, the modeling and simulation exercise shows how boundedly rational managerial decision-making processes may generate dysfunctional performance because their goal-setting process and competitive recipe did not consider the complexity of the feedback structures. The simulations also depict how the interconnection between functional areas of a company may influence its performance over time, as it generates competitors' reactions, which erode the gains obtained by other areas. Interestingly, the simulations show that there are no clearly good or bad competitors, only different competitors, even when they followed similar strategies. In this sense, non-linearities, like the relationship between investment rates and financial performance, preclude the possibility of inferring a unique recipe for success.

In terms of sensitivity analysis, the dynamics of the industry did not change significantly when I tested different proportions of customers sensitive to price or product functionality, but I obtained quite different results when I changed the market share goals of the firms. Differences in expectations can lead to higher competitive intensity, eroding profitability. This result supports a basic tenet of Porter's analysis that industry structure determines firm profitability. In this case, industry structure is completely endogenous to firms, as their strategies affect competitors. Therefore, performance may be strongly determined by competitors' moves even in a simple gaming simulation (Kunc and Morecroft 2007a).

CONCLUSION

Managers face very complex investment decisions due to uncertainties about customer acceptance, market size, technology and the actions of competitors, all embedded in a dynamically complex feedback system. In addition, the complexity of a system of interrelated stocks and the information feedback structure of the industry makes the managerial decision-making process one of the most important variables, if not the most important, in shaping the evolution of an industry.

This chapter attempts to analyze the influence of managerial decision making on the evolution of industries’ equilibrium, and more specifically on the dynamic behavior of three key components of any industry: the financial performance of firms, the evolution of the market and technology development. Porter (1991) suggests that firms can achieve superior competitive positions thanks to two factors: established conditions and pure managerial choices. Established conditions may be a good factor when the analysis concerns established industries; however, in some circumstances established conditions can be overcome by managerial actions, such as the pursuit of different goals. Pure managerial choices, such as the definition of a certain market share, provide a guide for assembling the particular resources required to carry out the strategy. However, managerial choices can also generate undesired consequences because of the complexity of the environment and the cognitive limitations of managers, especially when there are different conceptualizations of the set of resources and the competitive actions. System Dynamics modeling and simulation have a long tradition in corporate strategic development, and this chapter provides a glimpse of the method. While this model is purely conceptual, and large because of the complexity of interpreting Porter’s generic strategies, there are strategic models that serve other purposes. Kunc and Morecroft (2007b) present two types of strategic models: a ‘back of the envelope model’, a small and insightful model that addresses a single dynamic challenge where timing to market is very important, and ‘a larger model representing not only the market but also interacting functional areas to address more complex problems’ like the one in this chapter. There are no perfect models of an organization that will reveal the future outcome of a strategy with certainty, since modeling is fundamentally the art and science of interpreting complexity (Kunc and Morecroft 2007b: 188), even for a widespread recipe like Porter’s generic strategies.

BIBLIOGRAPHY

Bass, F. (1969) ‘A new product growth for model consumer durables’, Management Science, 15 (5): 215–27.
Bogner, W.C. and Barr, P.S. (2000) ‘Making sense in hypercompetitive environments: a cognitive explanation for the persistence of high velocity competition’, Organization Science, 11 (2): 212–26.
Forrester, J.W. (1994) ‘Policies, decisions and information sources for modeling’, in J.D. Sterman and J.D. Morecroft (eds) Modeling for Learning Organizations, New York: Productivity Press.
Kunc, M. and Morecroft, J. (2007a) ‘Competitive dynamics and gaming simulation: lessons from a fishing industry simulator’, Journal of the Operational Research Society, 58: 1146–55.
Kunc, M. and Morecroft, J. (2007b) ‘System dynamics modeling for strategic development’, in R. Dyson and F. O’Brien (eds) Supporting Strategy: frameworks, methods and models, Chichester, England: Wiley.

Morecroft, J. (1985a) ‘The feedback view of business policy and strategy’, System Dynamics Review, 1 (1): 4–19.
Morecroft, J. (1985b) ‘Rationality in the analysis of behavioral simulation models’, Management Science, 31 (7): 900–16.
Morecroft, J. (2002) ‘Resource management under dynamic complexity’, in J.D. Morecroft, A. Heene and R. Sanchez (eds) Systems Perspectives on Resources, Capabilities, and Management Processes, Oxford: Pergamon.
Porter, M.E. (1985) Competitive Advantage, New York: Free Press.
Porter, M.E. (1991) ‘Towards a dynamic theory of strategy’, Strategic Management Journal, 12: 95–117.
Porter, M.E. (1998) Competitive Strategy: techniques for analyzing industries and competitors, New York: Free Press.
Sastry, A.M. (1997) ‘Problems and paradoxes in a model of punctuated organizational change’, Administrative Science Quarterly, 42: 237–76.
Sterman, J.D. (1987) ‘Testing behavioral simulation models by direct experimentation’, Management Science, 33 (12): 1572–92.
Sterman, J.D. (2000) Business Dynamics: systems thinking and modeling for a complex world, New York: Irwin/McGraw-Hill.
Winter, S.G. (2000) ‘The satisficing principle in capability learning’, Strategic Management Journal, 21: 981–96.

7

Rivalry and Learning among Clustered and Isolated Firms

Cristina Boari, Guido Fioretti, and Vincenza Odorici¹

INTRODUCTION

Knowledge has become a crucial asset in modern production systems, and its creation has become a key process for sustaining or increasing competitiveness. The ensuing shift toward a knowledge-based economy has amplified research interest in the geographical clustering of firms, for geographical proximity is supposed to ease inter-organizational learning. Indeed, there is substantial empirical evidence that firms located in geographical clusters are more likely to learn and innovate than isolated firms (Audretsch and Feldman 1996; Baptista and Swann 1998; Baptista 2000; Wennberg and Lindqvist 2008). However, this renewed attention to the subject of geographical proximity highlights how far we are from a clear understanding of its influence on inter-organizational learning and innovation (Boschma 2005; Torre and Gilly 2000). In general, geographical proximity per se is not considered a sufficient condition for learning to take place (Boschma 2005: 62), though it is clearly able to strengthen other factors that facilitate learning processes (Boschma 2005; Boari et al. 2004; Breschi and Lissoni 2005; Greve 2005). Many scholars starting from different perspectives agree that all concurring factors should be related to one another in order to construct a theory of clustering processes in which learning has a key role (Torre and Rallet 2005; Knoben and Oerlemans 2006; Malmberg and Maskell 2002: 429).

This chapter aims to make a contribution by investigating the relationships between geographical proximity and rivalry with respect to inter-organizational learning and knowledge creation. This is quite unusual in the literature, for most theoretical developments and empirical tests have focused on inter-firm cooperation, whereas far less attention has been paid to the interplay of geographical proximity, rivalry and learning processes. This orientation is quite surprising, for rivalry is at the very heart of the concept of a geographical cluster as a spatially concentrated group of firms that operate in the same industry. Indeed, claims that ‘knowledge in clusters is created through increased competition and intensified rivalry’ (Malmberg and Power 2005: 412) are widely shared.


In our contribution, we wish to explore the relationships between rivalry and geographical proximity at the level of contacts between individual firms. In particular, we wish to highlight the influence of geographical proximity on the identification of rivals, on the comparison of their knowledge with one’s own and on the consequent elaboration of a strategy. Our firms are assumed to be sufficiently small to be led by a single decision maker. Thus, all concerns regarding individual bounded rationality apply straightforwardly to organizational decision making. In order to reproduce the interactions between firms, we made use of an agent-based model where the strategic choices of rival firms are derived from general assumptions on competitive behavior and learning processes. The aim of the model is to investigate the co-evolution of firms’ knowledge, strategies and performances.

The rest of this chapter is structured as follows. The second section provides the theoretical and conceptual framework of our work. The third section explains the elements of the model. The fourth section illustrates the experiments and their results. The fifth section concludes.

THEORETICAL FRAMEWORK

According to Sorenson and Baum (2003), the last few years have witnessed a rapid rise of interest in the topics of place and space in the social sciences. Economists, sociologists and strategy scholars have become particularly interested in studying the implications of the spatial distribution of firms for economic growth, as well as its consequences for knowledge production and diffusion. In general, their assumption is that a critical mass of co-localized firms can promote knowledge production and circulation (R. Cowan et al. 2004). In particular, economic geographers have pointed out a need to understand the relationship between geographic proximity and the processes of localized learning and innovation, a relationship that has been overlooked in the economic conceptualization of knowledge as an externality that spreads pervasively within a spatially bounded area (Giuliani 2007) and can be easily reproduced (R. Cowan et al. 2004). A reconsideration of the nature of knowledge, and of the problems connected to its reproduction and diffusion, has increased the concern with other, non-spatial dimensions of proximity relevant to promoting knowledge production and circulation (Boschma 2005; Breschi and Lissoni 2005; Knoben and Oerlemans 2006; Greve 2005). While geographic proximity is the least ambiguous concept involved (Knoben and Oerlemans 2006), its explanatory power has been reduced by the possibility that organizational and relational proximities surrogate its effects (Gallaud and Torre 2005; Torre and Rallet 2005). These different dimensions of proximity should be better specified and related to one another (Boschma 2005: 62; Greve 2005).

Contrary to economics, the strategic perspective has rarely considered geographical proximity per se as a factor enabling learning processes. Rather, it has considered geographical proximity as a dimension promoting other mechanisms, such as cooperation and rivalry, that may facilitate learning processes. These mechanisms are at the very heart of the concept of a geographical cluster as a spatially concentrated group of firms that compete in the same or related industries and are connected through a set of vertical and horizontal relationships (Porter 1990, 1998). Although this general framework addresses both cooperation and competition, researchers have mainly focused their attention on inter-firm cooperation induced by geographical proximity—see Knoben and Oerlemans (2006) for an extensive review—and its consequences for learning processes (Dyer and Nobeoka 2000; Doz 1996; Inkpen 1998; Inkpen and Crossan 1995; Kale et al. 2000; Khanna et al. 1998; Powell 1998; Simonin 1999). Far less attention has been paid to the impact of geographical proximity on rivalry and competition, as well as to their combined consequences for organizational learning and innovation. The only exceptions—which, anyway, do not address the issue of geographical proximity—are the studies on inter-organizational collaborations among rivals and learning processes (Dussauge et al. 2000). These considerations led us to focus on rivalry. More specifically, the ensuing subsections deal first with the relationship between geographical proximity and rivalry and, subsequently, with the relationship between rivalry and learning.

Geographical Proximity and the Identification of Rivals

On the relationship between geographical proximity and rivalry, scholars have expressed two opposite views. On the one hand, long-term observers of industrial clusters have noted that clustered firms exhibit more competition than non-clustered firms (Becattini 1990; Dei Ottati 1994; Enright 1991). In fact, according to the theory of industrial organization, rivalry involves a large number of local firms committed to a fight of all against all (Piore and Sabel 1984). Allegedly, this contributes to the competitive advantage of a geographical area and of the firms clustered in it (Porter 1990, 1998; Porter et al. 2000). On the other hand, researchers from the resource-based view have claimed that geographical proximity allows an extreme division of labor within the cluster and, consequently, firms’ specialization. Thus, this reasoning suggests that rivalry is limited to a few competitors (Lazerson and Lorenzoni 1999). Unfortunately, both interpretations lack empirical verification.

A further source of confusion is the fact that too many researchers on geographical clusters have taken rivalry and competition as synonyms. In reality, since the early days of economic thinking the term competition has been used to identify firms that depend on the same resources (Baum and

Korn 1996: 225). On the contrary, rivalry has been interpreted as a conscious struggle by each individual firm to establish its own supremacy in a specific market (Scherer and Ross 1990). Thus, rivalry and competition do not necessarily coincide. Competition has been neglected because it is an “under-socialized” phenomenon occurring among actors that are anonymous to each other (Lomi and Larsen 1996: 1293). Competition would be determined by market forces not subject to the conscious control of individual firms (Baum and Korn 1996: 225). Consequently, it would not be influenced by geographical proximity (Torre and Gilly 2000). However, rivalry does not deserve the same treatment. Albeit of the same relational nature (Baum and Korn 1999; Korn and Baum 1999) as market interactions between dyads of firms (Chen and MacMillan 1992; Chen 1996: 100), rivalry depends on firm-specific competitive conditions (Baum and Korn 1996). Of the two separate approaches to the study of rivalry, the rational-economic and the cognitive-managerial (Baldwin and Bengtsson 2004; Chen 1996; D. Miller and Chen 1996; Farjoun and Lai 1997), it is the latter that has contributed to the exploration of the role of geographical proximity as an explicit and implicit criterion for “market construction”. According to Porac and Rosa (1996: 372), ‘Defining rivals is not so much a matter of overt behavior as it is one of managerial attention and discrimination’. And according to Porac et al. (1995), ‘Rivalry occurs when one firm orients toward another and considers the actions and characteristics of the other in business decisions, with the goal of achieving a commercial advantage over the other’. Consequently, rivalry implies mutual recognition and occurs only between paired organizations that are each identifiable by the other (Lomi and Larsen 1996: 1293). In rivalry, but not in competition, cognitive processes matter. While competitors may be regarded as a nebulous collective actor, rivals must be identified and comparisons with each of them must be made.

Cognitive processes make rivalry a localized phenomenon. In fact, several authors (Baum and Haveman 1997; Baum and Mezias 1992; Gripsrud and Gronhaug 1985; Lant and Baum 1995; Porac et al. 1995) claim that firms are most likely to identify neighboring competitors as rivals. A quite common explanation is the observability argument (Cyert and March 1963), claiming that geographically proximate firms are most likely to be noticed and observed because proximity increases the availability of information and provides an incentive to attend to it (Porac et al. 1995). However, Boari et al. (2003) did not find such a simple relation between rivalry and geographical proximity. These authors showed that, in an Italian cluster of producers of packaging machines, rivals were not necessarily selected among the competitors within the cluster. On the contrary, most rivals were identified among firms located outside the cluster. However, they also found that whenever firms did not cite any local rival, the total

number of rivals they named was consistently smaller. Thus, their research suggested a more complex relationship between geographical proximity and the identification of rivals. Boari et al. (2004) advanced the idea that sharing geographical space with rivals may help to extend managerial representations, spreading entrepreneurs’ monitoring attention over a larger number of rivals. This can be readily explained if one accepts that geographical proximity eases the consideration of rivals, and that entrepreneurs are boundedly rational decision makers. Then, their fixed amount of cognitive resources can be employed to attend to either a large number of geographically proximate rivals, or a small number of geographically distant rivals, or any combination of both.

Geographical Proximity and Learning Processes

The relationship between rivalry and learning has been neglected by the majority of the literature on inter-organizational learning (Ingram 2002; Kim and Miner 2007). In fact, in the few studies on the impact of rivalry on learning, rivals have been aggregated (Ingram and Baum 1997; Aharonson et al. 2007), whereas dyadic relationships should be considered (Darr and Kurtzberg 2000). However, the studies on inter-organizational learning and, before them, those on vicarious learning—i.e., learning induced by others’ experiences (Bandura 1977; Manz and Sims 1981; Gioia and Manz 1985)—are indirect references to rivalry. A notable finding of these studies is that when learning is stimulated by the experiences of others, similarity is an orienting principle in choosing from whom to learn (Darr and Kurtzberg 2000). In fact, similarity reduces information uncertainty (Farjoun and Lai 1997), creating a context of understanding. Since rivals are similar, their experiences are naturally salient (Ingram 2002). In particular, strategic similarities such as market overlap and product commonality are useful to identify the competitive arena and to influence information flows and learning processes (Porac et al. 1989). Similarity in strategy is expected to have its greatest impact on knowledge transfer (Darr and Kurtzberg 2000), at least because it is the main criterion for identifying a set of comparable firms that offer experiences useful to define one’s own behavior and role (White 1981; White and Eccles 1987).

Cognitive distance is yet another dimension of similarity, and it is crucial for identifying the rivals to imitate. Cognitive distance measures how differently two actors perceive, interpret and evaluate the world (Nooteboom 1992, 1999). The notable feature of cognitive distance is that it must be neither too high nor too low for learning to take place. In fact, too high a cognitive distance means that the two actors have such different mental categories that each of them is unable to understand what the other is doing. At the other extreme, too low a cognitive distance means that the two actors are so similar that they have nothing to learn from each other.


The attention paid by many scholars to the concept of similarity implicitly concedes that, through monitoring and comparison, rivalry influences learning processes (Malmberg and Maskell 2006). However, some scholars have expressed doubts about the quality of what can be learned from rivals. First of all, learning from the experience of rivals may be less important than learning by direct search and experimentation. Moreover, learning by monitoring and comparing (as in rivalry) is considered to contribute less valuable knowledge than learning by interacting (as in collaboration) (Lane and Lubatkin 1998). In fact, establishing comparability through the sharing of strategic and cognitive repertoires is supposed to give access only to codified knowledge, whereas interacting with other organizations may allow one to understand the more tacit components of knowledge.

Geographical proximity is supposed to ease learning. Boari et al. (2003) suggest that the depth of the comparison with rivals increases with the geographical proximity of rivals. Geographical proximity could increase the variety that firms perceive in the environment (Nooteboom 2006) and enlarge the number of strategic aspects that firms take into consideration (Bogner and Thomas 1993). In fact, when firms observe distant rivals the complexity of their cognitive representations gets lost (Morgan 2004), both because distance weakens the collection of information and its interpretation (Ghoshal and Kim 1986) and because it decreases the speed of any response (Yu and Cannella 2007). However, some authors suggest that geographical proximity may have a negative side effect: if learning is limited to proximate rivals, myopia is likely to ensue (Levitt and March 1988; Levinthal and March 1993).

THE MODEL

We constructed a model of competitive interactions between clustered firms that enlarge or shrink their knowledge while undertaking strategic actions with respect to their rivals. This section illustrates the building blocks of our model and, in its final part, how they are connected to one another.

The Knowledge of Firms

We assumed that knowledge articulates into knowledge fields. Each knowledge field is a combination of a product and a market. For instance, if a firm produces one product A for two markets 1 and 2, this knowledge is expressed by two knowledge fields: one for product A and market 1, the other for product A and market 2. Figure 7.1 illustrates knowledge fields as parallelepipeds composed of a product and a market.


Figure 7.1 A firm’s knowledge is contained in knowledge fields, represented by parallelepipeds. Each knowledge field spans a product and a market. The height of the parallelepiped represents the depth of knowledge in a specific field.

The number of knowledge fields owned by a firm is not constant over time. In fact, firms can start to operate in a new field, or they can leave a field if their managers deem that it is no longer worth pursuing. However, since we are modeling small firms with the limited managerial attention implied by human bounded rationality (Simon 1947), we assume the existence of a threshold on the maximum number of knowledge fields that a firm can manage.

Knowledge fields are characterized by a depth. The depth of a knowledge field owned by a firm represents how good the firm is in that field. In Figure 7.1, depth is represented by the heights of the parallelepipeds. The depth of knowledge decays with time or, conversely, is increased by efforts to develop in-house knowledge or by the imitation of rivals. Our model reconstructs the efforts to create, imitate and deepen knowledge fields against a natural tendency of knowledge to vanish with time.

The existence of a particular knowledge field, as well as its similarity to other knowledge fields, is common knowledge (Malmberg and Maskell 2002: 439). This means that all firms know that certain products exist and that they are sold in certain markets. However, only the firm that owns a particular knowledge field knows its depth exactly. The other firms know only a fraction of this depth, depending on their geographical proximity. The farther away they are, the less they know about how a certain product is actually made and sold in a certain market (Bogner and Thomas 1993; Boari et al. 2003). We assume that the observable depth decreases linearly from its full value, attained at maximum geographical proximity, down to zero for two firms that are as far from one another as the model allows.
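These mechanics can be sketched in a few lines of Python. The sketch below is our own illustrative reading of the assumptions above, not the authors’ original code: the names KnowledgeField, DECAY_RATE and MAX_DISTANCE are hypothetical, the 5 per cent decay rate anticipates the parameter choice discussed in ‘The Choice of Parameters’, and the maximum distance anticipates the 100 × 100 torus described in the Initialization subsection.

from dataclasses import dataclass

@dataclass
class KnowledgeField:
    product: int   # identifier of the product side of the field
    market: int    # identifier of the market side of the field
    depth: float   # how good the owning firm is in this field

DECAY_RATE = 0.05                      # assumed per-step decay of depth
MAX_DISTANCE = (2 * 50.0 ** 2) ** 0.5  # largest distance on a 100 x 100 torus

def decay(field: KnowledgeField) -> None:
    """Depth vanishes with time unless renewed by learning actions."""
    field.depth *= (1.0 - DECAY_RATE)

def observed_depth(field: KnowledgeField, distance: float) -> float:
    """Depth as seen by another firm: the full value at distance zero,
    decreasing linearly to zero at the largest distance the model allows."""
    return field.depth * max(0.0, 1.0 - distance / MAX_DISTANCE)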

Rivals’ Identification and Geographical Proximity

Rival firms are selected among those firms whose knowledge is sufficiently similar. Similarity is measured by a pairwise comparison of one’s knowledge fields with those of a potential rival.


In particular, for each pair of knowledge fields it is observed whether they concern the same product (similarity ½), the same market (similarity ½), or both (similarity 1). The sum of these numbers is normalized to the [0,1] interval to yield an index of the similarity of the knowledge of the two firms.

Our model rests on the assumption that considering a rival requires some cognitive effort by the main manager of a small firm, whose maximum cognitive effort is limited by the manager’s bounded rationality (Simon 1947). In accordance with the empirical findings of Boari et al. (2003), the cognitive effort for entertaining a rival can be assumed to decrease with physical proximity. We shall assume that each firm entertains a list of rivals such that the sum of the cognitive efforts expended to entertain them is lower than an amount specified by an exogenous parameter. By this assumption, since cognitive effort decreases with physical proximity, firms that focus on geographically close rivals may typically consider a large number of rivals. This result is in accordance with our preliminary empirical findings (Boari et al. 2003).
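As an illustration, the similarity index and the budgeted rival list might be coded as follows, reusing the hypothetical KnowledgeField class and MAX_DISTANCE constant from the previous sketch. This is a sketch under our own assumptions rather than the original implementation: in particular, dividing by the number of field pairs is only one possible normalization, the linear effort function is hypothetical, and the budget of 1.2 and the threshold of 0.2 anticipate the parameter values reported later in the chapter.

def pair_similarity(f1: KnowledgeField, f2: KnowledgeField) -> float:
    """1/2 for a shared product, 1/2 for a shared market, 1 for both."""
    return 0.5 * (f1.product == f2.product) + 0.5 * (f1.market == f2.market)

def firm_similarity(fields_a: list, fields_b: list) -> float:
    """Sum of pairwise scores, normalized to [0, 1]."""
    pairs = [(fa, fb) for fa in fields_a for fb in fields_b]
    if not pairs:
        return 0.0
    return sum(pair_similarity(fa, fb) for fa, fb in pairs) / len(pairs)

def select_rivals(candidates: list, max_effort: float = 1.2,
                  similarity_threshold: float = 0.2) -> list:
    """Fill the rival list, nearest candidates first, until the
    cognitive-effort budget is spent. Each candidate is a
    (firm, similarity, distance) triple; effort is assumed to grow
    linearly with distance, so nearby rivals are cheap to entertain."""
    rivals, spent = [], 0.0
    for firm, sim, dist in sorted(candidates, key=lambda c: c[2]):
        effort = dist / MAX_DISTANCE   # hypothetical effort function
        if sim >= similarity_threshold and spent + effort <= max_effort:
            rivals.append(firm)
            spent += effort
    return rivals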

Cognitive Distance from Rivals

At each simulation step, a firm picks a rival at random from its list of current rivals. For each pair constituted by one of its knowledge fields and one of the rival’s knowledge fields, it evaluates the cognitive distance between them. The cognitive distance between two knowledge fields is measured by the extent to which the knowledge fields do not overlap: identical knowledge fields have cognitive distance 0; knowledge fields with identical products (markets) but different markets (products) have cognitive distance ½; knowledge fields with different markets and different products have cognitive distance 1. Note that the fewer the rivals, the less likely it is that the evaluation of cognitive distance differs from one step to the next. Conversely, firms with many rivals are more likely to measure diverse values of cognitive distance, depending on which rival they pick.
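In code, this three-valued metric is simply the complement of the pairwise similarity score used for rival identification; the following sketch, reusing the hypothetical KnowledgeField class above, makes the scale explicit.

def cognitive_distance(f1: KnowledgeField, f2: KnowledgeField) -> float:
    """0 for identical fields, 1/2 if they share only the product or
    only the market, 1 if they share neither."""
    shared = (f1.product == f2.product) + (f1.market == f2.market)
    return 1.0 - 0.5 * shared

Note that cognitive_distance(f1, f2) equals 1 - pair_similarity(f1, f2), which makes the relationship between the two scales explicit.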

Evaluation of Performance

Past performances are considered a major explanatory variable of organizational learning (Cyert and March 1963; Levinthal and March 1981). However, measuring the performance of changing knowledge is not a trivial task. In fact, since the outcomes of innovative activities cannot be foreseen, ex ante evaluation by means of utility functions makes little sense. An alternative route is to conceive the usefulness of a piece of knowledge as deriving from its connections to other pieces of knowledge (Villani et al. 2007). For instance, a possible explanation of the success of innovations is

their ability to connect with other products, creating new markets (see Box 7.1). Following this interpretation, we are led to ascribe the performance of knowledge to its ability to bridge structural holes (Burt 1992). Let us interpret common knowledge as a graph, where nodes are knowledge fields and edges are common instances of business elements. Thus, the ability to bridge structural holes is measured by betweenness centrality:

g_i = \sum_{s \neq i \neq t} \frac{\sigma_{sit}}{\sigma_{st}}    (7.1)

where σst is the number of shortest paths between node s and node t, while σsit is the number of shortest paths between node s and node t passing through node i.

Figure 7.2 illustrates a network of knowledge fields, each composed of a product and a market. Knowledge fields are inscribed in dashed circles, which represent the firms that own them. A link is present whenever two knowledge fields concern the same product or the same market. In general, the knowledge of a firm may span several fields. Occasionally, different firms may have the same knowledge field. In Figure 7.2, firm ε owns knowledge fields that constitute the only bridge between the knowledge fields of firms α, β on the one side, and the knowledge fields of firms γ, δ on the other side. Thus, these knowledge fields are essential for the knowledge in the economy to be connected. It is knowledge fields of this kind that, according to Equation 7.1, have a high betweenness centrality and therefore a high performance. On the contrary, the only knowledge field of firm γ has a low betweenness centrality.
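Computing Equation 7.1 on the common-knowledge graph is straightforward with an off-the-shelf network library. The sketch below is our own illustration using networkx; for simplicity it treats the graph as undirected and unweighted, which matches the linking rule just described (an edge whenever two fields share a product or a market), and networkx’s unnormalized betweenness matches Equation 7.1 up to the constant factor arising from counting each unordered pair once.

import itertools
import networkx as nx

def knowledge_graph(all_fields: list) -> nx.Graph:
    """One node per knowledge field; an edge whenever two fields
    share a product or a market."""
    g = nx.Graph()
    g.add_nodes_from(range(len(all_fields)))
    for (i, fi), (j, fj) in itertools.combinations(enumerate(all_fields), 2):
        if fi.product == fj.product or fi.market == fj.market:
            g.add_edge(i, j)
    return g

def field_performance(all_fields: list) -> dict:
    """Betweenness centrality of every field, as in Equation 7.1."""
    return nx.betweenness_centrality(knowledge_graph(all_fields),
                                     normalized=False)

For the network of Figure 7.2, such a computation would assign the highest scores to the bridging fields of firm ε and a zero score to the single field of firm γ.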

Figure 7.2 A network of knowledge fields (solid squares) owned by firms (dashed circles). Products are labeled by the letters A, B, C, D. Markets are labeled by the numbers 1, 2, 3, 4. Firms are labeled by the Greek letters α, β, γ, δ, ε.


Note that by measuring performance by means of betweenness centrality we never assign a positive performance to novel knowledge. In fact, novel knowledge consists of creating a novel product, or a novel market, or both. Thus, in the network of knowledge fields an innovative field corresponds to an isolated node, or to a node with one single link to the other nodes. In Figure 7.2, the knowledge field owned by firm α is one such case. Both theoretical reasons and empirical investigations suggest that innovations are made by applying old knowledge to uncharted domains (Nooteboom 2000). Thus, henceforth we shall assume that novel knowledge fields are constructed either by creating a novel product or by creating a novel market, but not both. So all nodes representing novel knowledge are created with one link to another node.

Box 7.1 Actor Network Theory

Actor network theory (ANT) is a sociological theory in which the development and acceptance of artifacts and technologies is understood in terms of the interests of various social actors. ANT stresses that different actors may have a different understanding of the properties and potentialities of novel artifacts and technologies; nonetheless, their interests may align to support a particular innovation. In their turn, artifacts and technologies change the balance of power and the network of relationships between social actors. Equipped with this view, scholars working with ANT have provided historical reconstructions where the development of particular artifacts and technologies is described as the—sometimes unintended—consequence of the work of a large number of actors rather than the visionary plan of an isolated genius (Hughes 1986).

In order to understand how ANT relates to our measure of performance, let us consider the following empirical investigations of successful innovations. Law (1986) explained the rise of the Portuguese ability to exert long-distance control in the fifteenth century through certain simplifications of medieval astronomy that made it available to navigators, a new design of vessels that enabled them both to carry large freights and to resist armed attacks, and the increased reliability of mariners obtained through extensive drill. The Portuguese ability to exert control as distant as India would derive from the ability of a small committee set up by King John “The Navigator” to embed the results of medieval astronomy in a few simple tools that could be operated without prior knowledge of astronomy. Latour (1988) described the rise of Louis Pasteur and the diffusion of vaccination as the collective outcome of several forces, of which the most important were the hygienist movement, which was seeking scientific support for its urban planning prescriptions; the surgeons, who could improve the effectiveness of their art by means of local disinfection; and the military, which did not want its soldiers to be decimated by tropical diseases. On the contrary, physicians opposed vaccination for several decades, until Pasteur proposed post-infection treatments and, most importantly, the state provided a role for physicians in the compulsory vaccination of the French population.

In both cases, we see one or a few actors—Louis Pasteur, King John and his astronomers—who were able to place themselves in a position from which they could exert a great influence because powerful allies were waiting for their innovations—the hygienists in the case of Pasteur, the merchants with their improved vessels in the case of King John. A consequence that we may draw is that successful innovations are those that are able to bridge between existing bodies of knowledge. From this insight our choice follows: to measure the performance of a knowledge field by means of its betweenness centrality in a graph where nodes are knowledge fields, connected by edges if they have a product or a market in common—see e.g. Figure 7.2.

Inter-Organizational Learning

At each simulation step, the depth of knowledge fields decreases according to an exogenous decay rate. If the depth of a knowledge field decreases below a minimum, that field is canceled. However, a firm can increase the depth of its knowledge fields, or it can even create new ones. We distinguish four kinds of learning actions that affect the depth of knowledge fields. The received literature makes the following two distinctions:

• Experiential learning can be distinguished from vicarious learning (Bandura 1977; Manz and Sims 1981; Gioia and Manz 1985). While the former rests on personal experience, the latter takes place through the experience of someone else.
• Exploration of novel knowledge can be distinguished from exploitation of existing knowledge (March 1991).

Since these distinctions regard different aspects of learning, we can fruitfully cross them with one another, as in Figure 7.3. Experiential exploration is the creation of novel knowledge out of personal experience; this knowledge is novel for its creator as well as for the whole economy. Vicarious exploration occurs when a firm borrows from another firm a piece of knowledge that is novel for the borrower, though not for the economy. Experiential exploitation occurs when a firm deepens its existing knowledge disregarding the experience of other firms. Finally, vicarious exploitation occurs when a firm deepens its own knowledge by learning from the experience of other firms.

In our model, firms select one among the aforementioned actions according to the values attained by performance and cognitive distance. In particular, experiential learning is undertaken if either (i) a firm has no rival, or (ii) no rival has a knowledge field that is both (a) deeper than one of the firm’s own knowledge fields and (b) at low or intermediate cognitive distance (i.e., equal to 0 or 0.5) from it. In these conditions a firm has nothing to learn from its rivals, so it prefers experiential learning to vicarious learning.
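A minimal sketch of this selection rule, reusing the hypothetical KnowledgeField class and cognitive_distance function introduced earlier; the function name is ours, not the authors’:

def prefers_experiential(own_fields: list, rival_fields: list) -> bool:
    """Conditions (i)-(ii): experiential learning is chosen when a firm
    has no rival, or when no rival field at low or intermediate cognitive
    distance (0 or 0.5) is deeper than one of the firm's own fields."""
    if not rival_fields:
        return True                      # (i) no rival at all
    for own in own_fields:
        for rf in rival_fields:
            if cognitive_distance(own, rf) <= 0.5 and rf.depth > own.depth:
                return False             # something worth imitating exists
    return True                          # (ii) nothing to learn from rivals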


Figure 7.3 Experiential exploration, experiential exploitation, vicarious exploration and vicarious exploitation.

If experiential learning is selected, then the choice between experiential exploration and experiential exploitation is made depending on performance. In fact, poor past performances and rivals’ pressure give firms the impetus to undertake exploration (Tushman and Romanelli 1985; Swaminathan and Delacroix 1991). On the contrary, average or high performances are most often responsible for exploitation, because: (1) they induce managers to believe they have gotten it right; (2) they induce managers to interpret past performances as a sign that less vigilance and less environmental scanning or search are required; (3) they assure leaders the status and resources to perpetuate their power; and (4) they induce managers to attribute success to their own actions (D. Miller and Chen 1994; Lant et al. 1992). If performance cannot be evaluated, because a knowledge field does not bridge between other pieces of knowledge, then the choice between experiential exploration and experiential exploitation is made randomly, with a probability equal to the ratio of the level of poor performance to the level of high performance.

Experiential exploration creates a novel knowledge field by exchanging the product or the market of an existing field with a novel one. The newly created field has a depth drawn randomly from the interval between zero and the depth of the starting field. If, with the newly created field, the number of knowledge fields exceeds the maximum allowed, then the starting field is destroyed. Experiential exploitation deepens an existing knowledge field by an amount equal to the decay of knowledge; it merely prevents depth from decreasing.

Vicarious exploitation takes place between any pair of knowledge fields, one belonging to the subject firm and one to its rival, such that their cognitive distance is low or intermediate (i.e., equal to 0 or 0.5) and the knowledge field of the rival firm has greater depth. Whenever this occurs, the subject firm increases the depth of its knowledge field by an amount equal to the depth of its rival’s field, decreased by an amount inversely proportional to

geographical proximity, and multiplied by one minus the cognitive distance between the two knowledge fields involved. In practice, we assume that vicarious exploitation takes place whenever a firm meets a rival with a knowledge field that is sufficiently similar to be understood, and who is more competent than the firm itself in that field.

Vicarious exploration has a different rationale, for it consists in the creation of a new knowledge field out of its observation in a rival’s knowledge. As in the case of vicarious exploitation, cognitive distance should not be too high (i.e., equal to 1), otherwise the rival’s knowledge would not be understood. However, cognitive distance should not be too low either (i.e., equal to 0), for a new knowledge field that is too similar to the existing ones would be uninteresting. Thus we require intermediate cognitive distance for vicarious exploration to take place (Nooteboom 1992, 1999). More precisely, a rival’s knowledge field that does not exist in one’s knowledge can be imitated only if it has intermediate cognitive distance (i.e., equal to 0.5) from at least one of one’s knowledge fields.
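One possible reading of the vicarious-exploitation rule, expressed with the hypothetical helpers defined earlier (observed_depth and cognitive_distance); since the text specifies the distance discount only qualitatively, the exact arithmetic below is our assumption:

def vicarious_exploitation(own: KnowledgeField, rival: KnowledgeField,
                           distance: float) -> None:
    """Deepen one's own field by the rival's depth, discounted by
    geographical distance and weighted by one minus the cognitive
    distance; applies only when the rival's field is deeper and
    cognitively close enough (distance 0 or 0.5)."""
    d = cognitive_distance(own, rival)
    if d <= 0.5 and rival.depth > own.depth:
        own.depth += observed_depth(rival, distance) * (1.0 - d)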

The Flowchart

The previous building blocks are arranged together in a sequence of operations illustrated in Figure 7.4. For simplicity, only two firms are considered.

Figure 7.4 The sequence of operations carried out by a firm A and their relationships with the analogous sequence carried out by another firm B. Influences of a firm on another are marked by dotted arrows.

Consider firm A in Figure 7.4. Top to bottom, the squares describe the sequence of operations that it carries out. First, it identifies its rivals. Subsequently, it randomly selects one of them and estimates the cognitive distances of its knowledge fields. Then it observes the graph of all knowledge fields present in the economy and calculates the performance of its own fields. Finally, it undertakes an action and, consequently, its own knowledge changes. Before repeating this sequence, firm A must wait until firm B has gone through a similar sequence. Note that the selection of rivals and the evaluation of performance depend on the actions that were undertaken by all other firms in the previous steps. This is the meaning of the dotted arrows in Figure 7.4.

Initialization

Firms are placed on a torus obtained from a square of 100 × 100 pixels. Firms do not move in this space. In order to evaluate the effects of clustering, both clustered and isolated firms are considered at the same time, and the number of isolated firms is set equal to the number of clustered firms. Isolated firms are distributed uniformly in space. Our model allows one to choose the number of clusters, the number of firms in each cluster and the geographical proximity of clusters. The number of clustered firms is the product of the number of firms in a cluster and the number of clusters. The geographical proximity of clusters depends on the variance of a normal distribution of the positions of clusters.

Firms are created with an initial wealth. Following the empirical evidence on the distribution of (however measured) firm size, wealth is initially distributed according to a Zipf law (Axtell 2001; Gaffeo et al. 2003). The values obtained from the Zipf distribution are scaled by the side length of the square from which the torus is derived (see above). In fact, the absolute size of firms depends on the size of their market, and the size of the world where firms operate is a proxy for market size.

Firms are initialized with a random number of knowledge fields drawn from a uniform distribution. The maximum number of knowledge fields per firm is a parameter of the model. The initial depth of knowledge fields is drawn randomly from the [0,1] interval. The numbers of different products and of different markets from which these initial knowledge fields are composed are also drawn from a uniform distribution. The maximum number of initial products and markets is obtained by multiplying the number of knowledge fields per firm by the number of firms.
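The Zipf-distributed initial wealth can be illustrated with the rank-size rule, under which the k-th largest firm receives a wealth proportional to 1/k. This is one standard way to realize a Zipf law and is our own assumption, as is the exact scaling; the function name is hypothetical.

import numpy as np

def initial_wealth(n_firms: int, world_side: float = 100.0,
                   seed: int = 0) -> np.ndarray:
    """Rank-size (Zipf) initial wealth, scaled by the side of the
    square from which the torus is built; ranks are assigned to
    firms in random order."""
    rng = np.random.default_rng(seed)
    ranks = np.arange(1, n_firms + 1, dtype=float)
    wealth = world_side / ranks          # w_k proportional to 1/k
    rng.shuffle(wealth)
    return wealth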

Population Dynamics

Each learning action (experiential exploration, experiential exploitation, vicarious exploration, vicarious exploitation) has a cost, which decreases

the wealth of the firm. Wealth is also subject to natural decay at a fixed rate. However, each knowledge field provides a performance to its firm, which increases its wealth. If the wealth of a firm becomes lower than the cost of a learning action, the firm dies. A dead firm is immediately replaced by a new one that occupies the same geographical position; its wealth and knowledge are initialized as discussed previously.
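The resulting wealth dynamics reduce to one line per step. A sketch, where the decay rate and action cost echo the values reported in the next section (which does not state whether the 5 per cent decay applies to wealth, to knowledge depth, or to both, so their use here is an assumption):

WEALTH_DECAY = 0.05  # assumed fixed decay rate (see 'The Choice of Parameters')
ACTION_COST = 0.01   # cost of one learning action (same source)

def update_wealth(wealth: float, performance: float) -> float:
    """One step: decay, pay for the chosen learning action, cash in
    the performance provided by one's knowledge fields."""
    return wealth * (1.0 - WEALTH_DECAY) - ACTION_COST + performance

# A firm whose wealth falls below ACTION_COST dies and is replaced, at
# the same location, by a freshly initialized firm.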

THE EXPERIMENTS

We carried out simulations in order to compare the actions undertaken, the results obtained and the knowledge learned by clustered firms with those of isolated firms. Since we were interested in long-term regularities, for any chosen parameter combination we let the model run with different seeds and observed its behavior at the end of the simulation. Reported results refer to runs of 1,000 steps, after allowing transitory dynamics to settle down during the initial 100 steps.

The Choice of Parameters

Our simulations highlighted that clusters of firms are efficient only if they reach a critical mass in terms of the number of firms they contain. According to our model, only if a cluster contains at least 40–50 firms do these firms obtain substantial advantages with respect to isolated firms. Our model is a simplification of reality, so this value should not be understood as the minimal size a cluster should have in the real world. It does imply, however, that in the real world a threshold exists above which a cluster is viable. Some experimentation with the parameters that regulate the number, size and geographical proximity of clusters highlighted that small but geographically proximate clusters offer the same advantages to their members as one large cluster (e.g., 5 clusters of 10 firms each, at a distance of less than 10 pixels from one another, are equivalent to one single cluster of 50 firms). On the contrary, small clusters far from one another offer no advantage with respect to isolation. We focused our simulations on the comparison between one single cluster of 50 firms and another 50 firms scattered around. The number 50 was chosen because it is roughly the minimum cluster size at which the advantages of clustering become evident.

Parameters were chosen making use of all available empirical information, as well as of constraints between parameters:

• The model makes sense only if the number of firms that go bankrupt (and are replaced) is quite low. We found that with a decay rate of 5 per cent and a cost of undertaking an action equal to 0.01, roughly 0.21 per cent of firms are replaced at each step.


• The maximum number of knowledge fields per firm was set at 5, in accordance with psychological experiments pointing to some point between 4 and 7 as the maximum number of items that can be managed by a human mind (G.A. Miller 1956; N. Cowan 2000). Though we are conscious that these experiments are quite distant from our setting, we deem that they nevertheless provide an indication of the relevant order of magnitude.
• The available empirical evidence suggests that the average number of rivals of firms located in a cluster may be in the order of 2, 3, 4 or 5; only exceptionally may a firm mention something like 8–10 rivals, or no rival at all (Boari et al. 2003; Russo and Pirani 2001). We found that by setting the maximum cognitive effort at 1.2 and the similarity threshold at 0.2, the simulations are in good accord with the empirical evidence.
• The lower threshold of the depth of knowledge fields was set at 0.1. This value is much lower than the average values attained by depth and, at the same time, sufficiently high to ensure that enough low-depth fields are destroyed, so that the average number of fields stays below the maximum allowed (set at 5, see previous discussion).
• The upper threshold of the depth of knowledge fields is necessary in order to prevent a few fields from increasing their depth indefinitely. This threshold was set at 10, with the idea that, being high, it would seldom operate. Indeed, this threshold is attained once or twice during a simulation at most, and quite often it is not attained at all.
• The threshold of performance that decides between experiential exploration and experiential exploitation was set at 20 per cent of average past performance, calculated over the past 10 simulation steps. Exploration is meant to be carried out in special circumstances (March 1991), so we deem that this threshold should be well below 50 per cent of past performances.

The Results

We expound the results of our model following the same sequence illustrated in Figure 7.4.

Identification of Rivals

Clustered firms have, on average, many more rivals than isolated firms (3.30 vs. 0.30 rivals). Moreover, most rivals of clustered firms are inside their own cluster (3.28 inside, 0.02 outside).

Cognitive Distance

Clustered firms are at a higher average cognitive distance from their rivals (404 per cent higher) than isolated firms are from theirs. In other

words, clustered firms watch rivals whose knowledge is more different from their own than isolated firms do.

Performance

Our simulations confirm the empirical evidence claiming that clustered firms have an advantage over isolated firms. In fact, we find that clustered firms have a higher performance than isolated firms (38 per cent higher). Consequently, in our model isolated firms die (and are replaced) more often than clustered firms.

Learning Actions

Experiential exploitation has the highest frequency (2.03), followed by experiential exploration (1.27), vicarious exploitation (0.96) and finally vicarious exploration (0.02). Experiential learning is more frequent than vicarious learning, and exploitation is more frequent than exploration.

Knowledge Development

We introduced two indicators of knowledge development: the number of knowledge fields managed by a firm (scope of knowledge), and the depth of these knowledge fields (depth of knowledge). On both indicators, clustered firms perform better than isolated firms. Clustered firms have on average more knowledge fields than isolated firms (18 per cent more), and their knowledge of these fields is deeper than that of isolated firms (114 per cent deeper).

CONCLUSION

This study addressed the link between geographical proximity and rivalry as a cognitive and social dimension of competition. In particular, we investigated the relationship between geographical proximity and rivalry with respect to their impact on the development of knowledge by both agglomerated and isolated firms.

As mentioned before, the relationship between geographical proximity and rivalry has been considered a crucial issue in the explanation of the competitive advantage of geographical clusters and of clustered firms. In particular, geographical proximity is supposed to foster innovation and diffuse best practices through rivalry. However, this is a presumption rather than the result of empirical investigation. In our model we take this presumption together with the thesis of those scholars who, adopting a cognitive approach to the study of rivalry, considered geographical proximity as a powerful cognitive tool used to “construct” the market through rivals’ identification and comparison. In particular,

in our model geographical proximity influences rivalry by reducing the cognitive effort necessary to entertain a rival, as well as by increasing the capability to appreciate the depth of a rival’s knowledge. Rivalry takes on the characteristics of a localized phenomenon, where the nearest competitors may become rivals because geographical proximity increases information availability and provides an incentive to attend to it.

We believe this study can improve our understanding of the role played by geographical proximity in the cognitive representation of a market. In particular, according to our simulations geographical proximity allows the borders of the constructed competitive environment to be expanded by affecting the scope and the depth of the knowledge developed. Thus, geographically clustered firms have an advantage over isolated firms with respect to their ability to develop knowledge and adapt it to changing circumstances. For this reason, in our model as in the real world, clustered firms perform better than isolated firms. According to our simulations, clustered firms excel both in the number of knowledge fields and in their depth. Thus, our model suggests that it is possible for clustered firms to improve the scope and the depth of knowledge at the same time. This is possible because clustered firms observe many more rivals than isolated firms but, most importantly, because clustered firms have almost complete access to the depth of their rivals’ knowledge, so imitation is quite easy.

Our simulations highlight that even in the knowledge economy, geographical clustering matters. It matters because geographical proximity helps establish and maintain social relations and, among them, rivalry relations. It is because of rivalry relations that knowledge is created, and it is through rivalry relations that knowledge is imitated. Our model, even in this basic version, reproduces these mechanisms. Future versions may include other aspects of decision making, such as heterogeneity of behavioral capabilities, or different geographical arrangements of both clustered and isolated firms. However, preliminary experiments suggest that these modifications are unlikely to change the overall results reported herein.

NOTES

1. The computer code was mainly written by Sirio Capizzi. We gratefully acknowledge financial support from the Italian Ministry of Scientific Research through FIRB n. RBNE03HJZZ.

BIBLIOGRAPHY

Aharonson, B.S., Baum, J.A.C. and Feldman, M.P. (2007) ‘Desperately seeking spillovers? Increasing returns, industrial organization and the location of new entrants in geographic and technological space’, Industrial and Corporate Change, 16: 89–130.
Audretsch, D. and Feldman, M.P. (1996) ‘R&D spillovers and the geography of innovation and production’, American Economic Review, 86: 630–40.
Axtell, R.L. (2001) ‘Zipf distribution of U.S. firm sizes’, Science, 293: 1818–20.
Baldwin, A. and Bengtsson, M. (2004) ‘The emotional base of interaction among competitors—an evaluative dimension of cognition’, Scandinavian Journal of Management, 20: 75–102.
Bandura, A. (1977) Social Learning Theory, Englewood Cliffs, NJ: Prentice-Hall.
Baptista, R. (2000) ‘Do innovations diffuse faster within geographical clusters?’, International Journal of Industrial Organization, 18: 515–35.
Baptista, R. and Swann, P. (1998) ‘Do firms in clusters innovate more?’, Research Policy, 27: 525–40.
Baum, J.A.C. and Haveman, H.A. (1997) ‘Love thy neighbor? Differentiation and agglomeration in the Manhattan hotel industry, 1898–1990’, Administrative Science Quarterly, 42: 304–38.
Baum, J.A.C. and Korn, H. (1996) ‘Competitive dynamics of interfirm rivalry’, Academy of Management Journal, 39: 255–91.
Baum, J.A.C. and Korn, H. (1999) ‘Dynamics of dyadic competitive interaction’, Strategic Management Journal, 20: 251–78.
Baum, J.A.C. and Mezias, S.J. (1992) ‘Localized competition and organizational failure in the Manhattan hotel industry, 1898–1990’, Administrative Science Quarterly, 37: 580–604.
Becattini, G. (1990) ‘The Marshallian industrial district as a socio-economic notion’, in F. Pyke, G. Becattini and W. Sengenberger (eds) Industrial Districts and Inter-Firm Co-operation in Italy, Geneva: International Institute for Labour Studies.
Boari, C., Odorici, V. and Zamarian, M. (2003) ‘Cluster and rivalry: does localization really matter?’, Scandinavian Journal of Management, 19: 467–89.
Boari, C., Espa, G., Odorici, V. and Zamarian, M. (2004) ‘Space in cognition and cognition in space: rivalry within and outside an industrial cluster’, working paper presented at the XX EGOS Colloquium.
Bogner, W. and Thomas, H. (1993) ‘The role of competitive groups in strategy formulation: a dynamic integration of two competing models’, Journal of Management Studies, 30: 51–67.
Boschma, R. (2005) ‘Proximity and innovation: a critical assessment’, Regional Studies, 39: 61–74.
Breschi, S. and Lissoni, F. (2005) ‘Mobility of inventors and the geography of knowledge spillovers: new evidence on US data’, Centro di Ricerca sui Processi di Innovazione e Internazionalizzazione, Working paper no. 184.
Burt, R. (1992) Structural Holes: the social structure of competition, Cambridge, MA: Harvard University Press.
Chen, M. (1996) ‘Competitor analysis and interfirm rivalry: toward a theoretical integration’, Academy of Management Review, 21: 100–34.
Chen, M. and MacMillan, I. (1992) ‘Nonresponse and delayed response to competitive moves: the roles of competitor dependence and action irreversibility’, Academy of Management Journal, 35: 539–70.
Cowan, N. (2000) ‘The magical number 4 in short-term memory: a reconsideration of mental storage capacity’, Behavioral and Brain Sciences, 24: 87–185.
Cowan, R., Jonard, N. and Ozman, M. (2004) ‘Knowledge dynamics in a network industry’, Technological Forecasting and Social Change, 71: 469–84.
Cyert, R. and March, J. (1963) A Behavioral Theory of the Firm, Englewood Cliffs, NJ: Prentice-Hall.
Darr, E. and Kurtzberg, T. (2000) ‘An investigation of partner similarity dimensions on knowledge transfer’, Organizational Behavior and Human Decision Processes, 82: 28–44.
Dei Ottati, G. (1994) ‘Cooperation and competition in the industrial district as an organization model’, European Planning Studies, 2: 463–83.
Doz, Y. (1996) ‘The evolution of cooperation in strategic alliances’, Strategic Management Journal, 17: 55–83.
Dussauge, P., Garrette, B. and Mitchell, W. (2000) ‘Learning from competing partners: outcomes and duration of scale and link alliances in Europe, North America and Asia’, Strategic Management Journal, 21: 99–126.
Dyer, J.H. and Nobeoka, K. (2000) ‘Creating and managing a high-performance knowledge-sharing network: the Toyota case’, Strategic Management Journal, 21: 345–67.
Enright, M.J. (1991) Geographic Concentration and Industrial Organization, Cambridge, MA: Harvard University Press.
Farjoun, M. and Lai, L. (1997) ‘Similarity judgments in strategy formulation: role, process and implications’, Strategic Management Journal, 18: 255–73.
Gaffeo, E., Gallegati, M. and Palestrini, A. (2003) ‘On the size distribution of firms: additional evidence from the G7 countries’, Physica A, 324: 117–23.
Gallaud, D. and Torre, A. (2005) ‘Geographical proximity and circulation of knowledge through interfirm relationships’, Scienze Regionali, 2: 1–21.
Ghoshal, S. and Kim, S.K. (1986) ‘Building effective intelligence systems for competitive advantage’, Sloan Management Review, 28: 49–58.
Gioia, D.A. and Manz, C.C. (1985) ‘Linking cognition and behavior: a script processing interpretation of vicarious learning’, The Academy of Management Review, 10: 527–39.
Giuliani, E. (2007) ‘The selective nature of knowledge networks in clusters: evidence from the wine industry’, Journal of Economic Geography, 7: 139–68.
Greve, H. (2005) ‘Interorganizational learning and heterogeneous social structure’, Organization Studies, 26: 1025–48.
Gripsrud, G. and Gronhaug, K. (1985) ‘Structure and strategy in grocery retailing: a sociometric approach’, The Journal of Industrial Economics, 32: 339–47.
Hughes, T.P. (1986) ‘The seamless web: technology, science etcetera’, Social Studies of Science, 16: 281–92.
Ingram, P. (2002) ‘Interorganizational learning’, in J. Baum (ed.) The Blackwell Companion to Organizations, Oxford: Blackwell.
Ingram, P. and Baum, J.A.C. (1997) ‘Chain affiliation and the failure of Manhattan hotels, 1898–1980’, Administrative Science Quarterly, 42: 68–102.
Ingram, P. and Baum, J.A.C. (1998) ‘Opportunity and constraint: organizations’ learning from the operating and competitive experience of industries’, Strategic Management Journal, 18: 75–98.
Inkpen, A.C. (1998) ‘Learning, knowledge acquisition, and strategic alliances’, European Management Journal, 16: 223–29.
Inkpen, A.C. and Crossan, M.M. (1995) ‘Believing is seeing: joint ventures and organizational learning’, Journal of Management Studies, 32: 595–618.
Kale, P., Singh, H. and Perlmutter, H. (2000) ‘Learning and protection of proprietary assets in strategic alliances: building relational capital’, Strategic Management Journal, 21: 217–37.
Khanna, T., Gulati, R. and Nohria, N. (1998) ‘The dynamics of learning alliances: competition, cooperation and relative scope’, Strategic Management Journal, 19: 193–210.
Kim, J.Y. and Miner, A.S. (2007) ‘Vicarious learning from the failures and near-failures of others: evidence from the U.S. commercial banking industry’, The Academy of Management Journal, 50: 687–714.
Knoben, J. and Oerlemans, L. (2006) ‘Proximity and inter-organizational collaboration: a literature review’, International Journal of Management Reviews, 8: 71–89.
Korn, H. and Baum, J. (1999) ‘Chance, imitative, and strategic antecedents to multimarket contact’, Academy of Management Journal, 42: 171–93.
Lane, P.J. and Lubatkin, M. (1998) ‘Relative absorptive capacity and interorganizational learning’, Strategic Management Journal, 19: 461–77.
Lant, T. and Baum, J.A.C. (1995) ‘Cognitive sources of socially constructed competitive groups: examples from the Manhattan hotel industry’, in R. Scott and S. Christensen (eds) The Institutional Construction of Organizations, Thousand Oaks, CA: Sage.
Lant, T., Milliken, F. and Batra, B. (1992) ‘The role of managerial learning and interpretation in strategic persistence and reorientation: an empirical exploration’, Strategic Management Journal, 13: 585–608.
Latour, B. (1988) The Pasteurization of France, Cambridge, MA: Harvard University Press.
Law, J. (1986) ‘On the methods of long distance control: vessels, navigation and the Portuguese route to India’, in J. Law (ed.) Power, Action and Belief: a new sociology of knowledge?, Sociological Review Monograph, London: Routledge and Kegan Paul.
Lazerson, M. and Lorenzoni, G. (1999) ‘The firms that feed industrial districts: a return to the Italian source’, Industrial and Corporate Change, 8: 235–66.
Levinthal, D.A. and March, J.G. (1981) ‘A model of adaptive organizational search’, Journal of Economic Behavior and Organization, 2: 307–33.
Levinthal, D.A. and March, J.G. (1993) ‘The myopia of learning’, Strategic Management Journal, 14: 92–112.
Levitt, B. and March, J.G. (1988) ‘Organizational learning’, Annual Review of Sociology, 14: 319–40.
Lomi, A. and Larsen, E. (1996) ‘Interacting locally and evolving globally: a computational approach to the dynamics of organizational populations’, Academy of Management Journal, 39: 1287–321.
Malmberg, A. and Maskell, P. (2002) ‘The elusive concept of localization economies: toward a knowledge-based theory of spatial clustering’, Environment and Planning A, 34: 429–49.
Malmberg, A. and Maskell, P. (2006) ‘Localized learning revisited’, Growth and Change, 37: 1–18.
Malmberg, A. and Power, D. (2005) ‘(How) do (firms in) clusters create knowledge?’, Industry and Innovation, 12: 409–31.
Manz, C.C. and Sims, H.P. (1981) ‘Vicarious learning: the influence of modeling on organizational behavior’, The Academy of Management Review, 6: 105–13.
March, J.G. (1991) ‘Exploration and exploitation in organizational learning’, Organization Science, 2: 71–87.
Miller, D. and Chen, M. (1994) ‘Sources and consequences of competitive inertia: a study of the U.S. airline industry’, Administrative Science Quarterly, 39: 1–30.
Miller, D. and Chen, M. (1996) ‘The simplicity of competitive repertoires: an empirical analysis’, Strategic Management Journal, 17: 419–39.
Miller, G.A. (1956) ‘The magical number seven, plus or minus two: some limits on our capacity for processing information’, The Psychological Review, 63: 81–97.
Morgan, K. (2004) ‘The exaggerated death of geography: learning, proximity and territorial innovation systems’, Journal of Economic Geography, 4: 3–21.
Nooteboom, B. (1992) ‘Towards a dynamic theory of transactions’, Journal of Evolutionary Economics, 2: 281–99.
Nooteboom, B. (1999) Inter-Firm Alliances: analysis and design, London: Routledge.
Nooteboom, B. (2000) Learning and Innovation in Organizations and Economies, Oxford: Oxford University Press.
Nooteboom, B. (2006) ‘Innovation, learning and cluster dynamics’, in B. Asheim, P. Cooke and R. Martin (eds) Clusters and Regional Development, London: Routledge.
Piore, M. and Sabel, C. (1984) The Second Industrial Divide, New York: Basic Books.
Porac, J.F. and Rosa, J.A. (1996) ‘Rivalry, industry models, and the cognitive embeddedness of the comparable firm’, in P. Shrivastava, A.
Huff and J. Dutton (eds) Advances in Strategic Management, Greenwich, CT: JAI Press.

192 Cristina Boari, Guido Fioretti, and Vincenza Odorici Porac, J.F., Thomas, H. and Baden-Fuller, C. (1989) ‘Competitive groups as cognitive communities: the case of Scottish knitwear manufacturers’, Journal of Management Studies, 26: 397–416. Porac, J.F., Thomas, H., Wilson, F., Paton, D. and Kanfer, A. (1995) ‘Rivalry and the industry model of Scottish knitwear producers’, Administrative Science Quarterly, 40: 203–27. Porter, M. (1990) The Competitive Advantage of Nations, New York: Free Press. (1998) ‘Clusters and the new economics of competition’, Harvard Business Review, 76: 77–90. Porter, M., Sakakibara, M. and Takeuchi, H. (2000) Can Japan Compete?, London: Macmillan. Powell, W.W. (1998) ‘Learning from collaboration, knowledge and networks in the biotechnology and pharmaceutical industries’, California Management Review, 40: 228–40. Russo, M. and Pirani, E. (2001) ‘Metalnet. Struttura e dinamica dei cambiamenti nelle relazioni tra le imprese metalmeccaniche in provincia di Modena. Primi risultati dell’indagine empirica’, Università di Modena e Reggio Emilia. Scherer, F. and Ross, D. (1990) Industrial Market Structure and Economic Performance, Boston: Houghton Miffl in. Simon, H.A. (1947) Administrative Behavior, London: Macmillan. Simonin, B.L. (1999) ‘Ambiguity and the process of knowledge transfer in strategic alliances’, Strategic Management Journal, 20: 595–693. Sorenson, O. and Baum, J.A.C. (2003) ‘Geography and strategy: the strategic management of place and space’, in J.A.C. Baum and O. Sorenson (eds) Geography and Strategy, Greenwich, CT: JAI Press. Swaminathan, A. and Delacroix, J. (1991) ‘Differentiation within an organizational population: additional evidences from the wine industry’, Academy of Management Journal, 34: 679–92. Torre, A. and Gilly, J. (2000) ‘On the analytical dimension of proximity dynamics’, Regional Studies, 34: 169–80. Torre, A. and Rallet, A. (2005) ‘Proximity and localization’, Regional Studies, 39: 47–59. Tushman, M. and Romanelli, E. (1985) ‘Organizational evolution: a metamorphosis model of convergence and reorientation’, in L. Cummings and B. Staw (eds.) Research in Organization Behavior, Greenwich, CT: JAI Press. Villani, M., Bonacini, S., Ferrari, D., Serra, R. and Lane, D. (2007) ‘An agent-based model of exaptive processes’, European Management Review, 4: 141–51. Wennberg, K. and Lindqvist, G. (2008) ‘How do entrepreneurs in clusters contribute to economic growth?’, Stockholm School of Economics, SSE/EFI Working Paper Series in Business Administration no. 2008–3. White, H. (1981) ‘Where do markets come from?’, American Journal of Sociology, 87: 517–47. White, H. and Eccles, R. (1987) ‘Producers’ markets’, in J. Estwell and N. Milgrom (eds) The New Palgrave Dictionary of Economics, London: Macmillan. Yu, T. and Cannella, A. (2007) ‘Rivalry between multinational enterprises: an event history approach’, Academy of Management Journal, 50: 665–86.

8

Organization and Strategy in Banks

Alessandro Cappellini¹ and Alessandro Raimondi²

INTRODUCTION

Through modeling we reproduce reality in order to achieve a better comprehension of it: our purpose is to gain insight into an economic system so as to design more successful management policies and organizational structures, and to solve problems. The focus of our attention is banking organizations: strongly hierarchical structures that play a fundamental role in the economy. Banks pursue profit and efficiency as every firm does, but their intermediation function makes them extremely interconnected, in every aspect of their management, with the market and the macroeconomic scenario. As a consequence, willing or not, banks also occupy a social position of sorts: the paths of action they undertake affect a large number of economic actors in significant ways. Starting from the exploration of the bank–firm relationship in terms of firm analysis, risk management and commercial policies, and of the resulting systemic relationships, we depict a simulation framework for the study and analysis of banks’ organization and strategy.

MODELING

The goal of abstraction is to obtain an idealization that is at the same time as simple as possible, yet sufficiently complex to adequately represent the fundamental process we are interested in. In his milestone book, Industrial Dynamics (Chapter 21, ‘Industrial Dynamics in Business’, 1961), Jay Forrester writes:

Industrial dynamics studies of the corporation should be started as a long-term program at an activity level low enough to avoid pressure for immediate results. Selection of proper men with the managerial viewpoint and the necessary technical skills is crucial. They must be given time to develop their understanding of the company’s problems and of system dynamic behaviour. Suitable men will be those also in demand to fill management positions in the company.

Therefore a fundamental role, in order to address firms’ problems through modeling, has to be played by the firms’ own managers: this may be the only way to let the abstract representation have an effective impact on the real world. Yet this sometimes means overcoming old and deep-rooted conceptions and habits. A formal model makes it easier to develop the feedback process between theory and practice that is fundamental to the scientific method, which is grounded upon a continuous interaction and mutual adaptation between a hypothesis and its empirical verification. As Herbert Simon (1997: 63) says,

[T]he knowledge that economic actors possess and do not possess, the computations that economic actors can make and cannot make must not enter economic theory as ad hoc assumptions. They must be shaped and tested by the sharpest empirical methods we can devise.

We reproduce reality in order to achieve a better comprehension of it, both from the side of the observer and from that of the individuals involved in the process, who are often not conscious of their role in the system, of the consequences of their actions or of the reasons why they act in a certain way. Our purpose is to gain insight into the system, in order to create more successful management policies and organizational structures, and to solve problems.

Systems

A system in an environment is characterized by a structure, set up by cause–effect relationships among different variables; by the functioning of this structure (that is, the process), defined through a set of decision rules; and by the emergent dynamic behavior, which can be represented through curves in a graph. A system can be modeled with System Dynamics as well as with agent-based modeling.

System Dynamics (SD) is ‘the study of information-feedback characteristics of industrial activity to show how organizational structure, amplification (in policies), and time delays (in decisions and actions) interact to influence the success of the enterprise’ (Forrester 1958: 38). In System Dynamics:

• The structure is made of feedback causal relations among variables, i.e., causal feedback loops, negative or positive; there are level variables (stocks, the states of the system) and flow variables (rates of change).
• The process is given by the decision rules that make the structure work.
• The dynamic is the curve shape that shows the time behavior of the system in a graph.

The feedback structure of a system generates its behavior: cause–effect relationships are fundamental to mapping the feedback structures of systems. Cause–effect relationships and feedback loops represent reinforcing or out-of-equilibrium dynamics. On the ABM side we have:

• the structure, i.e., who: the agent;
• the process, or decision rules, that are the actions of the agent;
• the dynamic, which is the representation of agents’ actions in time.

ABM is essentially decentralized. Compared to SD, in ABM there is no place where the global system behavior (the dynamics) is defined. The modeler defines behavior at the individual level, so that important heterogeneities in agent structure and decision rules can be represented. The global behavior then emerges as the result of many (hundreds, thousands, millions of) individuals, each following its own behavior rules, living together in some environment and communicating with each other and with the environment. This is why ABM is also called bottom-up modeling, whereas in SD we work with aggregates—items in the same stock are indistinguishable, having no individuality, and the focus is on global structural dependencies; as such, SD is a sort of top-down modeling.

Therefore with ABM we move from continuous towards discrete mathematics: the variables that represent the state of the system can assume definite values representing alternative events (eat or do not eat another organism; buy, hold or sell some stocks). The time variable has a finite number of states which, depending on the case, can represent generations of a species, financial transactions or whatever. Gell-Mann (1994) defines this kind of mathematics as ‘based on rules’, since the changes that take place in the system depend on the state of the system at that precise instant: we can represent systems consisting of many individual adaptive agents, each one of them being itself a complex adaptive system. Usually agents, like organisms in an ecosystem or individuals and firms in an economy, evolve schemes that describe the behavior of other agents and prescribe how to react to it. The mathematics based on rules thus becomes a mathematics based on agents. We build ABMs following the ERA scheme: environment–rules–agents (Terna 2000).

Modeling is a feedback process, not a linear sequence of steps. As stated by Sterman (2000: 88), the initial purpose dictates the boundary and scope of the modeling effort, but what is learned from the process of modeling may feed back to alter our basic understanding of the problem and the purpose of our effort.
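Since the chapter reports no code, the following minimal Python sketch (ours, with invented rates and rules) may help fix the contrast: the same toy system, a stock of deposits, is written once in SD style, as an aggregate level updated by flows, and once in ERA style, as agents applying individual rules from which the macro series emerges.

```python
import random

# SD view: one aggregate stock updated by flow rates (top-down).
def sd_run(steps=50, stock=1000.0, inflow_rate=0.05, outflow_rate=0.04):
    history = []
    for _ in range(steps):
        inflow = inflow_rate * stock    # flows depend on the level:
        outflow = outflow_rate * stock  # a feedback loop
        stock += inflow - outflow       # integrate the level one step
        history.append(stock)
    return history

# ABM view: many heterogeneous individuals, each with its own rule
# (bottom-up); the macro series emerges from their joint behavior.
class Depositor:
    def __init__(self):
        self.deposit = random.uniform(0.5, 1.5)   # individual heterogeneity

    def step(self, optimism):
        # discrete individual rule: grow or shrink the deposit
        if random.random() < optimism + random.gauss(0.0, 0.1):
            self.deposit *= 1.05
        else:
            self.deposit *= 0.96

def abm_run(steps=50, n_agents=1000, optimism=0.55):
    agents = [Depositor() for _ in range(n_agents)]
    history = []
    for _ in range(steps):
        for agent in agents:
            agent.step(optimism)
        history.append(sum(a.deposit for a in agents))  # macro emerges
    return history

print(sd_run()[-1], abm_run()[-1])
```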

Modeling is embedded in the larger cycle of learning and action constantly taking place in organizations. Indeed, modeling is not a one-shot activity that yields ‘the answer’, but an ongoing process of continual cycling between the virtual world of the model and the real world of action. Simulation models are informed by our mental models and by information gleaned from the real world. Strategies, structures and decision rules used in the real world can be represented and tested in the virtual world of the model: the feedback alters our mental models and leads to the design of new strategies, new structures and new decision rules. New policies are then implemented in the real world, and feedback about their effects leads to new insights and further improvements in both our formal and our mental models. Instead of trying to explain reality, as we do with argumentation and mathematical formalization, with simulation we reproduce it.

Sterman outlines a modeling process made up of five steps: problem articulation, formulation of a dynamic hypothesis, formulation of a simulation model, testing and analysis, and policy design and evaluation. We sketch the behavior of the key concepts and variables, and what their behavior might be in the future; the reference can be real data from records, or an expectation of what might happen in a new situation. The goal is to define the simplest structure that is sensible and capable of generating patterns of behavior that qualitatively resemble the behavior we observe: the model will eventually be validated quantitatively against the available data. Among these structures, we develop dynamic hypotheses which describe the rules and relations that might be the cause of the observed behavior. Finally, with our tool we can look forward and ask: what environmental conditions might arise? What new decision rules, strategies and structures might be tried in the real world? How can they be represented in the model? What are the effects of the policies? How robust are the policy recommendations under different scenarios and given uncertainties? Do the policies interact? Are there synergies or compensatory responses?

RATIONALITY AND CONTROL

The kind of rationality embodied in a decision rule depends on the hypotheses we make about it. In classical economic theory agents decide, and therefore behave, on the basis of an absolute rationality that makes them able to make perfect forecasts and optimal choices. But as Hayek (1945) claims, if we possess all the relevant information, if we can start out from a given system of preferences and if we command complete knowledge of available means, the problem which remains is purely one of logic. In fact, the question of what is the best use of the available means is implicit in our assumptions, and the optimum solution can easily be stated in mathematical form.

Indeed, this hypothesis is very much an abstraction and cannot depict the uncertainty that real agents face in real situations. A better paradigm can be devised in what Herbert Simon (1947) refers to as bounded rationality: in this perspective, a rational decision depends on decision premises, namely knowledge, information, environment and alternatives. In resolving a decision-making problem, all the paths that effectively lead to a solution have to be explored: knowledge plays a predominant role, as it determines which outcomes derive from each alternative and selects a group of alternatives for each strategy. But individuals have limited elaboration capabilities and limited access to information: knowledge of consequences is always incomplete, and only a few of all the possible alternatives are in the minds of individuals. Interaction among individuals comes both from simple rules and from complex (although not consciously formalized) individual decision-making behavior, which involves the environment, information and the cognitive traits of humans, operating on experience gained through concrete interactions.

Social systems are complex and exhibit counterintuitive behavior. Intervention in one part of the system may cause unforeseen reactions in another. If we want to fix something, we are obliged to understand the whole system. Usually we lack a feedback view: we act as if cause and effect were always closely linked in time and space, but often the contrary is true. Much of the complexity and of the inner relationships of the system, which could be fundamental in order to gain rationality and control over it, is therefore lost. But there are instruments that permit us to analyze and formalize problems of greater complexity: just as we have flight simulators, we can have enterprise simulators.

There is more. We need cooperation: institutions and organizations are conceived as models of collective behavior that influence individuals (Arrow 1974). The aim of an organization is to design a structure and behavior (decision rules) such that individuals in their choices can get as close to rationality as possible. We design such structures and behaviors (decision rules), and we look for more rational decisions, with simulation modeling. Representing the physical and institutional structure of a system is relatively simple; a much more complex and delicate endeavor is discovering and representing the decision rules of the actors.

In this framework, we look at the bank as a system, that is, as an organization consisting of many subsystems with hierarchic features and networks of relations. Usually such an organization is made up of different business units, each one functioning and behaving according to its own decision rules, but with the goal of contributing to the overall value and to an enduring path of growth. The number of customers and their preferences change: managers have to continuously monitor the matching between the demand and the supply of services. They have to respond coherently with the structure and rules of the organization, through decisions that should be part of a strategy and that act on the rates at which the resources in the units are organized, used and, if needed, produced. Managers have access to, and can use, only the information available to their business units: on this information their decisions are founded. As a consequence, the global behavior of the bank results from the actions of each single unit. Both are characterized by instability, oscillating around the desired and expected results. Causes of instability can be endogenous—think of organization, skills and capabilities—or exogenous—market trends, national and international macroeconomic scenarios, political economy and non-economic factors. The objective of the management is to identify the sources of instability and to remove or manage them in order to make growth feasible in the mid to long term: this can be done through changes in the organizational structures or in the strategies pursued.

Referring to system characteristics, each control process depends on a negative feedback structure that experiences delays: in the feedback loop the actual state of the system is compared to the desired one, and corrective actions are set in order to reduce the difference and fill the gap. Delays take place in the feedback process between the beginning of corrective actions and the evidence of their effects, and between the changes that take place in the system and the perception of such changes by the decision makers. As a matter of fact, fluctuations depend not only on economic and social causes, but also on the existence of delays in the system, and on the fact that these are often ignored by managers, who do not take into account the time that is needed for their actions to be effective.
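The control problem just described, comparing the actual state to a target and correcting with a delay, can be rendered in a few lines. The sketch below, with invented parameters, shows how even a simple corrective rule makes the system oscillate around the target when the effect of an action arrives some periods after the decision.

```python
from collections import deque

def goal_seeking_with_delay(target=100.0, state=60.0, gain=0.5,
                            delay=2, steps=30):
    """Negative feedback loop: corrections are sized on the current gap
    between desired and actual state, but take effect `delay` steps later."""
    in_transit = deque([0.0] * delay)   # corrective actions not yet felt
    trajectory = []
    for _ in range(steps):
        gap = target - state            # compare actual vs desired state
        in_transit.append(gain * gap)   # decide a correction now...
        state += in_transit.popleft()   # ...while feeling an older one
        trajectory.append(round(state, 1))
    return trajectory

# The state overshoots the target and oscillates around it before settling;
# longer delays or stronger corrections make the swings grow instead.
print(goal_seeking_with_delay())
```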

BANKING LITERATURE

Focusing on ‘new empirical industrial organization’ (NEIO) models, in recent years scholars have studied the banking sector in depth through the structure-conduct-performance (SCP) paradigm. The ‘structure’ is the market, described through barriers to entry, ownership and cost frames, and laws and local regulations. ‘Conduct’ strategies regard price policies, promotions, the introduction of new services (innovation), segmentation and discrimination. The theory is quite simple: the market structure will influence banks’ performance, and banks will react through their conduct and choices. Performance is expressed as efficiency, full employment, revenues and so on. The first work in this direction is by Hannan (1991), who tests with a formal model of the firm the relationships between market structure on one hand and bank loan rates, bank deposit rates and bank profit rates on the other. A good introduction to the theory can be found in Neuberger (1998), a complete review of the SCP literature focusing on the links between theoretical and empirical research: basic conditions and variables of market structure, conduct and performance specific to the banking industry are considered.

SCP has been studied extensively for American banking. In Goldberg and Anoop (1996) two explanations of a positive correlation between profitability and concentration are advanced: the traditional structure-performance hypothesis (SCP) and the efficient-structure hypothesis, based on measurements of X-inefficiency and scale inefficiency. John et al. (2000) examined the effectiveness of concentrating bank regulation on bank capital ratios, proposing a direct mechanism incorporating incentive features of top-management compensation. A set of studies starting with Calem and Carlino (1991) attempted to determine whether banks typically behave competitively or strategically (and whether their conduct is influenced by market concentration).

A bank growth strategy mainly follows two paths: opening new branches, or starting a merger or an acquisition. The first strategy leads to investigating branches’ efficiency (Berger, Leusner and Mingo 1997). The same author also studied the effects of bank M&A (Berger and Humphrey 1994) and of low profitability in small-business and SME lending (Berger et al. 1998; Berger and Udell 2002), investigating aspects such as organizational structure and the relationship-lending model.

More recently, academics as well as central bankers have started to investigate the topology of interbank payment flows in order to reduce systemic risks. The works of Thurner et al. (2003), Boss et al. (2004), Soramäki et al. (2006), Inaoka et al. (2004) and De Masi et al. (2007) described the interbank networks of Japan, Austria, Switzerland and Italy. The general objective is to understand the system’s level of concentration, and to simulate the consequences of catastrophic events by removing a specific node (bank) and measuring the performance of the rest of the network. Interesting techniques have been developed: the Bank of Finland uses large-scale simulation models for settlement systems (e.g., Bedford et al. 2005; Soramäki et al. 2006; Hellqvist and Koskinen 2005); other models are instead based on agent-based simulation (Askenazi 1997; Fioretti 2005; Alentorn et al. 2006; Galbiati and Soramäki 2007). In this direction, Terna (2008) and Arciero et al. (2008) represent banks as agents who exchange payment requests in a real-time gross settlement (RTGS) payment system. Coming back to the concept of network, another research field describes networks of banks and enterprises, as shown in De Masi and Gallegati (2007) and Delli Gatti et al. (2007). There are only a few organizational models of what happens inside a bank, and these mainly refer to queuing theory, as in Banks’ handbook (1998: 161–164), in Vignaux (2008) and in XJ Technologies’ AnyLogic (www.xjtek.com).

In the end, the relationships among the market, customers and banks have been widely explored in the literature. However, a pure management field that analyzes costs and benefits in terms of alternatives, decisions and consequences, and in terms of perspective, growth, development and control, has yet to be developed and has much room to grow.

OUR FRAMEWORK

Our attention is focused on universal banks: financial services companies engaged in several businesses that can be broadly divided into two main areas. The commercial bank activity is devoted to serving customers through loans, mortgages and deposits; the investment bank is interested in mergers, acquisitions and capital markets. In the first case firms borrow money from the bank; in the second the bank acquires part of a firm’s capital, offers advisory services or helps the firm issue shares or bonds on the market. The commercial bank activity is developed in a geographically specific and locally concentrated frame, while investment banking is more oriented towards a multinational environment. In our framework we split the universal bank into three divisions, as in Figure 8.1.

Figure 8.1 Organization of the universal bank. [A Board of Directors heads three divisions: the Commercial Bank Division (Retail, Small Enterprises, Corporate Banking, …), the Investment Bank Division (Investment Banking, Merchant Banking, Structured Finance, M&A Advisory, …) and the Corporate Center (General Management, Planning/Development, Risk Management, Cost & Accounting Control, Human Resources, Investor Relations, Treasury, …).]

Each division is organized according to different criteria. Commercial banking is focused on customer segmentation: very similar products (loans, deposits, credit cards etc.) are offered to various customers with different needs; e.g., a shop is interested in receiving credit and debit card payments, while an employee is interested in paying with a credit card. Investment banking, on the other hand, has highly specialized teams dealing with complex instruments and products devoted to a few customers: large corporates, public entities and other financial institutions.

The third division is the corporate center. Here we group services such as accounting, logistics, IT and human resources that are fundamental, on one side, for the bank to be able to physically deliver its services and, on the other, for it to be managed. In this second respect, most of the entities in this ‘box’ affect the entire organization by providing management and the intended direction of the bank as a firm in and of itself. In fact:

[S]trategic management is an ongoing process that assesses the business and the industries in which the company is involved; assesses its competitors and sets goals and strategies to meet all existing and potential competitors; and then reassesses each strategy annually or quarterly [i.e., regularly] to determine how it has been implemented and whether it has succeeded or needs replacement by a new strategy to meet changed circumstances, new technology, new competitors, a new economic environment, or a new social, financial, or political environment. (Lamb 1984: ix)

Explaining how firms behave, and how this behavior is affected by the consequences of the actions of decision makers, is one of the fundamental issues that define the field of strategy, its priorities and concerns, and the contribution it gives to the theory and practice of management. In the corporate center different functions work towards the government of the bank’s behavior: planning and control evaluates possible ambitions and achievements in terms of economic profitability; marketing provides positioning and customer-needs analysis while setting up communication strategies; investor relations promotes the company and its valuation in the stock markets; and risk management ensures adequate coverage for operational, credit and market risks. Capital allocation, capital being a scarce resource, distributes it among the different business units according to their risk–return profiles. All of these departments gather and elaborate information, supporting the top management. As an example, the development of a new product or service will be investigated from different perspectives: the top management will be fed several opinions, which will be reflected in the decision-making process.

An interesting example of how managers have to deal with both endogenous and exogenous issues is the crisis in financial markets that started in mid-2007 in the American subprime mortgage market. The main cause of this crisis can be found in households’ mortgage defaults, which are the ultimate consequence of two facts. On the one hand, a lack of regulation in the credit system permitted financial institutions to extend lending of extremely bad quality, i.e., to borrowers that were not able to afford the indebtedness. On the other hand, a lack of regulation in the financial system permitted those same mortgages to be packed into structured financial tools (known as ABS, or asset-backed securities) and to be sold to the vast majority of the retail market with no transparency on the risk underlying these instruments. Sketching the outcome: the end of the housing market bubble led to a devaluation of all the ABS that had spread through the market, and therefore of banks’ assets and of their capability to raise funds in the liquidity market. Riskiness, a higher cost of funding, liquidity constraints and a credit crunch follow. Greenlaw et al. (2008) argued that:

[T]he current crisis will abate once one or more of the following three conditions are met. 1. Either, banks and brokers contract their balance sheets sufficiently that their capital cushion is once again large enough to support their balance sheets. 2. Or, banks and brokers raise sufficient new equity capital to restore the capital cushion to a size large enough to support their balance sheets. 3. Or, the perceptions of risk change to a more benign outlook so that the current level of leverage can once again be supported with existing capital.

In Figure 8.2 we have tried to sketch some extremely simplified actions that could be taken in order to cope with this kind of crisis. A fundamental concern of the bank is to control its own financial health through the Core Tier 1 (CT1) ratio. The ratio, which has to be higher than 6 per cent, is that of the bank’s capital, composed primarily of stockholders’ equity, to its assets weighted for their risk (risk-weighted assets, or RWA) according to the Bank for International Settlements (BIS) guidelines (2004). The higher the ratio, the higher the potential loss the bank is able to absorb using its own resources. The bank has financial tools to act both on the numerator (the capital) and on the denominator (assets and risks) of the ratio. During 2007 and 2008 almost every bank had to face an increase in its risks, given the deterioration in quality of a large part of its loans, starting from the household market, with a consequent deterioration of its CT1 ratio. To counter this, the first solution is to increase bank capital through common stock or special instruments such as subordinated-term debt or preferred shares. On the other side, a bank can reduce its exposure. A first option is to resort to securitization, the repackaging of some assets (e.g., mortgages) into securities that are then sold to investors. A bad use of securitization is one of the main causes of the mid-2007 financial crisis.
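The arithmetic of the ratio can be made concrete with a back-of-the-envelope computation; the figures below are invented for illustration and are not drawn from any bank’s accounts.

```python
def core_tier1(capital, rwa):
    """CT1 ratio: the bank's capital over its risk-weighted assets."""
    return capital / rwa

capital, rwa = 6.0, 110.0                          # billions, invented
print(f"{core_tier1(capital, rwa):.1%}")           # 5.5%: below the 6% floor

# Lever on the numerator: raise new equity capital.
print(f"{core_tier1(capital + 0.8, rwa):.1%}")     # 6.2%

# Lever on the denominator: shed risk-weighted assets, e.g. through
# securitization or stricter customer selection.
print(f"{core_tier1(capital, rwa - 10.0):.1%}")    # 6.0%
```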

Figure 8.2 Actions to be taken by banks. [The figure maps actions a bank can take to increase its CT1 ratio (increasing capital through common stockholders’ equity or, e.g., subordinated-term debt; securitization; customer selection for credit quality; reducing market risk to focus on credit activity; adopting advanced risk models, Basel II AMA) against their respective costs: cost of equity, settlement costs, ‘credit crunch’ and systemic risk, reduced revenues, reduced diversification, installation costs.]

A second choice is acting on the business directions by focusing on credit quality: in other words, reducing ‘all’ actual credit lines, or making borrowing more difficult through a severe increase in the cost of loans to riskier counterparts, or to anyone as an unwise last resort. But this behavior could lead to a systemic crisis. Another, more virtuous path would be to improve the customer portfolio by choosing high-return and moderate-risk clients. Finally, a bank can also refocus its business positioning by changing or reducing some of the activities in which it is involved. Of course each solution has pros and cons (the third ‘column’ in the figure), represented by transaction costs, changes in business behavior or a reduction of revenues and of diversification appetite in the long run.

In our theoretical framework the decision making is not given by a simple minimization of costs or maximization of effects (on the CT1 ratio): each solution can receive a different evaluation from each corporate center structure (as in Figure 8.1). These structures can have different time horizons (planners usually think over a 1- to 3-year period, while Treasury spans from 1 to 5 or 10 years) or different priorities. Each solution will be discussed according to several aspects, such as costs and revenues (accounting), long-term strategies and the business plan (planning), the technical capability with respect to risks (risk management), liquidity in the market (Treasury) and analyst and market opinion (investor relations).

In the end, the management has to react to both exogenous and endogenous situations. Its actions at the highest level of the organization affect the micro level of the system, which is its front end to the market. And it is from this same micro level, which consists of firms, consumers and the relations that the bank entertains with them, that the crisis spread and the financial markets began to deteriorate. It therefore becomes of great interest to understand how management decisions on the organization affect both its behavior and the environment in which the system operates. Indeed, such analysis can lead to interesting evidence on how the organization behaves, on how the organization should be shaped (in terms of structure and actions) in order to offer a sound and solid performance, and on what its role inside the economic and social system should be.
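To illustrate the multi-perspective evaluation described above, in which a solution is weighed from several internal viewpoints rather than optimized against a single objective, here is a toy sketch; the departments’ weights and judgments are entirely invented.

```python
# Each corporate-center function scores each option from its own viewpoint;
# the board weighs the opinions instead of optimizing a single objective.
weights = {"accounting": 0.2, "planning": 0.3, "risk management": 0.25,
           "treasury": 0.15, "investor relations": 0.1}

scores = {  # invented judgments on a -1 (against) .. +1 (in favor) scale
    "raise equity": {"accounting": -0.5, "planning": 0.4,
                     "risk management": 0.8, "treasury": 0.6,
                     "investor relations": -0.3},
    "securitize assets": {"accounting": 0.3, "planning": -0.2,
                          "risk management": -0.6, "treasury": 0.7,
                          "investor relations": 0.1},
    "tighten customer selection": {"accounting": 0.1, "planning": 0.5,
                                   "risk management": 0.7, "treasury": 0.0,
                                   "investor relations": -0.4},
}

for option, opinions in scores.items():
    verdict = sum(weights[d] * s for d, s in opinions.items())
    print(f"{option}: {verdict:+.2f}")
```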

OUR MODEL³

Our model depicts the commercial bank’s inner functioning and its relationship with the market, which is made up of customers, namely firms. The description of the bank comprises three different levels that fit the organizational structure: the relationship manager—who deals directly with customers—the branch manager and the head office. At the lowest level of abstraction we model the relationship manager, who acts as a front end for the bank in its relationship with customers and initiates the first step of analysis and synthesis that will in turn be evaluated by the other parts of the organization. The simulation specifically represents a commercial bank in its role of financing firms’ current activities and investments: the decision therefore regards whether or not to give financial support, and depends on the evaluation of the credit capacity of the firm and of the probability that loans will be repaid. In other words, we are describing a process of risk selection.

Commercial banks are strongly hierarchical structures, with a main head office and many branches that act directly on the territory. According to this hierarchy, people’s decision power varies depending on their position and role in the organization: the evaluation is therefore carried out in different steps, following the hierarchy. The higher the amount of the credit line and the riskiness of the operation, the higher in the hierarchy sits the person who will have to decide for or against the granting of the funds. The proposal moves from the account manager to the branch manager and then through the superior levels, up to the main head office. As a consequence, varying the structure of the hierarchy and the distribution of powers within it will affect the development of the process and the behavior of the system; a change in the characteristics of individuals, such as risk aversion or granting power, will have the same effect.

For now, firms are simply described through some characterizing variables, and have no inner processes or dynamic behavior that feeds changes back into the environment. Further steps will describe their complexity in a more precise and defined way: here we concentrate on the behavior of the bank, which is itself a firm, although one carrying out a very particular kind of production and services. It is therefore sufficient for us to define some of the leading characteristics of the firm on which, at first glance, the analysis for credit is made. We describe the firms through the following variables, related to the balance sheet, which represents the patrimonial and economic situation of the enterprise and as such is the principal tool for evaluation: the rating index, a sort of summary judgment on the quality of the balance sheet of an enterprise; revenues and the turnover index (the relationship between turnover and total assets); supplier and bank debts; financing credit; and risk capital. These few variables are those generally addressed in the process of credit evaluation. Our population of enterprises is produced by statistical distributions according to the main features of the Italian market, to personal experience in the field and to the analysis of quite a wide range of firms that can well be considered a representative sample.

As already said, at the lowest level of abstraction we model the relationship manager: here the evaluation is carried out through a mathematical algorithm. Although this is a simplification, and a strong one, credit-granting analysis and evaluation is a very standardized process, and there are even software programs able, on the basis of a few variables, to grant or deny financing. So we sketch this first synthesis in the decision-making process with an algorithm that mimics the judgment criteria that we have empirically verified. Firms move along the process depending on their characteristics: credit affordability is tested, the account is opened, the amounts of the credit lines are proposed and the decision power is evaluated on the basis of amounts and risks. We define three risk classes that represent the credit lines the enterprise could receive after the evaluation process, according to the riskiness and kind of financing: these three lines provide a classification of the power of granting, and determine who will be in charge of the decision.

At the middle level we have the branch manager, whose behavior is more connected with the branch results and the incentives that come from the head office. Branch managers ground their decision on their risk aversion and compare it with an assessment of a general value of the solidity of the firm, the risk capital. If entrepreneurs strongly believe in what they are doing, or have succeeded until now, the risk capital of their firms will be higher. The higher the risk capital, the greater the chance of having the credit granted by the branch manager. More importantly, the risk aversion of the branch manager depends on external factors, namely head office feedback.

The head office, which represents the highest level of the organization, stands at our highest level of abstraction: it has to deal with aggregates, although in this case such aggregates are the outcome of every single action and interaction in the hierarchy below. We choose to focus on a simple issue: a classification of good-quality and bad-quality accounts that leads to defining the overall credit quality. Flows are represented by new financing, divided into three categories according to the rating of each firm granted funds: from A to C, quality decreases. As the overall credit quality decreases, the head office intervenes, communicating a higher risk aversion level to the branch managers: this way, less strong and valuable firms do not gain access to credit, and the overall quality rises again. Then, when a satisfactory level of quality is attained (in the standard case, when there are more good accounts than bad accounts), the head office lowers the branch managers’ risk aversion, and a new commercial push begins. Credit quality emerges, and depends on the operations of the individuals as much as on the interactions among the levels of the hierarchy. As rules change, credit quality changes as well.
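A minimal Python reconstruction of this feedback, with invented thresholds and distributions (the relationship manager’s screening algorithm is omitted), might look as follows; this is our sketch, not the authors’ Swarm implementation.

```python
import random

class Firm:
    """A credit applicant, reduced to the variables used in the decision."""
    def __init__(self):
        self.rating = random.choice("ABC")          # A = best quality
        self.risk_capital = random.uniform(0, 1)    # solidity of the firm

class BranchManager:
    def __init__(self, risk_aversion=0.5):
        self.risk_aversion = risk_aversion

    def decide(self, firm):
        # grant credit when the firm's solidity beats the manager's
        # current risk aversion
        return firm.risk_capital > self.risk_aversion

class HeadOffice:
    """Watches aggregate credit quality and feeds risk aversion back."""
    def review(self, portfolio, managers):
        good = sum(1 for f in portfolio if f.rating in "AB")
        bad = len(portfolio) - good
        for m in managers:
            if bad > good:    # quality deteriorating: tighten the branches
                m.risk_aversion = min(0.9, m.risk_aversion + 0.05)
            else:             # quality satisfactory: new commercial push
                m.risk_aversion = max(0.1, m.risk_aversion - 0.05)

managers = [BranchManager() for _ in range(10)]
head_office, portfolio = HeadOffice(), []
for step in range(100):
    for m in managers:
        applicant = Firm()
        if m.decide(applicant):
            portfolio.append(applicant)
    head_office.review(portfolio, managers)

print(len(portfolio), "accounts,",
      sum(f.rating == "C" for f in portfolio), "rated C")
```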

Figure 8.3 Loans portfolio composition.

Figure 8.4 Credit quality indicator.

Figures 8.3 and 8.4 describe two credit quality indicators. The first is the portfolio composition; the second is an index that expresses overall credit quality. In the simulation a large number of firms have a C rating, while good ones, with an A rating, are very few. The behavior of this complex system results from many interactions and feedbacks. The global behavior is therefore affected by changes in the structure: in the terms of the algorithm at the lower level, in the granting powers, in risk aversion and in the distribution of firms’ characteristics. Further steps will implement a more comprehensive cognitive behavior in both the relationship manager and the branch manager. What is more, the whole bank will find itself in a system competing with other banks, and firms with a more complex inner structure will be able to interact and provide feedback to banks on their decisions. Feedback from the environment will also affect the way in which managers carry out their evaluation processes, determined by the inner structure and organization of the bank in terms of granting powers, commercial spirit and risk aversion. Future improvements will also introduce a pricing system and a return profile for each business unit, in order to open the road to competition.

CONCLUSION

A bank is a complex hierarchical organization that is part of the more complex system of the economic environment: it interacts with it, affects it with its behavior and at the same time is affected by environment feedback and exogenous dynamics. Bank management is therefore a complex issue, in terms of both inner organization and strategic action. Organizations should convey information useful for decision making: from this perspective, a picture of the whole framework is essential in order to understand cause–effect relationships and feedbacks. At this point targets can be evaluated, and a strategic path of action can be designed in order to reach the desired goals in terms of market role, positioning and profitability.

Our agent-based model shows how banks assess and carry out credit and risk evaluation: we have imagined a variation in risk aversion driven by the corporate center, which is something that may result from a change in shareholders’ risk appetite or from an exogenous change in the business situation. The financial crisis is a good example: banks have to drain liquidity from the system and stop, or drastically reduce, their lending activity to reshuffle the credit quality of their assets into a mix more sustainable in turbulent times. This affects firms and their ability to invest, or to survive if in a difficult financial situation. If we open the road to competition, firms (and consumers) can choose from among different banks the one which will give more credit or offer the best prices: each action of each actor will change the situation and affect both the behavior of the system and that of the organization. In a complex environment a good sketch of how causes and effects are related can be of much interest: in banks, as in all great organizations, the highest part of the hierarchy (in our case the corporate center) is often not completely aware (or not aware at all) of what is happening at the lower levels, of what the consequences of its actions will be, of what problems should be addressed or of how it would be best to solve them.

All the evidence that enters the simulation model should come from a strong empirical background and from the observation and formalization of real-life rules of decision making. Simulation can play a significant role in giving an adequate formal representation of real-world systems, a representation that is both structural and dynamic in the sense of the consequences of the actions of the decision makers. Our model shows how a big world can be grown from the bottom up: from micro relations a complex macro system emerges and behaves, and problems that affect the whole environment can emerge to form a representation that starts from the actions and characteristics of individuals. In order to define strategies, or in other words paths of concatenated action towards an objective, a tool like simulation modeling, for analysis and experiment and for understanding the functioning and the consequences of decisions, can be a useful breakthrough.

NOTES

1. The opinions expressed in this chapter are the views of the author and do not necessarily reflect the views and opinions of the Intesa Sanpaolo Group.
2. The opinions expressed in this chapter are the views of the author and do not necessarily reflect the views and opinions of the Unicredit Group.
3. This section describes a model whose methodology was introduced by the authors during Swarmfest 2005. The software adopted is Swarm (http://www.swarm.org), born at the Santa Fe Institute (Minar et al. 1996).

BIBLIOGRAPHY

Alentorn, A., Markose, S., Millard, S. and Yang, J. (2006) Designing Large Value Payment Systems: an agent-based approach, Mimeo.
Arciero, L., Biancotti, C., D’Aurizio, L. and Impenna, C. (2008) ‘Exploring agent-based methods for the analysis of payment systems: a crisis model for StarLogo TNG’, Bank of Italy working paper, 686.
Arrow, K. (1974) The Limits of Organization, New York: Norton.
Askenazi, M. (1997) ‘Some notes on the BankNet model’, Santa Fe Institute working paper.
Bank for International Settlements (2004) International Convergence of Capital Measurements and Capital Standards, Basel Committee on Banking Supervision.
Banks, J. (1998) Handbook of Simulation: principles, methodology, advances, applications, and practice, New York: Wiley-IEEE.
Bedford, P., Millard, S. and Yang, J. (2005) ‘Analysing the impact of operational incidents in large-value payment systems: a simulation approach’, in H. Leinonen (ed.) Liquidity, Risks and Speed in Payments and Settlement Systems—a simulation approach, Bank of Finland Studies.
Berger, A. and Humphrey, D. (1994) ‘Bank scale economies, mergers, concentration, and efficiency: the U.S. experience’, Finance and Economics Discussion Series, 94–123, Board of Governors of the Federal Reserve System.
Berger, A., Leusner, J. and Mingo, J. (1997) ‘The efficiency of bank branches’, Journal of Monetary Economics, 40 (1): 141–62.
Berger, A., Saunders, A., Scalise, J. and Udell, G. (1998) ‘The effects of bank mergers and acquisitions on small business lending’, Journal of Financial Economics, 50 (2): 187–229.
Berger, A. and Udell, G.F. (2002) ‘Small business credit availability and relationship lending: the importance of bank organisational structure’, Economic Journal, 112 (477): 32–53.
Boss, M., Elsinger, H., Summer, M. and Thurner, S. (2004) ‘Network topology of the interbank market’, Quantitative Finance, 4: 677–84.
Calem, P. and Carlino, G. (1991) ‘The concentration/conduct relationship in bank deposit markets’, The Review of Economics and Statistics, 73 (2): 268–76.
Delli Gatti, D., di Guilmi, C., Gallegati, M. and Giulioni, G. (2007) ‘Financial fragility, industrial dynamics, and business fluctuations in an agent-based model’, Macroeconomic Dynamics, 11: 62–79.
De Masi, G. and Gallegati, M. (2007) ‘Debt-credit economic networks of banks and firms: the Italian case’, in A. Chatterjee and B.K. Chakrabarti (eds) Econophysics of Markets and Business Networks, New York: Springer, 159–71.
De Masi, G., Iori, G. and Caldarelli, G. (2007) ‘The Italian interbank network: statistical properties and a simple model’, Proceedings of SPIE, vol. 6601, Noise and Stochastics in Complex Systems and Finance.
Fioretti, G. (2005) Financial Fragility in a Basic Agent-Based Model, Mimeo.
Forrester, J. (1958) ‘Industrial dynamics: a major breakthrough for decision makers’, Harvard Business Review, 36 (4): 37–66.
Forrester, J. (1961) Industrial Dynamics, Cambridge, MA: MIT Press.
Galbiati, M. and Soramäki, K. (2007) A Competitive Multi-Agent Model of Interbank Payment Systems, Mimeo.
Gell-Mann, M. (1994) The Quark and the Jaguar, New York: Freeman.
Goldberg, L. and Anoop, R. (1996) ‘The structure-performance relationship for European banking’, Journal of Banking and Finance, 20 (4): 745–71.
Greenlaw, D., Hatzius, J., Kashyap, A.K. and Shin, H.S. (2008) ‘Leveraged losses: lessons from the mortgage market meltdown’, US Monetary Policy Forum Report, 2.
Hannan, T. (1991) ‘Foundations of the structure-conduct-performance paradigm in banking’, Journal of Money, Credit and Banking, 23 (1): 68–84.
Hayek, F.A. (1945) ‘The use of knowledge in society’, American Economic Review, 35 (4): 519–30.
Hellqvist, M. and Koskinen, J. (2005) ‘Stress testing securities clearing and settlement systems using simulations’, in H. Leinonen (ed.) Liquidity, Risks and Speed in Payments and Settlement Systems—a simulation approach, Bank of Finland Studies.
Inaoka, H., Ninomiya, T., Taniguchi, K., Shimizu, T. and Takayasu, H. (2004) ‘Fractal network derived from banking transaction—an analysis of network structures formed by financial institutions’, Bank of Japan Working Paper Series.
John, K., Saunders, A. and Senbet, L. (2000) ‘A theory of bank regulation and management compensation’, Review of Financial Studies, 13 (1): 95–125.
Lamb, R.B. (1984) Competitive Strategic Management, Englewood Cliffs, NJ: Prentice-Hall.
Minar, N., Burkhart, R., Langton, C. and Askenazi, M. (1996) ‘The Swarm simulation system: a toolkit for building multi-agent simulations’, Santa Fe Institute working paper WP 96–06–042.
Neuberger, D. (1998) ‘Industrial organization of banking: a review’, International Journal of the Economics of Business, 5 (1): 97–118.
Simon, H. (1947) Administrative Behaviour, New York: Macmillan.
Simon, H. (1997) An Empirically Based Microeconomics, Cambridge: Cambridge University Press.
Soramäki, K., Bech, M.L., Arnold, J., Glass, R.J. and Beyeler, W.E. (2006) ‘The topology of interbank payment flows’, New York Federal Reserve Staff Reports, 243.
Sterman, J. (2000) Business Dynamics, New York: McGraw-Hill.
Terna, P. (2000) ‘Economic experiments with Swarm: a neural network approach to the self-development of consistency in agents’ behavior’, in F. Luna and B. Stefansson (eds) Economic Simulations in Swarm: agent-based modelling and object oriented programming, Dordrecht: Kluwer Academic.
Terna, P. (2008) ‘Imaginary or actual artificial worlds using a new tool in the ABM perspective’, in Wivace 2008 Proceedings, Singapore: World Scientific.
Thurner, S., Hanel, R. and Pichler, S. (2003) ‘Risk trading, network topology, and banking regulation’, Quantitative Finance, 3: 306–19.
Vignaux, G.A. (2008) ‘The bank tutorial’. Online. Available HTTP: (accessed 10 February 2009).

9

Changing Roles in Organizations
An Agent-Based Approach

Marco Lamieri and Diana Mangalagiu

INTRODUCTION

An important element in an organization is the concept of ‘role’. A role is a description of an abstract behavior of an individual (agent). An agent acquires the social knowledge and skills necessary to assume an organizational role. The process of acquiring roles may take many forms, ranging from a relatively quick, self-guided, trial-and-error process to a far more elaborate one requiring a lengthy preparation period of education and training followed by a period of official apprenticeship. A role describes the constraints (obligations, requirements, skills) that an agent will have to satisfy to obtain the role, the benefits (abilities, authorizations, profits) that an agent will receive in playing that role and the responsibilities associated with it. A role is also the place for describing the patterns of interaction in which an agent playing that role will have to perform, as the relationships between roles define the expected behaviors of the organizational members (Ferber et al. 2004).

The concept of organizational role has been on the research agenda for a few decades. In 1956, Lieberman investigated the concept of role and role change (Lieberman 1956), and in 1966 Biddle and Thomas introduced role theory (Biddle and Thomas 1966). Later, studies such as Graen’s were dedicated to role-making processes within complex organizations (Graen 1976), and new attempts at theorizing organizational roles and socialization were made (Van Maanen and Schein 1979). More recently, computer scientists and modelers have started to study organizational concepts such as groups, communities, functions and roles using multi-agent systems approaches. Most of these studies take a normative approach to organizations, describing the rules of social behavior in terms of rights and obligations (Macy and Willer 2002; Dignum 2004; Pacheco and Carmo 2003). Other social concepts for controlling social action within organizations, such as influence and authority, have also been investigated, with a focus on social commitment policies (Soriano et al. 2002), social constraints (Barbuceanu 1997) and organizational rules (Zambonelli et al. 2001; DeLoach 2002). However, as most studies focus on the impact of social concepts at the level of individual agents and not at the organizational level, not much is known about the influence of organizational roles, and of their dynamics, on the productivity of the organization. This is the aim of this chapter.

The chapter is organized as follows: first, we provide the background and a review of the literature on organizational issues, with a particular emphasis on organizational structure and roles. Then we discuss the insight the agent-based modeling approach can bring to the understanding of these issues, and we present an agent-based model of organization we have developed. The model considers individual agents embedded in a formal hierarchical structure and in an informal social network, having skills and a role. We present the different components of our model: the role and capabilities of individual agents, the different structures of interaction contributing to the organizational dynamics, and the characteristics of the environment in terms of complexity and stability. We investigate and compare our experimental results from two configurations: a centrally planned system in which the role of agents is predefined, and a decentralized self-organizing configuration in which managers, who have a partial view of the enterprise, define and change agents’ roles according to their perception of the productivity of agents in their social or spatial proximity. Finally, we provide concluding remarks and future research directions.
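Read as a data structure, the description of a role given above might be encoded as in the sketch below; the field names and the example role are our own illustrative choices, not a formalism used by the chapter.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    """A role as described above: constraints to satisfy, benefits gained,
    responsibilities carried, and expected patterns of interaction."""
    name: str
    constraints: list = field(default_factory=list)     # obligations, skills
    benefits: list = field(default_factory=list)        # abilities, authorizations
    responsibilities: list = field(default_factory=list)
    interacts_with: list = field(default_factory=list)  # related roles

team_leader = Role(
    name="team leader",
    constraints=["domain skills", "seniority"],
    benefits=["authority over task assignment"],
    responsibilities=["team productivity"],
    interacts_with=["manager", "team member"],
)
print(team_leader)
```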

ORGANIZATIONS AS COMPLEX SYSTEMS

Definition of Organizations

There is no single definition of an organization. From an economics point of view, a group of individuals is considered to constitute an organization if the group has an objective or a performance criterion that transcends the objectives of the individuals within the group (Van Zandt 1998). According to Gasser (1992), an organization provides a framework for activity and interaction through the definition of roles, behavioral expectations and authority relationships such as control. This definition is rather general and does not provide any guidance on how to design organizations. Jennings and Wooldridge (2000) define an organization in more practical terms as a 'collection of roles, that stand in certain relationships to one another, and that take part in systematic institutionalized patterns of interactions with other roles'. In this chapter, we use the definition introduced by Carley and Gasser (1999):

[O]rganizations are large-scale problem solving technologies, constituted of multiple human and/or artificial agents who are engaged in one or more tasks, are goal directed (goals can change and may not be shared by all organization members), are able to affect and be affected by their environment, have knowledge, culture, memory, history and capabilities distinct from any single agent and have a legal standing distinct from that of individual agents.

A key argument of Carley and Gasser for the existence of organizations, both human and computational, is the need to overcome limitations of individual agency. They identify four basic limitations of individual agency that organizations can help overcome. First, agents have cognitive limitations and can achieve higher levels of performance together. Second, agents are physically limited, in terms of both the resources they have access to and their own location. Third, agents are temporally limited: they can join together to achieve goals that transcend the lifetime of any one agent. And finally, agents are institutionally limited: by joining together they can attain organizational status and act as a sole actor. Therefore, an organization overcoming the limitations of individual agency is constituted of agents with individual behaviors structured into groups that may overlap. The behaviors of agents are functionally related to the whole organization's activity through dynamic relationships based on the concept of roles.

Organizational Structure

According to Chang and Harrington (2006: 1297), the organizational structure is 'the way in which the interrelated groups of individuals are constructed', the main concern being to ensure effective communication and coordination. Ferber et al. (2003: 17) describe organizational structure as 'what persists when components or individuals enter or leave an organization, i.e. the relationships that makes an aggregate of elements a whole'. Both of the preceding definitions emphasize, on the one hand, a partitioning structure, which defines how agents are assembled into groups and how groups are related to each other, and, on the other hand, a role structure, which is defined, for each group, by a set of roles and their relationships. Moreover, the organizational structure defines the set of constraints that agents should satisfy to play a specific role and the benefits accruing to that role. While the partitioning structure is mostly static, the role structure is dynamic, defining the modalities to create and enter groups and play roles. For both partitioning and role structures, the following aspects need to be taken into account in the process of modeling:

1. First is the allocation of information, which refers to how agents receive information from the environment and how this information moves within the organization; in brief, who reports to whom, i.e., the hierarchy.
2. Next is the allocation of authority, that is, who makes the decisions. The concepts of modularization and role are critical here. An organization may have to perform many subtasks in solving a problem, and a key structural issue is how these subtasks are combined into distinct modules, which are then re-integrated to produce an organizational solution. The degree to which a problem can be efficaciously modularized depends on the nature of the task.
3. The organizational norms and culture embedded in organizations are often modeled as a reinforcement process and lead to path dependency in the evolution of the organization (March 1991).
4. The final aspect is the motivations and preferences of the agents.

Organizational Roles

Organizations do not generally operate at the level of individual instructions; rather, roles and responsibilities are defined, which are intended to guide the activities of participants. In role theory (Biddle 1979: 65) a role is defined as 'those behaviors characteristic of one or more persons in a context'. The definition of a role depends on how it is to be used; as we mentioned earlier, in social sciences there are prescriptive, evaluative, descriptive and action definitions of roles (Handy 1999). A prescriptive definition is concerned with what should be done by the individual holding the role. An evaluative definition assesses how a role is being performed. A descriptive definition is based on the actual duties performed when the role is being enacted, and an action definition is based on the actual actions performed when the role is being enacted. According to Thomas and Williams (2005), who analyze the use of role in computer science and sociology, roles are properties; roles are anti-rigid and have dynamic properties.

Odell et al. (2003: 29) define the role as 'a class that defines a normative behavioral repertoire of an agent'. Roles are independent from groups but must be played within groups. A group is defined as a set of two or more agents related through their role assignments. Roles can be composed of other roles and can have acquaintance associations with other roles, denoting that interaction may occur between instances of the roles. Roles can be allocated endogenously, by emergent self-organization, or exogenously, by the designer of the model. Self-organizing role allocations are robust to change and particularly suited for domains that are subject to unexpected change. Odell et al. make a distinction between horizontal specialization and vertical specialization in the configuration of a role. Horizontal specialization is concerned with the number and complexity of actions supported by the role. Simple roles, with one or two highly specialized actions, provide building blocks which are both easy to understand and simple to develop with, and are often used in systems where agents can perform their actions independently of one another. In other systems it may be desirable to have versatile agents with very little horizontal specialization. Vertical specialization addresses the degree of control in a role, over both the actions of the role concerned and the actions of other roles. The primary purpose of this is to ensure that desired goals are achieved. A highly vertically specialized role might require actions to be carried out only under the direction of another. A broader specialization might be a role responsible for the decision-making and planning tasks for a group of agents. A crucial aspect of a role is how it interacts with other roles. The relationship may specify goals that an agent of one role may attempt to achieve for an agent of another role, and once again, there may be norms attached to these goals.

There are a few models investigating the interaction between roles. Among them, AGR, for agent/group/role, is a simple though powerful and generic organizational model of multi-agent systems developed by Ferber et al. (2004). The AGR model is based on three primitive concepts, agent, group and role, that are structurally connected and cannot be defined by other primitives; they satisfy a set of axioms that unite these concepts. An agent is an active, communicating entity, playing roles within groups. An agent may hold multiple roles and may be a member of several groups (partitions, networks). The agent's position within the organization and its relations can be described in terms of centrality, density of relations and multiplexity. An important characteristic of the AGR model is that no constraints are placed upon the architecture of an agent or its mental capabilities. A group is a set of agents sharing some common characteristic; it is used as a context for a pattern of activities and also for partitioning organizations. Two agents may communicate if and only if they belong to the same group, but an agent may belong to several groups.
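These AGR primitives are compact enough to state directly in code. The following is a minimal, illustrative Java sketch of our own (not the implementation by Ferber et al.); all class and member names are our invention:

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Minimal AGR primitives: an agent may hold several roles and belong to
    // several groups; a role is always played within a group; communication
    // is possible only between agents sharing at least one group.
    class Group {
        final String name;
        final Set<Agent> members = new HashSet<>();
        Group(String name) { this.name = name; }
    }

    class Agent {
        final String name;
        // Roles are tracked per group, since a role is played within a group.
        final Map<Group, Set<String>> rolesByGroup = new HashMap<>();
        Agent(String name) { this.name = name; }

        void join(Group g, String role) {
            g.members.add(this);
            rolesByGroup.computeIfAbsent(g, k -> new HashSet<>()).add(role);
        }

        boolean canCommunicateWith(Agent other) {
            // AGR axiom: two agents may communicate iff they share a group.
            for (Group g : rolesByGroup.keySet())
                if (other.rolesByGroup.containsKey(g)) return true;
            return false;
        }
    }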

AGENT-BASED MODELS FOR ORGANIZATIONS

One way of addressing organizations' complexity issues is by designing and analyzing computational models (agent-based models, or ABM). This approach has caught the attention of a number of scholars in organizational science (Burton and Obel 1995; Carroll and Burton 2000; Lomi and Larsen 2001; McCallum et al. 2004; Nissen et al. 2006) and is increasingly making significant contributions to management practice. As they become validated, calibrated and refined, computational simulation models of organizations are increasingly used as organizational design tools for predicting and mitigating organizational risks or as 'virtual synthetic experiments' (Levitt 2004; Lin et al. 2006). Agent-based models view organizations as collections of agents interacting with one another in their pursuit of assigned tasks. An ABM does not need to answer the question of "the organization's utility function": it is sufficient to instantiate agents, let organizational behavior emerge from the interaction of agents among themselves and with the environment, and then measure the organization's performance (Bonabeau 2002). In this framework, the performance of an organization is seen most of the time as being determined by the structure of interactions among agents, which defines the lines of communication, allocation of information-processing tasks, distribution of decision-making authority and provision of incentives (Chang and Harrington 2006). It may involve the frequency or the average time to reach a particular target (a global optimum, for example) or the accuracy of the organization's decisions or solutions. Organizations are considered as complex, non-linear, dynamic and highly interactive systems whose dynamics result from adaptive behavior both at the individual level and at the structural level (Carley and Svoboda 1996).

One computational approach to organizations focuses on simulation in order to re-create reality, to perform "what if" analysis, to model stocks and flows of resources, information and other variables in supply chains and enterprises and to study the evolution of actual organizations. PowerSim,1 Stella2 and Java Enterprise Simulator3 are examples of this approach. A number of agent-based model development platforms such as Swarm,4 StarLogo,5 Agentsheets,6 AGR7 and OperA8 allow rapid specification, simulation and analysis of multi-level agent-based system models. Such development platforms are beginning to be used for designing project and program organizations. Other approaches focus on organization formation (Stacey 1995; McKelvey 2004; Axtell 1999) or on the adaptation and evolution of existing organizations (Carley et al. 1999; Miller 2001; Dupouet and Yildizoglu 2003). The processes of self-organization and emergence at play in organizations have specificities that are tied to the context in which they take place. Organizations are structured by routines and power relations that constrain and resist nascent processes. As a consequence, self-organization and emergent changes find themselves embedded in and constrained by a relatively rigid strategic and organizational context. Commercial-grade modeling tools such as Organizational Consultant (Burton and Obel 2004) are designed to diagnose and repair organizational misfits in terms of contingency theory findings for an organization. SimVision®, based on the Virtual Design Team research prototype for designing the micro work processes and organization structure of projects, programs and matrix organizations (Jin and Levitt 1996), and Brahms (Sierhuis et al. 2007) are other examples of computational tools for modeling and simulating team and group communication, work processes and practices.

Recently, particular interest has been given to the use of organizational concepts within multi-agent systems, where the concepts of organizations, groups, communities, roles, functions etc. play an important role (Ferber et al. 1998; Costa and Demazeau 1996; Zambonelli and Parunak 2002). The notion of role in multi-agent systems is broadly concerned with describing how agents behave and interact with one another within an organization, and roles are considered distinct from the agents who hold them. Roles provide an abstraction from individual agents and can be used to capture social aspects of organizations as well as action-oriented ones.

OUR MODEL

The proposed model is based on the framework proposed in Lamieri and Mangalagiu (2008). The aim of that model was to understand the effect of formal and informal structures on an organization's performance. This contribution moves further and introduces the concept of reorganization, adding a dynamic dimension to the agents' behavior. We define the concepts of role and function and we investigate how the management of agents' roles affects the organization's labor productivity and overall performance. Agents in this model can be relocated within the organization, changing their profession: we investigate how a reorganization driven by simple behavioral rules, based on agents' productivity, can lead to an efficiency path that drives the organization to achieve lower production costs and, most importantly, a more equal workload among agents.

The formal structure is defined as the organizational hierarchy. This view is static and represents the functional division of work within the organization. In the formal structure, each agent has control over the agents below them and can modify this structure by deleting or creating ties. The informal structure is defined as the personal relations between agents within the organization and influences the way the organization performs a task. It is dynamic, emerges from the generative interactions during task performance and affects the performance of the simulated organization. Both views are modeled as networks where nodes represent the agents and links represent the relations between agents, as in Figure 9.1.

Figure 9.1 Example of a simulated organization as the interplay of a formal and informal network.

The agents are boundedly rational, with cognitive and informational constraints, and they have only a partial view of the organization. Each agent is autonomous, has skills and learning capabilities, accumulates experience and is able to perform a part of a task. The agents either improve their skills while performing tasks (learning by doing) or lose them through a forgetting mechanism. An agent's endowment is a set of abilities (its skill set) and a position in the hierarchical structure. We consider a population of agents A = {a1, a2, . . ., an} characterized by heterogeneous professions. The profession is the formal competence of an agent related to its position. Agents at the upper levels of the hierarchy can perform a part of the task (if they have the required profession), allocate the task to lower-level agents skilled enough to perform it or, as a last option, pass the task to someone higher in the hierarchy, who follows the same procedure. The performance of an agent is defined as the aggregate performance of the agents below that agent.

We consider an exogenous environment E(T,FN,SN), which sends tasks T1, T2, . . ., Tn to the organization. A task T = {s1, s2, . . ., sm} is a sequence of steps s to be performed, represented by integers drawn from a distribution. Performing each step requires a specific profession P(s). In an indirect way, the task set describes the environment; for a firm, the task is a metaphor of the market in terms of demand size and product differentiation. We make the following assumptions on task execution in order to define the simulation dynamics:

1. Task steps are sequential: if the first step has not been performed, the task cannot move to the second step.
2. Task values are drawn from uniform and normal distributions with different parameters; Table 9.1 gives an example of a task set.

Table 9.1 Example of Task Set

    Task Set T
    T1 = {10, 11, 13, 14}
    T2 = {11, 15, 13, 22}
    T3 = {15, 17, 13, 14}
    T4 = {11, 16, 10, 13}

The algorithm defining the dynamics of the model is quite simple and can be summarized as follows:

1. At each time step, an agent a1, chosen at random, receives a task T1 from the environment.

2. The agent a1 receiving the task performs its first step s1 if the agent's profession fits the one required by the first step of the task T1.
3. Agent a1 looks for another agent a2, connected to it through the informal network IN(A), able to perform the next step.
4. If step 3 cannot be performed, agent a1 sends the task to the next higher level in the formal network FN(A), where the upper-level agent allocates it to another agent.
5. The process continues, iterating steps 1 to 4, until all steps s1 . . . sm of task T1 have been performed. If one step of the task cannot be performed by the organization, the whole task is rejected. Task rejection occurs when the task arrives at the top agent of the hierarchy, either directly from the environment or pushed up within the organization, and no agent below the top agent is able to perform one or more steps of the task.
6. At the end, the task T1 is completed or rejected by the organization and a new task T2 arrives from the environment; the process starts again from step 1.
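Read as code, steps 1 to 6 amount to a simple routing loop. The sketch below is an illustrative plain-Java rendering of our own, not the authors' JavaSwarm implementation; the Org interface and the helper names (informalNeighbourWith, formalSuperior, allocateInSubtree) are hypothetical stand-ins for the formal and informal network lookups:

    import java.util.List;

    // Minimal stubs so the routing loop compiles; names are illustrative.
    interface Org {
        Agent informalNeighbourWith(Agent a, int profession); // null if none
        Agent formalSuperior(Agent a);                        // null at the top
        Agent allocateInSubtree(Agent boss, int profession);  // null if none
    }

    class Agent {
        int profession;
        void perform(int step) { /* execute the step; skills updated elsewhere */ }
    }

    class TaskRouter {
        /** Routes one task through the organization, following steps 1-6. */
        static boolean processTask(List<Integer> task, Agent entry, Org org) {
            Agent current = entry;                      // step 1: task arrives at a random agent
            for (int required : task) {                 // steps are strictly sequential
                while (current.profession != required) {
                    Agent friend = org.informalNeighbourWith(current, required);
                    if (friend != null) {
                        current = friend;               // step 3: costless informal hand-off
                    } else {
                        Agent boss = org.formalSuperior(current);
                        if (boss == null) return false; // step 5: rejected at the top
                        Agent delegate = org.allocateInSubtree(boss, required);
                        current = (delegate != null) ? delegate : boss; // step 4
                    }
                }
                current.perform(required);              // step 2: execute the step
            }
            return true;                                // step 6: task completed
        }
    }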

At the beginning of the simulation, a formal structure is set; no predefined informal structure is assumed and the ties in the informal structure are created over time. The probability at each time tick of creating an informal link between agent ai and agent aj is defined as

P_c = N(μ = 1/D_FN(ai, aj); σ = 1),

where D_FN(ai, aj) is the distance, in the formal network FN, between agent ai and agent aj. The probability P_c is thus drawn from a normal distribution centered on the reciprocal of the distance D_FN(ai, aj). This algorithm has been shaped to capture the difficulty workers have in getting in touch with a manager in a higher hierarchical position. This feature is justified by the cognitive cost of maintaining a social relation within an organization, a cost expressed both in the time needed to keep the relation alive and in cognitive effort. In this respect, the hierarchical organization is created in order to filter and limit the number of people who report to each manager; in the same spirit, this algorithm attaches a lower probability to the creation of a social tie with a high-level manager.

The informal structure is dynamically shaped by an evolutionary algorithm that reinforces the most frequently used links and lowers the strength of the less frequently used ones. If an informal link becomes too weak, it is removed according to a threshold defined exogenously by the parameter γ. Links connecting upper-level agents are general and can serve different tasks, while lower-level links are more task-specific. Once new informal links are established, agents use them to pass tasks, thus increasing the organization's performance, since there is no cost associated with passing a task through informal links.

The informal structure is a metaphor of a problem decomposition process. The model describes the subdivision of the overall structure of a task T so that the steps s become manageable and the system intelligible. Through decomposition, reciprocal influence among components is reduced; what remains is resolved by the formal and the informal structures. The resulting performance of the simulated organization can be obtained directly by separating sets of operations (steps) with the highest internal interdependence. The emergence of clusters in the informal network describes isolated subsystems that can be left to specialized decision units without affecting the final outcome of the system. The performance of the organization in this sense is related to the opportunity both to decompose the task into separate subtasks and to allocate them to subsystems (clusters of agents connected in the informal network). The tasks that pass through the hierarchical structure can be considered residual operations not completely decomposed. The execution of such tasks cannot be completed within a single informal cluster of agents (we can imagine it operating as a single office within a division) because one or more required skills are not present in the informal cluster. In this case, the task needs to be re-allocated by a manager to an agent not connected with the cluster. This decomposition process has a cost and follows the rules imposed by the organizational structure. The processing cost associated with the decomposition is high, considering the coordination needed among the different levels of the organization, while the processing cost for the emerging informal structure is negligible, since the task is processed without the need for further coordination.
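The link dynamics just described can be sketched compactly as well. The following illustrative Java fragment is our own reading, not the authors' code: since P_c as written denotes a normal distribution rather than a single number, we read it here as a Gaussian draw centred on 1/D_FN, clipped to [0,1]; the three rate parameters are the ones used in the experiments reported later in the chapter:

    import java.util.Random;

    class InformalNetwork {
        final Random rng = new Random();
        double reinforcement = 0.1; // ties reinforcement rate (experimental setup)
        double decay = 0.001;       // ties decay rate
        double gamma = 0.1;         // destruction threshold (parameter gamma)

        /** One reading of P_c: a draw from N(mu = 1/D_FN(ai,aj), sigma = 1),
         *  clipped to [0,1], so formally distant agents rarely connect. */
        double creationProbability(int formalDistance) {
            double p = 1.0 / formalDistance + rng.nextGaussian();
            return Math.max(0.0, Math.min(1.0, p));
        }

        /** Evolutionary update of a tie: reinforce it when used, let it decay
         *  otherwise. */
        double updatedStrength(double strength, boolean usedThisTick) {
            return strength + (usedThisTick ? reinforcement : -decay);
        }

        /** A tie whose strength falls below gamma is removed. */
        boolean shouldRemove(double strength) {
            return strength < gamma;
        }
    }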

Changing Roles Dynamics

In our setting, we consider two sets of parameters: the characteristics of the agents with respect to their skills and the characteristics of the tasks. The decision to change a role is based on agents' productivity Φ. An agent's productivity in performing the task T is defined as

Φ(a, T) = ρ(T, a) / L(T),

where ρ(T, a) is the number of steps of task T processed by agent a and L(T) is the length of the task (the number of steps to be performed). This measure is affected both by the number of steps in the task (and indirectly by the environment) and by the formal and informal structure of the organization. A centered productivity distribution shows an efficient allocation of resources, while a positively skewed productivity distribution (the right tail is longer and the mass of the distribution is concentrated on the left) signals a structural inefficiency in which a few workers perform many steps while a larger share of workers have low productivity.

In order to achieve a bounded optimal allocation of roles within the organization, a simple behavioral rule has been implemented (changing rule, CR): every time a manager receives a task from a lower level, the manager inspects the subordinates' productivity; if the productivity of one or more agents in the team is lower than the average productivity of the team, the manager selects the least productive agent and changes that agent's role to the profession required by the task. This rule is inspired by a decentralized decision principle (every manager affects only their subordinates) and by bounded rationality (every manager changes roles only according to the task currently being handled, with no memory of past history and no forward-looking behavior). A simple UML representation of the JavaSwarm model is reported in Appendix B.
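In code, the productivity measure and the CR rule fit in a few lines. Again, this is an illustrative sketch of our own with invented names, not the JavaSwarm model documented in Appendix B:

    import java.util.List;

    class ChangingRule {
        /** phi(a, T) = rho(T, a) / L(T): share of the task's steps done by the agent. */
        static double productivity(int stepsProcessedByAgent, int taskLength) {
            return (double) stepsProcessedByAgent / taskLength;
        }

        /** CR rule: when a manager receives a task from below, re-assign the
         *  least productive subordinate (if below the team average) to the
         *  profession the current task requires. Local and memoryless. */
        static void apply(int requiredProfession, List<Worker> team) {
            double avg = team.stream().mapToDouble(w -> w.productivity).average().orElse(0.0);
            Worker weakest = null;
            for (Worker w : team)
                if (weakest == null || w.productivity < weakest.productivity) weakest = w;
            if (weakest != null && weakest.productivity < avg)
                weakest.profession = requiredProfession;
        }
    }

    class Worker {
        int profession;
        double productivity;
    }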

RESULTS

Given the way the organization is formalized in this model, the definition of performance we consider is straightforward. In order to compute the performance of the organization in this setting, we use four simple indicators that can be computed at the task level. For a detailed description of the performance indicators used, please refer to Lamieri and Mangalagiu (2008); a synthetic definition of the performance measures is reported in Appendix A. For benchmarking purposes we considered three different organizational structures, as described in Table 9.2.

Table 9.2 Formal Structures Used in Experiments

               Flat         Balanced     Unbalanced
    Level 1    1 Agent      1 Agent      1 Agent
    Level 2    12 Agents    4 Agents     3 Agents
    Level 3    72 Agents    16 Agents    11 Agents
    Level 4                 64 Agents    6 Agents
    Level 5                              64 Agents

For each structure, we ran two different sets of experiments:

• considering only the formal network, without any informal link creation;
• considering both the formal and the emerging informal network.

We ran different experiments considering distributions of task set values: two normal, N(μ=8, σ=1) and N(μ=8, σ=3), and one uniform, U(min=0, max=15). The normal versus uniform distribution of task sets mimics different environments: the normal with σ=1 reflects the most "stable" environment, while the uniform distribution mimics a more "fluctuating" environment. Our aim is to test the sensitivity of performance to different environments. For all the experiments we used the same fixed parameters, in order to compare the results from different experiments:

• Population: 85 agents
• Task set parameters:
  • Task length: 10
  • Task set domain: [0,15]
  • Task batch size: 100
• Network evolution parameters:
  • Ties reinforcement rate: 0.1
  • Ties decay rate: 0.001
  • Ties destruction threshold: 0.1

In Table 9.3 we can observe a dramatic increase in average agents' productivity due to the introduction of the changing role rule. Without changing roles, only a few agents perform all the tasks, leading to a very right-skewed distribution, and the average is pulled close to zero. The beneficial increase in average productivity is higher in a stable environment, formalized by a normal-distribution task set.

Table 9.3 Average Agents' Productivity (%) With and Without the Changing Role Rule

    Structure     Task Set         No Change Role    Change Role
    Balanced      Normal(8,1)      0.054             16.588
    Balanced      Uniform(0,15)    0.081             15.289
    Flat          Normal(8,1)      0.044             13.489
    Flat          Uniform(0,15)    0.070             12.404
    Unbalanced    Normal(8,1)      0.051             14.383
    Unbalanced    Uniform(0,15)    0.076             13.542

Table 9.4 Average Agent's Productivity in Experiments with the Changing Role Rule, without the Informal Network

    Task Set         Formal Structure    Manager    Worker
    Fixed(1–15)      Balanced            0.46       0.08
    Normal(8,1)      Balanced            0.90       0.08
    Uniform(0,15)    Balanced            0.78       0.13
    Fixed(1–15)      Flat                0.13       0.14
    Normal(8,1)      Flat                0.31       0.14
    Uniform(0,15)    Flat                0.23       0.14

The adaptability of the organization to the environment induced by the changing role rule is more effective in a stable environment, where the target role configuration is able to distribute the workload more uniformly. In an unstable environment the benefit of the target role structure cannot be fully exploited, because the fast-changing task set requires continuous adaptation.

If we consider managers and workers separately (Table 9.4), we can see that the variance of productivity within the class is much higher for managers than for workers. Managers' productivity is computed as the average productivity of the workers they supervise; in this respect, managers' productivity can be interpreted as the average office/department productivity.

Figure 9.2 Number of changing roles with and without the informal network (number of role changes, roughly between 40 and 52, over simulation ticks 0 to 100).

The variance is surprisingly high for managers, showing an emerging clustering of productive workers within the same office/department. This is particularly true for the balanced structure, with four hierarchical levels and 21 managers, where the concentration can be higher within small teams. In the flat structure, where offices/departments are bigger and there are only 13 managers, the concentration of highly productive workers takes place in a milder way. As we saw in the previous table, the effect of the changing role rule is stronger in a more stable environment.

Looking at Figure 9.2 we can observe the evolution of the organizational role configuration over time, in terms of the number of agents changing roles.9 After an initial period of reorganization (50 role changes on average), the model shows a drop to a stationary 45 role changes.

One of the key results emerging from the model can be observed in Figure 9.3. The number of passages required to complete the task set is significantly different among the three considered structures if we look at fixed roles. In this case the flat structure seems to perform better than the balanced structure, and the unbalanced structure requires the highest number of passages. When we introduce the changing role rule, the organization gains flexibility and adapts the role configuration to the tasks required, alleviating the inefficiency embedded in the hierarchical structures. In this case the differences in performance among the three considered structures become negligible.

Figure 9.3 Average number of passages to complete a task set.

CONCLUSION

In this chapter we presented an agent-based model of organization aimed at investigating the concept of organizational role and its effect on the productivity of the organization. The agents in our model are embedded in a formal hierarchical structure and in an informal social network. They have skills and roles, which can change dynamically. In order to test the simulated organization's productivity, we implemented three different hierarchical structures: a flat, a balanced and an unbalanced structure. We investigated and compared our experimental results from two configurations: a centrally planned system in which the role of agents is defined by their manager, who has a partial view of the enterprise, and a decentralized self-organizing configuration in which the agents define and change roles according to their perception of the productivity of agents in their social or spatial proximity.

The introduction of the changing roles dynamics leads to a dramatic increase in the average agent's productivity and therefore alleviates the rigidity and the inefficiencies of a hierarchical structure, minimizing costs and the number of transitions. The stability of the environment is a crucial factor affecting the evolution of agents' productivity: an unstable environment leads to a higher number of role changes and higher adaptation costs, related also to the high cognitive cost borne by the agents adapting to it.

As future directions of this research, we intend to improve the realism of the model using empirical data. The empirical data needed to fit this model are both macro (economic performance of the organization, formal structure and information about the specific industry the organization operates in, needed to model the environment) and micro, at the agent level (workers' and managers' behaviors, agents' skills). The economic information about the firm's performance can be extracted from balance sheets, and the stylized description of the industry can be obtained from publicly available information. For example, we are envisaging using data on consumption and production at the product level, defined by six-digit ATECO codes for every industry, provided by the Italian Statistical Institute. We envisage obtaining information about agents' skills and roles using surveys of the kind often carried out by consulting firms to assess organizational change.


APPENDIX A

Completion Time Γ

This is the simplest measure of performance proposed in the model: it counts the number of simulation time ticks needed to perform all the steps in the task. This value is a raw proxy of the processing cost of the task, but it does not separate the effect of hierarchical organization efficiency from the effect of informal network efficiency. The value can be decomposed into the number of transactions through the formal network (Γ_FN) and the number of transactions through the informal network (Γ_IN):

Γ = Γ_FN + Γ_IN

The higher this value, the higher the production process cost, determined by both hierarchical task-allocation inefficiency and informal task-decomposition inefficiency.

Hierarchical Transaction Number Γ_FN

The number of transitions through the formal network, Γ_FN, is a proxy of the efficiency of the informal allocation of activity within the organization. The lower this value, the better the organization is able to decompose the task into subtasks that can be performed within an informal cluster of agents without the need for further coordination by managers. The higher this value, the lower the hierarchical coordination efficiency.

Task Completion Cost C(T_s)

The cost C of performing step s of task T by agent a, given the organization's structure, is defined as

C(T_s) = C_p(a_s) + C_t(a_{s-1}, a_s).

This cost is divided into two components:

C_p(a_s) = 1 / S_{a_s}

C_t(a_{s-1}, a_s) = 1 / min(Θ_{a_s}, Θ_{a_{s-1}})

where C_p(a_s) is the cost of performing step s by agent a, considering agent a's skill S in step s, and C_t is the transaction cost for the task moving from agent a_{s-1} to agent a_s, with Θ_a the hierarchical position (level) of agent a. The effect of workers' skills on the cost component C_p is negative,

∂C_p / ∂S = −1 / S_{a_s}^2,

and an increase of one unit of skill leads to a reduction in the associated performing cost. The cost component C_t is affected by the formal structure of the organization: the lower the average distance among agents, the lower the coordination cost associated with moving the task between two agents.

APPENDIX B

Figure 9.4 UML representation of the JavaSwarm model.

NOTES

1. http://www.powersim.com.
2. http://www.hps-inc.com/edu/stella.
3. http://web.econ.unito.it/terna/jes (Terna 2006).
4. http://www.swarm.org.
5. http://www.media.mit.edu/starlogo.
6. http://agentsheets.com.
7. Ferber et al. (2004).
8. Dignum (2004).
9. The results are the average of all the considered experiments.


BIBLIOGRAPHY

Axtell, R. (1999) 'The emergence of firms in a population of agents: local increasing returns, unstable Nash equilibria, and power law size distributions', CSED Working Paper, Economic Studies, The Brookings Institution.
Barbuceanu, M. (1997) 'Coordinating agents by role based social constraints and conversation plans', in Proceedings of the 14th National Conference on Artificial Intelligence, 16–21.
Biddle, B.J. (1979) Role Theory: expectations, identities and behaviors, New York: Academic Press.
Biddle, B.J. and Thomas, E.J. (eds) (1966) Role Theory: concepts and research, New York: Wiley.
Bonabeau, E. (2002) 'Agent-based modeling: methods and techniques for simulating human systems', PNAS, 99 (3): 7280–7287.
Burton, R. and Obel, B. (1995) 'The validity of computational models in organization science: from model realism to purpose of the model', Computational and Mathematical Organization Theory, 1 (1): 7–71.
Burton, R.M. and Obel, B. (2004) Strategic Organizational Diagnostics and Design: the dynamics of fit, Dordrecht: Kluwer Academic Publishers.
Carley, K. and Gasser, L. (1999) 'Computational organization research', in G. Weiss (ed.) Multi-Agent Systems: a modern approach to distributed artificial intelligence, Cambridge, MA: MIT Press.
Carley, K., Prietula, M. and Lin, J. (1999) 'Design versus cognition: the interaction of agent cognition and organizational design on organizational performance', in R. Conte and E. Chattoe (eds) Evolving Societies: the computer simulation of social systems.
Carley, K. and Svoboda, D. (1996) 'Modeling organizational adaptation as a simulated annealing process', Sociological Methods and Research, 25 (1): 138–68.
Carroll, T. and Burton, R.M. (2000) 'Organizations and complexity: searching for the edge of chaos', Computational & Mathematical Organization Theory, 6 (4): 319–337.
Chang, M.H. and Harrington, J. (2006) 'Agent-based models of organizations', in L. Tesfatsion and K. Judd (eds) Handbook of Computational Economics, Vol. 2: agent-based computational economics, Handbooks in Economics Series, Amsterdam: North Holland.
Costa, A.C.R. and Demazeau, Y. (1996) 'Toward a formal model of multi-agent systems with dynamic organizations', in Proceedings of ICMAS 96, Second International Conference on Multi-Agent Systems, Kyoto: IEEE, 417–431.
DeLoach, S.A. (2002) 'Modeling organizational rules in the multi-agent systems engineering methodology', in Proceedings of the 15th Canadian Conference on Artificial Intelligence, Calgary, Canada.
Dignum, V. (2004) 'A model for organizational interaction: based on agents, founded in logic', unpublished doctoral thesis, Utrecht University.
Dupouet, O. and Yildizoglu, M. (2003) 'Organizational performance in hierarchies and communities of practice', WEHIA, Kiel, Germany, 28–30 May.
Ferber, J. and Gutknecht, O. (1998) 'Aalaadin: a meta-model for the analysis and design of organizations in multi-agent systems', in Third International Conference on Multi-Agent Systems, Paris: IEEE, 128–35.
Ferber, J., Michel, F. and Gutknecht, O. (2003) 'Agent/group/roles: simulating with organizations', Agent Based Simulation 4, Montpellier, 28–30 April.
Ferber, J., Gutknecht, O. and Michel, F. (2004) 'From agents to organizations: an organizational view of multi-agent systems', in AOSE IV: 4th International Workshop, LNCS 2935, Berlin: Springer, 443–59.
Gasser, L. (1992) 'An overview of DAI', in L. Gasser and N.M. Avouris (eds) Distributed Artificial Intelligence: theory and praxis, Dordrecht: Kluwer Academic, 9–30.
Graen, G. (1976) 'Role making processes within complex organizations', in M.D. Dunnette (ed.) Handbook of Industrial and Organizational Psychology, Chicago: Rand McNally.
Gutknecht, O., Ferber, J. and Michel, F. (2001) 'Integrating tools and infrastructures for generic multi-agent systems', in J.P. Muller, E. Andre, S. Sen and C. Frasson (eds) Proceedings of the 5th International Conference on Autonomous Agents, Montreal, Canada: ACM Press, 441–8.
Handy, C.B. (1999) Understanding Organisations, 5th edn, London: Penguin Books.
Jennings, N.R. and Wooldridge, M. (2000) 'Agent-oriented software engineering', in J. Bradshaw (ed.) Handbook of Agent Technology, Cambridge, MA: AAAI/MIT Press.
Jin, Y. and Levitt, R.E. (1996) 'The virtual design team: a computational model of project organizations', Journal of Computational and Mathematical Organization Theory, 2 (3): 171–195.
Lamieri, M. and Mangalagiu, D. (2006) 'Efficiency and evolution of hierarchical organizations', in Physik sozio-okonomischer Systeme (AKSOE) Proceedings, Dresden, 27–31 March 2006.
Lamieri, M. and Mangalagiu, D. (2008) 'Interactions between formal and informal organizational networks, formation of routines and performance in hierarchical structures: an agent-based approach', in V. Dignum (ed.) Handbook of Research on Multi-Agent Systems: dynamics of organizational models, London: IGI.
Levitt, R.E. (2004) 'Computational modeling of organizations comes of age', Journal of Computational and Mathematical Organization Theory, 10 (2): 127–45.
Lieberman, S. (1956) 'The effects of changes in roles on the attitudes of role occupants', Human Relations, 9: 467–86.
Lin, Z., Zhao, X., Ismail, K. and Carley, K.M. (2006) 'Organizational design and restructuring in response to crises: lessons from computational modeling and real-world cases', Organization Science, 15: 598–618.
Lomi, A. and Larsen, E.R. (2001) Dynamics of Organizations: computational modeling and organization theories, Menlo Park, CA: American Association of Artificial Intelligence Press.
Macy, M.W. and Willer, R. (2002) 'From factors to actors: computational sociology and agent-based modeling', Annual Review of Sociology, 28: 143–66.
March, J.G. (1991) 'Exploration and exploitation in organizational learning', Organization Science, 2 (1): 71–87.
McCallum, M., Norman, T.J. and Vasconcelos, W.W. (2004) 'A formal model of organizations for engineering multi-agent systems', in ECAI Workshop on Coordinating Emergent Agent Societies.
McKelvey, B. (2004) 'A "simple rule" approach to CEO leadership in the 21st century', in P. Andriani and G. Passiante (eds) Complexity Theory and the Management of Networks, Imperial College Press.
Miller, J.H. (2001) 'Evolving information processing organizations', in A. Lomi and E.R. Larsen (eds) Dynamics of Organizations: computational modeling and organization theories, Menlo Park, CA: AAAI Press/The MIT Press.
Nissen, M.E., Orr, R.J. and Levitt, R.E. (2006) 'Streams of shared knowledge: computational expansion of organization theory', Working Paper, Center for Edge Power, Naval Postgraduate School.
Odell, J., Parunak, H.V.D. and Fleischer, M. (2003) 'The role of roles in designing effective agent organizations', in A. Garcia, C. Lucena, F. Zambonelli, A. Omicini and J. Castro (eds) Software Engineering for Large-Scale Multi-Agent Systems, LNCS 2603, Berlin: Springer-Verlag.
Pacheco, O. and Carmo, J. (2003) 'A role based model for the normative specification of organized collective agency and agents interaction', Autonomous Agents and Multi-Agent Systems, 6 (6): 145–84.
Sierhuis, M., Clancey, W.J. and van Hoof, R. (2007) 'Brahms: a multi-agent modeling environment for simulating work processes and practices', International Journal of Simulation and Process Modeling, 3 (3): 134–52.
Soriano, J., Alonso, F. and Lopez, G. (2002) 'Social commitment policies for formally specifying the organization and behavior of open agent societies', in Proceedings of the 3rd International Symposium on Multi-Agent Systems, Large Complex Systems and E-Businesses.
Stacey, R.D. (1995) 'The science of complexity: an alternative perspective for strategic change processes', Strategic Management Journal, 16: 477–95.
Terna, P. (2006) 'An agent based model of interacting and coevolving workers and firms', Technical Report, Agent Based Models: from analytical models to real life phenomenology, ISI, Turin, Italy.
Thomas, G. and Williams, A.B. (2005) 'Roles in the context of multi-agent task relationships', in AAAI Fall Symposium Roles, an Interdisciplinary Perspective: ontologies, programming languages, and multi-agent systems, FS-05-08, USA.
van Maanen, J. and Schein, E.H. (1979) 'Toward a theory of organizational socialization', in B. Staw (ed.) Research in Organizational Behavior, Greenwich, CT: JAI Press, 209–64.
van Zandt, T. (1998) 'Organizations with an endogenous number of information processing agents', in M. Majumdar (ed.) Organizations with Incomplete Information, ch. 7, Cambridge: Cambridge University Press.
Zambonelli, F., Jennings, N.R. and Wooldridge, M. (2001) 'Organizational rules as an abstraction for the analysis and design of multi-agent systems', International Journal of Software Engineering and Knowledge Engineering, 11 (3): 303–28.
Zambonelli, F. and Parunak, H.V.D. (2002) 'From design to intentions: signs of a revolution', in AAMAS 2002, Bologna, Italy: ACM Press, 455–6.

10 Rationality Meets the Tribe
Recent Models of Cultural Group Selection1

David Hales

INTRODUCTION

Recent agent-based computational simulation models have demonstrated how cooperative interactions can be sustained by simple cultural learning rules that dynamically create simple social structures (Riolo 1997; Riolo et al. 2001; Hales 2000, 2006; Hales and Arteconi 2006; Marcozzi and Hales 2008; Traulsen and Nowak 2006). These classes of models implement agents as adaptive imitators that copy the traits of others and, occasionally, adapt (or mutate) them. Although these models bear close comparison with biologically inspired models (they implement simple forms of evolution), they can be interpreted as minimal cultural, or social, learning processes in which traits spread through the population via imitation and new traits emerge via randomized, or other kinds of, adaptation.

Often agent-based models represent social structures such as groups, firms or networks of friends as external and a priori to the agents. In the models we discuss in this chapter, however, the social structures are endogenous: agents construct, maintain and adapt them through ongoing behavior. A subset of traits supports the formation and maintenance of simple social structures. As will be seen, it is the dynamic formation and dissolution of these structures over time that drives, or incentivizes, the agents to behave cooperatively. Yet, as we will show, it is not necessary for the individual agents to prefer socially beneficial structures or outcomes; rather, these emerge through a self-organizing process based on local information and adaptation criteria.

A major advantage of the agent-based approach is that the strict simplifying assumptions of rational action theory can be relaxed, because models do not need to be designed with deductive tractability in mind but can instead be explored through computational simulation. This is particularly useful for exploring complex models in which agents adapt and learn over time without necessarily converging on any equilibrium, or where many equilibria are possible but it is not clear which would be selected.


This relaxation of rational action assumptions is possible due to the technical innovation of agent-based modeling and large-scale simulation platforms, which allow researchers to experiment empirically with their models by performing many exploratory simulation runs, observing alternative time series (or histories) and changing model parameters (or assumptions). This means that modelers can quickly answer "what if" type questions and assess the impact of broad changes in the behavioral assumptions on which the models are based. The researcher does not have to make an a priori commitment to restrictive assumptions; they can be changed (and often are changed) as a result of model exploration (Doran 1998).

In the models we present here, agents are assumed to have incomplete information and bounded processing abilities (bounded rationality). Given these relaxed assumptions, agents use social learning heuristics (imitation) rather than purely individual learning or calculation. It has been argued (Simon 1990, 1997) that complex social worlds will often lead to social imitation (or "docility" in Simon's terminology) because agents do not have the information or cognitive ability to select appropriate behaviors in unique situations. The basic idea is "imitate others who appear to be performing well".

The models we present demonstrate that simple imitation heuristics can give rise to social behaviors and structures that display highly altruistic in-group behavior, even though this is not part of the individual goals of the agents and, moreover, may appear irrational from the point of view of the individual agents. Agents simply wish to improve their own individual condition (or utility) relative to others and have no explicit conception of in- or out-group. Yet a "side effect" of their social learning is to sustain group structures that constrain the spread of highly non-social (selfish) or cheating behaviors such as free-riding on the group. We could replace the term "side effect" with the term "invisible hand" or "emergent property". We can draw a loose analogy with Adam Smith's thoughts on the market (A. Smith 1836). The difference is that there is no recognizable market here but rather a dynamic evolution of social structure that can transform egotistical imitative behavior into socially beneficial behavior.

We term these kinds of models "tribal systems" to indicate the grouping effects and the tendency towards intra-group homogeneity, because individuals joining a group often do so via the imitation of others who are already members of the group. We do not use the term "tribal" to signify any relationship between these kinds of models and certain kinds of human societies but rather to indicate the "tribal" nature of all human organizations, i.e., that individuals almost always form cliques, gangs or other groupings that may appear arbitrary and may be highly changeable and ephemeral, yet have important effects on inter-agent dynamics and behavior.

In these kinds of tribal systems individual behavior cannot be understood from a standpoint of individual rationality without reference to the interaction history and group dynamics of the system as a whole. The way an individual behaves depends on their history and relationship to the groups or tribes that they form collectively.

SITUATING THE MODELS

Diverse models of cultural group selection have been proposed from a wide range of disciplines (Wilson and Sober 1994). More recently, attempts to formalize them through mathematical and computer-based modeling have been made. We wish to situate the models we will discuss in this chapter with reference to the more traditional game theory (Binmore 1994) approach, which assumes agents are rational, in the homo economicus sense, and have perfect information, common knowledge and no social structures to constrain interactions. Our aim in this section is to give the non-modeling expert a sense of the relation between rational action approaches (game theory) and the more bio- and socially-inspired approaches of cultural group selection by presenting a number of broad dimensions over which they differ. It is of course the case that the boundaries between approaches are never as clean or distinct as simple categories suggest; however, to the extent that a caricature can concisely communicate what we consider to be key points that distinguish approaches, it can be of value.

Figure 10.1 shows two dimensions along which game theory and cultural group selection approaches may be contrasted. Traditionally, game theory models have focused on agents with unbounded rationality (i.e., no limit on computational ability) and complete information (i.e., utility outcomes can be calculated for all given actions). The cultural group selection models presented here focus on highly bounded rationality (agents just copy those with higher utility) and highly limited information (agents cannot calculate a priori utility outcomes). The benefit that game theory gains by focusing on the bottom left-hand region is analytic tractability, obtained by proving equilibrium points such as the Nash equilibrium for given games. Given incomplete information and bounded rationality, it generally becomes more difficult to find tractable solutions, and hence (agent-based) computer simulation is often used.

Figure 10.1 Traditionally, game theory models have focused on agents with unbounded rationality and complete information. The cultural group selection models presented here focus on highly bounded rationality and incomplete information.

Figure 10.2 shows another two dimensions, learning and utility, along which a broad distinction can be made. Game theory models tend to focus on individual utility maximization and on action or strategy selection (a kind of learning) at the individual level via deduction (bottom left). Cultural group selection focuses on social learning based on imitation in combination with rare innovation events (comparable to mutation in biological models). The emergent result is an increase in social utility even though the agents themselves use a heuristic based on trying to improve their own individual utility. Hence cultural group selection could also be placed in the bottom-right quadrant.

Figure 10.2 Cultural group selection models also differ from the traditional game theory approach in their focus on social learning and (often emergent) social utility over individual utility.

Figure 10.3 shows another two dimensions, interaction and social structure, that distinguish the cultural group selection models from game theory. The cultural group selection models presented here represent interactions within dynamic social structures, whereas game theory has tended towards static "mean field" structures, by which we mean that game interactions are often assumed to occur stochastically, with equal probability, between agents over time. In the cultural group selection models (as will be seen later) a key aspect that drives the evolution of cooperation and increases in social utility is the dynamic formation of in-groups of agents that interact together exclusively, excluding interactions with the "out-group".

Figure 10.3 The cultural group selection models represent interactions within dynamic social structures whereas game theory has tended towards static "mean field" structures.

RECENT CULTURAL GROUP SELECTION MODELS

Historically, group selection has been seen as controversial within both the biological and social sciences because of the difficulty of advancing a plausible theory and the inability to identify such processes empirically in the field. Certain kinds of naïve, non-formalized group selection approaches were also exposed as logically incoherent by biologists. However, these objections have been challenged by recent advances in the area, resulting from the extensive use of computational (often agent-based) modeling and from a theoretical shift that accepts that selection operating at the individual level can, under broad conditions, give rise to selection at a higher, group level. The historical debate from a group selectionist perspective is well covered by Wilson and Sober (1994). We will not retread old ground here but will concentrate on presenting a specific class of group selection models that have recently emerged in the literature. These models may be interpreted as cultural evolutionary models in which imitation allows traits to move horizontally. We do not concern ourselves here with the biological interpretation of such models but rather with the cultural interpretation.

Group selection relies on the dynamic formation and dissolution of groups. Over time, individual entities may change groups by moving to those that offer better individual performance. Interaction between entities that determines performance is mainly restricted to those sharing the same group. In a nutshell, then, groups that support high performance for the individuals that comprise them grow and prosper, whereas exploitative or dysfunctional groups dissolve as individuals move away. Hence functional groups, in terms of satisfying individual goals, are selected over time. Key aspects that define different forms of group selection are: how group boundaries are formed, the nature of the interactions between entities within each group, the way that each entity calculates individual performance (or utility) and how entities migrate between groups. The "success" of any group selection model is judged by how well the system self-organizes towards achieving a collective goal, whatever that may be. Often this will be maximizing the sum of individual utility, but it could involve other measures, such as equality or fairness.

In almost all proposed social and biological models of group selection, in order to test whether group selection is stronger than individual selection, populations are composed of individuals that can adopt one of two kinds of social behavior (or strategy). Either they can act pro-socially, for the good of their group, or they can act selfishly, for their own individual benefit at the expense of the group. This captures a form of commons tragedy (Hardin 1968). Often this is formalized as a prisoner's dilemma (PD) or a donation game in which individuals receive fitness payoffs based on the composition of their group. In either case there is a utility cost c that a pro-social individual incurs and an associated utility benefit b that individuals within a group gain. A group containing only pro-social individuals will lead each to gain a utility of b − c. However, a group containing only selfish individuals will lead each to obtain a utility of zero. But a selfish individual within a group of pro-socials will gain the highest utility: the selfish individual will gain b, while the rest will gain less than b − c. Given that b and c are positive, it is always in an individual's interest (to maximize utility) to behave selfishly. In an evolutionary scenario in which the entire population interacts within a single group, selfish behavior will therefore tend to be selected, because it increases individual utility. This ultimately leads to an entire population of selfish individuals and a suboptimal average population-level fitness of zero. This is the Nash equilibrium (Nash 1950) and an evolutionarily stable strategy for such a system (J.M. Smith 1982).

There have been various models of cooperation and pro-social behavior based on reciprocity using iterated strategies within the PD (Axelrod 1984; Riolo 1997). However, we are interested in models which do not require reciprocity, since these are more generally applicable. In many situations, such as large-scale human systems or distributed computer systems, repeated interactions may be rare or hard to implement due to large population sizes (on the order of millions) or cheating behavior that allows individuals (or computer nodes) to fake new identities.
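To make the donation-game payoff accounting above concrete, the following minimal Python sketch (ours, not part of any of the models discussed; the function name, the expected-value simplification and the parameter values are illustrative assumptions) computes each group member's expected utility when every pro-social member donates b to a randomly chosen fellow member at personal cost c:

import random

def donation_payoffs(strategies, b=1.0, c=0.1):
    """Expected donation-game payoff for each member of one group.

    strategies: list of booleans, True = pro-social, False = selfish.
    Each pro-social member donates b to a randomly chosen other member
    at cost c; payoffs here are the expected values of that process.
    """
    n = len(strategies)
    payoffs = []
    for i, pro_social in enumerate(strategies):
        donors = sum(1 for j, s in enumerate(strategies) if s and j != i)
        received = b * donors / (n - 1) if n > 1 else 0.0
        payoffs.append(received - (c if pro_social else 0.0))
    return payoffs

# All pro-social: each gets b - c; one selfish free rider gets b,
# while the pro-socials it exploits each get less than b - c.
print(donation_payoffs([True, True, True, True]))
print(donation_payoffs([False, True, True, True]))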

Tag Model

In Hales (2000) a "tag" model of cooperation was proposed which selected for pro-social groups. It models populations of evolving agents that form groups with other agents who share an initially arbitrary tag or social marker. The tag approach was originally proposed by Holland (1993) and developed by Riolo (1997; Riolo et al. 2001). The tag is often interpreted as an observable social label (e.g., style of dress, accent, etc.) and can be seen as a group membership marker. It can take any mutable form in an agent-based model (e.g., an integer or a bit string). The strategies of the agents evolve, as do the tags themselves, through agents imitating others who obtain higher utility than themselves. Interestingly, this very simple scheme structures the population into a dynamic set of tag groups and selects for pro-social behavior over a wide range of conditions. Figure 10.4 shows a schematic diagram of tag group evolution and an outline algorithm that generates it.

Figure 10.4 Schematic of the evolution of groups in the tag model. Three generations (a–c) are shown. White individuals are pro-social; black are selfish. Individuals sharing the same tag are shown clustered and bounded by large circles. Arrows indicate group lineage. Migration between groups is not shown.


In general it was found that pro-social behavior was selected when b > c and mt >> ms, where mt is the mutation rate applied to the tag and ms is the mutation rate applied to the strategy. In this model groups emerge from the evolution of the tags. Group splitting is a side effect of mutation applied to a tag during reproduction. A subsequent tag model (Riolo et al. 2001) produced similar results, although it cannot be applied to pro-sociality in general because it does not allow for fully selfish behavior towards identically tagged individuals (Roberts and Sherratt 2002).
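The following sketch in Python gives the flavor of this evolutionary dynamic. It is a simplified rendering of the tag model rather than a reproduction of Hales (2000): utilities are taken as expected values instead of sampled games, and all names and parameter values are our own assumptions. Note the two mutation rates, with the tag rate far above the strategy rate, as the condition above requires:

import random

B, C = 1.0, 0.1          # benefit and cost, with B > C
MT, MS = 0.1, 0.001      # tag mutation rate >> strategy mutation rate
N, T = 100, 200          # population size and number of generations

# An agent is a [tag, pro_social] pair; tags are arbitrary markers.
pop = [[random.randrange(32), random.random() < 0.5] for _ in range(N)]

def utility(agent):
    """Expected donation-game payoff within the agent's tag group."""
    group = [a for a in pop if a[0] == agent[0] and a is not agent]
    if not group:
        return 0.0
    received = B * sum(1 for a in group if a[1]) / len(group)
    return received - (C if agent[1] else 0.0)

for t in range(T):
    utils = [utility(a) for a in pop]
    new_pop = []
    for a, u in zip(pop, utils):
        j = random.randrange(N)                 # imitate a random other
        tag, strategy = pop[j] if utils[j] > u else a
        if random.random() < MT:                # frequent tag mutation:
            tag = random.randrange(32)          # groups split and reform
        if random.random() < MS:                # rare strategy mutation
            strategy = not strategy
        new_pop.append([tag, strategy])
    pop = new_pop

print("pro-social fraction:", sum(a[1] for a in pop) / N)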

Network-Rewiring Models

Network-rewiring models for group selection have been proposed with direct application to peer-to-peer (P2P) protocol design and biological systems (Hales 2004, 2006; Santos et al. 2006). In these models, which were adapted from the tag model described earlier, individuals are represented as nodes on a graph. Group membership is defined by the topology of the graph. Nodes directly connected are considered to be within the same group. Each node stores the links that define its neighbors. Nodes evolve by copying both the strategies and the links (with probability t) of other nodes in the population with higher utility than themselves. Using this simple learning rule the topology and strategies evolve, promoting pro-social behavior and structuring the population into dynamic arrangements of disconnected clusters (where t = 1) or small-world topologies (where 0.5 < t < 1). Group splitting involves nodes disconnecting from all their current neighbors and reconnecting to a single randomly chosen neighbor with low probability mt. As with the tag model, pro-social behavior is selected when b > c and mt >> ms, where ms is the probability of nodes spontaneously changing strategies. Figure 10.5 shows a schematic of network evolution (groups emerge as cliques within the network) and an outline algorithm that implements it.

Figure 10.5 Schematic of the evolution of groups (cliques) in the network-rewiring model. Three generations (a–c) are shown. White individuals are pro-social; black are selfish. Arrows indicate group lineage. Notice the similarity to the tag model in Figure 10.4.

In this model we see dynamics and properties similar to the tag model but in an evolving graph. This is interesting because social networks can be viewed as graphs. In addition, from a computer science perspective, graphs can represent P2P networks. In Hales (2006) the same rewiring approach was applied to a scenario requiring nodes to adopt specialized roles or skills within their groups, not just pro-social behavior alone, to maximize social benefit. This indicates that the same kind of group selective process can support the emergence of in-group specialization. Interestingly, it has also been shown recently (Ohtsuki et al. 2006), in a similar graph model tested over fixed topologies (e.g., small-world, random, lattice, scale-free), that under a simple evolutionary learning rule pro-social behavior can be sustained in some limited situations if b / c > k, where k is the average number of neighbors over all nodes (the average degree of the graph). This implies that if certain topologies can be imposed then pro-social behavior can be sustained without rewiring the topology dynamically. Although analysis of this model is at an early stage, it would appear that groups form via clusters of pro-social strategies forming and migrating over the graph as nodes learn from neighbors.
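A minimal sketch of the copy-and-rewire rule is given below, again in Python. It is our simplification of the models cited, not their actual code; the initial wiring, parameter values and identifier names are assumptions. Each update step compares a node with a randomly chosen other node and, if the other does better, copies its strategy and, with probability t, its links:

import random

B, C = 1.0, 0.1
T_LINK = 1.0             # t = 1: disconnected cliques tend to form
MT, MS = 0.05, 0.001     # rewiring mutation >> strategy mutation
N = 100

strategy = [random.random() < 0.5 for _ in range(N)]   # True = pro-social
links = {i: set() for i in range(N)}
for i in range(N):                                     # random initial wiring
    for j in random.sample(range(N), 2):
        if j != i:
            links[i].add(j)
            links[j].add(i)

def utility(i):
    if not links[i]:
        return 0.0
    received = B * sum(strategy[j] for j in links[i]) / len(links[i])
    return received - (C if strategy[i] else 0.0)

def replace_links(i, new_neighbors):
    for j in list(links[i]):                 # drop all current links
        links[j].discard(i)
    links[i] = {n for n in new_neighbors if n != i}
    for j in links[i]:                       # keep the graph undirected
        links[j].add(i)

for step in range(20000):
    i, j = random.sample(range(N), 2)
    if utility(j) > utility(i):              # copy the fitter node
        strategy[i] = strategy[j]
        if random.random() < T_LINK:         # copy its links, plus j itself
            replace_links(i, links[j] | {j})
    if random.random() < MT and links[i]:    # split off: keep one neighbor
        replace_links(i, {random.choice(list(links[i]))})
    if random.random() < MS:
        strategy[i] = not strategy[i]

print("pro-social fraction:", sum(strategy) / N)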

Group-Splitting Model

In Traulsen and Nowak (2006) a group selection model is presented that sustains pro-social behavior if the population is partitioned into m groups of maximum size n, so long as b / c > 1 + n / m. In this model the group structure, in combination with splitting and extinction processes, is assumed a priori and mediated by exogenous parameters. Splitting is accomplished by explicitly limiting group size to n; when a group grows through reproduction beyond n, it is split with (high) probability q into two groups by probabilistically reallocating each individual to one of the new groups. By exogenously controlling n and m, a detailed analysis of the model was derived such that the cost/benefit condition is shown to be necessary rather than just sufficient. The model also allows for some migration of individuals between groups outside of the splitting process. Significantly, the group-splitting model can potentially be applied recursively to give multilevel selection (groups of groups, etc.). However, this requires explicit splitting and reallocation mechanisms at each higher level. Figure 10.6 shows a schematic of group-splitting evolution and an outline algorithm that implements it.
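A sketch of the splitting step alone, our paraphrase in Python of the mechanism just described rather than Traulsen and Nowak's code (names and the toy values are assumptions), would look like this:

import random

def split_group(groups, gi, q):
    """When group gi has grown beyond the size limit: with probability
    q it splits into two daughter groups, each member reallocated at
    random, and a randomly chosen other group goes extinct so that the
    number of groups m stays constant; otherwise a random member dies."""
    if random.random() < q:
        a, b = [], []
        for member in groups[gi]:
            (a if random.random() < 0.5 else b).append(member)
        if not a or not b:          # degenerate split: try again next time
            return
        victim = random.choice([k for k in range(len(groups)) if k != gi])
        groups[gi] = a
        groups[victim] = b
    else:
        groups[gi].pop(random.randrange(len(groups[gi])))

# Toy usage: group 0 has exceeded the limit n = 5 (True = pro-social).
groups = [[True] * 6, [False] * 3, [True, False], [True] * 4]
split_group(groups, 0, q=0.9)
print([len(g) for g in groups])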

APPLICATIONS

We believe that these new models could potentially have applications both in understanding real existing social systems and in engineering new tools that support new kinds of social systems, particularly in online communities. Increasingly, Web 2.0 and other online communities allow for the tracking and measurement of the dynamics of groups over time (Palla et al. 2007).


Figure 10.6 Schematic of the evolution of groups in the group-splitting model. Three generations (a–c) are shown. White individuals are pro-social; black are selfish. Individuals belonging to the same group are shown clustered and bounded by large circles. Arrows indicate group lineage. Migration between groups is not shown.

Potentially, massive clean data sets can be collected, and the models presented here can be calibrated and validated (or invalidated). In addition, as has already been implied, P2P systems composed of millions of online nodes could benefit from the application of group selection techniques, by applying them directly to the algorithms (or protocols) used by nodes to self-organize productive services for users. These two kinds of application of the models are not independent, because by increasing our understanding of productive human social processes we can automate aspects of those processes into computer algorithms, increasing their speed and reach (consider online social networking as an example of this).

CONCLUSION

What these models demonstrate is that simple agent heuristics based on imitation directed towards individual improvement of utility can lead to behavior in which agents behave "as if" there were a motivating force higher than self-interest: the interests of the group or "tribe". This higher force does not need to be built into agents but rather emerges through time and interactions, a historical process. The formation of social structures, over time, creates conditions that favor pro-social behavior. Agents receive utility by interacting in tribes (simple social structures). Tribes that cannot offer the agent a good "utility deal" will disband as agents "vote with their feet" by joining other, better tribes based on their individual utility assessment. Of course, movement between tribes here is not interpreted as a physical relocation but rather as a social and behavioral one. By copying the traits of others who have higher utility, the appropriate social structures emerge.

Increasingly, in electronic and virtual communities, the cost of such movement is converging towards zero or a very low individual cost. It could be conjectured that it is this low cost, and the consequent freedom from geographical and organizational constraints, which is a major factor in the recent success of online communities, virtual social networks and other peer-production communities such as Wikipedia (Benkler 2006).

However, this process would not preclude agents with explicit group-level utility preferences, i.e., agents incorporating "social rationality" functions or the like. Such agents could potentially improve general welfare through a modicum of explicit planning and encouragement of pro-social group formation. The models presented here rely on random trial and error to find cooperative pro-social "seeds", which are then selected and grow via the evolutionary process as other agents join the seed. We speculate that an agent with a correctly aligned internal model of what would make a successful seed could proactively recruit others from the population. However, this introduces issues such as explicit recruitment processes, explicit internal social models and, potentially, transferable utility. This implies the requirement for an effective "store of utility" (i.e., money) that the simple models presented here do not contain. Here we begin to see the formation of something that resembles a market. In this context the models we have presented could be seen as "pre-market" exchange structures in which value is not separated from the social structures that produce it, because it cannot be easily stored, accumulated, transferred or controlled. We might argue that where such "pre-market" structures perform well, there will not be any incentive for agents to engage in the additional costs of implementing explicit market structures.

The models we have presented mainly focus on social dilemma scenarios: situations in which individuals can improve their own utility at the expense of the group or tribe they interact with. Often the application of the market in these situations does not resolve the dilemma in a socially equitable way (i.e., it does not lead to cooperation) but can instead incentivize non-cooperation. This is such a serious issue that game theory explicitly addresses it within the emerging area of mechanism design (Dash et al. 2003). However, these models often rely on standard rational action assumptions and a high degree of central control enforcing the "rules of the game". A possible interesting future research direction could be to identify those scenarios in which tribal approaches are appropriate and those in which markets are appropriate. Here, perhaps, we could forge a third way between markets and central control.

NOTES

1. This work was partially supported by the Future and Emerging Technologies program FP7-COSI-ICT of the European Commission through project QLectives (Grant no. 231200).


BIBLIOGRAPHY

Axelrod, R. (1984) The Evolution of Cooperation, New York: Basic Books.
Benkler, Y. (2006) The Wealth of Networks: how social production transforms markets and freedom, New Haven, CT: Yale University Press.
Binmore, K. (1994) Game Theory and the Social Contract, vol. 1: 'Playing fair', Cambridge, MA: MIT Press.
Dash, R., Jennings, N. and Parkes, D. (2003) 'Computational-mechanism design: a call to arms', IEEE Intelligent Systems, November, 40–7 (Special Issue on Agents and Markets).
Doran, J. (1998) 'Simulating collective misbelief', Journal of Artificial Societies and Social Simulation, 1 (1). Online. Available HTTP: (accessed 22 February 2010).
Hales, D. (2000) 'Cooperation without space or memory: tags, groups and the prisoner's dilemma', in S. Moss and P. Davidsson (eds) Multi-Agent-Based Simulation, LNAI 1979, Berlin: Springer, 157–66.
Hales, D. (2004) 'From selfish nodes to cooperative networks – emergent link-based incentives in peer-to-peer networks', in Proceedings of the Fourth IEEE International Conference on Peer-to-Peer Computing (P2P2004), Washington, DC: IEEE Computer Society Press.
Hales, D. (2006) 'Emergent group-level selection in a peer-to-peer network', Complexus, 3: 108–18 (DOI: 10.1159/000094193).
Hales, D. and Arteconi, S. (2006) 'SLACER: a self-organizing protocol for coordination in P2P networks', IEEE Intelligent Systems, 21 (2): 29–35.
Hardin, G. (1968) 'The tragedy of the commons', Science, 162: 1243–8.
Holland, J. (1993) 'The effect of labels (tags) on social interactions', Santa Fe Institute working paper 93–10–064, Santa Fe, NM.
Lindenberg, S. (2001) 'Social rationality as a unified model of man (including bounded rationality)', Journal of Management & Governance, 5 (3–4): 239–51.
Marcozzi, A. and Hales, D. (2008) 'Emergent social rationality in a peer-to-peer system', Advances in Complex Systems, 11 (4): 581–95.
Nash, J. (1950) 'Equilibrium points in n-person games', Proceedings of the National Academy of Sciences, 36 (1): 48–9.
Ohtsuki, H. et al. (2006) 'A simple rule for the evolution of cooperation on graphs and social networks', Nature, 441: 502–5.
Palla, G., Barabasi, A.L. and Vicsek, T. (2007) 'Quantifying social group evolution', Nature, 446: 664–7.
Richerson, P. and Boyd, R. (2001) 'The biology of commitment to groups: a tribal instincts hypothesis', in R.M. Nesse (ed.) Evolution and the Capacity for Commitment, New York: Russell Sage Press.
Riolo, R. (1997) 'The effects of tag-mediated selection of partners in evolving populations playing the iterated prisoner's dilemma', Santa Fe Institute working paper 97–02–016, Santa Fe, NM.
Riolo, R., Cohen, M.D. and Axelrod, R. (2001) 'Cooperation without reciprocity', Nature, 414: 441–3.
Roberts, G. and Sherratt, T.N. (2002) 'Similarity does not breed cooperation', Nature, 418: 499–500.
Santos, F.C., Pacheco, J.M. and Lenaerts, T. (2006) 'Cooperation prevails when individuals adjust their social ties', PLoS Computational Biology, 2 (10): e140.
Simon, H.A. (1990) 'A mechanism for social selection and successful altruism', Science, 250: 1665–8.

Simon, H.A. (1997) Models of Bounded Rationality, vol. 3: 'Empirically grounded economic reason', Cambridge, MA: MIT Press.
Smith, A. (1836; first edn 1776) An Inquiry into the Nature and Causes of the Wealth of Nations, England: William Clowes and Sons.
Smith, J.M. (1982) Evolution and the Theory of Games, Cambridge: Cambridge University Press.
Traulsen, A. and Nowak, M.A. (2006) 'Evolution of cooperation by multilevel selection', Proceedings of the National Academy of Sciences, 103 (29): 10952–5.
Wilson, D.S. and Sober, E. (1994) 'Re-introducing group selection to the human behavioural sciences', Behavioural and Brain Sciences, 17 (4): 585–654.

Part III

How to Build Agent-Based Computer Models of Firms

11 An Agent-Based Methodological Framework to Simulate Organizations or the Quest for the Enterprise: jES and jESOF, Java Enterprise Simulator and Java Enterprise Simulator Open Foundation

Pietro Terna

I wish your enterprise to-day may thrive.
—Shakespeare, Julius Caesar

INTRODUCTION

With the Java Enterprise Simulator (jES), it is possible to reproduce, in a detailed way, the behavior of a firm or organization using a computer. This method relies on agent-based simulation techniques, i.e., the reconstruction of a phenomenon via the action and interaction of minded or no-minded agents within a specific environment. In our case, we have defined no-minded agents as agents who execute orders, whereas we have defined minded agents as those making decisions within the model. Simulating a single enterprise or a system of enterprises (e.g., within a district or within a virtual enterprise system), we can directly apply "what-if" analysis. In other words, we can introduce changes into the simulation while fully preserving the complexity of our context. Alternatively, the Java Enterprise Simulator Open Foundation (jESOF) uses an open framework to allow many models to interact simultaneously, each operating according to its own rules and features. However, in the jESOF environment, each model uses simulation structures and rules that are less sophisticated than those operating in jES, a difference resulting from the complexity of the inter-model interaction. Therefore, jES and jESOF are similar, but not identical, projects.

Why Agents and What Kind of Tool?

Only in a truly agent-based context, with independent pieces of software modeling the different behavior of all the components of our environment (e.g., a firm or an organization), can we overcome the traditional limitation of models founded on equations (i.e., differential or recursive equations). In that kind of model, the granularity and the flexibility of the description are strongly determined by the limitations of the method. To build agent-based models, a plurality of tools can be used, such as Swarm (www.swarm.org). At present, jES is Swarm-based, but a new version of the jES structure will be developed using SLAPP (the Swarm-Like Agent Protocol in Python), available online at http://eco83.econ.unito.it/slapp. Using Python as a modern object-oriented language, the SLAPP project has the goal of offering scholars interested in agent-based models (ABM) a set of programming examples that can be easily understood in detail and can be adapted to other applications. Present and future materials about jES and jESOF are located at http://web.econ.unito.it/terna/jes. The reference document of the Java Enterprise Simulator (jES) is the technical guide 'How to Use the Java Enterprise Simulator (jES) Program', file how_to_use_jes.pdf, while the reference document of the Java Enterprise Simulator Open Foundation (jESOF) is the technical guide 'How to Use the jES Open Foundation Program to Discover Complexity in Multi-Stratum Models', file how_to_use_jes_o_f.pdf. In this chapter, we have also included jESlet, the "java Enterprise Simulation light experimental tool", in Appendix A. This was originally prepared for introductory and didactic reasons. It can also be found at the jES site.

Perspectives and Applications

There are three main applications of jES models:

1. jES models can be used to theoretically analyze "would-be" situations occurring in enterprises or organizations and in their interaction, to increase our knowledge of how organized bodies start, behave and decline.
2. jES models can be used to simulate interactions between people for two purposes: first, to study how people behave in organizations by building experiments using the simulator; and second, to train people about the consequences of their decisions within an organization. In the second instance, the artificial agent has the role of avatar, insofar as actual people interact within a simulated reality that replicates the complexity of their real activity framework. (A definition of avatar from www.babylon.com: s. avatar [Hindu mythology] earthly incarnation of a god, human embodiment of a deity; [Internet] online image that represents a user in chat rooms or in a virtual "space".)
3. jES models can be used to understand enterprise or organization optimization through soft computing tools, such as genetic algorithms and classifier systems, as well as through what-if analysis. In a simulation framework, genetic algorithms and classifier systems use fitness (either of the evolved genotype or of the evolved rules) as coming from the outcomes of the running simulation.

The main focus of this work is to propose a framework within which to investigate three topics: first, how enterprises and organizations arise, behave and fall; second, how they interact; and finally, how we can improve them. The tool that we introduce here to help us in this research effort (our "quest for the enterprise") is a large agent-based simulation framework which is able to reproduce the enterprise context in a detailed way.

Why is the problem so important? Gibbons (2000) reminds us that:

For two hundred years, the basic economic model of a firm was a black box: labor and physical inputs went in one end; output came out the other, at minimum cost and maximum profit. Most economists paid little attention to the internal structure and functioning of firms or other organizations. During the 1980s, however, the black box began to be opened: economists (especially those in business schools) began to study incentives in organizations, often concluding that rational, self-interested organization members might well produce inefficient, informal, and institutionalized organizational behaviors.

Clarkson and Simon (1960: 924) also criticize classic economic theories regarding decision making within firms:

With great ingenuity, we axiomatize utility as a cardinal quantity; we represent risks by probability distributions. Having done this, we are left with the question of whether we have constructed a theory of how economic man makes his decision, or instead a theory of how he would make his decision if he could reason only in terms of numbers and had no qualitative or verbal concepts.

From the point of view of the decision-making process in organizations (Simon 1997), the situation becomes increasingly complicated, as in this realm prices do not operate at all and classical economics has very poor explanatory capabilities. Other fields of science also fail to offer a strong framework that allows us to understand, explain and modify organization activities. The problem is strictly linked with understanding how human beings make choices. As Simon (1997: viii) notes in his introduction:


Administrative Behavior has served me as a useful and reliable port of embarkation for voyages of discovery into human decision making; the relation of organization structure to decision making; the formalized decision making of operations research and management science; and, in more recent years, the thinking and problem-solving activities of individual human beings.

Starting from administrative decision making, Simon introduced the key idea of bounded rationality in human behavior. He then extended this insight to political science, economics, organization theory, psychology and artificial intelligence. Following Simon, we can point out that organizations make it possible to formulate decisions because they reduce the set of possible choices to be considered. In other words, they introduce an additional set of bounds on possible choices. If we understand that organizations must act in the context of those decision-making processes by which sets of possible choices are built, we can improve them by remembering that the effects that arise from decision making in actual organizations are non-linear. Because of this non-linearity, consequences frequently seem explainable only in terms of complexity.

USING SIMULATION TECHNIQUES IN AN AGENT-BASED FRAMEWORK

Partly because of this non-linearity and complexity, and partly because of the non-quantitative and non-rational basis of a large part of decision making in organizations, we can hardly use traditional equation-based models to investigate organization behavior, including enterprise behavior. As noted by Richard M. Burton in the Afterword concluding Lomi and Larsen (2001), simulation requires that we specify the world we want to investigate. It can be complex or simple, and it can begin simply and evolve into complexity. Either way, we must specify its "black boxes" and metaphorically open them; we cannot just assume they exist. Thus, in simulation, we make behavioral specifications, not behavioral assumptions. The central issue is that we know more about our simulated world than about our "real" world. With this necessary specification, the simulated world is a laboratory where we know the important parameters because we specify them. The rich world of simulation is versatile; we can perform many different kinds of studies. We can test hypotheses, explore new ideas, create large data sets, help solve problems and go outside the boundaries of the "real" world. The simulated world can be used to understand the limits of our "real" world, extending the limits of the possible. It can also give a picture of what is likely or what might be.

With the simulator, we can reproduce the behavior of a firm in a detailed way if we build the simulation model employing agent-based techniques. So, why agents? A meaningful reply may be found in Axtell (2000), synthesized in the abstract of the paper:

It is argued that there exist three distinct uses of agent modeling techniques. One such use—the simplest—is conceptually quite close to traditional simulation in operations research. This use arises when equations can be formulated that completely describe a social process, and these equations are explicitly soluble, either analytically or numerically. In the former case, the agent model is merely a tool for presenting results, while in the latter it is a novel kind of Monte Carlo analysis. A second, more commonplace usage of computational agent models arises when mathematical models can be written down but not completely solved. In this case the agent-based model can shed significant light on the solution structure, illustrate dynamical properties of the model, serve to test the dependence of results on parameters and assumptions, and be a source of counter-examples. Finally, there are important classes of problems for which writing down equations is not a useful activity. In such circumstances, resort to agent-based computational models may be the only way available to explore such processes systematically, and constitute a third distinct usage of such models.

ENTERPRISE SIMULATION AND jES

To run enterprise simulations, we introduce here the Java Enterprise Simulator (jES). With our tool, we describe, in a detailed way, a two-sided world. We consider both the actions to be done, in terms of which orders are to be accomplished (the "What to Do" side, WD), and the structures able to do them, in terms of production units (the "which is Doing What" side, DW). Thus, our simulation model is, first and foremost, a description of the enterprise as it is. Just as the various flight simulator programs put the control of the simulated airplane at our fingertips and then execute our commands, jES executes exactly what we ask to take place in the simulated enterprise with respect to the two components described previously. The plane can land gracefully or crash depending on our commands; likewise, the enterprise produces or stays clogged if our WD and DW choices are inconsistent. As also described in detail in the online reference 'How to Use the Java Enterprise Simulator (jES) Program', we introduce here the basic ideas and the principles upon which the jES simulator is built in order to clarify the goals of the project.


jES Technique

jES employs two independent components to build a description and representation of an enterprise world. Our simulated enterprise has both production units that perform the different steps of the production process and orders to accomplish the production. The orders are described by recipes that contain the "What to Do" (WD) component of the process; the production units represent the "which is Doing What" (DW) component of the same process. A third formalism relates to the time sequence of the events (the orders to be executed) that occur in the environment we are simulating; this is the "When Doing What" (WDW) component. Production units can be within the firm or outside of it. If they are outside the simulated enterprise, they may constitute other enterprises or they may stand alone as small-business actors. The term "recipe" is typical of industrial economics. A recipe to cook something contains data about the quantities of certain actions, their timing and whether the actions are parallel or sequential; our recipes here contain similar data. At this point, it is useful to introduce a dictionary of our terms:

• A production unit is a productive structure within or outside our enterprise. A production unit is able to perform one or more of the steps required to accomplish an order.
• An order is the object representing a good to be produced. An order contains technical information (the recipe describing the production steps) and accounting data.
• A recipe is a sequence of steps to be executed to produce a good.

The core of the model is the clear separation between the orders and the production units. WD and DW are completely independent, both in formalism and in code. Therefore, while running the model, we check the consistency of the two components as we would in the actual world, since the output of an enterprise arises from a complex interaction among products and production tools. As we will see later, recipes can also describe internal parallel production paths, computational steps, batch activities and assembly phases. As such, these comprise ways in which the typical procurement problems of a supply chain can be reproduced and tested. Orders enter the simulation from two sources: from an order generator, in which case they are created randomly as outcomes of a predetermined scheme, or from an order distiller, in which case they are extracted from an archive of pre-existing orders and normally represent a given reality to be reproduced via the simulator.
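jES itself is written in Java on top of Swarm, but the separation of the two formalisms can be conveyed with a minimal Python sketch (class and attribute names are ours, chosen for illustration; the real program is documented in the How To guide cited above). The orders carry the WD knowledge, the units carry the DW knowledge, and consistency is checked only when they meet at run time:

class Order:
    """WD side: a good to be produced, carrying its recipe."""
    def __init__(self, recipe):
        self.recipe = list(recipe)   # step codes, e.g. [8, 28, 27, 7]
        self.next = 0                # index of the next step to execute

    def current_step(self):
        return self.recipe[self.next] if self.next < len(self.recipe) else None


class ProductionUnit:
    """DW side: a structure able to perform one production step."""
    def __init__(self, uid, step):
        self.uid, self.step = uid, step

    def can_do(self, step):
        return step == self.step

    def execute(self, order):
        print("unit", self.uid, "executes step", order.current_step())
        order.next += 1


# WD and DW are defined independently; consistency emerges at run time.
units = [ProductionUnit(u, u) for u in (7, 8, 27, 28)]
order = Order([8, 28, 27, 7])
while order.current_step() is not None:
    step = order.current_step()
    candidates = [u for u in units if u.can_do(step)]
    if not candidates:
        raise RuntimeError("inconsistent WD/DW: nobody can do step %d" % step)
    candidates[0].execute(order)     # a choice criterion is needed if len > 1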

A Simplified View

Figure 11.1 provides a snapshot of jES. This is an introductory view, with the recipes written in a simplified way, i.e., as a sequence of steps to be executed without information about the time required by each step.


Figure 11.1 A simplified view of the jES components; recipes are reported here in a simplified way, without time specifications.

Observing the recipe 8–28–27–7, we can see that the front end (FE) of an enterprise can take charge of the first step, which will be executed by unit 8 within the enterprise (in this simplified version, production unit and step numbers coincide). Figure 11.2 introduces a more dynamic interpretation of the world we are describing. We have here three simple phases (a, b, c) in which the order containing the recipe 8–28–27–7 goes from one production unit to another. In this sequence, all the necessary information is contained in the order. When the activity of a production unit (as an example, unit 8) is concluded, the production unit consults the order to find out the next step to be performed, and it then asks all the production units to reply if they are able to execute that task. In this way, the order makes its journey from unit 8 to unit 28 (which is outside the enterprise and can be assumed to be a simple business unit) and then to unit 27 (also outside the enterprise). In the next step, designated with an X in Figure 11.2, we have a choice problem, as two production units are able to perform task 7. We will introduce a set of criteria that allow the simulator to deal properly with this kind of problem (a sketch follows Figure 11.2). While it may seem abstract, it is worth noting that one of the two units able to perform step 7 belongs to another enterprise. Therefore, we can imagine having to open a dialogue with the front end of the second enterprise. We also have to take into account the possibility of a direct link with the production unit within the other enterprise. The idea of linking together the subunits of more complex enterprises to create temporary production organizations brings us directly to the concept of virtual enterprise as an organizational tool.


Figure 11.2 A dynamic view of the jES components; recipes are reported here in a simplified way, without time specifications.

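When more than one unit answers the call, as at point X, a choice criterion must be applied. The sketch below contrasts two plausible criteria, a purely random choice and a least-loaded rule based on each unit's waiting list; it is a guess at how such criteria can work, not jES's actual code (the real criteria are described in the How To guide, and the attribute name queue_len is ours):

import random
from collections import namedtuple

def choose_unit(candidates, criterion="shortest_queue"):
    """Pick one among several units able to perform the next step."""
    if criterion == "random":
        return random.choice(candidates)
    # otherwise prefer the least-loaded unit, breaking ties at random
    shortest = min(c.queue_len for c in candidates)
    return random.choice([c for c in candidates if c.queue_len == shortest])

# Toy usage for the X case: two units both offer to perform step 7.
Unit = namedtuple("Unit", "uid queue_len")
print(choose_unit([Unit("7", 3), Unit("7-bis", 1)]))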

Theoretical Use of jES: The Quest for the Enterprise and the Simulation Process

While jES can be applied to actual enterprise cases, as we will see later, the goals of this kind of simulation are primarily theoretical. Through virtual enterprises built in the rich jES framework, we can investigate not only how firms originate and how they interact in social networks (Burt 1992; Walker et al. 1997) of production units and structures, but also how firms behave in would-be situations. The present work is situated midway between an applied study, such as that of Barton et al. (2001), and a theoretical speculation, such as that of Padgett et al. (2003). In the first case, we have a strong simulation tool used to describe a complete enterprise environment and to simulate the consequences of new choices, but it is impossible for unexpected behaviors to emerge. In the second case, the abstractness of a world where skills evolve to ensure production through reinforcing loops is totally devoted to the emergence of unanticipated schemes. jES stands at a midway point because it can be used to model actual enterprise situations, analyze the effects of changes, build abstract structures of interacting production units, firms or districts, and simulate emerging behavior. Applying jES to the actual world is also useful for collecting facts and situations to be later used as a stylized, theoretical template.

The theoretical work that jES (and jESOF) makes possible can be understood in terms of two different approaches to "the quest for the enterprise". The first approach involves interpreting the firm as a system of conveniences in which increasing returns to cooperative behavior emerge, but these returns are limited in size. Therefore, agents have relatively low incentives to expend large efforts in relatively large organizations, since their share of the aggregate results is only somewhat affected by their efforts. Moreover, as the number of so-called "free riding" agents increases, agents migrate to other firms. As explained by Axtell (1999), 'successful firms are ones that can attract and retain productive workers'. The second approach of our investigation presents the enterprise as a place where entrepreneurial ideas and choices operate, both in Kirzner's sense and in Burt's models. Following Kirzner (1997), our models look for the trial-and-error process that generates both the creation of enterprises and their decline by focusing on the role of the entrepreneur.

If the Austrian theory claims that entrepreneurial discovery can account for a tendency toward equilibrium, that vague-sounding term 'tendency toward' is used deliberately, advisedly, and quite precisely. Such a tendency does exist at each and every moment, in the sense that earlier entrepreneurial errors have created profit opportunities which provide the incentives for entrepreneurial corrective decisions to be made. (p. 81)

With Burt (1992) we look for a dynamic model of networks in order to observe the success of entrepreneurs in finding and exploiting the "structural holes" within zones densely filled with relationships. Comparing this model with the theory of social capital can help us understand the simulation of both the creation of enterprises and the effects of their networks (Walker et al. 1997). Zuckerman (2003) analyzes the importance of networks in economics in a comprehensive manner. Finally, we must stress the importance of self-organization and complexity, particularly with regard to the convenience systems that constitute firms and the ability of entrepreneurs to exploit situations within these systems. As Batten (2000, p. 255) describes:


The surprising thing about self-organization is that it can transform a seemingly simple, incoherent system (e.g., light traffic) into an ordered, coherent whole (a strongly interactive traffic jam). Adding a few more vehicles at a crucial stage transforms the system from a state in which the individual vehicles follow their own local dynamics to a critical state where the emergent dynamics are global. This involves a phase transition of an unusual kind: a non-equilibrium phase transition. Space scales change suddenly from microscopic to macroscopic. A new organization mechanism, not restricted to local interaction, has taken over. Occasional jamming transitions will even span the whole vehicle population, because the traffic has become a complex system with its own emergent dynamics. What's most important is that the emergence of stop-start waves and jams, with widely varying populations of affected vehicles, could not have been anticipated from the properties of the individual drivers or their vehicles.

The ability of organizations to modify themselves, especially insofar as individual agents endogenously adapt their actions to new conditions (Epstein 2003), is a key characteristic which our simulation model must account for. Moreover, we must account for this characteristic with regard to both of the theoretical approaches discussed in this section. All of this, moreover, is related to the problem of how agents coordinate outside the market, that is, inside firms and amid interrelated firms, as proposed in Richardson's (1972) seminal paper.

Applied Use of jES: Interaction between People and the Model

The second application of jES relates to actual human decisions. When we encounter a choice in a simulation framework, how should our decision be made? The possible responses include: in a random way (which may be a less unrealistic approximation of actual people's behavior than it appears; this paradox is confirmed by our experience in enterprise model building), using fixed rules, using an expert system, via soft computing techniques (genetic algorithms and classifier systems) or by asking an actual agent what to do. The last possibility is useful both in terms of training people and in terms of monitoring actual agents' behavior. Regarding human interventions, we can easily include real subjects in the simulation framework by employing artificial agents as avatars of actual agents. In the simulation, avatars follow their represented personality (e.g., via a web page) when deciding what to do while the simulation is running. This is a very promising way of building laboratory experiments that aim to analyze the actions and reactions of participants in a well-known situation, such as their working environment. In reality, decision processes, and consequently the behavior of enterprises and organizations as a whole, are mainly based on a continuous process of trials, errors, learning and adaptation, with decisions acting as the kernel of the entire chain of actions. Thus, we must deal with the single steps that ultimately constitute decisions and avoid black boxes, keeping in mind the analysis of Simon (1997), as quoted earlier.


Figure 11.3 Decision dilemma.

For example, how is it possible to decide in a case like that indicated by X in Figure 11.2 and further shown in Figure 11.3? How are decisions made in reality and, by extension, how can we reproduce them within the simulation? A pragmatic reply is that a random choice is not such an unrealistic representation. This is especially true if the effects of the possible rational choices nevertheless operate in the presence of multiple constraints and intrinsic environmental complexity. Besides casting decisions in a random way (admittedly perhaps the worst benchmark), we can generate more structured behaviors by using sets of predetermined rules. These rules may eventually be structured via an expert system, which managers apply interactively, possibly employing fuzzy logic tools. This may also help account for situations in which it is difficult or impossible to have clear true-or-false distinctions. We can also adopt sophisticated soft computing systems to evolve choice capabilities via genetic algorithms or to evolve rules via classifier systems. If we use soft computing, the simulation system can operate like a function for estimating the fitness of either the solutions or the proposed rules. While a simulation is running, we can always ask a person what their choice would be in a detailed situation generated by the simulation. We can then incorporate that answer into the successive steps of the model. The simulation thus serves to inform that person about the consequences of their choices, insofar as they can see those consequences directly.

In addition, the simulation allows researchers to observe how people behave under certain conditions and situations. As such, we can run true experiments within the metaphorical walls of a simulated enterprise or organization, with one or more of the agents who populate the simulated world adopting actions that are informed by choices made by actual individuals ("natural agents") who are familiar with the enterprise environment that the simulation attempts to replicate. This is because the choices of the natural agents are integrated into the simulated world via the artificial agents acting as their avatars, and those choices subsequently modify the simulated organizational context of the model.

Applied Use of jES: Simulation of Actual Enterprises and Organizations

In line with the theoretical background introduced previously, the third application of jES involves the simulation of actual enterprises. In other words, it creates computational models of those realities in order to understand their behavior and thereby improve the related decision processes. Once the simulated model of the enterprise behaves in a way that is sufficiently close to reality, we can introduce "what-if" situations into that model to understand the consequences of various changes. We can then reasonably posit that the same effects will also occur in reality. Along these lines, we present three particularly useful applications of jES: (i) in the mechanical sector, to cope with organizational problems related to the use of partially or fully automated production tools; (ii) in the clothing sector, to compare the performance of web-based enterprises that externalize risks (products are sold directly by the trading companies, to which the production is assigned, to the wholesalers or licensees) with that of hypothetical similar enterprises that bear the full risk of the production process; and (iii) in the urgent health care field, to experiment with changes in the organization in an artificial way before actually introducing such changes. In addition, the theses (online in Italian at http://web.econ.unito.it/terna) of several university students of mine relate to applying jES and, more generally, enterprise simulation. One student, Matteo Morini, has focused on production optimization using genetic algorithms applied in a simulation framework. At http://papers.ssrn.com/sol3/papers.cfm?abstract_id=635363 you can find a report, presented with Gianluigi Ferraris, who is particularly involved in the genetic algorithm part of this work. Gianluigi Ferraris is also co-author, with Marco Lamieri, of the useful guide for using ART tools for classifier systems and genetic algorithms, online at https://sourceforge.net/projects/artlibrary. Case (i) successfully found a real bottleneck in the enterprise's production and suggested, via what-if analysis, a consistent solution.

Case (ii) demonstrated the possibility of reproducing a complex set of enterprise choices for comparative purposes. For more information, the 'computational steps' are explained in section 4.4 of 'How to Use the Java Enterprise Simulator (jES) Program' (file how_to_use_jes.pdf at the jES site). Case (iii) focuses on emergency medical transportation and aims to create a complete model of a system that includes a wide-territory call system, immediate medical prescreening and ambulance management. Since the program allows researchers to introduce modifications to the simulated world, these three cases employ the simulator to perform what-if analyses. As it is applied to a complete simulation of the actual enterprise, the what-if logic accounts for all the effects of any modifications, including both direct and indirect effects, as well as any internal and external interactions. The main results and instructions on using jES in these three applications are reported in Appendix B.

To summarize the basis of jES: agents are objects, and likewise, the orders and the production units able to deal with the orders are also objects. There are also agents representing the decision nodes; these nodes are either avatars of actual people or points at which rules and algorithms (like GA or CS) are carried out. Avatars are directed by actual people. Thus, in this way, we can simulate the effects of actual choices, simultaneously enabling us to use the simulator as a training tool and as a way to run economic experiments that illuminate how people behave in organizations. This, we posit, addresses Simon's (1997) major question. In jES, we use object-oriented programming to simulate a complex environment, where both the WD component (the orders) and the DW component (the production units) are objects. From the perspective of computer science, those objects are agents insofar as they are independent pieces of code. Likewise, from a social science perspective, we can consider anything acting and interacting in a simulation environment to be an agent. However, as social scientists, we also want simulated agents to metaphorically represent actual ones, and we find them in the nodes where decisions have to be made. What kind of decisions are we concerned with? We can deal with very simple decisions, such as assigning an order to a production unit (see subsection 3.5 in the jES How To introduced earlier), deciding whether to produce inventories (see 3.8) or diffusing news about upcoming orders (see 3.7). In addition, we can also deal with very complicated decisions with complex consequences, such as the global scheduling of events, the introduction of new sub-recipes or recipes (see 2.1), of a new production unit (see 3.1 and 3.2) or of a new firm (more than one production unit). Production units can be understood as atomic units, either linked within a firm, linked in a system of firms (a district) or linked in a system of sub-firms constituting a virtual enterprise.

FROM jES TO jESOF

Thus far, the use of jES in several firm and organization simulations has demonstrated it to be very good at simulating real-world problems. In other words, the main issues and questions that entrepreneurs must confront could be quite easily simulated with the jES approach. Taking advantage of the powerful ABM approach, jES is very flexible. For example, it allows the user to reproduce a wide range of different situations while maintaining the same simulation structure. Production units can be asked to activate their production capabilities, or production units can be given the ability to produce either to satisfy a specific request or to replenish their warehouse. Moreover, both orders and production units are able to account for costs and production time. The novelty introduced by the jES platform, as compared to process-based simulators, is the idea of the recipe: a set of parameters, with information driving the tokens throughout the process flow. The enterprise is represented in an upside-down way, since the recipes allow users to describe both the product topics and the process devoted to obtaining them. The process organization comes out as an outcome of the simulation runs. In this way, very little knowledge about the interplay among units is needed, and some mistakes can be avoided. Understanding an enterprise is difficult, since it is hard to catch and formalize the implicit knowledge that is typically hidden within a productive organization. jES is aimed at investigating this implicit knowledge as something that emerges from the simulation. Thus, the user is asked to describe only technical facts that are usually well known and documented. The jESOF tool was developed as an evolution of jES. It consists of a set of simple jESs combined in a multiple-stratum structure, like that of Figure 11.4, where each stratum or level could represent a specific type of enterprise: manufacturers, banks, institutional organizations or others. In this way, a far-reaching industrial environment can be implemented by simply replicating the basic jES paradigm.

Figure 11.4 Different layers or strata in jESOF.

The jESOF structure is very powerful for setting up a simulation of a complex environment where different types of organizations interact. Each atom of a jESOF structure can be devoted to specific goals, leaving the tool to manage the interactions.

A Three-Level Prey–Predator Model: Grass, Rabbits, and Foxes

Now we examine a prey–predator example reported in the tutorial of the document 'How to Use the jES Open Foundation Program to Discover Complexity in Multi-Stratum Models', found online at the jES site. In order to reproduce the simulation, we must copy the contents of the folder apps/tutorial/step3b_the-predators/ to the main folder of jESOF. That is, we must copy the folders recipeData0, recipeData1, recipeData2, unitData0, unitData1, unitData2, the two ComputationalAssembler files, with extensions .java and .class, and, finally, jesopenfoundation.scm. The how_to_use_an_application.txt file is only descriptive, and thus it is unnecessary to copy it. In addition, if no modifications of the jESOF code are foreseen, it is also unnecessary to copy ComputationalAssembler.java.

Level 0 of the model operates as a first distiller of orders, generating the grass by duplicating the existing grass units in random positions in the space. Despite the random choices, the co-evolution of the three levels concentrates the grass in the zones of the predators, which eat prey and not grass. The prey (rabbits, in our case) are represented in level 1, whose distiller of orders generates three types of events:

• Orders commanding the rabbits to eat can be sent. These orders contain a computational step. As a consequence of that step, the rabbit that receives the recipe (either all the rabbits or only some of them, according to how many orders are launched, how many rabbits exist and whether a unit can receive more than one order per cycle) cancels a unit of grass in level 0 and increases the level of energy in its matrix of data, if visible grass exists.
• Orders can be sent to the rabbits, again through a recipe containing a computational step, to consume a part of their energy or to die if the energy level falls below a preset value.
• Orders can be sent to the rabbits, again through a recipe containing a computational step, to generate new rabbit units near their position.

Predators (foxes, in our case) are represented in level 2, whose distiller of orders operates in analogy to that of level 1. A sketch of these computational steps follows this paragraph. The global result reported in Figure 11.5 is a direct consequence of the interaction of these three correlated levels: the presence of predators displaces the prey, which therefore do not consume the grass. (Note that the areas of visibility with an empty center are due only to a screen refresh problem. The unit has been canceled, but the visibility area will be eliminated only with the redesign of the space at the beginning of the successive cycle.)
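The following Python fragment is our reconstruction of the logic just described, not jESOF code; the grid size, energy values and all function names are assumptions for illustration. It shows what the computational steps of the "eat" and "consume energy" recipes amount to for a rabbit unit:

import random

GRID = 100                      # a 100 x 100 world, as in the jESOF examples
grass = {(random.randrange(GRID), random.randrange(GRID)) for _ in range(500)}
rabbits = [{"pos": (random.randrange(GRID), random.randrange(GRID)),
            "energy": 5.0} for _ in range(50)]

def visible_grass(pos, radius=2):
    """Grass cells inside the rabbit's square area of visibility."""
    x, y = pos
    return [(gx, gy) for (gx, gy) in grass
            if abs(gx - x) <= radius and abs(gy - y) <= radius]

def eat_step(rabbit, gain=1.0):
    """The 'eat' recipe's computational step: cancel one visible grass
    unit in level 0 and raise the rabbit's energy level."""
    cells = visible_grass(rabbit["pos"])
    if cells:
        grass.discard(random.choice(cells))
        rabbit["energy"] += gain

def decay_step(rabbit, cost=0.5, death_level=0.0):
    """The energy-consumption step; returns False if the unit dies."""
    rabbit["energy"] -= cost
    return rabbit["energy"] > death_level

survivors = []
for r in rabbits:               # one cycle of eat and decay orders
    eat_step(r)
    if decay_step(r):
        survivors.append(r)
rabbits = survivors
print(len(rabbits), "rabbits alive,", len(grass), "grass cells left")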


Figure 11.5 From the left: grass level, rabbit level and fox level; grass, rabbits and foxes are represented by the dark areas; the medium gray areas are empty zones; the white spaces are the areas of visibility of the units.

The reader can run all the cases of the prey–predator model proposed in the tutorial and also experiment with new scenarios.

Workers' Skills and Firms: A Co-Evolution Model

A case of co-evolution of two systems is now introduced, one involving the evolution of the enterprises and the other the evolution of the workers, in two versions. The files (in four versions) can be found in the materials on jESOF at the jES site, in four folders with similar names aside from their final character: apps/workers_skills_firms/workers_skills_firms_v_1/ (or 2, 3, or 4). Here we use only the contents of the first two folders. An order generator, belonging to level 4, randomly creates orders at a rate of one per cycle; these contain recipes based on a dictionary of three possible activities consistent with the existing units. The recipe length is between one and five steps, and every step can take one or two cycles, so the shortest recipe requires one cycle of elaboration and the longest requires ten (see the sketch below). Also active in the simulator is an order distiller that belongs to level 0 but also works with the units from the other levels, including the fourth. Thus, we have here five models interoperating, and these can be thought of as five levels, numbered from 0 to 4. For those familiar with Swarm, the models can be understood in the following manner: the most important levels are the first (number 0) and the last (number 4); levels 1, 2 and 3 are dedicated to visualizing the subgroups of workers with professional skills 1, 2 or 3; and the workers are represented all together in level 0. The visualization on a specific level occurs by duplicating the unit so that it appears in the other levels with a modified code (namely, the previous one plus a constant) in order to avoid any ambiguity in the reception of the orders directed to the original units. If a unit disappears from level 0, its copy will be canceled.
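The behavior of this level-4 order generator can be rendered in a few lines of Python. This is a sketch under the parameters just stated; the function and constant names are ours:

import random

ACTIVITIES = (1, 2, 3)   # the dictionary of three possible activities

def generate_order():
    """One random order per cycle: a recipe of one to five steps, each
    step an (activity, cycles) pair lasting one or two cycles, so the
    shortest recipe needs 1 cycle of elaboration and the longest 10."""
    length = random.randint(1, 5)
    return [(random.choice(ACTIVITIES), random.randint(1, 2))
            for _ in range(length)]

recipe = generate_order()
print(recipe, "- total cycles:", sum(t for _, t in recipe))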


Figure 11.6 Workers–firms, v. 1.

The enterprises that populate level 4 are of three types, and these coincide with the phase of activity that they can complete. They are initially generated by type, with a 0.8 probability of generation. In the successive cycles, the probability of birth of an enterprise is 0.2 per cycle. Enterprises can survive in an inactivity state for 15 cycles, and they can accumulate a maximum quantity of 10 products that are unsuccessfully delivered. If we consider that 15 cycles correspond to three months of total inactivity in the real world, which is a reasonably conjectured survival time for a company having no activity, the 3,000 cycles of the simulation are equivalent to 200 quarters, or 50 years. The workers that populate level 0 are also of three types, corresponding to their respective professional skills. Initially, there are 50 workers of each type, with a 1.0 probability of creation. The generation of an individual per cycle has probability 0.9, but this generation does not represent the most important source of new workers at this level. Rather, the presence of new workers is mainly due to orders to generate new workers that are addressed to the existing ones. These orders to duplicate the units representing the workers are launched in groups of 10 for each type of activity in every cycle. The computational steps used here can be of two types: the copy representing the new worker in level 0 (with another copy shown in one of the levels 1, 2 or 3) can appear either in proximity to the copied worker or in a random position. The copy orders, generated with probability 0.9, can be received more than once by a unit worker in a given cycle, with a random shuffle of the sequence of the units before every launch of orders.


The copy orders, generated with probability 0.9, can be received several times by the same unit worker in a cycle, with a random shuffle of the sequence of the units before every launch of orders. The areas of visibility of the enterprises are initially set at random between 0 and 200 points, in a world sized 100 × 100 and therefore consisting of 10,000 points; visibility then increases by 0.5 points per cycle. The areas of visibility of the workers, instead, are fixed at 8 points, that is, the eight positions adjacent to the unit worker. The link between an enterprise and its workers is realized through orders, 10 of which are launched in each cycle. Each such recipe contains a first step, to be executed by an enterprise, and a second step, to be executed by a worker having the same professional skill needed by the production of the enterprise unit that carried out the first step. The worker is visible to the enterprise if at least one point of its area of visibility (nine points, that is, the eight points of visibility plus the worker itself) intersects the visibility area of the enterprise (see the sketch below). As the order reaches the unit worker, a computational step accounts for the hired workers in the memory matrix of the enterprise, and another computational step accounts for the days of employment in the memory matrix of the worker. Workers are hired for one cycle only, and then the process is repeated; an enterprise or a worker can receive more than one order in the same cycle.

In each cycle, an order of activation is launched for each one of the three types of unit enterprise. The computational step contained in these recipes applies to all the enterprises of the same type with probability 1.0 and deducts one from the same cell of the memory matrix of every unit that accounts for the hired workers. If the total falls below –10, the unit enterprise is canceled, as it cannot survive after 10 cycles without activity. The same type of elaboration is done for the unit workers, with a –5 threshold under which the unit is canceled (in other words, unit workers cannot survive after 5 cycles without activity).
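The visibility test can be condensed into a few lines of code. The sketch below is our illustration: the text fixes only the sizes of the two visibility areas (up to 200 points for enterprises, the 8 adjacent positions for workers), so the circular shape of the enterprise area and the treatment of the world borders are assumptions.

// A minimal sketch of the worker-enterprise visibility test described
// above; the circular shape of the enterprise's area and the handling of
// the world borders are assumptions, not details given in the text.
public class VisibilityCheck {
    // A worker is visible to an enterprise if at least one of the nine
    // points of its area (the worker plus its eight neighbours) falls
    // inside the enterprise's visibility area.
    static boolean isVisible(int ex, int ey, double enterpriseRadius,
                             int wx, int wy) {
        for (int dx = -1; dx <= 1; dx++) {
            for (int dy = -1; dy <= 1; dy++) {
                double d = Math.hypot(wx + dx - ex, wy + dy - ey);
                if (d <= enterpriseRadius) return true;
            }
        }
        return false;
    }
}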

Figure 11.7 Workers–firms, v. 2.

The mechanism of interaction between enterprises and workers is quite loose, and it is possible that the workforce is employed and activated even if the unit enterprise is currently without a production load, a detail not lacking in realism. Let us now introduce the main difference between the two versions of the model presented here:

• Version 1: new workers are placed beside the original position of the copied worker, reproducing a diffusion of competencies by proximity, with a professional choice due to imitation. The depiction of the work units in Figure 11.6 is the direct consequence of this formulation. The enterprises form a single district, connected inside the cluster to islands of workers with homogeneous skills. The number of enterprises increases quite slowly, and it becomes stable only after a long period of about 30 simulated years.
• Version 2: new workers are randomly placed in the space. The depiction of the work units in Figure 11.7 is the direct consequence of this formulation. The localization of the enterprises is more articulated, and the number of enterprises increases quickly, reaching a stable level after approximately five simulated years.

What conclusions can we draw from this admittedly stylized experiment? At least in terms of professional guidelines, the conclusions concern the risk of an excessive concentration of job competencies in the same space. In both versions of the model we observe complex dynamics in the number of workers; on the explanation of cycles in the labor market as an unsolved puzzle, see Hall (2005) and Shimer (2005).

APPENDIX A: jESLET, AN INTRODUCTORY VERSION OF jES

jES has a simplified version named jESlet (jES light experimental tool, developed using Swarm), built mainly to help social science scholars introduce themselves to agent-based simulation techniques. jESlet was created by downsizing jES (v. 0.9.7.60) from 36 classes with 10,612 lines of global code to 11 classes with 1,670 lines of global code. If you are interested in the internal structure of the code, we recommend that you follow the explanation below with the code in hand. You can download jeslet-1.2.0.tar.gz (or successive versions) from http://web.econ.unito.it/terna/jes/. For Windows users, the basic command to run jESlet from a Command prompt window is jeslet.bat, located in the folder where jESlet has been uncompressed. For more information on running the program (in Linux or in other environments), please look at the README.TXT file in the distribution.

To run programs in the jES family, you must have Java (1.5 or more recent) installed. You can download javaSwarm from http://ftp.swarm.org/pub/swarm/binaries/w32/Swarm-2.2-java.zip and unzip it directly in the c:\ root, obtaining c:\Swarm-2.2-java. After doing so, open the Command prompt window (formerly the DOS window) and execute the following commands:

set SWARMHOME=c:/Swarm-2.2-java
set CLASSPATH=.;c:\Swarm-2.2-java\share\swarm\kawa.jar;c:\Swarm-2.2-java\share\swarm\
set PATH=c:\Swarm-2.2-java\bin;%path%

Alternatively, you can create a .bat file containing the preceding lines and execute it when you open the Command prompt, or define the environment variables permanently as stated previously.

The key features of jES, which are denoted by [KF] in the paragraph ‘How the model works’ within the ‘How to Use the Java Enterprise Simulator (jES) Program’ document, are reproduced in jESlet and reported here. Let us remember that with jES we introduce two independent components into the description and representation of our world and, consistently, into our program (i.e., into our model). Our simulated enterprise or organization must encompass both the “What to Do” (WD) aspect of the world (i.e., the orders) and the “which is Doing What” (DW) component of the same organization or enterprise (i.e., the production units). A third level of formalism relates to the time sequence of the events (the orders to be executed) that occur in the world we are reproducing; this is the “When Doing What” (WDW) component, which is not used in this simplified version of jES.

How the Light Model Works

From a technical point of view, it is important to note that almost all the intelligence of our simulation process is placed on the order (WD) side. We can describe the behavior of the code in the following way (suppose that we are not at the beginning of the simulation, so the process is already running and elaborating orders):

1. Production units operate on the orders existing in their waiting lists at a rate of one order per tick of the simulation clock (in the code: modelActions2 in ESFrameModelSwarm.java and unitStep1 in Unit.java).
   • Once processed, the orders are placed in a “made production” list, to be subsequently diffused to other production units.
2. New orders (each containing its own recipe) are launched in production (in the code: modelActions2generator in ESFrameModelSwarm.java and createRandomOrderWithNSteps in OrderGenerator).
   • Each order contains a recipe consisting of a sequence of steps to be done.
   • New orders enter the simulation. They are randomly generated via the orderGenerator object for the program test, and this is also the way they are deployed in the light version of the program.
   • The new orders are assigned to production units in the way described in point 3.
3. Each order contained in the made-production lists of the production units initiates a search, via the unit code, using the assigning-tool code. This search ascertains whether one or more available production units can perform the steps that remain to be done in order to complete the recipe (in the code: modelActions2b in ESFrameModelSwarm.java and unitStep2 in Unit.java).
   • Assignments:
     • If (only) one production unit makes a positive reply, the order is assigned to the waiting list of that production unit.
     • Once orders are assigned to the waiting list of the chosen production unit, they remain there according to a FIFO (first in, first out) criterion until their specific step is done.
     • An order is dropped when the last step of its recipe is done.
4. The sequence continuously goes back to the phase described in point 1 for the next tick of the clock (other steps are devoted to initialization and to accounting operations, but for the sake of simplicity they are not reported here).

Time synchronization and parallelism are obtained via a trick in the simulation (a minimal sketch of the resulting tick is given below). At each tick of the simulation clock, all the production units perform the actions described in point 1 independently (in other words, their actual time sequence does not matter). Then, always in the same clock tick, the program executes point 2 and, when all these actions are concluded, orders that the operations described in point 3 be performed, again independently and always in the same tick.
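The three phases of each tick can be condensed into the following sketch. The interfaces are hypothetical stand-ins for the jESlet classes cited above; the actual Swarm schedule is reported in Box 11.1.

// A minimal sketch of one tick of the jESlet clock, assuming simplified
// stand-ins for the Unit and OrderGenerator classes; the actual Swarm
// schedule is shown in Box 11.1.
import java.util.List;

interface Unit {
    void unitStep1();  // point 1: process the first order in the waiting list
    void unitStep2();  // point 3: diffuse the made-production list
}

interface OrderGenerator {
    void createRandomOrderWithNSteps();  // point 2: launch one new order
}

public class TickSketch {
    static void tick(List<Unit> units, OrderGenerator generator) {
        for (Unit u : units) u.unitStep1();       // producing
        generator.createRandomOrderWithNSteps();  // a new order per tick
        for (Unit u : units) u.unitStep2();       // reassigning the orders
    }
}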

The Parameters and the Recipe Structure

When jESlet starts, we can see its two probe windows (Figure 11.8).


In the first pane, we have the ESFrameObserverSwarm parameters (when changing parameters, remember to press the Enter key to effectively write the value into the system; logical values are true and false, and the shortened forms t and f do not work here):

• displayFrequency states the frequency of the display updates while the simulation is running: 1 means that the display is updated at each simulation clock tick, 2 that it is updated every two ticks, and so on.
• verboseChoice, when set to true, produces printed lines related to the internal activities of the program.
• timeToFinish, if not set to zero, is the time, expressed in number of ticks, at which the simulation is stopped.

Figure 11.8 The parameters of the simulation.

The second pane of Figure 11.8 reports the ESFrameModelSwarm parameters:

• totalUnitNumber is the number of production units populating our simulation. For example, in Figure 11.1, we have three units. The units will automatically receive the identifying numbers 1, 2 and 3, according to the file unitData/unitBasicData.txt of the distribution jeslet-1.2.0.tar.gz. In each line, the file contains the identifying number of a unit and the code describing its production capability. In our case, the unit numbers and the codes of the related activities are the same in each case (i.e., unit 1 performs activity 1, unit 2 activity 2 and unit 3 activity 3), but this is not a mandatory condition, and the sequence of the numbers does not need to be ordered or continuous. (According to unitData/unitBasicData.txt in the distribution, if we want to use more than 10 units, we have to add other lines. Meanwhile, if we have more than one unit with the same production capability, only the first one will be used in this light version.)
• maxStepNumber is the maximum number of steps contained in a recipe describing an order.
• maxStepLength is the maximum number of time units (e.g., seconds) attributed to the execution of a step.

In our case, valid recipe structures are:

1 s 1 3 s 1 2 s 1: the first step requires the execution of activity 1, in our case made by unit 1, and lasts 1 second; the second step requires the execution of activity 3, in our case made by unit 3, and lasts 1 second; the third step requires the execution of activity 2, in our case made by unit 2, and lasts 1 second.

3 s 1 2 s 1: the first step requires the execution of activity 3, in our case made by unit 3, and lasts 1 second; the second step requires the execution of activity 2, in our case made by unit 2, and lasts 1 second.

With maxStepLength = 4, a valid case would be:

3 s 4 2 s 2: the first step requires the execution of activity 3, in our case made by unit 3, and lasts 4 seconds; the second step requires the execution of activity 2, in our case made by unit 2, and lasts 2 seconds.

Recipes in jESlet orders are randomly generated following both these kinds of rules and a dictionary built using the information contained in the units.
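Reading the examples above, a recipe is a flat sequence of triplets: an activity code, the separator s and a duration. The following sketch reflects our reading of this textual format; it is an illustration, not the actual jESlet parsing code.

// A minimal reader for the recipe notation shown above ("activity s
// length" triplets, e.g. "3 s 4 2 s 2"); an illustration of the format,
// not the actual jESlet code.
public class RecipeReader {
    public static void main(String[] args) {
        String recipe = "3 s 4 2 s 2";
        String[] tokens = recipe.trim().split("\\s+");
        for (int i = 0; i + 2 < tokens.length; i += 3) {
            int activity = Integer.parseInt(tokens[i]);
            // tokens[i + 1] is the separator "s"
            int seconds = Integer.parseInt(tokens[i + 2]);
            System.out.println("step: activity " + activity
                    + ", lasting " + seconds + " second(s)");
        }
    }
}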

Explaining the Classes and the Model Schedule

Figure 11.9 presents a simplified view of the jESlet classes and their methods, following the UML class diagram formalism (the Unified Modeling Language is documented at http://www.uml.org). The starting point is StartESFrame, which contains the main method that creates an instance of ESFrameObserverSwarm. The creation is followed by the execution of the methods buildObjects, buildActions and activateIn. buildObjects (as part of the Observer) is responsible for creating both the objects used to monitor the model outcomes and the model itself; the same method in the Model generates all the objects used to run the simulation. buildActions (as part of the Observer) creates the events to be executed along the simulation clock in order to supervise the model; the same method in the Model creates the events to be executed along the simulation clock to activate, at the due moment, the various simulation steps. activateIn is a mandatory technical Swarm task, both for the Observer and for the Model. Following the Swarm protocol, a container class, the Observer, is used to create both the model and the tools necessary to observe its outcomes. Exactly as StartESFrame creates ESFrameObserverSwarm, the Observer creates an instance of ESFrameModelSwarm and runs its methods buildObjects, buildActions and activateIn. Executing the buildObjects method, ESFrameModelSwarm creates an instance of OrderGenerator and totalUnitNumber instances of Unit.


Figure 11.9 A UML view of jESlet.

It also creates an instance of AssigningTool and an instance of UnitParameter. UnitParameter is used only in the initial phase of generating the instances of Unit, to deal (in the full version of jES) with the problem of complex production units able to perform more than one activity. The instances of Order are generated by OrderGenerator while the simulation is running. SwarmUtils, MyReader and MyExit are static classes used to perform technical tasks.

When the user presses Next or Start in the Swarm control panel, the simulated time makes a single step or starts running. The sequence of events of the paragraph ‘How the light model works’ is thus executed. We now look at the points of that description and at the schedule contained in ESFrameModelSwarm, which is reported here in Box 11.1. In this structure we can also use selectors: for readers unfamiliar with javaSwarm, a selector is a structure useful to transfer a method name via a list of parameters. With regard to the paragraph ‘How the light model works’:

1. Point 1 relates to the execution of modelActions2 in the ESFrameModelSwarm schedule, which activates the unitStep1 method in all the instances of Unit at the beginning of each tick of the simulated clock. The current step of the recipe of the first order in the waiting list of each unit is thus executed.

2. Point 2 relates to the execution of modelActions2generator in ESFrameModelSwarm, which activates the createRandomOrderWithNSteps method in OrderGenerator as the second event of each tick of the simulated clock. Through AssigningTool, OrderGenerator sends the newly generated order (one per tick) to the instance of Unit able to perform its first step.
3. Point 3 relates to the execution of modelActions2b in ESFrameModelSwarm, which activates the unitStep2 method in all the instances of Unit as the third event of each tick of the simulated clock. Through AssigningTool, unitStep2 sends the Order instances contained in the made-production list of each instance of Unit to the instance of Unit able to perform their next step. Order instances without further steps to be executed are dropped out of the simulation.

With the simulation time running, the schedule of ESFrameModelSwarm continuously repeats these three tasks.

Box 11.1 The ModelActions and the schedule in ESFrameModelSwarm.

/* producing */
modelActions2.createActionForEach$message(unitList,
    SwarmUtils.getSelector("Unit", "unitStep1"));

/* a new order; this step is placed here, after the production step,
 * to align the diffusion of the order forms (orders under execution
 * - next step - or new ones - this step) */
modelActions2generator.createActionTo$message(orderGenerator,
    SwarmUtils.getSelector(orderGenerator, "createRandomOrderWithNSteps"));

modelActions2b.createActionForEach$message(unitList,
    SwarmUtils.getSelector("Unit", "unitStep2"));

// Then we create a schedule that executes the modelActions.
modelSchedule = new ScheduleImpl(getZone(), 1);
modelSchedule.at$createAction(0, modelActions2);
modelSchedule.at$createAction(0, modelActions2generator);
modelSchedule.at$createAction(0, modelActions2b);


Running the Examples

To run the examples contained in the three subfolders of exampleCases/, copy their contents into the main jESlet folder and then run the code as explained previously, with jeslet.bat in Windows (follow the README.TXT explanations for other environments). The example case_i, with 3 units and recipes with a maximum of 3 steps, each of length 1, runs with a reasonable total time/total length ratio: in a simulation of 200 ticks, we have a final value of about 1.5; i.e., the time required by the simulated production is 1.5 times the expected time reported in each recipe. The case_ii, again with 3 units and recipes with a maximum of 3 steps, but each of maximum length 3, produces a ratio of about 9 after 200 ticks. In case_iii, the number of production units is increased to 6 (with recipes of maximum 3 steps, each of maximum length 3), and the ratio is now only 2 after 200 ticks.

APPENDIX B: THREE JES APPLICATIONS

Case (i): Finding Bottlenecks in Enterprise Production

The first application introduced in this appendix involves a mechanical enterprise of a traditional type, with special attention given to the quality process in the production of two main families of products. These products are characterized by a high number of subtypes requested by the worldwide market, a demand that production addresses with an export rate of 95 per cent. The intrinsic complexity of the production cycle justifies reconstructing a virtual model of the enterprise through jES. The description of the enterprise required the definition of nearly 100 productive units (operating inside the enterprise), of which one half were of the complex type, i.e., able to carry out several kinds of activity. It was also necessary to define approximately 80 recipes, so the task was quite complicated. All the files can be found in the jES materials in the folder appCases/Case_i/. To deal with this case, it was necessary to introduce three new tools operating within the order recipes, now permanently incorporated in the code of jES:

• Stand-alone batch: this works on groups of products in a single block, with the choice of the production time and of the number of products necessary to form a batch. This formalism is particularly useful to specify procurement processes coming from units outside the enterprise, for which it is unnecessary to use a magnifying glass and segment recipes into small steps of the production process.

• Sequential batch: this works on bunches of products inside a sequence defined by a detailed recipe. The elapsed time of a sequential batch refers to the entire group of operations. The quantity of pieces to be included in a group is defined by the user, and jES waits until the correct number of orders is reached to carry out all the activity simultaneously, employing the programmed time.
• OR with orCriterion: using this code, the activities can follow different paths according to different moments and different waiting lists. Here we use an OR among all the possible paths. The chosen path is determined by the orCriterion parameter, set at the beginning of the simulation (a sketch of this dispatch is given below):
   • With 0, all the branches are executed in sequence (this option is used only to test the program).
   • With 1, the first branch is chosen.
   • With 2, the second one is chosen (so it is useful to place in the first and second positions the more frequently used branches).
   • With 3, a choice is randomly extracted among all the possible branches of the OR. This modality approximates, in a fairly realistic way, many real processes of choice where the problems are particularly complicated.
   • With 4, the branch chosen is the one whose first unit (the one doing the first step of the production recipe after the OR point) has the shortest waiting list.
   • With 5, the choice is determined by a computational step, via a Java routine added by the user (see the jES How To, quoted earlier).

Other criteria can be developed quite easily, and criterion 5 is already open to any possible algorithm through the computational step.

With the model, we have simulated three months of activity of the enterprise, with two shifts of work a day, a working shift being 28,800 seconds. In order to accelerate the calculations, each shift has been divided into 29 ticks, each one representing approximately 1,000 seconds. The simulation of the company’s operations has been considered highly realistic by the people managing it. In Figure 11.10, the diagram on the left represents the waiting lists of the productive units; the diagram in the center reports the ratio between the effective time of production and the conjectured expected time; and the diagram on the right shows the economic result (where the negative values are due only to the choice of conventional costs and revenues). We can see an overload in the units between 1 and 15, those of the lathe sector. Using unitCriterion = 4, if more than one unit can execute a production step, the choice is directed to the unit with the shortest waiting list.
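The orCriterion list above can be condensed into a small dispatch function. The sketch below is our illustrative reconstruction of the selection logic; all the names are hypothetical, not actual jES code, and only the six criterion values come from the description above.

import java.util.List;
import java.util.Random;

// An illustrative reconstruction of the orCriterion dispatch described in
// the list above; all names here are hypothetical, not actual jES code.
interface Branch {
    int firstUnitWaitingListLength();   // queue length at the branch's first unit
}

interface UserRoutine {                 // criterion 5: a user-added Java routine
    int chooseIndex(List<Branch> branches);
}

public class OrCriterionSketch {
    static final Random RNG = new Random();

    static Branch choose(int orCriterion, List<Branch> branches, UserRoutine user) {
        switch (orCriterion) {
            case 1:  // the first branch is chosen
                return branches.get(0);
            case 2:  // the second one is chosen
                return branches.get(1);
            case 3:  // a branch is randomly extracted
                return branches.get(RNG.nextInt(branches.size()));
            case 4:  // the branch whose first unit has the shortest waiting list
                Branch best = branches.get(0);
                for (Branch b : branches)
                    if (b.firstUnitWaitingListLength()
                            < best.firstUnitWaitingListLength())
                        best = b;
                return best;
            case 5:  // the choice is delegated to a user computational step
                return branches.get(user.chooseIndex(branches));
            default: // criterion 0 executes all branches (testing only)
                throw new IllegalArgumentException("use a criterion from 1 to 5");
        }
    }
}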


Figure 11.10 Production with unitCriterion = 2.

Figure 11.11 The same case, adding three complex units operating as lathes.

We made the experiment of introducing three lathe stations with multifunctional capabilities; in the terminology of jES, these are complex units. The results, shown in Figure 11.11, show an evident reduction in the waiting lists and in the delays, with a slight improvement in the economic result. We can now make a generalization: the verification through simulation of a structural action like the one exemplified here must be considered in a non-symmetrical way. If the simulation succeeds, the result can be taken into consideration for application in reality; but if it fails, it is probably better to renounce the application in the real world altogether.

Case (ii): Reproducing Different Enterprise Outsourcing Strategies

The second example of business system applications involves an enterprise operating in the field of casual clothing, with its own trademarks and a worldwide presence. The activity of the enterprise relates only to creating clothing collections and their related services, which include planning and organization but not manufacturing, neither internal nor outsourced. The main production is that of virtual sample collections of clothing, which are transformed into actual sample collections (the unique material production of the firm) and sold to the licensees only if sufficient interest emerges from them after the collections have been showcased on the web. The licensees, on the basis of the actual collections and of the reactions of their clients, mainly stores and commercial chains, formulate a forecast to purchase, although not a definitive one.

The enterprise, on the basis of the forecasted quantities, carries out a worldwide auction to ascertain the best price for production meeting given qualitative specifications. The licensees will buy the products directly from the auction winner, but they can also order the production from suppliers in whom they have confidence, though either must respect the qualitative specifications. The company, being the owner of the collection design, receives a royalty on the products sold. To reproduce this unconventional organization in a simulation model, it was necessary to cope with a few technical challenges. Thus, we modified the code in the following ways:

• We introduced computational steps, widely used also for other crucial applications and in jESOF. Namely, we introduced a computational step that calls a Java routine when a specific step of a recipe is activated. That routine interacts with the data matrices of the simulation system, whose contents may come from other computational steps or from initial settings. For example, within an OR structure (as seen earlier), a set of virtual orders may be sent to the licensees to request their forecasts. The results are then summarized via the memory matrices of the simulation system in order to verify whether the result is sufficient to start a worldwide auction.
• We also introduced layers, not to be confused with the strata or levels containing the multiplicity of models of jESOF. Here, the layers are used to distinguish equal orders (e.g., to produce a sample collection, to ask for forecasting or to open an auction) that belong to different temporal moments. Therefore, the orders contain both the recipe describing the sequence to be served by the different acting units and the layer to which the sequence pertains, with layers acting as metaphorical representations of space or time. The model manages all the layers in a parallel way with its unique time schedule. Links amid different layers are possible, which may be useful, for example, in dealing with supply processes common to different layers.

The simulation model has been useful from two perspectives: (i) to quantify the bottlenecks of the business process; and (ii) to compare, in a virtual way, the results of different ways of organizing the business, both factually and counterfactually. The time range considered in the simulation forces the production of several collections to overlap, thereby verifying the ability of the model to reproduce the different operational behaviors of the company:

• as it is;
• using the worldwide auction process after the forecasting step, that is, buying from the producers and selling to the licensees;
• outsourcing the production at its own risk and selling the product to the licensees, without asking them to forecast the quantities;
• managing the whole process with internal production and taking up the risk, as a traditional company.

In this way, we test the descriptive capability of the simulation model, not only with reference to worlds that exist but also with regard to counterfactual situations that we know only through the calculations in our computer. The files of this application are in the jES materials in the folder appCases/Case_ii/.

Case (iii): Emergency Call

A system of emergency calls for first-aid health care, such as the one studied here, can be subdivided according to the following functions: (i) operators replying to the telephone calls; (ii) operators evaluating the contents of each call (such as a doctor and several highly trained paramedical units); (iii) operators who assign the ambulances to the missions (with the help of paramedical staff, if needed); and, finally, (iv) the other agents of the simulation, including those in the subsystems formed by the ambulances and their operating staff.

People replying to telephone calls carry out a filtering function, accepting only pertinent calls. They assign them to the operators who evaluate, in direct interaction with the callers, the medical situation of each emergency, to decide whether or not to send an ambulance and, if so, of what type. After the decision, the telephone dialogue is not immediately concluded, as operators give suggestions about actions to be undertaken while waiting for the arrival of the ambulance. People in charge of the management of the ambulances allocate them to the missions, and in some cases they also decide the destination hospital; in other cases, this decision is left to the operators of the ambulance, always following a predetermined set of rules.

In this critical system, every change must be carefully evaluated before being introduced. Therefore, it makes sense to first simulate the system with all its characteristics and details, as well as with routines reproducing the actions of the operators with all the possible interactions. After extensive testing, the model can then be used to experiment with changes in a virtual way. If a change succeeds in the computerized model with positive effects, then its application in the real world can be considered, though always with caution. If the change fails in the simulation, then, after having investigated the reasons for the negative outcomes, the change should be considered extremely conservatively before being applied in reality.

The simulation of such an organization can also be used as a tool for training, with one or more of the agents of the model acting as avatars connected with actual operators who choose and decide on behalf of the artificial agents (see the earlier discussion). The effects of these choices can be incorporated in real time in the model outcomes, thus allowing interactive learning.

The simulation model of the emergency call system has been built both as an ex post model and as an ex ante one.

In the ex post view, the model reconstructs a cycle of 24 hours of events, as in a normal day. Certainly, it would be useful to extend this type of experiment to a greater temporal interval and to more periods, but the preparation of the data was hugely time consuming, due to the incomplete automation of the database at the time of this application. In the ex post formulation, the recipes of the jES system are one-to-one copies of the recorded events: they indicate with absolute precision each step done, and they also note the exact ambulance used in each case. The sequence of the orders with those recipes (WDW) is contained in the rows of the sequential archive of the events (orderStartingSequence.xls). The simulation faithfully reproduces what happened on the chosen day, with all the details of the waiting lists, if any.

Having verified the ability of the model to reproduce reality in an ex post way, the ex ante model (reported in the jES materials in the folder appCases/Case_iii/Modello_ex_ante/) does not know in advance which ambulance will be assigned to each mission. This is a choice problem, as seen previously when introducing the OR criterion. It is possible to apply the model to the same sequence of events historically recorded in a given period, or to feed it events (via a random generator) extracted from a set of events that actually occurred. In such a way, it is possible to place the system in overload conditions (see the subfolder reported earlier; the file note_per_uso.txt, in Italian, reports the conditions of the overload experiments).

In Figure 11.12, we present the system in normal conditions, with the left bars of the diagram indicating a normal condition of activity of the operators doing the medical evaluations and giving first-aid prescriptions by phone. In Figure 11.13, by contrast, we present the reaction of the system to an overload condition in which many ambulances are in use. The left area represents the operators conducting medical evaluations and giving first-aid prescriptions by phone, now showing substantial waiting lists. As can be seen, those units do not succeed in taking charge of the overflow of events, creating a dangerous situation in which calls arrive without operators available to answer.

Figure 11.12 The system in a normal situation, with the waiting lists of the units.


Figure 11.13 The system with simulated overloads.

The overload conditions that have been simulated involve doubling the events during the most critical intervals of the day (at the beginning, in the middle and at the end of the day). Figure 11.14 shows the return of the system to normality after an increase of 50 per cent in the operating units, to deal with an increase of 100 per cent in the activity load. This result demonstrates the non-linearity of the reaction of the system, and this is a case in which the non-linearity of the response must be taken into account in order to solve a problem. The result can offer useful insights about the dimensioning of the critical staff, and it may also help clarify the number of persons involved in other activities who should be asked to rapidly join the staff in case of emergency.

The model can be developed in other directions. First, from a technical point of view, the model can temporarily eliminate a few units (the DW component of the model) in order to simulate ambulances that are temporarily out of service. It would also be useful to introduce a spatial localization of the units, to be used both in the choice of units and to represent their movement, as is possible in jESOF. Finally, the model should be adequately simplified to allow people to interact with it without formal barriers.

Figure 11.14 Increasing the number of the evaluation units: the effect.

BIBLIOGRAPHY

Axtell, R. (1999) ‘The emergence of firms in a population of agents: local increasing returns, unstable Nash equilibria, and power law size distributions’, Center on Social and Economic Dynamics, working paper no. 3. Available HTTP: (accessed 18 February 2009).
Axtell, R. (2000) ‘Why agents? On the varied motivations for agent computing in the social sciences’, Proceedings of the Workshop on Agent Simulation: Applications, Models and Tools, Chicago, Argonne National Laboratory. Available HTTP: (accessed 18 February 2009).
Barton, J.A., Love, D.M. and Taylor, G.D. (2001) ‘Evaluating design implementation strategies using enterprise simulation’, International Journal of Production Economics, 72: 285–99.
Batten, D.F. (2000) Discovering Artificial Economics—how agents learn and economies evolve, Boulder, CO: Westview Press.
Burt, R.S. (1992) Structural Holes—the social structure of competition, Cambridge, MA: Harvard University Press.
Clarkson, G.P.E. and Simon, H.A. (1960) ‘Simulation of individual and group behavior’, American Economic Review, 50 (5): 920–32.
Epstein, J.M. (2003) ‘Growing adaptive organizations: an agent-based computational approach’, Santa Fe working paper. Available HTTP: (accessed 18 February 2009).
Gibbons, R. (2000) ‘Why organizations are such a mess (and what an economist might do about it)’, unpublished manuscript. Available HTTP: (accessed 18 February 2009).
Hall, R.E. (2005) ‘Employment fluctuations with equilibrium wage stickiness’, American Economic Review, 95 (1): 50–65.
Kirzner, I. (1997) ‘Entrepreneurial discovery and the competitive market process: an Austrian approach’, Journal of Economic Literature, 35 (1): 60–85.
Lomi, A. and Larsen, E.R. (eds) (2001) Dynamics of Organizations—computational modeling and organization theories, Menlo Park, CA, Cambridge, MA, London: AAAI Press/MIT Press.
Padgett, J.F., Lee, D. and Collier, N. (2003) ‘Economic production as chemistry’, Santa Fe working paper. Available HTTP: (accessed 18 February 2009).
Richardson, G.B. (1972) ‘The organisation of industry’, Economic Journal, 82 (327): 883–96.
Shimer, R. (2005) ‘The cyclical behavior of equilibrium unemployment and vacancies’, American Economic Review, 95 (1): 25–49.
Simon, H.A. (1997) Administrative Behavior: a study of decision-making processes in administrative organizations, New York: Free Press, Simon & Schuster.
Walker, G., Kogut, B. and Shan, W. (1997) ‘Social capital, structural holes and the formation of an industry network’, Organization Science, 8 (2): 109–25.
Zuckerman, E.W. (2003) ‘On Networks and Markets by Rauch and Casella, eds.’, Journal of Economic Literature, 41: 545–65.

12 From Petri Nets to ABM
The Analysis of the Enterprise’s Process to Model the Firm

Gianluigi Ferraris

INTRODUCTION

Agent-based models can be used as a valuable tool to investigate production and management problems. By exploiting such a tool, it is possible to perform a deep what-if analysis and to catch hidden effects due to the dynamic interaction among the parts operating in organizations. A critical phase in building the model is the initial analysis of the processes to be simulated; for such a task, Petri nets have been employed in a wide range of cases. This chapter investigates the possibility of using Petri nets to analyze both the real processes and the simulated ones. Some examples will show how to employ this approach to: (i) contribute to the setup of the model, and (ii) validate the model structure by comparing the properties of the two nets.

SIMULATION AND ENTERPRISE

Simulations and what-if analysis are often used to investigate firm management problems. Traditional simulations are normally focused on a few components of the industrial process (only one, in a large number of cases) and are based upon rigid mathematical models. Such simulations mainly consist of computing a number of key variables representing stocks or flows of the firm. Changes are represented as different configurations of the production and management processes, usually named scenarios. This approach has several limitations:

i. It includes heavy “ceteris paribus” conditions.
ii. The consequences of each change are simply computed, not really experimented by performing the new process.
iii. The scenarios have to be configured in advance by the analysts, so their number is limited, and their plausibility and effectiveness are fully dependent on that human factor.

The “ceteris paribus” conditions are needed in order to focus the simulation on single elements of the system: the great majority of the components’ behaviors are assumed to be fully independent. Such an assumption can be extremely dangerous when dealing with complex systems, where the interaction among the parts plays an important role in determining the aggregate results. Agent-based models offer the possibility of simulating a larger number of components’ behaviors, in order to increase the plausibility of the model and to improve its effectiveness in dealing with simulations of complex aggregates too.

Traditional simulation techniques, like micro-simulations (Orcutt 1957), simply compute new configurations of a system by applying a sort of coefficient matrix: unexpected behaviors, as well as accidental facts, can be taken into account at the aggregate level only, usually by assigning a certain probability that those facts happen or by expressing a level of confidence in the final results. By exploiting the agent-based paradigm, events happen in the simulated world: the analyst can observe the dynamics of the phenomena and study the way accidental facts influence the aggregate results.

The what-if analysis compares data referring to different configurations of a system, hereinafter scenarios. Traditional techniques handle those scenarios as exogenous elements, so analysts have to formally describe them before doing the experiment: the number and effectiveness of the configurations are totally dependent on the analysts’ capability and on the time they are allowed to spend on the experiment. Simulations based upon the agent-based technique allow the endogenous production of a great number of different scenarios, overcoming the limits due to human reasoning and saving a large amount of time. Summarizing, by using the agent-based paradigm:

i. It is possible to simulate the whole enterprise, or at least a complete process, so the “ceteris paribus” condition can be partially relaxed.
ii. The processes are executed in the simulated world and their results are collected as in the real world; matters related to the interaction among different components of the process become evident, whereas they would have been hidden if the resulting figures had been simply computed.
iii. Different scenarios arise and continuously evolve during the simulation execution, due to the dynamic interaction among agents.

At a practical level, agent-based models and, generally speaking, all the bottom-up methodologies offer a big simplification opportunity: whereas top-down traditional techniques would require the reproduction of the final effect, dependent on the whole complexity of the interaction among components, the agent-based ones simply need the description of each component and of the process to be simulated; complex effects will be directly computed during the simulation.


The bottom-up description contained in agent-based models is more comprehensible for managers and staff: it uses names, objects and processes that are handled daily by those people, instead of the highly metaphorical descriptions embedded in equation systems. Agent-based models can be used both to: (i) simulate a whole period and observe the results and the dynamics due to the initial assumptions and parameter settings, and (ii) continuously interact with human beings asked to decide about some management matter; in this way, useful simulation games can be implemented for staff-training purposes.

The main problem in using agent-based models is related to their verification: formal theories or equation systems have properties that can be observed and studied to ensure they were well built, and such a verification is very hard to perform for a model that consists of a set of computer programs dynamically interacting. Another matter that can reduce the effectiveness of the models is related to the process description: firms often hardly know their own processes at the required level of detail, because the production intelligence is the personal know-how of a few key resources, who are unlikely to share it, so as to maintain or increase their power. Models based upon such poor descriptive material can contain hidden mistakes and suffer a dramatic lack of precision; results obtained from such simulations could sponsor solutions that will produce disappointing results when activated in the real production process.

Enterprises and aggregates of programs usually operate following defined processes, and both can be assumed to be complex systems. The analysis of such systems can be successfully performed by using Petri nets: a methodology to represent and study processes and systems that provides tools to compute a set of formal properties useful to verify that systems will work properly. The Petri nets method can contribute to overcoming both the verification problem and the poor-description problem because: (i) the nets have formal properties that are easy to test, and (ii) their analysis is useful to catch errors in the process. Petri nets offer the possibility of giving agent-based models a formal expression, in order to allow their scientific validation, and they are suitable to formalize the analysis of the enterprise’s processes. By drawing the net of the process to be simulated and the net of the simulation programs, it is possible to ensure that both have the same properties and are free of interpretation and representation errors.

THE PETRI NETS

The method was set up by C.A. Petri (1962) to represent complex systems, and it has since had a wide diffusion; it was deeply investigated by Reisig (1982) and, more recently, has been studied by Di Cesare et al. (1993) and Chetty and Gnanasekaran (1996). At first glance, Petri nets are a tool to graphically and formally describe a system, in order to study its specific properties.

A Petri net is a graph where arcs connect two different kinds of nodes: places and transitions. Whereas a place represents a status of the system, a transition means an action or an event. Each place can host tokens, usually called marks; the different dispositions of tokens into places are named markings of the net. Places that own tokens are defined as marked, and places without tokens are indicated as unmarked. Markings represent different states of the whole reproduced system.

Arcs can connect only a place to a transition and vice versa; no arcs between two places or two transitions are allowed. The arcs connecting places (the so-called input places) to transitions are called input arcs; the arcs connecting transitions to places (the output places) are called output arcs. The arcs show how the transitions will move tokens from the input places to the output ones. By moving tokens, transitions allow the net to evolve from marking to marking. More precisely, when activated, transitions move tokens from their input places to the output ones, letting the system move from one marking to another; this is the so-called firing of the transition. Before firing, a transition has to be enabled; that means all the input places of the transition have to be marked and the output ones have to be unmarked. During the evolution of the net, tokens are always changing place, so that evolution is also called the token game. The initial status of the net is named the initial marking.

The structure of the net is shown through a special graph where places are represented by circles and transitions by small rectangles; arcs are lines drawn between places and transitions. The structure of the net and the initial marking determine which other markings are reachable during the development of the net. A particular graph, called the reachability graph, can be drawn by representing all the possible markings and connecting them through arcs. This kind of analysis is hardly ever exploited in full, because the reachability graph can be huge even for small nets.

SIMPLE PETRI NETS

In the simplest nets, the so-called conditions-events nets (Brauer, Reisig and Rozenberg 1986), places can contain one token only. This kind of net can be defined (Di Leva and Giolito 1989) as a two-sided graph where nodes belong to two finite and separate sets: the conditions set, hereafter C, and the events set, hereafter E. Input and output arcs, respectively I and O, are defined in expression 12.1 as:

I ⊆ C × E and O ⊆ E × C.    (12.1)

A marking is expressed as:

M ∈ [C → {0,1}].    (12.2)


The initial status of the system, the initial marking, is defined as:

M₀ ∈ [C → {0,1}].    (12.3)

A whole marked net is defined by a set of five elements:

R = (C, E, I, O, M₀).    (12.4)

For each event, i.e., transition, it is possible to define its input and output conditions as:

•e = {c ∈ C : ⟨c, e⟩ ∈ I}.    (12.5)

e• = {c ∈ C : ⟨e, c⟩ ∈ O}.    (12.6)

An event is enabled, and thus the corresponding transition can fire, if each of its input places contains at least a token and all its output places are empty. After a transition has fired, the previous marking M becomes a new one, M′:

M [e > M′.    (12.7)

It is possible to represent the transitions of a net as a matrix, by adopting a simple standard notation where each element of the matrix W(c,e) contains the value: (i) +1 if ⟨e, c⟩ ∈ O; (ii) –1 if ⟨c, e⟩ ∈ I; and (iii) 0 in the other cases. A marking can be represented by a vector where the generic element M(Cᵢ) is set to one if the corresponding condition is activated, i.e., the corresponding place owns a token; otherwise it is set to zero. A sequence of events can be represented as an integer vector σ where, for each event: (i) one means the event will be enabled, and (ii) zero means the event will not be enabled. Transitions between two markings can then be expressed through the fundamental equation:

M′ = M + Wσᵀ.    (12.8)

This kind of equation allows the usage of linear algebra techniques and supplies a convenient formal representation of the system. The set of all the markings reachable from a generic marking Mᵢ is usually expressed, as in Equation 12.9, as:

[Mᵢ >.    (12.9)

All the markings through which the net can evolve are defined by the set of the markings reachable from the initial one, M₀, as in expression 12.10:

[M₀ >.    (12.10)

Figure 12.1 Example of a conditions-events net.

A simple Petri net may be used to represent the process inside a simulation model, like the net in Figure 12.1, which represents a simple procedure where agents are asked to do the following things: (i) decide a strategy to be performed in the environment, (ii) perform the chosen strategy, (iii) update some of their own accounting variables and (iv) reset the just-performed strategy, to be ready to compute another one. To perform their actions, the agents have to deal with the environment, which is able to handle a single request at a time; so, when an agent is operating, the environment is busy and the other agents have to wait until it becomes available again.

Figure 12.1 shows a simple net representing the events and the process of a small, generalized agent-based model. There, the first place, called “Agent ready”, represents the status where an agent is ready to start a simulation cycle, and “step1” is the first event, enabled by the presence of agents in the place “Agent ready”. After receiving the “step1” command, the agent elaborates its own strategy to be performed in the environment.

The second transition, “step2”, which orders the agent to perform its strategy in the environment, needs at least one token in the “Environment ready” place to fire, because the environment is able to deal with only one agent at a time. This transition moves tokens from “Agent thinking” to “Agent operating” and from “Environment ready” to “Environment busy”. When “step3” is performed, the environment becomes ready for another agent, while the previous one starts updating its accounting variables. If other agents were in the net, one of them would have earned the possibility of starting a simulation step: in this way, more than one agent can perform different tasks during the simulation run. “step4” resets the agents’ strategies to allow the execution of another simulation cycle.

Figure 12.1 clearly shows that whereas “step1” and “step4” can be performed by each agent in a really parallel way, “step2” and “step3” need a coordination of the agents, due to the limited capability of the environment. The net in Figure 12.1 represents the action of one agent only; to put in more than one agent, the agents’ places have to be multiplied, and the net rapidly becomes wider and wider.

Table 12.1 Incidence Matrix of the Conditions-Events Net

W                     Step 1    Step 2    Step 3    Step 4
Agent ready             –1         0         0         1
Agent thinking           1        –1         0         0
Agent operating          0         1        –1         0
Agent updating           0         0         1        –1
Environment ready        0        –1         1         0
Environment busy         0         1        –1         0

Table 12.1 shows the incidence matrix of the conditions-events Petri net used as an example.
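As an illustration (a worked example of ours, not from the original text), consider the marking in which one agent is thinking and the environment is ready, with the places ordered as in Table 12.1. Firing “step2” through the fundamental equation 12.8 amounts to adding the Step 2 column of the matrix:

M = (0, 1, 0, 0, 1, 0) and σ = (0, 1, 0, 0),

M′ = M + Wσᵀ = (0, 1, 0, 0, 1, 0) + (0, –1, 1, 0, –1, 1) = (0, 0, 1, 0, 0, 1),

that is, the agent is now operating and the environment is busy, exactly as the token game prescribes.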

STANDARD PETRI NETS

Simple Petri nets tend to generate huge graphs even to represent quite small systems, so the method has been empowered by allowing: (i) each place to contain more than one token, and (ii) each transition to fire multiple tokens, i.e., to move several tokens from the input places to the output places. This kind of net is usually named the standard Petri net or “place-transition” net.

The graphs obtained by using standard Petri nets are more concise, and they allow the user to represent complex firing rules, including conditions related to the presence of a minimum number of tokens in some places. Such transitions move a finite number of tokens from one place to another. To characterize this kind of net, the capability of each place has to be defined; for this task, a capability function is needed. Provided that P and T respectively represent the sets of places and transitions, I and O the input and output arcs, K the capability function and M₀ the initial marking, a generic net becomes formally defined by expression 12.11:

R = (P, T, I, O, K, M₀),    (12.11)

where the function K returns, for each place, the capability expressed through a natural number, and the initial marking becomes an application of the places set to N. Places cannot have a capability less than one. This clause is formalized by the conditions expressed in Equation 12.12:

K : P → N − {0} and M₀ : P → N.    (12.12)

For the input and output arcs, a weight specification is needed to represent the number of tokens moved when the transition fires, so the functions for arcs become those shown in Equation 12.13:

I : P × T → N and O : T × P → N.    (12.13)

Provided that •t and t• respectively represent the input and output places of a generic transition t ∈ T, such a transition is enabled by the marking M if:

∀p ∈ •t : M(p) ≥ I(p, t) and ∀p ∈ t• : M(p) + O(t, p) ≤ K(p).    (12.14)

A transition t can fire only if: (i) in the places in •t, the number of tokens is greater than or equal to the weight on the input arcs, and (ii) in the places in t•, there is room for all the tokens moved, according to the weight on the output arcs. When a transition fires, the net evolves from the marking M to another one, M′, as formalized in Equation 12.15:

∀p ∈ P : M′(p) = M(p) − I(p, t) + O(t, p).    (12.15)

Standard nets, as well as the conditions-events ones, can be expressed through the matrix:

W = Oᵀ − I.    (12.16)

The firing rule to move from M to M′ becomes:

M′ = M − Iσ + Oᵀσ,    (12.17)

which can be shortened to:

M′ = M + Wσ.    (12.18)
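The enabling and firing rules of Equations 12.14 and 12.15 translate almost literally into code. The sketch below is our illustration; the array-based representation of I, O, K and M is an assumption, not an implementation taken from the chapter.

// A minimal sketch of the place-transition enabling and firing rules
// (Equations 12.14 and 12.15); the array-based representation is an
// illustrative assumption, not code from the chapter.
public class PlaceTransitionNet {
    int[][] inWeights;   // I(p, t): tokens required from place p by transition t
    int[][] outWeights;  // O(t, p): tokens added to place p by transition t
    int[] capacity;      // K(p): maximum number of tokens place p can host
    int[] marking;       // M(p): current number of tokens in place p

    // Equation 12.14: t is enabled if every input place holds enough
    // tokens and every output place has room for the tokens to be added.
    boolean isEnabled(int t) {
        for (int p = 0; p < marking.length; p++) {
            if (marking[p] < inWeights[p][t]) return false;
            if (marking[p] + outWeights[t][p] > capacity[p]) return false;
        }
        return true;
    }

    // Equation 12.15: firing t produces the new marking M'.
    void fire(int t) {
        if (!isEnabled(t)) throw new IllegalStateException("not enabled");
        for (int p = 0; p < marking.length; p++)
            marking[p] += outWeights[t][p] - inWeights[p][t];
    }
}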

Figure 12.2 Example of a standard Petri net.

The example graph previously used to draw the conditions-events net can be redrawn, as in Figure 12.2, as a standard net graph, by allowing places to host more than one token. In this way, multiple-agent systems can be represented using the previous simple graph, even if each arc moves one token at a time. The synchronization among agents is clearly shown: when an agent is operating, no other agent can do the same thing; the environment is able to deal with one agent at a time only. During the development of the net, i.e., during the simulation computation, agents may be in different states, and the initial marking will never be reached again. No weights are indicated on the arcs, because the common convention is that arcs without weights move a single token at a time.


Figure 12.3 Standard Petri nets with weights on arcs.

In a standard Petri net, multiple arcs may be employed to join places and transitions; that means the transition will move more than one token from one place to another when it fires. Using this kind of representation, it is possible to synchronize the action of the system. Figure 12.3 shows the modification needed to give the agents the same information level before the thinking phase, by requiring “step1” to move both tokens to the “Agent thinking” place. In this way, the process performs a coordination between the two agents and ensures that they exploit the same information level. The weight on the arrow between “Agent ready” and “step1” means that “step1” can fire only if two or more tokens are in “Agent ready”; since only two tokens have been put into the initial marking, “step1” has to be performed by all the agents at the same time. In this way, the informative advantage that late-operating agents might otherwise be given is avoided: the agents start thinking at the same time, sharing the same information. Table 12.2 shows the incidence matrix of the place-transition Petri net used as an example.


Table 12.2 Incidence Matrix of a Place-Transition Net

W                     Step 1    Step 2    Step 3    Step 4
Agent ready             –2         0         0         1
Agent thinking           2        –1         0         0
Agent operating          0         1        –1         0
Agent updating           0         0         1        –1
Environment ready        0        –1         1         0
Environment busy         0         1        –1         0

PROPERTIES OF A PETRI NET

By analyzing graphs and firing rules, systematic properties (Silva 1985) of the net can be investigated; the main ones are: reachability, limitedness, liveliness, recurrence, and the presence of blocks, deadlocks, traps and mutually exclusive places.

The study of a system may require investigating the possibility of reaching specific markings; in the Petri nets methodology, this research is addressed as the reachability problem of a generic marking M. In the example case, the initial marking is an interesting one: the reachability of such a marking is very important to guarantee that the agents share the same information level. By studying this problem, the net in which all the arcs move a single token has been discarded, whereas the net in which “step1” moves all the agents at one time has been selected.

A place-transition system is limited if, for every marking, the reachability set is limited too; that means it contains a definite and limited number of markings, for all M ∈ [M₀>. The example net shown has this property.

The liveliness property holds if, for every marking reachable from M₀ and for every transition t, there exists a reachable marking M′ that enables t. This condition is summarized by formula 12.19:

∀M ∈ [M0>, ∀t ∈ T: ∃M' ∈ [M> | M' → t.

(12.19)

Not-alive systems often contain modeling mistakes; the example net is an alive one. A recursive net is able to regenerate its initial marking. This property is useful for models where a dynamic evolution of the system is needed. The formal condition to define a net as a recursive one is contained in formula 12.20.

∀M, M' ∈ [M0>: M ∈ [M'>.

(12.20)

A block occurs when a dead marking is reached. A dead marking does not enable any transition and stops the net’s evolution. A net is unblocked if:


¬∃M ∈ [M0> | M is dead.

(12.21)

When, in a net, a subset of the places has transitions that are both input and output ones at the same time, this subset is called a “deadlock”. Provided that D is the set of the deadlock places and D ⊆ P, the deadlock is described as:

•D ⊆ D•.

(12.22)

A deadlock is very critical because if the subset lost all its tokens it would not be able to replace them, and its output transitions would never become enabled. Two places, p and p', are defined as mutually exclusive if at least one of the two expressions 12.23 and 12.24 is true.

∀M ∈ [M0>: M(p) > 0 → M(p') = 0.

(12.23)

∀M ∈ [M0>: M(p) = 0 → M(p') > 0.

(12.24)

An instance of this relation is supplied by the places “Environment ready” and “Environment busy” in the example net. A trap is a special configuration due to the presence of a subset of places R whose output transitions are also input ones. If:

R ⊆ P ∧ R• ⊆ •R,

(12.25)

the set R can never lose all its tokens.

ANALYSIS OF A PETRI NET

For the Petri nets, three different kinds of analysis are applicable: (i) enumeration, useful for studying the marking space; (ii) reduction, used to verify properties like liveliness, limits and unblocks; and (iii) structural analysis, based on algebraic techniques to compute specific properties of a particular net, and to investigate deadlocks and traps. The enumeration analysis (Brams 1983) is based upon drawing all the evolution paths of the net. A specific kind of graph, the “markings graph”, is obtained by connecting each marking to the other markings reachable from it. If the graph is finite, the net is limited; highly connected graphs usually demonstrate that the nets are cyclic and alive. The exhaustive exploration of the graph allows one to find mutual exclusions, and to investigate reachability in depth.


Figure 12.4 Marking graph of the example net.

For non-finite nets, instead of a reachability graph, a tree graph could be used to represent the evolution of the markings. The evolution of the example net, and the synchronization between the two agents, is shown in its marking graph: starting from the initial marking, the other reachable markings are connected through arrows. Figure 12.4 shows the net to be cyclic; the initial marking is reached iteratively, and the system will not end until the user stops the run. The reduction techniques (Berthelot 1985) are devoted to transforming a generic net, as in Equation 12.26:

R = (P, T, I, O, K, M0),

(12.26)

into another one with the same properties, as expressed in Equation 12.27:

R' = (P', T', I', O', K', M'0).

(12.27)

A fundamental rule states that a generic net R is alive and/or limited if and only if the reduced net R' is also alive and/or limited. Following such a principle, the reduction can be performed several times, until the resulting net is small, and simple, enough to allow an easy verification of its properties. Nets are reduced by: (i) eliminating implicit places, and (ii) reducing a subnet to a macro place. A generic place p ∈ P is an implicit one if:


∃ aq ≥ 0, b ∈ ℝ | ∀M ∈ [M0>: M(p) = b + Σ(q∈P') aq M(q).

(12.28)

In the expression, P' = P − {p}, and for a generic transition t ∈ p• each p ∈ P' owns enough tokens to enable t. A subnet Rs ⊆ R could be substituted by a place, usually called a macro place, if the three following conditions are true:

i. The subnet does not create or drop any token; i.e., tokens may move into the net but only in a conservative way.
ii. The liveliness of the transitions in the subnet is implied by the general liveliness of the whole net.
iii. All the tokens in the subnet can be employed to enable every exit transition from the net.

The structure of a net could be studied independently from its evolution; this kind of investigation supplies evidence of some relations that do not vary during the net’s evolution, the so-called “invariants”. A typical invariant is a set of places P' ⊆ P that hosts a constant number of tokens in every reachable marking; letting Yᵀ be the characteristic vector of the set P', and W = O − I the matrix that algebraically describes the net, an invariant of the net is computed as one of the solutions of the system:

WᵀY = 0,

(12.29)

where Y ∈ ℕⁿ and n = |P|. A basic property of the invariants is:

∀M ∈ [M0>, ∀Y: YᵀM = YᵀM0.

(12.30)

For a generic vector Y, the set of its non-zero components is defined as the support of the vector; the usual notation of this set is ||Y||. An invariant is a normalized one when the greatest common divisor of its components is one. The elementary invariants set, hereinafter Υ, is defined as the set containing the minimum number of invariants from which all the other invariants could be linearly generated. An elementary invariant has to be normalized, and its support must not contain the support of any other invariant. By studying the invariants (Alaiwan and Toudic 1985) it is possible to verify important properties of a net, such as the presence of limited places, mutually exclusive places and so on.


To investigate the presence of limited places it is possible to compute the greatest number of tokens a generic place p ∈ P can host as:

N(p) ≤ min(Y∈Υ') [YᵀM0 / Y(p)],

(12.31)

where: Υ' = {Y | Y ∈ Υ ∧ p ∈ ||Y||}.

(12.32)

Provided that Y is a generic invariant and p and q are two places of the net for which:

Y(p) + Y(q) > YᵀM0,

(12.33)

then p and q are mutually exclusive. A net is a live one if:

∀Y ∈ Υ: YᵀM0 ≥ max(t∈T) Σ(p∈||Y||) Y(p) I(p,t) ≥ 1.

(12.34)

To illustrate the computation of an elementary invariant, Table 12.3 contains the incidence matrix of the weighted net previously used as an example, and shows the computational algorithm suggested by Silva (1982). The procedure consists of building an extended matrix by joining the incidence one (W) to an identity square matrix with a number of columns equal to the number of rows in W. Then the following steps are repeated for each W column, until the whole W contains only zeros:

Table 12.3 Invariants’ Computation of the Example Net

 N    S                          Step 1  Step 2  Step 3  Step 4   Identity
 1    d    Agent ready             –2      0       0       1      1 0 0 0 0 0
 2    d    Agent thinking           2     –1       0       0      0 1 0 0 0 0
 3    d    Agent operating          0      1      –1       0      0 0 1 0 0 0
 4    d    Agent updating           0      0       1      –1      0 0 0 1 0 0
 5    d    Environment ready        0     –1       1       0      0 0 0 0 1 0
 6    d    Environment busy         0      1      –1       0      0 0 0 0 0 1
 7    d    = r1 + r2                0     –1       0       1      1 1 0 0 0 0
 8    Y0   = r5 + r6                0      0       0       0      0 0 0 0 1 1
 9    d    = r7 + r3                0      0      –1       1      1 1 1 0 0 0
10    Y1   = r9 + r4                0      0       0       0      1 1 1 1 0 0

(“d” marks rows deleted during the computation.)

i. Look for all the linear combinations, obtained by coupling two rows, that give an all-zero row in the selected column.
ii. Add the result of each such linear combination to the extended matrix.
iii. Delete all the remaining rows where the selected column has non-zero values.

At the end of the procedure the remaining rows are split, and the parts obtained by transforming the added identity square matrix constitute the elementary invariants of the net. Table 12.3 shows the computation for the net used as an example. After the extended matrix has been built, linear combinations that reset column “step1” are searched for. There is only one suitable couple: row 1, “Agent ready”, and row 2, “Agent thinking”. The new row, the seventh one, is written at the end of the matrix, and rows 1 and 2 are considered to be deleted. No other rows have to be deleted, because all the other ones contain only zeros in the column “step1”. Then column “step2” is investigated to compute two new rows: row 8 = row 5 + row 6, and row 9 = row 7 + row 3. Then rows 3, 5 and 6 are deleted, and so on. At the end of the computation only row 8 and row 10 survive; their second parts constitute the two invariants:

Υ = {Y0, Y1}, with Y0 = (0, 0, 0, 0, 1, 1) and Y1 = (1, 1, 1, 1, 0, 0).

(12.35)

COLORED PETRI NET

Like the standard ones, the colored Petri nets (Kristensen, Christensen and Jensen 1998) provide a framework for the analysis of distributed and concurrent systems: they describe the states a system may reach, and the related transitions. The states of a colored Petri net are represented by places that have an associated type, the so-called color set; the type determines the kind of data the place can contain. Types may be complex objects, like the types used in programming languages: records made by connecting several fields of different formats, such as strings, real numbers, integers, binary values and so on. In this way each place can host multiple and different tokens; the representation of the system becomes less expensive in terms of places and arcs, so the resulting net is simpler and smaller than the standard Petri net describing the same system or process. Large complex systems could be better represented and studied. The markings of a colored Petri net consist of a different distribution of tokens, each of which carries specific values, metaphorically speaking


colors, that make the tokens different; by contrast, only indistinguishable black tokens are used in the standard Petri nets. A generic marking of a place is a multi-set of token values, where a multi-set is defined as a set in which multiple instances of the same element are allowed, so a place may have more than one token representing the same value. Because transitions in a colored Petri net may move a variable number of tokens carrying different values, a simple weight on the arcs could not contain enough information; arcs connecting places and transitions therefore carry an expression that determines the number of tokens, and their values, moved by the transition. To examine the consequences of a generic transition firing, the variables in the arc expression have to be bound to data values; then the arc expression can be evaluated. In addition, it is possible to attach to an arc expression a guard, a Boolean value that allows the binding only if its value is true. If several bindings are possible for an arc, the corresponding transitions can occur in different ways. When an enabled transition fires, the tokens selected by evaluating the related arc expression are moved from the input place to the output place. Arc expressions could be managed as functions; to formalize such functions, domain and co-domain definitions are needed. Domains are linked to transitions and co-domains to places. In this way a common domain for all the arc functions on a same transition t ∈ T is defined for all the transitions; also, a co-domain for all the functions related to a place is set up for all the places p ∈ P. The co-domain consists of the set that contains all the colors allowed to be hosted in a place; for that reason the co-domain is usually indicated with COL(p), where p ∈ P. The symbol used for the domain is DOM(t), where t ∈ T. Different functions may lie on the input and output arcs of the same transition: Fpt usually indicates the function related to the arc that connects a generic place p to a generic transition t, whereas Fqt is the function on the arc between a transition t and a place q. Provided that M(p) indicates the current marking of the place p, a transition t ∈ T between p and another place, for instance q, is enabled if:

∃e ∈ DOM(t) | ∀p ∈ •t: Fpt(e) ∈ M(p).

(12.36)

Defining Pt as the set of all the input places of a transition t, and Qt as the whole set of the output places, after t has fired:

M(Pt) ← M(Pt) − Fpt(e),

(12.37)


M(Qt) ← M(Qt) + Fqt(e).

(12.38)

A colored Petri net can be represented as:

R = (P, T, DOM, COL, I, O, M0)

(12.39)

or, given W = O − I, in a short form as:

R = (P, T, DOM, COL, W, M0).

(12.40)

Figure 12.5 Example of a colored Petri net.

An example of a colored Petri net is shown in Figure 12.5, where the example model has been represented. Instead of different places to represent the states of different model components, three different colors are used in this net: “a1” indicates the first agent, “a2” the second and “e” the environment. In this way specialized places for the environment are no longer needed, and the graph is simplified, because some of the remaining places may contain both agent and environment tokens. Arcs are allowed to move different types and numbers of tokens, in order to simulate the effects of each transition. This kind of representation requires fewer places, even if in the example case only one place has been saved; the advantage becomes much greater when dealing with very complex systems.


Referring to the transitions, the domains can be defined as:

DOM(Step1) = DOM(Step4) = {a1, a2}.

(12.41)

DOM(Step2) = DOM(Step3) = {a1, a2, e}.

(12.42)

Colors in places are defined as:

COL(Ready) = COL(Operating) = {a1, a2, e}.

(12.43)

COL(Thinking) = COL(Updating) = {a1, a2}.

(12.44)

Table 12.4 Incidence Matrix of a Colored Net W

             Step 1   Step 2   Step 3   Step 4
Ready         –Ma      –Me       Me       Ma
Thinking       Ma      –Ma        0        0
Operating       0       Id      –Id        0
Updating        0        0       Ma      –Ma

An incidence matrix could be used to represent a colored Petri net; Table 12.4 shows the incidence matrix of the colored Petri net used as an example. Instead of weights, functions are put on the arcs to represent the consequences of each transition; when the arc is an output one for a place, the function is given a minus sign, meaning the function will withdraw marks, or colors. The Petri net in Figure 12.5 uses three different functions, defined as:

Ma(ai) = ai.

(12.45)

Me(e) = e.

(12.46)

Id(a1, e) = a1, e.

(12.47)

Id(a2, e) = a2, e.

(12.48)

ANALYSIS OF A COLORED PETRI NET

Colored Petri nets can be analyzed using the same techniques applicable to the standard ones; the methods have to be empowered to deal with functions instead of simple weights. In the set of reduction techniques, one more could be added to reduce the number of colors. On the structural side, an invariant can be defined as a vector of functions X for which:


Xᵀ ∘ W = 0.

(12.49)

Such a vector is usually named a symbolic invariant, because it expresses a direct relation among the markings of the places and the functions of the arcs. If X1 and X2 are two invariants, each combination (k1X1 + k2X2) is an invariant: the potentially infinite set including all the invariants is indicated as the kernel of the incidence matrix. The set containing the minimum number of invariants, but enough to obtain the whole kernel, is the base of the kernel. The computation of the invariants of a colored Petri net could be reduced to the simple algorithm used for the place-transition nets by expressing, if possible, the incidence matrix W as the product:

W = F ∘ A,

(12.50)

where F is a diagonal matrix of functions, and A is a matrix made of simple identity functions. The Id function is defined as:

Id(ci) = ci, ∀ci ∈ D.

(12.51)

The invariant vector Xᵀ is then defined by:

Xᵀ ∘ F ∘ A = 0.

(12.52)

A generic invariant of A, Y1ᵀ, may be computed using the algorithm for the place-transition nets, and a vector of functions could be obtained by multiplying Y1ᵀ and Id; in this way a vector Xᵀ is an invariant if:

Xᵀ ∘ F = Id Y1ᵀ.

(12.53)

If the incidence matrix cannot be decomposed, i.e., if:

¬∃F | W = F ∘ A,

(12.54)

provided that:

∀Ti ∈ T: DOM(Ti) = D

(12.55)

and that:

∀Pi ∈ P: COL(Pi) = C,

(12.56)

a kernel base could be obtained starting from an identity matrix, appropriately sized, hereafter Q, a copy of the W matrix, hereafter Hi, and an integer index i with initial value 1. Then a simple algorithm has to be performed:

i. The matrix Hi has to be partitioned as Hi = [H1i, H2i], so as to obtain H1i = Fi ∘ Ai.
ii. A base Bi for the kernel of H1i can now be obtained from Biᵀ ∘ H1i = 0.
iii. The new matrix Hi+1 and Q have to be generated as:

Hi+1 ← Biᵀ ∘ H2i

(12.57)

Q ← Biᵀ ∘ Q

(12.58)

i ← i + 1

(12.59)

iv. Restart from step i, as long as Hi is not empty.

After this process Q becomes a translated base of the W kernel. It is always possible to find such a partition of H; at the very least, each single column of H can be used.

PETRI NETS AND AGENT-BASED MODELING

An agent-based model consists of a set of programs that interact in several ways. This interaction constitutes the dynamic side of an articulated process, intended to perform the computation of the simulated phenomenon. Often such a phenomenon has a high degree of complexity, so the testing phase of the programs can hardly ever be performed at the functional level. Even if it is possible to verify the correct behavior of single modules, or components, mistakes can affect the rules used to handle the cooperation among them; such a matter may heavily influence the obtained results, leading the research to poor conclusions. Petri nets allow a formal representation of the simulation process and provide tools to study its fundamental properties, like the invariants computation, the enumeration and so on. Such investigations and representations are useful for drawing the architecture of the project, and for setting up the appropriate communication rules among its components. Using Petri nets, the software machinery for the simulation can be studied and proved before the implementation of the programs starts. It is very important to have tools for verifying processes and architectures before coding the software; mistakes in the analysis phase usually have heavy consequences if discovered after the programs have been coded. Often this kind of mistake requires rewriting parts of the software, increasing realization time and effort.

Structural errors in planning the computation process are hardly ever evident, or even simply perceivable, during the usage of the software; neither the usual tests of parameter sensitivity nor those of robustness against random number generation can be relied upon to discover such mistakes. An example can show how the usage of Petri nets helps in verifying the simulation process. This example consists of the simulation of an online auction, like the eBay ones, to study the dynamics of prices under tight and simple assumptions about the agents’ behavior. In this simulation agents compete to buy objects in an auction on the web. Each time, they look at the latest published bid Bt, and then decide whether to bid again or not. Each of them has a bid threshold; when the offered price overcomes that value the agent stops bidding. The threshold level depends on both the price at the moment the agent enters the auction, Bei, and a stop factor m the agent uses to compute the threshold. For a generic agent i the threshold Mbi is computed as:

Mbi = Bei (1 + mi), where 0 ≤ mi ≤ 1.

(12.60)

Agents make bids according to their own value of the factor r, so the bid at time t of the generic agent i, Bit, is:

Bit = Bt (1 + ri).

(12.61)

If Bit is greater than Mbi the agent will stop bidding and leave the auction game; otherwise the agent will put in its new bid, modifying the conditions for the other agents. In this way the price will rise until no more agents bid. The simulated process could be organized in three different steps:

i. All the agents are asked to collect data, by getting the latest received bid.
ii. All the agents are requested to set up a bid.
iii. All the agents are allowed to put their bids.

This process simulation contains a trivial error: people involved in such an auction do not collect data at the same time; each of them observes the price at the moment they enter the auction and decides on their bid using this information only. The number of people involved in an online auction could be very large, so it is not plausible to pretend that all of them share the same information level at a precise moment; more likely, while a single participant is thinking in order to establish the amount of its new bid, a lot of other participants are bidding or thinking too. The decision process of each agent is thus always based upon old information. Table 12.5 shows the average results obtained with three different parameter settings. Data have been collected by running 1,000 simulations for each scenario, i.e., each parameter setting.


Table 12.5 Results of 1,000 Auctions with the Wrong Process Simulation

Results after 1,000 auctions with different number of agents and same random seed
            Price   Agents   Bids    Delta
Min                   50        0     0.00
Max                   99      522     0.04
Average               74      192     0.96
Std. Dev.             14      115     0.04

Results after 1,000 auctions with different random seeds and same number of agents
            Price   Agents   Bids    Delta
Min                  100        1     0.20
Max                  100      553     1.00
Average              100      255     0.96
Std. Dev.              0      145     0.04

Results after 1,000 auctions with different random seeds and different number of agents
            Price   Agents   Bids    Delta
Min                   50        0     0.00
Max                   99      519     1.00
Average               74      195     0.96
Std. Dev.             14      120     0.04

The results appear to be plausible; they are not sensitive to variations in the random seeds, but they show a dependence on the number of agents that is quite common in an auction. The behavior of the people involved in online auctions has to be investigated in order to draw out the general process an agent follows in doing such a task. Because each agent usually does not know all the other agents, and because it is impossible for each of them to observe what the other agents do and to collect information about all the bids, the main point is that agents can enter the auction at different times, without any concern about what other agents have just done, or are doing. In order to validate this assumption, the real process has been formalized in a Petri net, paying attention to the considerations previously expressed. Actions performed by each person are able to modify the current price of the object under auction; in this way, each time an agent observes the price it could be different, depending on the moment the agent makes the observation. The test to be performed consists of verifying that the real process net and the net representing the simulation have the same properties and markings, in order to ensure the plausibility of the model and improve the reliability of the obtained results, upon which the research is based. The net in Figure 12.6 represents the process inside the model; places show the state of agents and net: “Rs” means agent ready to start, “Rt” agent ready to think and “Rb” agent ready to bid; “Nr” is net ready and “Nb” is net busy. Places “C”, “T” and “B” are operating states of the agent, respectively computing, thinking and bidding.


Figure 12.6 Place-transition net of the wrong process.

By formalizing the model process in a simple place-transition net as in Figure 12.6, it becomes obvious that all the agents observe the same price. In this case the mistake has been made so evident that the simple comparison of the graphs is enough to perceive it, so the formal analysis of the net (marking graphs as well as incidence matrix and invariants computations) is not presented. Figure 12.7 shows the real process net. Note that the arcs now move single tokens from one location to another, instead of all the n available ones; that means each agent can operate autonomously. Coordination among agents is performed at the end of each simulation cycle only, when the transition “Na” fires if all the tokens (n) are in the “Ws” (waiting for start) place. To perform the process described by the net in Figure 12.7, the schedule sequence of the model has to be changed; the old process allowed all the agents to observe the prices before any of them could make a bid. In the real process, as described by the net in Figure 12.7, each agent is allowed to perform all the operations from data collection to bid posting; then another agent is allowed to perform the same cycle, and so on. In this way each agent observes a different level of prices, strictly depending on what the agents that operated before them have done. They will decide starting from the price they have just observed, so each agent will start its computation from a different basic value.


Figure 12.7 Place-transition net after the process has been corrected.

If each agent can enter the auction at a different time, after several agents have made bids the price will be higher, and the resulting threshold will become higher too, enforcing the rise of prices. This example has been based on a trivial mistake affecting a simple and well-known process, because it was devoted to illustrating how such errors can be avoided by using process representation through a Petri net. In simulations of more complex processes, similar mistakes could be very hard to catch, and their effects on the results of the research may be dramatic. Even if the mistake were found before the results were published, removing it would cost a lot in terms of extra effort and time.

Table 12.6 Results of 1,000 Auctions with the Correct Process Simulation

Results after 1,000 auctions with different number of agents and same random seed
            Price   Agents   Bids    Delta
Min                   50        0      0.00
Max                   99      196    301.80
Average               74      125     22.83
Std. Dev.             14       30     30.19

Results after 1,000 auctions with different random seeds and same number of agents
            Price   Agents   Bids    Delta
Min                  100        1      0.10
Max                  100      202    470.86
Average              100      167     33.48
Std. Dev.              0       23     46.50

Results after 1,000 auctions with different random seeds and different number of agents
            Price   Agents   Bids    Delta
Min                   50        1      0.40
Max                   99      193    391.55
Average               74      125     22.67
Std. Dev.             14       30     30.79

Table 12.6 shows the results obtained by running the correct model, i.e., after it has allowed agents to operate in the correct way. The results are dramatically different. As the example shows, the simple analysis of the results could not reveal mistakes in planning the simulation process. Similar mistakes will likewise affect simulations of industrial and market processes, especially when the analyst has been given information that is hardly clear and precise. Even if a lot of documentation about the process is available, there may be a strong difference between what the process description says and what is actually performed by the people involved. Non-formal communication, as well as interpersonal relations, can significantly modify the process work flow, because human beings are used to interacting at the personal level, especially when they experience pressure or threat. The Petri net methodology can be widely applied in simulation work; it can contribute to analyzing and validating processes both at the model process level and at the code implementation level. Through Petri nets the simulation model can be represented in a formal way, useful for investigating its properties following a rigorous scientific approach. They seem to be a promising answer to the problem of validating simulation processes and may contribute to the explanation of the models.

A SIMPLE INDUSTRIAL PROCESS SIMULATION

In order to show the usage of an agent-based model in analyzing an industrial process, a simple model has been developed to reproduce a generic packing process of a firm that produces pasta. This imaginary firm produces several types of pasta that have to be packed in small plastic envelopes; each of them has to be labeled with the product expiration date. To do the task, two kinds of machine are employed: (i) packers, which apportion the same quantity of pasta into each envelope and close it, and (ii) labelers, which mark the envelope with the appropriate indications for the consumers. Each machine has a specific process capability, and the distribution line that links the two types of machine and the production department is able to search for a free machine, in order to optimize the charge distribution. The Petri net to represent and analyze this process is quite trivial: each machine type could be represented as a node, and arcs connecting nodes could be used to simulate product transfers between two different kinds of machine; the main thing to pay attention to is the correct representation of the machines’ capability.


The agent-based model has been developed by writing three agent classes: the mixer, which is in charge of the whole production process; the packer, which handles the envelope; and the labeler, which puts data on the envelope. Each class knows its own working capacity and has a working warehouse. The model action is very simple; all the instances of a class are asked to perform the same actions at each model step. The actions simply consist of:

i. Trying to send outputs to the next object: a mixer sends to a packer, then the packer sends to a labeler and the labeler sends to the product warehouse, which is outside the model and is assumed to have infinite capability.
ii. After the output has been sent, performing a new production cycle, in accordance with the availability of inputs.

Sending could be impossible if no machines of the next type are available to receive, i.e., if their working warehouses are not empty, so each machine may suffer two kinds of delays due to the production process, briefly named: (i) “stop”, if it has to stop production because the output warehouse is full and no other machines are available to receive, and (ii) “lack”, if production has to be stopped due to the lack of input material. Provided that the simulation is focused on the packing process, it is possible to pretend that the mixer’s input warehouse is always full, and that the labelers can always send their output to an infinite product warehouse. Summarizing, four types of events can interrupt the production process: (i) stop of a mixer, (ii) stop of a packer, (iii) lack of a packer and (iv) lack of a labeler. The model has been written to study the distribution of interruptions under different production scenarios, where a scenario is defined as a set of different numbers of machines. One of the interesting possibilities the computer and the agent-based technique offer consists of the automatic generation of a wide range of scenarios. In order to illustrate this topic, the model has been endowed with the ability to randomly toss the number of mixers, packers and so on in a defined interval, from one to five. After each random scenario has been generated, the model has been run for 500 production cycles to collect data for the purpose of comparison. The data consist of the number of interruptions experienced in each production cycle. Through the analysis of data referring to different production scenarios, the corresponding configurations can be evaluated to find a good solution, one that reduces the number of stops and lacks in order to ensure continuous production and a correct exploitation of the machines.

Table 12.7 Stops and Lacks During 500 Production Cycles in Five Different Scenarios

            Production Stops              Packing Stops
Scenario    Number   Average   Std. Dev.  Number   Average   Std. Dev.
422          1,899     3.80      0.46        0       0.00      0.00
145            299     0.42      0.50        0       0.00      0.00
455          1,750     3.50      0.86        0       0.00      0.00
144            300     0.61      0.49        0       0.00      0.00
522          2,398     4.80      0.42        0       0.00      0.00

            Packing Lacks                 Labeling Lacks
Scenario    Number   Average   Std. Dev.  Number   Average   Std. Dev.
422              0     0.00      0.00        522      1.04      0.75
145              8     0.02      0.18      1,548      3.10      1.06
455              2     0.04      0.63      1,312      2.62      1.03
144              8     0.02      0.14      1,044      2.09      0.96
522              0     0.00      0.00        523      1.05      0.66

The data in Table 12.7 report observations about the stops and lacks that happened during 500 production cycles in five different scenarios. The first column indicates how many production, packing and labeling units have been employed; the other columns report the number of events and their average values. The simulation has been run for simple demonstrative purposes, so nonsense scenarios, like the first and fifth ones, have not been discarded. More suitable research could have been performed by driving the scenario generation through an optimization function or, better, by a search algorithm able to select the better scenarios, like a genetic algorithm (Holland 1975), a classifier system (Goldberg 1989), a neural network (Minsky 1967) or other methods (Russell and Norvig 2002). In this way the simulation would have supplied previously evaluated results, allowing analysts to concentrate on a few good configurations and performing a preventive search for the optimal one. According to the data in Table 12.7, the most interesting and non-trivial emerging result is that the interaction among components may lead to cases in which production stops and packing lacks both appear. This result would hardly have been forecast without this kind of simulation. The better configuration appears to be the fourth one, because it implies fewer interruptions than the other ones.

CONCLUSION

Simulations can be used as a valuable tool to solve management and planning problems (for an example see Ferraris and Morini 2005), as well as to realize business games useful for training purposes. Starting from a strictly bottom-up approach, agent-based modeling is able to better deal with the huge complexity involved in the dynamics of human actions, which often depend on hierarchical institutions and relations. The use of Petri nets as an analysis tool for ensuring that programs will perform the correct data processing constitutes an original way to approach the implementation of complex programs. The method could offer valid help in various phases of the research, such as: (i) the analysis of the industrial process, (ii) the representation and validation of the simulation, (iii) the study of the model’s process and (iv) the validation of the program’s logic. Because Petri nets are a well-known tool in wide usage, representing agent-based simulations as Petri nets could offer new possibilities to interact with management specialists as well as with enterprise staff. As the presented example illustrated, Petri nets are useful for avoiding simulation mistakes too; by studying the invariants and the formal properties of the simulation net, mistakes that are not obvious may be discovered and eliminated, in order to avoid wasting time running wrong programs and analyzing poor results. Finally, Petri nets could supply an interesting way to formally validate the structure of agent-based models, in order to allow refutation tests to be executed.

BIBLIOGRAPHY

Alaiwan, H. and Toudic, J.M. (1985) ‘Recherche des semi-flots, des verrous et des trappes dans les réseaux de Petri’, Technique et Science Informatiques, 4 (1): 103–12.
Berthelot, G. (1985) ‘Transformation des réseaux de Petri’, Technique et Science Informatiques, 14 (1): 91–101.
Brams, G.W. (1983) Réseaux de Petri: Théorie et Pratique, Paris: Masson.
Brauer, W., Reisig, W. and Rozenberg, G. (1986) Petri Nets: central models and their properties, Berlin: Springer-Verlag.
Chetty, O.V.K. and Gnanasekaran, O.C. (1996) ‘Modelling, simulation and scheduling of flexible assembly systems with coloured Petri nets’, International Journal of Advanced Manufacturing Technology, 11 (6): 430–8.
Di Cesare, F., Harhalakis, G., Proth, J.M., Silva, M. and Vernadat, F. (1993) Practice of Petri Nets in Manufacturing, London: Chapman & Hall.
Di Leva, A. and Giolito, P. (1989) I sistemi informativi aziendali, analisi e progetto, Turin: UTET.
Ferraris, G. and Morini, M. (2005) ‘Simulation in the textile industry: production planning optimization’, in M. Baldoni, A. De Paoli, F. Martelli and M. Omicidi (eds), WOA2004 Dagli oggetti agli agenti—Sistemi Complessi e agenti razionali, Milan: Pitagora.

Goldberg, D.E. (1989) Genetic Algorithms in Search, Optimization and Machine Learning, Boston: Addison-Wesley.
Holland, J.H. (1975) Adaptation in Natural and Artificial Systems: an introductory analysis with application to biology, control and artificial intelligence, Ann Arbor: University of Michigan Press.
Kristensen, L.M., Christensen, S. and Jensen, K. (1998) ‘The practitioner’s guide to coloured Petri nets’, International Journal on Software Tools for Technology Transfer, 2: 98–132.
Minsky, M.L. (1967) Computation: finite and infinite machines, Upper Saddle River, NJ: Prentice-Hall.
Orcutt, G.A. (1957) ‘A new type of socio-economic system’, Review of Economics and Statistics, 58: 773–97.
Petri, C.A. (1962) Kommunikation mit Automaten, unpublished doctoral thesis, Institut für Instrumentelle Mathematik, Bonn.
Reisig, W. (1982) Petrinetze, Berlin: Springer-Verlag.
Russell, S.J. and Norvig, P. (2002) Artificial Intelligence: a modern approach, Upper Saddle River, NJ: Prentice-Hall.
Silva, M. (1982) ‘A simple and fast algorithm to obtain all invariants of a generalized Petri net’, in C. Girault and W. Reisig (eds), Application and Theory of Petri Nets, Berlin: Springer-Verlag.
Silva, M. (1985) Las Redes de Petri: en la automática y la informática, Madrid: AC Editorial.

Contributors

Cristina Boari is full professor of management at the University of Bologna. Her research interests include rivalry, cooperation and learning processes, and entrepreneurships in geographical clusters of firms. Email: [email protected]

Alessandro Cappellini received a master’s in July 2003, with a thesis in mathematical economics concerning stock market simulation with artificial and natural agents. He holds a PhD in simulation on economics, obtained at the University of Turin, Italy. His main research interests are computer simulation and experiments applied to finance, economics and social sciences. He was a research associate at Lagrange Interdisciplinary Laboratory for Excellence in Complexity—LIEC at ISI Foundation, where he focused on econophysics. Currently, he is working in the planning and group control department at Intesa Sanpaolo. Email: [email protected]

Bruce Edmonds is the director of the Center for Policy Modelling and a senior research fellow at the Manchester Metropolitan University Business School. He is a leading international researcher in complex systems and agent-based social simulation, both applying new techniques from AI/ML to modeling aspects of social phenomena as well as applying ideas from social systems to improve the working of computational systems. For more about him, including his publications, see: http://bruce.edmonds.name. Email: [email protected]

Gianluigi Ferraris graduated in economics and has a PhD in culture and enterprise. Currently he does research in economic and social sciences based upon agent-based simulations and artificial intelligence. He has been involved in enterprise simulations to optimize production planning through genetic algorithms. His latest works have been focused on the influence of different stock market regulations in determining both prices and traded quantities. Email: [email protected]

Rita Fioresi graduated in 1989 in nuclear engineering and in 1991 in mathematics from the University of Bologna. She got her master’s in 1994 and her PhD in 1997 at the UCLA Department of Mathematics. She is currently an associate professor at the Department of Mathematics, University of Bologna. Her interests range from the study of dynamical systems to various applications of geometry to theoretical physics, including supersymmetry and supergravity. Email: [email protected]

Guido Fioretti is assistant professor of organization science at the University of Bologna. His research interests cover individual and collective decision making, with a special emphasis on cognitive processes. Tools range from cognitive maps to neural networks and agent-based models, though purely conceptual papers are not disdained. Email: [email protected]

Michael S. Gary is a senior lecturer of strategy and entrepreneurship at the Australian Graduate School of Management (AGSM) in Sydney. He is also associate director of the Accelerated Learning Laboratory and has been a visiting scholar at MIT’s Sloan School of Management and at Duke University’s Fuqua School of Business. Shayne’s research focuses on improving strategic decisions under complexity and uncertainty. He has considerable consulting experience applying systems thinking, dynamic modeling and scenario planning to address high-level strategy issues for Global 1000 clients. Shayne received his PhD at London Business School and his BSc from the Massachusetts Institute of Technology (MIT). Personal web page: http://shaynegary.googlepages.com/. Email: [email protected]

David Hales has been working with agent-based models and social simulation since 1995. He completed his PhD under the supervision of Jim Doran at the University of Essex in 2000. Since then he has worked in Manchester, at the Centre for Policy Modelling; Rome, at the CNR; and Bologna, at the University of Bologna. Currently he is based in Delft, Netherlands, at the Technical University of Delft on the Tribler team. Email: [email protected]

Martin Kunc obtained his PhD in decision science from London Business School. He is currently assistant professor in operational research and management science at Warwick Business School. Martin is interested in the dynamics of competitive industries, dynamic capabilities, managerial decision making and strategic management. He has performed research projects in the fast-consuming goods, financial services and wine industries. Email: [email protected]

Marco Lamieri has a PhD in computational economics from the University of Turin. Currently he is an economist at the Economic Research Department of Intesa Sanpaolo s.p.a. and associate researcher at the Complex Systems Lagrange LAB, Institute for Scientific Interchange Foundation, Turin, Italy. Marco’s research field is industrial economics. The selected research method, besides verbal and statistical analysis, is agent-based computational economics (ACE) with particular focus on models’ realism using empirical data. His main research interest is economic dynamics, in particular the dynamics of interactive social processes involving (boundedly) rational learning agents. Email: [email protected]

Diana Mangalagiu is professor of strategy and management at Reims Management School, France, and associate researcher, Complex Systems Lagrange LAB, Institute for Scientific Interchange, Turin. She received a PhD in artificial intelligence from Ecole Polytechnique, Paris. Her research combines modeling and empirical work through a multidisciplinary perspective and focuses on organizational dynamics, social contagion phenomena, sustainability and corporate social responsibility. Recent publications include: ‘Simple Models of Firms’ Emergence’, Physica A: Statistical Mechanics and Its Applications, Elsevier, 2008 (with G. Weisbuch, R. Ben-Av and S. Solomon); ‘Interactions Between Formal and Informal Organizational Networks’, Dynamics of Organizational Models, IGI, 2008 (with M. Lamieri). Email: [email protected]

Edoardo Mollona graduated in strategic management from Bocconi University in Milan (Italy) and received a PhD in strategic management and decision sciences from the London Business School. He is currently associate professor in the Faculty of Mathematical, Physical and Natural Sciences at the University of Bologna where he teaches business economics and dynamics of complex organizations. His research interests focus on the application of modeling and simulation techniques to strategic management and organizational theory. In particular, Edoardo conducts research on the evolution and strategic change in large organizations, and on the changing nature of firms in knowledge-based economic contexts. Email: [email protected]


Vincenza Odorici is associate professor of management at the University of Bologna. Her main research interests concern the study of rivalry as a cognitive social process and of the role of intermediary mechanisms and actors in the process of market construction. Email: [email protected]

Alessandro Raimondi graduated in economics and received a PhD in mathematical and simulation modeling for organizations from the University of Torino. His research interests involve micro and macro economics, and formalization and modeling of economic systems and organizations. In particular he is focused on decision making regarding the management of banks, their relationship with the market and institutions, as well as the macroeconomic outcome and related economic policies of these interactions. He currently works in the Strategy and Business Development Department at Unicredit Group. Email: [email protected]

Pietro Terna is full professor of economics at the University of Torino (Italy), Department of Economics and Finance G. Prato. His recent works are in the fields (i) of artificial neural networks and economic and financial modeling and (ii) of social simulation with agent-based models, where he has been pioneering the use of Swarm. He has been developing an original simulation system to reproduce enterprise and organization behavior, named java Enterprise Simulator, visible at http://web.econ.unito.it/terna/jes. Email: [email protected]

Index

N.B. Numbers in bold type indicate a figure or table.

A ABM 4, 106–8, 111–14, 172; concept of organizational role 225; engineering solutions through chains of models 112; ERA scheme and 195; investigate production and management problems 280; issues of organizational structures and roles 212; methods of working with 116; organizations and 215–16; “organization’s utility function” question 215; Petri nets and 308; physicists applying a physics perspective 117; preserve heterogeneity 4; problem of verification 282; “randomness and pseudorandomness” 114; replication, two critical questions answered 113–14; represent social structures 231; sheds light on solution structure 251; shows how banks carry out credit and risk evaluation 207; SLAPP project 248; three agent classes 306; using simulation techniques in framework of 250–51; valid tool 106 abstract representation, impact on real world 194 abstraction, goal to get an idealization 193 actor network theory (ANT) 180 Administrative Behavior, human decision making 249 Adner, R., disruptive technologies 15, 18 advertising expenditure, used to attract customers from competitors 61, 158–9

“Agent ready” 285 agent-based modeling 106, 194, 212, 232, 236, 300, 308 agent-based models see ABM agent-based paradigm 281 agent/group/role see AGR agents, bounded rational 218, 232; synchronization among 288; unbounded rationality in game theory 233–4; what kind of tool? 247–8 Agentsheets 216, 227n6 AGR 215–16, 227n7 alternatives to numbers 50–51; abstractions using descriptive simulations 51–3; structural modeling, example of negotiation 53–5 American Can Company 101 American Economic Association 6 analysis of equilibrium points and their stability 75–80 analytic statement, purely explanatory 11 appropriate target for resource productivity, core business and 132 approximate value 43–4 arcs 58, 288–90, 295–9, 303, 305; actions between states of the world 55–6; Petri nets and 283 Arrow, K.J. 137, 197 ART tools 258; case (i) bottleneck in enterprise production 258, 272–4; case (ii) reproducing complex set of enterprise choices 259, 274–6; case (iii) emergency call 259, 276–8 “artifacts” 47–8 “as if” behavior 240 aspiration adjustment 132–4, 143–5, 147


asset backed securities (ABS) 202 asset stock of shared resources 129, 149n4 Audretsch, D. 171 Austria, inter-bank networks 199 avatars, directed by actual people 259 Average agent’s productivity 223 average number of passages to complete a task set 224 Axelrod, R. 5; game theory and 39; models of cooperation 237; repeated prisoner’s dilemma 86; simulation as aid to intuition 107 Axtell, R.L. 4–5; agent modeling techniques 251; agent-based social science 107; “alignment” or “docking” 112; organization formation 216; successful firms and productive workers 255; Zipf law 184

B backward-looking 14 bandwagon effects, innovation adoption 15 Bangalore (India), software district 71 Banks, actions to be taken by 202–3; complex hierarchical organizations 207; head office, highest level of abstraction 205; merger and acquisition actions 199; system with many sub systems 197 Bank of Finland, use of large scale simulation models 199 Bank for International Settlements (BIS) guidelines 202 bank managers, access and use information of their business units 198; branch managers and decisions on risk aversion 205; endogenous and exogenous issues in mid-2007 financial crisis 201, 203 bank-firm relationship, exploration of 193 banking literature 198–200 banking organizations, hierarchical structures 193, 207 Banks, J., handbook 199 basic modeling relation 37 Bass diffusion model, System Dynamics and 159 Batten, D.F., self-organization and 255–6

Baum, J.A.C. 172–5 Baumol, W.J. 136, 138 Becattini, G. 71, 173 behavior modeling 109 behavior in the neighborhood of equilibrium P7 82–3, 89, 90 behavioral theory of the firm 4, 7 Biddle, B.J. 211, 214 “big commitment” strategic choices 149 “black boxes” 250, 256 Boari, C. 171, 174–8, 186 Bogner, W.C. 164, 176–7 Boolean or string valued judgments, do not require use of numbers 57 Boschma, R. 171–2 bottlenecks 136–7, 258, 272–5 boundary experiments, robustness of a theory 88 bounded rationality 178, 197, 250 Bourgeois, L.J. 129, 135 Breschi, S. 171–2 Broadcast 112 Burt, R.S. 254; structural holes 255 Burton, R. 215–16 Burton, R.M. 250

C CacheWorld 112 Carley, K. 216; definition of organization 212–13 Carnegie Mellon University 7 Carnegie School 8 Carroll, G.R. 19, 29, 215 causal relationships, feedback nature of 4 “ceduction” 111 “ceteris paribus” conditions 280 “chains of models” 61, 107, 111–13, 115–16 Chang, M.H. 216; organizational structure 213 change role dynamic 220–21 changing roles, dynamics, increase in agent’s productivity 225; more effect in stable environment 223; with and without informal networks 223–4, 227n9 Chen, M. 174, 182 Chinese firms, geographical cluster of Val Vibrata and 79; produce at average lower cost 76; small in comparison to Italian 96 Christensen, H.K. 123, 295

Index Clarkson, G.P.E. 5–6, 249 classical economic theory agents, absolute rationality 196 clustered firms, knowledge fields and 187, 188 “clusters” of models at different levels of abstraction 51 co-evolutionary model, workers’ skills and firms 262–5 Coe, R.M. 5 cognitive distance 186–7; neither too high or low 183; rivals and 178; type of similarity 175 cognitive mechanism, decision-making and 14 cognitive processes: matter in rivalry 174; optimization of utility 54 Cohen, K.J. 3, 30; advantages of computer simulation 8–9, 69; Analytic/inductive and synthetic/ deductive statements 11; people need not be mathematicians to build computer models 100; shoe-making industry and 7, 101; System Dynamics 6; types of computer models 10 Cohen, M.D. 13, 17, 135 colored Petri net 295–8; analysis 298–300 commercial bank, activity devoted to serve customers 200; hierarchical structure 204 commercial-grade modeling tools 216 competing species model 72–5 competition, “under-socialized” phenomenon 174 competitive advantage, exploiting economies of scope 123; geographical clusters and 187 competitive industries, feedback view of 152–4 completeness of solution, flexibility of simulation 87 computational models, more expressive 61–2 computational simulation, experimental theory about some domain 51 computer model, virtual laboratory 14 computer modeling and field research, building the computer model 23–4; building preliminary theoretical framework 22–3; interaction of 20; internal validity 21–2; problem of validity 20–21

317

computer models, complaints about 69–70; learning laboratories 88; mathematical models and 3; sociology and political science 5; synthetic and analytic 10 computer scientists and modelers, studies of organizational concepts 211 computer simulation, artificial history 95; business cycles 5; capture dynamics underpinning time series 94; complex hypotheses of a system’s behavior 10; computer-aided process of deduction 19; decision rules 6; deduction and induction inseparable 12; dialogue between empirical evidence and theory 28; emergence of social differentiation 86; fieldbased inquiry 3; generates time series 24; interaction with mathematical analysis 70; longitudinal and cross-section articulation of hypotheses 24–6; manipulation of symbols using computer code 3; shortcut at lower level of generality 81; sociology and 87; technologically aided process of deduction 4; three goals 94; unique tool 102; validating a theory 27; value to mathematical analysis 100 computer-aided deduction, human capability and selected variables 24–5 computers, numbers easier to store 49 concept of role 211 concept of similarity 176 conditions events net 285; incidence matrix of 286 Conte, R. 5; “exploration simulation” 108 Continental Can Company 101 control process, negative feedback structure 198 Cool, K. 129–30 cooperators, multi-person prisoner’s dilemma games 86 core business, diversifying and 124 core electricity 128 corporate diversification, dynamics of firm growth and resource sharing 146–7; effects of 123; fieldwork with diversifying

318

Index

firm 125–9; general model of resource sharing 129–38; related diversification with overstretching costs simulation 139–42; simulation experiments 138–44 cost leadership, sources varied 154, 166 creating and destroying value, relatively small difference 143, 149n3 credit-granting analysis and evaluation, standardized process 205 credit quality 203, 207; head office intervenes when it decreases 205; indicators 206–7; individuals and 206 crisp and critical comparisons 48 cross-validation, grounding assumptions 112 cultural evolution models, imitation allows traits to move horizontally 236 cultural group selection, interaction and social structure 235; wide range of disciplines 233–5 cultural learning rules, create social structures 231 customer portfolio, high returns and moderate-risk clients 203 Cyert, R.M. 3–4; analytic/inductive and synthetic/deductive statements 11; behavioral theory of the firm 7, 124, 130, 133, 135; computer model requirements 30; computer simulation 9; observability argument 174; organizational learning 178; shoe, leather and hide industries 101; synthetic and analytic computer models 10–11; System Dynamics 6; two types of computer models 10

D data sets, Internet applications and mobile phone records 117 Davis, J.P. 12; interesting hypothesis 88 dead marking, effect of 290 decision dilemma 257 decision-making, activity 13, 62; aggregate 18; authorities 216; business and 6, 31n1, 114; formalized 23; managerial 153, 168; planning 215; problem 197; process 202, 249–50; role of cognitive mechanism 14 deductive inference 10, 12–13, 17

deductive process, key component of scientific reasoning 10–11 development platform 216 development strategies, market demand on 18 Dierickx, I. 129–30 differences in decision-making styles 155 differentiation, firms seek to be unique 154 differentiation leader 159, 165, 168 Diplomacy game 61 disaggregating resources, resource complementarity and coordination 148 Discover Complexity in Multi-Stratum Models, online at jES site 261 disruptive technologies 15, 18 diversification, could result in rising workload demands 136 diversified firms, should benefit from synergy between business units 123 Doran, J.E. 5, 109, 232; “mix and match” ideas 117; social theory 107 Dorfman, R. 6, 31n1 “drift” 47 Dunbar, R.I.M. 5, 86 dynamic system, differential equations 72

E economic geographers, geographic proximity and learning 172 economies of scale 158 economies of scope 123, 129, 134–5, 137–8, 141, 144–7; potential 125, 145 Edmonds, B. 5, 40–42, 46–7, 50, 112–15; “cross-validation” work 112; replication 113, 115 efficiency 158, 160, 164 Eguiluz, V.; modeled social plasticity 87 eigenvalues, Jacobian matrix 78, 92 electronic and virtual communities, cost converging towards zero 240–41 empirical data, macro and micro 225 entrepreneurs, boundedly rational decision makers 175 “environment ready”, “Environment busy” 286, 291 environment-rules-agents 195 Epstein, J.M. 4–5, 107, 256; agentbased social science 107

Index equilibrium point 79 equilibrium points 77 European Electricity company: diversified extensively 125–7; low cost leader before diversification 128; norms for each electricity plant 132; profitability peaked (1994) and then declined 127 evolutionary algorithm 219 exact value 43 existence proof 108–9 experiential exploitation 181, 182, 187 experiential exploration 181–2, 186, 187 experiential learning 16, 181–2, 187 experimental methodology, theoretical models on some testbed 107 experimental science 116 experiments 185; choice of parameters 185–6; different sets for each structure 222 explanation building, iterative process 21 explanation finding 110 explanatory case studies 20 explanatory Models 40–41, 65n3 explicit targets, can prevent aspiration adjustment 145 exploration of novel knowledge 181 exploratory field study 22–3

F
falsification logic, completeness gives way to flexibility 81; studies in sociology 85
Farjoun, M. 174, 175
feedback, history-dependent behavior of organizational populations 29
feedback from the environment, affect the way managers evaluate 207
feedback structure of system, generates its behavior 195
feedback system, competition embedded in 168
Ferber, J. 211, 216; description of organizational structure 213; multi-agent systems 215
Ferraris, Gianluigi and Marco Lamieri, guide to ART tools 258
field cases, retrospective studies 26
field research, computer simulation 5, 23, 27; tradition in social science 20
field studies, provide researcher with information 24; social simulation 5; strategy formation in large firms 23

fieldwork, diversifying firm 125–9; research 124
financial crisis of mid-2007 201–3
financial resources, accumulated profits 156–7, 160–61
firms to exploit resource sharing, three contributions 144–5
floating model 36, 41–2, 61
flowchart of two firms 183–4
footwear industry, tight relationship between firms and machinery firms 72
formal structures, organizational hierarchy 217; used in experiments 222
formalization 23–4, 29; a posteriori 53; economics and 314; mathematical 6, 196; model 155; rules 208; simple 101; of a theory 30
Forrester, J.W. 4, 6; decision making 153; Industrial Dynamics 193; Sloan School of MIT and 8; System Dynamics 194–5
forward-looking 14, 221
free-riding 87, 232, 255
friend of a friend 87

G
game theory 233–5; “rules of the game” and 241
garbage can simulation model 13, 17, 31n2
Gasser, L., definition of organization 212–13
geographical cluster 173; footwear industry and 72
geographical clusters: defined 70–71; social relationships channels, “innovative milieus” 71
geographical proximity 171–2, 188; allows division of labor 173; learning processes and 175–6; “market construction” and 174; rivalry and 173, 187; rivals’ identification and 177–8; used to “construct” market 187–8
Gilbert, G.N. 4, 5, 9; “exploratory simulation” 108; “participatory modeling” 112
Glaser, B.G. 20, 22
globalization of markets, strain on geographical clusters 71
Glynn, M.A. 14
good quality and bad quality accounts, define overall credit quality 205

Grant, R.M. 123, 126, 136, 145
graphs, standard Petri nets more concise 286
Greenlaw, D., crisis in banking and 202
Greve, H. 171–2
grounded theorizing 20
group selection, controversial in biology and social sciences 235–6
group-splitting model 239; applications 239–40

H
haggling, negotiation and 54, 60–61
Hales, D. 46–7, 60, 115, 231; “ceduction” 111; “model-to-model” analysis 112, 117; negotiation 55, 60; network-rewiring models 238; replication 113, 115; “tag” model of cooperation 237–9
Hanaki, N. 5, 87
Hannan, T., formal model of a firm 198
Hanneman, R.A. 5; gap between theory and history 94; state’s attitude to initiate conflicts with other states 92–3
Harrington 216; organizational structure 213
Harrison, J. 19, 29
heterogeneous resources, generic strategy 160
hierarchy 52, 204–6, 208, 215, 217–18
higher initial slack experiment 142
historical inefficiency, hypothesis of 19
history-divergent simulation run 27, 29, 95
history-friendly modeling 91–9
history-replicating simulation runs 29, 94
Hitt, M.A. 123–4
Hoggatt, A.C., sensitivity of firms to market conditions 7, 9
Hollywood district (US) 71
Homans’ theory of interaction 5
horizontal specialization 214
Hoskisson, R.E. 123–4, 136
household mortgage defaults, causes of 201–2
household sector 7, 101
‘How to Use the jES Open Foundation Program to Discover Complexity in Multi-Stratum Models’ 248
hypercompetitive industries, rapid changes in technology 164

I
ideal type 20, 31n4
imitation heuristics, social behaviors and structures 232, 240
implementation strategies 122–4, 149
induction and deduction, cyclical process of discovery 18–19
inductive and deductive inferences, intertwined 17
inductive inferences 11, 16
industrial cluster 71
Industrial Districts 71, 96, 97, 99
industrial process, usage of ABM of packing process for pasta 205
industry equilibrium, competitive actions and 158–9
inefficiency, negative effect of 157
inferential models 40
informal network, subsystems left to specialized decision units 220
informal structure, metaphor of problem decomposition process 220; personal relations between agents 217
informal structures 217
initialization 184
innovation adoption 15
“innovative milieus” 71
instability, causes endogenous and exogenous 198
integration of collected and simulated patterns of behavior 25–6
inter-organizational learning 181–4; knowledge creation 171
internal validity 21–2, 27
interval arithmetic 44, 47–8, 114
intra-cluster relationships, intertwined with competitive and commercial relationships 72
investment banking, multinational environment 200
Italian clusters, producers of packaging machines, rivals outside cluster 174
Italian market, statistical distributions and 205
Italian Statistical Institute, six-digit ATECO code 225
Italy: Industrial Districts 71, 96, 97, 99; inter-bank networks 199
iterative chaining 111

J
Jacobian matrix 78, 92, 102

Jacobian matrix at equilibrium point 102–3
Japan, inter-bank networks 199
java Enterprise Simulator see jES
Java Enterprise Simulator Open Foundation see jESOF
JavaSwarm model 221, 227
Jennings, N.R., organization ‘collection of roles’ 212
jES 216, 227n3; actual human decisions 256; applications 272–8; applied use 248; applied use: interaction between people and the model 256–8; applied use: simulation of actual enterprises and organizations 258–9; dynamic view of 254, 257; enterprise simulation and 251; from jES to jESOF 260–61; how to use 251, 259; new version using SLAPP 248; second application relates to human decisions 256; snapshot of 253; technique 252; theoretical use of 254–6; three main applications of 248–50; using a computer 247
jESLET, introductory version of jES 265–72
“jES light experimental tool” 248, 265–72
jESOF 247, 260–61
judgments, presented as blobs and arrow pictures 57

K
Kant’s Critique of Pure Reason 11
Kaplan, A. 24–5, 88; validation of a theory and 25
Kim, J.Y. 175–6
Kirzner, I., trial-and-error process 255
Knoben, J. 171–3
knowledge, asset in production systems 171
knowledge in clusters, competition and rivalry 171
knowledge fields 176–7, 178–9; clustered firms 187; depth and 177, 181, 186; number of 186
knowledge transfer, similarity and 175
knowledge-based economy, geographical clustering and 171, 188
Kunc, M. 168–9

L
labeler, puts data on the envelope 306

Lai, L. 174–5
Lamb, R.B., strategic management 201
Lamieri, M. 258; performance indicator 221; proposed model 217
Lant, T.K. 13, 133, 174, 182; adjustment of unit sales objectives 133; modes of organizational change 15
Larsen, E.R. 17, 174, 215, 250; population ecology 17
learning 9, 256, 276; actions 181, 184–5, 187; and adaptations 256; agents 4; capabilities 218; cultural 231; cumulative 148; cycle of 196; evolutionary 239; experiential 14, 16, 181–2, 187; inter-organizational 171, 175; interactive 276; laboratories 88; model 13; organizational 15, 171, 173, 178; problem of 27; process 131, 160, 171–3; regions 71; research 133; rivalry and 175; role of 86; social 232–3; utility and 233; vicarious 176, 181–2, 187
Levinthal, D. 176, 178; cognitive choice 14
Levitt, B. 133, 145, 176
Levitt, R.E. 136, 216; ‘virtual synthetic experiments’ 215
linear algebra techniques 284
Lissoni, F. 171–2
Lockhard scale 44
Lomborg, B. 5, 86
Lomi, A. 17, 174, 215, 250; population ecology 17
longitudinal event studies, compared polar cases 27
Lyapunov functions 84–5, 92

M
Macy, M. 5; learning from repeated interaction 86–7; rules of social behavior 211
Malerba, F. 28, 30, 94–5, 101; history-friendly computer models 16–17, 30, 94; history replication run 94–5
Malmberg, A. 71, 171, 176
managerial attention 148, 174, 177
managerial decision making, affects dynamics of firms 152, 169
Mangalagiu, D.: performance indicator 221; proposed model 217

322 Index Marafioti, E. 71–2 March, J.G.: behavioral theory of the firm 4, 7, 124, 130; decision making 132; effect of proximity learning 176; exploration of novel knowledge 181, 186; exploring histories 12; observability argument 174; organizational learning 178; organizational slack 145; problem of learning from samples 27; rate of change 133, 135 Marcozzi, A. 112–13, 231; “peer production” models 113 market sector 158 Markides 123, 129, 136, 145 “marking graph” 291–2 Marks/Markings 283–5 Maskell, P. 171, 176 Massachusetts Institute of Technology (MIT) 8 mathematical analysis 6, 9–10, 69, 85, 88, 91, 98; of formal model 80–81, 95; general results 83; how rapidly system abandons initial state 95; limited insight 92; simulation and 70, 72, 100; value of 100–101 matrix, standard Petri nets and condition-events ones 287 “measure theory” 44 Meinhart, W.A. 5, 10 “mental models” 54 methodology 6, 8, 20, 117n2, 208n3; ABM and 106, 117; literature and 107; mix and match 108–111; Petri nets and 282, 305; research 152 Mezias, S.J. 13–15, 133, 174; organizational change 15 Miller, D. 174, 182, 216 mix and match methodologies 108– 111, 115–16 mixer, in charge of whole production process 306 model formalization process 155 model types 39–41 modeler, clear idea and 48, 65n6 modeling 193–4; abstraction in 38, 39, 53; behavior 109; feedback process 195–6; participatory 112; process of , aspects for partitioning and role structure 213–14; simulation and rational decisions

197; steps with a simulation 37–8; useful in scientific sense 36 models, all are simplifications of reality 146 models of cultural group selection 233–9; applications of 239–40 models of negotiation, defined 53 Mollona, E. 71–2, 76, 79, 96; “peer production” models 113 Monte Carlo analysis 251 Montgomery, C.A. 29, 123 Morecroft, J.D.W. 8, 131–2, 153–4, 168–9; behavioral simulation models 155 Morini, Matteo, production optimization using genetic algorithms 258, 308 Moss, S. 40–42, 50, 112; ”cross-validation” work 112; UK macroeconomic models 38–9 Motorsport Valley, South England 71 multi-agent systems, organizational concepts 212, 215–16 multi-product firms, economies of scope to create economic value 123

N
Nash equilibrium 233, 237
nature of goods, critical to usefulness 49
negative feedback structure, control process 198
negotiation: discussion of example 59–61; example setup 57–9; participants and internal structures 55–6; problem solving enterprise using “mental models” 54; two sorts 60; using network of nodes and arrows 55, 61
Nelson, R.R. 124; computer simulation for technical change 100–101
Nettle, D. 5, 86
network visualization of models 116
network-rewiring models 238–9
NetWorld 112
“new empirical industrial organization” (NEIO) 198
nodes, states of the world 56, 115
“noise” 47, 50
Nooteboom, B. 175–6, 180, 183
norms 42, 86, 132, 215; organizational 140, 214, 236, 258; social 65
novel knowledge, novel product or novel market 180

Nowak, M.A. 86, 231
“number blindness” 36, 41, 49–50, 61
numbers 178–9, 249, 253; abstract representation 46; approximate 47; identifying 268; real 295; set of different 306; simulation and 52; step 253; systems of 42–4; use of 44, 48–50, 57
numerical “kludges” 53, 65n8
numerical representation 36, 41–4, 49–50; consequences of inappropriate 44–9; creation of artifacts 47–8; distorting the phenomena 45–6; error accumulation 47; limiting the language of representation 48–9; losing the context 46–7
numerically based frameworks, people trained in 39

O
Obel, B. 215–16
Oerlemans, L. 171–3
Oliva, R. 133, 137
OperA 216, 227n8
operational efficiency 158, 160, 164
operational resources, increase operating costs 156–61
Orcutt, G.H. 7–10, 70, 101, 281; demographic trends of US household sector 101; micro simulations 281
order, an 252
order distiller 252
order generator 252
orders, two sources 252
orders commanding rabbits to eat 261
organic growth 125–6, 149
organization theory on inertia, businesses starved of shared resources 148
organizational: learning model 13; learning research, response to experience 133; roles 214–15; structure 213–14
organizational change 15–16, 30, 94, 225
organizational concepts, multi-agent systems 216
Organizational Consultant 216
organizational dynamics 212, 216, 313
organizational learning 171, 178, 181–4
organizational norms 214

organizational role 211, 225
organizational slack 135–6, 138, 140–48
organizational structure 13, 194, 199, 204, 212, 213, 220
organizations, definition of 212–13
organized anarchies 13, 31n3
overstretching costs, related diversification 139–41
overstretching shared resources, impact of 138, 144

P
packer, handles the envelope 306
Palla, G. 117, 240
Pareto’s predictions 93
Pareto’s theory of social and economic cycles 93
pattern matching 30; explanation building 21, 23, 26; sensitivity analysis and convergent runs 26–8; sensitivity analysis and history divergent runs 28–9
“peer production” models 113
peer-production communities 241
peer-to-peer (P2P) applications 112–13, 238–9
Penrose, E. 129, 134, 136, 145, 148
performance 187, 220–22, 225–6; above-average 154; of agent 218; benchmarks 142–3; deterioration 128, 137; diversification 124, 126, 139; dynamics 144; economic 225; evaluation of 178, 184; financial 24, 127–8, 135–6, 144–8, 164–5, 168–9; firms’ 14, 15, 95, 129, 139, 152–4, 156, 161; heterogeneity 149; measuring 180–81, 199, 215, 226; organizational 14, 133, 155, 217, 220; overall 137; past 182, 186; strategies on 152, 172; structure-conduct in banks 198–9
Petri, C.A. 282
Petri nets 282–3; agent-based modeling and 300–5; analyze real and simulated processes 280; analysis 291–5; colored 295–8; definition 283; give ABMs formal expression and scientific validation 282; properties of 290–91; simple 283–6; standard 286–90; useful to avoid simulation mistakes 308

Petri nets method, overcoming verification and description problems 282
physical sciences, abstract theory 116
physics, floating models and 42
place-transition net of wrong process 303
place-transition Petri net 289–90
polar cases 27
Polhill, J.G., interval arithmetic 44, 47, 114
poor decisions, accumulate over course of months 137
Popper, Karl 42, 108
Popperian approach 108, 111
population dynamics 184–5
population ecology 17, 19
populations of firms, qualitative model of relationships 75, 77
Porac, J.F. 174–5
Porter, M.E.: changes in demand growth 158; cluster firms 71, 173; competition, five-force framework 152; competitive goals 166–7; generic strategies 153–5, 161, 168–9; investment decisions 152; operational efficiency 158; technology and innovation 157
PowerSim 216, 227n1
predictive models 40
“pre-market” structures 241
predators, foxes 261
pressures for historical and simulated behaviors to diverge, two cases 28
Presutti, M. 71–2, 76, 79, 96
prey, rabbits 261
pricing policy, based on “cost plus margin” 158
prisoner’s dilemma 86
problems of coordination and control, rapid expansion and growth 145
pro-social behavior 238, 241
production unit definition 252
production units, understood as atomic units 259
productivity 148; agents’ 220–23, 225; efficient 135; managers’ 223; operational resources 157, 160; organization 212, 217, 225; skewed 221; target resource 131–4, 140–41, 143–4, 147; workers’ 223

pseudorandom bias 114
punctuated model of organizational change theory 15–16, 30, 93
Python, modern object-oriented language 248

Q
qualitative and quantitative, difference between 45, 46
queuing theory 199

R R&D projects 132 Rahmandad, H. 4 Rallet, A. 171–2 Ramanujam, V. 123–4 random trial and error, cooperative prosocial “seeds” 241 rating index, quality of balance sheet 205 “rational” agent 55 rationality and control 196–8 real process net 303–4 real time gross settlement (RTGS) payment system 199 recent cultural group selection models 235–7 “recipe”, term typical of industrial economics 252, 260 replication 107, 113–15, 116 researchers 5–6, 9–10, 13–15, 19; ABM 106–7, 114, 116; computer simulation 20–21, 24–5, 29–30, 69, 81, 91, 232, 258–9; generate alternative histories 95–6; geographical clusters and 173; strategy content 149 resource correction management control loop 130, 149n5 resource overstretching 128, 132 137, 144 resource sharing 147–8; benefits 123, 137, 141, 145–6; diversification 129, 133, 136, 144; failure of 144; implementation 125, 141, 144; portfolio 125; potential value of 139; two requirements 134 Riolo, R.L.46 231; models of cooperation 237–8 rivalry 171, 173, 174–6, 187 rivals, identification 186 role, definition 211 role of agents, predefined 212

role theory 211, 214
roles, abstraction from individual agents 217; bounded optimal allocation, change rule implemented 221; organizational 214–15; simple 214
Romanelli, E. 15–16, 30, 93, 182
Rouchier, J. 60, 117
rounding errors, can lead to misleading results 114
routines 276; decision 6, 23, 87; managerial policies or 130–31; organizational 132, 144, 148; power relations and 216; tacit 29
rule-based decision processes 106

S
Sastry, M.A. 16–17, 30, 124, 152, 157; disruptive innovation 157; model formalization 155; punctuated organizational change 93–4
“scaling” 24
SCP 198–9; studies in American banking 199
securitization 202–3; mid-2007 financial crisis and 202
selfish behavior, increases utility 237
self-organization 214, 216, 255–6
sending, impossible if no machines of next type available to receive 306
sensitivity 138; tests 142
sensitivity analysis 12, 23, 26–8, 168; revising theoretical explanation 27
shared resources: could unleash benefits 145; management should invest in advance of increasing workloads 144; overstretching exacts costs on diversifying firms 136
shoe-making industry 7, 101
Silicon Valley 71
similarity, reduces information uncertainty 175
Simon, H. 5–6, 132, 256, 259; Administrative Behavior 249–50; bounded rationality 178, 197, 250; “docility” 232; hypothesis and its empirical verification 194
simple industrial process simulation 305–7
simplicity, floating models and 41
simulated firms, structure of 156

simulated organization, formal and informal network 217
simulated organization’s productivity, three hierarchical structures 225
simulated world, laboratory where we specify parameters 250–51
simulation: allows researchers to observe how people behave 258; appropriate, enables processes to be captured 51; beginning a formal structure set 219; behavioral specifications, not assumptions 250; competitive industry and 160; formal representation of real-world systems 208; minimal constraint upon negotiations 56; results of 161–2; theoretical hypotheses 87; third way of doing science 107; used to stage abstract process 52
simulation dynamics, assumptions on tasks’ execution 218
simulation experiments 138–44; how complement formal analysis 80–81; management and benefits of resource sharing 141–2; performance consequences 124; profitability of four different 138–9
simulation models, virtual laboratories 87
simulation process, three different steps in Petri nets 301
simulation results: scenario 1 162–3; scenario 2 163–6; scenario 3 166; scenario 4 166–8; yield knowledge that adds value 14
simulation runs from example 62–5
simulation system, soft computing and 257
simulations, firm management problems and 280
simulations and what-if analysis, used to investigate firm management problems 280
SimVision-R 216
SkillWorld 112
Skvoretz, J. 5, 87
SLACER 112
SLAPP, jES and 248
Sloan School 8
Smith, Adam, “invisible hand” 232
Sober, E. 233, 236

social concepts, controlling social action within organizations 211
social learning, “side effect” of 232
social plasticity 87
“social rationality” 241
social sciences, computer simulation 4; studies of social simulation 5
social systems, counterintuitive behavior 197
software programs, able to grant or deny financing 205
spatial distribution, economic growth and 172
stability of environment, agent’s productivity 225
standard Petri nets, all black tokens used in 296
StarLogo 216, 227n5
Stella 216, 227n2
Sterman, J.D. 4, 24; Bass traditional model 159; behavioral simulation models 155; five-step modeling process 196; market growth model 160; modeling and 195; performance dynamics 144; quality problems 136–7; service quality in service industries 133; System Dynamics 153; uncertainty about asset stocks 131
Strauss, A.L. 20, 22
structure 62
structure-conduct-performance paradigm see SCP
“success” of group selection model, achieving collective goal 236
summary of dynamics of model 218–19
surprise behaviors and cognitive dissonance, theory and 88
Sutherland, J.W. 136, 145
Swarm 216, 227n4; jES based on 248
Switzerland, inter-bank networks 199
synergy 123–4, 128, 139, 141–2, 145
synthetic definition of performance measurement, Appendix A 226
synthetic statement, extensive 11
system in an environment, characterized by a structure 194
System Dynamics 6, 8, 124, 149, 153, 160, 194; modeling 169, 194; corporate diversification 124
systems of numbers, formal tools 42; limitations 43–4

T
tag model 237–8
TagWorld 112
target experiment, dynamics of fixed 143
target productivity 131–2
target resource productivity, actual resource workload 134; aspiration adjusts over time 133
task set 218, 221
technology resources, increase attractiveness of product 156–7, 160–61
Teece, D.J. 129, 135–6
textile geographical cluster, Chinese suppliers 72
theories of conflict 5
theory building 110
theory of decision making 5
“theory as process” 20
theory of interaction 5
theory testing 109–110
Thomas, E.J. 211
Thomas, G., role in computer science 214
Thomas, H. 176–7, 211, 214
three-level prey-predator model: grass, rabbits and foxes 261–2
time series: alternative 232; analysis 21; computer simulation and 24, 28, 30, 94; empirical 10; pattern matching and 26; simulated behaviors and 101; snapshots 95; Val Vibrata and 96–7
Torre, A. 171–2, 174
total order 43
trade-off: between flexibility and completeness 81–91; between levels of investment 165; between loss of generality that computer simulation entails 80–81; completeness/flexibility depends on purpose of the model 85
transition moves tokens from “Agent thinking” to “Agent operating” 286
transition of a system between two equilibrium points 83–4, 91
tribal approaches, appropriate markets and 240–41
“tribal systems”, definition 232–3
Troitzsch, K.G. 3–5, 9, 70
turnover index, relationship between turnover and assets 205
Tushman, M.L. 15–16, 30, 93, 182

U
UML representation in Appendix B 221, 227
unified modeling language see UML
unique labeling 43, 46
universal bank, organization of 200, 203
unstable environment, more role changes and higher adaptation costs 225
“utility deal” 240

V
Val Vibrata cluster 76, 79, 96; data collected in 96–7; simulated demographic dynamics 97, 99
Varadarajan, P. 123–4
variety, not same as variation 45
verbal model, formalized 93
verbal theory of behavior 16, 30–31
vertical specialization 214–15
vicarious exploitation 181–3, 187
vicarious exploration 183, 187
vicarious learning 182
Vignaux, G.A. 199
Virtual Design Team 216
voting behavior 5

W
Walker, G., social networks 254–5
wealth, natural decay 185
“what if” 216, 232, 247, 258–9, 280–81

“What to Do” (WD) 251–2, 259
“When Doing What” (WDW) 252
“which is Doing What” (DW) component 251–2, 259
Wikipedia 241
Williamson, P.J. 129, 135–6
Wilson, D.S. 233, 236
Winter, S.G. 101, 124, 129, 157
Wooldridge, M., organization ‘collection of roles’ 212
workers, hired for a cycle and then process repeated 264
worker’s skills and firms, co-evolution model 262–5
workers-firms v.1 263, 265
workers-firms v.2 264, 265

X
XJ Technologies’ AnyLogic (www.xjtek.com) 199

Y
Yin, R.K. 20–21

Z
Zambonelli, F. 211, 216
Zipf distribution 184
Zuckerman, E.W., analyses of networks in economics 255
Zurich Water Game 61