
From Complexity in the Natural Sciences to Complexity in Operations Management Systems

Systems of Systems Complexity Set coordinated by Jean-Pierre Briffaut

Volume 1

From Complexity in the Natural Sciences to Complexity in Operations Management Systems

Jean-Pierre Briffaut

First published 2019 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK

John Wiley & Sons, Inc. 111 River Street Hoboken, NJ 07030 USA

www.iste.co.uk

www.wiley.com

© ISTE Ltd 2019

The rights of Jean-Pierre Briffaut to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2019930116

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-78630-368-4

Contents

Preface
Dedication

Chapter 1. Complexity and Systems Thinking
1.1. Introduction: complexity as a problem
1.2. Complexity in perspective
1.2.1. Etymology and semantics
1.2.2. Methods proposed for dealing with complexity from the Middle Ages to the 17th Century and their current outfalls
1.3. System-based current methods proposed for dealing with complexity
1.3.1. Evolution of system-based methods in the 20th Century
1.3.2. The emergence of a new science of mind
1.4. Systems thinking and structuralism
1.4.1. Systems thinking
1.4.2. Structuralism
1.4.3. Systems modeling
1.5. Biodata of two figureheads in the development of cybernetics
1.5.1. Ludwig von Bertalanffy (1901–1972)
1.5.2. Heinz von Förster (1911–2002)
1.6. References

Chapter 2. Agent-based Modeling of Human Organizations
2.1. Introduction
2.2. Concept of agenthood in the technical world
2.2.1. Some words about agents explained
2.2.2. Some implementations of the agenthood paradigm
2.3. Concept of agenthood in the social world
2.3.1. Cursory perspective of agenthood in the social world
2.3.2. Organization as a collection of agents
2.4. BDI agents as models of organization agents
2.4.1. Description of BDI agents
2.4.2. Comments on the structural components of BDI agents
2.5. Patterns of agent coordination
2.5.1. Organizational coordination
2.5.2. Contracting for coordination
2.5.3. Coordination by multi-agent planning
2.6. Negotiation patterns
2.7. Theories behind the organization theory
2.7.1. Structural and functional theories
2.7.2. Cognitive and behavioral theories
2.7.3. Organization theory and German culture
2.8. Organizations and complexity
2.8.1. Structural complexity
2.8.2. Behavioral complexity in group decision-making
2.8.3. Autonomous agents and complexity in organization operations: inexorable stretch to artificial organization
2.9. References

Chapter 3. Complexity and Chaos
3.1. Introduction
3.2. Complexity and chaos in physics and chemistry
3.2.1. Introductory considerations
3.2.2. Quadratic iterator modeling the dynamic behavior of animal and plant populations
3.2.3. Traces of chaotic behavior in different contexts
3.3. Order out of chaos
3.3.1. Determinism out of an apparent random algorithm
3.3.2. Chaos game and MRCM (Multiple Reduction Copy Machine)
3.3.3. Randomness and its foolery
3.4. Chaos in organizations – the certainty of uncertainty
3.4.1. Chaos and big data: what is data deluge?
3.4.2. Change management and adaptation of information systems
3.5. References

Conclusion

Appendices
Appendix 1. Notions of Graph Theory for Analyzing Social Networks
Appendix 2. Time Series Analysis with a View to Deterministic Chaos

Index

Preface

The word “complex” is used in many contexts, be it at the level of social sciences, biology, chemistry and physics or in our professional and private environments. Any time we cannot understand a situation, we try to escape the challenge of feeling doubt and uncertainty, because we have the impression that we lack the methods and techniques (in one word, the capabilities) to address the issues involved. Within this framework, we decide to give up and convince ourselves that we are right to do so because we are overwhelmed by “complexity”.

Complexity is an idea woven into the fabric of our daily experience. When a phenomenon seems simple to us, it is because we perceive that one object and one action are involved, even though reality may be much more intricate. This simplification is enough to let us “cognize” the ins and outs of the situation we experience. In contrast, when a great number of interacting elements are involved, we perceive the situation as complex. Economic systems and human relationships are complex. Macroscopic situations may appear “simple” because the underlying microscopic states are hidden. We perceive “averages” without knowing the detailed states of the components of a whole.

During the second half of the 20th Century, developments in the thermodynamic theory of irreversible processes, the theory of dynamical systems and classical mechanics have converged to show that the chasm between simple and complex, order and disorder, is much narrower than previously thought.


Biology is acknowledged as complex, as it is associated with living organisms whose chemical functioning relies on the interactions of many subsystems. The idea of complexity is no longer restricted to biology and has undergone a paradigm shift: it is invading the physical as well as the social sciences.

The purpose of this book is to describe the main results reached in the natural sciences (physics, chemistry and biology) to come to terms with complexity during the second half of the 20th Century, and how these results can be adapted to help understand and conduct management operations. It is divided into three main chapters, namely “Complexity and Systems Thinking”, “Agent-based Modeling of Human Organizations” and “Complexity and Chaos”.

The purpose of the first chapter, “Complexity and Systems Thinking”, is to give an overview of the way the concept of system has been instrumental in interpreting phenomena observed in the natural sciences as well as in the emotional behaviors of the human system, as a human being is a system in itself.

The second chapter is devoted to complexity and human organizations. Analyzing existing organizations is a difficult exercise because human relations are intricate. Making them explicit requires the help of a relevant model, including cognitive features. The BDI (Beliefs, Desires, Intentions) agent model that will be elaborated meets this requirement.

The third chapter deals with complexity and chaos, which is associated with disorder. We will examine how the concepts developed in the physical sciences can be used in the field of human organizations for understanding their behavioral evolutions, especially when change management in organizations is pushed by fast-evolving technologies and their consequences in terms of interacting collaboration and cooperation between their human actors.

Jean-Pierre BRIFFAUT
January 2019

Dedication

This book has been written on the occasion of the fortieth anniversary of the foundation of the Institut Frederik Bull (IFB) by Bull. It operates as a think tank with working groups studying the societal impacts of informatics and economy digitalization. The working group linked to this book investigates the complexity of Systems of Systems (SoS). IFB has developed collaborations with management and engineering schools (EMLV, ESILV, IIM) located at Le Pôle Universitaire Leonard de Vinci in Paris-La Défense.

1 Complexity and Systems Thinking

1.1. Introduction: complexity as a problem

Perception of the world brings about the feeling that it is a giant conundrum with dense connections among what is viewed as its parts. As human beings with limited cognitive capabilities, we cannot cope with it in that form and are forced to reduce it to some separate areas which we can study separately. Our knowledge is thus split into different disciplines, and over the course of time, these disciplines evolve as our understanding of the world changes. Because our education is conducted in terms of this division into different subject matters, it is easy not to be aware that the divisions are man-made and somewhat arbitrary. It is not nature that divides itself into physics, chemistry, biology, sociology, psychology and so on. These “silos” are so ingrained in our thinking processes that we often find it difficult to see the unity underlying these divisions.

Given our limited cognitive capabilities, our knowledge has been arranged by classifying it according to some rational principle. Auguste Comte (1880) in the 19th Century proposed a classification following the historical order of the emergence of the sciences and their increasing degrees of complexity in terms of understanding their evolving concepts. Comte did not mention psychology as a science
linking biology and the social sciences. He did not regard mathematics as a science but as a language which any science may use. He produced a classification of the experimental sciences into the following sequence of increasing complexity: physics, chemistry, biology, psychology and the social sciences. Physics is the most basic science, being concerned with the most general concepts such as mass, motion, force, energy, radiation and atomic particles. Chemical reactions clearly entail the interplay of these concepts in a way that is intuitively more intricate than isolated physical processes. A biological phenomenon such as the growth of a plant or an embryo brings in again a higher level of complexity. Psychology and the social sciences belong to the highest degree of human-felt complexity. In fact, we find it convenient to tackle the hurdles we are confronted with by establishing a hierarchy of separate sciences. As human beings, we take a special interest in biology because it studies the very fabric of our existence.

In physics, the scientific method inspired by the reductionistic approach of Descartes’ rule has proved successful for gaining knowledge. Chemistry and biology can rely on physics for explaining chemical and biological reactions; however, they are left with their own autonomous problems. K. Popper shares this point of view in his “intellectual biography” (Popper 1974): “I conjecture that there is no biological process which cannot be regarded as correlated in detail with a physical process or cannot be progressively analysed in physiochemical terms. But no physiochemical theory can explain the emergence of a new problem… the problems of organisms are not physical: they are neither physical things, nor physical laws, nor physical facts. They are specific biological realities; they are ‘real’ in the sense that their existence may be the cause of biological effects”.


1.2. Complexity in perspective

1.2.1. Etymology and semantics

The noun “complexity” or the adjective “complex” are currently used in many oral or written contexts when some situations, facts or events cannot be described and explained with a straightforward line of thought. It is always interesting to investigate the formation of a word and how its meaning has evolved in time in order to get a better understanding of its current usage.

“Complex” is derived from the Latin complexus, made of interlocked elements. Complectere means to fold and to intertwine. This word showed up in the 16th Century for describing what is composed of heterogeneous entities and gained acceptance in logic and mathematics (complex number) circa 1652. At the turn of the 20th Century, it became closer to “complicated” and was used in chemistry (organic complexes), economics (1918) and psychology (Oedipus complex, inferiority complex – Jung and Freud 1909/1910). “Complicated” is derived from the Latin complicare, to fold and roll up. It was used in its original meaning at the end of the 17th Century. Its current usage is analogous to “complex”: what is complex – a theory, concept, idea, event, fact or situation – is something difficult to understand. It is related to human cognitive and computational capabilities, which are both limited. A telling instance is the meaning given to the word “complex” in psychology: it is a related group of repressed ideas causing an abnormal behavior or mental state. It is implicitly supposed that the relations between these repressed ideas are intricate and difficult to explain to outside observers.

In the Oxford dictionary, “complex” is described by three attributes, i.e. consisting of parts, composite and complicated. These descriptors are conducive to exploring the relationships of the concept of
complexity with two well-established fields of knowledge, i.e. systems thinking and structuralism. That will be done in the following sections.

1.2.2. Methods proposed for dealing with complexity from the Middle Ages to the 17th Century and their current outfalls

Complexity is not a new issue in the quest for what is knowable to humans about the world they live in. Two contributors from the Middle Ages and the Renaissance will be considered here, William of Ockham and René Descartes, in their endeavors to come to terms with complexity. Their ideas are still perceptible in the present time.

1.2.2.1. Ockham’s razor and its outfall

William of Ockham (circa 1285–1347), known as the “More Than Subtle Doctor”, was an English scholastic philosopher who entered the Franciscan order at an early age and studied at Oxford. William of Ockham’s razor (also called the principle of parsimony) is the name commonly given to the principle formulated in Latin as “entia non sunt multiplicanda praeter necessitatem” (entities should not be multiplied beyond what is necessary). This formulation, often attributed to William of Ockham, has not been traced to any of his known writings. It can be interpreted as an ontological principle to the effect that one should believe in the existence of the smallest possible number of general kinds of objects: there is no need to postulate inner objects in the mind, but only particular thoughts, or states of mind, whereby the intellect is able to conceive of objects in the world (Cottingham 2008). It can be translated into a methodology to the effect that the explanation of any given fact should appeal to the smallest number of factors required to explain the fact in question. Opponents contended that this methodological principle commends a bias towards simplicity.

Complexity and Systems Thinking

5

Ockham wrote a book, Sum of Logic. Two logical rules, now named De Morgan’s laws, were stated by Ockham. As rules or theorems, the two laws belong to standard propositional logic:

1) [Not (p And q)] is equivalent to [Not p Or Not q].

2) [Not (p Or q)] is equivalent to [Not p And Not q].

Not, And and Or are logic connectors; p and q are propositions. In other words, the negation of a conjunction implies, and is implied by, the disjunction of the negated conjuncts; and the negation of a disjunction implies, and is implied by, the conjunction of the negated disjuncts. It can be figured out that Ockham, who was involved in a lot of disputations, felt the need to use the minimum number of classes of objects in order to articulate his arguments in efficient, discussible ways on the basis of predicate logic. Predicate logic allows for clarifying entangled ideas and arguments, and for producing a “rational” chain of conclusions that can be understood by a wide spectrum of informed people.

Different posterior schools of thought can be viewed as heirs apparent to the principle of Ockham’s razor, among many others ontological theory and Lévi-Strauss’ structuralism. The word “ontology” was coined in the early 17th Century to avoid some of the ambiguities of “metaphysics”. Leibniz was the first philosopher to adopt the word. The terminology introduced in the 18th Century came to be widely adopted: ontology is the general theory of being as such and forms the general part of metaphysics. In the usage of 20th-Century analytical philosophy, ontology is the general theory of what there is (MacIntyre 1967). Ontological questions revolve around:

– the existence of abstract entities (numbers);


– the existence of imagined entities such as golden mountains/square circles;

– the very nature of what we seek to know.

In the field of organization theories, ontology deals with the nature of human actors and their social interactions. In other, more abstract words, ontology aims to establish the nature of the entities involved and their relationships. Ontology and knowledge go hand in hand because our conception of knowledge depends on our understanding of the nature of the knowable. The ontological commitment of a theory is twofold:

– assumptions about what there is and what kinds of entities can be said to exist (numbers, classes, properties);

– when commitments are paraphrased into a canonical form in predicate logic, they are the domains over which the bound variables of the theory range.

When it comes to complexity, the ontological description of an entity should refer to its structure (structural complexity) and its organization (organizational complexity). This is in line with the mindset in German culture of describing a set of entities by two concepts, i.e. Aufbau (structure) and Ablauf (flows of interactions inside the structure). An entity can be a proxy that represents our perception of the world. According to the purpose, a part of the world can be perceived in different ways and can turn out to be modeled by different sets of ontological building blocks.

Lévi-Strauss was a Belgian-born French social anthropologist and leading exponent of structuralism, a name applied to the analysis of cultural systems in terms of the structural relationships among their elements. Lévi-Strauss’ structuralism was an effort to classify and reduce the enormous amount of information about cultural systems to ontological entities. Therefore, he viewed cultures as systems of communication and constructed models based on structural linguistics, information theory and cybernetics to give them an interpretation. Structuralism is a school of thought which evolved first in linguistics
(de Saussure 1960) and did not disseminate outside the French-speaking intellectual ecosystem.

1.2.2.2. René Descartes

René Descartes (1596–1650), French philosopher and mathematician, was very influential in theorizing the reductionistic approach to analyzing complex objects. It consists of the view that a whole can be fully understood in terms of its isolated parts, or an idea in terms of simple concepts. This attitude is closely connected to the crucial issue that science faces, i.e. its ability to cope with complexity. Descartes’ second rule for “properly conducting one’s reason” divides up the problems being examined into separate parts. This principle, most central to scientific practice, assumes that this division will not dramatically distort the phenomenon under study. It assumes that the components of the whole behave in the same way when examined independently as when they are playing their part in the whole, or that the principles governing the assembling of the components into the whole are themselves straightforward. The well-known application of this mindset is the decomposition of a human being into the body and the mind localized in the brain. It is surprising to realize that Descartes’ approach to understanding what a human being is and how (s)he is organized remains an issue discussed by philosophers of our time. The issue of mind–body interaction, with the contributions of the neurosciences, will be developed in another section.

The argument supporting this approach is that it reduces the complexity of an entity by reducing the variety of variables to be analyzed concomitantly. It is clear that this methodology can be helpful as a first step, but understanding how the isolated parts interact to produce the properties of the whole cannot be avoided. This type of exercise can appear very tricky. This way of approaching complexity contrasts with holism. Holism consists of two complementary views. The first
view is that an account of all the parts of a whole and of their interrelations is inadequate as an account of the whole. For example, an account of the parts of a watch and of their interactions would be incomplete as long as nothing is said about the role of the watch as a whole. The complementary view is that an interpretation of a part is impossible or at least inadequate without reference to the whole to which it belongs. In the philosophy of science, holism is a name given to views like the Duhem–Quine thesis, according to which it is whole theories rather than single hypotheses that are accepted or rejected. For instance, the single hypothesis that the earth is round is confirmed if a ship disappears from view on the horizon. However, this tenet presupposes a whole theory – one which includes the assumption that light travels in straight lines. The disappearance of the ship, with a theory that light-rays are curved, can also be taken to confirm that the earth is flat. The Duhem–Quine thesis implies that a failed prediction does not necessarily refute the hypothesis it is derived from, since it may be preferable to maintain the hypothesis and instead revise some background assumptions. The term holism was coined by Jan Smuts (1870–1950), the South African statesman and philosopher, and used in the title of his book Holism and Evolution (Smuts 1926). In the social sciences, holism is the view that the proper object of these sciences is systems and structures which cannot be reduced to individual social agents, in contrast with individualism.

As a mathematician, Descartes developed what is called analytical geometry. Figures of geometric forms (lines, circles, ellipses, hyperbolas, etc.) are defined by analytical functions and their properties described in terms of equations in “Cartesian” coordinates measured from intersecting straight axes. This is an implicit way to facilitate the analysis of complex properties of geometric forms along different spatial directions.
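As a small illustration of this analytical reduction (a sketch of our own, not an example from the book; the function name and the particular circle and line are arbitrary), a purely geometric question – where does a straight line meet a circle? – becomes the resolution of an ordinary quadratic equation once both figures are written in Cartesian coordinates:

import math

def line_circle_intersections(m, c, r):
    # Intersections of the line y = m*x + c with the circle x**2 + y**2 = r**2.
    # Substituting the line into the circle equation gives a quadratic in x:
    #   (1 + m**2) * x**2 + 2*m*c * x + (c**2 - r**2) = 0
    a = 1 + m ** 2
    b = 2 * m * c
    k = c ** 2 - r ** 2
    disc = b ** 2 - 4 * a * k       # the discriminant decides: secant, tangent or no contact
    if disc < 0:
        return []                   # the line misses the circle
    roots = {(-b + math.sqrt(disc)) / (2 * a), (-b - math.sqrt(disc)) / (2 * a)}
    return [(x, m * x + c) for x in sorted(roots)]

# Example: the unit circle and the line y = x cross at two symmetric points.
print(line_circle_intersections(1.0, 0.0, 1.0))
# [(-0.7071067811865476, -0.7071067811865476), (0.7071067811865476, 0.7071067811865476)]

Whether the line is secant, tangent or exterior is read directly off the sign of the discriminant: the geometric property has been reduced to the algebra of the coordinates.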

Complexity and Systems Thinking

9

1.3. System-based current methods proposed for dealing with complexity

1.3.1. Evolution of system-based methods in the 20th Century

All current methods used to deal with complexity have evolved in the 20th Century within the framework of what is called systems theory. The system concept is not a new idea. It was already defined in the encyclopedia by Diderot and d’Alembert, published in the 18th Century in Amsterdam, to describe different fields of knowledge. In astronomy, a system is assumed to be a certain arrangement of various parts that make up the universe. The earth in Ptolemy’s system is the center of the world; this view was supported by Aristotle and Hipparchus. The motionless sun is the center of the universe in Copernicus’ system. In the art of warfare, a system is the layout of forces on a battlefield or the provision of defensive works, respectively according to the concepts of a general or a military engineer. The project by Law around 1720 to introduce paper money for market transactions was called Law’s system.

1.3.1.1. The systems movement from the 1940s to the 1970s

A revived interest in the concept of systems emerged in the 1940s, in the wake of first-order cybernetics, whose seminal figure is Norbert Wiener (1894–1964). His well-known book Cybernetics: Or Control and Communication in the Animal and the Machine (Wiener 1948) was published in 1948 and is considered a landmark in the field of controlled mechanisms (servo-mechanisms). The word “animal” in the title of Wiener’s book reflects his collaboration with the Mexican physiologist Arturo Rosenblueth (1900–1970) of the Harvard Medical School, who worked on transmission processes in nervous systems and favored teleological, non-mechanistic models of living organisms.

Cybernetics is the science that studies the abstract principles of the control and regulation of complex organizational structures. It is concerned not so much with what systems consist of but with their function
capabilities and their articulations. Cybernetics is applied to design and manufacture purpose-focused systems of non-self-reorganizable components. By design, these mechanistic systems can sustain a certain range of constraints from the environment – never forget that the surroundings in which a system is embedded are part of the system – through feedback and/or feed-forward loops, as well as the failure of some of their components. In general, this last situation is dealt with at the design stage to secure a “graceful” degradation of operations. This first-order cybernetics clearly refers to Descartes’ school of thought: courses of action controlled by memorized instructions.

Shortcomings were revealed when the cybernetics corpus of concepts was applied to non-technical fields, especially the social and management sciences. Second-order cybernetics was worked out under the impetus of Heinz von Förster (1978). The core idea was to distinguish the object (the system) and the subject (the system designer and controller). This delineation focuses on the role of the subject, which decides on the rules and means a given set of interacting entities (the object) has to operate with. The subject can be a complex system. It may consist of different parts: a human being or group, and technical proxies thereof encapsulating the rules chosen by the human entity. These rules are subject to dynamical changes due to evolving constraints and to set targets that are fulfilled or not. The presence of human cognitive capabilities in the control loop allows for securing the sustainability of systems in evolving ecosystems.

Another important stakeholder – not to say the founding father – in the conception and dissemination of the system paradigm is the biologist Ludwig von Bertalanffy (1901–1972). It is relevant to describe his scientific contribution to systems thinking, a contribution that goes far beyond his well-known book General Systems Theory, published in 1968 and often referred to as GST (von Bertalanffy 1968). With the economist Kenneth Boulding, R.W. Gerard and the biomathematician A. Rapoport, he founded in 1954 a think tank (Society for General Systems Theory) whose objectives were to define a corpus of concepts and rules relevant to system design, analysis and control. GST was imagined by Ludwig von Bertalanffy as a tool to design models in all domains where a “scientific” approach can be secured.


In contrast to the mathematical approach of Norbert Wiener, Ludwig von Bertalanffy describes models in a non-formal language, striving to translate relations between objects and phenomena into sets of interacting components, the environment being a full part of the system. These interacting components match an organizational structure with an inner dynamical assembling device, like living organisms. Contrary to Norbert Wiener’s cybernetic feedback and feed-forward mechanisms, actions according to Ludwig von Bertalanffy’s view are not only applied to objectively given things but can result in the self-(re)organization of system structures to reach and/or maintain a certain state, as happens in living organisms. In a world where data travels at the speed of light, the response time to adjust a human organization to an evolving environment is critical. The environment of any system is a source of uncertainty, because it is generally out of the control of the system designer and operator.

1.3.1.2. The systems movement in the 1980s: complexity and chaos

The systems movement in the 1980s became aware of two facts that had up to this time remained unconsidered, i.e. that complexity, cutting through all the scientific disciplines from physics to biology and economics, is a subject matter in itself, and that researchers in a limited number of fields had independently pioneered the investigation of chaotic behaviors of systems.

The Santa Fe Institute (SFI) played and still plays a major leading role in the interdisciplinary approach to complexity. The SFI initiative brings a telling insight into the consciousness felt in the 1980s by scholars of different disciplines that they shared the same issue, complexity, and that interdisciplinary discussions could help tackle this common stumbling block to achieve progress in knowledge. SFI was founded in Santa Fe (New Mexico) in 1984 by scientists (including several Nobel laureates) mainly coming from the Los Alamos Laboratory. It is a non-profit organization and was created to be a visiting institution with no permanent positions. It consists of a
small number of resident faculty, post-doctoral researchers and a large group of external faculty. Funding comes from private donors, grant-making foundations, government science agencies and companies affiliated with its business network. Its budget in 2014 was about 14 million US dollars. The primary focus is theoretical research in wide-ranging models and theories of complexity. Educational programs are also run, from undergraduate to professional level. As viewed by SFI, “complexity science is the mathematical and computational study of evolving physical, biological, social, cultural and technological systems” (SFI website). Research themes and initiatives “emerge from the multidisciplinary collaboration of (their) research community” (SFI website). SFI’s current fields of research, which are described on its website, demonstrate the wide spectrum of subject matters considered relevant today:

– complex intelligence: natural, artificial and collective (measuring and comparing unique and species-spanning forms of intelligence);

– complex time (can a theory of complex time explain aging across physical and biological systems?);

– invention and innovation (how does novelty – both advantageous and unsuccessful – define evolutionary processes in technological, biological and social systems?).

M. Mitchell Waldrop (1992) chronicles the events that happened at SFI from its foundation to the early 1990s. It is outside the scope of this context to survey all the interdisciplinary workshops run during this period of time. I will elaborate on the contributions of John H. Holland and W. Brian Arthur.

The lecture “The Global Economy as an Adaptive Process”, delivered by John H. Holland, Professor of Psychology and Professor of Computer Science and Engineering at the University of Michigan, at
a workshop held on September 8, 1987, contains the following main points, which are of general application:

– The economy is the model “par excellence” of what are called “complex adaptive systems” (CAS), a term coined by the SFI. They share crucial properties and refer to the natural world (brains, immune systems, cells, developing embryos, etc.) and to the human world (political parties, business organizations, etc.). Each of these systems is a network of agents acting in parallel and interacting. This view implies that the environment of any agent is produced by other acting and reacting agents. The control of this type of system is highly distributed as long as no agent turns out to be a controlling supervisor. CAS have to be contrasted with “complex physical systems” (CPS). CPS follow fixed physical laws usually expressed by differential equations – Newton’s laws of gravity and Maxwell’s laws of electromagnetism are cases in point. In CPS, neither the laws nor the nature of the elements change over time; only the states of the elements change according to the rules of the relevant laws. The variables of the differential equations describe element states. CPS will be investigated in section 1.3.1.3.

– Complex adaptive systems have a layered architecture. Agents of lower layers deliver services to agents of higher layers. Furthermore, all agents engineer changes to cope with the environmental requirements perceived through incoming signals.

– Complex adaptive systems have capabilities for anticipation and prediction encoded in their genes.

Irish-born W. Brian Arthur, who shifted from operations research to economics when joining the SFI, has kept working in the field of complexity and economics. He produced an SFI working paper in 2013 (Arthur 2013) summarizing his ideas about complexity, economics and complexity economics, a term he first used in 1999. Here are the main features of W. Brian Arthur’s positions:

– “Complexity is not a theory but a movement in the sciences that studies how the interacting elements in a system create overall
patterns and how those overall patterns in turn cause the interacting elements to change or adapt … Complexity is about formation – the formation of structures – and how this formation affects the objects causing it” (p. 4). This means that most systems experience feedback loops of some sort, which entails nonlinear behaviors “genetically”.

– An economic system cannot be in equilibrium. “Complexity economics sees the economy as in motion, perpetually ‘computing’ itself – perpetually constructing itself anew” (p. 1). Equilibrium, studied by neoclassical theory, is an idealized case of non-equilibrium which does not reflect the behavior of the real economy. This simplification makes the mathematical equations tractable. This point of view is in line with the dissipative systems studied by Ilya Prigogine. An economic system, like a living organism, operates far from equilibrium.

A lot of distinguished scholars brought significant contributions to the SFI activities. It is outside the scope of this framework to describe all of them.

1.3.1.3. Ilya Prigogine: “the future is not included in the present”

Ilya Prigogine and his research team played a major role in the study of irreversible processes from the standpoint of thermodynamics. His reputation outside the scientific realm comes from his collaboration with Isabelle Stengers, with whom he co-authored some seminal books about the epistemology of science (Prigogine 1979; Prigogine and Stengers 1979 and 1984). He received the 1977 Nobel Prize for Chemistry for his contributions to non-equilibrium thermodynamics, particularly the theory of dissipative structures, which are at the very core of all phenomena in nature. He started his research at a time when non-equilibrium thermodynamics was not considered worthwhile, since all thermodynamic states were supposed to reach equilibrium and stability sooner or later as time passes. Only equilibrium and near-equilibrium thermodynamic systems were subject to academic research.


The pioneering work of Ilya Prigogine proved that non-equilibrium systems are widespread in nature: many of them are natural organic processes, namely evolving with time without any possible time reversal. That clearly holds true for biological processes. Science cannot yet reverse aging! According to Ilya Prigogine, systems are not complex themselves, but their behaviors are. He contrasted classical and quantum dynamics models with thermodynamic models. In classical dynamics (Newton’s laws) and quantum mechanics, equations of motion and wave functions conserve information as time elapses: initial conditions can be restored by reversing the time variable t into –t. Thermodynamic models refer to a paradigm of processes destroying and creating information without letting initial conditions be restored. Time symmetry is broken: this is expressed by comparing time to an arrow, which never comes back to its starting point, contrary to boomerangs.

A key component of any system is its environment. When this part of the system is not taken into account, the system is qualified as a “closed system”. This is in fact a simplifying abstraction – a falsification in Popper’s terms – which facilitates reasoning and computations. In reality, all thermodynamic systems are “open”, i.e. exchanging matter and energy with their environments. Prigogine called them “dissipative systems”. Why this qualifier? For physicists, irreversible processes, such as the heat transfer from a hot source to a cold source, are associated with energy dissipation, a degradation of a sort: there is exhaustion of the available energy in the system. If we want to transfer energy from a cold source to a hot source, energy must be delivered. A dissipative system is a thermodynamically open system operating out of, often far from, thermodynamic equilibrium in an environment with which it exchanges matter and energy. A dissipative structure is a dissipative system with a reproducible steady state, reached naturally or under constraints. Dissipative structures constitute a large class of systems found in nature. All living organisms exchange matter and energy with their environment and are, as such, dissipative systems. That explains why
biologists, L. von Bertalanffy among others, were the first to develop an interest in this field of research a long time ago. Biological processes are considered by many people as “complex”.

Basically, a chemical system is symbolized by a reaction in which a molecule of species A can combine with a molecule of species B to produce one molecule of species C and one molecule of species D. This process is represented by

A + B → C + D [1.1]

A and B are the reactants, whereas C and D are the products. A and B combine at a certain rate and disappear. C and D are formed and appear as the reaction proceeds. In an isolated system, A and B do not disappear completely. After a certain lapse of time, the concentrations [A], [B], [C] and [D] reach a state in which the ratio [C][D]/[A][B] takes a fixed value. How can we come to terms with equation [1.1]? In fact, a reverse reaction takes place:

C + D → A + B [1.2]

When both reactions occur with the same velocity, a balanced state, equilibrium, is reached. Equations [1.1] and [1.2] are represented as a reversible reaction [1.3] complying with what is called the “law of mass action”:

A + B ↔ C + D [1.3]
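To make the law of mass action concrete, here is a minimal numerical sketch (not taken from the book; the rate constants and initial concentrations are arbitrary illustrative values). It integrates the mass-action rate equations of the closed reversible reaction [1.3] with an explicit Euler scheme and checks that the ratio [C][D]/[A][B] settles at the fixed value kf/kr:

# Closed reversible reaction A + B <-> C + D under mass-action kinetics.
# kf, kr and the initial concentrations are illustrative, not from the book.
kf, kr = 2.0, 0.5                   # forward and reverse rate constants
A, B, C, D = 1.0, 1.5, 0.0, 0.0     # initial concentrations (arbitrary units)
dt = 1e-4                           # time step of the explicit Euler integration

for _ in range(200000):             # integrate over 20 time units
    rate = kf * A * B - kr * C * D  # net forward rate of reaction [1.3]
    A -= rate * dt
    B -= rate * dt
    C += rate * dt
    D += rate * dt

print("[C][D]/[A][B] =", C * D / (A * B))   # approaches 4.0
print("kf/kr =", kf / kr)                   # 4.0

After enough steps the printed ratio approaches kf/kr, the fixed value mentioned above; a variant with outflow terms, corresponding to the open system discussed next, is sketched after Figure 1.1 below.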

When a set-up allows the outflow of C and D, the system becomes open. Conditions can be created for the system to attain a state in which the concentrations [A], [B], [C] and [D] remain constant, namely d[A]/dt = d[B]/dt = d[C]/dt = d[D]/dt = 0. This state is called the stationary non-equilibrium state. Reaction [1.2] then does not logically operate. Reaction [1.3] may not correspond to an actual reaction but to the final outcome of a number of intermediate reactions. The law of mass action must only be applied to actual reactions. When the situation is described by the two reactions [1.4] and [1.5],

A + B ↔ G [1.4]

G + B ↔ C + D [1.5]

G is described as a catalyst. The situation described by reactions [1.4] and [1.5] is modeled by the system shown in Figure 1.1.

Figure 1.1. Dynamics of reactions with the intermediary formation of a catalyst
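As a rough numerical counterpart of Figure 1.1 (again only a sketch: the rate constants, the outflow rate and the idealization that the reservoirs hold [A] and [B] constant are assumptions, not values or conditions given in the book), the open catalytic scheme [1.4]–[1.5] can be integrated until it settles into a stationary non-equilibrium state:

# Open version of the catalytic scheme [1.4]-[1.5]: the environment keeps A and B
# at fixed concentrations and drains the products C and D away.
# All numerical values below are illustrative assumptions.
k1, k1r = 1.0, 0.5     # forward/reverse constants of A + B <-> G
k2, k2r = 1.0, 0.2     # forward/reverse constants of G + B <-> C + D
drain = 1.0            # outflow rate constant applied to C and D
A, B = 1.0, 1.0        # held constant by the environment (open system)
G, C, D = 0.0, 0.0, 0.0
dt = 1e-3

for _ in range(40000):                 # 40 time units, explicit Euler
    r1 = k1 * A * B - k1r * G          # net rate of reaction [1.4]
    r2 = k2 * G * B - k2r * C * D      # net rate of reaction [1.5]
    G += (r1 - r2) * dt
    C += (r2 - drain * C) * dt
    D += (r2 - drain * D) * dt

print("G = %.3f, C = %.3f, D = %.3f, r1 = %.3f, r2 = %.3f" % (G, C, D, r1, r2))
# The concentrations stop changing (d[G]/dt = d[C]/dt = d[D]/dt = 0) while r1 and
# r2 stay positive: matter keeps flowing through the scheme, which is the
# signature of a stationary non-equilibrium state rather than of equilibrium.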

D is drained away, as the targeted product is C. Inside a fluid system, the diffusion of reagents and products and the chemical reactions are the two main phenomena that take place. A variety of irregularities (inhomogeneous distribution of chemical agents and of physical parameters such as the diffusibilities of chemicals, temperature and pressure) and random disturbances such as Brownian motion and statistical fluctuations occur in the system and may give rise to unstable equilibria leading to new physico-chemical configurations in terms of space and time. An unstable equilibrium is not a condition which occurs naturally; it usually requires some other, artificial interference.

Three salient features characterize the complex behaviors of chemical systems according to Gregoire Nicolis and Ilya Prigogine (Nicolis and Prigogine 1989).


1.3.1.3.1. Non-linearity

All phenomena governed by nonlinear equations are highly sensitive to initial conditions (SIC). That means that a small change in the initial values of their variables can dramatically change their time evolutions. The nonlinear effects of chemical reactions (the presence of the reaction product having a feedback action on its “cause”) are comparatively rare in the inorganic world, but molecular biology has discovered that they are virtually the rule so far as living systems are concerned. Autocatalysis (the presence of X accelerates its own synthesis), auto-inhibition (the presence of X blocks the catalysis needed to synthesize it) and cross-catalysis (two products belonging to two different reaction chains activate each other’s synthesis) provide the classical regulation mechanisms guaranteeing the coherence of the metabolic function. In addition, reaction rates are not always linear functions of the concentrations of the chemical agents involved. Biological systems have a past. Their constitutive molecules are the result of an evolution; they have been selected to take part in the autocatalytic mechanisms that generate very specific forms of organization processes.

1.3.1.3.2. Self-organization

Far-from-equilibrium ordinary systems such as chemical reactions can exhibit self-organization phenomena under given boundary conditions. Three modes of self-organization of matter generate complex behaviors:

– Bistability and hysteresis. Depending on whether some parameter λ is increased or decreased, the specific evolution paths are different. They depend on the system’s past history: this is called hysteresis. Within a certain range [λ1, λ2] of this parameter, the system can be in two stable states depending on the initial conditions. When the upper or lower limit of this value range is reached, the system can switch from one state to the other.


– Oscillations (periodic and non-periodic). The BZ (Belousov–Zhabotinsky) reaction, discovered by Boris Belousov in 1951, serves as a classical example of non-equilibrium thermodynamics resulting in the establishment of a nonlinear chemical oscillator. It is thought to involve 18 different steps. It proves that reactions do not have to be governed by equilibrium thermodynamic behavior. Oscillations can arise if the system, in a macroscopic medium, is sufficiently far from equilibrium. The BZ reaction operates with the Ce3+/Ce4+ couple as a catalyst and citric acid as a reductant. Ions other than cerium, such as iron, copper, etc., and other reductants can be used to produce oscillating reactions.

– Spatial patterns. When reactions take place in inhomogeneous phases, propagating regular patterns can be observed. The reaction producing ferroin, a complex of phenanthroline and iron, when implemented in a Petri dish, results in the formation of colored spots, growing into a series of expanding concentric rings or spirals depending on temperature. (A Petri dish is a shallow cylindrical glass dish named after the German scientist Julius Petri and used by biologists to culture cells.) These two features have been mathematically investigated by Alan Turing in a seminal paper which is considered the foundation of modern morphogenesis (Turing 1952). He modeled the dynamics of morphogenesis reactions by first-order linear differential equations representing the time evolution of chemical reactions. Different behavioral patterns, either stationary or oscillatory, show up as a function of the parameter values in the differential equations.

1.3.1.3.3. Chaos

Deterministic chaos follows deterministic rules and is characterized by long-term unpredictability arising from an extreme sensitivity to initial conditions. Such behavior may be undesirable, particularly for monitoring processes dependent on temporal regulation.
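A minimal sketch of this sensitivity (our own illustration, not the book’s; it borrows the quadratic iterator that Chapter 3 applies to animal and plant populations, with an arbitrary parameter value in the chaotic regime) iterates the same deterministic rule from two initial conditions differing by one part in a billion:

# Quadratic (logistic) iterator x -> r * x * (1 - x); r = 4.0 is an illustrative
# value for which the map is chaotic. Two trajectories start almost together.
r = 4.0
x, y = 0.400000000, 0.400000001    # initial conditions differing by 1e-9

for n in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print("step %2d: x = %.6f  y = %.6f  |x - y| = %.3e" % (n, x, y, abs(x - y)))
# After roughly 30 iterations the two trajectories are completely decorrelated:
# the rule is perfectly deterministic, yet the long-term behavior cannot be
# predicted from an imperfectly known initial condition.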


On the contrary, a chaotic system can be viewed as a virtually unlimited reservoir of periodic oscillations which may be accessed when appropriate feedback is applied to one or more of the controllable system parameters in order to stabilize it. Non-equilibrium enables a system to avoid chaos and to transform part of the energy supplied by the environment into an ordered behavior of a new type, the dissipative structure: symmetry breaking, multiple choices and correlations of macroscopic range. This type of structure has often been ascribed in the past only to biological systems. As a matter of fact, it appears to be more common in nature than thought.

All of these features led Prigogine to dwell on our innate inability to forecast the future of our ecosystem in spite of the fact that we know the elementary laws of nature. When more than two entities are involved, the equations modeling their interactions are often not linear and their computed solutions are highly sensitive to initial conditions, so that forecasts depend on the initial conditions chosen. “Rationality can no longer be identified with ‘certainty’, nor probability with ignorance, as has been the case in classical science … We begin therefore to be able to spell out the basic message of the second law (of thermodynamics). This message is that we are living in a world of unstable dynamical systems” (Prigogine 1987). In other words, “the future is not included in the present”. Uncertainty is the very fabric of our human condition.

1.3.2. The emergence of a new science of mind

Human beings are inclusive actors of many systems, either on their own or interfaced to hardware and/or software components. Their behavior, either rational or emotional, brings a special contribution to the complexity of the whole system. It is outside the scope of this section to tackle this issue on the basis of psychology and sociology. At the start of the 21st Century, biology and neuroscience have reached a point where they have brought about solid results for helping understand human attitudes.


Our purpose is to give a survey of the main results established by research in biology and neuroscience, after having elaborated on some milestones about how emotion has been considered in different disciplines. This focus on emotion is justified by contemporary views along the evolutionary psychology spectrum, positing that both basic emotions and social emotions evolved to motivate social behaviors. Emotions either are intense feelings directed at someone or something, or refer to mental states not directed at anything, such as anxiety, depression and annoyance. Current research suggests that emotion is an essential driver of human decision-making and planning processes. The word “emotion” dates back to the 16th Century, when it was adapted from the French verb “émouvoir”, which means “to stir up”.

1.3.2.1. Darwin and Keynes on emotions

Perspectives on emotions from evolutionary theory were initiated by Charles Darwin’s 1872 book The Expression of the Emotions in Man and Animals. Darwin argued that emotions serve a purpose for humans in communication and in aiding their survival. According to Darwin, emotions evolved via natural selection and therefore have cross-cultural counterparts. He also detailed the virtues of experiencing emotions and the parallel experiences that take place in animals.

John Maynard Keynes used the term “animal spirits” in his 1936 book The General Theory of Employment, Interest and Money (Keynes 1936) to describe the instincts and emotions that influence and guide human behavior. This human behavior is assessed in terms of “consumer confidence” and “trust”, which are both assumed to be produced by “animal spirits”. This concept of “animal spirits” is still a subject matter of interest, as George Akerlof and Robert Shiller’s book Animal Spirits: How Human Psychology Drives the Economy and Why it Matters for Global Capitalism (Akerlof and Shiller 2009) proves.


1.3.2.2. Neurosciences on emotions

The central issue is the distinction between mind and body. In spite of the fact that a lot of philosophers (Thomas Aquinas, Nicolas Malebranche, Benedict Spinoza, John Stuart Mill, Franz Brentano, among others) studied this subject matter, the Cartesian dualism introduced by René Descartes in the 17th Century (Meditationes de Prima Philosophia – 1641) continued to dominate the philosophy of mind for most of the 18th and 19th Centuries. Even nowadays, in some contexts such as data processing, mathematicians are inclined to model the brain’s functional capabilities as software programs made of set instructions, without imagining that some disruptive physiological (re)actions can branch out of their linear execution. Descartes proposed that whatever is physical is spatial and whatever is mental is non-spatial and unextended. Mind for Descartes is a “thinking thing” and a thought is a mental state. Increased potential in neuro-imaging has allowed investigations into the various parts of the brain and their interactions.

The delineation between mind on one side and brain and body on the other side is strongly contested by Antonio Damasio, a neuroscientist, in his book Descartes’ Error (Damasio 2006). He thinks that the famous sentence “cogito, ergo sum”, published in Principia Philosophiae (1644), has pushed biologists up to the present time to model biological processes as time-based mechanisms decoupled from our consciousness. In Descartes’ wording, res cogitans, our spirit, and res extensa, our functional organs, are two disconnected subsystems. Antonio Damasio is convinced that emotion, reason and brain are closely interrelated, which makes the analysis and understanding of human behaviors a conundrum. In the introduction to his book, Antonio Damasio writes (page XXIII): “the strategies of human reason probably did not develop, in either evolution or any single individual, without the guiding force of the mechanisms of biological regulation, of which emotion and feeling are notable expressions. Moreover, even after reasoning strategies become established in the formative years, their effective deployment probably depends, to a considerable extent, on a continued ability to
experience feelings”. This is one of the main tenets that Damasio unfolds in his book: the body and the mind are not independent, but are closely correlated through the brain. The brain and the body are indissolubly integrated to generate mental states.

Let us elaborate on two key words mentioned above, i.e. emotion and feeling. Antonio Damasio (p. 134) distinguishes between primary and secondary emotions. Primary emotions are innate, wired in at birth. Secondary emotions “occur once we begin experiencing feelings and forming systematic connections between categories of objects and situations, on the one hand, and primary emotions, on the other”. Secondary emotions begin with the conscious consideration entertained about a person or situation, and emotions develop as mental images in a thought process. They cause changes in the body state, resulting in an “emotional body state”. Feeling is the experiencing of those changes. Some feelings have a major impact on cognitive processes. Feelings based on universal emotions are happiness, sadness, anger, fear and disgust. The relationships between emotion and feeling are summarized by Antonio Damasio in this quotation: “emotion and feeling rely on two basic processes: 1) the view of a certain body state juxtaposed to the collection of triggering and evaluative images which cause the body state; 2) a particular style and level of efficiency of cognitive process which accompanies the events described in (1), but is operated in parallel” (p. 162).

Feelings may change beliefs and, as a result, may also change attitudes. On the basis of the theories mentioned above, events from emotions to attitudes follow this orderly sequence: emotions cause specific body states, these body states generate feelings through cognitive processes, and feelings have an impact on beliefs and attitudes. Among the feelings based on universal emotions, mutual fear can be considered one of the major psychological factors affecting relationships in collaborative networked environments. Fostering trust appears to be a relevant countermeasure to mutual fear. A general argumentative review of the
biology of emotional states is given in Kandel's panoramic autobiography (Kandel 2007).

At this point, it seems relevant to recall that the brain contains a hundred billion nerve cells interconnected by a hundred trillion links and that it is not an independent actor. It is part of an extended system reaching out to permeate, influence and be influenced by every entity of our body system. All our physical and intellectual activities are directly or indirectly controlled by the action of the nervous system, of which the brain is the central part. The brain receives a constant flow of information from our body and the outside world via sensory nerves and blood vessels feeding it with real-time data. When discussing the brain, we are faced with a self-referencing paradox: we think about our brain with our brain! We are caught in a conundrum that is difficult to escape and that sheds light on the partiality of our own current and future in-depth knowledge of brain processes.

Other relevant contributors to the study of emotion are worth mentioning, such as Joseph Le Doux (The Emotional Brain) (Le Doux 1996), Derek Denton (The Primordial Emotions: The Dawning of Consciousness) (Denton 2006) and Elaine Fox (Emotion Science: An Integration of Cognitive and Neuroscientific Approaches) (Fox 2008). The last two references show that the understanding of emotion, which is the very fabric of our human lives, still motivates scholarly research without the hope of getting a full explanation in the future, considering the intricate pattern of the structure of the brain and the influences of the basic physiology and stress system of the human body.

1.4. Systems thinking and structuralism

A piecemeal approach to problems within firms and in local and national governments is no longer sustainable when firms and nations compete or collaborate to compete. This is because technology, firms and organizations have been made complex by the number of interacting stakeholders and the variety of techniques involved, and
because decisions incur more far-reaching consequences in terms of space and time. Systems thinking has turned out to be a thought tool for understanding issues in a wide variety of knowledge realms outside the technical world. It is relevant to describe in detail systems thinking and structuralism, which go hand in hand to deliver a structured view of interacting entities.

1.4.1. Systems thinking

Bernard Paulré is an economist who has investigated complexity in economic and social contexts. We have chosen to elaborate on his description of systems thinking to show that this mindset has disseminated far beyond the technical sphere. Systems thinking or "systemics" has been defined by Bernard Paulré (Paulré 1989) as follows: "It is the study of laws, operational modes and evolution principles of organized wholes, whatever their nature (social, biological, technical …)". This study is first a matter of analyzing constituent elements of these complex wholes "from the scrutiny of at least two types of interaction: on one hand those linking the elements belonging to the organized whole (and considered as such falling under its full control) and on the other hand those that associate this very whole, as it is globally perceived, and its environment".

At first sight, this definition looks complete and well articulated. However, one word draws attention, i.e. law.

– A first question raised is: what is a law? A law gives the quantified relation between a cause and an effect, according to the French scientist Claude Bernard. In a more general way, a law states a correlation between phenomena and is verified by quantitative or qualitative experiments. Direct or indirect validation is a critical step.
– A second question raised is: are there laws common to all organized systems?

As a matter of fact, the system approach is implemented in two very different modeling frameworks to tackle complexity. Within the first framework, it is used to design a real or virtual object by combining "simple" functional systems into a "complex" system. It is a bottom-up approach (composition). The purpose is to easily keep track of the different design steps for quality control and maintenance operations. Within the second framework, the purpose is to understand how an existing "complex" entity operates by "decomposing" it into simpler elements with the idea of monitoring it. The first situation is widely found in the technical world, where the term "system integration" is nowadays in common use.

Another item of discussion is the very nature of the "system approach". Is it a theory or a tool for coming to terms with complexity? A theory is supposed to rely on laws. von Bertalanffy's new ideas about systems were published for the first time in 1949 under the title Zu einer allgemeinen Systemlehre. Lehre in German bears a significance different from théorie in French or theory in English. Lehre has a double meaning, namely "practice" (Erfahrung die man gemacht hat) and doctrine, body of knowledge (die Lehre des Aristoteles). The term Lehre therefore covers two contrasting views, namely actual know-how gained from experience and an abstract structure of concepts. These two views can be made compatible by considering that the abstract structure of concepts is built on know-how gained from experience.

A telling example is cybernetics. The experience gained from mechanical automata developed in the 18th Century was applied in the 20th Century to devices called servo-mechanisms when electromechanical and later on electronic devices became available. The principles of servo-mechanisms were converted into a doctrine called "cybernetics", the art of governing systems. This doctrine came to be applied in social, economic and management sciences as a blueprint to design controlling systems. The title of a book by Herbert Stachowiak (Stachowiak 1969) illustrates that state of mind: Denken und Erkennen im kybernetischen Modell (Thinking and understanding by cybernetic modeling).

In order to complete the scope of disciplines and people involved in the development of systems thinking after World War II, it is relevant to mention Kenneth Boulding's hierarchy of system complexity (Boulding 1956). His purpose was to show that all organic and constructed things could be interpreted as layered constructions of building blocks, to make complexity more easily interpretable in terms of explicability, transparency and provability. It is worth noting that this type of hierarchical architecture has become common in the field of information systems and telecommunication services. Two adjacent levels in the hierarchy differ not merely in their degree of diversity or variability but in the appearance of wholly specific system properties. At the same time, Kenneth Boulding asserts the cumulative nature of the hierarchy, so that each level incorporates the elements or components (sub-systems, sub-subsystems, etc.) of all the lower levels, while a new type of emergent property appears at that level.

Boulding identified nine levels of complexity. As you move up the hierarchy, complexity increases, in the sense that observers find it harder to predict what will happen. Boulding's levels 1–5 are the aggregation of what can be called the "rational–technical" level; this subdivision is often called the mechanistic level. Level 7 is roughly a personal level, level 8 includes the wider environment level and level 9 includes the spiritual side to our lives. Kenneth Boulding's nine levels of complexity are described as follows:

– Level 1: frameworks. The static structural description of the system is produced at this level, i.e. its building blocks; for example, the anatomy of the universe or the solar system.

– Level 2: clockworks. The dynamic system with predetermined, necessary motion is presented at this level. Unlike level 1, the state of a level 2 system changes over time.

– Level 3: control system. The control mechanism or cybernetic system describes how the system behavior is regulated to remain stable in line with externally defined targets and to avoid explosive loss of control, as in a thermostat. The major difference from level 2
systems is the information flow between the sensors, regulator and actuator.

– Level 4: open systems. These exchange matter and energy with their environment to self-maintain their operations. This is the level at which life begins to differentiate from non-life: it may be called the level of the cell.

– Level 5: blueprinted growth systems. This is the genetic-societal level typified by the plant. It is the empirical world of the botanist, where reproduction takes place not through a process of duplication but by producing seeds containing pre-programmed instructions for development. They differ from level 4 systems, which reproduce by duplication (parthenogenesis).

– Level 6: internal image systems. The essential feature of level 6 systems is the availability of a detailed awareness of the environment captured by discriminating information sensors. All pieces of information gathered are then aggregated into a knowledge structure. The capabilities of the previous levels are limited to capturing data without transforming it into significant knowledge.

– Level 7: symbol-processing systems. Systems at this level exhibit not only full awareness of the environment but also self-consciousness. This property is linked to the capability of combining symbols and ideas into abstract informational concepts. These systems consist of self-conscious human speakers.

– Level 8: multicephalous systems. This level is intended to describe systems with several "brains", i.e. a group of individuals acting collectively. These types of systems possess shared systems of meaning – for example, systems of law or culture, or codes of collaborative practice – that no individual human being seems to possess. All human organizations have the characteristics of this level.

– Level 9: system of unknown complexity. This level is intended to reflect the possibility that some new level of complexity not yet imagined may emerge.

It is worth recalling here the interrelations between signs (symbols), data, information, knowledge and opinions to grasp an unbiased understanding of the contributions of Kenneth Boulding’s hierarchy of complexity. These interrelations form a five-level hierarchy, as shown in Figure 1.2.

Figure 1.2. Relationship between signs, data, information, knowledge and opinions

The basic element that can be captured, transmitted, memorized and processed is a sign (for instance, of alphanumerical nature). In order to convert concatenated signs into data, they have to be ordered according to a morphological lexicon and syntax that is able to produce significances. When interpreted as a function of the context, these significances are turned into meanings that are pieces of information. They are in fact data structured on the basis of the factual relationships with the context taken into account to convert data into pieces of information.
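As an illustration only, the following sketch (in Python, with invented field names and an invented context) shows how the same string of signs is first ordered into data and only then, once a context is supplied, interpreted as a piece of information. It is a simple illustration of the layering described above, not a prescription.

```python
raw_signs = "20190415;ORD-7;12"          # a string of alphanumeric signs

def to_data(signs: str) -> dict:
    """Order the signs according to a (here, positional) lexicon and syntax."""
    date, order_id, quantity = signs.split(";")
    return {"date": date, "order_id": order_id, "quantity": int(quantity)}

def to_information(data: dict, context: dict) -> str:
    """Interpret the data as a function of the context to produce a meaning."""
    threshold = context["reorder_threshold"]
    status = "below the reorder threshold" if data["quantity"] < threshold else "sufficient"
    return f"Order {data['order_id']} of {data['date']}: quantity {data['quantity']} is {status}."

data = to_data(raw_signs)
info = to_information(data, context={"reorder_threshold": 20})
print(info)
```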

When a specific piece of information is "interconnected" with other pieces of information, such relations create a network of a sort which can be qualified as knowledge. In general, unconnected pieces of information do not give access to understanding a situation. At this level, a subjective dimension comes into play, i.e. individuals' opinions that determine human behaviors. Opinions are based on beliefs.

1.4.2. Structuralism

Structuralism has been given a wide range of meanings in the course of the 20th Century. It has been called upon by many fields of knowledge (linguistics, literary studies, sociology, psychology, anthropology) for delivering a rationalized understanding of their substance. Structuralism is generally considered to derive its organizational principles from the early 20th-Century work of Saussure, the founder of structural linguistics. He proposed a "scientific" model of language, one understood as a closed system of elements and rules that account for the production and the social communication of meaning. Since language is the foremost instance of social sign systems in general, the structural account of language might serve as a model for understanding such systems. In general terms, there are three characteristics by which structures differ from mere aggregates:

– the nature of an element depends, at least in part, on its place in the structure, i.e. its relations with other elements;

– a structure is purpose-made for fulfilling an objective through function capabilities carried out by its elements;

– the structure is not static, but allows for re-organization by changing the properties of and/or the relations between its elements to adjust to new requirements, internal or external to the structure.

A particular entity can be understood only in the context of the whole system it is embedded in. Structuralism holds that particular elements have no absolute meaning or value. Their added values are relative to the characteristics of other elements. Everything makes sense only in relation to something else. The properties of an element have to
be studied from both points of view, i.e. its intrinsic processing capabilities and the ways in which these processing capabilities are triggered by signals received from its surroundings. An element is supposed to fulfill a function of a sort and deliver an output to another element. This output can depend on the nature of the incoming signal. When an element receives some incoming interference signal (by interference is meant a signal outside the set of signals considered by design), the output may be changed and trigger the emergence of a new configuration of the whole system. Then, the system may run out of control and undergo self-reorganization.

It is worth noting that:

– Systems thinking assumes that systems have dominant rules that can be used to calculate potential stable states, whereas complexity emphasizes that systems tend to defy calculated stable states and make uncertain evolutions a certainty. The dynamics of many systems is path-dependent due to contingencies.

– Systems thinking, under the influence of the cybernetics mindset, holds that systems have "control systems" of a sort that guide systems' operations, whereas complexity recognizes the possibility of self-(re)organization leading to the occurrence of unexpected configurations to cope with contingent constraints. This capability of self-(re)organization is the main feature of complex systems.

– Systems thinking suggests that elements in a system can be understood as isolated elements, whereas complexity forces us to see the interdependence of the nature/meaning of individual elements and the context in which they are embedded (holism vs. reductionism and global vs. local).

– Systems thinking assumes that systems operate rational processes and yield predictable results, albeit through complicated means, whereas complexity recognizes that solutions are arrived at via dynamic processes that are not likely to result in a conclusive situation. The objective horizon is always moving forward.

– Systems thinking hypothesizes that systems change their structures in accordance with rule-based learning, whereas complexity recognizes that change is perpetual, so organizational learning is a constant endeavor to survive.

Box 1.1. Notes

P. Haynes (Managing Complexity in the Public Services) (Haynes 2003) and P. Cilliers (Complexity and Postmodernism: Understanding Complex Systems) (Cillers 1998) elaborate on complexity from the point of view of social sciences, which turn out to be hybrid systems referring to cognitive, biological and economic phenomena.

1.4.3. Systems modeling

1.4.3.1. "Traditional" practice

Modeling is the answer humans have found to cope with understanding the ecosystems we live in. Galileo and Newton are considered as the founding fathers of modern physics: their achievements were to construct models, i.e. representations of a part of the world described with a mathematical language allowing for computations to be carried out. Today, the modeling and system approaches go hand in hand for tackling the increasing complexity of the societal ecosystem in which we are embedded. We are exposed to a deluge of new faceless digital technologies. We are not yet fully aware of the impacts of what is called big data on our behaviors and private lives. For humans and businesses, the lakes of data received from their environments have to be filtered and processed accordingly to help them survive.

A model is a constructed representation of a part of the world. Representations are derived from our sensory perceptions of our environments. However, not all parts of our environments are accessible through our five senses (sight, hearing, smell, touch and taste). Some parts of the world are hidden to our sensory perception and their perceived aspects are fuzzy and intangible. Microphysics as well as social behaviors come into this category. When interacting with other people, we have no access to the intentions and ideas in their minds. A model is necessarily an abstraction. It captures what we estimate to be the relevant characteristics of the real world from a particular perspective and ignores others. Therefore, the same object and the same entity can be represented by different models.

Models are intended to be manipulated, and the results can be fed back to the real world through interpretation. They are supposed to help find solutions to real-world problems. Many a model has been worked out to meet the requirements of its purpose. A variety of models with respect to their purposes can be listed:

– heuristic model: this model is intended to mimic the behaviors of a real-world process as it is perceived by outside observers. The actual courses of action at work may be very different from those imagined when working out the model. The real world is a black box with controlled inputs and received outputs.

– functional model: this model is developed on the basis of the laws of nature and the formal and behavioral logics encapsulated in a theory. It results in deriving equations, allowing the dynamical study of real-world processes.

– cybernetics model: cybernetics is in fact a modeling approach to the delineation in the world between human motivation and purpose on the one hand, and the construction and control of an artifact by human actors to fulfill tasks on their behalf on the other. There is a transfer of a sort of human responsibility for operational courses of action to an artifact agent whose instructions for action are remotely controlled by humans.

1.4.3.2. Object-oriented modeling for information processing

Today, ubiquitous information technology has been relying on software development for the last 50 years. One of the fundamental problems in software development is: how does someone model the real world so that computation of a sort can be carried out? We think that all the efforts made to tackle the issues of software modeling can benefit other fields of knowledge where explicit or implicit modeling is the cornerstone of what can become knowledgeable.

In the 1950s and 1960s, the focus of software system developers was on algorithms. The main concerns at that time were solving computation problems, designing efficient algorithms and controlling the complexity of computation. The models used were computation-oriented
and the decomposition of complex systems was primarily based on control flow. In the 1970s and 1980s, different types of software systems emerged to address the complexity of the data to process. These systems were centered on data items and data flows, with computation becoming less demanding. The models used were data-oriented and the decomposition of complex systems was primarily based on data flows.

What is called object-oriented modeling is a balanced view of the computation and data features of software systems. The decomposition of complex systems is based on the structure of objects and their relationships. This approach dates back to the late 1960s. It has a narrow spine but as a matter of fact offers a wide embrace. It can be used as a blueprint for a modeling mindset in different domains where the real world has to be analyzed and modeled to yield design patterns and frameworks. We discuss here the fundamental concepts and the principles of object-oriented modeling (Booch 1994).

The interpretation of the real world is translated into its representation by objects and classes (sets of objects). A class serves as a template for creating objects (instances of a class). The state of an object is characterized by its attributes and their values, and its behavior is defined by a set of methods (operations) which manipulate the states. Objects communicate with one another by passing messages. A message specifies the method of the receiver to be invoked. Complexity is dealt with by decomposing a system into a set of highly cohesive (functional relatedness) but loosely coupled (low interdependency) modules. With the mindset familiar in the realm of information technology (IT), the interactions between objects can be understood as client–server relationships, i.e. services are exchanged between objects alternately acting as emitters and receivers of services.

In order to maintain a stable overall architectural structure and at the same time to allow for changing the inner organization of an object, a succinct and precise description of the object's functional capabilities, known as its "contractual" interface, should be made available to other objects. That technique, called "encapsulation", separates the "public" description of the object's inputs and outputs from how the object's functional capabilities are implemented.
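The following sketch (in Python, with invented class and method names) illustrates these concepts: a class as a template, objects as instances holding state (attributes) and behavior (methods), message passing between objects, and a "contractual" interface hiding the internal state. It is a minimal illustration rather than a recommended design.

```python
class Account:
    """Template (class) from which account objects are instantiated."""
    def __init__(self, owner: str, balance: float = 0.0):
        self._owner = owner        # encapsulated state (private by convention)
        self._balance = balance

    # "Contractual" interface: the only operations other objects may invoke.
    def deposit(self, amount: float) -> None:
        self._balance += amount

    def balance(self) -> float:
        return self._balance


class Customer:
    def __init__(self, name: str, account: Account):
        self.name = name
        self.account = account

    def receive_salary(self, amount: float) -> None:
        # Passing a message: the receiver object and the method to be invoked.
        self.account.deposit(amount)


acct = Account("J. Smith")            # instance of the class Account
customer = Customer("J. Smith", acct)
customer.receive_salary(2500.0)
print(acct.balance())                 # 2500.0, obtained only through the public interface
```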

We will examine whether the concepts used for describing the real world in object-oriented programming can be used to produce a reference framework to assess models developed by different knowledge disciplines. The BDI agent model of human organizations will be vetted accordingly.

1.4.3.3. Artificial intelligence and system modeling

1.4.3.3.1. Current context

Artificial intelligence (AI) is expected to impact system modeling because it is more and more likely that "virtual" agents endowed with AI will interact with human agents in a wide spectrum of systems. These "virtual" agents can be characterized as autonomous agents that, on behalf of their designers, will take in information about their environments, and will be capable of making choices and decisions, sensing and responding to their environments and even modifying their objectives. That situation will have an influence on system modeling, design and operations. It will add complexity to stakeholders' understanding of such "black-boxed" agents.

Let us first elaborate on what AI is. AI is subject to a strong revival in interest because enhanced computing facilities have become available, and data lakes are supposed to foster the practice of "deep learning". In the 1980s, a great deal of effort was devoted to this field of research but the outcome was rather disappointing. As a result, less attention was paid to it. In 1995, Stuart Russell and Peter Norvig (Russell and Norvig 1995) described AI as "the designing and building of intelligent agents that receive percepts from the environment and take actions that affect the environment". This definition is still fully valid.

The most critical difference between AI and general-purpose software is "take actions". That means that AI enables software systems to respond, on their own, to signals from the world at large that
programmers do not control and therefore cannot anticipate. The fastest-growing category of AI is machine learning, namely the ability of software to improve its own activity by analyzing interactions with its environment. Applications of AI and machine learning could result in new and unexpected forms of interconnectedness between different domains of the economic realm based on the use of previously unrelated data sources. In particular, the unregulated relationships between financial markets and different sectors of the "real" economy could reveal themselves to be highly damaging for these sectors.

1.4.3.3.2. Brief historical overview of AI development

Development from the 1960s to the 1980s

Artificial intelligence (AI) research can be traced back to the 1960s. From its inception, the idea was to use predicate calculus. Therefore, the first papers on AI principally dealt with automatic theorem-proving, a learning draughts-playing system and GPS (General Problem Solver) (Feigenbaum and Feldman 1963). In 1968, M. Minsky edited a book (Minsky 1968) in which research on semantic networks, natural language processing, recognition of geometrical patterns and so on was discussed in detail. At the same time, a specific programming language, LISP, was developed and a book produced by its creators (McCarthy et al. 1962). LISP is a general-purpose language, enabling symbol manipulation procedures to be expressed in a simple way. Other programming languages based on LISP but with added and improved facilities were developed in the 1970s; these include INTER-LISP, PLANNER, CONNIVER and POP-2.

Applications of AI were first qualified as "expert systems" because they incorporated the expertise or competencies of one or many experts. Their architecture was structured in a way that separates the knowledge of the experts from the detailed traits of a case under study. The stored knowledge is applied to the case and, from
deductive reasoning, advice can be delivered to support decision-making and data analysis. An important field of application was in health care, to help diagnose diseases.

The main components of an expert system are the knowledge base, the inference engine, the explanation subsystem and the knowledge acquisition subsystem.

– The knowledge base contains the domain-specific knowledge acquired from experts and the particular data captured about the case under study.

– The inference engine is in charge of dynamically applying the domain-specific knowledge to the case being processed.

– The explanation subsystem is the interface allowing users to enter requests about how a conclusion has been reached or why a question is being asked. This facility gives users access to the various steps of the reasoning deployment.

– The knowledge acquisition subsystem enables the capture of knowledge from the expert or generates general rules from data derived from past cases by automated induction.

A key issue is the representation of knowledge. Different forms have different domains of suitability. Rule-based representations are common in many expert systems, particularly in small ones. Much knowledge is declarative rather than procedural. It can be cast in the mould of propositions linked together by logical relationships. Logic-supported rules are known as if-then rules (if antecedent condition, then consequent) associated with the logical connectives AND, OR, NOT. This structure may be conveniently combined with the representation of attributes and values of objects (A-V pairs) within the very rules. For large systems, an Object-Attribute-Value (O-A-V) representation appears to be a better strategy. Semantic networks portray the connections between objects and concepts. They are very flexible. They have a formal composition of nodes connected by directed links describing their relationships. Objects are nodes. They
may be characterized by attribute and value nodes. Has-a, is-a and is-a-value-of are standard names of links associated with object, attribute and value nodes. Some examples in Figure 1.3 show how flexible this concept is and how it makes it possible to represent inheritance hierarchies. If a class of objects has an attribute, any object which belongs to this class is characterized by the same attribute.

Figure 1.3. Examples of relations between objects, classes of objects and attributes
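As an illustration of such a network, the sketch below (in Python, with invented node names; it does not reproduce the actual content of Figure 1.3) represents is-a and has-a links and shows how attributes are inherited along the is-a hierarchy.

```python
is_a = {"canary": "bird", "bird": "animal"}                       # class hierarchy (is-a links)
has_a = {"bird": {"covering": "feathers"}, "animal": {"state": "alive"}}   # attributes (has-a links)

def attributes_of(node: str) -> dict:
    """Collect the attributes of a node, inheriting along the is-a links."""
    attrs = {}
    while node is not None:
        for attribute, value in has_a.get(node, {}).items():
            attrs.setdefault(attribute, value)      # attributes of nearer nodes take precedence
        node = is_a.get(node)
    return attrs

print(attributes_of("canary"))   # {'covering': 'feathers', 'state': 'alive'}
```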

It is supposed that experts rationally represent their knowledge as various interconnected concepts (frames). Frames consist of slots in which entries are made for a particular occurrence of the frame type. The ability of frames to hold default values for filling slots, to reference procedures and to be linked to one another by pointers provides a powerful tool for describing knowledge. Frames make it possible to switch to a different frame if an expected event does not materialize. The objective of Minsky's frame theory was to explain human thought patterns, and its application scope is more far-reaching than its focus on AI. A language called FRL (Frame Representation Language) has been developed; it is simply an extension of LISP. In order to illustrate the concept of a frame, Figure 1.4 captures the main features of a frame for a long-term loan.

Figure 1.4. A frame for a long-term loan
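The sketch below (in Python, with invented slot names; it does not reproduce the actual slots of Figure 1.4) illustrates the idea of a frame for a long-term loan: slots to be filled for a particular occurrence, default values, and an attached procedure that computes a slot on demand.

```python
loan_frame = {
    "type": "long-term loan",
    "slots": {
        "borrower": None,                      # to be filled for a particular occurrence
        "principal": None,
        "duration_years": 15,                  # default value
        "interest_rate": 0.04,                 # default value
        # Attached procedure: fills a slot from the values of other slots.
        "yearly_interest": lambda f: f["slots"]["principal"] * f["slots"]["interest_rate"],
    },
}

def fill(frame: dict, **values) -> dict:
    """Create a particular occurrence of the frame type and fill its slots."""
    instance = {"type": frame["type"], "slots": dict(frame["slots"])}
    instance["slots"].update(values)
    return instance

loan = fill(loan_frame, borrower="ACME Ltd", principal=100_000)
print(loan["slots"]["duration_years"])           # default slot value: 15
print(loan["slots"]["yearly_interest"](loan))    # attached procedure: 4000.0
```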

Different principles of inference can be used depending on whether reasoning is concerned with certainty or whether some degree of uncertainty is involved, as is often the case when complex thinking processes are associated with expertise. In the latter case, the techniques of reasoning with certainty and fuzzy set theory interact. Fuzzy set theory interprets predicates as denoting fuzzy sets. A fuzzy set is a set of objects of which some are definitely in the set, some are definitely not, and some are only probably or possibly within the set. The issue of modeling uncertain knowledge is addressed by the development of what is called neural networks.
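As a minimal illustration of graded membership, the following sketch (in Python, with an invented membership function) assigns to each object a degree of belonging between 0 (definitely not in the set) and 1 (definitely in the set) instead of a yes/no answer.

```python
def membership_tall(height_cm: float) -> float:
    """Degree to which a person of the given height belongs to the fuzzy set 'tall'."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0        # linear transition between the two bounds

for h in (155, 172, 185, 195):
    print(h, round(membership_tall(h), 2))  # 0.0, 0.4, 0.83, 1.0
```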

Inference control strategies ensure that the inference engine carries out reasoning in an exhaustive and efficient manner. Systems which attempt to prove selected goals by establishing the facts needed to prove them, these facts themselves becoming secondary goals, are backward-chaining or goal-driven systems. This approach contrasts with forward-chaining systems, which attempt to derive a conclusion from existing established facts. All major expert systems use backward chaining. Backward-chaining systems postulate a conclusion and then determine whether or not that conclusion is true. For instance, if the name of an illness is postulated, then a rule which provides that illness as a conclusion is sought. If a rule is found, it is checked whether all the preconditions of that rule are fulfilled, i.e. are compatible with the symptoms. If not, then another hypothesis is adopted and the procedure is repeated until the chosen hypothesis holds. The state of the art of AI in terms of hardware and software by the mid-1980s is compiled in a book edited by Pierre Vandeginste (Vandeginste 1987).
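The following sketch (in Python, with invented rules, facts and symptom names) illustrates the backward-chaining strategy just described: a conclusion is postulated and the preconditions of a rule providing that conclusion are checked against the established facts, preconditions themselves being treated as secondary goals.

```python
rules = [
    {"if": ["fever", "cough"], "then": "flu"},
    {"if": ["flu", "wheezing"], "then": "bronchitis"},
]
facts = {"fever", "cough", "wheezing"}     # established facts (symptoms), reduced to symbols here

def prove(goal: str) -> bool:
    """Backward chaining: try to establish the goal from the facts and the rules."""
    if goal in facts:
        return True
    for rule in rules:
        if rule["then"] == goal and all(prove(condition) for condition in rule["if"]):
            return True
    return False

print(prove("bronchitis"))   # True: flu is first proven as a secondary goal, then wheezing
print(prove("measles"))      # False: no rule concludes it and it is not an established fact
```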

Development of neural networks

Neural networks (also called connectionist networks) gained attractiveness because of their supposed similarities with human information processing in the brain. Why supposed? Because we still have little understanding of how the brain of living animals actually works. The best approximation to highlight the neural network research that has been most influential would be to combine the papers edited by Anderson and Rosenfeld (1988) with the historical remarks of Rumelhart and McClelland (1986), Hinton and Anderson (1981), Nilsson (1965), Duda and Hart (1973) and Hecht-Nielsen (1990).

A neural network can serve as the knowledge base for an expert system that performs classification tasks. The major advantage of this approach is that the learning algorithms can rely on training examples and generate expert systems automatically. Any decision task is a candidate for a neural network expert system if training data is available. The availability of data lakes and computing power makes it possible to envision the day when short-lived expert systems for temporary tasks are constructed as customized simulation models of human information processing for individual use.

Briefly, a neural network model consists of a set of computational units (called cells) and a set of one-way data connections joining units. At certain times, a unit analyzes its input signals and delivers a computed signed number, called its activation, as its output. The activation signal is passed along connections to other units. Each connection has a signed number called a weight. This weight determines whether an activation traveling along a connection influences the receiving cell to produce a similar or a different activation output signal, according to the sign (+ or –) of the incoming signal weight. The aggregated level of all incoming signals received by a cell determines the output or activation delivered by this cell to other cells. An example is shown in Figure 1.5.

Figure 1.5. Activation (output) computed for a single cell

This type of cell is called semi-linear because its output depends on the linear weighted sum Si = Σj wij aj of the incoming activations aj from the cells uj connected to the cell ui, the wij being the connection weights. The activation function f( ) is usually nonlinear, bounded and piecewise differentiable, such as:

f(x) = +1 for x > 0, f(x) = 0 for x = 0 and f(x) = −1 for x < 0

or

f(x) = (1 + e^(−x))^(−1)

From a computational point of view, neural networks offer two salient features, namely learning and representation.

Learning

Machine learning refers to computer models that improve their performance significantly on the basis of their input data. Two techniques are used: supervised and unsupervised learning. In supervised learning, a "teacher" supplies additional data that gives an assessment of how well a program is performing during a training phase, and delivers recommendations to improve the performance of the learning algorithm. It is an iterative learning algorithm. The most common form of supervised learning is trying to duplicate human behavior specified by a set of training examples consisting of input data and the corresponding correct output.

Unsupervised learning is a one-shot algorithm. No performance evaluation is carried out, so that, without any knowledge of what constitutes a correct or an incorrect answer, these systems are used to construct groups of similar input patterns. This is known as clustering.

Practically speaking, machine learning in neural networks consists of adjusting connection weights through a learning algorithm, which is the critical mechanism of the whole system.
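The following sketch (in Python, with an invented training set and learning rate) illustrates a single semi-linear cell with the logistic activation function and a simple supervised weight-adjustment rule. It is only meant to make the mechanism tangible, not to reproduce any particular algorithm from the literature cited here.

```python
import math

def activation(x: float) -> float:
    """Bounded, nonlinear activation f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def cell_output(weights: list[float], inputs: list[float]) -> float:
    """Weighted sum of the incoming signals, passed through the activation function."""
    s = sum(w * a for w, a in zip(weights, inputs))
    return activation(s)

# Supervised learning: a "teacher" supplies the correct output for each input
# pattern and the connection weights are adjusted iteratively to reduce the error.
training_set = [([0.0, 1.0], 1.0), ([1.0, 0.0], 0.0)]
weights = [0.1, -0.1]
learning_rate = 0.5

for _ in range(200):
    for inputs, target in training_set:
        error = target - cell_output(weights, inputs)
        weights = [w + learning_rate * error * a for w, a in zip(weights, inputs)]

print([round(w, 2) for w in weights])
print(round(cell_output(weights, [0.0, 1.0]), 2))   # close to the target value 1.0
```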

Knowledge representation

The issue is to define how to record whatever has been learned. In connectionist models, knowledge representation includes the network description of cells (nodes), the connection weights between the cells, and the semantic interpretations attached to the cells and their activations. Once the learning process has been completed, it appears to be impossible for outside observers to trace back the various steps of reasoning in the form of a sequence of instructions, which is our "natural" way of analyzing our decision processes. This lack of interpretability or "auditability" of machine learning output when neural network techniques are implemented may result in a lack of trust in churned-out results. When important issues arise in risk management, such as data privacy, adherence to relevant protocols and cyber security, decision-makers have to be assured that applications do what they are intended to do. This uncertainty of a sort may appear in the future as a serious hurdle to the mass dissemination of neural network applications in critical contexts.

1.4.3.3.3. Current prospects

There are three main ways in which businesses or organizations can, and could in the near future, use AI to support their activities. AI is expected to play a pivotal role in what is called the digital transformation of our society.

Assisted intelligence

This helps actors be more efficient in performing their tasks. These tasks are clearly defined, rule-based and repeatable. They are found in clerical environments (billing, finance, regulatory compliance, receiving and processing customer orders, etc.) as well as in field operations (services to industries, maintenance and repair, etc.). AI-aided decision-making has been used for a long time for medical diagnosis on the basis of knowledge and experience captured from experts. AI-based packages are made more and more available to
simulate the potential successes of various scenarios and reduce the risks incurred. All of these tools are intended to lead to improvement in conventional business metrics such as labor productivity, revenues or margin per employee and average time to process completion. The staff in charge of marshaling and interpreting the data produced by AI packages must be properly skilled. This is not a minor issue when large data flows are churned out and their backdrop is hidden from the deciding actors.

Augmented intelligence

Augmented intelligence software packages are intended to offer new capabilities to human activities, allowing organizations to do what they could not carry out before. Unlike assisted intelligence, these packages lead to changes in task contents and in the very nature of the human and material resources required to perform the tasks involved. Activity-based processes and organizational models change accordingly.

Autonomous intelligence

Autonomous intelligence systems are envisioned as creating and deploying machines that act on their own without direct human intervention. The most anticipated and symbolic forms of autonomous intelligence are, among others, self-driving cars and fully fledged natural language translation programs. The Internet of Things (IoT) will generate vast amounts of data, more than humans can reasonably interpret. Many predictably scheduled activities based on demand sensing or equipment operation will be triggered by AI systems fed by data flows. Autonomous intelligence's greatest challenges may not be technical but human: adaptation to new working contexts made of faceless agents. Interaction and collaboration between the stakeholders involved will be mediated through virtual worlds. It is very likely that these new environments will require new skills, and the less skilled part of the current workforce is at risk of being made redundant.

It is very likely that AI systems will be perceived by many decision-makers as backseat driver-like guidance or, worse, as faceless manipulators depriving them of their free will. The way in which the transition among the various forms of AI will progress is not clear-cut, as they sit in a continuum. It is reasonable to think that the deployment will take place with overlapping of a sort between the various forms. It can be anticipated that AI systems could level the playing field between large corporations and small and medium businesses by giving the latter access to technologically advanced capabilities at affordable costs.

1.5. Biodata of two figureheads in the development of cybernetics

Two Austrian-born and -educated 20th-Century figures, namely Ludwig von Bertalanffy and Heinz von Förster, made decisive contributions to systems thinking and cybernetics through the intricacies of biology. Is it a coincidence? They both trod through biological processes, which have always been perceived as complex; understanding them has always been challenging for inquisitive people. Moreover, at the beginning of the 20th Century, Vienna was an important center for working out advanced knowledge in many a discipline (Freud, Carnap, etc.). Surveys of Ludwig von Bertalanffy's and Heinz von Förster's biodata are given in the following sections.

1.5.1. Ludwig von Bertalanffy (1901–1972)

Ludwig von Bertalanffy was born near Vienna in 1901. He studied the history of art and philosophy, first at the University of Innsbruck and later at the University of Vienna, where he qualified for a PhD in 1924. He then started studying biology and published his first book on theoretical biology in 1928 (Kritische Theorie der Formbildung). In the 1930s, von Bertalanffy formulated the organismic system theory that later became the kernel of the GST. His starting objective was to derive the phenomena of life from a spontaneous grouping of system
impulses. He assumed that a dynamical process exists inside the organic system and modeled it by a heuristic procedure as an open system striving towards a steady state. That results in assigning a self-organizational dynamics to biological systems. In 1934, he published the first volume of his Theoretische Biologie. From 1934 till 1948, he held a teaching position at the University of Vienna. In 1939, he was appointed to an extraordinary "Lehrstuhl". As a Rockefeller Fellow at the University of Chicago (1937–1938), he delivered his first lecture about the GST as a methodology for all fields of science. Its content was published in 1949 in Biologia Generalis (vol. 195, pp. 114–129) under the title "Zu einer allgemeinen Systemlehre".

In the 1940s, he evolved his "theory of open systems" from a thermodynamic point of view. A stable open system is characterized by dynamically irreversible processes. The components are synchronized to one another and to the "Eigengeschwindigkeit" of the complex whole. The general system experiences a kind of self-regulation comparable to the behavior of an organic system. Self-regulation should not be confused with self-organization. A whole made of interacting elements maintains its structure by an assemblage process of a sort and tends to restore itself after disturbances, analogously to living organisms. Isomorphic patterns exist between living organisms, social systems and cybernetic machines. Cybernetic feedback mechanisms (regulation) are controlled by constraints, whereas dynamical systems reflect the interplay of binding powers. The minimum entropy production stabilizes the system structure and the dynamics of flows. The system will achieve the dissipative state that configures a structure by maintaining a state far from equilibrium.

Ludwig von Bertalanffy always made efforts to disseminate his ideas outside the world of biology, long before his well-known book (General System Theory) was published in English and became
the landmark many people appeal to when using the corpus of von Bertalanffy's ideas in various disciplines. In particular, he published a paper about GST and the philosophy of science (von Bertalanffy 1950) and another paper about GST and the behavioural sciences (von Bertalanffy 1960).

1.5.2. Heinz von Förster (1911–2002)

Heinz von Förster was born in 1911 in Vienna. He studied physics at the Technical University of Vienna and at the University of Breslau, where in 1944 he received a PhD in physics. He worked in a radar laboratory during the war in spite of the fact that he was of Jewish descent. He moved to the USA in 1949 and worked at the University of Illinois, where he held a professorship of electrical engineering until his retirement (1951–1975). In addition, he was also a professor of biophysics (1962–1975) and director of the Biological Computer Laboratory (1958–1975). He was an eclectic figure, having developed an interest in anthropological research (1963–1965) within the framework of the Wenner-Gren Foundation. He had the opportunity to meet well-known scientists such as John von Neumann, Norbert Wiener, Humberto Maturana and so on.

The BCL (Biological Computer Laboratory) was created one year before the Self-organizing Systems and Bionics conferences were launched in 1959. Analog machines, made of electrical circuits mirroring the behaviors of differential equations like those studied by Turing in morphogenesis, were used in an attempt to reify "biological" computers for simulating the materialization of abstract theoretical concepts like cybernetics, the functional capabilities of experimental artifacts and the social coordination of observers.

He coined the "order-from-noise" principle: noise in a complex system might well lead to further organization. Noise is a universal phenomenon, considered as a nuisance, which may actually play a significant role in large classes of both natural systems and artifacts. It provides undesirable disturbances or fluctuations and, as a consequence, randomness in repeated measurements. These phenomena are well known in electronic and communication technology, subject matters
close to the electrical engineering studied and taught by Heinz von Förster. This principle has for some time been included in what is now called "stochastic resonance" (SR). It has been observed, quantified and described in a plethora of physical and biological systems including neurons, climate models, electronic circuits, lasers and chemical reactions. The term SR was first used by Roberto Benzi at the NATO International School of Climatology (1980). SR is a mechanism by which a system embedded in a noisy environment acquires an enhanced sensitivity to small external time-dependent forcing when the noise signal reaches a certain threshold in terms of power and frequency range. It plays a significant role in the emergence of randomness in natural and technical systems as well as in social systems (Barttholomew 1982; Wallace et al. 1997). Interactions between noise-induced randomness and the nonlinearity of systems can bring about a self-reorganization of systems exposed to such conditions.

He is the real architect of what is called "second-order cybernetics". Relying on Humberto Maturana's theorem "Anything said is said by an observer", Heinz von Förster derived a corollary: "Anything said is said to an observer". He contended that observed and observing systems cannot be considered independently, each having its own purpose and its own complexity. Information circulates between these two systems, between the mechanistic model and the people who have designed and made these models (artifacts). The paradigmatic concept of first-order cybernetics is the homeostatic loop. When disturbances of a sort change the state of the mechanistic system, variables that are internal or external to the system are engineered to bring it back to the set operational conditions (Ashby 1956).

He devoted his efforts to the study of observers. They are connected through language and form the very nucleus of a society. The triad of observers, language and society makes up the architecture of observing systems. This view led him to focus on self-referential systems and the importance of "eigen behaviors" for the explanation of complex phenomena. This refers to the "cybernetics of observing systems", leading to his famous distinction between trivial and non-trivial machines, which is the inception of complex cognitive behavior:
– trivial machine: its operations are not influenced by previous operations (no memory) and are predictable;

– non-trivial machine: the structure cannot be deduced from its behavior.

The cybernetics of observed systems may be considered to be first-order cybernetics, while second-order cybernetics is the cybernetics of observing systems. Social cybernetics belongs to second-order cybernetics. Figure 1.6 portrays the contrasting features of first-order and second-order cybernetics.

Figure 1.6. Contrasting the concepts of first-order cybernetics and second-order cybernetics
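The contrast between trivial and non-trivial machines can be made tangible with the following sketch (in Python, with invented behaviors): the trivial machine always maps the same input to the same output, whereas the non-trivial machine carries an internal state changed by every operation, so its behavior cannot be deduced from outside observation alone.

```python
def trivial_machine(x: int) -> int:
    return x * 2                      # no memory: the same input always yields the same output

class NonTrivialMachine:
    def __init__(self):
        self._state = 0               # hidden internal state

    def operate(self, x: int) -> int:
        self._state = (self._state + x) % 3   # previous operations leave a trace
        return x * 2 if self._state == 0 else x - self._state

print([trivial_machine(2) for _ in range(3)])   # [4, 4, 4]
m = NonTrivialMachine()
print([m.operate(2) for _ in range(3)])         # the same input yields varying outputs
```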

Second-order cybernetics can be analyzed as radical constructivism. It is an unconventional approach to knowledge and knowing. Knowledge is in the heads of people, and the thinking subject has no alternative but to construct what (s)he knows on the basis of his/her experience. According to Piaget, experience constitutes the only world we consciously live in. The seminal difference between the conventional psychologist's approach and Piaget's is that the former focuses on observable behavior and performance, and the latter focuses on the results of reflective abstractions (mental operations inferred from observations). If cybernetics is the formalization and modeling of information processing in technical artifacts and living organisms, then, when the cybernetic nature of self and the world is studied, the observer needs to be made part of the picture within the framework of the systems-plus-environment paradigm. This approach draws on the arguments advanced by Gregory Bateson in 1972. The observer is a complex system in itself. This idea is presented in von Förster's book (von Förster 1982).

Heinz von Förster was influenced by Gregory Bateson. In his book Steps to an Ecology of Mind, Gregory Bateson applied cybernetics to the field of ecological anthropology and the concept of homeostasis. He saw the world as a series of systems fitting into each other: those of individuals, societies and ecosystems. Within each system, we can find competition and dependency. Each of these systems has adaptive changes which depend on feedback loops to control balance by changing multiple variables. Gregory Bateson believed that these self-correcting systems were conservative by controlling exponential slippage. He saw the natural ecological system as innately good, as long as it was allowed to maintain homeostasis, and the key unit of survival in evolution as an organism together with its environment. Gregory Bateson also viewed the three systems of the individual, society and ecosystem as being together parts of one supreme cybernetic system that controls everything, rather than as merely interacting systems. This supreme cybernetic system is beyond the self of the individual and could be equated to what many people refer to as God, although G. Bateson referred to it as Mind. While Mind is a cybernetic
system, it can only be distinguished as a whole and not as parts. Gregory Bateson felt Mind was immanent in the messages and pathways of the supreme cybernetic system. He saw the root of system collapse as lying in Western epistemology. According to Gregory Bateson, consciousness is the bridge between the cybernetic networks of individual, society and ecology, and the mismatch between the systems due to improper understanding will result in the degradation of the entire supreme cybernetic system or Mind. Gregory Bateson thought that consciousness as developed through Western epistemology was at direct odds with Mind.

"Cybernetics introduces for the first time and not only by saying it but methodologically the notion of circularity, a circular causal system" (von Förster 1999). The relationship between observed and observing systems as viewed by Heinz von Förster is shown in Figure 1.7.

Figure 1.7. Relationship between observed and observing systems

1.6. References

Akerlof, G. and Schiller, R. (2009). Animal Spirits: How Human Psychology Drives the Economy and Why it Matters for Global Capitalism, Princeton University Press. Anderson, J.A. and Rosenfeld, E. (eds) (1988). Neurocomputing, MIT Press, Cambridge.

Arthur, W.B. (2013). Complexity economics – a different framework for economic thought, Santa Fe Institute report 2013-04-012. Ashby, W.R. (1956). An Introduction to Cybernetics, Methuen. Barttholomew, D. (1982). Models for Social Processes, Wiley, New York. Booch, G. (1994). Object-Oriented Analysis and Design with Applications, Benjamin-Cummings. Boulding, K. (1956). "General systems theory: the skeleton of science", Management Science, vol. 2, pp. 197–208. Cillers, P. (1998). Complexity and Postmodernism: Understanding Complex Systems, Routledge, New York. Comte, A. (1830). Cours de philosophie positive, vol. 1, Paris. Cottingham, J. (2008). "Thought, language and its components: William of Ockham, writings on logic", in Western Philosophy: An Anthology, 2nd ed., Wiley–Blackwell. Damasio, A.R. (2006). Descartes' Error – Emotion, Reason and the Human Brain, Vintage Books, London. de Saussure, F. (1960). Course in General Linguistics, Peter Owen, London. Denton, D. (2006). The Primordial Emotions: The Dawning of Consciousness, Oxford University Press.
Duda, R.O. and Hart, P.E. (1973). Pattern Recognition and Scene Analysis, Wiley, New York. Feigenbaum, E. and Feldman, J. (1963). Computers and Thought, McGraw Hill, New York. Fox, E. (2008). Emotion Science: An Integration of Cognitive and Neuroscientific Approaches, Palgrave MacMillan. Haynes, P. (2003). Managing Complexity in the Public Services, Open University Press. Hecht-Nielsen, R. (1990). Neurocomputing, Addison-Wesley, Reading. Hinton, G.E. and Anderson, J.A. (eds) (1981). Parallel Models of Associative Memory, Lawrence Erlbaum, Hillsdale. Kandel, E.R. (2007). In Search of Memory – The Emergence of a New Science of Mind, WW Norton, New York.

Keynes, J.M. (1936). The General Theory of Employment, Interest and Money, MacMillan, London. Le Doux, J. (1996). The Emotional Brain, Simon & Schuster, New York. MacIntyre, A. (1967). “Ontology” in P. Edwards, The Encyclopedia of Philosophy, vol. 5, MacMillan, New York. McCarthy, J., Abrahams, P.W., Edwards, D.J., Hart, T.P. and Levin, M.I., (1962). LISP 1.5 Programmers’ Manual, MIT Press. Minsky, M. (ed.) (1968). Semantic Information Processing, MIT Press. Nicolis, G. and Prigogine, I. (1989). Exploring Complexity, W.H. Freeman and Company, New York. Nilsson, N.J. (1965). Learning Machines, McGraw Hill, New York. Paulré, B. (1989). Perspectives Systémiques – Colloque de Cerisy, L’interdisciplinaire Lyon, p. 8. Popper, K. (1974). “Darwinism as Metaphysics”, The Philosophy of Karl Popper, P.A. Schlilpp (ed.), Open Court. Prigogine, I. (1979). From Being to Becoming, Freeman, San Francisco. Prigogine, I. and Stengers, I. (1979). La nouvelle alliance, Gallimard, Paris. Prigogine, I. and Stengers, I. (1984). Order out of Chaos – Man’s new Dialogue with Nature, Bantam Books. Prigogine, I. (1987). “Exploring complexity”, European Journal of Operational Research, vol. 30, pp. 97–103. Rumelhart D.E. and McClelland, J.L. (1986). Parallel Distributed Processing: Explorations in the Microstructures of Cognition, vol. 1, MIT Press, Cambridge. Russel, S. and Norvig, P. (1995). Artificial Intelligence – A Modern Approach, Pearson. Smuts, J. (1926). Holism and Evolution, Macmillan. Stachowiak, H. (1969). Denken und Erkennen im kybernetishchen Modell, Springer Verlag Wien.

Turing, A.M. (1952). “The Chemical Basis of Morphogenesis”, Philosophical Transactions of the Royal Society of London B, pp. 37–72. Vandeginste, P. (ed.), (1987). La recherche en intelligence artificielle, Le Seuil, Paris. von Bertalanffy, L. (1950). “An outline of General Systems Theory”, British Journal for the Philosophy of Science, vol. I, pp. 139–164. von Bertalanffy, L. (1960). “General System Theory and the behavioural sciences”, in Discussions on Child Development, vol. 4, J.M. Tanner and B. Inhelder (eds), London. von Bertalanffy, L. (1968). General System Theory – Foundation, Development, Application, Braziller, New York. von Förster, H. (1979). “Cybernetics of cybernetics”, in Communication and Control in Society, K. Krippendorf (ed.), Gordon and Breach, New York. von Förster, H. (1982). Observing Systems, Intersystem Publications, Blackwell Publishers Ltd. von Förster, H. (1999). Der Anfang von Himel und Erde hat keinen Namen. Eine Selbsschaffung in 7 Tagen, Döcker Verlag Wien. Waldrop, M. (1992). Complexity – the Emerging Science at the Edge of Order and Chaos, Simon and Schuster. Wallace, R., Wallace, D. and Andrews, H. (1997). “AIDS, tuberculosis, violent crime and low birth weight in eight US metropolitan areas: Public policy, stochastic resonance and the regional diffusion of inter-city markers”, Environment and Planning, A, vol. 29, no. 3, pp. 525–555. Wiener N. (1948). Cybernetics or Control and Communication in the Animal and the Machine, Wiley, New York.

2 Agent-based Modeling of Human Organizations

2.1. Introduction

This section has a narrow spine but a wide embrace. In addressing the relationship between agents and organizations, it takes in an extensive but highly fragmented set of ideas and studies embedded in organization theories. Its purpose is to devise a common ground model from which organization theories of various sorts can be logically derived. Many theories have been developed to explain how organizations are structured and conducted and how the stakeholders involved behave. Each one takes a definite point of view without offering the opportunity to understand how these theories might be correlated with each other and whether a mapping of a sort between some of them can be worked out. Indeed, founders of new approaches seem in most cases to ignore previous works. Agent ontology can function as a background model from which the main organizational theories can be derived.


2.2. Concept of agenthood in the technical world

2.2.1. Some words about agents explained

In the technical field, the concept of agenthood is widely used. A general definition of what an agent is has been produced by J. Ferber (1999, p. 9). Its adaptation is as follows:

An agent is a physical or virtual entity:
a) that is capable of acting in an environment;
b) that can communicate directly with other agents;
c) that is driven by trends towards objectives of the sort of individual and/or collective satisfaction;
d) that has access to resources for achieving its goals expressed in terms of objectives;
e) that can perceive its environment commensurately with its objectives;
f) that possesses skills and competencies for delivering services;
g) that may be able to self-reorganize for survival in its current environment;
h) that is endowed with autonomy.

Box 2.1.
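For readers used to code, the properties listed in Box 2.1 can be read as the interface of a software agent. The following Python sketch is purely illustrative (none of its names or data structures come from Ferber or from this book); it only mirrors properties a) to h) in a minimal way:

```python
# Hypothetical, minimal skeleton of an agent exposing the properties a)-h) above.
# All names and data structures are invented for illustration.

class Agent:
    def __init__(self, name, objectives, resources, skills):
        self.name = name
        self.objectives = list(objectives)   # c), d) goals expressed as objectives
        self.resources = dict(resources)     # d) resources for achieving the goals
        self.skills = set(skills)            # f) skills/competencies for delivering services
        self.beliefs = {}                    # internal picture of the environment
        self.inbox = []                      # messages received from other agents

    def perceive(self, environment):
        # e) perception limited to what is commensurate with the objectives
        self.beliefs = {k: environment[k] for k in self.objectives if k in environment}

    def communicate(self, other_agent, message):
        # b) direct communication with other agents
        other_agent.inbox.append((self.name, message))

    def act(self, environment):
        # a) + h) act autonomously in the environment, using f) its own competencies
        for goal in self.objectives:
            if self.beliefs.get(goal) != "achieved" and goal in self.skills:
                environment[goal] = "achieved"
```

Property g), self-reorganization, would amount to letting the agent rewrite its own objectives or skills at run time; it is deliberately left out of this sketch.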

All the terms in italics describe the key features of an agent (action, communication, objectives, autonomy and availability of resources). Virtual entities are software components and computing modules. They are not accessible to human senses as such but, as their number grows exponentially in our living ecosystem, they are contrived to become more and more the faceless partners of human beings. According to their main missions, specific names have been given to agents, namely communicating agents (computer environment without perception of other agents), situated agents (perception of the environment, deliberation on what should be done and action in this

environment), reactive agents (drive-based reflexes) and cognitive agents (capable of anticipating events and preparing for them). It is important to contrast object, actor and agent. In the field of computer science, object and actor are conceived as structured entities bound to execute computing mechanisms. An object is depicted by three characteristics:
– the class/instance relationship, representing the class as a structural and behavioral meta-model and the instance as a concrete model of the context attributes under consideration;
– inheritance, enabling one class to be derived from another and to benefit from the former in terms of attributes and procedures;
– message discrimination, triggering polymorphic procedures (methods in data-processing vernacular) as a function of incoming message contents.
The delineation between objects and communication agents is not always straightforward. This is the fate of all classifications. If a communication agent can be considered as an upgraded sort of object, conversely an object can be viewed as a degenerate communication agent whose language of expression is limited to the key words corresponding to its methods. An agent has services (skills) and objectives embedded in its structure, whereas an object has encapsulated methods (procedures) triggered by incoming messages. Actors in computer science perform data processing in parallel, communicate by buffered asynchronous messaging and generally ask the recipient of a message to forward the processed output to another actor. Another concept associated with agents is what is called a multi-agent system (MAS) (Ferber 1999). It has become a paradigm to address complex problems. There is no unified, generally accepted

definition, but communities of practice. The approaches followed by the system designer, namely functional design or object design, are chosen on the basis of answering the two following questions:
– What should be considered as an agent to address the issues raised by the problem to tackle? A system is analyzed with a functional approach when centered on the functions the system has to fulfil, or with an object approach when centered on the individual or the product to deliver.
– How are the tasks to perform allocated to each agent in the whole system from a methodological point of view?
There is no miracle recipe to achieve a good design. In addition, it is possible to analyze the same system from different angles and to deliver different designs. The approach for coming to terms with a problem is influenced by the historical development of the field involved. Many people are inclined to think of MAS as a natural extension of design by objects. It is worth noting that the MAS paradigm has disseminated into many technical areas where centralized control was a common practice. For many a reason, especially the computing power of on-board systems and the reliability of available telecommunication services, coordination between distributed systems takes place directly between the very units of the system without any central controlling device. This situation already prevails in railway networks.

2.2.2. Some implementations of the agenthood paradigm

The concept of agenthood has been applied in various technical fields from the 1990s onwards. Two examples will be described here, namely telecommunication networks and manufacturing scheduling.

2.2.2.1. Telecommunications networks

The world of telecommunication networks is extensively modeled on the basis of this concept. A telecommunication network is a mesh

of nodes fulfilling a variety of tasks. Each node is an agent. It can be defined as a computational entity:
– acting on behalf of other entities in an autonomous fashion (proxy agent);
– performing its actions with some level of proactivity and/or reactiveness;
– exhibiting some level of the key attributes of learning, cooperation and mobility.
Several agent technologies are operated mainly in the telecommunications realm. They fall into two main categories, i.e. distributed agent technology and mobile agent technology.
Distributed agent technology refers to a multi-agent system described as a network of actants with the following advantages:
– solving problems that may be too large for a centralized agent;
– providing enhanced speed and reliability;
– tolerating uncertain data and knowledge.
The agents include the following salient features:
– communicating between themselves;
– coordinating their activities;
– negotiating their conflicts.
“Actants” are non-human entities such as configurations of equipment, mediators and software programs and are distinguished from actors, who are human beings. But actors and “actants” are entangled in ways that provoke complexity dynamics in many circumstances.
Mobile agent technology functions by encapsulating the interaction capabilities of agents into their descriptive attributes. A mobile agent is a software entity existing in a distributed software environment. The primary task of this environment is to provide the means which allow

mobile agents to execute. A mobile agent is a program that chooses to migrate from machine to machine in a heterogeneous network. The description of a mobile agent must contain all of the following models:
– An agent model (autonomy, learning, cooperation).
– A life-cycle model: this model defines the dynamics of operations in terms of different execution states and events triggering the movement from one state to another (start state, running state and death state).
– A computational model: this model, being closely related to the life-cycle model, describes how the execution of specified instructions occurs when the agent is in a running state (computational capabilities). Implementers of an agent gain access to other models of this agent through the computational model, the structure of which affects all other models.
– A security model: mobile agent security can be split into two broad areas, i.e. protection of hosts from malicious agents and protection of agents from hosts (leakage, tampering, resource stealing and vandalism).
– A communication model: communication is used when accessing services outside of the mobile agent during cooperation and coordination. A protocol is an implementation of a communication model.
– A navigation model: this model concerns itself with all aspects of agent mobility, from the discovery and resolution of destination hosts to the manner in which a mobile agent is transported (transportation schemes).

2.2.2.2. Manufacturing scheduling

Scheduling shop floor activities is a key issue in the manufacturing industry with respect to making the best economical use of manufacturing equipment and bringing costs under control, as well as delivering committed customer orders at due dates.


Consider the product structure from the manufacturing point of view as portrayed in Figure 2.1.

Figure 2.1. Product structures from a manufacturing point of view

Pi are parts machined in dedicated shops and Ai are assembled products. In terms of scheduling, the relevant combined attributes of the products and equipment units involved in machining and assembling, whatever their layout (job shop, batch or continuous line production), are lead times, so that a Gantt chart can be derived by backward scheduling from the due date at which the “root” product A4, or a batch thereof, has to be delivered to its client. The situation is pictured in Figure 2.2. Within the framework of decentralized decision-making, intended to make collaborators more motivated to solve the problems they have to deal with and to deploy what is called “job enrichment”, the choice of three agents (J1 in charge of the delivery of A2, J2 in charge of the delivery of A3, and J3 in charge of the delivery of A4 to the final client) appears to be the most pragmatic and efficient solution. This third agent is in some way the front office of all the hidden backward activities and is responsible for fulfilling the commitments taken with respect to clients.


Figure 2.2. Gantt chart for scheduling machining and assembly activities for delivering products at a due date
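The backward-scheduling logic just described (latest start date = due date minus lead time, propagated from the root product down to its components) can be sketched in a few lines of code. The product structure of Figure 2.1 is not reproduced here; the bill of materials and lead-time figures below are invented for illustration, and only the principle comes from the text:

```python
# Hedged sketch of backward scheduling from a due date using lead times.
# The bill of materials and lead times below are hypothetical.

bill_of_materials = {          # product -> components needed before it can start
    "A4": ["A2", "A3"],
    "A2": ["P1", "P2"],
    "A3": ["P3", "P4"],
    "P1": [], "P2": [], "P3": [], "P4": [],
}
lead_time = {"A4": 2, "A2": 3, "A3": 4, "P1": 5, "P2": 2, "P3": 3, "P4": 6}  # days

def backward_schedule(item, due_date, schedule=None):
    """Compute latest start dates so that `item` is ready at `due_date`."""
    if schedule is None:
        schedule = {}
    start = due_date - lead_time[item]
    # keep the earliest requirement if an item appears on several branches
    schedule[item] = min(start, schedule.get(item, start))
    for component in bill_of_materials[item]:
        backward_schedule(component, start, schedule)  # components are due when the parent starts
    return schedule

print(backward_schedule("A4", due_date=30))
# {'A4': 28, 'A2': 25, 'P1': 20, 'P2': 23, 'A3': 24, 'P3': 21, 'P4': 18}
```

Each resulting start date becomes the left edge of a bar on the Gantt chart; agents J1, J2 and J3 would each own the bars of the sub-assembly they are responsible for.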

The three agents Ji have to collaborate to establish a schedule over a time horizon commensurate with the lead times. When manufacturing problems of any sort arise which can impact the fulfilment of the schedule, the agents Ji have to collaborate to devise a coherent, sensible solution (outsourcing, hiring extra workforce, etc.), often without letting the top management know the details of the problems but only the adequate courses of action taken.

2.3. Concept of agenthood in the social world

2.3.1. Cursory perspective of agenthood in the social world

When considering how the concept of agenthood, if it exists, is used in the social world, we come to the concept of agency in law. It aims at defining the relationship existing when one person or party (the principal) engages another (the agent) to act for him/her, i.e. to do his/her work, to sell his/her goods, to manage his/her business on his/her behalf. Early precedents for agency can be traced back to Roman law when slaves (though not true agents) were considered to be extensions

of their masters and could make commitments and agreements on their behalf. In formal terms, a mandate is given to a proxy. The concept of agenthood appeared in the field of economics in the past century. In 1937, R. H. Coase (1937) published a seminal article in which he developed a new approach to the theory of the firm. Later on, his line of thought was expounded by economists such as W. Baumol, R. Marris and O.E. Williamson. R. H. Coase emphasized the importance of the relations within the firm. The theory of the firm covers many aspects of what a firm is, how it operates and how it is governed. A section of the theory of the firm is called the agency theory. It investigates the relationship between a principal and its agents within an economic context. This distinction results from the separation between business ownership (principal) and operations management (agents). One of the core issues is to understand the ways and means by which a balanced structure between the principal’s desires and its agents’ commitments, and a balanced contract between both parties, can be achieved. The challenges at stake, when decision-making takes place, are asymmetric information, risk aversion, ex ante adverse selection and ex post moral hazard. The concept of “social network” emerged in the 1930s in the Anglophone world for analyzing relationships in industrial organizations and local communities. The English anthropologist John Barnes of the Manchester School introduced the term “network” explicitly when studying a parish community on a Norwegian island (Barnes 1954). This approach was later theorized by Harrison White who developed “Social Network Analysis” as a method of structural network analysis of social relationships. Social network theory strives to provide an explanation of an issue raised in the time of Plato by what is called social philosophy, namely the issue of social order: how to make intelligible the reasons why autonomous individuals cooperate to create enduring, functioning societies. In the 19th Century, A. Comte hoped to found a new field of “social physics” with individuals substituted for atoms. The French sociologist E. Durkheim (1951) argued that human societies are composed of interacting individuals and as such are akin to biological

systems. Within this cast of thought, social order is not ascribed to the intentions of individuals but to the structure of the social context they are embedded in. In the 1940s and 1950s, matrix algebra and graph theory were used to formalize fundamental socio-psychological concepts such as groups and social circles in network terms, making it possible to identify emergent groups in network data (Luce 1949). During that period of time, network analysis was also used by sociologists for analyzing the changing social fabric of cities in relation to the extension of urbanization. In the 1960s, anthropologists carried out analyses with the view of social structures as networks of roles instead of individuals (Brown 1952). In the 1990s, network analysis radiated into a great number of fields, including physics and biology. It also made its way into management consulting (Cross 2004), where it is often applied in exploiting the knowledge and skills distributed across organizations’ members. A book produced by S. Wasserman and K. Faust (1994) presents a comprehensive discussion of social network methodology. The quantitative features of this methodology rely on the theory of graphs and the properties of matrices. A graph can be either directed or not. A directed graph is an ordered pair G(V, A) where V is a set whose elements are called nodes, points or vertices, and A is a set of ordered pairs of nodes, the directed edges (each with a head and a tail). V can represent objects or subjects and A linkages between the elements of V. A special case of directed graph is the rooted directed graph, in which a node has been distinguished as the root. When a graph is not directed, its edges are undirected. All the properties of graphs can be represented in the matrix formalism. “Social network” agents: within this framework, an agent is no longer an individual but a collection of individuals associated by the linkages between them. The linkages can be deterministic or stochastic. These features imply two consequences. The first one is well acknowledged: the behavior of a social network agent is differentiated from individual behaviors (the whole is not the sum of

its parts). When some linkages between individuals are altered, the behavior of the composite agent is changed. Networks are categorized by how many modes they have (generally one or two) and by how connection variables are measured. One-mode networks involve measurements of variables on just a single set of actors. The variety of actors covers people, subgroups, organizations, communities and nation states. Their relations extend over a wide spectrum of characteristics:
– individual evaluation (friendship, liking, respect, etc.);
– financial transactions and exchange of material resources;
– transfer of immaterial resources;
– kinship (marriage, descent).
Two-mode networks refer to measurements of variables on two sets, either two sets of actors or a set of actors and a set of events or activities. In a two-set case, the profiles of actors are similar to those found in one-mode networks. As for relations, some can connect actors inside each set, but at least one relation of a sort must be defined between the two sets of actors. Connection networks are two-mode networks that combine a set of actors and a set of events or activities which the actors in the first set attend or to which they belong. The requirement is that the actors must be connected to one or more events or activities. These characteristics of connection networks offer wide possibilities and flexibility to represent organizations’ or communities’ structures and operational courses of action. Connection networks include three types of built-in linkages: first, they show how the actors and the events or activities are directly related to each other; second, the events or activities create indirect relations between actors; and third, the actors create relations between the events or activities. Let us take a simple example to clarify the ideas. Consider a set of children (Allison, Cindy, Dave, Doug, Ross and Sarah) and a set of

events (birthday party 1, birthday party 2 and birthday party 3). The attendance of children to the parties can be represented by a matrix whose rows are children and columns are parties, as shown in Figure 2.3.

Actors     Party 1   Party 2   Party 3
Allison       1         0         1
Cindy         0         1         0
Dave          0         1         1
Doug          0         0         1
Ross          1         1         1
Sarah         1         1         0

Figure 2.3. Connection network matrix for the example of six children and three birthday parties

aij = 1 if actor i is affiliated with event j; otherwise, aij = 0.

A connection network can also be formalized by a bipartite graph. A bipartite graph is a graph in which the nodes can be split into two subsets and all edges are between pairs of nodes belonging to different subsets. Figure 2.4 translates the matrix of Figure 2.3 into a bipartite graph. Bipartite graphs can be generalized to n-partite graphs that visualize long-range correlations between organizations’ stakeholders. Graphs are very flexible means to visualize real world situations.
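As a small numerical complement, the indirect relations between actors mentioned above can be computed from the affiliation matrix of Figure 2.3 by multiplying it by its transpose; each off-diagonal entry counts the parties two children attended together. The code below is only an illustrative sketch, not material from the source:

```python
# Sketch: one-mode projection of the two-mode (actor x event) matrix of Figure 2.3.
actors = ["Allison", "Cindy", "Dave", "Doug", "Ross", "Sarah"]
A = [                # rows = actors, columns = parties 1..3
    [1, 0, 1],       # Allison
    [0, 1, 0],       # Cindy
    [0, 1, 1],       # Dave
    [0, 0, 1],       # Doug
    [1, 1, 1],       # Ross
    [1, 1, 0],       # Sarah
]

def co_attendance(A):
    """Return A multiplied by its transpose: entry (i, j) counts events attended by both i and j."""
    n, m = len(A), len(A[0])
    return [[sum(A[i][k] * A[j][k] for k in range(m)) for j in range(n)] for i in range(n)]

C = co_attendance(A)
print(C[0][4])   # Allison and Ross attended 2 parties together
```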


Figure 2.4. Bipartite graph of the connection network matrix for the example of six children and three birthday parties (Figure 2.3)

2.3.2. Organization as a collection of agents

Defining what an organization is or is not often refers to metaphors. Let us review these metaphors:
– an organization is a machine made of interacting parts engineered to transform inputs into outputs called deliverables (products/services);
– an organization is an organism achieving a goal and experiencing a life cycle (birth, growth, adaptation to environmental conditions, death) and fulfilling organic functions;
– an organization is a network representing a social structure directed towards some goal and created by communication between groups and/or individuals. The social structure mirrors how driving powers are distributed, influences exerted and finally decisions made to attain the set purpose.


All these instruments of organization representation are an objective symptom showing how this concept has many facets and is approached by apparently partible models. In fact, the question raised is: does an organization consist of relations of ideas or matters of fact? The first two metaphors can be better understood as non-contingent a priori knowledge and the last one as contingent a posteriori knowledge. In other words, the issue is the contingent or non-contingent identity of the construct called organization. E. Morin’s cast of thought (Morin 1977; Morin 1991) leans toward the contingent identity of the organization construct. We take his view to describe an organization: it is a mesh of relations between agents, human as well as virtual, which produces a cluster or system of actors sharing objectives and endowed with attributes and procedures for deploying courses of action not apprehended at the level of single agents. An organization is viewed as a society of agents interacting to achieve a purpose that is beyond their individual capabilities. The significant advantages of this vision are due to the potential abilities of agents, which draw on:
– communication among themselves;
– coordination of their targeted activities;
– negotiation once they find themselves in conflict, and mobility by transferring their processing capabilities to other agents;
– knowledge capitalization by learning;
– reaction to stimuli and some degree of autonomy by being proactive.
This form of system model allows more flexibility to describe the behavior of organizations. In other words, adaptive behavior can be easily made explicit. By adaptive, it is meant that systems are able to modify their behavior to respond to internal or external signals. Proactivity and autonomy are two essential properties that manifest themselves in a number of different ways. For instance, some agents perform an information filtering role, some of which filter in an autonomous way, only presenting the target agent with information it

considers to be of interest to it. Similarly, this same type of agent can also be proactive, in that it actively searches for information that it judges would be of interest to its users. An organization is characterized on one hand by its architecture in terms of formalized interplay between its agents (centralized or decentralized) and on the other hand by its functional capabilities, the roles of its actors and the relations between the two sets of their describable attributes (functional capabilities and actors’ roles).

2.4. BDI agents as models of organization agents

2.4.1. Description of BDI agents

Organization agents are social agents acting in a specific context. Interaction between social agents (short for social network agents) is central to understanding how organizations are structured and operated. A multi-agent system in the social world must focus on how the interactions between agents are made effective, efficient and conducive to reaching set objectives. In the late 1980s and the 1990s, a great deal of attention was devoted to the study of agents capable of rational behavior. Rationality has been investigated in many a field. Economists have developed a strong interest in this concept and have built it into a normative theory of choice based on the maximization of what is called a utility function, utility designating here something useful to customers. Hereafter, rationality is understood as the complete exploitation of information, sound reasoning and common sense logic. A particular type of rational agent, the Belief-Desire-Intention (BDI) agent, has been worked out by Rao and Georgeff (1995), and its implementation studied from both an ideal theoretical perspective and a more practical one. BDI agents are cooperative agents characterized by having “mentalistic” features and, as such, they may incorporate many attitudes of human behavior. Their representative architecture is illustrated in Figure 2.5. It contains four key entities, namely beliefs, desires, intentions and plans, and an engine, “the interpreter”, securing the smooth coherence between the functional capabilities and roles

fulfilled by the four key entities. In our opinion, these key structures are well suited to model the way social agents behave in a business environment and can be used as a modeling concept for organizations.

Figure 2.5. BDI agent architecture

Beliefs correspond to data-laden signals the agent receives from its environment. These signals are deciphered and may deliver incomplete or incorrect information, either because of the deliberate intention of signal sources or because of the lack of competencies at the reception side. Desires refer to the tasks allocated to the agent’s mission. All agents in an organization have an assignment that transforms their missions into clearly defined goals. Intentions represent desires the agent has committed to achieve and which have been chosen from among a set of possible desires, even if all these possible desires are compatible. The agent will typically keep striving to fulfill its commitment until it realizes that its intention is achieved or is no longer achievable. Plans are a set of courses of action to be implemented for achieving the agent’s intentions. They may be qualified as procedural knowledge as they are often reified by lists of instructions to follow.
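Before turning to the interpreter itself, it may help to see how beliefs, desires, intentions and plans can be chained in code. The sketch below is only indicative: the rules, the plan library and the data are invented, and it compresses the deliberation cycle described in the next paragraph into a single function rather than reproducing any published BDI implementation:

```python
# Indicative sketch of a BDI-style deliberation cycle; data and rules are invented.
# Only the belief -> desire -> intention -> plan ordering follows the text.

beliefs = {"stock_A2": 0, "order_received": True}
plan_library = {                      # intention -> plan (a list of actions)
    "deliver_A2": ["machine_P1", "machine_P2", "assemble_A2", "ship_A2"],
}

def generate_desires(beliefs):
    # desires derive from the mission: serve the orders received
    return {"deliver_A2"} if beliefs["order_received"] else set()

def filter_to_intentions(desires, beliefs):
    # commit only to desires for which a plan is believed available
    return {d for d in desires if d in plan_library}

def interpreter_cycle(beliefs):
    desires = generate_desires(beliefs)
    intentions = filter_to_intentions(desires, beliefs)
    plans = [plan_library[i] for i in intentions]
    return desires, intentions, plans

print(interpreter_cycle(beliefs))
# ({'deliver_A2'}, {'deliver_A2'}, [['machine_P1', 'machine_P2', 'assemble_A2', 'ship_A2']])
```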


The interpreter’s assignment is to detect updated beliefs captured from the surrounding world, to assess possible desires on the basis of newly elaborated beliefs and to select, from the set of current desires, those which are to act as intentions. Finally, the interpreter chooses to deploy a plan in agreement with the agent’s committed intentions. Consistency has to be maintained between beliefs, desires, intentions and plans. Some degree of intelligence and competence is required to fulfil this functional capability and should be considered embedded in the interpreter. The embedded intelligence and competence capabilities of the interpreter can be mainly expressed in terms of relationships between intentions and commitments. A commitment has two parts, i.e. the commitment condition and the termination condition under which an active commitment is terminated (Rao and Georgeff 1995). Different types of commitments can be defined. A blindly committed agent denies any changes to its beliefs and desires which could conflict with its ongoing commitments. A single-minded agent accepts changes in beliefs and drops its commitments accordingly. An open-minded agent is supposed to allow changes in both its beliefs and desires, forcing its ongoing commitments to be dropped. The commitment strategy has to be tailored, commensurate with the role(s) given to the agent according to the application context. Let us elaborate on this mechanism by taking a general example of how the roles of the four entities (Beliefs, Desires, Intentions and Plans) articulate in the whole structure and how they are conducted by the interpreter. An input message from another agent is captured by the data structure Beliefs and analyzed by the Interpreter. The message deals with a change in the scheduled deliveries from this agent. This change is liable to impact the agent’s plan of activities and, as a consequence, its own deliveries of products or services to its clients. Several options are open to analysis under the cover of desires and intentions. All actors in any context are keen on respecting their commitments (intentions), but a new choice has to be made when it appears that all desires cannot be met in terms of available resources from suppliers. A

priority list of desires has to be re-established on the basis of strategic and/or tactical arguments (loyalty, criticality, etc. of clients) and converted into intentions to let the interpreter devise a new plan. The technique of “rolling schedule” in the manufacturing industry resorts to this practice. Explaining the structural components of BDI agents in the next sections will show that BDI agents comply with the characteristics of an agent in the technical world that were given by J. Ferber.

2.4.2. Comments on the structural components of BDI agents

2.4.2.1. Definition of belief

The verb “believe” is defined in the Oxford dictionary. We are aware that the word “belief” has been interpreted in different ways in the realm of philosophy and that its translation into other Indo-European languages (German, French, Italian, Spanish, among others) appears difficult (Cassin 2004). It is outside the scope of this book to discuss this issue. We will interpret it within the framework of what is called “the philosophy of mind” in the English language context. According to Hume’s book (1978, I, sec. 7), a matter of fact is “easily cleared and ascertained” and is closely correlated with reality: “if this be absurd in fact and reality, it must be absurd in idea”. These matters of fact are objects of belief: “it is certain that we must have an idea of every matter of fact which we believe… When we are convinced of any matter of fact, we do nothing but conceive it” (Hume 1978, I, III, sec. 8). In his book “Enquiries Concerning Human Understanding and Concerning the Principles of Morals”, Hume (1975) confirms that matters of fact and relations of ideas should be clearly distinguished: all the objects of human reason or enquiry may naturally be divided into two kinds, namely, relations of ideas or matters of fact. Some people distinguish dispositional beliefs and occurring beliefs to try to mirror the storage structures of our memory organization. A dispositional belief is supposed to be held in the mind but not

currently considered. An occurring belief is a belief being currently considered by the mind.

2.4.2.2. Attitudes and beliefs

An attitude is a state of mind disposing one to behave in a positive or negative way towards an object or a subject. The information–integration tenet is one of the most credible models of the nature of attitudes and attitude change, as stated by Anderson (1971), Fishbein (1975) and Wyer (1974). According to this approach, all pieces of information have the potential to affect one’s attitude. Two parameters have to be considered to understand the degree of influence information has on attitudes, i.e. the how and how much parameters. The how parameter is intended to evaluate the extent to which a piece of information received supports one’s belief. The how much parameter tries to measure the weight assigned to different pieces of information for impacting one’s attitude through a change in one’s belief. Attitudes are dependent on a complex factor involving beliefs and evaluation. It is important to distinguish between two types of belief, i.e. belief in an object and belief about an object. When one believes in an object, one predicts a high probability of the object’s attributes existing. Belief about an object is the predicted probability that particular relationships exist between one object and others. Beliefs are embodied by the hundreds of thousands of statements we make about self and the world. Attitudes change when beliefs are altered by acquiring new knowledge. The quantitative assessment of an attitude towards an object or a subject is measured in terms of the weighted sum of each belief about that object or subject times its circumstanced valuation. M. Rokeach has developed an extensive explanation of human behavior based on beliefs, attitudes and values (Rokeach 1969, 1973). According to him, each person has a highly organized system of beliefs, attitudes and values, which guides behavior. From M. Rokeach’s point of view, values are specific types of beliefs that act as life guidance. He concludes that people are guided by a need for

consistency between their beliefs, attitudes and values. When a piece of information brings about changes in attitude towards an object or a situation, inconsistency develops and creates mistrust. Another facet of belief and trust is linked to certainty and probability. Probability is commonly contrasted with certainty. Some of our beliefs are entertained with certainty, while there are others of which we are not sure. Furthermore, our beliefs are time-dependent along with our acquaintanceships.

2.4.2.3. Beliefs and biases

Biases are nonconscious drivers, cognitive quirks that influence how people perceive the world. They appear to be universal in most of humanity, perhaps hardwired in the brain as part of our genetic heritage. They exert their influence outside conscious awareness. We do not take action without our biases kicking in. They can be helpful by enabling people to make quick, efficient judgments and decisions with minimal cognitive effort. But they can also blind a person to new information or inhibit someone from considering valuable data when taking an important decision. Biases often refer to beliefs that appear as the grounds on which decisions and courses of action are taken. Below is a list of biases commonly found in social life:
– in-group bias: perceiving people who are similar to you more positively (ethnicity, religion, etc.);
– out-group bias: perceiving people who are different from you more negatively;
– belief bias: deciding whether an argument is strong or weak on the basis of whether you agree with its implications;
– confirmation bias: seeking and finding evidence that confirms your beliefs and ignoring evidence that does not;
– availability bias: making a decision based on the data that comes to mind more quickly rather than on more objective evidence;

– anchoring bias: relying heavily on the first perception or piece of information offered (the anchor) when considering a decision;
– halo effect: letting someone’s positive qualities in a specific area influence the free will of one individual or a group of individuals (constraints, lobbying, etc.);
– base rate fallacy: when judging how probable an event is, ignoring the base rate (overall rate of occurrence);
– planning fallacy: underestimating how long a task will take to complete and how much it will cost, i.e. the risks incurred, while overestimating the benefits;
– representativeness bias: believing that something that is more representative is necessarily prevalent;
– hot hand fallacy: believing that someone who was successful in the past has a greater chance of achieving further success.

2.4.2.4. Degrees of belief

Belief, probability and uncertainty

An important facet of belief is linked to trust, truth and certainty. Uncertainty is commonly treated with probability methods. Some of our beliefs are entertained with certainty, while there are others of which we are not sure. John Maynard Keynes (Keynes 1921) draws a distinction between uncertainty and risk. Risk is uncertainty structured by objective probabilities. Objective means based on empirical experience gained from past records or purposely designed experimental tests. The concept of probability is related to ideas originally centered on the notion of credibility or reasonable belief falling short of certainty. Two distinct uses of this concept are made, i.e. modeling of physical or social processes and drawing inference from, or making decisions on the basis of, inconclusive data that characterizes uncertainty. When modeling physical or social processes, the purpose is predicting the relative frequency with which the possible outcomes will occur. In evolving a probability model for some phenomenon, an

implicit assumption is made about how the natural, social and human world is configured and how it behaves. Such assumed assertions are contingent propositions that should be exposed to empirical tests. Probability is also used as an implement for decision-making by drawing inferences when a limited volume of data is available. When combined with an assessment of utilities, it is also used for choosing a course of action in an uncertain context. Probability modeling and inference are often complementary. Inference methods are often required for choosing among competing probabilities. Thus, decision-makers are faced with situations represented by sets of probability distributions, giving more weight to some assessments than to others. These techniques are used by insurance and reinsurance companies when they work out contracts for which statistical series are too short. Probability is a tool for reasoning from data, akin to logic, and for adjusting one’s beliefs to take action. Uncertainty can be rigged, increased or fabricated. This is not unusual in the political and economic realms. Think of climate change, pesticides, acid rain, medicines and so on. Anyhow, dropping or neglecting partially certain public data means rejecting an often large volume of data which, in spite of its uncertainty, cannot be turned down without large detriment to the relevance of decisions. The data deluge that pours over us through current uncontrolled communication channels is a challenge not only for citizens but also for businesses in distinguishing relevant from fake information items.

Measures of degrees of belief

The degrees of belief about the future are ingrained with uncertainty. The usual way to come to practical terms with uncertainty is to use the concept of probability. Two approaches to probability are generally considered, namely the frequency approach and the Bayesian approach. These two approaches are explained on the basis of the following statement: “the

probability that the stock exchange index will crash tomorrow is 80%”. The interest in games of chance stimulated work on probability and influenced the character of an emerging theory. The probability situations were analyzed into sets of possible outcomes of a gaming operation. The relative frequency of the occurrence of an event was postulated as a number called the “probability” of this event. It was expected that the relative frequency of occurrence of the event in a large number of trials would lie quite close to that number. But the existence in the real world of such an ideal limiting frequency cannot be proved. This approach to probability is just a model of what we think to be reality. The statement “the probability that the stock exchange index will crash tomorrow is 80%” cannot express a relative frequency (even if financial market records are part of the evidence for the statement), because tomorrow comes but once. The statement implicitly expresses the credibility of the thought that the future is included in the past, on the basis that it is rational to be confident of the hypothesis (index crash) on the evidence of past records. This approach has often been called subjective, because its early proponents spoke of probability being relative in part to our ignorance and in part to our knowledge (Laplace 1795). It is now acknowledged that the term is misleading, for in fact there is an “objective” relationship between the hypothesis (index crash) and the evidence borne by past records, a probability relationship similar to the deductive relations of logic (Keynes 1921). One is faced with reasonable degrees of belief relative to evidence. The label “objective theory”, according to Keynes’ view, has been criticized by F.P. Ramsey (1926). This skepticism led Ramsey, de Finetti (1937) and Savage (1954) to develop what Savage called a theory of personal probability. Within this framework, a statement of probability is the speaker’s own assessment of the extent to which (s)he is confident of a proposition. It is remarkable that a seemingly subjective idea like this is arguably constrained by exactly the same mathematical rules governing the frequency conception of probability.


Personal degrees of belief can arguably satisfy the probability axioms. These ideas were first proposed by Ramsey (1926). He considered a probability space as a representation of psychological states of belief. P(A) stands for a person’s degree of confidence in A; it is to be evaluated behaviorally by determining the least favorable rate at which this individual would take a bet on A. If the least favorable odds are, e.g. 3:1, the probability is P(A) = ¾. Conditional probability is denoted by P (A/B). In a frequency interpretation, this is the relative frequency with which A occurs among trials in which B occurs. Conditional probabilities may be explained in terms of conditional bets. In a personal belief interpretation, P(A/B) may be understood as the rate at which a person would make a conditional bet on A – all bets being cancelled unless condition B is fulfilled. This approach, often called Bayes’ theorem, is of serious interest from a belief point of view. Suppose that the set Ai is an exhaustive set of mutually exclusive hypotheses of interest and that B is knowledge bearing on the hypotheses. Assume that a person, on the basis of prior knowledge, has a distribution of belief over Ai, represented by P(Ai) for each i. Call this the prior distribution, assuming that for each Ai, P(B/Ai) is defined. This is called the likelihood of getting B if Ai is true. P(A/B) is interpreted as a logical relationship between A and B. The goal of the Bayesian method is to make inferences regarding unknowns (generic term referring to any value not known to the investigator), given the information available that can be partitioned into information obtained from the current data as well as other information obtained independently or prior to the current data, which can be assigned to the investigator’s current knowledge. The more or less assured certainty of the expected future states of nature is encoded as probability estimates conditional on the information available. Within this framework of thought, the repetitive running of a trial and error process is supposed to allow people to gain new knowledge and eventually change their beliefs. In the inceptive step, the distribution of a priori subjective probabilities with respect to the future possible states of nature and their properties is chosen on the basis of innate and acquired knowledge to build a representation of the likely

outcome of future action. This procedure draws on Bayes’ theorem. When the factual outcome happens, its compliance and/or discrepancy with the expected effect are analyzed and memorized, producing incremental knowledge coming from experience. There is a clear connection between logical probability, rationality, belief and revision of belief.

2.4.2.5. Belief, trust and truth

Truth is an attribute of beliefs (opinions, doctrines, statements, etc.). It refers to the quality of those propositions that accord with reality, or with what is perceived as reality. The contrast is with falsity, faithlessness and fakery. Many explanations have been devised to elaborate on the correspondence between what is true and what makes it true. The correspondence theory asserts that a belief is true provided that a fact corresponding to it exists. What does it mean for a belief to correspond to a fact? How to verify that a fact exists in the context of virtual reality? A third party, trust, seems adequate to intervene within this framework to assess the credibility of information sources. The state of believing involves some degree of confidence towards a propositional object of belief. Other theories have been proposed to explain how a belief is accepted as true. The coherence theory developed by Bradley and Blanshard asserts that a belief is verified when it is part of an entire system of beliefs that is consistent and harmonious. A statement S is considered logically true if and only if S is a substitution-instance of a valid principle of logic. The pragmatic theory produced by the two American philosophers C.S. Peirce and W. James asserts that a belief is true if it works, if accepting it brings success. In a book about the impact of blockchain technology on business operations (buying and selling goods and services and their associated money transactions), Don and Alex Tapscott (2016) estimate that a trust protocol has to be established according to four principles of integrity:

– Honesty has become not only an ethical issue but also an economic one. Trusting relations between all the stakeholders of business and public organizations have to be established and made sustainable.
– Consideration means that all parties involved respect the interests, desires or feelings of their partners.
– Accountability means clear commitments and abiding by them.
– Transparency means that information pertinent to employees, customers and shareholders must be made available to avoid the instillation of distrust.

This protocol shows how social actors have become aware of the importance of societal relationships in a faceless virtual world. Blockchain is a distributed ledger technology. Blockchain transactions are secured by powerful cryptography that is considered unbreakable using today’s computers. We resort to K. Lewin’s field theory to analyze how emotions, feelings, beliefs, truth and trust are dynamically articulated when agents interact and perform their activities (Lewin 1951). The fundamental construct introduced by K. Lewin is that of “field”. K. Lewin gained a scientific background in Germany before immigrating to America. That explains why he was led to introduce the concept of a “field” to characterize the spatial–temporal properties of a human ecosystem. This concept is widely used in physics to describe the physical properties of phenomena in a limited space. All behavior in terms of actions, thinking, wishing, striving, valuing, achieving, etc. is conceived of as a change in some state of a “field” in a given time unit. Expressed in the realm of individual psychology, the field is the life space of the individual (Lebensraum in German culture). The life space is equipped with beliefs and facts that interact to produce mental states resulting in attitudes at any given time. K. Lewin’s assertion that the only determinants of attitudes at a given time are the properties of the field at the same time has caused much controversy. But it sounds reasonable to accept that all the past

is incorporated into the present state of the field under consideration. To put it in a different wording, only the contemporaneous system can have effects at any time. As a matter of fact, the present field has a certain time depth. It includes the “psychological” past, “psychological” present and “psychological” futures which constitute the time dimension of the life space existing at a given time. This idea of time dimension is also found in the concept developed by Markov to approach the description of stochastic processes by chain of events. State changes in a system that occur in time follow some probability law. The transition from a certain state at time t to another state depends only on the properties of the state at time t: all the past features of previous states are considered already included in the attributes of the present state. All attitudes depend on the cognitive structure of the life space that includes, for each agent of a cluster, the other stakeholders of the cluster. When exposed to the behavior suggestions of other cluster agents or their critical judgment of his/her own behavior, every agent develops either a conditioned reflex based on his innate and/or acquired knowledge embedded in his/her brain’s neural connections or branches out into emotional expressions according to the way the received information is appraised as a reward or a threat. This last case occurs if (s)he feels (s)he cannot secure the right pieces of knowledge to produce an appropriate reaction. T.D. Wilson, D.T. Gilbert and D.B. Centerbar (2003) wrote “helplessness theory has demonstrated that if people feel that they cannot control or predict their environments, they are at risk for severe motivational and cognitive deficits, such as depression”. If one organization agent trusts the other organization agents, his/her motivation is strengthened to embark on a learning process to better his/her acquired knowledge. Learning engages imagination, demands concentration, attention, efforts and trust in other agents’ good will. Conscious awareness is fully involved.
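The Markov property invoked above (the transition from the state at time t depends only on that state, never on the path that led to it) can be illustrated with a toy transition structure over attitude states. The states and probabilities below are entirely invented and serve only to make the property concrete:

```python
# Toy illustration of the Markov property: the next state depends only on the
# current state, not on the history. States and probabilities are invented.
import random

transition = {                      # P(next state | current state)
    "trusting":    {"trusting": 0.7, "neutral": 0.2, "distrustful": 0.1},
    "neutral":     {"trusting": 0.3, "neutral": 0.4, "distrustful": 0.3},
    "distrustful": {"trusting": 0.1, "neutral": 0.3, "distrustful": 0.6},
}

def next_state(current):
    r, cumulative = random.random(), 0.0
    for state, p in transition[current].items():
        cumulative += p
        if r < cumulative:
            return state
    return state

state = "neutral"
trajectory = [state]
for _ in range(10):
    state = next_state(state)       # only `state` matters, never the earlier trajectory
    trajectory.append(state)
print(trajectory)
```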


2.4.2.6. Beliefs and logic

Logic is the study of consistent sets of beliefs. A set of beliefs is consistent if the beliefs are not only compatible with each other but also do not contradict each other. Beliefs are expressed by sentences. When written, these sentences stating beliefs are called declarative. Many sentences do not naturally state beliefs. One sentence may have different meanings or interpretations depending on the context. Beliefs are, in some way or another, the outcome of “rational” reasoning. By rational, it is meant that rules of logic are called on for justifying the conclusions reached. But which rules? Classical logic can be understood as a set of prescriptive rules defining the way reasoning has to be conducted to yield coherent conclusions. Within this framework, truth is unique. It is implicitly assumed that a universe exists where propositions are either true or false. The “principle of the excluded middle” is called on. It is well suited for data processing by computer systems. Data coded by binary digits are memorized and processed by electronic devices able to be maintained only in two states (0 or 1). Classical logic does not mirror the way we reason in our daily life. It is acknowledged that our brain does not operate as a Turing machine (Wilson 2003). If our brain is viewed as a black box converting input into output, the transformation process can be represented by algorithms. But the intimate physiological mechanism cannot be ascribed to algorithmic procedures in the way a computer system crunches numbers. Other systems of logic, descriptive by nature, have been worked out to try to take into account the ways and means we use to make decisions in our daily activities. Modal logic is a system we practice, generally implicitly. Modality is the manner in which a proposition or statement describes or applies to its subject matter. Derivatively, modality refers to characteristics of entities or states of affairs described by modal propositions. Modal logic (Blackburn 2001) is a branch of logic which studies and attempts to systematize those logical relations between propositions which hold by virtue of containing modal terms such as

“necessarily”, “possibly” and “contingently”; must, may and can. These terms cover three modalities: necessity, actuality and possibility. In short, modal logic is the study of necessity (it is necessary that...) and possibility (it is possible that….). This is done with the help of the two operators □ and ◊ meaning “necessarily” and “possibly”, respectively, and instrumental in dealing with different conceptions of necessity and possibility: – logical necessity, i.e. true by virtue of logic alone (if P then Q); – contextual necessity, i.e. true by virtue of the nature and structure of reality (business context, social context, etc.); – physical necessity, i.e. true by virtue of the laws of nature (water boils at 100°C under standard pressure). Modal logic is not the name of a single logical system; there are a number of different logical systems making use of the operators □ and ◊, each with its own set of rules. Modal operators □ and ◊ are introduced to express the modes with which propositions are true or false. They allow logical opposites to be clearly elicited. The operators □ and ◊ are regarded as quantifiers over entities called possible worlds. □ A is then interpreted as saying that A is true in all possible worlds, while ◊ A is interpreted as saying that A is true in at least one possible world. The two operators are, in fact, connected. To say that something must be the case is to say that it is not possible for it not to be the case. That is, □ A means the same as ¬◊¬A. Similarly, to say that it is possible for something to be the case is to say that it is not necessarily the case that it is false. That is, ◊A means the same as ¬□¬A. For good measure, we can express the fact that it is impossible for A to be true, as ¬◊A (it is not possible that A) or as □¬A (A is necessarily false). The truth value of ◊A cannot be inferred from the knowledge of the truth value of A. Modal operators are situation-dependent. Following the 17th Century philosopher and logician Leibniz, logicians often call the possible options facing a decision-maker possible worlds or

universes. A fresh approach to the semantics theory of possible worlds was introduced in the 1950s by Kripke (1963a and 1963b). To say that ◊A is true, it is required to say that A is in fact true in at least one of the possible universes associated with a decision-maker’s situation. To say that □A is true implies that A is true in all the possible universes associated with a decision-maker’s situation. The modal status of a proposition is understood in terms of the worlds in which it is true and worlds in which it is false. Contingent propositions are those that are true in some possible worlds and false in others. Impossible propositions are true in no possible world. Two logical operators, i.e. negation and the conditional operator → (if …then …), which are central in decision-making, require special attention to be applied within the framework of possible worlds. Let us assume that a decision-maker is in a situation M and that M is a set of exclusive possible worlds. Each element of the set is a world in itself. Possible worlds are not static but dynamically time-dependent. Today’s world is not tomorrow’s world. This means that each possible world evolves in time according to rules. These dynamics can be represented by a tree diagram with nodes and branches reflecting the relations between the different worlds (nodes). Each branch is a retinue of possible worlds. A tree diagram reads top-down so that from one certain node, access is not given to any other node in the tree. It is posited that A → B in the world m if and only if in all the worlds n accessible from m, A and B are simultaneously true. ¬A is true in the world m if and only if A is false in all the worlds n accessible from m. Let us give examples of inference employing modal operators. Consider a situation S with two associated worlds S1 and S2 and two sentences a and b that can be true (T) or false (F) as shown in Figure 2.6.


Figure 2.6. A situation S and its two possible worlds

Consider the inference from ◊a and ◊b to ◊(a & b). It is invalid: a is T at S1; hence, ◊a is true in S. Similarly, b is true in S2; hence, ◊b is true in S. But (a & b) is true in no associated world; hence, ◊(a & b) is not true in S. By contrast, the inference from □a and □b to □(a & b) is valid. If the premises are true at S, then a and b are true in all the worlds that are associated with S. Then, a & b is true in all those worlds, and □(a & b) is true in S.
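These two checks can be reproduced mechanically. The following sketch is only an illustration: it encodes the situation of Figure 2.6 with the truth values used in the discussion (a true only at S1, b true only at S2) and evaluates ◊ and □ by quantifying over the associated worlds:

```python
# Sketch: evaluating "possibly" and "necessarily" over the two worlds of Figure 2.6,
# with truth values as read from the discussion (a true only at S1, b true only at S2).

worlds = {
    "S1": {"a": True,  "b": False},
    "S2": {"a": False, "b": True},
}

def possibly(prop):     # diamond: true in at least one associated world
    return any(prop(w) for w in worlds.values())

def necessarily(prop):  # box: true in every associated world
    return all(prop(w) for w in worlds.values())

a = lambda w: w["a"]
b = lambda w: w["b"]
a_and_b = lambda w: w["a"] and w["b"]

print(possibly(a), possibly(b), possibly(a_and_b))   # True True False: premises hold, conclusion fails
print(necessarily(a), necessarily(b))                # False False: the box premises do not hold
                                                     # in this particular situation
```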

2.5. Patterns of agent coordination

Coordination is central to a multi-agent system, for without it any benefits of interaction vanish and the society of agents degenerates into a collection of individuals with chaotic behavior. Coordination has been studied in diverse disciplines from the social sciences to biology. Biological systems appear to be coordinated even though cells or “agents” act independently in a seemingly non-collaborative way. Coordination is essentially a process in which agents engage to ensure that a community of individual agents with diverse capabilities acts in a coherent manner to achieve a goal. Different patterns of coordination can be found.

2.5.1. Organizational coordination

The easiest way of ensuring coherent behavior and resolving conflicts consists of providing the group with an agent having a wider perspective of the system, thereby exploiting an organizational structure through hierarchy. This technique yields a classic master/slave or client/server architecture for task and resource allocation. A master agent gathers information from the agents of the group, creates plans, assigns tasks to individual agents and controls how tasks are performed. This pattern is also referred to as a blackboard architecture because agents are supposed to read their tasks from a “blackboard” and post the states of these tasks to it. The blackboard architecture is the model of shared memory.

2.5.2. Contracting for coordination

In this approach, a decentralized market structure is assumed, and agents can take two roles, manager and contractor. If an agent cannot solve an assigned problem using local resources or expertise, it will decompose the problem into sub-problems and try to find other willing agents with the necessary resources/expertise to solve these sub-problems.

Assigning the sub-problems is engineered by a contracting mechanism consisting of a contract announcement, submission of bids by bidding agents, their evaluation and the awarding of a contract to the appropriate bidder. There is no possibility of bargaining.

2.5.3. Coordination by multi-agent planning

2.5.3.1. General considerations

Coordinating multiple agents is viewed as a planning problem. In this context, all actions and interactions of agents are determined beforehand, leaving nothing to chance. There are two types of multi-agent planning, namely centralized and decentralized. In centralized planning, the separate agents evolve their individual plans and then send them to a central supervisor which analyzes them and detects potential conflicts (Georgeff 1983). The idea behind this approach is that the central supervisor can: a) identify synchronization discrepancies between the plans of the stakeholders; b) suggest changes and insert them in a realistic common schedule after approval by the stakeholders. The distributed planning technique foregoes the presence of a central supervisor. Instead, it is based on the dissemination of each agent’s plans to all the other agents involved (Georgeff 1984, Corkill 1979). Agents exchange information with each other until all conflicts are removed, in order to produce individual plans coherent with those of the others. This means that each stakeholder shares information with its partners about its resource capacities.

2.5.3.2. E-enabled coordination along a supply chain

To illustrate the role of Information Technology, and especially telecommunications services, in coordination planning, the example of e-enabled demand-driven supply chain management systems will be described (Briffaut 2015). By e-enabled, it is meant that all transactions are engineered electronically, without paper, along the goods

flow from suppliers to clients. In particular, this context implies that customers place orders via a website. These cybercustomers expect to be provided without latency with data about product availability and delivery lead times. The role of information sharing is acknowledged to be a key success factor in Supply Chain Management (SCM) in order to secure efficient coordination between all the activities of the stakeholders involved along the goods pipeline. Coordinating multi-agent systems (MAS) is realized in the case of an e-enabled SCM through a common information system engineered to share the relevant data between the stakeholders involved. The MAS approach is a relevant substitute for optimization tools and analytical resolution techniques whose efficiency is usually limited to local problems, without any adequate visibility over the behavior of the entire chain of stakeholders involved. Optimization of the operations of a whole is different from optimization of the operations of its parts. A global point of view is required to bring under full control the synchronization of inter-related activities.

In traditional contexts, a front office and a back office can be identified in terms of interaction with customers. The front office carries out face-to-face dealings with customers while using a proprietary information system. Relationships with the other agents of the supply chain take place by means of messages exchanged between their information systems. Coordination between the front office and the back office is generally asynchronous and does not meet real-time requirements. When e-commerce is implemented via a website, the delineation between the back office and the front office of the previous configuration is blurred and has no reason to be taken into consideration. The two offices merge into one entity because of the response time constraint. When queries are introduced by cybercustomers via a website portal, the collaborative information system must have the ability to produce real-time answers (availability, delivery date). Then, the information systems of the various stakeholders along the supply chain have to be interfaced in such a way that coordination between them takes place synchronously.

In practice, a workflow of sorts has to be implemented between the “public” parts of the stakeholders’ information systems. In other words, some parts of the stakeholders’ information systems contribute to producing a relevant answer to cybercustomers’ queries. Figure 2.7 shows the changes induced by a portal in terms of information exchange between the supply chain stakeholders.

Figure 2.7. Coordination between proprietary information systems through a collaborative system
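As an illustration of this coordination pattern, the following Python sketch shows a collaborative portal fanning a cybercustomer query out to the “public” parts of the stakeholders’ information systems and aggregating a real-time availability answer; the class and function names (StakeholderSystem, quote_availability, portal_answer) are hypothetical and are only meant to mirror the description above.

from dataclasses import dataclass

@dataclass
class Quote:
    stakeholder: str
    quantity: int        # quantity the stakeholder can commit
    lead_time_days: int

class StakeholderSystem:
    """'Public' part of a stakeholder's proprietary information system."""
    def __init__(self, name, available, lead_time_days):
        self.name = name
        self.available = available        # item -> quantity the stakeholder publishes
        self.lead_time_days = lead_time_days

    def quote_availability(self, item, quantity):
        # Answer only from the data the stakeholder agrees to share.
        committable = min(quantity, self.available.get(item, 0))
        return Quote(self.name, committable, self.lead_time_days)

def portal_answer(item, quantity, systems):
    """Collaborative system: fan the query out and aggregate the answers."""
    quotes = [s.quote_availability(item, quantity) for s in systems]
    total = sum(q.quantity for q in quotes)
    lead_time = max((q.lead_time_days for q in quotes if q.quantity), default=None)
    return {"item": item, "fulfillable": total >= quantity,
            "total_quantity": total, "lead_time_days": lead_time}

systems = [StakeholderSystem("manufacturer", {"bike": 40}, 5),
           StakeholderSystem("distributor", {"bike": 15}, 2)]
print(portal_answer("bike", 50, systems))

The design point is that the portal never looks inside a proprietary system; it only combines the answers each stakeholder is willing to publish, which is what allows the response to be produced synchronously.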

2.5.3.3. Scenario setting for placing an order via a website

Each time a customer enters an order via the website, the supply chain collaborative information system shared by all stakeholders proceeds with an automatic check of projected inventories and uncommitted resource capacities per time period. If the item ordered is already available, the customer is advised accordingly and can immediately confirm the order. If the item is not available in inventory, the system checks whether the quantity and the delivery date requested can be met. In other words, the system checks whether manufacturing and supply capacities are available to meet the demand

on the due date. Otherwise, it checks what the best possible date could be and/or what split quantities could be produced by using a simulation engine. Then, the customer is advised of the alternatives and can choose an option suitable to him/her. Once the option is confirmed, the system automatically creates a reservation of capacity and materials in the order promising system and forwards the order parameters to the back-office systems to be included in the production plans of the stakeholders involved. Order acknowledgments and confirmations are generated and sent by email.

2.5.3.4. Mapping the order scenario onto the structures of a BDI agent

The role of the Beliefs structure is to record order entries, send answers and process transaction data to turn them into memorized statistics. These statistics are used as entry data to update the APS (Advanced Planning System). The APS is implemented as a control tool over a short time horizon and is used as a non-repudiable commitment taken by the manufacturing shops.

The Plan structure establishes and memorizes the APS pertaining to the supply chain as a whole. This means that this entity updates the supply chain APS on a regular time basis from data provided by the Beliefs structure. As the BDI agent acts as the front office of the supply chain with respect to the buyer side, it seems reasonable to ascribe it a centralized coordination role. Within this perspective, it draws up partial plans for the stakeholders on their behalf. When conflicts arise, it has the capability to resolve the imbalance of distributed resources. In other words, the Plan structure is assigned to implement the APS concept. It ensures that the data required to derive their partial APS are made available in due time to all agents involved along the goods flow. It has two major features:
– concurrent planning of all partners’ processes;
– incremental planning capabilities.
The APS is intended to secure a global optimization of all flows through the supply chain, not only by increasing ROI (Return on Investment) and ROA (Return on Assets) but also by fulfilling customers’ satisfaction and retaining their loyalty.

The Desires structure is in charge of supporting the use of the ATP and CTP parameters. ATP stands for Available-To-Promise and CTP for Capable-To-Promise. Per period of time, the ATP parameter makes it possible to deliver an answer to a client request in terms of availability (quantity and delivery date). Either the request can be fulfilled from a scheduled inventory derived from the enforced APS, or a simulation of sorts has to be carried out through the CTP parameter to send an answer to the client. The CTP parameter takes account of the lead times required to mobilize potentially available resources and allows a real-time answer to customer requests when necessary. When activated, it results in the production of a new APS. The APS technique is generally supposed to be able to produce rolling manufacturing plans to match the demands of the buyer side.

The fulfillment of the committed APS schedule is ascribed to the Intentions structure. The PAB parameter is managed by this structure because it includes all of what is recorded as committed (APS and customer orders). The PAB (Projected Available Balance) parameter represents the number of completed items on hand at the end of each time period. It can be viewed as a means of giving some margin in cases of temporary unserviceability of some resources.

Let us use an example to explain how the roles of the four structures (Beliefs, Desires, Intentions and Plans) are connected and how they are performed by the interpreter. An input message coming from a customer is captured by the Beliefs data structure and analyzed by the Interpreter. It deals with a change in the delivery schedule induced by a new supply order entry. If this change requirement falls within the time fence linked to the supply lead time, it is rejected. Otherwise, as the agent has to act, the available resource capacities for the time period, be they committed or uncommitted, the projected on-hand inventory and the uncommitted planned manufacturing output are analyzed. If one of the possible supply sources can meet the specification required, the agent needs to select appropriate actions or procedures (Plans) to execute from the set of functions available to it

and commit itself (Intentions). This simple scenario can be conceptualized by a repeat loop as shown below:

BDI-Interpreter
  Initialize-state [ ];
  repeat
    a) Options := read the event queue (Beliefs) and go to option ATP (Desires);
    b) Selected option ATP: if the ATP parameter proves relevant (order fulfilled without altering the existing MPS), then update its value and go to Intentions to update PAB; otherwise go to selected option CTP;
    c) Selected option CTP: if CTP proves relevant (possible adjustment of the current MPS while abiding by the commitments in force), then go to Intentions for updating; otherwise reject the request;
    d) Execute [ ];
  end repeat

2.6. Negotiation patterns

Negotiations are the very fabric of our social life. A negotiation is a discussion pertaining to decision-making with a view to an agreement, full or partial, or a compromise when the discussants have incompatible mind-sets. When differences in opinion between discussing parties arise, several strategies are instrumental in trying to resolve the issue by determining what the fair or just outcome should be. A first strategy might be that the parties have agreed to resort to a set of procedural rules defined beforehand for covering eventual cases of conflict and settling disputes. This situation can be formalized by a negotiation protocol. A second strategy is to seek the advice of a referee. This strategy is aimed at giving the power of intervening in the conflict to an unbiased person. But in this case, the power to decide on the issue remains in the hands of the discussants

with or without the referee taking part. A third strategy would be to transfer the full responsibility for taking a decisive decision on the pending issue. Then, the risk is that “asymmetric ignorance” between the parties involved leads to the absence of consensus when it comes to deploying the decision.

Coordination is predicated on the implicit idea that the agents involved share a common interest in achieving an objective. Negotiations do not necessarily take place between opponents and competitors, but the term often bears that connotation. There are probably many definitions of negotiation. In our opinion, a basic definition of negotiation has been given by Bussmann and Muller (1992): “…negotiation is the communication process of a group of agents in order to reach a mutually accepted agreement on some matter”. The purpose of any negotiation process is to reach a consensus for the “balanced” benefit of the parties. “Balanced” does not mean “optimal” for the parties involved, but what they may consider the least unfavorable solution. This process may be very complex and involve the exchange of information, the relaxation of initial constraints, mutual concessions, lies or threats. It is easy to imagine the huge and varied literature produced on the subject of negotiation.

Negotiation can be competitive or cooperative depending on the behavior of the individual agents. Competitive negotiation takes place in situations where independent agents with their own goals attempt to group their choices over well-defined alternatives. They are not a priori prepared to share information and cooperate. Cooperative negotiation takes place where agents share the same vision of their goals and are prepared to work together to achieve efficient collaboration.
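To make the idea of mutual concessions concrete, here is a minimal Python sketch of two agents negotiating a price by stepwise concession until one side finds the other’s offer acceptable; the protocol and all names (Negotiator, concede, negotiate) are illustrative assumptions, not a mechanism prescribed by the text.

class Negotiator:
    """An agent with an opening offer, a reservation (walk-away) price and a concession step."""
    def __init__(self, name, target, reservation, step):
        self.name, self.offer, self.reservation, self.step = name, target, reservation, step

    def acceptable(self, price, buying):
        # A buyer accepts prices at or below its reservation; a seller at or above.
        return price <= self.reservation if buying else price >= self.reservation

    def concede(self, buying):
        # Relax the current offer by one step, never beyond the reservation price.
        self.offer = min(self.offer + self.step, self.reservation) if buying \
                     else max(self.offer - self.step, self.reservation)

def negotiate(buyer, seller, max_rounds=20):
    for _ in range(max_rounds):
        if buyer.acceptable(seller.offer, buying=True):    # buyer accepts seller's offer
            return seller.offer
        if seller.acceptable(buyer.offer, buying=False):   # seller accepts buyer's offer
            return buyer.offer
        buyer.concede(buying=True)
        seller.concede(buying=False)
    return None   # no agreement within the allowed rounds

deal = negotiate(Negotiator("buyer", target=60, reservation=90, step=5),
                 Negotiator("seller", target=120, reservation=85, step=5))
print("agreed price:", deal)

With the figures chosen here the two offers converge and the deal is struck at 85; if the buyer’s reservation price were below the seller’s, negotiate would return None, i.e. no consensus is reached.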

2.7. Theories behind the organization theory

In spite of the everlasting claim by the French to be different in terms of culture (exception culturelle), many management tools currently used in France and introduced after the Second World War are based on imported concepts and practices. The costing system, called in French “comptabilité analytique”, is taken from the German costing system; the first textbooks on costing were published in Germany in the 19th Century (Ballewski 1887). In the same period, on the other hand, organizational concepts were imported from the USA under the wording “organization theory”. Most contributors to this discipline in the USA had a sociology background. This situation can be ascribed to the characteristics of the American cultural context. A telling insight can be found in the book The Growth of American Thought by Merle Curti (1964). Two chapters (“The Advance of Science and Technology” and “Business and the Life of the Mind”) are of special interest for understanding the involvement of sociologists in studying the working conditions of the labor force in large corporations. The promotion of applied science in the arts was oriented toward giving engineers and mechanicians the possibility of sharpening material benefits at the expense of the moral values so deeply ingrained in the Christian heritage of the Pilgrim Fathers.

The many aspects of what is called organization theory in English- and French-speaking contexts defy easy classification. No system of categories is perfectly appropriate for organizing this material. This is why some baseline theories explained in the following subsections can help to derive schemes eliciting the very nature of the multiple casts of thought in this realm in a business environment. It is important to realize that in the past decades, information and communication technologies have had a disruptive impact on the ways and means by which corporations and communities of all sorts have been redesigned to keep up with their environments. Here are the main features of the transformations perceived:

1) The enterprise is transformed from a closed system to an open system, a network of self-governing micro-enterprises with free-flowing communication among them and mutually creative

connections with outside contributors. Some popular wordings can be associated with this idea of openness (networked enterprise, open innovation, co-makership, etc.).

2) Employees are transformed from executors of top-down directions to self-motivated contributors, in many cases choosing or electing the leaders and members of their teams.

3) Purchasers of business offerings are transformed from customers to lifetime users of products and services designed to solve their problems and increase their satisfaction.

2.7.1. Structural and functional theories

This label covers a broad group of loosely associated approaches. Although the meanings of the terms structuralism and functionalism are not clear cut and admit a variety of variations, they designate the belief that social structures are real and function in ways that can be observed objectively (Giddens 1979). It is relevant at this stage to elaborate on the term “function”. When considering the function of a thing, a distinction has to be made between a) what the thing does in the normal course of events (its activity) and b) what the thing brings about in the normal course of events (the result of its activity). Of course, it is understood that the activity of a thing and the outcome thereof are strongly correlated with the structure of the entity under consideration. When a function is ascribed to an agent, it is usually implied that a certain purpose is served. The concept of mathematical function does not oppose the previous view but complements it by stressing the relation between two terms in an ordered pair. This pair of constituents can be, for instance, activity and result.

A functional explanation is a way of explaining why a certain phenomenon occurs, or why something acts in a certain way, by showing that it is a component of a structure within which it contributes to a particular kind of outcome. Systems theory is deeply rooted in the structural–functional tradition, which can be traced back to Plato and Aristotle. Modern structuralism generally recognizes E. Durkheim (1951), who emphasized the concept of social structure, and F. de Saussure, founder of structural linguistics, as key figures. The structural technical architecture of cybernetics has explicitly or implicitly permeated organization theory. This means that a structure is described in terms of controlled and controlling entities, with the underlying assumption that it has to deliver a targeted output within the framework of a contingent ecosystem. This cybernetics mind-set prevails from the design stage, whose process is called organizational design.

2.7.2. Cognitive and behavioral theories

This genre of theories is a combination of two different traditions that share many characteristics. They tend to espouse the same general ideas about knowledge as structural–functional theories do. Structural and functional theories focus on social and cultural structures, whereas cognitive and behavioral theories focus on the individual. Psychology is the primary source of cognitive and behavioral theories. Psychological behaviorism deals with the connection between stimuli and behavioral responses. The term cognition refers to thinking or the mind, so cognitivism tries to understand and explain how people think. Cognitivism (Greene 1984) goes one step further than behaviorism and emphasizes the information processing phase between stimuli and responses. Somehow, cognitivism tries to open the black box converting stimuli into responses in order to understand the mechanism involved.

These two groups of theories form a basis from which many other theories, revealing the tone and color of their upholders, can be derived. When the focus is put on the relations between the various entities of a structure, the structural and functional theories shift to what are called interactionist theories. When theories go further than merely describing a contextual situation and also criticize situations of this kind, e.g. on the grounds of conflicts of interest in society or of the ways in which one group perpetuates domination over another, they are called critical theories.

A behavioral view implies that beliefs are just dispositions to behave in certain ways. The problem is that our beliefs, including their propositional content indicated by a “that”-clause, typically explain why we do what we do. Explaining action via the propositional content of beliefs is not accommodated in the behavioral approach.

2.7.3. Organization theory and German culture

When you scrutinize the syllabi of German educational institutions in the field of management, what deals with organization theory (Organisationstheorie) is presented as “Grundlagen der Organisation, Aufbau-, Ablauf- und Prozess-” (Foundations of Organization: Structure, Fluxes and Process), often with an additional subtitle “Unternehmensführung und Strategie” (Enterprise Guidance and Strategy). Within this framework, seven issues have to be addressed to design a coherent organization. By coherent, it is meant that any organizational configuration has a purpose, an objective; otherwise, it is irrelevant to devote efforts, i.e. resources, to design and build an object without significance. The issues to address are:
– What is the purpose?
– How: what is required in terms of functional capabilities?
– What is required in terms of resources?
– When does this structure have to be operated?
– Where: what is the ecosystem of the location?

– What is the relevance of the strategy?
– What is the distribution complexity of deliveries?
The synopsis of Figure 2.8 portrays how the procedures for deriving the “Aufbauorganisation” and the “Ablauforganisation” components are systematically deployed and how they are combined to deliver an effective fully fledged business organization.

Figure 2.8. Systematic approach to derive the Aufbau- und Ablauforganisation components of a business organization (source: Knut Bleicher (1991) S. 49)

All these processes are underpinned by a theory developed in the field of sociology. The mind-set of the social system has been an important contribution to eliciting the problem of social complexity. Niklas Luhmann has made a major contribution to addressing this question in his books Soziale Systeme: Grundriss einer allgemeinen Theorie and Einführung in die Systemtheorie (Luhmann 1984; Luhmann 2002). In his social systems theory, Niklas Luhmann strives to incorporate the conceptual innovations of the 20th Century in the realm of social theory. He draws on systems theory, which is a major

conceptual innovation of the 20th Century, to provide a framework for describing modern society as a complex system of communication that has differentiated into a network of social subsystems. The systems theory worked out by Niklas Luhmann explores the collapse of the boundaries between observer and observed, from different angles and in a variety of contexts, within the framework of second-order cybernetics. Understanding the complexity of the observed system, the complexity of its observing environment and their combination is a challenging analytical exercise. Niklas Luhmann applied the autopoiesis concept to sociology through his systems theory ideas in an endeavor to come to terms with this conundrum.

The term autopoiesis (from Greek αὐτo- auto-, meaning “self”, and ποίησις (poiesis), meaning “creation, production”) refers to a system capable of reproducing and maintaining itself. The term was introduced in 1972 by the Chilean biologists Humberto Maturana and Francisco Varela (1980) to characterize the self-maintaining chemical reactions of living cells. Since then, the concept has also been applied to the fields of cognition, systems theory and sociology. The original definitions produced by Humberto Maturana and Francisco Varela (1980) are given in the following excerpts:

– “An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components which: i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; ii) constitute it (the machine) as a concrete unity in space in which they (the components) exist by specifying the topological domain of its realization as such a network” (p. 78).

– “The space defined by an autopoietic system is self-contained and cannot be described by using dimensions that define another space. When we refer to our interactions with a concrete autopoietic

system, however, we project this system on the space of our manipulations and make a description of this projection” (p. 89). This space of manipulations can be thought of as akin to Lewin’s field theory discussed in section 2.4.2.5.

Autopoiesis was originally presented as a system description intended to define and explain the nature of living systems. These structures, based on an inflow of molecules and energy, generate the components which, in turn, continue to maintain the organized bounded structure that gives rise to these components. An autopoietic system should be contrasted with an allopoietic system, such as a car factory, which uses raw materials (components) to generate a car (an organized structure) which is something other than itself (the factory). However, if the system is extended from the factory to include components in the factory’s “environment”, such as supply chains, plant/equipment, workers, dealerships, customers, contracts, competitors, cars, spare parts and so on, then as a total viable system it could be considered to be autopoietic. Though others have often used the term as a synonym for self-organization, Humberto Maturana himself stated he would “[n]ever use the notion of self-organization... Operationally it is impossible. That is, if the organization of a thing changes, the thing changes” (Maturana and Varela 1980). Moreover, an autopoietic system is autonomous and operationally closed, in the sense that there are sufficient processes within it to maintain the whole. Autopoietic systems are “structurally coupled” with their environment, embedded in a dynamic of changes that can be described as sensory-motor coupling. This continuous dynamic is considered as a rudimentary form of knowledge or cognition and can be observed throughout life forms.

Niklas Luhmann’s systems theory allows complexity to be simulated in order to explain it. It does so by creating a flexible network of selected interrelated concepts that can be combined in many different ways and thus be used to describe the most diverse social phenomena. Niklas Luhmann defines complexity in terms of a threshold that marks the distinction between two types of systems, those in which each element

can be related to every other element and those in which it is no longer the case. In information terms, complexity expresses a lack of information, preventing a system from completely observing itself or its environment. This drives observers to reduce complexity via the formation of system models that are less complex than their environment. This approach generates an asymmetrical, simplifying relationship to observed systems. This ability to reduce complexity leads to the fact that complexity cannot be observed, because “unorganized” complexity is transformed into organized complexity, so to speak. Niklas Luhmann insists on the difference between the conceptual abstraction (theoretically oriented) and the self-abstraction (structurally directed) of objects when modeling takes place. Conceptual abstraction makes comparisons possible, and self-abstraction enables the repetition of the same structures within objects themselves. The concept “system” serves to abstract facts that are related to objects exhibiting features justifying the use of this concept, and to compare them with each other and with other kinds of facts in order to assess their difference or similarity.

2.8. Organizations and complexity

2.8.1. Structural complexity

In the current parlance, an organization is a massively parallel system of agents’ concurrent behaviors. In spite of the fact that some agents may be acknowledged to have the mission to issue and control rules, each agent is responsible for its own actions with respect to the current aggregate patterns it is embedded in. If agents’ behaviors are consistent with these patterns, an organizational equilibrium prevails. As a matter of fact, an organization is organic, evolutionary and contingent. This means that an organization experiences an endogenously generated non-equilibrium. Idealized equilibrium models distort a reality that is not static and generate biased decisions by the stakeholders concerned. How people decide is important: they may

stand back from their current situation and attempt to make sense of it by surmising, making guesses, using past knowledge or their imagination. “We are in a world where beliefs, strategies and actions of agents are being tested for survival within a situation or outcome or ‘ecology’ that these beliefs, strategies and actions together create” (W.B. Arthur 2013). An organization is subject to inherent feedback and feedforward loops conducive to emerging organizational patterns with a relentless time arrow. New organizational patterns can result from “bifurcation”, as explained by Ilya Prigogine. This feature will be addressed in Chapter 3. Which branch of a bifurcation is followed is impossible to forecast and results from local “instabilities” developing into global changes. This type of situation turns out to be a source of uncertainty.

2.8.2. Behavioral complexity in group decision-making

Any group of agents in an organization is supposed to fulfill a mission and to reach a target by taking courses of action after a decision-making process has been explicitly or implicitly carried out. When individual decision-making is considered, arguments are made explicit to explain that “rationality” is the driving force that underpins the behavior of individuals. But what happens when this explanation is applied to groups of individual agents whose interests do not precisely coincide, but who are obliged for some reason or other to act jointly?

Consider, therefore, some alternative actions that pertain to a group of individual agents. By this, it is meant that the presence or absence of these alternatives affects each of them. Can any sense be made of the idea of preference between these alternatives where preference refers to them as a group? Let two alternatives be X and Y: X Pi Y means person i prefers X to Y; X Ii Y means person i is indifferent between X and Y. If for some decision-makers the preference is X Pi Y and the remainder of the decision-makers are indifferent between X and Y, it seems reasonable to say X P Y, where P refers to the group’s preferences.
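As a minimal illustration of this notation and of the rule just stated, the following Python sketch (the encoding of attitudes as strings is an assumption made for illustration) derives a group preference when some members prefer one alternative and all the others are indifferent; it also enumerates the 27 possible three-member profiles and counts how many are settled this way, anticipating the figures discussed below under one possible reading of “consensus” (nobody prefers the opposite alternative).

# Individual attitudes towards two alternatives X and Y:
#   "XPY" = prefers X to Y, "YPX" = prefers Y to X, "XIY" = indifferent.
def group_preference(attitudes):
    some_prefer_x = any(a == "XPY" for a in attitudes)
    some_prefer_y = any(a == "YPX" for a in attitudes)
    if some_prefer_x and not some_prefer_y:
        return "XPY"   # some members prefer X, the rest are indifferent
    if some_prefer_y and not some_prefer_x:
        return "YPX"   # some members prefer Y, the rest are indifferent
    if not some_prefer_x and not some_prefer_y:
        return "XIY"   # everybody is indifferent
    return None        # members disagree: no group preference is defined here

print(group_preference(["XPY", "XIY", "XPY"]))   # -> XPY
print(group_preference(["XPY", "YPX", "XIY"]))   # -> None

# Enumerating the 27 possible profiles for a three-member group:
from itertools import product
profiles = list(product(["XPY", "YPX", "XIY"], repeat=3))
settled = [p for p in profiles if group_preference(p) is not None]
print(len(profiles), len(settled), len(profiles) - len(settled))   # -> 27 15 12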

The kind of case we contemplate here is the preference between two courses of action. They are group phenomena in the sense that their effects on any one person may be seen by another group member to be relevant to their interest. It is an interesting and important part of social group coexistence to consider how these differences are dealt with and can be brought under control. This means that if group members disagree about the relative merits of the group situations X and Y, no meaning can be attributed to XPY, YPX or XIY. It may happen that the group is obliged to make a choice between X and Y, so one would then have to analyze their choice in terms other than those regarding group preferences. How can we explore the possibility of group preferences even when there is no agreement among group members? Inspired by the approach used by economists, we can define a preference function mirroring the goal(s) shared by all group members. This function relates individual preferences (independent variables) to the group goal (dependent variable).

Let us take an example. Suppose that there are three individuals and two situational actions X and Y. The possible preferences are XPY, YPX and XIY for each individual and the group. This means that there are 27 possible combinations of individual preferences within the group; they are listed in the following table. For each of these combinations of individual preferences, we must attach some group preference. It is clear that there is a wide spectrum of possibilities. One is to make the group’s preferences exactly the same as those of a particular member: this is dictatorial behavior that is likely to be rejected. A second possibility is to make the group’s preferences independent of individual preferences by, e.g. writing XIY all the time. This option seems pointless and rejects the cases where a consensus for XPY or YPX is shared by all individuals. Let us fill in these cases. They are 15 in number, leaving 12 where there is no consensus. Can we make any progress with these 12?

First, we can refer to Arrow’s impossibility theorem. It states that, when a group of at least two decision-makers is confronted with at least three alternatives, no group-wide consensus can be reached. This means that there is no preference function that satisfies the

Unanimity, the Independence and the Non-dictatorship axioms. An objective-oriented function maps each profile of individual preferences into a preference function. The preferences are defined on a set of at least three alternatives, and there are no restrictions on the preferences beyond the usual ordering properties. Unanimity says that when all individuals prefer an alternative X to other alternatives Y, Z, etc., then society must “prefer” X to Y, Z, etc.

Independence means that the only information relevant for determining “preference” on a set of alternatives is the individual preferences on the set. Non-dictatorship rules out an individual such that whenever they prefer X to Y, Z, etc., the group of individuals must “prefer” X to the other alternatives. Henceforth, I apply the word “preferences” and related expressions to the group of individuals as well as to individuals, without the quotation marks.

Some of the flavor of Arrow’s theorem can be seen by considering the so-called paradox of majority voting which lies at its heart. Let the preferences of three individuals among three alternatives X, Y and Z be as follows:
– individual 1: X P1 Y P1 Z;
– individual 2: Z P2 X P2 Y;
– individual 3: Y P3 Z P3 X.

Suppose these three people decide on their group choices by majority voting. If they vote on X versus Y, two of them (1 and 2) prefer X to Y, and one of them (3) prefers Y to X. It follows that the group choice is XPY. If they vote between Y and Z, two of them (1 and 3) prefer Y to Z, and one of them (2) prefers Z to Y. It follows that the group choice is YPZ. By transitivity, the group must now prefer X to Z; but if X and Z are voted on, two of them (2 and 3) prefer Z to X and only one (1) prefers X to Z. In other words, there is also a majority for Z against X. What this means is that if group preference is determined by majority rule, the transitivity principle may cease to hold. Another way of interpreting this fact is to note that the chronological order in which the issues are put on the agenda of discussions may be of crucial importance.

There are special cases for which the paradox of majority voting does not arise. The obvious case is when individual preferences are identical. In this case, the group preference will correspond to these identical individual preferences. A second possible case is when the

group members may be paired with one member left over. Assume that the pairings are arranged so that the preferences of one individual in each pair are exactly the opposite of those of the other individual. On all votes, therefore, these individuals will cancel each other out, leaving the odd person’s preferences to determine the group preferences.

It is clear that group decision-making is a “complex” context-dependent exercise. It is pointless to produce standard “recipes”, especially when projects such as information systems are implemented. As the project proceeds, the context changes, and the requirements can also change dramatically because the stakeholders experience unexpected changes in their working environment and decide, unconsciously or consciously, on unforeseen uses of their new equipment.

2.8.3. Autonomous agents and complexity in organization operations: inexorable stretch to artificial organization

2.8.3.1. The burgeoning backdrop

In the first era of the Internet, management thinkers talked up the networked enterprise, the flat corporation and open innovation as new business ecosystems, successors to the hierarchies of early-20th-Century industrial corporations. On the one hand, these hierarchies remain pretty much intact in big dot-com companies. On the other hand, new types of problems have emerged in terms of privacy, security and inclusion. They were solved by cryptography. New technologies, namely interconnected devices (Internet of Things), mass data storage, worldwide distributed ledgers (blockchain), etc., have enlarged these problems to such a scale that traditional organizational patterns in some economic sectors will be globally and locally overturned. It is not the purpose of this book to try to forecast how organizations will experience fundamental transformations in their structures and operations. An exercise of this sort is always hazardous. Let us focus on AI-driven autonomous agents, which are coming into play. They can be defined as AI-driven devices that, on behalf of their designers, take information from their environments and are capable of making choices,

taking courses of action. They can modify the ways to achieve their objectives, sensing and responding to their environments over time. Humans can interact with agents capable of varying degrees of autonomy, whether in the loop (with a human constantly monitoring the operation and remaining in charge of the critical decisions), on the loop (with a human supervising the agent and able to intervene at any stage of a process in progress) or out of the loop (with the agent carrying out its mission without any human intervention once launched). In this last situation, the autonomous agent is under potential threat of cyber-attacks without being able to detect them and take appropriate counter-measures. Crucially, the identity of the attacker (another autonomous agent?) may be ambiguous, leaving those under attack uncertain as to how to respond when aware of the situation. This context is prone to change the very nature of the economic-competition battlefield. Any individual in our digital economy will interact more and more with faceless hidden partners, in most cases without knowing whether the response received is sent by a human or a machine. Dealing with hidden partners is a source of uncertainty and anxiety leading to chaotic behaviors.

2.8.3.2. The looming artificial-intelligence-driven organization in the era of the Internet of Everything (IoE)

Organizations, whether they are populations, societies, groups or economic sectors, will have to come to terms with the co-presence of human agents, with their psychological profiles in terms of beliefs, attitudes and knowledge, and virtual agents, some of which are endowed with AI capabilities for deciding on action, communicating, reasoning, perceiving their environments and planning their objectives. The idea of DAC (Decentralized Autonomous Corporation) companies with no directors is being touted. They would follow a pre-programmed business model and would be managed by applications of the blockchain tenet. In essence, the blockchain tenet means a shared mass agreement of transactions between a closed cluster of stakeholders and distributed storage of encrypted data. It is a

ledger that keeps a record of the transactions accepted by all stakeholders and secures their storage. It is thought that the blockchain system would act as a way for a DAC to store financial accounts, insurance contracts, bonds or security records between the cluster members. This organizational architecture is appealing because outside intruders will find it hard to get access to encrypted data or to shut down the whole system, in view of its decentralization. Airing these ideas may be considered as science fiction and disruptive with respect to the current context. But virtual agents have been in operation for a long time, for instance computer-supported Management Information Systems (MIS), and new types are showing up on the economic stage. From finance (banking, payments, crowd-funding) to sharing economies (Uber and AirBnB-like platforms) to communications (social networks, email) to reputation systems (credit ratings, seller ratings on e-commerce sites) to governance, decentralized autonomous agents are already economic actors. Their possibilities seem endless in eliminating human intermediation in many industries, but trees do not grow up to reach the sky. Platforms like these may have massive implications on what the future will look like.

When this picture of the future face of the global economy is offered to the imagination of the public by futurologists, the basic characteristics of the laws of Nature should not be forgotten, namely nonlinearity, self-reorganization and chaos. Special attention should be given to the laws of biology. Economic societies, like all human and animal communities, are composed of living organisms, and as such, reference to the approaches and concepts developed in the realm of biology could be helpful for understanding the evolution of economic societies. Jacques Monod’s seminal book Le Hasard et la Nécessité – Essai sur la Philosophie Naturelle de la Biologie Moderne (Chance and Necessity – Essay on the Natural Philosophy of Modern Biology) (Monod 1971) is inspired by a quote attributed to Democritus: “Everything existing in the universe is the fruit of chance and necessity”. Monod contends that mutations are unpredictable and that natural selection operates only upon the products of chance. In Chapter 7, Monod states that the decisive factor in natural selection is not the “struggle for life” but the differential rate of reproduction. The only mutations “acceptable” to an organism are those that “do not lessen the coherence of the

teleonomic (end directed) apparatus, but rather, further strengthen it in their already assumed orientation” (Monod 1971, p. 119). Jacques Monod explains that the teleonomic performance is assessed through natural selection, and that this system retains only a very small fraction of mutations that will perfect and enrich the teleonomic apparatus. Jacques Monod makes the point that the selection of a mutation is due to the environmental surroundings of the organism and to the teleonomic performances.

What will be the balance of power between human agents and AI-aided virtual agents? It is likely that the future context will be an adapted evolution of the current one. What is impossible to forecast is the tempo of this evolution. In the 1980s, some futurologists forecast that by the end of the decade, cash money would have disappeared. Recently, the ECB has committed itself to keep printing bank notes. It is not a matter of technology but of psychology: people feel that they have full control over the cash money they can hoard.

Let us try to elaborate on the position of human agents in a “mixed” context, drawing on existing situations. Up to now, an important parameter has not been taken into consideration in the arena of Management Information System (MIS) design, namely the virtual nature of computer-aided information systems. A human making use of computer-aided information systems to collaborate in a business environment is exposed to a multiverse: a real-life universe and a virtual universe where interaction with a set of actors hidden behind a screen takes place. Figure 2.9 portrays this situation. Each stakeholder of a collaborative computer-based context is at the interface of two environments with which they interact, namely a virtual environment accessed via a human–machine interface and a real-life environment accessible through all their senses (sight, touch, hearing, smell), as shown in Figure 2.9.

Figure 2.9. A human agent at the interface of two universes

When a decision-making process is engineered, this human relies on a space of pertinent facts extracted from the real-life and virtual universes they are exposed to. In order to conceptualize how this space is operated, the theory of the coordinated management of meaning (CMM) can be called on. It is made up of schemata resulting from interpretative rules of meaning, deciphering messages and events coming from a real-life universe and from a virtual universe (computer-aided systems). Regulative rules of decision for action in the real-life universe are applied to derive data-driven decisions from memorized schemata. These rules refer to the modal logic that we practice, often implicitly.

In any situation where human and virtual agents (actors) interact, a context is made up of three dimensions:
– the agents involved, characterized by their functional capabilities and roles;
– the shared objectives and/or a common vision evolved from overlapping individual objectives;
– the environment surrounding the cluster of agents bound and committed to reach a common achievement.
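As a minimal sketch of how such a context could be represented in software (the class and field names are illustrative assumptions, not taken from the text), the three dimensions can be captured in a simple data structure:

from dataclasses import dataclass
from typing import List

@dataclass
class Agent:
    name: str
    kind: str                      # "human" or "virtual"
    capabilities: List[str]
    role: str

@dataclass
class Context:
    agents: List[Agent]            # dimension 1: the agents involved
    shared_objectives: List[str]   # dimension 2: shared objectives / common vision
    environment: str               # dimension 3: the surrounding environment

ctx = Context(
    agents=[Agent("planner", "human", ["scheduling"], "supply planner"),
            Agent("order-bot", "virtual", ["order entry", "availability check"], "front office")],
    shared_objectives=["meet the promised delivery date"],
    environment="e-enabled supply chain portal")
print(len(ctx.agents), ctx.shared_objectives[0])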

As is stressed in second-order cybernetics, what is designated as the environment of a system, here a cluster of agents, must be considered as an agent of the system. The following arguments are intended to demonstrate this. Message transmission between a sender and a recipient is prone to a distortion of content, often called filtering. This may incur biased interpretation by the receiver. It is an important feature to take into account when analyzing how agents communicate. Two notable elements are involved. The first is the semantics of messages, which can be considered as a dimension of semantic interoperability. The other is the fact that messages can deliver a biased understanding of the actual context, engineered by the senders. A common basis of knowledge should be ensured between all the actors involved in order to avoid misinterpretation and, as a consequence, inefficient decision-making resulting from ambiguous mutual understanding. But accurate communication without the impairment of meaning is seldom, if ever, found in the complex realities of business life. Figure 2.10 portrays the picture of an interactive context.

Figure 2.10. Description of a context shared by a set of actors interacting between themselves and their environment

Two interaction patterns between the actors, i.e. direct interaction and indirect interaction, can be identified. The idea of direct interaction is straightforward to understand. Indirect interaction means that the environment acts as a mediator that can influence the behaviors of the actors involved; the environment is then considered as a full actor of the system. This feature underlines the importance of the environment as an entity for context-awareness in collaborative environments. With this point of view, the environment must be considered not only as an actor in itself but also as the only communication channel between the set of actors when a medium-based virtual collaborative environment is implemented. The behavior of each actor in the set cannot be understood and analyzed without taking into consideration how the interacting environment is perceived by each actor and how it is operative. Figure 2.9 then changes into Figure 2.11.

Figure 2.11. Set of actors interacting between themselves via their environment
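To illustrate environment-mediated (indirect) interaction, here is a minimal Python sketch in which actors communicate only by posting to and reading from a shared environment object; the names (SharedEnvironment, post, read, etc.) are illustrative assumptions, not taken from the text.

class SharedEnvironment:
    """The only communication channel between the actors: a shared message space."""
    def __init__(self):
        self._messages = []

    def post(self, sender, content):
        self._messages.append((sender, content))

    def read(self, reader):
        # Each actor perceives the environment in its own way: here, it simply
        # ignores its own messages.
        return [(s, c) for (s, c) in self._messages if s != reader]

class Actor:
    def __init__(self, name, environment):
        self.name = name
        self.env = environment

    def act(self, content):
        self.env.post(self.name, content)   # indirect interaction: no direct addressee

    def perceive(self):
        return self.env.read(self.name)

env = SharedEnvironment()
alice, bob = Actor("alice", env), Actor("bob", env)
alice.act("order 50 bikes")
bob.act("capacity available in week 12")
print(alice.perceive())   # [('bob', 'capacity available in week 12')]

The point of the design is that no actor addresses another directly: the environment is both an actor of the system and the medium through which every interaction takes place.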

A virtual collaborative environment is an artefact of sorts to which different members of a community are given access and which acts as an intermediary between them, either to exchange information and knowledge about a technical field of common practice or to help solve problems and deliver results. In the current context, virtual collaborative environments are considered as computer-based information systems shared by members of communities of practice,

coming together virtually to exchange information and help members make pertinent decisions.

The universe each stakeholder is embedded in can be conceptualized as a space of tangible facts considered as relevant for making a decision in the real-life environment. To describe how this space is operated, the theory of the coordinated management of meaning (CMM) can be called on. It is the most comprehensive rule theory of communication, developed by Pearce and Cronen (1980). CMM states that individuals in any social situation want to understand what is going on and apply rules to figure things out. In other words, constitutive rules in CMM are rules of meaning and rules of decision for action. Rules of meaning are used to decipher a message or an event via interpretative rules. Rules of decision for action are regulative rules used to process interpreted messages or events. Rules of meaning and rules of decision for action are always context-dependent. Often, text (message or action) and context form a loop, so that each is used to interpret the other (Cronen, Johnson and Lannamann 1982). The space of tangible facts for each business actor is a repository comprising interpreted messages from information systems and relevant events from the business actor’s real-life environment, whether or not these events result from the business actor’s usual courses of action.

Let us delve into the background mechanism of the CMM rules of meaning. Kant uses the word “schema” when he argues, in his book Critique of Pure Reason, that in order to apply non-empirical concepts to empirical facts, a mediating representation is necessary. He calls it a schema. In the ordinary case, there is, according to Kant, homogeneity of a certain sort between concept and object. There is no similar homogeneity in the application of concepts such as the intuitive analysis of messages and events. To apply a causal analysis to a sequence of messages and/or events, so that one of them is regarded as the cause and another as the effect, is more problematic, because the concept of causality involves necessity. But necessity is not always an element in our experience. The concept of causality only tells us that first there is something and then something else. For example, the schema for causality is temporal succession according to

a rule; the schema for necessity is the existence of an object at all times. Figure 2.12 shows the conceptualized processing steps for diagnosing messages and events.

Figure 2.12. Conceptualized processing steps for diagnosing messages and events
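As a rough illustration of these processing steps, here is a minimal Python sketch in which constitutive rules of meaning interpret an incoming message and regulative rules of decision select an action from the interpretation, depending on the context in force; the rule contents and all names are illustrative assumptions, not taken from the text.

# Constitutive rules of meaning: decipher a raw message into an interpreted event.
MEANING_RULES = {
    "order": lambda msg: {"event": "new_demand", "item": msg.get("item")},
    "delay": lambda msg: {"event": "schedule_risk", "item": msg.get("item")},
}

# Regulative rules of decision for action: map interpreted events to actions,
# depending on the context in force.
ACTION_RULES = {
    ("new_demand", "capacity_available"): "confirm the order",
    ("new_demand", "capacity_tight"):     "propose a later delivery date",
    ("schedule_risk", "capacity_tight"):  "alert the supply planner",
}

def diagnose(message, context):
    rule = MEANING_RULES.get(message["type"])
    if rule is None:
        return "no interpretation available"   # no schema matches the message
    event = rule(message)
    return ACTION_RULES.get((event["event"], context), "gather more information")

print(diagnose({"type": "order", "item": "bike"}, "capacity_available"))
print(diagnose({"type": "order", "item": "bike"}, "capacity_tight"))

The same interpreted event leads to different actions in different contexts, which mirrors the context-dependence of the rules of decision for action stressed above.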

Rules give a sense of which interpretations and decisions for action appear to be logical or appropriate in a given context. This sense is called logical force. But this logical force has a contextual dimension. The issue raised at this point is how to make fixed logic rules and changing contexts compatible: our answer is modal logic, whose main features have been developed in a previous section.

2.9. References

Anderson, N.H. (1971). Integration Theory and Attitude Change, Psychological Review, vol. 78, pp. 171–206.

Arthur, W.B. (2013). Complexity Economics – a different framework for economic thought, Santa Fe Institute, Report 2013-04-012.

Ballewski, D. (1887). Die Kalkulation von Maschinenfabriken, Magdeburg.

Barnes, J. (1954). Class and Committees in a Norwegian Island Parish, Human Relations, no. 7, pp. 39–58.

Blackburn, P., de Rijke, M. and Venema, Y. (2001). Modal Logic, Cambridge University Press.

Bleicher, K. (1991). Organization – Strategien – Strukturen – Kulturen, Gabler Verlag, Wiesbaden.

Briffaut, J.P. (2015). E-enabled Operations Management, ISTE, London and John Wiley & Sons, New York.

Brown, R. (1952). Structure and Function in Primitive Society, Free Press, Glencoe, IL.

Bussmann, S. and Muller, J. (1992). A negotiation framework for co-operating agents, in Proc. CKBS-SIG, Dake Centre, University of Keele.

Cassin, B. (ed.) (2004). Vocabulaire Européen des Philosophies, Seuil Le Robert.

Coase, R.H. (1937). The Nature of The Firm, Economica, New Series, vol. 4, no. 16, pp. 386–405.

Corkill, D. (1979). Hierarchical Planning in a Distributed Environment, in Proceedings of the Sixth IJCAI, Cambridge, MA, Morgan-Kaufmann, San Mateo, California.

Cronen, V.E., Johnson, K.M. and Lannamann, J.W. (1982). Paradoxes, Double Binds and Reflexive Loops: an Alternative Theoretical Perspective, Family Process, 20, pp. 91–112.

Cross, R. and Parker, A. (2004). The Hidden Power of Social Networks, Harvard Business School Press, Boston, MA.

Curti, M. (1964). The Growth of American Thought, 3rd edition, Harper and Row, New York, Evanston and London.

de Finetti, B. (1937). Foresight: Its logical laws, its subjective sources, translated from French in Kyburg, H.E. Jr and Smokler, H.E., Studies in Subjective Probability, John Wiley, New York.

Dehaene, S. (2007). Le cerveau humain est-il une machine de Turing? In L’homme artificiel, J.P. Changeux (ed.), Odile Jacob, Paris.

Durkheim, E. (1951). Suicide: A Study in Sociology, Free Press, New York.

Ferber, J. (1999). Multi-agent System, Addison–Wesley.

Fishbein, M. and Ajzen, I. (1975). Belief, Attitude, Intention and Behavior, Addison–Wesley, Reading, Mass.

Georgeff, M. (1983). Communication and Interaction in Multi-Agent Planning, in Proceedings of the Third National Conference on Artificial Intelligence, Morgan-Kaufmann, San Mateo, California.

Georgeff, M. (1984). A Theory of Action for Multi-agent Planning, in Proceedings of the Fourth National Conference on Artificial Intelligence, Austin, Texas.

Giddens, A. (1979). Central Problems in Social Theory, University of California Press.

Greene, J.O. (1984). Evaluating Cognitive Explanation of Communicative Phenomena, Quarterly Journal of Speech, vol. 70, pp. 241–254.

Hume, D. (1978). A Treatise of Human Nature, I, I, section 7, Nidditch (ed.), Oxford University Press.

Hume, D. (1975). Enquiries Concerning Human Understanding and Concerning the Principles of Morals, Nidditch (ed.), Clarendon Press, Oxford, p. 25.

Keynes, J.M. (1921). A Treatise on Probability, Macmillan, London.

Kripke, S. (1963a). Semantical Considerations on Modal Logic, Acta Philosophica Fennica, vol. 16, pp. 83–89.

Kripke, S. (1963b). Semantical Analysis of Modal Logic I: Normal Propositional Calculi, Zeitschrift für mathematische Logik und Grundlagen der Mathematik, vol. 9, pp. 67–96.

Laplace, P.S. (1795). Lecture on probabilities delivered in 1795, included in A Philosophical Essay on Probabilities, translated by F.W. Truscott and F.L. Emory, John Wiley and Sons, New York (1902), Chapman and Hall Ltd, London.

Lewin, K. (1951). Field Theory in Social Science, Harper and Row, New York.

Luce, R.D. and Perry, A. (1949). Psychometrika, vol. 14, p. 95.

Luhmann, N. (1984). Soziale Systeme: Grundriss einer allgemeinen Theorie, Suhrkamp, Frankfurt am Main.

Luhmann, N. (2002). Einführung in die Systemtheorie, Carl-Auer-Systeme Verlag.

Agent-based Modeling of Human Organizations

117

Maturana, H. and Varela, F. (1980). Autopoesis and Cognition: the Realization of the Living, Boston Studies in the Philosophy of Science, vol. 42, Kluwer Academic Publishers. Monod, J., (1971). Chance and Necessity: an Essay on the Natural Philosophy of Modern Biology, Alfred A. Knopf Inc, New York. Morin, E. (1977), La Méthode (1): La Nature de la Nature, Le Seuil, Paris. Morin, E. (1991). La Méthode (4): Les Idées, leur Habitat, Le Seuil, Paris. Pearce, W.B. and Cronen, V.E. (1980). Communication, Action and Meaning, Präger, New York. Ramsay, F.P. (1926). Truth and probability in Foundations: Essays by F.P. Ramsey, Mellor, D.H. (ed), Routledge & Kegan Paul, London. Rao, A. and Georgeff, M. (1995). BDI Agents: From Theory to Practice, Proceedings of the first international conference on Multi-Agent Systems, San Franscisco. Rokeach, M. (1969). Beliefs, Attitudes and Values: A Theory of Organization and Change, Jossey-Bass, San Francisco. Rokeach, M. (1973). The Nature of Human Values, Free Press, New York. Savage, L.J. (1954). The Foundations of Statistics, John Wiley, New York. Tapscott, D. and Tapscott, A. (2016). Blockchain Revolution, Portofolio Penguin. Wasserman, S. and Faust, K. (1994). Social Network Analysis, Cambridge University Press. Wilson, T.D., Gilbert, D.T. and Centerbar, D.B. (2003). Making sense: The cause of emotional evanescence in Brocas, I. and Catillo J. (eds), The psychology of economic decisions, vol. 1, pp. 209–233, Oxford University Press, New York. Wyer, R.S. (1974). Cognitive Organization and Change, Erlbaum Hillsdale, NJ.

3 Complexity and Chaos

As far as the laws of mathematics refer to reality, they are not certain; as far as they are certain they do not refer to reality.
– Albert Einstein

Research in chaos is the most interesting current area that there is. I am convinced that chaos research will bring about a revolution in the natural sciences similar to that produced by quantum mechanics.
– Gerd Binnig, Nobel Prize winner in Physics (1986)

3.1. Introduction

Chaos science came to light in the mid-1970s and has had an impact on a host of scientific disciplines, from theoretical physics to insurance, economics, meteorology and medicine. It throws doubt on our ability to predict future events accurately. In spite of advanced and costly marketing research, a large proportion of new product introductions turn out to be failures: it sounds as if we are doing no better than chance, or even worse. Chaos science in fact affects our daily life. It can help us understand such phenomena as the spread of diseases, stock market volatility, the unreliability of weather forecasts and the variability of product sales.


Chaos science is the popular name for the study of nonlinear dynamic systems. It has made scientists more aware of the fact that most quantitative models they design and use are linearized approximations of what they observe as reality. Among the new understandings are the following:
– apparently simple systems can be quite complicated in their operations;
– apparently complicated systems can be quite simple in their operations;
– a system can be perfectly deterministic and yet its behavior impossible to predict.

In fact, the concept of chaos had already been apprehended in the ancient Chinese, Greek and Roman civilizations. In old Chinese traditional thinking, chaos and order are interrelated ideas. In Chinese mythology, the dragon symbolizes Yang, the active male principle of the universe; it is the concept of order that proceeds from chaos. In some Chinese stories of the creation, Yin, the passive female principle of the universe, is viewed as a ray of light springing from chaos to create the heavens. Yin and Yang, male and female ontological objects respectively, co-create the universe. Having sprung from chaos, Yin and Yang retain the stigmata of chaos: when one of them surpasses the other, chaos is brought back.

The Greek poet Hesiod (8th Century BC) wrote a cosmological composition in which he posited that chaos existed before all things and the earth; in other words, order comes from disorder. Nothing was deduced from this mythical idea until the 20th Century, when chaos theory emerged. The term is defined by Virgil (Georgica IV, 347) as "a state of confusion having preceded the organization of the world". It was used figuratively at the time of the later Roman Empire (4th Century AD – Marcus Victorinus) and by the Fathers of the Church (4th Century AD) to characterize the state of the earth before God's creative intervention. The word chaos was used to render the Hebrew tohu-bohu.


Chaos is derived from the Greek χαος, referring in mythology to the early state of the universe before the gods were born, and also conveying the ideas of infinite space, abyss and chasm (in modern Greek). It is also connected to the German word Gaumen (palate of the mouth) through the Indo-European root ghen, which expresses vacuum and lack. In the Antique and Christian cosmologies, chaos conveys the idea of the state of confusion of the elements before the world was organized. By extension, it suggests a state of great confusion, with a specialized acceptation in politics (Voltaire, 1756) and, in the concrete sense, the meaning of heaps of rock blocks (1796). In the 19th Century, the word chaos gained in science the special meaning of "random distribution, without describable order, of positions and velocities of molecules" of perfect gases in equilibrium. From 1890 onwards, the adjective chaotic showed up with a strong or weakened meaning of "incoherence, disorder".

Chaos has long been instanced by gases, the construct model of which is made up of the erratic motions of particles. The word "gas" was introduced by the Dutch physician and chemist van Helmont (1579–1644) from the Latin word chaos, by changing "ch" to "g" according to the pronunciation of "g" in Dutch.

When an observed phenomenon is qualified as chaotic by an observer, the question to investigate is: is this phenomenon deterministically chaotic or the consequence of genuine randomness? This is the core issue we shall address in this chapter.

3.2. Complexity and chaos in physics and chemistry

3.2.1. Introductory considerations

Physics is mainly concerned with abstracting simple concepts from a complex world. Newton found simple gravitation laws that could predict planetary motion on the basis of the attraction of two bodies. Results from atomic physics have demonstrated that the Schrödinger


equation is the correct description of a hydrogen atom, composed of a positive nucleus and an orbiting negative electron. A. M. Turing, when presenting his model of the chemical basis of morphogenesis (Turing 1952), insisted on the very nature of a model: "this model will be a simplification and an idealization, and consequently a falsification. It is to be hoped that the features retained for discussion are those of greatest importance in the present state of knowledge."

WYSIATI (What You See Is All There Is) does not hold true. Simple models are substituted for the exact encoding of physical laws. These models, when validated, yield the same predictions and results we observe. In some cases, we do not even know the exact equations governing the phenomena considered. These models have nothing to do with what happens at the micro-levels of phenomena.

When many-body systems are considered, the basic models are called upon to produce equations. In general, these equations have no analytical solutions. Physicists and chemists working in different disciplines have developed a variety of methods for obtaining approximate but reliable estimates of the properties and behaviors of many-body systems. These techniques are regarded as a "tool box" needed to tackle many-body problems. They may be used to study any many-particle system for which the interactions are known.

In astronomy, the three-body problem of gravitationally interacting bodies (for instance, the Sun, the Earth and the Moon) was studied by the mathematician Henri Poincaré at the turn of the 20th Century, but no analytical solution was delivered. Careful analysis shows that an unstable trajectory can develop under certain conditions of parameter values. This is a sign of possible chaos from a long-range point of view. Ahead of his time, almost 100 years ago, Henri Poincaré first suggested the notion of SDCI (sensitive dependence upon initial conditions). "Much of chaos as science is connected to the notion of sensitive dependence on initial conditions. Technically, scientists term as chaotic those non-random complicated motions that exhibit a rapid growth of errors that despite perfect determinism inhibits any pragmatic ability to render accurate long-term prediction." (Feigenbaum 1992)


The Russian-born Jewish physician Immanuel Velikovsky (1895–1979) investigated events related in the Bible. In his book Worlds in Collision (1950), he asserted that the orbits of Mars and Venus experienced a drastic change about 1000 years BC. His theory was claimed to solve some problems in the chronology of the ancient world.

If variables that complicate matters get in the way, they are generally dealt with by ignoring them or by assuming that their effects are random and do not result from a deterministic law: on average, in the long term, they will cancel out. Those who have taken a course in physics may well remember that most test problems ended with "ignore the effects of friction" or "assume that the wind resistance is zero". In the real world, friction cannot be ignored and the wind blows from all directions.

Most elementary physics textbooks describe a world that seems filled with very simple, regular and symmetrical systems. A student might get the impression that atomic physics is just the hydrogen atom, that electromagnetic phenomena mostly appear in the world in guises like the parallel-plate capacitor, and that regular crystalline solids are "typical" materials. The reality is that linear relationships are actually the exception in the world rather than the rule. A book written by the physicist Carlo Rovelli (2017) has the telling title Reality Is Not What It Seems.

It is critical to bear in mind that the approximation of reality by simplified models is valid only under certain conditions. When these conditions are no longer fulfilled, the models involved can no longer be referred to for understanding observations. The way to deal with this type of situation is to distribute observations into different categories, each of them described by its own model.

Let us take the example of inorganic chemistry. Chemical reactions can be itemized by the observable differences in the ease with which they occur. The following are their descriptions:
– type A. Reactions that occur very readily, as soon as the reagents are mixed. Reactions of this kind usually involve gases or solutions. In the case of a liquid reacting with a solid, the rate is limited only by the time taken for the reactants to meet and the products to break away;


– type B. Reactions that readily proceed if they are started with the application of heat, but that proceed very slowly or not at all when the reactants are mixed at room temperature. In some cases, instead of heat, a catalyst will start the reaction;
– type C. Reactions that take in heat and are termed "endothermic". In general, such reactions only continue if a high temperature is maintained by an external supply;
– type D. Reactions that proceed slowly but steadily. The velocity of reaction is well defined as the decrease in the concentration of one reactant per unit of time;
– type E. Reversible reactions: reactions that readily proceed under certain conditions, but reverse if the conditions are changed. A reaction that reaches equilibrium can be taken nearer to completion if one of the products is removed;
– type F. Reactions that do not proceed at all, at any temperature, without an external supply of energy.

Another interesting example is the description of the electrical properties of solid-state materials. They are categorized into three groups: conductors, semiconductors and insulators, each group being characterized by a specific model.

Furthermore, some models of what we call "reality" are dealt with in spaces different from the three-dimensional space we are used to perceiving around us. Two spaces are important: the phase space in classical physics and the Hilbert space in quantum physics. The phase space is constructed to "visualize" the evolution of a system of n particles. It has a large number of dimensions (three position coordinates and three momentum coordinates representing the location and motion of each particle). For n unconstrained particles, there will be a "space" of 6n dimensions. A single point of phase space represents the entire state of some physical system. In quantum theory, the appropriate corresponding concept is that of a Hilbert space. A single point of Hilbert space represents the quantum state of an entire system. The most fundamental property of a Hilbert space is that it is what is called a vector space, in fact a complex vector space. This means that any two elements of the space can be


added to obtain another such element, with complex weightings. Complex refers here to complex numbers.

Before the advent of computers, it was difficult to solve nonlinear equations numerically. That is the reason why great efforts were made to find approximate linear relationships in place of nonlinear relationships. However, this search for simple models makes our description of the world seem to be a caricature rather than a portrait. Three basic questions for trying to understand the fabric of the laws we are confronted with as humans are:
– how do simple laws give rise to richly intricate structures? Many examples of this can be found in the natural sciences and in economics;
– why are such structures so ubiquitous in the observed world?
– why is it that these structures often embody their own kind of simple laws?

Let us consider some facts and their meaning in an attempt to answer these questions. The symptom of WYSIATI (What You See Is All There Is) has become the syndrome of our addiction to relying only on our sense-data. When we cannot interpret and visualize what our brain has constructed from the sense signals received, we are tempted to qualify the situation as "complex". We depend upon the perceivable (measurable) variables delivered by our senses or by the equipment made available. These variables are derived from statistical laws. The micro-causes of effects are hidden from our senses. Statistical laws are applicable to a large assembly of entities and describe it in static and dynamic quantities according to the context. When a variable appears stable, we are in fact perceiving an average value manifesting the behaviors of a large number of interacting entities. A statistical distribution is maintained because of the existence of interaction processes between these entities.

Chaos, short for deterministic chaos, is quite different from randomness, in spite of the fact that it looks like randomness. Chaos is absolutely fundamental to differential equations (or difference equations when discrete time events are considered), and these are the basic material required to encode physical laws. The paramount


consequence is that irregular behavior need not have irregular causes. Deterministic implies that knowledge of the present determines the future uniquely and exactly. Models in classical physics are underpinned by two main principles, Newton's law of motion and the conservation of mass:
– Newton's law of motion states that the acceleration of a body is proportional to the forces that act on it, regardless of their very nature, gravitational or electromagnetic;
– the principle of mass conservation, implicit in numerous models, states that the rate at which a substance increases or decreases within some closed environment must be due solely to the rate at which matter enters from the outside minus the rate at which it leaves. If there are sources or sinks of matter inside the closed environment, they have to be accounted for.

It is noteworthy that these principles deal with motion, i.e. the variation of a distance or physical quantity over time. This implies that models are embodied by differential equations with respect to time, Ẋ = F[X(t)], where X(t) represents all the time-dependent parameters characterizing the model under consideration and Ẋ is the derivative of X(t) with respect to time.

The stability of solutions derived from differential equations is a central issue to address. The question of stability is related to an equilibrium state X̄. Equilibrium states are also called fixed points, or orbits when periodic motions occur. The states X̄ are solutions of the equation Ẋ = 0. The issue to address is: if a physical system initially at rest is perturbed, does it gradually return to the initial position, does it wander about, or will it at least remain in the vicinity of the initial position when t → ∞? If the physical system returns to the initial position X̄, then it is said to be stable; otherwise it is unstable. Unstable equilibria represent states that one is unlikely to observe physically. When the trajectory of points X(t) starts in a domain of initial values X(0) for which X(t) → X̄ as t → ∞, then X̄ is said to be asymptotically stable. The domain is called the basin of attraction of X̄, and X̄ its attractor.
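A minimal numerical sketch (not part of the original text, using an arbitrarily chosen one-dimensional system) may help fix these notions. For Ẋ = X(1 − X), the fixed points are X̄ = 0 and X̄ = 1; iterating an explicit Euler scheme from different initial values shows that X̄ = 1 attracts every trajectory started at X(0) > 0, so its basin of attraction is the whole positive half-line, while X̄ = 0 is unstable.

```python
# Illustrative sketch: stability and basin of attraction for dX/dt = X(1 - X).
# Fixed points: X = 0 (unstable) and X = 1 (asymptotically stable).

def simulate(x0, dt=0.01, t_end=20.0):
    """Integrate dX/dt = X(1 - X) with an explicit Euler scheme."""
    x = x0
    for _ in range(int(t_end / dt)):
        x = x + dt * x * (1.0 - x)
    return x

for x0 in (0.001, 0.5, 2.0):
    print(f"X(0) = {x0:5.3f}  ->  X(20) = {simulate(x0):.4f}")
# All three trajectories end close to 1. The first one starts from a tiny
# perturbation of the equilibrium X = 0 and is driven away from it, which is
# the numerical signature of an unstable fixed point.
```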


Several models are represented by nonlinear differential equations, which is commonplace in reality. In this case, how is their stability at the equilibrium states to be investigated? The answer is linearization. This means that the curve representing the spatial unfolding of the time-dependent variables is replaced by its tangent at the fixed points, and the local stability is analyzed on this basis. So to speak, linearization is a misleading rendering of reality, but it is the only way we have found to come to terms easily with the complexity of reality.

In the next section, the behavior of a quadratic iterative function is described. It serves as a simple instance of deterministic chaos in many publications. It shows how one of the simplest equations, modeling a population of living creatures, can generate chaotic behavior.

3.2.2. Quadratic iterator modeling the dynamic behavior of animal and plant populations

One of the first to draw attention to the chaotic behavior of nonlinear difference equations was the Australian ecologist Robert May, when in 1975 he published a paper on modeling changes in plant and animal populations in an ecosystem when there are limits to the available resources. Letting time t tick in integer steps corresponding to generations, the incumbent population is measured as a fraction representing the ratio between its current value and an estimated maximum value. The equation states:

Xt+1 = k Xt (1 − Xt)

It is called the logistic equation and can easily be transformed into a difference equation:

(Xt+1 − Xt)/(t + 1 − t) = ΔXt/Δt = k Xt [(k − 1)/k − Xt]

The population should settle to a steady state when Xt+1 = Xt. This leads to two steady states: population 0 and (k − 1)/k. A population of size 0 is extinct, so the other value applies to an existing situation.
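A few lines of Python (an illustrative sketch, not part of the original text) make this steady state tangible: iterating Xt+1 = k Xt(1 − Xt) for a moderate value of k drives the population fraction to (k − 1)/k.

```python
# Illustrative sketch: iterate the logistic equation X_{t+1} = k X_t (1 - X_t)
# and compare the trajectory with the predicted steady state (k - 1)/k.

def logistic_orbit(k, x0, n):
    """Return the first n iterates of the logistic map started at x0."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(k * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

k = 2.5                                   # a value for which the fixed point is stable
orbit = logistic_orbit(k, x0=0.1, n=50)
print("predicted steady state:", (k - 1.0) / k)   # 0.6
print("X_50 after iteration  :", orbit[-1])        # close to 0.6
```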


This quadratic iterator, a simple feedback process, has been widely investigated (Feigenbaum 1992). Its behavior depends on the value of the parameter k: as a function of this value, a wide variety of behaviors can be observed. The mechanisms are made clearly observable by graphical iteration (Figure 3.1). Graphical iteration is represented by a path with horizontal and vertical steps, which is called a poly-line for convenience.

Figure 3.1. Graphical paths to derive the iterates step by step from the initial value X0

The parabola k X(1 − X) is the graph of the iteration function and is the locus of the points (Xt, Xt+1). The vertex of the parabola has coordinates (X = 0.5, k/4). The parabola intersects the bisector at the fixed points P0 = 0 and Pk = (k − 1)/k along the abscissa scale. Only the initial values X0 = 0 and X0 = 1 are attracted by P0. We start with an initial value X0 along the abscissa scale. At this value, a perpendicular to the abscissa scale is erected until it intersects the parabola at the point X1, whose ordinate value is read along the ordinate scale. Via the bisector, the point X1 is transferred to the abscissa scale, and the same mechanism is iterated to yield the point X2. This procedure is repeated until a stabilized situation is reached. By stabilized, it is meant that the procedure produces an equilibrium


point or a well-defined periodic cycle of different states. In some situations, neither an equilibrium point nor a periodic cycle is observed: the behavior appears chaotic. For each starting value, the number of steps required to reach a stable situation (rest point or cyclic loop) is computed and called a "time profile".

When k < 1, all initial values converge to P0, which is a stable point. Any point initially derived from X0 = 0 by a small deviation ε is attracted back to P0.

When 1 < k < 2, all initial values X0 progress step by step to an equilibrium value, which lies at the intersection of the bisector and the parabola along the abscissa scale. In this situation, P0 (Xt = 0) is not a stable fixed point: when Xt = 0 is shifted to Xt+n = 0 + ε through some perturbation, Xt migrates step by step monotonically to the equilibrium value at the intersection of the bisector and the parabola. By monotonically, it is meant that the approach to the equilibrium value takes place by either decreasing or increasing values, without oscillations.

When 2 < k < 3, all initial values X0 progress step by step, through a poly-line of damped oscillations, to an equilibrium value which lies at the intersection of the bisector and the parabola along the abscissa scale. By damped oscillations, it is meant that the equilibrium value is alternately approached by higher and lower values whose amplitudes decrease.

The fixed point Pk is thus the attractor of all poly-lines when 1 < k < 3. When 3 < k ≤ 4, Pk loses its stability: the iterates settle into periodic cycles whose period doubles as k increases until, for most values of k near 4, the behavior becomes chaotic. When k > 4, it can be proved that the points of the interval [0, 1] are no longer all mapped into that interval; hence, this case is not investigated within the framework of this iterator.

The quadratic iterator, often called the logistic function, is described in many contexts as the "showcase" of chaotic behavior. Three features of chaos have emerged from this instance, i.e. sensitivity to initial conditions, mixing and periodic points. Let us elaborate on each of them.

3.2.2.1. Sensitivity to initial conditions

Sensitivity to initial conditions is a characteristic that all chaotic systems definitely have. However, sensitivity does not automatically lead to chaos. It implies that any arbitrarily small change in the initial value of a chain of iterations will more or less rapidly increase as the number of iterations increases. Minute differences in the input value during an iteration process can generate diverging evolutions. These minute differences matter at each step and accumulate as the computing procedure is running, since the output of an iteration run is fed back as the input of the next iteration run.

A telling example of sensitivity is shown in Table 3.1. The values of the quadratic dynamic law x + kx(1 − x) are sequentially calculated with a Casio machine according to two modes, i.e. x added to kx(1 − x) for one mode and kx(1 − x) added to x for the other. As the


addition of two numbers is commutative (a + b = b + a), we expect the same results.

Iterations   x + kx(1 − x)    kx(1 − x) + x
1            0.0397           0.0397
2            0.15407173       0.15407173
3            0.5450726260     0.5450726260
4            1.288978001      1.288978001
5            0.1715191421     0.1715191421
10           0.7229143012     0.7229143012
11           1.323841944      1.323841944
12           0.03769529734    0.03769529724
13           0.146518383      0.1465183826
20           0.5965292447     0.5965293261
25           1.315587846      1.315588447
30           0.3742092321     0.3741338572
35           0.9233215064     0.9257966719
40           0.0021143643     0.0144387553
45           1.219763115      0.497855318

Table 3.1. Calculation of x + kx(1 − x) versus kx(1 − x) + x with k = 3 and x = 0.01 (excerpt from Feigenbaum, 1992)

The outcome of the exercise is surprising, disturbing and puzzling:
– surprising because it clearly displays how rapidly a minute difference can be amplified. There is total agreement between the two calculation modes until the 11th iterate. Then, in the 12th iterate, the last three digits differ (734 versus 724). By the 40th iterate, the two calculation modes differ by a factor of almost seven;
– disturbing and puzzling because what we observe here has to be assigned to the finite accuracy of the computing device.
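The same kind of divergence can be reproduced with a short Python sketch (illustrative only, not taken from the original text): two trajectories of the law x → x + kx(1 − x) of Table 3.1, started 10⁻¹⁰ apart, agree for a few dozen iterations and then separate completely.

```python
# Sensitivity to initial conditions for the iterator x -> x + k*x*(1 - x), k = 3
# (the law of Table 3.1). Two initial values differing by 1e-10 are iterated side
# by side; the gap grows roughly exponentially until the orbits decorrelate.

k = 3.0
x, y = 0.01, 0.01 + 1e-10
for n in range(1, 61):
    x = x + k * x * (1.0 - x)
    y = y + k * y * (1.0 - y)
    if n % 10 == 0:
        print(f"iteration {n:2d}: x = {x:.10f}  y = {y:.10f}  |x - y| = {abs(x - y):.3e}")
# After a few dozen iterations the difference is of the same order of magnitude
# as the values themselves: long-term prediction is lost despite perfect determinism.
```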


Computers have contributed to the discovery of the chaotic properties of physical phenomena modeled by recursive discrete functions. They are able to carry out thousands upon thousands of iteration runs in a short time. One may wonder whether computers are a reliable means for exploring the chaotic properties of a phenomenon sensitive to initial conditions. The question to address is: is what is observed to be assigned to the very nature of the phenomenon or to the finite accuracy of the computing system used? This issue has to be dealt with great care and must be sorted out in any attempt to come to terms with the virus of unpredictability.

3.2.2.2. Mixing

The mixing property can be described in the following way: for any two intervals I and J of the variable under consideration, initial values in I can be found so that, when iterated, these values will be directed to points in J. Some initial points will never reach the target interval. This approach is an intuitive way to explore how a small error is amplified in the course of iteration.

3.2.2.3. Periodic and fixed points

f^n(X) denotes the n-fold composition of f with itself, that is f^n(X) = f(f^(n−1)(X)). Periodic points are defined by the equation f^n(X) = X, where n is the number of iteration runs after which the system comes back to the initial value. When iteration starts from a periodic point, only a few intervals are targeted iteration after iteration. Fixed points are defined by the equation f(X) = X. These are points that are mapped onto themselves by the function. They are not necessarily attractors.

3.2.3. Traces of chaotic behavior in different contexts

As interest in chaos spreads, examples of chaotic situations can be spotted lurking unnoticed in earlier analyses. Traces of chaotic behavior can be found in the field of operational research, where differential and difference equations are instrumental in solving optimization problems, and in the control of business information systems.


3.2.3.1. Inventory control

Consider a simple model showing why the chosen control parameters and the associated algorithm are a critical matter of interest, because inventory incurs cost and is at risk of obsolescence. Let us assume that a distribution company fulfils customer orders on demand from inventory and tries to keep inventory levels under full control. The situation can be described as follows:
– market demand Qd,t is an unlagged linear function of the market price Pt: Qd,t decreases linearly as Pt increases, which conforms to economic theory;
– the adjustment of price is effected not through market clearance, but through a process of price-setting by the sales department of the distribution company. At the beginning of each time period, the sales department sets a selling price for that period after having taken the inventory situation into account. If, as a result of the preceding-period price, inventory accumulated, then the current-period price is set at a lower level than previously; if, instead, inventory was depleted, then the current price is set higher than the previous value;
– the change in price made at the beginning of each period is proportional, with a negative sign, to the difference between the supply quantity and the actual market demand during the previous period, i.e. to the on-hand inventory built up at the end of each period.

Under these assumptions, the inventory control model can be made explicit by the following equations:

Qd,t = α − β Pt
Qs,t = K (fixed supply quantity)
Pt+1 = Pt − σ [Qs,t − Qd,t]

where σ denotes the stock-induced price adjustment.


By substituting the first two equations into the third one, the model can be condensed into a single difference equation:

Pt+1 − Pt + σβ Pt = σ(α − K), or Pt+1 − (1 − σβ) Pt = σ(α − K)

Its solution is given by:

Pt = [P0 − (α − K)β⁻¹] (1 − σβ)^t + (α − K)β⁻¹

(α − K)β⁻¹ is the equilibrium price P̄ at which the market demand matches the fixed supply quantity. The expression of Pt becomes:

Pt = [P0 − P̄] (1 − σβ)^t + P̄

The inventory level It at the end of time period t is the inventory It−1 at the end of time period t − 1, plus the quantity K supplied during time period t, minus the market demand Qd,t during time period t:

It = It−1 + K − (α − β Pt), or It − It−1 = K − (α − β Pt)

The dynamic stability of Pt, as well as of the changes in inventory levels at the end of the time periods (It − It−1), hinges on the parameter 1 − σβ. Table 3.2 presents the situation.

1 − σβ               σ                  Behavior of Pt
0 < 1 − σβ < 1       0 < σ < 1/β        no oscillation and convergence
1 − σβ = 0           σ = 1/β            remaining in equilibrium – fixed point
−1 < 1 − σβ < 0      1/β < σ < 2/β      damped oscillations
1 − σβ = −1          σ = 2/β            uniform oscillations
1 − σβ < −1          σ > 2/β            explosive oscillations

Table 3.2. Dynamic behavior of Pt as a function of the parameter (1 − σ β)
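The regimes of Table 3.2 can be reproduced with a short simulation (an illustrative Python sketch, with arbitrarily chosen values of α, β and K): the same difference equation is iterated for several values of σ on either side of the thresholds 1/β and 2/β.

```python
# Illustrative simulation of the inventory control model:
#   Qd,t = alpha - beta*P_t,  Qs,t = K,  P_{t+1} = P_t - sigma*(K - Qd,t).
# Depending on sigma, the price converges, oscillates uniformly or explodes.

alpha, beta, K = 120.0, 2.0, 80.0                 # arbitrary demand curve and fixed supply
print("equilibrium price:", (alpha - K) / beta)   # 20.0

def price_path(sigma, p0=30.0, periods=8):
    prices = [p0]
    for _ in range(periods):
        demand = alpha - beta * prices[-1]
        prices.append(prices[-1] - sigma * (K - demand))
    return prices

for sigma in (0.2, 0.7, 1.0, 1.2):                # thresholds: 1/beta = 0.5 and 2/beta = 1.0
    path = ", ".join(f"{p:7.1f}" for p in price_path(sigma))
    print(f"sigma = {sigma:3.1f}: {path}")
```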


Let us investigate the sensitivity of the explosive oscillations of Pt to the initial value P0 when σ > 2/β. When P0 is changed into P0 + ε, Pt becomes:

Ptε = Pt + ε (1 − σβ)^t, or |Ptε − Pt| = ε |1 − σβ|^t

|Ptε − Pt| increases exponentially to ∞ as t → ∞. The change in inventory level at the end of each time period t, (It − It−1), follows the same behavior.

3.2.3.2. Business monitoring systems

Any business organization is made up of subsystems integrated into a total system. These subsystems are business functions (production, sales, human resources, finance) consisting of human and automatic operators and equipment of various sorts, all of which interact with each other to form the hub of business operations. Data flows between these subsystems secure the appropriate orchestration of operations, arranged in a logical sequence for achieving set objectives as efficiently as possible.

Business systems require resources to enable production as well as administrative functions to operate. Finance is considered the major resource because it is an enabling resource for obtaining the other resources. Information is a critical resource, essential to the effective coordinated operation of the other resources. It is managed by what are referred to as "information systems".

When systems are highly integrated, which is the current situation, they become too complex to understand, not only for the layman but also for their human operators. If one part of the system ceases to function correctly, this may cause the system as a whole to degrade or even to come to a full operational standstill. The trendy integration of systems (Systems of Systems – SoS) does not help to deal with random influences, as they occur, without too much disruption. Decoupling subsystems is not a viable solution either. The graceful degradation of systems performance has to be taken into account at


the design stage, and its robustness checked by crash tests to explore potential systemic failures and establish the appropriate countermeasures to deploy in case of emergency.

In addition to technical problems, the influence of human elements on a complex system must not be forgotten. When management sets objectives, the behavior of the people involved in their fulfillment has to be taken into account. Discrepancies may be revealed between organizational objectives and the objectives of the personnel, who form the very fabric of the organization. A great deal depends on the motivational influences and their values to individuals. The behavioral aspects of people at work cannot be programmed. Conflicts of goals can be a serious source of disruption and dysfunction in business environments.

Time lags in data transmission are an important source of untimely, and thus inappropriate, decisions. Instructions applied to a subsystem with a delay, after it has evolved from the state in which it was sensed, are prone to trigger behaviors that are difficult to bring under control. We shall focus on this feature in the following narrative.

The control entity is assumed to be employing a battery of state sensors to control a variety of targets. We may think of it as monitoring the behavior of planned objectives, adjusting the action instruments, sensing the effects of the instruments on the targets and readjusting the action instruments until the sensed effects are in line with the set objectives. This closed-loop process of observation and action may be continuous or triggered at predetermined time intervals. Regardless of the type of closed-loop process, when data are collected from different sources, they have to be aggregated and processed to be made exploitable by the control unit.

The time lags may be thought of as comprising three parts: the deployment lag, the data collection lag and the reporting lag, as portrayed in Figure 3.2. Regardless of the nature of time lags and their duration, a time-lag structure is a whole array of quantities occurring with a range of delays. It is outside the scope of this book to go into detailed


quantitative technicalities of delays, but some simple examples can illustrate what time lags mean in terms of business control. They are shown in Figure 3.3.

Figure 3.2. Structures of time lags in a business control loop

Assume that the planned target is subject to a step increase OA at time t1 (part 1 of Figure 3.3). What is the response time for the planning unit to receive data signaling that the order has been fulfilled? The other parts of Figure 3.3 show how the step response can converge to a definite value. Parts 2 and 3 portray two time-phased profiles of contributions reaching the increase OA at time intervals t1 + 1, t1 + 2, t1 + 3, and so on. Part 4 represents a combination of the dynamic effects shown in parts 2 and 3.

In this example, we have assumed that the increase converges along a smooth path to the desired value. Many other types of behavior (a cyclical path temporarily diverging but ultimately converging, or explosive oscillations) can be observed when the state of the controlled processes is taken into account. A critical factor is the difference between the state of the controlled processes when the decision was taken at the planning level and their state when the decision has actually been applied. The corrective course of action is engineered to remedy a situation which no longer exists. This is described as the time lag between physical events and information flows. Let us take a simple example to describe such circumstances.


Figure 3.3. Examples of time-lagged response to a step increase in target value


Figure 3.4. Effects of time lags on an oscillating production system

If a production department is reported to the central planning unit to have failed to meet the set target, corrective action could be taken to increase the output level. If the adjustment order is transmitted with delay, it is possible that the local production manager has already taken steps to modify the production schedule. The closed-loop system can then amplify the disequilibrium between the desired output and its actual level. Alternating positive and negative feedback actions, intended to increase or decrease the output level and reduce stock-out situations, will, when implemented at the wrong time, contribute to creating explosive oscillations as they are iterated. Deviations are amplified instead of being damped. Figure 3.4 portrays the effect of time lags on an oscillating system. Data relative to Figure 3.4 (production per week, delayed information flows, corrective actions taken) are summarized in Table 3.3.


Week    Normal output    Information received    Corrective action    Adjusted output
n       −200             –                       –                    –
n+1     +200             for week n              +200                 +400
n+2     −400             for week n+1            −400                 −800
n+3     +800             for week n+2            +800                 +1600
n+4     −1600            for week n+3            −1600                −3200

Table 3.3. Time lags between physical events and information transmission – consequences of corrective courses of action
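The doubling pattern of Table 3.3 stems from a very simple mechanism, which the following Python sketch reproduces (an illustration using the same arbitrary figures as the table): each week the planner corrects on the basis of the deviation reported for the previous week, while the production process has already swung to the opposite side.

```python
# Week-by-week sketch of Table 3.3: the corrective action tries to cancel the
# deviation reported for the PREVIOUS week, while the process has already swung
# to the opposite side. Each cycle doubles the amplitude of the oscillation.

deviation = -200                      # deviation from target in week n
adjusted = deviation                  # no corrective action yet in week n
print(f"week n  : deviation {deviation:+6d}, adjusted output {adjusted:+6d}")

for week in range(1, 5):
    deviation = -adjusted             # the process swings to the opposite side
    correction = -adjusted            # planner reacts to last week's reported figure
    adjusted = deviation + correction # net effect: the swing is doubled
    print(f"week n+{week}: deviation {deviation:+6d}, correction {correction:+6d}, "
          f"adjusted output {adjusted:+6d}")
```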

Figure 3.5. Effect of positive feedback on an oscillating production system

Business activities are always subjected to internal and external random perturbations and, as a result, hardly ever achieve steady operational states. They often exhibit oscillating results, which is perceived as a dysfunctional chaotic pattern. Feedback mechanisms are engineered to hunt after the target state. Figures 3.5 and 3.6


visualize the effects of positive and negative feedback actions, respectively.

Figure 3.6. Effect of negative feedback on an oscillating production system

3.3. Order out of chaos

3.3.1. Determinism out of an apparently random algorithm

Let us give a simple example to show how an apparently random algorithm creates deterministic shapes. It will quite dramatically change our intuitive idea of randomness. Let us take a die whose six faces are labeled with the numbers 1, 2 and 3. Any standard die has six faces: all we have to do is to relabel the three faces 4, 5 and 6 with 1, 2 and 3 respectively, so that there are two faces for each of 1, 2 and 3. When the die is rolled, only 1, 2 or 3 will randomly appear.

We can now play a game on a board on which an equilateral triangle is drawn. Its vertices are labeled 1, 2 and 3, as shown in Figure 3.7. Let us describe the rules while playing. We pitch an arbitrary point Z0 outside the triangle and mark it by a dot; we call it a game point. The next step is to roll the die; assume the outcome is 2. We generate a new game point Z1, which is the midpoint of the segment joining Z0 and vertex 2.


After having iterated the same rule k times, we get k game points Z1, Z2, …, Zk. Once a game point has been "trapped" inside the triangle, the following game points remain inside. After thousands of iterations, what is called a Sierpinski gasket emerges (Figure 3.8). Is this phenomenon a miracle or a lucky coincidence? We will see in the next section why it is neither.

Figure 3.7. Three vertices (game points) of the game board and some iterations of the game point

Figure 3.8. A Sierpinski gasket
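The game is easy to program. The following Python sketch (illustrative, not taken from the original text) plays the chaos game for 20,000 rolls and prints a coarse character rendering in which the self-similar structure of the Sierpinski gasket is already visible.

```python
# Chaos game sketch: pick a vertex at random, move halfway towards it.
# After a short transient, the visited points trace out a Sierpinski gasket.
import random

vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]   # equilateral triangle
x, y = 2.0, 2.0                                     # arbitrary starting point outside

points = []
for i in range(20000):
    vx, vy = random.choice(vertices)                # "roll the die": 1, 2 or 3
    x, y = (x + vx) / 2.0, (y + vy) / 2.0           # new game point = midpoint
    if i > 20:                                      # discard the first transient dots
        points.append((x, y))

grid = [[" "] * 60 for _ in range(30)]              # coarse text rendering
for px, py in points:
    grid[29 - int(py * 29)][int(px * 59)] = "*"
print("\n".join("".join(row) for row in grid))
```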


3.3.2. Chaos game and MRCM (Multiple Reduction Copy Machine)

Why are we interested in an MRCM? Because any picture obtained by an iterative deterministic process engineered with an MRCM can also be obtained by an adjusted chaos game. First, let us explain what an MRCM is. It is a machine equipped with an image reduction device which, at each run, produces a set number of reduced copies of the original image, arranged according to a chosen pattern (here a triangle). Figure 3.9 displays two iterations of a circle. Any type of original picture is valid.

Figure 3.9. Two iterations of a circle with an MRCM on the basis of a triangle pattern

An MRCM operates as a baseline feedback machine (also called iterator or loop machine). It performs a dynamic iterative process, the output of one operation being the input for the next one. By dynamic, it is meant that operations are carried out repeatedly. A baseline feedback machine is composed of four units as shown in Figure 3.10. The whole system is run by a clock, which monitors the action of each component and counts cycles.


Figure 3.10. Composite elements of a baseline feedback machine

Any picture obtained by an MRCM working with a triangle pattern, and delivering a Sierpinski gasket after many iterations, can be obtained by an adjusted chaos game. This means that our perception of a random algorithm relying on the rolling of a die makes us unaware of the potential emergence of a structured pattern.

3.3.3. Randomness and its foolery

Randomness is a key concept associated with disorder and chaos, which we perceive as a lack of order. The intuitive distinction between a sequence of events, materialized by measurements, that is random and one that is orderly plays a role in the foundations of probability theory and in the scientific study of dynamical systems. What is a random sequence? Subjectivist definitions of randomness focus on the inability of a human agent to determine, on the basis of his knowledge, the future occurrences in the sequence. Objectivist definitions of randomness seek to characterize it without reference to the knowledge of any agent. "Objectivity is the delusion that observations could be made without an observer" (H. von Foerster): that is to say, any observation is entangled with the observer's equipment and interpretation of the observation setup.


In order to come to terms with this issue, we have to draw on statistics, as explained by W. Ross Ashby. When an observer faces a system that is too large for its interwoven parts to be understood, he cannot specify it completely. That is to say, he has to refer to statistics, which is the art of describing things when too many causes seem to be involved in producing an effect. W. Ross Ashby explains the way to approach this issue: "If a system has too many parts for their specification individually, they must be specified by a manageable number of rules, each of which applies to many parts. The parts specified by one rule need not be identical: generality can be retained by assuming that each rule specifies a set statistically. This means that the rule specifies a distribution of parts and a way in which it shall be sampled. The particular details of the individual outcome are thus determined not by the observer but by the process of sampling" (Ashby 1956, p. 63). The parts of a system are interrelated, and their coupling also contains a "random" dimension; this coupling is subject to the same procedure of assessment.

However, statistics has pitfalls. Daniel Kahneman (2012) explains that we think we are good intuitive statisticians when in fact we are not, because the remembrance of our past experience is biased: "A general limitation of the human mind is its imperfect ability to reconstruct past states of knowledge, or beliefs that have changed. Once you adopt a new view of the world (or any part of it), you immediately lose much of your ability to recall what you used to believe before your mind changed." This situation is conducive to hindsight bias, which "has a pernicious effect on the evaluation of decision-makers": it is not the decision-making process that is assessed but its outcome, good or bad (Kahneman 2012, p. 202).

Nassim Nicholas Taleb, in his book Fooled by Randomness (Taleb 2005), analyzed how our assessment capabilities are biased and drive us to rely on wrong interpretations of statistical laws. We are inclined to apply the law of large numbers (LLN) to a small amount of data, derived from facts or events, whether measured, intuitively perceived or memorized. The LLN describes the result of performing the same experiment a large number of times: the average of the results


obtained should be close to the expected value, the probability-weighted average of all possible values. What is called the law of small numbers (LSN) is "a bias of confidence over doubt", as Daniel Kahneman puts it (Kahneman 2012, p. 113). The LSN is a logical fallacy resulting from a faulty generalization: a conclusion about all or many instances of a phenomenon, reached on the basis of just a few instances of that phenomenon. When sampling techniques are used, the critical issue is the size and/or segmentation of the samples, so as to avoid the judgmental bias occurring when the characteristics of a population are derived from a small number of observations or data points.

Another source of judgmental pitfalls is what is called the central limit theorem (CLT). The CLT asserts that, when independent random variables are added, their properly normalized sum tends toward a normal bell-shaped distribution even if the original variables themselves are not normally distributed. It is then tempting for many people to assume, without further argument, that the phenomena they observe follow a normal distribution obeying the Gauss law, which is characterized by only two parameters, mean value and standard deviation, whose values can easily be found in tables.

In his book The Black Swan (Taleb 2008), Nassim Nicholas Taleb qualifies the bell curve as "that great intellectual fraud". In the same book, he divides the world into two provinces: Mediocristan and Extremistan. "In the province of Mediocristan, particular events do not contribute much individually; when your sample is large, no single instance will significantly change the aggregate or the total" (Taleb 2008, p. 32). This province is the Mecca of the symmetric bell-shaped distribution. "In the province of Extremistan, inequalities are such that one single observation can disproportionately impact the aggregate or the total" (Taleb 2008, p. 33). All social matters belong to the Extremistan province. Probability distributions with "fat tails", such as the Cauchy distribution [p(x) ~ (1 + x²)⁻¹], are relevant to these situations, where low-probability events can incur dramatic consequences.
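The practical nuisance of such fat-tailed laws can be shown with a few lines of Python (an illustrative sketch, not part of the original text): the running mean of Cauchy-distributed samples, generated here as the quotient of two independent standard normal draws, never settles down, whereas the running mean of Gaussian samples does.

```python
# Sketch: the running mean of Cauchy samples does not converge (no expected value),
# while the running mean of Gaussian samples obeys the law of large numbers.
import random

random.seed(1)

def running_means(draw, checkpoints=(100, 1000, 10000, 100000)):
    total, means = 0.0, {}
    for i in range(1, max(checkpoints) + 1):
        total += draw()
        if i in checkpoints:
            means[i] = round(total / i, 4)
    return means

gauss = lambda: random.gauss(0.0, 1.0)
cauchy = lambda: random.gauss(0.0, 1.0) / random.gauss(0.0, 1.0)  # quotient of two normals

print("Gaussian running means:", running_means(gauss))
print("Cauchy running means  :", running_means(cauchy))
# The Gaussian means shrink towards 0; the Cauchy means keep jumping around.
```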


The Cauchy probability distribution, like other "fat-tailed" distributions, has no expected value (mean) and no variance. Variance is often taken as the metric for evaluating risks: what is to be done when this quantity is not available? It is a challenge for risk analyzers. Even when the individual variables are normally distributed, the quotient of two independent such variables, properly normalized (mean = 0 and variance = 1), follows a Cauchy distribution. The analysis of series of random variables always remains a challenge.

When some chaotic phenomenon is observed, two questions arise:
– is this situation qualified as chaotic by a human observer because the evolution of the phenomenon is outside his/her cognitive capabilities: no simple model is known by the observer and/or too many parameters have to be dealt with?
– is it the macroscopic appearance of microscopic phenomena that a human observer cannot detect with his/her current equipment?

To conclude, it is worth summarizing the situation with two quotations from Nassim Nicholas Taleb (Taleb 2008):
– randomness can be qualified as epistemic opacity. It is "the result of incomplete information at some layer. It is functionally indistinguishable from true or physical randomness";
– on foolishness and randomness: "the general confusion between luck and determinism leads to a variety of superstitions with practical consequences".

3.4. Chaos in organizations – the certainty of uncertainty

3.4.1. Chaos and big data: what is data deluge?

3.4.1.1. Setting the picture

A mantra heard everywhere in the business realm and disseminated by all sorts of media is digitalization. What does this tenet mean? All economic agents, corporations as well as individuals, are embedded in technical networks providing instant communicative interactions


between them via telecommunication signals at the speed of light. This refers not only to interactions between corporations and their suppliers and clients, but also to interactions between individuals. This situation results in overloading all economic agents with heaps of structured but mainly unstructured data, which have to be processed so that their relevant content, if any, can either be discarded or integrated into decision-makers' assessments for taking action.

The impact of interconnectedness will be enhanced in unexpected ways by the application of AI (artificial intelligence) and machine learning, not only in the realm of financial markets and institutions but in all socio-economic ecosystems. Institutions' and organizations' ability to make use of big data from new sources may lead to greater dependencies on previously unrelated macroeconomic entities and financial market prices, including from various non-financial corporate sectors (e-commerce, sharing economy, etc.). As institutions and organizations find algorithms that generate uncorrelated profits or returns, if these are exploited on a sufficiently wide scale, there is a risk that correlations will actually increase. These potentially unforeseen interconnections will only become clear as the technologies are actually adopted.

More generally, in a global economy, greater interconnectedness in the financial system may help to share risks and act as a shock absorber, up to a point. Yet the same factors could spread the impact of extreme shocks. If a critical segment of financial institutions relies on the same data sources and algorithmic strategies, then under certain market conditions a shock to those data sources – or a new strategy exploiting a widely adopted algorithmic strategy – could affect that segment as if it were a single node. This may occur even if, on the surface, the segment is made up of tens, hundreds or even thousands of legally independent financial institutions. As a result, the collective adoption of AI and machine learning tools may introduce new risks.

As individuals, what are our main feeds of data deluge? Persuasive psychology principles are used to grab our attention and keep it. The alarming thing is that several dozen designers living in California and working at just a few companies are affecting the flows of data received by more than a billion people around the planet. Furthermore,


spending increasing amounts of time scrolling through our feeds from different channels is not the best support for our ability to make decisions on the basis of sound, reliable information, nor for our well-being or performance. Think about our own habits over the past year or two and how any changes in behavior, specifically spending increasing amounts of time with our feeds, have influenced our beliefs and attitudes.

There is also another element of feeds that affects our behavior. We design our own feeds, although heavily influenced by those several dozen designers who are, in fact, more or less our ghost tutors. We connect with friends and colleagues; we like and follow companies and public figures that we admire. The source of this attraction and connection often stems from some similarity that we see in them in terms of shared values and opinions. As a result, our feeds are giving us a world view that is anything but worldly. It is segmented, and we are blind to the opinions, values, preferences and affiliations of those with whom we interact in a virtual world, out of sight and often without any implement to foster trust.

Analytics of incoming data flows in a business or administrative framework has become a need and a challenge, not only because of the very high volume and variety of data, but also because of the required filtering of high-rate data flows in terms of their relevance over time from the point of view of decision-making for activity planning. Three V's can be used to describe data deluge: volume, variety and velocity.

3.4.1.2. Data flows and management practice

Analyzing the contents of the data flows received through various channels by decision-makers turns out to be a daunting challenge. Ceteris paribus, the response time left to decide upon a course of action is a critical factor in an evolving environment where information-laden data travel at the speed of light. Generally speaking, it has become more difficult to set priorities because the human brain has a limited capability to deal with the input


of a large quantity of interdependent variables. In some cases, by the time the incoming data have been crunched into understandable management numbers and made available to the controllers in charge, it may be too late to react, because these data reflect a situation that has already changed. Collected data always reflect what happened in the past: are they still relevant when they reach the decision-maker involved?

Regardless of the type of industry cycle (long or short) considered, it is vital to get a full understanding of market behaviors. How are early signals of a changing world recognized? Are the products manufactured and the services delivered still going to be sellable tomorrow, or do they have to be changed? What is the new competitive environment? How does the chain of customers change? To answer these questions, critical information systems have to be developed to secure relevant data feeds into the "traditional" management information systems implemented in past decades. Why critical? Managing a company is still analogue and not digital, because human beings are analogue, and the way you manage a company is by dealing with human beings. The value chain of human resources needs to be kept intact.

3.4.1.3. Data overload

Information is essential to make relevant decisions, but more often than not it overwhelms us in today's data-rich environment. Is a systematic framework a viable approach to making better-informed decisions? The data-aggregation complex that has developed around political events has failed again in spectacular fashion. As big data and analytics have become all the rage in the professional and corporate worlds, a host of pollsters, aggregators and number crunchers have assumed a central and outsized role in our prognostication of contemporary events. For example, the odds of a Clinton victory in the 2016 US Presidential election were posted at more or less 70%.


Distinguishing the signals relevant to the object studied from the associated noise is in tune with the zeitgeist. We are deluded by software that we perceive as black boxes. We are most often not aware of the algorithm-supported models that are translated into computer programs. Any model has been evolved on the basis of premises, conditions and hypotheses that impose domains of validity within which it delivers trustable outputs. Thanks to cheap and powerful computers, we can quite easily construct, test, feed and manipulate models. The fact that they deal in odds and probability lends an air of humility to the project and can induce confidence of a sort. To a degree, it is the lack of human touch that makes this approach so appealing: data supposedly do not lie. Stories gleaned from reporting and conversations may be telling, but they are not determinative. The plural of anecdote is not data, as the saying goes.

In our age of data analytics, competitive advantage accrues to those organizations that can combine different sets of data in a "smart" way, that understand the correlations between wishful thinking and real-world behavior, and that have a granular view of the make-up of the target market. Marrying the available information with companies' own experiences and insights – hopes and biases – appears to be the best way to draw conclusions. Purely data-driven approaches are not as evidence-based or infallible as their advocates like to think:
a) in our age of mass personalization and near-knowledge about consumer behavior, polls and surveys offer a false sense of certainty. Polls ask people what they think they will do and not what they actually do. There is a big chasm between the two;
b) poll-takers must decide what questions to ask, how to phrase them and how to weight their samples (demographics, gender, age group). It is difficult to blow a snapshot up into an accurate, large picture;


c) there is a strong human element in summarizing and presenting the findings. What weight should be given to historical data or early signals? How should model-embedded uncertainty be assessed?

Predicting an event is not simply a matter of gathering and analyzing data according to predetermined algorithms and patterns. Emotions, desires, incentives and fears of millions of people cannot be captured without complexity of a sort. The Brexit campaign in the United Kingdom in 2016 was a telling illustration of how, in an age of endless information, "smart" algorithms and the relentless aggregation of polling and sentiment indicators, it is possible for many data-driven people to be easily manipulated. It is a cautionary tale of how instinct and the desire to transform ambiguity can steamroll probability, especially when predicting the outcome of profoundly important, emotionally charged public events. When any type of public campaign is staged, the beliefs and desires of the targeted public change as the campaign progresses. The analysis carried out before a campaign is launched is prone to deliver poor advice because the evolving beliefs and desires of the targeted people are difficult to assess dynamically.

3.4.1.4. Forecasting and analytics

Decisions in the fields of economics and management have to be made in the context of forecasts about the future state of the economy or market. A great deal of attention has been paid to the question of how best to forecast variables and occurrences of interest. The meaning of forecasting is straightforward: a forecast is a prediction or an opinion given beforehand. In fact, several phases can be distinguished in the forecasting process.

3.4.1.4.1. What happened?

The first step is centralizing and cross-referencing data from different sources. This data set delivers hindsight of past situations. It summarizes past raw data, transforming them into a form understandable to humans. This descriptive analysis can allow the user to view data in many different ways. "Past" is relative; it can refer to a month, a week, a day or even one second ago. A common form for visualizing the analysis is the data displayed on a dashboard that updates as streams of new data come in.

3.4.1.4.2. Why did it happen?

In a second step, an in-depth analysis of these data provides descriptions of, and insight into, past situations. Assessing the relevance of collected data may prove to be a "complex", non-trivial exercise and heavily relies upon the analyzers' expertise and knowledge. This diagnostic analysis is used to understand the root causes of certain events. It enables management to make informed decisions about the possible courses of action.

3.4.1.4.3. What will happen next?

In a third step, the diagnosis of descriptions in terms of future versus past business environments should yield predictive analyses. There are attempts to make forecasts about the future in order to set goals and make plans. Examples of tools used include:

– what-if analysis;
– data mining;
– Monte Carlo simulation (a minimal illustration is sketched at the end of this section).

3.4.1.4.4. What should I do?

In the last step, if the context is stable and foreseeable enough, prescriptive analytics can be envisioned. It relies on a combination of data, mathematical models and business rules with a variety of different sets of assumptions to identify and explore what possible actions to take. It stretches the reach of predictive analytics to organize courses of action that can take advantage of predictions.
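As an illustration of the Monte Carlo simulation listed among the predictive tools above, the following minimal sketch estimates the distribution of next month's demand from assumed sales parameters; the figures and variable names are purely hypothetical and are not taken from the book.

```python
import random

# Hypothetical assumptions: mean daily demand, its variability and the number
# of trading days; none of these figures come from the book.
MEAN_DAILY_DEMAND, DAILY_STD_DEV, TRADING_DAYS = 120.0, 25.0, 22

def simulate_month():
    """One Monte Carlo trial: total demand over a month of noisy daily demands."""
    return sum(max(0.0, random.gauss(MEAN_DAILY_DEMAND, DAILY_STD_DEV))
               for _ in range(TRADING_DAYS))

trials = sorted(simulate_month() for _ in range(10_000))
median = trials[len(trials) // 2]
p5, p95 = trials[int(0.05 * len(trials))], trials[int(0.95 * len(trials))]
print(f"median monthly demand ~ {median:.0f} units, 90% interval [{p5:.0f}, {p95:.0f}]")
```

The output is a distribution of possible outcomes rather than a single figure, which is precisely what a subsequent what-if or prescriptive step can then exploit.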


Figure 3.11 shows the phased forecasting process in terms of value added.

Figure 3.11. Phased steps of analytics processes

There are several distinct types of forecasting situations, including event timing, event outcome and time-series forecasts. Event timing refers to the question of when, if ever, some specific event will occur. Event outcome forecasts try to estimate the outcome of some uncertain event. Forecasting such events is generally attempted by the use of leading indicators. Time series are derived from data gathered from past records. We focus our attention here on time series.

A time series (x_i) is a sequence of values generally collected at regular intervals of time (hourly, daily, weekly, monthly, etc.). The question raised is: at time n (now), and considering the set I_n of past values available now, what is the distribution of x_{n+h} over a time horizon h? The answer to such a question is generally engineered by the use of an underlying parametric model. Models rely on assumptions that give an oversimplified perception of daily life. Forecasting activities to establish planning follow this iron rule. The future is derived from past measurements recorded along a longitudinal time range and presented in time series as data points. Model parameters are adjusted from experimental data. Several methods have been worked out and depend mainly on the time horizon considered and the volatility of the situation under study. It is beyond the scope of this book to elaborate on the quantitative methods that have been developed to deal with this issue. A seminal reference is given by Box and Jenkins (1970). Let us mention some models: the logistic model, the Gompertz model, ARMA models (autoregressive moving average) and exponential smoothing models.

"Without the right analytical method, more data give a more precise estimate of the wrong thing": The Economist – Technology Quarterly, 7 March 2015. Hence, a three-stage approach to analysis is suggested (a minimal illustration is sketched below):

– first stage, choice of a small subset of models;
– second stage, estimation of model parameters;
– third stage, comparison of the different models in terms of adequate fitting of past data.

Bear in mind that the future is not an extension of the past. Model parameters that fit past data well can change over time. The very model can become obsolete because the situation has changed.

Fractals are patterns that are similar across different scales. They are created by repeating a simple process over and over again. The fractal business model creates a company out of smaller companies, which are in turn made out of smaller companies, and so the pattern continues. What these companies have in common is that they are all subject to the same rules and constraints. A fractal is a form of organization occurring in nature, which replicates itself at each level of complexity. When applied to organizations, this characteristic ensures that at each level, the needs of the lowest level are repeated (contained) in the purpose of the organization.
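Returning to the three-stage approach suggested above, the following minimal sketch compares a naive model with simple exponential smoothing on a purely illustrative demand series; the figures are hypothetical, and the small grid search merely stands in for proper parameter estimation.

```python
# Stage 1: choose a small subset of models; Stage 2: estimate their parameters;
# Stage 3: compare how adequately each fits the past data.

def exp_smoothing(series, alpha):
    """One-step-ahead forecasts produced by simple exponential smoothing."""
    forecasts, level = [], series[0]
    for x in series:
        forecasts.append(level)              # forecast made before observing x
        level = alpha * x + (1 - alpha) * level
    return forecasts

def naive(series):
    """One-step-ahead forecasts: 'tomorrow equals today'."""
    return [series[0]] + series[:-1]

def mse(series, forecasts):
    return sum((x - f) ** 2 for x, f in zip(series, forecasts)) / len(series)

demand = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]  # illustrative data

candidates = {"naive": naive(demand)}
for alpha in (0.2, 0.5, 0.8):                # crude grid search over the parameter
    candidates[f"exp. smoothing (alpha={alpha})"] = exp_smoothing(demand, alpha)

for name, fc in sorted(candidates.items(), key=lambda kv: mse(demand, kv[1])):
    print(f"{name:28s}  MSE = {mse(demand, fc):7.1f}")
```

In practice, the comparison would be made on held-out data rather than on the fitting sample, precisely because parameters that fit the past well may not survive a change of situation.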


The fractal theory has brought about a new mindset for data analysis. At first sight, a time series of a single variable appears to provide a rather limited amount of information about the phenomenon investigated. In fact, a time series contains a large number of interdependent variables that bear the marks of the dynamics, allowing us to identify some key features of the underlying system without relying on any modeling. The issue is to extract patterns from an irregular mishmash of data. A central issue is addressed from experimental data: can the dynamics of a complex system be reconstructed from time series data? It can be unfolded in several sub-issues:

1) is it possible to identify whether a time series derives from deterministic dynamics or contains an irreducible stochastic element? The key parameter of a time series is its dimension. Dimension provides us with valuable information about the system's dynamics. The mechanism used to derive the dimension of a time series is presented in Appendix 2. In general, fractal dimensions help show how scaling changes a model or modeled object. Take a very complex shape, graphed to a scale, and then reduce the scale. The data points converge and become fewer. This kind of transformation can be measured and judged with fractal dimensions. When a length (dimension 1) or an area (dimension 2) is reduced by a factor of 2, the effect is well known; namely, the length is divided by 2 and the area by 2^2. When a d-dimensional object is reduced by a factor of 2, its measurement is divided by 2^d;

2) assuming that the time series converges to what is called an attractor (a point or a stable behavioral pattern), what is its dimensionality d? d = 1 reveals that the time series refers to self-sustained periodic oscillations. If d = 2, we are dealing with quasi-periodic oscillations of two incommensurate frequencies. If d is not an integer and larger than 2, the underlying system is expected to exhibit a chaotic oscillation featuring a great sensitivity to initial conditions, as well as an intrinsic unpredictability;


3) what is the minimal dimensionality n of the phase space within which the attractor, if any, is embedded? This defines the number of variables that must be considered.

3.4.1.5. Market demand sensing

"Real time" demand sensing between the stakeholders of short or long supply chains in consumer industries is a common practice to fulfill customers' orders "just-in-time". Can this data source be used to elaborate long-term forecasts? The answer is clearly no. Long term must be appreciated against the manufacturing cycle time of manufactured products and their lifetimes. The time to manufacture a car is much shorter than that for a jet engine or a railway train.

Even in "low-volume" industries such as aerospace or railway train manufacturing, where the lifetimes of products exceed several decades, demand sensing is becoming more relevant because of its applicability not only to the provision of repair services and spare parts, but also to the assessment of the need for new products with innovative features. Aircraft engine manufacturers routinely have live streams of data coming from their products during flight, for example, allowing the manufacturers to monitor conditions in real time, carry out tune-ups and set spare-part inventory. Software packages on board railway trains fulfill their operational monitoring and even their collaborative interactions without any central control.

When it comes to implementing demand sensing, companies divide into two camps: those that tend to build their solutions in-house, using open-source or proprietary algorithms, and those that use a range of software solutions, fully or partly tailor-made, provided by software houses. However, the bottom line is that all that is required to make demand sensing work is the willingness to spend some time looking for potential demand signals, putting them into an analytics engine and integrating the results into supply chain planning and execution. Four broad areas of data come into play here, which are:

1) structured internal data, such as that from a PoS (Point of Sales) system, e-commerce sales and consumer service;


2) unstructured internal data, for example, from marketing campaigns, in-store devices and apps;

3) structured external data, which includes macroeconomic indicators, weather patterns and even birth rates;

4) unstructured external data, such as information from connected devices, digital personal assistants and social media.

3.4.2. Change management and adaptation of information systems

3.4.2.1. Change management: a short introduction

Many articles have been written by academics and consultants on what we call "change management" over the past hundred years. They focus on critical priorities for an organization's change effort through learning about its own structure and operational practice, adapting its workforce skills to its new technological environments and capitalizing on acquired knowledge thereof. Change and learning are organically intertwined. Consultants aim to help companies that find themselves facing new technologies or a change in customer expectations. Since the beginning of this century, many CEOs have experienced disruptions to their businesses and the trend can be anticipated to keep going globally.

In spite of advice provided by consultants in the form of guiding principles that are supposed to apply to every company, the human impact of business reorganizations imposes a high levy on employees. Every employee looks at the organizational change from the standpoint of how he or she will personally be affected. Self-preservation becomes a major concern. Change management often, not to say always, affects the workflows associated with the operational procedures people are used to. Furthermore, in many cases, the change in the chain of procedures is implemented using software packages, the deployment of which is embodied in a project called "MIS" (management information system).


This situation is prone to developing dysfunction in business operations and, as a result, a loss of control spiraling into disorder and chaos. Until personal career issues have been resolved satisfactorily, employees are too preoccupied with their own situations to focus effectively on their work, and they develop resistance. Even if they are given a good explanation of the rationale for the changes and of the possible alternatives and trade-offs, they have to come to terms with a new working environment. In addition, as a business organization operates as a set of functional subsystems which influence each other, it turns out that it is difficult to trace back the cause of a long-range effect within the system. This operation often resembles looking for a needle in a haystack.

The consultancy literature often treats the topics of "change management", "top-down management" and "bottom-up management" differently. Each of these is often engineered in a complex and changing corporate environment without providing the top management with a clear picture of the issues at stake. The key success factor is aligning all the working forces so that they turn responsive (change management), informed (bottom-up) and properly executing (top-down). Although this statement is simplistic, and itself requires further insight, the point that there is no one best approach for all issues should be kept in mind. The bottom-up approach will be poorly executed if the top-down approach does not provide oversight. The top-down approach will be poorly designed and will often miss intended targets if the bottom-up approach is not leveraged as an asset. The change management methodology will not be effective unless it similarly consults with all top-down tiers of management (to identify priorities) as well as the bottom-up tiers of employees (to identify risks, barriers and threats).

The bottom-up approach in change management would typically be viewed as preventative in a business model where the external environment is greatly emphasized over the internal environment (of the company). One could often say that change management seeks to prevent threats to the company's advantages. This is in contrast to the traditional model, wherein change management seeks to leverage and take advantage of existing opportunities in an effort to increase the


company's competitive advantages. Ignoring the internal environment during the change management process is a risk, because it leaves the following two questions without a clear answer:

– purpose: what is the change management for?
– action: what are managers responsible for to secure the success of the change management project?

The top-down, bottom-up framework provides the multi-directional feedback that is critical for an accurately designed change management methodology that is executed effectively. Failures in change management can frequently be linked to providing partial answers to incomplete questions. Roughly speaking, consultants can be said to deal with "shop floor" realities, whereas academics are supposed to elaborate on theories, principles and paradigms underpinning the approaches engineered by consultants. In the next section, we scrutinize what academics have elaborated on the subject matter of "change management".

Change management can be linked to the notion of bifurcation found in the science world. Any organization can be modeled as a mechanism converting inputs into outputs. As long as the inputs are coherent with the outputs along with the inner transforming mechanisms, the system is stable. As soon as the outputs change, for instance, the products or services offered to the demand side, either the inner transforming mechanisms or the inputs, or both, have to be adjusted. Another situation can be imagined. In order to keep abreast of competitors, technological changes have to be introduced, which can entail a drastic upheaval in organizational patterns and skills in the labor force. When a breaking point in terms of output versus input is hit, several routes to a new organizational pattern can be followed according to plans or, more likely, contingent circumstances, which are always difficult to forecast. The breaking point can be interpreted as the elasticity limit of the inner transforming mechanisms to secure the delivery of outputs of a certain sort from available inputs. This analogy with respect to the notion of bifurcation is depicted in Figure 3.12.


Figure 3.12. Bifurcation and change in organizational patterns
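As a purely illustrative aside, not an example taken from the book, the logistic map is a standard natural-science picture of such a breaking point: below a critical value of its control parameter the system settles on a single stable pattern, past it several alternating patterns appear, and eventually the behavior becomes chaotic.

```python
# The logistic map x_{t+1} = r * x_t * (1 - x_t), a textbook example of bifurcation.

def long_run_states(r, x0=0.5, warmup=500, keep=8):
    x = x0
    for _ in range(warmup):          # let transients die out
        x = r * x * (1 - x)
    states = set()
    for _ in range(keep):            # collect the states visited afterwards
        x = r * x * (1 - x)
        states.add(round(x, 4))
    return sorted(states)

for r in (2.8, 3.2, 3.5, 3.9):       # increasing "pressure" on the system
    print(f"r = {r}: {long_run_states(r)}")
# r = 2.8 -> a single fixed point; r = 3.2 -> two states; r = 3.5 -> four states;
# r = 3.9 -> many distinct values, i.e. chaotic behavior.
```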

3.4.2.2. Change management: the academic approach

In section 2.7, we explained that the many aspects of what is called organization theory defy easy classification. We then categorized different approaches into structural and functional theories on one side and cognitive and functional theories on the other side. This classification was proposed from an enterprise point of view. Organizational structures are found in many environments other than the enterprise. Other criteria have been considered to classify them, according to their functions or goals, the nature of the technology employed, the ways to achieve compliance or the beneficiary of the organization.

D. Katz and R. Kahn (1978) use functions performed or goals sought as the criteria to categorize organizations. According to them, production or economic organizations (manufacturing, logistics, distribution, retailing) exist to provide goods and services for society, whereas pattern maintenance organizations (education systems) prepare people to integrate smoothly and effectively into other organizations. The adaptive organization (research and development)


creates knowledge and tests theories, while the managerial or political organization (regulatory agencies) attempts to control resource use and authority.

In order to keep track of the baseline ideas of management thought in the profuse variety of literature produced about organizations, it helps to distinguish between organization behavior and organization theory when organizational change, adjustment or reconfiguration is considered. Organization behavior primarily deals with the individual and group levels. It covers motivation, perception and decision-making at the individual level, and group functions, processes, performance, leadership and team building at the group level. Organization theory deals with the organization level, covering such topics as organization change and growth, planning, development and strategy. Management of conflicts between groups and/or organizational units is dealt with at the intersection of organization behavior and organization theory.

From the point of view of change, the metaphor of organizations as open systems subject to the external pressures facing them has become an accepted mindset for guiding management thought. Several corpora of thought that have direct or indirect connections with the reasons why organizations have to change have developed. Let us review the main ones.

Contingency "theorists" strive to prescribe the organizational designs and managerial actions most appropriate for specific situations. This perspective is clearly an offshoot of systems theory applied to the study of organizations and to organizational analysis. "Contingency" theorists have tended to examine the organizational designs and managerial actions most appropriate for specific situations. They view the environment as a source of change in open systems. Simultaneously, they highlight the interdependence of size, environment, technology and managerial structure, and their compelling


congruence for business success. P. Lawrence and J. Lorsch (1967), who pioneered this approach, argued:

"Underlying this new approach is the idea that the internal functioning of organizations must be consistent with the demands of the organization's tasks, technology or external environment and the needs of its members if the organization is to be effective. Rather than searching for the panacea of the one best way to organize under all conditions, investigators have more and more tended to examine the functioning of organizations in relation to the needs of their particular members and the external pressures facing them. Basically this approach seems to be leading to the development of a 'contingency' theory of organization with the appropriate internal states and processes of the organization contingent upon external requirements and member needs".

Another group of theorists viewed environments as all-powerful in determining the fate of organizations. Drawing on the work of Charles Darwin, this group presented an "ecology" model that portrays successful organizations as the fittest in a competitive arena, where rules are imposed by the environment. H. Aldrich (1979), in an influential book, argued:

"The population ecology model differs from traditional explanations of organizational change […] it focuses on the nature and distribution of resources in organizations' environments as the central force in change, rather than on internal leadership or participation in decision-making".

Organizations as open systems, like living organisms, depend on their environment for critical resources, and a key managerial task is the appropriate provision of human-socio-cultural and political, technical, functional and informational resources. All the arguments developed by different theorists converge to the point that organizational changes are induced or compelled by forces


originating from organizations' environments. The digital technology being introduced in all sectors of our societal context is a confirming and convincing example of this standpoint. It is a driving force for triggering deep reconfigurations of many an economic sector, beyond sheer adaptations. The interplay between the dynamical environment and current internal structures is a source of complexity to bring about a controlled evolution and to avoid disruptive situations running out of control.

3.4.2.3. Change management and social networks

3.4.2.3.1. General overview

If we hark back to section 2.3.2, an organization that is a purposive system is defined as a network representing a social structure directed toward some goal and created by communication between groups and/or individuals. The nodes of a network can be embodied by BDI agents with their attributes in terms of beliefs, desires and intentions. This agent modeling can be used to guide change:

1) by conducting a comprehensive diagnosis before planning any course of action. Organizational change must not be targeted on a specific subsystem but must account for the total system dynamics, the system environment included. If a specific entity is created and assigned the responsibility of leading the change project, it has to be considered as an agent interacting with the other agents involved;

2) for prescribing phased action plans recognizing current patterns that feature the uniqueness of an organization and securing a departure from the search for one single optimal solution, all the more so as patterns may change along with the project's progress. When relevant, the idea of "chunking" agents so that they are perceived as a single agent establishes a separate identity for the cluster within the network. According to context, one may wish to ignore the chunk's internal structure or to take it into account. This allows a hierarchical approach to complex situations.
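A minimal sketch of this kind of agent modeling is given below; the agents, links and attribute values are purely illustrative, not drawn from the book, and the chunk() helper is a hypothetical name.

```python
# BDI agents as network nodes, and "chunking" a cluster of agents so that it can
# be treated as a single agent at a higher level of description.

agents = {
    "sales":    {"beliefs": {"demand is shifting"}, "desires": {"keep customers"},   "intentions": {"report weekly"}},
    "planning": {"beliefs": {"capacity is tight"},  "desires": {"smooth schedules"}, "intentions": {"replan monthly"}},
    "shop_A":   {"beliefs": {"tooling is ageing"},  "desires": {"avoid downtime"},   "intentions": {"maintain weekly"}},
    "shop_B":   {"beliefs": {"orders are late"},    "desires": {"raise throughput"}, "intentions": {"add a shift"}},
}
links = {("sales", "planning"), ("planning", "shop_A"), ("planning", "shop_B"), ("shop_A", "shop_B")}

def chunk(agents, links, members, name):
    """Merge a set of agents into one cluster agent; external links are re-pointed to it."""
    cluster = {k: set().union(*(agents[m][k] for m in members))
               for k in ("beliefs", "desires", "intentions")}
    new_agents = {a: v for a, v in agents.items() if a not in members}
    new_agents[name] = cluster
    relabel = lambda a: name if a in members else a
    new_links = {(relabel(s), relabel(t)) for s, t in links if relabel(s) != relabel(t)}
    return new_agents, new_links

# Ignore the internal structure of the two shops by viewing them as one "factory" agent.
agents2, links2 = chunk(agents, links, {"shop_A", "shop_B"}, "factory")
print(sorted(agents2), sorted(links2))
```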


Affiliation networks (refer to section 2.3.1) are well-adapted tools to describe the properties of BDI agents' networks in terms of beliefs, desires and intentions. Either the matrix formalism or n-partite graphs are eligible to represent real-world situations.

3.4.2.3.2. Long-range correlations

By analogy with the phenomena observed in physical systems, long-range effects can be observed in social networks. This is conducive to situations that can generate complex and chaotic configurations that are difficult for observers to understand because they evolve fast and, as a result, should be dealt with by controllers, if any. When analyzing a social network, a question arises: what number of individual nodes should be taken into account? Dunbar's number can help define a space of interaction, or at least make us aware of its potential range.

The British anthropologist R.I.M. Dunbar (Dunbar 1992; Dunbar 2003) suggested that a cognitive limit exists to the number of people with whom one can maintain stable social relationships, in which an individual knows who each person is and how each person relates to every other person. The average value for this number proposed by R.I.M. Dunbar is 148 for social group members. He also noted that group sizes could be put in three categories, namely 30–50 (bands), 100–200 (cultural lineage groups) and 500–2,500 (tribes). It has become of interest in anthropology, evolutionary psychology, statistics and business management. Developers of social software are interested in it as they need to know the size of social networks that their software package is intended to support. In the modern military operational environment, psychologists seek such data to support or refute policies to maintain or improve unit cohesion. The application of Dunbar's number to individual-centered and group-centered social networks has been validated (Gonçalves 2011). When conducting a change management project, this dimension should be considered and integrated in the spectrum of parameters to deal with.


3.4.2.3.3. Some guidelines to conduct a change project with social agent networks

The basic idea governing the use of social networks to conduct a change project is to identify the change agents and to focus on their behavioral evolution. Agents can change behaviors as the project progresses: this has to be taken into account when monitoring a project. When modeling social agents as BDI agents, their operational interactions should be represented by their action plans, which reflect their activities within the framework of the overall task distributions of the organization.

An important category of change agents comprises the most connected ones. They can be identified by analyzing the traffic of incoming and outgoing messages. Each nodal agent sends messages to its contacts and receives messages from its contacts. For each nodal agent, the numbers of messages broadcast and received are captured over a period of time relevant to the context and displayed on a graph to visualize the results and help the understanding of the mesh of interrelations. Nodal agents that send more messages than they receive are called "root" agents. They are of special interest because a change in their operations and plans will influence their roles more directly as broadcasters of changed messages. On the other hand, nodal agents receiving several messages must also get special attention because the new operational rules have to be explained to them and assimilated so that their plans become aligned with what is intended by the change project. This is a first-level analysis called the degree of connections in the agent network. It can be considered as a proxy for representing the flow of information inside the enterprise and with its outside partners. Never forget that organizations are open systems and their third-party partners (suppliers, clients) are full parts of the network.
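A minimal sketch of this first-level degree analysis is given below; the message log and agent names are hypothetical.

```python
from collections import Counter

# (sender, receiver) pairs captured over a period relevant to the context.
message_log = [
    ("planning", "shop_A"), ("planning", "shop_B"), ("planning", "sales"),
    ("sales", "planning"), ("shop_A", "shop_B"), ("supplier_X", "planning"),
    ("planning", "supplier_X"), ("planning", "shop_A"),
]

sent = Counter(s for s, _ in message_log)        # outgoing degree per agent
received = Counter(r for _, r in message_log)    # incoming degree per agent

for agent in sorted(set(sent) | set(received)):
    out_deg, in_deg = sent[agent], received[agent]
    role = ("root agent" if out_deg > in_deg
            else "heavy receiver" if in_deg > out_deg else "balanced")
    print(f"{agent:12s} sent={out_deg} received={in_deg} -> {role}")
```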


A second-level analysis is called the degree of closeness between the agents. This second-level analysis refers to the beliefs and desires properties of BDI agents. They have to be polled to assess whether the change is perceived as being a minor one or as having wide implications in terms of their own operations and their interaction with their closest partners. Specific courses of action have to be developed according to the situations of the different agents. All the features of BDI agents explained in the second part can be drawn on to carry out such an analysis and deploy appropriate courses of action. This second-level analysis should help detect potential long-range effects liable to trigger complex and/or chaotic behavior conducive to losing control of the project.

A third-level analysis refers to the degree of trust between the agents and the acting agent in charge of the change project. This acting agent itself has beliefs, desires and intentions turned into action plans. They result from the content of the change project to be implemented in terms of purposes and objectives to be deployed in the field. It has to have a high degree of closeness with all the network agents involved in the change project and deliver advice in a context of trust.

These guidelines are only intended to provide a framework and a roadmap when carrying out a change project using BDI agents as a modeling tool.

3.4.2.4. Change management, data deluge and information systems

Functional subsystems influence each other, but the informational subsystem plays a special role because all organizational changes are reflected in the MIS (management information system), which provides communication and coordination between all functional subsystems through the control of resources. Figure 3.13 depicts the interplay between resources and primary processes in a manufacturing enterprise. Information is singled out at the same time as a resource, which mirrors the functioning of business models and secures collaboration and cooperation between resource management units.


Figure 3.13. Role of information in controlling business resources in a manufacturing enterprise

Hereafter, we focus on the features of an informational subsystem, which have to receive special attention within the framework of a change management project. Transmission of information-laden data takes place through both formal and informal channels. The formal channel refers to what is called the MIS. Collecting, gathering, processing and transmitting data according to set procedures


embodied in software programs and mirroring management tools are the functional capabilities of information systems. The way business agents or groups of agents make use of information systems can deviate significantly from the intentions of their designers. Many reasons explain their attitudes: misunderstanding of new procedures, refusal to change existing practice, tighter control of activities by eavesdropping, breakdown of informal channels of communication and so on.

Formal elements of information about structures and operations are attractive to company management because they are tangible. They can be easily defined and measured. However, they are only half the story. Many informal communication channels parallel or short-circuit the "official" patterns of communication flows. They are based on trust between the stakeholders involved. In case of reorganizations, many companies reassign decision rights, rework the organization chart or set up knowledge-sharing systems – yet they do not see the results they expect. That is because they have ignored the more informal, intangible building blocks. Norms, commitments, mindsets and networks are essential in getting things done. They represent (and influence) the ways people think, feel, communicate and behave. When these intangibles become desynchronized with one another or with the more tangible building blocks, the organization falters.

The informational subsystem must be identified as a subsystem in itself because advances in technology have made it the hub of coordination between all other business functional subsystems and business partners (suppliers, clients). The critical issues at stake are the delays in data transmission already described, which must be kept in check to shun untimely decisions and their consequences, and the accuracy of disseminated data (the destructive effect of fake or biased news does not need to be proved anymore). If these issues are not brought under full control, the whole system may run out of control because counterproductive decisions are taken at the wrong time, as shown in the section "emergent behaviors", with unintended consequences (hunting and oscillating).


In the next sections, we focus on information systems and their architecture challenge in the era of data deluge.

3.4.2.4.1. What is an information system for?

Even if they are not aware of the fact, information system designers posit, explicitly or implicitly, that information systems are a modeled vision of the business universe. Regardless of the assumptions made about the chosen representation of the business universe, information system constructs are intended to reflect how the enterprise is organized and how it operates in terms of transactions and control rules. This implies that business information systems contain, in one way or another, a description of the enterprise's organizational structures, functioning mechanisms and deliverables. Thus, the contents of business information systems should include the descriptive accounts of operations, control rules and deliverables (products/services). This ideal situation was not found in the operational field, for a wide variety of reasons:

– business information systems usually resulted from the integration of software modules by different vendors. Incidentally, software integrators have become the flagship products of some vendors, such as NetWeaver by SAP. It turns out that this complexity seriously hinders a reasonable understanding of the whole system by the lay business actors;

– the MIS are often the back offices of other processing systems such as transaction processing systems. A transaction is a business operation modifying the state of an enterprise or a part thereof. Capturing, storing, processing, disseminating and reporting transaction data are the objectives of transaction processing systems. The MIS procedures and their meaning are most often impenetrable to the users of transaction systems, so that their management becomes short-sighted.

In fact, very few business actors put effort into gaining knowledge of end-to-end business processes and their associated procedures carried out by the supporting information system. They are prone to


finding themselves destitute of a relevant understanding of the situation when they have to make local decisions that often turn out to have global impacts business-wise.

ERP-labeled software packages have brought forward the understanding of business processes for developing effective and efficient information systems. The best way to elicit the relationship between business processes and information systems is to devise information systems with an explicit architecture. Architecture defines the function capabilities of the parts of a system and the specifications of the relationships between these parts. Linking ERP business processes and information systems through a layer architecture of IS modules allows us to:

– remove barriers across organizational structures;
– make procedural structures more explicit and flexible;
– help decision-makers simulate the impact of their decisions.

The layer-based architecture of information systems has brought about significant improvement in making the links between business operations and information systems more explicit. It must be adapted to the presence of data flows coming from networks of servers. In the first section, the main features of the traditional layer architecture of information systems are described so that, in the second section, the new architecture can be clearly contrasted with the traditional one.

3.4.2.4.2. Traditional layer architecture of information systems

Layer architecture is based on layers of functional operations and strongly contrasts with a hierarchical architecture developed on management levels (strategy, tactics and operations). In a layered architecture, the underlying layers relate to the implementation, that is, to the different types of resources, while the top layers relate to the business. Figure 3.14 shows a schematic example of a layered architecture. A business system may well have more or fewer layers than the ones shown in Figure 3.14, and it can


also have layers in several dimensions, so that one layer can itself be layered.

– Business system layer
– Common business process layer
– Infrastructure layer
– Resources layer

Figure 3.14. An example of layered business system architecture

The basic principle of any layered architecture is the delivery of services to a layer by the underlying layers. When standardization is looked for, the formats of the services provided are defined, not the very mechanisms engineered to supply the services under consideration. The major benefit derived from layer modeling is the versatility it has for focusing on "relevant" views of a complex system and for tracing back, when required, relations with detailed or aggregated descriptions.

– The resource layer contains the different concrete resources needed for realizing the organization. Here, human resources with different kinds of skills – education, management, experience and so on – and different types of equipment such as information systems and transmission and switching equipment are found.

– The infrastructure layer consists of functions generic to the type of business being modeled. Here, functions to support network and services management processes, accounting processes, personnel administration, office administration, and so on are found.


– The common business process layer contains internal processes generic to the type of business modeled. These internal processes can be used by many different business processes within the company being modeled. Some of these processes can be described as cooperative services allowing for a structured workflow between the various activities of a single business process or different business processes.

– Finally, the business system layer consists of the different business processes. These processes must be chosen to provide the top management with synthesized views of the enterprise. These views should be described in terms of performance indicators able to trigger leverage on relevant operations.

In systems with a layered architecture, it is important to define the interfaces between the various layers. Object-oriented technology provides an elegant method of doing this. This approach will be reproduced here by substituting activity for object. Every layer is a composite set consisting only of its constituent activities. The interface to a layer is made up of the interfaces to its public activities. By "public", it is meant here accessible from outside. How a layer can be used by layers higher up in the structure is an important architectural decision. There are several such types of layer relations to choose from. Here, we present two of them:

– the use of activities in the lower layer is encapsulated in the activities of the higher layer. The design of the higher layers uses the public interfaces of the basic activity layer as primitives; this is because the interface is encapsulated in the overlying layer's activities and can only be seen in these activities' implementation parts. The interface should not be used in the documentation which describes the interaction between activities in the higher layers. A larger business system consists of thousands of activities that communicate with one another in an extremely intensive way; by concealing the communication with underlying activities inside the activities in their own layer, all documentation will be simplified. Since they are designed to be extensively reused, basic command activities should be managed in a very different way compared to


business activities: they should be carefully tested before being released; they should not be allowed to change until all business activities dependent on them have also been changed; and so on;

– activities in the lower layer need not only be encapsulated in activities of the higher layer but can also be used in the design documentation of the higher layer; however, they cannot be changed by the designers of the higher layer.

The common business process layer grows and matures during the development of several different business subsystems. It should contain the common activities of an entire business, and it should, therefore, become relatively stable after a number of releases. The layer should be managed in a way different from the business system layer, but similar to the infrastructure layer. An important difference, compared to the objects in the infrastructure layer, is that it is necessary to show the activities in the common business subsystems. When designing a business subsystem, you may show the common business processes you are using, but you are not allowed to make any changes in them whatsoever. You are only allowed to make changes in your own activities, that is, the activities that realize your business subsystem. If a common business activity is to be changed, all business subsystems using that activity will need to be changed at the same time.

Every layer can be divided into subsystems. The subsystems in the highest layer, if any, cover large business areas and are supposed to reflect the core business processes as viewed by the top management. They can be dependent on each other. In this case, they are interrelated through an activity or a process of the underlying layer. Business system areas and layering are complementary. You can apply both techniques to describe a complex organization. First, you use the business system area technique to find the different business system areas and their associated processes, and then you develop the underlying support processes within the framework of a layered architecture. The layers below the highest will be used for all of the business system areas.

Figure 3.15 describes a concrete example of a layered information system architecture pertaining to a manufacturing


company. This architectural pattern can be easily adapted to other types of organizations.

Figure 3.15. Layered information system architecture for a manufacturing company
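As a minimal sketch of the interfacing principle described above – each layer exposes only public activities, and a higher layer uses the layer below solely through those interfaces – consider the following; the class and method names are illustrative and are not drawn from the book or from Figure 3.15.

```python
# Illustrative only: a higher layer encapsulates its use of the layer below and
# exposes only its own public activities to the layers above it.

class ResourceLayer:
    """Concrete resources: equipment, people, data stores."""
    def fetch_inventory_level(self, item: str) -> int:            # public activity
        return {"gear_box": 42}.get(item, 0)

class CommonProcessLayer:
    """Internal processes reusable by many business processes."""
    def __init__(self, resources: ResourceLayer):
        self._resources = resources                                # hidden from layers above
    def check_availability(self, item: str, qty: int) -> bool:     # public activity
        return self._resources.fetch_inventory_level(item) >= qty

class BusinessSystemLayer:
    """Business processes providing synthesized views to top management."""
    def __init__(self, processes: CommonProcessLayer):
        self._processes = processes
    def accept_order(self, item: str, qty: int) -> str:
        ok = self._processes.check_availability(item, qty)
        return "order accepted" if ok else "order back-logged"

business = BusinessSystemLayer(CommonProcessLayer(ResourceLayer()))
print(business.accept_order("gear_box", 10))
```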

The specific features of a layered architecture are best understood by contrasting it with the hierarchical information system architecture for a manufacturing company depicted in Figure 3.16.

3.4.2.4.3. Prospective overview of the future structures of information systems

It is commonly said that we are in an era of rapid digital transformation. All sections of society, individuals as well as businesses and public administrations, are embarked on a stream of change in the ways we interact and communicate. Virtual environments, namely server platforms, will be the hubs of faceless human social life in a global economic competition field.


Figure 3.16. Hierarchical architecture of application systems for a manufacturing company

Businesses are already embedded in agent networks of a sort, and in the past decades, they have had to adjust to their evolving environments. Increasingly, relations with their suppliers and clients have become paperless and what is called dematerialized. The data sources that feed them will drastically increase in the foreseeable future, both in number and in variety (CRM, Web portals, emails, social networks, collaboration systems, HR portals). In particular, IoT sensors monitoring manufactured products in real time will generate large volumes of remote data inflows and will bring businesses unstructured data that must be adequately processed. New policies of data governance will have to be promoted and deployed for reaping the benefits of the availability of "data lakes".


The upper layer (customer care) and the lower layer (resources management) of the IS architecture depicted in Figure 3.15 are expected to be the most impacted. Their configuration will go through a drastic disruption to meet the processing requirements appropriate for interacting electronically with clients (customer care) and suppliers (resources management) via portals, e-shops, EDI systems, IoT sensors and so on. Each agent of these layers (sales, order handling, problem handling, purchasing, inventory, etc.) will receive or send managed transferred files from/to other agents, which are either other in-house agents or the front office, the enterprise's trading partners, clients or suppliers. IT systems are purpose-built for supporting efficient collaboration between all the in-shore and off-shore stakeholders of the business entity. Three types of systems can be identified.

System of records (SoR)

These are the traditional EDP (electronic data processing) systems that manage structured data related to business operations (financials, payroll, CRM – customer relationship management, inventory, order entry, HR – human resources, product life cycle management). They have been designed on the basis of management concepts such as MRP, ERP and CRM (Briffaut 2015). They include data storage and access systems. They are the authoritative source of enterprise data.

System of engagement (SoE)

This system consists of the front offices of a business with its clients on one side and its suppliers on the other side. It is device-centric and associated with a variety of communication channels, ranging from unstructured data from call centers, mobile devices, IoT sensors and so on to structured data.

System of insight (SoI)

This system is designed to convert the unstructured and heterogeneous data sets received from various sources into structured data that can be fed into the system of records to support decision-making.


IBM (2015) has designed an SoI building block model for offering an enterprise architecture to deal with incoming data flows. It is represented in Figure 3.17.

Figure 3.17. System of insight building blocks to manage incoming data flows from clients by IBM

The consume and collect blocks refer to extracting the data from the transactional system that will be used in the decision-making process, and to collecting and aggregating them in view of further processing. The analyze and report blocks are intended to make sense of incoming data streams and prepare them for feeding the detect and decide blocks. The detect capability refers to discovering the existence of an event or a situation that requires one to make decisions. The capability of the decide block is to engineer the appropriate steps for delivering a decision. The same model can be used to architect the interaction of an e-enabled enterprise with its suppliers. In this case, data are generated by in-house transactions and are transformed into prepared data to be sent to suppliers. It is clear that some in-house data sources correspond specifically to external data sources.
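The following minimal, purely illustrative sketch chains such building blocks together (consume/collect, analyze/report, detect, decide); the thresholds, field names and figures are hypothetical and are not taken from the IBM model.

```python
RAW_EVENTS = [  # e.g. messages consumed from client-facing channels
    {"item": "gear_box", "qty": 3}, {"item": "gear_box", "qty": 5}, {"item": "bearing", "qty": 1},
]

def consume_and_collect(events):
    """Aggregate incoming transactions per item."""
    totals = {}
    for e in events:
        totals[e["item"]] = totals.get(e["item"], 0) + e["qty"]
    return totals

def analyze_and_report(totals, history_avg):
    """Compare aggregated demand with historical averages."""
    return {item: qty / history_avg.get(item, qty) for item, qty in totals.items()}

def detect(ratios, threshold=1.5):
    """Flag items whose demand deviates enough to require a decision."""
    return [item for item, ratio in ratios.items() if ratio >= threshold]

def decide(flagged):
    """Turn detected situations into actionable steps for the system of records."""
    return [f"raise replenishment order for {item}" for item in flagged]

ratios = analyze_and_report(consume_and_collect(RAW_EVENTS), history_avg={"gear_box": 4, "bearing": 2})
print(decide(detect(ratios)))
```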


The SoR, SoE and SoI systems are subject to constant adaptation in order to be kept in line with the enterprise's evolution. In order to keep their adaptation under control, they are generally configured within a framework that is called an enterprise architecture (EA). A wide variety of definitions have been given to this concept. Gartner, in its IT glossary entry on enterprise architecture (gartner.com, retrieved July 29, 2013), defines it in the following terms:

"EA is a discipline for proactively and holistically leading enterprise responses to disruptive forces by identifying and analyzing the execution of change toward desired business vision and outcomes. EA delivers value by presenting business and IT leaders with signature-ready recommendations for adjusting policies and projects to achieve target business outcomes that capitalize on relevant business disruption. EA is used to steer decision-making toward the evolution of the future state architecture".

Traditionally, the role of the IS within the framework of a business approach is visualized as shown in Figure 3.18. When a business unit is described as a system, the purpose is controlling its business operations. Three entities have to be identified: the controlled system, the controlling system and the information system. The controlled system, often called the transformation system because it converts inputs into outputs, is generally modeled as a process. The relationships between these three entities are explicit from Figure 3.18 (for more details, see Briffaut (2015)).

It is worth elaborating on Figure 3.18 to understand the features of the system approach to business description and especially the meaning of direct and indirect control. Direct control refers to direct action on the controlled process to maintain or change its state. Indirect control resorts to some entity external to the system to influence the state of the controlled process by means of inputs.


Figure 3.18. Articulation between controlled process, information system and controlling (decision) system in a manufacturing context with a cybernetic approach

Let us take an example in a manufacturing context to explain how the messages exchanged between the involved entities are connected and how their contents trigger decisions. The controlled process is assumed to be a manufacturing process made of storage and production activities. A message coming from the market place (environment data) is captured and processed by the information system (IS). The message content says that a market slump is forecast. It is directed to the production scheduler in an appropriate format (control data). As a result, the scheduler decides to reduce the production level by releasing orders to the manufacturing shops (direct control) on the basis of inventory levels (process data) and to send orders to suppliers to decrease the number of deliveries (indirect control).
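A minimal sketch of this control loop is given below; the rules, messages and figures are illustrative and are not taken from the book.

```python
def information_system(environment_message: str) -> str:
    """Turn environment data into control data for the decision system."""
    return "reduce_output" if "slump" in environment_message else "keep_output"

def scheduler(control_data: str, inventory_level: int):
    """Controlling (decision) system: choose direct and indirect control actions."""
    if control_data == "reduce_output" and inventory_level > 100:
        return ("release fewer orders to the shops",       # direct control
                "ask suppliers to cut deliveries")          # indirect control
    return ("keep current shop orders", "keep supplier deliveries unchanged")

control_data = information_system("market slump is forecast")    # environment data
actions = scheduler(control_data, inventory_level=140)            # process data
print(actions)
```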


What has to be changed to cope with an e-enabled context of digital interaction with clients and suppliers and of big data flows? The answer is straightforward: indirect control messages and environment data flows have to go through a layered system of SoE, SoI and SoR, as sketched in Figure 3.19.

Figure 3.19. Articulation between controlled process, information systems and controlling (decision) system in an e-enabled and data deluge context

3.4.2.4.4. Complexity of data governance and management

Data governance and management are key issues in a data deluge environment. The situation fits what is described as a complex system. Three clusters of agents are in interaction, namely the end users of business data, the specialists in charge of delivering services to the end users, and the technical infrastructure of data servers and telecommunication facilities. All these agents have to come to terms dynamically in an economic and technical context which is always on the move. Each cluster has its operating rules, and synchronization of requirements between the clusters is not always – not to say rarely – effective. It is easy to figure out that this situation generates not only frictions between the stakeholders but also potential chaos, which can go out of control.


The business end users are considered owners of their data and as such demand simplified and facilitated access to their data 24 hours a day, 7 days a week, or according to SLAs (service-level agreements). When they detect that they require new data or new patterns of managed data, they put pressure on the IT specialists to get what they want as soon as possible. The time to delivery can generate frustration. The number of data specialists involved in data processing and the associated services – data preparation (appropriate data coding), database architecture, data analytics, real-time data provisioning to end users (data science) – is such that efficient collaboration between them is not straightforward. Scalability through modular systems has also become a key performance indicator to assess the quality of data management. The infrastructure refers to computing and storage capacities on the one hand and to telecommunication facilities on the other hand.

The matching between these three agent clusters has to be secured through an adequate organizational structure, all the more so when the technical infrastructure and the data processing (application programs and databases) evolve under the pressure of end users' requirements and the availability of new technologies. Managing the evolution of a legacy system is a daunting challenge. In many cases, it is not well documented, and changing the properties of a subsystem can generate chaos going out of control.

Systems learn more easily by removing parts. In general, we know what is wrong with more clarity than what is right, and our knowledge grows by subtraction. Actions that remove are more robust than those that add, because addition may have unseen, complex feedback loops. Adding a new component to a system increases the number of interactions in the system, the nature of which may be uncertain and may not be maintained in terms of functions over time. This approach, "learning by subtraction", is explicitly or implicitly used to deal with complex evolving systems.


3.4.2.4.5. Specific features of manufacturing information systems

Manufacturing corporations are faced with specific situations. A buzzword found in the technical press and bruited at conferences organized by a wide spectrum of stakeholders revolving around this industrial sector is digitalizing manufacturing. Examples of this broad notion are 3D printing, the Internet of Things (IoT), mass customization and, last but not least, big data or data deluge. In fact, this paradigm aims to marry information and communication technology to the process by which highly complex products are designed, manufactured and delivered to the final customer. Data flows add a massive amount of leverage in advanced manufacturing.

Information technology in advanced manufacturing

Large technology companies operate what can be called a digital factory, where the real world of hardware and software is merged with the virtual world, and where simulation software elements allow us to assess whether the designed product meets the operational requirements set by the designers and can be manufactured with the equipment units and competencies available in the company.

The concept of CIM (computer-integrated manufacturing) was introduced by IBM in the 1970s. The basic idea was to store in one database all the specifications of products in terms of materials contents and manufacturing operations associated with equipment units. Figure 3.20 describes the main features of this concept. Let us explain the main functions and their relationships. The mission of the product development function is to design products in terms of components and materials required to manufacture final products, either manufactured in-house or obtained from third-party suppliers (BOM – Bill of Materials), and manufacturing operations expressed as programs for numerically controlled (NC) tools according to a routing appropriate to the job shop layout. Customer orders are collected by the sales department, and their fulfillment is scheduled for making cost-efficient use of


manufacturing resources. The implementation of schedules triggers the materials requirement planning (MRP) function so that manufacturing and procurement orders are released in time to meet the due dates of the production schedules for finished products.

Figure 3.20. Building blocks of the computer-integrated manufacturing (CIM) concept by IBM
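As an illustration of the bill of materials and materials requirement planning logic just described, here is a minimal sketch of a gross-requirements BOM explosion. The product structure and quantities are hypothetical, and a real MRP run would also net requirements against inventory, apply lot-sizing rules and offset by lead times.

```python
from collections import defaultdict

# Hypothetical bill of materials: parent -> list of (component, quantity per parent)
bom = {
    "bicycle": [("frame", 1), ("wheel", 2)],
    "wheel": [("rim", 1), ("spoke", 32), ("tyre", 1)],
}

def explode(product: str, quantity: float, requirements=None):
    """Recursively convert demand for a finished product into gross
    component requirements (no inventory netting, no lead-time offsetting)."""
    if requirements is None:
        requirements = defaultdict(float)
    for component, qty_per in bom.get(product, []):
        requirements[component] += quantity * qty_per
        explode(component, quantity * qty_per, requirements)
    return requirements

# Gross requirements for a customer order of 100 bicycles
print(dict(explode("bicycle", 100)))
# {'frame': 100.0, 'wheel': 200.0, 'rim': 200.0, 'spoke': 6400.0, 'tyre': 200.0}
```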

A software package called PLM (Product Lifecycle Management) allows us to simulate production processes and robotics ahead of time in order to optimize engineering, procedures, quality, load time and uptime, and thus to make production more flexible and reliable in terms of delivery lead times. Before a manufacturing line is built and operated, simulation makes it possible not only to reduce and bring under control the time needed to put a new product on the market (time to market), but also to operate the manufacturing line with limited downtime.


Sensors collect real-time data about the states of production lines and feed them to software packages. These data are used for predictive maintenance. In order to fulfill customer orders on time and with the committed level of quality, it is critical to rely on fully operative equipment as scheduled by planning. Conventional approaches to equipment maintenance establish a schedule for inspecting and maintaining all the components of the equipment units. This schedule is fixed and is based on the mean time before failure of each of the components. Predictive maintenance, by contrast, reacts to the measured condition of the equipment (a minimal sketch contrasting the two policies is given at the end of this passage). In a connected multi-factory environment, analytics of maintenance data can be shared with other sections of the enterprise manufacturing facilities that use the same type of tool. Going one step further, the data from an individual tool can be sent to the equipment vendor. This piece of information can initiate discussions about improvements and feed into the supplier's delivery schedule. Furthermore, the concept of proactive maintenance does not stop at the factory wall.

The challenge is to make meaningful analytics out of these data for the benefit of customers. Many people do not yet understand why customers would pay for information that makes using the products they have bought easier, better, less costly or more valuable when the IoT is used to interact with vendors. Sensors embedded in products deliver data that make sense in terms of optimized operations, utilization rates, predictive maintenance and better support to the user. This set of data can become the holy grail of innovation, with the proviso that data analytics is carefully carried out by manufacturers. In fact, weak signals can be hidden in the main stream of data flows, and each type of weak signal can be relevant to different business stakeholders.

Data analytics gives a company a lot of information it can use to optimize and shorten the value chain. This value chain consists of the suppliers of your suppliers (supply network), your company, your customers and their customers (distribution network). The purpose is to make your products faster, more cost-efficiently and more flexibly, in small lot sizes, while providing customers with enhanced support.
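The contrast between fixed-interval maintenance and condition-based (predictive) maintenance mentioned above can be sketched as follows. The thresholds, intervals and sensor readings are hypothetical; the point is only that the condition-based rule reacts to the measured state of the equipment rather than to elapsed time.

```python
MTBF_HOURS = 500          # hypothetical mean time before failure of a component
VIBRATION_ALARM = 7.0     # hypothetical vibration level (mm/s) indicating degradation

def scheduled_maintenance_due(hours_since_service: float) -> bool:
    """Conventional policy: service at a fixed fraction of the MTBF."""
    return hours_since_service >= 0.8 * MTBF_HOURS

def condition_based_maintenance_due(recent_vibration: list) -> bool:
    """Predictive policy: service when the sensed condition degrades,
    here a sustained vibration level above an alarm threshold."""
    last_readings = recent_vibration[-5:]
    return len(last_readings) == 5 and min(last_readings) > VIBRATION_ALARM

# A machine serviced 350 hours ago, whose vibration has drifted upwards
readings = [3.1, 3.4, 7.1, 7.2, 7.5, 7.9, 8.4]
print(scheduled_maintenance_due(350))            # False: the calendar says "wait"
print(condition_based_maintenance_due(readings)) # True: the sensed condition says "act now"
```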


The manufacturing company can cut out the links of its value chain that provide the least value. Hence, its position needs to be understood: where is the manufacturing company in the global value chain? How can it remain a key link by providing more value than anyone else in the chain?

Customers no longer look for products but for the services delivered by products. This is a paradigm shift. Many manufacturing companies claim that they sell solutions, applications and comprehensive systems rather than products, namely value to customers instead of functional capabilities. Car manufacturers are a telling example with the Internet of Things for cars. A connected car is a digitized vehicle with a built-in wireless network enabling advanced information and entertainment systems, real-time location and routing services, vehicle-to-vehicle and vehicle-to-infrastructure communication systems, and remote diagnosis and update applications. It is easy to figure out that this approach to the market, especially remote diagnosis and update applications, not only entails complex organizational methods and the deployment of appropriate information systems for car manufacturers, but also generates flows of data that have to be processed and analyzed.

A state of mind different from B2C and B2B offerings is emerging, namely the "business to society" (B2S) enterprise. The B2S concept means contributing to civil society's development in the world by providing people with reliable, safe services and making life in cities more bearable. This may mark the start of changing relationships between industry and civil society. Company employees are a special part of society and as such have to be made aware of their roles with respect to the social community they belong to. As company members, they deliver products and services to meet the requirements of their fellow citizens and add value to the company's inputs, which allows them to receive salaries and, by fulfilling their own needs, to keep other economic actors active. The final purpose of any company is durability through sustainable courses of action and societal responsibility.

Industry 4.0

What is meant by Industry 4.0? Industry 4.0, like Industry 3.0, draws on telecommunication services based on the Internet protocol of


communication between message emitters and receivers. Industry 3.0 refers to the automation of machines and processes monitored by a single computer platform associated with network facilities. Industry 4.0 covers the concept of systems run on interconnected platforms combining engineering and manufacturing that can deal with global networked organizations. For the sake of clarity, access to a computer server to retrieve data-laden messages on request with the Internet protocol was called Internet 1.0. When an additional service became available, namely the capability to feed data from a remote terminal into a computer server, this service was called Internet 2.0.

The manufacturing flow is organized to make small lot sizes to meet customer orders on demand. The software steers the manufacturing process so that small lot sizes can be delivered cost-efficiently, at the due date and with the quality level committed to the customer. The manufacturing process is monitored by machines talking to machines. Production is stopped as soon as a defect is detected, so that finished products do not have to be re-worked when failures are found at the final stage: "do it right the first time" is the paradigm underlying the manufacturing procedures and layout. Industry 4.0 is intended to take the cost of scale to zero: regardless of the lot size, the unit cost price is about the same. In a simple (and here merely illustrative) unit-cost model, unit cost ≈ setup cost / lot size + variable cost per unit; when setup costs tend towards zero, as with additive manufacturing, the unit cost becomes nearly independent of the lot size. This technical situation offers the possibility of converting standardized products into customized products, allowing for a competitive advantage. A product whose specifications are sent via the Internet will be built to order. Specific labor skills are required. Cost-efficient 3D printing makes possible lot sizes of one, the dream of the just-in-time endeavor launched by Japanese industry in the 1970s.

3.5. References

Aldrich, H. (1979). Organizations and Environments, Prentice Hall.

Box, G. and Jenkins, G. (1970). Time Series Analysis, Forecasting and Control, Holden Day, San Francisco.


Briffaut, J.P. (2015). E-enabled Operations Management, ISTE Ltd, London and John Wiley & Sons, New York.

Dunbar, R.I.M. (1992). "Neocortex size as a constraint on group size in primates", Journal of Human Evolution, vol. 22, pp. 469–493.

Dunbar, R.I.M. (2003). "The social brain: mind, language and society in evolutionary perspective", Annual Review of Anthropology, vol. 32, pp. 163–181.

Feigenbaum, M. (1992). "Foreword", in Peitgen, H.-O., Jürgens, H., Saupe, D., Chaos and Fractals, Springer-Verlag.

Gonçalves, B., Perra, N. and Vespignani, A. (2011). "Modeling users' activity on Twitter networks: validation of Dunbar's number", PLoS ONE, e22656, available at: www.plosone.org.

Granger, C.W.J. and Newbold, P. (1987). Forecasting Economic Time Series, Academic Press, New York.

IBM (2015). Systems of Insight for Digital Transformation, Red Books, SG24-8293-00.

Kahneman, D. (2012). Thinking, Fast and Slow, Penguin Books Ltd.

Katz, D. and Kahn, R. (1978). The Social Psychology of Organizations, John Wiley & Sons, New York.

Lawrence, P. and Lorsch, J. (1967). Organization and Environment, Division of Research, Graduate School of Business Administration, Harvard University Press.

Ross Ashby, W. (1956). An Introduction to Cybernetics, Methuen, London and New York.

Rovelli, C. (2017). Reality is Not What it Seems: The Journey to Quantum Gravity, Penguin Books Ltd.

Taleb, N.N. (2005). Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets, Random House Inc., New York.

Taleb, N.N. (2008). The Black Swan, Penguin Books Ltd.

Turing, A.M. (1952). "The chemical basis of morphogenesis", Philosophical Transactions of the Royal Society of London B, pp. 37–72.

Conclusion

C.1. Some general considerations

Few people are aware of the full implications of chaos science. Many people still believe that, with enough data input (even a deluge of data), accurate long-term predictions are possible, because this fits into their conventional thinking. As human beings, we are not at ease with the feeling that we are not in full control of our destiny. For some, this constraint questions our capability to exert our free will.

"Big data" is the notion that more information than ever can be collected about the world. It is the object of a lot of speculative ideas. The most controversial is that big data will result in algorithms that will know people better than they know themselves, and that this knowledge could be used by businesses or governments for manipulative ends, while undermining the very idea of individual freedom. The danger is certainly real. As more of the world becomes tailored around individuals' personality traits and interests, people risk becoming passive recipients of AI decisions. However, in any case, accurate long-term forecasting will remain fraught with uncertainty.

The majority of the scientific community will come, sooner or later, to consider in one way or another that the scientific revolution brought about by the development of complexity and chaos science is to be compared with the upheaval brought about by the theories of relativity and quantum mechanics, but with a critical difference. Only theoretical physicists are exposed to the theories of relativity and



quantum mechanics. The outfalls of complexity and chaos science will affect our daily lives more and more, because we are more and more dependent on nonlinear systems of systems embedded in our environments.

The purpose of the scientific study of dynamical systems, after the first discoveries of Poincaré at the turn of the 20th Century and the use of computer power (the butterfly effect discovered by the meteorologist Lorenz), is to explain how the future states of a system could be forecast from its initial states. In order for such a forecast to be reliable and useful, nearby initial states should lead to future states that do not take courses different from one another. However, it appears that in many cases, when the situation is modeled by nonlinear equations, even close initial states lead, within short intervals of time, to future states that diverge quickly from one another. This can be qualified as time-chaos.

Some renowned scientists (de Broglie, a Nobel Prize winner, and Bohm) put forward the idea that hidden variables could be imagined when trying to interpret Schrödinger's equation in terms of physical parameters accessible to the human senses of perception. Darwin's theory of evolution is a telling example of work successfully carried out within a framework of hidden variables. It is worth elaborating on this instance, because it shows that penetrating analysis is a critical factor for assessing any domain of knowledge successfully. In the wide diversity of life on earth, Darwin distinguished a new principle for the organization of matter and the emergence of design. He was totally unaware of the existence of DNA. Yet despite his ignorance of the hidden variables of inheritance, he derived, through his shrewd analysis of the world around him, his theory of evolution, published in 1859 in an overwhelmingly convincing treatise titled "On the Origin of Species". How a correct theory could be delivered when nothing was known about the molecular basis of genetics is a daunting question. The answer is rather straightforward. The hidden variables of inheritance imprint their character on the relationships between all organisms. With minor changes, these variables endure through generations and are mixed like a multifold


coin-toss in the game of life. The incessant experiment that is natural selection is constrained to produce results in which correlations inherent in the genotype are encoded in the phenotype. Darwin arrived at his theory not by carrying out experiments but by simply observing and analyzing the phenomena of an experiment that had been running for billions of years. Each organism is a data point in the theory of evolution.

When these ideas are transposed to universes where human relations are involved, a strong stochastic factor plays a role that is difficult to assess. The second-order cybernetics approach can help provide a general modeling framework, i.e. the observed object and the observing object. Stochastic or deterministically chaotic, whatever the actual nature of unsmooth behaviors, long-range forecasting is unreliable. "Long range" has to be understood as a function of the phenomenon under consideration.

Another interesting example can be found when trying to understand the relationship between macroeconomics and microeconomics. At first sight, it can reasonably be thought that macroeconomics can be described as the behavioral aggregation of microeconomic agents, vernacularly called "homo economicus". A priori, teaching and learning macroeconomics should not generate any difficulty. In practice, they give professors and students the jitters. In fact, the subject is notoriously intricate: it is difficult to explain well and to convey macroeconomic intuition, while clear answers have to be delivered to muddled students, not to say laymen. Using math is an escape from thinking, and clear answers do not leap out of the equations. In addition, theorists (classical, Keynesian, monetarist, new classical, new monetarist, etc.) disagree about so much, and textbooks disagree about so little, that different models are not always explained consistently with each other or even with themselves. The result is that many professors must teach concepts, notions and paradigms they do not believe in. Furthermore, the field is full of faux amis, words that mean something else in everyday parlance. The concept of equilibrium, for


instance, is often used. Thermodynamics tells us that a living organism cannot survive in a state of equilibrium but must be open to its environment, in a state of stability, in order to secure its survival. The lack of semantic interoperability and coherence between the two disciplines explains why macroeconomics is often called the "dismal science", disconnected from the common behavior of individuals and perceived as complex, namely not easily accessible to a wide public.

C.2. Complexity versus chaos

At the present time, the notion of a complex system is not yet precisely delineated, as the idea is still somewhat fuzzy and differs from discipline to discipline and author to author. But the complex systems we are most keen to understand are the biological ones, which concern not only our organic bodies but also our lives as mindful beings in societal groupings. Living organisms are the iconic images of complexity in many people's view. Our purpose here is not to come to a precise definition, but to try to convey the meaning of complexity by enumerating what seem to be its most typical properties. It is noteworthy that systems theory was pioneered by biologists, or by scientists with direct or indirect connections to biology. Most of these properties are shared by many non-biological systems as well.

C.2.1. Complex systems contain many interdependent constituents interacting nonlinearly

Complex systems refer either to phenomena observed in nature or to man-made artifacts constructed on the basis of what we call the laws derived from the study of nature. The former are approached by decomposition (top-down), in line with the reductionist method proposed by Descartes, whereas the latter are built with a composition approach (bottom-up), integrating subsystems step by step. In both cases, systems are cognitively complex because they are perceived as being made of many constituents, generally interacting nonlinearly. Interdependent implies interacting; otherwise, the constituents of the system would be independent.

Recall that nonlinearity is a necessary condition for observing chaotic phenomena, and that almost all nonlinear systems whose "operational" phase space has three or more dimensions are chaotic in at least part of their phase space. This does not mean that all chaotic systems are complex: the logistic equation is a telling example. Chaotic behavior can appear with very few constituents; complexity does not. This establishes a decisive distinction between chaos and complexity.
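As a minimal illustration of how chaotic behavior can arise with a single constituent, consider the logistic equation just mentioned. The sketch below iterates x_{n+1} = r·x_n·(1 − x_n) from two initial states that differ by one part in a million; in the chaotic regime (here r = 4, a standard choice used purely for illustration) the two trajectories soon diverge, which is the "time-chaos" described in section C.1.

```python
def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 40) -> list:
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)   # initial state perturbed by 1e-6

# The separation between the two trajectories grows until it is of order 1:
for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.6f}")
```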


Interdependence may mean different things according to the type of system envisaged. Consider first a non-complex system with many constituents, say a crystal slab. A crystal is made up of a spatial pattern of ordered rows of atoms. Cut away 10% of the crystal slab: on the whole, the very nature of the crystal system has not changed and the laws governing its properties remain the same. Now carry out a thought experiment with a complex system in which functional capabilities and roles are assigned to well-identified agents. Take a human body and just cut off a leg! The result will be drastically more spectacular than for the crystal. The new system will be entirely different from the previous one, because it will have to adjust its objectives to its new capabilities. A one-legged human being is not provided with the same mobility capabilities as a two-legged one!

C.2.2. A complex system possesses a structure spanning several levels

It is often helpful to have access to more than one level of understanding of a situation in our minds. The point is not simply to maintain different descriptions of a single system; what can be confusing is that a single system admits descriptions on different levels which nevertheless resemble each other in some way. Computer systems are a case in which many levels of description coexist for a single system, depending on the point of view, and where


all levels are conceptually interoperable with one another. When a computer program is running, it can be envisioned from a number of levels. At each level, the description is given in the language of computer science, which makes all of the descriptions compatible. Yet the views we get on the different levels are each important.

In data processing, all programs, no matter how large and complex and whatever the language in which they have been coded, must be transformed into sequences of instructions executable in machine language. These instructions constitute a repertoire of operations that are "understood" by the electronic hardware. Several intermediary languages are required to run a program written in a "high-level" language such as COBOL, Fortran, Pascal or C++, and such a language typically does not reflect the structure of the machines that will run the programs written in it. Two main types of translator into machine language have been developed, namely compilers and interpreters. Both convert a program written in a high-level language into machine language, and the resulting code is machine-dependent. Compilers translate all the statements first, before the machine code is executed; interpreters, instead of translating all the statements first and then executing the machine code, read one statement and execute it immediately. This technique has the advantage that a user need not have written a complete program before testing its execution.

As long as a program is running correctly, it hardly matters how it is described or thought of in its functioning. When something goes wrong, it is important to be able to think on different levels. If, for instance, the machine is instructed to divide by zero at some stage, it will come to a halt and let the user know of this problem by telling them where in the program the questionable event occurred. However, the specification is often given on a lower level than the one in which the programmer wrote the program.
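As a small illustration (in Python, used here purely as a stand-in for the high-level languages cited above), the same halt can be reported at the level of the source language, which is usually the level at which the programmer wants to think:

```python
def evaluate(a: float, b: float, z: float) -> float:
    """Evaluate the algebraic expression (A + B) / Z."""
    return (a + b) / z

try:
    evaluate(3.0, 4.0, 0.0)
except ZeroDivisionError as error:
    # The event is reported in source-level terms ("division by zero"), not as a
    # memory location or a DIV instruction; an uncaught error would also name
    # the source line and function where it occurred.
    print("Execution stopped:", error)
```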


Here are three parallel descriptions of a program grinding to a halt:

– Machine language level: "Execution of the program stopped in memory location 111000101110001".

– Assembly language level: "Execution of the program stopped when the DIV (divide) instruction was hit".

– Compiler language level: "Execution of the program stopped during evaluation of the algebraic expression (A+B)/Z".

The idea of assembly language is to "chunk" the individual machine language instructions by referring to them by a name instead of writing sequences of bits. Every level has a specific structure. This is an essential and radically new aspect of a complex system, and it leads to the next property.

C.2.3. A complex system is capable of emerging behavior

Emergence occurs at a bifurcation point, when the pattern of inner-system interactions is changed under the pressure of the environment or of a disruption in the inter-agent interactions. In a multi-level system, certain behavior observed at a certain level can be said to be emergent if it cannot be understood by studying, separately and one by one, every constituent of this level, each of which may also be a complex system made up of finer levels. Thus, the emerging behavior is a new phenomenon unique to the level considered, and it can impact the behavior of the whole multi-level system. If the observed behavior cannot be explained only by local constituents, influences from other levels have to be taken into account. An emergence at a certain level can trigger chain reactions in other levels, resulting in a global emergence.


The human body is capable of walking. This is an emerging property of the highest level of human capabilities, which can be decomposed into a hierarchy of capabilities. If you study only a head, or only a trunk, or only a leg, you will never understand the mechanism of walking.

The combination of structure and emergence leads to self-organization, which is what happens when an emerging behavior has the effect of changing the current global structure or creating a new one. There is a special category of complex systems which was worked out especially to accommodate living beings. They are the example par excellence of complex adaptive systems. As their name indicates, they are capable of changing themselves to adapt to a changing environment. They can also influence the environment to suit themselves. Among these, an even narrower category is self-reproducing: such systems experience birth, growth and death. Needless to say, we know very little that is general about such systems considered as theoretical abstractions. We know a lot about biology. However, we do not know much, if anything, about other kinds of life, or about life in general.

C.2.4. Complexity involves reciprocal action between chaos and order

We already mentioned that complexity and chaos have in common the property of nonlinearity. Since practically every nonlinear system is chaotic part of the time, this means that complexity implies the presence of hidden chaos. However, the reverse is not true. Chaos is a subject which has already received much attention from the scientific community, but complexity has a much wider embrace and covers many cognitive situations which have nothing to do with chaos. Chaos requires mathematics to get a quantitative grasp of it, and by now much progress has been made in apprehending it. Complexity is still almost unknown when human agents are involved. Einstein's remark, "As far as the laws of mathematics refer to reality, they are not certain; as far as they are certain they do not refer to reality", holds true especially in the realm of human


organizations, where individual and collective behaviors cannot be realistically modeled by equations.

The field of chaos may appear to be a subfield of the field of complexity. Perhaps the most striking difference between the two is the following. A complex system can always be analyzed by scaling levels: while chaos may reign on level n, the coarser level above it (level n+1) may be perceived as self-organizing, which in a sense is the opposite of chaos. Many people have suggested that complexity occurs "at the edge of chaos", but no one has been able to elicit this notion totally. Presumably, it means something like the following. Imagine that the equations governing the evolution of a system contain some "control" parameter which can be adjusted, depending on the context. We know that most nonlinear systems are not chaotic in all conditions: they are chaotic for some ranges of values of the control parameter and not chaotic for others. Then there is an edge of chaos, i.e. the precise value of the control parameter at which the nature of the dynamics switches. It is like a critical point in phase transitions, or the point where long-range correlations become most influential. Complex systems, such as biological systems, manage to modify their environment so as to operate as much as possible at this edge-of-chaos place, which would also be the place where self-organization is most likely to occur.
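A concrete, if simplified, picture of such a control parameter is given by the logistic map used earlier: sweeping its parameter r moves the dynamics from a steady state, through periodic cycles, to chaos (the onset lies near r ≈ 3.57). The sketch below is purely illustrative; the parameter values and iteration counts are arbitrary choices, not taken from the text.

```python
def long_run_states(r: float, x0: float = 0.5, transient: int = 500, keep: int = 8) -> list:
    """Iterate the logistic map, discard the transient, and return a few
    of the states visited afterwards (rounded for readability)."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    states = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        states.append(round(x, 4))
    return states

for r in (2.8, 3.2, 3.5, 3.9):   # fixed point, 2-cycle, 4-cycle, chaos
    print(f"r = {r}: {long_run_states(r)}")
```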


C.2.5. Complexity involves interplay between cooperation and competition

All complex systems that concern us very closely are dissipative systems (section 3.1, Chaos). All social systems and all collections of organisms subject to the laws of evolution belong to this category. Examples are plant populations, animal populations, other ecological groupings, our own immune system, and human groups of various sizes such as families, tribes, city-states, social or economic classes and, of course, modern nations and supranational corporations. In order to evolve and stay alive, as Teilhard de Chardin noticed, living organisms tend to develop more and more complex structures. In order to remain complex, all the systems directly or indirectly linked to humanness need to obey the following rule:

C.2.5.1. Complexity implies interplay between cooperation and competition at different organizational levels

Once again this refers to interplay between system levels. The usual situation is that competition on level n is nourished by cooperation on the finer level below it (level n-1). Insect colonies such as ants, bees or termites provide a telling demonstration of this. For a sociological example, consider the alumni of Oxbridge in the UK. They compete with each other for economic success and top corporate positions. They strive to find the most desirable spouses and to provide their young heirs, through the educational system, with the same social status they themselves enjoy. And they succeed better in this striving if they have the unequivocal and earnest support of all their university fellows, and also if all their fellows have a chance to take part in their success by exchanging information, offering mutual help, etc. Once this competition–cooperation dichotomy is understood, the old cliché of Darwin's "survival of the fittest", which has done so much damage to the public understanding of evolution, is left far behind.

Complex systems, such as the weather, the economy and social organizations, face the problems dealt with in thermodynamics, namely understanding the relationship and interplay between order and disorder. For a closed system in thermal equilibrium, the transition between order and disorder is the consequence of a compromise: the energy part of the free energy tends to create order, while the entropy part, weighted by temperature, tends to break the order down. Thus, order and disorder can be associated respectively with cooperation, which yields a compelling structured pattern of a sort, and competition, through which agents gain a sort of freedom of action. Gaining control of this split is a daunting challenge, especially when human behaviors are involved, all the more so as uncertainty is an integral part of the game.

While the approach to reality through systems thinking was pioneered by Ludwig von Bertalanffy, the French sociologist Edgar Morin has carried out extensive work on complexity thinking. He has


produced the following definition: "the issue related to complexity is not completeness but incompleteness of knowledge. In some way complexity thinking tries to take into account what truncating kinds of thinking get rid of, and I call them simplifiers. Thus it fights not incompleteness but truncating… Complexity thinking in its core, although it endeavours after multidimensionality, has a principle of incompleteness and uncertainty" (Morin 1990, p. 164).

Considering its features of incompleteness and uncertainty, complexity thinking can be linked to what Abraham Moles calls the "sciences de l'imprécis", the sciences of the imprecise (Moles 1995). According to him, recursive analytical analysis has proved, explicitly or implicitly, to be the most appropriate tool for tackling complexity: "It is always possible, and often helpful, to consider any phenomenon, object, being or message that we perceive in the world as combining a certain number of simple elements of limited variety according to a set of certain rules, called a code or structure. This synthesis will be qualified as a model and its value draws on the accuracy with which its functioning mirrors the initial phenomenon. Identifying structural thinking with what can be called 'atoms' thinking, in the etymological sense of this term, is the essential epistemological fact of this approach. Reconstructing the world from these atoms is the very purpose of the structural methodology, which is applied in three steps:

1) look for which atoms are involved;

2) find out the rules of the assembly code for a certain number of these atoms for reconstructing a masquerade of reality;

3) make an appraisal of the masquerade and, if it appears inadequate, go back to step 1."

C.3. References

Moles, A. (1995). Les sciences de l'imprécis. Éditions du Seuil, Paris, p. 148.

Morin, E. (1990). Science avec conscience. Fayard, Paris, p. 164.

Appendices

Appendix 1 Notions of Graph Theory for Analyzing Social Networks

It is beyond the scope of this appendix to expound a fully fledged graph theory. The purpose of the appendix is to describe graph theory at a level which provides non-mathematicians with a working knowledge for analyzing the main properties of social networks.

Graph theory is the study of graphs, mathematical structures used to model pairwise relations between entities. A graph is visually made of vertices or nodes connected by edges. These relations can be directed or undirected. A directed relation means that the interaction is one-way, from the influencer to the influenced. Relations can also be bidirectional, meaning that the influence between the two actors is reciprocal. There are two alternative formalisms, which yield representations either by a diagram of nodes and edges or by a matrix that codes the ties between pairs of nodes. They contain the same information, and thus either one can be derived from the other. Each representation has some advantages. A diagram delivers an immediate visual understanding of the network structure, whereas the matrix representation is better suited to quantitative manipulations of network properties. Three illustrative networks are shown in Figure A1.1 in both formalisms.



Figure A1.1. Three illustrative networks and their two representative formalisms

In the matrix formalism, each column and each row represents a node. At the intersection of a column and a row, 0 means that no relation exists between the two corresponding nodes and 1 means that a relation exists. Figure A1.1 clearly demonstrates that, at first sight, it is difficult to derive the topology of a network from its matrix. The general notation for the value of the tie from node n_i to node n_j on a certain relation H is x_ij, with i ≠ j. When multiple relations exist, the value of the tie from n_i to n_j on a relation H_r is written x_ij^r, with i ≠ j.

Connection networks, described in section 6.3.1, are an example of graphs used within a specific framework to represent the relations between actors and events (or activities). In such a representation, the interaction between actors takes place indirectly, by sharing the same events or activities. The approach of representing interactions between actors by connection networks is grounded in the importance of


individuals' memberships (connections) in collectivities. This modeling tool is especially useful for studying urban social structures or organizational patterns where actors are likely to participate in more than one activity (event). Overlap in group connections allows for information flows between groups and also for coordination of the groups' actions.

Nodes of directed graphs are often classified by using parameters called the indegree d_I and the outdegree d_O. The indegree of node n_i, d_I(n_i), is the number of nodes adjacent to it, whereas the outdegree of node n_i, d_O(n_i), is the number of nodes adjacent from it. On the basis of these parameters, nodes can be classified as:

– isolate if d_I(n_i) = d_O(n_i) = 0;

– transmitter if d_I(n_i) = 0 and d_O(n_i) > 0;

– receiver if d_I(n_i) > 0 and d_O(n_i) = 0;

– ordinary if d_I(n_i) > 0 and d_O(n_i) > 0.

This typology is useful for describing roles or positions in social networks. When dealing with directed graphs, it is necessary to represent each graph by two matrices, one representing the incoming relations to each node and another representing the outgoing relations from each node. When all the nodes of a graph have ties with all the other nodes, the graph is called complete. The number of ties of a complete graph with g nodes is g(g-1)/2.

Multipartite (bi-, tri-, etc.) graphs are graphs in which the nodes can be partitioned into several subsets. They highlight the connectivity in a network and make the indirect chains of connection more apparent than the matrix representation does. These chains of connection can be useful for exploring the long-range correlations between groups of actors when studying change management with the social network model.
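The matrix formalism and the indegree/outdegree typology above can be made concrete with a small sketch. The directed relation below (who influences whom in a five-actor network) is hypothetical.

```python
actors = ["a", "b", "c", "d", "e"]
ties = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "a")]   # hypothetical directed ties

# Adjacency matrix: row = sender n_i, column = receiver n_j, entry x_ij in {0, 1}
index = {name: k for k, name in enumerate(actors)}
matrix = [[0] * len(actors) for _ in actors]
for sender, receiver in ties:
    matrix[index[sender]][index[receiver]] = 1

def classify(name: str) -> str:
    """Classify a node from its indegree and outdegree."""
    out_degree = sum(matrix[index[name]])
    in_degree = sum(row[index[name]] for row in matrix)
    if in_degree == 0 and out_degree == 0:
        return "isolate"
    if in_degree == 0:
        return "transmitter"
    if out_degree == 0:
        return "receiver"
    return "ordinary"

for name in actors:
    print(name, classify(name))
# a: ordinary, b: ordinary, c: receiver, d: transmitter, e: isolate
```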


When considering relations between subsets of elements, it is useful, when relevant, to distinguish between isomorphism and homomorphism. When two sets are called isomorphic, this means that they have the same structural architecture; in other words, there is a one-to-one relation between the elements of the two sets. When two sets are called homomorphic, one-to-many or many-to-one relations exist between the elements of the two sets. Figure A1.2 shows this situation.

Figure A1.2. Contrasting isomorphism and homomorphism between two sets of elements
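A homomorphic, many-to-one relation of this kind can be sketched as a simple mapping from individual actors (the origin set) onto a smaller set of groups (the target set); the names below are hypothetical.

```python
# Many-to-one mapping: several actors are sent to the same group
membership = {
    "alice": "marketing",
    "bob": "marketing",
    "carol": "engineering",
    "dave": "engineering",
    "erin": "engineering",
}

origin_variety = len(set(membership.keys()))    # 5 distinct actors
target_variety = len(set(membership.values()))  # 2 distinct groups

# The reduction of variety (5 -> 2) is the constraint embodied by the homomorphism.
print(origin_variety, "->", target_variety)
```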

The target set in a homomorphic relation has a reduced number of elements with respect to the origin set. This means that the relation between the two sets embodies a constraint of a sort, which reduces the variety of elements of the source and, when relevant, can be interpreted as a reduction of the leverage for action. Relations between nodes can be valued to indicate the strength, intensity or specific properties of the ties between pairs of nodes. Values can refer, for instance, to interaction frequencies, degrees of acquaintance and so on.

One of the primary uses of graph theory in social network analysis is identifying the most important actors embedded in a network, whether


defined at the individual level or aggregated over a group of actors. Central actors are those extensively involved in relations with other actors. This involvement makes them more visible, whether as recipients or as emitters. For non-directional relations, a central actor is defined as one involved in many ties. Access to or control over resources, brokerage of information, etc. are issues well suited to being analyzed with this concept. For directional relations, the distinction between ties received and ties sent allows the status of influencers and of the influenced to be differentiated on the basis of the indegree and outdegree parameters.
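Degree-based centrality of this kind simply ranks actors by how many ties they send and receive: actors whose outdegree dominates act as influencers, while those whose indegree dominates are mostly influenced. A minimal, hypothetical sketch:

```python
# Hypothetical directed ties: (influencer, influenced)
ties = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("d", "c")]

out_degree, in_degree = {}, {}
for sender, receiver in ties:
    out_degree[sender] = out_degree.get(sender, 0) + 1
    in_degree[receiver] = in_degree.get(receiver, 0) + 1

actors = sorted(set(out_degree) | set(in_degree))
# Rank by total involvement (a simple degree centrality for the directional relation)
ranking = sorted(actors, key=lambda a: out_degree.get(a, 0) + in_degree.get(a, 0), reverse=True)

for a in ranking:
    print(a, "out:", out_degree.get(a, 0), "in:", in_degree.get(a, 0))
# "a" emerges as the main influencer (high outdegree), "c" as the most influenced.
```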

Appendix 2 Time Series Analysis with a View to Deterministic Chaos

When time-ordered series of measurements are analyzed to help forecast the future behaviors of all sorts of ecosystems, it is important to detect whether there is some underlying equation for the phenomenon observed or whether it is a stochastic phenomenon.

What is called the phase space of a system is the set of all instantaneous states available to the system. The attractor of a dynamical system is the subset of phase space towards which the system evolves as time elapses. It can be just a point or a limited set of points. When an attractor exists, the trajectories of the time-ordered data points with connecting line segments are treated as fractals in the associated phase space. Fractals are geometric forms with irregular patterns that repeat themselves at different scales. The forms consist of fragments of varying size and orientation but similar shape. The fractal dimension of an attractor is a parameter which characterizes a part of its properties. Dimension is the way to measure the effect of enlargement (scaling) on length, area, volume and fractal objects. Scaling a length by a factor n multiplies it by n, scaling an area multiplies it by n^2, scaling a volume multiplies it by n^3 and scaling a d-dimensional object multiplies it by n^d.



While dimension provides information about the scaling properties of an object, it does not deliver information about the very structure of the object. However, its value gives useful information about the presence of determinism in the phenomenon observed. Many definitions of dimension have been produced since 1919, when F. Hausdorff proposed a definition that is a purely geometric description of the fractal set and does not lend itself to quantitative estimation (Hausdorff 1919). An intuitive definition is as an exponent relating the bulk (volume, mass, information, etc.) of an object to its size (linear distance):

Bulk ≈ Size^dimension, or dimension ≈ [log(Bulk) / log(Size)] as Size → 0

Size → 0 implies that dimension is a local quantity.

Let X(t) be the time series derived from experimental measurements captured at regular intervals τ. The variables of the series are X_k = X(kτ), with k = 0, 1, …, n-1, and they take part in the dynamics of the phenomenon. Our intention is to reconstruct the dynamics of the phenomenon only on the basis of the knowledge of the series [X_k]. If the plot of X_k+1 versus X_k does not show any structure (a cloud of points) for increasing values of k, it can reasonably be deduced that the phenomenon is stochastic. The [X_k] are defined in the phase space spanned by all the variables of the series as time elapses. X_k follows either a curve or fragments of curves called phase space trajectories. Useful conclusions about the geometric structure of these trajectories can be deduced by using a procedure suggested by Eckmann and Ruelle (1985).

In order to characterize the presence of underlying variables embedded in the finite sample of measured variables, we now consider the following sequence of k variables X_i of (k+1) dimensions, defined at equidistant points with a time lag T, a multiple of τ:

X_0 : [X(0), X(T), X(2T), …, X(kT)]


X_1 : [X(τ), X(τ+T), X(τ+2T), …, X(τ+kT)]

X_2 : [X(2τ), X(2τ+T), X(2τ+2T), …, X(2τ+kT)]

…

X_k-1 : [X((k-1)τ), X((k-1)τ+T), X((k-1)τ+2T), …, X((k-1)τ+kT)]

If T is properly chosen, the variables X_i can reasonably be supposed to be linearly independent and offer us the possibility of unfolding the system's dynamics into a multidimensional phase space. When a reference point X_i is chosen and the distances |X_i − X_j| from all the remaining points are computed, the data points within a defined distance r from X_i are counted. Repeating the process for all values of i, the quantity C(r,k) can be obtained:

C(r, k) = k⁻² Σ_{i,j = 1, i ≠ j}^{k} θ(r − |X_i − X_j|)

where θ(x) is the Heaviside step function, θ(x) = 1 if x > 0 and θ(x) = 0 if x < 0. C(r,k) is an integral correlation function of the attractor. It is expected to be proportional to r^d, with d being the dimension of the attractor; d can be deduced from the plot of log C(r) versus log r. What is explained above suggests the following procedure:

– starting from the time series of measurements, compute the integral correlation function C(r,k) of pairwise distances for successively higher values of the dimensionality k of the phase space and for different values of r, down to the minimum inter-point distance;

– when the d versus k dependence stabilizes beyond a certain k, the system represented by the time series of measurements should possess an attractor whose dimensionality is the saturation value of d. The value of k beyond which stabilization appears represents the minimum number of variables required to model the attractor's mechanism.
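A minimal sketch of this procedure, under heavy simplifications, is given below. It builds delay vectors from a scalar series, evaluates the correlation integral defined above for two radii and estimates the exponent d from the slope of log C versus log r. The series used (the logistic map) and all parameter choices (embedding dimension, radii, series length) are purely illustrative; a careful analysis would use much longer series, several radii and embedding dimensions, and would check the saturation of d.

```python
import math

def delay_vectors(series, dim, lag=1):
    """Build delay-embedded vectors of the given dimension from a scalar series."""
    n = len(series) - (dim - 1) * lag
    return [series[i:i + dim * lag:lag] for i in range(n)]

def correlation_integral(vectors, r):
    """Fraction of distinct pairs of delay vectors closer than r
    (proportional to the C(r, k) defined above)."""
    count, total = 0, 0
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            total += 1
            if max(abs(x - y) for x, y in zip(vectors[i], vectors[j])) < r:
                count += 1
    return count / total

# Illustrative deterministic series: the logistic map in its chaotic regime
x, series = 0.4, []
for _ in range(800):
    x = 4.0 * x * (1.0 - x)
    series.append(x)

vectors = delay_vectors(series, dim=3)
r_small, r_large = 0.05, 0.2
c_small = correlation_integral(vectors, r_small)
c_large = correlation_integral(vectors, r_large)

# The slope of log C(r) versus log r between the two radii estimates the dimension d
d = (math.log(c_large) - math.log(c_small)) / (math.log(r_large) - math.log(r_small))
print("estimated correlation dimension d ≈", round(d, 2))
```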


These conclusions complement the contents of section 7.4.1.4, where questions were raised.

References

Eckmann, J.P. and Ruelle, D. (1985). "Ergodic theory of chaos and strange attractors", Reviews of Modern Physics, vol. 57, no. 3, p. 617.

Hausdorff, F. (1919). "Dimension und äußeres Maß", Mathematische Annalen, vol. 79, nos 1–2, pp. 157–179.

Packard, N.H., Crutchfield, J.P., Farmer, J.D. and Shaw, R.S. (1980). "Geometry from a time series", Physical Review Letters, vol. 45, pp. 712–716.

Index

A, B, C

agenthood: in the social world (cursory perspective, 62; social network, 64); in the technical world, 56
AI: current prospects, 43; historical development, 36; neural networks, 40
autopoiesis, 99
Bateson, Gregory, 50
BDI agent: coordination, 86; definition, 69
belief: and attitudes, 73; and logic, 82; definition of, 72; degrees of, 75; trust and truth, 79
Bertalanffy, Ludwig von, 45
bifurcation, 161
Boulding, Kenneth, 27
change management: academic approach, 162; data deluge and information systems, 168; introduction, 159; social networks, 165
chaos: Big Data, 148; chemical systems, 16; MRCM (Multiple Reduction Copy Machine), 144; order out of chaos, 142; organization, 148; quadratic iterator, 127
CMM (coordinated management of meaning), 113
complexity: chaos, 19, 121, 194, 198; data governance and management, 182; levels of, 27; Santa Fe Institute, 11
coordination (agent): multi-agent planning, 87; patterns, 86
cybernetics: first-order, 9; second-order, 48

D, E, F

DAC (Decentralized Autonomous Corporations), 107
Damasio, Antonio, 22
data deluge, see chaos (Big Data)
Descartes, René, 7
Dunbar, Robin I.M., 166
emotions: Darwin and Keynes, 21; neurosciences, 22
forecasting and analytics, 153
Förster, Heinz von, 47

H, K, L, M

holism, 8
Kahneman, Daniel, 146
Luhmann, Niklas, 98
MAS (multi-agent system), 58, 88
Maturana, Humberto, 99
Moles, Abraham, 201
Morin, Edgar, 68, 200

N, O, P

negotiation patterns, 92
neurosciences, see emotions
Ockham, William of, 4
organization: artificial, 106; behavioral complexity, 102; cognitive and behavioral theories, 96; collection of agents, 67; German culture, 97; structural and functional theories, 95; structural complexity, 101; theory, 94
Popper, Karl, 2
Prigogine, Ilya, 14

Q, R, S

quadratic iterator, 127
randomness, 145
Ross Ashby, William, 146
social agent: change project, 167; long-range correlation, 166; network, 63
Sierpinski gasket, 143
structuralism, 30
systems: AI, 35; business monitoring, 136; complexity, 9; dissipative, 15; modeling, 32; non-linearity, 18; structuralism, 24; thinking, 25, 31

T, U, V

Taleb, Nassim Nicholas, 146
Turing, Alan, 19
unpredictability, 133, 157
Varela, Francisco, 99
