
INCOME DISTRIBUTION DYNAMICS OF ECONOMIC SYSTEMS

Econophysics has been used to study a range of economic and financial systems. This book uses the econophysical perspective to focus on the income distributive dynamics of economic systems, examining the empirical characterization and dynamics of income distribution and its related quantities from the epistemological and practical perspectives of contemporary physics. Several income distribution functions are presented which fit income data, along with results obtained by statistical physicists on the income distribution problem. The book discusses two separate research traditions: the statistical physics approach, and the approach based on nonlinear trade-cycle models of macroeconomic dynamics. Several models of distributive dynamics based on the latter approach are presented, connecting the studies by physicists on distributive dynamics with the recent literature by economists on income inequality. As econophysics is such an interdisciplinary field, this book will be of interest to physicists, economists, statisticians, and applied mathematicians.

Marcelo Byrro Ribeiro is Associate Professor at the Physics Institute of the Universidade Federal do Rio de Janeiro, Brazil. His research focuses on econophysics and complex systems, relativity and cosmology, and the philosophy of science. He is a member of the International Astronomical Union and the International Society on General Relativity and Gravitation.

INCOME DISTRIBUTION DYNAMICS OF ECONOMIC SYSTEMS
An Econophysical Approach
MARCELO BYRRO RIBEIRO
Universidade Federal do Rio de Janeiro

University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
79 Anson Road, #06–04/06, Singapore 079906

Cambridge University Press is part of the University of Cambridge. It furthers the University's mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781107092532
DOI: 10.1017/9781316136119

© Marcelo Byrro Ribeiro 2020

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2020
Printed in the United Kingdom by TJ International Ltd, Padstow Cornwall
A catalogue record for this publication is available from the British Library.
ISBN 978-1-107-09253-2 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

To my mother and the memory of my father

When my information changes, I alter my conclusions.
John Maynard Keynes

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.
Arthur Conan Doyle, "A Scandal in Bohemia" (1892)

The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.
F. Scott Fitzgerald, "The Crack-Up" (1936)

Contents

Preface  ix
Acknowledgments  xvii

Part I  Basics  1

1  Economics and Econophysics  3
   1.1  Physics  3
   1.2  Economics  7
   1.3  Econophysics  17
   1.4  Physics, Reality, and the Natural World  20
   1.5  Economics, Reality, and the Real World  27
   1.6  Econophysics and the Empirical World  33

2  Measuring the Income Distribution  56
   2.1  Basic Definitions and Results  56
   2.2  Two-Components Distribution  63
   2.3  Income Distribution Functions  64
   2.4  Empirical Evidence  81
   2.5  Limitations of Curve Fitting  102

3  Piketty's Capital in the Twenty-First Century  105
   3.1  Data-Driven Analysis  105
   3.2  Phenomenology  107
   3.3  Pluralistic Economic Theory  115
   3.4  Fundamental Laws of Capitalism  116
   3.5  The Mechanisms for Rising Inequality  136
   3.6  Summary of the Main Results  141
   3.7  Econophysics after Piketty  142

Part II  Statistical Econophysics  145

4  Stochastic Dynamics of Income and Wealth  147
   4.1  Uncertainty and Risk in Physics and Economics  148
   4.2  Kinetic Models of Income and Wealth Distribution  154
   4.3  Political Economy  182
   4.4  Class Redistribution  190
   4.5  Econophysics of Income Distribution and Piketty  194

Part III  Economic Cycles  199

5  Circular Flows in Economic Systems  201
   5.1  Economic Growth and Trade Cycles  202
   5.2  Uncertainty, Confidence, Investment, and Instability  205
   5.3  The Goodwin Growth-Cycle Macroeconomic Dynamics  210
   5.4  Empirical Evidence of the Goodwin Macrodynamics  223
   5.5  Conclusions  241

6  Goodwin-Type Distributive Macrodynamics  243
   6.1  Structural Stability  244
   6.2  Cyclic Evolution in the Tsallis Distribution  247
   6.3  The Boyd–Blatt Model  249
   6.4  The Keen Model  262
   6.5  Other Extensions of the Goodwin Model  269

Postface  275
References  277
Subject Index  301
Author Index  311

Preface

I

The empirical characterization of the individual income distribution is an old topic in economics. The first systematic studies of this subject are generally attributed to the Italian economist Vilfredo Pareto (1848–1923) who, at the end of the nineteenth century, investigated the problem quantitatively and empirically noted the power-law nature of the income distribution among the richest persons in a society (Pareto, 1897, pp. 299–345). This result became known as the Pareto power law and is still valid today. Shortly afterwards, Max Otto Lorenz (1876–1959) and Corrado Gini (1884–1965) proposed income inequality measures that are still widely used and bear their names: the Lorenz curve (Lorenz, 1905) and the Gini coefficient (Gini, 1912).

Despite this promising start, the empirical characterization and dynamics of income distribution did not receive from mainstream economics the attention it deserved. Except for isolated initiatives, which did not go much further than the essentially descriptive work of these pioneers, mainstream economics basically left this problem on the sidelines of economic research during most of the twentieth century (Atkinson, 1997). It is uncertain why economists became so uninterested in such an important subject. Perhaps they found the problem of the dynamic characterization of income distribution too difficult to deal with given the analytical and conceptual tools used by most economists during this period. Another possible reason could lie in the ideological choices that shape one's views of what counts as an important research theme. As such, economists may have regarded this problem as unimportant, perhaps because they genuinely believed that economic development would by itself solve economic inequality, and then assumed that there was no


need to do any research on this topic. Or maybe the reason for leaving this subject out in the cold for so long lies in the fact that income distribution in modern economies is such a socially sensitive subject that it rendered itself unattractive to career advancement during a politically tumultuous century mostly dominated by ideological rivalry. Indeed, income distribution is a hot topic, both economically and politically, since it goes to the heart of society's views on issues like opportunity, egalitarianism, and the gap between rich and poor, which means that politics is never far behind when one deals with this subject. So, due to its inherently sensitive and potentially controversial nature, it also seems possible that several generations of economists assumed their careers would be better served by avoiding this topic, and chose to concentrate on other problems.

Whatever the reasons, income distribution resurfaced as an important research subject in the early 2000s, at least in part due to the activity of physicists who became interested in economic problems and started to deal with issues that until then had been the almost exclusive domain of economists (Eugene Stanley et al., 2001; Doyne Farmer et al., 2005). Income distribution then became a research topic of the emerging field of econophysics, a term coined by the physicist Harry Eugene Stanley in the mid-1990s to name the use of physical concepts and methods to study economic problems. The work of econophysicists on the income distribution problem bore fruit in the form of new and important results which started to emerge just after the appearance of econophysics itself (e.g., Drăgulescu and Yakovenko, 2001a, 2001b, 2003).
The onset of a serious economic crisis in 2008 caught most economists by surprise (Krugman, 2009), a fact that brought about a major theoretical crisis in mainstream economic theory (Buiter, 2009; Colander et al., 2009; Kirman, 2009; Keen, 2011b; Soos, 2012), and the process that this economic crisis entailed brought renewed concerns about income distribution and inequality back to the spotlight (Krugman, 2014b). And if there were still doubts about the importance and sensitivity of these topics, the influential book by Thomas Piketty (2014) on rising inequality worldwide, and the controversy that it immediately started (Irwin, 2014; Krugman, 2014a; Rankin, 2014; Wolfers, 2014), left no margin for such doubts (Palley, 2014). Indeed, Piketty greatly helped to bring this issue back to the spotlight, not only among economists but also in society at large, by extensively dealing with income and wealth distributions and proposing possible ways of counteracting the increasing polarization of both distributions in several societies. It is in this context that this author felt that a book dealing with the distributive dynamics of economic systems, in a broad theoretical and empirical sense and from the viewpoint of econophysics, would be not only important but also timely.
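The Pareto power law mentioned above can be stated compactly: above some threshold x_min, the fraction of people with income exceeding x behaves as P(X > x) = (x_min/x)^α, where α is the Pareto index. The following sketch, an illustration written for this text rather than anything taken from the book, draws samples from such a law by inverse-transform sampling and compares the empirical tail with the theoretical prediction (the parameter values are hypothetical):

```python
import random

def pareto_sample(alpha, x_min, n, seed=42):
    """Inverse-transform sampling: if U ~ Uniform(0,1], then
    X = x_min * U**(-1/alpha) satisfies P(X > x) = (x_min/x)**alpha."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = 1.0 - rng.random()   # in (0, 1], avoids u == 0
        out.append(x_min * u ** (-1.0 / alpha))
    return out

def tail_fraction(xs, threshold):
    """Empirical complementary CDF at the given threshold."""
    return sum(x > threshold for x in xs) / len(xs)

xs = pareto_sample(alpha=1.5, x_min=1.0, n=100_000)
# Theory predicts P(X > 2) = 2**-1.5, about 0.354; the sample should be close.
print(tail_fraction(xs, 2.0))
```

With 100,000 samples the empirical tail fraction typically agrees with the theoretical value to within about one percentage point, which is the kind of check used when fitting power-law tails to income data.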


II

This book deals with the empirical characterization and dynamics of income distribution and its related quantities from the epistemological and practical perspectives of contemporary physics. In other words, economic problems are approached here from the methodological point of view of econophysics. The goal is to present a set of tools that may contribute to shedding light on the distributive dynamics of economic systems. Hence, it aims at presenting a set of approaches and methodologies that may allow us to go beyond the simply descriptive, often solely verbal, level when discussing this subject.

Due to the youth of such an approach, the theories and models the reader will encounter in the following pages do not form a single, logically intertwined set. Some of them complement each other without a clear logical sequence, while others even contradict one another. But, since most of them are empirically based, they offer a particular view of the problem of the dynamic description of income distribution and its related quantities, like wealth, both empirically and theoretically. The fact that the personal income distribution is not described by a unique theory is not a handicap but an advantage at this stage of our knowledge of the problem, and it also reflects the fact that, after being in practice excluded from the mainstream of economic research for such a long time, this issue has only recently resurfaced as an important research problem. As will be thoroughly discussed in Chapter 1, science is never complete, but always limited. Hence, what this book aspires to offer is a set of viewpoints about income distribution characterization and dynamics as they are seen today from the perspective of econophysics, hopefully serving, therefore, as starting points for further analyses. If the theories and models shown here prove useful, they will remain part of our common body of knowledge and will be further developed.
By useful, I mean those that advance concepts and results that bring new perspectives to the problem at hand and can be developed and submitted to empirical verification. If they are not useful, they will be abandoned or superseded by other theories and models. This is the way science evolves. Physicists know this process very well, since physics has gone through the steps of proposal, testing, refinement, and, not infrequently, the effective abandonment of theories and models several times in its history. Physics in fact has quite a large collection of superseded theories and models which proved untenable empirically, and they serve as a reminder of how slippery, misleading, and treacherous scientific research can be. So, physicists are unimpressed by such developments, but economists still seem to react very uneasily to this. One of the aims of this book is to argue that they should accept this


process as the natural way in which science evolves, however unpleasant it might be to individuals who possess long-held, but empirically flawed, theoretical convictions.

Another aim of this book is to bring together two separate research traditions, namely the approach to income distribution based on statistical physics and the one based on nonlinear trade-cycle models of macroeconomic dynamics. In both of these approaches to distributive dynamics the core variables are the income distribution components, expressed either as a result of trading, mostly kinetic exchange, or of class dynamics, in fact distributive competition. This undertaking is, nevertheless, made by grounding the whole discussion on the epistemological viewpoints that have been used by physicists for over a century. Hence, Chapter 1 is considered essential reading in this respect, because it presents in general terms the epistemological perspective of the whole book in the context of the research object of both economics and econophysics, as well as some basic notions of economic thought and a few general aspects of the history of both physics and economics.

III

This book is organized in three parts. Part I discusses the basic topics and tools on the subject of income distribution required in most of the remaining parts of the book, namely the methodological basis of econophysics, its similarities to and differences from economics, several distribution functions used to model income data, including the Pareto power law, and the basic inequality measures of the income distribution, such as the Lorenz curve and the Gini coefficient. It also presents the evidence collected by both economists and physicists that brings empirical support to those distribution functions. A review of some topics of Thomas Piketty's (2014) landmark book that this author regards as important to the econophysical approach to income distribution dynamics can also be found in this part.
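To make these two inequality measures concrete: the Lorenz curve plots the cumulative share of total income received by the poorest fraction of the population, and the Gini coefficient equals twice the area between that curve and the perfect-equality diagonal, so it is 0 for perfect equality and approaches 1 for maximal concentration. A minimal sketch of both computations, written for this edition as an illustration and not taken from the book's own material:

```python
def lorenz_curve(incomes):
    """Return (population shares, cumulative income shares), poorest first."""
    xs = sorted(incomes)
    total = float(sum(xs))
    pop, inc, cum = [0.0], [0.0], 0.0   # the curve starts at the origin
    for i, x in enumerate(xs, start=1):
        cum += x
        pop.append(i / len(xs))
        inc.append(cum / total)
    return pop, inc

def gini(incomes):
    """Gini coefficient = 1 - 2 * (area under the Lorenz curve)."""
    pop, inc = lorenz_curve(incomes)
    # Trapezoidal rule: exact for the piecewise-linear empirical Lorenz curve
    area = sum((inc[i] + inc[i - 1]) * (pop[i] - pop[i - 1]) / 2.0
               for i in range(1, len(pop)))
    return 1.0 - 2.0 * area

print(gini([1, 1, 1, 1]))   # perfect equality -> 0.0
print(gini([0, 0, 0, 1]))   # one person holds everything -> 0.75
```

Note that for n earners the discrete Gini of total concentration is (n - 1)/n, which is why the second example gives 0.75 rather than 1.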
Part II discusses income and wealth distributions from the viewpoint of statistical physics. After a brief discussion of uncertainty and risk in physics and economics, Part II reviews a series of models of income distribution characterization and dynamics advanced mostly by physicists. The list of models is modest, but representative of the current econophysics literature on the subject. The exponential income distribution takes a prominent role, as it made it possible to connect the empirical income distributions to the Boltzmann–Gibbs distribution of energy and to present the problem from the viewpoint of money conservation. Other models based on


statistical mechanics are also reviewed, including those that describe trade as elastic or inelastic collisions of scattering particles in a typical physical setting. Most models presented in Part II are essentially kinetic exchange models, which nevertheless offer a variety of interesting insights into the problem, as well as severe modeling limitations; both points are discussed at length. This kind of modeling has also been extended to other aspects of inequality in an economic system, such as an effective econophysical definition of income classes and the inequality of energy consumption. All economic theories, both orthodox and heterodox, are applied to the problem, provided they are able to bring new insights and open new perspectives on the distributive dynamics of economic systems.

Part III deals with income distribution under the realization that economic systems have circular flows. This dynamic viewpoint connects income classes to macroeconomic trade-cycle theories, whose contributions were mainly due to economists. The initial sections present a typical economic viewpoint on issues like uncertainty, confidence, investment, and stability, so that the models discussed afterwards are presented on solid conceptual foundations. The Goodwin (1967) macroeconomic dynamics of growth with cycles is viewed as an important starting model, whose qualities and severe limitations, both theoretical and empirical, are discussed in detail. Then several other models based on, or inspired by, this macrodynamics are presented and discussed. In this final part of the book some tools of dynamical systems theory are briefly used.

IV

The book is structured so as to be read in sequence, since several concepts discussed in earlier chapters are used in later ones. Nevertheless, some chapters are, to a reasonable extent, independent of the others and can be skipped by the hurried reader depending on his/her interests and level of knowledge.
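The Goodwin growth-cycle model mentioned in the description of Part III is formally a Lotka–Volterra predator–prey system for the wage share u and the employment rate v, and its orbits are closed cycles around an equilibrium point. A minimal numerical sketch of the standard formulation, with a linear Phillips curve and parameter values chosen purely for illustration (they are not empirical estimates from the book):

```python
import math

# Illustrative parameters: productivity growth ALPHA, labor-force growth BETA,
# capital-output ratio SIGMA, and a linear Phillips curve: wage growth = -GAMMA + RHO*v.
ALPHA, BETA, SIGMA, GAMMA, RHO = 0.02, 0.01, 3.0, 0.5, 0.6

def derivs(u, v):
    du = u * (RHO * v - GAMMA - ALPHA)           # wage share dynamics
    dv = v * ((1.0 - u) / SIGMA - ALPHA - BETA)  # employment rate dynamics
    return du, dv

def rk4_step(u, v, h):
    """One fourth-order Runge-Kutta step."""
    k1u, k1v = derivs(u, v)
    k2u, k2v = derivs(u + h / 2 * k1u, v + h / 2 * k1v)
    k3u, k3v = derivs(u + h / 2 * k2u, v + h / 2 * k2v)
    k4u, k4v = derivs(u + h * k3u, v + h * k3v)
    return (u + h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

def invariant(u, v):
    """Lotka-Volterra first integral; constant along every Goodwin orbit."""
    a = 1.0 / SIGMA - ALPHA - BETA
    return RHO * v - (GAMMA + ALPHA) * math.log(v) + u / SIGMA - a * math.log(u)

u, v = 0.8, 0.9
h0 = invariant(u, v)
for _ in range(50_000):               # integrate to t = 50, a few full cycles
    u, v = rk4_step(u, v, 0.001)
print(abs(invariant(u, v) - h0))      # tiny drift: the orbit closes on itself
```

The existence of this conserved quantity is precisely what makes the closed cycles fragile: small changes to the equations destroy them, the structural-stability issue taken up in Chapter 6.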
Thus, if the reader’s sole interest is in the models’ technicalities, Chapter 1 can be skipped, going directly to Chapter 2 where the most important income distribution functions are presented and discussed; Chapter 2 offers essential material for those whose interest is focused on income distribution models of statistical econophysics. On the other hand, if the interest lies exclusively in the guiding methodology and concepts on which the approach of the whole book is grounded, Chapter 1 offers a discussion on economics and econophysics that does not require any mathematics. Alternatively, the hurried reader can go straightaway to Chapter 5 if the interest is limited to models based on trade cycles. The results of Chapter 2 are, nonetheless,


essential for the models presented in Chapter 4, and although Chapters 5 and 6 were written as a single logical unit to be read in sequence, they are mostly independent of the previous chapters. Chapter 3, on Thomas Piketty's (2014) work, can be read by and large independently of the rest of the book. The arrow diagram below indicates the logical dependency of the chapters: full lines mean required dependency, whereas dashed lines stand for optional dependency.

[Chapter dependency diagram: numbered chapters connected by arrows; full lines indicate required dependency, dashed lines optional dependency.]

V

Works that discuss subjects closely related to society's political organization and inner workings frequently offer proposals and advice aimed at influencing state policy; indeed, some argue that advancing public policy recommendations is the very purpose of such works. Notwithstanding, the reader will find no such advice or recommendations in this book, let alone proposals about how to "change the world." This is because this book was written with the intention of discussing how things are, rather than how they should, or must, be. Such a stance arises from the realization that the scholars of past generations who dealt with these issues and felt the need to influence the political events of their times lived in social and cultural environments very different from ours and, naturally, were driven by different motivations. But times change, and for the researchers of the early twenty-first century, the time of writing, there are three important points that cannot be dismissed by anyone who analyzes these matters.

First, as will be thoroughly discussed in Chapter 1, science by its very nature is incomplete and limited, so every scientific analysis and conclusion can be revised or even refuted. That may happen even to well-established theories, as their domains of validity are oftentimes established well after their initial formulation and testing. Scientific analyses and conclusions must be subject to constant theoretical criticism and empirical testing, and only time will tell if they survive such close scrutiny and reveal their strengths, weaknesses, domains of validity, and limitations, that is, to


what extent they correspond to the truth and how, when, and if they can be safely applied to the benefit of society. And it is impossible to predict beforehand the amount of time necessary for this process. Thus, awareness of science's own limitations should be reason enough to exercise sober restraint and refrain from providing advice and recommendations that may affect society at large, as such advice could be based on theories and models which may not have been fully tested, or whose conclusions may prove not as far-reaching as initially thought, or even wrong at later times. So, premature proposals and recommendations based on not fully tested theories might lead to bad outcomes, with the potential to provoke loss of public confidence in the scientists who had gone public in favor of what later become discredited theories or, worse, in academic scholars in general. The basic point here is that although science does provide solutions to problems faced by society, solutions which vary from technological ones to the understanding of society's inner workings, finding these solutions and concluding that they are safe to use takes an unpredictable amount of time. Therefore, science cannot be expected to provide ready answers to society's problems, and should not be put in the position of doing so, especially by scientists themselves.

Second, the most basic and essential goal of the scientific enterprise is to understand nature, that is, to reveal its inner workings. However, if the researcher starts the investigation also aiming at influencing policy, these two goals become mixed up. As a result, the introduction of interpretation biases into the scientist's analysis and conclusions becomes almost an inevitability. That happens because there is an all-too-common tendency to interpret evidence as supporting the validity of the views one already holds, of reinforcing earlier convictions.
Therefore, objectivity will end up compromised even when scientists firmly believe they are motivated by a noble cause. This results in the classical situation where the scientist becomes too attached to a certain theory and loses perspective on his/her own biased analyses and conclusions. So, as the classical Greek playwright Euripides (480–406 BC) already knew long ago, "a bad end comes from a bad beginning" (Euripides, 2008, pp. 28–29: Aeolus).

Finally, it is important to recognize that science and politics are two different things and that most scientists lack the proper political training and advice to interfere in state politics. Thus, when scientists try to influence state policy they run the risk of becoming entangled in political feuding, of which they often have little understanding and no control, and may end up being used as political pawns. The chain of events that led to the Manhattan Project for the construction and use of the first nuclear bomb, and its aftereffects, remain in the collective memory of physicists as a reminder of these risks, since some of the physicists who played a key scientific role in that chain of events, but later raised objections


on moral and ethical grounds about the use of such a device and the militaristic path taken by politics, were ostracized and publicly humiliated (Bird and Sherwin, 2006; Monk, 2012). As citizens, scientists are entitled to their own political views, but the condition of being a scientist should not be confused with that of a politician or activist. So, the primary task of the scientist is to reveal the truth about nature. But it is up to society to decide if, how, and when to use this acquired knowledge.

Acknowledgments

The proposal to write a book on income distribution from the viewpoint of econophysics was first raised during a conversation with Simon Capelin, the editor for physical sciences at Cambridge University Press. I am deeply grateful for his enthusiastic response to the early concept of this book and his continuous encouragement, which led me finally to take on the task of developing the initially vague plan into a feasible book project. I am also grateful to the staff at Cambridge University Press, especially Sarah Lambert, for their help with some legal and technical aspects of publishing. I wish to express my gratitude to the following colleagues who read and commented on parts of the original typescript: Everton Murilo Carvalho de Abreu, Jaques Kerstenetzky, Teresinha de Jesus Stuchi, and Antonio Augusto Passos Videira. I have also benefited from discussions on topics of this book with the following colleagues: Ricardo Azevedo Araujo, Henrique Fortuna Cairus, Fabio Neves Peracio de Freitas, Paulo Murilo Castro de Oliveira, and Mauro Patrão. Nevertheless, all mistakes and omissions that may be found in the text are my own. My thanks also go to Kazuyoshi Carvalho Akiba for his help with some plots, and to Thomas Piketty for allowing me to reuse part of his raw data so that I could produce my own graphs with them. Finally, I thank my wife Ana and my son Pedro for their support and enthusiasm for this project, and their understanding of the many hours I dedicated to writing rather than spending time with them.


Part I Basics

1 Economics and Econophysics

What is econophysics? If it is just the use of physical methods to investigate economic problems, in what way is econophysics different, if at all, from orthodox mainstream economics? If it is really different from mainstream neoclassical economic thought, is econophysics just another nonmainstream, or heterodox, approach to economic problems? Can econophysics contribute to the understanding of economic phenomena in a different way than economics itself, no matter whether this understanding comes from the orthodox or heterodox traditions? Answering these questions forms the subject of the present chapter. The next sections will present a discussion of the similarities and differences between economics and econophysics from the methodological and practical viewpoints. For the benefit of readers not entirely familiar with physical terminology, there will first be a very short summary of the origins of modern physics, followed by a similarly short overview of the history of economic thought.

1.1 Physics

Physics as a modern science started with the scientific revolution which occurred during the European cultural movement known as the Renaissance. The works of well-known names of that time, like Nicolaus Copernicus (1473–1543), Galileo Galilei (1564–1642), and Johannes Kepler (1571–1630), were fundamental in establishing physics as we know it today. Their contributions spanned the proposal of heliocentric planetary motion, due to Copernicus; the discovery of the three laws of planetary motion, due to Kepler; and the law of falling bodies, due to Galileo, who also made important contributions to astronomy, like the discovery of sunspots, the four biggest natural satellites of the planet Jupiter, the ring system of the planet Saturn, and the lunar mountain system. Afterwards, the most important advancements in physics are associated with Isaac Newton (1642–1727), whose three laws of motion and law of universal gravitation


established what is now known as Newtonian physics, Newtonian mechanics, or classical mechanics. Thermodynamics, which macroscopically deals with problems related to heat and temperature, and electromagnetism, which studies electric charges, both static and in movement, magnetic phenomena, and the propagation of perturbations in the electric and magnetic fields, the electromagnetic waves, are physical theories whose developments occurred mainly during the eighteenth and nineteenth centuries. The main physicists associated with these theories are Sadi Carnot (1796–1832), James Watt (1736–1819), Michael Faraday (1791–1867), and James Clerk Maxwell (1831–1879). These three major theories, classical mechanics, thermodynamics, and electromagnetism, are collectively known as classical physics.

During the period lasting from approximately 1870 to 1930, physics underwent a scientific revolution. Starting from results already developed by Maxwell himself, Josiah Willard Gibbs (1839–1903) and Ludwig Boltzmann (1844–1906) used statistical reasoning to describe thermodynamics as a consequence of the statistical properties of large ensembles of particles, that is, at a microscopic level, creating new theories known as statistical mechanics and statistical thermodynamics. At the same time, contradictions between classical mechanics and Maxwell's electromagnetic theory led Albert Einstein (1879–1955) to propose in 1905 a solution which became known as the special theory of relativity. The famous E = mc² equation comes from this theory. Concurrently, the inability of Maxwell's theory to describe some aspects of electromagnetic radiation, the blackbody radiation, and the behavior of the fabric of the physical world, the atoms, led Max Planck (1858–1947), Niels Bohr (1885–1962), Erwin Schrödinger (1887–1961), and Einstein himself to establish the basis of a new theory to describe the micro world, quantum mechanics.
The use of statistical reasoning was also applied to quantum mechanics, and the collection of statistical methods to describe physical systems, classical or quantum, is now known as statistical physics. By the end of 1915, Einstein presented a new theory of gravitation, which he had been working on during the previous ten years, that went well beyond Newton's and entirely revolutionized the way physics described the gravitational phenomenon. This theory became known as the general theory of relativity, whose conclusions included the very unexpected result, verified empirically, that light beams are deflected by the mass of celestial bodies. The physical theories which appeared from the late nineteenth to the early twentieth centuries, that is, statistical physics, both relativity theories, and quantum mechanics, are now known as modern physics.

The paragraphs above are just an extremely compressed summary of approximately the last 450 years of the history of physics and cannot do justice to dozens of other physicists whose names were omitted, but who made very important


contributions to physics. Several of them were in fact honored by having their names become physical units used in our everyday life, as is the case for André-Marie Ampère (1775–1836), Anders Celsius (1701–1744), Heinrich Rudolf Hertz (1857–1894), James Prescott Joule (1818–1889), William Thomson, Lord Kelvin (1824–1907), Georg Simon Ohm (1789–1854), Blaise Pascal (1623–1662), and Alessandro Volta (1745–1827), to name just a few. Others, like Louis de Broglie (1892–1987) and Paul Dirac (1902–1984), made groundbreaking theoretical contributions to quantum theory, and both were recipients of the Nobel Prize in Physics. Nevertheless, our purpose here is not to present a brief history of physics, but to establish the terminology of physical theories for nonphysicists, show the time frame in which they were advanced, associate these theories with a handful of names more or less well known to nonphysicists, and stress that although physics went through a scientific revolution about 120 years ago, the earlier classical theories were not abandoned. There was no methodological rupture in physics or any kind of dismissal of previous results, simply because classical physics agrees with empirical results which are strongly anchored in well-tested experiments. In fact, several technologies used in our everyday life, from electric power to airplanes, are based on the results of classical physics. What modern physics did was establish the limitations of classical physics and work out, both theoretically and experimentally, new theories to describe physical phenomena beyond the scope of classical physics, such as the physical laws of the micro world, the behavior of bodies at speeds approaching the speed of light, and the dynamic connection between space and time.
There were naturally new concepts which contradicted classical physics, but physicists worked out the domains of validity of classical and modern physics, including the ranges where those concepts are or are not applicable, and nowadays physics works with both sets of theories in an integrated way. It must be noted, however, that during these last 450 years several theories and models were effectively abandoned, because either their predictions were not validated by experiments or new concepts which described these empirical results rendered those theories obsolete. This is, for instance, the case of the caloric theory of heat transfer, replaced by the mechanical theory of heat, and of the geocentric Ptolemaic epicycles, replaced by Copernicus’ heliocentric system of planetary orbits, to name just two of several superseded physical theories. At this point it is important to mention the influence of the classical Greek philosophers on the development of physics. Several of them discussed topics which are now considered within the scope of physics, like Democritus (460–370 BC), who advanced the ancient theory of atoms, and, especially, Aristotle (384–322 BC),


who discussed the movement of bodies. For the purposes of this discussion one should present one result of Aristotelian physics as examined by Galileo in his famous book titled Dialogues Concerning Two New Sciences (Galilei, 1638). Aristotle claimed that a heavier body would fall faster than a lighter one under the influence of gravity. Galileo refuted this assertion simply because he performed the experiment and found that two bodies of the same shape, but different weights, fall with the same acceleration and reach the ground at the same time if released from the same height, a result contrary to Aristotle’s conclusion. In other words, in this instance Aristotle’s reasoning was deductive and logical, but empirically false. Galileo even questioned whether Aristotle had ever performed the experiment himself (Galilei, 1638, pp. 62–64). Here we arrive at the methodological essence of the Galilean approach to physical phenomena. No matter how logical and convincing a line of reasoning may be, it can only be considered scientific if it is subjected to empirical validation, either experimentally or observationally. Until that happens, it is just a conjecture waiting to be tested, proved, or disproved. This concept is at the heart of the scientific method inaugurated by Galileo, implying that science is an activity whose theories must be constantly checked against observation and/or experimentation and modified accordingly. After Galileo, the term Aristotelian physics became synonymous with pseudoscience, that is, presupposed statements assumed to be true but never subjected to empirical testing and validation.
Hence, metaphysics, another word originating in Aristotle’s works, is a type of inquiry of nonempirical character, i.e., statements assumed to be valid which lead logically to other statements also assumed to be valid, but never subjected to empirical testing, and whose final conclusions cannot be considered as having any relationship whatsoever with the real world; that can only happen if they are empirically tested. Actually, to label Aristotelian physics as pseudoscience is historically unfair to Aristotle, since this focuses only on what was changed in our physical view of the world by the Renaissance’s scientific revolution rather than on what was not changed by this same revolution. Several physical concepts advanced by Aristotle remained after Galileo and became essential building blocks of very important modern concepts in physics. This is the case, for instance, of motion as a process, from potential to actual, from which the modern concepts of potential energy (Aristotle, 2012b, pp. 652–655) and dynamics originated. What did not remain were his models, like the geocentric view of the world and the concept of natural place, toward which objects would tend when moving (Aristotle, 2012a, pp. 913, 1074). Nevertheless, one can even find similarities between definitions and results arising in modern physical theories and the Aristotelian concept of natural place (e.g., Neves, 2018), showing that some fragments of Aristotle’s physical models are still with us.


In addition, in Aristotle’s time the possibilities of doing experimental physics by means of arithmetic operations with physical measurements were rather limited. One of the main obstacles was the very cumbersome and arithmetically impractical numeral system adopted by the ancient Greeks, in which arithmetic operations and the representation of very large numbers ranged from hard to nearly impossible, especially because their numerals did not include the number zero (Ifrah, 2000, ch. 16, pp. 327, 333). Galileo, on the other hand, was already in possession of the arithmetically very practical modern Indo-Arabic numerals, in which the essential concept of the number zero was already well established, allowing unlimited representation of very large numbers and making arithmetic operations practical. Nevertheless, although historically unfair, the use of the term ‘Aristotelian physics’ to mean pseudoscience has remained to this day within the physics tradition.

1.2 Economics

1.2.1 Antiquity and the Middle Ages

Aspects of what constitutes economics as we know it today can be traced back to texts from antiquity dealing with justice in the exchange of goods and with the acquisition of wealth by means of unfair gain in commerce. During antiquity and the Middle Ages the economic discussion was dominated by Aristotle’s ideas about the moral limits of commercial activity, since commerce was considered an unnatural way of acquiring wealth. In addition, Aristotle discussed the role of money as a means of exchange, a measure of value, and a store of value for future transactions (Backhouse, 2002, chs. 1–2). Thirteenth-century Scholastic philosophy derived from Aristotle the theory of the “just wage,” defined as the wage that would give the worker a standard of living adequate to his social condition. Similarly, there was a just price theory connected to the cost of production through the exchange of equivalents. Included in the cost of production was a fair and moderate profit, enough for the merchant’s family and for charity (Screpanti and Zamagni, 1993, section 1.1.1). The sixteenth century saw mercantilism dominate economic thought. Its main concern was no longer the administration of the household, but of the state; no longer the enrichment of individuals, but of the nation and the merchant class. The goal was the use of state power to build industries and increase the trade surplus by means of exports, leading to the accumulation of money. The interests of the merchant class were identified with the interests of the collectivity, which meant that economics was no longer domestic, but political (Screpanti and Zamagni, 1993; Backhouse, 2002).


It must be mentioned that many mercantilists were in fact more interested in promoting higher-productivity economic activities through policy interventions; that is, they were focused on solving real-world problems, especially policy proposals on how economically backward countries should develop their economies in order to catch up with the more advanced ones. This focus started a viewpoint in economics that is today known as the developmentalist tradition, or development economics. Developmentalist theories are still being advanced and refined today, although policy practices under this tradition can be traced as far back as the fifteenth century (Chang, 2014, pp. 96–99).

1.2.2 Political Economy

The seventeenth century witnessed the birth of political economy, the name by which the study of economic matters was known until the end of the nineteenth century (see Section 1.2.4) and which shows the close connection between economics, politics, social sciences, and philosophy. William Petty (1623–1687) produced the first texts generally accepted as belonging to political economy, and his book Political Arithmetick, written between 1671 and 1676 but published after his death (Petty, 1690), reflected his aspiration of providing an empirical basis for economics, in which purely speculative reasoning must be avoided and qualitative arguments ought to be replaced by rigorous ones relying on number, weight, and measure (Screpanti and Zamagni, 1993, p. 36). This was a very good start: seen from the perspective of the early twenty-first century, the time of writing, what Petty had in mind was really a kind of Newtonian physics of society (Ball, 2004, pp. 3–4). The next important contribution to the development of political economy came from the French school of thought known as the physiocrats.
Prominent among the members of the physiocratic school was François Quesnay (1694–1774), whose main contribution was the Tableau Économique (Economic Table), published in 1758. The Tableau is basically a model that sees the economic system as a cycle of deep interdependence and interrelationship among the various productive processes, that is, all parts of the system function according to a certain natural law. Economic exchange is then represented as a circular flow of goods and money among all economic sectors. Related to this interdependence is the idea of equilibrium, which we would today call macroeconomic equilibrium (Screpanti and Zamagni, 1993, section 2.1.2). According to the Tableau, the system is moved by the surplus produced by farmers, considered the productive class. The landlords formed the distributive class, consuming the surplus created by the productive class and starting the circulation of money and goods among the economic sectors of the nonproductive


class (manufacturing industry). The circulation is closed by returning part of the surplus to the productive class. Hence, one can find in the Tableau three important economic concepts: production, distribution, and accumulation. Accumulation occurs when production increases through higher quantities of surplus, which are then returned (invested) into production (Delfaud, 1986). Classical political economy, or the classical school of economics, is a term associated with a group of five very influential economists of the eighteenth and nineteenth centuries: Adam Smith (1723–1790), Jean-Baptiste Say (1767–1832), Thomas Malthus (1766–1834), David Ricardo (1772–1823), and John Stuart Mill (1806–1873). Their studies inherited the physiocratic method of viewing the economic system as circulatory in nature, treating the system as a whole and seeing it as dynamically characterized by three main phases: production, distribution, and accumulation (Delfaud, 1986). They nevertheless studied these three phases of the cycle in much more detail. Smith saw the system as a cumulative mechanism operating in a sequence leading to a virtuous circle of growth: division of labor, enlargement of the markets, and increase in labor productivity. The division of labor triggers the growth process and accumulation drives it. He discussed a theory of income distribution among the three basic social classes, capitalists, workers, and landlords, differentiated by the productive resources they hold, respectively capital, labor, and land, and by the form in which they receive their income, respectively profits, wages, and rents. Smith also provided an explanation for the values of goods, which are measured by the quantity of human labor they are able to command, that is, the wage equivalent or the labor that can be bought with it.
For Smith, a positive growth rate occurs when the labor commanded is higher than the amount of labor used in production, leading to a surplus required to sustain capital accumulation. He also distinguished the market price, the real price of a good at a certain moment, from the natural price, the normal rates of remuneration of capitalists, workers, and landowners. Smith thought of this system as stable, unique, and in equilibrium, since there would be an invisible hand by which individuals would serve the collective interest precisely because they are guided by self-interest. However, these three properties of the economic system, stability, uniqueness, and equilibrium, which would justify Smith’s conjecture of an invisible hand, remained unproven and are a source of much debate to this day (Screpanti and Zamagni, 1993, section 2.2). The other classical political economists also discussed the economic system using its three phases, production, distribution, and accumulation, as the basis of their analysis. Regarding production, Ricardo discussed Smith’s theory of value, arguing that the exchange value must incorporate not only labor but also the tools used in production, whereas Say argued that the use value implies a certain utility that satisfies needs and wants. Mill viewed labor as determining the supply while


utility governs the demand. On distribution, Malthus and Ricardo talked about a ‘natural salary’ due to the costs of production of labor, since labor is seen as a commodity like any other. But Malthus stated that if the birth rate increases, the natural salary is reduced to the bare minimum subsistence level. For all classical political economists, profit is at the center of capitalist dynamics, as it provides for accumulation. Therefore, from Smith to Mill the engine of the economic system is the cycle accumulation–profit–accumulation. Say tried to show that a general excess supply is impossible, thus arriving at the famous Say’s law, according to which supply always creates its own demand (Delfaud, 1986; Screpanti and Zamagni, 1993).

1.2.3 Marxian Economics

Karl Marx’s (1818–1883) contributions are well known to be far-reaching, but here we are not interested in the social and political doctrines associated with his name, Marxism, but only in his contributions to political economy, that is, Marxian economics, which in fact has a close relationship to classical political economy, particularly Ricardo’s. His conclusions were based on the labor theory of value, the theory of surplus, and an analysis of the behavior and relationship of social classes, issues already discussed by Smith. On (i) production, Marx viewed capitalism as a dynamic system where money (M) and commodities (C) are exchanged in the C–M–C cycle, which characterizes simple commodity production, that is, where commodities produce money which then produces commodities. The M–C–M′ cycle is, on the other hand, the dominant form of circulation in capitalism, the part of the system’s dynamics that renders the creation of value as long as M′ > M. Thus M′ is the final capital whereas M is the invested capital. The difference between M′ and M is the surplus value, or unpaid labor (Delfaud, 1986).
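The arithmetic of the M–C–M′ cycle can be sketched in a few lines of code; the numbers below are hypothetical illustrations, not taken from the book:

```python
# Hypothetical numbers illustrating the M-C-M' cycle: capital M is advanced,
# commodities C are produced and sold for M', and the surplus value is the
# difference M' - M.
invested_capital = 100.0   # M: money advanced for labor power and inputs
final_capital = 125.0      # M': money received from selling the commodities

surplus_value = final_capital - invested_capital
print(f"surplus value M' - M = {surplus_value}")  # 25.0

# Value is created only when M' > M.
assert final_capital > invested_capital
```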
Note that Say’s law states that a sale is always followed by a purchase of equal amount, that is, everything that is produced is consumed. This means no interruption in the C–M–C cycle and, therefore, no overproduction, a point on which Marx strongly criticized Ricardo (Sweezy, 1942, pp. 136–138). On (ii) distribution, Marx basically reduced the partition of the produced value to the wage share and profits, with the latter further divided into interest and rent. Finally, on (iii) accumulation, he noted that capitalist production grows in cycles of booms and busts. In a boom, profits increase and unemployment decreases, as workers are able to obtain better jobs and higher wages due to the manpower shortage created by the growing production. This boom is nevertheless followed by a bust, inasmuch as lower unemployment reduces the profit margin, whose recovery is achieved through higher unemployment and a reduction of workers’ bargaining power. Smaller salaries lead to an increase in the profit margin, which leads to new


investment, and then a new boom starts, followed by another bust, and so on (Marx, 1867, ch. 25, section 1). From this reasoning follows the concept of the labor reserve army, a large group of unemployed workers willing to accept lower wages in exchange for a job, whose existence is essential to prevent wages from rising too high and profits from falling too low.

1.2.4 Neoclassical Economics

The above paragraphs show that the analytical approach started by the physiocrats was continuously developed until Ricardo and Marx. However, by the end of the nineteenth century economic thought suffered a methodological rupture in which distinct social classes such as workers, capitalists, and landowners, treated in a circular flow according to a process of production–distribution–accumulation, were no longer considered fundamental concepts. A new form of economic analysis stopped using collective, or aggregate, social classes as its main economic agents and replaced them with individual economic agents such as consumers and producers. The classical approach of treating the economic system as a whole was also replaced by a reasoning that emphasized the supposed equilibrium of the system from a theory of value expressed in terms of utility and scarcity (Delfaud, 1986), where classical physics concepts such as energy conservation and equilibrium thermodynamics were used as essential metaphors for the establishment of this new approach to economics (Mirowski, 1989, p. 222; Drakopoulos and Katselidis, 2015). Utility is defined as the ability to increase pleasure and decrease suffering, measured indirectly by means of market behavior. This also defines the indifference curves, representing the same level of utility (satisfaction) between different bundles of goods among which a consumer has no preference, or is indifferent. Thus, economics becomes domestic in the sense that it deals with the maximization of the household’s welfare or the firm’s profits, and the focus is on the allocation of given resources. This new emphasis on the micro level originated the term microeconomics. And the hypothesis of decreasing marginal utility, where the reasoning focuses on the last available element of a certain good, the margin, gave rise to the term marginalism for this approach.
The idea is that as a person acquires more of a certain good, the marginal utility of each additional unit decreases. The rupture was so strong that even the name of the discipline was changed, from political economy to economics. Although there were predecessors, the main names associated with this marginalist revolution are Léon Walras (1834–1910), William Stanley Jevons (1835–1882), and Carl Menger (1840–1921). The theoretical system they created became known as neoclassical economics (Screpanti and Zamagni, 1993, section 5.1).
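The hypothesis of decreasing marginal utility can be illustrated numerically. The sketch below uses a logarithmic utility function, a standard textbook choice rather than anything prescribed in this chapter:

```python
# Decreasing marginal utility illustrated with a concave utility function.
# The logarithmic form u(q) = ln(1 + q) is an illustrative assumption.
import math

def utility(quantity: float) -> float:
    """Total utility of holding `quantity` units of a good: u(q) = ln(1 + q)."""
    return math.log(1 + quantity)

# Marginal utility of the q-th unit: u(q) - u(q - 1).
marginals = [utility(q) - utility(q - 1) for q in range(1, 6)]
print([round(m, 3) for m in marginals])
# [0.693, 0.405, 0.288, 0.223, 0.182]

# Each additional unit adds less utility than the one before.
assert all(a > b for a, b in zip(marginals, marginals[1:]))
```

Any strictly concave utility function yields the same qualitative result; the logarithm merely makes the diminishing increments easy to see.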


The works of Walras, Jevons, and Menger were published in the early 1870s, and from this time onward neoclassical economics became hegemonic, in effect pushing the classical political economy way of thinking into the background, at least in the Western world. The reasons for this are various, but the inability of political economy to solve several theoretical problems is certainly among them. In this respect, one can cite that the labor theory of value, which states that value is created by labor, did not withstand criticism because the development of Western European industrial economies did not lead to labor-intensive industries being more profitable than capital-intensive ones, as predicted. In addition, the classical theory of income distribution was viewed as inadequate because it operated under the supposition that wages are forced down to the subsistence level by means of Malthus’ population mechanism, something not observed due to the real increase of wages, although Marx had not adopted this approach in his theories. Nevertheless, many other results of classical political economy could not be dismissed so easily, like the economic role played by social classes, the concepts of circulation, surplus, and the labor reserve army, and the analysis of the economic system as a whole, to name just a few. In addition, as we shall see below, the fact that neoclassical theory also has several cracks and critical flaws meant that this rupture effectively led to long-lasting divisions and infighting among several economic schools of thought. Despite all this, neoclassical economics imposed itself and became the new orthodoxy by the turn of the nineteenth to the twentieth century.
This coincided with the professionalization of economics, as from that time on economists became full-time university professors, whereas previously they had been a mixture of entrepreneurs, administrators, businessmen, public servants, politicians, and independent scholars. In particular, Walras’ general equilibrium theory became an essential pillar of neoclassical utilitarianism. In this theory, individuals are well informed, aware of their own choices, and self-interested, each rationally trying to maximize his or her goals. This combination would systematize and organize production and the distribution of income in a supposedly efficient and mutually beneficial way. There are several names associated with this period, which lasted until approximately the 1930s, but for the purposes of this very limited survey two names suffice: Alfred Marshall (1842–1924) and Vilfredo Pareto (1848–1923). Marshall’s ideas focused on the concepts of the industry, a group of firms producing the same good, and the representative firm, an average firm possessing the essential features of the industry. He concentrated on the equilibrium conditions of a single productive sector. He proposed mathematical methods to solve this problem, later known as partial equilibrium analysis, in which a part of the economy is studied in


isolation. He studied mathematics and physics, being a student of Maxwell, before becoming an economist, and his ideas were influenced by biology and Charles Darwin’s (1809–1882) theory of evolution (Screpanti and Zamagni, 1993; Backhouse, 2002). This is particularly clear in his theory of the firm, viewed as progressing through a life cycle similar to an individual’s: firms start young and vigorous but, after reaching maturity, become old and are eventually replaced by new and more efficient ones. Marshall’s economic theory was based on the theory of supply and demand. In the very short run, when supply is fixed, price is entirely determined by demand. There would be a demand price pd, the maximum price at which demand reaches a certain level, and a supply price ps, the minimum price that leads the sellers to offer a quantity equal to that same level. Disequilibrium occurs when either pd > ps or pd < ps. In the first case the seller would increase supply by increasing production or decreasing stocks, whereas the second case works the other way round (Screpanti and Zamagni, 1993). In both cases the system would, after a period of transition, reach equilibrium. Thus, his analysis was one of comparative statics, where two states of equilibrium are compared after the adjustment. Although this method used differential calculus, neoclassical economics lost the classical interest in long-range dynamics. One must also note that comparative statics does not offer a method of studying motion, or the dynamics toward equilibrium, nor the process of change itself. In addition, if the economic system is not in equilibrium, the conclusions reached by this method would be in doubt. Pareto is known for the income distribution law he found empirically. According to it, income among the richest individuals is distributed as a decreasing power law, a result which is approximately the same for many countries and possibly for all times.
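Pareto’s law states that the fraction of individuals with income above x falls off as a power law, P(X > x) = (x/x₀)^(−α), where x₀ is the minimum income of the tail and α is the Pareto index. A minimal numerical sketch (the synthetic data and parameter values are illustrative assumptions, not results from the book) draws power-law incomes and recovers α with the Hill maximum-likelihood estimator:

```python
# Draw synthetic Pareto-distributed "incomes" and estimate the Pareto index.
# alpha_true and x0 are illustrative choices, not empirical values.
import math
import random

random.seed(42)
alpha_true = 1.5     # hypothetical Pareto index of the income tail
x0 = 1.0             # minimum income of the power-law tail (normalized)
incomes = [x0 * random.paretovariate(alpha_true) for _ in range(100_000)]

# Hill estimator: alpha_hat = n / sum(ln(x_i / x0)), the maximum-likelihood
# estimate of the exponent in P(X > x) = (x / x0)**(-alpha).
alpha_hat = len(incomes) / sum(math.log(x / x0) for x in incomes)
print(f"estimated Pareto index: {alpha_hat:.2f}")  # close to 1.5
```

The straight-line appearance of log P(X > x) against log x in the upper tail is the empirical signature Pareto observed; the estimation of power-law tails from income data is taken up in Chapter 2.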
He also made contributions to the theory of the rational consumer, redefining the utility of a good from its ability to satisfy needs to an expression of preferences and, hence, individual choices. He also arrived at a concept known as Pareto efficiency, a state of resource allocation in which it is impossible to make any individual better off without making at least one other individual worse off.

1.2.5 Keynesian Economics

The 1920s was a period in which several capitalist economies experienced a great boom, followed by a bust that started with the Great Crash of 1929 and lasted until World War II. These events inevitably caught the attention of economists, as it became clear that the then dominant theories were inadequate to explain the level of economic instability and the depth of the economic bust. Somehow, the changes in


the level of economic activity were related to money and finance. In addition, it was easy to note a connection between the financial activities that led to the boom and subsequent collapse of the American stock market and the unprecedented depth of the economic depression that followed. Some economists turned back to the classical theories of political economy, especially the Marxian one, to try to understand the events. Others went another way and revisited the neoclassical theories in a critical light in order to find answers capable of explaining the real world. Among the latter we find the most influential economist of that period, and arguably of the twentieth century: John Maynard Keynes (1883–1946). Keynes was among a group of economists who returned to the problem that gave rise to classical political economy: macroeconomic dynamics. The term macroeconomics seems to suggest a split between the two approaches to economics: the global, or macro, and the partial, or elementary, or micro. However, Keynes followed an intermediate line between these two approaches to economic problems. From the global approach of production–distribution–accumulation, Keynes kept the cycle viewpoint, now given by production–income–expense. Nevertheless, this circularity does not occur among social classes, but by means of macroeconomic functions such as consumption, investment, and employment. From this viewpoint Keynes concluded that there is more disequilibrium than equilibrium in an economy. From the neoclassical approach, Keynes kept elementary behaviors like the decision to produce, consume, save, or invest. These, however, are no longer individual behaviors, but aggregates which articulate themselves in specific ways. The analysis is dynamic since it tries to explain not the equilibrium state of production and employment, but its process of variation.
The Keynesian dynamic process is triggered by the producers’ decision to undertake a certain volume of production, which requires a certain level of employment, according to expected sales, since from the producer’s viewpoint it is not at all certain that there will be demand for the produced quantity. Therefore, businesses use their experience to predict their sales and profits, supported not by a potential demand based on globally distributed income, but by the effective demand that comes from the real-world expenses of the economic agents. From this point, Keynes determined the components of global demand, consumption and investment, and then the consequences for variations in income, employment, and prices (Delfaud, 1986).

1.2.6 Contemporary Economics

By Keynes’ time and afterward, economic thought had been dividing itself even further into several schools of thought. Some of them are known by the names of their


respective predecessors or founders, whereas the very names of others define their approach to economics: post-Keynesian, neo-Ricardian, neo-Marxian, institutional, ecological, behavioral economics, etc. One can basically consider the neoclassical orthodoxy as the present mainstream economic theory, and this includes the interpretation of Keynes’ theories within the neoclassical equilibrium context, the IS–LM model, as initially proposed by John Richard Hicks (1904–1989), although it has been argued that this model leaves out the most important dynamic aspects of Keynes’ theories (Backhouse, 2002). The other schools are called heterodox, including the post-Keynesian school, formed by those who view Keynesian theories as basically incompatible with neoclassical theory, since the latter essentially disregards economic dynamics, a viewpoint shared by the neo-Ricardians and neo-Marxians. The neo-Ricardian school has its source in the work of Piero Sraffa (1898–1983), who sought to perfect the classical theory of value, as originally developed by David Ricardo and others, whereas the neo-Marxians consider Michał Kalecki (1899–1970) one of their prominent representatives, as he based his theories on the classical class analysis and the physiocratic circular flow of production and income. Institutional economics considers sociopolitical factors and economic history as being at the core of the evolution of economic practices. That is, it argues that one needs to study the social rules, or institutions, that affect, and even shape, individuals. Institutional economics has Thorstein Veblen (1857–1929) as one of its key founders. Development economics focuses on helping economically late starters to catch up with more advanced economies (see Section 1.2.1 above). The Schumpeterian school is based on Joseph Schumpeter’s (1883–1950) original thoughts on the role of innovation by entrepreneurs as the driving force of capitalism.
Expanding upon Marx’s emphasis on technological development, he argued that capitalism develops by the creation of new products and new markets, such that successful entrepreneurs acquire temporary monopolies through innovation. Thus, no firm, however entrenched it may appear, is safe from the process of “creative destruction” brought about by new technologies. The Austrian school was started by Carl Menger, Ludwig von Mises (1881–1973), and Friedrich von Hayek (1899–1992), who argued that government intervention in the economy leads to the loss of fundamental individual liberty. They hold that the free market is the best economic system because so much in the world is unknowable that it is best to leave everyone alone. Ecological economics places sustainability at the center of its approach to economic thought, whereas behavioral economics, originating especially in the works of Herbert A. Simon (1916–2001), discusses the psychological aspects of the
economic decisions of institutions and individuals: how people are not always rational and self-interested, how they misinterpret information, how they miscalculate probabilities, and how their emotions distort their decisions. So, the main constraint on our decision-making is our limited ability to process the information we have, rather than a lack of information. This classification of the contemporary schools of thought in economics is by no means comprehensive or unanimous. Different authors provide different views on this matter and advance different classifications, foundational concepts, and authors, as well as interplays among the different theories. It is not the aim of this work to delve into such matters, but the interested reader can find a recent discussion of the different approaches to economics in Chang (2014, ch. 4 and references therein). The above sections provide only a very brief overview of a few general aspects of economic thought, leaving out many figures who played important roles in its development. Some of them will be discussed in the next chapters. Nevertheless, it is clear that economic thought cannot today be viewed as integrated. There are basically three levels of analysis: (1) socioeconomics, whose main concern is the process of economic distribution among social classes; (2) microeconomics, whose analysis is based on the behavior of economic agents according to some proclaimed “fundamental laws” about the allocation of resources in a universe of scarcity, using a deductive and abstract logic that takes precedence over empirical validation; (3) macroeconomics, which uses observable and measurable macro quantities, called economic aggregates, to try to determine global economic activity and its tensions, such as unemployment, inflation, price indices, savings, international trade, and finance (Delfaud, 1986).
From a physicist’s viewpoint, this lack of integration due to differing interpretations of what constitutes the core of the theoretical approach to economics somehow resembles the situation in which physics found itself before Newton, a time of competing interpretations of Aristotle’s teachings, with Galileo as the most crucial critic of Aristotle’s physics. It is completely unlike the situation between classical and modern physics. The latter does not at all consider classical physics outdated, nor does it reject its concepts. On the contrary, classical and modern physics complement each other, as their respective domains of validity are well determined. In addition, as we shall see next, classical physics has not stopped developing after the appearance of modern physics. So, such a division between orthodox and heterodox theories simply does not exist in physics. Even theories which seem incompatible with one another, like modern field theory and general relativity, are unashamedly used when necessary, not uncommonly by the same physicist, and all physical theories, classical and modern, are generally taught in undergraduate
and graduate university courses. This is why the division between orthodox and heterodox theories, commonly accepted in economics, is inapplicable in econophysics, as we shall discuss in detail in the next section. As a final comment, one needs to be fair here and acknowledge that one can find economists who do the same; that is, who discuss economic problems using different, sometimes incompatible, theories and compare the different answers provided by them. For instance, Marglin (1984) used the neoclassical, neo-Marxian, and neo-Keynesian approaches to discuss growth and distribution. Another example is Chang (2014), who also often uses several different theoretical approaches to discuss economic problems. Nevertheless, such a pluralistic approach to economic problems seems to be the exception rather than the rule among contemporary academic economists.

1.3 Econophysics

As seen in the previous section, economics as it stands today does not seem to possess an integrated set of concepts from which economic systems can be studied, let alone more or less well-defined domains of validity for its various approaches to economic phenomena. Even the definition of what constitutes an economic science varies according to the economic school of thought one chooses as reference. But, as econophysics is a new area, it will sooner or later provide its own definition of what constitutes economics, economies, and economic systems. However, it is usually better to avoid definitions based on strict logical sentences when discussing specific research areas, since statements of this sort are often either too restrictive, leaving out important issues which should somehow be included in the definition but are not, or so wide that everything can be included, and so they end up defining nothing.
Thus, the best initial approach is to start by determining the set of problems actually discussed in a certain area and then later seek a definition based on the domain, or subject matter, defined by these problems. So, if we follow this practical way of establishing a certain scientific domain, that is, first by means of a list of problems associated with a certain collection of phenomena, followed by the methods used in their study, economics can be seen as well defined, since it does have its own circumscribed phenomenological domain of study and collection of methods to analyze the problems within this domain, no matter if those methods come from different schools of thought and are not integrated. A list of economic problems includes the following: dynamics of markets, whether self-regulating, in equilibrium, or anarchic; static and dynamic determination of prices, wages, rents, interest, profits, capital movement, and production; dynamics of economic growth and economic cycles; evolution of industries and firms in terms of technology and revenue; financial movements; stock and labor
markets; dynamics of economic agents defined as classes or as consumers and producers; money dynamics; institutional economic agents like the State; environmental influence on production; distribution and accumulation of income and wealth; value in use and exchange; international commodities markets and trade. This is clearly an incomplete list which will certainly change in time, as it has already changed since Aristotle’s first thoughts on this matter. Thus, when physicists began studying economics, they started with the set of problems already identified by economists. However, the method is not the same, since they used the methodology of physics. So econophysics cannot be similar to economics: although the object of study is the same, the methodology is not (see below). Although there are methodological influences, if econophysics uses the methods of economics, or of its mathematically idealized branch of econometrics, it is no longer econophysics, but simply economics. This answers one of the questions posed at the beginning of this chapter. Interfaces between physics and other disciplines are not new. The nineteenth century witnessed the appearance of various interdisciplinary applications of physics which are still with us today, like astrophysics, biophysics, and geophysics. In those fields, physical concepts and methods were so successfully applied to problems of astronomy, biology, and geology that in various situations there is no longer a clear distinction between the original discipline and its physical counterpart, inasmuch as those successful applications either deeply transformed the original discipline or created entirely new research subfields. Trying today to make a distinction between the two faces of these interfaces is in some situations almost a bureaucratic task, often accomplished by simply labeling a certain set of problems as belonging to one or the other as a result of simple historical nomenclature inertia.
In other situations, such a distinction became almost unnecessary, this being the case, for instance, of astronomy and astrophysics, often referred to by the two names used together or, when isolated, one implying the other, especially after the introduction of science-oriented artificial satellites and interplanetary probes. Sometimes the distinction comes only from the specific instrument used to investigate the problem, this being the case of astronomy and space science, which use, respectively, ground-based telescopes and artificial satellites. Even so, such a distinction becomes entirely blurred when one deals with astronomical objects beyond the solar system. So, in view of these successful experiences of physics interfacing with other disciplines, it should come as no surprise that physicists moved into the social sciences. To answer another question posed at the beginning of this chapter, in historical terms econophysics emerged in the mid-1990s, when physicists started to systematically use concepts, methods, and analytical tools typically applied in the analysis of physical systems to study economic problems. Note again that, although the
problems come from economics, the method of analysis comes from physics. Therefore, it is not surprising that the name is a natural derivation from the other interdisciplinary applications of physics mentioned above. Econophysics is somewhat closer to its “sister” discipline of sociophysics, which appeared a bit earlier (Galam, 2004, 2012), than to its older “brothers” born in the nineteenth century. Sociophysics focuses on the use of physical methods, particularly those of statistical physics, to study social problems. As an example, criminal activity can be modeled statistically by seeing it as a feature emerging from collective social behavior, as can punishment, whose effectiveness is statistically modeled in order to provide measurements that allow one to keep the crime rate within acceptable limits (see, e.g., Iglesias et al., 2012, and references therein). One methodological aspect, however, unites all physical disciplines: the fact that they are empirically based sciences whose foundations are strongly anchored in measurable quantities, experimental or observational. As we shall see below, that does not mean a smaller role for theoretical studies; on the contrary, in the end even theoretical concepts require some metric, that is, they have to translate themselves into measurable quantities. If a theory cannot translate itself into tools and results that can be objectively measured in order to provide evidence for its conclusions, even if this is a future endeavor to be accomplished by a yet unknown technology, the theory is said to be not even wrong, in the sense that it has not even reached the stage where it could be disproved.
This assessment of a theory or model is attributed to the theoretical physicist Wolfgang Pauli (1900–1958), one of the pioneers of quantum physics and the 1945 Nobel Physics laureate, who used to qualify, in decreasing order of importance, the work of his colleagues in less than polite terms such as “wrong,” “completely wrong,” or “not even wrong.” The last one meant that the theory was so ill defined and incomplete that it could not be used to make a firm prediction whose failure would show it to be wrong (cited in Woit, 2006, preface). Although purportedly scientific, a not-even-wrong theory is one that fails at some fundamental level and is due to that considered bad science, or not science at all. In summary, if one seeks a distinction between present-day economics and econophysics, one should look at the viewpoints taken by these two areas to approach, study, and solve economic problems. Some recent analyses of the themes studied by econophysicists have already clearly shown such differences, in the sense that econophysics approaches economic problems from a very different set of theoretical viewpoints and assumptions than those taken by economists. The list of these themes includes topics such as statistical econophysics and the kinetic theory of gases when applied to economic agents, or the principles of complex systems dynamics when we start modeling economies as complex systems (see Schinckus, 2010, 2013; Jovanovic and Schinckus, 2013; Drakopoulos and Katselidis, 2015;
and references therein). Such a distinction goes beyond the description of economic issues by means of specific themes and physical theories, reaching the very heart of the epistemology of physics. However, at this point we are directed to another question: what exactly is this physical viewpoint, this physical approach to problems? This has already been partially answered in Section 1.1, when we discussed the scientific method inaugurated by Galileo. But the epistemological discussions in physics did not stop with Galileo. Particularly rich are the debates and reflections on the philosophy of physics that occurred at the end of the nineteenth century, by the time of the marginalist revolution in economics, and continued throughout the early twentieth century, when the modern physics revolution took place. Several well-known eminent physicists of the past participated in this debate, among others Einstein, Planck, and Werner Heisenberg (1901–1976), another key pioneer of quantum theory, who advanced the uncertainty principle, a fundamental concept needed to understand quantum systems. But here we shall focus on the ideas of the physicist who, in this author’s view, superbly synthesized this epistemology: Ludwig Boltzmann.

1.4 Physics, Reality, and the Natural World

Any scientifically minded person living in the early twenty-first century may find it hard to believe that there was a time not too far back when several physicists did not accept the concept of the atom. This was, however, the situation in physics at the end of the nineteenth century, when the atom concept was facing a growing number of opponents, like Wilhelm Ostwald (1853–1932) and Georg Ferdinand Helm (1851–1923), who considered the atomic picture of the world outdated (Cercignani, 1998) and proposed its replacement by the concept of energy conservation and its derivatives. They believed that this energetic viewpoint was the only way of correctly describing the physical world.
Boltzmann feared that such a purely energetic representation of the physical world would lead physics to become dogmatic (Videira, 1995) and thus passionately engaged in intense debates with many other eminent scientists of his time, like Hermann von Helmholtz (1821–1894), Heinrich Hertz, Ernst Mach (1838–1916), Pierre Duhem (1861–1916), Henri Poincaré (1854–1912), and Max Planck, as well as Ostwald and Helm (Boltzmann, 1974; Ribeiro and Videira, 2007). The issues under discussion revolved around the aims and methods of theoretical physics, the importance of hypotheses in physics, how a physical theory is built, whether one must always start from empirically known facts or one could freely use scientific ingenuity and creativity to build them, or, yet, whether physical theories should describe, instead of explain, nature. This last point meant putting aside the old ideal of reaching the final causes of natural phenomena.
Boltzmann sought in those epistemological discussions to assure the survival of his favorite theories, as well as to guarantee a place for the other ones. The ability of a theory to predict new phenomena does not make it capable of predicting its own future, much less that of science. At the same time, if a theory had already produced good results, it should not be abandoned. Recognizing the scientific limits of a theory does not mean that it should be excluded from science. The main reason that motivated Boltzmann to try to better understand the process along which science develops is probably his conclusion that a theory is incapable of predicting its own future. Boltzmann’s interpretation of Darwin’s theory of evolution gave him the basis from which he was able to reach some important conclusions. For Boltzmann, a scientific theory is nothing more than a representation of nature (Boltzmann, 1974; Cercignani, 1998; Ribeiro and Videira, 1998, 2007).

1.4.1 Theoretical Pluralism

By being representations, scientific theories cannot aim to know nature in itself, since such knowledge would explain why the natural world phenomena show themselves to us the way we observe them. Therefore, such ultimate knowledge is, and will ever be, unknowable, which means that a scientific theory will never be complete or definitively true. This point of view in fact implies the existence of limits to knowledge, since it redefines the concept of scientific truth by means of the notion that there is indeed only a weak identification between the researched object and the theory, because this identification (1) cannot be unique, (2) cannot be complete, and (3) is temporally limited. The consequences of these points are as follows: (1.1) the same aspects of the natural world can be represented by more than one theory, often in competition among themselves for the preference of the scientific community; (2.1) as they are representations, or images of nature, scientific theories will never be able to describe all aspects of natural phenomena, since such complete knowledge is unreachable; (3.1) a scientific theory can one day be replaced by another. It is the possibility of replacement of one theory by another that defines and constitutes scientific progress (Ribeiro and Videira, 2007). Boltzmann’s ideas about theories as representations are clearly explained in a passage from the entry “model” he wrote for the 1902 edition of the Encyclopedia Britannica:

Models in the mathematical, physical and mechanical sciences are of the greatest importance. Long ago philosophy perceived the essence of our process of thought to lie in the fact that we attach to the various real objects around us particular physical attributes – our concepts – and by means of these try to represent the objects to our minds. Such views were formerly regarded by mathematicians and physicists as nothing more than unfertile
speculations, but in more recent times they have been brought by J. C. Maxwell, H. v. Helmholtz, E. Mach, H. Hertz and many others into intimate relation with the whole body of mathematical and physical theory. On this view our thoughts stand to things in the same relation as models to the objects they represent. The essence of the process is the attachment of one concept having a definite content to each thing, but without implying complete similarity between thing and thought; for naturally we can know but little of the resemblance of our thoughts to the things to which we attach them. What resemblance there is lies principally in the nature of the connexion, the correlation being analogous to that which obtains between thought and language, language and writing. . . . Here, of course, the symbolization of the thing is the important point, though, where feasible, the utmost possible correspondence is sought between the two . . . we are simply extending and continuing the principle by means of which we comprehend objects in thought and represent them in language or writing. (Boltzmann, 1974, p. 213)

The conclusion that natural phenomena can be represented by many different theories, even in opposition to one another, constitutes the core of Boltzmann’s philosophical thinking, his most important epistemological conclusion, and is usually called theoretical pluralism. This clearly follows from the thesis that all scientific theories are representations of nature. As a representation, a scientific theory is initially a free creation of the scientist, who can formulate it from a purely personal perspective, where preferences for a certain type of mathematical language, theoretical options, metaphysical presuppositions, and even the dismissal of some observational data can enter into its formulation. That occurs when the theory is being devised. Nevertheless, for this theory to become part of science, it needs to be confronted with experience, with empirical facts. If it does not pass this crucial test, the theory must be reformulated, or even dismissed. Boltzmann stressed that, inasmuch as all scientific theories are, to some extent, free creations of scientists, scientific work is impossible without the use of theoretical concepts, which originates from the fact that the creation of any scientific theory is impossible from the mere observation of natural phenomena, because any theory requires some mental acts. Theoretical pluralism also implies that the same natural phenomenon can be described by different theories, since any theory is a construction, an image of the natural external world, and nothing more. According to Boltzmann, one cannot do science in any other way. Either it is a representation, a construction, or the theory is not scientific. In Boltzmann’s words:

Hertz makes physicists properly aware of something philosophers had no doubt long since stated, namely that no theory can be objective, actually coinciding with nature, but rather that each theory is only a mental picture of phenomena, related to them as sign is to designatum . . .
From this it follows that it cannot be our task to find an absolutely correct theory but rather a picture that is as simple as possible and that represents phenomena as
accurately as possible. One might even conceive of two quite different theories both equally simple and equally congruent with phenomena, which therefore in spite of their difference are equally correct. The assertion that a given theory is the only correct one can only express our subjective conviction that there could not be another equally simple and fitting image. (1974, pp. 90–91)

Since theories are images of the natural world, Boltzmann also noted that all have some explanatory power. In addition, a good theory is achieved by being carefully crafted by scientists, in a process similar to Darwin’s natural selection. His words illustrate this connection very clearly:

Mach himself has ingeniously discussed the fact that no theory is absolutely true, and equally hardly any absolutely false either, but each must gradually be perfected, as organisms must according to Darwin’s theory. By being strongly attacked, a theory can gradually shed inappropriate elements while the appropriate residue remains. (Boltzmann, 1974, p. 153)

Theoretical pluralism synthesizes the fact that, as the complete, or final, knowledge of nature is impossible, a theory can only be better than another. Hence there cannot be any ultimate, or final, scientific theory. Theoretical pluralism is the necessary mechanism which prevents science from risking stagnation. Also within this perspective, truth is provisional. In fact, it can only be provisional, since any theory can only aim to be a temporary explanation of what one chooses, or is able, to observe and experiment on in the natural world. A scientific theory is indeed an approximation achieved by different means, that is, by different theoretical constructions, and when it is formulated it is already doomed to disappear, to be replaced by another theory. The ever-present irony is that no one can precisely predict when that will happen, unless one takes a dogmatic attitude (see below). Boltzmann’s theoretical pluralism in fact redefines the notion of scientific truth. This is so because since Galileo’s times scientists have been accepting the notion of truth as the complete correspondence between models and observations, between theories and empirical facts. Let us call this relationship the strong correspondence principle. Nevertheless, since according to Boltzmann all scientific theories are representations of natural phenomena and, hence, are not capable of determining what really constitutes nature, truth in modern science can no longer be thought of as seeking to determine nature itself. Therefore, this strong concept of correspondence ought to be replaced by the weak correspondence principle, which in turn enables scientists to choose one theory among other possible ones, inasmuch as more than one theory, or model, may represent the same group of natural phenomena and/or experimental data. At this point Boltzmann advances another definition of scientific truth, which may be called the adequacy principle (Ribeiro and Videira, 2007). According to
him, theory A is more adequate than theory B if the former is capable of explaining a certain set of natural phenomena more intelligibly and more rationally than the latter. The following two passages state this point very clearly.

[L]et me choose as goal of the present talk not just kinetic molecular theory but a largely specialized branch of it. Far from wishing to deny that this contains hypothetical elements, I must declare that branch to be a picture that boldly transcends pure facts of observation, and yet I regard it as not unworthy of discussion at this point; a measure of my confidence in the utility of the hypotheses as soon as they throw new light on certain peculiar features of the observed facts, representing their interrelation with a clarity unattainable by other means. Of course we shall always have to remember that we are dealing with hypotheses capable and needful of constant further development and to be abandoned only when all the relations they represent can be understood even more clearly in some other way. (Boltzmann, 1974, p. 163)

We must not aspire to derive nature from our concepts, but must adapt the latter to the former. We must not think that everything can be arranged according to our categories or that there is such a thing as a most perfect arrangement: it will only ever be a variable one, merely adapted to current needs. Even the splitting of physics into theoretical and experimental is only a consequence of the two-fold division of methods currently being used, and it will not remain so forever. (Boltzmann, 1974, p. 166)

1.4.2 Scientific Realism

One can derive several important consequences from Boltzmann’s epistemological theses (see Ribeiro and Videira, 2007, section 3), but for the purposes of this chapter I shall present just a few of them. First, besides being good representations, in the sense of being adequate descriptions of natural phenomena, theories can gain the preference of scientists by means of their predictive abilities. Once some theoretical prediction is confirmed empirically, our knowledge about nature increases quantitatively due to the weak correspondence principle. A correct prediction is always formulated in the context of a specific theoretical picture, so by being able to predict unknown phenomena a theory shows its explanatory power, as it is not only able to describe the already known “pieces,” but is also capable of going even further, showing the existence of still missing pieces that are necessary for a deeper and more organized understanding of nature. If a theory gets ahead in the preference of scientists due to its predictive abilities, it is more likely to be developed and even to incorporate several elements of the less preferred theories. After some period of time, the gap between them may become so large that it may no longer be worth working with the less preferred theories, which are then put aside and, eventually, forgotten.
Second, theoretical pluralism does not necessarily mean competition among different theoretical constructs, but often means complementarity, since all theories possess some explanatory power. Thus, all theories say something about the processes that go on in nature, as they all address the same or similar sets of natural problems they seek to explain. This implies that the emergence of different theories for similar sets of natural phenomena is far from being a problem, and in fact contributes to our better understanding of nature. And if those different theories have elements that contradict each other, observation or experimentation, together with their internal logic and consistency, provide us the mechanisms which allow us to discard the inappropriate elements of the emergent theories while the appropriate elements remain. When Boltzmann advanced the thesis of theoretical pluralism, he also had the goal of fighting dogmatism. While orthodoxy and skepticism are important to science, as they preserve the scientific knowledge obtained on solid bases until new theories prove to have enough empirical validation and internal consistency, if they become too deeply rooted the scientific community may end up avoiding any change in the established theories, which then become dogmatic. If such a situation is not effectively challenged, the scientific debate ceases to exist. Therefore, dogmatism works against scientific progress. Boltzmann believed that once theoretical pluralism was accepted and entirely absorbed into research practice, it would prevent a theory, once proposed, from being excluded from the scientific scenario, meaning in practice the extinction of dogmatic tendencies (Ribeiro and Videira, 1998). Another important point is that under Boltzmann’s epistemological views one cannot confuse reality with the real.
Reality is the set of mental pictures, or images, of the natural world created in the human brain, whereas the real is nature itself, the external natural world, whose ultimate knowledge is, and will ever be, unknowable. So, nature constitutes what is real, being outside our brains, the real world, but reality is the collection of mental pictures created in our brains by their interface with what is real, with nature. Another way of putting it: reality is a projection of the real, the external natural world, into our internal mental world of observations, perceptions, and measurements. Since reality connects our brains with what is real, this means that reality is realistic. But as reality is made of internal mental pictures, or images of nature, which change with time, one can only conclude that reality changes and evolves. To accept the above set of philosophical presuppositions advanced by Boltzmann actually means adopting the philosophical position of scientific realism. This signifies that all theories must be empirically tested. However, making an observation or performing an experiment is impossible without the supporting context of a theory in some form or shape, which means that facts are never theory-neutral, that is, they are never free of contamination from one or another theory. So, to reach
workable theories, to formulate laws of nature, scientists rely both on intuition, which means theorizing, and on the constant checking of those intuitions against experiments and facts that come from those theories. This is a highly convoluted process, prone to errors, advances, and retreats, which occurs in the slippery and treacherous ground called research practice. As Einstein (2002, p. 44) put it, “There is no logical path to these laws; only intuition, resting on sympathetic understanding of experience, can reach them.” Finally, testing a theory is not as straightforward as may initially be thought. This is because any theoretical application is built upon a whole series of auxiliary assumptions or hypotheses. So, to prove a theory false, or to reach a conclusion about the falsification of a theory, one would also have to falsify all its auxiliary assumptions. In practice this means that, when faced with possibly falsifying data, scientists tend to blame those auxiliary hypotheses and tinker with them before abandoning the entire theoretical structure, which implies that a test is often inconclusive. In addition, because theories are representations of the real, all are intrinsically imprecise and, to some extent, falsifiable (Kuhn, 1996, p. 146). These points taken together explain why it is often so difficult to dethrone a certain theory. Facts alone are not enough, as one can always tinker with the auxiliary hypotheses and claim that the facts can be accommodated by revising old, or introducing new, auxiliary hypotheses. What actually leads to the replacement of a theory is the accumulation of problems in the old theory together with the appearance of a new one, a better theory in the sense of the adequacy principle discussed above (see also Kuhn, 1996, p. 206).
This often requires a generational change, as famously remarked by Max Planck, the winner of the 1918 Nobel Prize in Physics: “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it” (1950, pp. 33–34). Hence, theories must be constantly checked against old and new empirical facts, and only time will tell if, and for how long, they survive this process. Bearing this point in mind, we must also include in this epistemological discussion the testability principle: the main requirement of a scientific theory is that it should be in some way testable against existing or new empirical facts. In other words, to be useful a theory must be vulnerable, since testing it means unwrapping auxiliary hypotheses that are often hidden at first (Baggott, 2013, p. 20). To conclude this section, the paragraphs above are just a general presentation of the epistemological concepts adopted in practice by most physicists. This epistemology was inherited from various eminent physicists of the past and goes back as far as Galileo, but it came mostly from the physicists who were active participants in the modern physics revolution that occurred at the turn of the nineteenth to the
twentieth century. They shared Boltzmann’s ideas and saw their work in physics as a natural extension of their philosophical positions. Some of them actually expressed themselves on philosophical matters, this being particularly the case of Einstein, Poincaré, and Heisenberg. As a consequence, twentieth-century physics was overwhelmingly influenced by their philosophical positions, an influence which is still present in the early twenty-first century.

1.5 Economics, Reality, and the Real World

Let us now turn back to economics and discuss it from the epistemological perspective presented in the previous section. Since this discipline also calls itself “economic science,” we are entitled to ask whether or not scientific realism has also been adopted in economics. In this respect, the first point worth mentioning is that the lack of an integrated economic theory, as discussed in Section 1.2, cannot be seen as a problem, since theoretical pluralism states that the same set of scientific questions can be described by different theories. So having a collection of classical and neoclassical theories, or orthodox and heterodox approaches to economics, is in fact an asset to the discipline. Some have criticized economics for becoming self-referential, in the sense that it is driven by its internal problems. Again, this is not really an issue, since any scientific discipline is fundamentally moved by its own internal dynamics. However, clear indications of an abnormal condition would be if theories were no longer systematically validated by empirical testing, or if there were a concerted attempt to put one of these various approaches to economic problems forward as the only legitimate starting point for addressing economic questions and to suppress the others, especially in teaching. Both situations would certainly stunt scientific progress, and the second one clearly characterizes a dogmatic attitude that can only be viewed as anti-scientific.
Unfortunately, this seems to have happened at least partially in economics. Recent evidence of the lack of pluralism within economics is the appearance of the post-autistic economics (PAE) movement, initiated by a group of Parisian economics students who, in June 2000, circulated a petition calling for reform of their economics curriculum. They complained about the narrowness of their university economics education and the one-sided way of addressing economic questions, and asked for a broad spectrum of analytical viewpoints, greater efforts to support theoretical claims with empirical evidence, and interdisciplinary dialogue. Their petition was quickly followed by similar petitions from economics students in several other countries, which were then also supported by no small number of economists worldwide. The speed with which the PAE movement spread, gathering worldwide support in less than two years, surprised everyone involved
and clearly showed a deep dissatisfaction with the way economics is taught in several of the world’s universities.1 Actually, the very existence of two seemingly watertight approaches to economic questions, an orthodox as opposed to a heterodox one, and the fact that the neoclassical economics viewpoint is considered “mainstream,” a situation that in practice relegates the alternative theories to the sidelines of economic thought, indicates that the problems felt by the students who started the PAE movement – lack of theoretical pluralism and empirical grounding in economics, one theoretical viewpoint elevated as the one solely capable of providing concepts and tools to analyze economic phenomena and bring understanding to real-life economic issues, especially in its research modus operandi – are not a recent occurrence. Classical political economy certainly has theoretical problems, but according to theoretical pluralism the existence of open problems, puzzles, or limitations is not enough for the abandonment of an empirically sound theory, especially because certain ways of addressing economic problems are unique to this theory, like the economic influence of social classes and the central role of the economic surplus. The abnormalities in economic thought indicated above are in fact a manifestation of what may be considered the deepest epistemological predicament in economics: the effective divorce between theory and empirical evidence. There is a tendency to assume that the main theoretical problems in economics can be solved purely theoretically, by a logical dispute between different theoretical visions, and not by reference to empirical data and results. This is a kind of Aristotelianism, in the pseudoscience sense (see p. 6 above), that goes against everything that physics has stood for since the time of Galileo, as it treats experimental and observational data almost with contempt.
It does not lead to the creation of a theoretical reality, but of a theoretical illusion. In other words, such a divorce between theoretical thought and empirical evidence does not create a reality in the sense of an interface between our internal mental pictures and the real external world, but an illusion made of a group of mental pictures unrelated to the real world. Therefore, even models which might have been initially inspired by empiricism, but which are not systematically put to the test, produce conclusions which quickly escalate into the surreal world of theoretical illusions. What happens then is a complete inversion of what scientific realism stands for, because those who are imbued with these theoretical illusions come to assume that reality must follow them.2
1 See www.paecon.net/HistoryPAE.htm (accessed September 28, 2018) for a brief history of the PAE movement.
2 Ribeiro and Videira (1998) named this phenomenon scientific dogmatism, which occurs when scientists become unreasonably overconfident that their theories are true in the sense that nature does follow them. By doing so, these overconfident researchers confuse reality with what is real, theory with the external world, representations with nature.
So, anything empirical becomes just
an attempt to reinforce these illusions, and what does not fit into them is labeled an externality; that is, an exogenous fact that does not belong to the phenomenon but lies outside it, which is just a fallacious way of stating that the model is unable to address those real-world facts. This means that to talk about externalities is in fact a refusal to concede scientific defeat. But when researchers are in scientific denial, that is, incapable of admitting their failure to come up with realistic representations of the real world, the process of force-fitting empirical data, rather than using data to seek hypothesis validation, becomes the norm, resulting in an entirely upside-down “scientific” methodology which then becomes the supposedly “natural” way of doing things (see examples on p. 38 below). Physicists, imbued with the scientific method, quickly detect these abnormalities in mainstream economics and, more generally, in the way economics is actually practiced nowadays (see, e.g., Blatt, 1983, pp. 4–8; Roehner, 2002, sections 1.3–1.4; Bouchaud, 2008, 2009; McCauley, 2009, chs. 1–2; Sinha et al., 2011, pp. 1–5; Buchanan, 2013; Cristelli et al., 2014; Sylos Labini and Caprara, 2017), a situation that can only lead to the conclusion that present-day academic economics is epistemologically sick. Physicists, however, were not the first to identify this illness; the economists themselves were. There is a large literature on this (Blaug, 1998; Keen, 2009b, 2011a; Hudson, 2012; and references therein), but for the purposes of this chapter only a few relatively recent examples will be presented. Let us start with Paul Ormerod, who advanced the following viewpoint on these matters. An internal culture has developed within academic economics which positively extols esoteric irrelevance. Despite new emphasis in some of the very best work being done . . . 
on confronting theory with empirical evidence, a relatively low status is given to applied work, involving the empirical testing of theories, in contrast to pure theoretical research . . . Contemporary orthodox economics . . . method of analysis is isolated from the wider context of society, in which the economy operates, and . . . its methodology, despite the pretensions of many of its practitioners, is isolated from that of the physical sciences, to whose status it none the less aspires. (Ormerod, 1997, pp. 20–21)

Mark Blaug was also very critical of the teaching in economics: Economics as taught in graduate schools has become increasingly preoccupied with formal technique to the exclusion of studying real-world problems and issues . . . Economics has increasingly become an intellectual game played for its own sake and not for its practical consequences. Economists have gradually converted the subject into a sort of social mathematics in which analytical rigor as understood in math departments is everything and empirical relevance (as understood in physics departments) is nothing . . . [General equilibrium theory] has become a perfect example of . . . “blackboard economics,” a model
that can be written down on blackboards using terms like prices, quantities, factors of production, and so on, but nevertheless is clearly and even scandalously unrepresentative of any recognizable economic system . . . It is high time that economists re-examine their long-standing antipathy to induction, fact-grubbing, and fact gathering before, and not after, we sit down to theorize. (Blaug, 1998, pp. 12–14, 30)

Blaug also noted that “the rot goes back to” John Hicks and even Joan Robinson (1903–1983), in works that appeared in 1939 and 1933 respectively, concluding that “Modern economics is sick” (1998, see also n. 6). Steve Keen also voiced similar thoughts regarding his education in economics: [W]hat I had initially thought was an education in economics was in fact little better than indoctrination. More than a decade before I became an undergraduate, a major theoretical battle had broken out over the validity of economic theory. Yet none of this turned up in the standard undergraduate or honours curriculum . . . There were also entire schools of thought which . . . were ignored unless there was a dissident on the staff . . . Why has economics persisted with a theory which has been comprehensively shown to be unsound? . . . The answer lies in the way economics is taught in the world’s universities. (Keen, 2001, preface)

Years later, after the 2008 economic crisis began, he added that “neoclassical economists were about the only ones who were ill equipped to see it coming” and that “as a means to understand the behavior of a complex market economy, the so-called science of economics is a melange of myths” (Keen, 2011a, preface to 2nd ed.). Backhouse (2002, ch. 11) discussed how the mathematization of economics starting in the 1930s led to a separation between economic theory and economic applications, and thereby to a divorce between theoretical and empirical research. Lionel Robbins (1898–1984) provided the intellectual basis of such an approach by arguing that the main economic theses could be obtained without knowing much more than that resources are scarce, which suggested that a theory could be sought largely independently of the empirical world. This meant that theoreticians could ignore empirical work, since the task of testing theories falls to econometrics. This point is also shared by Hudson (2010) and by Mandelbrot, who stated the following in this respect. Compared to other disciplines, economics tends to let theory gallop well ahead of evidence. I prefer to keep theory under control and stick to the data I have and the mathematical tools I have devised. (Mandelbrot and Hudson, 2004, p. 229)

Chang (2010) pointed out the absence of economists in the governments of East Asia during the miracle years when their economies boomed, roughly after the 1950s, and advanced as a possible explanation that “economics taught in university
classrooms is too detached from reality to be of practical use.” In addition, he argued that “modern economy is populated by people with limited rationality and complex motives, who are organized in a complex way, combining markets, (public and private) bureaucracies and networks.” He concluded that “Economics does not have to be useless or harmful. We just have to learn the right kinds of economics” (Chang, 2010, pp. 244, 250–251). Hudson (2012, p. 116) argued that neoclassical economics came into being not because there was some kind of understanding that the limits of validity of classical political economy had been reached, in the way modern physics came into being to extend classical physics, but as a kind of reaction movement whose purpose was to change the topic away from the social implications of the classical theory, a view somewhat shared by Screpanti and Zamagni (1993, pp. 149–155). Hudson systematically referred to neoclassical theory as “post-classical” to emphasize its “anti-classical” nature, or even as “junk economics” to emphasize its unrealistic underlying assumptions. He strongly criticized the way mathematics is used in economics: To mathematize economic models using obsolete or dysfunctional concepts hardly can be said to be scientific, if we define science as the understanding of how the world actually works . . . Many economists are trained in calculus and higher mathematics without feeling much need to test their theories quantitatively. They tend to use mathematics less as an empirical measuring tool than as an expository language, or simply as a decoration to give a seemingly scientific veneer to their policy prescriptions . . . The main criterion of success in modern economics is its ability to maintain internal consistency in the assumptions being made. As in science fiction, the trick is to convince readers to suspend their disbelief in these assumptions . . . What modern economics lacks is an epistemological dimension. (Hudson, 2012, pp. 163, 168, 172, 175, emphasis added)

Häring and Douglas (2012) went further and argued that it was the influence of the powerful that changed and distorted theoretical economics so that it ignored critical empirical facts. Their analysis aimed at focusing on this issue: [We] examine how we got from an economic science that treated relative economic power as an important variable and regarded the resulting income distribution as a core issue of the discipline, [changed] to a science that de-emphasizes power and does not want explicitly to deal with distributional issues. (Häring and Douglas, 2012, p. 1) [T]hese external influences have, over the decades and centuries, created a science that is strongly biased in favor of negative viewpoints regarding issues like creating equality of opportunity . . . [and] neglecting or even denying the influence of power and the tendency of market economics toward the concentration of wealth, power and opportunity in a minority. (Häring and Douglas, 2012, p. 45)

Mirowski (2013) discussed “the economic crisis as a social disaster, but simultaneously a tumult of intellectual disarray.” His major thesis was that “most economists did not understand the economy’s peculiar path prior to the crisis, and persisted in befuddlement in the aftermath,” a situation which he considered a “catastrophic intellectual failure of the economics profession at large” (Mirowski, 2013, pp. 15, 18). Finally, Gallegati (2018) voiced similar criticisms, stating that “the economic theory taught in almost all universities around the world is axiomatic and seems inadequate to explain the real world,” adding that “the real problem lies . . . in the fact that the dominant economic theory does not contemplate a major crisis.” He concluded by criticizing mainstream neoclassical economic theories not because of their lack of advanced techniques, but because “it just uses the wrong ones” (Gallegati, 2018, pp. 17, 19, 20). Concluding this section, it must be noted that the above-discussed epistemological sickness afflicting economics is not unique to this discipline. Physics is not immune to it, as various authors have indicated that certain branches of modern theoretical physics have fallen ill with the same disease (Woit, 2006; Baggott, 2013; Unzicker and Jones, 2013, and references therein). These authors strongly criticized physicists working on certain problems of theoretical physics on the same epistemological grounds as modern economics was criticized above: replacing experimentation, observation, and testing with pure theorizing means giving up the scientific method (see also Ellis and Silk, 2014). 
There is also a dangerous tendency to regard results coming from computer simulations made with equations used in domains where their empirical validity is not well established, or coming from untested theories, as if they were empirical results, rather than, at best, just general indications of the possible behavior of the phenomena under study. In this respect it is worth remembering the criticism made by the well-known Soviet physicist Lev Davidovich Landau (1908–1968) of his colleagues who worked on cosmological problems in the 1960s and at that time already had plenty of speculative theories about how the universe evolved, but very few empirical facts coming from astronomy. He stated that “cosmologists are often in error, but seldom in doubt.” Considering that the temptation to substitute logical reasoning for empirical results seems very hard to resist in present-day academic economics, a situation which may well explain the recent failures of mainstream economics regarding its inability to see the 2008 crisis coming, one may paraphrase Landau and state that a great many academic economists appear to be mostly in error, but never in doubt. The modern physics revolution that started in the late nineteenth century was overwhelmingly influenced by physicists with strong philosophical backgrounds, this being particularly true of those in the German-speaking world. One of the many
tragedies of Nazi Germany was the destruction of several academic institutions in physics and mathematics which had kept this philosophical tradition alive for centuries, and this destruction contributed to a slow decline of philosophical influence in physics as it developed in the second half of the twentieth century. Hence, the above critique of the ways certain branches of theoretical physics have been developing lately is just a consequence of this loss of philosophical tradition. Thus, historical perspective suggests that both physics and economics would greatly benefit from the return of some philosophical education.

1.6 Econophysics and the Empirical World

We can now discuss the final question posed at the beginning of this chapter: whether, and in what way, econophysics can contribute to improving our understanding of economic phenomena. To better answer this question we should first look briefly at the work done by a few econophysicists before the term econophysics was coined and the area created, that is, before 1995. The short list below of early econophysicists is by no means comprehensive, but it is enough to show some important real cases of the different viewpoints taken by those physicists when approaching economic problems.

1.6.1 Before 1995

The first true econophysicist in the sense we understand the term today was Louis Bachelier (1870–1946). He was in fact a trained mathematician, but he discussed Brownian motion five years before Einstein and applied it to the study of finance in his PhD thesis, entitled “Théorie de la spéculation” and finished in 1900. His PhD supervisor was Henri Poincaré, and his work basically provided the foundations of mathematical finance. The results he obtained became essential to this topic, although their seminal importance went largely unrecognized for several decades (Courtault et al., 2000; Mandelbrot and Hudson, 2004, ch. 3). Frederick Soddy (1877–1956) was a physicist who, in the first two decades of the twentieth century, worked on radioactive decay problems and whose outstanding achievements earned him the 1922 Nobel Prize in Chemistry. Afterwards he turned to economics and wrote a book summarizing his findings, in which he advanced the proposition that “[t]he production of Wealth, as distinct from Debt, obeys the physical laws of conservation and the exact reasoning of the physical sciences can be applied” (Soddy, 1926, p. 294). He distinguished between real wealth and virtual wealth, the former being the means of production, machinery, buildings, tools, etc., whereas the latter is made of money and debt. For him, real wealth is subject to the laws of physics, whereas debt is subject to the laws of mathematics, since debt
does not decay with time and is not consumed in the process of living. Since debt consists basically of financial claims on real wealth, it tends to expand more rapidly than the production of real wealth available to pay for the virtual wealth. Although he seems to have been entirely ignored by the academic economists of his time and for several decades afterwards, perhaps because they were focused on building the theoretical edifice of neoclassical economics and Soddy was very critical of various prevalent concepts of that theory, his ideas seem to be making a recent reappearance (Martin Hattersley, 1988; Daly and Rufus, 2008; Zencey, 2009; Hudson, 2012, p. 416). Matthew F. M. Osborne (1916–2003) was a physicist who worked on several problems of applied physics, such as the hydrodynamics of migrating salmon, before turning his attention to the stock market and finance. His book on these topics clearly exemplifies how a trained physicist views economic and financial problems, using both real cases found in the history of physics and issues from the philosophy of science to discuss the usability of mathematical axioms and their limitations when applied to the empirical sciences (Osborne, 1977, sections 3.3–3.5). Of particular relevance to the issues raised in this chapter is his discussion of the theorem proved by Kurt Gödel (1906–1978), which states that any domain defined by a set of axioms will always raise questions that cannot be decided within that predefined axiomatic domain. Hence, there will always exist a finite range of experience and understanding where our ideas work, which means that we “can understand the theory best when [we] find out where those boundaries are. So Gödel’s theorem puts a limit on the power of logic itself” (Osborne, 1977, p. 112). Osborne also confronted the idealized supply-and-demand functions found in elementary textbooks of orthodox microeconomics, the so-called Marshallian cross diagram (see, e.g., Gregory Mankiw, 2009, p. 77), with real-life examples (Osborne, 1977, sections 2.3–2.4; see also McCauley, 2009, section 2.4) and concluded that “it is indeed very difficult to extract from real data what a real life supply and demand curve is like,” although it must be noted that some of his real-life supply-and-demand curves bore some resemblance to the idealized version, though not in the way economists portray it. He also noted that “supply and demand are both altered by [a] transaction and that there is an asymmetry of information in who has knowledge of the other demand and supply functions” (Osborne, 1977, pp. 18, 25–27). Echoing Boltzmann in some sense, Osborne voiced what social scientists do not seem to have learned: [I]t is an incorrect procedure that data should be made to fit the theory. . . . As a result [social scientists] very often won’t even undertake an investigation and collect data unless they have some sort of a theory or model to fit the data to. This is not the way significant
discoveries are made, . . . [which is] probably an explanation . . . of why economics is called the dismal science, but that doesn’t prevent economics from being important. (1977, p. 19)

John Markus Blatt (1921–1990) was an Austrian-born physicist specializing in nuclear physics and superconductivity, who in the 1970s turned his attention to economics. His writings on this subject showed that his economic interests were mainly focused on the dynamics of economic phenomena. He considered the trade cycle as the most striking example of dynamics in economic systems and devoted his first book to this subject, critically surveying the most important dynamic economic theories (Blatt, 1983). He criticized neoclassical comparative statics analysis by stating that “it is by no means true that all dynamic behaviour can be understood best, or even understood at all, by starting from a study of the system in its equilibrium state” since there are systems whose important and interesting features are essentially dynamic (Blatt, 1983, p. 5). Echoing Boltzmann’s discussion on dogmatism (see p. 25 above), Blatt argued: [T]he main enemy of scientific progress is not the things we do not know. Rather, it is the things which we think we know well, but which are actually not so! Progress can be retarded by a lack of facts. But, when it comes to bringing progress to an absolute halt, there is nothing as effective as incorrect ideas and misleading concepts. (1983, p. 6, emphasis in the original) [T]he theory of [Blatt’s] book . . . is directly relevant to something equally prevalent, namely the creation of economic myths and fairy tales, to the effect that all our present-day ills, such as unemployment and inflation, are due primarily to the mistaken intervention by the state in the workings of what would otherwise be a perfect, self-adjusting system of competitive capitalism. This system was in power in the nineteenth century. It is well-known that it failed to ensure either common equity . . . or economic stability . . . 
[T]he failure of stability was no accident, but rather was, and is, an inherent and inescapable feature of the freely competitive system with perfect market clearing. The usual equilibrium analysis assumes stability from the start, whereas the equilibrium is highly unstable in the long run. The economic myths pushed by so many interested parties are not only in contradiction to known history, but also to sound theory. (1983, p. 8; emphases in the original)

In collaboration with Ian Boyd, his PhD research student at the time, this dynamic approach was further advanced. They argued that the essential features of the observed trade cycle of a laissez-faire system cannot be understood in purely real terms. Rather, it is necessary to include the psychological variable of “confidence” with its major effects on credit conditions, and thence on the “real” economy. (Boyd and Blatt, 1988, p. 1)

They also proposed a model of the trade cycle incorporating an investor confidence variable as a major element, with a usable definition relating it to what they called the “horizon of uncertainty,” which is “the time interval over which the typical investor is prepared to place at least some trust in his, or other peoples’, predictions of the future” (Boyd and Blatt, 1988, p. 4). As we shall see in later chapters, Blatt’s approach to the dynamics of economic systems will significantly influence our discussions. When in the mid-1990s the term econophysics was coined, formally defining and establishing this new research field (Eugene Stanley, 2008), physicists started to join in growing numbers and, as a result, a flurry of econophysics research activity came about. This inevitably also led other physicists to voice similar criticisms of mainstream economics. Some of them have already been mentioned and others will be reviewed as needed in the next chapters. Nevertheless, considering the brief presentation above, and quite apart from Bachelier, who basically created the field of mathematical finance, neither Osborne nor Blatt dismissed conventional economic theories; rather, they looked at them from a physicist’s viewpoint and, in doing so, disagreed with several, if not most, of the prevalent neoclassical assumptions and results because they compared those results and assumptions with empirical facts. Note that both of them were trained physicists and their criticisms of neoclassical economics are essentially similar to the above-discussed epistemological sickness of academic economics (see p. 29 above). As a consequence, those authors stood in a position highly critical of mainstream economics, on a par with the previously examined criticisms made by economists themselves, but that does not mean that they were in favor of all results of classical political economy or dismissed the neoclassical theoretical body entirely. 
In fact, recent research in econophysics has stressed the need to be careful when criticizing orthodox economics, because neoclassical equilibrium theories were initially based on a reasonable attempt to understand economic phenomena (Doyne Farmer and Geanakoplos, 2009). In summary, physicists criticizing the foundations of neoclassical economics is not at all a new phenomenon. For several decades, various physicists working independently on economic problems have voiced criticisms that varied from serious reservations to flat-out rejection of most mainstream economics theoretical premises and conclusions, on similar grounds: that they were reached without following basic scientific methodology. This translates into the absence of empirical foundations for those theories, a fact which inevitably led to the creation of several myths and illusions rather than sound scientific theories, results, and conclusions. So, from this assessment of mainstream economic theory it is not difficult to understand why, prior to the 2008 financial crisis, the economics profession was
so misguided in its evaluation of macroeconomic stability, misguided to such an extent that Robert Lucas, the 1995 winner of the Nobel Prize in Economics, wrote: [M]acroeconomics . . . succeeded [in solving the] central problem of depression prevention . . . for all practical purposes . . . for many decades. (Lucas, 2003)

The former chairman of the Federal Reserve, the central bank of the USA, also strongly underestimated economic volatility because it was thought to have been tamed, or “moderated,” by the supposed achievements of economic theory (Bernanke, 2004). As it turned out, economists were taken aback when the crisis started (Colander et al., 2009; Kirman, 2009; Krugman, 2009). But, now that we are aware of the important limitations of neoclassical equilibrium theories, we can start the hard work of laying new foundations to go beyond them.

1.6.2 Methodology

Considering what has been set out so far in this chapter, it is clear that the first, and perhaps major, contribution of econophysics to economics is epistemological: to bring to the economic mainstream the scientific methodology that has proved so successful in physics for centuries, thereby supplying economics with its missing epistemological perspective, as noted above by Michael Hudson. Again, this viewpoint is not really new (Roehner, 2002; McCauley, 2009; sections 3 of Moura and Ribeiro, 2009, 2013, and references therein) and some economists have also reached a similar conclusion (Drakopoulos and Katselidis, 2015). Inasmuch as the previous interfaces of physics with other disciplines proved so successful, e.g., astrophysics, geophysics, and biophysics, there is a good chance that econophysics will follow suit.

1.6.2.1 Epistemology

The epistemological perspective that ought to be brought by econophysics may be divided into three major aspects. First and foremost, it is mandatory to bring empirical testing and validation of economic theories and models to the forefront of economic analysis, meaning that theories and models must propose some metric for their quantitative testing. In other words, models and theories must be made vulnerable to empirical scrutiny. It is preferable that this metric comes with the proposal of the model, but there is also room for theoretical work where such a metric is proposed later. If that does not happen, the theory or model is destined to fall into Pauli's not-even-wrong category and deserves little attention until someone somehow makes it testable.


Second, research in economics must look at the data with as few theoretical preconceptions as possible in order to try to discover patterns, regularities, processes, structures, and interrelationships that may indicate where theories can be built. The aim must be to define a problem arising during empirical research and then to devise or select a theory capable of solving it. This entails data-oriented studies based on appropriate metrics, where the number of free parameters must be kept to the bare minimum during the theory-building process and can only increase once the theory is well tested and validated. The opposite path, that is, hypothesizing the key theoretical roles first and then looking for evidence to test the hypothesis, will most likely lead to data being force-fitted into models during the data analysis, a situation equivalent to turning the scientific method upside down. Such a path very rarely produces functioning theories, even in physics, because it fails to consider multiple competing, and often equally consistent, hypotheses. A striking example of such upside-down thinking within economics comes from the following statement allegedly expressed by Edward C. Prescott, the 2004 Nobel Prize Winner in Economics: "If the model and the data are in conflict, the data must be wrong" (cited in Farmer, 2016, p. 49). This statement reportedly appeared in defense of a research programme that advocated the use of data "selectively to judge a theory," called "calibration" (Farmer, 2016, p. 49). William F. Sharpe, the 1990 Nobel Prize Winner in Economics, also expressed a similar viewpoint:

I've been amazed at how little you can trust any empirical results, including your own. I have concluded that I may never see an empirical result that will convince me that it disconfirms any theory. I'm very suspicious. If you try another time period, another country, or another empirical method, you often will get different results.
Fischer Black,3 in a wonderful talk that was published toward the end of his life, explained why theory is much more important than empirical work. (Bernstein, 2005, p. 43)
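Sharpe's remark that "another time period, another country, or another empirical method" often yields different results is, from a physicist's standpoint, exactly what happens to models with too many free parameters. The toy sketch below is entirely illustrative (the data, random seed, and polynomial degrees are arbitrary choices, not taken from any cited study): it fits a noisy linear relation with a two-parameter line and with a ten-parameter polynomial, then scores both on fresh data that neither fit has seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "observations": a simple linear law seen through noise.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, x_train.size)

# Fresh, unselected data drawn from the same underlying law.
x_test = np.linspace(0.0, 1.0, 100)
y_test = 2.0 * x_test

def out_of_sample_rmse(degree):
    """Fit a polynomial with degree + 1 free parameters to the noisy
    sample and report its root-mean-square error on the unseen data."""
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_test)
    return float(np.sqrt(np.mean((pred - y_test) ** 2)))

for degree in (1, 9):
    print(f"degree {degree}: out-of-sample RMSE = {out_of_sample_rmse(degree):.3f}")
```

The nine-degree polynomial passes through every training point, yet in this run it generalizes worse than the two-parameter line, which is why the text insists that free parameters may only be multiplied after a theory has been tested and validated.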

Bearing in mind the epistemological viewpoints expressed above, such statements indicate a lack of understanding of the history of science, namely that science evolves through a process that intertwines theory with experimentation and/or observation. So, real science does not place theory above experimentation or empirical evidence, theory above the data, or the other way round. A cursory study of the history of science makes it abundantly clear that the best scientific theories are the ones that survive once they are compared to data that were not previously "selected." By selecting data to judge a theory one can prove anything right. Data, of course, can be wrong, but that is no reason to distrust empirical data in general; it is a reason to work toward better data. And inasmuch as models are representations, a result produced by a model does not necessarily mean that it will be present in the real world. These statements express a view that puts theory above empirical validation, which is very far from being the most productive way of doing science. If Galileo had done that 400 years ago it is quite possible that physics would still be Aristotelian and we would have missed all the technological results brought about by the applications of classical and modern physics, from airplanes to smartphones.

3 Fischer Sheffey Black (1938–1995).

Another example of such upside-down thinking was presented by Roger E. A. Farmer, who argued that "all models are wrong . . . [because this] is the definition of a model" (Farmer, 2016, p. 49). Under the epistemological perspective advanced here all models do have some explanatory power and, so, are correct to some extent. But, being representations, they have limitations, sometimes so severe that it is best to put the model aside and produce a new one. So, all models are both right and wrong to some extent, and the task of science is exactly to find these limitations, conclude which models are more adequate representations of the real world and, if inadequate, throw them away and propose better ones. Finally, theoretical pluralism must be taken to heart in economics research, which means that there cannot be an a priori dismissal of any school of economic thought, orthodox or heterodox, neoclassical economics or political economy. At the present state of affairs all of them have something to contribute to the understanding of economic phenomena, but all of them need to be empirically scrutinized through the modern economic databases available worldwide for most, if not all, countries in order to see whether their theories and models stand the test of experience and observation.
No author, however important to political ideology, can be left out of such close scrutiny, and only through this process will we see whether their ideas reflect how nature operates and, therefore, truly belong to science, or whether those ideas belong someplace else. If the epistemological perspective proposed above were actually absorbed into economics research practice, it would probably bring about a change of paradigm in economics, in fact a possible scientific revolution in the Kuhnian sense (Kuhn, 1996). Nevertheless, transformation processes of this sort do not come about easily, since evidence from the history of science suggests that paradigmatic shifts in scientific disciplines do not happen smoothly, usually being resisted at every turn. And although various econophysicists and a few economists believe that this is the only way out of the present intellectual crisis of academic economics, it remains to be seen whether its current practitioners will learn from this crisis and change the profession, or whether the discipline will eventually have to be taken over by scholars originating from other areas such as, among others, physics.


Whatever outcome this changing process in economics brings, there are a few practical developments resulting from the three methodological points above that are worth some brief comments.

1.6.2.2 Theoretical Physics

Despite the warning words above about following a path that starts from pure theoretical speculation, it must be mentioned that Einstein's general relativity theory, advanced in 1915, was in fact developed by following this speculative theoretical path. That was so because Einstein proposed a theory that was not suggested by data, but arose from pure theoretical reasoning based on his views about some theoretical inconsistencies of Newtonian mechanics and his desire to extend his special relativity theory, advanced a decade earlier. Notwithstanding, to the general astonishment of the physics academic establishment of the time, the theory was validated by astronomical observations almost immediately afterwards. Since then, it has been subjected to intense observational and experimental scrutiny, especially after the dawn of the space age, using (unselected) data of all kinds, from gravitationally bound binary stars to the GPS navigational system, to name just two among many, and has survived the test of time by consolidating its empirical validation one test after another, in addition to having opened up an entirely new theoretical view of the physical world, a situation that helped to elevate its theoretical and experimental importance to new levels, although at the time of writing some competing gravity theories have not yet been entirely ruled out by solar system experiments and gravitational wave measurements (Moskvitch, 2018; Sakstein, 2018). This was, however, a rare case, perhaps the only one, of a successful physical theory built that way: proposed without a clear set of experimental facts suggesting the need for a new theory, and only afterwards tested and successfully validated empirically.

For this reason, it cannot be considered a role model of scientific investigation, especially if we remember that Einstein himself tried very hard, but failed, to repeat his own feat in the last four decades of his life, when he devoted himself to finding a physical theory unifying gravitation, electromagnetism, and quantum theory (Pais, 2005), a task that to this day continues to elude theoretical physicists. So, although this speculative path for proposing scientific theories might sometimes work, its success rate, as measured by a posteriori empirical validation, is exceedingly low even in physics. Nevertheless, its impact on economics might have been considerable: since Einstein's achievements in theoretical physics coincided temporally with the consolidation of neoclassical economics, it is possible that economists misunderstood the role of theoretical reasoning by failing to recognize the


uniqueness of Einstein's achievements and to note his subsequent inability to repeat his own previous triumphs using pure theoretical reasoning. The net result of these failures was the entirely unwarranted claim that theoretical reasoning is more important than empirical verification of theories.

1.6.2.3 Scientific Method

The systematic application of the scientific method creates in practice a virtuous circle where theoretical and experimental approaches occur in parallel and complement each other, inasmuch as they progress in a reciprocal feedback that in fact constitutes two sides of the same coin. This is why physics is both an experimental and a theoretical science. However, experiments are characterized by a high level of controllability and reproducibility, whereas this is not possible when one deals with observations, which have a low or unpredictable level of reproducibility and no controllability. Nonetheless, both experiments and observations require some metric and measurement tools in order to allow the phenomena to be studied quantitatively. Since economics deals with social issues, experiments in controlled conditions may be either unrealistic or undesirable on ethical grounds. Even if they are allowed by some ethical code, perhaps similar to what happens in the testing of pharmaceutical compounds on animals and humans, due to their social nature the possible economic experiments may be too limited to be of value. In this case, researchers are left with no option other than to rely on observations, which puts them on a par with astronomers, as astronomical objects cannot be created or reproduced in a laboratory (at least not yet). The major difference is that observations are much less constrained than experiments and, thus, are subject to much larger margins of error.4 But, they will have to do if experiments are either impractical or undesirable.

4 Errors in physical measurements will be discussed in Section 4.1.1.

1.6.2.4 Probability Theories

Another important aspect to be considered is that scientists always work under a reasonable degree of subjectivity when performing scientific research. This is so because, under the viewpoint that all theories or models are representations, or images, of the real world, by necessity they circumscribe and limit reality in one way or another, as the scientist must necessarily pick and choose among the various aspects of reality to be included in a model. This is, of course, also valid when one sees nature through the lens of probability theory, that is, in statistical and stochastic modeling, which implies that in using a model based on probability theory to reach a conclusion about the possible outcome

of an event, the calculated probability, even when accurately derived from theory, will in itself also provide a partial and incomplete answer about this outcome, since all models, probability theories included, are partial and incomplete representations of the real world. Therefore, no probability calculation can be used as a kind of empirical measurement like, for instance, temperature, which is based on an objective physical way of interacting with nature. Perhaps the best way of describing this situation is to use the word expectation, in the sense that we expect something to occur by means of a certain probability calculation. Even so, an event having, say, 99.9999% probability cannot be expected to have a sure outcome, because this probability calculation is based on a theory which is in turn a necessarily limited, therefore incomplete, image of the real world. For this reason, expectations based on small probabilities have even less meaning, to the point that very small probabilities may carry almost no meaning at all in terms of expectations. This is, perhaps, what Nassim Taleb called "true randomness," because this whole discussion is really about the limits of knowledge in probability theory (2010, pp. 339–360), a point somewhat popularized by the well-known quote about "unknown unknowns," that is, things we do not know that we do not know (Rumsfeld, 2002).5

5 Two other manifestations of the limits of knowledge in physical theories are the existence of constants of nature and physical singularities. Constants of nature, like the gravitational constant, have no theoretical explanation since they usually appear in semi-empirical approaches to physical problems, are measured by careful experiments, and are then used as such. When a more elaborate theory is proposed to take into account some constant of nature, it comes with another constant of nature. Physical singularities imply the breakdown of a theory, where it is no longer valid. This is, for instance, the case of general relativistic results like black holes or the cosmological big bang.

This point of view regarding probabilistic calculations leads us directly to Bayesian statistics. Currently there are two ways of defining probability. The frequentist definition assumes that probabilities represent long-run frequencies with which events occur; in other words, probability in frequentist statistics is only meaningful in the context of repeated experiments, or multiple trials, even if those repetitions are only hypothetical. The Bayesian definition assumes that probabilities are degrees of credibility that an event will occur. So, probability in Bayesian statistics is seen as a measure of belief, that is, it is something subjectively used to describe uncertainties because it quantifies our ignorance about whether something will happen. In other words, in the Bayesian view, probabilities are essentially linked to our degree of knowledge about an event (D'Agostini, 2003, section 2.2; VanderPlas, 2014). Bayesian statistics is based on Bayes' rule, sometimes also called the Bayes–Price–Laplace rule, after Thomas Bayes (1701–1761), Richard Price (1723–1791), and Pierre Simon Laplace (1749–1827), those behind its original formulation, publication, interpretation, and early practical use.6 The basic idea of this approach is to combine a subjectively assessed prior probability of an initial belief with objectively attained data, so that an initial belief is modified by objective new information to produce a posterior probability of a newly revised belief. This methodology evolves because every time a new bit of information is added the posterior becomes the prior of the next iteration and probabilities are recalculated. Hence, the accumulation of data brings observers closer and closer to certitude, making them converge on the truth. In a sense, John Maynard Keynes summarized this process of belief evolution in a quotation attributed to him: "When the facts change, I change my mind. What do you do, sir?"7 When there is enough data and priors have similar or equal weight, these two definitions tend to produce the same results (VanderPlas, 2014). But some real differences occur when one has little data, or several parameters, and a good amount of knowledge about the event, so that one is able to use different weights to constrain priors. This was the case, for instance, in the hunt for a missing commercial airplane that crashed out of radar range over a remote part of the Atlantic Ocean after encountering a severe electric storm during an international nocturnal flight from Rio de Janeiro to Paris, and plunged to the depths of the sea in a region full of underwater mountains and turbulent water currents (McGrayne, 2011, pp. 252–256). In such circumstances a Bayesian will use every bit of information, old and new, however small, to update his or her belief about the plane's location, even to the extent of using very tiny probabilities that a frequentist would discard as meaningless because their frequencies are irrelevant. For a Bayesian, every piece of information is considered a valuable datum, no matter how tiny its implied probability. So, the hunt for the missing airplane would go on, every iteration increasing our knowledge about the probable location of the airplane's debris, until it was actually found after covering only a small part of the area where the accident might have occurred. At this point an important question arises. If probabilities are regarded as subjective measures of belief, then different people could look at the same information and reach different conclusions because they may use different subjective probabilities, or priors. Would this not dismiss the whole Bayesian approach, because science must be objective whereas priors are not? This argument has been raised again and again against Bayesian reasoning by those who think that science is, or has to be, entirely objective (McGrayne, 2011).
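The logic of such a Bayesian search can be sketched in a few lines. The toy below uses invented numbers (grid size, detection probability, wreck location) and is not the actual search algorithm or data: each unsuccessful scan of a grid cell multiplies that cell's probability by the miss rate and renormalizes the rest, so even very small probabilities are kept and updated rather than discarded.

```python
import numpy as np

def bayesian_search(true_cell, n_cells=25, p_detect=0.8, max_scans=200, seed=1):
    """Search a grid for a lost object by always scanning the cell
    currently believed most probable, updating beliefs by Bayes' rule.
    All parameter values here are hypothetical illustrations."""
    rng = np.random.default_rng(seed)
    belief = np.full(n_cells, 1.0 / n_cells)      # uniform prior
    for scan in range(1, max_scans + 1):
        cell = int(np.argmax(belief))             # most credible cell
        if cell == true_cell and rng.random() < p_detect:
            return scan                           # object found
        # Posterior after a miss: the scanned cell's probability is
        # scaled by the chance of missing the object even if it were
        # there, then the whole distribution is renormalized.
        belief[cell] *= 1.0 - p_detect
        belief /= belief.sum()
    return None                                   # search budget exhausted

print("scans needed:", bayesian_search(true_cell=17))
```

Note that a cell's probability never reaches zero after a miss; as in the airplane hunt, the Bayesian keeps tiny probabilities alive and lets further evidence decide.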

6 See McGrayne (2011) for a detailed historical account of Bayes' rule and its two centuries of controversy.
7 There is some controversy over whether this is precisely what Keynes said, as it has been claimed that what he really stated is as quoted at page vi (see also Kay, 2015; Keynes, n.d.). Whatever the historical truth, the quote above, as well as the one paraphrased in the epigraph of this book, is now widely credited to him.


Nevertheless, it is exactly this point which leads us to Boltzmann's epistemological views, because, as in science in general, probabilistic results cannot be considered as entirely objective, but as always having subjective elements, being therefore incomplete and subject to failure. Similar to theoretical pluralism, which states that there is no unique way of representing nature, there should be no unique way of assigning prior probabilities, as they are, as with the proposal of any theory, always contaminated by subjective choices.8 So, similar to the way of proving a theory, experimentation updates our beliefs and determines more adequate prior and posterior probabilities until different opinions converge to the truth. In other words, it means learning from experience, which is the same as combining old knowledge with new. We shall return to this topic in Section 4.1.2 when discussing the concepts of risk and uncertainty in economics.

8 A note of caution is due here. Subjectivity does not mean conventionalism or arbitrariness (D'Agostini, 2003, p. 30).

1.6.2.5 Mathematical Economics

The use of mathematics in economics is another point which deserves some thought. It has been known since Galileo's time that mathematical tools are essential to describe physical concepts, and new physical theories often require the development of new mathematical tools. As an example, the invention of the infinitesimal calculus by Newton and Gottfried W. Leibniz (1646–1716) in the seventeenth century was fundamental to the development of classical mechanics. However, mathematical tools can only be effective as long as the scientific concepts they describe are equally effective representations of the real world. If they are not, even the most sophisticated mathematics will produce bad science, or no science at all, sometimes dubbed Cargo Cult Science, in reference to the speech delivered by the famous physicist Richard Feynman (1918–1988), a recipient of the 1965 Nobel Prize in Physics, about methodologically inadequate, or false, science (Feynman, 1974).

For this reason physicists have known for quite some time that it is a serious mistake to confuse mathematics with physics. Feynman expressed this viewpoint very clearly in a series of lectures delivered at Cornell University in 1964: The mathematicians only are dealing with the structure of the reasoning, and they do not really care about what they are talking. They don't even need to know what they are talking about, or, as they themselves say, whether what they say is true . . . If you state the axioms and say, such-and-such is so, and such-and-such is so, and such-and-such is so: what then? Then the logic can be carried out without knowing what the such-and-such words mean. That is, if the statements about the axioms are carefully formulated and complete enough, it is not necessary for the man who is doing the reasoning to have any knowledge of

the meaning of these words, and [he] will be able to deduce in the same language new conclusions . . . In other words, mathematicians prepare abstract reasoning that is ready to be used if you will only have a set of axioms about the real world. But the physicist has meaning to all the phrases. And there is a very important thing that the people who study physics that come from mathematics don’t appreciate. Physics is not mathematics. And mathematics is not physics. One helps the other. But, you have to have some understanding of the connection of the words with the real world . . . to find out whether the consequences are true. And this is a problem which is not a problem of mathematics at all. Mathematicians also like to make their reasoning as general as possible . . . [but the] physicist is always interested in the special case. He is never interested in the general case! He is talking about something! He is not talking abstractly about anything! He knows what he is talking about. When you know what it is you are talking about . . . then you can use an awful lot of common sense . . . about the world . . . You’ve seen various things, [and] you know more or less how the phenomenon is gonna behave, whereas the poor mathematician translates into their equations, and [as] the symbols don’t mean anything to him he has no guide, but precise mathematical rigor and care in the argument, whereas the physicist, who knows more or less how the answer is gonna come out, can sort of guess part way, and so go along rather rapidly. The mathematical rigor of great precision is not very useful in the physics, nor is the modern attitude in mathematics to look at axioms. Mathematicians can do what they want to do. One should not criticize them because they are not slaves to physics. It is not necessary that just because [something] is useful to you they have to do it that way. They can do what they will. It is their own job. 
And if you want something else, then you work it out for yourself. The [next] question is to what extent models help? . . . But, the greatest discoveries, it always turns out, abstract away from the model and it never did any good . . . The method of guessing the equation seems to be a pretty effective way of guessing new laws. This shows us again that mathematics is a deep way of expressing nature, and attempts to express nature in philosophical principles . . . is not an efficient way. (1964, 44:15–50:40)9

In the same vein, Galam (2012) elaborated why "physics does not care about mathematical rigour":

While the use of modeling in physics has been tremendously powerful in establishing the field as an exact hard science, capable of building concrete and efficient experimental devices, its power comes from the empirical use of mathematics to describe real phenomena. This means that it is not the mathematical rigor that prevails but the capability to reproduce particular properties using some mathematics. It is exact opposite of what economists have been doing for decades, who focused on the mathematical rigor of their model rather than their ability to reproduce real features. Another essential characteristic of physics is that all results obtained from the various models are aimed, sooner or later, at being tested against experimental data, even if it takes many years or decades or even centuries before being able to do so . . . Physics is a so-called hard science but it balances between the hard reality and the rich possibilities of inexact mathematics. (2012, p. 27)

9 This transcript of the passages of Feynman's exposition was made by this author from the recorded video lecture. See also Feynman (1967, pp. 55–57).

So, in line with the authors' viewpoints cited above, one should not confuse economics with mathematics, since economics is not mathematics, and mathematics is not economics. Therefore, bad economic theories, that is, those not connected with the real world, will produce bad results, regardless of their mathematical contents. But sophisticated mathematics can also conceal bad theories by making them obscure and arcane, rendering theoretical inadequacy and ineffectiveness more difficult to recognize. Hence, although economics deals with social issues, there is nothing intrinsically blameworthy in the use of mathematics in economics. If the results and predictions made by economic theories are bad, this is a consequence of incorrect, inadequate, or inappropriate concepts formulated in mathematical language, which means that the mathematical tools are not to blame for such a failure. Physicists learned from the experience of previous generations that if a theory systematically produces wrong results and predictions, our understanding of the problem is at fault, i.e., there is something fundamentally wrong with our view of the phenomenon and its supposed theoretical description. This point, nevertheless, leads to another question. Could the ineffectiveness of the use of mathematics in economics have something to do with the fact that economics deals with social issues whereas physics deals with physical quantities? That hardly seems to be the case. Human beings, either individually or interacting collectively in society, belong to nature as much as falling bodies and atoms. So, the differences between the natural and social sciences are a consequence of the use of different analytical tools to understand different aspects of nature.
Therefore, inasmuch as the tools and concepts used to deal with classical mechanics are not the same as the ones applied to describe the atomic structure, from the viewpoint of physicists the real-world aspects of collective human action and interaction must similarly entail specific implementations.

1.6.2.6 An Econophysical Definition of Economy?

Based on the reasoning expounded in the paragraphs above, it may now be possible to tentatively propose a practical econophysical definition of an economy as 'an open system comprising the collective human interactions and interdependencies empirically observed in the dynamic environment created in societies by production, trade, accumulation, and distribution of value.' A similarly


practical econophysical definition of value would be 'the set of materials, services, and energy produced, transported, traded, and consumed by society.' Economics and econophysics are then the study of economies, or economic systems, which requires appropriate concepts and mathematical tools to adequately describe modern economies. From this viewpoint the terms economy and economic system have the same meaning and can be used interchangeably. The tentative definitions above of value and of an economy open the way for applying new theories to understand the economic phenomenon. In this respect modern physics and applied mathematics may come to help, particularly due to the developments of classical physics and nonlinear dynamics that occurred during the second half of the twentieth century, which led to the new theories briefly set out below.

1.6.3 Recent Theories

Classical physics did not stop developing after the appearance of modern physics. Some of its validity domains were established by both quantum mechanics and relativity theory in the early twentieth century, but afterwards some problems of classical physics turned out to have surprisingly new features when their nonlinear dynamic systems were more thoroughly studied. Together with branches of applied mathematics, like the mathematical theory of singularities and bifurcations, these new features gave rise to the theories of the three Cs: catastrophe, chaos, and complexity. Another important new development of classical physics is nonequilibrium thermodynamics. Although the origins of the three-Cs theories can be traced as far back as Poincaré at the end of the nineteenth century, they made their effective appearance approximately in the period from 1960 to 1990. They all presented new concepts and methods which are now seen as quite fitting for the study of economic phenomena.
In addition, despite the fact that classical physics provided essential inspiration for the establishment of neoclassical economics at the end of the nineteenth century (Mirowski, 1989), mainstream economics seems to have missed these new developments during the late twentieth and early twenty-first centuries, most likely because it remained stuck with the equilibrium, comparative statics, and linear systems paradigms for analyzing economic systems. There were, nevertheless, some attempts to incorporate nonlinear dynamics ideas, particularly chaos theory, into economic analysis, but they seem to have been mostly isolated initiatives, only slightly touching mainstream economics, and, therefore, incapable of changing the prevailing twentieth century neoclassical economics paradigms.


In addition, as we shall see in the next chapters, several of those initiatives suffered from the same limitations that hampered the development of mainstream economics, in the sense that they remained basically theoretical, having rarely motivated, or generated, empirically verifiable studies capable of actually testing the proposed models with real-world economic data in order to generate results that in turn would be able to change, by improving or rejecting, those theories. As the concepts of these new developments of classical physics and nonlinear dynamics are important to the discussions of the next chapters, their most basic ideas will be very briefly presented below.

1.6.3.1 Catastrophe Theory

The theory of catastrophes is basically a combination of the mathematical theory of singularities with its applications in the study of a great variety of different processes and phenomena in all areas of science. Singularities are points where a certain mathematical object is not defined or where it behaves badly. For instance, the coordinate system of geographical points on the surface of the Earth has singular points at both the South and North Poles, because at the South Pole the concepts of south, east, and west become undefined; similarly, at the North Pole there is no north, east, or west, but only south. Since describing our world mathematically requires a delicate interaction between continuous and discontinuous, or discrete, phenomena, the importance of singularities comes from their ability to describe how discrete properties can appear from continuous ones, inasmuch as most interesting phenomena in nature involve discontinuities. In this respect it is worth citing the late Soviet and Russian mathematician Vladimir I. Arnold (1937–2010): Singularities, bifurcations and catastrophes are different terms for describing the emergence of discrete structures from smooth, continuous ones . . .
The word bifurcation means forking and is used in a broad sense for designating all sorts of qualitative reorganizations and metamorphoses of various entities resulting from a change of the parameters on which they depend. Catastrophes are abrupt changes arising as a sudden response of a system to a smooth change in external conditions. (1986, pp. vii, 2)

Hence, as pointed out by Saunders (1980), when catastrophe theory is applied to systems whose inner workings are unknown, it deals with discontinuous properties directly, without referring to any specific underlying mechanism. This makes catastrophe theory well suited for problems in which the only reliable observations are of the discontinuities themselves, rendering it capable of predicting the qualitative behavior of systems without knowledge of their governing differential equations or their solutions. Both Saunders (1980) and Arnold (1986) provide readable

1.6 Econophysics and the Empirical World


introductions to catastrophe theory with several examples of applications to a wide range of fields.

1.6.3.2 Chaos Theory

Chaotic dynamics, or chaos theory, arose as a consequence of the limitations of Newtonian mechanics in a different domain than the one that led to the appearance of quantum mechanics. The classical view of the world was of a machine governed by the set of equations of motion discovered by Newton, whose solution would allow the exact prediction of the future state of a system described by these equations, given a precise knowledge of its present state and of all relevant forces. Such dynamic systems are called deterministic because their evolution is entirely determined by their initial conditions. For instance, the weather can be described by a set of differential equations, so under the old Newtonian mechanistic viewpoint it was thought that, given information precise enough, it would be possible to predict the weather several months, or years, in advance. It therefore came as a great surprise when researchers realized this to be impossible. The difficulty lies in the fact that these equations have properties such that unless we are able to specify the initial conditions with infinite precision, we eventually lose the ability to predict the system's future behavior. Moreover, in many cases even with a good knowledge of the system's initial conditions its future behavior appears random, and this randomness is generated by simple deterministic systems with only a few components. This feature is fundamental: gathering more information does not make the randomness disappear. Systems exhibiting such features became known as chaotic because they combine elements of determinism, predictability, and unpredictability, even when they are discussed entirely within the domain of classical physics.
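This loss of predictability can be seen in a minimal numerical sketch (an illustration of the general idea, not an example from the book): the logistic map, a standard chaotic one-dimensional system, iterated from two initial conditions that differ by only one part in a million.

```python
# Minimal illustration of sensitivity to initial conditions (not from the book):
# the logistic map x_{n+1} = 4 x_n (1 - x_n), a standard chaotic system.

def logistic_orbit(x0, steps=50):
    """Iterate the logistic map with r = 4 and return the whole orbit."""
    orbit = [x0]
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.300000)  # reference orbit
b = logistic_orbit(0.300001)  # initial condition perturbed by 1e-6

early_gap = abs(a[5] - b[5])                                # still tiny
late_gap = max(abs(u - v) for u, v in zip(a[30:], b[30:]))  # of order one
print(early_gap, late_gap)
```

After a handful of iterations the two orbits still agree to several decimal places, but after a few dozen they are completely unrelated: the horizon of predictability has been crossed, even though the system is fully deterministic.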
This means that although the future was thought to be determined by the past, small uncertainties are so hugely amplified in chaotic systems that for all practical purposes they become unpredictable. Hence, chaotic systems are predictable in the short term, but unpredictable in the long term. Predictability is reduced in such a fundamental way that we can only talk about a horizon of predictability beyond which one can no longer predict the system's future behavior (Lighthill, 1986). This is the case of the weather, whose horizon of predictability is of about a week. In summary, the future of a chaotic system is only partially determined by the past, and this determination is restricted to within the horizon of predictability. Thus, the essence of chaos, that is, of systems exhibiting extreme sensitivity to initial conditions, lies in the fact that small changes in these conditions can entirely change the future outcome of the system, a feature popularly known as the butterfly effect. On the other hand, since the motion of dynamic systems can be represented as points following orbits in what is called a phase space, chaotic systems were


found to generate elegant geometrical patterns in this space, unlike the nonchaotic ones, which generate simple curves. So, in a sense, this result indicates that there is also order in chaos. The geometrical patterns generated by chaotic behavior were also found to have interesting properties. First, there are points in the phase space where the orbits seem to concentrate and others which they avoid, called respectively chaotic attractors, or strange attractors, and chaotic repellers. Second, it was found that the orbits of strange attractors are fractals, that is, patterns embedded so irregularly in regular space that their dimension is noninteger. In simple terms, the fractal dimension quantifies how "broken," or irregular, a distribution is, that is, how far it departs from regularity. Because fractals usually arise from power laws, which in turn seem to be ubiquitous in nature, being present from the galaxy distribution (Ribeiro and Miguelote, 1998) to the stock markets (Peters, 1994), power laws came to be seen as powerful indicators of the dynamic behavior of systems where the concepts above may apply, since the slope of a power law indicates the fractal dimension of the distribution. The history of how chaos theory appeared is fascinating, showing the interplay of different study areas and motivations, ranging from the meteorologist Edward Norton Lorenz's (1917–2008) first attempts at weather prediction using a small digital computer during the 1950s, until he realized this to be impossible in the long term and eventually proposed the butterfly effect (see Palmer, 2008), to the contributions of various physicists and mathematicians such as Andrey Kolmogorov (1903–1987), Jürgen Moser (1928–1999), Vladimir Arnold, and Benoit Mandelbrot, to name just a few among several others. Another issue closely related to both chaos and catastrophe theories is that science proceeds under the assumption that experiments are generally repeatable.
However, since infinite precision is impossible, no experiment can be repeated under exactly the same conditions in which it was previously performed. So, what science truly expects is that if an experiment is repeated under approximately the same conditions we will obtain approximately the same results. This property is known as structural stability. The issue here is that the system under study must be resistant to perturbations of the conditions of the experiment. Mathematically speaking, structural stability requires that a dynamic system does not change its qualitative behavior and nature if it suffers small perturbations. If it does, the system is said to be structurally unstable. This concept plays an important role in dynamic systems, especially nonlinear ones, and shall be revisited in Section 6.1 in the context of a specific model.

1.6.3.3 Complexity

Complex systems theory, or simply complexity, is basically a conceptual framework which proposes that science is made of hierarchical levels where each level has


its own fundamental laws. Such a proposition challenges the reductionist working hypothesis that chemistry and biology can in principle be completely reduced to physics, that is, to the laws of quantum physics. So, complexity does not endorse the reductionist claim that all animate and inanimate matter is ruled by the same set of fundamental laws, uncovered through research on fundamental problems lying at the frontiers of science. In a well-known article, P. W. Anderson, the 1977 Nobel Physics laureate, wrote:

The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science, much less to those of society. (1972, p. 393)

He then defined the fundamental concept of emergence as follows:

The behaviour of large and complex aggregates of elementary particles . . . is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviours requires research . . . as fundamental in its nature as any other . . . At each stage entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry. (Anderson, 1972, p. 393)

Finally, after mentioning Karl Marx's proposition that quantitative differences become qualitative ones, Anderson reasoned that a dialog in Paris in the 1920s sums this up even more clearly: "Fitzgerald: The rich are different from us. Hemingway: Yes, they have more money" (1972, p. 396).

From the above it is clear that the central concept underlying a complex dynamic system is that it is made of a large number of material objects whose components reach such a high degree of relatedness that genuinely novel properties and processes may emerge, properties which are not reducible to the material attributes of the components. Hence, emergence can be expressed by the thesis that the whole is more than the sum of its parts. Closely related to the concept of emergence is that of downward causation, whose thesis is that the whole determines the behavior of its parts (see also Ellis, 2015). Complex systems are made of interacting objects, usually called agents, linked together by direct interactions or through subgroups, forming a complex network. But a complex system is made of parts that do more than just interact: they are also interdependent, that is, they must be cooperative; one cannot separate the parts and still have the same properties. So, it is the nature of the interactions of the agents within the network that defines the complex system. To illustrate, complex agents can be cars in a traffic jam, pedestrians in a crowd, firms in an economy, banks in a financial system, people in society, computers on


the Internet, electricity transmission nodes in a power grid, etc. (Strogatz, 2001). All these systems have emergent properties which only appear in a large aggregate of agents, being absent at the individual agent level. So, they cannot be derived from the microscopic dynamics, that is, it is not possible to link the large scales, temporal or spatial, with the corresponding small scales of the micro-dynamics (Paradisi et al., 2015). This viewpoint can be applied, for instance, to economic growth and innovation, as they can be considered key features of an ecosystem with complex interactions involving intangible elements like good education, financial status, labor cost, high-tech industry, energy availability, quality of life, etc., elements which cannot be measured in monetary figures. The emphasis is then on the complexity and diversity of a country's export basket, because this expresses the technologies and capabilities it is able to control, tap, and exploit. This complexity could then determine the level of national competitiveness as an emergent property in a dynamic way (Tacchella et al., 2012; Cristelli et al., 2013; Pietronero et al., 2013; Van Norden, 2016). The Internet is another good example of an emergent property in the complex system of interacting computers, because it only makes sense to talk about the Internet, or the World Wide Web, once there is a large number of interconnected computers collectively interacting with each other or in subgroups, forming a complex network of computers. Thus, the Internet cannot be understood by looking at individual computers, nor can it be understood by means of the laws of electronic circuits or electromagnetism, although these are required for the understanding and construction of computers at the individual level.
Following the same reasoning, a society cannot be understood by only considering the behavior and psychology of individual human beings, however average, which means that there is indeed such a thing as society. Similarly, this conceptual framework tells us that an economy cannot be understood by simply reducing its behavior to that of a "representative agent" or "representative firm." It is the collective that matters, or, better stated, the properties of the collective. Not surprisingly, some economic research has already reached similar conclusions (see, e.g., Kirman, 1992, 2010). Each complexity level has its own unique properties and requires its own analytical tools. The transition from one complexity level to another requires different models, and each one needs to be placed in its own context of cause and effect. In addition, emergent phenomena usually appear in the absence of any kind of central controller or "invisible hand," showing a complicated mix of ordered and disordered behavior. Because local interactions between the components of an initially disordered system lead to the emergence of an overall coordination, complex systems are said to present self-organization. Besides, since complex


systems are typically open, in the sense that they influence and can be influenced by their surrounding environment, if a system responds to its environment by changing its behavior it is said to self-adapt, that is, it becomes an adaptive system. Finally, at the core of most real-world examples of complexity one can find competition for some kind of limited resource, such as energy, space, food, income, wealth, or economic value. This last remark will be very useful in the next chapters. The implications of complexity concepts for economics are fundamental. Complexity economics essentially proposes that the understanding of economic dynamics cannot be achieved by means of a "representative agent," because it builds from the assumption that the economy is not necessarily in equilibrium, which means that economic agents constantly change their strategies and actions in response to the environment created by their mutual interactions, interactions which can in principle be predatory, symbiotic, or a mixture of both. Such an approach cannot be an extension of the neoclassical economics viewpoint (Kirman, 2010), which emphasizes stability, permanence, and reversibility due to its basic equilibrium hypothesis, but instead sees an economy as formed by structures that constantly change and rearrange themselves, creating patterns which in turn lead the interacting elements to change and adapt for survival in a sort of ecology where time becomes important, as the created events are historical contingencies (Brian Arthur, 2014, and references therein). As we shall see below, this viewpoint is neatly connected to physical theories where systems in equilibrium are just a particular case of more general patterns, and not particularly interesting ones.

1.6.3.4 Thermodynamics of Nonequilibrium

Nonequilibrium thermodynamics is the study of open systems far from equilibrium that can still be described in terms of macroscopic thermodynamic variables.
This approach to thermodynamics was initiated by Ilya Prigogine (1917–2003), the 1977 Nobel Chemistry laureate, who started from the observation that an open system subjected to a temperature gradient can reach a nonequilibrium state which may be a source of order. This is, for instance, the case when water is contained between a hot plate below and a cold plate above, leading to the appearance of a macroscopically ordered structure of fluid patterns due to convection flows, called Bénard cells. To put the background theory in simple terms, this observation led to the realization that systems subjected to a continuous flow of energy can reach a far-from-equilibrium stable thermodynamic state, and this stable state endows them with structures. Thus, nonequilibrium and, particularly, far-from-equilibrium systems are a source of order as they evolve toward coherent behavior. These structures are radically different from equilibrium structures and can only be maintained in


far-from-equilibrium conditions by means of a sufficient flow of energy and matter. Due to this feature they are called dissipative structures (Nicolis and Prigogine, 1977). Since Boltzmann's contributions to statistical thermodynamics in the nineteenth century it has been known that high entropy is a measure of disorder; and since dissipative structures are stabilized by exchanges of energy with the outside world and, for this reason, have low entropy, the conclusion is that dissipative structures are ordered and possibly entropy-reducing systems. In other words, dissipative structures are a manifestation of self-organization in nonequilibrium systems. The results of nonequilibrium thermodynamics are in fact far-reaching. Life, for instance, is made of dissipative structures, since cells require a continuous flow of energy and matter to keep them functioning (Morowitz, 1968). Hence, biological systems are in a far-from-equilibrium state, because if they reach equilibrium the energy and matter flows come to a halt, which means death. Another illustration is a city, which can only maintain itself as long as it has an inflow of fuel, food, and other commodities and an outflow of wastes and other products. This means that an economy, or its production system, can be seen as a dissipative structure where matter is changed into forms useful for human beings, kept in action by a constant energy flow in a manner akin to biological systems. This connection between nonequilibrium thermodynamics and economics is not trivial to formalize in terms of specific mathematical models, but some authors have faced the challenge and advanced this interface conceptually.
For instance, Ayres and Nair (1984) argued that the laws of the conservation of energy and of the increase of entropy constrain the processes by which raw materials are transformed into consumable goods, Georgescu-Roegen (1986) proposed that economic processes are entropic, and Pokrovski (1999, 2018) discussed the physical principles of economic growth from the viewpoint that economies are fundamentally based on the flow of energy and matter. Finally, nonequilibrium thermodynamics has aspects of all the theories above, since it is nonlinear and presents elements of complexity, chaos, structural stability, and catastrophe. A technical approach to these issues is given by Nicolis and Prigogine (1989). The summary above of some recent physical theories that are in principle applicable to economic systems forms an assortment of concepts and ideas capable of opening new perspectives for the understanding of economic phenomena and of providing avenues for fresh modeling approaches. Nevertheless, none of them will have any value if they do not produce new results and applications that can


now or in the future be connected to the real world and validated by empirical findings. The way forward is, therefore, not to rely on purely logical premises of how economies would, or should, work, nor to depend solely on mathematically constructed theories about idealized economies, but to piece together from empirical observations of the real world how economic systems actually work and to build mathematical models, using both intuition and ingenuity, that are capable of describing and predicting real-world economic phenomena.

2 Measuring the Income Distribution

This chapter presents essential tools required for the description and measurement of the individual income distribution. It starts by defining the most basic variables capable of characterizing the income distribution, discusses their boundary conditions, and then moves to the most well-known inequality measures that can be derived from the distribution: the Lorenz curve (Lorenz, 1905) and the Gini coefficient (Gini, 1912, 1914, 1921, 1955; Ceriani and Verme, 2012). Several specific distribution functions that have already been used to characterize the income distribution of various countries are presented, such as the power law, the exponential, and the Gompertz curve, as well as specific expressions of the inequality measures of those functions. Other less well-known inequality measures used to characterize the income distribution's skewness are also discussed, as is the application of some of these distributions to actual data.

2.1 Basic Definitions and Results

Let x be the individual income of a given population obtained by some sampling in a certain time period. This quantity is usually measured in terms of a specific currency unit, like, e.g., the dollar $ or the pound sterling £, but it can be made dimensionless by means of some normalization, such as dividing by the average income of a predefined period of time. If the sample used to derive x is large and dense, that is, if in practice it contains a large number of points close enough to each other, the measurements of the population's individual income can be assumed to be effectively smooth, which means that x may be considered a continuous variable. Throughout this work, we shall take this smoothing hypothesis as valid, which means assuming that it is possible to obtain a sample containing enough points so that this hypothesis holds in practice. Since income is usually expressed in monetary terms and received in some currency at some specific time period, x is


in fact time dependent. Thus one can write x = x(t), where the time t is usually expressed in units of months or years. In order to build effective tools for describing the income distribution, let us start by defining F(x) as the cumulative distribution function (CDF) of individual income, or simply cumulative distribution, which gives the probability that an individual receives an income less than or equal to x. This quantity may also be thought of as the proportion of the population that receives an income equal to or less than x. It follows from this definition that the complementary cumulative distribution function (CCDF) of the individual income F̄(x), or simply complementary distribution, gives the probability that an individual receives an income equal to or greater than x. From this it is clear that F(x) and F̄(x) are related as follows:

$$F(x) + \bar{F}(x) = 100. \tag{2.1}$$

This expression assumes that the maximum probability, or total proportion of the population, is equal to 100%. This normalization value for the maximum probability will be assumed from now on. In view of the smoothing hypothesis discussed above, both F(x) and F̄(x) are assumed to be continuous functions with continuous derivatives for all values of x. This means that:

$$\frac{dF(x)}{dx} = f(x), \qquad \frac{d\bar{F}(x)}{dx} = -f(x), \tag{2.2}$$

and

$$\int_0^{\infty} f(x)\, dx = 100. \tag{2.3}$$
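These normalization conventions can be checked numerically. The sketch below (an illustration, not the book's code) uses an exponential density normalized to 100, one of the income distribution functions presented later in this chapter; the average income value chosen is an arbitrary assumption.

```python
# Numerical check of the normalization conventions of Eqs. (2.1)-(2.3),
# using an exponential stand-in for the income PDF.
import math

AVG = 2.0  # average income <x> in generic currency units (an assumption)

def f(x):
    """PDF of individual income, normalized so that its integral equals 100."""
    return (100.0 / AVG) * math.exp(-x / AVG)

def F(x):
    """CDF: probability (in %) of receiving income less than or equal to x."""
    return 100.0 * (1.0 - math.exp(-x / AVG))

def F_bar(x):
    """CCDF: probability (in %) of receiving income equal to or greater than x."""
    return 100.0 * math.exp(-x / AVG)

# Eq. (2.1): F(x) + F_bar(x) = 100 at any income level x.
check_sum = F(3.0) + F_bar(3.0)
print(round(check_sum, 6))

# Eq. (2.3): midpoint-rule integration of f from 0 up to a large cutoff
# (a finite proxy for the idealized infinite upper limit).
dx, total, x = 1e-3, 0.0, 0.0
while x < 50.0 * AVG:
    total += f(x + dx / 2.0) * dx
    x += dx
print(round(total, 3))
```

Both checks return values indistinguishable from 100 up to discretization error, confirming that the CDF, CCDF, and PDF are mutually consistent under the percentage normalization.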

Here f(x) is the probability distribution function (PDF) of individual income, defined such that f(x) dx is the fraction of individuals with income between x and x + dx. This function is also known as the probability density or probability income distribution.¹ Both F(x) and F̄(x) are dimensionless quantities, but to avoid numerical mistakes (see next chapter), it is convenient to consider their dimensions as being of percentage, that is, [F(x)] = [F̄(x)] = %. Strictly speaking, quantities having "dimension of percentage" are dimensionless, but this approach aims at preparing

¹ The infinite income used in the integration above is a mathematical idealization to represent the limit of a very large number. In physical or economic realities there is no infinity of anything, as it is nowhere to be found, experienced, or observed. As pointed out by Ellis et al. (2018), infinity is an entity whose very nature is unattainable. Infinity is not a very big number, but no number at all. A similar situation occurs with zero, as it does not exist in the physical world and, by extension, in economic reality, but it can be a mathematical idealization to represent a very small number. "There is a duality between zero and infinity, expressed in the elementary identity 1/0 = ∞. If one side of the duality does not occur in nature, also the other side ought not to" (Ellis et al., 2018, p. 770).


for the treatment to be presented in Section 3.4.3. With these considerations, the functions above become in fact dimensionless if we write them as follows:

$$\left[\frac{F(x)}{100}\right] = \left[\frac{\bar{F}(x)}{100}\right] = 1. \tag{2.4}$$

Let us now make a brief stopover for a notational definition. Considering that the discussion here aims to be independent of any actual currency, let us adopt a generic currency symbol different from the ones currently in usage worldwide. This symbol can, of course, be replaced by any actual currency symbol such as $ or £ if the income data under analysis so requires. The reason for the introduction of this notation is to avoid attaching the whole discussion of this book to any particular national currency. As a consequence of this choice, the generic currency unit is written with the currency symbol placed after its respective numerical value, following the usual convention in physics of placing the dimensional symbol after the number. Returning to the previous point, the dimension of the PDF f(x) depends on whether x is defined as a currency value. If x has currency dimension, the probability density is dimensionally given as percentage per currency unit. Otherwise, if x is defined as a currency value divided by a normalizing average currency value, then [x] = 1 and [f(x)] = %. Eqs. (2.2) lead to the following straightforward results:

$$F(x) - F(0) = \int_0^x f(w)\, dw, \tag{2.5}$$

$$\bar{F}(x) - \bar{F}(\infty) = \int_x^{\infty} f(w)\, dw. \tag{2.6}$$

Although it is often the case that real samples have a non-negligible number of individuals who earn nothing, zero income values do not have a weight in the income distribution function, and, hence, it is assumed that those results are of a transitional nature and ought to be dismissed. Similarly, only very few individuals are extremely rich, which means that their probabilities tend to zero. These are, however, limiting cases and should only be considered as true within the uncertainties of the measurements. Therefore, following this reasoning, it is reasonable to adopt the approximate boundary conditions:

$$F(0) = \bar{F}(\infty) \cong 0, \qquad F(\infty) = \bar{F}(0) \cong 100. \tag{2.7}$$

Clearly both F(x) and F̄(x) run from 0 to 100, but in opposite directions as x goes from 0 to ∞. Considering the boundary conditions above together with the definition (2.2), we can write the normalization (2.3) as:

$$\int_0^{100} dF = -\int_{100}^{0} d\bar{F} = \int_0^{\infty} f(x)\, dx = 100. \tag{2.8}$$

The first-moment distribution function of personal income, or simply first-moment distribution, F1(x) is defined as:

$$F_1(x) = 100\, \frac{\int_0^x w f(w)\, dw}{\int_0^{\infty} w f(w)\, dw} = \frac{1}{\langle x \rangle} \int_0^x w f(w)\, dw, \tag{2.9}$$

where ⟨x⟩ is the average income of the sampled population, defined as:

$$\langle x \rangle = \frac{\int_0^{\infty} x f(x)\, dx}{\int_0^{\infty} f(x)\, dx} = \frac{1}{100} \int_0^{\infty} x f(x)\, dx. \tag{2.10}$$

The complementary first-moment distribution straightforwardly follows from the definition (2.9) above, yielding:

$$\bar{F}_1(x) = \frac{1}{\langle x \rangle} \int_x^{\infty} w f(w)\, dw. \tag{2.11}$$

If x has currency dimension, ⟨x⟩ will also be given in terms of that currency. Both the first-moment distribution and its complement have dimension of percentage. The chosen normalization for ⟨x⟩ implies that F̄1(∞) = F1(0) = 0 and F̄1(0) = F1(∞) = 100, a range conveniently set for use by one of the most common tools adopted to discuss income inequality, the Lorenz curve. This is a two-dimensional curve placed in a Cartesian coordinate system whose horizontal coordinate represents the fraction of the sampled population having income below x and whose vertical coordinate gives the income received by this fraction as a share of the total income of the population. In other words, the x-axis is the proportion of individuals having income less than or equal to x, whereas the y-axis is the proportional share of total income of individuals having income less than or equal to x. From this definition it is therefore clear that the cumulative income distribution defined by Eq. (2.5), subject to the boundary condition (2.7), defines the x-axis of the Lorenz curve:

$$F(x) = \int_0^x f(w)\, dw, \tag{2.12}$$

whereas the y-axis of the Lorenz curve is defined by Eq. (2.9). Fig. 2.1 shows a plot of F1 vs. F where the function F1 = F1 [F(x)] is the Lorenz curve.
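For a concrete picture, the sketch below (an illustration, not from the book) evaluates a few points of the Lorenz curve for the exponential distribution, for which both F(x), Eq. (2.12), and F1(x), Eq. (2.9), have closed forms under the normalization to 100.

```python
# Points of the Lorenz curve for an exponential income distribution (a sketch):
# with f(x) = (100/<x>) exp(-x/<x>) and u = x/<x>, Eqs. (2.9) and (2.12) give
# F(x) = 100 (1 - e^{-u}) and F1(x) = 100 (1 - (1 + u) e^{-u}).
import math

def lorenz_point(x, avg=1.0):
    """Return (F, F1) in percent for income level x."""
    u = x / avg
    F = 100.0 * (1.0 - math.exp(-u))               # population share (x-axis)
    F1 = 100.0 * (1.0 - (1.0 + u) * math.exp(-u))  # income share (y-axis)
    return F, F1

points = [lorenz_point(x) for x in (0.5, 1.0, 2.0, 4.0)]
for F, F1 in points:
    print(f"F = {F:6.2f}%   F1 = {F1:6.2f}%")
```

Every computed point satisfies F1 ≤ F, i.e., the curve lies below the egalitarian line, and as x grows both coordinates approach 100, tracing the convex shape shown in Fig. 2.1.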


Figure 2.1 Graph of the first-moment income distribution F1(x) versus the cumulative income distribution F(x). Arrows indicate the Lorenz curve and the line of perfect equality. The individual income x is a parameter of the Lorenz curve, varying from 0 to ∞, but implicit in the plot, whereas both F1 and F are proportional quantities representing percentages that run from 0 to 100. The area between the line of perfect equality and the Lorenz curve is represented by A, whereas B is the area of the graph below the Lorenz curve. The total area of the box is 10⁴. The point F = k is where the diagonal line F1 = 100 − F crosses the Lorenz curve.

The line of perfect equality, or egalitarian line, is the particular case where F1 = F for all values of x. So, along this line a certain share of the population receives the same share of the total income. For instance, the midpoint of the egalitarian line means that 50% of the population receives 50% of the total income. But, if the Lorenz curve is not egalitarian, then 50% of the population receives less than 50% of the total income. The other limiting case is perfect inequality. This happens when F1 = 0 for F < 100, jumping to F1 = 100 when F = 100. In other words, F1 = 0 for all values of x but the maximum one (theoretically when x = ∞), where it jumps from zero to 100. In practical terms this means that only one individual, or a single group of individuals, receives the whole income and everybody else receives nothing.


It is straightforward to derive the following expressions from the equations above:

$$\frac{dF_1}{dF} = \frac{x}{\langle x \rangle}, \tag{2.13}$$

and

$$\frac{d^2 F_1}{dF^2} = \frac{1}{\langle x \rangle f(x)}. \tag{2.14}$$

Since income is positively defined, Eqs. (2.13) and (2.14) are also positive and, hence, these equations respectively tell us that the Lorenz curve increases monotonically and is convex to the F-axis, meaning that F1 ≤ F. In addition, at the point where x = ⟨x⟩ the tangent of the Lorenz curve has the same slope as the egalitarian line, i.e., they are parallel. This means that at this point the Lorenz curve is at its maximum distance from the line of perfect equality. It should be noted that one can also define the Lorenz curve by plotting F̄1 vs. F̄ instead of F1 vs. F, and by doing this the Lorenz curve becomes concave rather than convex (Eliazar, 2015a, 2015b). However, the convex Lorenz curve shown in Fig. 2.1 is its most common representation and shall be adopted throughout this book. Fig. 2.1 shows clearly that the egalitarian line divides the Cartesian box enclosing the Lorenz curve into two parts having equal areas. Let A be the area limited by the egalitarian line and the Lorenz curve and B the area below the Lorenz curve. The Gini coefficient, or Gini index, G can be very intuitively defined as a relationship between these two areas:

$$G \equiv \frac{A}{A+B}. \tag{2.15}$$

So, if the Lorenz curve coincides with the egalitarian line, then A = 0, which implies that G = 0. However, in the case of perfect inequality, B = 0 and then G = 1. Hence, the Gini coefficient is in fact a dimensionless measure of inequality, that is, [G] = 1, providing a very useful tool to characterize the income distribution of a population. Fig. 2.1 clearly shows that A = (10⁴/2) − B, which means, considering Eq. (2.15), that G = 1 − 2 × 10⁻⁴ B. This allows us to write the general expression for the Gini coefficient in terms of the individual income as:

$$G = 1 - 2 \times 10^{-4} \int_0^{100} F_1\, dF = 1 - 2 \times 10^{-4} \int_0^{\infty} F_1(x) f(x)\, dx. \tag{2.16}$$

Note that the factor of 10⁴ comes as a consequence of the normalization chosen here, which is absent in the usual way of representing the Lorenz curve, where it is placed inside a unit square box (Kakwani, 1980, p. 30).

62

Measuring the Income Distribution

Albeit being the most famous and widely used, the Gini coefficient is not the only inequality measure that can be derived from the Lorenz curve. The k-index, also known as Kolkata index, recently introduced by econophysicists as a new inequality measure, is defined as the fraction k of people who possess (100 − k) fraction of total income (Ghosh et al., 2014; Inoue et al., 2015). In the present context both fractions are given in terms of percentages. The k-index is given by the coordinate value k in the x-axis in Fig. 2.1 where the diagonal orthogonal to the egalitarian line cuts the Lorenz curve. The k-index can sometimes be more useful in terms of inequality calculations than the Gini coefficient because the latter requires information of the complete income distribution data set, whereas the former needs only accurate information of the distribution around the middle range of the distribution. On this, it is relevant to consider the claim made by Chatterjee et al. (2016, 2017) of having found an empirically based linear relationship between k and G, valid up to a certain Gini coefficient value, that, after suitable renormalization, reads as: k = 50 + k1 G, (0 < G 0.70, k1 = 36.5 ± 0.5).

(2.17)

One should remark that both the Gini coefficient and the k-index are measures of inequality that can be applied to quantities unequally distributed other than income (Ghosh et al., 2014). The Lorenz curve can also be used to geometrically define the Pietra index, given by twice the area of the largest triangle inscribed in the Lorenz curve and the egalitarian line (Lee, 1999). Alternatively, it is a measure of the maximum distance between the Lorenz curve and the line of perfect equality (Eliazar and Sokolov, 2010; Eliazar, 2015b). The Gini and Pietra indices can be seen as just two different inequality measures arising from different geometrical loci provided by the Lorenz curve. However, Eliazar and Sokolov (2010) argued that in terms of random variables and populations, rather than samples, the Gini coefficient provides the difference between two randomly chosen members of the population, whereas the Pietra index gives the deviation of a randomly chosen member of the population to the population mean. So, from a physics viewpoint, the Gini index measures particle– particle interactions, whereas the Pietra index measures particle–mean fluctuations. This feature may explain why the Pietra index is also known as the Robin Hood index, because it gives the proportion of income that has to be transferred from those above the mean to those below the mean in order to achieve an egalitarian distribution (de Maio, 2007; Aoyama et al., 2010, section 2.4.1). Sarabia and Jord´a (2014) provided several explicit expressions of the Pietra index for the generalized function for the size distribution of income as proposed by McDonald (1984). Inequality measures based on coefficients which summarize in a single number the inequality of the distribution, also called synthetic indices due to this property,

2.2 Two-Components Distribution

63

can also be defined without being based on the Lorenz curve. This is the case of the Theil index which is based on the notion of the entropy of income shares (Kakwani, 1980, p. 88). Sarabia et al. (2016) presented several explicit expressions of the Theil index for various income distributions. Another alternative way of visualizing inequality are the hill curves, introduced under the claim that they provide detailed scans of the gaps between rich and poor (Eliazar, 2016). Other Lorenz- and non-Lorenz-based inequality measures are discussed in Kakwani (1980, ch. 5) and Arnold (2015, ch. 4). De Maio (2007) provided a short and useful glossary of several inequality measures. However, from the viewpoint of the focus of this book, the Gini and k indices will be enough for our purposes here. 2.2 Two-Components Distribution It is often the case that the income distribution of real samples is better characterized not by one, but two different functions at well-defined income range domains. This is so because in several cases it is preferable to do that than to resort to a single, often, but not always, more complex function that usually has several parameters required for the description of the whole population. Below the general results of the previous section for the case of distributions characterized by two different functions at different income range domains will be presented. Let xt be the transition income value that limits two income range domains, a be the index denoting the income domain defined in the range 0 ≤ x < xt , and b the one in the range xt ≤ x ≤ ∞. The defining expressions of the previous section can be rewritten as:

a

a

F (x) + F (x) = 100, b b F (x) + F (x) = 100,

xt

f (x) dx + a

0

⎧ x ⎪ a ⎪ f a (w)dw, ⎨ F (x) = 0 xt b ⎪ a ⎪ f (w)dw + ⎩ F (x) = 0

∞

(0 ≤ x < xt ), (xt ≤ x ≤ ∞),

f b (x) dx = 100,

(2.18)

(2.19)

xt

(0 ≤ x < xt ), x

f b (w)dw, (xt ≤ x ≤ ∞).

(2.20)

xt

The boundary conditions (2.7) become

F (0) = F (∞) ∼ = 0, b a F (∞) = F (0) ∼ = 100, a

b

(2.21)

64

Measuring the Income Distribution

and Eqs. (2.9) and (2.10) for the average income and first-moment distribution can be written as: xt ∞ 1 w f a (w) dw + w f b (w) dw , (2.22) x = 100 0 xt ⎧ x 1 a ⎪ ⎪ w f a (w) dw, (0 ≤ x < xt ), ⎪ F1 (x) = ⎪ x 0 ⎨ xt x ⎪ ⎪ 1 b ⎪ a b ⎪ w f (w) dw + w f (w) dw , (xt ≤ x < ∞). ⎩ F1 (x) = x 0 xt (2.23) From the results above the Gini coefficient yields xt a a −4 F1 (x)f (x) dx + G = 1 − 2 × 10 0

∞

xt

b

b

F1 (x)f (x) dx .

(2.24)

All modern societies have a class of individuals, generically known as uppermiddle class, whose income varies from the typical values received by the average middle class to the ones received by the rich, in addition to being numerous enough to appear in most income data samples. Then, the income values of the upper-middle class are assumed to provide enough intermediate values so that there should be a continuous transition between domains a and b, however speedy this transition may be. Therefore, under this reasoning the following constraint equation is assumed to be valid a

b

F (xt ) = F (xt ).

(2.25)

The results of this section and the previous one provide a minimum set of tools capable of analyzing income distribution. As mentioned above, they have been advanced over a century ago, but to this day they are by far the most widely adopted tools for discussing income inequality. Other measures have been proposed, but they are mostly variations of the Lorenz curve and Gini coefficient and their uses are basically confined to special cases (see Kakwani, 1980; Kleiber and Kotz, 2003; Arnold, 2015). 2.3 Income Distribution Functions The previous results indicate that if one of the three main functions listed above, namely F(x), F (x) or f (x), can be determined from the income data, then one is able to fully characterize the income inequality of a population. This is exactly what Vilfredo Pareto did over a century ago when he reached the result that the complementary distribution F (x) for the richest people of several countries follows

2.3 Income Distribution Functions

65

a power law (Pareto, 1897), a result that has been verified over and over again since then for different countries and at different times (see Kakwani, 1980; Kleiber and Kotz, 2003; Chatterjee et al., 2005; Moura and Ribeiro, 2009; Chakrabarti et al., 2013; Arnold, 2015; and references therein). Despite this, the problem of finding the shape of the income distribution function for those that do not belong to the rich class, the vast majority of people in modern societies, remained basically open, although solving it seemed to be a simple matter of data fitting. In other words, approaching the problem of the characterization of income inequality from the viewpoint of the tools discussed above reduces the problem to advancing several distribution functions and choosing the best fit. However, how does one decide what is the best fit if several different functions, sometimes applied to the same sample, produce equally reasonable results? If one tries to solve this difficulty statistically, that is, by resorting to some kind of supposedly “robust” fit which should hopefully tell us how the data “really” behaves, coupled with assumptions that are supposed to underpin the economic aspects of the problem at hand, at best one will end up with more than one equally fitting function, which does not solve at all the predicament. At worst, one will end up with functions of increasing complexity and an unreasonably high number of parameters, that being an end result of the required theoretical hypotheses suggested by the supposed underlying economic complexity of the problem. Physicists, however, usually do not follow such an approach, but start with the least possible preconceived hypotheses, fit functions as simple as possible to real data, and then try to see what kind of dynamics is suggested by the observed behavior. 
The basic point here is that solely using statistics to try to determine what is the “real” shape of the distribution ignores the fundamental epistemology adopted by physicists for centuries that what is real can be represented in multiple forms. So, trying to use some robust statistics to find out what is the “real” distribution is equivalent to trying to find the distribution’s ultimate shape, something that theoretical pluralism explicitly denies as being possible, since what is real can be represented in numerous ways. From this viewpoint, what is required is to find the most adequate representation of the underlying dynamics and, therefore, introducing economic assumptions into curves does not lead us very far in understanding the mechanism that generates income inequality. For that, one has to resort to dynamic equations. In addition, since polynomial functions having five or more parameters can fit any data, one should only carry out data fitting with functions having more than four parameters in problems whose underlying dynamics are already very well understood, that is, whose limits of validity are well known in advance by means of extensive empirical testing. At the time of writing this is very far from being the case for the income distribution problem.

66

Measuring the Income Distribution

So, following a physicist’s approach to the problem, what will be shown next is not a collection of every function that can be fitted to income data, but just a handful of income distribution functions, starting with the simplest then moving to others more complex, usually combined with a Pareto power-law tail, which are already known to offer reasonable data fits of different samples collected at different time periods. This collection of income distribution functions, whose validation is reasonably well anchored on several empirical data samples, allows us to discuss different scenarios for income distribution dynamics in later chapters. 2.3.1 Exponential-Pareto Distribution Dr˘agulescu and Yakovenko (2000, 2001a) found evidence that the portion of the complementary income distribution belonging to the less rich segment of the population (x < xt ) of some countries can be well represented by the exponential function. As the second segment of the distribution (x ≥ xt ) is known to follow the Pareto power law (Pareto, 1897, pp. 305–306, 312–313), we can then write a two-components distribution as: a F (x) = Ce−Bx , (0 ≤ x < xt ), (2.26) b −α F (x) = βx , (xt ≤ x < ∞). Here B, C, α, and β are positive parameters and α is called the Pareto index or Pareto exponent. The boundary and constraint conditions given by Eqs. (2.21) and (2.25) clearly show that C = 100 and β = 100(xt )α exp (−Bxt ). Substituting these results into Eqs. (2.26) we obtain the expressions for the exponential–Pareto distribution (EPD)2 ⎧ −Bx (0 ≤ x < xt ), ⎨ e ,

FEPD (x) = 100 × (2.27) α ⎩ e−Bxt xt , (xt ≤ x < ∞), x

⎧ −Bx , (0 ≤ x < xt ), ⎨ 1−e

(2.28) FEPD (x) = 100 × α ⎩ 1 − e−Bxt xt , (xt ≤ x < ∞). x The respective probability density is given by B e−Bx , (0 ≤ x < xt ), fEPD (x) = 100 × (2.29) α (xt )α e−Bxt (x)−(1+α), (xt ≤ x < ∞),

2 The purely exponential income distribution will be discussed in Sections 4.4 and 4.5.

2.3 Income Distribution Functions

67

whereas the general equation (2.22) for the average income turns out to be given by

xEPD

(1 + Bx)e−Bx = − B

xt

α −Bxt

+ α(xt ) e 0

x (1−α) 1−α

∞ .

(2.30)

xt

The second term on the right hand side is due to the Pareto power law and it will only converge if α > 1. Under this condition the expression above is reduced to xEPD =

xt e−Bxt (1 − e−Bxt ) + . (α − 1) B

(2.31)

The first-moment distribution may be written as:

100 F1,EPD (x) = BxEPD ⎧ −Bx ⎪ ⎪ 1 − (1 + Bx) e , (0 ≤ x < xt ), ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ Bα(xt )α × −Bxt e 1 − + Bx + ) (1 ⎪ t ⎪ ⎪ (1 − α) ⎪ ⎪

⎪ ⎪ ⎪ 1−α −Bx 1−α t ⎩ , (xt ≤ x < ∞). − xt e × x (2.32) Eqs. (2.28) and (2.32) allow us to obtain the parametric expressions for the Lorenz curve in this distribution, yielding

100 F1,EPD (x) = BxEPD ⎧ FEPD (x) FEPD (x) FEPD (x) ⎪ ⎪ ln 1 − , (0 ≤ x < xt ), + 1 − ⎪ ⎪ ⎪ 100 100 100 ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ Bα xt Bα xt −Bxt /α −Bxt e + 1 − 1 + Bxt − e × ⎪ (1 − α) (1 − α) ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ FEPD (x) (1−1/α) ⎪ ⎪ × 1− , (xt ≤ x < ∞). ⎩ 100 (2.33)

68

Measuring the Income Distribution

Finally, after some calculations the Gini coefficient can be expressed according to the equation: 1 2 Bxt 1 −2Bxt e . (2.34) + − GEPD = 1 − BxEPD 4 2(2α − 1) 4 These results show that the EPD is fully determined by three positive parameters, B, xt , and α, that can in principle be obtained by data fitting, provided that the condition α > 1 is also satisfied. The cutoff parameter xt can also be determined by data fitting, by linearizing the exponential segment of the EPD, given by the first expression in Eq. (2.27), and carrying out a linear fit to the points limited by those whose fitted straight line cross the y-axis as close as possible to ln(100) = 4.60517. 2.3.2 Gompertz–Pareto Distribution Moura and Ribeiro (2009) found evidence that in a population sample of highincome inequality the exponential does not provide a good fit to the lower segment of the income distribution. They found instead the data to be better fitted by a Gompertz curve (Gompertz, 1825; Winsor, 1932). Hence they proposed the following two-components distribution where the richest segment is also described by a Pareto power law (see also Chami Figueira et al., 2011) (A−Bx) a , ( 0 ≤ x < xt ), F (x) = ee (2.35) b −α F (x) = β x , (xt ≤ x ≤ ∞), where A, B, α, β and xt are positive constants. Eqs. (2.21) and (2.25) for the boundary and constraint conditions imply that, A = ln(ln 100) and β = α (xt ) exp exp (A − Bxt ) . Thus, the Gompertz–Pareto distribution (GPD) is given by the expressions: ⎧ (A−Bx) ⎪ ⎨ ee , (0 ≤ x < xt ), (2.36) FGPD (x) = (A−Bxt ) xt α ⎪ ⎩ ee , (xt ≤ x < ∞), x ⎧ (A−Bx) ⎪ ⎨ 100 − ee , (0 ≤ x < xt ),

FGPD (x) = (2.37) α ⎪ 100 − ee(A−Bxt ) xt , ⎩ (xt ≤ x < ∞), x where the parameter A was not replaced by its value, determined above by using the boundary conditions, in order to keep the notation somewhat simpler. The composite expression for the probability density is given by

2.3 Income Distribution Functions

fGPD (x) =

⎧ (A−Bx) ⎨ B e(A−Bx) ee , ⎩

(0 ≤ x < xt ),

(A−Bxt ) x −(1+α), α (xt ) ee α

69

(2.38)

(xt ≤ x < ∞),

and the average income may be written as: 1 α xt e(A−Bxt ) I(xt ) + . xGPD = e 100 (α − 1) Here I(x) is a special function defined by the integral equation x (A−Bw) w B e(A−Bw) ee dw. I(x) ≡

(2.39)

(2.40)

0

Note that the convergence of the GPD average income also requires that α > 1. Finally, the first-moment distribution and the Gini coefficient for the GPD are respectively written as follows ⎧ I(x) ⎪ ⎪ , (0 ≤ x < xt ), ⎪ ⎪ xGPD ⎪ ⎪ ⎨ ⎡ ⎤ (2.41) F1,GPD (x) = (A−Bxt ) ⎪ α e ⎪ α(x ) e ⎪ t ⎪ ⎦ x (1−α), (xt ≤ x < ∞), ⎪ 100 + ⎣ ⎪ ⎩ (1 − α)xGPD GGPD = 1 −

⎧ ⎪ 2 × 10−4 ⎨ xGPD

⎪ ⎩

B

(A−Bxt ) + 100 ee xGPD

xt

(A−Bx) I(x)e(A−Bx) ee dx

0

⎤⎫ ⎪ ⎬ α 2 xt e2e ⎣ ⎦ . + (α − 1)(1 − 2α) ⎪ ⎭ ⎡

(A−Bxt )

(2.42)

Hence, the GPD is fully determined by the parameters B, α and xt , provided that α > 1. However, there is the additional task of evaluating the function I(x) as it is needed to calculate the average income, a task which can be accomplished by numerical means. The cutoff parameter xt can also be determined by data fitting once the Gompertzian segment of the distribution is linearized and a linear fit is performed such that it is limited to the data points that make the fitted straight line cross the y-axis as close as possible to A = ln(ln 100) = 1.52718. This also provides a crude way of determining the uncertainty of the parameter. A further indication that the GPD should be a better fit for population samples having high values of the Gini coefficient can obtained by the following approximation. For large values of x, that is, when the income nears the Paretian segment,

70

Measuring the Income Distribution

Bx dominates over A, so the Gompertzian segment given by Eq. (2.35) can be (−Bx) a . Let us now define a new variable z = e−Bx such that written as F (x) ≈ ee large values of x imply small values for z. In this situation the following Taylor expansion holds: ez = 1 + z + z2 /2 + z3 /6 + . . . (z < 1).

(2.43)

Considering this result, we may write the approximation (A−Bx)

F (x) = ee a

≈ 1 + e−Bx (for Bx > A and e−Bx < 1),

(2.44)

which makes it clear that for high-income values the Gompertz curve behaves as the exponential. This suggests that there should be some value or range of values for the Gini coefficient above which the exponential no longer provides a good description for the lower segment of the income distribution.

2.3.3 Log-Normal–Pareto Distribution As mentioned above, the Pareto power law is unable to describe the income distribution data for those who do not belong to the rich group, a fact that seems to have been first observed by the French engineer Robert Gibrat (1904–1980), since Pareto himself did not have the data necessary for the extension of his original studies to those other than the rich. Gibrat then proposed the two-parameters log-normal function as capable of describing the income of the vast majority of the population, while retaining the power-law description for the very rich (Gibrat, 1931). This result has become known as Gibrat’s law. The log-normal distribution is basically a normal function whose variable x scales as ln x. That is, the log-normal is a normal distribution of the logarithm of x (Aitchison and Brown, 1957; Weisstein, n.d.; Kleiber and Kotz, 2003). So, the probability density of the normal function scaled that way becomes

2 ln x − μ 1 exp − , (2.45) N(ln x) = √ 2σ 2 σ 2π where the parameters μ and σ are respectively the mean value of the logarithmic 2 2 variable and its variance, that is, μ = ln x and σ = ln x − μ . A change of variables produces N(ln x) dx. x For equal probabilities under the normal and log-normal densities, incremental areas should also be equal, that is, N(ln x)d(ln x) = N (x)dx. This means that the probability density of the log-normal distribution is given by N(ln x) d(ln x) =

2.3 Income Distribution Functions

N (x) =

ln x − μ N(ln x) 1 exp − = √ x 2σ 2 xσ 2π

The respective cumulative distribution may be written as: ! " x ln x − μ 1 1 + erf , N (w)dw = √ 2 σ 2 0

71

2 .

(2.46)

(2.47)

√ #x −t 2 where erf(x) = 2/ π e dt is the error function (see, e.g., Gradshteyn and 0

Ryzhik, 2007). Therefore, in the present context the interpretation of Gibrat’s law is given by a two-components distribution encompassing the income data of the whole population as: ⎧ ! " ⎪ ⎨ F a (x) = 100 − 1 1 + erf ln x√− μ , (0 < x < xt ), 2 σ 2 (2.48) ⎪ ⎩ F b (x) = βx −α, (xt ≤ x < ∞), where α and β are positive parameters. Due to its logarithmic nature, this distribution is not defined at x = 0, so one cannot use the boundary condition (2.21) in Eq. (2.48). However, the constraint equation (2.25) can be applied, which leads to the following expression for the log-normal–Pareto distribution (LnPD) ⎧ ! " ln x − μ 1 ⎪ ⎪ ⎪ 1 + erf 100 − , (0 < x < xt ), √ ⎪ ⎨ 2 σ 2 FLnPD (x) = ! " ⎪ ⎪ ln xt − μ 1 xt α ⎪ ⎪ 1 + erf 100 − , (xt ≤ x < ∞). √ ⎩ 2 x σ 2 (2.49) √ The factor 1/ 2σ 2 is known as Gibrat index. The respective probability density may be written as: ⎧

2 ⎪ ln x − μ 1 ⎪ ⎪ , (0 < x < xt ), exp − √ ⎪ ⎪ ⎨ xσ 2π 2σ 2 fLnPD (x) = ⎪ ! " ⎪ ⎪ ln xt − μ α(xt )α 1 ⎪ ⎪ ⎩ 100 − 1 + erf , (xt ≤ x < ∞). √ 2 x (1+α) σ 2 (2.50) In view of Eqs. (2.18) the cumulative distribution FLnPD is straightforwardly obtained from Eq. (2.49). However, finding xLnPD , F1,LnPD and GLnPD becomes

72

Measuring the Income Distribution

clearly a numerical task, as according to Eqs. (2.22), (2.23), and (2.24) their expressions have integrals that most likely will require numerical evaluations. As discussed above, in the present context we have identified the Gibrat’s law with the LnPD. Nevertheless, it might be argued that this may not necessarily be the case, since under special circumstances the log-normal may behave as a power law. Indeed, if we take the logarithm of the first expression above we obtain the result

√ ln x − μ 2 a . (2.51) ln fLnPD (x) = − ln x − ln σ 2π − 2σ 2 So, if the variance σ 2 of the log-normal distribution is large enough, the last term on the right-hand side becomes very small, which means that in a log–log plot it may appear linear, generating a power-law-like behavior. As final remarks, due to its analytical format the log-normal does not allow us to find one of its two parameters by means of the boundary conditions (2.21). So, fitting the LnPD to the data requires in fact fitting the three parameters μ, σ , and α of the composite expression (2.49), as well as finding a way to obtain xt from the data. In other words, this a four-parameters fitting job. That, together with calculating the remaining quantities and building the Lorenz curve requires numerical tasks that seems only to be justified if using the log-normal to represent empirical income data produces significantly better fitting results than, say, the exponential, or if it is able to produce dynamic models of the income distribution that other functions are incapable of. As we shall see below, in both situations this is not always the case. 2.3.4 Gamma–Pareto Distribution According to evidence presented by Ferrero (2004, 2005), the income distribution of the less than rich can also be reasonably well fitted by the gamma distribution. This is a three-parameters function whose probability density reads as (see also Patriarca et al., 2004b): σ

f (x) =

AB (σ −1) −Bx e , x (σ )

where (σ ) is the gamma function, defined as ∞ t σ −1 e−t dt, (σ ) =

(2.52)

(2.53)

0

and A, σ , and B are parameters, respectively normalizing parameter, shape called σ parameter, and rate parameter. The term AB / (σ ) is then a normalization factor.

2.3 Income Distribution Functions

The cumulative distribution is given by x f (t) dt = F (x) = 0

where

Bx

γ (σ,Bx) =

A γ (σ,Bx), (σ )

t σ −1 e−t dt,

73

(2.54)

(2.55)

0

is the lower incomplete gamma function. The complementary distribution reads as ∞ A f (t) dt = F (x) = (σ,Bx), (2.56) (σ ) x where

(σ,Bx) =

∞

t σ −1 e−t dt,

(2.57)

Bx

is the upper incomplete gamma function (Abramowitz and Stegun, 1965; Luke, 1969a, 1969b; Gradshteyn and Ryzhik, 2007). As in the previous cases, the gamma distribution deviates from the data points when it reaches the rich segment (Ferrero, 2005), where they again can be fitted by a Pareto power law. Hence, the composite distribution formed by the gamma and Pareto distributions to represent the whole population may be written as: ⎧ ⎪ ⎨ F a (x) = A (σ,Bx), ( 0 ≤ x < xt ), (σ ) (2.58) ⎪ ⎩ F b (x) = β x −α , (xt ≤ x ≤ ∞), where α and β condition (2.21) β = 100(xt )α (PD) yields

are positive parameters. Since (σ,0) = (σ ), the boundary implies that A = 100 and the constraint condition (2.25) yields (σ,Bxt )/ (σ ) . Therefore, the gamma–Pareto distribution

⎧ (σ,Bx) ⎪ ⎪ , ⎪ ⎨ (σ ) FPD (x) = 100 × ⎪ (σ,Bxt ) xt α ⎪ ⎪ ⎩ , (σ ) x

(0 < x < xt ), (2.59) (xt ≤ x < ∞).

The respective probability density reads as ⎧ σ B ⎪ (σ −1) −Bx ⎪ ⎪ e , ⎨ (σ ) x fPD (x) = 100 × ⎪ ⎪ α (σ,Bxt ) ⎪ x −(1+α), ⎩ α(xt ) (σ )

(0 < x < xt ), (2.60) (xt ≤ x < ∞),

74

Measuring the Income Distribution

the cumulative income distribution yields ⎧ γ (σ,Bx) ⎪ ⎪ , ⎪ ⎨ (σ ) FPD (x) = 100 × ⎪ (σ,Bxt ) xt α ⎪ ⎪ , 1 − ⎩ (σ ) x

(0 < x < xt ), (2.61) (xt ≤ x < ∞),

the average income becomes xPD =

γ (σ +1,Bxt ) α xt (σ,Bxt ) + , B (σ ) (α − 1) (σ )

and the first-moment distribution turns out to be written as 100 F1,PD (x) = B (σ )xPD ⎧ ⎪ γ (σ +1,Bx), (0 ≤ x < xt ), ⎪ ⎪ ⎪ ⎪ ⎨ α × γ (σ +1,Bxt ) + α(xt ) B (σ,Bxt ) ⎪ ⎪ (1 − α) ⎪ ⎪ (1−α) ⎪ ⎩ − xt (1−α) , (xt ≤ x < ∞). × x

(2.62)

(2.63)

Writing the Gini coefficient GPD by substituting Eqs. (2.60), (2.62), and (2.63) into the definition (2.24) will produce a long expression whose integration is probably better performed numerically. This is also true for deriving the Lorenz curve using Eqs. (2.61) and (2.63). The results above show that the PD is fully determined by four parameters, B, σ , xt , and α. Nevertheless, some real data fitting suggests that in practice the gamma distribution requires a three-parameters fitting, as the normalization factor has to be determined as well (Bhattacharya et al., 2005). So, in addition to finding a way to determine xt from the data, as well as fitting α, using the PD may be in fact a five-parameters fitting job. Considering that for σ = 1 the PD reduces to the much simpler EPD, since γ (σ,0) = 0, (σ,0) = (σ ), γ (1,Bx) = 1 − e−Bx , and (1,Bx) = e−Bx , this means that on the subject of modeling the income distribution one really needs a strong dynamic case to justify the use of the PD instead of other, analytically much simpler, distributions, like the previously discussed EPD or the GPD, that require fewer parameters. 2.3.5 Tsallis Distribution Ferrero (2004, 2011) and Soares et al. (2016) presented evidence that the whole income data can be represented by means of the Tsallis q-functions, the q-exponential, and q-logarithm. These are defined as (Tsallis, 1994, 2009):

2.3 Income Distribution Functions

x (1−q) − 1 , 1−q 1/(1−q) ≡ 1 + (1 − q)x .

lnq x ≡ expq (x) = eq x

75

(2.64) (2.65)

For q = 1 both functions become the usual logarithm and exponential: e1 x = ex ;

ln1 x = ln x.

Tsallis q-functions are then just the usual exponential and logarithmic functions deformed in such a way as to be useful in Tsallis’ theory of nonextensive statistical mechanics (Tsallis, 2009). From their definitions we have eq ( lnq x) = lnq (eq x ) = x.

(2.66)

In addition, lnq 1 = 0 for any q. So, if there exists a value x0 such that x/x0 = 1, then lnq (x/x0 ) = 0. Two other properties of the q-exponential useful in the present context are (Yamano, 2002): f (x) a af (x) eq , (2.67) = e 1−(1−q)/a

f (x) q eq d f (x) eq = . dx f (x)

(2.68)

The complementary distribution F (x) of personal income data plotted against the income x in a log–log scale behaves very similarly to eq −x for q > 1 also plotted in a log–log scale (Tsallis, 2009, fig. 3.4). Such a feature motivated the use of the Tsallis functions as representations of the income distribution. Besides, the decreasing q-exponential behaves as a power law for high-income values, that is, at the tail of the distribution (Ferrero, 2005; see below). So, these features suggested the following expression to represent the CCDF of individual income F (x) = A eq −Bx ,

(2.69)

where A and B are positive parameters. It is not difficult to show that the expression above becomes a power law for large values of x and tends to the exponential when x → 0. Indeed, using the definition (2.65) Eq. (2.69) can be rewritten as follows, " ! x −m , (2.70) F (x) = A 1 + x0 where m = 1/(q − 1) and 1/x0 = B(q − 1). So, for large values of x this expression reduces to F (x) ≈ A(x/x0 )−m . For small values of x, the Taylor

76

Measuring the Income Distribution

expansion of Eq. (2.70) produces the same first order term as the Taylor expansion of the exponential. Indeed: !

x F (x) = A 1 + x0

"−m

! ≈1−

" m x + · · · ≈ A e−(m/x0 )x . x0

(2.71)

Remembering the boundary condition F (0) = 100 one can straightforwardly conclude that A = 100. Therefore, the Tsallis distribution (TD) for the income of a population may be written as FTD (x) = 100 eq −Bx ,

(2.72)

whose q-logarithm yields the linear function lnq

FTD (x) = −B x. 100

(2.73)

Thus, the respective CDF becomes

FTD (x) = 100 1 − eq −Bx ,

(2.74)

Although the TD asymptotically approaches the EPD at both ends, it holds the important difference that at the intermediate level it may represent very different dynamics than those described by the exponential and the power law. The GPD is also partially incorporated in the TD, since the GPD is approximately the EPD, except for the segment of very low-income values (see p. 70 above). In addition, Eq. (2.73) clearly starts at the coordinate’s origin once we remember the properties of the q-logarithm outlined above, a fact that greatly simplifies the data fitting problem since this is reduced to finding only two parameters, q and B. However, that comes at the cost of a still undetermined dynamic behavior at the intermediate levels of the income distribution, dynamics which might be very complex and may require further parametrization for its characterization (Borges, 2004; Wilk and Włodarczyk, 2015; Soares et al., 2016). Considering Eq. (2.68), the corresponding probability density yields fTD (x) =

100 −Bx q 100 −qBx eq = e2−1/q . B B

(2.75)

It should be noted, however, that the normalization condition for the density above, as established in Eq. (2.8), is only satisfied for q > 1.

2.3 Income Distribution Functions

77

Let us now define the special function Jq (x) as given by the integral equation: x

q Jq (x) = t eq −Bt dt. (2.76) 0

This expression allows us to write the remaining functions of the TD. Indeed the average income, first-moment, and Gini coefficient may be written as: Jq (∞) , B Jq (x) F1, TD (x) = 100 , Jq (∞) ∞

q 2 Jq (x) eq −Bx dx. GTD = 1 − BJq (∞) 0 xTD =

(2.77) (2.78) (2.79)

As mentioned above, the TD is fully determined by only two parameters, B and q. Besides, since the TD approaches asymptotically the EPD at both ends and partially includes the GPD, it seems to provide an obvious analytical advantage over the others discussed in the previous sections, apart from the fact that the special function Jq (x) and the Gini coefficient probably require numerical integration. 2.3.6 κ-Generalized Distribution The Tsallis q-functions are not the only way of deforming the exponential and logarithmic functions in order to fit the whole income data range. Another proposal for doing this is by employing the κ-exponential and κ-logarithm, defined as (Kaniadakis, 2001):

$ 1/κ 1 + κ 2 x 2 + κx , (2.80) exp{κ} (x) = e{κ} x ≡ ln{κ} x ≡

x κ − x −κ . 2κ

(2.81)

For κ = 0 the above expressions reduce to the usual exponential and logarithm e{0} x = ex ,

ln{0} x = ln x.

Clearly e{κ}

ln{κ} x

= ln{κ} e{κ} x = x.

78

Measuring the Income Distribution

Further properties of these functions are written (Kaniadakis, 2001) as:

a ⎧ e{κ} ax = e{aκ} x , ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ e{−κ} x = e{κ} x , ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ e 0 = 1, {κ}

⎪ ln{κ} e{κ} 0 = ln{κ} 1 = 0, ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ln{−κ} x = ln{κ} x, ⎪ ⎪ ⎪ ⎪ ⎩ ln{κ} x a = a ln{aκ} (x).

(2.82)

For small values of x the following Taylor expansion holds e{κ} x = 1 + x +

x2 x3 + (1 + κ 2 ) + . . . ≈ ex . 2 3!

(2.83)

In addition, asymptotically we have

$$e_{\{\kappa\}}^{x} \underset{x\to\pm\infty}{\approx} |2\kappa x|^{\pm 1/|\kappa|}. \qquad (2.84)$$

So, as happens with the q-exponential, the κ-exponential also asymptotically approaches the ordinary exponential at x → 0 and a power law at x → ∞. Other properties of these functions can be found in Kaniadakis (2001) and Clementi et al. (2009). Reasoning as in the previous section, the κ-exponential suggests the following expression for the CCDF of personal income:

$$F(x) = A\,e_{\{\kappa\}}^{-Bx}, \qquad (2.85)$$

where the boundary conditions imply A = 100, reducing the fitting problem to the determination of two parameters, κ and B. Hence, the κ-generalized distribution (KGD) may be written as

$$F_{KGD}(x) = 100\,e_{\{\kappa\}}^{-Bx}, \qquad (2.86)$$

whose respective density yields

$$f_{KGD}(x) = \frac{100\,B\,e_{\{\kappa\}}^{-Bx}}{\sqrt{1+\kappa^{2}B^{2}x^{2}}}. \qquad (2.87)$$

As was done with the TD, Eq. (2.86) can be linearized,

$$\ln_{\{\kappa\}}\left[\frac{F_{KGD}(x)}{100}\right] = -Bx, \qquad (2.88)$$

and the parameters κ and B can be determined empirically by linear fitting.
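Equation (2.88) reduces the KGD fit to ordinary linear regression once κ is known; in practice one can scan κ for the value that makes the κ-logarithm of the CCDF straightest. A minimal sketch with synthetic data drawn from Eq. (2.86) itself (all parameter values are illustrative):

```python
import numpy as np

def exp_k(x, k):
    """kappa-exponential, Eq. (2.80)."""
    return (np.sqrt(1.0 + k**2 * x**2) + k * x) ** (1.0 / k)

def ln_k(x, k):
    """kappa-logarithm, Eq. (2.81), inverse of exp_k."""
    return (x**k - x**(-k)) / (2.0 * k)

kappa, B = 0.7, 0.05                      # illustrative "true" parameters
x = np.linspace(1.0, 200.0, 400)
F = 100.0 * exp_k(-B * x, kappa)          # CCDF of Eq. (2.86)

# Eq. (2.88): ln_k(F/100) = -B x is linear in x for the right kappa;
# scan kappa and keep the value giving the straightest line.
best = min(
    (1.0 - np.corrcoef(x, ln_k(F / 100.0, k))[0, 1] ** 2, k)
    for k in np.linspace(0.1, 1.5, 141)
)
k_fit = best[1]
B_fit = -np.polyfit(x, ln_k(F / 100.0, k_fit), 1)[0]
print(k_fit, B_fit)
```

With real data the correlation criterion would be replaced by a proper goodness-of-fit measure, but the linearization step is the same.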


Nevertheless, in a series of papers Clementi et al. (2007–2010, 2012a, 2012b) advanced a CCDF somewhat different from Eq. (2.86) and presented empirical evidence that the κ-functions can in fact be used to describe the whole income data of various countries. Thus, they defined the KGD as being given by the expression

$$\tilde{F}_{KGD}(x) = e_{\{\kappa\}}^{-\beta x^{\alpha}}. \qquad (2.89)$$

The respective PDF is written as

$$\tilde{f}_{KGD}(x) = \frac{\alpha\beta x^{\alpha-1}\, e_{\{\kappa\}}^{-\beta x^{\alpha}}}{\sqrt{1+\kappa^{2}\beta^{2}x^{2\alpha}}}, \qquad (2.90)$$

where κ, α, and β are positive parameters. It follows from this model that for small income values the CCDF defined in Eq. (2.89) behaves as a stretched exponential, $\tilde{F}_{KGD}(x) \approx e^{-\beta x^{\alpha}}$, whereas it becomes a power law at high incomes, $\tilde{F}_{KGD}(x) \approx (2\beta\kappa)^{-1/\kappa} x^{-\alpha/\kappa}$. The three parameters of this model can be reduced to two by employing the density normalization condition, so the fitting problem again reduces to the determination of only two parameters from the data. Empirical evidence concerning the suitability of this function as a good representation of income data was provided for Australia, the USA, Italy, Germany, and the United Kingdom (see below). In addition, expressions for the CDF, first-moment distribution, and Gini coefficient were obtained in terms of special functions (Clementi et al., 2009). It should be noted, however, that in their calculation the authors followed the PDF normalization $\int_0^\infty \tilde{f}(y)\,dy = 1$ instead of the one adopted in this book (Eq. 2.3). Finally, since both the q and κ functions originated from proposals generalizing statistical mechanics to systems observed as having power-law tails, where the Boltzmann–Gibbs approach is problematic (Kaniadakis, 2012), these two approaches are connected. That is, the κ-functions are linked to their respective Tsallis counterparts, defined by Eqs. (2.64) and (2.65), as follows (Kaniadakis, 2001):

$$\ln_{\{\kappa\}} x = \frac{1}{2}\left(\ln_{1+\kappa} x + \ln_{1-\kappa} x\right), \qquad (2.91)$$

$$e_{\{\kappa\}}^{x} = e_{1+\kappa}^{\,x + \left(1 - \sqrt{1+\kappa^{2}x^{2}}\right)/\kappa}. \qquad (2.92)$$

By being connected, these two approaches also share similar difficulties, especially the unclear dynamic behaviour at the intermediate levels. Nevertheless, in view of the fact that both the q and κ functions are capable of fitting the income data of different countries, this empirical result suggests that, contrary to what obsessively


neo-classical economists state, systems like the income distribution are most likely not in equilibrium. In fact, they are probably very far from it.

2.3.7 Other Distributions

The distribution functions presented in the previous sections by no means exhaust the reservoir of possible functions capable of fitting income data. As a matter of fact, several other functions have been proposed by different authors who claimed that they also provide good data fits. Nevertheless, they often require fitting several parameters and, unlike the exponential or the power law, do not have the advantage of simplicity. That makes the process of revealing the underlying income distribution dynamics by direct data fitting even more obscure and difficult, if not impossible, a fact that further impairs their usefulness. However, as a satisfactory theory of income distribution dynamics is still lacking, at the present stage of investigation these functions cannot yet be discarded. So, for completeness some of them will be briefly mentioned without going into details, but providing references for the interested reader.

The pre-econophysics approaches to income fitting can be grouped into three categories. The first considers the functional form describing the income distribution as the result of a stochastic process. The second proposes flexible analytical forms chosen only for their ability to produce a satisfactory fit to empirical data. The third obtains models from differential equations specifically designed to capture some regular features of observed income distributions (Clementi et al., 2010). The historical sketch of income modeling provided by Arnold (2015, ch. 1; see also Kleiber and Kotz, 2003, section 1.3) indicates, nevertheless, that the research emphasis of economists and statisticians on income distribution in the twentieth century basically followed the second approach above, that is, proposing dozens of new fitting functions and inequality measures and narrowing the discussion to their pros and cons (see also: Kakwani, 1980; Deaton, 1997). In other words, during the last century they basically tried to propose alternatives to the works of the pioneers of this area, Pareto, Lorenz, and Gini, but, fundamentally, were unable to supersede them.

Mandelbrot (1960) introduced a modification of the Pareto power law, called the weak Pareto law, aiming at representing the lower-income range, and advanced the Pareto–Lévy law as a special case, possibly applicable to income data. Kakwani (1980, pp. 20–29) described a family of distributions based on Pareto power laws (see also Aoyama et al., 2010, p. 35), as well as the Champernowne


distribution (Champernowne, 1953), functionally determined by four parameters. Income data have also been fitted using the Dagum, Singh–Maddala, and Weibull distributions, these being special cases of the four-parameter generalized beta distribution of the second kind. Clementi et al. (2012a) fitted the USA income data with the Dagum, Singh–Maddala, and κ-generalized distributions, concluding that the KGD provides a superior fit compared to the other two. Domma et al. (2018) provided further studies of the Dagum distribution by making its parameters directly interpretable in terms of income inequality, median, and poverty measures. Bose and Banerjee (2005, table 1) presented the connections of the generalized beta distribution to its various special cases, including the power law, gamma, and log-normal distributions. Sarabia et al. (2016) provided several tables containing explicit PDFs of the most common, and other less common, distributions used to fit income data, as well as their respective Theil indices. Other recent works have introduced new distributions or special combinations of already well-known functions under the claim that these combinations provide better fits (Maclachlan and Reith, 2008; Cerqueti and Ausloos, 2015; Bourguignon et al., 2016; Calderín-Ojeda et al., 2016; Guerra et al., 2017). Kleiber and Kotz (2003, chs. 6–7) wrote a very detailed monograph presenting the family of generalized beta distributions of the second kind, their properties and applications to income data, as well as various other less well-known distributions. Chotikapanich (2008) reprinted some influential papers on income distribution, including the works of Dagum and Singh–Maddala, followed by a survey of the applications of these and other functions plus some new contributions. The interested reader is especially referred to the last two references for further studies on distributions other than those discussed here.

2.4 Empirical Evidence

Trying to fit different distribution functions to people's income data of various countries and regions in different epochs is now a topic more than a century old, which means that the literature on this subject is huge. Nevertheless, our aim here is not to provide an exhaustive historical review of income data fitting, but to show a few, mostly recent, analyses made primarily by econophysicists whose results bring empirical support to the income distribution functions shown above and provide background material for the discussion of income models presented in the next chapters. Thus, the next sections will present a noncomprehensive set of results arising from the application of the distribution functions discussed in Section 2.3 to various income data sets.


2.4.1 Pareto Power Laws

Power laws are ubiquitous in nature. They can be found in physics, astronomy, biology, earth and planetary sciences, computer science, demography, and the social sciences (Ribeiro and Miguelote, 1998; Gabaix, 1999; Newman, 2005; Moura and Ribeiro, 2006; Pinto et al., 2012; Conde-Saavedra et al., 2015; Kumamoto and Kamihigashi, 2018; Arshad et al., 2018, 2019; and references therein). They can also be found in economics and finance, having fascinated economists of successive generations (Gabaix, 2009, 2016). Therefore, the obvious empirical starting point is the exponent of probably the most famous power law in economics, the Pareto index α (see Eq. 2.26). Pareto's original estimates for α in several countries and regions of his time fell in the range 1.13–1.89 (Pareto, 1897, p. 312; see also Kleiber and Kotz, 2003, p. 8). Arnold (2015, pp. 353–357) reported a large collection of empirical findings by several authors, using data obtained at different epochs, countries, and regions, too many to cite here, in the period from 1897 to 1976, resulting in the broader range 1.20 ≤ α ≤ 2.7. More recent estimates for the USA reviewed by Yakovenko and Rosser (2009) and Yakovenko (2016) indicate the interval 1.3 ≤ α ≤ 1.9, although Moura and Ribeiro (2009, 2013) pushed this range to a higher level, 2.11 ≤ α ≤ 3.75, in the case of Brazil from 1981 to 2009. Soriano-Hernández et al. (2016) obtained the interval 2.65 ≤ α ≤ 3.70 for Mexico between 1992 and 2008. Klaas et al. (2006) used the Forbes 400 lists during 1988–2003 to estimate the Pareto index as α = 1.49 on average. Clementi and Gallegati (2005a) measured α at different time spans for the USA, Germany, and the United Kingdom, finding the interval 1.63 ≤ α ≤ 5.76, although results having α ≥ 4 were the exception rather than the rule.
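Estimates like these are commonly obtained from the log–log plot of the empirical CCDF above some high-income cutoff, whose slope is −α. A minimal sketch with a synthetic Pareto sample (the sample size, random seed, and 5% cutoff are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)
alpha_true = 1.8                          # illustrative Pareto index
n = 100_000
x = rng.pareto(alpha_true, n) + 1.0       # classical Pareto: CCDF = x^{-alpha}, x >= 1

# Empirical CCDF restricted to the top 5% of the sample
tail = np.sort(x)[-n // 20:]
ccdf = np.arange(len(tail), 0, -1) / n    # P(X >= tail[i])

# The Pareto index is minus the slope of log CCDF vs. log income
alpha_hat = -np.polyfit(np.log(tail), np.log(ccdf), 1)[0]
print(alpha_hat)
```

The least-squares slope is a common but slightly biased estimator; maximum-likelihood (Hill-type) estimators are generally preferred, and the result can depend appreciably on the chosen cutoff, a point taken up again in the discussion of Atkinson's work.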
In a different analysis and time span, Drăgulescu and Yakovenko (2001b) obtained results for the Pareto exponent with income data of the United Kingdom lying in the range 2.0–2.3, whereas Clementi et al. (2006) calculated α = 2.3–2.5 for Australia and Italy around the last two decades of the last century. Souma (2001), Souma and Nirei (2005), and Fujiwara (2005) reported measures of the Pareto index for Japan generally between 1.5 and 2.5 during a time span of a little over a century up until the year 2000. Jagielski et al. (2016) studied the income and wealth distribution of the richest Norwegian individuals in the period 2010–2013 and concluded that α = 2.3 ± 0.4 for income and α = 1.5 ± 0.3 for wealth. Ferrero (2010) calculated α for Argentina in the period 2000–2009, noting that in 2002 the country went through an economic and financial collapse. During the crisis year of 2002 the Pareto index of Argentina had an abnormally large variation, oscillating from 1 to 8 in just one year, having stabilized afterwards at α ≈ 2. So, the previous wide variation could have been due to unreliable data or the crisis itself.


Atkinson (2017) presented a detailed study of the long-term evolution of the Pareto tail for the UK from 1799 to 2014. He noted that smaller values of α imply a steeper income gradient, that is, a rise in the Pareto index corresponds to a fall in income concentration. He used this observation to probe the degree of income differentiation at the higher ranks and concluded that during the nineteenth century the top incomes were more concentrated than today, as the Pareto index was lower then. The twentieth century showed more volatility: α rose from 1.46 in 1919 to 2.96 in 1979, indicating less concentration at the top incomes, a trend that was sharply reversed from 1979 on, with α reaching 1.75 in the UK at the end of the century. There was less volatility in the twenty-first century, with α varying approximately in the range 1.6–1.8. Atkinson (2017) noted that a major issue regarding the measurement of the Pareto index is the threshold above which the distribution is considered to behave as a power law. In his estimates he took the top 5% of incomes as belonging to the Pareto tail, but noted that the value of α may depend on the chosen cutoff. In any case, using his methodology to probe the period from 1918 to 2014 more thoroughly, his results showed that the Pareto index started this period at α ≈ 1.5, rose until about 1980, when its value was almost equal to 3, then declined sharply to reach about 1.6 in 2008, only to fluctuate slightly around this value until 2014. The same overall pattern, with few differences, appeared in the values of α even when the threshold was chosen to be 1%.

Fix (2018) advanced a model in which the power-law tail of the income distribution is created by firm hierarchy. In other words, firm hierarchy and its associated properties are responsible for generating the Pareto power law in the tail of income distributions.
The basic hypothesis came from the works of Simon (1957) and Lydall (1959), who developed models focused on the branching structure of firm hierarchies. The basic assumption is that human institutions are hierarchically organized, and that hierarchical rank plays a fundamental role in determining income. In Simon's (1957, p. 35) words, "the distribution of executive salaries is not unambiguously determined by economic forces, but is subject to modifications through social processes that determine the relevant norms." Fix (2018) tested the model using the hierarchical structure of the US private sector (not government) for 1992 to 2015. The Pareto index was obtained using 1% as the threshold above which the power-law tail of the distribution begins. The simulations and the empirical data both resulted in α ≈ 1.7. Oancea et al. (2018) presented a novel study of the income distribution of Romania using data from 2013 to 2014. The authors divided the data into income from labor and income from capital, resulting in a labor–capital split where the capital income in the Pareto upper tail yields α ∼ 1.44, whereas the labor income generates the higher Pareto index of α ∼ 2.53. The importance of this work is that


it seems to have been the first in the econophysics literature on income distribution to explicitly analyze the different contributions of capital and labor to total income. This study was motivated by previous results from other authors indicating that the upper tail of the income distribution is dominated by capital income (Piketty, 2014; Oancea et al., 2018, and references therein). In the case of Romania, the labor–capital split shows up in the studied time window by means of two very different Lorenz curves (Oancea et al., 2018, fig. 1), where the one for capital income has a much more pronounced convexity, indicating higher inequality. The labor–capital split will be discussed in detail in Chapter 3.

Although the studies reviewed above do not comprise an exhaustive list (see Richmond et al., 2006b, fig. 5.3 and table 5.3 at pp. 135, 154; and Chakrabarti et al., 2013, pp. 10–26, for further references and analyses), they seem representative enough to suggest the following interval within which the large majority of the empirical estimates of the Pareto index obtained from income data appear to lie:

$$1.2 \lesssim \alpha \lesssim 3.2. \qquad (2.93)$$

Results where α ≥ 4 are generally considered unreliable. Such diversity of measures is most likely a consequence of different income inequalities in the different population samples studied by various authors at very different time spans. It should also be noted that other estimates of the Pareto index for wealth distribution generally follow the window above (Richmond et al., 2006b; Brzezinski, 2014, fig. 1). Coelho et al. (2008) also derived the Pareto index for the United Kingdom in 1995, but found that the rich segment is in fact composed of two Pareto power laws, one for the rich, where α = 3.2–3.3, and another for the super-rich, having α = 1.3, as shown in Fig. 2.2. Since the super-rich have a smaller Pareto index, this led them to suggest that the super-rich may pay proportionally less tax on their income than the rest of society (see also Section 4.2.7). Nevertheless, Cowell et al. (1999) also found two segments in the Pareto tail of Brazil in 1990, but their results showed the opposite tendency, that is, higher values for the super-rich: they reported α = 1.9 for the rich and α = 2.5 for the super-rich. These results suggest that there should indeed be two classes of rich people, both represented by a power law, and the remark at the end of the caption of Fig. 2.6 regarding the Japanese data, a point apparently not noted by the original author, reinforces this view. Richmond et al. (2006b, p. 141, fig. 5.4) have also indicated the existence of a distinction between the rich and super-rich in wealth data. But the relative position of these two categories of income in terms of the Pareto index remains unresolved and requires more data and studies.

[Figure 2.2: log–log plot of cumulative weekly income vs. weekly income (in pounds), with fitted lines of slopes −3.25 and −3.33 (R² = 0.9997) for the rich and −1.26 (R² = 0.9828) for the super-rich.]

Figure 2.2 CCDF of the high-income population of the United Kingdom in 1995 showing a double power law. This means that the high-income class should in fact be subdivided into two classes, the rich and the super-rich. Reprinted from Coelho et al. (2008, fig. 2), with permission from Elsevier.

As a final remark, it is very interesting to note that Scheidel and Friesen (2009, pp. 80–81, 86–87, figs. 1–3) have estimated the degree of inequality of the Roman Empire at its peak, around the year 150. Their most "plausible" result indicated a Pareto power-law income inequality for the Roman elite, with its slope estimated as α ∼ 2.7. They also estimated that the top 1% grabbed about 16% of the total production of the Empire, obtained the most plausible Lorenz curves, and calculated the Gini index as being in the range 0.42–0.44.

2.4.2 Exponential

Empirical evidence for the exponential function as a good fit for the non-Paretian income segment was initially advanced by Drăgulescu and Yakovenko (2001a, 2001b), reviewed in Drăgulescu (2002), Yakovenko (2003), and Drăgulescu and Yakovenko (2003), and followed with new results by Christian Silva (2005), Christian Silva and Yakovenko (2005), Banerjee et al. (2006), and Tao et al. (2016). These authors found good exponential fits for the income data of the less rich population of the United Kingdom, USA, Australia, and other countries, concluding


Figure 2.3 Probability density distribution of personal income in the USA for 1996. The solid line is the exponential fit. In the inset, graph A is the same but with a vertical logarithmic scale, whereas graph B is the complementary cumulative distribution. It can be noted in the main panel that, despite producing a good fit for most data, the exponential fails to represent the data at small income values. Reprinted from Drăgulescu and Yakovenko (2001a, fig. 1), with permission from Springer Nature.

then that societies appear to be constituted by two classes as far as income is concerned. The income distribution of the less rich follows an exponential and comprises about 99% of the population, whereas the remaining 1% constitutes the tiny rich minority whose income is well represented by the Pareto power law. Although these authors were not the first to conclude that societies are fundamentally constituted by two distinct classes, they appear to have been the first to connect such class division to income distribution in a quantitative way. This was done by proposing two simple functions to represent the income data of each class and then quantifying their relative composition by means of an objective methodology. As we shall see in the next chapters, such methodology became an important contribution as it started to shed new light upon the dynamics of income distribution and the mechanisms of resource distribution among the different classes in a society. It also allowed a direct connection of the income distribution problem to statistical physics, providing new paths for interpreting the dynamics of income distribution, in addition to connecting these results to others previously made in isolation by some economists. Figs. 2.3–2.5 present a short summary of these results.
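The two-class methodology just described can be sketched as follows: split a sample at the top 1% quantile, estimate the exponential "temperature" from the mean of the lower class, and the Pareto index of the upper class by maximum likelihood. The data below are synthetic, and every parameter (the scale of 30, the α = 1.8 tail, the 99% boundary) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
n_rich = n // 100
# Synthetic two-class society: 99% exponential incomes plus a 1% Pareto tail
body = rng.exponential(scale=30.0, size=n - n_rich)     # "temperature" T = 30
tail = 200.0 * (rng.pareto(1.8, size=n_rich) + 1.0)     # Pareto index 1.8
x = np.concatenate([body, tail])

cut = np.quantile(x, 0.99)            # empirical boundary of the top 1%
low, high = x[x <= cut], x[x > cut]

T_hat = low.mean()                    # exponential temperature estimate
alpha_hat = len(high) / np.sum(np.log(high / cut))   # Pareto MLE on the tail
print(cut, T_hat, alpha_hat)
```

Note that the estimated Pareto index comes out slightly above the input value because the crossover region around the class boundary mixes the two components, a difficulty that also affects real income samples.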


Figure 2.4 CCDF of personal income in the USA in 1997 (IRS data). The main panel shows the points in log–log scale and the inset in log–linear scale. The solid lines are the exponential (named here Boltzmann–Gibbs) and Pareto power-law fits. Clearly the power law is only valid at the tail of the distribution. Reprinted from Drăgulescu and Yakovenko (2003, fig. 1, left panel), with the permission of AIP Publishing.

2.4.3 Log-Normal

The application of the log-normal to the study of income distribution has a long history, starting with Robert Gibrat (see p. 70 above), who fitted the log-normal to the income data of various countries and regions for 1852–1919. Kleiber and Kotz (2003, table 4.1 and pp. 126–129) briefly discussed Gibrat's results, as well as many others obtained for the time window from 1938 to 1997 for several countries. Aitchison and Brown (1957, section 11.6) also presented a brief study where the log-normal was used to fit the income data of the USA from 1944 to 1950. These early results indicated the log-normal as a good fit for the income data of those not belonging to the rich, or Pareto, segment. Reinforcing this point, Souma (2001, 2002) obtained good log-normal fits for the Japanese income data from 1965 to 1998. The results for 1998 are shown in Fig. 2.6. Clementi and Gallegati (2005a, 2005b) also studied the personal income of Italy, Germany, the United Kingdom, and the USA in several temporal segments during


Figure 2.5 Lorenz curve of the points shown in Fig. 2.4. The solid Lorenz curve is the one obtained as if most of the distribution were exponential. Note that the Lorenz curve experiences a jump at the top of the plot in order to adjust to the rapid increase in the data points. This is due to the distortion in the exponential distribution produced by the Pareto power-law tail. The fraction of total income contained in the power-law tail in excess of the exponential is indicated as 16%. Hence, although the Pareto tail contains only about 1% of the entire population, it holds an excess of 16% of the total income. More details about this plot are discussed in Section 4.4. Reprinted from Drăgulescu and Yakovenko (2003, fig. 1, right panel), with the permission of AIP Publishing.

the period 1977–2002. They found consistency between the data and the two-parameter log-normal distribution for the non-Paretian segment in all cases. Similar results of good fits using the log-normal together with a Pareto power-law tail were reported by Zou et al. (2015) regarding the Chinese income data between 1990 and 2002. The respective Pareto index for China in this period was found to be in the range 3.7 < α < 4.3. This interval is somewhat higher than most results presented in Section 2.4.1. Nevertheless, Xu et al. (2017) studied the Chinese data for 1998–2002 and obtained results in the range 3.05 ≤ α ≤ 3.29.

2.4.4 Gamma Function

The gamma distribution also has a long history of being used to fit income distribution data, a history that goes back as far as 1898, when the French statistician Lucien March (1859–1933) used it to represent the income data of France,

[Figure 2.6: log–log plot of cumulative probability vs. income (million yen) for adjusted income tax, income, and income + employment income, with a power-law fit (α = 2.06) and a log-normal fit (x₀ = 4 million yen, β = 2.68).]

Figure 2.6 Plot of the CCDF of the personal income of Japan in 1998. The empirical data were fitted to both a log-normal function, at the low-income region, and the Pareto power law at the high-income segment. β is the Gibrat index (see p. 71 above) and μ = ln x₀ is defined in Eq. (2.50). Note that the power-law segment seems in fact to be a double power law: there is a split at its very end, whose respective Pareto index seems to be smaller than in the first segment. This is similar to the results shown in Fig. 2.2. Reprinted from Souma (2002, fig. 2), with permission from Springer Japan.

Germany, and the USA. That was followed by various other authors who used both the gamma distribution and its generalized four-parameter form to fit both earnings and wealth data (Kleiber and Kotz, 2003, pp. 157–160, 166–168). Angle (1996) applied the gamma distribution to fit the 1980 income data of the USA population aged 25 and over, concluding that it represents well various strata of the population in terms of age and educational level. His aggregated data are summarized in Fig. 2.7 and show a reasonable gamma PDF data fit. These results supported Angle's earlier model, called the inequality process, which proposes a mechanism by which the gamma distribution would arise from income data by assuming some basic hypotheses about distributional processes in a society (Angle, 1986b). Angle's model, or simply the Angle process, will be discussed in more detail in Chapter 4. Banerjee et al. (2006) studied the income data of Australia for 1989–2000 using the exponential, log-normal, and gamma distributions, obtaining good fits for 99% of the lower-income population with all three functions, as shown in Fig. 2.8. The adjusted functions are not perfect, but good enough to produce fits


Figure 2.7 Results of fitting the gamma PDF to the 1980 income data of the USA population aged 25+, aggregated over age and educational levels. Reprinted from Angle (1996, fig. 3) by permission of Taylor & Francis Ltd.

with about the same quality. The respective fits for the PDFs are shown in Fig. 2.9, where all three functions perform reasonably well, although the exponential fares worse than the other two at very low income values. Ferrero (2005) also studied the gamma distribution by applying it to the income data of Japan and the United Kingdom for 1998, and of New Zealand for 1996. Fig. 2.10 shows his results which, like those of Banerjee et al. (2006), are reasonably good for most data points, but deviate strongly at the high-income segment. Soriano-Hernández et al. (2016) have also studied the income distribution of Mexico for 1992–2008 and successfully fitted the lower-income segment by both the log-normal and gamma distributions, whereas the high-income segment was well fitted by the Pareto power law. Richmond et al. (2006b, p. 134) presented in their table 5.2 the parameter fits carried out by several authors on income and wealth data ranging from the fourteenth century BC to 2004, using all the distribution functions discussed above.
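Gamma fits of the kind shown in Figs. 2.7–2.9 can be sketched with standard maximum-likelihood routines. The data below are synthetic, and the shape and scale values are illustrative, not taken from any of the cited samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "income" sample drawn from a gamma distribution
shape_true, scale_true = 2.0, 15.0        # illustrative parameters
sample = rng.gamma(shape_true, scale_true, size=50_000)

# Maximum-likelihood gamma fit, with the location pinned at zero income
shape_hat, loc_hat, scale_hat = stats.gamma.fit(sample, floc=0.0)

# Kolmogorov-Smirnov statistic as a rough goodness-of-fit check
ks = stats.kstest(sample, "gamma", args=(shape_hat, loc_hat, scale_hat))
print(shape_hat, scale_hat, ks.statistic)
```

On real income data the same routine would typically be applied only below the Pareto threshold, with the power law fitted separately to the tail.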


Figure 2.8 CCDF of Australia income data for the period 1989–2000 fitted to three theoretical functions: exponential, log-normal, and gamma distributions. All functions produce fits of the same quality for the lower-income segment composed of 99% of the population. Here T = 1/B (see Eq. 2.26). Reprinted from Banerjee et al. (2006, fig. 1a) with permission from Elsevier.

2.4.5 Gompertz Curve

Inspired by the results regarding the exponential fit to income data, Moura and Ribeiro (2009) went on to investigate the income distribution of Brazil for 1978–2005. They were, nevertheless, surprised to verify the exponential's inability to adequately fit the lower-income portion of the Brazilian data, although the power law did fit the high-income segment. Intrigued, they noticed that the log–log plot of the non-Paretian segment had a partially linear portion which would disappear at very low income values, as shown in Fig. 2.11. This finding inspired them to take the second logarithm of the CCDF of the non-Paretian segment, finally obtaining a linear plot, whose results are shown in Fig. 2.12. This procedure suggested the Gompertz curve (see Eq. 2.35) as a suitable function to fit the Brazilian data of the lower-income segment, as the higher-income classes were well fitted by the power law. These were the empirical observations that led them to propose the Gompertz–Pareto distribution (see Section 2.3.2).

The question of why the exponential failed at very low income data can be seen in the light of Brazil's historically notorious high inequality (see p. 101 below for a discussion of this topic), whose Gini indices have been revolving around


Figure 2.9 This graph shows the respective PDFs of the Australia income data shown in Fig. 2.8. All three functions produce reasonably good fits, although the exponential fares worse as far as very low income values are concerned. Reprinted from Banerjee et al. (2006, fig. 2a) with permission from Elsevier.

0.55–0.60, one of the highest in the world. Compared with other samples where the exponential fit the data well, most, if not all, have Gini indices much smaller than Brazil's. So, this suggests that countries possessing a sizable portion of very poor people, possibly mainly in the underclass category (see Section 2.4.8), require a different component to represent their income data. The Gompertz curve then suggested itself because of its original application to representing human mortality as growing very rapidly as one approaches 100 years of age (Gompertz, 1825),³ which in this case works the other way round, that is, as one reaches low income values there is a quickening of people accumulating in that very low-income portion. Another remark regarding the Brazilian income data: a simple look at the data points presented in Fig. 2.11 suggests that there could be another way of representing the data rather than using a Gompertz curve, namely a double exponential with two different slopes.

Another recent work of interest in this discussion is that of Shaikh et al. (2014), who analyzed personal wage and salary income by race and gender from 1996 to 2008

³ A recent reappraisal of the Gompertz law of human mortality using modern data is discussed in Richmond and Roehner (2016).


Figure 2.10 Probability density vs. money for the income data of Japan and UK, 1998, and New Zealand, 1996 (income/1000 on the horizontal axis; probability density, %, on the vertical, logarithmic axis). The gamma function fits are shown as solid lines. The distribution strongly deviates from the data points at high-income values. Reprinted from Ferrero (2005, fig. 1) with permission from Springer Nature

in the USA. They concluded that the exponential is a good representation of the income for all individuals viewed together or separated into males, females, whites, and African Americans. However, a closer look at their plots, particularly the log-linear plots of CCDF vs. income shown in their Figs. 1–3, does not suggest linearity. In fact, their graphs bear a much closer similarity to the Brazilian equivalents shown in Fig. 2.11, which means that their data would probably be better represented by the GPD rather than the EPD. This point appears to have been overlooked by the authors. In addition, it could also be argued that their data for the lower-income segment could be thought of as a double exponential, as mentioned above for the case of Brazil. Whatever the choice of representation, it seems that just as the rich 1% represented by the Pareto power law can apparently be decomposed into the rich and the superrich, the remaining 99% can also be subdivided into two segments, the middle and lower classes. However, to simplify matters, most attempts made by econophysicists to model the income distribution merge these possible substructures and consider only two distinct classes, the rich 1% and the remaining 99%, which naturally leads to the assumption of only two distinct functions to represent income data, with apparently no fewer than three parameters to be found from the samples.
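This two-functions, three-parameter description can be made concrete with a schematic sketch (the precise forms and normalizations used in the literature vary): an exponential CCDF for the 99% below a crossover income x_t and a Pareto power law above it, with the power-law amplitude fixed by continuity, leaving three free parameters (T, x_t, α).

```python
import numpy as np

def two_class_ccdf(x, T, x_t, alpha):
    """Schematic exponential-plus-Pareto CCDF: exponential below
    the crossover x_t, power law above, joined continuously."""
    x = np.asarray(x, dtype=float)
    amp = np.exp(-x_t / T) * x_t**alpha      # fixed by continuity at x_t
    return np.where(x < x_t, np.exp(-x / T), amp * x**(-alpha))

T, x_t, alpha = 1.0, 4.0, 2.5                # the three free parameters
left = two_class_ccdf(x_t - 1e-9, T, x_t, alpha)
right = two_class_ccdf(x_t + 1e-9, T, x_t, alpha)
print(abs(left - right) < 1e-6)              # True: the pieces join at x_t
```

Only the amplitude of the power law is constrained here; the temperature-like scale T, the crossover x_t, and the Pareto index α must all be estimated from the sample, which is the three-parameter count mentioned above.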

Figure 2.11 Log-linear graphs of some of the results for the non-Paretian CCDF of Brazil (ln[F(x)] vs. x, in yearly panels spanning 1992–2005). The dashed line is the linear data fit. Clearly the data only follows an exponential at its higher-income portion. Overall, the exponential is unable to produce a good data fit. Reprinted from Moura and Ribeiro (2009, fig. 7) with permission from Springer Nature

Figure 2.12 Double log-linear graphs of the non-power-law CCDF of Brazil (ln{ln[F(x)]} vs. x, in yearly panels spanning 1992–2005). The dashed line is the linear fit, which offers a reasonable compatibility with the data. These results suggest the Gompertz curve (see Eq. 2.35) as a more suitable function to represent income data having high Gini index values, this being the case of Brazil. Reprinted from Moura and Ribeiro (2009, fig. 9) with permission from Springer Nature
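The linearization behind Fig. 2.12 can be sketched numerically. As an illustration only — the exact normalization used by Moura and Ribeiro (2009) may differ — assume a Gompertz-type CCDF of the form F(x) = exp[exp(B − Ax)], so that ln{ln[F(x)]} = B − Ax is a straight line in x:

```python
import numpy as np

# Hypothetical Gompertz-type parameters, for illustration only
A, B = 0.75, 1.5

x = np.linspace(0.0, 2.0, 50)        # normalized income
F = np.exp(np.exp(B - A * x))        # Gompertz-type CCDF

# Taking the logarithm twice linearizes the curve:
# ln(ln F(x)) = B - A*x
y = np.log(np.log(F))

# A straight-line fit then recovers the two parameters
slope, intercept = np.polyfit(x, y, 1)
print(round(slope, 6), round(intercept, 6))   # -0.75 1.5
```

This is exactly the diagnostic of Fig. 2.12: if the double logarithm of the empirical CCDF falls on a straight line, a Gompertz curve is a natural candidate for the lower-income segment.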


Notwithstanding, recent results coming from the econophysics literature show that this is not necessarily the case. There are at least two functions which are able to represent the entire income data range by means of only two fitting parameters, the Tsallis and κ-generalized distributions, respectively discussed in Sections 2.3.5 and 2.3.6.
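For reference, both can be written as two-parameter generalized exponentials. Below is a minimal numerical sketch using the standard q-exponential and κ-exponential definitions (the normalizations adopted in Sections 2.3.5 and 2.3.6 may differ in detail); both reduce to the ordinary exponential in the appropriate limit.

```python
import numpy as np

def exp_q(u, q):
    """Tsallis q-exponential: [1 + (1-q)u]^(1/(1-q)); exp(u) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(u)
    base = np.maximum(1.0 + (1.0 - q) * u, 0.0)
    return base ** (1.0 / (1.0 - q))

def exp_kappa(u, kappa):
    """Kaniadakis kappa-exponential; exp(u) as kappa -> 0."""
    if abs(kappa) < 1e-12:
        return np.exp(u)
    return (np.sqrt(1.0 + kappa**2 * u**2) + kappa * u) ** (1.0 / kappa)

x = np.linspace(0.0, 5.0, 51)
theta = 1.0                                      # scale parameter

ccdf_tsallis = exp_q(-x / theta, q=1.3)          # two parameters: theta, q
ccdf_kappa = exp_kappa(-x / theta, kappa=0.6)    # two parameters: theta, kappa

# Both recover the pure exponential in the limiting case
print(np.allclose(exp_q(-x, 1.0), np.exp(-x)))      # True
print(np.allclose(exp_kappa(-x, 0.0), np.exp(-x)))  # True
```

For q > 1 the q-exponential decays as the power law x^(−1/(q−1)) at large x, which is how a single two-parameter function can cover both the exponential bulk and the Pareto tail.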

2.4.6 Tsallis and κ-Generalized Functions

As mentioned above (p. 79), a series of papers have appeared proposing the KGD as a suitable distribution for representing the whole income distribution data without the need of two separate functions, and the KGD was successfully applied to the income samples of several countries. For instance, Clementi et al. (2007) used the data of Germany, Italy, and the United Kingdom for the years 1980–2002 to fit the KGD. Fig. 2.13 shows the results for Germany in 2001, where the κ-function is clearly able to represent the entire income data range. Clementi et al. (2012a) studied the wealth distribution of the USA from 1984 to 2009, but, in addition to fitting the KGD, they also performed fits for the Dagum and Singh–Maddala distributions for comparison (see also: Clementi and Gallegati, 2016; Clementi et al., 2016), concluding that the KGD performs better than those other two in terms of data fit.

Ferrero (2005) showed that the Tsallis distribution can adequately fit the overall income data of Japan, the United Kingdom, and New Zealand. Fig. 2.14 shows the same data of Fig. 2.10 now fitted with the Tsallis distribution, where one can see this function’s ability to represent the entire data range. The TD was also used by Soares et al. (2016) to fit the whole income data of Brazil for the years 1978–2014, significantly expanding upon Ferrero’s work, which used only one year in each of his samples. Fig. 2.15 shows the results for 2013 only, where the TD’s ability to successfully fit the entire income range is clear. A new feature, however, also appears very clearly in Fig. 2.15 in the form of a small oscillation of the data points around the fitted straight line. Curiously, although this feature can be observed in samples studied earlier by other econophysicists, until then it had not been explicitly characterized in the literature. Indeed, a careful look at graph B in the inset of Fig. 2.3, the data points of Fig. 2.7, the exponential portion of Fig. 2.8, and the data of Figs. 2.12 and 2.13 shows that they all display the same small oscillation around the fitted lines. The fact that the same feature appears in different samples, whose data were collected by different governments or private bodies, in different years and different countries, and were analyzed by different authors using different techniques, is a compelling indication that this feature is real and deserves further investigation. In fact, as pointed out by Soares et al. (2016), this log-periodic oscillation has been identified in other phenomena such as earthquakes and stock


Figure 2.13 Graph showing the personal income of Germany for 2001 (CCDF P>(x) vs. normalized income x, on log–log scales) fitted to the exponential (dotted line), the power law (dashed line), and the κ-generalized distribution (solid line). [Curve labels: exp(−βx^α), (2βκ)^(−1/κ) x^(−α/κ), and exp_κ(−βx^α); best-fit parameters: κ = 0.5697 ± 0.0005, α = 2.5659 ± 0.0007, β = 0.8788 ± 0.0003.] Clearly the κ function is able to adequately fit the entire data range. Reprinted from Clementi et al. (2007, fig. 4 left) with permission from Springer Nature

markets near financial crashes, and its origin is still a mystery. These oscillations can also be represented by allowing the TD’s q parameter to become complex, an approach in which periodic behaviour arises naturally from the formalism (Abreu et al., 2019). But, whatever way one may choose to describe these oscillations, they may indicate a nontrivial substructure, or income dynamics, of yet unknown origin.

2.4.7 Fitting Income Data with a Single Parameter?

As seen above, there is indeed strong empirical evidence for all CCDFs discussed in the previous sections, and also for others not discussed here in detail, but it is worth noting that all empirical studies of the income distribution have basically approached the data by means of either PDF or CCDF fits. However, in an interesting paper Henle et al. (2008) showed it to be theoretically possible to represent all Lorenz curves by means of a function with a single parameter, something which apparently has not yet been empirically tried with any income sample. Nevertheless, it remains to be seen how useful this possible data fitting would be for modeling purposes and as a representation of real-world income data.
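Whichever functional form is adopted, any such representation would ultimately be tested against sample-based estimates of the Lorenz curve and the Gini index. A minimal sketch using the standard discrete Gini estimator on sorted incomes (illustrative data only):

```python
import numpy as np

def gini(incomes):
    """Discrete Gini index of a sample, computed from the
    standard formula on incomes sorted in ascending order."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return (2.0 * np.sum(i * x)) / (n * np.sum(x)) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))   # 0.0  (perfect equality)
print(gini([0, 0, 0, 1]))   # 0.75 (one person holds all the income)
```

For an exponential income distribution this estimator converges to the well-known value G = 1/2.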


Figure 2.14 Probability density vs. money for the income data of Japan and the United Kingdom, 1998, and New Zealand, 1996. The Tsallis function fits are shown as solid lines. The data points are the same as shown in Fig. 2.10 that were fitted using the gamma distribution. It is clear that the Tsallis distribution provides a better fit to the whole income data. Reprinted from Ferrero (2005, fig. 2) with permission from Springer Nature

2.4.8 Econophysical Income Classes

2.4.8.1 Class Divisions

The empirical results presented in the previous paragraphs give a strong indication that the income distribution of all countries presents substructures that can be conceptualized as class divisions. This is certainly nothing new, as the class structure of societies has been part of the work of classical political economy for a long time, from Adam Smith to Karl Marx, and even before them. What is new is the quantification of this class division in terms of income and the general percentage of the population in each income stratum. In addition, despite the obvious class division of income, a division which comes out very clearly in the works of econophysicists, it is curious to note that economists have generally been very shy, or entirely mute, about pointing this out, although they have been working on the income distribution problem for a much longer time than econophysicists. As briefly discussed above (p. 93), these results support an econophysical classification of modern societies, basically subdividing them into the following class hierarchy of decreasing income.


Figure 2.15 Graph of the income data of Brazil for 2013 fitted to the Tsallis distribution (solid line): ln_q[F(x)/F(0)] vs. the normalized income x, with fitted line ln_q[F(x)/F(0)] = −0.9977 x and q = 1.265. The plot clearly shows the adequacy of the TD, as defined by Eq. (2.73), to represent the entire income data range. This plot is just one example of the annual Brazilian income from 1978 to 2014 studied by Soares et al. (2016), but all years behave similarly. Note the small oscillation around the fitted straight line and that the amplitude of this oscillation grows at increasing values of the normalized income x. This log-periodic oscillation is also present in other samples of other countries studied by different authors using different techniques (see main text). Reprinted from Soares et al. (2016, fig. 6 left) with permission from Elsevier

(1) The super-rich, composed of approximately 0.1% of the population at the top of the income and wealth share.

(2) The rich, who together with the super-rich amount to about 1% of the people. Both the rich and super-rich, sometimes referred to as “the 1%,” or simply as “the rich class,” are represented by a single Pareto power law on income, or a double one if they can be differentiated.

(3) The middle class, a group whose percentage of the total population is so far unclear from a dynamic viewpoint, as its size is likely to be very variable (Massari et al., 2009; Foster and Wolfson, 2010). They are probably the majority in modern industrial societies and are represented by the exponential on income data.

(4) The lower class or underclass, at the bottom of the income hierarchy, composed of the poorest people. The lower class is represented by the nonlinearizable part of the lower-income segment of the exponential, as found in the case of Brazil, or by the lowest-income range given by a double exponential. Their percentage size is still unclear as well, possibly being as variable as that of the middle class.

Together, the lower and middle classes amount to 99% of people in present-day societies and are often referred to simply as “the 99%.” The income of the super-rich is most likely composed solely of income from capital, whereas both the middle and lower classes would have labor as their only source of income. The rich, however, should probably have their income originating from both labor and capital (see Section 3.2.2).

Differentiating the lower class from the underclass may basically be a schematic exercise. The lower class could be thought of as being made of people with semipermanent low-paying jobs and limited access to welfare state benefits, whereas the underclass would be characterized by those unemployed, or unemployable in view of the current levels of education and training required for job insertion in modern economies, who would have virtually no access to social welfare benefits. However, from the point of view of the income classes discussed here they can be economically considered as the same group.

Considering the analytical perspective of econophysics, one can set aside in many studies the difference between the rich and super-rich on the one hand, and the middle and lower classes on the other, and assume that society is composed of only two groups, the 1% and the 99%. Doing this is easier in analytical terms, as one can represent the income data by two simple functions and three parameters. However, as we saw, it is also possible to represent the entire income range by a single function with two parameters, but by doing this we may lose the simple internal class structure of the income data given by the two-functions approach.
That structure would possibly only reappear in dynamic models, since both the TD and the KGD have the power law and the exponential as limiting cases.

2.4.8.2 The Case of Brazil

The results concerning the income distribution of Brazil may be of relevance for a global consideration of inequality. This is so because this country is the fifth biggest in the world in terms of both population and area, has an economy whose size is among the top ten, and has income inequality close to the world average. Indeed, as shown in Fig. 2.16, the global Gini index is very close to Brazil’s figure in 2010 according to the World Bank (2016) estimates. Therefore, the inequality-generating social processes that take place in Brazil could, perhaps, be similar to others occurring worldwide, not in their specific history and social sources, but in their end results. This possibility justifies some analysis.


Figure 2.16 Global Gini index obtained by averaging individual countries’ Gini coefficients using the World Bank database. Since this is obtained without weighting for population size, it is a measure of inequality among countries in the world, not among people in the world. Nevertheless, according to Milanovic (2012, fig. 1) the weighted and unweighted calculations of the global Gini index tend to converge to similar results in 2010. Source: Conference Board of Canada (2011, third graph from top to bottom)

This high inequality is a consequence of Brazil’s long history of slavery and the subsequent huge social neglect toward the poor. The Portuguese colonization of Brazil effectively started in 1532, and slavery started with it. In the first 150 years of colonization most slaves were captured native Indians, but later they were replaced en masse by black Africans, although these had already been present as slaves since the very beginning of colonization (Alexander, 1922; Arsenault and Rose, 2006). Over a period of 350 years, it is estimated that about 4.8 million black people were forcefully brought from Africa to work as slaves in Brazilian plantations, a figure fifteen times higher than the number of black people forcefully transported to the USA for the same purpose (Marques, 2016; Trans-Atlantic Slave Trade Database, 2018). Brazil became politically independent from Portugal in 1822; nonetheless, slavery continued unabated for over sixty years until its abolition in 1888, a date which made Brazil the last country in the Western world to officially terminate slavery. Even so, in 1910 a revolt against slavery-like treatment took place in Rio de Janeiro, then the capital of the country (Morgan, 2014).

After its official ending, slavery in Brazil was basically replaced by a kind of unofficial apartheid-like system that effectively confined the overwhelming majority of slave descendants, about 55% of the current population, most remnants of native peoples, and a fair number of nonblack, mostly European, immigrants to the income underclass, estimated as amounting to from a third to over half of the population of about 210 million people at the time of writing. And although this unofficial segregation system toward the poor, of whom black slaves’ descendants constitute the largest segment, has been diluting since then, it had not been terminated by the late 2010s, nor had the modern versions of slavery that still survive in small corners of the country’s huge territory.


In addition, when one considers that at the time of writing Brazil’s top 1% income class is still overwhelmingly composed of descendants of white European immigrants, and that black people are essentially nonexistent in this population stratum, as well as in the country’s political and judiciary establishments, it is clear that even 130 years after the end of slavery there are still social mechanisms in place that prevent Brazilian black slave descendants from participating in the economic and power elites, although they form the majority of the current population. Therefore, the previous long history of slavery and the permanence of segregating social mechanisms explain the persistently high Brazilian Gini index in the 0.5–0.6 interval considering only household income data, or ∼0.7 if tax data is used to correct the top incomes (Medeiros et al., 2015; Souza and Medeiros, 2015; Souza, 2016; Morgan, 2017), making this country an effective small-scale representation of global inequality (see also Milanovic, 2010, section 2.2). In conclusion, from the knowledge obtained on the history of inequality in Brazil one may suppose, by inference, that similar social mechanisms were likely in place in the past, and are still in place today, which prevented and still prevent the majority of the world’s population from leaving poverty.

2.4.8.3 Other Aspects of Inequality

Finally, it is important to clarify what one means by ‘inequality’, stated in general terms, and ‘inequality of income distribution’ (Milanovic, 2010). The latter is just one aspect of the former, since inequality in general includes more than just income distribution: it also involves economic, sociological, and political aspects, such as inequality in opportunities, outcomes, gender, race, age, health, and happiness, and its relationship with economic growth and the political system (Graham and Felton, 2006; Morrisson and Murtin, 2013; United Nations, 2013; Vijaya, 2013; Mendes, 2014; Ostry et al., 2014).
These topics deserve separate investigations and are beyond the scope of this book, which is mainly devoted to income distribution, or income inequality, focused on dynamics from the perspective of econophysics, as well as on the related topic of wealth inequality. Even so, some of these other topics on inequality will be briefly mentioned from time to time.

2.5 Limitations of Curve Fitting

We have seen that the distribution functions presented above, as well as several others which were not discussed here (a few were mentioned in Section 2.3.7), have been used to successfully fit the empirical income data of several countries. Since those distributions are analytically different, the question arises: which one is the best? Considering models as representations, the obvious answer is: all of them. Since all those functions do represent the data to some extent, they are all good enough, some performing statistically better than others for different data sets and different time periods.


Answering that way may seem very unsatisfactory to some, but what underlies such dissatisfaction is in fact the limitations of curve fitting. The basic point here is that curve fitting is just the start of the study of a subject like income distribution dynamics, because its ability to reveal the inner dynamics of this distribution is very limited. This is especially true if we consider that the income distribution may very well change over time, which means that in the same country one particular function that fits the data well in a certain time period may no longer be a good fit at later times (Gallegati et al., 2006, p. 4). Then, we need to go further than curve fitting to really start to understand the mechanisms that govern how income gets distributed in a society. And this cannot be done by means of “rigorous” statistics or statistical hypothesis testing. What is required are dynamic models.

From a physicist’s perspective, the following real-life episode may give a better idea of this limitation. In a short essay, Freeman Dyson (2004) reminisced about an encounter he had in 1953 with the famous physicist Enrico Fermi (1901–1954), who was awarded the Nobel Prize in Physics in 1938 and also participated in the Manhattan Project. The meeting was to discuss a program of research that Dyson and his students had been pursuing for several years. After Dyson showed his results, Fermi was not at all impressed. Dyson recollects that Fermi pointed out that there are two ways of doing research in physics: having a clear picture of the process one is calculating, or having a precise and self-consistent mathematical formalism for this process. Fermi then asked Dyson how many parameters he had used in his theory. “Four,” he replied. Fermi added: “I remember my friend Johnny von Neumann4 used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk” (Dyson, 2004, p. 297; see also Mayer et al., 2010). So, Dyson recollected:

In a few minutes, Fermi politely but ruthlessly demolished [that] programme of research . . . He probably saved us from several more years of fruitless wandering along a road that was leading nowhere. I am eternally grateful to him for destroying our illusions and telling us the bitter truth. (2004, p. 297)
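Von Neumann’s quip reflects a general fact about free parameters: n of them suffice to pass a curve exactly through n data points, whatever the data. A minimal illustration (arbitrary made-up points, no elephant) with a degree-four polynomial, i.e., five parameters, fitted to five points:

```python
import numpy as np

# Five arbitrary "data" points with no underlying law whatsoever
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, -1.0, 4.0, 1.0, 5.0])

# Five parameters (a degree-4 polynomial) fit them perfectly
coeffs = np.polyfit(x, y, deg=4)
max_residual = np.max(np.abs(y - np.polyval(coeffs, x)))

print(max_residual < 1e-8)   # True: a "perfect" fit that explains nothing
```

A perfect fit obtained this way carries no information about mechanisms, which is precisely Fermi’s point about curve fitting done without a clear picture or a formalism.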

This real-life account summarizes the serious limitations of curve fitting. If one does not have a clear picture of the underlying dynamic process one is working with, or a mathematical formalism that empirically represents this dynamics, simple data fitting that seeks ever more “robust” and “rigorous” statistics will not shed much light on the underlying process. The basic point is that one should not try to incorporate economic hypotheses into curves. This must be done in the differential equations that represent the process under study. Trying to include economic hypotheses in curves is not so dissimilar to the ‘comparative statics

4 John von Neumann (1903–1957).


analysis’ made by neoclassical economics, a theory so limited that its own name speaks for itself (see also: Blatt, 1983, pp. 4–8; Estola, 2017, pp. v–vi). So, the main aim must always be to unveil the dynamics behind our problem, express this dynamics in terms of time-evolving differential equations, possibly in tensorial form, understand its mechanisms, and test them with real data in order to anchor the resulting theoretical concepts in very solid empirical foundations. To achieve this goal, curve fitting coupled to “rigorous” statistics can only be the very starting point in the study of a subject, and by itself will not add much to its dynamic understanding. This may explain why the study of income distribution dynamics has not advanced much since Pareto’s time: economists have to a great extent remained basically stuck with curve fitting during the twentieth century. Considering that at the time of writing there are serious concerns in several modern societies about rising inequality (Milanovic, 2010; United Nations, 2013; Piketty, 2014; Atkinson, 2015), and paraphrasing Freeman Dyson, one may state that income distribution dynamics is a problem too important to remain in a state of fruitless wandering along a road that leads nowhere.

3 Piketty’s Capital in the Twenty-First Century

Thomas Piketty’s (2014) study of the long-run inequality trends of income and wealth distributions has already become a landmark contribution to this subject. He was not the first economist to discuss those trends (see, e.g., Lindert, 2000, Morrisson, 2000, and Roine and Waldenström, 2015, for overviews of alternative works), but what makes his work different is its historical breadth in terms of real-world data, leading to models empirically based on a huge collection of evidence gathered from databases spanning centuries. Its long-term historical analysis provides important empirical material for further studies on the dynamics of income distribution and, therefore, an overview of some of its results is mandatory for the econophysical modeling of income and wealth dynamics to be discussed in the next chapters.

This chapter presents a brief exposition of some topics discussed in Capital in the Twenty-First Century, focused on the models, main empirical results, and other aspects covered in the book and its technical appendix that this author believes are relevant to the econophysical approach to the problem of distributive dynamics. Thus, the next paragraphs are not aimed at providing a broad survey of the book, as there already are quite a few good reviews available in the literature (e.g., Milanovic, 2014; Palley, 2014; Davies, 2015; Piketty, 2015; Pressman, 2016, and references therein). Nevertheless, as the book aroused as much praise as criticism, which is always the case for influential works in any area, some comments in this respect will appear here and there in what follows. More general overviews of its pros and cons can be found elsewhere (e.g., Fullbrook and Morgan, 2014; Krusell and Smith, 2015; Blume and Durlauf, 2015; Morgan, 2016; Pressman, 2016; King, 2017).

3.1 Data-Driven Analysis

The first important thing to note about the book is its data-driven nature. From the start, Piketty added his voice to other economists in criticizing the economics


profession for working on theories not based on facts (see pp. 29–32 above). His words in this respect are as follows:

If the question of inequality is again to become central, we must begin by gathering as extensive as possible a set of historical data for the purpose of understanding past and present trends. For it is by patiently establishing facts and patterns and then comparing different countries that we can hope to identify the mechanisms at work and gain a clearer idea of the future. (Piketty, 2014, p. 16, emphasis added)

[T]he [economics] profession continued to churn out purely theoretical results without even knowing what facts needed to be explained. . . . To put it bluntly, the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation . . . Economists are all too often preoccupied with petty mathematical problems of interest only to themselves. This obsession with mathematics is an easy way of acquiring the appearance of scientificity without having to answer the far more complex questions posed by the world we live in. [Hence, the] appeal to theory and abstract models and concepts [were done] only to the extent that theory enhances our understanding of the changes we observe. (Piketty, 2014, pp. 31–33)

So, Piketty’s conclusions are primarily drawn from a careful analysis of real-world data, or stylized facts, as economists like to call phenomenological results and relationships among variables suggested by empirical findings. This is the main strength of the analysis, and for this reason he is able to reach conclusions that otherwise would not be possible under the mainstream economics way of doing things. He avoided falling into the typical neoclassical theoretical narrative, which more often than not puts theoretical constructs on top of everything, even when they are in flat contradiction with the data. He also shunned the arcane axiomatic narrative often found in economics texts, where mathematical rigor is everything and real-world data is almost nonexistent. In these respects his approach is much closer to the way most physicists actually carry out their research than to that of mainstream economists, as discussed in Chapter 1, Sections 1.4 and 1.5.

An example of a conclusion obtained from data analysis which does not fall into the neoclassical economics narrative is the absence of economic equilibrium in the history of inequality of income and wealth. On the contrary, Piketty talks about powerful forces of convergence and divergence in the evolution of inequality, which alternate in one direction or another, adding that “there is no natural, spontaneous process to prevent destabilizing inegalitarian forces from prevailing permanently” (Piketty, 2014, p. 21). Even when he discusses the second fundamental law of capitalism (see below), which could lead to an economic equilibrium situation, he makes it clear that this is achievable only asymptotically,


that is, in an undetermined future and, hence, never perfectly realized in practice (Piketty, 2014, pp. 168–169). He also pointed out the dangers of economic determinism regarding the evolution of inequality in income and wealth, either apocalyptic or rosy, as the history of inequality is deeply political and, so, cannot be reduced to purely economic mechanisms (Piketty, 2014, p. 20). This is a criticism of past scholars who dealt with these questions, although he pointed out that they did not have the extensive databases we have today and, especially, the computational power to deal with huge amounts of data, a situation that seriously limited their data analysis. Even so, he did indicate how one’s own analysis can be biased by starting with conclusions obtained under ideological, philosophical, or political influences, citing Karl Marx and Simon Kuznets (1901–1985) as examples (Piketty, 2014, p. 10).

Kuznets, who received the Nobel Prize in economics in 1971, hypothesized in the 1950s that as an economy develops and average income grows, inequality initially increases and then decreases due to market forces. The graph of inequality level against income per capita reflecting this possible behaviour would be an inverted-U-shaped curve known as the Kuznets curve (see Milanovic, 2016, ch. 2, for further details and a possible update of this hypothesis; see also Yakovenko, 2009, section VII-B-a, for an econophysical criticism of the Kuznets curve). Piketty contrasts Marx’s apocalyptic end of capitalism due to the principle of infinite accumulation, where there would be no stopping the process of wealth ending up in fewer and fewer hands, with Kuznets’ fairy tale, or happy ending for capitalism, where inequality would eventually decrease as economies develop further (Piketty, 2014, p. 11).

3.2 Phenomenology

By taking a facts-based analytical path, Piketty’s theoretical concepts also follow a practical line.
This is the case of capital, income classes, and inequality as described by distribution tables. He also goes beyond household surveys to establish income inequalities. Let us now take a closer look at these definitions and concepts, as well as the main results that follow from them.

3.2.1 Capital

Piketty’s definition of capital is best explained in his own words:

[C]apital is defined as the sum total of nonhuman assets that can be owned and exchanged on some market. Capital includes all forms of real property (including residential real estate) as well as financial and professional capital (plants, infrastructure, machinery, patents, and so on) used by firms and government agencies . . . [including] all forms of
wealth that individuals (or groups of individuals) can own and that can be transferred or traded through the market on a permanent basis. (Piketty, 2014, p. 46)

To simplify the text, I use the words “capital” and “wealth” interchangeably, as if they were perfectly synonymous. (Piketty, 2014, p. 47, emphases added)

Capital in all its forms has always played a dual role, as both a store of value and a factor of production. I therefore decided that it was simpler not to impose a rigid distinction between wealth and capital. (Piketty, 2014, p. 48)

He also noted that this definition excludes what is called human capital, which comprises an individual’s labor power, training, and abilities, and which cannot be exchanged on any market in non-slave societies. Hence, for Piketty capital is defined as net wealth, or ‘the market value of what one owns’ (assets) minus ‘the market value of what one owes’ (liabilities or debts). In a physicist’s terminology this is a phenomenological definition well suited to a data-driven analysis and, so, there is nothing out of the ordinary here, except to note how much time and energy economists spend squabbling about the “real” nature of capital (e.g., Varoufakis, 2014). This kind of discord recalls the old debate in physics, lasting 300 years, about the “real” nature of light: from René Descartes’ (1596–1650) viewpoint in the 1630s that light is a disturbance that propagates as a wave, to Newton’s corpuscular concept of light half a century later, based on a series of experiments made by Newton himself, and then, a century after Newton, to Thomas Young’s (1773–1829) wave theory of light, to cite just three names among several in this long debate. Young carried out experiments in the early 1800s that clearly showed light’s interference patterns and that, together with Maxwell’s electromagnetic theory, seemed to end the debate at the end of the nineteenth century in favor of the wave theory, until Einstein revived the argument in 1905 by showing that light does behave as a particle in the photoelectric effect, an explanation that earned him the 1921 Nobel Prize in Physics. By the 1930s this centuries-old debate was finally settled by the recognition of the dual nature of light, behaving as a wave in certain situations and as a particle in others, called wave–particle duality. And the possibility remains open for other theories of light, something that no longer causes stress among physicists.
So, returning to the “real” nature of capital, it is clear that there are as many definitions of capital as theories of capital. Piketty chose one suited to his needs, which is possibly not applicable in other contexts. And if this choice leads to contradictions, that is absolutely normal as no theory is free from internal contradictions and limitations, especially when it is being built.


The point is, one cannot dismiss a claim based on data because, as argued, it uses a definition whose “construct has no role in economic theory” (Hillinger, 2014, p. 133). One can easily counter this argument by asking: which theory? If a claim based on data does not fit a theory, one should change the theory, create an alternative, or dismiss it entirely. The bottom line is that no amount of argument about theory can say whether some concept is right or wrong. Ultimately the only test is the real data, experimental or observational. Past discords among physicists may be instructive for possibly settling some debates in economics.

3.2.2 Income Classes

In order to describe inequality one must first define social classes and be able to identify the sources of their incomes. Regarding the sources, Piketty began by noticing that in all societies income can be viewed as the sum of two terms, income from labor and income from capital. Therefore, income inequality is the result of adding up these two components, which also leads to a third component, the interaction between the two. Income from labor includes wages and nonwage labor (e.g., self-employment). Income from capital derives from ownership of capital, like rents, dividends, interest, royalties, profits, and capital gains. The total income is, of course, the sum of both terms. The basic question on the sources leading to the skewed income or wealth distributions, or inequality, is then the extent to which high income from labor also means high income from capital because, as a matter of fact, those possessing large fortunes often manage to obtain a higher return than those having modest fortunes or no fortune at all. This means that inequality of income from capital may be greater than inequality of capital itself. The basic observation requiring such a distinction comes from the fact that the mechanisms advanced to explain the observed evolution of these components are very different.
The unequal income from labor comes from different skills, education, and labor markets. The unequal income from capital comes from savings, investment, inheritance, and operations in real estate and financial markets (Piketty, 2014, pp. 238, 242–243). Regarding classes, Piketty chose categories based on statistical concepts on the grounds that they must be exactly the same in different societies and allow objective comparisons across time and space. To do so he resorted to the concepts of decile (10%), centile or percentile (1%), and thousandth centile (0.1%) of an adult population (minors are excluded), and then defined the lower class as the bottom 50%, the middle class as the middle 40%, and the upper class as the top 10%. The top decile is broken down into two subgroups: the dominant class consists of the upper centile, or the 1%, and the remaining nine centiles form the wealthy class, also named the well-to-do. The top centile can be further broken down into
the top thousandth centile, or the top 0.1%. However, he does make it clear that his definitions of income classes are just one set of choices among others, making no claim that they are better or worse, only that they are the most appropriate for the kind of analytical description he intends to pursue (Piketty, 2014, pp. 250–255). Piketty’s classes are, nonetheless, just very simplified strata of percentiles easily derived from the Lorenz curve. For instance, the Lorenz curve appearing in Fig. 2.1 (see p. 60 above) represents the income distribution of a hypothetical population from which a cursory examination using a common ruler, or simply the eye, roughly shows that the bottom 50% of the population it represents, as indicated by the x-axis of the Lorenz curve, receives only about 10% of the total income, as indicated by the corresponding value on the y-axis. The top 10%, whose geometrical locus is where the x and y axes of the Lorenz curve in Fig. 2.1 are respectively equal to 90% and 55%, grabs 100% − 55% = 45% of the total income of that hypothetical population. From this description it is clear that Piketty’s classes overlap to some extent with the ones defined in econophysics (see above p. 98). For instance, both define the top 1% in almost exactly the same manner, the difference being that in econophysics the 1% is analytically associated with the Pareto segment of the income distribution, whereas Piketty does not attach any analytical pattern to this income class hierarchy. On the other hand, an important difference is that classes in econophysics are generally based on household surveys where averages are defined for the whole population, minors included. In addition, whereas Piketty defines very clearly what he means by middle class, econophysicists have not yet provided an equivalently unique and clear definition of middle class in terms of analytical income patterns.
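The reading-off procedure just described can be sketched numerically. The snippet below is only an illustration, not code from the book: it builds a Lorenz curve from a synthetic lognormal income sample (the distribution and its parameters are arbitrary choices) and extracts the income shares of Piketty's bottom 50%, top 10%, and top 1% classes.

```python
import numpy as np

def lorenz_shares(incomes):
    """Return (cumulative population fraction, cumulative income share) arrays."""
    x = np.sort(np.asarray(incomes, dtype=float))
    shares = np.cumsum(x) / np.sum(x)            # y-axis of the Lorenz curve
    pop = np.arange(1, len(x) + 1) / len(x)      # x-axis of the Lorenz curve
    return pop, shares

# Hypothetical population: lognormal incomes (illustrative parameters only).
rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.0, sigma=0.8, size=100_000)

pop, shares = lorenz_shares(incomes)
bottom50 = shares[np.searchsorted(pop, 0.50)]      # income share of the bottom 50%
top10 = 1.0 - shares[np.searchsorted(pop, 0.90)]   # income share of the top 10%
top1 = 1.0 - shares[np.searchsorted(pop, 0.99)]    # income share of the top 1% ("dominant class")
print(f"bottom 50%: {bottom50:.1%}  top 10%: {top10:.1%}  top 1%: {top1:.1%}")
```

For a lognormal sample like this one, the bottom 50% receives roughly a fifth of total income while the top 10% takes about a third, a crude analogue of the eyeball reading of Fig. 2.1 described above.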
Once the income classes were defined, Piketty was able to find some important patterns in the data. The main ones are summarized below (from Piketty, 2014).

(a) The distribution of capital ownership and income from capital is always more concentrated (more skewed) than the distribution of income from labor. This was found in all countries and in all periods where data are available. Usually, the top 10% of the labor income distribution receives from 25% to 30% of total labor income, whereas the top 10% of the wealth distribution always owns more than 50% of all wealth. On the other hand, the bottom 50% of the labor income distribution receives about 25% to 35% of total labor income, as much as the top 10%, whereas the bottom 50% of the wealth distribution owns less than 5% of all wealth, that is, virtually nothing. So half of the population has practically no ownership whatsoever (pp. 244, 257).

(b) In every society, even the most egalitarian, the top 10% is a world unto itself. Even more so the top 1%, as they tend to live in the same cities and the same neighborhoods, being a group large enough to significantly influence the social, political, and economic order (pp. 252–254).


(c) Hierarchies of income are not the same as those of wealth. The top 10% or bottom 50% of the labor income distribution are not the same people constituting the top 10% or bottom 50% of the wealth distribution. This happens because people having large fortunes do not work and, hence, are at the bottom of the labor income hierarchy (pp. 254–255). One should note that the ability to separate both hierarchies of income in quantitative terms leads to a definitively more precise description of the income distribution than what has been done so far in econophysics, which basically studies only the income distribution from labor (see Section 3.2.3).

(d) In most countries women significantly populate the bottom 50% of earners (p. 256).

(e) The distribution of capital ownership is extremely inegalitarian everywhere, even in Scandinavian countries in the 1970s and 1980s. Piketty wrote that “no society has ever existed in which ownership of capital can reasonably be described as ‘mildly’ inegalitarian . . . where the poorest half of society would own a significant share” (p. 258).

(f) The top 10% of the wealth distribution is even more unequal than the top 10% of the wage distribution (p. 259).

(g) The poorest half of the population in Europe were as poor in 1910 as they were in 2010, owning less than 5% of the total wealth. But, whereas in 1910 the middle 40% were as poor as the bottom 50%, in 2010 they had managed to climb from below 5% to about 25% to 30% of all wealth. So, this patrimonial, or propertied, middle class was a major innovation of the twentieth century (pp. 260–262).

(h) In Europe around 1900 the top 10% of the population owned 90% of wealth, whereas the top 1% owned 50%. It is not inconceivable that this situation will return, possibly in the whole world (see Section 3.4).

(i) There are two general concepts for characterizing societies with high levels of inequality such as described in item (h) above.
A society of rentiers is one where inheritance dominates the high incomes from capital, whereas a hypermeritocratic society, an invention of the USA, is one where very high labor incomes dominate, that is, a society whose top 0.1% of incomes is made up of superstars and, mainly (≈70%), supermanagers (pp. 302–303, 410–424). If these two logics combine, future societies may be even more unequal than those of Europe in the nineteenth century (see Section 3.4).

3.2.3 Household Surveys vs. Tax Data

Economists and econophysicists who work on the subject of income distribution usually use survey data of regions or countries in order to carry out their studies. Most, if not all, of the income distribution functions discussed in the previous
chapter were derived from datasets of this kind. Such surveys are samples taken from a limited number of households such that, once extrapolated under appropriate weights, they supposedly become statistically representative of the regions where the limited samples were collected. Nevertheless, these surveys have a severe limitation since wealth is self-reported, which means that they may underestimate the largest fortunes as well as the income from capital. Piketty was mostly interested in studying the top incomes, particularly the top 10% and 1% of the income hierarchies of populations, because these are the groups where wealth is overwhelmingly concentrated, and so are the incomes from capital and labor. But, being aware of the limitations of household surveys, he avoided them and collected data from other sources. The datasets that allowed him to compute historical series, establish the long-term evolution of income and wealth distributions, and reach conclusions about the observed patterns of inequality outlined above came from three sources: tax data on wealth, tax data on inheritance, and income tax returns (Piketty, 2014, pp. 18, 258). The serious limitation of household surveys in describing the largest incomes is not a small problem. For instance, as mentioned above (p. 102), by using tax data instead of household surveys to infer the largest incomes, the Gini index of Brazil jumps from about 0.55 to 0.7, a very significant increase. Therefore, it is clear that tax data reveal much higher top income levels than do household surveys.

Clearly, household surveys, which are often the only source used by international organizations (in particular the World Bank) and governments for gauging inequality, give a biased and misleadingly complacent view of the distribution of wealth. (Piketty, 2014, p. 330)

Hence, it is reasonable to assume that all parameters of income distribution functions obtained from fitting household survey data discussed in the previous chapter are subject to systematic errors, probably strongly underestimating the high end of the distributions. However, once we become aware of this problem it can be mitigated by applying appropriate corrections to the fitted parameters in order to decrease and/or eliminate the systematics, as has been done by Souza and Medeiros (2015) and Souza (2016) in the study of the top incomes in Brazil from the 1920s to the 2010s, as well as in another example to be discussed in the next chapter using econophysical tools. These studies showed that those systematics underestimate the Gini coefficient by roughly 10% to 20%, percentages that need to be added to the Gini values obtained from household surveys to properly account for the top incomes (see Chapter 4, Section 4.4 and Souza, 2016, fig. 42). Nevertheless, household surveys do contain sociodemographic data on families, such as age and educational level, which allow researchers to ask questions about distributional changes, such as how demographic changes impact inequality.
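The kind of correction just described can be illustrated with a toy computation. This is only a sketch, not the actual procedure of Souza and Medeiros: it estimates a Gini coefficient from a hypothetical survey-like sample and then inflates it by the midpoint of the 10% to 20% range quoted above.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient from a sample, via the sorted-rank formula."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)   # 1-based ranks of the sorted incomes
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

# Hypothetical "survey" sample (illustrative lognormal parameters only).
rng = np.random.default_rng(1)
survey = rng.lognormal(mean=9.5, sigma=1.0, size=50_000)

g_survey = gini(survey)
# Illustrative top-income correction: the text reports that ignoring tax data
# underestimates the Gini by roughly 10%-20%; apply the midpoint as a crude fix.
g_corrected = 1.15 * g_survey
print(f"survey Gini: {g_survey:.3f}  corrected: {g_corrected:.3f}")
```

The 1.15 factor is, of course, only a stand-in for the survey-specific corrections derived in the studies cited above.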


On the other hand, tax data are not problem free. As reviewed in more detail by Pressman (2016, ch. 2), tax data are not consistent among countries and do not allow researchers to make amendments due to changes in family size, for instance, for couples who got divorced and then moved down the income and wealth hierarchies. And there is also the additional problem of tax evasion, a problem that seems to be much on the increase lately. Thus, both data types have advantages and disadvantages, and it seems that this is a situation in which the two types of datasets can, and should, be used to complement one another. Inasmuch as one of the goals of the econophysics of income distribution is to determine the dynamic origins of the analytical functions that describe the observed pattern in income distributions, once this is achieved it will likely complement studies such as those of Piketty.

3.2.4 Synthetic Indices vs. Distribution Tables

Piketty’s quantitative analysis of inequality uses a methodology that entirely disregards the synthetic inequality indices, like the Gini coefficient or Theil index, mostly used in the characterization of income distributions by both economists and econophysicists. He chose to focus on distribution tables, defining the income classes in terms of deciles (10%), centiles (1%), and thousandth centiles (0.1%) of an adult population. So his entire discussion of inequality is based on tables that compare the top deciles or centiles of income and wealth with the rest, or the top deciles with the bottom 50% of the income or wealth distribution, and so on. He based his criticism of synthetic indices on the grounds that they oversimplify the problem by mixing up things which should not be mixed. This particularly applies to the distinction between income from labor and income from capital, as in those indices they generally appear together.
However, these two types of income follow very different patterns and result from different economic mechanisms and, therefore, cannot be included in the same index without unjustifiably simplifying the problem at hand (Piketty, 2014, pp. 243, 266). These are very good points, but the problem lies not with the indices themselves, but with the way they are applied. There is no problem in using those indices, provided one makes the distinction between the different income types that Piketty identified in his treatise, as has been done in the research of Oancea et al. (2018). Actually, he pointed out such a distinction in the paragraph just above the one in which he strongly criticized those indices (Piketty, 2014, p. 266). From an econophysical viewpoint, it seems to be a fact that Piketty’s theory of wealth is at the moment better developed conceptually than most econophysics theories of wealth. Econophysicists have basically been investigating the income distribution of labor and there is not yet a dynamic econophysical theory linking
both types of income (the model discussed in Chapter 4, Section 4.3 is a possible exception). Actually, in light of Piketty’s theory this is something that econophysicists ought to pay closer attention to from now on. But his criticisms do not invalidate the use of those indices, especially because even Piketty did not provide an analytical mechanism explaining why his laws of capitalism (see below) generate the income distribution patterns we observe in the income from labor, that is, the specific analytical forms discussed in the previous chapter. Besides, corrections can in principle be made to those indices such that the difference between the top incomes and the rest of society becomes clearly identified. An example of such a correction is discussed in Chapter 4, Section 4.4. Nevertheless, Piketty’s distribution tables do make his work easier to understand, especially to the layperson, because:

[Distribution tables] force everyone to take note of the income and wealth levels of the various social groups that make up the existing hierarchy . . . rather than by way of artificial statistical measures that can be difficult to interpret . . . [and] give an abstract and sterile view of inequality, which makes it difficult for people to grasp their position in the contemporary hierarchy. (2014, p. 267)

So, although he criticizes other methods of describing the income and wealth distribution as being “hardly neutral” because they ignore the top end, one could say the same about his methodological choices, since they focus on the top end (see also Medeiros and Souza, 2015). Piketty also criticizes the synthetic indices on the grounds that “they obscure the fact that there are anomalies or inconsistencies in the underlying data, or that data from other countries or other periods are not directly comparable” (2014, p. 267). Again, this criticism has more to do with the way the synthetic indices have been used, misused, or not properly presented than with their usefulness. These indices do describe something that is going on in the income distribution, and the ability to use them to compare different countries or periods depends on the theory, not yet available, that explains the origins of the specific analytical forms of these distributions. In addition, if there are indeed anomalies and inconsistencies, these can be addressed by providing the synthetic indices with error bars. In fact, it must be duly noted that one cannot find any uncertainty in the data points of all the graphs in Piketty’s lengthy book, although he does discuss possible uncertainties here and there in an unsystematic form, with a few vague guesses about their possible magnitudes (e.g., Piketty, 2014, ch. 8, pp. 271–303). And although any serious reader of Piketty’s book becomes immediately aware that the task of gathering, homogenizing, and analyzing the historical data, placing it in the proper social contexts of various countries, and extracting patterns was certainly Herculean, the absence of uncertainties and error margins in the data does indicate an important
limitation in his study. On the other hand, it is not even clear whether uncertainties and errors can be estimated from the available data, especially considering that quite a few of the datasets span several centuries. Nevertheless, whatever the case, providing uncertainties and errors for historical data is certainly desirable because they could either reinforce or impair some of Piketty’s conclusions, especially that the capital share tends to follow the capital/income ratio in the long run (see p. 127 below). In this respect, it is worth noting that it is very unusual to find uncertainties or error bars in data presented by economists. They generally seem to have no concern whatsoever for taking this essential point of their analyses seriously. It is also very difficult to find a discussion regarding the possibility, or not, of estimating uncertainties, even in general terms, or the possible presence of systematic errors in the data (see Chapter 4, Section 4.1.1). Apparently, economists have not yet realized how critical uncertainties and systematics are as far as data interpretation is concerned. Physicists have known for a very long time that, once they are estimated, uncertainties and systematics can either reinforce or entirely dismiss a theory. In the end, the desire to make his results more accessible to the non-specialized public, something that certainly ought to be praised, had an important weight in Piketty’s choice of presenting his results in the form of distribution tables. But these tables are less suitable for the analytical discussion of income distribution currently made by econophysicists.
It is clear from the previous chapter that the income distribution of the lower and middle classes follows a definitively different pattern from that of the rich and super-rich, and, hence, the distribution tables of income inequality are not able to expose those patterns, a feature that makes them less interesting for the econophysical approach to income distribution. But one cannot dismiss them either, and the rightful conclusion one can reach under the epistemological viewpoint taken in this book is that both descriptions of income and wealth distributions, synthetic indices and distribution tables, complement rather than exclude one another.

3.3 Pluralistic Economic Theory

To unveil the mechanisms behind the observed data patterns, Piketty used all the theoretical tools at his disposal, from classical political economy to neoclassical concepts. He started his exposition on inequality with questions similar to those posed by the classical authors, citing the concerns of Malthus, Ricardo, and Marx about the long-run evolution of the distribution of wealth and the class structure of society. When trying to understand the evolution of the quantities derived from the observed data patterns, he summoned typical neoclassical reasoning,
like elasticity¹ and marginal products of capital and labor (see Section 3.4.7). He also reached the very interesting conclusion that Marx’s principle of infinite accumulation is in fact a special case of the second fundamental law of capitalism, and discussed why this apocalyptic prediction did not occur. At least, not yet (see Eq. 3.37 in Section 3.4.8). So, he did not spare any theoretical tool when trying to analyze his data, applying without prejudice all the theoretical assets available in economics he thought capable of shedding light on the underlying dynamic patterns he encountered, using as needed both classical and neoclassical concepts, but criticizing both when necessary by showing their limitations and even discarding some of the models proposed by both theories. This makes the book essentially pluralistic in nature as far as theory is concerned, a very important quality since it seems that a not irrelevant portion of modern analysis in economics tries to ignore anything related to classical political economy, or the so-called heterodox thinking in economics. Hence, Piketty’s approach reinforces the viewpoint extensively discussed in Chapter 1 about the epistemological problems of academic economics and the possible way out of its current theoretical crisis, which most likely includes the acceptance that all schools of thought in economics have something to contribute to the understanding of economic phenomena and none ought to be ignored, especially in teaching. And that it is inevitable that some portions of the economic thought of all schools will be proven wrong and eventually discarded.

3.4 Fundamental Laws of Capitalism

Piketty’s conclusions about inequality in capitalism are essentially based on a theory consisting of relationships among macro variables, or aggregates in

¹ Elasticity is a concept widely used in modern economics whose essence is very simple. It measures how one

variable, say q, changes in response to changes in another variable, say p, that causally influences the first, that is, q = q(p). Mathematically, it is the percentage change of one variable in terms of the percentage change of another, being, therefore, a dimensionless ratio. So, the elasticity of q at a certain value p is written as:

$$E_q(p) = \frac{dq/q}{dp/p} = \frac{p}{q(p)}\,\frac{dq}{dp} = p\,\frac{d}{dp}(\ln q) = \frac{d[\ln q(p)]}{d(\ln p)} = \lim_{x\to p}\left[\frac{q(x)-q(p)}{x-p}\,\frac{p}{q(p)}\right] = \lim_{x\to p}\left[\frac{1-\dfrac{q(x)}{q(p)}}{1-\dfrac{x}{p}}\right] \approx \frac{\%\Delta q(p)}{\%\Delta p}. \qquad (3.1)$$

If p is the price of a certain product sold at a certain quantity q, then E_q(p) is called the price elasticity of demand, or demand elasticity. It measures the responsiveness, or elasticity, of the quantity demanded of a certain product to changes in its price. Note that neoclassical economics treats this concept in static terms, that is, both p and q are compared at two different times, i.e., without any dynamic knowledge or supposition about how both quantities evolve in time. This is why this method is called comparative statics.
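A quick numerical check of Eq. (3.1), offered purely as a toy example (the demand curve and its parameters are invented for illustration): a constant-elasticity demand curve q(p) = A p^(−ε) should return the elasticity −ε at every price.

```python
def elasticity(q, p, dp=1e-6):
    """Point elasticity E_q(p) = (p / q(p)) * dq/dp, via a central difference."""
    dq_dp = (q(p + dp) - q(p - dp)) / (2.0 * dp)
    return p / q(p) * dq_dp

# Constant-elasticity demand curve q(p) = A * p**(-eps); A and eps are
# arbitrary illustrative values, so E_q(p) should equal -eps at every price.
A, eps = 120.0, 1.5
q = lambda p: A * p ** (-eps)

for p in (0.5, 2.0, 10.0):
    print(f"p = {p:5.1f}  E_q(p) = {elasticity(q, p):+.4f}")  # all close to -1.5
```

That the result is the same at every price illustrates why such curves are called constant-elasticity (or isoelastic) demand functions.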


economic parlance, modeled from empirical data. From a physicist’s viewpoint they can be interpreted as state equations, or state functions, in the classical thermodynamics sense, that is, relationships among macro variables that define the macro state of the system (Lemons, 2008, ch. 1; Kondepudi and Prigogine, 2015, p. 5). In other words, the quantities upon which his models are based could be seen as state variables of the economic system. Following the presentation logic set out at the beginning of this chapter, the paragraphs below will present in a synthetic way Piketty’s fundamental definitions, quantities, and expressions in their original notation wherever applicable, noting that some of the quantities defined by him use the same symbols adopted for other definitions in the previous chapter. To avoid confusion, the notation will be changed as needed in later chapters, or the use of the same symbol to represent different quantities will be clearly stated. An important point to note regarding economic models is that, lamentably, dimensional analysis is so neglected by economists as to be virtually ignored in the economic literature, apart from, seemingly, very few and isolated discussions of this essential subject (Parry Lewis, 1963; Abrassart, 1967; de Jong and Quade, 1967; Rader, 1972; Neal and Shone, 1976; Shone, 2002, section 1.3; Barnett, 2004; Folsom and Gonzalez, 2005; Fröhlich, 2011; Grudzewski and Rosłanowska-Plichcińska, 2013; Kim, 2015; Estola, 2017, ch. 2; Texocotitla et al., 2018). Physics, however, is fundamentally a science of measurement and, therefore, econophysics cannot do without incorporating dimensions into economic quantities. Hence, some dimensional analyses of Piketty’s expressions and laws will also be presented in what follows.
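The kind of dimensional bookkeeping advocated here can be mechanized in a few lines. The sketch below is an illustration, not from the book: dimensions are tracked as (money, time) exponent pairs, and the code checks, for example, that a rate of return Y_K/K carries a pure "per unit time" dimension.

```python
# Dimensions tracked as (money, time) exponent pairs: [M] = (1, 0), [T] = (0, 1).
MONEY, TIME, DIMLESS = (1, 0), (0, 1), (0, 0)

def mul(a, b):
    """Multiplying two quantities adds their dimensional exponents."""
    return (a[0] + b[0], a[1] + b[1])

def inv(a):
    """Inverting a quantity negates its dimensional exponents."""
    return (-a[0], -a[1])

INCOME = mul(MONEY, inv(TIME))   # income Y is a flow: [Y] = [M][T]^-1
CAPITAL = MONEY                  # capital K is a stock: [K] = [M]

# The rate of return on capital, r = Y_K / K, should carry a pure
# "per unit time" dimension; this is a dimensional consistency check:
RETURN = mul(INCOME, inv(CAPITAL))
print(RETURN)   # (0, -1), i.e. dimension [T]^-1
```

Adding two quantities is only dimensionally legitimate when their exponent pairs coincide, which is exactly the homogeneity requirement invoked in the discussion of the stock and flow equations below.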
3.4.1 Basic Quantities and Their Measurements

Let us start with the two most basic economic quantities, income and capital, and then move on to other quantities extensively used by Piketty to discuss the flow of income from capital, the capital return, and the share of income from capital. Income Y is a flow that corresponds to the quantity of goods produced and distributed in a given time interval Δt. It is dimensionally defined as [Y] = [M][T]⁻¹, where [T] is the time dimension and [M] is the money, or currency, dimension (Parry Lewis, 1963; Neal and Shone, 1976, p. 63). Since this time interval is generally taken to be a year,² then [T] = [Δt] = a. Considering that goods are generally traded using monetary transfers in some currency, let us

² The traditional symbol yr used in English-speaking countries to denote year as a unit of measure has been

replaced in scientific publications by the first letter of the word annum – year in Latin.

118

Piketty’s Capital in the Twenty-First Century

express [M] in terms of generic currency units, denoted here by the generic currency sign ¤ (see p. 58 above), so [M] = ¤. As a consequence of these choices of measuring standards, income is dimensionally written as [Y] = ¤ · a⁻¹. In addition, as income changes with time, it is functionally expressed as Y = Y(t). Capital K is a stock that corresponds to the total wealth owned at a given time t. This stock comes from the wealth appropriated or accumulated in all previous years combined, so it is expressed only in terms of the money dimension, that is, [K] = [M], since a stock does not have a dimension of time (Parry Lewis, 1963; Neal and Shone, 1976, p. 63). So, following the measuring dimensions chosen above, we have that [K] = ¤. Now, if capital at a certain time t is an accumulation from income and an appropriation from a previous time t₀, it may be written as follows (e.g., Sørensen and Whitta-Jacobsen, 2010, p. 62):

$$K(t) = \tilde{S}(t) + K(t_0), \qquad (3.2)$$

S is the economic saving, where K(t0 ) is the capital accumulated up to time t0 and ( that is, the products and goods that the economy has saved during the time interval

t = t − t0 > 0. Saving can be roughly defined as income minus consumption during a time interval, so it has dimension of flow (Frank et al., 2015, pp. 209–212). This expression is certainly a simplification as it does not take into account items like capital depreciation, investment, exports and government spending, but we shall not delve into such details for now. It is important to note that in some texts the concept of savings S% seems to be considered different from the concept of saving ( S (note the final ‘s’). There seems to be no agreement as to the best use of both terms (see notes 3 and 4 below), this possibly being a consequence that both quantities are used in economic texts without being defined dimensionally, a situation that leads to confusion (at least to physicists) and inconsistencies. In some contexts S% is viewed as an observable defined as an accumulated quantity at a certain time t, being therefore a stock, whereas ( S is considered a flow since it occurs in the period t. If one follows these definitions, as will be done here from now on, these two quantities have the % = and [( S] = · a−1 . Therefore, they are related by following dimensions: [S] the dimensionally homogeneous expressions below: % = S(t)

t

( S(t) dt,

(3.3)

t0

)t sY dS%)) S= , ) =( ) dt 100 t0

(3.4)
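As a numerical illustration of the stock/flow distinction in Eqs. (3.3) and (3.4), with invented figures (a constant income flow of 100 currency units per year and a savings rate of 12%), the saving flow accumulates year by year into the savings stock:

```python
# Stock vs. flow: the saving flow (Eq. 3.4) accumulates into the
# savings stock (Eq. 3.3). All figures here are invented illustrations.
s = 12.0          # savings rate, in percent
Y = 100.0         # income flow, currency units per year (held constant here)

S_hat = s * Y / 100.0                      # saving flow, currency units per year
years = 10                                 # accumulation interval t - t0, in years
S_bar = sum(S_hat for _ in range(years))   # discrete version of the integral in Eq. (3.3)

print(S_hat, S_bar)   # 12.0 120.0
```

Holding Y constant turns the integral in Eq. (3.3) into a simple sum; a realistic income series would replace the constant flow.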

3.4 Fundamental Laws of Capitalism


where s is the economic savings rate in the period Δt.³ This quantity measures the percentage rate at which wealth grows, since capital, or wealth, grows based upon how much saving took place, being dimensionally given in percentage, [s] = %. In other words, a savings rate is the amount of currency, expressed as a percentage or ratio, that someone deducts from his/her personal income to set aside as accumulated cash for later use. If nothing gets saved, then Ŝ = 0 and wealth does not grow.⁴ Also note that Eq. (3.2) is dimensionally inhomogeneous, since it mixes up stocks and flows. That indicates that this expression probably involves a hidden constant, necessary in order to make it dimensionally homogeneous. We shall see one possible way of making dimensional sense of this expression (see p. 130 below).

Income can be of two types: income from capital, denoted by Y_K, and income from labor, given by Y_L. The dividing line is, of course, blurred since, for instance, self-employment and entrepreneurial income are hard to break down, but the division is generally possible empirically. Clearly [Y_K] = [Y_L] = [M] · a⁻¹. The national income is the total income at the disposal of the residents of a country in a year, including industrial profits, land and building rents, and wages, that is, both capital and labor incomes. It is closely related to the gross domestic product (GDP), but differs from it by subtracting capital depreciation (∼ 10%) and adding foreign income (Piketty, 2014, p. 43). As it can be divided into capital and labor incomes, we may write its general functional relationship as:

Y = Y(Y_K, Y_L),    (3.5)

leaving the specifics of this function for later discussion. Let α be the share of income from capital, defined as follows (Piketty, 2014, p. 203):

α = 100 Y_K/Y.    (3.6)

Similarly, the share of income from labor yields:

γ = 100 Y_L/Y.    (3.7)
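A one-line check that the two shares defined in Eqs. (3.6) and (3.7) always exhaust the national income, under the split Y = Y_K + Y_L implicit in Eq. (3.8) below (the figures are invented):

```python
# Capital and labor shares, Eqs. (3.6)-(3.7), with invented figures
# and assuming the national income splits as Y = Y_K + Y_L.
Y_K, Y_L = 30.0, 70.0
Y = Y_K + Y_L

alpha = 100 * Y_K / Y   # capital share, Eq. (3.6)
gamma = 100 * Y_L / Y   # labor share, Eq. (3.7)

assert alpha + gamma == 100.0   # the two shares exhaust the income
print(alpha, gamma)             # 30.0 70.0
```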

3 Acemoglu (2009, p. 35, eq. 2.10) defines the savings rate by the same expression as on the right-hand side of Eq. (3.4), apart from the normalizing number 100, but calls Ŝ savings. Sørensen and Whitta-Jacobsen (2010, p. 62, eq. 12) also define the savings rate as on the right-hand side of Eq. (3.4), but call Ŝ saving (line 2, from top to bottom).
4 Piketty and Zucman (2014) adopt the term saving rate, whereas Piketty (2014) uses both terms, savings rate (e.g., pp. 33, 166) and saving rate (e.g., pp. 174, 187). One could argue that the savings rate is the amount of saving that took place during the interval Δt, whereas the saving rate is the amount of saving that takes place at a specific time t. Hence, they would basically be the same quantity.


Piketty’s Capital in the Twenty-First Century

It follows from these definitions that both of these shares are quantities measured in percentages (see Section 3.4.3), that is, [α] = [γ] = %, and they are then related as:

α + γ = 100.    (3.8)

Piketty defined the rate of return on capital, or capital return, r as follows:

The rate of return on capital measures the yield on capital over the course of a year regardless of its legal form (profits, rents, dividends, interests, royalties, capital gains, etc), expressed as a percentage of the value of capital invested. It is therefore a broader notion than the “rate of profit” and much broader than the “rate of interest,” while incorporating both. (Piketty, 2014, p. 52)

Hence, it may be written as:

r = 100 Y_K/K.    (3.9)

This expression is simply the percentage of income generated by the existing stock of capital, or how much the invested capital returns. Similarly, the wage rate, or rate of return from labor, is given as:

A = 100 Y_L/L,    (3.10)

where L is the cost of the labor force, a stock formed by all wages paid to the working people. Hence, [L] = [M]. The cost of labor in a certain time period is the wage bill (see below). From the definitions above one could be justified in arguing that the dimension of both r and A is % · a⁻¹. However, if we assume both of these dimensional choices for r and A, serious difficulties would arise in some of Piketty’s results, because his expressions would become dimensionally inhomogeneous. Let us postpone this point until Section 3.4.3.

3.4.2 Capital/Income Ratio

The capital/income ratio β is defined as the total stock of capital K divided by the annual flow of income Y. It may be written as:

β = 100 K/Y.    (3.11)

According to Piketty, this quantity is phenomenologically “the most natural and useful way to measure the capital stock in a particular country” (2014, p. 50).


Figure 3.1 Capital/income ratio β for three European countries from 1870 to 2010. There is a clear U-shape in the graph from 1910 to 1950, and then a steady increase. At the beginning of the twenty-first century the capital/income ratios have reached values similar to those at the end of the nineteenth century. Graph produced by this author reusing, under permission, the data from Piketty (2014, fig. I.2, p. 26)

Fig. 3.1 shows the capital/income ratio in three European countries from 1870 to 2010. There is a clear downturn in the data starting around World War I and then a steady recovery, so that by 2010 it had reached levels prevalent at the end of the nineteenth century. These results form part of the empirical basis from which Piketty argues that the destruction provoked by the two world wars and the budgetary shocks produced by them led to a decrease in inequality, particularly after World War I, an effect that began to be reversed in 1950, so that β has now reached values in these countries only seen a century ago, with the consequent increase in inequality (more on this below). Fig. 3.2 shows the data for the world capital/income ratio until 2010, plus its projection until 2100 under estimated demographic and economic growth. These projections are uncertain, but if correct, by 2100 the entire world would look like Europe at the end of the nineteenth century.

3.4.3 Dimensions

Throughout his book Piketty follows most economists in not making clear the dimensions of some of the quantities he uses, particularly the possible dimensional


Figure 3.2 World capital/income ratio β from 1870 to 2010, extrapolated to 2100. Most observed data before World War II come from European countries and the USA, as other countries started to collect data much later. Even so, the world series follows the same U-shaped downturn, starting on the eve of World War I and lasting until the beginning of its recovery by 1950. The projections beyond 2010 are based on predictions about demographic and economic growth and are, therefore, quite uncertain. However, these extrapolations indicate that β would continue to rise to a level at which the entire world would look like Europe at the end of the nineteenth century, although this is, of course, just one possibility among others. Nonetheless, one cannot fail to note that radical, or revolutionary, socialism made its political debut at more or less the same time as β was at its highest level (and growing) in Europe, i.e., the second half of the nineteenth century (see Fig. 3.1). Graph produced by this author reusing, under permission, the data from Piketty (2014, fig. 5.8, p. 196)

differences for r, α, and β, since he stated that they are all given in percentages, a situation that leads to dimensional ambiguities and inconsistencies in his results. In addition, several expressions are not explicitly multiplied by 100, as required for quantities given in percentages (e.g., Piketty, 2014, pp. 52 and 584, n. 13). As an example of such dimensional ambiguity, if one takes Eq. (3.11) as given, the dimension of the capital/income ratio is [β] = % · a. However, if β is a quantity given in percentages only, then a constant having the dimension of time is missing on the right-hand side of Eq. (3.11).⁵
5 As already mentioned (see p. 57 above), strictly speaking a quantity expressed in percentages is dimensionless,

but to give a coherent account throughout this book, dimensionless quantities explicitly given in percentages will be referred to as having dimension of percentage. Hence, if a certain quantity x is said to have dimension of percentage, it will be denoted as having dimension [x] = %. Since x% means in fact the fractional number x/100, then x/100 is dimensionless, that is, [x/100] = 1. In addition, considering that this book


Hence, given this lack of dimensional clarity, some dimensional choices will be made by this author in what follows, but they will be stated very clearly, so that if changes are needed they can be made straightforwardly. The main point is that whatever dimensions one chooses, they must be coherently followed in all subsequent expressions. As we shall see, the hidden constant that makes all expressions dimensionally homogeneous comes down to simply inserting a constant with the dimension of time in most expressions.

Bearing the points above in mind, I shall henceforth assume that the capital/income ratio, capital return, and wage rate have dimensions respectively given by [β] = % · a and [r] = [A] = %. All other quantities whose dimensions have already been defined above will be kept as such. The remaining, still undefined, ones will be discussed below. One must stress that the dimensional assumptions above are not unique. They are made here only on the basis of rendering Piketty’s results dimensionally homogeneous, but other assumptions are also possible in principle. It all depends on the interpretations one gives to these observable quantities, and this should always be done under a certain theoretical framework. As one example, in the context of the neoclassical economic growth model, Texocotitla et al. (2018, p. 17) interpreted rates as operators that transform stocks into flows. Although Eq. (3.9) could be viewed as such, to achieve dimensional homogeneity in Piketty’s results we must introduce a constant having the dimension of time. Doing so somewhat upsets the interpretation of rates as operators, unless such a constant is incorporated into the rates.

3.4.4 First Fundamental Law of Capitalism

Combining Eqs. (3.6), (3.9), and (3.11) we reach Piketty’s first fundamental law of capitalism, linking the capital stock to the flow of income from capital as:

r = 100 α/β = αY/K.    (3.12)

This expression is basically an accounting identity, being valid for all countries at all times. There is, however, a dimensional problem in Eqs. (3.9), (3.10), and (3.12) if α, r, and A are to be quantities measured in percentage, a problem which can be solved by assuming that these expressions have a hidden constant.
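As a quick numerical illustration of this accounting identity, the invented figures below (a national income of 100 currency units per year, a capital stock of 600, and a capital income of 30) satisfy Eq. (3.12) exactly:

```python
# Illustrative (invented) figures: Y = national income per year,
# K = capital stock, Y_K = income from capital.
Y, K, Y_K = 100.0, 600.0, 30.0

alpha = 100 * Y_K / Y   # capital share of income, Eq. (3.6)
beta  = 100 * K / Y     # capital/income ratio, Eq. (3.11)
r     = 100 * Y_K / K   # rate of return on capital, Eq. (3.9)

# First fundamental law, Eq. (3.12): r = 100*alpha/beta = alpha*Y/K
assert abs(r - 100 * alpha / beta) < 1e-12
assert abs(r - alpha * Y / K) < 1e-12
print(alpha, beta, r)   # 30.0 600.0 5.0
```

Being an identity among the definitions, the check holds for any positive figures, not just these.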

adopted the maximum probability normalization as being 100%, and to avoid possible numerical mistakes, all quantities in percentage must use their nominal values, which, once divided by 100, produce dimensionally homogeneous expressions by changing them to fractional quantities. Carrying the number 100 in all expressions might be considered annoying, but this author opted for this procedure for the sake of clarity, because most economists neglect basic dimensional concerns.


But, before presenting a solution to this dimensional problem, let us first define economic growth in the present context.

3.4.5 Growth Rate

The economic growth rate g is the percentage change of the national output Ȳ over the course of a time interval, usually one year. The national output is a flow, measuring in some currency unit all goods and services produced by a country in a given time interval; hence, it has the same dimension as the national income. It is not unusual to equate the national income with the national output, i.e., to assume the following identity,

Y(t) = Ȳ(t),    (3.13)

but let us keep them distinct for now. In order to reach a self-consistent expression for g, let us start by considering that the increase in the national output is proportional to its present amount, i.e., dȲ/dt ∝ Ȳ, where the dot will denote time differentiation,

· ≡ d/dt,    (3.14)

from now on. Without loss of generality this proportionality can be represented by the following differential equation,

(1/τ) ln(1 + g/100) = (dȲ/dt)/Ȳ,    (3.15)

whose solution yields

Ȳ(t) = A₀ (1 + g/100)^{t/τ},    (3.16)

where the constant A₀ is the initial value of Ȳ and τ is the time constant giving the time required for Ȳ to increase by a factor of (1 + g/100). For dimensionally homogeneous equations we have [τ] = a, [A₀] = [M] · a⁻¹, and the economic growth rate as a quantity measured in percentage, that is, [g] = %. An annual economic growth rate means Δt = t − t₀ = τ = 1 a, which trivially leads to the following well-known result,

g = 100 [Ȳ(t) − Ȳ(t₀)]/Ȳ(t₀).    (3.17)

Note that although the growth rate g is given in a certain time interval Δt, under the definition above it does not have dimension of time, but of percentage only. So, g is a percentage comparison of income at two different time values, usually separated by one year.
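The consistency between the solution (3.16) and the annual definition (3.17) can be checked with invented numbers (g = 3%, τ = 1 a, and an arbitrary initial output):

```python
# Check that the solution (3.16) reproduces the annual growth rate (3.17).
# Illustrative values: g = 3 (percent per year), tau = 1 (year), A0 = initial output.
g, tau, A0 = 3.0, 1.0, 500.0

def Y_bar(t):
    """National output, Eq. (3.16): Y(t) = A0 * (1 + g/100)**(t/tau)."""
    return A0 * (1.0 + g / 100.0) ** (t / tau)

# Growth rate recovered from two outputs one year apart, Eq. (3.17):
t0, t = 0.0, 1.0
g_recovered = 100.0 * (Y_bar(t) - Y_bar(t0)) / Y_bar(t0)

assert abs(g_recovered - g) < 1e-9
print(g_recovered)  # ≈ 3.0
```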


3.4.6 Piketty’s Central Thesis

In his analysis Piketty concluded empirically that in a capitalist economy the capital return tends to be permanently above the rate of growth of the economy, that is,

r > g.    (3.18)

This result is in fact Piketty’s central thesis, as it leads to the conclusion that “a small gap between the return on capital and the rate of growth can in the long run have powerful and destabilizing effects on the structure and dynamics of social inequality” (2014, p. 77). In physics terminology this means that the distributive dynamics of the economic system does not seem to be at all in equilibrium as far as macroscopic state variables are concerned. We shall discuss in Section 3.5 how exactly the expression (3.18) affects income inequality, but before that let us first return to the dimensional problem indicated above.

As noted above, if we were to express the dimension of the capital return considering only Eq. (3.12), it would have dimension of % · a⁻¹, a result that renders the inequality (3.18) dimensionally inhomogeneous, i.e., dimensionally incorrect. Nevertheless, since Piketty claims that both Eqs. (3.12) and (3.18) are correct, it is necessary to assume that Eq. (3.12) probably involves hidden constants, and this must also be true of Eqs. (3.9) and (3.10). Reasoning purely in terms of dimensional analysis, one possibility for making dimensional sense of Eq. (3.18) is to rewrite Eqs. (3.9) and (3.10) respectively as:

r = 100 (Y_K/K) τ,    (3.19)

A = 100 (Y_L/L) τ.    (3.20)

In this way both returns become quantities measured in percentage, that is, [r] = [A] = %, and L/τ, whose dimension is [L/τ] = [M] · a⁻¹, is the wage bill. Then Eq. (3.12) turns out to be written as:

r = 100 (α/β) τ = (αY/K) τ.    (3.21)

This way of expressing Piketty’s first fundamental law of capitalism brings dimensional sense to Eq. (3.18). Besides, as on an annual basis τ = 1 a, if one is dealing with databases whose time interval between data gathering events might not always be one year, i.e., τ < 1 a or τ > 1 a, that would mean the introduction of imprecisions possibly leading to deviations from τ ≈ 1 a. If such deviations can indeed be found in the data, one might, perhaps, be able to use them to estimate measurement errors in τ in order to calculate uncertainties in Piketty’s


results and add error bars to his graphs. Actually, such possible deviations could also be the source of systematic errors, or biases, of unknown magnitude in his results. Chapter 4, Section 4.1.1 discusses in more detail uncertainties, errors, and systematics in physical and observational measurements.

One must emphasize that Eqs. (3.19), (3.20), and (3.21) are not being justified here in economic terms, as they are purely a result of dimensional analysis. In any case, a sounder economic reasoning possibly needs to consider that the relationship between Y and Ȳ is not an identity, which means a more complex expression than Eq. (3.13), probably including other factors like capital decay and saving, under some theoretical reasoning and/or empirical findings. In this case the hidden constants needed to render Eq. (3.12) dimensionally homogeneous might not be the one included in Eqs. (3.19)–(3.21). But discussing this problem from a more detailed perspective that takes these factors into consideration will not be attempted here, since the discussion leading to Eq. (3.21) aimed simply at emphasizing the importance of taking dimensional analysis seriously in economics, a fundamental subject very much neglected by economists.

3.4.7 Evolution of the Macro Variables

Once the main quantities are defined and some relationships established, the next step in Piketty’s analysis involves the possible stable time evolution of the macro variables. The basic question is how much the last available unit of capital return increases the stock of capital, a problem which leads to resorting to some marginalist tools of neoclassical economics. This is so because this query is equivalent to asking how much the marginal productivity of capital decreases when the stock of capital increases. But the really important question is how fast it decreases.
In other words, the point is to see if there are imbalances among the quantities in the first fundamental law of capitalism (3.21) as it evolves in time. Trying to answer these questions requires focusing on the evolution of the capital/income ratio β, the capital return r, the share of capital in the national income α, and the wage rate A. Piketty analyzes their evolving relationships verbally, in a rather convoluted way (2014, pp. 215–222), but his reasoning might, perhaps, be more clearly presented with the help of a little analysis (2016–2017, pp. 66–68). The additional justification for detailing this here is to give readers unfamiliar with the neoclassical way of thinking a taste of how these economists approach such problems. Let us start by time-differentiating Eq. (3.21). The resulting expression is shown below:

α̇ = (r β̇ + β ṙ)/(100 τ).    (3.22)
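The sign analysis that follows can be probed numerically from Eq. (3.22); the figures below are invented for illustration only:

```python
def alpha_dot(r, beta, r_dot, beta_dot, tau=1.0):
    """Time derivative of the capital share, Eq. (3.22):
    alpha_dot = (r*beta_dot + beta*r_dot) / (100*tau)."""
    return (r * beta_dot + beta * r_dot) / (100.0 * tau)

# Case (1): beta rising (beta_dot > 0) while r falls (r_dot < 0).
r, beta = 5.0, 600.0
# r falls fast relative to the proportional rise in beta -> alpha decreases:
assert alpha_dot(r, beta, r_dot=-0.10, beta_dot=1.0) < 0
# r falls slowly relative to the proportional rise in beta -> alpha increases:
assert alpha_dot(r, beta, r_dot=-0.001, beta_dot=1.0) > 0
```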


The cases of interest occur when: (1) β̇ > 0 and ṙ < 0; (2) β̇ < 0 and ṙ > 0. Analyzing these two situations in the expression above comes down to two possibilities in each case:

(1) β̇ > 0, ṙ < 0:
    a) β̇ < (β/r)|ṙ|   (for α̇ < 0),
    b) β̇ > (β/r)|ṙ|   (for α̇ > 0);    (3.23)

(2) β̇ < 0, ṙ > 0:
    a) |β̇| < (β/r)ṙ   (for α̇ > 0),
    b) |β̇| > (β/r)ṙ   (for α̇ < 0).    (3.24)

Note that both r and β are positive. The first case in both sets (1) and (2) means that r falls faster than the proportional increase in β, whereas in the second situation the capital return decreases more slowly than the proportional increase in the capital/income ratio. One can go further and discuss the responsiveness of the capital/income ratio to changes in the capital return, that is, the elasticity of the capital/income ratio (see n. 1 above). The results are respectively shown below:

E_β(r) < 1, as (r/β)|dβ/dr| < 1,
E_β(r) > 1, as (r/β)|dβ/dr| > 1.    (3.25)

There is also the limiting situation where the elasticity is equal to one. Then:

E_β(r) = 1  ⇒  (r/β)|dβ/dr| = 1  ⇒  |β̇| = (β/r)|ṙ|,    (3.26)

and the last equation on the right indicates that the return on capital decreases in the same proportion as the capital/income ratio increases. The historical evolution of the observed data in Britain and France favors the second cases in both Eqs. (3.23) and (3.24), which in turn favors the second case in Eq. (3.25), as α follows the ups and downs of β (Piketty, 2014, pp. 200, 201, 216; see Fig. 3.3). But, in principle, all cases are possible, depending on the available technologies capable of combining capital and labor in the production of goods consumed by society. The way economists deal with these questions is by means of the concept of a production function, a mathematical expression detailing the set of inputs necessary to produce a set of outputs at a certain technology. In our simplified case


Figure 3.3 Capital–labor split in Britain from 1770 to 2010. This country possesses one of the most complete historical data series, reaching back to the eighteenth century. The cross points are the labor share γ in Eq. (3.7), whereas the cross-and-saltire points are the capital share α of Eq. (3.6). The comparison between the capital share above and the British capital/income ratio shown in Fig. 3.1 suggests a general tendency for α to follow the ups and downs of β. Note, however, that the curves presented in both figures have no uncertainties or error bars, which, if added, could either reinforce or weaken this conclusion. Note also that there is no visibly stable capital–labor split over the very long run, in this case 240 years. Graph produced by this author reusing, under permission, the data from Piketty (2014, fig. 6.1, p. 200)

here, the production function inputs capital and labor and outputs national income. This means writing Eq. (3.5) as:

Y = Y(K, L),    (3.27)

and assuming the identity (3.13). Then one defines the marginal products of capital and labor as follows (Piketty, 2016–2017):

r = 100 τ ∂Y/∂K,    A = 100 τ ∂Y/∂L,    (3.28)

and proposes a somewhat ad hoc functional relationship for Eq. (3.27). The time constant τ multiplied by 100 was introduced to properly balance the dimensions of these equations. The difficulty with this analytical path is that the specific forms of the production function are basically assumed, rather than phenomenologically suggested by the data. In particular, there is a strong tendency to think in simple terms, even if the


real world is not that simple, and, together with some additional ideological choices, to assume the limiting situation given by the expressions (3.26). Let us explain this in more detail. Combining Eqs. (3.6), (3.7), (3.8), (3.19), (3.20) and dividing the resulting expressions by one another yields:

L/K = (100/α − 1) r/A.    (3.29)

Now we can probe the responsiveness of the labor/capital ratio to changes in the capital-return/wage-rate ratio by calculating the elasticity of substitution of capital for labor, or labor for capital. It is simple to reach the following result:

E_{L/K}(r/A) = 1.    (3.30)
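The unit elasticity in Eq. (3.30) can be verified numerically from Eq. (3.29); the value of α and the evaluation point below are invented for illustration:

```python
import math

alpha = 30.0   # capital share, in percent (illustrative value)

def labor_capital_ratio(r_over_A):
    """Eq. (3.29): L/K = (100/alpha - 1) * (r/A)."""
    return (100.0 / alpha - 1.0) * r_over_A

# Elasticity as d ln(L/K) / d ln(r/A), by a central finite difference:
x, h = 2.0, 1e-6
elasticity = (math.log(labor_capital_ratio(x + h)) -
              math.log(labor_capital_ratio(x - h))) / (math.log(x + h) - math.log(x - h))

assert abs(elasticity - 1.0) < 1e-6   # Eq. (3.30): unit elasticity
```

Since Eq. (3.29) is linear in r/A, the logarithmic derivative is exactly one; the finite difference only adds rounding noise.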

This result leads to a stable evolution of the capital–labor split. Since this elasticity is the same as the one obtained in the balanced evolution of the capital return and the capital/income ratio, as given by Eq. (3.26), the share of income going to profits is then supposedly stable in time, that is, the first law holds exactly all the time. Nevertheless, as pointed out above, historical data show that α tends to follow the ups and downs of β (see Fig. 3.3), indicating that the elasticity is bigger than one, and although both cases (1-b) and (2-b) in, respectively, Eqs. (3.23) and (3.24) result in β|ṙ| < r|β̇| and E_β(r) > 1, Piketty (2014, pp. 220–222) points out that over the very long run, a century or more, the predominant tendency is β̇ > 0, ṙ < 0 and α̇ > 0, the case (1-b), despite the drop in β after 1910 and its steady increase since the 1970s.

Eq. (3.29) can also be obtained by assuming the Cobb–Douglas production function. This is a particular functional relationship between K and L that in the current context may be written as follows:

Y(K, L) = τ⁻¹ K^{α/100} L^{1−(α/100)}.    (3.31)

The time constant τ is required in this expression for its correct dimensionality (Kim, 2015). Note that labor in production functions is often written in terms of the labor force, that is, the amount of labor or the number of workers, which means that the cost of labor L is just the labor force times the average wage. So, to write a production function in terms of either the labor cost or the labor force is just a matter of considering a constant term representing an average wage. Using now Eqs. (3.28) to calculate the marginal productivities of capital and labor, it is straightforward to combine the results to obtain Eq. (3.29), from which Eq. (3.30) easily follows (Piketty, 2016–2017, pp. 66–74). The Cobb–Douglas analytical expression for Eq. (3.27) has been particularly favored by economists


because it is simple and avoids going into politically hot debates about who is getting better or worse economically (Piketty, 2014, pp. 219–220).

The results above mean that it is always possible to find new and different ways of using capital in the long run. Think of houses equipped with rooftop solar panels, medical technologies that require even larger investments, etc. One may even imagine, as science fiction literature sometimes does, a fully robotized economy in which machines create machines, that is, where capital would reproduce itself, a situation that corresponds to an infinite elasticity of substitution of capital for labor. So, it is possible to conclude that there are many different ways for capital to serve as a substitute for labor, this being a feature of the modern economy, and that a rise in β will not lead to a significant drop in r, meaning that “no self-corrective mechanism exists to prevent a steady increase of the capital/income ratio, β, together with a steady rise in the capital’s share of the national income, α” (Piketty, 2014, p. 222).

3.4.8 Second Fundamental Law of Capitalism

The study of the very long-term historical data on the capital/income ratio led to the conclusion that there is a dynamic relationship between β and the growth and savings rates of an economy. Piketty named this relationship the second fundamental law of capitalism, which reads as:

β = 100 s/g,    (3.32)

where s is the economic savings rate, defined in Eq. (3.4). Like the economic growth rate in Eq. (3.16), it is a quantity measured in percentage, [s] = % (see p. 119 above). Here, again, the above expression is dimensionally inhomogeneous, a problem that can be fixed in the same way as previously, that is, by rewriting it as follows:

β = 100 (s/g) τ.    (3.33)

In addition, Eq. (3.2) may also be rewritten as:

ΔK(t) = K(t) − K(t₀) = τ Ŝ(t).    (3.34)
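The asymptotic character of the law can be illustrated with a toy accumulation, assuming invented figures (s = 12, g = 2, τ = 1 a) and income growing as in Eq. (3.16): the capital/income ratio then converges to 100 s/g, as stated by Eq. (3.33):

```python
# Toy accumulation: K grows by the yearly saving flow s*Y/100 (Eq. 3.4),
# while income Y grows at the rate g (Eq. 3.16), with tau = 1 a.
s, g = 12.0, 2.0          # savings rate and growth rate, in percent (illustrative)
Y, K = 100.0, 0.0         # initial income flow and capital stock

for year in range(2000):  # "long run": the law is only valid asymptotically
    K += s * Y / 100.0    # add this year's saving to the capital stock
    Y *= 1.0 + g / 100.0  # income grows at g percent per year

beta = 100.0 * K / Y      # capital/income ratio, Eq. (3.11)
assert abs(beta - 100.0 * s / g) < 1e-6   # second law, Eq. (3.33): beta -> 100*s/g
print(beta)               # ≈ 600.0
```

Convergence is slow, on the scale of decades, in line with the asymptotic validity of the law discussed in the text.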

The expression (3.33) means that if a country grows slowly and saves a lot, in the long term it will accumulate a huge stock of capital relative to its income, a fact that will significantly affect the distribution of wealth. Saying the same in a different way, past accumulated wealth acquires a disproportionate importance in an economically quasi-stagnant society (Piketty, 2014, p. 166). Such a mechanism


is also amplified by privatization, that is, the gradual transfer of public wealth into private hands (Piketty, 2014, p. 183). Nevertheless, Piketty made it clear that this is a long-term law, taking several decades to be realized, being therefore valid asymptotically and only if it “focuses on those forms of capital that human beings can accumulate” (Piketty, 2014, p. 169). This means that if the endowment of pure natural resources is high, e.g., land without any improvements like irrigation systems or roads, β can be high without any contribution from savings. In addition, it requires that asset prices (real estate or stocks) evolve on average like consumer prices, since if the former evolve faster than the latter β can be quite high without any new savings. So, this formula describes the same growth path for all macroeconomic quantities, progressing at the same pace over the long run, and does not guarantee a commensurate distribution of wealth or a reduction of inequality in the ownership of capital (Piketty, 2014, p. 232). Since the second fundamental law of capitalism is valid only in the long run, it is unable to explain short-term shocks or bubbles. To make this point Piketty presented empirical data for β in a smaller time window than shown in Figs. 3.1 and 3.2. The evolution of the capital/income ratio for 1970–2010 is shown in Fig. 3.4, where one can identify a constant increase in β for all countries,

Figure 3.4 Capital/income ratio from 1970 to 2010 for the main capitalist industrial countries of this period. The main trend of a constant increase in β is clear, although short-term shocks in the form of bubbles are also clearly visible, the Japanese one from about 1985 to 1990 being the most significant. Graph produced by this author reusing, under permission, the data from Piketty (2014, fig. 5.3, p. 171)


coupled with short-term volatilities in the form of bubbles, that is, a short-term increase followed by a decrease to a level usually slightly higher than where the increase started, such that the overall growing trend remains. The most spectacular bubble is the Japanese one in the period 1986–1991. Britain and Italy also had significant bubbles, in 1997–2003 and 1990–1995, respectively. These countries also experienced bubbles in other time windows, but of smaller amplitudes, and the other countries showed similarly smaller bubbles. To summarize, a closer look at the curves suggests the following time frames:

Δt ≈ 5 a    (bubbles),    (3.35)

Δt ≳ 10 a    (long run).    (3.36)

These timescales should not be taken at face value, since they result from data constrained to a limited time window of only forty years. Nevertheless, considering that the economy tends to a state of equilibrium that is never realized in practice, these empirically suggested time frames, viewed together with the remark that the second fundamental law is only valid in the long run, indicate that Piketty’s second fundamental law of capitalism (3.33) could be rewritten as follows:

lim_{Δt → 10 a} β = 100 (s/g) τ.    (3.37)

Note that in the special case of a quasi-stagnant society that nevertheless keeps on saving, g → 0 and β → ∞, this being essentially Marx’s principle of infinite accumulation, which led him to predict an inevitable falling rate of profit (decreasing capital return) such that “the bourgeoisie digs its own grave.” The point here is that, according to the law (3.21), an ever larger β means an ever smaller r, otherwise α becomes so large that it devours all the national income Y. In the first situation capitalists may tear each other apart in a desperate attempt to counteract the falling rate of profit, whereas the latter case would lead the lower and middle classes into such a destitute state that they would end up having nothing to lose by challenging, and eventually destroying, the entire capitalist system. The only way out of this apocalypse is permanent growth, of productivity and population, in order to compensate for the continuous addition of new units of capital, as predicted by the second law above. So, modern growth based on growing productivity and population, even if in some cases this population growth occurred mainly through immigration, has (so far) avoided Marx’s apocalypse, but it has not altered the basic structures of capital (Piketty, 2014, pp. 227–229, 234).

Another important remark regarding this law is that from an empirical viewpoint s and g are basically macrosocial quantities independent of each other, but varying


Figure 3.5 Share of the top 10% highest incomes in Europe (averaged) and the USA from 1900 to 2010. The share of national income of this group decreased sharply in Europe after 1910 due to World War I, whereas in the USA such a decrease only started after the crash of 1929, both reaching their lowest levels in the period 1970–1980 and then initiating a recovery. The increase in the USA after 1970 is huge, and by 2010 the share of the national income of the topmost 10% earners had surpassed the level of a century ago. Graph produced by this author reusing, under permission, the data from Piketty (2014, fig. 9.8, p. 324)

considerably from country to country (Piketty, 2014, p. 199). In other words, both s and g are independent state variables of the economic system.

3.4.9 Data Pertaining to Rising Inequality

Let us now examine the long-term pattern of inequality arising from Piketty’s historical data, starting with the total income, that is, the sum of income from labor and income from capital ownership.

3.4.9.1 Total Income

Fig. 3.5 shows the share of the national income of the top 10% earners in Europe and the USA. The downturn of their share in the national income had started in Europe by 1914 and was followed two decades later by the USA. A general recovery of the total income share of this group in both regions started in 1970–1980 and continues to this day. In other words, inequality decreased remarkably for about seventy years in Europe and forty years in the USA before starting to increase again. By 2010 the share of the top 10% earners in the national income of the USA had surpassed its figure of a century ago, whereas in Europe such a recovery

Piketty’s Capital in the Twenty-First Century



Figure 3.6 Share of the national income of the top 1% highest incomes in several emerging countries from 1900 to 2010. Although the data are much less complete and out of phase with Europe and the USA, the general trend is very similar. A quick glance at the plots shows that inequality in Colombia is one of the highest in the world. Graph produced by this author reusing, under permission, the data from Piketty (2014, fig. 9.9, p. 327)

is still under way. It is clear from these curves that the topmost 10% earners receive a very sizable portion of the national income, and the increase of their share since the 1970s and 1980s explains the rising inequality in both regions. Fig. 3.6 shows a similar plot for several emerging countries, but due to incomplete data it presents only the topmost 1% incomes of a few countries. The data also have several discontinuities and lacunae, and in some cases data collection started only very recently. This is in striking contrast with Europe and the USA, whose historical data series are much more complete. Despite these limitations, one can note a very similar general trend, although out of phase with Europe and the USA: the decrease in inequality started in 1930–1940 and was followed by an increase in the period 1980–1990. It is clear that a general mechanism for rising inequality is taking place worldwide. In the case of the USA and Britain, the vast majority of top earners are senior managers of large firms whose incomes have exploded because their corporations became much more tolerant of extremely generous pay packages after 1970, a phenomenon followed a decade or two later by European and Japanese firms and, apparently, by the emerging countries as well. Piketty (2014, pp. 314–315, 330–335) called this phenomenon meritocratic extremism and characterized it as a powerful force for divergence of the wealth distribution.


Figure 3.7 Wealth inequality in Europe and the USA from 1810 to 2010. European data are an average of the historical series of France, Britain, and Sweden. Wealth concentration showed no visible trend toward its reduction until the eve of WWI. However, there was a very significant reduction in the period 1910–1970, particularly for the wealth of the European topmost 1%, whose drop was quite dramatic, from about 65% of total capital ownership to 20%. After 1970 the concentration of wealth started to rise again. Looking at the curves one may wonder how high the wealth concentration might have gone had there been no war. Graph produced by this author reusing, under permission, the data from Piketty (2014, fig. 10.6, p. 349)

3.4.9.2 Capital Ownership

Let us now turn to capital ownership only. Piketty concentrated his data analysis on the countries for which there are fairly complete historical estimates: France, Britain, the USA, and Sweden. France was particularly important because it has homogeneous historical sources starting in 1791, allowing a continuous study of wealth distribution since then. Britain, Sweden, and the USA started collecting data of the same quality as France about a century or more later. Other countries, like Denmark, Germany, Australia, and Japan, have much more limited data, but they tend to follow the same general tendency as the four main countries (Piketty, 2014, n. 3, pp. 337, 611). Fig. 3.7 summarizes the wealth share of the top deciles and percentiles in Europe and the USA. It is clear that prior to the shocks of the two world wars (1914–1945) there was a visible tendency toward increasing inequality in capital ownership. In the USA the decrease of wealth inequality after 1910 was less dramatic than in Europe, apparently because the shocks of both wars were less violent. In addition, when analyzing the French data, Piketty made the following remarks.


The magnitude of the changes initiated by the French Revolution should not be overstated . . . [since] the most significant fact is that inequality of capital ownership remained relatively stable at an extremely high level throughout the eighteenth and nineteenth centuries. [Hence,] the French revolution had relatively little effect in the capital/income ratio [and that] both before and after the Revolution, France was a patrimonial society characterized by a hyperconcentration of capital, in which inheritance and marriage played a key role and inheriting or marrying a large fortune could procure a level of comfort not obtainable through work or study. (Piketty, 2014, p. 342)

The historical evolution of wealth is, perhaps, the most important topic as far as inequality is concerned, because the only reason why income inequality decreased in the first half of the twentieth century was the reduction of income derived from capital. Contrary to Kuznets’ optimistic model, inequality from labor did not decrease structurally in the period 1900–1960 (Fig. 3.3). The sharp drop in total income inequality resulted from the collapse of high incomes from capital due to the destruction of capital (bombardment) in both world wars and the budgetary shocks that derived from them in the form of inflation, expropriations, bankruptcies, and taxes of various kinds imposed on interest, profits, dividends, and rents, as well as the rise of progressive income taxes (Fig. 3.2). This is a very important point because the rise of the supermanagers has so far been a phenomenon geographically limited mainly to the Anglo-Saxon world, and since the capital/income ratio is rising again (Figs. 3.1, 3.2, and 3.4) as growth slows down (see below), capital ownership is apparently becoming more and more concentrated again (Fig. 3.7). One should remember that wealth is extremely concentrated in all countries, including the Scandinavian ones. In the past 200 years or so, the top 10% wealth owners possessed from 90% to 60% of all wealth, the middle 40% owned from 5% to 35%, and the bottom 50% of the population owned virtually nothing, less than 5%, which nowadays consists mostly of home items like furniture, possibly an automobile, and very little savings.

3.5 The Mechanisms for Rising Inequality

The data shown above demonstrate unambiguously the extreme concentration of wealth in all European countries during the eighteenth and nineteenth centuries and that the formal nature of political regimes had very little influence on wealth distribution.
Piketty argues that the data, although less precise, allow comparisons with wealth in other societies, leading to the conclusion that, in terms of orders of magnitude, this was the state of affairs in modern agrarian societies, the Middle Ages, and antiquity, that is, an extreme concentration of wealth where the top 10% owned from 80% to 90% of all wealth.


Three questions are posed in that respect. (1) Why were wealth inequalities so extreme, and increasing, before World War I? (2) Why is the wealth concentration now significantly below its historical record? (3) What is the future of wealth concentration? The second question has already been partially answered above. Before the shocks of 1914–1945 the top 10% owned about 90% of all wealth, but afterwards this was reduced to about 60%. The 30% difference, a significant amount, went to the twentieth century’s new phenomenon: a patrimonial middle class composed of almost half the population. Answering the first question also partially explains the second. The main mechanism for the hyperconcentration of wealth in virtually all societies before World War I is in fact Piketty’s central thesis, whose mathematical expression is given by Eq. (3.18). This is a fundamental force of divergence since, while the rate of return on capital is generally on the order of 4% to 5% per year, the economic growth of the world in the eighteenth and nineteenth centuries was about 0.5% to 1% per year. So, if g = 1% and r = 5%, saving only a fifth of the capital return (that is, 1% of the capital itself) is enough to make one’s capital keep up with the rate of growth of the economy. If one has a large fortune, one can save more, and then one’s capital will grow more rapidly than the economy, generating a higher concentration of capital in one’s hands by increasing one’s wealth even if one contributes no labor income. If a society is characterized by both large fortunes passed from generation to generation and a high concentration of wealth, these are the ideal conditions for an inheritance society to prosper, where fortunes accumulated in the past predominate over wealth accumulated in the present. In this case wealth originating in the past grows more rapidly, even without labor, than wealth produced by work, so the past tends to devour the future (Piketty, 2014, p. 378).
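The divergence arithmetic above can be sketched in a few lines of code. The numbers are the illustrative g = 1% and r = 5% of the text, not historical data: a fortune whose owner saves a fraction e of the capital return grows at rate e·r, so any e above g/r makes the fortune outgrow the economy.

```python
# Divergence under r > g: a fortune whose owner saves a fraction e of the
# capital return grows at rate e*r, while the economy grows at rate g.
# Illustrative values from the text above, not historical data.
r, g = 0.05, 0.01   # capital return and economic growth rate

def fortune_over_income(e, years):
    """Ratio of a fortune to national income after `years`, both starting at 1."""
    return ((1 + e * r) / (1 + g)) ** years

# Saving a fifth of the return (e = g/r = 0.2) exactly keeps pace with growth:
print(round(fortune_over_income(0.2, 100), 2))   # 1.0
# Saving half of the return makes the fortune pull away from the economy:
print(round(fortune_over_income(0.5, 100), 2))   # several times national income
```

Here e is a hypothetical saved fraction introduced only for this sketch; the threshold e = g/r separates stagnation from unbounded divergence.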
These conditions existed in numerous societies throughout history, particularly the European ones of the nineteenth century, which were characterized as societies of rentiers (see p. 111 above), that is, societies where the topmost 1% of the population basically live off capital gains. It should be noted that there are two main legal ways of accumulating wealth: work and inheritance. The main illegal ones are corruption, theft, pillage, and trading in illegal items like drugs and weapons, but these often find ways of being laundered into legal ones.6 In the case of France in the second half of the nineteenth century, 80% to 90% of all legal wealth was acquired through inheritance (Piketty, 2014, fig. 11.7, p. 402). 6 “The secret of a vast fortune with no apparent cause is a crime which has been forgotten, because it was

committed cleanly” (Balzac, 2011, para. 22.63).


Figure 3.8 Pre-tax capital rate of return r vs. the economic growth rate g at the world level from antiquity to 2010, and then projected to 2100. Since capital owners need to save only a small fraction of the capital return to keep pace with growth, the gap between r and g has been more than enough through most of history to make capital grow and wealth concentrate. Graph produced by this author reusing, under permission, data from Piketty (2014, fig. 10.9, p. 354)

Fig. 3.8 compares the before-tax capital return to the economic growth rate from antiquity to 2010, with projections to 2100. Throughout most of known civilization the economic growth rate was about 0.1% to 0.2% per year, whereas the capital return varied mostly from 4.5% to 5%. This is 10–20 times greater than the rate of growth of output (and income). Piketty wrote, “this fact is to a large extent the very foundation of society itself: it is what allowed a class of owners to devote themselves to something other than their own subsistence” (2014, p. 353). However, in the twentieth century the world economy grew at 2% to 4% per year and the gap between r and g was considerably reduced, squeezing capital gains. In this context, shocks of various kinds and taxes on capital can even reverse the inequality (3.18), turning capital gains into negative figures and leading to a decrease in wealth. Before World War I taxes on capital were very low, but afterwards taxes on top incomes, profits, and wealth rose to very high levels. Fig. 3.9 shows an after-tax comparison of r and g. The data are basically the same as in Fig. 3.8, but with taxes included after 1914, which reduced the capital gains given by r. It is clear that three main factors generated an exceptional situation that lasted for almost a century, reversing the historical inequality given by Eq. (3.18): the much higher-than-average growth for about thirty years after World War II, progressive tax policies, and wartime destruction. Nevertheless, this situation seems about to end.
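The reversal mechanism described above is simple arithmetic, sketched below with made-up numbers (not the figure's data): taxation at rate τ reduces the effective capital return to (1 − τ)r, which can fall below g when growth is high.

```python
# Illustrative after-tax version of the inequality (3.18): taxes at rate `tax`
# reduce the effective capital return to (1 - tax)*r. Numbers are invented
# for illustration only, not taken from Piketty's series.
def after_tax_gap(r, tax, g):
    """Effective gap (1 - tax)*r - g between net capital return and growth."""
    return (1 - tax) * r - g

print(after_tax_gap(r=0.05, tax=0.0, g=0.01) > 0)    # low taxes, low growth: r > g holds
print(after_tax_gap(r=0.05, tax=0.40, g=0.035) > 0)  # high taxes, high growth: reversed
```

A positive gap reproduces the historical regime of Fig. 3.8; heavy taxation combined with fast growth flips its sign, as Fig. 3.9 shows for 1914–2010.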


Figure 3.9 After-tax capital rate of return r vs. the economic growth rate g at the world level from antiquity to 2010 and projected to 2100. These are the same results as Fig. 3.8, but with taxes included in r. One can clearly note that the wartime shocks of 1914–1945, together with a huge economic growth rate, reversed the historical inequality r > g, creating the exceptional circumstance that led to a large decrease in the concentration of wealth for most of the twentieth century. Signs are that this is about to end and that at some point the twenty-first century will witness the return of the historical inequality r > g and an increase in the concentration of wealth. Graph produced by this author reusing, under permission, data from Piketty (2014, fig. 10.10, p. 356)

To answer the second and third questions posed above, one should note that since the 1980s taxes have been falling due to financial globalization and growing competition for capital among states, and we may expect that at some point in the twenty-first century the gap between r and g will return to levels prevalent in the nineteenth century. The reason lies in the size of this gap. Above a certain limit there is no equilibrium: inequality of wealth will increase without bounds, as the very wealthy may have nothing to do with their money but save it and add it to their capital stock. That cannot last forever since, with no place left to invest, the capital return will inevitably fall. But this can take a long time, several decades, and before it happens the concentration may exceed 90% in the hands of the top 10% of the population, as was the case in Europe on the eve of World War I. So, the reason why the wealth concentration is still significantly below its historical record may be that not enough time has passed since the shocks of the two world wars that led to the collapse of the capital/income ratio. This means that we may be entering another era of inheritance, this time worldwide, where study, work, and talent will pay significantly less than inheritance.


These conclusions rely, however, on the assumption that there will be no significant political reaction to the trend of financial globalization and wealth concentration. “Given the tumultuous history of the past century, this is a dubious” Piketty wrote, and “not a very plausible hypothesis, precisely because its inegalitarian consequences would be considerable and would probably not be tolerated indefinitely.” For this reason Piketty called Eq. (3.18) the central contradiction of capitalism (2014, pp. 358, 571). Another question is now in order. Why is the return on capital systematically higher than the rate of growth? Is there a deep reason for this effect? Piketty’s words on this issue are as follows. I take this to be a historical fact, not a logical necessity. It is an incontrovertible historical reality that r was indeed greater than g over a long period of time. . . . The inequality r > g has clearly been true throughout most of human history, right up to the eve of World War I, and it will probably be true again in the twenty-first century. Its truth depends, however, on the shocks to which capital is subject, as well as on what public policies and institutions are put in place to regulate the relationship between capital and labour . . . [It] is a contingent historical proposition, which is true in some periods and political contexts and not in others . . . In practice, however, there appears never to have been a society in which the rate of return on capital fell naturally and persistently to less than 2–3 percent and the mean return we generally see (averaging all types of investments) is generally closer to 4–5 percent (before taxes) . . . It is a result of a confluence of forces, each largely independent of the others. (Piketty, 2014, pp. 353, 358, 361, emphasis added)

The last sentence is very interesting from a physicist’s viewpoint because it basically states that Eq. (3.18) is an empirical finding that reflects a macro state of the system, whose micro foundations are still unknown. Such lack of knowledge of the micro structure, whatever one means by micro in this context, does not at all diminish the importance and significance of the macro result expressed by the inequality r > g. Before ending this section, there is still a final point to be considered. When one talks about the rate of return on capital as being 4% to 5%, these are average figures. As a matter of fact, wealthier people obtain higher capital returns than less wealthy people because money tends to reproduce itself, so the larger the fortune, the faster it grows. This means that the largest fortunes, created or inherited, can generate higher-than-average capital returns. Once a fortune is established, its growth follows a dynamic of its own and can continue at a very rapid pace simply due to its size. Using data from the Forbes ranking of billionaires, university endowments in the USA, and sovereign wealth funds, Piketty (2014, pp. 430–458) showed that the return on large amounts of capital varies from 6% to 11%, which means that inequality of wealth will grow without limits due to the structural reason that the


gap between r and g is even wider for the top 1% wealthiest people on the planet. Unequal returns on capital are a force of divergence that amplifies inequalities, in a process Piketty called oligarchic divergence, whereby all countries could come to be owned more and more by the planet’s billionaires (2014, p. 463; see also Cattani, 2009b).

3.6 Summary of the Main Results

Piketty’s results can be synthesized as below, where the constants used to dimensionally balance his expressions were removed to conform this summary to his original notation.

(1) First fundamental concept: capital and wealth are considered as basically the same concept, so inequality in wealth is the same as inequality of income from capital.

(2) Second fundamental concept: income from labor follows mechanisms very different from those of income from capital.

(3) First fundamental law of capitalism: α = rβ, that is, the share of income from capital α equals the capital return r times the capital/income ratio β.

(4) Second fundamental law of capitalism: β = s/g holds asymptotically, that is, in the very long run the capital/income ratio equals the ratio between the savings rate s and the economic growth rate g.

(5) Third fundamental law of capitalism: r > g, that is, in a capitalist economy the capital return tends to be above the economic growth rate.7 Hence, wealth accumulated in the past grows faster than wages and output.

(6) First empirical result: the inequality r > g implies that β will increase.

(7) Second empirical result: the rate of return on capital r is relatively stable, thus it follows from the first fundamental law that as β increases, so will α.

(8) First empirical observation: the reduction of inequality in the twentieth century came basically from the reduction of inequality in wealth, not of inequality in labor income, which remained stable.
(9) Second empirical observation: inequality worldwide has been rising since the 1970s due to a resurgence of wealth concentration, and in the twenty-first century it may reach levels equivalent to those prevalent in Europe on the eve of World War I.

(10) Remark on the second empirical observation: the above prediction is by no means a certainty, as the history of income and wealth has always been

7 It is reasonable to call Piketty’s central thesis r > g, also dubbed Piketty’s fundamental inequality or the central contradiction of capitalism, the third fundamental law of capitalism.


deeply political, chaotic, and unpredictable. Therefore, it cannot be exclusively reduced to economic mechanisms.

(11) Third empirical observation: the return on capital grows with the size of the initial capital, so that a divergence in wealth distribution is occurring on a global scale.

This summarizes the most important conclusions of the first three parts of Piketty’s book. They amount to three quarters of the total volume and cover the phenomenological facts drawn from the extensive historical data analysis and the consequent dynamic results. The remaining part four is devoted to public policy recommendations aimed at proposing possible ways of controlling or reducing the growing inequality worldwide. As discussed in the preface, this topic is outside the scope of this book and will not be treated here. The interested reader can find a good review of this subject, as well as a survey of several of Piketty’s reviewers, including the controversy with the Financial Times, in Pressman (2016, chs. 7, 8). As happens with every summary, the paragraphs above cannot convey the full richness of the original work. For instance, the book presents detailed data about the flow of inheritance in past societies and its possible future in the twenty-first century. It also offers many very interesting observations on how the novelists of the nineteenth century, particularly Balzac, were remarkably accurate in their descriptions of inequality, especially of the inheritance societies of the past, an accuracy that can be checked against the data itself. These discussions are so interesting that at times one has the impression of reading a novel, or a literary analysis of past and present cultural trends, rather than an academic book on economics. Obviously, these discussions, and many other interesting ones, could not find a place in this limited overview.
It is, nevertheless, this author’s hope that this summary will encourage readers to immerse themselves in the original work. As a final observation, Piketty did not write a non-facts-based book full of arcane, axiomatic, and sterile theories showing how economies ought to work, or how they might work. He wrote a book with lots of data and empirical results to explain how real-world economies actually work. This is the most important reason why his book aroused so much interest both inside and outside academic circles.

3.7 Econophysics after Piketty

In developing his theories of inequality, Piketty took a different path from that taken by economists who followed Pareto’s tradition and, so far, by most econophysicists when they examine income and wealth distributions. He advanced several new


concepts and reached many important results which certainly pose considerable challenges to the econophysics of income distribution, because his conclusions are based on extensive empirical data and are cogent enough that they cannot be easily dismissed. For these reasons, econophysicists dealing with income and wealth distributions cannot ignore Piketty’s results from now on. Piketty has also put forward several concepts that challenge the way most econophysicists (and economists) have been dealing with data, particularly the difference between income from labor and income from capital, although in at least one case econophysicists have accepted the challenge and already produced results in line with this approach (see Oancea et al., 2018, discussed at p. 84 above; see also Jagielski et al., 2016). This difference has also already been indirectly taken into account in other econophysical models, as we shall see below. In addition, his fundamental laws, which are basically empirical relationships among state variables, beg for explanations coming, perhaps, from first econophysical principles regarding income distribution and production, explanations that do not necessarily require “micro foundations,” but which will probably require still unknown econophysical postulates.8 The econophysics of income distribution has basically been trying to go beyond Pareto’s traditional study of income distribution functions, but it has done so using almost solely household survey data which, as Piketty convincingly argues, are biased and grossly underestimate the higher end of incomes and, hence, cannot offer a true picture of income distribution. So, some sort of correction of these systematic underestimations is needed in order to address this bias in household survey data.
Last, but not least, Piketty treated income from labor and income from capital as basically independent phenomenological entities, but it is intuitive that they are intrinsically linked by some sort of theory of production that econophysics ought to somehow put forward. 8 Physicists usually name principles required in theory building as postulates rather than axioms. An axiom is a

self-evident statement often obeying pure mathematical logic and valid in all branches of science, but not necessarily related to anything empirical. On the other hand, a postulate is a proposition of truth limited to a specific branch of science that serves a practical purpose in theory building, and whose necessity ought to be justified on empirical grounds as the theory unfolds. Once empirically validated, a postulate becomes a law of nature. If this empirical justification fails, the postulate is discarded. One should note that modern economic theories are often expressed in terms of axioms rather than postulates. In fact, postulates seem to be rarely, if ever, mentioned in economic texts. See also Richard Feynman’s views on this point (p. 44 above).

Part II Statistical Econophysics

4 Stochastic Dynamics of Income and Wealth

“How can one account for the observed shape of income distribution? In practice, a very complicated stochastic process is involved,” wrote Arnold (2015, p. 19). In a similar vein, Paul Cockshott et al. (2009, p. 140) stated that “Econophysics approaches to traditional economic problems are essentially probabilistic in nature.” These two statements represent the way statisticians and statistical physicists approach an economic problem, since they see the world through the lens of probabilities. Econophysics, in fact, started with statistical physics. There is no question about the importance of statistical reasoning in physical modeling, as the works of great physicists of the past, like Gibbs and Boltzmann, showed with their seminal contributions. However, in the spirit of theoretical pluralism, the probabilistic lens is certainly not the only way of looking at the complex problem of income distribution dynamics, although these two statements suggest as much. In this chapter I shall review the econophysical models that approach the origins of the income and wealth distributions through the lens of probabilities, leaving for the next chapters the alternative way of studying them without probabilistic reasoning, as is the case of Piketty’s theory reviewed in the last chapter. As previously, there will be no attempt at completeness, i.e., at presenting a comprehensive set of models in all their details. So, the next sections are dedicated to showing the basic ideas and results of the main models of income and wealth distributions. But, before reviewing the stochastic models themselves, a preliminary discussion is required. Since probabilistic ideas are intrinsically linked to the notions of risk and uncertainty in economics, it is important first to clarify the distinct ways physicists and economists use the concept of uncertainty. We will examine this issue in the next section and, subsequently, deal with the main topic of this chapter.



4.1 Uncertainty and Risk in Physics and Economics

The concept of risk is generally absent from physics research, and although physicists commonly talk about uncertainty, its meaning in physics is different from its meaning in economics. Econophysics is an interface between physics and economics, so one should start by clearly stating these differences in order to clarify the road ahead and avoid conceptual confusion.

4.1.1 Uncertainty and Error in Physics

The following is a very short summary of basic error analysis in physics and engineering (see, e.g., Taylor, 1997). It is provided here as a guide to those unfamiliar with the subject and to establish the basic terminology. Physics is fundamentally a science of measurements. To measure something, one requires an instrument, or measuring device. However, every instrument has an intrinsic measuring limitation, and repeated measurements of an object using the same instrument under similar conditions generally produce different results; the concept of uncertainty in physical measurements arises exactly from such situations and limitations. Let us take, for instance, the common ruler as our measuring device. It is usually marked in millimeters, so measuring the length of an object by means of a common ruler has a minimum variation of about half the smallest calibration, that is, an uncertainty of 0.5 mm. This means that the true length of the object lies within the measured length plus or minus the ruler’s uncertainty. This is the instrumental uncertainty of this device. If one needs to measure lengths to a fraction of a millimeter, say 0.001 mm, one requires another device, as the common ruler can only provide measuring fineness down to its 0.5 mm uncertainty. Let us now suppose that different people measure the same length several times, but under similar conditions (same ruler, room, time of day, lighting conditions, etc.). In this case they will most likely produce different results.
We can then calculate the average length and take the uncertainty as half the range between the extremes of all measured lengths. If all these measurements are very close to one another and in agreement, the differences among them will be small, as will the uncertainty. In this case the measurement is said to have good precision. Let us now also suppose that what is being measured has a true value, that is, some reference given either by a well-proven theory or by a series of very careful measurements made with a wide variety of different instruments and techniques, all capable of providing very high precision. Then the difference between the measured value and the true value is called the error, or true error. The accuracy of a measurement is defined by its difference from the true value, so a measurement is accurate if the error is small. This means that a measurement can have at the


same time high precision, in the sense of having low uncertainty, and low accuracy, in the sense of having a high error, that is, being too far from the true value. Errors can be of two types: random or systematic. Random errors are unpredictable, brought about by factors beyond our control, whereas systematic errors, or systematics, tend to shift measurements methodically so that the average is displaced, or varies, in a predictable way. Systematics come from instrumental calibration or from the way the measurement is made, that is, from a regular overestimation or underestimation of the measure due to unforeseen circumstances. They can be corrected once the true value is known, or by a careful examination of the experimental or observational conditions so that the biases, the total shift caused by systematics, are identified and then reduced or eliminated. Systematics can also be dealt with by improving the measuring devices or data collection techniques so that observational or experimental samples start to include previously ignored or uncollected data. If systematics occur through some form of sorting during the data gathering process, such as leaving out data that are essential to the phenomenon under study, once identified they can be corrected by applying appropriate corrections to the samples or by producing new and more complete samples. Errors and uncertainties can be combined, or propagated, that is, carried over to a quantity not measured directly but expressed as some algebraic combination of two or more measurements. Rules for uncertainty and error propagation are the subject of statistical data analysis. Finally, in laboratory physics practice the words uncertainty, error, and margin of error are often used interchangeably when describing how well we are able to constrain a measurement.
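The terminology above can be made concrete with a toy computation. The readings and the true value below are invented for illustration: the uncertainty is taken as half the range of the repeated readings, and the error as the distance of their mean from the true value.

```python
# Toy example of the precision/accuracy vocabulary: invented ruler readings
# of a rod whose true length is stipulated to be 100.0 mm.
readings = [102.1, 102.3, 102.2, 102.4, 102.2]     # mm, tightly clustered
true_value = 100.0                                  # mm, reference value

mean = sum(readings) / len(readings)
uncertainty = (max(readings) - min(readings)) / 2   # half the range of the readings
error = mean - true_value                           # distance from the true value

print(f"mean = {mean:.2f} mm, uncertainty = {uncertainty:.2f} mm, error = {error:.2f} mm")
# Small uncertainty (high precision) but large error (low accuracy): the
# readings agree with each other yet are systematically shifted, the
# signature of a systematic error such as a miscalibrated instrument.
```

Swapping in readings scattered around 100.0 mm would show the opposite case: low precision but high accuracy.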
4.1.2 Uncertainty and Risk in Economics

Risk and uncertainty in economics were first methodically discussed by Frank Knight (1885–1972), who formalized both concepts in a well-known book based on his PhD thesis (Knight, 1921). According to him, the ever-changing world brings new opportunities for businesses to make profits, but at the same time we always have imperfect knowledge of future events. So, Knight stated that risk applies to situations where we do not know the outcome, but are able to accurately measure the odds by means of some known statistics of past events. Uncertainty, on the other hand, applies to situations where we cannot know all the information we need in order to set accurate odds in the first place. “There is a fundamental distinction between the reward for taking a known risk and that for assuming a risk whose value itself is not known,” Knight (1921, pp. 43–44) wrote. A known risk is “easily converted into an effective certainty” (Knight, 1921, p. 46),


Stochastic Dynamics of Income and Wealth

while “true uncertainty,” as Knight (1921, p. 20) called it, is “not susceptible to measurement” (Knight, 1921, p. 48). This true uncertainty is often called Knightian uncertainty. An airline carrier might forecast that the risk of losing one of its planes in an accident is, say, one in 10 million landings, but the economic outlook for airline companies in fifty years' time is incalculable, as it involves several entirely unknown factors. Hence, in Knight's view uncertainty arises out of our partial knowledge of a situation and our difficulty in forecasting outcomes. “The essence of the situation is action according to opinion, of greater or less foundation and value, neither entire ignorance nor complete and perfect information, but partial knowledge” (Knight, 1921, p. 199). This definition leads, however, to discussions about what exactly this partialness means. Let us briefly see the terms of this discussion. One viewpoint argues that the distinction between risk and uncertainty is essential and comes from the role of judgment in economic life.

[This partialness] has more to do with the initial classification of random outcomes than with the assignment of probabilities to the outcomes [since] Knight's main concern was about the possibility of classifying the “states of nature” . . . [So] one must first know which alternatives are possible . . . [U]ncertainty as Knight understood it arises from the impossibility of exhaustive classification of states . . . When a decision maker faces uncertainty (a situation in which “there is no valid basis of any kind for classifying instances”), he or she would have first to “estimate” the possible outcomes to be able to “estimate” the probabilities of occurrence of each. The first step requires judgment and intuition rather than calculation. (Langlois and Cosgel, 1993, pp. 459–460)

Another viewpoint argues that risk and uncertainty are not really different concepts when one deals with real-life situations, and that the distinction would only make sense in games of chance played in casinos, since these happen in controlled circumstances not present in real life.

Some economists have argued that this distinction is overblown. In the real business world . . . all events are so complex that forecasting is always a matter of grappling with “true uncertainty,” not risk; past data used to forecast risk may not reflect current conditions, anyway. In this view, “risk” would be best applied to a highly controlled environment, like a pure game of chance in a casino, and “uncertainty” would apply to nearly everything else. (Dizikes, 2010)

Although these differing viewpoints about what this partial knowledge really means in practice may appear very distinct, we may, perhaps, be able to reconcile them by developing further the reasoning on probability theories presented in Chapter 1, Section 1.6.2.


Earlier, we discussed how calculated probabilities cannot be taken as empirical measures, as they are theory driven and, therefore, subject to the limitations of the theories themselves, these being partial representations of the real world (see p. 42 above). The word ‘expectation’ was used to express those limitations, in the sense that even high probabilities accurately calculated cannot be taken as a sure thing. But when one is able to accurately calculate the probability of each possible contingency of certain things, like death, fire, or a car accident, even if their outcomes can never be considered 100% certain, the fact that they can be precisely derived mathematically is called risk. However, there are other things whose possible contingencies we do not know, which means that their respective mathematical likelihoods are unknown, that is, they are truly uncertain. So uncertainty in the Knightian sense arises from things we know that we do not know, whereas risk arises from things that we know that we know.

But we can add to these two knowledge categories a third one, which comes from things that we do not know that we do not know. This third category goes in fact into the realm of terra incognita, one step further than Knightian uncertainty. Let us call it extreme uncertainty. These three categories were synthesized very clearly by, surprisingly, the former US Secretary of Defense Donald Rumsfeld during a press briefing regarding the military situation in Afghanistan. Although he was not the first to discuss this knowledge classification (Morris, 2014), he was certainly the one who popularized it. So, we may call this the Rumsfeld knowledge classification, originally stated as follows.

[T]here are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know. And . . . it is the latter category that tend to be the difficult ones. (Rumsfeld, 2002, emphases added)

Let us now try to put together all this discussion. Risk consists of predicting the future on the basis of currently existing information. This is the case when using probability distributions obtained from current market data. Knightian uncertainty, or true uncertainty, consists in the absence of reliable information today about the future outcomes of current decisions, since the economic future is created by decisions taken today. Calculating risks requires some knowledge of the frequencies of events, that is, the frequentist approach to probability. However, when one is in the realm of uncertainty, the frequentist approach may become less useful, or even useless, as one may not be able to obtain reliable frequencies from events of very rare occurrence in order to calculate their probabilities, or to know the


relative frequency of an event that has not yet happened, since one cannot estimate the odds of an unknown process. This is the case of the market conditions for airline companies in fifty years' time, or of the technologies that will be used in possible future wars among countries in the twenty-second century. In these cases one can try to rely on the Bayesian approach, which might give some help in some situations. If one knows almost nothing about something, its measure of belief will be very small, or very uncertain. Once we acquire some knowledge about the phenomenon, this Bayesian probability will increase. So, the Bayesian approach allows us to go step by step in the direction of increased knowledge, from true uncertainty to risk. Under this viewpoint, expectation can be seen as our degree of belief in a certain probability and, therefore, is the same as probability in the Bayesian sense.

But when one deals with extreme uncertainty there is no way of calculating probabilities, simply because we cannot know the contingencies, let alone their possible results. The point is that we cannot know the possible outcomes of something we do not even know to be in the realm of possibilities, although we may expect that completely unforeseen, and unforeseeable, events will happen from time to time. This is the realm of true randomness, or of Taleb's black swan events (Davidson, 2010a; Terzi, 2010), that is, events extremely difficult or impossible to predict but whose impact may be very significant. So, extreme uncertainty, or true randomness, is the nesting ground of black swan events. Fig. 4.1 attempts to summarize this discussion graphically.

There is also a fourth category of knowledge not originally mentioned by Rumsfeld, the unknown knowns, things we do not know that we know. This is the Freudian unconscious mind, “the knowledge that does not know itself,” the unconscious knowledge and beliefs that determine how we perceive the world around us and intervene in it (Žižek, 2014, p. 11). This fourth philosophical category of knowledge is not really relevant to the Knightian uncertainties and risks discussed here, although we may speculate that it might have some influence when prior probabilities are advanced. Daase and Kessler (2007) argued that knowledge and lack of knowledge are independent, but linked, and that the cognitive frame for practice is determined by the relationship between these four categories: “what we know, what we do not know, what we cannot know and what we do not like to know” (p. 411). The knowledge we do not want or like to know, the unknown knowns, is made of “the things we could know but rather decide not to know by forgetting, suppressing or repressing them” (Daase and Kessler, 2007, p. 412). Their reasoning goes as follows:

We might have reliable methods of identifying observable facts, thus producing known knowns. But, we might also have some methods of dealing with phenomena that we are not 100% sure of, thus creating known unknowns. On the other hand, we must accept that there


might be things we do not even dream of and have no method of anticipating; these are unknown unknowns. But, there are also situations in which factual knowledge is available in principle, but not used because it is ignored or repressed; this is the category of the unknown knowns. (Daase and Kessler, 2007, p. 413)

Figure 4.1 Schematic representation of the Rumsfeld knowledge categories in terms of probabilities interpreted as measures of belief. The increase in knowledge leads to an increase in the probability of the outcome of an event, from extreme uncertainty to low risk. The figure shows some superposition of the various degrees of belief, but the boundaries are schematic and should not be taken at face value. Under the viewpoint that probabilities are theory driven and should not be regarded as empirical measures, one can never reach 100% probability of an event, because under Boltzmann's epistemology all theories are limited, and therefore necessarily incomplete, representations of nature.
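The step-by-step passage from true uncertainty toward risk via accumulating knowledge, described above in Bayesian terms, can be sketched as a toy belief update; the two hypotheses and the likelihood numbers are invented for illustration:

```python
def bayes_update(prior, likelihood):
    """One step of Bayes' rule: posterior is proportional to prior times likelihood."""
    post = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# Two rival hypotheses about a coin, and the likelihood of observing heads
# under each; the prior expresses near-total ignorance (50/50).
belief = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.9}

for _ in range(5):          # five heads observed in a row
    belief = bayes_update(belief, p_heads)

# Each observation raises the degree of belief step by step, moving the
# situation from (Knightian-style) ignorance toward a calculable risk.
```

After five heads the belief in the biased hypothesis exceeds 0.9 but never reaches 1, mirroring the point made in the figure caption that 100% probability is unattainable.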

Keynes (1937, pp. 213–214) also reflected on uncertainty on grounds somewhat similar to Knight's (Terzi, 2010), but his focus was on the relationship between uncertainty, business confidence, and investment (Keynes, 1937; Ferrari-Filho and Conceição, 2005). We shall postpone the discussion of Keynes' views on uncertainty to Chapter 5, Section 5.2.1.


4.2 Kinetic Models of Income and Wealth Distribution

How does wealth accumulate in societies? Is it possible to adopt an agent-based approach in order to study computationally the emergence of wealth concentration? Is such a bottom-up analysis capable of isolating the key mechanisms leading to the stratification of income and wealth? This investigative path to the study of wealth concentration dynamics was initiated by econophysicists rather than economists. However, in doing so physicists missed an important predecessor in the sociological literature. Hence, I shall start this review with this overlooked approach to wealth distribution dynamics.

4.2.1 The Angle Process

Before the term econophysics was even created, Angle (1983) advanced the fundamental concepts of an agent-based model of wealth formation based on particle-like microscopic interactions of agents. The Angle model, also called the Angle inequality process or simply the Angle process, was elaborated in a long sequence of papers spanning decades (Angle, 1983, 1986a, 1986b, 1990, 1992, 1993a, 1993b, 1996, 1997, 1998, 1999a, 1999b, 2000, 2001, 2002a, 2002b, 2002c, 2003a, 2003b, 2004, 2005, 2006, 2007a, 2007b, 2007c, 2009, 2012, 2013; Angle et al., 2010). It is basically a class of stochastic processes proposed as a generating mechanism for the emergence of wealth distributions in societies.

The starting empirical basis for the Angle process is the archaeological evidence that the introduction of agriculture created a food surplus that led to the stratification of societies into ranked classes and chiefdoms, that is, the formation of classes and an elite (Angle, 1986b; Diamond, 1997, ch. 4; Harari, 2014, chs. 6–7; and references therein). In addition, it was observed that “the concept of wealth has very nearly the same meaning in societies as different as those of hunter/gatherers on the one hand and industrial societies on the other” (Angle, 1986b, p. 293).
So, comparisons between different societies are made by considering only the shape of the size distribution of wealth; and income and wealth basically indicate the size of one another, because an income stream is evidence of wealth.

The generating mechanism of the model comes from the sociological and archaeological surplus theory of social stratification, which states that as soon as there is some excess capacity of food, a process is set into motion from which inequality emerges. The two core definitions of the surplus theory are: (1) subsistence wealth, which is necessary to keep the wealth producers alive and cover the costs of production; and (2) surplus wealth, which is the net product remaining after subsistence is subtracted from the total wealth produced by the producers. This surplus theory may be described by four propositions. (i) Once surplus is produced, some of it will leave the possession of the producers. (ii) Those who


possess wealth have the ability to extract even more wealth from others who have less. Such a situation leads to a greater competition for surplus, as each person's ability to extract wealth from others depends on his or her own surplus wealth. The result of this competition is that those who have more surplus, the rich, tend to take the surplus away from those who have less surplus, the poor. (iii) As surplus wealth is taken away from those who produced it, less of the surplus left with the producers is available for transfer. (iv) Industrial societies have a smaller proportion of surplus wealth extracted from producers than societies with more primitive technologies.

“The essential issue in the definition of surplus is availability for redistribution,” wrote Angle (1986b, p. 299), so there must be a fugitive surplus wealth that changes hands by means of encounters that can be modeled by two equations, one for each party to the encounter. Note, however, that the surplus theory does not concern itself with how wealth is created or destroyed, that is, with production, but only with how it is distributed, so these encounters are a zero-sum game: what one party gains is what another party loses. The expropriation of the losers happens by means of (1) theft, (2) extortion, (3) taxation, (4) exchange coerced by unequal power between participants, (5) genuinely voluntary exchange, or (6) gift (Angle, 1986b, p. 298).

These ideas were formalized in a true interacting particle model of a finite population of agents who are randomly matched in pairs, each trying to capture part of the other's wealth. Hence, Angle's inequality process is basically an interacting particle system with binary interactions describing competition between random pairs of individuals. The richer individual, or agent, has a fixed probability p of winning (0.5 < p < 1), whereas the loser in a random encounter loses a fixed proportion w of wealth (0 < w < 1).
Let i and j be two agents and D_t ∈ {0,1} a random toss that decides which of the agents is the winner of this conflict. So, at each encounter we have the following set of equations:

X_{i,t} = X_{i,t-1} + D_t\, w X_{j,t-1} - (1 - D_t)\, w X_{i,t-1},   (4.1)

X_{j,t} = X_{j,t-1} + (1 - D_t)\, w X_{i,t-1} - D_t\, w X_{j,t-1},   (4.2)

where X_{i,t} is i's surplus wealth after an encounter with j and X_{i,t-1} is i's surplus wealth before the encounter with j. Eqs. (4.1) and (4.2) implement proposition (i), that surplus wealth is fugitive in encounters between members of a population, if we specify D_t to be given as follows:

D_t = \begin{cases} 1 & \text{with probability } 0.5, \\ 0 & \text{with probability } 0.5. \end{cases}   (4.3)


Eq. (4.1) indicates that i's current wealth is i's wealth before the encounter plus a gain from, or minus a loss to, j, respectively represented by the last two terms on the right-hand side of expression (4.1). “Winning” means retaining your own surplus while taking some of the other party's. “Losing” means the inability to retain all of your own surplus during the encounter. So, if i wins over j, which according to Eq. (4.1) occurs when D_t = 1, i receives D_t\, w X_{j,t-1} from j, where w is a continuous uniform random variable in the range 0 < w < 1. Conversely, j wins by taking (1 - D_t)\, w X_{i,t-1} from i if D_t = 0, as formalized by Eq. (4.2).

Proposition (ii) states that the individuals with greater wealth surplus will tend to win encounters, which is implemented by specifying D_t to be given as:

D_t = \begin{cases} 1 & \text{with probability } p, \text{ if } X_{i,t-1} \geq X_{j,t-1}, \\ 0 & \text{with probability } 1 - p, \text{ if } X_{i,t-1} < X_{j,t-1}. \end{cases}   (4.4)

Note that since p > 0.5, the richer party is more likely to win an encounter than the poorer one.

Proposition (iii) essentially asserts that surplus is made up of layers. The top layers are more easily lost than the bottom ones, that is, the wealth of those at the top is more fugitive than that of those close to the subsistence level. This idea is implemented by letting L be the number of layers, arbitrarily set as any integer such that L > 1. Large values of L mean smaller losses of surplus by the losers in the conflict. A loser loses a w_1 proportion of the top layer, a (w_2)^2 proportion of the next lower layer, and so on. At the lowest layer a proportion (w_L)^L is lost by the loser of the encounter. Hence, the loss by the loser in each layer gets smaller and smaller, and the larger the value of L the smaller the loss of surplus by the loser. Summarizing, we may write the proportion of surplus wealth lost by the loser as:

\frac{1}{L} \sum_{k=1}^{L} (w_k)^k = Z,   (4.5)

where w_k is independent of w_{k-1} for all k. Incorporating these ideas into the original Angle inequality process described by Eqs. (4.1) and (4.2), they can be rewritten as:

X_{i,t} = X_{i,t-1} + D_t\, Z X_{j,t-1} - (1 - D_t)\, Z X_{i,t-1},   (4.6)

X_{j,t} = X_{j,t-1} + (1 - D_t)\, Z X_{i,t-1} - D_t\, Z X_{j,t-1}.   (4.7)

Finally, proposition (iv), of increased resistance to surplus extraction at higher levels of technology, can be modeled by increasing L. The results of applying the Angle inequality process have been obtained from numerical simulations using different values for the parameters w, p, and L. Several generalizations of the basic mechanism can be made (Angle, 1986b, 2006, 2012), such as introducing coalitions among wealth holders, that is, a pact among partners


against those not in the group, random shares of lost wealth w, and even conjecturing processes where p = (1 − w)/w. But the important conclusion is that all results coming out of the simulations concerning the distribution of wealth, both PDFs and CCDFs, can be well approximated by a gamma distribution, as shown in Fig. 2.7. There is, however, no rigorous proof of this fact. Note too that the Angle process does not generate size distributions of total wealth, but only of surplus wealth. Nevertheless, since the results tend to agree with empirical facts, this is reason enough to take the Angle process into serious consideration as an important mechanism for generating wealth distribution in societies via competition between agents formed by people as individuals, firms, or both. This means that the opposition emanating from economic academia to the concept of wealth distribution as a result of interpersonal competition appears to be basically ideological, having little or no factual basis.

Regarding the physical analogies of the Angle process, as mentioned above it is a binary particle interaction model that shares several features with the kinetic theory of gases. The number of particles and the sum of their wealth remain constant, like the constant sum of particle energies in the kinetic theory of gases. However, the Angle process concentrates wealth, whereas the kinetic theory of gases dissipates energy. Since the Angle process has parameters which can take different values at different stages of societal development, the distribution can take different shapes in different contexts, something not possible in the kinetic theory of gases.

Despite being viewed as a model worthy of serious consideration by some reviewers (Kleiber and Kotz, 2003, pp. 162–163), others have taken a somewhat negative position regarding the Angle process.
For instance, Lux (2005) took a very negative view of theft as being one of the components of the inequality process, because he believes that in modern societies voluntary exchange is the most important aspect of economic activity at all levels, rather than a minor facet. This criticism is, nonetheless, open to debate, since Angle stated that theft is one of several possible mechanisms of wealth stratification, and not necessarily the most important one at all times. Besides, Angle took a very long-term perspective of the inequality process, a perspective of which modern economies are only a small part, and it is very doubtful whether Lux's criticism holds when one views inequality in terms of hundreds, or thousands, of years. One should never forget that European colonialism in Africa, the Americas, and Asia in the period from the sixteenth to the twentieth centuries was essential for the strong enrichment of several European countries at that time (e.g., Milanovic, 2010, section 3.7; Hickel, 2018), and for them remaining rich today, and that European colonialism was to a great extent based on extortion, on tax manipulation for huge surplus extraction from the colonies (Patnaik, 2017; Sharma, 2018;


Sreevastsan, 2018), and on slavery, practices that are essentially institutionalized theft (see also pp. 100–102, 261). So, it seems quite unwarranted to dismiss theft as one important aspect of wealth stratification in the past, whose consequences are still felt nowadays, not to mention the huge black markets of all kinds that exist today, where theft and extortion are common practices. Hence, the dismissal of theft as an important aspect of wealth stratification seems to fall into the knowledge category of unknown knowns discussed above.

But, as happens with every model, the Angle inequality process has its limitations. Lux (2005) rightfully pointed out that the Angle process is not the only stochastic process that gives rise to a gamma distribution of wealth, and that it has no monetary dynamics. In addition, being able to generate a distribution that can be fitted by the gamma function does not by itself confer a strong empirical basis on the model, since we have already shown in Chapter 2 that several other distributions fit the data quite well, and in several cases they fit better than the gamma distribution. One might even add, as explicitly stated above, that the Angle process does not consider production and, as a consequence, the overall increase of wealth in society. By being a zero-sum game the Angle process does not model production, although this criticism can also be raised against several, if not most, econophysical models of income and wealth distribution. Other points raised by other authors were thoroughly discussed in Angle (2007a, 2013).

Despite its limitations, John Angle has in fact pioneered the statistical econophysics approach to the income and wealth distribution problem, an approach which was, often independently, followed later by other econophysicists (e.g., Ispolatov et al., 1998). We shall discuss some of these “offsprings” below.
4.2.2 The Money Conservation Model

The studies reviewed in Chapter 2 (Sections 2.3.1 and 2.4.2) showed that there is empirical evidence that the income distribution of the less rich population, about 99% of the people, can be represented in several countries by the exponential.[1] This result is far from universal because, as discussed in Section 2.4, there is also enough empirical evidence showing that other functions can fit the data equally well (see also Schneider, 2015). But, for Drăgulescu and Yakovenko (2000) the finding of good exponential fits in some data was compelling enough for them to propose a model where “money is conserved.” Their reasoning went as

[1] An interesting side development of this work was the empirical finding that energy consumption among countries is very unequal, as unveiled by the corresponding Lorenz curve. In addition, the data concerning global energy consumption also showed that the CCDF of per capita energy consumption in the world for 1990, 2000, and 2005 can be roughly approximated by an exponential (Banerjee and Yakovenko, 2010; Yakovenko, 2010, 2012). These studies were widened by Lawrence et al. (2013) for the longer period 1980–2010.


follows (see also Banerjee and Yakovenko, 2010; Chakrabarti et al., 2013, section 4.1; Yakovenko, 2009–2012, 2016; Yakovenko and Rosser, 2009, and references therein).

The fundamental Boltzmann–Gibbs law of equilibrium statistical mechanics states that the probability distribution of energy ε is exponential, given as

f(\varepsilon) = \lambda\, e^{-\varepsilon/T},   (4.8)

where λ is a normalizing constant and T is the temperature. Since this law comes as a consequence of the conservation of energy in systems like ideal gases, Drăgulescu and Yakovenko (2000) reasoned that the empirical evidence for the exponential in the income distribution should also imply some sort of quantity being conserved in economic trading. Following this logic, they claimed that money should also obey an equilibrium probability distribution whose PDF is similar to the Boltzmann–Gibbs law, given by

f(m) = C\, e^{-m/T},   (4.9)

where m is money. The normalization conditions established by Eqs. (2.3) and (2.10) hold for Eq. (4.9); therefore, we may write the following expressions:

\int_0^\infty f(m)\, dm = 100, \qquad \langle m \rangle = \frac{M}{N} = \frac{1}{100} \int_0^\infty m\, f(m)\, dm,   (4.10)

which yield

C = \frac{100}{T}, \qquad T = \frac{M}{N} = \langle m \rangle.   (4.11)

Here N is the total number of agents in the economy and M is the total amount of money in circulation. For an economic system with many agents, which may be individuals or firms, we clearly require the condition N ≫ 1, and the temperature plays the role of the average amount of money per agent. Eqs. (4.10) and (4.11) become dimensionally homogeneous by the following dimensional choices: [M] = [m] = [T] = (currency), [f(m)] = [C] = % · (currency)^{-1}, and [N] = 1. Nevertheless, these choices are not unique, as dimensional consistency could also be obtained by the introduction of a constant having currency dimension.

Let us now label i as being an agent holding the amount m_i of money, which can be exchanged with another agent labeled j so that some money Δm changes parties. This trade is evidently written as:

[m_i, m_j] \rightarrow [m_i', m_j'] = [m_i + \Delta m,\, m_j - \Delta m].   (4.12)


As a consequence, the total amount of money is conserved in each transaction, that is,

m_i + m_j = m_i' + m_j'.   (4.13)

Such a money conservation “law” is analogous to energy conservation in atomic collisions. In this respect this money conservation model is similar to the Angle inequality process, as in the latter it is the wealth sum that remains constant. Assuming no flux of external money, that is, a closed economy, the total amount M of money is conserved in the system. Let us now divide M into bins consecutively indexed as i = (1, 2, \ldots, \ell), where \ell can be as big as necessary. Then each n_i, the number of agents in each bin, will hold the quantity m_i of money. Hence,

\sum_{i=1}^{\ell} n_i = N,   (4.14)

\sum_{i=1}^{\ell} m_i\, n_i = M.   (4.15)

The probability that a randomly picked agent will have money m_i may be written as below (Drăgulescu and Yakovenko, 2000; Paul Cockshott et al., 2009, p. 145):

f_i(m_i) = 100\, \frac{n_i}{N}.   (4.16)

Substituting this expression into Eqs. (4.14) and (4.15) yields the following results:

\sum_{i=1}^{\ell} f_i(m_i) = 100,   (4.17)

\sum_{i=1}^{\ell} m_i\, f_i(m_i) = 100\, \frac{M}{N}.   (4.18)

If in this discrete formulation we follow the dimensional choices above and set [f_i(m_i)] = % · (currency)^{-1}, this requires that the probability f_i(m_i) must be multiplied at each instance by a unity constant with dimension of (currency) so that Eqs. (4.14) to (4.18) become dimensionally homogeneous. Since this unitary constant does not change the results, it will not be included in these expressions, for convenience. To obtain the equivalent Boltzmann–Gibbs statistical equilibrium law it is required to maximize the entropy of the system, defined as

S = -\sum_{i=1}^{\ell} f_i(m_i) \ln f_i(m_i),   (4.19)


or, in continuous terms, as

S = -\int_0^\infty f(m) \ln f(m)\, dm.   (4.20)
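For completeness, the maximization of Eq. (4.20) subject to the two constraints of Eq. (4.10) follows the standard Lagrange-multiplier argument; this is a sketch of that standard derivation, not the book's own:

```latex
\delta \left[ -\int_0^\infty f \ln f \, dm
  \;-\; \alpha \left( \int_0^\infty f \, dm - 100 \right)
  \;-\; \beta \left( \int_0^\infty m\, f \, dm - 100\, \frac{M}{N} \right) \right] = 0
\quad \Longrightarrow \quad
-\ln f - 1 - \alpha - \beta m = 0 ,
```

hence f(m) = e^{-(1+\alpha)}\, e^{-\beta m} \equiv C\, e^{-m/T}, with C and T fixed by the constraints as in Eq. (4.11).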

To solve this maximization problem one has to apply a reasoning similar to the one originally used by Boltzmann, who arrived at his law by considering the energy ε_i of each particle and the total conservation of energy in an isolated particle system. Hence, in this specific econophysical application of statistical physics concepts, one needs to require that the total money M and the total number of agents N are both conserved and that the economic system is isolated. Similar conditions applied to an ideal gas result in Eq. (4.8) (see, e.g., Lemons, 2013), which then justifies the adoption of Eq. (4.9) as the distribution function of this money-conservation, exchange-only model.

Drăgulescu and Yakovenko (2000) performed several numerical simulations using different initial values of money per agent or different exchanged values. In their first simulation all agents started with 1,000 units of money, random pairs of agents (i, j) were selected, an amount of money Δm was transferred between them, and this process was repeated many times. In a second simulation the exchanged amount was fixed at one currency unit, Δm = 1. After a transitory time period the distribution converges in both cases to a stationary form that can be fitted by the exponential function (4.9), where the constants are given by Eqs. (4.11). The final results are shown in Fig. 4.2. Computer animations of this model were also presented by Chen and Yakovenko (2007).

It should be noted, however, that in this exchange-only distributive dynamics based on the money conservation hypothesis, the PDF f(m) of Eq. (4.9) describes the distribution of money, not wealth. Money conservation does not mean wealth conservation in this model. In addition, since Eq. (4.11) shows that in this model the temperature is the average amount of money per agent, this result provides an analogous thermodynamic mechanism of speculation (Drăgulescu and Yakovenko, 2000), as explained below.
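The kind of simulation described above can be sketched as follows; the population size, number of steps, and the specific transfer rule (a uniformly random amount the payer can afford) are illustrative choices, not necessarily those of the original paper:

```python
import random

def money_exchange(n_agents=500, m0=1000.0, steps=200_000, seed=2):
    """Exchange-only dynamics with conserved money: at each step a random
    pair trades an amount the payer can afford, so no balance goes negative
    and the total M = n_agents * m0 stays fixed (Eqs. 4.12-4.13)."""
    m = [m0] * n_agents
    rng = random.Random(seed)
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        dm = rng.uniform(0, m[j])   # agent j pays agent i what it can afford
        m[i] += dm
        m[j] -= dm
    return m

money = money_exchange()
# The stationary distribution is approximately exponential with
# "temperature" T = M/N = m0: most agents end up below the mean.
```

Starting from a sharp spike at m0, the distribution relaxes toward the exponential of Eq. (4.9), with well over half the agents ending below the average, as expected for an exponential law.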
Let us now suppose two disconnected economic systems whose average amounts of money per agent are different, say T_1 and T_2 such that T_2 > T_1. So, one can buy a product in the system having average price T_1 and sell it in the system T_2, thereby extracting the speculative profit T_2 − T_1. This is exactly what happens in a thermal machine, which suggests that speculative profits can thrive when the economic system as a whole is out of equilibrium, with money being transferred from the system with the higher temperature to the one with the lower temperature until T_1 = T_2, when speculative profit is no longer possible. This is the economic equivalent of a “thermal death,” like the hypothetical result known long ago in classical thermodynamics as the heat death of the universe.


Stochastic Dynamics of Income and Wealth

Figure 4.2 Probability density function f(m) (Y axis) of money m. The solid lines are fits to the money-conservation model of Eq. (4.9) with the constants defined by Eqs. (4.11). The vertical lines are the initial money values. Reprinted from Drăgulescu and Yakovenko (2000, fig. 1) with permission from Springer Nature.

Another interesting point raised by Drăgulescu and Yakovenko (2000) regarding this money conservation model is the role of debt. So far money has been assumed nonnegative, mi ≥ 0, similar to the condition applied to the kinetic energy of atoms, where εi ≥ 0. However, in this model debt can be viewed as negative money. When one side of the trading pair does not have enough money to pay the other, it can borrow a certain amount from a “reservoir” and its balance becomes negative. The money-conservation hypothesis is not violated because the other party in the exchange has a positive balance and the sum total remains constant. This apparently contradicts the empirical fact that in modern economies money is created on a regular basis every time a loan is granted by a bank (McLeay et al., 2014). However, economists do not consider negative liabilities as money, and so bank money is not conserved. This reservoir should therefore be a bank, possibly a central bank. Drăgulescu and Yakovenko (2000) have actually worked out this possible situation by first imposing a maximal debt md such that mi > −md. This means that unlimited debt is not allowed in the model. The boundary condition of the previous

4.2 Kinetic Models of Income and Wealth Distribution


Figure 4.3 Probability density function f(m) (Y axis) of money m considering models with and without debt. The average money per agent, or temperature, is defined in Eqs. (4.11) and (4.22). The solid lines are fits to the exponential with different temperatures. Reprinted from Drăgulescu and Yakovenko (2000, fig. 3) with permission from Springer Nature.

model, f(m < 0) = 0, is replaced by f(m < −md) = 0. The constraints given by Eqs. (4.10) are now replaced by

∫_{−md}^{∞} f(m) dm = 100,        M/N = (1/100) ∫_{−md}^{∞} m f(m) dm,        (4.21)

which, after considering Eq. (4.9), yield the following higher temperature, or average money,

⟨m⟩ = T = (M + md)/N.        (4.22)

The results of the simulations of the model with debt are shown in Fig. 4.3. Further simulations with many agents and one bank indicate that, depending on the parameters, either the bank constantly loses money, which increases the temperature of the agents, or the other way round (Drăgulescu and Yakovenko, 2000).
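A minimal sketch of the debt variant (again an illustration, not the original code): the only change to the exchange rule is that a payer may go negative down to −md, which raises the effective temperature as in Eq. (4.22):

```python
import random

def simulate_with_debt(n_agents=200, m0=20, m_d=10, dm=1, steps=300_000, seed=3):
    """Exchange-only model with a debt ceiling: balances may go negative
    down to -m_d, i.e. the boundary condition f(m < -m_d) = 0."""
    rng = random.Random(seed)
    money = [m0] * n_agents
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i != j and money[j] - dm >= -m_d:   # payer may borrow up to m_d
            money[j] -= dm
            money[i] += dm
    return money

money = simulate_with_debt()
T_eff = sum(money) / len(money) + 10   # effective temperature M/N + m_d, Eq. (4.22)
```

The average money M/N is still conserved, but the distribution now extends down to −md, and the exponential fit acquires the higher temperature T = M/N + md.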


4.2.3 Questioning the Money Conservation Model

The money-conservation model has sparked an interesting controversy in the literature. The paragraphs below outline the main arguments raised for and against this model, as well as some further developments inspired by this debate.

4.2.3.1 Objections Based on Money Expansion and Open Economies

As with the Angle process, the first and foremost objection to the money conservation model is that it does not take production into account. In fact, to stimulate economic growth, or to counteract financial crises, several governments embark from time to time on credit expansion policies in which money is created (see p. 209 below). Growth means that the economic system augments its production, increasing its capacity to absorb larger quantities of money, which then induces more trade and pairwise exchanges. So, when credit expansion policies are in effect the total amount of money M would not be conserved, because the negative money picked up from the money reservoir described above would not be balanced by a positive one, since the government is injecting new money into the economic system. However, if the money coming from the reservoir is solely used to buy debt, then the money conservation hypothesis could hold in principle. Nevertheless, two other objections immediately arise: (1) no economic system is closed, and (2) to be a money reservoir the banks cannot participate in the money exchange process described above. Regarding the first point, all economies have positive or negative influxes of money from external trade, and these money flows going in and out of the system are not necessarily balanced. In addition, when an economy expands the number N of agents increases as new firms are created and become engaged in pairwise exchanges.
So, N could only possibly be constant if the numbers of births and deaths of firms were exactly balanced, a hypothesis difficult to accept over long periods of time: when an economy grows business thrives, and more firms are created than destroyed; the opposite happens when an economy contracts. On the second point, it is doubtful whether one can consider banks as being outside the system, since they participate in exchanges. So, it seems that the money conservation hypothesis could only be valid if the economic system were static, with zero growth in both production and population, and if banks were basically outside of the money exchange.

One can also object to this model by arguing that although 99% of the population of several countries may have their income distribution described by the exponential, this is definitely not the case for the whole population, since the remaining 1% does not follow the exponential, but the Pareto power law. Despite being numerically small, the 1% cannot be neglected because


they grab a very sizable portion of the total income, usually about 20% or more. Therefore, unless one considers that 99% of the population is completely isolated from the remaining 1%, an apparently wholly unrealistic assumption, the money conservation model cannot be applied in its entirety to real economies, modern or old.

4.2.3.2 The Thermodynamic Argument

There is also the possibility of agents forming coalitions in order to purposely change the money distribution in their favor through politics or other means (Cattani, 2009a; Ribeiro et al., 2018). In such a case the simple Boltzmann–Gibbs equilibrium statistics cannot hold. Hence, as in the classical thought experiment on the violation of the second law of thermodynamics known as Maxwell’s demon, those “economic demons” will alter the income distribution toward lower entropy and then lead the system as a whole further away from the exponential, a situation which implies that some kind of entropy-reducing work is at play in order to “sort” money among different social classes (Paul Cockshott et al., 2009, p. 146). In such a scenario there would be no conservation law, and, therefore, no money conservation either. Analogies between thermodynamics and economics are not new, but what is new within econophysics are connections which imply the possibility of far-from-equilibrium entropy-reducing situations. Mimkes (2017) provides a study in this direction by arguing, along similar lines as above, that economic systems are structured toward order, so they must not be seen as entropy-maximizing, but as entropy-lowering systems. In such a case these systems would be characterized by energy and matter flows, that is, open systems where ideas of conservation, either of energy or money, would have limited viability, or no place at all.
Intuitively it seems that the best way to characterize economic systems is to consider them as open systems, and that would rule out, at least in a larger scenario, models based on statistical equilibrium. Other objections against exchange-only models were also raised on the grounds that the total income of industrialized economies has experienced a dramatic expansion over time (Gallegati et al., 2006; Keen, 2007). Hence, these economies are not conservative systems, which means that “income is not, like energy in physics, conserved by economic processes . . . Yet this is an inevitable consequence of exchange-only models, since exchange is a conservative process” (Gallegati et al., 2006, p. 5; emphasis in the original). This criticism does carry some weight and shares similarities with the entropy-reducing objection above, although it is not free of its own critique (Richmond et al., 2006a). One may counter this criticism by arguing that the exponential income distribution and its money conservation hypothesis may only be approximately


valid in a partial segment of the population (Christian Silva and Yakovenko, 2005) and, perhaps, during short periods of time. One could then compare different short periods of time with one another in order to draw conclusions. That procedure, however, does not indicate the time windows in which this is supposedly valid, and basically mimics the comparative statics of neoclassical economics, a limited theory which econophysicists should not try to replicate. Therefore, such an approximation is best treated with great care, perhaps by considering the system as stationary rather than static in order to avoid falling into well-known theoretical traps of neoclassical economics.2

4.2.3.3 Production and the Signature of the Capital

Paul Cockshott and Zachariah (2014–2015) have also advanced an interesting reasoning sequence, based on Marx’s cycle of money (M) and commodities (C), in which the money conservation hypothesis is only partially contested. As discussed earlier (see p. 10), according to Marx (1867, part II) the circulation of commodities has the signature commodity, money, commodity, that is,

C → M → C        (4.23)

which has an inbuilt possibility of crisis, because if money is removed from circulation by being saved the goods cannot be sold, since one can only sell if there is a buyer. From the standpoint of the seller, the introduction of credit replaces the circulation above with the form

C → Fa → C        (4.24)

where Fa is a financial asset, that is, a credit in a bank. If both the purchaser and the seller have financial assets, there is monetary circulation and the financial asset just changes hands. This means that such circulation is in fact a scattering-like interaction, as in the physics of interacting particles, that can be represented by the following diagram

seller:  C  →  Fa
buyer:   Fa →  C        (4.25)

2 A static system is such that it does not change over time; this implies that its variables are not time dependent. A stationary system may be defined as one in which the time derivatives of its variables are constant or change very slowly over specific time intervals. Hence, static systems are special cases of stationary ones.


But, if the purchase is made on credit, the interaction above turns out to be represented as

seller:  C  →  Fa
buyer:   a  →  C + F        (4.26)

where a is credit and F is the financial liability, or debt. Therefore, in an exchange financed by credit the seller starts with a commodity and ends up with a financial asset, whereas the purchaser starts with credit and ends up with the commodity plus a debt. The exchange is funded by means of the creation of a financial asset–liability pair by a “debt creation operator” that gives rise to both Fa and F. The commodity C is conserved because it merely changes hands, and the financial asset–liability pair cancels out due to an accounting identity, that is,

Fa + F = 0.        (4.27)

The commodity conservation law that results from the interactions (4.25) and (4.26) is merely a conservation of value arising from the commodity changing hands in the pairwise economic exchange system. In summary, credit facilitates the movement of goods, but it does not create the goods, since the financial asset–liability pair cancels out after the commodity movement. In this sense fiat money as a measure of value is in fact conserved. Nevertheless, if the Marxian commodity cycle (4.23) leads to a conservation law, this is no longer the case in its circulation counterpart, the Marxian money cycle, also known as the signature of the capital (Paul Cockshott et al., 2009, pp. 237–240)

M → C → M′        (4.28)

where ΔM = M′ − M > 0 is the profit, or surplus value, which refers to a money increase above the initial amount used to produce the commodity. Considering production, the cycle (4.28) must be broken up as follows

M → P → M′        (4.29)

where

P : C → C + ΔC        (4.30)

is the production process function that uses the initial commodity C to generate the physical surplus product ΔC after the consumption required to produce C is deducted. Marx’s basic question was how to explain the money expansion ΔM. His answer was to find an explanation outside the realm of commodity exchange (Marx, 1867, chs. 5–6; Paul Cockshott and Zachariah, 2014–2015, p. 18). This problem is fundamental, inasmuch as capitalism compulsively seeks an indefinite money expansion by reinvesting the profit in further production capacity. In other words, the signature of the capital gives the following process of exponential growth

M → C → M′ → C′ → M′′ → C′′ → M′′′ → · · ·        (4.31)

which is a capital accumulation process that can only occur by interacting with other systems: labor, material conditions, natural resources, energy availability. This is clearly a non-conservative growth process that is disrupted when it encounters labor, material, or energy constraints. Therefore, the cycle (4.29), whose production process is detailed in function (4.30), should be expanded to the following diagram

            L, R, ε
               ↓
M     →       P       →     M′        (4.32)
               ↓
               E

where L stands for labor, R for natural resources (materials, arable land, availability of transportation systems, etc.), ε for energy (oil, coal, nuclear power, etc.), and E for the emissions resulting from the production process itself (waste and pollution). In other words, the creation of value ΔC, the surplus which causes the money expansion ΔM, inputs L, R, and ε and outputs E. Hence, the systems required in the creation of value are non-conservative and come from outside the exchange system, which preserves value and is, therefore, conservative. In fact, the value creation system, or production, works in parallel to the exchange-only, and money-preserving, system.

4.2.3.4 Value and Energy

Two points arising from the discussion above made by Paul Cockshott and Zachariah (2014–2015) are worth emphasizing. First, as mentioned, financial systems, banks, and the like do not create value, but only act as facilitators of the value-preserving, or money-conserving, exchange-only system, being in effect


outside of its realm. Second, it seems that in a sense Marx somewhat intuited the nonequilibrium thermodynamic conditions required for the creation of value, a point which can be further expanded by advancing the proposition that human labor may not be the biggest contributor to the creation of value in today’s world, but probably its smallest one, a reasoning that also seems to be partially shared by Keen (2012b) and Pokrovski (2018, ch. 11). Let us elaborate this point a bit further from the perspective of physics. The action of human labor in an economic system can be seen as work being performed on the system as defined in basic thermodynamics, that is, a change in the state of the system by an increase of its internal energy (Lemons, 2008). The word ‘work’ is used here in the sense of physics, that is, a change in the system’s energy as measured in joules or calories. So, available energy capable of doing work, either in the form of human and animal labor or of energy-powered devices, would be the ultimate creator of value. Material conditions, such as mining and construction, also require work-performing energy, and in modern economies most work, as defined in physics, is no longer performed solely by humans and animals, but mostly by engines powered by fuels or by energy produced in hydroelectric, nuclear, coal, oil, gas, wind, or solar-based power plants. It therefore seems reasonable to suppose that, since the energy coming from these sources actually performs most of the work on a modern economic system, they are ultimately the main sources of the creation of value. But work alone is not enough. Value creation requires purposeful action. For instance, transport is not diffusion, since it moves groups of masses to specific locations along specific routes, not at all at random. So, value can only be created if work-performing engines are associated with purposeful action, that is, labor (Paul Cockshott et al., 2009, ch. 1).
However, purposeful action includes information since, for instance, one cannot drive a car without knowing beforehand how to do so. There is an intriguing connection between Boltzmann’s entropy as defined in statistical mechanics and Claude Shannon’s (1916–2001) entropy as defined in information theory, in which entropy-lowering processes on the mechanical and information sides would be connected. We shall not delve further into this issue; the interested reader can find more on this topic in Paul Cockshott et al. (2009, chs. 2–3).

4.2.3.5 A Possible Compromise

Is it possible to reconcile these ideas with the money conservation model? One possibility is, perhaps, to think in terms of a two-class system, as in the model itself. The economic system creates the surplus value ΔC according to the production process function P, but this surplus would only negligibly add to the constant commodity amount C that already circulates among 99% of the population, an amount which is


exchanged and conserved in this segment and can be described by the exponential income distribution of the Boltzmann–Gibbs law of statistical equilibrium. Most of the produced surplus value ΔC then can only go to the remaining 1% of the population, whose distribution is not the exponential of the money conservation model based on statistical equilibrium, but a power law. This reasoning implies, by logical requirement, that this power law cannot come from a conservative statistical equilibrium description, since the 1% of the population receives a constant input of value and this population segment is non-conservative. There should also be a middle class, placed between these two classes, that gets some crumbs of ΔC. Some empirical support for this possibility will be discussed below.

Finally, it is important to place the objections to the money conservation model of Drăgulescu and Yakovenko (2000) in historical perspective. The point is that models in physics are most often built gradually, which means that the initial models incorporate just a few aspects of reality, frequently being not too far from toy models. But realism comes on a step-by-step basis, and there is no reason to suppose that this will be different for exchange-only models. Physical models incorporating full realism at their inception are not the rule and, if we recall Boltzmann’s epistemological theses discussed in Chapter 1, a model can gradually shed its inappropriate elements while retaining its appropriate residue (see p. 23 above). In fact, further studies and generalizations of these exchange-only models were made by different authors, who worked out analogies between the physical theories of elastic and inelastic particle scattering and pairwise exchange interactions in an attempt to bring more realism to these models. They included new features such as savings, wealth, commodities, networks, and taxes.
Hence, despite its limitations, the work above led econophysicists to quickly occupy a research niche basically left empty by academic economics, and that happened despite the claim of modern economics to have its roots in physics. We now consider some of the models that followed.

4.2.4 Models Based on Money Conservation

Section 4.2.2 presented the model of Drăgulescu and Yakovenko (2000) in its original form, but in order to better compare it with other models of the same or similar classes it is convenient to adopt the notation and approach advanced by Chakrabarti et al. (2013, pp. 55–56). Hence, this section will start by summarizing the money conservation model in the slightly different notation advanced by those authors, incorporating a few new remarks.

Let mi(t) and mj(t) be the amounts of money that agents i and j respectively hold at a certain time t before an exchange. If agent j can spare the amount of money


Δm to trade with agent i for a certain product, agents i and j can respectively be called the seller and the buyer in the exchange. If we now label by t + 1 the time after the exchange, Eqs. (4.12) may be slightly rewritten as

mi(t + 1) = mi(t) + Δm,
mj(t + 1) = mj(t) − Δm,        (4.33)

and the local conservation of money given by Eq. (4.13) may now be slightly rewritten as

mi(t) + mj(t) = mi(t + 1) + mj(t + 1).        (4.34)

If εij is the random fraction (0 ≤ εij ≤ 1) of the total money shared between the agents in an exchange, then the model of Drăgulescu and Yakovenko (2000) states that

mi(t) + Δm = εij [mi(t) + mj(t)].        (4.35)

This expression basically defines the fraction Δm of the total money that participates in the exchange. The following scattering-like diagram pictorially represents this exchange

seller:  mi(t)  →  mi(t + 1)
buyer:   mj(t)  →  mj(t + 1)        (4.36)

where the money of both agents i and j is redistributed in the market by means of the trade interaction. Note that, differently from diagrams (4.25) and (4.26), the product being traded does not appear explicitly in the diagram above, only the consequence of its trade, that is, the money redistribution between the agents. As discussed in Section 4.2.2, as t → ∞ the money distribution in this model becomes the Boltzmann–Gibbs one, given by Eqs. (4.9) and (4.11), whose numerical simulations are shown in Fig. 4.2. This result holds for a closed economic system where both the total money M and the total number of agents N remain fixed, a situation which corresponds to economic activity confined to trading only. In this case economic growth is very slow compared to trading. In addition, the model always evolves to a situation where most people, or agents, have very little money. If debt is allowed in the model, very little changes in this basic scenario, apart from an increase in the average money, given by Eqs. (4.21) and (4.22), and a small change in the lower threshold of the distribution, as shown in Fig. 4.3.


4.2.4.1 Savings

Chakraborti and Chakrabarti (2000) proposed a generalization of the basic money conservation model that includes savings by means of a savings propensity factor λ, in which at time t each trader saves a fraction λ of its money mi(t) and then trades randomly with the others (see also Chakraborti, 2002). This situation is formalized in the expressions

mi(t + 1) = λ mi(t) + εij (1 − λ)[mi(t) + mj(t)],        (4.37)
mj(t + 1) = λ mj(t) + (1 − εij)(1 − λ)[mi(t) + mj(t)],        (4.38)

where

Δm = (1 − λ){εij [mi(t) + mj(t)] − mi(t)}.        (4.39)

By definition, 0 ≤ λ ≤ 1, and the special case λ = 0 reduces this savings model to the original money conservation model given by Eqs. (4.33) and (4.35). Fig. 4.4 shows the results of numerical simulations with this model, where it is clear that as λ increases the most probable money value shifts away from that of the original money conservation model given by λ = 0. This means that for λ = 0 most agents have a very small amount of money, whereas the number of the richest agents, those having m larger than a given value m(t + 1), as well as the fraction of the total money owned by them, decreases exponentially with m(t + 1). If, however, λ ≠ 0 the shape of the distribution changes significantly, such that the fraction of those with little money decreases as λ increases. Numerical results suggest that the distribution arising from this model is closely approximated by a gamma function (Patriarca et al., 2004a, 2004b, 2010); however, this argument is solely based on numerical experimentation, as no demonstration has so far been provided of the generality of this result (Sinha et al., 2011, section 8.3; Chakrabarti et al., 2013, pp. 58, 116–119). Even so, Lallouche et al. (2010) have challenged this result by following a reasoning based on the calculation of the gamma distribution moments.

One should remark that the savings model above and Angle’s one-parameter inequality process are basically the same. In fact, Eqs. (4.37) and (4.38) are very similar to Eqs. (4.1) and (4.2). Hence, the savings propensity factor λ seems to play a role similar to that of the surplus in Angle’s theory. In addition, the Angle model also leads to a PDF that can be fitted quite well by a gamma function (see p. 157 above).

4.2.4.2 Other Models

Further generalizations of the basic savings model include: (i) savings that vary from agent to agent, that is, distributed savings; (ii) a time-varying savings propensity


Figure 4.4 Numerical results for the probability density distribution f(m) in terms of money values for the model with a savings propensity factor λ (see also Chatterjee and Chakrabarti, 2007, fig. 3). The simulations considered about 10⁷ transactions for each value of λ in systems having N = 500 agents. The continuous curves are fitting functions closely resembling the gamma distribution. The case λ = 0 reduces to the exponential decay shown in Fig. 4.2. As λ increases, the most probable money per agent shifts from m = 0 (for λ = 0) to m = 1 (for λ = 1). Reprinted from Patriarca et al. (2004a, fig. 1) with permission from Elsevier.
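The savings dynamics of Eqs. (4.37)–(4.38) can be sketched as follows. This is an illustrative implementation with arbitrary population and step counts, not the code used for the figure:

```python
import random

def savings_exchange(n_agents=500, lam=0.5, steps=200_000, seed=11):
    """Kinetic exchange with savings propensity lam: each agent keeps a
    fraction lam of its money and the rest of the pair's combined money
    is split by a random fraction eps, as in Eqs. (4.37)-(4.38)."""
    rng = random.Random(seed)
    m = [1.0] * n_agents            # average money per agent fixed at 1
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        eps = rng.random()          # the random fraction eps_ij
        pool = (1.0 - lam) * (m[i] + m[j])
        m[i], m[j] = lam * m[i] + eps * pool, lam * m[j] + (1.0 - eps) * pool
    return m

# The fraction of near-pauper agents (m < 0.1) falls sharply as lam grows,
# since the mode of f(m) moves from m = 0 toward the mean.
poor_no_savings = sum(x < 0.1 for x in savings_exchange(lam=0.0)) / 500
poor_savers = sum(x < 0.1 for x in savings_exchange(lam=0.8)) / 500
```

Each trade conserves m_i + m_j exactly, so the total money is fixed; only its distribution changes with λ.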

factor λ; (iii) savings models with commodities; (iv) models where N agents are connected in a network of N nodes. Nevertheless, all these models are basically of the same class and, therefore, share the restrictive assumptions of closed economies and markets, where the numbers of agents and commodities and the total money are fixed. Detailed discussions and references on these models can be found in Gupta (2006), Yakovenko and Rosser (2009), Sinha et al. (2011, section 8.3), Chakrabarti et al. (2013, sections 4.1–4.3), and Sharma and Chakraborti (2016).

4.2.5 Trading as Elastic Collisions of Scattering Particles

At this point a brief summary is in order so that some important results can be presented. All the models discussed above are physical models of kinetically interacting agents that elastically exchange energy in a scattering process. Anything that can be exchanged and conserved is interpreted as energy.


The econophysical application consists of interpreting energy as money, particles as agents, the elastic collision of a pair of particles as pairwise trading, and the entire interaction comprising the situations before and after the exchange as scattering particles. If particles are not created or destroyed and the energy is conserved, this means a closed economic system with a fixed number of agents and a fixed total amount of money. Random elastic collisions lead to the Boltzmann–Gibbs distribution, which is essentially the basis of the money conservation model of Drăgulescu and Yakovenko (2000). Despite this general result, Scafetta and West (2007) showed that even elastic collisions of a large number of ideal gas particles do not necessarily lead to Boltzmann–Gibbs distributions, once certain rules for the exchanged energy are introduced. These authors discussed in detail several cases of anomalous random energy exchange models, some of which are basically the same as the savings models above. The first case occurs when agents save a fraction λ of their energy before interacting. The second case is that of savings distributed among agents, that is, the savings are not the same for all agents. Next, instead of the exchanged energy being a fraction of the sum of the energies of both interacting agents, only a fixed amount is exchanged. The fourth situation is that of an energy exchange bounded by the energy of the less energetic of the two agents, with both agents having the same odds of losing or gaining energy, whereas in the fifth case studied only one of the two agents is favored to gain energy. The final situation consists of introducing a parameter that regulates the transfer of energy and depends on the less energetic agent. In all these models the resulting distribution is not the Boltzmann–Gibbs one, but a plethora of cases such as Gaussian, power-law tail, truncated exponential, gamma, and mixed exponential and inverse power-law distributions.
Scafetta and West (2007) also found evolving distributions that do not possess a stable equilibrium, this being the case in models where the energy exchanged is bounded by the energy of the less energetic of the two agents. In time, all the energy is absorbed by a single agent (see Fig. 4.5). Results for all cases were obtained by numerical simulations. So, it is clear that money conservation models possess a large variety of possibilities and their study should not be limited by their apparently very restrictive assumptions.

4.2.6 Taxes in Inelastic Collisions

So far we have reviewed energy, or money, conservation models based on scattering particles experiencing elastic collisions. However, non-conservative inelastic collisions can also be used to model money and wealth transfers through trading. We now look at a work of this type and its econophysical interpretations.


Figure 4.5 Numerical results for the PDF (Y axis) of an elastic scattering model whose energy exchange is bounded by the energy of the less energetic of the two agents. In each interaction the trader with less energy ε, or money, will at most lose or gain an amount of energy equal to its own. Although both agents have the same probability of losing or gaining, the less energetic one is at a higher risk because if it loses some energy it is less likely to gain it back in the next interaction. The distribution is not stable: in time all the energy is absorbed by a single agent, approaching a power law of the type 1/ε. The simulation uses N = 10⁵ agents with initial energy equal to unity, and the distribution is obtained after 10⁶ and 10⁷ collisions. Reprinted from Scafetta and West (2007, fig. 7) with permission from IOP Publishing.
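A hypothetical sketch of this bounded-exchange rule (my own illustration of the mechanism described in the caption, with a much smaller population than the original simulation):

```python
import random

def bounded_exchange(n_agents=300, steps=300_000, seed=5):
    """Elastic pairwise exchange in which the transferred amount is bounded
    by the energy of the poorer of the two agents; each side wins the coin
    flip with probability 1/2. There is no stable equilibrium."""
    rng = random.Random(seed)
    e = [1.0] * n_agents
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        w = rng.random() * min(e[i], e[j])   # bounded by the poorer agent
        if rng.random() < 0.5:
            e[i], e[j] = e[i] + w, e[j] - w
        else:
            e[i], e[j] = e[i] - w, e[j] + w
    return e

e = bounded_exchange()
# The poorer an agent becomes, the less it can ever win back per trade,
# so the median collapses while a few agents hoard nearly everything.
median_energy = sorted(e)[len(e) // 2]
richest_share = max(e) / sum(e)
```

Total energy is conserved trade by trade, yet the distribution keeps drifting: the typical agent's energy decays multiplicatively, illustrating the condensation toward a single holder described above.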

Guala (2009) proposed a model of wealth in which inelastic binary collisions are assumed to be equivalent to taxation. In other words, the loss of energy in inelastic collisions is interpreted as tax payments, which are then redistributed among the population according to some “state” rule. As usual, the model is of a closed economic system having a total amount of money (wealth) W and a total number N of agents, with all economic activity confined to trading. The model is then defined mathematically as follows. Let wi(t) be the wealth of agent i at a time t before each trade, which occurs in two steps. In the first step, labeled by time t + 1/2, two randomly chosen agents i and j trade their money in an inelastic collision such that a fraction of their money is lost to taxes. In the second step, labeled by time t + 1, the taxes are redistributed to the subset of the population, or agents, benefited by the redistribution policy. After these two steps the total wealth is conserved. This means that the quantity below is fixed


W = ∑_{i=1}^{N} wi(t).        (4.40)

The following diagram represents the inelastic scattering process

seller:  wi(t)  →  wi(t + 1/2)
buyer:   wj(t)  →  wj(t + 1/2)        (4.41)
              ↓
        taxes (energy)

whose energy loss is interpreted as taxes. The first step in the interaction is given by the expressions below

wi(t + 1/2) = (1 − f) εij [wi(t) + wj(t)],
wj(t + 1/2) = (1 − f)(1 − εij)[wi(t) + wj(t)],
wi(t + 1/2) + wj(t + 1/2) = (1 − f)[wi(t) + wj(t)],
wk(t + 1/2) = wk(t),  ∀ k ≠ i, j,        (4.42)

where εij is a random fraction (0 ≤ εij ≤ 1) and f is the fraction of the trade lost to taxes. In the second step, taxes are redistributed to individuals according to the rule below

wr(t + 1) = wr(t + 1/2) + (f/|S|)[wi(t) + wj(t)],  ∀ r ∈ S,
wr(t + 1) = wr(t + 1/2),  otherwise,        (4.43)

where S is the subset of the population benefited by the tax redistribution. If f = 0, Eqs. (4.42) reduce to Eqs. (4.33) and (4.35). Numerical simulations of this model assumed the average wealth as given by W/N = 1, which follows the definition of Eq. (4.11), and were made under two conditions: (i) S = N and f ≠ 0, that is, a tax redistribution that benefits the whole population, and (ii) S equal to the subset of the 20% poorest agents. In the first case the PDF has its most probable value at w = 0 when f = 0, shifting away as f increases, reaching a maximum and then decreasing back to w = 0 as f → 1. Fig. 4.6 shows this behavior, and the results seem to indicate an optimal level of taxation at which the distribution is more egalitarian, because the fraction of paupers decreases up to a certain level of taxes so that most people end up close to the average wealth. Fig. 4.7 shows a plot with the same results as Fig. 4.6, but presenting the most probable wealth value in terms of f. There is a clear indication

4.2 Kinetic Models of Income and Wealth Distribution


Figure 4.6 PDF of wealth distribution for different values of the fraction f lost to taxes in the exchange process modeled by inelastic scattering. The tax redistribution occurs nonselectively, i.e., for all agents (S = N). As f increases from the initial value of f = 0, the most probable wealth shifts away from w = 0, reaches a maximum, and then decreases back to w = 0 as f → 1. This indicates an optimal value for f (see Fig. 4.7). The simulations assumed N = 1000. Reprinted from Guala (2009, fig. 2a) under the Creative Commons Attribution 4.0 International License
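The two-step rule of Eqs. (4.42) and (4.43) can be simulated in a few lines. The sketch below is our illustration under stated assumptions, not Guala's original code: the function name `guala_step`, the seed, and the reduced numbers of agents and collisions are arbitrary choices.

```python
import numpy as np

def guala_step(w, f, S, rng):
    """One two-step update of Guala's (2009) model: an inelastic trade
    between two random agents (Eq. 4.42) followed by redistribution of
    the collected tax over the subset S (Eq. 4.43)."""
    N = len(w)
    i, j = rng.choice(N, size=2, replace=False)
    pool = w[i] + w[j]
    eps = rng.random()                       # random fraction in [0, 1]
    w[i] = (1.0 - f) * eps * pool            # first line of Eq. (4.42)
    w[j] = (1.0 - f) * (1.0 - eps) * pool    # second line of Eq. (4.42)
    w[S] += f * pool / len(S)                # Eq. (4.43): tax redistribution
    return w

rng = np.random.default_rng(0)
N = 1_000
w = np.ones(N)                               # average wealth W/N = 1
S = np.arange(N)                             # nonselective case: all agents
for _ in range(50_000):
    w = guala_step(w, f=0.325, S=S, rng=rng) # f near the optimum of Fig. 4.7

# the two steps together conserve the total wealth W
print(abs(w.sum() - N) < 1e-6 * N)
```

The selective case of Fig. 4.8 corresponds to choosing S as the indices of the 20% poorest agents, recomputed at each step.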

of an optimal level of taxes at f ≈ 0.325 for the nonselective tax redistribution model, where the economy is most egalitarian. The second case considered a model where the collected taxes are selectively redistributed, uniformly, to the subset S corresponding to the 20% poorest in the population. Fig. 4.8 shows the PDF of this case in terms of w for different values of f. The plot shows that for f ≥ 0.3 the distribution concentrates at w ≥ 1, which indicates that most people belong to the richer segment of the population and only a few remain in the poorest portion. Hence, the wealth of most people is slightly greater than the average, a situation that could be viewed as egalitarian socialism.

4.2.7 Who Should Pay More Taxes?

It is a common discussion in several modern societies whether the wealthier should pay more or less taxes than those who have and earn less. These discussions, however, should be made on scientific grounds, that is, in the light of real-world modeling or of studies of the long-term effects on societies of these two possibilities. In an attempt to investigate the long-term effects of both scenarios, de Oliveira (2017) advanced a very simple time-evolving probabilistic model relating tax payments to patrimony. The model considers taxpayers as agents


Figure 4.7 This graph shows the same results as Fig. 4.6, but presents the most probable wealth, the modal value of the distribution, in terms of the fraction f lost by taxes in an inelastic scattering collision for a nonselective tax redistribution (S = N). The average wealth is W/N = 1 and N = 1000 in the numerical simulation. The most probable wealth clearly reaches a maximum at f ≈ 0.325, where the tax redistribution attains its optimal situation. Reprinted from Guala (2009, fig. 3a) under the Creative Commons Attribution 4.0 International License

having a certain patrimony, or wealth, that increases in time due to earnings and to the redistribution of the agents' patrimonies caused by progressive, or regressive, taxation. Despite being very simple, the model reaches very interesting conclusions. We briefly review this model and its main conclusions. Let each of N agents have their patrimony at a certain time t given by Wi(t), (i = 1, 2, . . . , N), which is initially random and normalized as follows,

w_n = W_n / \sum_{i=1}^{N} W_i.    (4.44)

During a certain time unit, usually one year, each agent n has its patrimony Wn multiplied by a positive random factor in order to simulate the patrimonial evolution. In addition, at the end of this chosen time unit each agent n pays a tax rate, which should not be confused with a tax, given by the following linear function,

A + p w_n,    (4.45)

where A is a tax rate that all agents pay independently of their patrimony, like taxes paid on all consumer goods, and p is a taxation parameter that describes who in


Figure 4.8 PDF of wealth distribution for different values of the fraction f lost to taxes, but whose redistribution follows the rule S = 0.2N, where the subset is selected among those with least wealth. Hence, the taxes are redistributed to the subset formed by the 20% poorest in the population. For f ≥ 0.3 the distribution concentrates at w ≥ 1, which means that the wealth of most people is slightly greater than average, that is, a more egalitarian situation. Reprinted from Guala (2009, fig. 3d) under the Creative Commons Attribution 4.0 International License

the class hierarchy pays more or less taxes. Hence, p is the slope of the function relating the tax rate to increasing patrimony. If p > 0 the rich pay more taxes, which means progressive taxation, but if p < 0 the poor pay more taxes as a proportion of their taxable patrimony, that is, negative values of p mean regressive taxation. The constraint

0 < (A + p) < 1,    (4.46)

is required so that negative taxation is avoided when p < 0. Hence, at the end of each time unit the patrimony of agent n is iterated as below,

W_n → (1 − A − p w_n) W_n.    (4.47)

Let us now call wmax the patrimony of the richest agent. As the sum total of the patrimonies of all N agents is normalized to unity, we have that wmax < 1. Hence, the quantity (1 − wmax) is the sum total of all patrimonies except the richest one. So, the first iteration of Eq. (4.47) will produce a number for (1 − wmax) necessarily smaller than unity. Then, at each iteration this quantity will change due to the patrimonial redistribution.


Figure 4.9 Patrimonial, or wealth, evolution of a simple taxation model. The vertical axis shows the sum total, normalized to unity, of all wealth fractions except that of the richest agent, wmax. Each value of the time unit t corresponds to an iteration of Eq. (4.47). The curve indicated by an arrow is a power law for p = 0, being, therefore, the critical transition between the states of patrimonial evolution stability (above the power law) due to progressive taxation (p > 0), and collapse (below the power law) due to regressive taxation (p < 0). The final collapsed situation is the same in all simulations: all wealth ends up in the hands of a single agent. The curves above and below the power law indicate, respectively, increasing or decreasing values of p, namely p = ±0.005, p = ±0.01, p = ±0.02, and p = ±0.03. Reproduced from de Oliveira (2017) with permission from the author

The results after thousands of numerical iterations, where each iteration represents one time unit t, are shown in Fig. 4.9. The curve where p = 0, indicated by an arrow, is the critical transition from stability to collapse, and it becomes a power law after fewer than a hundred iterations. Above the power law all curves have p > 0 and tend to a stable distribution of wealth. Below the power law all curves have p < 0 and tend to collapse, that is, all wealth ends up concentrated in the hands of the richest agent wmax. To reach those results de Oliveira (2017) applied a binary probability distribution: at each iteration, or each time unit, agents either keep their patrimony, with 50% probability, or double it. This obviously means a huge increase in patrimony at every time unit, a 50% increase on average, but changing this does not alter the basic dynamics, since various distributions other than the binary one lead to the same results. Note that economic growth is indirectly taken into account by the patrimonial increase at each iteration. de Oliveira (2017) also expanded the model by following the evolution of wn by means of a mean-field approximation, but reached similar general conclusions.
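The iteration just described can be condensed into a few lines of code. The sketch below is our illustration, not de Oliveira's program; the function name `taxation_run`, the parameter values, and the number of iterations are arbitrary choices, and only the bare rule of Eq. (4.47), without any redistribution extension, is implemented.

```python
import numpy as np

def taxation_run(N=1000, A=0.1, p=0.01, steps=2000, seed=1):
    """Iterate de Oliveira's (2017) taxation model and return 1 - w_max,
    the quantity plotted in Fig. 4.9. Each time unit: patrimonies are
    kept or doubled with 50% probability each (the binary growth rule),
    then every agent is taxed at the linear rate A + p*w_n of Eq. (4.45)."""
    rng = np.random.default_rng(seed)
    w = rng.random(N)
    w /= w.sum()                              # normalization, Eq. (4.44)
    for _ in range(steps):
        w *= rng.choice([1.0, 2.0], size=N)   # binary growth distribution
        w /= w.sum()
        w *= 1.0 - A - p * w                  # tax iteration, Eq. (4.47)
        w /= w.sum()                          # renormalize the fractions
    return 1.0 - w.max()

# progressive (p > 0) versus regressive (p < 0) taxation, same growth noise
print(taxation_run(p=+0.03), taxation_run(p=-0.03))
```

With p < 0 the returned value is expected to drift toward zero over long runs (all wealth in a single agent's hands), while p > 0 counteracts the concentration; the number of iterations needed to see either regime depends on |p| and on the growth noise.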


As pointed out, the model is very simple, but the results are striking. In the long run, regressive taxation leads societies to collapse as all wealth ends up in the hands of a single agent. This result is just a different way of arriving at Marx's principle of infinite accumulation. It is obvious that this evolution can be lessened, and the final result delayed by several orders of magnitude of the time unit, by means of tax redistribution, that is, when governments or other agencies give some of the collected tax back to those who have less by, say, government subsidies or the welfare state; but this does not alter the long-term dynamics of regressive taxation, it only abates it. So, if p < 0, even by a tiny amount, the basic dynamics leading to societal collapse is unchanged.

4.2.8 Limitations of Kinetic Exchange Models

We have seen that kinetic exchange models based on both elastic and inelastic scattering processes can indeed be used to model some empirical features of trading in economic systems. In fact, the original objection that models based solely on kinetic exchange could not offer any realistic modeling of real-world economic phenomena because, as discussed in Section 4.2.3, economies are not conservative systems underestimated the richness of these models, as well as their ability to include non-conservative exchange, which was made possible by using the physics of inelastic particle collisions as a template. Nevertheless, there are, of course, limitations, the most important one being that all kinetic exchange models deal only with trading. So, there is no income in the strict sense of the word, that is, a flow, either as wage labor or capital return. Nor is there wealth accumulation, or capital stock, in most of these models. This is so because production is almost entirely ignored.
Despite these shortcomings, it would be improper to dismiss the ability of these models to describe processes that really go on in economic systems, however limited those descriptions may be. Remembering the methodological theses discussed in Chapter 1, finding such limitations is actually a strength, not a weakness, because once we are aware of them we may be able to know exactly under which conditions and circumstances these models can be applied to describe, and possibly predict, real-world phenomena. In addition, such exposure is fundamental for going beyond them, that is, for proposing new models that attempt to overcome these weaknesses and limitations (see Section 4.3). Besides, there still are unexplored roads, such as models with particle creation and destruction, which could, perhaps, be interpreted as some kind of production process. Another important aspect to point out about kinetic exchange models is that so far these models are clearly phenomenological, being basically semi-empirical rules inspired by observed phenomena. They did not originate from a theory


in which the empirical scenarios studied so far would arise as particular cases of a general theoretical formalism. Several authors have, however, tried to connect these semi-empirical models to equations of statistical physics, but the results are still unsatisfactory as far as a general theoretical framework is concerned, because so far those attempts are essentially just a collection of disjointed hints (Yakovenko and Rosser, 2009; Chakrabarti et al., 2013, ch. 5; Yakovenko, 2016). Therefore, the question of whether or not kinetic models could be more than just phenomenological descriptions of observed phenomena and serve as progenitors of a general theory of exchange and, perhaps, production is still unanswered.

4.3 Political Economy

Scafetta et al. (2004a) advanced a stochastic model of trade and investment based not on kinetic exchange, but on the conceptual framework of political economy. This is a very interesting model which contrasts with the kinetic exchange models discussed above not only conceptually, but also in the sense of being an out-of-equilibrium model. Especially because its conceptual framework is grounded in an important discussion opposing views based on political economy to those of neoclassical economics, it will be presented here in more detail.

4.3.1 Conceptual Background

The authors started their reasoning by arguing that wealth can be accumulated in two ways: by investment and by trading. Investment is any act that creates or destroys wealth, whereas trade is any type of economic transaction. In addition, they argue that there is a fundamental difference between price and value, a distinction that comes from classical political economy: price is what is paid for a product in a certain market at a particular time, whereas value is related to the long-term cost of production of a commodity. The difference between price and value is not present in neoclassical economics because this school of economic thought assumes that all trades occur in “equilibrium,” where these two quantities are equal, an assumption formalized in the controversial Say's law, which states that production is the source of demand, so demand and supply always match. This means that once a product is created it provides its own market; then the value one can buy is equal to the value one can produce. Classical economists, nevertheless, expected a divergence between value and price to occur, although they also expected that these two quantities would converge over time. This means that within the framework of political economy it is expected that trade will involve a transfer of wealth and then, differently from


kinetic exchange models based on elastic collisions discussed above, trades would be out of equilibrium because price ought to differ from value. In this respect, Scafetta et al. (2004a) stated that the existence of fluctuating levels of stocks of all kinds of unsold goods is manifest proof that real prices are not ‘market clearing’. Data show that . . . the market price does not converge towards value even after several centuries. Wealth transfer should therefore always occur in real trades because it is possible to buy or sell a commodity for a price that may be higher or lower than its value around which actual transaction prices fluctuate. Moreover, the total amount of wealth that can be transferred in trades from one agent to another should be constrained by the total amount of wealth of the poorer trader because a trader cannot (usually) afford to buy or sell a commodity whose value or price is larger than his/her own total wealth. (Scafetta et al., 2004a, p. 354)

In other words, as the price paid for, and the value of, a commodity transferred in a trade are different, the amount of wealth that changes hands in a trade is bounded, because the value and the price of a product cannot exceed the wealth of the poorer of the two agents in a pairwise exchange. The important concept arising at this point is that when the rich and the poor interact, they are well aware of their reciprocal social status, which means that the Boltzmann–Gibbs distribution is problematic in real trades because it assumes an equal likelihood that requires absence of information. In a negotiation, the poor always try to save money, or to get more money, when trading with the rich, a situation which constitutes added information, so that the outcome of the trade is not equally likely. These ideas were incorporated into a nonlinear stochastic trade–investment model, one of whose main elements is to take into account how prices emerge from negotiations among agents that belong to different social classes. In addition, since investment is also present in the model, another important element is the interaction of social classes that use different economic tools. The model proposed by Scafetta et al. (2004a, 2004b) that translates these concepts into equations is presented below.

4.3.2 The Model

Let Wi(t) be the wealth of agent i and N the total number of agents. The wealth evolution is then given by the following discrete nonlinear equation,

W_i(t + 1) = W_i(t) + r_i ξ(t) W_i(t) + \sum_{j=1, j≠i}^{N} w_{ij}(t),    (4.48)
that expresses the transaction in the temporal interval [t :t + 1]. Here ri ξ(t)Wi (t) is the investment term and ri > 0 is the individual investment index, which gives


the percentage of wealth Wi that agent i actually invests. ri is the standard deviation of the Gaussian random variable ξ(t), and the quantity wij describes the amount of wealth agents i and j exchange in a trade, which may change in each transaction. In order for the trade to incorporate a flow of wealth, let agent i be the seller and agent j the buyer. Then we may write the definition below,

w_ij = price − value.    (4.49)

Conversely, if i is the buyer and j the seller, we have that,

w_ji = value − price.    (4.50)

This means that this quantity is antisymmetric: w_ij = −w_ji. A trade interaction will then yield

(W_i, W_j) → (W_i + w_ij, W_j + w_ji),    (4.51)

where, similarly to Eq. (4.12), the total wealth is conserved in the exchange. If during the transaction occurring in the time interval between t and t + 1 there is no trade, there is also no transfer of wealth, and then w_ij = 0. Therefore, in this model wealth can only be created or destroyed by investment. For simplicity, w_ij is assumed to be a Gaussian random variable with probability density function given by

p(w_ij) = \frac{1}{σ\sqrt{2π}} \exp[−(w_ij − ⟨w_ij⟩)^2 / (2σ^2)],    (4.52)

where ⟨w_ij⟩ is the mean wealth that may move between agents i and j, and

σ = h ⟨W_ij⟩,    (4.53)

is the standard deviation of w_ij, which is assumed to depend on the variable

⟨W_ij⟩ = ⟨W_ji⟩ = min(W_i, W_j).    (4.54)

The definitions (4.53) and (4.54) incorporate the assumption that in a realistic trade the fluctuation of wealth that actually occurs in a real transaction is a fraction of the wealth of the poorer agent, as 0 ≤ h ≤ 1. The quantity h is interpreted as a poverty index due to Eq. (4.53). This means that the poorer a society is, the greater is the fraction of one's own wealth at stake in an exchange. Since σ depends on Wi and Wj, at each temporal step of the simulation its value is updated. The model is completely characterized once ⟨w_ij⟩ is specified. The authors argue that this quantity conditions the social differentiation among the members of a


society, since rich and poor approach an exchange in different ways. The poor are more careful because they are constrained by their poverty and look very carefully for opportunities to save money. The rich, on the other hand, are less constrained and more willing to buy regardless of the cost. Such an asymmetry tends to disappear when the two traders are economically equivalent. This reasoning is expressed in the following definition,

⟨w_ij⟩ = α_ij h ⟨W_ij⟩,    (4.55)

where α_ij is a nonlinear and out-of-equilibrium term that depends on the wealth of both traders, defined by the expression below,

α_ij = f (W_j − W_i)/(W_j + W_i),    (4.56)

where f > 0. The quantity f is called the social index. If Wi ≈ Wj, then α_ij ≈ 0, since both traders have similar odds of gaining or losing wealth. But if Wj ≫ Wi, which means that agent i is the poorer one, then α_ij ≈ f, the distribution p(w_ij) is shifted toward positive values, and the poorer trader i has a better chance of obtaining wealth in the trade. The mechanism by which the poor have a better chance of improving their own wealth is related to their own restricted resources, being different from both the mean-field approximation and kinetic theory, where the poor have the possibility of gaining an unreasonably large quantity of wealth from the rich. The process is out of equilibrium because the poor and the rich trade in different ways due to their social differentiation. The authors further explained this dynamics as follows.

The trade transaction has a higher probability to occur if the price is below a threshold at which the buyer would like to buy. Because this threshold increases with the total wealth of an agent, when a wealthy agent would like to buy something from a poorer agent, there is a higher probability that the transaction occurs at a higher price than when a wealthy agent would like to sell the same item to a poorer agent . . . This asymmetric disadvantage-advantage tends to disappear when the two traders are economically equivalently [sic]. (Scafetta et al., 2004b, p. 343)
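The full model of Eqs. (4.48)–(4.56) can be sketched as a pairwise Monte Carlo update. The code below is our illustrative reading, not the authors' implementation; the positivity guard and the reduced system size are added assumptions to keep the toy run well behaved and fast.

```python
import numpy as np

def scafetta_step(W, r, h, f, rng):
    """One pairwise update of the trade-investment model of Scafetta et al.
    (2004a): a Gaussian investment term r*xi*W (Eq. 4.48) for two random
    agents, followed by an antisymmetric trade transfer w_ij drawn from the
    Gaussian of Eq. (4.52) with mean alpha_ij*h*min(W_i, W_j) and standard
    deviation h*min(W_i, W_j) (Eqs. 4.53-4.56)."""
    N = len(W)
    i, j = rng.integers(N, size=2)
    if i == j:
        return W
    # investment step: xi(t) is a standard Gaussian scaled by the index r
    W[i] += r * rng.normal() * W[i]
    W[j] += r * rng.normal() * W[j]
    W[i] = max(W[i], 1e-9)   # positivity guard (our assumption, not the paper's)
    W[j] = max(W[j], 1e-9)
    # trade step: fluctuation bounded by the poorer agent's wealth
    wmin = min(W[i], W[j])                        # Eq. (4.54)
    alpha = f * (W[j] - W[i]) / (W[j] + W[i])     # social bias, Eq. (4.56)
    wij = rng.normal(alpha * h * wmin, h * wmin)  # Eqs. (4.52), (4.53), (4.55)
    W[i] += wij                                   # Eq. (4.51): w_ji = -w_ij
    W[j] -= wij
    W[i] = max(W[i], 1e-9)
    W[j] = max(W[j], 1e-9)
    return W

rng = np.random.default_rng(2)
W = np.ones(2_000)
for _ in range(100_000):
    # parameter values of Fig. 4.10: h = 0.05, f = 0.3, r = 0.075
    W = scafetta_step(W, r=0.075, h=0.05, f=0.3, rng=rng)
print(np.isfinite(W).all(), W.min() > 0.0)
```

Setting r = 0 recovers the trade-alone cases discussed next, while r > 0 feeds the multiplicative investment noise that, in the paper, builds the Pareto tail.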

4.3.3 Results of Numerical Simulations

The previous paragraphs showed that the nonlinear, stochastic, and out-of-equilibrium trade–investment model based on concepts of political economy proposed by Scafetta et al. (2004a, 2004b) is fully determined by three parameters: the individual investment index r, the poverty index h, and the social index f.


In addition, these parameters are constrained by the following conditions: 0 ≤ h ≤ 1, r ≥ 0, and f ≥ 0. The properties of the model were obtained by running about 2 × 10⁸ numerically simulated interactions with 10⁵ agents. The results provided both PDFs and CCDFs of the wealth distribution under three main assumptions for the values of the parameters of the model, and are summarized below.

(1) h > 0, f = 0, and r = 0. This case means assuming a symmetric trade-alone economy where ⟨wij⟩ = 0, which implies that in any exchange each amount of wealth transferred between agents is equally likely to be gained or lost by any trader. In this situation the available wealth becomes concentrated in the hands of a few agents very rapidly, because if σ in Eq. (4.53) is viewed as the risk that each agent i and j incurs during the trade, the poorer one is taking a greater risk. This is equivalent to a Maxwell's demon at work in the wealth dynamics, forcefully sorting out wealth in favor of the rich. The authors stressed that modern societies generally avert such a catastrophic situation, which implies that a statistical equilibrium between rich and poor is unrealistic, a conclusion similar to the one regarding the mean-field approximation, which yields a uniform distribution of wealth. Therefore, a mechanism is needed to dampen the differences between rich and poor, rather than amplify them, by means of a redistribution of wealth that is advantageous to the poor.

(2) h > 0, f > 0, and r = 0. According to Eqs. (4.55) and (4.56), this situation leads to ⟨wij⟩ ≠ 0, meaning a bias in favor of the poorer agent i, since it has a greater chance of drawing a positive value for wij. The distribution becomes narrower at increasing values of f, which means that the probability of becoming rich falls rapidly with increasing wealth. The distribution becomes stable and there is the emergence of a large middle class, a smaller poor class, and an even smaller rich class. The economy of the low and middle classes is then mostly characterized by trades. Moreover, this trade dynamics leads to a gamma distribution that fits the low and middle classes well, something that is observed in the phenomenological data. The authors stressed that the model balances two opposing contributions: the concentration mechanism and a redistributing factor due to a stochastic advantage to the poor.

(3) h > 0, f > 0, and r > 0. This is the most interesting case, because it examines the role played by investment in the economy when trade is also present. Increasing r, all other quantities being equal, leads to a greater economic disparity. The introduction of an


Figure 4.10 CCDF of wealth distribution for the stochastic and nonlinear trade–investment model with h = 0.05, f = 0.3, and r = 0.075. A power law with Pareto index equal to 1.5 ± 0.02 becomes clearly visible at the tail of the distribution, which represents the wealth of the rich. The inset shows an exponential fit at the mid-range segment of the curve, which might appear to indicate a Boltzmann–Gibbs behavior in this portion of the distribution. The wealth is measured in units of the poorest agent's wealth. Reprinted from Scafetta et al. (2004a, fig. 6) with permission from Taylor & Francis Ltd

investment component gives rise to a Pareto power-law tail of the curve, which is interpreted as representing the wealth distribution of the rich. In addition, a Boltzmann–Gibbs exponential can be fitted at intermediate ranges of the distribution. Both features are shown in Fig. 4.10. The authors stressed that the exponential fit is only apparent and does not justify the application of the Boltzmann–Gibbs distribution to the model. The simulations also indicate that the Pareto index of the tail segment decreases with increasing h, and increases with increasing values of f. The reason why the wealth distribution of the rich is different from that of the rest of society, as reproduced by the results of case (3) above, is that different segments of society rely on different economic instruments. As extensively discussed in Chapter 3, there is compelling empirical evidence that the rich mainly rely on investments, particularly capital gains, whereas the rest of society basically relies on trades, especially of labor (see pp. 109, 110, 137, and


Fig. 3.3 above). Scafetta et al. (2004a, 2004b) incorporated this evidence into the model by subdividing the N agents into two subgroups with different investment indices. This means rewriting Eq. (4.48) as follows:

ΔW_i = r_1 ξ(t) W_i(t) + \sum_{j=1, j≠i}^{N} w_{ij}, (1 ≤ i ≤ N_1),
ΔW_i = r_2 ξ(t) W_i(t) + \sum_{j=1, j≠i}^{N} w_{ij}, (N_1 < i ≤ N),    (4.57)

where

ΔW_i = W_i(t + 1) − W_i(t), N = N_1 + N_2, r_1 ≠ r_2.    (4.58)

These expressions are just an extension of the conditions used to simulate case (3) described above. Fig. 4.11 shows the CCDF of this double trade–investment model in two simulations where half the population has investment index r1 = 0.075 or r1 = 0.055, whereas the other half has r2 = 0 in both cases. It is clearly visible in the curves that this subdivision intensifies the difference between the segment of the distribution showing a Pareto power-law tail, consisting of about 1% of the population, and the other region, made up of the remaining 99%, which can be fitted by a gamma distribution. The authors stressed that by carefully choosing the parameters of the model it is possible to fit the anomalous phenomenological shape of the income and wealth distributions.

The results show that the basic assumptions of the model have indeed empirical bases as far as the observed general shape of the distributions of wealth and income are concerned. We interpret this result as a signal that such duality of economic mechanisms, pursued by different strata of society, may indeed be responsible for the observed dual behaviour of empirical curves . . . [and that] general features observed in empirical wealth distributions can be ascribed to specific activities that are known to be present in society. (Scafetta et al., 2004a, p. 362)

In summary, the overall contribution of this model can be summed up in its ability to reproduce known economic practices of different social classes, i.e., that investment is the main economic tool of the very rich while trade is the dominant economic tool of the majority of society. It also showed how these features are reflected in the empirically observed two-component shape of the income and wealth distributions of several countries at different time periods. Another important result revealed by the model was the possibility of fitting a Boltzmann–Gibbs exponential at the mid range of the resulting wealth distribution despite the fact


Figure 4.11 CCDF of wealth distribution for a stochastic and nonlinear trade–investment model that incorporates a two-tiered subdivision of society as defined by Eqs. (4.57) and (4.58). Half the population only trades, that is, has zero investment index, r2 = 0, whereas the other half does invest, in one case with r1 = 0.075 and in the other with r1 = 0.055. The social and poverty indices were kept fixed in both cases, respectively f = 0.3 and h = 0.05. The results show an intensification of the difference between the two segments of society as compared with the result shown in Fig. 4.10: the one that trades, about 99% of the people, consisting of the low and middle classes, whose distribution can be fitted by a gamma function; and the one that invests, about 1% of the population, forming the rich class, whose distribution is a Pareto power-law tail. The Pareto indices for the two cases are, respectively, 1.5 and 2.5, values well within the empirically determined range summarized in Eq. (2.93). Reprinted from Scafetta et al. (2004a, fig. 7a) with permission from Taylor & Francis Ltd

that the model is, in its most basic conceptual sense, very distinct from the class of kinetic models discussed in the previous sections, where an exponential shape of the distribution arises naturally due to the initial assumption of an economic equilibrium, a hypothesis absent from the model of Scafetta et al. (2004a, 2004b). Even so, this exponential fit is so tenuous that a slight change in the parameters changed the fitting function to a gamma function. Therefore, the sober assessment arising from these results is that, as already discussed in Section 2.5, simple data fits may tell us very little about the underlying dynamics of income and wealth, which means that one needs to be very careful in reaching general conclusions about this dynamics even in the presence of good data fits, exponential or otherwise.


4.4 Class Redistribution

The previous sections explored models that are in contradiction to, or go beyond, the original money conservation model discussed in Section 4.2.2, where it was argued in favor of the exponential income distribution for the majority of the population. However, besides making this case, Drăgulescu and Yakovenko (2001a) also observed that the Lorenz curve produced as if the income of the whole population were exponential reveals the degree of economic disparity among social classes. Let us see the details of this interesting analysis of the model. As discussed in Chapter 2, Eqs. (2.9) and (2.12) respectively define the y-axis and x-axis of the Lorenz curve (see Fig. 2.1). Hence, for the purely exponential distribution (PED), whose PDF may be written as

f_{PED}(x) = 100 B e^{−Bx},    (4.59)

(see Eq. 2.29), the Lorenz curve takes the simplified, and parameter-free, analytical expression below,

F_{1,PED}(x) = F_{PED}(x) + 100 [1 − F_{PED}(x)/100] \ln[1 − F_{PED}(x)/100].    (4.60)
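This Lorenz curve is easy to check numerically. The snippet below is a simple verification written for this text, not taken from the book; it uses the standard relation G = 1 − 2 × (area under the normalized Lorenz curve), which the definition in Eq. (2.16) is assumed to match, together with the book's percentage normalization.

```python
import numpy as np

def lorenz_ped(F):
    """Lorenz curve of the purely exponential distribution, Eq. (4.60),
    with F and F1 both expressed as percentages in [0, 100]."""
    u = F / 100.0
    return F + 100.0 * (1.0 - u) * np.log(1.0 - u)

# integrate the normalized Lorenz curve over [0, 1) by the trapezoidal rule
F = np.linspace(0.0, 100.0, 200_001)[:-1]   # drop F = 100, where ln diverges
x = F / 100.0
y = lorenz_ped(F) / 100.0
area = np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

G = 1.0 - 2.0 * area    # Gini coefficient from the Lorenz curve
print(round(G, 3))      # -> 0.5
```

The numerical value 1/2 matches the analytical integral: with u = F/100, the area under u + (1 − u) ln(1 − u) over [0, 1] is 1/2 − 1/4 = 1/4.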

Note that both of these expressions assume the maximum probability as being equal to 100%, which is the normalization adopted in this book (see p. 57 above). Substituting Eq. (4.60) into the definition of the Gini coefficient in Eq. (2.16), it is easy to show that for the PED the Gini index becomes

G_{PED} = 1/2.    (4.61)

It is worth remembering that, as discussed earlier (see pp. 57–61), the functions F1(x), F(x), and f(x) are dimensionally defined as percentage values valid in the interval [0 : 100], whereas x and G respectively have the intervals [0 : ∞] and [0 : 1] as their domains of validity. In addition, both results given by Eqs. (4.60) and (4.61) can also be easily obtained from, respectively, Eqs. (2.33) and (2.34) by taking the limit xt → ∞, which is the condition for turning the exponential–Pareto distribution discussed in Chapter 2, Section 2.3.1, into the PED. Fig. 4.12 shows a Lorenz plot where the points are the tax data of the USA (1979–1997) and the solid line represents the Lorenz curve for the PED given by Eq. (4.60). The points agree relatively well with the curve for most data, but deviate quite significantly as they approach the high-income region at the top of

4.4 Class Redistribution

191

Figure 4.12 The solid line is the Lorenz curve for the PED, whereas the points are the income data for the USA during 1979–1997. The empirical points clearly deviate at the top incomes from the Lorenz curve obtained from a PED, which is the Pareto tail excess income. The inset shows the Gini coefficient in the same period where the data deviate from the value 1/2 given by the PED. The Gini index has been steadily increasing during the studied time period, evidencing the rising inequality in favor of the rich in this country. Reproduced from Dr˘agulescu and Yakovenko (2001a, fig. 3) with permission from Springer Nature

the curve. Such discrepancy can be clearly attributed to the power-law tail of the income distribution which distorts the top incomes in the Lorenz curve obtained using the PED, an effect that can also be seen by revisiting Fig. 2.5 (see p. 88). Another way of illustrating this discrepancy is presented in the inset of Fig. 4.12, which shows that the Gini index of the PED given by Eq. (4.61) has been deviating from 1/2 since approximately 1985, meaning that a mechanism for raising inequality has been taking place in the USA since then. Dr˘agulescu and Yakovenko (2003) and Christian Silva and Yakovenko (2005) proposal to take into account the extra income in the upper tail of the distribution was to modify Eq. (4.60) to obtain the Lorenz curve for the exponential distribution with a Pareto tail distortion (ExPt), as follows


F_{1,ExPt}(x) = [1 - f_{pt}/100] {F_{ExPt}(x) + 100 [1 - F_{ExPt}(x)/100] ln[1 - F_{ExPt}(x)/100]} + f_{pt} H(F_{ExPt}(x) - 100),   (4.62)

where f_{pt} is the percentage fraction of total income contained in the Pareto power-law tail, and H(F_{ExPt}(x) - 100) is a step function equal to 1 for F_{ExPt}(x) ≥ 100 and 0 for F_{ExPt}(x) < 100. This means that the Lorenz curve undergoes a vertical jump of height f_{pt} at F_{ExPt}(x) = 100. Fig. 2.5 (p. 88) shows this behavior very clearly in the form of a jump at the top of the solid line produced by the Lorenz curve given by Eq. (4.62). In this figure most of the solid line fits the data points as a pure exponential, but there is a distortion at the top due to the fraction of total income contained in the power law, f_{pt} = 16%. This can be clearly seen by comparing Figs. 2.5 and 4.12. Regarding the Gini coefficient, the Pareto-tail distortion can also be corrected for by recalculating G using expression (4.62). The result may be written as:

G_{ExPt} = (1/2) [1 + f_{pt}/100].   (4.63)

Fig. 4.13 shows plots of the Lorenz curves for the income distribution of the USA for the years 1983 and 2000. The solid lines are the theoretical curves represented by Eq. (4.62), and the correcting vertical jumps of height f_{pt} are respectively equal to 4% and 19%. The inset shows the evolution of the Gini index, where the open circles were obtained using the theoretical formula (4.63). The data show very clearly an increase in income inequality in the USA over about twenty years in favor of the rich, due to the swelling of the Pareto tail, but the stock market crash of 2001 led to a decrease in that year. This is another indication that the economic tools of the upper class are different from those of the rest of society, as the income of the rich seems correlated with the ups and downs of the stock market, a conclusion similar to the one reached by Scafetta et al. (2004a, 2004b) using the model of trade and investment discussed in Section 4.3. Fig. 4.14 updates and expands the results shown in Fig. 4.13 regarding the evolution of the Gini coefficient and the Pareto index, and their correlation with fluctuations in the stock market, by enlarging the studied time window of the USA data and by presenting various plots which provide good evidence that such correlations are indeed empirically sound. Considering Thomas Piketty's results and the trends discussed in the previous chapter, similar results are very likely to be present in the rest of the world.
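Eqs. (4.62) and (4.63) are straightforward to encode. The following sketch (illustrative; the function names are ours) reproduces the jump of height f_pt at the top of the Lorenz curve and the corrected Gini values for the tail fractions quoted above:

```python
import numpy as np

def lorenz_expt(F, f_pt):
    """Lorenz curve of Eq. (4.62): exponential bulk plus a Pareto-tail
    'distortion' carrying a fraction f_pt (in percent) of total income.
    F and the returned values are percentages, as normalized in the book."""
    F = np.asarray(F, dtype=float)
    u = np.clip(1.0 - F / 100.0, 1e-300, 1.0)   # clip avoids log(0)
    bulk = F + 100.0 * u * np.log(u)            # the PED part, Eq. (4.60)
    jump = f_pt * (F >= 100.0)                  # f_pt * H(F - 100)
    return (1.0 - f_pt / 100.0) * bulk + jump

def gini_expt(f_pt):
    """Corrected Gini coefficient of Eq. (4.63)."""
    return 0.5 * (1.0 + f_pt / 100.0)

F = np.linspace(0.0, 100.0, 6)
for f_pt in (4.0, 19.0):   # tail fractions quoted for the USA, 1983 and 2000
    print(f_pt, gini_expt(f_pt), lorenz_expt(F, f_pt))
```

For f_pt = 0 the curve and the Gini index reduce to the pure PED results of Eqs. (4.60) and (4.61), as they should.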


Figure 4.13 The solid lines of this plot represent the Lorenz curves fitted to the USA income data for 1983 and 2000 using expression (4.62). Note the jumps at the top of both curves, which indicate the excess Pareto-tail income of respectively 4% and 19%. The inset clearly shows the steady increase in the Gini index during the studied period and its deviations from the value given by Eq. (4.61) for the PED. The open circles show the results given by the theoretical formula (4.63). Note that the authors reached these results using an expression for the Gini coefficient normalized to unity as the maximum probability rather than to 100, which explains why the expression in the inset differs from Eq. (4.63). Reprinted from Christian Silva and Yakovenko (2005, fig. 5) with permission from IOP Science.

As final remarks, Eqs. (4.62) and (4.63) are simpler ways of expressing analytically the second segment of the exponential–Pareto distribution (EPD) studied in Section 2.3.1, since the Pareto tail is treated in this section as a kind of perturbation of the PED at the top incomes. In fact, Eqs. (4.62) and (4.63) are, respectively, approximations of Eqs. (2.33) and (2.34) in the domain x_t ≤ x < ∞. Since the EPD reduces algebraically to the PED when x_t → ∞, the EPD approach has the cutoff value x_t, separating the exponential from the power law, already built into its expressions. This is additional information that may be useful depending on the application.


Figure 4.14 (a) Gini coefficient for the USA income distribution from 1983 to 2013 (connected line) compared with the theoretical expression (4.63), where the term 100 is absent due to the different normalization adopted in this book. (b) The Pareto power-law index for the tail of the distribution in the same time period. (c) The average income r for the whole society, the average income T for the lower, exponential, class, and the income fraction f_{pt} (denoted just f in the plot) of the rich class, the 1%. Note that the Gini and Pareto indices, as well as f_{pt}, tend to follow the ups and downs of the stock markets, a result that brings strong empirical support to the conclusion that the economic tools of the rich class are indeed different from those of the rest of society. Reprinted from Yakovenko (2016, fig. 1) with permission from Springer Nature.

4.5 Econophysics of Income Distribution and Piketty

The result for the Gini coefficient given in Eq. (4.61) applies to individual earners with a PED income. A related question is the distribution of family income when both spouses are earners (Drăgulescu and Yakovenko, 2003; Christian Silva and Yakovenko, 2005). If their individual incomes are also distributed exponentially, the family income is the sum of the two spouses' incomes, and the family PDF is the convolution of the individual distribution functions (Feller, 1971, pp. 6–7). Therefore, the PDF for the family income distribution of two earners (Fa2), each distributed exponentially according to Eq. (4.59) and under the currently adopted normalization defined by Eq. (2.3), results in the expression below

f_{Fa2}(x) = (1/100) ∫_0^x f(w) f(x - w) dw = 100 B^2 x e^{-Bx}.   (4.64)

The respective CDF and first-moment distribution are given by

F_{Fa2}(x) = 100 [1 - (Bx + 1) e^{-Bx}],   (4.65)

F_{1,Fa2}(x) = 50 [2 - (B^2 x^2 + 2Bx + 2) e^{-Bx}].   (4.66)

Considering Eqs. (2.16), (4.64), and (4.66), the Gini coefficient for a family of two earners, where the income of each spouse is distributed exponentially, yields

G_{Fa2} = 3/8.   (4.67)
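A brief simulation (ours, not from the book) can confirm Eqs. (4.64)–(4.67): summing two independent exponential incomes and computing the Gini coefficient of the sums should reproduce G_{Fa2} = 3/8.

```python
import numpy as np

rng = np.random.default_rng(1)
B = 1.0
n = 500_000

# Family income of two earners, each exponentially distributed: the sum
# follows the density 100 B^2 x e^{-Bx} of Eq. (4.64).
fam = np.sort(rng.exponential(1.0 / B, n) + rng.exponential(1.0 / B, n))

# Spot-check the CDF of Eq. (4.65) against the empirical CDF (in percent).
for xv in (0.5, 1.0, 2.0):
    ecdf = 100.0 * np.mean(fam <= xv)
    cdf = 100.0 * (1.0 - (B * xv + 1.0) * np.exp(-B * xv))
    print(xv, ecdf, cdf)

# Gini coefficient of the family incomes; approaches 3/8, Eq. (4.67).
gini = 2.0 * np.sum(np.arange(1, n + 1) * fam) / (n * fam.sum()) - (n + 1) / n
print(gini)
```

The drop from 1/2 to 3/8 reflects the fact that adding two independent incomes narrows the relative spread of the distribution.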

This value is close to the average Gini coefficient for the developed capitalist countries of the Western world (see Drăgulescu and Yakovenko, 2003, fig. 3), bringing confidence that this family description of income distribution is empirically sound, at least as far as the income data of some countries are concerned. Considering the results above in conjunction with Eq. (4.63), empirical evidence then suggests that one can write a general linear approximation for the overall Gini coefficient of some countries, parameterized by the purely exponential portion of the distribution plus a term that depends solely on the top incomes, as follows (Shaikh, 2017)

G ≈ a + (1 - a) f_{pt}/100.   (4.68)

Here f_{pt} is the percentage share of the top incomes described by a Pareto power-law tail, and a is a constant equal to 1/2 or 3/8. Since a = 3/8 is the value closer to the empirical findings, and since the top incomes change according to the ups and downs of the stock market, as shown in Fig. 4.14, the term f_{pt} is likely to be a proper descriptor of property income. This means that the rise in income at the Pareto tail, that is, among the 1% of the population, accounts for most of the rise in inequality. Nevertheless, as pointed out by Shaikh (2017), these results do not mean that the exponential portion of the distribution is dominated by labor income, since the data used by Drăgulescu and Yakovenko (2003) to reach the conclusion that the top incomes are largely dominated by property income lack information on the source of income. In order to try to differentiate labor income from capital income, or capital return in the sense of Piketty (see pp. 109 and 119 above), Shaikh (2017) used the indirect method of testing the hypothesis that the Pareto segment is basically composed of property income. This means reasoning on the basis of the results given by Eqs. (4.67) and (4.68) in order to propose the expression below as an empirical approximation for the overall Gini coefficient

G ≈ 3/8 + (1 - 3/8)(σ_cr/100) = 0.375 + 0.625 (σ_cr/100),   (4.69)

where σ_cr is the percentage share of capital return. Shaikh (2017) then used USA Internal Revenue Service data from 1947 to 2012 to calculate σ_cr and plotted the results against the Gini coefficient obtained from census data for the same time period. Fig. 4.15 shows the results obtained by Shaikh (2017) for the USA, where one can see that the data present a reasonably linear dependency between G and σ_cr.

Figure 4.15 Plot of the census Gini coefficient against the percentage share of property income σ_cr for the USA from 1947 to 2012. The data show a reasonably linear dependency between these two quantities, a result which brings empirical confidence that the Gini coefficient is largely determined by the percentage share of the top incomes in an economy. Note that the parameters of the fitted straight line in this figure, shown in Eq. (4.70), are not too dissimilar to the ones in the theoretical approximation proposed in Eq. (4.69), a fact that reinforces the interpretation that the exponential segment of the income distribution is largely dominated by the labor income of families with two earners. Reprinted from Shaikh (2017, fig. 8) with permission from Taylor & Francis Ltd.

The calculated parameters of the fitted straight line appearing in Fig. 4.15 are given as:

G = (census Gini)/100 = 0.249 + 0.646 (PISCG/100),   (4.70)

where PISCG stands for property income share with capital gains, that is, the share of capital return in the distribution, or σ_cr. Note that the fitted parameters in the expression above deviate from the first and second theoretical values of the approximation in Eq. (4.69) by, respectively, 34% and 3%. The conclusions that can be reached from these results are twofold, and mostly in line with the results of the model discussed in Section 4.3. First, the top incomes described by the Pareto power-law tail do seem to be largely dominated by capital returns, whereas the remaining segment of the income distribution, exponential or otherwise, is essentially determined by labor income. Second, it appears that inequality is in fact directly dependent on the percentage share of the top incomes in the distribution: the higher it goes, the more economically unequal the society. As a final comment, it is worth noting that Palma (2017, and references therein) reached the same conclusion, that inequality is basically determined by the share of the rich, but by a different methodological route. He defined the so-called Palma ratio, obtained by dividing the income share of the top 10% by that of the bottom 40%, as an indicator of inequality, and used it as a benchmark to compare inequality across countries. He also defined an "extra share" of the top 10%, the share required for the Palma ratio to reach unity, which is basically a measurement of how much the rich are "squeezing" the bottom and middle classes for extra income. The whole analysis is, nevertheless, just a different way of manipulating data from income distribution tables to indicate inequality with information already present in the Lorenz curve, as discussed above (see p. 110).
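As an illustration of how both the Palma ratio and Shaikh's approximation use only information already present in the Lorenz curve, the sketch below (ours; it assumes the PED Lorenz curve of Eq. (4.60), and the function names are illustrative) computes the Palma ratio implied by a purely exponential distribution and the Gini predicted by Eq. (4.69):

```python
import math

def lorenz_ped(F):
    """Lorenz curve of the purely exponential distribution, Eq. (4.60),
    with F and the return value in percent."""
    u = 1.0 - F / 100.0
    return 100.0 if u <= 0.0 else F + 100.0 * u * math.log(u)

def palma_ratio(lorenz):
    """Palma ratio: income share of the top 10% divided by that of the
    bottom 40%, both read directly off a Lorenz curve."""
    top10 = 100.0 - lorenz(90.0)
    bottom40 = lorenz(40.0)
    return top10 / bottom40

def gini_shaikh(sigma_cr):
    """Shaikh's linear approximation for the overall Gini, Eq. (4.69)."""
    return 0.375 + 0.625 * sigma_cr / 100.0

palma = palma_ratio(lorenz_ped)
print(palma)                 # Palma ratio of a purely exponential distribution
print(gini_shaikh(30.0))     # Gini implied by a 30% capital-return share
```

Any other parametric or empirical Lorenz curve can be passed to `palma_ratio` in the same way, which makes concrete the point that the Palma ratio adds no information beyond what the Lorenz curve already contains.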

Note added in proof. Another approach relevant to this chapter, for modeling pairwise exchange, was advanced by Bouchaud and Mézard (2000). They presented an agent-based model of wealth exchange whose distribution follows a Pareto power law for large values of wealth. A stochastic dynamic equation for wealth exchange with random trading then yields a phase transition for some values of the Pareto index, in which wealth ends up in the hands of a finite fraction of the population. They concluded that favoring exchanges or increasing taxes is an efficient way to reduce inequalities (see also Bouchaud, 2015). This approach was further developed by other authors, and the parametric results match the US and European wealth distribution data to a high degree of accuracy (Boghosian, 2019, and references therein). The overall conclusion is that in a free-market economy wealth is inclined to flow upward, and inequality is only limited by redistribution.

Part III Economic Cycles

5 Circular Flows in Economic Systems

Physicists and economists who attempt to address the dynamics of income distribution often start by assuming that it has to be the result of some stochastic process, which means framing this problem only in terms of probabilities and random variables (e.g. Chatterjee, 2015), as reviewed in the previous chapter. Nevertheless, one can envisage a different path, rooted in the physics tradition of dealing with macroscopic quantities without, at first, any consideration of their "micro foundations." Classical thermodynamics is an example of a centuries-old and extremely successful physical theory which deals with macroscopic quantities without any concern for their micro implications. Indeed, statistical thermodynamics came into being much later (see p. 4 above), at a time when the key concepts of classical thermodynamics were already well established and the theory had produced technological applications without the use of statistical or stochastic reasoning. Other classical, or nonquantum, physical theories, like electromagnetism and general relativity, also provide extremely useful results without any consideration of probabilities, random variables, or stochastic processes. Nonetheless, approaching the income distribution problem in dynamic terms by following this alternative path requires starting from a different economic standpoint than the one assumed so far, and then building models from it. The starting point chosen here comes from the centuries-old realization in economic thought that production in economic systems has a circular flow (Blatt, 1983; Kurz and Salvadori, 1995, ch. 13) from which growth and cycles emerge. This chapter discusses the income distribution problem in the dynamic context of the circulation, or reproduction, of economic quantities. As explained below, the core model for this approach is the Goodwin (1967) macroeconomic dynamics of growth with cycles. However, before discussing this model in detail, along with its theoretical extensions and empirical consequences, we need to start with a brief exposition of circular flow, growth, and cycle phenomena in economics, and revisit the concept of


economic uncertainty in the context of investment and business confidence. These are all key concepts that arise from the economic circular flow, being intimately connected to the ups and downs of markets, which, as seen in the previous chapter, directly affect the income distribution. Therefore, a discussion of these concepts is necessary to properly understand the income distribution models based on economic growth and trade cycles.

5.1 Economic Growth and Trade Cycles

The Tableau Économique proposed by François Quesnay in 1760 is probably the most important early contribution in economic thought to point out the essential economic phenomenon of circular flow. As discussed on page 8, the Tableau views the economic system as a cycle of deep interdependence and interrelationship among the various productive processes. Hence, economic exchange is seen as a circular flow of goods and money among economic sectors, a circulation driven by the surplus produced by farmers and consumed by landlords, intermediated by the circulation of money and goods among the sectors of the manufacturing industry. The concepts of production, distribution, accumulation, investment, and circulation can all be found in the Tableau. John Markus Blatt, a nuclear physicist and one of the pioneers of econophysics (see p. 35 above), wrote the following in his important book on the physical approach to economic dynamics.

In his Tableau Économique . . . Quesnay set out clearly that the economic system is not like a "fair" with independent buyers and sellers, but rather has a circular flow, like the flow of blood in the human body. The buyers are at the same time sellers, the sellers are at the same time buyers, and thus there is a causal nexus between sales and purchases much more stringent than envisaged in the "auction" of Walras. The science of economics has paid a heavy price for ignoring the insight of this great founder. Indeed, the only economists who have made significant progress in dynamic economics are those who have gone back, consciously or otherwise, to the Tableau Économique . . . Without [it], the problems of dynamic economics are insurmountable. (Blatt, 1983, p. 130; emphases and quotation marks in the original)

What is inherent in the Tableau, apart from the circular flow, is the concept of social surplus, which keeps the system going at least in steady state, as explained below. The simplest circular flow in an economic system is the one where the surplus is so small that virtually none is left to create a market in the modern sense of the word, much less to keep it going. Therefore, everything produced is dedicated to


the maintenance of the workforce's life, defense included, and to the inputs of the next year's production. In each period, say one year, production is basically enough to continue the cycle in the next period with identical production levels. The system is closed, as all goods produced within the system are consumed in it, so the market is inherently very limited, to the point of nonexistence in the modern sense. That is, there is very little money being exchanged and, hence, virtually none is available to buy other goods in a wider market, which, for this reason, is also nonexistent. Under these conditions the system is in a steady-state mode, balanced at the subsistence level. An example of a steady-state economic system is the medieval village in Europe, since there was very little trade within the village, and virtually no external trade at all. Long-distance trade was limited to small amounts of a few luxury goods. In fact, the decline of trade in Europe at the beginning of the Middle Ages was a consequence of the decline of the Roman Empire, and it took centuries for Europe to reverse this condition and achieve a significantly wider market for trading. Although the surplus was not exactly zero within the village, it was very small and could only support a small ruling class at a level of consumption higher than the rest (Blatt, 1983, ch. 2).

The steady-state mode of circulation can be replaced once the economic system is able to generate a fairly reliable and sizable social surplus, so that production is established for a wider market and the system takes off onto a growth path. Then the economy can move to its simplest dynamic regime, that of balanced growth, where all quantities increase uniformly with time, by the same growth rate in each period, while prices stay constant. It is outside the scope of this book to discuss balanced growth theories from an econophysical viewpoint, but the interested reader is directed to Blatt (1983, chs. 3–7) for a physicist's view of this problem. Nevertheless, the steady balanced growth path is dynamically unstable: under small deviations the system either goes to infinity or collapses to zero (Blatt, 1983, ch. 7). But this is only true in linear dynamic systems. Nonlinear systems can exhibit a limit cycle, analogous to the trade cycle in economics. Mathematically, a limit cycle is an isolated closed integral curve, also called an orbit, originating from the solution of a nonlinear system of differential equations, often ordinary and first order (Blatt, 1983, ch. 8; Gandolfo, 1997, pp. 355–357 and section 24.3; Shone, 2002, sections 4.10–4.12; Puu, 2003, section 2.4). Limit-cycle motions are found in all human-made engines, as they perform repetitive motions based on an external energy supply. So, a limit cycle requires outside energy to keep operating. But what about realistic economic situations that resemble limit cycles? The following excerpts from Blatt (1983) suffice to address this point (see also Boyd and Blatt, 1988, ch. II).
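To make the notion of a limit cycle concrete, the sketch below (ours, purely illustrative) integrates the van der Pol oscillator, the textbook example of a nonlinear system possessing a limit cycle; it is not one of the economic models of this chapter (the Goodwin system of Section 5.3 is of a different, Lotka–Volterra type), but the qualitative behavior is the same: trajectories started inside and outside the closed orbit converge to the same isolated periodic solution.

```python
import numpy as np

def vdp(s, mu=1.0):
    """Van der Pol equations x' = y, y' = mu (1 - x^2) y - x."""
    x, y = s
    return np.array([y, mu * (1.0 - x * x) * y - x])

def rk4_step(f, s, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(s)
    k2 = f(s + 0.5 * h * k1)
    k3 = f(s + 0.5 * h * k2)
    k4 = f(s + h * k3)
    return s + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# One trajectory starts near the unstable fixed point (inside the orbit),
# the other far outside it; both settle onto the same cycle.
amps = []
for s0 in ([0.1, 0.0], [4.0, 0.0]):
    s = np.array(s0)
    for _ in range(30_000):            # discard the transient (t = 0..300)
        s = rk4_step(vdp, s, 0.01)
    xs = [s[0]]
    for _ in range(700):               # one period is roughly t ~ 6.7
        s = rk4_step(vdp, s, 0.01)
        xs.append(s[0])
    amps.append(max(xs))               # amplitude of the settled orbit

print(amps)    # both amplitudes agree, independent of the starting point
```

This independence from initial conditions is exactly what distinguishes a limit cycle from the closed orbits of a linear (harmonic) system, whose amplitude depends entirely on where the motion starts.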


Looked at on an overall scale, the nineteenth century was a period of enormous economic growth. But this growth was by no means steady and balanced. On the contrary, there existed a clear and pronounced “trade cycle” (“business cycle” in America). Periods of slowly, and then more rapidly, increasing prosperity were followed by wild speculative booms, ending in a “panic.” Then the cycle started up again . . . If we take the period from the end of the 1816 depression to the panic of 1873, we can count 6 full cycles in approximately 57 years, an average of 9 to 10 years per cycle [in the laissez-faire committed British economy]. The actual intervals between panics ranged from a low 7 years, to more than 11 years. Thus the nineteenth-century trade cycle was by no means [regular] . . . neither was nineteenth-century growth at all steady or balanced. (Blatt, 1983, pp. 158, 160)

Although Blatt used as examples the nineteenth-century economies of Britain, the biggest one at that time, and the USA, the recent 2008 crisis also fits this scenario of unbalanced growth and collapse due to a shock, followed by panic and then recovery. Hence, Blatt's viewpoints were based upon a realistic observation of economic history, rather than on a priori idealized, non-fact-based presuppositions about how economies should, or must, behave. From this perspective his conclusions about how to approach economic theory are quite straightforward.

So, growth and cycles are two intertwined economic phenomena which emerge from the circular flow in economic systems. Since competitive capitalist systems undergo trade cycles (Rau, 1974, ch. 1), it is clear that they should be modeled by nonlinear dynamic systems. Once the economic system produces significant amounts of surplus, three questions arise (Blatt, 1983, pp. 22, 316) which have no meaning in a system at the subsistence level. (1) Who gets the surplus? (2) How is the surplus used? (3) Who chooses what to produce, if the surplus is big enough to allow such a choice? Once significant amounts of social surplus become available, the system can leave the steady-state mode and move onto a growth path, and in this new situation these three questions need to be addressed.


Regarding question (1), history tells us that the lion's share of the surplus always goes to society's economic elite,1 at present mostly in the form of capital return or capital gains (Chapter 3; Ruccio and Morgan, 2018). As for question (2), the economic elite can use the surplus for nonproductive economic items, e.g., war materials, consumption of luxury items and idle shows, etc., or can invest it in the economy itself. In the former case the surplus is frittered away and the economy either stagnates or falls behind relative to other economies, which is especially true in the case of excessive military spending (Kennedy, 1989). In the latter case, those who own most of the surplus forego consumption in favour of investment, and by doing so economic growth is achieved. Addressing question (3) requires discussing the composition of the economic elite. In broad terms it can be said that in modern capitalist economies the economic elite is basically divided into entrepreneurs and rentiers, who can either struggle politically or form political alliances to secure the power of choosing what to produce with the social surplus. Entrepreneurs were described by Keynes (1936, p. 161) as having animal spirits, that is, "a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities." Without animal spirits there is no progress. Rentiers, on the other hand, are low in animal spirits, so one of the problems of growth is how to persuade the wealthy to forego immediate consumption. This leads us directly to the concepts of uncertainty, confidence, and investment valuation in economics.

5.2 Uncertainty, Confidence, Investment, and Instability

Economic uncertainty was discussed in the previous chapter (see Section 4.1.2), where it was pointed out that the distinction between uncertainty and risk in the Knightian sense is a matter of knowledge.
Risk refers to situations where there is enough information for probabilities to be calculated, whereas uncertain situations are too imprecise for probabilities to be available. In both situations the actual future outcome cannot be predicted with certainty, but under risk the probabilities of the various future outcomes are known (Schinckus, 2009). Knightian uncertainty refers, in fact, to the limits of knowledge, a reality (see p. 25 above) that cannot be reached, where all future outcomes of a situation

1 The possible sociological distinctions for the terms economic elite, ruling elite, power elite, or ruling class are of no interest in the context of this book. The term 'economic elite' is used here to encompass all these groupings, and may refer to either or all of them, as well as to the following class categories defined in the previous chapters: the dominant class, the 1%, the rich, the rich class, the top 0.1%, or the super-rich.


cannot even be listed, let alone enumerated and their probabilities computed, an impossibility due to lack of information. In this situation probabilities are unknown and unknowable. However, this reality can possibly be studied in the future once more knowledge is developed and becomes available. This means that reality can be progressively discovered, and once new information is obtained probabilities can be computed for specific situations, which then become situations of risk.

5.2.1 Keynesian Uncertainty and Confidence

Economic uncertainty was also discussed by Keynes, but in the context of business and producers' decisions. He wrote:

So, for Keynes uncertainty is a subjective matter because it is the result of a way of thinking about the world. When the businessperson is faced with future situations whose outcome “we simply do not know,” Keynes wrote: [T]he necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to be summed. How do we manage in such circumstances to behave in a manner which saves our faces as rational, economic men? We have devised for the purpose a variety of techniques, of which much the most important are the three following: (1) We assume that the present is a much more serviceable guide to the future than a candid examination of past experience would show it to have been hitherto. In other words we largely ignore the prospect of future changes about the actual character of which we know nothing. (2) We assume that the existing state of opinion as expressed in prices and the character of existing output is based on a correct summing up of future prospects, so that we can accept it as such unless and until something new and relevant comes into the picture.


(3) Knowing that our own individual judgment is worthless, we endeavor to fall back on the judgment of the rest of the world which is perhaps better informed. That is, we endeavor to conform with the behavior of the majority or the average. The psychology of a society of individuals each of whom is endeavoring to copy the others leads to what we may strictly term a conventional judgment. Now a practical theory of the future based on these three principles has certain marked characteristics. In particular, being based on so flimsy a foundation, it is subject to sudden and violent changes. The practice of calmness and immobility, of certainty and security, suddenly breaks down. New fears and hopes will, without warning, take charge of human conduct. The forces of disillusion may suddenly impose a new conventional basis of valuation. All these pretty, polite techniques, made for a well-panelled Board Room and a nicely regulated market, are liable to collapse. At all times the vague panic fears and equally vague and unreasoned hopes are not really lulled, and lie but a little way below the surface. Perhaps the reader feels that this general, philosophical disquisition on the behavior of mankind is somewhat remote from the economic theory under discussion. But I think not. Though this is how we behave in the market place, the theory we devise in the study of how we behave in the market place should not itself submit to market-place idols. I accuse the classical economic theory of being itself one of these pretty, polite techniques which tries to deal with the present by abstracting from the fact that we know very little about the future. (Keynes, 1937, pp. 214, 215, emphases in the original)

So, it is clear from the excerpts that Keynes was acutely aware of the intimate relationship between uncertainty and the confidence of the "practical men" in taking business decisions, so much so that he devised a kind of rule of thumb on how they should behave when faced with true uncertainty. But what is the relationship of these two concepts, uncertainty and confidence, to trade cycles and market crashes from a Keynesian viewpoint? The following excerpt addresses this point.

For Keynes the inability of firms and households to 'know' the economic future is essential to understanding why financial crashes occur in an economy that uses money and money contracts to organize transactions. Firms and households use money contracts to gain some control over their cash inflows and outflows as they venture into the uncertain future. Liquidity in such an economy implies the ability to meet all money contractual obligations when they fall due. The role of financial markets is to assure holders of financial assets that are traded on orderly markets that they can readily convert these liquid assets into cash whenever additional funds are needed to meet a contractual cash outflow commitment. In Keynes's analysis, the sudden drying up of liquidity in financial markets, occasioned by sudden drops of confidence, explains why 'unfortunate collisions' occur – and have occurred more than a hundred times in the last 30 years. (Davidson, 2010b)

Blatt has also written on uncertainty in the sense of both Knight and Keynes, with additional strong criticism of neoclassical economic theories (Blatt, 1983, ch. 12, sections A, F). He also has the following comments regarding how investors deal with uncertainty in the real business world:

The sensible men . . . [reacts] to true uncertainty by ordering his affairs so that he survives no matter which outcomes actually comes to pass . . . The golden rule is "safety first" . . . [which] is quite different from "avoid all risky projects." Rather, if a project is risky and you want to go into it nonetheless, make sure that someone else gets stung if it fails . . . To a sensible businessman, the effects of uncertainty are not something to be calculated by some theory. They are something to be avoided like the plague . . . Risk taking is for suckers. (Blatt, 1983, pp. 267, 268; emphases and quotation marks in the original)

These points bring us back to the relationship of uncertainty and confidence with investment. Keynes (1936) emphasized investment as being a crucial variable in macroeconomics. But to invest the investors need confidence in the future prospects of their businesses. As surveyed by Boyd and Blatt (1988, ch. III), many economic scholars before Keynes, some from previous centuries, were acutely aware of the importance of this factor. And although entrepreneurs possess enough confidence, or 'gut instincts,' in the future prospects of their businesses and act even in the face of economic uncertainty, rentiers, on the other hand, are low in animal spirits and do not share those instincts. How, then, to persuade the wealthy to forego immediate consumption in order to invest toward economic growth?

The answer is to promise them the moon in future dividends, and to make sure that, in the meantime, share "values" on the exchange keep rising, so that the occasional skeptic can sell his shares at a profit, thereby assuring all the other investors that their "values" are safe and sound. (Boyd and Blatt, 1988, p. 107, quotation marks in the original)

5.2.2 Investment and Minskyian Financial Instability

At this point we must consider the contribution of Hyman Minsky (1919–1996) in connecting the concepts above to trade cycles and market crashes. He advanced what became known as the Minsky financial instability hypothesis (Minsky, 1977), a theory that came under the spotlight after the market crashes of the last decade of the twentieth century and the first decade of the twenty-first. The next lines present a succinct summary of Minsky's ideas based on the more detailed expositions provided by Boyd and Blatt (1988, section III-G), Keen (1995, 2012a), and Aliber and Kindleberger (2015).

One key aspect of Minsky's approach to the problem of trade cycles and market crashes in capitalist economies comes directly from his reading of Keynes's (1936) General Theory.

In Keynes's theory, "time" is calendar time and the future always is uncertain. Thus investment and financing decisions are made in the face of intractable uncertainty, and


uncertainty implies that views about the future can undergo marked changes in short periods of time. (Minsky, 1977, p. 8)

Investment in Minsky's theory is mainly financed by the increase in the money supply due to banks issuing loans to entrepreneurs, and such an increase causes an increase in debt, which finances not only investment but also speculation. Financial entities are subdivided into several classes, but the two main ones are hedge finance, whose anticipated cash flows are capable of meeting all future debt commitments, and speculative finance, exemplified by Ponzi financiers, who borrow not to invest but to buy existing assets in the hope of profiting by selling those assets on a rising market. Entrepreneurs' debts can be serviced and repaid from future profits, but Ponzi financiers always have debt servicing costs above the cash flows from the assets they purchased with borrowed money. They therefore must keep expanding their debts or sell assets to continue operating.

Minsky's viewpoint at its core is very straightforward. In good times investors take on risk, and the longer times stay good, the more risk they take on, until they have taken on too much risk. Eventually, a point is reached where the cash generated by their assets is no longer enough to pay off the huge amounts of debt they took on to acquire them. Losses on speculative assets prompt lenders to call in their loans. This leads to a collapse of asset values. Afterwards, prices deflate and the cycle restarts. Hence, the stages of the Minsky cycle are boom, crisis, panic, deflation, stagnation, expansion, recovery, and then boom again.

In more detail, the cycle proceeds as follows. The beginning of euphoria occurs when both lenders and borrowers believe that the future is assured and, hence, most investments will succeed. The euphoria allows the development of Ponzi financiers, who profit by trading assets on a rising market and incur significant debt when doing so. Increasing debt eventually affects the viability of many business activities, forcing them to sell assets to service their debts.
The price boom is then checked, Ponzi financiers can no longer trade at a profit, and debt levels cannot be serviced from the cash flows of the businesses they now control. Banks that financed these purchases find that their customers can no longer pay their debts, leading to an increase in interest rates. Liquidity is suddenly much more valuable, and those who hold illiquid assets attempt to sell them in return for liquidity. The market is then flooded by sellers and the euphoria becomes a panic; the boom becomes a slump. The turning point is now known as the Minsky moment, that is, a sudden major collapse of asset prices. In other words, when investors are forced to sell even their less speculative assets to be able to pay their debts, markets spiral lower and that creates a severe demand for cash. That is the point when the Minsky moment arrives.


As the boom collapses due to the divergence between debts and cash flows, so does investment, leading to a deflation of asset prices that brings cash flows and asset prices back into harmony. Then the cycle starts over again. Government intervention may improve the situation when the boom becomes a panic by means of spending, using mechanisms like large-scale asset purchases, also known as quantitative easing, to soften the blow and try to prevent too many businesses from going bankrupt. This mechanism was widely used by several central banks in the aftermath of the 2008 crisis (Bernanke, 2015; Hudson, 2017, p. 189). Another way to try to prevent this situation from happening in the first place is by means of tight government regulation of bank loans to avoid speculative over-lending.

In conclusion, Minsky's hypothesis is that a financial system initially stable and consisting largely of hedge finance is gradually transformed into one dominated by speculative Ponzi finance. So, "stability is destabilizing." An exposition of Minsky's ideas applied to real-world financial bubbles and crashes in a historical perspective encompassing hundreds of years, from the Dutch Tulip Bulb Bubble in 1636 to the recent Lehman Brothers crisis of 2008, is provided by Aliber and Kindleberger (2015). Nikolaidi and Stockhammer (2017) also discuss Minsky's ideas and survey several models based on them.

5.3 The Goodwin Growth-Cycle Macroeconomic Dynamics

The previous sections presented a condensed discussion about the connection between some key macroeconomic concepts, namely, circular flow, investment, and trade cycles, as well as the associated phenomena of market boom, collapse, panic, and recovery. These features are seen as essential phenomenological facts from which the econophysics of income distribution macrodynamics must start off. Models without at least a couple of them are dismissed as unrealistic, a situation which leaves us with very few items in our inventory of macroeconomic models (Blatt, 1983, p. 162). And the first remaining item of relevance in the context of this book is the growth-cycle macrodynamics proposed by Goodwin (1967), whose roots lie in the analysis made by Marx (1867) on the interconnected dynamics of the percentage shares of unemployment, labor, and profits.

5.3.1 The Marx Circuit

The descriptive dynamics advanced by Marx on economic booms and busts (see p. 11 above) clearly links trade cycles to the income of the two main classes in Marx's analysis: workers and capitalists. According to him, in a boom profits increase and unemployment decreases, but this is followed by a bust, since less unemployment reduces the profit margins, which leads to higher unemployment


Figure 5.1 Schematic representation of the Marx circuit viewed as a dynamic system in the phase space. The relationship between the percentage share of labor u in the national income versus the employment rate v would ideally produce a closed clockwise orbit in the u–v phase plane. Since the national income is a finite quantity, the profit share U and the unemployment rate V are given by their respective complements, cf. Eqs. (5.1). In this phase portrait profits increase in the arc DAB and fall in the arc BCD, whereas unemployment increases in the arc CDA and falls in the segment ABC of the orbit. In a boom, represented by the arc segment AB, both profits and employment increase, whereas both decrease in the bust segment CD. Clearly, at point A we have v̇ = 0, u̇ < 0, and U̇ > 0, whereas point B yields u̇ = 0, v̇ > 0, and V̇ < 0. It is important to note that this figure is for illustrative purposes only and, therefore, should not be taken at face value. This is so because a realistic cycle will most likely not be symmetric or follow the same time interval at each cycle stage, which means different arc lengths for each period of the cycle. A realistic cycle will not even necessarily form a closed orbit with a fixed center, as will be discussed below.

followed by an increase in the profit margins and investment so that a new boom starts. The cycle is then repeated by another bust, and so on and so forth. Marx’s model is only verbal, but in today’s mathematical terminology it can be interpreted as an orbit, a limit cycle in a phase plane. In other words, it is a closed orbit in the phase portrait. The interpretation of this dynamics, which may be called the Marx circuit, is schematically shown in Fig. 5.1. The two main quantities that define the Marx circuit are the percentage share of labor u and the employment rate v, the percentage fraction of the total population who is employed. Both of them are time-dependent quantities, that is, u = u(t) and v = v(t), and are normalized by the maximum value adopted in this book: 100%. Hence, dimensionally, [u] = [v] = % (see footnote at p. 122 above). Therefore, the profit share U and unemployment rate V are, respectively, given by their complements, written as:


U(t) = 100 − u(t),    V(t) = 100 − v(t).    (5.1)

Clearly, [U] = [V] = %. The boom and bust situations are respectively represented in Fig. 5.1 by the arcs AB and CD. Note that the boom is characterized by the increase in both profits and employment, that is, when U̇ > 0 and v̇ > 0, where the dot means time differentiation, as already defined in Eq. (3.14). In the bust segment both quantities decrease, having then negative time derivatives, so U̇ < 0 and v̇ < 0.

5.3.2 Growth-Cycle Macrodynamics

Richard M. Goodwin (1913–1996) used the Marx circuit as inspiration for his model in order to give a mathematical form to Marx's conceptual ideas about cycles in capitalism and to show how cyclical behavior could arise from very simple economic hypotheses (Goodwin, 1967; Blatt, 1983, section 10.C; Gandolfo, 1997, section 24.4.3; Moura and Ribeiro, 2013; and references therein). In other words, one of the aims was to show that growth and cycles result from an extremely simplified nonlinear representation of the economy.

The traditional way of presenting the Goodwin model in the literature heavily emphasizes its underlying economic ideas and hypotheses (e.g., Gandolfo, 1997). Nevertheless, this path will not be taken here, let alone the approach taken by those who seem to confuse with pure mathematics what is essentially an empirically based model. Hence, the emphasis will be on testing the model, that is, on testing the resulting equations against empirical data, which means that the underlying hypotheses will take a back seat here. As brilliantly stated by Richard Feynman, rather than tinkering with models, the method of guessing the equations is a more efficient way of finding new laws of nature (see p. 45 above).

5.3.2.1 Definitions

Let Y be the output that an economy can generate and K the amount of fixed capital, that is, plants, equipment, properties, etc. Let us now assume a capital to output ratio β that is fixed over time, which may be written as

β = 100 K/Y.    (5.2)

This relationship is known as the accelerator assumption. The analytical definition of this quantity is similar to Piketty's capital/income ratio introduced earlier (see p. 120 above), but one should note the possible caveat in the text above Eq. (3.13). Assuming such similarity, β clearly varies with time in a country, as shown in Piketty's historical data presented in Fig. 3.1, but let us consider it static in this


model as a first approximation, which means that variations in β can only be neglected for Δt < 10 a (see Eqs. 3.35 and 3.36). A constant capital to output ratio is also known as Harrod-neutral technical progress in the economic literature. Regarding dimensions, following the discussion of Section 3.4.3, those in Eq. (5.2) are [β] = % · a, [Y] = ¤ · a⁻¹, and [K] = ¤, where ¤ denotes the generic currency unit (see p. 58).

Let ℓ be the amount of employed labor, that is, the number of working people or labor force. For a given output, let A be the output to labor ratio, as follows,

A = Y/ℓ.    (5.3)

This ratio is the labor productivity, or the output per unit of labor. Since ℓ is a dimensionless quantity, A has the dimension of generic currency units and inverse of time, that is, of a flow: [A] = ¤ · a⁻¹. The model assumes a steady growth rate of labor productivity, which may be written as:

A = ζ exp(a₁ t),    (5.4)

where ζ and a1 are positive parameters dimensionally defined as [ζ ] = · a−1 and [a1 ] = a−1 . Clearly [t] = a. The amount of employed labor is a fraction of the total population and, therefore, it may be written in terms of the population number N and the employment rate v, yielding Nv . (5.5) 100 Here N is a dimensionless quantity representing the potential labor supply. This is assumed to grow steadily according to the following expression =

N = c1 ec2 t ,

(5.6)

where c₁ and c₂ are constant positive parameters. By definition, [c₁] = 1 and [c₂] = a⁻¹. The cost of the labor force L, that is, the total amount of wages in an economy, and the average wage value w are related by the expression below,

L = wℓτ,    (5.7)

where τ is the time constant, equal to unity for annual time windows. The ratio L/τ is the wage bill (see p. 125 above) and the average wage value w is a flow. Hence, dimensionally we have that [L/τ] = [w] = ¤ · a⁻¹ and [τ] = a. The original model also assumes as a first approximation that the employment rate v can be related to the rise of wages as follows,

ẇ/w = f₁(v)/τ,    (5.8)


where τ is introduced for this expression to become dimensionally homogeneous, since this function is chosen to be dimensionless (see below). Nevertheless, for all practical purposes τ has the value of unity, because most empirical applications consider annual data, so τ should not interfere in the expressions unless annual uncertainties are being evaluated (see discussion on pp. 124 and 126 above). It should be emphasized that Eq. (5.8) was not derived, but supposed valid from the start by the model, which means that the function f₁(v) is assumed to be monotonically increasing with v (Gandolfo, 1997, fig. 24.7). As a consequence, the rate of change in wages is sensitive to the employment rate v or, inversely, to the unemployment rate V. If v is much smaller than 100%, which means high unemployment, wages tend to decrease. Conversely, if it approaches full employment, that is, when v → 100, wages tend to rise and f₁(v) becomes arbitrarily large.

This relationship between the wage rate and unemployment has some empirical justification. It is known today as the original, or traditional, Phillips curve (Phillips, 1958), due to the empirical studies made by the New Zealand economist Alban William Phillips (1914–1975). Nevertheless, as noted by Blatt (1983, p. 207), the earliest empirical justification dates back to Karl Marx's observations that led him to the concept of the labor reserve army, which states that capitalism requires the existence of a large group of unemployed workers in order to keep the wages of the employed workers in check (see p. 11 above). We shall see below that choosing f₁(v) to be a linear function means approximately assuming the original Phillips curve in the model, but this is just a particular choice, and not necessarily the most realistic one.

The wage share may be simply defined as the dimensionless equivalent of the percentage share of labor u, that is,

u/100 = L/(τY).    (5.9)

The profit level P is a flow, being defined as:

P = Y − L/τ.    (5.10)

The model also assumes that all profits are invested, which means that the possibility of some money or other financial assets being saved is ignored, this being a feature of simple models with no financial sector. So the profit share, which is the share of output going to capitalists, yields

U = 100 P/Y.    (5.11)

Finally, the rate of increase in capital K̇ is given by

K̇ = (U/100) Y = [(100 − u)/100] Y.    (5.12)


This expression actually means that the flow of investment I is equal to the change in capital, which in itself is the same as the profit P. This becomes clear by putting those quantities together:

I = K̇ = P,    (5.13)

where [I] = [K̇] = [P] = ¤ · a⁻¹. So, once again, in this formulation all profits are invested. One should note that Eq. (5.13) does not take into account capital depreciation, which would mean a term subtracted from its right-hand side. Capital depreciation in the Goodwin model will be discussed in the next chapter, Section 6.4, along with other additional features.

5.3.2.2 Differential Equations

The set of dynamic equations of the model can be obtained as follows. Differentiating Eq. (5.2) and considering Eq. (5.12), we obtain

Ẏ/Y = (100 − u)/β.    (5.14)

Differentiating Eqs. (5.3) and (5.5) respectively yields

ℓ̇ = (Ẏ − a₁Y)/A,    (5.15)

v̇ = (100/N) ℓ̇ − c₂ v.    (5.16)

Substituting Eq. (5.15) into Eq. (5.16) and considering Eqs. (5.3), (5.4), and (5.14), we reach the expression

v̇/v = (100 − u)/β − a₁ − c₂.    (5.17)

One may also write the expression above in the following functional form,

v̇/v = f₂(u)/τ,    (5.18)

where f₂(u) is a dimensionless function. Defining

a₂ = 100/β − a₁ − c₂,    (5.19)

b₂ = 100/β,    (5.20)

where a₂ and b₂ are positive parameters whose dimensions are given as [a₂] = [b₂] = a⁻¹, we obtain from Eq. (5.17) the first differential equation of the Goodwin model


v̇/v = a₂ − b₂ u/100.    (5.21)

Note that a₂ > 0 for (100/β) > (a₁ + c₂). To find the second equation of the model, let us start by differentiating Eq. (5.9). Considering Eqs. (5.3), (5.4), and (5.7), the result is:

u̇ = (100/A)(ẇ − a₁w).    (5.22)

Bearing in mind the proposition (5.8), we can include it in the expression above and assume the linear, monotonically increasing Phillips curve below,

f₁(v) = τ b₁ v/100,    (5.23)

where b₁ > 0 and [b₁] = a⁻¹. Thus, we reach the second differential equation of the Goodwin model,

u̇/u = −a₁ + b₁ v/100.    (5.24)

In summary, the equations of the Goodwin model that are aligned with Marx's ideas (Goodwin, 1967; see also Gandolfo, 1997, section 24.4.3) are

u̇/u = −a₁ + b₁ v/100,    (5.25)

v̇/v = a₂ − b₂ u/100.    (5.26)

Here a₁, b₁, a₂, b₂ are positive parameters with dimension of a⁻¹. The Goodwin model is then described by a coupled nonlinear system of two first-order ordinary differential equations.

5.3.2.3 Dynamic Properties

The first integral of the equations of motion (5.25) and (5.26) is in effect a conservation law, giving the path of this system. Let us divide the former equation by the latter and, remembering that

du/dv = (du/dt)(dt/dv) = u̇/v̇,    (5.27)

Eqs. (5.25) and (5.26) may be rewritten as follows,

du/dv = [(b₁v − 100a₁) u] / [(100a₂ − b₂u) v].    (5.28)

The indefinite integration of this expression, after the variables are separated, yields

100 a₂ ln u − b₂ u = b₁ v − 100 a₁ ln v + E,    (5.29)


where E is an integration constant. If we denote by φ(u) the function on the left-hand side of Eq. (5.29) and by ψ(v) the function on its right-hand side, the expression above reduces to

φ(u) − ψ(v) = E.    (5.30)
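As a quick numerical illustration of this conservation law, the system (5.25)–(5.26) can be integrated with a standard fourth-order Runge–Kutta step while the value of E in Eq. (5.30) is monitored along the trajectory. The parameter values below are arbitrary illustrative choices, not empirical estimates:

```python
import math

# Illustrative (non-empirical) parameter values, all with dimension 1/year.
A1, B1, A2, B2 = 0.5, 1.0, 0.5, 1.0

def goodwin(u, v):
    """Right-hand side of the Goodwin system, Eqs. (5.25)-(5.26)."""
    du = u * (-A1 + B1 * v / 100.0)
    dv = v * (A2 - B2 * u / 100.0)
    return du, dv

def rk4_step(u, v, h):
    """One fourth-order Runge-Kutta step of size h."""
    k1 = goodwin(u, v)
    k2 = goodwin(u + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
    k3 = goodwin(u + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
    k4 = goodwin(u + h * k3[0], v + h * k3[1])
    u += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
    v += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return u, v

def first_integral(u, v):
    """E = phi(u) - psi(v), cf. Eqs. (5.29)-(5.30)."""
    return 100 * A2 * math.log(u) - B2 * u - B1 * v + 100 * A1 * math.log(v)

# Integrate for many cycles, starting away from the fixed point.
u, v = 40.0, 50.0
e0 = first_integral(u, v)
h, steps = 0.01, 50_000          # 500 "years" of model time
drift = 0.0
for _ in range(steps):
    u, v = rk4_step(u, v, h)
    drift = max(drift, abs(first_integral(u, v) - e0))

print(f"max |E(t) - E(0)| over the run: {drift:.2e}")
```

For these values the orbit stays closed in the u–v plane, and the drift of E remains at the level of the integrator's truncation error, in agreement with Eq. (5.30).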

The result above means that the phase portrait, i.e., the orbit of the system in the u–v phase plane, is restricted to the points that satisfy Eq. (5.30). At time t = 0 we have the initial values u(0) and v(0), and then E can be obtained from Eq. (5.30). Any subsequent values of u and v are such that Eq. (5.30) is obeyed, which means the same value for the constant E. Therefore, the orbit is a closed curve in the u–v plane. There is a whole family of such curves that lie inside one another and do not intersect, each one with a different value of E. If there is some kind of "disturbance," or perturbation, the variables simply change to a new orbit with a different value of E, not returning to the one before the disturbance.

Let us now examine the average values of these variables. The motion of Eqs. (5.25) and (5.26) from time t₁ to time t₂ means that the variables of the system change according to u₁ → u₂ and v₁ → v₂. Integrating them yields

ln(u₂/u₁) = −a₁ (t₂ − t₁) + (b₁/100) ∫_{t₁}^{t₂} v dt,    (5.31)

ln(v₂/v₁) = a₂ (t₂ − t₁) − (b₂/100) ∫_{t₁}^{t₂} u dt.    (5.32)

In a limit cycle the period is given by T = t₂ − t₁ and the variables return to their initial values. Hence, after a time T, u₂ = u₁ and v₂ = v₁, and the expressions above result in the following equations,

⟨u⟩ ≡ (1/T) ∫_0^T u dt = 100 a₂/b₂,    (5.33)

⟨v⟩ ≡ (1/T) ∫_0^T v dt = 100 a₁/b₁.    (5.34)

So, the average values ⟨u⟩ and ⟨v⟩ over the entire cycle are constants. The state of equilibrium, or fixed point, of the orbits may be found as follows. Fig. 5.1 shows that at the point D the variable u does not change in time. So, at u̇ = 0, Eq. (5.25) yields

v* = 100 a₁/b₁.    (5.35)


Similarly, for point C the condition v̇ = 0 holds. So, we have that

u* = 100 a₂/b₂.    (5.36)

The points u* and v* on the Cartesian axes then define the coordinates of the center of the Goodwin cycles, this being the fixed point, or equilibrium point, of the model. Note that there are in principle situations where both u* and v* could take values above 100, depending on the parameter values of the model, situations that are not in line with economic reality. That may happen because if in Eq. (5.35) we have a₁ > b₁, then v* > 100, which in principle should not be allowed. Similarly, considering Eqs. (5.19) and (5.20) in Eq. (5.36) we reach the result

u* = 100 − β(a₁ + c₂).    (5.37)

If (a₁ + c₂) < 0, then u* > 100, which in principle should not be possible either. One should also note that even when both u* and v* are within the acceptable limits, trajectories may still partly lie above 100. Blatt (1983, pp. 210–211) noted this shortcoming, as did other authors who proposed modifications of the Goodwin model to overcome this difficulty. We shall discuss this issue later on (see p. 270 below).

Another way of reaching the same conclusions as given by Eqs. (5.35) and (5.36) is by finding the extreme point of the function φ(u). From Eqs. (5.29) and (5.30) we have that

dφ/du = 100 a₂/u − b₂,    (5.38)

and

d²φ/du² = −100 a₂/u² < 0.    (5.39)

Equating the first expression to zero shows that u* is an extremum and the second expression shows that this point is a maximum. Similarly, ψ(v) has its extremum at v*. Although in the particular case studied here we have that ⟨u⟩ = u* and ⟨v⟩ = v*, this is not always true, because the procedures to obtain the equilibrium values and the averages are different. If the function f₁(v) is chosen differently, with different signs for the parameters in Eqs. (5.25) and (5.26), the results will be different since, as mentioned above, the choice of a straight line for f₁(v) means in fact assuming the original Phillips curve in the model, that is, a linear relationship between wage rate and unemployment. Blatt (1983, p. 213) chose a nonlinear function, whereas Goodwin (1967) chose a linear one in his original approach. In particular, if one


does not choose a linear Phillips curve in f₁(v), even if the signs remain the same as in Eq. (5.26), we will have ⟨v⟩ ≠ v* (Blatt, 1983, pp. 215–216).

We should now justify in more detail the claim made above that the Goodwin model has closed orbits. Let us write Eqs. (5.25) and (5.26) as follows,

H₁(u,v) = u̇ = −a₁ u + (b₁/100) u v,    (5.40)

H₂(u,v) = v̇ = a₂ v − (b₂/100) u v.    (5.41)

This system can be linearized by evaluating at the fixed point u* and v* the first term of the Taylor expansion of the functions H₁(u,v) and H₂(u,v). If we define the variables U = u − u* and V = v − v* (reusing the symbols of Eqs. 5.1 for these small deviations), clearly U̇ = u̇ and V̇ = v̇, so the linear approximation of the Goodwin model may be written as:

U̇ = (a₂ b₁/b₂) V,    V̇ = −(a₁ b₂/b₁) U.    (5.42)

The coefficient matrix of this system, [[0, a₂b₁/b₂], [−a₁b₂/b₁, 0]], is simply the Jacobian at the equilibrium point of the system,

J(u*, v*) = [[∂H₁/∂u, ∂H₁/∂v], [∂H₂/∂u, ∂H₂/∂v]] = [[0, a₂b₁/b₂], [−a₁b₂/b₁, 0]],    (5.43)

with the partial derivatives evaluated at (u*, v*). The eigenvalues λ of the linear system are found by setting

det(J − λI) = det [[−λ, a₂b₁/b₂], [−a₁b₂/b₁, −λ]] = 0,    (5.44)

where I is the identity matrix. Hence, the characteristic equation for the eigenvalues,

λ² + a₁ a₂ = 0,    (5.45)

has the following roots:

λ₁ = i √(a₁ a₂),    λ₂ = −i √(a₁ a₂).    (5.46)
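These purely imaginary roots can be checked numerically from the characteristic polynomial of the 2 × 2 coefficient matrix, whose trace is zero and whose determinant is a₁a₂. The parameter values below are illustrative only:

```python
import cmath

# Illustrative (non-empirical) parameter values with dimension 1/year.
a1, b1, a2, b2 = 0.5, 1.0, 0.5, 1.0

# Coefficient matrix of the linearized Goodwin system, Eq. (5.42):
# rows give (dU/dt, dV/dt) in terms of (U, V).
M = [[0.0, a2 * b1 / b2],
     [-a1 * b2 / b1, 0.0]]

# A 2x2 matrix has eigenvalues solving lambda^2 - tr(M)*lambda + det(M) = 0.
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]

disc = cmath.sqrt(tr * tr - 4.0 * det)
lam1 = (tr + disc) / 2.0
lam2 = (tr - disc) / 2.0

print("trace =", tr, " det =", det)    # trace is 0, det equals a1*a2
print("eigenvalues:", lam1, lam2)      # +/- i*sqrt(a1*a2)
```

Since the trace vanishes and the determinant a₁a₂ is positive, the roots come out as a purely imaginary conjugate pair, exactly as in Eq. (5.46).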

From dynamic systems theory (e.g., Percival and Richards, 1982, pp. 32–36; Shone, 2002, p. 176; Boyce et al., 2017, section 7.6 and ch. 9) it is known that complex conjugate eigenvalues with zero real part mean a stable elliptic point for the limit cycle of the system above. Moreover, since all parameters of the coefficient matrix in Eq. (5.42) are positive, the orbit is clockwise and the linear Goodwin


Figure 5.2 Dynamic illustration of the Goodwin cycles. The two orbits are represented by two different values of the constant E defined by Eq. (5.30), whereas the coordinates of the center of all cycles are given by the expressions (5.35) and (5.36). Note that the cycles are not necessarily symmetric, even in the linear approximation of the model. The cycles have maxima at the points u* and v*. If the variables are perturbed by any amount, the orbits move to a new value of E and do not return to the original one. All orbits are centered around the stable equilibrium fixed point.

period is then straightforwardly obtained from the results above (Boyce et al., 2017, pp. 393–395),

T_linear = 2π/√(a₁ a₂).    (5.47)
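A simple numerical check of this linear period: integrate Eqs. (5.25)–(5.26) from a starting point very close to the center (u*, v*) and measure the time between successive upward crossings of v through v*. The parameter values below are again illustrative only:

```python
import math

# Illustrative (non-empirical) parameters, dimension 1/year.
a1, b1, a2, b2 = 0.5, 1.0, 0.5, 1.0
u_star, v_star = 100.0 * a2 / b2, 100.0 * a1 / b1   # fixed point, Eqs. (5.35)-(5.36)

def rhs(u, v):
    # Goodwin system, Eqs. (5.25)-(5.26).
    return u * (-a1 + b1 * v / 100.0), v * (a2 - b2 * u / 100.0)

def rk4(u, v, h):
    k1 = rhs(u, v)
    k2 = rhs(u + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
    k3 = rhs(u + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
    k4 = rhs(u + h * k3[0], v + h * k3[1])
    return (u + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            v + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

h = 0.001
u, v = u_star + 0.5, v_star        # small displacement from the center
crossings, t = [], 0.0
prev_v = v
while len(crossings) < 2 and t < 100.0:
    u, v = rk4(u, v, h)
    t += h
    if prev_v < v_star <= v:       # upward crossing of v through v*
        # linear interpolation of the crossing time inside the step
        frac = (v_star - prev_v) / (v - prev_v)
        crossings.append(t - h + frac * h)
    prev_v = v

period = crossings[1] - crossings[0]
print(f"measured period: {period:.4f}  vs  2*pi/sqrt(a1*a2) = "
      f"{2 * math.pi / math.sqrt(a1 * a2):.4f}")
```

Close to the fixed point the measured period matches Eq. (5.47); the agreement degrades as the orbit's amplitude grows and the full nonlinear system takes over.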

Note that this expression is only valid near the fixed point, where the linear approximation holds. Away from the center, the full nonlinear system must be considered to obtain the period of the orbits (see p. 223 below). Fig. 5.2 summarizes the dynamic properties of the Goodwin model.

Let us now put together the simplifying hypotheses that produced the Goodwin model.

(1) β, as defined in Eq. (5.2), is static, since it is assumed fixed in time;
(2) A, as defined in Eq. (5.3), grows steadily;
(3) rising wages are directly related to the employment rate (Eq. 5.8);
(4) all profits are invested (Eq. 5.12);
(5) there is steady growth in the labor force (Eq. 5.6);
(6) the traditional Phillips curve holds, as the general function f₁(v) in Eq. (5.8) is assumed linear (Eq. 5.23).


Clearly the model results from extremely simple specifications of the economy. There are six analytical assumptions aimed at simplifying the mathematical complexity of the model, most of them quite severe and already known to fail in the long run (Δt > 10 a), such as the assumption of a fixed capital to output ratio β. Other hypotheses of the model not made analytically explicit include:

(7) production is given in terms of real quantities, so there is no financial sector;
(8) capitalists are considered a single class; there is no distinction between entrepreneurs (producers) and rentiers (Boyd and Blatt, 1988, p. 106).

So, this is a very simple model, so simple that it cannot reproduce the frequency properties of output growth in a certain time period or the distribution of recession sizes and durations. In addition, it does not take into account the sheer speed of the market crashes observed in real cycles and, therefore, does not take into account psychological aspects of real markets like investors' confidence and uncertainty about the future (Boyd and Blatt, 1988, pp. 50–51). Besides, in order to have a better representation of real-world economic systems, it would be preferable to have a model with a limit cycle and an unstable equilibrium point within it (Blatt, 1983, p. 215).

However, despite all these simplifications the model advances a set of differential equations that do reflect some aspects of real-world economies and are in principle testable. These are the crucial aspects of the Goodwin model that justify its importance, although many economists seem to have failed to realize this second essential feature, its testability, and considered it only as a toy model of mere mathematical interest. So, as we shall see in Section 5.4, what is remarkable is that the very restricted model proposed by Goodwin finds any empirical support at all in the real data.
5.3.3 Interpretations

The Goodwin model is mathematically equivalent to the classical model advanced by the US biophysicist Alfred James Lotka (1880–1949) and the Italian mathematician and physicist Vito Volterra (1860–1940), well known in population dynamics and mathematical biology. The Lotka–Volterra model describes the competitive dynamics of two interacting species, one as predator and the other as prey. In the classical Lotka–Volterra predator–prey model the population number of each species changes through time according to the pair of equations (5.25) and (5.26), producing a competitive sinusoidal pattern in their respective evolving population sizes. The evolving pattern of each population is, nevertheless, out of phase and with different amplitudes. These features are shown in Fig. 5.3. The connection of the classical predator–prey model with the Marx circuit seems straightforward at first. Capitalists are viewed as predators and workers as prey, each


Circular Flows in Economic Systems

Figure 5.3 Evolution of the variables in the predator–prey model dynamics. The y-axis represents the population of each species in the classical Lotka–Volterra model. The population cycles of the two competing species, as represented by the model, are out of phase: when the prey population peaks, the predator population is still on the rise. The predator population reaches its lowest level only after the prey population has reached its bottom, as there are then few prey individuals left to feed the predators. When the number of predators is at its lowest, the prey population is already on the rise again.

class competing with the other in the struggle for existence. For this reason the Goodwin model is also known as a class struggle model, an interpretation that is apparently very seductive to Marxist ideology. Nonetheless, science is based on facts and evidence, not on ideology, no matter how seductive a particular interpretation of a model may be. So, scientifically speaking, one needs to see to what extent this specific interpretation actually holds once we better understand the model itself and its empirical evidence. We have already seen above that the model has very restrictive assumptions, and before any talk of a “class struggle” that the model supposedly supports, we need to verify whether the data confirm its most basic aspects, particularly the signs of its parameters. Moreover, the model itself can be interpreted as embodying more than one type of competition, or struggle. Let us examine in detail the predator–prey aspect of the model and the possible kinds of competition it may represent. Remembering that the parameters a1, b1, a2, b2 are positive in Eqs. (5.25) and (5.26), according to these expressions when u = 0, u̇ = 0 and v̇ > 0. This is the situation of the prey population v in the absence of predators u, since in this case the predator population u remains equal to zero whereas the prey population v grows without bound. On the other hand, when v = 0, v̇ = 0 and u̇ < 0, which means that without prey (v = 0) the predator population u decreases (u̇ < 0). Let us now consider the possible “class struggle” structure of this model. Identifying the employment rate v with the workers’ class, then the variable U (see


Eqs. 5.1) is the profit share of the capitalists, since this is the share of total national income obtained by the class that controls the capital. In this case the conflict is between the workers and the capitalists. This can be seen in the light of a change of variables such that when u = 0, u̇ = 0, U̇ = 0 and v̇ > 0. This means that when the profit U attained by the capitalists remains constant, i.e., when U̇ = 0, the workers’ share v grows without bound. So the variable v represents the prey, and the capitalist class, represented by the variable U, plays the role of predator. U is assumed to have a maximum value equal to 100%. On the other hand, following Robert M. Solow (1990), the recipient of the 1987 Nobel Prize in Economics, if we identify employed workers with the workers’ share u and unemployed workers with the variable V, then the conflict is between employed and unemployed workers. When u = 0, u̇ = 0 and V̇ < 0. This is consistent with unemployed workers V being identified with the predators and the employed workforce u playing the role of prey. The capitalists in this case are passive nonplayers. Clearly there is more than one way of framing the kind of competition, or conflict, brought about by the model. In addition, it must be noted again that these economic interpretations of the Goodwin model depend on the parameters a1, b1, a2, and b2 being positive. These constraints were, however, not established on empirical grounds, but originated from the entirely heuristic reasoning that produced the model itself. Inasmuch as the variables u and v can be identified with different economic players, it seems quite unreasonable to assign meaning to the economic roles of these variables in an aprioristic way. In fact, in the next chapter we find a third, even more novel, interpretation of these equations (see Section 6.5).
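The sign structure just discussed can be verified numerically. The sketch below integrates a Lotka–Volterra system written in the standard form consistent with that structure, u̇ = u(b1 v − a1) and v̇ = v(a2 − b2 u) with all parameters positive; the parameter values and initial point are illustrative assumptions, not values taken from any data set. The quantity H(u, v) = b2 u − a2 ln u + b1 v − a1 ln v is conserved along exact trajectories, which provides a built-in correctness check for the integration.

```python
import math

# Illustrative positive parameters a1, b1, a2, b2; assumptions for the
# sketch, not estimates from any data set.
A1, B1, A2, B2 = 1.0, 2.0, 1.5, 3.0

def rhs(u, v):
    # u plays the predator-like role, v the prey-like one:
    # u = 0 gives du/dt = 0 and dv/dt > 0; v = 0 gives du/dt < 0.
    return u * (B1 * v - A1), v * (A2 - B2 * u)

def rk4_step(u, v, h):
    # One classical fourth-order Runge-Kutta step of size h.
    k1u, k1v = rhs(u, v)
    k2u, k2v = rhs(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
    k3u, k3v = rhs(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
    k4u, k4v = rhs(u + h * k3u, v + h * k3v)
    return (u + h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6,
            v + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

def first_integral(u, v):
    # Conserved along exact trajectories; its numerical drift measures the
    # integration error over the closed cycles.
    return B2 * u - A2 * math.log(u) + B1 * v - A1 * math.log(v)

u, v = 0.7, 0.8
us, vs = [u], [v]
for _ in range(20000):          # roughly four full cycles at h = 1e-3
    u, v = rk4_step(u, v, 1e-3)
    us.append(u)
    vs.append(v)

drift = abs(first_integral(us[-1], vs[-1]) - first_integral(us[0], vs[0]))
```

Plotting `us` against `vs` reproduces closed orbits around the center (a2/b2, a1/b1), with both variables oscillating out of phase as in Fig. 5.3.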
Hence, the Goodwin model must be interpreted with great care, never on a speculative basis and only in the light of real-world data analysis, which means that it seems premature to consider the Goodwin dynamics as a model of “class struggle.” What seems unavoidable is that the model puts competition over income distribution at center stage of the distributive dynamics mechanism, whatever class actors one may choose to include as players in modeling this struggle. As a final remark, it must be noted that Eq. (5.47) refers only to the linear approximation of Eqs. (5.25) and (5.26). The full nonlinear period cannot be found by means of this expression and has to be obtained by methods that require numerical evaluation (Shih, 1997).

5.4 Empirical Evidence of the Goodwin Macrodynamics

Considering that the Goodwin macrodynamics is now over half a century old, and that during this time it has served as inspiration for a long list of theoretical follow-ups (see next chapter), the number of researchers, mostly


economists, who have investigated the empirical soundness of the model since its original proposal is relatively low. It seems that most who saw its potential were basically attracted by its theoretical aspects, and seem to have paid little attention to the fact that any model, however theoretically seductive, has very limited scientific value if it remains untested against real-world data. And numerical evaluations of the model using ad hoc parameter values are insufficient in this respect. Therefore, in order to ascertain the possible scientific worthiness of the Goodwin model beyond its theoretical relevance, in this section I shall review a series of studies that actually attempted to test the model using empirical data.

5.4.1 First Studies

Just two years after the model’s original publication, Atkinson (1969) carried out a rough estimation of the period of the Goodwin cycles by means of the linear approximation defined by Eq. (5.47). Using a set of empirically based “quite reasonable” parameter values, he concluded that the linear period would lie approximately between four and twenty years, depending on the parameter range adopted. He also argued that the results showed “very little difference” if the same set of parameter values is used with the full nonlinear expression for the Goodwin period. The interesting aspect to emphasize regarding Atkinson’s (1969) analysis is that by means of a very simple methodology he was able to estimate a range of cycling periods not too different from what can be expected if we consider Piketty’s recent analysis, mentioned earlier (see p. 212 above), which indicates the existence of short-term cycles with periods smaller than approximately ten years. Desai (1984) took a different analytical route to discuss the model. He plotted the data from the national income, expenditure, and output of the United Kingdom from 1855 to 1965 in terms of the labor share u and employment rate v.
The plot he obtained with the data is shown in Fig. 5.4, from which some observations become evident. The time period up to 1910 somewhat resembles a stretched Fig. 5.2, with u being systematically shifted to the right, that is, rising u. From 1910 to 1940 there are huge swings up and down in v, while u keeps its horizontal shift to the right. After 1940 there is little change in v, whereas u continues its horizontal shift toward a rising labor share of the national income, although much more moderately than previously. In order to fit the data, Desai (1984) made three modifications to the original model with the purpose of incorporating “some outside influences,” namely: (1) wage bargains are made in money terms rather than real terms, (2) total labor is measured in terms of earnings rather than wages, and (3) price expectations are allowed in the wage bargain. The various parameters of his modified model were then fitted


Figure 5.4 Cycles in the UK economy from 1855 to 1965 obtained by Desai (1984) using data from the national income, expenditure and output of the UK. One can discern a more or less stretched periodic motion in the data resembling the Goodwin cycles. This stretching of the plot is caused by a systematic shift to the right of the labor share u during all the time period shown in the graph. Reprinted from Desai (1984, figure on page 275) with permission from Springer Nature

in separate subsamples in order to analyze various features of the UK economy at different time periods. But he was unable to solve for the cycle length of his modified model, as Eq. (5.47) was no longer valid. From the viewpoint taken here, the most interesting result that comes out of Desai’s (1984) analysis is the data plot itself, showing that Fig. 5.4 somewhat resembles the periodic motion predicted by the u–v phase plane of the Goodwin model presented in Fig. 5.2. As will be shown below, this kind of periodic motion without a single center will appear again and again in different data analyses. Solow (1990) approached the problem in a very straightforward way. After reasoning in a manner quite similar to the ideas behind the Marx circuit, he concluded that “the model does capture something real.” He then asked the most obvious question regarding the scientific worthiness of the model: “Does the Goodwin growth-cycle model fit the facts?” Then, very interestingly, he added that by fitting the facts he did not think that “a flourish of multiple correlation coefficients is really to the point.” His intention was “to see if it belongs in any obvious niche.” With this


Figure 5.5 Cycles in the non-farm economy of the USA from 1947 to 1986 according to Solow (1990). The phase diagram shows a predominantly clockwise motion, but without a single center. The graph also hints at a single large long-period clockwise sweep. Some values of u are above unity, but there is no explanation for this behaviour in the original text. Reproduced from Solow (1990, fig. 4.1, p. 40) with permission of SNCSC

objective in mind, he used the annual data for the USA non-farm business economy from 1947 to 1986 to plot the phase diagram shown in Fig. 5.5. As remarked by Solow (1990), the graph clearly shows a predominantly clockwise motion, and since the displacements are large and do not have a single center, “the Goodwin model cannot be the only mechanism governing the relation between the wage share and the employment rate.”

5.4.2 A Comprehensive Test

Harvie (2000) was the first author to actually carry out a truly comprehensive test of the Goodwin model using a database spanning several countries. After explicitly stating how little, close to nothing, had been done until then to evaluate the model empirically, he took on the task of using data from OECD countries in the period from 1951 to 1994 and divided his analysis into two levels: qualitative and quantitative.
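Quantitative tests of this kind, like Atkinson’s early estimate, rest on the linear period formula. For Eqs. (5.25) and (5.26) in the Lotka–Volterra form u̇ = u(b1 v − a1), v̇ = v(a2 − b2 u), linearization about the equilibrium yields purely imaginary eigenvalues ±i√(a1 a2) and hence a linear period of 2π/√(a1 a2); whether this matches Eq. (5.47) exactly depends on the book’s parameterization, so treat it here as illustrative. The finite-amplitude period, as remarked at the end of Section 5.3, must be obtained numerically (Shih, 1997). The sketch below measures it from successive upward crossings of v through its equilibrium value; all parameter values are illustrative assumptions, not empirical estimates.

```python
import math

# Illustrative positive parameters; assumptions for the sketch only.
A1, B1, A2, B2 = 1.0, 2.0, 1.5, 3.0
V_EQ = A1 / B1                  # equilibrium value of v

def rhs(u, v):
    return u * (B1 * v - A1), v * (A2 - B2 * u)

def rk4_step(u, v, h):
    # One classical fourth-order Runge-Kutta step of size h.
    k1u, k1v = rhs(u, v)
    k2u, k2v = rhs(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
    k3u, k3v = rhs(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
    k4u, k4v = rhs(u + h * k3u, v + h * k3v)
    return (u + h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6,
            v + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

def nonlinear_period(u, v, h=1e-3, max_steps=200000):
    """Time between two successive upward crossings of v through V_EQ."""
    crossings, t, prev_v = [], 0.0, v
    for _ in range(max_steps):
        u, v = rk4_step(u, v, h)
        t += h
        if prev_v < V_EQ <= v:                      # upward crossing
            frac = (V_EQ - prev_v) / (v - prev_v)   # linear interpolation
            crossings.append(t - h + frac * h)
            if len(crossings) == 2:
                return crossings[1] - crossings[0]
        prev_v = v
    raise RuntimeError("no full cycle detected")

T_LINEAR = 2 * math.pi / math.sqrt(A1 * A2)   # period of the linearization
T_FULL = nonlinear_period(0.7, 0.8)           # finite-amplitude period
```

For Lotka–Volterra cycles the period grows with amplitude, so the measured nonlinear period exceeds the linearized one, with the gap widening as the orbit moves away from the center.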


On the qualitative level, Harvie (2000) plotted the actual trajectories of the two state variables of the Goodwin model, the worker share u and the employment rate v, with the available OECD data, that is, for Australia, Canada, Finland, France, Germany, Greece, Italy, Norway, UK, and USA. The results clearly showed the interdependence between income distribution and employment that the model predicts for the two variables. In his words: The evidence presented [in the graphs] lends broad qualitative support to Goodwin’s model. For all ten of the countries being investigated, the phase portraits show clearly that, concentrating on the ‘big picture,’ a ‘high’ employment rate is followed by a rising share of national income accruing to workers; this is followed by falling employment; this, in turn, seems to lead to the diminishing of worker’s share once more. For ten countries, the evidence suggests the existence of a three-quarter cycle over the period 1956–94, initiating with low u/high v, and ending with low u/low v. But, within this large partial cycle, there appear, in addition, to be sub-cycles for some of the countries. (Harvie, 2000, p. 357; emphases in the original)
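The quantitative side of such tests compares the cycle center predicted by the model with the center of the observed trajectories. One property that motivates estimating the empirical center by averaging can be checked on synthetic data: for a Lotka–Volterra cycle in the form u̇ = u(b1 v − a1), v̇ = v(a2 − b2 u), the time averages of u and v along the trajectory converge to the equilibrium coordinates u* = a2/b2 and v* = a1/b1. The parameters and initial condition below are illustrative assumptions.

```python
# Illustrative positive parameters; assumptions for the sketch only.
A1, B1, A2, B2 = 1.0, 2.0, 1.5, 3.0
U_STAR, V_STAR = A2 / B2, A1 / B1    # model-predicted cycle center

def rhs(u, v):
    return u * (B1 * v - A1), v * (A2 - B2 * u)

def rk4_step(u, v, h):
    # One classical fourth-order Runge-Kutta step of size h.
    k1u, k1v = rhs(u, v)
    k2u, k2v = rhs(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
    k3u, k3v = rhs(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
    k4u, k4v = rhs(u + h * k3u, v + h * k3v)
    return (u + h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6,
            v + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

# Long integration covering many cycles, so the bias from the final
# partial cycle becomes negligible.
u, v, h, steps = 0.7, 0.8, 2e-3, 250000
sum_u = sum_v = 0.0
for _ in range(steps):
    u, v = rk4_step(u, v, h)
    sum_u += u
    sum_v += v
mean_u, mean_v = sum_u / steps, sum_v / steps
```

Time-averaging a measured u–v trajectory in this way gives one simple empirical estimate of the cycle center against which model predictions such as those of Eqs. (5.35) and (5.36) can be compared.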

On the quantitative level, Harvie (2000) used the data to estimate the coordinates of the center of the cycles and their respective periods, defined by Eqs. (5.35), (5.36), and (5.47), in order to predict both quantities for each country. There was also an attempt, using Desai’s (1984) modified model, to test whether the capital to output ratio β remains fixed, an important restriction of the original model; the test turned out negative. This is hardly surprising considering that the analyzed data time interval is longer than ten years (see the discussion below Eq. 5.2). A table was presented containing the numerical predictions of u*, v*, and T_linear for the countries under study. In all cases the predicted centers lay outside the actual cycles, with values exceeding the observed ones by 20% to 100%. Regarding the linear period, his results implied very short-term values, ranging from 1.1 to 2.4 years, which do not seem to be present in the data plots conveying the history of the state variables of the model. Such results indicate that the model is inadequate on the quantitative level. It is unnecessary to present here all ten plots obtained by the author, hence the results for the UK and USA were chosen as representative enough. Figs. 5.6 and 5.7 present the trajectories of these two countries and the predicted coordinate centers, showing that the predicted values of (u*, v*) for each country lie far outside the actual cycles. These results may be compared to the ones shown in Figs. 5.4 and 5.5. Summing up, Harvie’s (2000) results are mixed. On the quantitative level the model failed quite severely. But on the qualitative level the model produced very encouraging results, since the phase space trajectories clearly showed mostly clockwise cycling behaviors of the state variables u and v for all ten countries included in the study. Most plots even hint at what appear to be short-term cycles inside long-term ones. This result raises the question of the nature of the cycles, that


Figure 5.6 Cyclic trajectories obtained by Harvie (2000) for the motion of the state variables in the UK economy from 1956 to 1994. The coordinates of the center of the trajectories according to the Goodwin model are far from the center of the actual cycles, as indicated in the plot. Reprinted from Harvie (2000, p. 360, last figure from top to bottom) with permission from Oxford University Press

Figure 5.7 Cycles in the history of the state variables calculated by Harvie (2000) for the economy of the USA from 1956 to 1994. The coordinates of the center of the trajectories calculated according to the Goodwin model are clearly located very far from the center of the actual cycles. Reprinted from Harvie (2000, figure at p. 361) with permission from Oxford University Press


is, whether they represent a distributive income conflict between workers and capitalists, or between employed and unemployed workers, or both of these and something else. What seems indisputable, though, is that the plots show large changes in the income distribution between labor and capital, lending in fact strong empirical support to Marx’s originally verbal model of the distributive conflict among social classes, and to its particular interpretation as the Marx circuit. Whatever answers one might give to the questions posed above, and whatever the possible shortcomings of Harvie’s (2000) analysis (see below), it seems unquestionable that he truly inaugurated the large-scale testing of the Goodwin growth-cycle macrodynamics, as the sequence of papers discussed below can testify.

5.4.3 More Empirical Results

Moreno (2002) applied the methodology proposed by Harvie (2000) to the economy of Colombia, obtaining similar cycles for the two state variables u and v. Nevertheless, the calculated coordinates (u*, v*) located the equilibrium point of the Goodwin model inside the cycle. These results are shown in Fig. 5.8. Mohun and Veneziani (2008) approached the problem by first trying to answer the questions posed above regarding the nature of short-term versus possible long-term cycles. They did not expand the database to more countries, but carried out an in-depth analysis of the USA for 1948–2004. They pointed out that Goodwin (1967) originally remarked that the labor force must continuously

Figure 5.8 Historical trajectories obtained by Moreno (2002) for the labor share u and employment rate v with data from Colombia in the period 1951–1995. The calculated equilibrium coordinates are u* = 0.40 and v* = 0.91, a result that puts the center (u*, v*) inside the cycle, as opposed to the results obtained by Harvie (2000). Reprinted from Moreno (2002, graph 1) with permission from Cuadernos de Economía


grow, through the expansion of the population and through people being made unemployed by technological progress, in order to continuously replenish the labor reserve army, a requirement met in the model by assuming Eq. (5.6).2 Since the most basic logic of capitalism is accumulation, this can only occur if wages are forced to adjust to the requirements of accumulation. Therefore, an essential ingredient for the cyclical motion in the Marx circuit is the existence of a labor reserve army that keeps wages down. Without this the cycle breaks down. From this viewpoint the distributive conflict among social classes, or between employed and unemployed workers, can only be short term, leaving the possible long-term cycles to other mechanisms, speculated as being competition through innovations of products and processes, that is, technical progress. In other words, long-term trends would be associated with particular technological profiles (Mohun and Veneziani, 2008, section 2). Goodwin (1967) made it clear that his model intended to represent “the most essential dynamic aspects of capitalism,” so Mohun and Veneziani (2008) focused on this point by trying to differentiate between cycle and trend. Regarding data handling and results, the authors stressed that there was not yet a common measuring methodology for the investigation of the Goodwin cycles, so they were forced to make some choices in data handling not necessarily in line with previous works on this subject. After choosing what was to be called the employment rate and the wage share in their approach, they identified long-term trends in the US corporate sector from 1948 to 2004, but found no firm evidence of long-term Goodwin cycles. They then used the Hodrick–Prescott filter to detrend the data, that is, to remove the long-term trends from the original data (Whittaker, 1922; Hodrick and Prescott, 1997; Phillips, 2010).
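The Hodrick–Prescott detrending step can be sketched compactly: the trend τ minimizes Σ_t (y_t − τ_t)² + λ Σ_t (τ_{t+1} − 2τ_t + τ_{t−1})², which leads to the linear system (I + λ DᵀD)τ = y, with D the second-difference operator. The implementation below is a minimal self-contained sketch using dense Gaussian elimination, adequate only for short series (production code would use a sparse pentadiagonal solver); the smoothing parameter λ would be set to values such as those quoted in the studies discussed here, and the function name and test series are illustrative, not from the cited papers.

```python
def hp_filter(y, lam):
    """Return the Hodrick-Prescott trend of series y with smoothing lam.

    Solves (I + lam * D'D) tau = y, where D is the (n-2) x n
    second-difference operator with row pattern (1, -2, 1).
    """
    n = len(y)
    # Build A = I + lam * D'D as a dense matrix (fine for short series).
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        a[i][i] = 1.0
    for k in range(n - 2):
        d = [0.0] * n
        d[k], d[k + 1], d[k + 2] = 1.0, -2.0, 1.0
        for i in range(n):
            if d[i] == 0.0:
                continue
            for j in range(n):
                if d[j] != 0.0:
                    a[i][j] += lam * d[i] * d[j]
    # Gaussian elimination with partial pivoting on the augmented system.
    aug = [row[:] + [y[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    tau = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = aug[r][n] - sum(aug[r][c] * tau[c] for c in range(r + 1, n))
        tau[r] = s / aug[r][r]
    return tau

# Illustrative short series with a purely linear trend: the HP trend should
# reproduce it exactly, leaving a zero cyclical component.
series = [2.0 + 0.1 * t for t in range(30)]
trend = hp_filter(series, 1000.0)
cycle = [s - tr for s, tr in zip(series, trend)]
```

The cyclical component `cycle` is the detrended series that, in the studies above, is plotted in the u–v phase plane.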
They chose different filter, or smoothing, parameters for annual and quarterly data, respectively equal to 1000 and 256,000. Fig. 5.9 shows detrended cycles in the USA over the period under study, 1948–2004. There is strong evidence of short-term cyclical behaviour pertaining to the employment rate and labor share, although the cycles differ in amplitude, position, and period, similar to results obtained by other authors. The authors also divided the data shown in Fig. 5.9 into smaller time windows, as the graph showing the motion of the state variables over the whole period is somewhat cluttered. In those smaller time windows the cycles can be visualized more clearly. As an example, Fig. 5.10 presents the trajectories of both the labor share u and employment rate v from 1974 to 1980 where a detrended clockwise cycle could be clearly noticed. The other sub-periods showed similar

2 Another common way of continuously replenishing the labor reserve army is through migration, either within borders from rural to urban areas or across international borders. One may also add that the spurt of privatizations worldwide since the 1980s has also reinforced this replenishing process.


Figure 5.9 Detrended cycles in the economy of the USA from 1948 to 2004. The cycles are clearly short term, but differ in amplitude, period, and position of the center. The negative numbers on both axes result from how the data were computed, since they are deviations from a trend: a negative number means that the raw value of a variable was below the trend. The Hodrick–Prescott filter was used to compute the distinction between trend and cycle. Reprinted from Mohun and Veneziani (2008, fig. 3) with permission from Taylor & Francis Informa UK Ltd

Figure 5.10 This figure shows the same results for the economy of the USA as presented in Fig. 5.9, but in the smaller time window from 1974Q4 to 1980Q3. A clockwise motion of the employment rate against the wage share is clearly visible in the data. As in the previous figure, a filter was used to compute the distinction between cycle and trend. Reprinted from Mohun and Veneziani (2008, fig. 4, bottom right) with permission from Taylor & Francis Informa UK Ltd


results. Mohun and Veneziani (2008) argued that those short-run cycles describe cyclical relations between the Goodwin variables u and v and, therefore, can be characterized as “class conflict” due to the distributive dynamics of these two quantities. Vadasz (2007) tested the Goodwin model using Hungarian data for the years 1955–1985. He noted that Hungary at that time had a planned economy, so the model should have been able to predict the planning cycle, because planned economies had this cycle fixed, five years in the case of Hungary at that time. Hence, he used the data to estimate u and then substituted this result into the model to predict u*, v*, and T_linear by means of Eqs. (5.35), (5.36), and (5.47). According to Eqs. (5.33) and (5.36), if the model were correct the data would have indicated u ≈ u*, but that did not happen. Moreover, he found T_linear = 9.1–9.9 a, which is almost twice the actual value of five years. He concluded that the original model has poor predictive ability. Moreover, “the predicted cycle has worker’s share fluctuating between 0.57 and 1.30, meaning that there should have been times when worker’s wages surpassed output,” an unacceptable result (Vadasz, 2007, p. 15). An extensive test of the Goodwin model using data from a large number of countries at different development stages was carried out by García-Molina and Herrera-Medina (2010). They applied Harvie’s (2000) methodology to sixty-seven countries using data collected from the United Nations, International Monetary Fund, World Bank, central banks, national statistics offices, regional organizations, and local university databases. Due to the diversity of sources and data time spans, comparisons among countries could not be made quantitatively, although qualitative comparisons were possible. Similarly to Harvie (2000), the theoretical centers are located outside the cycles in all cases. The linear Phillips curve, as explicitly assumed in the model by means of Eqs.
(5.23) and (5.25), was rejected in 31 countries. Plots showing the employment rate v in terms of the wage share u were made for all 67 countries, but the results were very diverse. For a group of 26 countries the trajectories behaved as predicted by the model, with clockwise cycles. For a second group of 9 countries the cycles were anticlockwise, and for a third group of 32 countries they found no evidence of a cycle. The problem of short- versus long-term cycles resurfaced, since the authors applied the Hodrick–Prescott filter to smooth out the motion of the state variables with the annual smoothing parameter equal to 100, a value very different from the ones used by Mohun and Veneziani (2008; see also p. 230 above). This filter, together with this choice of parameter, effectively erased short-term cycles in their data treatment. They did not justify this choice by means of any economic mechanism. Nevertheless, long-term cycles did appear in several graphs. As examples of the authors’ approach, Figs. 5.11 and 5.12 respectively show the original data series for


Figure 5.11 Original raw data for the wage share u in terms of the employment rate v for the economy of the USA from 1959 to 2006. Cycles, mostly clockwise and both short and long term, are clearly visible in the motion of the state variables. Reprinted from García-Molina and Herrera-Medina (2010, fig. 2A) with permission from Cuadernos de Economía

Figure 5.12 This graph shows the same results as Fig. 5.11, but with the original data smoothed out by the Hodrick–Prescott filter. The filtering process effectively erased the short-term cycles clearly distinguishable in Fig. 5.11, leaving only the long-term one visible. Note that the center (u*, v*) obtained from the data according to the Goodwin model is located outside the cycle. Reprinted from García-Molina and Herrera-Medina (2010, fig. 1) with permission from Cuadernos de Economía


the USA and its smoothed-out version, both for the period 1959–2006. In conclusion, the results of García-Molina and Herrera-Medina (2010) do not support the Goodwin model in quantitative terms. Qualitatively, the results are mixed, since of the sixty-seven countries only twenty-six showed evidence of the cycles predicted by the model. Other analyses of the model with the usual, or somewhat different, interpretations of the state variables using data from the economy of the USA and a few other countries were made by different authors. Goldstein (1999) obtained cycles for the USA from 1949 to 1985 using statistical tools to study the interaction between the profit share of income and unemployment. Barbosa-Filho and Taylor (2006) found cycles in the USA from 1929 to 2002 between the labor share and global capacity utilization, that is, the extent to which the US economy used its productive capacity. This is the relationship between the output actually produced with the installed equipment and the potential output that could be produced if that equipment were fully used. Dibeh et al. (2007) analyzed data for France and Italy during the years 1960–2005 and estimated the parameters of the differential equations in the Goodwin model. Tarassow (2010) concluded in favor of the existence of cycles for 1948–2006 in the USA by quantifying the relationship between wage share and employment rate. Flaschel (2010, ch. 21) studied data from the US economy from 1958 to 2004 and found both short- and long-term clockwise cycles, using the Hodrick–Prescott filter to separate the state variables into trend and cycles. He argued that data spanning a time period of at least fifty years are necessary to detect long-term cycles in which several short-term ones are included. Massy et al. (2013) introduced a modification of the original model by adding multiple sine–cosine perturbation terms to the state equations (5.25) and (5.26) and tested the modified model for sixteen countries.
They found cycles in the phase plane of the state variables u and v for France and concluded that “the model actually fits the data and has good forecasting properties for half of the countries.”

5.4.4 An Econophysical Evaluation of the Model

Moura and Ribeiro (2013) studied the Goodwin cycles using the individual income data of Brazil from 1981 to 2009, but their approach was different from that of all previously discussed authors. Instead of employing the data to derive various intermediate quantities, like the amount of labor, the capital to output ratio β, and the parameters c1 and c2, in order to eventually obtain the history of the state variables of the Goodwin model, they adopted the statistical econophysics approach to the distributive problem and identified the two conflicting classes with the two components of the income distribution, associating them with the variables u and v. Using their previous experience of fitting the Brazilian income data with


the Gompertz–Pareto distribution (GPD) discussed in Chapter 2 (see Sections 2.3.2 and 2.4.5), they identified the Gompertz curve with the large majority of the population (∼ 99%) and the Pareto power law with the tiny richest part (∼ 1%). In other words, they took the econophysics income classes, the 1% rich and the remaining 99%, as the basic data sources for the state variables u and v of the model. Once the three parameters of the GPD were found by data fitting, the parameter x_t, which determines the transition from the Gompertzian segment to the Paretian one (see Eqs. 2.36 and 2.37), allowed the authors to calculate the percentage share of labor u, identified with the Gompertz segment, directly from the raw data for all years they analyzed. To obtain the unemployment rate v, the authors used the concept of effective unemployment, under the view that there is a minimum income below which a person “barely participates in the production and for all practical effects is jobless.” This minimum income was found by probing the data for income values that produced unemployment rates in line with unemployment surveys and then extrapolating this to the whole time window under analysis, from 1981 to 2009, since they did not have homogeneous samples of official unemployment rates for the whole time span under study. Once the results were derived under the methodology described above, Moura and Ribeiro (2013) produced graphs for the time evolution of the state variables and their trajectories in phase space. Fig. 5.13 shows the state variables u and v in terms of time, where a clear out-of-phase evolution of short-term cycles with periods of approximately four years is visible. This plot resembles the predator–prey cycles shown in Fig. 5.3, and the period found would classify the periodic fluctuations as Kitchin cycles.3 The u–v phase space with these results is shown in Fig.
5.14, where cycles, mostly clockwise, are clearly visible, although they reverse direction and go anticlockwise at certain intervals. The cycles have no single center, a conclusion similar to the previous studies discussed above, although this result was reached by a very different methodology. Another interesting feature appearing in this graph is that the trajectories of the state variables in the phase plane can be divided into two groups: one spread out mostly below v = 90% and another above this value and more tightly packed to the right of the graph. The following possible explanation for this feature was provided by the authors. Brazil experienced runaway inflation and hyperinflation until 1994, when the inflationary process was suddenly halted by a new monetary system put into effect in that year. This coincides with the data moving from the lower to the upper right

3 Joseph Kitchin (1861–1932) was a British statistician who identified short-term business cycles of about four to five years (Kitchin, 1923; Gabisch and Lorenz, 1989, pp. 8–10). This period is similar to the Kondratieff short cycles of about 3.5 years hypothesized by the Russian economist Nikolai Kondratieff (1892–1938). See Jadevicius and Huston (2014) for a review of various economic cycles.


Circular Flows in Economic Systems

Figure 5.13 Time evolution of the worker’s share u and employment rate v in Brazil from 1981 to 2009. In this approach to testing the Goodwin model, u is interpreted as the Gompertzian segment of the Brazilian income distribution. Out-of-phase short-term cycles with periods of about four years are present for both variables. The comparison of this plot with Fig. 5.3 indicates that the income distribution dynamics of real economic systems can be represented, at least partially, by predator–prey-like models. Reprinted from Moura and Ribeiro (2013, fig. 1) with permission from Elsevier.
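The construction of u and v described above can be sketched in a few lines of code. The snippet below is an illustrative assumption, not the authors’ actual pipeline: the toy income sample and the values of x_t and x_min are made up, whereas Moura and Ribeiro (2013) obtained the transition income from the fitted GPD parameters and calibrated the minimum income against unemployment surveys.

```python
import numpy as np

def goodwin_state_from_incomes(incomes, x_t, x_min):
    """Estimate the two Goodwin state variables from a raw income sample.

    incomes : array of individual incomes for one year
    x_t     : Gompertz-Pareto transition income (from the GPD fit)
    x_min   : minimum income defining 'effective' employment

    Returns (u, v): u is the share of total income received by the
    Gompertzian segment (incomes below x_t); v is the effective
    employment rate (fraction of people earning at least x_min).
    """
    incomes = np.asarray(incomes, dtype=float)
    u = incomes[incomes < x_t].sum() / incomes.sum()
    v = np.mean(incomes >= x_min)
    return u, v

# Toy sample: a lognormal bulk plus a Pareto tail (illustrative only).
rng = np.random.default_rng(0)
sample = np.concatenate([rng.lognormal(3.0, 0.8, 9900),
                         (rng.pareto(2.0, 100) + 1.0) * 200.0])
u, v = goodwin_state_from_incomes(sample, x_t=200.0, x_min=5.0)
print(f"u = {u:.2f}, v = {v:.2f}")
```

Repeating this for each annual sample yields the time series of u and v plotted in Fig. 5.13.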

part of the plot. Moreover, two years do appear as points in the upper part, and they are exactly the years of two previous short-lived attempts to stop hyperinflation (see the caption of Fig. 5.14 for more details). So, it seems that the Brazilian economy suffered a shock caused by the end of the inflationary process, which moved the system from one region of the phase plane to another. Moura and Ribeiro (2013) also provided a different visualization of the results for Brazil by plotting them in 3D. Fig. 5.15 shows the evolution of the phase portrait cycles, where one can distinguish a helix-like curve, moving mostly in one direction and then, sometimes, reversing it. The YZ-plane projection of the graph shows in fact three regions where the system was located during the studied time interval. Therefore, in qualitative terms the Goodwin cycles seem to have strong empirical support. Nevertheless, quantitatively speaking the model showed several fragilities. The data derived by Moura and Ribeiro (2013) allowed them to fit Eqs. (5.25) and (5.26) after time derivatives of both u and v were calculated numerically. By doing so they were able to obtain empirically the parameters a1, b1, a2, and b2. Note that according to these equations the slopes of the straight lines would respectively be positive and

5.4 Empirical Evidence of the Goodwin Macrodynamics


[Figure 5.14 plot: horizontal axis, Gompertz (labor) share [u] (%); vertical axis, Employment rate [v] (%); yearly data points labeled from 1 (1981) to 29 (2009).]

Figure 5.14 u–v phase plane, or phase portrait, for the Brazilian economy from 1981 to 2009. The historical trajectories of the state variables for each year are labeled in a growing numerical sequence, showing mostly clockwise evolution of the cycles. The end of hyperinflation in 1994, indicated by the label ‘14’, moved the system to a new cycling region. Note that the years 1986 and 1990, respectively labeled ‘6’ and ‘10’, correspond to short-lived attempts to end hyperinflation and, perhaps for this reason, are located in the upper right cycling region where all points lie after the reformed monetary system put into effect in 1994 effectively ended hyperinflation. Note the absence of a single center for the cycles. Reprinted from Moura and Ribeiro (2013, fig. 2) with permission from Elsevier.

negative. Figs. 5.16 and 5.17 respectively present the fits of Eqs. (5.25) and (5.26) to the Brazilian data, where it is clear that the set of points is compatible with the linear approximation proposed in the original model for the functions f1(v) and f2(u). However, the expected slopes are not present in the data. Where a positive slope was expected, the authors found a negative one, and vice versa. The dispersions are high, but the slopes are unmistakably clear. Considering these results, the conclusions reached by Moura and Ribeiro (2013) are mixed and very similar to those of almost all previous authors who examined the Goodwin growth-cycle macrodynamics. Qualitatively the model has strong empirical evidence. However, it fails quantitatively in a very dramatic fashion. Clearly there is something fundamentally amiss in the model, but even so it does



Figure 5.15 Helix-like evolution of the state variables u and v in Brazil for 1981–2009. This 3D plot represents the same results appearing in Figs. 5.13 and 5.14. The YZ-plane projection, that is, the surface for v vs. time on the left, indicates three regions. Two of them were already specified above, before and after the end of hyperinflation in 1994 (label ‘14’), but the points for 1981–1983, labeled ‘1’ to ‘3’, seem to indicate a transition from an unspecified earlier region where the system remained before the start of the inflationary period at around 1980. Reprinted from Moura and Ribeiro (2013, fig. 4) with permission from Elsevier.

capture something important: short-term cycles. Moreover, the authors speculated that the constants of the model may not be constants at all, but dynamic variables connected to an as yet unspecified dynamic. They then argued that the system of differential equations (5.25) and (5.26) is insufficient to represent the empirical data and must somehow be modified. However, the goal of reaching a more realistic growth-cycle macrodynamics will probably not be achieved by just tinkering with the fundamental hypotheses of the model, but by proposing a new and more realistic mathematical model of which the Goodwin model may possibly be a particular case.
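The fitting step just described can be sketched as follows. This is a hedged illustration, not the authors’ actual code: the annual u and v series below are made-up sinusoids phased as an idealized Goodwin cycle, so the fits recover the theoretically expected signs (positive slope for the u-equation, negative for the v-equation), which is precisely what the Brazilian data did not show.

```python
import numpy as np

def growth_rate(series, dt=1.0):
    """Central-difference logarithmic growth rate, (dx/dt)/x, of an annual series."""
    arr = np.asarray(series, dtype=float)
    return np.gradient(arr, dt) / arr

def fit_goodwin_lines(u, v, dt=1.0):
    """Fit the two linear closures of the Goodwin model, Eqs. (5.25)-(5.26):
    (du/dt)/u against v, and (dv/dt)/v against u.
    Returns the (slope, intercept) pair of each straight-line fit."""
    gu = growth_rate(u, dt)
    gv = growth_rate(v, dt)
    return tuple(np.polyfit(v, gu, 1)), tuple(np.polyfit(u, gv, 1))

# Made-up annual series: an idealized 4-year cycle around (85%, 90%),
# with u lagging v by a quarter period as in the predator-prey mechanism.
t = np.arange(29)                                 # 29 observations, as in 1981-2009
v = 90.0 + 2.0 * np.sin(2.0 * np.pi * t / 4.0)
u = 85.0 - 2.0 * np.cos(2.0 * np.pi * t / 4.0)
line_u, line_v = fit_goodwin_lines(u, v)
print("u-equation slope:", line_u[0])   # positive, as Eq. (5.25) expects
print("v-equation slope:", line_v[0])   # negative, as Eq. (5.26) expects
```

With real, noisy annual data the same procedure produces the high-dispersion scatter and reversed slopes reported in Figs. 5.16 and 5.17.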



[Figure 5.16 plot: horizontal axis, [v] (%); vertical axis, (d[u]/dt)/[u] (year⁻¹).]

Figure 5.16 Fit of the linear relationship of the state variables u̇/u vs. v for the income data of Brazil from 1981 to 2009. Although Eq. (5.25) predicts a positive slope, the data did not conform to this prediction and showed a negative one. There is an error in the dimension of the y-axis in the original plot reproduced here, as it should read only (year⁻¹). The fitted parameters are a1 = (0.17 ± 0.06) a⁻¹ and b1 = (−0.0019 ± 0.0006) a⁻¹. Reprinted from Moura and Ribeiro (2013, fig. 5 left) with permission from Elsevier.

[Figure 5.17 plot: horizontal axis, [u] (%); vertical axis, (d[v]/dt)/[v] (year⁻¹).]

Figure 5.17 Fit of the linear relationship of the state variables v̇/v vs. u for the income data of Brazil from 1981 to 2009. Although Eq. (5.26) predicts a negative slope, the data did not conform to this prediction and showed a positive one. There is an error in the dimension of the y-axis in the original plot reproduced here, as it should read (year⁻¹). The resulting fitted parameters are a2 = (−0.52 ± 0.22) a⁻¹ and b2 = (0.006 ± 0.003) a⁻¹. Reprinted from Moura and Ribeiro (2013, fig. 5 right) with permission from Elsevier.



5.4.5 Reappraisal and Further Evidence

The previous sections showed that most empirical assessments concluding that the Goodwin model was quantitatively inadequate were based on the results obtained from the comprehensive multi-country analyses produced by the pioneering testing methodology introduced by Harvie (2000). However, this viewpoint changed somewhat when Grasselli and Maheshwari (2017, 2018) published studies that challenged some of these methods, considered a benchmark until then, and reassessed the results. In brief, Grasselli and Maheshwari (2016, fn. 3) argued that Harvie (2000) made a mistake when calculating the parameters of the linear Phillips curve, so that they were off by a factor of 100. Apparently this was caused by a dimensional problem, since quantities given in percentages were used as fractional numbers. This dimensional error by a factor of 100 led to wrong values for the equilibrium employment rate v and the length of the business cycle Tlinear, respectively Eqs. (5.35) and (5.47). Grasselli and Maheshwari (2017) argued further that correcting this mistake led to significant improvements in the performance of the model. Grasselli and Maheshwari (2018) went even further, updating Harvie’s (2000) data to cover a longer time window, from 1960 to 2010, and redefining some variables of the model. In the original model all profits are invested; these authors modified this hypothesis by assuming that only a fraction of the profits is invested, which means redefining Eq. (5.13) as follows:

I = kP,     (5.48)

where 0 < k < 1 is a number representing the fraction of profits actually invested. The remaining fraction (1 − k) of the profits is distributed as dividends. As the model assumes no savings, all dividends and wages are consumed. Using data from the same ten countries studied by Harvie (2000), Grasselli and Maheshwari (2018) estimated values for k much smaller than the value of one assumed by the original model, leading to coordinates of the center (u, v) closer to the empirical means and inside the empirical cycles. They then produced new cycles and estimates for all countries previously studied, as well as the empirical evolution of the state variables, comparing them with the estimated ones. Fig. 5.18 shows a graph with data from the UK as an illustration of the results produced by Grasselli and Maheshwari (2018). It is clear that the new plot is much improved as compared to the previous one obtained by Harvie (2000). Despite this, the shortcomings of the model noted above remain: the cycles do not even remotely resemble the well-behaved closed orbits predicted by the original model, showing again its inability to reproduce the complicated trajectories produced by the actual dynamics of the wage shares and employment rate in the phase plane.
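The percentage-versus-fraction slip identified by Grasselli and Maheshwari is easy to reproduce. The snippet below is a made-up illustration with synthetic data, not a reconstruction of Harvie’s (2000) actual calculation: fitting the same linear Phillips-type relation with the employment rate entered in percent instead of as a fraction scales the fitted slope down by exactly a factor of 100, which then propagates into derived quantities such as the equilibrium employment rate and cycle length.

```python
import numpy as np

# Linear Phillips-type closure: wage_growth = -a + b * v.
# Generate synthetic (v, wage_growth) pairs with known a, b, v as a FRACTION.
rng = np.random.default_rng(1)
a_true, b_true = 0.5, 0.6
v_frac = rng.uniform(0.85, 0.95, 50)               # employment rate as a fraction
wage_growth = -a_true + b_true * v_frac + rng.normal(0.0, 1e-3, 50)

b_ok, a_ok = np.polyfit(v_frac, wage_growth, 1)          # v as fraction: correct
b_bad, a_bad = np.polyfit(100.0 * v_frac, wage_growth, 1)  # v in percent: slope / 100

print(f"slope with fractions: {b_ok:.3f}")    # close to 0.6
print(f"slope with percents:  {b_bad:.5f}")   # close to 0.006, off by a factor of 100
```

Because the rescaling of the regressor is linear, the shrinkage of the slope is exact, which is why the error carries straight through to every quantity derived from the fitted parameters.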



Figure 5.18 Reassessed cycles for the UK from 1960 to 2010 by Grasselli and Maheshwari (2018). In this plot the wage share u is denoted by ω, whereas the employment rate v is denoted by λ. Comparing this plot with Harvie’s (2000) analysis shown in Fig. 5.6, it is clear that these results have a center inside the actual cycles. Nevertheless, the simulated trajectory, as predicted by the Goodwin model, and its estimated equilibrium mean (eqm) do not even remotely resemble the empirical results. Reprinted from Grasselli and Maheshwari (2018, fig. 3, bottom left) with permission from Wiley.

To end this short review of the empirical studies of the Goodwin model, one should mention that Fiorio et al. (2018) carried out a long-run analysis of the dynamic evolution of the Goodwin state variables for the UK from 1892 to 2016, reaching results very similar to all the others presented above. In particular, they concluded that although short-run profit-squeeze mechanisms are evident in the data, income shares are much more variable in the long run, indicating that power politics among social classes is a key component in the outcome of the income distribution dynamics.

5.5 Conclusions

The model proposed by Goodwin (1967) remains a popular starting point for other, more complex, macroeconomic explorations and extensions of the distributive dynamics of growth with cycles. Hence, in summarizing the conclusions of the studies presented above, one should especially note that these results can be interpreted in more than one way.



First of all, one should mention that all empirical studies indicate that the Goodwin model enjoys very encouraging qualitative acceptance. However, these same studies also indicate that it fails quantitatively. This failure may be partially attributed to the different ways in which the data is handled and how the model’s state variables are defined. In this respect, so far there seems to be no agreement in the literature as to what u and v actually represent. They could be viewed strictly as wage share and employment rate as defined in a variety of ways from the data, or they could be seen just as percentage shares of two income classes. Both approaches lead to similar qualitative results, but different quantitative ones, although the model fails quantitatively in both when compared to real data. Only further empirical explorations of the model and its extensions can answer the question of what is the most adequate theoretical representation of the state variables. In any case, it is clear that growth-cycle models based on extensions of the basic Goodwin model deserve closer attention and many more empirical studies, because the works presented above do not cover all possible methods of actually testing the model with the available data, nor of how the data is best represented in the model itself. Finally, it is worth mentioning another aspect of empirical reality that seems to have received scarce attention so far. As noted by Boyd and Blatt (1988), the history of manias, panics, and crashes (Aliber and Kindleberger, 2015) shows that the Goodwin model also fails to reproduce another basic empirical observation regarding financial crashes: the temporal asymmetry between euphoria and panic, a feature intimately connected to Keynesian uncertainty. In their words: Any satisfactory theory must demonstrate strong asymmetry between the time of rise and the time of fall, with the rise time greatly exceeding the fall time . . . 
The phenomenon of the “panic,” associated with the speed of the sudden crash at the end of the speculative boom, must also be explained by a satisfactory theory. [T]he failure of models of the Goodwin type to yield a sufficiently fast “crash” is inherent in the basic structure of these models, in their adherence to “real” economic factors, to the exclusion of all psychological factors . . . psychological factors such as investor confidence can, and do frequently, change almost instantaneously, and a theory which includes such factors has no difficulty whatever in explaining the speed of the crash. (Boyd and Blatt, 1988, pp. 19, 51)

6 Goodwin-Type Distributive Macrodynamics

The previous chapter approached the competition of social classes over the income distribution by viewing this distributive conflict through the conceptual structure of economic cycles. This way of looking at the problem is quite different from the approach of Chapter 4, where the dynamic competition of social classes over the economic income was treated in the context of stochastic exchange. In the framework of economic cycles, the distributive dynamics among social classes was considered from a macro viewpoint, that is, with no consideration of possible micro foundations, or even of what one means by micro in such a context. The template model of choice for such an approach is the Goodwin growth-cycle macroeconomic dynamics. A short review of the real-world studies of this model carried out until now was presented, and its empirical strengths and weaknesses were also discussed in detail. This chapter is a continuation of the previous one, as it deals with studies that in some way generalize the Goodwin model, or somehow use it as a starting point for further explorations of the income distribution dynamics and related topics. Models that partially agree with the empirical evidence offer excellent inspiration for more realistic modeling, since a deeper look at their mathematical structure, and at where exactly they fail empirically, provides new research avenues worth pursuing. Since such endeavors can be pursued in a variety of ways, and in view of the huge existing literature, the next pages present only models that may be connected to empirical results. Hence, what follows is a selection of studies, presented in no particular order, that offer particular dynamic perspectives on our template model and on how its dynamic structure can be generalized and connected to real-world observations. 
As we shall see, the roads pursued by these studies are very varied, often including generalizations that either add other dimensions to the original ones of capital and labor, or extend the model’s nonlinearity with terms absent in the original




system of differential equations, or even use the cycling pattern brought about by the dynamic limit cycle as inspiration for other approaches.

6.1 Structural Stability

In experimental research one usually expects that if an experiment is repeated under approximately the same conditions it should produce approximately the same results. This leads to the following, often required, proposition in model building: for a system represented by a set of dynamic equations, if their variables suffer a small change, the dynamic equations should not change their qualitative properties. In dynamic systems theory this is translated into the requirement that the phase portrait featuring the motion of the system does not essentially change if the system suffers an infinitesimal perturbation. In other words, the nearby orbits in the phase space should have the same qualitative dynamic properties. A system exhibiting such robustness to small perturbations is said to be structurally stable (Lee, 1992, pp. 20–23). In the context of economics and econophysics, one should change the research conditions from experimentation to observation: an econophysical phenomenon described by a system is said to exhibit structural stability if observations made under approximately the same conditions yield approximately the same qualitative behavior. Otherwise the system is said to be structurally unstable. It follows from these two paragraphs that the basic idea of structural stability, briefly discussed earlier (see p. 50), is very simple and intuitive. It can be framed in formal mathematical definitions and theorems (Hirsch and Smale, 1974, section 16.3; Lee, 1992, section 6.5), but I will not discuss this topic from a rigorous mathematical standpoint, as one often finds in economic papers. 
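The fragility at stake here can be illustrated numerically with a Lotka–Volterra-type system of the kind underlying the Goodwin model. The snippet below is a sketch with made-up parameter values, writing the system as du/dt = u(−a + bv), dv/dt = v(c − du): at the interior equilibrium the Jacobian has purely imaginary eigenvalues (a center), and an arbitrarily small damping term shifts the real parts away from zero, qualitatively changing the orbits.

```python
import numpy as np

a, b, c, d = 0.5, 0.6, 0.4, 0.5        # illustrative parameter values
u_star, v_star = c / d, a / b          # interior equilibrium of the system

# Jacobian of du/dt = u(-a + b v), dv/dt = v(c - d u) at (u*, v*):
J = np.array([[0.0,         b * u_star],
              [-d * v_star, 0.0       ]])

eig_center = np.linalg.eigvals(J)                       # purely imaginary: a center
eig_damped = np.linalg.eigvals(J - 1e-6 * np.eye(2))    # tiny damping: a spiral sink

print("center eigenvalues:", eig_center)   # real parts are zero
print("damped eigenvalues:", eig_damped)   # real parts are negative
```

However small the perturbation, the closed orbits become spirals, which is exactly the qualitative change that defines structural instability.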
Framing structural stability with great mathematical precision often has the effect of clouding, rather than elucidating, the problem, because in several instances where this is done the practical result is the inclusion of unnecessary mathematical parlance that adds little, or virtually nothing, to the understanding of the empirical problem at hand.1

6.1.1 Structural Analysis of Goodwin-Type Models

In order to analyze the Goodwin model from the perspective of structural stability, one should start by revisiting Eqs. (5.43) to (5.46), from which one immediately notes that the Jacobian of the linearized system around the center of equilibrium (u, v) has eigenvalues whose real parts are zero. This means that a small perturbation could lead to a non-zero trace in the Jacobian (5.43) and produce eigenvalues

1 See the comments on the limited usefulness of mathematical rigor in physics (pp. 44–46).



with non-zero real parts, turning the orbits into a source or a sink (Percival and Richards, 1982, pp. 34–36; Boyce et al., 2017, sections 9.1 and 9.3). Hence, from this simple remark it becomes immediately clear that the Goodwin model, as well as the classical Lotka–Volterra predator–prey model, are both structurally unstable. The next question is whether structural instability is reason enough for the automatic rejection of the model. Some argue that structural stability is a fundamental criterion in model building, being a necessary condition for predictability, observability, reproducibility, and confirmation of experiments on scientific phenomena. This viewpoint is known as the principle of structural stability. Under it, structurally unstable models would be considered inadequate for describing empirical regularities, as having no predictive abilities, and ought to be rejected. From this perspective a number of authors attempted to modify the Goodwin model to make it structurally stable while keeping its limit cycle property (Veneziani and Mohun, 2006, and references therein). Nevertheless, it is possible to change the notion of structural stability based on the concept of topological equivalence, defined, as above, by dynamic systems whose phase portraits are unchanged by small perturbations, to the notion of observational, or practical, equivalence, where the perturbed orbits remain near the trajectories of the original orbits long enough, the length of time considered long enough depending on the problem being studied (Veneziani and Mohun, 2006, p. 441). As a consequence, dissipative systems possessing very slow convergence to the center of equilibrium of the original model may be considered, for all practical purposes, as observationally equivalent and, in this sense, structurally stable.

6.1.2 Are Structurally Unstable Models Fatally Flawed?

Veneziani and Mohun (2006, pp.
441–442) discussed further the adequacy of structurally unstable models as realistic representations of natural phenomena, and argued that, from the viewpoint of the mathematical definitions and theorems, the question of whether or not a system is structurally unstable cannot be answered with a simple yes or no, because there are various instances where the rigorous mathematical theorems on structural stability are partially violated or some of their conditions are not met. This is hardly surprising because mathematical definitions are usually too restrictive, often excluding particular cases of the phenomenon under study whose empirical properties are such that they ought to be regarded as included in the proposed definition. Nonetheless, the lack of a simple yes-or-no answer regarding the strict mathematical obedience of the theorem on structural stability does not refute the Goodwin model’s structural instability, but it does raise doubts about a straightforward mechanical application of the methodological prescription that a structurally unstable model should be immediately rejected. In fact, there is no guarantee of predictability or observability in structurally stable models because, as discussed earlier (p. 49), Newtonian dynamic systems do not necessarily exhibit the expected predictability properties. As noted, predictability is impossible in several simple systems that satisfy Newtonian equations beyond a certain definite time horizon (Lighthill, 1986). In view of this, one can make the point that in real-world analyses and empirical applications, mathematical theorems should not be taken at face value, because they usually are too restrictive, and there are always cases where some of the definitions and requirements of rigorous mathematical theorems do not apply. Therefore, the current problem is more productively approached in less strict, and more intuitive, terms, using the rigorous mathematical theorem only as a general guide, and even disregarding it if the empirical problem so suggests. Besides these points, structural instability can be the engine of change in some processes because, as is often observed in nature, systems do change their behavior qualitatively under certain conditions. In fact, systems characterized by a certain degree of instability may possibly be very good representations of complex economies once we accept that economic systems are not in equilibrium, but far from it. So, the doctrine of stability, or stability dogma, established by the principle of structural stability, which regards structurally unstable systems with suspicion (Lee, 1992, sections 1.2 and 6.1), may not be empirically grounded, possibly having an exclusively philosophical basis originating from the a priori conviction that reality must be stable. By this view, the structural instability of the Goodwin model and some of its sequels does not make them unsatisfactory models for representing the basic mechanism of distributive conflict between social classes (Veneziani and Mohun, 2006, p. 444), but quite the contrary. The reasoning set out above may be further strengthened if we adopt the concept that class conflict is inherent to the capitalist system and that historical facts about many societies show that this conflict occurs discontinuously, changing over time according to the balance of power between the main political players. Indeed, considering the results reviewed in Chapter 3, the history of income and wealth inequality is always political, chaotic, and unpredictable. It involves national identities, institutional structures, and sharp reversals. Nobody can fully predict the reversals of the future. In conclusion, interpreting the distributive dynamics as focused on qualitative changes and structural instability appears to be more in line with the empirical evidence, because the structural instability of the Goodwin model could express the fragility of the mechanism regulating the distributive conflict. Perturbations in the model leading to qualitative changes in its phase portrait could be viewed as basically describing different institutional, political, and economic environments



that imply different effects on the distributive competition, because economic policies would then affect the dynamics of the Goodwin cycles by shifting their loci in the phase portrait and the center of the cycles (Veneziani and Mohun, 2006, pp. 445–448).

6.2 Cyclic Evolution in the Tsallis Distribution

It was shown in Section 5.4.4 that by representing the income distribution of 99% of a population by the Gompertz curve, the percentage share of this component relative to the total income can be identified with the percentage share of labor, that is, with the Goodwin variable u. The other Goodwin variable, the unemployment rate v, can be obtained from the income data using the concept of effective unemployment, a minimum income such that below it a person is in effect jobless. Then, by probing the Brazilian income data from 1981 to 2009, Moura and Ribeiro (2013) showed that the u–v phase plane formed with these data does cycle in a manner similar to the predictions of a predator–prey model of Goodwin type, also evolving into a helix-like pattern (see Figs. 5.13–5.15). A different, and unexpected, way of reaching a similar cycling pattern was obtained by Soares et al. (2016), who fitted the Brazilian income data with the Tsallis distribution (TD). As discussed in Sections 2.3.5 and 2.4.6, the TD is capable of fitting the whole distribution by means of only two positive parameters, B and q. This approach to data fitting allows the study of the parameter-space evolution of the entire distribution, since there is no need to divide the data into two distinct components to obtain a good fit. Fig. 6.1 shows results for the TD parameter-space evolution obtained with the Brazilian income data from 1981 to 2009. Note the striking similarities with the results shown in Fig. 5.15 depicting the phase portrait evolution of the Goodwin variables u and v obtained with the same data. 
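For concreteness, a two-parameter TD fit can be sketched as follows. The parameterization below is the standard q-exponential complementary cumulative distribution, F(x) = [1 + (q − 1)Bx]^(−1/(q−1)); the book’s exact normalization (Sections 2.3.5 and 2.4.6) may differ, and the data are synthetic, so this is an illustration of the fitting idea rather than of Soares et al.’s (2016) actual procedure. It exploits the fact that at the correct q the q-logarithm of the CCDF is exactly linear in x.

```python
import numpy as np

def q_log(F, q):
    """q-logarithm: ln_q(F) = (F**(1-q) - 1) / (1-q); reduces to ln(F) as q -> 1."""
    return (F ** (1.0 - q) - 1.0) / (1.0 - q)

def fit_tsallis_ccdf(x, F, q_grid):
    """Fit F(x) = [1 + (q-1) B x]**(-1/(q-1)) for q > 1 by scanning q:
    at the correct q, ln_q(F) equals -B x exactly, so keep the q that
    gives the best straight line through the origin."""
    best = (np.inf, None, None)
    for q in q_grid:
        y = q_log(F, q)
        B = -np.dot(x, y) / np.dot(x, x)       # least-squares slope through origin
        resid = np.sum((y + B * x) ** 2)
        if resid < best[0]:
            best = (resid, B, q)
    return best[1], best[2]

# Noise-free synthetic CCDF with known parameters (illustrative values only).
B_true, q_true = 0.02, 1.35
x = np.linspace(0.0, 500.0, 200)
F = (1.0 + (q_true - 1.0) * B_true * x) ** (-1.0 / (q_true - 1.0))

B_fit, q_fit = fit_tsallis_ccdf(x, F, np.linspace(1.05, 2.0, 400))
print(f"B = {B_fit:.4f}, q = {q_fit:.4f}")   # close to B_true and q_true
```

Applying such a fit year by year yields one (B, q) point per year, whose trajectory is what Fig. 6.1 tracks.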
The projections showing the time evolution of both parameters can be similarly compared to the evolution of both u and v in Fig. 5.13, and the cycling nature of the u–v phase portrait in Fig. 5.14 can be compared to the projection on the floor of Fig. 6.1. It is important to stress that the studies of Moura and Ribeiro (2013) and Soares et al. (2016) were made with the same income data of Brazil, but using entirely different methodologies. Nevertheless, the results are alike. Admittedly, the curves in both cases are not exactly the same, but the general behavior is very similar, including the helix-like evolution of the parameter values. Moreover, although the identification of the percentage share of labor with the Goodwin variable u was made by Moura and Ribeiro (2013) by means of the Gompertz curve, similar results could in principle also be reached with other functions, such as the exponential. Therefore, it is conceivable that the parameters obtained by fitting income



Figure 6.1 Parameter-space evolution of the Tsallis distribution with Brazilian income data for 1981–2009. Since the TD is able to fit the whole income data using only two parameters, B and q, their evolution carries information about the temporal behavior of the whole distribution. A helix-like pattern is clearly noticeable, as is its phase-portrait projection, both resembling the ones obtained with the Goodwin model as shown in Figs. 5.13 and 5.15. Soares et al. (2016) extended these results to the time window 1978–2014, but there was no significant change in the general pattern, apart from a different normalization of the parameter B that basically squeezed the projected phase portrait along the B-axis.

distribution data by functions other than the Gompertz curve should also cycle if fitted annually over a time window of at least a decade. These results again raise questions about the possible dynamic nature of the parameters obtained by fitting the income data with different functions since, as argued (p. 238), these parameters may not be parameters at all, but dynamic variables possibly connected to models such as the Goodwin one. If this turns out to be true, the quest to find the “best”-fitting function for the income data by some supposedly robust, or rigorous, statistics will be in doubt, as argued above (pp. 65, 103). This is so because these supposed parameters would change according to a certain, yet unspecified, dynamic theory that would in effect turn them into dynamic quantities. Nevertheless, it remains to be seen under what kind of theoretical framework such a connection could actually be made.



6.3 The Boyd–Blatt Model

The previous chapter showed that the original Goodwin model has qualitative features generally in line with the empirical data, although it also seems qualitatively incapable of reproducing the observed temporal shape of business cycles. This is so because the Goodwin cycles are perfectly symmetric in time, whereas real-world trade cycles are not at all temporally symmetric (see, e.g., Gabisch and Lorenz, 1989, p. 11). As noted above (p. 242), the upswing movement occurs over several years, consisting of time intervals much longer than the drop, which happens quite suddenly when share values collapse in a matter of days, or even hours. It was under this perceived disagreement between the results of the model and the observed behavior of real cycles that Boyd and Blatt (1988; see also Boyd, 1986, 1987) proposed a new model for business cycles in which Keynes’ ideas on business confidence and Minsky’s financial instability hypothesis, both discussed in Section 5.2, play central roles. The theory advanced by these authors is quite complex, not from a mathematical viewpoint, but from a conceptual one, since it empirically addresses the problem of describing trade cycles from various perspectives, an approach that generates several variables, equations, and parameters. It is based on the functioning of nineteenth-century laissez-faire capitalist systems, although several of its conclusions seem in line with what happens in several world economies at the time of writing, especially in view of state policies advocating a severe reduction of the size of the state and of its intervention in economic affairs, as well as of the 2008 financial crisis. 
A full exposition of this model in all its details is beyond the scope of this section, which aims at comparing this proposal and its results with their simpler counterparts produced by the Goodwin-type models, and at bringing to the reader’s attention what this author believes is a model that deserves further attention and development, because it has the potential of shedding light on the income distribution problem by connecting it to the wider phenomenon of the business cycle. So, the next paragraphs will present a limited review of the Boyd–Blatt model, discussing its most important features and indicating where the interested reader can learn more about it. But before presenting Boyd and Blatt’s proposal for a time-asymmetric cycling model, one should note that it is unclear whether this asymmetric behavior of business cycles is directly connected to the proposal advanced by Goodwin (1967), since his model describes the distributive dynamics of two income classes, characterizing the cyclic stages where both gain, both lose, or the gain of one is intimately connected to the loss of the other. This happens regardless of the interpretation of its main variables. And although the behavior of market shares may have a direct impact on the gains or losses of both sides of this distributive conflict, it is unclear whether these two dynamics have synchronized periods, or whether the latter cycle might end when the


Goodwin-Type Distributive Macrodynamics

period of the business cycle is still incomplete or even in its early stages. This is so because there are several types of business cycles and their periods differ quite a lot from one another (see below).

6.3.1 Main Variables and Equations

The Boyd–Blatt model is composed of several differential equations grouped into two sets: one for the deterministic, or systematic, behavior of its key variables, and another for additional random effects. The model does not use a random walk to add random shocks, but white noise, and this implies that its main features are a consequence of the systematic behavior of its variables as obtained from the solutions of deterministic differential equations. The random component just adds secondary random effects without significantly altering the main behavior given by the quantities reached deterministically. There are two main types of agents interacting in the model: rentiers and entrepreneurs. The former group are the investors, willing and able to place their money stock in shares of speculative enterprises with the promise of a future cash flow. Entrepreneurs are the ones who possess the Keynesian animal spirits, being often made of speculators, dreamers, adventurers, or gullible fools, whereas rentiers are basically parasitical on the rest of the economy, since they collect rent, but do not perform significant services to the rest of the economy in return for these payments (Boyd and Blatt, 1988, p. 104). Keynes (1936, p. 376) even called rentiers “functionless investors” (see also Hudson, 2017, pp. 197–200). A third group, made up of everyone else, includes banks and other financial institutions, as well as the interaction between capital and labor. Because the authors state that their explicit purpose in the model is to gain “an insight into the causation of the trade cycle” (Boyd and Blatt, 1988, p. 56), these groups acquire a lesser importance than investors and entrepreneurs.
6.3.1.1 Present Value of Investment

Investors have stocks of currency and are willing to put their money into some investment project with the promise of a future cash flow with interest payments. For them the basic question is how to value their investment. The conventional answer is given in terms of the discounted present value V of the future cash flow x with interest r. Hence,2

V = 100τx/r,  (6.1)

2 The expressions presented in this section will be slightly altered from the original ones appearing in Boyd and Blatt (1988) in order for them to conform with the dimensional choices adopted in this book (see Section 3.4.3).


where x is the amount of the periodic future cash flow in some currency, r is the discount rate, that is, the interest rate used to determine the present value of future cash flows, and τ is the time constant necessary for having a dimensionally homogeneous expression. The discount rate is the reward that investors demand for accepting delayed payments, and the chosen unit of the time constant determines the time interval of these payments, usually a year.3 Dimensionally, the present value of investment V is a stock expressible in terms of generic currency units, the promised periodic future cash flow x is an amount of currency per year, the interest rate is given in percentage, [r] = %, and the time constant is, as usual, a unit constant given in years (see p. 124).

6.3.1.2 Risk Premium, Confidence, and the Horizon of Uncertainty

If the future cash flow x is entirely riskless, it is usual to take the discount rate for risk-free investments as being quite close to the interest paid on government bonds. Let us use r0 for this risk-free interest rate. However, future cash flows are not entirely certain, especially if the investment project is such that uncertainty is an important economic factor. Then the principle that “a safe dollar is worth more than a risky one” implies that the discount rate must be adjusted by adding a risk premium r∗, yielding

r = r0 + r∗.  (6.2)

The main point here is that there is a significant time lapse between the beginning of a project and the anticipated returns. Since speculative investments project into a future that is not known with certainty, or not known at all, this being especially the case if one looks into a more distant future, the Keynesian practical men (see pp. 206, 207) rely on a psychological factor called confidence in the future, which is the essence of the risk premium r∗. Under this reasoning, as a first approximation Boyd and Blatt (1988) ignored the risk-free rate of return r0, so that r in Eq. (6.2) becomes basically the risk premium. The valuation formula (6.1) may then be rewritten as

V = f x,  (6.3)

where

f = 100τ/r∗  (6.4)

3 In finance the stream of equally spaced cash flows with interest is called an annuity. Eq. (6.1) indicates that if an investor wishes to receive the amount x a year in some currency with an interest rate r, the amount of currency stock to be invested is V. See Brealey et al. (2001, pp. 50–51) for details.
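Under the footnote’s annuity reading of Eq. (6.1), a minimal numerical check can be sketched as follows (the function name and the sample values are illustrative, not from the book):

```python
def present_value(x, r, tau=1.0):
    """Present value V = 100*tau*x/r of a stream paying x per year,
    discounted at an interest rate r given in percent (Eq. 6.1)."""
    return 100.0 * tau * x / r

# An investor who wants a cash flow of 1,000 currency units a year at a
# 5% discount rate must invest a stock of 20,000 currency units.
V = present_value(x=1000.0, r=5.0)
print(V)  # 20000.0
```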


is called the horizon of uncertainty. Since [r∗] = %, f has dimension of time, i.e., [f] = a. On this quantity, they wrote:

Thus, “investors’ confidence” can be described by a psychological variable f which has the dimension of time, and has the intuitive meaning of a “horizon of uncertainty,” a time beyond which the investor does not believe the future course of events to be predictable. Clearly, f increases as “confidence” increases, and decreases when there are shocks to confidence. (Boyd and Blatt, 1988, p. 64; quotation marks in the original)

Brealey et al. (2001, p. 316) reported that the average risk premium for common stocks in the American stock market index Standard & Poor’s 500 from 1926 to 1998 was 9.4%. According to the analysis above, this means an average horizon of uncertainty, or psychological investors’ confidence in the future, of about 10 years. Note that this definition for the horizon of uncertainty assumes no cash flow after a time f, and positive values for x before time f. The next point addressed is how f changes in time. The Boyd–Blatt model considers that the confidence term, or horizon of uncertainty, must have a positive constant term that makes f increase continuously when there are no shocks to confidence, and a decreasing function related to speculative business called the bankruptcy rate B. So, during the time interval dt, a fraction (B/100) dt of all speculative businesses go bust. The proposed expression yields

df/dt = 1 − a1 B f/(100τ),  (6.5)

where a1 is a positive dimensionless parameter, [B] = %, and τ was introduced, as usual, so that the expression has correct dimensions.

6.3.1.3 Share Market Prices

The investment valuation model above also needs to consider the movement of share market prices. Let P be the current market price of a share which pays a dividend yield j, that is, a percentage that the investor receives a year for the investment (Brealey et al., 2001, p. 282). So, if the share pays dividends of x = jP/(100τ), the valuation formula (6.3) becomes

V = f x = f j P/(100τ).  (6.6)

The new quantities have the following dimensions: P is a currency stock and [j] = %. The variables within the valuation formula above are constantly changing due to market movements, but prices take time to adjust, so we have V ≠ P most of the time. The investor always tries to predict short-term movements; thus, if V > P the investor will buy in the hope of getting real gains from the imminent adjustment.
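The horizon of uncertainty of Eq. (6.4) and the share valuation of Eq. (6.6) are direct to compute; the sketch below (function names are this author’s, not the book’s) reproduces the back-of-the-envelope figure quoted above from the Brealey et al. risk premium:

```python
def horizon_of_uncertainty(r_star, tau=1.0):
    """Horizon of uncertainty f = 100*tau/r_star (Eq. 6.4), in years,
    for a risk premium r_star given in percent."""
    return 100.0 * tau / r_star

def share_valuation(f, j, P, tau=1.0):
    """Present value V = f*j*P/(100*tau) of a share with price index P
    and dividend yield j in percent (Eq. 6.6)."""
    return f * j * P / (100.0 * tau)

# The 9.4% average risk premium reported by Brealey et al. translates
# into a horizon of uncertainty of roughly ten years.
f = horizon_of_uncertainty(9.4)
print(round(f, 1))  # 10.6
```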


If V < P the investor will not buy, but sell the shares he holds. Therefore, the factor that indicates the direction of share price movements in the near future is given by

V − P = f j P/(100τ) − P = [f j/(100τ) − 1] P.  (6.7)

6.3.1.4 Fractional Rate of Change of Price Index

The model does not use P for any actual share price, but only as a general indicator of price movements. So, in the model P is basically a share price index. The beginning of the upswing phase is assumed to yield P = 1. Then P > 1 indicates a strong market where share prices rise with ease, whereas P < 1 means a low, depressed share market. Another quantity introduced in the model is the fractional rate of change of the share price index, D, defined as

D = (1/P) dP/dt.  (6.8)

It is assumed that D is proportional to the term within parentheses on the right-hand side of Eq. (6.7), which is dimensionless. The constant of proportionality is called b, and it must have dimension [b] = a−1, since this is the same dimension of D. The model has a number of such constants b, from b1 to b6, all having the same dimension of 1/time. Similarly, the model also has six dimensionless constants a. All these parameters are used to define the rate of adaptation of financial quantities and the market reaction to one or more major bankruptcies. This means that D is affected by major bankruptcies, and not only by f. So the term −a3 B is added to the expression for D to reflect this effect, where B is the rate of bankruptcies. A third component is added as a kind of lower barrier to the share price index P. As markets tumble in panic there are always some optimistic fools and sober investors who believe that the drop has gone too far, and they prevent P from descending to zero. Since these investors have different views on how far the drop has already gone, and will go no further, the model does not assume a hard lower barrier, but a soft barrier for the price index. As prices go lower and lower, potential buyers conclude that the time to buy has arrived, and this sets into motion forces that counteract and then arrest the fall. Let P1 be the soft barrier for the price index, a quantity with the same dimension as P.

6.3.1.5 Random Events

Random effects ought to be considered in the model. Boyd and Blatt (1988, section VI-D) made it clear that share price variations are subject to unpredictable influences such as sudden rumors, unexpected good or bad news, sudden shifts due to unfounded pessimism or optimism, national or international events, etc. These factors are grouped into what is called market sentiment, this being the overall


attitude of investors toward the market. Nevertheless, these factors lie beyond the reach of mathematical prediction and were considered as having only minor effects over the course of the trade cycle, since its basic trend is essentially determined by the systematic effects discussed so far. They stated that “if the systematic effects are omitted, and all the emphasis is placed on the random effects, then the resulting ‘cycle’ bears only a superficial resemblance to what happens in reality” (Boyd and Blatt, 1988, p. 70), because random effects alone would not explain the long-run components, of 24 months or more, whose time scale is comparable to a cycle. Since the share price variations D and the bankruptcy rate B do not move in a simple and regular fashion, but fluctuate up and down, the random variable u1 was introduced in the model as

u1 = √12 b5 (r1 − 0.5),  (6.9)

where b5 is a parameter having dimension of 1/time and r1 is a random variable uniformly distributed in the interval (0,1). All these considerations put together resulted in the proposal of the following expression for the fractional rate of change of the share price index,

D = b1 [f j/(100τ) − 1] − a3 B/(100τ) + (b4/P) ramp(P1 − P) + u1,  (6.10)

where

ramp(X) = X for X ≥ 0, and ramp(X) = 0 for X < 0.  (6.11)
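The price-index drift of Eq. (6.10) is straightforward to code. The sketch below (deterministic part only, u1 = 0, with illustrative parameter values not taken from the book) shows the sign logic: D is positive in a confident market, where f j/(100τ) > 1, and the soft-barrier term only activates below P1:

```python
def ramp(x):
    """Eq. (6.11): ramp(X) = X for X >= 0, zero otherwise."""
    return x if x >= 0.0 else 0.0

def price_drift(f, P, B, j, b1, a3, b4, P1, tau=1.0):
    """Deterministic part of Eq. (6.10): fractional rate of change of
    the share price index (random shock u1 omitted)."""
    return (b1 * (f * j / (100.0 * tau) - 1.0)
            - a3 * B / (100.0 * tau)
            + (b4 / P) * ramp(P1 - P))

# Confident market, no bankruptcies, price well above the soft barrier:
D = price_drift(f=15.0, P=1.2, B=0.0, j=10.0, b1=1.0, a3=1.0, b4=1.0, P1=0.25)
print(D > 0)  # True
```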

At this point in their explanation, Boyd and Blatt (1988) remarked that all such price variations occur in reality, but that the price level of shares is much more volatile than real wages, a difference that sets their model in sharp, fundamental contrast to all growth-cycle models of the Goodwin type.

6.3.1.6 Desire to Invest

To describe realistic markets, one also has to consider the willingness of investors to hold a fraction of their wealth in the form of shares and bonds. Let the symbol G refer to this desired investment fraction. This is a dimensionless quantity that cannot take negative values in the absence of an explicit banking sector, which is the case in the Boyd–Blatt model. In addition, G cannot be larger than unity. The desired investment fraction tends to increase when share prices rise, but not indefinitely, as its maximum value is 1. It also tends to decrease when share prices fall, but not all the way down to zero. The faster share prices rise, ceteris paribus, the faster G rises. For this reason the model assumes the growth of G to be proportional to D, but in this case G will only rise as long as D remains positive, and at a


slowing-down rate as G approaches unity. When G is close to zero it means that the bottom of the crash is near, a situation where investors do not wish to invest, but to divest, a case where both G and dG/dt lose their mathematical significance. These considerations attempt to reproduce important qualitative characteristics of investors’ psychology, which are represented by the following expression

dG/dt = a2 [1 − G sign(D)] D + (b3/G) ramp(G1 − G).  (6.12)

Here a2 is a dimensionless parameter, [D] = [b3] = a−1, and G1 is a dimensionless soft lower barrier parameter whose aim is to avoid a vanishing G.

6.3.1.7 Financial Investment and Consumption

The next quantity defined in the model is the flow of investment funds from rentiers to entrepreneurs. Let L represent the actual flow rate of investment funds to speculative entrepreneurs and A1 the market valuation of shares and bonds counted as part of the investors’ assets. So, A1 is in fact the perceived asset position of the investor. The desired financial investment is the desired investment fraction G multiplied by the total perceived assets A1. Let now J be the actual financial investment, that is, the value of outstanding shares and bonds. So, J refers to the shares and bonds currently held by investors. In terms of dimensions, both J and A1 are currency stocks and L is a currency flow per year. Remembering that G is dimensionless, if the desired value GA1 agrees with the actual value J, there is on average no reason to buy more shares. But, if GA1 > J, the difference GA1 − J provides an incentive to do so. Then, when a company issues new shares to raise money (placement) there are investors willing to buy them, and cash flows toward speculative enterprises. But, if GA1 < J, that is, the desired investment is smaller than the actual one, there is no reversed cash flow. This means that in this model financial investment is an irreversible process, and the equation which determines the flow rate L implies that L can never be negative.
The investors’ actual consumption c depends on their income flow. Let y be the perceived income flow of rentiers. The term “perceived” means that the wealthy investor counts capital gains in the share market as income. Remembering that j is the dividend yield and J is the actual financial investment, the term jJ/(100τ) is the flow of dividend income, DJ is the flow of capital gains (for D > 0) from the share portfolio of the rentier, and −BJ/(100τ) is the flow of capital losses due to bankruptcies. If there is an increase in the money supply q, then y also increases. The desired consumption of the investors c∗ is an increasing function of y as long as y is positive, being zero otherwise. In addition, one needs to consider the land rent v in the perceived income flow.


All these considerations lead to the proposal of the following expressions:

y = v + DJ + jJ/(100τ) − BJ/(100τ) + q,  (6.13)

c∗ = k ramp[y/(y + k′)],  (6.14)

dc/dt = b1 (c∗ − c),  (6.15)

L = b1 A1 ramp(G − J/A1),  (6.16)

dA1/dt = y − c,  (6.17)

dJ/dt = L + [D − B/(100τ)] J,  (6.18)

where b1, k, and k′ are parameters. The dimensions of these quantities are as follows: y, v, c, c∗, k, k′, and q are all currency flows per year, and [D] = [b1] = a−1.

6.3.1.8 Bankruptcy Rate

Let A2 be the cash position of entrepreneurs. The model considers this quantity as a consolidated account and, therefore, dismisses any book value, since when the market nosedives and panic sets in the company book value is considered largely irrelevant. The companies most at risk have cash positions at or below zero. The model assumes that the rate at which companies enter the danger zone for bankruptcy should be proportional to −dA2/dt, with the proportionality being a function of A2 that increases when the consolidated cash position A2 worsens. The formula should also include a random shock effect to produce cycles different from one another. These shocks, however, should not alter qualitatively the course of the trade cycle in the model. These considerations yield the following expressions:

B/(100τ) = ramp[−(1/A2) dA2/dt] + u2,  (6.19)

u2 = √12 b6 (r2 − 0.5),  (6.20)

where r2 is a random variable uniformly distributed in the interval (0,1) and b6 is a parameter. The dimensions are as follows: [b6] = [u2] = a−1 and A2 is a currency stock.
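The shock term of Eq. (6.20) is simply a scaled, centered uniform deviate; the factor √12 makes its standard deviation equal to b6, since a uniform variable on (0,1) has variance 1/12. A quick numerical check (with an arbitrary seed and an illustrative b6 value) can be sketched as:

```python
import random
import statistics

def u2_shock(b6, rng):
    """Random shock of Eq. (6.20): u2 = sqrt(12)*b6*(r2 - 0.5), with r2
    uniform on (0,1). Zero mean and standard deviation b6."""
    return (12.0 ** 0.5) * b6 * (rng.random() - 0.5)

rng = random.Random(42)  # fixed seed, illustrative only
b6 = 0.1
samples = [u2_shock(b6, rng) for _ in range(100_000)]
print(abs(statistics.mean(samples)) < 0.01)         # True: mean ~ 0
print(abs(statistics.pstdev(samples) - b6) < 0.01)  # True: std ~ b6
```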


6.3.1.9 Flow of Funds and Entrepreneurial Activity

The activity of speculative ventures is correlated with the flow of funds emerging from these projects. But this correlation cannot be instantaneous, since these ventures take time to mature, meaning a delayed flow of funds originating from them. Hence, the actual activity level z must have its expression adjusted by a desired activity level z∗, which depends proportionally on the flow of funds L coming from investors. This proportionality must, however, be above unity, since a project usually started, at least in the nineteenth century, well before all necessary funds were raised. Bankruptcy also causes a negative effect on z, and a term considering this must also be included. The process is initiated when entrepreneurs start receiving a flow of funds L from investors. Then they pay dividends at a flow rate jJ/(100τ), expend funds at the activity rate z, and receive a flow due to the capital rental v′ for the use of their capital goods from the “everyone else” group (see p. 250). The first equation that can be written from these considerations is in fact an accounting identity for the current cash position of entrepreneurs, as follows:

dA2/dt = v′ + L − z − jJ/(100τ).  (6.21)

In this expression z, z∗, and v′ are all currency flows per year. Note that the cash position above enters the bankruptcy rate given by Eq. (6.19). The proportionality between z∗ and L is, as mentioned above, written by means of a dimensionless parameter, a5 in this case:

z∗ = a5 L,  (6.22)

whereas the rental for the use of capital, paid to the entrepreneurs, yields

v′ = a4 (c + z),  (6.23)

which uses another dimensionless parameter, a4. Finally, the change in the activity level is adjusted toward the desired activity level, less a term related to the cessation of activities due to bankruptcy:

dz/dt = b2 (z∗ − z) − zB/(100τ),  (6.24)

where [b2] = a−1.
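With bankruptcies switched off (B = 0), Eq. (6.24) simply relaxes the activity level z exponentially toward the desired level z∗ = a5 L of Eq. (6.22). A minimal Euler-integration sketch, with illustrative parameter values not taken from the book, would be:

```python
def activity_step(z, z_star, B, b2, dt, tau=1.0):
    """One Euler step of Eq. (6.24): dz/dt = b2*(z* - z) - z*B/(100*tau)."""
    return z + dt * (b2 * (z_star - z) - z * B / (100.0 * tau))

# With no bankruptcies (B = 0), z relaxes exponentially toward z* = a5*L.
a5, L = 1.5, 100.0      # illustrative values, not from the book
z_star = a5 * L         # Eq. (6.22): desired activity level
z = 0.0
for _ in range(2000):   # integrate 20 years with dt = 0.01
    z = activity_step(z, z_star, B=0.0, b2=1.0, dt=0.01)
print(abs(z - z_star) < 1.0)  # True: z has nearly reached z* = 150
```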


6.3.1.10 Everyone Else

This third group receives revenue in the form of sales of consumption goods to investors at flow c, and of raw materials and labor to entrepreneurs according to their actual activity level z. The outflow from this group consists of the rental v for the use of land and the rental v′ for the use of the entrepreneurs’ capital goods. This results in changing the cash asset position of everyone else, A3, by means of the balanced accounting expression

dA3/dt = c + z − v − v′,  (6.25)

where A3 is clearly a currency stock.

6.3.1.11 Summary

The model can be succinctly described as having the following sets of endogenous and exogenous variables, equations, auxiliary quantities, and parameters.

(1) Nine endogenous variables, namely A1, A2, A3, c, f, G, J, P, and z.
(2) Nine first-order ordinary differential equations determining the motion of the endogenous variables above, namely Eqs. (6.5), (6.8), (6.12), (6.15), (6.17), (6.18), (6.21), (6.24), and (6.25).
(3) Four exogenous variables, specifically time t, the increase in the money supply q due to gold discoveries in the nineteenth century, and the random shock terms u1 and u2, respectively defined by Eqs. (6.9) and (6.20).
(4) Seven auxiliary quantities obtained from the endogenous variables, namely D, y, c∗, L, B, z∗, and v′, respectively defined in Eqs. (6.10), (6.13), (6.14), (6.16), (6.19), (6.22), and (6.23).
(5) Twenty-two parameters, although several of them were included for computational purposes only. The numerical values used by the authors in their simulations can be found in Boyd (1986, pp. 255–256) or Boyd and Blatt (1988, pp. 131–132).

A detailed list of all quantities and symbols used in the model can also be found in Boyd (1986, pp. 231–234).
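To give a feel for how the deterministic core of the model works, the fragment below integrates just the confidence–price subsystem of Eqs. (6.5), (6.8), and (6.10), with bankruptcies and shocks switched off (B = u1 = 0) and illustrative parameter values that are not the ones used by Boyd and Blatt. It reproduces only the upswing phase: confidence f grows steadily and, once f j/(100τ) exceeds unity, the price index P climbs.

```python
def ramp(x):
    """Eq. (6.11): ramp(X) = X for X >= 0, zero otherwise."""
    return x if x >= 0.0 else 0.0

def simulate_upswing(f0=5.0, P0=1.0, j=10.0, b1=1.0, b4=1.0,
                     P1=0.25, tau=1.0, dt=0.01, steps=1500):
    """Euler integration of df/dt = 1 (Eq. 6.5 with B = 0) and
    dP/dt = D*P (Eq. 6.8), with D taken from the deterministic part
    of Eq. (6.10) with B = u1 = 0."""
    f, P = f0, P0
    for _ in range(steps):
        D = b1 * (f * j / (100.0 * tau) - 1.0) + (b4 / P) * ramp(P1 - P)
        f += dt * 1.0
        P += dt * D * P
    return f, P

f_end, P_end = simulate_upswing()
print(f_end > 5.0 and P_end > 1.0)  # True: confidence and prices rise
```

In this reduced setting f grows without bound, so no crash occurs; in the full model it is the bankruptcy rate B, fed back through Eqs. (6.5) and (6.10), that breaks the boom.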

6.3.2 Results

Simulations were performed for all variables for a period of forty years, containing four cycles of about 10 years each. This means that the Boyd–Blatt waves could be classified as Juglar cycles,4 whose periods are different from the Kitchin cycles

4 Clément Juglar (1819–1905) was a French statistician who observed in the 1860s cycles in interest rates, prices, and employment rates. He noted the existence of waves of booms and busts with periods varying from seven to eleven years, identifying four phases: prosperity, crisis, liquidation, and depression (Gabisch and Lorenz, 1989,


which, as discussed above (p. 235), are thought to be the ones most closely associated with the Goodwin model. The results were presented graphically in two versions, with and without the random shock terms, the latter being called “deterministic” and the former “stochastic.” The graphs presented by the authors show clear cycles in both versions for L, f, P, G, z, B, J, c∗, c, A1, A2, and A3. In addition, the authors produced further deterministic and stochastic versions of their results including the exogenous variable of money supply q. There is no point in reproducing here all those plots, which can be found in Boyd (1986, pp. 257, 271) or Boyd and Blatt (1988, pp. 14, 82–85, 99–102). Instead, a simple graph made by this author, roughly outlining the general qualitative features of the deterministic results of the Boyd–Blatt model without money supply, is presented. Interested readers are referred to the original plots for the examination of all resulting details and for comparison with real data. Fig. 6.2 shows such a graph, where the time asymmetry of the trade cycles is clearly visible. It shows that the crash is characterized by a quick and general collapse of all endogenous variables, followed by a depressed post-crash period and then a slow recovery, culminating in a euphoria that is followed by all variables crashing down again in a nosedive manner. Note that the model reproduces in general the key roles played by the Keynesian concept of confidence and by Minsky’s financial instability hypothesis. For further detailed analyses of the behavior of each quantity defined in the model the reader is referred to Boyd and Blatt (1988, section VII-G, ch. VIII). Some words regarding the longer run are appropriate now. The long-term average over a succession of complete cycles cannot be equated to a result obtained from an equilibrium calculation, because the concept of equilibrium is inapplicable in the present context.
The balance of the various funds flowing from one group to another alters very sharply during the various phases of the trade cycle and, as a consequence, there is no normal rate of return, since this quantity varies very violently throughout the cycle. The trade cycle is the normal mode of functioning of a laissez-faire economic system, not some kind of “aberration” or deviation from a (supposedly normal) state of equilibrium or balanced growth. (Boyd and Blatt, 1988, p. 108; quotation marks in the original)

The authors also pointed out that their model requires the parameters to be fine-tuned, so much so that none of the three groups can always be the main loser if the system is to continue working indefinitely. In addition, one needs to stress that this is a model for nineteenth-century laissez-faire economies, whose currencies were strictly metallic based. Thus, when the money supply grew

pp. 8–10; Legrand and Hagemann, 2007). The Juglar cycles have periods similar to the Kondratieff intermediate cycles of about 7 to 10 years (see Jadevicius and Huston, 2014, for a review of various cycles).


Figure 6.2 Simple graph outlining the general qualitative behavior of the trade cycles of the endogenous variables in the Boyd–Blatt model. The plot shows an upswing phase characterized by rising confidence, followed by a boom phase where confidence and the flow of funds from investors rise quite sharply, and then a collapse, when confidence nosedives and all other variables crash down. This results in a depressed phase of very low confidence and no flow of funds from investors, followed by a slow recovery, after which the cycle restarts. Note the time asymmetry of the cycles, a key feature of the model that is in sharp contrast with the time-symmetric cycles produced by the Goodwin-type models (see Fig. 5.3). This plot is a rough qualitative outline made by this author based on the results obtained by Boyd and Blatt (1988) and, therefore, should not be taken at face value. The reader is referred to the original plots for details.

due to new gold discoveries, this exogenous effect altered those economies enormously. The authors even suggested, on the basis of their simulations, that the great depression of 1873–1896 was partly due to the reduction in the rate of growth of the money supply (Boyd and Blatt, 1988, sections VIII-B, VIII-C).

6.3.3 Economic Function of Trade Cycles

Boyd and Blatt (1988, ch. IX) concluded their study with a general discussion of the economic role of the trade cycle, arguing that it is an essential mechanism to avoid economic stagnation in capitalist economies. They reached this conclusion by comparing the parallel economic developments of Britain and France during the eighteenth century. The unstable British economy, where ups, downs, and panics occurred in a regular fashion, grew by leaps and bounds, whereas France, with no


violent ups and downs, descended into a stagnant economy that led to the 1789 revolution. Nevertheless, some studies raise doubts about this rosy view of a British economy able to grow by leaps and bounds solely because of its unregulated laissez-faire economy. One should not forget that Britain had several colonies at that time from which it was able to extract huge amounts of economic surplus for centuries, this being especially the case of India (Patnaik, 2017). In addition, Britain was able to drain surplus from the colonies of other countries, this being particularly the case of the huge amounts of gold extracted from Brazil in the eighteenth century during its colonial-period gold rush. The importance of this gold drainage from Brazil to Britain, spanning almost a century, in contributing to Britain’s rise to the position of hegemonic world power of the time should not be underestimated, as the following passage clearly illustrates:

[T]he inflow of Brazilian gold was crucial to the increasing gold circulation and to the establishment of the gold standard in England and facilitated trade with the “hard currency” areas of Europe. The manipulation of this gold trade notably aided London’s rise to the position of the leading European financial center, which it wrested from Amsterdam. (Boxer, 1969, p. 472; quotation marks in the original)

France had colonies too at that time, but in comparison to Britain its colonial and power reach, and, consequently, its ability to drain surplus from abroad, was much more limited. Had Britain been in a similar position, that is, incapable of draining such huge quantities of economic surplus, particularly gold, from abroad for so long, it could, perhaps, have shared a fate similar to France’s, even with its unregulated laissez-faire economy. Boyd and Blatt (1988) also concluded their study by remarking that trade cycles serve the purpose of net transfers of resources from rentiers, who are basically economic parasites, to the economically productive class of entrepreneurs. Only when rentiers are persuaded to forgo personal consumption in favor of investment is economic growth achieved; otherwise the economic resources are squandered and the economy stagnates.

6.3.4 Discussion

At this point the following question comes to prominence. To what extent, if any, is a model of nineteenth-century laissez-faire economies relevant to understanding twenty-first-century economies and the class-based distributive competition taking place today? This question is particularly relevant once we consider that twentieth- and early twenty-first-century economies have several elements that distance them from truly laissez-faire economies, such as fiat money, trade union power, state intervention in the economy, monopoly, oligopoly, and the welfare state system. In other words, most countries today have economic elements


that, if they were not present at that time, were weak to the point of economic irrelevance for most of the nineteenth century. To answer this question it is important to point out that since the 1980s there have been economic processes taking place that seem to be steering several economies toward the situation they found themselves in at the end of the nineteenth century in terms of income inequality, as extensively discussed in Chapter 3. So, understanding how the nineteenth-century economies functioned may help to clarify how several countries will find themselves by, say, the middle of the twenty-first century if they keep going in the direction of a shrinking welfare state system coupled with much less state presence and intervention in their economies. Second, trade cycles, with their associated phenomena of euphoria, mania, and panic, are still with us today, as the recent economic depressions and crashes, most notably the 2008 financial one, show us. So, although the Boyd–Blatt model was developed by taking into account nineteenth-century economies, it may serve as an initial theoretical framework for further developments aimed at empirically describing twenty-first-century economies, as several economic features used in the model may still be valid today. In conclusion, despite the fact that there is no clear connection between the class-based income conflict discussed in the previous chapter and the Boyd–Blatt model, such a connection may, perhaps, be developed by using this model as a starting point and adding new elements to it.

6.4 The Keen Model

The previous section discussed how the Boyd–Blatt model had, among other elements, Minsky’s financial instability hypothesis, discussed in Section 5.2.2, as one of its motivating concepts. The results of the simulations showed that this hypothesis is basically sound; however, a key element of today’s economies is missing from that model: debt.
It was this absence that led Keen (1995) to propose a model of debt-financed instability. The empirical motivation came from the realization that the 1980s were a period of booming debt-financed euphoria that culminated in the 1987 crash, which left a period of economic instability in its wake that lasted into the early 1990s. Hence, although Minsky’s ideas had been, Keen wrote, “comfortably neglected during the long boom of the 1960s, and even during the oil and Third World debt shocks of the 1970s,” after the 1987 crash and its economically unstable aftermath these “economic circumstances warrant a more considered evaluation of Minsky’s theories” (Keen, 1995, p. 607). So, he proceeded to introduce debt due to euphoric expectations into Goodwin’s framework, resulting in the stable limit cycle of the original model being converted into a chaotic one, including a

6.4 The Keen Model

263

destabilizing income inequality and possible divergences that were interpreted as depression. The next paragraphs will present a review of the basic Keen model, its main features, and most important results. As the original paper does not concern itself with dimensional analysis, the discussion below will attempt to show the dimensions of all quantities, while keeping them coherent with the choices and treatment presented in the previous chapters. This means that some expressions of the original paper will appear here slightly modified.

6.4.1 Definitions

The model starts by assuming the definitions and results of the Goodwin model given by Eqs. (5.1) to (5.11). Then some new definitions follow. The bankers’ income B is defined as being the interest rate r times the outstanding debt D. It may be written as follows:

    B = \frac{rD}{100\,\tau},    (6.26)

where⁵ [B] = [D] · a⁻¹, [r] = %, and τ = 1 a. The bankers’ share b̄ is defined as

    \bar{b} = 100\,\frac{B}{Y} = \frac{rD}{\tau Y},    (6.27)

where [b̄] = %. Since the Keen model includes profits obtained from bankers, the profit share defined in Eq. (5.1) is changed to the following expression:

    U = 100 - u - \bar{b},    (6.28)

which means that Eq. (5.10) must now include the bankers’ income:

    P = Y - \frac{L}{\tau} - B.    (6.29)

The interest rate is assumed to be a linear function of the outstanding debt to output ratio, as follows:

    r = r_1 + r_2\,\frac{D}{\tau Y},    (6.30)

where [r₁] = [r₂] = %. The debt to income ratio d̄ may be written as

    \bar{d} = 100\,\frac{D}{Y},    (6.31)

where [d̄] = % · a.

⁵ The adopted dimensional choices and notation are described at p. 122.
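A quick numerical check of these definitions can be sketched as follows; the values of r, D, and Y below are arbitrary illustrative numbers (not from the text), with τ = 1 a. Combining Eqs. (6.26), (6.27), and (6.31) gives the identity b̄ = r d̄/(100τ), which is verified below and reappears in Eq. (6.43).

```python
# Sanity check of Eqs. (6.26), (6.27), and (6.31) with arbitrary values.
r = 4.0      # interest rate [%]
D = 50.0     # outstanding debt (monetary units)
Y = 200.0    # output (monetary units per year)
tau = 1.0    # time unit [a]

B = r * D / (100.0 * tau)    # bankers' income, Eq. (6.26)
b_bar = 100.0 * B / Y        # bankers' share [%], Eq. (6.27)
d_bar = 100.0 * D / Y        # debt to income ratio [% a], Eq. (6.31)

# The three definitions combined give b_bar = r * d_bar / (100 tau).
assert abs(b_bar - r * d_bar / (100.0 * tau)) < 1e-12
print(B, b_bar, d_bar)       # 2.0 1.0 25.0
```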


The following important modification is proposed for Eq. (5.13):

    I = \dot{K} = Y f_3(P/K) - \gamma K.    (6.32)

In this expression the investment I is constituted by a function f₃(P/K) of profit times the output Y, minus the capital depreciation term γK, where γ is a parameter. For expression (6.32) to be dimensionally homogeneous, we must have [f₃(P/K)] = 1 and [γ] = a⁻¹. Note that, considering the accelerator assumption (5.2), the rate of profit U must now assume the proposal given by Eq. (6.28), appearing as follows in the function f₃(P/K):

    \frac{P}{K} = \frac{U}{\beta} = \frac{100 - u - \bar{b}}{\beta}.    (6.33)

This modification actually means that the Keen model no longer obeys Say’s law, i.e., profits are no longer all invested, since investment can now be financed by debt, because Eq. (6.32) is in fact

    I = \dot{K} = Y f_3(U/\beta) - \gamma K.    (6.34)

This important modification of the Goodwin model encapsulates Minsky’s key insight that past liabilities are validated by current cash flows, which then form the basis for future cash flows. So, low or negative profits lead to a deleveraging of the economy, whereas high profits lead to more borrowing. Keen (1995) pointed out that debt is solely used by capitalists to finance investment, but in the spirit of the Boyd–Blatt model this idea is, perhaps, better formulated by stating that entrepreneurs are solely financed by rentiers by means of debt. The equation below expresses this concept:

    \dot{D} = \frac{dD}{dt} = B + I - P,    (6.35)

this being a remarkable departure from the investment expression (5.13) of the Goodwin model. Finally, the unknown functions are defined in similar functional forms. Regarding the wage function f₁(v) in Eq. (5.8), it is changed from the original monotonically increasing linear expression for the Phillips curve (5.23) to the nonlinear formula below:

    f_1(v) = \frac{A}{(B - Cv/100)^2} - D,    (6.36)


where A, B, C, and D are dimensionless parameters. As for the other function, the proposal is as follows:

    f_3(U/\beta) = \frac{E}{\left(F - GU/\beta\right)^2} - H,    (U ≥ 0),    (6.37)

for [E] = [F] = [H] = 1 and [G] = a.

6.4.2 Differential Equations

Considering Eqs. (5.2), (6.33), and (6.34), it is straightforward to show that

    \frac{\dot{Y}}{Y} = \frac{f_3(U/\beta)}{\beta/100} - \gamma.    (6.38)

This result should be compared to Eq. (5.14) produced by the original Goodwin model. Further, in the basic Keen model we have

    \frac{\dot{v}}{v} = \frac{f_3(U/\beta)}{\beta/100} - a_1 - c_2 - \gamma,    (6.39)

which should also be compared to Eq. (5.17). Considering Eqs. (5.7), (5.9), and (5.15), the result yields

    \frac{\dot{u}}{u} = \frac{f_1(v)}{\tau} - a_1.    (6.40)

The rate of change of the debt to income ratio is obtained after some algebra considering Eqs. (5.2), (5.11), (6.27), and (6.34). The final equation may be written as below:

    \dot{\bar{d}} = \bar{b} - U + (\beta - \bar{d})\left[\frac{f_3(U/\beta)}{\beta/100} - \gamma\right].    (6.41)

From the linear formula (6.30) for the interest rate, it is easy to see that

    \dot{r} = \frac{r_2\,\dot{\bar{d}}}{100\,\tau}.    (6.42)

This equation allows us to obtain the rate of change of the bankers’ share, yielding

    \dot{\bar{b}} = \frac{\dot{\bar{d}}}{100\,\tau}\left(\frac{r_2\,\bar{d}}{100\,\tau} + r\right).    (6.43)
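Eqs. (6.39), (6.40), and (6.41) close into a 3D dynamical system in (u, v, d̄) once r and b̄ are expressed in terms of d̄ via Eqs. (6.27), (6.30), and (6.31). The sketch below integrates this system numerically; every parameter value and initial condition is an arbitrary placeholder (not from Keen, 1995) chosen only to exercise the structure of the equations, so the resulting trajectory makes no claim to realism.

```python
# Minimal numerical sketch of the basic Keen system, Eqs. (6.36)-(6.43).
# All parameter values below are arbitrary placeholders, NOT the calibration
# of Keen (1995); they only exercise the structure of the equations.

A, B, C, D = 0.0175, 1.4, 1.0, 0.05      # Phillips curve, Eq. (6.36)
E, F, G, H = 0.05, 1.1, 0.0175, 0.05     # investment function, Eq. (6.37)
A1, C2, GAMMA, BETA, TAU = 0.02, 0.01, 0.02, 3.0, 1.0
R1, R2 = 3.0, 0.05                       # interest rate parameters, Eq. (6.30)

def f1(v):
    # Nonlinear Phillips curve, Eq. (6.36); diverges as v -> 100 B / C.
    return A / (B - C * v / 100.0) ** 2 - D

def f3(x):
    # Investment function, Eq. (6.37), defined for U >= 0, with x = U / beta.
    return E / (F - G * x) ** 2 - H

def rhs(state):
    u, v, d = state                        # workers' share, employment, debt ratio
    r = R1 + R2 * d / (100.0 * TAU)        # Eq. (6.30)
    b = r * d / (100.0 * TAU)              # bankers' share, Eqs. (6.27), (6.31)
    U = max(100.0 - u - b, 0.0)            # profit share, Eq. (6.28)
    g = f3(U / BETA) / (BETA / 100.0)      # growth term of Eqs. (6.38)-(6.39)
    return (u * (f1(v) / TAU - A1),              # Eq. (6.40)
            v * (g - A1 - C2 - GAMMA),           # Eq. (6.39)
            b - U + (BETA - d) * (g - GAMMA))    # Eq. (6.41)

def rk4(state, h, steps):
    # Classical fourth-order Runge-Kutta integrator.
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
        k3 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
        k4 = rhs(tuple(s + h * k for s, k in zip(state, k3)))
        state = tuple(s + h * (p + 2 * q + 2 * w + z) / 6.0
                      for s, p, q, w, z in zip(state, k1, k2, k3, k4))
    return state

u, v, d = rk4((80.0, 90.0, 10.0), h=0.01, steps=400)   # four years
print(u, v, d)
```

Reproducing the stable and unstable regimes discussed in Section 6.4.3 (Figs. 6.3 and 6.4) would require the parameter values of the original paper.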


6.4.3 Results

The original Goodwin model is reobtained by assuming the linear Phillips curve (5.23) instead of the nonlinear Eq. (6.36), with Eq. (6.28) reverting to its format in Eq. (5.1), that is, without the bankers’ share b̄, and with the function f₃ assigned the expression below instead of Eq. (6.37):

    f_3(U/\beta) = 1 - \frac{u}{100} + \frac{\gamma\beta}{100}.    (6.44)
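As a consistency check (added here, not in the original text), substituting Eq. (6.44) into Eq. (6.38) and using U = 100 − u from Eq. (5.1) recovers the growth rate of the original Goodwin model (cf. Eq. (5.14)):

```latex
\frac{\dot{Y}}{Y}
  = \frac{f_3(U/\beta)}{\beta/100} - \gamma
  = \frac{100}{\beta}\left(1 - \frac{u}{100} + \frac{\gamma\beta}{100}\right) - \gamma
  = \frac{100 - u}{\beta}
  = \frac{U}{\beta} .
```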

The dynamics of the Goodwin model is due to the reaction of workers to the level of employment, with capitalists passively investing all their profits. The model consists of two classes in conflict, forming then a 2D system. Depending on the interpretation (see p. 223), the conflict is between workers and capitalists or between employed and unemployed workers. The model also does not make any distinction between entrepreneurs and rentiers, since there is no financial sector in the model (as noted at p. 221). The phase portrait shown in Fig. 5.2 indicates an essentially stable dynamics, with cycles repeating themselves in a regular fashion. When the finance sector is introduced into the system, it fundamentally alters the dynamics of the model, since it is no longer made of two, but three classes: workers, entrepreneurs, and rentiers (banks and other financial institutions). In this 3D dynamical system, entrepreneurs can now borrow to finance their investments and, therefore, accumulate long-term debts with the rentiers. The simulations carried out by Keen (1995) using the parameter values indicated in the paper showed a model dynamically evolving along two major possibilities that depend on a key parameter: the interest rate r. If it is low enough the system is stable, but if it is above a certain threshold the dynamics breaks down due to an unsustainable debt to output ratio d̄ caused by an ever-increasing bankers’ share b̄. The stable evolution of the model is shown in Fig. 6.3, where the evolving simulation for r = 3% appears as a bottom-up motion of the three classes represented on the three axes. The cycles gradually decrease until the system reaches a steady state growth. The stability of the model’s dynamics is no longer possible when the interest rate reaches 4.6% or above, since an entirely different dynamic evolution emerges. Fig. 6.4 shows the motion of the three classes of the model, again in a 3D plot, where the bottom-up evolution is rather different from the one at low interest rate. Initially the workers’ and bankers’ shares are low, but investment financed by borrowing leads to a rise in the bankers’ share and in employment, and to an even further rise in the workers’ share. The increase in the workers’ share leads to a drop in investment and then to lower employment. That situation, which would damp the cycle in the Goodwin model, does not happen in the Keen model when r is above the threshold. On the contrary, the bankers’ share continues to rise, generating a kind of wage–employment vortex, and


Figure 6.3 Stable evolution of the Keen model with low interest rate. The motion of the system is bottom-up, showing a gradual stabilization of the cycles until the system reaches a steady state growth. Reprinted from Keen (1995, fig. 6) with permission from Taylor & Francis Ltd

the cycles become more intense, reaching a point where the system collapses toward zero wages, employment, and profits, whereas the bankers’ share continues going up. This is the point where the system reaches a debt-induced breakdown. These results show very clearly that uncontrolled debt-induced growth due to high interest rates can lead to an upward spiral of extreme investment boom, followed by wages blowing up until their collapse, which then reduces the workers’ share and temporarily reverses the trend. But this trend returns afterwards at even higher values of the bankers’ share, until a complete breakdown of the model. The only class that always benefits until the breakdown is the rentiers, whose share of the economy always increases.

6.4.4 Further Developments and Generalizations

Keen (1995) studied a generalization of the basic model shown above that takes into consideration Minsky’s perspective of the government’s role in stabilizing the economy by (1) preventing the euphoric expectations of both entrepreneurs and rentiers during the boom and (2) enabling debt repayment by entrepreneurs by increasing cash flows during the slump. Further equations and definitions are introduced, and the results of the simulations show that the government plays the role of


Figure 6.4 Unstable evolution of the Keen model with high interest rate. The unlabeled axis at the top of the plot is wages. The bottom-up motion of the 3D system starts with cycles that become more and more intense until a vortex is reached. Then, a decrease in employment leads to a further cycle where wages and employment keep on diminishing, whereas the bankers’ share always increases. The final situation is a collapse of the system due to a debt-induced breakdown. Reprinted from Keen (1995, fig. 8) with permission from Taylor & Francis Ltd

a countercyclical stabilizer by increasing taxation during the euphoric moments of the boom. In Keen’s words:

Government intervention greatly diminishes the possibility of complete breakdown, but it does not eliminate cycles. Instead, the system displays apparently random, irregular cycles . . . The economic interpretation of this apparently bizarre behavior is that objectives of the various economic actors are not consistent. (1995, p. 628)

The Keen model with government intervention is considerably more complex, since it introduces variables such as government spending, debt, and taxation. This turns the three dimensions of the model without government intervention into a system with six dimensions, as compared with the two of the Goodwin model. This probably creates various attractor points in what effectively becomes a chaotic dynamical system. The Keen model has been further explored by the original author himself by introducing Ponzi finance and other variables (Keen, 2000, 2009a, 2013a, 2013b, 2018). Grasselli and Costa Lima (2012) carried out an extensive study of the


dynamic properties of various possibilities and generalizations of the Keen model, presenting numerical examples of several properties obtained with different aspects of this class of models, as well as determining their equilibria and local stabilities. A nonmathematical discussion of the conceptual basis of contemporary debt-induced economic instabilities, as well as a pedagogical exposition of Minsky’s ideas, can be found in Keen (2017; see also Ehnts, 2018). Minsky’s ideas on economic dynamics have been experiencing greater impetus lately due to their ability to describe economic reality. This interest led to the development of programs aimed at modeling the dynamics of economic and financial instability based on Minsky’s concepts, now possible via an open source software package called Minsky,⁶ which is based on the general concepts behind Keen-type models. Dynamic monetary modeling using the program Minsky is discussed in Keen (2016).

6.5 Other Extensions of the Goodwin Model

Since its inception over half a century ago, the Goodwin model has induced a large literature. It is beyond the scope of this book to provide a comprehensive review of all studies based on it, so this section will briefly mention, in no particular order, a small group of follow-up papers on Goodwin-type models that have not yet been presented. Combined with the studies discussed above, this set of articles should provide the interested reader with a bibliographic starting point for further studies.

6.5.1 Palomba Model

Although the first application of the Lotka–Volterra predator–prey model to economics is generally attributed to Richard Goodwin (1967), the Italian economist Giuseppe Palomba (1908–1986) had in fact used these equations in an economic model much earlier. The Palomba model envisages an economy made only of capital and consumption goods, where the former tends to increase. So, part of the consumption goods is diverted to capital equipment.
With only these few assumptions, Palomba wrote a model with the same system of coupled ordinary first-order differential equations given by Eqs. (5.25) and (5.26), but whose variables u and v were respectively interpreted as quantities of capital and consumer goods (Palomba, 1939, pp. 92–99; see also: Gandolfo, 2008; Gandolfo, 2009, section 23.4.4; Petracca, 2016).
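The shared mathematical skeleton can be made concrete with a minimal simulation. The sketch below integrates a generic Lotka–Volterra system with the coupled structure of Eqs. (5.25) and (5.26); the parameter values and initial conditions are arbitrary placeholders, and the run checks the system’s well-known first integral, which is constant along any closed orbit (the cycles appearing in both Palomba’s and Goodwin’s readings).

```python
import math

# Generic Lotka-Volterra system with the coupled structure of Eqs. (5.25)
# and (5.26); all parameter values are arbitrary placeholders. Under
# Palomba's (1939) reading u and v are quantities of capital and consumer
# goods; under Goodwin's (1967) reading, income share and employment.

K1, K2, K3, K4 = 1.0, 0.5, 0.75, 0.25

def rhs(u, v):
    return u * (-K1 + K2 * v), v * (K3 - K4 * u)

def invariant(u, v):
    # First integral of the Lotka-Volterra system: constant along any orbit.
    return K3 * math.log(u) - K4 * u + K1 * math.log(v) - K2 * v

def rk4_step(u, v, h):
    # Classical fourth-order Runge-Kutta step.
    du1, dv1 = rhs(u, v)
    du2, dv2 = rhs(u + 0.5 * h * du1, v + 0.5 * h * dv1)
    du3, dv3 = rhs(u + 0.5 * h * du2, v + 0.5 * h * dv2)
    du4, dv4 = rhs(u + h * du3, v + h * dv3)
    u += h * (du1 + 2 * du2 + 2 * du3 + du4) / 6.0
    v += h * (dv1 + 2 * dv2 + 2 * dv3 + dv4) / 6.0
    return u, v

u, v = 1.0, 1.0
h0 = invariant(u, v)
for _ in range(20000):            # integrate to t = 20 with step h = 0.001
    u, v = rk4_step(u, v, 0.001)
print(u, v, invariant(u, v) - h0)  # the invariant drift should be tiny
```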

6 https://sourceforge.net/projects/minsky


Palomba’s (1939) analysis indicated that the parameters of the system should be considered as time-dependent continuous functions. As noted above (p. 238), Moura and Ribeiro (2013) also independently reached the same conclusion. Moreover, the Palomba model is more general in the sense that it initially leaves the signs of the parameters unspecified, and in effect concludes that business cycles are an endogenous phenomenon in an economy due to its nonlinearity, a result similarly reached by the various recent nonlinear models examined above. Hence, trade cycles cannot be attributed to exogenous “shocks,” a conclusion that effectively dismisses linear models as descriptors of economic reality. Thus, Palomba (1939) had in fact proposed a third way of interpreting Eqs. (5.25) and (5.26): not as a class conflict, but as a conflict between two different kinds of interdependent goods produced by the economy.

6.5.2 DHMP and Other Similar Extensions

The Desai–Henry–Mosley–Pemberton (DHMP) extension of the Goodwin model was proposed by Desai et al. (2006) under the realization that the original model can produce solutions outside the u–v domain [0,100] × [0,100]. As noted above (p. 218), in principle such situations do not make sense. The DHMP extension does not allow these spurious results, by imposing conditions (Desai et al., 2006, section 2) on the right-hand side of Eqs. (5.25) and (5.26). It also relaxes two other hypotheses: profits are not always invested, and the Phillips curve is nonlinear. The model is then described by two nonlinear versions of the right-hand side of Eqs. (5.25) and (5.26), but at the cost of introducing three additional parameters, of which only one has a clear economic interpretation: “the maximum share of labour that capitalists would tolerate” (see also Moura and Ribeiro, 2013, section 2.2). The DHMP extension was tested against Brazilian income data from 1981 to 2009 by Moura and Ribeiro (2013, section 5.2).
The results showed a poor empirical performance of this extension as compared to the original model. One fitted parameter value is even in flat contradiction with the hypotheses of the model: they predict a positive value, whereas the fitting returned a negative one. Moreover, the model has so many additional parameters that even after fitting the quantities (u̇/u vs. v) and (v̇/v vs. u) to data, two parameters remain unknown and require determination by another, also unknown, expression. As a result of these modifications, although the DHMP extension followed most of the original path of the Goodwin model by assuming from the start several supposedly reasonable economic conditions, in order to avoid some of its theoretical shortcomings it added so many more unknown


quantities that its empirical credentials, that is, its ability to describe real-world data, remain inconclusive. Other authors pointed out the same difficulty in the Goodwin model, that is, its possible solutions outside the u–v domain [0,100] × [0,100], and proposed different changes to avoid it. Blatt (1983, p. 211) suggested that a floor level of net investment ought to be introduced into the model. The Harvie–Kelmanson–Knapp (HK²) extension of the Goodwin model (Harvie et al., 2006) acknowledged the same problem of the main variables taking values above 100% in their trajectories in the phase portrait. They also pointed out the unrealistic symmetry of the Goodwin cycles, as previously noted by Boyd and Blatt (1988). The HK² extension introduced into the Goodwin model the biological concept of carrying capacity in order to bound the Goodwin variables (Harvie et al., 2006, pp. 56, 61). This concept, common in population dynamics, represents the environmental factors that prevent the prey population from growing above a certain size even in the absence of predators, given the water, habitat, food, and other necessities available in the environment; it is, in effect, the environment’s maximal load. It is often represented by the logistic curve or similar S-shaped, or sigmoid, functions such as the Gompertz curve. In their approach they simply multiplied Eqs. (5.25) and (5.26) by two functions g₁(u) and g₂(v), respectively, where g₁(u) → 0 as u → 100, with g₁(u) > 0 for u < 100, and g₂(v) → 0 as v → 100, with g₂(v) > 0 for v < 100. The HK² extension chose two different forms for the functions g₁(u) and g₂(v) and obtained their stationary points, the characteristic equation for the eigenvalues, and an approximate expression for the period of the cycles. Numerical simulations were carried out for the Goodwin model and both proposed extensions, showing that they keep the variables u and v within the desired bounds and result in asymmetric cycles.
However, there was no attempt to test the models against real-world data.

6.5.3 Sordi–Vercelli Model

Sordi and Vercelli’s (2014) study of the Goodwin model is essentially a proposed modification of the Keen model assuming that investment financed solely by debt is not enough to explain some features of real economic systems. The Sordi–Vercelli model basically considers that “the goods market may be characterized by a disequilibrium state,” proposing then an alternative approach to Minsky’s financial instability hypothesis where “a basic role is played by both the liquidity and solvency indexes of firms,” although they initially ignored the solvency aspect for simplicity (Sordi and Vercelli, 2014, p. 326). They started their analysis by arguing that, as suggested by Blatt (1983, p. 213), assuming a nonlinear specification for the function f₁(v) in Eq. (5.8) plus some


conditions on the parameters is enough to prevent the variable v from going above 100. Nevertheless, that still leaves open the possibility of u going above this value, which they argued to be less serious because, if disinvestment occurs, total wages may exceptionally be above output. After recalling the basic hypotheses that led to the Keen (1995) model and the analysis made by Grasselli and Costa Lima (2012), the authors proposed an alteration of Eq. (5.12) so that Eq. (5.13) becomes

    I = \dot{K} = K_{\rm des} - K = \bar{\beta}\,Y_{\rm exp} - K,    (6.45)

where

    Y_{\rm exp} = Y + \tau \dot{Y}.    (6.46)

Here K_des is the desired capital stock, β̄ is the desired capital to output ratio, and Y_exp is the expected output. Eq. (6.45) is called the flexible accelerator (Gandolfo, 2009, p. 224). Since in Eq. (5.12) the term U/100 is the fractional profit share, or the capitalists’ savings, the assumption made in Eq. (6.45) means that investment is independent of the savings made by capitalists. This allows a possible imbalance between the profit share and investment, which in turn may lead to a lack of demand or supply in the goods market. The Sordi–Vercelli model further assumes a functional form for Eq. (5.8) such that the new Phillips curve for the labor bargaining power depends not only on the employment rate v, but also on its growth rate, as follows:

    \frac{\dot{w}}{w} = \frac{f_4(v, \dot{v}/v)}{\tau}.    (6.47)

As a result of these modifications, a 3D system similar to that of the Keen model is maintained, but with an investment function different from the one proposed in Eq. (6.34). Sordi and Vercelli (2014) analyzed the trajectories, equilibrium points, and local stability of their model by means of extensive numerical simulations. However, there is no attempt to test it against real-world data. They concluded that debt is a crucial factor that may accelerate the growth of output and capital accumulation. They also claimed that their model clarifies the endogenous mechanism of progressive deterioration of liquidity during the upswing, which eventually leads to a downturn.

6.5.4 Other Goodwin Extensions

Desai (1973) proposed three extensions of the Goodwin model. The first extension basically rewrites the bargaining equation (5.8) by adding a price-adjustment equation. The second modification alters Eq. (5.2) by no longer assuming the capital to


output ratio as constant, but rewriting β as a function of the employment ratio v, which in fact means giving more flexibility to technology. The third extension adds inflation to the model by adding it to the bargaining equation. The study only deals with the stability of these extensions, as there is no numerical simulation or any attempt to test the model empirically. Van der Ploeg (1983) also extended the Goodwin model, in theory only, by adding terms describing workers who are prepared to sacrifice some real wage growth for fear of redundancies; inflation, which increases accumulation and redistributes income in favor of capitalists; and adaptive expectations, in which prices are perfectly anticipated. There was no attempt at testing the proposed extensions, as only numerical simulations were presented. The author concluded that in some situations the conflict is damped, but in others there are exploding cycles of conflict. He closes his analysis by stating the following regarding the use of the predator–prey model to describe the dynamics of economic systems:

It produces a perpetual cycle of conflict and highlights the symbiotic contradictions of capitalism, since capitalists and workers struggle for a share in the national income and at the same time each one needs the other to survive. (van der Ploeg, 1983, p. 277)

Colacchio et al. (2007) removed the hypothesis of Harrod-neutral technical progress, that is, they did not assume a constant capital to output ratio in Eq. (5.2), and by doing so reestablished the structural stability of the Goodwin model. Bella (2013) studied generalized Hopf bifurcations in the Goodwin model. Serebriakov and Dohnal (2017) discussed the Goodwin model qualitatively, obtaining all of its possible developments in terms of the two variables u and v.

Postface

The variety of theories and models discussed throughout this book shows that physicists have been studying the income distribution problem for quite some time, in parallel with similar work done by economists. Unfortunately there has been little interaction between these two research communities, although their interplay may potentially be very fruitful. Four factors may be listed as the possible main reasons for this lack of interaction. First, physicists are primarily interested in dynamics. Hence, they have little appreciation for equilibrium situations, and even less for comparative statics. Cases of equilibrium in dynamic models are the exception, rather than the rule, and they are mostly not particularly interesting or enlightening. This factor immediately distances physicists from most economists, since the latter mostly embrace neoclassical economics, a theory still obsessed with equilibrium and statics. Second, as emphasized over and over throughout this book, physics is an empirical science whose heart is measurement and the empirical verification of models. Therefore, in physics the only way one can conclude whether a concept is correct is by means of data testing, since no amount of argument about theory can say whether a concept is right or wrong. To suppose that theoretical discussions solve scientific dilemmas goes against everything that physics has stood for over hundreds of years. Mathematics is essential for physical modeling, but physicists in general do not care much for mathematical rigor in itself, something that appears to have become another obsession in the economic literature. Third, the results obtained by physicists who studied the income distribution problem through either statistical econophysics or economic cycles show that social classes do matter in economic analysis.
The approach of neoclassical economics of considering society as being constituted by households and firms (consumers and producers) does not seem to bring forth useful results for the income distribution problem, because a family that belongs to the top 1% is in a very different economic position from another in the remaining 99% as far as


income distribution dynamics is concerned. These two families cannot be grouped together because their incomes are distributed very differently. Therefore, economic classes and class conflict, old concepts of political economy that neoclassical economics ignores, are essential to understand income distribution. Finally, there is compelling evidence coming from the results of statistical econophysics and from Piketty’s analyses that, contrary to the common formulation, growing inequality is not caused by the poor becoming poorer, but by the rich becoming even richer. So, the framing of the problem should be reversed: not as “inequality and poverty,” but as “inequality and wealth.” It seems that neoclassical economics has neither the inclination (Milanovic, 2007; Milanovic, 2010, section 1.10) nor the analytical tools capable of dealing with the income distribution problem from this perspective. That does not mean that neoclassical economics is an empty box, but that it has become less of a scientific endeavor and more of a dogmatic ideology turned to serve and justify specific interests. Due to this, the science of economics has, to some extent, stopped progressing, because its theories have divorced themselves from real-world phenomena. But not its applications: indeed, economic engineering has developed original financial methods in order to channel most benefits of economic development toward a few interested parties. The end result of this process is that economics as a science entered an intellectual crisis from which it has not yet emerged. The theories and models proposed by econophysicists, and their criticisms of academic economics, should hopefully help to bring economics back to its scientific path by encouraging it to put aside the vested interests that have imprisoned its science. Otherwise, the last sentence of the quotation below could be viewed as a prophecy.
[It is assumed] that economic theory aims to explain actual economic events, and to aid us in bettering economic performance. However, this is not the only conceivable aim. The Roman Catholic Church uses the term “apologetics” to mean a set of writings which purport to give logical reasons for acceptance of the dogmas of the Catholic faith. If economic theory is intended primarily as apologetics, in this sense, for laissez-faire policies by the State, then current economic theory is already very well adapted to that purpose. Improvement of that theory should then be neither necessary nor desirable; it might even be impossible. (Boyd and Blatt, 1988, p. 109; quotation marks in the original)

References

Abramowitz, M., and Stegun, I. A. 1965. Handbook of Mathematical Functions. New York: Dover. 9th Dover printing of the 1964 original work published by the National Bureau of Standards. Conforms to the 10th (December 1972) printing by the Government Printing Office with additional corrections. Abrassart, Arthur Eugene. 1967. Dimensional Analysis in Economics. Ph.D. thesis, University of Illinois. Abreu, Everton M. C., Moura, N. J., Jr., Soares, Abner D., and Ribeiro, Marcelo Byrro. 2019. Oscillations in the Tsallis Income Distribution. Physica A, 533, 121967, arXiv:1706.10141v2. Acemoglu, Daron. 2009. Introduction to Modern Economic Growth. Princeton University Press. Aitchison, J., and Brown, J. A. C. 1957. The Lognormal Distribution. 1st ed. University of Cambridge Department of Applied Economics Monograph: 5. Cambridge: Cambridge University Press. Alexander, Herbert B. 1922. Brazilian and United States Slavery Compared. Journal of Negro History, 7(4), 349–364. Aliber, Robert Z., and Kindleberger, Charles P. 2015. Manias, Panics and Crashes: A History of Financial Crises. 7th ed. Palgrave Macmillan. Anderson, P. W. 1972. More Is Different. Science, 177(4047), 393–396. Angle, John. 1983. The Surplus Theory of Social Stratification and the Size Distribution of Personal Wealth. Pages 395–400 of: Proceedings of the Social Statistics Section of the American Statistical Association. Angle, John. 1986a. Coalitions in a Stochastic Process of Wealth Distribution. Pages 259–263 of: Proceedings of the American Statistical Association, Social Statistics Section. American Statistical Association, Alexandria, VA. Angle, John. 1986b. The Surplus Theory of Social Stratification and the Size Distribution of Personal Wealth. Social Forces, 65(2), 293–326. Angle, John. 1990. A Stochastic Interacting Particle System Model of the Size Distribution of Wealth and Income. Pages 279–284 of: Proceedings of the Social Statistics Section of the American Statistical Association. Angle, John.
1992. The Inequality Process and the Distribution of Income to Blacks and Whites. Journal of Mathematical Sociology, 17(1), 77–98. Angle, John. 1993a. An Apparent Invariance of the Size Distribution of Personal Income Conditioned on Education. Pages 197–292 of: Proceedings of the Social Statistics Section of the American Statistical Association.


Angle, John. 1993b. Deriving the Size Distribution of the Personal Wealth from “the Rich Get Richer, the Poor Get Poorer.” Journal of Mathematical Sociology, 18(1), 27–46.
Angle, John. 1996. How the Gamma Law of Income Distribution Appears Invariant under Aggregation. Journal of Mathematical Sociology, 21(4), 325–358.
Angle, John. 1997. A Theory of Income Distribution. Pages 388–393 of: Proceedings of the Section on Government Statistics and Section on Social Statistics of the American Statistical Association.
Angle, John. 1998. Contingent Forecasting of the Size of the Small Income Population in a Recession. Pages 138–143 of: Proceedings of the Section on Government Statistics and Section on Social Statistics of the American Statistical Association.
Angle, John. 1999a. Contingent Forecasting of the Size of a Vulnerable Nonmetro Population. Pages 161–169 of: Proceedings of the 1999 Federal Forecasters’ Conference. Washington, DC: US Government Printing Office.
Angle, John. 1999b. Pervasive Competition: The Dynamics of Incomes and Income Distribution. Pages 331–336 of: Proceedings of the Government Statistics and Social Statistics Sections of the American Statistical Association.
Angle, John. 2000. The Binary Interacting Particle System (bips) Underlying the Maxentropic of the Gamma Law of Income Distribution. Pages 270–275 of: Proceedings of the American Statistical Association, Social Statistics Section. American Statistical Association, Alexandria, VA.
Angle, John. 2001. Modeling the Right Tail of the Nonmetro Distribution of Wage and Salary Income. In: Proceedings of the American Statistical Association, Social Statistics Section. American Statistical Association, Alexandria, VA. [CD-ROM].
Angle, John. 2002a. Contingent Forecasting of Bulges in the Left and Right Tails of the Nonmetro Wage and Salary Income Distribution. In: Proceedings of the 2002 Federal Forecasters’ Conference. Washington, DC: US Government Printing Office.
Angle, John. 2002b. Modeling the Dynamics of the Nonmetro Distribution of Wage and Salary Income as a Function of Its Mean. In: Proceedings of the American Statistical Association, Business and Economics Statistics Section. American Statistical Association, Alexandria, VA. [CD-ROM].
Angle, John. 2002c. The Statistical Signature of Pervasive Competition on Wage and Salary Incomes. Journal of Mathematical Sociology, 26(4), 217–270.
Angle, John. 2003a. The Dynamics of the Distribution of Wage and Salary Income in the Nonmetropolitan U.S. Estadística, 55(164–165), 59–93.
Angle, John. 2003b. Imitating the Salamander: A Model of the Right Tail of the Wage Distribution Truncated by Topcoding. In: Proceedings of the Conference of the Federal Committee on Statistical Methodology. Federal Committee on Statistical Methodology. https://s3.amazonaws.com/sitesusa/wp-content/uploads/sites/242/2014/05/2003FCSM_Angle.pdf (accessed May 4, 2017).
Angle, John. 2004. The Inequality Process. Pages 488–490 of: Lewis-Beck, Michael S., Bryman, Alan, and Liao, Tim Futing (eds.), The SAGE Encyclopedia of Social Science Research Methods, vol. 2. Thousand Oaks, CA: SAGE.
Angle, John. 2005 (Feb.). Speculation: The Inequality Process Is the Competition Process Driving Human Evolution. In: First General Scholarly Meeting. Society for Anthropological Science, Santa Fe, NM.
Angle, John. 2006. The Inequality Process as a Wealth Maximizing Process. Physica A, 367, 388–414.
Angle, John. 2007a (Oct.). The Econophysics of Income Distribution as Economics. Preprint: http://andromeda.rutgers.edu/~jmbarr/EEA2008/angle.pdf (accessed July 24, 2017).

Angle, John. 2007b. The Macro Model of the Inequality Process and the Surging Relative Frequency of Large Wage Incomes. Pages 185–213 of: Chatterjee, Arnab, and Chakrabarti, Bikas K. (eds.), Econophysics of Markets and Business Networks. Berlin: Springer.
Angle, John. 2007c. A Mathematical Sociologist’s Tribute to Comte: Sociology as Science. Footnotes, 35(2), 10–11. www.asanet.org/sites/default/files/fn_2007_02.pdf (accessed May 4, 2017).
Angle, John. 2009. Two Similar Particle Systems of Labor Income Distribution Conditioned on Education. In: JSM Proceedings, Business and Economics Statistics Sections. American Statistical Association, Alexandria, VA. [CD-ROM].
Angle, John. 2012. The Inequality Process versus the Saved Wealth Model: Which Is the More Likely to Imply an Analogue of Thermodynamics in Social Science? Journal of Mathematical Sociology, 36, 156–182.
Angle, John. 2013. How to Win Acceptance of the Inequality Process as Economics? IIM Kozhikode Society and Management Review, 2(2), 117–134. Preprint: https://mpra.ub.uni-muenchen.de/52887/8/MPRA_paper_52887.pdf (accessed May 6, 2017); doi:10.1177/2277975213507831.
Angle, John, Nielsen, F., and Scalas, E. 2010. The Kuznets Curve and the Inequality Process. Pages 125–138 of: Basu, Banasri, Chakravarty, Satya R., Chakrabarti, Bikas K., and Gangopadhyay, Kausik (eds.), Econophysics and Economics of Games, Social Choices and Quantitative Techniques. Milan: Springer.
Aoyama, Hideaki, Fujiwara, Yoshi, Ikeda, Yuichi, Iyetomi, Hiroshi, and Souma, Wataru. 2010. Econophysics and Companies: Statistical Life and Death in Complex Business Networks. Cambridge: Cambridge University Press.
Aristotle. 2012a. On the Heavens. In: Works. Translated to English by J. L. Stocks. www.constitution.org/ari/aristotle-organon+physics.pdf (accessed August 8, 2014).
Aristotle. 2012b. Physics. In: Works. Translated to English by R. P. Hardie and R. K. Gaye. www.constitution.org/ari/aristotle-organon+physics.pdf (accessed August 8, 2014).
Arnold, Barry C. 2015. Pareto Distributions. 2nd ed. New York: Chapman & Hall.
Arnold, V. I. 1986. Catastrophe Theory. 2nd ed. Berlin: Springer.
Arsenault, Natalie, and Rose, Christopher. 2006 (Mar.). Africa Enslaved: Slavery in Brazil. https://liberalarts.utexas.edu/hemispheres/files/pdf/slavery/Slavery_in_Brazil.pdf (accessed September 15, 2018).
Arshad, Sidra, Hu, Shougeng, and Ashraf, Badar Nadeem. 2018. Zipf’s Law and City Size Distribution: A Survey of the Literature and Future Research Agenda. Physica A, 492, 75–92.
Arshad, Sidra, Hu, Shougeng, and Ashraf, Badar Nadeem. 2019. Zipf’s Law, the Coherence of the Urban System and City Size Distribution: Evidence from Pakistan. Physica A, 513, 87–103.
Atkinson, A. B. 1969. The Timescale of Economic Models: How Long Is the Long Run? Review of Economic Studies, 36(2), 137–152.
Atkinson, A. B. 1997. Bringing Income Distribution in from the Cold. Economic Journal, 107, 297–321.
Atkinson, A. B. 2015. Inequality: What Can Be Done? Cambridge, MA: Harvard University Press.
Atkinson, A. B. 2017. Pareto and the Upper Tail of the Income Distribution in the UK: 1799 to the Present. Economica, 84(334), 129–156. doi:10.1111/ecca.12214.
Ayres, Robert U., and Nair, Indira. 1984. Thermodynamics and Economics. Physics Today, Nov., 62–71.
Backhouse, Roger F. 2002. The Penguin History of Economics. London: Penguin.

Baggott, Jim. 2013. Farewell to Reality: How Modern Physics Has Betrayed the Search for Scientific Truth. New York: Pegasus Books.
Ball, Philip. 2004. Critical Mass. New York: Farrar, Straus, and Giroux.
Balzac, Honoré de. 2011. Old Man Goriot. ePenguin. First published in French as Le Père Goriot, 1835. New translation and notes by Olivia McCannon. Introduction by Graham Robb.
Banerjee, Anand, and Yakovenko, Victor M. 2010. Universal Patterns of Inequality. New Journal of Physics, 12, 075032. arXiv:0912.4898v4.
Banerjee, Anand, Yakovenko, Victor M., and Di Matteo, T. 2006. A Study of the Personal Income Distribution in Australia. Physica A, 370, 54–59. arXiv:physics/0601176v1.
Barbosa-Filho, Nelson H., and Taylor, Lance. 2006. Distributive and Demand Cycles in the US Economy – A Structuralist Goodwin Model. Metroeconomica, 57(3), 389–411.
Barnett, William, II. 2004. Dimensions and Economics: Some Problems. Quarterly Journal of Austrian Economics, 7(1), 95–104.
Bella, Giovanni. 2013. Multiple Cycles and the Bautin Bifurcation in the Goodwin Model of a Class Struggle. Nonlinear Analysis: Modelling and Control, 18(3), 265–274.
Bernanke, Ben S. 2004 (Feb. 20). The Great Moderation. Federal Reserve Board. Speech at the Meetings of the Eastern Economic Association, Washington, DC; www.federalreserve.gov/boarddocs/speeches/2004/20040220/ (accessed June 2, 2016).
Bernanke, Ben S. 2015. The Courage to Act: A Memoir of a Crisis and Its Aftermath. New York: Norton.
Bernstein, Peter. 2005. Most Nobel Minds. CFA Institute Magazine, 16(6), 36–43.
Bhattacharya, K., Mukherjee, G., and Manna, S. S. 2005. Detailed Simulation Results for Some Wealth Distribution Models in Econophysics. Pages 111–119 of: Chatterjee, A., Yarlagadda, S., and Chakrabarti, B. K. (eds.), Econophysics of Wealth Distribution. Milan: Springer.
Bird, Kai, and Sherwin, Martin J. 2006. American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer. New York: Vintage.
Blatt, John Markus. 1983. Dynamic Economic Systems: A Post-Keynesian Approach. Armonk, NY: M. E. Sharpe.
Blaug, Mark. 1998. Disturbing Currents in Modern Economics. Challenge, 41(3), 11–34.
Blume, Lawrence E., and Durlauf, Steven N. 2015. Capital in the Twenty-First Century: A Review Essay. Journal of Political Economy, 123(4), 749–777.
Boghosian, Bruce M. 2019. The Inescapable Casino. Scientific American, November 2019, 71–77. Also in: https://www.scientificamerican.com/article/is-inequality-inevitable/ (accessed January 9, 2020).
Boltzmann, Ludwig. 1974. Theoretical Physics and Philosophical Problems. Vienna Circle Collection, vol. 5. Dordrecht: Reidel. Collection of texts originally published from 1892 to 1905. McGuiness, Brian (ed.). Translated from German to English by S. R. Groot.
Borges, E. P. 2004. Empirical Nonextensive Laws for the County Distribution of Total Personal Income and Gross Domestic Product. Physica A, 334, 255–266.
Bose, Indrani, and Banerjee, Subhasis. 2005. A Stochastic Model of Wealth Distribution. Pages 195–198 of: Chatterjee, A., Yarlagadda, S., and Chakrabarti, B. K. (eds.), Econophysics of Wealth Distribution. Milan: Springer.
Bouchaud, J. P. 2008. Economics Needs a Scientific Revolution. Nature, 455, 1181. Also in: Real-World Economics Review, 48 (2008), 291–292. arXiv:0810.5306v1.
Bouchaud, J. P. 2009. The (Unfortunate) Complexity of the Economy. Physics World, Apr., 28–32. arXiv:0904.0805v1.

Bouchaud, J. P. 2015. On Growth-Optimal Tax Rates and the Issue of Wealth Inequalities. Journal of Statistical Mechanics: Theory and Experiment, 2015, P11011. doi:10.1088/1742-5468/2015/11/P11011.
Bouchaud, J. P., and Mézard, Marc. 2000. Wealth Condensation in a Simple Model of Economy. Physica A, 282, 536–545.
Bourguignon, Marcelo, Saulo, Helton, and Fernandez, Rodrigo Nobre. 2016. A New Pareto-Type Distribution with Applications in Reliability and Income Data. Physica A, 457, 166–175.
Boxer, C. R. 1969. Brazilian Gold and British Traders in the First Half of the Eighteenth Century. Hispanic American Historical Review, 49(3), 454–472.
Boyce, William E., DiPrima, Richard C., and Meade, Douglas B. 2017. Elementary Differential Equations and Boundary Value Problems. 11th ed. New York: Wiley.
Boyd, Ian. 1986. A Mathematical Model of the Business Cycle. Ph.D. thesis, School of Mathematics, Faculty of Science, University of New South Wales. http://unsworks.unsw.edu.au/rep-dis/dpds/rem/unsworks:51390 (accessed January 4, 2019).
Boyd, Ian. 1987. A Mathematical Model of the Business Cycle. Bulletin of the Australian Mathematical Society, 36(2), 331–332.
Boyd, Ian, and Blatt, John Markus. 1988. Investment Confidence and Business Cycles. Berlin: Springer.
Brealey, Richard A., Myers, Stewart C., and Marcus, Alan J. 2001. Fundamentals of Corporate Finance. 3rd ed. New York: McGraw-Hill.
Brian Arthur, W. 2014. Complexity Economics: A Different Framework for Economic Thought. In: Complexity and the Economy. Oxford: Oxford University Press. http://tuvalu.santafe.edu/~wbarthur/Papers/Comp.Econ.SFI.pdf (accessed January 6, 2015).
Brzezinski, Michal. 2014. Do Wealth Distributions Follow Power Laws? Evidence from the “Rich Lists.” Physica A, 406, 155–162.
Buchanan, Mark. 2013. Forecast: What Physics, Meteorology, and the Natural Sciences Can Teach Us about Economics. New York: Bloomsbury.
Buiter, Willem. 2009. The Unfortunate Uselessness of Most “State of the Art” Academic Monetary Economics. https://econpapers.repec.org/paper/pramprapa/58407.htm (accessed November 7, 2019).
Calderín-Ojeda, Enrique, Azpitarte, Francisco, and Gómez-Déniz, Emilio. 2016. Modelling Income Data Using Two Extensions of the Exponential Distribution. Physica A, 461, 756–766.
Cattani, Antonio David. 2009a. Fraudes corporativos y apropiación de la riqueza. Convergencia, 16(51), 59–84. www.scielo.org.mx/scielo.php?pid=S1405-14352009000300004&script=sci_arttext (accessed August 19, 2017).
Cattani, Antonio David. 2009b. Riqueza e Desigualdades. Caderno CRH, 22, 547–561. doi:10.1590/S0103-49792009000300009.
Cercignani, Carlo. 1998. Ludwig Boltzmann: The Man Who Trusted Atoms. Oxford: Oxford University Press.
Ceriani, L., and Verme, P. 2012. The Origins of the Gini Index: Extracts from Variabilità e Mutabilità (1912) by Corrado Gini. Journal of Economic Inequality, 10, 421–443.
Cerqueti, Roy, and Ausloos, Marcel. 2015. Evidence of Economic Regularities and Disparities of Italian Regions from Aggregated Tax Income Size Data. Physica A, 421, 187–207.
Chakrabarti, B. K., Chakraborti, A., Chakravarty, S. T., and Chatterjee, A. 2013. Econophysics of Income and Wealth Distributions. Cambridge: Cambridge University Press.
Chakraborti, A. 2002. Distributions of Money in Model Markets of Economy. International Journal of Modern Physics C, 13, 1315–1321. arXiv:cond-mat/0205221v1.

Chakraborti, A., and Chakrabarti, B. K. 2000. Statistical Mechanics of Money: How Saving Propensity Affects Its Distribution. European Physical Journal B, 17, 167–170. arXiv:cond-mat/0004256v2.
Chami Figueira, F., Moura, N. J., Jr., and Ribeiro, Marcelo Byrro. 2011. The Gompertz–Pareto Income Distribution. Physica A, 390, 689–698. arXiv:1010.1994v1.
Champernowne, D. G. 1953. A Model of Income Distribution. Economic Journal, 63(250), 318–351.
Chang, Ha-Joon. 2010. 23 Things They Don’t Tell You about Capitalism. New York: Bloomsbury.
Chang, Ha-Joon. 2014. Economics: The User’s Guide. New York: Bloomsbury.
Chatterjee, A., Yarlagadda, S., and Chakrabarti, B. K. (eds.). 2005. Econophysics of Wealth Distribution. Milan: Springer.
Chatterjee, Arnab. 2015. Socio-economic Inequalities: A Statistical Physics Perspective. Pages 287–324 of: Abergel, Frédéric, Aoyama, Hideaki, Chakrabarti, Bikas K., Chakraborti, Anirban, and Ghosh, Asim (eds.), Econophysics and Data Driven Modelling of Market Dynamics. New York: Springer.
Chatterjee, Arnab, and Chakrabarti, Bikas K. 2007. Kinetic Exchange Models of Income and Wealth Distributions. European Physical Journal B, 60, 135–149. arXiv:0709.1543v2.
Chatterjee, Arnab, Ghosh, Asim, and Chakrabarti, Bikas K. 2016. Socio-economic Inequality and Prospects of Institutional Econophysics. arXiv:1611.00723v2.
Chatterjee, Arnab, Ghosh, Asim, and Chakrabarti, Bikas K. 2017. Socio-economic Inequality: Relationship between Gini and Kolkata Indices. Physica A, 466, 583–595. arXiv:1606.03261v2.
Chen, Justin, and Yakovenko, V. M. 2007. Computer Animations of Money Exchange Models. http://physics.umd.edu/~yakovenk/econophysics/animation.html (accessed August 1, 2017).
Chotikapanich, Duangkamon (ed.). 2008. Modeling Income Distributions and Lorenz Curves. New York: Springer.
Christian Silva, A. 2005. Applications of Physics to Finance and Economics: Returns, Trading Activity and Income. Ph.D. thesis, Department of Physics, University of Maryland. arXiv:physics/0507022v1.
Christian Silva, A., and Yakovenko, Victor M. 2005. Temporal Evolution of the “Thermal” and “Superthermal” Income Classes in the USA during 1983–2001. Europhysics Letters, 69(2), 304–310. arXiv:cond-mat/0406385v3.
Clementi, F., and Gallegati, M. 2005a. Pareto’s Law of Income Distribution: Evidence for Germany, the United Kingdom, and the United States. Pages 3–14 of: Chatterjee, A., Yarlagadda, S., and Chakrabarti, B. K. (eds.), Econophysics of Wealth Distribution. Milan: Springer. arXiv:physics/0504217v3.
Clementi, F., and Gallegati, M. 2005b. Power Law Tails in the Italian Personal Income Distribution. Physica A, 350, 427–438. arXiv:cond-mat/0408067v1.
Clementi, F., and Gallegati, M. 2016. New Economic Windows on Income and Wealth: The κ-Generalized Family of Distributions. arXiv:1608.06076v1.
Clementi, F., Di Matteo, T., and Gallegati, M. 2006. The Power-Law Tail Exponent of Income Distribution. Physica A, 370, 49–53. arXiv:physics/0603061v1.
Clementi, F., Gallegati, M., and Kaniadakis, G. 2007. κ-Generalised Statistics in Personal Income Distribution. European Physical Journal B, 57, 187–193. arXiv:physics/0607293v2.

Clementi, F., Gallegati, M., and Kaniadakis, G. 2009. A κ-Generalized Statistical Mechanics Approach to Income Analysis. Journal of Statistical Mechanics, Feb., P02037. arXiv:0902.0075v2.
Clementi, F., Gallegati, M., and Kaniadakis, G. 2010. A Model of Personal Income Distribution with Application to Italian Data. Empirical Economics, 39, 559–591.
Clementi, F., Gallegati, M., and Kaniadakis, G. 2012a. A Generalized Statistical Model for the Size Distribution of Wealth. Journal of Statistical Mechanics, Dec., P12006. arXiv:1209.4787v2.
Clementi, F., Gallegati, M., and Kaniadakis, G. 2012b. A New Model of Income Distribution: The κ-Generalized Distribution. Journal of Economics, 105, 63–91.
Clementi, F., Di Matteo, T., Gallegati, M., and Kaniadakis, G. 2008. The κ-Generalised Distribution: A New Descriptive Model for the Size Distribution of Incomes. Physica A, 387, 3201–3208. arXiv:0710.3645v4.
Clementi, F., Gallegati, M., Kaniadakis, G., and Landini, S. 2016. κ-Generalised Models of Income and Wealth Distributions: A Survey. European Physical Journal Special Topics, 225, 1959–1984. arXiv:1610.08676.
Coelho, Ricardo, Richmond, Peter, Barry, Joseph, and Hutzler, Stefan. 2008. Double Power Laws in Income and Wealth Distributions. Physica A, 387(15), 3847–3851. arXiv:0710.0917v1.
Colacchio, Giorgio, Sparro, Marco, and Tebaldi, Claudio. 2007. Sequences of Cycles and Transitions to Chaos in a Modified Goodwin’s Growth Cycle Model. International Journal of Bifurcation and Chaos, 17(6), 1911–1932.
Colander, D., Goldberg, M., Haas, A., Juselius, K., Kirman, A., Lux, T., and Sloth, B. 2009. The Financial Crisis and the Systemic Failure of the Economics Profession. Critical Review, 21, 249–267. Also at www.econometica.it/events/2009/30-11/prof_Kirman.pdf. Earlier version (with Hans Föllmer), The Financial Crisis and the Systemic Failure of Academic Economics, Kiel Institute Working Paper No. 1489 (February 2009), https://ideas.repec.org/p/zbw/ifwkwp/1489.html (accessed November 7, 2019).
Conan Doyle, Arthur. 1892. Adventures of Sherlock Holmes, Adventure I: A Scandal in Bohemia. www.gutenberg.org/files/48320/48320-0.txt (accessed February 5, 2018).
Conde-Saavedra, G., Iribarrem, A., and Ribeiro, Marcelo Byrro. 2015. Fractal Analysis of the Galaxy Distribution in the Redshift Range 0.45 ≤ z ≤ 5.0. Physica A, 417, 332–344. arXiv:1409.5409.
Conference Board of Canada. 2011 (Sept.). Hot Topic: World Income Inequality. Is the World Becoming More Unequal? Conference Board of Canada. www.conferenceboard.ca/Files/hcp/pdfs/hot-topics/worldinequality.pdf (accessed March 7, 2019).
Courtault, Jean-Michel, Kabanov, Yuri, Bru, Bernard, Crépel, Pierre, Lebon, Isabelle, and Marchand, Arnaud Le. 2000. Louis Bachelier on the Centenary of Théorie de la Spéculation. Mathematical Finance, 10(3), 341–353.
Cowell, Frank A., Ferreira, Francisco H. G., and Litchfield, Julie A. 1999. Income Distribution in Brazil 1981–1990: Parametric and Non-Parametric Approaches. Journal of Income Distribution, 8(1), 63–76. doi:10.1016/S0926-6437(99)80004-3.
Cristelli, Matthieu, Gabrielli, Andrea, Tacchella, Andrea, Caldarelli, Guido, and Pietronero, Luciano. 2013. Measuring the Intangibles: A Metrics for Economic Complexity of Countries and Products. PLOS ONE, 8(8), e70726.
Cristelli, Matthieu, Tacchella, Andrea, and Pietronero, Luciano. 2014. An Overview of the New Frontiers of Economic Complexity. Ch. 8, pages 147–159 of: Abergel, F., et al. (eds.), Econophysics of Agent-Based Models. Berlin: Springer.

Daase, Christopher, and Kessler, Oliver. 2007. Knowns and Unknowns in the “War on Terror”: Uncertainty and the Political Construction of Danger. Security Dialogue, 48(4), 411–434.
D’Agostini, Giulio. 2003. Bayesian Reasoning in Data Analysis. Singapore: World Scientific.
Daly, Herman, and Rufus, Anneli. 2008. The Crisis. Adbusters, Nov. 19. www.adbusters.org/magazine/81/the_crisis.html (accessed August 20, 2014).
Davidson, Paul. 2010a. Black Swans and Knight’s Epistemological Uncertainty. Journal of Post Keynesian Economics, 32(4), 567–570.
Davidson, Paul. 2010b. The Non-Existent Hand. London Review of Books, 32(10). Letters. www.lrb.co.uk/v32/n10/letters (accessed January 15, 2018).
Davies, James B. 2015. Book Review of “Capital in the Twenty-First Century.” Journal of Economic Inequality, 13, 155–160.
de Jong, Frits J., and Quade, Wilhelm. 1967. Dimensional Analysis for Economists. Amsterdam: North-Holland. With a mathematical appendix on the algebraic structure of dimensional analysis by Wilhelm Quade.
de Maio, Fernando G. 2007. Income Inequality Measures. Journal of Epidemiology and Community Health, 61(10), 849–852.
de Oliveira, Paulo Murilo Castro. 2017 (May). Poor versus Rich: Who Should Pay More Taxes? Invited paper presented at the 38th Max Born Symposium “Crossing Frontiers in Science: A Physicist’s Approach,” in celebration of Andrzej Pękalski’s 80th birthday.
Deaton, Angus. 1997. The Analysis of Household Surveys. Baltimore: Johns Hopkins University Press.
Delfaud, Pierre. 1986. Les Théories Économiques. Paris: Presses Universitaires de France.
Desai, M. 1973. Growth Cycles and Inflation in a Model of the Class Struggle. Journal of Economic Theory, 6, 527–545.
Desai, M. 1984. An Econometric Model of the Share of Wages in National Income: UK 1855–1965. Pages 253–277 of: Goodwin, R. M., Krüger, M., and Vercelli, A. (eds.), Nonlinear Models of Fluctuations and Growth. Berlin: Springer.
Desai, M., Henry, B., Mosley, A., and Pemberton, M. 2006. A Clarification of the Goodwin Model of the Growth Cycle. Journal of Economic Dynamics and Control, 30, 2661–2670.
Diamond, Jared. 1997. Guns, Germs, and Steel. New York: Norton.
Dibeh, Ghassan, Luchinsky, Dmitry G., Luchinskaya, Daria D., and Smelyanskiy, Vadim N. 2007. A Bayesian Estimation of a Stochastic Predator–Prey Model of Economic Fluctuations. In: Kertész, J., Bornholdt, S., and Mantegna, R. N. (eds.), Noise and Stochastics in Complex Systems and Finance, vol. 6601. Proceedings SPIE.
Dizikes, Peter. 2010 (June 2). Explained: Knightian Uncertainty. Tech. rept. Massachusetts Institute of Technology. MIT News Office. http://news.mit.edu/2010/explained-knightian-0602 (accessed June 18, 2016).
Domma, Filippo, Condino, Francesca, and Giordano, Sabrina. 2018. A New Formulation of the Dagum Distribution in Terms of Income Inequality and Poverty Measures. Physica A. doi:10.1016/j.physa.2018.07.027.
Doyne Farmer, J., and Geanakoplos, John. 2009. The Virtues and Vices of Equilibrium and the Future of Financial Economics. Complexity, 14(3), 11–38.
Doyne Farmer, J., Shubik, Martin, and Smith, Eric. 2005. Is Economics the Next Physical Science? Physics Today, Sept., 37–42.
Drakopoulos, Stavros, and Katselidis, Ioannis. 2015. From Edgeworth to Econophysics: A Methodological Perspective. Journal of Economic Methodology, 22(1), 77–95.

Drăgulescu, A. 2002 (May). Applications of Physics to Economics and Finance: Money, Income, Wealth, and the Stock Market. Ph.D. thesis, Department of Physics, University of Maryland, College Park. arXiv:cond-mat/0307341v2.
Drăgulescu, A., and Yakovenko, V. M. 2000. Statistical Mechanics of Money. European Physical Journal B, 17, 723. arXiv:cond-mat/0001432v4.
Drăgulescu, A., and Yakovenko, V. M. 2001a. Evidence for the Exponential Distribution of Income in the USA. European Physical Journal B, 20, 585. arXiv:cond-mat/0008305v2.
Drăgulescu, A., and Yakovenko, V. M. 2001b. Exponential and Power Law Probability Distributions of Wealth and Income in the United Kingdom and the United States. Physica A, 299, 213. arXiv:cond-mat/0103544v2.
Drăgulescu, A., and Yakovenko, V. M. 2003. Statistical Mechanics of Money, Income, and Wealth: A Short Survey. Pages 180–183 of: Garrido, P. L., and Marro, J. (eds.), Modeling of Complex Systems: Seventh Granada Lectures. AIP Conference Proceedings, vol. 661. New York: American Institute of Physics. arXiv:cond-mat/0211175v1.
Dyson, Freeman. 2004. A Meeting with Enrico Fermi. Nature, 427(6972), 297. www.nature.com/nature/journal/v427/n6972/full/427297a.html (accessed October 1, 2015).
Ehnts, Dirk. 2018. Book Review of Can We Avoid Another Financial Crisis?, by Steve Keen. International Journal of Pluralism and Economics Education, 9(3), 328–330.
Einstein, Albert. 2002. Motives for Research. In: The Collected Papers of Albert Einstein, Translation Volume 7. Princeton: Princeton University Press. Speech delivered at Max Planck’s 60th birthday, April 1918. English translation at http://alberteinstein.info/vufind1/images/einstein/ear01/view/1/4-009.tr_000012786.pdf (accessed November 8, 2019).
Eliazar, Iddo. 2015a. The Sociogeometry of Inequality: Part I. Physica A, 426, 93–115.
Eliazar, Iddo. 2015b. The Sociogeometry of Inequality: Part II. Physica A, 426, 116–137.
Eliazar, Iddo. 2016. Visualizing Inequality. Physica A, 454, 66–80.
Eliazar, Iddo, and Sokolov, Igor M. 2010. Measuring Statistical Heterogeneity: The Pietra Index. Physica A, 389, 117–125.
Ellis, George. 2015. Recognising Top-Down Causation. Ch. 3, pages 17–44 of: Aguirre, Anthony, Foster, Brendan, and Merali, Zeeya (eds.), Questioning the Foundations of Physics: Which of Our Fundamental Assumptions Are Wrong? The Frontiers Collection. New York: Springer.
Ellis, George F. R., Meissner, Krzysztof, and Nicolai, Hermann. 2018. The Physics of Infinity. Nature Physics, 14, 770–772.
Ellis, George, and Silk, Joe. 2014. Defend the Integrity of Science. Nature, 516 (December 18–25), 321–323.
Estola, Matti. 2017. Newtonian Microeconomics: A Dynamic Extension to Neoclassical Micro Theory. New York: Palgrave Macmillan.
Eugene Stanley, H. 2008 (Dec.). Econophysics and the Current Economic Turmoil. APS News. American Physical Society.
Eugene Stanley, H., Amaral, L. A. N., Gabaix, X., Gopikrishnan, P., and Plerou, V. 2001. Similarities and Differences between Physics and Economics. Physica A, 299, 1–15.
Euripides. 2008. Fragments: Aegeus-Meleager. Loeb Classical Library, vol. 504. Edited and translated by Christopher Collard and Martin Cropp. Cambridge, MA: Harvard University Press.
Farmer, Roger E. A. 2016. Prosperity for All: How to Prevent Financial Crises. Oxford: Oxford University Press.
Feller, William. 1971. An Introduction to Probability Theory and Its Applications. Vol. 2. New York: Wiley.

Ferrari-Filho, Fernando, and Conceição, Octavio A. C. 2005. The Concept of Uncertainty in Post Keynesian Theory and in Institutional Economics. Journal of Economic Issues, 39(3), 579–594.
Ferrero, Juan C. 2004. The Statistical Distribution of Money and the Rate of Money Transference. Physica A, 341, 575–585.
Ferrero, Juan C. 2005. The Monomodal, Polymodal, Equilibrium and Nonequilibrium Distribution of Money. Pages 159–167 of: Chatterjee, A., Yarlagadda, S., and Chakrabarti, B. K. (eds.), Econophysics of Wealth Distribution. Milan: Springer.
Ferrero, Juan C. 2010. The Individual Income Distribution in Argentina in the Period 2000–2009: A Unique Source of Non Stationary Data. Science and Culture, 76(9–10), 444–447.
Ferrero, Juan C. 2011. A Statistical Analysis of Stratification and Inequality in the Income Distribution. European Physical Journal B, 80, 255–261.
Feynman, Richard. 1964 (Nov. 11). The Character of Physical Law: The Relation of Mathematics to Physics. Lecture 2 delivered at Cornell University. www.cornell.edu/video/richard-feynman-messenger-lecture-2-relation-mathematics-physics (accessed September 16, 2017).
Feynman, Richard. 1967. The Character of Physical Law. Cambridge, MA: MIT Press.
Feynman, Richard. 1974. Cargo Cult Science. Engineering and Science, June, 10–13.
Fiorio, Carlo V., Mohun, Simon, and Veneziani, Roberto. 2018 (May 4). Social Democracy and Distributive Conflict in the UK. Preprint.
Fix, Blair. 2018. Hierarchy and the Power Law Income Distribution Tail. Journal of Computational Social Science, July 16. doi:10.1007/s42001-018-0019-8.
Flaschel, Peter. 2010. Topics in Classical Micro- and Macroeconomics: Elements of a Critique of Neo-Ricardian Theory. New York: Springer.
Folsom, R. G., and Gonzalez, R. A. 2005. Dimensions and Economics: Some Answers. Quarterly Journal of Austrian Economics, 8(4), 45–65.
Foster, James E., and Wolfson, Michael C. 2010. Polarization and the Decline of the Middle Class: Canada and the U.S. Journal of Economic Inequality, 8(2), 247–273.
Frank, Robert, Bernanke, Ben, Antonovics, Kate, and Heffetz, Ori. 2015. Principles of Macroeconomics. 6th ed. New York: McGraw-Hill.
Fröhlich, Nils. 2011. Dimensional Analysis of Price–Value Deviations. www.tu-chemnitz.de/wirtschaft/vwl2/downloads/paper/froehlich/da-value.pdf (accessed November 5, 2016).
Fujiwara, Yoshi. 2005. Pareto-Zipf, Gibrat’s Laws, Detailed Balance and Their Breakdown. Pages 24–33 of: Chatterjee, A., Yarlagadda, S., and Chakrabarti, B. K. (eds.), Econophysics of Wealth Distribution. Milan: Springer.
Fullbrook, E., and Morgan, Jamie (eds.). 2014. Piketty’s Capital in the Twenty-First Century. College Publications. Also in: special issue on Piketty’s capital, Real-World Economics Review, No. 69, Oct. 2014.
Gabaix, Xavier. 1999. Zipf’s Law for Cities: An Explanation. Quarterly Journal of Economics, 114(3), 739–767.
Gabaix, Xavier. 2009. Power Laws in Economics and Finance. Annual Review of Economics, 1, 255–293.
Gabaix, Xavier. 2016. Power Laws in Economics: An Introduction. Journal of Economic Perspectives, 30(1), 185–206. doi:10.1257/jep.30.1.185.
Gabisch, Günter, and Lorenz, Hans-Walter. 1989. Business Cycle Theory: A Survey of Methods and Concepts. 2nd ed. Berlin: Springer.
Galam, Serge. 2004. Sociophysics: A Personal Testimony. Physica A, 336, 49–55. arXiv:physics/0403122.

Galam, Serge. 2012. Sociophysics: A Physicist’s Modeling of Psycho-Political Phenomena. New York: Springer.
Galilei, Galileo. 1638. Dialogues Concerning Two New Sciences. Indianapolis: Liberty Fund. Translated from the Italian and Latin into English by Henry Crew and Alfonso de Salvio (New York: Macmillan, 1914). http://oll.libertyfund.org/titles/753 (accessed July 6, 2014).
Gallegati, Mauro. 2018. Complex Agent-Based Models. New York: Springer International. doi:10.1007/978-3-319-93858-5. Ch. 2, pages 17–36.
Gallegati, Mauro, Keen, Steve, Lux, Thomas, and Ormerod, Paul. 2006. Worrying Trends in Econophysics. Physica A, 370, 1–6.
Gandolfo, Giancarlo. 1997. Economic Dynamics: Study Edition. Berlin: Springer.
Gandolfo, Giancarlo. 2008. Giuseppe Palomba and the Lotka–Volterra Equations. Rendiconti Lincei, 19(4), 347–357. doi:10.1007/s12210-008-0023-7.
Gandolfo, Giancarlo. 2009. Economic Dynamics. 4th ed. Berlin: Springer.
García-Molina, M., and Herrera-Medina, E. 2010. Are There Goodwin Employment–Distribution Cycles? International Empirical Evidence. Cuadernos de Economía, 29(53), 1–29.
Georgescu-Roegen, Nicholas. 1986. The Entropy Law and the Economic Process in Retrospect. Eastern Economic Journal, 12(1), 3–25.
Ghosh, Asim, Chattopadhyay, Nachiketa, and Chakrabarti, Bikas K. 2014. Inequality in Societies, Academic Institutions and Science Journals: Gini and k-Indices. Physica A, 410, 30–34.
Gibrat, R. 1931. Les Inégalités Économiques. Paris: Librairie du Recueil Sirey.
Gini, Corrado. 1912. Variabilità e mutabilità. Contributo allo Studio delle Distribuzioni e delle Relazioni Statistiche. Bologna: C. Cuppini.
Gini, Corrado. 1914. Sulla misura della concentrazione e della variabilità dei caratteri. Transactions of the Reale Istituto Veneto di Scienze, Lettere ed Arti, 53(2), 1203–1248.
Gini, Corrado. 1921. Measurement of Inequality of Incomes. Economic Journal, 31, 124–126.
Gini, Corrado. 1955. Variabilità e mutabilità. In: Memorie di metodologica statistica. Rome: Libreria Eredi Virgilio Veschi. Reprinted.
Goldstein, Jonathan P. 1999. Predator–Prey Model Estimates of the Cyclical Profit Squeeze. Metroeconomica, 50(2), 139–173.
Gompertz, B. 1825. On the Nature of the Function Expressive of the Law of Human Mortality, and on a New Mode of Determining the Value of Life Contingencies. Philosophical Transactions of the Royal Society of London, 115, 513–585.
Goodwin, Richard M. 1967. A Growth Cycle. Pages 54–58 of: Feinstein, C. H. (ed.), Socialism, Capitalism and Economic Growth. Cambridge: Cambridge University Press.
Gradshteyn, I. S., and Ryzhik, I. M. 2007. Table of Integrals, Series, and Products. 7th ed. New York: Academic Press.
Graham, Carol, and Felton, Andrew. 2006. Inequality and Happiness: Insights from Latin America. Journal of Economic Inequality, 4, 107–122.
Grasselli, Matheus R., and Costa Lima, B. 2012. An Analysis of the Keen Model for Credit Expansion, Asset Price Bubbles and Financial Fragility. Mathematics and Financial Economics, 6, 191–210.
Grasselli, Matheus R., and Maheshwari, Aditya. 2016. Econometric Estimation of Goodwin Growth Models. Preprint.
Grasselli, Matheus R., and Maheshwari, Aditya. 2017. A Comment on “Testing Goodwin: Growth Cycles in Ten OECD Countries.” Cambridge Journal of Economics, 41(6), 1761–1766. arXiv:1803.01527v1.


Grasselli, Matheus R., and Maheshwari, Aditya. 2018. Testing a Goodwin Model with General Capital Accumulation Rate. Metroeconomica, 69(3), 619–643. doi:10.1111/meca.12204; arXiv:1803.01536v1.
Gregory Mankiw, N. 2009. Principles of Economics. 6th ed. Mason: South-Western Cengage Learning.
Grudzewski, W. M., and Rosłanowska-Plichcińska, K. 2013. Application of Dimensional Analysis in Economics. Amsterdam: IOS Press.
Guala, Sebastian. 2009. Taxes in a Wealth Distribution Model by Inelastically Scattering of Particles. Interdisciplinary Description of Complex Systems, 7(1), 1–7. arXiv:0807.4484v1.
Guerra, Renata R., Peña-Ramírez, Fernando A., and Cordeiro, Gauss M. 2017. The Gamma Burr XII Distributions: Theory and Applications. Journal of Data Science, 15, 467–494.
Gupta, Abhijit Kar. 2006. Models of Wealth Distributions – A Perspective. Ch. 6, pages 161–190 of: Chakrabarti, B. K., Chakraborti, A., and Chatterjee, A. (eds.), Econophysics and Sociophysics – Trends and Perspectives. New York: Wiley.
Harari, Yuval Noah. 2014. Sapiens: A Brief History of Humankind. London: Vintage Books.
Häring, Norbert, and Douglas, Niall. 2012. Economists and the Powerful: Convenient Theories, Distorted Facts, Ample Rewards. London: Anthem Press.
Harvie, D. 2000. Testing Goodwin: Growth Cycles in Ten OECD Countries. Cambridge Journal of Economics, 24(3), 349–376.
Harvie, David, Kelmanson, Mark A., and Knapp, David G. 2006. A Dynamical Model of Business-Cycle Asymmetries: Extending Goodwin. Economic Issues, 12, 53–92.
Henle, J. M., Horton, N. J., and Jakus, S. J. 2008. Modelling Inequality with a Single Parameter. Ch. 14, pages 255–269 of: Chotikapanich, Duangkamon (ed.), Modeling Income Distributions and Lorenz Curves. New York: Springer.
Hickel, Jason. 2018. How Britain Stole $45 Trillion from India: And Lied about It. Aljazeera, Dec. 19. www.aljazeera.com/indepth/opinion/britain-stole-45-trillion-india-181206124830851.html (accessed December 31, 2018).
Hillinger, Claude. 2014. Is Capital in the Twenty-First Century, Das Kapital for the Twenty-First Century? In: Fullbrook and Morgan (2014). Also in: Real-World Economics Review, No. 69, Oct. 2014, 131–137.
Hirsch, Morris W., and Smale, Stephen. 1974. Differential Equations, Dynamical Systems, and Linear Algebra. Pure and Applied Mathematics: A Series of Monographs and Textbooks. New York: Academic Press.
Hodrick, Robert J., and Prescott, Edward C. 1997. Postwar U.S. Business Cycles: An Empirical Investigation. Journal of Money, Credit and Banking, 29(1), 1–16.
Hudson, Michael. 2010. The Use and Abuse of Mathematical Economics. Real-World Economics Review. Issue No. 55 (Dec.), 2–22.
Hudson, Michael. 2012. The Bubble and Beyond. Dresden: Islet-Verlag.
Hudson, Michael. 2017. J Is for Junk Economics: A Guide to Reality in an Age of Deception. Dresden: Islet-Verlag.
Ifrah, Georges. 2000. The Universal History of Numbers: From Prehistory to the Invention of Computers. Translated from the French by D. Bellos, E. F. Harding, S. Wood, and I. Monk. New York: Wiley.
Iglesias, J. R., Semeshenko, V., Schneider, E. M., and Gordon, M. B. 2012. Crime and Punishment: Does It Pay to Punish? Physica A, 391(15), 3942–3950.
Inoue, Jun-ichi, Ghosh, Asim, Chatterjee, Arnab, and Chakrabarti, Bikas K. 2015. Measuring Social Inequality with Quantitative Methodology: Analytical Estimates and Empirical Data Analysis by Gini and k Indices. Physica A, 429, 184–204.


Irwin, Neil. 2014. Everything You Need to Know about Thomas Piketty vs. The Financial Times. New York Times, May 30. www.nytimes.com/2014/05/31/upshot/everything-you-need-to-know-about-thomas-piketty-vs-the-financial-times.html (accessed June 17, 2014).
Ispolatov, S., Krapivsky, P. L., and Redner, S. 1998. Wealth Distributions in Asset Exchange Models. European Physical Journal B, 2, 267–276.
Jadevicius, Arvydas, and Huston, Simon. 2014. A "Family of Cycles" – Major and Auxiliary Business Cycles. Journal of Property Investment and Finance, 32(3), 306–323.
Jagielski, M., Czyżewski, K., Kutner, R., and Eugene Stanley, H. 2016. Income and Wealth Distribution of the Richest Norwegian Individuals: An Inequality Analysis. arXiv:1610.08918v1. doi:10.1016/j.physa.2017.01.077.
Jovanovic, Franck, and Schinckus, Christophe. 2013. The Emergence of Econophysics: A New Approach in Modern Financial Theory. History of Political Economy, 45(3), 443–474.
Kakwani, N. 1980. Income Inequality and Poverty. Oxford: Oxford University Press.
Kaniadakis, G. 2001. Non-Linear Kinetics Underlying Generalized Statistics. Physica A, 296, 405–425.
Kaniadakis, G. 2012. Physical Origin of the Power-Law Tailed Statistical Distributions. Modern Physics Letters B, 26, 1250061. arXiv:1206.2250v1.
Kay, John. 2015. Keynes Was Half Right about the Facts. Financial Times, Aug. 4. www.ft.com/content/96a620a8-3a8d-11e5-bbd1-b37bc06f590c (accessed November 16, 2016).
Keen, Steve. 1995. Finance and Economic Breakdown: Modelling Minsky's "Financial Instability Hypothesis." Journal of Post Keynesian Economics, 17(4), 607–635.
Keen, Steve. 2000. The Nonlinear Economics of Debt Deflation. Ch. 5, pages 83–110 of: Barnett, William A., Chiarella, Carl, Keen, Steve, Marks, Robert, and Schnabl, Hermann (eds.), Commerce, Complexity, and Evolution: Topics in Economics, Finance, Marketing, and Management: Proceedings of the Twelfth International Symposium in Economic Theory and Econometrics. International Symposia in Economic Theory and Econometrics. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511896682.008.
Keen, Steve. 2001. Debunking Economics. London: Zed Books.
Keen, Steve. 2007. Conservation "Laws" in Econophysics, and the Non-conservation of Money. http://keenomics.s3.amazonaws.com/debtdeflation_media/2007/09/KeenNonConservationMoney.pdf (accessed August 7, 2017).
Keen, Steve. 2009a. Household Debt: The Final Stage in an Artificially Extended Ponzi Bubble. Australian Economic Review, 42(3), 347–357.
Keen, Steve. 2009b. Mad, Bad and Dangerous to Know. Real-World Economics Review. Issue No. 49 (Mar.), 1–7.
Keen, Steve. 2011a. Debunking Economics. 2nd ed. London: Zed Books.
Keen, Steve. 2011b. Debunking Macroeconomics. Economic Analysis and Policy, 41(3), 147–167.
Keen, Steve. 2012a. Instability in Financial Markets: Sources and Remedies. INET Conference, Berlin. http://keenomics.s3.amazonaws.com/debtdeflation_media/2012/04/KeenINET2012InstabilityFinancialMarkets02.pdf (accessed January 15, 2018).
Keen, Steve. 2012b (Nov. 15). Production, Entropy and Monetary Macroeconomics. Lecture available at www.youtube.com/watch?v=14vVhhNvWX0 (accessed February 14, 2018).
Keen, Steve. 2013a. A Monetary Minsky Model of the Great Moderation and the Great Recession. Journal of Economic Behavior and Organization, 86, 221–235.


Keen, Steve. 2013b. Predicting the "Global Financial Crisis": Post-Keynesian Macroeconomics. Economic Record, 89(285), 228–254.
Keen, Steve. 2016. Modeling Financial Instability. Ch. 4, pages 67–103 of: Malliaris, A. G., Shaw, Leslie, and Shefrin, Hersh (eds.), The Global Financial Crisis and Its Aftermath: Hidden Factors in the Meltdown. Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199386222.003.0004.
Keen, Steve. 2017. Can We Avoid Another Financial Crisis? London: Polity Press.
Keen, Steve. 2018. Kornai and Anti-Equilibrium. Acta Oeconomica, 68, 55–75.
Kennedy, Paul M. 1989. The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000. New York: Vintage Books.
Keynes, John Maynard. https://en.wikiquote.org/wiki/John_Maynard_Keynes (accessed February 4, 2018). The quotation "When my information changes, I alter my conclusions. What do you do, sir?" is widely attributed to Keynes, but there is no reliable record of when he actually said it. See also Kay (2015) above.
Keynes, John Maynard. 1936. The General Theory of Employment, Interest and Money. New York: Harcourt, Brace & World. Republished by Prometheus Books, Great Minds Series, New York, 1997.
Keynes, John Maynard. 1937. The General Theory of Employment. Quarterly Journal of Economics, 51(2), 209–223.
Kim, Minseong. 2015. Dimensional Analysis of Production and Utility Functions in Economics. MPRA Paper No. 61147. https://mpra.ub.uni-muenchen.de/61147.
King, J. E. 2017. The Literature on Piketty. Review of Political Economy, 29(1), 1–17. doi:10.1080/09538259.2016.1173425.
Kirman, Alan. 1992. Whom or What Does the Representative Individual Represent? Journal of Economic Perspectives, 6(2), 117–136.
Kirman, Alan. 2009. Economic Theory and the Crisis. Real-World Economics Review. Issue No. 51 (Dec.), 80–83.
Kirman, Alan. 2010. The Economic Crisis Is a Crisis for Economic Theory. CESifo Economic Studies, 56(4), 498–535.
Kitchin, Joseph. 1923. Cycles and Trends in Economic Factors. Review of Economics and Statistics, 5(1), 10–16.
Klaas, Oren S., Biham, Ofer, Levy, Moshe, Malcai, Ofer, and Solomon, Sorin. 2006. The Forbes 400 and the Pareto Wealth Distribution. Economics Letters, 90, 290–295.
Kleiber, Christian, and Kotz, Samuel. 2003. Statistical Size Distributions in Economics and Actuarial Sciences. New York: Wiley.
Knight, Frank Hyneman. 1921. Risk, Uncertainty and Profit. Boston: Houghton Mifflin.
Kondepudi, Dilip, and Prigogine, Ilya. 2015. Modern Thermodynamics: From Heat Engines to Dissipative Structures. 2nd ed. New York: Wiley.
Krugman, Paul. 2009. How Did Economists Get It So Wrong? New York Times, Sept. 2. www.nytimes.com/2009/09/06/magazine/06Economic-t.html (accessed June 17, 2014).
Krugman, Paul. 2014a. Is Piketty All Wrong? New York Times, May 24. http://krugman.blogs.nytimes.com/2014/05/24/is-piketty-all-wrong/ (accessed June 17, 2014).
Krugman, Paul. 2014b. The Populist Imperative. New York Times, Jan. 23. www.nytimes.com/2014/01/24/opinion/krugman-the-populist-imperative.html (accessed June 17, 2014).
Krusell, Per, and Smith, Anthony A., Jr. 2015. Is Piketty's 'Second Law of Capitalism' Fundamental? Journal of Political Economy, 123(4), 725–748.
Kuhn, Thomas S. 1996. The Structure of Scientific Revolutions. 3rd ed. Chicago: University of Chicago Press.


Kumamoto, Shin-Ichiro, and Kamihigashi, Takashi. 2018. Power Laws in Stochastic Processes for Social Phenomena: An Introductory Review. Frontiers in Physics, 6 (Mar. 15). doi:10.3389/fphy.2018.00020.
Kurz, Heinz D., and Salvadori, Neri. 1995. Theory of Production: A Long-Period Analysis. Cambridge: Cambridge University Press.
Lallouache, Mehdi, Jedidi, Aymen, and Chakraborti, Anirban. 2010. Wealth Distributions: To Be or Not to Be a Gamma? Science and Culture (Kolkata, India), 76(9–10), 478–484. www.scienceandculture-isna.org/sep-oct-2010.htm; arXiv:1004.5109v2.
Langlois, Richard N., and Cosgel, Metin M. 1993. Frank Knight on Risk, Uncertainty, and the Firm: A New Interpretation. Economic Inquiry, 31 (July), 456–465.
Lawrence, Scott, Liu, Qin, and Yakovenko, Victor M. 2013. Global Inequality in Energy Consumption from 1980 to 2010. Entropy, 15, 5565–5579. arXiv:1312.6443v2.
Lee, Kotik K. 1992. Lectures on Dynamical Systems, Structural Stability, and Their Applications. New York: World Scientific.
Lee, Wen-Chung. 1999. Probabilistic Analysis of Global Performances of Diagnostic Tests: Interpreting the Lorenz Curve-Based Summary Measures. Statistics in Medicine, 18, 455–471.
Legrand, Muriel Dal-Pont, and Hagemann, Harald. 2007. Business Cycles in Juglar and Schumpeter. History of Economic Thought, 49(1), 1–18.
Lemons, Don S. 2008. Mere Thermodynamics. Baltimore: Johns Hopkins University Press.
Lemons, Don S. 2013. A Student's Guide to Entropy. Cambridge: Cambridge University Press.
Lighthill, James. 1986. The Recently Recognized Failure of Predictability in Newtonian Dynamics. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 407(1832), 35–50.
Lindert, Peter H. 2000. Three Centuries of Inequality in Britain and America. Ch. 3, pages 167–216 of: Atkinson, Anthony B., and Bourguignon, François (eds.), Handbook of Income Distribution, vol. 1. Amsterdam: North-Holland.
Lorenz, Max O. 1905. Methods of Measuring the Concentration of Wealth. Publications of the American Statistical Association, 9(70), 209–219.
Lucas, Robert E. 2003. Macroeconomic Priorities. American Economic Review, 93(1), 1–14. Presidential address delivered at the 115th meeting of the American Economic Association, January 4, 2003, Washington, DC.
Luke, Yudell L. 1969a. The Special Functions and Their Approximations. Vol. 1. New York: Academic Press.
Luke, Yudell L. 1969b. The Special Functions and Their Approximations. Vol. 2. New York: Academic Press.
Lux, Thomas. 2005. Emergent Statistical Wealth Distributions in Simple Monetary Exchange Models: A Critical Review. Pages 51–60 of: Chatterjee, A., Yarlagadda, S., and Chakrabarti, B. K. (eds.), Econophysics of Wealth Distribution. Milan: Springer.
Lydall, H. F. 1959. The Distribution of Employment Incomes. Econometrica, 27(1), 110–115.
McCauley, Joseph L. 2009. Dynamics of Markets: The New Financial Economics. 2nd ed. Cambridge: Cambridge University Press.
McDonald, J. B. 1984. Some Generalized Functions for the Size Distribution of Income. Econometrica, 52, 647–663.
Maclachlan, Fiona, and Reith, John E. 2008. The Beaman Distribution: A New Descriptive Model for the Size Distribution of Incomes. Journal of Income Distribution, 17(1), 81–86.


McLeay, Michael, Radia, Amar, and Thomas, Ryland. 2014. Money Creation in the Modern Economy. Bank of England Quarterly Bulletin, 54(1), 14–22.
McGrayne, Sharon Bertsch. 2011. The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy. New Haven: Yale University Press.
Mandelbrot, B. B. 1960. The Pareto–Lévy Law and the Distribution of Income. International Economic Review, 1, 79–105.
Mandelbrot, Benoit B., and Hudson, Richard L. 2004. The (Mis)Behaviour of Markets. Prelude written by Richard L. Hudson; throughout the book the "I" voice is that of Mandelbrot. New York: Basic Books.
Marglin, Stephen A. 1984. Growth, Distribution and Prices. Cambridge, MA: Harvard University Press.
Marques, Leonardo. 2016 (Nov. 21). The Slave Trade in the U.S. and Brazil: Comparisons and Connections. Press Blog, Yale University. http://blog.yalebooks.com/2016/11/21/slave-trade-u-s-brazil-comparisons-connections/ (accessed September 15, 2018).
Martin Hattersley, J. 1988 (Mar.). Frederick Soddy and the Doctrine of "Virtual Wealth." Paper presented to the 14th Annual Convention of the Eastern Economics Association, Boston, MA. https://web.archive.org/web/20081120224637/http://www.nesara.org/articles/soddy88.htm (accessed November 8, 2019).
Marx, Karl. 1867. Capital. Book One, vol. I. 1st English edition (1887). Marx/Engels Internet Archive. www.marxists.org/archive/marx/works/1867-c1/ (accessed August 14, 2012).
Massari, Riccardo, Pittau, Maria Grazia, and Zelli, Roberto. 2009. A Dwindling Middle Class? Italian Evidence in the 2000s. Journal of Economic Inequality, 7, 333–350.
Massy, Ibrahim, Avila, Alba, and Garcia-Molina, Mario. 2013. Quantitative Evidence of Goodwin's Non-Linear Growth Cycles. Applied Mathematical Sciences, 7(29), 1409–1417.
Mayer, J., Khairy, K., and Howard, J. 2010. Drawing an Elephant with Four Complex Parameters. American Journal of Physics, 78(6), 648–649.
Medeiros, Marcelo, and Souza, Pedro H. G. Ferreira. 2015. The Rich, the Affluent and the Top Incomes. Current Sociology Review, 63(6), 869–895.
Medeiros, Marcelo, Souza, Pedro H. G. F., and Castro, Fábio A. 2015. A estabilidade da desigualdade de renda no Brasil, 2006 a 2012: Estimativa com dados do imposto de renda e pesquisas domiciliares [The stability of income inequality in Brazil, 2006–2012: An estimate using income tax and household survey data]. Ciência e Saúde Coletiva, 20(4), 971–986. doi:10.1590/1413-81232015204.00362014.
Mendes, Marcos. 2014. Inequality, Democracy, and Growth in Brazil: A Country at the Crossroads of Economic Development. New York: Academic Press.
Milanovic, Branko. 2007. Why We All Care about Inequality (But Some of Us Are Loathe to Admit It). Challenge, 50(6), 109–120.
Milanovic, Branko. 2010. The Haves and the Have-Nots: A Brief and Idiosyncratic History of Global Inequality. New York: Basic Books.
Milanovic, Branko. 2012. Global Inequality Recalculated and Updated: The Effect of New PPP Estimates on Global Inequality and 2005 Estimates. Journal of Economic Inequality, 10(1), 1–18.
Milanovic, Branko. 2014. The Return of "Patrimonial Capitalism": A Review of Thomas Piketty's "Capital in the Twenty-First Century." Journal of Economic Literature, 52(2), 519–534.
Milanovic, Branko. 2016. Global Inequality: A New Approach for the Age of Globalization. Cambridge, MA: Belknap Press.


Mimkes, Jürgen. 2017. Thermodynamics and Economics. Ch. 23, pages 495–519 of: Šesták, Jaroslav, Hubík, Pavel, and Mareš, Jiří J. (eds.), Thermal Physics and Thermal Analysis. Hot Topics in Thermal Analysis and Calorimetry, vol. 11. Cham: Springer.
Minsky, Hyman. 1977. The Financial Instability Hypothesis: An Interpretation of Keynes and an Alternative to "Standard" Theory. Nebraska Journal of Economics and Business, 16(1), 5–16.
Mirowski, Philip. 1989. More Heat than Light: Economics as Social Physics, Physics as Nature's Economics. Cambridge: Cambridge University Press.
Mirowski, Philip. 2013. Never Let a Serious Crisis Go to Waste. London: Verso.
Mohun, Simon, and Veneziani, Roberto. 2008. Goodwin Cycles and the U.S. Economy, 1948–2004. Ch. 6, pages 157–194 of: Flaschel, Peter, and Landesmann, Michael (eds.), Mathematical Economics and the Dynamics of Capitalism: Goodwin's Legacy Continued. New York: Routledge. Also in: Munich Personal RePEc Archive (MPRA) Paper, https://ideas.repec.org/p/pra/mprapa/30444.html (accessed June 14, 2018).
Monk, Ray. 2012. Robert Oppenheimer: A Life Inside the Center. New York: Anchor Books.
Moreno, Álvaro M. 2002. El Modelo de Ciclo y Crecimiento de Richard Goodwin: Una Evaluación Empírica para Colombia [Richard Goodwin's cycle-and-growth model: An empirical evaluation for Colombia]. Cuadernos de Economía, 21(37), 1–20.
Morgan, Jamie. 2016. Book Review: Understanding Piketty's Capital in the Twenty-First Century, by Steven Pressman. Review of Political Economy. doi:10.1080/09538259.2016.1173967.
Morgan, Marc. 2017 (Aug. 10). Extreme and Persistent Inequality: New Evidence for Brazil Combining National Accounts, Survey and Fiscal Data, 2001–2005. WID.world Working Paper Series 2017/12. World Wealth and Income Database. http://wid.world/wp-content/uploads/2017/09/Morgan2017BrazilDINA-.pdf (accessed September 10, 2017).
Morgan, Zachary R. 2014. Legacy of the Lash: Race and Corporal Punishment in the Brazilian Navy and the Atlantic World. Bloomington: Indiana University Press.
Morowitz, Harold J. 1968. Energy Flow in Biology. Academic Press.
Morris, Errol. 2014. The Certainty of Donald Rumsfeld (Part 2). New York Times, Mar. 26. http://opinionator.blogs.nytimes.com/2014/03/26/the-certainty-of-donald-rumsfeld-part-2/#more-152439 (accessed March 11, 2017).
Morrisson, Christian. 2000. Historical Perspectives on Income Distribution: The Case of Europe. Ch. 4, pages 217–260 of: Atkinson, Anthony B., and Bourguignon, François (eds.), Handbook of Income Distribution, vol. 1. Amsterdam: North-Holland.
Morrisson, Christian, and Murtin, Fabrice. 2013. The Kuznets Curve of Human Capital Inequality: 1870–2010. Journal of Economic Inequality, 11(3), 283–301.
Moskvitch, Katia. 2018. Troubled Times for Alternatives to Einstein's Theory of Gravity. Quanta Magazine, Apr. 30. www.quantamagazine.org/troubled-times-for-alternatives-to-einsteins-theory-of-gravity-20180430 (accessed February 10, 2019).
Moura, N. J., Jr., and Ribeiro, Marcelo Byrro. 2006. Zipf Law for Brazilian Cities. Physica A, 367, 441–448. arXiv:physics/0511216v2.
Moura, N. J., Jr., and Ribeiro, Marcelo Byrro. 2009. Evidence for the Gompertz Curve in the Income Distribution of Brazil 1978–2005. European Physical Journal B, 67, 101–120. arXiv:0812.2664v1.
Moura, N. J., Jr., and Ribeiro, Marcelo Byrro. 2013. Testing the Goodwin Growth-Cycle Macroeconomic Dynamics in Brazil. Physica A, 392, 2088–2103. arXiv:1301.1090.
Neal, F., and Shone, R. 1976. Economic Model Building. London: Macmillan Education UK.


Neves, Juliano C. S. 2018. Infinities as Natural Places. Foundations of Science, July 21. doi:10.1007/s10699-018-9556-0; arXiv:1803.07995v2.
Newman, M. E. J. 2005. Power Laws, Pareto Distributions and Zipf's Law. Contemporary Physics, 46, 323. arXiv:cond-mat/0412004v3.
Nicolis, Grégoire, and Prigogine, Ilya. 1977. Self-Organization in Nonequilibrium Systems. New York: Wiley.
Nicolis, Grégoire, and Prigogine, Ilya. 1989. Exploring Complexity. Munich: Freeman.
Nikolaidi, Maria, and Stockhammer, Engelbert. 2017. Minsky Models: A Structured Survey. Journal of Economic Surveys, 31(5), 1304–1331. Also in ch. 7, pages 175–205 of: Veneziani, Roberto, and Zamparelli, Luca (eds.), Analytical Political Economy. New York: Wiley-Blackwell, 2018.
Oancea, Bogdan, Pirjol, Dan, and Andrei, Tudorel. 2018. A Pareto Upper Tail for Capital Income Distribution. Physica A, 492, 403–417.
Ormerod, Paul. 1997. The Death of Economics. New York: Wiley.
Osborne, Matthew Fontaine Maury. 1977. The Stock Market and Finance from a Physicist's Viewpoint. Minneapolis: Crossgar Press.
Ostry, Jonathan D., Berg, Andrew, and Tsangarides, Charalambos G. 2014 (Apr.). Redistribution, Inequality, and Growth. Tech. rept., Research Department, International Monetary Fund. IMF Staff Discussion Note SDN/14/02. www.imf.org/external/pubs/ft/sdn/2014/sdn1402.pdf (accessed October 5, 2016).
Pais, Abraham. 2005. Subtle Is the Lord: The Science and the Life of Albert Einstein. Oxford: Oxford University Press.
Palley, Thomas I. 2014. The Accidental Controversialist: Deeper Reflections on Thomas Piketty's Capital. Real-World Economics Review. Issue No. 67 (May), 143–146.
Palma, José Gabriel. 2017. Does the Broad Spectrum of Inequality across the World in the Current Era of Neo-liberal Globalization Reflect a Wide Diversity of Fundamentals, or Just a Multiplicity of Political Settlements and Market Failures? Initiative for Policy Dialogue (based at Columbia University). http://policydialogue.org/files/events/background-materials/03_Palma-with-Tables_Figures.pdf (accessed January 15, 2018).
Palmer, Tim. 2008. Edward Norton Lorenz. Physics Today, 61(9), 81–82. http://scitation.aip.org/content/aip/magazine/physicstoday/article/61/9/10.1063/1.2982132 (accessed March 11, 2015).
Palomba, Giuseppe. 1939. Introduzione allo studio della dinamica economica [Introduction to the study of economic dynamics]. Napoli: Jovene.
Paradisi, Paolo, Kaniadakis, Giorgio, and Scarfone, Antonio Maria. 2015. The Emergence of Self-Organization in Complex Systems – Preface. Chaos, Solitons and Fractals, 81, 407–411.
Pareto, Vilfredo. 1897. Cours d'Économie Politique. Vol. 2. Lausanne: F. Rouge. www.institutcoppet.org/wp-content/uploads/2012/05/Cours-d%C3%A9conomie-politique-Tome-II-Vilfredo-Pareto.pdf (accessed August 30, 2016).
Parry Lewis, J. 1963. Dimensions in Economic Theory. Manchester School, 31(3), 243–254.
Patnaik, Utsa. 2017. Revisiting the "Drain," or Transfers from India to Britain in the Context of Global Diffusion of Capitalism. Pages 277–317 of: Chakrabarti, Shubhra, and Patnaik, Utsa (eds.), Agrarian and Other Histories: Essays for Binay Bhushan Chaudhuri. New Delhi: Tulika Books.
Patriarca, Marco, Chakraborti, A., and Kaski, Kimmo. 2004a. Gibbs versus Non-Gibbs Distributions in Money Dynamics. Physica A, 340, 334–339. arXiv:cond-mat/0312167v1.


Patriarca, Marco, Chakraborti, A., and Kaski, K. 2004b. Statistical Model with a Standard Γ Distribution. Physical Review E, 70, 016104.
Patriarca, Marco, Heinsalu, E., and Chakraborti, A. 2010. Basic Kinetic Wealth-Exchange Models: Common Features and Open Problems. European Physical Journal B, 73, 145–153.
Paul Cockshott, W., and Zachariah, David. 2014–2015. Conservation Laws, Financial Entropy and the Eurozone Crisis. Economics: The Open-Access, Open-Assessment E-Journal, 8 (Jan. 29, 2014). arXiv:1301.5974v1.
Paul Cockshott, W., Cottrell, Allin F., Michaelson, Gregory J., Wright, Ian P., and Yakovenko, Victor M. 2009. Classical Econophysics. New York: Routledge.
Percival, Ian, and Richards, Derek. 1982. Introduction to Dynamics. New York: Cambridge University Press.
Peters, Edgar E. 1994. Fractal Market Analysis. New York: Wiley.
Petracca, Enrico. 2016 (June 23–25). Giuseppe Palomba and the Structural Realist View of Economics. 13th Annual STOREP Conference, University of Catania. http://conference.storep.org/index.php?conference=storep-annual-conference&schedConf=2016&page=paper&op=viewFile&path%5B%5D=27&path%5B%5D=24 (accessed January 29, 2019).
Petty, William. 1690. Political Arithmetick. London: Printed for Robert Clavel at the Peacock, and Hen. Mortlock at the Phoenix in St. Paul's Church-yard. https://la.utexas.edu/users/hcleaver/368/368PettyPolArithtableedit.pdf (accessed February 9, 2019).
Phillips, A. W. 1958. The Relation between Unemployment and the Rate of Change of Money Wage Rates in the United Kingdom, 1861–1957. Economica, 25(100), 283–299.
Phillips, Peter C. B. 2010 (Sept.). The Mysteries of Trend. Tech. rept., Paper No. 1771, Cowles Foundation for Research in Economics, Yale University. https://ssrn.com/abstract=1676216 (accessed January 24, 2019).
Pietronero, L., Cristelli, M., and Tacchella, A. 2013. New Metrics of Economic Complexity: Measuring the Intangible Growth Potential of Countries. www.ineteconomics.org/uploads/papers/Pietronero-Paper.pdf (accessed September 6, 2016).
Piketty, Thomas. 2014. Capital in the Twenty-First Century. Translated by Arthur Goldhammer from the author's French original, Le capital au XXIe siècle. Cambridge, MA: Harvard University Press.
Piketty, Thomas. 2015. About Capital in the Twenty-First Century. American Economic Review, 105(5), 48–53.
Piketty, Thomas. 2016–2017. The Dynamics of Capital Accumulation: Private vs. Public Capital and the Great Transformation. Paris School of Economics. Lecture 3 on Economic History and respective exams. http://piketty.pse.ens.fr/en/teaching/10/17 (accessed January 5, 2017).
Piketty, Thomas, and Zucman, Gabriel. 2014. Capital Is Back: Wealth-Income Ratios in Rich Countries 1700–2010. Quarterly Journal of Economics, 129(3), 1255–1310.
Pinto, Carla M. A., Mendes Lopes, A., and Tenreiro Machado, J. A. 2012. A Review of Power Laws in Real Life Phenomena. Communications in Nonlinear Science and Numerical Simulation, 17(9), 3558–3578. doi:10.1016/j.cnsns.2012.01.013.
Planck, Max. 1950. Scientific Autobiography and Other Papers. Translated from the German by Frank Gaynor. London: Williams & Norgate.
Pokrovski, Vladimir N. 1999. Physical Principles in the Theory of Economic Growth. Hants: Ashgate.
Pokrovski, Vladimir N. 2018. Econodynamics. 3rd ed. New York: Springer.


Pressman, Steven. 2016. Understanding Piketty's Capital in the Twenty-First Century. New York: Routledge.
Puu, Tönu. 2003. Attractors, Bifurcations and Chaos: Nonlinear Phenomena in Economics. 2nd ed. New York: Springer.
Rader, Trout. 1972. Book Review: Dimensional Analysis for Economists, by Frits J. de Jong and Wilhelm Quade. Econometrica, 40(1), 214–216.
Rankin, Jennifer. 2014. Thomas Piketty Accuses Financial Times of Dishonest Criticism. The Guardian, May 26. www.theguardian.com/business/2014/may/26/thomas-piketty-financial-times-dishonest-criticism-economics-book-inequality (accessed June 17, 2014).
Rau, Nicholas. 1974. Trade Cycles: Theory and Evidence. New York: Macmillan.
Ribeiro, Haroldo, Alves, Luiz G. A., Martins, Alvaro, Lenzi, Ervin K., and Perc, Matjaž. 2018. The Dynamical Structure of Political Corruption Networks. Journal of Complex Networks. doi:10.1093/comnet/cny002; arXiv:1801.01869v1.
Ribeiro, Marcelo Byrro, and Miguelote, Alexandre Y. 1998. Fractals and the Distribution of Galaxies. Brazilian Journal of Physics, 28(2), 132–160. doi:10.1590/S0103-97331998000200007.
Ribeiro, Marcelo Byrro, and Videira, Antonio A. P. 1998. Dogmatism and Theoretical Pluralism in Modern Cosmology. Apeiron, 5, 227–234. arXiv:physics/9806011v1.
Ribeiro, Marcelo Byrro, and Videira, Antonio A. P. 2007. Boltzmann's Concept of Reality. arXiv:physics/0701308v1.
Richmond, Peter, and Roehner, Bertrand M. 2016. Predictive Implications of Gompertz's Law. Physica A, 447, 446–454. arXiv:1509.07271v1.
Richmond, Peter, Chakrabarti, Bikas K., Chatterjee, Arnab, and Angle, John. 2006a. Comments on "Worrying Trends in Econophysics": Income Distribution Models. Pages 244–253 of: Chatterjee, Arnab, and Chakrabarti, Bikas K. (eds.), Econophysics of Stock and Other Markets. Milan: Springer.
Richmond, Peter, Hutzler, Stephan, Coelho, Ricardo, and Repetowicz, Przemek. 2006b. A Review of Empirical Studies and Models of Income Distribution in Society. Ch. 5, pages 131–159 of: Chakrabarti, B. K., Chakraborti, A., and Chatterjee, A. (eds.), Econophysics and Sociophysics – Trends and Perspectives. New York: Wiley-VCH.
Roehner, Bertrand M. 2002. Patterns of Speculation: A Study in Observational Econophysics. Cambridge: Cambridge University Press.
Roine, Jesper, and Waldenström, Daniel. 2015. Long-Run Trends in the Distribution of Income and Wealth. Ch. 7, pages 469–592 of: Atkinson, Anthony B., and Bourguignon, François (eds.), Handbook of Income Distribution, vol. 2A–2B. Amsterdam: North-Holland.
Ruccio, David, and Morgan, Jamie. 2018. Capital and Class: Inequality after the Crash. Real-World Economics Review. Issue No. 85 (Sep. 19), 15–24.
Rumsfeld, Donald H. 2002 (Feb. 12). U.S. Department of Defense News Briefing. News Transcript. http://archive.defense.gov/Transcripts/Transcript.aspx?TranscriptID=2636 (accessed June 20, 2016); see also www.youtube.com/watch?v=GiPe1OiKQuk (accessed March 11, 2017).
Sakstein, Jeremy. 2018. Tests of Gravity with Future Space-Based Experiments. Physical Review D, 97, 064028. arXiv:1710.03156v4.
Sarabia, José M., and Jordá, Vanesa. 2014. Explicit Expressions of the Pietra Index for the Generalized Function for the Size Distribution of Income. Physica A, 416, 582–595.
Sarabia, José M., Jordá, Vanesa, and Remuzgo, Lorena. 2016. The Theil Indices in Parametric Families of Income Distributions: A Short Review. Review of Income and Wealth. doi:10.1111/roiw.12260.


Saunders, P. T. 1980. An Introduction to Catastrophe Theory. Cambridge: Cambridge University Press.
Scafetta, Nicola, and West, Bruce J. 2007. Probability Distributions in Conservative Energy Exchange Models of Multiple Interacting Agents. Journal of Physics: Condensed Matter, 19, 065138.
Scafetta, Nicola, Picozzi, Sergio, and West, Bruce J. 2004a. An Out-of-Equilibrium Model of the Distributions of Wealth. Quantitative Finance, 4(3), 353–364.
Scafetta, Nicola, Picozzi, Sergio, and West, Bruce J. 2004b. A Trade-Investment Model for Distribution of Wealth. Physica D, 193, 338–352.
Scheidel, Walter, and Friesen, Steven J. 2009. The Size of the Economy and the Distribution of Income in the Roman Empire. Journal of Roman Studies, 99, 61–91.
Schinckus, C. 2009. Economic Uncertainty and Econophysics. Physica A, 388, 4414–4423.
Schinckus, Christophe. 2010. Econophysics and Economics: Sister Disciplines? American Journal of Physics, 78(4), 325–327.
Schinckus, Christophe. 2013. Between Complexity of Modelling and Modelling of Complexity: An Essay on Econophysics. Physica A, 392, 3654–3665.
Schneider, Markus P. A. 2015. Revisiting the Thermal and Superthermal Two-Class Distribution of Incomes: A Critical Perspective. European Physical Journal B, 88(1). Article No. 5.
Scott Fitzgerald, F. 1936 (Feb.). The Crack-Up. www.pbs.org/wnet/americanmasters/f-scott-fitzgerald-essay-the-crack-up/1028/ (accessed December 10, 2017); www.esquire.com/lifestyle/a4310/the-crack-up/ (accessed February 5, 2018).
Screpanti, Ernesto, and Zamagni, Stefano. 1993. An Outline of the History of Economic Thought. Oxford: Oxford University Press.
Serebriakov, Vladimir, and Dohnal, Mirko. 2017. Qualitative Analysis of the Goodwin Model of the Growth Cycle. Revista de Métodos Cuantitativos para la Economía y la Empresa, 23 (June), 223–233. www.upo.es/revistas/index.php/RevMetCuant/article/view/2694 (accessed September 15, 2018).
Shaikh, A., Papanikolaou, N., and Wiener, N. 2014. Race, Gender and the Econophysics of Income Distribution in the USA. Physica A, 415, 54–60.
Shaikh, Anwar. 2017. Income Distribution, Econophysics and Piketty. Review of Political Economy, 29(1), 18–29. doi:10.1080/09538259.2016.1205295.
Sharma, Kiran, and Chakraborti, Anirban. 2016. Physicists' Approach to Studying Socio-economic Inequalities: Can Humans Be Modelled as Atoms? arXiv:1606.06051v1.
Sharma, Manoj. 2018. How Much Money Did Britain Take Away from India? About $45 Trillion in 173 Years, Says Top Economist. Business Today, Nov. 19. www.businesstoday.in/current/economy-politics/this-economist-says-britain-took-away-usd-45-trillion-from-india-in-173-years/story/292352.html (accessed February 20, 2019).
Shih, Shagi-Di. 1997. The Period of a Lotka–Volterra System. Taiwanese Journal of Mathematics, 1(4), 451–470.
Shone, R. 2002. Economic Dynamics: Phase Diagrams and Their Economic Application. 2nd ed. Cambridge: Cambridge University Press.
Simon, Herbert A. 1957. The Compensation of Executives. Sociometry, 20(1), 32–35.
Sinha, Sitabhra, Chatterjee, Arnab, Chakraborti, Anirban, and Chakrabarti, Bikas K. 2011. Econophysics: An Introduction. Weinheim: Wiley-VCH Verlag.
Soares, Abner D., Moura, N. J., Jr., and Ribeiro, Marcelo Byrro. 2016. Tsallis Statistics in the Income Distribution of Brazil. Chaos, Solitons and Fractals, 88, 158–171. arXiv:1602.06855v2.
Soddy, Frederick. 1926. Wealth, Virtual Wealth and Debt. London: George Allen & Unwin.


Solow, Robert M. 1990. Goodwin’s Growth Cycle: Reminiscence and Rumination. Pages 31–41 of: Velupillai, K. (ed.), Nonlinear and Multisectoral Macrodynamics: Essays in Honour of Richard Goodwin. London: Palgrave Macmillan.
Soos, Philip. 2012. Stop Letting Economists off the Hook. Business Spectator, Mar. 16. https://www.theaustralian.com.au/business/business-spectator/news-story/stop-letting-economists-off-the-hook/7236aa59c30280f673deed283924786d (accessed November 7, 2019).
Sordi, Serena, and Vercelli, Alessandro. 2014. Unemployment, Income Distribution and Debt-Financed Investment in a Growth Cycle Model. Journal of Economic Dynamics and Control, 48, 325–348.
Sørensen, Peter Birch, and Whitta-Jacobsen, Hans Jørgen. 2010. Introducing Advanced Macroeconomics: Growth and Business Cycles. 2nd ed. New York: McGraw-Hill.
Soriano-Hernández, P., del Castillo-Mussota, M., Córdoba-Rodríguez, O., and Mansilla-Corona, R. 2016. Non-Stationary Individual and Household Income of Poor, Rich and Middle Classes in Mexico. doi:10.1016/j.physa.2016.08.042.
Souma, Wataru. 2001. Universal Structure of the Personal Income Distribution. Fractals, 9, 463–470.
Souma, Wataru. 2002. Physics of Personal Income. Pages 343–352 of: Takayasu, Hideki (ed.), Empirical Science of Financial Fluctuations. Tokyo: Springer Japan. arXiv:cond-mat/0202388v1.
Souma, Wataru, and Nirei, Makoto. 2005. Empirical Study and Model of Personal Income. Pages 34–42 of: Chatterjee, A., Yarlagadda, S., and Chakrabarti, B. K. (eds.), Econophysics of Wealth Distribution. Milan: Springer.
Souza, Pedro H. G. F. 2016 (Sept.). A desigualdade vista do topo: A concentração de renda entre os ricos no Brasil, 1926–2013. Ph.D. thesis, Departamento de Sociologia, Instituto de Ciências Sociais, Universidade de Brasília–UnB. http://repositorio.unb.br/bitstream/10482/22005/1/2016_PedroHerculanoGuimar%C3%A3esFerreiradeSouza.pdf (accessed December 16, 2017).
Souza, Pedro H. G. F., and Medeiros, Marcelo. 2015. Top Income Shares and Inequality in Brazil, 1928–2012. Sociologies in Dialogue, 1(1), 119–132.
Sreevastsan, Ajai. 2018. British Raj Siphoned out $45 Trillion from India: Utsa Patnaik. Livemint, Nov. 21. www.livemint.com/Companies/HNZA71LNVNNVXQ1eaIKu6M/British-Raj-siphoned-out-45-trillion-from-India-Utsa-Patna.html (accessed December 31, 2018).
Strogatz, Steven H. 2001. Exploring Complex Networks. Nature, 410(Mar. 8), 268–276.
Sweezy, Paul M. 1942. The Theory of Capitalist Development. New York: Monthly Review Press.
Sylos Labini, Francesco, and Caprara, Sergio. 2017. On the Description of Financial Markets: A Physicist’s Viewpoint. Pages 63–71 of: Ippoliti, Emiliano, and Chen, Ping (eds.), Methods and Finance: A Unifying View on Finance, Mathematics and Philosophy. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol. 34. New York: Springer International. doi:10.1007/978-3-319-49872-0.
Tacchella, Andrea, Cristelli, Matthieu, Caldarelli, Guido, Gabrielli, Andrea, and Pietronero, Luciano. 2012. A New Metrics for Countries’ Fitness and Products’ Complexity. Scientific Reports, 2(723), 1–7. doi:10.1038/srep00723.
Taleb, Nassim Nicholas. 2010. The Black Swan: The Impact of the Highly Improbable. 2nd ed. New York: Random House.
Tao, Yong, Wu, Xiangjun, Zhou, Tao, Yan, Weibo, Huang, Yanyuxiang, Yu, Han, Modal, Benedict, and Yakovenko, Victor M. 2016. Universal Exponential Structure of Income Inequality: Evidence from 60 Countries. arXiv:1612.01624v1.


Tarassow, Artur. 2010. The Empirical Relevance of Goodwin’s Business Cycle Model for the US Economy. Munich Personal RePEc Archive (MPRA) Paper 21815. https://mpra.ub.uni-muenchen.de/21815/1/MPRA_paper_21815.pdf (accessed June 15, 2018).
Taylor, John Robert. 1997. An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements. 2nd ed. Sausalito, CA: University Science Books.
Terzi, Andrea. 2010. Keynes’s Uncertainty Is Not about White or Black Swans. Journal of Post Keynesian Economics, 32(4), 559–566.
Texocotitla, Miguel Alvarez, Hernández, M. David Alvarez, and Hernández, Shaní Alvarez. 2018. Dimensional Analysis in Economics: A Study of the Neoclassical Economic Growth Model. arXiv:1802.10528v1.
Trans-Atlantic Slave Trade Database. 2018. www.slavevoyages.org/assessment/estimates (accessed September 15, 2018).
Tsallis, C. 1994. What Are the Numbers that Experiments Provide? Química Nova, 17, 468–471.
Tsallis, Constantino. 2009. Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World. New York: Springer.
United Nations. 2013. Inequality Matters. Report of the World Social Situation 2013. Tech. rept. Department of Economic and Social Affairs, United Nations, New York. www.un.org/esa/socdev/documents/reports/InequalityMatters.pdf (accessed September 19, 2016).
Unzicker, Alexander, and Jones, Sheila. 2013. Bankrupting Physics: How Today’s Top Scientists Are Gambling Away Their Credibility. New York: Palgrave Macmillan.
Vadasz, Viktor. 2007. Economic Motion: An Application of the Lotka–Volterra Equations. Undergraduate honors thesis, Franklin and Marshall College. https://dspace.fandm.edu/handle/11016/4287 (accessed August 3, 2014).
van der Ploeg, Frederik. 1983. Economic Growth and Conflict over the Income Distribution. Journal of Economic Dynamics and Control, 6, 253–279.
Van Noorden, Richard. 2016. Physicists Make “Weather Forecasts” for Economies. Nature. doi:10.1038/nature.2015.16963.
VanderPlas, Jake. 2014. Frequentism and Bayesianism: A Python-driven Primer. arXiv:1411.5018v1.
Varoufakis, Yanis. 2014. Egalitarianism’s Latest Foe: A Critical Review of Thomas Piketty’s Capital in the Twenty-First Century. In: Fullbrook and Morgan (2014). Also in: Real-World Economics Review, No. 69, Oct. 2014, 18–35.
Veneziani, Roberto, and Mohun, Simon. 2006. Structural Stability and Goodwin’s Growth Cycle. Structural Change and Economic Dynamics, 17, 437–451.
Videira, Antonio A. P. 1995. Atomism and Energetics at the End of the 19th Century: The Luebeck Meeting of 1895. Série “Ciência e Sociedade.” Centro Brasileiro de Pesquisas Físicas – CBPF. Publication number CBPF–CS–003/1995.
Vijaya, Ramya M. 2013. Book review of Inequality, Development and Growth. Journal of Economic Inequality, 11, 127–129.
Weisstein, Eric W. Log Normal Distribution. From MathWorld – A Wolfram Web Resource. http://mathworld.wolfram.com/LogNormalDistribution.html (accessed October 15, 2015).
Whittaker, E. T. 1922. On a New Method of Graduation. Proceedings of the Edinburgh Mathematical Society, 41, 63–75. doi:10.1017/S0013091500077853.
Wilk, G., and Włodarczyk, Z. 2015. Tsallis Distribution Decorated with Log-Periodic Oscillation. Entropy, 17, 384–400.
Winsor, C. P. 1932. The Gompertz Curve as a Growth Curve. Proceedings of the National Academy of Sciences, 18(1), 1–8.


Woit, Peter. 2006. Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law. New York: Basic Books.
Wolfers, Justin. 2014. A New Critique of Piketty Has Its Own Shortcomings. New York Times, May 23. www.nytimes.com/2014/05/25/upshot/a-new-critique-of-piketty-has-its-own-shortcomings.html (accessed June 17, 2014).
World Bank. 2016. GINI Index. http://data.worldbank.org/indicator/SI.POV.GINI?end=2015&locations=BR&start=2008 (accessed September 15, 2018).
Xu, Yan, Wang, Yougui, Tao, Xiaobo, and Ližbetinová, Lenka. 2017. Evidence of Chinese Income Dynamics and Its Effects on Income Scaling Law. Physica A, 487, 143–152.
Yakovenko, V. M. 2003. Research in Econophysics. arXiv:cond-mat/0302270v2.
Yakovenko, V. M. 2009. Econophysics, Statistical Mechanics Approach to. Pages 2800–2826 of: Meyers, R. A. (ed.), Encyclopedia of Complexity and System Science. New York: Springer. arXiv:0709.3662v4.
Yakovenko, V. M. 2010. Statistical Mechanics of Money, Debt, and Energy Consumption. Science and Culture, 76(9–10), 430–436. arXiv:1008.2179v1.
Yakovenko, V. M. 2011. Statistical Mechanics Approach to the Probability Distribution of Money. Ch. 7, pages 104–123 of: Ganssmann, Heiner (ed.), New Approaches to Monetary Theory: Interdisciplinary Perspectives. New York: Routledge. arXiv:1007.5074v1.
Yakovenko, V. M. 2012. Applications of Statistical Mechanics to Economics: Entropic Origin of the Probability Distributions of Money, Income, and Energy Consumption. arXiv:1204.6483v1.
Yakovenko, V. M. 2016. Monetary Economics from Econophysics Perspective. European Physical Journal Special Topics, 225, 3313–3335. arXiv:1608.04832v1.
Yakovenko, V. M., and Rosser, J. Barkley, Jr. 2009. Colloquium: Statistical Mechanics of Money, Wealth, and Income. Reviews of Modern Physics, 81, 1703–1725. arXiv:0905.1518v2.
Yamano, Takuya. 2002. Some Properties of the q-Logarithm and q-Exponential Functions in Tsallis Statistics. Physica A, 305, 486–496.
Zencey, Eric. 2009. Mr. Soddy’s Ecological Economy. New York Times, Apr. 11. www.nytimes.com/2009/04/12/opinion/12zencey.html (accessed August 20, 2014).
Žižek, Slavoj. 2014. Event: A Philosophical Journey through a Concept. New York: Melville House.
Zou, Yijiang, Deng, Weibing, Li, Wei, and Cai, Xu. 2015. An Agent-Based Interaction Model for Chinese Personal Income Distribution. Physica A, 436, 933–942.

Subject Index

accelerator assumption, see Goodwin model accumulation infinite, see principle of infinite accumulation Marx, 10 wealth, 118 accuracy, see measurement activity level of speculative projects actual, 257 desired, 257 adaptive system, see complex systems agent in complex systems, 51 representative, 52, 53 aggregates, see economic Angle inequality process, 89, 154–158, 160 Angle process, see Angle inequality process Angle’s model, see Angle inequality process animal spirits, 205, 208, 250 annuity, 251 apartheid, 101 apologetics, 276 Aristotelian physics, 6 Aristotelism, see physics, Aristotelian “a safe dollar is worth more than a risky one,” 251 average wage real value, see wage axioms, 143 balanced growth, see circular flow bankers income, 263 share, 263, 265, 266 bankrupt rate, 252 Bayes’ rule, 42 Bayes–Price–Laplace rule, 42 Bénard cells, 53 bifurcation, 48 black swan events, 152 blackboard economics, 29

Boltzmann theoretical pluralism, 22 theory as representations, 21 Boltzmann’s entropy, see entropy, statistical mechanics Boltzmann–Gibbs law, 159 book value, 256 boom, 212 Boyd–Blatt model, 249, 250, 252, 254, 262, 264 activity level of speculative projects actual, 257 desired, 257 bankrupt rate, 252 book value, 256 capital rental, 257 confidence in the future, see confidence current market price of a share, 252 desired investment fraction, 254 discount rate, 251 dividend yield, 252, 255 entrepreneurs, 250 cash position, 256 everyone else, 250 cash asset position, 258 financial investment actual, 255 desired, 255 flow of capital gain, 255 of capital losses (bankruptcies), 255 of dividend income, 255 rate of investment, 255 fractional rate of change, 253 horizon of uncertainty, 252 interest rate risk premium, 251 risk-free, 251 investors, 250


Boyd–Blatt model (cont.) investors consumption actual, 255 desired, 255 market sentiment, 253 money supply, 255 perceived asset position, 255 placement, 255 present value of future cash flow with interests, 250 price index lower barrier, 253 soft barrier, 253 qualitative cyclic behavior, 260 rentiers, 250 perceived income flow, 255 share price index, 253 time constant, 251 valuation formula, 251 waves, 258 Brazil gold drainage, 261 gold rush, 261 slavery, 101 business confidence, see confidence cycle, 204, 249 bust, 212 butterfly effect, 49 C–M–C cycle, 10, 166 capacity utilization, 234 capital depreciation, 264 fixed, 212 gains, 187, 205 human, 108 Piketty’s definition, 107 rate of increase, 214 rate of return, see capital return signature, 167, 168 as stock, 118 capital income, 119 capital rental, 257 capital return, 117, 120, 125–127, 129, 132, 138, 139, 141, 181, 195, 205 capital to output ratio, 212, 221, 227, 234, 272 capital/income ratio, 115, 120–122, 126, 127, 129–131, 136, 139, 141, 212 Cargo Cult Science, 44 carrying capacity, 271 cash position of entrepreneurs, 256 catastrophe theory, 48 CCDF (complementary cumulative distribution function), 57 CDF (cumulative distribution function), 57 centile, 109, 113 central contradiction of capitalism, 140, see Piketty’s central thesis

Champernowne distribution, see income distribution chaos, see chaotic dynamics chaos theory, see chaotic dynamics chaotic dynamics, 49 attractors, 50 predictability horizon, 49 repellers, 50 sensitivity to initial conditions, 49 strange attractors, 50 chaotic systems features, 49 circular flow, 201, 202 balanced growth, 203, 204 steady state mode, 203 unbalanced growth, 204 class struggle model, see Goodwin model classical school, see economics, classical school Cobb–Douglas production function, 129 commodity conservation law, see conservation law comparative statics, see economics complex systems, 50 adaptive system, 53 agent, 51 downward causation, 51 emergence, 51 network, 51 self-adaptation, 53 self-organization, 52 complexity, see complex systems confidence, 205–208, 249, 251, 252, 259 conservation law commodity, 167 energy, 159, 160 in the Goodwin model, 216 money, 158, 160, 162, 164, 165, 170 value, 167 conventional judgment, 207 cost of labor force, see labor creative destruction, 15 cycle accumulation–profit–accumulation, 10 C–M–C, 10, 166 limit, see limit cycle M–C–M′, 10, 167 production–income–expense, 14 trade, see trade cycle cycles Juglar, 258 Kitchin, 235, 258 Kondratieff intermediate, 259 short, 235 Dagum distribution, see income distribution debt creator operator, 167 as negative money, 162 outstanding, 263 to income ratio, 263, 265, 266

debt-financed instability, see Keen model debt-induced breakdown, see Keen model growth, see Keen model decile, 109, 113 demand elasticity, see price elasticity of demand desired investment fraction, 254 development economics, see economics, development developmentalist tradition, see economics, development DHMP extension, see Goodwin model dimension currency, 117 generic currency, 118, 213 homogeneity, see expression inhomogeneity, see expression money, 117 percentage, 57, 119, 122, 213 time, 117, 213 dimensional analysis in economics, 117 discount rate, 251 dismal science, see economics dissipative structures, see nonequilibrium systems distribution function complementary cumulative income distribution function, 57 complementary first-moment income distribution, 59 cumulative income distribution function, 57 first-moment income distribution, 59 probability distribution function, 57 distribution tables, 113 dividend yield, 252, 255 doctrine of stability, 246 dogmatism, 25, 35 dominant class, see income classes (Piketty) downward causation, see complex systems Dutch Tulip Bulb Bubble, 210 dynamic system, 10 deterministic, 49 linear, 203 nonlinear, 203, 204 structurally stable, 50, 244 structurally unstable, 50, 244 economic aggregates, 16, 116 demons, 165 elite, 205 engineering, 276 growth rate, 124, 130, 138, 141 saving, 118 saving rate, 119 savings, 118 savings rate, 119, 130, 141 system, 47 thermal death, 161 uncertainty, 149, 208


economic cycles, see trade cycles economic value, see value economics Austrian school, 15 behavioral, 15 classical school, 9 comparative statics, 13, 104, 116, 166 complexity, 20 development, 8, 15 dismal science, 35 ecological, 15 econometrics, 18 in econophysics, 47 heterodox, 15 institutional, 15 macroeconomic dynamics, 14 macroeconomics, 14 mainstream theory, see economics, neoclassical Marxian, 10 microeconomics, 11 neo-Marxian, 15 neo-Ricardian, 15 neoclassical, 11 general equilibrium theory, 12, 29 IS-LM model, 15 partial equilibrium analysis, 12 post-autistic, 27 post-Keynesian, 15 Schumpeterian, 15 economy as entropy reducing, 165 in econophysics, 46 laissez-faire, 204, 249, 259, 261, 276 phenomenological domain, 18 econophysics definition, 18 appearance, 18 epistemological perspective, 37 methodology, 18 effective demand, 14 effective unemployment, 235, 247 egalitarian line, see Lorenz curve, line of perfect equality eigenvalues, 219 elasticity, 116, 127 electromagnetism, 4 elite economic, 205 power, 205 ruling, 205 emergent phenomena, see complex systems, emergence employment rate, 211 energy conservation law, see conservation law entrepreneurs, 205, 221, 250 entropy information theory, 169 statistical mechanics, 54, 160, 169


EPD, 66 equilibrium point, 218 equivalence (in structural stability) observational, 245 practical, 245 topological, 245 era of inheritance, 139 erf(x) function, 71 error, see measurement error function, 71 expectation, 42, 151, 206 long-term, 206 exponential distribution with a Pareto tail distortion (ExPt), see income distribution exponential–Pareto distribution (EPD), see income distribution expression dimensionally homogeneous, 119, 214 dimensionally inhomogeneous, 119 ExPt, 191 external natural world, see real externalities, 29 extreme uncertainty, see uncertainty Fa2, 194 falling rate of profits, 132 family income distribution of two-earners (Fa2), see income distribution fiat money, 167 finance hedge, 209, 210 Ponzi, 210, 268 speculative, 209 financial instability hypothesis, 208, 210, 249, 259, 262, 271 financial investment actual, 255 desired, 255 firm hierarchy, 83 fixed point, 217, 218 flexible accelerator, 272 flow of capital gain, 255 of capital losses (bankruptcies), 255 of dividend income, 255 rate of investments, 255 forking, see bifurcation fractal, 50 fractal dimension, 50 fractional rate of change of the share price index, 253 functionless investors, 250 fundamental laws of capitalism first, 123, 126, 141 first rewritten, 125 second, 106, 116, 130, 131, 141 second rewritten, 132 third, 141

gamma distribution, 72 rate parameter, 72 shape parameter, 72 gamma function, 72, 158, 172 lower incomplete, 73 upper incomplete, 73 gamma–Pareto distribution (ΓPD), see income distribution ΓPD, 73 Γ(σ) function, 72 Γ(σ,x) function, 73 γ(σ,x) function, 73 GDP, see gross domestic product general equilibrium analysis, see economics, neoclassical generalized beta distribution of second kind, see income distribution generic currency symbol, 58 unit, 58, 118, 213, 251 Gibrat index, 71, 89 law, 70 Gini coefficient, ix, xii, 56, 61, 113, 190, 192 index, 61, 101, 112, 190, 192 Gödel’s theorem, 34 Gompertz curve, 56, 68, 70, 91, 92, 95, 235, 271 law of human mortality, 92 Gompertz–Pareto distribution (GPD), see income distribution Goodwin dynamics, see Goodwin model Goodwin growth-cycle model, see Goodwin model Goodwin macrodynamics, see Goodwin model Goodwin model, 212, 216, 220–226, 228, 229, 236, 240–242, 246, 248, 249, 254, 263 accelerator assumption, 212, 264 assumptions, 221 bargaining equation, 272 capitalists’ savings, 272 class struggle, 222 closed orbits, 219 conservation law, 216 cycles, 218, 220, 224, 225, 230, 234, 236 Desai–Henry–Mosley–Pemberton (DHMP) extension, 270 dynamic illustration, 220 equations of, 216 fixed point, 218 growth-cycle macrodynamics, 229, 237 Harrod-neutral technical progress, 213, 273 Harvie–Kelmanson–Knapp (HK²) extension, 271 from Keen model, 266 linear approximation, 219 linear period, 220 Marxian origins, 212 nonlinear period, 223, 224

nonlinear system, 216 potential labor supply, 213 simplifying hypotheses, 220 state variables, 227–229, 234 history, 227 structural instability, 245 variables, 232 Goodwin variables, see Goodwin model government bonds, 251 GPD, 68 Great Crash, 13 gross domestic product, 119 growth rate, see economic, growth rate growth-cycle model, see Goodwin model gut instincts, 208 Harrod-neutral technical progress, 213, 273 heat death of the universe, 161 hedge, see finance hierarchical structure of firms, 83 hill curves, 63 HK² extension, see Goodwin model Hodrick–Prescott filter, 230–234 horizon of predictability, 49 of uncertainty, 36, 252 human capital, 108 hypermeritocratic society, 111 I(x) function, 69 income bankers, 263 capital, 109, 113, see capital income capital and labor, 109 as flow, 117 labor, 109, 113, see labor income national, 119 time constant, 124 share of income from capital, 119 share of income from labor, 119 total, 109 income classes (econophysics) the 1%, 99, 194, 205, 235, 275 the 99%, 100, 235, 275 1% rich, 93, 99, 205, 235 99% remaining, 93, 235 lower class, 99 middle class, 99 rich, 99, 205 rich class, 99, 194, 205 super-rich, 99, 205 underclass, 99 income classes (Piketty) the 1%, 109, 205, 275 dominant class (the top 1%), 109, 205 lower class (the bottom 50%), 109 middle class (the middle 40%), 109 top 0.1%, 110, 205


upper class (the top 10%), 109 wealthy class (the bottom 9% of top 10%), 109 well-to-do (the bottom 9% of the top 10%), 109 income decomposition, 109 income distribution κ-generalized distribution (KGD), 78, 96, 97 Adam Smith, 9 average income, 59 Champernowne distribution, 81 complementary distribution, 57 complementary first-moment, 59 cumulative distribution, 57 Dagum distribution, 81, 96 exponential distribution with a Pareto tail distortion (ExPt), 191 exponential–Pareto distribution (EPD), 66, 190, 193 family income distribution of two-earners (Fa2), 194 first-moment, 59 gamma–Pareto distribution (ΓPD), 73 generalized beta distribution of second kind, 81 Gompertz–Pareto distribution (GPD), 68, 91, 235 log-normal–Pareto distribution (LnPD), 71 Pareto power law, 68, 70, 73, 80, 82, 84, 86, 90, 93, 99, 235 Pareto–Lévy law, 80 probability density, 57 probability income distribution, 57 purely exponential distribution (PED), 190 Singh–Maddala distribution, 81, 96 smoothing hypothesis of income samples, 56 transition income value, 63 Tsallis distribution (TD), 76, 96, 98, 99, 247 weak Pareto law, 80 Weibull distribution, 81 Indian colonial drainage, 158, 261 indifference curves, 11 individual income, 56 individual investment index, 183, 185 inequality in general, 102 in global energy consumption, 158 income, 102 income from capital, 141 measures, 62 process, 89, 154 wealth, 102 in wealth, 141 information theory, 169 inheritance society, 137 interest rate risk premium, 251 risk-free, 251 investment definition of, 182 valuation, 205


investors, 250 confidence, see confidence desired investment fraction, 254 perceived asset position, 255 investors consumption actual, 255 desired, 255 invisible hand, 9, 52 IS–LM model, see economics, neoclassical Jq (x) function, 77 Jacobian matrix, 219 Juglar cycles, 258 k-index, 62 κ-exponential, 77 κ-generalized distribution (KGD), see income distribution κ-logarithm, 77 Keen model, 271, 272 bankers income, 263 share, 263, 265, 266 basic motivation, 262 capital depreciation, 264 debt to income ratio, 263, 265, 266 debt-financed instability, 262 debt-induced breakdown, 267, 268 growth, 267 with government intervention, 267 interest rate, 263 outstanding debt, 263 Ponzi finance, 268 three classes, 266 Keynes’ General Theory, 208 Keynesian confidence, 206 practical men, 206, 207, 251 uncertainty, 206 KGD, 78 kinetic theory of gases, 157 Kitchin cycles, 235, 258 Knightian uncertainty, see uncertainty, Knightian known knowns, see Rumsfeld knowledge classification known unknowns, see Rumsfeld knowledge classification Kolkata index, see k-index Kondratieff cycles intermediate, 259 short, 235 Kuznets curve, 107 Kuznets’ fairy tale, 107, 136

labor amount, 129, 234 amount employed, 213 commanded, 9 force, 129, 213 force, cost of, 120, 213 income, 119 percentage share, 211, 214 productivity, 213 rate of return, 120 reserve army, 11, 214, 230 supply, potential, 213 labor-capital income split, 83 laissez-faire economy, see economy large-scale asset purchases, see quantitative easing law of nature, 143 Lehman Brothers, 210 limit cycle, 203, 211, 217, 219, 221 limits of knowledge, 21, 42 LnPD, 71 log-normal distribution, 70 log-normal–Pareto distribution (LnPD), see income distribution log-periodic oscillation, 96, 99 logistic curve, 271 Lorenz curve, ix, xii, 56, 59, 60, 110, 190 line of perfect equality, 60 perfect inequality, 60 Lotka–Volterra cycles, 222 model, 221, 245, 269 structural instability, 245 lower class, see income classes (Piketty) M–C–M′ cycle, 10, 167 Manhattan Project, xv, 103 marginal productivity of capital, 126 utility, 11 marginal product capital, 128 labor, 128 marginalism, 11, 126 marginalist revolution, 11 market sentiment, 253 Marx circuit, 211, 221, 225, 229, 230 Marx’s apocalypse, see principle of infinite accumulation Marxian cycles commodity, see C–M–C cycle money, see M–C–M′ cycle Marxian economics, 10 Marxism, 10 Marxist ideology, 222 maximum probability, 57 Maxwell’s demon, 165, 186 measurement accuracy, 148

error, 148 margin of error, 41, 149 propagation, 149 random, 149 systematic, 112, 115, 126, 149 true error, 148 experimental bias, 149 observational bias, 149 precision, 148 true value, 148 mechanics classical, 4 Newtonian, 4 quantum, 4 statistical, 4 mercantilism, 7 meritocratic extremism, 134 metaphysics, 6 middle class, see income classes (econophysics), see income classes (Piketty) Minsky cycle, 209 financial instability hypothesis, 208, 210, 249, 259, 262, 271 moment, 209 open source financial modelling program, 269 “stability is destabilizing”, 210 theory, 209 money conservation hypothesis, see conservation law money conservation law, see conservation law money conservation model, see conservation law money supply, 255 national income, see income, national national output, 124 natural place, 6 salary, 10 world, see real negative money, 162 net wealth, 108 Nobel Prizes in Chemistry, 33, 53 in Economics, 37, 38, 107, 223 in Physics, 5, 19, 26, 44, 51, 103, 108 nonequilibrium systems, 54 dissipative structures, 54 entropy reducing, 54, 165 far-from-equilibrium systems, 53 self-organization, 54 nonequilibrium thermodynamics, 47, 53, 165 observational equivalence, see structural stability oligarchic divergence, 141


one parameter inequality process, see Angle inequality process, 172 orbit, 203 output to labor ratio, 213 outstanding debt, 263 PAE movement, see economics, post-autistic Palma ratio, 197 Palomba model, 269 Pareto efficiency, 13 exponent, 66, 82 index, 66, 82, 187, 189, 192 power law, ix, 13, 66, 164, 192, 235 power-law tail, 66, 88, 189, 192, 195, 197 Pareto–Lévy law, see income distribution partial equilibrium analysis, see economics, neoclassical partial knowledge, see Rumsfeld knowledge classification patrimonial middle class, 111, 137 PDF (probability distribution function), 57 PED, 190 perceived asset position of the investor, 255 perceived income flow of rentiers, 255 percentage share of labor, see labor perfect inequality, see Lorenz curve phase diagram, 226 evolution, 235 plane, 211, 217, 225, 234–237, 240, 247 portrait, 211, 217, 227, 236, 237, 244, 247, 248 space, 211, 227, 235, 244 Phillips curve, 214, 218–220, 232, 240, 264, 270, 272 photoelectric effect, 108 physics Aristotelian, 6 classical, 4 modern, 4 Newtonian, 4 statistical, 4 physiocrats, 8 Pietra index, 62 Piketty fundamental inequality r > g, see Piketty’s central thesis Piketty inequality r > g, see Piketty’s central thesis Piketty’s central thesis, 125, 137, 141 Piketty’s fundamental laws of capitalism, see fundamental laws of capitalism placement, 255 political economy, 8 classical, 9 Ponzi finance, 268 financiers, 209 population number, 213 posterior, see probability


postulates, 143 potential labor supply, 213 poverty index, 184, 185 power law double, 85, 89 everywhere, 82 fractal, 50 Pareto, 13 “practical men,” see Keynesian practical men practical equivalence, see structural stability precision, see measurement predator–prey cycles, 222, 247 model, 221, 245, 247, 269 structural instability, 245 present value, 250, 251 price just, 7 market, 9 natural, 9 in political economy, 182 price elasticity of demand, 116 price index fractional rate of change, 253 lower barrier, 253 for shares, 253 soft barrier, 253 principle of infinite accumulation, 107, 116, 132, 181 Principle of Structural Stability, 245, 246 prior, see probability privatization, 131 probability Bayesian, 42 definitions, 42 frequentist, 42 as frequency, 42 as a measure of belief, 42 normalization, 57 posterior, 43 prior, 43, 152 subjective, 43 producers, 221 production function, 127 Cobb–Douglas, 129 production process function, 168 production system, 54 productive capacity, 234 profit level, 214 rate, 264 share, 211, 214, 263 progressive taxation, see tax property income, 195 pseudoscience, 6 purely exponential distribution (PED), see income distribution purposeful action, 169

q-exponential, 74 q-logarithm, 74 quantitative easing, 210 real, 25 real world, see real reality, 25, 205 reductionistic hypothesis, 51 regressive taxation, see tax relativity the general theory, 4, 40 the special theory, 4, 40 rentiers, 205, 208, 221, 250, 261 as economic parasites, 250, 261 representative agent, 52 firm, 12, 52 reserve army of labor, see labor, reserve army rich class, see income classes (econophysics) risk, 149, 186, 205 risk premium, 251 Robin Hood index, 62 ruling class, 203, 205 elite, 205 Rumsfeld knowledge classification, 151 known knowns, 151 known unknowns, 151 unknown unknowns, 42, 151 S-shaped curves, 271 saving rate, see economic, saving rate savings propensity fraction, 172 savings rate, see economic, savings rate Say’s law, 10, 182, 264 scientific dogmatism, 28 method, 6, 20, 29, 32 application, 41 upside down, 29, 38, 39 realism, 25, 28 scientific truth, 21 adequacy principle, 23 auxiliary assumptions, 26 dogmatism, 25 falsification, 26 provisional, 23 strong correspondence principle, 23 testability principle, 26 weak correspondence principle, 23, 24 self-adaptation, see complex systems Shannon’s entropy, see entropy, information theory share current market price, 252 market prices, 252 price index, 253 share of income from capital, see income

sigmoid functions, 271 Singh–Maddala distribution, see income distribution slavery, 101, 102, 158 in Brazil, 101 smoothing hypothesis of income samples, see income distribution social index, 185 society of rentiers, 111, 137 sociophysics, 19 Sordi–Vercelli model assumptions, 271 speculation thermodynamic equivalent, 161 “stability is destabilizing,” see Minsky stability dogma, 246 Standard & Poor’s 500, 252 state of confidence, 206 state of equilibrium, 217 state variables, see thermodynamics statistics Bayesian, 42 frequentist, 42 steady state circulation mode, see circular flow structural stability, 50, 244 observational equivalence, 245 practical equivalence, 245 principle of, 245, 246 topological equivalence, 245 stylized facts, 106 super-rich, see income classes surplus, 202 surplus theory of social stratification, 154 synthetic indices, 62, 114 synthetic inequality indices, 113 systematics, see measurement, error Tableau Économique, 8, 202 tax in inelastic collisions, 175 progressive, 178–180 regressive, 178–180 TD, 76, 247, 248 the 0.1%, see income classes (Piketty) the 1%, see income classes (econophysics), see income classes (Piketty) the 99%, see income classes (econophysics) the bottom 50%, see income classes (Piketty) the bottom 9% of top 10%, see income classes (Piketty) the middle 40%, see income classes (Piketty) the top 0.1%, see income classes (Piketty) the top 10%, see income classes (Piketty) Theil index, 63, 81, 113 theoretical pluralism, see Boltzmann theory catastrophe, see catastrophe theory

  chaos, see chaotic dynamics
  complex system, see complex systems
  as images, see Boltzmann, 41
  not even wrong, 19
  predictive ability, 24
  as representations, see Boltzmann
  singularity, 48
  theoretical pluralism, see Boltzmann
thermodynamics
  classical, 4, 201
  second law, 165
  state equations, 117
  state functions, 117
  state variables, 117
  statistical, 4, 201
thousandth centile, 109
three Cs theories, 47
time constant, 124, 128, 129, 213, 251
time horizon, 246
topological equivalence, see structural stability
trade
  definition of, 182
trade cycle, 203, 204, 249, 254
true error, see measurement
true randomness, 42, 152
true value, see measurement
Tsallis q-functions, 74
Tsallis distribution (TD), see income distribution
unbalanced growth, see circular flow
uncertainty
  economic, 205–207
  in economics, 149
  extreme uncertainty, 151, 152
  in physics, 148
  instrumental, 148
  Keynesian, 206, 242
  Knightian, 150, 205
  true uncertainty, 150, 207, 208
underclass, see income classes (econophysics)
unemployment rate, 211
unknown knowns, 152, 158
unknown unknowns, see Rumsfeld knowledge classification
upper class, see income classes (Piketty)
utility, 11
valuation formula, 251
value
  creation of, 168, 169
  in econophysics, 47
  exchange value, 9
  in political economy, 182
  surplus value, 10, 167
  use value, 9
  utility, 9
value conservation law, see conservation law


wage
  average real value, 213
  average value, 129
  bill, 120, 125, 213
  just, 7
  rate, 120
  share, 214
wave-particle duality, 108
weak Pareto law, see income distribution

wealth
  Piketty’s definition, 108
  subsistence, 154
  surplus, 154
wealthy class, see income classes (Piketty)
Weibull distribution, see income distribution
well-to-do class, see income classes (Piketty)
work (physics), 169
zero income values, 58

Author Index

Abramowitz, M., 73
Abreu, Everton M. C., 97
Acemoglu, Daron, 119
Aitchison, J., 70, 87
Alexander, Herbert B., 101
Aliber, Robert Z., 208, 210, 242
Ampère, André-Marie, 5
Anderson, Philip W., 51
Angle, John, 89, 90, 154, 158
Aristotle, 5–7, 16
Arnold, Barry C., 63–65, 80, 82, 147
Arnold, Vladimir I., 48, 50
Arsenault, Natalie, 101
Arshad, Sidra, 82
Atkinson, A. B., ix, 83, 104, 224
Ausloos, Marcel, 81
Ayres, Robert U., 54
Bachelier, Louis, 33
Backhouse, Roger F., 7, 13, 15, 30
Baggott, Jim, 26, 32
Ball, Philip, 8
Balzac, Honoré de, 137, 142
Banerjee, Anand, 85, 89, 91, 92, 158, 159
Banerjee, Subhasis, 81
Barbosa-Filho, Nelson H., 234
Barnett, William, II, 117
Bayes, Thomas, 42
Bella, Giovanni, 273
Bernanke, Ben S., 37, 210
Bernstein, Peter, 38
Bird, Kai, xvi
Black, Fischer Sheffrey, 38
Blatt, John Markus, 29, 35, 104, 201–204, 207, 208, 210, 212, 218, 219, 221, 242, 249–251, 253, 254, 258–261, 271, 276
Blaug, Mark, 29
Blume, Lawrence E., 105
Boghosian, Bruce M., 197

Bohr, Niels, 4
Boltzmann, Ludwig, 4, 20–24
Borges, E. P., 76
Bose, Indrani, 81
Bouchaud, J. P., 29, 197
Bourguignon, Marcelo, 81
Boxer, C. R., 261
Boyce, William E., 219, 245
Boyd, Ian, 35, 203, 208, 221, 242, 249–251, 253, 254, 258–261, 271, 276
Brealey, Richard A., 251, 252
Brian Arthur, W., 53
Brown, J. A. C., 70, 87
Brzezinski, Michal, 84
Buchanan, Mark, 29
Buiter, Willem, x
Calderín-Ojeda, Enrique, 81
Caprara, Sergio, 29
Cattani, Antonio David, 141, 165
Celsius, Anders, 5
Cercignani, Carlo, 20, 21
Ceriani, L., 56
Cerqueti, Roy, 81
Chakrabarti, B. K., 65, 84, 159, 170, 172, 173, 182
Chakraborti, Anirban, 172, 173
Chami Figueira, F., 68
Champernowne, D. G., 81
Chang, Ha-Joon, 8, 16, 30
Chatterjee, Arnab, 62, 65, 173, 201
Chen, Justin, 161
Christian Silva, A., 85, 166, 191, 193, 194
Clementi, F., 87, 96, 97
Clementi, Fabio, 79, 80, 82
Coelho, Ricardo, 84, 85
Colacchio, Giorgio, 273
Colander, D., x, 37
Conan Doyle, Arthur, vi
Conceição, Octavio A. C., 153


Conde-Saavedra, G., 82
Conference Board of Canada, 101
Copernicus, Nicolaus, 3
Cosgel, Metin N., 150
Costa Lima, B., 268, 272
Courtault, Jean-Michel, 33
Cowell, Frank, 84
Cristelli, Matthieu, 29, 52
Daase, Christopher, 152
D’Agostini, Giulio, 42, 44
Daly, Herman, 34
Darwin, Charles, 13, 23
Davidson, Paul, 152, 207
Davies, James B., 105
de Broglie, Louis, 5
de Jong, Frits J., 117
de Maio, Fernando G., 62
de Oliveira, Paulo Murilo Castro, 177, 180
Deaton, Angus, 80
Delfaud, Pierre, 9–11, 14, 16
Democritus, 5
Desai, M., 224, 225, 270, 272
Descartes, René, 108
Diamond, Jared, 154
Dibeh, Ghassan, 234
Dirac, Paul, 5
Dizikes, Peter, 150
Dohnal, Mirko, 273
Domma, Filippo, 81
Douglas, Niall, 31
Doyne Farmer, J., x, 36
Drăgulescu, A., x, 66, 82, 85–88, 158–163, 170, 174, 190, 191, 194, 195
Drakopoulos, Stavros, 11, 37
Duhem, Pierre, 20
Durlauf, Steven N., 105
Dyson, Freeman, 103
Ehnts, Dirk, 269
Einstein, Albert, 4, 26, 27, 40, 108
Eliazar, Iddo, 61–63
Ellis, George, 51, 57
Estola, Matti, 104, 117
Eugene Stanley, H., x, 36
Euripides, xv
Faraday, Michael, 4
Farmer, Roger E. A., 39
Feller, William, 194
Felton, Andrew, 102
Fermi, Enrico, 103
Ferrari-Filho, Fernando, 153
Ferrero, Juan C., 72, 74, 82, 90, 93, 98
Feynman, Richard, 44, 45, 143, 212
Financial Times, 142
Fiorio, Carlo V., 241
Fix, Blair, 83

Flaschel, Peter, 234
Folsom, R. G., 117
Forbes, 140
Foster, James E., 99
Frank, Robert, 118
Friesen, Steven J., 85
Fröhlich, Nils, 117
Fujiwara, Yoshi, 82
Fullbrook, E., 105
Gabaix, Xavier, 82
Gabisch, Günter, 235, 249, 259
Galam, Serge, 19, 45, 46
Galilei, Galileo, 3, 6, 16, 20, 23, 26, 28, 39
Gallegati, Mauro, 32, 82, 87, 96, 103, 165
Gandolfo, Giancarlo, 203, 212, 214, 269, 272
García-Molina, M., 232–234
Geanakoplos, John, 36
Georgescu-Roegen, Nicholas, 54
Ghosh, Asim, 62
Gibbs, Josiah Willard, 4
Gibrat, Robert, 70, 87
Gini, Corrado, ix, 56
Gödel, Kurt, 34
Goldstein, Jonathan P., 234
Gompertz, B., 68, 92
Gonzalez, R. A., 117
Goodwin, Richard M., xiii, 201, 210, 218, 229, 230, 241, 249, 269
Gradshteyn, I. S., 73
Graham, Carol, 102
Grasselli, Matheus R., 240, 241, 268, 272
Gregory Mankiw, N., 34
Grudzewski, W. M., 117
Guala, Sebastian, 175, 177–179
Guerra, Renata R., 81
Gupta, Abhijit Kar, 173
Hagemann, Harald, 259
Harari, Yuval Noah, 154
Häring, Norbert, 31
Harvie, D., 226–229, 232, 240
Hayek, Friedrich von, 15
Heisenberg, Werner, 20, 27
Helm, Georg Ferdinand, 20
Helmholtz, Hermann v., 20, 22
Henle, J. M., 97
Herrera-Medina, E., 232–234
Hertz, Heinrich R., 5, 20, 22
Hickel, Jason, 157
Hicks, John Richard, 15, 30
Hideaki, Aoyama, 62, 80
Hillinger, Claude, 109
Hirsch, Morris W., 244
Hodrick, Robert J., 230
Hudson, Michael, 29–31, 34, 210, 250
Hudson, Richard L., 33
Huston, Simon, 235, 259

Ifrah, Georges, 7
Iglesias, J. R., 19
Inoue, Jun-ichi, 62
International Monetary Fund, 232
Irwin, Neil, x
Ispolatov, S., 158
Jadevicius, Arvydas, 235, 259
Jagielski, M., 82, 143
Jevons, William Stanley, 11
Jones, Sheila, 32
Jordá, Vanesa, 62
Joule, James Prescott, 5
Jovanovic, Franck, 20
Juglar, Clément, 258
Kakwani, Nanak C., 61, 63–65, 80
Kalecki, Michał, 15
Kamihigashi, Takashi, 82
Kaniadakis, Giorgio, 77–79
Katselidis, Ioannis, 11, 37
Kay, John, 43
Keen, Steve, x, 29, 30, 165, 169, 208, 262, 264, 266–269
Kelvin, see Lord Kelvin
Kennedy, Paul M., 205
Kepler, Johannes, 3
Kessler, Oliver, 152
Keynes, John Maynard, vi, 14, 43, 153, 205–208, 250
Kim, Minseong, 117, 129
Kindleberger, Charles P., 208, 210, 242
King, J. E., 105
Kirman, Alan, x, 37, 52, 53
Kitchin, Joseph, 235
Klaas, Oren S., 82
Kleiber, Christian, 64, 65, 70, 81, 82, 87, 89, 157
Knight, Frank Hyneman, 149, 207
Kolmogorov, Andrey N., 50
Kondepudi, Dilip, 117
Kondratieff, Nikolai, 235
Kotz, Samuel, 64, 65, 70, 81, 87, 89, 157
Krugman, Paul, x, 37
Krusell, Per, 105
Kuhn, Thomas S., 26, 39
Kumamoto, Shin-Ichiro, 82
Kurz, Heinz D., 201
Kuznets, Simon, 107, 136
Lallouche, Mehdi, 172
Landau, Lev Davidovich, 32
Langlois, Richard N., 150
Laplace, Pierre Simon, 42
Lawrence, Scott, 158
Lee, Kotik K., 244, 246
Lee, Wen-Chung, 62
Legrand, Muriel Dal-Pont, 259
Leibniz, Gottfried Wilhelm, 44
Lemons, Don S., 117, 161, 169


Lighthill, James, 49, 246
Lindert, Peter H., 105
Lord Kelvin, 5
Lorenz, Edward Norton, 50
Lorenz, Hans-Walter, 235, 249, 259
Lorenz, Max Otto, ix, 56
Lotka, Alfred James, 221
Lucas, Robert E., 37
Luke, Yudell L., 73
Lux, Thomas, 157
Lydall, H. F., 83
McCauley, Joseph L., 29, 34, 37
McDonald, J. B., 62
McGrayne, Sharon Bertsch, 43
Mach, Ernst, 20, 22, 23
Maclachlan, Fiona, 81
McLeay, Michael, 162
Maheshwari, Aditya, 240, 241
Malthus, Thomas, 9, 115
Mandelbrot, Benoit B., 30, 33, 50, 80
March, Lucien, 88
Marglin, Stephen A., 17
Marques, Leonardo, 101
Marshall, Alfred, 12
Martin Hattersley, J., 34
Marx, Karl, 10, 12, 51, 98, 107, 115, 116, 132, 166, 168, 169, 181, 210, 214, 229
Massari, Riccardo, 99
Massy, Ibrahim, 234
Maxwell, James Clerk, 4, 22, 108
Medeiros, Marcelo, 102, 112, 114
Mendes, Marcos, 102
Menger, Carl, 11, 15
Miguelote, Alexandre Y., 50, 82
Milanovic, Branko, 101, 102, 104, 105, 107, 157, 276
Mill, John Stuart, 9
Mimkes, Jürgen, 165
Minsky, Hyman, 208–210
Mirowski, Philip, 11, 32, 47
Mises, Ludwig von, 15
Mohun, Simon, 229–232, 245, 246
Monk, Ray, xvi
Moreno, Álvaro M., 229
Morgan, Jamie, 105, 205
Morgan, Zachary R., 101
Morowitz, Harold J., 54
Morris, Errol, 151
Morrisson, Christian, 102, 105
Moser, Jürgen, 50
Moskvitch, Katia, 40
Moura, N. J., Jr., 37, 65, 68, 82, 91, 94, 95, 212, 234–239, 247, 270
Murtin, Fabrice, 102
Nair, Indira, 54
Neal, F., 117, 118
Neves, Juliano C. S., 6


Newman, M. E. J., 82
Newton, Isaac, 3, 16, 44
Nicolis, Grégoire, 54
Nikolaidi, Maria, 210
Nirei, Makoto, 82
Oancea, Bogdan, 83, 113, 143
Ohm, Georg Simon, 5
Ormerod, Paul, 29
Osborne, Matthew F. M., 34
Ostry, Jonathan D., 102
Ostwald, Wilhelm, 20
Pais, Abraham, 40
Palley, Thomas I., x, 105
Palma, José Gabriel, 197
Palmer, Tim, 50
Palomba, Giuseppe, 269
Paradisi, Paolo, 52
Pareto, Vilfredo, ix, 12, 13, 65, 66, 82
Parry Lewis, J., 117, 118
Pascal, Blaise, 5
Patnaik, Utsa, 157, 261
Patriarca, M., 172
Patriarca, Marco, 173
Paul Cockshott, W., 147, 160, 165, 166, 168, 169
Pauli, Wolfgang, 19
Percival, Ian, 219, 245
Peters, Edgar E., 50
Petracca, Enrico, 269
Petty, William, 8
Phillips, Alban William, 214
Phillips, Peter C. B., 230
Pietronero, L., 52
Piketty, Thomas, x, xiv, 84, 104–117, 119–123, 125–138, 140–143, 192, 195, 212, 224
Pinto, Carla M. A., 82
Planck, Max, 4, 20, 26
Poincaré, Henri, 20, 27, 33, 47
Pokrovski, Vladimir N., 54, 169
Prescott, Edward C., 38, 230
Pressman, Steven, 105, 113, 142
Price, Richard, 42
Prigogine, Ilya, 53, 54, 117
Puu, Tönu, 203
Quade, Wilhelm, 117
Quesnay, François, 8, 202
Rader, Trout, 117
Rankin, Jennifer, x
Rau, Nicholas, 204
Reith, John E., 81
Ribeiro, Haroldo, 165
Ribeiro, Marcelo Byrro, 20, 21, 23–25, 28, 37, 50, 65, 68, 82, 91, 94, 95, 212, 234–239, 247, 270
Ricardo, David, 9, 115

Richards, Derek, 219, 245
Richmond, Peter, 84, 90, 92, 165
Robins, Lionel, 30
Robinson, Joan, 30
Roehner, Bertrand M., 29, 37, 92
Roine, Jesper, 105
Rose, Christopher, 101
Rosser, J. Barkley, Jr., 82, 173, 182
Rosłanowska-Plichcińska, K., 117
Ruccio, David, 205
Rufus, Anneli, 34
Rumsfeld, Donald H., 42, 151
Ryzhik, I. M., 73
Sadi Carnot, Nicolas Léonard, 4
Sakstein, Jeremy, 40
Salvadori, Neri, 201
Sarabia, José M., 62, 63, 81
Saunders, P. T., 48
Say, Jean-Baptiste, 9
Scafetta, Nicola, 174, 175, 182, 183, 185, 187–189, 192
Scheidel, Walter, 85
Schinckus, Christophe, 20, 205
Schneider, Markus P. A., 158
Schrödinger, Erwin, 4
Schumpeter, Joseph, 15
Scott Fitzgerald, F., vi
Screpanti, Ernesto, 7–11, 13, 31
Serebriakov, Vladimir, 273
Shaikh, Anwar, 92, 195, 196
Shannon, Claude, 169
Sharma, Kiran, 173
Sharma, Manoj, 157
Sharpe, William F., 38
Sherwin, Martin J., xvi
Shih, Shagi-Di, 223
Shone, R., 117, 118, 203, 219
Silk, Joe, 32
Simon, Herbert A., 15, 83
Sinha, S., 29, 172, 173
Smale, Stephen, 244
Smith, Adam, 9, 98
Smith, Anthony A., Jr., 105
Soares, Abner D., 74, 76, 96, 99, 247, 248
Soddy, Frederick, 33
Sokolov, Igor M., 62
Solow, Robert M., 223, 225, 226
Soos, Philip, x
Sordi, Serena, 271
Sørensen, Peter Birch, 118, 119
Soriano-Hernández, P., 82
Souma, Wataru, 82, 87, 89
Souza, Pedro H. G. F., 102, 112, 114
Sraffa, Piero, 15
Sreevatsan, Ajai, 158
Stegun, I. A., 73
Stockhammer, Engelbert, 210

Strogatz, Steven H., 52
Sylos Labini, Francesco, 29
Tacchella, Andrea, 52
Taleb, Nassim Nicholas, 42, 152
Tao, Yong, 85
Tarassow, Artur, 234
Taylor, John Robert, 148
Taylor, Lance, 234
Terzi, Andrea, 152, 153
Texocotitla, Miguel Alvarez, 117, 123
Thomson, William, see Lord Kelvin
Trans-Atlantic Slave Trade Database, 101
Tsallis, Constantino, 74
United Nations, 102, 104, 232
Unzicker, Alexander, 32
Vadasz, Viktor, 232
van der Ploeg, Frederik, 273
Van Norden, Richard, 52
VanderPlas, Jake, 42, 43
Varoufakis, Yanis, 108
Veblen, Thorstein, 15
Veneziani, Roberto, 229–232, 245, 246
Vercelli, Alessandro, 271
Verme, P., 56
Videira, Antonio A. P., 20, 21, 23–25, 28
Vijaya, Ramya M., 102
Volta, Alessandro, 5
Volterra, Vito, 221
von Neumann, John, 103
Waldenström, Daniel, 105
Walras, Léon, 11, 202
Watt, James, 4
Weisstein, Eric W., 70
West, Bruce J., 174, 175
Whitta-Jacobsen, Hans Jørgen, 118, 119
Whittaker, E. T., 230
Wilk, G., 76
Winsor, C. P., 68
Woit, Peter, 19, 32
Wolfers, Justin, x
Wolfson, Michael C., 99
World Bank, 100, 101, 112, 232
Włodarczyk, Z., 76
Xu, Yan, 88
Yakovenko, V. M., x, 66, 82, 85–88, 107, 158–163, 166, 170, 173, 174, 182, 190, 191, 193–195
Yamano, Takuya, 75
Young, Thomas, 108
Zachariah, David, 166, 168
Zamagni, Stefano, 7–11, 13, 31
Zencey, Eric, 34
Žižek, Slavoj, 152
Zou, Yijiang, 88
Zucman, Gabriel, 119