
The Structure of Policy Change

DEREK A. EPP

The University of Chicago Press Chicago and London

The University of Chicago Press, Chicago 60637
The University of Chicago Press, Ltd., London
© 2018 by The University of Chicago
All rights reserved. No part of this book may be used or reproduced in any manner whatsoever without written permission, except in the case of brief quotations in critical articles and reviews. For more information, contact the University of Chicago Press, 1427 East 60th Street, Chicago, IL 60637.
Published 2018
Printed in the United States of America
27 26 25 24 23 22 21 20 19 18    1 2 3 4 5

ISBN-13: 978-0-226-52969-1 (cloth)
ISBN-13: 978-0-226-52972-1 (paper)
ISBN-13: 978-0-226-52986-8 (e-book)
DOI: 10.7208/chicago/9780226529868.001.0001

Library of Congress Cataloging-in-Publication Data
Names: Epp, Derek A., author.
Title: The structure of policy change / Derek A. Epp.
Description: Chicago ; London : The University of Chicago Press, 2018. | Includes bibliographical references and index.
Identifiers: LCCN 2017047640 | ISBN 9780226529691 (cloth : alk. paper) | ISBN 9780226529721 (pbk : alk. paper) | ISBN 9780226529868 (e-book)
Subjects: LCSH: Policy sciences—United States. | Public administration—United States. | United States—Politics and government.
Classification: LCC H97 .E76 2018 | DDC 320.60973—dc23
LC record available at https://lccn.loc.gov/2017047640

This paper meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).

For my parents

CONTENTS

Acknowledgments / ix

PART I: PUNCTUATIONS IN PUBLIC POLICIES

ONE / The Rise and Fall of NASA’s Budget and Other Instabilities / 3
TWO / A Macroscopic View of the Policy Process / 15
THREE / Complexity, Capacity, and Collective Decisions / 29
FOUR / Distributional Assessments of Institutional Response / 46

PART II: ISSUE COMPLEXITY AND INSTITUTIONAL CAPACITY

FIVE / Instabilities in Federal Policy Making / 67
SIX / Institutional Capacity in the American States / 96

PART III: POLITICS AND COLLECTIVE INTELLIGENCE

SEVEN / Decision-Making Pathologies / 121
EIGHT / Revisiting the Efficiency of the Private Sector / 145
NINE / Designing Responsive Institutions / 156

Notes / 165
References / 169
Index / 179

ACKNOWLEDGMENTS

This book would never have been possible without the guidance of Frank Baumgartner. The best decision I made as a graduate student was asking Frank to be my adviser. I entered graduate school knowing nothing about the profession, but I was tipped off that Frank would be a good choice for adviser by a wall in his office that is covered with teaching accolades. He lived up to his reputation. Long ago, Frank established himself as a terrific political scientist. What makes him a good mentor is that he is a terrific person—thoughtful, generous, and unfailingly supportive. Thank you, Frank. I was fortunate to have a lot of good mentors early in my career. Thank you, Jim Stimson, Virginia Gray, Tom Carsey, Chris Clark, Mike MacKuen, and Bryan Jones. Together with Frank, you taught me almost everything I know about political science; most important, that it is best practiced as a collective enterprise, in which we work together to think through new ideas. This book benefited tremendously from three reviewers who read the entire manuscript. Thank you, Chris Wlezien, Chris Koski, and Peter Mortensen. Your reviews were top-notch in every respect: intellectually challenging and critical, but always constructive. Dedicated, thoughtful reviewers are such a luxury, and I am indebted to your insights. Thanks as well to Chuck Myers for your guidance and advice throughout the process. I would like to thank all the faculty and staff at the Rockefeller Center at Dartmouth College. In particular, thanks to Ron Shaiko for believing in me and hiring me for what is without any doubt the very best postdoc available in the profession. Your pursuit of excellence in pedagogy and your insistence that a successful department makes positive contributions to the local community was inspirational. Thanks to Jane DaSilva for being so
kind and supportive. Thanks to Herschel Nachlis, Julie Rose, Sean Westwood, Jeff Friedman, Kathryn Schwartz, and David Cottrell for much-needed distractions from writing. I thank Greg Wolf, Amanda Grigg, John Lovett, and the rest of my graduate student colleagues. I am proud of what we accomplished together during our time at Chapel Hill. I thank Brian Godfrey, my excellent and very capable research assistant. I thank Erica. Most of all, I would like to thank my wonderful parents.

PART ONE

Punctuations in Public Policies

ONE

The Rise and Fall of NASA’s Budget and Other Instabilities

President Eisenhower was dismissive. Having been briefed on the R-7 Semyorka, the Soviet Union’s powerful new rocket, he was well aware that the USSR was capable of putting a satellite into orbit. In a press conference shortly after Sputnik’s 1957 launch, Eisenhower attempted to reassure the American people, conceding that the Soviets had “put one small ball in the air,” but quickly adding “I wouldn’t believe that at this moment you have to fear the intelligence aspects of this.” Later, his chief of staff, Sherman Adams, likened the satellite launch to “one shot in an outer-space basketball game.” What the Eisenhower administration had underestimated was the deep, almost visceral, reaction Americans had to news of the satellite. It was disconcerting on two levels. First, it appeared inconsistent with the prevailing notion that the Soviet Union was a technological backwater, incapable of matching the United States’ economic or scientific prowess. Second, people were skeptical of Eisenhower’s assurance that they had nothing to fear. Radio stations had broadcast the satellite’s signal as it traveled over America, and it seemed obvious that something that so easily violated transnational boundaries presented security risks. Edward Teller, the famous nuclear arms proponent, warned that “America has lost a battle more important and greater than Pearl Harbor.” The public seemed inclined to agree with these ominous sentiments. During the ensuing media frenzy, the Eisenhower administration bowed to public pressure and rethought its initial restraint. It appeared that a major undertaking was needed to reassure the public that America, although second out of the gate, was not going to lose the space race. Change came quickly. Within a year, Eisenhower signed legislation creating the Advanced Research Projects Agency and the National Aeronautics and Space Administration (NASA), as well as the National Defense Education Act, which allocated billions of dollars to helping students go to college to get degrees in math and science. By 1961, when President Kennedy gave his famous speech about putting an American on the moon, US outlays toward spaceflight and technology, a budget category that scarcely existed in the early 1950s, had already increased tenfold from 1957 levels. Altogether, from the launch of Sputnik in 1957 to the moon landing in 1969, US spending on spaceflight increased by almost 5,000 percent. Figure 1.1 illustrates the rapidity of the change by tracking the annual budget authority (the amount of money Congress authorizes government agencies to spend) for spaceflight from the budget category’s introduction in 1949 through 2014. Sputnik’s launch lies at the base of an enormous mountain of new spending, illustrating the urgency with which policy makers reacted to the Soviet satellite. Indeed, the US government pursued a moon mission with an almost single-minded obsession. Even by today’s standards, landing an object on the moon, or even putting something into lunar orbit, is a major technological accomplishment—one that few countries have achieved. In the 1950s, it was a task of unprecedented complexity. Between 1958 and 1965 NASA launched eighteen unmanned lunar missions; the first fifteen failed, many exploding before ever reaching space (Hall 1977). This record of failure is revealing. It shows the tenacity of American society, but it also tells us something about the power of ideas. In this case, the idea that gripped America was the utter necessity of besting

Figure 1.1. Annual budget authority for spaceflight.
Note: Budget authority is presented in constant 2012 dollars.


the Soviet Union, no matter the stakes, the sacrifices, or the nature of the competition. Eventually, NASA turned that string of failures into a series of fantastic successes. In 1969, having successfully planned and executed a manned trip to the moon, NASA’s directors had good reason to feel optimistic about America’s future in space. The next step was the Space Transportation System, which would feature a number of reusable space shuttles that could reliably move astronauts in and out of orbit, a series of space stations leading to a lunar base, and eventually a trip to Mars. Few at the time would have guessed that spending on spaceflight had peaked in 1965, four years before Apollo 11 put astronauts on the moon. Much like President Eisenhower, NASA’s directors had seriously misjudged public sentiment. They had hoped that scientific exploration of space was the idea motivating America’s pursuit of the moon. This was partly true, but the overriding idea was one of fear of and competition with the Soviets. With the space race won decisively for America, enthusiasm for costly space adventures rapidly diminished. The price NASA paid for their success was a 50 percent reduction in their operating budget in the early 1970s. Policy makers had envisioned a trip to the moon, NASA had delivered, and that was as far as the vision went. Although reusable space shuttles were developed in the early 1980s, by 2011 the fleet was grounded due to safety concerns and high operating costs. Today, NASA’s capacity to put people in orbit is less than it was in the 1960s, and, ironically, American astronauts currently buy passage to space on board Soyuz, spacecraft developed and operated by NASA’s Russian counterpart. In retrospect, it has become clear that Eisenhower’s initial restraint was born out of neither ignorance nor false bravado, but rather a careful consideration of the relative military capabilities of the United States and the USSR. In making this assessment, Eisenhower benefited greatly from the information provided by the new U-2 spy planes, which had been flying over the Soviet Union since 1956. Surveillance photos taken by the spy planes made clear that while successful in launching Sputnik, the Soviets were still a long way from developing functional intercontinental ballistic missiles (ICBMs). With the benefit of hindsight, we realize that Sputnik’s launch appears to have been much closer to Adams’s basketball analogy than Teller’s Pearl Harbor. Most of all, Eisenhower wanted to stifle American fears over Soviet Russia—fears that were already greatly heightened in the McCarthy era. Eisenhower knew the United States was technologically and militarily superior to the USSR and wished to avoid a costly and, to his mind, pointless arms race. In this he failed. Robert A. Divine, in his case
study of the Sputnik incident, concludes by noting that “Dwight Eisenhower had learned how much more important appearances were than reality when it came to space feats” (Divine 1993, 205). American concerns about falling behind Soviet missile production (the missile gap) would feature prominently in foreign policy for the next three decades and usher in an unprecedented wave of defense spending. Eisenhower’s inclination was to act pragmatically, carefully plotting a course of action for his administration based on the best intelligence available at the time. If he had been successful at allaying American fears over Sputnik, it is not difficult to imagine a very different history, perhaps one in which the United States never went to the moon, but also perhaps one that avoided the dramatic proliferation of nuclear arms in the latter half of the twentieth century. In the end, Eisenhower was unable to avoid getting swept up in the “red scare” that gripped the country. Consequently, the period is characterized by dramatic swings in US spending on spaceflight, as policy makers moved rapidly to counter the perceived Soviet threat, and then, once the American advantage was clear, shifted their attention and budgetary priorities elsewhere. Sputnik nicely illustrates the two competing forces that operate on the policy-making process. On one hand are the forces of pragmatism, where information is sought out and engaged with, and where there is a tight link between the course of public policy and the size of problems. On the other are tidal waves of popular opinion as attention becomes momentarily fixed on particular fears, perceptions, or ideas, only to drift elsewhere as alarming new discoveries transition into yesterday’s news. These forces are not always in opposition. Public scrutiny can gravitate to issues where the best available information indicates that problems are indeed severe and in need of innovative solutions. Sometimes, however, popular opinion demands a solution to a problem that experts see as relatively trivial. Such was Eisenhower’s confusion when he confided to his science advisory committee, “I can’t understand why the American people have got so worked up over this thing. It’s certainly not going to drop on their heads” (Divine 1993, 12). Or, conversely, a lack of public enthusiasm can prevent policy action on problems that greatly alarm the experts. Which of these forces determine the course of public policy in the United States? That is the central question of this book. Do pragmatists usually carry the day, or do policy makers simply ride from one wave of collective enthusiasm to the next? Are the instabilities evident in spending on spaceflight generally characteristic of the policy-making process, or are they the exception? I pursue these questions because they speak di-
rectly to the nature of public governance. As scholars, citizens, and practitioners we have good reason to seek their answers. If public governance is an erratic process, rather than a meticulous and deliberate one, then this has important implications for the responsiveness of government to its citizens and to social problems. If policy makers are meticulous planners, then citizens can expect the urgency of problems to factor heavily into the government’s policy response. In this world, the launch of Sputnik, which the Eisenhower administration recognized as a relatively minor concern, should not provoke a tenfold increase in US spending on spaceflight. On the other hand, an erratic policy-making process is one that is subject to the whims of enthusiasm. The size and urgency of problems still matter, but these are far from the most important factors. Instead, it is the prominence of problems—how much attention they can command—that is critical. Even somewhat minor problems could provoke mammoth responses from policy makers, if they receive enough attention. This is an altogether more random and uncertain type of policy making, as it depends more on the fickleness of public perceptions than on concrete, knowable facts.

Policy Instabilities

A growing consensus among political scientists is that public governance adheres much more closely to the erratic-process model than to the meticulous-planner model. Two separate lines of inquiry support this conclusion. First came observational evidence: the recognition that major disruptions of the kind examined above are relatively common within the policy process. For instance, we find that almost every programmatic area of the US federal budget has undergone at least one massive adjustment within the last fifty years, inflection points when government expenditures change dramatically from one year to the next. Spending on spaceflight is in no way unique; almost every policy area has its own Sputnik-type story where waves of enthusiastic support for an idea (or rapid disillusionment with a previously popular status quo) result in massive adjustments. And this instability is not limited to federal budgets. The same erratic-change pattern is evident in other policy activities, such as the number of congressional bills addressing an issue, and at the subnational level. These disruptions have been documented through a rich case-study tradition within policy scholarship. For example, a classic in the field is Kingdon’s 1984 Agendas, Alternatives, and Public Policies, which, based on extensive fieldwork in Washington, DC, develops his “policy windows” approach. He explains: “Policy windows open infrequently, and do not stay
open long. Despite their rarity, the major changes in public policy result from the appearance of these opportunities” (Kingdon 1984, 166). Social scientists have long recognized the significance of these brief but dramatic periods of political momentum, and they go by various names in the literature: issue-attention cycles, positive-feedback loops, policy cascades, waves, and slippery slopes (Downs 1972; Arthur 1989; Bikhchandani, Hirshleifer, and Welch 1992; Pierson 2000). During such periods, political change can be explosive, following an exponential rate of growth or decay (Casstevens 1980; Baumgartner and Jones 1993). Furthermore, these punctuations are not simply micro-level phenomena affecting individual policy areas. Occasionally, the entire US federal budget experiences major adjustments. Figure 1.2 tracks US federal spending from 1792 to 2014, with outlays in the top panel and the annual percentage changes in those outlays on the bottom.

Figure 1.2. Annual outlays by the US federal government.
Note: Outlays are presented in constant 2000 dollars. Data are available from the Policy Agendas Project and are constructed from data series made available by the Treasury Department and the Office of Management and Budget.


This reveals five occasions when US spending increased by over a hundred percent from one year to the next. The two most dramatic punctuations coincide with the Civil War and World War I, when the US government ramped up spending by around five hundred percent. The evidence for major disruptions in the status quo is very strong, but why does it matter? Policy punctuations are interesting because they seem inconsistent with a government that is meticulously plotting its course, carefully updating public policies in response to incoming information. Instead, they seem to imply a government that is often caught unawares by the severity of issues, or that dramatically overreacts to relatively minor concerns. The fact that punctuations can be observed at every governmental level, from individual budget categories to the entire federal budget in aggregate, suggests that they are fundamental features of the policy-making process rather than rare, idiosyncratic events (or artifacts of looking at relatively small budget categories). So too does the fact that they occur throughout history and are not isolated to any specific time period. Punctuations then are observational evidence that point to policy making as an erratic, disproportionate process in which attention to problems lurches from one temporary equilibrium to the next.

Explaining Punctuations with Theory

The second development in support of the erratic-process model was theoretical. Scholars, starting with Padgett in his seminal 1980 article “Bounded Rationality in Budgetary Research,” began to link models of government agenda setting with distributions of changes in public policy. Meticulous policy making requires a proportional engagement with society’s various problems. If a problem is gradually growing worse over time, policy makers should respond by gradually increasing the scale and frequency of interventions required to solve it. Likewise, there should be a reduction in policy interventions for problems that are getting better. Padgett’s insight was to note that such a model implies a uniform allocation of governmental attention across issues. That is, the engagement with issues would be an unbiased one, so that issues would receive governmental attention in proportion to their severity. Assuming the underlying problems to which government might attend are stochastic and independent, then an unbiased (or uniform) allocation of attention across issues would result in policy responses that should be normally distributed, from the
Central Limit Theorem. Thus, the major disruptions evident in almost every policy domain tell us something important about the policy process— specifically, that policy makers are not allocating their attention evenly across issues, but rather in a disjointed fashion. While political scientists were already well aware that policy making was subject to waves of political momentum, Padgett demonstrated that the meticulous-planner model was empirically incompatible with the observed evidence. Baumgartner and Jones provided the next major development with their theory of “disproportionate information processing” (Baumgartner and Jones 1993; Jones, Baumgartner, and True 1998; Jones, Sulkin, and Larsen 2003; Jones and Baumgartner 2005). Their argument was that institutional and cognitive “frictions” prevent policy makers from attending uniformly to incoming information and that, instead, some problem indicators become entrenched while many others are ignored. Occasionally, the forces of issue entrenchment are disrupted and a policy will undergo a brief period of disequilibria. During these periods, old indicators are discarded in favor of new (and supposedly improved) ways of thinking about a problem. It is during these tumultuous periods that punctuations occur as policy makers dramatically revise their fiscal priorities to better match the demands of whatever indicator is now seen as the critical arbitrator of an issue’s severity. This was the first theoretical model that clearly matched the data, explaining both long periods of tranquility and brief but pronounced punctuations. Many of the frictions that Baumgartner and Jones identified as causing governments to attend disproportionately to policy information were considered largely unavoidable. In particular, cognitive frictions result from basic human thought processes, and it was unclear how organizations composed of humans could circumvent these limitations. This led them to postulate the General Punctuation Hypothesis, which predicts that the result of any human decision-making process will feature long periods of stability interspersed with punctuation. As they put it: “We will observe a greater likelihood of punctuations in some institutions of government than in others; this is related to the efficiency of the institutional design. But cognitive limits are ubiquitous, so we observe the effects of these in all institutional settings” (Jones and Baumgartner 2005, 20). This type of sweeping prediction is rarely seen in the social sciences, but subsequent scholarship has been highly supportive, finding evidence of both stasis and punctuation in the outputs of a wide range of institutional processes. The evidence appears unequivocal: policy making is not an exercise in careful planning, but follows an enormously complex and unpredictable path.
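
Padgett’s distributional logic, and the friction account built on it, can be illustrated with a short simulation. The sketch below is not taken from the book; it is a minimal toy model under stated assumptions: an institution faces many small, independent problem signals each year, a frictionless (proportional) institution responds to the summed signal every year, and a friction-laden institution lets pressure accumulate and responds only once it crosses an arbitrary threshold. By the Central Limit Theorem the first series of changes should be approximately normal, while the second should show the heavy tails and tall central peak associated with punctuated equilibrium.

import numpy as np
from scipy.stats import kurtosis  # Fisher (excess) kurtosis; roughly 0 for a normal distribution

rng = np.random.default_rng(42)
n_years, n_signals = 10_000, 50

# Many small, independent problem signals arriving each year (hypothetical inputs).
pressure = rng.uniform(-1, 1, size=(n_years, n_signals)).sum(axis=1)

# Proportional (frictionless) processing: respond to the full signal every year.
proportional_changes = pressure

# Friction: pressure accumulates silently and is released only past a threshold.
threshold = 15.0
backlog = 0.0
friction_changes = np.zeros(n_years)
for t in range(n_years):
    backlog += pressure[t]
    if abs(backlog) > threshold:
        friction_changes[t] = backlog  # a punctuation: the whole backlog is cleared at once
        backlog = 0.0                  # otherwise the policy stays put (change of zero)

print("excess kurtosis, proportional:", round(float(kurtosis(proportional_changes)), 2))  # near 0
print("excess kurtosis, friction:    ", round(float(kurtosis(friction_changes)), 2))      # large and positive

Only the friction series reproduces the pattern visible in figures 1.1 and 1.2: long stretches of near-zero change interrupted by the occasional large jump, which is precisely what elevated kurtosis registers.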


Finding Order in Complexity

This book searches for the limits of the disproportionate information processing model. Its chief argument is that policy making is not an inherently unstable process and, under certain circumstances, the process of crafting new policies adheres more closely to the meticulous-planner model. An institution’s capacity to process and respond to information is best understood as a continuum. At one extreme are the most myopic and entrenched institutions, where new information is rarely considered and policies are updated only in the face of the most severe social problems. At the other end are the most flexible and responsive institutions. Here, problem indicators are continually updated in response to new information. Problems are solved before they develop into crises, avoiding the type of “stick-slip” pattern revealed in figures 1.1 and 1.2, in which policies alternate between incremental adjustments and massive changes. This book identifies decision-making mechanisms and institutional structures that determine what position institutions occupy on the information processing continuum. Three factors are shown to greatly condition the ability of institutions to process information and, subsequently, the stability of the policies they produce. These are complexity, institutional capacity, and collective decision making. Chapter 3 goes into greater detail about why each factor is so important, but the basic logic can be briefly stated in the form of three hypotheses:

HYPOTHESIS 1: COMPLEXITY. Not every social problem is equally complex. For example, governments have known since ancient Greece how to provide citizens with safe drinking water, and today there is little disagreement that this is among the most important and basic functions of government. Immigration is much more complicated. How many people should be allowed to immigrate into the country each year? From where should migrants be accepted? What is the process by which they will be assimilated, and who will incur the financial burdens? These are questions that are subject to intense disagreements. Before policy makers can begin to solve immigration issues, they need to agree on the nature of the problem. If you direct a government agency tasked with providing clean water, chances are you have a good handle on the size and scope of the problems confronting your agency, and you know what resources it will take to solve them. The same is not true for the director in charge of enforcing immigration policy, who can expect to receive mixed and uncertain messages from policy makers as they struggle to find common ground on a polarizing issue. Institutions tasked with solving relatively simple issues should be able
to process policy information more uniformly than can those in charge of complex problems, because there is less of that information to process. This makes the potential for careful planning much greater, as policy makers are able to anticipate problems well in advance and adapt accordingly. Consequently, policies relating to simple issues should be more stable and less prone to dramatic upheaval than those relating to more complex areas of governance.

HYPOTHESIS 2: INSTITUTIONAL CAPACITY. Institutions operate with varying levels of informational capacity. Those with higher capacity can gather, process, and respond to new information more quickly and completely than can lower-capacity institutions. Picture yourself as a representative in the New Hampshire legislature. You serve a two-year term, are assigned no support staff, and earn $200 per term. Very likely, participating in the business of state can be only a part-time job, and, thus, your attention is divided between your role as a public servant and more lucrative employment. Staying abreast of new policy information is more difficult than it would be if you could commit fully to the task of legislating. New Hampshire places an emphasis on “citizen-legislators,” and its model is designed to discourage representatives from becoming career politicians. Circumstances differ in other statehouses, where representatives earn a living wage and have the payroll to staff an office. State legislative professionalism is a prime example (one that is revisited in later chapters) of how institutional structures affect the processing of information. All public institutions require some capacity to seek out and respond to information, but not all institutions are equal in this regard. When institutions have robust mechanisms for seeking out and understanding information, the policies they produce are less likely to undergo punctuations. Policies from institutions with weaker mechanisms for gathering and attending to information will show greater volatility.

HYPOTHESIS 3: COLLECTIVE DECISION MAKING. Collective decision making occurs when the decisions of many individual actors are aggregated to reach a final output. A classic example is financial markets, where the value of traded commodities is a function of many independent investors’ decisions to buy or sell. I contrast collective decision making with conventional deliberative processes whereby a group of people reach a consensus decision based on debate and compromise. The difference between these two types of decision-making processes is explored in great depth, but the crux of the distinction can be simply stated: collective decision making features independent actors and an aggregation mechanism, while delibera-
tive decision making is designed so that actors may influence one another and outcomes are dichotomous rather than cumulative. Collective processes have featured prominently in economic studies for decades, but their role in policy making has been poorly understood. I draw a distinction between deliberative decision-making processes— grouping together firms and governments—and collective processes. Under the right conditions, collective processes can mitigate many of the cognitive frictions that are thought to weigh heavily on human decision-making processes. I explore the degree to which these conditions exist within the scope of the policy-making enterprise and present evidence demonstrating that when policies are established by collective processes they are extremely stable, adapting smoothly to new information. Empirical analysis finds strong support for each hypothesis. The book’s findings therefore offer a cautionary tale to anyone expecting policy instabilities to be pronounced and inevitable. I document many cases in which, with simple issues and collective decision processes in place, political outcomes change only gradually with time rather than in a disjointed fashion. These conditions are not particularly rare within government and relatively common when taking a broad view of human enterprise. These observations are not hostile to the disproportionate processing model of policy change; rather, because they occur where attention is least scarce, they match theoretical expectations. Still, given the emphasis in policy scholarship that government decision making will always result in punctuated outcomes, it is important to note that important policy areas are in fact quite stable. Understanding the conditions that allow for more proportional and meticulous information processing is important both for advancing conceptual models of governance and for appreciating the consequences of different institutional arrangements. Thus, the book seeks to advance theoretical models of policy making, while providing practical knowledge to practitioners and students of government. The next chapter gives a brief account of how students of public policy arrived at the idea of a “policy window,” as Kingdon put it, and then takes an in-depth look at information processing: what it means, how governments process information, and what the implications are for policy change. This account culminates with a review of the disproportionate processing model that dominates current ways of thinking about the policy process. Chapter 3 advances this scholarship by developing the book’s three hypotheses. The chapter argues that complexity, institutional capacity, and collective decision making are the keys to understanding punctua-
tions in institutional outputs. Chapter 4 introduces the empirical methods used in this book, with a particular focus on the distributional analysis pioneered by Padgett. Simulations of different information processing models demonstrate the link between informational efficiency and policy punctuations. Part 2 shifts to hypothesis testing. Chapter 5 examines US federal budgets over the last seventy years to test hypotheses relating to complexity and institutional capacity. New measures of both concepts are introduced, and multivariate regressions show that they exert a dynamic influence on the stability of government spending. Chapter 6 focuses on the American states. It shows that legislative professionalism is a powerful predictor of the stability of state spending. States with part-time legislatures are more likely to dramatically revise their spending behavior than those with fulltime legislatures. Chapter 7 begins part 3, which focuses on decision-making pathologies. The chapter proceeds through a series of case studies that look at the use of collective decision making in determining political outcomes. Each study shows that when outcomes are set through collective processes they are remarkably unpunctuated. I argue that this type of decision making is relatively common within policy making and discuss ways in which it could be further integrated into the process. Chapter 8 builds on the findings of previous chapters to investigate a classic arbitrator of institutional efficiency: the public–private divide. Private firms have traditionally been considered more efficient than their public counterparts. The chapter shows this to be a false dichotomy. Both firms and public institutions grapple with similar institutional and cognitive constraints, and both produce highly punctuated outputs. I argue that complexity, capacity, and collective decision making are much better arbitrators of efficiency, and that these factors vary greatly across institutions within both the public and private sectors. Chapter 9 reviews the book’s key findings and places them in context with existing scholarship and discusses next steps for unraveling the conjoined enigmas of public governance and the policy-making process.

TWO

A Macroscopic View of the Policy Process

Students of the policy process seek to understand how governments prioritize problems. This line of inquiry quickly leads to a series of questions that get to the heart of representative democracy: Does policy represent the will of the people, or does it cater to the interests of social elites? What threshold of urgency must be crossed before policy makers will address an issue, and how does this threshold vary depending on the groups involved? What are the most effective ways to lobby the government? Early studies of policy change focused on defining the space in which policy making takes place; before getting policies changed in your favor, your viewpoints must be represented at the table. This space varies depending on the system of government. For example, in a direct democracy policy making is conducted by the mass public, or at least the subset of the public that turns out to vote. On the other hand, in autocracies policy making is carried out by a small group of oligarchs. The United States’ system of representative democracy is more complicated; policies are made by members of Congress and the president, either through the passage of new legislation or executive orders, and these policy makers are elected through popular vote. Whatever the system, the “rules of the game” that govern the actual process of policy making are relatively straightforward to understand. A much more difficult task is to understand the consequences of these rules and how they affect the range of interests that seek representation. When looking at the US system, scholars noted a tendency for policy making to collapse into small “functional islands of decision-making power” (Sayre and Kaufman 1965, 305). In other words, while legislators are quick to shroud their actions in the language of popular mandates, the actual details of new policies are hashed out behind closed doors by a small community of experts (McConnell 1967; Lowi 1979). Who are these
experts? Members of Congress who, through a committee assignment or a particularity of their constituencies, have a unique interest in an issue; career bureaucrats; and interest groups with the financial resources and social networks to lobby the government. Together these actors form a “policy monopoly” and, by positioning themselves as the gatekeepers of legitimate policy-relevant knowledge, can control the discussion surrounding an issue (Griffith 1961; Redford 1969; Walker 1983; Chubb 1985). (Such arrangements are also known as iron triangles, policy networks, subsystem politics, and policy whirlpools.) To understand why policy monopolies flourish, imagine yourself as a member of the House of Representatives representing an urban district. The chamber is voting on a bill to set Medicare reimbursement rates for a rural hospital: How do you vote? The issue has little relevance to your constituents and, thus, you know nothing about it, so you seek advice from a colleague who sits on the health subcommittee. Your colleague is all too happy to help—Medicare is an issue she specializes in, and this is her moment in the spotlight. Her advice is informative, but also boilerplate in the sense that it adheres closely to the Medicare subsystem’s understanding of the issue. Policy monopolies can be seen as a direct consequence of a system that requires the comprehension of a vast array of information. If governance were less complicated, there would be no need for the divisions of labor that encourage subsystem development. And the truth is that most of the legislation policy makers consider is mundane, in the sense that it does not inspire widespread interest. Cameron, in his book on presidential vetoes, writes “there is a little secret about Congress that is never discussed in the legions of textbooks on American government: the vast bulk of legislation produced by that august body is stunningly banal” (2000, 37). That most policy decisions are made behind closed doors raises red flags about the pluralistic nature of US governance. Policy scholars have often concluded that policy making disproportionately benefits socioeconomic elites because their great wealth allows them to influence the subsystems where new policies are crafted, either by donating to members of Congress or hiring well-connected lobbyists (Lindblom 1977). E. E. Schattschneider’s Semisovereign People (1960) famously develops this point with his thesis on conflict expansion. According to Schattschneider, political power is the ability to set the scope of conflict. Political “losers,” those outside of power, want to expand the scope of conflict to include new ideas, while those currently in power want to maintain the status quo. As the title implies, Schattschneider believes that more often than not the eco-
nomic elites win out and nothing changes, and consequently policies only vaguely represent the interests of the mass public.

Incrementalism

It follows from the existence of policy monopolies that the status quo should dominate American politics. The system is characterized by negative feedback: an overwhelming press of complex problems forces policy makers to retreat into niche areas of expertise, which are then vulnerable to capture by socioeconomic elites interested in preserving their comparative advantage. Moreover, the founders intentionally designed the system to be slow moving by including multiple layers of redundancy such as the dual-chambered legislature. Early studies of agenda setting latched onto the seemingly plodding nature of policy change as the fundamental feature of the policy-making environment. Charles Lindblom’s influential article “The Science of ‘Muddling Through’” (1959) introduced the theory of incrementalism, which formalized the notion that various institutional and resource constraints force policy makers to behave myopically, gradually updating existing policies rather than reimagining programs from the ground up. This idea found immediate traction among students of the federal budgetary process, where researchers argued that “the largest determining factor of the size and content of this year’s budget is last year’s budget” (Wildavsky 1964; Davis, Dempster, and Wildavsky 1966). The idea is that every year when legislators craft a new budget they simply move iteratively through programmatic spending areas and adjust expenditures by some small increment, either upward or downward. An implication is that budgets are best understood as political documents with only a tangential relationship to the environmental problems that they are intended to solve. Incremental budgeting might deal very well with problems that only gradually become worse or better, but budgets may quickly fall behind in areas in which the severity of problems moves more rapidly.

Policy Windows

The first chapter talked briefly about policy windows, short-lived opportunities in which it is possible to upend the status quo. When a policy window is open there is a rare opportunity to disrupt the forces of negative feedback that typically define the system. Predicting when these opportuni-
ties will arise is an elusive goal of agenda-setting scholarship, but post-hoc analysis of major policy changes reveals that they almost always involve redefining old problems in a new light. This can happen when a problem that was originally seen as outside the scope of governmental response is redefined as a problem that government can solve (Stone 1989). It was this kind of redefinition that fueled the massive growth in government programs in the 1950s and 1960s, aimed at reducing social inequalities (Williams 1998). Another possibility is that the consensus on what remedies are effective at solving a problem gets redefined. For example, Kingdon (1984) writes about how user charges—in the form of fuel taxes or licensing fees—were successfully framed as a solution to deteriorating infrastructure, whereas such repairs had traditionally been paid for by the general taxpayer. Policy scholars have been writing about policy windows for a long time, and their existence suggested that incrementalist theory was incomplete. No one doubted that most policy changes were very small, but clearly there were exceptions, moments when the political stars align and issues are redefined in ways that lead to dramatic adjustments. A further concern was raised by Padgett (1980) when he showed that incrementalism implied that first differences in government spending should be normally distributed. He showed that, in reality, budget distributions have excessive kurtosis, with wider tails and a higher central peak than the normal distribution. (More on this in chapter 4.) So, incrementalism appeared to have both conceptual and observational flaws. The question facing academics was how to square long periods of stasis with abrupt and sudden periods of change. The theories that did a good job of explaining incremental changes had little to say about policy disruptions, and scholarship focusing on policy windows failed to provide a mechanism that would explain why these windows exist but remain so uncommon. Baumgartner and Jones introduced their theory of disproportionate information processing as a potential solution. They were heavily influenced by the work of Simon (1947, 1983, 1996, 1997), who won the Nobel Prize in 1978 for research that considered the effects of bounded rationality on economic decision making. Traditionally, economists had assumed that individuals were fully rational; that is, that a decision maker would have access to perfect information and could always determine what options would maximize their utility. Of course, economists knew that these were unrealistic assumptions, but Simon was among the first to investigate how cognitive limitations would alter economic behavior. Baumgartner and
Jones were interested in how bounded rationality would affect government agenda setting. Their insight was that cognitive limitations would cause policy makers to process information intermittently, which could explain both long periods of incremental change and the emergence of periodic windows of opportunity. Before further unpacking the theory, it will be useful to take a closer look at the cognitive processes that dictate how individuals adapt to new information and the implications for government institutions.
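
Padgett’s distributional test, introduced above and revisited in chapter 4, is easy to sketch in code. The snippet below is illustrative rather than drawn from the book: it assumes a hypothetical file, outlays.csv, with year and outlays columns (for example, a constant-dollar outlay series like the one behind figure 1.2, assembled from the Policy Agendas Project). It computes annual percentage changes, lists the years in which spending more than doubled, and reports excess kurtosis, which sits near zero for normally distributed (incremental) changes and well above zero for a punctuated series.

import pandas as pd
from scipy.stats import kurtosis  # Fisher (excess) kurtosis

# Hypothetical input: one row per year with constant-dollar outlays.
df = pd.read_csv("outlays.csv").sort_values("year")

# Annual percentage change, the quantity plotted in the bottom panel of figure 1.2.
df["pct_change"] = df["outlays"].pct_change() * 100
changes = df["pct_change"].dropna()

# Years in which outlays more than doubled from the year before.
print("years with >100% increases:", df.loc[df["pct_change"] > 100, "year"].tolist())

# Excess kurtosis: near 0 if changes were normal, strongly positive if punctuated.
print("excess kurtosis of annual changes:", round(float(kurtosis(changes)), 2))

A tally like this is what identifies the handful of occasions, noted in chapter 1, when federal outlays more than doubled from one year to the next.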

What Does Information Processing Mean?

Information processing is a prerequisite of decision making and takes place over two distinct stages. The first involves receiving new information, and the second requires comprehension of that information. Then, once the information is processed, the decision maker decides how to act on it. Basic requirements for information processing are therefore (1) access to information, (2) time to consider the information, and (3) the expertise or intellect to comprehend the information. Individuals or organizations with a high ability to process information easily meet the requirements. On the other hand, a low ability to process information results from difficulty meeting one or more of these requirements. For example, as individuals, we are often faced with decisions so complex that we lack the time to process all the potentially relevant information. Or we may be faced with a decision in an area in which we have no prior experience, making it difficult to make sense of the information we are receiving. These variables are closely related to the scope of information that is relevant to making a particular decision. Processing information relevant to simple tasks, such as deciding what to eat for breakfast, is not particularly burdensome and can be achieved at a high level by almost everyone. On the other hand, processing the information relevant to governing a country is more difficult by several orders of magnitude. Furthermore, decisions are often required where there is no relevant information to begin with, or the available information is inaccurate. In these cases, uncertainty can cause individuals to pursue courses of action that are suboptimal given the reality of the issues at hand. Another consideration is that the processes by which organizations and individuals arrive at decisions can vary, with some being less efficient than others. In all, the ability to process information will vary by issue, by decision maker, and from one decision-making process to the next.


Accessing Information

Before acting on information, policy makers must have access to it. An economic perspective would emphasize that, like any consumer good, information comes with costs. For individuals to be informed about politics they may have to subscribe to a newspaper or pay a cable subscription fee. In the Downsian account of democracy, information costs are a key driver of political inequality (Downs 1957). The rich have the luxury of staying abreast of political events (and participating in them) while the poor lack the resources to stay consistently engaged. For its part, the US government has devoted considerable resources to information gathering through the development of research agencies such as the Office of Management and Budget, the Government Accountability Office, the Congressional Research Service, and the Congressional Budget Office. Information is not free but, for modern democratic governments, it is often plentiful. In fact, finding time to sort through and prioritize information relevant to a decision is often more troublesome than accessing the information in the first place. In his 1996 Participation in Congress, Hall writes that “policy-relevant information is abundant, perhaps embarrassingly rich, on Capitol Hill” (1996, 90). Besides the various research agencies dedicated full time to producing policy analysis, the congressional committee system encourages members to become experts in specific issue areas (Krehbiel 1992; Baumgartner, Jones, and MacLeod 1998). And, of course, there is an extensive interest-group ecology that is more than happy to supply policy makers with information; indeed, supplying the “right” information to policy makers is a basic purpose of the vast lobbying enterprise operating at the federal level (Moe 1980). Finally, the media is often a crucial source of information for policy makers (Cook 1998, 2006). Information then is readily available to policy makers. Even events that take the country by surprise often appear, in retrospect, less random, even predictable. A distressing finding of the 9/11 Commission was that there was actually good evidence that a hijacking plot was in the works, and that this evidence was available to federal law enforcement. Philip Shenon of the New York Times writes, “The F.B.I. had been aware for several years that Osama bin Laden and his terrorist network were training pilots in the United States and elsewhere around the world” (Shenon 2002). Of course, the advantage of hindsight is that we know how information should have been prioritized. Obviously, this information did not receive the attention it deserved, but in the crush of current events this is not atypical. With so
much potentially relevant information at hand, separating the critical from the merely noteworthy or irrelevant is difficult. So, it is often the case that too much information, not too little, is the problem.

Considering Information

Access to information is the first step in information processing. For that information to be useful it must be prioritized. That is, institutions or individuals must be able to consider and deliberate over the information they have received. This consideration takes time and agenda space, both of which are scarce commodities. The government cannot possibly address all the social issues that might benefit from policy interventions. In a classic study, McCubbins and Schwartz (1984) describe how these inevitable agenda constraints lead to a system of “fire alarm oversight,” where congressional attention focuses on an issue only when enough outside groups protest the inadequacies of the current status quo. They contrast this with a more active, “police patrol” type of oversight, where Congress is directly involved in seeking information about policy issues. Many scholars have remarked on the overwhelming complexity of governance and noted that national agendas tend to be limited to only a few highly salient topics, at the expense of many seemingly important issues (Sigelman and Buell 2004; Walgrave and Nuytemans 2009; Green-Pedersen and Mortensen 2010; Baumgartner et al. 2011). This agenda myopia on the part of policy makers can be seen as a coping mechanism. The more information government considers, the more problems it discovers that require government solutions. Better search mechanisms lead directly to a larger and more active government—a potentially endless cycle of government growth, as there are a near-infinite number of problems government could be addressing. Kaufman (1976), for example, has shown that once governmental organizations are developed they tend to stick around indefinitely, even if their original jurisdiction was narrowly focused. One way to avoid this cycle is to keep things simple. Rather than seeking out new problems, better to focus on well-known problem indicators that government is already structured to address (Baumgartner and Jones 2015). As there are so many issues to which the government might attend and only limited agenda space, it is often difficult to predict what issues will receive attention. Even partisanship is a poor predictor of government agendas. For example, we might think that government will focus more heavily on
traditionally conservative issues—such as national defense and the debt— when a Republican is in the White House. But such preconceptions are vastly oversimplified. Consider that during George W. Bush’s presidency, we saw a major expansion of government through the creation of the Department of Homeland Security and, with the 2007 stimulus package, one of the largest government interventions into the private economy in US history. Ideologically massive public spending programs are not typically associated with the Republican Party, but they were seen as reasonable, if not necessary, responses to the September 11th attacks and the 2007 financial crisis. Agendas, carefully planned and promoted during elections, are quickly sidetracked by the demands of responsible governance with particularly significant events, such as wars or recessions, occupying an inordinate amount of agenda space (Mortensen et al. 2011; Epp, Lovett, and Baumgartner 2014). Beyond changing political and economic realities, to which parties must respond regardless of ideology, there are systemic factors that constrain political agendas. Much has been written about the institutional “rules of the game” and their effect on agendas. For example, the closed-primary system used in many states encourages the participation of more ideologically extreme candidates, as the first round of voting is often restricted to Democratic and Republican primary voters who are less moderate than the broader electorate. Subsequently, elected officials tend to be relatively ideological (much more so than the public at large), which contributes to polarization and gridlock in Congress, thereby limiting the number of issues to which the government can attend. Other constraints include the dual-chambered legislature, congressional gatekeepers, the presidential veto, and a demanding election schedule (Bish 1973; Buchanan and Tullock 1962; Cox and McCubbins 2005; Koger 2006; Oleszek 2010). In each case, these factors limit the overall “carry-capacity” of the system and the cumulative effect is that the US government moves as it was designed to do by the founders: slowly. A more ubiquitous agenda constraint is the basic limitations of human cognition. Bounded rationality is the idea that humans are rational, which is to say that they actively pursue their own self-interest or, in economic parlance, attempt to maximize their utility, but that they are hampered in their ability to do so by profound cognitive limitations. These include a limited capacity for recalling pertinent information from long-term memory (Dansereau and Gregg 1966), the amount of time it takes humans to remember a new piece of information (Ebbinghaus 1964), and a limited short-term memory (Miller 1956). Experimental research has shown that
these limitations are remarkably constant across individuals and therefore can be thought of as the operational parameters of human cognition. To reconcile cognitive limits with a complex world, people develop heuristics whereby decisions are based on habit or underlying patterns (Margolis 1987). These heuristics are helpful cognitive shortcuts that allow people to bypass a great deal of nonessential information. So, instead of processing and responding to all the information that is relevant to a particular decision, people selectively weigh only a few key factors. Imagine, for example, the complexities involved with buying a new car. How can you decide what car is right for you? Cost, gas mileage, safety, and comfort are factors near the top of almost everyone’s car-buying list, but the reality is that there are thousands of factors that might be important to consider. Everything that makes one car different from another is potentially useful information. If humans were comprehensive processors of information, we would iteratively consider each of these distinctions but, of course, that is impossible. Instead, we consider only a few factors and then make our best judgment call. This is the essence of bounded rationality. Heuristics come into play when we base our car-buying decisions on things like the reputation of the manufacturer, which is quite common. Is it a brand we trust?

Miller famously characterized the extent of these cognitive limitations: “There is a clear and definite limit to the accuracy with which we can identify absolutely the magnitude of a unidimensional stimulus variable. I would propose to call this limit the span of absolute judgment, and I maintain that for unidimensional judgments this span is usually somewhere in the neighborhood of seven” (1956, 90). Put simply, Miller’s point is that people can only process around seven unique stimuli at a time. Returning to the car-buying example, we can expect people to consider around seven factors when comparing between cars, which is a very small number when we think about the full range of factors that we might use to distinguish one motor vehicle from another. In all, during any decision-making process a great deal of pertinent information is never actively considered (Simon 1947, 1999; Jones 1994, 1999, 2001). For all these reasons, finding the agenda space to attend to information can be seen as a more problematic requirement of information processing than accessing information in the first place.

Comprehending Information

The challenge of interpreting so much diverse information is considerable, and organizations develop strategies, both structural and procedural,
to facilitate the task. An example from the US Congress is the committee structure, which is designed to create niche areas of expertise. Not every member of Congress can be an expert on every issue but, by dividing attention across committees, collectively Congress can exercise proficiency on a wide range of topics. This kind of parallel processing allows organizations to make many decisions routinely, without the need for comprehensive oversight. To a limited extent, people are also capable of parallel processing. Remember that Miller put the limits on cognition at around seven individual stimuli, which leaves some room for people to multitask.

Parallel processing allows organizations to skirt the limits of attention but, at some point, those limits will be confronted. Simon, writing about decision making, notes that “a second, and more interesting, circumstance under which we would expect a system to act intermittently is one where there are many different actions to be performed, but where the system is capable of performing only one or a few at a time; the environment makes parallel demands on the system, but the system can only respond serially” (1977, 157). This is an apt characterization of public governance. Problems requiring legislative action will often occur simultaneously and in no predictable or convenient order, but governments can only address them one at a time.

Of course, divisions of labor make it possible for the federal government to attend to a number of different tasks simultaneously. At any given time, employees at the Food and Drug Administration are conducting food-safety tests, the post office is delivering mail, the State Department is conducting trade negotiations, and the FBI is investigating crimes. Much of the government, therefore, operates on a kind of autopilot, but this is only appropriate for routine tasks. When it becomes necessary to change the routines or develop new ones, legislative action is required. This is when organizations shift from parallel to serial information processing, as responsibility for making decisions moves from multitudes of bureaucratic employees to a much smaller subset of organizational leaders. In other words, this is when agenda setting takes place. The US government makes thousands of decisions routinely, but implementing new public policy or redirecting bureaucratic focus requires centralized decision making by the president, party leaders in Congress, or both. At this point, the cognitive limits of people in leadership positions come into play. Attention can therefore be stretched by bureaucratic divisions of labor, but only so far. Even for complex, hierarchical organizations it is a scarce commodity.

Disproportionate Information Processing and Punctuated Equilibrium

What happens in a political system in which information can be processed only intermittently? What will policy change look like? Simon (1983) noted that if attention is allocated disjointedly then change should likewise be erratic. Jones and Baumgartner’s disproportionate information processing model applies this general logic to the policy-making process. Their crucial insight is that if agenda space is severely limited, policy makers will only have time to deal with crises. Anything less than a full-blown crisis and the issue will be left off the agenda, as in most modern societies there are enough crises to fully occupy the government’s limited attention. Of course, there are many issues that might not rise to crisis-level, but would still benefit from governmental attention. These issues will be ignored, and they worsen through inattention, until they become crises in their own right.

Note that there is nothing objective about assigning urgency to some issues over others. History is full of examples of governments taking unprecedented steps to alleviate a perceived crisis that in retrospect looks trivial or nonexistent. A classic example of this type of behavior is the Prohibition era, when policy makers thought that alcohol was so dangerous that they took the extremely rare step of changing the Constitution to ban its consumption (Schrad 2010). Likewise, some contemporary issues look very urgent, but are still being ignored. Regardless of the selection mechanism, because agenda space is limited, issues must displace each other as they rise and fall in urgency.

Crises demand action, so when issues do make it on the agenda there is the potential for major policy adjustments. There is also the possibility that government deliberations end in gridlock, and nothing will be accomplished, no matter how dire a situation appears. Even with attention, policy change is no certainty. Still, the likelihood for major changes is greater where attention is focused. Absent any attention, it is hard to imagine how or why large shifts would be enacted. This then is what Jones and Baumgartner mean by disproportionate; some issues receive intense scrutiny while many others are ignored. If policy change follows attention, then we can expect a punctuated equilibrium pattern of change, in which long periods of stasis occasionally give way to brief periods of momentous change.1 Returning to figure 1.2 from the first chapter, we can see evidence of a punctuated equilibrium pattern in the bottom panel, which looks at annual changes in total US outlays. Note that many of the changes are rela-
tively modest in size, but that this low, equilibrium level of change is occasionally punctuated by dramatic adjustments. Institutions are disproportionate processors of information for all the reasons examined above, that is, because they have institutional procedures that limit the number of issues to which policy makers can attend, because they lack knowledge of the issues, because there are disagreements about the true nature of issues, and because basic cognitive limits mean that policy makers can attend to issues only serially rather than in parallel. Jones and Baumgartner refer to these information processing impediments as frictions. Their idea is that frictions prevent the smooth updating of policies in response to new information, and, instead, policies change in a series of fits and starts, as frictions build up behind issues that have long been ignored. They conceive of frictions as either institutional or cognitive. So, for example, the dual chambers of the US Congress are an institutional friction and bounded rationality is a cognitive one. The model was developed to explain policy change at the national level, and this is where it is most frequently applied by policy scholars. But Jones and Baumgartner recognized that their model had broader applications. The causal mechanism described by the model is rooted in a basic understanding of human cognition. Certainly, attention can be considered an important prerequisite for change in many contexts and taking the implications of bounded rationality seriously suggests that decision makers in many situations must contend with attention scarcities. This led to the development of the General Punctuation Hypothesis, which predicts that the result of any complex human decision-making process will be punctuated. Empirical evidence is supportive. A long series of studies reveals evidence of disequilibria across a wide range of institutional outputs, including congressional hearings, bill passages, media coverage, and the budgets of city, local, and state governments (Jordan 2003; John and Margetts 2003; Breunig and Koski 2006; Mortensen 2007; Jones et al. 2009; Baumgartner et al. 2009; Boydstun 2013). A recent review of the literature found over three hundred publications that reference punctuated equilibrium in relation to public policies or other institutional outputs (Baumgartner, Jones, and Mortensen 2014). Clearly, then, the model has proven highly effective at explaining the structure of change in institutional outputs.

Do Policy Instabilities Matter?

Does it matter if public policies are updated smoothly over time, or if they exhibit punctuations? Economists have long noted problems associated
with economic instability. Chief among these is uncertainty, which has been shown to negatively affect economic performance (Barro 1976; Kormendi and Meguire 1985; Alesina and Perotti 1996; Mendoza 1997). Regarding uncertainty in the tax code, Adam Smith wrote: “Certainty of what each individual ought to pay is, in taxation, a matter of so great importance that a very considerable degree of inequality . . . is not near so great an evil as a very small degree of uncertainty” (1776, 352). Reservations about instabilities are also reflected in policy scholarship. Both Lindblom (1959) and Wildavsky (1964) comment on the advantageous qualities of incremental decision making—specifically, that major mistakes become less likely when governments move away from the status quo only gradually. Davis, Dempster, and Wildavsky (1966) note that incremental budgeting allows agencies to better plan future courses of action than if fiscal commitments to different programs are volatile. Within the punctuated equilibrium framework instabilities are linked to inefficient and cognitively limited decision processes, while a pattern of incremental to moderate adjustments is thought to be indicative of more comprehensive information processing. If time and resources permit, certainly it is better to approach information comprehensively than intermittently.

Beyond conceptual concerns there is some empirical evidence that instability is bad for organizational performance. O’Toole and Meier (2005) showed that greater personnel stability appears to have beneficial effects on public school performance. Andersen and Mortensen (2010) apply this notion to school budgets, showing that students attending schools with volatile budgets perform worse on standardized tests than their peers from schools with more stable budgets, controlling for other factors such as socioeconomic status. Of course, it is possible that the budgetary instabilities are simply a leading indicator of other underlying problems that would lead to lower test scores, such as a school that poorly adapts to environmental changes.

More broadly, policy instabilities are concerning because they suggest a disconnect between the urgency of problems and government’s reaction to those problems. As Jones puts it, “incrementalism is the rule when most actors are not attending in any detail to a program, but when the program attracts their attention, punctuations occur[,] . . . and these punctuations are not necessarily tied to major disruptions in the environment” (2001, 114). Punctuations therefore raise concerns about the responsiveness of government to ever-changing environmental circumstances. For example, many argue that climate change is the social problem that will define the twenty-first century. Warnings from usually reserved academic communities have
become increasingly dire, with a recent report by the UN Panel on Climate Change predicting “severe, pervasive and irreversible impacts for people and ecosystems” if climate change continues unabated (IPCC 2014). But climate change is far from a new issue. Scientists were warning about the potential of greenhouse gas emissions to alter the global climate as early as the 1960s (although a sweeping scientific consensus did not emerge until the 1990s). Had governments attended to the problem from its discovery and begun a gradual migration away from fossil fuels, it is entirely possible that they would have already “solved” the issue of climate change. Instead, the problem was mostly ignored and, today, stabilizing the climate will require dramatic steps from the international community. If these steps are ever taken, they will represent a major policy disruption, and this will be a consequence of not sufficiently attending to the issue in the first place. And, finally, policy instabilities matter because they are symptomatic of underlying governmental processes. As students of policy making, our goal is to understand these processes. Padgett proved that different information processing models can be linked to different patterns of policy change. Thus, if we observe a particular pattern of change, we can make inferences about the institution that produced it. This is a powerful addition to the analyst’s toolbox. It allows social scientists to speak directly to the attentiveness of public institutions to new information. By exploring variation in the instability of institutional outputs, we can better understand how different organizational forms affect the capacity of institutions to process information. Research in this area, therefore, has conceptual and practical advantages. Theoretical models of the policy-making process can help practitioners understand how different institutional arrangements structure policy change.

THREE

Complexity, Capacity, and Collective Decisions

Disproportionate information processing is the cause of policy punctuations. Policy makers, overwhelmed by information, allocate their attention disjointedly across issues. Where attention is concentrated policy punctuations are possible (although by no means inevitable) but, for the many issues that are not receiving any particular attention, policy makers rely on tried-and-true problem indicators, and the status quo dominates. Spending on spaceflight nicely illustrates this concept. Recall from figure 1.1 that in the years prior to Sputnik’s launch, spending on spaceflight was comparatively minuscule and basically a flat line, changing only very marginally from year to year. The status quo at this point was that space was a potentially interesting new frontier, but not an area that merited a great deal of attention or fiscal priority. That notion changed rapidly after Sputnik, when spaceflight was thrust into the center of Cold War politics. Consequently, spending went through two periods of dramatic transformation: first a major increase to pay the way for astronauts to land on the moon and then, once American supremacy in space was assured and attention moved elsewhere, a major decrease. Of course, the degree to which public institutions are disproportionate processors of information is not constant. Institutions that can allocate attention more evenly across issues should produce less-punctuated outputs than institutions in which attention is severely limited. This is, in fact, a basic prediction of the model: Levels of instability should vary depending on the degree to which an institution can adapt to new information. Identifying the factors that affect the capacity of institutions to process and respond to information has been a major goal of policy scholarship since the introduction of the disproportionate information processing model. As Robinson and his colleagues put it, “now punctuated equilibrium theo-

rists need to build a set of predictions related to when organizations are likely to act in a manner consistent with punctuated equilibrium theory and when they are not” (Robinson et al. 2007, 141). Much progress has been made on this front in the past decade. Scholars have shown that as frictions intensify moving through the policy process from inputs—such as congressional hearings and bill introductions—to outputs, the magnitude of punctuations grow (Jones and Baumgartner 2005; Baumgartner et al. 2009). Likewise, varying bureaucratic structures appear to affect the likelihood of punctuations in systematic ways (Ryu 2011; Robinson 2014) as does the nature of different policy domains (Breunig and Koski 2006; Jensen 2009; Breunig, Koski, and Mortensen 2010). More recently, focus has shifted to comparing governmental forms, where researchers have found that policies are less stable under authoritarian regimes than democracies (Lam and Chan 2014; Baumgartner et al. 2016). Much uncertainty still remains, however. Previous scholarship has found variance in the magnitude of punctuation across institutional outputs but, overall, this variance has been marginal. Patterns of change in institutional outputs are all punctuated, and to a similar degree. This has been taken as evidence in support of the General Punctuation Hypothesis. In particular, scholars have noted that while there are many different types of institutions, and we can expect different forms may be associated with varying levels of friction, cognitive limitations are ubiquitous. Policy instabilities should, therefore, have a definitive lower boundary; highly efficient systems may mitigate them to some degree, but they will always be there. The question going forward is whether there is any institutional or decision-making form that allows policy makers to circumvent cognitive limits and thereby process information more proportionally. I argue that the answer is yes. Complexity, institutional capacity, and decision-making processes exert a powerful influence on the ability of institutions to process information. Complexity refers to the types of issues that institutions are designed to address. Institutional capacity looks at the ability of institutions to gather and comprehend information. Different types of decision-making processes determine how quickly and completely institutions can respond to new information. When issues are simple, institutions operate with a high informational capacity, or collective decision processes are in place, then outputs are adjusted only gradually with time, rather than in a disjointed or episodic fashion. This chapter will look at each factor in turn and introduce three hypotheses that will be the focus of subsequent empirical tests.

Complexity

Public governance is an exercise in sorting through complexity. How should problems be prioritized? How and when should they be solved? Whose interests should be represented? Often there are no easy answers. Certainly there is no how-to manual for running a country. Cohen, March, and Olsen (1972) address the complexity problem in their “garbage can model” of organizational choice. They argue that many organizations are actually organized anarchies, by which they mean that such organizations have no clear and cohesive set of goals. Instead, within many organizations there are a set of problems and a set of solutions that are largely independent of each other. These problems and solutions mix together in the garbage can and pair off in ways that are not based on any overarching logic or organizational goal. Kingdon later refined this model in his “multiple streams” approach, which emphasized the ambiguity of governmental agenda setting (1984). This is an apt characterization of how governments match solutions with problems. It is also a consequence of the complexity that organizational actors face.

Consider the rise of ISIS (the Islamic State of Iraq and Syria) in the Middle East. Clearly this is a problem for US interests in the region, so if you are the president how do you go about solving it? Different members of your cabinet will offer different solutions. The secretary of state may point to the growing humanitarian crisis and note that economic and social development would provide young men in the region with opportunities that would make joining ISIS less appealing. The solution in this case is more international aid directed toward education initiatives. The secretary of defense disagrees, arguing instead that the best way to defeat ISIS is by bombing them. The solution is to invest more heavily in drones. What solution does the president pick? This is the essence of complexity. No one knows exactly what it will take to defeat ISIS, so members of the cabinet are free to present whatever solutions their agency is best positioned to deliver. The president then chooses to prioritize some of these solutions over others.

Complexity can be grouped into two types. The first type is “natural” complexity; some issues are simply more complicated than others. ISIS, for example, is a multifaceted problem that has roots in a number of different issues, such as regional instability, religious intolerance, and economic depression. Climate change is another deeply complex issue, involving global collective-action problems. But some issues are conceptually very simple. Snow accumulation is a problem that governments know how to solve.
Sometimes snowplows break down, or local governments fail to purchase them in sufficient quantities, but there is little disagreement that plowing is the solution to snow. In their recent Politics of Information, Baumgartner and Jones refer to these types of issues as engineering problems: “Governments deal with many simple problems, such as providing clean water, maintaining roads, and trash collection. We can refer to these as ‘engineering problems,’ meaning these are problems for which we understand how to provide a solution. By contrast, ‘complex problems’ are those where we may not even understand the nature of the problem itself, much less be aware of an effective solution” (2015, 15). It is the complex problems that end up in the organizational-choice garbage can, searching for solutions. Simple problems can avoid this fate because, for the most part, everyone agrees on how these problems should be solved, and only rarely is there a need for major reconsiderations.

Problems can also be complex for political reasons. If you are gay and unable to marry your partner, this is a major social problem, and the solution could not be simpler: the government needs to legalize same-sex marriage. The path to legalization may offer twists and turns, but the solution is never in doubt. These types of social issues are often among the most complex that governments face because there are deep-seated disagreements about the nature of the issue. The solutions are not complex, but the very existence of a problem that needs solving is hotly contested. Another example is the recent passage of stringent voter ID laws in some US states. These laws require a would-be voter to present a government-issued ID before entering the polling booth. If voter-impersonation fraud is a problem, then ID requirements are a reasonable solution. Many, however, doubt the existence of the problem. A recent case before the US Seventh Circuit Court on the legality of a voter ID law in Wisconsin highlighted the disagreement. In his dissent from the majority decision, which upheld the law, Judge Richard Posner asked, “as there is no evidence that voter-impersonation fraud is a problem, how can the fact that a legislature says it’s a problem turn it into one? If the Wisconsin legislature says witches are a problem, shall Wisconsin courts be permitted to conduct witch trials?”2 So, even when problems and their solutions appear simple, they can be politically complex.

Issues that are complicated for political reasons are called position issues, indicating that people have staked out very different positions on the appropriate solution to the problem—if they even agree that a problem exists. Politically simple issues are often referred to as valence issues. These are the “baseball and apple pie” issues that everyone agrees exist and where
there is a clear role for government in solving them. The overall complexity of issues can, therefore, vary along natural and political dimensions. Table 3.1 summarizes the possibilities, with an example of each type of issue in parentheses.

Table 3.1  Two dimensions of issue complexity

                             Natural
                             Engineering               Complex
  Political   Valence        Least complicated         Medium complicated
                             (Snow accumulation)       (Economic prosperity)
              Position       Medium complicated        Most complicated
                             (Same-sex marriage)       (Climate change)

Snow accumulation represents a very simple issue. Most everyone agrees that governments should clear the roadways when it snows, and plowing is the obvious means of doing so. Climate change is the most complex type of issue. Many people are dubious that human activity is to blame for the changing climate, and even among those who agree that it is a major issue the complexity of climate models and the scale of the collective-action problems make implementing solutions difficult.

The demands that the different types of issues in table 3.1 place on the government’s attention are highly variable. Plowing snow is a routine function of local governments and is carried out with little fanfare. Occasionally a particularly long winter may force policy makers to reconsider governmental commitments to snow removal but, in general, snowfall fluctuates only marginally around an annual average. This type of “stable” problem, in which there is a clear and widely agreed upon role for government, has in a sense already been solved. We can imagine that an organization dedicated solely to snow removal would be less prone to the “stick-slip” dynamics postulated in the disproportionate information processing model. Only under very rare circumstances—the invention of a new technology to replace the snow plow or the rapid desertification of a community, for example—would we expect to see a dramatic reconsideration of the government’s approach to snow removal.

Governments can attend to simple problems with some low level of routine attention but, in the uncertain environment that surrounds complex issues, governmental attention is at its most episodic. Complex problems fill the organizational-choice garbage can. They are complex precisely
because they have no agreed-upon solution, or there is controversy over the extent to which a problem actually exists. This uncertainty allows agency bureaucrats and interest groups to advocate their solution as the most appropriate—competition that makes the status quo less assured. When new solutions are adopted they can lead to policy punctuations, as the government’s approach to solving a problem is rapidly overhauled. Similarly, for complex problems, shifting political fortunes are more likely to lead to instability. When a newly elected party takes the reins of government, it may move to solve an issue that the previous administration did not acknowledge as problematic. There are also strong incentives for policy makers simply to ignore complex issues, especially if their solution requires major new government programs. Concerns about “big government” are always salient in American politics, so the “do nothing” approach has an intrinsic appeal. The less we know about a problem, the less we feel pressured to solve it. Thus, long periods of policy stagnation are also to be expected.

In all, complexity offers a promising avenue for answering Robinson’s charge to determine when organizations are likely to act in a manner consistent with punctuated equilibrium theory and when they are not. Simple issues are less likely to elicit the episodic processing of information called for by the theory and, therefore, the policies addressing these issues should be less likely to undergo punctuations. As issues increase in complexity (both the political and natural variety) uncertainty grows and stick-slip dynamics begin to take hold. This proposition forms my first hypothesis:

Hypothesis 1: Complexity. As issues grow in complexity the public policies addressing them show greater instability.

Previous scholarship offers tangential support for the hypothesis. In one of the first studies looking at individual policy domains, John and Margetts (2003) found great variability in the levels of punctuation associated with different UK budget functions, as did Mortensen (2005) when applying a similar methodology to Danish budgets. Shifting the focus to the United States, Breunig and Koski (2006) investigated the stability of spending on different policy domains across the US states. They, too, found evidence of variability, noting that some policy domains were much more likely to experience spending punctuations than others. They speculated on the possible causes of this variance and called for further research into the phenomenon. In a later article, Breunig and Koski would be joined by Mortensen (2010) in showing that many policy domains tend to show similar levels of stability across the United States and Denmark. For example, in both countries they found that spending on health care changed only
marginally from year to year. These cross-national similarities are suggestive of common attributes intrinsic to different policy domains. Along these lines, research by Jensen, Mortensen, and Serritzlew (2016) has suggested that the disproportionate information processing model can be improved by accounting for the varying levels of friction that operate on different policy domains. Each study points to seemingly systematic differences across policy domains; some areas of policy making appear more likely than others to adhere to a punctuated equilibrium pattern.

I argue that complexity is the missing element that can explain this variability. In very simple areas, government may have already solved the information problem that leads to policy punctuations. This does not mean that punctuations will never occur in these areas; occasionally even well-understood issues are reevaluated in light of changing technological or environmental circumstances. But for the most part valence issues (engineering issues) are isolated from the dynamics outlined by the disproportionate information processing model. Where issues are complex, and competition between solutions is fierce, we can expect the instabilities predicted by the model to be most pronounced. Chapter 5 tests the complexity hypothesis, using multivariate models in conjunction with US federal budget data. The empirical evidence is strongly supportive of the idea that complexity is a major driver of policy instability.

Institutional Capacity

The complexity hypothesis looks at differences across policy domains, postulating that the informational demands of simple issues are less than those of complex ones. Subsequently, we can expect governments to process information differently depending on the complexity of the issue at hand: more proportionally for simple issues and more episodically for complex issues. With the institutional-capacity hypothesis, I shift my focus from issues to the institutions tasked with solving issues. Institutional capacity—as I use the term—refers to the ability of institutions to process and respond to policy-relevant information. Institutions with a higher capacity to process and respond are, therefore, better equipped to process information proportionally and should produce policies less prone to instability than their less-efficient counterparts.

What features support an institution’s capacity to gather and process information? Chief among these features is a need for institutional actors to have access to reliable information. Williams (1998) makes reference to
“honest numbers”—what he describes as “policy data produced by competent researchers and analysts who use sound technical methods without the application of political spin to fit partisan needs” (1998, ix). In Honest Numbers and Democracy, he describes such policy data as being essential to representative governance, and he documents the rise and fall of policy analysis in the executive and legislative branches of the US government. What started as an informal demand for policy data under President Eisenhower was gradually institutionalized through the introduction of various analytic agencies to the cabinet departments. Increasingly, presidents came to depend on the availability of high-quality data when making policy decisions, a trend that reached its peak during the Nixon and Ford administrations.

More than any recent president, Nixon moved aggressively to strengthen the powers of the executive branch. His strategy was to increase the informational advantage the executive held over Congress, which he accomplished in part by reorganizing the Bureau of the Budget into the Office of Management and Budget (OMB). Where the Bureau of the Budget had been seen as a mainstay of “neutral competence” in American government, the OMB under Nixon was unmistakably partisan (Berman 1979). Claiming that Congress had no capacity to match expenditures with revenues, whereas policy wonks at the OMB could do so for the executive branch, Nixon argued that to guarantee a balanced budget he was forced to impound funds authorized by Congress. These actions provoked a constitutional crisis, as congressional leaders vehemently protested what they saw as unprecedented presidential overreach. In Train v. City of New York the Supreme Court agreed, finding Nixon’s actions to be unconstitutional.

While he failed to undermine Congress’s budgetary authority, Nixon succeeded in demonstrating the power of information. His central argument had been that Congress lacked the informational capacity to produce a balanced budget. Congress responded by passing the Congressional Budget and Impoundment Control Act of 1974, which created budget committees in each chamber of Congress, established formal rules governing the deferral of appropriated funds, and established the Congressional Budget Office (CBO). Today the CBO is widely considered the preeminent government agency for providing honest numbers, and many bills live or die based on CBO projections (Joyce 2011). Skocpol, in her analysis of President Clinton’s failed 1994 attempt at health care reform, remarks:

The CBO, it is worth underlining, has by now become virtually a sovereign branch of the U.S. federal government, comparable in its clout in relation
to the executive and the Congress to the courts back in the Progressive Era and the New Deal. Back then, proponents of new public social policies used to spend a lot of time trying to guess what the courts, and especially the Supreme Court, would accept as constitutional. . . . Today, a comparable process occurs with the CBO, an expert-run agency that the Founding Fathers certainly never imagined when they wrote the Constitution! Today’s drafters of legislation live in fear that the CBO will, ultimately, reject their proposals as not “costed out.” (1997, 67)

With the rise of agencies such as the CBO, the gathering and processing of information has come to play a prominent role in federal policy making. Today a great number of agencies provide policy data to both the executive branch and Congress. But, simply because the agencies exist does not mean that political leaders are always interested in what they have to say. A second factor that weighs heavily on an institution’s capacity for information processing is the receptiveness of its leaders to new information. So, the availability of information should be seen as only a prerequisite for information processing. Indeed, there are often political reasons for limiting the capacity of public institutions to attend to political information. Searching for and processing information can be a slippery slope toward expanding government: newly discovered problems may require new government programs to solve. If you are an advocate of a reduced role for government, then rigorous search mechanisms work counter to your aims. Williams notes that while the peak of governmental interest in policy analysis came during the Nixon and Ford administrations, the decline began with Ronald Reagan. Reagan distrusted policy wonks, preferring to govern based on partisan instincts rather than analytic data. The rise of supply-side economics was characteristic of this new approach to governance. Famously described by George H. W. Bush as “voodoo economics,” the idea that cutting taxes will increase government revenue is for many the ultimate triumph of conservative ideology over empirical realities. Reagan was much less concerned with using policy data to understand problems than he was with using conservative solutions to solve them. For Williams, this heralded a new trend in governance, especially for the Republican Party, where honest assessments of problems took a backseat to partisan maneuvering (Williams 2003). Building on Williams’s analysis, Baumgartner and Jones (2015) measured the federal government’s analytic capacity from the 1940s to 2014. They discovered that, starting in the 1960s, the scope and quantity of policy analysis conducted by government agencies began to expand dra-

matically, which coincided with an increase in the range of issues to which government was attending. They termed this the “great expansionary period.” This “thickening” of government continued until the mid-1980s, when policy makers began to slowly but persistently scale back the range of their collective attention. The authors concluded that modern Republican politics can be seen as a backlash to the great expansionary period, and they concurred with Williams that this trend began with President Reagan. The existence of agencies that can conduct reliable policy analysis is, therefore, only a necessary and not sufficient condition for having a high institutional capacity. Institutional capacity also depends on the willingness of political leaders to engage with analytic agencies and the policy data that they produce. Beyond political concerns about the size and scope of government, privacy concerns are increasingly playing a role in debates about governmental information processing. New technologies make it possible for modern governments to gather information on an unprecedented scale. Consider the revelations about the National Security Agency’s (NSA) PRISM program made public by Edward Snowden in 2013. Deriving authority from the 2007 Foreign Intelligence Surveillance Act and the Protect America Act, the NSA subpoenaed electronic information from major technology companies through the clandestine Foreign Intelligence Surveillance Court. These subpoenas were remarkable, both for their secrecy and their scope; documents leaked by Snowden suggested that the NSA was collecting the telephone records of tens of millions of Americans, and that Verizon was compelled to turn over its data on a daily basis (allegations the NSA denied). Furthermore, the leaks documented NSA efforts to spy on US allies, including recording phone conversations of German chancellor Angela Merkel’s top advisors. The backlash to the PRISM program was extensive, with opponents questioning its constitutionality under the Fourth Amendment. In 2015, Congress allowed the Patriot Act to expire and replaced it with the USA Freedom Act, which renewed some of the authority granted by the Patriot Act to federal agencies but did away with the statute allowing the NSA to collect and store metadata, effectively ending the PRISM program as it had operated. The example exposes the tension between democratic governance and robust information processing. For much of the twentieth century, a concern was the lack of sufficient policy data available to policy makers. Now the concern is the opposite: modern technology makes it possible for the government to spy on people in bulk, downloading their email conversations and tracking their phone calls. In defense of PRISM, former NSA

Director General Keith Alexander acknowledged the controversy surrounding the program, but claimed that it was the best way to stop future terrorist attacks, stating, “there is no other way we know to connect the dots” (Ackerman 2013). PRISM was seen by many in government as an essential information search function and a critical tool in the war on terror. Opponents of the program argued that while bulk-data collection programs might offer an informational advantage, they are incompatible with constitutional rights to privacy. Democracies then may have good reasons for keeping their institutional capacity below a certain threshold. Finally, institutional capacity requires that political leaders have the time to attend to policy data. Policy scholars have identified bounded rationality as a major impediment to proportional information processing. As cognitive limitations are ubiquitous, it is thought that they represent a baseline level of informational efficiency. Still, how far from that baseline an institution falls depends in part on the time demands it places on its members. In democracies, the demands of the electoral calendar can be particularly acute (Fenno 1978). Time spent campaigning is time when relevant information goes unprocessed, and these electoral distractions can have a tangible effect on policy outcomes. For example, Lindsey and Hobbs show that presidents spend less time on foreign policy in the lead-up to a presidential election, and that this has adverse effects on the quality of foreign policy outcomes (2015). Others have argued that demanding electoral calendars at the federal level are to blame for persistent congressional gridlock, as the ongoing need to engage in combative campaign rhetoric makes it difficult for members of Congress to form the kind of bipartisan coalitions essential to effective policy making. In summary, institutional capacity comes down to two factors: the resources to conduct policy analysis and receptiveness to that analysis. These factors vary across institutions and also (as Williams showed) within the same institutions over time, depending on the political leadership or electoral pressures. For modern governments, receptiveness is more often the limiting factor. It is possible for today’s governments to gather information on an unprecedented scale, but often political leaders have good political and ethical reasons for not doing so. When institutions have the resources to conduct policy analysis and political leaders are receptive to that analysis, then institutions can operate at a high informational capacity. When the resources are lacking or policy makers are uninterested, then capacity is reduced. My expectation is that varying levels of institutional capacity should affect the magnitude of policy instabilities. The logic behind this expecta-

tion is straightforward. Policy punctuations are caused by the intermittent allocation of attention across issues. Institutions operating with greater capacity have more attention to go around and, thus, may be better situated to allocate this attention evenly across issues than are their more myopic institutional counterparts. Fewer small issues should “slip through the cracks,” only to require dramatic solutions at a later point once they have grown into more substantial problems.

Hypothesis 2: Institutional Capacity. Institutions with a greater informational capacity will produce outputs less prone to instabilities than those from institutions with lower capacity.

I draw preliminary support for the hypothesis from scholarship showing systematic variance in the stability of outputs across institutional forms. Robinson (2004) focused on the bureaucratization of school districts. He found that more bureaucracy tends to reduce punctuations in school budgets. In a later collaborative article, Robinson and his colleagues (2007) expanded on these findings by looking separately at two dimensions of bureaucracy: centralization and organizational size. The authors found that centralization makes punctuations more likely, while organizational size works in the opposite direction. Breunig and Koski (2009) argued that states where governors have more centralized budgetary authority have lower informational capacity and thus produce more punctuated budgets. More recently, Flink (2017) looked at the organizational performance and personnel turnover of hundreds of Texas schools over a twenty-year period. She found that as organizational performance increases, so does budgetary stability. Personnel turnover also affects the likelihood of punctuations, with less turnover leading to more stability. In all, there are good reasons to expect that institutional factors affect the capacity of institutions to engage with policy information. Subsequent analysis will build on the Robinson and Flink findings by testing the institutional-capacity hypothesis at the state and federal levels.

Decision-Making Processes

The previous hypotheses focus on the characteristics of issues and institutions, postulating that different characteristics can be associated with better or worse information processing. Now the focus shifts to the decision-making processes by which public institutions allocate resources. In a sense, this element can be considered another dimension of institutional capacity. One way to increase capacity is to invest in research agencies that
produce policy analysis; another is to use decision-making procedures that allow for a more comprehensive processing of information. This is an area that has received comparatively little attention in policy scholarship. In part, this neglect is because most public institutions use the same type of decision process: policy makers within an institution compromise and debate over the merits of a particular proposal, which is ultimately put to an up or down vote. While the rules governing the nature of the proceedings vary, the process is fundamentally the same, resulting in outputs that are the consensus result of debate by participants. I term this type of decision making “deliberative,” which I contrast with collective processes. Collective decision making aggregates the actions of many independent decision makers to arrive at an outcome, so no deliberations (besides internal ones) are required. The distinction is critical because the independent nature of collective processes means that they have the potential to “gain by aggregation” and in doing so circumvent many of the cognitive barriers to change that feature so prominently in the disproportionate information processing model. For example, visitors to the state fair often have an opportunity to participate in a classic carnival game that asks participants to guess the number of gumdrops in a jar. Whoever is closest to the actual number wins the jar. One variation of the game requires that participants beat the averaged crowd response in order to win. Carnival workers know that this seemingly innocuous twist makes the game enormously more difficult; to beat the crowd, a participant will have to guess the number of gumdrops almost exactly. In fact, state fairs led to one of the first academic investigations of collective decision making. In The Wisdom of Crowds, Surowiecki (2004) recounts how Francis Galton, the nineteenth-century English polymath, was shocked to discover that collectively a crowd of fairgoers had come closer to guessing the true weight of an ox than had butchers and other experts. Galton described the results of the weight-judging contest in the journal Nature and introduced his article as follows: “In these democratic days, any investigation into the trustworthiness and peculiarities of popular judgements is of interest. The material about to be discussed refers to a small matter, but is much to the point” (1907, 450). For students of public opinion these words have a special ring to them. The phenomenon Galton described is now the cornerstone for modern theories of representation. Crowds are “wise” because collective decisions gain by aggregation. Each individual actor has an effect on the final outcome, so idiosyncratic behavior by actors who are uninformed averages out, leaving a clear, sophisti-

cated signal from the actors who are reacting to some common stimuli. There is, in other words, a powerful empirical reason why collective systems can sometimes process information more effectively than deliberative ones.

Imagine that every day some investors sell their shares in a certain stock for various personal reasons; perhaps they have a major purchase on the horizon. Conversely, other people may decide to finally act on some long-held hunch about the stock and purchase shares. Since this behavior is random, it will average out, with the people buying new shares making up for the deficit caused by people who sold their shares. It has to average out because there is no common stimulus driving the behavior; buyers and sellers are acting independently on personal and idiosyncratic information. Now imagine that the company selling the shares announces that quarterly revenues were lower than expected. This sends a strong signal to investors that the company is not performing as well as hoped, that perhaps more bad news is on the horizon, and that it might be time to divest. This is an actual signal, not random noise, so we can expect it will not average out, but instead cause share prices to immediately adjust to a lower level, as many investors respond in tandem. Crucially, the number of investors who receive the signal and react to it by selling their shares only has to be a small percentage of the total number of investors for share prices to drop. If 10 percent of investors sell their shares and the other 90 percent continue to behave randomly, then the net effect will reflect the actions of the attentive 10 percent. The implication is that even if the number of people paying attention is small, crowds can be responsive to environmental stimuli. This is aggregation gain. A majority of participants can miss a signal, but as long as some catch it and react, the system will respond.

This process has been described mathematically as the diversity prediction theorem (Page 2007). Page shows that the group error is equivalent to the average individual error minus group diversity. In other words, more diverse groups tend to make more accurate predictions than individuals or less diverse groups. This can be taken as evidence of diverse groups’ ability to comprehend information more efficiently than more homogenous groups. Notice that diversity is the key element. A large group is not always the same thing as a diverse group; increasing the number of participants but not their diversity can lead to asymmetric biases and very inaccurate group predictions. So collective intelligence does not inevitably emerge from every aggregative process; certain conditions must be met. In addition to diversity, Surowiecki (2004) argues that collective
intelligence requires that participants act independently and that they make their decisions in private. Trading on financial markets routinely violates these conditions. Investors are incredibly diverse and bring esoteric knowledge to the table, but their decisions are broadcast publicly through changing asset prices, and they are highly attuned to the behavior of their counterparts. This type of herd behavior is often to blame for market bubbles.

One of the earliest well-documented instances of a speculative economic bubble is the famous “tulip mania” that gripped the Netherlands during the seventeenth century. Tulips were introduced to Europe from the Ottoman Empire sometime in the late sixteenth century and soon gained great popularity among the upper classes as luxury items. Cultivating tulips required some patience, as they could only be grown from bulbs, which took anywhere from five to twelve years to grow from seeds. Complicating matters was that those tulips considered the most beautiful featured a mosaic coloring pattern that was caused by a tulip virus that undermined the bulb’s viability. The most desirable tulips were therefore very rare and this contributed to the mania. In his 1841 Extraordinary Popular Delusions and the Madness of Crowds, Mackay describes a purchase at the height of the mania of forty tulip bulbs for 100,000 florins. Skilled laborers at the time would earn around 150 florins a year. Given these incredible prices, the Dutch people rushed into the tulip business, only to be dismayed when, within a few short years, the market collapsed and prices fell back to earth. The Dutch tulip mania remains a classic example of an economic bubble because in retrospect it seems so absurd. Tulips are, after all, only flowers; even diamonds have some practical, industrial purposes.

Clearly then, while crowds can be wise, this is not guaranteed; sometimes they appear delusional. And economic bubbles still happen today. The 2008 housing market collapse is the most recent and dramatic example. In these cases, the problem often results from a lack of independence, which traces its roots back to bounded rationality. One of the many heuristics people use to simplify a complex world is mimicry and, by imitating the behavior of a trusted source, people save themselves the time it would take to arrive at their own independent conclusions. In financial markets, when this type of behavior accumulates over time it can lead to vastly inflated prices for assets, relative to their value fundamentals (Basu 1977; Shiller 2000; Sell 2001; Sornette 2003).

Clearly, then, collective decision processes should not be considered a “cure” for bounded rationality. Still, it makes sense to draw a
distinction between the informational efficiency of deliberative and collective decision-making procedures. A decade of scholarship has shown us that deliberative policy making produces highly punctuated patterns of policy change. Collective processes (when used under the right circumstances) may offer an alternative. Their chief advantage is that, by aggregating across independent actors, they minimize cognitive frictions, although certainly they do not eliminate these entirely. But the question is threefold: (a) Are there any areas of policy making where the conditions necessary to give rise to collective intelligence are met? (b) If those conditions exist, are there collective decision processes in place to take advantage of them? (c) If those processes exist, do they result in policies that are less punctuated than those typically produced by deliberative procedures? I argue that the answer to all three questions is yes, and this leads to my last hypothesis.

Hypothesis 3: Collective Decision Making. Policy outputs that are determined by collective decision making will be less punctuated than those that are the result of deliberative processes.

A major argument of this book is that collective processes are not uncommon to policy making. Monetary policy, a substantively important component of governance, is based predominantly on collective interactions. Many social policies also feature collective aggregation components. For example, the Affordable Care Act established a marketplace for health insurance. Subsequent analysis focuses on three ways that collective decision making is integrated into the policy process: (1) collective processes are used directly to set policy outcomes; (2) policy makers establish markets and mandate that private organizations participate in them; and (3) governments vacate an economic sector through deregulation to spur competition.

Jones, Sulkin, and Larsen (2003) showed that annual changes in stock market returns rarely underwent punctuations. The authors focused on institutional frictions, emphasizing that modern stock markets are characterized by relatively low transaction and information costs. Subsequently, they concluded that the lack of instability in market returns matched theoretical expectations because markets have fewer institutional constraints than governments. The authors were correct to focus on institutional barriers to change—markets often do minimize these frictions—but they missed an even greater consideration: that collective decision making can also minimize cognitive frictions. Given the right circumstances, crowds are much more informationally efficient than deliberative bodies because they gain by aggregation. Collective decision making, therefore, has the potential to greatly reduce policy instabilities because it can minimize both institutional and cognitive barriers to change.
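
The aggregation logic running through this section can be made concrete. Page’s (2007) diversity prediction theorem is an exact identity: the squared error of the crowd’s average judgment equals the average squared error of the individual judgments minus the variance (the “diversity”) of those judgments around the crowd average. The short sketch below is illustrative only; it is not from the original analysis, and the Galton-style numbers (800 fairgoers, a 1,198-pound ox, a 60-pound spread in guesses) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Galton-style contest: many independent guesses of an ox's weight.
true_weight = 1198.0                                  # pounds (roughly the figure Galton reported)
guesses = true_weight + rng.normal(0, 60, size=800)   # independent, noisy individual judgments

crowd_guess = guesses.mean()

group_error = (crowd_guess - true_weight) ** 2             # squared error of the collective judgment
avg_individual_error = ((guesses - true_weight) ** 2).mean()
diversity = ((guesses - crowd_guess) ** 2).mean()          # spread of judgments around the crowd average

# Diversity prediction theorem: group error = average individual error - diversity.
print(group_error)
print(avg_individual_error - diversity)   # matches group_error up to floating-point rounding
```

Because diversity is never negative, the crowd’s squared error cannot exceed the average individual’s; what herding does, as in the tulip mania and housing bubble discussed above, is erode the diversity term so that the collective gains shrink.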

Understanding Policy Instabilities

I argue that complexity, institutional capacity, and decision-making processes powerfully condition the stability of institutional outputs. The effects of complexity and capacity on the structure of policy change have been discussed in prior scholarship, and subsequent analysis will push this literature forward by building explicit empirical tests of these ideas. Collective processes have also received some consideration, but their ability to mitigate cognitive frictions has gone unappreciated. In particular, the distinction I draw between deliberative and collective decision making is new to policy scholarship.

All three hypotheses find empirical support. When issues are simple or institutions operate at a high information capacity, then policy instabilities are less common. Collective processes can, in some circumstances, eliminate the occurrence of policy punctuations almost entirely, offering a potential exception to the General Punctuation Hypothesis. Before proceeding to tests of these hypotheses, the next chapter lays out the book’s empirical approach. It introduces stochastic process methods and demonstrates with simulations how different information processing models can be linked to different distributions of change.
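
The kind of simulation referenced above can be previewed with a minimal sketch. The example below is not drawn from the book; the threshold, sample size, and noise scale are arbitrary assumptions. It shows the basic logic: a proportional processor responds to each period’s information as it arrives, while a friction-laden processor lets unaddressed signals accumulate until they cross an attention threshold. The first produces roughly normal changes; the second produces the tall central peak and fat tails of a punctuated equilibrium pattern.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(42)
signals = rng.normal(0, 1, size=10_000)   # new information arriving each period

# Proportional information processing: respond to every signal as it comes in.
proportional_changes = signals

# Disproportionate processing: ignore information until the accumulated,
# unaddressed signal crosses an attention threshold, then adjust all at once.
threshold = 3.0
backlog = 0.0
punctuated_changes = []
for s in signals:
    backlog += s
    if abs(backlog) > threshold:
        punctuated_changes.append(backlog)   # a large, episodic adjustment
        backlog = 0.0
    else:
        punctuated_changes.append(0.0)       # the status quo holds

# Excess kurtosis is near 0 for the proportional series and strongly
# positive (leptokurtic) for the punctuated one.
print(kurtosis(proportional_changes), kurtosis(np.array(punctuated_changes)))
```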

FOUR

Distributional Assessments of Institutional Response

As we saw in chapter 1, federal spending on spaceflight has undergone a number of punctuations since the budget category was introduced in 1949. These included a dramatic ramp-up in spending after Sputnik's launch and, in the 1970s, an almost equally massive decrease, as public enthusiasm for expensive adventures in space waned. Aside from these brief periods, however, spending on spaceflight fluctuated only marginally. In the 1950s, the category scarcely existed, with spending drifting slowly between $300 and $700 million (in 2012 dollars). Since the early 1990s, spending on spaceflight has moved between $15 and $18 billion. Looking at the entire history of the category, we can describe annual changes to spending on spaceflight as predominantly small, with a few very large adjustments. Figure 4.1 makes this clear by showing the distribution of these annual changes. The vast majority of changes are clustered around 0, but one change is larger than 150 percent and another is less than −50 percent. The pattern of change on display in the figure is exactly what the disproportionate information processing model predicts. Policies "stick" in the same place for long periods of time, so we observe a very tall central peak centered around zero, and then briefly but dramatically "slip" forward when urgency around an issue grows or when enthusiasm rapidly evaporates, so we observe very wide tails. Of course, the model is not about spaceflight in particular; it describes the entire policy process. This means that if we look at other policy areas—congressional hearings on transportation, food stamps, or international aid, for example—we should observe a similarly disjointed pattern of change. Consequently, if we pooled changes across every budget category, then the shape of the resulting full-budget distribution should match that of the distribution in figure 4.1, as we would simply be layering similarly shaped distributions on top of one another.


Figure 4.1. Distribution of annual changes in federal budget authority for spaceflight
Note: Shapiro-Francia test allows a rejection of the hypothesis that the data are normally distributed, with a p-value of 0.00001.
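To make these mechanics concrete, the following R sketch builds a distribution of annual percentage changes from a hypothetical spending series and tests it for normality. The series is invented purely for illustration (it is not the budget-authority data behind the figure), and the sf.test() call assumes the nortest package, which implements the Shapiro-Francia test; base R's shapiro.test() is a close substitute.

# A minimal sketch: construct a distribution of annual percentage changes
# from a hypothetical spending series and test it against normality.
library(nortest)  # assumed available; provides sf.test(), the Shapiro-Francia test

set.seed(42)
# Hypothetical spending series, in millions of constant dollars
spending <- 5000 + cumsum(rnorm(60, mean = 0, sd = 50))

# Annual percentage changes, calculated against the prior year's base
pct_change <- 100 * diff(spending) / head(spending, -1)

# Histogram of the change distribution
hist(pct_change, breaks = 20,
     main = "Annual percentage changes", xlab = "Percentage change")

# Test the null hypothesis that the changes are normally distributed
sf.test(pct_change)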

Figure 4.2 shows that this is exactly what we find. The full-budget distribution, based on constant 2012 dollars, is more filled out than the spaceflight distribution (as there are many more observations), but their shapes are very similar. Both feature a high central peak, indicating that most changes are only modest, and fat tails, indicating that some changes are very large. In fact, the tails of the budget distribution stretch so wide that the figure clusters extremely large increases at 150 percent and extremely large decreases at −80 percent, so that the center of the distribution is fully visible. (Note that these changes are made only as a visual aid; statistical moments are calculated on the full, unclustered range of values.) With this number of observations, we can see that the full-budget distribution has very weak shoulders, meaning that there are comparatively few mid-level changes between 15 percent and 50 percent. So changes appear to come in two types: small or very large. The figure also displays a normal distribution that has the same mean and standard deviation as the budget distribution. Comparing the two highlights how far budgeting deviates from a normal data-generating process. And this visual comparison is supported by the Shapiro-Francia test, which tests the null hypothesis that the data are normally distributed. A significant test parameter allows the rejection of the null, indicating that the data under examination are not normally distributed. Studies have shown that budget distributions more closely resemble the Paretian (Jones et al. 2009), a class of "extreme value" distributions originally described by the Italian economist Pareto to explain the allocation of wealth in society.


Figure 4.2. Annual changes in federal budget authority pooled across on-budget subfunctions
Note: Percentage changes are not calculated for amounts of less than $50 million. Extremely high values are clustered at 150 percent and −80 percent. Shapiro-Francia test allows a rejection of the hypothesis that the data are normally distributed, with a p-value of 0.00001.

Pareto distributions have density functions that follow a power law, so they describe processes that are prone to extremely rapid growth or decay. Pareto had noted that societal wealth tends to become concentrated among a few ultra-rich citizens because it becomes increasingly easy to make more money once you already have a lot of money; thus the saying "the rich get richer." It is not just the US federal budget that matches the Paretian—the form holds for almost every government budget so far described, which is the central point of the "general empirical law of public budgeting" described by Jones et al. (2009). The distribution in figure 4.2 is an updated classic, first appearing in Jones and Baumgartner's The Politics of Attention (2005, 111). Since then, distributional analysis has become a cornerstone of policy scholarship.

Knowing the distribution of a variable is always important when conducting empirical analysis and, in particular, when fitting statistical models. For scholars interested in policy change, distributional assessments have an added importance because models of policy making have implications for how distributions of changes in public policies are shaped. Simulations later in the chapter will demonstrate this connection, showing that incremental models of policy making imply a normal distribution of policy changes, while the disproportionate information processing model predicts a fat-tailed distribution.


When the entire distribution of values is of theoretical interest, learning the shape of that distribution can be an important test of theory in and of itself. Furthermore, policy scholars are often most interested in outliers and deviations from the norm, for which standard statistical methods that seek to explain mean differences across variables are poorly suited. For these reasons, "stochastic process methods" that shift the focus from mean differences to higher moments of a distribution (variance, skew, or kurtosis) are often employed in policy studies (Breunig and Jones 2010). This book employs a variety of methods, including statistical modeling, but stochastic process methods feature most prominently. As distributional analysis is central to this methodology, the next section reviews popular approaches to assessing a distribution's shape.
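As a concrete illustration of these higher moments, the short sketch below computes all four for an invented series of changes. The skewness() and kurtosis() helpers assume the moments package, which reports raw rather than excess kurtosis, so a normal distribution scores about 3.

# Sketch: the four moments of a simulated distribution of policy changes
library(moments)  # assumed available; provides skewness() and kurtosis()

set.seed(7)
changes <- c(rnorm(950, mean = 0, sd = 2),   # many small adjustments
             rnorm(50, mean = 0, sd = 40))   # a few large punctuations

mean(changes)       # first moment: central tendency
var(changes)        # second moment: spread
skewness(changes)   # third moment: symmetry
kurtosis(changes)   # fourth moment: peakedness (about 3 for a Gaussian normal)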

Unpacking the Policy Scholar's Toolbox

A distribution of values has four statistical moments that describe its shape. Of these, the mean (the first moment) receives the most attention; most statistical models in political science describe how a change in the mean of one variable affects the mean of another variable. Variance describes the spread of values and skew describes their symmetry. Kurtosis is the fourth moment and, in the social sciences, this property generally receives the least attention. The exception is in policy studies, where kurtosis is of great importance because it measures the "peakedness" of a distribution. Distributions with high kurtosis have high central peaks, fat tails, and weak shoulders. These are exactly the distributional properties predicted by the disproportionate information processing model. Institutional and cognitive frictions make moving away from the status quo extremely difficult, so we observe a high central peak centered close to 0 percent, indicating that the majority of changes are only marginal. Occasionally, these modest changes are interrupted by a massive adjustment, so we observe fat tails, indicating that some changes are very large. Moderate changes, which should fall onto the shoulders of the distribution, are conspicuously absent. Kurtosis has, therefore, become a leading indicator of the degree to which an institution conforms to the model. If institutional outputs form a distribution with high kurtosis, then this is taken as evidence of disproportionate information processing. Institutions that produce low-kurtosis distributions can be considered more informationally efficient and thus less susceptible to stick-slip dynamics. A Gaussian normal distribution has a kurtosis value of 3.


Values above 3 indicate leptokurtosis (excessive peakedness) and values below 3 platykurtosis (excessive flatness). For example, the budget distribution in figure 4.2 has a kurtosis value of 469, so it deviates substantially from the Gaussian normal. The normal distribution in the figure has the same mean and standard deviation as the budget distribution, illustrating the vast differences in distributional shape that higher kurtosis can make. As an analytic tool, however, kurtosis does have drawbacks. Chief among these is that its estimation is sensitive to extreme values, making it somewhat unreliable (Groeneveld 1998). The solution is to estimate kurtosis based on L-moment ratios, which are less affected by outliers and more appropriate when the total number of cases is small (Hosking 1990; Hosking 1992). L-moment ratios are standardized versions of the conventional moments based on linear combinations of ordered statistics. The L-kurtosis ratio is calculated by dividing the fourth L-moment by the second L-moment, a measure of variance. As a result, L-kurtosis takes values between zero and one. The Gaussian normal has an L-kurtosis of 0.123, with higher values indicating more leptokurtosis and lower values less. Note that in figure 4.2 the budget distribution has an L-kurtosis of 0.619. Breunig and Koski (2006) introduced L-kurtosis scores to policy scholars in their examination of budgeting in the American states.

Even using L-kurtosis, scholars working with budget data often worry that extreme values will bias their estimates. It is much easier to effect massive percentage changes to a small base value than to a large one, and this could potentially lead to spurious conclusions about the existence of a punctuated change pattern when, in fact, policy makers were only making relatively small adjustments to a minor budget category. For example, a $500,000 increase is relatively minor at the federal level, but if such a change is made to a budget category that in the previous year saw only $1 million in spending, the percentage change would be 50 percent. Standard practice is therefore to exclude very small base values before drawing a distribution in order to make L-kurtosis estimates robust against this potential measurement bias. This is what I have done in figure 4.2 by not including any changes calculated from base values less than $50 million in the distribution. I arrived at $50 million as the cutoff point through an iterative process in which I repeatedly estimated the L-kurtosis of the budget distribution after excluding increasingly large dollar amounts. Excluding values less than or equal to $50 million creates marked reductions in L-kurtosis, but only slight reductions in the number of observations. However, excluding amounts above $50 million substantially reduces the overall number of observations without effecting any meaningful change in L-kurtosis.


This suggests that changes to a few, substantively small ($50 million or less) budget categories were a powerful driver of L-kurtosis, and it is important to exclude such observations so that the analysis is not biased by extreme variability around these small base values. Kurtosis and L-kurtosis tell the same story, but L-kurtosis is a more precise empirical measure. This makes it appealing for policy scholars attempting to distinguish between institutional processes. For example, the complexity hypothesis introduced in chapter 3 postulates that institutions will be more complete processors of information for simple issues than for complex ones. Empirically, then, the expectation is that distributions of policy changes for complex issues will have higher L-kurtosis, as policy makers are more prone to alternate between ignoring these issues and over-responding to them. Distributions of changes for simple issues should have lower L-kurtosis and look more like the Gaussian.

Estimating L-kurtosis is the first component of the policy scholar's toolbox and features prominently in most articles investigating the causes of policy instabilities. The importance of L-kurtosis as an analytic tool is such that it is worth spending some time becoming familiar with how kurtosis affects the shape of a distribution. As a reference point, figure 4.3 presents an array of randomly generated distributions with different L-kurtosis values. The distribution on the top left is platykurtic, with an L-kurtosis value of 0.037. On the top right is the Gaussian distribution, on the bottom left a leptokurtic distribution with an L-kurtosis of 0.261, and on the bottom right is a distribution that is extremely leptokurtic. This treatment demonstrates why leptokurtosis is associated with peakedness and the profound effect that kurtosis levels have on a distribution's shape.

A Note on Bin Widths

When drawing a distribution, bin widths are the size of each column in the resulting histogram of values. Larger bin widths group more values together into one column, and smaller bin widths spread them out over many columns. Not surprisingly, by manipulating bin widths an analyst can significantly alter the shape of a distribution and this, in turn, can affect estimates of kurtosis. John and Margetts (2003) point out that this kind of analytic flexibility makes evidence of leptokurtosis somewhat unreliable, as the choice of bin widths is often arbitrarily made. Statistical programs, such as Stata and R, implement Hosking's (1990) formula for estimating L-moments, and doing so does not require the researcher to establish bin widths.


Figure 4.3. Distributions with different kurtosis values

That is, an analyst can find the L-kurtosis of a data series, without drawing a histogram, in much the same way that an analyst might calculate the data's mean. When plotting distributions for visual comparison, the challenge is to accurately translate a continuous probability distribution into a histogram where values are clustered by magnitude. This can be accomplished by minimizing the trade-off between bias and variance that arises when choosing a bin width (Sheather and Jones 1991; Simonoff 1996; Breunig and Jones 2010). Many statistical programs automatically operationalize this rule when drawing histograms. Data with different characteristics will result in different bin widths.


For example, figure 4.1 shows the distribution of changes in budget authority for spaceflight, and the histogram uses bin widths of magnitude ten, grouping percentage changes into ten-unit intervals. Figure 4.2, which looks at the full-budget distribution, uses a smaller bin width, grouping percentage changes by five-unit intervals. This leads to an important point: When visually comparing distributions drawn from different data, it is less important for the bin widths to be the same size across distributions than for the number of bins to minimize the mean interval squared error for the data.

Quantile Regression

Given the extreme shape of many policy distributions, special considerations must be made when operationalizing policy change as the dependent variable in statistical models. Conventional ordinary least squares (OLS) regression assumes that the dependent variable is normally distributed. When looking at changes in public policies this is often not the case—indeed, a normally distributed set of policy outputs would be remarkable, running counter to the General Punctuation Hypothesis. So, if the goal is to predict the magnitude of changes in federal spending, then OLS would be a suboptimal estimation technique given the shape of the budget distribution in figure 4.2. A popular alternative is quantile regression, which estimates the effect of predictor variables at different quantiles of the dependent variable (Koenker and Bassett 1978). This allows for the possibility that the factors that lead to incremental change are different from those that cause punctuations or, for example, that the factors that lead to policy retrenchment are different from those that lead to expansion. Another approach to the issue of nonnormality in policy distributions is to divide cases into discrete categories: for example, grouping changes above the second standard deviation of the distribution as punctuations. This technique is used by Robinson and his colleagues (2007), who assign budget changes to different categories depending on their magnitude and then use multinomial logistic regressions to estimate the effects of independent variables on the likelihood that a change falls into a category. However, quantile regression has an advantage in that it allows the analyst to leverage the full distribution of values.

These techniques—estimating L-kurtosis values and empirical modeling with consideration for nonnormality—are only two of many stochastic process methods available to researchers.3 But their relative accessibility and empirical precision make them the most popular in the literature on policy change. Both techniques feature prominently in the upcoming analysis, with chapters 5 and 6 developing a model of policy change and chapters 7 and 8 estimating L-kurtosis for a variety of change distributions.


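The sketch below indicates how these two tools might be applied in R, using hypothetical data as a stand-in for the budget-change series analyzed later. The L-moment calculation assumes the lmom package, whose samlmu() function implements Hosking's sample L-moments, and the quantile regression assumes Koenker's quantreg package.

# Sketch: estimating L-kurtosis and fitting a quantile regression on hypothetical data
library(lmom)      # assumed available; samlmu() returns sample L-moments
library(quantreg)  # assumed available; rq() estimates quantile regressions

set.seed(11)
pct_change <- c(rnorm(900, 0, 5), rnorm(100, 0, 60))  # punctuated-looking changes
attention  <- rnorm(1000)                             # hypothetical predictor

# Sample L-moments: the fourth value reported, t_4, is the L-kurtosis ratio
# (a Gaussian normal has an L-kurtosis of about 0.123)
samlmu(pct_change, nmom = 4)

# Quantile regression: does the predictor matter more in the tails
# (10th and 90th percentiles) than at the median?
summary(rq(pct_change ~ attention, tau = c(0.1, 0.5, 0.9)))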

Linking Information Processing Models to Change Distributions

Policy making is the act of turning inputs into policy outputs. These inputs take the form of problem indicators, and every year some problems grow worse and others get better. A government's response function—how it reacts to shifting indicators—determines the rate of policy change and, subsequently, how distributions of policy outputs will be shaped. As political scientists, we can empirically measure the distribution of policy changes to test our hypotheses about government responsiveness.

One of the first descriptions of the government's response function was Lindblom's incrementalism. In applying the theory to budgeting, Wildavsky noted that forming a budget is one of the most complex tasks facing policy makers, and at the federal level a new one is required every year. Obviously policy makers are unable to evaluate the merits of every program from scratch (this approach is called "zero-base" budgeting and is untenable for most large organizations), so Wildavsky reasoned that policy makers simply take last year's budget and marginally adjust spending to each program based on their best assessment of changing environmental circumstances. Spending on any individual program would be adjusted by some amalgamation of the relevant problem indicators. Taken together, if those indicators suggest the problem is getting better, spending is adjusted downward, and if the sum of indicators points to a problem that is growing worse, policy makers spend more. This, then, is the incrementalist response function. Crucially, it implies that policy makers are informationally neutral, in the sense that problem indicators are all given equal weight in determining governmental response. That is, policy makers take an unbiased summation of various problem indicators to determine their response.

In many respects, incrementalism remains a highly successful account of the policy process; as the distribution in figure 4.2 makes clear, the vast majority of budgetary changes enacted by the federal government are indeed very small, exactly what incrementalism predicts. The flaw with incrementalism is that some budgetary changes are not small and that these disruptions occur far too frequently to be easily dismissed as mere statistical flukes (although this was the initial response of proponents of the model). Conceptually, incrementalism is at a loss to explain the dramatic ramp-up in spending on spaceflight in the wake of the Sputnik crisis or the subsequent cuts; these aberrations simply do not fit the model.
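Written out as an update rule, the incrementalist response function amounts to only a few lines. The sketch below uses invented numbers and is meant only to fix ideas: last year's appropriation is carried forward and nudged by an unweighted sum of indicator changes.

# Sketch: an incrementalist budget update with equal weight on every indicator
last_year  <- 1000                                          # hypothetical spending, in millions
indicators <- c(ind_a = 0.03, ind_b = -0.01, ind_c = 0.02)  # yearly change in each problem

adjustment <- sum(indicators)          # informationally neutral: an unweighted sum
this_year  <- last_year * (1 + adjustment)
this_year                              # 1000 * 1.04 = 1040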


In an article that revolutionized policy studies, Padgett (1980) demonstrated that information processing models could be linked to specific distributional forms. In particular, he showed that incrementalism implied a normal distribution for first differences. Assuming that the problems facing government are independent and stochastic (a safe assumption taking a broad view of the problem space), if policy makers simply sum across the indicators of different problems to arrive at their policy response, then the distribution of these responses will be Gaussian normal. This logic follows directly from the Central Limit Theorem, which says that the sum of many independent and stochastic factors will be normally distributed. Padgett went on to note that the actual distribution of problems is irrelevant (as long as they are independent and random), and what matters is only how policy makers respond to those indicators. If they are informationally neutral, that is, if they weight every indicator equally, then policy changes will be normally distributed. Having made this crucial empirical connection, Padgett could then test the incrementalist model by simply observing the distribution of changes in government spending. Doing so, he found that budget distributions deviated sharply from the normal (as we just saw in figure 4.2), suggesting a serious flaw with how incrementalism conceptualized policy making.

Baumgartner and Jones's insight (1993, 2005) was to revisit the notion that policy makers would weight indicators equally when determining government's response. They noted that the institutional and cognitive frictions at work in the policy process suggested that policy makers would become "stuck" on a particular indicator and overprioritize it. In other words, policy makers are not informationally neutral, and the policy-response function derives from a heavily weighted process in which some indicators are given great priority and most are ignored. This "disproportionate information processing" stands in contrast to the proportional allocation of attention across indicators implied by incrementalism. Critically, the words "disproportionate" and "proportionate" do not refer to the size of government's response vis-à-vis the size of problems. That is, disproportionate does not imply a response out of sync with the magnitude of an underlying problem (although this may certainly be the case). Nor does proportionate imply an efficient response. Rather, they refer to the allocation of the government's attention to problem indicators: Is it even (proportionate) across indicators or uneven (disproportionate)? The averaging-out effects of the Central Limit Theorem are disrupted by the disproportionate weighting of different indicators and, thus, the Jones-Baumgartner model predicts a fat-tailed, leptokurtic distribution of policy changes.
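Padgett's logic can be illustrated with a toy simulation, kept separate from the annotated simulations presented later in the chapter. The sketch below runs the same independent, non-normal inputs through two response functions: a neutral response that weights every indicator equally and so, by the Central Limit Theorem, yields roughly normal changes, and a disproportionate response in which each indicator's accumulated, ignored changes build up until they force attention, yielding a tall central peak and fat tails. The kurtosis() helper again assumes the moments package.

# Toy comparison: equal weighting versus disproportionate attention
set.seed(99)
n_years <- 5000
n_ind   <- 20
shocks  <- matrix(runif(n_years * n_ind, -1, 1), nrow = n_years)  # uniform inputs, not normal

# 1. Informationally neutral response: sum every indicator with equal weight.
#    The Central Limit Theorem pushes these responses toward the normal.
neutral <- rowSums(shocks)

# 2. Disproportionate response: an indicator is ignored until its accumulated,
#    unaddressed changes cross an attention threshold, then policy catches up at once.
threshold <- 10
backlog   <- rep(0, n_ind)
dispro    <- numeric(n_years)
for (t in 1:n_years) {
  backlog   <- backlog + shocks[t, ]
  attended  <- abs(backlog) > threshold
  dispro[t] <- sum(backlog[attended])   # zero in most years
  backlog[attended] <- 0
}

moments::kurtosis(neutral)   # close to 3: approximately Gaussian
moments::kurtosis(dispro)    # well above 3: peaked, with fat tails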


Figure 4.4 summarizes the response function under disproportionate information processing. The process starts in the real world, where thousands of problems confront human societies. One such problem is violent crime, and every year in the United States thousands of people are murdered, robbed, or otherwise assaulted. It is reasonable to expect that governments will try to solve this problem, but where should they begin? Crime (like many endemic social problems) is multifaceted. Is the root of the problem so-called super predators, young men born with innate proclivities for violence, beyond any hope of redemption? Or perhaps the problem is one of hopelessness—people unable to escape a cycle of poverty turn to desperate measures in a last-ditch effort to alter their circumstances. Or the issue might be grounded in long-standing racial inequalities, whereby minorities contend with a justice system designed to limit their opportunities for social advancement. Of course, elements of all three narratives (and many more) may have some truth to them. We can measure each element with a reasonable degree of accuracy and then make policy makers aware of those measurements, so these elements can be considered indicators of the larger problem of violent crime. Note, however, that at this stage in the process we have passed through the first filter; problems in reality are often more complex and interconnected than we can account for, so the indicators that arrive on policy makers' desks tell only a partial story.

In order to craft a policy response, policy makers must then decide how to prioritize the various indicators that they are made aware of. This is the stage of the process where the incrementalist and disproportionate information processing accounts differ. In the incrementalist framework, priorities are updated only marginally from year to year, implying a proportional allocation of attention across indicators. Conversely, Jones and Baumgartner argue that some indicators are overemphasized, whereas others are mostly ignored.

Figure 4.4. Policy response under disproportionate information processing


This is what figure 4.4 shows: The hypothetical government in question has directed many resources toward neutralizing super predators, less attention to ending poverty, and very little attention to racial inequalities. So the second filter is based on problem weighting, and sometimes indicators can be filtered out entirely if they receive no attention. The resulting policy response (stage 4 of the process) is the summation of the indicators, each weighted by its relative priority. In this example, the government views violent crime as predominantly a psychological problem. But a key insight of the disproportionate information processing model is that the weighting of different indicators never settles down into a permanent equilibrium, so these weights are not fixed over time. It is the abrupt reprioritization of indicators that generates the policy punctuations called for by the theory.

Simulations

The dynamic nature of the relationship between information processing and the shape of change distributions can be demonstrated with a series of very simple simulations in R. Annotated code is provided for replication purposes. First, I look at the incremental model, simulating what happens when policy makers attend equally to a set of problem indicators:

Simulation 1—Incrementalism

Problems