America's Leaning Ivory Tower: The Measurement of and Response to Concentration of Federal Funding for Academic Research [1st ed.] 978-3-030-18703-3;978-3-030-18704-0

This book will expand the body of literature on capacity-building in science and improve public understanding of the issue.


English Pages XII, 141 [151] Year 2020

Table of contents:
Front Matter ....Pages i-xii
The Funding of Academic Research in the U.S. (Yonghong Wu)....Pages 1-10
Geographical Concentration of Funding of Academic Research (Yonghong Wu)....Pages 11-28
Public Policy Response to Concentration of Academic Research (Yonghong Wu)....Pages 29-41
Assessment of Scientists’ Research Capacity (Yonghong Wu)....Pages 43-56
Multi-level Assessment on EPSCoR (Yonghong Wu)....Pages 57-72
EPSCoR Programs and Research Facilities (Yonghong Wu)....Pages 73-87
The Future of EPSCoR (Yonghong Wu)....Pages 89-95
Back Matter ....Pages 97-141


SPRINGER BRIEFS IN POLITICAL SCIENCE

Yonghong Wu

America’s Leaning Ivory Tower: The Measurement of and Response to Concentration of Federal Funding for Academic Research

SpringerBriefs in Political Science

More information about this series at http://www.springer.com/series/8871


Yonghong Wu Department of Public Administration University of Illinois at Chicago Chicago, IL, USA

ISSN 2191-5466  ISSN 2191-5474 (electronic)
SpringerBriefs in Political Science
ISBN 978-3-030-18703-3  ISBN 978-3-030-18704-0 (eBook)
https://doi.org/10.1007/978-3-030-18704-0

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

As a scholar in the field of public budgeting and finance, I have a genuine interest in the allocation of government resources. Government allocation decisions affect the operation of various public, private, and not-for-profit organizations and, ultimately, the lives of people in this country. Because elected representatives make budgetary decisions at all levels of government, government resource allocation is basically an outcome of the political process. However, federal resource allocation does not end when Congress passes appropriation bills and the president signs them. Government agencies often need to further allocate appropriated funds to other institutions and individuals. While public budgeting and finance scholars focus on the politics of government budgeting, this post-appropriation allocation has not been studied enough. A few federal agencies, such as the National Science Foundation and the National Institutes of Health, receive billions of dollars in appropriated funds each year to support academic research, and they allocate the funds to higher education institutions and researchers via a competitive selection process. With my bachelor’s degree in applied physics and doctoral training in science and technology policy, I understand the rationales for government funding of academic research and the need for merit-based competition in funding scientific research. However, the competition-based allocation mechanism has led to a substantial concentration of federal funding in high-capacity states, while low-capacity states are largely left underfunded. The uneven distribution of federal funding for academic research has been an important public policy issue for decades. Under the mandate of Congress, some federal agencies launched special programs to address the funding concentration issue in the 1980s and 1990s.
Although the programs have shown some modest effects on research-capacity building through investing in collaboration development and infrastructure improvement, insufficient attention to the institutional environment limits progress toward a more equitable distribution of federal research funding across jurisdictions. I feel obligated to thoroughly investigate the issue of uneven distribution of federal resources and to recommend ways to improve the effectiveness of government efforts in this arena.


The book is the product of my long-term focus and research on this topic. I sincerely hope that this work will have a positive impact on policy-making and program implementation aimed at addressing this funding concentration in the United States. I am indebted to several people who assisted my research on this topic. Julia Melkers was the co-author of my first paper on EPSCoR, which inspired my decade-long interest in the topic. Eric W. Welch generously shared materials and data that were helpful to this work. I also want to thank the external reviewers for their thoughtful comments. Finally, this book is dedicated to my wife, Karen, and my children, Jerry and Cindy. This book would not have been possible without their love and support.

Chicago, USA

Yonghong Wu

Contents

1 The Funding of Academic Research in the U.S. ..... 1
  1.1 Fiscal Federalism in Financing Academic Research ..... 2
  1.2 Trends of Fiscal Federalism in Funding Academic Research ..... 3
  1.3 Substantial Disparity in Federal Funding of Academic S&E Research ..... 5
  References ..... 10

2 Geographical Concentration of Funding of Academic Research ..... 11
  2.1 Measurement of Concentration of Federal Funding of Academic Research in the U.S. ..... 12
  2.2 Causes and Consequences of Uneven Distribution of Federal Funding of Academic R&D ..... 23
  References ..... 27

3 Public Policy Response to Concentration of Academic Research ..... 29
  3.1 History of NSF’s EPSCoR and Similar Programs ..... 29
  3.2 State-Level EPSCoR Coordination and Heterogeneity ..... 33
  3.3 An Evaluative Framework on EPSCoR ..... 38
  References ..... 40

4 Assessment of Scientists’ Research Capacity ..... 43
  4.1 Empirical Test of the Determinants of Individual Research Capacity ..... 44
  4.2 Comparison of Research Capacity Between EPSCoR and Non-EPSCoR States ..... 48
  References ..... 55

5 Multi-level Assessment on EPSCoR ..... 57
  5.1 The Changing Top 100 Recipients of Federal Academic R&D Support ..... 58
  5.2 Macro-level Assessment of Concentration of Federal Funding for Academic Research ..... 59
  5.3 State-Level Assessment of NSF EPSCoR and NIH IDeA ..... 64
  References ..... 72

6 EPSCoR Programs and Research Facilities ..... 73
  6.1 Size, Funding and Density of Academic Research Facilities ..... 74
  6.2 The Impact of EPSCoR on Funding of Research Facilities ..... 81
  References ..... 86

7 The Future of EPSCoR ..... 89
  7.1 Evolving Political Support of EPSCoR ..... 89
  7.2 Strategic Shift: From Infrastructure Improvement to Institutional Innovation ..... 92
  References ..... 95

Appendix ..... 97
List of Figures

Fig. 1.1 Amounts of federally and state/local financed higher education R&D expenditures. Note: The black solid and dashed lines represent the amounts of federally and state/local financed R&D expenditures for all higher education institutions (in billions of constant dollars). The gray solid and dashed lines represent the amounts of federally and state/local financed R&D expenditures for public higher education institutions (in billions of constant dollars) ..... 3
Fig. 1.2 Shares of federally and state/local financed higher education R&D expenditures. Note: The black solid and dashed lines represent the shares of federally and state/local financed R&D expenditures for all higher education institutions (in %). The gray solid and dashed lines represent the shares of federally and state/local financed R&D expenditures for public higher education institutions (in %) ..... 4
Fig. 1.3 State’s share of federal academic R&D support in 2015 ..... 6
Fig. 2.1 State’s share of federal academic R&D support in 1975 ..... 12
Fig. 2.2 State’s share of doctorate recipients in engineering in 2016 ..... 26
Fig. 2.3 State’s share of doctorate recipients in Physical Sciences, Life Sciences, Math and Computer Sciences and Geosciences in 2016 ..... 27
Fig. 3.1 Framework of research capacity and competitiveness ..... 39
Fig. 6.1 State’s share of total space for S&E research in 2003 ..... 75
Fig. 6.2 State’s share of total space for S&E research in 2015 ..... 76
Fig. 6.3 Funds for new construction of research facility by source—all academic institutions. Note: The solid, dashed, and dotted lines represent funds from institutions or other sources, state/local governments, and the federal government (in billions of constant dollars) ..... 77
Fig. 6.4 Funds for repair/renovation of research facility by source—all academic institutions. Note: The solid, dashed, and dotted lines represent funds from institutions or other sources, state/local governments, and the federal government (in billions of constant dollars) ..... 77
Fig. 6.5 Funds for new construction of research facility by source—public academic institutions. Note: The solid, dashed, and dotted lines represent funds from institutions or other sources, state/local governments, and the federal government (in billions of constant dollars) ..... 78
Fig. 6.6 Funds for repair/renovation of research facility by source—public academic institutions. Note: The solid, dashed, and dotted lines represent funds from institutions or other sources, state/local governments, and the federal government (in billions of constant dollars) ..... 78

List of Tables

Table 2.1 Descriptive statistics for state’s share of federal academic R&D support by year ..... 13
Table 2.2 Descriptive statistics for state’s share of NSF academic R&D support by year ..... 14
Table 2.3 Concentration index for 50 states ..... 17
Table 2.4 Concentration index for groups of 5-state (G5) ..... 19
Table 2.5 Concentration index for groups of 10-state (G10) ..... 20
Table 3.1 EPSCoR jurisdictions and their years of entry ..... 31
Table 3.2 NSF EPSCoR funding by year ..... 32
Table 3.3 Major initial NSF EPSCoR awards in five states ..... 36
Table 4.1 Regression analysis of scientists’ research capacity ..... 47
Table 4.2 Characteristics of respondents from EPSCoR versus non-EPSCoR states ..... 50
Table 4.3 Comparison of collaborative networks (1) ..... 50
Table 4.4 Comparison of collaborative networks (2) ..... 52
Table 4.5 Comparison of satisfaction with work environment ..... 53
Table 4.6 Comparison of grant-seeking performance ..... 54
Table 5.1 Distribution of top 100 academic institutions receiving federal R&D support ..... 60
Table 5.2 Macro analysis of NSF EPSCoR ..... 62
Table 5.3 Macro analysis of NIH IDeA ..... 63
Table 5.4 State-level analysis of NSF EPSCoR ..... 67
Table 5.5 State-level analysis of NIH IDeA ..... 68
Table 5.6 NSF EPSCoR effects by state ..... 70
Table 5.7 NIH IDeA effects by state ..... 71
Table 6.1 Comparison of research density—all academic institutions ..... 80
Table 6.2 Comparison of research density—public academic institutions ..... 80
Table 6.3 Regression analysis of funding of both new construction and repair/renovation projects ..... 84
Table 6.4 Regression analysis of funding of new construction projects ..... 85
Table A.1 Top 100 academic institutions receiving federal R&D support in 2015 ..... 97
Table A.2 Top 100 academic institutions receiving federal R&D support in 2010 ..... 102
Table A.3 Top 100 academic institutions receiving federal R&D support in 2005 ..... 108
Table A.4 Top 100 academic institutions receiving federal R&D support in 2000 ..... 112
Table A.5 Top 100 academic institutions receiving federal R&D support in 1995 ..... 117
Table A.6 Top 100 academic institutions receiving federal R&D support in 1990 ..... 122
Table A.7 Top 100 academic institutions receiving federal R&D support in 1985 ..... 127
Table A.8 Top 100 academic institutions receiving federal R&D support in 1980 ..... 132
Table A.9 Top 100 academic institutions receiving federal R&D support in 1975 ..... 136

Chapter 1

The Funding of Academic Research in the U.S.

Like the proverbial ivory tower, academic institutions in the U.S. form a hierarchical array: public and private universities; doctoral, master’s, and baccalaureate universities and colleges; universities that belong to the Ivy League versus those that do not, with varying academic reputations and funding sources. Diverse academic institutions may be necessary to meet the various needs of people and society as a whole. However, the geographic concentration of research-intensive higher education institutions creates potential issues of inequity. California and Massachusetts have more top-tier, prestigious academic institutions than other states, creating an uneven distribution of educational benefits and a substantial concentration of federal funding of academic research. This book describes the funding disparity aspect of higher education in the U.S. and assesses the effectiveness of federal programs tackling this issue. The main goal of this chapter is to introduce readers to the broad context of government funding of academic research. In the American science policy arena, there have been continuous debates over peer-review versus equity-based approaches to the allocation of federal research funding. The peer-review system has been the primary mechanism for distributing federal government funding for research among universities since shortly after World War II. Peer review is intended to ensure the production of the best science by funding the most capable researchers in the country. As a result, federal research funding has been concentrated in “high-capacity” states where many of the most capable researchers reside, while a large number of “low-capacity” states have received substantially less research funding from federal agencies. In fiscal year 2016, all higher education institutions in the U.S. spent a total of $67.7 billion on the conduct of research and development (R&D) in science and engineering (S&E).
Public institutions spent $44.2 billion, about 65.4% of total academic R&D expenditures in that year. The money spent by higher education institutions came from different sources. As the primary sponsor of academic research, the federal government provided $37.7 billion, or about 55.7%, of total R&D spending within universities in 2016 ($23.1 billion of which went to public universities). State governments contributed $3.7 billion, about 5.5% of total academic R&D expenditures in 2016 ($3.4 billion of it to universities under their control). Other sources included the business sector, higher education institutions themselves, and other organizations, accounting for 6.0%, 23.5%, and 9.3%, respectively.

The most recent data on academic R&D expenditures reveal important features of financing the conduct of S&E research in American colleges and universities. Scientists and engineers primarily rely on government funding for research, and the federal government plays a dominant role by providing a much larger share of financial support than state governments. Federal dominance in this area is a result of investment that began shortly after World War II, when Vannevar Bush recommended that the federal government take the lead in “promoting the flow of new scientific knowledge and the development of scientific talent” (Bush, 1945, p. 4). The involvement of state governments came later, as states started enhancing their efforts to assert a greater role in the formulation and administration of national science and technology policies beginning in the early 1980s (Feller, 1997).
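As a quick sanity check, the FY2016 source shares quoted above can be recomputed from the dollar amounts. The following minimal Python sketch uses only the figures given in the text; the business, institutional, and other-source amounts are back-computed from their stated percentage shares, so it is illustrative only.

```python
# Minimal sketch: recomputing the FY2016 academic R&D funding shares
# from the dollar figures quoted in the text (billions of dollars).
# Business/institutional/other amounts are back-computed from their
# stated shares, so this is only an arithmetic check, not new data.
TOTAL = 67.7  # total academic S&E R&D expenditures, FY2016

amounts = {
    "federal": 37.7,
    "state/local": 3.7,
    "business": 0.060 * TOTAL,
    "institutions": 0.235 * TOTAL,
    "other": 0.093 * TOTAL,
}

shares = {src: round(100 * amt / TOTAL, 1) for src, amt in amounts.items()}
print(shares["federal"], shares["state/local"])  # 55.7 5.5
```

The recomputed federal and state shares match the 55.7% and 5.5% figures cited in the chapter.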

1.1 Fiscal Federalism in Financing Academic Research

The term federalism describes a system of government in which the power to govern is constitutionally divided between a central (federal) governing authority and constituent political units (such as states). Scholars have developed different theoretical models (dual, cooperative, and coercive federalism) that are dominant at different times and applicable to different policy arenas. For instance, the cooperative federalism model posits that federal and state governments interact cooperatively and collectively to solve common problems. Federalism in funding academic research refers to the funding responsibility for S&E research shared by federal and state governments. Federalism in science policy has been cooperative over time. Research funding comes from several sources, including federal and state governments. The federal government does not direct state governments to support or avoid any particular type of research. Funding decisions at the state level are independent of the federal government, except for the cost-sharing requirements of federal research grants. For instance, while the Bush administration forbade the use of federal funds for research involving the destruction or creation of embryos, some states could still step in to advance this vital research without federal preemption.

Fiscal federalism is based on the idea that a public service should be financed in such a way that the benefits are confined to the jurisdiction financing the service. This no-spillover arrangement is supposed to achieve and maintain efficient decision-making at different levels of government. The application of this theory to scientific research means that federal and state governments share responsibility in proportion to the expected scope of benefit from specific research projects. The federal government should support research projects that benefit the entire country (such as basic research) or meet national needs (such as mission-driven research). On the other hand, it is the states’ responsibility to support research projects with outputs benefiting primarily their own jurisdictions.



Fig. 1.1 Amounts of federally and state/local financed higher education R&D expenditures. Note The black solid and dashed lines represent the amounts of federally and state/local financed R&D expenditures for all higher education institutions (in billions constant dollars). The gray solid and dashed lines represent the amounts of federally and state/local financed R&D expenditures for public higher education institutions (in billions constant dollars)

1.2 Trends of Fiscal Federalism in Funding Academic Research

In order to understand how fiscal federalism works, I examine the pattern and trends of academic research funding over time in terms of annual shares of funding for academic research by federal and state governments. The data source is the National Science Foundation (NSF)’s Survey of Research and Development Expenditures at Universities and Colleges. The survey has been collecting data since 1972 on separately budgeted R&D expenditures within academic institutions by source of funds, including federal government, state/local governments, businesses, higher education institutions, and other sources (NSF, 2011). The data are collected directly from universities, using consistent, uniform definitions and collection techniques. Figures 1.1 and 1.2 show the main pattern and time trends of federally and state/local financed higher education R&D expenditures for all and for public higher education institutions, respectively. The actual amounts, converted to constant dollars using the GDP implicit price deflator with base year 2009 (as of July 2017), are presented in Fig. 1.1, and the shares as percentages of total expenditures are in Fig. 1.2. The two figures show that the federal government financed increasing amounts of academic R&D during 1972–2016, whereas state government support was fairly stable. The gap between federally financed R&D expenditures for all and public institutions indicates that federal agencies also provide substantial amounts of financial



Fig. 1.2 Shares of federally and state/local financed higher education R&D expenditures. Note The black solid and dashed lines represent the shares of federally and state/local financed R&D expenditures for all higher education institutions (in %). The gray solid and dashed lines represent the shares of federally and state/local financed R&D expenditures for public higher education institutions (in %)

support to private universities. The ratio of federally financed R&D expenditures at public versus private institutions has been fairly stable, moving from about 1.3 in the 1970s and 1980s to 1.5–1.6 in the 1990s and afterward. Meanwhile, state/local financial support flows overwhelmingly to public institutions. The two figures also demonstrate that: (1) government funding of academic R&D has been dominant among all sources, with the government share of total academic R&D expenditures between 61% and 79% for all higher education institutions and 60–78% for public institutions; (2) government funding of academic R&D has come primarily from the federal government, with the federal share between 56% and 69% for all higher education institutions and 52–63% for public institutions; (3) state governments play a relatively minor role, providing only 5–10% and 7–15% of academic R&D expenditures for all and for public higher education institutions, respectively. Federal dominance has been fairly stable over the years. At the start of this period, the federal shares of academic R&D expenditures were about 68% and 63% for all and public higher education institutions, respectively. The two shares declined slightly over time to 58% and 52% in the year 2000, and increased modestly during 2000–2005 and 2009–2011. The state share of academic R&D expenditures for all higher education institutions has decreased almost continuously from slightly above 10% in the early 1970s to about 5.5% at the end of the period. Although state/local financed R&D expenditures go mostly to public institutions, the state share of academic R&D expenditures for public institutions has declined substantially, dropping from 14% in the early 1970s to 7.7% in 2016.
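The constant-dollar conversion used for these figures (deflating nominal dollars by the GDP implicit price deflator, base year 2009) can be sketched as follows. The 2016 deflator value below is an illustrative placeholder, not the official BEA series.

```python
# Sketch of the constant-dollar conversion described in the text:
# deflate nominal amounts by a GDP implicit price deflator with
# base year 2009 (index = 100). The 2016 value here is a rough
# illustrative placeholder, not the official BEA number.
DEFLATOR = {2009: 100.0, 2016: 111.4}

def to_constant_2009_dollars(nominal: float, year: int) -> float:
    """Convert a nominal dollar amount to constant 2009 dollars."""
    return nominal * 100.0 / DEFLATOR[year]

# $67.7B nominal in 2016 is roughly $60.8B in 2009 dollars
# under this illustrative deflator value.
print(round(to_constant_2009_dollars(67.7, 2016), 1))
```

With the actual BEA deflator series substituted in, the same function reproduces the constant-dollar trends plotted in Figs. 1.1 and 1.2.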

1.3 Substantial Disparity in Federal Funding of Academic S&E Research

In addition to the overall pattern of shared funding by federal and state governments, it is important to examine the distribution of federal funding of academic R&D. The pursuit of efficiency in the conduct of academic research mandates the use of peer review in allocating federal support in this arena. However, the dominance of government, particularly the federal government, legitimizes the concern about inequitable distribution of federal resources across jurisdictions.

Figure 1.3 presents each state’s share of federal R&D support to academic institutions in 2015. Federal R&D support refers to federal obligations for academic R&D in S&E fields. It covers all direct, indirect, incidental, or related costs resulting from or necessary to the performance of R&D by private individuals and organizations under grant, contract, or cooperative agreement, as well as demonstration projects and research equipment (NSF, 2015). The data source is the NSF’s Survey of Federal Science and Engineering Support to Universities, Colleges, and Nonprofit Institutions. The survey includes all academic institutions that receive funding from federal agencies that finance federal R&D obligations to the academic sector. The data are collected directly from federal agencies.

The disparity in federal academic R&D funding is substantial. The top ten states together received over 60% of total federal R&D support to academic institutions in 2015, while the bottom ten states received less than 2% of the total federal support in that year. The disparity in per capita terms is less striking, as the top ten states account for about 48% of the total U.S. population and only about 4.5% of the population resides in the bottom ten states (population figures are from the 2010 Census: https://www.census.gov/2010census/data/). However, the disparity remains quite substantial when I compare individual states. The state of Maryland, with about 1.9% of the U.S. population, received 6.4% of federal R&D support; the state of West Virginia, with about 0.9% of the U.S. population, received only 0.2% of federal R&D support.

The federal pursuit of efficiency in the conduct of scientific research, coupled with the uneven distribution of research capacity, plays a leading role in this funding disparity. Competition for federal S&E research funding is primarily merit-based via peer review. The concentration of academic research in a few states is a natural result of the agglomeration of prestigious research universities in those states. Aside from


Fig. 1.3 State’s share of federal academic R&D support in 2015

getting more or less federal research dollars, education and training opportunities for college students in S&E fields are affected, and spillover benefits from federally funded research projects, such as patents with potential commercial prospects and the incubation of new industries and products, become similarly concentrated. Federal dominance coupled with the uneven distribution of federal funds raises legitimate concerns about how federal academic R&D dollars should be distributed among individual states.

The substantially uneven distribution of federal funding for academic research has been an important public policy issue for decades. Under the mandates of Congress, NSF launched the first Experimental Program to Stimulate Competitive Research (EPSCoR) to support low-capacity jurisdictions in 1979, and several other federal agencies established similar funding programs in the 1990s. The number of EPSCoR-eligible jurisdictions increased from 5 in 1980 to 31 in 2012. Decades of federal EPSCoR efforts have not delivered the expected results. As the America COMPETES Reauthorization Act of 2010 states, “National Science Foundation funding remains highly concentrated, with 27 states and two territories, taken together, receiving only about 10% of all NSF research funding.” This book is intended to provide a comprehensive assessment of the effectiveness of the EPSCoR programs in mitigating undue concentration of academic research funding across states. Although the policy goals have expanded over time, the improvement of research capacity and the enhancement of research competitiveness have remained the primary goals of EPSCoR. The assessment focuses on scientists’ research capacity as measured by their grant-seeking performance, and on jurisdictional research competitiveness as measured by success in winning federal academic R&D support.

There is a growing academic and non-academic literature on issues of funding academic research and related policy initiatives, most of it in journal articles and in reports of think tanks, consulting firms, and organizations such as the National Academies and the American Association for the Advancement of Science (AAAS). A 2013 National Academies report, based on a comprehensive examination of EPSCoR’s evolving mission, program operations, and program evaluation, makes a number of recommendations to improve the effectiveness of EPSCoR programs (National Academies, 2013). Two years later, the Science and Technology Policy Institute published another report on EPSCoR (Zuckerman et al., 2015). The research team collected data from a variety of relevant sources and conducted descriptive comparisons between EPSCoR and non-EPSCoR jurisdictions. The two reports provide valuable operational and management details of the EPSCoR programs. However, they are not program evaluations by academic standards. With a focus on the issue of inequity, this book takes a broader view of government funding of academic research by quantifying the degree of concentration in this area and contributing a multi-level, rigorous assessment of government efforts tackling this issue.
While EPSCoR efforts have shown some effects on research capacity-building through investments in collaboration development and infrastructure improvement, insufficient attention to the institutional environment limits progress toward a more equitable distribution of federal research funding. In addition, the size of research facilities relative to academic R&D expenditures is significantly larger in EPSCoR states, indicating over-investment in physical research infrastructure and inefficiency in the conduct of research in EPSCoR institutions. Our analytical results indicate that it is time to shift the focus of EPSCoR from research infrastructure improvement and collaboration development to innovating institutional environments that recruit and motivate scientists.

This book incorporates extensive quantitative description and regression analyses. Data from authoritative sources and graphical tools are employed to illustrate the extent of concentration of federal funding of academic research in the U.S. Panel regression techniques are used to test hypotheses about how EPSCoR programs have affected various measures of research capacity and competitiveness of EPSCoR jurisdictions. Since research universities in the EPSCoR states are predominantly public universities, the book also examines state government investments in the construction and renovation of physical research infrastructure in EPSCoR jurisdictions.

After the introduction of the broad context of government funding of academic research in America, Chap. 2 focuses on the measurement of jurisdictional concentration of federal funding of academic research in the U.S. Beyond the comparison of states' shares of total federal obligations for academic R&D, conventional descriptive statistics such as the mean and standard deviation and a newly developed concentration index are used to describe the extent of concentration of academic research funding from several federal agencies. In particular, several group-based concentration indices are introduced to succinctly summarize the level of jurisdictional concentration of federal obligations for academic R&D. The group-based concentration index has the advantage of avoiding false indications of policy effects. The chapter concludes with a brief discussion of the equity implications of the uneven distribution of federal research funding by showing that academic R&D funding is closely tied to a state's educational opportunities and economic growth.

Chapter 3 describes the federal government's response to the uneven distribution of academic research funding. The chapter briefly reviews the history of federal EPSCoR programs, particularly NSF's EPSCoR and the National Institutes of Health (NIH)'s Institutional Development Award (IDeA), and describes the evolving policy goals and programmatic features of the programs and capacity-building activities in higher education institutions in the eligible states. State-level EPSCoR coordination and heterogeneity are discussed as well. I also develop an evaluative framework of research capacity and competitiveness as a conceptual guide to the subsequent multi-level assessment of EPSCoR effects on research capacity at the individual level and research competitiveness at the jurisdiction level. The framework encompasses talent, collaboration, support, and motivation as four key determinants of individual research capacity, because the ability to conduct scientific research relies not only on the scientific and collaborative abilities of researchers but also on their access to necessary facilities and equipment and on encouraging institutional and work environments. The first part of Chap. 4 is an empirical test of the evaluative framework illustrated in Chap. 3.
Using a recent data set of a sample of academic scientists, I develop measures of talent, collaboration, support, and motivation, and examine how these measures affect scientists' research capacity as demonstrated by their grant-seeking performance. The focus on scientists' grant-seeking performance is closely related to the primary goal of EPSCoR in the pursuit of a more equitable distribution of federal research funding. After the evaluative framework is empirically validated, I assess EPSCoR efforts in building scientists' research capacity in the eligible jurisdictions by comparing the mean values of the four key determinants of individual research capacity between scientists in EPSCoR states and those in other states. The analysis-of-variance results suggest that individual scientists in EPSCoR states do not show significant weakness in research talent, collaboration, and motivation, and they seem to perform as well in grant-seeking as their counterparts in non-EPSCoR states. But the results also reveal important frustrations among scientists in EPSCoR states that EPSCoR initiatives might address and mitigate.

Chapter 5 provides a comprehensive and updated assessment of the effectiveness of EPSCoR beyond the individual level. It begins with a descriptive analysis of the mobility of the top 100 academic institutions in receipt of federal R&D support from 1975 to 2015. This institution-level analysis reveals the dominance of non-EPSCoR institutions among the top competitors for federal funding of academic R&D and a modest gain by academic institutions in EPSCoR states. It is followed by a macro-level assessment showing that the two largest EPSCoR programs—NSF EPSCoR and NIH IDeA—have been effective in reducing the concentration index of the respective agency's support of academic research, but the magnitude of the effects is small. Two additional state-level assessments find quite modest effects of NSF EPSCoR and NIH IDeA on a state's shares of NSF and NIH obligations for academic R&D, respectively. In consideration of the heterogeneity of state EPSCoR programs, supplemental analysis is also performed on the share of NSF or NIH funding of academic R&D for each state to identify the varying effects of EPSCoR across the eligible states. These assessments are complementary, and collectively they provide solid empirical evidence on the effects of EPSCoR on various measures of research capacity and competitiveness.

Chapter 6 focuses on the construction and renovation of research infrastructure in higher education institutions. Research infrastructure is a critical pillar of academic research capacity and has been a primary focus of EPSCoR since the early 2000s. I first develop a measure of research density by comparing the R&D expenditures made by academic institutions within a state with the size of its academic research facilities. The analysis shows that EPSCoR states have a larger size of research facilities relative to their academic R&D expenditures than non-EPSCoR states, indicating that EPSCoR institutions have likely over-invested resources in physical research infrastructure and do not utilize research facilities as efficiently as their counterparts in non-EPSCoR states. The chapter also demonstrates that state governments have been playing a more important role than the federal government in the funding of research facilities.
The empirical evidence furthermore shows that EPSCoR state governments do not invest significantly more funds in research facilities than non-EPSCoR states. I conclude in Chap. 7 with a synthesis of the analyses and a discussion of the implications for the future of EPSCoR and similar efforts to address the concentration of federal funding for academic research. Although EPSCoR efforts have been effective in building scientists' research capacity, the limited effects at the institutional, state, and national levels indicate the need for program improvement. Our empirical evidence suggests that scientists are significantly dissatisfied with institutional environments in the EPSCoR states, and this may limit progress toward a more equitable distribution of federal research funding at the institution or state level. I also find evidence of redundancy and inefficiency in the construction and utilization of research facilities in the EPSCoR states. The book therefore calls for a shift in EPSCoR strategy from research collaboration and infrastructure to innovating and improving institutional environments that help the recruitment, retention, and motivation of S&E research talent. The chapter also describes evolving political support for EPSCoR, and makes additional recommendations to improve EPSCoR's effectiveness.


References

Bush, V. (1945). Science, the endless frontier: A report to the President. Washington, DC: U.S. Government Printing Office.
Feller, I. (1997). Federal and state government roles in science and technology. Economic Development Quarterly, 11(4), 283–295.
National Academies. (2013). The experimental program to stimulate competitive research. Washington, DC: The National Academies Press.
National Science Foundation. (2011). Academic research and development expenditures: Fiscal year 2009 (NSF 11-313). Arlington, VA: National Science Foundation.
National Science Foundation. (2015). Federal science and engineering support to universities, colleges, and nonprofit institutions: FY 2013 (NSF 15-327). Arlington, VA: National Science Foundation.
Zuckerman, B. L., et al. (2015). Evaluation of the National Science Foundation's Experimental Program to Stimulate Competitive Research (EPSCoR): Final report. IDA Paper P-522. Science and Technology Policy Institute.

Chapter 2

Geographical Concentration of Funding of Academic Research

Building upon the context outlined in Chap. 1, this chapter illustrates the concentration of federal funding for academic research in the U.S. It also explores the underlying causes for and likely consequences of the uneven geographical distribution of federal funding of R&D in the higher education sector. This uneven pattern of distribution has persisted for a long time. Similar to Fig. 1.3, Fig. 2.1 shows that the shares of federal R&D funding received by academic institutions in the 50 states differed substantially in 1975. Together, the figures show that federal support of academic R&D is concentrated in a few states, and a large number of the 50 states have received minimal proportions. The top ten and the bottom ten states largely overlap, even with 40 years between the data in the figures. California, New York, Maryland, Pennsylvania, Massachusetts, Texas, Illinois, and Michigan are among the top ten states in both 1975 and 2015. Washington and Ohio only appear in the top-ten list of 1975, whereas North Carolina and Georgia make the list in 2015. Although four states get in and out of the top-ten list over the period 1975–2015, the change is not dramatic. For instance, Washington and Ohio dropped from 8th and 10th in 1975 to 13th and 11th in 2015. North Carolina and Georgia moved up from 12th and 19th in 1975 to 7th and 10th in 2015. In other words, the top winners of federal academic R&D funding are virtually the same in 1975 and 2015. Conversely, North Dakota, Nevada, Arkansas, Idaho, South Dakota, West Virginia, Wyoming, and Maine are in the bottom ten states in both 1975 and 2015. Montana and Delaware were among the bottom ten states in 1975 (43rd and 44th) and moved up to 39th and 40th in 2015. Alaska and Vermont, on the other hand, were 35th and 38th in 1975, but dropped to 41st and 42nd in 2015. It seems that the winners and losers in receiving federal academic R&D dollars in the U.S. have stayed essentially the same for 40 years.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 Y. Wu, America’s Leaning Ivory Tower, SpringerBriefs in Political Science, https://doi.org/10.1007/978-3-030-18704-0_2



Fig. 2.1 State’s share of federal academic R&D support in 1975

2.1 Measurement of Concentration of Federal Funding of Academic Research in the U.S.

We can use one graph to show the disparity of federal funding of academic R&D within a particular year, but the 45 such figures for the years 1971–2015 are too numerous to reveal the pattern of change from year to year. To present a clearer picture of funding over time, I compute common descriptive statistics (minimum, maximum, mean, and standard deviation) of state shares of federal funding for academic R&D in each year. Table 2.1 presents the statistics for the share of federal academic R&D support by year from 1971 to 2015. Because NSF has been an important sponsor of academic research, I also present the statistics for the share of NSF academic R&D support by year in Table 2.2.
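As a concrete sketch (not code from the book), the per-year statistics can be computed from long-format records of state shares. The records below are hypothetical illustrative values, not the actual federal obligation data:

```python
from statistics import mean, stdev

# Hypothetical (year, state, share-in-percent) records; the book's tables
# are built from obligation data covering all 50 states in each year.
records = [
    (1975, "CA", 14.98), (1975, "NY", 9.80), (1975, "WY", 0.11),
    (1976, "CA", 15.47), (1976, "NY", 9.60), (1976, "WY", 0.12),
]

def describe_by_year(records):
    """Per-year count, mean, standard deviation, minimum, and maximum of shares."""
    stats = {}
    for year in sorted({y for y, _, _ in records}):
        shares = [s for y, _, s in records if y == year]
        stats[year] = {
            "n": len(shares),
            "mean": mean(shares),
            "sd": stdev(shares),
            "min": min(shares),
            "max": max(shares),
        }
    return stats
```

With 50 shares per year, this reproduces the layout of Tables 2.1 and 2.2.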


Table 2.1 Descriptive statistics for state's share of federal academic R&D support by year

Year   Number of states   Mean (%)   Standard deviation (%)   Minimum (%)   Maximum (%)
1971   50                 1.97       2.92                     0.09          14.69
1972   50                 1.97       2.91                     0.09          14.57
1973   50                 1.97       2.94                     0.08          14.92
1974   50                 1.97       2.86                     0.10          15.47
1975   50                 1.97       2.84                     0.11          14.98
1976   50                 1.97       2.84                     0.12          15.47
1977   50                 1.97       2.78                     0.10          14.85
1978   50                 1.97       2.74                     0.11          13.90
1979   50                 1.98       2.78                     0.10          14.27
1980   50                 1.98       2.77                     0.11          14.13
1981   50                 1.98       2.81                     0.11          13.65
1982   50                 1.98       2.78                     0.11          13.76
1983   50                 1.97       2.85                     0.08          14.37
1984   50                 1.97       2.82                     0.09          14.51
1985   50                 1.97       2.81                     0.08          14.47
1986   50                 1.97       2.81                     0.08          14.70
1987   50                 1.97       2.78                     0.07          14.54
1988   50                 1.97       2.76                     0.06          14.31
1989   50                 1.97       2.74                     0.08          14.69
1990   50                 1.97       2.71                     0.06          14.52
1991   50                 1.97       2.66                     0.07          14.32
1992   50                 1.97       2.68                     0.07          14.71
1993   50                 1.97       2.60                     0.09          13.87
1994   50                 1.97       2.63                     0.09          14.27
1995   50                 1.96       2.60                     0.08          14.28
1996   50                 1.96       2.63                     0.07          14.58
1997   50                 1.96       2.60                     0.10          14.37
1998   50                 1.97       2.64                     0.06          14.76
1999   50                 1.96       2.61                     0.06          14.42
2000   50                 1.97       2.59                     0.12          14.53
2001   50                 1.97       2.53                     0.12          13.88
2002   50                 1.96       2.53                     0.09          13.89
2003   50                 1.97       2.53                     0.10          13.98
2004   50                 1.98       2.59                     0.10          14.55
2005   50                 1.98       2.56                     0.09          14.38
2006   50                 1.98       2.52                     0.09          13.78
2007   50                 1.98       2.52                     0.11          13.80
2008   50                 1.97       2.51                     0.10          13.83
2009   50                 1.97       2.46                     0.10          13.26
2010   50                 1.97       2.53                     0.12          13.89
2011   50                 1.97       2.55                     0.09          14.02
2012   50                 1.97       2.60                     0.10          14.29
2013   50                 1.96       2.60                     0.10          14.37
2014   50                 1.96       2.59                     0.10          14.17
2015   50                 1.96       2.58                     0.11          14.21

Table 2.2 Descriptive statistics for state's share of NSF academic R&D support by year

Year   Number of states   Mean (%)   Standard deviation (%)   Minimum (%)   Maximum (%)
1971   50                 1.99       3.25                     0.02          17.89
1972   50                 1.99       3.49                     0.01          19.05
1973   50                 1.99       3.50                     0.02          19.90
1974   50                 1.99       3.53                     0.03          20.25
1975   50                 1.99       3.37                     0.03          18.04
1976   50                 1.99       3.58                     0.03          20.84
1977   50                 1.99       3.46                     0.05          19.56
1978   50                 1.99       3.49                     0.04          19.68
1979   50                 1.99       3.46                     0.06          19.41
1980   50                 1.99       3.46                     0.04          19.50
1981   50                 1.99       3.36                     0.04          18.93
1982   50                 1.99       3.43                     0.04          18.81
1983   50                 1.99       3.31                     0.03          17.85
1984   50                 1.99       3.23                     0.04          16.82
1985   50                 1.99       3.23                     0.06          16.67
1986   50                 1.99       3.19                     0.04          16.94
1987   50                 1.99       3.23                     0.05          17.62
1988   50                 1.99       3.12                     0.07          16.47
1989   50                 1.98       3.00                     0.04          16.01
1990   50                 1.98       2.92                     0.09          15.27
1991   50                 1.97       2.83                     0.10          15.02
1992   50                 1.98       2.81                     0.07          15.35
1993   50                 1.98       2.71                     0.05          14.25
1994   50                 1.98       2.83                     0.02          14.35
1995   50                 1.99       2.80                     0.01          14.73
1996   50                 1.98       2.68                     0.09          14.05
1997   50                 1.98       2.75                     0.01          15.16
1998   50                 1.99       2.82                     0.01          16.22
1999   50                 1.99       2.78                     0.01          16.60
2000   50                 1.99       2.85                     0.00          17.23
2001   50                 1.99       2.71                     0.01          15.56
2002   50                 1.99       2.86                     0.01          16.98
2003   50                 1.99       2.89                     0.01          17.13
2004   50                 1.99       2.88                     0.11          17.27
2005   50                 1.99       2.87                     0.08          17.34
2006   50                 1.99       2.75                     0.12          16.54
2007   50                 1.98       2.72                     0.09          16.07
2008   50                 1.96       2.69                     0.10          15.47
2009   50                 1.92       2.43                     0.12          13.97
2010   50                 1.94       2.61                     0.14          15.54
2011   50                 1.94       2.50                     0.15          13.79
2012   50                 1.93       2.49                     0.10          14.12
2013   50                 1.92       2.41                     0.15          13.42
2014   50                 1.92       2.41                     0.18          13.13
2015   50                 1.95       2.41                     0.17          13.15

Table 2.1 shows that the average value of states' shares of federal academic research funding is fairly stable over time, fluctuating within a quite narrow range, from 1.96 to 1.98%. Given the nearly constant mean values, we can focus on the standard deviation as a reasonable measure of dispersion of the share of federal funding of academic research. The trend of the standard deviation is clearly downward, even though the decrements are modest: it reached its maximum of 2.94% in 1973, declined almost continuously to 2.46% in 2009, and rose slightly afterwards. As the overall disparity in states' shares of federal funding declined, the concentration of federal support of academic research has been tapering over time, albeit at quite a slow pace. Table 2.2 shows a similar decline of dispersion in states' shares of NSF funding for academic research. The standard deviation was at its peak in 1976 (3.58%) and


dropped to 2.43% in 2009, then rose briefly and declined again to 2.41% after 2012. Similar to total federal support of academic R&D, the mean shares of NSF support are fairly constant in this period with the exception of some minor sliding in 2009–2014. The combination of mean and standard deviation is helpful to describe the distribution of an important variable. The standard deviation is a common measure of overall deviations of individual observations from the mean. A smaller standard deviation indicates that the individual observations converge to the mean. However, it is not particularly useful in describing the degree of geographic concentration of federal funding because the computation of standard deviation relies on the mean of the data. In other words, the mean should be constant for standard deviation to be a compatible measure of dispersion over time. I introduce an alternate measure that is not dependent upon the mean value in each year. Rather than calculating the sum of squared deviations from the mean (the formula of standard deviation), I simply square each state’s share of federal academic research funding and sum them up. In one year, 50 squared shares are aggregated to get what I would call the concentration index in that year. The concentration index has several desirable features. First, its range is from 1/50 (equal distribution of federal funding) to 1 (maximum concentration of federal funding). The index takes the value 1/50 only when all 50 states get equal shares of federal academic research funding. In the scenario of maximum concentration, all federal funding goes to one state and the other 49 states receive nothing. Second, when the index moves from the minimum (1/50) to the maximum (1), the degree of concentration escalates. In other words, the index values close to 1 mean high levels of concentration, whereas the values close to 1/50 represent low levels of concentration. 
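A minimal sketch of this construction, which has the same sum-of-squared-shares form as a Herfindahl-type index; the normalization step mirrors the book's use of the 50-state total as the denominator:

```python
def concentration_index(shares):
    """Concentration index: the sum of squared shares of federal funding.

    Ranges from 1/n when all n states receive equal shares
    to 1 when a single state receives all funding.
    """
    total = sum(shares)
    fractions = [s / total for s in shares]  # shares of the 50-state total
    return sum(f * f for f in fractions)

# Floor: equal shares across 50 states -> 1/50 = 0.02.
floor = concentration_index([1.0] * 50)
# Ceiling: one state receives everything -> 1.0.
ceiling = concentration_index([1.0] + [0.0] * 49)
```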
The concentration index provides an opportunity to summarize concisely, with a single numeric value, the degree of geographical concentration of federal support of academic research. The index can serve as a thermostat for policy response, indicating when government should take action because the level of concentration has gone beyond a certain benchmark. It also makes it easier to track the change in concentration of federal funding from year to year. The index can be used in policy assessment and program evaluation because a significant decrement is likely a sign of success for policy initiatives aimed at reducing geographic concentration. I calculate the concentration index using the state shares of federal support of academic R&D for the years 1971–2015.¹ I also calculate the concentration index for support provided by major federal sponsors of academic research such as the Department of Defense (DOD), Department of Energy (DOE), NIH, and NSF. The details are presented in Table 2.3.

¹ I use the total federal support of academic R&D in the 50 states rather than in the entire U.S. as the denominator in the calculation of a state's share of federal support. The two share measures only differ slightly. I make this choice to ensure that the concentration index is exactly 1 in the scenario of the maximum degree of concentration.

Table 2.3 Concentration index for 50 states

Year   Federal   NSF     DOD     DOE     NIH
1971   0.063     0.072   0.090   0.081   0.066
1972   0.063     0.080   0.112   0.083   0.066
1973   0.064     0.080   0.151   0.080   0.066
1974   0.061     0.082   0.082   0.079   0.066
1975   0.061     0.076   0.079   0.072   0.068
1976   0.061     0.084   0.085   0.068   0.065
1977   0.059     0.079   0.076   0.078   0.065
1978   0.058     0.080   0.134   0.083   0.063
1979   0.059     0.079   0.180   0.072   0.065
1980   0.058     0.079   0.133   0.079   0.064
1981   0.060     0.076   0.198   0.088   0.065
1982   0.059     0.078   0.130   0.078   0.065
1983   0.061     0.074   0.114   0.080   0.067
1984   0.060     0.072   0.098   0.076   0.067
1985   0.060     0.072   0.094   0.071   0.066
1986   0.060     0.070   0.098   0.078   0.065
1987   0.059     0.072   0.097   0.075   0.064
1988   0.058     0.068   0.099   0.072   0.064
1989   0.058     0.065   0.081   0.071   0.063
1990   0.057     0.063   0.100   0.064   0.062
1991   0.056     0.060   0.097   0.055   0.062
1992   0.056     0.060   0.099   0.061   0.062
1993   0.054     0.057   0.084   0.060   0.062
1994   0.055     0.060   0.091   0.060   0.061
1995   0.055     0.059   0.084   0.061   0.060
1996   0.055     0.056   0.079   0.058   0.060
1997   0.054     0.058   0.076   0.054   0.059
1998   0.055     0.059   0.072   0.062   0.059
1999   0.055     0.059   0.085   0.061   0.058
2000   0.054     0.060   0.074   0.059   0.058
2001   0.053     0.056   0.071   0.058   0.057
2002   0.053     0.061   0.070   0.057   0.056
2003   0.052     0.061   0.060   0.061   0.056
2004   0.054     0.061   0.066   0.063   0.057
2005   0.053     0.061   0.059   0.057   0.056
2006   0.052     0.058   0.067   0.058   0.057
2007   0.052     0.057   0.065   0.051   0.057
2008   0.052     0.057   0.069   0.047   0.057
2009   0.051     0.051   0.068   0.047   0.056
2010   0.052     0.055   0.079   0.042   0.059
2011   0.053     0.053   0.082   0.057   0.059
2012   0.054     0.053   0.085   0.059   0.059
2013   0.054     0.051   0.086   0.054   0.061
2014   0.054     0.051   0.095   0.081   0.061
2015   0.054     0.050   0.086   0.055   0.060
0.060

The concentration index values are relatively small because they are fairly close to the theoretical minimum of 0.02 (or 1/50). This is due to another feature of the concentration index: the index values become smaller when the number of observations is larger. In this case, the number of observations is 50, so large that each state gets a quite small share of federal funding of academic R&D. Because the squared shares are even smaller than the shares themselves, the sum of the squared shares gives a small concentration index. To demonstrate the impact of the number of observations, I develop different versions of the concentration index by dividing the 50 states into groups of 25, 10, 5, and 2 states. For demonstration purposes, I present only the index values for groups of 5 states (the G5 index) and 10 states (the G10 index). For the G5 index, all 50 states are first ranked by their share of federal academic R&D support in a particular year. The top five states form one group, the next five states form another group, and so on until the bottom five states form the last group, yielding ten five-state groups. For every group, I calculate its aggregate federal academic R&D support as a percentage of total federal support in that year. Each group's percentage share is squared, and the ten squared group shares are summed to obtain the G5 concentration index. The G10 concentration index is calculated in a similar way, except that each group has ten states instead of five. Tables 2.4 and 2.5 present the G5 and G10 concentration indices for each year. Unlike the original concentration index (the G1 index), the group concentration indices possess two additional features. First, because the grouping is based solely on the relative rankings of individual states and state rankings may change from year to year, a particular group (i.e., the top-five or top-ten group) may not include the same states in different years.
For instance, Washington and Ohio are among the top-ten group in 1975 but belong to the group next to the top ten in 2015. North Carolina and Georgia are included in the second-ten group in 1975 but move up to the top-ten group in 2015. It should be noted that the change of state rankings is gradual, and there is no significant change in rankings between two adjacent years. Second, the group indices do not imply geographic agglomeration or clustering in any way. The states in the same group may spread from the east coast (New York and Massachusetts) to the west coast (California and Washington), so there is no regional concentration of federal funding of academic research. In this book, when the term geographic concentration is used, it means concentration of federal resources in certain geographically unrelated states or jurisdictions, with no implication of geographic clustering.

Table 2.4 Concentration index for groups of 5 states (G5)

Year   Federal   NSF     DOD     DOE     NIH
1971   0.277     0.308   0.354   0.364   0.281
1972   0.273     0.339   0.376   0.369   0.282
1973   0.276     0.334   0.448   0.359   0.281
1974   0.264     0.332   0.331   0.350   0.284
1975   0.264     0.331   0.337   0.324   0.287
1976   0.261     0.337   0.327   0.307   0.283
1977   0.257     0.323   0.323   0.325   0.281
1978   0.263     0.326   0.427   0.326   0.281
1979   0.266     0.326   0.468   0.289   0.280
1980   0.265     0.325   0.445   0.312   0.281
1981   0.276     0.318   0.497   0.346   0.285
1982   0.271     0.329   0.429   0.335   0.287
1983   0.278     0.315   0.423   0.337   0.293
1984   0.268     0.309   0.392   0.322   0.292
1985   0.267     0.313   0.395   0.303   0.289
1986   0.269     0.309   0.412   0.323   0.286
1987   0.264     0.314   0.394   0.317   0.279
1988   0.264     0.301   0.394   0.310   0.278
1989   0.259     0.287   0.354   0.307   0.275
1990   0.257     0.280   0.405   0.275   0.272
1991   0.253     0.270   0.401   0.236   0.272
1992   0.252     0.264   0.408   0.251   0.274
1993   0.248     0.258   0.365   0.261   0.272
1994   0.248     0.273   0.353   0.260   0.269
1995   0.245     0.265   0.358   0.261   0.265
1996   0.247     0.253   0.327   0.246   0.267
1997   0.243     0.252   0.336   0.236   0.266
1998   0.245     0.256   0.317   0.260   0.267
1999   0.245     0.247   0.357   0.258   0.264
2000   0.240     0.251   0.325   0.247   0.261
2001   0.236     0.247   0.325   0.249   0.256
2002   0.236     0.257   0.313   0.248   0.253
2003   0.235     0.259   0.283   0.265   0.251
2004   0.237     0.257   0.295   0.274   0.256
2005   0.233     0.254   0.262   0.255   0.253
2006   0.233     0.241   0.284   0.254   0.255
2007   0.233     0.242   0.291   0.235   0.255
2008   0.232     0.245   0.308   0.215   0.255
2009   0.229     0.224   0.294   0.221   0.252
2010   0.234     0.234   0.309   0.196   0.260
2011   0.238     0.237   0.332   0.257   0.256
2012   0.243     0.229   0.359   0.266   0.258
2013   0.243     0.229   0.360   0.245   0.261
2014   0.245     0.232   0.366   0.321   0.260
2015   0.241     0.225   0.362   0.261   0.259

Table 2.5 Concentration index for groups of 10 states (G10)

Year   Federal   NSF     DOD     DOE     NIH
1971   0.458     0.499   0.559   0.582   0.479
1972   0.453     0.524   0.575   0.582   0.480
1973   0.454     0.521   0.629   0.580   0.478
1974   0.443     0.521   0.540   0.571   0.480
1975   0.442     0.514   0.560   0.532   0.481
1976   0.438     0.529   0.543   0.506   0.480
1977   0.433     0.506   0.560   0.513   0.479
1978   0.451     0.507   0.668   0.522   0.479
1979   0.454     0.505   0.682   0.475   0.482
1980   0.453     0.506   0.671   0.495   0.478
1981   0.466     0.500   0.711   0.537   0.485
1982   0.462     0.508   0.661   0.526   0.489
1983   0.470     0.497   0.658   0.520   0.497
1984   0.459     0.493   0.643   0.514   0.497
1985   0.457     0.498   0.641   0.485   0.492
1986   0.461     0.495   0.651   0.523   0.488
1987   0.456     0.495   0.608   0.528   0.482
1988   0.456     0.482   0.625   0.513   0.482
1989   0.449     0.465   0.576   0.511   0.479
1990   0.445     0.457   0.636   0.459   0.474
1991   0.440     0.450   0.614   0.401   0.472
1992   0.440     0.440   0.629   0.417   0.477
1993   0.437     0.433   0.563   0.442   0.476
1994   0.437     0.452   0.546   0.437   0.473
1995   0.435     0.441   0.556   0.432   0.469
1996   0.439     0.435   0.521   0.423   0.473
1997   0.433     0.431   0.525   0.409   0.472
1998   0.433     0.450   0.512   0.436   0.470
1999   0.434     0.419   0.542   0.436   0.468
2000   0.427     0.424   0.508   0.417   0.462
2001   0.424     0.422   0.525   0.426   0.455
2002   0.422     0.434   0.499   0.422   0.450
2003   0.422     0.435   0.472   0.449   0.450
2004   0.426     0.424   0.476   0.462   0.457
2005   0.422     0.425   0.439   0.436   0.456
2006   0.422     0.414   0.463   0.435   0.459
2007   0.425     0.416   0.481   0.413   0.460
2008   0.423     0.430   0.497   0.382   0.458
2009   0.419     0.403   0.494   0.407   0.454
2010   0.424     0.412   0.490   0.360   0.463
2011   0.430     0.417   0.528   0.453   0.461
2012   0.436     0.407   0.561   0.464   0.462
2013   0.437     0.405   0.570   0.431   0.467
2014   0.441     0.407   0.579   0.523   0.462
2015   0.435     0.404   0.555   0.479   0.462

Compared with the original G1 index, the G5 and G10 indices are substantially larger. For instance, the G5 and G10 indices based on states' shares of federal academic research funding range over 0.23–0.28 and 0.42–0.47, respectively, compared with the original range of 0.05–0.06. The concentration index as constructed has a ground value it can never go below and a ceiling value it can never surpass in any circumstance. As discussed before, the ceiling value (or theoretical maximum) of any concentration index is 1, and the ground value (or theoretical minimum) equals the reciprocal of the number of observations. So the ground values of the G1, G5, and G10 indices are the reciprocals of 50, 10, and 5: 0.02, 0.1, and 0.2, respectively.

The concentration index provides a common ground to compare geographic concentration of federal support of academic R&D across agencies. Among the eight federal agencies,² the overall concentration of DOD and NASA funding for academic research is higher than that of the others. The 45-year average G5 indices are 0.57 and 0.52 for DOD and NASA, respectively, larger than the average indices of the funding from the other agencies. This means that the funding from those two agencies is more geographically concentrated than funding from the other six major federal sponsors of academic research. The concentration of DOD and NASA funding is probably more acceptable, given their mission-driven research agendas. The other agencies are not far behind (the 45-year average G5 concentration indices are between 0.46 and 0.51 for five of the six), but DOA's 45-year average G5 concentration index is only 0.26. If public and private academic institutions are considered separately, federal academic research funding is much more concentrated among private institutions than public ones. For the distribution of total federal support of academic R&D during 1971–2015, the average G5 concentration index is 0.37 for public institutions and 0.75 for private institutions. The concentration index is even above 0.85 for DOA, DOC, DOD, EPA, and NASA funding of academic research in private institutions. This indicates that the distribution of academic research capacity is highly uneven among private colleges and universities; the establishment of public research colleges and universities helps alleviate the degree of concentration in this arena.

² The eight agencies include DOD, DOE, NIH, NSF, Department of Agriculture (DOA), Department of Commerce (DOC), Environmental Protection Agency (EPA), and National Aeronautics and Space Administration (NASA). They are selected because they are major federal R&D agencies.

A value of the index closer to the ceiling means a higher level of concentration, so larger values of the G5 and G10 indices seem to indicate more serious concentration issues. However, it is meaningless and even misleading to compare concentration levels across different versions of the index, because the number of observations also matters to the calculation. Policy assessment should focus on the change of one particular concentration index after the launch of a policy or program compared to the level before the policy or program.
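As a sketch (not code from the book), the rank-and-group construction behind the G5 and G10 indices can be written as follows; for an equal distribution across 50 states the function returns the ground values noted above (0.1 for G5, 0.2 for G10):

```python
def group_concentration_index(shares, group_size):
    """Group-based concentration index (G5 when group_size=5, G10 when 10).

    States are ranked by share, chunked into consecutive groups of
    `group_size`, and the squared group shares are summed, so any
    redistribution within a single group leaves the index unchanged.
    """
    total = sum(shares)
    ranked = sorted((s / total for s in shares), reverse=True)
    groups = [ranked[i:i + group_size] for i in range(0, len(ranked), group_size)]
    return sum(sum(group) ** 2 for group in groups)

g5_floor = group_concentration_index([1.0] * 50, 5)    # 10 groups of 0.1 each
g10_floor = group_concentration_index([1.0] * 50, 10)  # 5 groups of 0.2 each
```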
One potential issue with the original concentration index G1 (in which the 50 states are considered separately) is that a given change in the index can result from different scenarios of redistribution of federal research funding across states, only some of which are policy relevant. For instance, suppose that California, the top recipient of federal academic R&D funding, received 10% less funding in a year than in the prior year. This could happen if the other top-five states captured what the academic institutions in California lost, or if the bottom ten states each increased their receipt of federal research funding by 80% (in 2015, California received 14.2% of total federal support of academic R&D, while the bottom ten states together received about 1.8%; a 10% loss of California's funding is roughly equivalent to an 80% increase in the bottom ten states' combined funding). The latter scenario is a real indication of the success of an EPSCoR-type program, whereas the former does not indicate any significant progress in raising the shares of the "low-capacity" states. The two scenarios produce the same percentage decline in the G1 concentration index, so the change itself does not necessarily indicate the expected policy effect and does not help policy assessment in this regard. Group-based concentration indices such as the G5 and G10 avoid this false indication of policy effect. Because the calculation of the G5 or G10 indices is based on the
shares of federal funding at the group level, any redistribution within each group does not affect the value of the indices. This effectively removes the influence of within-group redistribution on the concentration index, which might otherwise be falsely viewed as the effect of EPSCoR-type programs.

I would suggest that either the G5 or the G10 index be used to describe the concentration of federal resources. Like the G1 index, group indices with few states in each group (such as the G2 index) can give a false indication of policy effects. Moreover, given the relatively large number of observations, the magnitude of such indices is relatively small and likely under-represents the extent to which federal funding is concentrated in a few states. On the other hand, concentration indices with a large number of states in each group (such as the G25 index) likely inflate the magnitude and may artificially exaggerate the issue of undue concentration.

Although different versions of the concentration index produce quite different magnitudes, they show almost identical time trends. Taking the shares of total federal support of academic R&D as an example, the G1, G5, and G10 indices all peaked in 1983, declined continuously to a minimum in 2009, and rose thereafter. The three trends are highly similar in shape, with correlation coefficients of 0.96 between the G1 and G10 indices, 0.95 between the G5 and G10 indices, and 0.84 between the G1 and G5 indices; all three coefficients are statistically significant at the 1% level. For the shares of NSF support, the resemblance of the three trends is even stronger, with correlation coefficients of 0.98 between the G1 and G10 indices, 0.99 between the G5 and G10 indices, and 0.97 between the G1 and G5 indices, again all significant at the 1% level. Similarly constructed indices have been used in other fields.
For instance, the Herfindahl-Hirschman index (HHI) in economics research indicates the level of competition among individual firms in one industry by integrating the relative size of individual firms in relation to the whole industry (Hirschman, 1964). The measure is also widely used by federal agencies as an indicator of market concentration (Brown & Warren-Boulton, 1988). The HHI is calculated by squaring the market share of each firm competing in the market and then summing all the squared terms. It approaches zero when a market is occupied by a large number of firms of relatively equal size and reaches its maximum of one when a single firm controls a market. The HHI increases both as the number of firms in the market decreases and as the disparity in size between those firms increases.
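A minimal sketch in Python of both constructions. The `hhi` function follows the definition above; `group_index` is a hypothetical Herfindahl-style stand-in for the book's group-based indices (whose exact formula appears earlier in the chapter), used only to demonstrate that any index computed from group totals is unchanged by redistribution inside a group.

```python
def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared shares (0..1 scale).
    Approaches 0 for many equal-sized firms; equals 1 for a monopoly."""
    return sum(s * s for s in shares)

def group_index(shares, group_size):
    """Hypothetical stand-in for a group-based index: rank the state
    shares, total them within consecutive groups of `group_size`, and
    apply a Herfindahl-style sum of squares to the group totals."""
    ranked = sorted(shares, reverse=True)
    groups = [sum(ranked[i:i + group_size])
              for i in range(0, len(ranked), group_size)]
    return sum(g * g for g in groups)

# HHI sanity checks.
assert hhi([1.0]) == 1.0                    # single-firm monopoly
assert abs(hhi([0.25] * 4) - 0.25) < 1e-12  # four equal-sized firms

# Within-group redistribution leaves a group-based index unchanged
# (as long as no state crosses a group boundary). Ten hypothetical
# state shares, redistributed only among the top five:
base     = [0.30, 0.20, 0.15, 0.10, 0.08, 0.06, 0.05, 0.03, 0.02, 0.01]
shuffled = [0.28, 0.22, 0.14, 0.11, 0.08, 0.06, 0.05, 0.03, 0.02, 0.01]
assert abs(group_index(base, 5) - group_index(shuffled, 5)) < 1e-12
```

The final assertion is the G5 argument in miniature: the top-five group total (0.83) and the bottom-group total (0.17) are identical under both distributions, so any index built on group totals cannot register the internal reshuffle.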

2.2 Causes and Consequences of Uneven Distribution of Federal Funding of Academic R&D

Federal research awards and grants are made primarily through merit-based competition. Therefore, the uneven distribution of federal academic R&D support is largely a result of substantial disparity in academic institutions' research capacity and competitiveness across states. The concentration of prestigious research universities and highly capable scientists in states such as California and Massachusetts makes them much more competitive for federal R&D funding than states with only a handful of research institutions and a few good university scientists.

In addition to research capacity, the distribution of federal funding of academic R&D is also shaped by states' requests for earmarked projects that enter the appropriation bills through the influence of their representatives on the appropriations committees. As Savage observed, the House and Senate appropriations committees play a decisive role in federal earmarked spending on academic research (Savage, 1999). Since the early 1980s, it has been common for members of the appropriations committees to bring academic earmarks back to their congressional districts or states (Savage, 1999). Academic earmarks totaled less than $17 million in 1980 and rose to about $2 billion in 2001, representing about 10% of total federal funding of academic research (De Figueiredo & Silverman, 2006).

There are a handful of empirical studies on the distribution of federal research funding to universities. Payne (2003) focuses on the effects of congressional appropriations committee membership. Using a panel of 220 universities over the period 1973–1999, she presents mixed estimates of the effects of such membership on federal research funding to universities. In another study, De Figueiredo and Silverman (2006) examine the effects of university lobbying on academic earmarks. Their statistical results show that having a member on either the House or the Senate Appropriations Committee increases a university's academic earmarks by a significant amount. Wu's 2013 study expands this empirical literature, which has provided mixed evidence on the distribution of federal academic R&D funding.
Unlike the two prior studies, he focuses on individual states rather than universities. Using a panel of 50 states over the 28-year period from 1979 to 2006, Wu (2013) finds that a state's research capacity, as measured by its annual doctorate recipients in science-related and engineering disciplines, is a statistically significant determinant of how much federal research funding the universities within its jurisdiction receive. A 10% rise in a state's number of doctoral recipients in science-related disciplines may increase its receipt of federal academic R&D funding by about 6%, and a 10% rise in its number of doctoral recipients in engineering disciplines may increase that receipt by about 2%. Like Payne (2003), Wu (2013) also reports mixed evidence on the effects of a state's membership on the congressional appropriations committees. The estimates for a state's representation on the House Appropriations Committee tend to be insignificant, while the estimates for a state's representatives as a percentage of majority-party members on the Senate Appropriations Committee are statistically significant. The estimated effect is quite small: having one more senator of the majority party on the appropriations committee could increase the state's annual federal research funding by about $92,000 (in 2000 constant dollars), all else being equal. In addition, Wu (2013) finds some spillover effects from a strong federal research presence within a state's boundaries. The estimates of the annual federal funds for federal intramural R&D are positive and statistically significant: a 10% increase in federal funds for federal intramural R&D in a state may lead to about a 0.8%


rise in federal academic research funding to the state's universities. The finding indicates that universities may take advantage of spillover effects from local federal research, and this effect could be better utilized if more effective measures were taken to strengthen contact and collaboration between researchers in academia and in federal laboratories.

Some may argue that scientific research is just a game for talented intellectuals that does not relate to the lives of ordinary citizens, so it does not matter much if some institutions and researchers receive more support than others; from this perspective, there is no need to view the issue through the lens of inequality. However, federal academic R&D support may bring multiple benefits to the recipient states. In addition to the inflow of federal funds, funded research projects may produce scientific knowledge that leads to new products and start-ups. The substantial disparities in federal research funding therefore do have equity implications within the broader context of higher education and regional economic growth.

In higher education, it is well recognized that the involvement of S&E undergraduate and graduate students in quality research projects is important, and perhaps necessary, to cultivate future scientists and engineers. Disparity in funded research deprives S&E students in low-capacity states of training opportunities, limiting their exposure to educational resources critical to their professional growth. The shortage of educational and training resources and opportunities is reflected in relatively low shares of doctorate recipients in science and engineering disciplines. Using NSF survey data on earned doctorates by academic discipline, I calculate each state's share of doctorate recipients in engineering and science-related disciplines in 2016 and present the results in Figs. 2.2 and 2.3.
The pattern is evident: academic institutions in the EPSCoR states produced fewer doctorates in science and engineering disciplines than their counterparts in non-EPSCoR states, and the exceptions are rare. For the share of doctorate recipients in engineering in 2016, Connecticut and Oregon are the only two non-EPSCoR states falling behind the top 25 states, while Tennessee is the only EPSCoR state that barely makes the top-20 list. For the share of doctorate recipients in Physical Sciences, Life Sciences, Math and Computer Sciences, and Geosciences in 2016, Oregon is the only non-EPSCoR state ranked below 25th on the list, and Missouri is the only EPSCoR state that barely makes the top-20 list.

The distribution of academic R&D funding is also likely tied to regional economic growth and prosperity. The evolution of endogenous growth theory is primarily based on the spillover effect of technological knowledge generated from research and development activities. Romer (1990) points out that the distinguishing feature of technological knowledge as an input of production is that it is neither a conventional good nor a public good; it is a non-rival, partially excludable good. More than one firm or industry can use knowledge concurrently, without use by one entity precluding use by others (non-rivalry), and the entity that developed the knowledge often cannot fully exclude other entities from using it (partial excludability). The spillover of knowledge can be beneficial to the economic output of a firm or institution, because it can draw on both internal and external technological resources to strengthen its R&D capacity and enhance the performance of its economic activities.


Fig. 2.2 State’s share of doctorate recipients in engineering in 2016

As one major producer of technological knowledge, academic institutions, especially major research universities, play a key role in national and regional economic development. One important mechanism through which academic institutions contribute to regional economic growth is the conversion of scientific inventions into innovations through patenting and licensing of research outputs. The Bayh-Dole Act of 1980 in particular enables universities to patent publicly funded research and engage with industry in technology transfer and research commercialization. By 1998, every Carnegie I or II research university had established a technology transfer office to facilitate patenting and commercialization of university research (Bercovitz & Feldman, 2007), and university patenting activity has increased since the passage of the Bayh-Dole Act (Henderson, Jaffe, & Trajtenberg, 1998; Mowery, Nelson, Sampat, & Ziedonis, 2001; Mowery, Sampat, & Ziedonis, 2002; Mowery & Ziedonis, 2002; Shane, 2004). The University of California is among the most successful at profiting from the commercialization of patented research (Mowery & Ziedonis, 2002).


Fig. 2.3 State’s share of doctorate recipients in Physical Sciences, Life Sciences, Math and Computer Sciences and Geosciences in 2016

References

Bercovitz, J., & Feldman, M. (2007). Academic entrepreneurs and technology transfer: Who participates and why? In F. Malerba & S. Brusoni (Eds.), Perspectives on innovation (pp. 381–398). Cambridge, MA: Cambridge University Press.
Brown, D. M., & Warren-Boulton, F. R. (1988). Testing the structure-competition relationship on cross-sectional firm data. Discussion Paper 88-6. Economic Analysis Group, U.S. Department of Justice.
De Figueiredo, J. M., & Silverman, B. S. (2006). Academic earmarks and the returns to lobbying. The Journal of Law and Economics, 49(2), 597–625.
Henderson, R., Jaffe, A. B., & Trajtenberg, M. (1998). Universities as a source of commercial technology: A detailed analysis of university patenting, 1965–1988. The Review of Economics and Statistics, 80(1), 119–127.
Hirschman, A. O. (1964). The paternity of an index. The American Economic Review, 54(5), 761–762.


Mowery, D. C., Nelson, R. R., Sampat, B. N., & Ziedonis, A. A. (2001). The growth of patenting and licensing by U.S. universities: An assessment of the effects of the Bayh-Dole Act of 1980. Research Policy, 30(1), 99–119.
Mowery, D. C., Sampat, B. N., & Ziedonis, A. A. (2002). Learning to patent: Institutional experience, learning, and the characteristics of U.S. university patents after the Bayh-Dole Act, 1981–1992. Management Science, 48(1), 73–89.
Mowery, D. C., & Ziedonis, A. A. (2002). Academic patent quality and quantity before and after the Bayh-Dole Act in the United States. Research Policy, 31(3), 399–418.
Payne, A. A. (2003). The effects of congressional appropriation committee membership on the distribution of federal research funding to universities. Economic Inquiry, 41(2), 325–345.
Romer, P. M. (1990). Endogenous technological change. Journal of Political Economy, 98(5), S71–S102.
Savage, J. D. (1999). Funding science in America: Congress, universities, and the politics of the academic pork barrel. New York: Cambridge University Press.
Shane, S. (2004). Encouraging university entrepreneurship? The effect of the Bayh-Dole Act on university patenting in the United States. Journal of Business Venturing, 19(1), 127–151.
Wu, Y. (2013). The cross-state distribution of federal funding in the USA: The case of financing academic research and development. Science and Public Policy, 40(3), 316–326.

Chapter 3

Public Policy Response to Concentration of Academic Research

Chapter 2 describes the degree of geographic concentration of federal funding for academic R&D and discusses the underlying reasons for the uneven distribution of federal resources. Inequity exists not only in the substantial disparities in federal support of academic R&D, but also in the ramifications of long-lasting disparities in receiving federal funding in this arena. This chapter describes how the federal government, in collaboration with state governments, has responded to the concentration of academic research funding across states. Research universities play a significant role in the production of science in the United States (Geiger, 2004; Jaffe, 1989; Hall, Link, & Scott, 2003). Some states have better research-related resources and a potentially competitive advantage in gaining federal and other sources of research funding. Disparity in federal academic R&D funding is primarily the result of varied research capacity among higher education institutions in the fifty states. EPSCoR was initiated as an important federal effort to build university-based research capacity in selected states. Implemented through a number of federal agencies, EPSCoR programs provide funds to the designated states with a focus on building the capacity of scientists via investments in research facilities and equipment, funding of selected research initiatives, addition of research-oriented faculty, and support for graduate students.

3.1 History of NSF’s EPSCoR and Similar Programs Figures 1.3 and 2.1 show substantial disparities in federal funding of academic R&D among the states starting in the 1970s. In the late 1970s, some members of Congress from the low-capacity states started pushing NSF to address the issue of uneven distribution of federal research funds. The NSF initiated its EPSCoR as a response to the growing political pressure at that time. The program was initiated to “avoid undue concentration” of federal funding of R&D to the states by supporting the selected academic research projects in the states with a relatively low share of federal R&D funding. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 Y. Wu, America’s Leaning Ivory Tower, SpringerBriefs in Political Science, https://doi.org/10.1007/978-3-030-18704-0_3


Since its inception by NSF in 1979, EPSCoR has focused on enhancing the research capacity of states that are less competitive in seeking federal research funds. It was considered experimental because it was intended as a trial of whether a catalytic approach, in the form of targeted funding of relatively less competitive researchers, would spur research capacity (Lambright, 2000). Though not initially envisioned as a long-term program, it has expanded significantly over time, from a funder into a catalytic agent for increasingly ambitious goals. In recognition of its longevity, NSF recently changed the program name to the Established Program to Stimulate Competitive Research.

In 1998, NSF divided its EPSCoR program into two parts: infrastructure grants and co-funded research grants (Lambright, 2000). Since 2001, the infrastructure grants have been made through the Research Infrastructure Improvement (RII) program. The infrastructure improvement grants are generally up to $9 million over a three-year term, with a focus on statewide infrastructure building in areas identified as strategic by the states. Co-funded research grants have been made through the regular NSF divisions to research proposals from the EPSCoR states that fall just below the top-rated proposals considered by those divisions. With co-funding, EPSCoR states receive more money than they would under EPSCoR alone. Moreover, co-funded grants are likely to have a more immediate impact than traditional EPSCoR grants on enhancing the competitiveness of universities in the EPSCoR states for future federal funding, because they target researchers who are very close to the top tier in their fields. NSF started EPSCoR RII Track 2 in 2009, which aims to promote inter-jurisdictional collaboration. EPSCoR RII Track 3 began in 2013 to broaden participation of underrepresented groups in STEM fields.
To be eligible for EPSCoR funds (and therefore designated as an EPSCoR state), a state must have received no more than 0.75% of NSF R&D funds over the prior three-year period. Eligibility criteria have changed over time. In 1979 and 1981, the eligible jurisdictions were the states that received less than $1 million and $3 million in NSF funds, respectively, in two of the three most recent fiscal years for which data were available and that ranked lower on some selected indicators. In 1991, eligible jurisdictions were the states that received less than 0.5% of NSF funds over the previous three-year period and ranked lower on some selected indicators. Since 2003, any state that receives 0.75% (0.7% in 2002) or less of total NSF research funds, averaged over the past three years, qualifies.

The number of participating states and the size of program funds have grown over time. In 1980, the first five EPSCoR awards of $3 million were made to Arkansas, Maine, Montana, South Carolina, and West Virginia. More states were designated as EPSCoR-eligible over time, so that by 2012 the program was operating in twenty-eight states as well as Guam, Puerto Rico, and the Virgin Islands. All the EPSCoR states remained in the program until 2015, when three states (Iowa, Tennessee, and Utah) became ineligible for EPSCoR funding (Tennessee also left the program briefly in 2006 but regained eligibility in 2007). Table 3.1 lists the jurisdictions that have been members of the program and their years of entry.

As more and more jurisdictions joined NSF EPSCoR, its funding scale escalated. Table 3.2 shows NSF funding amounts for RII, co-funding, and other related

3.1 History of NSF’s EPSCoR and Similar Programs Table 3.1 EPSCoR jurisdictions and their years of entry

31

Jurisdiction

Year of entry

Alabama

1985

Alaska

2000

Arkansas

1980

Delaware

2003

Guan

2012

Hawaii

2001

Idaho

1987

Iowa

2009

Kansas

1992

Kentucky

1985

Louisiana

1987

Maine

1980

Mississippi

1987

Missouri

2012

Montana

1980

Nebraska

1992

Nevada

1985

New Hampshire

2004

New Mexico

2001

North Dakota

1985

Oklahoma

1985

Puerto Rico

1985

Rhode Island

2004

South Carolina

1980

South Dakota

1987

Tennessee

2004

Utah

2009

Vermont

1985

Virgin Islands

2002

West Virginia

1980

Wyoming

1985

Source Accessed on February 16, 2018 at https://www.nsf.gov/od/ oia/programs/epscor/images/FY17_EPSCoR_Map.png
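The current three-year eligibility rule described above can be sketched as a simple check. This is an illustrative sketch only: the function name, the choice to average the three annual shares, and the dollar figures are assumptions for demonstration, not NSF's own implementation.

```python
# Hypothetical sketch of the post-2003 NSF EPSCoR eligibility rule:
# a jurisdiction qualifies if its share of total NSF research funds,
# averaged over the most recent three fiscal years, is 0.75% or less.

THRESHOLD = 0.0075  # 0.75% of total NSF research funds

def epscor_eligible(state_funds, total_funds):
    """state_funds / total_funds: NSF dollars for the last three years
    (illustrative inputs; any consistent units work)."""
    avg_share = sum(s / t for s, t in zip(state_funds, total_funds)) / 3
    return avg_share <= THRESHOLD

# Illustrative numbers only (millions of dollars): an average share of
# about 0.60% falls under the 0.75% threshold, so the state qualifies.
print(epscor_eligible([30.0, 34.0, 41.0], [5600.0, 5800.0, 6000.0]))
```

A state whose three-year average share exceeds the threshold, as happened to Iowa, Tennessee, and Utah in 2015, would return `False` under this check.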

Table 3.2 NSF EPSCoR funding by year

Year    RII      Co-funding   Outreach and workshops   Total
1999    26.3     21.5         0.9                      48.7
2000    30.7     19.6         1.4                      51.7
2001    39.9     33.7         1.3                      75.0
2002    40.5     37.7         1.5                      79.7
2003    47.5     39.7         2.0                      89.2
2004    56.7     36.2         1.3                      94.2
2005    59.2     33.7         0.5                      93.4
2006    61.7     36.4         0.1                      98.2
2007    65.8     36.2         0.1                      102.1
2008    72.8     46.7         0.5                      120.0
2009    91.3     41.1         0.5                      132.9
2010    100.2    45.4         1.5                      147.1
2011    106.2    39.4         1.2                      146.8
2012    110.6    38.8         1.5                      150.9
2013    116.3    30.8         0.5                      147.6
2014    132.2    25.3         1.0                      158.2
2015    137.4    27.6         0.5                      165.5

Source: Compiled by the author based on the NSF's EPSCoR archives available from https://www.nsf.gov/od/oia/programs/epscor/nsf_oiia_epscor_archives.jsp
Note: Amounts are in millions of dollars.
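A quick arithmetic check on the table's first and last rows (a sketch using only the printed totals):

```python
# Growth implied by Table 3.2: total NSF EPSCoR funding rose from
# $48.7 million (1999) to $165.5 million (2015), in nominal dollars.
total_1999 = 48.7
total_2015 = 165.5
years_elapsed = 16  # 1999 -> 2015

overall_growth = total_2015 / total_1999 - 1  # ~2.40, i.e. about 240%
annualized = (total_2015 / total_1999) ** (1 / years_elapsed) - 1  # ~0.079

print(f"total growth {overall_growth:.0%}, annualized {annualized:.1%}")
# -> total growth 240%, annualized 7.9%
```

This reproduces the figures cited in the surrounding text: a rise of about 240% over the period, or an annualized growth rate of nearly 8%.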

activities for the years 1999 through 2015. In nominal dollars, total EPSCoR funding rose by 240% from 1999 to 2015, an annualized growth rate of nearly 8%. Total NSF EPSCoR support rose to 3% of total NSF research support by 2015.

With the increase in the number of eligible states, EPSCoR's goals expanded to include "systemic change" in state science and technology environments, contribution to technology-based economic development, and enhancement of human resources in R&D. In the mid-2000s, NSF EPSCoR aimed to (a) provide strategic programs and opportunities for EPSCoR participants that stimulate sustainable improvements in their R&D capacity and competitiveness; and (b) advance science and engineering capabilities in EPSCoR jurisdictions for discovery, innovation, and overall knowledge-based prosperity (NSF, 2006). In its most recent program solicitation (NSF, 2017), NSF describes EPSCoR's goals as follows:
• Catalyze the development of research capabilities and the creation of new knowledge that expands jurisdictions' contributions to scientific discovery, innovation, learning, and knowledge-based prosperity;

3.1 History of NSF’s EPSCoR and Similar Programs

33

• Establish sustainable science, technology, engineering, and mathematics (STEM) education, training, and professional development pathways that advance jurisdiction-identified research areas and workforce development;
• Broaden direct participation of diverse individuals, institutions, and organizations in the project's science and engineering research and education initiatives;
• Effect sustainable engagement of project participants and partners, the jurisdiction, the national research community, and the general public through data-sharing, communication, outreach, and dissemination; and
• Impact research, education, and economic development beyond the project at academic, government, and private sector levels.

EPSCoR-type programs have been implemented through a number of federal agencies, including NSF, NIH, DOD, DOA, NASA, DOE, and EPA. In 1991, DOD, EPA, DOE, and DOA started EPSCoR programs modeled after the NSF EPSCoR. NASA's EPSCoR was launched in 1993. That same year, NIH initiated its Institutional Development Award (IDeA) program, which is similar to EPSCoR. NIH's IDeA aimed to improve the capacity and competitiveness of research institutions in states with relatively low NIH grant proposal success rates. It began with a small budget of $2 million in 1993 but has grown continuously and substantially into the largest EPSCoR-type program, accounting for about half of the national EPSCoR budget (National Academies, 2013). When IDeA was initiated, 22 states (Alabama, Alaska, Arkansas, Delaware, Hawaii, Idaho, Kansas, Kentucky, Louisiana, Maine, Mississippi, Montana, Nebraska, Nevada, New Mexico, North Dakota, Oklahoma, South Carolina, South Dakota, Vermont, West Virginia, and Wyoming) and Puerto Rico were eligible for funding. After the entry of New Hampshire and Rhode Island and the exit of Alabama, the list of eligible jurisdictions (23 states plus Puerto Rico) was stable after 2000 and was frozen by NIH in 2008.
As the two largest EPSCoR-type programs, NSF EPSCoR and NIH IDeA focus on research capacity-building in virtually the same jurisdictions. When 23 jurisdictions were eligible for the original IDeA in 1993, the number of NSF EPSCoR jurisdictions was 19 (18 states plus Puerto Rico). Since 2000, NSF has steadily increased the number of its EPSCoR participants, while the participating jurisdictions have been quite stable for NIH IDeA. The two current lists of eligible jurisdictions are almost identical: all 24 IDeA jurisdictions are eligible for NSF EPSCoR, and only two states (Alabama and Missouri) and two territories (Guam and the Virgin Islands) are eligible for NSF EPSCoR but not for NIH IDeA.

3.2 State-Level EPSCoR Coordination and Heterogeneity

Unlike other NSF programs, the implementation of NSF EPSCoR has relied on the participating states' EPSCoR committees and/or offices as the principal operational arm. Typically the state EPSCoR committee is responsible for coordinating

statewide EPSCoR-related activities, including but not limited to developing the state science and technology plan, gaining political and funding support for EPSCoR programs, and facilitating networking efforts. This diversity of function is important, as it indicates that capacity-building requires more inputs than financial support alone. In its 2013 report, the Committee to Evaluate the EPSCoR and Similar Federal Agency Programs (created by the National Academy of Sciences, National Academy of Engineering, and Institute of Medicine under the direction of the America COMPETES Reauthorization Act of 2010) provided an update on the working structure and functions of state EPSCoR committees (National Academies, 2013). The report found that almost all state EPSCoR committees have state government representation, usually from the governor's office but also from both houses of the state legislature. The EPSCoR committees also include high-level officials such as vice presidents or provosts of academic institutions. Some states encourage faculty representation on the committee, and many states reserve a number of seats for representatives from industry. The EPSCoR committee's primary function is to oversee and maintain the state's EPSCoR program. It also acts as a liaison between NSF and the state, selects proposals for submission to EPSCoR, coordinates the EPSCoR programs, and promotes research competitiveness.

Most state EPSCoR committees measure performance by the following criteria (National Academies, 2013):
• Acquiring additional extramural funding;
• Expanding the state's contributions to the scientific community (through a greater number of publications, presentations, honors, awards, and so on);
• Increasing the number of science, technology, engineering, and mathematics graduates;
• Enhancing the diversity of students and faculty;
• Raising the number and quality of faculty hires;
• Stimulating economic development (jobs, patents, new technology, and so on);
• Creating new research institutes;
• Strengthening collaboration and launching new interdisciplinary research and education programs; and
• Developing state-of-the-art infrastructure (for example, by improving laboratory equipment and classroom facilities).

An important feature of the NSF EPSCoR funding process is that an eligible state can have only one EPSCoR award at any time, and only one EPSCoR proposal is invited once the current EPSCoR grant expires. The state EPSCoR committee is responsible for selecting the experts who screen and evaluate pre-proposals and for developing a single "best" state proposal to compete for a limited number of EPSCoR awards. The solicitation and submission of the final proposal often involves a great deal of educational, networking, and technical support activity. For instance, Kentucky's EPSCoR office held a series of proposal sessions to prepare for the development of a state proposal. One of the Mississippi Research Consortium's major roles is to coordinate and facilitate the proposal submission, with one of the universities taking a lead role for each of the projects. The single-proposal process requires internal communication and planning across research groups, institutions, and research centers in the selection and configuration of the proposal (Payne, 2003).

Through its collaborative process, the NSF EPSCoR program facilitates the development of scientific and technical human capital through effective collaboration among individual scientists. The policy of one proposal per state requires that universities and researchers that wish to participate, and that are traditionally rivals, collaborate. The large scale of EPSCoR grants requires assembling multiple projects and a number of university scientists into a single major project. Resources and opportunities for productive collaborations among potential EPSCoR participants are thus built into the process. State EPSCoR committees build networks in order to develop the best possible proposal, while NSF sponsors activities to facilitate collaboration among individual researchers in particular regions and areas of study.

Not all EPSCoR states are similar, and heterogeneity is common within the EPSCoR framework. For instance, the mix of academic institutions involved in EPSCoR varies considerably, which means important diversity in institutional resources and general research capacity (Feller, 2000). This matters not only for tailoring EPSCoR policy initiatives, as Feller argues, but also for recognizing the distinctions in research capacity across institutions within a particular EPSCoR jurisdiction.
Feller (2000) explores ten general strategies to enhance the competitiveness of EPSCoR universities: increasing the number, size, and quality of research proposals; exploring niche markets; interdisciplinarity; catching a new wave; collaboration; emphasizing industrial and applied research; building a medical school; bootstrapping; using political leverage; and strategically redefining objectives. As he notes, the EPSCoR jurisdictions do not have any particular advantage in employing these strategies to pursue research competitiveness, because most of the strategies are equally available to EPSCoR and non-EPSCoR universities (Feller, 2000).

Another way in which the EPSCoR states differ is the mix of targeted scientific fields. While some EPSCoR states take a broad approach to the portfolio of EPSCoR research projects, other states have developed a more targeted approach to a particular area of science. States' EPSCoR projects may target different S&E fields that are strategic for their academic and economic interests, and the chosen field often reflects a state's natural endowment and other strengths. Table 3.3 presents the strategic foci of major initial NSF EPSCoR awards in five states: Kansas, Kentucky, Montana, New Mexico, and Rhode Island. During the periods covered by these awards, Kansas EPSCoR efforts focused on ecology and ecological forecasting. Kentucky, Montana, and New Mexico chose nanoscience and technology; Kentucky focused on micro-fabrication of devices, Montana on bio-inspired nanomaterials, and New Mexico on the nano-water interface and nanomaterials energy systems. In addition to nanoscience and technology, the three states focused on other research areas: Kentucky on biotechnology and visualization; Montana on biomolecular structure and dynamics, neuroscience, and bioengineering; and New Mexico on hydrology. Rhode Island targeted the life sciences. Montana's NSF EPSCoR focus changed to hydrogen and the environment and large river ecosystems in the period 2007–2010.

Table 3.3 Major initial NSF EPSCoR awards in five states

Kansas
  Award: Phase IV: Improvement of the Academic Research Infrastructure (award number 0236913; award amount $9,000,000; award period Apr 1, 2003–Sep 30, 2006)
  Strategic foci: Focus on living systems to enhance research strengths in ecology, genetics, and biochemistry. Target the interface of ecology and genetics, and the emerging field of lipid profiling (lipidomics).

  Award: Phase V: Building Research Infrastructure to Address Grand Challenge Problems in Ecological Forecasting and Biomaterials Design (award number 0553722; award amount $4,500,000; award period Apr 1, 2006–Mar 31, 2008)
  Strategic foci: Establish a virtual ecological forecasting center to address the evaluation, modeling, and forecasting of the biological and ecological consequences of the world's accelerating global changes.

Kentucky
  Award: Kentucky EPSCoR (award number 0132295; award amount $9,000,000; award period Mar 1, 2002–Feb 28, 2005)
  Strategic foci: Enhance biological and environmental research. Support functional genomics, cellular and molecular proteomics, biomedical nano- and micro-electro-mechanical systems, and structural biology.

  Award: Building Kentucky's New Economy with EPSCoR (award number 0447479; award amount $9,000,000; award period Jun 1, 2005–May 31, 2008)
  Strategic foci: Support research in nanotechnology (micro-fabrication of devices in four main areas: electronics, sensors, materials, and biotechnology); biotechnology (specializing in metabolomics); and information technology (visualization and virtual environments).

Montana
  Award: Montana's EPSCoR Infrastructure: Cross-sectional Partnership Building for the Future (award number 0091995; award amount $9,000,000; award period Feb 1, 2001–Jul 31, 2004)
  Strategic foci: Support integrative analysis of complex biological systems based on environmental sciences, physical and social sciences; nanotechnology research; and biomolecular structure and dynamics.

  Award: Montana Infrastructure via Science and Technology Enhanced Partnerships—INSTEP (award number 0346458; award amount $9,000,000; award period Aug 1, 2004–Dec 31, 2007)
  Strategic foci: Enhance research in biomolecular structure and dynamics; neuroscience; bioengineering; and bio-inspired nanomaterials.

New Mexico
  Award: New Mexico EPSCoR Infrastructure Award (award number 0132632; award amount $6,259,280; award period Mar 1, 2002–Apr 30, 2005)
  Strategic foci: Focus on natural resource analysis and management, and nanoscience.

  Award: New Mexico EPSCoR RII (NM NEW) Proposal (award number 0447691; award amount $6,750,000; award period Apr 1, 2005–Mar 31, 2009)
  Strategic foci: Focus on hydrology (instrumentation and algorithm development for regional hydrologic modeling and evapotranspiration estimation); and nanoscience (nano-water interface and nanomaterials energy systems).

Rhode Island
  Award: Rhode Island EPSCoR: Catalyzing a Research, Education and Innovation Network (award number 0554548; award amount $4,500,000; award period Jun 1, 2006–May 31, 2008)
  Strategic foci: Enhance research in life sciences. Establish a center for research excellence in marine life sciences, and core research facilities in genomics and proteomics.

Source: Compiled by the author based on the NSF's Awardee Information available from https://www.nsf.gov/awardsearch/simpleSearch.jsp

3.3 An Evaluative Framework on EPSCoR

The share of a state's federal academic R&D funding is an indicator of its research competitiveness, and capacity-building is the necessary process for enhancing the research competitiveness of a state's academic institutions. However, capacity development in the sciences is considerably more complex than macro-level grant figures can reveal. In particular, making progress toward sustainable research funding requires building not only the scientific and collaborative qualities of the researchers engaged in funded projects but also their access to necessary facilities and equipment and supportive institutional and work environments.

Scholars generally support the view that R&D policy as a development instrument should center not only on scientific outcomes but also on the growth of capacity (Bozeman, Dietz, & Gaughan, 2001). The growth of capacity is generated from the accumulation of "scientific and technical human capital," which includes both human capital endowments, such as formal education and training, and social relations and network ties (Bozeman & Corley, 2004). The evaluation of science and technology programs has accordingly incorporated measures of research capacity-building.

As the ultimate carriers of research capacity, academic scientists are at the center of capacity-building. Good scientists are talented and capable of proposing innovative ideas and carrying out rigorous investigations in their scientific fields. Their research capacity can be enhanced through the development of productive collaborative relationships. The social network literature holds that individuals' participation in career-related networks can enhance opportunities for professional advancement and career satisfaction by increasing the amount and quality of resources accrued by the networked individuals (Granovetter, 1973, 1983; Burt, 1992, 2005; Renzulli, Aldrich, & Moody, 2000).

Researchers' social capital gained through participation in collaborative, research-related networks can take different forms. The sharing of knowledge, expertise, and equipment and introductions to peers may occur through these networks, so access to and participation in collaborative teams can enhance capacity-building in science and engineering.

There are two other important aspects of research capacity at the individual level. One is access to necessary research facilities and equipment. Modern scientific investigations increasingly rely on sophisticated and expensive instrumentation; academic researchers cannot conduct frontier research without state-of-the-art facilities. Infrastructure costs are particularly high in fields such as structural biology and biotechnology. Research infrastructure has been a critical factor in the productivity of academic researchers and has even become an important factor in the recruitment and retention of top academic scientists.

Another important and often ignored factor is scientists' motivation as it relates to research capacity. Two types of motivation affect work performance and outcomes. Intrinsic motivation exists when an individual engages in an activity out of interest in and enjoyment of the activity itself, whereas extrinsic motivation leads the individual to engage in the activity because of incentives or external pressures (Reeve, 1995; Sansone & Harackiewicz, 2000). Many people tend to stereotype scientists as


Fig. 3.1 Framework of research capacity and competitiveness

nerds with a genuine interest in scientific discovery. They are expected not to care about such factors as the allocation of rewards, the assignment of support, promotion criteria, and so on. While scientists' intrinsic motivation is important, they are human beings working in a particular institutional environment that inevitably affects their morale and motivation to conduct scientific research. If they feel good about the environment, they can be highly productive. If they are dissatisfied, they are likely to be less productive than peers with similar qualities in other academic institutions.

Based on the literature (Melkers & Wu, 2009), I develop an evaluative framework of research capacity and competitiveness, shown in Fig. 3.1. It includes talent, collaboration, support, and motivation as four key determinants of individual researchers' capacity for conducting scientific research. The capacity of individual researchers in a state's higher education institutions is summative and collectively constitutes the state's competitiveness for federal funding of academic research. Individual scientists' capacity can be measured by faculty productivity indicators such as the number of research grant proposals submitted and awarded and the number of articles published in peer-reviewed journals. A state's competitiveness can be measured by its share of federal academic R&D funding: a larger share indicates that the researchers in the state have made progress in capacity-building as compared with their competitors in other states.

An individual scientist's research capacity could be affected by a variety of factors, including various university resources (research assistants, financial resources) and individual characteristics (education, experience). The evaluative framework includes talent, collaboration, support, and motivation as four composite constructs, each incorporating multiple factors. For instance, a researcher's talent may reflect the effects of prior education, training, and experience. The factor of support may cover various institutional resources such as personnel, financial, and infrastructure support.

Although EPSCoR jurisdictions differ in their strategic research foci, they have followed similar routes to building research capacity and competitiveness. All the projects invested EPSCoR funds in key research infrastructure such as research


centers, laboratories, and other facilities, as well as in hiring new faculty researchers. For instance, with NSF EPSCoR assistance, Kansas established an ecological forecasting collaboratory to enhance and integrate established expertise and infrastructure across universities, academic units, and research centers in the state. In Kentucky, NSF EPSCoR supported the development of a nationally recognized center focusing on research to understand cellular and molecular signaling processes with real-time spatial and temporal resolution. Once the funded project ends, the established institutions are expected to seek resources from other sources to support their long-term sustainability. The EPSCoR funding they receive serves as seed money to cultivate capacity that may bring in more non-EPSCoR federal and other funds.

Since the early 2000s, NSF EPSCoR has enhanced research capacity through improvements in research-relevant infrastructure via the RII funding initiative. In practical terms, funds are distributed not only for the improvement and purchase of facilities and equipment but also for human resources in the form of undergraduate and graduate students and post-doctoral researchers. University institutions also provide important resources, including facilities, equipment, graduate assistants, and other factors critical to the conduct of scientific research and the development of capacity. These targeted uses of NSF EPSCoR funds underscore the multiple dimensions of capacity development: hard infrastructure and human-based infrastructure and resources. The infrastructure-based investments contribute first to the development of scientists' capacity, which in turn leads to institutional and state-level competitiveness.

In addition to new facilities and faculty researchers, EPSCoR supports the development of collaborative relationships as an important component of human research capacity. EPSCoR fosters effective collaboration among individual scientists and research teams. Initiated in 2009, the NSF EPSCoR RII Track 2 is aimed particularly at promoting interjurisdictional collaboration. Among the four key determinants of individual research capacity, however, EPSCoR has not paid sufficient attention to the institutional and work environments that affect scientists' motivation in the conduct of research. Scientists' extrinsic motivation could be strengthened and their research productivity improved through the positive feedback of well-designed incentives or pressures; it is equally possible that they could be demotivated by the negative feedback of poorly designed incentives or pressures.

References

Bozeman, B., & Corley, E. A. (2004). Scientists' collaboration strategies: Implications for scientific and technical human capital. Research Policy, 33(4), 599–616.
Bozeman, B., Dietz, J. S., & Gaughan, M. (2001). Scientific and technical human capital: An alternative model for research evaluation. International Journal of Technology Management, 22(7–8), 716–740.
Burt, R. S. (1992). Structural holes: The social structure of competition. Cambridge, MA: Harvard University Press.
Burt, R. S. (2005). Brokerage and closure: An introduction to social capital. New York, NY: Oxford University Press.
Feller, I. (2000). Strategic options to enhance the research competitiveness of EPSCoR universities. In J. S. Hauger & C. McEnaney (Eds.), Strategies for competitiveness in academic research (pp. 11–36). Washington, DC: American Association for the Advancement of Science.
Geiger, R. L. (2004). Knowledge and money: Research universities and the paradox of the marketplace. Stanford, CA: Stanford University Press.
Granovetter, M. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360–1380.
Granovetter, M. (1983). The strength of weak ties: A network theory revisited. Sociological Theory, 1, 201–233.
Hall, B. H., Link, A. N., & Scott, J. T. (2003). Universities as research partners. Review of Economics and Statistics, 85(2), 485–491.
Jaffe, A. (1989). Real effects of academic research. The American Economic Review, 79(5), 957–970.
Lambright, W. H. (2000). Building state science: The EPSCoR experience. In J. S. Hauger & C. McEnaney (Eds.), Strategies for competitiveness in academic research (pp. 37–76). Washington, DC: American Association for the Advancement of Science.
Melkers, J., & Wu, Y. (2009). Evaluating the improved research capacity of EPSCoR states: R&D funding and collaborative networks in the NSF EPSCoR program. Review of Policy Research, 26(6), 761–782.
National Academies. (2013). The experimental program to stimulate competitive research. Washington, DC: The National Academies Press.
National Science Foundation. (2006). EPSCoR research infrastructure improvement grant program: Program solicitation (NSF 06-583). Retrieved from http://www.nsf.gov/pubs/2006/nsf06583/nsf06583.pdf.
National Science Foundation. (2017). EPSCoR research infrastructure improvement program track-1: Program solicitation (NSF 17-562). Retrieved from https://www.nsf.gov/pubs/2017/nsf17562/nsf17562.pdf.
Payne, A. A. (2003). The role of politically motivated subsidies on university research activities. Educational Policy, 17(1), 12–37.
Reeve, J. M. (1995). Motivating others: Nurturing inner motivational resources. Boston, MA: Allyn & Bacon.
Renzulli, L. A., Aldrich, H., & Moody, J. (2000). Family matters: Gender, networks, and entrepreneurial outcomes. Social Forces, 79(2), 523–546.
Sansone, C., & Harackiewicz, J. M. (2000). Intrinsic and extrinsic motivation: The search for optimal motivation and performance. New York, NY: Academic Press.

Chapter 4

Assessment of Scientists’ Research Capacity

The evaluative framework discussed in Chap. 3 illustrates how EPSCoR is intended to address the geographic concentration of federal academic research funding by enhancing the research capacity of individual researchers in low-capacity states. There are four key determinants of research capacity: talent, collaboration, support, and motivation.

Talent and support are two vital pillars of academic research capacity. Scientists have to be smart and capable to succeed in the competition for federal research grants, and they need appropriate facilities and equipment for modern science and engineering research. Without both, scientists are neither capable nor competitive.

Scientists can also increase research capacity through their social networks (Bozeman & Corley, 2004). Collaboration is increasingly important to competitive academic research, although it may not be necessary in every discipline. By collaborating with researchers within or outside their home institution, and within or outside their own academic field, scientists can extend their access to a variety of research-related resources.

Motivation is another key determinant of scientists' research capacity. Motivation concerns what prompts an individual to do certain things and to act in a certain way. Scholars of public administration theorize that some individuals desire to serve the public and link their personal actions with the overall public interest. Public service motivation (PSM) theory explains the motivations of individuals who choose careers in the government and non-profit sectors despite the potential for more financially lucrative careers in the private sector. Perry and Wise (1990) state that PSM is often influenced by various social, political, and institutional factors, and that failure to recognize employees' motivation could discourage them from working in the public sector.

I would argue that motivation, both intrinsic and extrinsic, is important in the conduct of research because scientific research is often a lengthy, tedious, and difficult process with uncertain rewards. Like employees working in the public sector, scientists' motivation for conducting S&E research is shaped by a variety of social and institutional factors, particularly institutional reputation and the academic reward system.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2020. Y. Wu, America's Leaning Ivory Tower, SpringerBriefs in Political Science, https://doi.org/10.1007/978-3-030-18704-0_4


In this chapter, I first conduct an empirical test of the evaluative framework. Using a recent data set on a sample of academic scientists, I develop measures of talent, collaboration, support, and motivation and examine how these measures affect scientists' research capacity as measured by their success in seeking research grants. The empirical test focuses on grant-seeking for two reasons. First, the number of grants submitted and awarded is a commonly used indicator of research productivity in academia. Second, academic scientists' grant-seeking performance is closely related to the primary goal of EPSCoR, the pursuit of a more equitable distribution of federal research support.

After the evaluative framework is empirically validated, I assess EPSCoR efforts in building scientists' research capacity in the eligible jurisdictions by comparing the four key determinants of individual research capacity between scientists in EPSCoR states and those in other states. Analysis of variance is used to evaluate the differences in each determinant across the two groups of scientists, differences that lead to variation in research capacity at the individual level and collective competitiveness at the state level.
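The two-group comparison just described can be sketched as a one-way ANOVA. The following is a minimal, pure-Python illustration; the two samples of weekly research hours are hypothetical and are not drawn from the book's survey.

```python
# Minimal one-way ANOVA, illustrating a comparison of one capacity
# determinant (here, hypothetical weekly research hours) between
# scientists in EPSCoR states and those in other states.

def one_way_anova(*groups):
    """Return the F statistic for a one-way ANOVA across the groups."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    # Between-group sum of squares: weighted squared deviations of
    # group means from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations of observations
    # from their own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical weekly research hours (not the book's survey data).
epscor = [18, 22, 25, 20, 16, 24, 19]
non_epscor = [26, 30, 24, 28, 32, 27, 25]

f_stat = one_way_anova(epscor, non_epscor)
print(f"F(1, {len(epscor) + len(non_epscor) - 2}) = {f_stat:.2f}")  # -> F(1, 12) = 17.72
```

With only two groups, this F statistic equals the square of the equal-variance two-sample t statistic; a significant F would indicate that the determinant differs between the EPSCoR and non-EPSCoR groups.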

4.1 Empirical Test of the Determinants of Individual Research Capacity

The data set was collected in 2012 for an NSF-funded study examining the role of professional networks in academic scientists' career outcomes, including production, advancement, and mobility.1 The original sample includes 9,925 tenured and tenure-track academic scientists in four fields: biology, biochemistry, engineering, and mathematics. The survey collected 4,196 valid responses, a response rate of about 40%, and 1,903 of them are from research-intensive and research-extensive academic institutions. After excluding incomplete responses, I end up with a sample of 1,699 respondents from research-intensive and research-extensive institutions. The survey includes the following questions about scientists' grant-seeking activity within the last two academic years:

• How many external research grant proposals have you submitted?
• How many of the grants you submitted were successful?
• How many of those successful grants were federally funded?
• What is the total dollar amount of those successful grants?
• What is the dollar amount of the largest of those successful grants?

The respondents were also asked about the average number of research proposals they have submitted per year over the past five academic years. According to the

1 The data were collected under the auspices of the NSF Grant: "Breaking through the Reputational Ceiling: Professional Networks as a Determinant of Advancement, Mobility, and Career Outcomes for Women and Minorities in STEM" (NSF Grant # DRL-0910191).


survey responses, I develop four measures of scientists' capacity in grant-seeking, with descriptive statistics as follows:

• The number of external research grant proposals submitted within the past two years: the minimum and maximum values are 0 and 50; the mean is 4.6, and the standard deviation is 5.1.
• The number of grants submitted and awarded within the past two years: the minimum and maximum values are 0 and 50; the mean is 1.8, and the standard deviation is 2.6.
• The number of grants federally funded within the past two years: the minimum and maximum values are 0 and 18; the mean is 1.2, and the standard deviation is 1.5.
• The average number of grant proposals submitted per year within the past five years: the minimum and maximum values are 0 and 50; the mean is 2.5, and the standard deviation is 3.1.

The empirical test examines how talent, collaboration, support, and motivation affect individual scientists' grant-seeking performance. The measure of each capacity determinant is developed from the survey questions. First, talent refers to a person's intelligence or ability to conduct scientific research in S&E fields. Talent can be a natural endowment or can be gained through education, training, and experience. I use a scientist's success rate in her first job search as a proxy measure of her talent in the academic context. Recruitment in research universities is a process of comprehensive assessment of candidates, and talent is usually the most important criterion for success in job hunting. Based on the survey data, I calculate the number of job offers as a percentage of the number of positions applied for in the first job search. The success rate in the first job search ranges from 0 to 100%, with a mean of 33% and a standard deviation of 35%. That means that, on average, one third of first job applications ended in job offers.
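As a sketch, the talent proxy just described can be computed as follows. The respondent records below are hypothetical and are not the actual survey data.

```python
# Sketch of the talent proxy: job offers as a percentage of positions
# applied for in the first job search, summarized over a small sample.

def success_rate_first_job(offers, applications):
    """Talent proxy: offers as a percentage of applications (0 if none applied)."""
    return 100.0 * offers / applications if applications else 0.0

def mean_and_sd(xs):
    """Mean and (population) standard deviation, as in the summary statistics."""
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return m, sd

# Hypothetical respondents: (job offers, positions applied for).
first_job = [(3, 12), (5, 5), (0, 8), (7, 20)]
rates = [success_rate_first_job(o, a) for o, a in first_job]
m, sd = mean_and_sd(rates)
print([round(r) for r in rates])  # -> [25, 100, 0, 35]
print(m)                          # -> 40.0
```

A rate of 100 corresponds to an applicant who received an offer for every position applied for, and a rate of 0 to an applicant whose every application failed, matching the extremes reported above.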
A few very talented individuals received offers for every position they applied for, while others failed in every application in their first job search.

The survey collected ego-centric network data through name-generating social network questions, for which respondents named colleagues in several categories: close research collaborators, people with whom they discuss teaching issues, and scientists from whom they seek career-related advice. I focus on the number of close research collaborators because close collaboration is the most relevant to a scientist's research capacity in general and her grant-seeking performance in particular. On average, respondents have 3.6 close research collaborators; the minimum and maximum numbers are zero and eight, respectively, and the standard deviation is 2.3.

Academic research institutions were divided into research-intensive and research-extensive institutions in the Carnegie classification of higher education institutions. Research-extensive universities grant at least 50 doctoral degrees per year in at least


15 disciplines, while research-intensive universities grant at least 20 doctoral degrees per year, or at least 10 doctoral degrees in at least three disciplines (Carnegie Foundation for the Advancement of Teaching, 2004). Research-extensive institutions are obviously more research oriented than research-intensive institutions. I introduce a binary dummy variable to differentiate the two types of research institutions as a measure of institutional support for S&E research. The assumption is that research-extensive institutions have more abundant resources to assist scientists in the conduct of research. In this sample, about 60% of respondents are from research-extensive universities.

The literature indicates that motivation drives effort. For instance, in a study of 781 faculty members working in U.S. research-extensive universities, Hardré, Beesley, Miller, and Pace (2011) find that faculty research effort is strongly and positively related to intrinsic motivation and is a strong and positive predictor of faculty research productivity. Therefore, I use respondents' effort in research as a proxy measure of their motivation for conducting scientific research. The survey asks respondents how many hours on average they work in a typical week and what percentage of their work hours is allocated to research, teaching, or service. Based on the responses to these two questions, I calculate the number of hours spent on research in a typical week. Because highly motivated scientists spend more time on research than their colleagues, I expect a positive effect of this motivation measure on grant-seeking performance. In a typical week, the respondents spent a maximum of 84 h on research, but several did not spend any time on research. The average is 24 h, with a standard deviation of 14 h a week.

It should be noted that individual motivation is shaped by the institutional environment in fairly complex ways.
An organization may use both positive and negative reinforcement to motivate its employees to improve work performance, and in the context of academic research both play a very important role in motivating academic scientists. On the positive side, the perceived reputation of the institution likely motivates researchers to actively pursue research grants in order to grow professionally. The faculty reward system may work as a carrot or a stick. If the system is used as a carrot, the standards are likely reasonable, and high-performing scientists will be rewarded. If the system is used more as a stick, the standards are likely unreasonably high; scientists may not be satisfied with the system but have to prevail through extraordinarily hard work. The carrot works as positive reinforcement, but the stick effect is negative reinforcement for faculty grant-seeking efforts and outcomes.

The four dependent variables are measured as counts. Because the distribution of the count variables is likely over-dispersed, I use negative binomial regression to relate each measure of faculty grant-seeking performance to the explanatory factors: success rate in the first job search, number of close research collaborators, type of institution, and number of hours spent on research in a typical week as the measure of faculty motivation. The model also includes a dummy variable to differentiate biology and biochemistry (the field variable takes one) from other fields (the field variable takes zero). The statistical results are presented in Table 4.1.

First, the negative binomial regression is the right estimator because the lnalpha values are statistically significant in all the four regressions, indicating that the four


Table 4.1 Regression analysis of scientists' research capacity

Variable                                              NB(1)                 NB(2)                 NB(3)                 NB(4)
Success rate in the first job search                  −0.0005 (0.0007)      0.0027*** (0.0009)    0.0022** (0.0010)     −0.0008 (0.0008)
Number of close research collaborators                0.0739*** (0.0112)    0.1031*** (0.0139)    0.1004*** (0.0147)    0.0479*** (0.0117)
Type of institution (research extensive = 1)          0.0462 (0.0535)       0.0228 (0.0667)       0.1378* (0.0732)      0.0547 (0.0567)
Number of hours spent on research in a typical week   0.0287*** (0.0021)    0.0239*** (0.0025)    0.0224*** (0.0027)    0.0245*** (0.0021)
Field (biology or biochemistry = 1)                   −0.2377*** (0.0505)   −0.4735*** (0.0637)   −0.3461*** (0.0680)   −0.3079*** (0.0533)
Constant                                              0.5383*** (0.0770)    −0.3830*** (0.0976)   −0.8629*** (0.1071)   0.2112*** (0.0816)
No. of observations                                   1,307                 1,202                 1,074                 1,209
LR chi2(5)                                            275.03***             203.35***             165.15***             195.26***
lnalpha                                               −0.5667***            −0.6088***            −1.1609***            −0.9031***

Note: The dependent variables are (1) the number of external research grant proposals submitted within the past two years; (2) the number of grants submitted and awarded within the past two years; (3) the number of grants federally funded within the past two years; and (4) the average number of grant proposals submitted per year within the past five years. NB refers to negative binomial regression. Standard errors are in parentheses. *** denotes significance level